The Nonlinear Library: EA Forum The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org The Nonlinear Fund © 2023 The Nonlinear Fund en-us https://www.nonlinear.org https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png The Nonlinear Fund podcast@nonlinear.org no The Nonlinear Fund The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org Thu, 01 Jun 2023 01:39:25 +0000veR4W92bZsTsGgS3D_NL_EA_EA EA - A moral backlash against AI will probably slow down AGI development by Geoffrey Miller Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A moral backlash against AI will probably slow down AGI development, published by Geoffrey Miller on May 31, 2023 on The Effective Altruism Forum.Note: This is a submission for the 2023 Open Philanthropy AI Worldviews contest, due May 31, 2023. It addresses Question 1: “What is the probability that AGI is developed by January 1, 2043?”OverviewPeople tend to view harmful things as evil, and treat them as evil, to minimize their spread and impact. If enough people are hurt, betrayed, or outraged by AI applications, or lose their jobs, professional identity, and sense of purpose to AI, and/or become concerned about the existential risks of AI, then an intense public anti-AI backlash is likely to develop. That backlash could become a global, sustained, coordinated movement that morally stigmatizes AI researchers, AI companies, and AI funding sources. If that happens, then AGI is much less likely to develop by the year 2043. Negative public sentiment could be much more powerful in slowing AI than even the most draconian global regulations or formal moratorium, yet it is a neglected factor in most current AI timelines.IntroductionThe likelihood of AGI being developed by 2043 depends on two main factors: (1) how technically difficult it will be for AI researchers to make progress on AGI, and (2) how many resources – in terms of talent, funding, hardware, software, training data, etc. – are available for making that progress. Many experts’ ‘AI timelines’ for predicting AI development assume that AGI likelihood will be dominated by the first factor (technical difficulty), and assume that the second factor (available resources) will continue increasing.In this essay I disagree with that assumption. The resources allocated into AI research, development, and deployment may be much more vulnerable to public outrage and anti-AI hatred than the current AI hype cycle suggests. Specifically, I argue that ongoing AI developments are likely to provoke a moral backlash against AI that will choke off many of the key resources for making further AI progress. This public backlash could deploy the ancient psychology of moral stigmatization against our most advanced information technologies. The backlash is likely to be global, sustained, passionate, and well-organized. It may start with grass-roots concerns among a few expert ‘AI doomers’, and among journalists concerned about narrow AI risks, but it is likely to become better-organized over time as anti-AI activists join together to fight an emerging existential threat to our species.(Note that this question of anti-AI backlash likelihood is largely orthogonal to the issues of whether AGI is possible, and whether AI alignment is possible.)I’m not talking about a violent Butlerian Jihad. In the social media era, violence in the service of a social cause is almost always counter-productive, because it undermines the moral superiority and virtue-signaling strategies of righteous activists. (Indeed, a lot of ‘violence by activists’ turns out to be false flag operations funded by vested interests to discredit the activists that are fighting those vested interests.)Rather, I’m talking about a non-violent anti-AI movement at the social, cultural, political, and economic levels. For such a movement to slow down the development of AGI by 2043 (relative to the current expectations of Open Philanthropy panelists judging this essay competition), it only has to arise sometime in the next 20 years, and to gather enough public, media, political, and/or investor support that it can handicap the AI industry’s progress towards AGI, in ways that have not yet been incorporated into most experts’ AI timelines.An anti-AI backlash could include political, religious, ideological, and ethical objections to AI, sparked by vivid, outrageous, newsworthy fai...]]>
Geoffrey Miller https://forum.effectivealtruism.org/posts/veR4W92bZsTsGgS3D/a-moral-backlash-against-ai-will-probably-slow-down-agi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A moral backlash against AI will probably slow down AGI development, published by Geoffrey Miller on May 31, 2023 on The Effective Altruism Forum.Note: This is a submission for the 2023 Open Philanthropy AI Worldviews contest, due May 31, 2023. It addresses Question 1: “What is the probability that AGI is developed by January 1, 2043?”OverviewPeople tend to view harmful things as evil, and treat them as evil, to minimize their spread and impact. If enough people are hurt, betrayed, or outraged by AI applications, or lose their jobs, professional identity, and sense of purpose to AI, and/or become concerned about the existential risks of AI, then an intense public anti-AI backlash is likely to develop. That backlash could become a global, sustained, coordinated movement that morally stigmatizes AI researchers, AI companies, and AI funding sources. If that happens, then AGI is much less likely to develop by the year 2043. Negative public sentiment could be much more powerful in slowing AI than even the most draconian global regulations or formal moratorium, yet it is a neglected factor in most current AI timelines.IntroductionThe likelihood of AGI being developed by 2043 depends on two main factors: (1) how technically difficult it will be for AI researchers to make progress on AGI, and (2) how many resources – in terms of talent, funding, hardware, software, training data, etc. – are available for making that progress. Many experts’ ‘AI timelines’ for predicting AI development assume that AGI likelihood will be dominated by the first factor (technical difficulty), and assume that the second factor (available resources) will continue increasing.In this essay I disagree with that assumption. The resources allocated into AI research, development, and deployment may be much more vulnerable to public outrage and anti-AI hatred than the current AI hype cycle suggests. Specifically, I argue that ongoing AI developments are likely to provoke a moral backlash against AI that will choke off many of the key resources for making further AI progress. This public backlash could deploy the ancient psychology of moral stigmatization against our most advanced information technologies. The backlash is likely to be global, sustained, passionate, and well-organized. It may start with grass-roots concerns among a few expert ‘AI doomers’, and among journalists concerned about narrow AI risks, but it is likely to become better-organized over time as anti-AI activists join together to fight an emerging existential threat to our species.(Note that this question of anti-AI backlash likelihood is largely orthogonal to the issues of whether AGI is possible, and whether AI alignment is possible.)I’m not talking about a violent Butlerian Jihad. In the social media era, violence in the service of a social cause is almost always counter-productive, because it undermines the moral superiority and virtue-signaling strategies of righteous activists. (Indeed, a lot of ‘violence by activists’ turns out to be false flag operations funded by vested interests to discredit the activists that are fighting those vested interests.)Rather, I’m talking about a non-violent anti-AI movement at the social, cultural, political, and economic levels. For such a movement to slow down the development of AGI by 2043 (relative to the current expectations of Open Philanthropy panelists judging this essay competition), it only has to arise sometime in the next 20 years, and to gather enough public, media, political, and/or investor support that it can handicap the AI industry’s progress towards AGI, in ways that have not yet been incorporated into most experts’ AI timelines.An anti-AI backlash could include political, religious, ideological, and ethical objections to AI, sparked by vivid, outrageous, newsworthy fai...]]>
Thu, 01 Jun 2023 00:30:46 +0000 EA - A moral backlash against AI will probably slow down AGI development by Geoffrey Miller Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A moral backlash against AI will probably slow down AGI development, published by Geoffrey Miller on May 31, 2023 on The Effective Altruism Forum.Note: This is a submission for the 2023 Open Philanthropy AI Worldviews contest, due May 31, 2023. It addresses Question 1: “What is the probability that AGI is developed by January 1, 2043?”OverviewPeople tend to view harmful things as evil, and treat them as evil, to minimize their spread and impact. If enough people are hurt, betrayed, or outraged by AI applications, or lose their jobs, professional identity, and sense of purpose to AI, and/or become concerned about the existential risks of AI, then an intense public anti-AI backlash is likely to develop. That backlash could become a global, sustained, coordinated movement that morally stigmatizes AI researchers, AI companies, and AI funding sources. If that happens, then AGI is much less likely to develop by the year 2043. Negative public sentiment could be much more powerful in slowing AI than even the most draconian global regulations or formal moratorium, yet it is a neglected factor in most current AI timelines.IntroductionThe likelihood of AGI being developed by 2043 depends on two main factors: (1) how technically difficult it will be for AI researchers to make progress on AGI, and (2) how many resources – in terms of talent, funding, hardware, software, training data, etc. – are available for making that progress. Many experts’ ‘AI timelines’ for predicting AI development assume that AGI likelihood will be dominated by the first factor (technical difficulty), and assume that the second factor (available resources) will continue increasing.In this essay I disagree with that assumption. The resources allocated into AI research, development, and deployment may be much more vulnerable to public outrage and anti-AI hatred than the current AI hype cycle suggests. Specifically, I argue that ongoing AI developments are likely to provoke a moral backlash against AI that will choke off many of the key resources for making further AI progress. This public backlash could deploy the ancient psychology of moral stigmatization against our most advanced information technologies. The backlash is likely to be global, sustained, passionate, and well-organized. It may start with grass-roots concerns among a few expert ‘AI doomers’, and among journalists concerned about narrow AI risks, but it is likely to become better-organized over time as anti-AI activists join together to fight an emerging existential threat to our species.(Note that this question of anti-AI backlash likelihood is largely orthogonal to the issues of whether AGI is possible, and whether AI alignment is possible.)I’m not talking about a violent Butlerian Jihad. In the social media era, violence in the service of a social cause is almost always counter-productive, because it undermines the moral superiority and virtue-signaling strategies of righteous activists. (Indeed, a lot of ‘violence by activists’ turns out to be false flag operations funded by vested interests to discredit the activists that are fighting those vested interests.)Rather, I’m talking about a non-violent anti-AI movement at the social, cultural, political, and economic levels. For such a movement to slow down the development of AGI by 2043 (relative to the current expectations of Open Philanthropy panelists judging this essay competition), it only has to arise sometime in the next 20 years, and to gather enough public, media, political, and/or investor support that it can handicap the AI industry’s progress towards AGI, in ways that have not yet been incorporated into most experts’ AI timelines.An anti-AI backlash could include political, religious, ideological, and ethical objections to AI, sparked by vivid, outrageous, newsworthy fai...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A moral backlash against AI will probably slow down AGI development, published by Geoffrey Miller on May 31, 2023 on The Effective Altruism Forum.Note: This is a submission for the 2023 Open Philanthropy AI Worldviews contest, due May 31, 2023. It addresses Question 1: “What is the probability that AGI is developed by January 1, 2043?”OverviewPeople tend to view harmful things as evil, and treat them as evil, to minimize their spread and impact. If enough people are hurt, betrayed, or outraged by AI applications, or lose their jobs, professional identity, and sense of purpose to AI, and/or become concerned about the existential risks of AI, then an intense public anti-AI backlash is likely to develop. That backlash could become a global, sustained, coordinated movement that morally stigmatizes AI researchers, AI companies, and AI funding sources. If that happens, then AGI is much less likely to develop by the year 2043. Negative public sentiment could be much more powerful in slowing AI than even the most draconian global regulations or formal moratorium, yet it is a neglected factor in most current AI timelines.IntroductionThe likelihood of AGI being developed by 2043 depends on two main factors: (1) how technically difficult it will be for AI researchers to make progress on AGI, and (2) how many resources – in terms of talent, funding, hardware, software, training data, etc. – are available for making that progress. Many experts’ ‘AI timelines’ for predicting AI development assume that AGI likelihood will be dominated by the first factor (technical difficulty), and assume that the second factor (available resources) will continue increasing.In this essay I disagree with that assumption. The resources allocated into AI research, development, and deployment may be much more vulnerable to public outrage and anti-AI hatred than the current AI hype cycle suggests. Specifically, I argue that ongoing AI developments are likely to provoke a moral backlash against AI that will choke off many of the key resources for making further AI progress. This public backlash could deploy the ancient psychology of moral stigmatization against our most advanced information technologies. The backlash is likely to be global, sustained, passionate, and well-organized. It may start with grass-roots concerns among a few expert ‘AI doomers’, and among journalists concerned about narrow AI risks, but it is likely to become better-organized over time as anti-AI activists join together to fight an emerging existential threat to our species.(Note that this question of anti-AI backlash likelihood is largely orthogonal to the issues of whether AGI is possible, and whether AI alignment is possible.)I’m not talking about a violent Butlerian Jihad. In the social media era, violence in the service of a social cause is almost always counter-productive, because it undermines the moral superiority and virtue-signaling strategies of righteous activists. (Indeed, a lot of ‘violence by activists’ turns out to be false flag operations funded by vested interests to discredit the activists that are fighting those vested interests.)Rather, I’m talking about a non-violent anti-AI movement at the social, cultural, political, and economic levels. For such a movement to slow down the development of AGI by 2043 (relative to the current expectations of Open Philanthropy panelists judging this essay competition), it only has to arise sometime in the next 20 years, and to gather enough public, media, political, and/or investor support that it can handicap the AI industry’s progress towards AGI, in ways that have not yet been incorporated into most experts’ AI timelines.An anti-AI backlash could include political, religious, ideological, and ethical objections to AI, sparked by vivid, outrageous, newsworthy fai...]]>
Geoffrey Miller https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 25:58 None full 6131
fsaogRokXxby6LFd7_NL_EA_EA EA - A compute-based framework for thinking about the future of AI by Matthew Barnett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A compute-based framework for thinking about the future of AI, published by Matthew Barnett on May 31, 2023 on The Effective Altruism Forum.How should we expect AI to unfold over the coming decades? In this article, I explain and defend a compute-based framework for thinking about AI automation. This framework makes the following claims, which I defend throughout the article:The most salient impact of AI will be its ability to automate labor, which is likely to trigger a productivity explosion later this century, greatly altering the course of history.The availability of useful compute is the most important factor that determines progress in AI, a trend which will likely continue into the foreseeable future.AI performance is likely to become relatively predictable on most important, general measures of performance, at least when predicting over short time horizons.While none of these ideas are new, my goal is to provide a single article that articulates and defends the framework as a cohesive whole. In doing so, I present the perspective that Epoch researchers find most illuminating about the future of AI. Using this framework, I will justify a value of 40% for the probability of Transformative AI (TAI) arriving before 2043.SummaryThe post is structured as follows.In part one, I will argue that what matters most is when AI will be able to automate a wide variety of tasks in the economy. The importance of this milestone is substantiated by simple models of the economy that predict AI could greatly accelerate the world economic growth rate, dramatically changing our world.In part two, I will argue that availability of data is less important than compute for explaining progress in AI, and that compute may even play an important role driving algorithmic progress.In part three, I will argue against a commonly held view that AI progress is inherently unpredictable, providing reasons to think that AI capabilities may be anticipated in advance.Finally, in part four, I will conclude by using the framework to build a probability distribution over the date of arrival for transformative AI.Part 1: Widespread automation from AIWhen discussing AI timelines, it is often taken for granted that the relevant milestone is the development of Artificial General Intelligence (AGI), or a software system that can do or learn “everything that a human can do.” However, this definition is vague. For instance, it's unclear whether the system needs to surpass all humans, some upper decile, or the median human.Perhaps more importantly, it’s not immediately obvious why we should care about the arrival of a single software system with certain properties. Plausibly, a set of narrow software programs could drastically change the world before the arrival of any monolithic AGI system (Drexler, 2019). In general, it seems more useful to characterize AI timelines in terms of the impacts AI will have on the world. But, that still leaves open the question of what impacts we should expect AI to have and how we can measure those impacts.As a starting point, it seems that automating labor is likely to be the driving force behind developing AI, providing huge and direct financial incentives for AI companies to develop the technology. The productivity explosion hypothesis says that if AI can automate the majority of important tasks in the economy, then a dramatic economic expansion will follow, increasing the rate of technological, scientific, and economic growth by at least an order of magnitude above its current rate (Davidson, 2021).A productivity explosion is a robust implication of simple models of economic growth models, which helps explain why the topic is so important to study. What's striking is that the productivity explosion thesis appears to follow naturally from some standard assump...]]>
Matthew Barnett https://forum.effectivealtruism.org/posts/fsaogRokXxby6LFd7/a-compute-based-framework-for-thinking-about-the-future-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A compute-based framework for thinking about the future of AI, published by Matthew Barnett on May 31, 2023 on The Effective Altruism Forum.How should we expect AI to unfold over the coming decades? In this article, I explain and defend a compute-based framework for thinking about AI automation. This framework makes the following claims, which I defend throughout the article:The most salient impact of AI will be its ability to automate labor, which is likely to trigger a productivity explosion later this century, greatly altering the course of history.The availability of useful compute is the most important factor that determines progress in AI, a trend which will likely continue into the foreseeable future.AI performance is likely to become relatively predictable on most important, general measures of performance, at least when predicting over short time horizons.While none of these ideas are new, my goal is to provide a single article that articulates and defends the framework as a cohesive whole. In doing so, I present the perspective that Epoch researchers find most illuminating about the future of AI. Using this framework, I will justify a value of 40% for the probability of Transformative AI (TAI) arriving before 2043.SummaryThe post is structured as follows.In part one, I will argue that what matters most is when AI will be able to automate a wide variety of tasks in the economy. The importance of this milestone is substantiated by simple models of the economy that predict AI could greatly accelerate the world economic growth rate, dramatically changing our world.In part two, I will argue that availability of data is less important than compute for explaining progress in AI, and that compute may even play an important role driving algorithmic progress.In part three, I will argue against a commonly held view that AI progress is inherently unpredictable, providing reasons to think that AI capabilities may be anticipated in advance.Finally, in part four, I will conclude by using the framework to build a probability distribution over the date of arrival for transformative AI.Part 1: Widespread automation from AIWhen discussing AI timelines, it is often taken for granted that the relevant milestone is the development of Artificial General Intelligence (AGI), or a software system that can do or learn “everything that a human can do.” However, this definition is vague. For instance, it's unclear whether the system needs to surpass all humans, some upper decile, or the median human.Perhaps more importantly, it’s not immediately obvious why we should care about the arrival of a single software system with certain properties. Plausibly, a set of narrow software programs could drastically change the world before the arrival of any monolithic AGI system (Drexler, 2019). In general, it seems more useful to characterize AI timelines in terms of the impacts AI will have on the world. But, that still leaves open the question of what impacts we should expect AI to have and how we can measure those impacts.As a starting point, it seems that automating labor is likely to be the driving force behind developing AI, providing huge and direct financial incentives for AI companies to develop the technology. The productivity explosion hypothesis says that if AI can automate the majority of important tasks in the economy, then a dramatic economic expansion will follow, increasing the rate of technological, scientific, and economic growth by at least an order of magnitude above its current rate (Davidson, 2021).A productivity explosion is a robust implication of simple models of economic growth models, which helps explain why the topic is so important to study. What's striking is that the productivity explosion thesis appears to follow naturally from some standard assump...]]>
Wed, 31 May 2023 22:54:41 +0000 EA - A compute-based framework for thinking about the future of AI by Matthew Barnett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A compute-based framework for thinking about the future of AI, published by Matthew Barnett on May 31, 2023 on The Effective Altruism Forum.How should we expect AI to unfold over the coming decades? In this article, I explain and defend a compute-based framework for thinking about AI automation. This framework makes the following claims, which I defend throughout the article:The most salient impact of AI will be its ability to automate labor, which is likely to trigger a productivity explosion later this century, greatly altering the course of history.The availability of useful compute is the most important factor that determines progress in AI, a trend which will likely continue into the foreseeable future.AI performance is likely to become relatively predictable on most important, general measures of performance, at least when predicting over short time horizons.While none of these ideas are new, my goal is to provide a single article that articulates and defends the framework as a cohesive whole. In doing so, I present the perspective that Epoch researchers find most illuminating about the future of AI. Using this framework, I will justify a value of 40% for the probability of Transformative AI (TAI) arriving before 2043.SummaryThe post is structured as follows.In part one, I will argue that what matters most is when AI will be able to automate a wide variety of tasks in the economy. The importance of this milestone is substantiated by simple models of the economy that predict AI could greatly accelerate the world economic growth rate, dramatically changing our world.In part two, I will argue that availability of data is less important than compute for explaining progress in AI, and that compute may even play an important role driving algorithmic progress.In part three, I will argue against a commonly held view that AI progress is inherently unpredictable, providing reasons to think that AI capabilities may be anticipated in advance.Finally, in part four, I will conclude by using the framework to build a probability distribution over the date of arrival for transformative AI.Part 1: Widespread automation from AIWhen discussing AI timelines, it is often taken for granted that the relevant milestone is the development of Artificial General Intelligence (AGI), or a software system that can do or learn “everything that a human can do.” However, this definition is vague. For instance, it's unclear whether the system needs to surpass all humans, some upper decile, or the median human.Perhaps more importantly, it’s not immediately obvious why we should care about the arrival of a single software system with certain properties. Plausibly, a set of narrow software programs could drastically change the world before the arrival of any monolithic AGI system (Drexler, 2019). In general, it seems more useful to characterize AI timelines in terms of the impacts AI will have on the world. But, that still leaves open the question of what impacts we should expect AI to have and how we can measure those impacts.As a starting point, it seems that automating labor is likely to be the driving force behind developing AI, providing huge and direct financial incentives for AI companies to develop the technology. The productivity explosion hypothesis says that if AI can automate the majority of important tasks in the economy, then a dramatic economic expansion will follow, increasing the rate of technological, scientific, and economic growth by at least an order of magnitude above its current rate (Davidson, 2021).A productivity explosion is a robust implication of simple models of economic growth models, which helps explain why the topic is so important to study. What's striking is that the productivity explosion thesis appears to follow naturally from some standard assump...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A compute-based framework for thinking about the future of AI, published by Matthew Barnett on May 31, 2023 on The Effective Altruism Forum.How should we expect AI to unfold over the coming decades? In this article, I explain and defend a compute-based framework for thinking about AI automation. This framework makes the following claims, which I defend throughout the article:The most salient impact of AI will be its ability to automate labor, which is likely to trigger a productivity explosion later this century, greatly altering the course of history.The availability of useful compute is the most important factor that determines progress in AI, a trend which will likely continue into the foreseeable future.AI performance is likely to become relatively predictable on most important, general measures of performance, at least when predicting over short time horizons.While none of these ideas are new, my goal is to provide a single article that articulates and defends the framework as a cohesive whole. In doing so, I present the perspective that Epoch researchers find most illuminating about the future of AI. Using this framework, I will justify a value of 40% for the probability of Transformative AI (TAI) arriving before 2043.SummaryThe post is structured as follows.In part one, I will argue that what matters most is when AI will be able to automate a wide variety of tasks in the economy. The importance of this milestone is substantiated by simple models of the economy that predict AI could greatly accelerate the world economic growth rate, dramatically changing our world.In part two, I will argue that availability of data is less important than compute for explaining progress in AI, and that compute may even play an important role driving algorithmic progress.In part three, I will argue against a commonly held view that AI progress is inherently unpredictable, providing reasons to think that AI capabilities may be anticipated in advance.Finally, in part four, I will conclude by using the framework to build a probability distribution over the date of arrival for transformative AI.Part 1: Widespread automation from AIWhen discussing AI timelines, it is often taken for granted that the relevant milestone is the development of Artificial General Intelligence (AGI), or a software system that can do or learn “everything that a human can do.” However, this definition is vague. For instance, it's unclear whether the system needs to surpass all humans, some upper decile, or the median human.Perhaps more importantly, it’s not immediately obvious why we should care about the arrival of a single software system with certain properties. Plausibly, a set of narrow software programs could drastically change the world before the arrival of any monolithic AGI system (Drexler, 2019). In general, it seems more useful to characterize AI timelines in terms of the impacts AI will have on the world. But, that still leaves open the question of what impacts we should expect AI to have and how we can measure those impacts.As a starting point, it seems that automating labor is likely to be the driving force behind developing AI, providing huge and direct financial incentives for AI companies to develop the technology. The productivity explosion hypothesis says that if AI can automate the majority of important tasks in the economy, then a dramatic economic expansion will follow, increasing the rate of technological, scientific, and economic growth by at least an order of magnitude above its current rate (Davidson, 2021).A productivity explosion is a robust implication of simple models of economic growth models, which helps explain why the topic is so important to study. What's striking is that the productivity explosion thesis appears to follow naturally from some standard assump...]]>
Matthew Barnett https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 36:32 None full 6129
HuLCFBEbekZ6AE7LP_NL_EA_EA EA - Linkpost: Survey evidence on the number of vegans in the UK by Sagar K Shah Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: Survey evidence on the number of vegans in the UK, published by Sagar K Shah on May 31, 2023 on The Effective Altruism Forum.Stephen Walsh PhD recently carried out a review of different surveys estimating the number of vegans in the UK on behalf of the UK Vegan Society. This is the best review I’ve seen in the UK context that takes into account recent data.The review suggests:Around 0.25% of UK adults were vegan in 2015. The proportion was probably stable around this level for at least 15 years.The share increased to around 1% by 2018.A best guess of a further increase to around 1.35% by 2022 (note this estimate is less certain and not directly comparable to earlier estimates).The headline results are based on the Food and You (face-to-face) and Food and You 2 (online, postal) surveys commissioned by the UK Food Standards Agency, after comparison with results from other surveys, including consideration of questions asked to identify vegans, survey mode, sampling method and sample size.Stephen’s article was originally published in the Vegan Society Magazine (only available to members). Given the potential wider interest in the results, I have received his permission to share a link to a copy of his article, and he is happy to answer any interesting questions that come through in the comments.I have copied below the chart summarising the results of different surveys offering a consistent time trend, and the questions used in the Food and You Survey (2010 to 2018). The article contains links with further information about the surveys used in the chart below.Questions used in the Food and You Survey (2010 to 2018)Question 2_7Which, if any, of the following applies to you? Please state all that apply.Completely vegetarianPartly vegetarianVeganAvoid certain food for religious or cultural reasonsNone (SINGLE CODE ONLY)IF Q2_7 = Vegan VeganChkCan I just check, do you eat any foods of animal origin. That is meat, fish, poultry, milk, milk products, eggs or any dishes that contain these?1 Yes2 NoThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sagar K Shah https://forum.effectivealtruism.org/posts/HuLCFBEbekZ6AE7LP/linkpost-survey-evidence-on-the-number-of-vegans-in-the-uk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: Survey evidence on the number of vegans in the UK, published by Sagar K Shah on May 31, 2023 on The Effective Altruism Forum.Stephen Walsh PhD recently carried out a review of different surveys estimating the number of vegans in the UK on behalf of the UK Vegan Society. This is the best review I’ve seen in the UK context that takes into account recent data.The review suggests:Around 0.25% of UK adults were vegan in 2015. The proportion was probably stable around this level for at least 15 years.The share increased to around 1% by 2018.A best guess of a further increase to around 1.35% by 2022 (note this estimate is less certain and not directly comparable to earlier estimates).The headline results are based on the Food and You (face-to-face) and Food and You 2 (online, postal) surveys commissioned by the UK Food Standards Agency, after comparison with results from other surveys, including consideration of questions asked to identify vegans, survey mode, sampling method and sample size.Stephen’s article was originally published in the Vegan Society Magazine (only available to members). Given the potential wider interest in the results, I have received his permission to share a link to a copy of his article, and he is happy to answer any interesting questions that come through in the comments.I have copied below the chart summarising the results of different surveys offering a consistent time trend, and the questions used in the Food and You Survey (2010 to 2018). The article contains links with further information about the surveys used in the chart below.Questions used in the Food and You Survey (2010 to 2018)Question 2_7Which, if any, of the following applies to you? Please state all that apply.Completely vegetarianPartly vegetarianVeganAvoid certain food for religious or cultural reasonsNone (SINGLE CODE ONLY)IF Q2_7 = Vegan VeganChkCan I just check, do you eat any foods of animal origin. That is meat, fish, poultry, milk, milk products, eggs or any dishes that contain these?1 Yes2 NoThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 31 May 2023 19:46:56 +0000 EA - Linkpost: Survey evidence on the number of vegans in the UK by Sagar K Shah Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: Survey evidence on the number of vegans in the UK, published by Sagar K Shah on May 31, 2023 on The Effective Altruism Forum.Stephen Walsh PhD recently carried out a review of different surveys estimating the number of vegans in the UK on behalf of the UK Vegan Society. This is the best review I’ve seen in the UK context that takes into account recent data.The review suggests:Around 0.25% of UK adults were vegan in 2015. The proportion was probably stable around this level for at least 15 years.The share increased to around 1% by 2018.A best guess of a further increase to around 1.35% by 2022 (note this estimate is less certain and not directly comparable to earlier estimates).The headline results are based on the Food and You (face-to-face) and Food and You 2 (online, postal) surveys commissioned by the UK Food Standards Agency, after comparison with results from other surveys, including consideration of questions asked to identify vegans, survey mode, sampling method and sample size.Stephen’s article was originally published in the Vegan Society Magazine (only available to members). Given the potential wider interest in the results, I have received his permission to share a link to a copy of his article, and he is happy to answer any interesting questions that come through in the comments.I have copied below the chart summarising the results of different surveys offering a consistent time trend, and the questions used in the Food and You Survey (2010 to 2018). The article contains links with further information about the surveys used in the chart below.Questions used in the Food and You Survey (2010 to 2018)Question 2_7Which, if any, of the following applies to you? Please state all that apply.Completely vegetarianPartly vegetarianVeganAvoid certain food for religious or cultural reasonsNone (SINGLE CODE ONLY)IF Q2_7 = Vegan VeganChkCan I just check, do you eat any foods of animal origin. That is meat, fish, poultry, milk, milk products, eggs or any dishes that contain these?1 Yes2 NoThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: Survey evidence on the number of vegans in the UK, published by Sagar K Shah on May 31, 2023 on The Effective Altruism Forum.Stephen Walsh PhD recently carried out a review of different surveys estimating the number of vegans in the UK on behalf of the UK Vegan Society. This is the best review I’ve seen in the UK context that takes into account recent data.The review suggests:Around 0.25% of UK adults were vegan in 2015. The proportion was probably stable around this level for at least 15 years.The share increased to around 1% by 2018.A best guess of a further increase to around 1.35% by 2022 (note this estimate is less certain and not directly comparable to earlier estimates).The headline results are based on the Food and You (face-to-face) and Food and You 2 (online, postal) surveys commissioned by the UK Food Standards Agency, after comparison with results from other surveys, including consideration of questions asked to identify vegans, survey mode, sampling method and sample size.Stephen’s article was originally published in the Vegan Society Magazine (only available to members). Given the potential wider interest in the results, I have received his permission to share a link to a copy of his article, and he is happy to answer any interesting questions that come through in the comments.I have copied below the chart summarising the results of different surveys offering a consistent time trend, and the questions used in the Food and You Survey (2010 to 2018). The article contains links with further information about the surveys used in the chart below.Questions used in the Food and You Survey (2010 to 2018)Question 2_7Which, if any, of the following applies to you? Please state all that apply.Completely vegetarianPartly vegetarianVeganAvoid certain food for religious or cultural reasonsNone (SINGLE CODE ONLY)IF Q2_7 = Vegan VeganChkCan I just check, do you eat any foods of animal origin. That is meat, fish, poultry, milk, milk products, eggs or any dishes that contain these?1 Yes2 NoThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sagar K Shah https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:22 None full 6128
HMXAgWrCZ6aagHYqg_NL_EA_EA EA - Updates from the Dutch EA community by James Herbert Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates from the Dutch EA community, published by James Herbert on May 30, 2023 on The Effective Altruism Forum.We wanted to post something on the Forum to share all of the amazing things the Dutch EA community has achieved in the past 18 months or so. But we also wanted to avoid spending too much time writing it. So please accept this very messy post and feel free to ask questions in the comments! Parts were co-written by ChatGPT with minimal editing, hence the sometimes overly braggadocious tone.We start with national-level updates and two quick lessons-learnt, and then we have bullet-point summaries from some of the local groups. But first, an executive summary.Executive summaryOver the past year, the Dutch EA community has seen impressive growth at both the national and local levels.At the national level, the community has seen significant gains. The number of intro programme completions increased nearly tenfold, from 45 in 2021 to 400 in 2022. The number of city groups and university groups also grew, from 1 to 3 and 1 to 13 respectively. Notably, there was an influx of €700k donations via Doneer Effectief and an increase in EA Netherlands newsletter subscribers from 670 to around 1500.Since hiring two full-time community builders in 2022, EAN has helped establish over a dozen new groups which have collectively produced 350 intro programme graduates in 2022 alone. In addition to launching a new website and co-working space, EAN organized multiple retreats, conducted introductory talks, facilitated 'giving games', provided career counselling, hosted city meet-ups, and participated in a public debate on EA.Effective altruism is gaining recognition in the Dutch media, with coverage in major Dutch publications and appearances by prominent figures like writer Rutger Bregman. However, there have also been a few critical pieces, to which the EAN board has responded.Other significant achievements include the successful launch of Doneer Effectief's online donation platform, the high-profile EAGxRotterdam 2022 conference, and the Tien Procent Club's successful events on effective giving.Local EA groups across Dutch cities have also seen substantial growth. For example, the Amsterdam city and university groups have merged, and together they host weekly meetups, multiple programs, and are developing a mental health program. At Utrecht, the student group has hatched an Alt Protein group with a grant from the university, has launched an AI Safety group, has hosted a big speaker event with Rutger Bregman, and runs introduction fellowships, socials, coworking sessions and other events. In The Hague, the group conducted weekly dinners, three rounds of intro fellowships, and two rounds of AI governance fellowships.The team at Delft has increased EA awareness through fellowships, book clubs, a retreat, and launching the Delft AI Safety Initiative. Eindhoven’s group has engaged 31 people in Introduction Fellowships, has launched an AI safety team, and collaborated with other groups on their university campus. Nijmegen’s group has grown rapidly, with biweekly meetups and collaborations with other campus groups.The PISE group in Rotterdam hosts member-only weekly events, open book clubs, and four fellowship rounds this year. They also initiated EAGx Rotterdam. The Twente group has attended the university’s career fair and organized meetups and an introductory talk. Wageningen University's group has hosted live events and completed an introductory fellowship.Lessons learnt:Do organising and mobilising (organisers invest in developing the capacities of people to engage with others in activism and become leaders; mobilisers focus on maximising the number of people involved without developing their capacity for civic action)It's very valuable to have a public figure endorse...]]>
James Herbert https://forum.effectivealtruism.org/posts/HMXAgWrCZ6aagHYqg/updates-from-the-dutch-ea-community Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates from the Dutch EA community, published by James Herbert on May 30, 2023 on The Effective Altruism Forum.We wanted to post something on the Forum to share all of the amazing things the Dutch EA community has achieved in the past 18 months or so. But we also wanted to avoid spending too much time writing it. So please accept this very messy post and feel free to ask questions in the comments! Parts were co-written by ChatGPT with minimal editing, hence the sometimes overly braggadocious tone.We start with national-level updates and two quick lessons-learnt, and then we have bullet-point summaries from some of the local groups. But first, an executive summary.Executive summaryOver the past year, the Dutch EA community has seen impressive growth at both the national and local levels.At the national level, the community has seen significant gains. The number of intro programme completions increased nearly tenfold, from 45 in 2021 to 400 in 2022. The number of city groups and university groups also grew, from 1 to 3 and 1 to 13 respectively. Notably, there was an influx of €700k donations via Doneer Effectief and an increase in EA Netherlands newsletter subscribers from 670 to around 1500.Since hiring two full-time community builders in 2022, EAN has helped establish over a dozen new groups which have collectively produced 350 intro programme graduates in 2022 alone. In addition to launching a new website and co-working space, EAN organized multiple retreats, conducted introductory talks, facilitated 'giving games', provided career counselling, hosted city meet-ups, and participated in a public debate on EA.Effective altruism is gaining recognition in the Dutch media, with coverage in major Dutch publications and appearances by prominent figures like writer Rutger Bregman. However, there have also been a few critical pieces, to which the EAN board has responded.Other significant achievements include the successful launch of Doneer Effectief's online donation platform, the high-profile EAGxRotterdam 2022 conference, and the Tien Procent Club's successful events on effective giving.Local EA groups across Dutch cities have also seen substantial growth. For example, the Amsterdam city and university groups have merged, and together they host weekly meetups, multiple programs, and are developing a mental health program. At Utrecht, the student group has hatched an Alt Protein group with a grant from the university, has launched an AI Safety group, has hosted a big speaker event with Rutger Bregman, and runs introduction fellowships, socials, coworking sessions and other events. In The Hague, the group conducted weekly dinners, three rounds of intro fellowships, and two rounds of AI governance fellowships.The team at Delft has increased EA awareness through fellowships, book clubs, a retreat, and launching the Delft AI Safety Initiative. Eindhoven’s group has engaged 31 people in Introduction Fellowships, has launched an AI safety team, and collaborated with other groups on their university campus. Nijmegen’s group has grown rapidly, with biweekly meetups and collaborations with other campus groups.The PISE group in Rotterdam hosts member-only weekly events, open book clubs, and four fellowship rounds this year. They also initiated EAGx Rotterdam. The Twente group has attended the university’s career fair and organized meetups and an introductory talk. Wageningen University's group has hosted live events and completed an introductory fellowship.Lessons learnt:Do organising and mobilising (organisers invest in developing the capacities of people to engage with others in activism and become leaders; mobilisers focus on maximising the number of people involved without developing their capacity for civic action)It's very valuable to have a public figure endorse...]]>
Tue, 30 May 2023 21:43:38 +0000 EA - Updates from the Dutch EA community by James Herbert Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates from the Dutch EA community, published by James Herbert on May 30, 2023 on The Effective Altruism Forum.We wanted to post something on the Forum to share all of the amazing things the Dutch EA community has achieved in the past 18 months or so. But we also wanted to avoid spending too much time writing it. So please accept this very messy post and feel free to ask questions in the comments! Parts were co-written by ChatGPT with minimal editing, hence the sometimes overly braggadocious tone.We start with national-level updates and two quick lessons-learnt, and then we have bullet-point summaries from some of the local groups. But first, an executive summary.Executive summaryOver the past year, the Dutch EA community has seen impressive growth at both the national and local levels.At the national level, the community has seen significant gains. The number of intro programme completions increased nearly tenfold, from 45 in 2021 to 400 in 2022. The number of city groups and university groups also grew, from 1 to 3 and 1 to 13 respectively. Notably, there was an influx of €700k donations via Doneer Effectief and an increase in EA Netherlands newsletter subscribers from 670 to around 1500.Since hiring two full-time community builders in 2022, EAN has helped establish over a dozen new groups which have collectively produced 350 intro programme graduates in 2022 alone. In addition to launching a new website and co-working space, EAN organized multiple retreats, conducted introductory talks, facilitated 'giving games', provided career counselling, hosted city meet-ups, and participated in a public debate on EA.Effective altruism is gaining recognition in the Dutch media, with coverage in major Dutch publications and appearances by prominent figures like writer Rutger Bregman. However, there have also been a few critical pieces, to which the EAN board has responded.Other significant achievements include the successful launch of Doneer Effectief's online donation platform, the high-profile EAGxRotterdam 2022 conference, and the Tien Procent Club's successful events on effective giving.Local EA groups across Dutch cities have also seen substantial growth. For example, the Amsterdam city and university groups have merged, and together they host weekly meetups, multiple programs, and are developing a mental health program. At Utrecht, the student group has hatched an Alt Protein group with a grant from the university, has launched an AI Safety group, has hosted a big speaker event with Rutger Bregman, and runs introduction fellowships, socials, coworking sessions and other events. In The Hague, the group conducted weekly dinners, three rounds of intro fellowships, and two rounds of AI governance fellowships.The team at Delft has increased EA awareness through fellowships, book clubs, a retreat, and launching the Delft AI Safety Initiative. Eindhoven’s group has engaged 31 people in Introduction Fellowships, has launched an AI safety team, and collaborated with other groups on their university campus. Nijmegen’s group has grown rapidly, with biweekly meetups and collaborations with other campus groups.The PISE group in Rotterdam hosts member-only weekly events, open book clubs, and four fellowship rounds this year. They also initiated EAGx Rotterdam. The Twente group has attended the university’s career fair and organized meetups and an introductory talk. Wageningen University's group has hosted live events and completed an introductory fellowship.Lessons learnt:Do organising and mobilising (organisers invest in developing the capacities of people to engage with others in activism and become leaders; mobilisers focus on maximising the number of people involved without developing their capacity for civic action)It's very valuable to have a public figure endorse...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates from the Dutch EA community, published by James Herbert on May 30, 2023 on The Effective Altruism Forum.We wanted to post something on the Forum to share all of the amazing things the Dutch EA community has achieved in the past 18 months or so. But we also wanted to avoid spending too much time writing it. So please accept this very messy post and feel free to ask questions in the comments! Parts were co-written by ChatGPT with minimal editing, hence the sometimes overly braggadocious tone.We start with national-level updates and two quick lessons-learnt, and then we have bullet-point summaries from some of the local groups. But first, an executive summary.Executive summaryOver the past year, the Dutch EA community has seen impressive growth at both the national and local levels.At the national level, the community has seen significant gains. The number of intro programme completions increased nearly tenfold, from 45 in 2021 to 400 in 2022. The number of city groups and university groups also grew, from 1 to 3 and 1 to 13 respectively. Notably, there was an influx of €700k donations via Doneer Effectief and an increase in EA Netherlands newsletter subscribers from 670 to around 1500.Since hiring two full-time community builders in 2022, EAN has helped establish over a dozen new groups which have collectively produced 350 intro programme graduates in 2022 alone. In addition to launching a new website and co-working space, EAN organized multiple retreats, conducted introductory talks, facilitated 'giving games', provided career counselling, hosted city meet-ups, and participated in a public debate on EA.Effective altruism is gaining recognition in the Dutch media, with coverage in major Dutch publications and appearances by prominent figures like writer Rutger Bregman. However, there have also been a few critical pieces, to which the EAN board has responded.Other significant achievements include the successful launch of Doneer Effectief's online donation platform, the high-profile EAGxRotterdam 2022 conference, and the Tien Procent Club's successful events on effective giving.Local EA groups across Dutch cities have also seen substantial growth. For example, the Amsterdam city and university groups have merged, and together they host weekly meetups, multiple programs, and are developing a mental health program. At Utrecht, the student group has hatched an Alt Protein group with a grant from the university, has launched an AI Safety group, has hosted a big speaker event with Rutger Bregman, and runs introduction fellowships, socials, coworking sessions and other events. In The Hague, the group conducted weekly dinners, three rounds of intro fellowships, and two rounds of AI governance fellowships.The team at Delft has increased EA awareness through fellowships, book clubs, a retreat, and launching the Delft AI Safety Initiative. Eindhoven’s group has engaged 31 people in Introduction Fellowships, has launched an AI safety team, and collaborated with other groups on their university campus. Nijmegen’s group has grown rapidly, with biweekly meetups and collaborations with other campus groups.The PISE group in Rotterdam hosts member-only weekly events, open book clubs, and four fellowship rounds this year. They also initiated EAGx Rotterdam. The Twente group has attended the university’s career fair and organized meetups and an introductory talk. Wageningen University's group has hosted live events and completed an introductory fellowship.Lessons learnt:Do organising and mobilising (organisers invest in developing the capacities of people to engage with others in activism and become leaders; mobilisers focus on maximising the number of people involved without developing their capacity for civic action)It's very valuable to have a public figure endorse...]]>
James Herbert https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 18:37 None full 6114
B5hnpo2yDv9Hstpka_NL_EA_EA EA - Announcement: you can now listen to all new EA Forum posts by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcement: you can now listen to all new EA Forum posts, published by peterhartree on May 30, 2023 on The Effective Altruism Forum.For the next few weeks, all new EA Forum posts will have AI narrations.We're releasing this feature as a pilot. We will collect feedback and then decide whether to keep the feature and/or roll it out more broadly (e.g. for our full post archive).This project is run by TYPE III AUDIO in collaboration with the EA Forum team.How can I listen?On post pagesYou'll find narrations on post pages; you can listen to them by clicking on the speaker icon:On our podcast feedsDuring the pilot, posts that get >125 karma will also be released on the "EA Forum (Curated and Popular)" podcast feed:EA Forum (Curated & Popular)Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.Subscribe:Apple Podcasts | Google Podcasts | Spotify | RSSThis feed was previously known as "EA Forum (All audio)". We renamed it for reasons.During the pilot phase, most "Curated" posts will still be narrated by Perrin Walker of TYPE III AUDIO.Posts that get >30 karma will be released on the new "EA Forum (All audio)" feed:EA Forum (All audio)Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing.Subscribe:Apple Podcasts | Spotify | RSS | Google Podcasts (soon)How is this different from Nonlinear Library?The Nonlinear Library has made unofficial AI narrations of EA Forum posts available for the last year or so.The new EA Forum AI narration project can be thought of as "Nonlinear Library 2.0". We hope our AI narrations will be clearer and more engaging. Some specific improvements:Audio notes to indicate headings, lists, images, etc.Specialist terminology, acronyms and idioms are handled gracefully. Footnotes too.We'll skip reading out long URLs, academic citations, and other things that you probably don't want to listen to.Episode descriptions include a link to the original post. According to Nonlinear, this is their most common feature request!We'd like to thank Kat Woods and the team at Nonlinear Library for their work, and for giving us helpful advice on this project.What do you think?We'd love to hear your thoughts!To give feedback on a particular narration, click the feedback button on the audio player, or go to t3a.is.We're keen to hear about even minor issues: we have control over most details of the narration system, and we're keen to polish it. The narration system, which is being developed by TYPE III AUDIO, will be rolled out for thousands of hours of EA-relevant writing over the summer.To share feature ideas or more general feedback, comment on this post or write to eaforum@type3.audio.The reason for this mildly confusing update is the vast majority of people subscribed to the existing "All audio" feed, but we think that most of them don't actually want to receive ~4 episodes per day. If you're someone who wants to max out the number of narrations in your podcast app, please subscribe to the new "All audio" feed. For everyone else: no action required.Are you a writer with a blog, article or newsletter to narrate? Write to team@type3.audio and we'll make it happen.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
peterhartree https://forum.effectivealtruism.org/posts/B5hnpo2yDv9Hstpka/announcement-you-can-now-listen-to-all-new-ea-forum-posts Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcement: you can now listen to all new EA Forum posts, published by peterhartree on May 30, 2023 on The Effective Altruism Forum.For the next few weeks, all new EA Forum posts will have AI narrations.We're releasing this feature as a pilot. We will collect feedback and then decide whether to keep the feature and/or roll it out more broadly (e.g. for our full post archive).This project is run by TYPE III AUDIO in collaboration with the EA Forum team.How can I listen?On post pagesYou'll find narrations on post pages; you can listen to them by clicking on the speaker icon:On our podcast feedsDuring the pilot, posts that get >125 karma will also be released on the "EA Forum (Curated and Popular)" podcast feed:EA Forum (Curated & Popular)Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.Subscribe:Apple Podcasts | Google Podcasts | Spotify | RSSThis feed was previously known as "EA Forum (All audio)". We renamed it for reasons.During the pilot phase, most "Curated" posts will still be narrated by Perrin Walker of TYPE III AUDIO.Posts that get >30 karma will be released on the new "EA Forum (All audio)" feed:EA Forum (All audio)Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing.Subscribe:Apple Podcasts | Spotify | RSS | Google Podcasts (soon)How is this different from Nonlinear Library?The Nonlinear Library has made unofficial AI narrations of EA Forum posts available for the last year or so.The new EA Forum AI narration project can be thought of as "Nonlinear Library 2.0". We hope our AI narrations will be clearer and more engaging. Some specific improvements:Audio notes to indicate headings, lists, images, etc.Specialist terminology, acronyms and idioms are handled gracefully. Footnotes too.We'll skip reading out long URLs, academic citations, and other things that you probably don't want to listen to.Episode descriptions include a link to the original post. According to Nonlinear, this is their most common feature request!We'd like to thank Kat Woods and the team at Nonlinear Library for their work, and for giving us helpful advice on this project.What do you think?We'd love to hear your thoughts!To give feedback on a particular narration, click the feedback button on the audio player, or go to t3a.is.We're keen to hear about even minor issues: we have control over most details of the narration system, and we're keen to polish it. The narration system, which is being developed by TYPE III AUDIO, will be rolled out for thousands of hours of EA-relevant writing over the summer.To share feature ideas or more general feedback, comment on this post or write to eaforum@type3.audio.The reason for this mildly confusing update is the vast majority of people subscribed to the existing "All audio" feed, but we think that most of them don't actually want to receive ~4 episodes per day. If you're someone who wants to max out the number of narrations in your podcast app, please subscribe to the new "All audio" feed. For everyone else: no action required.Are you a writer with a blog, article or newsletter to narrate? Write to team@type3.audio and we'll make it happen.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 30 May 2023 18:29:08 +0000 EA - Announcement: you can now listen to all new EA Forum posts by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcement: you can now listen to all new EA Forum posts, published by peterhartree on May 30, 2023 on The Effective Altruism Forum.For the next few weeks, all new EA Forum posts will have AI narrations.We're releasing this feature as a pilot. We will collect feedback and then decide whether to keep the feature and/or roll it out more broadly (e.g. for our full post archive).This project is run by TYPE III AUDIO in collaboration with the EA Forum team.How can I listen?On post pagesYou'll find narrations on post pages; you can listen to them by clicking on the speaker icon:On our podcast feedsDuring the pilot, posts that get >125 karma will also be released on the "EA Forum (Curated and Popular)" podcast feed:EA Forum (Curated & Popular)Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.Subscribe:Apple Podcasts | Google Podcasts | Spotify | RSSThis feed was previously known as "EA Forum (All audio)". We renamed it for reasons.During the pilot phase, most "Curated" posts will still be narrated by Perrin Walker of TYPE III AUDIO.Posts that get >30 karma will be released on the new "EA Forum (All audio)" feed:EA Forum (All audio)Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing.Subscribe:Apple Podcasts | Spotify | RSS | Google Podcasts (soon)How is this different from Nonlinear Library?The Nonlinear Library has made unofficial AI narrations of EA Forum posts available for the last year or so.The new EA Forum AI narration project can be thought of as "Nonlinear Library 2.0". We hope our AI narrations will be clearer and more engaging. Some specific improvements:Audio notes to indicate headings, lists, images, etc.Specialist terminology, acronyms and idioms are handled gracefully. Footnotes too.We'll skip reading out long URLs, academic citations, and other things that you probably don't want to listen to.Episode descriptions include a link to the original post. According to Nonlinear, this is their most common feature request!We'd like to thank Kat Woods and the team at Nonlinear Library for their work, and for giving us helpful advice on this project.What do you think?We'd love to hear your thoughts!To give feedback on a particular narration, click the feedback button on the audio player, or go to t3a.is.We're keen to hear about even minor issues: we have control over most details of the narration system, and we're keen to polish it. The narration system, which is being developed by TYPE III AUDIO, will be rolled out for thousands of hours of EA-relevant writing over the summer.To share feature ideas or more general feedback, comment on this post or write to eaforum@type3.audio.The reason for this mildly confusing update is the vast majority of people subscribed to the existing "All audio" feed, but we think that most of them don't actually want to receive ~4 episodes per day. If you're someone who wants to max out the number of narrations in your podcast app, please subscribe to the new "All audio" feed. For everyone else: no action required.Are you a writer with a blog, article or newsletter to narrate? Write to team@type3.audio and we'll make it happen.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcement: you can now listen to all new EA Forum posts, published by peterhartree on May 30, 2023 on The Effective Altruism Forum.For the next few weeks, all new EA Forum posts will have AI narrations.We're releasing this feature as a pilot. We will collect feedback and then decide whether to keep the feature and/or roll it out more broadly (e.g. for our full post archive).This project is run by TYPE III AUDIO in collaboration with the EA Forum team.How can I listen?On post pagesYou'll find narrations on post pages; you can listen to them by clicking on the speaker icon:On our podcast feedsDuring the pilot, posts that get >125 karma will also be released on the "EA Forum (Curated and Popular)" podcast feed:EA Forum (Curated & Popular)Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.Subscribe:Apple Podcasts | Google Podcasts | Spotify | RSSThis feed was previously known as "EA Forum (All audio)". We renamed it for reasons.During the pilot phase, most "Curated" posts will still be narrated by Perrin Walker of TYPE III AUDIO.Posts that get >30 karma will be released on the new "EA Forum (All audio)" feed:EA Forum (All audio)Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing.Subscribe:Apple Podcasts | Spotify | RSS | Google Podcasts (soon)How is this different from Nonlinear Library?The Nonlinear Library has made unofficial AI narrations of EA Forum posts available for the last year or so.The new EA Forum AI narration project can be thought of as "Nonlinear Library 2.0". We hope our AI narrations will be clearer and more engaging. Some specific improvements:Audio notes to indicate headings, lists, images, etc.Specialist terminology, acronyms and idioms are handled gracefully. Footnotes too.We'll skip reading out long URLs, academic citations, and other things that you probably don't want to listen to.Episode descriptions include a link to the original post. According to Nonlinear, this is their most common feature request!We'd like to thank Kat Woods and the team at Nonlinear Library for their work, and for giving us helpful advice on this project.What do you think?We'd love to hear your thoughts!To give feedback on a particular narration, click the feedback button on the audio player, or go to t3a.is.We're keen to hear about even minor issues: we have control over most details of the narration system, and we're keen to polish it. The narration system, which is being developed by TYPE III AUDIO, will be rolled out for thousands of hours of EA-relevant writing over the summer.To share feature ideas or more general feedback, comment on this post or write to eaforum@type3.audio.The reason for this mildly confusing update is the vast majority of people subscribed to the existing "All audio" feed, but we think that most of them don't actually want to receive ~4 episodes per day. If you're someone who wants to max out the number of narrations in your podcast app, please subscribe to the new "All audio" feed. For everyone else: no action required.Are you a writer with a blog, article or newsletter to narrate? Write to team@type3.audio and we'll make it happen.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
peterhartree https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:27 None full 6111
Nxtq2d8Xb3QuuHKE8_NL_EA_EA EA - The bullseye framework: My case against AI doom by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The bullseye framework: My case against AI doom, published by titotal on May 30, 2023 on The Effective Altruism Forum.Introduction:I’ve written quite a few articles casting doubt on several aspects of the AI doom narrative. (I’ve starting archiving them on my substack for easier sharing). This article is my first attempt to link them together to form a connected argument for why I find imminent AI doom unlikely.I don’t expect every one of the ideas presented here to be correct. I have a PHD and work as a computational physicist, so I’m fairly confident about aspects related to that, but I do not wish to be treated as an expert on other subjects such as machine learning where I am familiar with the subject, but not an expert. You should never expect one person to cover a huge range of topics across multiple different domains, without making the occasional mistake. I have done my best with the knowledge I have available.I don’t speculate about specific timelines here. I suspect that AGI is decades away at minimum, and I may reassess my beliefs as time goes on and technology changes.In part 1, I will point out the parallel frameworks of values and capabilities. I show what happens when we entertain the possibility that at least some AGI could be fallible and beatable.In part 2, I outline some of my many arguments that most AGI will be both fallible and beatable, and not capable of world domination.In part 3, I outline a few arguments against the ideas that “x-risk” safe AGI is super difficult, taking particular aim at the “absolute fanatical maximiser” assumption of early AI writing.In part 4, I speculate on how the above assumptions could lead to a safe navigation of AI development in the future.This article does not speculate on AI timelines, or on the reasons why AI doom estimates are so high around here. I have my suspicions on both questions. On the first, I think AGI is many decades away, on the second, I think founder effects are primarily to blame. However these will not be the focus of this article.Part 1: The bullseye frameworkWhen arguing for AI doom, a typical argument will involve the possibility space of AGI. Invoking the orthogonality thesis and instrumental convergence, the argument goes that in the possibility space of AGI, there are far more machines that want to kill us than those that don’t. The argument is that the fraction is so small that AGI will be rogue by default: like the picture below.As a sceptic, I do not find this, on its own, to be convincing. My rejoinder would be that AGI’s are not being plucked randomly from possibility space. They are being deliberately constructed and evolved specifically to meet that small target. An AI that has the values of “scream profanities at everyone” is not going to survive long in development. Therefore, even if AI development starts in dangerous territory, it will end up in safe territory, following path A. (I will flesh this argument out more in part 3 of this article).To which the doomer will reply: Yes, there will be some pressure towards the target of safety, but it won’t be enough to succeed, because of things like deception, perverse incentives, etc. So it will follow something more like path B above, where our attempts to align it are not successful.Often the discussion stops there. However, I would argue that this is missing half the picture. Human extinction/enslavement does not just require that an AI wants to kill/enslave us all, it also requires that the AI is capable of defeating us all. So there’s another, similar, target picture going on:The possibility space of AGI’s includes countless AI’s that are incapable of world domination. I can think of 8 billion such AGI’s off the top of my head: Human beings. Even a very smart AGI may still fail to dominate humanity, if it’s locked...]]>
titotal https://forum.effectivealtruism.org/posts/Nxtq2d8Xb3QuuHKE8/the-bullseye-framework-my-case-against-ai-doom Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The bullseye framework: My case against AI doom, published by titotal on May 30, 2023 on The Effective Altruism Forum.Introduction:I’ve written quite a few articles casting doubt on several aspects of the AI doom narrative. (I’ve starting archiving them on my substack for easier sharing). This article is my first attempt to link them together to form a connected argument for why I find imminent AI doom unlikely.I don’t expect every one of the ideas presented here to be correct. I have a PHD and work as a computational physicist, so I’m fairly confident about aspects related to that, but I do not wish to be treated as an expert on other subjects such as machine learning where I am familiar with the subject, but not an expert. You should never expect one person to cover a huge range of topics across multiple different domains, without making the occasional mistake. I have done my best with the knowledge I have available.I don’t speculate about specific timelines here. I suspect that AGI is decades away at minimum, and I may reassess my beliefs as time goes on and technology changes.In part 1, I will point out the parallel frameworks of values and capabilities. I show what happens when we entertain the possibility that at least some AGI could be fallible and beatable.In part 2, I outline some of my many arguments that most AGI will be both fallible and beatable, and not capable of world domination.In part 3, I outline a few arguments against the ideas that “x-risk” safe AGI is super difficult, taking particular aim at the “absolute fanatical maximiser” assumption of early AI writing.In part 4, I speculate on how the above assumptions could lead to a safe navigation of AI development in the future.This article does not speculate on AI timelines, or on the reasons why AI doom estimates are so high around here. I have my suspicions on both questions. On the first, I think AGI is many decades away, on the second, I think founder effects are primarily to blame. However these will not be the focus of this article.Part 1: The bullseye frameworkWhen arguing for AI doom, a typical argument will involve the possibility space of AGI. Invoking the orthogonality thesis and instrumental convergence, the argument goes that in the possibility space of AGI, there are far more machines that want to kill us than those that don’t. The argument is that the fraction is so small that AGI will be rogue by default: like the picture below.As a sceptic, I do not find this, on its own, to be convincing. My rejoinder would be that AGI’s are not being plucked randomly from possibility space. They are being deliberately constructed and evolved specifically to meet that small target. An AI that has the values of “scream profanities at everyone” is not going to survive long in development. Therefore, even if AI development starts in dangerous territory, it will end up in safe territory, following path A. (I will flesh this argument out more in part 3 of this article).To which the doomer will reply: Yes, there will be some pressure towards the target of safety, but it won’t be enough to succeed, because of things like deception, perverse incentives, etc. So it will follow something more like path B above, where our attempts to align it are not successful.Often the discussion stops there. However, I would argue that this is missing half the picture. Human extinction/enslavement does not just require that an AI wants to kill/enslave us all, it also requires that the AI is capable of defeating us all. So there’s another, similar, target picture going on:The possibility space of AGI’s includes countless AI’s that are incapable of world domination. I can think of 8 billion such AGI’s off the top of my head: Human beings. Even a very smart AGI may still fail to dominate humanity, if it’s locked...]]>
Tue, 30 May 2023 17:45:49 +0000 EA - The bullseye framework: My case against AI doom by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The bullseye framework: My case against AI doom, published by titotal on May 30, 2023 on The Effective Altruism Forum.Introduction:I’ve written quite a few articles casting doubt on several aspects of the AI doom narrative. (I’ve starting archiving them on my substack for easier sharing). This article is my first attempt to link them together to form a connected argument for why I find imminent AI doom unlikely.I don’t expect every one of the ideas presented here to be correct. I have a PHD and work as a computational physicist, so I’m fairly confident about aspects related to that, but I do not wish to be treated as an expert on other subjects such as machine learning where I am familiar with the subject, but not an expert. You should never expect one person to cover a huge range of topics across multiple different domains, without making the occasional mistake. I have done my best with the knowledge I have available.I don’t speculate about specific timelines here. I suspect that AGI is decades away at minimum, and I may reassess my beliefs as time goes on and technology changes.In part 1, I will point out the parallel frameworks of values and capabilities. I show what happens when we entertain the possibility that at least some AGI could be fallible and beatable.In part 2, I outline some of my many arguments that most AGI will be both fallible and beatable, and not capable of world domination.In part 3, I outline a few arguments against the ideas that “x-risk” safe AGI is super difficult, taking particular aim at the “absolute fanatical maximiser” assumption of early AI writing.In part 4, I speculate on how the above assumptions could lead to a safe navigation of AI development in the future.This article does not speculate on AI timelines, or on the reasons why AI doom estimates are so high around here. I have my suspicions on both questions. On the first, I think AGI is many decades away, on the second, I think founder effects are primarily to blame. However these will not be the focus of this article.Part 1: The bullseye frameworkWhen arguing for AI doom, a typical argument will involve the possibility space of AGI. Invoking the orthogonality thesis and instrumental convergence, the argument goes that in the possibility space of AGI, there are far more machines that want to kill us than those that don’t. The argument is that the fraction is so small that AGI will be rogue by default: like the picture below.As a sceptic, I do not find this, on its own, to be convincing. My rejoinder would be that AGI’s are not being plucked randomly from possibility space. They are being deliberately constructed and evolved specifically to meet that small target. An AI that has the values of “scream profanities at everyone” is not going to survive long in development. Therefore, even if AI development starts in dangerous territory, it will end up in safe territory, following path A. (I will flesh this argument out more in part 3 of this article).To which the doomer will reply: Yes, there will be some pressure towards the target of safety, but it won’t be enough to succeed, because of things like deception, perverse incentives, etc. So it will follow something more like path B above, where our attempts to align it are not successful.Often the discussion stops there. However, I would argue that this is missing half the picture. Human extinction/enslavement does not just require that an AI wants to kill/enslave us all, it also requires that the AI is capable of defeating us all. So there’s another, similar, target picture going on:The possibility space of AGI’s includes countless AI’s that are incapable of world domination. I can think of 8 billion such AGI’s off the top of my head: Human beings. Even a very smart AGI may still fail to dominate humanity, if it’s locked...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The bullseye framework: My case against AI doom, published by titotal on May 30, 2023 on The Effective Altruism Forum.Introduction:I’ve written quite a few articles casting doubt on several aspects of the AI doom narrative. (I’ve starting archiving them on my substack for easier sharing). This article is my first attempt to link them together to form a connected argument for why I find imminent AI doom unlikely.I don’t expect every one of the ideas presented here to be correct. I have a PHD and work as a computational physicist, so I’m fairly confident about aspects related to that, but I do not wish to be treated as an expert on other subjects such as machine learning where I am familiar with the subject, but not an expert. You should never expect one person to cover a huge range of topics across multiple different domains, without making the occasional mistake. I have done my best with the knowledge I have available.I don’t speculate about specific timelines here. I suspect that AGI is decades away at minimum, and I may reassess my beliefs as time goes on and technology changes.In part 1, I will point out the parallel frameworks of values and capabilities. I show what happens when we entertain the possibility that at least some AGI could be fallible and beatable.In part 2, I outline some of my many arguments that most AGI will be both fallible and beatable, and not capable of world domination.In part 3, I outline a few arguments against the ideas that “x-risk” safe AGI is super difficult, taking particular aim at the “absolute fanatical maximiser” assumption of early AI writing.In part 4, I speculate on how the above assumptions could lead to a safe navigation of AI development in the future.This article does not speculate on AI timelines, or on the reasons why AI doom estimates are so high around here. I have my suspicions on both questions. On the first, I think AGI is many decades away, on the second, I think founder effects are primarily to blame. However these will not be the focus of this article.Part 1: The bullseye frameworkWhen arguing for AI doom, a typical argument will involve the possibility space of AGI. Invoking the orthogonality thesis and instrumental convergence, the argument goes that in the possibility space of AGI, there are far more machines that want to kill us than those that don’t. The argument is that the fraction is so small that AGI will be rogue by default: like the picture below.As a sceptic, I do not find this, on its own, to be convincing. My rejoinder would be that AGI’s are not being plucked randomly from possibility space. They are being deliberately constructed and evolved specifically to meet that small target. An AI that has the values of “scream profanities at everyone” is not going to survive long in development. Therefore, even if AI development starts in dangerous territory, it will end up in safe territory, following path A. (I will flesh this argument out more in part 3 of this article).To which the doomer will reply: Yes, there will be some pressure towards the target of safety, but it won’t be enough to succeed, because of things like deception, perverse incentives, etc. So it will follow something more like path B above, where our attempts to align it are not successful.Often the discussion stops there. However, I would argue that this is missing half the picture. Human extinction/enslavement does not just require that an AI wants to kill/enslave us all, it also requires that the AI is capable of defeating us all. So there’s another, similar, target picture going on:The possibility space of AGI’s includes countless AI’s that are incapable of world domination. I can think of 8 billion such AGI’s off the top of my head: Human beings. Even a very smart AGI may still fail to dominate humanity, if it’s locked...]]>
titotal https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 26:03 None full 6106
Yk4D4DZpx6eriMDyY_NL_EA_EA EA - Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures by Center for AI Safety Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures, published by Center for AI Safety on May 30, 2023 on The Effective Altruism Forum.Today, the AI Extinction Statement was released by the Center for AI Safety, a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs–Sam Altman, Demis Hassabis, and Dario Amodei–as well as executives from Microsoft and Google (but notably not Meta).The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”We hope this statement will bring AI x-risk further into the overton window and open up discussion around AI’s most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention toward this issue.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Center for AI Safety https://forum.effectivealtruism.org/posts/Yk4D4DZpx6eriMDyY/statement-on-ai-extinction-signed-by-agi-labs-top-academics Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures, published by Center for AI Safety on May 30, 2023 on The Effective Altruism Forum.Today, the AI Extinction Statement was released by the Center for AI Safety, a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs–Sam Altman, Demis Hassabis, and Dario Amodei–as well as executives from Microsoft and Google (but notably not Meta).The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”We hope this statement will bring AI x-risk further into the overton window and open up discussion around AI’s most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention toward this issue.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 30 May 2023 09:49:44 +0000 EA - Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures by Center for AI Safety Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures, published by Center for AI Safety on May 30, 2023 on The Effective Altruism Forum.Today, the AI Extinction Statement was released by the Center for AI Safety, a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs–Sam Altman, Demis Hassabis, and Dario Amodei–as well as executives from Microsoft and Google (but notably not Meta).The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”We hope this statement will bring AI x-risk further into the overton window and open up discussion around AI’s most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention toward this issue.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures, published by Center for AI Safety on May 30, 2023 on The Effective Altruism Forum.Today, the AI Extinction Statement was released by the Center for AI Safety, a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs–Sam Altman, Demis Hassabis, and Dario Amodei–as well as executives from Microsoft and Google (but notably not Meta).The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”We hope this statement will bring AI x-risk further into the overton window and open up discussion around AI’s most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention toward this issue.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Center for AI Safety https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:15 None full 6107
8CD4i8FsRApcbt3an_NL_EA_EA EA - List of Masters Programs in Tech Policy, Public Policy and Security (Europe) by sarahfurstenberg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of Masters Programs in Tech Policy, Public Policy and Security (Europe), published by sarahfurstenberg on May 29, 2023 on The Effective Altruism Forum.We created this non-exhaustive List which was inspired and based on Konstantins personal research into Masters programmes. We expanded it with the help of others across the policy community. It was created for the 2023 cohort of fellows of the EU Tech Policy Fellowship hosted by Training for Good and includes a list of Masters in Europe, the UK and the US. CLICK HERE for the list.Limitations and Epistemic StatusThe list is based on personal experience, research, and limited feedback from others in the community.It is curated from a European perspective. Thus, the numbers and deadlines take European/EEA citizens as a reference point. Furthermore, whilst Masters from Europe, the UK and the US are listed, we have focussed on researching Masters in Europe. The latter lists are currently very incomplete.It's important to emphasise that this list is not exhaustive and may not represent all options of Masters in this Field.Additionally, the quality and relevance of each program may vary depending on individual needs, goals, and interests.Therefore, we recommended that individuals interested in pursuing a career in tech policy or policy in general conduct their own research, explore various programs, and consider multiple sources of information before making a decision! Ultimately, the decision to pursue a particular graduate program should be based on a thorough evaluation of individual goals, resources, and circumstances.What this post is notThis post does not outline what to study and what to aim for in choosing your Masters Degree. It is supposed to help people who have already decided that they want to pursue a Masters in Tech Policy, Security Studies or Public Policy but does not mean to imply that these are your only or even best options if you want to enter the Tech Policy field. A possibly safer and more classical approach of entering EU policy is to study basic law and economics subjects as they still hold a high standing across departments and fields in policy (See this article on “Joining the EU bubble”). This would also give you more flexible career capital than tech policy degrees.To elaborate on these different paths a detailed post (such this one) outlining what to aim for in your studies if you want to contribute to tech policy, would be incredibly valuable and we encourage you to write this up and share your perspective if you have spent some time thinking about this!Created for who?This list is aimed at people interested in working in public policy (especially in Europe) and in tech policy with a potential to specialise in AI but only provides a very narrow selection of options. Degrees with "tech" or "AI'' related words in the name are helpful to quickly signal your relevance on these topics. Many of the Masters in this list are geared towards people with a non-technical undergraduate degree in social sciences, economics etc. Thus, it excludes many Masters on Artificial Intelligence and Tech Policy that require you to have had a Computer Sciences or technical background. We wanted to share the list to help with some of the preliminary research in choosing a Masters programme.The inclusion of Security Studies Masters programmes comes from the argument that it seems like a viable path from which to enter inter/national think tanks or institutions working on relevant AI policy without having technical specialisations beforehand.Other considerationsBesides studying in Europe, studying in the US can be a great and high-impact option since many degrees are both highly regarded in Europe as well as allowing you to potentially work in US policy. We highly encourage you to read this post on worki...]]>
sarahfurstenberg https://forum.effectivealtruism.org/posts/8CD4i8FsRApcbt3an/list-of-masters-programs-in-tech-policy-public-policy-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of Masters Programs in Tech Policy, Public Policy and Security (Europe), published by sarahfurstenberg on May 29, 2023 on The Effective Altruism Forum.We created this non-exhaustive List which was inspired and based on Konstantins personal research into Masters programmes. We expanded it with the help of others across the policy community. It was created for the 2023 cohort of fellows of the EU Tech Policy Fellowship hosted by Training for Good and includes a list of Masters in Europe, the UK and the US. CLICK HERE for the list.Limitations and Epistemic StatusThe list is based on personal experience, research, and limited feedback from others in the community.It is curated from a European perspective. Thus, the numbers and deadlines take European/EEA citizens as a reference point. Furthermore, whilst Masters from Europe, the UK and the US are listed, we have focussed on researching Masters in Europe. The latter lists are currently very incomplete.It's important to emphasise that this list is not exhaustive and may not represent all options of Masters in this Field.Additionally, the quality and relevance of each program may vary depending on individual needs, goals, and interests.Therefore, we recommended that individuals interested in pursuing a career in tech policy or policy in general conduct their own research, explore various programs, and consider multiple sources of information before making a decision! Ultimately, the decision to pursue a particular graduate program should be based on a thorough evaluation of individual goals, resources, and circumstances.What this post is notThis post does not outline what to study and what to aim for in choosing your Masters Degree. It is supposed to help people who have already decided that they want to pursue a Masters in Tech Policy, Security Studies or Public Policy but does not mean to imply that these are your only or even best options if you want to enter the Tech Policy field. A possibly safer and more classical approach of entering EU policy is to study basic law and economics subjects as they still hold a high standing across departments and fields in policy (See this article on “Joining the EU bubble”). This would also give you more flexible career capital than tech policy degrees.To elaborate on these different paths a detailed post (such this one) outlining what to aim for in your studies if you want to contribute to tech policy, would be incredibly valuable and we encourage you to write this up and share your perspective if you have spent some time thinking about this!Created for who?This list is aimed at people interested in working in public policy (especially in Europe) and in tech policy with a potential to specialise in AI but only provides a very narrow selection of options. Degrees with "tech" or "AI'' related words in the name are helpful to quickly signal your relevance on these topics. Many of the Masters in this list are geared towards people with a non-technical undergraduate degree in social sciences, economics etc. Thus, it excludes many Masters on Artificial Intelligence and Tech Policy that require you to have had a Computer Sciences or technical background. We wanted to share the list to help with some of the preliminary research in choosing a Masters programme.The inclusion of Security Studies Masters programmes comes from the argument that it seems like a viable path from which to enter inter/national think tanks or institutions working on relevant AI policy without having technical specialisations beforehand.Other considerationsBesides studying in Europe, studying in the US can be a great and high-impact option since many degrees are both highly regarded in Europe as well as allowing you to potentially work in US policy. We highly encourage you to read this post on worki...]]>
Mon, 29 May 2023 23:16:17 +0000 EA - List of Masters Programs in Tech Policy, Public Policy and Security (Europe) by sarahfurstenberg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of Masters Programs in Tech Policy, Public Policy and Security (Europe), published by sarahfurstenberg on May 29, 2023 on The Effective Altruism Forum.We created this non-exhaustive List which was inspired and based on Konstantins personal research into Masters programmes. We expanded it with the help of others across the policy community. It was created for the 2023 cohort of fellows of the EU Tech Policy Fellowship hosted by Training for Good and includes a list of Masters in Europe, the UK and the US. CLICK HERE for the list.Limitations and Epistemic StatusThe list is based on personal experience, research, and limited feedback from others in the community.It is curated from a European perspective. Thus, the numbers and deadlines take European/EEA citizens as a reference point. Furthermore, whilst Masters from Europe, the UK and the US are listed, we have focussed on researching Masters in Europe. The latter lists are currently very incomplete.It's important to emphasise that this list is not exhaustive and may not represent all options of Masters in this Field.Additionally, the quality and relevance of each program may vary depending on individual needs, goals, and interests.Therefore, we recommended that individuals interested in pursuing a career in tech policy or policy in general conduct their own research, explore various programs, and consider multiple sources of information before making a decision! Ultimately, the decision to pursue a particular graduate program should be based on a thorough evaluation of individual goals, resources, and circumstances.What this post is notThis post does not outline what to study and what to aim for in choosing your Masters Degree. It is supposed to help people who have already decided that they want to pursue a Masters in Tech Policy, Security Studies or Public Policy but does not mean to imply that these are your only or even best options if you want to enter the Tech Policy field. A possibly safer and more classical approach of entering EU policy is to study basic law and economics subjects as they still hold a high standing across departments and fields in policy (See this article on “Joining the EU bubble”). This would also give you more flexible career capital than tech policy degrees.To elaborate on these different paths a detailed post (such this one) outlining what to aim for in your studies if you want to contribute to tech policy, would be incredibly valuable and we encourage you to write this up and share your perspective if you have spent some time thinking about this!Created for who?This list is aimed at people interested in working in public policy (especially in Europe) and in tech policy with a potential to specialise in AI but only provides a very narrow selection of options. Degrees with "tech" or "AI'' related words in the name are helpful to quickly signal your relevance on these topics. Many of the Masters in this list are geared towards people with a non-technical undergraduate degree in social sciences, economics etc. Thus, it excludes many Masters on Artificial Intelligence and Tech Policy that require you to have had a Computer Sciences or technical background. We wanted to share the list to help with some of the preliminary research in choosing a Masters programme.The inclusion of Security Studies Masters programmes comes from the argument that it seems like a viable path from which to enter inter/national think tanks or institutions working on relevant AI policy without having technical specialisations beforehand.Other considerationsBesides studying in Europe, studying in the US can be a great and high-impact option since many degrees are both highly regarded in Europe as well as allowing you to potentially work in US policy. We highly encourage you to read this post on worki...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of Masters Programs in Tech Policy, Public Policy and Security (Europe), published by sarahfurstenberg on May 29, 2023 on The Effective Altruism Forum.We created this non-exhaustive List which was inspired and based on Konstantins personal research into Masters programmes. We expanded it with the help of others across the policy community. It was created for the 2023 cohort of fellows of the EU Tech Policy Fellowship hosted by Training for Good and includes a list of Masters in Europe, the UK and the US. CLICK HERE for the list.Limitations and Epistemic StatusThe list is based on personal experience, research, and limited feedback from others in the community.It is curated from a European perspective. Thus, the numbers and deadlines take European/EEA citizens as a reference point. Furthermore, whilst Masters from Europe, the UK and the US are listed, we have focussed on researching Masters in Europe. The latter lists are currently very incomplete.It's important to emphasise that this list is not exhaustive and may not represent all options of Masters in this Field.Additionally, the quality and relevance of each program may vary depending on individual needs, goals, and interests.Therefore, we recommended that individuals interested in pursuing a career in tech policy or policy in general conduct their own research, explore various programs, and consider multiple sources of information before making a decision! Ultimately, the decision to pursue a particular graduate program should be based on a thorough evaluation of individual goals, resources, and circumstances.What this post is notThis post does not outline what to study and what to aim for in choosing your Masters Degree. It is supposed to help people who have already decided that they want to pursue a Masters in Tech Policy, Security Studies or Public Policy but does not mean to imply that these are your only or even best options if you want to enter the Tech Policy field. A possibly safer and more classical approach of entering EU policy is to study basic law and economics subjects as they still hold a high standing across departments and fields in policy (See this article on “Joining the EU bubble”). This would also give you more flexible career capital than tech policy degrees.To elaborate on these different paths a detailed post (such this one) outlining what to aim for in your studies if you want to contribute to tech policy, would be incredibly valuable and we encourage you to write this up and share your perspective if you have spent some time thinking about this!Created for who?This list is aimed at people interested in working in public policy (especially in Europe) and in tech policy with a potential to specialise in AI but only provides a very narrow selection of options. Degrees with "tech" or "AI'' related words in the name are helpful to quickly signal your relevance on these topics. Many of the Masters in this list are geared towards people with a non-technical undergraduate degree in social sciences, economics etc. Thus, it excludes many Masters on Artificial Intelligence and Tech Policy that require you to have had a Computer Sciences or technical background. We wanted to share the list to help with some of the preliminary research in choosing a Masters programme.The inclusion of Security Studies Masters programmes comes from the argument that it seems like a viable path from which to enter inter/national think tanks or institutions working on relevant AI policy without having technical specialisations beforehand.Other considerationsBesides studying in Europe, studying in the US can be a great and high-impact option since many degrees are both highly regarded in Europe as well as allowing you to potentially work in US policy. We highly encourage you to read this post on worki...]]>
sarahfurstenberg https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:47 None full 6110
q7dJz9ZaZGTSZL8Jk_NL_EA_EA EA - Obstacles to the Implementation of Indoor Air Quality Improvements by JesseSmith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Obstacles to the Implementation of Indoor Air Quality Improvements, published by JesseSmith on May 29, 2023 on The Effective Altruism Forum.1. Tl;drMany reports indicate that indoor air quality (IAQ) interventions are likely to be effective at reducing respiratory disease transmission. However, to date there’s been very little focus on the workforce that will implement these interventions. I suggest that the US Heating, Ventilation and Air Conditioning (HVAC) and building maintenance workforces have already posed a significant obstacle to these interventions, and broad uptake of IAQ measures will be significantly hindered by them in the future. The impact will vary in predictable ways depending on the nature of the intervention and its implementation. We should favor simple techniques with improved oversight and outsource or crosscheck technically complex work to people outside of the current HVAC workforce. We should also make IAQ conditions and devices as transparent as possible to both experts and building occupants.To skip my bio and the technical horrors section, proceed to the recommendations in section 4.2. Who am I? Why do I think This? How Certain am I?I began working in construction in 1991. I did a formal carpentry apprenticeship in Victoria BC in the mid-90s and moved to the US in ‘99. Around 2008 I started taking greater interest in HVAC because - despite paying top dollar to local subcontractors - our projects had persistent HVAC problems. Despite protestations that they were following exemplary practices, our projects were plagued with high humidity, loud noise, frequent mechanical failure, and room-to-room temperature differences. This led me to first learn all aspects of system design and controls, and culminated in full system installations. Along the way I obtained a NJ Master HVAC license, performed the thermal work of ~2k light-duty energy retrofits, obtained multiple certifications in HVAC and low-energy design, and became a regional expert in building diagnostics. Since 2010 I’ve worked as a contractor or consultant to roughly a dozen major HVAC contractors and hundreds of homeowners.I’m reasonably certain that the baseline competence of the HVAC workforce is insufficient to broadly and reliably deploy IAQ interventions and that this is a serious obstacle. My comments are specific to the US. I’ve discussed these problems extensively with friends and acquaintances working at a national level and in other parts of the US and believe them to be common to most of the country. The problems are specific to the light commercial and residential workforce, but not domains that are closely monitored by mechanical engineering teams (e.g. hospitals). Based on some limited experience I suspect these problems are also common to Canada, but I’m less certain about their severity.3. Technical Horrors: Why is This so Difficult?Within HVAC, many important jobs are currently either not performed or delegated to people who are largely incapable of performing them. Many people convincingly lie about their capacity to perform a job they’re incapable of, report having done things they haven’t, or even make statements at odds with physics.Examples include:Accurate heat load/loss calculations: These are used to size heating and cooling systems, and in most areas are code mandated for both new and replacement systems. Competent sizing (Manual J for residential) is viewed as highly important by virtually all experts within HVAC. However, despite decades of investment in training and compliance, a lead technical manager of a clean energy program reported to me that >90% of Manual Js reviewed by his program had significant errors made apparent due to internal inconsistency (eg duct load on a hydronic system) or obvious inconsistencies with public information on zi...]]>
JesseSmith https://forum.effectivealtruism.org/posts/q7dJz9ZaZGTSZL8Jk/obstacles-to-the-implementation-of-indoor-air-quality Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Obstacles to the Implementation of Indoor Air Quality Improvements, published by JesseSmith on May 29, 2023 on The Effective Altruism Forum.1. Tl;drMany reports indicate that indoor air quality (IAQ) interventions are likely to be effective at reducing respiratory disease transmission. However, to date there’s been very little focus on the workforce that will implement these interventions. I suggest that the US Heating, Ventilation and Air Conditioning (HVAC) and building maintenance workforces have already posed a significant obstacle to these interventions, and broad uptake of IAQ measures will be significantly hindered by them in the future. The impact will vary in predictable ways depending on the nature of the intervention and its implementation. We should favor simple techniques with improved oversight and outsource or crosscheck technically complex work to people outside of the current HVAC workforce. We should also make IAQ conditions and devices as transparent as possible to both experts and building occupants.To skip my bio and the technical horrors section, proceed to the recommendations in section 4.2. Who am I? Why do I think This? How Certain am I?I began working in construction in 1991. I did a formal carpentry apprenticeship in Victoria BC in the mid-90s and moved to the US in ‘99. Around 2008 I started taking greater interest in HVAC because - despite paying top dollar to local subcontractors - our projects had persistent HVAC problems. Despite protestations that they were following exemplary practices, our projects were plagued with high humidity, loud noise, frequent mechanical failure, and room-to-room temperature differences. This led me to first learn all aspects of system design and controls, and culminated in full system installations. Along the way I obtained a NJ Master HVAC license, performed the thermal work of ~2k light-duty energy retrofits, obtained multiple certifications in HVAC and low-energy design, and became a regional expert in building diagnostics. Since 2010 I’ve worked as a contractor or consultant to roughly a dozen major HVAC contractors and hundreds of homeowners.I’m reasonably certain that the baseline competence of the HVAC workforce is insufficient to broadly and reliably deploy IAQ interventions and that this is a serious obstacle. My comments are specific to the US. I’ve discussed these problems extensively with friends and acquaintances working at a national level and in other parts of the US and believe them to be common to most of the country. The problems are specific to the light commercial and residential workforce, but not domains that are closely monitored by mechanical engineering teams (e.g. hospitals). Based on some limited experience I suspect these problems are also common to Canada, but I’m less certain about their severity.3. Technical Horrors: Why is This so Difficult?Within HVAC, many important jobs are currently either not performed or delegated to people who are largely incapable of performing them. Many people convincingly lie about their capacity to perform a job they’re incapable of, report having done things they haven’t, or even make statements at odds with physics.Examples include:Accurate heat load/loss calculations: These are used to size heating and cooling systems, and in most areas are code mandated for both new and replacement systems. Competent sizing (Manual J for residential) is viewed as highly important by virtually all experts within HVAC. However, despite decades of investment in training and compliance, a lead technical manager of a clean energy program reported to me that >90% of Manual Js reviewed by his program had significant errors made apparent due to internal inconsistency (eg duct load on a hydronic system) or obvious inconsistencies with public information on zi...]]>
Mon, 29 May 2023 20:53:48 +0000 EA - Obstacles to the Implementation of Indoor Air Quality Improvements by JesseSmith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Obstacles to the Implementation of Indoor Air Quality Improvements, published by JesseSmith on May 29, 2023 on The Effective Altruism Forum.1. Tl;drMany reports indicate that indoor air quality (IAQ) interventions are likely to be effective at reducing respiratory disease transmission. However, to date there’s been very little focus on the workforce that will implement these interventions. I suggest that the US Heating, Ventilation and Air Conditioning (HVAC) and building maintenance workforces have already posed a significant obstacle to these interventions, and broad uptake of IAQ measures will be significantly hindered by them in the future. The impact will vary in predictable ways depending on the nature of the intervention and its implementation. We should favor simple techniques with improved oversight and outsource or crosscheck technically complex work to people outside of the current HVAC workforce. We should also make IAQ conditions and devices as transparent as possible to both experts and building occupants.To skip my bio and the technical horrors section, proceed to the recommendations in section 4.2. Who am I? Why do I think This? How Certain am I?I began working in construction in 1991. I did a formal carpentry apprenticeship in Victoria BC in the mid-90s and moved to the US in ‘99. Around 2008 I started taking greater interest in HVAC because - despite paying top dollar to local subcontractors - our projects had persistent HVAC problems. Despite protestations that they were following exemplary practices, our projects were plagued with high humidity, loud noise, frequent mechanical failure, and room-to-room temperature differences. This led me to first learn all aspects of system design and controls, and culminated in full system installations. Along the way I obtained a NJ Master HVAC license, performed the thermal work of ~2k light-duty energy retrofits, obtained multiple certifications in HVAC and low-energy design, and became a regional expert in building diagnostics. Since 2010 I’ve worked as a contractor or consultant to roughly a dozen major HVAC contractors and hundreds of homeowners.I’m reasonably certain that the baseline competence of the HVAC workforce is insufficient to broadly and reliably deploy IAQ interventions and that this is a serious obstacle. My comments are specific to the US. I’ve discussed these problems extensively with friends and acquaintances working at a national level and in other parts of the US and believe them to be common to most of the country. The problems are specific to the light commercial and residential workforce, but not domains that are closely monitored by mechanical engineering teams (e.g. hospitals). Based on some limited experience I suspect these problems are also common to Canada, but I’m less certain about their severity.3. Technical Horrors: Why is This so Difficult?Within HVAC, many important jobs are currently either not performed or delegated to people who are largely incapable of performing them. Many people convincingly lie about their capacity to perform a job they’re incapable of, report having done things they haven’t, or even make statements at odds with physics.Examples include:Accurate heat load/loss calculations: These are used to size heating and cooling systems, and in most areas are code mandated for both new and replacement systems. Competent sizing (Manual J for residential) is viewed as highly important by virtually all experts within HVAC. However, despite decades of investment in training and compliance, a lead technical manager of a clean energy program reported to me that >90% of Manual Js reviewed by his program had significant errors made apparent due to internal inconsistency (eg duct load on a hydronic system) or obvious inconsistencies with public information on zi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Obstacles to the Implementation of Indoor Air Quality Improvements, published by JesseSmith on May 29, 2023 on The Effective Altruism Forum.1. Tl;drMany reports indicate that indoor air quality (IAQ) interventions are likely to be effective at reducing respiratory disease transmission. However, to date there’s been very little focus on the workforce that will implement these interventions. I suggest that the US Heating, Ventilation and Air Conditioning (HVAC) and building maintenance workforces have already posed a significant obstacle to these interventions, and broad uptake of IAQ measures will be significantly hindered by them in the future. The impact will vary in predictable ways depending on the nature of the intervention and its implementation. We should favor simple techniques with improved oversight and outsource or crosscheck technically complex work to people outside of the current HVAC workforce. We should also make IAQ conditions and devices as transparent as possible to both experts and building occupants.To skip my bio and the technical horrors section, proceed to the recommendations in section 4.2. Who am I? Why do I think This? How Certain am I?I began working in construction in 1991. I did a formal carpentry apprenticeship in Victoria BC in the mid-90s and moved to the US in ‘99. Around 2008 I started taking greater interest in HVAC because - despite paying top dollar to local subcontractors - our projects had persistent HVAC problems. Despite protestations that they were following exemplary practices, our projects were plagued with high humidity, loud noise, frequent mechanical failure, and room-to-room temperature differences. This led me to first learn all aspects of system design and controls, and culminated in full system installations. Along the way I obtained a NJ Master HVAC license, performed the thermal work of ~2k light-duty energy retrofits, obtained multiple certifications in HVAC and low-energy design, and became a regional expert in building diagnostics. Since 2010 I’ve worked as a contractor or consultant to roughly a dozen major HVAC contractors and hundreds of homeowners.I’m reasonably certain that the baseline competence of the HVAC workforce is insufficient to broadly and reliably deploy IAQ interventions and that this is a serious obstacle. My comments are specific to the US. I’ve discussed these problems extensively with friends and acquaintances working at a national level and in other parts of the US and believe them to be common to most of the country. The problems are specific to the light commercial and residential workforce, but not domains that are closely monitored by mechanical engineering teams (e.g. hospitals). Based on some limited experience I suspect these problems are also common to Canada, but I’m less certain about their severity.3. Technical Horrors: Why is This so Difficult?Within HVAC, many important jobs are currently either not performed or delegated to people who are largely incapable of performing them. Many people convincingly lie about their capacity to perform a job they’re incapable of, report having done things they haven’t, or even make statements at odds with physics.Examples include:Accurate heat load/loss calculations: These are used to size heating and cooling systems, and in most areas are code mandated for both new and replacement systems. Competent sizing (Manual J for residential) is viewed as highly important by virtually all experts within HVAC. However, despite decades of investment in training and compliance, a lead technical manager of a clean energy program reported to me that >90% of Manual Js reviewed by his program had significant errors made apparent due to internal inconsistency (eg duct load on a hydronic system) or obvious inconsistencies with public information on zi...]]>
JesseSmith https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 12:19 None full 6108
5hegHvhrBZaBTWwxs_NL_EA_EA EA - Governments Might Prefer Bringing Resources Back to the Solar System Rather than Space Settlement in Order to Maintain Control, Given that Governing Interstellar Settlements Looks Almost Impossible by Dr. David Mathers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Governments Might Prefer Bringing Resources Back to the Solar System Rather than Space Settlement in Order to Maintain Control, Given that Governing Interstellar Settlements Looks Almost Impossible, published by Dr. David Mathers on May 29, 2023 on The Effective Altruism Forum.Part of my work for Arb Research (/).Epistemic Status: I have no scientific background and wrote this after only a couple of days thought, so it is very possible that there is some argument I am unaware of, but which would be obvious to physicists, why a ‘resource-gathering without settlement’ approach to interstellar exploration is not feasible. However, my Arb colleague Vasco Grilo has aerospace engineering expertise, and says he can’t think of any reason why it wouldn’t be feasible in principle. Still, take all this with a large dose of caution.Some futurists have considered it likely that, at least absent existential catastrophe in the next few centuries, human beings (or our post-human or machine descendants) will eventually attempt to settle our galaxy. After all, there are vastly more resources in the rest of the Milky Way than in the Solar system. So we could support far more lives and create much more of anything else we care about, if we make use of stuff out there in the wider galaxy. And one very obvious way for us to make use of that stuff is for us to send out spaceships to establish settlements which make use of the energy of the stars they arrive at. Those settlements could in turn seed further settlements in an iterative process. (This would likely require “digital people”/#fnref6 given the distances involved in interstellar travel.)However, this is not the only way in which we could try to make use of resources outside the solar system. Another way to do so would be to try and gather resources and bring them back to the Solar system without establishing any permanent settlements of either humans or AIs outside the Solar system itself. I think that a government on Earth (or elsewhere in the solar system) might actually prefer gathering resources in this way to space settlements for the following reason:Impossibility of Interstellar Governance (IIG): Because of the huge distances between stars, it is simply not possible for a government in the Solar system to exercise long-term effective governance over any space colonies further away than (at most) the closest handful of stars.For a powerful, although not completely conclusive, case for this claim see this Medium post:/@KevinKohlerFM/cosmic-anarchy-and-its-consequences-b1a557b1a2e3Given IIG, no government within the Solar system can be the government of a settlement outside it. Therefore, if a government sets up a colony run by agents in another star system, it loses direct control of those resources. Of course, the government can try and exercise more indirect control over what happens by choosing starting colonists with particular values. But it’s unclear the degree of control that will allow for long-term.Meanwhile, a government could try and send a mission to other stars which:A) Is not capable of setting-up a new self-sufficient settlement, or can be trusted not to do so. BUT,B) is capable of setting up physical infrastructure to extract the system’s energy and resources and bringing them back to the Solar system. This way, a government situated in the Solar system could maintain direct control over how resources are used. In contrast if they go the space settlement route, the government cannot directly govern the settlement. So it has to rely on the idea that if values of the initial settlers are correct, then the settlement will use its resources in the way the government desires even whilst operating outside the government’s control.A purely resource-gathering mission without settlement will be particularly att...]]>
Dr. David Mathers https://forum.effectivealtruism.org/posts/5hegHvhrBZaBTWwxs/governments-might-prefer-bringing-resources-back-to-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Governments Might Prefer Bringing Resources Back to the Solar System Rather than Space Settlement in Order to Maintain Control, Given that Governing Interstellar Settlements Looks Almost Impossible, published by Dr. David Mathers on May 29, 2023 on The Effective Altruism Forum.Part of my work for Arb Research (/).Epistemic Status: I have no scientific background and wrote this after only a couple of days thought, so it is very possible that there is some argument I am unaware of, but which would be obvious to physicists, why a ‘resource-gathering without settlement’ approach to interstellar exploration is not feasible. However, my Arb colleague Vasco Grilo has aerospace engineering expertise, and says he can’t think of any reason why it wouldn’t be feasible in principle. Still, take all this with a large dose of caution.Some futurists have considered it likely that, at least absent existential catastrophe in the next few centuries, human beings (or our post-human or machine descendants) will eventually attempt to settle our galaxy. After all, there are vastly more resources in the rest of the Milky Way than in the Solar system. So we could support far more lives and create much more of anything else we care about, if we make use of stuff out there in the wider galaxy. And one very obvious way for us to make use of that stuff is for us to send out spaceships to establish settlements which make use of the energy of the stars they arrive at. Those settlements could in turn seed further settlements in an iterative process. (This would likely require “digital people”/#fnref6 given the distances involved in interstellar travel.)However, this is not the only way in which we could try to make use of resources outside the solar system. Another way to do so would be to try and gather resources and bring them back to the Solar system without establishing any permanent settlements of either humans or AIs outside the Solar system itself. I think that a government on Earth (or elsewhere in the solar system) might actually prefer gathering resources in this way to space settlements for the following reason:Impossibility of Interstellar Governance (IIG): Because of the huge distances between stars, it is simply not possible for a government in the Solar system to exercise long-term effective governance over any space colonies further away than (at most) the closest handful of stars.For a powerful, although not completely conclusive, case for this claim see this Medium post:/@KevinKohlerFM/cosmic-anarchy-and-its-consequences-b1a557b1a2e3Given IIG, no government within the Solar system can be the government of a settlement outside it. Therefore, if a government sets up a colony run by agents in another star system, it loses direct control of those resources. Of course, the government can try and exercise more indirect control over what happens by choosing starting colonists with particular values. But it’s unclear the degree of control that will allow for long-term.Meanwhile, a government could try and send a mission to other stars which:A) Is not capable of setting-up a new self-sufficient settlement, or can be trusted not to do so. BUT,B) is capable of setting up physical infrastructure to extract the system’s energy and resources and bringing them back to the Solar system. This way, a government situated in the Solar system could maintain direct control over how resources are used. In contrast if they go the space settlement route, the government cannot directly govern the settlement. So it has to rely on the idea that if values of the initial settlers are correct, then the settlement will use its resources in the way the government desires even whilst operating outside the government’s control.A purely resource-gathering mission without settlement will be particularly att...]]>
Mon, 29 May 2023 19:36:25 +0000 EA - Governments Might Prefer Bringing Resources Back to the Solar System Rather than Space Settlement in Order to Maintain Control, Given that Governing Interstellar Settlements Looks Almost Impossible by Dr. David Mathers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Governments Might Prefer Bringing Resources Back to the Solar System Rather than Space Settlement in Order to Maintain Control, Given that Governing Interstellar Settlements Looks Almost Impossible, published by Dr. David Mathers on May 29, 2023 on The Effective Altruism Forum.Part of my work for Arb Research (/).Epistemic Status: I have no scientific background and wrote this after only a couple of days thought, so it is very possible that there is some argument I am unaware of, but which would be obvious to physicists, why a ‘resource-gathering without settlement’ approach to interstellar exploration is not feasible. However, my Arb colleague Vasco Grilo has aerospace engineering expertise, and says he can’t think of any reason why it wouldn’t be feasible in principle. Still, take all this with a large dose of caution.Some futurists have considered it likely that, at least absent existential catastrophe in the next few centuries, human beings (or our post-human or machine descendants) will eventually attempt to settle our galaxy. After all, there are vastly more resources in the rest of the Milky Way than in the Solar system. So we could support far more lives and create much more of anything else we care about, if we make use of stuff out there in the wider galaxy. And one very obvious way for us to make use of that stuff is for us to send out spaceships to establish settlements which make use of the energy of the stars they arrive at. Those settlements could in turn seed further settlements in an iterative process. (This would likely require “digital people”/#fnref6 given the distances involved in interstellar travel.)However, this is not the only way in which we could try to make use of resources outside the solar system. Another way to do so would be to try and gather resources and bring them back to the Solar system without establishing any permanent settlements of either humans or AIs outside the Solar system itself. I think that a government on Earth (or elsewhere in the solar system) might actually prefer gathering resources in this way to space settlements for the following reason:Impossibility of Interstellar Governance (IIG): Because of the huge distances between stars, it is simply not possible for a government in the Solar system to exercise long-term effective governance over any space colonies further away than (at most) the closest handful of stars.For a powerful, although not completely conclusive, case for this claim see this Medium post:/@KevinKohlerFM/cosmic-anarchy-and-its-consequences-b1a557b1a2e3Given IIG, no government within the Solar system can be the government of a settlement outside it. Therefore, if a government sets up a colony run by agents in another star system, it loses direct control of those resources. Of course, the government can try and exercise more indirect control over what happens by choosing starting colonists with particular values. But it’s unclear the degree of control that will allow for long-term.Meanwhile, a government could try and send a mission to other stars which:A) Is not capable of setting-up a new self-sufficient settlement, or can be trusted not to do so. BUT,B) is capable of setting up physical infrastructure to extract the system’s energy and resources and bringing them back to the Solar system. This way, a government situated in the Solar system could maintain direct control over how resources are used. In contrast if they go the space settlement route, the government cannot directly govern the settlement. So it has to rely on the idea that if values of the initial settlers are correct, then the settlement will use its resources in the way the government desires even whilst operating outside the government’s control.A purely resource-gathering mission without settlement will be particularly att...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Governments Might Prefer Bringing Resources Back to the Solar System Rather than Space Settlement in Order to Maintain Control, Given that Governing Interstellar Settlements Looks Almost Impossible, published by Dr. David Mathers on May 29, 2023 on The Effective Altruism Forum.Part of my work for Arb Research (/).Epistemic Status: I have no scientific background and wrote this after only a couple of days thought, so it is very possible that there is some argument I am unaware of, but which would be obvious to physicists, why a ‘resource-gathering without settlement’ approach to interstellar exploration is not feasible. However, my Arb colleague Vasco Grilo has aerospace engineering expertise, and says he can’t think of any reason why it wouldn’t be feasible in principle. Still, take all this with a large dose of caution.Some futurists have considered it likely that, at least absent existential catastrophe in the next few centuries, human beings (or our post-human or machine descendants) will eventually attempt to settle our galaxy. After all, there are vastly more resources in the rest of the Milky Way than in the Solar system. So we could support far more lives and create much more of anything else we care about, if we make use of stuff out there in the wider galaxy. And one very obvious way for us to make use of that stuff is for us to send out spaceships to establish settlements which make use of the energy of the stars they arrive at. Those settlements could in turn seed further settlements in an iterative process. (This would likely require “digital people”/#fnref6 given the distances involved in interstellar travel.)However, this is not the only way in which we could try to make use of resources outside the solar system. Another way to do so would be to try and gather resources and bring them back to the Solar system without establishing any permanent settlements of either humans or AIs outside the Solar system itself. I think that a government on Earth (or elsewhere in the solar system) might actually prefer gathering resources in this way to space settlements for the following reason:Impossibility of Interstellar Governance (IIG): Because of the huge distances between stars, it is simply not possible for a government in the Solar system to exercise long-term effective governance over any space colonies further away than (at most) the closest handful of stars.For a powerful, although not completely conclusive, case for this claim see this Medium post:/@KevinKohlerFM/cosmic-anarchy-and-its-consequences-b1a557b1a2e3Given IIG, no government within the Solar system can be the government of a settlement outside it. Therefore, if a government sets up a colony run by agents in another star system, it loses direct control of those resources. Of course, the government can try and exercise more indirect control over what happens by choosing starting colonists with particular values. But it’s unclear the degree of control that will allow for long-term.Meanwhile, a government could try and send a mission to other stars which:A) Is not capable of setting-up a new self-sufficient settlement, or can be trusted not to do so. BUT,B) is capable of setting up physical infrastructure to extract the system’s energy and resources and bringing them back to the Solar system. This way, a government situated in the Solar system could maintain direct control over how resources are used. In contrast if they go the space settlement route, the government cannot directly govern the settlement. So it has to rely on the idea that if values of the initial settlers are correct, then the settlement will use its resources in the way the government desires even whilst operating outside the government’s control.A purely resource-gathering mission without settlement will be particularly att...]]>
Dr. David Mathers https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:11 None full 6109
E3wSsdTXLGZ3pvYSY_NL_EA_EA EA - EA Estonia's Impact in 2022 by RichardAnnilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Estonia's Impact in 2022, published by RichardAnnilo on May 29, 2023 on The Effective Altruism Forum.BackgroundThis report is about January to December 2022 in EA Estonia, corresponding to our last grant period (funding from the EA Infrastructure Fund for 1 FTE and group expenses).Quick facts about Estonia: it has a population of 1.3 million and is placed both geographically and culturally between the Nordics and Eastern Europe. Our language has 14 noun cases, it is the birthplace of 10 unicorns, and we have the best mushroom scientists. Go figure.In our national EA group, there are 23–30 people whom I would consider to be “highly engaged” (meaning they have spent more than 100 hours engaging with EA content, have developed career plans and have taken significant steps in pursuit of doing good). You could expect around 30 people to attend our largest community event (Figure 1) and our Slack channel has 50–60 active weekly members (Figure 2).Figure 1: Attendees of our largest community event, the EA Estonia Summer Retreat. August 2022.Figure 2. EA Estonia Slack statistics from its creation. Weekly active members have been oscillating between 40 and 65 throughout 2022.Group strategyHere are the main metrics we used to evaluate our impact:Awareness: Number of people aware of the term “effective altruism” and EA Estonia.Activities:Introductory talksDirect outreachSocial media outreachFirst engagement: Number of people who took action because of our outreach.Activities:Introductory courseCause-specific reading groupsCareer planning: Number of people that develop career plans based on EA principles that are well informed and well reasoned.Activities:Career courseTaking action: Number of people taking significant action based on EA-informed career plans (e.g. starting full-time jobs, university degrees).Activities:1-1 career callsPeer-mentoringDirectly sharing opportunitiesConcerns with this model:The actual impact comes when people take action within high-impact jobs, which we currently don't measure.We don't measure value drift or other kinds of decreased engagement after taking significant next steps.This model doesn't prioritize targeting existing Highly Engaged EAs (HEAs) to have a higher impact.This also doesn't include a more meta-level goal of keeping people engaged and interested while moving towards an HEA status. We do organize social events for this reason, however the impact of them is not quantified.Regarless of these concerns, the main theory of change feels relatively straight-forward: (1) we find young altruistically-minded people who are unclear about their future career plans, then (2) we make them aware of the effective altruism movement and various high-impact career paths, and then (3) we prompt them to develop explicit career plans and encourage them to take action upon them.Below I will go into more detail regarding the goals, activities and results of 2022 in two categories: (i) outreach and (ii) growing HEAs. I will end with a short conclusion and key takeaways for next year.I OutreachGoal: 5,000 new people who know what “effective altruism” means and that there is an active local group in Estonia.Actual: 20,776 people reached.Activities:Liina Salonen started working full-time as the Communictions Specialist in EA Estonia.Reached at least 20,000 people on Facebook with the Introductory Course social media campaignStudent fair tabling.At least 155 people reached (played the Giving Game)Goal: 10 lecturers mentioning EA EstoniaActual: 1 lecturers reachedVisited a philosophy lecture. Number of students: 20.Talked about effective altruism and longtermism. Created a discussion with the lecturer.Suggested people sign up for our career course. Nobody responded.Wrote to two other philosophy lecturers, but the...]]>
RichardAnnilo https://forum.effectivealtruism.org/posts/E3wSsdTXLGZ3pvYSY/ea-estonia-s-impact-in-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Estonia's Impact in 2022, published by RichardAnnilo on May 29, 2023 on The Effective Altruism Forum.BackgroundThis report is about January to December 2022 in EA Estonia, corresponding to our last grant period (funding from the EA Infrastructure Fund for 1 FTE and group expenses).Quick facts about Estonia: it has a population of 1.3 million and is placed both geographically and culturally between the Nordics and Eastern Europe. Our language has 14 noun cases, it is the birthplace of 10 unicorns, and we have the best mushroom scientists. Go figure.In our national EA group, there are 23–30 people whom I would consider to be “highly engaged” (meaning they have spent more than 100 hours engaging with EA content, have developed career plans and have taken significant steps in pursuit of doing good). You could expect around 30 people to attend our largest community event (Figure 1) and our Slack channel has 50–60 active weekly members (Figure 2).Figure 1: Attendees of our largest community event, the EA Estonia Summer Retreat. August 2022.Figure 2. EA Estonia Slack statistics from its creation. Weekly active members have been oscillating between 40 and 65 throughout 2022.Group strategyHere are the main metrics we used to evaluate our impact:Awareness: Number of people aware of the term “effective altruism” and EA Estonia.Activities:Introductory talksDirect outreachSocial media outreachFirst engagement: Number of people who took action because of our outreach.Activities:Introductory courseCause-specific reading groupsCareer planning: Number of people that develop career plans based on EA principles that are well informed and well reasoned.Activities:Career courseTaking action: Number of people taking significant action based on EA-informed career plans (e.g. starting full-time jobs, university degrees).Activities:1-1 career callsPeer-mentoringDirectly sharing opportunitiesConcerns with this model:The actual impact comes when people take action within high-impact jobs, which we currently don't measure.We don't measure value drift or other kinds of decreased engagement after taking significant next steps.This model doesn't prioritize targeting existing Highly Engaged EAs (HEAs) to have a higher impact.This also doesn't include a more meta-level goal of keeping people engaged and interested while moving towards an HEA status. We do organize social events for this reason, however the impact of them is not quantified.Regarless of these concerns, the main theory of change feels relatively straight-forward: (1) we find young altruistically-minded people who are unclear about their future career plans, then (2) we make them aware of the effective altruism movement and various high-impact career paths, and then (3) we prompt them to develop explicit career plans and encourage them to take action upon them.Below I will go into more detail regarding the goals, activities and results of 2022 in two categories: (i) outreach and (ii) growing HEAs. I will end with a short conclusion and key takeaways for next year.I OutreachGoal: 5,000 new people who know what “effective altruism” means and that there is an active local group in Estonia.Actual: 20,776 people reached.Activities:Liina Salonen started working full-time as the Communictions Specialist in EA Estonia.Reached at least 20,000 people on Facebook with the Introductory Course social media campaignStudent fair tabling.At least 155 people reached (played the Giving Game)Goal: 10 lecturers mentioning EA EstoniaActual: 1 lecturers reachedVisited a philosophy lecture. Number of students: 20.Talked about effective altruism and longtermism. Created a discussion with the lecturer.Suggested people sign up for our career course. Nobody responded.Wrote to two other philosophy lecturers, but the...]]>
Mon, 29 May 2023 18:48:17 +0000 EA - EA Estonia's Impact in 2022 by RichardAnnilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Estonia's Impact in 2022, published by RichardAnnilo on May 29, 2023 on The Effective Altruism Forum.BackgroundThis report is about January to December 2022 in EA Estonia, corresponding to our last grant period (funding from the EA Infrastructure Fund for 1 FTE and group expenses).Quick facts about Estonia: it has a population of 1.3 million and is placed both geographically and culturally between the Nordics and Eastern Europe. Our language has 14 noun cases, it is the birthplace of 10 unicorns, and we have the best mushroom scientists. Go figure.In our national EA group, there are 23–30 people whom I would consider to be “highly engaged” (meaning they have spent more than 100 hours engaging with EA content, have developed career plans and have taken significant steps in pursuit of doing good). You could expect around 30 people to attend our largest community event (Figure 1) and our Slack channel has 50–60 active weekly members (Figure 2).Figure 1: Attendees of our largest community event, the EA Estonia Summer Retreat. August 2022.Figure 2. EA Estonia Slack statistics from its creation. Weekly active members have been oscillating between 40 and 65 throughout 2022.Group strategyHere are the main metrics we used to evaluate our impact:Awareness: Number of people aware of the term “effective altruism” and EA Estonia.Activities:Introductory talksDirect outreachSocial media outreachFirst engagement: Number of people who took action because of our outreach.Activities:Introductory courseCause-specific reading groupsCareer planning: Number of people that develop career plans based on EA principles that are well informed and well reasoned.Activities:Career courseTaking action: Number of people taking significant action based on EA-informed career plans (e.g. starting full-time jobs, university degrees).Activities:1-1 career callsPeer-mentoringDirectly sharing opportunitiesConcerns with this model:The actual impact comes when people take action within high-impact jobs, which we currently don't measure.We don't measure value drift or other kinds of decreased engagement after taking significant next steps.This model doesn't prioritize targeting existing Highly Engaged EAs (HEAs) to have a higher impact.This also doesn't include a more meta-level goal of keeping people engaged and interested while moving towards an HEA status. We do organize social events for this reason, however the impact of them is not quantified.Regarless of these concerns, the main theory of change feels relatively straight-forward: (1) we find young altruistically-minded people who are unclear about their future career plans, then (2) we make them aware of the effective altruism movement and various high-impact career paths, and then (3) we prompt them to develop explicit career plans and encourage them to take action upon them.Below I will go into more detail regarding the goals, activities and results of 2022 in two categories: (i) outreach and (ii) growing HEAs. I will end with a short conclusion and key takeaways for next year.I OutreachGoal: 5,000 new people who know what “effective altruism” means and that there is an active local group in Estonia.Actual: 20,776 people reached.Activities:Liina Salonen started working full-time as the Communictions Specialist in EA Estonia.Reached at least 20,000 people on Facebook with the Introductory Course social media campaignStudent fair tabling.At least 155 people reached (played the Giving Game)Goal: 10 lecturers mentioning EA EstoniaActual: 1 lecturers reachedVisited a philosophy lecture. Number of students: 20.Talked about effective altruism and longtermism. Created a discussion with the lecturer.Suggested people sign up for our career course. Nobody responded.Wrote to two other philosophy lecturers, but the...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Estonia's Impact in 2022, published by RichardAnnilo on May 29, 2023 on The Effective Altruism Forum.BackgroundThis report is about January to December 2022 in EA Estonia, corresponding to our last grant period (funding from the EA Infrastructure Fund for 1 FTE and group expenses).Quick facts about Estonia: it has a population of 1.3 million and is placed both geographically and culturally between the Nordics and Eastern Europe. Our language has 14 noun cases, it is the birthplace of 10 unicorns, and we have the best mushroom scientists. Go figure.In our national EA group, there are 23–30 people whom I would consider to be “highly engaged” (meaning they have spent more than 100 hours engaging with EA content, have developed career plans and have taken significant steps in pursuit of doing good). You could expect around 30 people to attend our largest community event (Figure 1) and our Slack channel has 50–60 active weekly members (Figure 2).Figure 1: Attendees of our largest community event, the EA Estonia Summer Retreat. August 2022.Figure 2. EA Estonia Slack statistics from its creation. Weekly active members have been oscillating between 40 and 65 throughout 2022.Group strategyHere are the main metrics we used to evaluate our impact:Awareness: Number of people aware of the term “effective altruism” and EA Estonia.Activities:Introductory talksDirect outreachSocial media outreachFirst engagement: Number of people who took action because of our outreach.Activities:Introductory courseCause-specific reading groupsCareer planning: Number of people that develop career plans based on EA principles that are well informed and well reasoned.Activities:Career courseTaking action: Number of people taking significant action based on EA-informed career plans (e.g. starting full-time jobs, university degrees).Activities:1-1 career callsPeer-mentoringDirectly sharing opportunitiesConcerns with this model:The actual impact comes when people take action within high-impact jobs, which we currently don't measure.We don't measure value drift or other kinds of decreased engagement after taking significant next steps.This model doesn't prioritize targeting existing Highly Engaged EAs (HEAs) to have a higher impact.This also doesn't include a more meta-level goal of keeping people engaged and interested while moving towards an HEA status. We do organize social events for this reason, however the impact of them is not quantified.Regarless of these concerns, the main theory of change feels relatively straight-forward: (1) we find young altruistically-minded people who are unclear about their future career plans, then (2) we make them aware of the effective altruism movement and various high-impact career paths, and then (3) we prompt them to develop explicit career plans and encourage them to take action upon them.Below I will go into more detail regarding the goals, activities and results of 2022 in two categories: (i) outreach and (ii) growing HEAs. I will end with a short conclusion and key takeaways for next year.I OutreachGoal: 5,000 new people who know what “effective altruism” means and that there is an active local group in Estonia.Actual: 20,776 people reached.Activities:Liina Salonen started working full-time as the Communictions Specialist in EA Estonia.Reached at least 20,000 people on Facebook with the Introductory Course social media campaignStudent fair tabling.At least 155 people reached (played the Giving Game)Goal: 10 lecturers mentioning EA EstoniaActual: 1 lecturers reachedVisited a philosophy lecture. Number of students: 20.Talked about effective altruism and longtermism. Created a discussion with the lecturer.Suggested people sign up for our career course. Nobody responded.Wrote to two other philosophy lecturers, but the...]]>
RichardAnnilo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:40 None full 6099
2TdXocyDF9PxWewwY_NL_EA_EA EA - Should the EA community be cause-first or member-first? by EdoArad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should the EA community be cause-first or member-first?, published by EdoArad on May 29, 2023 on The Effective Altruism Forum.It's really hard to do community building well. Opinions on strategy and vision vary a lot, and we don't yet know enough about what actually works and how well. Here, I'll suggest one axis of community-building strategy which helped me clarify and compare some contrasting opinions.Cause-firstWill Macaskill's proposed Definition of Effective Altruism is composed of:An overarching effort to figure out what are the best opportunities to do good.A community of people that work to bring more resources to these opportunities, or work on these directly.This suggests a "cause-first" community-building strategy, where the main goal for community builders is to get more manpower into the top cause areas. Communities are measured by the total impact produced directly through the people they engage with. Communities try to find the most promising people, persuade them to work on top causes, and empower them to do so well.CEA's definition and strategy seem to be mostly along these lines:Effective altruism is a project that aims to find the best ways to help others, and put them into practice.It’s both a research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.andOur mission is to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.Member-firstLet's try out a different definition for the EA community, taken from CEA's guiding principles:What is the effective altruism community?The effective altruism community is a global community of people who care deeply about the world, make helping others a significant part of their lives, and use evidence and reason to figure out how best to do so.This, to me, suggests a subtly different vision and strategy for the community. One that is, first of all, focused on these people who live by EA principles. Such a "member-first" strategy could have a supporting infrastructure that is focused on helping the individuals involved to live their lives according to these principles, and an outreach/growth ecosystem that works to make the principles of EA more universal.What's the difference?I think this dimension has important effects on the value of the community, and that both local and global community-building strategies should be aware of the tradeoffs between the two.I'll list some examples and caricatures for the distinction between the two, to give a more intuitive grasp of how these strategies differ, without any clear order:Leaning cause-firstLeaning member-firstKeep EA Small and WeirdBig Tent EACurrent EA Handbook (focus on introducing major causes)2015's EA Handbook (focus on core EA principles)80,000 HoursProbably GoodWants more people doing high-quality AI Safety work, regardless of their acceptance of EA principlesWants more people deeply understanding and accepting EA principles, regardless of what they actually work on or donate to.Targeted outreach to students in high ranking universitiesBroad outreach with diverse messagingEncourages people to change occupations to focus on the world's most pressing problemsEncourages people to use the tools and principles of EA to do more good in their current trajectoryRisk of people not finding useful ways to contribute to top causesRisk of not enough people who want to contribute to the world's top causesThe community as a whole leads by example, by taking in-depth prioritization research with the proper seriousnessEach individual is focused more on how toimplement EA principles in their own lives, taking their personal worldview and situation into account ...]]>
EdoArad https://forum.effectivealtruism.org/posts/2TdXocyDF9PxWewwY/should-the-ea-community-be-cause-first-or-member-first Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should the EA community be cause-first or member-first?, published by EdoArad on May 29, 2023 on The Effective Altruism Forum.It's really hard to do community building well. Opinions on strategy and vision vary a lot, and we don't yet know enough about what actually works and how well. Here, I'll suggest one axis of community-building strategy which helped me clarify and compare some contrasting opinions.Cause-firstWill Macaskill's proposed Definition of Effective Altruism is composed of:An overarching effort to figure out what are the best opportunities to do good.A community of people that work to bring more resources to these opportunities, or work on these directly.This suggests a "cause-first" community-building strategy, where the main goal for community builders is to get more manpower into the top cause areas. Communities are measured by the total impact produced directly through the people they engage with. Communities try to find the most promising people, persuade them to work on top causes, and empower them to do so well.CEA's definition and strategy seem to be mostly along these lines:Effective altruism is a project that aims to find the best ways to help others, and put them into practice.It’s both a research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.andOur mission is to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.Member-firstLet's try out a different definition for the EA community, taken from CEA's guiding principles:What is the effective altruism community?The effective altruism community is a global community of people who care deeply about the world, make helping others a significant part of their lives, and use evidence and reason to figure out how best to do so.This, to me, suggests a subtly different vision and strategy for the community. One that is, first of all, focused on these people who live by EA principles. Such a "member-first" strategy could have a supporting infrastructure that is focused on helping the individuals involved to live their lives according to these principles, and an outreach/growth ecosystem that works to make the principles of EA more universal.What's the difference?I think this dimension has important effects on the value of the community, and that both local and global community-building strategies should be aware of the tradeoffs between the two.I'll list some examples and caricatures for the distinction between the two, to give a more intuitive grasp of how these strategies differ, without any clear order:Leaning cause-firstLeaning member-firstKeep EA Small and WeirdBig Tent EACurrent EA Handbook (focus on introducing major causes)2015's EA Handbook (focus on core EA principles)80,000 HoursProbably GoodWants more people doing high-quality AI Safety work, regardless of their acceptance of EA principlesWants more people deeply understanding and accepting EA principles, regardless of what they actually work on or donate to.Targeted outreach to students in high ranking universitiesBroad outreach with diverse messagingEncourages people to change occupations to focus on the world's most pressing problemsEncourages people to use the tools and principles of EA to do more good in their current trajectoryRisk of people not finding useful ways to contribute to top causesRisk of not enough people who want to contribute to the world's top causesThe community as a whole leads by example, by taking in-depth prioritization research with the proper seriousnessEach individual is focused more on how toimplement EA principles in their own lives, taking their personal worldview and situation into account ...]]>
Mon, 29 May 2023 17:24:45 +0000 EA - Should the EA community be cause-first or member-first? by EdoArad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should the EA community be cause-first or member-first?, published by EdoArad on May 29, 2023 on The Effective Altruism Forum.It's really hard to do community building well. Opinions on strategy and vision vary a lot, and we don't yet know enough about what actually works and how well. Here, I'll suggest one axis of community-building strategy which helped me clarify and compare some contrasting opinions.Cause-firstWill Macaskill's proposed Definition of Effective Altruism is composed of:An overarching effort to figure out what are the best opportunities to do good.A community of people that work to bring more resources to these opportunities, or work on these directly.This suggests a "cause-first" community-building strategy, where the main goal for community builders is to get more manpower into the top cause areas. Communities are measured by the total impact produced directly through the people they engage with. Communities try to find the most promising people, persuade them to work on top causes, and empower them to do so well.CEA's definition and strategy seem to be mostly along these lines:Effective altruism is a project that aims to find the best ways to help others, and put them into practice.It’s both a research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.andOur mission is to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.Member-firstLet's try out a different definition for the EA community, taken from CEA's guiding principles:What is the effective altruism community?The effective altruism community is a global community of people who care deeply about the world, make helping others a significant part of their lives, and use evidence and reason to figure out how best to do so.This, to me, suggests a subtly different vision and strategy for the community. One that is, first of all, focused on these people who live by EA principles. Such a "member-first" strategy could have a supporting infrastructure that is focused on helping the individuals involved to live their lives according to these principles, and an outreach/growth ecosystem that works to make the principles of EA more universal.What's the difference?I think this dimension has important effects on the value of the community, and that both local and global community-building strategies should be aware of the tradeoffs between the two.I'll list some examples and caricatures for the distinction between the two, to give a more intuitive grasp of how these strategies differ, without any clear order:Leaning cause-firstLeaning member-firstKeep EA Small and WeirdBig Tent EACurrent EA Handbook (focus on introducing major causes)2015's EA Handbook (focus on core EA principles)80,000 HoursProbably GoodWants more people doing high-quality AI Safety work, regardless of their acceptance of EA principlesWants more people deeply understanding and accepting EA principles, regardless of what they actually work on or donate to.Targeted outreach to students in high ranking universitiesBroad outreach with diverse messagingEncourages people to change occupations to focus on the world's most pressing problemsEncourages people to use the tools and principles of EA to do more good in their current trajectoryRisk of people not finding useful ways to contribute to top causesRisk of not enough people who want to contribute to the world's top causesThe community as a whole leads by example, by taking in-depth prioritization research with the proper seriousnessEach individual is focused more on how toimplement EA principles in their own lives, taking their personal worldview and situation into account ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should the EA community be cause-first or member-first?, published by EdoArad on May 29, 2023 on The Effective Altruism Forum.It's really hard to do community building well. Opinions on strategy and vision vary a lot, and we don't yet know enough about what actually works and how well. Here, I'll suggest one axis of community-building strategy which helped me clarify and compare some contrasting opinions.Cause-firstWill Macaskill's proposed Definition of Effective Altruism is composed of:An overarching effort to figure out what are the best opportunities to do good.A community of people that work to bring more resources to these opportunities, or work on these directly.This suggests a "cause-first" community-building strategy, where the main goal for community builders is to get more manpower into the top cause areas. Communities are measured by the total impact produced directly through the people they engage with. Communities try to find the most promising people, persuade them to work on top causes, and empower them to do so well.CEA's definition and strategy seem to be mostly along these lines:Effective altruism is a project that aims to find the best ways to help others, and put them into practice.It’s both a research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.andOur mission is to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.Member-firstLet's try out a different definition for the EA community, taken from CEA's guiding principles:What is the effective altruism community?The effective altruism community is a global community of people who care deeply about the world, make helping others a significant part of their lives, and use evidence and reason to figure out how best to do so.This, to me, suggests a subtly different vision and strategy for the community. One that is, first of all, focused on these people who live by EA principles. Such a "member-first" strategy could have a supporting infrastructure that is focused on helping the individuals involved to live their lives according to these principles, and an outreach/growth ecosystem that works to make the principles of EA more universal.What's the difference?I think this dimension has important effects on the value of the community, and that both local and global community-building strategies should be aware of the tradeoffs between the two.I'll list some examples and caricatures for the distinction between the two, to give a more intuitive grasp of how these strategies differ, without any clear order:Leaning cause-firstLeaning member-firstKeep EA Small and WeirdBig Tent EACurrent EA Handbook (focus on introducing major causes)2015's EA Handbook (focus on core EA principles)80,000 HoursProbably GoodWants more people doing high-quality AI Safety work, regardless of their acceptance of EA principlesWants more people deeply understanding and accepting EA principles, regardless of what they actually work on or donate to.Targeted outreach to students in high ranking universitiesBroad outreach with diverse messagingEncourages people to change occupations to focus on the world's most pressing problemsEncourages people to use the tools and principles of EA to do more good in their current trajectoryRisk of people not finding useful ways to contribute to top causesRisk of not enough people who want to contribute to the world's top causesThe community as a whole leads by example, by taking in-depth prioritization research with the proper seriousnessEach individual is focused more on how toimplement EA principles in their own lives, taking their personal worldview and situation into account ...]]>
EdoArad https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:42 None full 6097
N3rKebheBhAfoStqa_NL_EA_EA EA - Drawing attention to invasive Lymantria dispar dispar spongy moth outbreaks as an important, neglected issue in wild animal welfare by Meghan Barrett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Drawing attention to invasive Lymantria dispar dispar spongy moth outbreaks as an important, neglected issue in wild animal welfare, published by Meghan Barrett on May 28, 2023 on The Effective Altruism Forum.This post contains only the summary of a longer research post, written by Meghan Barrett and Hannah McKay. The full post can be found at the above link on Rethink Priorities website.SummaryOne aim of wild animal welfare research is to identify situations where large numbers of wild animals are managed by humans in ways that have significant welfare impacts. If the number of individuals is large and the welfare impacts significant, the issue is important. As humans are managing these animals, it is possible the welfare impacts could be moderated to reduce their suffering. The massive scale of invasive (e.g., non-native) Lymantria dispar dispar (spongy moth) outbreaks represents an unappreciated wild animal welfare issue, and thus deserves further attention from a welfare (not simply an invasive species-control) perspective.The spongy moth is not endemic to North America. The species experiences localized three year-long outbreaks of half a billion or more caterpillars/km2 every 10-15 years in regions where they are well established (including their native range). Spongy moths currently occupy at least 860,000 km2 in North America, only ¼ of their possible range (though most of the occupied area is not experiencing outbreak conditions, most of the time). L. dispar continues to spread slowly to new areas each year despite multi-million dollar efforts to stop expansion. Assuming spongy moth caterpillars are sentient, methods for actively controlling them during outbreaks cause substantial suffering. The aerial spray (Btk) ruptures the stomach, causing the insect to die from either starvation or sepsis over two to seven days. However, because outbreaks are so large, most caterpillars are not actively targeted for control, and ‘natural forces’ are allowed to reduce the outbreak.The most prominent natural forces to rein in an outbreak are starvation and disease. The accidentally introduced fungus, Entomophaga (meaning “insect eater”) maimaiga, digests caterpillars’ insides before pushing through the exoskeleton to release spores, usually within a week. LdNPV virus is also common in the spongy moth population, but only causes high levels of mortality during outbreaks when larvae are stressed from extreme competition. A symptom of severe LdNPV infection is “larval melting,” or the liquefaction of the insect’s internal organs.The scale of spongy moth outbreaks is tremendous, though notably these outbreaks are not necessarily higher-density than numbers of other insect species (e.g., 740 million to 6.2 billion individual wireworms/km2; Smithsonian, n.d.). However, spongy moths are one of the best tracked non-native insects (Grayson & Johnson, 2018; e.g., Stop the Spread program), providing us with better data for analyzing the scale of the welfare issue (both in terms of caterpillar density within outbreaks, and the total area affected by outbreaks). In addition, there is potential for significant range expansion by spongy moths that would increase the scope of the welfare concern, and there appears to be extreme suffering1 induced by both active and natural outbreak control. As a result, spongy moth welfare during outbreaks could be an issue of concern for animal welfare advocates.Further research could improve spongy moth welfare by: 1) identifying the most promising long-term interventions for preventing/reducing the occurrence of outbreaks behind the invasion front, 2) contributing to halting the spread of spongy moths into new areas, and 3) identifying the highest-welfare outbreak management strategies where outbreaks do occur.Thanks for listening. To help us ...]]>
Meghan Barrett https://forum.effectivealtruism.org/posts/N3rKebheBhAfoStqa/drawing-attention-to-invasive-lymantria-dispar-dispar-spongy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Drawing attention to invasive Lymantria dispar dispar spongy moth outbreaks as an important, neglected issue in wild animal welfare, published by Meghan Barrett on May 28, 2023 on The Effective Altruism Forum.This post contains only the summary of a longer research post, written by Meghan Barrett and Hannah McKay. The full post can be found at the above link on Rethink Priorities website.SummaryOne aim of wild animal welfare research is to identify situations where large numbers of wild animals are managed by humans in ways that have significant welfare impacts. If the number of individuals is large and the welfare impacts significant, the issue is important. As humans are managing these animals, it is possible the welfare impacts could be moderated to reduce their suffering. The massive scale of invasive (e.g., non-native) Lymantria dispar dispar (spongy moth) outbreaks represents an unappreciated wild animal welfare issue, and thus deserves further attention from a welfare (not simply an invasive species-control) perspective.The spongy moth is not endemic to North America. The species experiences localized three year-long outbreaks of half a billion or more caterpillars/km2 every 10-15 years in regions where they are well established (including their native range). Spongy moths currently occupy at least 860,000 km2 in North America, only ¼ of their possible range (though most of the occupied area is not experiencing outbreak conditions, most of the time). L. dispar continues to spread slowly to new areas each year despite multi-million dollar efforts to stop expansion. Assuming spongy moth caterpillars are sentient, methods for actively controlling them during outbreaks cause substantial suffering. The aerial spray (Btk) ruptures the stomach, causing the insect to die from either starvation or sepsis over two to seven days. However, because outbreaks are so large, most caterpillars are not actively targeted for control, and ‘natural forces’ are allowed to reduce the outbreak.The most prominent natural forces to rein in an outbreak are starvation and disease. The accidentally introduced fungus, Entomophaga (meaning “insect eater”) maimaiga, digests caterpillars’ insides before pushing through the exoskeleton to release spores, usually within a week. LdNPV virus is also common in the spongy moth population, but only causes high levels of mortality during outbreaks when larvae are stressed from extreme competition. A symptom of severe LdNPV infection is “larval melting,” or the liquefaction of the insect’s internal organs.The scale of spongy moth outbreaks is tremendous, though notably these outbreaks are not necessarily higher-density than numbers of other insect species (e.g., 740 million to 6.2 billion individual wireworms/km2; Smithsonian, n.d.). However, spongy moths are one of the best tracked non-native insects (Grayson & Johnson, 2018; e.g., Stop the Spread program), providing us with better data for analyzing the scale of the welfare issue (both in terms of caterpillar density within outbreaks, and the total area affected by outbreaks). In addition, there is potential for significant range expansion by spongy moths that would increase the scope of the welfare concern, and there appears to be extreme suffering1 induced by both active and natural outbreak control. As a result, spongy moth welfare during outbreaks could be an issue of concern for animal welfare advocates.Further research could improve spongy moth welfare by: 1) identifying the most promising long-term interventions for preventing/reducing the occurrence of outbreaks behind the invasion front, 2) contributing to halting the spread of spongy moths into new areas, and 3) identifying the highest-welfare outbreak management strategies where outbreaks do occur.Thanks for listening. To help us ...]]>
Sun, 28 May 2023 17:45:38 +0000 EA - Drawing attention to invasive Lymantria dispar dispar spongy moth outbreaks as an important, neglected issue in wild animal welfare by Meghan Barrett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Drawing attention to invasive Lymantria dispar dispar spongy moth outbreaks as an important, neglected issue in wild animal welfare, published by Meghan Barrett on May 28, 2023 on The Effective Altruism Forum.This post contains only the summary of a longer research post, written by Meghan Barrett and Hannah McKay. The full post can be found at the above link on Rethink Priorities website.SummaryOne aim of wild animal welfare research is to identify situations where large numbers of wild animals are managed by humans in ways that have significant welfare impacts. If the number of individuals is large and the welfare impacts significant, the issue is important. As humans are managing these animals, it is possible the welfare impacts could be moderated to reduce their suffering. The massive scale of invasive (e.g., non-native) Lymantria dispar dispar (spongy moth) outbreaks represents an unappreciated wild animal welfare issue, and thus deserves further attention from a welfare (not simply an invasive species-control) perspective.The spongy moth is not endemic to North America. The species experiences localized three year-long outbreaks of half a billion or more caterpillars/km2 every 10-15 years in regions where they are well established (including their native range). Spongy moths currently occupy at least 860,000 km2 in North America, only ¼ of their possible range (though most of the occupied area is not experiencing outbreak conditions, most of the time). L. dispar continues to spread slowly to new areas each year despite multi-million dollar efforts to stop expansion. Assuming spongy moth caterpillars are sentient, methods for actively controlling them during outbreaks cause substantial suffering. The aerial spray (Btk) ruptures the stomach, causing the insect to die from either starvation or sepsis over two to seven days. However, because outbreaks are so large, most caterpillars are not actively targeted for control, and ‘natural forces’ are allowed to reduce the outbreak.The most prominent natural forces to rein in an outbreak are starvation and disease. The accidentally introduced fungus, Entomophaga (meaning “insect eater”) maimaiga, digests caterpillars’ insides before pushing through the exoskeleton to release spores, usually within a week. LdNPV virus is also common in the spongy moth population, but only causes high levels of mortality during outbreaks when larvae are stressed from extreme competition. A symptom of severe LdNPV infection is “larval melting,” or the liquefaction of the insect’s internal organs.The scale of spongy moth outbreaks is tremendous, though notably these outbreaks are not necessarily higher-density than numbers of other insect species (e.g., 740 million to 6.2 billion individual wireworms/km2; Smithsonian, n.d.). However, spongy moths are one of the best tracked non-native insects (Grayson & Johnson, 2018; e.g., Stop the Spread program), providing us with better data for analyzing the scale of the welfare issue (both in terms of caterpillar density within outbreaks, and the total area affected by outbreaks). In addition, there is potential for significant range expansion by spongy moths that would increase the scope of the welfare concern, and there appears to be extreme suffering1 induced by both active and natural outbreak control. As a result, spongy moth welfare during outbreaks could be an issue of concern for animal welfare advocates.Further research could improve spongy moth welfare by: 1) identifying the most promising long-term interventions for preventing/reducing the occurrence of outbreaks behind the invasion front, 2) contributing to halting the spread of spongy moths into new areas, and 3) identifying the highest-welfare outbreak management strategies where outbreaks do occur.Thanks for listening. To help us ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Drawing attention to invasive Lymantria dispar dispar spongy moth outbreaks as an important, neglected issue in wild animal welfare, published by Meghan Barrett on May 28, 2023 on The Effective Altruism Forum.This post contains only the summary of a longer research post, written by Meghan Barrett and Hannah McKay. The full post can be found at the above link on Rethink Priorities website.SummaryOne aim of wild animal welfare research is to identify situations where large numbers of wild animals are managed by humans in ways that have significant welfare impacts. If the number of individuals is large and the welfare impacts significant, the issue is important. As humans are managing these animals, it is possible the welfare impacts could be moderated to reduce their suffering. The massive scale of invasive (e.g., non-native) Lymantria dispar dispar (spongy moth) outbreaks represents an unappreciated wild animal welfare issue, and thus deserves further attention from a welfare (not simply an invasive species-control) perspective.The spongy moth is not endemic to North America. The species experiences localized three year-long outbreaks of half a billion or more caterpillars/km2 every 10-15 years in regions where they are well established (including their native range). Spongy moths currently occupy at least 860,000 km2 in North America, only ¼ of their possible range (though most of the occupied area is not experiencing outbreak conditions, most of the time). L. dispar continues to spread slowly to new areas each year despite multi-million dollar efforts to stop expansion. Assuming spongy moth caterpillars are sentient, methods for actively controlling them during outbreaks cause substantial suffering. The aerial spray (Btk) ruptures the stomach, causing the insect to die from either starvation or sepsis over two to seven days. However, because outbreaks are so large, most caterpillars are not actively targeted for control, and ‘natural forces’ are allowed to reduce the outbreak.The most prominent natural forces to rein in an outbreak are starvation and disease. The accidentally introduced fungus, Entomophaga (meaning “insect eater”) maimaiga, digests caterpillars’ insides before pushing through the exoskeleton to release spores, usually within a week. LdNPV virus is also common in the spongy moth population, but only causes high levels of mortality during outbreaks when larvae are stressed from extreme competition. A symptom of severe LdNPV infection is “larval melting,” or the liquefaction of the insect’s internal organs.The scale of spongy moth outbreaks is tremendous, though notably these outbreaks are not necessarily higher-density than numbers of other insect species (e.g., 740 million to 6.2 billion individual wireworms/km2; Smithsonian, n.d.). However, spongy moths are one of the best tracked non-native insects (Grayson & Johnson, 2018; e.g., Stop the Spread program), providing us with better data for analyzing the scale of the welfare issue (both in terms of caterpillar density within outbreaks, and the total area affected by outbreaks). In addition, there is potential for significant range expansion by spongy moths that would increase the scope of the welfare concern, and there appears to be extreme suffering1 induced by both active and natural outbreak control. As a result, spongy moth welfare during outbreaks could be an issue of concern for animal welfare advocates.Further research could improve spongy moth welfare by: 1) identifying the most promising long-term interventions for preventing/reducing the occurrence of outbreaks behind the invasion front, 2) contributing to halting the spread of spongy moths into new areas, and 3) identifying the highest-welfare outbreak management strategies where outbreaks do occur.Thanks for listening. To help us ...]]>
Meghan Barrett https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:47 None full 6093
aHkthQrAfuyNBNFhM_NL_EA_EA EA - Has Russia’s Invasion of Ukraine Changed Your Mind? by JoelMcGuire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Has Russia’s Invasion of Ukraine Changed Your Mind?, published by JoelMcGuire on May 27, 2023 on The Effective Altruism Forum.[This post was written in a purely personal capacity, etc.]I recently had several long conversations with a friend about whether my regular doom-scrolling regarding the Ukraine war had sharpened my understanding of the world or mostly been a waste of time.Unfortunately, it seems more of the latter. When my mind has changed, it's been slight, and it’s unclear what actions my new views justify. Personally, this means I should probably go back to thinking about happiness and RCTs.I set out what I think are some relevant questions Russia's invasion of Ukraine could change your mind about and provide some sloppy commentary, but I'm interested to know what other EAs and rationalists think about this issue.High-level questionsLikelihood of great power conflictIt seems like the Metaculus forecasting community is now more worried about great power conflict than it was before the war. I assume the invasion of Ukraine is a causal factor. But I feel oddly reassured about this, like the world was ruled by drunks who sobered up when the knives came out, reminded that knives are sharp and bodily fluids are precious.After the invasion, the prospect of a Russia-USA War shifted from a 5-15% to a 25% chance before 2050. I hadn’t known about this forecast, but I would have assumed the opposite. Before the war, Russia viewed the US as a waning power, losing in Afghanistan, not-winning in Syria, Libya and Venezuela, riven by internecine strife and paralyzed by self-doubt. Meanwhile, Russia’s confidence in its comeback rose with each cost-effective success in Crimea, Syria, and Kazakhstan.Now Russia knows how hollow its military was. And it knows the USA knows. And it knows that NATO hand-me-downs are emptying its once vast stockpiles of tanks and APCs. I assume it won’t recover the depth of its armour stocks in the near term (it doesn’t have the USSR’s state capacity or industrial base). The USA also doesn’t need to fight Russia. If Ukraine is doing this well, then Ukraine + Poland + Baltics would probably do just fine. I’d put this more around 6.5%.I think a Russian war with a European state has probably increased simply based on Russia’s revealed willingness to go to war, in conjunction with forecasters predicting a good chance (20%-24%) that the US and China will go to war over Taiwan. Russia may find such a conflict an opportunity to attempt to occupy a square mile of uninhabited Lithuanian forest to create a safe zone for ethnic Russian speakers and puncture the myth of NATO’s 5th article.Will there be a 'World War Three' before 2050? | MetaculusThe predicted probability to this question shifted by around 10%, from the 10-15% range to 20-25% after the war began. I assume this is mostly driven by Russia-NATO-initiated conflict. China-India conflict predictions have decreased from 30% pre-war to 17% before 2035 most recently. And China-US war predictions have stayed constant (20% before 2035). So the rise must stem from the increase in the likelihood of a Russia-US war or by other major powers between 2035 and 2050. I don’t think I agree with the community here, as I explained previously.Will China get involved in the Russo-Ukrainian conflict by 2024?China hasn’t involved itself in the Ukraine war yet. And the prospects for its involvement seem like they should dim over time — surely it would have acted or given more hints that it was considering doing so by now?This makes me more confused about whether China committed to a military confrontation with the West. If it has, and China believed it had more military-industrial capacity than the West (which is what I’d believe if I was China), then now is the perfect opportunity to drain Western stocks ...]]>
JoelMcGuire https://forum.effectivealtruism.org/posts/aHkthQrAfuyNBNFhM/has-russia-s-invasion-of-ukraine-changed-your-mind Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Has Russia’s Invasion of Ukraine Changed Your Mind?, published by JoelMcGuire on May 27, 2023 on The Effective Altruism Forum.[This post was written in a purely personal capacity, etc.]I recently had several long conversations with a friend about whether my regular doom-scrolling regarding the Ukraine war had sharpened my understanding of the world or mostly been a waste of time.Unfortunately, it seems more of the latter. When my mind has changed, it's been slight, and it’s unclear what actions my new views justify. Personally, this means I should probably go back to thinking about happiness and RCTs.I set out what I think are some relevant questions Russia's invasion of Ukraine could change your mind about and provide some sloppy commentary, but I'm interested to know what other EAs and rationalists think about this issue.High-level questionsLikelihood of great power conflictIt seems like the Metaculus forecasting community is now more worried about great power conflict than it was before the war. I assume the invasion of Ukraine is a causal factor. But I feel oddly reassured about this, like the world was ruled by drunks who sobered up when the knives came out, reminded that knives are sharp and bodily fluids are precious.After the invasion, the prospect of a Russia-USA War shifted from a 5-15% to a 25% chance before 2050. I hadn’t known about this forecast, but I would have assumed the opposite. Before the war, Russia viewed the US as a waning power, losing in Afghanistan, not-winning in Syria, Libya and Venezuela, riven by internecine strife and paralyzed by self-doubt. Meanwhile, Russia’s confidence in its comeback rose with each cost-effective success in Crimea, Syria, and Kazakhstan.Now Russia knows how hollow its military was. And it knows the USA knows. And it knows that NATO hand-me-downs are emptying its once vast stockpiles of tanks and APCs. I assume it won’t recover the depth of its armour stocks in the near term (it doesn’t have the USSR’s state capacity or industrial base). The USA also doesn’t need to fight Russia. If Ukraine is doing this well, then Ukraine + Poland + Baltics would probably do just fine. I’d put this more around 6.5%.I think a Russian war with a European state has probably increased simply based on Russia’s revealed willingness to go to war, in conjunction with forecasters predicting a good chance (20%-24%) that the US and China will go to war over Taiwan. Russia may find such a conflict an opportunity to attempt to occupy a square mile of uninhabited Lithuanian forest to create a safe zone for ethnic Russian speakers and puncture the myth of NATO’s 5th article.Will there be a 'World War Three' before 2050? | MetaculusThe predicted probability to this question shifted by around 10%, from the 10-15% range to 20-25% after the war began. I assume this is mostly driven by Russia-NATO-initiated conflict. China-India conflict predictions have decreased from 30% pre-war to 17% before 2035 most recently. And China-US war predictions have stayed constant (20% before 2035). So the rise must stem from the increase in the likelihood of a Russia-US war or by other major powers between 2035 and 2050. I don’t think I agree with the community here, as I explained previously.Will China get involved in the Russo-Ukrainian conflict by 2024?China hasn’t involved itself in the Ukraine war yet. And the prospects for its involvement seem like they should dim over time — surely it would have acted or given more hints that it was considering doing so by now?This makes me more confused about whether China committed to a military confrontation with the West. If it has, and China believed it had more military-industrial capacity than the West (which is what I’d believe if I was China), then now is the perfect opportunity to drain Western stocks ...]]>
Sun, 28 May 2023 05:23:21 +0000 EA - Has Russia’s Invasion of Ukraine Changed Your Mind? by JoelMcGuire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Has Russia’s Invasion of Ukraine Changed Your Mind?, published by JoelMcGuire on May 27, 2023 on The Effective Altruism Forum.[This post was written in a purely personal capacity, etc.]I recently had several long conversations with a friend about whether my regular doom-scrolling regarding the Ukraine war had sharpened my understanding of the world or mostly been a waste of time.Unfortunately, it seems more of the latter. When my mind has changed, it's been slight, and it’s unclear what actions my new views justify. Personally, this means I should probably go back to thinking about happiness and RCTs.I set out what I think are some relevant questions Russia's invasion of Ukraine could change your mind about and provide some sloppy commentary, but I'm interested to know what other EAs and rationalists think about this issue.High-level questionsLikelihood of great power conflictIt seems like the Metaculus forecasting community is now more worried about great power conflict than it was before the war. I assume the invasion of Ukraine is a causal factor. But I feel oddly reassured about this, like the world was ruled by drunks who sobered up when the knives came out, reminded that knives are sharp and bodily fluids are precious.After the invasion, the prospect of a Russia-USA War shifted from a 5-15% to a 25% chance before 2050. I hadn’t known about this forecast, but I would have assumed the opposite. Before the war, Russia viewed the US as a waning power, losing in Afghanistan, not-winning in Syria, Libya and Venezuela, riven by internecine strife and paralyzed by self-doubt. Meanwhile, Russia’s confidence in its comeback rose with each cost-effective success in Crimea, Syria, and Kazakhstan.Now Russia knows how hollow its military was. And it knows the USA knows. And it knows that NATO hand-me-downs are emptying its once vast stockpiles of tanks and APCs. I assume it won’t recover the depth of its armour stocks in the near term (it doesn’t have the USSR’s state capacity or industrial base). The USA also doesn’t need to fight Russia. If Ukraine is doing this well, then Ukraine + Poland + Baltics would probably do just fine. I’d put this more around 6.5%.I think a Russian war with a European state has probably increased simply based on Russia’s revealed willingness to go to war, in conjunction with forecasters predicting a good chance (20%-24%) that the US and China will go to war over Taiwan. Russia may find such a conflict an opportunity to attempt to occupy a square mile of uninhabited Lithuanian forest to create a safe zone for ethnic Russian speakers and puncture the myth of NATO’s 5th article.Will there be a 'World War Three' before 2050? | MetaculusThe predicted probability to this question shifted by around 10%, from the 10-15% range to 20-25% after the war began. I assume this is mostly driven by Russia-NATO-initiated conflict. China-India conflict predictions have decreased from 30% pre-war to 17% before 2035 most recently. And China-US war predictions have stayed constant (20% before 2035). So the rise must stem from the increase in the likelihood of a Russia-US war or by other major powers between 2035 and 2050. I don’t think I agree with the community here, as I explained previously.Will China get involved in the Russo-Ukrainian conflict by 2024?China hasn’t involved itself in the Ukraine war yet. And the prospects for its involvement seem like they should dim over time — surely it would have acted or given more hints that it was considering doing so by now?This makes me more confused about whether China committed to a military confrontation with the West. If it has, and China believed it had more military-industrial capacity than the West (which is what I’d believe if I was China), then now is the perfect opportunity to drain Western stocks ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Has Russia’s Invasion of Ukraine Changed Your Mind?, published by JoelMcGuire on May 27, 2023 on The Effective Altruism Forum.[This post was written in a purely personal capacity, etc.]I recently had several long conversations with a friend about whether my regular doom-scrolling regarding the Ukraine war had sharpened my understanding of the world or mostly been a waste of time.Unfortunately, it seems more of the latter. When my mind has changed, it's been slight, and it’s unclear what actions my new views justify. Personally, this means I should probably go back to thinking about happiness and RCTs.I set out what I think are some relevant questions Russia's invasion of Ukraine could change your mind about and provide some sloppy commentary, but I'm interested to know what other EAs and rationalists think about this issue.High-level questionsLikelihood of great power conflictIt seems like the Metaculus forecasting community is now more worried about great power conflict than it was before the war. I assume the invasion of Ukraine is a causal factor. But I feel oddly reassured about this, like the world was ruled by drunks who sobered up when the knives came out, reminded that knives are sharp and bodily fluids are precious.After the invasion, the prospect of a Russia-USA War shifted from a 5-15% to a 25% chance before 2050. I hadn’t known about this forecast, but I would have assumed the opposite. Before the war, Russia viewed the US as a waning power, losing in Afghanistan, not-winning in Syria, Libya and Venezuela, riven by internecine strife and paralyzed by self-doubt. Meanwhile, Russia’s confidence in its comeback rose with each cost-effective success in Crimea, Syria, and Kazakhstan.Now Russia knows how hollow its military was. And it knows the USA knows. And it knows that NATO hand-me-downs are emptying its once vast stockpiles of tanks and APCs. I assume it won’t recover the depth of its armour stocks in the near term (it doesn’t have the USSR’s state capacity or industrial base). The USA also doesn’t need to fight Russia. If Ukraine is doing this well, then Ukraine + Poland + Baltics would probably do just fine. I’d put this more around 6.5%.I think a Russian war with a European state has probably increased simply based on Russia’s revealed willingness to go to war, in conjunction with forecasters predicting a good chance (20%-24%) that the US and China will go to war over Taiwan. Russia may find such a conflict an opportunity to attempt to occupy a square mile of uninhabited Lithuanian forest to create a safe zone for ethnic Russian speakers and puncture the myth of NATO’s 5th article.Will there be a 'World War Three' before 2050? | MetaculusThe predicted probability to this question shifted by around 10%, from the 10-15% range to 20-25% after the war began. I assume this is mostly driven by Russia-NATO-initiated conflict. China-India conflict predictions have decreased from 30% pre-war to 17% before 2035 most recently. And China-US war predictions have stayed constant (20% before 2035). So the rise must stem from the increase in the likelihood of a Russia-US war or by other major powers between 2035 and 2050. I don’t think I agree with the community here, as I explained previously.Will China get involved in the Russo-Ukrainian conflict by 2024?China hasn’t involved itself in the Ukraine war yet. And the prospects for its involvement seem like they should dim over time — surely it would have acted or given more hints that it was considering doing so by now?This makes me more confused about whether China committed to a military confrontation with the West. If it has, and China believed it had more military-industrial capacity than the West (which is what I’d believe if I was China), then now is the perfect opportunity to drain Western stocks ...]]>
JoelMcGuire https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:30 None full 6089
7hMgK4hciBhXmBRnW_NL_EA_EA EA - Do you think decreasing the consumption of animals is good/bad? Think again? by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do you think decreasing the consumption of animals is good/bad? Think again?, published by Vasco Grilo on May 27, 2023 on The Effective Altruism Forum.QuestionDo you think decreasing the consumption of animals is good/bad? For which groups of farmed animals?ContextI stopped eating animals 4 years ago mostly to decrease the suffering of farmed animals. I am glad I did that based on the information I had at the time. However, I am no longer confident that decreasing the consumption of animals is good/bad. It has many effects:Decreasing the number of factory-farmed animals.I believe this would be good for chickens, since I expect them to have negative lives. I estimated the lives of broilers in conventional and reformed scenarios are, per unit time, 2.58 and 0.574 times as bad as human lives are good (see 2nd table). However, these numbers are not resilient:On the one hand, if I consider disabling pain is 10 (instead of 100) times as bad as hurtful pain, the lives of broilers in conventional and reformed scenarios would be, per unit time, 2.73 % and 26.2 % as good as human lives. Nevertheless, disabling pain being only 10 times as bad as hurtful pain seems quite implausible if one thinks being alive is as good as hurtful pain is bad.On the other hand, I may be overestimating broilers’ pleasurable experiences.I guess the same applies to other species, but I honestly do not know. Figuring out whether farmed shrimps and prawns have good/bad lives seems especially important, since they are arguably the driver for the welfare of farmed animals.Decreasing the production of animal feed, and therefore reducing crop area, which tends to:Increase the population of wild animals, which I do not know whether it is good or bad. I think the welfare of terrestrial wild animals is driven by that of terrestrial arthropods, but I am very uncertain about whether they have good or bad lives. I recommend checking this preprint from Heather Browning and Walter Weit for an overview of the welfare status of wild animals.Decrease the resilience against food shocks. As I wrote here:The smaller the population of (farmed) animals, the less animal feed could be directed to humans to mitigate the food shocks caused by the lower temperature, light and humidity during abrupt sunlight reduction scenarios (ASRS), which can be a nuclear winter, volcanic winter, or impact winter.Because producing calories from animals is much less efficient than from plants, decreasing the number of animals results in a smaller area of crops.So the agricultural system would be less oversized (i.e. it would have a smaller safety margin), and scaling up food production to counter the lower yields during an ASRS would be harder.To maximise calorie supply, farmed animals should stop being fed and quickly be culled after the onset of an ASRS. This would decrease the starvation of humans and farmed animals, but these would tend to experience more severe pain for a faster slaughtering rate.As a side note, increasing food waste would also increase resilience against food shocks, as long as it can be promptly cut down. One can even argue humanity should increase (easily reducible) food waste instead of the population of farmed animals. However, I suspect the latter is more tractable.Increase biodiversity, which arguably increases existential risk due to ecosystem collapse (see Kareiva 2018).Decreasing greenhouse gas emissions, and therefore decreasing global warming.I have little idea whether this is good or bad.Firstly, it is quite unclear whether climate change is good or bad for wild animals.Secondly, although more global warming makes climate change worse for humans, I believe it mitigates the food shocks caused by ASRSs. Accounting for both of these effects, I estimated the optimal median global warming i...]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/7hMgK4hciBhXmBRnW/do-you-think-decreasing-the-consumption-of-animals-is-good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do you think decreasing the consumption of animals is good/bad? Think again?, published by Vasco Grilo on May 27, 2023 on The Effective Altruism Forum.QuestionDo you think decreasing the consumption of animals is good/bad? For which groups of farmed animals?ContextI stopped eating animals 4 years ago mostly to decrease the suffering of farmed animals. I am glad I did that based on the information I had at the time. However, I am no longer confident that decreasing the consumption of animals is good/bad. It has many effects:Decreasing the number of factory-farmed animals.I believe this would be good for chickens, since I expect them to have negative lives. I estimated the lives of broilers in conventional and reformed scenarios are, per unit time, 2.58 and 0.574 times as bad as human lives are good (see 2nd table). However, these numbers are not resilient:On the one hand, if I consider disabling pain is 10 (instead of 100) times as bad as hurtful pain, the lives of broilers in conventional and reformed scenarios would be, per unit time, 2.73 % and 26.2 % as good as human lives. Nevertheless, disabling pain being only 10 times as bad as hurtful pain seems quite implausible if one thinks being alive is as good as hurtful pain is bad.On the other hand, I may be overestimating broilers’ pleasurable experiences.I guess the same applies to other species, but I honestly do not know. Figuring out whether farmed shrimps and prawns have good/bad lives seems especially important, since they are arguably the driver for the welfare of farmed animals.Decreasing the production of animal feed, and therefore reducing crop area, which tends to:Increase the population of wild animals, which I do not know whether it is good or bad. I think the welfare of terrestrial wild animals is driven by that of terrestrial arthropods, but I am very uncertain about whether they have good or bad lives. I recommend checking this preprint from Heather Browning and Walter Weit for an overview of the welfare status of wild animals.Decrease the resilience against food shocks. As I wrote here:The smaller the population of (farmed) animals, the less animal feed could be directed to humans to mitigate the food shocks caused by the lower temperature, light and humidity during abrupt sunlight reduction scenarios (ASRS), which can be a nuclear winter, volcanic winter, or impact winter.Because producing calories from animals is much less efficient than from plants, decreasing the number of animals results in a smaller area of crops.So the agricultural system would be less oversized (i.e. it would have a smaller safety margin), and scaling up food production to counter the lower yields during an ASRS would be harder.To maximise calorie supply, farmed animals should stop being fed and quickly be culled after the onset of an ASRS. This would decrease the starvation of humans and farmed animals, but these would tend to experience more severe pain for a faster slaughtering rate.As a side note, increasing food waste would also increase resilience against food shocks, as long as it can be promptly cut down. One can even argue humanity should increase (easily reducible) food waste instead of the population of farmed animals. However, I suspect the latter is more tractable.Increase biodiversity, which arguably increases existential risk due to ecosystem collapse (see Kareiva 2018).Decreasing greenhouse gas emissions, and therefore decreasing global warming.I have little idea whether this is good or bad.Firstly, it is quite unclear whether climate change is good or bad for wild animals.Secondly, although more global warming makes climate change worse for humans, I believe it mitigates the food shocks caused by ASRSs. Accounting for both of these effects, I estimated the optimal median global warming i...]]>
Sat, 27 May 2023 21:15:28 +0000 EA - Do you think decreasing the consumption of animals is good/bad? Think again? by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do you think decreasing the consumption of animals is good/bad? Think again?, published by Vasco Grilo on May 27, 2023 on The Effective Altruism Forum.QuestionDo you think decreasing the consumption of animals is good/bad? For which groups of farmed animals?ContextI stopped eating animals 4 years ago mostly to decrease the suffering of farmed animals. I am glad I did that based on the information I had at the time. However, I am no longer confident that decreasing the consumption of animals is good/bad. It has many effects:Decreasing the number of factory-farmed animals.I believe this would be good for chickens, since I expect them to have negative lives. I estimated the lives of broilers in conventional and reformed scenarios are, per unit time, 2.58 and 0.574 times as bad as human lives are good (see 2nd table). However, these numbers are not resilient:On the one hand, if I consider disabling pain is 10 (instead of 100) times as bad as hurtful pain, the lives of broilers in conventional and reformed scenarios would be, per unit time, 2.73 % and 26.2 % as good as human lives. Nevertheless, disabling pain being only 10 times as bad as hurtful pain seems quite implausible if one thinks being alive is as good as hurtful pain is bad.On the other hand, I may be overestimating broilers’ pleasurable experiences.I guess the same applies to other species, but I honestly do not know. Figuring out whether farmed shrimps and prawns have good/bad lives seems especially important, since they are arguably the driver for the welfare of farmed animals.Decreasing the production of animal feed, and therefore reducing crop area, which tends to:Increase the population of wild animals, which I do not know whether it is good or bad. I think the welfare of terrestrial wild animals is driven by that of terrestrial arthropods, but I am very uncertain about whether they have good or bad lives. I recommend checking this preprint from Heather Browning and Walter Weit for an overview of the welfare status of wild animals.Decrease the resilience against food shocks. As I wrote here:The smaller the population of (farmed) animals, the less animal feed could be directed to humans to mitigate the food shocks caused by the lower temperature, light and humidity during abrupt sunlight reduction scenarios (ASRS), which can be a nuclear winter, volcanic winter, or impact winter.Because producing calories from animals is much less efficient than from plants, decreasing the number of animals results in a smaller area of crops.So the agricultural system would be less oversized (i.e. it would have a smaller safety margin), and scaling up food production to counter the lower yields during an ASRS would be harder.To maximise calorie supply, farmed animals should stop being fed and quickly be culled after the onset of an ASRS. This would decrease the starvation of humans and farmed animals, but these would tend to experience more severe pain for a faster slaughtering rate.As a side note, increasing food waste would also increase resilience against food shocks, as long as it can be promptly cut down. One can even argue humanity should increase (easily reducible) food waste instead of the population of farmed animals. However, I suspect the latter is more tractable.Increase biodiversity, which arguably increases existential risk due to ecosystem collapse (see Kareiva 2018).Decreasing greenhouse gas emissions, and therefore decreasing global warming.I have little idea whether this is good or bad.Firstly, it is quite unclear whether climate change is good or bad for wild animals.Secondly, although more global warming makes climate change worse for humans, I believe it mitigates the food shocks caused by ASRSs. Accounting for both of these effects, I estimated the optimal median global warming i...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do you think decreasing the consumption of animals is good/bad? Think again?, published by Vasco Grilo on May 27, 2023 on The Effective Altruism Forum.QuestionDo you think decreasing the consumption of animals is good/bad? For which groups of farmed animals?ContextI stopped eating animals 4 years ago mostly to decrease the suffering of farmed animals. I am glad I did that based on the information I had at the time. However, I am no longer confident that decreasing the consumption of animals is good/bad. It has many effects:Decreasing the number of factory-farmed animals.I believe this would be good for chickens, since I expect them to have negative lives. I estimated the lives of broilers in conventional and reformed scenarios are, per unit time, 2.58 and 0.574 times as bad as human lives are good (see 2nd table). However, these numbers are not resilient:On the one hand, if I consider disabling pain is 10 (instead of 100) times as bad as hurtful pain, the lives of broilers in conventional and reformed scenarios would be, per unit time, 2.73 % and 26.2 % as good as human lives. Nevertheless, disabling pain being only 10 times as bad as hurtful pain seems quite implausible if one thinks being alive is as good as hurtful pain is bad.On the other hand, I may be overestimating broilers’ pleasurable experiences.I guess the same applies to other species, but I honestly do not know. Figuring out whether farmed shrimps and prawns have good/bad lives seems especially important, since they are arguably the driver for the welfare of farmed animals.Decreasing the production of animal feed, and therefore reducing crop area, which tends to:Increase the population of wild animals, which I do not know whether it is good or bad. I think the welfare of terrestrial wild animals is driven by that of terrestrial arthropods, but I am very uncertain about whether they have good or bad lives. I recommend checking this preprint from Heather Browning and Walter Weit for an overview of the welfare status of wild animals.Decrease the resilience against food shocks. As I wrote here:The smaller the population of (farmed) animals, the less animal feed could be directed to humans to mitigate the food shocks caused by the lower temperature, light and humidity during abrupt sunlight reduction scenarios (ASRS), which can be a nuclear winter, volcanic winter, or impact winter.Because producing calories from animals is much less efficient than from plants, decreasing the number of animals results in a smaller area of crops.So the agricultural system would be less oversized (i.e. it would have a smaller safety margin), and scaling up food production to counter the lower yields during an ASRS would be harder.To maximise calorie supply, farmed animals should stop being fed and quickly be culled after the onset of an ASRS. This would decrease the starvation of humans and farmed animals, but these would tend to experience more severe pain for a faster slaughtering rate.As a side note, increasing food waste would also increase resilience against food shocks, as long as it can be promptly cut down. One can even argue humanity should increase (easily reducible) food waste instead of the population of farmed animals. However, I suspect the latter is more tractable.Increase biodiversity, which arguably increases existential risk due to ecosystem collapse (see Kareiva 2018).Decreasing greenhouse gas emissions, and therefore decreasing global warming.I have little idea whether this is good or bad.Firstly, it is quite unclear whether climate change is good or bad for wild animals.Secondly, although more global warming makes climate change worse for humans, I believe it mitigates the food shocks caused by ASRSs. Accounting for both of these effects, I estimated the optimal median global warming i...]]>
Vasco Grilo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:03 None full 6086
7yjd2wJjSqbzz3dZX_NL_EA_EA EA - By failing to take serious AI action, the US could be in violation of its international law obligations by Cecil Abungu Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: By failing to take serious AI action, the US could be in violation of its international law obligations, published by Cecil Abungu on May 27, 2023 on The Effective Altruism Forum.“Long-term risks remain, including the existential risk associated with the development of artificial general intelligence through self-modifying AI or other means”.2023 Update to the US National Artificial Intelligence Research and Development Strategic Plan.IntroductionThe United States is yet to take serious steps to govern the licensing, setting up, operation, security and supervision of AI. In this piece I suggest that this could be in violation of its obligations under Article 6(1) of the International Covenant on Civil and Political Rights (ICCPR). By most accounts, the US is the key country in control of how quickly we have artificial general intelligence (AGI), a goal that companies like OpenAI have been very open about pursuing. The fact that AGI could carry risk to human life has been detailed in various fora and I won’t belabor that point. I present this legal argument so that those trying to get the US government to take action have additional armor to call on.A. Some important premisesThe US signed and ratified the ICCPR on June 8 1992.[1] While it has not ratified the Optional Protocol allowing for individual complaints against it, it did submit to the competence of the Human Rights Committee (the body charged with interpreting the ICCPR) where the party suing is another state. This means that although individuals cannot bring action against the US for ICCPR violations, other states can. As is the case for domestic law, provisions of treaties are given real meaning when they’re interpreted by courts or other bodies with the specific legal mandate to do so. Most of this usually happens in a pretty siloed manner, but international human rights law is famously non-siloed. The interpretive bodies determining international human rights law cases regularly borrow from each other when trying to make meaning of the different provisions before them.This piece is focused on what the ICCPR demands, but I will also discuss some decisions from other regional human rights courts because of the cross fertilization that I’ve just described. Before understanding my argument, they’re a few crucial premises you have to appreciate. I will discuss them next.(i) All major human rights treaties, including the ICCPR, impose on states a duty to protect lifeIn addition to the ICCPR, the African Charter, European Convention and American Convention have all give states a duty to protect life.[2] As you might imagine, the existence of the actual duty is generally undisputed. It is when we get to the specific content of the duty where things become murky.(ii) A state’s duty to protect life under the ICCPR can extend to citizens of other countriesThe Human Rights Committee (quick reminder: this is the body with the legal mandate to interpret the ICCPR) has made it clear that this duty to protect under the ICCPR extends not only to activities which are conducted within the territory of the state being challenged but also to those conducted in other places – so long as the activities could have a direct and reasonably foreseeable impact on persons outside the state’s territory. The fact that the US has vehemently disputed this understanding[3] does not mean it is excused from abiding by it.(iii) States’ duties to protect life under the ICCPR require attention to the activities of corporate entities headquartered in their countriesEven though the US protested the move,[4] the Human Rights Committee has been clear that the duty to protect extends to protecting individuals from violations by private persons or entities,[5] including activities by corporate entities based in their territory or subjec...]]>
Cecil Abungu https://forum.effectivealtruism.org/posts/7yjd2wJjSqbzz3dZX/by-failing-to-take-serious-ai-action-the-us-could-be-in Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: By failing to take serious AI action, the US could be in violation of its international law obligations, published by Cecil Abungu on May 27, 2023 on The Effective Altruism Forum.“Long-term risks remain, including the existential risk associated with the development of artificial general intelligence through self-modifying AI or other means”.2023 Update to the US National Artificial Intelligence Research and Development Strategic Plan.IntroductionThe United States is yet to take serious steps to govern the licensing, setting up, operation, security and supervision of AI. In this piece I suggest that this could be in violation of its obligations under Article 6(1) of the International Covenant on Civil and Political Rights (ICCPR). By most accounts, the US is the key country in control of how quickly we have artificial general intelligence (AGI), a goal that companies like OpenAI have been very open about pursuing. The fact that AGI could carry risk to human life has been detailed in various fora and I won’t belabor that point. I present this legal argument so that those trying to get the US government to take action have additional armor to call on.A. Some important premisesThe US signed and ratified the ICCPR on June 8 1992.[1] While it has not ratified the Optional Protocol allowing for individual complaints against it, it did submit to the competence of the Human Rights Committee (the body charged with interpreting the ICCPR) where the party suing is another state. This means that although individuals cannot bring action against the US for ICCPR violations, other states can. As is the case for domestic law, provisions of treaties are given real meaning when they’re interpreted by courts or other bodies with the specific legal mandate to do so. Most of this usually happens in a pretty siloed manner, but international human rights law is famously non-siloed. The interpretive bodies determining international human rights law cases regularly borrow from each other when trying to make meaning of the different provisions before them.This piece is focused on what the ICCPR demands, but I will also discuss some decisions from other regional human rights courts because of the cross fertilization that I’ve just described. Before understanding my argument, they’re a few crucial premises you have to appreciate. I will discuss them next.(i) All major human rights treaties, including the ICCPR, impose on states a duty to protect lifeIn addition to the ICCPR, the African Charter, European Convention and American Convention have all give states a duty to protect life.[2] As you might imagine, the existence of the actual duty is generally undisputed. It is when we get to the specific content of the duty where things become murky.(ii) A state’s duty to protect life under the ICCPR can extend to citizens of other countriesThe Human Rights Committee (quick reminder: this is the body with the legal mandate to interpret the ICCPR) has made it clear that this duty to protect under the ICCPR extends not only to activities which are conducted within the territory of the state being challenged but also to those conducted in other places – so long as the activities could have a direct and reasonably foreseeable impact on persons outside the state’s territory. The fact that the US has vehemently disputed this understanding[3] does not mean it is excused from abiding by it.(iii) States’ duties to protect life under the ICCPR require attention to the activities of corporate entities headquartered in their countriesEven though the US protested the move,[4] the Human Rights Committee has been clear that the duty to protect extends to protecting individuals from violations by private persons or entities,[5] including activities by corporate entities based in their territory or subjec...]]>
Sat, 27 May 2023 15:57:35 +0000 EA - By failing to take serious AI action, the US could be in violation of its international law obligations by Cecil Abungu Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: By failing to take serious AI action, the US could be in violation of its international law obligations, published by Cecil Abungu on May 27, 2023 on The Effective Altruism Forum.“Long-term risks remain, including the existential risk associated with the development of artificial general intelligence through self-modifying AI or other means”.2023 Update to the US National Artificial Intelligence Research and Development Strategic Plan.IntroductionThe United States is yet to take serious steps to govern the licensing, setting up, operation, security and supervision of AI. In this piece I suggest that this could be in violation of its obligations under Article 6(1) of the International Covenant on Civil and Political Rights (ICCPR). By most accounts, the US is the key country in control of how quickly we have artificial general intelligence (AGI), a goal that companies like OpenAI have been very open about pursuing. The fact that AGI could carry risk to human life has been detailed in various fora and I won’t belabor that point. I present this legal argument so that those trying to get the US government to take action have additional armor to call on.A. Some important premisesThe US signed and ratified the ICCPR on June 8 1992.[1] While it has not ratified the Optional Protocol allowing for individual complaints against it, it did submit to the competence of the Human Rights Committee (the body charged with interpreting the ICCPR) where the party suing is another state. This means that although individuals cannot bring action against the US for ICCPR violations, other states can. As is the case for domestic law, provisions of treaties are given real meaning when they’re interpreted by courts or other bodies with the specific legal mandate to do so. Most of this usually happens in a pretty siloed manner, but international human rights law is famously non-siloed. The interpretive bodies determining international human rights law cases regularly borrow from each other when trying to make meaning of the different provisions before them.This piece is focused on what the ICCPR demands, but I will also discuss some decisions from other regional human rights courts because of the cross fertilization that I’ve just described. Before understanding my argument, they’re a few crucial premises you have to appreciate. I will discuss them next.(i) All major human rights treaties, including the ICCPR, impose on states a duty to protect lifeIn addition to the ICCPR, the African Charter, European Convention and American Convention have all give states a duty to protect life.[2] As you might imagine, the existence of the actual duty is generally undisputed. It is when we get to the specific content of the duty where things become murky.(ii) A state’s duty to protect life under the ICCPR can extend to citizens of other countriesThe Human Rights Committee (quick reminder: this is the body with the legal mandate to interpret the ICCPR) has made it clear that this duty to protect under the ICCPR extends not only to activities which are conducted within the territory of the state being challenged but also to those conducted in other places – so long as the activities could have a direct and reasonably foreseeable impact on persons outside the state’s territory. The fact that the US has vehemently disputed this understanding[3] does not mean it is excused from abiding by it.(iii) States’ duties to protect life under the ICCPR require attention to the activities of corporate entities headquartered in their countriesEven though the US protested the move,[4] the Human Rights Committee has been clear that the duty to protect extends to protecting individuals from violations by private persons or entities,[5] including activities by corporate entities based in their territory or subjec...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: By failing to take serious AI action, the US could be in violation of its international law obligations, published by Cecil Abungu on May 27, 2023 on The Effective Altruism Forum.“Long-term risks remain, including the existential risk associated with the development of artificial general intelligence through self-modifying AI or other means”.2023 Update to the US National Artificial Intelligence Research and Development Strategic Plan.IntroductionThe United States is yet to take serious steps to govern the licensing, setting up, operation, security and supervision of AI. In this piece I suggest that this could be in violation of its obligations under Article 6(1) of the International Covenant on Civil and Political Rights (ICCPR). By most accounts, the US is the key country in control of how quickly we have artificial general intelligence (AGI), a goal that companies like OpenAI have been very open about pursuing. The fact that AGI could carry risk to human life has been detailed in various fora and I won’t belabor that point. I present this legal argument so that those trying to get the US government to take action have additional armor to call on.A. Some important premisesThe US signed and ratified the ICCPR on June 8 1992.[1] While it has not ratified the Optional Protocol allowing for individual complaints against it, it did submit to the competence of the Human Rights Committee (the body charged with interpreting the ICCPR) where the party suing is another state. This means that although individuals cannot bring action against the US for ICCPR violations, other states can. As is the case for domestic law, provisions of treaties are given real meaning when they’re interpreted by courts or other bodies with the specific legal mandate to do so. Most of this usually happens in a pretty siloed manner, but international human rights law is famously non-siloed. The interpretive bodies determining international human rights law cases regularly borrow from each other when trying to make meaning of the different provisions before them.This piece is focused on what the ICCPR demands, but I will also discuss some decisions from other regional human rights courts because of the cross fertilization that I’ve just described. Before understanding my argument, they’re a few crucial premises you have to appreciate. I will discuss them next.(i) All major human rights treaties, including the ICCPR, impose on states a duty to protect lifeIn addition to the ICCPR, the African Charter, European Convention and American Convention have all give states a duty to protect life.[2] As you might imagine, the existence of the actual duty is generally undisputed. It is when we get to the specific content of the duty where things become murky.(ii) A state’s duty to protect life under the ICCPR can extend to citizens of other countriesThe Human Rights Committee (quick reminder: this is the body with the legal mandate to interpret the ICCPR) has made it clear that this duty to protect under the ICCPR extends not only to activities which are conducted within the territory of the state being challenged but also to those conducted in other places – so long as the activities could have a direct and reasonably foreseeable impact on persons outside the state’s territory. The fact that the US has vehemently disputed this understanding[3] does not mean it is excused from abiding by it.(iii) States’ duties to protect life under the ICCPR require attention to the activities of corporate entities headquartered in their countriesEven though the US protested the move,[4] the Human Rights Committee has been clear that the duty to protect extends to protecting individuals from violations by private persons or entities,[5] including activities by corporate entities based in their territory or subjec...]]>
Cecil Abungu https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 16:46 None full 6083
ZtiAwejFNRdQQ5TTh_NL_EA_EA EA - Co-found an incubator for independent AI Safety researchers! by Alexandra Bos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Co-found an incubator for independent AI Safety researchers!, published by Alexandra Bos on May 26, 2023 on The Effective Altruism Forum.Full-time, remoteAPPLY HEREDeadline: Thursday, June 8th (in your timezone)If your ideal job would be leading an impact-driven organization, being your own boss and pushing for a safer future with AI, you might be a great fit for co-founding Catalyze Impact!Below, you will find out more about Catalyze’s mission and focus, why co-founding this org would be high-impact, how to tell if you’re a good fit, and how to apply.In short, Catalyze will 1) help people become independent technical AI Safety researchers, and 2) deliver key support to independent AI Safety researchers so they can do their best work.If you think this non-profit’s work could be important, please like/upvote and share this message so that the right people get to see it.You can ask questions, register interest to potentially fund us, work with us, make use of our services in the future and share information here.Why support independent AI Safety researchers?Lots of people want to do AI Safety (AIS) research and are trying to get in a position where they can, yet only around 100-300 people worldwide are actually doing research in this crucial area. Why? Because there are almost no AIS researcher jobs available due to AIS research organizations facing difficult constraints to scaling up. Luckily there is another way to grow the research field: having more people do independent research (where a self-employed individual gets a grant, usually from a fund).There is, however, a key problem: becoming and being a good independent AIS researcher is currently very difficult. It requires a lot of qualities which have nothing to do with being able to do good research: you have to be proactive, pragmatic, social, good enough at fundraising, very good at self-management and willing to take major career risks. Catalyze Impact will take away a large part of the difficulties that come with being an independent researcher, thereby making it a suitable option for more people so they are empowered to do good AIS research.How will we help?This is the current design of the pilot - but you will help shape this further!1. Fundraising support> help promising individuals get funded to do research2. Peer support networks & mentor-matching> get feedback, receive mentorship, find collaborators, brainstorm and stay motivated rather than falling into isolation3. Accountability and coaching> have structure, stay motivated and productive4. Fiscal sponsorship: hiring funded independent researchers as ‘employees’> take away operational tasks which distract from research & help them build better career capital through institutional affiliationIn what ways would this be impactful?Alleviating a bottleneck for scaling the AIS research field by making independent research suitable for more people: it seems that we need a lot more people to be working on solving alignment. However, talented individuals who have invested in upskilling themselves to go do AIS research (e.g. SERI MATS graduates) are largely unable to secure research positions. This is oftentimes not because they are not capable enough of doing the research, but because there are simply too few positions available (see footnote). Because of this, many of these talented individuals are left with a few sub-optimal options. 1) try to do research/a PhD in a different academic field in hopes that it will make them a better AIS researcher in the future, 2) take a job working on AI capabilities (!), or 3) try to become an independent AIS researcher.For many people, independent research (i.e. without this incubator) is not a good & viable option because being an independent researcher brings a lot of difficulties with it and arran...]]>
Alexandra Bos https://forum.effectivealtruism.org/posts/ZtiAwejFNRdQQ5TTh/co-found-an-incubator-for-independent-ai-safety-researchers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Co-found an incubator for independent AI Safety researchers!, published by Alexandra Bos on May 26, 2023 on The Effective Altruism Forum.Full-time, remoteAPPLY HEREDeadline: Thursday, June 8th (in your timezone)If your ideal job would be leading an impact-driven organization, being your own boss and pushing for a safer future with AI, you might be a great fit for co-founding Catalyze Impact!Below, you will find out more about Catalyze’s mission and focus, why co-founding this org would be high-impact, how to tell if you’re a good fit, and how to apply.In short, Catalyze will 1) help people become independent technical AI Safety researchers, and 2) deliver key support to independent AI Safety researchers so they can do their best work.If you think this non-profit’s work could be important, please like/upvote and share this message so that the right people get to see it.You can ask questions, register interest to potentially fund us, work with us, make use of our services in the future and share information here.Why support independent AI Safety researchers?Lots of people want to do AI Safety (AIS) research and are trying to get in a position where they can, yet only around 100-300 people worldwide are actually doing research in this crucial area. Why? Because there are almost no AIS researcher jobs available due to AIS research organizations facing difficult constraints to scaling up. Luckily there is another way to grow the research field: having more people do independent research (where a self-employed individual gets a grant, usually from a fund).There is, however, a key problem: becoming and being a good independent AIS researcher is currently very difficult. It requires a lot of qualities which have nothing to do with being able to do good research: you have to be proactive, pragmatic, social, good enough at fundraising, very good at self-management and willing to take major career risks. Catalyze Impact will take away a large part of the difficulties that come with being an independent researcher, thereby making it a suitable option for more people so they are empowered to do good AIS research.How will we help?This is the current design of the pilot - but you will help shape this further!1. Fundraising support> help promising individuals get funded to do research2. Peer support networks & mentor-matching> get feedback, receive mentorship, find collaborators, brainstorm and stay motivated rather than falling into isolation3. Accountability and coaching> have structure, stay motivated and productive4. Fiscal sponsorship: hiring funded independent researchers as ‘employees’> take away operational tasks which distract from research & help them build better career capital through institutional affiliationIn what ways would this be impactful?Alleviating a bottleneck for scaling the AIS research field by making independent research suitable for more people: it seems that we need a lot more people to be working on solving alignment. However, talented individuals who have invested in upskilling themselves to go do AIS research (e.g. SERI MATS graduates) are largely unable to secure research positions. This is oftentimes not because they are not capable enough of doing the research, but because there are simply too few positions available (see footnote). Because of this, many of these talented individuals are left with a few sub-optimal options. 1) try to do research/a PhD in a different academic field in hopes that it will make them a better AIS researcher in the future, 2) take a job working on AI capabilities (!), or 3) try to become an independent AIS researcher.For many people, independent research (i.e. without this incubator) is not a good & viable option because being an independent researcher brings a lot of difficulties with it and arran...]]>
Sat, 27 May 2023 15:22:20 +0000 EA - Co-found an incubator for independent AI Safety researchers! by Alexandra Bos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Co-found an incubator for independent AI Safety researchers!, published by Alexandra Bos on May 26, 2023 on The Effective Altruism Forum.Full-time, remoteAPPLY HEREDeadline: Thursday, June 8th (in your timezone)If your ideal job would be leading an impact-driven organization, being your own boss and pushing for a safer future with AI, you might be a great fit for co-founding Catalyze Impact!Below, you will find out more about Catalyze’s mission and focus, why co-founding this org would be high-impact, how to tell if you’re a good fit, and how to apply.In short, Catalyze will 1) help people become independent technical AI Safety researchers, and 2) deliver key support to independent AI Safety researchers so they can do their best work.If you think this non-profit’s work could be important, please like/upvote and share this message so that the right people get to see it.You can ask questions, register interest to potentially fund us, work with us, make use of our services in the future and share information here.Why support independent AI Safety researchers?Lots of people want to do AI Safety (AIS) research and are trying to get in a position where they can, yet only around 100-300 people worldwide are actually doing research in this crucial area. Why? Because there are almost no AIS researcher jobs available due to AIS research organizations facing difficult constraints to scaling up. Luckily there is another way to grow the research field: having more people do independent research (where a self-employed individual gets a grant, usually from a fund).There is, however, a key problem: becoming and being a good independent AIS researcher is currently very difficult. It requires a lot of qualities which have nothing to do with being able to do good research: you have to be proactive, pragmatic, social, good enough at fundraising, very good at self-management and willing to take major career risks. Catalyze Impact will take away a large part of the difficulties that come with being an independent researcher, thereby making it a suitable option for more people so they are empowered to do good AIS research.How will we help?This is the current design of the pilot - but you will help shape this further!1. Fundraising support> help promising individuals get funded to do research2. Peer support networks & mentor-matching> get feedback, receive mentorship, find collaborators, brainstorm and stay motivated rather than falling into isolation3. Accountability and coaching> have structure, stay motivated and productive4. Fiscal sponsorship: hiring funded independent researchers as ‘employees’> take away operational tasks which distract from research & help them build better career capital through institutional affiliationIn what ways would this be impactful?Alleviating a bottleneck for scaling the AIS research field by making independent research suitable for more people: it seems that we need a lot more people to be working on solving alignment. However, talented individuals who have invested in upskilling themselves to go do AIS research (e.g. SERI MATS graduates) are largely unable to secure research positions. This is oftentimes not because they are not capable enough of doing the research, but because there are simply too few positions available (see footnote). Because of this, many of these talented individuals are left with a few sub-optimal options. 1) try to do research/a PhD in a different academic field in hopes that it will make them a better AIS researcher in the future, 2) take a job working on AI capabilities (!), or 3) try to become an independent AIS researcher.For many people, independent research (i.e. without this incubator) is not a good & viable option because being an independent researcher brings a lot of difficulties with it and arran...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Co-found an incubator for independent AI Safety researchers!, published by Alexandra Bos on May 26, 2023 on The Effective Altruism Forum.Full-time, remoteAPPLY HEREDeadline: Thursday, June 8th (in your timezone)If your ideal job would be leading an impact-driven organization, being your own boss and pushing for a safer future with AI, you might be a great fit for co-founding Catalyze Impact!Below, you will find out more about Catalyze’s mission and focus, why co-founding this org would be high-impact, how to tell if you’re a good fit, and how to apply.In short, Catalyze will 1) help people become independent technical AI Safety researchers, and 2) deliver key support to independent AI Safety researchers so they can do their best work.If you think this non-profit’s work could be important, please like/upvote and share this message so that the right people get to see it.You can ask questions, register interest to potentially fund us, work with us, make use of our services in the future and share information here.Why support independent AI Safety researchers?Lots of people want to do AI Safety (AIS) research and are trying to get in a position where they can, yet only around 100-300 people worldwide are actually doing research in this crucial area. Why? Because there are almost no AIS researcher jobs available due to AIS research organizations facing difficult constraints to scaling up. Luckily there is another way to grow the research field: having more people do independent research (where a self-employed individual gets a grant, usually from a fund).There is, however, a key problem: becoming and being a good independent AIS researcher is currently very difficult. It requires a lot of qualities which have nothing to do with being able to do good research: you have to be proactive, pragmatic, social, good enough at fundraising, very good at self-management and willing to take major career risks. Catalyze Impact will take away a large part of the difficulties that come with being an independent researcher, thereby making it a suitable option for more people so they are empowered to do good AIS research.How will we help?This is the current design of the pilot - but you will help shape this further!1. Fundraising support> help promising individuals get funded to do research2. Peer support networks & mentor-matching> get feedback, receive mentorship, find collaborators, brainstorm and stay motivated rather than falling into isolation3. Accountability and coaching> have structure, stay motivated and productive4. Fiscal sponsorship: hiring funded independent researchers as ‘employees’> take away operational tasks which distract from research & help them build better career capital through institutional affiliationIn what ways would this be impactful?Alleviating a bottleneck for scaling the AIS research field by making independent research suitable for more people: it seems that we need a lot more people to be working on solving alignment. However, talented individuals who have invested in upskilling themselves to go do AIS research (e.g. SERI MATS graduates) are largely unable to secure research positions. This is oftentimes not because they are not capable enough of doing the research, but because there are simply too few positions available (see footnote). Because of this, many of these talented individuals are left with a few sub-optimal options. 1) try to do research/a PhD in a different academic field in hopes that it will make them a better AIS researcher in the future, 2) take a job working on AI capabilities (!), or 3) try to become an independent AIS researcher.For many people, independent research (i.e. without this incubator) is not a good & viable option because being an independent researcher brings a lot of difficulties with it and arran...]]>
Alexandra Bos https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:08 None full 6082
5wwcMr8tDqCwZrDGM_NL_EA_EA EA - [Linkpost] Longtermists Are Pushing a New Cold War With China by Mohammad Ismam Huda Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Longtermists Are Pushing a New Cold War With China, published by Mohammad Ismam Huda on May 27, 2023 on The Effective Altruism Forum.Jacob Davis, a writer for the socialist political magazine Jacobin, raises an interesting concern about how current longermist initiatives in AI Safety are in his assessment escalating tensions between the US and China. This highlights a conundrum for the Effective Altruism movement which seeks to advance both AI Safety and avoid a great power conflict between the US and China.This is not the first time this conundrum has been raised which has been explored on the forum previously by Stephen Clare.The key points Davis asserts are that:Longtermists have been key players in President Biden’s choice last October to place heavy controls on semiconductor exports.Key longtermist figures advancing export controls and hawkish policies against China include former Google CEO Eric Schmidt (through Schmidt Futures and the longtermist political fund Future Forward PAC), former congressional candidate and FHI researcher Carrick Flynn, as well as other longtermists in key positions at Gerogetown Center for Security and Emerging Technology and the RAND Corporation.Export controls have failed to limit China's AI research, but have wrought havoc on global supply chains and seen as protectionist in some circles.I hope this linkpost opens up a debate about the merits and weaknesses of current strategies and views in longtermist circles.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mohammad Ismam Huda https://forum.effectivealtruism.org/posts/5wwcMr8tDqCwZrDGM/linkpost-longtermists-are-pushing-a-new-cold-war-with-china Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Longtermists Are Pushing a New Cold War With China, published by Mohammad Ismam Huda on May 27, 2023 on The Effective Altruism Forum.Jacob Davis, a writer for the socialist political magazine Jacobin, raises an interesting concern about how current longermist initiatives in AI Safety are in his assessment escalating tensions between the US and China. This highlights a conundrum for the Effective Altruism movement which seeks to advance both AI Safety and avoid a great power conflict between the US and China.This is not the first time this conundrum has been raised which has been explored on the forum previously by Stephen Clare.The key points Davis asserts are that:Longtermists have been key players in President Biden’s choice last October to place heavy controls on semiconductor exports.Key longtermist figures advancing export controls and hawkish policies against China include former Google CEO Eric Schmidt (through Schmidt Futures and the longtermist political fund Future Forward PAC), former congressional candidate and FHI researcher Carrick Flynn, as well as other longtermists in key positions at Gerogetown Center for Security and Emerging Technology and the RAND Corporation.Export controls have failed to limit China's AI research, but have wrought havoc on global supply chains and seen as protectionist in some circles.I hope this linkpost opens up a debate about the merits and weaknesses of current strategies and views in longtermist circles.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 27 May 2023 15:21:54 +0000 EA - [Linkpost] Longtermists Are Pushing a New Cold War With China by Mohammad Ismam Huda Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Longtermists Are Pushing a New Cold War With China, published by Mohammad Ismam Huda on May 27, 2023 on The Effective Altruism Forum.Jacob Davis, a writer for the socialist political magazine Jacobin, raises an interesting concern about how current longermist initiatives in AI Safety are in his assessment escalating tensions between the US and China. This highlights a conundrum for the Effective Altruism movement which seeks to advance both AI Safety and avoid a great power conflict between the US and China.This is not the first time this conundrum has been raised which has been explored on the forum previously by Stephen Clare.The key points Davis asserts are that:Longtermists have been key players in President Biden’s choice last October to place heavy controls on semiconductor exports.Key longtermist figures advancing export controls and hawkish policies against China include former Google CEO Eric Schmidt (through Schmidt Futures and the longtermist political fund Future Forward PAC), former congressional candidate and FHI researcher Carrick Flynn, as well as other longtermists in key positions at Gerogetown Center for Security and Emerging Technology and the RAND Corporation.Export controls have failed to limit China's AI research, but have wrought havoc on global supply chains and seen as protectionist in some circles.I hope this linkpost opens up a debate about the merits and weaknesses of current strategies and views in longtermist circles.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Longtermists Are Pushing a New Cold War With China, published by Mohammad Ismam Huda on May 27, 2023 on The Effective Altruism Forum.Jacob Davis, a writer for the socialist political magazine Jacobin, raises an interesting concern about how current longermist initiatives in AI Safety are in his assessment escalating tensions between the US and China. This highlights a conundrum for the Effective Altruism movement which seeks to advance both AI Safety and avoid a great power conflict between the US and China.This is not the first time this conundrum has been raised which has been explored on the forum previously by Stephen Clare.The key points Davis asserts are that:Longtermists have been key players in President Biden’s choice last October to place heavy controls on semiconductor exports.Key longtermist figures advancing export controls and hawkish policies against China include former Google CEO Eric Schmidt (through Schmidt Futures and the longtermist political fund Future Forward PAC), former congressional candidate and FHI researcher Carrick Flynn, as well as other longtermists in key positions at Gerogetown Center for Security and Emerging Technology and the RAND Corporation.Export controls have failed to limit China's AI research, but have wrought havoc on global supply chains and seen as protectionist in some circles.I hope this linkpost opens up a debate about the merits and weaknesses of current strategies and views in longtermist circles.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mohammad Ismam Huda https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:32 None full 6081
kuopGotdCWeNCDpWi_NL_EA_EA EA - How to evaluate relative impact in high-uncertainty contexts? An update on research methodology and grantmaking of FP Climate by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to evaluate relative impact in high-uncertainty contexts? An update on research methodology & grantmaking of FP Climate, published by jackva on May 26, 2023 on The Effective Altruism Forum.1/ IntroductionWe recently doubled our full-time climate team (hi Megan!), and we are just going through another doubling (hiring a third researcher, as well as a climate communications manager, job ad for the latter coming soon, for now reach out to sally@founderspledge.com).Apart from getting a bulk rate for wedding cake, we thought this would be a good moment to update on our progress and what we have in the pipeline for the next months, both in terms of research to be released as well as grantmaking with the FP Climate Fund and beyond.As discussed in the next section, If you are not interested in climate, but in EA grantmaking research in general, we think it still might be interesting reading. Being part of Founders Pledge and the effective altruist endeavor at large, we continually try to build tools that are useful for applications outside the narrow cause area work – for example, some of the methodology work on impact multipliers has also been helpful for work in other areas, such as global catastrophic risks (here, as well as FP's Christian Ruhl's upcoming report on the nuclear risk landscape) and air pollution. Another way to put this is that we think of our climate work as one example of an effective altruist research and grantmaking program in a “high-but-not-maximal-uncertainty” environment, facing and attacking similar epistemic and methodological problems as, say, work on great power war, or risk-neutral current generations work. We will come back to this throughout the piece.In what follows, this update is organized as follows: We first describe the fundamental value proposition and mission of FP Climate (Section 2). We then discuss, at a high level, the methodological principles that flow from this mission (Section 3), before making this much more concrete with the discussion of three of the furthest developed research projects putting this into action (Section 4). This is the bulk of this methodology-focused-update. We then briefly discuss grantmaking plans (Section 5) and backlog (Section 6) before concluding (Section 7).2/ The value proposition and mission of FP ClimateAs part of Founders Pledge’s research team, the fundamental goal of FP Climate is to provide donors interested in maximizing the impact of their climate giving with a convenient vehicle to do so – the Founders Pledge Climate Fund. Crucially, and this is often misunderstood, our goal is not to serve arbitrary donor preferences but rather to guide donors to the most impactful opportunities available.. Taking caring about climate as given, we seek to answer the effective altruist question of what to prioritize.We are conceiving of FP Climate as a research-based grantmaking program to find and fund the best opportunities to reduce climate damage.We believe that at the heart of this effort has to be a credible comparative methodology to estimate relative expected impact, fit for purpose to the field of climate where a layer of uncertainties about society, economy, techno-economic factors, and the climate system, as well as a century-spanning global decarbonization effort. This is so because we are in a situation where causal effects and theories of change are often indirect and uncertainty is often irreducible on relevant time-frames (we discuss this more in our recent 80K Podcast (throughout links to 80K link to specific sections of the transcript), as well as Volts, and in our Changing Landscape report).While we have been building towards such a methodology since 2021 our recent increase in resourcing is quickly narrowing the gap between aspiration and reality. Before describing some exe...]]>
jackva https://forum.effectivealtruism.org/posts/kuopGotdCWeNCDpWi/how-to-evaluate-relative-impact-in-high-uncertainty-contexts Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to evaluate relative impact in high-uncertainty contexts? An update on research methodology & grantmaking of FP Climate, published by jackva on May 26, 2023 on The Effective Altruism Forum.1/ IntroductionWe recently doubled our full-time climate team (hi Megan!), and we are just going through another doubling (hiring a third researcher, as well as a climate communications manager, job ad for the latter coming soon, for now reach out to sally@founderspledge.com).Apart from getting a bulk rate for wedding cake, we thought this would be a good moment to update on our progress and what we have in the pipeline for the next months, both in terms of research to be released as well as grantmaking with the FP Climate Fund and beyond.As discussed in the next section, If you are not interested in climate, but in EA grantmaking research in general, we think it still might be interesting reading. Being part of Founders Pledge and the effective altruist endeavor at large, we continually try to build tools that are useful for applications outside the narrow cause area work – for example, some of the methodology work on impact multipliers has also been helpful for work in other areas, such as global catastrophic risks (here, as well as FP's Christian Ruhl's upcoming report on the nuclear risk landscape) and air pollution. Another way to put this is that we think of our climate work as one example of an effective altruist research and grantmaking program in a “high-but-not-maximal-uncertainty” environment, facing and attacking similar epistemic and methodological problems as, say, work on great power war, or risk-neutral current generations work. We will come back to this throughout the piece.In what follows, this update is organized as follows: We first describe the fundamental value proposition and mission of FP Climate (Section 2). We then discuss, at a high level, the methodological principles that flow from this mission (Section 3), before making this much more concrete with the discussion of three of the furthest developed research projects putting this into action (Section 4). This is the bulk of this methodology-focused-update. We then briefly discuss grantmaking plans (Section 5) and backlog (Section 6) before concluding (Section 7).2/ The value proposition and mission of FP ClimateAs part of Founders Pledge’s research team, the fundamental goal of FP Climate is to provide donors interested in maximizing the impact of their climate giving with a convenient vehicle to do so – the Founders Pledge Climate Fund. Crucially, and this is often misunderstood, our goal is not to serve arbitrary donor preferences but rather to guide donors to the most impactful opportunities available.. Taking caring about climate as given, we seek to answer the effective altruist question of what to prioritize.We are conceiving of FP Climate as a research-based grantmaking program to find and fund the best opportunities to reduce climate damage.We believe that at the heart of this effort has to be a credible comparative methodology to estimate relative expected impact, fit for purpose to the field of climate where a layer of uncertainties about society, economy, techno-economic factors, and the climate system, as well as a century-spanning global decarbonization effort. This is so because we are in a situation where causal effects and theories of change are often indirect and uncertainty is often irreducible on relevant time-frames (we discuss this more in our recent 80K Podcast (throughout links to 80K link to specific sections of the transcript), as well as Volts, and in our Changing Landscape report).While we have been building towards such a methodology since 2021 our recent increase in resourcing is quickly narrowing the gap between aspiration and reality. Before describing some exe...]]>
Sat, 27 May 2023 14:13:14 +0000 EA - How to evaluate relative impact in high-uncertainty contexts? An update on research methodology and grantmaking of FP Climate by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to evaluate relative impact in high-uncertainty contexts? An update on research methodology & grantmaking of FP Climate, published by jackva on May 26, 2023 on The Effective Altruism Forum.1/ IntroductionWe recently doubled our full-time climate team (hi Megan!), and we are just going through another doubling (hiring a third researcher, as well as a climate communications manager, job ad for the latter coming soon, for now reach out to sally@founderspledge.com).Apart from getting a bulk rate for wedding cake, we thought this would be a good moment to update on our progress and what we have in the pipeline for the next months, both in terms of research to be released as well as grantmaking with the FP Climate Fund and beyond.As discussed in the next section, If you are not interested in climate, but in EA grantmaking research in general, we think it still might be interesting reading. Being part of Founders Pledge and the effective altruist endeavor at large, we continually try to build tools that are useful for applications outside the narrow cause area work – for example, some of the methodology work on impact multipliers has also been helpful for work in other areas, such as global catastrophic risks (here, as well as FP's Christian Ruhl's upcoming report on the nuclear risk landscape) and air pollution. Another way to put this is that we think of our climate work as one example of an effective altruist research and grantmaking program in a “high-but-not-maximal-uncertainty” environment, facing and attacking similar epistemic and methodological problems as, say, work on great power war, or risk-neutral current generations work. We will come back to this throughout the piece.In what follows, this update is organized as follows: We first describe the fundamental value proposition and mission of FP Climate (Section 2). We then discuss, at a high level, the methodological principles that flow from this mission (Section 3), before making this much more concrete with the discussion of three of the furthest developed research projects putting this into action (Section 4). This is the bulk of this methodology-focused-update. We then briefly discuss grantmaking plans (Section 5) and backlog (Section 6) before concluding (Section 7).2/ The value proposition and mission of FP ClimateAs part of Founders Pledge’s research team, the fundamental goal of FP Climate is to provide donors interested in maximizing the impact of their climate giving with a convenient vehicle to do so – the Founders Pledge Climate Fund. Crucially, and this is often misunderstood, our goal is not to serve arbitrary donor preferences but rather to guide donors to the most impactful opportunities available.. Taking caring about climate as given, we seek to answer the effective altruist question of what to prioritize.We are conceiving of FP Climate as a research-based grantmaking program to find and fund the best opportunities to reduce climate damage.We believe that at the heart of this effort has to be a credible comparative methodology to estimate relative expected impact, fit for purpose to the field of climate where a layer of uncertainties about society, economy, techno-economic factors, and the climate system, as well as a century-spanning global decarbonization effort. This is so because we are in a situation where causal effects and theories of change are often indirect and uncertainty is often irreducible on relevant time-frames (we discuss this more in our recent 80K Podcast (throughout links to 80K link to specific sections of the transcript), as well as Volts, and in our Changing Landscape report).While we have been building towards such a methodology since 2021 our recent increase in resourcing is quickly narrowing the gap between aspiration and reality. Before describing some exe...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to evaluate relative impact in high-uncertainty contexts? An update on research methodology & grantmaking of FP Climate, published by jackva on May 26, 2023 on The Effective Altruism Forum.1/ IntroductionWe recently doubled our full-time climate team (hi Megan!), and we are just going through another doubling (hiring a third researcher, as well as a climate communications manager, job ad for the latter coming soon, for now reach out to sally@founderspledge.com).Apart from getting a bulk rate for wedding cake, we thought this would be a good moment to update on our progress and what we have in the pipeline for the next months, both in terms of research to be released as well as grantmaking with the FP Climate Fund and beyond.As discussed in the next section, If you are not interested in climate, but in EA grantmaking research in general, we think it still might be interesting reading. Being part of Founders Pledge and the effective altruist endeavor at large, we continually try to build tools that are useful for applications outside the narrow cause area work – for example, some of the methodology work on impact multipliers has also been helpful for work in other areas, such as global catastrophic risks (here, as well as FP's Christian Ruhl's upcoming report on the nuclear risk landscape) and air pollution. Another way to put this is that we think of our climate work as one example of an effective altruist research and grantmaking program in a “high-but-not-maximal-uncertainty” environment, facing and attacking similar epistemic and methodological problems as, say, work on great power war, or risk-neutral current generations work. We will come back to this throughout the piece.In what follows, this update is organized as follows: We first describe the fundamental value proposition and mission of FP Climate (Section 2). We then discuss, at a high level, the methodological principles that flow from this mission (Section 3), before making this much more concrete with the discussion of three of the furthest developed research projects putting this into action (Section 4). This is the bulk of this methodology-focused-update. We then briefly discuss grantmaking plans (Section 5) and backlog (Section 6) before concluding (Section 7).2/ The value proposition and mission of FP ClimateAs part of Founders Pledge’s research team, the fundamental goal of FP Climate is to provide donors interested in maximizing the impact of their climate giving with a convenient vehicle to do so – the Founders Pledge Climate Fund. Crucially, and this is often misunderstood, our goal is not to serve arbitrary donor preferences but rather to guide donors to the most impactful opportunities available.. Taking caring about climate as given, we seek to answer the effective altruist question of what to prioritize.We are conceiving of FP Climate as a research-based grantmaking program to find and fund the best opportunities to reduce climate damage.We believe that at the heart of this effort has to be a credible comparative methodology to estimate relative expected impact, fit for purpose to the field of climate where a layer of uncertainties about society, economy, techno-economic factors, and the climate system, as well as a century-spanning global decarbonization effort. This is so because we are in a situation where causal effects and theories of change are often indirect and uncertainty is often irreducible on relevant time-frames (we discuss this more in our recent 80K Podcast (throughout links to 80K link to specific sections of the transcript), as well as Volts, and in our Changing Landscape report).While we have been building towards such a methodology since 2021 our recent increase in resourcing is quickly narrowing the gap between aspiration and reality. Before describing some exe...]]>
jackva https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 30:34 None full 6080
vbGKuNsS5ix5g7Nqk_NL_EA_EA EA - Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI, published by titotal on May 26, 2023 on The Effective Altruism Forum.[In this post I discuss some of my field of expertise in computational physics. Although I do my best to make it layman friendly, I can't guarantee as such. In later parts I speculate about other fields such as brain simulation and bioweapons, note that I am not an expert in these subjects.]In a previous post, I argued that a superintelligence that only saw three frames of a webcam would not be able to deduce all the laws of physics, specifically general relativity and Newtonian gravity. But this specific scenario would only apply to certain forms of boxed AI.Any AI that can read the internet has a very easy way to deduce general relativity and all our other known laws of physics: look it up on wikipedia. All of the fundamental laws of physics relevant to day to day life are on there. An AGI will probably need additional experiments to deduce a fundamental theory of everything, but you don’t need that to take over the world. The AI in this case will know all the laws of physics that are practically useful.Does this mean that an AGI can figure out anything?There is a world of difference between knowing the laws of physics, and actually using the laws of physics in a practical manner. The problem is one that talk of “solomonoff induction” sweeps under the rug: Computational time is finite. And not just that. Compared to some of the algorithms we’d like to pull off, computational time is miniscule.Efficiency or deathThe concept of computational efficiency is at the core of computer science. The running of computers costs time and money. If we are faced with a problem, we want an algorithm to find the right answer. But just as important is figuring out how to find the right answer in the least amount of time.If your challenge is “calculate pi”, getting the exact “right answer” is impossible, because there are an infinite number of digits. At this point, we are instead trying to find the most accurate answer we can get for a given amount of computational resources.This is also applicable to NP-hard problems. Finding the exact answer to the travelling salesman problem for large networks is impossible within practical resource limits (assuming P not equal NP). What is possible is finding a pretty good answer. There’s no efficient algorithm for getting the exact right route, but there is one for guaranteeing you are within 50% of the right answer.When discussing AI capabilities, the computational resources available to the AI are finite and bounded. Balancing accuracy with computational cost will be fundamental to a successful AI system. Imagine an AI that, when asked a simple question, starts calculating an exact solution that would take a decade to finish. We’re gonna toss this AI in favor of one that gives a pretty good answer in practical time.This principle goes double for secret takeover plots. If computer model A spends half it’s computational resources modelling proteins, while computer model B doesn’t, computer model A is getting deleted. Worse, the engineers might start digging in to why model A is so slow, and get tipped off to the plot. All this is just to say: computational cost matters. A lot.A taste of computational physicsIn this section, I want to give you a taste of what it actually means to do computational physics. I will include some equations for demonstration, but you do not need to know much math to follow along. The subject will be a very highly studied problem in my field called the “band gap problem”.“band gap” is one of the most important material properties in semiconductor physics. It describes whether there is a slice of possible energy values that are forbidden ...]]>
titotal https://forum.effectivealtruism.org/posts/vbGKuNsS5ix5g7Nqk/bandgaps-brains-and-bioweapons-the-limitations-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI, published by titotal on May 26, 2023 on The Effective Altruism Forum.[In this post I discuss some of my field of expertise in computational physics. Although I do my best to make it layman friendly, I can't guarantee as such. In later parts I speculate about other fields such as brain simulation and bioweapons, note that I am not an expert in these subjects.]In a previous post, I argued that a superintelligence that only saw three frames of a webcam would not be able to deduce all the laws of physics, specifically general relativity and Newtonian gravity. But this specific scenario would only apply to certain forms of boxed AI.Any AI that can read the internet has a very easy way to deduce general relativity and all our other known laws of physics: look it up on wikipedia. All of the fundamental laws of physics relevant to day to day life are on there. An AGI will probably need additional experiments to deduce a fundamental theory of everything, but you don’t need that to take over the world. The AI in this case will know all the laws of physics that are practically useful.Does this mean that an AGI can figure out anything?There is a world of difference between knowing the laws of physics, and actually using the laws of physics in a practical manner. The problem is one that talk of “solomonoff induction” sweeps under the rug: Computational time is finite. And not just that. Compared to some of the algorithms we’d like to pull off, computational time is miniscule.Efficiency or deathThe concept of computational efficiency is at the core of computer science. The running of computers costs time and money. If we are faced with a problem, we want an algorithm to find the right answer. But just as important is figuring out how to find the right answer in the least amount of time.If your challenge is “calculate pi”, getting the exact “right answer” is impossible, because there are an infinite number of digits. At this point, we are instead trying to find the most accurate answer we can get for a given amount of computational resources.This is also applicable to NP-hard problems. Finding the exact answer to the travelling salesman problem for large networks is impossible within practical resource limits (assuming P not equal NP). What is possible is finding a pretty good answer. There’s no efficient algorithm for getting the exact right route, but there is one for guaranteeing you are within 50% of the right answer.When discussing AI capabilities, the computational resources available to the AI are finite and bounded. Balancing accuracy with computational cost will be fundamental to a successful AI system. Imagine an AI that, when asked a simple question, starts calculating an exact solution that would take a decade to finish. We’re gonna toss this AI in favor of one that gives a pretty good answer in practical time.This principle goes double for secret takeover plots. If computer model A spends half it’s computational resources modelling proteins, while computer model B doesn’t, computer model A is getting deleted. Worse, the engineers might start digging in to why model A is so slow, and get tipped off to the plot. All this is just to say: computational cost matters. A lot.A taste of computational physicsIn this section, I want to give you a taste of what it actually means to do computational physics. I will include some equations for demonstration, but you do not need to know much math to follow along. The subject will be a very highly studied problem in my field called the “band gap problem”.“band gap” is one of the most important material properties in semiconductor physics. It describes whether there is a slice of possible energy values that are forbidden ...]]>
Sat, 27 May 2023 10:07:42 +0000 EA - Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI, published by titotal on May 26, 2023 on The Effective Altruism Forum.[In this post I discuss some of my field of expertise in computational physics. Although I do my best to make it layman friendly, I can't guarantee as such. In later parts I speculate about other fields such as brain simulation and bioweapons, note that I am not an expert in these subjects.]In a previous post, I argued that a superintelligence that only saw three frames of a webcam would not be able to deduce all the laws of physics, specifically general relativity and Newtonian gravity. But this specific scenario would only apply to certain forms of boxed AI.Any AI that can read the internet has a very easy way to deduce general relativity and all our other known laws of physics: look it up on wikipedia. All of the fundamental laws of physics relevant to day to day life are on there. An AGI will probably need additional experiments to deduce a fundamental theory of everything, but you don’t need that to take over the world. The AI in this case will know all the laws of physics that are practically useful.Does this mean that an AGI can figure out anything?There is a world of difference between knowing the laws of physics, and actually using the laws of physics in a practical manner. The problem is one that talk of “solomonoff induction” sweeps under the rug: Computational time is finite. And not just that. Compared to some of the algorithms we’d like to pull off, computational time is miniscule.Efficiency or deathThe concept of computational efficiency is at the core of computer science. The running of computers costs time and money. If we are faced with a problem, we want an algorithm to find the right answer. But just as important is figuring out how to find the right answer in the least amount of time.If your challenge is “calculate pi”, getting the exact “right answer” is impossible, because there are an infinite number of digits. At this point, we are instead trying to find the most accurate answer we can get for a given amount of computational resources.This is also applicable to NP-hard problems. Finding the exact answer to the travelling salesman problem for large networks is impossible within practical resource limits (assuming P not equal NP). What is possible is finding a pretty good answer. There’s no efficient algorithm for getting the exact right route, but there is one for guaranteeing you are within 50% of the right answer.When discussing AI capabilities, the computational resources available to the AI are finite and bounded. Balancing accuracy with computational cost will be fundamental to a successful AI system. Imagine an AI that, when asked a simple question, starts calculating an exact solution that would take a decade to finish. We’re gonna toss this AI in favor of one that gives a pretty good answer in practical time.This principle goes double for secret takeover plots. If computer model A spends half it’s computational resources modelling proteins, while computer model B doesn’t, computer model A is getting deleted. Worse, the engineers might start digging in to why model A is so slow, and get tipped off to the plot. All this is just to say: computational cost matters. A lot.A taste of computational physicsIn this section, I want to give you a taste of what it actually means to do computational physics. I will include some equations for demonstration, but you do not need to know much math to follow along. The subject will be a very highly studied problem in my field called the “band gap problem”.“band gap” is one of the most important material properties in semiconductor physics. It describes whether there is a slice of possible energy values that are forbidden ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI, published by titotal on May 26, 2023 on The Effective Altruism Forum.[In this post I discuss some of my field of expertise in computational physics. Although I do my best to make it layman friendly, I can't guarantee as such. In later parts I speculate about other fields such as brain simulation and bioweapons, note that I am not an expert in these subjects.]In a previous post, I argued that a superintelligence that only saw three frames of a webcam would not be able to deduce all the laws of physics, specifically general relativity and Newtonian gravity. But this specific scenario would only apply to certain forms of boxed AI.Any AI that can read the internet has a very easy way to deduce general relativity and all our other known laws of physics: look it up on wikipedia. All of the fundamental laws of physics relevant to day to day life are on there. An AGI will probably need additional experiments to deduce a fundamental theory of everything, but you don’t need that to take over the world. The AI in this case will know all the laws of physics that are practically useful.Does this mean that an AGI can figure out anything?There is a world of difference between knowing the laws of physics, and actually using the laws of physics in a practical manner. The problem is one that talk of “solomonoff induction” sweeps under the rug: Computational time is finite. And not just that. Compared to some of the algorithms we’d like to pull off, computational time is miniscule.Efficiency or deathThe concept of computational efficiency is at the core of computer science. The running of computers costs time and money. If we are faced with a problem, we want an algorithm to find the right answer. But just as important is figuring out how to find the right answer in the least amount of time.If your challenge is “calculate pi”, getting the exact “right answer” is impossible, because there are an infinite number of digits. At this point, we are instead trying to find the most accurate answer we can get for a given amount of computational resources.This is also applicable to NP-hard problems. Finding the exact answer to the travelling salesman problem for large networks is impossible within practical resource limits (assuming P not equal NP). What is possible is finding a pretty good answer. There’s no efficient algorithm for getting the exact right route, but there is one for guaranteeing you are within 50% of the right answer.When discussing AI capabilities, the computational resources available to the AI are finite and bounded. Balancing accuracy with computational cost will be fundamental to a successful AI system. Imagine an AI that, when asked a simple question, starts calculating an exact solution that would take a decade to finish. We’re gonna toss this AI in favor of one that gives a pretty good answer in practical time.This principle goes double for secret takeover plots. If computer model A spends half it’s computational resources modelling proteins, while computer model B doesn’t, computer model A is getting deleted. Worse, the engineers might start digging in to why model A is so slow, and get tipped off to the plot. All this is just to say: computational cost matters. A lot.A taste of computational physicsIn this section, I want to give you a taste of what it actually means to do computational physics. I will include some equations for demonstration, but you do not need to know much math to follow along. The subject will be a very highly studied problem in my field called the “band gap problem”.“band gap” is one of the most important material properties in semiconductor physics. It describes whether there is a slice of possible energy values that are forbidden ...]]>
titotal https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 28:35 None full 6079
nsLTKCd3Bvdwzj9x8_NL_EA_EA EA - Ingroup Deference by trammell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ingroup Deference, published by trammell on May 26, 2023 on The Effective Altruism Forum.Epistemic status: yes. All about epistemicsIntroductionIn principle, all that motivates the existence of the EA community is collaboration around a common goal. As the shared goal of preserving the environment characterizes the environmentalist community, say, EA is supposed to be characterized by the shared goal of doing the most good. But in practice, the EA community shares more than just this abstract goal (let’s grant that it does at least share the stated goal) and the collaborations that result. It also exhibits an unusual distribution of beliefs about various things, like the probability that AI will kill everyone or the externalities of polyamory.My attitude has long been that, to a first approximation, it doesn’t make sense for EAs to defer to each other’s judgment any more than to anyone else’s on questions lacking consensus.When we do, we land in the kind of echo chamber which convinced environmentalists that nuclear power is more dangerous than most experts think, and which at least to some extent seems to have trapped practically every other social movement, political party, religious community, patriotic country, academic discipline, and school of thought within an academic discipline on record. This attitude suggests the following template for an EA-motivated line of strategy reasoning, e.g. an EA-motivated econ theory paper:Look around at what most people are doing. Assume you and your EA-engaged readers are no more capable or better informed than others are, on the whole; take others’ behavior as a best guess on how to achieve their own goals.Work out [what, say, economic theory says about] how to act if you believe what others believe, but replace the goal of “what people typically want” with some conception of “the good”.And so a lot of my own research has fit this mold, including the core of my work on “patient philanthropy”[1, 2] (if we act like typical funders except that we replace the rate of pure time preference with zero, here’s the formula for how much higher our saving rate should be). The template is hardly my invention, of course. Another example would be Roth Tran’s (2019) paper on “mission hedging” (if a philanthropic investor acts like a typical investor except that they’ll be spending the money on some cause, instead of their own consumption, here’s the formula for how they should tweak how they invest). Or this post on inferring AI timelines from interest rates and setting one’s philanthropic strategy accordingly.But treating EA thought as generic may not be a good first approximation.Seeing the “EA consensus” be arguably ahead of the curve on some big issues—Covid a few years ago, AI progress more recently—raises the question of whether there’s a better heuristic: one which doesn’t treat these cases as coincidences, but which is still principled enough that we don’t have to worry too much about turning the EA community into [more of] an echo chamber all around. This post argues that there is. The gist is simple. If you’ve been putting in the effort to follow the evolution of EA thought, you have some “inside knowledge” of how it came to be what it is on some question. (I mean this not in the sense that the evolution of EA thinking is secret, just in the sense that it’s somewhat costly to learn.)If this costly knowledge informs you that EA beliefs on some question are unusual because they started out typical and then updated in light of some idiosyncratic learning, e.g. an EA-motivated research effort, then it’s reasonable for you to update toward them to some extent. On the other hand, if it informs you that EA beliefs on some question have been unusual from the get-go, it makes sense to update the other way, toward the distribution o...]]>
trammell https://forum.effectivealtruism.org/posts/nsLTKCd3Bvdwzj9x8/ingroup-deference Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ingroup Deference, published by trammell on May 26, 2023 on The Effective Altruism Forum.Epistemic status: yes. All about epistemicsIntroductionIn principle, all that motivates the existence of the EA community is collaboration around a common goal. As the shared goal of preserving the environment characterizes the environmentalist community, say, EA is supposed to be characterized by the shared goal of doing the most good. But in practice, the EA community shares more than just this abstract goal (let’s grant that it does at least share the stated goal) and the collaborations that result. It also exhibits an unusual distribution of beliefs about various things, like the probability that AI will kill everyone or the externalities of polyamory.My attitude has long been that, to a first approximation, it doesn’t make sense for EAs to defer to each other’s judgment any more than to anyone else’s on questions lacking consensus.When we do, we land in the kind of echo chamber which convinced environmentalists that nuclear power is more dangerous than most experts think, and which at least to some extent seems to have trapped practically every other social movement, political party, religious community, patriotic country, academic discipline, and school of thought within an academic discipline on record. This attitude suggests the following template for an EA-motivated line of strategy reasoning, e.g. an EA-motivated econ theory paper:Look around at what most people are doing. Assume you and your EA-engaged readers are no more capable or better informed than others are, on the whole; take others’ behavior as a best guess on how to achieve their own goals.Work out [what, say, economic theory says about] how to act if you believe what others believe, but replace the goal of “what people typically want” with some conception of “the good”.And so a lot of my own research has fit this mold, including the core of my work on “patient philanthropy”[1, 2] (if we act like typical funders except that we replace the rate of pure time preference with zero, here’s the formula for how much higher our saving rate should be). The template is hardly my invention, of course. Another example would be Roth Tran’s (2019) paper on “mission hedging” (if a philanthropic investor acts like a typical investor except that they’ll be spending the money on some cause, instead of their own consumption, here’s the formula for how they should tweak how they invest). Or this post on inferring AI timelines from interest rates and setting one’s philanthropic strategy accordingly.But treating EA thought as generic may not be a good first approximation.Seeing the “EA consensus” be arguably ahead of the curve on some big issues—Covid a few years ago, AI progress more recently—raises the question of whether there’s a better heuristic: one which doesn’t treat these cases as coincidences, but which is still principled enough that we don’t have to worry too much about turning the EA community into [more of] an echo chamber all around. This post argues that there is. The gist is simple. If you’ve been putting in the effort to follow the evolution of EA thought, you have some “inside knowledge” of how it came to be what it is on some question. (I mean this not in the sense that the evolution of EA thinking is secret, just in the sense that it’s somewhat costly to learn.)If this costly knowledge informs you that EA beliefs on some question are unusual because they started out typical and then updated in light of some idiosyncratic learning, e.g. an EA-motivated research effort, then it’s reasonable for you to update toward them to some extent. On the other hand, if it informs you that EA beliefs on some question have been unusual from the get-go, it makes sense to update the other way, toward the distribution o...]]>
Fri, 26 May 2023 18:18:10 +0000 EA - Ingroup Deference by trammell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ingroup Deference, published by trammell on May 26, 2023 on The Effective Altruism Forum.Epistemic status: yes. All about epistemicsIntroductionIn principle, all that motivates the existence of the EA community is collaboration around a common goal. As the shared goal of preserving the environment characterizes the environmentalist community, say, EA is supposed to be characterized by the shared goal of doing the most good. But in practice, the EA community shares more than just this abstract goal (let’s grant that it does at least share the stated goal) and the collaborations that result. It also exhibits an unusual distribution of beliefs about various things, like the probability that AI will kill everyone or the externalities of polyamory.My attitude has long been that, to a first approximation, it doesn’t make sense for EAs to defer to each other’s judgment any more than to anyone else’s on questions lacking consensus.When we do, we land in the kind of echo chamber which convinced environmentalists that nuclear power is more dangerous than most experts think, and which at least to some extent seems to have trapped practically every other social movement, political party, religious community, patriotic country, academic discipline, and school of thought within an academic discipline on record. This attitude suggests the following template for an EA-motivated line of strategy reasoning, e.g. an EA-motivated econ theory paper:Look around at what most people are doing. Assume you and your EA-engaged readers are no more capable or better informed than others are, on the whole; take others’ behavior as a best guess on how to achieve their own goals.Work out [what, say, economic theory says about] how to act if you believe what others believe, but replace the goal of “what people typically want” with some conception of “the good”.And so a lot of my own research has fit this mold, including the core of my work on “patient philanthropy”[1, 2] (if we act like typical funders except that we replace the rate of pure time preference with zero, here’s the formula for how much higher our saving rate should be). The template is hardly my invention, of course. Another example would be Roth Tran’s (2019) paper on “mission hedging” (if a philanthropic investor acts like a typical investor except that they’ll be spending the money on some cause, instead of their own consumption, here’s the formula for how they should tweak how they invest). Or this post on inferring AI timelines from interest rates and setting one’s philanthropic strategy accordingly.But treating EA thought as generic may not be a good first approximation.Seeing the “EA consensus” be arguably ahead of the curve on some big issues—Covid a few years ago, AI progress more recently—raises the question of whether there’s a better heuristic: one which doesn’t treat these cases as coincidences, but which is still principled enough that we don’t have to worry too much about turning the EA community into [more of] an echo chamber all around. This post argues that there is. The gist is simple. If you’ve been putting in the effort to follow the evolution of EA thought, you have some “inside knowledge” of how it came to be what it is on some question. (I mean this not in the sense that the evolution of EA thinking is secret, just in the sense that it’s somewhat costly to learn.)If this costly knowledge informs you that EA beliefs on some question are unusual because they started out typical and then updated in light of some idiosyncratic learning, e.g. an EA-motivated research effort, then it’s reasonable for you to update toward them to some extent. On the other hand, if it informs you that EA beliefs on some question have been unusual from the get-go, it makes sense to update the other way, toward the distribution o...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ingroup Deference, published by trammell on May 26, 2023 on The Effective Altruism Forum.Epistemic status: yes. All about epistemicsIntroductionIn principle, all that motivates the existence of the EA community is collaboration around a common goal. As the shared goal of preserving the environment characterizes the environmentalist community, say, EA is supposed to be characterized by the shared goal of doing the most good. But in practice, the EA community shares more than just this abstract goal (let’s grant that it does at least share the stated goal) and the collaborations that result. It also exhibits an unusual distribution of beliefs about various things, like the probability that AI will kill everyone or the externalities of polyamory.My attitude has long been that, to a first approximation, it doesn’t make sense for EAs to defer to each other’s judgment any more than to anyone else’s on questions lacking consensus.When we do, we land in the kind of echo chamber which convinced environmentalists that nuclear power is more dangerous than most experts think, and which at least to some extent seems to have trapped practically every other social movement, political party, religious community, patriotic country, academic discipline, and school of thought within an academic discipline on record. This attitude suggests the following template for an EA-motivated line of strategy reasoning, e.g. an EA-motivated econ theory paper:Look around at what most people are doing. Assume you and your EA-engaged readers are no more capable or better informed than others are, on the whole; take others’ behavior as a best guess on how to achieve their own goals.Work out [what, say, economic theory says about] how to act if you believe what others believe, but replace the goal of “what people typically want” with some conception of “the good”.And so a lot of my own research has fit this mold, including the core of my work on “patient philanthropy”[1, 2] (if we act like typical funders except that we replace the rate of pure time preference with zero, here’s the formula for how much higher our saving rate should be). The template is hardly my invention, of course. Another example would be Roth Tran’s (2019) paper on “mission hedging” (if a philanthropic investor acts like a typical investor except that they’ll be spending the money on some cause, instead of their own consumption, here’s the formula for how they should tweak how they invest). Or this post on inferring AI timelines from interest rates and setting one’s philanthropic strategy accordingly.But treating EA thought as generic may not be a good first approximation.Seeing the “EA consensus” be arguably ahead of the curve on some big issues—Covid a few years ago, AI progress more recently—raises the question of whether there’s a better heuristic: one which doesn’t treat these cases as coincidences, but which is still principled enough that we don’t have to worry too much about turning the EA community into [more of] an echo chamber all around. This post argues that there is. The gist is simple. If you’ve been putting in the effort to follow the evolution of EA thought, you have some “inside knowledge” of how it came to be what it is on some question. (I mean this not in the sense that the evolution of EA thinking is secret, just in the sense that it’s somewhat costly to learn.)If this costly knowledge informs you that EA beliefs on some question are unusual because they started out typical and then updated in light of some idiosyncratic learning, e.g. an EA-motivated research effort, then it’s reasonable for you to update toward them to some extent. On the other hand, if it informs you that EA beliefs on some question have been unusual from the get-go, it makes sense to update the other way, toward the distribution o...]]>
trammell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 33:02 None full 6074
83A4qpkmnXqDFJmWf_NL_EA_EA EA - EA cause areas are likely power-law distributed too by Stian Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA cause areas are likely power-law distributed too, published by Stian on May 25, 2023 on The Effective Altruism Forum.So there are two pieces of common effective altruist thinking that I think are in tension with each other, but for which a more sophisticated version of a similar view makes sense and dissolves that tension. This means that in my experience, people can see others holding the more sophisticated view, and adopting the simple one without really examining them (and discovering this tension).This proposed tension is between two statements/beliefs. The first is the common (and core!) community belief that the impact of different interventions is power-law distributed. This means that the very best intervention is several times more impactful than the almost-best ones. The second is a statement or belief along the lines of "I am so glad someone has done so much work thinking about which areas/interventions would have the most impact, as that means my task of choosing among them is easier", or the extreme one which continues "as that means I don't have to think hard about choosing among them." I will refer to this as the uniform belief.Now, there is on the face of it many things to take issue with in how I phrased the uniform belief[1], but I want to deal with two things. 1) I think the uniform belief is a ~fairly common thing to "casually" believe - it is a belief that is easy to automatically form after cursorily engaging with EA topics - and 2) it goes strictly against the belief regarding the power-law distribution of impact.On a psychological level, I think people can come to hold the uniform belief when they fail to adequately reflect on and internalise that interventions are power-law distributed. Because once they do, the tension between the power-law belief and the uniform belief becomes clear. If the power-law (or simply a right-skewed distribution) holds, then even among the interventions and cause areas already identified, their true impact might be very different from each other. We just don’t know which ones have the highest impact.The holding of the uniform belief is a trap that I think people who don't reflect too heavily can fall into, and which I know because I was in it myself for a while - making statements like "Can't go wrong with choosing among the EA-recommended topics". Now I think you can go wrong in choosing among them, and in many different ways. To be clear, I don't think too many people stay in this trap for too long - EA has good social mechanisms for correcting others' beliefs [2] and I would think that it is often caught early. But it is the kind of thing that I am afraid new or cursory EAs might come away permanently believing: that someone else has already done all of the work of figuring out which interventions are the best.The more sophisticated view, and which I think is correct, is that because no one knows ex ante the "true" impact of an intervention, or the total positive consequences of work in an area, you personally cannot, before you start doing the difficult work of figuring out what you think, know which of the interventions you will end up thinking is the most important one. So at first blush - at first encounter with the 80k problem profiles, or whatever - it is fine to think that all the areas have equal expected impact [3]. You probably won't come in thinking this - because you have some prior knowledge - but it would be fine to think. What is not fine would be to (automatically, unconsciously) go on to directly choose a career path among them without figuring out what you think is important, what the evidence for each problem area is, and which area you would be a good personal fit for.So newcomers see that EA has several problem areas, and are looking at a wide selection of possible interventions, ...]]>
Stian https://forum.effectivealtruism.org/posts/83A4qpkmnXqDFJmWf/ea-cause-areas-are-likely-power-law-distributed-too Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA cause areas are likely power-law distributed too, published by Stian on May 25, 2023 on The Effective Altruism Forum.So there are two pieces of common effective altruist thinking that I think are in tension with each other, but for which a more sophisticated version of a similar view makes sense and dissolves that tension. This means that in my experience, people can see others holding the more sophisticated view, and adopting the simple one without really examining them (and discovering this tension).This proposed tension is between two statements/beliefs. The first is the common (and core!) community belief that the impact of different interventions is power-law distributed. This means that the very best intervention is several times more impactful than the almost-best ones. The second is a statement or belief along the lines of "I am so glad someone has done so much work thinking about which areas/interventions would have the most impact, as that means my task of choosing among them is easier", or the extreme one which continues "as that means I don't have to think hard about choosing among them." I will refer to this as the uniform belief.Now, there is on the face of it many things to take issue with in how I phrased the uniform belief[1], but I want to deal with two things. 1) I think the uniform belief is a ~fairly common thing to "casually" believe - it is a belief that is easy to automatically form after cursorily engaging with EA topics - and 2) it goes strictly against the belief regarding the power-law distribution of impact.On a psychological level, I think people can come to hold the uniform belief when they fail to adequately reflect on and internalise that interventions are power-law distributed. Because once they do, the tension between the power-law belief and the uniform belief becomes clear. If the power-law (or simply a right-skewed distribution) holds, then even among the interventions and cause areas already identified, their true impact might be very different from each other. We just don’t know which ones have the highest impact.The holding of the uniform belief is a trap that I think people who don't reflect too heavily can fall into, and which I know because I was in it myself for a while - making statements like "Can't go wrong with choosing among the EA-recommended topics". Now I think you can go wrong in choosing among them, and in many different ways. To be clear, I don't think too many people stay in this trap for too long - EA has good social mechanisms for correcting others' beliefs [2] and I would think that it is often caught early. But it is the kind of thing that I am afraid new or cursory EAs might come away permanently believing: that someone else has already done all of the work of figuring out which interventions are the best.The more sophisticated view, and which I think is correct, is that because no one knows ex ante the "true" impact of an intervention, or the total positive consequences of work in an area, you personally cannot, before you start doing the difficult work of figuring out what you think, know which of the interventions you will end up thinking is the most important one. So at first blush - at first encounter with the 80k problem profiles, or whatever - it is fine to think that all the areas have equal expected impact [3]. You probably won't come in thinking this - because you have some prior knowledge - but it would be fine to think. What is not fine would be to (automatically, unconsciously) go on to directly choose a career path among them without figuring out what you think is important, what the evidence for each problem area is, and which area you would be a good personal fit for.So newcomers see that EA has several problem areas, and are looking at a wide selection of possible interventions, ...]]>
Fri, 26 May 2023 15:20:47 +0000 EA - EA cause areas are likely power-law distributed too by Stian Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA cause areas are likely power-law distributed too, published by Stian on May 25, 2023 on The Effective Altruism Forum.So there are two pieces of common effective altruist thinking that I think are in tension with each other, but for which a more sophisticated version of a similar view makes sense and dissolves that tension. This means that in my experience, people can see others holding the more sophisticated view, and adopting the simple one without really examining them (and discovering this tension).This proposed tension is between two statements/beliefs. The first is the common (and core!) community belief that the impact of different interventions is power-law distributed. This means that the very best intervention is several times more impactful than the almost-best ones. The second is a statement or belief along the lines of "I am so glad someone has done so much work thinking about which areas/interventions would have the most impact, as that means my task of choosing among them is easier", or the extreme one which continues "as that means I don't have to think hard about choosing among them." I will refer to this as the uniform belief.Now, there is on the face of it many things to take issue with in how I phrased the uniform belief[1], but I want to deal with two things. 1) I think the uniform belief is a ~fairly common thing to "casually" believe - it is a belief that is easy to automatically form after cursorily engaging with EA topics - and 2) it goes strictly against the belief regarding the power-law distribution of impact.On a psychological level, I think people can come to hold the uniform belief when they fail to adequately reflect on and internalise that interventions are power-law distributed. Because once they do, the tension between the power-law belief and the uniform belief becomes clear. If the power-law (or simply a right-skewed distribution) holds, then even among the interventions and cause areas already identified, their true impact might be very different from each other. We just don’t know which ones have the highest impact.The holding of the uniform belief is a trap that I think people who don't reflect too heavily can fall into, and which I know because I was in it myself for a while - making statements like "Can't go wrong with choosing among the EA-recommended topics". Now I think you can go wrong in choosing among them, and in many different ways. To be clear, I don't think too many people stay in this trap for too long - EA has good social mechanisms for correcting others' beliefs [2] and I would think that it is often caught early. But it is the kind of thing that I am afraid new or cursory EAs might come away permanently believing: that someone else has already done all of the work of figuring out which interventions are the best.The more sophisticated view, and which I think is correct, is that because no one knows ex ante the "true" impact of an intervention, or the total positive consequences of work in an area, you personally cannot, before you start doing the difficult work of figuring out what you think, know which of the interventions you will end up thinking is the most important one. So at first blush - at first encounter with the 80k problem profiles, or whatever - it is fine to think that all the areas have equal expected impact [3]. You probably won't come in thinking this - because you have some prior knowledge - but it would be fine to think. What is not fine would be to (automatically, unconsciously) go on to directly choose a career path among them without figuring out what you think is important, what the evidence for each problem area is, and which area you would be a good personal fit for.So newcomers see that EA has several problem areas, and are looking at a wide selection of possible interventions, ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA cause areas are likely power-law distributed too, published by Stian on May 25, 2023 on The Effective Altruism Forum.So there are two pieces of common effective altruist thinking that I think are in tension with each other, but for which a more sophisticated version of a similar view makes sense and dissolves that tension. This means that in my experience, people can see others holding the more sophisticated view, and adopting the simple one without really examining them (and discovering this tension).This proposed tension is between two statements/beliefs. The first is the common (and core!) community belief that the impact of different interventions is power-law distributed. This means that the very best intervention is several times more impactful than the almost-best ones. The second is a statement or belief along the lines of "I am so glad someone has done so much work thinking about which areas/interventions would have the most impact, as that means my task of choosing among them is easier", or the extreme one which continues "as that means I don't have to think hard about choosing among them." I will refer to this as the uniform belief.Now, there is on the face of it many things to take issue with in how I phrased the uniform belief[1], but I want to deal with two things. 1) I think the uniform belief is a ~fairly common thing to "casually" believe - it is a belief that is easy to automatically form after cursorily engaging with EA topics - and 2) it goes strictly against the belief regarding the power-law distribution of impact.On a psychological level, I think people can come to hold the uniform belief when they fail to adequately reflect on and internalise that interventions are power-law distributed. Because once they do, the tension between the power-law belief and the uniform belief becomes clear. If the power-law (or simply a right-skewed distribution) holds, then even among the interventions and cause areas already identified, their true impact might be very different from each other. We just don’t know which ones have the highest impact.The holding of the uniform belief is a trap that I think people who don't reflect too heavily can fall into, and which I know because I was in it myself for a while - making statements like "Can't go wrong with choosing among the EA-recommended topics". Now I think you can go wrong in choosing among them, and in many different ways. To be clear, I don't think too many people stay in this trap for too long - EA has good social mechanisms for correcting others' beliefs [2] and I would think that it is often caught early. But it is the kind of thing that I am afraid new or cursory EAs might come away permanently believing: that someone else has already done all of the work of figuring out which interventions are the best.The more sophisticated view, and which I think is correct, is that because no one knows ex ante the "true" impact of an intervention, or the total positive consequences of work in an area, you personally cannot, before you start doing the difficult work of figuring out what you think, know which of the interventions you will end up thinking is the most important one. So at first blush - at first encounter with the 80k problem profiles, or whatever - it is fine to think that all the areas have equal expected impact [3]. You probably won't come in thinking this - because you have some prior knowledge - but it would be fine to think. What is not fine would be to (automatically, unconsciously) go on to directly choose a career path among them without figuring out what you think is important, what the evidence for each problem area is, and which area you would be a good personal fit for.So newcomers see that EA has several problem areas, and are looking at a wide selection of possible interventions, ...]]>
Stian https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:01 None full 6072
bPXmomt5boy72Pfre_NL_EA_EA EA - It is good for EA funders to have seats on boards of orgs they fund [debate] by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It is good for EA funders to have seats on boards of orgs they fund [debate], published by Nathan Young on May 25, 2023 on The Effective Altruism Forum.It has come to my attention that many people (including my past self) think that it's bad for funders to sit on the boards of orgs they fund. Eg someone at OpenPhil being the lead decision maker on a grant and then sitting on the board of that org.Let's debate thisSince I said this, several separate people I always update to, including a non-EA said this is trivially wrong. It is typical practice with good reason:EA is not doing something weird and galaxy-brained here. Particularly in America this is normal practiceHaving a board seat ensures that your funding is going where you want and might allow you to fund with other fewer strings attachedIt allows funder oversight. They can ask the relevant questions at the time rather than in some funding meetingPerhaps you might think that it causes funders to become too involved, but I dunno. And this is clearly a different argument than the standard "EA is doing something weird and slightly nepotistic"To use the obvious examples, it is therefore good that Claire Zabel sits on whatever boards she sits on of orgs OP funds. And reasonable that OpenPhil considered funding OpenAI as a way to get a board seat (you can disagree with the actual cost benefit but there was nothing bad normsy about doing it)Do you buy my arguments? Please read the comments to this article also, then vote in this anonymouse poll.Loading...And now you can bet and then make your argument to try and shift future respondents and earn mana for doing so.This market resolves in a month to the final agree % + weakly agree % of the above poll. Hopefully we can see it move in real time if someone makes a convincing argument.I think this is a really cool real time debate format and we should have it at EAG. Relevant docThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://forum.effectivealtruism.org/posts/bPXmomt5boy72Pfre/it-is-good-for-ea-funders-to-have-seats-on-boards-of-orgs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It is good for EA funders to have seats on boards of orgs they fund [debate], published by Nathan Young on May 25, 2023 on The Effective Altruism Forum.It has come to my attention that many people (including my past self) think that it's bad for funders to sit on the boards of orgs they fund. Eg someone at OpenPhil being the lead decision maker on a grant and then sitting on the board of that org.Let's debate thisSince I said this, several separate people I always update to, including a non-EA said this is trivially wrong. It is typical practice with good reason:EA is not doing something weird and galaxy-brained here. Particularly in America this is normal practiceHaving a board seat ensures that your funding is going where you want and might allow you to fund with other fewer strings attachedIt allows funder oversight. They can ask the relevant questions at the time rather than in some funding meetingPerhaps you might think that it causes funders to become too involved, but I dunno. And this is clearly a different argument than the standard "EA is doing something weird and slightly nepotistic"To use the obvious examples, it is therefore good that Claire Zabel sits on whatever boards she sits on of orgs OP funds. And reasonable that OpenPhil considered funding OpenAI as a way to get a board seat (you can disagree with the actual cost benefit but there was nothing bad normsy about doing it)Do you buy my arguments? Please read the comments to this article also, then vote in this anonymouse poll.Loading...And now you can bet and then make your argument to try and shift future respondents and earn mana for doing so.This market resolves in a month to the final agree % + weakly agree % of the above poll. Hopefully we can see it move in real time if someone makes a convincing argument.I think this is a really cool real time debate format and we should have it at EAG. Relevant docThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 26 May 2023 13:50:20 +0000 EA - It is good for EA funders to have seats on boards of orgs they fund [debate] by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It is good for EA funders to have seats on boards of orgs they fund [debate], published by Nathan Young on May 25, 2023 on The Effective Altruism Forum.It has come to my attention that many people (including my past self) think that it's bad for funders to sit on the boards of orgs they fund. Eg someone at OpenPhil being the lead decision maker on a grant and then sitting on the board of that org.Let's debate thisSince I said this, several separate people I always update to, including a non-EA said this is trivially wrong. It is typical practice with good reason:EA is not doing something weird and galaxy-brained here. Particularly in America this is normal practiceHaving a board seat ensures that your funding is going where you want and might allow you to fund with other fewer strings attachedIt allows funder oversight. They can ask the relevant questions at the time rather than in some funding meetingPerhaps you might think that it causes funders to become too involved, but I dunno. And this is clearly a different argument than the standard "EA is doing something weird and slightly nepotistic"To use the obvious examples, it is therefore good that Claire Zabel sits on whatever boards she sits on of orgs OP funds. And reasonable that OpenPhil considered funding OpenAI as a way to get a board seat (you can disagree with the actual cost benefit but there was nothing bad normsy about doing it)Do you buy my arguments? Please read the comments to this article also, then vote in this anonymouse poll.Loading...And now you can bet and then make your argument to try and shift future respondents and earn mana for doing so.This market resolves in a month to the final agree % + weakly agree % of the above poll. Hopefully we can see it move in real time if someone makes a convincing argument.I think this is a really cool real time debate format and we should have it at EAG. Relevant docThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It is good for EA funders to have seats on boards of orgs they fund [debate], published by Nathan Young on May 25, 2023 on The Effective Altruism Forum.It has come to my attention that many people (including my past self) think that it's bad for funders to sit on the boards of orgs they fund. Eg someone at OpenPhil being the lead decision maker on a grant and then sitting on the board of that org.Let's debate thisSince I said this, several separate people I always update to, including a non-EA said this is trivially wrong. It is typical practice with good reason:EA is not doing something weird and galaxy-brained here. Particularly in America this is normal practiceHaving a board seat ensures that your funding is going where you want and might allow you to fund with other fewer strings attachedIt allows funder oversight. They can ask the relevant questions at the time rather than in some funding meetingPerhaps you might think that it causes funders to become too involved, but I dunno. And this is clearly a different argument than the standard "EA is doing something weird and slightly nepotistic"To use the obvious examples, it is therefore good that Claire Zabel sits on whatever boards she sits on of orgs OP funds. And reasonable that OpenPhil considered funding OpenAI as a way to get a board seat (you can disagree with the actual cost benefit but there was nothing bad normsy about doing it)Do you buy my arguments? Please read the comments to this article also, then vote in this anonymouse poll.Loading...And now you can bet and then make your argument to try and shift future respondents and earn mana for doing so.This market resolves in a month to the final agree % + weakly agree % of the above poll. Hopefully we can see it move in real time if someone makes a convincing argument.I think this is a really cool real time debate format and we should have it at EAG. Relevant docThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:55 None full 6071
whEmrvK9pzioeircr_NL_EA_EA EA - Will AI end everything? A guide to guessing | EAG Bay Area 23 by Katja Grace Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will AI end everything? A guide to guessing | EAG Bay Area 23, published by Katja Grace on May 25, 2023 on The Effective Altruism Forum.Below is the video and transcript for my talk from EA Global, Bay Area 2023. It's about how likely AI is to cause human extinction or the like, but mostly a guide to how I think about the question and what goes into my probability estimate (though I do get to a number!)The most common feedback I got for the talk was that it helped people feel like they could think about these things themselves rather than deferring. Which may be a modern art type thing, like "seeing this, I feel that my five year old could do it", but either way I hope this empowers more thinking about this topic, which I view as crucially important.You can see the slides for this talk hereIntroductionHello, it's good to be here in Oakland. The first time I came to Oakland was in 2008, which was my first day in America. I met Anna Salamon, who was a stranger and who had kindly agreed to look after me for a couple of days. She thought that I should stop what I was doing and work on AI risk, which she explained to me. I wasn't convinced, and I said I'd think about it; and I've been thinking about it. And I'm not always that good at finishing things quickly, but I wanted to give you an update on my thoughts.Two things to talk aboutBefore we get into it, I want to say two things about what we're talking about. There are two things in this vicinity that people are often talking about. One of them is whether artificial intelligence is going to literally murder all of the humans. And the other one is whether the long-term future – which seems like it could be pretty great in lots of ways – whether humans will get to bring about the great things that they hope for there, or whether artificial intelligence will take control of it and we won't get to do those things.I'm mostly interested in the latter, but if you are interested in the former, I think they're pretty closely related to one another, so hopefully there'll also be useful things.The second thing I want to say is: often people think AI risk is a pretty abstract topic. And I just wanted to note that abstraction is a thing about your mind, not the world. When things happen in the world, they're very concrete and specific, and saying that AI risk is abstract is kind of like saying World War II is abstract because it's 1935 and it hasn't happened yet. Now, if it happens, it will be very concrete and bad. It'll be the worst thing that's ever happened. The rest of the talk's gonna be pretty abstract, but I just wanted to note that.A picture of the landscape of guessingSo this is a picture. You shouldn't worry about reading all the details of it. It's just a picture of the landscape of guessing [about] this, as I see it. There are a bunch of different scenarios that could happen where AI destroys the future. There’s a bunch of evidence for those different things happening. You can come up with your own guess about it, and then there are a bunch of other people who have also come up with guesses.I think it's pretty good to come up with your own guess before, or at some point separate to, mixing it up with everyone else's guesses. I think there are three reasons that's good.First, I think it's just helpful for the whole community if numerous people have thought through these things. I think it's easy to end up having an information cascade situation where a lot of people are deferring to other people.Secondly, I think if you want to think about any of these AI risk-type things, it's just much easier to be motivated about a problem if you really understand why it's a problem and therefore really believe in it.Thirdly, I think it's easier to find things to do about a problem if you understand exactly why it's a p...]]>
Katja Grace https://forum.effectivealtruism.org/posts/whEmrvK9pzioeircr/will-ai-end-everything-a-guide-to-guessing-or-eag-bay-area Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will AI end everything? A guide to guessing | EAG Bay Area 23, published by Katja Grace on May 25, 2023 on The Effective Altruism Forum.Below is the video and transcript for my talk from EA Global, Bay Area 2023. It's about how likely AI is to cause human extinction or the like, but mostly a guide to how I think about the question and what goes into my probability estimate (though I do get to a number!)The most common feedback I got for the talk was that it helped people feel like they could think about these things themselves rather than deferring. Which may be a modern art type thing, like "seeing this, I feel that my five year old could do it", but either way I hope this empowers more thinking about this topic, which I view as crucially important.You can see the slides for this talk hereIntroductionHello, it's good to be here in Oakland. The first time I came to Oakland was in 2008, which was my first day in America. I met Anna Salamon, who was a stranger and who had kindly agreed to look after me for a couple of days. She thought that I should stop what I was doing and work on AI risk, which she explained to me. I wasn't convinced, and I said I'd think about it; and I've been thinking about it. And I'm not always that good at finishing things quickly, but I wanted to give you an update on my thoughts.Two things to talk aboutBefore we get into it, I want to say two things about what we're talking about. There are two things in this vicinity that people are often talking about. One of them is whether artificial intelligence is going to literally murder all of the humans. And the other one is whether the long-term future – which seems like it could be pretty great in lots of ways – whether humans will get to bring about the great things that they hope for there, or whether artificial intelligence will take control of it and we won't get to do those things.I'm mostly interested in the latter, but if you are interested in the former, I think they're pretty closely related to one another, so hopefully there'll also be useful things.The second thing I want to say is: often people think AI risk is a pretty abstract topic. And I just wanted to note that abstraction is a thing about your mind, not the world. When things happen in the world, they're very concrete and specific, and saying that AI risk is abstract is kind of like saying World War II is abstract because it's 1935 and it hasn't happened yet. Now, if it happens, it will be very concrete and bad. It'll be the worst thing that's ever happened. The rest of the talk's gonna be pretty abstract, but I just wanted to note that.A picture of the landscape of guessingSo this is a picture. You shouldn't worry about reading all the details of it. It's just a picture of the landscape of guessing [about] this, as I see it. There are a bunch of different scenarios that could happen where AI destroys the future. There’s a bunch of evidence for those different things happening. You can come up with your own guess about it, and then there are a bunch of other people who have also come up with guesses.I think it's pretty good to come up with your own guess before, or at some point separate to, mixing it up with everyone else's guesses. I think there are three reasons that's good.First, I think it's just helpful for the whole community if numerous people have thought through these things. I think it's easy to end up having an information cascade situation where a lot of people are deferring to other people.Secondly, I think if you want to think about any of these AI risk-type things, it's just much easier to be motivated about a problem if you really understand why it's a problem and therefore really believe in it.Thirdly, I think it's easier to find things to do about a problem if you understand exactly why it's a p...]]>
Fri, 26 May 2023 00:03:00 +0000 EA - Will AI end everything? A guide to guessing | EAG Bay Area 23 by Katja Grace Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will AI end everything? A guide to guessing | EAG Bay Area 23, published by Katja Grace on May 25, 2023 on The Effective Altruism Forum.Below is the video and transcript for my talk from EA Global, Bay Area 2023. It's about how likely AI is to cause human extinction or the like, but mostly a guide to how I think about the question and what goes into my probability estimate (though I do get to a number!)The most common feedback I got for the talk was that it helped people feel like they could think about these things themselves rather than deferring. Which may be a modern art type thing, like "seeing this, I feel that my five year old could do it", but either way I hope this empowers more thinking about this topic, which I view as crucially important.You can see the slides for this talk hereIntroductionHello, it's good to be here in Oakland. The first time I came to Oakland was in 2008, which was my first day in America. I met Anna Salamon, who was a stranger and who had kindly agreed to look after me for a couple of days. She thought that I should stop what I was doing and work on AI risk, which she explained to me. I wasn't convinced, and I said I'd think about it; and I've been thinking about it. And I'm not always that good at finishing things quickly, but I wanted to give you an update on my thoughts.Two things to talk aboutBefore we get into it, I want to say two things about what we're talking about. There are two things in this vicinity that people are often talking about. One of them is whether artificial intelligence is going to literally murder all of the humans. And the other one is whether the long-term future – which seems like it could be pretty great in lots of ways – whether humans will get to bring about the great things that they hope for there, or whether artificial intelligence will take control of it and we won't get to do those things.I'm mostly interested in the latter, but if you are interested in the former, I think they're pretty closely related to one another, so hopefully there'll also be useful things.The second thing I want to say is: often people think AI risk is a pretty abstract topic. And I just wanted to note that abstraction is a thing about your mind, not the world. When things happen in the world, they're very concrete and specific, and saying that AI risk is abstract is kind of like saying World War II is abstract because it's 1935 and it hasn't happened yet. Now, if it happens, it will be very concrete and bad. It'll be the worst thing that's ever happened. The rest of the talk's gonna be pretty abstract, but I just wanted to note that.A picture of the landscape of guessingSo this is a picture. You shouldn't worry about reading all the details of it. It's just a picture of the landscape of guessing [about] this, as I see it. There are a bunch of different scenarios that could happen where AI destroys the future. There’s a bunch of evidence for those different things happening. You can come up with your own guess about it, and then there are a bunch of other people who have also come up with guesses.I think it's pretty good to come up with your own guess before, or at some point separate to, mixing it up with everyone else's guesses. I think there are three reasons that's good.First, I think it's just helpful for the whole community if numerous people have thought through these things. I think it's easy to end up having an information cascade situation where a lot of people are deferring to other people.Secondly, I think if you want to think about any of these AI risk-type things, it's just much easier to be motivated about a problem if you really understand why it's a problem and therefore really believe in it.Thirdly, I think it's easier to find things to do about a problem if you understand exactly why it's a p...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will AI end everything? A guide to guessing | EAG Bay Area 23, published by Katja Grace on May 25, 2023 on The Effective Altruism Forum.Below is the video and transcript for my talk from EA Global, Bay Area 2023. It's about how likely AI is to cause human extinction or the like, but mostly a guide to how I think about the question and what goes into my probability estimate (though I do get to a number!)The most common feedback I got for the talk was that it helped people feel like they could think about these things themselves rather than deferring. Which may be a modern art type thing, like "seeing this, I feel that my five year old could do it", but either way I hope this empowers more thinking about this topic, which I view as crucially important.You can see the slides for this talk hereIntroductionHello, it's good to be here in Oakland. The first time I came to Oakland was in 2008, which was my first day in America. I met Anna Salamon, who was a stranger and who had kindly agreed to look after me for a couple of days. She thought that I should stop what I was doing and work on AI risk, which she explained to me. I wasn't convinced, and I said I'd think about it; and I've been thinking about it. And I'm not always that good at finishing things quickly, but I wanted to give you an update on my thoughts.Two things to talk aboutBefore we get into it, I want to say two things about what we're talking about. There are two things in this vicinity that people are often talking about. One of them is whether artificial intelligence is going to literally murder all of the humans. And the other one is whether the long-term future – which seems like it could be pretty great in lots of ways – whether humans will get to bring about the great things that they hope for there, or whether artificial intelligence will take control of it and we won't get to do those things.I'm mostly interested in the latter, but if you are interested in the former, I think they're pretty closely related to one another, so hopefully there'll also be useful things.The second thing I want to say is: often people think AI risk is a pretty abstract topic. And I just wanted to note that abstraction is a thing about your mind, not the world. When things happen in the world, they're very concrete and specific, and saying that AI risk is abstract is kind of like saying World War II is abstract because it's 1935 and it hasn't happened yet. Now, if it happens, it will be very concrete and bad. It'll be the worst thing that's ever happened. The rest of the talk's gonna be pretty abstract, but I just wanted to note that.A picture of the landscape of guessingSo this is a picture. You shouldn't worry about reading all the details of it. It's just a picture of the landscape of guessing [about] this, as I see it. There are a bunch of different scenarios that could happen where AI destroys the future. There’s a bunch of evidence for those different things happening. You can come up with your own guess about it, and then there are a bunch of other people who have also come up with guesses.I think it's pretty good to come up with your own guess before, or at some point separate to, mixing it up with everyone else's guesses. I think there are three reasons that's good.First, I think it's just helpful for the whole community if numerous people have thought through these things. I think it's easy to end up having an information cascade situation where a lot of people are deferring to other people.Secondly, I think if you want to think about any of these AI risk-type things, it's just much easier to be motivated about a problem if you really understand why it's a problem and therefore really believe in it.Thirdly, I think it's easier to find things to do about a problem if you understand exactly why it's a p...]]>
Katja Grace https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 28:18 None full 6066
PwdjcsoeuH9E9hB8g_NL_EA_EA EA - Introducing Allied Scholars for Animal Protection by Dr Faraz Harsini Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Allied Scholars for Animal Protection, published by Dr Faraz Harsini on May 24, 2023 on The Effective Altruism Forum.We’re excited to introduce Allied Scholars for Animal Protection, a nonprofit creating a unified infrastructure for effective and sustainable animal advocacy at universities. Our mission is to organize, train, and mentor students who are interested in advocating for animal welfare and pursuing impactful careers.The ProblemUniversities play a critical role in shaping the future of society and effecting systemic change. As future leaders, college students hold immense potential for driving progress and cultural transformation.Unfortunately, animal advocacy in universities tends to be limited, sporadic, and unsustainable. The existing clubs on campuses operate independently with no coordination, and students are often hindered by a lack of time, training, experience, and support. Often, when active students graduate, their animal advocacy clubs become inactive. Much time and effort are wasted due to a lack of continuity and longevity of animal advocacy on campuses because students have to reinvent the proverbial wheel each time they restart a group.One of the worst consequences of this is that much talent goes untapped due to insufficient education and inspiration for vegans to choose effective and impactful careers. The EA Community is working hard to reach this talent through career advising and community building. We believe that on-the-ground support for university animal rights clubs can complement EA recruitment efforts and can encourage vegan college students to engage with critical intellectual work being done by the EA community. We also think that community building work focused specifically on animal advocacy can help reach vegans who might not be as interested in other cause areas or the broader EA project.Some animal advocacy organizations provide opportunities for students to volunteer, but enabling a strong campus movement is not the sole focus of these organizations. Having a single organization dedicated to providing infrastructure for campus activism would therefore fill a highly neglected niche in the current animal advocacy ecosystem.Building a strong campus animal advocacy movement is also highly tractable. There are many vegan students out there who care deeply about these issues but do not feel they have the knowledge or resources to organize a group of their own. By providing the needed support, we can dramatically lower the barrier of entry to vegan advocacy and broaden the pool of talent going towards highly impactful careers.Our ApproachAnimal organizations often focus on training individual students rather than on building a sustainable vegan community. ASAP takes a more holistic approach. Our strategy for constructing a strong campus animal movement involves the following:Building a growing vegan community while investing in and strengthening individuals. This means conducting outreach to vegans who might like to become more active as advocates, and training vegans to conduct effective outreach to nonvegans.Providing on-the-ground support to student groups.Collecting thousands of signatures through petitions.Streamlining the process of starting and running student animal advocacy groups.Facilitating systemic and long-term educational frameworks, rather than just one-time events. We will provide educational seminars to empower vegans and educate the general student population, with a special emphasis on plant-based nutrition for future healthcare professionals.Fighting speciesism and humane-washing while promoting plant-based options.By facilitating more effective student advocacy, we believe ASAP can help produce more influential vegans who push for change. We want to inspire the next Eric Adams, Co...]]>
Dr Faraz Harsini https://forum.effectivealtruism.org/posts/PwdjcsoeuH9E9hB8g/introducing-allied-scholars-for-animal-protection Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Allied Scholars for Animal Protection, published by Dr Faraz Harsini on May 24, 2023 on The Effective Altruism Forum.We’re excited to introduce Allied Scholars for Animal Protection, a nonprofit creating a unified infrastructure for effective and sustainable animal advocacy at universities. Our mission is to organize, train, and mentor students who are interested in advocating for animal welfare and pursuing impactful careers.The ProblemUniversities play a critical role in shaping the future of society and effecting systemic change. As future leaders, college students hold immense potential for driving progress and cultural transformation.Unfortunately, animal advocacy in universities tends to be limited, sporadic, and unsustainable. The existing clubs on campuses operate independently with no coordination, and students are often hindered by a lack of time, training, experience, and support. Often, when active students graduate, their animal advocacy clubs become inactive. Much time and effort are wasted due to a lack of continuity and longevity of animal advocacy on campuses because students have to reinvent the proverbial wheel each time they restart a group.One of the worst consequences of this is that much talent goes untapped due to insufficient education and inspiration for vegans to choose effective and impactful careers. The EA Community is working hard to reach this talent through career advising and community building. We believe that on-the-ground support for university animal rights clubs can complement EA recruitment efforts and can encourage vegan college students to engage with critical intellectual work being done by the EA community. We also think that community building work focused specifically on animal advocacy can help reach vegans who might not be as interested in other cause areas or the broader EA project.Some animal advocacy organizations provide opportunities for students to volunteer, but enabling a strong campus movement is not the sole focus of these organizations. Having a single organization dedicated to providing infrastructure for campus activism would therefore fill a highly neglected niche in the current animal advocacy ecosystem.Building a strong campus animal advocacy movement is also highly tractable. There are many vegan students out there who care deeply about these issues but do not feel they have the knowledge or resources to organize a group of their own. By providing the needed support, we can dramatically lower the barrier of entry to vegan advocacy and broaden the pool of talent going towards highly impactful careers.Our ApproachAnimal organizations often focus on training individual students rather than on building a sustainable vegan community. ASAP takes a more holistic approach. Our strategy for constructing a strong campus animal movement involves the following:Building a growing vegan community while investing in and strengthening individuals. This means conducting outreach to vegans who might like to become more active as advocates, and training vegans to conduct effective outreach to nonvegans.Providing on-the-ground support to student groups.Collecting thousands of signatures through petitions.Streamlining the process of starting and running student animal advocacy groups.Facilitating systemic and long-term educational frameworks, rather than just one-time events. We will provide educational seminars to empower vegans and educate the general student population, with a special emphasis on plant-based nutrition for future healthcare professionals.Fighting speciesism and humane-washing while promoting plant-based options.By facilitating more effective student advocacy, we believe ASAP can help produce more influential vegans who push for change. We want to inspire the next Eric Adams, Co...]]>
Thu, 25 May 2023 13:31:05 +0000 EA - Introducing Allied Scholars for Animal Protection by Dr Faraz Harsini Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Allied Scholars for Animal Protection, published by Dr Faraz Harsini on May 24, 2023 on The Effective Altruism Forum.We’re excited to introduce Allied Scholars for Animal Protection, a nonprofit creating a unified infrastructure for effective and sustainable animal advocacy at universities. Our mission is to organize, train, and mentor students who are interested in advocating for animal welfare and pursuing impactful careers.The ProblemUniversities play a critical role in shaping the future of society and effecting systemic change. As future leaders, college students hold immense potential for driving progress and cultural transformation.Unfortunately, animal advocacy in universities tends to be limited, sporadic, and unsustainable. The existing clubs on campuses operate independently with no coordination, and students are often hindered by a lack of time, training, experience, and support. Often, when active students graduate, their animal advocacy clubs become inactive. Much time and effort are wasted due to a lack of continuity and longevity of animal advocacy on campuses because students have to reinvent the proverbial wheel each time they restart a group.One of the worst consequences of this is that much talent goes untapped due to insufficient education and inspiration for vegans to choose effective and impactful careers. The EA Community is working hard to reach this talent through career advising and community building. We believe that on-the-ground support for university animal rights clubs can complement EA recruitment efforts and can encourage vegan college students to engage with critical intellectual work being done by the EA community. We also think that community building work focused specifically on animal advocacy can help reach vegans who might not be as interested in other cause areas or the broader EA project.Some animal advocacy organizations provide opportunities for students to volunteer, but enabling a strong campus movement is not the sole focus of these organizations. Having a single organization dedicated to providing infrastructure for campus activism would therefore fill a highly neglected niche in the current animal advocacy ecosystem.Building a strong campus animal advocacy movement is also highly tractable. There are many vegan students out there who care deeply about these issues but do not feel they have the knowledge or resources to organize a group of their own. By providing the needed support, we can dramatically lower the barrier of entry to vegan advocacy and broaden the pool of talent going towards highly impactful careers.Our ApproachAnimal organizations often focus on training individual students rather than on building a sustainable vegan community. ASAP takes a more holistic approach. Our strategy for constructing a strong campus animal movement involves the following:Building a growing vegan community while investing in and strengthening individuals. This means conducting outreach to vegans who might like to become more active as advocates, and training vegans to conduct effective outreach to nonvegans.Providing on-the-ground support to student groups.Collecting thousands of signatures through petitions.Streamlining the process of starting and running student animal advocacy groups.Facilitating systemic and long-term educational frameworks, rather than just one-time events. We will provide educational seminars to empower vegans and educate the general student population, with a special emphasis on plant-based nutrition for future healthcare professionals.Fighting speciesism and humane-washing while promoting plant-based options.By facilitating more effective student advocacy, we believe ASAP can help produce more influential vegans who push for change. We want to inspire the next Eric Adams, Co...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Allied Scholars for Animal Protection, published by Dr Faraz Harsini on May 24, 2023 on The Effective Altruism Forum.We’re excited to introduce Allied Scholars for Animal Protection, a nonprofit creating a unified infrastructure for effective and sustainable animal advocacy at universities. Our mission is to organize, train, and mentor students who are interested in advocating for animal welfare and pursuing impactful careers.The ProblemUniversities play a critical role in shaping the future of society and effecting systemic change. As future leaders, college students hold immense potential for driving progress and cultural transformation.Unfortunately, animal advocacy in universities tends to be limited, sporadic, and unsustainable. The existing clubs on campuses operate independently with no coordination, and students are often hindered by a lack of time, training, experience, and support. Often, when active students graduate, their animal advocacy clubs become inactive. Much time and effort are wasted due to a lack of continuity and longevity of animal advocacy on campuses because students have to reinvent the proverbial wheel each time they restart a group.One of the worst consequences of this is that much talent goes untapped due to insufficient education and inspiration for vegans to choose effective and impactful careers. The EA Community is working hard to reach this talent through career advising and community building. We believe that on-the-ground support for university animal rights clubs can complement EA recruitment efforts and can encourage vegan college students to engage with critical intellectual work being done by the EA community. We also think that community building work focused specifically on animal advocacy can help reach vegans who might not be as interested in other cause areas or the broader EA project.Some animal advocacy organizations provide opportunities for students to volunteer, but enabling a strong campus movement is not the sole focus of these organizations. Having a single organization dedicated to providing infrastructure for campus activism would therefore fill a highly neglected niche in the current animal advocacy ecosystem.Building a strong campus animal advocacy movement is also highly tractable. There are many vegan students out there who care deeply about these issues but do not feel they have the knowledge or resources to organize a group of their own. By providing the needed support, we can dramatically lower the barrier of entry to vegan advocacy and broaden the pool of talent going towards highly impactful careers.Our ApproachAnimal organizations often focus on training individual students rather than on building a sustainable vegan community. ASAP takes a more holistic approach. Our strategy for constructing a strong campus animal movement involves the following:Building a growing vegan community while investing in and strengthening individuals. This means conducting outreach to vegans who might like to become more active as advocates, and training vegans to conduct effective outreach to nonvegans.Providing on-the-ground support to student groups.Collecting thousands of signatures through petitions.Streamlining the process of starting and running student animal advocacy groups.Facilitating systemic and long-term educational frameworks, rather than just one-time events. We will provide educational seminars to empower vegans and educate the general student population, with a special emphasis on plant-based nutrition for future healthcare professionals.Fighting speciesism and humane-washing while promoting plant-based options.By facilitating more effective student advocacy, we believe ASAP can help produce more influential vegans who push for change. We want to inspire the next Eric Adams, Co...]]>
Dr Faraz Harsini https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:39 None full 6060
yMptv5msFnnfESCqm_NL_EA_EA EA - How I solved my problems with low energy (or: burnout) by Luise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I solved my problems with low energy (or: burnout), published by Luise on May 24, 2023 on The Effective Altruism Forum.I had really bad problems with low energy and tiredness for about 2 years. This post is about what I’ve learned. This is not a general guide to solving any and all energy problems since I mostly only have one data point. I still hope reading about my personal problems and solutions will help people solve theirs.SummaryI had basically two periods of very low energy: college and last summer.In college, I felt tired literally all day, especially as soon as I tried to study. I was also depressed.In the summer, I was very happy but I had days at a time where I wouldn’t do anything productive. All tasks seemed unbearably hard to me, sometimes even writing a simple email. I also became introverted.I thought I was being lazy and just needed to “get over it”. Starting to notice I had a ‘real’ problem was a big step forward.I learned that I actually had multiple hard-to-disentangle problems:I’m sensitive to disruptions in my sleep leading to feeling tired.Certain types of work that are both hard and demotivating also make me feel physically tired.My biggest realization was that I was burned out much of last summer. This was because I didn’t give myself rest, even though I didn’t see it that way at the time. This led to the unproductive days (not laziness).In college, I lived a weird lifestyle regarding sleep, social life, and other things. Some part of this was probably bad. Having common sense would’ve helped.I can now notice symptoms of overloading myself before it leads to burnout. Learning to distinguish this from “being lazy” phenomenologically was crucial.My problems had nothing to do with physical health or stress.When experimenting to solve my problems, it was useful for me to track when I had unproductive days. This way I could be sure how much the experiments impacted me.What my problems were like (so you know whether they’re similar to yours)A typical low-energy day while I was in college in first year:I wake up at 12 pm. I slept 9 hours but I’m tired. It doesn’t go away even after an hour. I open my math book. But literally as soon as I read the first sentences, I feel so tired that I physically want to lie down and close my eyes. It feels very hard to keep reading. Often I just stare at the wood of the table right next to my book. Not doing anything, just avoiding thinking. Even staring at the wall for 10 minutes sounds great right now. I never really stop feeling tired until it’s night again.A typical low-energy day while I was working on EA community building projects in the summer:I have to do a task I usually love doing, maybe reading applications for an event I’m running. But as soon as I look at the wall of Airtable fields and text, the task feels way too large. I will have to think deeply about these answers people wrote in the application form and make difficult decisions, drawing on information from over 20 fields. That depth of thinking and amount of working memory sounds way too hard right now. I try, but 3 minutes later I give up. I decide to read something instead. I feel the strong desire to sit in a comfy bean bag and get a blanket. Even sitting upright in an office chair feels hard. I start reading. The text requires slight cognitive effort on the part of the reader to understand. It sounds too hard. I stare at a sentence, willing myself to think. I give up after 3 sentences.It’s lunchtime. I used to love lunchtime at the office because I get to chat with all these super cool people and because I’m quite extraverted. But now the idea of a group conversation sounds way too much. I don’t even want to chat to a single person. I would have to be ‘switched on’, think of things to say, smile, and I just don’t have...]]>
Luise https://forum.effectivealtruism.org/posts/yMptv5msFnnfESCqm/how-i-solved-my-problems-with-low-energy-or-burnout Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I solved my problems with low energy (or: burnout), published by Luise on May 24, 2023 on The Effective Altruism Forum.I had really bad problems with low energy and tiredness for about 2 years. This post is about what I’ve learned. This is not a general guide to solving any and all energy problems since I mostly only have one data point. I still hope reading about my personal problems and solutions will help people solve theirs.SummaryI had basically two periods of very low energy: college and last summer.In college, I felt tired literally all day, especially as soon as I tried to study. I was also depressed.In the summer, I was very happy but I had days at a time where I wouldn’t do anything productive. All tasks seemed unbearably hard to me, sometimes even writing a simple email. I also became introverted.I thought I was being lazy and just needed to “get over it”. Starting to notice I had a ‘real’ problem was a big step forward.I learned that I actually had multiple hard-to-disentangle problems:I’m sensitive to disruptions in my sleep leading to feeling tired.Certain types of work that are both hard and demotivating also make me feel physically tired.My biggest realization was that I was burned out much of last summer. This was because I didn’t give myself rest, even though I didn’t see it that way at the time. This led to the unproductive days (not laziness).In college, I lived a weird lifestyle regarding sleep, social life, and other things. Some part of this was probably bad. Having common sense would’ve helped.I can now notice symptoms of overloading myself before it leads to burnout. Learning to distinguish this from “being lazy” phenomenologically was crucial.My problems had nothing to do with physical health or stress.When experimenting to solve my problems, it was useful for me to track when I had unproductive days. This way I could be sure how much the experiments impacted me.What my problems were like (so you know whether they’re similar to yours)A typical low-energy day while I was in college in first year:I wake up at 12 pm. I slept 9 hours but I’m tired. It doesn’t go away even after an hour. I open my math book. But literally as soon as I read the first sentences, I feel so tired that I physically want to lie down and close my eyes. It feels very hard to keep reading. Often I just stare at the wood of the table right next to my book. Not doing anything, just avoiding thinking. Even staring at the wall for 10 minutes sounds great right now. I never really stop feeling tired until it’s night again.A typical low-energy day while I was working on EA community building projects in the summer:I have to do a task I usually love doing, maybe reading applications for an event I’m running. But as soon as I look at the wall of Airtable fields and text, the task feels way too large. I will have to think deeply about these answers people wrote in the application form and make difficult decisions, drawing on information from over 20 fields. That depth of thinking and amount of working memory sounds way too hard right now. I try, but 3 minutes later I give up. I decide to read something instead. I feel the strong desire to sit in a comfy bean bag and get a blanket. Even sitting upright in an office chair feels hard. I start reading. The text requires slight cognitive effort on the part of the reader to understand. It sounds too hard. I stare at a sentence, willing myself to think. I give up after 3 sentences.It’s lunchtime. I used to love lunchtime at the office because I get to chat with all these super cool people and because I’m quite extraverted. But now the idea of a group conversation sounds way too much. I don’t even want to chat to a single person. I would have to be ‘switched on’, think of things to say, smile, and I just don’t have...]]>
Thu, 25 May 2023 08:15:09 +0000 EA - How I solved my problems with low energy (or: burnout) by Luise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I solved my problems with low energy (or: burnout), published by Luise on May 24, 2023 on The Effective Altruism Forum.I had really bad problems with low energy and tiredness for about 2 years. This post is about what I’ve learned. This is not a general guide to solving any and all energy problems since I mostly only have one data point. I still hope reading about my personal problems and solutions will help people solve theirs.SummaryI had basically two periods of very low energy: college and last summer.In college, I felt tired literally all day, especially as soon as I tried to study. I was also depressed.In the summer, I was very happy but I had days at a time where I wouldn’t do anything productive. All tasks seemed unbearably hard to me, sometimes even writing a simple email. I also became introverted.I thought I was being lazy and just needed to “get over it”. Starting to notice I had a ‘real’ problem was a big step forward.I learned that I actually had multiple hard-to-disentangle problems:I’m sensitive to disruptions in my sleep leading to feeling tired.Certain types of work that are both hard and demotivating also make me feel physically tired.My biggest realization was that I was burned out much of last summer. This was because I didn’t give myself rest, even though I didn’t see it that way at the time. This led to the unproductive days (not laziness).In college, I lived a weird lifestyle regarding sleep, social life, and other things. Some part of this was probably bad. Having common sense would’ve helped.I can now notice symptoms of overloading myself before it leads to burnout. Learning to distinguish this from “being lazy” phenomenologically was crucial.My problems had nothing to do with physical health or stress.When experimenting to solve my problems, it was useful for me to track when I had unproductive days. This way I could be sure how much the experiments impacted me.What my problems were like (so you know whether they’re similar to yours)A typical low-energy day while I was in college in first year:I wake up at 12 pm. I slept 9 hours but I’m tired. It doesn’t go away even after an hour. I open my math book. But literally as soon as I read the first sentences, I feel so tired that I physically want to lie down and close my eyes. It feels very hard to keep reading. Often I just stare at the wood of the table right next to my book. Not doing anything, just avoiding thinking. Even staring at the wall for 10 minutes sounds great right now. I never really stop feeling tired until it’s night again.A typical low-energy day while I was working on EA community building projects in the summer:I have to do a task I usually love doing, maybe reading applications for an event I’m running. But as soon as I look at the wall of Airtable fields and text, the task feels way too large. I will have to think deeply about these answers people wrote in the application form and make difficult decisions, drawing on information from over 20 fields. That depth of thinking and amount of working memory sounds way too hard right now. I try, but 3 minutes later I give up. I decide to read something instead. I feel the strong desire to sit in a comfy bean bag and get a blanket. Even sitting upright in an office chair feels hard. I start reading. The text requires slight cognitive effort on the part of the reader to understand. It sounds too hard. I stare at a sentence, willing myself to think. I give up after 3 sentences.It’s lunchtime. I used to love lunchtime at the office because I get to chat with all these super cool people and because I’m quite extraverted. But now the idea of a group conversation sounds way too much. I don’t even want to chat to a single person. I would have to be ‘switched on’, think of things to say, smile, and I just don’t have...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I solved my problems with low energy (or: burnout), published by Luise on May 24, 2023 on The Effective Altruism Forum.I had really bad problems with low energy and tiredness for about 2 years. This post is about what I’ve learned. This is not a general guide to solving any and all energy problems since I mostly only have one data point. I still hope reading about my personal problems and solutions will help people solve theirs.SummaryI had basically two periods of very low energy: college and last summer.In college, I felt tired literally all day, especially as soon as I tried to study. I was also depressed.In the summer, I was very happy but I had days at a time where I wouldn’t do anything productive. All tasks seemed unbearably hard to me, sometimes even writing a simple email. I also became introverted.I thought I was being lazy and just needed to “get over it”. Starting to notice I had a ‘real’ problem was a big step forward.I learned that I actually had multiple hard-to-disentangle problems:I’m sensitive to disruptions in my sleep leading to feeling tired.Certain types of work that are both hard and demotivating also make me feel physically tired.My biggest realization was that I was burned out much of last summer. This was because I didn’t give myself rest, even though I didn’t see it that way at the time. This led to the unproductive days (not laziness).In college, I lived a weird lifestyle regarding sleep, social life, and other things. Some part of this was probably bad. Having common sense would’ve helped.I can now notice symptoms of overloading myself before it leads to burnout. Learning to distinguish this from “being lazy” phenomenologically was crucial.My problems had nothing to do with physical health or stress.When experimenting to solve my problems, it was useful for me to track when I had unproductive days. This way I could be sure how much the experiments impacted me.What my problems were like (so you know whether they’re similar to yours)A typical low-energy day while I was in college in first year:I wake up at 12 pm. I slept 9 hours but I’m tired. It doesn’t go away even after an hour. I open my math book. But literally as soon as I read the first sentences, I feel so tired that I physically want to lie down and close my eyes. It feels very hard to keep reading. Often I just stare at the wood of the table right next to my book. Not doing anything, just avoiding thinking. Even staring at the wall for 10 minutes sounds great right now. I never really stop feeling tired until it’s night again.A typical low-energy day while I was working on EA community building projects in the summer:I have to do a task I usually love doing, maybe reading applications for an event I’m running. But as soon as I look at the wall of Airtable fields and text, the task feels way too large. I will have to think deeply about these answers people wrote in the application form and make difficult decisions, drawing on information from over 20 fields. That depth of thinking and amount of working memory sounds way too hard right now. I try, but 3 minutes later I give up. I decide to read something instead. I feel the strong desire to sit in a comfy bean bag and get a blanket. Even sitting upright in an office chair feels hard. I start reading. The text requires slight cognitive effort on the part of the reader to understand. It sounds too hard. I stare at a sentence, willing myself to think. I give up after 3 sentences.It’s lunchtime. I used to love lunchtime at the office because I get to chat with all these super cool people and because I’m quite extraverted. But now the idea of a group conversation sounds way too much. I don’t even want to chat to a single person. I would have to be ‘switched on’, think of things to say, smile, and I just don’t have...]]>
Luise https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:42 None full 6058
MDNcMLQfxg2n9qXEZ_NL_EA_EA EA - AGI Catastrophe and Takeover: Some Reference Class-Based Priors by zdgroff Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI Catastrophe and Takeover: Some Reference Class-Based Priors, published by zdgroff on May 24, 2023 on The Effective Altruism Forum.This is a linkpost forI am grateful to Holly Elmore, Michael Aird, Bruce Tsai, Tamay Besiroglu, Zach Stein-Perlman, Tyler John, and Kit Harris for pointers or feedback on this document.Executive SummaryOverviewIn this document, I collect and describe reference classes for the risk of catastrophe from superhuman artificial general intelligence (AGI). On some accounts, reference classes are the best starting point for forecasts, even though they often feel unintuitive. To my knowledge, nobody has previously attempted this for risks from superhuman AGI. This is to a large degree because superhuman AGI is in a real sense unprecedented. Yet there are some reference classes or at least analogies people have cited to think about the impacts of superhuman AI, such as the impacts of human intelligence, corporations, or, increasingly, the most advanced current AI systems.My high-level takeaway is that different ways of integrating and interpreting reference classes generate priors on AGI-caused human extinction by 2070 anywhere between 1/10000 and 1/6 (mean of ~0.03%-4%). Reference classes offer a non-speculative case for concern with AGI-related risks. On this account, AGI risk is not a case of Pascal’s mugging, but most reference classes do not support greater-than-even odds of doom. The reference classes I look at generate a prior for AGI control over current human resources anywhere between 5% and 60% (mean of ~16-26%). The latter is a distinctive result of the reference class exercise: the expected degree of AGI control over the world looks to far exceed the odds of human extinction by a sizable margin on these priors. The extent of existential risk, including permanent disempowerment, should fall somewhere between these two ranges.This effort is a rough, non-academic exercise and requires a number of subjective judgment calls. At times I play a bit fast and loose with the exact model I am using; the work lacks the ideal level of theoretical grounding. Nonetheless, I think the appropriate prior is likely to look something like what I offer here. I encourage intuitive updates and do not recommend these priors as the final word.ApproachI collect sets of events that superhuman AGI-caused extinction or takeover would be plausibly representative of, ex ante. Interpreting and aggregating them requires a number of data collection decisions, the most important of which I detail here:For each reference class, I collect benchmarks for the likelihood of one or two things:Human extinctionAI capture of humanity’s available resources.Many risks and reference classes are properly thought of as annualised risks (e.g., the yearly chance of a major AI-related disaster or extinction from asteroid), but some make more sense as risks from a one-time event (e.g., the chance that the creation of a major AI-related disaster or a given asteroid hit causes human extinction). For this reason, I aggregate three types of estimates (see the full document for the latter two types of estimates):50-Year Risk (e.g. risk of a major AI disaster in 50 years)10-Year Risk (e.g. risk of a major AI disaster in 10 years)Risk Per Event (e.g. risk of a major AI disaster per invention)Given that there are dozens or hundreds of reference classes, I summarise them in a few ways:Minimum and maximumWeighted arithmetic mean (i.e., weighted average)I “winsorise”, i.e. replace 0 or 1 with the next-most extreme value.I intuitively downweight some reference classes. For details on weights, see the methodology.Weighted geometric meanFindings for Fifty-Year Impacts of Superhuman AISee the full document and spreadsheet for further details on how I arrive at these figures....]]>
zdgroff https://forum.effectivealtruism.org/posts/MDNcMLQfxg2n9qXEZ/agi-catastrophe-and-takeover-some-reference-class-based Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI Catastrophe and Takeover: Some Reference Class-Based Priors, published by zdgroff on May 24, 2023 on The Effective Altruism Forum.This is a linkpost forI am grateful to Holly Elmore, Michael Aird, Bruce Tsai, Tamay Besiroglu, Zach Stein-Perlman, Tyler John, and Kit Harris for pointers or feedback on this document.Executive SummaryOverviewIn this document, I collect and describe reference classes for the risk of catastrophe from superhuman artificial general intelligence (AGI). On some accounts, reference classes are the best starting point for forecasts, even though they often feel unintuitive. To my knowledge, nobody has previously attempted this for risks from superhuman AGI. This is to a large degree because superhuman AGI is in a real sense unprecedented. Yet there are some reference classes or at least analogies people have cited to think about the impacts of superhuman AI, such as the impacts of human intelligence, corporations, or, increasingly, the most advanced current AI systems.My high-level takeaway is that different ways of integrating and interpreting reference classes generate priors on AGI-caused human extinction by 2070 anywhere between 1/10000 and 1/6 (mean of ~0.03%-4%). Reference classes offer a non-speculative case for concern with AGI-related risks. On this account, AGI risk is not a case of Pascal’s mugging, but most reference classes do not support greater-than-even odds of doom. The reference classes I look at generate a prior for AGI control over current human resources anywhere between 5% and 60% (mean of ~16-26%). The latter is a distinctive result of the reference class exercise: the expected degree of AGI control over the world looks to far exceed the odds of human extinction by a sizable margin on these priors. The extent of existential risk, including permanent disempowerment, should fall somewhere between these two ranges.This effort is a rough, non-academic exercise and requires a number of subjective judgment calls. At times I play a bit fast and loose with the exact model I am using; the work lacks the ideal level of theoretical grounding. Nonetheless, I think the appropriate prior is likely to look something like what I offer here. I encourage intuitive updates and do not recommend these priors as the final word.ApproachI collect sets of events that superhuman AGI-caused extinction or takeover would be plausibly representative of, ex ante. Interpreting and aggregating them requires a number of data collection decisions, the most important of which I detail here:For each reference class, I collect benchmarks for the likelihood of one or two things:Human extinctionAI capture of humanity’s available resources.Many risks and reference classes are properly thought of as annualised risks (e.g., the yearly chance of a major AI-related disaster or extinction from asteroid), but some make more sense as risks from a one-time event (e.g., the chance that the creation of a major AI-related disaster or a given asteroid hit causes human extinction). For this reason, I aggregate three types of estimates (see the full document for the latter two types of estimates):50-Year Risk (e.g. risk of a major AI disaster in 50 years)10-Year Risk (e.g. risk of a major AI disaster in 10 years)Risk Per Event (e.g. risk of a major AI disaster per invention)Given that there are dozens or hundreds of reference classes, I summarise them in a few ways:Minimum and maximumWeighted arithmetic mean (i.e., weighted average)I “winsorise”, i.e. replace 0 or 1 with the next-most extreme value.I intuitively downweight some reference classes. For details on weights, see the methodology.Weighted geometric meanFindings for Fifty-Year Impacts of Superhuman AISee the full document and spreadsheet for further details on how I arrive at these figures....]]>
Thu, 25 May 2023 06:59:28 +0000 EA - AGI Catastrophe and Takeover: Some Reference Class-Based Priors by zdgroff Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI Catastrophe and Takeover: Some Reference Class-Based Priors, published by zdgroff on May 24, 2023 on The Effective Altruism Forum.This is a linkpost forI am grateful to Holly Elmore, Michael Aird, Bruce Tsai, Tamay Besiroglu, Zach Stein-Perlman, Tyler John, and Kit Harris for pointers or feedback on this document.Executive SummaryOverviewIn this document, I collect and describe reference classes for the risk of catastrophe from superhuman artificial general intelligence (AGI). On some accounts, reference classes are the best starting point for forecasts, even though they often feel unintuitive. To my knowledge, nobody has previously attempted this for risks from superhuman AGI. This is to a large degree because superhuman AGI is in a real sense unprecedented. Yet there are some reference classes or at least analogies people have cited to think about the impacts of superhuman AI, such as the impacts of human intelligence, corporations, or, increasingly, the most advanced current AI systems.My high-level takeaway is that different ways of integrating and interpreting reference classes generate priors on AGI-caused human extinction by 2070 anywhere between 1/10000 and 1/6 (mean of ~0.03%-4%). Reference classes offer a non-speculative case for concern with AGI-related risks. On this account, AGI risk is not a case of Pascal’s mugging, but most reference classes do not support greater-than-even odds of doom. The reference classes I look at generate a prior for AGI control over current human resources anywhere between 5% and 60% (mean of ~16-26%). The latter is a distinctive result of the reference class exercise: the expected degree of AGI control over the world looks to far exceed the odds of human extinction by a sizable margin on these priors. The extent of existential risk, including permanent disempowerment, should fall somewhere between these two ranges.This effort is a rough, non-academic exercise and requires a number of subjective judgment calls. At times I play a bit fast and loose with the exact model I am using; the work lacks the ideal level of theoretical grounding. Nonetheless, I think the appropriate prior is likely to look something like what I offer here. I encourage intuitive updates and do not recommend these priors as the final word.ApproachI collect sets of events that superhuman AGI-caused extinction or takeover would be plausibly representative of, ex ante. Interpreting and aggregating them requires a number of data collection decisions, the most important of which I detail here:For each reference class, I collect benchmarks for the likelihood of one or two things:Human extinctionAI capture of humanity’s available resources.Many risks and reference classes are properly thought of as annualised risks (e.g., the yearly chance of a major AI-related disaster or extinction from asteroid), but some make more sense as risks from a one-time event (e.g., the chance that the creation of a major AI-related disaster or a given asteroid hit causes human extinction). For this reason, I aggregate three types of estimates (see the full document for the latter two types of estimates):50-Year Risk (e.g. risk of a major AI disaster in 50 years)10-Year Risk (e.g. risk of a major AI disaster in 10 years)Risk Per Event (e.g. risk of a major AI disaster per invention)Given that there are dozens or hundreds of reference classes, I summarise them in a few ways:Minimum and maximumWeighted arithmetic mean (i.e., weighted average)I “winsorise”, i.e. replace 0 or 1 with the next-most extreme value.I intuitively downweight some reference classes. For details on weights, see the methodology.Weighted geometric meanFindings for Fifty-Year Impacts of Superhuman AISee the full document and spreadsheet for further details on how I arrive at these figures....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI Catastrophe and Takeover: Some Reference Class-Based Priors, published by zdgroff on May 24, 2023 on The Effective Altruism Forum.This is a linkpost forI am grateful to Holly Elmore, Michael Aird, Bruce Tsai, Tamay Besiroglu, Zach Stein-Perlman, Tyler John, and Kit Harris for pointers or feedback on this document.Executive SummaryOverviewIn this document, I collect and describe reference classes for the risk of catastrophe from superhuman artificial general intelligence (AGI). On some accounts, reference classes are the best starting point for forecasts, even though they often feel unintuitive. To my knowledge, nobody has previously attempted this for risks from superhuman AGI. This is to a large degree because superhuman AGI is in a real sense unprecedented. Yet there are some reference classes or at least analogies people have cited to think about the impacts of superhuman AI, such as the impacts of human intelligence, corporations, or, increasingly, the most advanced current AI systems.My high-level takeaway is that different ways of integrating and interpreting reference classes generate priors on AGI-caused human extinction by 2070 anywhere between 1/10000 and 1/6 (mean of ~0.03%-4%). Reference classes offer a non-speculative case for concern with AGI-related risks. On this account, AGI risk is not a case of Pascal’s mugging, but most reference classes do not support greater-than-even odds of doom. The reference classes I look at generate a prior for AGI control over current human resources anywhere between 5% and 60% (mean of ~16-26%). The latter is a distinctive result of the reference class exercise: the expected degree of AGI control over the world looks to far exceed the odds of human extinction by a sizable margin on these priors. The extent of existential risk, including permanent disempowerment, should fall somewhere between these two ranges.This effort is a rough, non-academic exercise and requires a number of subjective judgment calls. At times I play a bit fast and loose with the exact model I am using; the work lacks the ideal level of theoretical grounding. Nonetheless, I think the appropriate prior is likely to look something like what I offer here. I encourage intuitive updates and do not recommend these priors as the final word.ApproachI collect sets of events that superhuman AGI-caused extinction or takeover would be plausibly representative of, ex ante. Interpreting and aggregating them requires a number of data collection decisions, the most important of which I detail here:For each reference class, I collect benchmarks for the likelihood of one or two things:Human extinctionAI capture of humanity’s available resources.Many risks and reference classes are properly thought of as annualised risks (e.g., the yearly chance of a major AI-related disaster or extinction from asteroid), but some make more sense as risks from a one-time event (e.g., the chance that the creation of a major AI-related disaster or a given asteroid hit causes human extinction). For this reason, I aggregate three types of estimates (see the full document for the latter two types of estimates):50-Year Risk (e.g. risk of a major AI disaster in 50 years)10-Year Risk (e.g. risk of a major AI disaster in 10 years)Risk Per Event (e.g. risk of a major AI disaster per invention)Given that there are dozens or hundreds of reference classes, I summarise them in a few ways:Minimum and maximumWeighted arithmetic mean (i.e., weighted average)I “winsorise”, i.e. replace 0 or 1 with the next-most extreme value.I intuitively downweight some reference classes. For details on weights, see the methodology.Weighted geometric meanFindings for Fifty-Year Impacts of Superhuman AISee the full document and spreadsheet for further details on how I arrive at these figures....]]>
zdgroff https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 12:40 None full 6057
s5W3FTYYoR4hvnL8h_NL_EA_EA EA - New s-risks audiobook available now by Alistair Webster Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New s-risks audiobook available now, published by Alistair Webster on May 24, 2023 on The Effective Altruism Forum.Tobias Baumann's first-of-its-kind introduction to s-risks, Avoiding the Worst: How to Prevent a Moral Catastrophe is now available to listen to for free.Professionally narrated by Adrian Nelson, the full audiobook is out now on Audible and other audiobook stores. Additionally, a captioned video can be listened to for free on the CRS YouTube channel.Running at just 2 hours and 40 minutes, the audiobook packs in a comprehensive introduction to the topic, explaining what s-risks are, whether we should focus on them, and what we can do now to reduce the likelihood of s-risks occurring.The eBook is also available in various formats, or you can read a PDF version.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Alistair Webster https://forum.effectivealtruism.org/posts/s5W3FTYYoR4hvnL8h/new-s-risks-audiobook-available-now Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New s-risks audiobook available now, published by Alistair Webster on May 24, 2023 on The Effective Altruism Forum.Tobias Baumann's first-of-its-kind introduction to s-risks, Avoiding the Worst: How to Prevent a Moral Catastrophe is now available to listen to for free.Professionally narrated by Adrian Nelson, the full audiobook is out now on Audible and other audiobook stores. Additionally, a captioned video can be listened to for free on the CRS YouTube channel.Running at just 2 hours and 40 minutes, the audiobook packs in a comprehensive introduction to the topic, explaining what s-risks are, whether we should focus on them, and what we can do now to reduce the likelihood of s-risks occurring.The eBook is also available in various formats, or you can read a PDF version.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 25 May 2023 03:24:14 +0000 EA - New s-risks audiobook available now by Alistair Webster Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New s-risks audiobook available now, published by Alistair Webster on May 24, 2023 on The Effective Altruism Forum.Tobias Baumann's first-of-its-kind introduction to s-risks, Avoiding the Worst: How to Prevent a Moral Catastrophe is now available to listen to for free.Professionally narrated by Adrian Nelson, the full audiobook is out now on Audible and other audiobook stores. Additionally, a captioned video can be listened to for free on the CRS YouTube channel.Running at just 2 hours and 40 minutes, the audiobook packs in a comprehensive introduction to the topic, explaining what s-risks are, whether we should focus on them, and what we can do now to reduce the likelihood of s-risks occurring.The eBook is also available in various formats, or you can read a PDF version.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New s-risks audiobook available now, published by Alistair Webster on May 24, 2023 on The Effective Altruism Forum.Tobias Baumann's first-of-its-kind introduction to s-risks, Avoiding the Worst: How to Prevent a Moral Catastrophe is now available to listen to for free.Professionally narrated by Adrian Nelson, the full audiobook is out now on Audible and other audiobook stores. Additionally, a captioned video can be listened to for free on the CRS YouTube channel.Running at just 2 hours and 40 minutes, the audiobook packs in a comprehensive introduction to the topic, explaining what s-risks are, whether we should focus on them, and what we can do now to reduce the likelihood of s-risks occurring.The eBook is also available in various formats, or you can read a PDF version.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Alistair Webster https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:58 None full 6056
8Z2uFCkrg2dCnadA4_NL_EA_EA EA - KFC Supplier Sued for Cruelty by alene Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: KFC Supplier Sued for Cruelty, published by alene on May 24, 2023 on The Effective Altruism Forum.Dear EA Forum readers,The EA charity, Legal Impact for Chickens (LIC), just filed our second lawsuit!As many of you know, LIC is a litigation nonprofit dedicated to making factory-farm cruelty a liability. We focus on chickens because of the huge numbers in which they suffer and the extreme severity of that suffering.Today, we sued one of the country’s largest poultry producers and a KFC supplier, Case Farms, for animal cruelty.The complaint comes on the heels of a 2021 undercover investigation by Animal Outlook, revealing abuse at a Morganton, N.C. Case Farms hatchery that processes more than 200,000 chicks daily.Our lawsuit attacks the notion that Big Ag is above the law. We are suing under North Carolina's 19A statute, which lets private parties enjoin animal cruelty.Case Farms was documented knowingly operating faulty equipment, including a machine piston which repeatedly smashes chicks to death and a dangerous metal conveyor belt which traps and kills young birds. Case Farms was also documented crushing chicks’ necks between heavy plastic trays.Case Farms supplies its chicken to KFC, Taco Bell, and Boar’s Head, among other customers.Thank you so much to all the EA Forum readers who helped make this happen, by donating to, and volunteering for, Legal Impact for Chickens!Sincerely,AleneThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
alene https://forum.effectivealtruism.org/posts/8Z2uFCkrg2dCnadA4/kfc-supplier-sued-for-cruelty Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: KFC Supplier Sued for Cruelty, published by alene on May 24, 2023 on The Effective Altruism Forum.Dear EA Forum readers,The EA charity, Legal Impact for Chickens (LIC), just filed our second lawsuit!As many of you know, LIC is a litigation nonprofit dedicated to making factory-farm cruelty a liability. We focus on chickens because of the huge numbers in which they suffer and the extreme severity of that suffering.Today, we sued one of the country’s largest poultry producers and a KFC supplier, Case Farms, for animal cruelty.The complaint comes on the heels of a 2021 undercover investigation by Animal Outlook, revealing abuse at a Morganton, N.C. Case Farms hatchery that processes more than 200,000 chicks daily.Our lawsuit attacks the notion that Big Ag is above the law. We are suing under North Carolina's 19A statute, which lets private parties enjoin animal cruelty.Case Farms was documented knowingly operating faulty equipment, including a machine piston which repeatedly smashes chicks to death and a dangerous metal conveyor belt which traps and kills young birds. Case Farms was also documented crushing chicks’ necks between heavy plastic trays.Case Farms supplies its chicken to KFC, Taco Bell, and Boar’s Head, among other customers.Thank you so much to all the EA Forum readers who helped make this happen, by donating to, and volunteering for, Legal Impact for Chickens!Sincerely,AleneThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 25 May 2023 02:02:44 +0000 EA - KFC Supplier Sued for Cruelty by alene Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: KFC Supplier Sued for Cruelty, published by alene on May 24, 2023 on The Effective Altruism Forum.Dear EA Forum readers,The EA charity, Legal Impact for Chickens (LIC), just filed our second lawsuit!As many of you know, LIC is a litigation nonprofit dedicated to making factory-farm cruelty a liability. We focus on chickens because of the huge numbers in which they suffer and the extreme severity of that suffering.Today, we sued one of the country’s largest poultry producers and a KFC supplier, Case Farms, for animal cruelty.The complaint comes on the heels of a 2021 undercover investigation by Animal Outlook, revealing abuse at a Morganton, N.C. Case Farms hatchery that processes more than 200,000 chicks daily.Our lawsuit attacks the notion that Big Ag is above the law. We are suing under North Carolina's 19A statute, which lets private parties enjoin animal cruelty.Case Farms was documented knowingly operating faulty equipment, including a machine piston which repeatedly smashes chicks to death and a dangerous metal conveyor belt which traps and kills young birds. Case Farms was also documented crushing chicks’ necks between heavy plastic trays.Case Farms supplies its chicken to KFC, Taco Bell, and Boar’s Head, among other customers.Thank you so much to all the EA Forum readers who helped make this happen, by donating to, and volunteering for, Legal Impact for Chickens!Sincerely,AleneThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: KFC Supplier Sued for Cruelty, published by alene on May 24, 2023 on The Effective Altruism Forum.Dear EA Forum readers,The EA charity, Legal Impact for Chickens (LIC), just filed our second lawsuit!As many of you know, LIC is a litigation nonprofit dedicated to making factory-farm cruelty a liability. We focus on chickens because of the huge numbers in which they suffer and the extreme severity of that suffering.Today, we sued one of the country’s largest poultry producers and a KFC supplier, Case Farms, for animal cruelty.The complaint comes on the heels of a 2021 undercover investigation by Animal Outlook, revealing abuse at a Morganton, N.C. Case Farms hatchery that processes more than 200,000 chicks daily.Our lawsuit attacks the notion that Big Ag is above the law. We are suing under North Carolina's 19A statute, which lets private parties enjoin animal cruelty.Case Farms was documented knowingly operating faulty equipment, including a machine piston which repeatedly smashes chicks to death and a dangerous metal conveyor belt which traps and kills young birds. Case Farms was also documented crushing chicks’ necks between heavy plastic trays.Case Farms supplies its chicken to KFC, Taco Bell, and Boar’s Head, among other customers.Thank you so much to all the EA Forum readers who helped make this happen, by donating to, and volunteering for, Legal Impact for Chickens!Sincerely,AleneThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
alene https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:39 None full 6055
HAaXks5QgurLLJub2_NL_EA_EA EA - Top Idea Reports from the EA Philippines Mental Health Charity Ideas Research Project by Shen Javier Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Top Idea Reports from the EA Philippines Mental Health Charity Ideas Research Project, published by Shen Javier on May 24, 2023 on The Effective Altruism Forum.In October 2021 to May 2022, EA Philippines organized the Mental Health Charity Ideas Research project. The project's goal was to find ideas that can become highly impactful and cost-effective charities in improving the well-being of people living in the Philippines and other low- to middle-income countries. It focused on children and adolescent mental health.This was a follow-up to the participation of Brian Tan and myself in Charity Entrepreneurship’s 2021 Incubation Program, in their region-specific track for training people to research the top charity ideas in a region. The project was awarded $11,000 in funding from the EA Infrastructure Fund in 2021 for 1.2 FTE in salary for the project for 8 months. Brian transitioned to being an advisor of the project early on, and AJ Sunglao was brought on as a part-time project co-lead, while two part-time researchers (Mae Muñoz, and Zam Superadble) were also hired.Links to our reportsWe already held a brown bag session last June 11, 2022 discussing the research process and introducing the top four charity ideas we found last year. Now, we share deep reports on those ideas that detail the evidence supporting their effectiveness and how one might implement the charities in the Philippines. We also share the shallow reports made for the other top mental health interventions.Access the reports here:Deep ReportsSelf-Help Workbooks for Children and Adolescents in the Philippines and Low-to-Middle-Income CountriesSchool-based Psychoeducation in the Philippines and Low-to-Middle-Income CountriesGuided Self-Help Game-based App for Adolescents in the Philippines and Low-to-Middle-Income CountriesShallow ReportsHere’s a quick guide to our top ideas:Idea NameDescriptionCost-Effectiveness ($ per unit, total costs)Self-Help Workbooks for Children and AdolescentsThis intervention will develop and distribute self-help workbooks to improve depression and anxiety symptoms in children and young adolescents, particularly 6 to 18-year-olds. Depending on the severity of mental health disorders, the workbook can be accompanied by weekly guidance by lay counselors through telephone, email, social media, or other available platforms.School-based PsychoeducationThis preventive approach entails training and supervising teachers to deliver psychoeducation on mental health topics in their respective schools. Through weekly participatory learning sessions, students would learn to apply positive coping strategies, build interpersonal skills, and/or develop personal characteristics that would empower them to care for their mental health and navigate important life transitions.Guided Self-Help Game-based App for Adolescents The intervention is a self-help game-based mobile application for help-seeking adolescents aged 12 - 19 years old. As a self-help format, the app aims to teach service users concepts and skills that will aid them in addressing MH concerns. The content of the app will be based on evidence-based therapeutic modalities. The game-based format is used to enhance service user engagement and prevent dropout. Youth-led Mental Health SupportThis intervention is a community-based intervention for adolescents aged 13-18. It uses task-sharing principles in delivering basic para-mental health support by training community members like SK officials and student leaders in basic mental health skills such as psychoeducation, peer counseling, and psychological first aid. The content of the training would be based on other community-based interventions like Thinking Healthy Programme, PM+, and Self Help+.$2.67 per WHO-5 improvement$85.93 per GSES improvement$69.47 per SWEMWBS impro...]]>
Shen Javier https://forum.effectivealtruism.org/posts/HAaXks5QgurLLJub2/top-idea-reports-from-the-ea-philippines-mental-health Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Top Idea Reports from the EA Philippines Mental Health Charity Ideas Research Project, published by Shen Javier on May 24, 2023 on The Effective Altruism Forum.In October 2021 to May 2022, EA Philippines organized the Mental Health Charity Ideas Research project. The project's goal was to find ideas that can become highly impactful and cost-effective charities in improving the well-being of people living in the Philippines and other low- to middle-income countries. It focused on children and adolescent mental health.This was a follow-up to the participation of Brian Tan and myself in Charity Entrepreneurship’s 2021 Incubation Program, in their region-specific track for training people to research the top charity ideas in a region. The project was awarded $11,000 in funding from the EA Infrastructure Fund in 2021 for 1.2 FTE in salary for the project for 8 months. Brian transitioned to being an advisor of the project early on, and AJ Sunglao was brought on as a part-time project co-lead, while two part-time researchers (Mae Muñoz, and Zam Superadble) were also hired.Links to our reportsWe already held a brown bag session last June 11, 2022 discussing the research process and introducing the top four charity ideas we found last year. Now, we share deep reports on those ideas that detail the evidence supporting their effectiveness and how one might implement the charities in the Philippines. We also share the shallow reports made for the other top mental health interventions.Access the reports here:Deep ReportsSelf-Help Workbooks for Children and Adolescents in the Philippines and Low-to-Middle-Income CountriesSchool-based Psychoeducation in the Philippines and Low-to-Middle-Income CountriesGuided Self-Help Game-based App for Adolescents in the Philippines and Low-to-Middle-Income CountriesShallow ReportsHere’s a quick guide to our top ideas:Idea NameDescriptionCost-Effectiveness ($ per unit, total costs)Self-Help Workbooks for Children and AdolescentsThis intervention will develop and distribute self-help workbooks to improve depression and anxiety symptoms in children and young adolescents, particularly 6 to 18-year-olds. Depending on the severity of mental health disorders, the workbook can be accompanied by weekly guidance by lay counselors through telephone, email, social media, or other available platforms.School-based PsychoeducationThis preventive approach entails training and supervising teachers to deliver psychoeducation on mental health topics in their respective schools. Through weekly participatory learning sessions, students would learn to apply positive coping strategies, build interpersonal skills, and/or develop personal characteristics that would empower them to care for their mental health and navigate important life transitions.Guided Self-Help Game-based App for Adolescents The intervention is a self-help game-based mobile application for help-seeking adolescents aged 12 - 19 years old. As a self-help format, the app aims to teach service users concepts and skills that will aid them in addressing MH concerns. The content of the app will be based on evidence-based therapeutic modalities. The game-based format is used to enhance service user engagement and prevent dropout. Youth-led Mental Health SupportThis intervention is a community-based intervention for adolescents aged 13-18. It uses task-sharing principles in delivering basic para-mental health support by training community members like SK officials and student leaders in basic mental health skills such as psychoeducation, peer counseling, and psychological first aid. The content of the training would be based on other community-based interventions like Thinking Healthy Programme, PM+, and Self Help+.$2.67 per WHO-5 improvement$85.93 per GSES improvement$69.47 per SWEMWBS impro...]]>
Wed, 24 May 2023 21:30:55 +0000 EA - Top Idea Reports from the EA Philippines Mental Health Charity Ideas Research Project by Shen Javier Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Top Idea Reports from the EA Philippines Mental Health Charity Ideas Research Project, published by Shen Javier on May 24, 2023 on The Effective Altruism Forum.In October 2021 to May 2022, EA Philippines organized the Mental Health Charity Ideas Research project. The project's goal was to find ideas that can become highly impactful and cost-effective charities in improving the well-being of people living in the Philippines and other low- to middle-income countries. It focused on children and adolescent mental health.This was a follow-up to the participation of Brian Tan and myself in Charity Entrepreneurship’s 2021 Incubation Program, in their region-specific track for training people to research the top charity ideas in a region. The project was awarded $11,000 in funding from the EA Infrastructure Fund in 2021 for 1.2 FTE in salary for the project for 8 months. Brian transitioned to being an advisor of the project early on, and AJ Sunglao was brought on as a part-time project co-lead, while two part-time researchers (Mae Muñoz, and Zam Superadble) were also hired.Links to our reportsWe already held a brown bag session last June 11, 2022 discussing the research process and introducing the top four charity ideas we found last year. Now, we share deep reports on those ideas that detail the evidence supporting their effectiveness and how one might implement the charities in the Philippines. We also share the shallow reports made for the other top mental health interventions.Access the reports here:Deep ReportsSelf-Help Workbooks for Children and Adolescents in the Philippines and Low-to-Middle-Income CountriesSchool-based Psychoeducation in the Philippines and Low-to-Middle-Income CountriesGuided Self-Help Game-based App for Adolescents in the Philippines and Low-to-Middle-Income CountriesShallow ReportsHere’s a quick guide to our top ideas:Idea NameDescriptionCost-Effectiveness ($ per unit, total costs)Self-Help Workbooks for Children and AdolescentsThis intervention will develop and distribute self-help workbooks to improve depression and anxiety symptoms in children and young adolescents, particularly 6 to 18-year-olds. Depending on the severity of mental health disorders, the workbook can be accompanied by weekly guidance by lay counselors through telephone, email, social media, or other available platforms.School-based PsychoeducationThis preventive approach entails training and supervising teachers to deliver psychoeducation on mental health topics in their respective schools. Through weekly participatory learning sessions, students would learn to apply positive coping strategies, build interpersonal skills, and/or develop personal characteristics that would empower them to care for their mental health and navigate important life transitions.Guided Self-Help Game-based App for Adolescents The intervention is a self-help game-based mobile application for help-seeking adolescents aged 12 - 19 years old. As a self-help format, the app aims to teach service users concepts and skills that will aid them in addressing MH concerns. The content of the app will be based on evidence-based therapeutic modalities. The game-based format is used to enhance service user engagement and prevent dropout. Youth-led Mental Health SupportThis intervention is a community-based intervention for adolescents aged 13-18. It uses task-sharing principles in delivering basic para-mental health support by training community members like SK officials and student leaders in basic mental health skills such as psychoeducation, peer counseling, and psychological first aid. The content of the training would be based on other community-based interventions like Thinking Healthy Programme, PM+, and Self Help+.$2.67 per WHO-5 improvement$85.93 per GSES improvement$69.47 per SWEMWBS impro...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Top Idea Reports from the EA Philippines Mental Health Charity Ideas Research Project, published by Shen Javier on May 24, 2023 on The Effective Altruism Forum.In October 2021 to May 2022, EA Philippines organized the Mental Health Charity Ideas Research project. The project's goal was to find ideas that can become highly impactful and cost-effective charities in improving the well-being of people living in the Philippines and other low- to middle-income countries. It focused on children and adolescent mental health.This was a follow-up to the participation of Brian Tan and myself in Charity Entrepreneurship’s 2021 Incubation Program, in their region-specific track for training people to research the top charity ideas in a region. The project was awarded $11,000 in funding from the EA Infrastructure Fund in 2021 for 1.2 FTE in salary for the project for 8 months. Brian transitioned to being an advisor of the project early on, and AJ Sunglao was brought on as a part-time project co-lead, while two part-time researchers (Mae Muñoz, and Zam Superadble) were also hired.Links to our reportsWe already held a brown bag session last June 11, 2022 discussing the research process and introducing the top four charity ideas we found last year. Now, we share deep reports on those ideas that detail the evidence supporting their effectiveness and how one might implement the charities in the Philippines. We also share the shallow reports made for the other top mental health interventions.Access the reports here:Deep ReportsSelf-Help Workbooks for Children and Adolescents in the Philippines and Low-to-Middle-Income CountriesSchool-based Psychoeducation in the Philippines and Low-to-Middle-Income CountriesGuided Self-Help Game-based App for Adolescents in the Philippines and Low-to-Middle-Income CountriesShallow ReportsHere’s a quick guide to our top ideas:Idea NameDescriptionCost-Effectiveness ($ per unit, total costs)Self-Help Workbooks for Children and AdolescentsThis intervention will develop and distribute self-help workbooks to improve depression and anxiety symptoms in children and young adolescents, particularly 6 to 18-year-olds. Depending on the severity of mental health disorders, the workbook can be accompanied by weekly guidance by lay counselors through telephone, email, social media, or other available platforms.School-based PsychoeducationThis preventive approach entails training and supervising teachers to deliver psychoeducation on mental health topics in their respective schools. Through weekly participatory learning sessions, students would learn to apply positive coping strategies, build interpersonal skills, and/or develop personal characteristics that would empower them to care for their mental health and navigate important life transitions.Guided Self-Help Game-based App for Adolescents The intervention is a self-help game-based mobile application for help-seeking adolescents aged 12 - 19 years old. As a self-help format, the app aims to teach service users concepts and skills that will aid them in addressing MH concerns. The content of the app will be based on evidence-based therapeutic modalities. The game-based format is used to enhance service user engagement and prevent dropout. Youth-led Mental Health SupportThis intervention is a community-based intervention for adolescents aged 13-18. It uses task-sharing principles in delivering basic para-mental health support by training community members like SK officials and student leaders in basic mental health skills such as psychoeducation, peer counseling, and psychological first aid. The content of the training would be based on other community-based interventions like Thinking Healthy Programme, PM+, and Self Help+.$2.67 per WHO-5 improvement$85.93 per GSES improvement$69.47 per SWEMWBS impro...]]>
Shen Javier https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:52 None full 6053
qrKWqWN2QBgduB89n_NL_EA_EA EA - Who does work you are thankful for? by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who does work you are thankful for?, published by Nathan Young on May 23, 2023 on The Effective Altruism Forum.I think that the other side of criticism is community support. So who are you grateful is doing what they are doing?Perhaps pick people who you think don't get complimented very much or don't get complimented as much as they get criticised.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://forum.effectivealtruism.org/posts/qrKWqWN2QBgduB89n/who-does-work-you-are-thankful-for Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who does work you are thankful for?, published by Nathan Young on May 23, 2023 on The Effective Altruism Forum.I think that the other side of criticism is community support. So who are you grateful is doing what they are doing?Perhaps pick people who you think don't get complimented very much or don't get complimented as much as they get criticised.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 24 May 2023 16:30:27 +0000 EA - Who does work you are thankful for? by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who does work you are thankful for?, published by Nathan Young on May 23, 2023 on The Effective Altruism Forum.I think that the other side of criticism is community support. So who are you grateful is doing what they are doing?Perhaps pick people who you think don't get complimented very much or don't get complimented as much as they get criticised.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who does work you are thankful for?, published by Nathan Young on May 23, 2023 on The Effective Altruism Forum.I think that the other side of criticism is community support. So who are you grateful is doing what they are doing?Perhaps pick people who you think don't get complimented very much or don't get complimented as much as they get criticised.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:32 None full 6050
KLZphi3BAyqy2Jbs7_NL_EA_EA EA - Don Efficace is hiring a CEO by Don Efficace Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don Efficace is hiring a CEO, published by Don Efficace on May 23, 2023 on The Effective Altruism Forum.Don Efficace is a French effective giving association that aims to enable donors to support charitable programs deemed to be highly effective by independent evaluators in the area of global health and poverty, with an aim to expand to include climate change, and animal welfare.Our scope and purpose are similar to those of other effective and successful national giving organizations (eg. Effektiv Spenden in Germany, Ayuda Efectiva in Spain, Doneer Effectief in the Netherlands). The costs of Don Efficace are currently funded by private donors and Giving What We Can, so 100% of common donations fund charitable programs.We are recruiting for the position of Executive Director. In this strategic role for the development of Don Efficace in France, you will have the autonomy to create your own team, and collaborate with the Board of Directors, which is composed of internationally recognized experts with experience in various fields. The main task is to develop a fundraising strategy with French donors, including the media presence. You will also be in charge of overseeing the operational aspects such as the development of the website, communication tools or means, budget, recruitment, etc.Responsibilities:Raising funds for charitable programmes with proven effectivenessEngage with the French community and the media to promote understanding of the importance of impact and the value of evidence in charitable givingInform the general public about the wide variations in effectiveness of different programs, and the ability of donors to increase their charitable impact based on evidenceWorking alongside other stakeholders (e.g., Giving What We Can, and organizations involved in charitable programmes)Developing the community of donors seeking to give effectively in FranceManaging operations in Don Efficace (budget tracking, meetings, reporting to donors, etc.)Recruiting and managing a small team of staff and volunteersThe ideal candidate would have:Strong interpersonal and communication skills, including teamwork but also convincing people to support a project you believe inAbility to work independently and take initiativeHaving a growth mindset, strategic and iterative thinkerStrong taste for fast-paced projects and small structuresStrong interest in projects that aim to make a real impactExcellent written and spoken FrenchSufficient English for written and spoken communication3-5 years of experience, ideally some in fundraising and managementOpen to the values of Effective Giving: transparency, efficiency, and an evidence-based approach to maximize positive impactSalary, benefits and location:We are flexible on the availability of candidates and can accept different formats: full time (CDI), part time, job sharing and contractual arrangements, remote work (full remote acceptable) or on-site in Paris (CET time zone), suitable for family and life commitments. The position requires infrequent participation in meetings compatible with different time zones (approximately 2x/month).Compensation: ~45 k€ gross per year (+ a variable part), to be negotiated according to experience and location.Application:To apply, email acristia@givingwhatwecan.org a CV (including at least two references) and cover letter explaining your fit with the job.We will review applications as we receive them. We would prefer to find someone able to start by September 2023 (but can be flexible for the right person).For any questions, contact acristia@givingwhatwecan.org.We are an equal opportunity employer and value diversity within our organization. We do not discriminate on the basis of ethnicity, religion, color, national origin, gender, sexual orientation, age, marital status, ...]]>
Don Efficace https://forum.effectivealtruism.org/posts/KLZphi3BAyqy2Jbs7/don-efficace-is-hiring-a-ceo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don Efficace is hiring a CEO, published by Don Efficace on May 23, 2023 on The Effective Altruism Forum.Don Efficace is a French effective giving association that aims to enable donors to support charitable programs deemed to be highly effective by independent evaluators in the area of global health and poverty, with an aim to expand to include climate change, and animal welfare.Our scope and purpose are similar to those of other effective and successful national giving organizations (eg. Effektiv Spenden in Germany, Ayuda Efectiva in Spain, Doneer Effectief in the Netherlands). The costs of Don Efficace are currently funded by private donors and Giving What We Can, so 100% of common donations fund charitable programs.We are recruiting for the position of Executive Director. In this strategic role for the development of Don Efficace in France, you will have the autonomy to create your own team, and collaborate with the Board of Directors, which is composed of internationally recognized experts with experience in various fields. The main task is to develop a fundraising strategy with French donors, including the media presence. You will also be in charge of overseeing the operational aspects such as the development of the website, communication tools or means, budget, recruitment, etc.Responsibilities:Raising funds for charitable programmes with proven effectivenessEngage with the French community and the media to promote understanding of the importance of impact and the value of evidence in charitable givingInform the general public about the wide variations in effectiveness of different programs, and the ability of donors to increase their charitable impact based on evidenceWorking alongside other stakeholders (e.g., Giving What We Can, and organizations involved in charitable programmes)Developing the community of donors seeking to give effectively in FranceManaging operations in Don Efficace (budget tracking, meetings, reporting to donors, etc.)Recruiting and managing a small team of staff and volunteersThe ideal candidate would have:Strong interpersonal and communication skills, including teamwork but also convincing people to support a project you believe inAbility to work independently and take initiativeHaving a growth mindset, strategic and iterative thinkerStrong taste for fast-paced projects and small structuresStrong interest in projects that aim to make a real impactExcellent written and spoken FrenchSufficient English for written and spoken communication3-5 years of experience, ideally some in fundraising and managementOpen to the values of Effective Giving: transparency, efficiency, and an evidence-based approach to maximize positive impactSalary, benefits and location:We are flexible on the availability of candidates and can accept different formats: full time (CDI), part time, job sharing and contractual arrangements, remote work (full remote acceptable) or on-site in Paris (CET time zone), suitable for family and life commitments. The position requires infrequent participation in meetings compatible with different time zones (approximately 2x/month).Compensation: ~45 k€ gross per year (+ a variable part), to be negotiated according to experience and location.Application:To apply, email acristia@givingwhatwecan.org a CV (including at least two references) and cover letter explaining your fit with the job.We will review applications as we receive them. We would prefer to find someone able to start by September 2023 (but can be flexible for the right person).For any questions, contact acristia@givingwhatwecan.org.We are an equal opportunity employer and value diversity within our organization. We do not discriminate on the basis of ethnicity, religion, color, national origin, gender, sexual orientation, age, marital status, ...]]>
Wed, 24 May 2023 13:08:51 +0000 EA - Don Efficace is hiring a CEO by Don Efficace Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don Efficace is hiring a CEO, published by Don Efficace on May 23, 2023 on The Effective Altruism Forum.Don Efficace is a French effective giving association that aims to enable donors to support charitable programs deemed to be highly effective by independent evaluators in the area of global health and poverty, with an aim to expand to include climate change, and animal welfare.Our scope and purpose are similar to those of other effective and successful national giving organizations (eg. Effektiv Spenden in Germany, Ayuda Efectiva in Spain, Doneer Effectief in the Netherlands). The costs of Don Efficace are currently funded by private donors and Giving What We Can, so 100% of common donations fund charitable programs.We are recruiting for the position of Executive Director. In this strategic role for the development of Don Efficace in France, you will have the autonomy to create your own team, and collaborate with the Board of Directors, which is composed of internationally recognized experts with experience in various fields. The main task is to develop a fundraising strategy with French donors, including the media presence. You will also be in charge of overseeing the operational aspects such as the development of the website, communication tools or means, budget, recruitment, etc.Responsibilities:Raising funds for charitable programmes with proven effectivenessEngage with the French community and the media to promote understanding of the importance of impact and the value of evidence in charitable givingInform the general public about the wide variations in effectiveness of different programs, and the ability of donors to increase their charitable impact based on evidenceWorking alongside other stakeholders (e.g., Giving What We Can, and organizations involved in charitable programmes)Developing the community of donors seeking to give effectively in FranceManaging operations in Don Efficace (budget tracking, meetings, reporting to donors, etc.)Recruiting and managing a small team of staff and volunteersThe ideal candidate would have:Strong interpersonal and communication skills, including teamwork but also convincing people to support a project you believe inAbility to work independently and take initiativeHaving a growth mindset, strategic and iterative thinkerStrong taste for fast-paced projects and small structuresStrong interest in projects that aim to make a real impactExcellent written and spoken FrenchSufficient English for written and spoken communication3-5 years of experience, ideally some in fundraising and managementOpen to the values of Effective Giving: transparency, efficiency, and an evidence-based approach to maximize positive impactSalary, benefits and location:We are flexible on the availability of candidates and can accept different formats: full time (CDI), part time, job sharing and contractual arrangements, remote work (full remote acceptable) or on-site in Paris (CET time zone), suitable for family and life commitments. The position requires infrequent participation in meetings compatible with different time zones (approximately 2x/month).Compensation: ~45 k€ gross per year (+ a variable part), to be negotiated according to experience and location.Application:To apply, email acristia@givingwhatwecan.org a CV (including at least two references) and cover letter explaining your fit with the job.We will review applications as we receive them. We would prefer to find someone able to start by September 2023 (but can be flexible for the right person).For any questions, contact acristia@givingwhatwecan.org.We are an equal opportunity employer and value diversity within our organization. We do not discriminate on the basis of ethnicity, religion, color, national origin, gender, sexual orientation, age, marital status, ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don Efficace is hiring a CEO, published by Don Efficace on May 23, 2023 on The Effective Altruism Forum.Don Efficace is a French effective giving association that aims to enable donors to support charitable programs deemed to be highly effective by independent evaluators in the area of global health and poverty, with an aim to expand to include climate change, and animal welfare.Our scope and purpose are similar to those of other effective and successful national giving organizations (eg. Effektiv Spenden in Germany, Ayuda Efectiva in Spain, Doneer Effectief in the Netherlands). The costs of Don Efficace are currently funded by private donors and Giving What We Can, so 100% of common donations fund charitable programs.We are recruiting for the position of Executive Director. In this strategic role for the development of Don Efficace in France, you will have the autonomy to create your own team, and collaborate with the Board of Directors, which is composed of internationally recognized experts with experience in various fields. The main task is to develop a fundraising strategy with French donors, including the media presence. You will also be in charge of overseeing the operational aspects such as the development of the website, communication tools or means, budget, recruitment, etc.Responsibilities:Raising funds for charitable programmes with proven effectivenessEngage with the French community and the media to promote understanding of the importance of impact and the value of evidence in charitable givingInform the general public about the wide variations in effectiveness of different programs, and the ability of donors to increase their charitable impact based on evidenceWorking alongside other stakeholders (e.g., Giving What We Can, and organizations involved in charitable programmes)Developing the community of donors seeking to give effectively in FranceManaging operations in Don Efficace (budget tracking, meetings, reporting to donors, etc.)Recruiting and managing a small team of staff and volunteersThe ideal candidate would have:Strong interpersonal and communication skills, including teamwork but also convincing people to support a project you believe inAbility to work independently and take initiativeHaving a growth mindset, strategic and iterative thinkerStrong taste for fast-paced projects and small structuresStrong interest in projects that aim to make a real impactExcellent written and spoken FrenchSufficient English for written and spoken communication3-5 years of experience, ideally some in fundraising and managementOpen to the values of Effective Giving: transparency, efficiency, and an evidence-based approach to maximize positive impactSalary, benefits and location:We are flexible on the availability of candidates and can accept different formats: full time (CDI), part time, job sharing and contractual arrangements, remote work (full remote acceptable) or on-site in Paris (CET time zone), suitable for family and life commitments. The position requires infrequent participation in meetings compatible with different time zones (approximately 2x/month).Compensation: ~45 k€ gross per year (+ a variable part), to be negotiated according to experience and location.Application:To apply, email acristia@givingwhatwecan.org a CV (including at least two references) and cover letter explaining your fit with the job.We will review applications as we receive them. We would prefer to find someone able to start by September 2023 (but can be flexible for the right person).For any questions, contact acristia@givingwhatwecan.org.We are an equal opportunity employer and value diversity within our organization. We do not discriminate on the basis of ethnicity, religion, color, national origin, gender, sexual orientation, age, marital status, ...]]>
Don Efficace https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:03 None full 6048
TLSPQjjXZruwmg4PE_NL_EA_EA EA - Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot by Jim Buhler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot, published by Jim Buhler on May 23, 2023 on The Effective Altruism Forum.Epistemic status: I spent only a few weeks reading/thinking about this. I could have asked more people to give me feedback so I could improve this piece but I’d like to move on to other research projects and thought throwing this out there was still a good idea and might be insightful to some.SummaryMany power-seeking actors will want to influence the development/deployment of artificial general intelligence (AGI). Some of them may have malevolent(-ish) preferences which they could satisfy on massively large scales if they succeed at getting some control over (key parts of the development/deployment of) AGI. Given the current rate of AI progress and dissemination, the extent to which those actors are a prominent threat will likely increase.In this post:I differentiate between different types of scenarios and give examples.I argue that 1) governance work aimed at reducing the influence of malevolent actors over AGI does not necessarily converge with usual AGI governance work – which is as far as I know – mostly focused on reducing risks from “mere” uncautiousness and/or inefficiencies due to suboptimal decision-making processes, and 2) the expected value loss due to malevolence, specifically, might be large enough to constitute an area of priority in its own right for longtermists.I, then, list some research questions that I classify under the following categories:Breaking down the conditions for an AGI-related long-term catastrophe from malevolenceRedefining the set of actors/preferences we should worry aboutSteering clear from information/attention hazardsAssessing the promisingness of various interventionsHow malevolent control over AGI may trigger long-term catastrophes?(This section is heavily inspired by discussions with Stefan Torges and Linh Chi Nguyen. I also build on Das Sarma and Wiblin’s (2022) discussion.We could divide the risks we should worry about into those two categories: Malevolence as a risk factor for AGI conflict and Direct long-term risks from malevolence.Malevolence as a risk factor for AGI conflictClifton et al. (2022) write:Several recent research agendas related to safe and beneficial AI have been motivated, in part, by reducing the risks of large-scale conflict involving artificial general intelligence (AGI). These include the Center on Long-Term Risk’s research agenda, Open Problems in Cooperative AI, and AI Research Considerations for Human Existential Safety (and this associated assessment of various AI research areas). As proposals for longtermist priorities, these research agendas are premised on a view that AGI conflict could destroy large amounts of value, and that a good way to reduce the risk of AGI conflict is to do work on conflict in particular.In a later post from the same sequence, they explain that one of the potential factors leading to conflict is conflict-seeking preferences (CSPs) such as pure spite or unforgivingness. While AGIs might develop CSPs by themselves in training (e.g., because there are sometimes advantages to doing so; see, e.g., Abreu and Sethi 2003), they might also inherit them from malevolent(-ish) actors. Such an actor would also be less likely to want to reduce the chance of CSPs arising by “accident”.This actor can be a legitimate decisive person/group in the development/deployment of AGI (e.g., a researcher at a top AI lab, a politician, or even some influencer whose’ opinion is highly respected), but also a spy/infiltrator or external hacker (or something in between these last two).Direct long-term risks from malevolenceFor simplicity, say we are concerned about the risk of some AGI ending up with...]]>
Jim Buhler https://forum.effectivealtruism.org/posts/TLSPQjjXZruwmg4PE/some-governance-research-ideas-to-prevent-malevolent-control Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot, published by Jim Buhler on May 23, 2023 on The Effective Altruism Forum.Epistemic status: I spent only a few weeks reading/thinking about this. I could have asked more people to give me feedback so I could improve this piece but I’d like to move on to other research projects and thought throwing this out there was still a good idea and might be insightful to some.SummaryMany power-seeking actors will want to influence the development/deployment of artificial general intelligence (AGI). Some of them may have malevolent(-ish) preferences which they could satisfy on massively large scales if they succeed at getting some control over (key parts of the development/deployment of) AGI. Given the current rate of AI progress and dissemination, the extent to which those actors are a prominent threat will likely increase.In this post:I differentiate between different types of scenarios and give examples.I argue that 1) governance work aimed at reducing the influence of malevolent actors over AGI does not necessarily converge with usual AGI governance work – which is as far as I know – mostly focused on reducing risks from “mere” uncautiousness and/or inefficiencies due to suboptimal decision-making processes, and 2) the expected value loss due to malevolence, specifically, might be large enough to constitute an area of priority in its own right for longtermists.I, then, list some research questions that I classify under the following categories:Breaking down the conditions for an AGI-related long-term catastrophe from malevolenceRedefining the set of actors/preferences we should worry aboutSteering clear from information/attention hazardsAssessing the promisingness of various interventionsHow malevolent control over AGI may trigger long-term catastrophes?(This section is heavily inspired by discussions with Stefan Torges and Linh Chi Nguyen. I also build on Das Sarma and Wiblin’s (2022) discussion.We could divide the risks we should worry about into those two categories: Malevolence as a risk factor for AGI conflict and Direct long-term risks from malevolence.Malevolence as a risk factor for AGI conflictClifton et al. (2022) write:Several recent research agendas related to safe and beneficial AI have been motivated, in part, by reducing the risks of large-scale conflict involving artificial general intelligence (AGI). These include the Center on Long-Term Risk’s research agenda, Open Problems in Cooperative AI, and AI Research Considerations for Human Existential Safety (and this associated assessment of various AI research areas). As proposals for longtermist priorities, these research agendas are premised on a view that AGI conflict could destroy large amounts of value, and that a good way to reduce the risk of AGI conflict is to do work on conflict in particular.In a later post from the same sequence, they explain that one of the potential factors leading to conflict is conflict-seeking preferences (CSPs) such as pure spite or unforgivingness. While AGIs might develop CSPs by themselves in training (e.g., because there are sometimes advantages to doing so; see, e.g., Abreu and Sethi 2003), they might also inherit them from malevolent(-ish) actors. Such an actor would also be less likely to want to reduce the chance of CSPs arising by “accident”.This actor can be a legitimate decisive person/group in the development/deployment of AGI (e.g., a researcher at a top AI lab, a politician, or even some influencer whose’ opinion is highly respected), but also a spy/infiltrator or external hacker (or something in between these last two).Direct long-term risks from malevolenceFor simplicity, say we are concerned about the risk of some AGI ending up with...]]>
Wed, 24 May 2023 10:07:04 +0000 EA - Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot by Jim Buhler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot, published by Jim Buhler on May 23, 2023 on The Effective Altruism Forum.Epistemic status: I spent only a few weeks reading/thinking about this. I could have asked more people to give me feedback so I could improve this piece but I’d like to move on to other research projects and thought throwing this out there was still a good idea and might be insightful to some.SummaryMany power-seeking actors will want to influence the development/deployment of artificial general intelligence (AGI). Some of them may have malevolent(-ish) preferences which they could satisfy on massively large scales if they succeed at getting some control over (key parts of the development/deployment of) AGI. Given the current rate of AI progress and dissemination, the extent to which those actors are a prominent threat will likely increase.In this post:I differentiate between different types of scenarios and give examples.I argue that 1) governance work aimed at reducing the influence of malevolent actors over AGI does not necessarily converge with usual AGI governance work – which is as far as I know – mostly focused on reducing risks from “mere” uncautiousness and/or inefficiencies due to suboptimal decision-making processes, and 2) the expected value loss due to malevolence, specifically, might be large enough to constitute an area of priority in its own right for longtermists.I, then, list some research questions that I classify under the following categories:Breaking down the conditions for an AGI-related long-term catastrophe from malevolenceRedefining the set of actors/preferences we should worry aboutSteering clear from information/attention hazardsAssessing the promisingness of various interventionsHow malevolent control over AGI may trigger long-term catastrophes?(This section is heavily inspired by discussions with Stefan Torges and Linh Chi Nguyen. I also build on Das Sarma and Wiblin’s (2022) discussion.We could divide the risks we should worry about into those two categories: Malevolence as a risk factor for AGI conflict and Direct long-term risks from malevolence.Malevolence as a risk factor for AGI conflictClifton et al. (2022) write:Several recent research agendas related to safe and beneficial AI have been motivated, in part, by reducing the risks of large-scale conflict involving artificial general intelligence (AGI). These include the Center on Long-Term Risk’s research agenda, Open Problems in Cooperative AI, and AI Research Considerations for Human Existential Safety (and this associated assessment of various AI research areas). As proposals for longtermist priorities, these research agendas are premised on a view that AGI conflict could destroy large amounts of value, and that a good way to reduce the risk of AGI conflict is to do work on conflict in particular.In a later post from the same sequence, they explain that one of the potential factors leading to conflict is conflict-seeking preferences (CSPs) such as pure spite or unforgivingness. While AGIs might develop CSPs by themselves in training (e.g., because there are sometimes advantages to doing so; see, e.g., Abreu and Sethi 2003), they might also inherit them from malevolent(-ish) actors. Such an actor would also be less likely to want to reduce the chance of CSPs arising by “accident”.This actor can be a legitimate decisive person/group in the development/deployment of AGI (e.g., a researcher at a top AI lab, a politician, or even some influencer whose’ opinion is highly respected), but also a spy/infiltrator or external hacker (or something in between these last two).Direct long-term risks from malevolenceFor simplicity, say we are concerned about the risk of some AGI ending up with...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot, published by Jim Buhler on May 23, 2023 on The Effective Altruism Forum.Epistemic status: I spent only a few weeks reading/thinking about this. I could have asked more people to give me feedback so I could improve this piece but I’d like to move on to other research projects and thought throwing this out there was still a good idea and might be insightful to some.SummaryMany power-seeking actors will want to influence the development/deployment of artificial general intelligence (AGI). Some of them may have malevolent(-ish) preferences which they could satisfy on massively large scales if they succeed at getting some control over (key parts of the development/deployment of) AGI. Given the current rate of AI progress and dissemination, the extent to which those actors are a prominent threat will likely increase.In this post:I differentiate between different types of scenarios and give examples.I argue that 1) governance work aimed at reducing the influence of malevolent actors over AGI does not necessarily converge with usual AGI governance work – which is as far as I know – mostly focused on reducing risks from “mere” uncautiousness and/or inefficiencies due to suboptimal decision-making processes, and 2) the expected value loss due to malevolence, specifically, might be large enough to constitute an area of priority in its own right for longtermists.I, then, list some research questions that I classify under the following categories:Breaking down the conditions for an AGI-related long-term catastrophe from malevolenceRedefining the set of actors/preferences we should worry aboutSteering clear from information/attention hazardsAssessing the promisingness of various interventionsHow malevolent control over AGI may trigger long-term catastrophes?(This section is heavily inspired by discussions with Stefan Torges and Linh Chi Nguyen. I also build on Das Sarma and Wiblin’s (2022) discussion.We could divide the risks we should worry about into those two categories: Malevolence as a risk factor for AGI conflict and Direct long-term risks from malevolence.Malevolence as a risk factor for AGI conflictClifton et al. (2022) write:Several recent research agendas related to safe and beneficial AI have been motivated, in part, by reducing the risks of large-scale conflict involving artificial general intelligence (AGI). These include the Center on Long-Term Risk’s research agenda, Open Problems in Cooperative AI, and AI Research Considerations for Human Existential Safety (and this associated assessment of various AI research areas). As proposals for longtermist priorities, these research agendas are premised on a view that AGI conflict could destroy large amounts of value, and that a good way to reduce the risk of AGI conflict is to do work on conflict in particular.In a later post from the same sequence, they explain that one of the potential factors leading to conflict is conflict-seeking preferences (CSPs) such as pure spite or unforgivingness. While AGIs might develop CSPs by themselves in training (e.g., because there are sometimes advantages to doing so; see, e.g., Abreu and Sethi 2003), they might also inherit them from malevolent(-ish) actors. Such an actor would also be less likely to want to reduce the chance of CSPs arising by “accident”.This actor can be a legitimate decisive person/group in the development/deployment of AGI (e.g., a researcher at a top AI lab, a politician, or even some influencer whose’ opinion is highly respected), but also a spy/infiltrator or external hacker (or something in between these last two).Direct long-term risks from malevolenceFor simplicity, say we are concerned about the risk of some AGI ending up with...]]>
Jim Buhler https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 34:40 None full 6047
jM3MSankqktQBf6Fu_NL_EA_EA EA - Review of Animal Liberation Now by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of Animal Liberation Now, published by Richard Y Chappell on May 23, 2023 on The Effective Altruism Forum.Animal Liberation Now releases today! I received an advance copy for review, so will share some thoughts and highlights. (It feels a bit presumptuous to “review” such a classic text—obviously you should read it, no-one needs to await my verdict in order to know that—but hopefully there’s still some value in my sharing a few thoughts and highlights that stood out to me.)As Singer notes in his publication announcement, he considers it “a new book, rather than just a revision, because so much of the material in the book is new.” I’m embarrassed to admit that I never actually got around to reading the original Animal Liberation (aside from the classic first chapter, widely anthologized as ‘All Animals are Equal’, and commonly taught in intro ethics classes). So I can’t speak to any differences, except to note that the present book is very much “up to date”, focusing on describing the current state of animal experimentation and agriculture, and (in the final chapter) engaging with recent philosophical defenses of speciesism.Empirical DetailsThis book is not exactly an enjoyable read. It describes, clearly and dispassionately, humanity’s abusive treatment of other animals. It’s harrowing stuff. To give just one example, consider our treatment of broiler chickens: they have been bred to grow so large they cannot support themselves or walk without pain (p. 118):The birds may try to avoid the pain by sitting down, but they have nothing to sit on except the ammonia-laden litter, which, as we saw earlier, is so corrosive that it can burn their bodies. Their situation has been likened to that of someone with arthritic leg joints who is forced to stand up all day. [Prof.] Webster has described modern intensive chicken production as “in both magnitude and severity, the single most severe, systematic example of man’s inhumanity to another sentient animal.”Their parents—breeder birds—are instead starved to keep their weight at a level that allows mating to occur, and for the birds to survive longer—albeit in a state of hunger-induced aggression and desperation. In short, we’ve bred these birds to be physically incapable of living happy, healthy lives. It’s abominable.Our treatment of dairy cows is also heartbreaking:Dairy producers must ensure that their cows become pregnant every year, for otherwise their milk will dry up. Their babies are taken from them at birth, an experience that is as painful for the mother as it is terrifying for the calf. The mother often makes her feelings plain by constant calling and bellowing for her calf—and this may continue for several days after her infant calf is taken away. Some female calves will be reared on milk substitutes to become replacements of dairy cows when they reach the age, at around two years, when they can produce milk. Some others will be sold at between one to two weeks of age to be reared for beef in fattening pens or feedlots. The remainder will be sold to veal producers. (p. 155)A glimmer of hope is offered in the story of niche dairy farms that produce milk “without separating the calves from their mothers or killing a single calf.” (p. 157) The resulting milk is more expensive, since the process is no longer “optimized” purely for production. But I’d certainly be willing to pay more to support a less evil (maybe even positively good!) treatment of farm animals. I dearly hope these products become more widespread.The book also relates encouraging legislation, especially in the EU and New Zealand, constraining the mistreatment of animals in various respects. The U.S. is more disheartening for the most part, but here’s one (slightly) positive note (p. 282):In the U.S. the joint impact of the changes in stat...]]>
Richard Y Chappell https://forum.effectivealtruism.org/posts/jM3MSankqktQBf6Fu/review-of-animal-liberation-now Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of Animal Liberation Now, published by Richard Y Chappell on May 23, 2023 on The Effective Altruism Forum.Animal Liberation Now releases today! I received an advance copy for review, so will share some thoughts and highlights. (It feels a bit presumptuous to “review” such a classic text—obviously you should read it, no-one needs to await my verdict in order to know that—but hopefully there’s still some value in my sharing a few thoughts and highlights that stood out to me.)As Singer notes in his publication announcement, he considers it “a new book, rather than just a revision, because so much of the material in the book is new.” I’m embarrassed to admit that I never actually got around to reading the original Animal Liberation (aside from the classic first chapter, widely anthologized as ‘All Animals are Equal’, and commonly taught in intro ethics classes). So I can’t speak to any differences, except to note that the present book is very much “up to date”, focusing on describing the current state of animal experimentation and agriculture, and (in the final chapter) engaging with recent philosophical defenses of speciesism.Empirical DetailsThis book is not exactly an enjoyable read. It describes, clearly and dispassionately, humanity’s abusive treatment of other animals. It’s harrowing stuff. To give just one example, consider our treatment of broiler chickens: they have been bred to grow so large they cannot support themselves or walk without pain (p. 118):The birds may try to avoid the pain by sitting down, but they have nothing to sit on except the ammonia-laden litter, which, as we saw earlier, is so corrosive that it can burn their bodies. Their situation has been likened to that of someone with arthritic leg joints who is forced to stand up all day. [Prof.] Webster has described modern intensive chicken production as “in both magnitude and severity, the single most severe, systematic example of man’s inhumanity to another sentient animal.”Their parents—breeder birds—are instead starved to keep their weight at a level that allows mating to occur, and for the birds to survive longer—albeit in a state of hunger-induced aggression and desperation. In short, we’ve bred these birds to be physically incapable of living happy, healthy lives. It’s abominable.Our treatment of dairy cows is also heartbreaking:Dairy producers must ensure that their cows become pregnant every year, for otherwise their milk will dry up. Their babies are taken from them at birth, an experience that is as painful for the mother as it is terrifying for the calf. The mother often makes her feelings plain by constant calling and bellowing for her calf—and this may continue for several days after her infant calf is taken away. Some female calves will be reared on milk substitutes to become replacements of dairy cows when they reach the age, at around two years, when they can produce milk. Some others will be sold at between one to two weeks of age to be reared for beef in fattening pens or feedlots. The remainder will be sold to veal producers. (p. 155)A glimmer of hope is offered in the story of niche dairy farms that produce milk “without separating the calves from their mothers or killing a single calf.” (p. 157) The resulting milk is more expensive, since the process is no longer “optimized” purely for production. But I’d certainly be willing to pay more to support a less evil (maybe even positively good!) treatment of farm animals. I dearly hope these products become more widespread.The book also relates encouraging legislation, especially in the EU and New Zealand, constraining the mistreatment of animals in various respects. The U.S. is more disheartening for the most part, but here’s one (slightly) positive note (p. 282):In the U.S. the joint impact of the changes in stat...]]>
Wed, 24 May 2023 01:02:30 +0000 EA - Review of Animal Liberation Now by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of Animal Liberation Now, published by Richard Y Chappell on May 23, 2023 on The Effective Altruism Forum.Animal Liberation Now releases today! I received an advance copy for review, so will share some thoughts and highlights. (It feels a bit presumptuous to “review” such a classic text—obviously you should read it, no-one needs to await my verdict in order to know that—but hopefully there’s still some value in my sharing a few thoughts and highlights that stood out to me.)As Singer notes in his publication announcement, he considers it “a new book, rather than just a revision, because so much of the material in the book is new.” I’m embarrassed to admit that I never actually got around to reading the original Animal Liberation (aside from the classic first chapter, widely anthologized as ‘All Animals are Equal’, and commonly taught in intro ethics classes). So I can’t speak to any differences, except to note that the present book is very much “up to date”, focusing on describing the current state of animal experimentation and agriculture, and (in the final chapter) engaging with recent philosophical defenses of speciesism.Empirical DetailsThis book is not exactly an enjoyable read. It describes, clearly and dispassionately, humanity’s abusive treatment of other animals. It’s harrowing stuff. To give just one example, consider our treatment of broiler chickens: they have been bred to grow so large they cannot support themselves or walk without pain (p. 118):The birds may try to avoid the pain by sitting down, but they have nothing to sit on except the ammonia-laden litter, which, as we saw earlier, is so corrosive that it can burn their bodies. Their situation has been likened to that of someone with arthritic leg joints who is forced to stand up all day. [Prof.] Webster has described modern intensive chicken production as “in both magnitude and severity, the single most severe, systematic example of man’s inhumanity to another sentient animal.”Their parents—breeder birds—are instead starved to keep their weight at a level that allows mating to occur, and for the birds to survive longer—albeit in a state of hunger-induced aggression and desperation. In short, we’ve bred these birds to be physically incapable of living happy, healthy lives. It’s abominable.Our treatment of dairy cows is also heartbreaking:Dairy producers must ensure that their cows become pregnant every year, for otherwise their milk will dry up. Their babies are taken from them at birth, an experience that is as painful for the mother as it is terrifying for the calf. The mother often makes her feelings plain by constant calling and bellowing for her calf—and this may continue for several days after her infant calf is taken away. Some female calves will be reared on milk substitutes to become replacements of dairy cows when they reach the age, at around two years, when they can produce milk. Some others will be sold at between one to two weeks of age to be reared for beef in fattening pens or feedlots. The remainder will be sold to veal producers. (p. 155)A glimmer of hope is offered in the story of niche dairy farms that produce milk “without separating the calves from their mothers or killing a single calf.” (p. 157) The resulting milk is more expensive, since the process is no longer “optimized” purely for production. But I’d certainly be willing to pay more to support a less evil (maybe even positively good!) treatment of farm animals. I dearly hope these products become more widespread.The book also relates encouraging legislation, especially in the EU and New Zealand, constraining the mistreatment of animals in various respects. The U.S. is more disheartening for the most part, but here’s one (slightly) positive note (p. 282):In the U.S. the joint impact of the changes in stat...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of Animal Liberation Now, published by Richard Y Chappell on May 23, 2023 on The Effective Altruism Forum.Animal Liberation Now releases today! I received an advance copy for review, so will share some thoughts and highlights. (It feels a bit presumptuous to “review” such a classic text—obviously you should read it, no-one needs to await my verdict in order to know that—but hopefully there’s still some value in my sharing a few thoughts and highlights that stood out to me.)As Singer notes in his publication announcement, he considers it “a new book, rather than just a revision, because so much of the material in the book is new.” I’m embarrassed to admit that I never actually got around to reading the original Animal Liberation (aside from the classic first chapter, widely anthologized as ‘All Animals are Equal’, and commonly taught in intro ethics classes). So I can’t speak to any differences, except to note that the present book is very much “up to date”, focusing on describing the current state of animal experimentation and agriculture, and (in the final chapter) engaging with recent philosophical defenses of speciesism.Empirical DetailsThis book is not exactly an enjoyable read. It describes, clearly and dispassionately, humanity’s abusive treatment of other animals. It’s harrowing stuff. To give just one example, consider our treatment of broiler chickens: they have been bred to grow so large they cannot support themselves or walk without pain (p. 118):The birds may try to avoid the pain by sitting down, but they have nothing to sit on except the ammonia-laden litter, which, as we saw earlier, is so corrosive that it can burn their bodies. Their situation has been likened to that of someone with arthritic leg joints who is forced to stand up all day. [Prof.] Webster has described modern intensive chicken production as “in both magnitude and severity, the single most severe, systematic example of man’s inhumanity to another sentient animal.”Their parents—breeder birds—are instead starved to keep their weight at a level that allows mating to occur, and for the birds to survive longer—albeit in a state of hunger-induced aggression and desperation. In short, we’ve bred these birds to be physically incapable of living happy, healthy lives. It’s abominable.Our treatment of dairy cows is also heartbreaking:Dairy producers must ensure that their cows become pregnant every year, for otherwise their milk will dry up. Their babies are taken from them at birth, an experience that is as painful for the mother as it is terrifying for the calf. The mother often makes her feelings plain by constant calling and bellowing for her calf—and this may continue for several days after her infant calf is taken away. Some female calves will be reared on milk substitutes to become replacements of dairy cows when they reach the age, at around two years, when they can produce milk. Some others will be sold at between one to two weeks of age to be reared for beef in fattening pens or feedlots. The remainder will be sold to veal producers. (p. 155)A glimmer of hope is offered in the story of niche dairy farms that produce milk “without separating the calves from their mothers or killing a single calf.” (p. 157) The resulting milk is more expensive, since the process is no longer “optimized” purely for production. But I’d certainly be willing to pay more to support a less evil (maybe even positively good!) treatment of farm animals. I dearly hope these products become more widespread.The book also relates encouraging legislation, especially in the EU and New Zealand, constraining the mistreatment of animals in various respects. The U.S. is more disheartening for the most part, but here’s one (slightly) positive note (p. 282):In the U.S. the joint impact of the changes in stat...]]>
Richard Y Chappell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:41 None full 6044
FAHHFFmJuDqxBMCNH_NL_EA_EA EA - Save the date: EAGxVirtual 2023 by Sasha Berezhnoi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the date: EAGxVirtual 2023, published by Sasha Berezhnoi on May 23, 2023 on The Effective Altruism Forum.EAGxVirtual 2023 will take place on November 17-19Imagine interacting with EAs from over 70 countries and learning from their unique perspectives. Imagine walking across a virtual venue and making valuable connections, all from the comfort of your own home. Imagine no visa requirements and no airports. It's about to come true this November.Vision for the conferenceOur main goal is to help attendees identify the next steps to act based on EA principles wherever they are in the world and build stronger bonds within the community.Many people living outside of major EA hubs have uncertainties about how to take action. They don't have a good understanding of the EA landscape or who to ask. There are many types of constraints: language barriers, travel restrictions, or lack of knowledge about relevant opportunities.We want to address that by facilitating valuable connections, highlighting relevant opportunities and resources, and inviting speakers who are working on concrete projects. There will be a range of talks, workshops, live Q&A sessions, office hours with experts, and facilitated networking.What to expectLast year's EAGxVirtual featured 900 participants from 75 countries and facilitated lots of connections and progress. We want to build on this success, experiment, and improve.You can expect:Action-oriented content that will be relevant to people from different contexts and locationsAlways-available virtual venue (Gathertown) for unstructured conversations, socials, and private meetingsSchedule tailored for participants from different time zonesApplication processApplications will be open in September. Sign-up here to get notified when it’s open.Admissions will not be based on prior EA engagement or EA background knowledge. We welcome all who have a genuine interest in learning more or connecting!If you are completely new to EA, we recommend signing up for the Introductory EA Program to familiarise yourself with the core ideas before applying.EAGxVirtual 2023 will be hosted by EA Anywhere with the support of the CEA Events team.We are looking forward to an inspiring conference with you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sasha Berezhnoi https://forum.effectivealtruism.org/posts/FAHHFFmJuDqxBMCNH/save-the-date-eagxvirtual-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the date: EAGxVirtual 2023, published by Sasha Berezhnoi on May 23, 2023 on The Effective Altruism Forum.EAGxVirtual 2023 will take place on November 17-19Imagine interacting with EAs from over 70 countries and learning from their unique perspectives. Imagine walking across a virtual venue and making valuable connections, all from the comfort of your own home. Imagine no visa requirements and no airports. It's about to come true this November.Vision for the conferenceOur main goal is to help attendees identify the next steps to act based on EA principles wherever they are in the world and build stronger bonds within the community.Many people living outside of major EA hubs have uncertainties about how to take action. They don't have a good understanding of the EA landscape or who to ask. There are many types of constraints: language barriers, travel restrictions, or lack of knowledge about relevant opportunities.We want to address that by facilitating valuable connections, highlighting relevant opportunities and resources, and inviting speakers who are working on concrete projects. There will be a range of talks, workshops, live Q&A sessions, office hours with experts, and facilitated networking.What to expectLast year's EAGxVirtual featured 900 participants from 75 countries and facilitated lots of connections and progress. We want to build on this success, experiment, and improve.You can expect:Action-oriented content that will be relevant to people from different contexts and locationsAlways-available virtual venue (Gathertown) for unstructured conversations, socials, and private meetingsSchedule tailored for participants from different time zonesApplication processApplications will be open in September. Sign-up here to get notified when it’s open.Admissions will not be based on prior EA engagement or EA background knowledge. We welcome all who have a genuine interest in learning more or connecting!If you are completely new to EA, we recommend signing up for the Introductory EA Program to familiarise yourself with the core ideas before applying.EAGxVirtual 2023 will be hosted by EA Anywhere with the support of the CEA Events team.We are looking forward to an inspiring conference with you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 24 May 2023 00:06:14 +0000 EA - Save the date: EAGxVirtual 2023 by Sasha Berezhnoi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the date: EAGxVirtual 2023, published by Sasha Berezhnoi on May 23, 2023 on The Effective Altruism Forum.EAGxVirtual 2023 will take place on November 17-19Imagine interacting with EAs from over 70 countries and learning from their unique perspectives. Imagine walking across a virtual venue and making valuable connections, all from the comfort of your own home. Imagine no visa requirements and no airports. It's about to come true this November.Vision for the conferenceOur main goal is to help attendees identify the next steps to act based on EA principles wherever they are in the world and build stronger bonds within the community.Many people living outside of major EA hubs have uncertainties about how to take action. They don't have a good understanding of the EA landscape or who to ask. There are many types of constraints: language barriers, travel restrictions, or lack of knowledge about relevant opportunities.We want to address that by facilitating valuable connections, highlighting relevant opportunities and resources, and inviting speakers who are working on concrete projects. There will be a range of talks, workshops, live Q&A sessions, office hours with experts, and facilitated networking.What to expectLast year's EAGxVirtual featured 900 participants from 75 countries and facilitated lots of connections and progress. We want to build on this success, experiment, and improve.You can expect:Action-oriented content that will be relevant to people from different contexts and locationsAlways-available virtual venue (Gathertown) for unstructured conversations, socials, and private meetingsSchedule tailored for participants from different time zonesApplication processApplications will be open in September. Sign-up here to get notified when it’s open.Admissions will not be based on prior EA engagement or EA background knowledge. We welcome all who have a genuine interest in learning more or connecting!If you are completely new to EA, we recommend signing up for the Introductory EA Program to familiarise yourself with the core ideas before applying.EAGxVirtual 2023 will be hosted by EA Anywhere with the support of the CEA Events team.We are looking forward to an inspiring conference with you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the date: EAGxVirtual 2023, published by Sasha Berezhnoi on May 23, 2023 on The Effective Altruism Forum.EAGxVirtual 2023 will take place on November 17-19Imagine interacting with EAs from over 70 countries and learning from their unique perspectives. Imagine walking across a virtual venue and making valuable connections, all from the comfort of your own home. Imagine no visa requirements and no airports. It's about to come true this November.Vision for the conferenceOur main goal is to help attendees identify the next steps to act based on EA principles wherever they are in the world and build stronger bonds within the community.Many people living outside of major EA hubs have uncertainties about how to take action. They don't have a good understanding of the EA landscape or who to ask. There are many types of constraints: language barriers, travel restrictions, or lack of knowledge about relevant opportunities.We want to address that by facilitating valuable connections, highlighting relevant opportunities and resources, and inviting speakers who are working on concrete projects. There will be a range of talks, workshops, live Q&A sessions, office hours with experts, and facilitated networking.What to expectLast year's EAGxVirtual featured 900 participants from 75 countries and facilitated lots of connections and progress. We want to build on this success, experiment, and improve.You can expect:Action-oriented content that will be relevant to people from different contexts and locationsAlways-available virtual venue (Gathertown) for unstructured conversations, socials, and private meetingsSchedule tailored for participants from different time zonesApplication processApplications will be open in September. Sign-up here to get notified when it’s open.Admissions will not be based on prior EA engagement or EA background knowledge. We welcome all who have a genuine interest in learning more or connecting!If you are completely new to EA, we recommend signing up for the Introductory EA Program to familiarise yourself with the core ideas before applying.EAGxVirtual 2023 will be hosted by EA Anywhere with the support of the CEA Events team.We are looking forward to an inspiring conference with you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sasha Berezhnoi https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:23 None full 6042
AgihXxbw6aHAuNzLg_NL_EA_EA EA - Give feedback on the new 80,000 Hours career guide by Benjamin Hilton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Give feedback on the new 80,000 Hours career guide, published by Benjamin Hilton on May 23, 2023 on The Effective Altruism Forum.We’ve spent the last few months updating 80,000 Hours’ career guide (which we previously released in 2017 and which you've been able to get as a physical book). Today, we’ve put our new career guide live on our website. Before we formally launch and promote the guide - and republish the book - we’d like to gather feedback from you!How can you help?Take a look at the new career guide, which you can find at 80000hours.org/career-guide/.Please bear in mind that the vast majority of people who read the 80,000 Hours website are not EAs. Rather, our target audience for this career guide is approximately the ~100k young adults most likely to have high-impact careers, in the English speaking world. In particular, many of them are not yet familiar with many of the ideas that are widely discussed in the EA community. Also, this guide is primarily aimed at people aged 18-24.When you’re ready there’s a simple form to fill in:Click here to give feedback.Thank you so much!Extra context: why are we making this change?In 2018, we deprioritised 80,000 Hours’ career guide in favour of our key ideas series.Our key ideas series had a more serious tone, and was more focused on impact. It represented our best and most up-to-date advice. We expected that this switch would reduce engagement time on our site, but that the key ideas series would better appeal to people more likely to change their careers to do good.However, the drop in engagement time which we could attribute to this change was larger than we’d expected. In addition, data from our user survey suggested that people who changed their careers were more, not less, likely to have found and used the older, more informal career guide (which we kept up on our site).As a result, we decided to bring the advice in our career guide in line with our latest views, while attempting to retain its structure, tone and engagingness.We’re retaining the content in our key ideas series: it’s been re-released as our advanced series.Thank you for your help! You can find the new career guide here, and the feedback form here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Benjamin Hilton https://forum.effectivealtruism.org/posts/AgihXxbw6aHAuNzLg/give-feedback-on-the-new-80-000-hours-career-guide Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Give feedback on the new 80,000 Hours career guide, published by Benjamin Hilton on May 23, 2023 on The Effective Altruism Forum.We’ve spent the last few months updating 80,000 Hours’ career guide (which we previously released in 2017 and which you've been able to get as a physical book). Today, we’ve put our new career guide live on our website. Before we formally launch and promote the guide - and republish the book - we’d like to gather feedback from you!How can you help?Take a look at the new career guide, which you can find at 80000hours.org/career-guide/.Please bear in mind that the vast majority of people who read the 80,000 Hours website are not EAs. Rather, our target audience for this career guide is approximately the ~100k young adults most likely to have high-impact careers, in the English speaking world. In particular, many of them are not yet familiar with many of the ideas that are widely discussed in the EA community. Also, this guide is primarily aimed at people aged 18-24.When you’re ready there’s a simple form to fill in:Click here to give feedback.Thank you so much!Extra context: why are we making this change?In 2018, we deprioritised 80,000 Hours’ career guide in favour of our key ideas series.Our key ideas series had a more serious tone, and was more focused on impact. It represented our best and most up-to-date advice. We expected that this switch would reduce engagement time on our site, but that the key ideas series would better appeal to people more likely to change their careers to do good.However, the drop in engagement time which we could attribute to this change was larger than we’d expected. In addition, data from our user survey suggested that people who changed their careers were more, not less, likely to have found and used the older, more informal career guide (which we kept up on our site).As a result, we decided to bring the advice in our career guide in line with our latest views, while attempting to retain its structure, tone and engagingness.We’re retaining the content in our key ideas series: it’s been re-released as our advanced series.Thank you for your help! You can find the new career guide here, and the feedback form here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 23 May 2023 17:06:41 +0000 EA - Give feedback on the new 80,000 Hours career guide by Benjamin Hilton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Give feedback on the new 80,000 Hours career guide, published by Benjamin Hilton on May 23, 2023 on The Effective Altruism Forum.We’ve spent the last few months updating 80,000 Hours’ career guide (which we previously released in 2017 and which you've been able to get as a physical book). Today, we’ve put our new career guide live on our website. Before we formally launch and promote the guide - and republish the book - we’d like to gather feedback from you!How can you help?Take a look at the new career guide, which you can find at 80000hours.org/career-guide/.Please bear in mind that the vast majority of people who read the 80,000 Hours website are not EAs. Rather, our target audience for this career guide is approximately the ~100k young adults most likely to have high-impact careers, in the English speaking world. In particular, many of them are not yet familiar with many of the ideas that are widely discussed in the EA community. Also, this guide is primarily aimed at people aged 18-24.When you’re ready there’s a simple form to fill in:Click here to give feedback.Thank you so much!Extra context: why are we making this change?In 2018, we deprioritised 80,000 Hours’ career guide in favour of our key ideas series.Our key ideas series had a more serious tone, and was more focused on impact. It represented our best and most up-to-date advice. We expected that this switch would reduce engagement time on our site, but that the key ideas series would better appeal to people more likely to change their careers to do good.However, the drop in engagement time which we could attribute to this change was larger than we’d expected. In addition, data from our user survey suggested that people who changed their careers were more, not less, likely to have found and used the older, more informal career guide (which we kept up on our site).As a result, we decided to bring the advice in our career guide in line with our latest views, while attempting to retain its structure, tone and engagingness.We’re retaining the content in our key ideas series: it’s been re-released as our advanced series.Thank you for your help! You can find the new career guide here, and the feedback form here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Give feedback on the new 80,000 Hours career guide, published by Benjamin Hilton on May 23, 2023 on The Effective Altruism Forum.We’ve spent the last few months updating 80,000 Hours’ career guide (which we previously released in 2017 and which you've been able to get as a physical book). Today, we’ve put our new career guide live on our website. Before we formally launch and promote the guide - and republish the book - we’d like to gather feedback from you!How can you help?Take a look at the new career guide, which you can find at 80000hours.org/career-guide/.Please bear in mind that the vast majority of people who read the 80,000 Hours website are not EAs. Rather, our target audience for this career guide is approximately the ~100k young adults most likely to have high-impact careers, in the English speaking world. In particular, many of them are not yet familiar with many of the ideas that are widely discussed in the EA community. Also, this guide is primarily aimed at people aged 18-24.When you’re ready there’s a simple form to fill in:Click here to give feedback.Thank you so much!Extra context: why are we making this change?In 2018, we deprioritised 80,000 Hours’ career guide in favour of our key ideas series.Our key ideas series had a more serious tone, and was more focused on impact. It represented our best and most up-to-date advice. We expected that this switch would reduce engagement time on our site, but that the key ideas series would better appeal to people more likely to change their careers to do good.However, the drop in engagement time which we could attribute to this change was larger than we’d expected. In addition, data from our user survey suggested that people who changed their careers were more, not less, likely to have found and used the older, more informal career guide (which we kept up on our site).As a result, we decided to bring the advice in our career guide in line with our latest views, while attempting to retain its structure, tone and engagingness.We’re retaining the content in our key ideas series: it’s been re-released as our advanced series.Thank you for your help! You can find the new career guide here, and the feedback form here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Benjamin Hilton https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:16 None full 6035
FrshKTu34cFGGsyka_NL_EA_EA EA - Announcing a new organization: Epistea by Epistea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a new organization: Epistea, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryWe are announcing a new organization called Epistea. Epistea supports projects in the space of existential security, epistemics, rationality, and effective altruism. Some projects we initiate and run ourselves, and some projects we support by providing infrastructure, know-how, staff, operations, or fiscal sponsorship.Our current projects are FIXED POINT, Prague Fall Season, and the Epistea Residency Program. We support ACS (Alignment of Complex Systems Research Group), PIBBSS (Principles of Intelligent Behavior in Biological and Social Systems), and HAAISS (Human Aligned AI Summer School).History and contextEpistea was initially founded in 2019 as a rationality and epistemics research and education organization by Jan Kulveit and a small group of collaborators. They ran an experimental workshop on group rationality, the Epistea Summer Experiment in the summer of 2019 and were planning on organizing a series of rationality workshops in 2020. The pandemic paused plans to run workshops and most of the original staff have moved on to other projects.In 2022, Irena Kotíková was looking for an organization to fiscally sponsor her upcoming projects. Together with Jan, they decided to revamp Epistea as an umbrella organization for a wide range of projects related to epistemics and existential security, under Irena’s leadership.What?Epistea is a service organization that creates, runs, and supports projects that help with clear thinking and scale-sensitive caring. We believe that actions in sensitive areas such as existential risk mitigation often follow from good epistemics, and we are particularly interested in supporting efforts in this direction.The core Epistea team is based in Prague, Czech Republic, and works primarily in person there, although we support projects worldwide. As we are based in continental Europe and in the EU, we are a good fit for projects located in the EU.We provide the following services:Fiscal sponsorship (managing payments, accounting, and overall finances)Administrative and operations support (booking travel, accommodation, reimbursements, applications, visas)Events organization and support (conferences, retreats, workshops)Ad hoc operations supportWe currently run the following projects:FIXED POINTFixed Point is a community and coworking space situated in Prague. The space is optimized for intellectual work and interesting conversations but also prioritizes work-life balance. You can read more about FIXED POINT here.Prague Fall SeasonPFS is a new model for global movement building which we piloted in 2022. The goal of the Season is to have a high concentration of people and events, in a limited time, in one space, and working on a specific set of problems. This allows for better coordination and efficiency and creates more opportunities for people to collaborate, co-create and co-work on important projects together, possibly in a new location - different from their usual space. Part of PFS is a residency program. You can read more about the Prague Fall Season here.Additionally, we support:ACS - Alignment of Complex Systems Research GroupPIBBSS - Principles of Intelligent Behavior in Biological and Social SystemsHAAISS - Human Aligned AI Summer SchoolWho?Irena Kotíková leads a team of 4 full-time staff and 4 contractors:Jana Meixnerová - Head of Programs, focus on the Prague Fall SeasonViktorie Havlíčková - Head of OperationsMartin Hrádela - Facilities Manager, focus on Fixed PointJan Šrajhans - User Experience SpecialistKarin Neumanová - Interior DesignerLinh Dan Leová - Operations AssociateJiří Nádvorník - Special ProjectsFrantišek Drahota - Special ProjectsThe team has a wide range of experience...]]>
Epistea https://forum.effectivealtruism.org/posts/FrshKTu34cFGGsyka/announcing-a-new-organization-epistea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a new organization: Epistea, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryWe are announcing a new organization called Epistea. Epistea supports projects in the space of existential security, epistemics, rationality, and effective altruism. Some projects we initiate and run ourselves, and some projects we support by providing infrastructure, know-how, staff, operations, or fiscal sponsorship.Our current projects are FIXED POINT, Prague Fall Season, and the Epistea Residency Program. We support ACS (Alignment of Complex Systems Research Group), PIBBSS (Principles of Intelligent Behavior in Biological and Social Systems), and HAAISS (Human Aligned AI Summer School).History and contextEpistea was initially founded in 2019 as a rationality and epistemics research and education organization by Jan Kulveit and a small group of collaborators. They ran an experimental workshop on group rationality, the Epistea Summer Experiment in the summer of 2019 and were planning on organizing a series of rationality workshops in 2020. The pandemic paused plans to run workshops and most of the original staff have moved on to other projects.In 2022, Irena Kotíková was looking for an organization to fiscally sponsor her upcoming projects. Together with Jan, they decided to revamp Epistea as an umbrella organization for a wide range of projects related to epistemics and existential security, under Irena’s leadership.What?Epistea is a service organization that creates, runs, and supports projects that help with clear thinking and scale-sensitive caring. We believe that actions in sensitive areas such as existential risk mitigation often follow from good epistemics, and we are particularly interested in supporting efforts in this direction.The core Epistea team is based in Prague, Czech Republic, and works primarily in person there, although we support projects worldwide. As we are based in continental Europe and in the EU, we are a good fit for projects located in the EU.We provide the following services:Fiscal sponsorship (managing payments, accounting, and overall finances)Administrative and operations support (booking travel, accommodation, reimbursements, applications, visas)Events organization and support (conferences, retreats, workshops)Ad hoc operations supportWe currently run the following projects:FIXED POINTFixed Point is a community and coworking space situated in Prague. The space is optimized for intellectual work and interesting conversations but also prioritizes work-life balance. You can read more about FIXED POINT here.Prague Fall SeasonPFS is a new model for global movement building which we piloted in 2022. The goal of the Season is to have a high concentration of people and events, in a limited time, in one space, and working on a specific set of problems. This allows for better coordination and efficiency and creates more opportunities for people to collaborate, co-create and co-work on important projects together, possibly in a new location - different from their usual space. Part of PFS is a residency program. You can read more about the Prague Fall Season here.Additionally, we support:ACS - Alignment of Complex Systems Research GroupPIBBSS - Principles of Intelligent Behavior in Biological and Social SystemsHAAISS - Human Aligned AI Summer SchoolWho?Irena Kotíková leads a team of 4 full-time staff and 4 contractors:Jana Meixnerová - Head of Programs, focus on the Prague Fall SeasonViktorie Havlíčková - Head of OperationsMartin Hrádela - Facilities Manager, focus on Fixed PointJan Šrajhans - User Experience SpecialistKarin Neumanová - Interior DesignerLinh Dan Leová - Operations AssociateJiří Nádvorník - Special ProjectsFrantišek Drahota - Special ProjectsThe team has a wide range of experience...]]>
Tue, 23 May 2023 08:53:17 +0000 EA - Announcing a new organization: Epistea by Epistea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a new organization: Epistea, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryWe are announcing a new organization called Epistea. Epistea supports projects in the space of existential security, epistemics, rationality, and effective altruism. Some projects we initiate and run ourselves, and some projects we support by providing infrastructure, know-how, staff, operations, or fiscal sponsorship.Our current projects are FIXED POINT, Prague Fall Season, and the Epistea Residency Program. We support ACS (Alignment of Complex Systems Research Group), PIBBSS (Principles of Intelligent Behavior in Biological and Social Systems), and HAAISS (Human Aligned AI Summer School).History and contextEpistea was initially founded in 2019 as a rationality and epistemics research and education organization by Jan Kulveit and a small group of collaborators. They ran an experimental workshop on group rationality, the Epistea Summer Experiment in the summer of 2019 and were planning on organizing a series of rationality workshops in 2020. The pandemic paused plans to run workshops and most of the original staff have moved on to other projects.In 2022, Irena Kotíková was looking for an organization to fiscally sponsor her upcoming projects. Together with Jan, they decided to revamp Epistea as an umbrella organization for a wide range of projects related to epistemics and existential security, under Irena’s leadership.What?Epistea is a service organization that creates, runs, and supports projects that help with clear thinking and scale-sensitive caring. We believe that actions in sensitive areas such as existential risk mitigation often follow from good epistemics, and we are particularly interested in supporting efforts in this direction.The core Epistea team is based in Prague, Czech Republic, and works primarily in person there, although we support projects worldwide. As we are based in continental Europe and in the EU, we are a good fit for projects located in the EU.We provide the following services:Fiscal sponsorship (managing payments, accounting, and overall finances)Administrative and operations support (booking travel, accommodation, reimbursements, applications, visas)Events organization and support (conferences, retreats, workshops)Ad hoc operations supportWe currently run the following projects:FIXED POINTFixed Point is a community and coworking space situated in Prague. The space is optimized for intellectual work and interesting conversations but also prioritizes work-life balance. You can read more about FIXED POINT here.Prague Fall SeasonPFS is a new model for global movement building which we piloted in 2022. The goal of the Season is to have a high concentration of people and events, in a limited time, in one space, and working on a specific set of problems. This allows for better coordination and efficiency and creates more opportunities for people to collaborate, co-create and co-work on important projects together, possibly in a new location - different from their usual space. Part of PFS is a residency program. You can read more about the Prague Fall Season here.Additionally, we support:ACS - Alignment of Complex Systems Research GroupPIBBSS - Principles of Intelligent Behavior in Biological and Social SystemsHAAISS - Human Aligned AI Summer SchoolWho?Irena Kotíková leads a team of 4 full-time staff and 4 contractors:Jana Meixnerová - Head of Programs, focus on the Prague Fall SeasonViktorie Havlíčková - Head of OperationsMartin Hrádela - Facilities Manager, focus on Fixed PointJan Šrajhans - User Experience SpecialistKarin Neumanová - Interior DesignerLinh Dan Leová - Operations AssociateJiří Nádvorník - Special ProjectsFrantišek Drahota - Special ProjectsThe team has a wide range of experience...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a new organization: Epistea, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryWe are announcing a new organization called Epistea. Epistea supports projects in the space of existential security, epistemics, rationality, and effective altruism. Some projects we initiate and run ourselves, and some projects we support by providing infrastructure, know-how, staff, operations, or fiscal sponsorship.Our current projects are FIXED POINT, Prague Fall Season, and the Epistea Residency Program. We support ACS (Alignment of Complex Systems Research Group), PIBBSS (Principles of Intelligent Behavior in Biological and Social Systems), and HAAISS (Human Aligned AI Summer School).History and contextEpistea was initially founded in 2019 as a rationality and epistemics research and education organization by Jan Kulveit and a small group of collaborators. They ran an experimental workshop on group rationality, the Epistea Summer Experiment in the summer of 2019 and were planning on organizing a series of rationality workshops in 2020. The pandemic paused plans to run workshops and most of the original staff have moved on to other projects.In 2022, Irena Kotíková was looking for an organization to fiscally sponsor her upcoming projects. Together with Jan, they decided to revamp Epistea as an umbrella organization for a wide range of projects related to epistemics and existential security, under Irena’s leadership.What?Epistea is a service organization that creates, runs, and supports projects that help with clear thinking and scale-sensitive caring. We believe that actions in sensitive areas such as existential risk mitigation often follow from good epistemics, and we are particularly interested in supporting efforts in this direction.The core Epistea team is based in Prague, Czech Republic, and works primarily in person there, although we support projects worldwide. As we are based in continental Europe and in the EU, we are a good fit for projects located in the EU.We provide the following services:Fiscal sponsorship (managing payments, accounting, and overall finances)Administrative and operations support (booking travel, accommodation, reimbursements, applications, visas)Events organization and support (conferences, retreats, workshops)Ad hoc operations supportWe currently run the following projects:FIXED POINTFixed Point is a community and coworking space situated in Prague. The space is optimized for intellectual work and interesting conversations but also prioritizes work-life balance. You can read more about FIXED POINT here.Prague Fall SeasonPFS is a new model for global movement building which we piloted in 2022. The goal of the Season is to have a high concentration of people and events, in a limited time, in one space, and working on a specific set of problems. This allows for better coordination and efficiency and creates more opportunities for people to collaborate, co-create and co-work on important projects together, possibly in a new location - different from their usual space. Part of PFS is a residency program. You can read more about the Prague Fall Season here.Additionally, we support:ACS - Alignment of Complex Systems Research GroupPIBBSS - Principles of Intelligent Behavior in Biological and Social SystemsHAAISS - Human Aligned AI Summer SchoolWho?Irena Kotíková leads a team of 4 full-time staff and 4 contractors:Jana Meixnerová - Head of Programs, focus on the Prague Fall SeasonViktorie Havlíčková - Head of OperationsMartin Hrádela - Facilities Manager, focus on Fixed PointJan Šrajhans - User Experience SpecialistKarin Neumanová - Interior DesignerLinh Dan Leová - Operations AssociateJiří Nádvorník - Special ProjectsFrantišek Drahota - Special ProjectsThe team has a wide range of experience...]]>
Epistea https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:41 None full 6032
oo96uRHNbGjr4DHut_NL_EA_EA EA - [Linkpost] "Governance of superintelligence" by OpenAI by Daniel Eth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] "Governance of superintelligence" by OpenAI, published by Daniel Eth on May 22, 2023 on The Effective Altruism Forum.OpenAI has a new blog post out titled "Governance of superintelligence" (subtitle: "Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI"), by Sam Altman, Greg Brockman, and Ilya Sutskever.The piece is short (~800 words), so I recommend most people just read it in full.Here's the introduction/summary (bold added for emphasis):Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.And below are a few more quotes that stood out:"First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society.""Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.""It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.""Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into.""We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here""By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.""we believe it would be unintuitively risky and difficult to stop the creation of superintelligence"Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Daniel Eth https://forum.effectivealtruism.org/posts/oo96uRHNbGjr4DHut/linkpost-governance-of-superintelligence-by-openai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] "Governance of superintelligence" by OpenAI, published by Daniel Eth on May 22, 2023 on The Effective Altruism Forum.OpenAI has a new blog post out titled "Governance of superintelligence" (subtitle: "Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI"), by Sam Altman, Greg Brockman, and Ilya Sutskever.The piece is short (~800 words), so I recommend most people just read it in full.Here's the introduction/summary (bold added for emphasis):Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.And below are a few more quotes that stood out:"First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society.""Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.""It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.""Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into.""We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here""By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.""we believe it would be unintuitively risky and difficult to stop the creation of superintelligence"Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 23 May 2023 08:04:15 +0000 EA - [Linkpost] "Governance of superintelligence" by OpenAI by Daniel Eth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] "Governance of superintelligence" by OpenAI, published by Daniel Eth on May 22, 2023 on The Effective Altruism Forum.OpenAI has a new blog post out titled "Governance of superintelligence" (subtitle: "Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI"), by Sam Altman, Greg Brockman, and Ilya Sutskever.The piece is short (~800 words), so I recommend most people just read it in full.Here's the introduction/summary (bold added for emphasis):Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.And below are a few more quotes that stood out:"First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society.""Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.""It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.""Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into.""We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here""By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.""we believe it would be unintuitively risky and difficult to stop the creation of superintelligence"Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] "Governance of superintelligence" by OpenAI, published by Daniel Eth on May 22, 2023 on The Effective Altruism Forum.OpenAI has a new blog post out titled "Governance of superintelligence" (subtitle: "Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI"), by Sam Altman, Greg Brockman, and Ilya Sutskever.The piece is short (~800 words), so I recommend most people just read it in full.Here's the introduction/summary (bold added for emphasis):Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.And below are a few more quotes that stood out:"First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society.""Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.""It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.""Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into.""We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here""By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.""we believe it would be unintuitively risky and difficult to stop the creation of superintelligence"Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Daniel Eth https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:46 None full 6031
8ymrq8FzAmzAWgxkS_NL_EA_EA EA - X-risk discussion in a college commencement speech by SWK Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: X-risk discussion in a college commencement speech, published by SWK on May 22, 2023 on The Effective Altruism Forum.Yesterday, Juan Manuel Santos, former president of Colombia (2010-2018) and 2016 Nobel Peace Prize winner gave the commencement address at the University of Notre Dame.The address contained the usual graduation speech stuff about all the problems humanity is facing and how this special class of students is uniquely equipped to change the world or whatever.While the gist of the message was pretty standard, I was pleasantly surprised that Santos spent a significant chunk of time talking about existential risk as the most pressing problem of our time (see clip here). Santos touched on the major x-risk factors well known to the EA community: AI, biosecurity, and nuclear weapons. However, he also emphasized climate change as one of the most pressing existential threats, which of course is a view that many EAs do not share.On one hand, I think this speech should be seen as a sign of hope. Ideas on the importance of mitigating x-risk — and the particular threats of AI, pandemics, and nuclear war — seem to be entering more mainstream circles. This trend is evidenced further in a recent post noting another former world leader (Israeli PM Naftali Bennett) who publicly discussed AI x-risk. And I think a college commencement speech is arguably more mainstream than the Asian Leadership Conference at which Bennett delivered his talk.On the other hand, it was clear that there is still a long way to go in terms of convincing most of the general population that x-risk should be taken seriously. Sitting in the audience, the people around me were smirking and giggling throughout the x-risk portion of Santos' speech. Afterward, I overheard people joking about the AI part and how they thought it was inappropriate to talk about such morbid material in a commencement address.Overall, I certainly appreciated Santos for talking about x-risk, but I'm not convinced that his words had much of an impact on the people in the audience. To be sure, I realize that commencement speeches are largely ceremonial and all but a handful don't have any broader societal impact. Still, it would have been nice to see people be more receptive to Santos' important ideas.I would be interested to hear if anyone has any thoughts on Santos' discussion of x-risk. Was it appropriate to talk about this stuff in the context of a commencement address? Is this an effective forum to spread ideas about x-risk (or other "weird" EA ideas), or will these ideas just fall on deaf ears? Or are commencement addresses mostly irrelevant and not worth even thinking about for the purposes of growing EA and promoting some of the more idiosyncratic concepts?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
SWK https://forum.effectivealtruism.org/posts/8ymrq8FzAmzAWgxkS/x-risk-discussion-in-a-college-commencement-speech Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: X-risk discussion in a college commencement speech, published by SWK on May 22, 2023 on The Effective Altruism Forum.Yesterday, Juan Manuel Santos, former president of Colombia (2010-2018) and 2016 Nobel Peace Prize winner gave the commencement address at the University of Notre Dame.The address contained the usual graduation speech stuff about all the problems humanity is facing and how this special class of students is uniquely equipped to change the world or whatever.While the gist of the message was pretty standard, I was pleasantly surprised that Santos spent a significant chunk of time talking about existential risk as the most pressing problem of our time (see clip here). Santos touched on the major x-risk factors well known to the EA community: AI, biosecurity, and nuclear weapons. However, he also emphasized climate change as one of the most pressing existential threats, which of course is a view that many EAs do not share.On one hand, I think this speech should be seen as a sign of hope. Ideas on the importance of mitigating x-risk — and the particular threats of AI, pandemics, and nuclear war — seem to be entering more mainstream circles. This trend is evidenced further in a recent post noting another former world leader (Israeli PM Naftali Bennett) who publicly discussed AI x-risk. And I think a college commencement speech is arguably more mainstream than the Asian Leadership Conference at which Bennett delivered his talk.On the other hand, it was clear that there is still a long way to go in terms of convincing most of the general population that x-risk should be taken seriously. Sitting in the audience, the people around me were smirking and giggling throughout the x-risk portion of Santos' speech. Afterward, I overheard people joking about the AI part and how they thought it was inappropriate to talk about such morbid material in a commencement address.Overall, I certainly appreciated Santos for talking about x-risk, but I'm not convinced that his words had much of an impact on the people in the audience. To be sure, I realize that commencement speeches are largely ceremonial and all but a handful don't have any broader societal impact. Still, it would have been nice to see people be more receptive to Santos' important ideas.I would be interested to hear if anyone has any thoughts on Santos' discussion of x-risk. Was it appropriate to talk about this stuff in the context of a commencement address? Is this an effective forum to spread ideas about x-risk (or other "weird" EA ideas), or will these ideas just fall on deaf ears? Or are commencement addresses mostly irrelevant and not worth even thinking about for the purposes of growing EA and promoting some of the more idiosyncratic concepts?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 22 May 2023 22:52:35 +0000 EA - X-risk discussion in a college commencement speech by SWK Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: X-risk discussion in a college commencement speech, published by SWK on May 22, 2023 on The Effective Altruism Forum.Yesterday, Juan Manuel Santos, former president of Colombia (2010-2018) and 2016 Nobel Peace Prize winner gave the commencement address at the University of Notre Dame.The address contained the usual graduation speech stuff about all the problems humanity is facing and how this special class of students is uniquely equipped to change the world or whatever.While the gist of the message was pretty standard, I was pleasantly surprised that Santos spent a significant chunk of time talking about existential risk as the most pressing problem of our time (see clip here). Santos touched on the major x-risk factors well known to the EA community: AI, biosecurity, and nuclear weapons. However, he also emphasized climate change as one of the most pressing existential threats, which of course is a view that many EAs do not share.On one hand, I think this speech should be seen as a sign of hope. Ideas on the importance of mitigating x-risk — and the particular threats of AI, pandemics, and nuclear war — seem to be entering more mainstream circles. This trend is evidenced further in a recent post noting another former world leader (Israeli PM Naftali Bennett) who publicly discussed AI x-risk. And I think a college commencement speech is arguably more mainstream than the Asian Leadership Conference at which Bennett delivered his talk.On the other hand, it was clear that there is still a long way to go in terms of convincing most of the general population that x-risk should be taken seriously. Sitting in the audience, the people around me were smirking and giggling throughout the x-risk portion of Santos' speech. Afterward, I overheard people joking about the AI part and how they thought it was inappropriate to talk about such morbid material in a commencement address.Overall, I certainly appreciated Santos for talking about x-risk, but I'm not convinced that his words had much of an impact on the people in the audience. To be sure, I realize that commencement speeches are largely ceremonial and all but a handful don't have any broader societal impact. Still, it would have been nice to see people be more receptive to Santos' important ideas.I would be interested to hear if anyone has any thoughts on Santos' discussion of x-risk. Was it appropriate to talk about this stuff in the context of a commencement address? Is this an effective forum to spread ideas about x-risk (or other "weird" EA ideas), or will these ideas just fall on deaf ears? Or are commencement addresses mostly irrelevant and not worth even thinking about for the purposes of growing EA and promoting some of the more idiosyncratic concepts?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: X-risk discussion in a college commencement speech, published by SWK on May 22, 2023 on The Effective Altruism Forum.Yesterday, Juan Manuel Santos, former president of Colombia (2010-2018) and 2016 Nobel Peace Prize winner gave the commencement address at the University of Notre Dame.The address contained the usual graduation speech stuff about all the problems humanity is facing and how this special class of students is uniquely equipped to change the world or whatever.While the gist of the message was pretty standard, I was pleasantly surprised that Santos spent a significant chunk of time talking about existential risk as the most pressing problem of our time (see clip here). Santos touched on the major x-risk factors well known to the EA community: AI, biosecurity, and nuclear weapons. However, he also emphasized climate change as one of the most pressing existential threats, which of course is a view that many EAs do not share.On one hand, I think this speech should be seen as a sign of hope. Ideas on the importance of mitigating x-risk — and the particular threats of AI, pandemics, and nuclear war — seem to be entering more mainstream circles. This trend is evidenced further in a recent post noting another former world leader (Israeli PM Naftali Bennett) who publicly discussed AI x-risk. And I think a college commencement speech is arguably more mainstream than the Asian Leadership Conference at which Bennett delivered his talk.On the other hand, it was clear that there is still a long way to go in terms of convincing most of the general population that x-risk should be taken seriously. Sitting in the audience, the people around me were smirking and giggling throughout the x-risk portion of Santos' speech. Afterward, I overheard people joking about the AI part and how they thought it was inappropriate to talk about such morbid material in a commencement address.Overall, I certainly appreciated Santos for talking about x-risk, but I'm not convinced that his words had much of an impact on the people in the audience. To be sure, I realize that commencement speeches are largely ceremonial and all but a handful don't have any broader societal impact. Still, it would have been nice to see people be more receptive to Santos' important ideas.I would be interested to hear if anyone has any thoughts on Santos' discussion of x-risk. Was it appropriate to talk about this stuff in the context of a commencement address? Is this an effective forum to spread ideas about x-risk (or other "weird" EA ideas), or will these ideas just fall on deaf ears? Or are commencement addresses mostly irrelevant and not worth even thinking about for the purposes of growing EA and promoting some of the more idiosyncratic concepts?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
SWK https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:41 None full 6028
mHk9h3RxvuGmTThaS_NL_EA_EA EA - If you find EA conferences emotionally difficult, you're not alone by Amber Dawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you find EA conferences emotionally difficult, you're not alone, published by Amber Dawn on May 22, 2023 on The Effective Altruism Forum.I went to EAG London this weekend. I had some interesting chats, wrote some cryptic squiggles in my notebook (“Clockify” “the Easterlin paradox”, “functionalist eudaimonic theories”), and gave and received some hopefully-useful advice. Overall, the conference was fun and worthwhile for me. But at times, I also found the conference emotionally difficult.I think this is pretty common. After last year’s EAG, Alastair Fraser-Urquhart wrote about how he burnt out at the conference and had to miss a retreat starting the next day. The post was popular, and many said they’d had similar experiences.The standard euphemism for this facet of EA conferences is ‘intense’ or ‘tiring’, but I suspect these adjectives are often a more socially-acceptable way of saying ‘I feel low/anxious/exhausted and want to curl up in a foetal position in a darkened room’.I want to write this post to:balance out the ‘woo EAG lfg!’ hype, and help people who found it a bad or ambivalent experience to feel less alonedig into to why EAGs can be difficult: this might help attendees have better experiences themselves, and also create an environment where others are more likely to have good experienceshelp people who mostly enjoy EAGs understand what their more neurotic or introverted friends are going throughHere are some reasons that EAGs might be emotionally difficult. Some of these I’ve experienced personally, others are based on comments I’ve heard, and others are plausible educated guesses.It’s easy to compare oneself (negatively) to othersEA conferences are attended by a bunch of “impressive” people: big-name EAs like Will MacAskill and Toby Ord, entrepreneurs, organisation leaders, politicians, and “inner-circle-y” people who are Forum- or Twitter-famous. You’ve probably scheduled meetings with people because they’re impressive to you; perhaps you’re seeking mentorship and advice from people who are more senior or advanced in your field, or you want to talk to someone because they have cool ideas.This can naturally inflame impostor syndrome, feelings of inadequacy, and negative comparisons. Everyone seems smarter, harder-working, more agentic, better informed. Everyone’s got it all figured out, while you’re still stuck at Stage 2 of 80k’s career planning process. Everyone expects you to have a plan to save the world, and you don’t even have a plan for how to start making a plan.Most EAs, I think, know that these thought patterns are counterproductive. But even if some rational part of you knows this, it can still be hard to fight them - especially if you’re tired, scattered, or over-busy, since this makes it harder to employ therapeutic coping mechanisms.The stakes are highWe’re trying to solve immense, scary problems. We (and CEA) pour so much time and money into these conferences because we hope that they’ll help us make progress on those problems. This can make the conferences anxiety-inducing - you really really hope that the conference pays off. This is especially true if you have some specific goal - such as finding a job, collaborators or funders - or if you think the conference has a high opportunity cost for you.You spend a lot of time talking about depressing thingsThis is just part of being an EA, of course, but most of us don’t spend all our time directly confronting the magnitude of these problems. Having multiple back-to-back conversations about ‘how can we solve [massive, seemingly-intractable problem]?’ can be pretty discouraging.Everything is busy and franticYou’re constantly rushing from meeting to meeting, trying not to bump into others who are doing the same. You see acquaintances but only have time to wave hello, because y...]]>
Amber Dawn https://forum.effectivealtruism.org/posts/mHk9h3RxvuGmTThaS/if-you-find-ea-conferences-emotionally-difficult-you-re-not Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you find EA conferences emotionally difficult, you're not alone, published by Amber Dawn on May 22, 2023 on The Effective Altruism Forum.I went to EAG London this weekend. I had some interesting chats, wrote some cryptic squiggles in my notebook (“Clockify” “the Easterlin paradox”, “functionalist eudaimonic theories”), and gave and received some hopefully-useful advice. Overall, the conference was fun and worthwhile for me. But at times, I also found the conference emotionally difficult.I think this is pretty common. After last year’s EAG, Alastair Fraser-Urquhart wrote about how he burnt out at the conference and had to miss a retreat starting the next day. The post was popular, and many said they’d had similar experiences.The standard euphemism for this facet of EA conferences is ‘intense’ or ‘tiring’, but I suspect these adjectives are often a more socially-acceptable way of saying ‘I feel low/anxious/exhausted and want to curl up in a foetal position in a darkened room’.I want to write this post to:balance out the ‘woo EAG lfg!’ hype, and help people who found it a bad or ambivalent experience to feel less alonedig into to why EAGs can be difficult: this might help attendees have better experiences themselves, and also create an environment where others are more likely to have good experienceshelp people who mostly enjoy EAGs understand what their more neurotic or introverted friends are going throughHere are some reasons that EAGs might be emotionally difficult. Some of these I’ve experienced personally, others are based on comments I’ve heard, and others are plausible educated guesses.It’s easy to compare oneself (negatively) to othersEA conferences are attended by a bunch of “impressive” people: big-name EAs like Will MacAskill and Toby Ord, entrepreneurs, organisation leaders, politicians, and “inner-circle-y” people who are Forum- or Twitter-famous. You’ve probably scheduled meetings with people because they’re impressive to you; perhaps you’re seeking mentorship and advice from people who are more senior or advanced in your field, or you want to talk to someone because they have cool ideas.This can naturally inflame impostor syndrome, feelings of inadequacy, and negative comparisons. Everyone seems smarter, harder-working, more agentic, better informed. Everyone’s got it all figured out, while you’re still stuck at Stage 2 of 80k’s career planning process. Everyone expects you to have a plan to save the world, and you don’t even have a plan for how to start making a plan.Most EAs, I think, know that these thought patterns are counterproductive. But even if some rational part of you knows this, it can still be hard to fight them - especially if you’re tired, scattered, or over-busy, since this makes it harder to employ therapeutic coping mechanisms.The stakes are highWe’re trying to solve immense, scary problems. We (and CEA) pour so much time and money into these conferences because we hope that they’ll help us make progress on those problems. This can make the conferences anxiety-inducing - you really really hope that the conference pays off. This is especially true if you have some specific goal - such as finding a job, collaborators or funders - or if you think the conference has a high opportunity cost for you.You spend a lot of time talking about depressing thingsThis is just part of being an EA, of course, but most of us don’t spend all our time directly confronting the magnitude of these problems. Having multiple back-to-back conversations about ‘how can we solve [massive, seemingly-intractable problem]?’ can be pretty discouraging.Everything is busy and franticYou’re constantly rushing from meeting to meeting, trying not to bump into others who are doing the same. You see acquaintances but only have time to wave hello, because y...]]>
Mon, 22 May 2023 14:54:20 +0000 EA - If you find EA conferences emotionally difficult, you're not alone by Amber Dawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you find EA conferences emotionally difficult, you're not alone, published by Amber Dawn on May 22, 2023 on The Effective Altruism Forum.I went to EAG London this weekend. I had some interesting chats, wrote some cryptic squiggles in my notebook (“Clockify” “the Easterlin paradox”, “functionalist eudaimonic theories”), and gave and received some hopefully-useful advice. Overall, the conference was fun and worthwhile for me. But at times, I also found the conference emotionally difficult.I think this is pretty common. After last year’s EAG, Alastair Fraser-Urquhart wrote about how he burnt out at the conference and had to miss a retreat starting the next day. The post was popular, and many said they’d had similar experiences.The standard euphemism for this facet of EA conferences is ‘intense’ or ‘tiring’, but I suspect these adjectives are often a more socially-acceptable way of saying ‘I feel low/anxious/exhausted and want to curl up in a foetal position in a darkened room’.I want to write this post to:balance out the ‘woo EAG lfg!’ hype, and help people who found it a bad or ambivalent experience to feel less alonedig into to why EAGs can be difficult: this might help attendees have better experiences themselves, and also create an environment where others are more likely to have good experienceshelp people who mostly enjoy EAGs understand what their more neurotic or introverted friends are going throughHere are some reasons that EAGs might be emotionally difficult. Some of these I’ve experienced personally, others are based on comments I’ve heard, and others are plausible educated guesses.It’s easy to compare oneself (negatively) to othersEA conferences are attended by a bunch of “impressive” people: big-name EAs like Will MacAskill and Toby Ord, entrepreneurs, organisation leaders, politicians, and “inner-circle-y” people who are Forum- or Twitter-famous. You’ve probably scheduled meetings with people because they’re impressive to you; perhaps you’re seeking mentorship and advice from people who are more senior or advanced in your field, or you want to talk to someone because they have cool ideas.This can naturally inflame impostor syndrome, feelings of inadequacy, and negative comparisons. Everyone seems smarter, harder-working, more agentic, better informed. Everyone’s got it all figured out, while you’re still stuck at Stage 2 of 80k’s career planning process. Everyone expects you to have a plan to save the world, and you don’t even have a plan for how to start making a plan.Most EAs, I think, know that these thought patterns are counterproductive. But even if some rational part of you knows this, it can still be hard to fight them - especially if you’re tired, scattered, or over-busy, since this makes it harder to employ therapeutic coping mechanisms.The stakes are highWe’re trying to solve immense, scary problems. We (and CEA) pour so much time and money into these conferences because we hope that they’ll help us make progress on those problems. This can make the conferences anxiety-inducing - you really really hope that the conference pays off. This is especially true if you have some specific goal - such as finding a job, collaborators or funders - or if you think the conference has a high opportunity cost for you.You spend a lot of time talking about depressing thingsThis is just part of being an EA, of course, but most of us don’t spend all our time directly confronting the magnitude of these problems. Having multiple back-to-back conversations about ‘how can we solve [massive, seemingly-intractable problem]?’ can be pretty discouraging.Everything is busy and franticYou’re constantly rushing from meeting to meeting, trying not to bump into others who are doing the same. You see acquaintances but only have time to wave hello, because y...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you find EA conferences emotionally difficult, you're not alone, published by Amber Dawn on May 22, 2023 on The Effective Altruism Forum.I went to EAG London this weekend. I had some interesting chats, wrote some cryptic squiggles in my notebook (“Clockify” “the Easterlin paradox”, “functionalist eudaimonic theories”), and gave and received some hopefully-useful advice. Overall, the conference was fun and worthwhile for me. But at times, I also found the conference emotionally difficult.I think this is pretty common. After last year’s EAG, Alastair Fraser-Urquhart wrote about how he burnt out at the conference and had to miss a retreat starting the next day. The post was popular, and many said they’d had similar experiences.The standard euphemism for this facet of EA conferences is ‘intense’ or ‘tiring’, but I suspect these adjectives are often a more socially-acceptable way of saying ‘I feel low/anxious/exhausted and want to curl up in a foetal position in a darkened room’.I want to write this post to:balance out the ‘woo EAG lfg!’ hype, and help people who found it a bad or ambivalent experience to feel less alonedig into to why EAGs can be difficult: this might help attendees have better experiences themselves, and also create an environment where others are more likely to have good experienceshelp people who mostly enjoy EAGs understand what their more neurotic or introverted friends are going throughHere are some reasons that EAGs might be emotionally difficult. Some of these I’ve experienced personally, others are based on comments I’ve heard, and others are plausible educated guesses.It’s easy to compare oneself (negatively) to othersEA conferences are attended by a bunch of “impressive” people: big-name EAs like Will MacAskill and Toby Ord, entrepreneurs, organisation leaders, politicians, and “inner-circle-y” people who are Forum- or Twitter-famous. You’ve probably scheduled meetings with people because they’re impressive to you; perhaps you’re seeking mentorship and advice from people who are more senior or advanced in your field, or you want to talk to someone because they have cool ideas.This can naturally inflame impostor syndrome, feelings of inadequacy, and negative comparisons. Everyone seems smarter, harder-working, more agentic, better informed. Everyone’s got it all figured out, while you’re still stuck at Stage 2 of 80k’s career planning process. Everyone expects you to have a plan to save the world, and you don’t even have a plan for how to start making a plan.Most EAs, I think, know that these thought patterns are counterproductive. But even if some rational part of you knows this, it can still be hard to fight them - especially if you’re tired, scattered, or over-busy, since this makes it harder to employ therapeutic coping mechanisms.The stakes are highWe’re trying to solve immense, scary problems. We (and CEA) pour so much time and money into these conferences because we hope that they’ll help us make progress on those problems. This can make the conferences anxiety-inducing - you really really hope that the conference pays off. This is especially true if you have some specific goal - such as finding a job, collaborators or funders - or if you think the conference has a high opportunity cost for you.You spend a lot of time talking about depressing thingsThis is just part of being an EA, of course, but most of us don’t spend all our time directly confronting the magnitude of these problems. Having multiple back-to-back conversations about ‘how can we solve [massive, seemingly-intractable problem]?’ can be pretty discouraging.Everything is busy and franticYou’re constantly rushing from meeting to meeting, trying not to bump into others who are doing the same. You see acquaintances but only have time to wave hello, because y...]]>
Amber Dawn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:57 None full 6025
rtnWMb9ewbNnhhFPd_NL_EA_EA EA - Announcing the Prague community space: Fixed Point by Epistea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Prague community space: Fixed Point, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryA coworking and event space in Prague is open for the entirety of 2023, offering up to 50 desk spaces in a coworking office, a large event space for up to 60 people, multiple meeting rooms and other amenities. We are currently in the process of transitioning from an initial subsidized model to a more sustainable paid membership model.In 2022, Fixed Point was the home of the Prague Fall Season which will be returning there in 2023.We are seeking people, projects and hosts of events here in 2023. If you are interested you can apply here.What is Fixed Point?Fixed Point is a unique community and coworking space located in the heart of Prague operated by Epistea. We support organizations and individuals working on existential security, epistemics, rationality, and effective altruism. Across five floors there is a variety of coworking offices offering up to 50 workstations, as well as numerous meeting rooms and private call stations. In addition to functional work areas, there are inviting communal spaces such as a large comfortable common room accommodating up to 60 people, two fully equipped large kitchens, and a spacious dining area. These amenities create a welcoming environment that encourages social interaction and facilitates spontaneous exchange of ideas. Additionally, there are on-site amenities like a small gym, a nap room, two laundry rooms, bathrooms with showers, and a garden with outdoor tables and seating. For those in need of short-term accommodation, our on-site guesthouse has a capacity of up to 10 beds.Fixed Point is a space where brilliant and highly engaged individuals make crucial career decisions, establish significant relationships, and find opportunities for introspection among like-minded peers when they need it most. In 2022, Fixed Point was home to the Prague Fall Season, when 350 people visited the space.The name "Fixed Point" draws inspiration from the prevalence of various Fixed Point theorems in almost all areas people working in the space work on. If you study the areas seriously, you will find fixed points sooner or later.Why Prague?The Czech effective altruism and rationalist community has long been committed to operational excellence and the creation of physical spaces that facilitate collaboration. With numerous successfully incubated organizations and passionate individuals making a difference in high-impact domains, Prague is now a viable option, especially for EU citizens wanting to settle in continental Europe.In addition to the Prague Fall Season, Prague is home to many different projects, such as Alignment of Complex Systems Research Group, ESPR or Czech Priorities. We host the European runs of CFAR workshops and CFAR rEUnions.Whom is it for?We extend a warm welcome to both short and long-term visitors working on meaningful projects in the areas of existential risk mitigation, AI safety, rationality, epistemics, and effective altruism. We are particularly excited to accommodate individuals and teams in the following categories:Those interested in hosting events,Teams seeking a workspace for an extended period of time.Here are a few examples of the projects we are equipped to host and are enthusiastic about:Weekend hackathons,Incubators lasting up to several months,Conferences,Workshops and lectures on relevant topics,Providing office spaces for existing projects.Additional supportIn addition to the amenities, we can also offer the following services upon request:Project management for existing initiatives,Catering services for events,Administrative and operations supportAccommodation arrangements,Event logistics and operations assistance,Event design consulting.Feel free to r...]]>
Epistea https://forum.effectivealtruism.org/posts/rtnWMb9ewbNnhhFPd/announcing-the-prague-community-space-fixed-point Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Prague community space: Fixed Point, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryA coworking and event space in Prague is open for the entirety of 2023, offering up to 50 desk spaces in a coworking office, a large event space for up to 60 people, multiple meeting rooms and other amenities. We are currently in the process of transitioning from an initial subsidized model to a more sustainable paid membership model.In 2022, Fixed Point was the home of the Prague Fall Season which will be returning there in 2023.We are seeking people, projects and hosts of events here in 2023. If you are interested you can apply here.What is Fixed Point?Fixed Point is a unique community and coworking space located in the heart of Prague operated by Epistea. We support organizations and individuals working on existential security, epistemics, rationality, and effective altruism. Across five floors there is a variety of coworking offices offering up to 50 workstations, as well as numerous meeting rooms and private call stations. In addition to functional work areas, there are inviting communal spaces such as a large comfortable common room accommodating up to 60 people, two fully equipped large kitchens, and a spacious dining area. These amenities create a welcoming environment that encourages social interaction and facilitates spontaneous exchange of ideas. Additionally, there are on-site amenities like a small gym, a nap room, two laundry rooms, bathrooms with showers, and a garden with outdoor tables and seating. For those in need of short-term accommodation, our on-site guesthouse has a capacity of up to 10 beds.Fixed Point is a space where brilliant and highly engaged individuals make crucial career decisions, establish significant relationships, and find opportunities for introspection among like-minded peers when they need it most. In 2022, Fixed Point was home to the Prague Fall Season, when 350 people visited the space.The name "Fixed Point" draws inspiration from the prevalence of various Fixed Point theorems in almost all areas people working in the space work on. If you study the areas seriously, you will find fixed points sooner or later.Why Prague?The Czech effective altruism and rationalist community has long been committed to operational excellence and the creation of physical spaces that facilitate collaboration. With numerous successfully incubated organizations and passionate individuals making a difference in high-impact domains, Prague is now a viable option, especially for EU citizens wanting to settle in continental Europe.In addition to the Prague Fall Season, Prague is home to many different projects, such as Alignment of Complex Systems Research Group, ESPR or Czech Priorities. We host the European runs of CFAR workshops and CFAR rEUnions.Whom is it for?We extend a warm welcome to both short and long-term visitors working on meaningful projects in the areas of existential risk mitigation, AI safety, rationality, epistemics, and effective altruism. We are particularly excited to accommodate individuals and teams in the following categories:Those interested in hosting events,Teams seeking a workspace for an extended period of time.Here are a few examples of the projects we are equipped to host and are enthusiastic about:Weekend hackathons,Incubators lasting up to several months,Conferences,Workshops and lectures on relevant topics,Providing office spaces for existing projects.Additional supportIn addition to the amenities, we can also offer the following services upon request:Project management for existing initiatives,Catering services for events,Administrative and operations supportAccommodation arrangements,Event logistics and operations assistance,Event design consulting.Feel free to r...]]>
Mon, 22 May 2023 14:24:41 +0000 EA - Announcing the Prague community space: Fixed Point by Epistea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Prague community space: Fixed Point, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryA coworking and event space in Prague is open for the entirety of 2023, offering up to 50 desk spaces in a coworking office, a large event space for up to 60 people, multiple meeting rooms and other amenities. We are currently in the process of transitioning from an initial subsidized model to a more sustainable paid membership model.In 2022, Fixed Point was the home of the Prague Fall Season which will be returning there in 2023.We are seeking people, projects and hosts of events here in 2023. If you are interested you can apply here.What is Fixed Point?Fixed Point is a unique community and coworking space located in the heart of Prague operated by Epistea. We support organizations and individuals working on existential security, epistemics, rationality, and effective altruism. Across five floors there is a variety of coworking offices offering up to 50 workstations, as well as numerous meeting rooms and private call stations. In addition to functional work areas, there are inviting communal spaces such as a large comfortable common room accommodating up to 60 people, two fully equipped large kitchens, and a spacious dining area. These amenities create a welcoming environment that encourages social interaction and facilitates spontaneous exchange of ideas. Additionally, there are on-site amenities like a small gym, a nap room, two laundry rooms, bathrooms with showers, and a garden with outdoor tables and seating. For those in need of short-term accommodation, our on-site guesthouse has a capacity of up to 10 beds.Fixed Point is a space where brilliant and highly engaged individuals make crucial career decisions, establish significant relationships, and find opportunities for introspection among like-minded peers when they need it most. In 2022, Fixed Point was home to the Prague Fall Season, when 350 people visited the space.The name "Fixed Point" draws inspiration from the prevalence of various Fixed Point theorems in almost all areas people working in the space work on. If you study the areas seriously, you will find fixed points sooner or later.Why Prague?The Czech effective altruism and rationalist community has long been committed to operational excellence and the creation of physical spaces that facilitate collaboration. With numerous successfully incubated organizations and passionate individuals making a difference in high-impact domains, Prague is now a viable option, especially for EU citizens wanting to settle in continental Europe.In addition to the Prague Fall Season, Prague is home to many different projects, such as Alignment of Complex Systems Research Group, ESPR or Czech Priorities. We host the European runs of CFAR workshops and CFAR rEUnions.Whom is it for?We extend a warm welcome to both short and long-term visitors working on meaningful projects in the areas of existential risk mitigation, AI safety, rationality, epistemics, and effective altruism. We are particularly excited to accommodate individuals and teams in the following categories:Those interested in hosting events,Teams seeking a workspace for an extended period of time.Here are a few examples of the projects we are equipped to host and are enthusiastic about:Weekend hackathons,Incubators lasting up to several months,Conferences,Workshops and lectures on relevant topics,Providing office spaces for existing projects.Additional supportIn addition to the amenities, we can also offer the following services upon request:Project management for existing initiatives,Catering services for events,Administrative and operations supportAccommodation arrangements,Event logistics and operations assistance,Event design consulting.Feel free to r...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Prague community space: Fixed Point, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryA coworking and event space in Prague is open for the entirety of 2023, offering up to 50 desk spaces in a coworking office, a large event space for up to 60 people, multiple meeting rooms and other amenities. We are currently in the process of transitioning from an initial subsidized model to a more sustainable paid membership model.In 2022, Fixed Point was the home of the Prague Fall Season which will be returning there in 2023.We are seeking people, projects and hosts of events here in 2023. If you are interested you can apply here.What is Fixed Point?Fixed Point is a unique community and coworking space located in the heart of Prague operated by Epistea. We support organizations and individuals working on existential security, epistemics, rationality, and effective altruism. Across five floors there is a variety of coworking offices offering up to 50 workstations, as well as numerous meeting rooms and private call stations. In addition to functional work areas, there are inviting communal spaces such as a large comfortable common room accommodating up to 60 people, two fully equipped large kitchens, and a spacious dining area. These amenities create a welcoming environment that encourages social interaction and facilitates spontaneous exchange of ideas. Additionally, there are on-site amenities like a small gym, a nap room, two laundry rooms, bathrooms with showers, and a garden with outdoor tables and seating. For those in need of short-term accommodation, our on-site guesthouse has a capacity of up to 10 beds.Fixed Point is a space where brilliant and highly engaged individuals make crucial career decisions, establish significant relationships, and find opportunities for introspection among like-minded peers when they need it most. In 2022, Fixed Point was home to the Prague Fall Season, when 350 people visited the space.The name "Fixed Point" draws inspiration from the prevalence of various Fixed Point theorems in almost all areas people working in the space work on. If you study the areas seriously, you will find fixed points sooner or later.Why Prague?The Czech effective altruism and rationalist community has long been committed to operational excellence and the creation of physical spaces that facilitate collaboration. With numerous successfully incubated organizations and passionate individuals making a difference in high-impact domains, Prague is now a viable option, especially for EU citizens wanting to settle in continental Europe.In addition to the Prague Fall Season, Prague is home to many different projects, such as Alignment of Complex Systems Research Group, ESPR or Czech Priorities. We host the European runs of CFAR workshops and CFAR rEUnions.Whom is it for?We extend a warm welcome to both short and long-term visitors working on meaningful projects in the areas of existential risk mitigation, AI safety, rationality, epistemics, and effective altruism. We are particularly excited to accommodate individuals and teams in the following categories:Those interested in hosting events,Teams seeking a workspace for an extended period of time.Here are a few examples of the projects we are equipped to host and are enthusiastic about:Weekend hackathons,Incubators lasting up to several months,Conferences,Workshops and lectures on relevant topics,Providing office spaces for existing projects.Additional supportIn addition to the amenities, we can also offer the following services upon request:Project management for existing initiatives,Catering services for events,Administrative and operations supportAccommodation arrangements,Event logistics and operations assistance,Event design consulting.Feel free to r...]]>
Epistea https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:21 None full 6023
DPHu3p3LwTFBxB3ed_NL_EA_EA EA - Announcing the Prague Fall Season 2023 and the Epistea Residency Program by Epistea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Prague Fall Season 2023 and the Epistea Residency Program, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryFollowing a successful pilot in 2022, we are announcing the Prague Fall Season 2023, which is a program run by Epistea, happening from September 1 to December 10 at FIXED POINT in Prague, Czech Republic. In this time, FIXED POINT will host a variety of programs, projects, events and individuals in the areas of existential security, rationality, epistemics, and effective altruism. We will announce specific events and programs as we confirm them but for now, our flagship program is the 10-week Epistea Residency Program for teams working on projects related to epistemics and rationality. We are now seeking expressions of interest from potential Epistea residents and mentors.What is a season?The main benefit of doing a season is having a dedicated limited time to create an increased density of people in one place. This creates more opportunities for people to collaborate, co-create and co-work on important projects - sometimes in a new location. This happens to some extent naturally around major EA conferences in London or San Francisco - many people are there at the same time which creates opportunities for additional events and collaborations. However, the timeframe is quite short and it is not clearly communicated that there are benefits in staying in the area longer and there is not a lot of infrastructure in place to support that.We ran the pilot project Prague Fall Season last autumn: Along with 25 long-term residents, we hosted over 300 international visitors between September and December 2022. We provided comprehensive support through funding, venue operations, technical and personal development programs, social gatherings, and additional events, such as the CFAR workshop series. Based on the feedback we received and our own experience with the program, we decided to produce another edition of Prague Fall Season this year with a couple of changes:We are narrowing the scope of the program primarily to existential security, epistemics, and rationality.We ask that participants of the season help us share the cost of running the FIXED POINT house. We may be able to offer financial aid on a case by case basis but the expectation is that when you visit, you can cover at least some part of the cost.We are seeking event organizers who would like to make their events part of the season.We will be sharing more information about how to get involved soon. For now, our priority is launching the Epistea Residency program.The Epistea Residency 2023The backbone of the Prague Fall Season 2023 will once again be a 10-week residency program. This year, we are looking for 6-10 teams of 3-5 members each working on specific projects related to areas of rationality, epistemics, group rationality, and civilizational sanity, and delivering tangible outcomes. A residency project can be:Research on a relevant topic (examples of what we would be excited about are broad in some directions and include abstract foundations like geometric rationality or Modal Fixpoint Cooperation without Löb's Theorem, research and development of applied rationality techniques like Internal communication framework, research on the use of AI to improve human rationality like "automated Double-Crux aid" and more);Distillation, communication, and publishing (writing and publishing a series of explanatory posts, video production, writing a textbook or course materials, etc.);Program development (events, workshops, etc.);Anything else that will provide value to this space.Teams will have the option to apply to work on a specific topic (to be announced soon) or propose their own project. The selected teams will work on their projects in person at FIXED ...]]>
Epistea https://forum.effectivealtruism.org/posts/DPHu3p3LwTFBxB3ed/announcing-the-prague-fall-season-2023-and-the-epistea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Prague Fall Season 2023 and the Epistea Residency Program, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryFollowing a successful pilot in 2022, we are announcing the Prague Fall Season 2023, which is a program run by Epistea, happening from September 1 to December 10 at FIXED POINT in Prague, Czech Republic. In this time, FIXED POINT will host a variety of programs, projects, events and individuals in the areas of existential security, rationality, epistemics, and effective altruism. We will announce specific events and programs as we confirm them but for now, our flagship program is the 10-week Epistea Residency Program for teams working on projects related to epistemics and rationality. We are now seeking expressions of interest from potential Epistea residents and mentors.What is a season?The main benefit of doing a season is having a dedicated limited time to create an increased density of people in one place. This creates more opportunities for people to collaborate, co-create and co-work on important projects - sometimes in a new location. This happens to some extent naturally around major EA conferences in London or San Francisco - many people are there at the same time which creates opportunities for additional events and collaborations. However, the timeframe is quite short and it is not clearly communicated that there are benefits in staying in the area longer and there is not a lot of infrastructure in place to support that.We ran the pilot project Prague Fall Season last autumn: Along with 25 long-term residents, we hosted over 300 international visitors between September and December 2022. We provided comprehensive support through funding, venue operations, technical and personal development programs, social gatherings, and additional events, such as the CFAR workshop series. Based on the feedback we received and our own experience with the program, we decided to produce another edition of Prague Fall Season this year with a couple of changes:We are narrowing the scope of the program primarily to existential security, epistemics, and rationality.We ask that participants of the season help us share the cost of running the FIXED POINT house. We may be able to offer financial aid on a case by case basis but the expectation is that when you visit, you can cover at least some part of the cost.We are seeking event organizers who would like to make their events part of the season.We will be sharing more information about how to get involved soon. For now, our priority is launching the Epistea Residency program.The Epistea Residency 2023The backbone of the Prague Fall Season 2023 will once again be a 10-week residency program. This year, we are looking for 6-10 teams of 3-5 members each working on specific projects related to areas of rationality, epistemics, group rationality, and civilizational sanity, and delivering tangible outcomes. A residency project can be:Research on a relevant topic (examples of what we would be excited about are broad in some directions and include abstract foundations like geometric rationality or Modal Fixpoint Cooperation without Löb's Theorem, research and development of applied rationality techniques like Internal communication framework, research on the use of AI to improve human rationality like "automated Double-Crux aid" and more);Distillation, communication, and publishing (writing and publishing a series of explanatory posts, video production, writing a textbook or course materials, etc.);Program development (events, workshops, etc.);Anything else that will provide value to this space.Teams will have the option to apply to work on a specific topic (to be announced soon) or propose their own project. The selected teams will work on their projects in person at FIXED ...]]>
Mon, 22 May 2023 09:11:51 +0000 EA - Announcing the Prague Fall Season 2023 and the Epistea Residency Program by Epistea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Prague Fall Season 2023 and the Epistea Residency Program, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryFollowing a successful pilot in 2022, we are announcing the Prague Fall Season 2023, which is a program run by Epistea, happening from September 1 to December 10 at FIXED POINT in Prague, Czech Republic. In this time, FIXED POINT will host a variety of programs, projects, events and individuals in the areas of existential security, rationality, epistemics, and effective altruism. We will announce specific events and programs as we confirm them but for now, our flagship program is the 10-week Epistea Residency Program for teams working on projects related to epistemics and rationality. We are now seeking expressions of interest from potential Epistea residents and mentors.What is a season?The main benefit of doing a season is having a dedicated limited time to create an increased density of people in one place. This creates more opportunities for people to collaborate, co-create and co-work on important projects - sometimes in a new location. This happens to some extent naturally around major EA conferences in London or San Francisco - many people are there at the same time which creates opportunities for additional events and collaborations. However, the timeframe is quite short and it is not clearly communicated that there are benefits in staying in the area longer and there is not a lot of infrastructure in place to support that.We ran the pilot project Prague Fall Season last autumn: Along with 25 long-term residents, we hosted over 300 international visitors between September and December 2022. We provided comprehensive support through funding, venue operations, technical and personal development programs, social gatherings, and additional events, such as the CFAR workshop series. Based on the feedback we received and our own experience with the program, we decided to produce another edition of Prague Fall Season this year with a couple of changes:We are narrowing the scope of the program primarily to existential security, epistemics, and rationality.We ask that participants of the season help us share the cost of running the FIXED POINT house. We may be able to offer financial aid on a case by case basis but the expectation is that when you visit, you can cover at least some part of the cost.We are seeking event organizers who would like to make their events part of the season.We will be sharing more information about how to get involved soon. For now, our priority is launching the Epistea Residency program.The Epistea Residency 2023The backbone of the Prague Fall Season 2023 will once again be a 10-week residency program. This year, we are looking for 6-10 teams of 3-5 members each working on specific projects related to areas of rationality, epistemics, group rationality, and civilizational sanity, and delivering tangible outcomes. A residency project can be:Research on a relevant topic (examples of what we would be excited about are broad in some directions and include abstract foundations like geometric rationality or Modal Fixpoint Cooperation without Löb's Theorem, research and development of applied rationality techniques like Internal communication framework, research on the use of AI to improve human rationality like "automated Double-Crux aid" and more);Distillation, communication, and publishing (writing and publishing a series of explanatory posts, video production, writing a textbook or course materials, etc.);Program development (events, workshops, etc.);Anything else that will provide value to this space.Teams will have the option to apply to work on a specific topic (to be announced soon) or propose their own project. The selected teams will work on their projects in person at FIXED ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Prague Fall Season 2023 and the Epistea Residency Program, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryFollowing a successful pilot in 2022, we are announcing the Prague Fall Season 2023, which is a program run by Epistea, happening from September 1 to December 10 at FIXED POINT in Prague, Czech Republic. In this time, FIXED POINT will host a variety of programs, projects, events and individuals in the areas of existential security, rationality, epistemics, and effective altruism. We will announce specific events and programs as we confirm them but for now, our flagship program is the 10-week Epistea Residency Program for teams working on projects related to epistemics and rationality. We are now seeking expressions of interest from potential Epistea residents and mentors.What is a season?The main benefit of doing a season is having a dedicated limited time to create an increased density of people in one place. This creates more opportunities for people to collaborate, co-create and co-work on important projects - sometimes in a new location. This happens to some extent naturally around major EA conferences in London or San Francisco - many people are there at the same time which creates opportunities for additional events and collaborations. However, the timeframe is quite short and it is not clearly communicated that there are benefits in staying in the area longer and there is not a lot of infrastructure in place to support that.We ran the pilot project Prague Fall Season last autumn: Along with 25 long-term residents, we hosted over 300 international visitors between September and December 2022. We provided comprehensive support through funding, venue operations, technical and personal development programs, social gatherings, and additional events, such as the CFAR workshop series. Based on the feedback we received and our own experience with the program, we decided to produce another edition of Prague Fall Season this year with a couple of changes:We are narrowing the scope of the program primarily to existential security, epistemics, and rationality.We ask that participants of the season help us share the cost of running the FIXED POINT house. We may be able to offer financial aid on a case by case basis but the expectation is that when you visit, you can cover at least some part of the cost.We are seeking event organizers who would like to make their events part of the season.We will be sharing more information about how to get involved soon. For now, our priority is launching the Epistea Residency program.The Epistea Residency 2023The backbone of the Prague Fall Season 2023 will once again be a 10-week residency program. This year, we are looking for 6-10 teams of 3-5 members each working on specific projects related to areas of rationality, epistemics, group rationality, and civilizational sanity, and delivering tangible outcomes. A residency project can be:Research on a relevant topic (examples of what we would be excited about are broad in some directions and include abstract foundations like geometric rationality or Modal Fixpoint Cooperation without Löb's Theorem, research and development of applied rationality techniques like Internal communication framework, research on the use of AI to improve human rationality like "automated Double-Crux aid" and more);Distillation, communication, and publishing (writing and publishing a series of explanatory posts, video production, writing a textbook or course materials, etc.);Program development (events, workshops, etc.);Anything else that will provide value to this space.Teams will have the option to apply to work on a specific topic (to be announced soon) or propose their own project. The selected teams will work on their projects in person at FIXED ...]]>
Epistea https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:05 None full 6022
gSGhrCXdntxLrMAmJ_NL_EA_EA EA - AI strategy career pipeline by Zach Stein-Perlman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI strategy career pipeline, published by Zach Stein-Perlman on May 22, 2023 on The Effective Altruism Forum.The pipeline for (x-risk-focused) AI strategy/governance/forecasting careers has never been strong, especially for new researchers. But it feels particularly weak recently (e.g. no summer research programs this year from Rethink Priorities, SERI SRF, or AI Impacts, at least as of now, and as few job openings as ever). (Also no governance course from AGI Safety Fundamentals in a while and no governance-focused programs elsewhere.) We're presumably missing out on a lot of talent.I'm not sure what the solution is, or even what the problem is-- I think it's somewhat about funding and somewhat about mentorship and mostly about [orgs not prioritizing boosting early-career folks and not supporting them for various idiosyncratic reasons] + [the community being insufficiently coordinated to realize that it's dropping the ball and it's nobody's job to notice and nobody has great solutions anyway].If you have information or takes, I'd be excited to learn. If you've been looking for early-career support (an educational program, way to test fit, way to gain experience, summer program, first job in AI strategy/governance/forecasting, etc.), I'd be really excited to hear your perspective (feel free to PM).(In AI alignment, I think SERI MATS has improved the early-career pipeline dramatically-- kudos to them. Maybe I should ask them why they haven't expanded to AI strategy or if they have takes on that pipeline. For now, maybe they're evidence that someone prioritizing pipeline-improving is necessary for it to happen...)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Zach Stein-Perlman https://forum.effectivealtruism.org/posts/gSGhrCXdntxLrMAmJ/ai-strategy-career-pipeline Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI strategy career pipeline, published by Zach Stein-Perlman on May 22, 2023 on The Effective Altruism Forum.The pipeline for (x-risk-focused) AI strategy/governance/forecasting careers has never been strong, especially for new researchers. But it feels particularly weak recently (e.g. no summer research programs this year from Rethink Priorities, SERI SRF, or AI Impacts, at least as of now, and as few job openings as ever). (Also no governance course from AGI Safety Fundamentals in a while and no governance-focused programs elsewhere.) We're presumably missing out on a lot of talent.I'm not sure what the solution is, or even what the problem is-- I think it's somewhat about funding and somewhat about mentorship and mostly about [orgs not prioritizing boosting early-career folks and not supporting them for various idiosyncratic reasons] + [the community being insufficiently coordinated to realize that it's dropping the ball and it's nobody's job to notice and nobody has great solutions anyway].If you have information or takes, I'd be excited to learn. If you've been looking for early-career support (an educational program, way to test fit, way to gain experience, summer program, first job in AI strategy/governance/forecasting, etc.), I'd be really excited to hear your perspective (feel free to PM).(In AI alignment, I think SERI MATS has improved the early-career pipeline dramatically-- kudos to them. Maybe I should ask them why they haven't expanded to AI strategy or if they have takes on that pipeline. For now, maybe they're evidence that someone prioritizing pipeline-improving is necessary for it to happen...)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 22 May 2023 07:20:08 +0000 EA - AI strategy career pipeline by Zach Stein-Perlman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI strategy career pipeline, published by Zach Stein-Perlman on May 22, 2023 on The Effective Altruism Forum.The pipeline for (x-risk-focused) AI strategy/governance/forecasting careers has never been strong, especially for new researchers. But it feels particularly weak recently (e.g. no summer research programs this year from Rethink Priorities, SERI SRF, or AI Impacts, at least as of now, and as few job openings as ever). (Also no governance course from AGI Safety Fundamentals in a while and no governance-focused programs elsewhere.) We're presumably missing out on a lot of talent.I'm not sure what the solution is, or even what the problem is-- I think it's somewhat about funding and somewhat about mentorship and mostly about [orgs not prioritizing boosting early-career folks and not supporting them for various idiosyncratic reasons] + [the community being insufficiently coordinated to realize that it's dropping the ball and it's nobody's job to notice and nobody has great solutions anyway].If you have information or takes, I'd be excited to learn. If you've been looking for early-career support (an educational program, way to test fit, way to gain experience, summer program, first job in AI strategy/governance/forecasting, etc.), I'd be really excited to hear your perspective (feel free to PM).(In AI alignment, I think SERI MATS has improved the early-career pipeline dramatically-- kudos to them. Maybe I should ask them why they haven't expanded to AI strategy or if they have takes on that pipeline. For now, maybe they're evidence that someone prioritizing pipeline-improving is necessary for it to happen...)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI strategy career pipeline, published by Zach Stein-Perlman on May 22, 2023 on The Effective Altruism Forum.The pipeline for (x-risk-focused) AI strategy/governance/forecasting careers has never been strong, especially for new researchers. But it feels particularly weak recently (e.g. no summer research programs this year from Rethink Priorities, SERI SRF, or AI Impacts, at least as of now, and as few job openings as ever). (Also no governance course from AGI Safety Fundamentals in a while and no governance-focused programs elsewhere.) We're presumably missing out on a lot of talent.I'm not sure what the solution is, or even what the problem is-- I think it's somewhat about funding and somewhat about mentorship and mostly about [orgs not prioritizing boosting early-career folks and not supporting them for various idiosyncratic reasons] + [the community being insufficiently coordinated to realize that it's dropping the ball and it's nobody's job to notice and nobody has great solutions anyway].If you have information or takes, I'd be excited to learn. If you've been looking for early-career support (an educational program, way to test fit, way to gain experience, summer program, first job in AI strategy/governance/forecasting, etc.), I'd be really excited to hear your perspective (feel free to PM).(In AI alignment, I think SERI MATS has improved the early-career pipeline dramatically-- kudos to them. Maybe I should ask them why they haven't expanded to AI strategy or if they have takes on that pipeline. For now, maybe they're evidence that someone prioritizing pipeline-improving is necessary for it to happen...)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Zach Stein-Perlman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:44 None full 6021
4p8RpK2fYKFmEcA9w_NL_EA_EA EA - OPTIC [Forecasting Comp] — Pilot Postmortem by OPTIC Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OPTIC [Forecasting Comp] — Pilot Postmortem, published by OPTIC on May 19, 2023 on The Effective Altruism Forum.OPTIC is an in-person, intercollegiate forecasting competition where undergraduate forecasters compete to make accurate predictions about the future. Think olympiad/debate tournament/hackathon, but for forecasting — teams compete for thousands of dollars in cash prizes on question topics ranging from geopolitics to celebrity twitter patterns to financial asset prices.We ran the pilot event on Saturday, April 22 in Boston and are scaling up to an academic league/olympiad. See our website at opticforecasting.com, and contact us at opticforecasting@gmail.com (or by dropping a comment below)!What happened at the competition?Attendance114 competitors from 5 different countries and 13 different US states initially registered interest. A significant proportion indicated that they wouldn’t be able to compete in this iteration (logistical/scheduling concerns), but expressed interest to compete in the next one. 39 competitors RSVP’d “yes,” though a few didn’t end up attending and a couple unregistered competitors did show up. At the competition, the total attendance was 31 competitors in 8 teams of 3-4, with 2 spectators.Schedule1 hour check-in time/lunch/socialization10 min introduction speech1 hour speech by Seth Blumberg on the future of forecasting (Seth is a behavioral economist and head of Google’s internal prediction market, speaking in his individual capacity) — you can watch the speech hereQuestions released; 3 hours for forecasting (“forecasting period”)10 min conclusion speech, merch distribution20 min retrospective feedback formForecasting (teams, platform, scoring, prizes, etc)Competitors were split up into teams of 3-4. They submitted one forecast per team on each of 30 questions through a private tournament on Metaculus. Teams’ forecasts were not made visible to other teams until after the forecasting period closed. Questions were a mix of binary and continuous, all with a resolution timeframe of weeks-months; all will have resolved by August 15. At that point, we’ll score the forecasts using log scoring.We will have awarded $3000 in cash prizes, to be distributed after the scoring is completed:1st place — $15002nd place — $8003rd place — $400Other prizes — $300Note that prizes for 1st-3rd place are given to the team and split between the members of the team.FundingWe received $4000 USD from the ACX Forecasting Mini-Grants on Manifund, and $2000 USD from the Long Term Future Fund.OrganizersOur organizing team comprises:Jingyi Wang (Brandeis University EA organizer)Saul Munn (Brandeis University EA organizer)Tom Shlomi (Harvard University EA organizer)Also, Saul and Jingyi will be attending EAG London — please reach out if you want to be involved with OPTIC, have questions/comments/concerns, or just want to chat!The following is a postmortem we wrote based on the recording of a verbal postmortem our team held after the event.SummaryOverall, the pilot went really well. We did especially well with setting ourselves up for future iterations, with flexibility/adaptability, and resource use. We could have improved time management and communication, as well as some other minor issues. We’re excited about the future of OPTIC!What went wellStrong pilot/good setupAs a pilot, the April 22 event definitely has set us up for future iterations of OPTIC. We now have a network of previous team captains and competitors from schools all around the Boston area (and beyond) who have indicated that they’d be excited to compete again. We have people set up at a few schools around the country who are going to start forecasting clubs which will compete as teams in forecasting tournaments. We have undergraduate interest (and associated emails)...]]>
OPTIC https://forum.effectivealtruism.org/posts/4p8RpK2fYKFmEcA9w/optic-forecasting-comp-pilot-postmortem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OPTIC [Forecasting Comp] — Pilot Postmortem, published by OPTIC on May 19, 2023 on The Effective Altruism Forum.OPTIC is an in-person, intercollegiate forecasting competition where undergraduate forecasters compete to make accurate predictions about the future. Think olympiad/debate tournament/hackathon, but for forecasting — teams compete for thousands of dollars in cash prizes on question topics ranging from geopolitics to celebrity twitter patterns to financial asset prices.We ran the pilot event on Saturday, April 22 in Boston and are scaling up to an academic league/olympiad. See our website at opticforecasting.com, and contact us at opticforecasting@gmail.com (or by dropping a comment below)!What happened at the competition?Attendance114 competitors from 5 different countries and 13 different US states initially registered interest. A significant proportion indicated that they wouldn’t be able to compete in this iteration (logistical/scheduling concerns), but expressed interest to compete in the next one. 39 competitors RSVP’d “yes,” though a few didn’t end up attending and a couple unregistered competitors did show up. At the competition, the total attendance was 31 competitors in 8 teams of 3-4, with 2 spectators.Schedule1 hour check-in time/lunch/socialization10 min introduction speech1 hour speech by Seth Blumberg on the future of forecasting (Seth is a behavioral economist and head of Google’s internal prediction market, speaking in his individual capacity) — you can watch the speech hereQuestions released; 3 hours for forecasting (“forecasting period”)10 min conclusion speech, merch distribution20 min retrospective feedback formForecasting (teams, platform, scoring, prizes, etc)Competitors were split up into teams of 3-4. They submitted one forecast per team on each of 30 questions through a private tournament on Metaculus. Teams’ forecasts were not made visible to other teams until after the forecasting period closed. Questions were a mix of binary and continuous, all with a resolution timeframe of weeks-months; all will have resolved by August 15. At that point, we’ll score the forecasts using log scoring.We will have awarded $3000 in cash prizes, to be distributed after the scoring is completed:1st place — $15002nd place — $8003rd place — $400Other prizes — $300Note that prizes for 1st-3rd place are given to the team and split between the members of the team.FundingWe received $4000 USD from the ACX Forecasting Mini-Grants on Manifund, and $2000 USD from the Long Term Future Fund.OrganizersOur organizing team comprises:Jingyi Wang (Brandeis University EA organizer)Saul Munn (Brandeis University EA organizer)Tom Shlomi (Harvard University EA organizer)Also, Saul and Jingyi will be attending EAG London — please reach out if you want to be involved with OPTIC, have questions/comments/concerns, or just want to chat!The following is a postmortem we wrote based on the recording of a verbal postmortem our team held after the event.SummaryOverall, the pilot went really well. We did especially well with setting ourselves up for future iterations, with flexibility/adaptability, and resource use. We could have improved time management and communication, as well as some other minor issues. We’re excited about the future of OPTIC!What went wellStrong pilot/good setupAs a pilot, the April 22 event definitely has set us up for future iterations of OPTIC. We now have a network of previous team captains and competitors from schools all around the Boston area (and beyond) who have indicated that they’d be excited to compete again. We have people set up at a few schools around the country who are going to start forecasting clubs which will compete as teams in forecasting tournaments. We have undergraduate interest (and associated emails)...]]>
Sun, 21 May 2023 22:04:37 +0000 EA - OPTIC [Forecasting Comp] — Pilot Postmortem by OPTIC Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OPTIC [Forecasting Comp] — Pilot Postmortem, published by OPTIC on May 19, 2023 on The Effective Altruism Forum.OPTIC is an in-person, intercollegiate forecasting competition where undergraduate forecasters compete to make accurate predictions about the future. Think olympiad/debate tournament/hackathon, but for forecasting — teams compete for thousands of dollars in cash prizes on question topics ranging from geopolitics to celebrity twitter patterns to financial asset prices.We ran the pilot event on Saturday, April 22 in Boston and are scaling up to an academic league/olympiad. See our website at opticforecasting.com, and contact us at opticforecasting@gmail.com (or by dropping a comment below)!What happened at the competition?Attendance114 competitors from 5 different countries and 13 different US states initially registered interest. A significant proportion indicated that they wouldn’t be able to compete in this iteration (logistical/scheduling concerns), but expressed interest to compete in the next one. 39 competitors RSVP’d “yes,” though a few didn’t end up attending and a couple unregistered competitors did show up. At the competition, the total attendance was 31 competitors in 8 teams of 3-4, with 2 spectators.Schedule1 hour check-in time/lunch/socialization10 min introduction speech1 hour speech by Seth Blumberg on the future of forecasting (Seth is a behavioral economist and head of Google’s internal prediction market, speaking in his individual capacity) — you can watch the speech hereQuestions released; 3 hours for forecasting (“forecasting period”)10 min conclusion speech, merch distribution20 min retrospective feedback formForecasting (teams, platform, scoring, prizes, etc)Competitors were split up into teams of 3-4. They submitted one forecast per team on each of 30 questions through a private tournament on Metaculus. Teams’ forecasts were not made visible to other teams until after the forecasting period closed. Questions were a mix of binary and continuous, all with a resolution timeframe of weeks-months; all will have resolved by August 15. At that point, we’ll score the forecasts using log scoring.We will have awarded $3000 in cash prizes, to be distributed after the scoring is completed:1st place — $15002nd place — $8003rd place — $400Other prizes — $300Note that prizes for 1st-3rd place are given to the team and split between the members of the team.FundingWe received $4000 USD from the ACX Forecasting Mini-Grants on Manifund, and $2000 USD from the Long Term Future Fund.OrganizersOur organizing team comprises:Jingyi Wang (Brandeis University EA organizer)Saul Munn (Brandeis University EA organizer)Tom Shlomi (Harvard University EA organizer)Also, Saul and Jingyi will be attending EAG London — please reach out if you want to be involved with OPTIC, have questions/comments/concerns, or just want to chat!The following is a postmortem we wrote based on the recording of a verbal postmortem our team held after the event.SummaryOverall, the pilot went really well. We did especially well with setting ourselves up for future iterations, with flexibility/adaptability, and resource use. We could have improved time management and communication, as well as some other minor issues. We’re excited about the future of OPTIC!What went wellStrong pilot/good setupAs a pilot, the April 22 event definitely has set us up for future iterations of OPTIC. We now have a network of previous team captains and competitors from schools all around the Boston area (and beyond) who have indicated that they’d be excited to compete again. We have people set up at a few schools around the country who are going to start forecasting clubs which will compete as teams in forecasting tournaments. We have undergraduate interest (and associated emails)...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OPTIC [Forecasting Comp] — Pilot Postmortem, published by OPTIC on May 19, 2023 on The Effective Altruism Forum.OPTIC is an in-person, intercollegiate forecasting competition where undergraduate forecasters compete to make accurate predictions about the future. Think olympiad/debate tournament/hackathon, but for forecasting — teams compete for thousands of dollars in cash prizes on question topics ranging from geopolitics to celebrity twitter patterns to financial asset prices.We ran the pilot event on Saturday, April 22 in Boston and are scaling up to an academic league/olympiad. See our website at opticforecasting.com, and contact us at opticforecasting@gmail.com (or by dropping a comment below)!What happened at the competition?Attendance114 competitors from 5 different countries and 13 different US states initially registered interest. A significant proportion indicated that they wouldn’t be able to compete in this iteration (logistical/scheduling concerns), but expressed interest to compete in the next one. 39 competitors RSVP’d “yes,” though a few didn’t end up attending and a couple unregistered competitors did show up. At the competition, the total attendance was 31 competitors in 8 teams of 3-4, with 2 spectators.Schedule1 hour check-in time/lunch/socialization10 min introduction speech1 hour speech by Seth Blumberg on the future of forecasting (Seth is a behavioral economist and head of Google’s internal prediction market, speaking in his individual capacity) — you can watch the speech hereQuestions released; 3 hours for forecasting (“forecasting period”)10 min conclusion speech, merch distribution20 min retrospective feedback formForecasting (teams, platform, scoring, prizes, etc)Competitors were split up into teams of 3-4. They submitted one forecast per team on each of 30 questions through a private tournament on Metaculus. Teams’ forecasts were not made visible to other teams until after the forecasting period closed. Questions were a mix of binary and continuous, all with a resolution timeframe of weeks-months; all will have resolved by August 15. At that point, we’ll score the forecasts using log scoring.We will have awarded $3000 in cash prizes, to be distributed after the scoring is completed:1st place — $15002nd place — $8003rd place — $400Other prizes — $300Note that prizes for 1st-3rd place are given to the team and split between the members of the team.FundingWe received $4000 USD from the ACX Forecasting Mini-Grants on Manifund, and $2000 USD from the Long Term Future Fund.OrganizersOur organizing team comprises:Jingyi Wang (Brandeis University EA organizer)Saul Munn (Brandeis University EA organizer)Tom Shlomi (Harvard University EA organizer)Also, Saul and Jingyi will be attending EAG London — please reach out if you want to be involved with OPTIC, have questions/comments/concerns, or just want to chat!The following is a postmortem we wrote based on the recording of a verbal postmortem our team held after the event.SummaryOverall, the pilot went really well. We did especially well with setting ourselves up for future iterations, with flexibility/adaptability, and resource use. We could have improved time management and communication, as well as some other minor issues. We’re excited about the future of OPTIC!What went wellStrong pilot/good setupAs a pilot, the April 22 event definitely has set us up for future iterations of OPTIC. We now have a network of previous team captains and competitors from schools all around the Boston area (and beyond) who have indicated that they’d be excited to compete again. We have people set up at a few schools around the country who are going to start forecasting clubs which will compete as teams in forecasting tournaments. We have undergraduate interest (and associated emails)...]]>
OPTIC https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:04 None full 6020
jpsugrAbjsgfm9gZM_NL_EA_EA EA - EAG talks are underrated IMO by Chi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAG talks are underrated IMO, published by Chi on May 20, 2023 on The Effective Altruism Forum.Underrated is relative. My position is something like "most people should consider going to >1 EAG talk" and not "most people should spend most of their EAG in talks." This probably most applies to people who are kind of like me. (Been involved for a while, already have a strong network, don't need to do 1-1s for their job.)There's a meme that 1-1s are clearly the most valuable part of EAG(x) and that you should not really go to talks. (See e.g. this, this, this, they don't say exactly this but I think push in the direction of the meme.)I think EAG talks can be really interesting and are underrated. It's true that most of them are recorded and you could watch them later but I'm guessing most people don't actually do that. It also takes a while for them to be uploaded.I still think 1-1s are pretty great, especially if you'renew and don't know many people yet (or otherwise mostly want to increase the number of people you know),have a very specific thing you're trying to get out of EAG and talking to lots of people seems to be the right thing to achieve it.I'm mostly writing this post because I think the meme is really strong in some parts of the EA community. I can imagine that some people in the EA community would feel bad for attending talks because it doesn't feel "optimal." If you feel like you need permission, I want to give you permission to go to talks without feeling bad. Another motivation is that I recently attended my first set of EAG talks in years (I was doing lots of 1-1s for my job before) and was really surprised by how great they were. (That said, it was a bit hit or miss.) I previously accidentally assumed that talks and other prepared sessions would give me ~nothing.See also the rule of equal and opposite advice (1, 2) although I haven't actually read the posts I linked.My best guess is that people in EA are more biased towards taking actions that are part of a collectively "optimal" plan for [generic human with willpower and without any other properties] than taking actions that are good given realistic counterfactuals.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Chi https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAG talks are underrated IMO, published by Chi on May 20, 2023 on The Effective Altruism Forum.Underrated is relative. My position is something like "most people should consider going to >1 EAG talk" and not "most people should spend most of their EAG in talks." This probably most applies to people who are kind of like me. (Been involved for a while, already have a strong network, don't need to do 1-1s for their job.)There's a meme that 1-1s are clearly the most valuable part of EAG(x) and that you should not really go to talks. (See e.g. this, this, this, they don't say exactly this but I think push in the direction of the meme.)I think EAG talks can be really interesting and are underrated. It's true that most of them are recorded and you could watch them later but I'm guessing most people don't actually do that. It also takes a while for them to be uploaded.I still think 1-1s are pretty great, especially if you'renew and don't know many people yet (or otherwise mostly want to increase the number of people you know),have a very specific thing you're trying to get out of EAG and talking to lots of people seems to be the right thing to achieve it.I'm mostly writing this post because I think the meme is really strong in some parts of the EA community. I can imagine that some people in the EA community would feel bad for attending talks because it doesn't feel "optimal." If you feel like you need permission, I want to give you permission to go to talks without feeling bad. Another motivation is that I recently attended my first set of EAG talks in years (I was doing lots of 1-1s for my job before) and was really surprised by how great they were. (That said, it was a bit hit or miss.) I previously accidentally assumed that talks and other prepared sessions would give me ~nothing.See also the rule of equal and opposite advice (1, 2) although I haven't actually read the posts I linked.My best guess is that people in EA are more biased towards taking actions that are part of a collectively "optimal" plan for [generic human with willpower and without any other properties] than taking actions that are good given realistic counterfactuals.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 20 May 2023 19:13:47 +0000 EA - EAG talks are underrated IMO by Chi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAG talks are underrated IMO, published by Chi on May 20, 2023 on The Effective Altruism Forum.Underrated is relative. My position is something like "most people should consider going to >1 EAG talk" and not "most people should spend most of their EAG in talks." This probably most applies to people who are kind of like me. (Been involved for a while, already have a strong network, don't need to do 1-1s for their job.)There's a meme that 1-1s are clearly the most valuable part of EAG(x) and that you should not really go to talks. (See e.g. this, this, this, they don't say exactly this but I think push in the direction of the meme.)I think EAG talks can be really interesting and are underrated. It's true that most of them are recorded and you could watch them later but I'm guessing most people don't actually do that. It also takes a while for them to be uploaded.I still think 1-1s are pretty great, especially if you'renew and don't know many people yet (or otherwise mostly want to increase the number of people you know),have a very specific thing you're trying to get out of EAG and talking to lots of people seems to be the right thing to achieve it.I'm mostly writing this post because I think the meme is really strong in some parts of the EA community. I can imagine that some people in the EA community would feel bad for attending talks because it doesn't feel "optimal." If you feel like you need permission, I want to give you permission to go to talks without feeling bad. Another motivation is that I recently attended my first set of EAG talks in years (I was doing lots of 1-1s for my job before) and was really surprised by how great they were. (That said, it was a bit hit or miss.) I previously accidentally assumed that talks and other prepared sessions would give me ~nothing.See also the rule of equal and opposite advice (1, 2) although I haven't actually read the posts I linked.My best guess is that people in EA are more biased towards taking actions that are part of a collectively "optimal" plan for [generic human with willpower and without any other properties] than taking actions that are good given realistic counterfactuals.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAG talks are underrated IMO, published by Chi on May 20, 2023 on The Effective Altruism Forum.Underrated is relative. My position is something like "most people should consider going to >1 EAG talk" and not "most people should spend most of their EAG in talks." This probably most applies to people who are kind of like me. (Been involved for a while, already have a strong network, don't need to do 1-1s for their job.)There's a meme that 1-1s are clearly the most valuable part of EAG(x) and that you should not really go to talks. (See e.g. this, this, this, they don't say exactly this but I think push in the direction of the meme.)I think EAG talks can be really interesting and are underrated. It's true that most of them are recorded and you could watch them later but I'm guessing most people don't actually do that. It also takes a while for them to be uploaded.I still think 1-1s are pretty great, especially if you'renew and don't know many people yet (or otherwise mostly want to increase the number of people you know),have a very specific thing you're trying to get out of EAG and talking to lots of people seems to be the right thing to achieve it.I'm mostly writing this post because I think the meme is really strong in some parts of the EA community. I can imagine that some people in the EA community would feel bad for attending talks because it doesn't feel "optimal." If you feel like you need permission, I want to give you permission to go to talks without feeling bad. Another motivation is that I recently attended my first set of EAG talks in years (I was doing lots of 1-1s for my job before) and was really surprised by how great they were. (That said, it was a bit hit or miss.) I previously accidentally assumed that talks and other prepared sessions would give me ~nothing.See also the rule of equal and opposite advice (1, 2) although I haven't actually read the posts I linked.My best guess is that people in EA are more biased towards taking actions that are part of a collectively "optimal" plan for [generic human with willpower and without any other properties] than taking actions that are good given realistic counterfactuals.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Chi https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:06 None full 6018
Konde3tJY2SFoFbpd_NL_EA_EA EA - Former Israeli Prime Minister Speaks About AI X-Risk by Yonatan Cale Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Former Israeli Prime Minister Speaks About AI X-Risk, published by Yonatan Cale on May 20, 2023 on The Effective Altruism Forum.Watch here:/ (it's in English)From the video:just like nuclear tech is an amazing invention for humanity but can also risk the destruction of humanity, AI is the same. The world needs to get together NOW, to form the equivalent of the IAEA [...]Who is this?This is Naftali Bennett (wikipedia), an Israeli politician who was prime minister between June 2021 and June 2022.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Yonatan Cale https://forum.effectivealtruism.org/posts/Konde3tJY2SFoFbpd/former-israeli-prime-minister-speaks-about-ai-x-risk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Former Israeli Prime Minister Speaks About AI X-Risk, published by Yonatan Cale on May 20, 2023 on The Effective Altruism Forum.Watch here:/ (it's in English)From the video:just like nuclear tech is an amazing invention for humanity but can also risk the destruction of humanity, AI is the same. The world needs to get together NOW, to form the equivalent of the IAEA [...]Who is this?This is Naftali Bennett (wikipedia), an Israeli politician who was prime minister between June 2021 and June 2022.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 20 May 2023 18:08:33 +0000 EA - Former Israeli Prime Minister Speaks About AI X-Risk by Yonatan Cale Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Former Israeli Prime Minister Speaks About AI X-Risk, published by Yonatan Cale on May 20, 2023 on The Effective Altruism Forum.Watch here:/ (it's in English)From the video:just like nuclear tech is an amazing invention for humanity but can also risk the destruction of humanity, AI is the same. The world needs to get together NOW, to form the equivalent of the IAEA [...]Who is this?This is Naftali Bennett (wikipedia), an Israeli politician who was prime minister between June 2021 and June 2022.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Former Israeli Prime Minister Speaks About AI X-Risk, published by Yonatan Cale on May 20, 2023 on The Effective Altruism Forum.Watch here:/ (it's in English)From the video:just like nuclear tech is an amazing invention for humanity but can also risk the destruction of humanity, AI is the same. The world needs to get together NOW, to form the equivalent of the IAEA [...]Who is this?This is Naftali Bennett (wikipedia), an Israeli politician who was prime minister between June 2021 and June 2022.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Yonatan Cale https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:46 None full 6015
JjAjJ53mmpQqBeobQ_NL_EA_EA EA - “The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models by Froolow Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models, published by Froolow on May 19, 2023 on The Effective Altruism Forum.SummaryThis is an entry into the Open Philanthropy AI Worldview Contest. It investigates the risk of Catastrophe due to an Out-of-Control AI. It makes the case that that model structure is a significant blindspot in AI risk analysis, and hence there is more theoretical work that needs to be done on model structure before this question can be answered with a high degree of confidence.The bulk of the essay is a ‘proof by example’ of this claim – I identify a structural assumption which I think would have been challenged in a different field with a more established tradition of structural criticism, and demonstrate that surfacing this assumption reduce the risk of Catastrophe due to Out-of-Control (OOC) AI by around a third. Specifically, in this essay I look at what happens if we are uncertain about the timelines of AI Catastrophe and Alignment, allowing them to occur in any order.There is currently only an inconsistent culture of peer reviewing structural assumptions in the AI Risk community, especially in comparison to the culture of critiquing parameter estimates. Since models can only be as accurate as the least accurate of these elements, I conclude that this disproportionate focus on refining parameter estimates places an avoidable upper limit on how accurate estimates of AI Risk can be. However, it also suggests some high value next steps to address the inconsistency, so there is a straightforward blueprint for addressing the issues raised in this essay.The analysis underpinning this result is available in this spreadsheet. The results themselves are displayed below. They show that introducing time dependency into the model reduces the risk of OOC AI Catastrophe from 9.8% to 6.7%:My general approach is that I found a partially-complete piece of structural criticism on the forums here and then implemented it into a de novo predictive model based on a well-regarded existing model of AI Risk articulated by Carlsmith (2021). If results change dramatically between the two approaches then I will have found a ‘free lunch’ – value that can be added to the frontier of the AI Risk discussion without me actually having to do any intellectual work to push that frontier forward. Since the results above demonstraste quite clearly that the results have changed, I conclude that work on refining parameters has outpaced work on refining structure, and that ideally there would be a rebalancing of effort to prevent such ‘free lunches’ from going unnoticed in the future.I perform some sensitivity analysis to show that this effect is plausible given what we know about community beliefs about AI Risk. I conclude that my amended model is probably more suitable than the standard approach taken towards AI Risk analysis, especially when there are specific time-bound elements of the decision problem that need to be investigated (such as a restriction that AI should be invented before 2070). Therefore, I conclude that hunting for other such structural assumptions is likely to be an extremely valuable use of time, since there is probably additional low-hanging fruit in the structural analysis space.I offer some conclusions for how to take this work forwards:There are multiple weaknesses of my model which could be addressed by someone with better knowledge of the issues in AI Alignment. For example, I assume that Alignment is solved in one discrete step which is probably not a good model of how Aligning AIs will actually play out in practice.There are also many other opportunities for analysis in the AI Risk space where more sophisticated structure can likely resolve disagreement. For example, a live discussion in AI Risk at the ...]]>
Froolow https://forum.effectivealtruism.org/posts/JjAjJ53mmpQqBeobQ/the-race-to-the-end-of-humanity-structural-uncertainty Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models, published by Froolow on May 19, 2023 on The Effective Altruism Forum.SummaryThis is an entry into the Open Philanthropy AI Worldview Contest. It investigates the risk of Catastrophe due to an Out-of-Control AI. It makes the case that that model structure is a significant blindspot in AI risk analysis, and hence there is more theoretical work that needs to be done on model structure before this question can be answered with a high degree of confidence.The bulk of the essay is a ‘proof by example’ of this claim – I identify a structural assumption which I think would have been challenged in a different field with a more established tradition of structural criticism, and demonstrate that surfacing this assumption reduce the risk of Catastrophe due to Out-of-Control (OOC) AI by around a third. Specifically, in this essay I look at what happens if we are uncertain about the timelines of AI Catastrophe and Alignment, allowing them to occur in any order.There is currently only an inconsistent culture of peer reviewing structural assumptions in the AI Risk community, especially in comparison to the culture of critiquing parameter estimates. Since models can only be as accurate as the least accurate of these elements, I conclude that this disproportionate focus on refining parameter estimates places an avoidable upper limit on how accurate estimates of AI Risk can be. However, it also suggests some high value next steps to address the inconsistency, so there is a straightforward blueprint for addressing the issues raised in this essay.The analysis underpinning this result is available in this spreadsheet. The results themselves are displayed below. They show that introducing time dependency into the model reduces the risk of OOC AI Catastrophe from 9.8% to 6.7%:My general approach is that I found a partially-complete piece of structural criticism on the forums here and then implemented it into a de novo predictive model based on a well-regarded existing model of AI Risk articulated by Carlsmith (2021). If results change dramatically between the two approaches then I will have found a ‘free lunch’ – value that can be added to the frontier of the AI Risk discussion without me actually having to do any intellectual work to push that frontier forward. Since the results above demonstraste quite clearly that the results have changed, I conclude that work on refining parameters has outpaced work on refining structure, and that ideally there would be a rebalancing of effort to prevent such ‘free lunches’ from going unnoticed in the future.I perform some sensitivity analysis to show that this effect is plausible given what we know about community beliefs about AI Risk. I conclude that my amended model is probably more suitable than the standard approach taken towards AI Risk analysis, especially when there are specific time-bound elements of the decision problem that need to be investigated (such as a restriction that AI should be invented before 2070). Therefore, I conclude that hunting for other such structural assumptions is likely to be an extremely valuable use of time, since there is probably additional low-hanging fruit in the structural analysis space.I offer some conclusions for how to take this work forwards:There are multiple weaknesses of my model which could be addressed by someone with better knowledge of the issues in AI Alignment. For example, I assume that Alignment is solved in one discrete step which is probably not a good model of how Aligning AIs will actually play out in practice.There are also many other opportunities for analysis in the AI Risk space where more sophisticated structure can likely resolve disagreement. For example, a live discussion in AI Risk at the ...]]>
Sat, 20 May 2023 07:33:32 +0000 EA - “The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models by Froolow Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models, published by Froolow on May 19, 2023 on The Effective Altruism Forum.SummaryThis is an entry into the Open Philanthropy AI Worldview Contest. It investigates the risk of Catastrophe due to an Out-of-Control AI. It makes the case that that model structure is a significant blindspot in AI risk analysis, and hence there is more theoretical work that needs to be done on model structure before this question can be answered with a high degree of confidence.The bulk of the essay is a ‘proof by example’ of this claim – I identify a structural assumption which I think would have been challenged in a different field with a more established tradition of structural criticism, and demonstrate that surfacing this assumption reduce the risk of Catastrophe due to Out-of-Control (OOC) AI by around a third. Specifically, in this essay I look at what happens if we are uncertain about the timelines of AI Catastrophe and Alignment, allowing them to occur in any order.There is currently only an inconsistent culture of peer reviewing structural assumptions in the AI Risk community, especially in comparison to the culture of critiquing parameter estimates. Since models can only be as accurate as the least accurate of these elements, I conclude that this disproportionate focus on refining parameter estimates places an avoidable upper limit on how accurate estimates of AI Risk can be. However, it also suggests some high value next steps to address the inconsistency, so there is a straightforward blueprint for addressing the issues raised in this essay.The analysis underpinning this result is available in this spreadsheet. The results themselves are displayed below. They show that introducing time dependency into the model reduces the risk of OOC AI Catastrophe from 9.8% to 6.7%:My general approach is that I found a partially-complete piece of structural criticism on the forums here and then implemented it into a de novo predictive model based on a well-regarded existing model of AI Risk articulated by Carlsmith (2021). If results change dramatically between the two approaches then I will have found a ‘free lunch’ – value that can be added to the frontier of the AI Risk discussion without me actually having to do any intellectual work to push that frontier forward. Since the results above demonstraste quite clearly that the results have changed, I conclude that work on refining parameters has outpaced work on refining structure, and that ideally there would be a rebalancing of effort to prevent such ‘free lunches’ from going unnoticed in the future.I perform some sensitivity analysis to show that this effect is plausible given what we know about community beliefs about AI Risk. I conclude that my amended model is probably more suitable than the standard approach taken towards AI Risk analysis, especially when there are specific time-bound elements of the decision problem that need to be investigated (such as a restriction that AI should be invented before 2070). Therefore, I conclude that hunting for other such structural assumptions is likely to be an extremely valuable use of time, since there is probably additional low-hanging fruit in the structural analysis space.I offer some conclusions for how to take this work forwards:There are multiple weaknesses of my model which could be addressed by someone with better knowledge of the issues in AI Alignment. For example, I assume that Alignment is solved in one discrete step which is probably not a good model of how Aligning AIs will actually play out in practice.There are also many other opportunities for analysis in the AI Risk space where more sophisticated structure can likely resolve disagreement. For example, a live discussion in AI Risk at the ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models, published by Froolow on May 19, 2023 on The Effective Altruism Forum.SummaryThis is an entry into the Open Philanthropy AI Worldview Contest. It investigates the risk of Catastrophe due to an Out-of-Control AI. It makes the case that that model structure is a significant blindspot in AI risk analysis, and hence there is more theoretical work that needs to be done on model structure before this question can be answered with a high degree of confidence.The bulk of the essay is a ‘proof by example’ of this claim – I identify a structural assumption which I think would have been challenged in a different field with a more established tradition of structural criticism, and demonstrate that surfacing this assumption reduce the risk of Catastrophe due to Out-of-Control (OOC) AI by around a third. Specifically, in this essay I look at what happens if we are uncertain about the timelines of AI Catastrophe and Alignment, allowing them to occur in any order.There is currently only an inconsistent culture of peer reviewing structural assumptions in the AI Risk community, especially in comparison to the culture of critiquing parameter estimates. Since models can only be as accurate as the least accurate of these elements, I conclude that this disproportionate focus on refining parameter estimates places an avoidable upper limit on how accurate estimates of AI Risk can be. However, it also suggests some high value next steps to address the inconsistency, so there is a straightforward blueprint for addressing the issues raised in this essay.The analysis underpinning this result is available in this spreadsheet. The results themselves are displayed below. They show that introducing time dependency into the model reduces the risk of OOC AI Catastrophe from 9.8% to 6.7%:My general approach is that I found a partially-complete piece of structural criticism on the forums here and then implemented it into a de novo predictive model based on a well-regarded existing model of AI Risk articulated by Carlsmith (2021). If results change dramatically between the two approaches then I will have found a ‘free lunch’ – value that can be added to the frontier of the AI Risk discussion without me actually having to do any intellectual work to push that frontier forward. Since the results above demonstraste quite clearly that the results have changed, I conclude that work on refining parameters has outpaced work on refining structure, and that ideally there would be a rebalancing of effort to prevent such ‘free lunches’ from going unnoticed in the future.I perform some sensitivity analysis to show that this effect is plausible given what we know about community beliefs about AI Risk. I conclude that my amended model is probably more suitable than the standard approach taken towards AI Risk analysis, especially when there are specific time-bound elements of the decision problem that need to be investigated (such as a restriction that AI should be invented before 2070). Therefore, I conclude that hunting for other such structural assumptions is likely to be an extremely valuable use of time, since there is probably additional low-hanging fruit in the structural analysis space.I offer some conclusions for how to take this work forwards:There are multiple weaknesses of my model which could be addressed by someone with better knowledge of the issues in AI Alignment. For example, I assume that Alignment is solved in one discrete step which is probably not a good model of how Aligning AIs will actually play out in practice.There are also many other opportunities for analysis in the AI Risk space where more sophisticated structure can likely resolve disagreement. For example, a live discussion in AI Risk at the ...]]>
Froolow https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 36:26 None full 6013
8xNSiwj5gjoDTRquQ_NL_EA_EA EA - Announcing the Publication of Animal Liberation Now by Peter Singer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Publication of Animal Liberation Now, published by Peter Singer on May 19, 2023 on The Effective Altruism Forum.SummaryMy new book, Animal Liberation Now, will be out next Tuesday (May 23).I consider ALN to be a new book, rather than just a revision, because so much of the material in the book is new.Pre-ordering from Amazon or other online booksellers (US only) or ordering/purchasing within the first week of publication will increase the chance of the book getting on NYT best-seller list. (Doing the same in other countries may increase the prospects of the book getting on that country’s bestseller list.)Along with the publication of the book, I will be doing a speaking tour with the same title as the book. You can book tickets here, with a 50% discount if you use the code SINGER50 (Profits will be 100% donated to effective charities opposing intensive animal production).Please spread the words (and links) about the book and the speaking tour to help give the book a strong start.Why a new book?The major motivation of writing the new book is to have a book about animal ethics that is relevant in the 21st Century. Compared with Animal Liberation, there are major updates on the situation of animals used in research and factory farming, and people’s attitudes toward animals, as well as new research on the capacities of animals to suffer, and on the contribution of meat to climate change.What’s different?The animal movement emerged after the 1975 version of AL. In particular, the concern for farmed animals developed rapidly over the last two decades. These developments deserve to be reported and discussed.Some of the issues discussed in AL have seen many changes since then. Some animal experiments are going out of fashion, while some others emerged. On factory farming, there were wins for the farmed animal movement, such as the partially successful “cage-free movement” and various wins in legislative reforms. But the number of animals raised in factory farms increased rapidly during the same time. A significant portion of this increased number came from aquaculture, in other words fish factory farms. New developments were also seen regarding replacing factory farming, in particular the development of plant-based meat alternative and cultivated meats.ALN has a more global perspective than AL, most notably discussing what happened in China. Since the last edition of AL, China has greatly increased the use of animals in research and factory farming.There are also changes in my views about a number of issues. Firstly, since 1990 (The year of publication for the last full revision of the 1975 version of AL), scientists have gained more evidence that suggests the sentience of fish and some invertebrates. Accordingly, I have updated my attitudes toward the probability of sentience of these animals. Secondly, I have changed my views toward the suffering of wild animals, in particular the possibility and tractability of helping them. Thirdly, I have added the discussion about the relation between climate change and meat consumption. Last but not least, Effective Altruism, as an idea or as a movement, did not exist when the versions of Animal Liberation were written, so I have added some discussions of the EA movement and EA principles in the new book.Is the book relevant to EA?Animal welfare is, and should be, one of the major cause areas with EA for reasons I do not need to repeat here. I will explain why ALN is relevant to EA.Firstly, ALN contains some of the commonly used arguments by EAs who work on animal welfare on why the issues of animal suffering is important. Reading ALN provides an opportunity for newcomers to the EA community to learn about animal ethics and why some (hopefully most) EAs think that animals matter morally and that they are...]]>
Peter Singer https://forum.effectivealtruism.org/posts/8xNSiwj5gjoDTRquQ/announcing-the-publication-of-animal-liberation-now Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Publication of Animal Liberation Now, published by Peter Singer on May 19, 2023 on The Effective Altruism Forum.SummaryMy new book, Animal Liberation Now, will be out next Tuesday (May 23).I consider ALN to be a new book, rather than just a revision, because so much of the material in the book is new.Pre-ordering from Amazon or other online booksellers (US only) or ordering/purchasing within the first week of publication will increase the chance of the book getting on NYT best-seller list. (Doing the same in other countries may increase the prospects of the book getting on that country’s bestseller list.)Along with the publication of the book, I will be doing a speaking tour with the same title as the book. You can book tickets here, with a 50% discount if you use the code SINGER50 (Profits will be 100% donated to effective charities opposing intensive animal production).Please spread the words (and links) about the book and the speaking tour to help give the book a strong start.Why a new book?The major motivation of writing the new book is to have a book about animal ethics that is relevant in the 21st Century. Compared with Animal Liberation, there are major updates on the situation of animals used in research and factory farming, and people’s attitudes toward animals, as well as new research on the capacities of animals to suffer, and on the contribution of meat to climate change.What’s different?The animal movement emerged after the 1975 version of AL. In particular, the concern for farmed animals developed rapidly over the last two decades. These developments deserve to be reported and discussed.Some of the issues discussed in AL have seen many changes since then. Some animal experiments are going out of fashion, while some others emerged. On factory farming, there were wins for the farmed animal movement, such as the partially successful “cage-free movement” and various wins in legislative reforms. But the number of animals raised in factory farms increased rapidly during the same time. A significant portion of this increased number came from aquaculture, in other words fish factory farms. New developments were also seen regarding replacing factory farming, in particular the development of plant-based meat alternative and cultivated meats.ALN has a more global perspective than AL, most notably discussing what happened in China. Since the last edition of AL, China has greatly increased the use of animals in research and factory farming.There are also changes in my views about a number of issues. Firstly, since 1990 (The year of publication for the last full revision of the 1975 version of AL), scientists have gained more evidence that suggests the sentience of fish and some invertebrates. Accordingly, I have updated my attitudes toward the probability of sentience of these animals. Secondly, I have changed my views toward the suffering of wild animals, in particular the possibility and tractability of helping them. Thirdly, I have added the discussion about the relation between climate change and meat consumption. Last but not least, Effective Altruism, as an idea or as a movement, did not exist when the versions of Animal Liberation were written, so I have added some discussions of the EA movement and EA principles in the new book.Is the book relevant to EA?Animal welfare is, and should be, one of the major cause areas with EA for reasons I do not need to repeat here. I will explain why ALN is relevant to EA.Firstly, ALN contains some of the commonly used arguments by EAs who work on animal welfare on why the issues of animal suffering is important. Reading ALN provides an opportunity for newcomers to the EA community to learn about animal ethics and why some (hopefully most) EAs think that animals matter morally and that they are...]]>
Fri, 19 May 2023 17:37:55 +0000 EA - Announcing the Publication of Animal Liberation Now by Peter Singer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Publication of Animal Liberation Now, published by Peter Singer on May 19, 2023 on The Effective Altruism Forum.SummaryMy new book, Animal Liberation Now, will be out next Tuesday (May 23).I consider ALN to be a new book, rather than just a revision, because so much of the material in the book is new.Pre-ordering from Amazon or other online booksellers (US only) or ordering/purchasing within the first week of publication will increase the chance of the book getting on NYT best-seller list. (Doing the same in other countries may increase the prospects of the book getting on that country’s bestseller list.)Along with the publication of the book, I will be doing a speaking tour with the same title as the book. You can book tickets here, with a 50% discount if you use the code SINGER50 (Profits will be 100% donated to effective charities opposing intensive animal production).Please spread the words (and links) about the book and the speaking tour to help give the book a strong start.Why a new book?The major motivation of writing the new book is to have a book about animal ethics that is relevant in the 21st Century. Compared with Animal Liberation, there are major updates on the situation of animals used in research and factory farming, and people’s attitudes toward animals, as well as new research on the capacities of animals to suffer, and on the contribution of meat to climate change.What’s different?The animal movement emerged after the 1975 version of AL. In particular, the concern for farmed animals developed rapidly over the last two decades. These developments deserve to be reported and discussed.Some of the issues discussed in AL have seen many changes since then. Some animal experiments are going out of fashion, while some others emerged. On factory farming, there were wins for the farmed animal movement, such as the partially successful “cage-free movement” and various wins in legislative reforms. But the number of animals raised in factory farms increased rapidly during the same time. A significant portion of this increased number came from aquaculture, in other words fish factory farms. New developments were also seen regarding replacing factory farming, in particular the development of plant-based meat alternative and cultivated meats.ALN has a more global perspective than AL, most notably discussing what happened in China. Since the last edition of AL, China has greatly increased the use of animals in research and factory farming.There are also changes in my views about a number of issues. Firstly, since 1990 (The year of publication for the last full revision of the 1975 version of AL), scientists have gained more evidence that suggests the sentience of fish and some invertebrates. Accordingly, I have updated my attitudes toward the probability of sentience of these animals. Secondly, I have changed my views toward the suffering of wild animals, in particular the possibility and tractability of helping them. Thirdly, I have added the discussion about the relation between climate change and meat consumption. Last but not least, Effective Altruism, as an idea or as a movement, did not exist when the versions of Animal Liberation were written, so I have added some discussions of the EA movement and EA principles in the new book.Is the book relevant to EA?Animal welfare is, and should be, one of the major cause areas with EA for reasons I do not need to repeat here. I will explain why ALN is relevant to EA.Firstly, ALN contains some of the commonly used arguments by EAs who work on animal welfare on why the issues of animal suffering is important. Reading ALN provides an opportunity for newcomers to the EA community to learn about animal ethics and why some (hopefully most) EAs think that animals matter morally and that they are...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Publication of Animal Liberation Now, published by Peter Singer on May 19, 2023 on The Effective Altruism Forum.SummaryMy new book, Animal Liberation Now, will be out next Tuesday (May 23).I consider ALN to be a new book, rather than just a revision, because so much of the material in the book is new.Pre-ordering from Amazon or other online booksellers (US only) or ordering/purchasing within the first week of publication will increase the chance of the book getting on NYT best-seller list. (Doing the same in other countries may increase the prospects of the book getting on that country’s bestseller list.)Along with the publication of the book, I will be doing a speaking tour with the same title as the book. You can book tickets here, with a 50% discount if you use the code SINGER50 (Profits will be 100% donated to effective charities opposing intensive animal production).Please spread the words (and links) about the book and the speaking tour to help give the book a strong start.Why a new book?The major motivation of writing the new book is to have a book about animal ethics that is relevant in the 21st Century. Compared with Animal Liberation, there are major updates on the situation of animals used in research and factory farming, and people’s attitudes toward animals, as well as new research on the capacities of animals to suffer, and on the contribution of meat to climate change.What’s different?The animal movement emerged after the 1975 version of AL. In particular, the concern for farmed animals developed rapidly over the last two decades. These developments deserve to be reported and discussed.Some of the issues discussed in AL have seen many changes since then. Some animal experiments are going out of fashion, while some others emerged. On factory farming, there were wins for the farmed animal movement, such as the partially successful “cage-free movement” and various wins in legislative reforms. But the number of animals raised in factory farms increased rapidly during the same time. A significant portion of this increased number came from aquaculture, in other words fish factory farms. New developments were also seen regarding replacing factory farming, in particular the development of plant-based meat alternative and cultivated meats.ALN has a more global perspective than AL, most notably discussing what happened in China. Since the last edition of AL, China has greatly increased the use of animals in research and factory farming.There are also changes in my views about a number of issues. Firstly, since 1990 (The year of publication for the last full revision of the 1975 version of AL), scientists have gained more evidence that suggests the sentience of fish and some invertebrates. Accordingly, I have updated my attitudes toward the probability of sentience of these animals. Secondly, I have changed my views toward the suffering of wild animals, in particular the possibility and tractability of helping them. Thirdly, I have added the discussion about the relation between climate change and meat consumption. Last but not least, Effective Altruism, as an idea or as a movement, did not exist when the versions of Animal Liberation were written, so I have added some discussions of the EA movement and EA principles in the new book.Is the book relevant to EA?Animal welfare is, and should be, one of the major cause areas with EA for reasons I do not need to repeat here. I will explain why ALN is relevant to EA.Firstly, ALN contains some of the commonly used arguments by EAs who work on animal welfare on why the issues of animal suffering is important. Reading ALN provides an opportunity for newcomers to the EA community to learn about animal ethics and why some (hopefully most) EAs think that animals matter morally and that they are...]]>
Peter Singer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:37 None full 6008
ckokr9uhr2Cu3h5En_NL_EA_EA EA - Tips for people considering starting new incubators by Joey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tips for people considering starting new incubators, published by Joey on May 19, 2023 on The Effective Altruism Forum.Charity Entrepreneurship is frequently contacted by individuals and donors who like our model. Several have expressed interest in seeing the model expanded, or seeing what a twist on the model would look like (e.g., different cause area, region, etc.) Although we are excited about maximizing CE’s impact, we are less convinced by the idea of growing the effective charity pool via franchising or other independent nonprofit incubators. This is because new incubators often do not address the actual bottlenecks faced by the nonprofit landscape, as we see them.There are lots of factors that prevent great new charities from being launched, and from eventually having a large impact. We have scaled CE to about 10 charities a year, and from our perspective, these are the three major bottlenecks to growing the new charity ecosystem further:Mid-stage fundingFoundersMultiplying effectsMid-stage fundingWe try to look at every step of our charities’ future journeys, to see how we expect them to fare as they progress. In general, there seems to be enough appetite in the philanthropic community to supply seed funding to brand new projects, and we have been successful in helping charities to launch with the funding they need. However, many cause areas appear to have gaps in available funding for charities who are around two to five years old.Charities’ budgets tend to grow each year; for example, a charity might need $150k for its first year, $250k for its 2nd, $400k for its third, and so on. The average charity might require a seed of $150k for its first year, and mid-stage funding (years 2-5) of ~$2 million over 4 years. Currently, it is much more difficult for highly effective charities to fundraise this much at this stage of their journey than it is for them to get the funding they need at the seed-funding stage. Keep in mind that this mid-stage funding is still too early and small for most major institutional funders (e.g., GiveWell does not recommend organizations that can only absorb $1 million a year as top charities), and governments rarely consider projects this young.Mental health case study: An example that demonstrates this issue well is found in the cause area of mental health. We have identified a number of promising intervention ideas in this area over the past few years, and a solid pool of aspiring entrepreneurs interested in founding mental health charities. Although we expected our seed network would be able to support the first round of funding, we did not have confidence in what came next for these charities. We have since worked to improve the situation by helping launch the Mental Health Funders Circle, but even with that network we are concerned about mid-stage funding in the future.Why is mid-stage the problem?: Instead of seed or late stage? I believe it’s the same donors who consider seed or mid-stage funding, but as the volume of funding is smaller at the seed stage, it is covered much more easily. While some charities may struggle in the late stage as well, less will even get to that stage and the number of options typically expand once a charity is clearly established as a field leader.FoundersAlthough a solid number of people are interested in founding charities, it's only an ideal fit for a relatively small percentage of people. It is a career path that requires a highly entrepreneurial mindset, plus a very strong ethical compass to succeed in. Due to the low number of people who are a good fit, I don’t believe that it is a career path that can absorb a high number of people (my guess would be less than 5% of people are actually suited to founding a nonprofit). It is my opinion that other career paths, like for-profit found...]]>
Joey https://forum.effectivealtruism.org/posts/ckokr9uhr2Cu3h5En/tips-for-people-considering-starting-new-incubators Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tips for people considering starting new incubators, published by Joey on May 19, 2023 on The Effective Altruism Forum.Charity Entrepreneurship is frequently contacted by individuals and donors who like our model. Several have expressed interest in seeing the model expanded, or seeing what a twist on the model would look like (e.g., different cause area, region, etc.) Although we are excited about maximizing CE’s impact, we are less convinced by the idea of growing the effective charity pool via franchising or other independent nonprofit incubators. This is because new incubators often do not address the actual bottlenecks faced by the nonprofit landscape, as we see them.There are lots of factors that prevent great new charities from being launched, and from eventually having a large impact. We have scaled CE to about 10 charities a year, and from our perspective, these are the three major bottlenecks to growing the new charity ecosystem further:Mid-stage fundingFoundersMultiplying effectsMid-stage fundingWe try to look at every step of our charities’ future journeys, to see how we expect them to fare as they progress. In general, there seems to be enough appetite in the philanthropic community to supply seed funding to brand new projects, and we have been successful in helping charities to launch with the funding they need. However, many cause areas appear to have gaps in available funding for charities who are around two to five years old.Charities’ budgets tend to grow each year; for example, a charity might need $150k for its first year, $250k for its 2nd, $400k for its third, and so on. The average charity might require a seed of $150k for its first year, and mid-stage funding (years 2-5) of ~$2 million over 4 years. Currently, it is much more difficult for highly effective charities to fundraise this much at this stage of their journey than it is for them to get the funding they need at the seed-funding stage. Keep in mind that this mid-stage funding is still too early and small for most major institutional funders (e.g., GiveWell does not recommend organizations that can only absorb $1 million a year as top charities), and governments rarely consider projects this young.Mental health case study: An example that demonstrates this issue well is found in the cause area of mental health. We have identified a number of promising intervention ideas in this area over the past few years, and a solid pool of aspiring entrepreneurs interested in founding mental health charities. Although we expected our seed network would be able to support the first round of funding, we did not have confidence in what came next for these charities. We have since worked to improve the situation by helping launch the Mental Health Funders Circle, but even with that network we are concerned about mid-stage funding in the future.Why is mid-stage the problem?: Instead of seed or late stage? I believe it’s the same donors who consider seed or mid-stage funding, but as the volume of funding is smaller at the seed stage, it is covered much more easily. While some charities may struggle in the late stage as well, less will even get to that stage and the number of options typically expand once a charity is clearly established as a field leader.FoundersAlthough a solid number of people are interested in founding charities, it's only an ideal fit for a relatively small percentage of people. It is a career path that requires a highly entrepreneurial mindset, plus a very strong ethical compass to succeed in. Due to the low number of people who are a good fit, I don’t believe that it is a career path that can absorb a high number of people (my guess would be less than 5% of people are actually suited to founding a nonprofit). It is my opinion that other career paths, like for-profit found...]]>
Fri, 19 May 2023 16:04:30 +0000 EA - Tips for people considering starting new incubators by Joey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tips for people considering starting new incubators, published by Joey on May 19, 2023 on The Effective Altruism Forum.Charity Entrepreneurship is frequently contacted by individuals and donors who like our model. Several have expressed interest in seeing the model expanded, or seeing what a twist on the model would look like (e.g., different cause area, region, etc.) Although we are excited about maximizing CE’s impact, we are less convinced by the idea of growing the effective charity pool via franchising or other independent nonprofit incubators. This is because new incubators often do not address the actual bottlenecks faced by the nonprofit landscape, as we see them.There are lots of factors that prevent great new charities from being launched, and from eventually having a large impact. We have scaled CE to about 10 charities a year, and from our perspective, these are the three major bottlenecks to growing the new charity ecosystem further:Mid-stage fundingFoundersMultiplying effectsMid-stage fundingWe try to look at every step of our charities’ future journeys, to see how we expect them to fare as they progress. In general, there seems to be enough appetite in the philanthropic community to supply seed funding to brand new projects, and we have been successful in helping charities to launch with the funding they need. However, many cause areas appear to have gaps in available funding for charities who are around two to five years old.Charities’ budgets tend to grow each year; for example, a charity might need $150k for its first year, $250k for its 2nd, $400k for its third, and so on. The average charity might require a seed of $150k for its first year, and mid-stage funding (years 2-5) of ~$2 million over 4 years. Currently, it is much more difficult for highly effective charities to fundraise this much at this stage of their journey than it is for them to get the funding they need at the seed-funding stage. Keep in mind that this mid-stage funding is still too early and small for most major institutional funders (e.g., GiveWell does not recommend organizations that can only absorb $1 million a year as top charities), and governments rarely consider projects this young.Mental health case study: An example that demonstrates this issue well is found in the cause area of mental health. We have identified a number of promising intervention ideas in this area over the past few years, and a solid pool of aspiring entrepreneurs interested in founding mental health charities. Although we expected our seed network would be able to support the first round of funding, we did not have confidence in what came next for these charities. We have since worked to improve the situation by helping launch the Mental Health Funders Circle, but even with that network we are concerned about mid-stage funding in the future.Why is mid-stage the problem?: Instead of seed or late stage? I believe it’s the same donors who consider seed or mid-stage funding, but as the volume of funding is smaller at the seed stage, it is covered much more easily. While some charities may struggle in the late stage as well, less will even get to that stage and the number of options typically expand once a charity is clearly established as a field leader.FoundersAlthough a solid number of people are interested in founding charities, it's only an ideal fit for a relatively small percentage of people. It is a career path that requires a highly entrepreneurial mindset, plus a very strong ethical compass to succeed in. Due to the low number of people who are a good fit, I don’t believe that it is a career path that can absorb a high number of people (my guess would be less than 5% of people are actually suited to founding a nonprofit). It is my opinion that other career paths, like for-profit found...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tips for people considering starting new incubators, published by Joey on May 19, 2023 on The Effective Altruism Forum.Charity Entrepreneurship is frequently contacted by individuals and donors who like our model. Several have expressed interest in seeing the model expanded, or seeing what a twist on the model would look like (e.g., different cause area, region, etc.) Although we are excited about maximizing CE’s impact, we are less convinced by the idea of growing the effective charity pool via franchising or other independent nonprofit incubators. This is because new incubators often do not address the actual bottlenecks faced by the nonprofit landscape, as we see them.There are lots of factors that prevent great new charities from being launched, and from eventually having a large impact. We have scaled CE to about 10 charities a year, and from our perspective, these are the three major bottlenecks to growing the new charity ecosystem further:Mid-stage fundingFoundersMultiplying effectsMid-stage fundingWe try to look at every step of our charities’ future journeys, to see how we expect them to fare as they progress. In general, there seems to be enough appetite in the philanthropic community to supply seed funding to brand new projects, and we have been successful in helping charities to launch with the funding they need. However, many cause areas appear to have gaps in available funding for charities who are around two to five years old.Charities’ budgets tend to grow each year; for example, a charity might need $150k for its first year, $250k for its 2nd, $400k for its third, and so on. The average charity might require a seed of $150k for its first year, and mid-stage funding (years 2-5) of ~$2 million over 4 years. Currently, it is much more difficult for highly effective charities to fundraise this much at this stage of their journey than it is for them to get the funding they need at the seed-funding stage. Keep in mind that this mid-stage funding is still too early and small for most major institutional funders (e.g., GiveWell does not recommend organizations that can only absorb $1 million a year as top charities), and governments rarely consider projects this young.Mental health case study: An example that demonstrates this issue well is found in the cause area of mental health. We have identified a number of promising intervention ideas in this area over the past few years, and a solid pool of aspiring entrepreneurs interested in founding mental health charities. Although we expected our seed network would be able to support the first round of funding, we did not have confidence in what came next for these charities. We have since worked to improve the situation by helping launch the Mental Health Funders Circle, but even with that network we are concerned about mid-stage funding in the future.Why is mid-stage the problem?: Instead of seed or late stage? I believe it’s the same donors who consider seed or mid-stage funding, but as the volume of funding is smaller at the seed stage, it is covered much more easily. While some charities may struggle in the late stage as well, less will even get to that stage and the number of options typically expand once a charity is clearly established as a field leader.FoundersAlthough a solid number of people are interested in founding charities, it's only an ideal fit for a relatively small percentage of people. It is a career path that requires a highly entrepreneurial mindset, plus a very strong ethical compass to succeed in. Due to the low number of people who are a good fit, I don’t believe that it is a career path that can absorb a high number of people (my guess would be less than 5% of people are actually suited to founding a nonprofit). It is my opinion that other career paths, like for-profit found...]]>
Joey https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:08 None full 6007
EFEwBvuDrTLDndqCt_NL_EA_EA EA - Relative Value Functions: A Flexible New Format for Value Estimation by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Relative Value Functions: A Flexible New Format for Value Estimation, published by Ozzie Gooen on May 18, 2023 on The Effective Altruism Forum.SummaryQuantifying value in a meaningful way is one of the most important yet challenging tasks for improving decision-making. Traditional approaches rely on standardized value units, but these falter when options differ widely or lack an obvious shared metric. We propose an alternative called relative value functions that uses programming functions to value relationships rather than absolute quantities. This method captures detailed information about correlations and uncertainties that standardized value units miss. More specifically, we put forward value ratio formats of univariate and multivariate forms.Relative value functions ultimately shine where single value units struggle: valuing diverse items in situations with high uncertainty. Their flexibility and elegance suit them well to collective estimation and forecasting. This makes them particularly well-suited to ambitious, large-scale valuation, like estimating large utility functions.While promising, relative value functions also pose challenges. They require specialized knowledge to develop and understand, and will require new forms of software infrastructure. Visualization techniques are needed to make their insights accessible, and training resources must be created to build modeling expertise.Writing programmatic relative value functions can be much easier than one might expect, given the right tools. We show some examples using Squiggle, a programming language for estimation.We at QURI are currently building software to make relative value estimation usable, and we expect to share some of this shortly. We of course also very much encourage others to try other setups as well.Ultimately, if we aim to eventually generate estimates of things like:The total value of all effective altruist projects;The value of 100,000 potential personal and organizational interventions; orThe value of each political bill under consideration in the United States;then the use of relative value assessments may be crucial.Presentation & DemoI gave a recent presentation on relative values, as part of a longer presentation in our work at QURI. This features a short walk-through of an experimental app we're working on to express these values. The Relative Values part of the presentation is is from 22:25 to 35:59.This post gives a much more thorough description of this work than the presentation does, but the example in the presentation might make the rest of this make more sense.Challenges with Estimating Value with Standard UnitsThe standard way to measure the value of items is to come up with standardized units and measure the items in terms of these units.Many health measure benefits are estimated in QALYs or DALYsConsumer benefit has been measured in willingness to payLongtermist interventions have occasionally been measured in “Basis Points”, Microdooms and MicrotopiasRisky activities can be measured in MicromortsCOVID activities have been measured in MicroCOVIDsLet’s call these sorts of units “value units” as they are meant as approximations or proxies of value. Most of these (QALYs, Basis Points, Micromorts) can more formally be called summary measures, but we’ll stick to the term unit for simplicity.These sorts of units can be very useful, but they’re still infrequently used.QALYs and DALYS don’t have many trusted and aggregated tables. Often there are specific estimates made in specific research papers, but there aren’t many long aggregated tables for public use.There are very few tables of personal intervention value estimates, like the net benefit of life choices.Very few business decisions are made with reference to clear units of value. For example, “Whi...]]>
Ozzie Gooen https://forum.effectivealtruism.org/posts/EFEwBvuDrTLDndqCt/relative-value-functions-a-flexible-new-format-for-value Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Relative Value Functions: A Flexible New Format for Value Estimation, published by Ozzie Gooen on May 18, 2023 on The Effective Altruism Forum.SummaryQuantifying value in a meaningful way is one of the most important yet challenging tasks for improving decision-making. Traditional approaches rely on standardized value units, but these falter when options differ widely or lack an obvious shared metric. We propose an alternative called relative value functions that uses programming functions to value relationships rather than absolute quantities. This method captures detailed information about correlations and uncertainties that standardized value units miss. More specifically, we put forward value ratio formats of univariate and multivariate forms.Relative value functions ultimately shine where single value units struggle: valuing diverse items in situations with high uncertainty. Their flexibility and elegance suit them well to collective estimation and forecasting. This makes them particularly well-suited to ambitious, large-scale valuation, like estimating large utility functions.While promising, relative value functions also pose challenges. They require specialized knowledge to develop and understand, and will require new forms of software infrastructure. Visualization techniques are needed to make their insights accessible, and training resources must be created to build modeling expertise.Writing programmatic relative value functions can be much easier than one might expect, given the right tools. We show some examples using Squiggle, a programming language for estimation.We at QURI are currently building software to make relative value estimation usable, and we expect to share some of this shortly. We of course also very much encourage others to try other setups as well.Ultimately, if we aim to eventually generate estimates of things like:The total value of all effective altruist projects;The value of 100,000 potential personal and organizational interventions; orThe value of each political bill under consideration in the United States;then the use of relative value assessments may be crucial.Presentation & DemoI gave a recent presentation on relative values, as part of a longer presentation in our work at QURI. This features a short walk-through of an experimental app we're working on to express these values. The Relative Values part of the presentation is is from 22:25 to 35:59.This post gives a much more thorough description of this work than the presentation does, but the example in the presentation might make the rest of this make more sense.Challenges with Estimating Value with Standard UnitsThe standard way to measure the value of items is to come up with standardized units and measure the items in terms of these units.Many health measure benefits are estimated in QALYs or DALYsConsumer benefit has been measured in willingness to payLongtermist interventions have occasionally been measured in “Basis Points”, Microdooms and MicrotopiasRisky activities can be measured in MicromortsCOVID activities have been measured in MicroCOVIDsLet’s call these sorts of units “value units” as they are meant as approximations or proxies of value. Most of these (QALYs, Basis Points, Micromorts) can more formally be called summary measures, but we’ll stick to the term unit for simplicity.These sorts of units can be very useful, but they’re still infrequently used.QALYs and DALYS don’t have many trusted and aggregated tables. Often there are specific estimates made in specific research papers, but there aren’t many long aggregated tables for public use.There are very few tables of personal intervention value estimates, like the net benefit of life choices.Very few business decisions are made with reference to clear units of value. For example, “Whi...]]>
Fri, 19 May 2023 07:42:58 +0000 EA - Relative Value Functions: A Flexible New Format for Value Estimation by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Relative Value Functions: A Flexible New Format for Value Estimation, published by Ozzie Gooen on May 18, 2023 on The Effective Altruism Forum.SummaryQuantifying value in a meaningful way is one of the most important yet challenging tasks for improving decision-making. Traditional approaches rely on standardized value units, but these falter when options differ widely or lack an obvious shared metric. We propose an alternative called relative value functions that uses programming functions to value relationships rather than absolute quantities. This method captures detailed information about correlations and uncertainties that standardized value units miss. More specifically, we put forward value ratio formats of univariate and multivariate forms.Relative value functions ultimately shine where single value units struggle: valuing diverse items in situations with high uncertainty. Their flexibility and elegance suit them well to collective estimation and forecasting. This makes them particularly well-suited to ambitious, large-scale valuation, like estimating large utility functions.While promising, relative value functions also pose challenges. They require specialized knowledge to develop and understand, and will require new forms of software infrastructure. Visualization techniques are needed to make their insights accessible, and training resources must be created to build modeling expertise.Writing programmatic relative value functions can be much easier than one might expect, given the right tools. We show some examples using Squiggle, a programming language for estimation.We at QURI are currently building software to make relative value estimation usable, and we expect to share some of this shortly. We of course also very much encourage others to try other setups as well.Ultimately, if we aim to eventually generate estimates of things like:The total value of all effective altruist projects;The value of 100,000 potential personal and organizational interventions; orThe value of each political bill under consideration in the United States;then the use of relative value assessments may be crucial.Presentation & DemoI gave a recent presentation on relative values, as part of a longer presentation in our work at QURI. This features a short walk-through of an experimental app we're working on to express these values. The Relative Values part of the presentation is is from 22:25 to 35:59.This post gives a much more thorough description of this work than the presentation does, but the example in the presentation might make the rest of this make more sense.Challenges with Estimating Value with Standard UnitsThe standard way to measure the value of items is to come up with standardized units and measure the items in terms of these units.Many health measure benefits are estimated in QALYs or DALYsConsumer benefit has been measured in willingness to payLongtermist interventions have occasionally been measured in “Basis Points”, Microdooms and MicrotopiasRisky activities can be measured in MicromortsCOVID activities have been measured in MicroCOVIDsLet’s call these sorts of units “value units” as they are meant as approximations or proxies of value. Most of these (QALYs, Basis Points, Micromorts) can more formally be called summary measures, but we’ll stick to the term unit for simplicity.These sorts of units can be very useful, but they’re still infrequently used.QALYs and DALYS don’t have many trusted and aggregated tables. Often there are specific estimates made in specific research papers, but there aren’t many long aggregated tables for public use.There are very few tables of personal intervention value estimates, like the net benefit of life choices.Very few business decisions are made with reference to clear units of value. For example, “Whi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Relative Value Functions: A Flexible New Format for Value Estimation, published by Ozzie Gooen on May 18, 2023 on The Effective Altruism Forum.SummaryQuantifying value in a meaningful way is one of the most important yet challenging tasks for improving decision-making. Traditional approaches rely on standardized value units, but these falter when options differ widely or lack an obvious shared metric. We propose an alternative called relative value functions that uses programming functions to value relationships rather than absolute quantities. This method captures detailed information about correlations and uncertainties that standardized value units miss. More specifically, we put forward value ratio formats of univariate and multivariate forms.Relative value functions ultimately shine where single value units struggle: valuing diverse items in situations with high uncertainty. Their flexibility and elegance suit them well to collective estimation and forecasting. This makes them particularly well-suited to ambitious, large-scale valuation, like estimating large utility functions.While promising, relative value functions also pose challenges. They require specialized knowledge to develop and understand, and will require new forms of software infrastructure. Visualization techniques are needed to make their insights accessible, and training resources must be created to build modeling expertise.Writing programmatic relative value functions can be much easier than one might expect, given the right tools. We show some examples using Squiggle, a programming language for estimation.We at QURI are currently building software to make relative value estimation usable, and we expect to share some of this shortly. We of course also very much encourage others to try other setups as well.Ultimately, if we aim to eventually generate estimates of things like:The total value of all effective altruist projects;The value of 100,000 potential personal and organizational interventions; orThe value of each political bill under consideration in the United States;then the use of relative value assessments may be crucial.Presentation & DemoI gave a recent presentation on relative values, as part of a longer presentation in our work at QURI. This features a short walk-through of an experimental app we're working on to express these values. The Relative Values part of the presentation is is from 22:25 to 35:59.This post gives a much more thorough description of this work than the presentation does, but the example in the presentation might make the rest of this make more sense.Challenges with Estimating Value with Standard UnitsThe standard way to measure the value of items is to come up with standardized units and measure the items in terms of these units.Many health measure benefits are estimated in QALYs or DALYsConsumer benefit has been measured in willingness to payLongtermist interventions have occasionally been measured in “Basis Points”, Microdooms and MicrotopiasRisky activities can be measured in MicromortsCOVID activities have been measured in MicroCOVIDsLet’s call these sorts of units “value units” as they are meant as approximations or proxies of value. Most of these (QALYs, Basis Points, Micromorts) can more formally be called summary measures, but we’ll stick to the term unit for simplicity.These sorts of units can be very useful, but they’re still infrequently used.QALYs and DALYS don’t have many trusted and aggregated tables. Often there are specific estimates made in specific research papers, but there aren’t many long aggregated tables for public use.There are very few tables of personal intervention value estimates, like the net benefit of life choices.Very few business decisions are made with reference to clear units of value. For example, “Whi...]]>
Ozzie Gooen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 29:31 None full 6003
hvrwNmXtWRgHGwzaz_NL_EA_EA EA - EA Forum: content and moderator positions by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Forum: content and moderator positions, published by Lizka on May 18, 2023 on The Effective Altruism Forum.TL;DR: We’re hiring for Forum moderators — apply by 1 June (it’s the first round of the application, and should take 15-20 minutes). We’re also pre-announcing a full-time Content Specialist position on the Online Team at CEA — you can indicate interest in that.➡️ Apply to be a part-time Forum moderator by 1 June.Round 1 of the application should take around 15-20 minutes, and applying earlier is better.You can see moderator responsibilities below. This is a remote, part-time, paid position.➡️ Indicate interest in a full-time Content Specialist position on the Online Team at CEA.We’ll probably soon be hiring for someone to work with me (Lizka) on content-related tasks on the Forum. If you fill out this form, we will send you an email when the application opens and consider streamlining your application if you seem like a particularly good fit.You can see more about the role’s responsibilities below. This is a full-time position, and can be remote or in-person from Oxford/Boston/London/Berkeley.➡️ You can also indicate interest in working as a copy-editor for CEA or in being a Forum Facilitator (both are part-time remote roles).If you know someone who might be interested, please consider sending this post to them!Please feel free to get in touch with any questions you might have. You can contact forum@centreforeffectivealtruism.org or forum-moderation@effectivealtruism.org, comment here, or reach out to moderators and members of the Online Team.An overview of the rolesI’ve shared a lot more information on the moderator role and the full-time content role in this post — here's a summary in table form. (You can submit the first round of the moderator application or indicate interest in the content role without reading the whole post.)TitleAbout the roleKey responsibilitiesStage the application is atModeratorPart-time, remote (average ~3 hours a week but variable), $40/hourMake the Forum safe, welcoming, and collaborative (e.g. by stopping or preventing aggressive behavior, being clear about moderation decisions), nurture important qualities on the Forum (e.g. by improving the written discussion norms or proactively nudging conversations into better directions), and help the rest of the moderation team.Round 1 is open (and should take 15-20 minutes): apply by 1 JuneContent SpecialistFull-time, remote/ in-person (Oxford/ London/ Boston/ Berkeley)Encourage engagement with important and interesting online content (via outreach, newsletters, curation, Forum events, writing, etc.), improve the epistemics, safety, and trust levels on the Forum (e.g. via moderation), and more.Indication of interest (we'll probably open a full application soon)We’re also excited for indications of interest for the following part-timecontractor roles, although we might not end up hiring for these in the very near futureCopy-editor indication of interestPart-time, remote (~4 hours a week average), $30/hour by defaultCopy-editing for style, clarity, grammar — and generally sanity-checking content for CEA. Sometimes also things like reformatting, summarizing other content, finding images, and possibly posting on the website or social media. Forum Facilitator indication of interestPart-time, remote (~3 hours a week average), $30/hourTitleAbout the roleKey responsibilitiesStage the application is atModeratorPart-time, remote (average ~3 hours a week but variable), $40/hourMake the Forum safe, welcoming, and collaborative (e.g. by stopping or preventing aggressive behavior, being clear about moderation decisions), nurture important qualities on the Forum (e.g. by improving the written discussion norms or proactively nudging conversations into better directions), and help the rest ...]]>
Lizka https://forum.effectivealtruism.org/posts/hvrwNmXtWRgHGwzaz/ea-forum-content-and-moderator-positions Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Forum: content and moderator positions, published by Lizka on May 18, 2023 on The Effective Altruism Forum.TL;DR: We’re hiring for Forum moderators — apply by 1 June (it’s the first round of the application, and should take 15-20 minutes). We’re also pre-announcing a full-time Content Specialist position on the Online Team at CEA — you can indicate interest in that.➡️ Apply to be a part-time Forum moderator by 1 June.Round 1 of the application should take around 15-20 minutes, and applying earlier is better.You can see moderator responsibilities below. This is a remote, part-time, paid position.➡️ Indicate interest in a full-time Content Specialist position on the Online Team at CEA.We’ll probably soon be hiring for someone to work with me (Lizka) on content-related tasks on the Forum. If you fill out this form, we will send you an email when the application opens and consider streamlining your application if you seem like a particularly good fit.You can see more about the role’s responsibilities below. This is a full-time position, and can be remote or in-person from Oxford/Boston/London/Berkeley.➡️ You can also indicate interest in working as a copy-editor for CEA or in being a Forum Facilitator (both are part-time remote roles).If you know someone who might be interested, please consider sending this post to them!Please feel free to get in touch with any questions you might have. You can contact forum@centreforeffectivealtruism.org or forum-moderation@effectivealtruism.org, comment here, or reach out to moderators and members of the Online Team.An overview of the rolesI’ve shared a lot more information on the moderator role and the full-time content role in this post — here's a summary in table form. (You can submit the first round of the moderator application or indicate interest in the content role without reading the whole post.)TitleAbout the roleKey responsibilitiesStage the application is atModeratorPart-time, remote (average ~3 hours a week but variable), $40/hourMake the Forum safe, welcoming, and collaborative (e.g. by stopping or preventing aggressive behavior, being clear about moderation decisions), nurture important qualities on the Forum (e.g. by improving the written discussion norms or proactively nudging conversations into better directions), and help the rest of the moderation team.Round 1 is open (and should take 15-20 minutes): apply by 1 JuneContent SpecialistFull-time, remote/ in-person (Oxford/ London/ Boston/ Berkeley)Encourage engagement with important and interesting online content (via outreach, newsletters, curation, Forum events, writing, etc.), improve the epistemics, safety, and trust levels on the Forum (e.g. via moderation), and more.Indication of interest (we'll probably open a full application soon)We’re also excited for indications of interest for the following part-timecontractor roles, although we might not end up hiring for these in the very near futureCopy-editor indication of interestPart-time, remote (~4 hours a week average), $30/hour by defaultCopy-editing for style, clarity, grammar — and generally sanity-checking content for CEA. Sometimes also things like reformatting, summarizing other content, finding images, and possibly posting on the website or social media. Forum Facilitator indication of interestPart-time, remote (~3 hours a week average), $30/hourTitleAbout the roleKey responsibilitiesStage the application is atModeratorPart-time, remote (average ~3 hours a week but variable), $40/hourMake the Forum safe, welcoming, and collaborative (e.g. by stopping or preventing aggressive behavior, being clear about moderation decisions), nurture important qualities on the Forum (e.g. by improving the written discussion norms or proactively nudging conversations into better directions), and help the rest ...]]>
Fri, 19 May 2023 07:39:45 +0000 EA - EA Forum: content and moderator positions by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Forum: content and moderator positions, published by Lizka on May 18, 2023 on The Effective Altruism Forum.TL;DR: We’re hiring for Forum moderators — apply by 1 June (it’s the first round of the application, and should take 15-20 minutes). We’re also pre-announcing a full-time Content Specialist position on the Online Team at CEA — you can indicate interest in that.➡️ Apply to be a part-time Forum moderator by 1 June.Round 1 of the application should take around 15-20 minutes, and applying earlier is better.You can see moderator responsibilities below. This is a remote, part-time, paid position.➡️ Indicate interest in a full-time Content Specialist position on the Online Team at CEA.We’ll probably soon be hiring for someone to work with me (Lizka) on content-related tasks on the Forum. If you fill out this form, we will send you an email when the application opens and consider streamlining your application if you seem like a particularly good fit.You can see more about the role’s responsibilities below. This is a full-time position, and can be remote or in-person from Oxford/Boston/London/Berkeley.➡️ You can also indicate interest in working as a copy-editor for CEA or in being a Forum Facilitator (both are part-time remote roles).If you know someone who might be interested, please consider sending this post to them!Please feel free to get in touch with any questions you might have. You can contact forum@centreforeffectivealtruism.org or forum-moderation@effectivealtruism.org, comment here, or reach out to moderators and members of the Online Team.An overview of the rolesI’ve shared a lot more information on the moderator role and the full-time content role in this post — here's a summary in table form. (You can submit the first round of the moderator application or indicate interest in the content role without reading the whole post.)TitleAbout the roleKey responsibilitiesStage the application is atModeratorPart-time, remote (average ~3 hours a week but variable), $40/hourMake the Forum safe, welcoming, and collaborative (e.g. by stopping or preventing aggressive behavior, being clear about moderation decisions), nurture important qualities on the Forum (e.g. by improving the written discussion norms or proactively nudging conversations into better directions), and help the rest of the moderation team.Round 1 is open (and should take 15-20 minutes): apply by 1 JuneContent SpecialistFull-time, remote/ in-person (Oxford/ London/ Boston/ Berkeley)Encourage engagement with important and interesting online content (via outreach, newsletters, curation, Forum events, writing, etc.), improve the epistemics, safety, and trust levels on the Forum (e.g. via moderation), and more.Indication of interest (we'll probably open a full application soon)We’re also excited for indications of interest for the following part-timecontractor roles, although we might not end up hiring for these in the very near futureCopy-editor indication of interestPart-time, remote (~4 hours a week average), $30/hour by defaultCopy-editing for style, clarity, grammar — and generally sanity-checking content for CEA. Sometimes also things like reformatting, summarizing other content, finding images, and possibly posting on the website or social media. Forum Facilitator indication of interestPart-time, remote (~3 hours a week average), $30/hourTitleAbout the roleKey responsibilitiesStage the application is atModeratorPart-time, remote (average ~3 hours a week but variable), $40/hourMake the Forum safe, welcoming, and collaborative (e.g. by stopping or preventing aggressive behavior, being clear about moderation decisions), nurture important qualities on the Forum (e.g. by improving the written discussion norms or proactively nudging conversations into better directions), and help the rest ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Forum: content and moderator positions, published by Lizka on May 18, 2023 on The Effective Altruism Forum.TL;DR: We’re hiring for Forum moderators — apply by 1 June (it’s the first round of the application, and should take 15-20 minutes). We’re also pre-announcing a full-time Content Specialist position on the Online Team at CEA — you can indicate interest in that.➡️ Apply to be a part-time Forum moderator by 1 June.Round 1 of the application should take around 15-20 minutes, and applying earlier is better.You can see moderator responsibilities below. This is a remote, part-time, paid position.➡️ Indicate interest in a full-time Content Specialist position on the Online Team at CEA.We’ll probably soon be hiring for someone to work with me (Lizka) on content-related tasks on the Forum. If you fill out this form, we will send you an email when the application opens and consider streamlining your application if you seem like a particularly good fit.You can see more about the role’s responsibilities below. This is a full-time position, and can be remote or in-person from Oxford/Boston/London/Berkeley.➡️ You can also indicate interest in working as a copy-editor for CEA or in being a Forum Facilitator (both are part-time remote roles).If you know someone who might be interested, please consider sending this post to them!Please feel free to get in touch with any questions you might have. You can contact forum@centreforeffectivealtruism.org or forum-moderation@effectivealtruism.org, comment here, or reach out to moderators and members of the Online Team.An overview of the rolesI’ve shared a lot more information on the moderator role and the full-time content role in this post — here's a summary in table form. (You can submit the first round of the moderator application or indicate interest in the content role without reading the whole post.)TitleAbout the roleKey responsibilitiesStage the application is atModeratorPart-time, remote (average ~3 hours a week but variable), $40/hourMake the Forum safe, welcoming, and collaborative (e.g. by stopping or preventing aggressive behavior, being clear about moderation decisions), nurture important qualities on the Forum (e.g. by improving the written discussion norms or proactively nudging conversations into better directions), and help the rest of the moderation team.Round 1 is open (and should take 15-20 minutes): apply by 1 JuneContent SpecialistFull-time, remote/ in-person (Oxford/ London/ Boston/ Berkeley)Encourage engagement with important and interesting online content (via outreach, newsletters, curation, Forum events, writing, etc.), improve the epistemics, safety, and trust levels on the Forum (e.g. via moderation), and more.Indication of interest (we'll probably open a full application soon)We’re also excited for indications of interest for the following part-timecontractor roles, although we might not end up hiring for these in the very near futureCopy-editor indication of interestPart-time, remote (~4 hours a week average), $30/hour by defaultCopy-editing for style, clarity, grammar — and generally sanity-checking content for CEA. Sometimes also things like reformatting, summarizing other content, finding images, and possibly posting on the website or social media. Forum Facilitator indication of interestPart-time, remote (~3 hours a week average), $30/hourTitleAbout the roleKey responsibilitiesStage the application is atModeratorPart-time, remote (average ~3 hours a week but variable), $40/hourMake the Forum safe, welcoming, and collaborative (e.g. by stopping or preventing aggressive behavior, being clear about moderation decisions), nurture important qualities on the Forum (e.g. by improving the written discussion norms or proactively nudging conversations into better directions), and help the rest ...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 24:09 None full 6004
dFWGSMRqWNbrQtdvK_NL_EA_EA EA - Introducing Healthy Futures Global: Join Us in Tackling Syphilis in Pregnancy and Creating Healthier Futures by Nils Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Healthy Futures Global: Join Us in Tackling Syphilis in Pregnancy and Creating Healthier Futures, published by Nils on May 18, 2023 on The Effective Altruism Forum.TLDRHealthy Futures Global is a new global health charity originating from CE’s incubation program trying to prevent mother-to-child transmission of syphilis, founded by Keyur Doolabh (medical doctor with research experience) and Nils Voelker (M Sc in health economics and former strategy consultant)Healthy Futures’ strategy is to elevate syphilis screening rates in antenatal clinics to the high levels of HIV screening rates by replacing HIV-only tests with a dual HIV/syphilis testKeyur and Nils are currently exploring potential pilot countries, and will be in the Philippines and Tanzania soon - they invite you to subscribe to their newsletter and to reach out to volunteer, especially if you are in the Philippines or Tanzania or could connect us to people thereI. Introduction: Healthy Futures Global and its OriginsKeyur and Nils are excited to announce the launch of Healthy Futures Global, a new organisation originating from Charity Entrepreneurship's latest incubation programme, dedicated to making a positive impact on global health.Healthy Futures’ mission is to improve maternal and newborn health by focusing on the elimination of congenital syphilis, a preventable but devastating disease that affects millions of families worldwide.II. The Problem: Congenital Syphilis' Global ImpactSyphilis in pregnancy is a pressing global health issue. It causes approximately 60,000 newborn deaths and 140,000 (almost 10% of global) stillbirths annually, contributing up to 50% of all stillbirths in some regions (1, 2, 3, 4).Antenatal syphilis also causes lifelong disabilities for many surviving children, often going unaddressed in many countries. This disability can include cognitive impairment, vision and hearing deficits, bone deformity, and liver dysfunction. If a pregnant woman has syphilis, her child has a 12% chance of neonatal death, 16% chance of stillbirth, and 25% chance of disability (5).III. The Solution: Test and Treat StrategyThe theory of change (below) is a hybrid between direct intervention, technical assistance and policy work. It involves lobbying governments for policy support, and supporting governments, local NGOs and antenatal clinics to roll out dual HIV/Syphilis tests.The key components of the approach involve rapid testing (RDTs) during antenatal care and immediate treatment with antibiotics (BPG) for positive cases.The main strengths of Healthy Futures are:Cost effectiveness: The strategy has the potential to cost-effectively save lives, prevent disabilities, and reduce the burden on health systems. Our analysis gives us an expected value of $2,400 per life saved and ~10x the cost-effectiveness of direct cash transfers. The medical evidence for positive effects of treating the pregnant woman and her baby is strong (6).Monitoring and evaluation: The direct nature of this intervention offers quick feedback loops, allowing us to re-evaluate our strategy accordingly.Track record: Keyur is a medical doctor and Nils brings a background of consulting for pharmaceutical companies and global health organisations. Their backgrounds are a good fit for this type of intervention.Organisational fuel: Healthy Futures benefits from ongoing mentoring from Charity Entrepreneurship and previously incubated charities.IV. Challenges, Risks and MitigationThe challenges Healthy Futures plans to address are (ranked according to severity x likelihood of occurring - highest on top):Implementational challenges:Sustainability: The long-term success and cost-effectiveness of this intervention depend on governments making the necessary changes in the health care system and establishing mech...]]>
Nils https://forum.effectivealtruism.org/posts/dFWGSMRqWNbrQtdvK/introducing-healthy-futures-global-join-us-in-tackling Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Healthy Futures Global: Join Us in Tackling Syphilis in Pregnancy and Creating Healthier Futures, published by Nils on May 18, 2023 on The Effective Altruism Forum.TLDRHealthy Futures Global is a new global health charity originating from CE’s incubation program trying to prevent mother-to-child transmission of syphilis, founded by Keyur Doolabh (medical doctor with research experience) and Nils Voelker (M Sc in health economics and former strategy consultant)Healthy Futures’ strategy is to elevate syphilis screening rates in antenatal clinics to the high levels of HIV screening rates by replacing HIV-only tests with a dual HIV/syphilis testKeyur and Nils are currently exploring potential pilot countries, and will be in the Philippines and Tanzania soon - they invite you to subscribe to their newsletter and to reach out to volunteer, especially if you are in the Philippines or Tanzania or could connect us to people thereI. Introduction: Healthy Futures Global and its OriginsKeyur and Nils are excited to announce the launch of Healthy Futures Global, a new organisation originating from Charity Entrepreneurship's latest incubation programme, dedicated to making a positive impact on global health.Healthy Futures’ mission is to improve maternal and newborn health by focusing on the elimination of congenital syphilis, a preventable but devastating disease that affects millions of families worldwide.II. The Problem: Congenital Syphilis' Global ImpactSyphilis in pregnancy is a pressing global health issue. It causes approximately 60,000 newborn deaths and 140,000 (almost 10% of global) stillbirths annually, contributing up to 50% of all stillbirths in some regions (1, 2, 3, 4).Antenatal syphilis also causes lifelong disabilities for many surviving children, often going unaddressed in many countries. This disability can include cognitive impairment, vision and hearing deficits, bone deformity, and liver dysfunction. If a pregnant woman has syphilis, her child has a 12% chance of neonatal death, 16% chance of stillbirth, and 25% chance of disability (5).III. The Solution: Test and Treat StrategyThe theory of change (below) is a hybrid between direct intervention, technical assistance and policy work. It involves lobbying governments for policy support, and supporting governments, local NGOs and antenatal clinics to roll out dual HIV/Syphilis tests.The key components of the approach involve rapid testing (RDTs) during antenatal care and immediate treatment with antibiotics (BPG) for positive cases.The main strengths of Healthy Futures are:Cost effectiveness: The strategy has the potential to cost-effectively save lives, prevent disabilities, and reduce the burden on health systems. Our analysis gives us an expected value of $2,400 per life saved and ~10x the cost-effectiveness of direct cash transfers. The medical evidence for positive effects of treating the pregnant woman and her baby is strong (6).Monitoring and evaluation: The direct nature of this intervention offers quick feedback loops, allowing us to re-evaluate our strategy accordingly.Track record: Keyur is a medical doctor and Nils brings a background of consulting for pharmaceutical companies and global health organisations. Their backgrounds are a good fit for this type of intervention.Organisational fuel: Healthy Futures benefits from ongoing mentoring from Charity Entrepreneurship and previously incubated charities.IV. Challenges, Risks and MitigationThe challenges Healthy Futures plans to address are (ranked according to severity x likelihood of occurring - highest on top):Implementational challenges:Sustainability: The long-term success and cost-effectiveness of this intervention depend on governments making the necessary changes in the health care system and establishing mech...]]>
Thu, 18 May 2023 14:53:13 +0000 EA - Introducing Healthy Futures Global: Join Us in Tackling Syphilis in Pregnancy and Creating Healthier Futures by Nils Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Healthy Futures Global: Join Us in Tackling Syphilis in Pregnancy and Creating Healthier Futures, published by Nils on May 18, 2023 on The Effective Altruism Forum.TLDRHealthy Futures Global is a new global health charity originating from CE’s incubation program trying to prevent mother-to-child transmission of syphilis, founded by Keyur Doolabh (medical doctor with research experience) and Nils Voelker (M Sc in health economics and former strategy consultant)Healthy Futures’ strategy is to elevate syphilis screening rates in antenatal clinics to the high levels of HIV screening rates by replacing HIV-only tests with a dual HIV/syphilis testKeyur and Nils are currently exploring potential pilot countries, and will be in the Philippines and Tanzania soon - they invite you to subscribe to their newsletter and to reach out to volunteer, especially if you are in the Philippines or Tanzania or could connect us to people thereI. Introduction: Healthy Futures Global and its OriginsKeyur and Nils are excited to announce the launch of Healthy Futures Global, a new organisation originating from Charity Entrepreneurship's latest incubation programme, dedicated to making a positive impact on global health.Healthy Futures’ mission is to improve maternal and newborn health by focusing on the elimination of congenital syphilis, a preventable but devastating disease that affects millions of families worldwide.II. The Problem: Congenital Syphilis' Global ImpactSyphilis in pregnancy is a pressing global health issue. It causes approximately 60,000 newborn deaths and 140,000 (almost 10% of global) stillbirths annually, contributing up to 50% of all stillbirths in some regions (1, 2, 3, 4).Antenatal syphilis also causes lifelong disabilities for many surviving children, often going unaddressed in many countries. This disability can include cognitive impairment, vision and hearing deficits, bone deformity, and liver dysfunction. If a pregnant woman has syphilis, her child has a 12% chance of neonatal death, 16% chance of stillbirth, and 25% chance of disability (5).III. The Solution: Test and Treat StrategyThe theory of change (below) is a hybrid between direct intervention, technical assistance and policy work. It involves lobbying governments for policy support, and supporting governments, local NGOs and antenatal clinics to roll out dual HIV/Syphilis tests.The key components of the approach involve rapid testing (RDTs) during antenatal care and immediate treatment with antibiotics (BPG) for positive cases.The main strengths of Healthy Futures are:Cost effectiveness: The strategy has the potential to cost-effectively save lives, prevent disabilities, and reduce the burden on health systems. Our analysis gives us an expected value of $2,400 per life saved and ~10x the cost-effectiveness of direct cash transfers. The medical evidence for positive effects of treating the pregnant woman and her baby is strong (6).Monitoring and evaluation: The direct nature of this intervention offers quick feedback loops, allowing us to re-evaluate our strategy accordingly.Track record: Keyur is a medical doctor and Nils brings a background of consulting for pharmaceutical companies and global health organisations. Their backgrounds are a good fit for this type of intervention.Organisational fuel: Healthy Futures benefits from ongoing mentoring from Charity Entrepreneurship and previously incubated charities.IV. Challenges, Risks and MitigationThe challenges Healthy Futures plans to address are (ranked according to severity x likelihood of occurring - highest on top):Implementational challenges:Sustainability: The long-term success and cost-effectiveness of this intervention depend on governments making the necessary changes in the health care system and establishing mech...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Healthy Futures Global: Join Us in Tackling Syphilis in Pregnancy and Creating Healthier Futures, published by Nils on May 18, 2023 on The Effective Altruism Forum.TLDRHealthy Futures Global is a new global health charity originating from CE’s incubation program trying to prevent mother-to-child transmission of syphilis, founded by Keyur Doolabh (medical doctor with research experience) and Nils Voelker (M Sc in health economics and former strategy consultant)Healthy Futures’ strategy is to elevate syphilis screening rates in antenatal clinics to the high levels of HIV screening rates by replacing HIV-only tests with a dual HIV/syphilis testKeyur and Nils are currently exploring potential pilot countries, and will be in the Philippines and Tanzania soon - they invite you to subscribe to their newsletter and to reach out to volunteer, especially if you are in the Philippines or Tanzania or could connect us to people thereI. Introduction: Healthy Futures Global and its OriginsKeyur and Nils are excited to announce the launch of Healthy Futures Global, a new organisation originating from Charity Entrepreneurship's latest incubation programme, dedicated to making a positive impact on global health.Healthy Futures’ mission is to improve maternal and newborn health by focusing on the elimination of congenital syphilis, a preventable but devastating disease that affects millions of families worldwide.II. The Problem: Congenital Syphilis' Global ImpactSyphilis in pregnancy is a pressing global health issue. It causes approximately 60,000 newborn deaths and 140,000 (almost 10% of global) stillbirths annually, contributing up to 50% of all stillbirths in some regions (1, 2, 3, 4).Antenatal syphilis also causes lifelong disabilities for many surviving children, often going unaddressed in many countries. This disability can include cognitive impairment, vision and hearing deficits, bone deformity, and liver dysfunction. If a pregnant woman has syphilis, her child has a 12% chance of neonatal death, 16% chance of stillbirth, and 25% chance of disability (5).III. The Solution: Test and Treat StrategyThe theory of change (below) is a hybrid between direct intervention, technical assistance and policy work. It involves lobbying governments for policy support, and supporting governments, local NGOs and antenatal clinics to roll out dual HIV/Syphilis tests.The key components of the approach involve rapid testing (RDTs) during antenatal care and immediate treatment with antibiotics (BPG) for positive cases.The main strengths of Healthy Futures are:Cost effectiveness: The strategy has the potential to cost-effectively save lives, prevent disabilities, and reduce the burden on health systems. Our analysis gives us an expected value of $2,400 per life saved and ~10x the cost-effectiveness of direct cash transfers. The medical evidence for positive effects of treating the pregnant woman and her baby is strong (6).Monitoring and evaluation: The direct nature of this intervention offers quick feedback loops, allowing us to re-evaluate our strategy accordingly.Track record: Keyur is a medical doctor and Nils brings a background of consulting for pharmaceutical companies and global health organisations. Their backgrounds are a good fit for this type of intervention.Organisational fuel: Healthy Futures benefits from ongoing mentoring from Charity Entrepreneurship and previously incubated charities.IV. Challenges, Risks and MitigationThe challenges Healthy Futures plans to address are (ranked according to severity x likelihood of occurring - highest on top):Implementational challenges:Sustainability: The long-term success and cost-effectiveness of this intervention depend on governments making the necessary changes in the health care system and establishing mech...]]>
Nils https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:38 None full 5994
yWRJFAmEzofnsuHNK_NL_EA_EA EA - U.S. Regulatory Updates to Benefit-Cost Analysis: Highlights and Encouragement to Submit Public Comments by DannyBressler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: U.S. Regulatory Updates to Benefit-Cost Analysis: Highlights and Encouragement to Submit Public Comments, published by DannyBressler on May 18, 2023 on The Effective Altruism Forum.On April 6, 2023, the U.S. Office of Management and Budget released a draft of the first update to the Federal benefit-cost analysis (BCA) guidelines in 20 years. I saw a nice article in Vox Future Perfect and a nice EA forum post that covered this. These posts cover some of the key points, but I think there are other important updates that might be overlooked. I will highlight some of those below.The key new documents stipulating the new draft BCA guidelines are:Circular A-4The Circular A-4 PreambleCircular A-94Why is the update important? Since 1993, U.S. agencies have been required to conduct a regulatory analysis of all significant regulatory actions (the definition of which was just revised from a rule with an annual impact of more than $100 million to $200 million), which includes an assessment of the costs and benefits of the action. Essentially, all major regulatory actions in the U.S. are subject to BCA, guided by Circular A-4.However, these updated guidance documents are still drafts. They are subject to public comment and peer review and may be changed significantly in light of the feedback received in this process.If you think that some of these highlights or other parts of the new A-4 and A-94 are a good idea (or if you don’t) I’d highly recommend submitting a public comment via the Regulations.gov system (A-4 link and A-94 link). The deadline for public comments is June 6th.Positive comments that support the approach taken in the document are equally and often more useful/impactful than critical comments. If everyone who dislikes something criticizes it, and everyone who supports something doesn’t bother mentioning their support, it looks like everyone who had an opinion opposed it! So, if you like the approach taken (or don’t), please write a comment! Also, note that comments supported with the analytical reasons why the approach is (or is not) justified are generally more useful and taken more seriously.Now on to the highlights:Short-Term Discount RateAs the Vox article mentioned, the new update to Circular A-4 significantly lowers the discount rate to a 1.7% near-term discount rate. This of course is a large change from the previous 3% rate, but this comes from just using a similar method to the previous 2003 A-4 method with more recent Treasury yield data. The preamble has a deeper dive into this calculation and asks the public a number of questions about whether there is a better approach, for those who are interested.The draft Circular A-4 continues to take a “descriptive” approach to discounting in which market data is used to determine the observed tradeoffs people make between money now and money in the future. The discount rate is now lower simply because yields have been steadily declining for the last 20 years.There are good reasons to believe that rates will continue to be low, but it’s also important to emphasize that if rates are not low in the future, then this near-term discount rate will go up again. This is why from the perspective of placing more weight on the future, the next bullets may be more important.Long-Term Declining Discount RateA related important change (and more robust to future interest rate fluctuations) is that A-4 and A-94 endorse the general concept of declining discount rates, and the A-4 preamble proposed and asked for comment on a specific declining discount rate schedule, which discounts the future at progressively lower rates to account for future interest rate uncertainty. This is in line with the approach recommended in the literature based on the best available economics, and also ends up placing larger weight on t...]]>
DannyBressler https://forum.effectivealtruism.org/posts/yWRJFAmEzofnsuHNK/u-s-regulatory-updates-to-benefit-cost-analysis-highlights Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: U.S. Regulatory Updates to Benefit-Cost Analysis: Highlights and Encouragement to Submit Public Comments, published by DannyBressler on May 18, 2023 on The Effective Altruism Forum.On April 6, 2023, the U.S. Office of Management and Budget released a draft of the first update to the Federal benefit-cost analysis (BCA) guidelines in 20 years. I saw a nice article in Vox Future Perfect and a nice EA forum post that covered this. These posts cover some of the key points, but I think there are other important updates that might be overlooked. I will highlight some of those below.The key new documents stipulating the new draft BCA guidelines are:Circular A-4The Circular A-4 PreambleCircular A-94Why is the update important? Since 1993, U.S. agencies have been required to conduct a regulatory analysis of all significant regulatory actions (the definition of which was just revised from a rule with an annual impact of more than $100 million to $200 million), which includes an assessment of the costs and benefits of the action. Essentially, all major regulatory actions in the U.S. are subject to BCA, guided by Circular A-4.However, these updated guidance documents are still drafts. They are subject to public comment and peer review and may be changed significantly in light of the feedback received in this process.If you think that some of these highlights or other parts of the new A-4 and A-94 are a good idea (or if you don’t) I’d highly recommend submitting a public comment via the Regulations.gov system (A-4 link and A-94 link). The deadline for public comments is June 6th.Positive comments that support the approach taken in the document are equally and often more useful/impactful than critical comments. If everyone who dislikes something criticizes it, and everyone who supports something doesn’t bother mentioning their support, it looks like everyone who had an opinion opposed it! So, if you like the approach taken (or don’t), please write a comment! Also, note that comments supported with the analytical reasons why the approach is (or is not) justified are generally more useful and taken more seriously.Now on to the highlights:Short-Term Discount RateAs the Vox article mentioned, the new update to Circular A-4 significantly lowers the discount rate to a 1.7% near-term discount rate. This of course is a large change from the previous 3% rate, but this comes from just using a similar method to the previous 2003 A-4 method with more recent Treasury yield data. The preamble has a deeper dive into this calculation and asks the public a number of questions about whether there is a better approach, for those who are interested.The draft Circular A-4 continues to take a “descriptive” approach to discounting in which market data is used to determine the observed tradeoffs people make between money now and money in the future. The discount rate is now lower simply because yields have been steadily declining for the last 20 years.There are good reasons to believe that rates will continue to be low, but it’s also important to emphasize that if rates are not low in the future, then this near-term discount rate will go up again. This is why from the perspective of placing more weight on the future, the next bullets may be more important.Long-Term Declining Discount RateA related important change (and more robust to future interest rate fluctuations) is that A-4 and A-94 endorse the general concept of declining discount rates, and the A-4 preamble proposed and asked for comment on a specific declining discount rate schedule, which discounts the future at progressively lower rates to account for future interest rate uncertainty. This is in line with the approach recommended in the literature based on the best available economics, and also ends up placing larger weight on t...]]>
Thu, 18 May 2023 14:52:04 +0000 EA - U.S. Regulatory Updates to Benefit-Cost Analysis: Highlights and Encouragement to Submit Public Comments by DannyBressler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: U.S. Regulatory Updates to Benefit-Cost Analysis: Highlights and Encouragement to Submit Public Comments, published by DannyBressler on May 18, 2023 on The Effective Altruism Forum.On April 6, 2023, the U.S. Office of Management and Budget released a draft of the first update to the Federal benefit-cost analysis (BCA) guidelines in 20 years. I saw a nice article in Vox Future Perfect and a nice EA forum post that covered this. These posts cover some of the key points, but I think there are other important updates that might be overlooked. I will highlight some of those below.The key new documents stipulating the new draft BCA guidelines are:Circular A-4The Circular A-4 PreambleCircular A-94Why is the update important? Since 1993, U.S. agencies have been required to conduct a regulatory analysis of all significant regulatory actions (the definition of which was just revised from a rule with an annual impact of more than $100 million to $200 million), which includes an assessment of the costs and benefits of the action. Essentially, all major regulatory actions in the U.S. are subject to BCA, guided by Circular A-4.However, these updated guidance documents are still drafts. They are subject to public comment and peer review and may be changed significantly in light of the feedback received in this process.If you think that some of these highlights or other parts of the new A-4 and A-94 are a good idea (or if you don’t) I’d highly recommend submitting a public comment via the Regulations.gov system (A-4 link and A-94 link). The deadline for public comments is June 6th.Positive comments that support the approach taken in the document are equally and often more useful/impactful than critical comments. If everyone who dislikes something criticizes it, and everyone who supports something doesn’t bother mentioning their support, it looks like everyone who had an opinion opposed it! So, if you like the approach taken (or don’t), please write a comment! Also, note that comments supported with the analytical reasons why the approach is (or is not) justified are generally more useful and taken more seriously.Now on to the highlights:Short-Term Discount RateAs the Vox article mentioned, the new update to Circular A-4 significantly lowers the discount rate to a 1.7% near-term discount rate. This of course is a large change from the previous 3% rate, but this comes from just using a similar method to the previous 2003 A-4 method with more recent Treasury yield data. The preamble has a deeper dive into this calculation and asks the public a number of questions about whether there is a better approach, for those who are interested.The draft Circular A-4 continues to take a “descriptive” approach to discounting in which market data is used to determine the observed tradeoffs people make between money now and money in the future. The discount rate is now lower simply because yields have been steadily declining for the last 20 years.There are good reasons to believe that rates will continue to be low, but it’s also important to emphasize that if rates are not low in the future, then this near-term discount rate will go up again. This is why from the perspective of placing more weight on the future, the next bullets may be more important.Long-Term Declining Discount RateA related important change (and more robust to future interest rate fluctuations) is that A-4 and A-94 endorse the general concept of declining discount rates, and the A-4 preamble proposed and asked for comment on a specific declining discount rate schedule, which discounts the future at progressively lower rates to account for future interest rate uncertainty. This is in line with the approach recommended in the literature based on the best available economics, and also ends up placing larger weight on t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: U.S. Regulatory Updates to Benefit-Cost Analysis: Highlights and Encouragement to Submit Public Comments, published by DannyBressler on May 18, 2023 on The Effective Altruism Forum.On April 6, 2023, the U.S. Office of Management and Budget released a draft of the first update to the Federal benefit-cost analysis (BCA) guidelines in 20 years. I saw a nice article in Vox Future Perfect and a nice EA forum post that covered this. These posts cover some of the key points, but I think there are other important updates that might be overlooked. I will highlight some of those below.The key new documents stipulating the new draft BCA guidelines are:Circular A-4The Circular A-4 PreambleCircular A-94Why is the update important? Since 1993, U.S. agencies have been required to conduct a regulatory analysis of all significant regulatory actions (the definition of which was just revised from a rule with an annual impact of more than $100 million to $200 million), which includes an assessment of the costs and benefits of the action. Essentially, all major regulatory actions in the U.S. are subject to BCA, guided by Circular A-4.However, these updated guidance documents are still drafts. They are subject to public comment and peer review and may be changed significantly in light of the feedback received in this process.If you think that some of these highlights or other parts of the new A-4 and A-94 are a good idea (or if you don’t) I’d highly recommend submitting a public comment via the Regulations.gov system (A-4 link and A-94 link). The deadline for public comments is June 6th.Positive comments that support the approach taken in the document are equally and often more useful/impactful than critical comments. If everyone who dislikes something criticizes it, and everyone who supports something doesn’t bother mentioning their support, it looks like everyone who had an opinion opposed it! So, if you like the approach taken (or don’t), please write a comment! Also, note that comments supported with the analytical reasons why the approach is (or is not) justified are generally more useful and taken more seriously.Now on to the highlights:Short-Term Discount RateAs the Vox article mentioned, the new update to Circular A-4 significantly lowers the discount rate to a 1.7% near-term discount rate. This of course is a large change from the previous 3% rate, but this comes from just using a similar method to the previous 2003 A-4 method with more recent Treasury yield data. The preamble has a deeper dive into this calculation and asks the public a number of questions about whether there is a better approach, for those who are interested.The draft Circular A-4 continues to take a “descriptive” approach to discounting in which market data is used to determine the observed tradeoffs people make between money now and money in the future. The discount rate is now lower simply because yields have been steadily declining for the last 20 years.There are good reasons to believe that rates will continue to be low, but it’s also important to emphasize that if rates are not low in the future, then this near-term discount rate will go up again. This is why from the perspective of placing more weight on the future, the next bullets may be more important.Long-Term Declining Discount RateA related important change (and more robust to future interest rate fluctuations) is that A-4 and A-94 endorse the general concept of declining discount rates, and the A-4 preamble proposed and asked for comment on a specific declining discount rate schedule, which discounts the future at progressively lower rates to account for future interest rate uncertainty. This is in line with the approach recommended in the literature based on the best available economics, and also ends up placing larger weight on t...]]>
DannyBressler https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:52 None full 5998
fFgfkZm7SHWQ2Lwyh_NL_EA_EA EA - Presenting: 2023 Incubated Charities (Round 1) - Charity Entrepreneurship by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Presenting: 2023 Incubated Charities (Round 1) - Charity Entrepreneurship, published by CE on May 18, 2023 on The Effective Altruism Forum.We are thrilled to announce the launch of four new nonprofit organizations through our February/March 2023 Incubation Program.Executive summaryThe 2023 Round 1 incubated charities are:Animal Policy International - Ensuring animal welfare standards are upheld in international trade policyAnsh - Empowering mothers to save newborn lives by building healthcare capacity for adoption of Kangaroo CareHealthy Futures Global - Preventing mother-to-child transmission of syphilis through testing and treatmentHealthLearn - Providing the world’s best online training to health workers in developing countriesTwo more organizations got started during the program, but are not officially incubated by CE. We believe that the interventions, chosen by the solo founders, are promising (one was recommended by us as a top idea). We have provided support to both organizations through mentorship and benefits similar to those offered to our incubated projects.These organizations are:The Mission Motor - Building a more evidence-driven animal cause area by training and supporting organizations to use monitoring and evaluation to improve the impact of their interventionsUpstream Policies - Driving responsible fishing practices by championing bait fish prohibitionContext: The Charity Entrepreneurship Incubation Program February/March 2023The February/March 2023 program focused on global health and animal advocacy. Our generous donors from the CE Seed Network have enabled us to provide these initiatives with $590,000 in grant funding to kickstart their interventions.In addition to our seed grants, we are dedicated to providing our founders with comprehensive support. This includes continuous mentorship, operational assistance, free co-working spaces at our London office, and access to an ever-growing network of founders, donors, and highly specialized mentors. We have offered several tailored safety nets for those who have decided not to found a charity this program, such as career mentorship, connections to job opportunities, a two-month stipend, or another chance to found a charity through one of our upcoming programs. Our aim is to ensure that all program participants pursue high-impact careers, regardless of whether they found a charity in the given round.We are also pleased to share with you a recently-published video, which showcases program participants sharing their insights on the challenges and benefits of the program. They discuss their motivations for applying, as well as what they found most useful and enjoyable. The footage was filmed during an in-person week held at our London office, and we believe it provides valuable insights into what makes our program unique. We hope you take a moment to watch it.Feel free to learn more about the program here. The next application phase will start on July 10, and close on September 30, 2023. You'll have the opportunity to apply for both the February/March 2024 and July/August 2024 Incubation Programs. For the February/March 2024 program, our focus will be on: mass-media interventions, and preventative animal advocacy. To receive notifications when we start accepting applications, sign up here.Our new charities in detailANIMAL POLICY INTERNATIONALCo-founders and Co-Executive Directors: Mandy Carter, Rainer KravetsWebsite: animalpolicyinternational.orgEmail address: info@animalpolicyinternational.orgLinkedInCE Incubation Grant: $110,000Description of the interventionAnimal Policy International is working with policymakers in regions with higher levels of animal welfare legislation to advocate for responsible imports that are in line with domestic laws. By applying equal standards, they aim ...]]>
CE https://forum.effectivealtruism.org/posts/fFgfkZm7SHWQ2Lwyh/presenting-2023-incubated-charities-round-1-charity Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Presenting: 2023 Incubated Charities (Round 1) - Charity Entrepreneurship, published by CE on May 18, 2023 on The Effective Altruism Forum.We are thrilled to announce the launch of four new nonprofit organizations through our February/March 2023 Incubation Program.Executive summaryThe 2023 Round 1 incubated charities are:Animal Policy International - Ensuring animal welfare standards are upheld in international trade policyAnsh - Empowering mothers to save newborn lives by building healthcare capacity for adoption of Kangaroo CareHealthy Futures Global - Preventing mother-to-child transmission of syphilis through testing and treatmentHealthLearn - Providing the world’s best online training to health workers in developing countriesTwo more organizations got started during the program, but are not officially incubated by CE. We believe that the interventions, chosen by the solo founders, are promising (one was recommended by us as a top idea). We have provided support to both organizations through mentorship and benefits similar to those offered to our incubated projects.These organizations are:The Mission Motor - Building a more evidence-driven animal cause area by training and supporting organizations to use monitoring and evaluation to improve the impact of their interventionsUpstream Policies - Driving responsible fishing practices by championing bait fish prohibitionContext: The Charity Entrepreneurship Incubation Program February/March 2023The February/March 2023 program focused on global health and animal advocacy. Our generous donors from the CE Seed Network have enabled us to provide these initiatives with $590,000 in grant funding to kickstart their interventions.In addition to our seed grants, we are dedicated to providing our founders with comprehensive support. This includes continuous mentorship, operational assistance, free co-working spaces at our London office, and access to an ever-growing network of founders, donors, and highly specialized mentors. We have offered several tailored safety nets for those who have decided not to found a charity this program, such as career mentorship, connections to job opportunities, a two-month stipend, or another chance to found a charity through one of our upcoming programs. Our aim is to ensure that all program participants pursue high-impact careers, regardless of whether they found a charity in the given round.We are also pleased to share with you a recently-published video, which showcases program participants sharing their insights on the challenges and benefits of the program. They discuss their motivations for applying, as well as what they found most useful and enjoyable. The footage was filmed during an in-person week held at our London office, and we believe it provides valuable insights into what makes our program unique. We hope you take a moment to watch it.Feel free to learn more about the program here. The next application phase will start on July 10, and close on September 30, 2023. You'll have the opportunity to apply for both the February/March 2024 and July/August 2024 Incubation Programs. For the February/March 2024 program, our focus will be on: mass-media interventions, and preventative animal advocacy. To receive notifications when we start accepting applications, sign up here.Our new charities in detailANIMAL POLICY INTERNATIONALCo-founders and Co-Executive Directors: Mandy Carter, Rainer KravetsWebsite: animalpolicyinternational.orgEmail address: info@animalpolicyinternational.orgLinkedInCE Incubation Grant: $110,000Description of the interventionAnimal Policy International is working with policymakers in regions with higher levels of animal welfare legislation to advocate for responsible imports that are in line with domestic laws. By applying equal standards, they aim ...]]>
Thu, 18 May 2023 10:55:37 +0000 EA - Presenting: 2023 Incubated Charities (Round 1) - Charity Entrepreneurship by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Presenting: 2023 Incubated Charities (Round 1) - Charity Entrepreneurship, published by CE on May 18, 2023 on The Effective Altruism Forum.We are thrilled to announce the launch of four new nonprofit organizations through our February/March 2023 Incubation Program.Executive summaryThe 2023 Round 1 incubated charities are:Animal Policy International - Ensuring animal welfare standards are upheld in international trade policyAnsh - Empowering mothers to save newborn lives by building healthcare capacity for adoption of Kangaroo CareHealthy Futures Global - Preventing mother-to-child transmission of syphilis through testing and treatmentHealthLearn - Providing the world’s best online training to health workers in developing countriesTwo more organizations got started during the program, but are not officially incubated by CE. We believe that the interventions, chosen by the solo founders, are promising (one was recommended by us as a top idea). We have provided support to both organizations through mentorship and benefits similar to those offered to our incubated projects.These organizations are:The Mission Motor - Building a more evidence-driven animal cause area by training and supporting organizations to use monitoring and evaluation to improve the impact of their interventionsUpstream Policies - Driving responsible fishing practices by championing bait fish prohibitionContext: The Charity Entrepreneurship Incubation Program February/March 2023The February/March 2023 program focused on global health and animal advocacy. Our generous donors from the CE Seed Network have enabled us to provide these initiatives with $590,000 in grant funding to kickstart their interventions.In addition to our seed grants, we are dedicated to providing our founders with comprehensive support. This includes continuous mentorship, operational assistance, free co-working spaces at our London office, and access to an ever-growing network of founders, donors, and highly specialized mentors. We have offered several tailored safety nets for those who have decided not to found a charity this program, such as career mentorship, connections to job opportunities, a two-month stipend, or another chance to found a charity through one of our upcoming programs. Our aim is to ensure that all program participants pursue high-impact careers, regardless of whether they found a charity in the given round.We are also pleased to share with you a recently-published video, which showcases program participants sharing their insights on the challenges and benefits of the program. They discuss their motivations for applying, as well as what they found most useful and enjoyable. The footage was filmed during an in-person week held at our London office, and we believe it provides valuable insights into what makes our program unique. We hope you take a moment to watch it.Feel free to learn more about the program here. The next application phase will start on July 10, and close on September 30, 2023. You'll have the opportunity to apply for both the February/March 2024 and July/August 2024 Incubation Programs. For the February/March 2024 program, our focus will be on: mass-media interventions, and preventative animal advocacy. To receive notifications when we start accepting applications, sign up here.Our new charities in detailANIMAL POLICY INTERNATIONALCo-founders and Co-Executive Directors: Mandy Carter, Rainer KravetsWebsite: animalpolicyinternational.orgEmail address: info@animalpolicyinternational.orgLinkedInCE Incubation Grant: $110,000Description of the interventionAnimal Policy International is working with policymakers in regions with higher levels of animal welfare legislation to advocate for responsible imports that are in line with domestic laws. By applying equal standards, they aim ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Presenting: 2023 Incubated Charities (Round 1) - Charity Entrepreneurship, published by CE on May 18, 2023 on The Effective Altruism Forum.We are thrilled to announce the launch of four new nonprofit organizations through our February/March 2023 Incubation Program.Executive summaryThe 2023 Round 1 incubated charities are:Animal Policy International - Ensuring animal welfare standards are upheld in international trade policyAnsh - Empowering mothers to save newborn lives by building healthcare capacity for adoption of Kangaroo CareHealthy Futures Global - Preventing mother-to-child transmission of syphilis through testing and treatmentHealthLearn - Providing the world’s best online training to health workers in developing countriesTwo more organizations got started during the program, but are not officially incubated by CE. We believe that the interventions, chosen by the solo founders, are promising (one was recommended by us as a top idea). We have provided support to both organizations through mentorship and benefits similar to those offered to our incubated projects.These organizations are:The Mission Motor - Building a more evidence-driven animal cause area by training and supporting organizations to use monitoring and evaluation to improve the impact of their interventionsUpstream Policies - Driving responsible fishing practices by championing bait fish prohibitionContext: The Charity Entrepreneurship Incubation Program February/March 2023The February/March 2023 program focused on global health and animal advocacy. Our generous donors from the CE Seed Network have enabled us to provide these initiatives with $590,000 in grant funding to kickstart their interventions.In addition to our seed grants, we are dedicated to providing our founders with comprehensive support. This includes continuous mentorship, operational assistance, free co-working spaces at our London office, and access to an ever-growing network of founders, donors, and highly specialized mentors. We have offered several tailored safety nets for those who have decided not to found a charity this program, such as career mentorship, connections to job opportunities, a two-month stipend, or another chance to found a charity through one of our upcoming programs. Our aim is to ensure that all program participants pursue high-impact careers, regardless of whether they found a charity in the given round.We are also pleased to share with you a recently-published video, which showcases program participants sharing their insights on the challenges and benefits of the program. They discuss their motivations for applying, as well as what they found most useful and enjoyable. The footage was filmed during an in-person week held at our London office, and we believe it provides valuable insights into what makes our program unique. We hope you take a moment to watch it.Feel free to learn more about the program here. The next application phase will start on July 10, and close on September 30, 2023. You'll have the opportunity to apply for both the February/March 2024 and July/August 2024 Incubation Programs. For the February/March 2024 program, our focus will be on: mass-media interventions, and preventative animal advocacy. To receive notifications when we start accepting applications, sign up here.Our new charities in detailANIMAL POLICY INTERNATIONALCo-founders and Co-Executive Directors: Mandy Carter, Rainer KravetsWebsite: animalpolicyinternational.orgEmail address: info@animalpolicyinternational.orgLinkedInCE Incubation Grant: $110,000Description of the interventionAnimal Policy International is working with policymakers in regions with higher levels of animal welfare legislation to advocate for responsible imports that are in line with domestic laws. By applying equal standards, they aim ...]]>
CE https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 22:22 None full 5993
yLKhz7LgqKJae84PM_NL_EA_EA EA - Announcing the Animal Welfare Library 🦉 by arvomm Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Animal Welfare Library 🦉, published by arvomm on May 17, 2023 on The Effective Altruism Forum.TL;DRIntroducing the Animal Welfare Library (AWL🦉): a web repository of animal welfare resources. We hope the library is useful for:People who want to learn about animal welfare.More advanced users seeking references or organisations related to animal ethics and effective animal advocacy projects.The library contains:BooksArticlesOrganisationsVideos and FilmsRepositoriesAdvocatesNewsPodcastsAnd more, like newsletters, coming soon.We want AWL to be a link you can easily share with someone wanting to learn more about animal welfare. At 212 entries right now, the library is still expanding and is by no means exhaustive (suggest additions here or in the comments!).We found having a searchable and interactive overview of what is out there really helpful, and we hope you do too! We really value any feedback you might have!Our StoryToday we just launched the Animal Welfare Library or AWL (pronounced owl /aʊl/ 🦉!). AWL is our answer to the question "what is the go-to place for finding high-quality animal welfare resources?". Here's our story.Arvo: When I began my journey to learn more about animal welfare, I realised that there was an overwhelming amount of information on the subject, varying substantially in quality. Over the months and years that followed, I came across several websites and organisations that seemed to be doing incredible work and sharing valuable insights, and I frequently found myself wishing I had known about these resources from the start. I also started to see the interdisciplinary nature of the endeavour which made me realise that it would be particularly beneficial to develop a tool to catalogue some of the knowledge we possess and build a home for a centralised repository of valuable resources.Eleos: The plight of badly-off humans and other animals has been a priority of mine for many years. There has always been a sense of urgency behind my thinking: to me, practical philosophy isn’t simply a set of interesting puzzles, but a fundamentally important enterprise to make the world a better place. A few years ago, I embarked on my path to study how I could apply my compassion more systematically in a way that helps me be more effective at making this world a better place. In summer 2021, I started gathering resources on animal ethics and helping animals that would help others make a difference, which resulted in this compilation (and this forum post) that would later help ground this project.In 2022, we met and found we had an aligned vision for a project like AWL. We joined forces and decided to make the library happen.This is how this project was born. This website is the resource we wish we had had in our hands when we started learning about animal welfare and humanity’s role in helping end the neglected suffering of millions of animals.Go to the library here: Animal Welfare Library.Structure of the SiteThe website is structured into home, library and act now pages. (We are also drafting a `why care’ page summarising key arguments in the animal welfare space). The home page has some highlighted resources and organisations and it prompts visitors to explore the library. The Library has all the resources we compiled and the options to filter and search (desktop only) for specific items, themes or key terms. The act now is split into four parts: career (which redirects to Animal Advocacy Careers), donations (redirecting to Animal Charity Evaluators), expanding our social influence and eating plant-based (each of these last two redirects to a subset of relevant resources within the library). When possible, we tried to keep things minimalistic.FeedbackThis is all still work in progress - we wanted to put it out there be...]]>
arvomm https://forum.effectivealtruism.org/posts/yLKhz7LgqKJae84PM/announcing-the-animal-welfare-library Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Animal Welfare Library 🦉, published by arvomm on May 17, 2023 on The Effective Altruism Forum.TL;DRIntroducing the Animal Welfare Library (AWL🦉): a web repository of animal welfare resources. We hope the library is useful for:People who want to learn about animal welfare.More advanced users seeking references or organisations related to animal ethics and effective animal advocacy projects.The library contains:BooksArticlesOrganisationsVideos and FilmsRepositoriesAdvocatesNewsPodcastsAnd more, like newsletters, coming soon.We want AWL to be a link you can easily share with someone wanting to learn more about animal welfare. At 212 entries right now, the library is still expanding and is by no means exhaustive (suggest additions here or in the comments!).We found having a searchable and interactive overview of what is out there really helpful, and we hope you do too! We really value any feedback you might have!Our StoryToday we just launched the Animal Welfare Library or AWL (pronounced owl /aʊl/ 🦉!). AWL is our answer to the question "what is the go-to place for finding high-quality animal welfare resources?". Here's our story.Arvo: When I began my journey to learn more about animal welfare, I realised that there was an overwhelming amount of information on the subject, varying substantially in quality. Over the months and years that followed, I came across several websites and organisations that seemed to be doing incredible work and sharing valuable insights, and I frequently found myself wishing I had known about these resources from the start. I also started to see the interdisciplinary nature of the endeavour which made me realise that it would be particularly beneficial to develop a tool to catalogue some of the knowledge we possess and build a home for a centralised repository of valuable resources.Eleos: The plight of badly-off humans and other animals has been a priority of mine for many years. There has always been a sense of urgency behind my thinking: to me, practical philosophy isn’t simply a set of interesting puzzles, but a fundamentally important enterprise to make the world a better place. A few years ago, I embarked on my path to study how I could apply my compassion more systematically in a way that helps me be more effective at making this world a better place. In summer 2021, I started gathering resources on animal ethics and helping animals that would help others make a difference, which resulted in this compilation (and this forum post) that would later help ground this project.In 2022, we met and found we had an aligned vision for a project like AWL. We joined forces and decided to make the library happen.This is how this project was born. This website is the resource we wish we had had in our hands when we started learning about animal welfare and humanity’s role in helping end the neglected suffering of millions of animals.Go to the library here: Animal Welfare Library.Structure of the SiteThe website is structured into home, library and act now pages. (We are also drafting a `why care’ page summarising key arguments in the animal welfare space). The home page has some highlighted resources and organisations and it prompts visitors to explore the library. The Library has all the resources we compiled and the options to filter and search (desktop only) for specific items, themes or key terms. The act now is split into four parts: career (which redirects to Animal Advocacy Careers), donations (redirecting to Animal Charity Evaluators), expanding our social influence and eating plant-based (each of these last two redirects to a subset of relevant resources within the library). When possible, we tried to keep things minimalistic.FeedbackThis is all still work in progress - we wanted to put it out there be...]]>
Thu, 18 May 2023 05:20:30 +0000 EA - Announcing the Animal Welfare Library 🦉 by arvomm Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Animal Welfare Library 🦉, published by arvomm on May 17, 2023 on The Effective Altruism Forum.TL;DRIntroducing the Animal Welfare Library (AWL🦉): a web repository of animal welfare resources. We hope the library is useful for:People who want to learn about animal welfare.More advanced users seeking references or organisations related to animal ethics and effective animal advocacy projects.The library contains:BooksArticlesOrganisationsVideos and FilmsRepositoriesAdvocatesNewsPodcastsAnd more, like newsletters, coming soon.We want AWL to be a link you can easily share with someone wanting to learn more about animal welfare. At 212 entries right now, the library is still expanding and is by no means exhaustive (suggest additions here or in the comments!).We found having a searchable and interactive overview of what is out there really helpful, and we hope you do too! We really value any feedback you might have!Our StoryToday we just launched the Animal Welfare Library or AWL (pronounced owl /aʊl/ 🦉!). AWL is our answer to the question "what is the go-to place for finding high-quality animal welfare resources?". Here's our story.Arvo: When I began my journey to learn more about animal welfare, I realised that there was an overwhelming amount of information on the subject, varying substantially in quality. Over the months and years that followed, I came across several websites and organisations that seemed to be doing incredible work and sharing valuable insights, and I frequently found myself wishing I had known about these resources from the start. I also started to see the interdisciplinary nature of the endeavour which made me realise that it would be particularly beneficial to develop a tool to catalogue some of the knowledge we possess and build a home for a centralised repository of valuable resources.Eleos: The plight of badly-off humans and other animals has been a priority of mine for many years. There has always been a sense of urgency behind my thinking: to me, practical philosophy isn’t simply a set of interesting puzzles, but a fundamentally important enterprise to make the world a better place. A few years ago, I embarked on my path to study how I could apply my compassion more systematically in a way that helps me be more effective at making this world a better place. In summer 2021, I started gathering resources on animal ethics and helping animals that would help others make a difference, which resulted in this compilation (and this forum post) that would later help ground this project.In 2022, we met and found we had an aligned vision for a project like AWL. We joined forces and decided to make the library happen.This is how this project was born. This website is the resource we wish we had had in our hands when we started learning about animal welfare and humanity’s role in helping end the neglected suffering of millions of animals.Go to the library here: Animal Welfare Library.Structure of the SiteThe website is structured into home, library and act now pages. (We are also drafting a `why care’ page summarising key arguments in the animal welfare space). The home page has some highlighted resources and organisations and it prompts visitors to explore the library. The Library has all the resources we compiled and the options to filter and search (desktop only) for specific items, themes or key terms. The act now is split into four parts: career (which redirects to Animal Advocacy Careers), donations (redirecting to Animal Charity Evaluators), expanding our social influence and eating plant-based (each of these last two redirects to a subset of relevant resources within the library). When possible, we tried to keep things minimalistic.FeedbackThis is all still work in progress - we wanted to put it out there be...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Animal Welfare Library 🦉, published by arvomm on May 17, 2023 on The Effective Altruism Forum.TL;DRIntroducing the Animal Welfare Library (AWL🦉): a web repository of animal welfare resources. We hope the library is useful for:People who want to learn about animal welfare.More advanced users seeking references or organisations related to animal ethics and effective animal advocacy projects.The library contains:BooksArticlesOrganisationsVideos and FilmsRepositoriesAdvocatesNewsPodcastsAnd more, like newsletters, coming soon.We want AWL to be a link you can easily share with someone wanting to learn more about animal welfare. At 212 entries right now, the library is still expanding and is by no means exhaustive (suggest additions here or in the comments!).We found having a searchable and interactive overview of what is out there really helpful, and we hope you do too! We really value any feedback you might have!Our StoryToday we just launched the Animal Welfare Library or AWL (pronounced owl /aʊl/ 🦉!). AWL is our answer to the question "what is the go-to place for finding high-quality animal welfare resources?". Here's our story.Arvo: When I began my journey to learn more about animal welfare, I realised that there was an overwhelming amount of information on the subject, varying substantially in quality. Over the months and years that followed, I came across several websites and organisations that seemed to be doing incredible work and sharing valuable insights, and I frequently found myself wishing I had known about these resources from the start. I also started to see the interdisciplinary nature of the endeavour which made me realise that it would be particularly beneficial to develop a tool to catalogue some of the knowledge we possess and build a home for a centralised repository of valuable resources.Eleos: The plight of badly-off humans and other animals has been a priority of mine for many years. There has always been a sense of urgency behind my thinking: to me, practical philosophy isn’t simply a set of interesting puzzles, but a fundamentally important enterprise to make the world a better place. A few years ago, I embarked on my path to study how I could apply my compassion more systematically in a way that helps me be more effective at making this world a better place. In summer 2021, I started gathering resources on animal ethics and helping animals that would help others make a difference, which resulted in this compilation (and this forum post) that would later help ground this project.In 2022, we met and found we had an aligned vision for a project like AWL. We joined forces and decided to make the library happen.This is how this project was born. This website is the resource we wish we had had in our hands when we started learning about animal welfare and humanity’s role in helping end the neglected suffering of millions of animals.Go to the library here: Animal Welfare Library.Structure of the SiteThe website is structured into home, library and act now pages. (We are also drafting a `why care’ page summarising key arguments in the animal welfare space). The home page has some highlighted resources and organisations and it prompts visitors to explore the library. The Library has all the resources we compiled and the options to filter and search (desktop only) for specific items, themes or key terms. The act now is split into four parts: career (which redirects to Animal Advocacy Careers), donations (redirecting to Animal Charity Evaluators), expanding our social influence and eating plant-based (each of these last two redirects to a subset of relevant resources within the library). When possible, we tried to keep things minimalistic.FeedbackThis is all still work in progress - we wanted to put it out there be...]]>
arvomm https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:09 None full 5992
j9yT9Sizu2sjNuygR_NL_EA_EA EA - Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans) by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans), published by titotal on May 17, 2023 on The Effective Altruism Forum.[Disclaimer: While I have dabbled in machine learning, I do not consider myself an expert.]IntroductionWhen introducing newcomers to the idea of AI existential risk, a typical story of destruction will involve some variation of the “paperclip maximiser” story. The idea is that some company wishes to use an AGI to perform some seemingly simple and innocuous task, such as producing paperclips. So they set the AI with a goal function of maximizing paperclips. But, foolishly, they haven’t realized that taking over the world and killing all the humans would allow it to maximize paperclips, so it deceives them into thinking it’s friendly until it gets a chance to defeat humanity and tile the universe with paperclips (or wiggles that the AI interprets as paperclips under it's own logic).What is often not stated in these stories is an underlying assumption about the structure of the AI in question. These AI’s are fixed goal utility function maximisers, hellbent on making an arbitrary number as high as possible, by any means necessary. I’ll refer to this model as “fanatical” AI, although I’ve seen other posts refer to them as “wrapper” AI, referring to their overall structure.Increasingly, the assumption that AGI’s will be fanatical in nature is being challenged. I think this is reflected in the “orthodox” and “reform” Ai split. This post was mostly inspired by Nostalgebraist's excellent “why optimise for fixed goals” post, although I think there is some crossover with the arguments of the “shard theory” folks.Humans are not fanatical AI. They do have goals, but the goals change over time, and can only loosely be described by mathematical functions. Traditional programming does not fit this description, being merely a set of instructions executed sequentially. None of the massively successful recent machine-learning based AI fits this description, as I will explain in the next section. In fact, nobody even knows how to make such a fanatical AI.These days AI is being designed by trial-and-error techniques. Instead of hand designing every action it makes, we’re jumbling it's neurons around and letting it try stuff until it finds something that works. The inner working of even a very basic machine learning model is somewhat opaque to us. What is ultimately guiding the AI development is some form of evolution: the strategies that work survive, the strategies that don’t get discarded.This is ultimately why I do not believe that most AI will end up as fanatical maximisers. Because in the world that an AI grows up in, trying to be a fanatical global optimizer is likely to get you killed.This post relies on two assumptions: that there will be a fairly slow takeoff of AI intelligence, and that world takeover is not trivially easy. I believe both to be true, but I won't cover my reasons here for the sake of brevity.In part 1, I flesh out the argument for why selection pressures will prevent most AI from becoming fanatical. In part 2, I will point out some ways that catastrophe could still occur, if AI is trained by fanatical humans.Part 1: Why AI won't be fanatical maximisersGlobal and local maxima[I've tried to keep this to machine learning 101 for easy understanding].Machine learning, as it exists today, can be thought of as an efficient trial and error machine. It contains a bazillion different parameters, such as the “weights” of a neural network, that go into one gigantic linear algebra equation. You throw in an input, compute the output of the equation, and “score” the result based on some goal function G. So if you were training an object recognition program, G might be “number of objects correctly identified”. ...]]>
titotal https://forum.effectivealtruism.org/posts/j9yT9Sizu2sjNuygR/why-agi-systems-will-not-be-fanatical-maximisers-unless Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans), published by titotal on May 17, 2023 on The Effective Altruism Forum.[Disclaimer: While I have dabbled in machine learning, I do not consider myself an expert.]IntroductionWhen introducing newcomers to the idea of AI existential risk, a typical story of destruction will involve some variation of the “paperclip maximiser” story. The idea is that some company wishes to use an AGI to perform some seemingly simple and innocuous task, such as producing paperclips. So they set the AI with a goal function of maximizing paperclips. But, foolishly, they haven’t realized that taking over the world and killing all the humans would allow it to maximize paperclips, so it deceives them into thinking it’s friendly until it gets a chance to defeat humanity and tile the universe with paperclips (or wiggles that the AI interprets as paperclips under it's own logic).What is often not stated in these stories is an underlying assumption about the structure of the AI in question. These AI’s are fixed goal utility function maximisers, hellbent on making an arbitrary number as high as possible, by any means necessary. I’ll refer to this model as “fanatical” AI, although I’ve seen other posts refer to them as “wrapper” AI, referring to their overall structure.Increasingly, the assumption that AGI’s will be fanatical in nature is being challenged. I think this is reflected in the “orthodox” and “reform” Ai split. This post was mostly inspired by Nostalgebraist's excellent “why optimise for fixed goals” post, although I think there is some crossover with the arguments of the “shard theory” folks.Humans are not fanatical AI. They do have goals, but the goals change over time, and can only loosely be described by mathematical functions. Traditional programming does not fit this description, being merely a set of instructions executed sequentially. None of the massively successful recent machine-learning based AI fits this description, as I will explain in the next section. In fact, nobody even knows how to make such a fanatical AI.These days AI is being designed by trial-and-error techniques. Instead of hand designing every action it makes, we’re jumbling it's neurons around and letting it try stuff until it finds something that works. The inner working of even a very basic machine learning model is somewhat opaque to us. What is ultimately guiding the AI development is some form of evolution: the strategies that work survive, the strategies that don’t get discarded.This is ultimately why I do not believe that most AI will end up as fanatical maximisers. Because in the world that an AI grows up in, trying to be a fanatical global optimizer is likely to get you killed.This post relies on two assumptions: that there will be a fairly slow takeoff of AI intelligence, and that world takeover is not trivially easy. I believe both to be true, but I won't cover my reasons here for the sake of brevity.In part 1, I flesh out the argument for why selection pressures will prevent most AI from becoming fanatical. In part 2, I will point out some ways that catastrophe could still occur, if AI is trained by fanatical humans.Part 1: Why AI won't be fanatical maximisersGlobal and local maxima[I've tried to keep this to machine learning 101 for easy understanding].Machine learning, as it exists today, can be thought of as an efficient trial and error machine. It contains a bazillion different parameters, such as the “weights” of a neural network, that go into one gigantic linear algebra equation. You throw in an input, compute the output of the equation, and “score” the result based on some goal function G. So if you were training an object recognition program, G might be “number of objects correctly identified”. ...]]>
Wed, 17 May 2023 23:16:15 +0000 EA - Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans) by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans), published by titotal on May 17, 2023 on The Effective Altruism Forum.[Disclaimer: While I have dabbled in machine learning, I do not consider myself an expert.]IntroductionWhen introducing newcomers to the idea of AI existential risk, a typical story of destruction will involve some variation of the “paperclip maximiser” story. The idea is that some company wishes to use an AGI to perform some seemingly simple and innocuous task, such as producing paperclips. So they set the AI with a goal function of maximizing paperclips. But, foolishly, they haven’t realized that taking over the world and killing all the humans would allow it to maximize paperclips, so it deceives them into thinking it’s friendly until it gets a chance to defeat humanity and tile the universe with paperclips (or wiggles that the AI interprets as paperclips under it's own logic).What is often not stated in these stories is an underlying assumption about the structure of the AI in question. These AI’s are fixed goal utility function maximisers, hellbent on making an arbitrary number as high as possible, by any means necessary. I’ll refer to this model as “fanatical” AI, although I’ve seen other posts refer to them as “wrapper” AI, referring to their overall structure.Increasingly, the assumption that AGI’s will be fanatical in nature is being challenged. I think this is reflected in the “orthodox” and “reform” Ai split. This post was mostly inspired by Nostalgebraist's excellent “why optimise for fixed goals” post, although I think there is some crossover with the arguments of the “shard theory” folks.Humans are not fanatical AI. They do have goals, but the goals change over time, and can only loosely be described by mathematical functions. Traditional programming does not fit this description, being merely a set of instructions executed sequentially. None of the massively successful recent machine-learning based AI fits this description, as I will explain in the next section. In fact, nobody even knows how to make such a fanatical AI.These days AI is being designed by trial-and-error techniques. Instead of hand designing every action it makes, we’re jumbling it's neurons around and letting it try stuff until it finds something that works. The inner working of even a very basic machine learning model is somewhat opaque to us. What is ultimately guiding the AI development is some form of evolution: the strategies that work survive, the strategies that don’t get discarded.This is ultimately why I do not believe that most AI will end up as fanatical maximisers. Because in the world that an AI grows up in, trying to be a fanatical global optimizer is likely to get you killed.This post relies on two assumptions: that there will be a fairly slow takeoff of AI intelligence, and that world takeover is not trivially easy. I believe both to be true, but I won't cover my reasons here for the sake of brevity.In part 1, I flesh out the argument for why selection pressures will prevent most AI from becoming fanatical. In part 2, I will point out some ways that catastrophe could still occur, if AI is trained by fanatical humans.Part 1: Why AI won't be fanatical maximisersGlobal and local maxima[I've tried to keep this to machine learning 101 for easy understanding].Machine learning, as it exists today, can be thought of as an efficient trial and error machine. It contains a bazillion different parameters, such as the “weights” of a neural network, that go into one gigantic linear algebra equation. You throw in an input, compute the output of the equation, and “score” the result based on some goal function G. So if you were training an object recognition program, G might be “number of objects correctly identified”. ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans), published by titotal on May 17, 2023 on The Effective Altruism Forum.[Disclaimer: While I have dabbled in machine learning, I do not consider myself an expert.]IntroductionWhen introducing newcomers to the idea of AI existential risk, a typical story of destruction will involve some variation of the “paperclip maximiser” story. The idea is that some company wishes to use an AGI to perform some seemingly simple and innocuous task, such as producing paperclips. So they set the AI with a goal function of maximizing paperclips. But, foolishly, they haven’t realized that taking over the world and killing all the humans would allow it to maximize paperclips, so it deceives them into thinking it’s friendly until it gets a chance to defeat humanity and tile the universe with paperclips (or wiggles that the AI interprets as paperclips under it's own logic).What is often not stated in these stories is an underlying assumption about the structure of the AI in question. These AI’s are fixed goal utility function maximisers, hellbent on making an arbitrary number as high as possible, by any means necessary. I’ll refer to this model as “fanatical” AI, although I’ve seen other posts refer to them as “wrapper” AI, referring to their overall structure.Increasingly, the assumption that AGI’s will be fanatical in nature is being challenged. I think this is reflected in the “orthodox” and “reform” Ai split. This post was mostly inspired by Nostalgebraist's excellent “why optimise for fixed goals” post, although I think there is some crossover with the arguments of the “shard theory” folks.Humans are not fanatical AI. They do have goals, but the goals change over time, and can only loosely be described by mathematical functions. Traditional programming does not fit this description, being merely a set of instructions executed sequentially. None of the massively successful recent machine-learning based AI fits this description, as I will explain in the next section. In fact, nobody even knows how to make such a fanatical AI.These days AI is being designed by trial-and-error techniques. Instead of hand designing every action it makes, we’re jumbling it's neurons around and letting it try stuff until it finds something that works. The inner working of even a very basic machine learning model is somewhat opaque to us. What is ultimately guiding the AI development is some form of evolution: the strategies that work survive, the strategies that don’t get discarded.This is ultimately why I do not believe that most AI will end up as fanatical maximisers. Because in the world that an AI grows up in, trying to be a fanatical global optimizer is likely to get you killed.This post relies on two assumptions: that there will be a fairly slow takeoff of AI intelligence, and that world takeover is not trivially easy. I believe both to be true, but I won't cover my reasons here for the sake of brevity.In part 1, I flesh out the argument for why selection pressures will prevent most AI from becoming fanatical. In part 2, I will point out some ways that catastrophe could still occur, if AI is trained by fanatical humans.Part 1: Why AI won't be fanatical maximisersGlobal and local maxima[I've tried to keep this to machine learning 101 for easy understanding].Machine learning, as it exists today, can be thought of as an efficient trial and error machine. It contains a bazillion different parameters, such as the “weights” of a neural network, that go into one gigantic linear algebra equation. You throw in an input, compute the output of the equation, and “score” the result based on some goal function G. So if you were training an object recognition program, G might be “number of objects correctly identified”. ...]]>
titotal https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 24:08 None full 5990
SATv4zyd3LhPzGRcS_NL_EA_EA EA - Introducing The Long Game Project: Tabletop Exercises for a Resilient Tomorrow by Dr Dan Epstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing The Long Game Project: Tabletop Exercises for a Resilient Tomorrow, published by Dr Dan Epstein on May 17, 2023 on The Effective Altruism Forum.“No pilot would dare to fly a commercial airliner without significant training in a flight simulator . yet decision-makers are expected to make critical decisions relying on ‘theory’, ‘experience’, ‘intuition’, ‘gut feeling’, or less” - de Suarez et al 2012.tl;drThe Long Game Project is an organisation focused on helping institutions improve decision-making, enhance team collaboration, and build resilience. We aim to do this by a) Producing tools and resources to improve access to effective tabletop exercising; b) Providing advice, thought leadership and full-suite consultation to organisations interested in tabletop exercises; c) Encouraging a culture of running organisational games and building a community around tabletop games. Follow us on twitter and LinkedIn, play with our beta scenario-generator tool, provide feedback for some cash prizes and spread the word!Set the sceneImagine a world where organisations are prepared for the unexpected, where decision-makers can more confidently navigate crises and create positive outcomes even in the face of chaos because they have regular, simulated experience. Dynamic decision-making under uncertain conditions is a skill that can and should, be practised.Experience matters.Play it now,before you live it later.Roll for Initiative: Introducing The Long Game ProjectOur mission is to help organisations improve decision-making (IIDM), enhance team collaboration, and build resilience using tabletop exercises that are rules-light but experience heavy.Why we existThe world is becoming increasingly complex and unpredictable, with organisations facing various challenges. Institutional decision-making in practice is often ill-equipped to handle such complexity, leading to sub-optimal decision quality and severe failure modes. 80,000 hours lists IIDM as one of the best ways to improve existing levers, interventions and practical suggestions for our long-term future and lists this as a neglected issue. While IIDM is a complex cause area, requiring some disentangling, there are existing levers, interventions and practical suggestions that are under-utilised in practice. The Long Game Project exists to bridge this gap by applying several levers to a diverse range of sectors, helping organisations adapt and thrive in an ever-changing landscape.Our toolkit of levers includes; tabletop exercises, role-playing, future simulations, facilitation, probabilistic thinking games, goal and value alignment, decision design and other methods that combine game design and ideas from behavioural economics, psychology, and organisational theory.Critical Hit: Our organisational aimsBe the place to point to for tabletop scenario advice and tools- Provide expert-level thought leadership and consultation on running serious tabletop exercises.Lower the entry bar to effective tabletop exercising by producing tools, giving guidance and providing expertise.Empower institutions to tackle complex challenges effectively by transforming how they plan, react, and adapt in an increasingly uncertain world.Focus on scenarios of the world's most pressing problems and long-term horizons.Encourage a culture of running organisational games and building a community around tabletop games.Theory of ChangeOur theory of change is rooted in the belief that effective decision-making is a learnable skill. By providing organisations with simulated experience, we aim to create a positive feedback loop in which participants develop stronger decision-making abilities, leading to better outcomes and increased resilience.This, in turn, creates more efficient, effective, and adaptable organisations, ultimately c...]]>
Dr Dan Epstein https://forum.effectivealtruism.org/posts/SATv4zyd3LhPzGRcS/introducing-the-long-game-project-tabletop-exercises-for-a Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing The Long Game Project: Tabletop Exercises for a Resilient Tomorrow, published by Dr Dan Epstein on May 17, 2023 on The Effective Altruism Forum.“No pilot would dare to fly a commercial airliner without significant training in a flight simulator . yet decision-makers are expected to make critical decisions relying on ‘theory’, ‘experience’, ‘intuition’, ‘gut feeling’, or less” - de Suarez et al 2012.tl;drThe Long Game Project is an organisation focused on helping institutions improve decision-making, enhance team collaboration, and build resilience. We aim to do this by a) Producing tools and resources to improve access to effective tabletop exercising; b) Providing advice, thought leadership and full-suite consultation to organisations interested in tabletop exercises; c) Encouraging a culture of running organisational games and building a community around tabletop games. Follow us on twitter and LinkedIn, play with our beta scenario-generator tool, provide feedback for some cash prizes and spread the word!Set the sceneImagine a world where organisations are prepared for the unexpected, where decision-makers can more confidently navigate crises and create positive outcomes even in the face of chaos because they have regular, simulated experience. Dynamic decision-making under uncertain conditions is a skill that can and should, be practised.Experience matters.Play it now,before you live it later.Roll for Initiative: Introducing The Long Game ProjectOur mission is to help organisations improve decision-making (IIDM), enhance team collaboration, and build resilience using tabletop exercises that are rules-light but experience heavy.Why we existThe world is becoming increasingly complex and unpredictable, with organisations facing various challenges. Institutional decision-making in practice is often ill-equipped to handle such complexity, leading to sub-optimal decision quality and severe failure modes. 80,000 hours lists IIDM as one of the best ways to improve existing levers, interventions and practical suggestions for our long-term future and lists this as a neglected issue. While IIDM is a complex cause area, requiring some disentangling, there are existing levers, interventions and practical suggestions that are under-utilised in practice. The Long Game Project exists to bridge this gap by applying several levers to a diverse range of sectors, helping organisations adapt and thrive in an ever-changing landscape.Our toolkit of levers includes; tabletop exercises, role-playing, future simulations, facilitation, probabilistic thinking games, goal and value alignment, decision design and other methods that combine game design and ideas from behavioural economics, psychology, and organisational theory.Critical Hit: Our organisational aimsBe the place to point to for tabletop scenario advice and tools- Provide expert-level thought leadership and consultation on running serious tabletop exercises.Lower the entry bar to effective tabletop exercising by producing tools, giving guidance and providing expertise.Empower institutions to tackle complex challenges effectively by transforming how they plan, react, and adapt in an increasingly uncertain world.Focus on scenarios of the world's most pressing problems and long-term horizons.Encourage a culture of running organisational games and building a community around tabletop games.Theory of ChangeOur theory of change is rooted in the belief that effective decision-making is a learnable skill. By providing organisations with simulated experience, we aim to create a positive feedback loop in which participants develop stronger decision-making abilities, leading to better outcomes and increased resilience.This, in turn, creates more efficient, effective, and adaptable organisations, ultimately c...]]>
Wed, 17 May 2023 23:14:39 +0000 EA - Introducing The Long Game Project: Tabletop Exercises for a Resilient Tomorrow by Dr Dan Epstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing The Long Game Project: Tabletop Exercises for a Resilient Tomorrow, published by Dr Dan Epstein on May 17, 2023 on The Effective Altruism Forum.“No pilot would dare to fly a commercial airliner without significant training in a flight simulator . yet decision-makers are expected to make critical decisions relying on ‘theory’, ‘experience’, ‘intuition’, ‘gut feeling’, or less” - de Suarez et al 2012.tl;drThe Long Game Project is an organisation focused on helping institutions improve decision-making, enhance team collaboration, and build resilience. We aim to do this by a) Producing tools and resources to improve access to effective tabletop exercising; b) Providing advice, thought leadership and full-suite consultation to organisations interested in tabletop exercises; c) Encouraging a culture of running organisational games and building a community around tabletop games. Follow us on twitter and LinkedIn, play with our beta scenario-generator tool, provide feedback for some cash prizes and spread the word!Set the sceneImagine a world where organisations are prepared for the unexpected, where decision-makers can more confidently navigate crises and create positive outcomes even in the face of chaos because they have regular, simulated experience. Dynamic decision-making under uncertain conditions is a skill that can and should, be practised.Experience matters.Play it now,before you live it later.Roll for Initiative: Introducing The Long Game ProjectOur mission is to help organisations improve decision-making (IIDM), enhance team collaboration, and build resilience using tabletop exercises that are rules-light but experience heavy.Why we existThe world is becoming increasingly complex and unpredictable, with organisations facing various challenges. Institutional decision-making in practice is often ill-equipped to handle such complexity, leading to sub-optimal decision quality and severe failure modes. 80,000 hours lists IIDM as one of the best ways to improve existing levers, interventions and practical suggestions for our long-term future and lists this as a neglected issue. While IIDM is a complex cause area, requiring some disentangling, there are existing levers, interventions and practical suggestions that are under-utilised in practice. The Long Game Project exists to bridge this gap by applying several levers to a diverse range of sectors, helping organisations adapt and thrive in an ever-changing landscape.Our toolkit of levers includes; tabletop exercises, role-playing, future simulations, facilitation, probabilistic thinking games, goal and value alignment, decision design and other methods that combine game design and ideas from behavioural economics, psychology, and organisational theory.Critical Hit: Our organisational aimsBe the place to point to for tabletop scenario advice and tools- Provide expert-level thought leadership and consultation on running serious tabletop exercises.Lower the entry bar to effective tabletop exercising by producing tools, giving guidance and providing expertise.Empower institutions to tackle complex challenges effectively by transforming how they plan, react, and adapt in an increasingly uncertain world.Focus on scenarios of the world's most pressing problems and long-term horizons.Encourage a culture of running organisational games and building a community around tabletop games.Theory of ChangeOur theory of change is rooted in the belief that effective decision-making is a learnable skill. By providing organisations with simulated experience, we aim to create a positive feedback loop in which participants develop stronger decision-making abilities, leading to better outcomes and increased resilience.This, in turn, creates more efficient, effective, and adaptable organisations, ultimately c...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing The Long Game Project: Tabletop Exercises for a Resilient Tomorrow, published by Dr Dan Epstein on May 17, 2023 on The Effective Altruism Forum.“No pilot would dare to fly a commercial airliner without significant training in a flight simulator . yet decision-makers are expected to make critical decisions relying on ‘theory’, ‘experience’, ‘intuition’, ‘gut feeling’, or less” - de Suarez et al 2012.tl;drThe Long Game Project is an organisation focused on helping institutions improve decision-making, enhance team collaboration, and build resilience. We aim to do this by a) Producing tools and resources to improve access to effective tabletop exercising; b) Providing advice, thought leadership and full-suite consultation to organisations interested in tabletop exercises; c) Encouraging a culture of running organisational games and building a community around tabletop games. Follow us on twitter and LinkedIn, play with our beta scenario-generator tool, provide feedback for some cash prizes and spread the word!Set the sceneImagine a world where organisations are prepared for the unexpected, where decision-makers can more confidently navigate crises and create positive outcomes even in the face of chaos because they have regular, simulated experience. Dynamic decision-making under uncertain conditions is a skill that can and should, be practised.Experience matters.Play it now,before you live it later.Roll for Initiative: Introducing The Long Game ProjectOur mission is to help organisations improve decision-making (IIDM), enhance team collaboration, and build resilience using tabletop exercises that are rules-light but experience heavy.Why we existThe world is becoming increasingly complex and unpredictable, with organisations facing various challenges. Institutional decision-making in practice is often ill-equipped to handle such complexity, leading to sub-optimal decision quality and severe failure modes. 80,000 hours lists IIDM as one of the best ways to improve existing levers, interventions and practical suggestions for our long-term future and lists this as a neglected issue. While IIDM is a complex cause area, requiring some disentangling, there are existing levers, interventions and practical suggestions that are under-utilised in practice. The Long Game Project exists to bridge this gap by applying several levers to a diverse range of sectors, helping organisations adapt and thrive in an ever-changing landscape.Our toolkit of levers includes; tabletop exercises, role-playing, future simulations, facilitation, probabilistic thinking games, goal and value alignment, decision design and other methods that combine game design and ideas from behavioural economics, psychology, and organisational theory.Critical Hit: Our organisational aimsBe the place to point to for tabletop scenario advice and tools- Provide expert-level thought leadership and consultation on running serious tabletop exercises.Lower the entry bar to effective tabletop exercising by producing tools, giving guidance and providing expertise.Empower institutions to tackle complex challenges effectively by transforming how they plan, react, and adapt in an increasingly uncertain world.Focus on scenarios of the world's most pressing problems and long-term horizons.Encourage a culture of running organisational games and building a community around tabletop games.Theory of ChangeOur theory of change is rooted in the belief that effective decision-making is a learnable skill. By providing organisations with simulated experience, we aim to create a positive feedback loop in which participants develop stronger decision-making abilities, leading to better outcomes and increased resilience.This, in turn, creates more efficient, effective, and adaptable organisations, ultimately c...]]>
Dr Dan Epstein https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:41 None full 5991
pdfmwgGLNMwNAjJdo_NL_EA_EA EA - New CSER Director: Prof Matthew Connelly by HaydnBelfield Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New CSER Director: Prof Matthew Connelly, published by HaydnBelfield on May 17, 2023 on The Effective Altruism Forum.The Centre for the Study of Existential Risk (CSER) at Cambridge University is getting a new Director – Professor Matthew Connelly will be our Director from July 2023. Seán Ó hÉigeartaigh, our Interim Director, is staying at CSER and will be focussing more on AI governance and safety research.Prof Connelly is currently a Professor of international and global history at Columbia University, and for the last seven years has been Co-Director of its social science research centre, the Institute for Social and Economic Research and Policy. He has significant policy experience as a consultant for the Gates Foundation, the World Bank, and the Department of Homeland Security and Senate committees. He is the author of The Declassification Engine: What History Reveals about America’s Top Secrets and Fatal Misconception (an Economist and Financial Times book of the year). His next book is on “the history of the end of the world”, a subject on which he has taught multiple courses.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
HaydnBelfield https://forum.effectivealtruism.org/posts/pdfmwgGLNMwNAjJdo/new-cser-director-prof-matthew-connelly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New CSER Director: Prof Matthew Connelly, published by HaydnBelfield on May 17, 2023 on The Effective Altruism Forum.The Centre for the Study of Existential Risk (CSER) at Cambridge University is getting a new Director – Professor Matthew Connelly will be our Director from July 2023. Seán Ó hÉigeartaigh, our Interim Director, is staying at CSER and will be focussing more on AI governance and safety research.Prof Connelly is currently a Professor of international and global history at Columbia University, and for the last seven years has been Co-Director of its social science research centre, the Institute for Social and Economic Research and Policy. He has significant policy experience as a consultant for the Gates Foundation, the World Bank, and the Department of Homeland Security and Senate committees. He is the author of The Declassification Engine: What History Reveals about America’s Top Secrets and Fatal Misconception (an Economist and Financial Times book of the year). His next book is on “the history of the end of the world”, a subject on which he has taught multiple courses.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 17 May 2023 22:54:45 +0000 EA - New CSER Director: Prof Matthew Connelly by HaydnBelfield Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New CSER Director: Prof Matthew Connelly, published by HaydnBelfield on May 17, 2023 on The Effective Altruism Forum.The Centre for the Study of Existential Risk (CSER) at Cambridge University is getting a new Director – Professor Matthew Connelly will be our Director from July 2023. Seán Ó hÉigeartaigh, our Interim Director, is staying at CSER and will be focussing more on AI governance and safety research.Prof Connelly is currently a Professor of international and global history at Columbia University, and for the last seven years has been Co-Director of its social science research centre, the Institute for Social and Economic Research and Policy. He has significant policy experience as a consultant for the Gates Foundation, the World Bank, and the Department of Homeland Security and Senate committees. He is the author of The Declassification Engine: What History Reveals about America’s Top Secrets and Fatal Misconception (an Economist and Financial Times book of the year). His next book is on “the history of the end of the world”, a subject on which he has taught multiple courses.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New CSER Director: Prof Matthew Connelly, published by HaydnBelfield on May 17, 2023 on The Effective Altruism Forum.The Centre for the Study of Existential Risk (CSER) at Cambridge University is getting a new Director – Professor Matthew Connelly will be our Director from July 2023. Seán Ó hÉigeartaigh, our Interim Director, is staying at CSER and will be focussing more on AI governance and safety research.Prof Connelly is currently a Professor of international and global history at Columbia University, and for the last seven years has been Co-Director of its social science research centre, the Institute for Social and Economic Research and Policy. He has significant policy experience as a consultant for the Gates Foundation, the World Bank, and the Department of Homeland Security and Senate committees. He is the author of The Declassification Engine: What History Reveals about America’s Top Secrets and Fatal Misconception (an Economist and Financial Times book of the year). His next book is on “the history of the end of the world”, a subject on which he has taught multiple courses.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
HaydnBelfield https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:14 None full 5989
3fpsT79M4sj2cDvxt_NL_EA_EA EA - Hiatus: EA and LW post summaries by Zoe Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiatus: EA and LW post summaries, published by Zoe Williams on May 17, 2023 on The Effective Altruism Forum.For the past ~8 months, I've been summarizing the top posts on the EA and LW forums each week (see archive here), a project supported by Rethink Priorities (RP).I’ve recently taken on a new role as an AI Governance & Strategy Research Manager at RP. As a result of this we're going to be putting the forum summaries on hiatus while we work on what they should look like in the future, and hire for someone new to run the project. A big thank you to everyone who completed our recent survey - it’s great input for us as we evaluate this project going forward!The hiatus will likely last for ~4-6 months. We’ll continue to use the existing email list and podcast channel (EA Forum Podcast (Summaries)) when it's back up and running, so subscribe if you’re interested and feel free to continue to share it with others.If you’re looking for other ways to stay up to date in the meantime, some resources to consider:NewslettersThe EA Forum Digest - a weekly newsletter recommending new EA forum posts that have high karma, active discussion, or could use more input.Monthly Overload of Effective Altruism - a monthly newsletter with top research, organizational updates and events in the EA community.PodcastsEA forum podcast (curated posts) - human narrations of some of the best posts from the EA forum.Nonlinear library - AI narrations of all posts from the EA Forum, Alignment Forum, and LessWrong that meet a karma threshold.There are heaps of cause-area specific newsletters out there too - if you have suggestions, please share them in the comments.I’ve really enjoyed my time running this project! Thanks for reading and engaging, to Coleman Snell for narrating, and to all the writers who’ve shared their ideas and helped people find new opportunities to do good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Zoe Williams https://forum.effectivealtruism.org/posts/3fpsT79M4sj2cDvxt/hiatus-ea-and-lw-post-summaries Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiatus: EA and LW post summaries, published by Zoe Williams on May 17, 2023 on The Effective Altruism Forum.For the past ~8 months, I've been summarizing the top posts on the EA and LW forums each week (see archive here), a project supported by Rethink Priorities (RP).I’ve recently taken on a new role as an AI Governance & Strategy Research Manager at RP. As a result of this we're going to be putting the forum summaries on hiatus while we work on what they should look like in the future, and hire for someone new to run the project. A big thank you to everyone who completed our recent survey - it’s great input for us as we evaluate this project going forward!The hiatus will likely last for ~4-6 months. We’ll continue to use the existing email list and podcast channel (EA Forum Podcast (Summaries)) when it's back up and running, so subscribe if you’re interested and feel free to continue to share it with others.If you’re looking for other ways to stay up to date in the meantime, some resources to consider:NewslettersThe EA Forum Digest - a weekly newsletter recommending new EA forum posts that have high karma, active discussion, or could use more input.Monthly Overload of Effective Altruism - a monthly newsletter with top research, organizational updates and events in the EA community.PodcastsEA forum podcast (curated posts) - human narrations of some of the best posts from the EA forum.Nonlinear library - AI narrations of all posts from the EA Forum, Alignment Forum, and LessWrong that meet a karma threshold.There are heaps of cause-area specific newsletters out there too - if you have suggestions, please share them in the comments.I’ve really enjoyed my time running this project! Thanks for reading and engaging, to Coleman Snell for narrating, and to all the writers who’ve shared their ideas and helped people find new opportunities to do good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 17 May 2023 21:32:12 +0000 EA - Hiatus: EA and LW post summaries by Zoe Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiatus: EA and LW post summaries, published by Zoe Williams on May 17, 2023 on The Effective Altruism Forum.For the past ~8 months, I've been summarizing the top posts on the EA and LW forums each week (see archive here), a project supported by Rethink Priorities (RP).I’ve recently taken on a new role as an AI Governance & Strategy Research Manager at RP. As a result of this we're going to be putting the forum summaries on hiatus while we work on what they should look like in the future, and hire for someone new to run the project. A big thank you to everyone who completed our recent survey - it’s great input for us as we evaluate this project going forward!The hiatus will likely last for ~4-6 months. We’ll continue to use the existing email list and podcast channel (EA Forum Podcast (Summaries)) when it's back up and running, so subscribe if you’re interested and feel free to continue to share it with others.If you’re looking for other ways to stay up to date in the meantime, some resources to consider:NewslettersThe EA Forum Digest - a weekly newsletter recommending new EA forum posts that have high karma, active discussion, or could use more input.Monthly Overload of Effective Altruism - a monthly newsletter with top research, organizational updates and events in the EA community.PodcastsEA forum podcast (curated posts) - human narrations of some of the best posts from the EA forum.Nonlinear library - AI narrations of all posts from the EA Forum, Alignment Forum, and LessWrong that meet a karma threshold.There are heaps of cause-area specific newsletters out there too - if you have suggestions, please share them in the comments.I’ve really enjoyed my time running this project! Thanks for reading and engaging, to Coleman Snell for narrating, and to all the writers who’ve shared their ideas and helped people find new opportunities to do good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiatus: EA and LW post summaries, published by Zoe Williams on May 17, 2023 on The Effective Altruism Forum.For the past ~8 months, I've been summarizing the top posts on the EA and LW forums each week (see archive here), a project supported by Rethink Priorities (RP).I’ve recently taken on a new role as an AI Governance & Strategy Research Manager at RP. As a result of this we're going to be putting the forum summaries on hiatus while we work on what they should look like in the future, and hire for someone new to run the project. A big thank you to everyone who completed our recent survey - it’s great input for us as we evaluate this project going forward!The hiatus will likely last for ~4-6 months. We’ll continue to use the existing email list and podcast channel (EA Forum Podcast (Summaries)) when it's back up and running, so subscribe if you’re interested and feel free to continue to share it with others.If you’re looking for other ways to stay up to date in the meantime, some resources to consider:NewslettersThe EA Forum Digest - a weekly newsletter recommending new EA forum posts that have high karma, active discussion, or could use more input.Monthly Overload of Effective Altruism - a monthly newsletter with top research, organizational updates and events in the EA community.PodcastsEA forum podcast (curated posts) - human narrations of some of the best posts from the EA forum.Nonlinear library - AI narrations of all posts from the EA Forum, Alignment Forum, and LessWrong that meet a karma threshold.There are heaps of cause-area specific newsletters out there too - if you have suggestions, please share them in the comments.I’ve really enjoyed my time running this project! Thanks for reading and engaging, to Coleman Snell for narrating, and to all the writers who’ve shared their ideas and helped people find new opportunities to do good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Zoe Williams https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:58 None full 5986
Lna7SayJkyrKczH4n_NL_EA_EA EA - Play Regrantor: Move up to $250,000 to Your Top High-Impact Projects! by Dawn Drescher Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Play Regrantor: Move up to $250,000 to Your Top High-Impact Projects!, published by Dawn Drescher on May 17, 2023 on The Effective Altruism Forum.Summary. We have collected expressions of interest for a total donation budget of about $200,000 from donors who are interested in using our platform. How many of them will end up donating and how much they’ll donate depends on you and how many great projects you bring onto the platform. But Greg Colbourn has firmly committed another $50,000 for AGI moratorium projects for a total of $250,000!FundingOver the past months we’ve asked many donors whether they would consider using the Impact Markets platform to find giving opportunities and what their 2023 donation budget is. We’re now at an aggregate budget of $200,000 and counting. Of course these donors are free to decide that they prefer another project over the ones that have registered on our platform, so that these $200,000 are far from committed.But Greg Colbourn has firmly committed another $50,000 for projects related to an AGI moratorium! He is already in touch with some early-stage efforts, but we definitely need more people to join the cause.You want to become a top donor – a project scout? You want to support a project?You think Charity X is the most impactful one but it’s still underfunded? Convince them to register on app.impactmarkets.io. Then register your donations to them. (They have to confirm that your claim is accurate.) Speed the process by repeating it with a few high-impact projects. When the projects reach some milestones, they can submit them for review. The better the review, the higher your donor score.A score > 200 still puts you firmly among the top 10 donors on our platform. That can change as more project scouts register their high-impact donations.At the moment, we’re still allowing donors to import all their past donations. (Please contact us if you would like to import a lot.) We will eventually limit this to the past month.What if you need moneyIf you’re running or planning to run some impactful project, you can register it on app.impactmarkets.io and pitch it to our top donors. If they think it’s great, their donation (an endorsement in itself) can pull in many more donations from the people who trust them.We’re continually onboarding more people with great donation track records, so both the people on the leaderboard and the ranking algorithm are in constant flux. Please check for updates at least in monthly intervals.You want to donate but don’t know whereFor now, you can add yours to the expressions of interest we’re collecting. That’ll increase the incentive for awesome projects to join our platform and for awesome donors to vie for the top donor status.Once there are top donors that you trust (say, because they share your values), the top donors’ donations will guide you to high-impact projects. Such projects may be outside the purview of charity evaluators like GiveWell or ACE, or they might be too young to have earned top charity status yet. Hence why our impact market doubles as a crowdsourced charity evaluator.QuestionsIf you have any questions:Please see our full FAQ.Check out this demo of a bot trained on our FAQ.Have a look at our recent Substack posts.Join our Discord and the #questions-and-feedback channel.Or of course ask your question below!Acknowledgements. Thanks to Frankie, Dony, Matt, and Greg for feedback, and thanks to Greg and everyone who has filled in the expression of interest form for their pledges!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Dawn Drescher https://forum.effectivealtruism.org/posts/Lna7SayJkyrKczH4n/play-regrantor-move-up-to-usd250-000-to-your-top-high-impact Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Play Regrantor: Move up to $250,000 to Your Top High-Impact Projects!, published by Dawn Drescher on May 17, 2023 on The Effective Altruism Forum.Summary. We have collected expressions of interest for a total donation budget of about $200,000 from donors who are interested in using our platform. How many of them will end up donating and how much they’ll donate depends on you and how many great projects you bring onto the platform. But Greg Colbourn has firmly committed another $50,000 for AGI moratorium projects for a total of $250,000!FundingOver the past months we’ve asked many donors whether they would consider using the Impact Markets platform to find giving opportunities and what their 2023 donation budget is. We’re now at an aggregate budget of $200,000 and counting. Of course these donors are free to decide that they prefer another project over the ones that have registered on our platform, so that these $200,000 are far from committed.But Greg Colbourn has firmly committed another $50,000 for projects related to an AGI moratorium! He is already in touch with some early-stage efforts, but we definitely need more people to join the cause.You want to become a top donor – a project scout? You want to support a project?You think Charity X is the most impactful one but it’s still underfunded? Convince them to register on app.impactmarkets.io. Then register your donations to them. (They have to confirm that your claim is accurate.) Speed the process by repeating it with a few high-impact projects. When the projects reach some milestones, they can submit them for review. The better the review, the higher your donor score.A score > 200 still puts you firmly among the top 10 donors on our platform. That can change as more project scouts register their high-impact donations.At the moment, we’re still allowing donors to import all their past donations. (Please contact us if you would like to import a lot.) We will eventually limit this to the past month.What if you need moneyIf you’re running or planning to run some impactful project, you can register it on app.impactmarkets.io and pitch it to our top donors. If they think it’s great, their donation (an endorsement in itself) can pull in many more donations from the people who trust them.We’re continually onboarding more people with great donation track records, so both the people on the leaderboard and the ranking algorithm are in constant flux. Please check for updates at least in monthly intervals.You want to donate but don’t know whereFor now, you can add yours to the expressions of interest we’re collecting. That’ll increase the incentive for awesome projects to join our platform and for awesome donors to vie for the top donor status.Once there are top donors that you trust (say, because they share your values), the top donors’ donations will guide you to high-impact projects. Such projects may be outside the purview of charity evaluators like GiveWell or ACE, or they might be too young to have earned top charity status yet. Hence why our impact market doubles as a crowdsourced charity evaluator.QuestionsIf you have any questions:Please see our full FAQ.Check out this demo of a bot trained on our FAQ.Have a look at our recent Substack posts.Join our Discord and the #questions-and-feedback channel.Or of course ask your question below!Acknowledgements. Thanks to Frankie, Dony, Matt, and Greg for feedback, and thanks to Greg and everyone who has filled in the expression of interest form for their pledges!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 17 May 2023 21:32:10 +0000 EA - Play Regrantor: Move up to $250,000 to Your Top High-Impact Projects! by Dawn Drescher Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Play Regrantor: Move up to $250,000 to Your Top High-Impact Projects!, published by Dawn Drescher on May 17, 2023 on The Effective Altruism Forum.Summary. We have collected expressions of interest for a total donation budget of about $200,000 from donors who are interested in using our platform. How many of them will end up donating and how much they’ll donate depends on you and how many great projects you bring onto the platform. But Greg Colbourn has firmly committed another $50,000 for AGI moratorium projects for a total of $250,000!FundingOver the past months we’ve asked many donors whether they would consider using the Impact Markets platform to find giving opportunities and what their 2023 donation budget is. We’re now at an aggregate budget of $200,000 and counting. Of course these donors are free to decide that they prefer another project over the ones that have registered on our platform, so that these $200,000 are far from committed.But Greg Colbourn has firmly committed another $50,000 for projects related to an AGI moratorium! He is already in touch with some early-stage efforts, but we definitely need more people to join the cause.You want to become a top donor – a project scout? You want to support a project?You think Charity X is the most impactful one but it’s still underfunded? Convince them to register on app.impactmarkets.io. Then register your donations to them. (They have to confirm that your claim is accurate.) Speed the process by repeating it with a few high-impact projects. When the projects reach some milestones, they can submit them for review. The better the review, the higher your donor score.A score > 200 still puts you firmly among the top 10 donors on our platform. That can change as more project scouts register their high-impact donations.At the moment, we’re still allowing donors to import all their past donations. (Please contact us if you would like to import a lot.) We will eventually limit this to the past month.What if you need moneyIf you’re running or planning to run some impactful project, you can register it on app.impactmarkets.io and pitch it to our top donors. If they think it’s great, their donation (an endorsement in itself) can pull in many more donations from the people who trust them.We’re continually onboarding more people with great donation track records, so both the people on the leaderboard and the ranking algorithm are in constant flux. Please check for updates at least in monthly intervals.You want to donate but don’t know whereFor now, you can add yours to the expressions of interest we’re collecting. That’ll increase the incentive for awesome projects to join our platform and for awesome donors to vie for the top donor status.Once there are top donors that you trust (say, because they share your values), the top donors’ donations will guide you to high-impact projects. Such projects may be outside the purview of charity evaluators like GiveWell or ACE, or they might be too young to have earned top charity status yet. Hence why our impact market doubles as a crowdsourced charity evaluator.QuestionsIf you have any questions:Please see our full FAQ.Check out this demo of a bot trained on our FAQ.Have a look at our recent Substack posts.Join our Discord and the #questions-and-feedback channel.Or of course ask your question below!Acknowledgements. Thanks to Frankie, Dony, Matt, and Greg for feedback, and thanks to Greg and everyone who has filled in the expression of interest form for their pledges!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Play Regrantor: Move up to $250,000 to Your Top High-Impact Projects!, published by Dawn Drescher on May 17, 2023 on The Effective Altruism Forum.Summary. We have collected expressions of interest for a total donation budget of about $200,000 from donors who are interested in using our platform. How many of them will end up donating and how much they’ll donate depends on you and how many great projects you bring onto the platform. But Greg Colbourn has firmly committed another $50,000 for AGI moratorium projects for a total of $250,000!FundingOver the past months we’ve asked many donors whether they would consider using the Impact Markets platform to find giving opportunities and what their 2023 donation budget is. We’re now at an aggregate budget of $200,000 and counting. Of course these donors are free to decide that they prefer another project over the ones that have registered on our platform, so that these $200,000 are far from committed.But Greg Colbourn has firmly committed another $50,000 for projects related to an AGI moratorium! He is already in touch with some early-stage efforts, but we definitely need more people to join the cause.You want to become a top donor – a project scout? You want to support a project?You think Charity X is the most impactful one but it’s still underfunded? Convince them to register on app.impactmarkets.io. Then register your donations to them. (They have to confirm that your claim is accurate.) Speed the process by repeating it with a few high-impact projects. When the projects reach some milestones, they can submit them for review. The better the review, the higher your donor score.A score > 200 still puts you firmly among the top 10 donors on our platform. That can change as more project scouts register their high-impact donations.At the moment, we’re still allowing donors to import all their past donations. (Please contact us if you would like to import a lot.) We will eventually limit this to the past month.What if you need moneyIf you’re running or planning to run some impactful project, you can register it on app.impactmarkets.io and pitch it to our top donors. If they think it’s great, their donation (an endorsement in itself) can pull in many more donations from the people who trust them.We’re continually onboarding more people with great donation track records, so both the people on the leaderboard and the ranking algorithm are in constant flux. Please check for updates at least in monthly intervals.You want to donate but don’t know whereFor now, you can add yours to the expressions of interest we’re collecting. That’ll increase the incentive for awesome projects to join our platform and for awesome donors to vie for the top donor status.Once there are top donors that you trust (say, because they share your values), the top donors’ donations will guide you to high-impact projects. Such projects may be outside the purview of charity evaluators like GiveWell or ACE, or they might be too young to have earned top charity status yet. Hence why our impact market doubles as a crowdsourced charity evaluator.QuestionsIf you have any questions:Please see our full FAQ.Check out this demo of a bot trained on our FAQ.Have a look at our recent Substack posts.Join our Discord and the #questions-and-feedback channel.Or of course ask your question below!Acknowledgements. Thanks to Frankie, Dony, Matt, and Greg for feedback, and thanks to Greg and everyone who has filled in the expression of interest form for their pledges!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Dawn Drescher https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:30 None full 5988
cPkfCviK5cAsevTdM_NL_EA_EA EA - The Charity Entrepreneurship top ideas new charity prediction market by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Charity Entrepreneurship top ideas new charity prediction market, published by CE on May 17, 2023 on The Effective Altruism Forum.TL;DR: Charity Entrepreneurship would like your help in our research process. We are running a prediction market on the top 10 ideas across two cause areas. A total of $2000 in prizes is available for prediction accuracy and comment quality. Check it out at:The CE prediction marketFor our upcoming (winter 2024) Incubation Program, we are researching two cause areas. Within global development we are looking into mass media interventions –social and behavior change communication campaigns delivered through mass media (e.g., radio advertising, TV shows, text messages, etc.) aiming to improve human well-being. Within animal welfare we are looking into preventive (or long-run) interventions for farmed animals – the new charities will not just positively affect farmed animals in the short term, but will have a long-run effect on preventing animal suffering in farms 35 years from now.We have narrowed down to the most promising top 10 ideas for each of these cause areas. The Charity Entrepreneurship research team will be doing ~80-hour research projects on as many of these ideas as we can between now and July, carefully examining the evidence and crucial considerations that could either make or break the idea. At the end of this we will aim to recommend two-three ideas for each cause area.This is where you come in. We want to get your views and predictions on our top ideas within each cause area. We have put our top idea list onto the Manifold Markets prediction market platform, and you are invited to join a collective exercise to assess these ideas and input into our decision making.You can do this by reading the list of top ideas (below) for one or both of the cause areas, and then going to the Manifold Market platform and:Make a prediction about how likely you think it is that a specific idea will be recommended by us at the end of our research.Leave comments on each idea with your thoughts or views on why it might or might not be recommended, or why it might or might not be a good idea.As well as having the great benefit of helping our research, we have $2000 in prizes to give away (generously donated by Manifold Markets).$1,000 for comment prizes. We will give $100 to each person who gives one of the top 10 arguments or pieces of information that changes our minds the most regarding our selection decisions.$1,000 for forecasting prizes. We will grant prizes to the individuals who do the best at predicting which of the ideas we end up selecting.More details on these prizes are available on the page at Manifold.The market is open until June 5, 2023 for predictions and comments. This gives the CE research team time to read and integrate comments and insights into our research before our early July deadline.To participate, read the list below and go to: to make predictions and leave comments.Summary of ideas under considerationMass MediaBy ‘mass media’ interventions we refer to social and behavior change communication campaigns delivered through mass media, aiming to improve human well-being.1. Using mobile technologies (mHealth) to encourage women to attend antenatal clinics and/or give birth at a healthcare facilityAcross much of sub-Saharan Africa, only about 55% of women make the recommended four+ antenatal care visits, and only 60% give birth at a healthcare facility. This organization would encourage greater healthcare utilization and achieve lower maternal and neonatal mortality by scaling up evidence-based mHealth interventions, such as one-way text messages or two-way SMS/WhatsApp communications. These messages would aim to address common concerns about professional healthcare, as well as reminding women not to mis...]]>
CE https://forum.effectivealtruism.org/posts/cPkfCviK5cAsevTdM/the-charity-entrepreneurship-top-ideas-new-charity Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Charity Entrepreneurship top ideas new charity prediction market, published by CE on May 17, 2023 on The Effective Altruism Forum.TL;DR: Charity Entrepreneurship would like your help in our research process. We are running a prediction market on the top 10 ideas across two cause areas. A total of $2000 in prizes is available for prediction accuracy and comment quality. Check it out at:The CE prediction marketFor our upcoming (winter 2024) Incubation Program, we are researching two cause areas. Within global development we are looking into mass media interventions –social and behavior change communication campaigns delivered through mass media (e.g., radio advertising, TV shows, text messages, etc.) aiming to improve human well-being. Within animal welfare we are looking into preventive (or long-run) interventions for farmed animals – the new charities will not just positively affect farmed animals in the short term, but will have a long-run effect on preventing animal suffering in farms 35 years from now.We have narrowed down to the most promising top 10 ideas for each of these cause areas. The Charity Entrepreneurship research team will be doing ~80-hour research projects on as many of these ideas as we can between now and July, carefully examining the evidence and crucial considerations that could either make or break the idea. At the end of this we will aim to recommend two-three ideas for each cause area.This is where you come in. We want to get your views and predictions on our top ideas within each cause area. We have put our top idea list onto the Manifold Markets prediction market platform, and you are invited to join a collective exercise to assess these ideas and input into our decision making.You can do this by reading the list of top ideas (below) for one or both of the cause areas, and then going to the Manifold Market platform and:Make a prediction about how likely you think it is that a specific idea will be recommended by us at the end of our research.Leave comments on each idea with your thoughts or views on why it might or might not be recommended, or why it might or might not be a good idea.As well as having the great benefit of helping our research, we have $2000 in prizes to give away (generously donated by Manifold Markets).$1,000 for comment prizes. We will give $100 to each person who gives one of the top 10 arguments or pieces of information that changes our minds the most regarding our selection decisions.$1,000 for forecasting prizes. We will grant prizes to the individuals who do the best at predicting which of the ideas we end up selecting.More details on these prizes are available on the page at Manifold.The market is open until June 5, 2023 for predictions and comments. This gives the CE research team time to read and integrate comments and insights into our research before our early July deadline.To participate, read the list below and go to: to make predictions and leave comments.Summary of ideas under considerationMass MediaBy ‘mass media’ interventions we refer to social and behavior change communication campaigns delivered through mass media, aiming to improve human well-being.1. Using mobile technologies (mHealth) to encourage women to attend antenatal clinics and/or give birth at a healthcare facilityAcross much of sub-Saharan Africa, only about 55% of women make the recommended four+ antenatal care visits, and only 60% give birth at a healthcare facility. This organization would encourage greater healthcare utilization and achieve lower maternal and neonatal mortality by scaling up evidence-based mHealth interventions, such as one-way text messages or two-way SMS/WhatsApp communications. These messages would aim to address common concerns about professional healthcare, as well as reminding women not to mis...]]>
Wed, 17 May 2023 15:35:52 +0000 EA - The Charity Entrepreneurship top ideas new charity prediction market by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Charity Entrepreneurship top ideas new charity prediction market, published by CE on May 17, 2023 on The Effective Altruism Forum.TL;DR: Charity Entrepreneurship would like your help in our research process. We are running a prediction market on the top 10 ideas across two cause areas. A total of $2000 in prizes is available for prediction accuracy and comment quality. Check it out at:The CE prediction marketFor our upcoming (winter 2024) Incubation Program, we are researching two cause areas. Within global development we are looking into mass media interventions –social and behavior change communication campaigns delivered through mass media (e.g., radio advertising, TV shows, text messages, etc.) aiming to improve human well-being. Within animal welfare we are looking into preventive (or long-run) interventions for farmed animals – the new charities will not just positively affect farmed animals in the short term, but will have a long-run effect on preventing animal suffering in farms 35 years from now.We have narrowed down to the most promising top 10 ideas for each of these cause areas. The Charity Entrepreneurship research team will be doing ~80-hour research projects on as many of these ideas as we can between now and July, carefully examining the evidence and crucial considerations that could either make or break the idea. At the end of this we will aim to recommend two-three ideas for each cause area.This is where you come in. We want to get your views and predictions on our top ideas within each cause area. We have put our top idea list onto the Manifold Markets prediction market platform, and you are invited to join a collective exercise to assess these ideas and input into our decision making.You can do this by reading the list of top ideas (below) for one or both of the cause areas, and then going to the Manifold Market platform and:Make a prediction about how likely you think it is that a specific idea will be recommended by us at the end of our research.Leave comments on each idea with your thoughts or views on why it might or might not be recommended, or why it might or might not be a good idea.As well as having the great benefit of helping our research, we have $2000 in prizes to give away (generously donated by Manifold Markets).$1,000 for comment prizes. We will give $100 to each person who gives one of the top 10 arguments or pieces of information that changes our minds the most regarding our selection decisions.$1,000 for forecasting prizes. We will grant prizes to the individuals who do the best at predicting which of the ideas we end up selecting.More details on these prizes are available on the page at Manifold.The market is open until June 5, 2023 for predictions and comments. This gives the CE research team time to read and integrate comments and insights into our research before our early July deadline.To participate, read the list below and go to: to make predictions and leave comments.Summary of ideas under considerationMass MediaBy ‘mass media’ interventions we refer to social and behavior change communication campaigns delivered through mass media, aiming to improve human well-being.1. Using mobile technologies (mHealth) to encourage women to attend antenatal clinics and/or give birth at a healthcare facilityAcross much of sub-Saharan Africa, only about 55% of women make the recommended four+ antenatal care visits, and only 60% give birth at a healthcare facility. This organization would encourage greater healthcare utilization and achieve lower maternal and neonatal mortality by scaling up evidence-based mHealth interventions, such as one-way text messages or two-way SMS/WhatsApp communications. These messages would aim to address common concerns about professional healthcare, as well as reminding women not to mis...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Charity Entrepreneurship top ideas new charity prediction market, published by CE on May 17, 2023 on The Effective Altruism Forum.TL;DR: Charity Entrepreneurship would like your help in our research process. We are running a prediction market on the top 10 ideas across two cause areas. A total of $2000 in prizes is available for prediction accuracy and comment quality. Check it out at:The CE prediction marketFor our upcoming (winter 2024) Incubation Program, we are researching two cause areas. Within global development we are looking into mass media interventions –social and behavior change communication campaigns delivered through mass media (e.g., radio advertising, TV shows, text messages, etc.) aiming to improve human well-being. Within animal welfare we are looking into preventive (or long-run) interventions for farmed animals – the new charities will not just positively affect farmed animals in the short term, but will have a long-run effect on preventing animal suffering in farms 35 years from now.We have narrowed down to the most promising top 10 ideas for each of these cause areas. The Charity Entrepreneurship research team will be doing ~80-hour research projects on as many of these ideas as we can between now and July, carefully examining the evidence and crucial considerations that could either make or break the idea. At the end of this we will aim to recommend two-three ideas for each cause area.This is where you come in. We want to get your views and predictions on our top ideas within each cause area. We have put our top idea list onto the Manifold Markets prediction market platform, and you are invited to join a collective exercise to assess these ideas and input into our decision making.You can do this by reading the list of top ideas (below) for one or both of the cause areas, and then going to the Manifold Market platform and:Make a prediction about how likely you think it is that a specific idea will be recommended by us at the end of our research.Leave comments on each idea with your thoughts or views on why it might or might not be recommended, or why it might or might not be a good idea.As well as having the great benefit of helping our research, we have $2000 in prizes to give away (generously donated by Manifold Markets).$1,000 for comment prizes. We will give $100 to each person who gives one of the top 10 arguments or pieces of information that changes our minds the most regarding our selection decisions.$1,000 for forecasting prizes. We will grant prizes to the individuals who do the best at predicting which of the ideas we end up selecting.More details on these prizes are available on the page at Manifold.The market is open until June 5, 2023 for predictions and comments. This gives the CE research team time to read and integrate comments and insights into our research before our early July deadline.To participate, read the list below and go to: to make predictions and leave comments.Summary of ideas under considerationMass MediaBy ‘mass media’ interventions we refer to social and behavior change communication campaigns delivered through mass media, aiming to improve human well-being.1. Using mobile technologies (mHealth) to encourage women to attend antenatal clinics and/or give birth at a healthcare facilityAcross much of sub-Saharan Africa, only about 55% of women make the recommended four+ antenatal care visits, and only 60% give birth at a healthcare facility. This organization would encourage greater healthcare utilization and achieve lower maternal and neonatal mortality by scaling up evidence-based mHealth interventions, such as one-way text messages or two-way SMS/WhatsApp communications. These messages would aim to address common concerns about professional healthcare, as well as reminding women not to mis...]]>
CE https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 16:50 None full 5982
fjQJaAfiA6eML5xwx_NL_EA_EA EA - Don't optimise for social status within the EA community by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't optimise for social status within the EA community, published by freedomandutility on May 17, 2023 on The Effective Altruism Forum.One downside of engaging with the EA community is that social status in the community probably isn't well aligned with impact, so if you consciously or subconsciously start optimising for status, you may be less impactful than you could be otherwise.For example, roles outside EA organisations which lead to huge social impact probably won't help much with social status inside the EA community.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
freedomandutility https://forum.effectivealtruism.org/posts/fjQJaAfiA6eML5xwx/don-t-optimise-for-social-status-within-the-ea-community Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't optimise for social status within the EA community, published by freedomandutility on May 17, 2023 on The Effective Altruism Forum.One downside of engaging with the EA community is that social status in the community probably isn't well aligned with impact, so if you consciously or subconsciously start optimising for status, you may be less impactful than you could be otherwise.For example, roles outside EA organisations which lead to huge social impact probably won't help much with social status inside the EA community.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 17 May 2023 14:12:18 +0000 EA - Don't optimise for social status within the EA community by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't optimise for social status within the EA community, published by freedomandutility on May 17, 2023 on The Effective Altruism Forum.One downside of engaging with the EA community is that social status in the community probably isn't well aligned with impact, so if you consciously or subconsciously start optimising for status, you may be less impactful than you could be otherwise.For example, roles outside EA organisations which lead to huge social impact probably won't help much with social status inside the EA community.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't optimise for social status within the EA community, published by freedomandutility on May 17, 2023 on The Effective Altruism Forum.One downside of engaging with the EA community is that social status in the community probably isn't well aligned with impact, so if you consciously or subconsciously start optimising for status, you may be less impactful than you could be otherwise.For example, roles outside EA organisations which lead to huge social impact probably won't help much with social status inside the EA community.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
freedomandutility https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:43 None full 5983
kXaxasXfG8DQR4jgq_NL_EA_EA EA - Some quotes from Tuesday's Senate hearing on AI by Daniel Eth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some quotes from Tuesday's Senate hearing on AI, published by Daniel Eth on May 17, 2023 on The Effective Altruism Forum.On Tuesday, the US Senate held a hearing on AI. The hearing involved 3 witnesses: Sam Altman, Gary Marcus, and Christina Montgomery. (If you want to watch the hearing, you can watch it here – it's around 3 hours.)I watched the hearing and wound up live-tweeting quotes that stood out to me, as well as some reactions. I'm copying over quotes to this post that I think might be of interest to others here. Note this was a very impromptu process and I wasn't originally planning on writing a forum post when I was jotting down quotes, so I've presumably missed a bunch of quotes that would be of interest to many here. Without further ado, here are the quotes (organized chronologically):Senator Blumenthal (D-CT): "I think you [Sam Altman] have said, in fact, and I'm gonna quote, 'Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.' You may have had in mind the effect on jobs, which is really my biggest nightmare in the long run..."Sam Altman: [doesn't correct the misunderstanding of the quote and instead proceeds to talk about possible effects of AI on employment]Sam Altman: "My worst fears are that... we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways; it's why we started the company... I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear eyed about what the downside case is and the work that we have to do to mitigate that."Sam Altman: "I think the US should lead [on AI regulation], but to be effective, we do need something global... There is precedent – I know it sounds naive to call for something like this... we've done it before with the IAEA... Given what it takes to make these models, the chip supply chain, the sort of limited number of competitive GPUs, the power the US has over these companies, I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of, that are actually workable, even though it sounds on its face like an impractical idea. And I think it would be great for the world."Senator Coons (D-DE): "I understand one way to prevent generative AI models from providing harmful content is to have humans identify that content and then train the algorithm to avoid it. There's another approach that's called 'constitutional AI' that gives the model a set of values or principles to guide its decision making. Would it be more effective to give models these kinds of rules instead of trying to require or compel training the model on all the different potentials for harmful content? ... I'm interested also, what international bodies are best positioned to convene multilateral discussions to promote responsible standards? We've talked about a model being CERN and nuclear energy. I'm concerned about proliferation and nonproliferation."Senator Kennedy (R-LA): "Permit me to share with you three hypotheses that I would like you to assume for the moment to be true... Hypothesis number 3... there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying... Please tell me in plain English two or three reforms, regulations, if any, that you would implement if you were queen or king for a day..."Gary Marcus: "Number 1: a safety-review like we use with the FDA prior to widespread deployment... Number 2: a nimble monitoring agency to follow what's going ...]]>
Daniel Eth https://forum.effectivealtruism.org/posts/kXaxasXfG8DQR4jgq/some-quotes-from-tuesday-s-senate-hearing-on-ai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some quotes from Tuesday's Senate hearing on AI, published by Daniel Eth on May 17, 2023 on The Effective Altruism Forum.On Tuesday, the US Senate held a hearing on AI. The hearing involved 3 witnesses: Sam Altman, Gary Marcus, and Christina Montgomery. (If you want to watch the hearing, you can watch it here – it's around 3 hours.)I watched the hearing and wound up live-tweeting quotes that stood out to me, as well as some reactions. I'm copying over quotes to this post that I think might be of interest to others here. Note this was a very impromptu process and I wasn't originally planning on writing a forum post when I was jotting down quotes, so I've presumably missed a bunch of quotes that would be of interest to many here. Without further ado, here are the quotes (organized chronologically):Senator Blumenthal (D-CT): "I think you [Sam Altman] have said, in fact, and I'm gonna quote, 'Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.' You may have had in mind the effect on jobs, which is really my biggest nightmare in the long run..."Sam Altman: [doesn't correct the misunderstanding of the quote and instead proceeds to talk about possible effects of AI on employment]Sam Altman: "My worst fears are that... we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways; it's why we started the company... I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear eyed about what the downside case is and the work that we have to do to mitigate that."Sam Altman: "I think the US should lead [on AI regulation], but to be effective, we do need something global... There is precedent – I know it sounds naive to call for something like this... we've done it before with the IAEA... Given what it takes to make these models, the chip supply chain, the sort of limited number of competitive GPUs, the power the US has over these companies, I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of, that are actually workable, even though it sounds on its face like an impractical idea. And I think it would be great for the world."Senator Coons (D-DE): "I understand one way to prevent generative AI models from providing harmful content is to have humans identify that content and then train the algorithm to avoid it. There's another approach that's called 'constitutional AI' that gives the model a set of values or principles to guide its decision making. Would it be more effective to give models these kinds of rules instead of trying to require or compel training the model on all the different potentials for harmful content? ... I'm interested also, what international bodies are best positioned to convene multilateral discussions to promote responsible standards? We've talked about a model being CERN and nuclear energy. I'm concerned about proliferation and nonproliferation."Senator Kennedy (R-LA): "Permit me to share with you three hypotheses that I would like you to assume for the moment to be true... Hypothesis number 3... there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying... Please tell me in plain English two or three reforms, regulations, if any, that you would implement if you were queen or king for a day..."Gary Marcus: "Number 1: a safety-review like we use with the FDA prior to widespread deployment... Number 2: a nimble monitoring agency to follow what's going ...]]>
Wed, 17 May 2023 14:04:46 +0000 EA - Some quotes from Tuesday's Senate hearing on AI by Daniel Eth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some quotes from Tuesday's Senate hearing on AI, published by Daniel Eth on May 17, 2023 on The Effective Altruism Forum.On Tuesday, the US Senate held a hearing on AI. The hearing involved 3 witnesses: Sam Altman, Gary Marcus, and Christina Montgomery. (If you want to watch the hearing, you can watch it here – it's around 3 hours.)I watched the hearing and wound up live-tweeting quotes that stood out to me, as well as some reactions. I'm copying over quotes to this post that I think might be of interest to others here. Note this was a very impromptu process and I wasn't originally planning on writing a forum post when I was jotting down quotes, so I've presumably missed a bunch of quotes that would be of interest to many here. Without further ado, here are the quotes (organized chronologically):Senator Blumenthal (D-CT): "I think you [Sam Altman] have said, in fact, and I'm gonna quote, 'Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.' You may have had in mind the effect on jobs, which is really my biggest nightmare in the long run..."Sam Altman: [doesn't correct the misunderstanding of the quote and instead proceeds to talk about possible effects of AI on employment]Sam Altman: "My worst fears are that... we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways; it's why we started the company... I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear eyed about what the downside case is and the work that we have to do to mitigate that."Sam Altman: "I think the US should lead [on AI regulation], but to be effective, we do need something global... There is precedent – I know it sounds naive to call for something like this... we've done it before with the IAEA... Given what it takes to make these models, the chip supply chain, the sort of limited number of competitive GPUs, the power the US has over these companies, I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of, that are actually workable, even though it sounds on its face like an impractical idea. And I think it would be great for the world."Senator Coons (D-DE): "I understand one way to prevent generative AI models from providing harmful content is to have humans identify that content and then train the algorithm to avoid it. There's another approach that's called 'constitutional AI' that gives the model a set of values or principles to guide its decision making. Would it be more effective to give models these kinds of rules instead of trying to require or compel training the model on all the different potentials for harmful content? ... I'm interested also, what international bodies are best positioned to convene multilateral discussions to promote responsible standards? We've talked about a model being CERN and nuclear energy. I'm concerned about proliferation and nonproliferation."Senator Kennedy (R-LA): "Permit me to share with you three hypotheses that I would like you to assume for the moment to be true... Hypothesis number 3... there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying... Please tell me in plain English two or three reforms, regulations, if any, that you would implement if you were queen or king for a day..."Gary Marcus: "Number 1: a safety-review like we use with the FDA prior to widespread deployment... Number 2: a nimble monitoring agency to follow what's going ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some quotes from Tuesday's Senate hearing on AI, published by Daniel Eth on May 17, 2023 on The Effective Altruism Forum.On Tuesday, the US Senate held a hearing on AI. The hearing involved 3 witnesses: Sam Altman, Gary Marcus, and Christina Montgomery. (If you want to watch the hearing, you can watch it here – it's around 3 hours.)I watched the hearing and wound up live-tweeting quotes that stood out to me, as well as some reactions. I'm copying over quotes to this post that I think might be of interest to others here. Note this was a very impromptu process and I wasn't originally planning on writing a forum post when I was jotting down quotes, so I've presumably missed a bunch of quotes that would be of interest to many here. Without further ado, here are the quotes (organized chronologically):Senator Blumenthal (D-CT): "I think you [Sam Altman] have said, in fact, and I'm gonna quote, 'Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.' You may have had in mind the effect on jobs, which is really my biggest nightmare in the long run..."Sam Altman: [doesn't correct the misunderstanding of the quote and instead proceeds to talk about possible effects of AI on employment]Sam Altman: "My worst fears are that... we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways; it's why we started the company... I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear eyed about what the downside case is and the work that we have to do to mitigate that."Sam Altman: "I think the US should lead [on AI regulation], but to be effective, we do need something global... There is precedent – I know it sounds naive to call for something like this... we've done it before with the IAEA... Given what it takes to make these models, the chip supply chain, the sort of limited number of competitive GPUs, the power the US has over these companies, I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of, that are actually workable, even though it sounds on its face like an impractical idea. And I think it would be great for the world."Senator Coons (D-DE): "I understand one way to prevent generative AI models from providing harmful content is to have humans identify that content and then train the algorithm to avoid it. There's another approach that's called 'constitutional AI' that gives the model a set of values or principles to guide its decision making. Would it be more effective to give models these kinds of rules instead of trying to require or compel training the model on all the different potentials for harmful content? ... I'm interested also, what international bodies are best positioned to convene multilateral discussions to promote responsible standards? We've talked about a model being CERN and nuclear energy. I'm concerned about proliferation and nonproliferation."Senator Kennedy (R-LA): "Permit me to share with you three hypotheses that I would like you to assume for the moment to be true... Hypothesis number 3... there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying... Please tell me in plain English two or three reforms, regulations, if any, that you would implement if you were queen or king for a day..."Gary Marcus: "Number 1: a safety-review like we use with the FDA prior to widespread deployment... Number 2: a nimble monitoring agency to follow what's going ...]]>
Daniel Eth https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:13 None full 5981
LnDkYCHAh5Dw4uvny_NL_EA_EA EA - Probably Good launches improved website and 1-on-1 advising by Probably Good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably Good launches improved website & 1-on-1 advising, published by Probably Good on May 16, 2023 on The Effective Altruism Forum.TL;DR: Probably Good’s career guidance services just got a lot better!We renovated & rebranded our website – check it out and leave feedbackWe opened applications for 1-on-1 career guidance services! Apply hereWe published a lot more career profiles & completed our career guideMeet us at EAG! Come by our organization fair booth or community office hoursWhat’s new?Probably Good is a career guidance organization aiming to make impact-centered career advice more accessible to more people (you can read more about our goals, approach, and more in our about-us page). Our renovated site is a big improvement to our overall look and should provide a much better experience for readers. Updates include:A complete redesign to make the site more friendly, engaging, and easy to navigate. It loads faster, it looks better, and it's now easier to spend the hours of research your career deserves.A full end-to-end guide for how to think about and pursue an impactful career! We restructured the guide to be more accessible and engaging, and will continue making updates/adding summaries in the coming weeks.A LOT of new content:5 career path profiles:Climate SciencePsychologyFor-Profit EntrepreneurshipJournalismVeterinary MedicineImpactful career path recommendations for:Biology degreesEconomics degreesPsychology degreesBusiness degreesUpdated core concept articlesApply for 1-on-1 career advisingAlong with the new site, we also officially launched our 1-on-1 career advising service. These calls are a chance for us to help you think through your goals and plans, connect you to concrete opportunities and experts in Probably Good's focus areas, and provide ongoing consultation services. Applications are now open!If you’re currently planning your career path or looking to make a change, we encourage you to apply. We’re also happy to work with people who are motivated to do good but are unfamiliar or less involved with EA, so feel free to share this opportunity more broadly with your network outside of the community. If you have further questions about the application process, don’t hesitate to contact us at contact@probablygood.org.Get InvolvedOur team will be at EAG, so if you’d like to learn more about Probably Good or chat about career advising, feel free to stop by our table at the organization fair Friday or come by community office hours on Saturday 5-6pm.Apply for advising!Give us feedback on the site. There are still quite a few changes we plan on making and technical quirks we’ll continue to update over the coming weeks. That said, we’d greatly appreciate any feedback on the site. To ensure that we’ll see your comments, the best way to leave feedback is through our contact form. You can also reach out directly at hello@probablygood.org.Direct people who might be interested to our site. If you’re a community organizer (especially at a university or in a region outside the U.S. & U.K) we’d appreciate it if you'd spread the word about our resources & advising opportunities and let us know what further resources would be useful for your community.AcknowledgementsMany thanks to User-Friendly for making our whole website redesign & rebranding possible! They did a great job understanding our brand & needs, and provided a new look for PG that we’re really excited about.We also want to shout out our newest team members who helped make all these updates possible!Itamar Shatz, our new Head of Growth. Itamar’s writing about applied psychology and philosophy is read by over a million people each year, and linked from places like The NY Times and TechCrunch. He has a PhD from Cambridge University, where he is an affiliated researcher...]]>
Probably Good https://forum.effectivealtruism.org/posts/LnDkYCHAh5Dw4uvny/probably-good-launches-improved-website-and-1-on-1-advising Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably Good launches improved website & 1-on-1 advising, published by Probably Good on May 16, 2023 on The Effective Altruism Forum.TL;DR: Probably Good’s career guidance services just got a lot better!We renovated & rebranded our website – check it out and leave feedbackWe opened applications for 1-on-1 career guidance services! Apply hereWe published a lot more career profiles & completed our career guideMeet us at EAG! Come by our organization fair booth or community office hoursWhat’s new?Probably Good is a career guidance organization aiming to make impact-centered career advice more accessible to more people (you can read more about our goals, approach, and more in our about-us page). Our renovated site is a big improvement to our overall look and should provide a much better experience for readers. Updates include:A complete redesign to make the site more friendly, engaging, and easy to navigate. It loads faster, it looks better, and it's now easier to spend the hours of research your career deserves.A full end-to-end guide for how to think about and pursue an impactful career! We restructured the guide to be more accessible and engaging, and will continue making updates/adding summaries in the coming weeks.A LOT of new content:5 career path profiles:Climate SciencePsychologyFor-Profit EntrepreneurshipJournalismVeterinary MedicineImpactful career path recommendations for:Biology degreesEconomics degreesPsychology degreesBusiness degreesUpdated core concept articlesApply for 1-on-1 career advisingAlong with the new site, we also officially launched our 1-on-1 career advising service. These calls are a chance for us to help you think through your goals and plans, connect you to concrete opportunities and experts in Probably Good's focus areas, and provide ongoing consultation services. Applications are now open!If you’re currently planning your career path or looking to make a change, we encourage you to apply. We’re also happy to work with people who are motivated to do good but are unfamiliar or less involved with EA, so feel free to share this opportunity more broadly with your network outside of the community. If you have further questions about the application process, don’t hesitate to contact us at contact@probablygood.org.Get InvolvedOur team will be at EAG, so if you’d like to learn more about Probably Good or chat about career advising, feel free to stop by our table at the organization fair Friday or come by community office hours on Saturday 5-6pm.Apply for advising!Give us feedback on the site. There are still quite a few changes we plan on making and technical quirks we’ll continue to update over the coming weeks. That said, we’d greatly appreciate any feedback on the site. To ensure that we’ll see your comments, the best way to leave feedback is through our contact form. You can also reach out directly at hello@probablygood.org.Direct people who might be interested to our site. If you’re a community organizer (especially at a university or in a region outside the U.S. & U.K) we’d appreciate it if you'd spread the word about our resources & advising opportunities and let us know what further resources would be useful for your community.AcknowledgementsMany thanks to User-Friendly for making our whole website redesign & rebranding possible! They did a great job understanding our brand & needs, and provided a new look for PG that we’re really excited about.We also want to shout out our newest team members who helped make all these updates possible!Itamar Shatz, our new Head of Growth. Itamar’s writing about applied psychology and philosophy is read by over a million people each year, and linked from places like The NY Times and TechCrunch. He has a PhD from Cambridge University, where he is an affiliated researcher...]]>
Tue, 16 May 2023 18:38:56 +0000 EA - Probably Good launches improved website and 1-on-1 advising by Probably Good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably Good launches improved website & 1-on-1 advising, published by Probably Good on May 16, 2023 on The Effective Altruism Forum.TL;DR: Probably Good’s career guidance services just got a lot better!We renovated & rebranded our website – check it out and leave feedbackWe opened applications for 1-on-1 career guidance services! Apply hereWe published a lot more career profiles & completed our career guideMeet us at EAG! Come by our organization fair booth or community office hoursWhat’s new?Probably Good is a career guidance organization aiming to make impact-centered career advice more accessible to more people (you can read more about our goals, approach, and more in our about-us page). Our renovated site is a big improvement to our overall look and should provide a much better experience for readers. Updates include:A complete redesign to make the site more friendly, engaging, and easy to navigate. It loads faster, it looks better, and it's now easier to spend the hours of research your career deserves.A full end-to-end guide for how to think about and pursue an impactful career! We restructured the guide to be more accessible and engaging, and will continue making updates/adding summaries in the coming weeks.A LOT of new content:5 career path profiles:Climate SciencePsychologyFor-Profit EntrepreneurshipJournalismVeterinary MedicineImpactful career path recommendations for:Biology degreesEconomics degreesPsychology degreesBusiness degreesUpdated core concept articlesApply for 1-on-1 career advisingAlong with the new site, we also officially launched our 1-on-1 career advising service. These calls are a chance for us to help you think through your goals and plans, connect you to concrete opportunities and experts in Probably Good's focus areas, and provide ongoing consultation services. Applications are now open!If you’re currently planning your career path or looking to make a change, we encourage you to apply. We’re also happy to work with people who are motivated to do good but are unfamiliar or less involved with EA, so feel free to share this opportunity more broadly with your network outside of the community. If you have further questions about the application process, don’t hesitate to contact us at contact@probablygood.org.Get InvolvedOur team will be at EAG, so if you’d like to learn more about Probably Good or chat about career advising, feel free to stop by our table at the organization fair Friday or come by community office hours on Saturday 5-6pm.Apply for advising!Give us feedback on the site. There are still quite a few changes we plan on making and technical quirks we’ll continue to update over the coming weeks. That said, we’d greatly appreciate any feedback on the site. To ensure that we’ll see your comments, the best way to leave feedback is through our contact form. You can also reach out directly at hello@probablygood.org.Direct people who might be interested to our site. If you’re a community organizer (especially at a university or in a region outside the U.S. & U.K) we’d appreciate it if you'd spread the word about our resources & advising opportunities and let us know what further resources would be useful for your community.AcknowledgementsMany thanks to User-Friendly for making our whole website redesign & rebranding possible! They did a great job understanding our brand & needs, and provided a new look for PG that we’re really excited about.We also want to shout out our newest team members who helped make all these updates possible!Itamar Shatz, our new Head of Growth. Itamar’s writing about applied psychology and philosophy is read by over a million people each year, and linked from places like The NY Times and TechCrunch. He has a PhD from Cambridge University, where he is an affiliated researcher...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably Good launches improved website & 1-on-1 advising, published by Probably Good on May 16, 2023 on The Effective Altruism Forum.TL;DR: Probably Good’s career guidance services just got a lot better!We renovated & rebranded our website – check it out and leave feedbackWe opened applications for 1-on-1 career guidance services! Apply hereWe published a lot more career profiles & completed our career guideMeet us at EAG! Come by our organization fair booth or community office hoursWhat’s new?Probably Good is a career guidance organization aiming to make impact-centered career advice more accessible to more people (you can read more about our goals, approach, and more in our about-us page). Our renovated site is a big improvement to our overall look and should provide a much better experience for readers. Updates include:A complete redesign to make the site more friendly, engaging, and easy to navigate. It loads faster, it looks better, and it's now easier to spend the hours of research your career deserves.A full end-to-end guide for how to think about and pursue an impactful career! We restructured the guide to be more accessible and engaging, and will continue making updates/adding summaries in the coming weeks.A LOT of new content:5 career path profiles:Climate SciencePsychologyFor-Profit EntrepreneurshipJournalismVeterinary MedicineImpactful career path recommendations for:Biology degreesEconomics degreesPsychology degreesBusiness degreesUpdated core concept articlesApply for 1-on-1 career advisingAlong with the new site, we also officially launched our 1-on-1 career advising service. These calls are a chance for us to help you think through your goals and plans, connect you to concrete opportunities and experts in Probably Good's focus areas, and provide ongoing consultation services. Applications are now open!If you’re currently planning your career path or looking to make a change, we encourage you to apply. We’re also happy to work with people who are motivated to do good but are unfamiliar or less involved with EA, so feel free to share this opportunity more broadly with your network outside of the community. If you have further questions about the application process, don’t hesitate to contact us at contact@probablygood.org.Get InvolvedOur team will be at EAG, so if you’d like to learn more about Probably Good or chat about career advising, feel free to stop by our table at the organization fair Friday or come by community office hours on Saturday 5-6pm.Apply for advising!Give us feedback on the site. There are still quite a few changes we plan on making and technical quirks we’ll continue to update over the coming weeks. That said, we’d greatly appreciate any feedback on the site. To ensure that we’ll see your comments, the best way to leave feedback is through our contact form. You can also reach out directly at hello@probablygood.org.Direct people who might be interested to our site. If you’re a community organizer (especially at a university or in a region outside the U.S. & U.K) we’d appreciate it if you'd spread the word about our resources & advising opportunities and let us know what further resources would be useful for your community.AcknowledgementsMany thanks to User-Friendly for making our whole website redesign & rebranding possible! They did a great job understanding our brand & needs, and provided a new look for PG that we’re really excited about.We also want to shout out our newest team members who helped make all these updates possible!Itamar Shatz, our new Head of Growth. Itamar’s writing about applied psychology and philosophy is read by over a million people each year, and linked from places like The NY Times and TechCrunch. He has a PhD from Cambridge University, where he is an affiliated researcher...]]>
Probably Good https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:21 None full 5973
ejPNrR4pCEPiziqwE_NL_EA_EA EA - Charity Entrepreneurship’s research into large-scale global health interventions by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Charity Entrepreneurship’s research into large-scale global health interventions, published by CE on May 16, 2023 on The Effective Altruism Forum.TL;DR: Description of our comprehensive research process, used to determine the most effective charity ideas within the large-scale global health cause area- our focus for the most recent research round. Starting with 300+ ideas, we used our iterative process to find four highly-scalable, cost-effective interventions to be launched from our Incubation Program.Every year at Charity Entrepreneurship, we try to find the best interventions to launch impact-focused charities through our Incubation Program. As a research-driven organization, we try to continuously improve our research methodology and process to ensure a robust and comprehensive analysis of the interventions under consideration. In our last research round (late 2022- early 2023), we focused on the area of large-scale global health. In this post we share with you our insights on the objectives, research framework, and selection criteria that have guided us in identifying and recommending the most impactful ideas in this space.Why “large-scale” global health?There is evidence to suggest that the larger a charity scales, the less cost-effective it becomes. This tradeoff likely applies to most cause areas, but it is most evident in the global health and development space.In this diagram from an Open Philanthropy talk, we can clearly see this correlation mapped out:Source: Open Philanthropy’s Cause Prioritization Framework Talk (min. 22:12)The diagram shows GiveWell's top-recommended charities from 2020 clustered on the 10x cash line, each having the ability to spend approximately $100 million or more, annually. GiveDirectly is located on the 1x cash point, having the capacity to spend approximately $100 billion annually.This has lead us to two considerations::Firstly, it suggests that those who prioritize evidence-based, impact-driven philanthropy may identify highly effective, yet challenging-to-scale interventions that surpass the efficacy of GiveWell's top recommendations. However, identifying such interventions may be challenging.Secondly, it means that Charity Entrepreneurship needs to determine how to balance cost-effectiveness and scalability when recommending potential high-impact interventions.During our 2020 and 2021 research round in the global health and development space, our primary focus was on maximizing cost-effectiveness. We honed in on policy charities in particular, which are likely to reside in the top left quadrant of the scalability versus cost-effectiveness graph; such organizations may be many times more effective than current top recommendations by GiveWell, but have limited capacity to absorb additional funds. For instance, HLI estimates here that LEEP, the longest-running policy charity incubated by Charity Entrepreneurship, is approximately 100 times more cost-effective than cash transfers.In 2022, we made the strategic decision to shift our focus from maximizing cost-effectiveness, to maximizing scalability.This decision was made given the apparent high level of funding available from organizations such as GiveWell.We challenged ourselves to seek out the most promising new charity ideas that could scale to absorb $5 million or more in funding within five years, while also maintaining the same level of cost-effectiveness as current top GiveWell recommendations (10x cash, ~$100/DALY).Our research processIn late 2022 and early 2023, we conducted a six-month research round with a team of four staff members, as well as several research fellows, to identify the most promising new charity ideas.Our approach prioritized ideas that met the following criteria, in order of importance:Surpassed our benchmark of 10x cash, and could sc...]]>
CE https://forum.effectivealtruism.org/posts/ejPNrR4pCEPiziqwE/charity-entrepreneurship-s-research-into-large-scale-global Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Charity Entrepreneurship’s research into large-scale global health interventions, published by CE on May 16, 2023 on The Effective Altruism Forum.TL;DR: Description of our comprehensive research process, used to determine the most effective charity ideas within the large-scale global health cause area- our focus for the most recent research round. Starting with 300+ ideas, we used our iterative process to find four highly-scalable, cost-effective interventions to be launched from our Incubation Program.Every year at Charity Entrepreneurship, we try to find the best interventions to launch impact-focused charities through our Incubation Program. As a research-driven organization, we try to continuously improve our research methodology and process to ensure a robust and comprehensive analysis of the interventions under consideration. In our last research round (late 2022- early 2023), we focused on the area of large-scale global health. In this post we share with you our insights on the objectives, research framework, and selection criteria that have guided us in identifying and recommending the most impactful ideas in this space.Why “large-scale” global health?There is evidence to suggest that the larger a charity scales, the less cost-effective it becomes. This tradeoff likely applies to most cause areas, but it is most evident in the global health and development space.In this diagram from an Open Philanthropy talk, we can clearly see this correlation mapped out:Source: Open Philanthropy’s Cause Prioritization Framework Talk (min. 22:12)The diagram shows GiveWell's top-recommended charities from 2020 clustered on the 10x cash line, each having the ability to spend approximately $100 million or more, annually. GiveDirectly is located on the 1x cash point, having the capacity to spend approximately $100 billion annually.This has lead us to two considerations::Firstly, it suggests that those who prioritize evidence-based, impact-driven philanthropy may identify highly effective, yet challenging-to-scale interventions that surpass the efficacy of GiveWell's top recommendations. However, identifying such interventions may be challenging.Secondly, it means that Charity Entrepreneurship needs to determine how to balance cost-effectiveness and scalability when recommending potential high-impact interventions.During our 2020 and 2021 research round in the global health and development space, our primary focus was on maximizing cost-effectiveness. We honed in on policy charities in particular, which are likely to reside in the top left quadrant of the scalability versus cost-effectiveness graph; such organizations may be many times more effective than current top recommendations by GiveWell, but have limited capacity to absorb additional funds. For instance, HLI estimates here that LEEP, the longest-running policy charity incubated by Charity Entrepreneurship, is approximately 100 times more cost-effective than cash transfers.In 2022, we made the strategic decision to shift our focus from maximizing cost-effectiveness, to maximizing scalability.This decision was made given the apparent high level of funding available from organizations such as GiveWell.We challenged ourselves to seek out the most promising new charity ideas that could scale to absorb $5 million or more in funding within five years, while also maintaining the same level of cost-effectiveness as current top GiveWell recommendations (10x cash, ~$100/DALY).Our research processIn late 2022 and early 2023, we conducted a six-month research round with a team of four staff members, as well as several research fellows, to identify the most promising new charity ideas.Our approach prioritized ideas that met the following criteria, in order of importance:Surpassed our benchmark of 10x cash, and could sc...]]>
Tue, 16 May 2023 15:44:36 +0000 EA - Charity Entrepreneurship’s research into large-scale global health interventions by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Charity Entrepreneurship’s research into large-scale global health interventions, published by CE on May 16, 2023 on The Effective Altruism Forum.TL;DR: Description of our comprehensive research process, used to determine the most effective charity ideas within the large-scale global health cause area- our focus for the most recent research round. Starting with 300+ ideas, we used our iterative process to find four highly-scalable, cost-effective interventions to be launched from our Incubation Program.Every year at Charity Entrepreneurship, we try to find the best interventions to launch impact-focused charities through our Incubation Program. As a research-driven organization, we try to continuously improve our research methodology and process to ensure a robust and comprehensive analysis of the interventions under consideration. In our last research round (late 2022- early 2023), we focused on the area of large-scale global health. In this post we share with you our insights on the objectives, research framework, and selection criteria that have guided us in identifying and recommending the most impactful ideas in this space.Why “large-scale” global health?There is evidence to suggest that the larger a charity scales, the less cost-effective it becomes. This tradeoff likely applies to most cause areas, but it is most evident in the global health and development space.In this diagram from an Open Philanthropy talk, we can clearly see this correlation mapped out:Source: Open Philanthropy’s Cause Prioritization Framework Talk (min. 22:12)The diagram shows GiveWell's top-recommended charities from 2020 clustered on the 10x cash line, each having the ability to spend approximately $100 million or more, annually. GiveDirectly is located on the 1x cash point, having the capacity to spend approximately $100 billion annually.This has lead us to two considerations::Firstly, it suggests that those who prioritize evidence-based, impact-driven philanthropy may identify highly effective, yet challenging-to-scale interventions that surpass the efficacy of GiveWell's top recommendations. However, identifying such interventions may be challenging.Secondly, it means that Charity Entrepreneurship needs to determine how to balance cost-effectiveness and scalability when recommending potential high-impact interventions.During our 2020 and 2021 research round in the global health and development space, our primary focus was on maximizing cost-effectiveness. We honed in on policy charities in particular, which are likely to reside in the top left quadrant of the scalability versus cost-effectiveness graph; such organizations may be many times more effective than current top recommendations by GiveWell, but have limited capacity to absorb additional funds. For instance, HLI estimates here that LEEP, the longest-running policy charity incubated by Charity Entrepreneurship, is approximately 100 times more cost-effective than cash transfers.In 2022, we made the strategic decision to shift our focus from maximizing cost-effectiveness, to maximizing scalability.This decision was made given the apparent high level of funding available from organizations such as GiveWell.We challenged ourselves to seek out the most promising new charity ideas that could scale to absorb $5 million or more in funding within five years, while also maintaining the same level of cost-effectiveness as current top GiveWell recommendations (10x cash, ~$100/DALY).Our research processIn late 2022 and early 2023, we conducted a six-month research round with a team of four staff members, as well as several research fellows, to identify the most promising new charity ideas.Our approach prioritized ideas that met the following criteria, in order of importance:Surpassed our benchmark of 10x cash, and could sc...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Charity Entrepreneurship’s research into large-scale global health interventions, published by CE on May 16, 2023 on The Effective Altruism Forum.TL;DR: Description of our comprehensive research process, used to determine the most effective charity ideas within the large-scale global health cause area- our focus for the most recent research round. Starting with 300+ ideas, we used our iterative process to find four highly-scalable, cost-effective interventions to be launched from our Incubation Program.Every year at Charity Entrepreneurship, we try to find the best interventions to launch impact-focused charities through our Incubation Program. As a research-driven organization, we try to continuously improve our research methodology and process to ensure a robust and comprehensive analysis of the interventions under consideration. In our last research round (late 2022- early 2023), we focused on the area of large-scale global health. In this post we share with you our insights on the objectives, research framework, and selection criteria that have guided us in identifying and recommending the most impactful ideas in this space.Why “large-scale” global health?There is evidence to suggest that the larger a charity scales, the less cost-effective it becomes. This tradeoff likely applies to most cause areas, but it is most evident in the global health and development space.In this diagram from an Open Philanthropy talk, we can clearly see this correlation mapped out:Source: Open Philanthropy’s Cause Prioritization Framework Talk (min. 22:12)The diagram shows GiveWell's top-recommended charities from 2020 clustered on the 10x cash line, each having the ability to spend approximately $100 million or more, annually. GiveDirectly is located on the 1x cash point, having the capacity to spend approximately $100 billion annually.This has lead us to two considerations::Firstly, it suggests that those who prioritize evidence-based, impact-driven philanthropy may identify highly effective, yet challenging-to-scale interventions that surpass the efficacy of GiveWell's top recommendations. However, identifying such interventions may be challenging.Secondly, it means that Charity Entrepreneurship needs to determine how to balance cost-effectiveness and scalability when recommending potential high-impact interventions.During our 2020 and 2021 research round in the global health and development space, our primary focus was on maximizing cost-effectiveness. We honed in on policy charities in particular, which are likely to reside in the top left quadrant of the scalability versus cost-effectiveness graph; such organizations may be many times more effective than current top recommendations by GiveWell, but have limited capacity to absorb additional funds. For instance, HLI estimates here that LEEP, the longest-running policy charity incubated by Charity Entrepreneurship, is approximately 100 times more cost-effective than cash transfers.In 2022, we made the strategic decision to shift our focus from maximizing cost-effectiveness, to maximizing scalability.This decision was made given the apparent high level of funding available from organizations such as GiveWell.We challenged ourselves to seek out the most promising new charity ideas that could scale to absorb $5 million or more in funding within five years, while also maintaining the same level of cost-effectiveness as current top GiveWell recommendations (10x cash, ~$100/DALY).Our research processIn late 2022 and early 2023, we conducted a six-month research round with a team of four staff members, as well as several research fellows, to identify the most promising new charity ideas.Our approach prioritized ideas that met the following criteria, in order of importance:Surpassed our benchmark of 10x cash, and could sc...]]>
CE https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:05 None full 5972
eYQ4A3Wft7rbdZahG_NL_EA_EA EA - Announcing the Confido app: bringing forecasting to everyone by Blanka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Confido app: bringing forecasting to everyone, published by Blanka on May 16, 2023 on The Effective Altruism Forum.SummaryHi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions.We design tools, workshops and materials to support this mission. This is the first in a series of multiple EA Forum posts. We will tell you more about our mission and our other projects in future articles.In this post, we are pleased to announce that we have just released the Confido app, a web-based tool for tracking and sharing probabilistic predictions and estimates. You can use it in strategic decision making when you want a probabilistic estimate on a topic from different stakeholders, in meetings to avoid anchoring, to organize forecasting tournaments, or in calibration workshops and lectures. We offer very high data privacy, so it is used also in government setting. See our demo or request your Confido workspace for free.The current version of Confido is already used by several organizations, including the Dutch government, several policy think tanks and EA organizations.Confido is under active development and there is a lot more to come. We’d love to hear your feedback and feature requests. To see news, follow us on Twitter, Facebook or LinkedIn or collaborate with us on Discord. We are also looking for funding.Why are we building Confido?We think there is a lot of attention in the EA space toward making better forecasts – investing in big public forecasting platforms, researching better scoring and aggregation methods, skilling up superforecasters and other quantitatively and technically minded people in the EA community, building advanced tools for more complex probabilistic models, etc.This is clearly important and well done and we do not expect to add much to this effort.However, we believe some other aspects of forecasting / quantified uncertainty are also valuable and currently neglected.For example, little effort has gone into making these concepts and tools accessible to people without a math or technical background. This includes, for example, many people from:organizations working on animal welfareorganizations working on non-technical AI safety, strategy and governanceorganizations working on biological risks and pandemic preparednessorganizations working on improving policymaking and governance in general, think tanks, even government bodiesWe think all of these would benefit from clearer ways of communicating uncertain beliefs and estimates, yet existing tools may have a high barrier of entry.What makes Confido unique?Confido is a tool for working collaboratively with probabilistic forecasts, estimates, beliefs and quantified uncertainty together with your team. Several features distinguish Confido from existing tools in this space:a strong emphasis on being easy to understand and convenient to usethe ability to use it internally and privately, including self-hostingthe ability to use it for more than just forecasting questions (more below)Confido is free and open sourceEase of use & user experienceConfido’s two main goals thus are to be maximally:easy to understand (easy to get started with, requiring minimal background knowledge, guiding the user where needed)convenient to use (requiring minimum hassle and extra steps to use it and perform tasks, pleasant to work with)The second part is also very important because when a tool is cum...]]>
Blanka https://forum.effectivealtruism.org/posts/eYQ4A3Wft7rbdZahG/announcing-the-confido-app-bringing-forecasting-to-everyone Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Confido app: bringing forecasting to everyone, published by Blanka on May 16, 2023 on The Effective Altruism Forum.SummaryHi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions.We design tools, workshops and materials to support this mission. This is the first in a series of multiple EA Forum posts. We will tell you more about our mission and our other projects in future articles.In this post, we are pleased to announce that we have just released the Confido app, a web-based tool for tracking and sharing probabilistic predictions and estimates. You can use it in strategic decision making when you want a probabilistic estimate on a topic from different stakeholders, in meetings to avoid anchoring, to organize forecasting tournaments, or in calibration workshops and lectures. We offer very high data privacy, so it is used also in government setting. See our demo or request your Confido workspace for free.The current version of Confido is already used by several organizations, including the Dutch government, several policy think tanks and EA organizations.Confido is under active development and there is a lot more to come. We’d love to hear your feedback and feature requests. To see news, follow us on Twitter, Facebook or LinkedIn or collaborate with us on Discord. We are also looking for funding.Why are we building Confido?We think there is a lot of attention in the EA space toward making better forecasts – investing in big public forecasting platforms, researching better scoring and aggregation methods, skilling up superforecasters and other quantitatively and technically minded people in the EA community, building advanced tools for more complex probabilistic models, etc.This is clearly important and well done and we do not expect to add much to this effort.However, we believe some other aspects of forecasting / quantified uncertainty are also valuable and currently neglected.For example, little effort has gone into making these concepts and tools accessible to people without a math or technical background. This includes, for example, many people from:organizations working on animal welfareorganizations working on non-technical AI safety, strategy and governanceorganizations working on biological risks and pandemic preparednessorganizations working on improving policymaking and governance in general, think tanks, even government bodiesWe think all of these would benefit from clearer ways of communicating uncertain beliefs and estimates, yet existing tools may have a high barrier of entry.What makes Confido unique?Confido is a tool for working collaboratively with probabilistic forecasts, estimates, beliefs and quantified uncertainty together with your team. Several features distinguish Confido from existing tools in this space:a strong emphasis on being easy to understand and convenient to usethe ability to use it internally and privately, including self-hostingthe ability to use it for more than just forecasting questions (more below)Confido is free and open sourceEase of use & user experienceConfido’s two main goals thus are to be maximally:easy to understand (easy to get started with, requiring minimal background knowledge, guiding the user where needed)convenient to use (requiring minimum hassle and extra steps to use it and perform tasks, pleasant to work with)The second part is also very important because when a tool is cum...]]>
Tue, 16 May 2023 14:25:18 +0000 EA - Announcing the Confido app: bringing forecasting to everyone by Blanka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Confido app: bringing forecasting to everyone, published by Blanka on May 16, 2023 on The Effective Altruism Forum.SummaryHi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions.We design tools, workshops and materials to support this mission. This is the first in a series of multiple EA Forum posts. We will tell you more about our mission and our other projects in future articles.In this post, we are pleased to announce that we have just released the Confido app, a web-based tool for tracking and sharing probabilistic predictions and estimates. You can use it in strategic decision making when you want a probabilistic estimate on a topic from different stakeholders, in meetings to avoid anchoring, to organize forecasting tournaments, or in calibration workshops and lectures. We offer very high data privacy, so it is used also in government setting. See our demo or request your Confido workspace for free.The current version of Confido is already used by several organizations, including the Dutch government, several policy think tanks and EA organizations.Confido is under active development and there is a lot more to come. We’d love to hear your feedback and feature requests. To see news, follow us on Twitter, Facebook or LinkedIn or collaborate with us on Discord. We are also looking for funding.Why are we building Confido?We think there is a lot of attention in the EA space toward making better forecasts – investing in big public forecasting platforms, researching better scoring and aggregation methods, skilling up superforecasters and other quantitatively and technically minded people in the EA community, building advanced tools for more complex probabilistic models, etc.This is clearly important and well done and we do not expect to add much to this effort.However, we believe some other aspects of forecasting / quantified uncertainty are also valuable and currently neglected.For example, little effort has gone into making these concepts and tools accessible to people without a math or technical background. This includes, for example, many people from:organizations working on animal welfareorganizations working on non-technical AI safety, strategy and governanceorganizations working on biological risks and pandemic preparednessorganizations working on improving policymaking and governance in general, think tanks, even government bodiesWe think all of these would benefit from clearer ways of communicating uncertain beliefs and estimates, yet existing tools may have a high barrier of entry.What makes Confido unique?Confido is a tool for working collaboratively with probabilistic forecasts, estimates, beliefs and quantified uncertainty together with your team. Several features distinguish Confido from existing tools in this space:a strong emphasis on being easy to understand and convenient to usethe ability to use it internally and privately, including self-hostingthe ability to use it for more than just forecasting questions (more below)Confido is free and open sourceEase of use & user experienceConfido’s two main goals thus are to be maximally:easy to understand (easy to get started with, requiring minimal background knowledge, guiding the user where needed)convenient to use (requiring minimum hassle and extra steps to use it and perform tasks, pleasant to work with)The second part is also very important because when a tool is cum...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Confido app: bringing forecasting to everyone, published by Blanka on May 16, 2023 on The Effective Altruism Forum.SummaryHi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions.We design tools, workshops and materials to support this mission. This is the first in a series of multiple EA Forum posts. We will tell you more about our mission and our other projects in future articles.In this post, we are pleased to announce that we have just released the Confido app, a web-based tool for tracking and sharing probabilistic predictions and estimates. You can use it in strategic decision making when you want a probabilistic estimate on a topic from different stakeholders, in meetings to avoid anchoring, to organize forecasting tournaments, or in calibration workshops and lectures. We offer very high data privacy, so it is used also in government setting. See our demo or request your Confido workspace for free.The current version of Confido is already used by several organizations, including the Dutch government, several policy think tanks and EA organizations.Confido is under active development and there is a lot more to come. We’d love to hear your feedback and feature requests. To see news, follow us on Twitter, Facebook or LinkedIn or collaborate with us on Discord. We are also looking for funding.Why are we building Confido?We think there is a lot of attention in the EA space toward making better forecasts – investing in big public forecasting platforms, researching better scoring and aggregation methods, skilling up superforecasters and other quantitatively and technically minded people in the EA community, building advanced tools for more complex probabilistic models, etc.This is clearly important and well done and we do not expect to add much to this effort.However, we believe some other aspects of forecasting / quantified uncertainty are also valuable and currently neglected.For example, little effort has gone into making these concepts and tools accessible to people without a math or technical background. This includes, for example, many people from:organizations working on animal welfareorganizations working on non-technical AI safety, strategy and governanceorganizations working on biological risks and pandemic preparednessorganizations working on improving policymaking and governance in general, think tanks, even government bodiesWe think all of these would benefit from clearer ways of communicating uncertain beliefs and estimates, yet existing tools may have a high barrier of entry.What makes Confido unique?Confido is a tool for working collaboratively with probabilistic forecasts, estimates, beliefs and quantified uncertainty together with your team. Several features distinguish Confido from existing tools in this space:a strong emphasis on being easy to understand and convenient to usethe ability to use it internally and privately, including self-hostingthe ability to use it for more than just forecasting questions (more below)Confido is free and open sourceEase of use & user experienceConfido’s two main goals thus are to be maximally:easy to understand (easy to get started with, requiring minimal background knowledge, guiding the user where needed)convenient to use (requiring minimum hassle and extra steps to use it and perform tasks, pleasant to work with)The second part is also very important because when a tool is cum...]]>
Blanka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:04 None full 5971
N9jeiuok59MQjFWcX_NL_EA_EA EA - 2023 update on Muslims for Effective Altruism by Kaleem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2023 update on Muslims for Effective Altruism, published by Kaleem on May 15, 2023 on The Effective Altruism Forum.Summary:Muslims for Effective Altruism made progress this year through our three projects: “The Muslim Network for Positive Impact”, “The Muslim Impact Lab”, and “Afterfund”, as well as continued behind the scenes work on fundraising, incubating new projects, and developing our short and medium term plans. The number of members of Muslims for Effective Altruism has continued to grow at a rate of ~25 members per year since mid-2021 (now at ~50 members).“Verdict”:It was a good (but not perfect) year for us, and we’re looking forward to doing even more interesting work, and having more of an impact, in the next one!Introduction:Around a year ago we posted our first post which gave a high level overview of why we think strating EA projects aimed at Muslims, or focusing on the intersection between EA and Islam, would be a high value endeavor. We were really pleasantly surprised at the amount of interest and support expressed about this endeavor, and continue to be grateful for the encouragement and feedback we receive from the community.Perhaps the most notable change before reading the rest of this post is that we’ve moved to thinking about Muslims for Effective Altruism’s structure as a federation of independent projects, rather than as one large org. We think this is useful because it prevents our lack of managerial capacity from holding back or interfering with our motivated project leads, as well as allowing us to increase our likelihood of expanding our network as other impactful projects can get off the ground without our knowledge or input.Kaleem will be attending EAG in London in May as well as hosting the “Muslims in EA” meetup there - so we thought it would be a good time to update everyone on what we’ve been up to in case you’d like to meet to discuss any aspects of our work.So, this post aims to provide an update on some of the work on which we’ve managed to make headway over the past year, as well as things we have fallen short on, and ways in which our plans have changed since the initial post.Projects:The Muslim Impact LabThe Muslim Impact Lab is a research and advisory body. This dynamic multi-disciplinary team is focused on collating and producing content on the intersections between EA ideas and Islamic Intellectual history, as well as providing expert consulting services to EA-Aligned organizations doing outreach in Muslim communities.The Lab assisted GiveDirectly by co-authoring their post in which they launched the Yemen Zakat Fund and previously advised them on their plans. Nayaaz Hashim, one of the co-founders of the lab, also published a piece on Unconditional Cash Transfers from an Islamic perspective on the Muslim Impact Lab substack. To date, GiveDirectly has raised $165,000 for the Yemen program, however it is difficult to quantify our counterfactual impact on this figure, given that other organizations such as Giving What We Can, and GiveDirectly themselves, have also been involved in raising funds. In the future we should look into ways of building in mechanisms to our processes that may help us establish our counterfactual impact with regards to raising effective Zakat.The lab is currently working on a survey to understand the moral priorities of the Muslim community, and developing a research agenda for, and producing content on, exploring intersections and challenges between understandings of Islam and current theories of effective altruism.The plan for the Muslim Impact Lab is to continue doing this research and, in collaboration with the Muslim Network for Positive Impact, put together a structured fellowship in the near future.Core Team: Maryam Khan, Nayaaz Hashim, Faezah Izadi, Mufti Sayed Haroon, Ahmed Gh...]]>
Kaleem https://forum.effectivealtruism.org/posts/N9jeiuok59MQjFWcX/2023-update-on-muslims-for-effective-altruism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2023 update on Muslims for Effective Altruism, published by Kaleem on May 15, 2023 on The Effective Altruism Forum.Summary:Muslims for Effective Altruism made progress this year through our three projects: “The Muslim Network for Positive Impact”, “The Muslim Impact Lab”, and “Afterfund”, as well as continued behind the scenes work on fundraising, incubating new projects, and developing our short and medium term plans. The number of members of Muslims for Effective Altruism has continued to grow at a rate of ~25 members per year since mid-2021 (now at ~50 members).“Verdict”:It was a good (but not perfect) year for us, and we’re looking forward to doing even more interesting work, and having more of an impact, in the next one!Introduction:Around a year ago we posted our first post which gave a high level overview of why we think strating EA projects aimed at Muslims, or focusing on the intersection between EA and Islam, would be a high value endeavor. We were really pleasantly surprised at the amount of interest and support expressed about this endeavor, and continue to be grateful for the encouragement and feedback we receive from the community.Perhaps the most notable change before reading the rest of this post is that we’ve moved to thinking about Muslims for Effective Altruism’s structure as a federation of independent projects, rather than as one large org. We think this is useful because it prevents our lack of managerial capacity from holding back or interfering with our motivated project leads, as well as allowing us to increase our likelihood of expanding our network as other impactful projects can get off the ground without our knowledge or input.Kaleem will be attending EAG in London in May as well as hosting the “Muslims in EA” meetup there - so we thought it would be a good time to update everyone on what we’ve been up to in case you’d like to meet to discuss any aspects of our work.So, this post aims to provide an update on some of the work on which we’ve managed to make headway over the past year, as well as things we have fallen short on, and ways in which our plans have changed since the initial post.Projects:The Muslim Impact LabThe Muslim Impact Lab is a research and advisory body. This dynamic multi-disciplinary team is focused on collating and producing content on the intersections between EA ideas and Islamic Intellectual history, as well as providing expert consulting services to EA-Aligned organizations doing outreach in Muslim communities.The Lab assisted GiveDirectly by co-authoring their post in which they launched the Yemen Zakat Fund and previously advised them on their plans. Nayaaz Hashim, one of the co-founders of the lab, also published a piece on Unconditional Cash Transfers from an Islamic perspective on the Muslim Impact Lab substack. To date, GiveDirectly has raised $165,000 for the Yemen program, however it is difficult to quantify our counterfactual impact on this figure, given that other organizations such as Giving What We Can, and GiveDirectly themselves, have also been involved in raising funds. In the future we should look into ways of building in mechanisms to our processes that may help us establish our counterfactual impact with regards to raising effective Zakat.The lab is currently working on a survey to understand the moral priorities of the Muslim community, and developing a research agenda for, and producing content on, exploring intersections and challenges between understandings of Islam and current theories of effective altruism.The plan for the Muslim Impact Lab is to continue doing this research and, in collaboration with the Muslim Network for Positive Impact, put together a structured fellowship in the near future.Core Team: Maryam Khan, Nayaaz Hashim, Faezah Izadi, Mufti Sayed Haroon, Ahmed Gh...]]>
Tue, 16 May 2023 01:00:09 +0000 EA - 2023 update on Muslims for Effective Altruism by Kaleem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2023 update on Muslims for Effective Altruism, published by Kaleem on May 15, 2023 on The Effective Altruism Forum.Summary:Muslims for Effective Altruism made progress this year through our three projects: “The Muslim Network for Positive Impact”, “The Muslim Impact Lab”, and “Afterfund”, as well as continued behind the scenes work on fundraising, incubating new projects, and developing our short and medium term plans. The number of members of Muslims for Effective Altruism has continued to grow at a rate of ~25 members per year since mid-2021 (now at ~50 members).“Verdict”:It was a good (but not perfect) year for us, and we’re looking forward to doing even more interesting work, and having more of an impact, in the next one!Introduction:Around a year ago we posted our first post which gave a high level overview of why we think strating EA projects aimed at Muslims, or focusing on the intersection between EA and Islam, would be a high value endeavor. We were really pleasantly surprised at the amount of interest and support expressed about this endeavor, and continue to be grateful for the encouragement and feedback we receive from the community.Perhaps the most notable change before reading the rest of this post is that we’ve moved to thinking about Muslims for Effective Altruism’s structure as a federation of independent projects, rather than as one large org. We think this is useful because it prevents our lack of managerial capacity from holding back or interfering with our motivated project leads, as well as allowing us to increase our likelihood of expanding our network as other impactful projects can get off the ground without our knowledge or input.Kaleem will be attending EAG in London in May as well as hosting the “Muslims in EA” meetup there - so we thought it would be a good time to update everyone on what we’ve been up to in case you’d like to meet to discuss any aspects of our work.So, this post aims to provide an update on some of the work on which we’ve managed to make headway over the past year, as well as things we have fallen short on, and ways in which our plans have changed since the initial post.Projects:The Muslim Impact LabThe Muslim Impact Lab is a research and advisory body. This dynamic multi-disciplinary team is focused on collating and producing content on the intersections between EA ideas and Islamic Intellectual history, as well as providing expert consulting services to EA-Aligned organizations doing outreach in Muslim communities.The Lab assisted GiveDirectly by co-authoring their post in which they launched the Yemen Zakat Fund and previously advised them on their plans. Nayaaz Hashim, one of the co-founders of the lab, also published a piece on Unconditional Cash Transfers from an Islamic perspective on the Muslim Impact Lab substack. To date, GiveDirectly has raised $165,000 for the Yemen program, however it is difficult to quantify our counterfactual impact on this figure, given that other organizations such as Giving What We Can, and GiveDirectly themselves, have also been involved in raising funds. In the future we should look into ways of building in mechanisms to our processes that may help us establish our counterfactual impact with regards to raising effective Zakat.The lab is currently working on a survey to understand the moral priorities of the Muslim community, and developing a research agenda for, and producing content on, exploring intersections and challenges between understandings of Islam and current theories of effective altruism.The plan for the Muslim Impact Lab is to continue doing this research and, in collaboration with the Muslim Network for Positive Impact, put together a structured fellowship in the near future.Core Team: Maryam Khan, Nayaaz Hashim, Faezah Izadi, Mufti Sayed Haroon, Ahmed Gh...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2023 update on Muslims for Effective Altruism, published by Kaleem on May 15, 2023 on The Effective Altruism Forum.Summary:Muslims for Effective Altruism made progress this year through our three projects: “The Muslim Network for Positive Impact”, “The Muslim Impact Lab”, and “Afterfund”, as well as continued behind the scenes work on fundraising, incubating new projects, and developing our short and medium term plans. The number of members of Muslims for Effective Altruism has continued to grow at a rate of ~25 members per year since mid-2021 (now at ~50 members).“Verdict”:It was a good (but not perfect) year for us, and we’re looking forward to doing even more interesting work, and having more of an impact, in the next one!Introduction:Around a year ago we posted our first post which gave a high level overview of why we think strating EA projects aimed at Muslims, or focusing on the intersection between EA and Islam, would be a high value endeavor. We were really pleasantly surprised at the amount of interest and support expressed about this endeavor, and continue to be grateful for the encouragement and feedback we receive from the community.Perhaps the most notable change before reading the rest of this post is that we’ve moved to thinking about Muslims for Effective Altruism’s structure as a federation of independent projects, rather than as one large org. We think this is useful because it prevents our lack of managerial capacity from holding back or interfering with our motivated project leads, as well as allowing us to increase our likelihood of expanding our network as other impactful projects can get off the ground without our knowledge or input.Kaleem will be attending EAG in London in May as well as hosting the “Muslims in EA” meetup there - so we thought it would be a good time to update everyone on what we’ve been up to in case you’d like to meet to discuss any aspects of our work.So, this post aims to provide an update on some of the work on which we’ve managed to make headway over the past year, as well as things we have fallen short on, and ways in which our plans have changed since the initial post.Projects:The Muslim Impact LabThe Muslim Impact Lab is a research and advisory body. This dynamic multi-disciplinary team is focused on collating and producing content on the intersections between EA ideas and Islamic Intellectual history, as well as providing expert consulting services to EA-Aligned organizations doing outreach in Muslim communities.The Lab assisted GiveDirectly by co-authoring their post in which they launched the Yemen Zakat Fund and previously advised them on their plans. Nayaaz Hashim, one of the co-founders of the lab, also published a piece on Unconditional Cash Transfers from an Islamic perspective on the Muslim Impact Lab substack. To date, GiveDirectly has raised $165,000 for the Yemen program, however it is difficult to quantify our counterfactual impact on this figure, given that other organizations such as Giving What We Can, and GiveDirectly themselves, have also been involved in raising funds. In the future we should look into ways of building in mechanisms to our processes that may help us establish our counterfactual impact with regards to raising effective Zakat.The lab is currently working on a survey to understand the moral priorities of the Muslim community, and developing a research agenda for, and producing content on, exploring intersections and challenges between understandings of Islam and current theories of effective altruism.The plan for the Muslim Impact Lab is to continue doing this research and, in collaboration with the Muslim Network for Positive Impact, put together a structured fellowship in the near future.Core Team: Maryam Khan, Nayaaz Hashim, Faezah Izadi, Mufti Sayed Haroon, Ahmed Gh...]]>
Kaleem https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:58 None full 5967
tX3ax2aSTbu4BtQBN_NL_EA_EA EA - Accidentally teaching AI models to deceive us (Ajeya Cotra on The 80,000 Hours Podcast) by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Accidentally teaching AI models to deceive us (Ajeya Cotra on The 80,000 Hours Podcast), published by 80000 Hours on May 15, 2023 on The Effective Altruism Forum.Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Ajeya Cotra on accidentally teaching AI models to deceive us.You can click through for the audio, a full transcript, and related links. Below is the episode summary and some key excerpts.Episode SummaryI don’t know yet what suite of tests exactly you could show me, and what arguments you could show me, that would make me actually convinced that this model has a sufficiently deeply rooted motivation to not try to escape human control. I think that’s, in some sense, the whole heart of the alignment problem.And I think for a long time, labs have just been racing ahead, and they’ve had the justification — which I think was reasonable for a while — of like, “Come on, of course these systems we’re building aren’t going to take over the world.” As soon as that starts to change, I want a forcing function that makes it so that the labs now have the incentive to come up with the kinds of tests that should actually be persuasive.Ajeya CotraImagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don’t get to see any resumes or do reference checks. And because you’re so rich, tonnes of people apply for the job — for all sorts of reasons.Today’s guest Ajeya Cotra — senior research analyst at Open Philanthropy — argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods.As she explains, such an eight-year-old faces a challenging problem. In the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest. But there are probably other characters too — like people who will pretend to care about you while you’re monitoring them, but intend to use the job to enrich themselves as soon as they think they can get away with it.Like a child trying to judge adults, at some point humans will be required to judge the trustworthiness and reliability of machine learning models that are as goal-oriented as people, and greatly outclass them in knowledge, experience, breadth, and speed. Tricky!Can’t we rely on how well models have performed at tasks during training to guide us? Ajeya worries that it won’t work. The trouble is that three different sorts of models will all produce the same output during training, but could behave very differently once deployed in a setting that allows their true colours to come through. She describes three such motivational archetypes:Saints — models that care about doing what we really wantSycophants — models that just want us to say they’ve done a good job, even if they get that praise by taking actions they know we wouldn’t want them toSchemers — models that don’t care about us or our interests at all, who are just pleasing us so long as that serves their own agendaIn principle, a machine learning training process based on reinforcement learning could spit out any of these three attitudes, because all three would perform roughly equally well on the tests we give them, and ‘performs well on tests’ is how these models are selected.But while that’s true in principle, maybe it’s not something that could plausibly happen in the real world. Af...]]>
80000 Hours https://forum.effectivealtruism.org/posts/tX3ax2aSTbu4BtQBN/accidentally-teaching-ai-models-to-deceive-us-ajeya-cotra-on Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Accidentally teaching AI models to deceive us (Ajeya Cotra on The 80,000 Hours Podcast), published by 80000 Hours on May 15, 2023 on The Effective Altruism Forum.Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Ajeya Cotra on accidentally teaching AI models to deceive us.You can click through for the audio, a full transcript, and related links. Below is the episode summary and some key excerpts.Episode SummaryI don’t know yet what suite of tests exactly you could show me, and what arguments you could show me, that would make me actually convinced that this model has a sufficiently deeply rooted motivation to not try to escape human control. I think that’s, in some sense, the whole heart of the alignment problem.And I think for a long time, labs have just been racing ahead, and they’ve had the justification — which I think was reasonable for a while — of like, “Come on, of course these systems we’re building aren’t going to take over the world.” As soon as that starts to change, I want a forcing function that makes it so that the labs now have the incentive to come up with the kinds of tests that should actually be persuasive.Ajeya CotraImagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don’t get to see any resumes or do reference checks. And because you’re so rich, tonnes of people apply for the job — for all sorts of reasons.Today’s guest Ajeya Cotra — senior research analyst at Open Philanthropy — argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods.As she explains, such an eight-year-old faces a challenging problem. In the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest. But there are probably other characters too — like people who will pretend to care about you while you’re monitoring them, but intend to use the job to enrich themselves as soon as they think they can get away with it.Like a child trying to judge adults, at some point humans will be required to judge the trustworthiness and reliability of machine learning models that are as goal-oriented as people, and greatly outclass them in knowledge, experience, breadth, and speed. Tricky!Can’t we rely on how well models have performed at tasks during training to guide us? Ajeya worries that it won’t work. The trouble is that three different sorts of models will all produce the same output during training, but could behave very differently once deployed in a setting that allows their true colours to come through. She describes three such motivational archetypes:Saints — models that care about doing what we really wantSycophants — models that just want us to say they’ve done a good job, even if they get that praise by taking actions they know we wouldn’t want them toSchemers — models that don’t care about us or our interests at all, who are just pleasing us so long as that serves their own agendaIn principle, a machine learning training process based on reinforcement learning could spit out any of these three attitudes, because all three would perform roughly equally well on the tests we give them, and ‘performs well on tests’ is how these models are selected.But while that’s true in principle, maybe it’s not something that could plausibly happen in the real world. Af...]]>
Tue, 16 May 2023 00:09:47 +0000 EA - Accidentally teaching AI models to deceive us (Ajeya Cotra on The 80,000 Hours Podcast) by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Accidentally teaching AI models to deceive us (Ajeya Cotra on The 80,000 Hours Podcast), published by 80000 Hours on May 15, 2023 on The Effective Altruism Forum.Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Ajeya Cotra on accidentally teaching AI models to deceive us.You can click through for the audio, a full transcript, and related links. Below is the episode summary and some key excerpts.Episode SummaryI don’t know yet what suite of tests exactly you could show me, and what arguments you could show me, that would make me actually convinced that this model has a sufficiently deeply rooted motivation to not try to escape human control. I think that’s, in some sense, the whole heart of the alignment problem.And I think for a long time, labs have just been racing ahead, and they’ve had the justification — which I think was reasonable for a while — of like, “Come on, of course these systems we’re building aren’t going to take over the world.” As soon as that starts to change, I want a forcing function that makes it so that the labs now have the incentive to come up with the kinds of tests that should actually be persuasive.Ajeya CotraImagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don’t get to see any resumes or do reference checks. And because you’re so rich, tonnes of people apply for the job — for all sorts of reasons.Today’s guest Ajeya Cotra — senior research analyst at Open Philanthropy — argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods.As she explains, such an eight-year-old faces a challenging problem. In the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest. But there are probably other characters too — like people who will pretend to care about you while you’re monitoring them, but intend to use the job to enrich themselves as soon as they think they can get away with it.Like a child trying to judge adults, at some point humans will be required to judge the trustworthiness and reliability of machine learning models that are as goal-oriented as people, and greatly outclass them in knowledge, experience, breadth, and speed. Tricky!Can’t we rely on how well models have performed at tasks during training to guide us? Ajeya worries that it won’t work. The trouble is that three different sorts of models will all produce the same output during training, but could behave very differently once deployed in a setting that allows their true colours to come through. She describes three such motivational archetypes:Saints — models that care about doing what we really wantSycophants — models that just want us to say they’ve done a good job, even if they get that praise by taking actions they know we wouldn’t want them toSchemers — models that don’t care about us or our interests at all, who are just pleasing us so long as that serves their own agendaIn principle, a machine learning training process based on reinforcement learning could spit out any of these three attitudes, because all three would perform roughly equally well on the tests we give them, and ‘performs well on tests’ is how these models are selected.But while that’s true in principle, maybe it’s not something that could plausibly happen in the real world. Af...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Accidentally teaching AI models to deceive us (Ajeya Cotra on The 80,000 Hours Podcast), published by 80000 Hours on May 15, 2023 on The Effective Altruism Forum.Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Ajeya Cotra on accidentally teaching AI models to deceive us.You can click through for the audio, a full transcript, and related links. Below is the episode summary and some key excerpts.Episode SummaryI don’t know yet what suite of tests exactly you could show me, and what arguments you could show me, that would make me actually convinced that this model has a sufficiently deeply rooted motivation to not try to escape human control. I think that’s, in some sense, the whole heart of the alignment problem.And I think for a long time, labs have just been racing ahead, and they’ve had the justification — which I think was reasonable for a while — of like, “Come on, of course these systems we’re building aren’t going to take over the world.” As soon as that starts to change, I want a forcing function that makes it so that the labs now have the incentive to come up with the kinds of tests that should actually be persuasive.Ajeya CotraImagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don’t get to see any resumes or do reference checks. And because you’re so rich, tonnes of people apply for the job — for all sorts of reasons.Today’s guest Ajeya Cotra — senior research analyst at Open Philanthropy — argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods.As she explains, such an eight-year-old faces a challenging problem. In the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest. But there are probably other characters too — like people who will pretend to care about you while you’re monitoring them, but intend to use the job to enrich themselves as soon as they think they can get away with it.Like a child trying to judge adults, at some point humans will be required to judge the trustworthiness and reliability of machine learning models that are as goal-oriented as people, and greatly outclass them in knowledge, experience, breadth, and speed. Tricky!Can’t we rely on how well models have performed at tasks during training to guide us? Ajeya worries that it won’t work. The trouble is that three different sorts of models will all produce the same output during training, but could behave very differently once deployed in a setting that allows their true colours to come through. She describes three such motivational archetypes:Saints — models that care about doing what we really wantSycophants — models that just want us to say they’ve done a good job, even if they get that praise by taking actions they know we wouldn’t want them toSchemers — models that don’t care about us or our interests at all, who are just pleasing us so long as that serves their own agendaIn principle, a machine learning training process based on reinforcement learning could spit out any of these three attitudes, because all three would perform roughly equally well on the tests we give them, and ‘performs well on tests’ is how these models are selected.But while that’s true in principle, maybe it’s not something that could plausibly happen in the real world. Af...]]>
80000 Hours https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 25:18 None full 5966
DAD4777WJqFe9jZZM_NL_EA_EA EA - A flaw in a simple version of worldview diversification by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A flaw in a simple version of worldview diversification, published by NunoSempere on May 15, 2023 on The Effective Altruism Forum.SummaryI consider a simple version of “worldview diversification”: allocating a set amount of money per cause area per year. I explain in probably too much detail how that setup leads to inconsistent relative values from year to year and from cause area to cause area. This implies that there might be Pareto improvements, i.e., moves that you could make that will result in strictly better outcomes. However, identifying those Pareto improvements wouldn’t be trivial, and would probably require more investment into estimation and cross-area comparison capabilities.1More elaborate versions of worldview diversification are probably able to fix this particular flaw, for example by instituting trading between the different worldview—thought that trading does ultimately have to happen. However, I view those solutions as hacks, and I suspect that the problem I outline in this post is indicative of deeper problems with the overall approach of worldview diversification.The main flaw: inconsistent relative valuesThis section perhaps has too much detail to arrive at a fairly intuitive point. I thought this was worth doing because I find the point that there is a possible Pareto improvement on the table a powerful argument, and I didn’t want to hand-wave it. But the reader might want to skip to the next sections after getting the gist.Deducing bounds for relative values from revealed preferencesSuppose that you order the ex-ante values of grants in different cause areas. The areas could be global health and development, animal welfare, speculative long-termism, etc. Their values could be given in QALYs (quality-adjusted life-years), sentience-adjusted QALYs, expected reduction in existential risk, but also in some relative unit2.For simplicity, let us just pick the case where there are two cause areas:More undilluted shades represent more valuable grants (e.g., larger reductions per dollar: of human suffering, animal suffering or existential risk), and lighter shades represent less valuable grants. Due to diminishing marginal returns, I’ve drawn the most valuable grants as smaller, though this doesn’t particularly matter.Now, we can augment the picture by also considering the marginal grants which didn’t get funded.In particular, imagine that the marginal grant which didn’t get funded for cause #1 has the same size as the marginal grant that did get funded for cause #2 (this doesn’t affect the thrust of the argument, it just makes it more apparent):Now, from this, we can deduce some bounds on relative values:In words rather than in shades of colour, this would be:Spending L1 dollars at cost-effectiveness A greens/$ is better than spending L1 dollars at cost-effectiveness B reds/$Spending L2 dollars at cost-effectiveness X reds/$ is better than spending L2 dollars at cost-effectiveness Y greens/$Or, dividing by L1 and L2,A greens is better than B redsX reds is better than Y redsIn colors, this would correspond to all four squares having the same size:Giving some values, this could be:10 greens is better than 2 reds3 reds is better than 5 greensFrom this we could deduce that 6 reds > 10 greens > 2 reds, or that one green is worth between 0.2 and 0.6 reds.But now there comes a new yearBut the above was for one year. Now comes another year, with its own set of grants. But we are keeping the amount we allocate to each area constant.It’s been a less promising year for green, and a more promising year for red, . So this means that some of the stuff that wasn’t funded last year for green is funded now, and some of the stuff that was funded last year for red isn’t funded now:Now we can do the same comparisons as the last time:And when ...]]>
NunoSempere https://forum.effectivealtruism.org/posts/DAD4777WJqFe9jZZM/a-flaw-in-a-simple-version-of-worldview-diversification Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A flaw in a simple version of worldview diversification, published by NunoSempere on May 15, 2023 on The Effective Altruism Forum.SummaryI consider a simple version of “worldview diversification”: allocating a set amount of money per cause area per year. I explain in probably too much detail how that setup leads to inconsistent relative values from year to year and from cause area to cause area. This implies that there might be Pareto improvements, i.e., moves that you could make that will result in strictly better outcomes. However, identifying those Pareto improvements wouldn’t be trivial, and would probably require more investment into estimation and cross-area comparison capabilities.1More elaborate versions of worldview diversification are probably able to fix this particular flaw, for example by instituting trading between the different worldview—thought that trading does ultimately have to happen. However, I view those solutions as hacks, and I suspect that the problem I outline in this post is indicative of deeper problems with the overall approach of worldview diversification.The main flaw: inconsistent relative valuesThis section perhaps has too much detail to arrive at a fairly intuitive point. I thought this was worth doing because I find the point that there is a possible Pareto improvement on the table a powerful argument, and I didn’t want to hand-wave it. But the reader might want to skip to the next sections after getting the gist.Deducing bounds for relative values from revealed preferencesSuppose that you order the ex-ante values of grants in different cause areas. The areas could be global health and development, animal welfare, speculative long-termism, etc. Their values could be given in QALYs (quality-adjusted life-years), sentience-adjusted QALYs, expected reduction in existential risk, but also in some relative unit2.For simplicity, let us just pick the case where there are two cause areas:More undilluted shades represent more valuable grants (e.g., larger reductions per dollar: of human suffering, animal suffering or existential risk), and lighter shades represent less valuable grants. Due to diminishing marginal returns, I’ve drawn the most valuable grants as smaller, though this doesn’t particularly matter.Now, we can augment the picture by also considering the marginal grants which didn’t get funded.In particular, imagine that the marginal grant which didn’t get funded for cause #1 has the same size as the marginal grant that did get funded for cause #2 (this doesn’t affect the thrust of the argument, it just makes it more apparent):Now, from this, we can deduce some bounds on relative values:In words rather than in shades of colour, this would be:Spending L1 dollars at cost-effectiveness A greens/$ is better than spending L1 dollars at cost-effectiveness B reds/$Spending L2 dollars at cost-effectiveness X reds/$ is better than spending L2 dollars at cost-effectiveness Y greens/$Or, dividing by L1 and L2,A greens is better than B redsX reds is better than Y redsIn colors, this would correspond to all four squares having the same size:Giving some values, this could be:10 greens is better than 2 reds3 reds is better than 5 greensFrom this we could deduce that 6 reds > 10 greens > 2 reds, or that one green is worth between 0.2 and 0.6 reds.But now there comes a new yearBut the above was for one year. Now comes another year, with its own set of grants. But we are keeping the amount we allocate to each area constant.It’s been a less promising year for green, and a more promising year for red, . So this means that some of the stuff that wasn’t funded last year for green is funded now, and some of the stuff that was funded last year for red isn’t funded now:Now we can do the same comparisons as the last time:And when ...]]>
Mon, 15 May 2023 23:54:36 +0000 EA - A flaw in a simple version of worldview diversification by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A flaw in a simple version of worldview diversification, published by NunoSempere on May 15, 2023 on The Effective Altruism Forum.SummaryI consider a simple version of “worldview diversification”: allocating a set amount of money per cause area per year. I explain in probably too much detail how that setup leads to inconsistent relative values from year to year and from cause area to cause area. This implies that there might be Pareto improvements, i.e., moves that you could make that will result in strictly better outcomes. However, identifying those Pareto improvements wouldn’t be trivial, and would probably require more investment into estimation and cross-area comparison capabilities.1More elaborate versions of worldview diversification are probably able to fix this particular flaw, for example by instituting trading between the different worldview—thought that trading does ultimately have to happen. However, I view those solutions as hacks, and I suspect that the problem I outline in this post is indicative of deeper problems with the overall approach of worldview diversification.The main flaw: inconsistent relative valuesThis section perhaps has too much detail to arrive at a fairly intuitive point. I thought this was worth doing because I find the point that there is a possible Pareto improvement on the table a powerful argument, and I didn’t want to hand-wave it. But the reader might want to skip to the next sections after getting the gist.Deducing bounds for relative values from revealed preferencesSuppose that you order the ex-ante values of grants in different cause areas. The areas could be global health and development, animal welfare, speculative long-termism, etc. Their values could be given in QALYs (quality-adjusted life-years), sentience-adjusted QALYs, expected reduction in existential risk, but also in some relative unit2.For simplicity, let us just pick the case where there are two cause areas:More undilluted shades represent more valuable grants (e.g., larger reductions per dollar: of human suffering, animal suffering or existential risk), and lighter shades represent less valuable grants. Due to diminishing marginal returns, I’ve drawn the most valuable grants as smaller, though this doesn’t particularly matter.Now, we can augment the picture by also considering the marginal grants which didn’t get funded.In particular, imagine that the marginal grant which didn’t get funded for cause #1 has the same size as the marginal grant that did get funded for cause #2 (this doesn’t affect the thrust of the argument, it just makes it more apparent):Now, from this, we can deduce some bounds on relative values:In words rather than in shades of colour, this would be:Spending L1 dollars at cost-effectiveness A greens/$ is better than spending L1 dollars at cost-effectiveness B reds/$Spending L2 dollars at cost-effectiveness X reds/$ is better than spending L2 dollars at cost-effectiveness Y greens/$Or, dividing by L1 and L2,A greens is better than B redsX reds is better than Y redsIn colors, this would correspond to all four squares having the same size:Giving some values, this could be:10 greens is better than 2 reds3 reds is better than 5 greensFrom this we could deduce that 6 reds > 10 greens > 2 reds, or that one green is worth between 0.2 and 0.6 reds.But now there comes a new yearBut the above was for one year. Now comes another year, with its own set of grants. But we are keeping the amount we allocate to each area constant.It’s been a less promising year for green, and a more promising year for red, . So this means that some of the stuff that wasn’t funded last year for green is funded now, and some of the stuff that was funded last year for red isn’t funded now:Now we can do the same comparisons as the last time:And when ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A flaw in a simple version of worldview diversification, published by NunoSempere on May 15, 2023 on The Effective Altruism Forum.SummaryI consider a simple version of “worldview diversification”: allocating a set amount of money per cause area per year. I explain in probably too much detail how that setup leads to inconsistent relative values from year to year and from cause area to cause area. This implies that there might be Pareto improvements, i.e., moves that you could make that will result in strictly better outcomes. However, identifying those Pareto improvements wouldn’t be trivial, and would probably require more investment into estimation and cross-area comparison capabilities.1More elaborate versions of worldview diversification are probably able to fix this particular flaw, for example by instituting trading between the different worldview—thought that trading does ultimately have to happen. However, I view those solutions as hacks, and I suspect that the problem I outline in this post is indicative of deeper problems with the overall approach of worldview diversification.The main flaw: inconsistent relative valuesThis section perhaps has too much detail to arrive at a fairly intuitive point. I thought this was worth doing because I find the point that there is a possible Pareto improvement on the table a powerful argument, and I didn’t want to hand-wave it. But the reader might want to skip to the next sections after getting the gist.Deducing bounds for relative values from revealed preferencesSuppose that you order the ex-ante values of grants in different cause areas. The areas could be global health and development, animal welfare, speculative long-termism, etc. Their values could be given in QALYs (quality-adjusted life-years), sentience-adjusted QALYs, expected reduction in existential risk, but also in some relative unit2.For simplicity, let us just pick the case where there are two cause areas:More undilluted shades represent more valuable grants (e.g., larger reductions per dollar: of human suffering, animal suffering or existential risk), and lighter shades represent less valuable grants. Due to diminishing marginal returns, I’ve drawn the most valuable grants as smaller, though this doesn’t particularly matter.Now, we can augment the picture by also considering the marginal grants which didn’t get funded.In particular, imagine that the marginal grant which didn’t get funded for cause #1 has the same size as the marginal grant that did get funded for cause #2 (this doesn’t affect the thrust of the argument, it just makes it more apparent):Now, from this, we can deduce some bounds on relative values:In words rather than in shades of colour, this would be:Spending L1 dollars at cost-effectiveness A greens/$ is better than spending L1 dollars at cost-effectiveness B reds/$Spending L2 dollars at cost-effectiveness X reds/$ is better than spending L2 dollars at cost-effectiveness Y greens/$Or, dividing by L1 and L2,A greens is better than B redsX reds is better than Y redsIn colors, this would correspond to all four squares having the same size:Giving some values, this could be:10 greens is better than 2 reds3 reds is better than 5 greensFrom this we could deduce that 6 reds > 10 greens > 2 reds, or that one green is worth between 0.2 and 0.6 reds.But now there comes a new yearBut the above was for one year. Now comes another year, with its own set of grants. But we are keeping the amount we allocate to each area constant.It’s been a less promising year for green, and a more promising year for red, . So this means that some of the stuff that wasn’t funded last year for green is funded now, and some of the stuff that was funded last year for red isn’t funded now:Now we can do the same comparisons as the last time:And when ...]]>
NunoSempere https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:57 None full 5961
AJDgnPXqZ48eSCjEQ_NL_EA_EA EA - EA Survey 2022: Demographics by David Moss Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Survey 2022: Demographics, published by David Moss on May 15, 2023 on The Effective Altruism Forum.SummaryGenderSince 2020, the percentage of women in our sample has increased (26.5% vs 29.3%) and the percentage of men decreased (70.5% vs 66.2%).More recent cohorts of EAs have lower percentages of men than earlier cohorts. This pattern is compatible with either increased recruitment of women, non-binary or other non-male EAs in more recent years and/or men being more likely to drop out of EA.We examine differences between cohorts across years and find no evidence of significant differences in dropout between men and women.Race/ethnicityThe percentage of white respondents in our sample (76.26%) has remained fairly flat over time.More recent cohorts contain lower percentages of white respondents (compatible with either increased recruitment and/or lower dropout of non-white respondents).We also examine differences between cohorts across years for race/ethnicity, but do not find a consistent pattern.AgeThe average age at which people first get involved in EA (26) has continued to increase.Education and employmentThe percentage of students in the movement has decreased since 2020 and the percentage in employment has increased. However, just over 40% of those who joined EA in the last year were students.Universities11.8% of respondents attended the top 10 (QS) ranked universities globally.Career strategiesThe most commonly cited strategy for impact in one’s career was ‘research’ (20.61%) followed by ‘still deciding’ (19.63%).More than twice as many respondents selected research as selected ‘earning to give’ (10.24%), organization-building skills (ops, management), government and policy, entrepreneurship or community building (<10% each).Men were significantly more likely to select research and significantly less likely to select organization-building skills. We found no significant differences by race/ethnicity.Highly engaged EAs were much more likely to select research (25.0% vs 15.1%) and much less likely to select earning to give (5.7% vs 15.7%).PoliticsRespondents continue to be strongly left-leaning politically (63.6% vs 2.4% right-leaning).Our 2022 sample was slightly more left-leaning than in 2019.ReligionA large majority of respondents (69.58%) were atheist, agnostic or non-religious (similar to 2019).Introduction3567 respondents completed the 2022 EA Survey.A recurring observation in previous surveys is that the community is relatively lacking in demographic diversity on the dimensions of gender, age, race/ethnicity, and nationality. In this report, we examine the demographic composition of the community, how it has changed over time, and how this is related to different outcomes.In future posts in this series we will examine differences in experiences of and satisfaction with the community (see posts from 2019 and 2020), and explore the geography of the EA community.In a forthcoming follow-up survey, we will also be examining experiences related to gender and community satisfaction in more detail.Basic DemographicsGenderThe percentage of women has slightly increased since 2020 (26.5% to 29.3%), while the percentage of men has slightly decreased (70.5% to 66.2%).Gender across survey yearsLooking across different survey years, we can see that there is now a higher percentage of women in our sample than in the earliest years. In the earliest EA Surveys, we saw just over 75% men, whereas in the most recent survey, we see just over 65%.Gender across cohortsLooking across cohorts (EAs who reported first getting involved in a given year), we see that more recent cohorts contain more women than men. This is compatible with either/both increased recruitment of women (or decreased recruitment of men) or disproportionate attrition of...]]>
David Moss https://forum.effectivealtruism.org/posts/AJDgnPXqZ48eSCjEQ/ea-survey-2022-demographics Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Survey 2022: Demographics, published by David Moss on May 15, 2023 on The Effective Altruism Forum.SummaryGenderSince 2020, the percentage of women in our sample has increased (26.5% vs 29.3%) and the percentage of men decreased (70.5% vs 66.2%).More recent cohorts of EAs have lower percentages of men than earlier cohorts. This pattern is compatible with either increased recruitment of women, non-binary or other non-male EAs in more recent years and/or men being more likely to drop out of EA.We examine differences between cohorts across years and find no evidence of significant differences in dropout between men and women.Race/ethnicityThe percentage of white respondents in our sample (76.26%) has remained fairly flat over time.More recent cohorts contain lower percentages of white respondents (compatible with either increased recruitment and/or lower dropout of non-white respondents).We also examine differences between cohorts across years for race/ethnicity, but do not find a consistent pattern.AgeThe average age at which people first get involved in EA (26) has continued to increase.Education and employmentThe percentage of students in the movement has decreased since 2020 and the percentage in employment has increased. However, just over 40% of those who joined EA in the last year were students.Universities11.8% of respondents attended the top 10 (QS) ranked universities globally.Career strategiesThe most commonly cited strategy for impact in one’s career was ‘research’ (20.61%) followed by ‘still deciding’ (19.63%).More than twice as many respondents selected research as selected ‘earning to give’ (10.24%), organization-building skills (ops, management), government and policy, entrepreneurship or community building (<10% each).Men were significantly more likely to select research and significantly less likely to select organization-building skills. We found no significant differences by race/ethnicity.Highly engaged EAs were much more likely to select research (25.0% vs 15.1%) and much less likely to select earning to give (5.7% vs 15.7%).PoliticsRespondents continue to be strongly left-leaning politically (63.6% vs 2.4% right-leaning).Our 2022 sample was slightly more left-leaning than in 2019.ReligionA large majority of respondents (69.58%) were atheist, agnostic or non-religious (similar to 2019).Introduction3567 respondents completed the 2022 EA Survey.A recurring observation in previous surveys is that the community is relatively lacking in demographic diversity on the dimensions of gender, age, race/ethnicity, and nationality. In this report, we examine the demographic composition of the community, how it has changed over time, and how this is related to different outcomes.In future posts in this series we will examine differences in experiences of and satisfaction with the community (see posts from 2019 and 2020), and explore the geography of the EA community.In a forthcoming follow-up survey, we will also be examining experiences related to gender and community satisfaction in more detail.Basic DemographicsGenderThe percentage of women has slightly increased since 2020 (26.5% to 29.3%), while the percentage of men has slightly decreased (70.5% to 66.2%).Gender across survey yearsLooking across different survey years, we can see that there is now a higher percentage of women in our sample than in the earliest years. In the earliest EA Surveys, we saw just over 75% men, whereas in the most recent survey, we see just over 65%.Gender across cohortsLooking across cohorts (EAs who reported first getting involved in a given year), we see that more recent cohorts contain more women than men. This is compatible with either/both increased recruitment of women (or decreased recruitment of men) or disproportionate attrition of...]]>
Mon, 15 May 2023 19:10:40 +0000 EA - EA Survey 2022: Demographics by David Moss Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Survey 2022: Demographics, published by David Moss on May 15, 2023 on The Effective Altruism Forum.SummaryGenderSince 2020, the percentage of women in our sample has increased (26.5% vs 29.3%) and the percentage of men decreased (70.5% vs 66.2%).More recent cohorts of EAs have lower percentages of men than earlier cohorts. This pattern is compatible with either increased recruitment of women, non-binary or other non-male EAs in more recent years and/or men being more likely to drop out of EA.We examine differences between cohorts across years and find no evidence of significant differences in dropout between men and women.Race/ethnicityThe percentage of white respondents in our sample (76.26%) has remained fairly flat over time.More recent cohorts contain lower percentages of white respondents (compatible with either increased recruitment and/or lower dropout of non-white respondents).We also examine differences between cohorts across years for race/ethnicity, but do not find a consistent pattern.AgeThe average age at which people first get involved in EA (26) has continued to increase.Education and employmentThe percentage of students in the movement has decreased since 2020 and the percentage in employment has increased. However, just over 40% of those who joined EA in the last year were students.Universities11.8% of respondents attended the top 10 (QS) ranked universities globally.Career strategiesThe most commonly cited strategy for impact in one’s career was ‘research’ (20.61%) followed by ‘still deciding’ (19.63%).More than twice as many respondents selected research as selected ‘earning to give’ (10.24%), organization-building skills (ops, management), government and policy, entrepreneurship or community building (<10% each).Men were significantly more likely to select research and significantly less likely to select organization-building skills. We found no significant differences by race/ethnicity.Highly engaged EAs were much more likely to select research (25.0% vs 15.1%) and much less likely to select earning to give (5.7% vs 15.7%).PoliticsRespondents continue to be strongly left-leaning politically (63.6% vs 2.4% right-leaning).Our 2022 sample was slightly more left-leaning than in 2019.ReligionA large majority of respondents (69.58%) were atheist, agnostic or non-religious (similar to 2019).Introduction3567 respondents completed the 2022 EA Survey.A recurring observation in previous surveys is that the community is relatively lacking in demographic diversity on the dimensions of gender, age, race/ethnicity, and nationality. In this report, we examine the demographic composition of the community, how it has changed over time, and how this is related to different outcomes.In future posts in this series we will examine differences in experiences of and satisfaction with the community (see posts from 2019 and 2020), and explore the geography of the EA community.In a forthcoming follow-up survey, we will also be examining experiences related to gender and community satisfaction in more detail.Basic DemographicsGenderThe percentage of women has slightly increased since 2020 (26.5% to 29.3%), while the percentage of men has slightly decreased (70.5% to 66.2%).Gender across survey yearsLooking across different survey years, we can see that there is now a higher percentage of women in our sample than in the earliest years. In the earliest EA Surveys, we saw just over 75% men, whereas in the most recent survey, we see just over 65%.Gender across cohortsLooking across cohorts (EAs who reported first getting involved in a given year), we see that more recent cohorts contain more women than men. This is compatible with either/both increased recruitment of women (or decreased recruitment of men) or disproportionate attrition of...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Survey 2022: Demographics, published by David Moss on May 15, 2023 on The Effective Altruism Forum.SummaryGenderSince 2020, the percentage of women in our sample has increased (26.5% vs 29.3%) and the percentage of men decreased (70.5% vs 66.2%).More recent cohorts of EAs have lower percentages of men than earlier cohorts. This pattern is compatible with either increased recruitment of women, non-binary or other non-male EAs in more recent years and/or men being more likely to drop out of EA.We examine differences between cohorts across years and find no evidence of significant differences in dropout between men and women.Race/ethnicityThe percentage of white respondents in our sample (76.26%) has remained fairly flat over time.More recent cohorts contain lower percentages of white respondents (compatible with either increased recruitment and/or lower dropout of non-white respondents).We also examine differences between cohorts across years for race/ethnicity, but do not find a consistent pattern.AgeThe average age at which people first get involved in EA (26) has continued to increase.Education and employmentThe percentage of students in the movement has decreased since 2020 and the percentage in employment has increased. However, just over 40% of those who joined EA in the last year were students.Universities11.8% of respondents attended the top 10 (QS) ranked universities globally.Career strategiesThe most commonly cited strategy for impact in one’s career was ‘research’ (20.61%) followed by ‘still deciding’ (19.63%).More than twice as many respondents selected research as selected ‘earning to give’ (10.24%), organization-building skills (ops, management), government and policy, entrepreneurship or community building (<10% each).Men were significantly more likely to select research and significantly less likely to select organization-building skills. We found no significant differences by race/ethnicity.Highly engaged EAs were much more likely to select research (25.0% vs 15.1%) and much less likely to select earning to give (5.7% vs 15.7%).PoliticsRespondents continue to be strongly left-leaning politically (63.6% vs 2.4% right-leaning).Our 2022 sample was slightly more left-leaning than in 2019.ReligionA large majority of respondents (69.58%) were atheist, agnostic or non-religious (similar to 2019).Introduction3567 respondents completed the 2022 EA Survey.A recurring observation in previous surveys is that the community is relatively lacking in demographic diversity on the dimensions of gender, age, race/ethnicity, and nationality. In this report, we examine the demographic composition of the community, how it has changed over time, and how this is related to different outcomes.In future posts in this series we will examine differences in experiences of and satisfaction with the community (see posts from 2019 and 2020), and explore the geography of the EA community.In a forthcoming follow-up survey, we will also be examining experiences related to gender and community satisfaction in more detail.Basic DemographicsGenderThe percentage of women has slightly increased since 2020 (26.5% to 29.3%), while the percentage of men has slightly decreased (70.5% to 66.2%).Gender across survey yearsLooking across different survey years, we can see that there is now a higher percentage of women in our sample than in the earliest years. In the earliest EA Surveys, we saw just over 75% men, whereas in the most recent survey, we see just over 65%.Gender across cohortsLooking across cohorts (EAs who reported first getting involved in a given year), we see that more recent cohorts contain more women than men. This is compatible with either/both increased recruitment of women (or decreased recruitment of men) or disproportionate attrition of...]]>
David Moss https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:03 None full 5959
ZtZmkgDW6MH8AEEK6_NL_EA_EA EA - How much do markets value Open AI? by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much do markets value Open AI?, published by Ben West on May 14, 2023 on The Effective Altruism Forum.Summary: A BOTEC indicates that Open AI might have been valued at 220-430x their annual recurring revenue, which is high but not unheard of. Various factors make this multiple hard to interpret, but it generally does not seem consistent with investors believing that Open AI will capture revenue consistent with creating transformative AI.OverviewEpistemic status: revenue multiples are intended as a rough estimate of how much investors believe a company is going to grow, and I would be surprised if my estimated revenue multiple was off by more than a factor of 5. But the "strategic considerations" portion of this is a bunch of wild guesses that I feel much less confident about.There has been some discussion about how much markets are expecting transformative AI, e.g. here. One obvious question is "why isn't Open AI valued at a kajillion dollars?"I estimate that Microsoft's investment implicitly valued OAI at 220-430x their annual recurring revenue. This is high - average multiples are around 7x, but some pharmaceutical companies have multiples > 1000x. This would seem to support the argument that investors think that OAI is exceptional (but not "equivalent to the Industrial Revolution" exceptional).However, Microsoft received a set of benefits from the deal which make the EV multiple overstated. Based on adjustments, I can see the actual implied multiple being anything from -2,200x to 3,200x.(Negative multiples imply that Microsoft got more value from access to OAI models than the amount they invested and are therefore willing to treat their investment as a liability rather than an asset.)One particularly confusing fact is that OAI's valuation appears to have gone from $14 billion in 2021 to $19 billion in 2023. Even ignoring anything about transformative AI, I would have expected that the success of ChatGPT etc. should have resulted in a more than a 35% increase.Qualitatively, my guess is that this was a nice but not exceptional deal for OAI, and I feel confused why they took it. One possible explanation is “the kind of people who can deploy $10B of capital are institutionally incapable of investing at > 200x revenue multiples”, which doesn’t seem crazy to me. Another explanation is that this is basically guaranteeing them a massive customer (Microsoft), and they are willing to give up some stock to get that customer.Squiggle model hereIt would be cool if someone did a similar write up about Anthropic, although publicly available information on them is slim. My guess is that they will have an even higher revenue multiple (maybe infinite? I'm not sure if they had revenue when they first raised).DetailsValuation: $19BA bunch of news sites (e.g. here) reported that Microsoft invested $10 billion to value OAI at $29 billion. I assume that this valuation is post money, meaning the pre-money valuation is 19 billion.Although this site says that they were valued at $14 billion in 2021, meaning that they only increased in value 35% the past two years. This seems weird, but I guess it is consistent with the view that markets aren’t valuing the possibility of TAI.Revenue: $54M/yearReuters claims they are projecting $200M revenue in 2023.FastCompany says they made $30 million in 2022.If the deal closed in early 2023, then presumably annual projections of their monthly revenue were higher than $30 million, though it's unclear how much.Let’s arbitrarily say MRR will increase 10x this year, implying a monthly growth rate of 10^(1/12) = 1.22Solving the geometric series of 200 = x (1-1.22^12) / (1 -1.22) we get that their first month revenue is $4.46M, a run rate of $53.52M/yearOther factors:The vast majority of the investment is going to be spent on Micros...]]>
Ben West https://forum.effectivealtruism.org/posts/ZtZmkgDW6MH8AEEK6/how-much-do-markets-value-open-ai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much do markets value Open AI?, published by Ben West on May 14, 2023 on The Effective Altruism Forum.Summary: A BOTEC indicates that Open AI might have been valued at 220-430x their annual recurring revenue, which is high but not unheard of. Various factors make this multiple hard to interpret, but it generally does not seem consistent with investors believing that Open AI will capture revenue consistent with creating transformative AI.OverviewEpistemic status: revenue multiples are intended as a rough estimate of how much investors believe a company is going to grow, and I would be surprised if my estimated revenue multiple was off by more than a factor of 5. But the "strategic considerations" portion of this is a bunch of wild guesses that I feel much less confident about.There has been some discussion about how much markets are expecting transformative AI, e.g. here. One obvious question is "why isn't Open AI valued at a kajillion dollars?"I estimate that Microsoft's investment implicitly valued OAI at 220-430x their annual recurring revenue. This is high - average multiples are around 7x, but some pharmaceutical companies have multiples > 1000x. This would seem to support the argument that investors think that OAI is exceptional (but not "equivalent to the Industrial Revolution" exceptional).However, Microsoft received a set of benefits from the deal which make the EV multiple overstated. Based on adjustments, I can see the actual implied multiple being anything from -2,200x to 3,200x.(Negative multiples imply that Microsoft got more value from access to OAI models than the amount they invested and are therefore willing to treat their investment as a liability rather than an asset.)One particularly confusing fact is that OAI's valuation appears to have gone from $14 billion in 2021 to $19 billion in 2023. Even ignoring anything about transformative AI, I would have expected that the success of ChatGPT etc. should have resulted in a more than a 35% increase.Qualitatively, my guess is that this was a nice but not exceptional deal for OAI, and I feel confused why they took it. One possible explanation is “the kind of people who can deploy $10B of capital are institutionally incapable of investing at > 200x revenue multiples”, which doesn’t seem crazy to me. Another explanation is that this is basically guaranteeing them a massive customer (Microsoft), and they are willing to give up some stock to get that customer.Squiggle model hereIt would be cool if someone did a similar write up about Anthropic, although publicly available information on them is slim. My guess is that they will have an even higher revenue multiple (maybe infinite? I'm not sure if they had revenue when they first raised).DetailsValuation: $19BA bunch of news sites (e.g. here) reported that Microsoft invested $10 billion to value OAI at $29 billion. I assume that this valuation is post money, meaning the pre-money valuation is 19 billion.Although this site says that they were valued at $14 billion in 2021, meaning that they only increased in value 35% the past two years. This seems weird, but I guess it is consistent with the view that markets aren’t valuing the possibility of TAI.Revenue: $54M/yearReuters claims they are projecting $200M revenue in 2023.FastCompany says they made $30 million in 2022.If the deal closed in early 2023, then presumably annual projections of their monthly revenue were higher than $30 million, though it's unclear how much.Let’s arbitrarily say MRR will increase 10x this year, implying a monthly growth rate of 10^(1/12) = 1.22Solving the geometric series of 200 = x (1-1.22^12) / (1 -1.22) we get that their first month revenue is $4.46M, a run rate of $53.52M/yearOther factors:The vast majority of the investment is going to be spent on Micros...]]>
Mon, 15 May 2023 09:31:53 +0000 EA - How much do markets value Open AI? by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much do markets value Open AI?, published by Ben West on May 14, 2023 on The Effective Altruism Forum.Summary: A BOTEC indicates that Open AI might have been valued at 220-430x their annual recurring revenue, which is high but not unheard of. Various factors make this multiple hard to interpret, but it generally does not seem consistent with investors believing that Open AI will capture revenue consistent with creating transformative AI.OverviewEpistemic status: revenue multiples are intended as a rough estimate of how much investors believe a company is going to grow, and I would be surprised if my estimated revenue multiple was off by more than a factor of 5. But the "strategic considerations" portion of this is a bunch of wild guesses that I feel much less confident about.There has been some discussion about how much markets are expecting transformative AI, e.g. here. One obvious question is "why isn't Open AI valued at a kajillion dollars?"I estimate that Microsoft's investment implicitly valued OAI at 220-430x their annual recurring revenue. This is high - average multiples are around 7x, but some pharmaceutical companies have multiples > 1000x. This would seem to support the argument that investors think that OAI is exceptional (but not "equivalent to the Industrial Revolution" exceptional).However, Microsoft received a set of benefits from the deal which make the EV multiple overstated. Based on adjustments, I can see the actual implied multiple being anything from -2,200x to 3,200x.(Negative multiples imply that Microsoft got more value from access to OAI models than the amount they invested and are therefore willing to treat their investment as a liability rather than an asset.)One particularly confusing fact is that OAI's valuation appears to have gone from $14 billion in 2021 to $19 billion in 2023. Even ignoring anything about transformative AI, I would have expected that the success of ChatGPT etc. should have resulted in a more than a 35% increase.Qualitatively, my guess is that this was a nice but not exceptional deal for OAI, and I feel confused why they took it. One possible explanation is “the kind of people who can deploy $10B of capital are institutionally incapable of investing at > 200x revenue multiples”, which doesn’t seem crazy to me. Another explanation is that this is basically guaranteeing them a massive customer (Microsoft), and they are willing to give up some stock to get that customer.Squiggle model hereIt would be cool if someone did a similar write up about Anthropic, although publicly available information on them is slim. My guess is that they will have an even higher revenue multiple (maybe infinite? I'm not sure if they had revenue when they first raised).DetailsValuation: $19BA bunch of news sites (e.g. here) reported that Microsoft invested $10 billion to value OAI at $29 billion. I assume that this valuation is post money, meaning the pre-money valuation is 19 billion.Although this site says that they were valued at $14 billion in 2021, meaning that they only increased in value 35% the past two years. This seems weird, but I guess it is consistent with the view that markets aren’t valuing the possibility of TAI.Revenue: $54M/yearReuters claims they are projecting $200M revenue in 2023.FastCompany says they made $30 million in 2022.If the deal closed in early 2023, then presumably annual projections of their monthly revenue were higher than $30 million, though it's unclear how much.Let’s arbitrarily say MRR will increase 10x this year, implying a monthly growth rate of 10^(1/12) = 1.22Solving the geometric series of 200 = x (1-1.22^12) / (1 -1.22) we get that their first month revenue is $4.46M, a run rate of $53.52M/yearOther factors:The vast majority of the investment is going to be spent on Micros...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much do markets value Open AI?, published by Ben West on May 14, 2023 on The Effective Altruism Forum.Summary: A BOTEC indicates that Open AI might have been valued at 220-430x their annual recurring revenue, which is high but not unheard of. Various factors make this multiple hard to interpret, but it generally does not seem consistent with investors believing that Open AI will capture revenue consistent with creating transformative AI.OverviewEpistemic status: revenue multiples are intended as a rough estimate of how much investors believe a company is going to grow, and I would be surprised if my estimated revenue multiple was off by more than a factor of 5. But the "strategic considerations" portion of this is a bunch of wild guesses that I feel much less confident about.There has been some discussion about how much markets are expecting transformative AI, e.g. here. One obvious question is "why isn't Open AI valued at a kajillion dollars?"I estimate that Microsoft's investment implicitly valued OAI at 220-430x their annual recurring revenue. This is high - average multiples are around 7x, but some pharmaceutical companies have multiples > 1000x. This would seem to support the argument that investors think that OAI is exceptional (but not "equivalent to the Industrial Revolution" exceptional).However, Microsoft received a set of benefits from the deal which make the EV multiple overstated. Based on adjustments, I can see the actual implied multiple being anything from -2,200x to 3,200x.(Negative multiples imply that Microsoft got more value from access to OAI models than the amount they invested and are therefore willing to treat their investment as a liability rather than an asset.)One particularly confusing fact is that OAI's valuation appears to have gone from $14 billion in 2021 to $19 billion in 2023. Even ignoring anything about transformative AI, I would have expected that the success of ChatGPT etc. should have resulted in a more than a 35% increase.Qualitatively, my guess is that this was a nice but not exceptional deal for OAI, and I feel confused why they took it. One possible explanation is “the kind of people who can deploy $10B of capital are institutionally incapable of investing at > 200x revenue multiples”, which doesn’t seem crazy to me. Another explanation is that this is basically guaranteeing them a massive customer (Microsoft), and they are willing to give up some stock to get that customer.Squiggle model hereIt would be cool if someone did a similar write up about Anthropic, although publicly available information on them is slim. My guess is that they will have an even higher revenue multiple (maybe infinite? I'm not sure if they had revenue when they first raised).DetailsValuation: $19BA bunch of news sites (e.g. here) reported that Microsoft invested $10 billion to value OAI at $29 billion. I assume that this valuation is post money, meaning the pre-money valuation is 19 billion.Although this site says that they were valued at $14 billion in 2021, meaning that they only increased in value 35% the past two years. This seems weird, but I guess it is consistent with the view that markets aren’t valuing the possibility of TAI.Revenue: $54M/yearReuters claims they are projecting $200M revenue in 2023.FastCompany says they made $30 million in 2022.If the deal closed in early 2023, then presumably annual projections of their monthly revenue were higher than $30 million, though it's unclear how much.Let’s arbitrarily say MRR will increase 10x this year, implying a monthly growth rate of 10^(1/12) = 1.22Solving the geometric series of 200 = x (1-1.22^12) / (1 -1.22) we get that their first month revenue is $4.46M, a run rate of $53.52M/yearOther factors:The vast majority of the investment is going to be spent on Micros...]]>
Ben West https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:48 None full 5955
ueQgG8tahDyDhjCHw_NL_EA_EA EA - Consider using the EA Gather Town for your online meetings and events by Siao Si Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider using the EA Gather Town for your online meetings and events, published by Siao Si on May 14, 2023 on The Effective Altruism Forum.Tl;drThe EA Gather town is cost-efficient to share among EA organisations - and gives you access to its own highly engaged EA community.Ways you can use the EA Gather spaceHave a permanent online ‘office’ for your organisation or group to cowork inRun meetings or internal events in the spaceRun public events, either for the broader EA community or other groupsThe space is run by a group of volunteers, so ping one of us on here if you want to talk about setting up a presence there - or just hop onto the space for a while and see how you find it!EA Gather UpdatesGather has reduced their free plan from 25 concurrent users to 10, which has forced many EA groups using it to seek different services, which may also charge or have their own limitations.However, CEA has generously agreed to fund the EA Gather Town instance for 30-40 concurrent users, a capacity which EA organisations can freely piggyback on.The rest of this post will discuss the benefits and drawbacks of using the space.Benefits‘Free’ virtual spaceGiven the uneven distribution of usage over a month, almost all of the capacity we’re paying for goes unused. Each org might have a weekly meeting of 1-2 hours, spiking the usage to near capacity, then have 0-5 users online for the rest of the week. So it’s very likely you could run your own weekly meetings at whatever time suits you without any concern about capacity, and virtually certain if you have any flexibility to adjust the times.We have a shared event calendar so that you can track whether your usage spikes might overlap. In that event, we have some excess funding to boost capacity.Integration with a growing section of the EA communityEA groups that have used the space regularly include EAGx Cambridge, Charity Entrepreneurship, Anima International, Training for Good, Alignment Ecosystem Development, EA France, EA Hong Kong, Metaculus, and more.Last year we were the meetup and hangout space for EAGxVirtual, and hopefully will host many more such events. We also have many independent regular users, who might be future staff of, donors to, beneficiaries of, or otherwise supporters of your group.It’s entirely up to you how public your office space is to other users. Some EA groups welcome guests, some are entirely private, some are in between. We’re currently exploring intuitive visual norms to clearly signal the preferences of different groups (feel free to suggest some!).Also, your office is not a prison - your members are always welcome and warmly encouraged to join us either for coworking or socialising in the common area :)Gather Town native functionality/default normsGather Town has a number of nice features that led us to originally set up this group there and to stay there since:Intuitive video call protocol (if you stand near someone, you’re in conversation with them)Embeddable webpages (so you can eg have native access to pomodoro timers)Cute aesthetic - your office can look like a virtual office, a virtual park, a virtual pirate ship, or anything else you can imagine! You can traverse on foot, by go-kart, or by magic portal :)Various other bits of functionality and suggested normsDrawbacksThe main reasons why you might not want to use the space:BuggynessGather is relatively new, and has a few more moving parts than other video calling services. It has a few intermittent glitches (most of which can be resolved by refreshing the page). Twice in the last 13 months or so I’m aware of it having gone down for about an hour. If you need perfect uptime, Zoom is probably better. Note there’s both an app and browser version, so one might work substantially better than the other at any given po...]]>
Siao Si https://forum.effectivealtruism.org/posts/ueQgG8tahDyDhjCHw/consider-using-the-ea-gather-town-for-your-online-meetings Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider using the EA Gather Town for your online meetings and events, published by Siao Si on May 14, 2023 on The Effective Altruism Forum.Tl;drThe EA Gather town is cost-efficient to share among EA organisations - and gives you access to its own highly engaged EA community.Ways you can use the EA Gather spaceHave a permanent online ‘office’ for your organisation or group to cowork inRun meetings or internal events in the spaceRun public events, either for the broader EA community or other groupsThe space is run by a group of volunteers, so ping one of us on here if you want to talk about setting up a presence there - or just hop onto the space for a while and see how you find it!EA Gather UpdatesGather has reduced their free plan from 25 concurrent users to 10, which has forced many EA groups using it to seek different services, which may also charge or have their own limitations.However, CEA has generously agreed to fund the EA Gather Town instance for 30-40 concurrent users, a capacity which EA organisations can freely piggyback on.The rest of this post will discuss the benefits and drawbacks of using the space.Benefits‘Free’ virtual spaceGiven the uneven distribution of usage over a month, almost all of the capacity we’re paying for goes unused. Each org might have a weekly meeting of 1-2 hours, spiking the usage to near capacity, then have 0-5 users online for the rest of the week. So it’s very likely you could run your own weekly meetings at whatever time suits you without any concern about capacity, and virtually certain if you have any flexibility to adjust the times.We have a shared event calendar so that you can track whether your usage spikes might overlap. In that event, we have some excess funding to boost capacity.Integration with a growing section of the EA communityEA groups that have used the space regularly include EAGx Cambridge, Charity Entrepreneurship, Anima International, Training for Good, Alignment Ecosystem Development, EA France, EA Hong Kong, Metaculus, and more.Last year we were the meetup and hangout space for EAGxVirtual, and hopefully will host many more such events. We also have many independent regular users, who might be future staff of, donors to, beneficiaries of, or otherwise supporters of your group.It’s entirely up to you how public your office space is to other users. Some EA groups welcome guests, some are entirely private, some are in between. We’re currently exploring intuitive visual norms to clearly signal the preferences of different groups (feel free to suggest some!).Also, your office is not a prison - your members are always welcome and warmly encouraged to join us either for coworking or socialising in the common area :)Gather Town native functionality/default normsGather Town has a number of nice features that led us to originally set up this group there and to stay there since:Intuitive video call protocol (if you stand near someone, you’re in conversation with them)Embeddable webpages (so you can eg have native access to pomodoro timers)Cute aesthetic - your office can look like a virtual office, a virtual park, a virtual pirate ship, or anything else you can imagine! You can traverse on foot, by go-kart, or by magic portal :)Various other bits of functionality and suggested normsDrawbacksThe main reasons why you might not want to use the space:BuggynessGather is relatively new, and has a few more moving parts than other video calling services. It has a few intermittent glitches (most of which can be resolved by refreshing the page). Twice in the last 13 months or so I’m aware of it having gone down for about an hour. If you need perfect uptime, Zoom is probably better. Note there’s both an app and browser version, so one might work substantially better than the other at any given po...]]>
Mon, 15 May 2023 05:53:24 +0000 EA - Consider using the EA Gather Town for your online meetings and events by Siao Si Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider using the EA Gather Town for your online meetings and events, published by Siao Si on May 14, 2023 on The Effective Altruism Forum.Tl;drThe EA Gather town is cost-efficient to share among EA organisations - and gives you access to its own highly engaged EA community.Ways you can use the EA Gather spaceHave a permanent online ‘office’ for your organisation or group to cowork inRun meetings or internal events in the spaceRun public events, either for the broader EA community or other groupsThe space is run by a group of volunteers, so ping one of us on here if you want to talk about setting up a presence there - or just hop onto the space for a while and see how you find it!EA Gather UpdatesGather has reduced their free plan from 25 concurrent users to 10, which has forced many EA groups using it to seek different services, which may also charge or have their own limitations.However, CEA has generously agreed to fund the EA Gather Town instance for 30-40 concurrent users, a capacity which EA organisations can freely piggyback on.The rest of this post will discuss the benefits and drawbacks of using the space.Benefits‘Free’ virtual spaceGiven the uneven distribution of usage over a month, almost all of the capacity we’re paying for goes unused. Each org might have a weekly meeting of 1-2 hours, spiking the usage to near capacity, then have 0-5 users online for the rest of the week. So it’s very likely you could run your own weekly meetings at whatever time suits you without any concern about capacity, and virtually certain if you have any flexibility to adjust the times.We have a shared event calendar so that you can track whether your usage spikes might overlap. In that event, we have some excess funding to boost capacity.Integration with a growing section of the EA communityEA groups that have used the space regularly include EAGx Cambridge, Charity Entrepreneurship, Anima International, Training for Good, Alignment Ecosystem Development, EA France, EA Hong Kong, Metaculus, and more.Last year we were the meetup and hangout space for EAGxVirtual, and hopefully will host many more such events. We also have many independent regular users, who might be future staff of, donors to, beneficiaries of, or otherwise supporters of your group.It’s entirely up to you how public your office space is to other users. Some EA groups welcome guests, some are entirely private, some are in between. We’re currently exploring intuitive visual norms to clearly signal the preferences of different groups (feel free to suggest some!).Also, your office is not a prison - your members are always welcome and warmly encouraged to join us either for coworking or socialising in the common area :)Gather Town native functionality/default normsGather Town has a number of nice features that led us to originally set up this group there and to stay there since:Intuitive video call protocol (if you stand near someone, you’re in conversation with them)Embeddable webpages (so you can eg have native access to pomodoro timers)Cute aesthetic - your office can look like a virtual office, a virtual park, a virtual pirate ship, or anything else you can imagine! You can traverse on foot, by go-kart, or by magic portal :)Various other bits of functionality and suggested normsDrawbacksThe main reasons why you might not want to use the space:BuggynessGather is relatively new, and has a few more moving parts than other video calling services. It has a few intermittent glitches (most of which can be resolved by refreshing the page). Twice in the last 13 months or so I’m aware of it having gone down for about an hour. If you need perfect uptime, Zoom is probably better. Note there’s both an app and browser version, so one might work substantially better than the other at any given po...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider using the EA Gather Town for your online meetings and events, published by Siao Si on May 14, 2023 on The Effective Altruism Forum.Tl;drThe EA Gather town is cost-efficient to share among EA organisations - and gives you access to its own highly engaged EA community.Ways you can use the EA Gather spaceHave a permanent online ‘office’ for your organisation or group to cowork inRun meetings or internal events in the spaceRun public events, either for the broader EA community or other groupsThe space is run by a group of volunteers, so ping one of us on here if you want to talk about setting up a presence there - or just hop onto the space for a while and see how you find it!EA Gather UpdatesGather has reduced their free plan from 25 concurrent users to 10, which has forced many EA groups using it to seek different services, which may also charge or have their own limitations.However, CEA has generously agreed to fund the EA Gather Town instance for 30-40 concurrent users, a capacity which EA organisations can freely piggyback on.The rest of this post will discuss the benefits and drawbacks of using the space.Benefits‘Free’ virtual spaceGiven the uneven distribution of usage over a month, almost all of the capacity we’re paying for goes unused. Each org might have a weekly meeting of 1-2 hours, spiking the usage to near capacity, then have 0-5 users online for the rest of the week. So it’s very likely you could run your own weekly meetings at whatever time suits you without any concern about capacity, and virtually certain if you have any flexibility to adjust the times.We have a shared event calendar so that you can track whether your usage spikes might overlap. In that event, we have some excess funding to boost capacity.Integration with a growing section of the EA communityEA groups that have used the space regularly include EAGx Cambridge, Charity Entrepreneurship, Anima International, Training for Good, Alignment Ecosystem Development, EA France, EA Hong Kong, Metaculus, and more.Last year we were the meetup and hangout space for EAGxVirtual, and hopefully will host many more such events. We also have many independent regular users, who might be future staff of, donors to, beneficiaries of, or otherwise supporters of your group.It’s entirely up to you how public your office space is to other users. Some EA groups welcome guests, some are entirely private, some are in between. We’re currently exploring intuitive visual norms to clearly signal the preferences of different groups (feel free to suggest some!).Also, your office is not a prison - your members are always welcome and warmly encouraged to join us either for coworking or socialising in the common area :)Gather Town native functionality/default normsGather Town has a number of nice features that led us to originally set up this group there and to stay there since:Intuitive video call protocol (if you stand near someone, you’re in conversation with them)Embeddable webpages (so you can eg have native access to pomodoro timers)Cute aesthetic - your office can look like a virtual office, a virtual park, a virtual pirate ship, or anything else you can imagine! You can traverse on foot, by go-kart, or by magic portal :)Various other bits of functionality and suggested normsDrawbacksThe main reasons why you might not want to use the space:BuggynessGather is relatively new, and has a few more moving parts than other video calling services. It has a few intermittent glitches (most of which can be resolved by refreshing the page). Twice in the last 13 months or so I’m aware of it having gone down for about an hour. If you need perfect uptime, Zoom is probably better. Note there’s both an app and browser version, so one might work substantially better than the other at any given po...]]>
Siao Si https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:19 None full 5954
4xd2EPXZ8GeAoCQSi_NL_EA_EA EA - Announcing the African EA Forum Competition - $4500 in prizes for excellent EA Forum Posts by Luke Eure Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the African EA Forum Competition - $4500 in prizes for excellent EA Forum Posts, published by Luke Eure on May 14, 2023 on The Effective Altruism Forum.The Effective Altruism Forum serves as the main platform for thought leadership and experience sharing within the effective altruism movement. It is the virtual gathering spot for EAs and the space where most EAs will go to check for resources, materials and experiences for various areas of interest.However, there are low levels of authorship by EAs from Africa, which gives the impression that not much is happening in the African space with regards to EA - despite there being 10+ local EA groups in Africa.There is a need for increased thought leadership, experience sharing, and engagement from an African perspective that reflects the growth of EA in Africa and brings African perspectives to the wider EA community.To get more Africans writing on the forum, I'm excited to announce the African EA Forum Competition.How it worksPrizes will be awarded for winners and runners up in each of three categories:Cause exploration: Explorations of new cause areas or contribution to thinking on existing cause areasAfrican perspectives on EA: Posts that challenge or complement existing EA perspectives and/or cause areas from an African worldviewSummaries of existing work: Informing the EA community about ongoing/completed research projects, sharing experiences within community groups, or reflections on being in EATop prize in each category will be awarded $1,000. Runner up in each category will get $500.No need to include the category in your submission.JudgingPosts will be judged based on the following rubric:Originality of insightClarityDiscussion provoked: (judged by the post’s forum score and number of comments)Persuasiveness of argumentThis is replaced by Relevance to forum readers for summaries of existing workThe members of our judging panel are:Alimi Salifou - Events and Outreach Coordinator for EA NigeriaJordan Pieters - independent community builderKaleem Ahmid - Project Manager at Effective VenturesDr. Kikiope Oluwarore - co-founder of Healthier HensZainab Taonga Chirwa - Chairperson for Effective Altruism UCTSupport offered to writersWe will support writers in the following two ways:Virtual training on EA Forum writing: there will be a ~1-2 hour workshop to build the capacity of individuals interested in writing forum postsOffering mentorship: curating a list of individuals who are happy to offer feedback to new writers to bolster the confidence of those writing about their posts, especially those who may feel intimidated by the seemingly high bar of forum postsWriters do not have to attend the forum or receive mentorship to be eligible for the competition. They only have to make a forum post within the competition period, and meet the eligibility criteria below.Please fill out this form if you would like to join the workshop or receive mentorship.Who is eligibleAnybody who is:a citizen of an African countrya children of a citizen of an Africa countryPosts should be made to the EA forum with the tag ‘Africa EA Forum Competition’ to make them easy to find, and then the writer should identify themself by filling out this form.Posts with multiple authors are eligible as long as 50% or more of the authors meet the above criteria.TimelineThe competition will run for 3 months. Any post made between 14 May 2023 - 11:59pm on Friday 18 August 2023 is eligible.Please reach out to me with any questions or feedback on how this competition could be better! (ljeure@gmail.com)Thanks to Daniel Yu for funding the prizes, Waithera Mwangi for support in writing this post, and to our judges for offering their timeThanks for listening. To help us out with The Nonlinear Library or to learn mo...]]>
Luke Eure https://forum.effectivealtruism.org/posts/4xd2EPXZ8GeAoCQSi/announcing-the-african-ea-forum-competition-usd4500-in Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the African EA Forum Competition - $4500 in prizes for excellent EA Forum Posts, published by Luke Eure on May 14, 2023 on The Effective Altruism Forum.The Effective Altruism Forum serves as the main platform for thought leadership and experience sharing within the effective altruism movement. It is the virtual gathering spot for EAs and the space where most EAs will go to check for resources, materials and experiences for various areas of interest.However, there are low levels of authorship by EAs from Africa, which gives the impression that not much is happening in the African space with regards to EA - despite there being 10+ local EA groups in Africa.There is a need for increased thought leadership, experience sharing, and engagement from an African perspective that reflects the growth of EA in Africa and brings African perspectives to the wider EA community.To get more Africans writing on the forum, I'm excited to announce the African EA Forum Competition.How it worksPrizes will be awarded for winners and runners up in each of three categories:Cause exploration: Explorations of new cause areas or contribution to thinking on existing cause areasAfrican perspectives on EA: Posts that challenge or complement existing EA perspectives and/or cause areas from an African worldviewSummaries of existing work: Informing the EA community about ongoing/completed research projects, sharing experiences within community groups, or reflections on being in EATop prize in each category will be awarded $1,000. Runner up in each category will get $500.No need to include the category in your submission.JudgingPosts will be judged based on the following rubric:Originality of insightClarityDiscussion provoked: (judged by the post’s forum score and number of comments)Persuasiveness of argumentThis is replaced by Relevance to forum readers for summaries of existing workThe members of our judging panel are:Alimi Salifou - Events and Outreach Coordinator for EA NigeriaJordan Pieters - independent community builderKaleem Ahmid - Project Manager at Effective VenturesDr. Kikiope Oluwarore - co-founder of Healthier HensZainab Taonga Chirwa - Chairperson for Effective Altruism UCTSupport offered to writersWe will support writers in the following two ways:Virtual training on EA Forum writing: there will be a ~1-2 hour workshop to build the capacity of individuals interested in writing forum postsOffering mentorship: curating a list of individuals who are happy to offer feedback to new writers to bolster the confidence of those writing about their posts, especially those who may feel intimidated by the seemingly high bar of forum postsWriters do not have to attend the forum or receive mentorship to be eligible for the competition. They only have to make a forum post within the competition period, and meet the eligibility criteria below.Please fill out this form if you would like to join the workshop or receive mentorship.Who is eligibleAnybody who is:a citizen of an African countrya children of a citizen of an Africa countryPosts should be made to the EA forum with the tag ‘Africa EA Forum Competition’ to make them easy to find, and then the writer should identify themself by filling out this form.Posts with multiple authors are eligible as long as 50% or more of the authors meet the above criteria.TimelineThe competition will run for 3 months. Any post made between 14 May 2023 - 11:59pm on Friday 18 August 2023 is eligible.Please reach out to me with any questions or feedback on how this competition could be better! (ljeure@gmail.com)Thanks to Daniel Yu for funding the prizes, Waithera Mwangi for support in writing this post, and to our judges for offering their timeThanks for listening. To help us out with The Nonlinear Library or to learn mo...]]>
Sun, 14 May 2023 13:34:05 +0000 EA - Announcing the African EA Forum Competition - $4500 in prizes for excellent EA Forum Posts by Luke Eure Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the African EA Forum Competition - $4500 in prizes for excellent EA Forum Posts, published by Luke Eure on May 14, 2023 on The Effective Altruism Forum.The Effective Altruism Forum serves as the main platform for thought leadership and experience sharing within the effective altruism movement. It is the virtual gathering spot for EAs and the space where most EAs will go to check for resources, materials and experiences for various areas of interest.However, there are low levels of authorship by EAs from Africa, which gives the impression that not much is happening in the African space with regards to EA - despite there being 10+ local EA groups in Africa.There is a need for increased thought leadership, experience sharing, and engagement from an African perspective that reflects the growth of EA in Africa and brings African perspectives to the wider EA community.To get more Africans writing on the forum, I'm excited to announce the African EA Forum Competition.How it worksPrizes will be awarded for winners and runners up in each of three categories:Cause exploration: Explorations of new cause areas or contribution to thinking on existing cause areasAfrican perspectives on EA: Posts that challenge or complement existing EA perspectives and/or cause areas from an African worldviewSummaries of existing work: Informing the EA community about ongoing/completed research projects, sharing experiences within community groups, or reflections on being in EATop prize in each category will be awarded $1,000. Runner up in each category will get $500.No need to include the category in your submission.JudgingPosts will be judged based on the following rubric:Originality of insightClarityDiscussion provoked: (judged by the post’s forum score and number of comments)Persuasiveness of argumentThis is replaced by Relevance to forum readers for summaries of existing workThe members of our judging panel are:Alimi Salifou - Events and Outreach Coordinator for EA NigeriaJordan Pieters - independent community builderKaleem Ahmid - Project Manager at Effective VenturesDr. Kikiope Oluwarore - co-founder of Healthier HensZainab Taonga Chirwa - Chairperson for Effective Altruism UCTSupport offered to writersWe will support writers in the following two ways:Virtual training on EA Forum writing: there will be a ~1-2 hour workshop to build the capacity of individuals interested in writing forum postsOffering mentorship: curating a list of individuals who are happy to offer feedback to new writers to bolster the confidence of those writing about their posts, especially those who may feel intimidated by the seemingly high bar of forum postsWriters do not have to attend the forum or receive mentorship to be eligible for the competition. They only have to make a forum post within the competition period, and meet the eligibility criteria below.Please fill out this form if you would like to join the workshop or receive mentorship.Who is eligibleAnybody who is:a citizen of an African countrya children of a citizen of an Africa countryPosts should be made to the EA forum with the tag ‘Africa EA Forum Competition’ to make them easy to find, and then the writer should identify themself by filling out this form.Posts with multiple authors are eligible as long as 50% or more of the authors meet the above criteria.TimelineThe competition will run for 3 months. Any post made between 14 May 2023 - 11:59pm on Friday 18 August 2023 is eligible.Please reach out to me with any questions or feedback on how this competition could be better! (ljeure@gmail.com)Thanks to Daniel Yu for funding the prizes, Waithera Mwangi for support in writing this post, and to our judges for offering their timeThanks for listening. To help us out with The Nonlinear Library or to learn mo...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the African EA Forum Competition - $4500 in prizes for excellent EA Forum Posts, published by Luke Eure on May 14, 2023 on The Effective Altruism Forum.The Effective Altruism Forum serves as the main platform for thought leadership and experience sharing within the effective altruism movement. It is the virtual gathering spot for EAs and the space where most EAs will go to check for resources, materials and experiences for various areas of interest.However, there are low levels of authorship by EAs from Africa, which gives the impression that not much is happening in the African space with regards to EA - despite there being 10+ local EA groups in Africa.There is a need for increased thought leadership, experience sharing, and engagement from an African perspective that reflects the growth of EA in Africa and brings African perspectives to the wider EA community.To get more Africans writing on the forum, I'm excited to announce the African EA Forum Competition.How it worksPrizes will be awarded for winners and runners up in each of three categories:Cause exploration: Explorations of new cause areas or contribution to thinking on existing cause areasAfrican perspectives on EA: Posts that challenge or complement existing EA perspectives and/or cause areas from an African worldviewSummaries of existing work: Informing the EA community about ongoing/completed research projects, sharing experiences within community groups, or reflections on being in EATop prize in each category will be awarded $1,000. Runner up in each category will get $500.No need to include the category in your submission.JudgingPosts will be judged based on the following rubric:Originality of insightClarityDiscussion provoked: (judged by the post’s forum score and number of comments)Persuasiveness of argumentThis is replaced by Relevance to forum readers for summaries of existing workThe members of our judging panel are:Alimi Salifou - Events and Outreach Coordinator for EA NigeriaJordan Pieters - independent community builderKaleem Ahmid - Project Manager at Effective VenturesDr. Kikiope Oluwarore - co-founder of Healthier HensZainab Taonga Chirwa - Chairperson for Effective Altruism UCTSupport offered to writersWe will support writers in the following two ways:Virtual training on EA Forum writing: there will be a ~1-2 hour workshop to build the capacity of individuals interested in writing forum postsOffering mentorship: curating a list of individuals who are happy to offer feedback to new writers to bolster the confidence of those writing about their posts, especially those who may feel intimidated by the seemingly high bar of forum postsWriters do not have to attend the forum or receive mentorship to be eligible for the competition. They only have to make a forum post within the competition period, and meet the eligibility criteria below.Please fill out this form if you would like to join the workshop or receive mentorship.Who is eligibleAnybody who is:a citizen of an African countrya children of a citizen of an Africa countryPosts should be made to the EA forum with the tag ‘Africa EA Forum Competition’ to make them easy to find, and then the writer should identify themself by filling out this form.Posts with multiple authors are eligible as long as 50% or more of the authors meet the above criteria.TimelineThe competition will run for 3 months. Any post made between 14 May 2023 - 11:59pm on Friday 18 August 2023 is eligible.Please reach out to me with any questions or feedback on how this competition could be better! (ljeure@gmail.com)Thanks to Daniel Yu for funding the prizes, Waithera Mwangi for support in writing this post, and to our judges for offering their timeThanks for listening. To help us out with The Nonlinear Library or to learn mo...]]>
Luke Eure https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:48 None full 5946
oLk5QEY2y2fL9ifoE_NL_EA_EA EA - Blog update: Reflective altruism by David Thorstad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Blog update: Reflective altruism, published by David Thorstad on May 14, 2023 on The Effective Altruism Forum.About meI’m a research fellow in philosophy at the Global Priorities Institute. Starting in the Fall, I'll be Assistant Professor of Philosophy at Vanderbilt University. (All views are my own, except the worst. Those are to be blamed on my cat.).There are many things I like about effective altruism. I’ve started a blog to discuss some views and practices in effective altruism that I don’t like, in order to drive positive change both within and outside of the movement.About this blogThe blog features long-form discussions, structured into thematic series of posts, informed by academic research. Currently, the blog features six thematic series, described below.One distinctive feature of my approach is that I share a number of philosophical views with many effective altruists. I accept or am sympathetic to all of the following: consequentialism; totalism; fanaticism; expected value maximization; and the importance of using science, reasons and evidence to solve global problems. Nevertheless, I am skeptical of many views held by effective altruists including longtermism and the view that humanity currently faces high levels of existential risk. We also have a number of methodological disagreements.I've come to understand that this is a somewhat distinctive approach within the academic literature, as well as in the broader landscape. I think that is a shame. I want to say what can be said for this approach, and what can be learned from it. I try to do that on my blog.About this documentThe blog is currently five months old. Several readers have asked me to post an update about my blog on the EA Forum. I think that is a good idea: I try to be transparent about what I am up to, and I value feedback from my readers.Below, I say a bit about existing content on the blog; plans for new content; and some lessons learned during the past five months.Existing seriesSeries 1: Academic papersThe purpose of this blog is to use academic research to drive positive change within and outside of the effective altruism movement. This series draws insights from academic papers related to effective altruism.Sub-series A: Existential risk pessimism and the time of perilsThis series is based on my paper “Existential risk pessimism and the time of perils”. The paper develops a tension between two claims: Existential Risk Pessimism (levels of existential risk are very high) and the Astronomical Value Thesis (efforts to reduce existential risk have astronomical value). It explores the Time of Perils hypothesis as a way out of the tension.Status: Completed. Parts 1-6 present the main argument of the paper. Part 7 discusses an application to calculating the cost-effectiveness of biosecurity. Part 8 draws implications. Part 9 responds to objections.Sub-series B: The good it promisesThis series is based on a volume of essays entitled The good it promises, the harm it does: Critical essays on effective altruism. The volume brings together a diverse collection of scholars, activists and practitioners to critically reflect on effective altruism. In this series, I draw lessons from papers contained in the volume.Status: In progress. Part 1 introduces the series and discusses the foreword to the book by Amia Srinivasan. Part 2 looks at Simone de Lima’s discussion of colonialism and animal advocacy in Brazil. Part 3 looks at Carol J Adams' care ethical approach.Series 2: Academics review WWOTFWill MacAskill’s book What we owe the future is one of the most influential recent books about effective altruism. A number of prominent academics have written insightful reviews of the book. In this series, I draw lessons from some of my favorite academic reviews of What we owe the future....]]>
David Thorstad https://forum.effectivealtruism.org/posts/oLk5QEY2y2fL9ifoE/blog-update-reflective-altruism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Blog update: Reflective altruism, published by David Thorstad on May 14, 2023 on The Effective Altruism Forum.About meI’m a research fellow in philosophy at the Global Priorities Institute. Starting in the Fall, I'll be Assistant Professor of Philosophy at Vanderbilt University. (All views are my own, except the worst. Those are to be blamed on my cat.).There are many things I like about effective altruism. I’ve started a blog to discuss some views and practices in effective altruism that I don’t like, in order to drive positive change both within and outside of the movement.About this blogThe blog features long-form discussions, structured into thematic series of posts, informed by academic research. Currently, the blog features six thematic series, described below.One distinctive feature of my approach is that I share a number of philosophical views with many effective altruists. I accept or am sympathetic to all of the following: consequentialism; totalism; fanaticism; expected value maximization; and the importance of using science, reasons and evidence to solve global problems. Nevertheless, I am skeptical of many views held by effective altruists including longtermism and the view that humanity currently faces high levels of existential risk. We also have a number of methodological disagreements.I've come to understand that this is a somewhat distinctive approach within the academic literature, as well as in the broader landscape. I think that is a shame. I want to say what can be said for this approach, and what can be learned from it. I try to do that on my blog.About this documentThe blog is currently five months old. Several readers have asked me to post an update about my blog on the EA Forum. I think that is a good idea: I try to be transparent about what I am up to, and I value feedback from my readers.Below, I say a bit about existing content on the blog; plans for new content; and some lessons learned during the past five months.Existing seriesSeries 1: Academic papersThe purpose of this blog is to use academic research to drive positive change within and outside of the effective altruism movement. This series draws insights from academic papers related to effective altruism.Sub-series A: Existential risk pessimism and the time of perilsThis series is based on my paper “Existential risk pessimism and the time of perils”. The paper develops a tension between two claims: Existential Risk Pessimism (levels of existential risk are very high) and the Astronomical Value Thesis (efforts to reduce existential risk have astronomical value). It explores the Time of Perils hypothesis as a way out of the tension.Status: Completed. Parts 1-6 present the main argument of the paper. Part 7 discusses an application to calculating the cost-effectiveness of biosecurity. Part 8 draws implications. Part 9 responds to objections.Sub-series B: The good it promisesThis series is based on a volume of essays entitled The good it promises, the harm it does: Critical essays on effective altruism. The volume brings together a diverse collection of scholars, activists and practitioners to critically reflect on effective altruism. In this series, I draw lessons from papers contained in the volume.Status: In progress. Part 1 introduces the series and discusses the foreword to the book by Amia Srinivasan. Part 2 looks at Simone de Lima’s discussion of colonialism and animal advocacy in Brazil. Part 3 looks at Carol J Adams' care ethical approach.Series 2: Academics review WWOTFWill MacAskill’s book What we owe the future is one of the most influential recent books about effective altruism. A number of prominent academics have written insightful reviews of the book. In this series, I draw lessons from some of my favorite academic reviews of What we owe the future....]]>
Sun, 14 May 2023 13:21:03 +0000 EA - Blog update: Reflective altruism by David Thorstad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Blog update: Reflective altruism, published by David Thorstad on May 14, 2023 on The Effective Altruism Forum.About meI’m a research fellow in philosophy at the Global Priorities Institute. Starting in the Fall, I'll be Assistant Professor of Philosophy at Vanderbilt University. (All views are my own, except the worst. Those are to be blamed on my cat.).There are many things I like about effective altruism. I’ve started a blog to discuss some views and practices in effective altruism that I don’t like, in order to drive positive change both within and outside of the movement.About this blogThe blog features long-form discussions, structured into thematic series of posts, informed by academic research. Currently, the blog features six thematic series, described below.One distinctive feature of my approach is that I share a number of philosophical views with many effective altruists. I accept or am sympathetic to all of the following: consequentialism; totalism; fanaticism; expected value maximization; and the importance of using science, reasons and evidence to solve global problems. Nevertheless, I am skeptical of many views held by effective altruists including longtermism and the view that humanity currently faces high levels of existential risk. We also have a number of methodological disagreements.I've come to understand that this is a somewhat distinctive approach within the academic literature, as well as in the broader landscape. I think that is a shame. I want to say what can be said for this approach, and what can be learned from it. I try to do that on my blog.About this documentThe blog is currently five months old. Several readers have asked me to post an update about my blog on the EA Forum. I think that is a good idea: I try to be transparent about what I am up to, and I value feedback from my readers.Below, I say a bit about existing content on the blog; plans for new content; and some lessons learned during the past five months.Existing seriesSeries 1: Academic papersThe purpose of this blog is to use academic research to drive positive change within and outside of the effective altruism movement. This series draws insights from academic papers related to effective altruism.Sub-series A: Existential risk pessimism and the time of perilsThis series is based on my paper “Existential risk pessimism and the time of perils”. The paper develops a tension between two claims: Existential Risk Pessimism (levels of existential risk are very high) and the Astronomical Value Thesis (efforts to reduce existential risk have astronomical value). It explores the Time of Perils hypothesis as a way out of the tension.Status: Completed. Parts 1-6 present the main argument of the paper. Part 7 discusses an application to calculating the cost-effectiveness of biosecurity. Part 8 draws implications. Part 9 responds to objections.Sub-series B: The good it promisesThis series is based on a volume of essays entitled The good it promises, the harm it does: Critical essays on effective altruism. The volume brings together a diverse collection of scholars, activists and practitioners to critically reflect on effective altruism. In this series, I draw lessons from papers contained in the volume.Status: In progress. Part 1 introduces the series and discusses the foreword to the book by Amia Srinivasan. Part 2 looks at Simone de Lima’s discussion of colonialism and animal advocacy in Brazil. Part 3 looks at Carol J Adams' care ethical approach.Series 2: Academics review WWOTFWill MacAskill’s book What we owe the future is one of the most influential recent books about effective altruism. A number of prominent academics have written insightful reviews of the book. In this series, I draw lessons from some of my favorite academic reviews of What we owe the future....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Blog update: Reflective altruism, published by David Thorstad on May 14, 2023 on The Effective Altruism Forum.About meI’m a research fellow in philosophy at the Global Priorities Institute. Starting in the Fall, I'll be Assistant Professor of Philosophy at Vanderbilt University. (All views are my own, except the worst. Those are to be blamed on my cat.).There are many things I like about effective altruism. I’ve started a blog to discuss some views and practices in effective altruism that I don’t like, in order to drive positive change both within and outside of the movement.About this blogThe blog features long-form discussions, structured into thematic series of posts, informed by academic research. Currently, the blog features six thematic series, described below.One distinctive feature of my approach is that I share a number of philosophical views with many effective altruists. I accept or am sympathetic to all of the following: consequentialism; totalism; fanaticism; expected value maximization; and the importance of using science, reasons and evidence to solve global problems. Nevertheless, I am skeptical of many views held by effective altruists including longtermism and the view that humanity currently faces high levels of existential risk. We also have a number of methodological disagreements.I've come to understand that this is a somewhat distinctive approach within the academic literature, as well as in the broader landscape. I think that is a shame. I want to say what can be said for this approach, and what can be learned from it. I try to do that on my blog.About this documentThe blog is currently five months old. Several readers have asked me to post an update about my blog on the EA Forum. I think that is a good idea: I try to be transparent about what I am up to, and I value feedback from my readers.Below, I say a bit about existing content on the blog; plans for new content; and some lessons learned during the past five months.Existing seriesSeries 1: Academic papersThe purpose of this blog is to use academic research to drive positive change within and outside of the effective altruism movement. This series draws insights from academic papers related to effective altruism.Sub-series A: Existential risk pessimism and the time of perilsThis series is based on my paper “Existential risk pessimism and the time of perils”. The paper develops a tension between two claims: Existential Risk Pessimism (levels of existential risk are very high) and the Astronomical Value Thesis (efforts to reduce existential risk have astronomical value). It explores the Time of Perils hypothesis as a way out of the tension.Status: Completed. Parts 1-6 present the main argument of the paper. Part 7 discusses an application to calculating the cost-effectiveness of biosecurity. Part 8 draws implications. Part 9 responds to objections.Sub-series B: The good it promisesThis series is based on a volume of essays entitled The good it promises, the harm it does: Critical essays on effective altruism. The volume brings together a diverse collection of scholars, activists and practitioners to critically reflect on effective altruism. In this series, I draw lessons from papers contained in the volume.Status: In progress. Part 1 introduces the series and discusses the foreword to the book by Amia Srinivasan. Part 2 looks at Simone de Lima’s discussion of colonialism and animal advocacy in Brazil. Part 3 looks at Carol J Adams' care ethical approach.Series 2: Academics review WWOTFWill MacAskill’s book What we owe the future is one of the most influential recent books about effective altruism. A number of prominent academics have written insightful reviews of the book. In this series, I draw lessons from some of my favorite academic reviews of What we owe the future....]]>
David Thorstad https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:00 None full 5945
GD73T2xpNcx4Rvt2E_NL_EA_EA EA - The Implications of the US Supreme Court upholding Prop 12 by ishankhire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Implications of the US Supreme Court upholding Prop 12, published by ishankhire on May 13, 2023 on The Effective Altruism Forum.Background: Farming Conditions and Prop 12Currently in the US, most breeding pigs live in factory farmers, where they are confined in gestation crates which are small metal cages so small that pigs can’t even turn around, while egg-laying hens live in tiny, cramped battery cages that cause a range of psychological and physiological harm. The crowded conditions also have potential health harms by increasing the stress levels of pigs and weakening their immune systems, which can make them more susceptible to zoonotic diseases that may spread to humans.Starting in the early 2000s, a few animal welfare groups including the Humane Society of the Unites States aimed to ban the farming system of cages for hens, breeding pigs and veal calves. In 2008, Proposition 2 was passed which put in place a “production” ban on cages, which said that producers had to ensure pigs, hens, and calves could lie down, turn around, and extend their limbs or wings without hitting the side of an enclosure. However, this specific language allowed some egg farms to circumvent the law by using bigger cages. In 2010, California passed AB 1437 which was a “sales” ban requiring all eggs sold in California had to meet those standards. These laws have brought about results — the share of hens that are cage-free has been rising and is expected to continue doing so.In 2018, over 62% of California voters passed Proposition 12, the strongest law to improve conditions for farmed animals. Under Prop 12, some of the gaps in these laws are covered — for one, it extends the cage-free ban to cover not just the eggs that are sold in the grocery store (shell eggs) but also liquid eggs, which are sold to restaurants, cafeterias and food manufacturers (liquid eggs).Opposition from Pork IndustryThe law is expected to be especially impactful to the pork industry which has been more resistant to change in doing away with confinement systems. Progress has been very mixed in terms of companies following through with their commitments to phase out gestation crates. So far, 10 states have banned them, but Prop 12’s space requirements are stricter and close some gaps that allow for loopholes. The law also makes it illegal for eggs and pork to be sold in California if the animals in other states are put in gestation crates (pigs) or battery cages (for chickens). California consumes 14% of the US’s pork and 12% of eggs and veal, so pork and egg producers would be forced to modify barns or construct new ones (only 1% of existing sow housing meets Prop 12’s standards according to the National Pork Producers Council (NPCC)), which would be costly and time taking, causing various meat trade groups to be opposed to it.Interestingly, some industries such as Whole Foods, aren’t concerned with the law as they claim they already meet animal welfare requirements. I think this is a crucial reason why the phase out of battery cages did not get as much opposition to phasing out pork crates — many companies already have commitments to phase out battery cage. In fact, these companies may have the incentive to increase regulations to raise costs on competitors.For this reason, the law was attacked by various meat industry trade groups, which filed three separate lawsuits to overturn it. The Supreme Court declined to take two of them, and in October 2022, the case National Pork Producers v. Ross began.Explaining the Supreme Court RulingOn May 11th, 2023, the Supreme Court upheld Prop 12 in a 5-4 decision of the case National Pork Producers v. Ross. Interestingly, the verdict was not split along conservative-liberal lines, with 3 conservative judges and 2 liberal judges in the majority.The pork industry ...]]>
ishankhire https://forum.effectivealtruism.org/posts/GD73T2xpNcx4Rvt2E/the-implications-of-the-us-supreme-court-upholding-prop-12 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Implications of the US Supreme Court upholding Prop 12, published by ishankhire on May 13, 2023 on The Effective Altruism Forum.Background: Farming Conditions and Prop 12Currently in the US, most breeding pigs live in factory farmers, where they are confined in gestation crates which are small metal cages so small that pigs can’t even turn around, while egg-laying hens live in tiny, cramped battery cages that cause a range of psychological and physiological harm. The crowded conditions also have potential health harms by increasing the stress levels of pigs and weakening their immune systems, which can make them more susceptible to zoonotic diseases that may spread to humans.Starting in the early 2000s, a few animal welfare groups including the Humane Society of the Unites States aimed to ban the farming system of cages for hens, breeding pigs and veal calves. In 2008, Proposition 2 was passed which put in place a “production” ban on cages, which said that producers had to ensure pigs, hens, and calves could lie down, turn around, and extend their limbs or wings without hitting the side of an enclosure. However, this specific language allowed some egg farms to circumvent the law by using bigger cages. In 2010, California passed AB 1437 which was a “sales” ban requiring all eggs sold in California had to meet those standards. These laws have brought about results — the share of hens that are cage-free has been rising and is expected to continue doing so.In 2018, over 62% of California voters passed Proposition 12, the strongest law to improve conditions for farmed animals. Under Prop 12, some of the gaps in these laws are covered — for one, it extends the cage-free ban to cover not just the eggs that are sold in the grocery store (shell eggs) but also liquid eggs, which are sold to restaurants, cafeterias and food manufacturers (liquid eggs).Opposition from Pork IndustryThe law is expected to be especially impactful to the pork industry which has been more resistant to change in doing away with confinement systems. Progress has been very mixed in terms of companies following through with their commitments to phase out gestation crates. So far, 10 states have banned them, but Prop 12’s space requirements are stricter and close some gaps that allow for loopholes. The law also makes it illegal for eggs and pork to be sold in California if the animals in other states are put in gestation crates (pigs) or battery cages (for chickens). California consumes 14% of the US’s pork and 12% of eggs and veal, so pork and egg producers would be forced to modify barns or construct new ones (only 1% of existing sow housing meets Prop 12’s standards according to the National Pork Producers Council (NPCC)), which would be costly and time taking, causing various meat trade groups to be opposed to it.Interestingly, some industries such as Whole Foods, aren’t concerned with the law as they claim they already meet animal welfare requirements. I think this is a crucial reason why the phase out of battery cages did not get as much opposition to phasing out pork crates — many companies already have commitments to phase out battery cage. In fact, these companies may have the incentive to increase regulations to raise costs on competitors.For this reason, the law was attacked by various meat industry trade groups, which filed three separate lawsuits to overturn it. The Supreme Court declined to take two of them, and in October 2022, the case National Pork Producers v. Ross began.Explaining the Supreme Court RulingOn May 11th, 2023, the Supreme Court upheld Prop 12 in a 5-4 decision of the case National Pork Producers v. Ross. Interestingly, the verdict was not split along conservative-liberal lines, with 3 conservative judges and 2 liberal judges in the majority.The pork industry ...]]>
Sat, 13 May 2023 17:44:03 +0000 EA - The Implications of the US Supreme Court upholding Prop 12 by ishankhire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Implications of the US Supreme Court upholding Prop 12, published by ishankhire on May 13, 2023 on The Effective Altruism Forum.Background: Farming Conditions and Prop 12Currently in the US, most breeding pigs live in factory farmers, where they are confined in gestation crates which are small metal cages so small that pigs can’t even turn around, while egg-laying hens live in tiny, cramped battery cages that cause a range of psychological and physiological harm. The crowded conditions also have potential health harms by increasing the stress levels of pigs and weakening their immune systems, which can make them more susceptible to zoonotic diseases that may spread to humans.Starting in the early 2000s, a few animal welfare groups including the Humane Society of the Unites States aimed to ban the farming system of cages for hens, breeding pigs and veal calves. In 2008, Proposition 2 was passed which put in place a “production” ban on cages, which said that producers had to ensure pigs, hens, and calves could lie down, turn around, and extend their limbs or wings without hitting the side of an enclosure. However, this specific language allowed some egg farms to circumvent the law by using bigger cages. In 2010, California passed AB 1437 which was a “sales” ban requiring all eggs sold in California had to meet those standards. These laws have brought about results — the share of hens that are cage-free has been rising and is expected to continue doing so.In 2018, over 62% of California voters passed Proposition 12, the strongest law to improve conditions for farmed animals. Under Prop 12, some of the gaps in these laws are covered — for one, it extends the cage-free ban to cover not just the eggs that are sold in the grocery store (shell eggs) but also liquid eggs, which are sold to restaurants, cafeterias and food manufacturers (liquid eggs).Opposition from Pork IndustryThe law is expected to be especially impactful to the pork industry which has been more resistant to change in doing away with confinement systems. Progress has been very mixed in terms of companies following through with their commitments to phase out gestation crates. So far, 10 states have banned them, but Prop 12’s space requirements are stricter and close some gaps that allow for loopholes. The law also makes it illegal for eggs and pork to be sold in California if the animals in other states are put in gestation crates (pigs) or battery cages (for chickens). California consumes 14% of the US’s pork and 12% of eggs and veal, so pork and egg producers would be forced to modify barns or construct new ones (only 1% of existing sow housing meets Prop 12’s standards according to the National Pork Producers Council (NPCC)), which would be costly and time taking, causing various meat trade groups to be opposed to it.Interestingly, some industries such as Whole Foods, aren’t concerned with the law as they claim they already meet animal welfare requirements. I think this is a crucial reason why the phase out of battery cages did not get as much opposition to phasing out pork crates — many companies already have commitments to phase out battery cage. In fact, these companies may have the incentive to increase regulations to raise costs on competitors.For this reason, the law was attacked by various meat industry trade groups, which filed three separate lawsuits to overturn it. The Supreme Court declined to take two of them, and in October 2022, the case National Pork Producers v. Ross began.Explaining the Supreme Court RulingOn May 11th, 2023, the Supreme Court upheld Prop 12 in a 5-4 decision of the case National Pork Producers v. Ross. Interestingly, the verdict was not split along conservative-liberal lines, with 3 conservative judges and 2 liberal judges in the majority.The pork industry ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Implications of the US Supreme Court upholding Prop 12, published by ishankhire on May 13, 2023 on The Effective Altruism Forum.Background: Farming Conditions and Prop 12Currently in the US, most breeding pigs live in factory farmers, where they are confined in gestation crates which are small metal cages so small that pigs can’t even turn around, while egg-laying hens live in tiny, cramped battery cages that cause a range of psychological and physiological harm. The crowded conditions also have potential health harms by increasing the stress levels of pigs and weakening their immune systems, which can make them more susceptible to zoonotic diseases that may spread to humans.Starting in the early 2000s, a few animal welfare groups including the Humane Society of the Unites States aimed to ban the farming system of cages for hens, breeding pigs and veal calves. In 2008, Proposition 2 was passed which put in place a “production” ban on cages, which said that producers had to ensure pigs, hens, and calves could lie down, turn around, and extend their limbs or wings without hitting the side of an enclosure. However, this specific language allowed some egg farms to circumvent the law by using bigger cages. In 2010, California passed AB 1437 which was a “sales” ban requiring all eggs sold in California had to meet those standards. These laws have brought about results — the share of hens that are cage-free has been rising and is expected to continue doing so.In 2018, over 62% of California voters passed Proposition 12, the strongest law to improve conditions for farmed animals. Under Prop 12, some of the gaps in these laws are covered — for one, it extends the cage-free ban to cover not just the eggs that are sold in the grocery store (shell eggs) but also liquid eggs, which are sold to restaurants, cafeterias and food manufacturers (liquid eggs).Opposition from Pork IndustryThe law is expected to be especially impactful to the pork industry which has been more resistant to change in doing away with confinement systems. Progress has been very mixed in terms of companies following through with their commitments to phase out gestation crates. So far, 10 states have banned them, but Prop 12’s space requirements are stricter and close some gaps that allow for loopholes. The law also makes it illegal for eggs and pork to be sold in California if the animals in other states are put in gestation crates (pigs) or battery cages (for chickens). California consumes 14% of the US’s pork and 12% of eggs and veal, so pork and egg producers would be forced to modify barns or construct new ones (only 1% of existing sow housing meets Prop 12’s standards according to the National Pork Producers Council (NPCC)), which would be costly and time taking, causing various meat trade groups to be opposed to it.Interestingly, some industries such as Whole Foods, aren’t concerned with the law as they claim they already meet animal welfare requirements. I think this is a crucial reason why the phase out of battery cages did not get as much opposition to phasing out pork crates — many companies already have commitments to phase out battery cage. In fact, these companies may have the incentive to increase regulations to raise costs on competitors.For this reason, the law was attacked by various meat industry trade groups, which filed three separate lawsuits to overturn it. The Supreme Court declined to take two of them, and in October 2022, the case National Pork Producers v. Ross began.Explaining the Supreme Court RulingOn May 11th, 2023, the Supreme Court upheld Prop 12 in a 5-4 decision of the case National Pork Producers v. Ross. Interestingly, the verdict was not split along conservative-liberal lines, with 3 conservative judges and 2 liberal judges in the majority.The pork industry ...]]>
ishankhire https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:41 None full 5939
6Rrv6G74tzAyDZxz9_NL_EA_EA EA - I want to read more stories by and about the people of Effective Altruism by Gemma Paterson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I want to read more stories by and about the people of Effective Altruism, published by Gemma Paterson on May 12, 2023 on The Effective Altruism Forum.TL; DRI want to read more stories by and about the people of the Effective Altruism movementBut like, fun ones, not CVsI’ve added a tag for EA origin stories and tagged a bunch of relevant posts from the forumIf I’ve missed some, please tag themThe community experiences tag has a lot of others that don’t quite fitI think it is important to emphasise the personal in the effective altruism movement - you never know if your story is enough to connect with someone (especially if you don’t fit the stereotypical EA mold)I would also be very interested in reading folks’ answers to the “What is your current plan to improve the world?” question from the EA Global application - it’s really helpful to see other people’s thought processes (you can read mine here)Why?At least for me, what grabbed and kept my attention when I first heard about EA were the stories of people on the ground trying to do effective altruism.The audacity of a group of students looking at the enormity of suffering in the world but then pushing past that overwhelm. Recognising that they could use their privileges to make a dent if they really gave it a go.The folks behind Charity Entrepreneurship who didn’t stop at one highly effective charity but decided to jump straight into making an non-profit incubator to multiply their impact - building out, in my opinion, some of the coolest projects in the movement.I love that the 80,000 hours podcast takes the concept behind Big Talk seriouslyIt’s absurd but amazing!I love the ethos of practicality within the movement. It isn’t about purity, it isn’t about perfection, it’s about actually changing the world.These are the people I’d back to build a robust Theory of Change that might just move us towards Fully Automated Luxury Gay Space CommunismMaybe that google doc already exists?I have never been the kind of person who had role models. I have always been a bit too cynical to put people on a pedestal. I had respect for successful people and tried to learn what I could from them but I didn’t have heroes.But my response to finding the EA movement was, “Fuck, these people are cool.”I think there is a problem with myth making and hero worshipping within EA. I do agree that it is healthier to Live Without Idols. However, I don’t think we should live without stories.The stories I’m more interested in are the personal ones. Of people actually going out and living their values. Examples of trades offs that real people make that allow them to be ambitiously altruistic in a way that suits them. That show that it is fine to care about lots of things. That it is okay to make changes in your life when you get more or better information.I think about this post a lot because I agree that if people think that “doing effective altruism” means they have to live like monks and change their whole lives then they’ll just reject it. Making big changes is hard. People aren’t perfect.I can trace huge number of positive changes in my life to my decision to take EA seriously but realistically it was my personal IRL and parasocial connections to the people of EA that gave me the space and support to make these big changes in my life. In the footnotes and in this post about my EA story, I’ve included a list of podcasts, blog posts and other media by people within EA that were particularly influential and meaningful to me (if you made them then thank you <3)While I do see as EA as the key source of purpose in my life, it is a core value among many (I like Valuism - doing the intrinsic values test was really helpful for me). Like everyone else in the EA movement, I’m not an impact machine, I’m a person. I love throwing the...]]>
Gemma Paterson https://forum.effectivealtruism.org/posts/6Rrv6G74tzAyDZxz9/i-want-to-read-more-stories-by-and-about-the-people-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I want to read more stories by and about the people of Effective Altruism, published by Gemma Paterson on May 12, 2023 on The Effective Altruism Forum.TL; DRI want to read more stories by and about the people of the Effective Altruism movementBut like, fun ones, not CVsI’ve added a tag for EA origin stories and tagged a bunch of relevant posts from the forumIf I’ve missed some, please tag themThe community experiences tag has a lot of others that don’t quite fitI think it is important to emphasise the personal in the effective altruism movement - you never know if your story is enough to connect with someone (especially if you don’t fit the stereotypical EA mold)I would also be very interested in reading folks’ answers to the “What is your current plan to improve the world?” question from the EA Global application - it’s really helpful to see other people’s thought processes (you can read mine here)Why?At least for me, what grabbed and kept my attention when I first heard about EA were the stories of people on the ground trying to do effective altruism.The audacity of a group of students looking at the enormity of suffering in the world but then pushing past that overwhelm. Recognising that they could use their privileges to make a dent if they really gave it a go.The folks behind Charity Entrepreneurship who didn’t stop at one highly effective charity but decided to jump straight into making an non-profit incubator to multiply their impact - building out, in my opinion, some of the coolest projects in the movement.I love that the 80,000 hours podcast takes the concept behind Big Talk seriouslyIt’s absurd but amazing!I love the ethos of practicality within the movement. It isn’t about purity, it isn’t about perfection, it’s about actually changing the world.These are the people I’d back to build a robust Theory of Change that might just move us towards Fully Automated Luxury Gay Space CommunismMaybe that google doc already exists?I have never been the kind of person who had role models. I have always been a bit too cynical to put people on a pedestal. I had respect for successful people and tried to learn what I could from them but I didn’t have heroes.But my response to finding the EA movement was, “Fuck, these people are cool.”I think there is a problem with myth making and hero worshipping within EA. I do agree that it is healthier to Live Without Idols. However, I don’t think we should live without stories.The stories I’m more interested in are the personal ones. Of people actually going out and living their values. Examples of trades offs that real people make that allow them to be ambitiously altruistic in a way that suits them. That show that it is fine to care about lots of things. That it is okay to make changes in your life when you get more or better information.I think about this post a lot because I agree that if people think that “doing effective altruism” means they have to live like monks and change their whole lives then they’ll just reject it. Making big changes is hard. People aren’t perfect.I can trace huge number of positive changes in my life to my decision to take EA seriously but realistically it was my personal IRL and parasocial connections to the people of EA that gave me the space and support to make these big changes in my life. In the footnotes and in this post about my EA story, I’ve included a list of podcasts, blog posts and other media by people within EA that were particularly influential and meaningful to me (if you made them then thank you <3)While I do see as EA as the key source of purpose in my life, it is a core value among many (I like Valuism - doing the intrinsic values test was really helpful for me). Like everyone else in the EA movement, I’m not an impact machine, I’m a person. I love throwing the...]]>
Sat, 13 May 2023 15:55:56 +0000 EA - I want to read more stories by and about the people of Effective Altruism by Gemma Paterson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I want to read more stories by and about the people of Effective Altruism, published by Gemma Paterson on May 12, 2023 on The Effective Altruism Forum.TL; DRI want to read more stories by and about the people of the Effective Altruism movementBut like, fun ones, not CVsI’ve added a tag for EA origin stories and tagged a bunch of relevant posts from the forumIf I’ve missed some, please tag themThe community experiences tag has a lot of others that don’t quite fitI think it is important to emphasise the personal in the effective altruism movement - you never know if your story is enough to connect with someone (especially if you don’t fit the stereotypical EA mold)I would also be very interested in reading folks’ answers to the “What is your current plan to improve the world?” question from the EA Global application - it’s really helpful to see other people’s thought processes (you can read mine here)Why?At least for me, what grabbed and kept my attention when I first heard about EA were the stories of people on the ground trying to do effective altruism.The audacity of a group of students looking at the enormity of suffering in the world but then pushing past that overwhelm. Recognising that they could use their privileges to make a dent if they really gave it a go.The folks behind Charity Entrepreneurship who didn’t stop at one highly effective charity but decided to jump straight into making an non-profit incubator to multiply their impact - building out, in my opinion, some of the coolest projects in the movement.I love that the 80,000 hours podcast takes the concept behind Big Talk seriouslyIt’s absurd but amazing!I love the ethos of practicality within the movement. It isn’t about purity, it isn’t about perfection, it’s about actually changing the world.These are the people I’d back to build a robust Theory of Change that might just move us towards Fully Automated Luxury Gay Space CommunismMaybe that google doc already exists?I have never been the kind of person who had role models. I have always been a bit too cynical to put people on a pedestal. I had respect for successful people and tried to learn what I could from them but I didn’t have heroes.But my response to finding the EA movement was, “Fuck, these people are cool.”I think there is a problem with myth making and hero worshipping within EA. I do agree that it is healthier to Live Without Idols. However, I don’t think we should live without stories.The stories I’m more interested in are the personal ones. Of people actually going out and living their values. Examples of trades offs that real people make that allow them to be ambitiously altruistic in a way that suits them. That show that it is fine to care about lots of things. That it is okay to make changes in your life when you get more or better information.I think about this post a lot because I agree that if people think that “doing effective altruism” means they have to live like monks and change their whole lives then they’ll just reject it. Making big changes is hard. People aren’t perfect.I can trace huge number of positive changes in my life to my decision to take EA seriously but realistically it was my personal IRL and parasocial connections to the people of EA that gave me the space and support to make these big changes in my life. In the footnotes and in this post about my EA story, I’ve included a list of podcasts, blog posts and other media by people within EA that were particularly influential and meaningful to me (if you made them then thank you <3)While I do see as EA as the key source of purpose in my life, it is a core value among many (I like Valuism - doing the intrinsic values test was really helpful for me). Like everyone else in the EA movement, I’m not an impact machine, I’m a person. I love throwing the...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I want to read more stories by and about the people of Effective Altruism, published by Gemma Paterson on May 12, 2023 on The Effective Altruism Forum.TL; DRI want to read more stories by and about the people of the Effective Altruism movementBut like, fun ones, not CVsI’ve added a tag for EA origin stories and tagged a bunch of relevant posts from the forumIf I’ve missed some, please tag themThe community experiences tag has a lot of others that don’t quite fitI think it is important to emphasise the personal in the effective altruism movement - you never know if your story is enough to connect with someone (especially if you don’t fit the stereotypical EA mold)I would also be very interested in reading folks’ answers to the “What is your current plan to improve the world?” question from the EA Global application - it’s really helpful to see other people’s thought processes (you can read mine here)Why?At least for me, what grabbed and kept my attention when I first heard about EA were the stories of people on the ground trying to do effective altruism.The audacity of a group of students looking at the enormity of suffering in the world but then pushing past that overwhelm. Recognising that they could use their privileges to make a dent if they really gave it a go.The folks behind Charity Entrepreneurship who didn’t stop at one highly effective charity but decided to jump straight into making an non-profit incubator to multiply their impact - building out, in my opinion, some of the coolest projects in the movement.I love that the 80,000 hours podcast takes the concept behind Big Talk seriouslyIt’s absurd but amazing!I love the ethos of practicality within the movement. It isn’t about purity, it isn’t about perfection, it’s about actually changing the world.These are the people I’d back to build a robust Theory of Change that might just move us towards Fully Automated Luxury Gay Space CommunismMaybe that google doc already exists?I have never been the kind of person who had role models. I have always been a bit too cynical to put people on a pedestal. I had respect for successful people and tried to learn what I could from them but I didn’t have heroes.But my response to finding the EA movement was, “Fuck, these people are cool.”I think there is a problem with myth making and hero worshipping within EA. I do agree that it is healthier to Live Without Idols. However, I don’t think we should live without stories.The stories I’m more interested in are the personal ones. Of people actually going out and living their values. Examples of trades offs that real people make that allow them to be ambitiously altruistic in a way that suits them. That show that it is fine to care about lots of things. That it is okay to make changes in your life when you get more or better information.I think about this post a lot because I agree that if people think that “doing effective altruism” means they have to live like monks and change their whole lives then they’ll just reject it. Making big changes is hard. People aren’t perfect.I can trace huge number of positive changes in my life to my decision to take EA seriously but realistically it was my personal IRL and parasocial connections to the people of EA that gave me the space and support to make these big changes in my life. In the footnotes and in this post about my EA story, I’ve included a list of podcasts, blog posts and other media by people within EA that were particularly influential and meaningful to me (if you made them then thank you <3)While I do see as EA as the key source of purpose in my life, it is a core value among many (I like Valuism - doing the intrinsic values test was really helpful for me). Like everyone else in the EA movement, I’m not an impact machine, I’m a person. I love throwing the...]]>
Gemma Paterson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:26 None full 5938
vBcT7i7AkNJ6u9BcQ_NL_EA_EA EA - Prioritising animal welfare over global health and development? by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prioritising animal welfare over global health and development?, published by Vasco Grilo on May 13, 2023 on The Effective Altruism Forum.SummaryCorporate campaigns for chicken welfare increase wellbeing way more cost-effectively than the best global health and development (GHD) interventions.In addition, the effects on farmed animals of such interventions can influence which countries they should target, and those on wild animals might determine whether they are beneficial or harmful.I encourage Charity Entrepreneurship (CE), Founders Pledge (FP), GiveWell (GW), Open Philanthropy (OP) and Rethink Priorities (RP) to:Increase their support of animal welfare interventions relative to those of GHD (at the margin).Account for effects on animals in the cost-effectiveness analyses of GHD interventions.Corporate campaigns for chicken welfare increase nearterm wellbeing way more cost-effectively than GiveWell’s top charitiesCorporate campaigns for chicken welfare are considered one of the most effective animal welfare interventions. A key supporter of these is The Humane League (THL), which is one of the 3 top charities of Animal Charity Evaluators.I calculated the cost-effectiveness of corporate campaigns for broiler welfare in human-years per dollar from the product between:Chicken-years affected per dollar, which I set to 15 as estimated here by Saulius Simcikas.Improvement in welfare as a fraction of that of median welfare range when broilers go from a conventional to a reformed scenario, assuming:The time broilers experience each level of pain defined here (search for “definitions”) in a conventional and reformed scenario is given by these data (search for “pain-tracks”) from the Welfare Footprint Project (WFP).The welfare range is symmetric around the neutral point, and excruciating pain corresponds to the worst possible experience.Excruciating pain is 1 k times as bad as disabling pain.Disabling pain is 100 times as bad as hurtful pain.Hurtful pain is 10 times as bad as annoying pain.The lifespan of broilers is 42 days, in agreement with section “Conventional and Reformed Scenarios” of Chapter 1 of Quantifying pain in broiler chickens by Cynthia Schuck-Paim and Wladimir Alonso.Broilers sleep 8 h each day, and have a neutral experience during that time.Broilers being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences.Median welfare range of chickens, which I set to RP's median estimate of 0.332.Reciprocal of the intensity of the mean human experience, which I obtained supposing humans:Sleep 8 h each day, and have a neutral experience during that time.Being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences.I computed the cost-effectiveness in the same metric for the lowest cost to save a life among GW's top charities from the ratio between:Life expectancy at birth in Africa in 2021, which was 61.7 years according to these data from OWID.Lowest cost to save a life of 3.5 k$ (from Helen Keller International), as stated by GW here.The results are in the tables below. The data and calculations are here (see tab “Cost-effectiveness”).Intensity of the mean experience as a fraction of the median welfare rangeBroiler in a conventional scenarioBroiler in a reformed scenarioHuman5.7710^-62.5910^-53.3310^-6Broiler in a conventional scenario relative to a humanBroiler in a reformed scenario relative to a humanBroiler in a conventional scenario relative to a reformed scenario7.771.734.49Improvement in chicken welfare when broilers go from a conventional to a reformed scenario as a fraction of...The median welfare range of chickensThe intensity of the mean human experience2....]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/vBcT7i7AkNJ6u9BcQ/prioritising-animal-welfare-over-global-health-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prioritising animal welfare over global health and development?, published by Vasco Grilo on May 13, 2023 on The Effective Altruism Forum.SummaryCorporate campaigns for chicken welfare increase wellbeing way more cost-effectively than the best global health and development (GHD) interventions.In addition, the effects on farmed animals of such interventions can influence which countries they should target, and those on wild animals might determine whether they are beneficial or harmful.I encourage Charity Entrepreneurship (CE), Founders Pledge (FP), GiveWell (GW), Open Philanthropy (OP) and Rethink Priorities (RP) to:Increase their support of animal welfare interventions relative to those of GHD (at the margin).Account for effects on animals in the cost-effectiveness analyses of GHD interventions.Corporate campaigns for chicken welfare increase nearterm wellbeing way more cost-effectively than GiveWell’s top charitiesCorporate campaigns for chicken welfare are considered one of the most effective animal welfare interventions. A key supporter of these is The Humane League (THL), which is one of the 3 top charities of Animal Charity Evaluators.I calculated the cost-effectiveness of corporate campaigns for broiler welfare in human-years per dollar from the product between:Chicken-years affected per dollar, which I set to 15 as estimated here by Saulius Simcikas.Improvement in welfare as a fraction of that of median welfare range when broilers go from a conventional to a reformed scenario, assuming:The time broilers experience each level of pain defined here (search for “definitions”) in a conventional and reformed scenario is given by these data (search for “pain-tracks”) from the Welfare Footprint Project (WFP).The welfare range is symmetric around the neutral point, and excruciating pain corresponds to the worst possible experience.Excruciating pain is 1 k times as bad as disabling pain.Disabling pain is 100 times as bad as hurtful pain.Hurtful pain is 10 times as bad as annoying pain.The lifespan of broilers is 42 days, in agreement with section “Conventional and Reformed Scenarios” of Chapter 1 of Quantifying pain in broiler chickens by Cynthia Schuck-Paim and Wladimir Alonso.Broilers sleep 8 h each day, and have a neutral experience during that time.Broilers being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences.Median welfare range of chickens, which I set to RP's median estimate of 0.332.Reciprocal of the intensity of the mean human experience, which I obtained supposing humans:Sleep 8 h each day, and have a neutral experience during that time.Being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences.I computed the cost-effectiveness in the same metric for the lowest cost to save a life among GW's top charities from the ratio between:Life expectancy at birth in Africa in 2021, which was 61.7 years according to these data from OWID.Lowest cost to save a life of 3.5 k$ (from Helen Keller International), as stated by GW here.The results are in the tables below. The data and calculations are here (see tab “Cost-effectiveness”).Intensity of the mean experience as a fraction of the median welfare rangeBroiler in a conventional scenarioBroiler in a reformed scenarioHuman5.7710^-62.5910^-53.3310^-6Broiler in a conventional scenario relative to a humanBroiler in a reformed scenario relative to a humanBroiler in a conventional scenario relative to a reformed scenario7.771.734.49Improvement in chicken welfare when broilers go from a conventional to a reformed scenario as a fraction of...The median welfare range of chickensThe intensity of the mean human experience2....]]>
Sat, 13 May 2023 13:37:17 +0000 EA - Prioritising animal welfare over global health and development? by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prioritising animal welfare over global health and development?, published by Vasco Grilo on May 13, 2023 on The Effective Altruism Forum.SummaryCorporate campaigns for chicken welfare increase wellbeing way more cost-effectively than the best global health and development (GHD) interventions.In addition, the effects on farmed animals of such interventions can influence which countries they should target, and those on wild animals might determine whether they are beneficial or harmful.I encourage Charity Entrepreneurship (CE), Founders Pledge (FP), GiveWell (GW), Open Philanthropy (OP) and Rethink Priorities (RP) to:Increase their support of animal welfare interventions relative to those of GHD (at the margin).Account for effects on animals in the cost-effectiveness analyses of GHD interventions.Corporate campaigns for chicken welfare increase nearterm wellbeing way more cost-effectively than GiveWell’s top charitiesCorporate campaigns for chicken welfare are considered one of the most effective animal welfare interventions. A key supporter of these is The Humane League (THL), which is one of the 3 top charities of Animal Charity Evaluators.I calculated the cost-effectiveness of corporate campaigns for broiler welfare in human-years per dollar from the product between:Chicken-years affected per dollar, which I set to 15 as estimated here by Saulius Simcikas.Improvement in welfare as a fraction of that of median welfare range when broilers go from a conventional to a reformed scenario, assuming:The time broilers experience each level of pain defined here (search for “definitions”) in a conventional and reformed scenario is given by these data (search for “pain-tracks”) from the Welfare Footprint Project (WFP).The welfare range is symmetric around the neutral point, and excruciating pain corresponds to the worst possible experience.Excruciating pain is 1 k times as bad as disabling pain.Disabling pain is 100 times as bad as hurtful pain.Hurtful pain is 10 times as bad as annoying pain.The lifespan of broilers is 42 days, in agreement with section “Conventional and Reformed Scenarios” of Chapter 1 of Quantifying pain in broiler chickens by Cynthia Schuck-Paim and Wladimir Alonso.Broilers sleep 8 h each day, and have a neutral experience during that time.Broilers being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences.Median welfare range of chickens, which I set to RP's median estimate of 0.332.Reciprocal of the intensity of the mean human experience, which I obtained supposing humans:Sleep 8 h each day, and have a neutral experience during that time.Being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences.I computed the cost-effectiveness in the same metric for the lowest cost to save a life among GW's top charities from the ratio between:Life expectancy at birth in Africa in 2021, which was 61.7 years according to these data from OWID.Lowest cost to save a life of 3.5 k$ (from Helen Keller International), as stated by GW here.The results are in the tables below. The data and calculations are here (see tab “Cost-effectiveness”).Intensity of the mean experience as a fraction of the median welfare rangeBroiler in a conventional scenarioBroiler in a reformed scenarioHuman5.7710^-62.5910^-53.3310^-6Broiler in a conventional scenario relative to a humanBroiler in a reformed scenario relative to a humanBroiler in a conventional scenario relative to a reformed scenario7.771.734.49Improvement in chicken welfare when broilers go from a conventional to a reformed scenario as a fraction of...The median welfare range of chickensThe intensity of the mean human experience2....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prioritising animal welfare over global health and development?, published by Vasco Grilo on May 13, 2023 on The Effective Altruism Forum.SummaryCorporate campaigns for chicken welfare increase wellbeing way more cost-effectively than the best global health and development (GHD) interventions.In addition, the effects on farmed animals of such interventions can influence which countries they should target, and those on wild animals might determine whether they are beneficial or harmful.I encourage Charity Entrepreneurship (CE), Founders Pledge (FP), GiveWell (GW), Open Philanthropy (OP) and Rethink Priorities (RP) to:Increase their support of animal welfare interventions relative to those of GHD (at the margin).Account for effects on animals in the cost-effectiveness analyses of GHD interventions.Corporate campaigns for chicken welfare increase nearterm wellbeing way more cost-effectively than GiveWell’s top charitiesCorporate campaigns for chicken welfare are considered one of the most effective animal welfare interventions. A key supporter of these is The Humane League (THL), which is one of the 3 top charities of Animal Charity Evaluators.I calculated the cost-effectiveness of corporate campaigns for broiler welfare in human-years per dollar from the product between:Chicken-years affected per dollar, which I set to 15 as estimated here by Saulius Simcikas.Improvement in welfare as a fraction of that of median welfare range when broilers go from a conventional to a reformed scenario, assuming:The time broilers experience each level of pain defined here (search for “definitions”) in a conventional and reformed scenario is given by these data (search for “pain-tracks”) from the Welfare Footprint Project (WFP).The welfare range is symmetric around the neutral point, and excruciating pain corresponds to the worst possible experience.Excruciating pain is 1 k times as bad as disabling pain.Disabling pain is 100 times as bad as hurtful pain.Hurtful pain is 10 times as bad as annoying pain.The lifespan of broilers is 42 days, in agreement with section “Conventional and Reformed Scenarios” of Chapter 1 of Quantifying pain in broiler chickens by Cynthia Schuck-Paim and Wladimir Alonso.Broilers sleep 8 h each day, and have a neutral experience during that time.Broilers being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences.Median welfare range of chickens, which I set to RP's median estimate of 0.332.Reciprocal of the intensity of the mean human experience, which I obtained supposing humans:Sleep 8 h each day, and have a neutral experience during that time.Being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences.I computed the cost-effectiveness in the same metric for the lowest cost to save a life among GW's top charities from the ratio between:Life expectancy at birth in Africa in 2021, which was 61.7 years according to these data from OWID.Lowest cost to save a life of 3.5 k$ (from Helen Keller International), as stated by GW here.The results are in the tables below. The data and calculations are here (see tab “Cost-effectiveness”).Intensity of the mean experience as a fraction of the median welfare rangeBroiler in a conventional scenarioBroiler in a reformed scenarioHuman5.7710^-62.5910^-53.3310^-6Broiler in a conventional scenario relative to a humanBroiler in a reformed scenario relative to a humanBroiler in a conventional scenario relative to a reformed scenario7.771.734.49Improvement in chicken welfare when broilers go from a conventional to a reformed scenario as a fraction of...The median welfare range of chickensThe intensity of the mean human experience2....]]>
Vasco Grilo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 30:39 None full 5937
syyFCv5bfizfoZ8Ha_NL_EA_EA EA - Proposed - 'How Much Does It Cost to Save a Life?' Quiz, calculator, tool by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposed - 'How Much Does It Cost to Save a Life?' Quiz, calculator, tool, published by david reinstein on May 12, 2023 on The Effective Altruism Forum.Epistemic basis/status: I've talked this over with Grace and others at GWWC, and people seem generally interested. I'm posting this to get feedback and gauge interest before potentially pushing it further.Basic ideaI'd like to get your thoughts on a "How Much Does It Cost to Save a Life?"[1] quiz and calculator. I've been discussing this with Giving What We Can; it's somewhat modeled off their how rich am I calculator, which drives a lot of traffic to their site.This would mainly target non-EAs, but it would try to strike a good balance between sophistication and simplicity. It could start as a quiz to get people's attention. People would be asked to guess this cost. They could then be asked to reconsider it considering some follow-up questions. This might be a good opportunity for a chatbot to work its magic.After this interaction, the 'correct answer' and 'how well did I do' would take you to an interactive page, presenting the basic calculation and reasoning. (Before or after presenting this) it could also allow users to adjust their moral and epistemic parameters and the scope of their inquiry. This might be something to unfold gradually, letting people specify first one thing, and then maybe more, if they like.E.g.,Target: Rich or poor countries, which age groups, etc.Relative value of a child or adults lifeHow much do you weight life-years for certain statesWhich evidence do you find more plausibleDo you want to include or exclude certain types of benefitsDiscount rateWe would aim to go viral (or at least bacterial)!Value/ToCI believe that people would be highly interested in this: it could be engaging and pique curiosity and competitiveness (a bit click-baity, maybe, but the payoff is not click bait)!It could potentially make news headlines. It’s an “easy story” for media people, asks a question people can engage with, etc. . ’how much does it cost to save a life? find out after the break!) giving the public a chance to engage with the question: "How much does it cost to save a life?"It could help challenge misconceptions about the cost of saving lives, contributing to a more reality-based, impact-focused, and evidence-driven donor community. If people do think it’s much cheaper than it is, as some studies suggest, it would probably be good to change this misconception. It may also be a stepping stone towards encouraging people to think more critically about measuring impact and considering EA-aligned evaluations.> Greater acceptance and understanding of EA, better epistemics in the general public, better donation and policy choicesImplementationWhile GiveWell does have a page with a lot of technical details, it doesn't quite capture the interactive and compelling aspects I'm envisioning for this tool.Giving What We Can's response has been positive, but they understandably lack the capacity within their core team to take on such a project. They suggest it could make for an interesting volunteer project if a UX designer and an engineer were interested in participating.Considering the enthusiasm and the potential for synergy with academic research (which could be supported by funds for Facebook academic ads), I'm contemplating the best approach to bring this idea to life. I tentatively propose the following steps:Put out a request for a volunteer to help develop a proof of concept or minimum viable product. Giving What We Can has some interested engineers, and I could help with guidance and encouragement.2 Apply for direct funding for the project, possibly collaborating with groups focused on quantitative uncertainty and "build your own cost-effectiveness" initiatives, or perhaps with SoGi...]]>
david reinstein https://forum.effectivealtruism.org/posts/syyFCv5bfizfoZ8Ha/proposed-how-much-does-it-cost-to-save-a-life-quiz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposed - 'How Much Does It Cost to Save a Life?' Quiz, calculator, tool, published by david reinstein on May 12, 2023 on The Effective Altruism Forum.Epistemic basis/status: I've talked this over with Grace and others at GWWC, and people seem generally interested. I'm posting this to get feedback and gauge interest before potentially pushing it further.Basic ideaI'd like to get your thoughts on a "How Much Does It Cost to Save a Life?"[1] quiz and calculator. I've been discussing this with Giving What We Can; it's somewhat modeled off their how rich am I calculator, which drives a lot of traffic to their site.This would mainly target non-EAs, but it would try to strike a good balance between sophistication and simplicity. It could start as a quiz to get people's attention. People would be asked to guess this cost. They could then be asked to reconsider it considering some follow-up questions. This might be a good opportunity for a chatbot to work its magic.After this interaction, the 'correct answer' and 'how well did I do' would take you to an interactive page, presenting the basic calculation and reasoning. (Before or after presenting this) it could also allow users to adjust their moral and epistemic parameters and the scope of their inquiry. This might be something to unfold gradually, letting people specify first one thing, and then maybe more, if they like.E.g.,Target: Rich or poor countries, which age groups, etc.Relative value of a child or adults lifeHow much do you weight life-years for certain statesWhich evidence do you find more plausibleDo you want to include or exclude certain types of benefitsDiscount rateWe would aim to go viral (or at least bacterial)!Value/ToCI believe that people would be highly interested in this: it could be engaging and pique curiosity and competitiveness (a bit click-baity, maybe, but the payoff is not click bait)!It could potentially make news headlines. It’s an “easy story” for media people, asks a question people can engage with, etc. . ’how much does it cost to save a life? find out after the break!) giving the public a chance to engage with the question: "How much does it cost to save a life?"It could help challenge misconceptions about the cost of saving lives, contributing to a more reality-based, impact-focused, and evidence-driven donor community. If people do think it’s much cheaper than it is, as some studies suggest, it would probably be good to change this misconception. It may also be a stepping stone towards encouraging people to think more critically about measuring impact and considering EA-aligned evaluations.> Greater acceptance and understanding of EA, better epistemics in the general public, better donation and policy choicesImplementationWhile GiveWell does have a page with a lot of technical details, it doesn't quite capture the interactive and compelling aspects I'm envisioning for this tool.Giving What We Can's response has been positive, but they understandably lack the capacity within their core team to take on such a project. They suggest it could make for an interesting volunteer project if a UX designer and an engineer were interested in participating.Considering the enthusiasm and the potential for synergy with academic research (which could be supported by funds for Facebook academic ads), I'm contemplating the best approach to bring this idea to life. I tentatively propose the following steps:Put out a request for a volunteer to help develop a proof of concept or minimum viable product. Giving What We Can has some interested engineers, and I could help with guidance and encouragement.2 Apply for direct funding for the project, possibly collaborating with groups focused on quantitative uncertainty and "build your own cost-effectiveness" initiatives, or perhaps with SoGi...]]>
Sat, 13 May 2023 09:59:19 +0000 EA - Proposed - 'How Much Does It Cost to Save a Life?' Quiz, calculator, tool by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposed - 'How Much Does It Cost to Save a Life?' Quiz, calculator, tool, published by david reinstein on May 12, 2023 on The Effective Altruism Forum.Epistemic basis/status: I've talked this over with Grace and others at GWWC, and people seem generally interested. I'm posting this to get feedback and gauge interest before potentially pushing it further.Basic ideaI'd like to get your thoughts on a "How Much Does It Cost to Save a Life?"[1] quiz and calculator. I've been discussing this with Giving What We Can; it's somewhat modeled off their how rich am I calculator, which drives a lot of traffic to their site.This would mainly target non-EAs, but it would try to strike a good balance between sophistication and simplicity. It could start as a quiz to get people's attention. People would be asked to guess this cost. They could then be asked to reconsider it considering some follow-up questions. This might be a good opportunity for a chatbot to work its magic.After this interaction, the 'correct answer' and 'how well did I do' would take you to an interactive page, presenting the basic calculation and reasoning. (Before or after presenting this) it could also allow users to adjust their moral and epistemic parameters and the scope of their inquiry. This might be something to unfold gradually, letting people specify first one thing, and then maybe more, if they like.E.g.,Target: Rich or poor countries, which age groups, etc.Relative value of a child or adults lifeHow much do you weight life-years for certain statesWhich evidence do you find more plausibleDo you want to include or exclude certain types of benefitsDiscount rateWe would aim to go viral (or at least bacterial)!Value/ToCI believe that people would be highly interested in this: it could be engaging and pique curiosity and competitiveness (a bit click-baity, maybe, but the payoff is not click bait)!It could potentially make news headlines. It’s an “easy story” for media people, asks a question people can engage with, etc. . ’how much does it cost to save a life? find out after the break!) giving the public a chance to engage with the question: "How much does it cost to save a life?"It could help challenge misconceptions about the cost of saving lives, contributing to a more reality-based, impact-focused, and evidence-driven donor community. If people do think it’s much cheaper than it is, as some studies suggest, it would probably be good to change this misconception. It may also be a stepping stone towards encouraging people to think more critically about measuring impact and considering EA-aligned evaluations.> Greater acceptance and understanding of EA, better epistemics in the general public, better donation and policy choicesImplementationWhile GiveWell does have a page with a lot of technical details, it doesn't quite capture the interactive and compelling aspects I'm envisioning for this tool.Giving What We Can's response has been positive, but they understandably lack the capacity within their core team to take on such a project. They suggest it could make for an interesting volunteer project if a UX designer and an engineer were interested in participating.Considering the enthusiasm and the potential for synergy with academic research (which could be supported by funds for Facebook academic ads), I'm contemplating the best approach to bring this idea to life. I tentatively propose the following steps:Put out a request for a volunteer to help develop a proof of concept or minimum viable product. Giving What We Can has some interested engineers, and I could help with guidance and encouragement.2 Apply for direct funding for the project, possibly collaborating with groups focused on quantitative uncertainty and "build your own cost-effectiveness" initiatives, or perhaps with SoGi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposed - 'How Much Does It Cost to Save a Life?' Quiz, calculator, tool, published by david reinstein on May 12, 2023 on The Effective Altruism Forum.Epistemic basis/status: I've talked this over with Grace and others at GWWC, and people seem generally interested. I'm posting this to get feedback and gauge interest before potentially pushing it further.Basic ideaI'd like to get your thoughts on a "How Much Does It Cost to Save a Life?"[1] quiz and calculator. I've been discussing this with Giving What We Can; it's somewhat modeled off their how rich am I calculator, which drives a lot of traffic to their site.This would mainly target non-EAs, but it would try to strike a good balance between sophistication and simplicity. It could start as a quiz to get people's attention. People would be asked to guess this cost. They could then be asked to reconsider it considering some follow-up questions. This might be a good opportunity for a chatbot to work its magic.After this interaction, the 'correct answer' and 'how well did I do' would take you to an interactive page, presenting the basic calculation and reasoning. (Before or after presenting this) it could also allow users to adjust their moral and epistemic parameters and the scope of their inquiry. This might be something to unfold gradually, letting people specify first one thing, and then maybe more, if they like.E.g.,Target: Rich or poor countries, which age groups, etc.Relative value of a child or adults lifeHow much do you weight life-years for certain statesWhich evidence do you find more plausibleDo you want to include or exclude certain types of benefitsDiscount rateWe would aim to go viral (or at least bacterial)!Value/ToCI believe that people would be highly interested in this: it could be engaging and pique curiosity and competitiveness (a bit click-baity, maybe, but the payoff is not click bait)!It could potentially make news headlines. It’s an “easy story” for media people, asks a question people can engage with, etc. . ’how much does it cost to save a life? find out after the break!) giving the public a chance to engage with the question: "How much does it cost to save a life?"It could help challenge misconceptions about the cost of saving lives, contributing to a more reality-based, impact-focused, and evidence-driven donor community. If people do think it’s much cheaper than it is, as some studies suggest, it would probably be good to change this misconception. It may also be a stepping stone towards encouraging people to think more critically about measuring impact and considering EA-aligned evaluations.> Greater acceptance and understanding of EA, better epistemics in the general public, better donation and policy choicesImplementationWhile GiveWell does have a page with a lot of technical details, it doesn't quite capture the interactive and compelling aspects I'm envisioning for this tool.Giving What We Can's response has been positive, but they understandably lack the capacity within their core team to take on such a project. They suggest it could make for an interesting volunteer project if a UX designer and an engineer were interested in participating.Considering the enthusiasm and the potential for synergy with academic research (which could be supported by funds for Facebook academic ads), I'm contemplating the best approach to bring this idea to life. I tentatively propose the following steps:Put out a request for a volunteer to help develop a proof of concept or minimum viable product. Giving What We Can has some interested engineers, and I could help with guidance and encouragement.2 Apply for direct funding for the project, possibly collaborating with groups focused on quantitative uncertainty and "build your own cost-effectiveness" initiatives, or perhaps with SoGi...]]>
david reinstein https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:54 None full 5935
CuktFCQ39fuXnxcHx_NL_EA_EA EA - Why GiveWell funded the rollout of the malaria vaccine by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why GiveWell funded the rollout of the malaria vaccine, published by GiveWell on May 12, 2023 on The Effective Altruism Forum.Author: Audrey Cooper, GiveWell Philanthropy AdvisorSince our founding in 2007, GiveWell has directed over $600 million to programs that aim to prevent malaria, a mosquito-borne disease that causes severe illness and death. Malaria is preventable and curable, yet it killed over 600,000 people in 2021—mostly young children in Africa.[1]Following the World Health Organization’s approval of the RTS,S/AS01 malaria vaccine (RTS,S) in late 2021,[2] GiveWell directed $5 million to PATH to accelerate the rollout of the vaccine in certain areas of Ghana, Kenya, and Malawi. This grant aimed to enable these communities to gain access to the vaccine about a year earlier than they otherwise would, protecting hundreds of thousands of children from malaria.[3]Although we’re very excited about the potential of the RTS,S malaria vaccine to save lives, it isn’t a panacea. We still plan to support a range of malaria control interventions, including vaccines, nets, and antimalarial medicine.In this post, we will:Explain how we found the opportunity to fund the malaria vaccineDiscuss why we funded this grantShare our plan for malaria funding moving forwardIdentifying a gap in vaccine accessIn October 2021, we shared our initial thoughts on the approval of the RTS,S malaria vaccine by the World Health Organization (WHO). At that point, we weren’t sure whether the vaccine would be cost-effective and were not aware of any opportunities for private donors to support the expansion of vaccine access.In the following months, our conversations with PATH, a large global health nonprofit that we’ve previously funded, revealed that there might be an opportunity to help deploy the vaccine more quickly in certain regions. PATH had been supporting the delivery of the vaccine in Ghana, Kenya, and Malawi as part of the WHO-led pilot—the Malaria Vaccine Implementation Program (MVIP)—since the pilot began in 2019.[4] In order to generate evidence about the effectiveness of the vaccine, randomly selected areas in each country received the vaccine during the early years of the pilot, while “comparison areas” would receive the vaccine at a later date, if the vaccine was recommended by the WHO.[5]Once the vaccine had received approval from the WHO, the WHO and PATH believed there was an opportunity to build on the momentum and groundwork of the pilot to roll out the vaccine to the comparison areas as soon as possible. However, the expectation at the time was that expanding use to the comparison areas would need to wait for the standard process through which low-income countries apply for support to access vaccines from Gavi, the Vaccine Alliance.[6] This process would have made it possible to introduce the vaccine at the end of 2023 at the earliest.[7]However, there was another path through which these vaccines could be provided more quickly. GlaxoSmithKline (GSK), the vaccine manufacturer, had committed to donate up to 10 million vaccine doses as part of its support for the MVIP.[8] This quantity of vaccine was set aside to allow completion of the pilot program, including vaccination in the comparison areas.[9] However, additional support was needed to be able to utilize these vaccines in advance of Gavi financing, including (for example) funding to cover the costs of safe injection supplies and vaccine shipping and handling, as well as the technical assistance required to support vaccine implementation.With funding from GiveWell, PATH believed it could provide the necessary technical assistance to the ministries of health in Ghana, Kenya, and Malawi to support them in using the donated vaccines from GSK and expand vaccine access to the comparison areas at the end of 202...]]>
GiveWell https://forum.effectivealtruism.org/posts/CuktFCQ39fuXnxcHx/why-givewell-funded-the-rollout-of-the-malaria-vaccine Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why GiveWell funded the rollout of the malaria vaccine, published by GiveWell on May 12, 2023 on The Effective Altruism Forum.Author: Audrey Cooper, GiveWell Philanthropy AdvisorSince our founding in 2007, GiveWell has directed over $600 million to programs that aim to prevent malaria, a mosquito-borne disease that causes severe illness and death. Malaria is preventable and curable, yet it killed over 600,000 people in 2021—mostly young children in Africa.[1]Following the World Health Organization’s approval of the RTS,S/AS01 malaria vaccine (RTS,S) in late 2021,[2] GiveWell directed $5 million to PATH to accelerate the rollout of the vaccine in certain areas of Ghana, Kenya, and Malawi. This grant aimed to enable these communities to gain access to the vaccine about a year earlier than they otherwise would, protecting hundreds of thousands of children from malaria.[3]Although we’re very excited about the potential of the RTS,S malaria vaccine to save lives, it isn’t a panacea. We still plan to support a range of malaria control interventions, including vaccines, nets, and antimalarial medicine.In this post, we will:Explain how we found the opportunity to fund the malaria vaccineDiscuss why we funded this grantShare our plan for malaria funding moving forwardIdentifying a gap in vaccine accessIn October 2021, we shared our initial thoughts on the approval of the RTS,S malaria vaccine by the World Health Organization (WHO). At that point, we weren’t sure whether the vaccine would be cost-effective and were not aware of any opportunities for private donors to support the expansion of vaccine access.In the following months, our conversations with PATH, a large global health nonprofit that we’ve previously funded, revealed that there might be an opportunity to help deploy the vaccine more quickly in certain regions. PATH had been supporting the delivery of the vaccine in Ghana, Kenya, and Malawi as part of the WHO-led pilot—the Malaria Vaccine Implementation Program (MVIP)—since the pilot began in 2019.[4] In order to generate evidence about the effectiveness of the vaccine, randomly selected areas in each country received the vaccine during the early years of the pilot, while “comparison areas” would receive the vaccine at a later date, if the vaccine was recommended by the WHO.[5]Once the vaccine had received approval from the WHO, the WHO and PATH believed there was an opportunity to build on the momentum and groundwork of the pilot to roll out the vaccine to the comparison areas as soon as possible. However, the expectation at the time was that expanding use to the comparison areas would need to wait for the standard process through which low-income countries apply for support to access vaccines from Gavi, the Vaccine Alliance.[6] This process would have made it possible to introduce the vaccine at the end of 2023 at the earliest.[7]However, there was another path through which these vaccines could be provided more quickly. GlaxoSmithKline (GSK), the vaccine manufacturer, had committed to donate up to 10 million vaccine doses as part of its support for the MVIP.[8] This quantity of vaccine was set aside to allow completion of the pilot program, including vaccination in the comparison areas.[9] However, additional support was needed to be able to utilize these vaccines in advance of Gavi financing, including (for example) funding to cover the costs of safe injection supplies and vaccine shipping and handling, as well as the technical assistance required to support vaccine implementation.With funding from GiveWell, PATH believed it could provide the necessary technical assistance to the ministries of health in Ghana, Kenya, and Malawi to support them in using the donated vaccines from GSK and expand vaccine access to the comparison areas at the end of 202...]]>
Fri, 12 May 2023 19:50:28 +0000 EA - Why GiveWell funded the rollout of the malaria vaccine by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why GiveWell funded the rollout of the malaria vaccine, published by GiveWell on May 12, 2023 on The Effective Altruism Forum.Author: Audrey Cooper, GiveWell Philanthropy AdvisorSince our founding in 2007, GiveWell has directed over $600 million to programs that aim to prevent malaria, a mosquito-borne disease that causes severe illness and death. Malaria is preventable and curable, yet it killed over 600,000 people in 2021—mostly young children in Africa.[1]Following the World Health Organization’s approval of the RTS,S/AS01 malaria vaccine (RTS,S) in late 2021,[2] GiveWell directed $5 million to PATH to accelerate the rollout of the vaccine in certain areas of Ghana, Kenya, and Malawi. This grant aimed to enable these communities to gain access to the vaccine about a year earlier than they otherwise would, protecting hundreds of thousands of children from malaria.[3]Although we’re very excited about the potential of the RTS,S malaria vaccine to save lives, it isn’t a panacea. We still plan to support a range of malaria control interventions, including vaccines, nets, and antimalarial medicine.In this post, we will:Explain how we found the opportunity to fund the malaria vaccineDiscuss why we funded this grantShare our plan for malaria funding moving forwardIdentifying a gap in vaccine accessIn October 2021, we shared our initial thoughts on the approval of the RTS,S malaria vaccine by the World Health Organization (WHO). At that point, we weren’t sure whether the vaccine would be cost-effective and were not aware of any opportunities for private donors to support the expansion of vaccine access.In the following months, our conversations with PATH, a large global health nonprofit that we’ve previously funded, revealed that there might be an opportunity to help deploy the vaccine more quickly in certain regions. PATH had been supporting the delivery of the vaccine in Ghana, Kenya, and Malawi as part of the WHO-led pilot—the Malaria Vaccine Implementation Program (MVIP)—since the pilot began in 2019.[4] In order to generate evidence about the effectiveness of the vaccine, randomly selected areas in each country received the vaccine during the early years of the pilot, while “comparison areas” would receive the vaccine at a later date, if the vaccine was recommended by the WHO.[5]Once the vaccine had received approval from the WHO, the WHO and PATH believed there was an opportunity to build on the momentum and groundwork of the pilot to roll out the vaccine to the comparison areas as soon as possible. However, the expectation at the time was that expanding use to the comparison areas would need to wait for the standard process through which low-income countries apply for support to access vaccines from Gavi, the Vaccine Alliance.[6] This process would have made it possible to introduce the vaccine at the end of 2023 at the earliest.[7]However, there was another path through which these vaccines could be provided more quickly. GlaxoSmithKline (GSK), the vaccine manufacturer, had committed to donate up to 10 million vaccine doses as part of its support for the MVIP.[8] This quantity of vaccine was set aside to allow completion of the pilot program, including vaccination in the comparison areas.[9] However, additional support was needed to be able to utilize these vaccines in advance of Gavi financing, including (for example) funding to cover the costs of safe injection supplies and vaccine shipping and handling, as well as the technical assistance required to support vaccine implementation.With funding from GiveWell, PATH believed it could provide the necessary technical assistance to the ministries of health in Ghana, Kenya, and Malawi to support them in using the donated vaccines from GSK and expand vaccine access to the comparison areas at the end of 202...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why GiveWell funded the rollout of the malaria vaccine, published by GiveWell on May 12, 2023 on The Effective Altruism Forum.Author: Audrey Cooper, GiveWell Philanthropy AdvisorSince our founding in 2007, GiveWell has directed over $600 million to programs that aim to prevent malaria, a mosquito-borne disease that causes severe illness and death. Malaria is preventable and curable, yet it killed over 600,000 people in 2021—mostly young children in Africa.[1]Following the World Health Organization’s approval of the RTS,S/AS01 malaria vaccine (RTS,S) in late 2021,[2] GiveWell directed $5 million to PATH to accelerate the rollout of the vaccine in certain areas of Ghana, Kenya, and Malawi. This grant aimed to enable these communities to gain access to the vaccine about a year earlier than they otherwise would, protecting hundreds of thousands of children from malaria.[3]Although we’re very excited about the potential of the RTS,S malaria vaccine to save lives, it isn’t a panacea. We still plan to support a range of malaria control interventions, including vaccines, nets, and antimalarial medicine.In this post, we will:Explain how we found the opportunity to fund the malaria vaccineDiscuss why we funded this grantShare our plan for malaria funding moving forwardIdentifying a gap in vaccine accessIn October 2021, we shared our initial thoughts on the approval of the RTS,S malaria vaccine by the World Health Organization (WHO). At that point, we weren’t sure whether the vaccine would be cost-effective and were not aware of any opportunities for private donors to support the expansion of vaccine access.In the following months, our conversations with PATH, a large global health nonprofit that we’ve previously funded, revealed that there might be an opportunity to help deploy the vaccine more quickly in certain regions. PATH had been supporting the delivery of the vaccine in Ghana, Kenya, and Malawi as part of the WHO-led pilot—the Malaria Vaccine Implementation Program (MVIP)—since the pilot began in 2019.[4] In order to generate evidence about the effectiveness of the vaccine, randomly selected areas in each country received the vaccine during the early years of the pilot, while “comparison areas” would receive the vaccine at a later date, if the vaccine was recommended by the WHO.[5]Once the vaccine had received approval from the WHO, the WHO and PATH believed there was an opportunity to build on the momentum and groundwork of the pilot to roll out the vaccine to the comparison areas as soon as possible. However, the expectation at the time was that expanding use to the comparison areas would need to wait for the standard process through which low-income countries apply for support to access vaccines from Gavi, the Vaccine Alliance.[6] This process would have made it possible to introduce the vaccine at the end of 2023 at the earliest.[7]However, there was another path through which these vaccines could be provided more quickly. GlaxoSmithKline (GSK), the vaccine manufacturer, had committed to donate up to 10 million vaccine doses as part of its support for the MVIP.[8] This quantity of vaccine was set aside to allow completion of the pilot program, including vaccination in the comparison areas.[9] However, additional support was needed to be able to utilize these vaccines in advance of Gavi financing, including (for example) funding to cover the costs of safe injection supplies and vaccine shipping and handling, as well as the technical assistance required to support vaccine implementation.With funding from GiveWell, PATH believed it could provide the necessary technical assistance to the ministries of health in Ghana, Kenya, and Malawi to support them in using the donated vaccines from GSK and expand vaccine access to the comparison areas at the end of 202...]]>
GiveWell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:05 None full 5926
ConFiY9cRmg37fs2p_NL_EA_EA EA - US public opinion of AI policy and risk by Jamie Elsey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US public opinion of AI policy and risk, published by Jamie Elsey on May 12, 2023 on The Effective Altruism Forum.SummaryOn April 21st 2023, Rethink Priorities conducted an online poll to assess US public perceptions of, and opinions about, AI risk. The poll was intended to conceptually replicate and extend a recent AI-related poll from YouGov, as well as drawing inspiration from some other recent AI polls from Monmouth University and Harris-MITRE.The poll covered opinions regarding:A pause on certain kinds of AI researchShould AI be regulated (akin to the FDA)?Worry about negative effects of AIExtinction risk in 10 and 50 yearsLikelihood of achieving greater than human level intelligencePerceived most likely existential threatsExpected harm vs. good from AIOur population estimates reflect the responses of 2444 US adults, poststratified to be representative of the US population. See the Methodology section of the Appendix for more information on sampling and estimation procedures.Key findingsFor each key finding below, more granular response categories are presented in the main text, along with demographic breakdowns of interest.Pause on AI Research. Support for a pause on AI research outstrips opposition. We estimate that 51% of the population would support, 25% would oppose, 20% remain neutral, and 4% don’t know (compared to 58-61% support and 19-23% opposition across different framings in YouGov’s polls). Hence, support is robust across different framings and surveys. The slightly lower level of support in our survey may be explained by our somewhat more neutral framing.Should AI be regulated (akin to the FDA)? Many more people think AI should be regulated than think it should not be. We estimate that 70% believe Yes, 21% believe No, and 9% don’t know.Worry about negative effects of AI. Worry in everyday life about the negative effects of AI appears to be quite low. We estimate 72% of US adults worry little or not at all about AI, 21% report a fair amount of worry, and less than 10% worry a lot or more.Extinction risk in 10 and 50 years. Expectation of extinction from AI is relatively low in the next 10 years but increases in the 50 year time horizon. We estimate 9% think AI-caused extinction to be moderately likely or more in the next 10 years, and 22% think this in the next 50 years.Likelihood of achieving greater than human level intelligence. Most people think AI will ultimately become more intelligent than people. We estimate 67% think this moderately likely or more, 40% highly likely or more, and only 15% think it is not at all likely.Perceived most likely existential threats. AI ranks low among other perceived existential threats to humanity. AI ranked below all 4 other specific existential threats we asked about, with an estimated 4% thinking it the most likely cause of human extinction. For reference, the most likely cause, nuclear war, is estimated to be selected by 42% of people. The other least likely cause - a pandemic - is expected to be picked by 8% of the population.Expected harm vs. good from AI. Despite perceived risks, people tend to anticipate more benefits than harms from AI. We estimate that 48% expect more good than harm, 31% more harm than good, 19% expecting an even balance, and 2% reporting no opinion.The estimates from this poll may inform policy making and advocacy efforts regarding AI risk mitigation. The findings suggest an attitude of caution from the public, with substantially greater support than opposition to measures that are intended to curb the evolution of certain types of AI, as well as for regulation of AI. However, concerns over AI do not yet appear to feature especially prominently in public perception of the existential risk landscape: people report worrying about it only a little, and rarely picked i...]]>
Jamie Elsey https://forum.effectivealtruism.org/posts/ConFiY9cRmg37fs2p/us-public-opinion-of-ai-policy-and-risk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US public opinion of AI policy and risk, published by Jamie Elsey on May 12, 2023 on The Effective Altruism Forum.SummaryOn April 21st 2023, Rethink Priorities conducted an online poll to assess US public perceptions of, and opinions about, AI risk. The poll was intended to conceptually replicate and extend a recent AI-related poll from YouGov, as well as drawing inspiration from some other recent AI polls from Monmouth University and Harris-MITRE.The poll covered opinions regarding:A pause on certain kinds of AI researchShould AI be regulated (akin to the FDA)?Worry about negative effects of AIExtinction risk in 10 and 50 yearsLikelihood of achieving greater than human level intelligencePerceived most likely existential threatsExpected harm vs. good from AIOur population estimates reflect the responses of 2444 US adults, poststratified to be representative of the US population. See the Methodology section of the Appendix for more information on sampling and estimation procedures.Key findingsFor each key finding below, more granular response categories are presented in the main text, along with demographic breakdowns of interest.Pause on AI Research. Support for a pause on AI research outstrips opposition. We estimate that 51% of the population would support, 25% would oppose, 20% remain neutral, and 4% don’t know (compared to 58-61% support and 19-23% opposition across different framings in YouGov’s polls). Hence, support is robust across different framings and surveys. The slightly lower level of support in our survey may be explained by our somewhat more neutral framing.Should AI be regulated (akin to the FDA)? Many more people think AI should be regulated than think it should not be. We estimate that 70% believe Yes, 21% believe No, and 9% don’t know.Worry about negative effects of AI. Worry in everyday life about the negative effects of AI appears to be quite low. We estimate 72% of US adults worry little or not at all about AI, 21% report a fair amount of worry, and less than 10% worry a lot or more.Extinction risk in 10 and 50 years. Expectation of extinction from AI is relatively low in the next 10 years but increases in the 50 year time horizon. We estimate 9% think AI-caused extinction to be moderately likely or more in the next 10 years, and 22% think this in the next 50 years.Likelihood of achieving greater than human level intelligence. Most people think AI will ultimately become more intelligent than people. We estimate 67% think this moderately likely or more, 40% highly likely or more, and only 15% think it is not at all likely.Perceived most likely existential threats. AI ranks low among other perceived existential threats to humanity. AI ranked below all 4 other specific existential threats we asked about, with an estimated 4% thinking it the most likely cause of human extinction. For reference, the most likely cause, nuclear war, is estimated to be selected by 42% of people. The other least likely cause - a pandemic - is expected to be picked by 8% of the population.Expected harm vs. good from AI. Despite perceived risks, people tend to anticipate more benefits than harms from AI. We estimate that 48% expect more good than harm, 31% more harm than good, 19% expecting an even balance, and 2% reporting no opinion.The estimates from this poll may inform policy making and advocacy efforts regarding AI risk mitigation. The findings suggest an attitude of caution from the public, with substantially greater support than opposition to measures that are intended to curb the evolution of certain types of AI, as well as for regulation of AI. However, concerns over AI do not yet appear to feature especially prominently in public perception of the existential risk landscape: people report worrying about it only a little, and rarely picked i...]]>
Fri, 12 May 2023 15:27:22 +0000 EA - US public opinion of AI policy and risk by Jamie Elsey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US public opinion of AI policy and risk, published by Jamie Elsey on May 12, 2023 on The Effective Altruism Forum.SummaryOn April 21st 2023, Rethink Priorities conducted an online poll to assess US public perceptions of, and opinions about, AI risk. The poll was intended to conceptually replicate and extend a recent AI-related poll from YouGov, as well as drawing inspiration from some other recent AI polls from Monmouth University and Harris-MITRE.The poll covered opinions regarding:A pause on certain kinds of AI researchShould AI be regulated (akin to the FDA)?Worry about negative effects of AIExtinction risk in 10 and 50 yearsLikelihood of achieving greater than human level intelligencePerceived most likely existential threatsExpected harm vs. good from AIOur population estimates reflect the responses of 2444 US adults, poststratified to be representative of the US population. See the Methodology section of the Appendix for more information on sampling and estimation procedures.Key findingsFor each key finding below, more granular response categories are presented in the main text, along with demographic breakdowns of interest.Pause on AI Research. Support for a pause on AI research outstrips opposition. We estimate that 51% of the population would support, 25% would oppose, 20% remain neutral, and 4% don’t know (compared to 58-61% support and 19-23% opposition across different framings in YouGov’s polls). Hence, support is robust across different framings and surveys. The slightly lower level of support in our survey may be explained by our somewhat more neutral framing.Should AI be regulated (akin to the FDA)? Many more people think AI should be regulated than think it should not be. We estimate that 70% believe Yes, 21% believe No, and 9% don’t know.Worry about negative effects of AI. Worry in everyday life about the negative effects of AI appears to be quite low. We estimate 72% of US adults worry little or not at all about AI, 21% report a fair amount of worry, and less than 10% worry a lot or more.Extinction risk in 10 and 50 years. Expectation of extinction from AI is relatively low in the next 10 years but increases in the 50 year time horizon. We estimate 9% think AI-caused extinction to be moderately likely or more in the next 10 years, and 22% think this in the next 50 years.Likelihood of achieving greater than human level intelligence. Most people think AI will ultimately become more intelligent than people. We estimate 67% think this moderately likely or more, 40% highly likely or more, and only 15% think it is not at all likely.Perceived most likely existential threats. AI ranks low among other perceived existential threats to humanity. AI ranked below all 4 other specific existential threats we asked about, with an estimated 4% thinking it the most likely cause of human extinction. For reference, the most likely cause, nuclear war, is estimated to be selected by 42% of people. The other least likely cause - a pandemic - is expected to be picked by 8% of the population.Expected harm vs. good from AI. Despite perceived risks, people tend to anticipate more benefits than harms from AI. We estimate that 48% expect more good than harm, 31% more harm than good, 19% expecting an even balance, and 2% reporting no opinion.The estimates from this poll may inform policy making and advocacy efforts regarding AI risk mitigation. The findings suggest an attitude of caution from the public, with substantially greater support than opposition to measures that are intended to curb the evolution of certain types of AI, as well as for regulation of AI. However, concerns over AI do not yet appear to feature especially prominently in public perception of the existential risk landscape: people report worrying about it only a little, and rarely picked i...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US public opinion of AI policy and risk, published by Jamie Elsey on May 12, 2023 on The Effective Altruism Forum.SummaryOn April 21st 2023, Rethink Priorities conducted an online poll to assess US public perceptions of, and opinions about, AI risk. The poll was intended to conceptually replicate and extend a recent AI-related poll from YouGov, as well as drawing inspiration from some other recent AI polls from Monmouth University and Harris-MITRE.The poll covered opinions regarding:A pause on certain kinds of AI researchShould AI be regulated (akin to the FDA)?Worry about negative effects of AIExtinction risk in 10 and 50 yearsLikelihood of achieving greater than human level intelligencePerceived most likely existential threatsExpected harm vs. good from AIOur population estimates reflect the responses of 2444 US adults, poststratified to be representative of the US population. See the Methodology section of the Appendix for more information on sampling and estimation procedures.Key findingsFor each key finding below, more granular response categories are presented in the main text, along with demographic breakdowns of interest.Pause on AI Research. Support for a pause on AI research outstrips opposition. We estimate that 51% of the population would support, 25% would oppose, 20% remain neutral, and 4% don’t know (compared to 58-61% support and 19-23% opposition across different framings in YouGov’s polls). Hence, support is robust across different framings and surveys. The slightly lower level of support in our survey may be explained by our somewhat more neutral framing.Should AI be regulated (akin to the FDA)? Many more people think AI should be regulated than think it should not be. We estimate that 70% believe Yes, 21% believe No, and 9% don’t know.Worry about negative effects of AI. Worry in everyday life about the negative effects of AI appears to be quite low. We estimate 72% of US adults worry little or not at all about AI, 21% report a fair amount of worry, and less than 10% worry a lot or more.Extinction risk in 10 and 50 years. Expectation of extinction from AI is relatively low in the next 10 years but increases in the 50 year time horizon. We estimate 9% think AI-caused extinction to be moderately likely or more in the next 10 years, and 22% think this in the next 50 years.Likelihood of achieving greater than human level intelligence. Most people think AI will ultimately become more intelligent than people. We estimate 67% think this moderately likely or more, 40% highly likely or more, and only 15% think it is not at all likely.Perceived most likely existential threats. AI ranks low among other perceived existential threats to humanity. AI ranked below all 4 other specific existential threats we asked about, with an estimated 4% thinking it the most likely cause of human extinction. For reference, the most likely cause, nuclear war, is estimated to be selected by 42% of people. The other least likely cause - a pandemic - is expected to be picked by 8% of the population.Expected harm vs. good from AI. Despite perceived risks, people tend to anticipate more benefits than harms from AI. We estimate that 48% expect more good than harm, 31% more harm than good, 19% expecting an even balance, and 2% reporting no opinion.The estimates from this poll may inform policy making and advocacy efforts regarding AI risk mitigation. The findings suggest an attitude of caution from the public, with substantially greater support than opposition to measures that are intended to curb the evolution of certain types of AI, as well as for regulation of AI. However, concerns over AI do not yet appear to feature especially prominently in public perception of the existential risk landscape: people report worrying about it only a little, and rarely picked i...]]>
Jamie Elsey https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 26:07 None full 5924
85pJEjQu9aF49CScs_NL_EA_EA EA - Our Progress in 2022 and Plans for 2023 by Open Philanthropy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our Progress in 2022 and Plans for 2023, published by Open Philanthropy on May 12, 2023 on The Effective Altruism Forum.2022 was a big year for Open Philanthropy:We recommended over $650 million in grants — more, by far, than in any other year of our history. [More]We hired our first program officers for three new focus areas in our Global Health and Wellbeing portfolio. [More]Within our Longtermism portfolio, we significantly expanded our grantmaking and used a series of open calls to identify hundreds of promising grants to individuals and small projects. [More]We ran the Regranting Challenge, a novel experiment which allocated $150 million to outstanding programs at other grantmaking organizations. [More]We nearly doubled the size of our team. [More]This post compares our progress with the goals we set forth a year ago, and lays out our plans for the coming year, including:A significant update on how we handle allocating our grantmaking across causes. [More]A potential leadership transition. [More]Continued growth in grantmaking and staff. [More]Continued grantmakingLast year, we wrote:We aim to roughly double the amount [of funding] we recommend [in 2022] relative to [2021], and triple it by 2025.In 2022, we recommended over $650 million in grants (up from roughly $400 million in 2021).We changed our plans midway through the year, due to a stock market decline[ref]This just reflects a decline in the market; our main donors are still planning to give away virtually all of their wealth within their lifetimes.[/ref] that reduced our available assets and led us to adjust the cost-effectiveness bar we use for our spending on global health and wellbeing. When we wrote last year’s post, we had tentatively planned to allocate $500 million to GiveWell’s recommended charities; the actual allocation wound up being $350 million (up from $300 million in 2021).Currently, we expect to recommend over $700 million in grants in 2023, and no longer have a definite grantmaking goal for 2024 and 2025.Highlights from this year’s grantmakingThis section outlines some of the major grants we made across our program areas.In grants to charities recommended by GiveWell:$10.4 million to the Clinton Health Access Initiative to support their Incubator program, which looks for cost-effective and scalable health interventions.$13.7 million to New Incentives for conditional cash transfers to boost vaccination rates in Nigeria.$4.4 million to Evidence Action to support their in-line chlorination program in Malawi.We also made a $48.8 million grant to the same program with funds from our 2021 allocation.Many other grants we haven’t listed here (see our full list of GiveWell-recommended grants).In potential risks from advanced AI:Redwood Research to support their research on aligning AI systems.Center for a New American Security to support their work on AI policy and governance.A number of projects related to understanding and aligning deep learning systems.In biosecurity and pandemic preparedness:Columbia University to support research on far-UVC light to reduce airborne disease transmission.Bipartisan Commission on Biodefense to support work on biodefense policy in the US.The Johns Hopkins Center for Health Security to support their degree program for students pursuing careers in biosecurity.In effective altruism community growth (with a focus on longtermism):80,000 Hours (marketing and general support) for its work to help people have more impact with their careers.Support for the translation of effective altruism-related content into non-English languages.Bluedot Impact to run courses related to several of our priority cause areas.Asterisk to publish a quarterly journal focused on topics related to effective altruism, among others.A program open to applicat...]]>
Open Philanthropy https://forum.effectivealtruism.org/posts/85pJEjQu9aF49CScs/our-progress-in-2022-and-plans-for-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our Progress in 2022 and Plans for 2023, published by Open Philanthropy on May 12, 2023 on The Effective Altruism Forum.2022 was a big year for Open Philanthropy:We recommended over $650 million in grants — more, by far, than in any other year of our history. [More]We hired our first program officers for three new focus areas in our Global Health and Wellbeing portfolio. [More]Within our Longtermism portfolio, we significantly expanded our grantmaking and used a series of open calls to identify hundreds of promising grants to individuals and small projects. [More]We ran the Regranting Challenge, a novel experiment which allocated $150 million to outstanding programs at other grantmaking organizations. [More]We nearly doubled the size of our team. [More]This post compares our progress with the goals we set forth a year ago, and lays out our plans for the coming year, including:A significant update on how we handle allocating our grantmaking across causes. [More]A potential leadership transition. [More]Continued growth in grantmaking and staff. [More]Continued grantmakingLast year, we wrote:We aim to roughly double the amount [of funding] we recommend [in 2022] relative to [2021], and triple it by 2025.In 2022, we recommended over $650 million in grants (up from roughly $400 million in 2021).We changed our plans midway through the year, due to a stock market decline[ref]This just reflects a decline in the market; our main donors are still planning to give away virtually all of their wealth within their lifetimes.[/ref] that reduced our available assets and led us to adjust the cost-effectiveness bar we use for our spending on global health and wellbeing. When we wrote last year’s post, we had tentatively planned to allocate $500 million to GiveWell’s recommended charities; the actual allocation wound up being $350 million (up from $300 million in 2021).Currently, we expect to recommend over $700 million in grants in 2023, and no longer have a definite grantmaking goal for 2024 and 2025.Highlights from this year’s grantmakingThis section outlines some of the major grants we made across our program areas.In grants to charities recommended by GiveWell:$10.4 million to the Clinton Health Access Initiative to support their Incubator program, which looks for cost-effective and scalable health interventions.$13.7 million to New Incentives for conditional cash transfers to boost vaccination rates in Nigeria.$4.4 million to Evidence Action to support their in-line chlorination program in Malawi.We also made a $48.8 million grant to the same program with funds from our 2021 allocation.Many other grants we haven’t listed here (see our full list of GiveWell-recommended grants).In potential risks from advanced AI:Redwood Research to support their research on aligning AI systems.Center for a New American Security to support their work on AI policy and governance.A number of projects related to understanding and aligning deep learning systems.In biosecurity and pandemic preparedness:Columbia University to support research on far-UVC light to reduce airborne disease transmission.Bipartisan Commission on Biodefense to support work on biodefense policy in the US.The Johns Hopkins Center for Health Security to support their degree program for students pursuing careers in biosecurity.In effective altruism community growth (with a focus on longtermism):80,000 Hours (marketing and general support) for its work to help people have more impact with their careers.Support for the translation of effective altruism-related content into non-English languages.Bluedot Impact to run courses related to several of our priority cause areas.Asterisk to publish a quarterly journal focused on topics related to effective altruism, among others.A program open to applicat...]]>
Fri, 12 May 2023 11:04:00 +0000 EA - Our Progress in 2022 and Plans for 2023 by Open Philanthropy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our Progress in 2022 and Plans for 2023, published by Open Philanthropy on May 12, 2023 on The Effective Altruism Forum.2022 was a big year for Open Philanthropy:We recommended over $650 million in grants — more, by far, than in any other year of our history. [More]We hired our first program officers for three new focus areas in our Global Health and Wellbeing portfolio. [More]Within our Longtermism portfolio, we significantly expanded our grantmaking and used a series of open calls to identify hundreds of promising grants to individuals and small projects. [More]We ran the Regranting Challenge, a novel experiment which allocated $150 million to outstanding programs at other grantmaking organizations. [More]We nearly doubled the size of our team. [More]This post compares our progress with the goals we set forth a year ago, and lays out our plans for the coming year, including:A significant update on how we handle allocating our grantmaking across causes. [More]A potential leadership transition. [More]Continued growth in grantmaking and staff. [More]Continued grantmakingLast year, we wrote:We aim to roughly double the amount [of funding] we recommend [in 2022] relative to [2021], and triple it by 2025.In 2022, we recommended over $650 million in grants (up from roughly $400 million in 2021).We changed our plans midway through the year, due to a stock market decline[ref]This just reflects a decline in the market; our main donors are still planning to give away virtually all of their wealth within their lifetimes.[/ref] that reduced our available assets and led us to adjust the cost-effectiveness bar we use for our spending on global health and wellbeing. When we wrote last year’s post, we had tentatively planned to allocate $500 million to GiveWell’s recommended charities; the actual allocation wound up being $350 million (up from $300 million in 2021).Currently, we expect to recommend over $700 million in grants in 2023, and no longer have a definite grantmaking goal for 2024 and 2025.Highlights from this year’s grantmakingThis section outlines some of the major grants we made across our program areas.In grants to charities recommended by GiveWell:$10.4 million to the Clinton Health Access Initiative to support their Incubator program, which looks for cost-effective and scalable health interventions.$13.7 million to New Incentives for conditional cash transfers to boost vaccination rates in Nigeria.$4.4 million to Evidence Action to support their in-line chlorination program in Malawi.We also made a $48.8 million grant to the same program with funds from our 2021 allocation.Many other grants we haven’t listed here (see our full list of GiveWell-recommended grants).In potential risks from advanced AI:Redwood Research to support their research on aligning AI systems.Center for a New American Security to support their work on AI policy and governance.A number of projects related to understanding and aligning deep learning systems.In biosecurity and pandemic preparedness:Columbia University to support research on far-UVC light to reduce airborne disease transmission.Bipartisan Commission on Biodefense to support work on biodefense policy in the US.The Johns Hopkins Center for Health Security to support their degree program for students pursuing careers in biosecurity.In effective altruism community growth (with a focus on longtermism):80,000 Hours (marketing and general support) for its work to help people have more impact with their careers.Support for the translation of effective altruism-related content into non-English languages.Bluedot Impact to run courses related to several of our priority cause areas.Asterisk to publish a quarterly journal focused on topics related to effective altruism, among others.A program open to applicat...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our Progress in 2022 and Plans for 2023, published by Open Philanthropy on May 12, 2023 on The Effective Altruism Forum.2022 was a big year for Open Philanthropy:We recommended over $650 million in grants — more, by far, than in any other year of our history. [More]We hired our first program officers for three new focus areas in our Global Health and Wellbeing portfolio. [More]Within our Longtermism portfolio, we significantly expanded our grantmaking and used a series of open calls to identify hundreds of promising grants to individuals and small projects. [More]We ran the Regranting Challenge, a novel experiment which allocated $150 million to outstanding programs at other grantmaking organizations. [More]We nearly doubled the size of our team. [More]This post compares our progress with the goals we set forth a year ago, and lays out our plans for the coming year, including:A significant update on how we handle allocating our grantmaking across causes. [More]A potential leadership transition. [More]Continued growth in grantmaking and staff. [More]Continued grantmakingLast year, we wrote:We aim to roughly double the amount [of funding] we recommend [in 2022] relative to [2021], and triple it by 2025.In 2022, we recommended over $650 million in grants (up from roughly $400 million in 2021).We changed our plans midway through the year, due to a stock market decline[ref]This just reflects a decline in the market; our main donors are still planning to give away virtually all of their wealth within their lifetimes.[/ref] that reduced our available assets and led us to adjust the cost-effectiveness bar we use for our spending on global health and wellbeing. When we wrote last year’s post, we had tentatively planned to allocate $500 million to GiveWell’s recommended charities; the actual allocation wound up being $350 million (up from $300 million in 2021).Currently, we expect to recommend over $700 million in grants in 2023, and no longer have a definite grantmaking goal for 2024 and 2025.Highlights from this year’s grantmakingThis section outlines some of the major grants we made across our program areas.In grants to charities recommended by GiveWell:$10.4 million to the Clinton Health Access Initiative to support their Incubator program, which looks for cost-effective and scalable health interventions.$13.7 million to New Incentives for conditional cash transfers to boost vaccination rates in Nigeria.$4.4 million to Evidence Action to support their in-line chlorination program in Malawi.We also made a $48.8 million grant to the same program with funds from our 2021 allocation.Many other grants we haven’t listed here (see our full list of GiveWell-recommended grants).In potential risks from advanced AI:Redwood Research to support their research on aligning AI systems.Center for a New American Security to support their work on AI policy and governance.A number of projects related to understanding and aligning deep learning systems.In biosecurity and pandemic preparedness:Columbia University to support research on far-UVC light to reduce airborne disease transmission.Bipartisan Commission on Biodefense to support work on biodefense policy in the US.The Johns Hopkins Center for Health Security to support their degree program for students pursuing careers in biosecurity.In effective altruism community growth (with a focus on longtermism):80,000 Hours (marketing and general support) for its work to help people have more impact with their careers.Support for the translation of effective altruism-related content into non-English languages.Bluedot Impact to run courses related to several of our priority cause areas.Asterisk to publish a quarterly journal focused on topics related to effective altruism, among others.A program open to applicat...]]>
Open Philanthropy https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:16 None full 5919
HssFFgW67ujd3ZaFs_NL_EA_EA EA - Simple charitable donation app idea by kokotajlod Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Simple charitable donation app idea, published by kokotajlod on May 12, 2023 on The Effective Altruism Forum.I'll pay $10xN to the people who build this app, where N is the total karma of this post three months from now, up to a max of $20,000, unless something shady happens like some sort of bot farm. If it turns out this app already exists, I'll pay $1xN instead to the people who find it for me. I'm open to paying significantly more in both cases if I'm convinced of the altruistic case for this app existing; this is just the minimum I personally can commit to and afford.The app consists of a gigantic, full-screen button such that if you press it, the phone will vibrate and play a little satisfying "ching" sound and light up sparkles around where your finger hit, and then $1 will be donated to GiveDirectly.You can keep slamming that button as much as you like to thereby donate as many dollars as you like.In the corner there's a menu button that lets you change from GiveDirectly to Humane League or AMF or whatever (you can go into the settings and input the details for a charity of your choice, adding it to your personal menu of charity options, and then toggle between options as you see fit. You can also set up a "Donate $X per button press instead of $1" option and a "Split each donation between the following N charities" option.)That's it really.Why is this a good idea? Well, I'm not completely confident it is, and part of why I'm posting is to get feedback. But here's my thinking:I often feel guilty for eating out at restaurants. Especially when meat is involved.Currently I donate a substantial amount to charity on a yearly basis (aiming for 10% of income, though I'm not doing a great job of tracking that) but it feels like a chore, I have to remember to do it and then log on and wire the funds. Like paying a bill.If I had this app, I think I'd experiment with the following policy instead: Every time I buy something not-necessary such as a meal at a restaurant, I whip out my phone, pull up the app, and slam that button N times where N is the number of dollars my purchase cost. Thus my personal spending would be matched with my donations. I think I'd feel pretty good while doing so, it would give me a rush of warm fuzzies instead of feeling like a chore. (For this reason I suggest having to press the button N times, instead of building the app to use a text-box-and-number-pad.)Then I'd check in every year or so to see whether my donations were meeting the 10% goal and make a bulk donation to make up the difference if not.If it exceeds the goal, great!I think even if no one saw me use this app, I'd still use it & pay for it. But there's a bonus effect having to do with the social consequences of being seen using it. Kinda like how a big part of why veganism is effective is that you can't hide it from anyone, you are forced to bring it up constantly. Using this app would hopefully have a similar effect -- if you were following a policy similar to the one I described, people would notice you tapping your phone at restaurants and ask you what you were doing & you'd explain and maybe they'd be inspired and do something similar themselves. (Come to think of it, it's important that the "ching" sound not be loud and obnoxious, otherwise it might come across as ostentatious.)I can imagine a world where this app becomes really popular, at least among certain demographics, similar to (though probably not as successful as) veganism.Another mild bonus is that this app could double as a tracker for your discretionary spending. You can go into the settings and see e.g. a graph of your donations over time, statistics on what time of day you do them, etc. and learn things like "jesus do I really spend that much on dining out per month?" and "huh, I guess those Amazon purchase...]]>
kokotajlod https://forum.effectivealtruism.org/posts/HssFFgW67ujd3ZaFs/simple-charitable-donation-app-idea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Simple charitable donation app idea, published by kokotajlod on May 12, 2023 on The Effective Altruism Forum.I'll pay $10xN to the people who build this app, where N is the total karma of this post three months from now, up to a max of $20,000, unless something shady happens like some sort of bot farm. If it turns out this app already exists, I'll pay $1xN instead to the people who find it for me. I'm open to paying significantly more in both cases if I'm convinced of the altruistic case for this app existing; this is just the minimum I personally can commit to and afford.The app consists of a gigantic, full-screen button such that if you press it, the phone will vibrate and play a little satisfying "ching" sound and light up sparkles around where your finger hit, and then $1 will be donated to GiveDirectly.You can keep slamming that button as much as you like to thereby donate as many dollars as you like.In the corner there's a menu button that lets you change from GiveDirectly to Humane League or AMF or whatever (you can go into the settings and input the details for a charity of your choice, adding it to your personal menu of charity options, and then toggle between options as you see fit. You can also set up a "Donate $X per button press instead of $1" option and a "Split each donation between the following N charities" option.)That's it really.Why is this a good idea? Well, I'm not completely confident it is, and part of why I'm posting is to get feedback. But here's my thinking:I often feel guilty for eating out at restaurants. Especially when meat is involved.Currently I donate a substantial amount to charity on a yearly basis (aiming for 10% of income, though I'm not doing a great job of tracking that) but it feels like a chore, I have to remember to do it and then log on and wire the funds. Like paying a bill.If I had this app, I think I'd experiment with the following policy instead: Every time I buy something not-necessary such as a meal at a restaurant, I whip out my phone, pull up the app, and slam that button N times where N is the number of dollars my purchase cost. Thus my personal spending would be matched with my donations. I think I'd feel pretty good while doing so, it would give me a rush of warm fuzzies instead of feeling like a chore. (For this reason I suggest having to press the button N times, instead of building the app to use a text-box-and-number-pad.)Then I'd check in every year or so to see whether my donations were meeting the 10% goal and make a bulk donation to make up the difference if not.If it exceeds the goal, great!I think even if no one saw me use this app, I'd still use it & pay for it. But there's a bonus effect having to do with the social consequences of being seen using it. Kinda like how a big part of why veganism is effective is that you can't hide it from anyone, you are forced to bring it up constantly. Using this app would hopefully have a similar effect -- if you were following a policy similar to the one I described, people would notice you tapping your phone at restaurants and ask you what you were doing & you'd explain and maybe they'd be inspired and do something similar themselves. (Come to think of it, it's important that the "ching" sound not be loud and obnoxious, otherwise it might come across as ostentatious.)I can imagine a world where this app becomes really popular, at least among certain demographics, similar to (though probably not as successful as) veganism.Another mild bonus is that this app could double as a tracker for your discretionary spending. You can go into the settings and see e.g. a graph of your donations over time, statistics on what time of day you do them, etc. and learn things like "jesus do I really spend that much on dining out per month?" and "huh, I guess those Amazon purchase...]]>
Fri, 12 May 2023 04:16:38 +0000 EA - Simple charitable donation app idea by kokotajlod Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Simple charitable donation app idea, published by kokotajlod on May 12, 2023 on The Effective Altruism Forum.I'll pay $10xN to the people who build this app, where N is the total karma of this post three months from now, up to a max of $20,000, unless something shady happens like some sort of bot farm. If it turns out this app already exists, I'll pay $1xN instead to the people who find it for me. I'm open to paying significantly more in both cases if I'm convinced of the altruistic case for this app existing; this is just the minimum I personally can commit to and afford.The app consists of a gigantic, full-screen button such that if you press it, the phone will vibrate and play a little satisfying "ching" sound and light up sparkles around where your finger hit, and then $1 will be donated to GiveDirectly.You can keep slamming that button as much as you like to thereby donate as many dollars as you like.In the corner there's a menu button that lets you change from GiveDirectly to Humane League or AMF or whatever (you can go into the settings and input the details for a charity of your choice, adding it to your personal menu of charity options, and then toggle between options as you see fit. You can also set up a "Donate $X per button press instead of $1" option and a "Split each donation between the following N charities" option.)That's it really.Why is this a good idea? Well, I'm not completely confident it is, and part of why I'm posting is to get feedback. But here's my thinking:I often feel guilty for eating out at restaurants. Especially when meat is involved.Currently I donate a substantial amount to charity on a yearly basis (aiming for 10% of income, though I'm not doing a great job of tracking that) but it feels like a chore, I have to remember to do it and then log on and wire the funds. Like paying a bill.If I had this app, I think I'd experiment with the following policy instead: Every time I buy something not-necessary such as a meal at a restaurant, I whip out my phone, pull up the app, and slam that button N times where N is the number of dollars my purchase cost. Thus my personal spending would be matched with my donations. I think I'd feel pretty good while doing so, it would give me a rush of warm fuzzies instead of feeling like a chore. (For this reason I suggest having to press the button N times, instead of building the app to use a text-box-and-number-pad.)Then I'd check in every year or so to see whether my donations were meeting the 10% goal and make a bulk donation to make up the difference if not.If it exceeds the goal, great!I think even if no one saw me use this app, I'd still use it & pay for it. But there's a bonus effect having to do with the social consequences of being seen using it. Kinda like how a big part of why veganism is effective is that you can't hide it from anyone, you are forced to bring it up constantly. Using this app would hopefully have a similar effect -- if you were following a policy similar to the one I described, people would notice you tapping your phone at restaurants and ask you what you were doing & you'd explain and maybe they'd be inspired and do something similar themselves. (Come to think of it, it's important that the "ching" sound not be loud and obnoxious, otherwise it might come across as ostentatious.)I can imagine a world where this app becomes really popular, at least among certain demographics, similar to (though probably not as successful as) veganism.Another mild bonus is that this app could double as a tracker for your discretionary spending. You can go into the settings and see e.g. a graph of your donations over time, statistics on what time of day you do them, etc. and learn things like "jesus do I really spend that much on dining out per month?" and "huh, I guess those Amazon purchase...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Simple charitable donation app idea, published by kokotajlod on May 12, 2023 on The Effective Altruism Forum.I'll pay $10xN to the people who build this app, where N is the total karma of this post three months from now, up to a max of $20,000, unless something shady happens like some sort of bot farm. If it turns out this app already exists, I'll pay $1xN instead to the people who find it for me. I'm open to paying significantly more in both cases if I'm convinced of the altruistic case for this app existing; this is just the minimum I personally can commit to and afford.The app consists of a gigantic, full-screen button such that if you press it, the phone will vibrate and play a little satisfying "ching" sound and light up sparkles around where your finger hit, and then $1 will be donated to GiveDirectly.You can keep slamming that button as much as you like to thereby donate as many dollars as you like.In the corner there's a menu button that lets you change from GiveDirectly to Humane League or AMF or whatever (you can go into the settings and input the details for a charity of your choice, adding it to your personal menu of charity options, and then toggle between options as you see fit. You can also set up a "Donate $X per button press instead of $1" option and a "Split each donation between the following N charities" option.)That's it really.Why is this a good idea? Well, I'm not completely confident it is, and part of why I'm posting is to get feedback. But here's my thinking:I often feel guilty for eating out at restaurants. Especially when meat is involved.Currently I donate a substantial amount to charity on a yearly basis (aiming for 10% of income, though I'm not doing a great job of tracking that) but it feels like a chore, I have to remember to do it and then log on and wire the funds. Like paying a bill.If I had this app, I think I'd experiment with the following policy instead: Every time I buy something not-necessary such as a meal at a restaurant, I whip out my phone, pull up the app, and slam that button N times where N is the number of dollars my purchase cost. Thus my personal spending would be matched with my donations. I think I'd feel pretty good while doing so, it would give me a rush of warm fuzzies instead of feeling like a chore. (For this reason I suggest having to press the button N times, instead of building the app to use a text-box-and-number-pad.)Then I'd check in every year or so to see whether my donations were meeting the 10% goal and make a bulk donation to make up the difference if not.If it exceeds the goal, great!I think even if no one saw me use this app, I'd still use it & pay for it. But there's a bonus effect having to do with the social consequences of being seen using it. Kinda like how a big part of why veganism is effective is that you can't hide it from anyone, you are forced to bring it up constantly. Using this app would hopefully have a similar effect -- if you were following a policy similar to the one I described, people would notice you tapping your phone at restaurants and ask you what you were doing & you'd explain and maybe they'd be inspired and do something similar themselves. (Come to think of it, it's important that the "ching" sound not be loud and obnoxious, otherwise it might come across as ostentatious.)I can imagine a world where this app becomes really popular, at least among certain demographics, similar to (though probably not as successful as) veganism.Another mild bonus is that this app could double as a tracker for your discretionary spending. You can go into the settings and see e.g. a graph of your donations over time, statistics on what time of day you do them, etc. and learn things like "jesus do I really spend that much on dining out per month?" and "huh, I guess those Amazon purchase...]]>
kokotajlod https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:48 None full 5918
AkaG7LPkHxgncsExi_NL_EA_EA EA - In defence of epistemic modesty [distillation] by Luise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In defence of epistemic modesty [distillation], published by Luise on May 10, 2023 on The Effective Altruism Forum.This is a distillation of In defence of epistemic modesty, a 2017 essay by Gregory Lewis. I hope to make the essay’s key points accessible in a quick and easy way so more people engage with them. I thank Gregory Lewis for helpful comments on an earlier version of this post. Errors are my own.Note: I sometimes use the first person (“I claim”/”I think”) in this post. This felt most natural but is not meant to imply any of the ideas or arguments are mine. Unless I clearly state otherwise, they are Gregory Lewis’s.What I CutI had to make some judgment calls on what is essential and what isn’t. Among other things, I decided most math and toy models weren’t essential. Moreover, I cut the details on the “self-defeating” objection, which felt quite philosophical and probably not relevant to most readers. Furthermore, it will be most useful to treat all the arguments brought up in this distillation as mere introductions, while detailed/conclusive arguments may be found in the original post and the literature.ClaimsI claim two things:You should practice strong epistemic modesty: On a given issue, adopt the view experts generally hold, instead of the view you personally like.EAs/rationalists in particular are too epistemically immodest.Let’s first dive deeper into claim 1.Claim 1: Strong Epistemic ModestyTo distinguish the view you personally like from the view strong epistemic modesty favors, call the former “view by your own lights” and the latter “view all things considered”.In detail, strong epistemic modesty says you should do the following to form your view on an issue:Determine the ‘epistemic virtue’ of people who hold a view on the issue. By ‘epistemic virtue’ I mean someone’s ability to form accurate beliefs, including how much the person knows about the issue, their intelligence, how truth-seeking they are, etc.Determine what everyone's credences by their own lights are.Take an average of everyone’s credences by their own lights (including yourself), weighting them by their epistemic virtue.The product is your view all things considered. Importantly, this process weighs your credences by your own lights no more heavily than those of people with similar epistemic virtue. These people are your ‘epistemic peers’.In practice, you can round this process to “use the existing consensus of experts on the issue or, if there is none, be uncertain”.Why?Intuition PumpSay your mom is convinced she’s figured out the one weird trick to make money on the stock market. You are concerned about the validity of this one weird trick, because of two worries:Does she have a better chance at making money than all the other people with similar (low) amounts of knowledge on the stock market who’re all also convinced they know the one weird trick? (These are her epistemic peers.)How do her odds of making money stack up against people working full-time at a hedge fund with lots of relevant background and access to heavy analysis? (These are the experts.)The point is that we are all sometimes like the mom in this example. We’re overconfident, forgetting that we are no better than our epistemic peers, be the question investing, sports bets, musical taste, or politics. Everyone always thinks they are an exception and have figured [investing/sports/politics] out. It’s our epistemic peers that are wrong! But from their perspective, we look just as foolish and misguided as they look to us.Not only do we treat our epistemic peers incorrectly, but also our epistemic superiors. The mom in this example didn’t seek out the expert consensus on making money on the stock market (maybe something like “use algorithms” and “you don’t stand a chance”). Instead, she may have li...]]>
Luise https://forum.effectivealtruism.org/posts/AkaG7LPkHxgncsExi/in-defence-of-epistemic-modesty-distillation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In defence of epistemic modesty [distillation], published by Luise on May 10, 2023 on The Effective Altruism Forum.This is a distillation of In defence of epistemic modesty, a 2017 essay by Gregory Lewis. I hope to make the essay’s key points accessible in a quick and easy way so more people engage with them. I thank Gregory Lewis for helpful comments on an earlier version of this post. Errors are my own.Note: I sometimes use the first person (“I claim”/”I think”) in this post. This felt most natural but is not meant to imply any of the ideas or arguments are mine. Unless I clearly state otherwise, they are Gregory Lewis’s.What I CutI had to make some judgment calls on what is essential and what isn’t. Among other things, I decided most math and toy models weren’t essential. Moreover, I cut the details on the “self-defeating” objection, which felt quite philosophical and probably not relevant to most readers. Furthermore, it will be most useful to treat all the arguments brought up in this distillation as mere introductions, while detailed/conclusive arguments may be found in the original post and the literature.ClaimsI claim two things:You should practice strong epistemic modesty: On a given issue, adopt the view experts generally hold, instead of the view you personally like.EAs/rationalists in particular are too epistemically immodest.Let’s first dive deeper into claim 1.Claim 1: Strong Epistemic ModestyTo distinguish the view you personally like from the view strong epistemic modesty favors, call the former “view by your own lights” and the latter “view all things considered”.In detail, strong epistemic modesty says you should do the following to form your view on an issue:Determine the ‘epistemic virtue’ of people who hold a view on the issue. By ‘epistemic virtue’ I mean someone’s ability to form accurate beliefs, including how much the person knows about the issue, their intelligence, how truth-seeking they are, etc.Determine what everyone's credences by their own lights are.Take an average of everyone’s credences by their own lights (including yourself), weighting them by their epistemic virtue.The product is your view all things considered. Importantly, this process weighs your credences by your own lights no more heavily than those of people with similar epistemic virtue. These people are your ‘epistemic peers’.In practice, you can round this process to “use the existing consensus of experts on the issue or, if there is none, be uncertain”.Why?Intuition PumpSay your mom is convinced she’s figured out the one weird trick to make money on the stock market. You are concerned about the validity of this one weird trick, because of two worries:Does she have a better chance at making money than all the other people with similar (low) amounts of knowledge on the stock market who’re all also convinced they know the one weird trick? (These are her epistemic peers.)How do her odds of making money stack up against people working full-time at a hedge fund with lots of relevant background and access to heavy analysis? (These are the experts.)The point is that we are all sometimes like the mom in this example. We’re overconfident, forgetting that we are no better than our epistemic peers, be the question investing, sports bets, musical taste, or politics. Everyone always thinks they are an exception and have figured [investing/sports/politics] out. It’s our epistemic peers that are wrong! But from their perspective, we look just as foolish and misguided as they look to us.Not only do we treat our epistemic peers incorrectly, but also our epistemic superiors. The mom in this example didn’t seek out the expert consensus on making money on the stock market (maybe something like “use algorithms” and “you don’t stand a chance”). Instead, she may have li...]]>
Thu, 11 May 2023 21:10:59 +0000 EA - In defence of epistemic modesty [distillation] by Luise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In defence of epistemic modesty [distillation], published by Luise on May 10, 2023 on The Effective Altruism Forum.This is a distillation of In defence of epistemic modesty, a 2017 essay by Gregory Lewis. I hope to make the essay’s key points accessible in a quick and easy way so more people engage with them. I thank Gregory Lewis for helpful comments on an earlier version of this post. Errors are my own.Note: I sometimes use the first person (“I claim”/”I think”) in this post. This felt most natural but is not meant to imply any of the ideas or arguments are mine. Unless I clearly state otherwise, they are Gregory Lewis’s.What I CutI had to make some judgment calls on what is essential and what isn’t. Among other things, I decided most math and toy models weren’t essential. Moreover, I cut the details on the “self-defeating” objection, which felt quite philosophical and probably not relevant to most readers. Furthermore, it will be most useful to treat all the arguments brought up in this distillation as mere introductions, while detailed/conclusive arguments may be found in the original post and the literature.ClaimsI claim two things:You should practice strong epistemic modesty: On a given issue, adopt the view experts generally hold, instead of the view you personally like.EAs/rationalists in particular are too epistemically immodest.Let’s first dive deeper into claim 1.Claim 1: Strong Epistemic ModestyTo distinguish the view you personally like from the view strong epistemic modesty favors, call the former “view by your own lights” and the latter “view all things considered”.In detail, strong epistemic modesty says you should do the following to form your view on an issue:Determine the ‘epistemic virtue’ of people who hold a view on the issue. By ‘epistemic virtue’ I mean someone’s ability to form accurate beliefs, including how much the person knows about the issue, their intelligence, how truth-seeking they are, etc.Determine what everyone's credences by their own lights are.Take an average of everyone’s credences by their own lights (including yourself), weighting them by their epistemic virtue.The product is your view all things considered. Importantly, this process weighs your credences by your own lights no more heavily than those of people with similar epistemic virtue. These people are your ‘epistemic peers’.In practice, you can round this process to “use the existing consensus of experts on the issue or, if there is none, be uncertain”.Why?Intuition PumpSay your mom is convinced she’s figured out the one weird trick to make money on the stock market. You are concerned about the validity of this one weird trick, because of two worries:Does she have a better chance at making money than all the other people with similar (low) amounts of knowledge on the stock market who’re all also convinced they know the one weird trick? (These are her epistemic peers.)How do her odds of making money stack up against people working full-time at a hedge fund with lots of relevant background and access to heavy analysis? (These are the experts.)The point is that we are all sometimes like the mom in this example. We’re overconfident, forgetting that we are no better than our epistemic peers, be the question investing, sports bets, musical taste, or politics. Everyone always thinks they are an exception and have figured [investing/sports/politics] out. It’s our epistemic peers that are wrong! But from their perspective, we look just as foolish and misguided as they look to us.Not only do we treat our epistemic peers incorrectly, but also our epistemic superiors. The mom in this example didn’t seek out the expert consensus on making money on the stock market (maybe something like “use algorithms” and “you don’t stand a chance”). Instead, she may have li...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In defence of epistemic modesty [distillation], published by Luise on May 10, 2023 on The Effective Altruism Forum.This is a distillation of In defence of epistemic modesty, a 2017 essay by Gregory Lewis. I hope to make the essay’s key points accessible in a quick and easy way so more people engage with them. I thank Gregory Lewis for helpful comments on an earlier version of this post. Errors are my own.Note: I sometimes use the first person (“I claim”/”I think”) in this post. This felt most natural but is not meant to imply any of the ideas or arguments are mine. Unless I clearly state otherwise, they are Gregory Lewis’s.What I CutI had to make some judgment calls on what is essential and what isn’t. Among other things, I decided most math and toy models weren’t essential. Moreover, I cut the details on the “self-defeating” objection, which felt quite philosophical and probably not relevant to most readers. Furthermore, it will be most useful to treat all the arguments brought up in this distillation as mere introductions, while detailed/conclusive arguments may be found in the original post and the literature.ClaimsI claim two things:You should practice strong epistemic modesty: On a given issue, adopt the view experts generally hold, instead of the view you personally like.EAs/rationalists in particular are too epistemically immodest.Let’s first dive deeper into claim 1.Claim 1: Strong Epistemic ModestyTo distinguish the view you personally like from the view strong epistemic modesty favors, call the former “view by your own lights” and the latter “view all things considered”.In detail, strong epistemic modesty says you should do the following to form your view on an issue:Determine the ‘epistemic virtue’ of people who hold a view on the issue. By ‘epistemic virtue’ I mean someone’s ability to form accurate beliefs, including how much the person knows about the issue, their intelligence, how truth-seeking they are, etc.Determine what everyone's credences by their own lights are.Take an average of everyone’s credences by their own lights (including yourself), weighting them by their epistemic virtue.The product is your view all things considered. Importantly, this process weighs your credences by your own lights no more heavily than those of people with similar epistemic virtue. These people are your ‘epistemic peers’.In practice, you can round this process to “use the existing consensus of experts on the issue or, if there is none, be uncertain”.Why?Intuition PumpSay your mom is convinced she’s figured out the one weird trick to make money on the stock market. You are concerned about the validity of this one weird trick, because of two worries:Does she have a better chance at making money than all the other people with similar (low) amounts of knowledge on the stock market who’re all also convinced they know the one weird trick? (These are her epistemic peers.)How do her odds of making money stack up against people working full-time at a hedge fund with lots of relevant background and access to heavy analysis? (These are the experts.)The point is that we are all sometimes like the mom in this example. We’re overconfident, forgetting that we are no better than our epistemic peers, be the question investing, sports bets, musical taste, or politics. Everyone always thinks they are an exception and have figured [investing/sports/politics] out. It’s our epistemic peers that are wrong! But from their perspective, we look just as foolish and misguided as they look to us.Not only do we treat our epistemic peers incorrectly, but also our epistemic superiors. The mom in this example didn’t seek out the expert consensus on making money on the stock market (maybe something like “use algorithms” and “you don’t stand a chance”). Instead, she may have li...]]>
Luise https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:57 None full 5914
GgmAeWqXSg8DHMsJe_NL_EA_EA EA - How much funging is there with donations to different EA animal charities? by Brian Tomasik Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much funging is there with donations to different EA animal charities?, published by Brian Tomasik on May 11, 2023 on The Effective Altruism Forum.My main questionThe EA Funds Animal Welfare Fund makes grants to many different animal charities. Suppose I want to support one particular charity that they grant to because I think it's better, relative to my values, than most of the other ones. For example, maybe I want to specifically give to Legal Impact for Chickens (LIC), so I donate $1000 to them.Because this donation reduces LIC's room for more funding, it may decrease the amount that the Animal Welfare Fund itself (or Open Philanthropy, Animal Charity Evaluators, or individual EA donors) will give to LIC in the future. How large should I expect this effect to be in general? Will my $1000 donation tend to "funge" against these other EA donors almost fully, so that LIC can be expected to get about $1000 less from them? Is the funging amount more like $500? Is it roughly $0 of funging? Or maybe donating to LIC helps them grow faster, so that they can hire more people and do more things, thereby increasing their room for funding and how much other EA donors give to them?The answer to this question probably varies substantially from one case to the next, and maybe the best way to figure it out would be to learn a lot about the funding situation for a particular charity and the funding inclinations of big EA donors toward that charity. But that takes a lot of work, so I wonder if EA funders have some intuition for what tends to happen on average in situations like this, to inform small donors who aren't going to get that far into the weeds with a particular charity. Does the funging amount tend to be closer to 0% or closer to 100% of what an individual donor gives?I notice that the Animal Welfare Fund sometimes funds ~10% to ~50% of an organization's operating budget, which I imagine may be partly intentional to avoid crowding out small donors. (It may also be motivated by wanting charities to diversify their funding sources and due to limited funds to disburse.) Is it true in general that the Animal Welfare Fund doesn't fully fill room for funding, or are there charities for which the Fund does top up the charity completely? (Note that it would actually be better impact-wise to ensure that the very best charities are roughly fully funded, so I'm not encouraging a strategy of deliberately underfunding them.)In the rest of this post, I'll give more details on why I'm asking about this topic, but this further elaboration is optional reading and is more specific to my situation.My donation preferencesI think a lot of EA donations to animal charities are really exciting. About 1/3 of the grants in the Animal Welfare Fund's Grants Database seem to me roughly as cost-effective as possible for reducing near-term animal suffering. However, for some other grants, I'm pretty ambivalent about the sign of the net impact (whether it's net good or bad).This is mainly for two reasons:I'm unsure if meat reduction, on the whole, reduces animal suffering, mainly because certain kinds of animal farming, especially cattle grazing on non-irrigated pasture, may reduce an enormous amount of wild-animal suffering (though there are huge error bars on this analysis).I'm unsure if antispeciesism in general reduces net suffering. In the short run, I worry that it may encourage more habitat preservation, thereby increasing wild-animal suffering. In the long run, moral-circle expansion could encourage people to create lots of additional small-brained sentience, and in (hopefully unlikely) scenarios where human values become inverted, antispeciesist values could multiply total suffering manyfold.If I could press a button to reduce overall meat consumption or to increase concern for an...]]>
Brian Tomasik https://forum.effectivealtruism.org/posts/GgmAeWqXSg8DHMsJe/how-much-funging-is-there-with-donations-to-different-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much funging is there with donations to different EA animal charities?, published by Brian Tomasik on May 11, 2023 on The Effective Altruism Forum.My main questionThe EA Funds Animal Welfare Fund makes grants to many different animal charities. Suppose I want to support one particular charity that they grant to because I think it's better, relative to my values, than most of the other ones. For example, maybe I want to specifically give to Legal Impact for Chickens (LIC), so I donate $1000 to them.Because this donation reduces LIC's room for more funding, it may decrease the amount that the Animal Welfare Fund itself (or Open Philanthropy, Animal Charity Evaluators, or individual EA donors) will give to LIC in the future. How large should I expect this effect to be in general? Will my $1000 donation tend to "funge" against these other EA donors almost fully, so that LIC can be expected to get about $1000 less from them? Is the funging amount more like $500? Is it roughly $0 of funging? Or maybe donating to LIC helps them grow faster, so that they can hire more people and do more things, thereby increasing their room for funding and how much other EA donors give to them?The answer to this question probably varies substantially from one case to the next, and maybe the best way to figure it out would be to learn a lot about the funding situation for a particular charity and the funding inclinations of big EA donors toward that charity. But that takes a lot of work, so I wonder if EA funders have some intuition for what tends to happen on average in situations like this, to inform small donors who aren't going to get that far into the weeds with a particular charity. Does the funging amount tend to be closer to 0% or closer to 100% of what an individual donor gives?I notice that the Animal Welfare Fund sometimes funds ~10% to ~50% of an organization's operating budget, which I imagine may be partly intentional to avoid crowding out small donors. (It may also be motivated by wanting charities to diversify their funding sources and due to limited funds to disburse.) Is it true in general that the Animal Welfare Fund doesn't fully fill room for funding, or are there charities for which the Fund does top up the charity completely? (Note that it would actually be better impact-wise to ensure that the very best charities are roughly fully funded, so I'm not encouraging a strategy of deliberately underfunding them.)In the rest of this post, I'll give more details on why I'm asking about this topic, but this further elaboration is optional reading and is more specific to my situation.My donation preferencesI think a lot of EA donations to animal charities are really exciting. About 1/3 of the grants in the Animal Welfare Fund's Grants Database seem to me roughly as cost-effective as possible for reducing near-term animal suffering. However, for some other grants, I'm pretty ambivalent about the sign of the net impact (whether it's net good or bad).This is mainly for two reasons:I'm unsure if meat reduction, on the whole, reduces animal suffering, mainly because certain kinds of animal farming, especially cattle grazing on non-irrigated pasture, may reduce an enormous amount of wild-animal suffering (though there are huge error bars on this analysis).I'm unsure if antispeciesism in general reduces net suffering. In the short run, I worry that it may encourage more habitat preservation, thereby increasing wild-animal suffering. In the long run, moral-circle expansion could encourage people to create lots of additional small-brained sentience, and in (hopefully unlikely) scenarios where human values become inverted, antispeciesist values could multiply total suffering manyfold.If I could press a button to reduce overall meat consumption or to increase concern for an...]]>
Thu, 11 May 2023 17:20:26 +0000 EA - How much funging is there with donations to different EA animal charities? by Brian Tomasik Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much funging is there with donations to different EA animal charities?, published by Brian Tomasik on May 11, 2023 on The Effective Altruism Forum.My main questionThe EA Funds Animal Welfare Fund makes grants to many different animal charities. Suppose I want to support one particular charity that they grant to because I think it's better, relative to my values, than most of the other ones. For example, maybe I want to specifically give to Legal Impact for Chickens (LIC), so I donate $1000 to them.Because this donation reduces LIC's room for more funding, it may decrease the amount that the Animal Welfare Fund itself (or Open Philanthropy, Animal Charity Evaluators, or individual EA donors) will give to LIC in the future. How large should I expect this effect to be in general? Will my $1000 donation tend to "funge" against these other EA donors almost fully, so that LIC can be expected to get about $1000 less from them? Is the funging amount more like $500? Is it roughly $0 of funging? Or maybe donating to LIC helps them grow faster, so that they can hire more people and do more things, thereby increasing their room for funding and how much other EA donors give to them?The answer to this question probably varies substantially from one case to the next, and maybe the best way to figure it out would be to learn a lot about the funding situation for a particular charity and the funding inclinations of big EA donors toward that charity. But that takes a lot of work, so I wonder if EA funders have some intuition for what tends to happen on average in situations like this, to inform small donors who aren't going to get that far into the weeds with a particular charity. Does the funging amount tend to be closer to 0% or closer to 100% of what an individual donor gives?I notice that the Animal Welfare Fund sometimes funds ~10% to ~50% of an organization's operating budget, which I imagine may be partly intentional to avoid crowding out small donors. (It may also be motivated by wanting charities to diversify their funding sources and due to limited funds to disburse.) Is it true in general that the Animal Welfare Fund doesn't fully fill room for funding, or are there charities for which the Fund does top up the charity completely? (Note that it would actually be better impact-wise to ensure that the very best charities are roughly fully funded, so I'm not encouraging a strategy of deliberately underfunding them.)In the rest of this post, I'll give more details on why I'm asking about this topic, but this further elaboration is optional reading and is more specific to my situation.My donation preferencesI think a lot of EA donations to animal charities are really exciting. About 1/3 of the grants in the Animal Welfare Fund's Grants Database seem to me roughly as cost-effective as possible for reducing near-term animal suffering. However, for some other grants, I'm pretty ambivalent about the sign of the net impact (whether it's net good or bad).This is mainly for two reasons:I'm unsure if meat reduction, on the whole, reduces animal suffering, mainly because certain kinds of animal farming, especially cattle grazing on non-irrigated pasture, may reduce an enormous amount of wild-animal suffering (though there are huge error bars on this analysis).I'm unsure if antispeciesism in general reduces net suffering. In the short run, I worry that it may encourage more habitat preservation, thereby increasing wild-animal suffering. In the long run, moral-circle expansion could encourage people to create lots of additional small-brained sentience, and in (hopefully unlikely) scenarios where human values become inverted, antispeciesist values could multiply total suffering manyfold.If I could press a button to reduce overall meat consumption or to increase concern for an...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much funging is there with donations to different EA animal charities?, published by Brian Tomasik on May 11, 2023 on The Effective Altruism Forum.My main questionThe EA Funds Animal Welfare Fund makes grants to many different animal charities. Suppose I want to support one particular charity that they grant to because I think it's better, relative to my values, than most of the other ones. For example, maybe I want to specifically give to Legal Impact for Chickens (LIC), so I donate $1000 to them.Because this donation reduces LIC's room for more funding, it may decrease the amount that the Animal Welfare Fund itself (or Open Philanthropy, Animal Charity Evaluators, or individual EA donors) will give to LIC in the future. How large should I expect this effect to be in general? Will my $1000 donation tend to "funge" against these other EA donors almost fully, so that LIC can be expected to get about $1000 less from them? Is the funging amount more like $500? Is it roughly $0 of funging? Or maybe donating to LIC helps them grow faster, so that they can hire more people and do more things, thereby increasing their room for funding and how much other EA donors give to them?The answer to this question probably varies substantially from one case to the next, and maybe the best way to figure it out would be to learn a lot about the funding situation for a particular charity and the funding inclinations of big EA donors toward that charity. But that takes a lot of work, so I wonder if EA funders have some intuition for what tends to happen on average in situations like this, to inform small donors who aren't going to get that far into the weeds with a particular charity. Does the funging amount tend to be closer to 0% or closer to 100% of what an individual donor gives?I notice that the Animal Welfare Fund sometimes funds ~10% to ~50% of an organization's operating budget, which I imagine may be partly intentional to avoid crowding out small donors. (It may also be motivated by wanting charities to diversify their funding sources and due to limited funds to disburse.) Is it true in general that the Animal Welfare Fund doesn't fully fill room for funding, or are there charities for which the Fund does top up the charity completely? (Note that it would actually be better impact-wise to ensure that the very best charities are roughly fully funded, so I'm not encouraging a strategy of deliberately underfunding them.)In the rest of this post, I'll give more details on why I'm asking about this topic, but this further elaboration is optional reading and is more specific to my situation.My donation preferencesI think a lot of EA donations to animal charities are really exciting. About 1/3 of the grants in the Animal Welfare Fund's Grants Database seem to me roughly as cost-effective as possible for reducing near-term animal suffering. However, for some other grants, I'm pretty ambivalent about the sign of the net impact (whether it's net good or bad).This is mainly for two reasons:I'm unsure if meat reduction, on the whole, reduces animal suffering, mainly because certain kinds of animal farming, especially cattle grazing on non-irrigated pasture, may reduce an enormous amount of wild-animal suffering (though there are huge error bars on this analysis).I'm unsure if antispeciesism in general reduces net suffering. In the short run, I worry that it may encourage more habitat preservation, thereby increasing wild-animal suffering. In the long run, moral-circle expansion could encourage people to create lots of additional small-brained sentience, and in (hopefully unlikely) scenarios where human values become inverted, antispeciesist values could multiply total suffering manyfold.If I could press a button to reduce overall meat consumption or to increase concern for an...]]>
Brian Tomasik https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:25 None full 5913
qjGcsKg5qd4mi99jz_NL_EA_EA EA - US Supreme Court Upholds Prop 12! by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US Supreme Court Upholds Prop 12!, published by Rockwell on May 11, 2023 on The Effective Altruism Forum.The United States Supreme Court just released its decision on the country's most pivotal farmed animal welfare case—NATIONAL PORK PRODUCERS COUNCIL ET AL. v. ROSS, SECRETARY OF THE CALIFORNIA DEPARTMENT OF FOOD AND AGRICULTURE, ET AL. —upholding California's Prop 12, the strongest piece of farmed animal legislation in the US.In 2018, California residents voted by ballot measure to ban the sale of pig products that come from producers that use gestation crates, individual crates the size of an adult pig's body that mother pigs are confined to 24/7 for the full gestation of their pregnancies, unable to turn around. In response, the pork industry sued and the case made its way to the nation's highest court.If the Supreme Court had not upheld Prop 12, years of advocacy efforts would have been nullified and advocates would no longer be able to pursue state-level legislative interventions that improve welfare by banning the sale of particularly cruelly produced animal products.It would have been a tremendous setback for the US animal welfare movement. Instead, today is a huge victory.Groups like HSUS spearheaded efforts to uphold Prop 12, even in the face of massive opposition. The case exemplified the extent to which even left-leaning politicians side with animal industry over animal welfare, as even the Biden administration sided with the pork industry.Today is a monumental moment for farmed animal advocacy. Congratulations to everyone who worked to make this happen!Read more about it:Summary and analysis from Lewis Bollard (Senior Program Officer for Farm Animal Welfare at Open Phil) here on Twitter.Victory announcement by the Humane Society of the United States here.New York Times coverage here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rockwell https://forum.effectivealtruism.org/posts/qjGcsKg5qd4mi99jz/us-supreme-court-upholds-prop-12 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US Supreme Court Upholds Prop 12!, published by Rockwell on May 11, 2023 on The Effective Altruism Forum.The United States Supreme Court just released its decision on the country's most pivotal farmed animal welfare case—NATIONAL PORK PRODUCERS COUNCIL ET AL. v. ROSS, SECRETARY OF THE CALIFORNIA DEPARTMENT OF FOOD AND AGRICULTURE, ET AL. —upholding California's Prop 12, the strongest piece of farmed animal legislation in the US.In 2018, California residents voted by ballot measure to ban the sale of pig products that come from producers that use gestation crates, individual crates the size of an adult pig's body that mother pigs are confined to 24/7 for the full gestation of their pregnancies, unable to turn around. In response, the pork industry sued and the case made its way to the nation's highest court.If the Supreme Court had not upheld Prop 12, years of advocacy efforts would have been nullified and advocates would no longer be able to pursue state-level legislative interventions that improve welfare by banning the sale of particularly cruelly produced animal products.It would have been a tremendous setback for the US animal welfare movement. Instead, today is a huge victory.Groups like HSUS spearheaded efforts to uphold Prop 12, even in the face of massive opposition. The case exemplified the extent to which even left-leaning politicians side with animal industry over animal welfare, as even the Biden administration sided with the pork industry.Today is a monumental moment for farmed animal advocacy. Congratulations to everyone who worked to make this happen!Read more about it:Summary and analysis from Lewis Bollard (Senior Program Officer for Farm Animal Welfare at Open Phil) here on Twitter.Victory announcement by the Humane Society of the United States here.New York Times coverage here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 11 May 2023 17:16:28 +0000 EA - US Supreme Court Upholds Prop 12! by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US Supreme Court Upholds Prop 12!, published by Rockwell on May 11, 2023 on The Effective Altruism Forum.The United States Supreme Court just released its decision on the country's most pivotal farmed animal welfare case—NATIONAL PORK PRODUCERS COUNCIL ET AL. v. ROSS, SECRETARY OF THE CALIFORNIA DEPARTMENT OF FOOD AND AGRICULTURE, ET AL. —upholding California's Prop 12, the strongest piece of farmed animal legislation in the US.In 2018, California residents voted by ballot measure to ban the sale of pig products that come from producers that use gestation crates, individual crates the size of an adult pig's body that mother pigs are confined to 24/7 for the full gestation of their pregnancies, unable to turn around. In response, the pork industry sued and the case made its way to the nation's highest court.If the Supreme Court had not upheld Prop 12, years of advocacy efforts would have been nullified and advocates would no longer be able to pursue state-level legislative interventions that improve welfare by banning the sale of particularly cruelly produced animal products.It would have been a tremendous setback for the US animal welfare movement. Instead, today is a huge victory.Groups like HSUS spearheaded efforts to uphold Prop 12, even in the face of massive opposition. The case exemplified the extent to which even left-leaning politicians side with animal industry over animal welfare, as even the Biden administration sided with the pork industry.Today is a monumental moment for farmed animal advocacy. Congratulations to everyone who worked to make this happen!Read more about it:Summary and analysis from Lewis Bollard (Senior Program Officer for Farm Animal Welfare at Open Phil) here on Twitter.Victory announcement by the Humane Society of the United States here.New York Times coverage here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US Supreme Court Upholds Prop 12!, published by Rockwell on May 11, 2023 on The Effective Altruism Forum.The United States Supreme Court just released its decision on the country's most pivotal farmed animal welfare case—NATIONAL PORK PRODUCERS COUNCIL ET AL. v. ROSS, SECRETARY OF THE CALIFORNIA DEPARTMENT OF FOOD AND AGRICULTURE, ET AL. —upholding California's Prop 12, the strongest piece of farmed animal legislation in the US.In 2018, California residents voted by ballot measure to ban the sale of pig products that come from producers that use gestation crates, individual crates the size of an adult pig's body that mother pigs are confined to 24/7 for the full gestation of their pregnancies, unable to turn around. In response, the pork industry sued and the case made its way to the nation's highest court.If the Supreme Court had not upheld Prop 12, years of advocacy efforts would have been nullified and advocates would no longer be able to pursue state-level legislative interventions that improve welfare by banning the sale of particularly cruelly produced animal products.It would have been a tremendous setback for the US animal welfare movement. Instead, today is a huge victory.Groups like HSUS spearheaded efforts to uphold Prop 12, even in the face of massive opposition. The case exemplified the extent to which even left-leaning politicians side with animal industry over animal welfare, as even the Biden administration sided with the pork industry.Today is a monumental moment for farmed animal advocacy. Congratulations to everyone who worked to make this happen!Read more about it:Summary and analysis from Lewis Bollard (Senior Program Officer for Farm Animal Welfare at Open Phil) here on Twitter.Victory announcement by the Humane Society of the United States here.New York Times coverage here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rockwell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:53 None full 5912
ZDo6XjmivLKGKycdw_NL_EA_EA EA - Fatebook for Slack: Track your forecasts, right where your team works by Adam Binks Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fatebook for Slack: Track your forecasts, right where your team works, published by Adam Binks on May 11, 2023 on The Effective Altruism Forum.Announcing Fatebook for Slack - a Slack bot designed to help high-impact orgs build a culture of forecasting.With Fatebook, you can ask a forecasting question in your team's Slack:Then, everyone in the channel can forecast:When it's time to resolve the question as Yes, No or Ambiguous, the author gets a reminder. Then everyone gets a Brier score, based on their accuracy.It's like a tiny, private, fast Metaculus inside your team's Slack.Why build a culture of forecasting?Make better decisionsCommunicate more clearlyBuild your track recordTrust your most reliable forecastersWe built Fatebook for Slack aiming to help high-impact orgs become more effective.See the FAQs on the website for more info. We'd really value your feedback in the comments, in our Discord, or at adam@sage-future.org.You can add Fatebook to your workspace here.Thanks to all our alpha testers for their valuable feedback, especially the teams at 80,000 Hours, Lightcone, EA Cambridge, and Samotsvety.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Adam Binks https://forum.effectivealtruism.org/posts/ZDo6XjmivLKGKycdw/fatebook-for-slack-track-your-forecasts-right-where-your Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fatebook for Slack: Track your forecasts, right where your team works, published by Adam Binks on May 11, 2023 on The Effective Altruism Forum.Announcing Fatebook for Slack - a Slack bot designed to help high-impact orgs build a culture of forecasting.With Fatebook, you can ask a forecasting question in your team's Slack:Then, everyone in the channel can forecast:When it's time to resolve the question as Yes, No or Ambiguous, the author gets a reminder. Then everyone gets a Brier score, based on their accuracy.It's like a tiny, private, fast Metaculus inside your team's Slack.Why build a culture of forecasting?Make better decisionsCommunicate more clearlyBuild your track recordTrust your most reliable forecastersWe built Fatebook for Slack aiming to help high-impact orgs become more effective.See the FAQs on the website for more info. We'd really value your feedback in the comments, in our Discord, or at adam@sage-future.org.You can add Fatebook to your workspace here.Thanks to all our alpha testers for their valuable feedback, especially the teams at 80,000 Hours, Lightcone, EA Cambridge, and Samotsvety.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 11 May 2023 15:38:05 +0000 EA - Fatebook for Slack: Track your forecasts, right where your team works by Adam Binks Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fatebook for Slack: Track your forecasts, right where your team works, published by Adam Binks on May 11, 2023 on The Effective Altruism Forum.Announcing Fatebook for Slack - a Slack bot designed to help high-impact orgs build a culture of forecasting.With Fatebook, you can ask a forecasting question in your team's Slack:Then, everyone in the channel can forecast:When it's time to resolve the question as Yes, No or Ambiguous, the author gets a reminder. Then everyone gets a Brier score, based on their accuracy.It's like a tiny, private, fast Metaculus inside your team's Slack.Why build a culture of forecasting?Make better decisionsCommunicate more clearlyBuild your track recordTrust your most reliable forecastersWe built Fatebook for Slack aiming to help high-impact orgs become more effective.See the FAQs on the website for more info. We'd really value your feedback in the comments, in our Discord, or at adam@sage-future.org.You can add Fatebook to your workspace here.Thanks to all our alpha testers for their valuable feedback, especially the teams at 80,000 Hours, Lightcone, EA Cambridge, and Samotsvety.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fatebook for Slack: Track your forecasts, right where your team works, published by Adam Binks on May 11, 2023 on The Effective Altruism Forum.Announcing Fatebook for Slack - a Slack bot designed to help high-impact orgs build a culture of forecasting.With Fatebook, you can ask a forecasting question in your team's Slack:Then, everyone in the channel can forecast:When it's time to resolve the question as Yes, No or Ambiguous, the author gets a reminder. Then everyone gets a Brier score, based on their accuracy.It's like a tiny, private, fast Metaculus inside your team's Slack.Why build a culture of forecasting?Make better decisionsCommunicate more clearlyBuild your track recordTrust your most reliable forecastersWe built Fatebook for Slack aiming to help high-impact orgs become more effective.See the FAQs on the website for more info. We'd really value your feedback in the comments, in our Discord, or at adam@sage-future.org.You can add Fatebook to your workspace here.Thanks to all our alpha testers for their valuable feedback, especially the teams at 80,000 Hours, Lightcone, EA Cambridge, and Samotsvety.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Adam Binks https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:22 None full 5911
CAC8zn292C9T5aopw_NL_EA_EA EA - Community Health and Special Projects: Updates and Contacting Us by evemccormick Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community Health & Special Projects: Updates and Contacting Us, published by evemccormick on May 10, 2023 on The Effective Altruism Forum.SummaryWe’ve renamed our team to Community Health and Special Projects, in part to reflect our scope extending beyond what’s often considered to be “community health.”Since our last forum update, we’ve started working closely with Fynn Heide as an affiliate, along with Anu Oak and Łukasz Grabowski as contractors. Chana Messinger has been acting as interim team lead, while Nicole Ross has been focused on EV US board duties.In response to reports of sexual misconduct by Owen Cotton-Barratt, an external investigation into our team’s response is underway, as well as an internal review.Other key proactive projects we’ve been working on include the Gender Experiences project and the EA Organization Reform project.We are in the early stages of considering some significant strategic changes for our team. We’ve highlighted two examples of possible changes below, one being a potential spin-out of CEA and/or EV and another being a pivot to focus more on the AI safety space.As a reminder, if you’ve experienced anything you’re uncomfortable with in the community or if you would like to report a concern, you can reach our team’s contact people (currently Julia Wise and Catherine Low) via this form (anonymously if you choose).We can also be contacted individually (our individual forms are linked here), or you can contact the whole team at community.health.special.projects@centreforeffectivealtruism.org.We can provide anonymous, real-time conversations in place of calls when requested, e.g. through Google Chat with your anonymous email address.The Community Health team is now Community Health and Special ProjectsWe decided to rename our team to better reflect the scope of our work. We’ve found that when people think of our team, they mostly think of us as working on topics like mental health and interpersonal harm. While these areas are a central part of our work, we also work on a wide range of other things, such as advising on decisions with significant potential downside risk, improving community epistemics, advising programs working with minors, and reducing risks in areas with high geopolitical risk.We see these other areas of work as contributing to our goal: to strengthen the ability of EA and related communities to fulfil their potential for impact, and to address problems that could prevent that. However, those areas of work can be quite disparate, and so “Special Projects” seemed an appropriate name to gesture towards “other miscellaneous things that seem important and may not have a home somewhere else.”We hope that this might go some way to encouraging people to report a wider range of concerns to our team.Our scope of work is guided by pragmatism: we aim to go wherever there are important community-related gaps not covered by others and try to make sure the highest priority gaps are filled. Where it seems better than the counterfactual, we sometimes try to fill those gaps ourselves. That means that our scope is both very broad and not always clear, and also that there will be plenty of things we don’t have the capacity or the right expertise to have fully covered. If you’re thinking of working on something you think we might have some knowledge about, the meme we want to spread is “loop us in, but don’t assume it’s totally covered or uncovered.” If we can be helpful, we’ll give advice, recommend resources or connect you with others interested in similar work.Team changesHere’s our current team:Nicole Ross (Head of Community Health and Special Projects)Julia Wise (Community Liaison)Catherine Low (Community Health Associate)Chana Messinger (Interim Head and Community Health Analyst)Eve McCormick (Community Health Pr...]]>
evemccormick https://forum.effectivealtruism.org/posts/CAC8zn292C9T5aopw/community-health-and-special-projects-updates-and-contacting-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community Health & Special Projects: Updates and Contacting Us, published by evemccormick on May 10, 2023 on The Effective Altruism Forum.SummaryWe’ve renamed our team to Community Health and Special Projects, in part to reflect our scope extending beyond what’s often considered to be “community health.”Since our last forum update, we’ve started working closely with Fynn Heide as an affiliate, along with Anu Oak and Łukasz Grabowski as contractors. Chana Messinger has been acting as interim team lead, while Nicole Ross has been focused on EV US board duties.In response to reports of sexual misconduct by Owen Cotton-Barratt, an external investigation into our team’s response is underway, as well as an internal review.Other key proactive projects we’ve been working on include the Gender Experiences project and the EA Organization Reform project.We are in the early stages of considering some significant strategic changes for our team. We’ve highlighted two examples of possible changes below, one being a potential spin-out of CEA and/or EV and another being a pivot to focus more on the AI safety space.As a reminder, if you’ve experienced anything you’re uncomfortable with in the community or if you would like to report a concern, you can reach our team’s contact people (currently Julia Wise and Catherine Low) via this form (anonymously if you choose).We can also be contacted individually (our individual forms are linked here), or you can contact the whole team at community.health.special.projects@centreforeffectivealtruism.org.We can provide anonymous, real-time conversations in place of calls when requested, e.g. through Google Chat with your anonymous email address.The Community Health team is now Community Health and Special ProjectsWe decided to rename our team to better reflect the scope of our work. We’ve found that when people think of our team, they mostly think of us as working on topics like mental health and interpersonal harm. While these areas are a central part of our work, we also work on a wide range of other things, such as advising on decisions with significant potential downside risk, improving community epistemics, advising programs working with minors, and reducing risks in areas with high geopolitical risk.We see these other areas of work as contributing to our goal: to strengthen the ability of EA and related communities to fulfil their potential for impact, and to address problems that could prevent that. However, those areas of work can be quite disparate, and so “Special Projects” seemed an appropriate name to gesture towards “other miscellaneous things that seem important and may not have a home somewhere else.”We hope that this might go some way to encouraging people to report a wider range of concerns to our team.Our scope of work is guided by pragmatism: we aim to go wherever there are important community-related gaps not covered by others and try to make sure the highest priority gaps are filled. Where it seems better than the counterfactual, we sometimes try to fill those gaps ourselves. That means that our scope is both very broad and not always clear, and also that there will be plenty of things we don’t have the capacity or the right expertise to have fully covered. If you’re thinking of working on something you think we might have some knowledge about, the meme we want to spread is “loop us in, but don’t assume it’s totally covered or uncovered.” If we can be helpful, we’ll give advice, recommend resources or connect you with others interested in similar work.Team changesHere’s our current team:Nicole Ross (Head of Community Health and Special Projects)Julia Wise (Community Liaison)Catherine Low (Community Health Associate)Chana Messinger (Interim Head and Community Health Analyst)Eve McCormick (Community Health Pr...]]>
Wed, 10 May 2023 20:49:30 +0000 EA - Community Health and Special Projects: Updates and Contacting Us by evemccormick Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community Health & Special Projects: Updates and Contacting Us, published by evemccormick on May 10, 2023 on The Effective Altruism Forum.SummaryWe’ve renamed our team to Community Health and Special Projects, in part to reflect our scope extending beyond what’s often considered to be “community health.”Since our last forum update, we’ve started working closely with Fynn Heide as an affiliate, along with Anu Oak and Łukasz Grabowski as contractors. Chana Messinger has been acting as interim team lead, while Nicole Ross has been focused on EV US board duties.In response to reports of sexual misconduct by Owen Cotton-Barratt, an external investigation into our team’s response is underway, as well as an internal review.Other key proactive projects we’ve been working on include the Gender Experiences project and the EA Organization Reform project.We are in the early stages of considering some significant strategic changes for our team. We’ve highlighted two examples of possible changes below, one being a potential spin-out of CEA and/or EV and another being a pivot to focus more on the AI safety space.As a reminder, if you’ve experienced anything you’re uncomfortable with in the community or if you would like to report a concern, you can reach our team’s contact people (currently Julia Wise and Catherine Low) via this form (anonymously if you choose).We can also be contacted individually (our individual forms are linked here), or you can contact the whole team at community.health.special.projects@centreforeffectivealtruism.org.We can provide anonymous, real-time conversations in place of calls when requested, e.g. through Google Chat with your anonymous email address.The Community Health team is now Community Health and Special ProjectsWe decided to rename our team to better reflect the scope of our work. We’ve found that when people think of our team, they mostly think of us as working on topics like mental health and interpersonal harm. While these areas are a central part of our work, we also work on a wide range of other things, such as advising on decisions with significant potential downside risk, improving community epistemics, advising programs working with minors, and reducing risks in areas with high geopolitical risk.We see these other areas of work as contributing to our goal: to strengthen the ability of EA and related communities to fulfil their potential for impact, and to address problems that could prevent that. However, those areas of work can be quite disparate, and so “Special Projects” seemed an appropriate name to gesture towards “other miscellaneous things that seem important and may not have a home somewhere else.”We hope that this might go some way to encouraging people to report a wider range of concerns to our team.Our scope of work is guided by pragmatism: we aim to go wherever there are important community-related gaps not covered by others and try to make sure the highest priority gaps are filled. Where it seems better than the counterfactual, we sometimes try to fill those gaps ourselves. That means that our scope is both very broad and not always clear, and also that there will be plenty of things we don’t have the capacity or the right expertise to have fully covered. If you’re thinking of working on something you think we might have some knowledge about, the meme we want to spread is “loop us in, but don’t assume it’s totally covered or uncovered.” If we can be helpful, we’ll give advice, recommend resources or connect you with others interested in similar work.Team changesHere’s our current team:Nicole Ross (Head of Community Health and Special Projects)Julia Wise (Community Liaison)Catherine Low (Community Health Associate)Chana Messinger (Interim Head and Community Health Analyst)Eve McCormick (Community Health Pr...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community Health & Special Projects: Updates and Contacting Us, published by evemccormick on May 10, 2023 on The Effective Altruism Forum.SummaryWe’ve renamed our team to Community Health and Special Projects, in part to reflect our scope extending beyond what’s often considered to be “community health.”Since our last forum update, we’ve started working closely with Fynn Heide as an affiliate, along with Anu Oak and Łukasz Grabowski as contractors. Chana Messinger has been acting as interim team lead, while Nicole Ross has been focused on EV US board duties.In response to reports of sexual misconduct by Owen Cotton-Barratt, an external investigation into our team’s response is underway, as well as an internal review.Other key proactive projects we’ve been working on include the Gender Experiences project and the EA Organization Reform project.We are in the early stages of considering some significant strategic changes for our team. We’ve highlighted two examples of possible changes below, one being a potential spin-out of CEA and/or EV and another being a pivot to focus more on the AI safety space.As a reminder, if you’ve experienced anything you’re uncomfortable with in the community or if you would like to report a concern, you can reach our team’s contact people (currently Julia Wise and Catherine Low) via this form (anonymously if you choose).We can also be contacted individually (our individual forms are linked here), or you can contact the whole team at community.health.special.projects@centreforeffectivealtruism.org.We can provide anonymous, real-time conversations in place of calls when requested, e.g. through Google Chat with your anonymous email address.The Community Health team is now Community Health and Special ProjectsWe decided to rename our team to better reflect the scope of our work. We’ve found that when people think of our team, they mostly think of us as working on topics like mental health and interpersonal harm. While these areas are a central part of our work, we also work on a wide range of other things, such as advising on decisions with significant potential downside risk, improving community epistemics, advising programs working with minors, and reducing risks in areas with high geopolitical risk.We see these other areas of work as contributing to our goal: to strengthen the ability of EA and related communities to fulfil their potential for impact, and to address problems that could prevent that. However, those areas of work can be quite disparate, and so “Special Projects” seemed an appropriate name to gesture towards “other miscellaneous things that seem important and may not have a home somewhere else.”We hope that this might go some way to encouraging people to report a wider range of concerns to our team.Our scope of work is guided by pragmatism: we aim to go wherever there are important community-related gaps not covered by others and try to make sure the highest priority gaps are filled. Where it seems better than the counterfactual, we sometimes try to fill those gaps ourselves. That means that our scope is both very broad and not always clear, and also that there will be plenty of things we don’t have the capacity or the right expertise to have fully covered. If you’re thinking of working on something you think we might have some knowledge about, the meme we want to spread is “loop us in, but don’t assume it’s totally covered or uncovered.” If we can be helpful, we’ll give advice, recommend resources or connect you with others interested in similar work.Team changesHere’s our current team:Nicole Ross (Head of Community Health and Special Projects)Julia Wise (Community Liaison)Catherine Low (Community Health Associate)Chana Messinger (Interim Head and Community Health Analyst)Eve McCormick (Community Health Pr...]]>
evemccormick https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:18 None full 5908
pR35WbLmruKdiMn2r_NL_EA_EA EA - Continuous doesn’t mean slow by Tom Davidson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Continuous doesn’t mean slow, published by Tom Davidson on May 10, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.Once a lab trains AI that can fully replace its human employees, it will be able to multiply its workforce 100,000x. If these AIs do AI research, they could develop vastly superhuman systems in under a year.There’s a lot of disagreement about how likely AI is to end up overthrowing humanity. Thoughtful pundits vary from 5% to >90%. What’s driving this disagreement?One factor that often comes up in discussions is takeoff speeds, which Ajeya mentioned in the previous post. How quickly and suddenly do we move from today’s AI, to “expert-human level” AI[1], to AI that is way beyond human experts and could easily overpower humanity?The final stretch — the transition from expert-human level AI to AI systems that can easily overpower all of us — is especially crucial. If this final transition happens slowly, we could potentially have a long time to get used to the obsolescence regime and use very competent AI to help us solve AI alignment (among other things). But if it happens very quickly, we won’t have much time to ensure superhuman systems are aligned, or to prepare for human obsolescence in any other way.Scott Alexander is optimistic that things might move gradually. In a recent ACX post titled ‘Why I Am Not (As Much Of) A Doomer (As Some People)’, he says:So far we’ve had brisk but still gradual progress in AI; GPT-3 is better than GPT-2, and GPT-4 will probably be better still. Every few years we get a new model which is better than previous models by some predictable amount.Some people (eg Nate Soares) worry there’s a point where this changes. Maybe some jump. could take an AI from IQ 90 to IQ 1000 with no (or very short) period of IQ 200 in between.I’m optimistic because the past few years have provided some evidence for gradual progress.I agree with Scott that recent AI progress has been continuous and fairly predictable, and don’t particularly expect a break in that trend. But I expect the transition to superhuman AI to be very fast, even if it’s continuous.The amount of “compute” (i.e. the number of AI chips) needed to train a powerful AI is much bigger than the amount of compute needed to run it. I estimate that OpenAI has enough compute to run GPT-4 on hundreds of thousands of tasks at once.[2]This ratio will only become more extreme as models get bigger. Once OpenAI trains GPT-5 it’ll have enough compute for GPT-5 to perform millions of tasks in parallel, and once they train GPT-6 it’ll be able to perform tens of millions of tasks in parallel.[3]Now imagine that GPT-6 is as good at AI research as the average OpenAI researcher.[4] OpenAI could expand their AI researcher workforce from hundreds of experts to tens of millions. That’s a mind-boggling large increase, a factor of 100,000. It’s like going from 1000 people to the entire US workforce. What’s more, these AIs could work tirelessly through the night and could potentially “think” much more quickly than human workers.[5] (This change won’t happen all-at-once. I expect speed-ups from less capable AI before this point, as Ajeya wrote in the previous post.)How much faster would AI progress be in this scenario? It’s hard to know. But my best guess, from my recent report on takeoff speeds, is that progress would be much much faster. I think that less than a year after AI is expert-human level at AI research, AI could improve to the point of being able to easily overthrow humanity.This is much faster than the timeline mentioned in the ACX post:if you’re imagining specific years, imagine human-genius-level AI in the 2030s and world...]]>
Tom Davidson https://forum.effectivealtruism.org/posts/pR35WbLmruKdiMn2r/continuous-doesn-t-mean-slow Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Continuous doesn’t mean slow, published by Tom Davidson on May 10, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.Once a lab trains AI that can fully replace its human employees, it will be able to multiply its workforce 100,000x. If these AIs do AI research, they could develop vastly superhuman systems in under a year.There’s a lot of disagreement about how likely AI is to end up overthrowing humanity. Thoughtful pundits vary from 5% to >90%. What’s driving this disagreement?One factor that often comes up in discussions is takeoff speeds, which Ajeya mentioned in the previous post. How quickly and suddenly do we move from today’s AI, to “expert-human level” AI[1], to AI that is way beyond human experts and could easily overpower humanity?The final stretch — the transition from expert-human level AI to AI systems that can easily overpower all of us — is especially crucial. If this final transition happens slowly, we could potentially have a long time to get used to the obsolescence regime and use very competent AI to help us solve AI alignment (among other things). But if it happens very quickly, we won’t have much time to ensure superhuman systems are aligned, or to prepare for human obsolescence in any other way.Scott Alexander is optimistic that things might move gradually. In a recent ACX post titled ‘Why I Am Not (As Much Of) A Doomer (As Some People)’, he says:So far we’ve had brisk but still gradual progress in AI; GPT-3 is better than GPT-2, and GPT-4 will probably be better still. Every few years we get a new model which is better than previous models by some predictable amount.Some people (eg Nate Soares) worry there’s a point where this changes. Maybe some jump. could take an AI from IQ 90 to IQ 1000 with no (or very short) period of IQ 200 in between.I’m optimistic because the past few years have provided some evidence for gradual progress.I agree with Scott that recent AI progress has been continuous and fairly predictable, and don’t particularly expect a break in that trend. But I expect the transition to superhuman AI to be very fast, even if it’s continuous.The amount of “compute” (i.e. the number of AI chips) needed to train a powerful AI is much bigger than the amount of compute needed to run it. I estimate that OpenAI has enough compute to run GPT-4 on hundreds of thousands of tasks at once.[2]This ratio will only become more extreme as models get bigger. Once OpenAI trains GPT-5 it’ll have enough compute for GPT-5 to perform millions of tasks in parallel, and once they train GPT-6 it’ll be able to perform tens of millions of tasks in parallel.[3]Now imagine that GPT-6 is as good at AI research as the average OpenAI researcher.[4] OpenAI could expand their AI researcher workforce from hundreds of experts to tens of millions. That’s a mind-boggling large increase, a factor of 100,000. It’s like going from 1000 people to the entire US workforce. What’s more, these AIs could work tirelessly through the night and could potentially “think” much more quickly than human workers.[5] (This change won’t happen all-at-once. I expect speed-ups from less capable AI before this point, as Ajeya wrote in the previous post.)How much faster would AI progress be in this scenario? It’s hard to know. But my best guess, from my recent report on takeoff speeds, is that progress would be much much faster. I think that less than a year after AI is expert-human level at AI research, AI could improve to the point of being able to easily overthrow humanity.This is much faster than the timeline mentioned in the ACX post:if you’re imagining specific years, imagine human-genius-level AI in the 2030s and world...]]>
Wed, 10 May 2023 16:01:57 +0000 EA - Continuous doesn’t mean slow by Tom Davidson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Continuous doesn’t mean slow, published by Tom Davidson on May 10, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.Once a lab trains AI that can fully replace its human employees, it will be able to multiply its workforce 100,000x. If these AIs do AI research, they could develop vastly superhuman systems in under a year.There’s a lot of disagreement about how likely AI is to end up overthrowing humanity. Thoughtful pundits vary from 5% to >90%. What’s driving this disagreement?One factor that often comes up in discussions is takeoff speeds, which Ajeya mentioned in the previous post. How quickly and suddenly do we move from today’s AI, to “expert-human level” AI[1], to AI that is way beyond human experts and could easily overpower humanity?The final stretch — the transition from expert-human level AI to AI systems that can easily overpower all of us — is especially crucial. If this final transition happens slowly, we could potentially have a long time to get used to the obsolescence regime and use very competent AI to help us solve AI alignment (among other things). But if it happens very quickly, we won’t have much time to ensure superhuman systems are aligned, or to prepare for human obsolescence in any other way.Scott Alexander is optimistic that things might move gradually. In a recent ACX post titled ‘Why I Am Not (As Much Of) A Doomer (As Some People)’, he says:So far we’ve had brisk but still gradual progress in AI; GPT-3 is better than GPT-2, and GPT-4 will probably be better still. Every few years we get a new model which is better than previous models by some predictable amount.Some people (eg Nate Soares) worry there’s a point where this changes. Maybe some jump. could take an AI from IQ 90 to IQ 1000 with no (or very short) period of IQ 200 in between.I’m optimistic because the past few years have provided some evidence for gradual progress.I agree with Scott that recent AI progress has been continuous and fairly predictable, and don’t particularly expect a break in that trend. But I expect the transition to superhuman AI to be very fast, even if it’s continuous.The amount of “compute” (i.e. the number of AI chips) needed to train a powerful AI is much bigger than the amount of compute needed to run it. I estimate that OpenAI has enough compute to run GPT-4 on hundreds of thousands of tasks at once.[2]This ratio will only become more extreme as models get bigger. Once OpenAI trains GPT-5 it’ll have enough compute for GPT-5 to perform millions of tasks in parallel, and once they train GPT-6 it’ll be able to perform tens of millions of tasks in parallel.[3]Now imagine that GPT-6 is as good at AI research as the average OpenAI researcher.[4] OpenAI could expand their AI researcher workforce from hundreds of experts to tens of millions. That’s a mind-boggling large increase, a factor of 100,000. It’s like going from 1000 people to the entire US workforce. What’s more, these AIs could work tirelessly through the night and could potentially “think” much more quickly than human workers.[5] (This change won’t happen all-at-once. I expect speed-ups from less capable AI before this point, as Ajeya wrote in the previous post.)How much faster would AI progress be in this scenario? It’s hard to know. But my best guess, from my recent report on takeoff speeds, is that progress would be much much faster. I think that less than a year after AI is expert-human level at AI research, AI could improve to the point of being able to easily overthrow humanity.This is much faster than the timeline mentioned in the ACX post:if you’re imagining specific years, imagine human-genius-level AI in the 2030s and world...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Continuous doesn’t mean slow, published by Tom Davidson on May 10, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.Once a lab trains AI that can fully replace its human employees, it will be able to multiply its workforce 100,000x. If these AIs do AI research, they could develop vastly superhuman systems in under a year.There’s a lot of disagreement about how likely AI is to end up overthrowing humanity. Thoughtful pundits vary from 5% to >90%. What’s driving this disagreement?One factor that often comes up in discussions is takeoff speeds, which Ajeya mentioned in the previous post. How quickly and suddenly do we move from today’s AI, to “expert-human level” AI[1], to AI that is way beyond human experts and could easily overpower humanity?The final stretch — the transition from expert-human level AI to AI systems that can easily overpower all of us — is especially crucial. If this final transition happens slowly, we could potentially have a long time to get used to the obsolescence regime and use very competent AI to help us solve AI alignment (among other things). But if it happens very quickly, we won’t have much time to ensure superhuman systems are aligned, or to prepare for human obsolescence in any other way.Scott Alexander is optimistic that things might move gradually. In a recent ACX post titled ‘Why I Am Not (As Much Of) A Doomer (As Some People)’, he says:So far we’ve had brisk but still gradual progress in AI; GPT-3 is better than GPT-2, and GPT-4 will probably be better still. Every few years we get a new model which is better than previous models by some predictable amount.Some people (eg Nate Soares) worry there’s a point where this changes. Maybe some jump. could take an AI from IQ 90 to IQ 1000 with no (or very short) period of IQ 200 in between.I’m optimistic because the past few years have provided some evidence for gradual progress.I agree with Scott that recent AI progress has been continuous and fairly predictable, and don’t particularly expect a break in that trend. But I expect the transition to superhuman AI to be very fast, even if it’s continuous.The amount of “compute” (i.e. the number of AI chips) needed to train a powerful AI is much bigger than the amount of compute needed to run it. I estimate that OpenAI has enough compute to run GPT-4 on hundreds of thousands of tasks at once.[2]This ratio will only become more extreme as models get bigger. Once OpenAI trains GPT-5 it’ll have enough compute for GPT-5 to perform millions of tasks in parallel, and once they train GPT-6 it’ll be able to perform tens of millions of tasks in parallel.[3]Now imagine that GPT-6 is as good at AI research as the average OpenAI researcher.[4] OpenAI could expand their AI researcher workforce from hundreds of experts to tens of millions. That’s a mind-boggling large increase, a factor of 100,000. It’s like going from 1000 people to the entire US workforce. What’s more, these AIs could work tirelessly through the night and could potentially “think” much more quickly than human workers.[5] (This change won’t happen all-at-once. I expect speed-ups from less capable AI before this point, as Ajeya wrote in the previous post.)How much faster would AI progress be in this scenario? It’s hard to know. But my best guess, from my recent report on takeoff speeds, is that progress would be much much faster. I think that less than a year after AI is expert-human level at AI research, AI could improve to the point of being able to easily overthrow humanity.This is much faster than the timeline mentioned in the ACX post:if you’re imagining specific years, imagine human-genius-level AI in the 2030s and world...]]>
Tom Davidson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:24 None full 5905
egX9ftjgsvg2MxLXr_NL_EA_EA EA - Psychological safety as the yardstick of good EA movement building by Severin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Psychological safety as the yardstick of good EA movement building, published by Severin on May 10, 2023 on The Effective Altruism Forum.I recently learned about the distinction between "movement building" and "community building": Community building is for the people involved in a community, and movement building is in service of the cause itself.A story I've heard from a bunch of EA groups is that they start out with community building. They attract a couple people, develop a wonderful vibe, and those people notoriously slack on their reading group preparations. Then, the group organizers get dissatisfied with the lack of visible progress on the EA path, doubt their own impact, and pivot all the way from community building to movement building. No funny pub meetups anymore. Career fellowships and 1-on-1s all the way.I think this throws the baby out with the bathwater, and that more often than not, community building is indeed tremendously valuable movement building, even if it doesn't look like that at first glance.The piece of evidence I can cite on this (and indeed cite over and over again) is Google's "Project Aristotle"-study.In Project Aristotle, Google studied what makes their highest-performing teams highest-performing. And alas: It is not the fanciness of degrees or individual intelligence or agentyness or any other property of the individual team members, but five factors:"The researchers found that what really mattered was less about who is on the team, and more about how the team worked together. In order of importance:Psychological safety: Psychological safety refers to an individual’s perception of the consequences of taking an interpersonal risk or a belief that a team is safe for risk taking in the face of being seen as ignorant, incompetent, negative, or disruptive. In a team with high psychological safety, teammates feel safe to take risks around their team members. They feel confident that no one on the team will embarrass or punish anyone else for admitting a mistake, asking a question, or offering a new idea.Dependability: On dependable teams, members reliably complete quality work on time (vs the opposite - shirking responsibilities).Structure and clarity: An individual’s understanding of job expectations, the process for fulfilling these expectations, and the consequences of one’s performance are important for team effectiveness. Goals can be set at the individual or group level, and must be specific, challenging, and attainable. Google often uses Objectives and Key Results (OKRs) to help set and communicate short and long term goals.Meaning: Finding a sense of purpose in either the work itself or the output is important for team effectiveness. The meaning of work is personal and can vary: financial security, supporting family, helping the team succeed, or self-expression for each individual, for example.Impact: The results of one’s work, the subjective judgement that your work is making a difference, is important for teams. Seeing that one’s work is contributing to the organization’s goals can help reveal impact."What I find remarkable is that "psychological safety" leads the list. While some factors in EA actively work against the psychological safety of its members. To name just a few:EA tends to attract pretty smart people. If you throw a bunch of people together who have been used all their lives to being the smart kid in the room, they suddenly lose the default role they had in just about any context. Because now, surrounded by even smarter kids, they are merely the kid. I think this is where a bunch of EAs' impostor syndrome comes from.EAs like to work at EA-aligned organizations. That means that some of us feel like any little chat at a conference (or any little comment on the EA Forum or our social media accounts) also i...]]>
Severin https://forum.effectivealtruism.org/posts/egX9ftjgsvg2MxLXr/psychological-safety-as-the-yardstick-of-good-ea-movement Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Psychological safety as the yardstick of good EA movement building, published by Severin on May 10, 2023 on The Effective Altruism Forum.I recently learned about the distinction between "movement building" and "community building": Community building is for the people involved in a community, and movement building is in service of the cause itself.A story I've heard from a bunch of EA groups is that they start out with community building. They attract a couple people, develop a wonderful vibe, and those people notoriously slack on their reading group preparations. Then, the group organizers get dissatisfied with the lack of visible progress on the EA path, doubt their own impact, and pivot all the way from community building to movement building. No funny pub meetups anymore. Career fellowships and 1-on-1s all the way.I think this throws the baby out with the bathwater, and that more often than not, community building is indeed tremendously valuable movement building, even if it doesn't look like that at first glance.The piece of evidence I can cite on this (and indeed cite over and over again) is Google's "Project Aristotle"-study.In Project Aristotle, Google studied what makes their highest-performing teams highest-performing. And alas: It is not the fanciness of degrees or individual intelligence or agentyness or any other property of the individual team members, but five factors:"The researchers found that what really mattered was less about who is on the team, and more about how the team worked together. In order of importance:Psychological safety: Psychological safety refers to an individual’s perception of the consequences of taking an interpersonal risk or a belief that a team is safe for risk taking in the face of being seen as ignorant, incompetent, negative, or disruptive. In a team with high psychological safety, teammates feel safe to take risks around their team members. They feel confident that no one on the team will embarrass or punish anyone else for admitting a mistake, asking a question, or offering a new idea.Dependability: On dependable teams, members reliably complete quality work on time (vs the opposite - shirking responsibilities).Structure and clarity: An individual’s understanding of job expectations, the process for fulfilling these expectations, and the consequences of one’s performance are important for team effectiveness. Goals can be set at the individual or group level, and must be specific, challenging, and attainable. Google often uses Objectives and Key Results (OKRs) to help set and communicate short and long term goals.Meaning: Finding a sense of purpose in either the work itself or the output is important for team effectiveness. The meaning of work is personal and can vary: financial security, supporting family, helping the team succeed, or self-expression for each individual, for example.Impact: The results of one’s work, the subjective judgement that your work is making a difference, is important for teams. Seeing that one’s work is contributing to the organization’s goals can help reveal impact."What I find remarkable is that "psychological safety" leads the list. While some factors in EA actively work against the psychological safety of its members. To name just a few:EA tends to attract pretty smart people. If you throw a bunch of people together who have been used all their lives to being the smart kid in the room, they suddenly lose the default role they had in just about any context. Because now, surrounded by even smarter kids, they are merely the kid. I think this is where a bunch of EAs' impostor syndrome comes from.EAs like to work at EA-aligned organizations. That means that some of us feel like any little chat at a conference (or any little comment on the EA Forum or our social media accounts) also i...]]>
Wed, 10 May 2023 14:51:07 +0000 EA - Psychological safety as the yardstick of good EA movement building by Severin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Psychological safety as the yardstick of good EA movement building, published by Severin on May 10, 2023 on The Effective Altruism Forum.I recently learned about the distinction between "movement building" and "community building": Community building is for the people involved in a community, and movement building is in service of the cause itself.A story I've heard from a bunch of EA groups is that they start out with community building. They attract a couple people, develop a wonderful vibe, and those people notoriously slack on their reading group preparations. Then, the group organizers get dissatisfied with the lack of visible progress on the EA path, doubt their own impact, and pivot all the way from community building to movement building. No funny pub meetups anymore. Career fellowships and 1-on-1s all the way.I think this throws the baby out with the bathwater, and that more often than not, community building is indeed tremendously valuable movement building, even if it doesn't look like that at first glance.The piece of evidence I can cite on this (and indeed cite over and over again) is Google's "Project Aristotle"-study.In Project Aristotle, Google studied what makes their highest-performing teams highest-performing. And alas: It is not the fanciness of degrees or individual intelligence or agentyness or any other property of the individual team members, but five factors:"The researchers found that what really mattered was less about who is on the team, and more about how the team worked together. In order of importance:Psychological safety: Psychological safety refers to an individual’s perception of the consequences of taking an interpersonal risk or a belief that a team is safe for risk taking in the face of being seen as ignorant, incompetent, negative, or disruptive. In a team with high psychological safety, teammates feel safe to take risks around their team members. They feel confident that no one on the team will embarrass or punish anyone else for admitting a mistake, asking a question, or offering a new idea.Dependability: On dependable teams, members reliably complete quality work on time (vs the opposite - shirking responsibilities).Structure and clarity: An individual’s understanding of job expectations, the process for fulfilling these expectations, and the consequences of one’s performance are important for team effectiveness. Goals can be set at the individual or group level, and must be specific, challenging, and attainable. Google often uses Objectives and Key Results (OKRs) to help set and communicate short and long term goals.Meaning: Finding a sense of purpose in either the work itself or the output is important for team effectiveness. The meaning of work is personal and can vary: financial security, supporting family, helping the team succeed, or self-expression for each individual, for example.Impact: The results of one’s work, the subjective judgement that your work is making a difference, is important for teams. Seeing that one’s work is contributing to the organization’s goals can help reveal impact."What I find remarkable is that "psychological safety" leads the list. While some factors in EA actively work against the psychological safety of its members. To name just a few:EA tends to attract pretty smart people. If you throw a bunch of people together who have been used all their lives to being the smart kid in the room, they suddenly lose the default role they had in just about any context. Because now, surrounded by even smarter kids, they are merely the kid. I think this is where a bunch of EAs' impostor syndrome comes from.EAs like to work at EA-aligned organizations. That means that some of us feel like any little chat at a conference (or any little comment on the EA Forum or our social media accounts) also i...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Psychological safety as the yardstick of good EA movement building, published by Severin on May 10, 2023 on The Effective Altruism Forum.I recently learned about the distinction between "movement building" and "community building": Community building is for the people involved in a community, and movement building is in service of the cause itself.A story I've heard from a bunch of EA groups is that they start out with community building. They attract a couple people, develop a wonderful vibe, and those people notoriously slack on their reading group preparations. Then, the group organizers get dissatisfied with the lack of visible progress on the EA path, doubt their own impact, and pivot all the way from community building to movement building. No funny pub meetups anymore. Career fellowships and 1-on-1s all the way.I think this throws the baby out with the bathwater, and that more often than not, community building is indeed tremendously valuable movement building, even if it doesn't look like that at first glance.The piece of evidence I can cite on this (and indeed cite over and over again) is Google's "Project Aristotle"-study.In Project Aristotle, Google studied what makes their highest-performing teams highest-performing. And alas: It is not the fanciness of degrees or individual intelligence or agentyness or any other property of the individual team members, but five factors:"The researchers found that what really mattered was less about who is on the team, and more about how the team worked together. In order of importance:Psychological safety: Psychological safety refers to an individual’s perception of the consequences of taking an interpersonal risk or a belief that a team is safe for risk taking in the face of being seen as ignorant, incompetent, negative, or disruptive. In a team with high psychological safety, teammates feel safe to take risks around their team members. They feel confident that no one on the team will embarrass or punish anyone else for admitting a mistake, asking a question, or offering a new idea.Dependability: On dependable teams, members reliably complete quality work on time (vs the opposite - shirking responsibilities).Structure and clarity: An individual’s understanding of job expectations, the process for fulfilling these expectations, and the consequences of one’s performance are important for team effectiveness. Goals can be set at the individual or group level, and must be specific, challenging, and attainable. Google often uses Objectives and Key Results (OKRs) to help set and communicate short and long term goals.Meaning: Finding a sense of purpose in either the work itself or the output is important for team effectiveness. The meaning of work is personal and can vary: financial security, supporting family, helping the team succeed, or self-expression for each individual, for example.Impact: The results of one’s work, the subjective judgement that your work is making a difference, is important for teams. Seeing that one’s work is contributing to the organization’s goals can help reveal impact."What I find remarkable is that "psychological safety" leads the list. While some factors in EA actively work against the psychological safety of its members. To name just a few:EA tends to attract pretty smart people. If you throw a bunch of people together who have been used all their lives to being the smart kid in the room, they suddenly lose the default role they had in just about any context. Because now, surrounded by even smarter kids, they are merely the kid. I think this is where a bunch of EAs' impostor syndrome comes from.EAs like to work at EA-aligned organizations. That means that some of us feel like any little chat at a conference (or any little comment on the EA Forum or our social media accounts) also i...]]>
Severin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:47 None full 5904
ShCENF54ZN6bxaysL_NL_EA_EA EA - Why Not EA? [paper draft] by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Not EA? [paper draft], published by Richard Y Chappell on May 9, 2023 on The Effective Altruism Forum.Hi all, I'm currently working on a contribution to a special issue of Public Affairs Quarterly on the topic of "philosophical issues in effective altruism". I'm hoping that my contribution can provide a helpful survey of common philosophical objections to EA (and why I think those objections fail)—the sort of thing that might be useful to assign in an undergraduate philosophy class discussing EA.The abstract:Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but every decent person should share the basic goals or values underlying effective altruism.I cover:Five objections to moral prioritization (including the systems critique)Earning to giveBillionaire philanthropyLongtermism; andPolitical critique.Given the broad (survey-style) scope of the paper, each argument is addressed pretty briefly. But I hope it nonetheless contains some useful insights. For example, I suggest the following "simple dilemma for those who claim that EA is incapable of recognizing the need for 'systemic change'":Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. If it does, then EA principles straightforwardly endorse attempting to promote systemic change. If it does not, then by their own lights they have no basis for thinking it a better option. In neither case does it constitute a coherent objection to EA principles.On earning to give:Rare exceptions aside, most careers are presumably permissible. The basic idea of earning to give is just that we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings. There can thus be excellent altruistic reasons to pursue higher pay. This claim is both true and widely neglected. The same may be said of the comparative claim that one could easily have more moral reason to pursue "earning to give" than to pursue a conventionally "altruistic" career that more directly helps people. This comparative claim, too, is both true and widely neglected. Neither of these important truths is threatened by the deontologist's claim that one should not pursue an impermissible career. The relevant moral claim is just that the directness of our moral aid is not intrinsically morally significant, so a wider range of possible actions are potentially worth considering, for altruistic reasons, than people commonly recognize.On billionaire philanthropy:EA explicitly acknowledges the fact that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth, and may dislike EA for highlighting it. But I do not think it is objectionable to acknowledge relevant facts, even when politically inconvenient... Unless critics seriously want billionaires to deliberately try to do less good rather than more, it's hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.I still have time to make revisions -- and space to expand the paper if needed -- so if anyone has time to read the whole draft and offer any feedback (either in comments below, or privately via DM/email/whatever), that would be most welcome!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Richard Y Chappell https://forum.effectivealtruism.org/posts/ShCENF54ZN6bxaysL/why-not-ea-paper-draft Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Not EA? [paper draft], published by Richard Y Chappell on May 9, 2023 on The Effective Altruism Forum.Hi all, I'm currently working on a contribution to a special issue of Public Affairs Quarterly on the topic of "philosophical issues in effective altruism". I'm hoping that my contribution can provide a helpful survey of common philosophical objections to EA (and why I think those objections fail)—the sort of thing that might be useful to assign in an undergraduate philosophy class discussing EA.The abstract:Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but every decent person should share the basic goals or values underlying effective altruism.I cover:Five objections to moral prioritization (including the systems critique)Earning to giveBillionaire philanthropyLongtermism; andPolitical critique.Given the broad (survey-style) scope of the paper, each argument is addressed pretty briefly. But I hope it nonetheless contains some useful insights. For example, I suggest the following "simple dilemma for those who claim that EA is incapable of recognizing the need for 'systemic change'":Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. If it does, then EA principles straightforwardly endorse attempting to promote systemic change. If it does not, then by their own lights they have no basis for thinking it a better option. In neither case does it constitute a coherent objection to EA principles.On earning to give:Rare exceptions aside, most careers are presumably permissible. The basic idea of earning to give is just that we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings. There can thus be excellent altruistic reasons to pursue higher pay. This claim is both true and widely neglected. The same may be said of the comparative claim that one could easily have more moral reason to pursue "earning to give" than to pursue a conventionally "altruistic" career that more directly helps people. This comparative claim, too, is both true and widely neglected. Neither of these important truths is threatened by the deontologist's claim that one should not pursue an impermissible career. The relevant moral claim is just that the directness of our moral aid is not intrinsically morally significant, so a wider range of possible actions are potentially worth considering, for altruistic reasons, than people commonly recognize.On billionaire philanthropy:EA explicitly acknowledges the fact that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth, and may dislike EA for highlighting it. But I do not think it is objectionable to acknowledge relevant facts, even when politically inconvenient... Unless critics seriously want billionaires to deliberately try to do less good rather than more, it's hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.I still have time to make revisions -- and space to expand the paper if needed -- so if anyone has time to read the whole draft and offer any feedback (either in comments below, or privately via DM/email/whatever), that would be most welcome!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 10 May 2023 11:31:46 +0000 EA - Why Not EA? [paper draft] by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Not EA? [paper draft], published by Richard Y Chappell on May 9, 2023 on The Effective Altruism Forum.Hi all, I'm currently working on a contribution to a special issue of Public Affairs Quarterly on the topic of "philosophical issues in effective altruism". I'm hoping that my contribution can provide a helpful survey of common philosophical objections to EA (and why I think those objections fail)—the sort of thing that might be useful to assign in an undergraduate philosophy class discussing EA.The abstract:Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but every decent person should share the basic goals or values underlying effective altruism.I cover:Five objections to moral prioritization (including the systems critique)Earning to giveBillionaire philanthropyLongtermism; andPolitical critique.Given the broad (survey-style) scope of the paper, each argument is addressed pretty briefly. But I hope it nonetheless contains some useful insights. For example, I suggest the following "simple dilemma for those who claim that EA is incapable of recognizing the need for 'systemic change'":Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. If it does, then EA principles straightforwardly endorse attempting to promote systemic change. If it does not, then by their own lights they have no basis for thinking it a better option. In neither case does it constitute a coherent objection to EA principles.On earning to give:Rare exceptions aside, most careers are presumably permissible. The basic idea of earning to give is just that we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings. There can thus be excellent altruistic reasons to pursue higher pay. This claim is both true and widely neglected. The same may be said of the comparative claim that one could easily have more moral reason to pursue "earning to give" than to pursue a conventionally "altruistic" career that more directly helps people. This comparative claim, too, is both true and widely neglected. Neither of these important truths is threatened by the deontologist's claim that one should not pursue an impermissible career. The relevant moral claim is just that the directness of our moral aid is not intrinsically morally significant, so a wider range of possible actions are potentially worth considering, for altruistic reasons, than people commonly recognize.On billionaire philanthropy:EA explicitly acknowledges the fact that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth, and may dislike EA for highlighting it. But I do not think it is objectionable to acknowledge relevant facts, even when politically inconvenient... Unless critics seriously want billionaires to deliberately try to do less good rather than more, it's hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.I still have time to make revisions -- and space to expand the paper if needed -- so if anyone has time to read the whole draft and offer any feedback (either in comments below, or privately via DM/email/whatever), that would be most welcome!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Not EA? [paper draft], published by Richard Y Chappell on May 9, 2023 on The Effective Altruism Forum.Hi all, I'm currently working on a contribution to a special issue of Public Affairs Quarterly on the topic of "philosophical issues in effective altruism". I'm hoping that my contribution can provide a helpful survey of common philosophical objections to EA (and why I think those objections fail)—the sort of thing that might be useful to assign in an undergraduate philosophy class discussing EA.The abstract:Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but every decent person should share the basic goals or values underlying effective altruism.I cover:Five objections to moral prioritization (including the systems critique)Earning to giveBillionaire philanthropyLongtermism; andPolitical critique.Given the broad (survey-style) scope of the paper, each argument is addressed pretty briefly. But I hope it nonetheless contains some useful insights. For example, I suggest the following "simple dilemma for those who claim that EA is incapable of recognizing the need for 'systemic change'":Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. If it does, then EA principles straightforwardly endorse attempting to promote systemic change. If it does not, then by their own lights they have no basis for thinking it a better option. In neither case does it constitute a coherent objection to EA principles.On earning to give:Rare exceptions aside, most careers are presumably permissible. The basic idea of earning to give is just that we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings. There can thus be excellent altruistic reasons to pursue higher pay. This claim is both true and widely neglected. The same may be said of the comparative claim that one could easily have more moral reason to pursue "earning to give" than to pursue a conventionally "altruistic" career that more directly helps people. This comparative claim, too, is both true and widely neglected. Neither of these important truths is threatened by the deontologist's claim that one should not pursue an impermissible career. The relevant moral claim is just that the directness of our moral aid is not intrinsically morally significant, so a wider range of possible actions are potentially worth considering, for altruistic reasons, than people commonly recognize.On billionaire philanthropy:EA explicitly acknowledges the fact that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth, and may dislike EA for highlighting it. But I do not think it is objectionable to acknowledge relevant facts, even when politically inconvenient... Unless critics seriously want billionaires to deliberately try to do less good rather than more, it's hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.I still have time to make revisions -- and space to expand the paper if needed -- so if anyone has time to read the whole draft and offer any feedback (either in comments below, or privately via DM/email/whatever), that would be most welcome!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Richard Y Chappell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:31 None full 5903
fRo5urRznMzGJAwrE_NL_EA_EA EA - On missing moods and tradeoffs by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On missing moods and tradeoffs, published by Lizka on May 9, 2023 on The Effective Altruism Forum.My favorite jargony phrase of the ~week is "missing mood."How I've been using it:If you're not feeling sad about some tradeoffs/facts about the world (or if you notice that someone else doesn't seem to be), then you might not be tracking something important (you might be biased, etc.). The “missing mood” is a signal.Note: I’m sharing this short post with some thoughts to hear disagreements, get other examples, and add nuance to my understanding of what’s going on. I might not be able to respond to all comments.Examples1. Immigration restrictionsAn example from the linked essay: immigration restrictions are sometimes justified. But "the reasonable restrictionist mood is anguish that a tremendous opportunity to enrich mankind and end poverty must go to waste." You might think that restricting immigration is sometimes the lesser evil, but if you don't have this mood, you're probably just ~xenophobic.2. Long contentThe example from Ben — a simplified sketch of our conversation:Me: How seriously do you hold your belief that “more people should have short attention spans?” And that long content is bad?Ben: I think I mostly just mean that there’s a missing mood: it’s ok to create long content, but you should be sad that you’re failing to communicate those ideas more concisely. I don’t think people are. (And content consumers should signal that they’d prefer shorter content.)(Related: Distillation and research debt, apparently Ben had written a shortform about this a year ago, and Using the “executive summary” style: writing that respects your reader’s time)3-6. Selective spaces, transparency, cause prioritization, and slowing AII had been trying to (re)invent the phrase for situations like the following, where I want to see people acknowledging tradeoffs:Some spaces and events have restricted access. I think this is the right decision in many cases. But we should notice that it's sad to reject people from things, and there are negative effects from the fact that some people/groups can make those decisions.I want some groups of people to be more transparent and more widely accountable (and I frequently want to prioritize transparency-motivated projects on my team, and am sad when we drop them). In some cases, it's just true that I think transparency (or accountability) is more valuable than the other person does. But as I learn more about or start getting involved in any given situation, I usually notice that there are real tradeoffs; transparency has costs like time, risks, etc. There are two ways missing moods pop up in this case:When I'm just ~rallying for transparency, I'm missing a mood of "yes, it's costly in many ways, and it's awful that prioritizing transparency might mean that some good things don’t happen, but I still want more of it." If I don't have this mood, I might be biased by a vibe of "transparency good.” When I start thinking more about the tradeoffs, I sometimes entirely change my opinion to agree with the prioritization of whoever it is I’m disagreeing with. Alternatively, my position becomes closer to: "Ok, I don't really know what tradeoffs you're making, and you might be making the right ones. I'm sad that you don't seem to be valuing transparency that much. Or I just wish that you were transparent — I don't actually know how much you're valuing transparency."The people I’m disagreeing with might also be missing a mood. They might just not care about transparency or acknowledge its benefits. There’s a big difference (to me) between someone deciding not to prioritize transparency because the costs are too high and someone not valuing it at all, and if I’m not sensing the mood, it might be the latter. (This is especially true if I don’t h...]]>
Lizka https://forum.effectivealtruism.org/posts/fRo5urRznMzGJAwrE/on-missing-moods-and-tradeoffs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On missing moods and tradeoffs, published by Lizka on May 9, 2023 on The Effective Altruism Forum.My favorite jargony phrase of the ~week is "missing mood."How I've been using it:If you're not feeling sad about some tradeoffs/facts about the world (or if you notice that someone else doesn't seem to be), then you might not be tracking something important (you might be biased, etc.). The “missing mood” is a signal.Note: I’m sharing this short post with some thoughts to hear disagreements, get other examples, and add nuance to my understanding of what’s going on. I might not be able to respond to all comments.Examples1. Immigration restrictionsAn example from the linked essay: immigration restrictions are sometimes justified. But "the reasonable restrictionist mood is anguish that a tremendous opportunity to enrich mankind and end poverty must go to waste." You might think that restricting immigration is sometimes the lesser evil, but if you don't have this mood, you're probably just ~xenophobic.2. Long contentThe example from Ben — a simplified sketch of our conversation:Me: How seriously do you hold your belief that “more people should have short attention spans?” And that long content is bad?Ben: I think I mostly just mean that there’s a missing mood: it’s ok to create long content, but you should be sad that you’re failing to communicate those ideas more concisely. I don’t think people are. (And content consumers should signal that they’d prefer shorter content.)(Related: Distillation and research debt, apparently Ben had written a shortform about this a year ago, and Using the “executive summary” style: writing that respects your reader’s time)3-6. Selective spaces, transparency, cause prioritization, and slowing AII had been trying to (re)invent the phrase for situations like the following, where I want to see people acknowledging tradeoffs:Some spaces and events have restricted access. I think this is the right decision in many cases. But we should notice that it's sad to reject people from things, and there are negative effects from the fact that some people/groups can make those decisions.I want some groups of people to be more transparent and more widely accountable (and I frequently want to prioritize transparency-motivated projects on my team, and am sad when we drop them). In some cases, it's just true that I think transparency (or accountability) is more valuable than the other person does. But as I learn more about or start getting involved in any given situation, I usually notice that there are real tradeoffs; transparency has costs like time, risks, etc. There are two ways missing moods pop up in this case:When I'm just ~rallying for transparency, I'm missing a mood of "yes, it's costly in many ways, and it's awful that prioritizing transparency might mean that some good things don’t happen, but I still want more of it." If I don't have this mood, I might be biased by a vibe of "transparency good.” When I start thinking more about the tradeoffs, I sometimes entirely change my opinion to agree with the prioritization of whoever it is I’m disagreeing with. Alternatively, my position becomes closer to: "Ok, I don't really know what tradeoffs you're making, and you might be making the right ones. I'm sad that you don't seem to be valuing transparency that much. Or I just wish that you were transparent — I don't actually know how much you're valuing transparency."The people I’m disagreeing with might also be missing a mood. They might just not care about transparency or acknowledge its benefits. There’s a big difference (to me) between someone deciding not to prioritize transparency because the costs are too high and someone not valuing it at all, and if I’m not sensing the mood, it might be the latter. (This is especially true if I don’t h...]]>
Wed, 10 May 2023 03:16:17 +0000 EA - On missing moods and tradeoffs by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On missing moods and tradeoffs, published by Lizka on May 9, 2023 on The Effective Altruism Forum.My favorite jargony phrase of the ~week is "missing mood."How I've been using it:If you're not feeling sad about some tradeoffs/facts about the world (or if you notice that someone else doesn't seem to be), then you might not be tracking something important (you might be biased, etc.). The “missing mood” is a signal.Note: I’m sharing this short post with some thoughts to hear disagreements, get other examples, and add nuance to my understanding of what’s going on. I might not be able to respond to all comments.Examples1. Immigration restrictionsAn example from the linked essay: immigration restrictions are sometimes justified. But "the reasonable restrictionist mood is anguish that a tremendous opportunity to enrich mankind and end poverty must go to waste." You might think that restricting immigration is sometimes the lesser evil, but if you don't have this mood, you're probably just ~xenophobic.2. Long contentThe example from Ben — a simplified sketch of our conversation:Me: How seriously do you hold your belief that “more people should have short attention spans?” And that long content is bad?Ben: I think I mostly just mean that there’s a missing mood: it’s ok to create long content, but you should be sad that you’re failing to communicate those ideas more concisely. I don’t think people are. (And content consumers should signal that they’d prefer shorter content.)(Related: Distillation and research debt, apparently Ben had written a shortform about this a year ago, and Using the “executive summary” style: writing that respects your reader’s time)3-6. Selective spaces, transparency, cause prioritization, and slowing AII had been trying to (re)invent the phrase for situations like the following, where I want to see people acknowledging tradeoffs:Some spaces and events have restricted access. I think this is the right decision in many cases. But we should notice that it's sad to reject people from things, and there are negative effects from the fact that some people/groups can make those decisions.I want some groups of people to be more transparent and more widely accountable (and I frequently want to prioritize transparency-motivated projects on my team, and am sad when we drop them). In some cases, it's just true that I think transparency (or accountability) is more valuable than the other person does. But as I learn more about or start getting involved in any given situation, I usually notice that there are real tradeoffs; transparency has costs like time, risks, etc. There are two ways missing moods pop up in this case:When I'm just ~rallying for transparency, I'm missing a mood of "yes, it's costly in many ways, and it's awful that prioritizing transparency might mean that some good things don’t happen, but I still want more of it." If I don't have this mood, I might be biased by a vibe of "transparency good.” When I start thinking more about the tradeoffs, I sometimes entirely change my opinion to agree with the prioritization of whoever it is I’m disagreeing with. Alternatively, my position becomes closer to: "Ok, I don't really know what tradeoffs you're making, and you might be making the right ones. I'm sad that you don't seem to be valuing transparency that much. Or I just wish that you were transparent — I don't actually know how much you're valuing transparency."The people I’m disagreeing with might also be missing a mood. They might just not care about transparency or acknowledge its benefits. There’s a big difference (to me) between someone deciding not to prioritize transparency because the costs are too high and someone not valuing it at all, and if I’m not sensing the mood, it might be the latter. (This is especially true if I don’t h...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On missing moods and tradeoffs, published by Lizka on May 9, 2023 on The Effective Altruism Forum.My favorite jargony phrase of the ~week is "missing mood."How I've been using it:If you're not feeling sad about some tradeoffs/facts about the world (or if you notice that someone else doesn't seem to be), then you might not be tracking something important (you might be biased, etc.). The “missing mood” is a signal.Note: I’m sharing this short post with some thoughts to hear disagreements, get other examples, and add nuance to my understanding of what’s going on. I might not be able to respond to all comments.Examples1. Immigration restrictionsAn example from the linked essay: immigration restrictions are sometimes justified. But "the reasonable restrictionist mood is anguish that a tremendous opportunity to enrich mankind and end poverty must go to waste." You might think that restricting immigration is sometimes the lesser evil, but if you don't have this mood, you're probably just ~xenophobic.2. Long contentThe example from Ben — a simplified sketch of our conversation:Me: How seriously do you hold your belief that “more people should have short attention spans?” And that long content is bad?Ben: I think I mostly just mean that there’s a missing mood: it’s ok to create long content, but you should be sad that you’re failing to communicate those ideas more concisely. I don’t think people are. (And content consumers should signal that they’d prefer shorter content.)(Related: Distillation and research debt, apparently Ben had written a shortform about this a year ago, and Using the “executive summary” style: writing that respects your reader’s time)3-6. Selective spaces, transparency, cause prioritization, and slowing AII had been trying to (re)invent the phrase for situations like the following, where I want to see people acknowledging tradeoffs:Some spaces and events have restricted access. I think this is the right decision in many cases. But we should notice that it's sad to reject people from things, and there are negative effects from the fact that some people/groups can make those decisions.I want some groups of people to be more transparent and more widely accountable (and I frequently want to prioritize transparency-motivated projects on my team, and am sad when we drop them). In some cases, it's just true that I think transparency (or accountability) is more valuable than the other person does. But as I learn more about or start getting involved in any given situation, I usually notice that there are real tradeoffs; transparency has costs like time, risks, etc. There are two ways missing moods pop up in this case:When I'm just ~rallying for transparency, I'm missing a mood of "yes, it's costly in many ways, and it's awful that prioritizing transparency might mean that some good things don’t happen, but I still want more of it." If I don't have this mood, I might be biased by a vibe of "transparency good.” When I start thinking more about the tradeoffs, I sometimes entirely change my opinion to agree with the prioritization of whoever it is I’m disagreeing with. Alternatively, my position becomes closer to: "Ok, I don't really know what tradeoffs you're making, and you might be making the right ones. I'm sad that you don't seem to be valuing transparency that much. Or I just wish that you were transparent — I don't actually know how much you're valuing transparency."The people I’m disagreeing with might also be missing a mood. They might just not care about transparency or acknowledge its benefits. There’s a big difference (to me) between someone deciding not to prioritize transparency because the costs are too high and someone not valuing it at all, and if I’m not sensing the mood, it might be the latter. (This is especially true if I don’t h...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:28 None full 5901
CgeDuvedjqCj56HXZ_NL_EA_EA EA - A note of caution on believing things on a gut level by Nathan Barnard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A note of caution on believing things on a gut level, published by Nathan Barnard on May 9, 2023 on The Effective Altruism Forum.Joe Carlsmith's latest post discusses the difference between the probabilities that one puts on events on a gut level and on a cognitive level, and advocates updating your gut beliefs towards your cognitive beliefs insofar as the latter better tracks the truth.The post briefly notes that there can be some negative mental health consequences of this. I would like to provide a personal anecdote of some of the costs (and benefits) of changing your gut beliefs to be in line with your cognitive ones.Around 6 months ago my gut realised that one day I was going to die, in all likelihood well before I would wish to. During this period, my gut also adopted the same cognitive beliefs I have about TAI and AI x-risk. All things considered, I expect this to have both decreased my impact from an impartial welfarist perspective and my personal life satisfaction by a substantial amount.Some of the costs for me of this have been:A substantial decrease in my altruistic motivation in favour of self-preservationA dramatic drop in my motivation to workSubstantially worse ability to carry out causes prioritisationDepressionGenerically being a less clear thinkerDiffering my examsI expect to receive a somewhat lower mark in my degree than I otherwise would haveFailing to run my university EA group wellThere have also been some benefits to this:I much more closely examined my beliefs about AI and AI X-riskEngaging quite deeply with some philosophy questionsNote that this is just the experience of one individual and there are some good reasons to think that the net negative effects I've experienced won’t generalise:I’ve always been very good at acting on beliefs that I held at a cognitive level but not at a gut level. The upside therefore to me believing things at a gut level was always going to be small.I have a history of ruminative OCD (also known as pure O) - I almost without caveat recommend that others with ruminative OCD do not engage with potentially unpleasant beliefs one has at a cognitive level on a gut level.I’ve been experiencing some other difficulties in my life that probably made me more vulnerable to depression.In some EA and Rationalist circles, there’s a norm of being quite in touch with one’s emotions. I’m sure that this is very good for some people but I expect that it is quite harmful to others, including myself. For such individuals, there is an advantage to a certain level of detachment from one’s emotions. I say this because I think it’s somewhat lower status to reject engaging with one’s emotions and I think that this is probably harmful.As a final point, note that you are probably bad at affective forecasting. I’ve spent quite a lot of time reading about how people felt close to death and there are a wide variety of experiences. Some people do find that they are afraid own deaths when close to them, and others find that they have no fear. I’m particularly struck by De Gaulle’s recollections of his experiences during the first world war where he found he had no fear of death, after being shot leading his men during his early years in the war as a junior officer.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Barnard https://forum.effectivealtruism.org/posts/CgeDuvedjqCj56HXZ/a-note-of-caution-on-believing-things-on-a-gut-level Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A note of caution on believing things on a gut level, published by Nathan Barnard on May 9, 2023 on The Effective Altruism Forum.Joe Carlsmith's latest post discusses the difference between the probabilities that one puts on events on a gut level and on a cognitive level, and advocates updating your gut beliefs towards your cognitive beliefs insofar as the latter better tracks the truth.The post briefly notes that there can be some negative mental health consequences of this. I would like to provide a personal anecdote of some of the costs (and benefits) of changing your gut beliefs to be in line with your cognitive ones.Around 6 months ago my gut realised that one day I was going to die, in all likelihood well before I would wish to. During this period, my gut also adopted the same cognitive beliefs I have about TAI and AI x-risk. All things considered, I expect this to have both decreased my impact from an impartial welfarist perspective and my personal life satisfaction by a substantial amount.Some of the costs for me of this have been:A substantial decrease in my altruistic motivation in favour of self-preservationA dramatic drop in my motivation to workSubstantially worse ability to carry out causes prioritisationDepressionGenerically being a less clear thinkerDiffering my examsI expect to receive a somewhat lower mark in my degree than I otherwise would haveFailing to run my university EA group wellThere have also been some benefits to this:I much more closely examined my beliefs about AI and AI X-riskEngaging quite deeply with some philosophy questionsNote that this is just the experience of one individual and there are some good reasons to think that the net negative effects I've experienced won’t generalise:I’ve always been very good at acting on beliefs that I held at a cognitive level but not at a gut level. The upside therefore to me believing things at a gut level was always going to be small.I have a history of ruminative OCD (also known as pure O) - I almost without caveat recommend that others with ruminative OCD do not engage with potentially unpleasant beliefs one has at a cognitive level on a gut level.I’ve been experiencing some other difficulties in my life that probably made me more vulnerable to depression.In some EA and Rationalist circles, there’s a norm of being quite in touch with one’s emotions. I’m sure that this is very good for some people but I expect that it is quite harmful to others, including myself. For such individuals, there is an advantage to a certain level of detachment from one’s emotions. I say this because I think it’s somewhat lower status to reject engaging with one’s emotions and I think that this is probably harmful.As a final point, note that you are probably bad at affective forecasting. I’ve spent quite a lot of time reading about how people felt close to death and there are a wide variety of experiences. Some people do find that they are afraid own deaths when close to them, and others find that they have no fear. I’m particularly struck by De Gaulle’s recollections of his experiences during the first world war where he found he had no fear of death, after being shot leading his men during his early years in the war as a junior officer.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 09 May 2023 23:56:56 +0000 EA - A note of caution on believing things on a gut level by Nathan Barnard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A note of caution on believing things on a gut level, published by Nathan Barnard on May 9, 2023 on The Effective Altruism Forum.Joe Carlsmith's latest post discusses the difference between the probabilities that one puts on events on a gut level and on a cognitive level, and advocates updating your gut beliefs towards your cognitive beliefs insofar as the latter better tracks the truth.The post briefly notes that there can be some negative mental health consequences of this. I would like to provide a personal anecdote of some of the costs (and benefits) of changing your gut beliefs to be in line with your cognitive ones.Around 6 months ago my gut realised that one day I was going to die, in all likelihood well before I would wish to. During this period, my gut also adopted the same cognitive beliefs I have about TAI and AI x-risk. All things considered, I expect this to have both decreased my impact from an impartial welfarist perspective and my personal life satisfaction by a substantial amount.Some of the costs for me of this have been:A substantial decrease in my altruistic motivation in favour of self-preservationA dramatic drop in my motivation to workSubstantially worse ability to carry out causes prioritisationDepressionGenerically being a less clear thinkerDiffering my examsI expect to receive a somewhat lower mark in my degree than I otherwise would haveFailing to run my university EA group wellThere have also been some benefits to this:I much more closely examined my beliefs about AI and AI X-riskEngaging quite deeply with some philosophy questionsNote that this is just the experience of one individual and there are some good reasons to think that the net negative effects I've experienced won’t generalise:I’ve always been very good at acting on beliefs that I held at a cognitive level but not at a gut level. The upside therefore to me believing things at a gut level was always going to be small.I have a history of ruminative OCD (also known as pure O) - I almost without caveat recommend that others with ruminative OCD do not engage with potentially unpleasant beliefs one has at a cognitive level on a gut level.I’ve been experiencing some other difficulties in my life that probably made me more vulnerable to depression.In some EA and Rationalist circles, there’s a norm of being quite in touch with one’s emotions. I’m sure that this is very good for some people but I expect that it is quite harmful to others, including myself. For such individuals, there is an advantage to a certain level of detachment from one’s emotions. I say this because I think it’s somewhat lower status to reject engaging with one’s emotions and I think that this is probably harmful.As a final point, note that you are probably bad at affective forecasting. I’ve spent quite a lot of time reading about how people felt close to death and there are a wide variety of experiences. Some people do find that they are afraid own deaths when close to them, and others find that they have no fear. I’m particularly struck by De Gaulle’s recollections of his experiences during the first world war where he found he had no fear of death, after being shot leading his men during his early years in the war as a junior officer.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A note of caution on believing things on a gut level, published by Nathan Barnard on May 9, 2023 on The Effective Altruism Forum.Joe Carlsmith's latest post discusses the difference between the probabilities that one puts on events on a gut level and on a cognitive level, and advocates updating your gut beliefs towards your cognitive beliefs insofar as the latter better tracks the truth.The post briefly notes that there can be some negative mental health consequences of this. I would like to provide a personal anecdote of some of the costs (and benefits) of changing your gut beliefs to be in line with your cognitive ones.Around 6 months ago my gut realised that one day I was going to die, in all likelihood well before I would wish to. During this period, my gut also adopted the same cognitive beliefs I have about TAI and AI x-risk. All things considered, I expect this to have both decreased my impact from an impartial welfarist perspective and my personal life satisfaction by a substantial amount.Some of the costs for me of this have been:A substantial decrease in my altruistic motivation in favour of self-preservationA dramatic drop in my motivation to workSubstantially worse ability to carry out causes prioritisationDepressionGenerically being a less clear thinkerDiffering my examsI expect to receive a somewhat lower mark in my degree than I otherwise would haveFailing to run my university EA group wellThere have also been some benefits to this:I much more closely examined my beliefs about AI and AI X-riskEngaging quite deeply with some philosophy questionsNote that this is just the experience of one individual and there are some good reasons to think that the net negative effects I've experienced won’t generalise:I’ve always been very good at acting on beliefs that I held at a cognitive level but not at a gut level. The upside therefore to me believing things at a gut level was always going to be small.I have a history of ruminative OCD (also known as pure O) - I almost without caveat recommend that others with ruminative OCD do not engage with potentially unpleasant beliefs one has at a cognitive level on a gut level.I’ve been experiencing some other difficulties in my life that probably made me more vulnerable to depression.In some EA and Rationalist circles, there’s a norm of being quite in touch with one’s emotions. I’m sure that this is very good for some people but I expect that it is quite harmful to others, including myself. For such individuals, there is an advantage to a certain level of detachment from one’s emotions. I say this because I think it’s somewhat lower status to reject engaging with one’s emotions and I think that this is probably harmful.As a final point, note that you are probably bad at affective forecasting. I’ve spent quite a lot of time reading about how people felt close to death and there are a wide variety of experiences. Some people do find that they are afraid own deaths when close to them, and others find that they have no fear. I’m particularly struck by De Gaulle’s recollections of his experiences during the first world war where he found he had no fear of death, after being shot leading his men during his early years in the war as a junior officer.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Barnard https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:05 None full 5900
CfFpEoibJTrTmiWtF_NL_EA_EA EA - [AISN #5]: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models by Center for AI Safety Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [AISN #5]: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models, published by Center for AI Safety on May 9, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.Geoffrey Hinton is concerned about existential risks from AIGeoffrey Hinton won the Turing Award for his work on AI. Now he says that part of him regrets his life’s work, as he believes that AI poses an existential threat to humanity. As Hinton puts it, “it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”AI is developing more rapidly than Hinton expected. In 2015, Andrew Ng argued that worrying about AI risk is like worrying about overpopulation on Mars. Geoffrey Hinton also used to believe that advanced AI was decades away, but recent progress has changed his views. Now he says that AI will become “smarter than a human” in “5 to 20 years, but without much confidence. We live in very uncertain times.”The AI race is heating up, but Hinton sees a way out. In an interview with MIT Technology Review, Hinton argues that building AI is “inevitable” given competition between companies and countries. But he argues that “we’re all in the same boat with respect to existential risk,” so potentially “we could get the US and China to agree like we could with nuclear weapons.”Similar to climate change, AI risk will require coordination to solve. Hinton compared the two risks by saying, "I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.' That's a huge risk too. But I think this might end up being more urgent."When AIs create their own subgoals, they will seek power. Hinton argues that AI agents like AutoGPT and BabyAGI demonstrate that people will build AIs that choose their own goals and pursue them. Hinton and others have argued that this is dangerous because “getting more control is a very good subgoal because it helps you achieve other goals.”Other experts are speaking up on AI risk. Demis Hassabis, CEO of DeepMind, recently said that he believes some form of AGI is “a few years, maybe within a decade away” and recommended “developing these types of AGI technologies in a cautious manner.” Shane Legg, co-founder of DeepMind, thinks AGI is likely to arrive around 2026. Warren Buffet compared AI to the nuclear bomb, and many others are concerned about advanced AI.White House meets with AI labsVice President Kamala Harris met at the White House on Thursday with leaders of Microsoft, Google, Anthropic, and OpenAI to discuss risks from artificial intelligence. This is an important step towards AI governance, though it’s a bit like inviting oil companies to a discussion on climate change—they have the power to solve the problem, but incentives to ignore it.New executive action on AI. After the meeting, the White House outlined three steps they plan to take to continue responding to the challenges posed by AI:To evaluate the risks of generative AI models, the White House will facilitate a public red-teaming competition. The event will take place at the DEF CON 31 conference and will feature cutting-edge models provided by leading AI labs.The White House continues to support investments in AI research, such as committing $140M over 5 years to National AI Research Institutes. Unfortunately, it’s plausible that most of this investment will be used to accelerate AI development without being directed at making these systems more safe.The Office of Management and Budget will release guidelines for federal use of AI.Federal agencies promise enforcement action on AI. Four federal agencies iss...]]>
Center for AI Safety https://forum.effectivealtruism.org/posts/CfFpEoibJTrTmiWtF/aisn-5-geoffrey-hinton-speaks-out-on-ai-risk-the-white-house Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [AISN #5]: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models, published by Center for AI Safety on May 9, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.Geoffrey Hinton is concerned about existential risks from AIGeoffrey Hinton won the Turing Award for his work on AI. Now he says that part of him regrets his life’s work, as he believes that AI poses an existential threat to humanity. As Hinton puts it, “it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”AI is developing more rapidly than Hinton expected. In 2015, Andrew Ng argued that worrying about AI risk is like worrying about overpopulation on Mars. Geoffrey Hinton also used to believe that advanced AI was decades away, but recent progress has changed his views. Now he says that AI will become “smarter than a human” in “5 to 20 years, but without much confidence. We live in very uncertain times.”The AI race is heating up, but Hinton sees a way out. In an interview with MIT Technology Review, Hinton argues that building AI is “inevitable” given competition between companies and countries. But he argues that “we’re all in the same boat with respect to existential risk,” so potentially “we could get the US and China to agree like we could with nuclear weapons.”Similar to climate change, AI risk will require coordination to solve. Hinton compared the two risks by saying, "I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.' That's a huge risk too. But I think this might end up being more urgent."When AIs create their own subgoals, they will seek power. Hinton argues that AI agents like AutoGPT and BabyAGI demonstrate that people will build AIs that choose their own goals and pursue them. Hinton and others have argued that this is dangerous because “getting more control is a very good subgoal because it helps you achieve other goals.”Other experts are speaking up on AI risk. Demis Hassabis, CEO of DeepMind, recently said that he believes some form of AGI is “a few years, maybe within a decade away” and recommended “developing these types of AGI technologies in a cautious manner.” Shane Legg, co-founder of DeepMind, thinks AGI is likely to arrive around 2026. Warren Buffet compared AI to the nuclear bomb, and many others are concerned about advanced AI.White House meets with AI labsVice President Kamala Harris met at the White House on Thursday with leaders of Microsoft, Google, Anthropic, and OpenAI to discuss risks from artificial intelligence. This is an important step towards AI governance, though it’s a bit like inviting oil companies to a discussion on climate change—they have the power to solve the problem, but incentives to ignore it.New executive action on AI. After the meeting, the White House outlined three steps they plan to take to continue responding to the challenges posed by AI:To evaluate the risks of generative AI models, the White House will facilitate a public red-teaming competition. The event will take place at the DEF CON 31 conference and will feature cutting-edge models provided by leading AI labs.The White House continues to support investments in AI research, such as committing $140M over 5 years to National AI Research Institutes. Unfortunately, it’s plausible that most of this investment will be used to accelerate AI development without being directed at making these systems more safe.The Office of Management and Budget will release guidelines for federal use of AI.Federal agencies promise enforcement action on AI. Four federal agencies iss...]]>
Tue, 09 May 2023 18:51:22 +0000 EA - [AISN #5]: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models by Center for AI Safety Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [AISN #5]: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models, published by Center for AI Safety on May 9, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.Geoffrey Hinton is concerned about existential risks from AIGeoffrey Hinton won the Turing Award for his work on AI. Now he says that part of him regrets his life’s work, as he believes that AI poses an existential threat to humanity. As Hinton puts it, “it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”AI is developing more rapidly than Hinton expected. In 2015, Andrew Ng argued that worrying about AI risk is like worrying about overpopulation on Mars. Geoffrey Hinton also used to believe that advanced AI was decades away, but recent progress has changed his views. Now he says that AI will become “smarter than a human” in “5 to 20 years, but without much confidence. We live in very uncertain times.”The AI race is heating up, but Hinton sees a way out. In an interview with MIT Technology Review, Hinton argues that building AI is “inevitable” given competition between companies and countries. But he argues that “we’re all in the same boat with respect to existential risk,” so potentially “we could get the US and China to agree like we could with nuclear weapons.”Similar to climate change, AI risk will require coordination to solve. Hinton compared the two risks by saying, "I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.' That's a huge risk too. But I think this might end up being more urgent."When AIs create their own subgoals, they will seek power. Hinton argues that AI agents like AutoGPT and BabyAGI demonstrate that people will build AIs that choose their own goals and pursue them. Hinton and others have argued that this is dangerous because “getting more control is a very good subgoal because it helps you achieve other goals.”Other experts are speaking up on AI risk. Demis Hassabis, CEO of DeepMind, recently said that he believes some form of AGI is “a few years, maybe within a decade away” and recommended “developing these types of AGI technologies in a cautious manner.” Shane Legg, co-founder of DeepMind, thinks AGI is likely to arrive around 2026. Warren Buffet compared AI to the nuclear bomb, and many others are concerned about advanced AI.White House meets with AI labsVice President Kamala Harris met at the White House on Thursday with leaders of Microsoft, Google, Anthropic, and OpenAI to discuss risks from artificial intelligence. This is an important step towards AI governance, though it’s a bit like inviting oil companies to a discussion on climate change—they have the power to solve the problem, but incentives to ignore it.New executive action on AI. After the meeting, the White House outlined three steps they plan to take to continue responding to the challenges posed by AI:To evaluate the risks of generative AI models, the White House will facilitate a public red-teaming competition. The event will take place at the DEF CON 31 conference and will feature cutting-edge models provided by leading AI labs.The White House continues to support investments in AI research, such as committing $140M over 5 years to National AI Research Institutes. Unfortunately, it’s plausible that most of this investment will be used to accelerate AI development without being directed at making these systems more safe.The Office of Management and Budget will release guidelines for federal use of AI.Federal agencies promise enforcement action on AI. Four federal agencies iss...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [AISN #5]: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models, published by Center for AI Safety on May 9, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.Geoffrey Hinton is concerned about existential risks from AIGeoffrey Hinton won the Turing Award for his work on AI. Now he says that part of him regrets his life’s work, as he believes that AI poses an existential threat to humanity. As Hinton puts it, “it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”AI is developing more rapidly than Hinton expected. In 2015, Andrew Ng argued that worrying about AI risk is like worrying about overpopulation on Mars. Geoffrey Hinton also used to believe that advanced AI was decades away, but recent progress has changed his views. Now he says that AI will become “smarter than a human” in “5 to 20 years, but without much confidence. We live in very uncertain times.”The AI race is heating up, but Hinton sees a way out. In an interview with MIT Technology Review, Hinton argues that building AI is “inevitable” given competition between companies and countries. But he argues that “we’re all in the same boat with respect to existential risk,” so potentially “we could get the US and China to agree like we could with nuclear weapons.”Similar to climate change, AI risk will require coordination to solve. Hinton compared the two risks by saying, "I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.' That's a huge risk too. But I think this might end up being more urgent."When AIs create their own subgoals, they will seek power. Hinton argues that AI agents like AutoGPT and BabyAGI demonstrate that people will build AIs that choose their own goals and pursue them. Hinton and others have argued that this is dangerous because “getting more control is a very good subgoal because it helps you achieve other goals.”Other experts are speaking up on AI risk. Demis Hassabis, CEO of DeepMind, recently said that he believes some form of AGI is “a few years, maybe within a decade away” and recommended “developing these types of AGI technologies in a cautious manner.” Shane Legg, co-founder of DeepMind, thinks AGI is likely to arrive around 2026. Warren Buffet compared AI to the nuclear bomb, and many others are concerned about advanced AI.White House meets with AI labsVice President Kamala Harris met at the White House on Thursday with leaders of Microsoft, Google, Anthropic, and OpenAI to discuss risks from artificial intelligence. This is an important step towards AI governance, though it’s a bit like inviting oil companies to a discussion on climate change—they have the power to solve the problem, but incentives to ignore it.New executive action on AI. After the meeting, the White House outlined three steps they plan to take to continue responding to the challenges posed by AI:To evaluate the risks of generative AI models, the White House will facilitate a public red-teaming competition. The event will take place at the DEF CON 31 conference and will feature cutting-edge models provided by leading AI labs.The White House continues to support investments in AI research, such as committing $140M over 5 years to National AI Research Institutes. Unfortunately, it’s plausible that most of this investment will be used to accelerate AI development without being directed at making these systems more safe.The Office of Management and Budget will release guidelines for federal use of AI.Federal agencies promise enforcement action on AI. Four federal agencies iss...]]>
Center for AI Safety https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:06 None full 5896
x9pcT6dvaGKox4PT5_NL_EA_EA EA - Chilean AIS Hackathon Retrospective by Agustín Covarrubias Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Chilean AIS Hackathon Retrospective, published by Agustín Covarrubias on May 9, 2023 on The Effective Altruism Forum.TL;DRWe hosted an AI Safety “Thinkathon” in Chile. We had participation from 40 students with differing skill levels and backgrounds, with groups totalling 13 submissions.We see potential in:Similar introductory events aiming for a broad audienceCollaborating more often with student organizationsLeveraging remote help from external mentorsWe experimented with an alternative naming, having remote mentors, different problem sources, and student organization partnerships, with varying results.We could have improved planning and communicating the difficulty of challenges.IntroductionIn February, we ran the first AI Safety Hackathon in Chile (and possibly in all of South America). This post provides some details about the event, a teaser of some resulting projects and our learnings throughout.Goals and overview of the eventThe hackathon was meant to kick-start our nascent AI Safety Group at UC Chile, generating interest in AI Safety and encouraging people to register for our AGI Safety Fundamentals course group.It ran between the 25th and the 28th of February, the first two days being in-person events and the other two serving as additional time for participants to work on their proposals, with some remote assistance on our part. Participants formed teams of up to four people, and could choose to assist either virtually (through Discord) or in-person (on the first two days).We had help from Apart Research and partial funding from AI Alignment Awards.Things we experimented withAiming for a broad audience, we named the event “Thinkathon” (instead of hackathon) and provided plenty of introductory material alongside the proposed problems.We think this was the right choice, as the desired effect was reflected on the participant demographics (see below).We could have been better at preparing participants. Some participants suggested we could have done an introductory workshop.We incorporated the two problems from the AI Alignment Awards (Goal Misgeneralization and the Shutdown problem), alongside easier, self-contained problems aimed at students with different backgrounds (like policy or psychology).We think most teams weren't prepared to tackle the AI Alignment Awards challenges. Most teams (77%) chose them initially regardless of their experience, getting stuck quickly.This might have worked better by communicating difficulty more clearly, as well as emphasizing that aiming for incremental progress rather than a complete solution is a better strategy for a beginner's hackathon.As we don't know many people with previous experience in AIS in Chile, we got help from external mentors, which connected remotely to help participants.We think this was a good decision, as participants rated mentor support highly (see below).We collaborated actively with two student governments from our university (the Administration and Economics Student Council and the Engineering Student Council). They helped with funding, logistics and outreach.We think this was an excellent choice, as they provided a much broader platform for outreach and crucial logistics help.We had a great time working with them, and they were eager to work with us again!Things that went well40 people attended in-person and 10 people remotely (through Discord), we were surprised by both the high number of attendants and the preference for in-person participation.We had a total of 13 submitted proposals, much higher than expected.While all proposals were incremental contributions, most were of high quality.Skill level and majors varied significantly, going from relatively advanced CS students to freshmen from other fields (like economics). We were aiming for diversity, so this is a w...]]>
Agustín Covarrubias https://forum.effectivealtruism.org/posts/x9pcT6dvaGKox4PT5/chilean-ais-hackathon-retrospective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Chilean AIS Hackathon Retrospective, published by Agustín Covarrubias on May 9, 2023 on The Effective Altruism Forum.TL;DRWe hosted an AI Safety “Thinkathon” in Chile. We had participation from 40 students with differing skill levels and backgrounds, with groups totalling 13 submissions.We see potential in:Similar introductory events aiming for a broad audienceCollaborating more often with student organizationsLeveraging remote help from external mentorsWe experimented with an alternative naming, having remote mentors, different problem sources, and student organization partnerships, with varying results.We could have improved planning and communicating the difficulty of challenges.IntroductionIn February, we ran the first AI Safety Hackathon in Chile (and possibly in all of South America). This post provides some details about the event, a teaser of some resulting projects and our learnings throughout.Goals and overview of the eventThe hackathon was meant to kick-start our nascent AI Safety Group at UC Chile, generating interest in AI Safety and encouraging people to register for our AGI Safety Fundamentals course group.It ran between the 25th and the 28th of February, the first two days being in-person events and the other two serving as additional time for participants to work on their proposals, with some remote assistance on our part. Participants formed teams of up to four people, and could choose to assist either virtually (through Discord) or in-person (on the first two days).We had help from Apart Research and partial funding from AI Alignment Awards.Things we experimented withAiming for a broad audience, we named the event “Thinkathon” (instead of hackathon) and provided plenty of introductory material alongside the proposed problems.We think this was the right choice, as the desired effect was reflected on the participant demographics (see below).We could have been better at preparing participants. Some participants suggested we could have done an introductory workshop.We incorporated the two problems from the AI Alignment Awards (Goal Misgeneralization and the Shutdown problem), alongside easier, self-contained problems aimed at students with different backgrounds (like policy or psychology).We think most teams weren't prepared to tackle the AI Alignment Awards challenges. Most teams (77%) chose them initially regardless of their experience, getting stuck quickly.This might have worked better by communicating difficulty more clearly, as well as emphasizing that aiming for incremental progress rather than a complete solution is a better strategy for a beginner's hackathon.As we don't know many people with previous experience in AIS in Chile, we got help from external mentors, which connected remotely to help participants.We think this was a good decision, as participants rated mentor support highly (see below).We collaborated actively with two student governments from our university (the Administration and Economics Student Council and the Engineering Student Council). They helped with funding, logistics and outreach.We think this was an excellent choice, as they provided a much broader platform for outreach and crucial logistics help.We had a great time working with them, and they were eager to work with us again!Things that went well40 people attended in-person and 10 people remotely (through Discord), we were surprised by both the high number of attendants and the preference for in-person participation.We had a total of 13 submitted proposals, much higher than expected.While all proposals were incremental contributions, most were of high quality.Skill level and majors varied significantly, going from relatively advanced CS students to freshmen from other fields (like economics). We were aiming for diversity, so this is a w...]]>
Tue, 09 May 2023 14:04:19 +0000 EA - Chilean AIS Hackathon Retrospective by Agustín Covarrubias Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Chilean AIS Hackathon Retrospective, published by Agustín Covarrubias on May 9, 2023 on The Effective Altruism Forum.TL;DRWe hosted an AI Safety “Thinkathon” in Chile. We had participation from 40 students with differing skill levels and backgrounds, with groups totalling 13 submissions.We see potential in:Similar introductory events aiming for a broad audienceCollaborating more often with student organizationsLeveraging remote help from external mentorsWe experimented with an alternative naming, having remote mentors, different problem sources, and student organization partnerships, with varying results.We could have improved planning and communicating the difficulty of challenges.IntroductionIn February, we ran the first AI Safety Hackathon in Chile (and possibly in all of South America). This post provides some details about the event, a teaser of some resulting projects and our learnings throughout.Goals and overview of the eventThe hackathon was meant to kick-start our nascent AI Safety Group at UC Chile, generating interest in AI Safety and encouraging people to register for our AGI Safety Fundamentals course group.It ran between the 25th and the 28th of February, the first two days being in-person events and the other two serving as additional time for participants to work on their proposals, with some remote assistance on our part. Participants formed teams of up to four people, and could choose to assist either virtually (through Discord) or in-person (on the first two days).We had help from Apart Research and partial funding from AI Alignment Awards.Things we experimented withAiming for a broad audience, we named the event “Thinkathon” (instead of hackathon) and provided plenty of introductory material alongside the proposed problems.We think this was the right choice, as the desired effect was reflected on the participant demographics (see below).We could have been better at preparing participants. Some participants suggested we could have done an introductory workshop.We incorporated the two problems from the AI Alignment Awards (Goal Misgeneralization and the Shutdown problem), alongside easier, self-contained problems aimed at students with different backgrounds (like policy or psychology).We think most teams weren't prepared to tackle the AI Alignment Awards challenges. Most teams (77%) chose them initially regardless of their experience, getting stuck quickly.This might have worked better by communicating difficulty more clearly, as well as emphasizing that aiming for incremental progress rather than a complete solution is a better strategy for a beginner's hackathon.As we don't know many people with previous experience in AIS in Chile, we got help from external mentors, which connected remotely to help participants.We think this was a good decision, as participants rated mentor support highly (see below).We collaborated actively with two student governments from our university (the Administration and Economics Student Council and the Engineering Student Council). They helped with funding, logistics and outreach.We think this was an excellent choice, as they provided a much broader platform for outreach and crucial logistics help.We had a great time working with them, and they were eager to work with us again!Things that went well40 people attended in-person and 10 people remotely (through Discord), we were surprised by both the high number of attendants and the preference for in-person participation.We had a total of 13 submitted proposals, much higher than expected.While all proposals were incremental contributions, most were of high quality.Skill level and majors varied significantly, going from relatively advanced CS students to freshmen from other fields (like economics). We were aiming for diversity, so this is a w...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Chilean AIS Hackathon Retrospective, published by Agustín Covarrubias on May 9, 2023 on The Effective Altruism Forum.TL;DRWe hosted an AI Safety “Thinkathon” in Chile. We had participation from 40 students with differing skill levels and backgrounds, with groups totalling 13 submissions.We see potential in:Similar introductory events aiming for a broad audienceCollaborating more often with student organizationsLeveraging remote help from external mentorsWe experimented with an alternative naming, having remote mentors, different problem sources, and student organization partnerships, with varying results.We could have improved planning and communicating the difficulty of challenges.IntroductionIn February, we ran the first AI Safety Hackathon in Chile (and possibly in all of South America). This post provides some details about the event, a teaser of some resulting projects and our learnings throughout.Goals and overview of the eventThe hackathon was meant to kick-start our nascent AI Safety Group at UC Chile, generating interest in AI Safety and encouraging people to register for our AGI Safety Fundamentals course group.It ran between the 25th and the 28th of February, the first two days being in-person events and the other two serving as additional time for participants to work on their proposals, with some remote assistance on our part. Participants formed teams of up to four people, and could choose to assist either virtually (through Discord) or in-person (on the first two days).We had help from Apart Research and partial funding from AI Alignment Awards.Things we experimented withAiming for a broad audience, we named the event “Thinkathon” (instead of hackathon) and provided plenty of introductory material alongside the proposed problems.We think this was the right choice, as the desired effect was reflected on the participant demographics (see below).We could have been better at preparing participants. Some participants suggested we could have done an introductory workshop.We incorporated the two problems from the AI Alignment Awards (Goal Misgeneralization and the Shutdown problem), alongside easier, self-contained problems aimed at students with different backgrounds (like policy or psychology).We think most teams weren't prepared to tackle the AI Alignment Awards challenges. Most teams (77%) chose them initially regardless of their experience, getting stuck quickly.This might have worked better by communicating difficulty more clearly, as well as emphasizing that aiming for incremental progress rather than a complete solution is a better strategy for a beginner's hackathon.As we don't know many people with previous experience in AIS in Chile, we got help from external mentors, which connected remotely to help participants.We think this was a good decision, as participants rated mentor support highly (see below).We collaborated actively with two student governments from our university (the Administration and Economics Student Council and the Engineering Student Council). They helped with funding, logistics and outreach.We think this was an excellent choice, as they provided a much broader platform for outreach and crucial logistics help.We had a great time working with them, and they were eager to work with us again!Things that went well40 people attended in-person and 10 people remotely (through Discord), we were surprised by both the high number of attendants and the preference for in-person participation.We had a total of 13 submitted proposals, much higher than expected.While all proposals were incremental contributions, most were of high quality.Skill level and majors varied significantly, going from relatively advanced CS students to freshmen from other fields (like economics). We were aiming for diversity, so this is a w...]]>
Agustín Covarrubias https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:03 None full 5891
3KAuAS2shyDwnjzNa_NL_EA_EA EA - Predictable updating about AI risk by Joe Carlsmith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predictable updating about AI risk, published by Joe Carlsmith on May 8, 2023 on The Effective Altruism Forum.(Cross-posted from my website. Podcast version here, or search "Joe Carlsmith Audio" on your podcast app.)"This present moment used to be the unimaginable future."Stewart Brand1. IntroductionHere’s a pattern you may have noticed. A new frontier AI, like GPT-4, gets released. People play with it. It’s better than the previous AIs, and many people are impressed. And as a result, many people who weren’t worried about existential risk from misaligned AI (hereafter: “AI risk”) get much more worried.Now, if these people didn’t expect AI to get so much better so soon, such a pattern can make sense. And so, too, if they got other unexpected evidence for AI risk – for example, concerned experts signing letters and quitting their jobs.But if you’re a good Bayesian, and you currently put low probability on existential catastrophe from misaligned AI (hereafter: “AI doom”), you probably shouldn’t be able to predict that this pattern will happen to you in the future. When GPT-5 comes out, for example, it probably shouldn’t be the case that your probability on doom goes up a bunch. Similarly, it probably shouldn’t be the case that if you could see, now, the sorts of AI systems we’ll have in 2030, or 2050, that you’d get a lot more worried about doom than you are now.But I worry that we’re going to see this pattern anyway. Indeed, I’ve seen it myself. I’m working on fixing the problem. And I think we, as a collective discourse, should try to fix it, too. In particular: I think we’re in a position to predict, now, that AI is going to get a lot better in the coming years. I think we should worry, now, accordingly, without having to see these much-better AIs up close. If we do this right, then in expectation, when we confront GPT-5 (or GPT-6, or Agent-GPT-8, or Chaos-GPT-10) in the flesh, in all the concreteness and detail and not-a-game-ness of the real world, we’ll be just as scared as we are now.This essay is about what “doing this right” looks like. In particular: part of what happens, when you meet something in the flesh, is that it “seems more real” at a gut level. So the essay is partly a reflection on the epistemology of guts: of visceral vs. abstract; “up close” vs. “far away.” My views on this have changed over the years: and in particular, I now put less weight on my gut’s (comparatively skeptical) views about doom.But the essay is also about grokking some basic Bayesianism about future evidence, dispelling a common misconception about it (namely: that directional updates shouldn’t be predictable in general), and pointing at some of the constraints it places on our beliefs over time, especially with respect to stuff we’re currently skeptical or dismissive about. For example, at least in theory: you should never think it >50% that your credence on something will later double; never >10% that it will later 10x, and so forth. So if you’re currently e.g. 1% or less on AI doom, you should think it’s less than 50% likely that you’ll ever be at 2%; less than 10% likely that you’ll ever be at 10%, and so on. And if your credence is very small, or if you’re acting dismissive, you should be very confident you’ll never end up worried. Are you?I also discuss when, exactly, it’s problematic to update in predictable directions. My sense is that generally, you should expect to update in the direction of the truth as the evidence comes in; and thus, that people who think AI doom unlikely should expect to feel less worried as time goes on (such that consistently getting more worried is a red flag). But in the case of AI risk, I think at least some non-crazy views should actually expect to get more worried over time, even while being fairly non-worried now. In particular, i...]]>
Joe Carlsmith https://forum.effectivealtruism.org/posts/3KAuAS2shyDwnjzNa/predictable-updating-about-ai-risk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predictable updating about AI risk, published by Joe Carlsmith on May 8, 2023 on The Effective Altruism Forum.(Cross-posted from my website. Podcast version here, or search "Joe Carlsmith Audio" on your podcast app.)"This present moment used to be the unimaginable future."Stewart Brand1. IntroductionHere’s a pattern you may have noticed. A new frontier AI, like GPT-4, gets released. People play with it. It’s better than the previous AIs, and many people are impressed. And as a result, many people who weren’t worried about existential risk from misaligned AI (hereafter: “AI risk”) get much more worried.Now, if these people didn’t expect AI to get so much better so soon, such a pattern can make sense. And so, too, if they got other unexpected evidence for AI risk – for example, concerned experts signing letters and quitting their jobs.But if you’re a good Bayesian, and you currently put low probability on existential catastrophe from misaligned AI (hereafter: “AI doom”), you probably shouldn’t be able to predict that this pattern will happen to you in the future. When GPT-5 comes out, for example, it probably shouldn’t be the case that your probability on doom goes up a bunch. Similarly, it probably shouldn’t be the case that if you could see, now, the sorts of AI systems we’ll have in 2030, or 2050, that you’d get a lot more worried about doom than you are now.But I worry that we’re going to see this pattern anyway. Indeed, I’ve seen it myself. I’m working on fixing the problem. And I think we, as a collective discourse, should try to fix it, too. In particular: I think we’re in a position to predict, now, that AI is going to get a lot better in the coming years. I think we should worry, now, accordingly, without having to see these much-better AIs up close. If we do this right, then in expectation, when we confront GPT-5 (or GPT-6, or Agent-GPT-8, or Chaos-GPT-10) in the flesh, in all the concreteness and detail and not-a-game-ness of the real world, we’ll be just as scared as we are now.This essay is about what “doing this right” looks like. In particular: part of what happens, when you meet something in the flesh, is that it “seems more real” at a gut level. So the essay is partly a reflection on the epistemology of guts: of visceral vs. abstract; “up close” vs. “far away.” My views on this have changed over the years: and in particular, I now put less weight on my gut’s (comparatively skeptical) views about doom.But the essay is also about grokking some basic Bayesianism about future evidence, dispelling a common misconception about it (namely: that directional updates shouldn’t be predictable in general), and pointing at some of the constraints it places on our beliefs over time, especially with respect to stuff we’re currently skeptical or dismissive about. For example, at least in theory: you should never think it >50% that your credence on something will later double; never >10% that it will later 10x, and so forth. So if you’re currently e.g. 1% or less on AI doom, you should think it’s less than 50% likely that you’ll ever be at 2%; less than 10% likely that you’ll ever be at 10%, and so on. And if your credence is very small, or if you’re acting dismissive, you should be very confident you’ll never end up worried. Are you?I also discuss when, exactly, it’s problematic to update in predictable directions. My sense is that generally, you should expect to update in the direction of the truth as the evidence comes in; and thus, that people who think AI doom unlikely should expect to feel less worried as time goes on (such that consistently getting more worried is a red flag). But in the case of AI risk, I think at least some non-crazy views should actually expect to get more worried over time, even while being fairly non-worried now. In particular, i...]]>
Tue, 09 May 2023 00:29:00 +0000 EA - Predictable updating about AI risk by Joe Carlsmith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predictable updating about AI risk, published by Joe Carlsmith on May 8, 2023 on The Effective Altruism Forum.(Cross-posted from my website. Podcast version here, or search "Joe Carlsmith Audio" on your podcast app.)"This present moment used to be the unimaginable future."Stewart Brand1. IntroductionHere’s a pattern you may have noticed. A new frontier AI, like GPT-4, gets released. People play with it. It’s better than the previous AIs, and many people are impressed. And as a result, many people who weren’t worried about existential risk from misaligned AI (hereafter: “AI risk”) get much more worried.Now, if these people didn’t expect AI to get so much better so soon, such a pattern can make sense. And so, too, if they got other unexpected evidence for AI risk – for example, concerned experts signing letters and quitting their jobs.But if you’re a good Bayesian, and you currently put low probability on existential catastrophe from misaligned AI (hereafter: “AI doom”), you probably shouldn’t be able to predict that this pattern will happen to you in the future. When GPT-5 comes out, for example, it probably shouldn’t be the case that your probability on doom goes up a bunch. Similarly, it probably shouldn’t be the case that if you could see, now, the sorts of AI systems we’ll have in 2030, or 2050, that you’d get a lot more worried about doom than you are now.But I worry that we’re going to see this pattern anyway. Indeed, I’ve seen it myself. I’m working on fixing the problem. And I think we, as a collective discourse, should try to fix it, too. In particular: I think we’re in a position to predict, now, that AI is going to get a lot better in the coming years. I think we should worry, now, accordingly, without having to see these much-better AIs up close. If we do this right, then in expectation, when we confront GPT-5 (or GPT-6, or Agent-GPT-8, or Chaos-GPT-10) in the flesh, in all the concreteness and detail and not-a-game-ness of the real world, we’ll be just as scared as we are now.This essay is about what “doing this right” looks like. In particular: part of what happens, when you meet something in the flesh, is that it “seems more real” at a gut level. So the essay is partly a reflection on the epistemology of guts: of visceral vs. abstract; “up close” vs. “far away.” My views on this have changed over the years: and in particular, I now put less weight on my gut’s (comparatively skeptical) views about doom.But the essay is also about grokking some basic Bayesianism about future evidence, dispelling a common misconception about it (namely: that directional updates shouldn’t be predictable in general), and pointing at some of the constraints it places on our beliefs over time, especially with respect to stuff we’re currently skeptical or dismissive about. For example, at least in theory: you should never think it >50% that your credence on something will later double; never >10% that it will later 10x, and so forth. So if you’re currently e.g. 1% or less on AI doom, you should think it’s less than 50% likely that you’ll ever be at 2%; less than 10% likely that you’ll ever be at 10%, and so on. And if your credence is very small, or if you’re acting dismissive, you should be very confident you’ll never end up worried. Are you?I also discuss when, exactly, it’s problematic to update in predictable directions. My sense is that generally, you should expect to update in the direction of the truth as the evidence comes in; and thus, that people who think AI doom unlikely should expect to feel less worried as time goes on (such that consistently getting more worried is a red flag). But in the case of AI risk, I think at least some non-crazy views should actually expect to get more worried over time, even while being fairly non-worried now. In particular, i...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predictable updating about AI risk, published by Joe Carlsmith on May 8, 2023 on The Effective Altruism Forum.(Cross-posted from my website. Podcast version here, or search "Joe Carlsmith Audio" on your podcast app.)"This present moment used to be the unimaginable future."Stewart Brand1. IntroductionHere’s a pattern you may have noticed. A new frontier AI, like GPT-4, gets released. People play with it. It’s better than the previous AIs, and many people are impressed. And as a result, many people who weren’t worried about existential risk from misaligned AI (hereafter: “AI risk”) get much more worried.Now, if these people didn’t expect AI to get so much better so soon, such a pattern can make sense. And so, too, if they got other unexpected evidence for AI risk – for example, concerned experts signing letters and quitting their jobs.But if you’re a good Bayesian, and you currently put low probability on existential catastrophe from misaligned AI (hereafter: “AI doom”), you probably shouldn’t be able to predict that this pattern will happen to you in the future. When GPT-5 comes out, for example, it probably shouldn’t be the case that your probability on doom goes up a bunch. Similarly, it probably shouldn’t be the case that if you could see, now, the sorts of AI systems we’ll have in 2030, or 2050, that you’d get a lot more worried about doom than you are now.But I worry that we’re going to see this pattern anyway. Indeed, I’ve seen it myself. I’m working on fixing the problem. And I think we, as a collective discourse, should try to fix it, too. In particular: I think we’re in a position to predict, now, that AI is going to get a lot better in the coming years. I think we should worry, now, accordingly, without having to see these much-better AIs up close. If we do this right, then in expectation, when we confront GPT-5 (or GPT-6, or Agent-GPT-8, or Chaos-GPT-10) in the flesh, in all the concreteness and detail and not-a-game-ness of the real world, we’ll be just as scared as we are now.This essay is about what “doing this right” looks like. In particular: part of what happens, when you meet something in the flesh, is that it “seems more real” at a gut level. So the essay is partly a reflection on the epistemology of guts: of visceral vs. abstract; “up close” vs. “far away.” My views on this have changed over the years: and in particular, I now put less weight on my gut’s (comparatively skeptical) views about doom.But the essay is also about grokking some basic Bayesianism about future evidence, dispelling a common misconception about it (namely: that directional updates shouldn’t be predictable in general), and pointing at some of the constraints it places on our beliefs over time, especially with respect to stuff we’re currently skeptical or dismissive about. For example, at least in theory: you should never think it >50% that your credence on something will later double; never >10% that it will later 10x, and so forth. So if you’re currently e.g. 1% or less on AI doom, you should think it’s less than 50% likely that you’ll ever be at 2%; less than 10% likely that you’ll ever be at 10%, and so on. And if your credence is very small, or if you’re acting dismissive, you should be very confident you’ll never end up worried. Are you?I also discuss when, exactly, it’s problematic to update in predictable directions. My sense is that generally, you should expect to update in the direction of the truth as the evidence comes in; and thus, that people who think AI doom unlikely should expect to feel less worried as time goes on (such that consistently getting more worried is a red flag). But in the case of AI risk, I think at least some non-crazy views should actually expect to get more worried over time, even while being fairly non-worried now. In particular, i...]]>
Joe Carlsmith https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:00:56 None full 5885
D8GitXAMt7deG8tBc_NL_EA_EA EA - How quickly AI could transform the world (Tom Davidson on The 80,000 Hours Podcast) by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How quickly AI could transform the world (Tom Davidson on The 80,000 Hours Podcast), published by 80000 Hours on May 8, 2023 on The Effective Altruism Forum.Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Tom Davidson on how quickly AI could transform the world.You can click through for the audio, a full transcript and related links. Below is the episode summary and some key excerpts.Episode SummaryBy the time that the AIs can do 20% of cognitive tasks in the broader economy, maybe they can already do 40% or 50% of tasks specifically in AI R&D. So they could have already really started accelerating the pace of progress by the time we get to that 20% economic impact threshold.At that point you could easily imagine that really it’s just one year, you give them a 10x bigger brain. That’s like going from chimps to humans — and then doing that jump again. That could easily be enough to go from [AIs being able to do] 20% [of cognitive tasks] to 100%, just intuitively. I think that’s kind of the default, really.Tom DavidsonIt’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from.For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before?You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.”But this 1,000x yearly improvement is a prediction based on real economic models created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least consider the idea that the world is about to get — at a minimum — incredibly weird.As a teaser, consider the following:Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world.You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it’ll take decades.But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research.And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves.And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly.To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore’s An Inconvenient Truth, and your first chance to play the Nintendo Wii.Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now.Wild.Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours.Luisa and Tom also discuss:How we might go from GPT-4 to AI disasterTom’s journey from finding AI risk to be kind of scary to really scaryWhether international cooperation or an anti-AI social movement can slow AI progress downWhy it might take just a few years to go from pretty good AI to superhum...]]>
80000 Hours https://forum.effectivealtruism.org/posts/D8GitXAMt7deG8tBc/how-quickly-ai-could-transform-the-world-tom-davidson-on-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How quickly AI could transform the world (Tom Davidson on The 80,000 Hours Podcast), published by 80000 Hours on May 8, 2023 on The Effective Altruism Forum.Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Tom Davidson on how quickly AI could transform the world.You can click through for the audio, a full transcript and related links. Below is the episode summary and some key excerpts.Episode SummaryBy the time that the AIs can do 20% of cognitive tasks in the broader economy, maybe they can already do 40% or 50% of tasks specifically in AI R&D. So they could have already really started accelerating the pace of progress by the time we get to that 20% economic impact threshold.At that point you could easily imagine that really it’s just one year, you give them a 10x bigger brain. That’s like going from chimps to humans — and then doing that jump again. That could easily be enough to go from [AIs being able to do] 20% [of cognitive tasks] to 100%, just intuitively. I think that’s kind of the default, really.Tom DavidsonIt’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from.For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before?You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.”But this 1,000x yearly improvement is a prediction based on real economic models created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least consider the idea that the world is about to get — at a minimum — incredibly weird.As a teaser, consider the following:Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world.You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it’ll take decades.But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research.And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves.And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly.To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore’s An Inconvenient Truth, and your first chance to play the Nintendo Wii.Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now.Wild.Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours.Luisa and Tom also discuss:How we might go from GPT-4 to AI disasterTom’s journey from finding AI risk to be kind of scary to really scaryWhether international cooperation or an anti-AI social movement can slow AI progress downWhy it might take just a few years to go from pretty good AI to superhum...]]>
Mon, 08 May 2023 18:44:29 +0000 EA - How quickly AI could transform the world (Tom Davidson on The 80,000 Hours Podcast) by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How quickly AI could transform the world (Tom Davidson on The 80,000 Hours Podcast), published by 80000 Hours on May 8, 2023 on The Effective Altruism Forum.Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Tom Davidson on how quickly AI could transform the world.You can click through for the audio, a full transcript and related links. Below is the episode summary and some key excerpts.Episode SummaryBy the time that the AIs can do 20% of cognitive tasks in the broader economy, maybe they can already do 40% or 50% of tasks specifically in AI R&D. So they could have already really started accelerating the pace of progress by the time we get to that 20% economic impact threshold.At that point you could easily imagine that really it’s just one year, you give them a 10x bigger brain. That’s like going from chimps to humans — and then doing that jump again. That could easily be enough to go from [AIs being able to do] 20% [of cognitive tasks] to 100%, just intuitively. I think that’s kind of the default, really.Tom DavidsonIt’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from.For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before?You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.”But this 1,000x yearly improvement is a prediction based on real economic models created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least consider the idea that the world is about to get — at a minimum — incredibly weird.As a teaser, consider the following:Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world.You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it’ll take decades.But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research.And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves.And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly.To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore’s An Inconvenient Truth, and your first chance to play the Nintendo Wii.Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now.Wild.Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours.Luisa and Tom also discuss:How we might go from GPT-4 to AI disasterTom’s journey from finding AI risk to be kind of scary to really scaryWhether international cooperation or an anti-AI social movement can slow AI progress downWhy it might take just a few years to go from pretty good AI to superhum...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How quickly AI could transform the world (Tom Davidson on The 80,000 Hours Podcast), published by 80000 Hours on May 8, 2023 on The Effective Altruism Forum.Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Tom Davidson on how quickly AI could transform the world.You can click through for the audio, a full transcript and related links. Below is the episode summary and some key excerpts.Episode SummaryBy the time that the AIs can do 20% of cognitive tasks in the broader economy, maybe they can already do 40% or 50% of tasks specifically in AI R&D. So they could have already really started accelerating the pace of progress by the time we get to that 20% economic impact threshold.At that point you could easily imagine that really it’s just one year, you give them a 10x bigger brain. That’s like going from chimps to humans — and then doing that jump again. That could easily be enough to go from [AIs being able to do] 20% [of cognitive tasks] to 100%, just intuitively. I think that’s kind of the default, really.Tom DavidsonIt’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from.For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before?You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.”But this 1,000x yearly improvement is a prediction based on real economic models created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least consider the idea that the world is about to get — at a minimum — incredibly weird.As a teaser, consider the following:Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world.You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it’ll take decades.But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research.And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves.And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly.To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore’s An Inconvenient Truth, and your first chance to play the Nintendo Wii.Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now.Wild.Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours.Luisa and Tom also discuss:How we might go from GPT-4 to AI disasterTom’s journey from finding AI risk to be kind of scary to really scaryWhether international cooperation or an anti-AI social movement can slow AI progress downWhy it might take just a few years to go from pretty good AI to superhum...]]>
80000 Hours https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 23:27 None full 5879
sK7neZ9rHGEL5JP7q_NL_EA_EA EA - EA Anywhere Slack: consolidating professional and affinity groups by Sasha Berezhnoi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Anywhere Slack: consolidating professional and affinity groups, published by Sasha Berezhnoi on May 8, 2023 on The Effective Altruism Forum.SummaryIn an effort to improve communication infrastructure, the EA Anywhere Slack workspace has recently undergone major changes to accommodate several professional and affiliation Slack workspaces like EA Entrepreneurs, EA Global Discussion, EA Creatives & Communicators, and more. The project was initiated by Pineapple Operations because the old structure was inefficient and overwhelming - others have made similar observations. We believe making it easier to track discussions and reducing the number of workspaces will increase activity, avoid information loss and prevent duplicated communications.If you have a Slack workspace you’d like to merge with EA Anywhere’s, reach out to us!Why consolidate?With too many workspaces, the infrastructure of the EA movement becomes increasingly overwhelming and confusing, and it’s difficult to keep up with new workspaces. Having a central Slack gives people access to a broader range of communities at once.People don’t have the time or energy to check multiple Slacks, which results in low activity. Some discussions just don’t reach the critical mass, and valuable connections are not happening.Most of the workspaces were on a free plan that hid messages and files older than 90 days. Consolidation around a few paid workspaces prevents groups from losing historical information.There is overlapping membership between Slacks (we estimate between 10-50%), so consolidation makes it easier to track communications.These reasons were true for the dozens of Slack workspaces we identified with low activity and limited facilitation.Why EA Anywhere?EA Anywhere is an online discussion space for the global EA community and a touchpoint for people without local groups nearby. It plays an important role in supporting other virtual ecosystems in EA: we host EAGxVirtual conferences, provide support and share knowledge with other online groups.EA Anywhere Slack is on a paid Pro plan and has active facilitation from a full-time community organizer, which makes it a good choice for this project. We can provide support for groups joining the space, including events promotion, Zoom accounts, Slack integrations, and knowledge-sharing calls.There is a demand for informal conversation spaces and networking that the EA Forum doesn’t currently provide. We are inspired by Slack-based communities that thrive and create value for thousands of members without becoming too overwhelming.Progress to dateWe reached out to workspace owners with our proposal and received positive feedback. In most cases, we merged the users and message history.List of Slack workspaces we have already merged or consolidated (thanks to the admins of these groups!):EA Global DiscussionsEA EntrepreneursEA Creatives & CommunicatorsEA GeneralistsEA HousingEA Tech NetworkPublic Interest TechEA Project ManagementEA Supply Chain LogisticsEA Math and PhysicsProduct in EAEffective EnvironmentalismWe already see the benefits of increased coordination:Members are engaging with other groups and projects that have been locked into small workspaces.Members are more willing to ask questions and ask for advice. Activity in the #all-questions-welcome channel increased four-fold compared to the previous three-month average, with both new and former members engaging in discussions.Organizers have an easier way to promote opportunities and events.As of April 30th, a month after the merger, the initial hype subsided but the activity is still twice as high:90 members who posted weekly (44 before)360 weekly active members (210 before the merger)We will continue using Slack Analytics to track the activity and send a follow-up user surve...]]>
Sasha Berezhnoi https://forum.effectivealtruism.org/posts/sK7neZ9rHGEL5JP7q/ea-anywhere-slack-consolidating-professional-and-affinity Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Anywhere Slack: consolidating professional and affinity groups, published by Sasha Berezhnoi on May 8, 2023 on The Effective Altruism Forum.SummaryIn an effort to improve communication infrastructure, the EA Anywhere Slack workspace has recently undergone major changes to accommodate several professional and affiliation Slack workspaces like EA Entrepreneurs, EA Global Discussion, EA Creatives & Communicators, and more. The project was initiated by Pineapple Operations because the old structure was inefficient and overwhelming - others have made similar observations. We believe making it easier to track discussions and reducing the number of workspaces will increase activity, avoid information loss and prevent duplicated communications.If you have a Slack workspace you’d like to merge with EA Anywhere’s, reach out to us!Why consolidate?With too many workspaces, the infrastructure of the EA movement becomes increasingly overwhelming and confusing, and it’s difficult to keep up with new workspaces. Having a central Slack gives people access to a broader range of communities at once.People don’t have the time or energy to check multiple Slacks, which results in low activity. Some discussions just don’t reach the critical mass, and valuable connections are not happening.Most of the workspaces were on a free plan that hid messages and files older than 90 days. Consolidation around a few paid workspaces prevents groups from losing historical information.There is overlapping membership between Slacks (we estimate between 10-50%), so consolidation makes it easier to track communications.These reasons were true for the dozens of Slack workspaces we identified with low activity and limited facilitation.Why EA Anywhere?EA Anywhere is an online discussion space for the global EA community and a touchpoint for people without local groups nearby. It plays an important role in supporting other virtual ecosystems in EA: we host EAGxVirtual conferences, provide support and share knowledge with other online groups.EA Anywhere Slack is on a paid Pro plan and has active facilitation from a full-time community organizer, which makes it a good choice for this project. We can provide support for groups joining the space, including events promotion, Zoom accounts, Slack integrations, and knowledge-sharing calls.There is a demand for informal conversation spaces and networking that the EA Forum doesn’t currently provide. We are inspired by Slack-based communities that thrive and create value for thousands of members without becoming too overwhelming.Progress to dateWe reached out to workspace owners with our proposal and received positive feedback. In most cases, we merged the users and message history.List of Slack workspaces we have already merged or consolidated (thanks to the admins of these groups!):EA Global DiscussionsEA EntrepreneursEA Creatives & CommunicatorsEA GeneralistsEA HousingEA Tech NetworkPublic Interest TechEA Project ManagementEA Supply Chain LogisticsEA Math and PhysicsProduct in EAEffective EnvironmentalismWe already see the benefits of increased coordination:Members are engaging with other groups and projects that have been locked into small workspaces.Members are more willing to ask questions and ask for advice. Activity in the #all-questions-welcome channel increased four-fold compared to the previous three-month average, with both new and former members engaging in discussions.Organizers have an easier way to promote opportunities and events.As of April 30th, a month after the merger, the initial hype subsided but the activity is still twice as high:90 members who posted weekly (44 before)360 weekly active members (210 before the merger)We will continue using Slack Analytics to track the activity and send a follow-up user surve...]]>
Mon, 08 May 2023 17:37:59 +0000 EA - EA Anywhere Slack: consolidating professional and affinity groups by Sasha Berezhnoi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Anywhere Slack: consolidating professional and affinity groups, published by Sasha Berezhnoi on May 8, 2023 on The Effective Altruism Forum.SummaryIn an effort to improve communication infrastructure, the EA Anywhere Slack workspace has recently undergone major changes to accommodate several professional and affiliation Slack workspaces like EA Entrepreneurs, EA Global Discussion, EA Creatives & Communicators, and more. The project was initiated by Pineapple Operations because the old structure was inefficient and overwhelming - others have made similar observations. We believe making it easier to track discussions and reducing the number of workspaces will increase activity, avoid information loss and prevent duplicated communications.If you have a Slack workspace you’d like to merge with EA Anywhere’s, reach out to us!Why consolidate?With too many workspaces, the infrastructure of the EA movement becomes increasingly overwhelming and confusing, and it’s difficult to keep up with new workspaces. Having a central Slack gives people access to a broader range of communities at once.People don’t have the time or energy to check multiple Slacks, which results in low activity. Some discussions just don’t reach the critical mass, and valuable connections are not happening.Most of the workspaces were on a free plan that hid messages and files older than 90 days. Consolidation around a few paid workspaces prevents groups from losing historical information.There is overlapping membership between Slacks (we estimate between 10-50%), so consolidation makes it easier to track communications.These reasons were true for the dozens of Slack workspaces we identified with low activity and limited facilitation.Why EA Anywhere?EA Anywhere is an online discussion space for the global EA community and a touchpoint for people without local groups nearby. It plays an important role in supporting other virtual ecosystems in EA: we host EAGxVirtual conferences, provide support and share knowledge with other online groups.EA Anywhere Slack is on a paid Pro plan and has active facilitation from a full-time community organizer, which makes it a good choice for this project. We can provide support for groups joining the space, including events promotion, Zoom accounts, Slack integrations, and knowledge-sharing calls.There is a demand for informal conversation spaces and networking that the EA Forum doesn’t currently provide. We are inspired by Slack-based communities that thrive and create value for thousands of members without becoming too overwhelming.Progress to dateWe reached out to workspace owners with our proposal and received positive feedback. In most cases, we merged the users and message history.List of Slack workspaces we have already merged or consolidated (thanks to the admins of these groups!):EA Global DiscussionsEA EntrepreneursEA Creatives & CommunicatorsEA GeneralistsEA HousingEA Tech NetworkPublic Interest TechEA Project ManagementEA Supply Chain LogisticsEA Math and PhysicsProduct in EAEffective EnvironmentalismWe already see the benefits of increased coordination:Members are engaging with other groups and projects that have been locked into small workspaces.Members are more willing to ask questions and ask for advice. Activity in the #all-questions-welcome channel increased four-fold compared to the previous three-month average, with both new and former members engaging in discussions.Organizers have an easier way to promote opportunities and events.As of April 30th, a month after the merger, the initial hype subsided but the activity is still twice as high:90 members who posted weekly (44 before)360 weekly active members (210 before the merger)We will continue using Slack Analytics to track the activity and send a follow-up user surve...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Anywhere Slack: consolidating professional and affinity groups, published by Sasha Berezhnoi on May 8, 2023 on The Effective Altruism Forum.SummaryIn an effort to improve communication infrastructure, the EA Anywhere Slack workspace has recently undergone major changes to accommodate several professional and affiliation Slack workspaces like EA Entrepreneurs, EA Global Discussion, EA Creatives & Communicators, and more. The project was initiated by Pineapple Operations because the old structure was inefficient and overwhelming - others have made similar observations. We believe making it easier to track discussions and reducing the number of workspaces will increase activity, avoid information loss and prevent duplicated communications.If you have a Slack workspace you’d like to merge with EA Anywhere’s, reach out to us!Why consolidate?With too many workspaces, the infrastructure of the EA movement becomes increasingly overwhelming and confusing, and it’s difficult to keep up with new workspaces. Having a central Slack gives people access to a broader range of communities at once.People don’t have the time or energy to check multiple Slacks, which results in low activity. Some discussions just don’t reach the critical mass, and valuable connections are not happening.Most of the workspaces were on a free plan that hid messages and files older than 90 days. Consolidation around a few paid workspaces prevents groups from losing historical information.There is overlapping membership between Slacks (we estimate between 10-50%), so consolidation makes it easier to track communications.These reasons were true for the dozens of Slack workspaces we identified with low activity and limited facilitation.Why EA Anywhere?EA Anywhere is an online discussion space for the global EA community and a touchpoint for people without local groups nearby. It plays an important role in supporting other virtual ecosystems in EA: we host EAGxVirtual conferences, provide support and share knowledge with other online groups.EA Anywhere Slack is on a paid Pro plan and has active facilitation from a full-time community organizer, which makes it a good choice for this project. We can provide support for groups joining the space, including events promotion, Zoom accounts, Slack integrations, and knowledge-sharing calls.There is a demand for informal conversation spaces and networking that the EA Forum doesn’t currently provide. We are inspired by Slack-based communities that thrive and create value for thousands of members without becoming too overwhelming.Progress to dateWe reached out to workspace owners with our proposal and received positive feedback. In most cases, we merged the users and message history.List of Slack workspaces we have already merged or consolidated (thanks to the admins of these groups!):EA Global DiscussionsEA EntrepreneursEA Creatives & CommunicatorsEA GeneralistsEA HousingEA Tech NetworkPublic Interest TechEA Project ManagementEA Supply Chain LogisticsEA Math and PhysicsProduct in EAEffective EnvironmentalismWe already see the benefits of increased coordination:Members are engaging with other groups and projects that have been locked into small workspaces.Members are more willing to ask questions and ask for advice. Activity in the #all-questions-welcome channel increased four-fold compared to the previous three-month average, with both new and former members engaging in discussions.Organizers have an easier way to promote opportunities and events.As of April 30th, a month after the merger, the initial hype subsided but the activity is still twice as high:90 members who posted weekly (44 before)360 weekly active members (210 before the merger)We will continue using Slack Analytics to track the activity and send a follow-up user surve...]]>
Sasha Berezhnoi https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:59 None full 5878
fckBKDvgE5JjDghe4_NL_EA_EA EA - The Legend of Dr. Oguntola Sapara by jai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Legend of Dr. Oguntola Sapara, published by jai on May 8, 2023 on The Effective Altruism Forum.This is a true story.At the dawn of the long war's final century, the tide was turning. The Abomination was in retreat, driven back by warriors wielding the Lance of Jenner. But, as in all wars, some battles could not be fought so directly. Battles waged in cunning and lies. Battles where the enemy lurked in shadow. In that darkness an unholy alliance was forged between the Pox Abomination and a conspiracy of traitors to the Yoruba people.They named it "Sopona", gave it a face, and proclaimed themselves the Abomination’s priests. They claimed that they alone could intercede with the Abomination on behalf of innocent victims. To cross them, they said, was to incur Sopona's wrath. They would unleash the Abomination upon any who dared oppose them - and sometimes they would simply inflict Sopona's torture indiscriminately. Amid death and devastation their victims would beg the Priests for help, further cementing their grip on power. And all the while they kept Jenner's Lance at bay, for their power rested on fear of the Abomination, and should it be slain their power, too, would come to an end.They operated in secrecy: the better to obscure their lies, and the better to hide from those who would challenge it. Through blackmail and terror, they maintained their iron grip for generations. None dared utter "Sopona" lest they invoke its wrath - and so even the true name was hidden.Every measure by every authority failed to contain the Abomination. They could never understand why, for they were blind to the enemy's allies. The Deathly Priests and their twisted methods were beyond the grasp of governments, warriors, and weapons. Here the global campaign could not reach. Here, harbored by its murderous allies, the Abomination reigned, and the Yoruba people resigned themselves to an abominable god against which there seemed no hope.It is inadvisable to try to hide from humans. They are curious, relentless, ruthless creatures, fearless when determined and cunning as well. And none were more human than Dr. Oguntola Sapara.Oguntola was a proud child of the Yoruba people. His father, born in chains, together with his mother, raised a family of prodigies: not only Oguntula, but his brother Alexander and his sister Clementina. But Clementina's story was all too short, for when she was to bring life into the world, she was instead taken by death.There are no records of how Oguntola felt that day; All we know is that this was the moment that Oguntula dedicated his life to defending the innocent from the inhuman evils of the world, to master the protective arts and wield them against any who would dare threaten his people.For years he studied, and toiled, and healed, growing ever stronger in the art through talent and sheer force of will. Ten years on his quest took him across the seas to study in a far away land, and here yet more obstacles greeted him. For among the practitioners of the art were counted a great many fools. They would forfeit the privilege of working alongside one of humanity's best for the most vapid and meaningless of reasons, and worse still actively stymied his efforts in all things lest their foolishness be revealed for the lie it was.But Oguntola persisted, surmounting every obstacle lesser humans would set before him. In time he not only prevailed, but he proved himself among the greatest of the practitioners. He was recognized as a master of the art, and elected to the Royal Institute of healers.(In the midst of everything, he even assisted the legendary truth-seeker Ida Wells in her crusades against evil and ignorance - but that is another story.)His training complete and his mastery assured, Dr. Oguntola Sapara returned to Lagos to confront his true e...]]>
jai https://forum.effectivealtruism.org/posts/fckBKDvgE5JjDghe4/the-legend-of-dr-oguntola-sapara Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Legend of Dr. Oguntola Sapara, published by jai on May 8, 2023 on The Effective Altruism Forum.This is a true story.At the dawn of the long war's final century, the tide was turning. The Abomination was in retreat, driven back by warriors wielding the Lance of Jenner. But, as in all wars, some battles could not be fought so directly. Battles waged in cunning and lies. Battles where the enemy lurked in shadow. In that darkness an unholy alliance was forged between the Pox Abomination and a conspiracy of traitors to the Yoruba people.They named it "Sopona", gave it a face, and proclaimed themselves the Abomination’s priests. They claimed that they alone could intercede with the Abomination on behalf of innocent victims. To cross them, they said, was to incur Sopona's wrath. They would unleash the Abomination upon any who dared oppose them - and sometimes they would simply inflict Sopona's torture indiscriminately. Amid death and devastation their victims would beg the Priests for help, further cementing their grip on power. And all the while they kept Jenner's Lance at bay, for their power rested on fear of the Abomination, and should it be slain their power, too, would come to an end.They operated in secrecy: the better to obscure their lies, and the better to hide from those who would challenge it. Through blackmail and terror, they maintained their iron grip for generations. None dared utter "Sopona" lest they invoke its wrath - and so even the true name was hidden.Every measure by every authority failed to contain the Abomination. They could never understand why, for they were blind to the enemy's allies. The Deathly Priests and their twisted methods were beyond the grasp of governments, warriors, and weapons. Here the global campaign could not reach. Here, harbored by its murderous allies, the Abomination reigned, and the Yoruba people resigned themselves to an abominable god against which there seemed no hope.It is inadvisable to try to hide from humans. They are curious, relentless, ruthless creatures, fearless when determined and cunning as well. And none were more human than Dr. Oguntola Sapara.Oguntola was a proud child of the Yoruba people. His father, born in chains, together with his mother, raised a family of prodigies: not only Oguntula, but his brother Alexander and his sister Clementina. But Clementina's story was all too short, for when she was to bring life into the world, she was instead taken by death.There are no records of how Oguntola felt that day; All we know is that this was the moment that Oguntula dedicated his life to defending the innocent from the inhuman evils of the world, to master the protective arts and wield them against any who would dare threaten his people.For years he studied, and toiled, and healed, growing ever stronger in the art through talent and sheer force of will. Ten years on his quest took him across the seas to study in a far away land, and here yet more obstacles greeted him. For among the practitioners of the art were counted a great many fools. They would forfeit the privilege of working alongside one of humanity's best for the most vapid and meaningless of reasons, and worse still actively stymied his efforts in all things lest their foolishness be revealed for the lie it was.But Oguntola persisted, surmounting every obstacle lesser humans would set before him. In time he not only prevailed, but he proved himself among the greatest of the practitioners. He was recognized as a master of the art, and elected to the Royal Institute of healers.(In the midst of everything, he even assisted the legendary truth-seeker Ida Wells in her crusades against evil and ignorance - but that is another story.)His training complete and his mastery assured, Dr. Oguntola Sapara returned to Lagos to confront his true e...]]>
Mon, 08 May 2023 14:56:15 +0000 EA - The Legend of Dr. Oguntola Sapara by jai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Legend of Dr. Oguntola Sapara, published by jai on May 8, 2023 on The Effective Altruism Forum.This is a true story.At the dawn of the long war's final century, the tide was turning. The Abomination was in retreat, driven back by warriors wielding the Lance of Jenner. But, as in all wars, some battles could not be fought so directly. Battles waged in cunning and lies. Battles where the enemy lurked in shadow. In that darkness an unholy alliance was forged between the Pox Abomination and a conspiracy of traitors to the Yoruba people.They named it "Sopona", gave it a face, and proclaimed themselves the Abomination’s priests. They claimed that they alone could intercede with the Abomination on behalf of innocent victims. To cross them, they said, was to incur Sopona's wrath. They would unleash the Abomination upon any who dared oppose them - and sometimes they would simply inflict Sopona's torture indiscriminately. Amid death and devastation their victims would beg the Priests for help, further cementing their grip on power. And all the while they kept Jenner's Lance at bay, for their power rested on fear of the Abomination, and should it be slain their power, too, would come to an end.They operated in secrecy: the better to obscure their lies, and the better to hide from those who would challenge it. Through blackmail and terror, they maintained their iron grip for generations. None dared utter "Sopona" lest they invoke its wrath - and so even the true name was hidden.Every measure by every authority failed to contain the Abomination. They could never understand why, for they were blind to the enemy's allies. The Deathly Priests and their twisted methods were beyond the grasp of governments, warriors, and weapons. Here the global campaign could not reach. Here, harbored by its murderous allies, the Abomination reigned, and the Yoruba people resigned themselves to an abominable god against which there seemed no hope.It is inadvisable to try to hide from humans. They are curious, relentless, ruthless creatures, fearless when determined and cunning as well. And none were more human than Dr. Oguntola Sapara.Oguntola was a proud child of the Yoruba people. His father, born in chains, together with his mother, raised a family of prodigies: not only Oguntula, but his brother Alexander and his sister Clementina. But Clementina's story was all too short, for when she was to bring life into the world, she was instead taken by death.There are no records of how Oguntola felt that day; All we know is that this was the moment that Oguntula dedicated his life to defending the innocent from the inhuman evils of the world, to master the protective arts and wield them against any who would dare threaten his people.For years he studied, and toiled, and healed, growing ever stronger in the art through talent and sheer force of will. Ten years on his quest took him across the seas to study in a far away land, and here yet more obstacles greeted him. For among the practitioners of the art were counted a great many fools. They would forfeit the privilege of working alongside one of humanity's best for the most vapid and meaningless of reasons, and worse still actively stymied his efforts in all things lest their foolishness be revealed for the lie it was.But Oguntola persisted, surmounting every obstacle lesser humans would set before him. In time he not only prevailed, but he proved himself among the greatest of the practitioners. He was recognized as a master of the art, and elected to the Royal Institute of healers.(In the midst of everything, he even assisted the legendary truth-seeker Ida Wells in her crusades against evil and ignorance - but that is another story.)His training complete and his mastery assured, Dr. Oguntola Sapara returned to Lagos to confront his true e...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Legend of Dr. Oguntola Sapara, published by jai on May 8, 2023 on The Effective Altruism Forum.This is a true story.At the dawn of the long war's final century, the tide was turning. The Abomination was in retreat, driven back by warriors wielding the Lance of Jenner. But, as in all wars, some battles could not be fought so directly. Battles waged in cunning and lies. Battles where the enemy lurked in shadow. In that darkness an unholy alliance was forged between the Pox Abomination and a conspiracy of traitors to the Yoruba people.They named it "Sopona", gave it a face, and proclaimed themselves the Abomination’s priests. They claimed that they alone could intercede with the Abomination on behalf of innocent victims. To cross them, they said, was to incur Sopona's wrath. They would unleash the Abomination upon any who dared oppose them - and sometimes they would simply inflict Sopona's torture indiscriminately. Amid death and devastation their victims would beg the Priests for help, further cementing their grip on power. And all the while they kept Jenner's Lance at bay, for their power rested on fear of the Abomination, and should it be slain their power, too, would come to an end.They operated in secrecy: the better to obscure their lies, and the better to hide from those who would challenge it. Through blackmail and terror, they maintained their iron grip for generations. None dared utter "Sopona" lest they invoke its wrath - and so even the true name was hidden.Every measure by every authority failed to contain the Abomination. They could never understand why, for they were blind to the enemy's allies. The Deathly Priests and their twisted methods were beyond the grasp of governments, warriors, and weapons. Here the global campaign could not reach. Here, harbored by its murderous allies, the Abomination reigned, and the Yoruba people resigned themselves to an abominable god against which there seemed no hope.It is inadvisable to try to hide from humans. They are curious, relentless, ruthless creatures, fearless when determined and cunning as well. And none were more human than Dr. Oguntola Sapara.Oguntola was a proud child of the Yoruba people. His father, born in chains, together with his mother, raised a family of prodigies: not only Oguntula, but his brother Alexander and his sister Clementina. But Clementina's story was all too short, for when she was to bring life into the world, she was instead taken by death.There are no records of how Oguntola felt that day; All we know is that this was the moment that Oguntula dedicated his life to defending the innocent from the inhuman evils of the world, to master the protective arts and wield them against any who would dare threaten his people.For years he studied, and toiled, and healed, growing ever stronger in the art through talent and sheer force of will. Ten years on his quest took him across the seas to study in a far away land, and here yet more obstacles greeted him. For among the practitioners of the art were counted a great many fools. They would forfeit the privilege of working alongside one of humanity's best for the most vapid and meaningless of reasons, and worse still actively stymied his efforts in all things lest their foolishness be revealed for the lie it was.But Oguntola persisted, surmounting every obstacle lesser humans would set before him. In time he not only prevailed, but he proved himself among the greatest of the practitioners. He was recognized as a master of the art, and elected to the Royal Institute of healers.(In the midst of everything, he even assisted the legendary truth-seeker Ida Wells in her crusades against evil and ignorance - but that is another story.)His training complete and his mastery assured, Dr. Oguntola Sapara returned to Lagos to confront his true e...]]>
jai https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:23 None full 5877
jYSEjBsWbjNqioRZJ_NL_EA_EA EA - The Rethink Priorities Existential Security Team's Strategy for 2023 by Ben Snodin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Rethink Priorities Existential Security Team's Strategy for 2023, published by Ben Snodin on May 8, 2023 on The Effective Altruism Forum.The Rethink Priorities Existential Security Team's Strategy for 2023SummaryThis post contains a moderately rough, high-level description of the Rethink Priorities Existential Security team’s (XST’s) strategy for the period April-October 2023.XST is a team of researchers focused on improving the world according to a longtermist outlook through research and other projects, and is part of Rethink Priorities (RP).Note that until very recently we were called the General Longtermism team (GLT). We have now renamed ourselves the Existential Security team (XST), which is slightly more descriptive and more closely reflects our focus on reducing existential risk.XST’s three focus areas for 2023 will be:Longtermist entrepreneurship (65%): Making highly impactful longtermist projects happen by finding and developing ideas for highly promising longtermist projects, identifying potential founders, and supporting them as they get these projects started. Our main activities will be:Identifying and detailing the most promising ideas for longtermist projects, with a goal of having ~5 detailed project ideas by the end of June, that we can bring to a potential meeting of talented entrepreneurs in July/August, organized by Mike McCormick.A relatively brief founder-first-style founder search (looking for highly promising founders and finding projects that they are an especially good fit for).Exploring founder-in-residence MVPs (hiring potential founders and giving them space to develop their own ideas for promising projects).Supporting founders once they’re identified.Strategic clarity research (25%): Research that helps shed light on high-level strategic questions relevant for the EA community and for people working on reducing existential risk. This year, we plan to focus on high-level EA movement-building strategy questions (such as “What kind of EA movement do we want?” or “What’s the optimal portfolio among priority cause areas we should aim at building?”), and possibly on high-level questions that seem important for assessing whether and how to help launch entrepreneurial projects. Most of our work on this will happen in the second half of the year.Flexible time for high-impact opportunities (10%): Time allocated for i) team members working on projects that they are very keen on and ii) highly impactful and time-sensitive projects that arise due to changes in external circumstances.Concrete outputs we’ll aim for:5 project idea memos by the end of June that are of a standard equal to or better than the 2023 Q1 megaproject speedruns that we posted on the EA Forum in February.1 strategic clarity research output by the end of October.1 new promising project launched by the end of October.11 publicly shared research or project idea outputs by the end of the year.From mid-May onwards, we’re planning to have 4 FTE executing this strategy: me (Ben), Marie, Jam, and Renan. Linch is pursuing a separate research agenda related to longtermist strategic clarity.The high-level timeline is:[completed] March: The team winds down current projects and begins work on executing the team strategy from the start of April.[in progress] April-July: The team focuses on the entrepreneurship program, and works on founder-first activities, founder support, and project research. The project research is focused on generating a new prioritization model and shallow project ranking by the end of April, and 5 project ideas memos by the end of June for a potential meeting of promising entrepreneurs in July/August.August-October: Jam and Ben continue working on the entrepreneurship program, while Marie and Renan switch to strategic clarity research.Start o...]]>
Ben Snodin https://forum.effectivealtruism.org/posts/jYSEjBsWbjNqioRZJ/the-rethink-priorities-existential-security-team-s-strategy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Rethink Priorities Existential Security Team's Strategy for 2023, published by Ben Snodin on May 8, 2023 on The Effective Altruism Forum.The Rethink Priorities Existential Security Team's Strategy for 2023SummaryThis post contains a moderately rough, high-level description of the Rethink Priorities Existential Security team’s (XST’s) strategy for the period April-October 2023.XST is a team of researchers focused on improving the world according to a longtermist outlook through research and other projects, and is part of Rethink Priorities (RP).Note that until very recently we were called the General Longtermism team (GLT). We have now renamed ourselves the Existential Security team (XST), which is slightly more descriptive and more closely reflects our focus on reducing existential risk.XST’s three focus areas for 2023 will be:Longtermist entrepreneurship (65%): Making highly impactful longtermist projects happen by finding and developing ideas for highly promising longtermist projects, identifying potential founders, and supporting them as they get these projects started. Our main activities will be:Identifying and detailing the most promising ideas for longtermist projects, with a goal of having ~5 detailed project ideas by the end of June, that we can bring to a potential meeting of talented entrepreneurs in July/August, organized by Mike McCormick.A relatively brief founder-first-style founder search (looking for highly promising founders and finding projects that they are an especially good fit for).Exploring founder-in-residence MVPs (hiring potential founders and giving them space to develop their own ideas for promising projects).Supporting founders once they’re identified.Strategic clarity research (25%): Research that helps shed light on high-level strategic questions relevant for the EA community and for people working on reducing existential risk. This year, we plan to focus on high-level EA movement-building strategy questions (such as “What kind of EA movement do we want?” or “What’s the optimal portfolio among priority cause areas we should aim at building?”), and possibly on high-level questions that seem important for assessing whether and how to help launch entrepreneurial projects. Most of our work on this will happen in the second half of the year.Flexible time for high-impact opportunities (10%): Time allocated for i) team members working on projects that they are very keen on and ii) highly impactful and time-sensitive projects that arise due to changes in external circumstances.Concrete outputs we’ll aim for:5 project idea memos by the end of June that are of a standard equal to or better than the 2023 Q1 megaproject speedruns that we posted on the EA Forum in February.1 strategic clarity research output by the end of October.1 new promising project launched by the end of October.11 publicly shared research or project idea outputs by the end of the year.From mid-May onwards, we’re planning to have 4 FTE executing this strategy: me (Ben), Marie, Jam, and Renan. Linch is pursuing a separate research agenda related to longtermist strategic clarity.The high-level timeline is:[completed] March: The team winds down current projects and begins work on executing the team strategy from the start of April.[in progress] April-July: The team focuses on the entrepreneurship program, and works on founder-first activities, founder support, and project research. The project research is focused on generating a new prioritization model and shallow project ranking by the end of April, and 5 project ideas memos by the end of June for a potential meeting of promising entrepreneurs in July/August.August-October: Jam and Ben continue working on the entrepreneurship program, while Marie and Renan switch to strategic clarity research.Start o...]]>
Mon, 08 May 2023 09:13:48 +0000 EA - The Rethink Priorities Existential Security Team's Strategy for 2023 by Ben Snodin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Rethink Priorities Existential Security Team's Strategy for 2023, published by Ben Snodin on May 8, 2023 on The Effective Altruism Forum.The Rethink Priorities Existential Security Team's Strategy for 2023SummaryThis post contains a moderately rough, high-level description of the Rethink Priorities Existential Security team’s (XST’s) strategy for the period April-October 2023.XST is a team of researchers focused on improving the world according to a longtermist outlook through research and other projects, and is part of Rethink Priorities (RP).Note that until very recently we were called the General Longtermism team (GLT). We have now renamed ourselves the Existential Security team (XST), which is slightly more descriptive and more closely reflects our focus on reducing existential risk.XST’s three focus areas for 2023 will be:Longtermist entrepreneurship (65%): Making highly impactful longtermist projects happen by finding and developing ideas for highly promising longtermist projects, identifying potential founders, and supporting them as they get these projects started. Our main activities will be:Identifying and detailing the most promising ideas for longtermist projects, with a goal of having ~5 detailed project ideas by the end of June, that we can bring to a potential meeting of talented entrepreneurs in July/August, organized by Mike McCormick.A relatively brief founder-first-style founder search (looking for highly promising founders and finding projects that they are an especially good fit for).Exploring founder-in-residence MVPs (hiring potential founders and giving them space to develop their own ideas for promising projects).Supporting founders once they’re identified.Strategic clarity research (25%): Research that helps shed light on high-level strategic questions relevant for the EA community and for people working on reducing existential risk. This year, we plan to focus on high-level EA movement-building strategy questions (such as “What kind of EA movement do we want?” or “What’s the optimal portfolio among priority cause areas we should aim at building?”), and possibly on high-level questions that seem important for assessing whether and how to help launch entrepreneurial projects. Most of our work on this will happen in the second half of the year.Flexible time for high-impact opportunities (10%): Time allocated for i) team members working on projects that they are very keen on and ii) highly impactful and time-sensitive projects that arise due to changes in external circumstances.Concrete outputs we’ll aim for:5 project idea memos by the end of June that are of a standard equal to or better than the 2023 Q1 megaproject speedruns that we posted on the EA Forum in February.1 strategic clarity research output by the end of October.1 new promising project launched by the end of October.11 publicly shared research or project idea outputs by the end of the year.From mid-May onwards, we’re planning to have 4 FTE executing this strategy: me (Ben), Marie, Jam, and Renan. Linch is pursuing a separate research agenda related to longtermist strategic clarity.The high-level timeline is:[completed] March: The team winds down current projects and begins work on executing the team strategy from the start of April.[in progress] April-July: The team focuses on the entrepreneurship program, and works on founder-first activities, founder support, and project research. The project research is focused on generating a new prioritization model and shallow project ranking by the end of April, and 5 project ideas memos by the end of June for a potential meeting of promising entrepreneurs in July/August.August-October: Jam and Ben continue working on the entrepreneurship program, while Marie and Renan switch to strategic clarity research.Start o...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Rethink Priorities Existential Security Team's Strategy for 2023, published by Ben Snodin on May 8, 2023 on The Effective Altruism Forum.The Rethink Priorities Existential Security Team's Strategy for 2023SummaryThis post contains a moderately rough, high-level description of the Rethink Priorities Existential Security team’s (XST’s) strategy for the period April-October 2023.XST is a team of researchers focused on improving the world according to a longtermist outlook through research and other projects, and is part of Rethink Priorities (RP).Note that until very recently we were called the General Longtermism team (GLT). We have now renamed ourselves the Existential Security team (XST), which is slightly more descriptive and more closely reflects our focus on reducing existential risk.XST’s three focus areas for 2023 will be:Longtermist entrepreneurship (65%): Making highly impactful longtermist projects happen by finding and developing ideas for highly promising longtermist projects, identifying potential founders, and supporting them as they get these projects started. Our main activities will be:Identifying and detailing the most promising ideas for longtermist projects, with a goal of having ~5 detailed project ideas by the end of June, that we can bring to a potential meeting of talented entrepreneurs in July/August, organized by Mike McCormick.A relatively brief founder-first-style founder search (looking for highly promising founders and finding projects that they are an especially good fit for).Exploring founder-in-residence MVPs (hiring potential founders and giving them space to develop their own ideas for promising projects).Supporting founders once they’re identified.Strategic clarity research (25%): Research that helps shed light on high-level strategic questions relevant for the EA community and for people working on reducing existential risk. This year, we plan to focus on high-level EA movement-building strategy questions (such as “What kind of EA movement do we want?” or “What’s the optimal portfolio among priority cause areas we should aim at building?”), and possibly on high-level questions that seem important for assessing whether and how to help launch entrepreneurial projects. Most of our work on this will happen in the second half of the year.Flexible time for high-impact opportunities (10%): Time allocated for i) team members working on projects that they are very keen on and ii) highly impactful and time-sensitive projects that arise due to changes in external circumstances.Concrete outputs we’ll aim for:5 project idea memos by the end of June that are of a standard equal to or better than the 2023 Q1 megaproject speedruns that we posted on the EA Forum in February.1 strategic clarity research output by the end of October.1 new promising project launched by the end of October.11 publicly shared research or project idea outputs by the end of the year.From mid-May onwards, we’re planning to have 4 FTE executing this strategy: me (Ben), Marie, Jam, and Renan. Linch is pursuing a separate research agenda related to longtermist strategic clarity.The high-level timeline is:[completed] March: The team winds down current projects and begins work on executing the team strategy from the start of April.[in progress] April-July: The team focuses on the entrepreneurship program, and works on founder-first activities, founder support, and project research. The project research is focused on generating a new prioritization model and shallow project ranking by the end of April, and 5 project ideas memos by the end of June for a potential meeting of promising entrepreneurs in July/August.August-October: Jam and Ben continue working on the entrepreneurship program, while Marie and Renan switch to strategic clarity research.Start o...]]>
Ben Snodin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 25:06 None full 5876
nTALzRAWxRnrxvoep_NL_EA_EA EA - Implications of the Whitehouse meeting with AI CEOs for AI superintelligence risk - a first-step towards evals? by Jamie Bernardi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Implications of the Whitehouse meeting with AI CEOs for AI superintelligence risk - a first-step towards evals?, published by Jamie Bernardi on May 7, 2023 on The Effective Altruism Forum.IntroducionOn Wednesday 4th May, Sam Altman (Open AI) and Dario Amodei (Anthropic) - amongst others - met with US Vice President Kamala Harris (with a drop-in from President Joe Biden), to discuss the dangers of AI.Announcement | Fact sheet | EA Forum linkpostI spent about 2 hours trying to understand what happened, who was involved, and what its possible implications for superintelligence risk.I decided to make this post for two reasons:I am practising writing and developing my opinions on AI strategy (so feedback is very welcome, and you should treat my epistemic status as ‘new to this’!)I think demystifying the facts of the announcement and offering some tentative conclusions will positively contribute to the community's understanding of AI-related political developments.My main conclusionsThree announcements were made, but the announcement on public model evaluations involving major AI labs seemed most relevant and actionable to me.My two actionable conclusions are:I think folks with technical alignment expertise should consider attending DEF CON 31 if it’s convenient, to help shape the conclusions from the event.My main speculative concern is that this evaluation event could positively associate advanced AI and the open source community. As far as those that feel the downside of model proliferation outweighs the benefits of open sourcing, spreading this message in a more focused way now may be valuable.Summary of the model evaluations announcementThis is mostly factual, and I’ve flagged where I’m offering my interpretation. Primary source: AI village announcement.There’s going to be an evaluation platform made available during a conference called DEF CON 31. DEF CON 31 is the 31st iteration of DEF CON, “the world’s largest security conference”, taking place in Los Angeles on 10th August 2023. The platform is being organised by a subcommunity at that conference called the AI village.The evaluation platform will be provided by Scale AI. The platform will provide “timed access to LLMs” via laptops available at the conference, and attendees will red-team various models by injecting prompts. I expect that the humans will then rate the output of the model as good or bad, much like on the ChatGPT platform. There’s a points-based system to encourage participation, and the winner will win a “high-end Nvidia GPU”.The intent of this whole event appears to be to collect adversarial data that the AI organisations in question can use and 'learn from' (and presumably do more RLHF on). The orgs that signed up include: Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI.It seems that there won’t be any direct implications for the AI organisations. They will, by default, be allowed to carry on as normal no matter what is learned at the event.I’ll provide more details on what has happened after the takeaways section.Takeaways from the Whitehouse announcement on model evaluationsI prioritised communicating my takeaways in this section. If you want more factual context to understand exactly what happened and who's involved- see the section below this one.For the avoidance of doubt, the Whitehouse announcement on the model evaluation event doesn’t come with any regulatory teeth.I don’t mean that as a criticism necessarily; I’m not sure anyone has a concrete proposal for what the evaluation criteria should even be, or how they should be enforced, etc, so it’d be too soon to see an announcement like that.That does mean I’m left with the slightly odd conclusion that all that’s happened is the Whitehouse has endorsed a community red-teaming event at a con...]]>
Jamie Bernardi https://forum.effectivealtruism.org/posts/nTALzRAWxRnrxvoep/implications-of-the-whitehouse-meeting-with-ai-ceos-for-ai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Implications of the Whitehouse meeting with AI CEOs for AI superintelligence risk - a first-step towards evals?, published by Jamie Bernardi on May 7, 2023 on The Effective Altruism Forum.IntroducionOn Wednesday 4th May, Sam Altman (Open AI) and Dario Amodei (Anthropic) - amongst others - met with US Vice President Kamala Harris (with a drop-in from President Joe Biden), to discuss the dangers of AI.Announcement | Fact sheet | EA Forum linkpostI spent about 2 hours trying to understand what happened, who was involved, and what its possible implications for superintelligence risk.I decided to make this post for two reasons:I am practising writing and developing my opinions on AI strategy (so feedback is very welcome, and you should treat my epistemic status as ‘new to this’!)I think demystifying the facts of the announcement and offering some tentative conclusions will positively contribute to the community's understanding of AI-related political developments.My main conclusionsThree announcements were made, but the announcement on public model evaluations involving major AI labs seemed most relevant and actionable to me.My two actionable conclusions are:I think folks with technical alignment expertise should consider attending DEF CON 31 if it’s convenient, to help shape the conclusions from the event.My main speculative concern is that this evaluation event could positively associate advanced AI and the open source community. As far as those that feel the downside of model proliferation outweighs the benefits of open sourcing, spreading this message in a more focused way now may be valuable.Summary of the model evaluations announcementThis is mostly factual, and I’ve flagged where I’m offering my interpretation. Primary source: AI village announcement.There’s going to be an evaluation platform made available during a conference called DEF CON 31. DEF CON 31 is the 31st iteration of DEF CON, “the world’s largest security conference”, taking place in Los Angeles on 10th August 2023. The platform is being organised by a subcommunity at that conference called the AI village.The evaluation platform will be provided by Scale AI. The platform will provide “timed access to LLMs” via laptops available at the conference, and attendees will red-team various models by injecting prompts. I expect that the humans will then rate the output of the model as good or bad, much like on the ChatGPT platform. There’s a points-based system to encourage participation, and the winner will win a “high-end Nvidia GPU”.The intent of this whole event appears to be to collect adversarial data that the AI organisations in question can use and 'learn from' (and presumably do more RLHF on). The orgs that signed up include: Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI.It seems that there won’t be any direct implications for the AI organisations. They will, by default, be allowed to carry on as normal no matter what is learned at the event.I’ll provide more details on what has happened after the takeaways section.Takeaways from the Whitehouse announcement on model evaluationsI prioritised communicating my takeaways in this section. If you want more factual context to understand exactly what happened and who's involved- see the section below this one.For the avoidance of doubt, the Whitehouse announcement on the model evaluation event doesn’t come with any regulatory teeth.I don’t mean that as a criticism necessarily; I’m not sure anyone has a concrete proposal for what the evaluation criteria should even be, or how they should be enforced, etc, so it’d be too soon to see an announcement like that.That does mean I’m left with the slightly odd conclusion that all that’s happened is the Whitehouse has endorsed a community red-teaming event at a con...]]>
Sun, 07 May 2023 21:44:41 +0000 EA - Implications of the Whitehouse meeting with AI CEOs for AI superintelligence risk - a first-step towards evals? by Jamie Bernardi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Implications of the Whitehouse meeting with AI CEOs for AI superintelligence risk - a first-step towards evals?, published by Jamie Bernardi on May 7, 2023 on The Effective Altruism Forum.IntroducionOn Wednesday 4th May, Sam Altman (Open AI) and Dario Amodei (Anthropic) - amongst others - met with US Vice President Kamala Harris (with a drop-in from President Joe Biden), to discuss the dangers of AI.Announcement | Fact sheet | EA Forum linkpostI spent about 2 hours trying to understand what happened, who was involved, and what its possible implications for superintelligence risk.I decided to make this post for two reasons:I am practising writing and developing my opinions on AI strategy (so feedback is very welcome, and you should treat my epistemic status as ‘new to this’!)I think demystifying the facts of the announcement and offering some tentative conclusions will positively contribute to the community's understanding of AI-related political developments.My main conclusionsThree announcements were made, but the announcement on public model evaluations involving major AI labs seemed most relevant and actionable to me.My two actionable conclusions are:I think folks with technical alignment expertise should consider attending DEF CON 31 if it’s convenient, to help shape the conclusions from the event.My main speculative concern is that this evaluation event could positively associate advanced AI and the open source community. As far as those that feel the downside of model proliferation outweighs the benefits of open sourcing, spreading this message in a more focused way now may be valuable.Summary of the model evaluations announcementThis is mostly factual, and I’ve flagged where I’m offering my interpretation. Primary source: AI village announcement.There’s going to be an evaluation platform made available during a conference called DEF CON 31. DEF CON 31 is the 31st iteration of DEF CON, “the world’s largest security conference”, taking place in Los Angeles on 10th August 2023. The platform is being organised by a subcommunity at that conference called the AI village.The evaluation platform will be provided by Scale AI. The platform will provide “timed access to LLMs” via laptops available at the conference, and attendees will red-team various models by injecting prompts. I expect that the humans will then rate the output of the model as good or bad, much like on the ChatGPT platform. There’s a points-based system to encourage participation, and the winner will win a “high-end Nvidia GPU”.The intent of this whole event appears to be to collect adversarial data that the AI organisations in question can use and 'learn from' (and presumably do more RLHF on). The orgs that signed up include: Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI.It seems that there won’t be any direct implications for the AI organisations. They will, by default, be allowed to carry on as normal no matter what is learned at the event.I’ll provide more details on what has happened after the takeaways section.Takeaways from the Whitehouse announcement on model evaluationsI prioritised communicating my takeaways in this section. If you want more factual context to understand exactly what happened and who's involved- see the section below this one.For the avoidance of doubt, the Whitehouse announcement on the model evaluation event doesn’t come with any regulatory teeth.I don’t mean that as a criticism necessarily; I’m not sure anyone has a concrete proposal for what the evaluation criteria should even be, or how they should be enforced, etc, so it’d be too soon to see an announcement like that.That does mean I’m left with the slightly odd conclusion that all that’s happened is the Whitehouse has endorsed a community red-teaming event at a con...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Implications of the Whitehouse meeting with AI CEOs for AI superintelligence risk - a first-step towards evals?, published by Jamie Bernardi on May 7, 2023 on The Effective Altruism Forum.IntroducionOn Wednesday 4th May, Sam Altman (Open AI) and Dario Amodei (Anthropic) - amongst others - met with US Vice President Kamala Harris (with a drop-in from President Joe Biden), to discuss the dangers of AI.Announcement | Fact sheet | EA Forum linkpostI spent about 2 hours trying to understand what happened, who was involved, and what its possible implications for superintelligence risk.I decided to make this post for two reasons:I am practising writing and developing my opinions on AI strategy (so feedback is very welcome, and you should treat my epistemic status as ‘new to this’!)I think demystifying the facts of the announcement and offering some tentative conclusions will positively contribute to the community's understanding of AI-related political developments.My main conclusionsThree announcements were made, but the announcement on public model evaluations involving major AI labs seemed most relevant and actionable to me.My two actionable conclusions are:I think folks with technical alignment expertise should consider attending DEF CON 31 if it’s convenient, to help shape the conclusions from the event.My main speculative concern is that this evaluation event could positively associate advanced AI and the open source community. As far as those that feel the downside of model proliferation outweighs the benefits of open sourcing, spreading this message in a more focused way now may be valuable.Summary of the model evaluations announcementThis is mostly factual, and I’ve flagged where I’m offering my interpretation. Primary source: AI village announcement.There’s going to be an evaluation platform made available during a conference called DEF CON 31. DEF CON 31 is the 31st iteration of DEF CON, “the world’s largest security conference”, taking place in Los Angeles on 10th August 2023. The platform is being organised by a subcommunity at that conference called the AI village.The evaluation platform will be provided by Scale AI. The platform will provide “timed access to LLMs” via laptops available at the conference, and attendees will red-team various models by injecting prompts. I expect that the humans will then rate the output of the model as good or bad, much like on the ChatGPT platform. There’s a points-based system to encourage participation, and the winner will win a “high-end Nvidia GPU”.The intent of this whole event appears to be to collect adversarial data that the AI organisations in question can use and 'learn from' (and presumably do more RLHF on). The orgs that signed up include: Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI.It seems that there won’t be any direct implications for the AI organisations. They will, by default, be allowed to carry on as normal no matter what is learned at the event.I’ll provide more details on what has happened after the takeaways section.Takeaways from the Whitehouse announcement on model evaluationsI prioritised communicating my takeaways in this section. If you want more factual context to understand exactly what happened and who's involved- see the section below this one.For the avoidance of doubt, the Whitehouse announcement on the model evaluation event doesn’t come with any regulatory teeth.I don’t mean that as a criticism necessarily; I’m not sure anyone has a concrete proposal for what the evaluation criteria should even be, or how they should be enforced, etc, so it’d be too soon to see an announcement like that.That does mean I’m left with the slightly odd conclusion that all that’s happened is the Whitehouse has endorsed a community red-teaming event at a con...]]>
Jamie Bernardi https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:57 None full 5872
LgscQde9vQW4xLrjC_NL_EA_EA EA - On Child Wasting, Mega-Charities, and Measurability Bias by Jesper Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Child Wasting, Mega-Charities, and Measurability Bias, published by Jesper on May 7, 2023 on The Effective Altruism Forum.Recently I ran into a volunteer for UNICEF who was gathering donations for helping malnourished children. He gave me some explanation on why child wasting is a serious problem and how there are cheap ways to help children who are suffering from it (the UNICEF website has some information on child wasting and specifically on the treatment of wasting using simplified approaches, in case you are interested).Since I happen to have taken the Giving What We Can pledge and have read quite a bit on comparing charities, I asked what evidence there is that compares this action to - say - protecting people from malaria with bednets or directly giving cash to very poor people. The response I got was quite specific: the volunteer claimed that UNICEF can save a life with just 1€ a day for an average period of 7 months. If these claims are true then that means they can save a life for 210€, a lot less than the >3000$ that Givewell estimates is needed for AMF to save one life. Probably these numbers should not be compared directly, but I am still curious to know why there can be over an order of magnitude difference between the two. So to practice my critical thinking on these kinds of questions, I made a list of possible explanations for the difference:The UNICEF campaign has little room for additional funding.The program would be funded anyway from other sources (e.g. governments).The 1€/day figure might not include all the costs.Some of the children who receive the food supplements might die of malnutrition anyway.Only some of the children who receive the food supplements would have died without them.Children who are saved from malnutrition could still die of other causes.Obviously I do not have the time nor resources of GiveWell so it is hard to determine how much all of these explanations count in the overall picture, or if there are others that I missed. Unfortunately, there does not seem to be much information on this question from GiveWell (or other EA organizations) either. Looking on the GiveWell website, the most I could find is this blog post on mega-charities from 2011, which makes the argument that mega-charities like UNICEF have too many different campaigns running simultaneously, and that they do not have the required transparency for a proper evaluation. The first argument sounds fake to me: if there are different campaigns, then can you not just evaluate these individual campaigns, or at least the most promising ones? The second point about transparency is a real problem, but there is also the risk of measurability bias if we never even consider less transparent charities.I would very much like to have a more convincing argument for why these kind of charities are not rated. If for nothing else then at least it would be useful for discussing with people who currently donate to them, or who try to convince me to donate to them. Perhaps the reason is just a lack of resources at GiveWell, or perhaps there is research on this but I just couldn't find it. But either way I believe the current state of affairs does not provide a convincing case of why the biggest EA evaluator barely even mentions one of the largest and most respected charity organizations.[Comment: I'm not new here but I'm mostly a lurker on this forum. I'm open to criticism on my writing style and epistemics as long as you're kind!]Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jesper https://forum.effectivealtruism.org/posts/LgscQde9vQW4xLrjC/on-child-wasting-mega-charities-and-measurability-bias Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Child Wasting, Mega-Charities, and Measurability Bias, published by Jesper on May 7, 2023 on The Effective Altruism Forum.Recently I ran into a volunteer for UNICEF who was gathering donations for helping malnourished children. He gave me some explanation on why child wasting is a serious problem and how there are cheap ways to help children who are suffering from it (the UNICEF website has some information on child wasting and specifically on the treatment of wasting using simplified approaches, in case you are interested).Since I happen to have taken the Giving What We Can pledge and have read quite a bit on comparing charities, I asked what evidence there is that compares this action to - say - protecting people from malaria with bednets or directly giving cash to very poor people. The response I got was quite specific: the volunteer claimed that UNICEF can save a life with just 1€ a day for an average period of 7 months. If these claims are true then that means they can save a life for 210€, a lot less than the >3000$ that Givewell estimates is needed for AMF to save one life. Probably these numbers should not be compared directly, but I am still curious to know why there can be over an order of magnitude difference between the two. So to practice my critical thinking on these kinds of questions, I made a list of possible explanations for the difference:The UNICEF campaign has little room for additional funding.The program would be funded anyway from other sources (e.g. governments).The 1€/day figure might not include all the costs.Some of the children who receive the food supplements might die of malnutrition anyway.Only some of the children who receive the food supplements would have died without them.Children who are saved from malnutrition could still die of other causes.Obviously I do not have the time nor resources of GiveWell so it is hard to determine how much all of these explanations count in the overall picture, or if there are others that I missed. Unfortunately, there does not seem to be much information on this question from GiveWell (or other EA organizations) either. Looking on the GiveWell website, the most I could find is this blog post on mega-charities from 2011, which makes the argument that mega-charities like UNICEF have too many different campaigns running simultaneously, and that they do not have the required transparency for a proper evaluation. The first argument sounds fake to me: if there are different campaigns, then can you not just evaluate these individual campaigns, or at least the most promising ones? The second point about transparency is a real problem, but there is also the risk of measurability bias if we never even consider less transparent charities.I would very much like to have a more convincing argument for why these kind of charities are not rated. If for nothing else then at least it would be useful for discussing with people who currently donate to them, or who try to convince me to donate to them. Perhaps the reason is just a lack of resources at GiveWell, or perhaps there is research on this but I just couldn't find it. But either way I believe the current state of affairs does not provide a convincing case of why the biggest EA evaluator barely even mentions one of the largest and most respected charity organizations.[Comment: I'm not new here but I'm mostly a lurker on this forum. I'm open to criticism on my writing style and epistemics as long as you're kind!]Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 07 May 2023 19:22:38 +0000 EA - On Child Wasting, Mega-Charities, and Measurability Bias by Jesper Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Child Wasting, Mega-Charities, and Measurability Bias, published by Jesper on May 7, 2023 on The Effective Altruism Forum.Recently I ran into a volunteer for UNICEF who was gathering donations for helping malnourished children. He gave me some explanation on why child wasting is a serious problem and how there are cheap ways to help children who are suffering from it (the UNICEF website has some information on child wasting and specifically on the treatment of wasting using simplified approaches, in case you are interested).Since I happen to have taken the Giving What We Can pledge and have read quite a bit on comparing charities, I asked what evidence there is that compares this action to - say - protecting people from malaria with bednets or directly giving cash to very poor people. The response I got was quite specific: the volunteer claimed that UNICEF can save a life with just 1€ a day for an average period of 7 months. If these claims are true then that means they can save a life for 210€, a lot less than the >3000$ that Givewell estimates is needed for AMF to save one life. Probably these numbers should not be compared directly, but I am still curious to know why there can be over an order of magnitude difference between the two. So to practice my critical thinking on these kinds of questions, I made a list of possible explanations for the difference:The UNICEF campaign has little room for additional funding.The program would be funded anyway from other sources (e.g. governments).The 1€/day figure might not include all the costs.Some of the children who receive the food supplements might die of malnutrition anyway.Only some of the children who receive the food supplements would have died without them.Children who are saved from malnutrition could still die of other causes.Obviously I do not have the time nor resources of GiveWell so it is hard to determine how much all of these explanations count in the overall picture, or if there are others that I missed. Unfortunately, there does not seem to be much information on this question from GiveWell (or other EA organizations) either. Looking on the GiveWell website, the most I could find is this blog post on mega-charities from 2011, which makes the argument that mega-charities like UNICEF have too many different campaigns running simultaneously, and that they do not have the required transparency for a proper evaluation. The first argument sounds fake to me: if there are different campaigns, then can you not just evaluate these individual campaigns, or at least the most promising ones? The second point about transparency is a real problem, but there is also the risk of measurability bias if we never even consider less transparent charities.I would very much like to have a more convincing argument for why these kind of charities are not rated. If for nothing else then at least it would be useful for discussing with people who currently donate to them, or who try to convince me to donate to them. Perhaps the reason is just a lack of resources at GiveWell, or perhaps there is research on this but I just couldn't find it. But either way I believe the current state of affairs does not provide a convincing case of why the biggest EA evaluator barely even mentions one of the largest and most respected charity organizations.[Comment: I'm not new here but I'm mostly a lurker on this forum. I'm open to criticism on my writing style and epistemics as long as you're kind!]Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Child Wasting, Mega-Charities, and Measurability Bias, published by Jesper on May 7, 2023 on The Effective Altruism Forum.Recently I ran into a volunteer for UNICEF who was gathering donations for helping malnourished children. He gave me some explanation on why child wasting is a serious problem and how there are cheap ways to help children who are suffering from it (the UNICEF website has some information on child wasting and specifically on the treatment of wasting using simplified approaches, in case you are interested).Since I happen to have taken the Giving What We Can pledge and have read quite a bit on comparing charities, I asked what evidence there is that compares this action to - say - protecting people from malaria with bednets or directly giving cash to very poor people. The response I got was quite specific: the volunteer claimed that UNICEF can save a life with just 1€ a day for an average period of 7 months. If these claims are true then that means they can save a life for 210€, a lot less than the >3000$ that Givewell estimates is needed for AMF to save one life. Probably these numbers should not be compared directly, but I am still curious to know why there can be over an order of magnitude difference between the two. So to practice my critical thinking on these kinds of questions, I made a list of possible explanations for the difference:The UNICEF campaign has little room for additional funding.The program would be funded anyway from other sources (e.g. governments).The 1€/day figure might not include all the costs.Some of the children who receive the food supplements might die of malnutrition anyway.Only some of the children who receive the food supplements would have died without them.Children who are saved from malnutrition could still die of other causes.Obviously I do not have the time nor resources of GiveWell so it is hard to determine how much all of these explanations count in the overall picture, or if there are others that I missed. Unfortunately, there does not seem to be much information on this question from GiveWell (or other EA organizations) either. Looking on the GiveWell website, the most I could find is this blog post on mega-charities from 2011, which makes the argument that mega-charities like UNICEF have too many different campaigns running simultaneously, and that they do not have the required transparency for a proper evaluation. The first argument sounds fake to me: if there are different campaigns, then can you not just evaluate these individual campaigns, or at least the most promising ones? The second point about transparency is a real problem, but there is also the risk of measurability bias if we never even consider less transparent charities.I would very much like to have a more convincing argument for why these kind of charities are not rated. If for nothing else then at least it would be useful for discussing with people who currently donate to them, or who try to convince me to donate to them. Perhaps the reason is just a lack of resources at GiveWell, or perhaps there is research on this but I just couldn't find it. But either way I believe the current state of affairs does not provide a convincing case of why the biggest EA evaluator barely even mentions one of the largest and most respected charity organizations.[Comment: I'm not new here but I'm mostly a lurker on this forum. I'm open to criticism on my writing style and epistemics as long as you're kind!]Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jesper https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:12 None full 5869
xdKnfQKLyYQfeErSr_NL_EA_EA EA - The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have? by Jim Buhler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have?, published by Jim Buhler on May 6, 2023 on The Effective Altruism Forum.Summary: The Grabby Values Selection Thesis (or GST, for short) is the thesis that some values are more expansion-conducive (and therefore more adapted to space colonization races) than others such that we should – all else equal – expect such values to be more represented among the grabbiest civilizations/AGIs. In this post, I present and argue for GST, and raise some considerations regarding how strong and decisive we should expect this selection effect to be. The stronger it is, the more we should expect our successors – in worlds where the future of humanity is big – to have values more grabbing-prone than ours. The same holds for grabby aliens relative to us present humans. While these claims are trivially true, they seem to support conclusions that most longtermists have not paid attention to, such as “the most powerful civilizations don’t care about what the moral truth might be” (see my previous post), and “they don’t care (much) about suffering” (see my forthcoming next post).The thesisSpreading to new territories can be motivated by very different values and seems to be a convergent instrumental goal. Whatever a given agent wants, they likely have some incentive to accumulate resources and spread to new territories in order to better achieve their goal(s).However, not all moral preferences are equally conducive to expansion. Some of them value (intrinsically or instrumentally) colonization more than others. For instance, agents who value spreading intrinsically will likely colonize more and/or more efficiently than those who disvalue being the direct cause of something like “space pollution”, in the interstellar context.Therefore, there is a selection effect where the most powerful civilizations/AGIs are those who have the values that are the most prone to “grabbing”. This is the Grabby Values Selection Thesis (GST), which is the formalization and generalization of an idea that has been expressed by Robin Hanson (1998).We can differentiate between two sub-selection effects, here:The intra-civ (grabby values) selection: Within a civilization, those who colonize space and influence the future of the civilization are those with the most grabby-prone values. Here is a specific plausible instance of that selection effect, given by Robin Hanson (1998): “Far enough away from the origin of an expanding wave of interstellar colonization, and in the absence of property rights in virgin oases, a selection effect should make leading edge colonists primarily value whatever it takes to stay at the leading edge.”The inter-civ (grabby values) selection: The civilizations that end up with the most grabby-prone values will get more territory than the others.Do these two different sub-selection effects matter equally? My current impression is that this mainly depends on the likelihood of an early value lock-in – or of design escaping selection early and longlastingly, in Robin Hanson’s (2022) terminology – where “early” means “before grabby values get the time to be selected for within the civilization”. If such an early value lock-in occurs, the inter-civ selection effect is the only one left. If it doesn’t occur, however, the importance of the intra-civ selection effect seems vastly superior to that of the inter-civ one. This is mainly explained by the fact that there is very likely much more room for selection effects within a (not-locked-in) civilization than in between different civilizations.GST seems trivially true. It is pretty obvious that all values are not equal in terms of how much they value (intrinsically or instrumentally) space colonization, and that those who value space expansion more ...]]>
Jim Buhler https://forum.effectivealtruism.org/posts/xdKnfQKLyYQfeErSr/the-grabby-values-selection-thesis-what-values-do-space Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have?, published by Jim Buhler on May 6, 2023 on The Effective Altruism Forum.Summary: The Grabby Values Selection Thesis (or GST, for short) is the thesis that some values are more expansion-conducive (and therefore more adapted to space colonization races) than others such that we should – all else equal – expect such values to be more represented among the grabbiest civilizations/AGIs. In this post, I present and argue for GST, and raise some considerations regarding how strong and decisive we should expect this selection effect to be. The stronger it is, the more we should expect our successors – in worlds where the future of humanity is big – to have values more grabbing-prone than ours. The same holds for grabby aliens relative to us present humans. While these claims are trivially true, they seem to support conclusions that most longtermists have not paid attention to, such as “the most powerful civilizations don’t care about what the moral truth might be” (see my previous post), and “they don’t care (much) about suffering” (see my forthcoming next post).The thesisSpreading to new territories can be motivated by very different values and seems to be a convergent instrumental goal. Whatever a given agent wants, they likely have some incentive to accumulate resources and spread to new territories in order to better achieve their goal(s).However, not all moral preferences are equally conducive to expansion. Some of them value (intrinsically or instrumentally) colonization more than others. For instance, agents who value spreading intrinsically will likely colonize more and/or more efficiently than those who disvalue being the direct cause of something like “space pollution”, in the interstellar context.Therefore, there is a selection effect where the most powerful civilizations/AGIs are those who have the values that are the most prone to “grabbing”. This is the Grabby Values Selection Thesis (GST), which is the formalization and generalization of an idea that has been expressed by Robin Hanson (1998).We can differentiate between two sub-selection effects, here:The intra-civ (grabby values) selection: Within a civilization, those who colonize space and influence the future of the civilization are those with the most grabby-prone values. Here is a specific plausible instance of that selection effect, given by Robin Hanson (1998): “Far enough away from the origin of an expanding wave of interstellar colonization, and in the absence of property rights in virgin oases, a selection effect should make leading edge colonists primarily value whatever it takes to stay at the leading edge.”The inter-civ (grabby values) selection: The civilizations that end up with the most grabby-prone values will get more territory than the others.Do these two different sub-selection effects matter equally? My current impression is that this mainly depends on the likelihood of an early value lock-in – or of design escaping selection early and longlastingly, in Robin Hanson’s (2022) terminology – where “early” means “before grabby values get the time to be selected for within the civilization”. If such an early value lock-in occurs, the inter-civ selection effect is the only one left. If it doesn’t occur, however, the importance of the intra-civ selection effect seems vastly superior to that of the inter-civ one. This is mainly explained by the fact that there is very likely much more room for selection effects within a (not-locked-in) civilization than in between different civilizations.GST seems trivially true. It is pretty obvious that all values are not equal in terms of how much they value (intrinsically or instrumentally) space colonization, and that those who value space expansion more ...]]>
Sun, 07 May 2023 18:31:23 +0000 EA - The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have? by Jim Buhler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have?, published by Jim Buhler on May 6, 2023 on The Effective Altruism Forum.Summary: The Grabby Values Selection Thesis (or GST, for short) is the thesis that some values are more expansion-conducive (and therefore more adapted to space colonization races) than others such that we should – all else equal – expect such values to be more represented among the grabbiest civilizations/AGIs. In this post, I present and argue for GST, and raise some considerations regarding how strong and decisive we should expect this selection effect to be. The stronger it is, the more we should expect our successors – in worlds where the future of humanity is big – to have values more grabbing-prone than ours. The same holds for grabby aliens relative to us present humans. While these claims are trivially true, they seem to support conclusions that most longtermists have not paid attention to, such as “the most powerful civilizations don’t care about what the moral truth might be” (see my previous post), and “they don’t care (much) about suffering” (see my forthcoming next post).The thesisSpreading to new territories can be motivated by very different values and seems to be a convergent instrumental goal. Whatever a given agent wants, they likely have some incentive to accumulate resources and spread to new territories in order to better achieve their goal(s).However, not all moral preferences are equally conducive to expansion. Some of them value (intrinsically or instrumentally) colonization more than others. For instance, agents who value spreading intrinsically will likely colonize more and/or more efficiently than those who disvalue being the direct cause of something like “space pollution”, in the interstellar context.Therefore, there is a selection effect where the most powerful civilizations/AGIs are those who have the values that are the most prone to “grabbing”. This is the Grabby Values Selection Thesis (GST), which is the formalization and generalization of an idea that has been expressed by Robin Hanson (1998).We can differentiate between two sub-selection effects, here:The intra-civ (grabby values) selection: Within a civilization, those who colonize space and influence the future of the civilization are those with the most grabby-prone values. Here is a specific plausible instance of that selection effect, given by Robin Hanson (1998): “Far enough away from the origin of an expanding wave of interstellar colonization, and in the absence of property rights in virgin oases, a selection effect should make leading edge colonists primarily value whatever it takes to stay at the leading edge.”The inter-civ (grabby values) selection: The civilizations that end up with the most grabby-prone values will get more territory than the others.Do these two different sub-selection effects matter equally? My current impression is that this mainly depends on the likelihood of an early value lock-in – or of design escaping selection early and longlastingly, in Robin Hanson’s (2022) terminology – where “early” means “before grabby values get the time to be selected for within the civilization”. If such an early value lock-in occurs, the inter-civ selection effect is the only one left. If it doesn’t occur, however, the importance of the intra-civ selection effect seems vastly superior to that of the inter-civ one. This is mainly explained by the fact that there is very likely much more room for selection effects within a (not-locked-in) civilization than in between different civilizations.GST seems trivially true. It is pretty obvious that all values are not equal in terms of how much they value (intrinsically or instrumentally) space colonization, and that those who value space expansion more ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have?, published by Jim Buhler on May 6, 2023 on The Effective Altruism Forum.Summary: The Grabby Values Selection Thesis (or GST, for short) is the thesis that some values are more expansion-conducive (and therefore more adapted to space colonization races) than others such that we should – all else equal – expect such values to be more represented among the grabbiest civilizations/AGIs. In this post, I present and argue for GST, and raise some considerations regarding how strong and decisive we should expect this selection effect to be. The stronger it is, the more we should expect our successors – in worlds where the future of humanity is big – to have values more grabbing-prone than ours. The same holds for grabby aliens relative to us present humans. While these claims are trivially true, they seem to support conclusions that most longtermists have not paid attention to, such as “the most powerful civilizations don’t care about what the moral truth might be” (see my previous post), and “they don’t care (much) about suffering” (see my forthcoming next post).The thesisSpreading to new territories can be motivated by very different values and seems to be a convergent instrumental goal. Whatever a given agent wants, they likely have some incentive to accumulate resources and spread to new territories in order to better achieve their goal(s).However, not all moral preferences are equally conducive to expansion. Some of them value (intrinsically or instrumentally) colonization more than others. For instance, agents who value spreading intrinsically will likely colonize more and/or more efficiently than those who disvalue being the direct cause of something like “space pollution”, in the interstellar context.Therefore, there is a selection effect where the most powerful civilizations/AGIs are those who have the values that are the most prone to “grabbing”. This is the Grabby Values Selection Thesis (GST), which is the formalization and generalization of an idea that has been expressed by Robin Hanson (1998).We can differentiate between two sub-selection effects, here:The intra-civ (grabby values) selection: Within a civilization, those who colonize space and influence the future of the civilization are those with the most grabby-prone values. Here is a specific plausible instance of that selection effect, given by Robin Hanson (1998): “Far enough away from the origin of an expanding wave of interstellar colonization, and in the absence of property rights in virgin oases, a selection effect should make leading edge colonists primarily value whatever it takes to stay at the leading edge.”The inter-civ (grabby values) selection: The civilizations that end up with the most grabby-prone values will get more territory than the others.Do these two different sub-selection effects matter equally? My current impression is that this mainly depends on the likelihood of an early value lock-in – or of design escaping selection early and longlastingly, in Robin Hanson’s (2022) terminology – where “early” means “before grabby values get the time to be selected for within the civilization”. If such an early value lock-in occurs, the inter-civ selection effect is the only one left. If it doesn’t occur, however, the importance of the intra-civ selection effect seems vastly superior to that of the inter-civ one. This is mainly explained by the fact that there is very likely much more room for selection effects within a (not-locked-in) civilization than in between different civilizations.GST seems trivially true. It is pretty obvious that all values are not equal in terms of how much they value (intrinsically or instrumentally) space colonization, and that those who value space expansion more ...]]>
Jim Buhler https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:43 None full 5868
cJc3f4HmFqCZsgGJe_NL_EA_EA EA - Don't Interpret Prediction Market Prices as Probabilities by bob Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't Interpret Prediction Market Prices as Probabilities, published by bob on May 5, 2023 on The Effective Altruism Forum.Epistemic status: most of it is right, probablyPrediction markets sell shares for future events. Here's an example for the 2024 US presidential election:This market allows any US person to bet on the gender of the 2024 president. Male shares and female shares are issued in equal amounts. If the demand for shares of one gender is higher than shares of the other, the price is adjusted.At the time of writing, female shares cost 17 cents, and male shares cost 83 cents. If a female president is elected in 2024, owners of female shares will be able to cash them out for $1 each. If not, male shares can be cashed out for $1. The prices of male and female shares sum up to $1, which makes sense given that only one of them will be worth $1 in the future.Because bettors think a female president is relatively unlikely, the price for male shares is higher. The bettors may be wrong here, but the beauty of prediction markets it that anyone can put their money where their mouth is. If you believe that a female president is more likely than a male president, you can buy female shares for 17 cents a piece. If you're right, each of these shares will likely appreciate to $1 by 2024, almost sextupling your investment. If enough people predict a female president to be more likely, the demand for female shares will grow until they are more expensive than male shares. As such, the price of the shares reflects the predictions of everyone involved in the market.Even if you believe a female president is, say, 25% likely, you'd still be inclined to buy a female share for 17 cents. (That is, if you'd take a 1 in 4 chance of a 500% return on investment.) The interesting thing is that whenever you buy shares, the price will move closer to the probability you perceive be true. Only when the price matches your perceived probability, the market is no longer interesting for you. Because of this, the price of a share reflects the crowd's perceived probability of the corresponding outcome. If the market believes the probability to be 17%, the price will be 17 cents.Or so the story goes.In reality, it's more complicated.You're betting in a currency and, as such, you're betting on a currency.Let's say you believe a male president is about 90% likely, so you're considering buying male shares at 83 cents. Every 83 cents you put in can only become $1, so your maximum return on investment (ROI) is about 20%. Your expected ROI is closer to about 8% because you believe there's only a 90% chance the president will be male. Still, that's a positive ROI, so why not make the bet?This bet is denominated in US dollars, and it will be only resolved in 20 months or so. The problem is that US dollars are subject to inflation.Instead of locking up our investing money in a long-term bet for nearly two years, we could instead put it in an index fund, like the S&P 500, or invest in a large number of random stocks. Both methods have historically had a 10% annualized return. That's much better than an 8% two-year return!Because everyone thinks this way, there will be an artificially low demand for boring long-term positions, like predicting that the next US president will be male. This will drive the price of these shares down, while driving the price of shares for low-probability events up. A share that pays out USD will never have a price that reflects the market's perceived probability, because most people believe there are better things to invest in than USD.There's a solution for this, although regulators might not like it: allow people to bet bonds or shares. The famous 1 million USD bet between Warren Buffett and Protege Partners was actually not denominated in USD, but in bonds and sh...]]>
bob https://forum.effectivealtruism.org/posts/cJc3f4HmFqCZsgGJe/don-t-interpret-prediction-market-prices-as-probabilities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't Interpret Prediction Market Prices as Probabilities, published by bob on May 5, 2023 on The Effective Altruism Forum.Epistemic status: most of it is right, probablyPrediction markets sell shares for future events. Here's an example for the 2024 US presidential election:This market allows any US person to bet on the gender of the 2024 president. Male shares and female shares are issued in equal amounts. If the demand for shares of one gender is higher than shares of the other, the price is adjusted.At the time of writing, female shares cost 17 cents, and male shares cost 83 cents. If a female president is elected in 2024, owners of female shares will be able to cash them out for $1 each. If not, male shares can be cashed out for $1. The prices of male and female shares sum up to $1, which makes sense given that only one of them will be worth $1 in the future.Because bettors think a female president is relatively unlikely, the price for male shares is higher. The bettors may be wrong here, but the beauty of prediction markets it that anyone can put their money where their mouth is. If you believe that a female president is more likely than a male president, you can buy female shares for 17 cents a piece. If you're right, each of these shares will likely appreciate to $1 by 2024, almost sextupling your investment. If enough people predict a female president to be more likely, the demand for female shares will grow until they are more expensive than male shares. As such, the price of the shares reflects the predictions of everyone involved in the market.Even if you believe a female president is, say, 25% likely, you'd still be inclined to buy a female share for 17 cents. (That is, if you'd take a 1 in 4 chance of a 500% return on investment.) The interesting thing is that whenever you buy shares, the price will move closer to the probability you perceive be true. Only when the price matches your perceived probability, the market is no longer interesting for you. Because of this, the price of a share reflects the crowd's perceived probability of the corresponding outcome. If the market believes the probability to be 17%, the price will be 17 cents.Or so the story goes.In reality, it's more complicated.You're betting in a currency and, as such, you're betting on a currency.Let's say you believe a male president is about 90% likely, so you're considering buying male shares at 83 cents. Every 83 cents you put in can only become $1, so your maximum return on investment (ROI) is about 20%. Your expected ROI is closer to about 8% because you believe there's only a 90% chance the president will be male. Still, that's a positive ROI, so why not make the bet?This bet is denominated in US dollars, and it will be only resolved in 20 months or so. The problem is that US dollars are subject to inflation.Instead of locking up our investing money in a long-term bet for nearly two years, we could instead put it in an index fund, like the S&P 500, or invest in a large number of random stocks. Both methods have historically had a 10% annualized return. That's much better than an 8% two-year return!Because everyone thinks this way, there will be an artificially low demand for boring long-term positions, like predicting that the next US president will be male. This will drive the price of these shares down, while driving the price of shares for low-probability events up. A share that pays out USD will never have a price that reflects the market's perceived probability, because most people believe there are better things to invest in than USD.There's a solution for this, although regulators might not like it: allow people to bet bonds or shares. The famous 1 million USD bet between Warren Buffett and Protege Partners was actually not denominated in USD, but in bonds and sh...]]>
Sat, 06 May 2023 16:48:08 +0000 EA - Don't Interpret Prediction Market Prices as Probabilities by bob Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't Interpret Prediction Market Prices as Probabilities, published by bob on May 5, 2023 on The Effective Altruism Forum.Epistemic status: most of it is right, probablyPrediction markets sell shares for future events. Here's an example for the 2024 US presidential election:This market allows any US person to bet on the gender of the 2024 president. Male shares and female shares are issued in equal amounts. If the demand for shares of one gender is higher than shares of the other, the price is adjusted.At the time of writing, female shares cost 17 cents, and male shares cost 83 cents. If a female president is elected in 2024, owners of female shares will be able to cash them out for $1 each. If not, male shares can be cashed out for $1. The prices of male and female shares sum up to $1, which makes sense given that only one of them will be worth $1 in the future.Because bettors think a female president is relatively unlikely, the price for male shares is higher. The bettors may be wrong here, but the beauty of prediction markets it that anyone can put their money where their mouth is. If you believe that a female president is more likely than a male president, you can buy female shares for 17 cents a piece. If you're right, each of these shares will likely appreciate to $1 by 2024, almost sextupling your investment. If enough people predict a female president to be more likely, the demand for female shares will grow until they are more expensive than male shares. As such, the price of the shares reflects the predictions of everyone involved in the market.Even if you believe a female president is, say, 25% likely, you'd still be inclined to buy a female share for 17 cents. (That is, if you'd take a 1 in 4 chance of a 500% return on investment.) The interesting thing is that whenever you buy shares, the price will move closer to the probability you perceive be true. Only when the price matches your perceived probability, the market is no longer interesting for you. Because of this, the price of a share reflects the crowd's perceived probability of the corresponding outcome. If the market believes the probability to be 17%, the price will be 17 cents.Or so the story goes.In reality, it's more complicated.You're betting in a currency and, as such, you're betting on a currency.Let's say you believe a male president is about 90% likely, so you're considering buying male shares at 83 cents. Every 83 cents you put in can only become $1, so your maximum return on investment (ROI) is about 20%. Your expected ROI is closer to about 8% because you believe there's only a 90% chance the president will be male. Still, that's a positive ROI, so why not make the bet?This bet is denominated in US dollars, and it will be only resolved in 20 months or so. The problem is that US dollars are subject to inflation.Instead of locking up our investing money in a long-term bet for nearly two years, we could instead put it in an index fund, like the S&P 500, or invest in a large number of random stocks. Both methods have historically had a 10% annualized return. That's much better than an 8% two-year return!Because everyone thinks this way, there will be an artificially low demand for boring long-term positions, like predicting that the next US president will be male. This will drive the price of these shares down, while driving the price of shares for low-probability events up. A share that pays out USD will never have a price that reflects the market's perceived probability, because most people believe there are better things to invest in than USD.There's a solution for this, although regulators might not like it: allow people to bet bonds or shares. The famous 1 million USD bet between Warren Buffett and Protege Partners was actually not denominated in USD, but in bonds and sh...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't Interpret Prediction Market Prices as Probabilities, published by bob on May 5, 2023 on The Effective Altruism Forum.Epistemic status: most of it is right, probablyPrediction markets sell shares for future events. Here's an example for the 2024 US presidential election:This market allows any US person to bet on the gender of the 2024 president. Male shares and female shares are issued in equal amounts. If the demand for shares of one gender is higher than shares of the other, the price is adjusted.At the time of writing, female shares cost 17 cents, and male shares cost 83 cents. If a female president is elected in 2024, owners of female shares will be able to cash them out for $1 each. If not, male shares can be cashed out for $1. The prices of male and female shares sum up to $1, which makes sense given that only one of them will be worth $1 in the future.Because bettors think a female president is relatively unlikely, the price for male shares is higher. The bettors may be wrong here, but the beauty of prediction markets it that anyone can put their money where their mouth is. If you believe that a female president is more likely than a male president, you can buy female shares for 17 cents a piece. If you're right, each of these shares will likely appreciate to $1 by 2024, almost sextupling your investment. If enough people predict a female president to be more likely, the demand for female shares will grow until they are more expensive than male shares. As such, the price of the shares reflects the predictions of everyone involved in the market.Even if you believe a female president is, say, 25% likely, you'd still be inclined to buy a female share for 17 cents. (That is, if you'd take a 1 in 4 chance of a 500% return on investment.) The interesting thing is that whenever you buy shares, the price will move closer to the probability you perceive be true. Only when the price matches your perceived probability, the market is no longer interesting for you. Because of this, the price of a share reflects the crowd's perceived probability of the corresponding outcome. If the market believes the probability to be 17%, the price will be 17 cents.Or so the story goes.In reality, it's more complicated.You're betting in a currency and, as such, you're betting on a currency.Let's say you believe a male president is about 90% likely, so you're considering buying male shares at 83 cents. Every 83 cents you put in can only become $1, so your maximum return on investment (ROI) is about 20%. Your expected ROI is closer to about 8% because you believe there's only a 90% chance the president will be male. Still, that's a positive ROI, so why not make the bet?This bet is denominated in US dollars, and it will be only resolved in 20 months or so. The problem is that US dollars are subject to inflation.Instead of locking up our investing money in a long-term bet for nearly two years, we could instead put it in an index fund, like the S&P 500, or invest in a large number of random stocks. Both methods have historically had a 10% annualized return. That's much better than an 8% two-year return!Because everyone thinks this way, there will be an artificially low demand for boring long-term positions, like predicting that the next US president will be male. This will drive the price of these shares down, while driving the price of shares for low-probability events up. A share that pays out USD will never have a price that reflects the market's perceived probability, because most people believe there are better things to invest in than USD.There's a solution for this, although regulators might not like it: allow people to bet bonds or shares. The famous 1 million USD bet between Warren Buffett and Protege Partners was actually not denominated in USD, but in bonds and sh...]]>
bob https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:11 None full 5863
BMzmCohuPYRaGPcZD_NL_EA_EA EA - Maybe Family Planning Charities Are Better For Farmed Animals Than Animal Welfare Ones by Hank B Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Maybe Family Planning Charities Are Better For Farmed Animals Than Animal Welfare Ones, published by Hank B on May 6, 2023 on The Effective Altruism Forum.This piece estimates that a donation to the Humane League, an animal welfare organization considered highly cost-effective, and which mainly engages in corporate lobbying for higher welfare standards, saved around 4 animals per dollar donated, mostly chickens. “Saving a farmed animal” here means “preventing a farmed animal from existing” or “improving the welfare of enough farmed animals by enough to count as preventing one farmed animal from existing.” That second definition is a little weird, sorry.If you’re trying to help as many farmed animals as possible this seems like a pretty good deal. Can we do better? Maybe.Enter MSI Reproductive Choices, an international family planning organization, which mainly distributes contraception and performs abortions. They reported in 2021 that they prevented around 14 million unintended pregnancies on a total income of 290 million pounds, or 360 million dollars at time of writing. This is roughly 25 dollars per unintended pregnancy prevented. Let’s pretend that for every unintended pregnancy prevented, a child who would have been born otherwise is not born. This is plausibly true for some of these unintended pregnancies. But not all. On the other hand, MSI also provided abortions which plausibly prevent child lives as well. Maybe that means MSI prevented 14 million child lives from starting in 2021 (if we think the undercounting from not including abortions counter perfectly the overcounting of unintended pregnancy). I have no reason to think that’s particularly plausible, but let’s just keep pretending that’s right.Let’s further pretend that all of MSI’s work happened in Zambia. MSI does work in Zambia, but they also do work in lots of other countries. I choose Zambia mostly because trying to do this math with all the countries that MSI works with would be hard. Zambia had a life expectancy at birth of 62 years in 2020 according to this. According to this, Zambians consumed an average of 28kg of meat per person per year. The important subfigures here are the 2.6kg of poultry and 13kg of seafood per person per year, since chickens and fish are much lighter than other animals killed for meat. One chicken provides say 1kg of meat (I’m sort of making this number up, but similar numbers come up on google). One fish provides say 0.5kg. This means that the average Zambian would eat 2.6 chickens and 26 fish per person per year. Over a lifetime, that’d be 62 years of consumption.If a human who would have otherwise existed no longer exists because of your efforts, they also no longer eat the meat they would have eaten otherwise. Thus, if MSI prevents one human lifetime for every $25 you donate, then you’d be saving 62(2.6+26) farmed animals which is around 1,750. That’s 70 animals saved per dollar donated.This analysis is so bad in so many ways. I took the number for animals saved per dollar donated to The Humane League on total faith. I also just assumed that MSI is correct in saying that they prevented 14 million unintended pregnancies and I made clearly bad assumptions to get from that number to number of human lifetimes prevented. At least we can have some confidence in the total weight of meat consumed on average by a Zambian per year and the life expectancy at birth in Zambia. However, my way of getting from total weight to animals slaughtered is pretty hokey and doesn’t even include cows, sheep, pigs, etc. There are many other problems too. For example, I took the average cost per unintended pregnancy prevented by MSI. However, the average is not the relevant figure here. We’d like the marginal cost of preventing an additional unintended pregnancy. This is a figure I don...]]>
Hank B https://forum.effectivealtruism.org/posts/BMzmCohuPYRaGPcZD/maybe-family-planning-charities-are-better-for-farmed Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Maybe Family Planning Charities Are Better For Farmed Animals Than Animal Welfare Ones, published by Hank B on May 6, 2023 on The Effective Altruism Forum.This piece estimates that a donation to the Humane League, an animal welfare organization considered highly cost-effective, and which mainly engages in corporate lobbying for higher welfare standards, saved around 4 animals per dollar donated, mostly chickens. “Saving a farmed animal” here means “preventing a farmed animal from existing” or “improving the welfare of enough farmed animals by enough to count as preventing one farmed animal from existing.” That second definition is a little weird, sorry.If you’re trying to help as many farmed animals as possible this seems like a pretty good deal. Can we do better? Maybe.Enter MSI Reproductive Choices, an international family planning organization, which mainly distributes contraception and performs abortions. They reported in 2021 that they prevented around 14 million unintended pregnancies on a total income of 290 million pounds, or 360 million dollars at time of writing. This is roughly 25 dollars per unintended pregnancy prevented. Let’s pretend that for every unintended pregnancy prevented, a child who would have been born otherwise is not born. This is plausibly true for some of these unintended pregnancies. But not all. On the other hand, MSI also provided abortions which plausibly prevent child lives as well. Maybe that means MSI prevented 14 million child lives from starting in 2021 (if we think the undercounting from not including abortions counter perfectly the overcounting of unintended pregnancy). I have no reason to think that’s particularly plausible, but let’s just keep pretending that’s right.Let’s further pretend that all of MSI’s work happened in Zambia. MSI does work in Zambia, but they also do work in lots of other countries. I choose Zambia mostly because trying to do this math with all the countries that MSI works with would be hard. Zambia had a life expectancy at birth of 62 years in 2020 according to this. According to this, Zambians consumed an average of 28kg of meat per person per year. The important subfigures here are the 2.6kg of poultry and 13kg of seafood per person per year, since chickens and fish are much lighter than other animals killed for meat. One chicken provides say 1kg of meat (I’m sort of making this number up, but similar numbers come up on google). One fish provides say 0.5kg. This means that the average Zambian would eat 2.6 chickens and 26 fish per person per year. Over a lifetime, that’d be 62 years of consumption.If a human who would have otherwise existed no longer exists because of your efforts, they also no longer eat the meat they would have eaten otherwise. Thus, if MSI prevents one human lifetime for every $25 you donate, then you’d be saving 62(2.6+26) farmed animals which is around 1,750. That’s 70 animals saved per dollar donated.This analysis is so bad in so many ways. I took the number for animals saved per dollar donated to The Humane League on total faith. I also just assumed that MSI is correct in saying that they prevented 14 million unintended pregnancies and I made clearly bad assumptions to get from that number to number of human lifetimes prevented. At least we can have some confidence in the total weight of meat consumed on average by a Zambian per year and the life expectancy at birth in Zambia. However, my way of getting from total weight to animals slaughtered is pretty hokey and doesn’t even include cows, sheep, pigs, etc. There are many other problems too. For example, I took the average cost per unintended pregnancy prevented by MSI. However, the average is not the relevant figure here. We’d like the marginal cost of preventing an additional unintended pregnancy. This is a figure I don...]]>
Sat, 06 May 2023 05:27:59 +0000 EA - Maybe Family Planning Charities Are Better For Farmed Animals Than Animal Welfare Ones by Hank B Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Maybe Family Planning Charities Are Better For Farmed Animals Than Animal Welfare Ones, published by Hank B on May 6, 2023 on The Effective Altruism Forum.This piece estimates that a donation to the Humane League, an animal welfare organization considered highly cost-effective, and which mainly engages in corporate lobbying for higher welfare standards, saved around 4 animals per dollar donated, mostly chickens. “Saving a farmed animal” here means “preventing a farmed animal from existing” or “improving the welfare of enough farmed animals by enough to count as preventing one farmed animal from existing.” That second definition is a little weird, sorry.If you’re trying to help as many farmed animals as possible this seems like a pretty good deal. Can we do better? Maybe.Enter MSI Reproductive Choices, an international family planning organization, which mainly distributes contraception and performs abortions. They reported in 2021 that they prevented around 14 million unintended pregnancies on a total income of 290 million pounds, or 360 million dollars at time of writing. This is roughly 25 dollars per unintended pregnancy prevented. Let’s pretend that for every unintended pregnancy prevented, a child who would have been born otherwise is not born. This is plausibly true for some of these unintended pregnancies. But not all. On the other hand, MSI also provided abortions which plausibly prevent child lives as well. Maybe that means MSI prevented 14 million child lives from starting in 2021 (if we think the undercounting from not including abortions counter perfectly the overcounting of unintended pregnancy). I have no reason to think that’s particularly plausible, but let’s just keep pretending that’s right.Let’s further pretend that all of MSI’s work happened in Zambia. MSI does work in Zambia, but they also do work in lots of other countries. I choose Zambia mostly because trying to do this math with all the countries that MSI works with would be hard. Zambia had a life expectancy at birth of 62 years in 2020 according to this. According to this, Zambians consumed an average of 28kg of meat per person per year. The important subfigures here are the 2.6kg of poultry and 13kg of seafood per person per year, since chickens and fish are much lighter than other animals killed for meat. One chicken provides say 1kg of meat (I’m sort of making this number up, but similar numbers come up on google). One fish provides say 0.5kg. This means that the average Zambian would eat 2.6 chickens and 26 fish per person per year. Over a lifetime, that’d be 62 years of consumption.If a human who would have otherwise existed no longer exists because of your efforts, they also no longer eat the meat they would have eaten otherwise. Thus, if MSI prevents one human lifetime for every $25 you donate, then you’d be saving 62(2.6+26) farmed animals which is around 1,750. That’s 70 animals saved per dollar donated.This analysis is so bad in so many ways. I took the number for animals saved per dollar donated to The Humane League on total faith. I also just assumed that MSI is correct in saying that they prevented 14 million unintended pregnancies and I made clearly bad assumptions to get from that number to number of human lifetimes prevented. At least we can have some confidence in the total weight of meat consumed on average by a Zambian per year and the life expectancy at birth in Zambia. However, my way of getting from total weight to animals slaughtered is pretty hokey and doesn’t even include cows, sheep, pigs, etc. There are many other problems too. For example, I took the average cost per unintended pregnancy prevented by MSI. However, the average is not the relevant figure here. We’d like the marginal cost of preventing an additional unintended pregnancy. This is a figure I don...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Maybe Family Planning Charities Are Better For Farmed Animals Than Animal Welfare Ones, published by Hank B on May 6, 2023 on The Effective Altruism Forum.This piece estimates that a donation to the Humane League, an animal welfare organization considered highly cost-effective, and which mainly engages in corporate lobbying for higher welfare standards, saved around 4 animals per dollar donated, mostly chickens. “Saving a farmed animal” here means “preventing a farmed animal from existing” or “improving the welfare of enough farmed animals by enough to count as preventing one farmed animal from existing.” That second definition is a little weird, sorry.If you’re trying to help as many farmed animals as possible this seems like a pretty good deal. Can we do better? Maybe.Enter MSI Reproductive Choices, an international family planning organization, which mainly distributes contraception and performs abortions. They reported in 2021 that they prevented around 14 million unintended pregnancies on a total income of 290 million pounds, or 360 million dollars at time of writing. This is roughly 25 dollars per unintended pregnancy prevented. Let’s pretend that for every unintended pregnancy prevented, a child who would have been born otherwise is not born. This is plausibly true for some of these unintended pregnancies. But not all. On the other hand, MSI also provided abortions which plausibly prevent child lives as well. Maybe that means MSI prevented 14 million child lives from starting in 2021 (if we think the undercounting from not including abortions counter perfectly the overcounting of unintended pregnancy). I have no reason to think that’s particularly plausible, but let’s just keep pretending that’s right.Let’s further pretend that all of MSI’s work happened in Zambia. MSI does work in Zambia, but they also do work in lots of other countries. I choose Zambia mostly because trying to do this math with all the countries that MSI works with would be hard. Zambia had a life expectancy at birth of 62 years in 2020 according to this. According to this, Zambians consumed an average of 28kg of meat per person per year. The important subfigures here are the 2.6kg of poultry and 13kg of seafood per person per year, since chickens and fish are much lighter than other animals killed for meat. One chicken provides say 1kg of meat (I’m sort of making this number up, but similar numbers come up on google). One fish provides say 0.5kg. This means that the average Zambian would eat 2.6 chickens and 26 fish per person per year. Over a lifetime, that’d be 62 years of consumption.If a human who would have otherwise existed no longer exists because of your efforts, they also no longer eat the meat they would have eaten otherwise. Thus, if MSI prevents one human lifetime for every $25 you donate, then you’d be saving 62(2.6+26) farmed animals which is around 1,750. That’s 70 animals saved per dollar donated.This analysis is so bad in so many ways. I took the number for animals saved per dollar donated to The Humane League on total faith. I also just assumed that MSI is correct in saying that they prevented 14 million unintended pregnancies and I made clearly bad assumptions to get from that number to number of human lifetimes prevented. At least we can have some confidence in the total weight of meat consumed on average by a Zambian per year and the life expectancy at birth in Zambia. However, my way of getting from total weight to animals slaughtered is pretty hokey and doesn’t even include cows, sheep, pigs, etc. There are many other problems too. For example, I took the average cost per unintended pregnancy prevented by MSI. However, the average is not the relevant figure here. We’d like the marginal cost of preventing an additional unintended pregnancy. This is a figure I don...]]>
Hank B https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:14 None full 5861
vaqoGFRdi6ftvwGkn_NL_EA_EA EA - What is effective altruism? How could it be improved? by MichaelPlant Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is effective altruism? How could it be improved?, published by MichaelPlant on May 5, 2023 on The Effective Altruism Forum.The EA community has been convulsing since FTX. There's been lots of discontent, but almost no public discussion between community leaders, and little in the way of a constructive suggestions for what could change. In this post, I offer a reconceptualisation of what the EA community is and then use that to sketch some ideas for how to do good better together.I’m writing this purely in my personal capacity as a long-term member of the effective altruism community. I drafted this at the start of 2023, in large part to help me process my own thoughts. The ideas here are still, by my lights, dissatisfyingly underdeveloped. But I’m posting it now, in its current state and with minimal changes, because it's suddenly relevant to topical discussions about how to run the Effective Ventures Foundation and the Centre for Effective Altruism and I don't know if I would ever make time to polish it.[I'm grateful to Ben West, Chana Messinger, Luke Freeman, Jack Lewars, Nathan Young, Peter Brietbart, Sam Bernecker, and Will Troy for their comments on this. All errors are theirs mine]SummaryWe can think of effective altruists as participants in a market for maximum impact activities. It’s much like a local farmers’ market, except people are buying and selling goods and services for how best to help others.Just like people in a market, EAs don’t all share the same goal - a marketplace isn’t an army. Rather, people have different goals, based on their different accounts of what matters. The participants can agree, however, that they all want there to be a marketplace to allow them to meet and trade; this market is useful because people want different things.Presumably, the EA market should function as a free, competitive market. This means lots of choice and debate among the participants. It requires the market administrators to operate a level playing-field.Currently, the EA community doesn’t quite operate like this. The market administrators - CEA, its staff and trustees - are also major market participants, i.e. promoting particular ideas and running key organisations. And the market is dominated by one big buyer (i.e. it’s a ‘monopsony’).I suggest some possible reforms: CEA to have its trustees elected by the community; it should strive to be impartial rather than take a stand on the priorities. I don’t claim this will solve all the issues, but it should help. I'm sure there are other implications of the market model I've not thought of.These reforms seem sensible even without any of EA’s recent scandals. I do, however, explain how they would likely have helped lessened these scandals too.I’ve tried to resist getting into the minutiae of “how would EA be run if modelled on a free market?” and I would encourage readers also to resist this. I want people to focus on the basic idea and the most obvious implications, not get stuck on the details.I’m not very confident in the below. It’s an odd mix of ideas from philosophy, politics, and economics. I wrote it up in the hope others can develop the ideas and I can stop ruminating on the “what should FTX mean for EA?” question.What is EA? A market for maximum-impact altruistic activitiesWhat is effective altruism? It's described by the website effectivealtruism.org as a "research field and practical community that aims to find the best ways to help others, and put them into practice". That's all well and good, but it's not very informative if we want to understand the behaviour of individuals in the community and the functioning of the community as a whole.An alternative approach is to think of effective altruists, the people themselves, in economic terms. In this case, we might characterise the effe...]]>
MichaelPlant https://forum.effectivealtruism.org/posts/vaqoGFRdi6ftvwGkn/what-is-effective-altruism-how-could-it-be-improved Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is effective altruism? How could it be improved?, published by MichaelPlant on May 5, 2023 on The Effective Altruism Forum.The EA community has been convulsing since FTX. There's been lots of discontent, but almost no public discussion between community leaders, and little in the way of a constructive suggestions for what could change. In this post, I offer a reconceptualisation of what the EA community is and then use that to sketch some ideas for how to do good better together.I’m writing this purely in my personal capacity as a long-term member of the effective altruism community. I drafted this at the start of 2023, in large part to help me process my own thoughts. The ideas here are still, by my lights, dissatisfyingly underdeveloped. But I’m posting it now, in its current state and with minimal changes, because it's suddenly relevant to topical discussions about how to run the Effective Ventures Foundation and the Centre for Effective Altruism and I don't know if I would ever make time to polish it.[I'm grateful to Ben West, Chana Messinger, Luke Freeman, Jack Lewars, Nathan Young, Peter Brietbart, Sam Bernecker, and Will Troy for their comments on this. All errors are theirs mine]SummaryWe can think of effective altruists as participants in a market for maximum impact activities. It’s much like a local farmers’ market, except people are buying and selling goods and services for how best to help others.Just like people in a market, EAs don’t all share the same goal - a marketplace isn’t an army. Rather, people have different goals, based on their different accounts of what matters. The participants can agree, however, that they all want there to be a marketplace to allow them to meet and trade; this market is useful because people want different things.Presumably, the EA market should function as a free, competitive market. This means lots of choice and debate among the participants. It requires the market administrators to operate a level playing-field.Currently, the EA community doesn’t quite operate like this. The market administrators - CEA, its staff and trustees - are also major market participants, i.e. promoting particular ideas and running key organisations. And the market is dominated by one big buyer (i.e. it’s a ‘monopsony’).I suggest some possible reforms: CEA to have its trustees elected by the community; it should strive to be impartial rather than take a stand on the priorities. I don’t claim this will solve all the issues, but it should help. I'm sure there are other implications of the market model I've not thought of.These reforms seem sensible even without any of EA’s recent scandals. I do, however, explain how they would likely have helped lessened these scandals too.I’ve tried to resist getting into the minutiae of “how would EA be run if modelled on a free market?” and I would encourage readers also to resist this. I want people to focus on the basic idea and the most obvious implications, not get stuck on the details.I’m not very confident in the below. It’s an odd mix of ideas from philosophy, politics, and economics. I wrote it up in the hope others can develop the ideas and I can stop ruminating on the “what should FTX mean for EA?” question.What is EA? A market for maximum-impact altruistic activitiesWhat is effective altruism? It's described by the website effectivealtruism.org as a "research field and practical community that aims to find the best ways to help others, and put them into practice". That's all well and good, but it's not very informative if we want to understand the behaviour of individuals in the community and the functioning of the community as a whole.An alternative approach is to think of effective altruists, the people themselves, in economic terms. In this case, we might characterise the effe...]]>
Fri, 05 May 2023 17:53:25 +0000 EA - What is effective altruism? How could it be improved? by MichaelPlant Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is effective altruism? How could it be improved?, published by MichaelPlant on May 5, 2023 on The Effective Altruism Forum.The EA community has been convulsing since FTX. There's been lots of discontent, but almost no public discussion between community leaders, and little in the way of a constructive suggestions for what could change. In this post, I offer a reconceptualisation of what the EA community is and then use that to sketch some ideas for how to do good better together.I’m writing this purely in my personal capacity as a long-term member of the effective altruism community. I drafted this at the start of 2023, in large part to help me process my own thoughts. The ideas here are still, by my lights, dissatisfyingly underdeveloped. But I’m posting it now, in its current state and with minimal changes, because it's suddenly relevant to topical discussions about how to run the Effective Ventures Foundation and the Centre for Effective Altruism and I don't know if I would ever make time to polish it.[I'm grateful to Ben West, Chana Messinger, Luke Freeman, Jack Lewars, Nathan Young, Peter Brietbart, Sam Bernecker, and Will Troy for their comments on this. All errors are theirs mine]SummaryWe can think of effective altruists as participants in a market for maximum impact activities. It’s much like a local farmers’ market, except people are buying and selling goods and services for how best to help others.Just like people in a market, EAs don’t all share the same goal - a marketplace isn’t an army. Rather, people have different goals, based on their different accounts of what matters. The participants can agree, however, that they all want there to be a marketplace to allow them to meet and trade; this market is useful because people want different things.Presumably, the EA market should function as a free, competitive market. This means lots of choice and debate among the participants. It requires the market administrators to operate a level playing-field.Currently, the EA community doesn’t quite operate like this. The market administrators - CEA, its staff and trustees - are also major market participants, i.e. promoting particular ideas and running key organisations. And the market is dominated by one big buyer (i.e. it’s a ‘monopsony’).I suggest some possible reforms: CEA to have its trustees elected by the community; it should strive to be impartial rather than take a stand on the priorities. I don’t claim this will solve all the issues, but it should help. I'm sure there are other implications of the market model I've not thought of.These reforms seem sensible even without any of EA’s recent scandals. I do, however, explain how they would likely have helped lessened these scandals too.I’ve tried to resist getting into the minutiae of “how would EA be run if modelled on a free market?” and I would encourage readers also to resist this. I want people to focus on the basic idea and the most obvious implications, not get stuck on the details.I’m not very confident in the below. It’s an odd mix of ideas from philosophy, politics, and economics. I wrote it up in the hope others can develop the ideas and I can stop ruminating on the “what should FTX mean for EA?” question.What is EA? A market for maximum-impact altruistic activitiesWhat is effective altruism? It's described by the website effectivealtruism.org as a "research field and practical community that aims to find the best ways to help others, and put them into practice". That's all well and good, but it's not very informative if we want to understand the behaviour of individuals in the community and the functioning of the community as a whole.An alternative approach is to think of effective altruists, the people themselves, in economic terms. In this case, we might characterise the effe...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is effective altruism? How could it be improved?, published by MichaelPlant on May 5, 2023 on The Effective Altruism Forum.The EA community has been convulsing since FTX. There's been lots of discontent, but almost no public discussion between community leaders, and little in the way of a constructive suggestions for what could change. In this post, I offer a reconceptualisation of what the EA community is and then use that to sketch some ideas for how to do good better together.I’m writing this purely in my personal capacity as a long-term member of the effective altruism community. I drafted this at the start of 2023, in large part to help me process my own thoughts. The ideas here are still, by my lights, dissatisfyingly underdeveloped. But I’m posting it now, in its current state and with minimal changes, because it's suddenly relevant to topical discussions about how to run the Effective Ventures Foundation and the Centre for Effective Altruism and I don't know if I would ever make time to polish it.[I'm grateful to Ben West, Chana Messinger, Luke Freeman, Jack Lewars, Nathan Young, Peter Brietbart, Sam Bernecker, and Will Troy for their comments on this. All errors are theirs mine]SummaryWe can think of effective altruists as participants in a market for maximum impact activities. It’s much like a local farmers’ market, except people are buying and selling goods and services for how best to help others.Just like people in a market, EAs don’t all share the same goal - a marketplace isn’t an army. Rather, people have different goals, based on their different accounts of what matters. The participants can agree, however, that they all want there to be a marketplace to allow them to meet and trade; this market is useful because people want different things.Presumably, the EA market should function as a free, competitive market. This means lots of choice and debate among the participants. It requires the market administrators to operate a level playing-field.Currently, the EA community doesn’t quite operate like this. The market administrators - CEA, its staff and trustees - are also major market participants, i.e. promoting particular ideas and running key organisations. And the market is dominated by one big buyer (i.e. it’s a ‘monopsony’).I suggest some possible reforms: CEA to have its trustees elected by the community; it should strive to be impartial rather than take a stand on the priorities. I don’t claim this will solve all the issues, but it should help. I'm sure there are other implications of the market model I've not thought of.These reforms seem sensible even without any of EA’s recent scandals. I do, however, explain how they would likely have helped lessened these scandals too.I’ve tried to resist getting into the minutiae of “how would EA be run if modelled on a free market?” and I would encourage readers also to resist this. I want people to focus on the basic idea and the most obvious implications, not get stuck on the details.I’m not very confident in the below. It’s an odd mix of ideas from philosophy, politics, and economics. I wrote it up in the hope others can develop the ideas and I can stop ruminating on the “what should FTX mean for EA?” question.What is EA? A market for maximum-impact altruistic activitiesWhat is effective altruism? It's described by the website effectivealtruism.org as a "research field and practical community that aims to find the best ways to help others, and put them into practice". That's all well and good, but it's not very informative if we want to understand the behaviour of individuals in the community and the functioning of the community as a whole.An alternative approach is to think of effective altruists, the people themselves, in economic terms. In this case, we might characterise the effe...]]>
MichaelPlant https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 26:30 None full 5855
KnRcbttvkgmCvPvyn_NL_EA_EA EA - RIP Bear Braumoeller by Stephen Clare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: RIP Bear Braumoeller, published by Stephen Clare on May 5, 2023 on The Effective Altruism Forum.Professor Bear Braumoeller passed away earlier this week. Bear was a political scientist who studied the likelihood and causes of catastrophic wars. You may have read his book Only the Dead or heard his appearance on the 80,000 Hours podcast. For a short, recent example of his work I recommend this piece of his about the Russia-Ukraine war.Bear’s work on conflict likelihood, escalation, and catastrophic wars is certainly among the best research on major conflict risks. Only the Dead was an important counter to strong claims about the long-term declines in interstate violence. Bear found, in brief, that the data on war severity offer few reasons to think that the risk of huge wars (including much-larger-than-WWII-wars) has declined much. And this risk accumulates catastrophically over time.One of my favourite sentences from Bear is his darkly-humorous conclusion to a chapter on war severity (p. 130):When I sat down to write this conclusion, I briefly considered typing, “We’re all going to die,” and leaving it at that. I chose to write more, not because that conclusion is too alarmist, but because it’s not specific enough.Bear combined expertise in both statistical analysis and the theory of what causes war to great effect. He pushed forward our understanding of not just how the likelihood of major conflict has changed over time, but why. His work was interesting not just to political scientists but to anyone seeking to understand and reduce global risks.I’d corresponded with Bear frequently over the last two years while researching catastrophic conflict risks. He was generous and cared deeply about the social impact of his work. Despite my utter lack of credentials and experience, Bear gave me a lot of his time, advice, and connections to other researchers. In my experience academics rarely engage so meaningfully with outsiders. I was grateful.Bear’s interest in EA had been piqued and as far as I know he was planning to do more work on catastrophic risks. Last year his lab received a grant from the Future Fund for follow-up research on the themes he wrote about in Only the Dead.He is gone far too soon and will be missed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Stephen Clare https://forum.effectivealtruism.org/posts/KnRcbttvkgmCvPvyn/rip-bear-braumoeller Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: RIP Bear Braumoeller, published by Stephen Clare on May 5, 2023 on The Effective Altruism Forum.Professor Bear Braumoeller passed away earlier this week. Bear was a political scientist who studied the likelihood and causes of catastrophic wars. You may have read his book Only the Dead or heard his appearance on the 80,000 Hours podcast. For a short, recent example of his work I recommend this piece of his about the Russia-Ukraine war.Bear’s work on conflict likelihood, escalation, and catastrophic wars is certainly among the best research on major conflict risks. Only the Dead was an important counter to strong claims about the long-term declines in interstate violence. Bear found, in brief, that the data on war severity offer few reasons to think that the risk of huge wars (including much-larger-than-WWII-wars) has declined much. And this risk accumulates catastrophically over time.One of my favourite sentences from Bear is his darkly-humorous conclusion to a chapter on war severity (p. 130):When I sat down to write this conclusion, I briefly considered typing, “We’re all going to die,” and leaving it at that. I chose to write more, not because that conclusion is too alarmist, but because it’s not specific enough.Bear combined expertise in both statistical analysis and the theory of what causes war to great effect. He pushed forward our understanding of not just how the likelihood of major conflict has changed over time, but why. His work was interesting not just to political scientists but to anyone seeking to understand and reduce global risks.I’d corresponded with Bear frequently over the last two years while researching catastrophic conflict risks. He was generous and cared deeply about the social impact of his work. Despite my utter lack of credentials and experience, Bear gave me a lot of his time, advice, and connections to other researchers. In my experience academics rarely engage so meaningfully with outsiders. I was grateful.Bear’s interest in EA had been piqued and as far as I know he was planning to do more work on catastrophic risks. Last year his lab received a grant from the Future Fund for follow-up research on the themes he wrote about in Only the Dead.He is gone far too soon and will be missed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 05 May 2023 14:43:50 +0000 EA - RIP Bear Braumoeller by Stephen Clare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: RIP Bear Braumoeller, published by Stephen Clare on May 5, 2023 on The Effective Altruism Forum.Professor Bear Braumoeller passed away earlier this week. Bear was a political scientist who studied the likelihood and causes of catastrophic wars. You may have read his book Only the Dead or heard his appearance on the 80,000 Hours podcast. For a short, recent example of his work I recommend this piece of his about the Russia-Ukraine war.Bear’s work on conflict likelihood, escalation, and catastrophic wars is certainly among the best research on major conflict risks. Only the Dead was an important counter to strong claims about the long-term declines in interstate violence. Bear found, in brief, that the data on war severity offer few reasons to think that the risk of huge wars (including much-larger-than-WWII-wars) has declined much. And this risk accumulates catastrophically over time.One of my favourite sentences from Bear is his darkly-humorous conclusion to a chapter on war severity (p. 130):When I sat down to write this conclusion, I briefly considered typing, “We’re all going to die,” and leaving it at that. I chose to write more, not because that conclusion is too alarmist, but because it’s not specific enough.Bear combined expertise in both statistical analysis and the theory of what causes war to great effect. He pushed forward our understanding of not just how the likelihood of major conflict has changed over time, but why. His work was interesting not just to political scientists but to anyone seeking to understand and reduce global risks.I’d corresponded with Bear frequently over the last two years while researching catastrophic conflict risks. He was generous and cared deeply about the social impact of his work. Despite my utter lack of credentials and experience, Bear gave me a lot of his time, advice, and connections to other researchers. In my experience academics rarely engage so meaningfully with outsiders. I was grateful.Bear’s interest in EA had been piqued and as far as I know he was planning to do more work on catastrophic risks. Last year his lab received a grant from the Future Fund for follow-up research on the themes he wrote about in Only the Dead.He is gone far too soon and will be missed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: RIP Bear Braumoeller, published by Stephen Clare on May 5, 2023 on The Effective Altruism Forum.Professor Bear Braumoeller passed away earlier this week. Bear was a political scientist who studied the likelihood and causes of catastrophic wars. You may have read his book Only the Dead or heard his appearance on the 80,000 Hours podcast. For a short, recent example of his work I recommend this piece of his about the Russia-Ukraine war.Bear’s work on conflict likelihood, escalation, and catastrophic wars is certainly among the best research on major conflict risks. Only the Dead was an important counter to strong claims about the long-term declines in interstate violence. Bear found, in brief, that the data on war severity offer few reasons to think that the risk of huge wars (including much-larger-than-WWII-wars) has declined much. And this risk accumulates catastrophically over time.One of my favourite sentences from Bear is his darkly-humorous conclusion to a chapter on war severity (p. 130):When I sat down to write this conclusion, I briefly considered typing, “We’re all going to die,” and leaving it at that. I chose to write more, not because that conclusion is too alarmist, but because it’s not specific enough.Bear combined expertise in both statistical analysis and the theory of what causes war to great effect. He pushed forward our understanding of not just how the likelihood of major conflict has changed over time, but why. His work was interesting not just to political scientists but to anyone seeking to understand and reduce global risks.I’d corresponded with Bear frequently over the last two years while researching catastrophic conflict risks. He was generous and cared deeply about the social impact of his work. Despite my utter lack of credentials and experience, Bear gave me a lot of his time, advice, and connections to other researchers. In my experience academics rarely engage so meaningfully with outsiders. I was grateful.Bear’s interest in EA had been piqued and as far as I know he was planning to do more work on catastrophic risks. Last year his lab received a grant from the Future Fund for follow-up research on the themes he wrote about in Only the Dead.He is gone far too soon and will be missed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Stephen Clare https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:17 None full 5853
sDkHTdBsrpz7teMR2_NL_EA_EA EA - Please don’t vote brigade by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please don’t vote brigade, published by Lizka on May 5, 2023 on The Effective Altruism Forum.Once in a while, the moderators will find out that something like the following happened:Someone posted an update from their organization, and shared it on Slack or social media, asking coworkers and friends to go upvote it for increased visibility.Someone saw something they didn’t like on the Forum — maybe comments criticizing a friend, or a point of view they disagree with — and encouraged everyone in some discussion space to go downvote it.This is a form of vote brigading. It messes with karma’s ability to provide people with a signal of what to engage with and is against Forum norms.Please don’t do it. We might ban you for it.If you’re worried that someone else (or some other group) is engaging in vote brigading, bring it up to the moderators instead of trying to correct for it.Why is it bad?Karma is meant to provide a signal of what Forum users will find useful to engage with. Vote brigading turns karma into a popularity contest.Voting should be based on readers’ opinions of the content they’re voting on. If someone convinces you that a post is terrible — or great — it’s fine to downvote or upvote it as a result of that, but you should actually believe that.We should resolve disagreements by discussing them, not by comparing the sizes of the groups who agree with each position.If people try to hide criticism by downvoting it just because they feel an affinity to the group(s) criticized, the Forum will become predictably biased. We won’t have important conversations, we won’t learn from each others’ mistakes, etc.What actions should we avoid? (What counts as vote brigading?)If you’re sharing content:Don’t encourage people to all go upvote or downvote something (“everyone go upvote this!”) — especially when you have power over the people you’re talking to.It’s more ok to say “go upvote this if you think it’s good,” but it’s still borderline, and you should be careful to make sure that it doesn’t feel like pressure on people.Be careful with bias: if the content is criticizing your work, or your friend’s work, or something you feel an affinity towards — be suspicious of your ability to objectively engage with it.Consider letting other Forum users sort it out or leaving a comment explaining your point of view.If you’re voting:Please make sure you’re really voting because you think this content is good.If your friends or coworker shared their content and that’s the only thing you really engage with and vote on, interrogate your heart or mind about whether you might be biased.Please report attempts at vote brigading to us.ExamplesThere are many borderline cases. Here are some examples, sorted by how fine/bad the action of person sharing the content is:The actionIs it ok to do?You share a post (and maybe what you like or dislike about it), without explicitly asking people to upvote or downvote.It’s fine (I’m very happy for people to straightforwardly share posts with people who might find them interesting)You share a post and what you like about it, and say something like “upvote the post if you like it”You share a post that criticizes your work, and write something like “downvote the post if you think it should have less visibility” Not ok — even though there’s an “if.”. Don’t do this, especially if you’re in a leadership role. You share a post and say something like “Everyone: go upvote the post!”Not ok. Once again, it’s even worse if you’re in a leadership role with respect to the people you’re sharing the post with.On a call with other people, and you say, “there’s this post I don’t like / a post that’s criticizing me/us. Could you all upvote / downvote it?”Extremely not ok.This has the added harm of making it easy for the asker to see if the other p...]]>
Lizka https://forum.effectivealtruism.org/posts/sDkHTdBsrpz7teMR2/please-don-t-vote-brigade Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please don’t vote brigade, published by Lizka on May 5, 2023 on The Effective Altruism Forum.Once in a while, the moderators will find out that something like the following happened:Someone posted an update from their organization, and shared it on Slack or social media, asking coworkers and friends to go upvote it for increased visibility.Someone saw something they didn’t like on the Forum — maybe comments criticizing a friend, or a point of view they disagree with — and encouraged everyone in some discussion space to go downvote it.This is a form of vote brigading. It messes with karma’s ability to provide people with a signal of what to engage with and is against Forum norms.Please don’t do it. We might ban you for it.If you’re worried that someone else (or some other group) is engaging in vote brigading, bring it up to the moderators instead of trying to correct for it.Why is it bad?Karma is meant to provide a signal of what Forum users will find useful to engage with. Vote brigading turns karma into a popularity contest.Voting should be based on readers’ opinions of the content they’re voting on. If someone convinces you that a post is terrible — or great — it’s fine to downvote or upvote it as a result of that, but you should actually believe that.We should resolve disagreements by discussing them, not by comparing the sizes of the groups who agree with each position.If people try to hide criticism by downvoting it just because they feel an affinity to the group(s) criticized, the Forum will become predictably biased. We won’t have important conversations, we won’t learn from each others’ mistakes, etc.What actions should we avoid? (What counts as vote brigading?)If you’re sharing content:Don’t encourage people to all go upvote or downvote something (“everyone go upvote this!”) — especially when you have power over the people you’re talking to.It’s more ok to say “go upvote this if you think it’s good,” but it’s still borderline, and you should be careful to make sure that it doesn’t feel like pressure on people.Be careful with bias: if the content is criticizing your work, or your friend’s work, or something you feel an affinity towards — be suspicious of your ability to objectively engage with it.Consider letting other Forum users sort it out or leaving a comment explaining your point of view.If you’re voting:Please make sure you’re really voting because you think this content is good.If your friends or coworker shared their content and that’s the only thing you really engage with and vote on, interrogate your heart or mind about whether you might be biased.Please report attempts at vote brigading to us.ExamplesThere are many borderline cases. Here are some examples, sorted by how fine/bad the action of person sharing the content is:The actionIs it ok to do?You share a post (and maybe what you like or dislike about it), without explicitly asking people to upvote or downvote.It’s fine (I’m very happy for people to straightforwardly share posts with people who might find them interesting)You share a post and what you like about it, and say something like “upvote the post if you like it”You share a post that criticizes your work, and write something like “downvote the post if you think it should have less visibility” Not ok — even though there’s an “if.”. Don’t do this, especially if you’re in a leadership role. You share a post and say something like “Everyone: go upvote the post!”Not ok. Once again, it’s even worse if you’re in a leadership role with respect to the people you’re sharing the post with.On a call with other people, and you say, “there’s this post I don’t like / a post that’s criticizing me/us. Could you all upvote / downvote it?”Extremely not ok.This has the added harm of making it easy for the asker to see if the other p...]]>
Fri, 05 May 2023 09:29:58 +0000 EA - Please don’t vote brigade by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please don’t vote brigade, published by Lizka on May 5, 2023 on The Effective Altruism Forum.Once in a while, the moderators will find out that something like the following happened:Someone posted an update from their organization, and shared it on Slack or social media, asking coworkers and friends to go upvote it for increased visibility.Someone saw something they didn’t like on the Forum — maybe comments criticizing a friend, or a point of view they disagree with — and encouraged everyone in some discussion space to go downvote it.This is a form of vote brigading. It messes with karma’s ability to provide people with a signal of what to engage with and is against Forum norms.Please don’t do it. We might ban you for it.If you’re worried that someone else (or some other group) is engaging in vote brigading, bring it up to the moderators instead of trying to correct for it.Why is it bad?Karma is meant to provide a signal of what Forum users will find useful to engage with. Vote brigading turns karma into a popularity contest.Voting should be based on readers’ opinions of the content they’re voting on. If someone convinces you that a post is terrible — or great — it’s fine to downvote or upvote it as a result of that, but you should actually believe that.We should resolve disagreements by discussing them, not by comparing the sizes of the groups who agree with each position.If people try to hide criticism by downvoting it just because they feel an affinity to the group(s) criticized, the Forum will become predictably biased. We won’t have important conversations, we won’t learn from each others’ mistakes, etc.What actions should we avoid? (What counts as vote brigading?)If you’re sharing content:Don’t encourage people to all go upvote or downvote something (“everyone go upvote this!”) — especially when you have power over the people you’re talking to.It’s more ok to say “go upvote this if you think it’s good,” but it’s still borderline, and you should be careful to make sure that it doesn’t feel like pressure on people.Be careful with bias: if the content is criticizing your work, or your friend’s work, or something you feel an affinity towards — be suspicious of your ability to objectively engage with it.Consider letting other Forum users sort it out or leaving a comment explaining your point of view.If you’re voting:Please make sure you’re really voting because you think this content is good.If your friends or coworker shared their content and that’s the only thing you really engage with and vote on, interrogate your heart or mind about whether you might be biased.Please report attempts at vote brigading to us.ExamplesThere are many borderline cases. Here are some examples, sorted by how fine/bad the action of person sharing the content is:The actionIs it ok to do?You share a post (and maybe what you like or dislike about it), without explicitly asking people to upvote or downvote.It’s fine (I’m very happy for people to straightforwardly share posts with people who might find them interesting)You share a post and what you like about it, and say something like “upvote the post if you like it”You share a post that criticizes your work, and write something like “downvote the post if you think it should have less visibility” Not ok — even though there’s an “if.”. Don’t do this, especially if you’re in a leadership role. You share a post and say something like “Everyone: go upvote the post!”Not ok. Once again, it’s even worse if you’re in a leadership role with respect to the people you’re sharing the post with.On a call with other people, and you say, “there’s this post I don’t like / a post that’s criticizing me/us. Could you all upvote / downvote it?”Extremely not ok.This has the added harm of making it easy for the asker to see if the other p...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please don’t vote brigade, published by Lizka on May 5, 2023 on The Effective Altruism Forum.Once in a while, the moderators will find out that something like the following happened:Someone posted an update from their organization, and shared it on Slack or social media, asking coworkers and friends to go upvote it for increased visibility.Someone saw something they didn’t like on the Forum — maybe comments criticizing a friend, or a point of view they disagree with — and encouraged everyone in some discussion space to go downvote it.This is a form of vote brigading. It messes with karma’s ability to provide people with a signal of what to engage with and is against Forum norms.Please don’t do it. We might ban you for it.If you’re worried that someone else (or some other group) is engaging in vote brigading, bring it up to the moderators instead of trying to correct for it.Why is it bad?Karma is meant to provide a signal of what Forum users will find useful to engage with. Vote brigading turns karma into a popularity contest.Voting should be based on readers’ opinions of the content they’re voting on. If someone convinces you that a post is terrible — or great — it’s fine to downvote or upvote it as a result of that, but you should actually believe that.We should resolve disagreements by discussing them, not by comparing the sizes of the groups who agree with each position.If people try to hide criticism by downvoting it just because they feel an affinity to the group(s) criticized, the Forum will become predictably biased. We won’t have important conversations, we won’t learn from each others’ mistakes, etc.What actions should we avoid? (What counts as vote brigading?)If you’re sharing content:Don’t encourage people to all go upvote or downvote something (“everyone go upvote this!”) — especially when you have power over the people you’re talking to.It’s more ok to say “go upvote this if you think it’s good,” but it’s still borderline, and you should be careful to make sure that it doesn’t feel like pressure on people.Be careful with bias: if the content is criticizing your work, or your friend’s work, or something you feel an affinity towards — be suspicious of your ability to objectively engage with it.Consider letting other Forum users sort it out or leaving a comment explaining your point of view.If you’re voting:Please make sure you’re really voting because you think this content is good.If your friends or coworker shared their content and that’s the only thing you really engage with and vote on, interrogate your heart or mind about whether you might be biased.Please report attempts at vote brigading to us.ExamplesThere are many borderline cases. Here are some examples, sorted by how fine/bad the action of person sharing the content is:The actionIs it ok to do?You share a post (and maybe what you like or dislike about it), without explicitly asking people to upvote or downvote.It’s fine (I’m very happy for people to straightforwardly share posts with people who might find them interesting)You share a post and what you like about it, and say something like “upvote the post if you like it”You share a post that criticizes your work, and write something like “downvote the post if you think it should have less visibility” Not ok — even though there’s an “if.”. Don’t do this, especially if you’re in a leadership role. You share a post and say something like “Everyone: go upvote the post!”Not ok. Once again, it’s even worse if you’re in a leadership role with respect to the people you’re sharing the post with.On a call with other people, and you say, “there’s this post I don’t like / a post that’s criticizing me/us. Could you all upvote / downvote it?”Extremely not ok.This has the added harm of making it easy for the asker to see if the other p...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:24 None full 5851
oJQE6bALqgKKQx4Ek_NL_EA_EA EA - Orgs and Individuals Should Spend ~1 Hour/Month Making More Introductions by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Orgs & Individuals Should Spend ~1 Hour/Month Making More Introductions, published by Rockwell on May 4, 2023 on The Effective Altruism Forum.Note: This is a post I've talked about writing for >6 months, so I'm giving myself 30 minutes to write and publish it. For context, I'm the full-time director of EA NYC, an organization dedicated to building and supporting the effective altruism community in and around New York City.Claim: More organizations and individuals should allot a small amount of time to a particularly high-value activity: 1-1 or 1-org introductions.Outside the scope of this post: I'm not going to make the case here for the value of connections. Many in the community already believe they are extremely valuable, e.g. they're the primary metric CEA uses for its events.Context: I frequently meet people who are deeply engaged in EA, have ended up at an EAG(x), work for an EA or EA-adjacent organization, or are otherwise exciting and active community members, but have no idea there are existing EA groups located in their city or university, focused on their profession, or coordinating across their cause area. When they do learn about these groups, they are often thrilled and eager to plug in. Many times, they've been engaging heavily with other community members who did know, and perhaps even once mentioned such in passing, but didn't think to make a direct introduction. For many, a direct introduction dramatically increases the likelihood of their actually engaging with another individual or organization. As a result, opportunities for valuable connections and community growth are missed.Introductions can be burdensome, but they don't have to be.80,000 Hours80,000 Hours' staff frequently directly connects me to individuals over email who are based in or near NYC, whether or not they've already advised them. In 2022, they sent over 30 emails that followed a format like this:Subject: Rocky [Name]Hi both,Rocky, meet [Name]. [Name] works in [Professional Field] and lives in [Location]. They're interested in [Career Change, Learning about ___ EA Topic, Connecting with Local EAs, Something Else]. Because of this, I thought it might be useful for [Name] to speak to you and others in the EA NYC community.[Name], meet Rocky. Rocky is Director of Effective Altruism NYC. Before that she did [Career Summary] and studied [My Degree]. Effective Altruism NYC works on helping connect and grow the community of New Yorkers who are looking to do the most good through: advising, socials, reading groups, and other activities. I thought she would be a good person for you to speak with about some next steps to get more involved with Effective Altruism.Hope you get to speak soon. Thanks!Best, [80K Staff Member]They typically link to our respected LinkedIn profiles.I then set up one-on-one calls with the individuals they connect me to and many subsequently become involved in EA NYC in various capacities.EA Virtual ProgramsEA Virtual Programs does something similar:Subject: [EA NYC] Your group has a new prospective memberHi,We are the EA Virtual Programs (EA VP) team. A recent EA Virtual Programs participant has expressed an interest in joining your Effective Altruism New York City group.Name: ____Email: ____Background Info: [Involvement in EA] [Profession] [Location] [LinkedIn]Note these connections come from the participants themselves, as they nominated they would like to get in touch with your group specifically in our exit survey.It would be wonderful for them to get a warm welcome to your group. Please do reach out to them in 1-2 weeks preferably. However, no worries if this is not a priority for you now.I hope these connections are valuable!Sincerely,EA Virtual ProgramsIn both cases, the connector receives permission from both parties, something eas...]]>
Rockwell https://forum.effectivealtruism.org/posts/oJQE6bALqgKKQx4Ek/orgs-and-individuals-should-spend-1-hour-month-making-more Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Orgs & Individuals Should Spend ~1 Hour/Month Making More Introductions, published by Rockwell on May 4, 2023 on The Effective Altruism Forum.Note: This is a post I've talked about writing for >6 months, so I'm giving myself 30 minutes to write and publish it. For context, I'm the full-time director of EA NYC, an organization dedicated to building and supporting the effective altruism community in and around New York City.Claim: More organizations and individuals should allot a small amount of time to a particularly high-value activity: 1-1 or 1-org introductions.Outside the scope of this post: I'm not going to make the case here for the value of connections. Many in the community already believe they are extremely valuable, e.g. they're the primary metric CEA uses for its events.Context: I frequently meet people who are deeply engaged in EA, have ended up at an EAG(x), work for an EA or EA-adjacent organization, or are otherwise exciting and active community members, but have no idea there are existing EA groups located in their city or university, focused on their profession, or coordinating across their cause area. When they do learn about these groups, they are often thrilled and eager to plug in. Many times, they've been engaging heavily with other community members who did know, and perhaps even once mentioned such in passing, but didn't think to make a direct introduction. For many, a direct introduction dramatically increases the likelihood of their actually engaging with another individual or organization. As a result, opportunities for valuable connections and community growth are missed.Introductions can be burdensome, but they don't have to be.80,000 Hours80,000 Hours' staff frequently directly connects me to individuals over email who are based in or near NYC, whether or not they've already advised them. In 2022, they sent over 30 emails that followed a format like this:Subject: Rocky [Name]Hi both,Rocky, meet [Name]. [Name] works in [Professional Field] and lives in [Location]. They're interested in [Career Change, Learning about ___ EA Topic, Connecting with Local EAs, Something Else]. Because of this, I thought it might be useful for [Name] to speak to you and others in the EA NYC community.[Name], meet Rocky. Rocky is Director of Effective Altruism NYC. Before that she did [Career Summary] and studied [My Degree]. Effective Altruism NYC works on helping connect and grow the community of New Yorkers who are looking to do the most good through: advising, socials, reading groups, and other activities. I thought she would be a good person for you to speak with about some next steps to get more involved with Effective Altruism.Hope you get to speak soon. Thanks!Best, [80K Staff Member]They typically link to our respected LinkedIn profiles.I then set up one-on-one calls with the individuals they connect me to and many subsequently become involved in EA NYC in various capacities.EA Virtual ProgramsEA Virtual Programs does something similar:Subject: [EA NYC] Your group has a new prospective memberHi,We are the EA Virtual Programs (EA VP) team. A recent EA Virtual Programs participant has expressed an interest in joining your Effective Altruism New York City group.Name: ____Email: ____Background Info: [Involvement in EA] [Profession] [Location] [LinkedIn]Note these connections come from the participants themselves, as they nominated they would like to get in touch with your group specifically in our exit survey.It would be wonderful for them to get a warm welcome to your group. Please do reach out to them in 1-2 weeks preferably. However, no worries if this is not a priority for you now.I hope these connections are valuable!Sincerely,EA Virtual ProgramsIn both cases, the connector receives permission from both parties, something eas...]]>
Fri, 05 May 2023 09:29:07 +0000 EA - Orgs and Individuals Should Spend ~1 Hour/Month Making More Introductions by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Orgs & Individuals Should Spend ~1 Hour/Month Making More Introductions, published by Rockwell on May 4, 2023 on The Effective Altruism Forum.Note: This is a post I've talked about writing for >6 months, so I'm giving myself 30 minutes to write and publish it. For context, I'm the full-time director of EA NYC, an organization dedicated to building and supporting the effective altruism community in and around New York City.Claim: More organizations and individuals should allot a small amount of time to a particularly high-value activity: 1-1 or 1-org introductions.Outside the scope of this post: I'm not going to make the case here for the value of connections. Many in the community already believe they are extremely valuable, e.g. they're the primary metric CEA uses for its events.Context: I frequently meet people who are deeply engaged in EA, have ended up at an EAG(x), work for an EA or EA-adjacent organization, or are otherwise exciting and active community members, but have no idea there are existing EA groups located in their city or university, focused on their profession, or coordinating across their cause area. When they do learn about these groups, they are often thrilled and eager to plug in. Many times, they've been engaging heavily with other community members who did know, and perhaps even once mentioned such in passing, but didn't think to make a direct introduction. For many, a direct introduction dramatically increases the likelihood of their actually engaging with another individual or organization. As a result, opportunities for valuable connections and community growth are missed.Introductions can be burdensome, but they don't have to be.80,000 Hours80,000 Hours' staff frequently directly connects me to individuals over email who are based in or near NYC, whether or not they've already advised them. In 2022, they sent over 30 emails that followed a format like this:Subject: Rocky [Name]Hi both,Rocky, meet [Name]. [Name] works in [Professional Field] and lives in [Location]. They're interested in [Career Change, Learning about ___ EA Topic, Connecting with Local EAs, Something Else]. Because of this, I thought it might be useful for [Name] to speak to you and others in the EA NYC community.[Name], meet Rocky. Rocky is Director of Effective Altruism NYC. Before that she did [Career Summary] and studied [My Degree]. Effective Altruism NYC works on helping connect and grow the community of New Yorkers who are looking to do the most good through: advising, socials, reading groups, and other activities. I thought she would be a good person for you to speak with about some next steps to get more involved with Effective Altruism.Hope you get to speak soon. Thanks!Best, [80K Staff Member]They typically link to our respected LinkedIn profiles.I then set up one-on-one calls with the individuals they connect me to and many subsequently become involved in EA NYC in various capacities.EA Virtual ProgramsEA Virtual Programs does something similar:Subject: [EA NYC] Your group has a new prospective memberHi,We are the EA Virtual Programs (EA VP) team. A recent EA Virtual Programs participant has expressed an interest in joining your Effective Altruism New York City group.Name: ____Email: ____Background Info: [Involvement in EA] [Profession] [Location] [LinkedIn]Note these connections come from the participants themselves, as they nominated they would like to get in touch with your group specifically in our exit survey.It would be wonderful for them to get a warm welcome to your group. Please do reach out to them in 1-2 weeks preferably. However, no worries if this is not a priority for you now.I hope these connections are valuable!Sincerely,EA Virtual ProgramsIn both cases, the connector receives permission from both parties, something eas...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Orgs & Individuals Should Spend ~1 Hour/Month Making More Introductions, published by Rockwell on May 4, 2023 on The Effective Altruism Forum.Note: This is a post I've talked about writing for >6 months, so I'm giving myself 30 minutes to write and publish it. For context, I'm the full-time director of EA NYC, an organization dedicated to building and supporting the effective altruism community in and around New York City.Claim: More organizations and individuals should allot a small amount of time to a particularly high-value activity: 1-1 or 1-org introductions.Outside the scope of this post: I'm not going to make the case here for the value of connections. Many in the community already believe they are extremely valuable, e.g. they're the primary metric CEA uses for its events.Context: I frequently meet people who are deeply engaged in EA, have ended up at an EAG(x), work for an EA or EA-adjacent organization, or are otherwise exciting and active community members, but have no idea there are existing EA groups located in their city or university, focused on their profession, or coordinating across their cause area. When they do learn about these groups, they are often thrilled and eager to plug in. Many times, they've been engaging heavily with other community members who did know, and perhaps even once mentioned such in passing, but didn't think to make a direct introduction. For many, a direct introduction dramatically increases the likelihood of their actually engaging with another individual or organization. As a result, opportunities for valuable connections and community growth are missed.Introductions can be burdensome, but they don't have to be.80,000 Hours80,000 Hours' staff frequently directly connects me to individuals over email who are based in or near NYC, whether or not they've already advised them. In 2022, they sent over 30 emails that followed a format like this:Subject: Rocky [Name]Hi both,Rocky, meet [Name]. [Name] works in [Professional Field] and lives in [Location]. They're interested in [Career Change, Learning about ___ EA Topic, Connecting with Local EAs, Something Else]. Because of this, I thought it might be useful for [Name] to speak to you and others in the EA NYC community.[Name], meet Rocky. Rocky is Director of Effective Altruism NYC. Before that she did [Career Summary] and studied [My Degree]. Effective Altruism NYC works on helping connect and grow the community of New Yorkers who are looking to do the most good through: advising, socials, reading groups, and other activities. I thought she would be a good person for you to speak with about some next steps to get more involved with Effective Altruism.Hope you get to speak soon. Thanks!Best, [80K Staff Member]They typically link to our respected LinkedIn profiles.I then set up one-on-one calls with the individuals they connect me to and many subsequently become involved in EA NYC in various capacities.EA Virtual ProgramsEA Virtual Programs does something similar:Subject: [EA NYC] Your group has a new prospective memberHi,We are the EA Virtual Programs (EA VP) team. A recent EA Virtual Programs participant has expressed an interest in joining your Effective Altruism New York City group.Name: ____Email: ____Background Info: [Involvement in EA] [Profession] [Location] [LinkedIn]Note these connections come from the participants themselves, as they nominated they would like to get in touch with your group specifically in our exit survey.It would be wonderful for them to get a warm welcome to your group. Please do reach out to them in 1-2 weeks preferably. However, no worries if this is not a priority for you now.I hope these connections are valuable!Sincerely,EA Virtual ProgramsIn both cases, the connector receives permission from both parties, something eas...]]>
Rockwell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:21 None full 5852
YweBjDwgdco669H72_NL_EA_EA EA - AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results. by Otto Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results., published by Otto on May 4, 2023 on The Effective Altruism Forum.This is a summary of a follow-up study conducted by the Existential Risk Observatory, which delves into a greater number media items. To access our previous study, please follow this link. The data collected will be presented in two separate posts. The first post, which is the current one, has two parts. The first part examines the key indicators used in the previous research, such as "Human Extinction Events" and "Human Extinction Percentage," along with a new key indicator called "Concern Level." The Concern Level indicator assesses participants' level of concern about AI existential risk on a scale of 0 to 10 before and after the intervention. The second part analyzes the changes in public awareness about AI existential risk over time. It also explores the connection between the effectiveness of different media formats, namely articles and videos, and their length in raising awareness.In addition, it investigates how trust levels are related to the effectiveness of media sources in increasing public awareness of AI existential risk. In the second post, the research covers a new aspect of this study: participants' opinions on an AI moratorium and their likelihood of voting for it.PART 1: Effectiveness per media itemThis research aimed to evaluate the effectiveness of AI existential risk communication in increasing awareness of the potential risks posed by AI to human extinction.Research Objectives: The objective of the study was to determine the effectiveness of AI existential risk communication in raising public awareness. This was done by examining the changes in participants' views on the likelihood and ranking of AI as a potential cause of extinction before and after the intervention. Furthermore, the study evaluated the difference in the level of concern of participants before and after the intervention.Measurements and Operationalization: Three primary measurements - "Human Extinction Events," "Human Extinction Percentage," and "Concern Level" - were utilized to examine alterations in participants' perceptions. The coding scheme that was previously used in our research was employed to assess participants' increased awareness of AI. The data was gathered through Prolific, a platform that locates survey respondents based on predefined criteria. The study involved 350 participants, with 50 participants in each survey, who were required to be at least 18 years old, residents of the United States, and fluent in English.Data Collection and Analysis: Data was collected through surveys in April 2023. The data analysis comprised three main sections: (1) comparing changes in the key indicators before and after the intervention, (2) exploring participants' views on the possibility of an AI moratorium and their likelihood of voting for it, and (3) assessing the number of participants who were familiar with or had confidence in the media channel used in the intervention.Media Items Examined:CNN: Stuart Russell on why A.I. experiments must be pausedCNBC: Here's why A.I. needs a six-month pause: NYU Professor Gary MarcusThe Economist: How to stop AI going rogueTime 1: Why Uncontrollable AI Looks More Likely Than Ever | TimeTime 2: The Only Way to Deal With the Threat From AI? Shut It Down | TimeFoxNews Article: Artificial intelligence 'godfather' on AI possibly wiping out humanity: ‘It's not inconceivable’ | ArticleFoxNews Video: White House responds to concerns about AI development | VideoResults:Human Extinction EventsThe graph below displays the percentage of increased awareness across various media sources. The Economist survey showed the highest increase in awareness at 52 ...]]>
Otto https://forum.effectivealtruism.org/posts/YweBjDwgdco669H72/ai-x-risk-in-the-news-how-effective-are-recent-media-items Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results., published by Otto on May 4, 2023 on The Effective Altruism Forum.This is a summary of a follow-up study conducted by the Existential Risk Observatory, which delves into a greater number media items. To access our previous study, please follow this link. The data collected will be presented in two separate posts. The first post, which is the current one, has two parts. The first part examines the key indicators used in the previous research, such as "Human Extinction Events" and "Human Extinction Percentage," along with a new key indicator called "Concern Level." The Concern Level indicator assesses participants' level of concern about AI existential risk on a scale of 0 to 10 before and after the intervention. The second part analyzes the changes in public awareness about AI existential risk over time. It also explores the connection between the effectiveness of different media formats, namely articles and videos, and their length in raising awareness.In addition, it investigates how trust levels are related to the effectiveness of media sources in increasing public awareness of AI existential risk. In the second post, the research covers a new aspect of this study: participants' opinions on an AI moratorium and their likelihood of voting for it.PART 1: Effectiveness per media itemThis research aimed to evaluate the effectiveness of AI existential risk communication in increasing awareness of the potential risks posed by AI to human extinction.Research Objectives: The objective of the study was to determine the effectiveness of AI existential risk communication in raising public awareness. This was done by examining the changes in participants' views on the likelihood and ranking of AI as a potential cause of extinction before and after the intervention. Furthermore, the study evaluated the difference in the level of concern of participants before and after the intervention.Measurements and Operationalization: Three primary measurements - "Human Extinction Events," "Human Extinction Percentage," and "Concern Level" - were utilized to examine alterations in participants' perceptions. The coding scheme that was previously used in our research was employed to assess participants' increased awareness of AI. The data was gathered through Prolific, a platform that locates survey respondents based on predefined criteria. The study involved 350 participants, with 50 participants in each survey, who were required to be at least 18 years old, residents of the United States, and fluent in English.Data Collection and Analysis: Data was collected through surveys in April 2023. The data analysis comprised three main sections: (1) comparing changes in the key indicators before and after the intervention, (2) exploring participants' views on the possibility of an AI moratorium and their likelihood of voting for it, and (3) assessing the number of participants who were familiar with or had confidence in the media channel used in the intervention.Media Items Examined:CNN: Stuart Russell on why A.I. experiments must be pausedCNBC: Here's why A.I. needs a six-month pause: NYU Professor Gary MarcusThe Economist: How to stop AI going rogueTime 1: Why Uncontrollable AI Looks More Likely Than Ever | TimeTime 2: The Only Way to Deal With the Threat From AI? Shut It Down | TimeFoxNews Article: Artificial intelligence 'godfather' on AI possibly wiping out humanity: ‘It's not inconceivable’ | ArticleFoxNews Video: White House responds to concerns about AI development | VideoResults:Human Extinction EventsThe graph below displays the percentage of increased awareness across various media sources. The Economist survey showed the highest increase in awareness at 52 ...]]>
Thu, 04 May 2023 20:07:45 +0000 EA - AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results. by Otto Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results., published by Otto on May 4, 2023 on The Effective Altruism Forum.This is a summary of a follow-up study conducted by the Existential Risk Observatory, which delves into a greater number media items. To access our previous study, please follow this link. The data collected will be presented in two separate posts. The first post, which is the current one, has two parts. The first part examines the key indicators used in the previous research, such as "Human Extinction Events" and "Human Extinction Percentage," along with a new key indicator called "Concern Level." The Concern Level indicator assesses participants' level of concern about AI existential risk on a scale of 0 to 10 before and after the intervention. The second part analyzes the changes in public awareness about AI existential risk over time. It also explores the connection between the effectiveness of different media formats, namely articles and videos, and their length in raising awareness.In addition, it investigates how trust levels are related to the effectiveness of media sources in increasing public awareness of AI existential risk. In the second post, the research covers a new aspect of this study: participants' opinions on an AI moratorium and their likelihood of voting for it.PART 1: Effectiveness per media itemThis research aimed to evaluate the effectiveness of AI existential risk communication in increasing awareness of the potential risks posed by AI to human extinction.Research Objectives: The objective of the study was to determine the effectiveness of AI existential risk communication in raising public awareness. This was done by examining the changes in participants' views on the likelihood and ranking of AI as a potential cause of extinction before and after the intervention. Furthermore, the study evaluated the difference in the level of concern of participants before and after the intervention.Measurements and Operationalization: Three primary measurements - "Human Extinction Events," "Human Extinction Percentage," and "Concern Level" - were utilized to examine alterations in participants' perceptions. The coding scheme that was previously used in our research was employed to assess participants' increased awareness of AI. The data was gathered through Prolific, a platform that locates survey respondents based on predefined criteria. The study involved 350 participants, with 50 participants in each survey, who were required to be at least 18 years old, residents of the United States, and fluent in English.Data Collection and Analysis: Data was collected through surveys in April 2023. The data analysis comprised three main sections: (1) comparing changes in the key indicators before and after the intervention, (2) exploring participants' views on the possibility of an AI moratorium and their likelihood of voting for it, and (3) assessing the number of participants who were familiar with or had confidence in the media channel used in the intervention.Media Items Examined:CNN: Stuart Russell on why A.I. experiments must be pausedCNBC: Here's why A.I. needs a six-month pause: NYU Professor Gary MarcusThe Economist: How to stop AI going rogueTime 1: Why Uncontrollable AI Looks More Likely Than Ever | TimeTime 2: The Only Way to Deal With the Threat From AI? Shut It Down | TimeFoxNews Article: Artificial intelligence 'godfather' on AI possibly wiping out humanity: ‘It's not inconceivable’ | ArticleFoxNews Video: White House responds to concerns about AI development | VideoResults:Human Extinction EventsThe graph below displays the percentage of increased awareness across various media sources. The Economist survey showed the highest increase in awareness at 52 ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results., published by Otto on May 4, 2023 on The Effective Altruism Forum.This is a summary of a follow-up study conducted by the Existential Risk Observatory, which delves into a greater number media items. To access our previous study, please follow this link. The data collected will be presented in two separate posts. The first post, which is the current one, has two parts. The first part examines the key indicators used in the previous research, such as "Human Extinction Events" and "Human Extinction Percentage," along with a new key indicator called "Concern Level." The Concern Level indicator assesses participants' level of concern about AI existential risk on a scale of 0 to 10 before and after the intervention. The second part analyzes the changes in public awareness about AI existential risk over time. It also explores the connection between the effectiveness of different media formats, namely articles and videos, and their length in raising awareness.In addition, it investigates how trust levels are related to the effectiveness of media sources in increasing public awareness of AI existential risk. In the second post, the research covers a new aspect of this study: participants' opinions on an AI moratorium and their likelihood of voting for it.PART 1: Effectiveness per media itemThis research aimed to evaluate the effectiveness of AI existential risk communication in increasing awareness of the potential risks posed by AI to human extinction.Research Objectives: The objective of the study was to determine the effectiveness of AI existential risk communication in raising public awareness. This was done by examining the changes in participants' views on the likelihood and ranking of AI as a potential cause of extinction before and after the intervention. Furthermore, the study evaluated the difference in the level of concern of participants before and after the intervention.Measurements and Operationalization: Three primary measurements - "Human Extinction Events," "Human Extinction Percentage," and "Concern Level" - were utilized to examine alterations in participants' perceptions. The coding scheme that was previously used in our research was employed to assess participants' increased awareness of AI. The data was gathered through Prolific, a platform that locates survey respondents based on predefined criteria. The study involved 350 participants, with 50 participants in each survey, who were required to be at least 18 years old, residents of the United States, and fluent in English.Data Collection and Analysis: Data was collected through surveys in April 2023. The data analysis comprised three main sections: (1) comparing changes in the key indicators before and after the intervention, (2) exploring participants' views on the possibility of an AI moratorium and their likelihood of voting for it, and (3) assessing the number of participants who were familiar with or had confidence in the media channel used in the intervention.Media Items Examined:CNN: Stuart Russell on why A.I. experiments must be pausedCNBC: Here's why A.I. needs a six-month pause: NYU Professor Gary MarcusThe Economist: How to stop AI going rogueTime 1: Why Uncontrollable AI Looks More Likely Than Ever | TimeTime 2: The Only Way to Deal With the Threat From AI? Shut It Down | TimeFoxNews Article: Artificial intelligence 'godfather' on AI possibly wiping out humanity: ‘It's not inconceivable’ | ArticleFoxNews Video: White House responds to concerns about AI development | VideoResults:Human Extinction EventsThe graph below displays the percentage of increased awareness across various media sources. The Economist survey showed the highest increase in awareness at 52 ...]]>
Otto https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 17:02 None full 5843
AFPXXepkgitbvTtpH_NL_EA_EA EA - Getting Cats Vegan is Possible and Imperative by Karthik Sekar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Cats Vegan is Possible and Imperative, published by Karthik Sekar on May 4, 2023 on The Effective Altruism Forum.SummaryCarnivore is a classification, not a diet requirement.The amount of meat that cats eat is significant. Transitioning domestic cats to eating vegan would do much good for the environment and animal welfare.Having vegan cats now is not convenient, but we (humanity) should make that so.We do not need to wait around for cultivated meat. There are tractable opportunities now.We also need randomized control trials with measured health outcomes; funding is the main limitation here.Making domestic cats vegan meets all of the Effective Altruism criteria: significant, tractable, and neglected.MainImagine you are a surveyor traveling to remote parts of the world. Within a thick rainforest, you come across an indigenous group long separated from the modern world. They fashion spears to hunt fish and thicket baskets to collect foraged berries. Notably, they wear distinctive yellow loincloths dyed with local fruit. You are not one with words, so you call them the Yellowclothea.This is not a farfetched story. Most species worldwide are classified similarly–someone observes them and then contrives a classification named on what they see. Carnivora was coined in 1821 to describe an Order of animals by the observation that they consumed the meat of other animals–carnem vorāre is Latin for “to eat flesh”.Let us go back to the Yellowclothea. You can already intuit that these natives do not have to wear the yellow loincloths–it is simply what you initially observed. If the natives swapped the dye with purple or green, that would work out fine. However, the rainforest lacks those colors, so Yellowclothea is resigned to their monotone. In other words, wearing yellow cloth is not a requirement for them to live, just what works for them and is available.1821, the year of Carnivora’s naming, is ages ago in the scientific world. It was before the Theory of Evolution, first described in The Origin of Species in 1859. It was before the molecular biology revolution. It was before we understood the basis of metabolism and nutrition. So it is easy to confuse classification/observation with the requirement. It is the same fallacy as assuming that the Yellowclothea people can only wear yellow clothing.Since 1821, we learned more about nutrition, molecular biology, and metabolism to demystify meat. Meat is mostly muscle fibers with some marbled fat and critical nutrients. Carnivora animals generally have more acidic stomachs and shorter gastrointestinal (GI) tracts than nominal herbivores. The extra acid helps chop proteins into the alphabet amino acid molecules, which are readily taken up, so a long GI tract is unnecessary.So Carnivora animals cannot have salads or raw vegetables, which are rich in fiber and would not break down in their GI tracts in time. Nevertheless, we can make protein-rich and highly digestible foods for Carnivora starting from plant and microbial ingredients. Just as a cow will chemically process the plants into their muscle–flesh, we can similarly turn the plants into food that a carnivore would thrive off without an animal intermediary. In other words, we can source all the required nutrients from elsewhere, without meat.There is—at least in theory—no reason why diets comprised entirely of plants, minerals, and synthetically-based ingredients (i.e., vegan diets) cannot meet the necessary palatability, bioavailability, and nutritional requirements of catsAndrew Knight, Director, Centre for Animal Welfare, University of WinchesterI have written about how succeeding meat, dairy, and eggs with plant and microbial-based alternatives will be one of the best things we ever do–I argue that it is better than curing cancer or transitioning ful...]]>
Karthik Sekar https://forum.effectivealtruism.org/posts/AFPXXepkgitbvTtpH/getting-cats-vegan-is-possible-and-imperative Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Cats Vegan is Possible and Imperative, published by Karthik Sekar on May 4, 2023 on The Effective Altruism Forum.SummaryCarnivore is a classification, not a diet requirement.The amount of meat that cats eat is significant. Transitioning domestic cats to eating vegan would do much good for the environment and animal welfare.Having vegan cats now is not convenient, but we (humanity) should make that so.We do not need to wait around for cultivated meat. There are tractable opportunities now.We also need randomized control trials with measured health outcomes; funding is the main limitation here.Making domestic cats vegan meets all of the Effective Altruism criteria: significant, tractable, and neglected.MainImagine you are a surveyor traveling to remote parts of the world. Within a thick rainforest, you come across an indigenous group long separated from the modern world. They fashion spears to hunt fish and thicket baskets to collect foraged berries. Notably, they wear distinctive yellow loincloths dyed with local fruit. You are not one with words, so you call them the Yellowclothea.This is not a farfetched story. Most species worldwide are classified similarly–someone observes them and then contrives a classification named on what they see. Carnivora was coined in 1821 to describe an Order of animals by the observation that they consumed the meat of other animals–carnem vorāre is Latin for “to eat flesh”.Let us go back to the Yellowclothea. You can already intuit that these natives do not have to wear the yellow loincloths–it is simply what you initially observed. If the natives swapped the dye with purple or green, that would work out fine. However, the rainforest lacks those colors, so Yellowclothea is resigned to their monotone. In other words, wearing yellow cloth is not a requirement for them to live, just what works for them and is available.1821, the year of Carnivora’s naming, is ages ago in the scientific world. It was before the Theory of Evolution, first described in The Origin of Species in 1859. It was before the molecular biology revolution. It was before we understood the basis of metabolism and nutrition. So it is easy to confuse classification/observation with the requirement. It is the same fallacy as assuming that the Yellowclothea people can only wear yellow clothing.Since 1821, we learned more about nutrition, molecular biology, and metabolism to demystify meat. Meat is mostly muscle fibers with some marbled fat and critical nutrients. Carnivora animals generally have more acidic stomachs and shorter gastrointestinal (GI) tracts than nominal herbivores. The extra acid helps chop proteins into the alphabet amino acid molecules, which are readily taken up, so a long GI tract is unnecessary.So Carnivora animals cannot have salads or raw vegetables, which are rich in fiber and would not break down in their GI tracts in time. Nevertheless, we can make protein-rich and highly digestible foods for Carnivora starting from plant and microbial ingredients. Just as a cow will chemically process the plants into their muscle–flesh, we can similarly turn the plants into food that a carnivore would thrive off without an animal intermediary. In other words, we can source all the required nutrients from elsewhere, without meat.There is—at least in theory—no reason why diets comprised entirely of plants, minerals, and synthetically-based ingredients (i.e., vegan diets) cannot meet the necessary palatability, bioavailability, and nutritional requirements of catsAndrew Knight, Director, Centre for Animal Welfare, University of WinchesterI have written about how succeeding meat, dairy, and eggs with plant and microbial-based alternatives will be one of the best things we ever do–I argue that it is better than curing cancer or transitioning ful...]]>
Thu, 04 May 2023 17:22:10 +0000 EA - Getting Cats Vegan is Possible and Imperative by Karthik Sekar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Cats Vegan is Possible and Imperative, published by Karthik Sekar on May 4, 2023 on The Effective Altruism Forum.SummaryCarnivore is a classification, not a diet requirement.The amount of meat that cats eat is significant. Transitioning domestic cats to eating vegan would do much good for the environment and animal welfare.Having vegan cats now is not convenient, but we (humanity) should make that so.We do not need to wait around for cultivated meat. There are tractable opportunities now.We also need randomized control trials with measured health outcomes; funding is the main limitation here.Making domestic cats vegan meets all of the Effective Altruism criteria: significant, tractable, and neglected.MainImagine you are a surveyor traveling to remote parts of the world. Within a thick rainforest, you come across an indigenous group long separated from the modern world. They fashion spears to hunt fish and thicket baskets to collect foraged berries. Notably, they wear distinctive yellow loincloths dyed with local fruit. You are not one with words, so you call them the Yellowclothea.This is not a farfetched story. Most species worldwide are classified similarly–someone observes them and then contrives a classification named on what they see. Carnivora was coined in 1821 to describe an Order of animals by the observation that they consumed the meat of other animals–carnem vorāre is Latin for “to eat flesh”.Let us go back to the Yellowclothea. You can already intuit that these natives do not have to wear the yellow loincloths–it is simply what you initially observed. If the natives swapped the dye with purple or green, that would work out fine. However, the rainforest lacks those colors, so Yellowclothea is resigned to their monotone. In other words, wearing yellow cloth is not a requirement for them to live, just what works for them and is available.1821, the year of Carnivora’s naming, is ages ago in the scientific world. It was before the Theory of Evolution, first described in The Origin of Species in 1859. It was before the molecular biology revolution. It was before we understood the basis of metabolism and nutrition. So it is easy to confuse classification/observation with the requirement. It is the same fallacy as assuming that the Yellowclothea people can only wear yellow clothing.Since 1821, we learned more about nutrition, molecular biology, and metabolism to demystify meat. Meat is mostly muscle fibers with some marbled fat and critical nutrients. Carnivora animals generally have more acidic stomachs and shorter gastrointestinal (GI) tracts than nominal herbivores. The extra acid helps chop proteins into the alphabet amino acid molecules, which are readily taken up, so a long GI tract is unnecessary.So Carnivora animals cannot have salads or raw vegetables, which are rich in fiber and would not break down in their GI tracts in time. Nevertheless, we can make protein-rich and highly digestible foods for Carnivora starting from plant and microbial ingredients. Just as a cow will chemically process the plants into their muscle–flesh, we can similarly turn the plants into food that a carnivore would thrive off without an animal intermediary. In other words, we can source all the required nutrients from elsewhere, without meat.There is—at least in theory—no reason why diets comprised entirely of plants, minerals, and synthetically-based ingredients (i.e., vegan diets) cannot meet the necessary palatability, bioavailability, and nutritional requirements of catsAndrew Knight, Director, Centre for Animal Welfare, University of WinchesterI have written about how succeeding meat, dairy, and eggs with plant and microbial-based alternatives will be one of the best things we ever do–I argue that it is better than curing cancer or transitioning ful...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Cats Vegan is Possible and Imperative, published by Karthik Sekar on May 4, 2023 on The Effective Altruism Forum.SummaryCarnivore is a classification, not a diet requirement.The amount of meat that cats eat is significant. Transitioning domestic cats to eating vegan would do much good for the environment and animal welfare.Having vegan cats now is not convenient, but we (humanity) should make that so.We do not need to wait around for cultivated meat. There are tractable opportunities now.We also need randomized control trials with measured health outcomes; funding is the main limitation here.Making domestic cats vegan meets all of the Effective Altruism criteria: significant, tractable, and neglected.MainImagine you are a surveyor traveling to remote parts of the world. Within a thick rainforest, you come across an indigenous group long separated from the modern world. They fashion spears to hunt fish and thicket baskets to collect foraged berries. Notably, they wear distinctive yellow loincloths dyed with local fruit. You are not one with words, so you call them the Yellowclothea.This is not a farfetched story. Most species worldwide are classified similarly–someone observes them and then contrives a classification named on what they see. Carnivora was coined in 1821 to describe an Order of animals by the observation that they consumed the meat of other animals–carnem vorāre is Latin for “to eat flesh”.Let us go back to the Yellowclothea. You can already intuit that these natives do not have to wear the yellow loincloths–it is simply what you initially observed. If the natives swapped the dye with purple or green, that would work out fine. However, the rainforest lacks those colors, so Yellowclothea is resigned to their monotone. In other words, wearing yellow cloth is not a requirement for them to live, just what works for them and is available.1821, the year of Carnivora’s naming, is ages ago in the scientific world. It was before the Theory of Evolution, first described in The Origin of Species in 1859. It was before the molecular biology revolution. It was before we understood the basis of metabolism and nutrition. So it is easy to confuse classification/observation with the requirement. It is the same fallacy as assuming that the Yellowclothea people can only wear yellow clothing.Since 1821, we learned more about nutrition, molecular biology, and metabolism to demystify meat. Meat is mostly muscle fibers with some marbled fat and critical nutrients. Carnivora animals generally have more acidic stomachs and shorter gastrointestinal (GI) tracts than nominal herbivores. The extra acid helps chop proteins into the alphabet amino acid molecules, which are readily taken up, so a long GI tract is unnecessary.So Carnivora animals cannot have salads or raw vegetables, which are rich in fiber and would not break down in their GI tracts in time. Nevertheless, we can make protein-rich and highly digestible foods for Carnivora starting from plant and microbial ingredients. Just as a cow will chemically process the plants into their muscle–flesh, we can similarly turn the plants into food that a carnivore would thrive off without an animal intermediary. In other words, we can source all the required nutrients from elsewhere, without meat.There is—at least in theory—no reason why diets comprised entirely of plants, minerals, and synthetically-based ingredients (i.e., vegan diets) cannot meet the necessary palatability, bioavailability, and nutritional requirements of catsAndrew Knight, Director, Centre for Animal Welfare, University of WinchesterI have written about how succeeding meat, dairy, and eggs with plant and microbial-based alternatives will be one of the best things we ever do–I argue that it is better than curing cancer or transitioning ful...]]>
Karthik Sekar https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:24 None full 5840
Cre2YC3hd5DeYLqDH_NL_EA_EA EA - [Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I. by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I., published by Rockwell on May 4, 2023 on The Effective Altruism Forum.This is a linkpost forThe White House on Thursday announced its first new initiatives aimed at taming the risks of artificial intelligence since a boom in A.I.-powered chatbots has prompted growing calls to regulate the technology.The National Science Foundation plans to spend $140 million on new research centers devoted to A.I., White House officials said. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards “the American people’s rights and safety,” adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference.The announcements came hours before Vice President Kamala Harris and other administration officials were scheduled to meet with the chief executives of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to discuss the technology. A senior administration official said on Wednesday that the White House planned to impress upon the companies that they had a responsibility to address the risks of new A.I. developments. The White House has been under growing pressure to police A.I. that is capable of crafting sophisticated prose and lifelike images. The explosion of interest in the technology began last year when OpenAI released ChatGPT to the public and people immediately began using it to search for information, do schoolwork and assist them with their job.Since then, some of the biggest tech companies have rushed to incorporate chatbots into their products and accelerated A.I. research, while venture capitalists have poured money into A.I. start-ups.But the A.I. boom has also raised questions about how the technology will transform economies, shake up geopolitics and bolster criminal activity. Critics have worried that many A.I. systems are opaque but extremely powerful, with the potential to make discriminatory decisions, replace people in their jobs, spread disinformation and perhaps even break the law on their own.President Biden recently said that it “remains to be seen” whether A.I. is dangerous, and some of his top appointees have pledged to intervene if the technology is used in a harmful way.Spokeswomen for Google and Microsoft declined to comment ahead of the White House meeting. A spokesman for Anthropic confirmed the company would be attending. A spokeswoman for OpenAI did not respond to a request for comment.The announcements build on earlier efforts by the administration to place guardrails on A.I. Last year, the White House released what it called a “Blueprint for an A.I. Bill of Rights,” which said that automated systems should protect users’ data privacy, shield them from discriminatory outcomes and make clear why certain actions were taken. In January, the Commerce Department also released a framework for reducing risk in A.I. development, which had been in the works for years.The introduction of chatbots like ChatGPT and Google’s Bard has put huge pressure on governments to act. The European Union, which had already been negotiating regulations to A.I., has faced new demands to regulate a broader swath of A.I., instead of just systems seen as inherently high risk.In the United States, members of Congress, including Senator Chuck Schumer of New York, the majority leader, have moved to draft or propose legislation to regulate A.I. But concrete steps to rein in the technology in the country may be more likely to come first from law enforcement agencies in Washington.A group of government agencies pledged in April to “monitor the development and use of automated systems and promote responsible...]]>
Rockwell https://forum.effectivealtruism.org/posts/Cre2YC3hd5DeYLqDH/link-post-new-york-times-white-house-unveils-initiatives-to Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I., published by Rockwell on May 4, 2023 on The Effective Altruism Forum.This is a linkpost forThe White House on Thursday announced its first new initiatives aimed at taming the risks of artificial intelligence since a boom in A.I.-powered chatbots has prompted growing calls to regulate the technology.The National Science Foundation plans to spend $140 million on new research centers devoted to A.I., White House officials said. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards “the American people’s rights and safety,” adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference.The announcements came hours before Vice President Kamala Harris and other administration officials were scheduled to meet with the chief executives of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to discuss the technology. A senior administration official said on Wednesday that the White House planned to impress upon the companies that they had a responsibility to address the risks of new A.I. developments. The White House has been under growing pressure to police A.I. that is capable of crafting sophisticated prose and lifelike images. The explosion of interest in the technology began last year when OpenAI released ChatGPT to the public and people immediately began using it to search for information, do schoolwork and assist them with their job.Since then, some of the biggest tech companies have rushed to incorporate chatbots into their products and accelerated A.I. research, while venture capitalists have poured money into A.I. start-ups.But the A.I. boom has also raised questions about how the technology will transform economies, shake up geopolitics and bolster criminal activity. Critics have worried that many A.I. systems are opaque but extremely powerful, with the potential to make discriminatory decisions, replace people in their jobs, spread disinformation and perhaps even break the law on their own.President Biden recently said that it “remains to be seen” whether A.I. is dangerous, and some of his top appointees have pledged to intervene if the technology is used in a harmful way.Spokeswomen for Google and Microsoft declined to comment ahead of the White House meeting. A spokesman for Anthropic confirmed the company would be attending. A spokeswoman for OpenAI did not respond to a request for comment.The announcements build on earlier efforts by the administration to place guardrails on A.I. Last year, the White House released what it called a “Blueprint for an A.I. Bill of Rights,” which said that automated systems should protect users’ data privacy, shield them from discriminatory outcomes and make clear why certain actions were taken. In January, the Commerce Department also released a framework for reducing risk in A.I. development, which had been in the works for years.The introduction of chatbots like ChatGPT and Google’s Bard has put huge pressure on governments to act. The European Union, which had already been negotiating regulations to A.I., has faced new demands to regulate a broader swath of A.I., instead of just systems seen as inherently high risk.In the United States, members of Congress, including Senator Chuck Schumer of New York, the majority leader, have moved to draft or propose legislation to regulate A.I. But concrete steps to rein in the technology in the country may be more likely to come first from law enforcement agencies in Washington.A group of government agencies pledged in April to “monitor the development and use of automated systems and promote responsible...]]>
Thu, 04 May 2023 17:20:25 +0000 EA - [Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I. by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I., published by Rockwell on May 4, 2023 on The Effective Altruism Forum.This is a linkpost forThe White House on Thursday announced its first new initiatives aimed at taming the risks of artificial intelligence since a boom in A.I.-powered chatbots has prompted growing calls to regulate the technology.The National Science Foundation plans to spend $140 million on new research centers devoted to A.I., White House officials said. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards “the American people’s rights and safety,” adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference.The announcements came hours before Vice President Kamala Harris and other administration officials were scheduled to meet with the chief executives of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to discuss the technology. A senior administration official said on Wednesday that the White House planned to impress upon the companies that they had a responsibility to address the risks of new A.I. developments. The White House has been under growing pressure to police A.I. that is capable of crafting sophisticated prose and lifelike images. The explosion of interest in the technology began last year when OpenAI released ChatGPT to the public and people immediately began using it to search for information, do schoolwork and assist them with their job.Since then, some of the biggest tech companies have rushed to incorporate chatbots into their products and accelerated A.I. research, while venture capitalists have poured money into A.I. start-ups.But the A.I. boom has also raised questions about how the technology will transform economies, shake up geopolitics and bolster criminal activity. Critics have worried that many A.I. systems are opaque but extremely powerful, with the potential to make discriminatory decisions, replace people in their jobs, spread disinformation and perhaps even break the law on their own.President Biden recently said that it “remains to be seen” whether A.I. is dangerous, and some of his top appointees have pledged to intervene if the technology is used in a harmful way.Spokeswomen for Google and Microsoft declined to comment ahead of the White House meeting. A spokesman for Anthropic confirmed the company would be attending. A spokeswoman for OpenAI did not respond to a request for comment.The announcements build on earlier efforts by the administration to place guardrails on A.I. Last year, the White House released what it called a “Blueprint for an A.I. Bill of Rights,” which said that automated systems should protect users’ data privacy, shield them from discriminatory outcomes and make clear why certain actions were taken. In January, the Commerce Department also released a framework for reducing risk in A.I. development, which had been in the works for years.The introduction of chatbots like ChatGPT and Google’s Bard has put huge pressure on governments to act. The European Union, which had already been negotiating regulations to A.I., has faced new demands to regulate a broader swath of A.I., instead of just systems seen as inherently high risk.In the United States, members of Congress, including Senator Chuck Schumer of New York, the majority leader, have moved to draft or propose legislation to regulate A.I. But concrete steps to rein in the technology in the country may be more likely to come first from law enforcement agencies in Washington.A group of government agencies pledged in April to “monitor the development and use of automated systems and promote responsible...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I., published by Rockwell on May 4, 2023 on The Effective Altruism Forum.This is a linkpost forThe White House on Thursday announced its first new initiatives aimed at taming the risks of artificial intelligence since a boom in A.I.-powered chatbots has prompted growing calls to regulate the technology.The National Science Foundation plans to spend $140 million on new research centers devoted to A.I., White House officials said. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards “the American people’s rights and safety,” adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference.The announcements came hours before Vice President Kamala Harris and other administration officials were scheduled to meet with the chief executives of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to discuss the technology. A senior administration official said on Wednesday that the White House planned to impress upon the companies that they had a responsibility to address the risks of new A.I. developments. The White House has been under growing pressure to police A.I. that is capable of crafting sophisticated prose and lifelike images. The explosion of interest in the technology began last year when OpenAI released ChatGPT to the public and people immediately began using it to search for information, do schoolwork and assist them with their job.Since then, some of the biggest tech companies have rushed to incorporate chatbots into their products and accelerated A.I. research, while venture capitalists have poured money into A.I. start-ups.But the A.I. boom has also raised questions about how the technology will transform economies, shake up geopolitics and bolster criminal activity. Critics have worried that many A.I. systems are opaque but extremely powerful, with the potential to make discriminatory decisions, replace people in their jobs, spread disinformation and perhaps even break the law on their own.President Biden recently said that it “remains to be seen” whether A.I. is dangerous, and some of his top appointees have pledged to intervene if the technology is used in a harmful way.Spokeswomen for Google and Microsoft declined to comment ahead of the White House meeting. A spokesman for Anthropic confirmed the company would be attending. A spokeswoman for OpenAI did not respond to a request for comment.The announcements build on earlier efforts by the administration to place guardrails on A.I. Last year, the White House released what it called a “Blueprint for an A.I. Bill of Rights,” which said that automated systems should protect users’ data privacy, shield them from discriminatory outcomes and make clear why certain actions were taken. In January, the Commerce Department also released a framework for reducing risk in A.I. development, which had been in the works for years.The introduction of chatbots like ChatGPT and Google’s Bard has put huge pressure on governments to act. The European Union, which had already been negotiating regulations to A.I., has faced new demands to regulate a broader swath of A.I., instead of just systems seen as inherently high risk.In the United States, members of Congress, including Senator Chuck Schumer of New York, the majority leader, have moved to draft or propose legislation to regulate A.I. But concrete steps to rein in the technology in the country may be more likely to come first from law enforcement agencies in Washington.A group of government agencies pledged in April to “monitor the development and use of automated systems and promote responsible...]]>
Rockwell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:01 None full 5841
G2vPqkCZkJusKGLtK_NL_EA_EA EA - Introducing Animal Policy International by Rainer Kravets Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Animal Policy International, published by Rainer Kravets on May 4, 2023 on The Effective Altruism Forum.Animal Policy International is a new organisation launched through the Charity Entrepreneurship Incubation Program focused on ensuring that animal welfare standards are upheld in international trade policy.ProblemThere are significant differences between farmed animal welfare standards across the globe, with billions of animals still confined in factory farms. Even those regions with higher standards like the EU, the UK, Switzerland and New Zealand tend to import a significant portion of their animal products from countries where animals experience significant suffering due to lack of protective measures.SolutionThe higher welfare countries can apply their standards to imported animal products by restricting the access of low-welfare animal products that would have been illegal to produce domestically. This can incentivise farmers elsewhere to increase their standards to keep existing supply chains.A law restricting the importation of low-welfare products provides a unique win-win opportunity for both animal advocates and farmers in higher welfare countries, especially in our likely first country of operation: New Zealand. Some farmers are facing tough competition from low-priced low-welfare imports and demand more equal standards between imports and local produce after New Zealand’s decision to phase out farrowing crates on local pig farms by December 2025.Potential ImpactA law passed in New Zealand restricting the importation of animal products that do not adhere to local standards could save approximately 8 million fish per year from suffering poor living conditions, transportation, and slaughter practices; spare 330,000 pigs from cruel farrowing crates and 380,000 chickens from inhumane living conditions.Differences in animal welfare standards: New ZealandBelow is an outline of differences between animal welfare standards in New Zealand and its main importers of particular animals.Fish: In China, Vietnam and Thailand (total 79% of imports in 2020) there is no legislation for fish meaning they may endure slow, painful deaths by asphyxiation, crushing, or even being gutted alive. New Zealand outlines some protections for fish at the time of killing and during transport.Hens: 80% of eggs imported into New Zealand come from China where hens are allowed to be kept in battery cages. Battery cages are illegal in New Zealand from 2023. (Colony (enriched) cages are still used).Pigs: The US, an importer of pork to New Zealand, has no federal ban on the use of sow stalls or farrowing crates, leading to sows being cruelly confined to narrow cages where they cannot perform basic behaviours, turn around, or properly mother their piglets. New Zealand has banned sow stalls, and farrowing crates are being phased out by 2025.Sheep: Australia, which imports wool products to New Zealand, allows several practices that are prohibited in New Zealand, including the extremely cruel practice of mulesing, which involves removing parts of the skin from live sheep without anaesthetic.Next stepsEstablishing connections with potential partner NGOs and industryProducing a policy briefConducting public pollingAddressing the question of legality of import restrictionsMeeting policymakersOpen questionsWill farmers in low-welfare countries be motivated and capable of increasing their animal welfare standards?What enforcement mechanisms should be used?How would a restriction on importation affect the country’s relationships with its trade partners?What externalities (e.g. changes in animal product prices) would such a trade law have?How you can helpExpertise: if you have experience/knowledge in international trade, policy work, WTO laws and can help answer th...]]>
Rainer Kravets https://forum.effectivealtruism.org/posts/G2vPqkCZkJusKGLtK/introducing-animal-policy-international Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Animal Policy International, published by Rainer Kravets on May 4, 2023 on The Effective Altruism Forum.Animal Policy International is a new organisation launched through the Charity Entrepreneurship Incubation Program focused on ensuring that animal welfare standards are upheld in international trade policy.ProblemThere are significant differences between farmed animal welfare standards across the globe, with billions of animals still confined in factory farms. Even those regions with higher standards like the EU, the UK, Switzerland and New Zealand tend to import a significant portion of their animal products from countries where animals experience significant suffering due to lack of protective measures.SolutionThe higher welfare countries can apply their standards to imported animal products by restricting the access of low-welfare animal products that would have been illegal to produce domestically. This can incentivise farmers elsewhere to increase their standards to keep existing supply chains.A law restricting the importation of low-welfare products provides a unique win-win opportunity for both animal advocates and farmers in higher welfare countries, especially in our likely first country of operation: New Zealand. Some farmers are facing tough competition from low-priced low-welfare imports and demand more equal standards between imports and local produce after New Zealand’s decision to phase out farrowing crates on local pig farms by December 2025.Potential ImpactA law passed in New Zealand restricting the importation of animal products that do not adhere to local standards could save approximately 8 million fish per year from suffering poor living conditions, transportation, and slaughter practices; spare 330,000 pigs from cruel farrowing crates and 380,000 chickens from inhumane living conditions.Differences in animal welfare standards: New ZealandBelow is an outline of differences between animal welfare standards in New Zealand and its main importers of particular animals.Fish: In China, Vietnam and Thailand (total 79% of imports in 2020) there is no legislation for fish meaning they may endure slow, painful deaths by asphyxiation, crushing, or even being gutted alive. New Zealand outlines some protections for fish at the time of killing and during transport.Hens: 80% of eggs imported into New Zealand come from China where hens are allowed to be kept in battery cages. Battery cages are illegal in New Zealand from 2023. (Colony (enriched) cages are still used).Pigs: The US, an importer of pork to New Zealand, has no federal ban on the use of sow stalls or farrowing crates, leading to sows being cruelly confined to narrow cages where they cannot perform basic behaviours, turn around, or properly mother their piglets. New Zealand has banned sow stalls, and farrowing crates are being phased out by 2025.Sheep: Australia, which imports wool products to New Zealand, allows several practices that are prohibited in New Zealand, including the extremely cruel practice of mulesing, which involves removing parts of the skin from live sheep without anaesthetic.Next stepsEstablishing connections with potential partner NGOs and industryProducing a policy briefConducting public pollingAddressing the question of legality of import restrictionsMeeting policymakersOpen questionsWill farmers in low-welfare countries be motivated and capable of increasing their animal welfare standards?What enforcement mechanisms should be used?How would a restriction on importation affect the country’s relationships with its trade partners?What externalities (e.g. changes in animal product prices) would such a trade law have?How you can helpExpertise: if you have experience/knowledge in international trade, policy work, WTO laws and can help answer th...]]>
Thu, 04 May 2023 16:38:17 +0000 EA - Introducing Animal Policy International by Rainer Kravets Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Animal Policy International, published by Rainer Kravets on May 4, 2023 on The Effective Altruism Forum.Animal Policy International is a new organisation launched through the Charity Entrepreneurship Incubation Program focused on ensuring that animal welfare standards are upheld in international trade policy.ProblemThere are significant differences between farmed animal welfare standards across the globe, with billions of animals still confined in factory farms. Even those regions with higher standards like the EU, the UK, Switzerland and New Zealand tend to import a significant portion of their animal products from countries where animals experience significant suffering due to lack of protective measures.SolutionThe higher welfare countries can apply their standards to imported animal products by restricting the access of low-welfare animal products that would have been illegal to produce domestically. This can incentivise farmers elsewhere to increase their standards to keep existing supply chains.A law restricting the importation of low-welfare products provides a unique win-win opportunity for both animal advocates and farmers in higher welfare countries, especially in our likely first country of operation: New Zealand. Some farmers are facing tough competition from low-priced low-welfare imports and demand more equal standards between imports and local produce after New Zealand’s decision to phase out farrowing crates on local pig farms by December 2025.Potential ImpactA law passed in New Zealand restricting the importation of animal products that do not adhere to local standards could save approximately 8 million fish per year from suffering poor living conditions, transportation, and slaughter practices; spare 330,000 pigs from cruel farrowing crates and 380,000 chickens from inhumane living conditions.Differences in animal welfare standards: New ZealandBelow is an outline of differences between animal welfare standards in New Zealand and its main importers of particular animals.Fish: In China, Vietnam and Thailand (total 79% of imports in 2020) there is no legislation for fish meaning they may endure slow, painful deaths by asphyxiation, crushing, or even being gutted alive. New Zealand outlines some protections for fish at the time of killing and during transport.Hens: 80% of eggs imported into New Zealand come from China where hens are allowed to be kept in battery cages. Battery cages are illegal in New Zealand from 2023. (Colony (enriched) cages are still used).Pigs: The US, an importer of pork to New Zealand, has no federal ban on the use of sow stalls or farrowing crates, leading to sows being cruelly confined to narrow cages where they cannot perform basic behaviours, turn around, or properly mother their piglets. New Zealand has banned sow stalls, and farrowing crates are being phased out by 2025.Sheep: Australia, which imports wool products to New Zealand, allows several practices that are prohibited in New Zealand, including the extremely cruel practice of mulesing, which involves removing parts of the skin from live sheep without anaesthetic.Next stepsEstablishing connections with potential partner NGOs and industryProducing a policy briefConducting public pollingAddressing the question of legality of import restrictionsMeeting policymakersOpen questionsWill farmers in low-welfare countries be motivated and capable of increasing their animal welfare standards?What enforcement mechanisms should be used?How would a restriction on importation affect the country’s relationships with its trade partners?What externalities (e.g. changes in animal product prices) would such a trade law have?How you can helpExpertise: if you have experience/knowledge in international trade, policy work, WTO laws and can help answer th...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Animal Policy International, published by Rainer Kravets on May 4, 2023 on The Effective Altruism Forum.Animal Policy International is a new organisation launched through the Charity Entrepreneurship Incubation Program focused on ensuring that animal welfare standards are upheld in international trade policy.ProblemThere are significant differences between farmed animal welfare standards across the globe, with billions of animals still confined in factory farms. Even those regions with higher standards like the EU, the UK, Switzerland and New Zealand tend to import a significant portion of their animal products from countries where animals experience significant suffering due to lack of protective measures.SolutionThe higher welfare countries can apply their standards to imported animal products by restricting the access of low-welfare animal products that would have been illegal to produce domestically. This can incentivise farmers elsewhere to increase their standards to keep existing supply chains.A law restricting the importation of low-welfare products provides a unique win-win opportunity for both animal advocates and farmers in higher welfare countries, especially in our likely first country of operation: New Zealand. Some farmers are facing tough competition from low-priced low-welfare imports and demand more equal standards between imports and local produce after New Zealand’s decision to phase out farrowing crates on local pig farms by December 2025.Potential ImpactA law passed in New Zealand restricting the importation of animal products that do not adhere to local standards could save approximately 8 million fish per year from suffering poor living conditions, transportation, and slaughter practices; spare 330,000 pigs from cruel farrowing crates and 380,000 chickens from inhumane living conditions.Differences in animal welfare standards: New ZealandBelow is an outline of differences between animal welfare standards in New Zealand and its main importers of particular animals.Fish: In China, Vietnam and Thailand (total 79% of imports in 2020) there is no legislation for fish meaning they may endure slow, painful deaths by asphyxiation, crushing, or even being gutted alive. New Zealand outlines some protections for fish at the time of killing and during transport.Hens: 80% of eggs imported into New Zealand come from China where hens are allowed to be kept in battery cages. Battery cages are illegal in New Zealand from 2023. (Colony (enriched) cages are still used).Pigs: The US, an importer of pork to New Zealand, has no federal ban on the use of sow stalls or farrowing crates, leading to sows being cruelly confined to narrow cages where they cannot perform basic behaviours, turn around, or properly mother their piglets. New Zealand has banned sow stalls, and farrowing crates are being phased out by 2025.Sheep: Australia, which imports wool products to New Zealand, allows several practices that are prohibited in New Zealand, including the extremely cruel practice of mulesing, which involves removing parts of the skin from live sheep without anaesthetic.Next stepsEstablishing connections with potential partner NGOs and industryProducing a policy briefConducting public pollingAddressing the question of legality of import restrictionsMeeting policymakersOpen questionsWill farmers in low-welfare countries be motivated and capable of increasing their animal welfare standards?What enforcement mechanisms should be used?How would a restriction on importation affect the country’s relationships with its trade partners?What externalities (e.g. changes in animal product prices) would such a trade law have?How you can helpExpertise: if you have experience/knowledge in international trade, policy work, WTO laws and can help answer th...]]>
Rainer Kravets https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:15 None full 5838
jk7A3NMdbxp65kcJJ_NL_EA_EA EA - 500 Million, But Not A Single One More by jai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 500 Million, But Not A Single One More, published by jai on May 4, 2023 on The Effective Altruism Forum.We will never know their names.The first victim could not have been recorded, for there was no written language to record it. They were someone’s daughter, or son, and someone’s friend, and they were loved by those around them. And they were in pain, covered in rashes, confused, scared, not knowing why this was happening to them or what they could do about it — victims of a mad, inhuman god. There was nothing to be done — humanity was not strong enough, not aware enough, not knowledgeable enough, to fight back against a monster that could not be seen.It was in Ancient Egypt, where it attacked slave and pharaoh alike. In Rome, it effortlessly decimated armies. It killed in Syria. It killed in Moscow. In India, five million dead. It killed a thousand Europeans every day in the 18th century. It killed more than fifty million Native Americans. From the Peloponnesian War to the Civil War, it slew more soldiers and civilians than any weapon, any soldier, any army. (Not that this stopped the most foolish and empty souls from attempting to harness the demon as a weapon against their enemies.)Cultures grew and faltered, and it remained. Empires rose and fell, and it thrived. Ideologies waxed and waned, but it did not care. Kill. Maim. Spread. An ancient, mad god, hidden from view, that could not be fought, could not be confronted, could not even be comprehended. Not the only one of its kind, but the most devastating.For a long time, there was no hope — only the bitter, hollow endurance of survivors.In China, in the 10th century, humanity began to fight back.It was observed that survivors of the mad god’s curse would never be touched again: They had taken a portion of that power into themselves, and were so protected from it. Not only that, but this power could be shared by consuming a remnant of the wounds. There was a price, for you could not take the god’s power without first defeating it — but a smaller battle, on humanity’s terms.By the 16th century, the technique spread to India, then across Asia, the Ottoman Empire and, in the 18th century, Europe. In 1796, a more powerful technique was discovered by Edward Jenner.An idea began to take hold: Perhaps the ancient god could be killed.A whisper became a voice; a voice became a call; a call became a battle cry, sweeping across villages, cities, nations. Humanity began to cooperate, spreading the protective power across the globe, dispatching masters of the craft to protect whole populations. People who had once been sworn enemies joined in a common cause for this one battle. Governments mandated that all citizens protect themselves, for giving the ancient enemy a single life would put millions in danger.And, inch by inch, humanity drove its enemy back. Fewer friends wept; fewer neighbors were crippled; fewer parents had to bury their children.At the dawn of the 20th century, for the first time, humanity banished the enemy from entire regions of the world. Humanity faltered many times in its efforts, but there were individuals who never gave up, who fought for the dream of a world where no child or loved one would ever fear the demon ever again. Viktor Zhdanov, who called for humanity to unite in a final push against the demon; the great tactician Karel Raška, who conceived of a strategy to annihilate the enemy; Donald Henderson, who led the efforts in those final days.The enemy grew weaker. Millions became thousands, thousands became dozens. And then, when the enemy did strike, scores of humans came forth to defy it, protecting all those whom it might endanger.The enemy’s last attack in the wild was on Ali Maow Maalin, in 1977. For months afterwards, dedicated humans swept the surrounding area, seeking out an...]]>
jai https://forum.effectivealtruism.org/posts/jk7A3NMdbxp65kcJJ/500-million-but-not-a-single-one-more Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 500 Million, But Not A Single One More, published by jai on May 4, 2023 on The Effective Altruism Forum.We will never know their names.The first victim could not have been recorded, for there was no written language to record it. They were someone’s daughter, or son, and someone’s friend, and they were loved by those around them. And they were in pain, covered in rashes, confused, scared, not knowing why this was happening to them or what they could do about it — victims of a mad, inhuman god. There was nothing to be done — humanity was not strong enough, not aware enough, not knowledgeable enough, to fight back against a monster that could not be seen.It was in Ancient Egypt, where it attacked slave and pharaoh alike. In Rome, it effortlessly decimated armies. It killed in Syria. It killed in Moscow. In India, five million dead. It killed a thousand Europeans every day in the 18th century. It killed more than fifty million Native Americans. From the Peloponnesian War to the Civil War, it slew more soldiers and civilians than any weapon, any soldier, any army. (Not that this stopped the most foolish and empty souls from attempting to harness the demon as a weapon against their enemies.)Cultures grew and faltered, and it remained. Empires rose and fell, and it thrived. Ideologies waxed and waned, but it did not care. Kill. Maim. Spread. An ancient, mad god, hidden from view, that could not be fought, could not be confronted, could not even be comprehended. Not the only one of its kind, but the most devastating.For a long time, there was no hope — only the bitter, hollow endurance of survivors.In China, in the 10th century, humanity began to fight back.It was observed that survivors of the mad god’s curse would never be touched again: They had taken a portion of that power into themselves, and were so protected from it. Not only that, but this power could be shared by consuming a remnant of the wounds. There was a price, for you could not take the god’s power without first defeating it — but a smaller battle, on humanity’s terms.By the 16th century, the technique spread to India, then across Asia, the Ottoman Empire and, in the 18th century, Europe. In 1796, a more powerful technique was discovered by Edward Jenner.An idea began to take hold: Perhaps the ancient god could be killed.A whisper became a voice; a voice became a call; a call became a battle cry, sweeping across villages, cities, nations. Humanity began to cooperate, spreading the protective power across the globe, dispatching masters of the craft to protect whole populations. People who had once been sworn enemies joined in a common cause for this one battle. Governments mandated that all citizens protect themselves, for giving the ancient enemy a single life would put millions in danger.And, inch by inch, humanity drove its enemy back. Fewer friends wept; fewer neighbors were crippled; fewer parents had to bury their children.At the dawn of the 20th century, for the first time, humanity banished the enemy from entire regions of the world. Humanity faltered many times in its efforts, but there were individuals who never gave up, who fought for the dream of a world where no child or loved one would ever fear the demon ever again. Viktor Zhdanov, who called for humanity to unite in a final push against the demon; the great tactician Karel Raška, who conceived of a strategy to annihilate the enemy; Donald Henderson, who led the efforts in those final days.The enemy grew weaker. Millions became thousands, thousands became dozens. And then, when the enemy did strike, scores of humans came forth to defy it, protecting all those whom it might endanger.The enemy’s last attack in the wild was on Ali Maow Maalin, in 1977. For months afterwards, dedicated humans swept the surrounding area, seeking out an...]]>
Thu, 04 May 2023 16:31:07 +0000 EA - 500 Million, But Not A Single One More by jai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 500 Million, But Not A Single One More, published by jai on May 4, 2023 on The Effective Altruism Forum.We will never know their names.The first victim could not have been recorded, for there was no written language to record it. They were someone’s daughter, or son, and someone’s friend, and they were loved by those around them. And they were in pain, covered in rashes, confused, scared, not knowing why this was happening to them or what they could do about it — victims of a mad, inhuman god. There was nothing to be done — humanity was not strong enough, not aware enough, not knowledgeable enough, to fight back against a monster that could not be seen.It was in Ancient Egypt, where it attacked slave and pharaoh alike. In Rome, it effortlessly decimated armies. It killed in Syria. It killed in Moscow. In India, five million dead. It killed a thousand Europeans every day in the 18th century. It killed more than fifty million Native Americans. From the Peloponnesian War to the Civil War, it slew more soldiers and civilians than any weapon, any soldier, any army. (Not that this stopped the most foolish and empty souls from attempting to harness the demon as a weapon against their enemies.)Cultures grew and faltered, and it remained. Empires rose and fell, and it thrived. Ideologies waxed and waned, but it did not care. Kill. Maim. Spread. An ancient, mad god, hidden from view, that could not be fought, could not be confronted, could not even be comprehended. Not the only one of its kind, but the most devastating.For a long time, there was no hope — only the bitter, hollow endurance of survivors.In China, in the 10th century, humanity began to fight back.It was observed that survivors of the mad god’s curse would never be touched again: They had taken a portion of that power into themselves, and were so protected from it. Not only that, but this power could be shared by consuming a remnant of the wounds. There was a price, for you could not take the god’s power without first defeating it — but a smaller battle, on humanity’s terms.By the 16th century, the technique spread to India, then across Asia, the Ottoman Empire and, in the 18th century, Europe. In 1796, a more powerful technique was discovered by Edward Jenner.An idea began to take hold: Perhaps the ancient god could be killed.A whisper became a voice; a voice became a call; a call became a battle cry, sweeping across villages, cities, nations. Humanity began to cooperate, spreading the protective power across the globe, dispatching masters of the craft to protect whole populations. People who had once been sworn enemies joined in a common cause for this one battle. Governments mandated that all citizens protect themselves, for giving the ancient enemy a single life would put millions in danger.And, inch by inch, humanity drove its enemy back. Fewer friends wept; fewer neighbors were crippled; fewer parents had to bury their children.At the dawn of the 20th century, for the first time, humanity banished the enemy from entire regions of the world. Humanity faltered many times in its efforts, but there were individuals who never gave up, who fought for the dream of a world where no child or loved one would ever fear the demon ever again. Viktor Zhdanov, who called for humanity to unite in a final push against the demon; the great tactician Karel Raška, who conceived of a strategy to annihilate the enemy; Donald Henderson, who led the efforts in those final days.The enemy grew weaker. Millions became thousands, thousands became dozens. And then, when the enemy did strike, scores of humans came forth to defy it, protecting all those whom it might endanger.The enemy’s last attack in the wild was on Ali Maow Maalin, in 1977. For months afterwards, dedicated humans swept the surrounding area, seeking out an...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 500 Million, But Not A Single One More, published by jai on May 4, 2023 on The Effective Altruism Forum.We will never know their names.The first victim could not have been recorded, for there was no written language to record it. They were someone’s daughter, or son, and someone’s friend, and they were loved by those around them. And they were in pain, covered in rashes, confused, scared, not knowing why this was happening to them or what they could do about it — victims of a mad, inhuman god. There was nothing to be done — humanity was not strong enough, not aware enough, not knowledgeable enough, to fight back against a monster that could not be seen.It was in Ancient Egypt, where it attacked slave and pharaoh alike. In Rome, it effortlessly decimated armies. It killed in Syria. It killed in Moscow. In India, five million dead. It killed a thousand Europeans every day in the 18th century. It killed more than fifty million Native Americans. From the Peloponnesian War to the Civil War, it slew more soldiers and civilians than any weapon, any soldier, any army. (Not that this stopped the most foolish and empty souls from attempting to harness the demon as a weapon against their enemies.)Cultures grew and faltered, and it remained. Empires rose and fell, and it thrived. Ideologies waxed and waned, but it did not care. Kill. Maim. Spread. An ancient, mad god, hidden from view, that could not be fought, could not be confronted, could not even be comprehended. Not the only one of its kind, but the most devastating.For a long time, there was no hope — only the bitter, hollow endurance of survivors.In China, in the 10th century, humanity began to fight back.It was observed that survivors of the mad god’s curse would never be touched again: They had taken a portion of that power into themselves, and were so protected from it. Not only that, but this power could be shared by consuming a remnant of the wounds. There was a price, for you could not take the god’s power without first defeating it — but a smaller battle, on humanity’s terms.By the 16th century, the technique spread to India, then across Asia, the Ottoman Empire and, in the 18th century, Europe. In 1796, a more powerful technique was discovered by Edward Jenner.An idea began to take hold: Perhaps the ancient god could be killed.A whisper became a voice; a voice became a call; a call became a battle cry, sweeping across villages, cities, nations. Humanity began to cooperate, spreading the protective power across the globe, dispatching masters of the craft to protect whole populations. People who had once been sworn enemies joined in a common cause for this one battle. Governments mandated that all citizens protect themselves, for giving the ancient enemy a single life would put millions in danger.And, inch by inch, humanity drove its enemy back. Fewer friends wept; fewer neighbors were crippled; fewer parents had to bury their children.At the dawn of the 20th century, for the first time, humanity banished the enemy from entire regions of the world. Humanity faltered many times in its efforts, but there were individuals who never gave up, who fought for the dream of a world where no child or loved one would ever fear the demon ever again. Viktor Zhdanov, who called for humanity to unite in a final push against the demon; the great tactician Karel Raška, who conceived of a strategy to annihilate the enemy; Donald Henderson, who led the efforts in those final days.The enemy grew weaker. Millions became thousands, thousands became dozens. And then, when the enemy did strike, scores of humans came forth to defy it, protecting all those whom it might endanger.The enemy’s last attack in the wild was on Ali Maow Maalin, in 1977. For months afterwards, dedicated humans swept the surrounding area, seeking out an...]]>
jai https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:27 None full 5823
obs9vnjM3xBZcrEf2_NL_EA_EA EA - Upcoming EA conferences in 2023 by OllieBase Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Upcoming EA conferences in 2023, published by OllieBase on May 4, 2023 on The Effective Altruism Forum.The Centre for Effective Altruism will be organizing and supporting conferences for the EA community all over the world for the remainder of 2023, including the first-ever EA conferences in Poland, NYC and the Philippines.We currently have the following events scheduled:EA GlobalEA Global: London | (May 19–21) | Tobacco Dock - applications close 11:59 pm UTC Friday 5 MayEA Global: Boston | (October 27–29) | Hynes Convention CenterEAGxEAGxWarsaw | (June 9–11) | POLINEAGxNYC | (August 18–20) | Convene, 225 Liberty St.EAGxBerlin | (September 8–10) | UraniaEAGxAustralia | (September 22–24, provisional) | MelbourneEAGxPhilippines | (October 20–22, provisional)EAGxVirtual | (November 17–19, provisional)Applications for EAG London, EAG Boston, EAGxWarsaw and EAGxNYC are open, and we expect applications for the other conferences to open approximately 3 months before the event. Please go to the event page links above to apply. Please note again that applications to EAG London close 11:59 pm UTC Friday 5 May.If you'd like to add EA events like these directly to your Google Calendar, use this link.Some notes on these conferences:EA Globals are run in-house by the CEA events team, whereas EAGx conferences are organized independently by local community builders with financial support and mentoring from CEA.EA Global conferences have a high bar for admission and are for people who are very familiar with EA and are taking significant actions (e.g. full-time work or study) based on EA ideas.Admissions for EAGx conferences are processed independently by the EAGx conference organizers. These events are primarily for those who are newer to EA and interested in getting more involved and who are based in the region the conference is taking place in (e.g. EAGxWarsaw is primarily for people who are interested in EA and are based in Eastern Europe).Please apply to all conferences you wish to attend once applications open — we would rather get too many applications for some conferences and recommend that applicants attend a different one than miss out on potential applicants to a conference.Travel support funds for events this year are limited (though will vary by event), and we can only accommodate a small number of requests. If you do not end up receiving travel support, this is likely the result of limited funds, rather than an evaluation of your potential for impact. When planning around an event, we’d recommend you act under the assumption that we will not be able to grant your travel funding request (unless it has already been approved).Find more info on our website.Feel free to email hello@eaglobal.org with any questions, or comment below. You can also contact EAGx organisers using the format [location]@eaglobalx.org (e.g. warsaw@eaglobalx.org, nyc@eaglobalx.org).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
OllieBase https://forum.effectivealtruism.org/posts/obs9vnjM3xBZcrEf2/upcoming-ea-conferences-in-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Upcoming EA conferences in 2023, published by OllieBase on May 4, 2023 on The Effective Altruism Forum.The Centre for Effective Altruism will be organizing and supporting conferences for the EA community all over the world for the remainder of 2023, including the first-ever EA conferences in Poland, NYC and the Philippines.We currently have the following events scheduled:EA GlobalEA Global: London | (May 19–21) | Tobacco Dock - applications close 11:59 pm UTC Friday 5 MayEA Global: Boston | (October 27–29) | Hynes Convention CenterEAGxEAGxWarsaw | (June 9–11) | POLINEAGxNYC | (August 18–20) | Convene, 225 Liberty St.EAGxBerlin | (September 8–10) | UraniaEAGxAustralia | (September 22–24, provisional) | MelbourneEAGxPhilippines | (October 20–22, provisional)EAGxVirtual | (November 17–19, provisional)Applications for EAG London, EAG Boston, EAGxWarsaw and EAGxNYC are open, and we expect applications for the other conferences to open approximately 3 months before the event. Please go to the event page links above to apply. Please note again that applications to EAG London close 11:59 pm UTC Friday 5 May.If you'd like to add EA events like these directly to your Google Calendar, use this link.Some notes on these conferences:EA Globals are run in-house by the CEA events team, whereas EAGx conferences are organized independently by local community builders with financial support and mentoring from CEA.EA Global conferences have a high bar for admission and are for people who are very familiar with EA and are taking significant actions (e.g. full-time work or study) based on EA ideas.Admissions for EAGx conferences are processed independently by the EAGx conference organizers. These events are primarily for those who are newer to EA and interested in getting more involved and who are based in the region the conference is taking place in (e.g. EAGxWarsaw is primarily for people who are interested in EA and are based in Eastern Europe).Please apply to all conferences you wish to attend once applications open — we would rather get too many applications for some conferences and recommend that applicants attend a different one than miss out on potential applicants to a conference.Travel support funds for events this year are limited (though will vary by event), and we can only accommodate a small number of requests. If you do not end up receiving travel support, this is likely the result of limited funds, rather than an evaluation of your potential for impact. When planning around an event, we’d recommend you act under the assumption that we will not be able to grant your travel funding request (unless it has already been approved).Find more info on our website.Feel free to email hello@eaglobal.org with any questions, or comment below. You can also contact EAGx organisers using the format [location]@eaglobalx.org (e.g. warsaw@eaglobalx.org, nyc@eaglobalx.org).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 04 May 2023 16:00:40 +0000 EA - Upcoming EA conferences in 2023 by OllieBase Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Upcoming EA conferences in 2023, published by OllieBase on May 4, 2023 on The Effective Altruism Forum.The Centre for Effective Altruism will be organizing and supporting conferences for the EA community all over the world for the remainder of 2023, including the first-ever EA conferences in Poland, NYC and the Philippines.We currently have the following events scheduled:EA GlobalEA Global: London | (May 19–21) | Tobacco Dock - applications close 11:59 pm UTC Friday 5 MayEA Global: Boston | (October 27–29) | Hynes Convention CenterEAGxEAGxWarsaw | (June 9–11) | POLINEAGxNYC | (August 18–20) | Convene, 225 Liberty St.EAGxBerlin | (September 8–10) | UraniaEAGxAustralia | (September 22–24, provisional) | MelbourneEAGxPhilippines | (October 20–22, provisional)EAGxVirtual | (November 17–19, provisional)Applications for EAG London, EAG Boston, EAGxWarsaw and EAGxNYC are open, and we expect applications for the other conferences to open approximately 3 months before the event. Please go to the event page links above to apply. Please note again that applications to EAG London close 11:59 pm UTC Friday 5 May.If you'd like to add EA events like these directly to your Google Calendar, use this link.Some notes on these conferences:EA Globals are run in-house by the CEA events team, whereas EAGx conferences are organized independently by local community builders with financial support and mentoring from CEA.EA Global conferences have a high bar for admission and are for people who are very familiar with EA and are taking significant actions (e.g. full-time work or study) based on EA ideas.Admissions for EAGx conferences are processed independently by the EAGx conference organizers. These events are primarily for those who are newer to EA and interested in getting more involved and who are based in the region the conference is taking place in (e.g. EAGxWarsaw is primarily for people who are interested in EA and are based in Eastern Europe).Please apply to all conferences you wish to attend once applications open — we would rather get too many applications for some conferences and recommend that applicants attend a different one than miss out on potential applicants to a conference.Travel support funds for events this year are limited (though will vary by event), and we can only accommodate a small number of requests. If you do not end up receiving travel support, this is likely the result of limited funds, rather than an evaluation of your potential for impact. When planning around an event, we’d recommend you act under the assumption that we will not be able to grant your travel funding request (unless it has already been approved).Find more info on our website.Feel free to email hello@eaglobal.org with any questions, or comment below. You can also contact EAGx organisers using the format [location]@eaglobalx.org (e.g. warsaw@eaglobalx.org, nyc@eaglobalx.org).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Upcoming EA conferences in 2023, published by OllieBase on May 4, 2023 on The Effective Altruism Forum.The Centre for Effective Altruism will be organizing and supporting conferences for the EA community all over the world for the remainder of 2023, including the first-ever EA conferences in Poland, NYC and the Philippines.We currently have the following events scheduled:EA GlobalEA Global: London | (May 19–21) | Tobacco Dock - applications close 11:59 pm UTC Friday 5 MayEA Global: Boston | (October 27–29) | Hynes Convention CenterEAGxEAGxWarsaw | (June 9–11) | POLINEAGxNYC | (August 18–20) | Convene, 225 Liberty St.EAGxBerlin | (September 8–10) | UraniaEAGxAustralia | (September 22–24, provisional) | MelbourneEAGxPhilippines | (October 20–22, provisional)EAGxVirtual | (November 17–19, provisional)Applications for EAG London, EAG Boston, EAGxWarsaw and EAGxNYC are open, and we expect applications for the other conferences to open approximately 3 months before the event. Please go to the event page links above to apply. Please note again that applications to EAG London close 11:59 pm UTC Friday 5 May.If you'd like to add EA events like these directly to your Google Calendar, use this link.Some notes on these conferences:EA Globals are run in-house by the CEA events team, whereas EAGx conferences are organized independently by local community builders with financial support and mentoring from CEA.EA Global conferences have a high bar for admission and are for people who are very familiar with EA and are taking significant actions (e.g. full-time work or study) based on EA ideas.Admissions for EAGx conferences are processed independently by the EAGx conference organizers. These events are primarily for those who are newer to EA and interested in getting more involved and who are based in the region the conference is taking place in (e.g. EAGxWarsaw is primarily for people who are interested in EA and are based in Eastern Europe).Please apply to all conferences you wish to attend once applications open — we would rather get too many applications for some conferences and recommend that applicants attend a different one than miss out on potential applicants to a conference.Travel support funds for events this year are limited (though will vary by event), and we can only accommodate a small number of requests. If you do not end up receiving travel support, this is likely the result of limited funds, rather than an evaluation of your potential for impact. When planning around an event, we’d recommend you act under the assumption that we will not be able to grant your travel funding request (unless it has already been approved).Find more info on our website.Feel free to email hello@eaglobal.org with any questions, or comment below. You can also contact EAGx organisers using the format [location]@eaglobalx.org (e.g. warsaw@eaglobalx.org, nyc@eaglobalx.org).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
OllieBase https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:10 None full 5837
SsZ4AqmBdgrfN6hfz_NL_EA_EA EA - Air Safety to Combat Global Catastrophic Biorisks [REVISED] by Gavriel Kleinwaks Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Air Safety to Combat Global Catastrophic Biorisks [REVISED], published by Gavriel Kleinwaks on May 3, 2023 on The Effective Altruism Forum.This report is a collaboration between researchers from 1Day Sooner and Rethink Priorities.OverviewThis post is a revision of a report previously published on how improvements in indoor air quality can address global catastrophic risk from pandemics. After feedback from expert reviewers, we revised the report in accordance with comments. The comments greatly improved the report and we consider the earlier version to be misphrased, misleading, or mathematically underspecified in several places, but we are leaving the post available to illustrate the revision process.Unlike in the previous post, we are not including the full report, given its length. Instead, this post contains a summary of the reviews and of the report, with a link to the full report.Many thanks to the expert reviewers (listed below) for their detailed feedback. Additional thanks to Rachel Shu for research and writing assistance. We also received help and feedback from many other people over the course of this process—a full list is in the “Acknowledgements” section of the report.Summary of Expert ReviewWe asked biosecurity and indoor air quality experts to review this report: Dr. Richard Bruns of the John Hopkins Center for Health Security, Dr. Jacob Bueno de Mesquita and Dr. Alexandra Johnson of Lawrence Berkeley National Lab, Dr. David Manheim of ALTER, and Professor Shelly Miller of the University of Colorado.These experts suggested a variety of both minor and substantive changes to the document, though these changes do not alter the overall conclusion of the report that indoor air safety is an important lever for reducing GCBRs and that there are several high-leverage funding opportunities around promoting indoor air quality and specific air cleaning interventions.The main changes suggested were:Providing confidence intervals on key estimates, such as our estimate of the overall impact of IAQ interventions, and reframing certain estimates to improve clarity.Modifying the phrasing around the section concerning ‘modelling’, to better clarify our position around the specific limitations of existing models (specifically that there aren’t models that move from the room and building-level transmission to population-level transmission).Clarifying the distinction between mechanical interventions, specific in-duct vs upper-room systems (254nm) and HVAC-filtration vs portable air cleaners and adding additional information about some interactions between different intervention typesAdding general public advocacy for indoor air quality as a funding opportunity and related research that could be done support advocacy efforts.Adding additional relevant literature and more minor details regarding indoor air quality across different sections.Improving the overall readability of the report, by removing repetitive elements.Report Executive Summary(Full report available here.)Top-line summaryMost efforts to address indoor air quality (IAQ) do not address airborne pathogen levels, and creating indoor air quality standards that include airborne pathogen levels could meaningfully reduce global catastrophic biorisk from pandemics.We estimate that an ideal adoption of indoor air quality interventions, like ventilation, filtration, and ultraviolet germicidal irradiation (GUV) in all public buildings in the US, would reduce overall population transmission of respiratory illnesses by 30-75%, with a median estimate of 52.5%.Bottlenecks inhibiting the mass deployment of these technologies include a lack of clear standards, cost of implementation, and difficulty changing regulation/public attitudes.The following actions can accelerate deployment and improve IAQ to red...]]>
Gavriel Kleinwaks https://forum.effectivealtruism.org/posts/SsZ4AqmBdgrfN6hfz/air-safety-to-combat-global-catastrophic-biorisks-revised Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Air Safety to Combat Global Catastrophic Biorisks [REVISED], published by Gavriel Kleinwaks on May 3, 2023 on The Effective Altruism Forum.This report is a collaboration between researchers from 1Day Sooner and Rethink Priorities.OverviewThis post is a revision of a report previously published on how improvements in indoor air quality can address global catastrophic risk from pandemics. After feedback from expert reviewers, we revised the report in accordance with comments. The comments greatly improved the report and we consider the earlier version to be misphrased, misleading, or mathematically underspecified in several places, but we are leaving the post available to illustrate the revision process.Unlike in the previous post, we are not including the full report, given its length. Instead, this post contains a summary of the reviews and of the report, with a link to the full report.Many thanks to the expert reviewers (listed below) for their detailed feedback. Additional thanks to Rachel Shu for research and writing assistance. We also received help and feedback from many other people over the course of this process—a full list is in the “Acknowledgements” section of the report.Summary of Expert ReviewWe asked biosecurity and indoor air quality experts to review this report: Dr. Richard Bruns of the John Hopkins Center for Health Security, Dr. Jacob Bueno de Mesquita and Dr. Alexandra Johnson of Lawrence Berkeley National Lab, Dr. David Manheim of ALTER, and Professor Shelly Miller of the University of Colorado.These experts suggested a variety of both minor and substantive changes to the document, though these changes do not alter the overall conclusion of the report that indoor air safety is an important lever for reducing GCBRs and that there are several high-leverage funding opportunities around promoting indoor air quality and specific air cleaning interventions.The main changes suggested were:Providing confidence intervals on key estimates, such as our estimate of the overall impact of IAQ interventions, and reframing certain estimates to improve clarity.Modifying the phrasing around the section concerning ‘modelling’, to better clarify our position around the specific limitations of existing models (specifically that there aren’t models that move from the room and building-level transmission to population-level transmission).Clarifying the distinction between mechanical interventions, specific in-duct vs upper-room systems (254nm) and HVAC-filtration vs portable air cleaners and adding additional information about some interactions between different intervention typesAdding general public advocacy for indoor air quality as a funding opportunity and related research that could be done support advocacy efforts.Adding additional relevant literature and more minor details regarding indoor air quality across different sections.Improving the overall readability of the report, by removing repetitive elements.Report Executive Summary(Full report available here.)Top-line summaryMost efforts to address indoor air quality (IAQ) do not address airborne pathogen levels, and creating indoor air quality standards that include airborne pathogen levels could meaningfully reduce global catastrophic biorisk from pandemics.We estimate that an ideal adoption of indoor air quality interventions, like ventilation, filtration, and ultraviolet germicidal irradiation (GUV) in all public buildings in the US, would reduce overall population transmission of respiratory illnesses by 30-75%, with a median estimate of 52.5%.Bottlenecks inhibiting the mass deployment of these technologies include a lack of clear standards, cost of implementation, and difficulty changing regulation/public attitudes.The following actions can accelerate deployment and improve IAQ to red...]]>
Thu, 04 May 2023 13:09:50 +0000 EA - Air Safety to Combat Global Catastrophic Biorisks [REVISED] by Gavriel Kleinwaks Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Air Safety to Combat Global Catastrophic Biorisks [REVISED], published by Gavriel Kleinwaks on May 3, 2023 on The Effective Altruism Forum.This report is a collaboration between researchers from 1Day Sooner and Rethink Priorities.OverviewThis post is a revision of a report previously published on how improvements in indoor air quality can address global catastrophic risk from pandemics. After feedback from expert reviewers, we revised the report in accordance with comments. The comments greatly improved the report and we consider the earlier version to be misphrased, misleading, or mathematically underspecified in several places, but we are leaving the post available to illustrate the revision process.Unlike in the previous post, we are not including the full report, given its length. Instead, this post contains a summary of the reviews and of the report, with a link to the full report.Many thanks to the expert reviewers (listed below) for their detailed feedback. Additional thanks to Rachel Shu for research and writing assistance. We also received help and feedback from many other people over the course of this process—a full list is in the “Acknowledgements” section of the report.Summary of Expert ReviewWe asked biosecurity and indoor air quality experts to review this report: Dr. Richard Bruns of the John Hopkins Center for Health Security, Dr. Jacob Bueno de Mesquita and Dr. Alexandra Johnson of Lawrence Berkeley National Lab, Dr. David Manheim of ALTER, and Professor Shelly Miller of the University of Colorado.These experts suggested a variety of both minor and substantive changes to the document, though these changes do not alter the overall conclusion of the report that indoor air safety is an important lever for reducing GCBRs and that there are several high-leverage funding opportunities around promoting indoor air quality and specific air cleaning interventions.The main changes suggested were:Providing confidence intervals on key estimates, such as our estimate of the overall impact of IAQ interventions, and reframing certain estimates to improve clarity.Modifying the phrasing around the section concerning ‘modelling’, to better clarify our position around the specific limitations of existing models (specifically that there aren’t models that move from the room and building-level transmission to population-level transmission).Clarifying the distinction between mechanical interventions, specific in-duct vs upper-room systems (254nm) and HVAC-filtration vs portable air cleaners and adding additional information about some interactions between different intervention typesAdding general public advocacy for indoor air quality as a funding opportunity and related research that could be done support advocacy efforts.Adding additional relevant literature and more minor details regarding indoor air quality across different sections.Improving the overall readability of the report, by removing repetitive elements.Report Executive Summary(Full report available here.)Top-line summaryMost efforts to address indoor air quality (IAQ) do not address airborne pathogen levels, and creating indoor air quality standards that include airborne pathogen levels could meaningfully reduce global catastrophic biorisk from pandemics.We estimate that an ideal adoption of indoor air quality interventions, like ventilation, filtration, and ultraviolet germicidal irradiation (GUV) in all public buildings in the US, would reduce overall population transmission of respiratory illnesses by 30-75%, with a median estimate of 52.5%.Bottlenecks inhibiting the mass deployment of these technologies include a lack of clear standards, cost of implementation, and difficulty changing regulation/public attitudes.The following actions can accelerate deployment and improve IAQ to red...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Air Safety to Combat Global Catastrophic Biorisks [REVISED], published by Gavriel Kleinwaks on May 3, 2023 on The Effective Altruism Forum.This report is a collaboration between researchers from 1Day Sooner and Rethink Priorities.OverviewThis post is a revision of a report previously published on how improvements in indoor air quality can address global catastrophic risk from pandemics. After feedback from expert reviewers, we revised the report in accordance with comments. The comments greatly improved the report and we consider the earlier version to be misphrased, misleading, or mathematically underspecified in several places, but we are leaving the post available to illustrate the revision process.Unlike in the previous post, we are not including the full report, given its length. Instead, this post contains a summary of the reviews and of the report, with a link to the full report.Many thanks to the expert reviewers (listed below) for their detailed feedback. Additional thanks to Rachel Shu for research and writing assistance. We also received help and feedback from many other people over the course of this process—a full list is in the “Acknowledgements” section of the report.Summary of Expert ReviewWe asked biosecurity and indoor air quality experts to review this report: Dr. Richard Bruns of the John Hopkins Center for Health Security, Dr. Jacob Bueno de Mesquita and Dr. Alexandra Johnson of Lawrence Berkeley National Lab, Dr. David Manheim of ALTER, and Professor Shelly Miller of the University of Colorado.These experts suggested a variety of both minor and substantive changes to the document, though these changes do not alter the overall conclusion of the report that indoor air safety is an important lever for reducing GCBRs and that there are several high-leverage funding opportunities around promoting indoor air quality and specific air cleaning interventions.The main changes suggested were:Providing confidence intervals on key estimates, such as our estimate of the overall impact of IAQ interventions, and reframing certain estimates to improve clarity.Modifying the phrasing around the section concerning ‘modelling’, to better clarify our position around the specific limitations of existing models (specifically that there aren’t models that move from the room and building-level transmission to population-level transmission).Clarifying the distinction between mechanical interventions, specific in-duct vs upper-room systems (254nm) and HVAC-filtration vs portable air cleaners and adding additional information about some interactions between different intervention typesAdding general public advocacy for indoor air quality as a funding opportunity and related research that could be done support advocacy efforts.Adding additional relevant literature and more minor details regarding indoor air quality across different sections.Improving the overall readability of the report, by removing repetitive elements.Report Executive Summary(Full report available here.)Top-line summaryMost efforts to address indoor air quality (IAQ) do not address airborne pathogen levels, and creating indoor air quality standards that include airborne pathogen levels could meaningfully reduce global catastrophic biorisk from pandemics.We estimate that an ideal adoption of indoor air quality interventions, like ventilation, filtration, and ultraviolet germicidal irradiation (GUV) in all public buildings in the US, would reduce overall population transmission of respiratory illnesses by 30-75%, with a median estimate of 52.5%.Bottlenecks inhibiting the mass deployment of these technologies include a lack of clear standards, cost of implementation, and difficulty changing regulation/public attitudes.The following actions can accelerate deployment and improve IAQ to red...]]>
Gavriel Kleinwaks https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:09 None full 5835
esA6ukJngGDMorMA8_NL_EA_EA EA - Here's a comprehensive fact sheet of almost all the ways animals are mistreated in factory farms by Omnizoid Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Here's a comprehensive fact sheet of almost all the ways animals are mistreated in factory farms, published by Omnizoid on May 4, 2023 on The Effective Altruism Forum.Crosspost of this on my blog.1 IntroductionSee, there's the difference between us. I don't care about animals at all. I'm well aware of the cramped, squalid, and altogether unpleasant conditions suffered by livestock animals on many factory farms, and I simply could not care less. I see animals as a natural resource to be exploited. I don't care about them any more than I care about the trees that were cut down to make my house.Random person on the bizarre anti-vegan subredditI’ve previously argued against factory farming at some length, arguing that it is the worst thing ever. Here I will just lay out the facts about factory farming. I will describe what happens to the 80 or so billion beings we factory farm every year, who scream in agony and terror in the great juggernauts of despair, whose cries we ignore. They scream because of us—because of our apathy, because of our demand for their flesh—and it’s about time that people learned exactly what is going on. Here I describe the horrors of factory farms, though if one is convinced that factory farms are evil, they should stop paying for their products—an act which demonstrably causes more animals to be tormented in concentration camp-esque conditions.If factory farms are as cruel as I suggest, then the obligation not to pay for them is a point of elementary morality. Anyone who is not a moral imbecile recognizes that it’s wrong to contribute to senseless cruelty for the sake of comparatively minor benefits. We all recognize it’s wrong to torture animals for pleasure—paying others to torture animals for our pleasure is similarly wrong. If factory farms are half as cruel as I make them out to be, then factory farming is inarguably the worst thing in human history. Around 99% of meat comes from factory farms—if you purchase meat without careful vetting, it almost definitely comes from a factory farm.Here, I’ll just describe the facts about what goes on in factory farms. Of course, this understates the case, because much of what goes on is secret—the meat industry has fought hard to make it impossible to film them. As Scully notesIt would be reasonable for the justices to ask themselves this question, too: If the use of gestation crates is proper and defensible animal husbandry, why has the NPPC lobbied to make it a crime to photograph that very practice?Here, I will show that factory farming is literally torture. This is not hyperbolic, but instead the obvious conclusion of a sober look at the facts. If we treated child molesters the way we treat billions of animals, we’d be condemned by the international community. The treatment of animals is unimaginably horrifying—evocative of the worst crimes in human history.Some may say that animals just cannot be tortured. But this is clearly a crazy view. If a person used pliers to cut off the toes of their pets, we’d regard that as torture. Unfortunately, what we do to billions of animals is far worse.2 PigsJust like those who defended slavery, the eaters of meat often have farcical notions about how the beings whose mistreatment they defend are treated. But unfortunately, the facts are quite different from those suggested by meat industry propaganda, and are worth reviewing.Excess pigs were roasted to death. Specifically, these pigs were killed by having hot steam enter the barn, at around 150 degrees, leading to them choking, suffocating, and roasting to death. It’s hard to see how an industry that chokes and burns beings to death can be said to be anything other than nightmarish, especially given that pigs are smarter than dogs.Factory-farmed pigs, while pregnant, are stuffed in tiny gestation cr...]]>
Omnizoid https://forum.effectivealtruism.org/posts/esA6ukJngGDMorMA8/here-s-a-comprehensive-fact-sheet-of-almost-all-the-ways Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Here's a comprehensive fact sheet of almost all the ways animals are mistreated in factory farms, published by Omnizoid on May 4, 2023 on The Effective Altruism Forum.Crosspost of this on my blog.1 IntroductionSee, there's the difference between us. I don't care about animals at all. I'm well aware of the cramped, squalid, and altogether unpleasant conditions suffered by livestock animals on many factory farms, and I simply could not care less. I see animals as a natural resource to be exploited. I don't care about them any more than I care about the trees that were cut down to make my house.Random person on the bizarre anti-vegan subredditI’ve previously argued against factory farming at some length, arguing that it is the worst thing ever. Here I will just lay out the facts about factory farming. I will describe what happens to the 80 or so billion beings we factory farm every year, who scream in agony and terror in the great juggernauts of despair, whose cries we ignore. They scream because of us—because of our apathy, because of our demand for their flesh—and it’s about time that people learned exactly what is going on. Here I describe the horrors of factory farms, though if one is convinced that factory farms are evil, they should stop paying for their products—an act which demonstrably causes more animals to be tormented in concentration camp-esque conditions.If factory farms are as cruel as I suggest, then the obligation not to pay for them is a point of elementary morality. Anyone who is not a moral imbecile recognizes that it’s wrong to contribute to senseless cruelty for the sake of comparatively minor benefits. We all recognize it’s wrong to torture animals for pleasure—paying others to torture animals for our pleasure is similarly wrong. If factory farms are half as cruel as I make them out to be, then factory farming is inarguably the worst thing in human history. Around 99% of meat comes from factory farms—if you purchase meat without careful vetting, it almost definitely comes from a factory farm.Here, I’ll just describe the facts about what goes on in factory farms. Of course, this understates the case, because much of what goes on is secret—the meat industry has fought hard to make it impossible to film them. As Scully notesIt would be reasonable for the justices to ask themselves this question, too: If the use of gestation crates is proper and defensible animal husbandry, why has the NPPC lobbied to make it a crime to photograph that very practice?Here, I will show that factory farming is literally torture. This is not hyperbolic, but instead the obvious conclusion of a sober look at the facts. If we treated child molesters the way we treat billions of animals, we’d be condemned by the international community. The treatment of animals is unimaginably horrifying—evocative of the worst crimes in human history.Some may say that animals just cannot be tortured. But this is clearly a crazy view. If a person used pliers to cut off the toes of their pets, we’d regard that as torture. Unfortunately, what we do to billions of animals is far worse.2 PigsJust like those who defended slavery, the eaters of meat often have farcical notions about how the beings whose mistreatment they defend are treated. But unfortunately, the facts are quite different from those suggested by meat industry propaganda, and are worth reviewing.Excess pigs were roasted to death. Specifically, these pigs were killed by having hot steam enter the barn, at around 150 degrees, leading to them choking, suffocating, and roasting to death. It’s hard to see how an industry that chokes and burns beings to death can be said to be anything other than nightmarish, especially given that pigs are smarter than dogs.Factory-farmed pigs, while pregnant, are stuffed in tiny gestation cr...]]>
Thu, 04 May 2023 12:43:22 +0000 EA - Here's a comprehensive fact sheet of almost all the ways animals are mistreated in factory farms by Omnizoid Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Here's a comprehensive fact sheet of almost all the ways animals are mistreated in factory farms, published by Omnizoid on May 4, 2023 on The Effective Altruism Forum.Crosspost of this on my blog.1 IntroductionSee, there's the difference between us. I don't care about animals at all. I'm well aware of the cramped, squalid, and altogether unpleasant conditions suffered by livestock animals on many factory farms, and I simply could not care less. I see animals as a natural resource to be exploited. I don't care about them any more than I care about the trees that were cut down to make my house.Random person on the bizarre anti-vegan subredditI’ve previously argued against factory farming at some length, arguing that it is the worst thing ever. Here I will just lay out the facts about factory farming. I will describe what happens to the 80 or so billion beings we factory farm every year, who scream in agony and terror in the great juggernauts of despair, whose cries we ignore. They scream because of us—because of our apathy, because of our demand for their flesh—and it’s about time that people learned exactly what is going on. Here I describe the horrors of factory farms, though if one is convinced that factory farms are evil, they should stop paying for their products—an act which demonstrably causes more animals to be tormented in concentration camp-esque conditions.If factory farms are as cruel as I suggest, then the obligation not to pay for them is a point of elementary morality. Anyone who is not a moral imbecile recognizes that it’s wrong to contribute to senseless cruelty for the sake of comparatively minor benefits. We all recognize it’s wrong to torture animals for pleasure—paying others to torture animals for our pleasure is similarly wrong. If factory farms are half as cruel as I make them out to be, then factory farming is inarguably the worst thing in human history. Around 99% of meat comes from factory farms—if you purchase meat without careful vetting, it almost definitely comes from a factory farm.Here, I’ll just describe the facts about what goes on in factory farms. Of course, this understates the case, because much of what goes on is secret—the meat industry has fought hard to make it impossible to film them. As Scully notesIt would be reasonable for the justices to ask themselves this question, too: If the use of gestation crates is proper and defensible animal husbandry, why has the NPPC lobbied to make it a crime to photograph that very practice?Here, I will show that factory farming is literally torture. This is not hyperbolic, but instead the obvious conclusion of a sober look at the facts. If we treated child molesters the way we treat billions of animals, we’d be condemned by the international community. The treatment of animals is unimaginably horrifying—evocative of the worst crimes in human history.Some may say that animals just cannot be tortured. But this is clearly a crazy view. If a person used pliers to cut off the toes of their pets, we’d regard that as torture. Unfortunately, what we do to billions of animals is far worse.2 PigsJust like those who defended slavery, the eaters of meat often have farcical notions about how the beings whose mistreatment they defend are treated. But unfortunately, the facts are quite different from those suggested by meat industry propaganda, and are worth reviewing.Excess pigs were roasted to death. Specifically, these pigs were killed by having hot steam enter the barn, at around 150 degrees, leading to them choking, suffocating, and roasting to death. It’s hard to see how an industry that chokes and burns beings to death can be said to be anything other than nightmarish, especially given that pigs are smarter than dogs.Factory-farmed pigs, while pregnant, are stuffed in tiny gestation cr...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Here's a comprehensive fact sheet of almost all the ways animals are mistreated in factory farms, published by Omnizoid on May 4, 2023 on The Effective Altruism Forum.Crosspost of this on my blog.1 IntroductionSee, there's the difference between us. I don't care about animals at all. I'm well aware of the cramped, squalid, and altogether unpleasant conditions suffered by livestock animals on many factory farms, and I simply could not care less. I see animals as a natural resource to be exploited. I don't care about them any more than I care about the trees that were cut down to make my house.Random person on the bizarre anti-vegan subredditI’ve previously argued against factory farming at some length, arguing that it is the worst thing ever. Here I will just lay out the facts about factory farming. I will describe what happens to the 80 or so billion beings we factory farm every year, who scream in agony and terror in the great juggernauts of despair, whose cries we ignore. They scream because of us—because of our apathy, because of our demand for their flesh—and it’s about time that people learned exactly what is going on. Here I describe the horrors of factory farms, though if one is convinced that factory farms are evil, they should stop paying for their products—an act which demonstrably causes more animals to be tormented in concentration camp-esque conditions.If factory farms are as cruel as I suggest, then the obligation not to pay for them is a point of elementary morality. Anyone who is not a moral imbecile recognizes that it’s wrong to contribute to senseless cruelty for the sake of comparatively minor benefits. We all recognize it’s wrong to torture animals for pleasure—paying others to torture animals for our pleasure is similarly wrong. If factory farms are half as cruel as I make them out to be, then factory farming is inarguably the worst thing in human history. Around 99% of meat comes from factory farms—if you purchase meat without careful vetting, it almost definitely comes from a factory farm.Here, I’ll just describe the facts about what goes on in factory farms. Of course, this understates the case, because much of what goes on is secret—the meat industry has fought hard to make it impossible to film them. As Scully notesIt would be reasonable for the justices to ask themselves this question, too: If the use of gestation crates is proper and defensible animal husbandry, why has the NPPC lobbied to make it a crime to photograph that very practice?Here, I will show that factory farming is literally torture. This is not hyperbolic, but instead the obvious conclusion of a sober look at the facts. If we treated child molesters the way we treat billions of animals, we’d be condemned by the international community. The treatment of animals is unimaginably horrifying—evocative of the worst crimes in human history.Some may say that animals just cannot be tortured. But this is clearly a crazy view. If a person used pliers to cut off the toes of their pets, we’d regard that as torture. Unfortunately, what we do to billions of animals is far worse.2 PigsJust like those who defended slavery, the eaters of meat often have farcical notions about how the beings whose mistreatment they defend are treated. But unfortunately, the facts are quite different from those suggested by meat industry propaganda, and are worth reviewing.Excess pigs were roasted to death. Specifically, these pigs were killed by having hot steam enter the barn, at around 150 degrees, leading to them choking, suffocating, and roasting to death. It’s hard to see how an industry that chokes and burns beings to death can be said to be anything other than nightmarish, especially given that pigs are smarter than dogs.Factory-farmed pigs, while pregnant, are stuffed in tiny gestation cr...]]>
Omnizoid https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 44:18 None full 5834
TyWj8ak5spu5XgHM8_NL_EA_EA EA - How Engineers can Contribute to Civilisation Resilience by Jessica Wen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Engineers can Contribute to Civilisation Resilience, published by Jessica Wen on May 3, 2023 on The Effective Altruism Forum.Cross-posted from the High Impact Engineers Resource Portal. You can view the most up-to-date version on the Portal.SummaryCivilisation resilience is concerned with both reducing the risk of civilisation collapse and increasing the capability for humanity to recover from such a collapse. A collapse of civilisation would likely cause a great deal of suffering and may jeopardise the future of the human race. We can defend against such risks by reducing the chances that a localised catastrophe starts, that it scales up to a global catastrophe, or that it triggers irreversible civilisation collapse. Many facets of our defence layers are physical, meaning there are many opportunities for engineers to contribute to improving humanity's resilience.UncertaintyThe content of this article is largely based on research by 80,000 Hours, the Future of Humanity Institute, the Centre for the Study of Existential Risk, and ALLFED. We feel somewhat confident in the recommendations in this article.What is civilisation resilience?The industrial revolution gave humanity access to unprecedented amounts of valuable and lifesaving technologies and improved the lives we are able to live immensely. However, a global catastrophe could put unprecedented strain on the infrastructure — global agriculture, energy, industry, intercontinental shipping, communications, etc. — that enables civilisation as we know it today. If these systems were to collapse, would we be able to recover and return to the state of civilisation we have today?Could we re-industrialise? Would this be possible without easy access to fossil fuels, minerals, and chemicals? Could we rebuild flourishing global societies and infrastructure if there was a breakdown of international relations? Questions such as these fall under the purview of civilisation resilience.Civilisation resilience focuses on how we can buttress civilisation against collapse and increase our ability to recover from a collapse if it did occur.A framework for thinking about civilisation resilienceHaving a framework with which to analyse the risks and prioritise the strengthening of our defences is useful to sharpen our focus and direct our efforts to bolstering civilisation resilience. The paper Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter(Cotton-Barratt, Daniel and Sandberg) introduces a framework that breaks down protection against extinction risk into three layers of defence (figure 1). This framework is equally applicable to civilisation collapse given civilisation collapse is a precursor to human extinction.In evaluating extinction risk, the defence layers protect against an event becoming a catastrophe, scaling to a global catastrophe, and then wiping out the human race. When considering a given catastrophe, the following three defence layers are proposed:Prevention — how can we stop a catastrophe from starting?Response — how do we stop it from scaling up to a global catastrophe?Resilience — how does a global catastrophe get everyone?Figure 1: The three layers of defence against extinction risk (Cotton-Barratt, Daniel, Sandberg)One advantage of this characterisation framework is that it can be used to evaluate where the weaknesses are in humanity’s defence against a given catastrophe. If we consider a given catastrophic risk, , we can define an extinction probability,where:is the probability that risk is not prevented.is the probability that the risk gets past the response stage, given that it was not prevented.is the probability that the risk causes human extinction, given that it got past the response stage and became a global catastrophe.Within th...]]>
Jessica Wen https://forum.effectivealtruism.org/posts/TyWj8ak5spu5XgHM8/how-engineers-can-contribute-to-civilisation-resilience Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Engineers can Contribute to Civilisation Resilience, published by Jessica Wen on May 3, 2023 on The Effective Altruism Forum.Cross-posted from the High Impact Engineers Resource Portal. You can view the most up-to-date version on the Portal.SummaryCivilisation resilience is concerned with both reducing the risk of civilisation collapse and increasing the capability for humanity to recover from such a collapse. A collapse of civilisation would likely cause a great deal of suffering and may jeopardise the future of the human race. We can defend against such risks by reducing the chances that a localised catastrophe starts, that it scales up to a global catastrophe, or that it triggers irreversible civilisation collapse. Many facets of our defence layers are physical, meaning there are many opportunities for engineers to contribute to improving humanity's resilience.UncertaintyThe content of this article is largely based on research by 80,000 Hours, the Future of Humanity Institute, the Centre for the Study of Existential Risk, and ALLFED. We feel somewhat confident in the recommendations in this article.What is civilisation resilience?The industrial revolution gave humanity access to unprecedented amounts of valuable and lifesaving technologies and improved the lives we are able to live immensely. However, a global catastrophe could put unprecedented strain on the infrastructure — global agriculture, energy, industry, intercontinental shipping, communications, etc. — that enables civilisation as we know it today. If these systems were to collapse, would we be able to recover and return to the state of civilisation we have today?Could we re-industrialise? Would this be possible without easy access to fossil fuels, minerals, and chemicals? Could we rebuild flourishing global societies and infrastructure if there was a breakdown of international relations? Questions such as these fall under the purview of civilisation resilience.Civilisation resilience focuses on how we can buttress civilisation against collapse and increase our ability to recover from a collapse if it did occur.A framework for thinking about civilisation resilienceHaving a framework with which to analyse the risks and prioritise the strengthening of our defences is useful to sharpen our focus and direct our efforts to bolstering civilisation resilience. The paper Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter(Cotton-Barratt, Daniel and Sandberg) introduces a framework that breaks down protection against extinction risk into three layers of defence (figure 1). This framework is equally applicable to civilisation collapse given civilisation collapse is a precursor to human extinction.In evaluating extinction risk, the defence layers protect against an event becoming a catastrophe, scaling to a global catastrophe, and then wiping out the human race. When considering a given catastrophe, the following three defence layers are proposed:Prevention — how can we stop a catastrophe from starting?Response — how do we stop it from scaling up to a global catastrophe?Resilience — how does a global catastrophe get everyone?Figure 1: The three layers of defence against extinction risk (Cotton-Barratt, Daniel, Sandberg)One advantage of this characterisation framework is that it can be used to evaluate where the weaknesses are in humanity’s defence against a given catastrophe. If we consider a given catastrophic risk, , we can define an extinction probability,where:is the probability that risk is not prevented.is the probability that the risk gets past the response stage, given that it was not prevented.is the probability that the risk causes human extinction, given that it got past the response stage and became a global catastrophe.Within th...]]>
Wed, 03 May 2023 21:29:55 +0000 EA - How Engineers can Contribute to Civilisation Resilience by Jessica Wen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Engineers can Contribute to Civilisation Resilience, published by Jessica Wen on May 3, 2023 on The Effective Altruism Forum.Cross-posted from the High Impact Engineers Resource Portal. You can view the most up-to-date version on the Portal.SummaryCivilisation resilience is concerned with both reducing the risk of civilisation collapse and increasing the capability for humanity to recover from such a collapse. A collapse of civilisation would likely cause a great deal of suffering and may jeopardise the future of the human race. We can defend against such risks by reducing the chances that a localised catastrophe starts, that it scales up to a global catastrophe, or that it triggers irreversible civilisation collapse. Many facets of our defence layers are physical, meaning there are many opportunities for engineers to contribute to improving humanity's resilience.UncertaintyThe content of this article is largely based on research by 80,000 Hours, the Future of Humanity Institute, the Centre for the Study of Existential Risk, and ALLFED. We feel somewhat confident in the recommendations in this article.What is civilisation resilience?The industrial revolution gave humanity access to unprecedented amounts of valuable and lifesaving technologies and improved the lives we are able to live immensely. However, a global catastrophe could put unprecedented strain on the infrastructure — global agriculture, energy, industry, intercontinental shipping, communications, etc. — that enables civilisation as we know it today. If these systems were to collapse, would we be able to recover and return to the state of civilisation we have today?Could we re-industrialise? Would this be possible without easy access to fossil fuels, minerals, and chemicals? Could we rebuild flourishing global societies and infrastructure if there was a breakdown of international relations? Questions such as these fall under the purview of civilisation resilience.Civilisation resilience focuses on how we can buttress civilisation against collapse and increase our ability to recover from a collapse if it did occur.A framework for thinking about civilisation resilienceHaving a framework with which to analyse the risks and prioritise the strengthening of our defences is useful to sharpen our focus and direct our efforts to bolstering civilisation resilience. The paper Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter(Cotton-Barratt, Daniel and Sandberg) introduces a framework that breaks down protection against extinction risk into three layers of defence (figure 1). This framework is equally applicable to civilisation collapse given civilisation collapse is a precursor to human extinction.In evaluating extinction risk, the defence layers protect against an event becoming a catastrophe, scaling to a global catastrophe, and then wiping out the human race. When considering a given catastrophe, the following three defence layers are proposed:Prevention — how can we stop a catastrophe from starting?Response — how do we stop it from scaling up to a global catastrophe?Resilience — how does a global catastrophe get everyone?Figure 1: The three layers of defence against extinction risk (Cotton-Barratt, Daniel, Sandberg)One advantage of this characterisation framework is that it can be used to evaluate where the weaknesses are in humanity’s defence against a given catastrophe. If we consider a given catastrophic risk, , we can define an extinction probability,where:is the probability that risk is not prevented.is the probability that the risk gets past the response stage, given that it was not prevented.is the probability that the risk causes human extinction, given that it got past the response stage and became a global catastrophe.Within th...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Engineers can Contribute to Civilisation Resilience, published by Jessica Wen on May 3, 2023 on The Effective Altruism Forum.Cross-posted from the High Impact Engineers Resource Portal. You can view the most up-to-date version on the Portal.SummaryCivilisation resilience is concerned with both reducing the risk of civilisation collapse and increasing the capability for humanity to recover from such a collapse. A collapse of civilisation would likely cause a great deal of suffering and may jeopardise the future of the human race. We can defend against such risks by reducing the chances that a localised catastrophe starts, that it scales up to a global catastrophe, or that it triggers irreversible civilisation collapse. Many facets of our defence layers are physical, meaning there are many opportunities for engineers to contribute to improving humanity's resilience.UncertaintyThe content of this article is largely based on research by 80,000 Hours, the Future of Humanity Institute, the Centre for the Study of Existential Risk, and ALLFED. We feel somewhat confident in the recommendations in this article.What is civilisation resilience?The industrial revolution gave humanity access to unprecedented amounts of valuable and lifesaving technologies and improved the lives we are able to live immensely. However, a global catastrophe could put unprecedented strain on the infrastructure — global agriculture, energy, industry, intercontinental shipping, communications, etc. — that enables civilisation as we know it today. If these systems were to collapse, would we be able to recover and return to the state of civilisation we have today?Could we re-industrialise? Would this be possible without easy access to fossil fuels, minerals, and chemicals? Could we rebuild flourishing global societies and infrastructure if there was a breakdown of international relations? Questions such as these fall under the purview of civilisation resilience.Civilisation resilience focuses on how we can buttress civilisation against collapse and increase our ability to recover from a collapse if it did occur.A framework for thinking about civilisation resilienceHaving a framework with which to analyse the risks and prioritise the strengthening of our defences is useful to sharpen our focus and direct our efforts to bolstering civilisation resilience. The paper Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter(Cotton-Barratt, Daniel and Sandberg) introduces a framework that breaks down protection against extinction risk into three layers of defence (figure 1). This framework is equally applicable to civilisation collapse given civilisation collapse is a precursor to human extinction.In evaluating extinction risk, the defence layers protect against an event becoming a catastrophe, scaling to a global catastrophe, and then wiping out the human race. When considering a given catastrophe, the following three defence layers are proposed:Prevention — how can we stop a catastrophe from starting?Response — how do we stop it from scaling up to a global catastrophe?Resilience — how does a global catastrophe get everyone?Figure 1: The three layers of defence against extinction risk (Cotton-Barratt, Daniel, Sandberg)One advantage of this characterisation framework is that it can be used to evaluate where the weaknesses are in humanity’s defence against a given catastrophe. If we consider a given catastrophic risk, , we can define an extinction probability,where:is the probability that risk is not prevented.is the probability that the risk gets past the response stage, given that it was not prevented.is the probability that the risk causes human extinction, given that it got past the response stage and became a global catastrophe.Within th...]]>
Jessica Wen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:14 None full 5826
23P6XCcGdrGGFrN6Z_NL_EA_EA EA - Test fit for roles / job types / work types, not cause areas by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Test fit for roles / job types / work types, not cause areas, published by freedomandutility on May 3, 2023 on The Effective Altruism Forum.I often see university EAs aiming to do research projects to test their fit for specific cause areas. I don't think this is a good idea.I think if you felt you were good or bad fit for a research project, either you were a good or bad fit for research generally or a specific style of research (qualitative, quantitative, philosophical, primary, secondary, wet-lab, programming, focus groups, interviews, questionnaires, clinical trials).For example, it seems very unlikely to me that someone who disliked wet-lab research in biosecurity will enjoy wet-lab research in alternative proteins, but it seems less unlikely that someone who disliked wet-lab research in biosecurity will enjoy dry-lab research in biosecurity.Similarly, if you enjoyed literature review based research in one cause areas, I think you are likely to enjoy the same type of research across a range of different cause areas (provided you consider the cause areas to be highly impactful).I think decisions on cause areas should be made primarily on your views on what is most impactful (whilst avoiding single player thinking and considering any comparative advantages your background may give you), but decisions on roles / job types / work types should heavily consider what you have enjoyed and have done well.I think rather than testing fit for particular cause areas, students should test fit for different roles / job types / work types, such as entrepreneurship / operations, policy / advocacy and a range of different types of research.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
freedomandutility https://forum.effectivealtruism.org/posts/23P6XCcGdrGGFrN6Z/test-fit-for-roles-job-types-work-types-not-cause-areas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Test fit for roles / job types / work types, not cause areas, published by freedomandutility on May 3, 2023 on The Effective Altruism Forum.I often see university EAs aiming to do research projects to test their fit for specific cause areas. I don't think this is a good idea.I think if you felt you were good or bad fit for a research project, either you were a good or bad fit for research generally or a specific style of research (qualitative, quantitative, philosophical, primary, secondary, wet-lab, programming, focus groups, interviews, questionnaires, clinical trials).For example, it seems very unlikely to me that someone who disliked wet-lab research in biosecurity will enjoy wet-lab research in alternative proteins, but it seems less unlikely that someone who disliked wet-lab research in biosecurity will enjoy dry-lab research in biosecurity.Similarly, if you enjoyed literature review based research in one cause areas, I think you are likely to enjoy the same type of research across a range of different cause areas (provided you consider the cause areas to be highly impactful).I think decisions on cause areas should be made primarily on your views on what is most impactful (whilst avoiding single player thinking and considering any comparative advantages your background may give you), but decisions on roles / job types / work types should heavily consider what you have enjoyed and have done well.I think rather than testing fit for particular cause areas, students should test fit for different roles / job types / work types, such as entrepreneurship / operations, policy / advocacy and a range of different types of research.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 03 May 2023 17:52:21 +0000 EA - Test fit for roles / job types / work types, not cause areas by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Test fit for roles / job types / work types, not cause areas, published by freedomandutility on May 3, 2023 on The Effective Altruism Forum.I often see university EAs aiming to do research projects to test their fit for specific cause areas. I don't think this is a good idea.I think if you felt you were good or bad fit for a research project, either you were a good or bad fit for research generally or a specific style of research (qualitative, quantitative, philosophical, primary, secondary, wet-lab, programming, focus groups, interviews, questionnaires, clinical trials).For example, it seems very unlikely to me that someone who disliked wet-lab research in biosecurity will enjoy wet-lab research in alternative proteins, but it seems less unlikely that someone who disliked wet-lab research in biosecurity will enjoy dry-lab research in biosecurity.Similarly, if you enjoyed literature review based research in one cause areas, I think you are likely to enjoy the same type of research across a range of different cause areas (provided you consider the cause areas to be highly impactful).I think decisions on cause areas should be made primarily on your views on what is most impactful (whilst avoiding single player thinking and considering any comparative advantages your background may give you), but decisions on roles / job types / work types should heavily consider what you have enjoyed and have done well.I think rather than testing fit for particular cause areas, students should test fit for different roles / job types / work types, such as entrepreneurship / operations, policy / advocacy and a range of different types of research.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Test fit for roles / job types / work types, not cause areas, published by freedomandutility on May 3, 2023 on The Effective Altruism Forum.I often see university EAs aiming to do research projects to test their fit for specific cause areas. I don't think this is a good idea.I think if you felt you were good or bad fit for a research project, either you were a good or bad fit for research generally or a specific style of research (qualitative, quantitative, philosophical, primary, secondary, wet-lab, programming, focus groups, interviews, questionnaires, clinical trials).For example, it seems very unlikely to me that someone who disliked wet-lab research in biosecurity will enjoy wet-lab research in alternative proteins, but it seems less unlikely that someone who disliked wet-lab research in biosecurity will enjoy dry-lab research in biosecurity.Similarly, if you enjoyed literature review based research in one cause areas, I think you are likely to enjoy the same type of research across a range of different cause areas (provided you consider the cause areas to be highly impactful).I think decisions on cause areas should be made primarily on your views on what is most impactful (whilst avoiding single player thinking and considering any comparative advantages your background may give you), but decisions on roles / job types / work types should heavily consider what you have enjoyed and have done well.I think rather than testing fit for particular cause areas, students should test fit for different roles / job types / work types, such as entrepreneurship / operations, policy / advocacy and a range of different types of research.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
freedomandutility https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:46 None full 5822
Ksqero4BmGFs8qfiC_NL_EA_EA EA - [AISN #4]: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks by Center for AI Safety Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [AISN #4]: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks, published by Center for AI Safety on May 2, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.Cybersecurity Challenges in AI SafetyMeta accidentally leaks a language model to the public. Meta’s newest language model, LLaMa, was publicly leaked online against the intentions of its developers. Gradual rollout is a popular goal with new AI models, opening access to academic researchers and government officials before sharing models with anonymous internet users. Meta intended to use this strategy, but within a week of sharing the model with an approved list of researchers, an unknown person who had been given access to the model publicly posted it online.How can AI developers selectively share their models? One inspiration could be the film industry, which places watermarks and tracking technology on “screener” copies of movies sent out to critics before the movie’s official release. AI equivalents could involve encrypting model weights or inserting undetectable Trojans to identify individual copies of a model. Yet efforts to cooperate with other AI companies could face legal opposition under antitrust law. As the LLaMa leak demonstrates, we don’t yet have good ways to share AI models securely.LLaMa leak. March 2023, colorized.OpenAI faces their own cybersecurity problems. ChatGPT recently leaked user data including conversation histories, email addresses, and payment information. Businesses including JPMorgan, Amazon, and Verizon prohibit employees from using ChatGPT because of data privacy concerns, though OpenAI is trying to assuage those concerns with a business subscription plan where OpenAI promises not to train models on the data of business users. OpenAI also started a bug bounty program that pays people to find security vulnerabilities.AI can help hackers create novel cyberattacks. Code writing tools open up the possibility of new kinds of cyberattacks. CyberArk, an information security firm, recently showed that OpenAI’s code generation tool can be used to create adaptive malware that writes new lines of code while hacking into a system in order to bypass cyberdefenses. GPT-4 has also been shown capable of hacking into password management systems, convincing humans to help it bypass CAPTCHA verification, and performing coding challenges in offensive cybersecurity.The threat of automated cyberattacks is no surprise given previous research on the topic. One possibility for mitigating the threat involves using AI for cyberdefense. Microsoft is beginning an initiative to use AI for cyberdefense, but the tools are not yet publicly available.Artificial Influence: An Analysis Of AI-Driven PersuasionFormer CAIS affiliate Thomas Woodside and his colleague Matthew Bartell released a paper titled Artificial influence: An analysis of AI-driven persuasion.The abstract for the paper is as follows:Persuasion is a key aspect of what it means to be human, and is central to business, politics, and other endeavors. Advancements in artificial intelligence (AI) have produced AI systems that are capable of persuading humans to buy products, watch videos, click on search results, and more. Even systems that are not explicitly designed to persuade may do so in practice. In the future, increasingly anthropomorphic AI systems may form ongoing relationships with users, increasing their persuasive power. This paper investigates the uncertain future of persuasive AI systems. We examine ways that AI could qualitatively alter our relationship to and views regarding persuasion by shifting the balance of persuasi...]]>
Center for AI Safety https://forum.effectivealtruism.org/posts/Ksqero4BmGFs8qfiC/aisn-4-ai-and-cybersecurity-persuasive-ais-weaponization-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [AISN #4]: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks, published by Center for AI Safety on May 2, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.Cybersecurity Challenges in AI SafetyMeta accidentally leaks a language model to the public. Meta’s newest language model, LLaMa, was publicly leaked online against the intentions of its developers. Gradual rollout is a popular goal with new AI models, opening access to academic researchers and government officials before sharing models with anonymous internet users. Meta intended to use this strategy, but within a week of sharing the model with an approved list of researchers, an unknown person who had been given access to the model publicly posted it online.How can AI developers selectively share their models? One inspiration could be the film industry, which places watermarks and tracking technology on “screener” copies of movies sent out to critics before the movie’s official release. AI equivalents could involve encrypting model weights or inserting undetectable Trojans to identify individual copies of a model. Yet efforts to cooperate with other AI companies could face legal opposition under antitrust law. As the LLaMa leak demonstrates, we don’t yet have good ways to share AI models securely.LLaMa leak. March 2023, colorized.OpenAI faces their own cybersecurity problems. ChatGPT recently leaked user data including conversation histories, email addresses, and payment information. Businesses including JPMorgan, Amazon, and Verizon prohibit employees from using ChatGPT because of data privacy concerns, though OpenAI is trying to assuage those concerns with a business subscription plan where OpenAI promises not to train models on the data of business users. OpenAI also started a bug bounty program that pays people to find security vulnerabilities.AI can help hackers create novel cyberattacks. Code writing tools open up the possibility of new kinds of cyberattacks. CyberArk, an information security firm, recently showed that OpenAI’s code generation tool can be used to create adaptive malware that writes new lines of code while hacking into a system in order to bypass cyberdefenses. GPT-4 has also been shown capable of hacking into password management systems, convincing humans to help it bypass CAPTCHA verification, and performing coding challenges in offensive cybersecurity.The threat of automated cyberattacks is no surprise given previous research on the topic. One possibility for mitigating the threat involves using AI for cyberdefense. Microsoft is beginning an initiative to use AI for cyberdefense, but the tools are not yet publicly available.Artificial Influence: An Analysis Of AI-Driven PersuasionFormer CAIS affiliate Thomas Woodside and his colleague Matthew Bartell released a paper titled Artificial influence: An analysis of AI-driven persuasion.The abstract for the paper is as follows:Persuasion is a key aspect of what it means to be human, and is central to business, politics, and other endeavors. Advancements in artificial intelligence (AI) have produced AI systems that are capable of persuading humans to buy products, watch videos, click on search results, and more. Even systems that are not explicitly designed to persuade may do so in practice. In the future, increasingly anthropomorphic AI systems may form ongoing relationships with users, increasing their persuasive power. This paper investigates the uncertain future of persuasive AI systems. We examine ways that AI could qualitatively alter our relationship to and views regarding persuasion by shifting the balance of persuasi...]]>
Wed, 03 May 2023 03:23:37 +0000 EA - [AISN #4]: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks by Center for AI Safety Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [AISN #4]: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks, published by Center for AI Safety on May 2, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.Cybersecurity Challenges in AI SafetyMeta accidentally leaks a language model to the public. Meta’s newest language model, LLaMa, was publicly leaked online against the intentions of its developers. Gradual rollout is a popular goal with new AI models, opening access to academic researchers and government officials before sharing models with anonymous internet users. Meta intended to use this strategy, but within a week of sharing the model with an approved list of researchers, an unknown person who had been given access to the model publicly posted it online.How can AI developers selectively share their models? One inspiration could be the film industry, which places watermarks and tracking technology on “screener” copies of movies sent out to critics before the movie’s official release. AI equivalents could involve encrypting model weights or inserting undetectable Trojans to identify individual copies of a model. Yet efforts to cooperate with other AI companies could face legal opposition under antitrust law. As the LLaMa leak demonstrates, we don’t yet have good ways to share AI models securely.LLaMa leak. March 2023, colorized.OpenAI faces their own cybersecurity problems. ChatGPT recently leaked user data including conversation histories, email addresses, and payment information. Businesses including JPMorgan, Amazon, and Verizon prohibit employees from using ChatGPT because of data privacy concerns, though OpenAI is trying to assuage those concerns with a business subscription plan where OpenAI promises not to train models on the data of business users. OpenAI also started a bug bounty program that pays people to find security vulnerabilities.AI can help hackers create novel cyberattacks. Code writing tools open up the possibility of new kinds of cyberattacks. CyberArk, an information security firm, recently showed that OpenAI’s code generation tool can be used to create adaptive malware that writes new lines of code while hacking into a system in order to bypass cyberdefenses. GPT-4 has also been shown capable of hacking into password management systems, convincing humans to help it bypass CAPTCHA verification, and performing coding challenges in offensive cybersecurity.The threat of automated cyberattacks is no surprise given previous research on the topic. One possibility for mitigating the threat involves using AI for cyberdefense. Microsoft is beginning an initiative to use AI for cyberdefense, but the tools are not yet publicly available.Artificial Influence: An Analysis Of AI-Driven PersuasionFormer CAIS affiliate Thomas Woodside and his colleague Matthew Bartell released a paper titled Artificial influence: An analysis of AI-driven persuasion.The abstract for the paper is as follows:Persuasion is a key aspect of what it means to be human, and is central to business, politics, and other endeavors. Advancements in artificial intelligence (AI) have produced AI systems that are capable of persuading humans to buy products, watch videos, click on search results, and more. Even systems that are not explicitly designed to persuade may do so in practice. In the future, increasingly anthropomorphic AI systems may form ongoing relationships with users, increasing their persuasive power. This paper investigates the uncertain future of persuasive AI systems. We examine ways that AI could qualitatively alter our relationship to and views regarding persuasion by shifting the balance of persuasi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [AISN #4]: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks, published by Center for AI Safety on May 2, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.Cybersecurity Challenges in AI SafetyMeta accidentally leaks a language model to the public. Meta’s newest language model, LLaMa, was publicly leaked online against the intentions of its developers. Gradual rollout is a popular goal with new AI models, opening access to academic researchers and government officials before sharing models with anonymous internet users. Meta intended to use this strategy, but within a week of sharing the model with an approved list of researchers, an unknown person who had been given access to the model publicly posted it online.How can AI developers selectively share their models? One inspiration could be the film industry, which places watermarks and tracking technology on “screener” copies of movies sent out to critics before the movie’s official release. AI equivalents could involve encrypting model weights or inserting undetectable Trojans to identify individual copies of a model. Yet efforts to cooperate with other AI companies could face legal opposition under antitrust law. As the LLaMa leak demonstrates, we don’t yet have good ways to share AI models securely.LLaMa leak. March 2023, colorized.OpenAI faces their own cybersecurity problems. ChatGPT recently leaked user data including conversation histories, email addresses, and payment information. Businesses including JPMorgan, Amazon, and Verizon prohibit employees from using ChatGPT because of data privacy concerns, though OpenAI is trying to assuage those concerns with a business subscription plan where OpenAI promises not to train models on the data of business users. OpenAI also started a bug bounty program that pays people to find security vulnerabilities.AI can help hackers create novel cyberattacks. Code writing tools open up the possibility of new kinds of cyberattacks. CyberArk, an information security firm, recently showed that OpenAI’s code generation tool can be used to create adaptive malware that writes new lines of code while hacking into a system in order to bypass cyberdefenses. GPT-4 has also been shown capable of hacking into password management systems, convincing humans to help it bypass CAPTCHA verification, and performing coding challenges in offensive cybersecurity.The threat of automated cyberattacks is no surprise given previous research on the topic. One possibility for mitigating the threat involves using AI for cyberdefense. Microsoft is beginning an initiative to use AI for cyberdefense, but the tools are not yet publicly available.Artificial Influence: An Analysis Of AI-Driven PersuasionFormer CAIS affiliate Thomas Woodside and his colleague Matthew Bartell released a paper titled Artificial influence: An analysis of AI-driven persuasion.The abstract for the paper is as follows:Persuasion is a key aspect of what it means to be human, and is central to business, politics, and other endeavors. Advancements in artificial intelligence (AI) have produced AI systems that are capable of persuading humans to buy products, watch videos, click on search results, and more. Even systems that are not explicitly designed to persuade may do so in practice. In the future, increasingly anthropomorphic AI systems may form ongoing relationships with users, increasing their persuasive power. This paper investigates the uncertain future of persuasive AI systems. We examine ways that AI could qualitatively alter our relationship to and views regarding persuasion by shifting the balance of persuasi...]]>
Center for AI Safety https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:27 None full 5818
wzn7hEj3BSz7us7ge_NL_EA_EA EA - Summaries of top forum posts (24th - 30th April 2023) by Zoe Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summaries of top forum posts (24th - 30th April 2023), published by Zoe Williams on May 2, 2023 on The Effective Altruism Forum.We've just passed the half year mark for this project! If you're reading this, please consider taking this 5 minute survey - all questions optional. If you listen to the podcast version, we have a separate survey for that here. Thanks to everyone that has responded to this already!Back to our regularly scheduled intro...This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.If you'd like to receive these summaries via email, you can subscribe here.Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!Object Level Interventions / ReviewsAIby Guillem Bas, Jaime Sevilla, Mónica UlloaAuthor’s summary: “The European Union is designing a regulatory framework for artificial intelligence that could be approved by the end of 2023. This regulation prohibits unacceptable practices and stipulates requirements for AI systems in critical sectors. These obligations consist of a risk management system, a quality management system, and post-market monitoring. The legislation enforcement will be tested for the first time in Spain, in a regulatory sandbox of approximately three years. This will be a great opportunity to prepare the national ecosystem and influence the development of AI governance internationally. In this context, we present several policies to consider, including third-party auditing, the detection and evaluation of frontier AI models, red teaming exercises, and creating an incident database.”by Jaime SevillaPaper by Epoch. World record progressions in video game speedrunning fit very well to a power law pattern. Due to lack of longitudinal data, the authors can’t provide definitive evidence of power-law decay in Machine learning benchmark improvements (though it is a better model than assuming no improvement over time). However, if they assume this model, it would suggest that a) machine learning benchmarks aren’t close to saturation and b) sudden large improvements are infrequent but aren’t ruled out.No, the EMH does not imply that markets have long AGI timelinesby JakobArgues that interest rates are not a reliable instrument for assessing market beliefs about transformative AI (TAI) timelines, because of two reasons:Savvy investors have no incentive to bet on short timelines, because it will tie up their capital until it loses value (ie. they are dead, or they’re so rich it doesn’t matter).They do have incentive to increase personal consumption, as savings are less useful in a TAI future. However, they aren’t a large enough group to influence interest rates this way.This makes interest rates more of a poll of upper middle class consumers than investors, and reflects whether they believe that a) timelines are short and b) savings won’t be useful post-TAI (vs. eg. believing they are more useful, due to worries of losing their job to AI).by Lao MeinOn April 11th, the Cybersecurity Administration of China released a draft of “Management Measures for Generative Artificial Intelligence Services” for public comment. Some in the AI safety community think this is a positive sign that China is considering AI risk and may participate in a disarmament treaty. However, the author argues that it is just a PR statement, no-one in China is talking about it, and the focus if any is on near-term stability.They also note that the EA/Rationalist/AI Safety forums in China are mostly populated by expats or people physically outside of China, most posts are in English...]]>
Zoe Williams https://forum.effectivealtruism.org/posts/wzn7hEj3BSz7us7ge/summaries-of-top-forum-posts-24th-30th-april-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summaries of top forum posts (24th - 30th April 2023), published by Zoe Williams on May 2, 2023 on The Effective Altruism Forum.We've just passed the half year mark for this project! If you're reading this, please consider taking this 5 minute survey - all questions optional. If you listen to the podcast version, we have a separate survey for that here. Thanks to everyone that has responded to this already!Back to our regularly scheduled intro...This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.If you'd like to receive these summaries via email, you can subscribe here.Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!Object Level Interventions / ReviewsAIby Guillem Bas, Jaime Sevilla, Mónica UlloaAuthor’s summary: “The European Union is designing a regulatory framework for artificial intelligence that could be approved by the end of 2023. This regulation prohibits unacceptable practices and stipulates requirements for AI systems in critical sectors. These obligations consist of a risk management system, a quality management system, and post-market monitoring. The legislation enforcement will be tested for the first time in Spain, in a regulatory sandbox of approximately three years. This will be a great opportunity to prepare the national ecosystem and influence the development of AI governance internationally. In this context, we present several policies to consider, including third-party auditing, the detection and evaluation of frontier AI models, red teaming exercises, and creating an incident database.”by Jaime SevillaPaper by Epoch. World record progressions in video game speedrunning fit very well to a power law pattern. Due to lack of longitudinal data, the authors can’t provide definitive evidence of power-law decay in Machine learning benchmark improvements (though it is a better model than assuming no improvement over time). However, if they assume this model, it would suggest that a) machine learning benchmarks aren’t close to saturation and b) sudden large improvements are infrequent but aren’t ruled out.No, the EMH does not imply that markets have long AGI timelinesby JakobArgues that interest rates are not a reliable instrument for assessing market beliefs about transformative AI (TAI) timelines, because of two reasons:Savvy investors have no incentive to bet on short timelines, because it will tie up their capital until it loses value (ie. they are dead, or they’re so rich it doesn’t matter).They do have incentive to increase personal consumption, as savings are less useful in a TAI future. However, they aren’t a large enough group to influence interest rates this way.This makes interest rates more of a poll of upper middle class consumers than investors, and reflects whether they believe that a) timelines are short and b) savings won’t be useful post-TAI (vs. eg. believing they are more useful, due to worries of losing their job to AI).by Lao MeinOn April 11th, the Cybersecurity Administration of China released a draft of “Management Measures for Generative Artificial Intelligence Services” for public comment. Some in the AI safety community think this is a positive sign that China is considering AI risk and may participate in a disarmament treaty. However, the author argues that it is just a PR statement, no-one in China is talking about it, and the focus if any is on near-term stability.They also note that the EA/Rationalist/AI Safety forums in China are mostly populated by expats or people physically outside of China, most posts are in English...]]>
Wed, 03 May 2023 01:25:16 +0000 EA - Summaries of top forum posts (24th - 30th April 2023) by Zoe Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summaries of top forum posts (24th - 30th April 2023), published by Zoe Williams on May 2, 2023 on The Effective Altruism Forum.We've just passed the half year mark for this project! If you're reading this, please consider taking this 5 minute survey - all questions optional. If you listen to the podcast version, we have a separate survey for that here. Thanks to everyone that has responded to this already!Back to our regularly scheduled intro...This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.If you'd like to receive these summaries via email, you can subscribe here.Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!Object Level Interventions / ReviewsAIby Guillem Bas, Jaime Sevilla, Mónica UlloaAuthor’s summary: “The European Union is designing a regulatory framework for artificial intelligence that could be approved by the end of 2023. This regulation prohibits unacceptable practices and stipulates requirements for AI systems in critical sectors. These obligations consist of a risk management system, a quality management system, and post-market monitoring. The legislation enforcement will be tested for the first time in Spain, in a regulatory sandbox of approximately three years. This will be a great opportunity to prepare the national ecosystem and influence the development of AI governance internationally. In this context, we present several policies to consider, including third-party auditing, the detection and evaluation of frontier AI models, red teaming exercises, and creating an incident database.”by Jaime SevillaPaper by Epoch. World record progressions in video game speedrunning fit very well to a power law pattern. Due to lack of longitudinal data, the authors can’t provide definitive evidence of power-law decay in Machine learning benchmark improvements (though it is a better model than assuming no improvement over time). However, if they assume this model, it would suggest that a) machine learning benchmarks aren’t close to saturation and b) sudden large improvements are infrequent but aren’t ruled out.No, the EMH does not imply that markets have long AGI timelinesby JakobArgues that interest rates are not a reliable instrument for assessing market beliefs about transformative AI (TAI) timelines, because of two reasons:Savvy investors have no incentive to bet on short timelines, because it will tie up their capital until it loses value (ie. they are dead, or they’re so rich it doesn’t matter).They do have incentive to increase personal consumption, as savings are less useful in a TAI future. However, they aren’t a large enough group to influence interest rates this way.This makes interest rates more of a poll of upper middle class consumers than investors, and reflects whether they believe that a) timelines are short and b) savings won’t be useful post-TAI (vs. eg. believing they are more useful, due to worries of losing their job to AI).by Lao MeinOn April 11th, the Cybersecurity Administration of China released a draft of “Management Measures for Generative Artificial Intelligence Services” for public comment. Some in the AI safety community think this is a positive sign that China is considering AI risk and may participate in a disarmament treaty. However, the author argues that it is just a PR statement, no-one in China is talking about it, and the focus if any is on near-term stability.They also note that the EA/Rationalist/AI Safety forums in China are mostly populated by expats or people physically outside of China, most posts are in English...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summaries of top forum posts (24th - 30th April 2023), published by Zoe Williams on May 2, 2023 on The Effective Altruism Forum.We've just passed the half year mark for this project! If you're reading this, please consider taking this 5 minute survey - all questions optional. If you listen to the podcast version, we have a separate survey for that here. Thanks to everyone that has responded to this already!Back to our regularly scheduled intro...This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.If you'd like to receive these summaries via email, you can subscribe here.Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!Object Level Interventions / ReviewsAIby Guillem Bas, Jaime Sevilla, Mónica UlloaAuthor’s summary: “The European Union is designing a regulatory framework for artificial intelligence that could be approved by the end of 2023. This regulation prohibits unacceptable practices and stipulates requirements for AI systems in critical sectors. These obligations consist of a risk management system, a quality management system, and post-market monitoring. The legislation enforcement will be tested for the first time in Spain, in a regulatory sandbox of approximately three years. This will be a great opportunity to prepare the national ecosystem and influence the development of AI governance internationally. In this context, we present several policies to consider, including third-party auditing, the detection and evaluation of frontier AI models, red teaming exercises, and creating an incident database.”by Jaime SevillaPaper by Epoch. World record progressions in video game speedrunning fit very well to a power law pattern. Due to lack of longitudinal data, the authors can’t provide definitive evidence of power-law decay in Machine learning benchmark improvements (though it is a better model than assuming no improvement over time). However, if they assume this model, it would suggest that a) machine learning benchmarks aren’t close to saturation and b) sudden large improvements are infrequent but aren’t ruled out.No, the EMH does not imply that markets have long AGI timelinesby JakobArgues that interest rates are not a reliable instrument for assessing market beliefs about transformative AI (TAI) timelines, because of two reasons:Savvy investors have no incentive to bet on short timelines, because it will tie up their capital until it loses value (ie. they are dead, or they’re so rich it doesn’t matter).They do have incentive to increase personal consumption, as savings are less useful in a TAI future. However, they aren’t a large enough group to influence interest rates this way.This makes interest rates more of a poll of upper middle class consumers than investors, and reflects whether they believe that a) timelines are short and b) savings won’t be useful post-TAI (vs. eg. believing they are more useful, due to worries of losing their job to AI).by Lao MeinOn April 11th, the Cybersecurity Administration of China released a draft of “Management Measures for Generative Artificial Intelligence Services” for public comment. Some in the AI safety community think this is a positive sign that China is considering AI risk and may participate in a disarmament treaty. However, the author argues that it is just a PR statement, no-one in China is talking about it, and the focus if any is on near-term stability.They also note that the EA/Rationalist/AI Safety forums in China are mostly populated by expats or people physically outside of China, most posts are in English...]]>
Zoe Williams https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 16:40 None full 5817
ZKYpu4WAiwTXDSrX8_NL_EA_EA EA - Review of The Good It Promises, the Harm It Does by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of The Good It Promises, the Harm It Does, published by Richard Y Chappell on May 2, 2023 on The Effective Altruism Forum.[TL;DR: I didn't find much of value in the book. The quality of argumentation is worse than on most blogs I read. Maybe others will have better luck discerning any hidden gems in the mix?]The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism (eds. Adams, Crary, & Gruen) puts me in mind of Bastiat’s Candlestick Makers' Petition. For any proposed change—be it the invention of electricity, or even the sun rising—there will be some in a position to complain. This is a book of such complaints. There is much recounting of various “harms” caused by EA (primarily to social justice activists who are no longer as competitive for grant funding). But nowhere in the volume is there any serious attempt to compare these costs against the gains to others—especially the populations supposedly served by charitable work, as opposed to the workers themselves—to determine which is greater. (One gets the impression that cost-benefit analysis is too capitalistic for these authors to even consider.) The word “trade-off” does not appear in this volume.A second respect in which the book’s title may be misleading is that it is exclusively about the animal welfare wing of EA. (There is a ‘Coda’ that mentions longtermism, but merely to sneer at it. There was no substantive engagement with the ideas.)I personally didn’t find much of value in the volume, but I’ll start out by flagging what good I can. I’ll then briefly explain why I wasn’t much impressed with the rest—mainly by way of sharing representative quotes, so readers can judge it for themselves.The GoodThe more empirically-oriented chapters raise interesting challenges about animal advocacy strategy. We learn that EA funders have focused on two main strategies to reform or eventually upend animal agriculture: (i) corporate cage-free (and similar) campaigns, and (ii) investment in meat alternatives. Neither involves the sort of “grassroots” activism that the contributors to this volume prefer. So some of the authors discuss potential shortcomings of the above two strategies, and potential benefits of alternatives like (iii) vegan outreach in Black communities, and (iv) animal sanctuaries.I expect EAs will welcome discussion of the effectiveness of different strategies. That’s what the movement is all about, after all. By far the most constructive article in the volume (chapter 4, ‘Animal Advocacy’s Stockholm Syndrome’) noted that “cage- free campaigns. can be particularly tragic in a global context” where factory-farms are not yet ubiquitous:The conscientious urban, middle-class Indian consumer cannot see that there is a minor difference between the cage-free egg and the standard factory-farmed egg, and a massive gulf separating both of these from the traditionally produced egg [where birds freely roam their whole lives] for a simple reason: the animal protection groups the consumer is relying upon are pointing to the (factory- farmed) cage-free egg instead of alternatives to industrial farming. (p. 45)Such evidence of localized ineffectiveness (or counterproductivity) is certainly important to identify & take into account!There’s a larger issue (not really addressed in this volume) of when it makes sense for a funder to go “all in” on their best bets vs. when they’d do better to “diversify” their philanthropic portfolio. This is something EAs have discussed a bit before (often by making purely theoretical arguments that the “best bet” maximizes expected value), but I’d be excited to see more work on this problem using different methodologies, including taking into account the risk of “model error” or systemic bias in our initial EV estimates. (Maybe such work is already out ther...]]>
Richard Y Chappell https://forum.effectivealtruism.org/posts/ZKYpu4WAiwTXDSrX8/review-of-the-good-it-promises-the-harm-it-does Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of The Good It Promises, the Harm It Does, published by Richard Y Chappell on May 2, 2023 on The Effective Altruism Forum.[TL;DR: I didn't find much of value in the book. The quality of argumentation is worse than on most blogs I read. Maybe others will have better luck discerning any hidden gems in the mix?]The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism (eds. Adams, Crary, & Gruen) puts me in mind of Bastiat’s Candlestick Makers' Petition. For any proposed change—be it the invention of electricity, or even the sun rising—there will be some in a position to complain. This is a book of such complaints. There is much recounting of various “harms” caused by EA (primarily to social justice activists who are no longer as competitive for grant funding). But nowhere in the volume is there any serious attempt to compare these costs against the gains to others—especially the populations supposedly served by charitable work, as opposed to the workers themselves—to determine which is greater. (One gets the impression that cost-benefit analysis is too capitalistic for these authors to even consider.) The word “trade-off” does not appear in this volume.A second respect in which the book’s title may be misleading is that it is exclusively about the animal welfare wing of EA. (There is a ‘Coda’ that mentions longtermism, but merely to sneer at it. There was no substantive engagement with the ideas.)I personally didn’t find much of value in the volume, but I’ll start out by flagging what good I can. I’ll then briefly explain why I wasn’t much impressed with the rest—mainly by way of sharing representative quotes, so readers can judge it for themselves.The GoodThe more empirically-oriented chapters raise interesting challenges about animal advocacy strategy. We learn that EA funders have focused on two main strategies to reform or eventually upend animal agriculture: (i) corporate cage-free (and similar) campaigns, and (ii) investment in meat alternatives. Neither involves the sort of “grassroots” activism that the contributors to this volume prefer. So some of the authors discuss potential shortcomings of the above two strategies, and potential benefits of alternatives like (iii) vegan outreach in Black communities, and (iv) animal sanctuaries.I expect EAs will welcome discussion of the effectiveness of different strategies. That’s what the movement is all about, after all. By far the most constructive article in the volume (chapter 4, ‘Animal Advocacy’s Stockholm Syndrome’) noted that “cage- free campaigns. can be particularly tragic in a global context” where factory-farms are not yet ubiquitous:The conscientious urban, middle-class Indian consumer cannot see that there is a minor difference between the cage-free egg and the standard factory-farmed egg, and a massive gulf separating both of these from the traditionally produced egg [where birds freely roam their whole lives] for a simple reason: the animal protection groups the consumer is relying upon are pointing to the (factory- farmed) cage-free egg instead of alternatives to industrial farming. (p. 45)Such evidence of localized ineffectiveness (or counterproductivity) is certainly important to identify & take into account!There’s a larger issue (not really addressed in this volume) of when it makes sense for a funder to go “all in” on their best bets vs. when they’d do better to “diversify” their philanthropic portfolio. This is something EAs have discussed a bit before (often by making purely theoretical arguments that the “best bet” maximizes expected value), but I’d be excited to see more work on this problem using different methodologies, including taking into account the risk of “model error” or systemic bias in our initial EV estimates. (Maybe such work is already out ther...]]>
Tue, 02 May 2023 16:21:20 +0000 EA - Review of The Good It Promises, the Harm It Does by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of The Good It Promises, the Harm It Does, published by Richard Y Chappell on May 2, 2023 on The Effective Altruism Forum.[TL;DR: I didn't find much of value in the book. The quality of argumentation is worse than on most blogs I read. Maybe others will have better luck discerning any hidden gems in the mix?]The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism (eds. Adams, Crary, & Gruen) puts me in mind of Bastiat’s Candlestick Makers' Petition. For any proposed change—be it the invention of electricity, or even the sun rising—there will be some in a position to complain. This is a book of such complaints. There is much recounting of various “harms” caused by EA (primarily to social justice activists who are no longer as competitive for grant funding). But nowhere in the volume is there any serious attempt to compare these costs against the gains to others—especially the populations supposedly served by charitable work, as opposed to the workers themselves—to determine which is greater. (One gets the impression that cost-benefit analysis is too capitalistic for these authors to even consider.) The word “trade-off” does not appear in this volume.A second respect in which the book’s title may be misleading is that it is exclusively about the animal welfare wing of EA. (There is a ‘Coda’ that mentions longtermism, but merely to sneer at it. There was no substantive engagement with the ideas.)I personally didn’t find much of value in the volume, but I’ll start out by flagging what good I can. I’ll then briefly explain why I wasn’t much impressed with the rest—mainly by way of sharing representative quotes, so readers can judge it for themselves.The GoodThe more empirically-oriented chapters raise interesting challenges about animal advocacy strategy. We learn that EA funders have focused on two main strategies to reform or eventually upend animal agriculture: (i) corporate cage-free (and similar) campaigns, and (ii) investment in meat alternatives. Neither involves the sort of “grassroots” activism that the contributors to this volume prefer. So some of the authors discuss potential shortcomings of the above two strategies, and potential benefits of alternatives like (iii) vegan outreach in Black communities, and (iv) animal sanctuaries.I expect EAs will welcome discussion of the effectiveness of different strategies. That’s what the movement is all about, after all. By far the most constructive article in the volume (chapter 4, ‘Animal Advocacy’s Stockholm Syndrome’) noted that “cage- free campaigns. can be particularly tragic in a global context” where factory-farms are not yet ubiquitous:The conscientious urban, middle-class Indian consumer cannot see that there is a minor difference between the cage-free egg and the standard factory-farmed egg, and a massive gulf separating both of these from the traditionally produced egg [where birds freely roam their whole lives] for a simple reason: the animal protection groups the consumer is relying upon are pointing to the (factory- farmed) cage-free egg instead of alternatives to industrial farming. (p. 45)Such evidence of localized ineffectiveness (or counterproductivity) is certainly important to identify & take into account!There’s a larger issue (not really addressed in this volume) of when it makes sense for a funder to go “all in” on their best bets vs. when they’d do better to “diversify” their philanthropic portfolio. This is something EAs have discussed a bit before (often by making purely theoretical arguments that the “best bet” maximizes expected value), but I’d be excited to see more work on this problem using different methodologies, including taking into account the risk of “model error” or systemic bias in our initial EV estimates. (Maybe such work is already out ther...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of The Good It Promises, the Harm It Does, published by Richard Y Chappell on May 2, 2023 on The Effective Altruism Forum.[TL;DR: I didn't find much of value in the book. The quality of argumentation is worse than on most blogs I read. Maybe others will have better luck discerning any hidden gems in the mix?]The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism (eds. Adams, Crary, & Gruen) puts me in mind of Bastiat’s Candlestick Makers' Petition. For any proposed change—be it the invention of electricity, or even the sun rising—there will be some in a position to complain. This is a book of such complaints. There is much recounting of various “harms” caused by EA (primarily to social justice activists who are no longer as competitive for grant funding). But nowhere in the volume is there any serious attempt to compare these costs against the gains to others—especially the populations supposedly served by charitable work, as opposed to the workers themselves—to determine which is greater. (One gets the impression that cost-benefit analysis is too capitalistic for these authors to even consider.) The word “trade-off” does not appear in this volume.A second respect in which the book’s title may be misleading is that it is exclusively about the animal welfare wing of EA. (There is a ‘Coda’ that mentions longtermism, but merely to sneer at it. There was no substantive engagement with the ideas.)I personally didn’t find much of value in the volume, but I’ll start out by flagging what good I can. I’ll then briefly explain why I wasn’t much impressed with the rest—mainly by way of sharing representative quotes, so readers can judge it for themselves.The GoodThe more empirically-oriented chapters raise interesting challenges about animal advocacy strategy. We learn that EA funders have focused on two main strategies to reform or eventually upend animal agriculture: (i) corporate cage-free (and similar) campaigns, and (ii) investment in meat alternatives. Neither involves the sort of “grassroots” activism that the contributors to this volume prefer. So some of the authors discuss potential shortcomings of the above two strategies, and potential benefits of alternatives like (iii) vegan outreach in Black communities, and (iv) animal sanctuaries.I expect EAs will welcome discussion of the effectiveness of different strategies. That’s what the movement is all about, after all. By far the most constructive article in the volume (chapter 4, ‘Animal Advocacy’s Stockholm Syndrome’) noted that “cage- free campaigns. can be particularly tragic in a global context” where factory-farms are not yet ubiquitous:The conscientious urban, middle-class Indian consumer cannot see that there is a minor difference between the cage-free egg and the standard factory-farmed egg, and a massive gulf separating both of these from the traditionally produced egg [where birds freely roam their whole lives] for a simple reason: the animal protection groups the consumer is relying upon are pointing to the (factory- farmed) cage-free egg instead of alternatives to industrial farming. (p. 45)Such evidence of localized ineffectiveness (or counterproductivity) is certainly important to identify & take into account!There’s a larger issue (not really addressed in this volume) of when it makes sense for a funder to go “all in” on their best bets vs. when they’d do better to “diversify” their philanthropic portfolio. This is something EAs have discussed a bit before (often by making purely theoretical arguments that the “best bet” maximizes expected value), but I’d be excited to see more work on this problem using different methodologies, including taking into account the risk of “model error” or systemic bias in our initial EV estimates. (Maybe such work is already out ther...]]>
Richard Y Chappell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 17:29 None full 5810
LZxXjkZDzvdEDFpxz_NL_EA_EA EA - Apply Now: First-Ever EAGxNYC This August by Arthur Malone Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply Now: First-Ever EAGxNYC This August, published by Arthur Malone on May 2, 2023 on The Effective Altruism Forum.TL;DR: Applications are now open for EAGxNYC 2023, taking place in Manhattan this August 18-20!We’re thrilled to announce that this summer, EAGx comes to New York City for the first time!Application:Reviewed on a rolling basis, apply here before the deadline of July 31, 2023. Applying early means you'll have more time to prep and help us plan for your needs!When:August 18-20, 2023Where:Convene, 225 Liberty Street, New York, NY, in Lower Manhattan near the World Trade Center complexWho:EAGxNYC is intended for both individuals new to the movement and those already professionally engaged with EA, and will cover a diverse range of high-impact cause areas. We believe the conference will be of particular value to those currently exploring new ways they can have an impact, such as students, young professionals, and mid-career professionals looking to shift into EA-aligned work. We also invite established organizations looking to share their work and grow their pool of potential collaborators or hirees. Due to venue and funding capacity, the conference will be capped at 500 attendees.Geographic scope: As a locally-organized supplement to the Centre for Effective Altruism-organized EAG conferences, EAGxNYC aims to primarily serve, and foster connections between, those living in the NYC area. While we are also excited to welcome individuals from around the globe, due to limited capacity we will prioritize applicants who have a connection to our New York metropolitan area or are seriously considering relocating here, followed by applicants from throughout the East Coast. However, if you are uncertain about your eligibility, don't hesitate to apply!Travel Grants: Limited travel grants of up to $500 are available to individuals from outside of NYC who would not be able to attend EAGxNYC without financial assistance. Applications for financial assistance have no bearing on admissions to the conference.Programming:EAGxNYC will take place from Friday, August 18th through Sunday, August 20th with registration opening in the early afternoon Friday, followed by dinner and opening talks that evening. Content will be scheduled and the venue will be open for networking until 10PM Friday, 8AM-10PM Saturday, and 8AM-7PM Sunday. Along with dinner on Friday, the venue will be providing breakfast, lunch, and snacks and drinks on Saturday and Sunday. Dinner will not be served on the premises Saturday or Sunday, but the EAGxNYC team will help coordinate group dinners nearby and encourage all attendees to make use of the venue throughout the evening.We aim to program content covering all effective altruism cause areas with a special emphasis on the intersection between EA and New York City. If you are interested in presenting at the conference, please reach out to the organizing team.Satellite Programming: If you’re already in the New York City area and want to get involved leading up to or following the conference, check out the local EA NYC group for public events, cause-related and professional subgroup events, opportunities for online engagement, and more!More info: Detailed information on the agenda, speakers, and content will be available closer to the conference via Swapcard and updates to this website page. Periodically checking in on our website will help you stay up to date in the meantime, and if you have any questions or concerns, drop email us at nyc@eaglobalx.org.We can't wait to see you in NYC this Summer!The organizing team :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Arthur Malone https://forum.effectivealtruism.org/posts/LZxXjkZDzvdEDFpxz/apply-now-first-ever-eagxnyc-this-august Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply Now: First-Ever EAGxNYC This August, published by Arthur Malone on May 2, 2023 on The Effective Altruism Forum.TL;DR: Applications are now open for EAGxNYC 2023, taking place in Manhattan this August 18-20!We’re thrilled to announce that this summer, EAGx comes to New York City for the first time!Application:Reviewed on a rolling basis, apply here before the deadline of July 31, 2023. Applying early means you'll have more time to prep and help us plan for your needs!When:August 18-20, 2023Where:Convene, 225 Liberty Street, New York, NY, in Lower Manhattan near the World Trade Center complexWho:EAGxNYC is intended for both individuals new to the movement and those already professionally engaged with EA, and will cover a diverse range of high-impact cause areas. We believe the conference will be of particular value to those currently exploring new ways they can have an impact, such as students, young professionals, and mid-career professionals looking to shift into EA-aligned work. We also invite established organizations looking to share their work and grow their pool of potential collaborators or hirees. Due to venue and funding capacity, the conference will be capped at 500 attendees.Geographic scope: As a locally-organized supplement to the Centre for Effective Altruism-organized EAG conferences, EAGxNYC aims to primarily serve, and foster connections between, those living in the NYC area. While we are also excited to welcome individuals from around the globe, due to limited capacity we will prioritize applicants who have a connection to our New York metropolitan area or are seriously considering relocating here, followed by applicants from throughout the East Coast. However, if you are uncertain about your eligibility, don't hesitate to apply!Travel Grants: Limited travel grants of up to $500 are available to individuals from outside of NYC who would not be able to attend EAGxNYC without financial assistance. Applications for financial assistance have no bearing on admissions to the conference.Programming:EAGxNYC will take place from Friday, August 18th through Sunday, August 20th with registration opening in the early afternoon Friday, followed by dinner and opening talks that evening. Content will be scheduled and the venue will be open for networking until 10PM Friday, 8AM-10PM Saturday, and 8AM-7PM Sunday. Along with dinner on Friday, the venue will be providing breakfast, lunch, and snacks and drinks on Saturday and Sunday. Dinner will not be served on the premises Saturday or Sunday, but the EAGxNYC team will help coordinate group dinners nearby and encourage all attendees to make use of the venue throughout the evening.We aim to program content covering all effective altruism cause areas with a special emphasis on the intersection between EA and New York City. If you are interested in presenting at the conference, please reach out to the organizing team.Satellite Programming: If you’re already in the New York City area and want to get involved leading up to or following the conference, check out the local EA NYC group for public events, cause-related and professional subgroup events, opportunities for online engagement, and more!More info: Detailed information on the agenda, speakers, and content will be available closer to the conference via Swapcard and updates to this website page. Periodically checking in on our website will help you stay up to date in the meantime, and if you have any questions or concerns, drop email us at nyc@eaglobalx.org.We can't wait to see you in NYC this Summer!The organizing team :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 02 May 2023 16:04:56 +0000 EA - Apply Now: First-Ever EAGxNYC This August by Arthur Malone Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply Now: First-Ever EAGxNYC This August, published by Arthur Malone on May 2, 2023 on The Effective Altruism Forum.TL;DR: Applications are now open for EAGxNYC 2023, taking place in Manhattan this August 18-20!We’re thrilled to announce that this summer, EAGx comes to New York City for the first time!Application:Reviewed on a rolling basis, apply here before the deadline of July 31, 2023. Applying early means you'll have more time to prep and help us plan for your needs!When:August 18-20, 2023Where:Convene, 225 Liberty Street, New York, NY, in Lower Manhattan near the World Trade Center complexWho:EAGxNYC is intended for both individuals new to the movement and those already professionally engaged with EA, and will cover a diverse range of high-impact cause areas. We believe the conference will be of particular value to those currently exploring new ways they can have an impact, such as students, young professionals, and mid-career professionals looking to shift into EA-aligned work. We also invite established organizations looking to share their work and grow their pool of potential collaborators or hirees. Due to venue and funding capacity, the conference will be capped at 500 attendees.Geographic scope: As a locally-organized supplement to the Centre for Effective Altruism-organized EAG conferences, EAGxNYC aims to primarily serve, and foster connections between, those living in the NYC area. While we are also excited to welcome individuals from around the globe, due to limited capacity we will prioritize applicants who have a connection to our New York metropolitan area or are seriously considering relocating here, followed by applicants from throughout the East Coast. However, if you are uncertain about your eligibility, don't hesitate to apply!Travel Grants: Limited travel grants of up to $500 are available to individuals from outside of NYC who would not be able to attend EAGxNYC without financial assistance. Applications for financial assistance have no bearing on admissions to the conference.Programming:EAGxNYC will take place from Friday, August 18th through Sunday, August 20th with registration opening in the early afternoon Friday, followed by dinner and opening talks that evening. Content will be scheduled and the venue will be open for networking until 10PM Friday, 8AM-10PM Saturday, and 8AM-7PM Sunday. Along with dinner on Friday, the venue will be providing breakfast, lunch, and snacks and drinks on Saturday and Sunday. Dinner will not be served on the premises Saturday or Sunday, but the EAGxNYC team will help coordinate group dinners nearby and encourage all attendees to make use of the venue throughout the evening.We aim to program content covering all effective altruism cause areas with a special emphasis on the intersection between EA and New York City. If you are interested in presenting at the conference, please reach out to the organizing team.Satellite Programming: If you’re already in the New York City area and want to get involved leading up to or following the conference, check out the local EA NYC group for public events, cause-related and professional subgroup events, opportunities for online engagement, and more!More info: Detailed information on the agenda, speakers, and content will be available closer to the conference via Swapcard and updates to this website page. Periodically checking in on our website will help you stay up to date in the meantime, and if you have any questions or concerns, drop email us at nyc@eaglobalx.org.We can't wait to see you in NYC this Summer!The organizing team :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply Now: First-Ever EAGxNYC This August, published by Arthur Malone on May 2, 2023 on The Effective Altruism Forum.TL;DR: Applications are now open for EAGxNYC 2023, taking place in Manhattan this August 18-20!We’re thrilled to announce that this summer, EAGx comes to New York City for the first time!Application:Reviewed on a rolling basis, apply here before the deadline of July 31, 2023. Applying early means you'll have more time to prep and help us plan for your needs!When:August 18-20, 2023Where:Convene, 225 Liberty Street, New York, NY, in Lower Manhattan near the World Trade Center complexWho:EAGxNYC is intended for both individuals new to the movement and those already professionally engaged with EA, and will cover a diverse range of high-impact cause areas. We believe the conference will be of particular value to those currently exploring new ways they can have an impact, such as students, young professionals, and mid-career professionals looking to shift into EA-aligned work. We also invite established organizations looking to share their work and grow their pool of potential collaborators or hirees. Due to venue and funding capacity, the conference will be capped at 500 attendees.Geographic scope: As a locally-organized supplement to the Centre for Effective Altruism-organized EAG conferences, EAGxNYC aims to primarily serve, and foster connections between, those living in the NYC area. While we are also excited to welcome individuals from around the globe, due to limited capacity we will prioritize applicants who have a connection to our New York metropolitan area or are seriously considering relocating here, followed by applicants from throughout the East Coast. However, if you are uncertain about your eligibility, don't hesitate to apply!Travel Grants: Limited travel grants of up to $500 are available to individuals from outside of NYC who would not be able to attend EAGxNYC without financial assistance. Applications for financial assistance have no bearing on admissions to the conference.Programming:EAGxNYC will take place from Friday, August 18th through Sunday, August 20th with registration opening in the early afternoon Friday, followed by dinner and opening talks that evening. Content will be scheduled and the venue will be open for networking until 10PM Friday, 8AM-10PM Saturday, and 8AM-7PM Sunday. Along with dinner on Friday, the venue will be providing breakfast, lunch, and snacks and drinks on Saturday and Sunday. Dinner will not be served on the premises Saturday or Sunday, but the EAGxNYC team will help coordinate group dinners nearby and encourage all attendees to make use of the venue throughout the evening.We aim to program content covering all effective altruism cause areas with a special emphasis on the intersection between EA and New York City. If you are interested in presenting at the conference, please reach out to the organizing team.Satellite Programming: If you’re already in the New York City area and want to get involved leading up to or following the conference, check out the local EA NYC group for public events, cause-related and professional subgroup events, opportunities for online engagement, and more!More info: Detailed information on the agenda, speakers, and content will be available closer to the conference via Swapcard and updates to this website page. Periodically checking in on our website will help you stay up to date in the meantime, and if you have any questions or concerns, drop email us at nyc@eaglobalx.org.We can't wait to see you in NYC this Summer!The organizing team :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Arthur Malone https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:36 None full 5811
xen7oLdHwoHDTG4hR_NL_EA_EA EA - Legal Priorities Project – Annual Report 2022 by Legal Priorities Project Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Priorities Project – Annual Report 2022, published by Legal Priorities Project on May 2, 2023 on The Effective Altruism Forum.SummaryNote: A private version of this report with additional confidential updates has been shared with a few major donors and close collaborators. If you fall into this category and would like to access the extended version, please get in touch with us at hello@legalpriorities.org.This report presents a review of the Legal Priorities Project’s work in 2022, including key successes, statistics, bottlenecks, and issues. We also offer a short overview of our priorities for 2023 and briefly describe our methodology for updating our priorities. You can learn more about how to support our work at the end of the report.A PDF version of this report is available here.In 2022:Our research output per FTE was high to very high: With only 3.6 FTE researchers, we had 7 peer-reviewed papers (2 journal articles and 5 book chapters) accepted for publication, 3 papers currently under review, and one book under contract with Oxford University Press. We also added 7 papers to our Working Paper Series (for a total of 18), published a new chapter of our research agenda, and published 6 shorter pieces in online forums. We also spent significant time on reports aimed at informing our prioritization on artificial intelligence and biosecurity in particular (which we plan to publish in Q2 of 2023) and ran a writing competition on “Improving Cost-Benefit Analysis to Account for Existential and Catastrophic Risks” with a judging panel composed of eminent figures in law. Based on our experience, our research output was much higher than typical legal academic research groups of similar size.Beyond academic research, we analyzed ongoing policy efforts, and our research received positive feedback from policymakers. Relevant feedback and discussions provided valuable insight into what research would support decision-making and favorable outcomes, which we believe improved our prioritization.We experimented with running several events targeting different audiences and received hundreds of applications from students and academics at top institutions worldwide. Some participants have already reported significant updates to their plans as a result. Feedback on our events was overwhelmingly positive, and we gained valuable information about the different types of programs and their effectiveness, which will inform future events.Team morale remained high, including during stressful developments, and our operations ran smoothly.In 2023:Our research will increasingly focus on reducing specific types of existential risk based on concrete risk scenarios, shifting more focus toward AI risk. While this shift started in 2022, AI risk will become more central to our research in 2023. We will publish an update to our research agenda and theory of change accordingly.We will continue to publish research of various types. However, we will significantly increase our focus on non-academic publications, such as policy/technical reports and blog posts, in order to make our work more accessible to policymakers and a wider audience. As part of this strategy, we will also launch a blog featuring shorter pieces by LPP staff and invited researchers.We would like to run at least one, but ideally two, flagship field-building programs: The Legal Priorities Summer Institute and the Summer Research Fellowship.We will seek to raise at least $1.1m to maintain our current level of operations for another year. More optimistically, we aim to increase our team size by 1–3 additional FTE, ideally hiring a senior researcher with a background in US law to work on risks from advanced artificial intelligence.IntroductionThe Legal Priorities Project is an independent, global research and field-bui...]]>
Legal Priorities Project https://forum.effectivealtruism.org/posts/xen7oLdHwoHDTG4hR/legal-priorities-project-annual-report-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Priorities Project – Annual Report 2022, published by Legal Priorities Project on May 2, 2023 on The Effective Altruism Forum.SummaryNote: A private version of this report with additional confidential updates has been shared with a few major donors and close collaborators. If you fall into this category and would like to access the extended version, please get in touch with us at hello@legalpriorities.org.This report presents a review of the Legal Priorities Project’s work in 2022, including key successes, statistics, bottlenecks, and issues. We also offer a short overview of our priorities for 2023 and briefly describe our methodology for updating our priorities. You can learn more about how to support our work at the end of the report.A PDF version of this report is available here.In 2022:Our research output per FTE was high to very high: With only 3.6 FTE researchers, we had 7 peer-reviewed papers (2 journal articles and 5 book chapters) accepted for publication, 3 papers currently under review, and one book under contract with Oxford University Press. We also added 7 papers to our Working Paper Series (for a total of 18), published a new chapter of our research agenda, and published 6 shorter pieces in online forums. We also spent significant time on reports aimed at informing our prioritization on artificial intelligence and biosecurity in particular (which we plan to publish in Q2 of 2023) and ran a writing competition on “Improving Cost-Benefit Analysis to Account for Existential and Catastrophic Risks” with a judging panel composed of eminent figures in law. Based on our experience, our research output was much higher than typical legal academic research groups of similar size.Beyond academic research, we analyzed ongoing policy efforts, and our research received positive feedback from policymakers. Relevant feedback and discussions provided valuable insight into what research would support decision-making and favorable outcomes, which we believe improved our prioritization.We experimented with running several events targeting different audiences and received hundreds of applications from students and academics at top institutions worldwide. Some participants have already reported significant updates to their plans as a result. Feedback on our events was overwhelmingly positive, and we gained valuable information about the different types of programs and their effectiveness, which will inform future events.Team morale remained high, including during stressful developments, and our operations ran smoothly.In 2023:Our research will increasingly focus on reducing specific types of existential risk based on concrete risk scenarios, shifting more focus toward AI risk. While this shift started in 2022, AI risk will become more central to our research in 2023. We will publish an update to our research agenda and theory of change accordingly.We will continue to publish research of various types. However, we will significantly increase our focus on non-academic publications, such as policy/technical reports and blog posts, in order to make our work more accessible to policymakers and a wider audience. As part of this strategy, we will also launch a blog featuring shorter pieces by LPP staff and invited researchers.We would like to run at least one, but ideally two, flagship field-building programs: The Legal Priorities Summer Institute and the Summer Research Fellowship.We will seek to raise at least $1.1m to maintain our current level of operations for another year. More optimistically, we aim to increase our team size by 1–3 additional FTE, ideally hiring a senior researcher with a background in US law to work on risks from advanced artificial intelligence.IntroductionThe Legal Priorities Project is an independent, global research and field-bui...]]>
Tue, 02 May 2023 15:50:46 +0000 EA - Legal Priorities Project – Annual Report 2022 by Legal Priorities Project Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Priorities Project – Annual Report 2022, published by Legal Priorities Project on May 2, 2023 on The Effective Altruism Forum.SummaryNote: A private version of this report with additional confidential updates has been shared with a few major donors and close collaborators. If you fall into this category and would like to access the extended version, please get in touch with us at hello@legalpriorities.org.This report presents a review of the Legal Priorities Project’s work in 2022, including key successes, statistics, bottlenecks, and issues. We also offer a short overview of our priorities for 2023 and briefly describe our methodology for updating our priorities. You can learn more about how to support our work at the end of the report.A PDF version of this report is available here.In 2022:Our research output per FTE was high to very high: With only 3.6 FTE researchers, we had 7 peer-reviewed papers (2 journal articles and 5 book chapters) accepted for publication, 3 papers currently under review, and one book under contract with Oxford University Press. We also added 7 papers to our Working Paper Series (for a total of 18), published a new chapter of our research agenda, and published 6 shorter pieces in online forums. We also spent significant time on reports aimed at informing our prioritization on artificial intelligence and biosecurity in particular (which we plan to publish in Q2 of 2023) and ran a writing competition on “Improving Cost-Benefit Analysis to Account for Existential and Catastrophic Risks” with a judging panel composed of eminent figures in law. Based on our experience, our research output was much higher than typical legal academic research groups of similar size.Beyond academic research, we analyzed ongoing policy efforts, and our research received positive feedback from policymakers. Relevant feedback and discussions provided valuable insight into what research would support decision-making and favorable outcomes, which we believe improved our prioritization.We experimented with running several events targeting different audiences and received hundreds of applications from students and academics at top institutions worldwide. Some participants have already reported significant updates to their plans as a result. Feedback on our events was overwhelmingly positive, and we gained valuable information about the different types of programs and their effectiveness, which will inform future events.Team morale remained high, including during stressful developments, and our operations ran smoothly.In 2023:Our research will increasingly focus on reducing specific types of existential risk based on concrete risk scenarios, shifting more focus toward AI risk. While this shift started in 2022, AI risk will become more central to our research in 2023. We will publish an update to our research agenda and theory of change accordingly.We will continue to publish research of various types. However, we will significantly increase our focus on non-academic publications, such as policy/technical reports and blog posts, in order to make our work more accessible to policymakers and a wider audience. As part of this strategy, we will also launch a blog featuring shorter pieces by LPP staff and invited researchers.We would like to run at least one, but ideally two, flagship field-building programs: The Legal Priorities Summer Institute and the Summer Research Fellowship.We will seek to raise at least $1.1m to maintain our current level of operations for another year. More optimistically, we aim to increase our team size by 1–3 additional FTE, ideally hiring a senior researcher with a background in US law to work on risks from advanced artificial intelligence.IntroductionThe Legal Priorities Project is an independent, global research and field-bui...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Priorities Project – Annual Report 2022, published by Legal Priorities Project on May 2, 2023 on The Effective Altruism Forum.SummaryNote: A private version of this report with additional confidential updates has been shared with a few major donors and close collaborators. If you fall into this category and would like to access the extended version, please get in touch with us at hello@legalpriorities.org.This report presents a review of the Legal Priorities Project’s work in 2022, including key successes, statistics, bottlenecks, and issues. We also offer a short overview of our priorities for 2023 and briefly describe our methodology for updating our priorities. You can learn more about how to support our work at the end of the report.A PDF version of this report is available here.In 2022:Our research output per FTE was high to very high: With only 3.6 FTE researchers, we had 7 peer-reviewed papers (2 journal articles and 5 book chapters) accepted for publication, 3 papers currently under review, and one book under contract with Oxford University Press. We also added 7 papers to our Working Paper Series (for a total of 18), published a new chapter of our research agenda, and published 6 shorter pieces in online forums. We also spent significant time on reports aimed at informing our prioritization on artificial intelligence and biosecurity in particular (which we plan to publish in Q2 of 2023) and ran a writing competition on “Improving Cost-Benefit Analysis to Account for Existential and Catastrophic Risks” with a judging panel composed of eminent figures in law. Based on our experience, our research output was much higher than typical legal academic research groups of similar size.Beyond academic research, we analyzed ongoing policy efforts, and our research received positive feedback from policymakers. Relevant feedback and discussions provided valuable insight into what research would support decision-making and favorable outcomes, which we believe improved our prioritization.We experimented with running several events targeting different audiences and received hundreds of applications from students and academics at top institutions worldwide. Some participants have already reported significant updates to their plans as a result. Feedback on our events was overwhelmingly positive, and we gained valuable information about the different types of programs and their effectiveness, which will inform future events.Team morale remained high, including during stressful developments, and our operations ran smoothly.In 2023:Our research will increasingly focus on reducing specific types of existential risk based on concrete risk scenarios, shifting more focus toward AI risk. While this shift started in 2022, AI risk will become more central to our research in 2023. We will publish an update to our research agenda and theory of change accordingly.We will continue to publish research of various types. However, we will significantly increase our focus on non-academic publications, such as policy/technical reports and blog posts, in order to make our work more accessible to policymakers and a wider audience. As part of this strategy, we will also launch a blog featuring shorter pieces by LPP staff and invited researchers.We would like to run at least one, but ideally two, flagship field-building programs: The Legal Priorities Summer Institute and the Summer Research Fellowship.We will seek to raise at least $1.1m to maintain our current level of operations for another year. More optimistically, we aim to increase our team size by 1–3 additional FTE, ideally hiring a senior researcher with a background in US law to work on risks from advanced artificial intelligence.IntroductionThe Legal Priorities Project is an independent, global research and field-bui...]]>
Legal Priorities Project https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 58:27 None full 5809
JHotn7TBRnLAkfKCi_NL_EA_EA EA - Intermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4) by MichaelA Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4), published by MichaelA on May 1, 2023 on The Effective Altruism Forum.This is a blog post, not a research report, meaning it was produced relatively quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.SummaryWhat is this post?This post is the first part of what was intended to be a shallow review of potential “intermediate goals” one could pursue in order to reduce nuclear risk (focusing especially on the contribution of nuclear weapons to existential risk). The full review would’ve broken intermediate goals down into:goals aimed at reducing the odds of nuclear conflict or other non-test nuclear detonationsgoals aimed at changing how nuclear conflict plays out if it does occur (in a way that reduces its harms)goals aimed at improving resilience to or ability to recover from the harms of nuclear conflictgoals that are cross-cutting, focused on field-building, or otherwise have indirect effectsThis first part of the shallow review focuses just on the first of those categories: goals aimed at reducing the odds of nuclear conflict or other non-test nuclear detonations. We tentatively think that, on the margin, this is the least promising of those four categories of goals, but that there are still some promising interventions in this category.Within this category, we review multiple potential goals. For most of those goals, we briefly discuss:What we mean by the goalWhy progress on this goal might reduce or increase nuclear riskExamples of specific interventions or organizations that could advance the goalOur very tentative bottom-line beliefs about:How much progress on the goal would reduce or increase nuclear riskWhat resources are most needed to make progress on the goalHow easy making progress on the goal would beWhat key effects making progress on the goal might have on things other than nuclear riskNote that, due to time constraints, this post is much less comprehensive and thoroughly researched and reviewed than we’d like.The intermediate goals we considered, and our tentative bottom line beliefs on themThis post and table breaks down high-level goals into increasingly granular goals, and shares our current best guesses on the relatively granular goals. Many goals could be pursued for multiple reasons and could hence appear in multiple places in this table, but we generally just showed each goal in the first relevant place anyway. This means in some cases a lot of the benefits of a given goal may be for higher-level goals we haven’t shown it as nested under.We unfortunately did even less research on the goals listed from From 1.1.2.6 onwards than the earlier ones, and the bottom-line views for that later set are mostly Will’s especially tentative personal views.Potential intermediate goalWhat effect would progress on this goal have on nuclear risk?How easy would it be to make progress on this goal?What resources are most needed for progress on this goal?Key effects this goal might have on things other than nuclear risk?1.1.1.1Reduce the odds of armed conflict in generalModerate reduction in riskHardUnsure.1.1.1.2Reduce the odds of (initially non-nuclear) armed conflict, with a focus on those involving at least one nuclear-armed stateMajor reduction in riskHard Similar to “1.1.1.1: Reduce the odds of armed conflict in general”1.1.1.3Reduce proliferationModerate reduction in riskHardMore non-nuclear conflict?Or maybe less?1.1.1.4Promote complete nuclear disarmamentMajor reduction in riskAlmost impossible to fully achieve this unless the world changes radically (e.g., a world government or transformative artificial intelligence is created, or a great power war occurs) 1.1.2.1Promote no first use (N...]]>
MichaelA https://forum.effectivealtruism.org/posts/JHotn7TBRnLAkfKCi/intermediate-goals-for-reducing-risks-from-nuclear-weapons-a Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4), published by MichaelA on May 1, 2023 on The Effective Altruism Forum.This is a blog post, not a research report, meaning it was produced relatively quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.SummaryWhat is this post?This post is the first part of what was intended to be a shallow review of potential “intermediate goals” one could pursue in order to reduce nuclear risk (focusing especially on the contribution of nuclear weapons to existential risk). The full review would’ve broken intermediate goals down into:goals aimed at reducing the odds of nuclear conflict or other non-test nuclear detonationsgoals aimed at changing how nuclear conflict plays out if it does occur (in a way that reduces its harms)goals aimed at improving resilience to or ability to recover from the harms of nuclear conflictgoals that are cross-cutting, focused on field-building, or otherwise have indirect effectsThis first part of the shallow review focuses just on the first of those categories: goals aimed at reducing the odds of nuclear conflict or other non-test nuclear detonations. We tentatively think that, on the margin, this is the least promising of those four categories of goals, but that there are still some promising interventions in this category.Within this category, we review multiple potential goals. For most of those goals, we briefly discuss:What we mean by the goalWhy progress on this goal might reduce or increase nuclear riskExamples of specific interventions or organizations that could advance the goalOur very tentative bottom-line beliefs about:How much progress on the goal would reduce or increase nuclear riskWhat resources are most needed to make progress on the goalHow easy making progress on the goal would beWhat key effects making progress on the goal might have on things other than nuclear riskNote that, due to time constraints, this post is much less comprehensive and thoroughly researched and reviewed than we’d like.The intermediate goals we considered, and our tentative bottom line beliefs on themThis post and table breaks down high-level goals into increasingly granular goals, and shares our current best guesses on the relatively granular goals. Many goals could be pursued for multiple reasons and could hence appear in multiple places in this table, but we generally just showed each goal in the first relevant place anyway. This means in some cases a lot of the benefits of a given goal may be for higher-level goals we haven’t shown it as nested under.We unfortunately did even less research on the goals listed from From 1.1.2.6 onwards than the earlier ones, and the bottom-line views for that later set are mostly Will’s especially tentative personal views.Potential intermediate goalWhat effect would progress on this goal have on nuclear risk?How easy would it be to make progress on this goal?What resources are most needed for progress on this goal?Key effects this goal might have on things other than nuclear risk?1.1.1.1Reduce the odds of armed conflict in generalModerate reduction in riskHardUnsure.1.1.1.2Reduce the odds of (initially non-nuclear) armed conflict, with a focus on those involving at least one nuclear-armed stateMajor reduction in riskHard Similar to “1.1.1.1: Reduce the odds of armed conflict in general”1.1.1.3Reduce proliferationModerate reduction in riskHardMore non-nuclear conflict?Or maybe less?1.1.1.4Promote complete nuclear disarmamentMajor reduction in riskAlmost impossible to fully achieve this unless the world changes radically (e.g., a world government or transformative artificial intelligence is created, or a great power war occurs) 1.1.2.1Promote no first use (N...]]>
Tue, 02 May 2023 13:44:36 +0000 EA - Intermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4) by MichaelA Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4), published by MichaelA on May 1, 2023 on The Effective Altruism Forum.This is a blog post, not a research report, meaning it was produced relatively quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.SummaryWhat is this post?This post is the first part of what was intended to be a shallow review of potential “intermediate goals” one could pursue in order to reduce nuclear risk (focusing especially on the contribution of nuclear weapons to existential risk). The full review would’ve broken intermediate goals down into:goals aimed at reducing the odds of nuclear conflict or other non-test nuclear detonationsgoals aimed at changing how nuclear conflict plays out if it does occur (in a way that reduces its harms)goals aimed at improving resilience to or ability to recover from the harms of nuclear conflictgoals that are cross-cutting, focused on field-building, or otherwise have indirect effectsThis first part of the shallow review focuses just on the first of those categories: goals aimed at reducing the odds of nuclear conflict or other non-test nuclear detonations. We tentatively think that, on the margin, this is the least promising of those four categories of goals, but that there are still some promising interventions in this category.Within this category, we review multiple potential goals. For most of those goals, we briefly discuss:What we mean by the goalWhy progress on this goal might reduce or increase nuclear riskExamples of specific interventions or organizations that could advance the goalOur very tentative bottom-line beliefs about:How much progress on the goal would reduce or increase nuclear riskWhat resources are most needed to make progress on the goalHow easy making progress on the goal would beWhat key effects making progress on the goal might have on things other than nuclear riskNote that, due to time constraints, this post is much less comprehensive and thoroughly researched and reviewed than we’d like.The intermediate goals we considered, and our tentative bottom line beliefs on themThis post and table breaks down high-level goals into increasingly granular goals, and shares our current best guesses on the relatively granular goals. Many goals could be pursued for multiple reasons and could hence appear in multiple places in this table, but we generally just showed each goal in the first relevant place anyway. This means in some cases a lot of the benefits of a given goal may be for higher-level goals we haven’t shown it as nested under.We unfortunately did even less research on the goals listed from From 1.1.2.6 onwards than the earlier ones, and the bottom-line views for that later set are mostly Will’s especially tentative personal views.Potential intermediate goalWhat effect would progress on this goal have on nuclear risk?How easy would it be to make progress on this goal?What resources are most needed for progress on this goal?Key effects this goal might have on things other than nuclear risk?1.1.1.1Reduce the odds of armed conflict in generalModerate reduction in riskHardUnsure.1.1.1.2Reduce the odds of (initially non-nuclear) armed conflict, with a focus on those involving at least one nuclear-armed stateMajor reduction in riskHard Similar to “1.1.1.1: Reduce the odds of armed conflict in general”1.1.1.3Reduce proliferationModerate reduction in riskHardMore non-nuclear conflict?Or maybe less?1.1.1.4Promote complete nuclear disarmamentMajor reduction in riskAlmost impossible to fully achieve this unless the world changes radically (e.g., a world government or transformative artificial intelligence is created, or a great power war occurs) 1.1.2.1Promote no first use (N...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4), published by MichaelA on May 1, 2023 on The Effective Altruism Forum.This is a blog post, not a research report, meaning it was produced relatively quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.SummaryWhat is this post?This post is the first part of what was intended to be a shallow review of potential “intermediate goals” one could pursue in order to reduce nuclear risk (focusing especially on the contribution of nuclear weapons to existential risk). The full review would’ve broken intermediate goals down into:goals aimed at reducing the odds of nuclear conflict or other non-test nuclear detonationsgoals aimed at changing how nuclear conflict plays out if it does occur (in a way that reduces its harms)goals aimed at improving resilience to or ability to recover from the harms of nuclear conflictgoals that are cross-cutting, focused on field-building, or otherwise have indirect effectsThis first part of the shallow review focuses just on the first of those categories: goals aimed at reducing the odds of nuclear conflict or other non-test nuclear detonations. We tentatively think that, on the margin, this is the least promising of those four categories of goals, but that there are still some promising interventions in this category.Within this category, we review multiple potential goals. For most of those goals, we briefly discuss:What we mean by the goalWhy progress on this goal might reduce or increase nuclear riskExamples of specific interventions or organizations that could advance the goalOur very tentative bottom-line beliefs about:How much progress on the goal would reduce or increase nuclear riskWhat resources are most needed to make progress on the goalHow easy making progress on the goal would beWhat key effects making progress on the goal might have on things other than nuclear riskNote that, due to time constraints, this post is much less comprehensive and thoroughly researched and reviewed than we’d like.The intermediate goals we considered, and our tentative bottom line beliefs on themThis post and table breaks down high-level goals into increasingly granular goals, and shares our current best guesses on the relatively granular goals. Many goals could be pursued for multiple reasons and could hence appear in multiple places in this table, but we generally just showed each goal in the first relevant place anyway. This means in some cases a lot of the benefits of a given goal may be for higher-level goals we haven’t shown it as nested under.We unfortunately did even less research on the goals listed from From 1.1.2.6 onwards than the earlier ones, and the bottom-line views for that later set are mostly Will’s especially tentative personal views.Potential intermediate goalWhat effect would progress on this goal have on nuclear risk?How easy would it be to make progress on this goal?What resources are most needed for progress on this goal?Key effects this goal might have on things other than nuclear risk?1.1.1.1Reduce the odds of armed conflict in generalModerate reduction in riskHardUnsure.1.1.1.2Reduce the odds of (initially non-nuclear) armed conflict, with a focus on those involving at least one nuclear-armed stateMajor reduction in riskHard Similar to “1.1.1.1: Reduce the odds of armed conflict in general”1.1.1.3Reduce proliferationModerate reduction in riskHardMore non-nuclear conflict?Or maybe less?1.1.1.4Promote complete nuclear disarmamentMajor reduction in riskAlmost impossible to fully achieve this unless the world changes radically (e.g., a world government or transformative artificial intelligence is created, or a great power war occurs) 1.1.2.1Promote no first use (N...]]>
MichaelA https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 27:20 None full 5807
8YXFaM9yHbhiJTPqp_NL_EA_EA EA - AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now by Greg Colbourn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now, published by Greg Colbourn on May 2, 2023 on The Effective Altruism Forum.Content note: discussion of a near-term, potentially hopeless life-and-death situation that affects everyone.Tldr: AGI is basically here. Alignment is nowhere near ready. We may only have a matter of months to get a lid on this (strictly enforced global limits to compute and data) in order to stand a strong chance of survival. This post is unapologetically alarmist because the situation is highly alarming. Please help. Fill out this form to get involved. Here is a list of practical steps you can take.We are in a new era of acute risk from AGIArtificial General Intelligence (AGI) is now in its ascendency. GPT-4 is already ~human-level at language and showing sparks of AGI. Large multimodal models – text-, image-, audio-, video-, VR/games-, robotics-manipulation by a single AI – will arrive very soon (from Google DeepMind) and will be ~human-level at many things: physical as well as mental tasks; blue collar jobs in addition to white collar jobs. It’s looking highly likely that the current paradigm of AI architecture (Foundation models), basically just scales all the way to AGI. These things are “General Cognition Engines”.All that is stopping them being even more powerful is spending on compute. Google & Microsoft are worth $1-2T each, and $10B can buy ~100x the compute used for GPT-4. Think about this: it means we are already well into hardware overhang territory.Here is a warning written two months ago by people working at applied AI Alignment lab Conjecture: “we are now in the end-game for AGI, and we (humans) are losing”. Things are now worse. It’s looking like GPT-4 will be used to meaningfully speed up AI research, finding more efficient architectures and therefore reducing the cost of training more sophisticated models.And then there is the reckless fervour of plugin development to make proto-AGI systems more capable and agent-like to contend with. In very short succession from GPT-4, OpenAI announced the ChatGPT plugin store, and there has been great enthusiasm for AutoGPT. Adding Planners to LLMs (known as LLM+P) seems like a good recipe for turning them into agents. One way of looking at this is that the planners and plugins act as the System 2 to the underlying System 1 of the general cognitive engine (LLM). And here we have agentic AGI. There may not be any secret sauce left.Given the scaling of capabilities observed so far for the progression of GPT-2 to GPT-3 to GPT3.5 to GPT-4, the next generation of AI could well end up superhuman. I think most people here are aware of the dangers: we have no idea how to reliably control superhuman AI or make it value-aligned (enough to prevent catastrophic outcomes from its existence). The expected outcome from the advent of AGI is doom. This is in large part because AI Alignment research has been completely outpaced by AI capabilities research and is now years behind where it needs to be.To allow Alignment time to catch up, we need a global moratorium on AGI, now.A short argument for uncontrollable superintelligent AI happening soon (without urgent regulation of big AI):This is a recipe for humans extincting themselves that appears to be playing out along the mainline of future timelines:Either ofGPT-4 + curious (but ultimately reckless) academics -> more efficient AI -> next generation foundation model AI (which I’ll call NextAI for short); OrGoogle DeepMind just builds NextAI (they are probably training it already)NextAI + planners + AutoGPT + plugins + further algorithmic advancements + gung ho humans (e/acc etc) = NextAI2 in short order. Weeks even. Access to compute for training is not a bottleneck because that cyborg syste...]]>
Greg Colbourn https://forum.effectivealtruism.org/posts/8YXFaM9yHbhiJTPqp/agi-rising-why-we-are-in-a-new-era-of-acute-risk-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now, published by Greg Colbourn on May 2, 2023 on The Effective Altruism Forum.Content note: discussion of a near-term, potentially hopeless life-and-death situation that affects everyone.Tldr: AGI is basically here. Alignment is nowhere near ready. We may only have a matter of months to get a lid on this (strictly enforced global limits to compute and data) in order to stand a strong chance of survival. This post is unapologetically alarmist because the situation is highly alarming. Please help. Fill out this form to get involved. Here is a list of practical steps you can take.We are in a new era of acute risk from AGIArtificial General Intelligence (AGI) is now in its ascendency. GPT-4 is already ~human-level at language and showing sparks of AGI. Large multimodal models – text-, image-, audio-, video-, VR/games-, robotics-manipulation by a single AI – will arrive very soon (from Google DeepMind) and will be ~human-level at many things: physical as well as mental tasks; blue collar jobs in addition to white collar jobs. It’s looking highly likely that the current paradigm of AI architecture (Foundation models), basically just scales all the way to AGI. These things are “General Cognition Engines”.All that is stopping them being even more powerful is spending on compute. Google & Microsoft are worth $1-2T each, and $10B can buy ~100x the compute used for GPT-4. Think about this: it means we are already well into hardware overhang territory.Here is a warning written two months ago by people working at applied AI Alignment lab Conjecture: “we are now in the end-game for AGI, and we (humans) are losing”. Things are now worse. It’s looking like GPT-4 will be used to meaningfully speed up AI research, finding more efficient architectures and therefore reducing the cost of training more sophisticated models.And then there is the reckless fervour of plugin development to make proto-AGI systems more capable and agent-like to contend with. In very short succession from GPT-4, OpenAI announced the ChatGPT plugin store, and there has been great enthusiasm for AutoGPT. Adding Planners to LLMs (known as LLM+P) seems like a good recipe for turning them into agents. One way of looking at this is that the planners and plugins act as the System 2 to the underlying System 1 of the general cognitive engine (LLM). And here we have agentic AGI. There may not be any secret sauce left.Given the scaling of capabilities observed so far for the progression of GPT-2 to GPT-3 to GPT3.5 to GPT-4, the next generation of AI could well end up superhuman. I think most people here are aware of the dangers: we have no idea how to reliably control superhuman AI or make it value-aligned (enough to prevent catastrophic outcomes from its existence). The expected outcome from the advent of AGI is doom. This is in large part because AI Alignment research has been completely outpaced by AI capabilities research and is now years behind where it needs to be.To allow Alignment time to catch up, we need a global moratorium on AGI, now.A short argument for uncontrollable superintelligent AI happening soon (without urgent regulation of big AI):This is a recipe for humans extincting themselves that appears to be playing out along the mainline of future timelines:Either ofGPT-4 + curious (but ultimately reckless) academics -> more efficient AI -> next generation foundation model AI (which I’ll call NextAI for short); OrGoogle DeepMind just builds NextAI (they are probably training it already)NextAI + planners + AutoGPT + plugins + further algorithmic advancements + gung ho humans (e/acc etc) = NextAI2 in short order. Weeks even. Access to compute for training is not a bottleneck because that cyborg syste...]]>
Tue, 02 May 2023 11:39:14 +0000 EA - AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now by Greg Colbourn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now, published by Greg Colbourn on May 2, 2023 on The Effective Altruism Forum.Content note: discussion of a near-term, potentially hopeless life-and-death situation that affects everyone.Tldr: AGI is basically here. Alignment is nowhere near ready. We may only have a matter of months to get a lid on this (strictly enforced global limits to compute and data) in order to stand a strong chance of survival. This post is unapologetically alarmist because the situation is highly alarming. Please help. Fill out this form to get involved. Here is a list of practical steps you can take.We are in a new era of acute risk from AGIArtificial General Intelligence (AGI) is now in its ascendency. GPT-4 is already ~human-level at language and showing sparks of AGI. Large multimodal models – text-, image-, audio-, video-, VR/games-, robotics-manipulation by a single AI – will arrive very soon (from Google DeepMind) and will be ~human-level at many things: physical as well as mental tasks; blue collar jobs in addition to white collar jobs. It’s looking highly likely that the current paradigm of AI architecture (Foundation models), basically just scales all the way to AGI. These things are “General Cognition Engines”.All that is stopping them being even more powerful is spending on compute. Google & Microsoft are worth $1-2T each, and $10B can buy ~100x the compute used for GPT-4. Think about this: it means we are already well into hardware overhang territory.Here is a warning written two months ago by people working at applied AI Alignment lab Conjecture: “we are now in the end-game for AGI, and we (humans) are losing”. Things are now worse. It’s looking like GPT-4 will be used to meaningfully speed up AI research, finding more efficient architectures and therefore reducing the cost of training more sophisticated models.And then there is the reckless fervour of plugin development to make proto-AGI systems more capable and agent-like to contend with. In very short succession from GPT-4, OpenAI announced the ChatGPT plugin store, and there has been great enthusiasm for AutoGPT. Adding Planners to LLMs (known as LLM+P) seems like a good recipe for turning them into agents. One way of looking at this is that the planners and plugins act as the System 2 to the underlying System 1 of the general cognitive engine (LLM). And here we have agentic AGI. There may not be any secret sauce left.Given the scaling of capabilities observed so far for the progression of GPT-2 to GPT-3 to GPT3.5 to GPT-4, the next generation of AI could well end up superhuman. I think most people here are aware of the dangers: we have no idea how to reliably control superhuman AI or make it value-aligned (enough to prevent catastrophic outcomes from its existence). The expected outcome from the advent of AGI is doom. This is in large part because AI Alignment research has been completely outpaced by AI capabilities research and is now years behind where it needs to be.To allow Alignment time to catch up, we need a global moratorium on AGI, now.A short argument for uncontrollable superintelligent AI happening soon (without urgent regulation of big AI):This is a recipe for humans extincting themselves that appears to be playing out along the mainline of future timelines:Either ofGPT-4 + curious (but ultimately reckless) academics -> more efficient AI -> next generation foundation model AI (which I’ll call NextAI for short); OrGoogle DeepMind just builds NextAI (they are probably training it already)NextAI + planners + AutoGPT + plugins + further algorithmic advancements + gung ho humans (e/acc etc) = NextAI2 in short order. Weeks even. Access to compute for training is not a bottleneck because that cyborg syste...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now, published by Greg Colbourn on May 2, 2023 on The Effective Altruism Forum.Content note: discussion of a near-term, potentially hopeless life-and-death situation that affects everyone.Tldr: AGI is basically here. Alignment is nowhere near ready. We may only have a matter of months to get a lid on this (strictly enforced global limits to compute and data) in order to stand a strong chance of survival. This post is unapologetically alarmist because the situation is highly alarming. Please help. Fill out this form to get involved. Here is a list of practical steps you can take.We are in a new era of acute risk from AGIArtificial General Intelligence (AGI) is now in its ascendency. GPT-4 is already ~human-level at language and showing sparks of AGI. Large multimodal models – text-, image-, audio-, video-, VR/games-, robotics-manipulation by a single AI – will arrive very soon (from Google DeepMind) and will be ~human-level at many things: physical as well as mental tasks; blue collar jobs in addition to white collar jobs. It’s looking highly likely that the current paradigm of AI architecture (Foundation models), basically just scales all the way to AGI. These things are “General Cognition Engines”.All that is stopping them being even more powerful is spending on compute. Google & Microsoft are worth $1-2T each, and $10B can buy ~100x the compute used for GPT-4. Think about this: it means we are already well into hardware overhang territory.Here is a warning written two months ago by people working at applied AI Alignment lab Conjecture: “we are now in the end-game for AGI, and we (humans) are losing”. Things are now worse. It’s looking like GPT-4 will be used to meaningfully speed up AI research, finding more efficient architectures and therefore reducing the cost of training more sophisticated models.And then there is the reckless fervour of plugin development to make proto-AGI systems more capable and agent-like to contend with. In very short succession from GPT-4, OpenAI announced the ChatGPT plugin store, and there has been great enthusiasm for AutoGPT. Adding Planners to LLMs (known as LLM+P) seems like a good recipe for turning them into agents. One way of looking at this is that the planners and plugins act as the System 2 to the underlying System 1 of the general cognitive engine (LLM). And here we have agentic AGI. There may not be any secret sauce left.Given the scaling of capabilities observed so far for the progression of GPT-2 to GPT-3 to GPT3.5 to GPT-4, the next generation of AI could well end up superhuman. I think most people here are aware of the dangers: we have no idea how to reliably control superhuman AI or make it value-aligned (enough to prevent catastrophic outcomes from its existence). The expected outcome from the advent of AGI is doom. This is in large part because AI Alignment research has been completely outpaced by AI capabilities research and is now years behind where it needs to be.To allow Alignment time to catch up, we need a global moratorium on AGI, now.A short argument for uncontrollable superintelligent AI happening soon (without urgent regulation of big AI):This is a recipe for humans extincting themselves that appears to be playing out along the mainline of future timelines:Either ofGPT-4 + curious (but ultimately reckless) academics -> more efficient AI -> next generation foundation model AI (which I’ll call NextAI for short); OrGoogle DeepMind just builds NextAI (they are probably training it already)NextAI + planners + AutoGPT + plugins + further algorithmic advancements + gung ho humans (e/acc etc) = NextAI2 in short order. Weeks even. Access to compute for training is not a bottleneck because that cyborg syste...]]>
Greg Colbourn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 31:35 None full 5806
e9htD7txe8RDdcehm_NL_EA_EA EA - Exploring Metaculus’s AI Track Record by Peter Scoblic Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exploring Metaculus’s AI Track Record, published by Peter Scoblic on May 1, 2023 on The Effective Altruism Forum.By Peter Mühlbacher, Research Scientist at Metaculus, and Peter Scoblic, Director of Nuclear Risk at MetaculusMetaculus is a forecasting platform where an active community of thousands of forecasters regularly make probabilistic predictions on topics of interest ranging from scientific progress to geopolitics. Forecasts are aggregated into a time-weighted median, the “Community Prediction”, as well as the more sophisticated “Metaculus Prediction”, which weights forecasts based on past performance and extremises in order to compensate for systematic human cognitive biases. Although we feature questions on a wide range of topics, Metaculus focuses on issues of artificial intelligence, biosecurity, climate change and nuclear risk.In this post, we report the results of a recent analysis we conducted exploring the performance of all AI-related forecasts on the Metaculus platform, including an investigation of the factors that enhance or degrade accuracy.Most significantly, in this analysis we found that both the Community and Metaculus Predictions robustly outperform naïve baselines. The recent claim that performance on binary questions is “near chance” requires sampling on only a small subset of the forecasting questions we have posed or on the questionable proposition that a Brier score of 0.207 is akin to a coin flip. What’s more, forecasters performed better on continuous questions, as measured by the continuous ranked probability score (CRPS). In sum, both the Community Prediction and the Metaculus Prediction—on both binary and continuous questions—provide a clear and useful insight into the future of artificial intelligence, despite not being “perfect”.Summary FindingsWe reviewed Metaculus’s resolved binary questions (“What is the probability that X will happen?”) and resolved continuous questions (“What will be the value of X?”) that were related to the future of artificial intelligence. For the purpose of this analysis, we defined AI-related questions as those which belonged to one or more of the following categories: “Computer Science: AI and Machine Learning”; “Computing: Artificial Intelligence”; “Computing: AI”; and “Series: Forecasting AI Progress.” This gave us: 64 resolved binary questions (with 10,497 forecasts by 2,052 users) and 88 resolved continuous questions (with 13,683 predictions by 1,114 users). Our review of these forecasts found:Both the community and Metaculus predictions robustly outperform naïve baselines.Analysis showing that the community prediction’s Brier score on binary questions is 0.237 relies on sampling only a small subset of our AI-related questions.Our analysis of all binary AI-related questions finds that the score is actually 0.207 (a point a recent analysis agrees with), which is significantly better than “chance”.Forecasters performed better on continuous questions than binary ones.Top-Line ResultsThis chart details the performance of both the Community and Metaculus predictions on binary and continuous questions. Please note that, for all scores, lower is better and that Brier scores, which range from 0 to 1 (where 0 represents oracular omniscience and 1 represents complete anticipatory failure) are roughly comparable to continuous ranked probability scores (CRPS) given the way we conducted our analysis. (For more on scoring methodology, see below.)Brier (binary questions)CRPS (continuous questions)Community Prediction0.2070.096Metaculus Prediction0.1820.103baseline prediction0.250.172Results for Binary QuestionsWe can use Brier scores to measure the quality of a forecast on binary questions. Given that a Brier score is the mean squared error of a forecast, the following things are true:If you alread...]]>
Peter Scoblic https://forum.effectivealtruism.org/posts/e9htD7txe8RDdcehm/exploring-metaculus-s-ai-track-record Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exploring Metaculus’s AI Track Record, published by Peter Scoblic on May 1, 2023 on The Effective Altruism Forum.By Peter Mühlbacher, Research Scientist at Metaculus, and Peter Scoblic, Director of Nuclear Risk at MetaculusMetaculus is a forecasting platform where an active community of thousands of forecasters regularly make probabilistic predictions on topics of interest ranging from scientific progress to geopolitics. Forecasts are aggregated into a time-weighted median, the “Community Prediction”, as well as the more sophisticated “Metaculus Prediction”, which weights forecasts based on past performance and extremises in order to compensate for systematic human cognitive biases. Although we feature questions on a wide range of topics, Metaculus focuses on issues of artificial intelligence, biosecurity, climate change and nuclear risk.In this post, we report the results of a recent analysis we conducted exploring the performance of all AI-related forecasts on the Metaculus platform, including an investigation of the factors that enhance or degrade accuracy.Most significantly, in this analysis we found that both the Community and Metaculus Predictions robustly outperform naïve baselines. The recent claim that performance on binary questions is “near chance” requires sampling on only a small subset of the forecasting questions we have posed or on the questionable proposition that a Brier score of 0.207 is akin to a coin flip. What’s more, forecasters performed better on continuous questions, as measured by the continuous ranked probability score (CRPS). In sum, both the Community Prediction and the Metaculus Prediction—on both binary and continuous questions—provide a clear and useful insight into the future of artificial intelligence, despite not being “perfect”.Summary FindingsWe reviewed Metaculus’s resolved binary questions (“What is the probability that X will happen?”) and resolved continuous questions (“What will be the value of X?”) that were related to the future of artificial intelligence. For the purpose of this analysis, we defined AI-related questions as those which belonged to one or more of the following categories: “Computer Science: AI and Machine Learning”; “Computing: Artificial Intelligence”; “Computing: AI”; and “Series: Forecasting AI Progress.” This gave us: 64 resolved binary questions (with 10,497 forecasts by 2,052 users) and 88 resolved continuous questions (with 13,683 predictions by 1,114 users). Our review of these forecasts found:Both the community and Metaculus predictions robustly outperform naïve baselines.Analysis showing that the community prediction’s Brier score on binary questions is 0.237 relies on sampling only a small subset of our AI-related questions.Our analysis of all binary AI-related questions finds that the score is actually 0.207 (a point a recent analysis agrees with), which is significantly better than “chance”.Forecasters performed better on continuous questions than binary ones.Top-Line ResultsThis chart details the performance of both the Community and Metaculus predictions on binary and continuous questions. Please note that, for all scores, lower is better and that Brier scores, which range from 0 to 1 (where 0 represents oracular omniscience and 1 represents complete anticipatory failure) are roughly comparable to continuous ranked probability scores (CRPS) given the way we conducted our analysis. (For more on scoring methodology, see below.)Brier (binary questions)CRPS (continuous questions)Community Prediction0.2070.096Metaculus Prediction0.1820.103baseline prediction0.250.172Results for Binary QuestionsWe can use Brier scores to measure the quality of a forecast on binary questions. Given that a Brier score is the mean squared error of a forecast, the following things are true:If you alread...]]>
Tue, 02 May 2023 11:01:19 +0000 EA - Exploring Metaculus’s AI Track Record by Peter Scoblic Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exploring Metaculus’s AI Track Record, published by Peter Scoblic on May 1, 2023 on The Effective Altruism Forum.By Peter Mühlbacher, Research Scientist at Metaculus, and Peter Scoblic, Director of Nuclear Risk at MetaculusMetaculus is a forecasting platform where an active community of thousands of forecasters regularly make probabilistic predictions on topics of interest ranging from scientific progress to geopolitics. Forecasts are aggregated into a time-weighted median, the “Community Prediction”, as well as the more sophisticated “Metaculus Prediction”, which weights forecasts based on past performance and extremises in order to compensate for systematic human cognitive biases. Although we feature questions on a wide range of topics, Metaculus focuses on issues of artificial intelligence, biosecurity, climate change and nuclear risk.In this post, we report the results of a recent analysis we conducted exploring the performance of all AI-related forecasts on the Metaculus platform, including an investigation of the factors that enhance or degrade accuracy.Most significantly, in this analysis we found that both the Community and Metaculus Predictions robustly outperform naïve baselines. The recent claim that performance on binary questions is “near chance” requires sampling on only a small subset of the forecasting questions we have posed or on the questionable proposition that a Brier score of 0.207 is akin to a coin flip. What’s more, forecasters performed better on continuous questions, as measured by the continuous ranked probability score (CRPS). In sum, both the Community Prediction and the Metaculus Prediction—on both binary and continuous questions—provide a clear and useful insight into the future of artificial intelligence, despite not being “perfect”.Summary FindingsWe reviewed Metaculus’s resolved binary questions (“What is the probability that X will happen?”) and resolved continuous questions (“What will be the value of X?”) that were related to the future of artificial intelligence. For the purpose of this analysis, we defined AI-related questions as those which belonged to one or more of the following categories: “Computer Science: AI and Machine Learning”; “Computing: Artificial Intelligence”; “Computing: AI”; and “Series: Forecasting AI Progress.” This gave us: 64 resolved binary questions (with 10,497 forecasts by 2,052 users) and 88 resolved continuous questions (with 13,683 predictions by 1,114 users). Our review of these forecasts found:Both the community and Metaculus predictions robustly outperform naïve baselines.Analysis showing that the community prediction’s Brier score on binary questions is 0.237 relies on sampling only a small subset of our AI-related questions.Our analysis of all binary AI-related questions finds that the score is actually 0.207 (a point a recent analysis agrees with), which is significantly better than “chance”.Forecasters performed better on continuous questions than binary ones.Top-Line ResultsThis chart details the performance of both the Community and Metaculus predictions on binary and continuous questions. Please note that, for all scores, lower is better and that Brier scores, which range from 0 to 1 (where 0 represents oracular omniscience and 1 represents complete anticipatory failure) are roughly comparable to continuous ranked probability scores (CRPS) given the way we conducted our analysis. (For more on scoring methodology, see below.)Brier (binary questions)CRPS (continuous questions)Community Prediction0.2070.096Metaculus Prediction0.1820.103baseline prediction0.250.172Results for Binary QuestionsWe can use Brier scores to measure the quality of a forecast on binary questions. Given that a Brier score is the mean squared error of a forecast, the following things are true:If you alread...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exploring Metaculus’s AI Track Record, published by Peter Scoblic on May 1, 2023 on The Effective Altruism Forum.By Peter Mühlbacher, Research Scientist at Metaculus, and Peter Scoblic, Director of Nuclear Risk at MetaculusMetaculus is a forecasting platform where an active community of thousands of forecasters regularly make probabilistic predictions on topics of interest ranging from scientific progress to geopolitics. Forecasts are aggregated into a time-weighted median, the “Community Prediction”, as well as the more sophisticated “Metaculus Prediction”, which weights forecasts based on past performance and extremises in order to compensate for systematic human cognitive biases. Although we feature questions on a wide range of topics, Metaculus focuses on issues of artificial intelligence, biosecurity, climate change and nuclear risk.In this post, we report the results of a recent analysis we conducted exploring the performance of all AI-related forecasts on the Metaculus platform, including an investigation of the factors that enhance or degrade accuracy.Most significantly, in this analysis we found that both the Community and Metaculus Predictions robustly outperform naïve baselines. The recent claim that performance on binary questions is “near chance” requires sampling on only a small subset of the forecasting questions we have posed or on the questionable proposition that a Brier score of 0.207 is akin to a coin flip. What’s more, forecasters performed better on continuous questions, as measured by the continuous ranked probability score (CRPS). In sum, both the Community Prediction and the Metaculus Prediction—on both binary and continuous questions—provide a clear and useful insight into the future of artificial intelligence, despite not being “perfect”.Summary FindingsWe reviewed Metaculus’s resolved binary questions (“What is the probability that X will happen?”) and resolved continuous questions (“What will be the value of X?”) that were related to the future of artificial intelligence. For the purpose of this analysis, we defined AI-related questions as those which belonged to one or more of the following categories: “Computer Science: AI and Machine Learning”; “Computing: Artificial Intelligence”; “Computing: AI”; and “Series: Forecasting AI Progress.” This gave us: 64 resolved binary questions (with 10,497 forecasts by 2,052 users) and 88 resolved continuous questions (with 13,683 predictions by 1,114 users). Our review of these forecasts found:Both the community and Metaculus predictions robustly outperform naïve baselines.Analysis showing that the community prediction’s Brier score on binary questions is 0.237 relies on sampling only a small subset of our AI-related questions.Our analysis of all binary AI-related questions finds that the score is actually 0.207 (a point a recent analysis agrees with), which is significantly better than “chance”.Forecasters performed better on continuous questions than binary ones.Top-Line ResultsThis chart details the performance of both the Community and Metaculus predictions on binary and continuous questions. Please note that, for all scores, lower is better and that Brier scores, which range from 0 to 1 (where 0 represents oracular omniscience and 1 represents complete anticipatory failure) are roughly comparable to continuous ranked probability scores (CRPS) given the way we conducted our analysis. (For more on scoring methodology, see below.)Brier (binary questions)CRPS (continuous questions)Community Prediction0.2070.096Metaculus Prediction0.1820.103baseline prediction0.250.172Results for Binary QuestionsWe can use Brier scores to measure the quality of a forecast on binary questions. Given that a Brier score is the mean squared error of a forecast, the following things are true:If you alread...]]>
Peter Scoblic https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:00 None full 5803
7mSqokBNuHu3rzy4L_NL_EA_EA EA - Retrospective on recent activity of Riesgos Catastróficos Globales by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Retrospective on recent activity of Riesgos Catastróficos Globales, published by Jaime Sevilla on May 1, 2023 on The Effective Altruism Forum.The new team of Riesgos Catastróficos Globales started their job two months ago.During this time, they have been working on two reports on what we have identified as top priorities for the management of Global Catastrophic Risks from Spanish-Speaking countries: food security during Abrupt Sunlight-Reduction Scenarios (e.g. nuclear winter) and AI regulation.In this article, I will cover their output in more depth and future plans, with some reflections on how the project is going.The short version is that I am reasonably pleased, and the directive board has decided to continue the project for two more months. The team's productivity has exceeded my expectations, though I see opportunities for improvement in our quality assurance, formation and outreach. We remain short of funding; if you want to support our work you can donate through our donation portal.Intellectual outputIn the last two months, the team has been working on two major reports and several minor outputs.1) Report on food security in Argentina during abrupt sun-reducing scenarios (ASRS), in collaboration with ALLFED. In this report, we explain the important role Argentina could have during ASRS to mitigate global famine. We sketch several policies that would be useful inclusions in an emergency plan, such as resilient food deployment, together with suggestions on which public organisms could implement them.2) Report on AI regulation for the EU AI Act Spanish sandbox (forthcoming). We are interviewing and eliciting opinions from several experts, to compile an overview of AI risk for Spanish policymakers and proposals to make the most out of the upcoming EU AI sandbox.3) An article about AI regulation in Spain. In this short article, we explain the relevance of Spain for AI regulation in the context of the EU AI Act. We propose four policies that could be tested in the upcoming sandbox. It serves as a preview of the report I mentioned above.4) An article about the new GCR mitigation law in USA, reporting on its meaning and proposing similar initiatives for Spanish-Speaking countries.5) Two statements about Our Common Agenda Policy Briefs, in collaboration with the Simon Institute.Overall, I think we have done a good job of contextualizing the research done in the international GCR community. However, I feel we rely a lot on the involvement of the direction board for quality assurance, and our limited time means that some mistakes and misconceptions will likely have made it to publication.Having said that, I am pleased with the results. The team has been amazingly productive, publishing a 60-page report in two months and several minor publications alongside it.In the future, we will be involving more experts for a more thorough review process. This also means that we will be erring towards producing shorter reports, which can be more thoroughly checked and are better for engaging policy-makers.FormationEarly in the project, we identified the education of our staff as a key challenge to overcome. Our staff has work experience and credentials, but their exposure to the GCR literature was limited.We undertook several activities to address this lack of formation:Knowledge transfer talks with Spanish-speaking experts from our directive board and advisory network (Juan García from ALLFED, Jaime Sevilla from Epoch, Clarissa Rios Rojas from CSER).A GCR reading group with curated reading recommendations.An online course taught by Sandra Malagón from Carreras con Impacto.A dedicated course on the basics of Machine Learning.I am satisfied with the results, and I see a clear progression in the team. In hindsight, I think we erred on the side of too much form...]]>
Jaime Sevilla https://forum.effectivealtruism.org/posts/7mSqokBNuHu3rzy4L/retrospective-on-recent-activity-of-riesgos-catastroficos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Retrospective on recent activity of Riesgos Catastróficos Globales, published by Jaime Sevilla on May 1, 2023 on The Effective Altruism Forum.The new team of Riesgos Catastróficos Globales started their job two months ago.During this time, they have been working on two reports on what we have identified as top priorities for the management of Global Catastrophic Risks from Spanish-Speaking countries: food security during Abrupt Sunlight-Reduction Scenarios (e.g. nuclear winter) and AI regulation.In this article, I will cover their output in more depth and future plans, with some reflections on how the project is going.The short version is that I am reasonably pleased, and the directive board has decided to continue the project for two more months. The team's productivity has exceeded my expectations, though I see opportunities for improvement in our quality assurance, formation and outreach. We remain short of funding; if you want to support our work you can donate through our donation portal.Intellectual outputIn the last two months, the team has been working on two major reports and several minor outputs.1) Report on food security in Argentina during abrupt sun-reducing scenarios (ASRS), in collaboration with ALLFED. In this report, we explain the important role Argentina could have during ASRS to mitigate global famine. We sketch several policies that would be useful inclusions in an emergency plan, such as resilient food deployment, together with suggestions on which public organisms could implement them.2) Report on AI regulation for the EU AI Act Spanish sandbox (forthcoming). We are interviewing and eliciting opinions from several experts, to compile an overview of AI risk for Spanish policymakers and proposals to make the most out of the upcoming EU AI sandbox.3) An article about AI regulation in Spain. In this short article, we explain the relevance of Spain for AI regulation in the context of the EU AI Act. We propose four policies that could be tested in the upcoming sandbox. It serves as a preview of the report I mentioned above.4) An article about the new GCR mitigation law in USA, reporting on its meaning and proposing similar initiatives for Spanish-Speaking countries.5) Two statements about Our Common Agenda Policy Briefs, in collaboration with the Simon Institute.Overall, I think we have done a good job of contextualizing the research done in the international GCR community. However, I feel we rely a lot on the involvement of the direction board for quality assurance, and our limited time means that some mistakes and misconceptions will likely have made it to publication.Having said that, I am pleased with the results. The team has been amazingly productive, publishing a 60-page report in two months and several minor publications alongside it.In the future, we will be involving more experts for a more thorough review process. This also means that we will be erring towards producing shorter reports, which can be more thoroughly checked and are better for engaging policy-makers.FormationEarly in the project, we identified the education of our staff as a key challenge to overcome. Our staff has work experience and credentials, but their exposure to the GCR literature was limited.We undertook several activities to address this lack of formation:Knowledge transfer talks with Spanish-speaking experts from our directive board and advisory network (Juan García from ALLFED, Jaime Sevilla from Epoch, Clarissa Rios Rojas from CSER).A GCR reading group with curated reading recommendations.An online course taught by Sandra Malagón from Carreras con Impacto.A dedicated course on the basics of Machine Learning.I am satisfied with the results, and I see a clear progression in the team. In hindsight, I think we erred on the side of too much form...]]>
Tue, 02 May 2023 05:27:15 +0000 EA - Retrospective on recent activity of Riesgos Catastróficos Globales by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Retrospective on recent activity of Riesgos Catastróficos Globales, published by Jaime Sevilla on May 1, 2023 on The Effective Altruism Forum.The new team of Riesgos Catastróficos Globales started their job two months ago.During this time, they have been working on two reports on what we have identified as top priorities for the management of Global Catastrophic Risks from Spanish-Speaking countries: food security during Abrupt Sunlight-Reduction Scenarios (e.g. nuclear winter) and AI regulation.In this article, I will cover their output in more depth and future plans, with some reflections on how the project is going.The short version is that I am reasonably pleased, and the directive board has decided to continue the project for two more months. The team's productivity has exceeded my expectations, though I see opportunities for improvement in our quality assurance, formation and outreach. We remain short of funding; if you want to support our work you can donate through our donation portal.Intellectual outputIn the last two months, the team has been working on two major reports and several minor outputs.1) Report on food security in Argentina during abrupt sun-reducing scenarios (ASRS), in collaboration with ALLFED. In this report, we explain the important role Argentina could have during ASRS to mitigate global famine. We sketch several policies that would be useful inclusions in an emergency plan, such as resilient food deployment, together with suggestions on which public organisms could implement them.2) Report on AI regulation for the EU AI Act Spanish sandbox (forthcoming). We are interviewing and eliciting opinions from several experts, to compile an overview of AI risk for Spanish policymakers and proposals to make the most out of the upcoming EU AI sandbox.3) An article about AI regulation in Spain. In this short article, we explain the relevance of Spain for AI regulation in the context of the EU AI Act. We propose four policies that could be tested in the upcoming sandbox. It serves as a preview of the report I mentioned above.4) An article about the new GCR mitigation law in USA, reporting on its meaning and proposing similar initiatives for Spanish-Speaking countries.5) Two statements about Our Common Agenda Policy Briefs, in collaboration with the Simon Institute.Overall, I think we have done a good job of contextualizing the research done in the international GCR community. However, I feel we rely a lot on the involvement of the direction board for quality assurance, and our limited time means that some mistakes and misconceptions will likely have made it to publication.Having said that, I am pleased with the results. The team has been amazingly productive, publishing a 60-page report in two months and several minor publications alongside it.In the future, we will be involving more experts for a more thorough review process. This also means that we will be erring towards producing shorter reports, which can be more thoroughly checked and are better for engaging policy-makers.FormationEarly in the project, we identified the education of our staff as a key challenge to overcome. Our staff has work experience and credentials, but their exposure to the GCR literature was limited.We undertook several activities to address this lack of formation:Knowledge transfer talks with Spanish-speaking experts from our directive board and advisory network (Juan García from ALLFED, Jaime Sevilla from Epoch, Clarissa Rios Rojas from CSER).A GCR reading group with curated reading recommendations.An online course taught by Sandra Malagón from Carreras con Impacto.A dedicated course on the basics of Machine Learning.I am satisfied with the results, and I see a clear progression in the team. In hindsight, I think we erred on the side of too much form...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Retrospective on recent activity of Riesgos Catastróficos Globales, published by Jaime Sevilla on May 1, 2023 on The Effective Altruism Forum.The new team of Riesgos Catastróficos Globales started their job two months ago.During this time, they have been working on two reports on what we have identified as top priorities for the management of Global Catastrophic Risks from Spanish-Speaking countries: food security during Abrupt Sunlight-Reduction Scenarios (e.g. nuclear winter) and AI regulation.In this article, I will cover their output in more depth and future plans, with some reflections on how the project is going.The short version is that I am reasonably pleased, and the directive board has decided to continue the project for two more months. The team's productivity has exceeded my expectations, though I see opportunities for improvement in our quality assurance, formation and outreach. We remain short of funding; if you want to support our work you can donate through our donation portal.Intellectual outputIn the last two months, the team has been working on two major reports and several minor outputs.1) Report on food security in Argentina during abrupt sun-reducing scenarios (ASRS), in collaboration with ALLFED. In this report, we explain the important role Argentina could have during ASRS to mitigate global famine. We sketch several policies that would be useful inclusions in an emergency plan, such as resilient food deployment, together with suggestions on which public organisms could implement them.2) Report on AI regulation for the EU AI Act Spanish sandbox (forthcoming). We are interviewing and eliciting opinions from several experts, to compile an overview of AI risk for Spanish policymakers and proposals to make the most out of the upcoming EU AI sandbox.3) An article about AI regulation in Spain. In this short article, we explain the relevance of Spain for AI regulation in the context of the EU AI Act. We propose four policies that could be tested in the upcoming sandbox. It serves as a preview of the report I mentioned above.4) An article about the new GCR mitigation law in USA, reporting on its meaning and proposing similar initiatives for Spanish-Speaking countries.5) Two statements about Our Common Agenda Policy Briefs, in collaboration with the Simon Institute.Overall, I think we have done a good job of contextualizing the research done in the international GCR community. However, I feel we rely a lot on the involvement of the direction board for quality assurance, and our limited time means that some mistakes and misconceptions will likely have made it to publication.Having said that, I am pleased with the results. The team has been amazingly productive, publishing a 60-page report in two months and several minor publications alongside it.In the future, we will be involving more experts for a more thorough review process. This also means that we will be erring towards producing shorter reports, which can be more thoroughly checked and are better for engaging policy-makers.FormationEarly in the project, we identified the education of our staff as a key challenge to overcome. Our staff has work experience and credentials, but their exposure to the GCR literature was limited.We undertook several activities to address this lack of formation:Knowledge transfer talks with Spanish-speaking experts from our directive board and advisory network (Juan García from ALLFED, Jaime Sevilla from Epoch, Clarissa Rios Rojas from CSER).A GCR reading group with curated reading recommendations.An online course taught by Sandra Malagón from Carreras con Impacto.A dedicated course on the basics of Machine Learning.I am satisfied with the results, and I see a clear progression in the team. In hindsight, I think we erred on the side of too much form...]]>
Jaime Sevilla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:42 None full 5799
pPQ5wqEPxLexCqGkL_NL_EA_EA EA - [Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead by Darius1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead, published by Darius1 on May 1, 2023 on The Effective Altruism Forum.Geoffrey Hinton—a pioneer in artificial neural networks—just left Google, as reported by the New York Times: ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead (archive version).Some highlights from the article [emphasis added]:Dr. Hinton said he has quit his job at Google, where he has worked for more than decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job.In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network...At the time, few researchers believed in the idea. But it became his life’s work.In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krishevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technol...]]>
Darius1 https://forum.effectivealtruism.org/posts/pPQ5wqEPxLexCqGkL/linkpost-the-godfather-of-a-i-leaves-google-and-warns-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead, published by Darius1 on May 1, 2023 on The Effective Altruism Forum.Geoffrey Hinton—a pioneer in artificial neural networks—just left Google, as reported by the New York Times: ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead (archive version).Some highlights from the article [emphasis added]:Dr. Hinton said he has quit his job at Google, where he has worked for more than decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job.In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network...At the time, few researchers believed in the idea. But it became his life’s work.In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krishevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technol...]]>
Tue, 02 May 2023 03:45:17 +0000 EA - [Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead by Darius1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead, published by Darius1 on May 1, 2023 on The Effective Altruism Forum.Geoffrey Hinton—a pioneer in artificial neural networks—just left Google, as reported by the New York Times: ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead (archive version).Some highlights from the article [emphasis added]:Dr. Hinton said he has quit his job at Google, where he has worked for more than decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job.In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network...At the time, few researchers believed in the idea. But it became his life’s work.In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krishevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technol...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead, published by Darius1 on May 1, 2023 on The Effective Altruism Forum.Geoffrey Hinton—a pioneer in artificial neural networks—just left Google, as reported by the New York Times: ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead (archive version).Some highlights from the article [emphasis added]:Dr. Hinton said he has quit his job at Google, where he has worked for more than decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job.In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network...At the time, few researchers believed in the idea. But it became his life’s work.In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krishevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technol...]]>
Darius1 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:10 None full 5798
KTsaZ69Ctkuw6n4tu_NL_EA_EA EA - Overview: Reflection Projects on Community Reform by Joris P Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overview: Reflection Projects on Community Reform, published by Joris P on May 1, 2023 on The Effective Altruism Forum.TL;DR: We’re sharing this short collection of active projects that we’re aware of, that focus on community reform and reflections on recent events.We don’t know much about the projects and are probably missing some projects (please let us know in the comments!), but we’ve found it hard to keep up with what is and is not happening, and noticed that we wished that a list like this existed.We’re posting this in our personal capacity. Any mistakes are our own. We did ask colleagues at CEA for input and feedback; thank you to Lizka & Chana (CEA), and Robert (not CEA) in particular!A lot has happened in the EA community over the past half year. In light of all the developments, people have discussed all kinds of changes EA could make.As community members, we’ve been finding it quite hard to follow what work is being done to reflect on the various ways in which we could reform. That's why we tried to put together a (incomplete!) list of (mostly ‘official’) projects that are being done to reflect on all sorts of questions that came up over the past six months.Some CaveatsThe topics listed below are not in any way a complete overview of the things that could or should be discussed - we don’t think we’re covering all the things community members are thinking about, and aren’t trying to!A topic isn’t “covered” or “taken” if there’s already a project that’s focused on it. Many of these projects are trying to cover a lot of ground, and the people working on them would probably appreciate other groups trying to do something on the topic, too.We chose to include mostly reflection projects that are being done in an official capacity, or by more than one person. That means we’re not including a lot of interesting Forum posts or news articles that have been written about possible problems and reforms! See the section 'Assorted written criticisms and reflections' below for some examples.This list is almost certainly incomplete. For instance, some projects we've heard about aren't public, and we think it's likely that there are other projects we haven't heard about. We encourage people to share what they’re working on in the comments!We’re not really sharing our views on how excited we are about the projects.The projects we’re aware ofThe EA survey (run by Rethink Priorities in collaboration with CEA) was updated and re-sent to people to ask about FTX community response — you can see the results hereCommunity health concernsInvestigations into the CEA Community Health Team’s processes and past actions:There’s an external investigation into how the CEA Community Health Team responded to concerns about Owen Cotton-Barratt (source)An internal review of the CEA Community Health Team’s processes is also ongoing (same source as above)Members of the Community Health team shared they hope that both investigations will be concluded sometime in the next month. They noted that the team does not have control over the timeline of the external investigationThe CEA Community Health Team is (separately) conducting “a project to get a better understanding of the experiences of women and gender minorities in the EA community” (source)Governance in EA institutionsJulia Wise and Ozzie Gooen are setting up a taskforce that will recommend potential governance reforms for various EA organizations. They are currently looking to get in touch with people with relevant expertise - see hereNote that they write: “This project doesn’t aim to be a retrospective on what happened with FTX, and won’t address all problems in EA, but we hope to make progress on some of them.”EA leadership & accountabilityAn investigation by the law firm Mintz, commissioned by EVF UK and EVF US, “to...]]>
Joris P https://forum.effectivealtruism.org/posts/KTsaZ69Ctkuw6n4tu/overview-reflection-projects-on-community-reform Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overview: Reflection Projects on Community Reform, published by Joris P on May 1, 2023 on The Effective Altruism Forum.TL;DR: We’re sharing this short collection of active projects that we’re aware of, that focus on community reform and reflections on recent events.We don’t know much about the projects and are probably missing some projects (please let us know in the comments!), but we’ve found it hard to keep up with what is and is not happening, and noticed that we wished that a list like this existed.We’re posting this in our personal capacity. Any mistakes are our own. We did ask colleagues at CEA for input and feedback; thank you to Lizka & Chana (CEA), and Robert (not CEA) in particular!A lot has happened in the EA community over the past half year. In light of all the developments, people have discussed all kinds of changes EA could make.As community members, we’ve been finding it quite hard to follow what work is being done to reflect on the various ways in which we could reform. That's why we tried to put together a (incomplete!) list of (mostly ‘official’) projects that are being done to reflect on all sorts of questions that came up over the past six months.Some CaveatsThe topics listed below are not in any way a complete overview of the things that could or should be discussed - we don’t think we’re covering all the things community members are thinking about, and aren’t trying to!A topic isn’t “covered” or “taken” if there’s already a project that’s focused on it. Many of these projects are trying to cover a lot of ground, and the people working on them would probably appreciate other groups trying to do something on the topic, too.We chose to include mostly reflection projects that are being done in an official capacity, or by more than one person. That means we’re not including a lot of interesting Forum posts or news articles that have been written about possible problems and reforms! See the section 'Assorted written criticisms and reflections' below for some examples.This list is almost certainly incomplete. For instance, some projects we've heard about aren't public, and we think it's likely that there are other projects we haven't heard about. We encourage people to share what they’re working on in the comments!We’re not really sharing our views on how excited we are about the projects.The projects we’re aware ofThe EA survey (run by Rethink Priorities in collaboration with CEA) was updated and re-sent to people to ask about FTX community response — you can see the results hereCommunity health concernsInvestigations into the CEA Community Health Team’s processes and past actions:There’s an external investigation into how the CEA Community Health Team responded to concerns about Owen Cotton-Barratt (source)An internal review of the CEA Community Health Team’s processes is also ongoing (same source as above)Members of the Community Health team shared they hope that both investigations will be concluded sometime in the next month. They noted that the team does not have control over the timeline of the external investigationThe CEA Community Health Team is (separately) conducting “a project to get a better understanding of the experiences of women and gender minorities in the EA community” (source)Governance in EA institutionsJulia Wise and Ozzie Gooen are setting up a taskforce that will recommend potential governance reforms for various EA organizations. They are currently looking to get in touch with people with relevant expertise - see hereNote that they write: “This project doesn’t aim to be a retrospective on what happened with FTX, and won’t address all problems in EA, but we hope to make progress on some of them.”EA leadership & accountabilityAn investigation by the law firm Mintz, commissioned by EVF UK and EVF US, “to...]]>
Mon, 01 May 2023 22:20:04 +0000 EA - Overview: Reflection Projects on Community Reform by Joris P Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overview: Reflection Projects on Community Reform, published by Joris P on May 1, 2023 on The Effective Altruism Forum.TL;DR: We’re sharing this short collection of active projects that we’re aware of, that focus on community reform and reflections on recent events.We don’t know much about the projects and are probably missing some projects (please let us know in the comments!), but we’ve found it hard to keep up with what is and is not happening, and noticed that we wished that a list like this existed.We’re posting this in our personal capacity. Any mistakes are our own. We did ask colleagues at CEA for input and feedback; thank you to Lizka & Chana (CEA), and Robert (not CEA) in particular!A lot has happened in the EA community over the past half year. In light of all the developments, people have discussed all kinds of changes EA could make.As community members, we’ve been finding it quite hard to follow what work is being done to reflect on the various ways in which we could reform. That's why we tried to put together a (incomplete!) list of (mostly ‘official’) projects that are being done to reflect on all sorts of questions that came up over the past six months.Some CaveatsThe topics listed below are not in any way a complete overview of the things that could or should be discussed - we don’t think we’re covering all the things community members are thinking about, and aren’t trying to!A topic isn’t “covered” or “taken” if there’s already a project that’s focused on it. Many of these projects are trying to cover a lot of ground, and the people working on them would probably appreciate other groups trying to do something on the topic, too.We chose to include mostly reflection projects that are being done in an official capacity, or by more than one person. That means we’re not including a lot of interesting Forum posts or news articles that have been written about possible problems and reforms! See the section 'Assorted written criticisms and reflections' below for some examples.This list is almost certainly incomplete. For instance, some projects we've heard about aren't public, and we think it's likely that there are other projects we haven't heard about. We encourage people to share what they’re working on in the comments!We’re not really sharing our views on how excited we are about the projects.The projects we’re aware ofThe EA survey (run by Rethink Priorities in collaboration with CEA) was updated and re-sent to people to ask about FTX community response — you can see the results hereCommunity health concernsInvestigations into the CEA Community Health Team’s processes and past actions:There’s an external investigation into how the CEA Community Health Team responded to concerns about Owen Cotton-Barratt (source)An internal review of the CEA Community Health Team’s processes is also ongoing (same source as above)Members of the Community Health team shared they hope that both investigations will be concluded sometime in the next month. They noted that the team does not have control over the timeline of the external investigationThe CEA Community Health Team is (separately) conducting “a project to get a better understanding of the experiences of women and gender minorities in the EA community” (source)Governance in EA institutionsJulia Wise and Ozzie Gooen are setting up a taskforce that will recommend potential governance reforms for various EA organizations. They are currently looking to get in touch with people with relevant expertise - see hereNote that they write: “This project doesn’t aim to be a retrospective on what happened with FTX, and won’t address all problems in EA, but we hope to make progress on some of them.”EA leadership & accountabilityAn investigation by the law firm Mintz, commissioned by EVF UK and EVF US, “to...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overview: Reflection Projects on Community Reform, published by Joris P on May 1, 2023 on The Effective Altruism Forum.TL;DR: We’re sharing this short collection of active projects that we’re aware of, that focus on community reform and reflections on recent events.We don’t know much about the projects and are probably missing some projects (please let us know in the comments!), but we’ve found it hard to keep up with what is and is not happening, and noticed that we wished that a list like this existed.We’re posting this in our personal capacity. Any mistakes are our own. We did ask colleagues at CEA for input and feedback; thank you to Lizka & Chana (CEA), and Robert (not CEA) in particular!A lot has happened in the EA community over the past half year. In light of all the developments, people have discussed all kinds of changes EA could make.As community members, we’ve been finding it quite hard to follow what work is being done to reflect on the various ways in which we could reform. That's why we tried to put together a (incomplete!) list of (mostly ‘official’) projects that are being done to reflect on all sorts of questions that came up over the past six months.Some CaveatsThe topics listed below are not in any way a complete overview of the things that could or should be discussed - we don’t think we’re covering all the things community members are thinking about, and aren’t trying to!A topic isn’t “covered” or “taken” if there’s already a project that’s focused on it. Many of these projects are trying to cover a lot of ground, and the people working on them would probably appreciate other groups trying to do something on the topic, too.We chose to include mostly reflection projects that are being done in an official capacity, or by more than one person. That means we’re not including a lot of interesting Forum posts or news articles that have been written about possible problems and reforms! See the section 'Assorted written criticisms and reflections' below for some examples.This list is almost certainly incomplete. For instance, some projects we've heard about aren't public, and we think it's likely that there are other projects we haven't heard about. We encourage people to share what they’re working on in the comments!We’re not really sharing our views on how excited we are about the projects.The projects we’re aware ofThe EA survey (run by Rethink Priorities in collaboration with CEA) was updated and re-sent to people to ask about FTX community response — you can see the results hereCommunity health concernsInvestigations into the CEA Community Health Team’s processes and past actions:There’s an external investigation into how the CEA Community Health Team responded to concerns about Owen Cotton-Barratt (source)An internal review of the CEA Community Health Team’s processes is also ongoing (same source as above)Members of the Community Health team shared they hope that both investigations will be concluded sometime in the next month. They noted that the team does not have control over the timeline of the external investigationThe CEA Community Health Team is (separately) conducting “a project to get a better understanding of the experiences of women and gender minorities in the EA community” (source)Governance in EA institutionsJulia Wise and Ozzie Gooen are setting up a taskforce that will recommend potential governance reforms for various EA organizations. They are currently looking to get in touch with people with relevant expertise - see hereNote that they write: “This project doesn’t aim to be a retrospective on what happened with FTX, and won’t address all problems in EA, but we hope to make progress on some of them.”EA leadership & accountabilityAn investigation by the law firm Mintz, commissioned by EVF UK and EVF US, “to...]]>
Joris P https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:41 None full 5796
bB2CSnFS6mEcNmPgD_NL_EA_EA EA - The costs of caution by Kelsey Piper Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The costs of caution, published by Kelsey Piper on May 1, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.If you thought we might be able to cure cancer in 2200, then I think you ought to expect there’s a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.Josh Cason on Twitter raised an objection to recent calls for a moratorium on AI development:April 2, 2023Or raise your hand if you or someone you love has a terminal illness, believes Ai has a chance at accelerating medical work exponentially, and doesn't have til Christmas, to wait on your make believe moratorium. Have a heart man ❤️I’ve said that I think we should ideally move a lot slower on developing powerful AI systems. I still believe that. But I think Josh’s objection is important and deserves a full airing.Approximately 150,000 people die worldwide every day. Nearly all of those deaths are, in some sense, preventable, with sufficiently advanced medical technology. Every year, five million families bury a child dead before their fifth birthday. Hundreds of millions of people live in extreme poverty. Billions more have far too little money to achieve their dreams and grow into their full potential. Tens of billions of animals are tortured on factory farms.Scientific research and economic progress could make an enormous difference to all these problems. Medical research could cure diseases. Economic progress could make food, shelter, medicine, entertainment and luxury goods accessible to people who can't afford it today. Progress in meat alternatives could allow us to shut down factory farms.There are tens of thousands of scientists, engineers, and policymakers working on fixing these kinds of problems — working on developing vaccines and antivirals, understanding and arresting aging, treating cancer, building cheaper and cleaner energy sources, developing better crops and homes and forms of transportation. But there are only so many people working on each problem. In each field, there are dozens of useful, interesting subproblems that no one is working on, because there aren’t enough people to do the work.If we could train AI systems powerful enough to automate everything these scientists and engineers do, they could help.As Tom discussed in a previous post, once we develop AI that does AI research as well as a human expert, it might not be long before we have AI that is way beyond human experts in all domains. That is, AI which is way better than the best humans at all aspects of medical research: thinking of new ideas, designing experiments to test those ideas, building new technologies, and navigating bureaucracies.This means that rather than tens of thousands of top biomedical researchers, we could have hundreds of millions of significantly superhuman biomedical researchers.[1]That’s more than a thousand times as much effort going into tackling humanity’s biggest killers. If you thought we might be able to cure cancer in 2200, then I think you ought to expect there’s a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.[2]All this may be a massive underestimate. This envisions a world that’s pretty much like ours except that extraordinary talent is no longer scarce. But that feels, in some senses, like thinking about the advent of electricity purely in terms of ‘torchlight will no longer be scarce’. Electricity did make it very cheap to light our homes at night. But it also enabled vacuum cleaners, washing machines, cars, smartphones, airplanes, video recording, Twitter — entirely new things, not just cheaper access to thi...]]>
Kelsey Piper https://forum.effectivealtruism.org/posts/bB2CSnFS6mEcNmPgD/the-costs-of-caution Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The costs of caution, published by Kelsey Piper on May 1, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.If you thought we might be able to cure cancer in 2200, then I think you ought to expect there’s a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.Josh Cason on Twitter raised an objection to recent calls for a moratorium on AI development:April 2, 2023Or raise your hand if you or someone you love has a terminal illness, believes Ai has a chance at accelerating medical work exponentially, and doesn't have til Christmas, to wait on your make believe moratorium. Have a heart man ❤️I’ve said that I think we should ideally move a lot slower on developing powerful AI systems. I still believe that. But I think Josh’s objection is important and deserves a full airing.Approximately 150,000 people die worldwide every day. Nearly all of those deaths are, in some sense, preventable, with sufficiently advanced medical technology. Every year, five million families bury a child dead before their fifth birthday. Hundreds of millions of people live in extreme poverty. Billions more have far too little money to achieve their dreams and grow into their full potential. Tens of billions of animals are tortured on factory farms.Scientific research and economic progress could make an enormous difference to all these problems. Medical research could cure diseases. Economic progress could make food, shelter, medicine, entertainment and luxury goods accessible to people who can't afford it today. Progress in meat alternatives could allow us to shut down factory farms.There are tens of thousands of scientists, engineers, and policymakers working on fixing these kinds of problems — working on developing vaccines and antivirals, understanding and arresting aging, treating cancer, building cheaper and cleaner energy sources, developing better crops and homes and forms of transportation. But there are only so many people working on each problem. In each field, there are dozens of useful, interesting subproblems that no one is working on, because there aren’t enough people to do the work.If we could train AI systems powerful enough to automate everything these scientists and engineers do, they could help.As Tom discussed in a previous post, once we develop AI that does AI research as well as a human expert, it might not be long before we have AI that is way beyond human experts in all domains. That is, AI which is way better than the best humans at all aspects of medical research: thinking of new ideas, designing experiments to test those ideas, building new technologies, and navigating bureaucracies.This means that rather than tens of thousands of top biomedical researchers, we could have hundreds of millions of significantly superhuman biomedical researchers.[1]That’s more than a thousand times as much effort going into tackling humanity’s biggest killers. If you thought we might be able to cure cancer in 2200, then I think you ought to expect there’s a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.[2]All this may be a massive underestimate. This envisions a world that’s pretty much like ours except that extraordinary talent is no longer scarce. But that feels, in some senses, like thinking about the advent of electricity purely in terms of ‘torchlight will no longer be scarce’. Electricity did make it very cheap to light our homes at night. But it also enabled vacuum cleaners, washing machines, cars, smartphones, airplanes, video recording, Twitter — entirely new things, not just cheaper access to thi...]]>
Mon, 01 May 2023 22:00:01 +0000 EA - The costs of caution by Kelsey Piper Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The costs of caution, published by Kelsey Piper on May 1, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.If you thought we might be able to cure cancer in 2200, then I think you ought to expect there’s a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.Josh Cason on Twitter raised an objection to recent calls for a moratorium on AI development:April 2, 2023Or raise your hand if you or someone you love has a terminal illness, believes Ai has a chance at accelerating medical work exponentially, and doesn't have til Christmas, to wait on your make believe moratorium. Have a heart man ❤️I’ve said that I think we should ideally move a lot slower on developing powerful AI systems. I still believe that. But I think Josh’s objection is important and deserves a full airing.Approximately 150,000 people die worldwide every day. Nearly all of those deaths are, in some sense, preventable, with sufficiently advanced medical technology. Every year, five million families bury a child dead before their fifth birthday. Hundreds of millions of people live in extreme poverty. Billions more have far too little money to achieve their dreams and grow into their full potential. Tens of billions of animals are tortured on factory farms.Scientific research and economic progress could make an enormous difference to all these problems. Medical research could cure diseases. Economic progress could make food, shelter, medicine, entertainment and luxury goods accessible to people who can't afford it today. Progress in meat alternatives could allow us to shut down factory farms.There are tens of thousands of scientists, engineers, and policymakers working on fixing these kinds of problems — working on developing vaccines and antivirals, understanding and arresting aging, treating cancer, building cheaper and cleaner energy sources, developing better crops and homes and forms of transportation. But there are only so many people working on each problem. In each field, there are dozens of useful, interesting subproblems that no one is working on, because there aren’t enough people to do the work.If we could train AI systems powerful enough to automate everything these scientists and engineers do, they could help.As Tom discussed in a previous post, once we develop AI that does AI research as well as a human expert, it might not be long before we have AI that is way beyond human experts in all domains. That is, AI which is way better than the best humans at all aspects of medical research: thinking of new ideas, designing experiments to test those ideas, building new technologies, and navigating bureaucracies.This means that rather than tens of thousands of top biomedical researchers, we could have hundreds of millions of significantly superhuman biomedical researchers.[1]That’s more than a thousand times as much effort going into tackling humanity’s biggest killers. If you thought we might be able to cure cancer in 2200, then I think you ought to expect there’s a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.[2]All this may be a massive underestimate. This envisions a world that’s pretty much like ours except that extraordinary talent is no longer scarce. But that feels, in some senses, like thinking about the advent of electricity purely in terms of ‘torchlight will no longer be scarce’. Electricity did make it very cheap to light our homes at night. But it also enabled vacuum cleaners, washing machines, cars, smartphones, airplanes, video recording, Twitter — entirely new things, not just cheaper access to thi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The costs of caution, published by Kelsey Piper on May 1, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.If you thought we might be able to cure cancer in 2200, then I think you ought to expect there’s a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.Josh Cason on Twitter raised an objection to recent calls for a moratorium on AI development:April 2, 2023Or raise your hand if you or someone you love has a terminal illness, believes Ai has a chance at accelerating medical work exponentially, and doesn't have til Christmas, to wait on your make believe moratorium. Have a heart man ❤️I’ve said that I think we should ideally move a lot slower on developing powerful AI systems. I still believe that. But I think Josh’s objection is important and deserves a full airing.Approximately 150,000 people die worldwide every day. Nearly all of those deaths are, in some sense, preventable, with sufficiently advanced medical technology. Every year, five million families bury a child dead before their fifth birthday. Hundreds of millions of people live in extreme poverty. Billions more have far too little money to achieve their dreams and grow into their full potential. Tens of billions of animals are tortured on factory farms.Scientific research and economic progress could make an enormous difference to all these problems. Medical research could cure diseases. Economic progress could make food, shelter, medicine, entertainment and luxury goods accessible to people who can't afford it today. Progress in meat alternatives could allow us to shut down factory farms.There are tens of thousands of scientists, engineers, and policymakers working on fixing these kinds of problems — working on developing vaccines and antivirals, understanding and arresting aging, treating cancer, building cheaper and cleaner energy sources, developing better crops and homes and forms of transportation. But there are only so many people working on each problem. In each field, there are dozens of useful, interesting subproblems that no one is working on, because there aren’t enough people to do the work.If we could train AI systems powerful enough to automate everything these scientists and engineers do, they could help.As Tom discussed in a previous post, once we develop AI that does AI research as well as a human expert, it might not be long before we have AI that is way beyond human experts in all domains. That is, AI which is way better than the best humans at all aspects of medical research: thinking of new ideas, designing experiments to test those ideas, building new technologies, and navigating bureaucracies.This means that rather than tens of thousands of top biomedical researchers, we could have hundreds of millions of significantly superhuman biomedical researchers.[1]That’s more than a thousand times as much effort going into tackling humanity’s biggest killers. If you thought we might be able to cure cancer in 2200, then I think you ought to expect there’s a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.[2]All this may be a massive underestimate. This envisions a world that’s pretty much like ours except that extraordinary talent is no longer scarce. But that feels, in some senses, like thinking about the advent of electricity purely in terms of ‘torchlight will no longer be scarce’. Electricity did make it very cheap to light our homes at night. But it also enabled vacuum cleaners, washing machines, cars, smartphones, airplanes, video recording, Twitter — entirely new things, not just cheaper access to thi...]]>
Kelsey Piper https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:04 None full 5791
WLok4YuJ4kfFpDRTi_NL_EA_EA EA - First clean water, now clean air by finm Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: First clean water, now clean air, published by finm on April 30, 2023 on The Effective Altruism Forum.The excellent report from Rethink Priorities was my main source for this. Many of the substantial points I make are taken from it, though errors are my own. It’s worth reading! The authors are Gavriel Kleinwaks, Alastair Fraser-Urquhart, Jam Kraprayoon, and Josh Morrison.Clean waterIn the mid 19th century, London had a sewage problem. It relied on a patchwork of a few hundred sewers, of brick and wood, and hundreds of thousands of cesspits. The Thames — Londoners’ main source of drinking water — was near-opaque with waste. Here is Michael Faraday in an 1855 letter to The Times:Near the bridges the feculence rolled up in clouds so dense that they were visible at the surface even in water of this kind [.] The smell was very bad, and common to the whole of the water. It was the same as that which now comes up from the gully holes in the streets. The whole river was for the time a real sewer [.] If we neglect this subject, we cannot expect to do so with impunity; nor ought we to be surprised if, ere many years are over, a season give us sad proof of the folly of our carelessness.That “sad proof” arrived more than once. London saw around three outbreaks of cholera, killing upwards of 50,000 people in each outbreak.But early efforts to address the public health crisis were guided by the wrong theory about how diseases spread. On the prevailing view, epidemics were caused by ‘miasma’ (bad air) — a kind of poisonous mist from decomposing matter. Parliament commissioned a report on the ‘Sanitary Condition of the Labouring Population’, which showed a clear link between poverty and disease, and recommended a bunch of excellent and historically significant reforms. But one recommendation backfired because of this scientific misunderstanding: according to the miasma theory, it made sense to remove human waste through wastewater — but that water flowed into the Thames and contaminated it further.But in one of these outbreaks, the physician John Snow has spotted how incidence of cholera clustered around a single water pump in Soho, suggesting that unclean water was the major source of the outbreak. A few years later, the experiments of Louis Pasteur helped foster the germ theory of disease, sharpening the understanding of how and why to treat drinking water for public health. These were well-timed discoveriesBecause soon things got even worse. Heat exacerbated the smell; and the summer of 1858 was unusually hot. 1858 was the year of London’s ‘Great Stink’, and the Thames “a Stygian pool, reeking with ineffable and intolerable horrors” in Prime Minister Disraeli’s words. The problem had become totally unignorable.Parliament turned to Joseph Bazalgette, chief engineer of London’s Metropolitan Board of Works. Spurred by the Great Stink, he was given licence to oversee the construction of an ambitious plan to rebuild London’s sewage system, to his own design. 1,800km of street sewers would feed into 132km of main interconnecting sewers. A network of pumping stations was built, to lift sewage from streets below the high water mark. 18 years later, the result was the kind of modern sewage system we mostly take for granted: a system to collect wastewater and dump it far from where it could contaminate food and drinking water; in this case a dozen miles eastwards to the Thames estuary. "The great sewer that runs beneath Londoners”, wrote Bazalgette’s obituarist, “has added some 20 years to their chance of life”.Remarkably, most of the system remains in use. London’s sewage system has obviously been expanded, and wastewater treatment is much better. Bazalgette’s plan was built to last, and succeeded.As London built ways of expelling wastewater, it also built ways of channelling c...]]>
finm https://forum.effectivealtruism.org/posts/WLok4YuJ4kfFpDRTi/first-clean-water-now-clean-air Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: First clean water, now clean air, published by finm on April 30, 2023 on The Effective Altruism Forum.The excellent report from Rethink Priorities was my main source for this. Many of the substantial points I make are taken from it, though errors are my own. It’s worth reading! The authors are Gavriel Kleinwaks, Alastair Fraser-Urquhart, Jam Kraprayoon, and Josh Morrison.Clean waterIn the mid 19th century, London had a sewage problem. It relied on a patchwork of a few hundred sewers, of brick and wood, and hundreds of thousands of cesspits. The Thames — Londoners’ main source of drinking water — was near-opaque with waste. Here is Michael Faraday in an 1855 letter to The Times:Near the bridges the feculence rolled up in clouds so dense that they were visible at the surface even in water of this kind [.] The smell was very bad, and common to the whole of the water. It was the same as that which now comes up from the gully holes in the streets. The whole river was for the time a real sewer [.] If we neglect this subject, we cannot expect to do so with impunity; nor ought we to be surprised if, ere many years are over, a season give us sad proof of the folly of our carelessness.That “sad proof” arrived more than once. London saw around three outbreaks of cholera, killing upwards of 50,000 people in each outbreak.But early efforts to address the public health crisis were guided by the wrong theory about how diseases spread. On the prevailing view, epidemics were caused by ‘miasma’ (bad air) — a kind of poisonous mist from decomposing matter. Parliament commissioned a report on the ‘Sanitary Condition of the Labouring Population’, which showed a clear link between poverty and disease, and recommended a bunch of excellent and historically significant reforms. But one recommendation backfired because of this scientific misunderstanding: according to the miasma theory, it made sense to remove human waste through wastewater — but that water flowed into the Thames and contaminated it further.But in one of these outbreaks, the physician John Snow has spotted how incidence of cholera clustered around a single water pump in Soho, suggesting that unclean water was the major source of the outbreak. A few years later, the experiments of Louis Pasteur helped foster the germ theory of disease, sharpening the understanding of how and why to treat drinking water for public health. These were well-timed discoveriesBecause soon things got even worse. Heat exacerbated the smell; and the summer of 1858 was unusually hot. 1858 was the year of London’s ‘Great Stink’, and the Thames “a Stygian pool, reeking with ineffable and intolerable horrors” in Prime Minister Disraeli’s words. The problem had become totally unignorable.Parliament turned to Joseph Bazalgette, chief engineer of London’s Metropolitan Board of Works. Spurred by the Great Stink, he was given licence to oversee the construction of an ambitious plan to rebuild London’s sewage system, to his own design. 1,800km of street sewers would feed into 132km of main interconnecting sewers. A network of pumping stations was built, to lift sewage from streets below the high water mark. 18 years later, the result was the kind of modern sewage system we mostly take for granted: a system to collect wastewater and dump it far from where it could contaminate food and drinking water; in this case a dozen miles eastwards to the Thames estuary. "The great sewer that runs beneath Londoners”, wrote Bazalgette’s obituarist, “has added some 20 years to their chance of life”.Remarkably, most of the system remains in use. London’s sewage system has obviously been expanded, and wastewater treatment is much better. Bazalgette’s plan was built to last, and succeeded.As London built ways of expelling wastewater, it also built ways of channelling c...]]>
Mon, 01 May 2023 11:47:11 +0000 EA - First clean water, now clean air by finm Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: First clean water, now clean air, published by finm on April 30, 2023 on The Effective Altruism Forum.The excellent report from Rethink Priorities was my main source for this. Many of the substantial points I make are taken from it, though errors are my own. It’s worth reading! The authors are Gavriel Kleinwaks, Alastair Fraser-Urquhart, Jam Kraprayoon, and Josh Morrison.Clean waterIn the mid 19th century, London had a sewage problem. It relied on a patchwork of a few hundred sewers, of brick and wood, and hundreds of thousands of cesspits. The Thames — Londoners’ main source of drinking water — was near-opaque with waste. Here is Michael Faraday in an 1855 letter to The Times:Near the bridges the feculence rolled up in clouds so dense that they were visible at the surface even in water of this kind [.] The smell was very bad, and common to the whole of the water. It was the same as that which now comes up from the gully holes in the streets. The whole river was for the time a real sewer [.] If we neglect this subject, we cannot expect to do so with impunity; nor ought we to be surprised if, ere many years are over, a season give us sad proof of the folly of our carelessness.That “sad proof” arrived more than once. London saw around three outbreaks of cholera, killing upwards of 50,000 people in each outbreak.But early efforts to address the public health crisis were guided by the wrong theory about how diseases spread. On the prevailing view, epidemics were caused by ‘miasma’ (bad air) — a kind of poisonous mist from decomposing matter. Parliament commissioned a report on the ‘Sanitary Condition of the Labouring Population’, which showed a clear link between poverty and disease, and recommended a bunch of excellent and historically significant reforms. But one recommendation backfired because of this scientific misunderstanding: according to the miasma theory, it made sense to remove human waste through wastewater — but that water flowed into the Thames and contaminated it further.But in one of these outbreaks, the physician John Snow has spotted how incidence of cholera clustered around a single water pump in Soho, suggesting that unclean water was the major source of the outbreak. A few years later, the experiments of Louis Pasteur helped foster the germ theory of disease, sharpening the understanding of how and why to treat drinking water for public health. These were well-timed discoveriesBecause soon things got even worse. Heat exacerbated the smell; and the summer of 1858 was unusually hot. 1858 was the year of London’s ‘Great Stink’, and the Thames “a Stygian pool, reeking with ineffable and intolerable horrors” in Prime Minister Disraeli’s words. The problem had become totally unignorable.Parliament turned to Joseph Bazalgette, chief engineer of London’s Metropolitan Board of Works. Spurred by the Great Stink, he was given licence to oversee the construction of an ambitious plan to rebuild London’s sewage system, to his own design. 1,800km of street sewers would feed into 132km of main interconnecting sewers. A network of pumping stations was built, to lift sewage from streets below the high water mark. 18 years later, the result was the kind of modern sewage system we mostly take for granted: a system to collect wastewater and dump it far from where it could contaminate food and drinking water; in this case a dozen miles eastwards to the Thames estuary. "The great sewer that runs beneath Londoners”, wrote Bazalgette’s obituarist, “has added some 20 years to their chance of life”.Remarkably, most of the system remains in use. London’s sewage system has obviously been expanded, and wastewater treatment is much better. Bazalgette’s plan was built to last, and succeeded.As London built ways of expelling wastewater, it also built ways of channelling c...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: First clean water, now clean air, published by finm on April 30, 2023 on The Effective Altruism Forum.The excellent report from Rethink Priorities was my main source for this. Many of the substantial points I make are taken from it, though errors are my own. It’s worth reading! The authors are Gavriel Kleinwaks, Alastair Fraser-Urquhart, Jam Kraprayoon, and Josh Morrison.Clean waterIn the mid 19th century, London had a sewage problem. It relied on a patchwork of a few hundred sewers, of brick and wood, and hundreds of thousands of cesspits. The Thames — Londoners’ main source of drinking water — was near-opaque with waste. Here is Michael Faraday in an 1855 letter to The Times:Near the bridges the feculence rolled up in clouds so dense that they were visible at the surface even in water of this kind [.] The smell was very bad, and common to the whole of the water. It was the same as that which now comes up from the gully holes in the streets. The whole river was for the time a real sewer [.] If we neglect this subject, we cannot expect to do so with impunity; nor ought we to be surprised if, ere many years are over, a season give us sad proof of the folly of our carelessness.That “sad proof” arrived more than once. London saw around three outbreaks of cholera, killing upwards of 50,000 people in each outbreak.But early efforts to address the public health crisis were guided by the wrong theory about how diseases spread. On the prevailing view, epidemics were caused by ‘miasma’ (bad air) — a kind of poisonous mist from decomposing matter. Parliament commissioned a report on the ‘Sanitary Condition of the Labouring Population’, which showed a clear link between poverty and disease, and recommended a bunch of excellent and historically significant reforms. But one recommendation backfired because of this scientific misunderstanding: according to the miasma theory, it made sense to remove human waste through wastewater — but that water flowed into the Thames and contaminated it further.But in one of these outbreaks, the physician John Snow has spotted how incidence of cholera clustered around a single water pump in Soho, suggesting that unclean water was the major source of the outbreak. A few years later, the experiments of Louis Pasteur helped foster the germ theory of disease, sharpening the understanding of how and why to treat drinking water for public health. These were well-timed discoveriesBecause soon things got even worse. Heat exacerbated the smell; and the summer of 1858 was unusually hot. 1858 was the year of London’s ‘Great Stink’, and the Thames “a Stygian pool, reeking with ineffable and intolerable horrors” in Prime Minister Disraeli’s words. The problem had become totally unignorable.Parliament turned to Joseph Bazalgette, chief engineer of London’s Metropolitan Board of Works. Spurred by the Great Stink, he was given licence to oversee the construction of an ambitious plan to rebuild London’s sewage system, to his own design. 1,800km of street sewers would feed into 132km of main interconnecting sewers. A network of pumping stations was built, to lift sewage from streets below the high water mark. 18 years later, the result was the kind of modern sewage system we mostly take for granted: a system to collect wastewater and dump it far from where it could contaminate food and drinking water; in this case a dozen miles eastwards to the Thames estuary. "The great sewer that runs beneath Londoners”, wrote Bazalgette’s obituarist, “has added some 20 years to their chance of life”.Remarkably, most of the system remains in use. London’s sewage system has obviously been expanded, and wastewater treatment is much better. Bazalgette’s plan was built to last, and succeeded.As London built ways of expelling wastewater, it also built ways of channelling c...]]>
finm https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 32:37 None full 5787
eaLwfhXbw2kNxA4es_NL_EA_EA EA - Bridging EA's Gender Gap: Input From 60 People by Alexandra Bos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bridging EA's Gender Gap: Input From 60 People, published by Alexandra Bos on April 30, 2023 on The Effective Altruism Forum.TLDRWe hosted a session at EAGxRotterdam during which 60 participants discussed potential reasons why there are fewer women in EA and how this could be improved. The main categories of solutions people came up with were (1) adjusting outreach strategies, (2) putting women in more visible positions, (3) making EA’s atmosphere more female-friendly, (4) pursuing strategies to empower women in EA, and (5) adjusting certain attributes of EA thought. The goal of this post is to facilitate a solution-oriented discussion within the EA community so we can make tangible progress on its currently skewed gender ratio and underlying problems.Some notes before we start:Whether gender diversity is something to strive for is beyond this discussion. We will simply assume that it is and go from there. You could for example check out these posts (1, 2, 3) for a discussion on (gender) diversity if you want to read about this or discuss it.To keep the scope of this post and the session we hosted manageable, we focused on women specifically. However, we do not claim gender is binary and acknowledge that to increase diversity there are many more groups to focus on than just women (such as POC or other minorities).The views we describe in this post don’t necessarily correspond with our (Veerle Bakker's & Alexandra Bos') own but rather we are describing others’ input.Please view this post as crowdsourcing hypotheses from community members as a starting point for further discussion rather than as presenting The Hard Truth. You can also view posts such as these (1, 2, 3) for additional views on EA’s gender gap.EA & Its Gender GapIt is no secret that more men than women are involved with the EA community currently. In the last EA survey (2020), only 26.9% of respondents identified as female. This is similar to the 2019 survey.Graph source: EA Survey 2020.The goal of this post is to get a solution-oriented discussion started within the wider EA community to take steps towards tangible change. We aim to do this by sharing the insights from a discussion session at EAGxRotterdam in November 2022 titled "Discussion: how to engage more women with EA". In this post, we will be going through the different problems the EAGx’ers suspected may underlie the gender gap. Each problem will be paired with the potential solutions they proposed.MethodologyThis post summarises and categorises the insights from group discussions from a workshop at EAGxRotterdam. Around 60 people attended this session, approximately 40 of whom were women. In groups of 5, participants brainstormed on both 1) what may be the reasons for the relatively low number of women in EA (focusing on the causes, 15 mins), and 2) in what ways the EA community could attract more women to balance this out (focusing on solutions, 15 mins). The discussions were based on these prompts. We asked the groups to take notes on paper during their discussions so that this could be turned into this forum post. We informed them of this in advance. If you want to take a deep dive and look at the source materials, you are welcome to take a look at the participants’ written discussion notes.LimitationsThis project has some considerable limitations. First of all, the groups’ ideas are based on short brainstorming sessions, so they are not substantiated with research or confirmed in other ways. It is also worth mentioning that not all attendees had a lot of experience with the EA community - some only knew about EA for a couple of weeks or months. Furthermore, a considerable amount of information may have gotten lost in translation because it was transferred through hasty discussion notes. Additionally, the information was the...]]>
Alexandra Bos https://forum.effectivealtruism.org/posts/eaLwfhXbw2kNxA4es/bridging-ea-s-gender-gap-input-from-60-people Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bridging EA's Gender Gap: Input From 60 People, published by Alexandra Bos on April 30, 2023 on The Effective Altruism Forum.TLDRWe hosted a session at EAGxRotterdam during which 60 participants discussed potential reasons why there are fewer women in EA and how this could be improved. The main categories of solutions people came up with were (1) adjusting outreach strategies, (2) putting women in more visible positions, (3) making EA’s atmosphere more female-friendly, (4) pursuing strategies to empower women in EA, and (5) adjusting certain attributes of EA thought. The goal of this post is to facilitate a solution-oriented discussion within the EA community so we can make tangible progress on its currently skewed gender ratio and underlying problems.Some notes before we start:Whether gender diversity is something to strive for is beyond this discussion. We will simply assume that it is and go from there. You could for example check out these posts (1, 2, 3) for a discussion on (gender) diversity if you want to read about this or discuss it.To keep the scope of this post and the session we hosted manageable, we focused on women specifically. However, we do not claim gender is binary and acknowledge that to increase diversity there are many more groups to focus on than just women (such as POC or other minorities).The views we describe in this post don’t necessarily correspond with our (Veerle Bakker's & Alexandra Bos') own but rather we are describing others’ input.Please view this post as crowdsourcing hypotheses from community members as a starting point for further discussion rather than as presenting The Hard Truth. You can also view posts such as these (1, 2, 3) for additional views on EA’s gender gap.EA & Its Gender GapIt is no secret that more men than women are involved with the EA community currently. In the last EA survey (2020), only 26.9% of respondents identified as female. This is similar to the 2019 survey.Graph source: EA Survey 2020.The goal of this post is to get a solution-oriented discussion started within the wider EA community to take steps towards tangible change. We aim to do this by sharing the insights from a discussion session at EAGxRotterdam in November 2022 titled "Discussion: how to engage more women with EA". In this post, we will be going through the different problems the EAGx’ers suspected may underlie the gender gap. Each problem will be paired with the potential solutions they proposed.MethodologyThis post summarises and categorises the insights from group discussions from a workshop at EAGxRotterdam. Around 60 people attended this session, approximately 40 of whom were women. In groups of 5, participants brainstormed on both 1) what may be the reasons for the relatively low number of women in EA (focusing on the causes, 15 mins), and 2) in what ways the EA community could attract more women to balance this out (focusing on solutions, 15 mins). The discussions were based on these prompts. We asked the groups to take notes on paper during their discussions so that this could be turned into this forum post. We informed them of this in advance. If you want to take a deep dive and look at the source materials, you are welcome to take a look at the participants’ written discussion notes.LimitationsThis project has some considerable limitations. First of all, the groups’ ideas are based on short brainstorming sessions, so they are not substantiated with research or confirmed in other ways. It is also worth mentioning that not all attendees had a lot of experience with the EA community - some only knew about EA for a couple of weeks or months. Furthermore, a considerable amount of information may have gotten lost in translation because it was transferred through hasty discussion notes. Additionally, the information was the...]]>
Mon, 01 May 2023 11:32:17 +0000 EA - Bridging EA's Gender Gap: Input From 60 People by Alexandra Bos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bridging EA's Gender Gap: Input From 60 People, published by Alexandra Bos on April 30, 2023 on The Effective Altruism Forum.TLDRWe hosted a session at EAGxRotterdam during which 60 participants discussed potential reasons why there are fewer women in EA and how this could be improved. The main categories of solutions people came up with were (1) adjusting outreach strategies, (2) putting women in more visible positions, (3) making EA’s atmosphere more female-friendly, (4) pursuing strategies to empower women in EA, and (5) adjusting certain attributes of EA thought. The goal of this post is to facilitate a solution-oriented discussion within the EA community so we can make tangible progress on its currently skewed gender ratio and underlying problems.Some notes before we start:Whether gender diversity is something to strive for is beyond this discussion. We will simply assume that it is and go from there. You could for example check out these posts (1, 2, 3) for a discussion on (gender) diversity if you want to read about this or discuss it.To keep the scope of this post and the session we hosted manageable, we focused on women specifically. However, we do not claim gender is binary and acknowledge that to increase diversity there are many more groups to focus on than just women (such as POC or other minorities).The views we describe in this post don’t necessarily correspond with our (Veerle Bakker's & Alexandra Bos') own but rather we are describing others’ input.Please view this post as crowdsourcing hypotheses from community members as a starting point for further discussion rather than as presenting The Hard Truth. You can also view posts such as these (1, 2, 3) for additional views on EA’s gender gap.EA & Its Gender GapIt is no secret that more men than women are involved with the EA community currently. In the last EA survey (2020), only 26.9% of respondents identified as female. This is similar to the 2019 survey.Graph source: EA Survey 2020.The goal of this post is to get a solution-oriented discussion started within the wider EA community to take steps towards tangible change. We aim to do this by sharing the insights from a discussion session at EAGxRotterdam in November 2022 titled "Discussion: how to engage more women with EA". In this post, we will be going through the different problems the EAGx’ers suspected may underlie the gender gap. Each problem will be paired with the potential solutions they proposed.MethodologyThis post summarises and categorises the insights from group discussions from a workshop at EAGxRotterdam. Around 60 people attended this session, approximately 40 of whom were women. In groups of 5, participants brainstormed on both 1) what may be the reasons for the relatively low number of women in EA (focusing on the causes, 15 mins), and 2) in what ways the EA community could attract more women to balance this out (focusing on solutions, 15 mins). The discussions were based on these prompts. We asked the groups to take notes on paper during their discussions so that this could be turned into this forum post. We informed them of this in advance. If you want to take a deep dive and look at the source materials, you are welcome to take a look at the participants’ written discussion notes.LimitationsThis project has some considerable limitations. First of all, the groups’ ideas are based on short brainstorming sessions, so they are not substantiated with research or confirmed in other ways. It is also worth mentioning that not all attendees had a lot of experience with the EA community - some only knew about EA for a couple of weeks or months. Furthermore, a considerable amount of information may have gotten lost in translation because it was transferred through hasty discussion notes. Additionally, the information was the...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bridging EA's Gender Gap: Input From 60 People, published by Alexandra Bos on April 30, 2023 on The Effective Altruism Forum.TLDRWe hosted a session at EAGxRotterdam during which 60 participants discussed potential reasons why there are fewer women in EA and how this could be improved. The main categories of solutions people came up with were (1) adjusting outreach strategies, (2) putting women in more visible positions, (3) making EA’s atmosphere more female-friendly, (4) pursuing strategies to empower women in EA, and (5) adjusting certain attributes of EA thought. The goal of this post is to facilitate a solution-oriented discussion within the EA community so we can make tangible progress on its currently skewed gender ratio and underlying problems.Some notes before we start:Whether gender diversity is something to strive for is beyond this discussion. We will simply assume that it is and go from there. You could for example check out these posts (1, 2, 3) for a discussion on (gender) diversity if you want to read about this or discuss it.To keep the scope of this post and the session we hosted manageable, we focused on women specifically. However, we do not claim gender is binary and acknowledge that to increase diversity there are many more groups to focus on than just women (such as POC or other minorities).The views we describe in this post don’t necessarily correspond with our (Veerle Bakker's & Alexandra Bos') own but rather we are describing others’ input.Please view this post as crowdsourcing hypotheses from community members as a starting point for further discussion rather than as presenting The Hard Truth. You can also view posts such as these (1, 2, 3) for additional views on EA’s gender gap.EA & Its Gender GapIt is no secret that more men than women are involved with the EA community currently. In the last EA survey (2020), only 26.9% of respondents identified as female. This is similar to the 2019 survey.Graph source: EA Survey 2020.The goal of this post is to get a solution-oriented discussion started within the wider EA community to take steps towards tangible change. We aim to do this by sharing the insights from a discussion session at EAGxRotterdam in November 2022 titled "Discussion: how to engage more women with EA". In this post, we will be going through the different problems the EAGx’ers suspected may underlie the gender gap. Each problem will be paired with the potential solutions they proposed.MethodologyThis post summarises and categorises the insights from group discussions from a workshop at EAGxRotterdam. Around 60 people attended this session, approximately 40 of whom were women. In groups of 5, participants brainstormed on both 1) what may be the reasons for the relatively low number of women in EA (focusing on the causes, 15 mins), and 2) in what ways the EA community could attract more women to balance this out (focusing on solutions, 15 mins). The discussions were based on these prompts. We asked the groups to take notes on paper during their discussions so that this could be turned into this forum post. We informed them of this in advance. If you want to take a deep dive and look at the source materials, you are welcome to take a look at the participants’ written discussion notes.LimitationsThis project has some considerable limitations. First of all, the groups’ ideas are based on short brainstorming sessions, so they are not substantiated with research or confirmed in other ways. It is also worth mentioning that not all attendees had a lot of experience with the EA community - some only knew about EA for a couple of weeks or months. Furthermore, a considerable amount of information may have gotten lost in translation because it was transferred through hasty discussion notes. Additionally, the information was the...]]>
Alexandra Bos https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:11 None full 5788
x2vELt7iwaZebHBEn_NL_EA_EA EA - More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios, published by Vasco Grilo on April 29, 2023 on The Effective Altruism Forum.Disclaimer: this is not a project from Alliance to Feed the Earth in Disasters (ALLFED).SummaryGlobal warming increases the risk from climate change. This “has the potential to result in—and to some extent is already resulting in—increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss”.However, I think global warming also decreases the risk from food shocks caused by abrupt sunlight reduction scenarios (ASRSs), which can be a nuclear winter, volcanic winter, or impact winter. In essence, because low temperature is a major driver for the decrease in crop yields that can lead to widespread starvation (see Xia 2022, and this post from Luisa Rodriguez).Factoring in both of the above, my best guess is that additional emissions of greenhouse gases (GHGs) are beneficial up to an optimal median global warming in 2100 relative to 1880 of 3.3 ºC, after which the increase in the risk from climate change outweighs the reduction in that from ASRSs. This suggests delaying decarbonisation is good at the margin if one trusts (on top of my assumptions!):Metaculus’ community median prediction of 2.41 ºC.Climate Action Tracker’s projections of 2.6 to 2.9 ºC for current policies and action.Nevertheless, I am not confident the above conclusion is resilient. My sensitivity analysis indicates the optimal median global warming can range from 0.1 to 4.3 ºC. So the takeaway for me is that we do not really know whether additional GHG emissions are good/bad.In any case, it looks like the effect of global warming on the risk from ASRSs is a crucial consideration, and therefore it must be investigated, especially because it is very neglected. Another potentially crucial consideration is that an energy system which relies more on renewables, and less on fossil fuels is less resilient to ASRSs.Robustly good actions would be:Improving civilisation resilience.Prioritising the risk from nuclear war over that from climate change (at the margin).Keeping options open by:Not massively decreasing/increasing GHG emissions.Researching cost-effective ways to decrease/increase GHG emissions.Learning more about the risks posed by ASRSs and climate change.IntroductionIn the sense that matters most for effective altruism, climate change refers to large-scale shifts in weather patterns that result from emissions of greenhouse gases such as carbon dioxide and methane largely from fossil fuel consumption. Climate change has the potential to result in—and to some extent is already resulting in—increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss.In What We Owe to the Future (WWOF), William MacAskill argues “decarbonisation [decreasing GHG emissions] is a proof of concept for longtermism”, describing it as a “win-win-win-win-win”. In addition to (supposedly) improving the longterm future:“Moving to clean energy has enormous benefits in terms of present-day human health. Burning fossil fuels pollutes the air with small particles that cause lung cancer, heart disease, and respiratory infections”.“By making energy cheaper [in the long run], clean energy innovation improves living standards in poorer countries”.“By helping keep fossil fuels in the ground, it guards against the risk of unrecovered collapse”.“By furthering technological progress, it reduces the risk of longterm stagnation”.I agree decarbonisation will eventually be beneficial, but I am not sure decreasing GHG emissions is good at the margin now. As I said in my hot takes on counterproductive altruism:Mitigating global warming dec...]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/x2vELt7iwaZebHBEn/more-global-warming-might-be-good-to-mitigate-the-food Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios, published by Vasco Grilo on April 29, 2023 on The Effective Altruism Forum.Disclaimer: this is not a project from Alliance to Feed the Earth in Disasters (ALLFED).SummaryGlobal warming increases the risk from climate change. This “has the potential to result in—and to some extent is already resulting in—increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss”.However, I think global warming also decreases the risk from food shocks caused by abrupt sunlight reduction scenarios (ASRSs), which can be a nuclear winter, volcanic winter, or impact winter. In essence, because low temperature is a major driver for the decrease in crop yields that can lead to widespread starvation (see Xia 2022, and this post from Luisa Rodriguez).Factoring in both of the above, my best guess is that additional emissions of greenhouse gases (GHGs) are beneficial up to an optimal median global warming in 2100 relative to 1880 of 3.3 ºC, after which the increase in the risk from climate change outweighs the reduction in that from ASRSs. This suggests delaying decarbonisation is good at the margin if one trusts (on top of my assumptions!):Metaculus’ community median prediction of 2.41 ºC.Climate Action Tracker’s projections of 2.6 to 2.9 ºC for current policies and action.Nevertheless, I am not confident the above conclusion is resilient. My sensitivity analysis indicates the optimal median global warming can range from 0.1 to 4.3 ºC. So the takeaway for me is that we do not really know whether additional GHG emissions are good/bad.In any case, it looks like the effect of global warming on the risk from ASRSs is a crucial consideration, and therefore it must be investigated, especially because it is very neglected. Another potentially crucial consideration is that an energy system which relies more on renewables, and less on fossil fuels is less resilient to ASRSs.Robustly good actions would be:Improving civilisation resilience.Prioritising the risk from nuclear war over that from climate change (at the margin).Keeping options open by:Not massively decreasing/increasing GHG emissions.Researching cost-effective ways to decrease/increase GHG emissions.Learning more about the risks posed by ASRSs and climate change.IntroductionIn the sense that matters most for effective altruism, climate change refers to large-scale shifts in weather patterns that result from emissions of greenhouse gases such as carbon dioxide and methane largely from fossil fuel consumption. Climate change has the potential to result in—and to some extent is already resulting in—increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss.In What We Owe to the Future (WWOF), William MacAskill argues “decarbonisation [decreasing GHG emissions] is a proof of concept for longtermism”, describing it as a “win-win-win-win-win”. In addition to (supposedly) improving the longterm future:“Moving to clean energy has enormous benefits in terms of present-day human health. Burning fossil fuels pollutes the air with small particles that cause lung cancer, heart disease, and respiratory infections”.“By making energy cheaper [in the long run], clean energy innovation improves living standards in poorer countries”.“By helping keep fossil fuels in the ground, it guards against the risk of unrecovered collapse”.“By furthering technological progress, it reduces the risk of longterm stagnation”.I agree decarbonisation will eventually be beneficial, but I am not sure decreasing GHG emissions is good at the margin now. As I said in my hot takes on counterproductive altruism:Mitigating global warming dec...]]>
Mon, 01 May 2023 09:11:29 +0000 EA - More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios, published by Vasco Grilo on April 29, 2023 on The Effective Altruism Forum.Disclaimer: this is not a project from Alliance to Feed the Earth in Disasters (ALLFED).SummaryGlobal warming increases the risk from climate change. This “has the potential to result in—and to some extent is already resulting in—increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss”.However, I think global warming also decreases the risk from food shocks caused by abrupt sunlight reduction scenarios (ASRSs), which can be a nuclear winter, volcanic winter, or impact winter. In essence, because low temperature is a major driver for the decrease in crop yields that can lead to widespread starvation (see Xia 2022, and this post from Luisa Rodriguez).Factoring in both of the above, my best guess is that additional emissions of greenhouse gases (GHGs) are beneficial up to an optimal median global warming in 2100 relative to 1880 of 3.3 ºC, after which the increase in the risk from climate change outweighs the reduction in that from ASRSs. This suggests delaying decarbonisation is good at the margin if one trusts (on top of my assumptions!):Metaculus’ community median prediction of 2.41 ºC.Climate Action Tracker’s projections of 2.6 to 2.9 ºC for current policies and action.Nevertheless, I am not confident the above conclusion is resilient. My sensitivity analysis indicates the optimal median global warming can range from 0.1 to 4.3 ºC. So the takeaway for me is that we do not really know whether additional GHG emissions are good/bad.In any case, it looks like the effect of global warming on the risk from ASRSs is a crucial consideration, and therefore it must be investigated, especially because it is very neglected. Another potentially crucial consideration is that an energy system which relies more on renewables, and less on fossil fuels is less resilient to ASRSs.Robustly good actions would be:Improving civilisation resilience.Prioritising the risk from nuclear war over that from climate change (at the margin).Keeping options open by:Not massively decreasing/increasing GHG emissions.Researching cost-effective ways to decrease/increase GHG emissions.Learning more about the risks posed by ASRSs and climate change.IntroductionIn the sense that matters most for effective altruism, climate change refers to large-scale shifts in weather patterns that result from emissions of greenhouse gases such as carbon dioxide and methane largely from fossil fuel consumption. Climate change has the potential to result in—and to some extent is already resulting in—increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss.In What We Owe to the Future (WWOF), William MacAskill argues “decarbonisation [decreasing GHG emissions] is a proof of concept for longtermism”, describing it as a “win-win-win-win-win”. In addition to (supposedly) improving the longterm future:“Moving to clean energy has enormous benefits in terms of present-day human health. Burning fossil fuels pollutes the air with small particles that cause lung cancer, heart disease, and respiratory infections”.“By making energy cheaper [in the long run], clean energy innovation improves living standards in poorer countries”.“By helping keep fossil fuels in the ground, it guards against the risk of unrecovered collapse”.“By furthering technological progress, it reduces the risk of longterm stagnation”.I agree decarbonisation will eventually be beneficial, but I am not sure decreasing GHG emissions is good at the margin now. As I said in my hot takes on counterproductive altruism:Mitigating global warming dec...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios, published by Vasco Grilo on April 29, 2023 on The Effective Altruism Forum.Disclaimer: this is not a project from Alliance to Feed the Earth in Disasters (ALLFED).SummaryGlobal warming increases the risk from climate change. This “has the potential to result in—and to some extent is already resulting in—increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss”.However, I think global warming also decreases the risk from food shocks caused by abrupt sunlight reduction scenarios (ASRSs), which can be a nuclear winter, volcanic winter, or impact winter. In essence, because low temperature is a major driver for the decrease in crop yields that can lead to widespread starvation (see Xia 2022, and this post from Luisa Rodriguez).Factoring in both of the above, my best guess is that additional emissions of greenhouse gases (GHGs) are beneficial up to an optimal median global warming in 2100 relative to 1880 of 3.3 ºC, after which the increase in the risk from climate change outweighs the reduction in that from ASRSs. This suggests delaying decarbonisation is good at the margin if one trusts (on top of my assumptions!):Metaculus’ community median prediction of 2.41 ºC.Climate Action Tracker’s projections of 2.6 to 2.9 ºC for current policies and action.Nevertheless, I am not confident the above conclusion is resilient. My sensitivity analysis indicates the optimal median global warming can range from 0.1 to 4.3 ºC. So the takeaway for me is that we do not really know whether additional GHG emissions are good/bad.In any case, it looks like the effect of global warming on the risk from ASRSs is a crucial consideration, and therefore it must be investigated, especially because it is very neglected. Another potentially crucial consideration is that an energy system which relies more on renewables, and less on fossil fuels is less resilient to ASRSs.Robustly good actions would be:Improving civilisation resilience.Prioritising the risk from nuclear war over that from climate change (at the margin).Keeping options open by:Not massively decreasing/increasing GHG emissions.Researching cost-effective ways to decrease/increase GHG emissions.Learning more about the risks posed by ASRSs and climate change.IntroductionIn the sense that matters most for effective altruism, climate change refers to large-scale shifts in weather patterns that result from emissions of greenhouse gases such as carbon dioxide and methane largely from fossil fuel consumption. Climate change has the potential to result in—and to some extent is already resulting in—increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss.In What We Owe to the Future (WWOF), William MacAskill argues “decarbonisation [decreasing GHG emissions] is a proof of concept for longtermism”, describing it as a “win-win-win-win-win”. In addition to (supposedly) improving the longterm future:“Moving to clean energy has enormous benefits in terms of present-day human health. Burning fossil fuels pollutes the air with small particles that cause lung cancer, heart disease, and respiratory infections”.“By making energy cheaper [in the long run], clean energy innovation improves living standards in poorer countries”.“By helping keep fossil fuels in the ground, it guards against the risk of unrecovered collapse”.“By furthering technological progress, it reduces the risk of longterm stagnation”.I agree decarbonisation will eventually be beneficial, but I am not sure decreasing GHG emissions is good at the margin now. As I said in my hot takes on counterproductive altruism:Mitigating global warming dec...]]>
Vasco Grilo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 25:38 None full 5786
eibgQcbRXtW7tukfv_NL_EA_EA EA - Discussion about AI Safety funding (FB transcript) by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Discussion about AI Safety funding (FB transcript), published by Akash on April 30, 2023 on The Effective Altruism Forum.Kat Woods recently wrote a Facebook post about Nonlinear's new funding program.This led to a discussion (in the comments section) about funding norms, the current funding bar, concerns about lowering the bar, and concerns about the current (relatively centralized) funding situation.I'm posting a few of the comments below. I'm hoping this might promote more discussion about the funding landscape. Such discussion could be especially valuable right now, given that:Many people are starting to get interested in AI safety (including people who are not from the EA/rationalist communities)AGI timeline estimates have generally shortenedInvestment in overall AI development is increasing quicklyThere may be opportunities to spend large amounts of money in the upcoming year (e.g., scalable career transition grant programs, regranting programs, 2024 US elections, AI governance/policy infrastructure, public campaigns for AI safety).Many ideas with high potential upside also have noteworthy downside risks (phrased less vaguely, I think that among governance/policy/comms projects that have high potential upside, >50% also have non-trivial downside risks).We might see pretty big changes in the funding landscape over the next 6-24 monthsNew funders appear to be getting interested in AI safetyGovernments are getting interested in AI safetyMajor tech companies may decide to invest more resources into AI safetySelected comments from FB threadNote: I've made some editorial decisions to keep this post relatively short. Bolding is added by me. See the full thread here. Also, as usual, statements from individuals don't necessarily reflect the views of their employers.Kat Woods (Nonlinear)I often talk to dejected people who say they tried to get EA funding and were rejectedAnd what I want to do is to give them a rousing speech about how being rejected by one funder doesn't mean that their idea is bad or that their personal qualities are bad.The evaluation process is noisy. Even the best funders make mistakes. They might just have a different world model or value system than you. They might have been hangry while reading your application.That to succeed, you'll have to ask a ton of people, and get a ton of rejections, but that's OK, because you only need a handful of yeses.(Kat then describes the new funding program from Nonlinear. TLDR: People submit an application that can then be reviewed by a network of 50+ funders.)Claire Zabel (Program Officer at Open Philanthropy)Claire's comment:(Claire quoting Kat:) The evaluation process is noisy. Even the best funders make mistakes. They might just have a different world model or value system than you. They might have been hangry while reading your application.(Claire's response): That's true. It's also possible the project they are applying for is harmful, but if they apply to enough funders, eventually someone will fund the harmful project (unilateralist's curse). In my experience as a grantmaker, a substantial fraction (though certainly very far from all) rejected applications in the longtermist space seem harmful in expectation, not just "not cost-effective enough"Selected portions of Kat's response to Claire:1. We’re probably going to be setting up channels where funders can discuss applicants. This way if there are concerns about net negativity, other funders considering it can see that. This might even lead to less unilateralist curse because if lots of funders think that the idea is net negative, others will be able to see that, instead of the status quo, where it’s hard to know what other funders think of an application.2. All these donors were giving anyways, with all the possibilities of the u...]]>
Akash https://forum.effectivealtruism.org/posts/eibgQcbRXtW7tukfv/discussion-about-ai-safety-funding-fb-transcript Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Discussion about AI Safety funding (FB transcript), published by Akash on April 30, 2023 on The Effective Altruism Forum.Kat Woods recently wrote a Facebook post about Nonlinear's new funding program.This led to a discussion (in the comments section) about funding norms, the current funding bar, concerns about lowering the bar, and concerns about the current (relatively centralized) funding situation.I'm posting a few of the comments below. I'm hoping this might promote more discussion about the funding landscape. Such discussion could be especially valuable right now, given that:Many people are starting to get interested in AI safety (including people who are not from the EA/rationalist communities)AGI timeline estimates have generally shortenedInvestment in overall AI development is increasing quicklyThere may be opportunities to spend large amounts of money in the upcoming year (e.g., scalable career transition grant programs, regranting programs, 2024 US elections, AI governance/policy infrastructure, public campaigns for AI safety).Many ideas with high potential upside also have noteworthy downside risks (phrased less vaguely, I think that among governance/policy/comms projects that have high potential upside, >50% also have non-trivial downside risks).We might see pretty big changes in the funding landscape over the next 6-24 monthsNew funders appear to be getting interested in AI safetyGovernments are getting interested in AI safetyMajor tech companies may decide to invest more resources into AI safetySelected comments from FB threadNote: I've made some editorial decisions to keep this post relatively short. Bolding is added by me. See the full thread here. Also, as usual, statements from individuals don't necessarily reflect the views of their employers.Kat Woods (Nonlinear)I often talk to dejected people who say they tried to get EA funding and were rejectedAnd what I want to do is to give them a rousing speech about how being rejected by one funder doesn't mean that their idea is bad or that their personal qualities are bad.The evaluation process is noisy. Even the best funders make mistakes. They might just have a different world model or value system than you. They might have been hangry while reading your application.That to succeed, you'll have to ask a ton of people, and get a ton of rejections, but that's OK, because you only need a handful of yeses.(Kat then describes the new funding program from Nonlinear. TLDR: People submit an application that can then be reviewed by a network of 50+ funders.)Claire Zabel (Program Officer at Open Philanthropy)Claire's comment:(Claire quoting Kat:) The evaluation process is noisy. Even the best funders make mistakes. They might just have a different world model or value system than you. They might have been hangry while reading your application.(Claire's response): That's true. It's also possible the project they are applying for is harmful, but if they apply to enough funders, eventually someone will fund the harmful project (unilateralist's curse). In my experience as a grantmaker, a substantial fraction (though certainly very far from all) rejected applications in the longtermist space seem harmful in expectation, not just "not cost-effective enough"Selected portions of Kat's response to Claire:1. We’re probably going to be setting up channels where funders can discuss applicants. This way if there are concerns about net negativity, other funders considering it can see that. This might even lead to less unilateralist curse because if lots of funders think that the idea is net negative, others will be able to see that, instead of the status quo, where it’s hard to know what other funders think of an application.2. All these donors were giving anyways, with all the possibilities of the u...]]>
Sun, 30 Apr 2023 22:48:58 +0000 EA - Discussion about AI Safety funding (FB transcript) by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Discussion about AI Safety funding (FB transcript), published by Akash on April 30, 2023 on The Effective Altruism Forum.Kat Woods recently wrote a Facebook post about Nonlinear's new funding program.This led to a discussion (in the comments section) about funding norms, the current funding bar, concerns about lowering the bar, and concerns about the current (relatively centralized) funding situation.I'm posting a few of the comments below. I'm hoping this might promote more discussion about the funding landscape. Such discussion could be especially valuable right now, given that:Many people are starting to get interested in AI safety (including people who are not from the EA/rationalist communities)AGI timeline estimates have generally shortenedInvestment in overall AI development is increasing quicklyThere may be opportunities to spend large amounts of money in the upcoming year (e.g., scalable career transition grant programs, regranting programs, 2024 US elections, AI governance/policy infrastructure, public campaigns for AI safety).Many ideas with high potential upside also have noteworthy downside risks (phrased less vaguely, I think that among governance/policy/comms projects that have high potential upside, >50% also have non-trivial downside risks).We might see pretty big changes in the funding landscape over the next 6-24 monthsNew funders appear to be getting interested in AI safetyGovernments are getting interested in AI safetyMajor tech companies may decide to invest more resources into AI safetySelected comments from FB threadNote: I've made some editorial decisions to keep this post relatively short. Bolding is added by me. See the full thread here. Also, as usual, statements from individuals don't necessarily reflect the views of their employers.Kat Woods (Nonlinear)I often talk to dejected people who say they tried to get EA funding and were rejectedAnd what I want to do is to give them a rousing speech about how being rejected by one funder doesn't mean that their idea is bad or that their personal qualities are bad.The evaluation process is noisy. Even the best funders make mistakes. They might just have a different world model or value system than you. They might have been hangry while reading your application.That to succeed, you'll have to ask a ton of people, and get a ton of rejections, but that's OK, because you only need a handful of yeses.(Kat then describes the new funding program from Nonlinear. TLDR: People submit an application that can then be reviewed by a network of 50+ funders.)Claire Zabel (Program Officer at Open Philanthropy)Claire's comment:(Claire quoting Kat:) The evaluation process is noisy. Even the best funders make mistakes. They might just have a different world model or value system than you. They might have been hangry while reading your application.(Claire's response): That's true. It's also possible the project they are applying for is harmful, but if they apply to enough funders, eventually someone will fund the harmful project (unilateralist's curse). In my experience as a grantmaker, a substantial fraction (though certainly very far from all) rejected applications in the longtermist space seem harmful in expectation, not just "not cost-effective enough"Selected portions of Kat's response to Claire:1. We’re probably going to be setting up channels where funders can discuss applicants. This way if there are concerns about net negativity, other funders considering it can see that. This might even lead to less unilateralist curse because if lots of funders think that the idea is net negative, others will be able to see that, instead of the status quo, where it’s hard to know what other funders think of an application.2. All these donors were giving anyways, with all the possibilities of the u...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Discussion about AI Safety funding (FB transcript), published by Akash on April 30, 2023 on The Effective Altruism Forum.Kat Woods recently wrote a Facebook post about Nonlinear's new funding program.This led to a discussion (in the comments section) about funding norms, the current funding bar, concerns about lowering the bar, and concerns about the current (relatively centralized) funding situation.I'm posting a few of the comments below. I'm hoping this might promote more discussion about the funding landscape. Such discussion could be especially valuable right now, given that:Many people are starting to get interested in AI safety (including people who are not from the EA/rationalist communities)AGI timeline estimates have generally shortenedInvestment in overall AI development is increasing quicklyThere may be opportunities to spend large amounts of money in the upcoming year (e.g., scalable career transition grant programs, regranting programs, 2024 US elections, AI governance/policy infrastructure, public campaigns for AI safety).Many ideas with high potential upside also have noteworthy downside risks (phrased less vaguely, I think that among governance/policy/comms projects that have high potential upside, >50% also have non-trivial downside risks).We might see pretty big changes in the funding landscape over the next 6-24 monthsNew funders appear to be getting interested in AI safetyGovernments are getting interested in AI safetyMajor tech companies may decide to invest more resources into AI safetySelected comments from FB threadNote: I've made some editorial decisions to keep this post relatively short. Bolding is added by me. See the full thread here. Also, as usual, statements from individuals don't necessarily reflect the views of their employers.Kat Woods (Nonlinear)I often talk to dejected people who say they tried to get EA funding and were rejectedAnd what I want to do is to give them a rousing speech about how being rejected by one funder doesn't mean that their idea is bad or that their personal qualities are bad.The evaluation process is noisy. Even the best funders make mistakes. They might just have a different world model or value system than you. They might have been hangry while reading your application.That to succeed, you'll have to ask a ton of people, and get a ton of rejections, but that's OK, because you only need a handful of yeses.(Kat then describes the new funding program from Nonlinear. TLDR: People submit an application that can then be reviewed by a network of 50+ funders.)Claire Zabel (Program Officer at Open Philanthropy)Claire's comment:(Claire quoting Kat:) The evaluation process is noisy. Even the best funders make mistakes. They might just have a different world model or value system than you. They might have been hangry while reading your application.(Claire's response): That's true. It's also possible the project they are applying for is harmful, but if they apply to enough funders, eventually someone will fund the harmful project (unilateralist's curse). In my experience as a grantmaker, a substantial fraction (though certainly very far from all) rejected applications in the longtermist space seem harmful in expectation, not just "not cost-effective enough"Selected portions of Kat's response to Claire:1. We’re probably going to be setting up channels where funders can discuss applicants. This way if there are concerns about net negativity, other funders considering it can see that. This might even lead to less unilateralist curse because if lots of funders think that the idea is net negative, others will be able to see that, instead of the status quo, where it’s hard to know what other funders think of an application.2. All these donors were giving anyways, with all the possibilities of the u...]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:01 None full 5783
iqbdXmrNxxgzgNxPC_NL_EA_EA EA - Introducing Stanford’s new Humane and Sustainable Food Lab by MMathur Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Stanford’s new Humane & Sustainable Food Lab, published by MMathur on April 30, 2023 on The Effective Altruism Forum.We are excited to announce the new Humane & Sustainable Food Lab at Stanford University’s School of Medicine (California, USA). Our mission is to end factory farming through cutting-edge scientific research that we are uniquely positioned to conduct. I am the principal investigator of the lab, an Assistant Professor at the Stanford School of Medicine with dual appointments in the Quantitative Sciences Unit and Department of Pediatrics. Because arguments for reducing factory farming as a cause area have been detailed elsewhere, here I focus on describing:Our approachOur research and publications to dateOur upcoming research prioritiesWhy we are funding-constrained1. Our approach1.1. Breadth, then depthEmpirical research on how to reduce factory farming is still nascent, with many low-hanging fruit and unexplored possibilities. As such, it is critical to explore broadly to see what general directions are most promising and in what real-world contexts (e.g., educational interventions that appeal to animal welfare [1, 2, 3], choice-architecture “nudges” that subtly shift food-service environments, etc.). We are conducting studies on a range of individual- and society-level interventions (see below), ultimately aiming to find and refine the most tractable, cost-effective, and scalable interventions. As we home in on candidate interventions, we expect our research to become more deeply focused on a smaller number of interventions.1.2. Collaborating with food service to conduct and disseminate research in real-world contextsWe have a unique collaboration with the Director and Executive Chefs at the Stanford dining halls, allowing us to conduct controlled trials in real-world settings to assess interventions to reduce consumption of meat and animal products. Some of our interventions have been as simple and scalable as reducing the size of spoons used to serve these foods. Also, Stanford Residential & Dining Enterprises is a founding member of the Menus of Change University Research Collaborative (MCURC), a nationwide research consortium of 74 colleges and universities that conduct groundbreaking, collaborative studies on healthy and sustainable food choices in food service. MCURC provides evidence-based recommendations for promoting healthier and more sustainable food choices in food service operations, providing a natural route to dissemination.Our established research model involves conducting initial pilot studies at Stanford's dining halls to assess interventions' real-world feasibility and obtain preliminary effect-size estimates, then conducting large-scale, multisite studies by partnering with collaborating members of MCURC. We also have ongoing collaborations with restaurants and plant-based food startups in which we are studying whether adding modern plant-based analogs (e.g., Impossible Burgers or JUST Egg) to a menu reduces sales of animal-based foods.1.3. Building a new academic fieldThe large majority of empirical research on reducing factory farming has been conducted by nonprofits. In contrast, academics have engaged comparatively little with this cause area (but with notable, commendable exceptions). Academics have a chick’n-and-JUST Egg problem: without a robust academic field for farmed animal welfare, academics remain largely unaware of this cause area and lack the necessary mentorship and career incentives to pursue it; conversely, without individual labs pursuing this research, a robust academic field cannot emerge. Our lab is designed as a prototype, demonstrating that it is feasible – and indeed rather joyful! – for a lab to focus on an EA-aligned, neglected cause area, while also succeeding robustly by the stri...]]>
MMathur https://forum.effectivealtruism.org/posts/iqbdXmrNxxgzgNxPC/introducing-stanford-s-new-humane-and-sustainable-food-lab Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Stanford’s new Humane & Sustainable Food Lab, published by MMathur on April 30, 2023 on The Effective Altruism Forum.We are excited to announce the new Humane & Sustainable Food Lab at Stanford University’s School of Medicine (California, USA). Our mission is to end factory farming through cutting-edge scientific research that we are uniquely positioned to conduct. I am the principal investigator of the lab, an Assistant Professor at the Stanford School of Medicine with dual appointments in the Quantitative Sciences Unit and Department of Pediatrics. Because arguments for reducing factory farming as a cause area have been detailed elsewhere, here I focus on describing:Our approachOur research and publications to dateOur upcoming research prioritiesWhy we are funding-constrained1. Our approach1.1. Breadth, then depthEmpirical research on how to reduce factory farming is still nascent, with many low-hanging fruit and unexplored possibilities. As such, it is critical to explore broadly to see what general directions are most promising and in what real-world contexts (e.g., educational interventions that appeal to animal welfare [1, 2, 3], choice-architecture “nudges” that subtly shift food-service environments, etc.). We are conducting studies on a range of individual- and society-level interventions (see below), ultimately aiming to find and refine the most tractable, cost-effective, and scalable interventions. As we home in on candidate interventions, we expect our research to become more deeply focused on a smaller number of interventions.1.2. Collaborating with food service to conduct and disseminate research in real-world contextsWe have a unique collaboration with the Director and Executive Chefs at the Stanford dining halls, allowing us to conduct controlled trials in real-world settings to assess interventions to reduce consumption of meat and animal products. Some of our interventions have been as simple and scalable as reducing the size of spoons used to serve these foods. Also, Stanford Residential & Dining Enterprises is a founding member of the Menus of Change University Research Collaborative (MCURC), a nationwide research consortium of 74 colleges and universities that conduct groundbreaking, collaborative studies on healthy and sustainable food choices in food service. MCURC provides evidence-based recommendations for promoting healthier and more sustainable food choices in food service operations, providing a natural route to dissemination.Our established research model involves conducting initial pilot studies at Stanford's dining halls to assess interventions' real-world feasibility and obtain preliminary effect-size estimates, then conducting large-scale, multisite studies by partnering with collaborating members of MCURC. We also have ongoing collaborations with restaurants and plant-based food startups in which we are studying whether adding modern plant-based analogs (e.g., Impossible Burgers or JUST Egg) to a menu reduces sales of animal-based foods.1.3. Building a new academic fieldThe large majority of empirical research on reducing factory farming has been conducted by nonprofits. In contrast, academics have engaged comparatively little with this cause area (but with notable, commendable exceptions). Academics have a chick’n-and-JUST Egg problem: without a robust academic field for farmed animal welfare, academics remain largely unaware of this cause area and lack the necessary mentorship and career incentives to pursue it; conversely, without individual labs pursuing this research, a robust academic field cannot emerge. Our lab is designed as a prototype, demonstrating that it is feasible – and indeed rather joyful! – for a lab to focus on an EA-aligned, neglected cause area, while also succeeding robustly by the stri...]]>
Sun, 30 Apr 2023 15:47:21 +0000 EA - Introducing Stanford’s new Humane and Sustainable Food Lab by MMathur Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Stanford’s new Humane & Sustainable Food Lab, published by MMathur on April 30, 2023 on The Effective Altruism Forum.We are excited to announce the new Humane & Sustainable Food Lab at Stanford University’s School of Medicine (California, USA). Our mission is to end factory farming through cutting-edge scientific research that we are uniquely positioned to conduct. I am the principal investigator of the lab, an Assistant Professor at the Stanford School of Medicine with dual appointments in the Quantitative Sciences Unit and Department of Pediatrics. Because arguments for reducing factory farming as a cause area have been detailed elsewhere, here I focus on describing:Our approachOur research and publications to dateOur upcoming research prioritiesWhy we are funding-constrained1. Our approach1.1. Breadth, then depthEmpirical research on how to reduce factory farming is still nascent, with many low-hanging fruit and unexplored possibilities. As such, it is critical to explore broadly to see what general directions are most promising and in what real-world contexts (e.g., educational interventions that appeal to animal welfare [1, 2, 3], choice-architecture “nudges” that subtly shift food-service environments, etc.). We are conducting studies on a range of individual- and society-level interventions (see below), ultimately aiming to find and refine the most tractable, cost-effective, and scalable interventions. As we home in on candidate interventions, we expect our research to become more deeply focused on a smaller number of interventions.1.2. Collaborating with food service to conduct and disseminate research in real-world contextsWe have a unique collaboration with the Director and Executive Chefs at the Stanford dining halls, allowing us to conduct controlled trials in real-world settings to assess interventions to reduce consumption of meat and animal products. Some of our interventions have been as simple and scalable as reducing the size of spoons used to serve these foods. Also, Stanford Residential & Dining Enterprises is a founding member of the Menus of Change University Research Collaborative (MCURC), a nationwide research consortium of 74 colleges and universities that conduct groundbreaking, collaborative studies on healthy and sustainable food choices in food service. MCURC provides evidence-based recommendations for promoting healthier and more sustainable food choices in food service operations, providing a natural route to dissemination.Our established research model involves conducting initial pilot studies at Stanford's dining halls to assess interventions' real-world feasibility and obtain preliminary effect-size estimates, then conducting large-scale, multisite studies by partnering with collaborating members of MCURC. We also have ongoing collaborations with restaurants and plant-based food startups in which we are studying whether adding modern plant-based analogs (e.g., Impossible Burgers or JUST Egg) to a menu reduces sales of animal-based foods.1.3. Building a new academic fieldThe large majority of empirical research on reducing factory farming has been conducted by nonprofits. In contrast, academics have engaged comparatively little with this cause area (but with notable, commendable exceptions). Academics have a chick’n-and-JUST Egg problem: without a robust academic field for farmed animal welfare, academics remain largely unaware of this cause area and lack the necessary mentorship and career incentives to pursue it; conversely, without individual labs pursuing this research, a robust academic field cannot emerge. Our lab is designed as a prototype, demonstrating that it is feasible – and indeed rather joyful! – for a lab to focus on an EA-aligned, neglected cause area, while also succeeding robustly by the stri...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Stanford’s new Humane & Sustainable Food Lab, published by MMathur on April 30, 2023 on The Effective Altruism Forum.We are excited to announce the new Humane & Sustainable Food Lab at Stanford University’s School of Medicine (California, USA). Our mission is to end factory farming through cutting-edge scientific research that we are uniquely positioned to conduct. I am the principal investigator of the lab, an Assistant Professor at the Stanford School of Medicine with dual appointments in the Quantitative Sciences Unit and Department of Pediatrics. Because arguments for reducing factory farming as a cause area have been detailed elsewhere, here I focus on describing:Our approachOur research and publications to dateOur upcoming research prioritiesWhy we are funding-constrained1. Our approach1.1. Breadth, then depthEmpirical research on how to reduce factory farming is still nascent, with many low-hanging fruit and unexplored possibilities. As such, it is critical to explore broadly to see what general directions are most promising and in what real-world contexts (e.g., educational interventions that appeal to animal welfare [1, 2, 3], choice-architecture “nudges” that subtly shift food-service environments, etc.). We are conducting studies on a range of individual- and society-level interventions (see below), ultimately aiming to find and refine the most tractable, cost-effective, and scalable interventions. As we home in on candidate interventions, we expect our research to become more deeply focused on a smaller number of interventions.1.2. Collaborating with food service to conduct and disseminate research in real-world contextsWe have a unique collaboration with the Director and Executive Chefs at the Stanford dining halls, allowing us to conduct controlled trials in real-world settings to assess interventions to reduce consumption of meat and animal products. Some of our interventions have been as simple and scalable as reducing the size of spoons used to serve these foods. Also, Stanford Residential & Dining Enterprises is a founding member of the Menus of Change University Research Collaborative (MCURC), a nationwide research consortium of 74 colleges and universities that conduct groundbreaking, collaborative studies on healthy and sustainable food choices in food service. MCURC provides evidence-based recommendations for promoting healthier and more sustainable food choices in food service operations, providing a natural route to dissemination.Our established research model involves conducting initial pilot studies at Stanford's dining halls to assess interventions' real-world feasibility and obtain preliminary effect-size estimates, then conducting large-scale, multisite studies by partnering with collaborating members of MCURC. We also have ongoing collaborations with restaurants and plant-based food startups in which we are studying whether adding modern plant-based analogs (e.g., Impossible Burgers or JUST Egg) to a menu reduces sales of animal-based foods.1.3. Building a new academic fieldThe large majority of empirical research on reducing factory farming has been conducted by nonprofits. In contrast, academics have engaged comparatively little with this cause area (but with notable, commendable exceptions). Academics have a chick’n-and-JUST Egg problem: without a robust academic field for farmed animal welfare, academics remain largely unaware of this cause area and lack the necessary mentorship and career incentives to pursue it; conversely, without individual labs pursuing this research, a robust academic field cannot emerge. Our lab is designed as a prototype, demonstrating that it is feasible – and indeed rather joyful! – for a lab to focus on an EA-aligned, neglected cause area, while also succeeding robustly by the stri...]]>
MMathur https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:54 None full 5779
EpyJMXZTqLDiKaXzu_NL_EA_EA EA - If you’d like to do something about sexual misconduct and don’t know what to do. by Habiba Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you’d like to do something about sexual misconduct and don’t know what to do., published by Habiba on April 30, 2023 on The Effective Altruism Forum.Photo by Andrew MocaFollowing on from posts from women in the community discussing recent events (like this and this) I wanted to provide some specific suggestions for people who want to do something about sexual misconduct and harassment in EA but don’t know where to start. If you find the word useful, you could consider this post a guide to being an “ally”I was inspired to write this after some of my male friends asked what they could do to help and/or were pretty surprised by how well the things they did went down. But it’s hopefully useful advice for everyone, myself included.I’ve written this most with sexual harassment in mind. However, some of the sections are applicable in thinking about sexual assault, violence or even rape.Caveats:The post is not attempting to persuade anyone who isn’t already convinced that it’s worth taking action - I’ll leave that to other posts.It’s just my take on what might be helpful to doNavigating this stuff is tricky - what makes most sense for you to do is going to be very context dependentMy thoughts are fairly focused on the culture and norms I’m most familiar with (I’m a woman of colour, from the UK)SummaryRemember we’re all within the system - One useful frame can be thinking about what actions are the “paths of least resistance” in social situations that lead to harm, and trying to not take those ourselves as a way of trying to change themAct with compassion - In general act as if people in the discussion have a decent chance of having been personally affected. And be extra compassionate as a result.Learn about the issue - You can still be really helpful even if you don’t feel like a world expert on the topic. That said, do carry on learning more, including beyond the EA community, especially before suggesting improvements. This can be either by yourself (there’s some suggestions at the end) or with others (e.g. start a discussion group)Listen to community members empathetically - Be an empathetic listener to people in discussions about sexual misconduct. Both reactively when they raise things with you and proactively by reaching out to people who might feel affected. Consider pausing discussing your opinions and solutions with someone who is upset at least until you’ve done the listening bit first.Support people who tell you about their experience - Listen to and carry on being supportive to someone who shares a story with you about an experience they had. Respect their preferences on confidentiality and autonomy over what to do next. And also look after yourself.Reflect on your own behaviour - Spend a little time reflecting on your own past behaviour and if there is anything you want to do about past actions or change for the future. Check in with someone else if so. This is tough but courageous to do.Interject when you see harmful behaviour - Whether it's happening online or in person interject as an active bystander. Challenge the behaviour and/or look out for the person affectedTake action about people who have harmed others - Take responsibility for doing something about people you interact with who have harmed others, but handle this with care. Consult with friends / experts to work out what to do. Feel free to maintain your own distance even if behaviour isn’t bad enough to face professional repercussionsParticipate in the discussion - When there is community wide discussion about issues like sexual misconduct participate by signal boosting those affected and contributing your own takesContribute to or start community wide initiatives - Contribute to existing projects in the EA space, take actions within your local EA group or workplace and conside...]]>
Habiba https://forum.effectivealtruism.org/posts/EpyJMXZTqLDiKaXzu/if-you-d-like-to-do-something-about-sexual-misconduct-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you’d like to do something about sexual misconduct and don’t know what to do., published by Habiba on April 30, 2023 on The Effective Altruism Forum.Photo by Andrew MocaFollowing on from posts from women in the community discussing recent events (like this and this) I wanted to provide some specific suggestions for people who want to do something about sexual misconduct and harassment in EA but don’t know where to start. If you find the word useful, you could consider this post a guide to being an “ally”I was inspired to write this after some of my male friends asked what they could do to help and/or were pretty surprised by how well the things they did went down. But it’s hopefully useful advice for everyone, myself included.I’ve written this most with sexual harassment in mind. However, some of the sections are applicable in thinking about sexual assault, violence or even rape.Caveats:The post is not attempting to persuade anyone who isn’t already convinced that it’s worth taking action - I’ll leave that to other posts.It’s just my take on what might be helpful to doNavigating this stuff is tricky - what makes most sense for you to do is going to be very context dependentMy thoughts are fairly focused on the culture and norms I’m most familiar with (I’m a woman of colour, from the UK)SummaryRemember we’re all within the system - One useful frame can be thinking about what actions are the “paths of least resistance” in social situations that lead to harm, and trying to not take those ourselves as a way of trying to change themAct with compassion - In general act as if people in the discussion have a decent chance of having been personally affected. And be extra compassionate as a result.Learn about the issue - You can still be really helpful even if you don’t feel like a world expert on the topic. That said, do carry on learning more, including beyond the EA community, especially before suggesting improvements. This can be either by yourself (there’s some suggestions at the end) or with others (e.g. start a discussion group)Listen to community members empathetically - Be an empathetic listener to people in discussions about sexual misconduct. Both reactively when they raise things with you and proactively by reaching out to people who might feel affected. Consider pausing discussing your opinions and solutions with someone who is upset at least until you’ve done the listening bit first.Support people who tell you about their experience - Listen to and carry on being supportive to someone who shares a story with you about an experience they had. Respect their preferences on confidentiality and autonomy over what to do next. And also look after yourself.Reflect on your own behaviour - Spend a little time reflecting on your own past behaviour and if there is anything you want to do about past actions or change for the future. Check in with someone else if so. This is tough but courageous to do.Interject when you see harmful behaviour - Whether it's happening online or in person interject as an active bystander. Challenge the behaviour and/or look out for the person affectedTake action about people who have harmed others - Take responsibility for doing something about people you interact with who have harmed others, but handle this with care. Consult with friends / experts to work out what to do. Feel free to maintain your own distance even if behaviour isn’t bad enough to face professional repercussionsParticipate in the discussion - When there is community wide discussion about issues like sexual misconduct participate by signal boosting those affected and contributing your own takesContribute to or start community wide initiatives - Contribute to existing projects in the EA space, take actions within your local EA group or workplace and conside...]]>
Sun, 30 Apr 2023 10:28:40 +0000 EA - If you’d like to do something about sexual misconduct and don’t know what to do. by Habiba Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you’d like to do something about sexual misconduct and don’t know what to do., published by Habiba on April 30, 2023 on The Effective Altruism Forum.Photo by Andrew MocaFollowing on from posts from women in the community discussing recent events (like this and this) I wanted to provide some specific suggestions for people who want to do something about sexual misconduct and harassment in EA but don’t know where to start. If you find the word useful, you could consider this post a guide to being an “ally”I was inspired to write this after some of my male friends asked what they could do to help and/or were pretty surprised by how well the things they did went down. But it’s hopefully useful advice for everyone, myself included.I’ve written this most with sexual harassment in mind. However, some of the sections are applicable in thinking about sexual assault, violence or even rape.Caveats:The post is not attempting to persuade anyone who isn’t already convinced that it’s worth taking action - I’ll leave that to other posts.It’s just my take on what might be helpful to doNavigating this stuff is tricky - what makes most sense for you to do is going to be very context dependentMy thoughts are fairly focused on the culture and norms I’m most familiar with (I’m a woman of colour, from the UK)SummaryRemember we’re all within the system - One useful frame can be thinking about what actions are the “paths of least resistance” in social situations that lead to harm, and trying to not take those ourselves as a way of trying to change themAct with compassion - In general act as if people in the discussion have a decent chance of having been personally affected. And be extra compassionate as a result.Learn about the issue - You can still be really helpful even if you don’t feel like a world expert on the topic. That said, do carry on learning more, including beyond the EA community, especially before suggesting improvements. This can be either by yourself (there’s some suggestions at the end) or with others (e.g. start a discussion group)Listen to community members empathetically - Be an empathetic listener to people in discussions about sexual misconduct. Both reactively when they raise things with you and proactively by reaching out to people who might feel affected. Consider pausing discussing your opinions and solutions with someone who is upset at least until you’ve done the listening bit first.Support people who tell you about their experience - Listen to and carry on being supportive to someone who shares a story with you about an experience they had. Respect their preferences on confidentiality and autonomy over what to do next. And also look after yourself.Reflect on your own behaviour - Spend a little time reflecting on your own past behaviour and if there is anything you want to do about past actions or change for the future. Check in with someone else if so. This is tough but courageous to do.Interject when you see harmful behaviour - Whether it's happening online or in person interject as an active bystander. Challenge the behaviour and/or look out for the person affectedTake action about people who have harmed others - Take responsibility for doing something about people you interact with who have harmed others, but handle this with care. Consult with friends / experts to work out what to do. Feel free to maintain your own distance even if behaviour isn’t bad enough to face professional repercussionsParticipate in the discussion - When there is community wide discussion about issues like sexual misconduct participate by signal boosting those affected and contributing your own takesContribute to or start community wide initiatives - Contribute to existing projects in the EA space, take actions within your local EA group or workplace and conside...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you’d like to do something about sexual misconduct and don’t know what to do., published by Habiba on April 30, 2023 on The Effective Altruism Forum.Photo by Andrew MocaFollowing on from posts from women in the community discussing recent events (like this and this) I wanted to provide some specific suggestions for people who want to do something about sexual misconduct and harassment in EA but don’t know where to start. If you find the word useful, you could consider this post a guide to being an “ally”I was inspired to write this after some of my male friends asked what they could do to help and/or were pretty surprised by how well the things they did went down. But it’s hopefully useful advice for everyone, myself included.I’ve written this most with sexual harassment in mind. However, some of the sections are applicable in thinking about sexual assault, violence or even rape.Caveats:The post is not attempting to persuade anyone who isn’t already convinced that it’s worth taking action - I’ll leave that to other posts.It’s just my take on what might be helpful to doNavigating this stuff is tricky - what makes most sense for you to do is going to be very context dependentMy thoughts are fairly focused on the culture and norms I’m most familiar with (I’m a woman of colour, from the UK)SummaryRemember we’re all within the system - One useful frame can be thinking about what actions are the “paths of least resistance” in social situations that lead to harm, and trying to not take those ourselves as a way of trying to change themAct with compassion - In general act as if people in the discussion have a decent chance of having been personally affected. And be extra compassionate as a result.Learn about the issue - You can still be really helpful even if you don’t feel like a world expert on the topic. That said, do carry on learning more, including beyond the EA community, especially before suggesting improvements. This can be either by yourself (there’s some suggestions at the end) or with others (e.g. start a discussion group)Listen to community members empathetically - Be an empathetic listener to people in discussions about sexual misconduct. Both reactively when they raise things with you and proactively by reaching out to people who might feel affected. Consider pausing discussing your opinions and solutions with someone who is upset at least until you’ve done the listening bit first.Support people who tell you about their experience - Listen to and carry on being supportive to someone who shares a story with you about an experience they had. Respect their preferences on confidentiality and autonomy over what to do next. And also look after yourself.Reflect on your own behaviour - Spend a little time reflecting on your own past behaviour and if there is anything you want to do about past actions or change for the future. Check in with someone else if so. This is tough but courageous to do.Interject when you see harmful behaviour - Whether it's happening online or in person interject as an active bystander. Challenge the behaviour and/or look out for the person affectedTake action about people who have harmed others - Take responsibility for doing something about people you interact with who have harmed others, but handle this with care. Consult with friends / experts to work out what to do. Feel free to maintain your own distance even if behaviour isn’t bad enough to face professional repercussionsParticipate in the discussion - When there is community wide discussion about issues like sexual misconduct participate by signal boosting those affected and contributing your own takesContribute to or start community wide initiatives - Contribute to existing projects in the EA space, take actions within your local EA group or workplace and conside...]]>
Habiba https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 33:09 None full 5775
pZmjeb5RddWqsjp2j_NL_EA_EA EA - New open letter on AI — "Include Consciousness Research" by Jamie Harris Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New open letter on AI — "Include Consciousness Research", published by Jamie Harris on April 28, 2023 on The Effective Altruism Forum.Quick context:The potential development of artificial sentience seems very important; it presents large, neglected, and potentially tractable risks.80,000 Hours lists artificial sentience and suffering risks as "similarly pressing but less developed areas" than their top 8 "list of the most pressing world problems".There's some relevant work on this topic by Sentience Institute, Future of Humanity Institute, Center for Reducing Suffering, and others, but room for much more. Yesterday someone asked on the Forum "How come there isn't that much focus in EA on research into whether / when AI's are likely to be sentient?"A month ago, people got excited about the FLI open letter: "Pause giant AI experiments".Now, Researchers from the Association for Mathematical Consciousness Science have written an open letter emphasising the urgent need for accelerated research in consciousness science in light of rapid advancements in artificial intelligence. (I'm not affiliated with them in any way.)It's quite short, so I'll copy the full text here:This open letter is a wakeup call for the tech sector, the scientific community and society in general to take seriously the need to accelerate research in the field of consciousness science.As highlighted by the recent “Pause Giant AI Experiments” letter [1], we are living through an exciting and uncertain time in the development of artificial intelligence (AI) and other brain-related technologies. The increasing computing power and capabilities of the new AI systems are accelerating at a pace that far exceeds our progress in understanding their capabilities and their “alignment” with human values.AI systems, including Large Language Models such as ChatGPT and Bard, are artificial neural networks inspired by neuronal architecture in the cortex of animal brains. In the near future, it is inevitable that such systems will be constructed to reproduce aspects of higher-level brain architecture and functioning. Indeed, it is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness. Contemporary AI systems already display human traits recognised in Psychology, including evidence of Theory of Mind [2].Furthermore, if achieving consciousness, AI systems would likely unveil a new array of capabilities that go far beyond what is expected even by those spearheading their development. AI systems have already been observed to exhibit unanticipated emergent properties [3]. These capabilities will change what AI can do, and what society can do to control, align and use such systems. In addition, consciousness would give AI a place in our moral landscape, which raises further ethical, legal, and political concerns.As AI develops, it is vital for the wider public, societal institutions and governing bodies to know whether and how AI systems can become conscious, to understand the implications thereof, and to effectively address the ethical, safety, and societal ramifications associated with artificial general intelligence (AGI).Science is starting to unlock the mystery of consciousness. Steady advances in recent years have brought us closer to defining and understanding consciousness and have established an expert international community of researchers in this field. There are over 30 models and theories of consciousness (MoCs and ToCs) in the peer-reviewed scientific literature, which already include some important pieces of the solution to the challenge of consciousness.To understand whether AI systems are, or can become, conscious, tools are needed that can be applied to artificial systems. In particular, science needs to further develop formal and mat...]]>
Jamie Harris https://forum.effectivealtruism.org/posts/pZmjeb5RddWqsjp2j/new-open-letter-on-ai-include-consciousness-research Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New open letter on AI — "Include Consciousness Research", published by Jamie Harris on April 28, 2023 on The Effective Altruism Forum.Quick context:The potential development of artificial sentience seems very important; it presents large, neglected, and potentially tractable risks.80,000 Hours lists artificial sentience and suffering risks as "similarly pressing but less developed areas" than their top 8 "list of the most pressing world problems".There's some relevant work on this topic by Sentience Institute, Future of Humanity Institute, Center for Reducing Suffering, and others, but room for much more. Yesterday someone asked on the Forum "How come there isn't that much focus in EA on research into whether / when AI's are likely to be sentient?"A month ago, people got excited about the FLI open letter: "Pause giant AI experiments".Now, Researchers from the Association for Mathematical Consciousness Science have written an open letter emphasising the urgent need for accelerated research in consciousness science in light of rapid advancements in artificial intelligence. (I'm not affiliated with them in any way.)It's quite short, so I'll copy the full text here:This open letter is a wakeup call for the tech sector, the scientific community and society in general to take seriously the need to accelerate research in the field of consciousness science.As highlighted by the recent “Pause Giant AI Experiments” letter [1], we are living through an exciting and uncertain time in the development of artificial intelligence (AI) and other brain-related technologies. The increasing computing power and capabilities of the new AI systems are accelerating at a pace that far exceeds our progress in understanding their capabilities and their “alignment” with human values.AI systems, including Large Language Models such as ChatGPT and Bard, are artificial neural networks inspired by neuronal architecture in the cortex of animal brains. In the near future, it is inevitable that such systems will be constructed to reproduce aspects of higher-level brain architecture and functioning. Indeed, it is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness. Contemporary AI systems already display human traits recognised in Psychology, including evidence of Theory of Mind [2].Furthermore, if achieving consciousness, AI systems would likely unveil a new array of capabilities that go far beyond what is expected even by those spearheading their development. AI systems have already been observed to exhibit unanticipated emergent properties [3]. These capabilities will change what AI can do, and what society can do to control, align and use such systems. In addition, consciousness would give AI a place in our moral landscape, which raises further ethical, legal, and political concerns.As AI develops, it is vital for the wider public, societal institutions and governing bodies to know whether and how AI systems can become conscious, to understand the implications thereof, and to effectively address the ethical, safety, and societal ramifications associated with artificial general intelligence (AGI).Science is starting to unlock the mystery of consciousness. Steady advances in recent years have brought us closer to defining and understanding consciousness and have established an expert international community of researchers in this field. There are over 30 models and theories of consciousness (MoCs and ToCs) in the peer-reviewed scientific literature, which already include some important pieces of the solution to the challenge of consciousness.To understand whether AI systems are, or can become, conscious, tools are needed that can be applied to artificial systems. In particular, science needs to further develop formal and mat...]]>
Sat, 29 Apr 2023 12:08:58 +0000 EA - New open letter on AI — "Include Consciousness Research" by Jamie Harris Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New open letter on AI — "Include Consciousness Research", published by Jamie Harris on April 28, 2023 on The Effective Altruism Forum.Quick context:The potential development of artificial sentience seems very important; it presents large, neglected, and potentially tractable risks.80,000 Hours lists artificial sentience and suffering risks as "similarly pressing but less developed areas" than their top 8 "list of the most pressing world problems".There's some relevant work on this topic by Sentience Institute, Future of Humanity Institute, Center for Reducing Suffering, and others, but room for much more. Yesterday someone asked on the Forum "How come there isn't that much focus in EA on research into whether / when AI's are likely to be sentient?"A month ago, people got excited about the FLI open letter: "Pause giant AI experiments".Now, Researchers from the Association for Mathematical Consciousness Science have written an open letter emphasising the urgent need for accelerated research in consciousness science in light of rapid advancements in artificial intelligence. (I'm not affiliated with them in any way.)It's quite short, so I'll copy the full text here:This open letter is a wakeup call for the tech sector, the scientific community and society in general to take seriously the need to accelerate research in the field of consciousness science.As highlighted by the recent “Pause Giant AI Experiments” letter [1], we are living through an exciting and uncertain time in the development of artificial intelligence (AI) and other brain-related technologies. The increasing computing power and capabilities of the new AI systems are accelerating at a pace that far exceeds our progress in understanding their capabilities and their “alignment” with human values.AI systems, including Large Language Models such as ChatGPT and Bard, are artificial neural networks inspired by neuronal architecture in the cortex of animal brains. In the near future, it is inevitable that such systems will be constructed to reproduce aspects of higher-level brain architecture and functioning. Indeed, it is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness. Contemporary AI systems already display human traits recognised in Psychology, including evidence of Theory of Mind [2].Furthermore, if achieving consciousness, AI systems would likely unveil a new array of capabilities that go far beyond what is expected even by those spearheading their development. AI systems have already been observed to exhibit unanticipated emergent properties [3]. These capabilities will change what AI can do, and what society can do to control, align and use such systems. In addition, consciousness would give AI a place in our moral landscape, which raises further ethical, legal, and political concerns.As AI develops, it is vital for the wider public, societal institutions and governing bodies to know whether and how AI systems can become conscious, to understand the implications thereof, and to effectively address the ethical, safety, and societal ramifications associated with artificial general intelligence (AGI).Science is starting to unlock the mystery of consciousness. Steady advances in recent years have brought us closer to defining and understanding consciousness and have established an expert international community of researchers in this field. There are over 30 models and theories of consciousness (MoCs and ToCs) in the peer-reviewed scientific literature, which already include some important pieces of the solution to the challenge of consciousness.To understand whether AI systems are, or can become, conscious, tools are needed that can be applied to artificial systems. In particular, science needs to further develop formal and mat...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New open letter on AI — "Include Consciousness Research", published by Jamie Harris on April 28, 2023 on The Effective Altruism Forum.Quick context:The potential development of artificial sentience seems very important; it presents large, neglected, and potentially tractable risks.80,000 Hours lists artificial sentience and suffering risks as "similarly pressing but less developed areas" than their top 8 "list of the most pressing world problems".There's some relevant work on this topic by Sentience Institute, Future of Humanity Institute, Center for Reducing Suffering, and others, but room for much more. Yesterday someone asked on the Forum "How come there isn't that much focus in EA on research into whether / when AI's are likely to be sentient?"A month ago, people got excited about the FLI open letter: "Pause giant AI experiments".Now, Researchers from the Association for Mathematical Consciousness Science have written an open letter emphasising the urgent need for accelerated research in consciousness science in light of rapid advancements in artificial intelligence. (I'm not affiliated with them in any way.)It's quite short, so I'll copy the full text here:This open letter is a wakeup call for the tech sector, the scientific community and society in general to take seriously the need to accelerate research in the field of consciousness science.As highlighted by the recent “Pause Giant AI Experiments” letter [1], we are living through an exciting and uncertain time in the development of artificial intelligence (AI) and other brain-related technologies. The increasing computing power and capabilities of the new AI systems are accelerating at a pace that far exceeds our progress in understanding their capabilities and their “alignment” with human values.AI systems, including Large Language Models such as ChatGPT and Bard, are artificial neural networks inspired by neuronal architecture in the cortex of animal brains. In the near future, it is inevitable that such systems will be constructed to reproduce aspects of higher-level brain architecture and functioning. Indeed, it is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness. Contemporary AI systems already display human traits recognised in Psychology, including evidence of Theory of Mind [2].Furthermore, if achieving consciousness, AI systems would likely unveil a new array of capabilities that go far beyond what is expected even by those spearheading their development. AI systems have already been observed to exhibit unanticipated emergent properties [3]. These capabilities will change what AI can do, and what society can do to control, align and use such systems. In addition, consciousness would give AI a place in our moral landscape, which raises further ethical, legal, and political concerns.As AI develops, it is vital for the wider public, societal institutions and governing bodies to know whether and how AI systems can become conscious, to understand the implications thereof, and to effectively address the ethical, safety, and societal ramifications associated with artificial general intelligence (AGI).Science is starting to unlock the mystery of consciousness. Steady advances in recent years have brought us closer to defining and understanding consciousness and have established an expert international community of researchers in this field. There are over 30 models and theories of consciousness (MoCs and ToCs) in the peer-reviewed scientific literature, which already include some important pieces of the solution to the challenge of consciousness.To understand whether AI systems are, or can become, conscious, tools are needed that can be applied to artificial systems. In particular, science needs to further develop formal and mat...]]>
Jamie Harris https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:22 None full 5771
4SFgv9iSaBWikriYj_NL_EA_EA EA - Better weather forecasting: Agricultural and non-agricultural benefits in low- and lower-middle-income countries by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Better weather forecasting: Agricultural and non-agricultural benefits in low- and lower-middle-income countries, published by Rethink Priorities on April 28, 2023 on The Effective Altruism Forum.Editorial noteThis report is a “shallow” investigation, as described here, and was commissioned by Open Philanthropy and produced by Rethink Priorities. Open Philanthropy does not necessarily endorse our conclusions.The primary focus of the report was to investigate whether improving weather forecasting could have benefits for agriculture in low- and lower-middle income countries, and evaluate how cost-effective this might be. Note that this means we did not evaluate improvements in weather forecasting against other potential interventions to achieve the same aims, such as the development of climate-resilient crops.We reviewed the academic and gray literature, and also spoke to seven experts. In our report, we provide a brief description of weather forecasting and the global industry, before evaluating which farmers might most benefit from improved forecasts. We then explore how predictions are currently made in countries of interest, and how accurate they are. We evaluate the cost-effectiveness of one intervention that was often mentioned by experts, and highlight other potential opportunities for grantmaking and further research.We don’t intend this report to be Rethink Priorities’ final word on this topic and we have tried to flag major sources of uncertainty in the report. We are open to revising our views as more information is uncovered.Key takeawaysWeather forecasting consists of three stages.Data assimilation: to understand the current state of the atmosphere, based on observations from satellites and surface-based stations. All forecasts beyond 4-5 days require global observations.Forecasting: to model how the atmosphere will change over time. Limits to supercomputing power necessitates tradeoffs, e.g., between forecast length and resolution.Communication: packaging relevant information and sharing this with potential users.The global annual spending on weather forecasting is over $50 billion.Around 260-305 million smallholder farms in South Asia, sub-Saharan Africa and Southeast Asia stand to benefit the most.A wide range of farming decisions benefit from weather forecasts, from strategic seasonal or annual decisions like crop choice, to day-to-day decisions like irrigation timing.There is some evidence that farmers can benefit from forecasts in terms of increased yields and income.For smallholder farmers, cereals are likely the most important crop group, constituting 90% of their agricultural output.Medium-range and seasonal forecasts of rainfall and temperature are most important to these farmers.In the lower-middle-income countries and low-income countries1 of interest, weather forecasting quality remains poor.Global numerical weather prediction (NWP) is a methodology that underlies much of weather forecasting. Seasonal forecasts of temperature seem more accurate than those for precipitation. At shorter timescales, forecasts in the tropics may be useful with a lead time of up to two weeks, and are generally less accurate than forecasts for the mid-latitudes.Public sector forecasting in these LMICs is generally informed by global NWPs, meaning that accuracy and resolution remain low.LMICs do not improve on global NWPs, as they lack resources and access to raw data.We have not found any evidence to suggest that private sector forecasts are better, though Ignitia’s approach targets one of the main issues with global NWPs.A small sample of public and private organizations we reviewed spends about $300 million each year on improving forecasting.It’s likely that advisories are needed, especially for seasonal forecasts.Improving weather forecast...]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/4SFgv9iSaBWikriYj/better-weather-forecasting-agricultural-and-non-agricultural Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Better weather forecasting: Agricultural and non-agricultural benefits in low- and lower-middle-income countries, published by Rethink Priorities on April 28, 2023 on The Effective Altruism Forum.Editorial noteThis report is a “shallow” investigation, as described here, and was commissioned by Open Philanthropy and produced by Rethink Priorities. Open Philanthropy does not necessarily endorse our conclusions.The primary focus of the report was to investigate whether improving weather forecasting could have benefits for agriculture in low- and lower-middle income countries, and evaluate how cost-effective this might be. Note that this means we did not evaluate improvements in weather forecasting against other potential interventions to achieve the same aims, such as the development of climate-resilient crops.We reviewed the academic and gray literature, and also spoke to seven experts. In our report, we provide a brief description of weather forecasting and the global industry, before evaluating which farmers might most benefit from improved forecasts. We then explore how predictions are currently made in countries of interest, and how accurate they are. We evaluate the cost-effectiveness of one intervention that was often mentioned by experts, and highlight other potential opportunities for grantmaking and further research.We don’t intend this report to be Rethink Priorities’ final word on this topic and we have tried to flag major sources of uncertainty in the report. We are open to revising our views as more information is uncovered.Key takeawaysWeather forecasting consists of three stages.Data assimilation: to understand the current state of the atmosphere, based on observations from satellites and surface-based stations. All forecasts beyond 4-5 days require global observations.Forecasting: to model how the atmosphere will change over time. Limits to supercomputing power necessitates tradeoffs, e.g., between forecast length and resolution.Communication: packaging relevant information and sharing this with potential users.The global annual spending on weather forecasting is over $50 billion.Around 260-305 million smallholder farms in South Asia, sub-Saharan Africa and Southeast Asia stand to benefit the most.A wide range of farming decisions benefit from weather forecasts, from strategic seasonal or annual decisions like crop choice, to day-to-day decisions like irrigation timing.There is some evidence that farmers can benefit from forecasts in terms of increased yields and income.For smallholder farmers, cereals are likely the most important crop group, constituting 90% of their agricultural output.Medium-range and seasonal forecasts of rainfall and temperature are most important to these farmers.In the lower-middle-income countries and low-income countries1 of interest, weather forecasting quality remains poor.Global numerical weather prediction (NWP) is a methodology that underlies much of weather forecasting. Seasonal forecasts of temperature seem more accurate than those for precipitation. At shorter timescales, forecasts in the tropics may be useful with a lead time of up to two weeks, and are generally less accurate than forecasts for the mid-latitudes.Public sector forecasting in these LMICs is generally informed by global NWPs, meaning that accuracy and resolution remain low.LMICs do not improve on global NWPs, as they lack resources and access to raw data.We have not found any evidence to suggest that private sector forecasts are better, though Ignitia’s approach targets one of the main issues with global NWPs.A small sample of public and private organizations we reviewed spends about $300 million each year on improving forecasting.It’s likely that advisories are needed, especially for seasonal forecasts.Improving weather forecast...]]>
Fri, 28 Apr 2023 23:34:21 +0000 EA - Better weather forecasting: Agricultural and non-agricultural benefits in low- and lower-middle-income countries by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Better weather forecasting: Agricultural and non-agricultural benefits in low- and lower-middle-income countries, published by Rethink Priorities on April 28, 2023 on The Effective Altruism Forum.Editorial noteThis report is a “shallow” investigation, as described here, and was commissioned by Open Philanthropy and produced by Rethink Priorities. Open Philanthropy does not necessarily endorse our conclusions.The primary focus of the report was to investigate whether improving weather forecasting could have benefits for agriculture in low- and lower-middle income countries, and evaluate how cost-effective this might be. Note that this means we did not evaluate improvements in weather forecasting against other potential interventions to achieve the same aims, such as the development of climate-resilient crops.We reviewed the academic and gray literature, and also spoke to seven experts. In our report, we provide a brief description of weather forecasting and the global industry, before evaluating which farmers might most benefit from improved forecasts. We then explore how predictions are currently made in countries of interest, and how accurate they are. We evaluate the cost-effectiveness of one intervention that was often mentioned by experts, and highlight other potential opportunities for grantmaking and further research.We don’t intend this report to be Rethink Priorities’ final word on this topic and we have tried to flag major sources of uncertainty in the report. We are open to revising our views as more information is uncovered.Key takeawaysWeather forecasting consists of three stages.Data assimilation: to understand the current state of the atmosphere, based on observations from satellites and surface-based stations. All forecasts beyond 4-5 days require global observations.Forecasting: to model how the atmosphere will change over time. Limits to supercomputing power necessitates tradeoffs, e.g., between forecast length and resolution.Communication: packaging relevant information and sharing this with potential users.The global annual spending on weather forecasting is over $50 billion.Around 260-305 million smallholder farms in South Asia, sub-Saharan Africa and Southeast Asia stand to benefit the most.A wide range of farming decisions benefit from weather forecasts, from strategic seasonal or annual decisions like crop choice, to day-to-day decisions like irrigation timing.There is some evidence that farmers can benefit from forecasts in terms of increased yields and income.For smallholder farmers, cereals are likely the most important crop group, constituting 90% of their agricultural output.Medium-range and seasonal forecasts of rainfall and temperature are most important to these farmers.In the lower-middle-income countries and low-income countries1 of interest, weather forecasting quality remains poor.Global numerical weather prediction (NWP) is a methodology that underlies much of weather forecasting. Seasonal forecasts of temperature seem more accurate than those for precipitation. At shorter timescales, forecasts in the tropics may be useful with a lead time of up to two weeks, and are generally less accurate than forecasts for the mid-latitudes.Public sector forecasting in these LMICs is generally informed by global NWPs, meaning that accuracy and resolution remain low.LMICs do not improve on global NWPs, as they lack resources and access to raw data.We have not found any evidence to suggest that private sector forecasts are better, though Ignitia’s approach targets one of the main issues with global NWPs.A small sample of public and private organizations we reviewed spends about $300 million each year on improving forecasting.It’s likely that advisories are needed, especially for seasonal forecasts.Improving weather forecast...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Better weather forecasting: Agricultural and non-agricultural benefits in low- and lower-middle-income countries, published by Rethink Priorities on April 28, 2023 on The Effective Altruism Forum.Editorial noteThis report is a “shallow” investigation, as described here, and was commissioned by Open Philanthropy and produced by Rethink Priorities. Open Philanthropy does not necessarily endorse our conclusions.The primary focus of the report was to investigate whether improving weather forecasting could have benefits for agriculture in low- and lower-middle income countries, and evaluate how cost-effective this might be. Note that this means we did not evaluate improvements in weather forecasting against other potential interventions to achieve the same aims, such as the development of climate-resilient crops.We reviewed the academic and gray literature, and also spoke to seven experts. In our report, we provide a brief description of weather forecasting and the global industry, before evaluating which farmers might most benefit from improved forecasts. We then explore how predictions are currently made in countries of interest, and how accurate they are. We evaluate the cost-effectiveness of one intervention that was often mentioned by experts, and highlight other potential opportunities for grantmaking and further research.We don’t intend this report to be Rethink Priorities’ final word on this topic and we have tried to flag major sources of uncertainty in the report. We are open to revising our views as more information is uncovered.Key takeawaysWeather forecasting consists of three stages.Data assimilation: to understand the current state of the atmosphere, based on observations from satellites and surface-based stations. All forecasts beyond 4-5 days require global observations.Forecasting: to model how the atmosphere will change over time. Limits to supercomputing power necessitates tradeoffs, e.g., between forecast length and resolution.Communication: packaging relevant information and sharing this with potential users.The global annual spending on weather forecasting is over $50 billion.Around 260-305 million smallholder farms in South Asia, sub-Saharan Africa and Southeast Asia stand to benefit the most.A wide range of farming decisions benefit from weather forecasts, from strategic seasonal or annual decisions like crop choice, to day-to-day decisions like irrigation timing.There is some evidence that farmers can benefit from forecasts in terms of increased yields and income.For smallholder farmers, cereals are likely the most important crop group, constituting 90% of their agricultural output.Medium-range and seasonal forecasts of rainfall and temperature are most important to these farmers.In the lower-middle-income countries and low-income countries1 of interest, weather forecasting quality remains poor.Global numerical weather prediction (NWP) is a methodology that underlies much of weather forecasting. Seasonal forecasts of temperature seem more accurate than those for precipitation. At shorter timescales, forecasts in the tropics may be useful with a lead time of up to two weeks, and are generally less accurate than forecasts for the mid-latitudes.Public sector forecasting in these LMICs is generally informed by global NWPs, meaning that accuracy and resolution remain low.LMICs do not improve on global NWPs, as they lack resources and access to raw data.We have not found any evidence to suggest that private sector forecasts are better, though Ignitia’s approach targets one of the main issues with global NWPs.A small sample of public and private organizations we reviewed spends about $300 million each year on improving forecasting.It’s likely that advisories are needed, especially for seasonal forecasts.Improving weather forecast...]]>
Rethink Priorities https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:55 None full 5766
HujMqSaQwnNJfLhWw_NL_EA_EA EA - Report: Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS) by JorgeTorresC Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Report: Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS), published by JorgeTorresC on April 27, 2023 on The Effective Altruism Forum.In order to prepare in case of an Abrupt Sunlight Reduction Scenario (ASRS), it is necessary to understand the threats and vulnerabilities of the agri-food system and the current ecosystem of risk management in Argentina.Through research, modeling, and interviews, the RCG team, in collaboration with ALLFED, has produced a report titled "Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS)". This strategic proposal offers a comprehensive overview of risk management in Argentina, the current state of risk, and presents eight main recommendations divided into two categories: communication and supplies, and the production and redirection of food in the event of an ASRS. Additionally, it highlights priority actions for implementing the proposed solutions.Download full reportIf you are interested in attending our research findings presentation on May 8, 2023 at 19:00 (GMT+1) sign up at the following linkExecutive SummaryAbrupt Sunlight Reduction Scenarios (ASRS) result from events that eject particulate matter into the upper atmosphere, reflecting and absorbing sunlight that would otherwise reach the Earth's surface. This decrease in sunlight causes a drop in global temperatures and precipitation, with devastating consequences for agriculture. Potential causes of an ASRS include large volcanic eruptions, nuclear winter, and asteroid or comet impact (ALLFED , 2022). The impact of such events is likely to last for several years, even a decade, with global implications for agriculture and food security.According to certain researchers, in a severe nuclear winter scenario such as the one described, an estimated 75% of the world's population could starve to death (Xia et al., 2022). If the atmosphere were to collect 150 million tons of soot, it would cause a decrease in temperature ranging between 7ºC and 15ºC. This temperature drop would be accompanied by a reduction in sunlight and precipitation, leading to a collapse in caloric production. Specifically, caloric production would fall to 10%-20% of its current value.Some regions of the world appear to have better conditions for surviving an abrupt sunlight reduction scenario (ASRS). These include island nations like New Zealand or Australia (Boyd & Wilson, 2022) and continental countries such as Argentina, Uruguay, and Paraguay (Xia et al., 2022). After evaluating different countries in Latin America, we realized that Argentina is one of the world's leading producers and exporters of food, especially grains and oilseeds. Therefore, in the event of an ASRS, Argentina would play a crucial role in the distribution and food exportation, even if its production decreased during the scenario. It would still have greater food availability than other countries that would be more severely affected.Adapting the country's food systems quickly and effectively would make the difference between a national famine situation and producing sufficient, varied, and nutritious food with a surplus to export, thus avoiding a regional humanitarian crisis and a foreign refugee crisis.Considering the importance of Argentina's geographical location, it is essential for the government to actively participate in the development of contingency plans aimed at addressing possible threats in the region, and the creation of an interdepartmental working group is recommended to investigate the threat posed by an ASRS and how to deal with it. The strategic initiatives aim to contribute to the strengthening of preparedness and specific recommendations for its location. This report consists of 8 main recommendations, divided into communication and supp...]]>
JorgeTorresC https://forum.effectivealtruism.org/posts/HujMqSaQwnNJfLhWw/report-food-security-in-argentina-in-the-event-of-an-abrupt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Report: Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS), published by JorgeTorresC on April 27, 2023 on The Effective Altruism Forum.In order to prepare in case of an Abrupt Sunlight Reduction Scenario (ASRS), it is necessary to understand the threats and vulnerabilities of the agri-food system and the current ecosystem of risk management in Argentina.Through research, modeling, and interviews, the RCG team, in collaboration with ALLFED, has produced a report titled "Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS)". This strategic proposal offers a comprehensive overview of risk management in Argentina, the current state of risk, and presents eight main recommendations divided into two categories: communication and supplies, and the production and redirection of food in the event of an ASRS. Additionally, it highlights priority actions for implementing the proposed solutions.Download full reportIf you are interested in attending our research findings presentation on May 8, 2023 at 19:00 (GMT+1) sign up at the following linkExecutive SummaryAbrupt Sunlight Reduction Scenarios (ASRS) result from events that eject particulate matter into the upper atmosphere, reflecting and absorbing sunlight that would otherwise reach the Earth's surface. This decrease in sunlight causes a drop in global temperatures and precipitation, with devastating consequences for agriculture. Potential causes of an ASRS include large volcanic eruptions, nuclear winter, and asteroid or comet impact (ALLFED , 2022). The impact of such events is likely to last for several years, even a decade, with global implications for agriculture and food security.According to certain researchers, in a severe nuclear winter scenario such as the one described, an estimated 75% of the world's population could starve to death (Xia et al., 2022). If the atmosphere were to collect 150 million tons of soot, it would cause a decrease in temperature ranging between 7ºC and 15ºC. This temperature drop would be accompanied by a reduction in sunlight and precipitation, leading to a collapse in caloric production. Specifically, caloric production would fall to 10%-20% of its current value.Some regions of the world appear to have better conditions for surviving an abrupt sunlight reduction scenario (ASRS). These include island nations like New Zealand or Australia (Boyd & Wilson, 2022) and continental countries such as Argentina, Uruguay, and Paraguay (Xia et al., 2022). After evaluating different countries in Latin America, we realized that Argentina is one of the world's leading producers and exporters of food, especially grains and oilseeds. Therefore, in the event of an ASRS, Argentina would play a crucial role in the distribution and food exportation, even if its production decreased during the scenario. It would still have greater food availability than other countries that would be more severely affected.Adapting the country's food systems quickly and effectively would make the difference between a national famine situation and producing sufficient, varied, and nutritious food with a surplus to export, thus avoiding a regional humanitarian crisis and a foreign refugee crisis.Considering the importance of Argentina's geographical location, it is essential for the government to actively participate in the development of contingency plans aimed at addressing possible threats in the region, and the creation of an interdepartmental working group is recommended to investigate the threat posed by an ASRS and how to deal with it. The strategic initiatives aim to contribute to the strengthening of preparedness and specific recommendations for its location. This report consists of 8 main recommendations, divided into communication and supp...]]>
Fri, 28 Apr 2023 09:28:11 +0000 EA - Report: Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS) by JorgeTorresC Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Report: Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS), published by JorgeTorresC on April 27, 2023 on The Effective Altruism Forum.In order to prepare in case of an Abrupt Sunlight Reduction Scenario (ASRS), it is necessary to understand the threats and vulnerabilities of the agri-food system and the current ecosystem of risk management in Argentina.Through research, modeling, and interviews, the RCG team, in collaboration with ALLFED, has produced a report titled "Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS)". This strategic proposal offers a comprehensive overview of risk management in Argentina, the current state of risk, and presents eight main recommendations divided into two categories: communication and supplies, and the production and redirection of food in the event of an ASRS. Additionally, it highlights priority actions for implementing the proposed solutions.Download full reportIf you are interested in attending our research findings presentation on May 8, 2023 at 19:00 (GMT+1) sign up at the following linkExecutive SummaryAbrupt Sunlight Reduction Scenarios (ASRS) result from events that eject particulate matter into the upper atmosphere, reflecting and absorbing sunlight that would otherwise reach the Earth's surface. This decrease in sunlight causes a drop in global temperatures and precipitation, with devastating consequences for agriculture. Potential causes of an ASRS include large volcanic eruptions, nuclear winter, and asteroid or comet impact (ALLFED , 2022). The impact of such events is likely to last for several years, even a decade, with global implications for agriculture and food security.According to certain researchers, in a severe nuclear winter scenario such as the one described, an estimated 75% of the world's population could starve to death (Xia et al., 2022). If the atmosphere were to collect 150 million tons of soot, it would cause a decrease in temperature ranging between 7ºC and 15ºC. This temperature drop would be accompanied by a reduction in sunlight and precipitation, leading to a collapse in caloric production. Specifically, caloric production would fall to 10%-20% of its current value.Some regions of the world appear to have better conditions for surviving an abrupt sunlight reduction scenario (ASRS). These include island nations like New Zealand or Australia (Boyd & Wilson, 2022) and continental countries such as Argentina, Uruguay, and Paraguay (Xia et al., 2022). After evaluating different countries in Latin America, we realized that Argentina is one of the world's leading producers and exporters of food, especially grains and oilseeds. Therefore, in the event of an ASRS, Argentina would play a crucial role in the distribution and food exportation, even if its production decreased during the scenario. It would still have greater food availability than other countries that would be more severely affected.Adapting the country's food systems quickly and effectively would make the difference between a national famine situation and producing sufficient, varied, and nutritious food with a surplus to export, thus avoiding a regional humanitarian crisis and a foreign refugee crisis.Considering the importance of Argentina's geographical location, it is essential for the government to actively participate in the development of contingency plans aimed at addressing possible threats in the region, and the creation of an interdepartmental working group is recommended to investigate the threat posed by an ASRS and how to deal with it. The strategic initiatives aim to contribute to the strengthening of preparedness and specific recommendations for its location. This report consists of 8 main recommendations, divided into communication and supp...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Report: Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS), published by JorgeTorresC on April 27, 2023 on The Effective Altruism Forum.In order to prepare in case of an Abrupt Sunlight Reduction Scenario (ASRS), it is necessary to understand the threats and vulnerabilities of the agri-food system and the current ecosystem of risk management in Argentina.Through research, modeling, and interviews, the RCG team, in collaboration with ALLFED, has produced a report titled "Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS)". This strategic proposal offers a comprehensive overview of risk management in Argentina, the current state of risk, and presents eight main recommendations divided into two categories: communication and supplies, and the production and redirection of food in the event of an ASRS. Additionally, it highlights priority actions for implementing the proposed solutions.Download full reportIf you are interested in attending our research findings presentation on May 8, 2023 at 19:00 (GMT+1) sign up at the following linkExecutive SummaryAbrupt Sunlight Reduction Scenarios (ASRS) result from events that eject particulate matter into the upper atmosphere, reflecting and absorbing sunlight that would otherwise reach the Earth's surface. This decrease in sunlight causes a drop in global temperatures and precipitation, with devastating consequences for agriculture. Potential causes of an ASRS include large volcanic eruptions, nuclear winter, and asteroid or comet impact (ALLFED , 2022). The impact of such events is likely to last for several years, even a decade, with global implications for agriculture and food security.According to certain researchers, in a severe nuclear winter scenario such as the one described, an estimated 75% of the world's population could starve to death (Xia et al., 2022). If the atmosphere were to collect 150 million tons of soot, it would cause a decrease in temperature ranging between 7ºC and 15ºC. This temperature drop would be accompanied by a reduction in sunlight and precipitation, leading to a collapse in caloric production. Specifically, caloric production would fall to 10%-20% of its current value.Some regions of the world appear to have better conditions for surviving an abrupt sunlight reduction scenario (ASRS). These include island nations like New Zealand or Australia (Boyd & Wilson, 2022) and continental countries such as Argentina, Uruguay, and Paraguay (Xia et al., 2022). After evaluating different countries in Latin America, we realized that Argentina is one of the world's leading producers and exporters of food, especially grains and oilseeds. Therefore, in the event of an ASRS, Argentina would play a crucial role in the distribution and food exportation, even if its production decreased during the scenario. It would still have greater food availability than other countries that would be more severely affected.Adapting the country's food systems quickly and effectively would make the difference between a national famine situation and producing sufficient, varied, and nutritious food with a surplus to export, thus avoiding a regional humanitarian crisis and a foreign refugee crisis.Considering the importance of Argentina's geographical location, it is essential for the government to actively participate in the development of contingency plans aimed at addressing possible threats in the region, and the creation of an interdepartmental working group is recommended to investigate the threat posed by an ASRS and how to deal with it. The strategic initiatives aim to contribute to the strengthening of preparedness and specific recommendations for its location. This report consists of 8 main recommendations, divided into communication and supp...]]>
JorgeTorresC https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:50 None full 5764
6CibzfFnRWXcZosxv_NL_EA_EA EA - Proposals for the AI Regulatory Sandbox in Spain by Guillem Bas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposals for the AI Regulatory Sandbox in Spain, published by Guillem Bas on April 27, 2023 on The Effective Altruism Forum.Translated by Daniela Tiznado.Summary: The European Union is designing a regulatory framework for artificial intelligence (AI) that could be approved by the end of 2023. This regulation prohibits unacceptable practices and stipulates requirements for AI systems in critical sectors. These obligations consist of a risk management system, a quality management system, and post-market monitoring. The legislation enforcement will be tested for the first time in Spain, in a regulatory sandbox of approximately three years. This will be a great opportunity to prepare the national ecosystem and influence the development of AI governance internationally. In this context, we present several policies to consider, including third-party auditing, the detection and evaluation of frontier AI models, red teaming exercises, and creating an incident database.IntroductionEverything indicates that the European Union will become the first major political entity to approve a comprehensive regulatory framework for artificial intelligence (AI). On April 21, 2021, The European Commission presented the Regulation laying down harmonised rules on AI –henceforth AI Act or Act–. This legislative proposal covers all types of AI systems in all sectors except the military, making it the most ambitious plan to regulate AI.As we will explain below, Spain will lead the implementation of this regulation in the context of a testing ground or sandbox. This is an opportunity for the Spanish Government to contribute to establishing good auditing and regulatory practices that can be adopted by other member states.This article is divided into six sections. Firstly, we provide a brief history of the Act. The second part summarizes the legislative proposal of the European Commission. The third section details the first sandbox of this regulation, carried out in Spain. The fourth lists the public bodies involved in the testing environment. The fifth part explains the relevance of this exercise. Finally, we present proposals to improve the governance of risks associated with AI in this context. We conclude that this project provides an excellent opportunity to develop a culture of responsible AI and determine the effectiveness of various policies.Brief History of the ActThe foundations of the text date back to 2020, when the European Commission published the White Paper on Artificial Intelligence. This was the beginning of a consultation process and a subsequent roadmap that involved the participation of hundreds of stakeholders, resulting in the aforementioned proposal.After its publication, the Commission received feedback from 304 actors and initiated a review process involving the European Parliament and the Council of the European Union as legislative bodies. In December 2022, the Council adopted a common approach. In the case of the Parliament, the vote to agree on a joint position is scheduled for May (Bertuzzi, 2023). The trilogue will begin immediately afterward, and the final version could be approved by the end of 2023, entering into force at the beginning of 2024.Summary of the ActThe main starting point of the proposed law is the classification of AI systems according to the level of risk they entail. Specifically, the proposal is based on a hierarchy distinguishing between unacceptable, high, limited, and minimal risks. The first two are the main focus of the regulation.As part of the category of unacceptable risks, practices that pose a clear threat to the safety, livelihoods, and rights of people will be banned. Currently, three practices have been deemed unacceptable as they go against European values: distorting human behavior to cause harm; evaluating and classi...]]>
Guillem Bas https://forum.effectivealtruism.org/posts/6CibzfFnRWXcZosxv/proposals-for-the-ai-regulatory-sandbox-in-spain Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposals for the AI Regulatory Sandbox in Spain, published by Guillem Bas on April 27, 2023 on The Effective Altruism Forum.Translated by Daniela Tiznado.Summary: The European Union is designing a regulatory framework for artificial intelligence (AI) that could be approved by the end of 2023. This regulation prohibits unacceptable practices and stipulates requirements for AI systems in critical sectors. These obligations consist of a risk management system, a quality management system, and post-market monitoring. The legislation enforcement will be tested for the first time in Spain, in a regulatory sandbox of approximately three years. This will be a great opportunity to prepare the national ecosystem and influence the development of AI governance internationally. In this context, we present several policies to consider, including third-party auditing, the detection and evaluation of frontier AI models, red teaming exercises, and creating an incident database.IntroductionEverything indicates that the European Union will become the first major political entity to approve a comprehensive regulatory framework for artificial intelligence (AI). On April 21, 2021, The European Commission presented the Regulation laying down harmonised rules on AI –henceforth AI Act or Act–. This legislative proposal covers all types of AI systems in all sectors except the military, making it the most ambitious plan to regulate AI.As we will explain below, Spain will lead the implementation of this regulation in the context of a testing ground or sandbox. This is an opportunity for the Spanish Government to contribute to establishing good auditing and regulatory practices that can be adopted by other member states.This article is divided into six sections. Firstly, we provide a brief history of the Act. The second part summarizes the legislative proposal of the European Commission. The third section details the first sandbox of this regulation, carried out in Spain. The fourth lists the public bodies involved in the testing environment. The fifth part explains the relevance of this exercise. Finally, we present proposals to improve the governance of risks associated with AI in this context. We conclude that this project provides an excellent opportunity to develop a culture of responsible AI and determine the effectiveness of various policies.Brief History of the ActThe foundations of the text date back to 2020, when the European Commission published the White Paper on Artificial Intelligence. This was the beginning of a consultation process and a subsequent roadmap that involved the participation of hundreds of stakeholders, resulting in the aforementioned proposal.After its publication, the Commission received feedback from 304 actors and initiated a review process involving the European Parliament and the Council of the European Union as legislative bodies. In December 2022, the Council adopted a common approach. In the case of the Parliament, the vote to agree on a joint position is scheduled for May (Bertuzzi, 2023). The trilogue will begin immediately afterward, and the final version could be approved by the end of 2023, entering into force at the beginning of 2024.Summary of the ActThe main starting point of the proposed law is the classification of AI systems according to the level of risk they entail. Specifically, the proposal is based on a hierarchy distinguishing between unacceptable, high, limited, and minimal risks. The first two are the main focus of the regulation.As part of the category of unacceptable risks, practices that pose a clear threat to the safety, livelihoods, and rights of people will be banned. Currently, three practices have been deemed unacceptable as they go against European values: distorting human behavior to cause harm; evaluating and classi...]]>
Fri, 28 Apr 2023 04:09:43 +0000 EA - Proposals for the AI Regulatory Sandbox in Spain by Guillem Bas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposals for the AI Regulatory Sandbox in Spain, published by Guillem Bas on April 27, 2023 on The Effective Altruism Forum.Translated by Daniela Tiznado.Summary: The European Union is designing a regulatory framework for artificial intelligence (AI) that could be approved by the end of 2023. This regulation prohibits unacceptable practices and stipulates requirements for AI systems in critical sectors. These obligations consist of a risk management system, a quality management system, and post-market monitoring. The legislation enforcement will be tested for the first time in Spain, in a regulatory sandbox of approximately three years. This will be a great opportunity to prepare the national ecosystem and influence the development of AI governance internationally. In this context, we present several policies to consider, including third-party auditing, the detection and evaluation of frontier AI models, red teaming exercises, and creating an incident database.IntroductionEverything indicates that the European Union will become the first major political entity to approve a comprehensive regulatory framework for artificial intelligence (AI). On April 21, 2021, The European Commission presented the Regulation laying down harmonised rules on AI –henceforth AI Act or Act–. This legislative proposal covers all types of AI systems in all sectors except the military, making it the most ambitious plan to regulate AI.As we will explain below, Spain will lead the implementation of this regulation in the context of a testing ground or sandbox. This is an opportunity for the Spanish Government to contribute to establishing good auditing and regulatory practices that can be adopted by other member states.This article is divided into six sections. Firstly, we provide a brief history of the Act. The second part summarizes the legislative proposal of the European Commission. The third section details the first sandbox of this regulation, carried out in Spain. The fourth lists the public bodies involved in the testing environment. The fifth part explains the relevance of this exercise. Finally, we present proposals to improve the governance of risks associated with AI in this context. We conclude that this project provides an excellent opportunity to develop a culture of responsible AI and determine the effectiveness of various policies.Brief History of the ActThe foundations of the text date back to 2020, when the European Commission published the White Paper on Artificial Intelligence. This was the beginning of a consultation process and a subsequent roadmap that involved the participation of hundreds of stakeholders, resulting in the aforementioned proposal.After its publication, the Commission received feedback from 304 actors and initiated a review process involving the European Parliament and the Council of the European Union as legislative bodies. In December 2022, the Council adopted a common approach. In the case of the Parliament, the vote to agree on a joint position is scheduled for May (Bertuzzi, 2023). The trilogue will begin immediately afterward, and the final version could be approved by the end of 2023, entering into force at the beginning of 2024.Summary of the ActThe main starting point of the proposed law is the classification of AI systems according to the level of risk they entail. Specifically, the proposal is based on a hierarchy distinguishing between unacceptable, high, limited, and minimal risks. The first two are the main focus of the regulation.As part of the category of unacceptable risks, practices that pose a clear threat to the safety, livelihoods, and rights of people will be banned. Currently, three practices have been deemed unacceptable as they go against European values: distorting human behavior to cause harm; evaluating and classi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposals for the AI Regulatory Sandbox in Spain, published by Guillem Bas on April 27, 2023 on The Effective Altruism Forum.Translated by Daniela Tiznado.Summary: The European Union is designing a regulatory framework for artificial intelligence (AI) that could be approved by the end of 2023. This regulation prohibits unacceptable practices and stipulates requirements for AI systems in critical sectors. These obligations consist of a risk management system, a quality management system, and post-market monitoring. The legislation enforcement will be tested for the first time in Spain, in a regulatory sandbox of approximately three years. This will be a great opportunity to prepare the national ecosystem and influence the development of AI governance internationally. In this context, we present several policies to consider, including third-party auditing, the detection and evaluation of frontier AI models, red teaming exercises, and creating an incident database.IntroductionEverything indicates that the European Union will become the first major political entity to approve a comprehensive regulatory framework for artificial intelligence (AI). On April 21, 2021, The European Commission presented the Regulation laying down harmonised rules on AI –henceforth AI Act or Act–. This legislative proposal covers all types of AI systems in all sectors except the military, making it the most ambitious plan to regulate AI.As we will explain below, Spain will lead the implementation of this regulation in the context of a testing ground or sandbox. This is an opportunity for the Spanish Government to contribute to establishing good auditing and regulatory practices that can be adopted by other member states.This article is divided into six sections. Firstly, we provide a brief history of the Act. The second part summarizes the legislative proposal of the European Commission. The third section details the first sandbox of this regulation, carried out in Spain. The fourth lists the public bodies involved in the testing environment. The fifth part explains the relevance of this exercise. Finally, we present proposals to improve the governance of risks associated with AI in this context. We conclude that this project provides an excellent opportunity to develop a culture of responsible AI and determine the effectiveness of various policies.Brief History of the ActThe foundations of the text date back to 2020, when the European Commission published the White Paper on Artificial Intelligence. This was the beginning of a consultation process and a subsequent roadmap that involved the participation of hundreds of stakeholders, resulting in the aforementioned proposal.After its publication, the Commission received feedback from 304 actors and initiated a review process involving the European Parliament and the Council of the European Union as legislative bodies. In December 2022, the Council adopted a common approach. In the case of the Parliament, the vote to agree on a joint position is scheduled for May (Bertuzzi, 2023). The trilogue will begin immediately afterward, and the final version could be approved by the end of 2023, entering into force at the beginning of 2024.Summary of the ActThe main starting point of the proposed law is the classification of AI systems according to the level of risk they entail. Specifically, the proposal is based on a hierarchy distinguishing between unacceptable, high, limited, and minimal risks. The first two are the main focus of the regulation.As part of the category of unacceptable risks, practices that pose a clear threat to the safety, livelihoods, and rights of people will be banned. Currently, three practices have been deemed unacceptable as they go against European values: distorting human behavior to cause harm; evaluating and classi...]]>
Guillem Bas https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:57 None full 5762
NBZr5rGp35YyjABhw_NL_EA_EA EA - Life in a Day: The film that opened my heart to effective altruism by Aaron Gertler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Life in a Day: The film that opened my heart to effective altruism, published by Aaron Gertler on April 27, 2023 on The Effective Altruism Forum.There are at least two kinds of "EA origin story":Some people discover it for themselves. They have EA-flavored goals, they Google something like "do the most good", and they find EA.Other people are introduced by others. They have EA-flavored goals, but they don't find EA until someone tells them about it.My story is the second kind. I saw Peter Singer give a talk, which I attended because I'd read GiveWell's website, which I'd seen on LessWrong, which I heard about from Harry Potter and the Methods of Rationality.So I owe part of my ethical system to the guy who heard me discussing Harry Potter in a college dining hall, interrupted my conversation, gave me a five-minute pitch for HPMOR, and left. (I never saw him again.)But what about the rest of the story?Peter Singer convinced me to join Giving What We Can and earn to give after college. Hundreds of other people watched the same talk, in the same auditorium, and didn't do those things. There must be some other factor that made me especially receptive to EA (that is, gave me "EA-flavored goals").Of course, there are actually dozens of factors, because life is complicated. For example:I speak English, and live in a country that had EA presence early on.My parents are empathetic people who shaped me into an empathetic person.My parents are financially secure people who didn't need my support, so I could easily afford to give 10% of my income.I went to the kind of college where Peter Singer gives talks and students advertise HPMOR to total strangers.I saw a movie called Life in a Day.These are hard to replicate, except the last one, which anyone can do right now:The English (foreign language) subtitles seem broken, but English (full text) works.How the film worksYouTube decided to make a documentary. They asked people to film themselves on July 24th, 2010, and share the footage. They received 80,000 submissions and 4500 hours of film from 192 countries. They used it to make a 90-minute film about life on Earth.We start and end at midnight. For each part of the day, we jump around the world to see what people are doing. Because people are similar, we see similar actions, in parallel.For example, the sun comes up eight minutes into the movie. For the next 90 seconds, we watch people wake up. Some have alarms; some have roosters; some rise with the sun. Some are woken up by parents or lovers. Others wake up alone. One person sleeps on the street and wakes to the passing of cars. Everyone is different. But we all wake up.Submitters also had the option to answer questions: "What do you love? What do you fear? What do you have in your pockets or handbag?"We get three minutes of fear (ten for love). People are afraid of ghosts, spiders, lions, and small noises in the middle of the night. They are afraid of God, Hell, and people different from themselves. They are afraid of losing childhood, losing their hair, and losing the people they love. We are all afraid of something.This is what Life in a Day shows, over and over: In so many ways, we are the same.What the film did to meThis isn't a new idea. You can hear echoes of it in the Golden Rule, "all men are created equal", and "workers of the world, unite!".But I didn't really feel the idea until I saw Life in a Day, which conveys it more powerfully than anything else I've ever seen. I felt strange on the drive home; I could no longer look at the world in the same way.When I later heard about a philosophy that was dedicated to helping people as much as possible, no matter where they lived, it struck a chord. Without Life in a Day, I'm not sure I'd have felt the same deep sense of "yes, this is obviously righ...]]>
Aaron Gertler https://forum.effectivealtruism.org/posts/NBZr5rGp35YyjABhw/life-in-a-day-the-film-that-opened-my-heart-to-effective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Life in a Day: The film that opened my heart to effective altruism, published by Aaron Gertler on April 27, 2023 on The Effective Altruism Forum.There are at least two kinds of "EA origin story":Some people discover it for themselves. They have EA-flavored goals, they Google something like "do the most good", and they find EA.Other people are introduced by others. They have EA-flavored goals, but they don't find EA until someone tells them about it.My story is the second kind. I saw Peter Singer give a talk, which I attended because I'd read GiveWell's website, which I'd seen on LessWrong, which I heard about from Harry Potter and the Methods of Rationality.So I owe part of my ethical system to the guy who heard me discussing Harry Potter in a college dining hall, interrupted my conversation, gave me a five-minute pitch for HPMOR, and left. (I never saw him again.)But what about the rest of the story?Peter Singer convinced me to join Giving What We Can and earn to give after college. Hundreds of other people watched the same talk, in the same auditorium, and didn't do those things. There must be some other factor that made me especially receptive to EA (that is, gave me "EA-flavored goals").Of course, there are actually dozens of factors, because life is complicated. For example:I speak English, and live in a country that had EA presence early on.My parents are empathetic people who shaped me into an empathetic person.My parents are financially secure people who didn't need my support, so I could easily afford to give 10% of my income.I went to the kind of college where Peter Singer gives talks and students advertise HPMOR to total strangers.I saw a movie called Life in a Day.These are hard to replicate, except the last one, which anyone can do right now:The English (foreign language) subtitles seem broken, but English (full text) works.How the film worksYouTube decided to make a documentary. They asked people to film themselves on July 24th, 2010, and share the footage. They received 80,000 submissions and 4500 hours of film from 192 countries. They used it to make a 90-minute film about life on Earth.We start and end at midnight. For each part of the day, we jump around the world to see what people are doing. Because people are similar, we see similar actions, in parallel.For example, the sun comes up eight minutes into the movie. For the next 90 seconds, we watch people wake up. Some have alarms; some have roosters; some rise with the sun. Some are woken up by parents or lovers. Others wake up alone. One person sleeps on the street and wakes to the passing of cars. Everyone is different. But we all wake up.Submitters also had the option to answer questions: "What do you love? What do you fear? What do you have in your pockets or handbag?"We get three minutes of fear (ten for love). People are afraid of ghosts, spiders, lions, and small noises in the middle of the night. They are afraid of God, Hell, and people different from themselves. They are afraid of losing childhood, losing their hair, and losing the people they love. We are all afraid of something.This is what Life in a Day shows, over and over: In so many ways, we are the same.What the film did to meThis isn't a new idea. You can hear echoes of it in the Golden Rule, "all men are created equal", and "workers of the world, unite!".But I didn't really feel the idea until I saw Life in a Day, which conveys it more powerfully than anything else I've ever seen. I felt strange on the drive home; I could no longer look at the world in the same way.When I later heard about a philosophy that was dedicated to helping people as much as possible, no matter where they lived, it struck a chord. Without Life in a Day, I'm not sure I'd have felt the same deep sense of "yes, this is obviously righ...]]>
Fri, 28 Apr 2023 00:01:48 +0000 EA - Life in a Day: The film that opened my heart to effective altruism by Aaron Gertler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Life in a Day: The film that opened my heart to effective altruism, published by Aaron Gertler on April 27, 2023 on The Effective Altruism Forum.There are at least two kinds of "EA origin story":Some people discover it for themselves. They have EA-flavored goals, they Google something like "do the most good", and they find EA.Other people are introduced by others. They have EA-flavored goals, but they don't find EA until someone tells them about it.My story is the second kind. I saw Peter Singer give a talk, which I attended because I'd read GiveWell's website, which I'd seen on LessWrong, which I heard about from Harry Potter and the Methods of Rationality.So I owe part of my ethical system to the guy who heard me discussing Harry Potter in a college dining hall, interrupted my conversation, gave me a five-minute pitch for HPMOR, and left. (I never saw him again.)But what about the rest of the story?Peter Singer convinced me to join Giving What We Can and earn to give after college. Hundreds of other people watched the same talk, in the same auditorium, and didn't do those things. There must be some other factor that made me especially receptive to EA (that is, gave me "EA-flavored goals").Of course, there are actually dozens of factors, because life is complicated. For example:I speak English, and live in a country that had EA presence early on.My parents are empathetic people who shaped me into an empathetic person.My parents are financially secure people who didn't need my support, so I could easily afford to give 10% of my income.I went to the kind of college where Peter Singer gives talks and students advertise HPMOR to total strangers.I saw a movie called Life in a Day.These are hard to replicate, except the last one, which anyone can do right now:The English (foreign language) subtitles seem broken, but English (full text) works.How the film worksYouTube decided to make a documentary. They asked people to film themselves on July 24th, 2010, and share the footage. They received 80,000 submissions and 4500 hours of film from 192 countries. They used it to make a 90-minute film about life on Earth.We start and end at midnight. For each part of the day, we jump around the world to see what people are doing. Because people are similar, we see similar actions, in parallel.For example, the sun comes up eight minutes into the movie. For the next 90 seconds, we watch people wake up. Some have alarms; some have roosters; some rise with the sun. Some are woken up by parents or lovers. Others wake up alone. One person sleeps on the street and wakes to the passing of cars. Everyone is different. But we all wake up.Submitters also had the option to answer questions: "What do you love? What do you fear? What do you have in your pockets or handbag?"We get three minutes of fear (ten for love). People are afraid of ghosts, spiders, lions, and small noises in the middle of the night. They are afraid of God, Hell, and people different from themselves. They are afraid of losing childhood, losing their hair, and losing the people they love. We are all afraid of something.This is what Life in a Day shows, over and over: In so many ways, we are the same.What the film did to meThis isn't a new idea. You can hear echoes of it in the Golden Rule, "all men are created equal", and "workers of the world, unite!".But I didn't really feel the idea until I saw Life in a Day, which conveys it more powerfully than anything else I've ever seen. I felt strange on the drive home; I could no longer look at the world in the same way.When I later heard about a philosophy that was dedicated to helping people as much as possible, no matter where they lived, it struck a chord. Without Life in a Day, I'm not sure I'd have felt the same deep sense of "yes, this is obviously righ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Life in a Day: The film that opened my heart to effective altruism, published by Aaron Gertler on April 27, 2023 on The Effective Altruism Forum.There are at least two kinds of "EA origin story":Some people discover it for themselves. They have EA-flavored goals, they Google something like "do the most good", and they find EA.Other people are introduced by others. They have EA-flavored goals, but they don't find EA until someone tells them about it.My story is the second kind. I saw Peter Singer give a talk, which I attended because I'd read GiveWell's website, which I'd seen on LessWrong, which I heard about from Harry Potter and the Methods of Rationality.So I owe part of my ethical system to the guy who heard me discussing Harry Potter in a college dining hall, interrupted my conversation, gave me a five-minute pitch for HPMOR, and left. (I never saw him again.)But what about the rest of the story?Peter Singer convinced me to join Giving What We Can and earn to give after college. Hundreds of other people watched the same talk, in the same auditorium, and didn't do those things. There must be some other factor that made me especially receptive to EA (that is, gave me "EA-flavored goals").Of course, there are actually dozens of factors, because life is complicated. For example:I speak English, and live in a country that had EA presence early on.My parents are empathetic people who shaped me into an empathetic person.My parents are financially secure people who didn't need my support, so I could easily afford to give 10% of my income.I went to the kind of college where Peter Singer gives talks and students advertise HPMOR to total strangers.I saw a movie called Life in a Day.These are hard to replicate, except the last one, which anyone can do right now:The English (foreign language) subtitles seem broken, but English (full text) works.How the film worksYouTube decided to make a documentary. They asked people to film themselves on July 24th, 2010, and share the footage. They received 80,000 submissions and 4500 hours of film from 192 countries. They used it to make a 90-minute film about life on Earth.We start and end at midnight. For each part of the day, we jump around the world to see what people are doing. Because people are similar, we see similar actions, in parallel.For example, the sun comes up eight minutes into the movie. For the next 90 seconds, we watch people wake up. Some have alarms; some have roosters; some rise with the sun. Some are woken up by parents or lovers. Others wake up alone. One person sleeps on the street and wakes to the passing of cars. Everyone is different. But we all wake up.Submitters also had the option to answer questions: "What do you love? What do you fear? What do you have in your pockets or handbag?"We get three minutes of fear (ten for love). People are afraid of ghosts, spiders, lions, and small noises in the middle of the night. They are afraid of God, Hell, and people different from themselves. They are afraid of losing childhood, losing their hair, and losing the people they love. We are all afraid of something.This is what Life in a Day shows, over and over: In so many ways, we are the same.What the film did to meThis isn't a new idea. You can hear echoes of it in the Golden Rule, "all men are created equal", and "workers of the world, unite!".But I didn't really feel the idea until I saw Life in a Day, which conveys it more powerfully than anything else I've ever seen. I felt strange on the drive home; I could no longer look at the world in the same way.When I later heard about a philosophy that was dedicated to helping people as much as possible, no matter where they lived, it struck a chord. Without Life in a Day, I'm not sure I'd have felt the same deep sense of "yes, this is obviously righ...]]>
Aaron Gertler https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:57 None full 5761
vWRP8g8pqN9np4Aow_NL_EA_EA EA - What are work practices that you’ve adopted that you now think are underrated? by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are work practices that you’ve adopted that you now think are underrated?, published by Lizka on April 27, 2023 on The Effective Altruism Forum.Basically every time I’ve worked with new people or on a new kind of project, I’ve learned a practice or method that now seems quite important to how I work. I want to see if we can crowd-source more (and discuss them).So share things you’ve learned! I’m sharing some as answers on the thread.Note: Please don’t hesitate to share things that you think are common. I expect that fewer people know about them than you might think — especially if you’re from a field or industry where the practice is normal. (Relevant xkcd comic.)See also:Are there robustly good and disputable leadership practices?Personal development and practical advice pages on the Forum(There's probably more relevant content — please feel free to let me know!)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://forum.effectivealtruism.org/posts/vWRP8g8pqN9np4Aow/what-are-work-practices-that-you-ve-adopted-that-you-now Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are work practices that you’ve adopted that you now think are underrated?, published by Lizka on April 27, 2023 on The Effective Altruism Forum.Basically every time I’ve worked with new people or on a new kind of project, I’ve learned a practice or method that now seems quite important to how I work. I want to see if we can crowd-source more (and discuss them).So share things you’ve learned! I’m sharing some as answers on the thread.Note: Please don’t hesitate to share things that you think are common. I expect that fewer people know about them than you might think — especially if you’re from a field or industry where the practice is normal. (Relevant xkcd comic.)See also:Are there robustly good and disputable leadership practices?Personal development and practical advice pages on the Forum(There's probably more relevant content — please feel free to let me know!)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 27 Apr 2023 22:45:41 +0000 EA - What are work practices that you’ve adopted that you now think are underrated? by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are work practices that you’ve adopted that you now think are underrated?, published by Lizka on April 27, 2023 on The Effective Altruism Forum.Basically every time I’ve worked with new people or on a new kind of project, I’ve learned a practice or method that now seems quite important to how I work. I want to see if we can crowd-source more (and discuss them).So share things you’ve learned! I’m sharing some as answers on the thread.Note: Please don’t hesitate to share things that you think are common. I expect that fewer people know about them than you might think — especially if you’re from a field or industry where the practice is normal. (Relevant xkcd comic.)See also:Are there robustly good and disputable leadership practices?Personal development and practical advice pages on the Forum(There's probably more relevant content — please feel free to let me know!)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are work practices that you’ve adopted that you now think are underrated?, published by Lizka on April 27, 2023 on The Effective Altruism Forum.Basically every time I’ve worked with new people or on a new kind of project, I’ve learned a practice or method that now seems quite important to how I work. I want to see if we can crowd-source more (and discuss them).So share things you’ve learned! I’m sharing some as answers on the thread.Note: Please don’t hesitate to share things that you think are common. I expect that fewer people know about them than you might think — especially if you’re from a field or industry where the practice is normal. (Relevant xkcd comic.)See also:Are there robustly good and disputable leadership practices?Personal development and practical advice pages on the Forum(There's probably more relevant content — please feel free to let me know!)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:01 None full 5759
Dq69kvjKyxQzKNRH7_NL_EA_EA EA - Seeking expertise to improve EA organizations by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Seeking expertise to improve EA organizations, published by Julia Wise on April 27, 2023 on The Effective Altruism Forum.There’s been a lot of interest in how EA might reform, both in response to the FTX crisis and in general.We’re working on a project where a task force (exact members TBD, but including us and some others from across the EA ecosystem) will sort through different ideas for reforms that EA organizations might enact, and try to recommend the most promising ideas. This project doesn’t aim to be a retrospective on what happened with FTX, and won’t address all problems in EA, but we hope to make progress on some of them.The output will likely be a set of recommendations to EA organizations, while recognizing that different practices will make sense for different organizations depending on their goals, size, and circumstances. Those recommendations might look like best practices for board composition, a proposed whistleblowing mechanism, etc.As part of this process, we want to gather ideas and best practices from people who know a lot about areas outside EA. We’d like your help in connecting with those people! We’re particularly interested in people with knowledge of these areas:Whistleblowing systemsNonprofit boardsConflict of interest policiesOrganization and management of sizeable communitiesWe can’t promise this task force will talk to everyone who’s suggested, but we’d welcome your input on this form. Thank you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Julia Wise https://forum.effectivealtruism.org/posts/Dq69kvjKyxQzKNRH7/seeking-expertise-to-improve-ea-organizations Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Seeking expertise to improve EA organizations, published by Julia Wise on April 27, 2023 on The Effective Altruism Forum.There’s been a lot of interest in how EA might reform, both in response to the FTX crisis and in general.We’re working on a project where a task force (exact members TBD, but including us and some others from across the EA ecosystem) will sort through different ideas for reforms that EA organizations might enact, and try to recommend the most promising ideas. This project doesn’t aim to be a retrospective on what happened with FTX, and won’t address all problems in EA, but we hope to make progress on some of them.The output will likely be a set of recommendations to EA organizations, while recognizing that different practices will make sense for different organizations depending on their goals, size, and circumstances. Those recommendations might look like best practices for board composition, a proposed whistleblowing mechanism, etc.As part of this process, we want to gather ideas and best practices from people who know a lot about areas outside EA. We’d like your help in connecting with those people! We’re particularly interested in people with knowledge of these areas:Whistleblowing systemsNonprofit boardsConflict of interest policiesOrganization and management of sizeable communitiesWe can’t promise this task force will talk to everyone who’s suggested, but we’d welcome your input on this form. Thank you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 27 Apr 2023 21:32:51 +0000 EA - Seeking expertise to improve EA organizations by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Seeking expertise to improve EA organizations, published by Julia Wise on April 27, 2023 on The Effective Altruism Forum.There’s been a lot of interest in how EA might reform, both in response to the FTX crisis and in general.We’re working on a project where a task force (exact members TBD, but including us and some others from across the EA ecosystem) will sort through different ideas for reforms that EA organizations might enact, and try to recommend the most promising ideas. This project doesn’t aim to be a retrospective on what happened with FTX, and won’t address all problems in EA, but we hope to make progress on some of them.The output will likely be a set of recommendations to EA organizations, while recognizing that different practices will make sense for different organizations depending on their goals, size, and circumstances. Those recommendations might look like best practices for board composition, a proposed whistleblowing mechanism, etc.As part of this process, we want to gather ideas and best practices from people who know a lot about areas outside EA. We’d like your help in connecting with those people! We’re particularly interested in people with knowledge of these areas:Whistleblowing systemsNonprofit boardsConflict of interest policiesOrganization and management of sizeable communitiesWe can’t promise this task force will talk to everyone who’s suggested, but we’d welcome your input on this form. Thank you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Seeking expertise to improve EA organizations, published by Julia Wise on April 27, 2023 on The Effective Altruism Forum.There’s been a lot of interest in how EA might reform, both in response to the FTX crisis and in general.We’re working on a project where a task force (exact members TBD, but including us and some others from across the EA ecosystem) will sort through different ideas for reforms that EA organizations might enact, and try to recommend the most promising ideas. This project doesn’t aim to be a retrospective on what happened with FTX, and won’t address all problems in EA, but we hope to make progress on some of them.The output will likely be a set of recommendations to EA organizations, while recognizing that different practices will make sense for different organizations depending on their goals, size, and circumstances. Those recommendations might look like best practices for board composition, a proposed whistleblowing mechanism, etc.As part of this process, we want to gather ideas and best practices from people who know a lot about areas outside EA. We’d like your help in connecting with those people! We’re particularly interested in people with knowledge of these areas:Whistleblowing systemsNonprofit boardsConflict of interest policiesOrganization and management of sizeable communitiesWe can’t promise this task force will talk to everyone who’s suggested, but we’d welcome your input on this form. Thank you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Julia Wise https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:34 None full 5758
ayihE4wMte8SpSTBa_NL_EA_EA EA - Set up a co-op to donate your rent by Ben Dunn-Flores Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Set up a co-op to donate your rent, published by Ben Dunn-Flores on April 27, 2023 on The Effective Altruism Forum.London EAs pay £140m in rent. £56m of that is landlord profit which could be donated effectively. Enough money to save 10,000 lives per year is lying on the table.If you pay £1000 per month in rent, you could save a life each year -- without ruining any coats.Housing co-operatives enable their tenants to donate their rent, rather than it going into a landlords’ pocket. We believe this could unlock large additional donations, even among people who do not earn high salaries.We can help you set up a co-op to do this in the UK. The process is:Set up an asset-locked housing co-op (I can do this for you)Arrange the mortgage and deposit (I can help with this too)Pay rent to your co-op, which donates the money to effective causesThis is a guide to doing it yourself. It's not easy, but several groups have managed it in London. I'm building a tech company, Roost, to make it easy, but we can arrange a call without you needing to use our product.Note: impact measurementI estimate there are ~10,000 EAs renting in London, paying £1200pm on average. Landlord profit margins after costs and debt are about 40%, leading to the figure of £56m which could be donated effectively. Calculations in footnotes.Setting up a co-opStep 1: Find a homeThis is a guide to setting up a house-share without a landlord, where the surplus rent can go to effective causes. A housing co-operative is the ideal legal form for this – it has existed for hundreds of years and co-ops are very common in Europe.Co-ops work best with 3-6 people – fewer, and it’s often more expensive than renting; more, and planning permission is required. This form of housing would usually require a license, but co-operatives are exempt.Number of people in co-op:1-23-67+Do I need planning permission?NeverSometimesAlwaysIs it cheaper to do as a co-op than to rent?SometimesUsuallyAlwaysWe’ve been working on a property search tool to show how much rent you would pay for a given property. Let us know if you want free access.The process for finding a home to buy is the same as renting: search online, book viewings, and make an offer. However, it takes longer after this stage: usually around 12 weeks. There is more that can go wrong, so we recommend putting in two offers to make sure you get at least one.Step 2: Set up a co-opThere several legal forms for co-ops. At Roost, we use non-equity co-ops. We think this optimises for quality of life and allows large donations to effective charities.This is where the DIY approach differs from how we do it. If you're going the DIY route, I recommend using the model documents for a Fully Mutual Housing Co-operative at cch.coop.These model rules include an asset lock. This ensures that anything the co-op owns must eventually benefit a charity. The asset lock in clause 130 will need to be amended to specify an effective charity.Step 3: Get a mortgageEcology Building Society will usually give you a mortgage. They are more expensive than commercial lenders, charging a ~6% variable rate with some discounts for environmental upgrades. Most successful co-ops work with Ecology BS.For context, a comparable commercial mortgage is at ~4.5%, but co-operatives can’t access these directly.Roost uses a non-profit land trust that owns the property via a limited company. It is more complex, but it allows the co-op to access the cheaper commercial capital.Step 4: Get a depositPutting in your own moneyYou can put your money in as a donation to the co-op, in which case this is easy. If these are your savings and you need them back to live, then you can structure it as debt. There are model documents to do this.External debtCo-op Finance can help with up to £150k for the deposit...]]>
Ben Dunn-Flores https://forum.effectivealtruism.org/posts/ayihE4wMte8SpSTBa/set-up-a-co-op-to-donate-your-rent Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Set up a co-op to donate your rent, published by Ben Dunn-Flores on April 27, 2023 on The Effective Altruism Forum.London EAs pay £140m in rent. £56m of that is landlord profit which could be donated effectively. Enough money to save 10,000 lives per year is lying on the table.If you pay £1000 per month in rent, you could save a life each year -- without ruining any coats.Housing co-operatives enable their tenants to donate their rent, rather than it going into a landlords’ pocket. We believe this could unlock large additional donations, even among people who do not earn high salaries.We can help you set up a co-op to do this in the UK. The process is:Set up an asset-locked housing co-op (I can do this for you)Arrange the mortgage and deposit (I can help with this too)Pay rent to your co-op, which donates the money to effective causesThis is a guide to doing it yourself. It's not easy, but several groups have managed it in London. I'm building a tech company, Roost, to make it easy, but we can arrange a call without you needing to use our product.Note: impact measurementI estimate there are ~10,000 EAs renting in London, paying £1200pm on average. Landlord profit margins after costs and debt are about 40%, leading to the figure of £56m which could be donated effectively. Calculations in footnotes.Setting up a co-opStep 1: Find a homeThis is a guide to setting up a house-share without a landlord, where the surplus rent can go to effective causes. A housing co-operative is the ideal legal form for this – it has existed for hundreds of years and co-ops are very common in Europe.Co-ops work best with 3-6 people – fewer, and it’s often more expensive than renting; more, and planning permission is required. This form of housing would usually require a license, but co-operatives are exempt.Number of people in co-op:1-23-67+Do I need planning permission?NeverSometimesAlwaysIs it cheaper to do as a co-op than to rent?SometimesUsuallyAlwaysWe’ve been working on a property search tool to show how much rent you would pay for a given property. Let us know if you want free access.The process for finding a home to buy is the same as renting: search online, book viewings, and make an offer. However, it takes longer after this stage: usually around 12 weeks. There is more that can go wrong, so we recommend putting in two offers to make sure you get at least one.Step 2: Set up a co-opThere several legal forms for co-ops. At Roost, we use non-equity co-ops. We think this optimises for quality of life and allows large donations to effective charities.This is where the DIY approach differs from how we do it. If you're going the DIY route, I recommend using the model documents for a Fully Mutual Housing Co-operative at cch.coop.These model rules include an asset lock. This ensures that anything the co-op owns must eventually benefit a charity. The asset lock in clause 130 will need to be amended to specify an effective charity.Step 3: Get a mortgageEcology Building Society will usually give you a mortgage. They are more expensive than commercial lenders, charging a ~6% variable rate with some discounts for environmental upgrades. Most successful co-ops work with Ecology BS.For context, a comparable commercial mortgage is at ~4.5%, but co-operatives can’t access these directly.Roost uses a non-profit land trust that owns the property via a limited company. It is more complex, but it allows the co-op to access the cheaper commercial capital.Step 4: Get a depositPutting in your own moneyYou can put your money in as a donation to the co-op, in which case this is easy. If these are your savings and you need them back to live, then you can structure it as debt. There are model documents to do this.External debtCo-op Finance can help with up to £150k for the deposit...]]>
Thu, 27 Apr 2023 17:27:37 +0000 EA - Set up a co-op to donate your rent by Ben Dunn-Flores Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Set up a co-op to donate your rent, published by Ben Dunn-Flores on April 27, 2023 on The Effective Altruism Forum.London EAs pay £140m in rent. £56m of that is landlord profit which could be donated effectively. Enough money to save 10,000 lives per year is lying on the table.If you pay £1000 per month in rent, you could save a life each year -- without ruining any coats.Housing co-operatives enable their tenants to donate their rent, rather than it going into a landlords’ pocket. We believe this could unlock large additional donations, even among people who do not earn high salaries.We can help you set up a co-op to do this in the UK. The process is:Set up an asset-locked housing co-op (I can do this for you)Arrange the mortgage and deposit (I can help with this too)Pay rent to your co-op, which donates the money to effective causesThis is a guide to doing it yourself. It's not easy, but several groups have managed it in London. I'm building a tech company, Roost, to make it easy, but we can arrange a call without you needing to use our product.Note: impact measurementI estimate there are ~10,000 EAs renting in London, paying £1200pm on average. Landlord profit margins after costs and debt are about 40%, leading to the figure of £56m which could be donated effectively. Calculations in footnotes.Setting up a co-opStep 1: Find a homeThis is a guide to setting up a house-share without a landlord, where the surplus rent can go to effective causes. A housing co-operative is the ideal legal form for this – it has existed for hundreds of years and co-ops are very common in Europe.Co-ops work best with 3-6 people – fewer, and it’s often more expensive than renting; more, and planning permission is required. This form of housing would usually require a license, but co-operatives are exempt.Number of people in co-op:1-23-67+Do I need planning permission?NeverSometimesAlwaysIs it cheaper to do as a co-op than to rent?SometimesUsuallyAlwaysWe’ve been working on a property search tool to show how much rent you would pay for a given property. Let us know if you want free access.The process for finding a home to buy is the same as renting: search online, book viewings, and make an offer. However, it takes longer after this stage: usually around 12 weeks. There is more that can go wrong, so we recommend putting in two offers to make sure you get at least one.Step 2: Set up a co-opThere several legal forms for co-ops. At Roost, we use non-equity co-ops. We think this optimises for quality of life and allows large donations to effective charities.This is where the DIY approach differs from how we do it. If you're going the DIY route, I recommend using the model documents for a Fully Mutual Housing Co-operative at cch.coop.These model rules include an asset lock. This ensures that anything the co-op owns must eventually benefit a charity. The asset lock in clause 130 will need to be amended to specify an effective charity.Step 3: Get a mortgageEcology Building Society will usually give you a mortgage. They are more expensive than commercial lenders, charging a ~6% variable rate with some discounts for environmental upgrades. Most successful co-ops work with Ecology BS.For context, a comparable commercial mortgage is at ~4.5%, but co-operatives can’t access these directly.Roost uses a non-profit land trust that owns the property via a limited company. It is more complex, but it allows the co-op to access the cheaper commercial capital.Step 4: Get a depositPutting in your own moneyYou can put your money in as a donation to the co-op, in which case this is easy. If these are your savings and you need them back to live, then you can structure it as debt. There are model documents to do this.External debtCo-op Finance can help with up to £150k for the deposit...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Set up a co-op to donate your rent, published by Ben Dunn-Flores on April 27, 2023 on The Effective Altruism Forum.London EAs pay £140m in rent. £56m of that is landlord profit which could be donated effectively. Enough money to save 10,000 lives per year is lying on the table.If you pay £1000 per month in rent, you could save a life each year -- without ruining any coats.Housing co-operatives enable their tenants to donate their rent, rather than it going into a landlords’ pocket. We believe this could unlock large additional donations, even among people who do not earn high salaries.We can help you set up a co-op to do this in the UK. The process is:Set up an asset-locked housing co-op (I can do this for you)Arrange the mortgage and deposit (I can help with this too)Pay rent to your co-op, which donates the money to effective causesThis is a guide to doing it yourself. It's not easy, but several groups have managed it in London. I'm building a tech company, Roost, to make it easy, but we can arrange a call without you needing to use our product.Note: impact measurementI estimate there are ~10,000 EAs renting in London, paying £1200pm on average. Landlord profit margins after costs and debt are about 40%, leading to the figure of £56m which could be donated effectively. Calculations in footnotes.Setting up a co-opStep 1: Find a homeThis is a guide to setting up a house-share without a landlord, where the surplus rent can go to effective causes. A housing co-operative is the ideal legal form for this – it has existed for hundreds of years and co-ops are very common in Europe.Co-ops work best with 3-6 people – fewer, and it’s often more expensive than renting; more, and planning permission is required. This form of housing would usually require a license, but co-operatives are exempt.Number of people in co-op:1-23-67+Do I need planning permission?NeverSometimesAlwaysIs it cheaper to do as a co-op than to rent?SometimesUsuallyAlwaysWe’ve been working on a property search tool to show how much rent you would pay for a given property. Let us know if you want free access.The process for finding a home to buy is the same as renting: search online, book viewings, and make an offer. However, it takes longer after this stage: usually around 12 weeks. There is more that can go wrong, so we recommend putting in two offers to make sure you get at least one.Step 2: Set up a co-opThere several legal forms for co-ops. At Roost, we use non-equity co-ops. We think this optimises for quality of life and allows large donations to effective charities.This is where the DIY approach differs from how we do it. If you're going the DIY route, I recommend using the model documents for a Fully Mutual Housing Co-operative at cch.coop.These model rules include an asset lock. This ensures that anything the co-op owns must eventually benefit a charity. The asset lock in clause 130 will need to be amended to specify an effective charity.Step 3: Get a mortgageEcology Building Society will usually give you a mortgage. They are more expensive than commercial lenders, charging a ~6% variable rate with some discounts for environmental upgrades. Most successful co-ops work with Ecology BS.For context, a comparable commercial mortgage is at ~4.5%, but co-operatives can’t access these directly.Roost uses a non-profit land trust that owns the property via a limited company. It is more complex, but it allows the co-op to access the cheaper commercial capital.Step 4: Get a depositPutting in your own moneyYou can put your money in as a donation to the co-op, in which case this is easy. If these are your savings and you need them back to live, then you can structure it as debt. There are model documents to do this.External debtCo-op Finance can help with up to £150k for the deposit...]]>
Ben Dunn-Flores https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:29 None full 5754
JZEgmumeamzBAAprt_NL_EA_EA EA - How come there isn't that much focus in EA on research into whether / when AI's are likely to be sentient? by callum Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How come there isn't that much focus in EA on research into whether / when AI's are likely to be sentient?, published by callum on April 27, 2023 on The Effective Altruism Forum.As far as I know, there isn't that much funding or research in EA on AI sentience (though there is some? e.g. this)I can imagine some answers:Very intractableAlignment is more immediately the core challenge, and widening the focus isn't usefulFunders have a working view that additional research is unlikely to affect (e.g. that AIs will eventually be sentient?)Longtermist focus is on AI as an X-risk, and the main framing there is on avoiding humans being wiped outBut it also seems important and action-relevant:Current framing of AI safety is about aligning with humanity, but making AI go well for AI's could be comparably / more importantNaively, if we knew AIs would be sentient, it might make 'prioritising AIs welfare in AI development' a much higher impact focus areaIt's an example of an area that won't necessarily attract resources / attention from commercial sources(I'm not at all familiar with the area of AI sentience and posted without much googling, so please excuse any naivety in the question!)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
callum https://forum.effectivealtruism.org/posts/JZEgmumeamzBAAprt/how-come-there-isn-t-that-much-focus-in-ea-on-research-into Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How come there isn't that much focus in EA on research into whether / when AI's are likely to be sentient?, published by callum on April 27, 2023 on The Effective Altruism Forum.As far as I know, there isn't that much funding or research in EA on AI sentience (though there is some? e.g. this)I can imagine some answers:Very intractableAlignment is more immediately the core challenge, and widening the focus isn't usefulFunders have a working view that additional research is unlikely to affect (e.g. that AIs will eventually be sentient?)Longtermist focus is on AI as an X-risk, and the main framing there is on avoiding humans being wiped outBut it also seems important and action-relevant:Current framing of AI safety is about aligning with humanity, but making AI go well for AI's could be comparably / more importantNaively, if we knew AIs would be sentient, it might make 'prioritising AIs welfare in AI development' a much higher impact focus areaIt's an example of an area that won't necessarily attract resources / attention from commercial sources(I'm not at all familiar with the area of AI sentience and posted without much googling, so please excuse any naivety in the question!)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 27 Apr 2023 15:51:46 +0000 EA - How come there isn't that much focus in EA on research into whether / when AI's are likely to be sentient? by callum Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How come there isn't that much focus in EA on research into whether / when AI's are likely to be sentient?, published by callum on April 27, 2023 on The Effective Altruism Forum.As far as I know, there isn't that much funding or research in EA on AI sentience (though there is some? e.g. this)I can imagine some answers:Very intractableAlignment is more immediately the core challenge, and widening the focus isn't usefulFunders have a working view that additional research is unlikely to affect (e.g. that AIs will eventually be sentient?)Longtermist focus is on AI as an X-risk, and the main framing there is on avoiding humans being wiped outBut it also seems important and action-relevant:Current framing of AI safety is about aligning with humanity, but making AI go well for AI's could be comparably / more importantNaively, if we knew AIs would be sentient, it might make 'prioritising AIs welfare in AI development' a much higher impact focus areaIt's an example of an area that won't necessarily attract resources / attention from commercial sources(I'm not at all familiar with the area of AI sentience and posted without much googling, so please excuse any naivety in the question!)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How come there isn't that much focus in EA on research into whether / when AI's are likely to be sentient?, published by callum on April 27, 2023 on The Effective Altruism Forum.As far as I know, there isn't that much funding or research in EA on AI sentience (though there is some? e.g. this)I can imagine some answers:Very intractableAlignment is more immediately the core challenge, and widening the focus isn't usefulFunders have a working view that additional research is unlikely to affect (e.g. that AIs will eventually be sentient?)Longtermist focus is on AI as an X-risk, and the main framing there is on avoiding humans being wiped outBut it also seems important and action-relevant:Current framing of AI safety is about aligning with humanity, but making AI go well for AI's could be comparably / more importantNaively, if we knew AIs would be sentient, it might make 'prioritising AIs welfare in AI development' a much higher impact focus areaIt's an example of an area that won't necessarily attract resources / attention from commercial sources(I'm not at all familiar with the area of AI sentience and posted without much googling, so please excuse any naivety in the question!)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
callum https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:21 None full 5752
ef4Cm7W5CjXibygCv_NL_EA_EA EA - Story of a career/mental health failure by zekesherman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Story of a career/mental health failure, published by zekesherman on April 27, 2023 on The Effective Altruism Forum.I don't know if it should be considered important as it's only a single data point, but I want to share the story of how my EA career choice and mental health went terribly wrong.My career choiceIn college I was strongly motivated to follow the most utilitarian career path. In my junior year I decided to pursue investment banking for earning to give. As someone who had a merely good GPA and did not attend a top university, this would have been difficult, but I pushed hard for networking and recruiting, and one professional told me I had a 50-50 chance of becoming an investment banking analyst right out of college (privately I thought he was a bit too optimistic). Otherwise, I would have to get some lower-paying job in finance, and hopefully move into banking at a later date.However I increasingly turned against the idea of earning to give, for two major reasons. First, 80,000 Hours and other people in the community said that EA was more talent- rather than funding-constrained, and that earning to give was overrated. Second, more specifically, people in the EA community informed me that program managers in government and philanthropy controlled much higher budgets than I could reasonably expect to earn. Basically, it appeared easier to become in charge of effectively allocating $5 million of other people's money, compared to earning a $500,000 salary for oneself. Earning to give meant freer control of funding, but program management meant a greater budget. While I read 80k Hours' article on program management, I was most persuaded by advice I got from Jason Gaverick Matheny and Carl Shulman, and also a few non-EA people I met from the OSTP and other government agencies, who had more specific knowledge and advice.It seemed that program management in science and technology (especially AI, biotechnology, etc) was the best career path. And the best way to achieve it seemed to be starting with graduate education in science and technology, ideally a PhD (I decided on computer science, partly because it gave the most flexibility to work on a wide variety of cause areas). Finally, the nail in the coffin for my finance ambitions was an EA Global conference where Will MacAskill said to think less about finding a career that was individually impactful, and think more about leveraging your unique strengths to bring something new to the table for the EA community. While computer science wasn't rare in EA, I thought I could be special by leveraging my military background and pursuing a more cybersecurity- or defense-related career, which was neglected in EA.Still, I had two problems to overcome for this career path. The first problem was that I was an econ major and had a lot of catching up to do in order to pursue advanced computer science. The second problem was that it wasn't as good of a personal fit for me compared to finance. I've always found programming and advanced mathematics to be somewhat painful and difficult to learn, whereas investment banking seemed more readily engaging. And 80k Hours as well as the rest of the community gave me ample warnings about how personal fit was very, very important. I disregarded these warnings about personal fit for several reasons:I'd always been more resilient and scrupulous compared to other people and other members of the EA community. Things like living on a poverty budget and serving in the military, which many other members of the EA community have considered intolerable or unsustainable for mental health, were fine for me. As one of the more "hardcore" EAs, I generally regarded warnings of burnout as being overblown or at least less applicable to someone like me, and I suspected that a lot of people in the EA ...]]>
zekesherman https://forum.effectivealtruism.org/posts/ef4Cm7W5CjXibygCv/story-of-a-career-mental-health-failure Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Story of a career/mental health failure, published by zekesherman on April 27, 2023 on The Effective Altruism Forum.I don't know if it should be considered important as it's only a single data point, but I want to share the story of how my EA career choice and mental health went terribly wrong.My career choiceIn college I was strongly motivated to follow the most utilitarian career path. In my junior year I decided to pursue investment banking for earning to give. As someone who had a merely good GPA and did not attend a top university, this would have been difficult, but I pushed hard for networking and recruiting, and one professional told me I had a 50-50 chance of becoming an investment banking analyst right out of college (privately I thought he was a bit too optimistic). Otherwise, I would have to get some lower-paying job in finance, and hopefully move into banking at a later date.However I increasingly turned against the idea of earning to give, for two major reasons. First, 80,000 Hours and other people in the community said that EA was more talent- rather than funding-constrained, and that earning to give was overrated. Second, more specifically, people in the EA community informed me that program managers in government and philanthropy controlled much higher budgets than I could reasonably expect to earn. Basically, it appeared easier to become in charge of effectively allocating $5 million of other people's money, compared to earning a $500,000 salary for oneself. Earning to give meant freer control of funding, but program management meant a greater budget. While I read 80k Hours' article on program management, I was most persuaded by advice I got from Jason Gaverick Matheny and Carl Shulman, and also a few non-EA people I met from the OSTP and other government agencies, who had more specific knowledge and advice.It seemed that program management in science and technology (especially AI, biotechnology, etc) was the best career path. And the best way to achieve it seemed to be starting with graduate education in science and technology, ideally a PhD (I decided on computer science, partly because it gave the most flexibility to work on a wide variety of cause areas). Finally, the nail in the coffin for my finance ambitions was an EA Global conference where Will MacAskill said to think less about finding a career that was individually impactful, and think more about leveraging your unique strengths to bring something new to the table for the EA community. While computer science wasn't rare in EA, I thought I could be special by leveraging my military background and pursuing a more cybersecurity- or defense-related career, which was neglected in EA.Still, I had two problems to overcome for this career path. The first problem was that I was an econ major and had a lot of catching up to do in order to pursue advanced computer science. The second problem was that it wasn't as good of a personal fit for me compared to finance. I've always found programming and advanced mathematics to be somewhat painful and difficult to learn, whereas investment banking seemed more readily engaging. And 80k Hours as well as the rest of the community gave me ample warnings about how personal fit was very, very important. I disregarded these warnings about personal fit for several reasons:I'd always been more resilient and scrupulous compared to other people and other members of the EA community. Things like living on a poverty budget and serving in the military, which many other members of the EA community have considered intolerable or unsustainable for mental health, were fine for me. As one of the more "hardcore" EAs, I generally regarded warnings of burnout as being overblown or at least less applicable to someone like me, and I suspected that a lot of people in the EA ...]]>
Thu, 27 Apr 2023 08:46:18 +0000 EA - Story of a career/mental health failure by zekesherman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Story of a career/mental health failure, published by zekesherman on April 27, 2023 on The Effective Altruism Forum.I don't know if it should be considered important as it's only a single data point, but I want to share the story of how my EA career choice and mental health went terribly wrong.My career choiceIn college I was strongly motivated to follow the most utilitarian career path. In my junior year I decided to pursue investment banking for earning to give. As someone who had a merely good GPA and did not attend a top university, this would have been difficult, but I pushed hard for networking and recruiting, and one professional told me I had a 50-50 chance of becoming an investment banking analyst right out of college (privately I thought he was a bit too optimistic). Otherwise, I would have to get some lower-paying job in finance, and hopefully move into banking at a later date.However I increasingly turned against the idea of earning to give, for two major reasons. First, 80,000 Hours and other people in the community said that EA was more talent- rather than funding-constrained, and that earning to give was overrated. Second, more specifically, people in the EA community informed me that program managers in government and philanthropy controlled much higher budgets than I could reasonably expect to earn. Basically, it appeared easier to become in charge of effectively allocating $5 million of other people's money, compared to earning a $500,000 salary for oneself. Earning to give meant freer control of funding, but program management meant a greater budget. While I read 80k Hours' article on program management, I was most persuaded by advice I got from Jason Gaverick Matheny and Carl Shulman, and also a few non-EA people I met from the OSTP and other government agencies, who had more specific knowledge and advice.It seemed that program management in science and technology (especially AI, biotechnology, etc) was the best career path. And the best way to achieve it seemed to be starting with graduate education in science and technology, ideally a PhD (I decided on computer science, partly because it gave the most flexibility to work on a wide variety of cause areas). Finally, the nail in the coffin for my finance ambitions was an EA Global conference where Will MacAskill said to think less about finding a career that was individually impactful, and think more about leveraging your unique strengths to bring something new to the table for the EA community. While computer science wasn't rare in EA, I thought I could be special by leveraging my military background and pursuing a more cybersecurity- or defense-related career, which was neglected in EA.Still, I had two problems to overcome for this career path. The first problem was that I was an econ major and had a lot of catching up to do in order to pursue advanced computer science. The second problem was that it wasn't as good of a personal fit for me compared to finance. I've always found programming and advanced mathematics to be somewhat painful and difficult to learn, whereas investment banking seemed more readily engaging. And 80k Hours as well as the rest of the community gave me ample warnings about how personal fit was very, very important. I disregarded these warnings about personal fit for several reasons:I'd always been more resilient and scrupulous compared to other people and other members of the EA community. Things like living on a poverty budget and serving in the military, which many other members of the EA community have considered intolerable or unsustainable for mental health, were fine for me. As one of the more "hardcore" EAs, I generally regarded warnings of burnout as being overblown or at least less applicable to someone like me, and I suspected that a lot of people in the EA ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Story of a career/mental health failure, published by zekesherman on April 27, 2023 on The Effective Altruism Forum.I don't know if it should be considered important as it's only a single data point, but I want to share the story of how my EA career choice and mental health went terribly wrong.My career choiceIn college I was strongly motivated to follow the most utilitarian career path. In my junior year I decided to pursue investment banking for earning to give. As someone who had a merely good GPA and did not attend a top university, this would have been difficult, but I pushed hard for networking and recruiting, and one professional told me I had a 50-50 chance of becoming an investment banking analyst right out of college (privately I thought he was a bit too optimistic). Otherwise, I would have to get some lower-paying job in finance, and hopefully move into banking at a later date.However I increasingly turned against the idea of earning to give, for two major reasons. First, 80,000 Hours and other people in the community said that EA was more talent- rather than funding-constrained, and that earning to give was overrated. Second, more specifically, people in the EA community informed me that program managers in government and philanthropy controlled much higher budgets than I could reasonably expect to earn. Basically, it appeared easier to become in charge of effectively allocating $5 million of other people's money, compared to earning a $500,000 salary for oneself. Earning to give meant freer control of funding, but program management meant a greater budget. While I read 80k Hours' article on program management, I was most persuaded by advice I got from Jason Gaverick Matheny and Carl Shulman, and also a few non-EA people I met from the OSTP and other government agencies, who had more specific knowledge and advice.It seemed that program management in science and technology (especially AI, biotechnology, etc) was the best career path. And the best way to achieve it seemed to be starting with graduate education in science and technology, ideally a PhD (I decided on computer science, partly because it gave the most flexibility to work on a wide variety of cause areas). Finally, the nail in the coffin for my finance ambitions was an EA Global conference where Will MacAskill said to think less about finding a career that was individually impactful, and think more about leveraging your unique strengths to bring something new to the table for the EA community. While computer science wasn't rare in EA, I thought I could be special by leveraging my military background and pursuing a more cybersecurity- or defense-related career, which was neglected in EA.Still, I had two problems to overcome for this career path. The first problem was that I was an econ major and had a lot of catching up to do in order to pursue advanced computer science. The second problem was that it wasn't as good of a personal fit for me compared to finance. I've always found programming and advanced mathematics to be somewhat painful and difficult to learn, whereas investment banking seemed more readily engaging. And 80k Hours as well as the rest of the community gave me ample warnings about how personal fit was very, very important. I disregarded these warnings about personal fit for several reasons:I'd always been more resilient and scrupulous compared to other people and other members of the EA community. Things like living on a poverty budget and serving in the military, which many other members of the EA community have considered intolerable or unsustainable for mental health, were fine for me. As one of the more "hardcore" EAs, I generally regarded warnings of burnout as being overblown or at least less applicable to someone like me, and I suspected that a lot of people in the EA ...]]>
zekesherman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:54 None full 5750
sSGdKNPDEupfcoHNN_NL_EA_EA EA - Current plans as the incoming director of the Global Priorities Institute by Eva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Current plans as the incoming director of the Global Priorities Institute, published by Eva on April 26, 2023 on The Effective Altruism Forum.Cross-posted from my blog.I am taking leave from the University of Toronto to serve as the Director of the Global Priorities Institute (GPI) at the University of Oxford. I can't express enough gratitude to the University of Toronto for enabling this. (I'll be back in the fall to fulfill my teaching obligations, though - keep inviting me to seminars and such!)GPI is an interdisciplinary research institute focusing on academic research that informs decision-makers on how to do good more effectively. In its first few years, under the leadership of its founding director, Hilary Greaves, GPI created and grew a community of academics in philosophy and economics interested in global priorities research. I am excited to build from this strong foundation and, in particular, to further develop the economics side.There are several areas I would like to focus on while at GPI. The below items reflect my current views, however, I expect these views to be refined over time. These items are not intended to be an exhaustive list, but they are things I would like GPI to do more of on the margin.1) Research on decision-making under uncertaintyThere is a lot of uncertainty in estimates of the effects of various actions. My views here are coloured by my past work. In the early 2010s, I tried to compile estimates of the effects of popular development interventions such as insecticide-treated bed nets for malaria, deworming drugs, and unconditional cash transfers. My initial thought was that by synthesizing the evidence, I'd be able to say something more conclusive about "the best" intervention for a given outcome. Unfortunately, I found that results varied, a lot (you can read more about it in my JEEA paper).If it's really hard to predict effects in global development, which is a very well-studied area, it would seem even harder to know what to do in other areas with less evidence. Yet, decisions still have to be made. One of the core areas GPI has focused on in the past is decision-making under uncertainty, and I expect that to continue to be a priority research area. Some work on robustness might also fall under this category.2) Increasing empirical researchGPI is an interdisciplinary institute combining philosophy and economics. To date, the economics side has largely focused on theoretical issues. But I think it's important for there to be careful, rigorous empirical work at GPI. I think there are relevant hypotheses that can be tested that pertain to global priorities research.Many economists interested in global priorities research come from applied fields like development economics, and there's a talented pool of people who can do empirical work on, e.g., encouraging better uptake of evidence or forecasting. There's simply a lot to be done here, and I look forward to working with colleagues like Julian Jamison (on leave from Exeter), Benjamin Tereick, and Mattie Toma (visiting from Warwick Business School), among many others.3) Expanding GPI’s network in economicsThere is an existing program at GPI for senior research affiliates based at other institutions. However, I think a lot more can be done with this, especially on the economics side. I'm still exploring the right structures, but suffice it to say, if you are an academic economist interested in global priorities research, please do get in touch. I am envisioning a network of loosely affiliated individuals in core fields of interest who would be sent notifications about research and funding opportunities. There may also be the occasional workshop or conference invitation.4) Exploring expanding to other fields and topicsThere are a number of topics that appear relevant to gl...]]>
Eva https://forum.effectivealtruism.org/posts/sSGdKNPDEupfcoHNN/current-plans-as-the-incoming-director-of-the-global Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Current plans as the incoming director of the Global Priorities Institute, published by Eva on April 26, 2023 on The Effective Altruism Forum.Cross-posted from my blog.I am taking leave from the University of Toronto to serve as the Director of the Global Priorities Institute (GPI) at the University of Oxford. I can't express enough gratitude to the University of Toronto for enabling this. (I'll be back in the fall to fulfill my teaching obligations, though - keep inviting me to seminars and such!)GPI is an interdisciplinary research institute focusing on academic research that informs decision-makers on how to do good more effectively. In its first few years, under the leadership of its founding director, Hilary Greaves, GPI created and grew a community of academics in philosophy and economics interested in global priorities research. I am excited to build from this strong foundation and, in particular, to further develop the economics side.There are several areas I would like to focus on while at GPI. The below items reflect my current views, however, I expect these views to be refined over time. These items are not intended to be an exhaustive list, but they are things I would like GPI to do more of on the margin.1) Research on decision-making under uncertaintyThere is a lot of uncertainty in estimates of the effects of various actions. My views here are coloured by my past work. In the early 2010s, I tried to compile estimates of the effects of popular development interventions such as insecticide-treated bed nets for malaria, deworming drugs, and unconditional cash transfers. My initial thought was that by synthesizing the evidence, I'd be able to say something more conclusive about "the best" intervention for a given outcome. Unfortunately, I found that results varied, a lot (you can read more about it in my JEEA paper).If it's really hard to predict effects in global development, which is a very well-studied area, it would seem even harder to know what to do in other areas with less evidence. Yet, decisions still have to be made. One of the core areas GPI has focused on in the past is decision-making under uncertainty, and I expect that to continue to be a priority research area. Some work on robustness might also fall under this category.2) Increasing empirical researchGPI is an interdisciplinary institute combining philosophy and economics. To date, the economics side has largely focused on theoretical issues. But I think it's important for there to be careful, rigorous empirical work at GPI. I think there are relevant hypotheses that can be tested that pertain to global priorities research.Many economists interested in global priorities research come from applied fields like development economics, and there's a talented pool of people who can do empirical work on, e.g., encouraging better uptake of evidence or forecasting. There's simply a lot to be done here, and I look forward to working with colleagues like Julian Jamison (on leave from Exeter), Benjamin Tereick, and Mattie Toma (visiting from Warwick Business School), among many others.3) Expanding GPI’s network in economicsThere is an existing program at GPI for senior research affiliates based at other institutions. However, I think a lot more can be done with this, especially on the economics side. I'm still exploring the right structures, but suffice it to say, if you are an academic economist interested in global priorities research, please do get in touch. I am envisioning a network of loosely affiliated individuals in core fields of interest who would be sent notifications about research and funding opportunities. There may also be the occasional workshop or conference invitation.4) Exploring expanding to other fields and topicsThere are a number of topics that appear relevant to gl...]]>
Wed, 26 Apr 2023 18:20:59 +0000 EA - Current plans as the incoming director of the Global Priorities Institute by Eva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Current plans as the incoming director of the Global Priorities Institute, published by Eva on April 26, 2023 on The Effective Altruism Forum.Cross-posted from my blog.I am taking leave from the University of Toronto to serve as the Director of the Global Priorities Institute (GPI) at the University of Oxford. I can't express enough gratitude to the University of Toronto for enabling this. (I'll be back in the fall to fulfill my teaching obligations, though - keep inviting me to seminars and such!)GPI is an interdisciplinary research institute focusing on academic research that informs decision-makers on how to do good more effectively. In its first few years, under the leadership of its founding director, Hilary Greaves, GPI created and grew a community of academics in philosophy and economics interested in global priorities research. I am excited to build from this strong foundation and, in particular, to further develop the economics side.There are several areas I would like to focus on while at GPI. The below items reflect my current views, however, I expect these views to be refined over time. These items are not intended to be an exhaustive list, but they are things I would like GPI to do more of on the margin.1) Research on decision-making under uncertaintyThere is a lot of uncertainty in estimates of the effects of various actions. My views here are coloured by my past work. In the early 2010s, I tried to compile estimates of the effects of popular development interventions such as insecticide-treated bed nets for malaria, deworming drugs, and unconditional cash transfers. My initial thought was that by synthesizing the evidence, I'd be able to say something more conclusive about "the best" intervention for a given outcome. Unfortunately, I found that results varied, a lot (you can read more about it in my JEEA paper).If it's really hard to predict effects in global development, which is a very well-studied area, it would seem even harder to know what to do in other areas with less evidence. Yet, decisions still have to be made. One of the core areas GPI has focused on in the past is decision-making under uncertainty, and I expect that to continue to be a priority research area. Some work on robustness might also fall under this category.2) Increasing empirical researchGPI is an interdisciplinary institute combining philosophy and economics. To date, the economics side has largely focused on theoretical issues. But I think it's important for there to be careful, rigorous empirical work at GPI. I think there are relevant hypotheses that can be tested that pertain to global priorities research.Many economists interested in global priorities research come from applied fields like development economics, and there's a talented pool of people who can do empirical work on, e.g., encouraging better uptake of evidence or forecasting. There's simply a lot to be done here, and I look forward to working with colleagues like Julian Jamison (on leave from Exeter), Benjamin Tereick, and Mattie Toma (visiting from Warwick Business School), among many others.3) Expanding GPI’s network in economicsThere is an existing program at GPI for senior research affiliates based at other institutions. However, I think a lot more can be done with this, especially on the economics side. I'm still exploring the right structures, but suffice it to say, if you are an academic economist interested in global priorities research, please do get in touch. I am envisioning a network of loosely affiliated individuals in core fields of interest who would be sent notifications about research and funding opportunities. There may also be the occasional workshop or conference invitation.4) Exploring expanding to other fields and topicsThere are a number of topics that appear relevant to gl...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Current plans as the incoming director of the Global Priorities Institute, published by Eva on April 26, 2023 on The Effective Altruism Forum.Cross-posted from my blog.I am taking leave from the University of Toronto to serve as the Director of the Global Priorities Institute (GPI) at the University of Oxford. I can't express enough gratitude to the University of Toronto for enabling this. (I'll be back in the fall to fulfill my teaching obligations, though - keep inviting me to seminars and such!)GPI is an interdisciplinary research institute focusing on academic research that informs decision-makers on how to do good more effectively. In its first few years, under the leadership of its founding director, Hilary Greaves, GPI created and grew a community of academics in philosophy and economics interested in global priorities research. I am excited to build from this strong foundation and, in particular, to further develop the economics side.There are several areas I would like to focus on while at GPI. The below items reflect my current views, however, I expect these views to be refined over time. These items are not intended to be an exhaustive list, but they are things I would like GPI to do more of on the margin.1) Research on decision-making under uncertaintyThere is a lot of uncertainty in estimates of the effects of various actions. My views here are coloured by my past work. In the early 2010s, I tried to compile estimates of the effects of popular development interventions such as insecticide-treated bed nets for malaria, deworming drugs, and unconditional cash transfers. My initial thought was that by synthesizing the evidence, I'd be able to say something more conclusive about "the best" intervention for a given outcome. Unfortunately, I found that results varied, a lot (you can read more about it in my JEEA paper).If it's really hard to predict effects in global development, which is a very well-studied area, it would seem even harder to know what to do in other areas with less evidence. Yet, decisions still have to be made. One of the core areas GPI has focused on in the past is decision-making under uncertainty, and I expect that to continue to be a priority research area. Some work on robustness might also fall under this category.2) Increasing empirical researchGPI is an interdisciplinary institute combining philosophy and economics. To date, the economics side has largely focused on theoretical issues. But I think it's important for there to be careful, rigorous empirical work at GPI. I think there are relevant hypotheses that can be tested that pertain to global priorities research.Many economists interested in global priorities research come from applied fields like development economics, and there's a talented pool of people who can do empirical work on, e.g., encouraging better uptake of evidence or forecasting. There's simply a lot to be done here, and I look forward to working with colleagues like Julian Jamison (on leave from Exeter), Benjamin Tereick, and Mattie Toma (visiting from Warwick Business School), among many others.3) Expanding GPI’s network in economicsThere is an existing program at GPI for senior research affiliates based at other institutions. However, I think a lot more can be done with this, especially on the economics side. I'm still exploring the right structures, but suffice it to say, if you are an academic economist interested in global priorities research, please do get in touch. I am envisioning a network of loosely affiliated individuals in core fields of interest who would be sent notifications about research and funding opportunities. There may also be the occasional workshop or conference invitation.4) Exploring expanding to other fields and topicsThere are a number of topics that appear relevant to gl...]]>
Eva https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:17 None full 5739
TCsanzwKGqfBBTye9_NL_EA_EA EA - The 'Wild' and 'Wacky' Claims of Karnofsky’s ‘Most Important Century’ by Spencer Becker-Kahn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 'Wild' and 'Wacky' Claims of Karnofsky’s ‘Most Important Century’, published by Spencer Becker-Kahn on April 26, 2023 on The Effective Altruism Forum.Holden Karnofsky describes the claims of his “Most Important Century” series as “wild” and “wacky”, but at the same time purports to be in the mindset of “critically examining” such “strange possibilities” with “as much rigour as possible”. This emphasis is mine, but for what is supposedly an important piece of writing in a field that has a big part of its roots in academic analytic philosophy, it is almost ridiculous to suggest that this examination has been carried out with 'as much rigour as possible'. My main reactions - which I will expand on this essay - are that Karnofsky’s writing is in fact distinctly lacking in rigour; that his claims are too vague or even seem to shift around; and that his writing style - often informal, or sensationalist - aggravates the lack of clarity while simultaneously putting the goal of persuasion above that of truth-seeking.I also suggest that his emphasis on the wildness and wackiness of his own "thesis" is tantamount to an admission of bias on his part in favour of surprising or unconventional claims. I will start with some introductory remarks about the nature of my criticisms and of such criticism in general. Then I will spend some time trying to point to various instances of imprecision, bias, or confusion. And I will end by asking whether any of this even matters or what kind of lessons we should be drawing from it all. Notes: Throughout, I will quote from the whole series of blog posts by treating them as a single source rather than referencing which them separately. Note that the series appears in single pdf here (so one can always Ctrl/Cmd+F to jump to the part I am quoting).It is plausible that some of this post comes across quite harshly but none of it is intended to constitute a personal attack on Holden Karnofsky or an accusation of dishonesty. Where I have made errors of have misrepresented others, I welcome any and all corrections. I also generally welcome feedback on the writing and presentation of my own thoughts either privately or in the comments.Acknowledgements: I started this essay a while ago and so during the preparation of this work, I have been supported at various points by FHI, SERI MATS, BERI and Open Philanthropy. The development of this work benefitted significantly from numerous conversations with Jennifer Lin.1. Broad Remarks About My CriticismsIf you felt and do feel convinced by Karnofsky's writings, then upon hearing about my reservations, your instinct may be to respond with reasonable-seeming questions like: 'So where exactly does he disagree with Karnofsky?' or 'What are some specific things that he thinks Karnofsky gets wrong?'. You may well want to look for wherever it is that I have carefully categorized my criticisms, to scroll through to find all of my individual object-level disagreements so that you can see if you know the counterarguments that mean that I am wrong. And so it may be frustrating that I will often sound like I am trying to weasel out of having to answer these questions head-on or not putting much weight on the fact that I have not laid out my criticisms in that way.Firstly, I think that the main issues to do with clarity and precision that I will highlight occur at a fundamental level. It is not that they are 'more important' than individual, specific, object-level disagreements, but I claim that Karnofsky does a sufficiently poor job of explaining his main claims, the structure of his arguments, the dependencies between his propositions, and in separating his claims from the verifications of those claims, that it actually prevents detailed, in-depth discussions of object-level disagreements from making much sense...]]>
Spencer Becker-Kahn https://forum.effectivealtruism.org/posts/TCsanzwKGqfBBTye9/the-wild-and-wacky-claims-of-karnofsky-s-most-important Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 'Wild' and 'Wacky' Claims of Karnofsky’s ‘Most Important Century’, published by Spencer Becker-Kahn on April 26, 2023 on The Effective Altruism Forum.Holden Karnofsky describes the claims of his “Most Important Century” series as “wild” and “wacky”, but at the same time purports to be in the mindset of “critically examining” such “strange possibilities” with “as much rigour as possible”. This emphasis is mine, but for what is supposedly an important piece of writing in a field that has a big part of its roots in academic analytic philosophy, it is almost ridiculous to suggest that this examination has been carried out with 'as much rigour as possible'. My main reactions - which I will expand on this essay - are that Karnofsky’s writing is in fact distinctly lacking in rigour; that his claims are too vague or even seem to shift around; and that his writing style - often informal, or sensationalist - aggravates the lack of clarity while simultaneously putting the goal of persuasion above that of truth-seeking.I also suggest that his emphasis on the wildness and wackiness of his own "thesis" is tantamount to an admission of bias on his part in favour of surprising or unconventional claims. I will start with some introductory remarks about the nature of my criticisms and of such criticism in general. Then I will spend some time trying to point to various instances of imprecision, bias, or confusion. And I will end by asking whether any of this even matters or what kind of lessons we should be drawing from it all. Notes: Throughout, I will quote from the whole series of blog posts by treating them as a single source rather than referencing which them separately. Note that the series appears in single pdf here (so one can always Ctrl/Cmd+F to jump to the part I am quoting).It is plausible that some of this post comes across quite harshly but none of it is intended to constitute a personal attack on Holden Karnofsky or an accusation of dishonesty. Where I have made errors of have misrepresented others, I welcome any and all corrections. I also generally welcome feedback on the writing and presentation of my own thoughts either privately or in the comments.Acknowledgements: I started this essay a while ago and so during the preparation of this work, I have been supported at various points by FHI, SERI MATS, BERI and Open Philanthropy. The development of this work benefitted significantly from numerous conversations with Jennifer Lin.1. Broad Remarks About My CriticismsIf you felt and do feel convinced by Karnofsky's writings, then upon hearing about my reservations, your instinct may be to respond with reasonable-seeming questions like: 'So where exactly does he disagree with Karnofsky?' or 'What are some specific things that he thinks Karnofsky gets wrong?'. You may well want to look for wherever it is that I have carefully categorized my criticisms, to scroll through to find all of my individual object-level disagreements so that you can see if you know the counterarguments that mean that I am wrong. And so it may be frustrating that I will often sound like I am trying to weasel out of having to answer these questions head-on or not putting much weight on the fact that I have not laid out my criticisms in that way.Firstly, I think that the main issues to do with clarity and precision that I will highlight occur at a fundamental level. It is not that they are 'more important' than individual, specific, object-level disagreements, but I claim that Karnofsky does a sufficiently poor job of explaining his main claims, the structure of his arguments, the dependencies between his propositions, and in separating his claims from the verifications of those claims, that it actually prevents detailed, in-depth discussions of object-level disagreements from making much sense...]]>
Wed, 26 Apr 2023 17:03:38 +0000 EA - The 'Wild' and 'Wacky' Claims of Karnofsky’s ‘Most Important Century’ by Spencer Becker-Kahn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 'Wild' and 'Wacky' Claims of Karnofsky’s ‘Most Important Century’, published by Spencer Becker-Kahn on April 26, 2023 on The Effective Altruism Forum.Holden Karnofsky describes the claims of his “Most Important Century” series as “wild” and “wacky”, but at the same time purports to be in the mindset of “critically examining” such “strange possibilities” with “as much rigour as possible”. This emphasis is mine, but for what is supposedly an important piece of writing in a field that has a big part of its roots in academic analytic philosophy, it is almost ridiculous to suggest that this examination has been carried out with 'as much rigour as possible'. My main reactions - which I will expand on this essay - are that Karnofsky’s writing is in fact distinctly lacking in rigour; that his claims are too vague or even seem to shift around; and that his writing style - often informal, or sensationalist - aggravates the lack of clarity while simultaneously putting the goal of persuasion above that of truth-seeking.I also suggest that his emphasis on the wildness and wackiness of his own "thesis" is tantamount to an admission of bias on his part in favour of surprising or unconventional claims. I will start with some introductory remarks about the nature of my criticisms and of such criticism in general. Then I will spend some time trying to point to various instances of imprecision, bias, or confusion. And I will end by asking whether any of this even matters or what kind of lessons we should be drawing from it all. Notes: Throughout, I will quote from the whole series of blog posts by treating them as a single source rather than referencing which them separately. Note that the series appears in single pdf here (so one can always Ctrl/Cmd+F to jump to the part I am quoting).It is plausible that some of this post comes across quite harshly but none of it is intended to constitute a personal attack on Holden Karnofsky or an accusation of dishonesty. Where I have made errors of have misrepresented others, I welcome any and all corrections. I also generally welcome feedback on the writing and presentation of my own thoughts either privately or in the comments.Acknowledgements: I started this essay a while ago and so during the preparation of this work, I have been supported at various points by FHI, SERI MATS, BERI and Open Philanthropy. The development of this work benefitted significantly from numerous conversations with Jennifer Lin.1. Broad Remarks About My CriticismsIf you felt and do feel convinced by Karnofsky's writings, then upon hearing about my reservations, your instinct may be to respond with reasonable-seeming questions like: 'So where exactly does he disagree with Karnofsky?' or 'What are some specific things that he thinks Karnofsky gets wrong?'. You may well want to look for wherever it is that I have carefully categorized my criticisms, to scroll through to find all of my individual object-level disagreements so that you can see if you know the counterarguments that mean that I am wrong. And so it may be frustrating that I will often sound like I am trying to weasel out of having to answer these questions head-on or not putting much weight on the fact that I have not laid out my criticisms in that way.Firstly, I think that the main issues to do with clarity and precision that I will highlight occur at a fundamental level. It is not that they are 'more important' than individual, specific, object-level disagreements, but I claim that Karnofsky does a sufficiently poor job of explaining his main claims, the structure of his arguments, the dependencies between his propositions, and in separating his claims from the verifications of those claims, that it actually prevents detailed, in-depth discussions of object-level disagreements from making much sense...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 'Wild' and 'Wacky' Claims of Karnofsky’s ‘Most Important Century’, published by Spencer Becker-Kahn on April 26, 2023 on The Effective Altruism Forum.Holden Karnofsky describes the claims of his “Most Important Century” series as “wild” and “wacky”, but at the same time purports to be in the mindset of “critically examining” such “strange possibilities” with “as much rigour as possible”. This emphasis is mine, but for what is supposedly an important piece of writing in a field that has a big part of its roots in academic analytic philosophy, it is almost ridiculous to suggest that this examination has been carried out with 'as much rigour as possible'. My main reactions - which I will expand on this essay - are that Karnofsky’s writing is in fact distinctly lacking in rigour; that his claims are too vague or even seem to shift around; and that his writing style - often informal, or sensationalist - aggravates the lack of clarity while simultaneously putting the goal of persuasion above that of truth-seeking.I also suggest that his emphasis on the wildness and wackiness of his own "thesis" is tantamount to an admission of bias on his part in favour of surprising or unconventional claims. I will start with some introductory remarks about the nature of my criticisms and of such criticism in general. Then I will spend some time trying to point to various instances of imprecision, bias, or confusion. And I will end by asking whether any of this even matters or what kind of lessons we should be drawing from it all. Notes: Throughout, I will quote from the whole series of blog posts by treating them as a single source rather than referencing which them separately. Note that the series appears in single pdf here (so one can always Ctrl/Cmd+F to jump to the part I am quoting).It is plausible that some of this post comes across quite harshly but none of it is intended to constitute a personal attack on Holden Karnofsky or an accusation of dishonesty. Where I have made errors of have misrepresented others, I welcome any and all corrections. I also generally welcome feedback on the writing and presentation of my own thoughts either privately or in the comments.Acknowledgements: I started this essay a while ago and so during the preparation of this work, I have been supported at various points by FHI, SERI MATS, BERI and Open Philanthropy. The development of this work benefitted significantly from numerous conversations with Jennifer Lin.1. Broad Remarks About My CriticismsIf you felt and do feel convinced by Karnofsky's writings, then upon hearing about my reservations, your instinct may be to respond with reasonable-seeming questions like: 'So where exactly does he disagree with Karnofsky?' or 'What are some specific things that he thinks Karnofsky gets wrong?'. You may well want to look for wherever it is that I have carefully categorized my criticisms, to scroll through to find all of my individual object-level disagreements so that you can see if you know the counterarguments that mean that I am wrong. And so it may be frustrating that I will often sound like I am trying to weasel out of having to answer these questions head-on or not putting much weight on the fact that I have not laid out my criticisms in that way.Firstly, I think that the main issues to do with clarity and precision that I will highlight occur at a fundamental level. It is not that they are 'more important' than individual, specific, object-level disagreements, but I claim that Karnofsky does a sufficiently poor job of explaining his main claims, the structure of his arguments, the dependencies between his propositions, and in separating his claims from the verifications of those claims, that it actually prevents detailed, in-depth discussions of object-level disagreements from making much sense...]]>
Spencer Becker-Kahn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 25:58 None full 5736
PhNfc9JRFc9CsDjvi_NL_EA_EA EA - Two things that I think could make the community better by Kaleem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two things that I think could make the community better, published by Kaleem on April 26, 2023 on The Effective Altruism Forum.(alternate title: CEA shouldn’t be the C of EA).A very short summary:Issue 1: CEA’s name is bad and leads to lots of confusion and frustrationSuggestion: CEA should change its nameIssue2 :‘The ‘community health team’ is part of CEA, which is something which might reduce the community’s trust in the community health teamSuggestion: ‘The community health team’ should not be part of CEAA reasonable Summary:The name “The Centre for Effective Altruism (CEA)” causes some people in the community to misunderstand what CEA is/does, and for them to misattribute responsibility to CEA that CEA itself doesn’t think belongs to it.In addition, the community health team, which tries to serve the whole community, is part of CEA. This may further the confusion about CEA’s role in the movement, and might be hampering the community health team’s effectiveness and trustworthiness.By renaming/rebranding, CEA can resolve and prevent many ongoing communications and PR issues within the movement.And by spinning-off into an independent organization, the community health team can improve on having an impartial and inscrutable reputation and record in the community.Epistemic status: Of my observations - quite sure,. Of my two main suggestions, also quite sure. I find it difficult to write things to the point where I feel comfortable posting them on the forum, but I also know It’d probably be better for me to post more ok-ish posts than to sit on a pile of never-read drafts which might have some useful ideas in them. So yeah - I know this isn’t greatOn Issue 1: Changing CEA’s name(I was going to post this before CEA posted their post in which they claim that they’re open to changing their name. I think it's still worth posting, hopefully to be a place where the topic can be more thoroughly argued in the comments).One of the things that I like about the EA community is that it is decentralized, meaning there is no single person or entity who sets the direction of, or represents, the community (It’s like Sunni Islam in that way, rather than being like the Catholic Church, which is centrally controlled by the Vatican and the Pope). I think other people in the community like it too - it helps the community house a wide variety of (often competing) views, and for people to form organizations with different strategies or goals based on differently-weighed cause-prioritization without facing as much institutional resistance than they would if we were all playing to the tune of one organization and their plan. Of course, cases have also been made for more centralization, in certain ways.The EA community has grown significantly over the past couple of years. Whereas ~10 years ago it might have been known by nearly everyone in the EA community what each of the few organizations were working on, today there is a much larger number of organizations/projects and many more members in the community, which means that it is more likely that there are members of the EA community who don’t know what some organizations, such as CEA, actually do. This is likely to be even more true of people who are new to the community and are trying to figure out what the ecosystem looks like and remembering what all the weird initialized org names are.This usually wouldn't be an issue - if one were to list all the organizations associated with EA, you wouldn’t/shouldn't expect anyone to know what each and every one of them does (at least not in detail - but you might know all their cause areas). However, when someone looks through that list, looking for one organization which might be the authority in the movement, “The Centre For Effective Altruism” has a sense of authority and officiali...]]>
Kaleem https://forum.effectivealtruism.org/posts/PhNfc9JRFc9CsDjvi/two-things-that-i-think-could-make-the-community-better Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two things that I think could make the community better, published by Kaleem on April 26, 2023 on The Effective Altruism Forum.(alternate title: CEA shouldn’t be the C of EA).A very short summary:Issue 1: CEA’s name is bad and leads to lots of confusion and frustrationSuggestion: CEA should change its nameIssue2 :‘The ‘community health team’ is part of CEA, which is something which might reduce the community’s trust in the community health teamSuggestion: ‘The community health team’ should not be part of CEAA reasonable Summary:The name “The Centre for Effective Altruism (CEA)” causes some people in the community to misunderstand what CEA is/does, and for them to misattribute responsibility to CEA that CEA itself doesn’t think belongs to it.In addition, the community health team, which tries to serve the whole community, is part of CEA. This may further the confusion about CEA’s role in the movement, and might be hampering the community health team’s effectiveness and trustworthiness.By renaming/rebranding, CEA can resolve and prevent many ongoing communications and PR issues within the movement.And by spinning-off into an independent organization, the community health team can improve on having an impartial and inscrutable reputation and record in the community.Epistemic status: Of my observations - quite sure,. Of my two main suggestions, also quite sure. I find it difficult to write things to the point where I feel comfortable posting them on the forum, but I also know It’d probably be better for me to post more ok-ish posts than to sit on a pile of never-read drafts which might have some useful ideas in them. So yeah - I know this isn’t greatOn Issue 1: Changing CEA’s name(I was going to post this before CEA posted their post in which they claim that they’re open to changing their name. I think it's still worth posting, hopefully to be a place where the topic can be more thoroughly argued in the comments).One of the things that I like about the EA community is that it is decentralized, meaning there is no single person or entity who sets the direction of, or represents, the community (It’s like Sunni Islam in that way, rather than being like the Catholic Church, which is centrally controlled by the Vatican and the Pope). I think other people in the community like it too - it helps the community house a wide variety of (often competing) views, and for people to form organizations with different strategies or goals based on differently-weighed cause-prioritization without facing as much institutional resistance than they would if we were all playing to the tune of one organization and their plan. Of course, cases have also been made for more centralization, in certain ways.The EA community has grown significantly over the past couple of years. Whereas ~10 years ago it might have been known by nearly everyone in the EA community what each of the few organizations were working on, today there is a much larger number of organizations/projects and many more members in the community, which means that it is more likely that there are members of the EA community who don’t know what some organizations, such as CEA, actually do. This is likely to be even more true of people who are new to the community and are trying to figure out what the ecosystem looks like and remembering what all the weird initialized org names are.This usually wouldn't be an issue - if one were to list all the organizations associated with EA, you wouldn’t/shouldn't expect anyone to know what each and every one of them does (at least not in detail - but you might know all their cause areas). However, when someone looks through that list, looking for one organization which might be the authority in the movement, “The Centre For Effective Altruism” has a sense of authority and officiali...]]>
Wed, 26 Apr 2023 16:43:55 +0000 EA - Two things that I think could make the community better by Kaleem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two things that I think could make the community better, published by Kaleem on April 26, 2023 on The Effective Altruism Forum.(alternate title: CEA shouldn’t be the C of EA).A very short summary:Issue 1: CEA’s name is bad and leads to lots of confusion and frustrationSuggestion: CEA should change its nameIssue2 :‘The ‘community health team’ is part of CEA, which is something which might reduce the community’s trust in the community health teamSuggestion: ‘The community health team’ should not be part of CEAA reasonable Summary:The name “The Centre for Effective Altruism (CEA)” causes some people in the community to misunderstand what CEA is/does, and for them to misattribute responsibility to CEA that CEA itself doesn’t think belongs to it.In addition, the community health team, which tries to serve the whole community, is part of CEA. This may further the confusion about CEA’s role in the movement, and might be hampering the community health team’s effectiveness and trustworthiness.By renaming/rebranding, CEA can resolve and prevent many ongoing communications and PR issues within the movement.And by spinning-off into an independent organization, the community health team can improve on having an impartial and inscrutable reputation and record in the community.Epistemic status: Of my observations - quite sure,. Of my two main suggestions, also quite sure. I find it difficult to write things to the point where I feel comfortable posting them on the forum, but I also know It’d probably be better for me to post more ok-ish posts than to sit on a pile of never-read drafts which might have some useful ideas in them. So yeah - I know this isn’t greatOn Issue 1: Changing CEA’s name(I was going to post this before CEA posted their post in which they claim that they’re open to changing their name. I think it's still worth posting, hopefully to be a place where the topic can be more thoroughly argued in the comments).One of the things that I like about the EA community is that it is decentralized, meaning there is no single person or entity who sets the direction of, or represents, the community (It’s like Sunni Islam in that way, rather than being like the Catholic Church, which is centrally controlled by the Vatican and the Pope). I think other people in the community like it too - it helps the community house a wide variety of (often competing) views, and for people to form organizations with different strategies or goals based on differently-weighed cause-prioritization without facing as much institutional resistance than they would if we were all playing to the tune of one organization and their plan. Of course, cases have also been made for more centralization, in certain ways.The EA community has grown significantly over the past couple of years. Whereas ~10 years ago it might have been known by nearly everyone in the EA community what each of the few organizations were working on, today there is a much larger number of organizations/projects and many more members in the community, which means that it is more likely that there are members of the EA community who don’t know what some organizations, such as CEA, actually do. This is likely to be even more true of people who are new to the community and are trying to figure out what the ecosystem looks like and remembering what all the weird initialized org names are.This usually wouldn't be an issue - if one were to list all the organizations associated with EA, you wouldn’t/shouldn't expect anyone to know what each and every one of them does (at least not in detail - but you might know all their cause areas). However, when someone looks through that list, looking for one organization which might be the authority in the movement, “The Centre For Effective Altruism” has a sense of authority and officiali...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two things that I think could make the community better, published by Kaleem on April 26, 2023 on The Effective Altruism Forum.(alternate title: CEA shouldn’t be the C of EA).A very short summary:Issue 1: CEA’s name is bad and leads to lots of confusion and frustrationSuggestion: CEA should change its nameIssue2 :‘The ‘community health team’ is part of CEA, which is something which might reduce the community’s trust in the community health teamSuggestion: ‘The community health team’ should not be part of CEAA reasonable Summary:The name “The Centre for Effective Altruism (CEA)” causes some people in the community to misunderstand what CEA is/does, and for them to misattribute responsibility to CEA that CEA itself doesn’t think belongs to it.In addition, the community health team, which tries to serve the whole community, is part of CEA. This may further the confusion about CEA’s role in the movement, and might be hampering the community health team’s effectiveness and trustworthiness.By renaming/rebranding, CEA can resolve and prevent many ongoing communications and PR issues within the movement.And by spinning-off into an independent organization, the community health team can improve on having an impartial and inscrutable reputation and record in the community.Epistemic status: Of my observations - quite sure,. Of my two main suggestions, also quite sure. I find it difficult to write things to the point where I feel comfortable posting them on the forum, but I also know It’d probably be better for me to post more ok-ish posts than to sit on a pile of never-read drafts which might have some useful ideas in them. So yeah - I know this isn’t greatOn Issue 1: Changing CEA’s name(I was going to post this before CEA posted their post in which they claim that they’re open to changing their name. I think it's still worth posting, hopefully to be a place where the topic can be more thoroughly argued in the comments).One of the things that I like about the EA community is that it is decentralized, meaning there is no single person or entity who sets the direction of, or represents, the community (It’s like Sunni Islam in that way, rather than being like the Catholic Church, which is centrally controlled by the Vatican and the Pope). I think other people in the community like it too - it helps the community house a wide variety of (often competing) views, and for people to form organizations with different strategies or goals based on differently-weighed cause-prioritization without facing as much institutional resistance than they would if we were all playing to the tune of one organization and their plan. Of course, cases have also been made for more centralization, in certain ways.The EA community has grown significantly over the past couple of years. Whereas ~10 years ago it might have been known by nearly everyone in the EA community what each of the few organizations were working on, today there is a much larger number of organizations/projects and many more members in the community, which means that it is more likely that there are members of the EA community who don’t know what some organizations, such as CEA, actually do. This is likely to be even more true of people who are new to the community and are trying to figure out what the ecosystem looks like and remembering what all the weird initialized org names are.This usually wouldn't be an issue - if one were to list all the organizations associated with EA, you wouldn’t/shouldn't expect anyone to know what each and every one of them does (at least not in detail - but you might know all their cause areas). However, when someone looks through that list, looking for one organization which might be the authority in the movement, “The Centre For Effective Altruism” has a sense of authority and officiali...]]>
Kaleem https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 20:39 None full 5737
ynnHJ2k6z6bmtNxkF_NL_EA_EA EA - EA might systematically generate a scarcity mindset that produces low-integrity actors by Severin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA might systematically generate a scarcity mindset that produces low-integrity actors, published by Severin on April 25, 2023 on The Effective Altruism Forum.Epistemic status: Highly speculative quick Facebook post. Thanks to Anna Riedl for nudging me to share it here anyway.Something I've noticed recently is that people who are in a bad place in their lives tend to have a certain sticky sleazy black holey feel to them. Something around untrustworthiness, low integrity, optimizing for themselves regardless of the cost for the people around them. I've met people like that, and I think when others felt around me like my energy was subtly and indescribably off, it was due to me being sticky in that way, too.Game-theoretically, it makes total sense for people to be a bit untrustworthy while they are in a bad place in their life. If you're in a place of scarcity, it is entirely reasonable to be strategic about where you put your limited resources. Then, it's just reasonable to only be loyal to others as long as you can get something out of it yourself and to defect as soon as they don't offer obvious short-term gains. And similarly, it make sense for the people around you to be a bit weary of you when you are in that place.And now, a bit of a hot take: I think most if not all of Effective Altruism's recent scandals have been due to low-integrity sticky behavior. And, I think some properties of EA systematically make people sticky.We might want to invest some thought and effort into fixing them. So, here's some of EA's sticky-people-producing properties I can spontaneously think of, plus first thoughts on how to fix them that aren't supposed to be final solutions:1. UtilitarianismYudkowsky wrote a thing that I think is true:"Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you have become a god."Meanwhile, SBF and probably a bunch of other people in EA (including me at times) have gone all four quarters of the way. If there's no upper bound to when it is enough to make the numbers go up, you'll be in a place of scarcity no matter what, and will be incentivized to defect indefinitely.I think an explicit belief of "defecting is not a good utilitarian strategy" doesn't help here: Becoming sticky is not a decision, but a subtle shift in your cognition that happens when your animal instincts pick up that your prefrontal cortex thinks you are in a place of scarcity.Basically, I think Buddhism is what utilitarianism would be if it made sense and was human-brain-shaped: Optimizing for global optima, but from a place of compassion and felt oneness with all sentient beings, not from the standpoint of a technocratic puppet master.2. Ever-precarious salariesEA funders like to base their allocation of funds on evidence, and they like to be able to adjust course quickly as soon as there are higher expected-value opportunities. From the perspective of naive utilitarianism, this completely makes sense.From the perspective of grantees, however, it feels like permanently having to justify your existence. And that is a situation that makes you go funny in the head in a way that is not conducive to getting just about any job done, unless it's a job like fraud that inherently involves short-term thinking and defecting on society. Whether or not you treat people as trustworthy and competent, you'll tend to find that you are right.I don't know how to fix this. Especially at the place we are at now, where both the FTX collapse and funders' increased cautiousness made the precarity of EA funding even worse. Currently, I'm seeing two dimensions to at least partially solving this issue:Building healthier, more sustainable relationships between community members. That's why I'm building Authentic R...]]>
Severin https://forum.effectivealtruism.org/posts/ynnHJ2k6z6bmtNxkF/ea-might-systematically-generate-a-scarcity-mindset-that Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA might systematically generate a scarcity mindset that produces low-integrity actors, published by Severin on April 25, 2023 on The Effective Altruism Forum.Epistemic status: Highly speculative quick Facebook post. Thanks to Anna Riedl for nudging me to share it here anyway.Something I've noticed recently is that people who are in a bad place in their lives tend to have a certain sticky sleazy black holey feel to them. Something around untrustworthiness, low integrity, optimizing for themselves regardless of the cost for the people around them. I've met people like that, and I think when others felt around me like my energy was subtly and indescribably off, it was due to me being sticky in that way, too.Game-theoretically, it makes total sense for people to be a bit untrustworthy while they are in a bad place in their life. If you're in a place of scarcity, it is entirely reasonable to be strategic about where you put your limited resources. Then, it's just reasonable to only be loyal to others as long as you can get something out of it yourself and to defect as soon as they don't offer obvious short-term gains. And similarly, it make sense for the people around you to be a bit weary of you when you are in that place.And now, a bit of a hot take: I think most if not all of Effective Altruism's recent scandals have been due to low-integrity sticky behavior. And, I think some properties of EA systematically make people sticky.We might want to invest some thought and effort into fixing them. So, here's some of EA's sticky-people-producing properties I can spontaneously think of, plus first thoughts on how to fix them that aren't supposed to be final solutions:1. UtilitarianismYudkowsky wrote a thing that I think is true:"Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you have become a god."Meanwhile, SBF and probably a bunch of other people in EA (including me at times) have gone all four quarters of the way. If there's no upper bound to when it is enough to make the numbers go up, you'll be in a place of scarcity no matter what, and will be incentivized to defect indefinitely.I think an explicit belief of "defecting is not a good utilitarian strategy" doesn't help here: Becoming sticky is not a decision, but a subtle shift in your cognition that happens when your animal instincts pick up that your prefrontal cortex thinks you are in a place of scarcity.Basically, I think Buddhism is what utilitarianism would be if it made sense and was human-brain-shaped: Optimizing for global optima, but from a place of compassion and felt oneness with all sentient beings, not from the standpoint of a technocratic puppet master.2. Ever-precarious salariesEA funders like to base their allocation of funds on evidence, and they like to be able to adjust course quickly as soon as there are higher expected-value opportunities. From the perspective of naive utilitarianism, this completely makes sense.From the perspective of grantees, however, it feels like permanently having to justify your existence. And that is a situation that makes you go funny in the head in a way that is not conducive to getting just about any job done, unless it's a job like fraud that inherently involves short-term thinking and defecting on society. Whether or not you treat people as trustworthy and competent, you'll tend to find that you are right.I don't know how to fix this. Especially at the place we are at now, where both the FTX collapse and funders' increased cautiousness made the precarity of EA funding even worse. Currently, I'm seeing two dimensions to at least partially solving this issue:Building healthier, more sustainable relationships between community members. That's why I'm building Authentic R...]]>
Wed, 26 Apr 2023 12:18:59 +0000 EA - EA might systematically generate a scarcity mindset that produces low-integrity actors by Severin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA might systematically generate a scarcity mindset that produces low-integrity actors, published by Severin on April 25, 2023 on The Effective Altruism Forum.Epistemic status: Highly speculative quick Facebook post. Thanks to Anna Riedl for nudging me to share it here anyway.Something I've noticed recently is that people who are in a bad place in their lives tend to have a certain sticky sleazy black holey feel to them. Something around untrustworthiness, low integrity, optimizing for themselves regardless of the cost for the people around them. I've met people like that, and I think when others felt around me like my energy was subtly and indescribably off, it was due to me being sticky in that way, too.Game-theoretically, it makes total sense for people to be a bit untrustworthy while they are in a bad place in their life. If you're in a place of scarcity, it is entirely reasonable to be strategic about where you put your limited resources. Then, it's just reasonable to only be loyal to others as long as you can get something out of it yourself and to defect as soon as they don't offer obvious short-term gains. And similarly, it make sense for the people around you to be a bit weary of you when you are in that place.And now, a bit of a hot take: I think most if not all of Effective Altruism's recent scandals have been due to low-integrity sticky behavior. And, I think some properties of EA systematically make people sticky.We might want to invest some thought and effort into fixing them. So, here's some of EA's sticky-people-producing properties I can spontaneously think of, plus first thoughts on how to fix them that aren't supposed to be final solutions:1. UtilitarianismYudkowsky wrote a thing that I think is true:"Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you have become a god."Meanwhile, SBF and probably a bunch of other people in EA (including me at times) have gone all four quarters of the way. If there's no upper bound to when it is enough to make the numbers go up, you'll be in a place of scarcity no matter what, and will be incentivized to defect indefinitely.I think an explicit belief of "defecting is not a good utilitarian strategy" doesn't help here: Becoming sticky is not a decision, but a subtle shift in your cognition that happens when your animal instincts pick up that your prefrontal cortex thinks you are in a place of scarcity.Basically, I think Buddhism is what utilitarianism would be if it made sense and was human-brain-shaped: Optimizing for global optima, but from a place of compassion and felt oneness with all sentient beings, not from the standpoint of a technocratic puppet master.2. Ever-precarious salariesEA funders like to base their allocation of funds on evidence, and they like to be able to adjust course quickly as soon as there are higher expected-value opportunities. From the perspective of naive utilitarianism, this completely makes sense.From the perspective of grantees, however, it feels like permanently having to justify your existence. And that is a situation that makes you go funny in the head in a way that is not conducive to getting just about any job done, unless it's a job like fraud that inherently involves short-term thinking and defecting on society. Whether or not you treat people as trustworthy and competent, you'll tend to find that you are right.I don't know how to fix this. Especially at the place we are at now, where both the FTX collapse and funders' increased cautiousness made the precarity of EA funding even worse. Currently, I'm seeing two dimensions to at least partially solving this issue:Building healthier, more sustainable relationships between community members. That's why I'm building Authentic R...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA might systematically generate a scarcity mindset that produces low-integrity actors, published by Severin on April 25, 2023 on The Effective Altruism Forum.Epistemic status: Highly speculative quick Facebook post. Thanks to Anna Riedl for nudging me to share it here anyway.Something I've noticed recently is that people who are in a bad place in their lives tend to have a certain sticky sleazy black holey feel to them. Something around untrustworthiness, low integrity, optimizing for themselves regardless of the cost for the people around them. I've met people like that, and I think when others felt around me like my energy was subtly and indescribably off, it was due to me being sticky in that way, too.Game-theoretically, it makes total sense for people to be a bit untrustworthy while they are in a bad place in their life. If you're in a place of scarcity, it is entirely reasonable to be strategic about where you put your limited resources. Then, it's just reasonable to only be loyal to others as long as you can get something out of it yourself and to defect as soon as they don't offer obvious short-term gains. And similarly, it make sense for the people around you to be a bit weary of you when you are in that place.And now, a bit of a hot take: I think most if not all of Effective Altruism's recent scandals have been due to low-integrity sticky behavior. And, I think some properties of EA systematically make people sticky.We might want to invest some thought and effort into fixing them. So, here's some of EA's sticky-people-producing properties I can spontaneously think of, plus first thoughts on how to fix them that aren't supposed to be final solutions:1. UtilitarianismYudkowsky wrote a thing that I think is true:"Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you have become a god."Meanwhile, SBF and probably a bunch of other people in EA (including me at times) have gone all four quarters of the way. If there's no upper bound to when it is enough to make the numbers go up, you'll be in a place of scarcity no matter what, and will be incentivized to defect indefinitely.I think an explicit belief of "defecting is not a good utilitarian strategy" doesn't help here: Becoming sticky is not a decision, but a subtle shift in your cognition that happens when your animal instincts pick up that your prefrontal cortex thinks you are in a place of scarcity.Basically, I think Buddhism is what utilitarianism would be if it made sense and was human-brain-shaped: Optimizing for global optima, but from a place of compassion and felt oneness with all sentient beings, not from the standpoint of a technocratic puppet master.2. Ever-precarious salariesEA funders like to base their allocation of funds on evidence, and they like to be able to adjust course quickly as soon as there are higher expected-value opportunities. From the perspective of naive utilitarianism, this completely makes sense.From the perspective of grantees, however, it feels like permanently having to justify your existence. And that is a situation that makes you go funny in the head in a way that is not conducive to getting just about any job done, unless it's a job like fraud that inherently involves short-term thinking and defecting on society. Whether or not you treat people as trustworthy and competent, you'll tend to find that you are right.I don't know how to fix this. Especially at the place we are at now, where both the FTX collapse and funders' increased cautiousness made the precarity of EA funding even worse. Currently, I'm seeing two dimensions to at least partially solving this issue:Building healthier, more sustainable relationships between community members. That's why I'm building Authentic R...]]>
Severin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:01 None full 5734
GcvEdYJADH3vMqk3F_NL_EA_EA EA - Suggest candidates for CEA's next Executive Director by MaxDalton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggest candidates for CEA's next Executive Director, published by MaxDalton on April 26, 2023 on The Effective Altruism Forum.As previously mentioned on the Forum, the Centre for Effective Altruism (CEA) is searching for a new Executive Director.We (Claire Zabel, Max Dalton, and Michelle Hutchinson) have been appointed by the Effective Ventures boards to lead this search and make recommendations to the boards.We wanted to give you all a high-level update on the process, and the opportunity to suggest candidates and give input.What we’re looking forWe’re looking for someone:With a compelling strategy for CEA,Who is dedicated to the mission set out in their strategy,Who has the management experience and leadership ability to execute on that vision,With strong judgement, high integrity, and clear communication skills.One thing to highlight is that we are both open to and enthusiastic about candidates who want to pursue significant changes to CEA. This might include:Spinning off or shutting down programs, or starting new programs,Focusing on specific cause areas, or on promoting general EA principles,Trying to build something more like a mass movement or trying to be more selective and focused,Significant staffing changes,Changing CEA’s name.We’re open to such significant changes because:We think that organizations tend to be more effective when they are led by people with a strong vision for the organization, and the leadership has leeway to execute on that vision.We think that much of CEA’s historical work has been highly effective, but we believe there are many alternative opportunities CEA could pursue (for instance, cause-area-specific community building, or focusing more on a subset of our programs), and that some of these could potentially have an even higher impact.However, we are also open to candidates who don’t make radical changes, but continue to build on CEA’s work to date.It is not a requirement that:The candidate has experience working in effective altruism (that might constrain our search space too far).They are an unalloyed fan of the effective altruism community as it exists; we think people who most notice the flaws in things may be best placed to help improve them.They are located in Oxford, or anywhere else in particular: we are a remote-first team. It is highly desirable for candidates to be able to have working hours that overlap with standard working hours between Pacific and British time though.Our processAdvisorsTo increase diversity of viewpoints feeding into the search, and add people with additional experience running complicated executive hiring rounds, we have added advisors to the search committee, who we loop into our decision-making and ask for input.We invited the following people, who have all now accepted:James Snowden, who has worked in global health and wellbeing for a number of different organizations including GWWC and GiveWell, and is now a program officer at Open Philanthropy.An advisor who works outside effective altruism and doesn’t want to be publicly listed. They have corporate experience hiring and managing hundreds of people. This advisor has a “principles first” (relatively cause agnostic) approach to cause prioritization.Caitlin Elizondo, who is Head of People Operations at CEA. She has run a large number of hiring rounds and has a great understanding of CEA’s staff and history.Finding candidatesWe have reached out via email to about 100 people to ask for candidate recommendations and feedback on the process. We’ve also asked 80,000 Hours to help with headhunting for the role.We are also seeking candidate suggestions and feedback from Forum readers (you!).Assessing candidatesAt least one committee member will go through all the suggestions, and make a longlist. Then the committee and ...]]>
MaxDalton https://forum.effectivealtruism.org/posts/GcvEdYJADH3vMqk3F/suggest-candidates-for-cea-s-next-executive-director Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggest candidates for CEA's next Executive Director, published by MaxDalton on April 26, 2023 on The Effective Altruism Forum.As previously mentioned on the Forum, the Centre for Effective Altruism (CEA) is searching for a new Executive Director.We (Claire Zabel, Max Dalton, and Michelle Hutchinson) have been appointed by the Effective Ventures boards to lead this search and make recommendations to the boards.We wanted to give you all a high-level update on the process, and the opportunity to suggest candidates and give input.What we’re looking forWe’re looking for someone:With a compelling strategy for CEA,Who is dedicated to the mission set out in their strategy,Who has the management experience and leadership ability to execute on that vision,With strong judgement, high integrity, and clear communication skills.One thing to highlight is that we are both open to and enthusiastic about candidates who want to pursue significant changes to CEA. This might include:Spinning off or shutting down programs, or starting new programs,Focusing on specific cause areas, or on promoting general EA principles,Trying to build something more like a mass movement or trying to be more selective and focused,Significant staffing changes,Changing CEA’s name.We’re open to such significant changes because:We think that organizations tend to be more effective when they are led by people with a strong vision for the organization, and the leadership has leeway to execute on that vision.We think that much of CEA’s historical work has been highly effective, but we believe there are many alternative opportunities CEA could pursue (for instance, cause-area-specific community building, or focusing more on a subset of our programs), and that some of these could potentially have an even higher impact.However, we are also open to candidates who don’t make radical changes, but continue to build on CEA’s work to date.It is not a requirement that:The candidate has experience working in effective altruism (that might constrain our search space too far).They are an unalloyed fan of the effective altruism community as it exists; we think people who most notice the flaws in things may be best placed to help improve them.They are located in Oxford, or anywhere else in particular: we are a remote-first team. It is highly desirable for candidates to be able to have working hours that overlap with standard working hours between Pacific and British time though.Our processAdvisorsTo increase diversity of viewpoints feeding into the search, and add people with additional experience running complicated executive hiring rounds, we have added advisors to the search committee, who we loop into our decision-making and ask for input.We invited the following people, who have all now accepted:James Snowden, who has worked in global health and wellbeing for a number of different organizations including GWWC and GiveWell, and is now a program officer at Open Philanthropy.An advisor who works outside effective altruism and doesn’t want to be publicly listed. They have corporate experience hiring and managing hundreds of people. This advisor has a “principles first” (relatively cause agnostic) approach to cause prioritization.Caitlin Elizondo, who is Head of People Operations at CEA. She has run a large number of hiring rounds and has a great understanding of CEA’s staff and history.Finding candidatesWe have reached out via email to about 100 people to ask for candidate recommendations and feedback on the process. We’ve also asked 80,000 Hours to help with headhunting for the role.We are also seeking candidate suggestions and feedback from Forum readers (you!).Assessing candidatesAt least one committee member will go through all the suggestions, and make a longlist. Then the committee and ...]]>
Wed, 26 Apr 2023 09:55:46 +0000 EA - Suggest candidates for CEA's next Executive Director by MaxDalton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggest candidates for CEA's next Executive Director, published by MaxDalton on April 26, 2023 on The Effective Altruism Forum.As previously mentioned on the Forum, the Centre for Effective Altruism (CEA) is searching for a new Executive Director.We (Claire Zabel, Max Dalton, and Michelle Hutchinson) have been appointed by the Effective Ventures boards to lead this search and make recommendations to the boards.We wanted to give you all a high-level update on the process, and the opportunity to suggest candidates and give input.What we’re looking forWe’re looking for someone:With a compelling strategy for CEA,Who is dedicated to the mission set out in their strategy,Who has the management experience and leadership ability to execute on that vision,With strong judgement, high integrity, and clear communication skills.One thing to highlight is that we are both open to and enthusiastic about candidates who want to pursue significant changes to CEA. This might include:Spinning off or shutting down programs, or starting new programs,Focusing on specific cause areas, or on promoting general EA principles,Trying to build something more like a mass movement or trying to be more selective and focused,Significant staffing changes,Changing CEA’s name.We’re open to such significant changes because:We think that organizations tend to be more effective when they are led by people with a strong vision for the organization, and the leadership has leeway to execute on that vision.We think that much of CEA’s historical work has been highly effective, but we believe there are many alternative opportunities CEA could pursue (for instance, cause-area-specific community building, or focusing more on a subset of our programs), and that some of these could potentially have an even higher impact.However, we are also open to candidates who don’t make radical changes, but continue to build on CEA’s work to date.It is not a requirement that:The candidate has experience working in effective altruism (that might constrain our search space too far).They are an unalloyed fan of the effective altruism community as it exists; we think people who most notice the flaws in things may be best placed to help improve them.They are located in Oxford, or anywhere else in particular: we are a remote-first team. It is highly desirable for candidates to be able to have working hours that overlap with standard working hours between Pacific and British time though.Our processAdvisorsTo increase diversity of viewpoints feeding into the search, and add people with additional experience running complicated executive hiring rounds, we have added advisors to the search committee, who we loop into our decision-making and ask for input.We invited the following people, who have all now accepted:James Snowden, who has worked in global health and wellbeing for a number of different organizations including GWWC and GiveWell, and is now a program officer at Open Philanthropy.An advisor who works outside effective altruism and doesn’t want to be publicly listed. They have corporate experience hiring and managing hundreds of people. This advisor has a “principles first” (relatively cause agnostic) approach to cause prioritization.Caitlin Elizondo, who is Head of People Operations at CEA. She has run a large number of hiring rounds and has a great understanding of CEA’s staff and history.Finding candidatesWe have reached out via email to about 100 people to ask for candidate recommendations and feedback on the process. We’ve also asked 80,000 Hours to help with headhunting for the role.We are also seeking candidate suggestions and feedback from Forum readers (you!).Assessing candidatesAt least one committee member will go through all the suggestions, and make a longlist. Then the committee and ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggest candidates for CEA's next Executive Director, published by MaxDalton on April 26, 2023 on The Effective Altruism Forum.As previously mentioned on the Forum, the Centre for Effective Altruism (CEA) is searching for a new Executive Director.We (Claire Zabel, Max Dalton, and Michelle Hutchinson) have been appointed by the Effective Ventures boards to lead this search and make recommendations to the boards.We wanted to give you all a high-level update on the process, and the opportunity to suggest candidates and give input.What we’re looking forWe’re looking for someone:With a compelling strategy for CEA,Who is dedicated to the mission set out in their strategy,Who has the management experience and leadership ability to execute on that vision,With strong judgement, high integrity, and clear communication skills.One thing to highlight is that we are both open to and enthusiastic about candidates who want to pursue significant changes to CEA. This might include:Spinning off or shutting down programs, or starting new programs,Focusing on specific cause areas, or on promoting general EA principles,Trying to build something more like a mass movement or trying to be more selective and focused,Significant staffing changes,Changing CEA’s name.We’re open to such significant changes because:We think that organizations tend to be more effective when they are led by people with a strong vision for the organization, and the leadership has leeway to execute on that vision.We think that much of CEA’s historical work has been highly effective, but we believe there are many alternative opportunities CEA could pursue (for instance, cause-area-specific community building, or focusing more on a subset of our programs), and that some of these could potentially have an even higher impact.However, we are also open to candidates who don’t make radical changes, but continue to build on CEA’s work to date.It is not a requirement that:The candidate has experience working in effective altruism (that might constrain our search space too far).They are an unalloyed fan of the effective altruism community as it exists; we think people who most notice the flaws in things may be best placed to help improve them.They are located in Oxford, or anywhere else in particular: we are a remote-first team. It is highly desirable for candidates to be able to have working hours that overlap with standard working hours between Pacific and British time though.Our processAdvisorsTo increase diversity of viewpoints feeding into the search, and add people with additional experience running complicated executive hiring rounds, we have added advisors to the search committee, who we loop into our decision-making and ask for input.We invited the following people, who have all now accepted:James Snowden, who has worked in global health and wellbeing for a number of different organizations including GWWC and GiveWell, and is now a program officer at Open Philanthropy.An advisor who works outside effective altruism and doesn’t want to be publicly listed. They have corporate experience hiring and managing hundreds of people. This advisor has a “principles first” (relatively cause agnostic) approach to cause prioritization.Caitlin Elizondo, who is Head of People Operations at CEA. She has run a large number of hiring rounds and has a great understanding of CEA’s staff and history.Finding candidatesWe have reached out via email to about 100 people to ask for candidate recommendations and feedback on the process. We’ve also asked 80,000 Hours to help with headhunting for the role.We are also seeking candidate suggestions and feedback from Forum readers (you!).Assessing candidatesAt least one committee member will go through all the suggestions, and make a longlist. Then the committee and ...]]>
MaxDalton https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:32 None full 5732
MCywanQxqsyorfeQN_NL_EA_EA EA - Growing the Animal Advocacy Community in China - Engaging Stakeholders in Research for Improved Effectiveness by Jack S Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Growing the Animal Advocacy Community in China - Engaging Stakeholders in Research for Improved Effectiveness, published by Jack S on April 25, 2023 on The Effective Altruism Forum.SummaryThis is the second in a series of blog/ forum posts that looks at the application of stakeholder-engaged research methods in EA-related cause areas. This post looks at how Good Growth has been applying these methods to supporting the community concerned about farmed animal welfare and alternative protein in ChinaImproving research in the Chinese animal advocacy space should be a priority, in order to improve our theories of change and support the Chinese advocacy communityWe think that stakeholder-engaged research using mixed methods is particularly suitable for farmed animal welfare, community building, and alternative protein research in Asia, because there are large gaps in our theories of change, and a range of stakeholders whose input can provide valueWe describe two of our studies where we have used stakeholder-engaged methods to understand advocates and consumers in ChinaThese findings can help to refine the strategies and pathways of organisations focused on these cause areasWe invite EAs to explore these methods. If you're interested in doing so reach out to us at team@goodgrowth.io or DM us (Jack or Jah Ying) on the EA forumIntroductionIn the last post, we introduced our perspective on stakeholder-engaged research methods we think might be neglected in EA, and we put the spotlight on some of our past research in the community building/ meta-EA area.To recap, too much research is produced without engaging the people who might use or be informed by it, reducing both the quality of the research and limiting the potential for impact on the world. To resolve this issues, we should find creative ways to involve various stakeholders, such as implementing organisations, policymakers, and the general public, throughout the process of producing and disseminating research.This is not a new idea. A variety of terms are used to describe the idea of doing research with stakeholders playing a significant role - some of the more well-known terms include Mode 2 research, research co-creation and community-based participatory research. We use the term stakeholder-engaged research to encompass all of these approaches. Stakeholder-engaged research is often connected with various kinds of qualitative research, such as ethnography, participant observation, in-depth interviews (structured, semi-structured or unstructured), and focus groups, where engagement is “built-in” to the methodology, but it can encompass quantitative methodologies, natural science and engineering.In this post and the next, we’re going to look at Good Growth’s current area of focus - farmed animal welfare - and the related field of alternative protein. Our work is focused on supporting the broader community of people working on animal advocacy in Asia - this post will focus specifically on our China research. We’re using ‘animal advocacy/ animal advocates’ as an umbrella term to refer to all communities trying to help animals, regardless of focus on rights/ welfare or wild/ farmed animals.Why Animals in Asia/ China?Animal welfare in Asia is an important and neglected cause area: Asia is home to over 40% of farmed land animals and produces over 85% of farmed fish, the majority of which are in China. On top of the enormous welfare impacts, the growing animal industry in the region also contributes significantly to the global risks of climate change and zoonotic pandemic risk. Despite this, Asian advocates only receive an estimated 7% of global animal advocacy funding, and Chinese advocates are particularly neglected.As well as addressing welfare directly, developing and promoting alternative protein is an incr...]]>
Jack S https://forum.effectivealtruism.org/posts/MCywanQxqsyorfeQN/growing-the-animal-advocacy-community-in-china-engaging Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Growing the Animal Advocacy Community in China - Engaging Stakeholders in Research for Improved Effectiveness, published by Jack S on April 25, 2023 on The Effective Altruism Forum.SummaryThis is the second in a series of blog/ forum posts that looks at the application of stakeholder-engaged research methods in EA-related cause areas. This post looks at how Good Growth has been applying these methods to supporting the community concerned about farmed animal welfare and alternative protein in ChinaImproving research in the Chinese animal advocacy space should be a priority, in order to improve our theories of change and support the Chinese advocacy communityWe think that stakeholder-engaged research using mixed methods is particularly suitable for farmed animal welfare, community building, and alternative protein research in Asia, because there are large gaps in our theories of change, and a range of stakeholders whose input can provide valueWe describe two of our studies where we have used stakeholder-engaged methods to understand advocates and consumers in ChinaThese findings can help to refine the strategies and pathways of organisations focused on these cause areasWe invite EAs to explore these methods. If you're interested in doing so reach out to us at team@goodgrowth.io or DM us (Jack or Jah Ying) on the EA forumIntroductionIn the last post, we introduced our perspective on stakeholder-engaged research methods we think might be neglected in EA, and we put the spotlight on some of our past research in the community building/ meta-EA area.To recap, too much research is produced without engaging the people who might use or be informed by it, reducing both the quality of the research and limiting the potential for impact on the world. To resolve this issues, we should find creative ways to involve various stakeholders, such as implementing organisations, policymakers, and the general public, throughout the process of producing and disseminating research.This is not a new idea. A variety of terms are used to describe the idea of doing research with stakeholders playing a significant role - some of the more well-known terms include Mode 2 research, research co-creation and community-based participatory research. We use the term stakeholder-engaged research to encompass all of these approaches. Stakeholder-engaged research is often connected with various kinds of qualitative research, such as ethnography, participant observation, in-depth interviews (structured, semi-structured or unstructured), and focus groups, where engagement is “built-in” to the methodology, but it can encompass quantitative methodologies, natural science and engineering.In this post and the next, we’re going to look at Good Growth’s current area of focus - farmed animal welfare - and the related field of alternative protein. Our work is focused on supporting the broader community of people working on animal advocacy in Asia - this post will focus specifically on our China research. We’re using ‘animal advocacy/ animal advocates’ as an umbrella term to refer to all communities trying to help animals, regardless of focus on rights/ welfare or wild/ farmed animals.Why Animals in Asia/ China?Animal welfare in Asia is an important and neglected cause area: Asia is home to over 40% of farmed land animals and produces over 85% of farmed fish, the majority of which are in China. On top of the enormous welfare impacts, the growing animal industry in the region also contributes significantly to the global risks of climate change and zoonotic pandemic risk. Despite this, Asian advocates only receive an estimated 7% of global animal advocacy funding, and Chinese advocates are particularly neglected.As well as addressing welfare directly, developing and promoting alternative protein is an incr...]]>
Wed, 26 Apr 2023 01:18:09 +0000 EA - Growing the Animal Advocacy Community in China - Engaging Stakeholders in Research for Improved Effectiveness by Jack S Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Growing the Animal Advocacy Community in China - Engaging Stakeholders in Research for Improved Effectiveness, published by Jack S on April 25, 2023 on The Effective Altruism Forum.SummaryThis is the second in a series of blog/ forum posts that looks at the application of stakeholder-engaged research methods in EA-related cause areas. This post looks at how Good Growth has been applying these methods to supporting the community concerned about farmed animal welfare and alternative protein in ChinaImproving research in the Chinese animal advocacy space should be a priority, in order to improve our theories of change and support the Chinese advocacy communityWe think that stakeholder-engaged research using mixed methods is particularly suitable for farmed animal welfare, community building, and alternative protein research in Asia, because there are large gaps in our theories of change, and a range of stakeholders whose input can provide valueWe describe two of our studies where we have used stakeholder-engaged methods to understand advocates and consumers in ChinaThese findings can help to refine the strategies and pathways of organisations focused on these cause areasWe invite EAs to explore these methods. If you're interested in doing so reach out to us at team@goodgrowth.io or DM us (Jack or Jah Ying) on the EA forumIntroductionIn the last post, we introduced our perspective on stakeholder-engaged research methods we think might be neglected in EA, and we put the spotlight on some of our past research in the community building/ meta-EA area.To recap, too much research is produced without engaging the people who might use or be informed by it, reducing both the quality of the research and limiting the potential for impact on the world. To resolve this issues, we should find creative ways to involve various stakeholders, such as implementing organisations, policymakers, and the general public, throughout the process of producing and disseminating research.This is not a new idea. A variety of terms are used to describe the idea of doing research with stakeholders playing a significant role - some of the more well-known terms include Mode 2 research, research co-creation and community-based participatory research. We use the term stakeholder-engaged research to encompass all of these approaches. Stakeholder-engaged research is often connected with various kinds of qualitative research, such as ethnography, participant observation, in-depth interviews (structured, semi-structured or unstructured), and focus groups, where engagement is “built-in” to the methodology, but it can encompass quantitative methodologies, natural science and engineering.In this post and the next, we’re going to look at Good Growth’s current area of focus - farmed animal welfare - and the related field of alternative protein. Our work is focused on supporting the broader community of people working on animal advocacy in Asia - this post will focus specifically on our China research. We’re using ‘animal advocacy/ animal advocates’ as an umbrella term to refer to all communities trying to help animals, regardless of focus on rights/ welfare or wild/ farmed animals.Why Animals in Asia/ China?Animal welfare in Asia is an important and neglected cause area: Asia is home to over 40% of farmed land animals and produces over 85% of farmed fish, the majority of which are in China. On top of the enormous welfare impacts, the growing animal industry in the region also contributes significantly to the global risks of climate change and zoonotic pandemic risk. Despite this, Asian advocates only receive an estimated 7% of global animal advocacy funding, and Chinese advocates are particularly neglected.As well as addressing welfare directly, developing and promoting alternative protein is an incr...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Growing the Animal Advocacy Community in China - Engaging Stakeholders in Research for Improved Effectiveness, published by Jack S on April 25, 2023 on The Effective Altruism Forum.SummaryThis is the second in a series of blog/ forum posts that looks at the application of stakeholder-engaged research methods in EA-related cause areas. This post looks at how Good Growth has been applying these methods to supporting the community concerned about farmed animal welfare and alternative protein in ChinaImproving research in the Chinese animal advocacy space should be a priority, in order to improve our theories of change and support the Chinese advocacy communityWe think that stakeholder-engaged research using mixed methods is particularly suitable for farmed animal welfare, community building, and alternative protein research in Asia, because there are large gaps in our theories of change, and a range of stakeholders whose input can provide valueWe describe two of our studies where we have used stakeholder-engaged methods to understand advocates and consumers in ChinaThese findings can help to refine the strategies and pathways of organisations focused on these cause areasWe invite EAs to explore these methods. If you're interested in doing so reach out to us at team@goodgrowth.io or DM us (Jack or Jah Ying) on the EA forumIntroductionIn the last post, we introduced our perspective on stakeholder-engaged research methods we think might be neglected in EA, and we put the spotlight on some of our past research in the community building/ meta-EA area.To recap, too much research is produced without engaging the people who might use or be informed by it, reducing both the quality of the research and limiting the potential for impact on the world. To resolve this issues, we should find creative ways to involve various stakeholders, such as implementing organisations, policymakers, and the general public, throughout the process of producing and disseminating research.This is not a new idea. A variety of terms are used to describe the idea of doing research with stakeholders playing a significant role - some of the more well-known terms include Mode 2 research, research co-creation and community-based participatory research. We use the term stakeholder-engaged research to encompass all of these approaches. Stakeholder-engaged research is often connected with various kinds of qualitative research, such as ethnography, participant observation, in-depth interviews (structured, semi-structured or unstructured), and focus groups, where engagement is “built-in” to the methodology, but it can encompass quantitative methodologies, natural science and engineering.In this post and the next, we’re going to look at Good Growth’s current area of focus - farmed animal welfare - and the related field of alternative protein. Our work is focused on supporting the broader community of people working on animal advocacy in Asia - this post will focus specifically on our China research. We’re using ‘animal advocacy/ animal advocates’ as an umbrella term to refer to all communities trying to help animals, regardless of focus on rights/ welfare or wild/ farmed animals.Why Animals in Asia/ China?Animal welfare in Asia is an important and neglected cause area: Asia is home to over 40% of farmed land animals and produces over 85% of farmed fish, the majority of which are in China. On top of the enormous welfare impacts, the growing animal industry in the region also contributes significantly to the global risks of climate change and zoonotic pandemic risk. Despite this, Asian advocates only receive an estimated 7% of global animal advocacy funding, and Chinese advocates are particularly neglected.As well as addressing welfare directly, developing and promoting alternative protein is an incr...]]>
Jack S https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 34:15 None full 5728
qy25pydHAYZoCFsAG_NL_EA_EA EA - AI Safety Newsletter #3: AI policy proposals and a new challenger approaches by Oliver Z Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #3: AI policy proposals and a new challenger approaches, published by Oliver Z on April 25, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.Policy Proposals for AI SafetyCritical industries rely on the government to protect consumer safety. The FAA approves new airplane designs, the FDA tests new drugs, and the SEC and CFPB regulate risky financial instruments. Currently, there is no analogous set of regulations for AI safety.This could soon change. President Biden and other members of Congress have recently been vocal about the risks of artificial intelligence and the need for policy solutions.From guiding principles to enforceable laws. Previous work on AI policy such as the White House Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework has articulated guiding principles like interpretability, robustness, and privacy. But these recommendations are not enforceable – AI developers can simply choose to ignore them.A solution with more teeth could be on its way. Axios reports that Senator Chuck Schumer has been circulating a draft framework for AI governance among experts over the last several weeks. To help inform policy making efforts, the Department of Commerce has issued a request for comments on how to effectively regulate AI.The European Union debates narrow vs. general AI regulation. In Europe, policy conversations are centering around the EU AI Act. The Act focuses on eight “high-risk” applications of AI, including hiring, biometrics, and criminal justice. But the rise of general purpose AI systems like ChatGPT calls into question the wisdom of regulating only a handful of specific applications.An open letter signed by over 50 AI experts, including CAIS’s director, argues that the Act should also govern general purpose AI systems, holding AI developers liable for harm caused by their systems. Several members from all political blocs of the EU parliament have publicly agreed that rules are necessary for “powerful General Purpose AI systems that can be easily adapted to a multitude of purposes.”Specific policy proposals for AI safety. With politicians promising that AI regulation is coming, the key question is which proposals they will choose to carry forward into law. Here is a brief compilation of several recent sets of policy proposals:Create an AI regulatory body. A national agency focused on AI could set and enforce standards, monitor the development of powerful new models, investigate AI failures, and publish information about how to develop AI safely.Clarify legal liability for AI harm. When ChatGPT falsely accused a law professor of sexual harassment, legal scholars argued that OpenAI should face legal liability for libel and defamatory statements made by its models. Others propose AI developers should be strictly liable for harm caused by AI, but questions remain about where to draw the line between an unsafe product versus deliberate misuse.Compute governance. AI regulations could be automatically enforced by software built into the cutting edge computer chips used to train AI systems.Nuclear command and control. Despite persistent problems with the security and reliability of AI systems, some military analysts advocate using AI in the process of launching nuclear weapons. A simple proposal: Don’t give AI influence over nuclear command and control.Fund safety research. Organizations promoting work on AI safety such as NIST and NSF could use more funding from federal sources.China proposes many AI regulations. Last week, China released its own set of AI regulations that go much further than current Western efforts. Under ...]]>
Oliver Z https://forum.effectivealtruism.org/posts/qy25pydHAYZoCFsAG/ai-safety-newsletter-3-ai-policy-proposals-and-a-new Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #3: AI policy proposals and a new challenger approaches, published by Oliver Z on April 25, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.Policy Proposals for AI SafetyCritical industries rely on the government to protect consumer safety. The FAA approves new airplane designs, the FDA tests new drugs, and the SEC and CFPB regulate risky financial instruments. Currently, there is no analogous set of regulations for AI safety.This could soon change. President Biden and other members of Congress have recently been vocal about the risks of artificial intelligence and the need for policy solutions.From guiding principles to enforceable laws. Previous work on AI policy such as the White House Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework has articulated guiding principles like interpretability, robustness, and privacy. But these recommendations are not enforceable – AI developers can simply choose to ignore them.A solution with more teeth could be on its way. Axios reports that Senator Chuck Schumer has been circulating a draft framework for AI governance among experts over the last several weeks. To help inform policy making efforts, the Department of Commerce has issued a request for comments on how to effectively regulate AI.The European Union debates narrow vs. general AI regulation. In Europe, policy conversations are centering around the EU AI Act. The Act focuses on eight “high-risk” applications of AI, including hiring, biometrics, and criminal justice. But the rise of general purpose AI systems like ChatGPT calls into question the wisdom of regulating only a handful of specific applications.An open letter signed by over 50 AI experts, including CAIS’s director, argues that the Act should also govern general purpose AI systems, holding AI developers liable for harm caused by their systems. Several members from all political blocs of the EU parliament have publicly agreed that rules are necessary for “powerful General Purpose AI systems that can be easily adapted to a multitude of purposes.”Specific policy proposals for AI safety. With politicians promising that AI regulation is coming, the key question is which proposals they will choose to carry forward into law. Here is a brief compilation of several recent sets of policy proposals:Create an AI regulatory body. A national agency focused on AI could set and enforce standards, monitor the development of powerful new models, investigate AI failures, and publish information about how to develop AI safely.Clarify legal liability for AI harm. When ChatGPT falsely accused a law professor of sexual harassment, legal scholars argued that OpenAI should face legal liability for libel and defamatory statements made by its models. Others propose AI developers should be strictly liable for harm caused by AI, but questions remain about where to draw the line between an unsafe product versus deliberate misuse.Compute governance. AI regulations could be automatically enforced by software built into the cutting edge computer chips used to train AI systems.Nuclear command and control. Despite persistent problems with the security and reliability of AI systems, some military analysts advocate using AI in the process of launching nuclear weapons. A simple proposal: Don’t give AI influence over nuclear command and control.Fund safety research. Organizations promoting work on AI safety such as NIST and NSF could use more funding from federal sources.China proposes many AI regulations. Last week, China released its own set of AI regulations that go much further than current Western efforts. Under ...]]>
Tue, 25 Apr 2023 23:12:27 +0000 EA - AI Safety Newsletter #3: AI policy proposals and a new challenger approaches by Oliver Z Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #3: AI policy proposals and a new challenger approaches, published by Oliver Z on April 25, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.Policy Proposals for AI SafetyCritical industries rely on the government to protect consumer safety. The FAA approves new airplane designs, the FDA tests new drugs, and the SEC and CFPB regulate risky financial instruments. Currently, there is no analogous set of regulations for AI safety.This could soon change. President Biden and other members of Congress have recently been vocal about the risks of artificial intelligence and the need for policy solutions.From guiding principles to enforceable laws. Previous work on AI policy such as the White House Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework has articulated guiding principles like interpretability, robustness, and privacy. But these recommendations are not enforceable – AI developers can simply choose to ignore them.A solution with more teeth could be on its way. Axios reports that Senator Chuck Schumer has been circulating a draft framework for AI governance among experts over the last several weeks. To help inform policy making efforts, the Department of Commerce has issued a request for comments on how to effectively regulate AI.The European Union debates narrow vs. general AI regulation. In Europe, policy conversations are centering around the EU AI Act. The Act focuses on eight “high-risk” applications of AI, including hiring, biometrics, and criminal justice. But the rise of general purpose AI systems like ChatGPT calls into question the wisdom of regulating only a handful of specific applications.An open letter signed by over 50 AI experts, including CAIS’s director, argues that the Act should also govern general purpose AI systems, holding AI developers liable for harm caused by their systems. Several members from all political blocs of the EU parliament have publicly agreed that rules are necessary for “powerful General Purpose AI systems that can be easily adapted to a multitude of purposes.”Specific policy proposals for AI safety. With politicians promising that AI regulation is coming, the key question is which proposals they will choose to carry forward into law. Here is a brief compilation of several recent sets of policy proposals:Create an AI regulatory body. A national agency focused on AI could set and enforce standards, monitor the development of powerful new models, investigate AI failures, and publish information about how to develop AI safely.Clarify legal liability for AI harm. When ChatGPT falsely accused a law professor of sexual harassment, legal scholars argued that OpenAI should face legal liability for libel and defamatory statements made by its models. Others propose AI developers should be strictly liable for harm caused by AI, but questions remain about where to draw the line between an unsafe product versus deliberate misuse.Compute governance. AI regulations could be automatically enforced by software built into the cutting edge computer chips used to train AI systems.Nuclear command and control. Despite persistent problems with the security and reliability of AI systems, some military analysts advocate using AI in the process of launching nuclear weapons. A simple proposal: Don’t give AI influence over nuclear command and control.Fund safety research. Organizations promoting work on AI safety such as NIST and NSF could use more funding from federal sources.China proposes many AI regulations. Last week, China released its own set of AI regulations that go much further than current Western efforts. Under ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #3: AI policy proposals and a new challenger approaches, published by Oliver Z on April 25, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.Policy Proposals for AI SafetyCritical industries rely on the government to protect consumer safety. The FAA approves new airplane designs, the FDA tests new drugs, and the SEC and CFPB regulate risky financial instruments. Currently, there is no analogous set of regulations for AI safety.This could soon change. President Biden and other members of Congress have recently been vocal about the risks of artificial intelligence and the need for policy solutions.From guiding principles to enforceable laws. Previous work on AI policy such as the White House Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework has articulated guiding principles like interpretability, robustness, and privacy. But these recommendations are not enforceable – AI developers can simply choose to ignore them.A solution with more teeth could be on its way. Axios reports that Senator Chuck Schumer has been circulating a draft framework for AI governance among experts over the last several weeks. To help inform policy making efforts, the Department of Commerce has issued a request for comments on how to effectively regulate AI.The European Union debates narrow vs. general AI regulation. In Europe, policy conversations are centering around the EU AI Act. The Act focuses on eight “high-risk” applications of AI, including hiring, biometrics, and criminal justice. But the rise of general purpose AI systems like ChatGPT calls into question the wisdom of regulating only a handful of specific applications.An open letter signed by over 50 AI experts, including CAIS’s director, argues that the Act should also govern general purpose AI systems, holding AI developers liable for harm caused by their systems. Several members from all political blocs of the EU parliament have publicly agreed that rules are necessary for “powerful General Purpose AI systems that can be easily adapted to a multitude of purposes.”Specific policy proposals for AI safety. With politicians promising that AI regulation is coming, the key question is which proposals they will choose to carry forward into law. Here is a brief compilation of several recent sets of policy proposals:Create an AI regulatory body. A national agency focused on AI could set and enforce standards, monitor the development of powerful new models, investigate AI failures, and publish information about how to develop AI safely.Clarify legal liability for AI harm. When ChatGPT falsely accused a law professor of sexual harassment, legal scholars argued that OpenAI should face legal liability for libel and defamatory statements made by its models. Others propose AI developers should be strictly liable for harm caused by AI, but questions remain about where to draw the line between an unsafe product versus deliberate misuse.Compute governance. AI regulations could be automatically enforced by software built into the cutting edge computer chips used to train AI systems.Nuclear command and control. Despite persistent problems with the security and reliability of AI systems, some military analysts advocate using AI in the process of launching nuclear weapons. A simple proposal: Don’t give AI influence over nuclear command and control.Fund safety research. Organizations promoting work on AI safety such as NIST and NSF could use more funding from federal sources.China proposes many AI regulations. Last week, China released its own set of AI regulations that go much further than current Western efforts. Under ...]]>
Oliver Z https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:04 None full 5725
MScWCZ3FKFQfGft2k_NL_EA_EA EA - EAGxNordics Unofficial Review Thread by Robert Praas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGxNordics Unofficial Review Thread, published by Robert Praas on April 25, 2023 on The Effective Altruism Forum.I am usually really curious to get a taste of the overall atmosphere and insights gained from EAGs or EAGx events that I don’t attend. These gatherings, which host hundreds or even thousands of Effective Altruists, serve as valuable opportunities to exchange knowledge and potentially offer a snapshot of the most pressing EA themes and current projects. I attended EAGxNordics and, as a student, I will share my observations in a bullet-point format. Thank you Adash H-Moller for great comments and suggestions. Other attendees are very welcome to add their experiences / challenge these perspectives in the comments:Lessons learned:The majority of the participants seemed to have come from (obviously) Sweden, Norway, Finland, Estonia, Denmark and The Netherlands. Small countries with relatively tight EA communities.I was particularly impressed with the line-up of speakers from non-EA-labeled think tanks and institutes - I think it provides a strong benefit, especially to those EAs who are quite familiar with EA but would not find out about these adjacent initiatives. It also serves to reduce the extent to which we're in our own bubble.I talked to numerous participants of the Future Academy - who all learned about EA through that program. They shared great experiences in policy, entrepreneurship and education (from before they knew about EA) and I think they are a great addition to the community.Attendees can be more ambitious: Both in their conference experience and in their approach to EA. I spoke to too many students who had 5 1on1s planned, whereas these are regarded as one of the best ways to operate during a conference. Also, in terms of the career plans and EA projects I asked about, I would have loved to see bigger goals then the ones I heard.I attended talks by employees of (the) GFI, Charity Entrepreneurship and The Simon Institute. The things they had in common:They work on problems that are highly neglected (One speaker cited from a podcast: “No one is coming, it is up to us”)They do their homework thoroughlyA key factor for their impact is their cooperation with local NGOs, governments and intergovernmental organizations.(Suggested by Adash) The talk by an employee of Nähtamatud Loomad about ‘invisible animals’ was great and provided useful insight into what corporate lobbying actually looks like on the ground - I think specific, object-level content is great for keeping us grounded.There could be more focus on analyzing EA as a community and considering what EA needs more of / needs to do differently, I asked a few people exactly those questions.David Nash mentioned a range of critiques:EA should maybefocus more on professionals to fill existing vacancies and close the mentorship gapbe more centralizedConsider more movement building on the field specific levelleverage the value of it being a network moreFor the rest our fellow vikings and visitors tended to be less vocal in direct criticismA lot of people talked about AI SafetyI felt there was a large group of students who were excited about contributing to this fieldParticipants with other backgrounds mentioned this as well and multiple participants voiced the preference for (a) more balanced content/narrative around topics like global development, animal welfare, etc.(Suggested by Adash: N is small so take with a pinch of salt) I found the conversations I had with some early-career AI safety enthusiasts to show a lack of understanding of paths to x-risk and criticisms of key assumptions. I’m wondering if the early-stage AI field-building funnel might cause an echo chamber of unexamined AI panic that undermines general epistemics and cause-neutral principles.Thanks for list...]]>
Robert Praas https://forum.effectivealtruism.org/posts/MScWCZ3FKFQfGft2k/eagxnordics-unofficial-review-thread Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGxNordics Unofficial Review Thread, published by Robert Praas on April 25, 2023 on The Effective Altruism Forum.I am usually really curious to get a taste of the overall atmosphere and insights gained from EAGs or EAGx events that I don’t attend. These gatherings, which host hundreds or even thousands of Effective Altruists, serve as valuable opportunities to exchange knowledge and potentially offer a snapshot of the most pressing EA themes and current projects. I attended EAGxNordics and, as a student, I will share my observations in a bullet-point format. Thank you Adash H-Moller for great comments and suggestions. Other attendees are very welcome to add their experiences / challenge these perspectives in the comments:Lessons learned:The majority of the participants seemed to have come from (obviously) Sweden, Norway, Finland, Estonia, Denmark and The Netherlands. Small countries with relatively tight EA communities.I was particularly impressed with the line-up of speakers from non-EA-labeled think tanks and institutes - I think it provides a strong benefit, especially to those EAs who are quite familiar with EA but would not find out about these adjacent initiatives. It also serves to reduce the extent to which we're in our own bubble.I talked to numerous participants of the Future Academy - who all learned about EA through that program. They shared great experiences in policy, entrepreneurship and education (from before they knew about EA) and I think they are a great addition to the community.Attendees can be more ambitious: Both in their conference experience and in their approach to EA. I spoke to too many students who had 5 1on1s planned, whereas these are regarded as one of the best ways to operate during a conference. Also, in terms of the career plans and EA projects I asked about, I would have loved to see bigger goals then the ones I heard.I attended talks by employees of (the) GFI, Charity Entrepreneurship and The Simon Institute. The things they had in common:They work on problems that are highly neglected (One speaker cited from a podcast: “No one is coming, it is up to us”)They do their homework thoroughlyA key factor for their impact is their cooperation with local NGOs, governments and intergovernmental organizations.(Suggested by Adash) The talk by an employee of Nähtamatud Loomad about ‘invisible animals’ was great and provided useful insight into what corporate lobbying actually looks like on the ground - I think specific, object-level content is great for keeping us grounded.There could be more focus on analyzing EA as a community and considering what EA needs more of / needs to do differently, I asked a few people exactly those questions.David Nash mentioned a range of critiques:EA should maybefocus more on professionals to fill existing vacancies and close the mentorship gapbe more centralizedConsider more movement building on the field specific levelleverage the value of it being a network moreFor the rest our fellow vikings and visitors tended to be less vocal in direct criticismA lot of people talked about AI SafetyI felt there was a large group of students who were excited about contributing to this fieldParticipants with other backgrounds mentioned this as well and multiple participants voiced the preference for (a) more balanced content/narrative around topics like global development, animal welfare, etc.(Suggested by Adash: N is small so take with a pinch of salt) I found the conversations I had with some early-career AI safety enthusiasts to show a lack of understanding of paths to x-risk and criticisms of key assumptions. I’m wondering if the early-stage AI field-building funnel might cause an echo chamber of unexamined AI panic that undermines general epistemics and cause-neutral principles.Thanks for list...]]>
Tue, 25 Apr 2023 20:47:49 +0000 EA - EAGxNordics Unofficial Review Thread by Robert Praas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGxNordics Unofficial Review Thread, published by Robert Praas on April 25, 2023 on The Effective Altruism Forum.I am usually really curious to get a taste of the overall atmosphere and insights gained from EAGs or EAGx events that I don’t attend. These gatherings, which host hundreds or even thousands of Effective Altruists, serve as valuable opportunities to exchange knowledge and potentially offer a snapshot of the most pressing EA themes and current projects. I attended EAGxNordics and, as a student, I will share my observations in a bullet-point format. Thank you Adash H-Moller for great comments and suggestions. Other attendees are very welcome to add their experiences / challenge these perspectives in the comments:Lessons learned:The majority of the participants seemed to have come from (obviously) Sweden, Norway, Finland, Estonia, Denmark and The Netherlands. Small countries with relatively tight EA communities.I was particularly impressed with the line-up of speakers from non-EA-labeled think tanks and institutes - I think it provides a strong benefit, especially to those EAs who are quite familiar with EA but would not find out about these adjacent initiatives. It also serves to reduce the extent to which we're in our own bubble.I talked to numerous participants of the Future Academy - who all learned about EA through that program. They shared great experiences in policy, entrepreneurship and education (from before they knew about EA) and I think they are a great addition to the community.Attendees can be more ambitious: Both in their conference experience and in their approach to EA. I spoke to too many students who had 5 1on1s planned, whereas these are regarded as one of the best ways to operate during a conference. Also, in terms of the career plans and EA projects I asked about, I would have loved to see bigger goals then the ones I heard.I attended talks by employees of (the) GFI, Charity Entrepreneurship and The Simon Institute. The things they had in common:They work on problems that are highly neglected (One speaker cited from a podcast: “No one is coming, it is up to us”)They do their homework thoroughlyA key factor for their impact is their cooperation with local NGOs, governments and intergovernmental organizations.(Suggested by Adash) The talk by an employee of Nähtamatud Loomad about ‘invisible animals’ was great and provided useful insight into what corporate lobbying actually looks like on the ground - I think specific, object-level content is great for keeping us grounded.There could be more focus on analyzing EA as a community and considering what EA needs more of / needs to do differently, I asked a few people exactly those questions.David Nash mentioned a range of critiques:EA should maybefocus more on professionals to fill existing vacancies and close the mentorship gapbe more centralizedConsider more movement building on the field specific levelleverage the value of it being a network moreFor the rest our fellow vikings and visitors tended to be less vocal in direct criticismA lot of people talked about AI SafetyI felt there was a large group of students who were excited about contributing to this fieldParticipants with other backgrounds mentioned this as well and multiple participants voiced the preference for (a) more balanced content/narrative around topics like global development, animal welfare, etc.(Suggested by Adash: N is small so take with a pinch of salt) I found the conversations I had with some early-career AI safety enthusiasts to show a lack of understanding of paths to x-risk and criticisms of key assumptions. I’m wondering if the early-stage AI field-building funnel might cause an echo chamber of unexamined AI panic that undermines general epistemics and cause-neutral principles.Thanks for list...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGxNordics Unofficial Review Thread, published by Robert Praas on April 25, 2023 on The Effective Altruism Forum.I am usually really curious to get a taste of the overall atmosphere and insights gained from EAGs or EAGx events that I don’t attend. These gatherings, which host hundreds or even thousands of Effective Altruists, serve as valuable opportunities to exchange knowledge and potentially offer a snapshot of the most pressing EA themes and current projects. I attended EAGxNordics and, as a student, I will share my observations in a bullet-point format. Thank you Adash H-Moller for great comments and suggestions. Other attendees are very welcome to add their experiences / challenge these perspectives in the comments:Lessons learned:The majority of the participants seemed to have come from (obviously) Sweden, Norway, Finland, Estonia, Denmark and The Netherlands. Small countries with relatively tight EA communities.I was particularly impressed with the line-up of speakers from non-EA-labeled think tanks and institutes - I think it provides a strong benefit, especially to those EAs who are quite familiar with EA but would not find out about these adjacent initiatives. It also serves to reduce the extent to which we're in our own bubble.I talked to numerous participants of the Future Academy - who all learned about EA through that program. They shared great experiences in policy, entrepreneurship and education (from before they knew about EA) and I think they are a great addition to the community.Attendees can be more ambitious: Both in their conference experience and in their approach to EA. I spoke to too many students who had 5 1on1s planned, whereas these are regarded as one of the best ways to operate during a conference. Also, in terms of the career plans and EA projects I asked about, I would have loved to see bigger goals then the ones I heard.I attended talks by employees of (the) GFI, Charity Entrepreneurship and The Simon Institute. The things they had in common:They work on problems that are highly neglected (One speaker cited from a podcast: “No one is coming, it is up to us”)They do their homework thoroughlyA key factor for their impact is their cooperation with local NGOs, governments and intergovernmental organizations.(Suggested by Adash) The talk by an employee of Nähtamatud Loomad about ‘invisible animals’ was great and provided useful insight into what corporate lobbying actually looks like on the ground - I think specific, object-level content is great for keeping us grounded.There could be more focus on analyzing EA as a community and considering what EA needs more of / needs to do differently, I asked a few people exactly those questions.David Nash mentioned a range of critiques:EA should maybefocus more on professionals to fill existing vacancies and close the mentorship gapbe more centralizedConsider more movement building on the field specific levelleverage the value of it being a network moreFor the rest our fellow vikings and visitors tended to be less vocal in direct criticismA lot of people talked about AI SafetyI felt there was a large group of students who were excited about contributing to this fieldParticipants with other backgrounds mentioned this as well and multiple participants voiced the preference for (a) more balanced content/narrative around topics like global development, animal welfare, etc.(Suggested by Adash: N is small so take with a pinch of salt) I found the conversations I had with some early-career AI safety enthusiasts to show a lack of understanding of paths to x-risk and criticisms of key assumptions. I’m wondering if the early-stage AI field-building funnel might cause an echo chamber of unexamined AI panic that undermines general epistemics and cause-neutral principles.Thanks for list...]]>
Robert Praas https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:41 None full 5723
iGvRmX9L7rsYTHedR_NL_EA_EA EA - World Malaria Day: Reflecting on Past Victories and Envisioning a Malaria-Free Future by 2ndRichter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: World Malaria Day: Reflecting on Past Victories and Envisioning a Malaria-Free Future, published by 2ndRichter on April 25, 2023 on The Effective Altruism Forum.World Malaria Day, inaugurated in 2017 by the United Nations World Health Organization, reminds us of malaria’s impact on humanity and the role we can take in preventing the disease. Malaria was only eradicated from areas like Europe as recently as the 1970s, and nearly half of the world’s population was still at risk of malaria in 2021. Over 600,000 people died of malaria in 2021 and 247 million people contracted the disease in 2021—and three-quarters of those deaths were children under five.Almost half of the world’s countries have eradicated malaria since 1945, and we have reason to hope that countries still affected can eradicate it as well. With significant scientific advancements, we know that effective malaria prevention can be impactful and relatively cheap. Typical interventions to prevent and treat malaria include insecticide-treated bednets, removing standing water from affected areas, and antiviral medications—and some of these interventions are relatively cheap. Only $5 USD can provide one malaria net and $7 can protect a child from malaria through malaria chemoprevention. Roughly $5000 USD will provide enough bednets or seasonal medicine doses to save someone's life. Recent advances in vaccines against malaria and in work exploring the use of gene drives provide further hope that we could eradicate malaria from countries that are still affected.On World Malaria Day, we encourage you to donate to Giving What We Can’s fundraiser partnering with the Against Malaria Foundation and the Malaria Consortium. Malaria is preventable and treatable; a lack of resources leaves people personally affected by the disease or affected by the loss of loved ones. Your giving can directly impact the lives of those affected by malaria: if we reached the $1 million USD fundraising goal, we could directly prevent roughly 200 deaths from malaria. Put simply, this is an area where we really can make a difference.Plasmodium falciparum prevalence from 2000 to 2019. The decreasing amount of red, orange, and yellow represents the decreasing prevalence of one of the deadliest strains of malaria due to prevention efforts.Data from, animation idea by Sam Deere.Where we've beenMalaria has been a part of human history for thousands of years, from infections in ancient Rome to the infections of several U.S. presidents. Early treatment for malaria came in the form of quinine from the cinchona tree, first isolated by French chemists in 1820, and was commonly administered in the form of tonic water or the gin and tonic. French surgeon Alphonse Laveran discovered the plasmodium parasite as the cause of malaria in 1880, opening up further research that would identify antimalarials like chloroquine and DDT.Proportion of deaths from malaria to deaths from all causes in the eastern United States, 1870 US Census. From Our World In Data/Statistical Atlas from the 9th Census of the United States 1870 (published 1874).Fighting malaria was the impetus for developing public health infrastructure in a number of countries. The predecessor to the United States Centers for Disease Control was the Office of Malaria Control in War Areas, designed to limit the impact of malaria during World War II around US military bases in the Southern United States (hence its headquarters in Atlanta, Georgia rather than Washington DC).Roughly half of the world’s countries have eliminated malaria in their territories through public health efforts, including some in tropical regions where malaria is most likely to be prevalent. 79 countries eliminated malaria from 1945 to 2010, and several more since 2010. Countries must achieve at least three consecutive y...]]>
2ndRichter https://forum.effectivealtruism.org/posts/iGvRmX9L7rsYTHedR/world-malaria-day-reflecting-on-past-victories-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: World Malaria Day: Reflecting on Past Victories and Envisioning a Malaria-Free Future, published by 2ndRichter on April 25, 2023 on The Effective Altruism Forum.World Malaria Day, inaugurated in 2017 by the United Nations World Health Organization, reminds us of malaria’s impact on humanity and the role we can take in preventing the disease. Malaria was only eradicated from areas like Europe as recently as the 1970s, and nearly half of the world’s population was still at risk of malaria in 2021. Over 600,000 people died of malaria in 2021 and 247 million people contracted the disease in 2021—and three-quarters of those deaths were children under five.Almost half of the world’s countries have eradicated malaria since 1945, and we have reason to hope that countries still affected can eradicate it as well. With significant scientific advancements, we know that effective malaria prevention can be impactful and relatively cheap. Typical interventions to prevent and treat malaria include insecticide-treated bednets, removing standing water from affected areas, and antiviral medications—and some of these interventions are relatively cheap. Only $5 USD can provide one malaria net and $7 can protect a child from malaria through malaria chemoprevention. Roughly $5000 USD will provide enough bednets or seasonal medicine doses to save someone's life. Recent advances in vaccines against malaria and in work exploring the use of gene drives provide further hope that we could eradicate malaria from countries that are still affected.On World Malaria Day, we encourage you to donate to Giving What We Can’s fundraiser partnering with the Against Malaria Foundation and the Malaria Consortium. Malaria is preventable and treatable; a lack of resources leaves people personally affected by the disease or affected by the loss of loved ones. Your giving can directly impact the lives of those affected by malaria: if we reached the $1 million USD fundraising goal, we could directly prevent roughly 200 deaths from malaria. Put simply, this is an area where we really can make a difference.Plasmodium falciparum prevalence from 2000 to 2019. The decreasing amount of red, orange, and yellow represents the decreasing prevalence of one of the deadliest strains of malaria due to prevention efforts.Data from, animation idea by Sam Deere.Where we've beenMalaria has been a part of human history for thousands of years, from infections in ancient Rome to the infections of several U.S. presidents. Early treatment for malaria came in the form of quinine from the cinchona tree, first isolated by French chemists in 1820, and was commonly administered in the form of tonic water or the gin and tonic. French surgeon Alphonse Laveran discovered the plasmodium parasite as the cause of malaria in 1880, opening up further research that would identify antimalarials like chloroquine and DDT.Proportion of deaths from malaria to deaths from all causes in the eastern United States, 1870 US Census. From Our World In Data/Statistical Atlas from the 9th Census of the United States 1870 (published 1874).Fighting malaria was the impetus for developing public health infrastructure in a number of countries. The predecessor to the United States Centers for Disease Control was the Office of Malaria Control in War Areas, designed to limit the impact of malaria during World War II around US military bases in the Southern United States (hence its headquarters in Atlanta, Georgia rather than Washington DC).Roughly half of the world’s countries have eliminated malaria in their territories through public health efforts, including some in tropical regions where malaria is most likely to be prevalent. 79 countries eliminated malaria from 1945 to 2010, and several more since 2010. Countries must achieve at least three consecutive y...]]>
Tue, 25 Apr 2023 16:08:43 +0000 EA - World Malaria Day: Reflecting on Past Victories and Envisioning a Malaria-Free Future by 2ndRichter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: World Malaria Day: Reflecting on Past Victories and Envisioning a Malaria-Free Future, published by 2ndRichter on April 25, 2023 on The Effective Altruism Forum.World Malaria Day, inaugurated in 2017 by the United Nations World Health Organization, reminds us of malaria’s impact on humanity and the role we can take in preventing the disease. Malaria was only eradicated from areas like Europe as recently as the 1970s, and nearly half of the world’s population was still at risk of malaria in 2021. Over 600,000 people died of malaria in 2021 and 247 million people contracted the disease in 2021—and three-quarters of those deaths were children under five.Almost half of the world’s countries have eradicated malaria since 1945, and we have reason to hope that countries still affected can eradicate it as well. With significant scientific advancements, we know that effective malaria prevention can be impactful and relatively cheap. Typical interventions to prevent and treat malaria include insecticide-treated bednets, removing standing water from affected areas, and antiviral medications—and some of these interventions are relatively cheap. Only $5 USD can provide one malaria net and $7 can protect a child from malaria through malaria chemoprevention. Roughly $5000 USD will provide enough bednets or seasonal medicine doses to save someone's life. Recent advances in vaccines against malaria and in work exploring the use of gene drives provide further hope that we could eradicate malaria from countries that are still affected.On World Malaria Day, we encourage you to donate to Giving What We Can’s fundraiser partnering with the Against Malaria Foundation and the Malaria Consortium. Malaria is preventable and treatable; a lack of resources leaves people personally affected by the disease or affected by the loss of loved ones. Your giving can directly impact the lives of those affected by malaria: if we reached the $1 million USD fundraising goal, we could directly prevent roughly 200 deaths from malaria. Put simply, this is an area where we really can make a difference.Plasmodium falciparum prevalence from 2000 to 2019. The decreasing amount of red, orange, and yellow represents the decreasing prevalence of one of the deadliest strains of malaria due to prevention efforts.Data from, animation idea by Sam Deere.Where we've beenMalaria has been a part of human history for thousands of years, from infections in ancient Rome to the infections of several U.S. presidents. Early treatment for malaria came in the form of quinine from the cinchona tree, first isolated by French chemists in 1820, and was commonly administered in the form of tonic water or the gin and tonic. French surgeon Alphonse Laveran discovered the plasmodium parasite as the cause of malaria in 1880, opening up further research that would identify antimalarials like chloroquine and DDT.Proportion of deaths from malaria to deaths from all causes in the eastern United States, 1870 US Census. From Our World In Data/Statistical Atlas from the 9th Census of the United States 1870 (published 1874).Fighting malaria was the impetus for developing public health infrastructure in a number of countries. The predecessor to the United States Centers for Disease Control was the Office of Malaria Control in War Areas, designed to limit the impact of malaria during World War II around US military bases in the Southern United States (hence its headquarters in Atlanta, Georgia rather than Washington DC).Roughly half of the world’s countries have eliminated malaria in their territories through public health efforts, including some in tropical regions where malaria is most likely to be prevalent. 79 countries eliminated malaria from 1945 to 2010, and several more since 2010. Countries must achieve at least three consecutive y...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: World Malaria Day: Reflecting on Past Victories and Envisioning a Malaria-Free Future, published by 2ndRichter on April 25, 2023 on The Effective Altruism Forum.World Malaria Day, inaugurated in 2017 by the United Nations World Health Organization, reminds us of malaria’s impact on humanity and the role we can take in preventing the disease. Malaria was only eradicated from areas like Europe as recently as the 1970s, and nearly half of the world’s population was still at risk of malaria in 2021. Over 600,000 people died of malaria in 2021 and 247 million people contracted the disease in 2021—and three-quarters of those deaths were children under five.Almost half of the world’s countries have eradicated malaria since 1945, and we have reason to hope that countries still affected can eradicate it as well. With significant scientific advancements, we know that effective malaria prevention can be impactful and relatively cheap. Typical interventions to prevent and treat malaria include insecticide-treated bednets, removing standing water from affected areas, and antiviral medications—and some of these interventions are relatively cheap. Only $5 USD can provide one malaria net and $7 can protect a child from malaria through malaria chemoprevention. Roughly $5000 USD will provide enough bednets or seasonal medicine doses to save someone's life. Recent advances in vaccines against malaria and in work exploring the use of gene drives provide further hope that we could eradicate malaria from countries that are still affected.On World Malaria Day, we encourage you to donate to Giving What We Can’s fundraiser partnering with the Against Malaria Foundation and the Malaria Consortium. Malaria is preventable and treatable; a lack of resources leaves people personally affected by the disease or affected by the loss of loved ones. Your giving can directly impact the lives of those affected by malaria: if we reached the $1 million USD fundraising goal, we could directly prevent roughly 200 deaths from malaria. Put simply, this is an area where we really can make a difference.Plasmodium falciparum prevalence from 2000 to 2019. The decreasing amount of red, orange, and yellow represents the decreasing prevalence of one of the deadliest strains of malaria due to prevention efforts.Data from, animation idea by Sam Deere.Where we've beenMalaria has been a part of human history for thousands of years, from infections in ancient Rome to the infections of several U.S. presidents. Early treatment for malaria came in the form of quinine from the cinchona tree, first isolated by French chemists in 1820, and was commonly administered in the form of tonic water or the gin and tonic. French surgeon Alphonse Laveran discovered the plasmodium parasite as the cause of malaria in 1880, opening up further research that would identify antimalarials like chloroquine and DDT.Proportion of deaths from malaria to deaths from all causes in the eastern United States, 1870 US Census. From Our World In Data/Statistical Atlas from the 9th Census of the United States 1870 (published 1874).Fighting malaria was the impetus for developing public health infrastructure in a number of countries. The predecessor to the United States Centers for Disease Control was the Office of Malaria Control in War Areas, designed to limit the impact of malaria during World War II around US military bases in the Southern United States (hence its headquarters in Atlanta, Georgia rather than Washington DC).Roughly half of the world’s countries have eliminated malaria in their territories through public health efforts, including some in tropical regions where malaria is most likely to be prevalent. 79 countries eliminated malaria from 1945 to 2010, and several more since 2010. Countries must achieve at least three consecutive y...]]>
2ndRichter https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:34 None full 5721
epTvpAEfCY74CMdMv_NL_EA_EA EA - Student competition for drafting a treaty on moratorium of large-scale AI capabilities RandD by Nayanika Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Student competition for drafting a treaty on moratorium of large-scale AI capabilities R&D, published by Nayanika on April 24, 2023 on The Effective Altruism Forum. has announced a competition for the drafting of an international treaty on moratorium of large-scale AI capabilities research and development.The competition is open to all students of law, philosophy, and other relevant disciplines. The competition is organized by the Campaign for AI Safety, an Australian unincorporated association of people who are concerned about the risks of AI.Competition brief: The goal of the competition is to operationalize the suggestions of the article Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, including the provisions on:Shutting down large GPU and TPU clusters (the large computer farms where the most powerful AIs are refined).Prohibition of training ML models (or combinations of models) with more than 500 million parameters.Prohibition of the use of quantum computers in any AI-related activities.A general moratorium of large-scale AI capabilities research and development.Passing of national laws criminalizing the development of any form of Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI).Establishment of an international body to oversee the treaty. Effective mechanisms for enforcement of the treaty.The treaty must not expire until it is universally agreed that it is safe and ethical to resume large-scale AI capabilities research and development.Deadline for submissions: 15 June 2023.Prizes: The winner will receive a prize of AUD 4000. The runner-up will receive a prize of AUD 1000. The third place will receive a prize of AUD 500.How to participate:1)Read the competition brief above.2)Draft a treaty: The treaty should be in English and should be no longer than 10 pages. The treaty should be submitted in Word format.3)Submit your draft: Please e-mail your draft to nayanika.kundu@campaignforaisafety.org. Please include your name, university, and country in the e-mail.4)Wait for the results: The results will be announced on 1 July 2023. Judging criteriaThe judges will evaluate the drafts based on the following criteria:Clarity: The treaty should be clear and easy to understand.Legality: The treaty should be legally binding.Effectiveness: The treaty should be effective in achieving its goals.Comprehensiveness: The treaty should cover all the relevant issues.Judges’ discretion: The judges may use their discretion in evaluating the drafts. Judges’ decision is final. The judges’ decision is final and cannot be appealed. Prizes will be awarded only if submissions meet basic quality requirements for treaty drafts. By submitting a draft, you agree to publication of your draft on this website and waiving copyright to your draft.Panel of judges: We are currently assembling a panel of judges. If you are a public law professor, please e-mail us to express your interest in judging the competition.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nayanika https://forum.effectivealtruism.org/posts/epTvpAEfCY74CMdMv/student-competition-for-drafting-a-treaty-on-moratorium-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Student competition for drafting a treaty on moratorium of large-scale AI capabilities R&D, published by Nayanika on April 24, 2023 on The Effective Altruism Forum. has announced a competition for the drafting of an international treaty on moratorium of large-scale AI capabilities research and development.The competition is open to all students of law, philosophy, and other relevant disciplines. The competition is organized by the Campaign for AI Safety, an Australian unincorporated association of people who are concerned about the risks of AI.Competition brief: The goal of the competition is to operationalize the suggestions of the article Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, including the provisions on:Shutting down large GPU and TPU clusters (the large computer farms where the most powerful AIs are refined).Prohibition of training ML models (or combinations of models) with more than 500 million parameters.Prohibition of the use of quantum computers in any AI-related activities.A general moratorium of large-scale AI capabilities research and development.Passing of national laws criminalizing the development of any form of Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI).Establishment of an international body to oversee the treaty. Effective mechanisms for enforcement of the treaty.The treaty must not expire until it is universally agreed that it is safe and ethical to resume large-scale AI capabilities research and development.Deadline for submissions: 15 June 2023.Prizes: The winner will receive a prize of AUD 4000. The runner-up will receive a prize of AUD 1000. The third place will receive a prize of AUD 500.How to participate:1)Read the competition brief above.2)Draft a treaty: The treaty should be in English and should be no longer than 10 pages. The treaty should be submitted in Word format.3)Submit your draft: Please e-mail your draft to nayanika.kundu@campaignforaisafety.org. Please include your name, university, and country in the e-mail.4)Wait for the results: The results will be announced on 1 July 2023. Judging criteriaThe judges will evaluate the drafts based on the following criteria:Clarity: The treaty should be clear and easy to understand.Legality: The treaty should be legally binding.Effectiveness: The treaty should be effective in achieving its goals.Comprehensiveness: The treaty should cover all the relevant issues.Judges’ discretion: The judges may use their discretion in evaluating the drafts. Judges’ decision is final. The judges’ decision is final and cannot be appealed. Prizes will be awarded only if submissions meet basic quality requirements for treaty drafts. By submitting a draft, you agree to publication of your draft on this website and waiving copyright to your draft.Panel of judges: We are currently assembling a panel of judges. If you are a public law professor, please e-mail us to express your interest in judging the competition.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 24 Apr 2023 22:29:55 +0000 EA - Student competition for drafting a treaty on moratorium of large-scale AI capabilities RandD by Nayanika Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Student competition for drafting a treaty on moratorium of large-scale AI capabilities R&D, published by Nayanika on April 24, 2023 on The Effective Altruism Forum. has announced a competition for the drafting of an international treaty on moratorium of large-scale AI capabilities research and development.The competition is open to all students of law, philosophy, and other relevant disciplines. The competition is organized by the Campaign for AI Safety, an Australian unincorporated association of people who are concerned about the risks of AI.Competition brief: The goal of the competition is to operationalize the suggestions of the article Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, including the provisions on:Shutting down large GPU and TPU clusters (the large computer farms where the most powerful AIs are refined).Prohibition of training ML models (or combinations of models) with more than 500 million parameters.Prohibition of the use of quantum computers in any AI-related activities.A general moratorium of large-scale AI capabilities research and development.Passing of national laws criminalizing the development of any form of Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI).Establishment of an international body to oversee the treaty. Effective mechanisms for enforcement of the treaty.The treaty must not expire until it is universally agreed that it is safe and ethical to resume large-scale AI capabilities research and development.Deadline for submissions: 15 June 2023.Prizes: The winner will receive a prize of AUD 4000. The runner-up will receive a prize of AUD 1000. The third place will receive a prize of AUD 500.How to participate:1)Read the competition brief above.2)Draft a treaty: The treaty should be in English and should be no longer than 10 pages. The treaty should be submitted in Word format.3)Submit your draft: Please e-mail your draft to nayanika.kundu@campaignforaisafety.org. Please include your name, university, and country in the e-mail.4)Wait for the results: The results will be announced on 1 July 2023. Judging criteriaThe judges will evaluate the drafts based on the following criteria:Clarity: The treaty should be clear and easy to understand.Legality: The treaty should be legally binding.Effectiveness: The treaty should be effective in achieving its goals.Comprehensiveness: The treaty should cover all the relevant issues.Judges’ discretion: The judges may use their discretion in evaluating the drafts. Judges’ decision is final. The judges’ decision is final and cannot be appealed. Prizes will be awarded only if submissions meet basic quality requirements for treaty drafts. By submitting a draft, you agree to publication of your draft on this website and waiving copyright to your draft.Panel of judges: We are currently assembling a panel of judges. If you are a public law professor, please e-mail us to express your interest in judging the competition.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Student competition for drafting a treaty on moratorium of large-scale AI capabilities R&D, published by Nayanika on April 24, 2023 on The Effective Altruism Forum. has announced a competition for the drafting of an international treaty on moratorium of large-scale AI capabilities research and development.The competition is open to all students of law, philosophy, and other relevant disciplines. The competition is organized by the Campaign for AI Safety, an Australian unincorporated association of people who are concerned about the risks of AI.Competition brief: The goal of the competition is to operationalize the suggestions of the article Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, including the provisions on:Shutting down large GPU and TPU clusters (the large computer farms where the most powerful AIs are refined).Prohibition of training ML models (or combinations of models) with more than 500 million parameters.Prohibition of the use of quantum computers in any AI-related activities.A general moratorium of large-scale AI capabilities research and development.Passing of national laws criminalizing the development of any form of Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI).Establishment of an international body to oversee the treaty. Effective mechanisms for enforcement of the treaty.The treaty must not expire until it is universally agreed that it is safe and ethical to resume large-scale AI capabilities research and development.Deadline for submissions: 15 June 2023.Prizes: The winner will receive a prize of AUD 4000. The runner-up will receive a prize of AUD 1000. The third place will receive a prize of AUD 500.How to participate:1)Read the competition brief above.2)Draft a treaty: The treaty should be in English and should be no longer than 10 pages. The treaty should be submitted in Word format.3)Submit your draft: Please e-mail your draft to nayanika.kundu@campaignforaisafety.org. Please include your name, university, and country in the e-mail.4)Wait for the results: The results will be announced on 1 July 2023. Judging criteriaThe judges will evaluate the drafts based on the following criteria:Clarity: The treaty should be clear and easy to understand.Legality: The treaty should be legally binding.Effectiveness: The treaty should be effective in achieving its goals.Comprehensiveness: The treaty should cover all the relevant issues.Judges’ discretion: The judges may use their discretion in evaluating the drafts. Judges’ decision is final. The judges’ decision is final and cannot be appealed. Prizes will be awarded only if submissions meet basic quality requirements for treaty drafts. By submitting a draft, you agree to publication of your draft on this website and waiving copyright to your draft.Panel of judges: We are currently assembling a panel of judges. If you are a public law professor, please e-mail us to express your interest in judging the competition.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nayanika https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:07 None full 5729
Go5CDwyna3hAfngKP_NL_EA_EA EA - No, the EMH does not imply that markets have long AGI timelines by Jakob Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No, the EMH does not imply that markets have long AGI timelines, published by Jakob on April 24, 2023 on The Effective Altruism Forum.This post is a response to AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years from January 2023, by @basil.halperin , @J. Zachary Mazlish and @tmychow. In contrast to what they argue, I believe that interest rates are not a reliable instrument for assessing market beliefs about AI timelines - at least not the transformative AI described in the former forum post.The reason for this is that savvy investors cannot expect to get rich by betting on short AI timelines. Such a bet simply ties up your capital until it loses its value (either because you're dead, or because you're so rich that it hardly matters). Therefore, a savvy investor with short timelines will simply increase their own consumption. No individual can increase their own consumption enough to affect global capital markets - rather, something like tens of millions of people would need to increase their consumption for interest rates to be affected. Interest rates can therefore remain low even if TAI is near, unless all of these people get worried enough about AI to change their savings rate.Replaying the argument from AGI and the EMHMy understanding of the argument in the former post goes like this:AI is defined as transformative if it either causes an existential risk, or explosive growth (which, presumably, would be broad-based enough that the typical investor would expect to partake in it)If we knew that transformative AI was near, people would not need to save as much as they do today - since they would expect to either be dead or very, very rich in the near futureIf people save less, capital supply goes down, and interest rates go up. Therefore, if we knew transformative AI was near, interest rates should be high.Even if we allow for uncertainty around the timing of transformative AI, a significant probability of near-term transformative AI should increase interest rates, since the equilibrium condition is that the expected utility of consumption today and in the future should be equal (reflecting the full distribution of outcomes).Since interest rates aren't high, if you assume market efficiency, this is evidence against near-term transformative AI.My high-level responseI will start by granting the definition of transformative AI as either an existential risk or a driver of explosive (and broadly shared) growth. This means that I accept the premise that the marginal value of additional money post-TAI is much, much lower than the marginal value of money today, for any relevant investor.Second, I consider the dynamics of how prices in markets can change over time. This is something which the original post glosses over a bit, which is forgivable, since the main social value of markets is that they can work like a black box information aggregation mechanism where you don't need to think too carefully about the gears. However, in this case, this is a crucial reason why their argument seems to fail.Let's consider two possible ways the price of an asset can change. Either, some information becomes available to all players in the market, and they uniformly update their assessment of the value of the asset, and adjust their market positions accordingly. Alternatively, some investor gains private information which indicates the asset is mispriced, and they take a big, directional bet based on said information, unilaterally moving the price. These two situations are extremes on a spectrum, and in most cases, price changes will reflect a situation somewhere in between these extremes.My argument is that this matters in the special context of interest rates. After all, interest rates reflect the aggregate capital supply in the world....]]>
Jakob https://forum.effectivealtruism.org/posts/Go5CDwyna3hAfngKP/no-the-emh-does-not-imply-that-markets-have-long-agi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No, the EMH does not imply that markets have long AGI timelines, published by Jakob on April 24, 2023 on The Effective Altruism Forum.This post is a response to AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years from January 2023, by @basil.halperin , @J. Zachary Mazlish and @tmychow. In contrast to what they argue, I believe that interest rates are not a reliable instrument for assessing market beliefs about AI timelines - at least not the transformative AI described in the former forum post.The reason for this is that savvy investors cannot expect to get rich by betting on short AI timelines. Such a bet simply ties up your capital until it loses its value (either because you're dead, or because you're so rich that it hardly matters). Therefore, a savvy investor with short timelines will simply increase their own consumption. No individual can increase their own consumption enough to affect global capital markets - rather, something like tens of millions of people would need to increase their consumption for interest rates to be affected. Interest rates can therefore remain low even if TAI is near, unless all of these people get worried enough about AI to change their savings rate.Replaying the argument from AGI and the EMHMy understanding of the argument in the former post goes like this:AI is defined as transformative if it either causes an existential risk, or explosive growth (which, presumably, would be broad-based enough that the typical investor would expect to partake in it)If we knew that transformative AI was near, people would not need to save as much as they do today - since they would expect to either be dead or very, very rich in the near futureIf people save less, capital supply goes down, and interest rates go up. Therefore, if we knew transformative AI was near, interest rates should be high.Even if we allow for uncertainty around the timing of transformative AI, a significant probability of near-term transformative AI should increase interest rates, since the equilibrium condition is that the expected utility of consumption today and in the future should be equal (reflecting the full distribution of outcomes).Since interest rates aren't high, if you assume market efficiency, this is evidence against near-term transformative AI.My high-level responseI will start by granting the definition of transformative AI as either an existential risk or a driver of explosive (and broadly shared) growth. This means that I accept the premise that the marginal value of additional money post-TAI is much, much lower than the marginal value of money today, for any relevant investor.Second, I consider the dynamics of how prices in markets can change over time. This is something which the original post glosses over a bit, which is forgivable, since the main social value of markets is that they can work like a black box information aggregation mechanism where you don't need to think too carefully about the gears. However, in this case, this is a crucial reason why their argument seems to fail.Let's consider two possible ways the price of an asset can change. Either, some information becomes available to all players in the market, and they uniformly update their assessment of the value of the asset, and adjust their market positions accordingly. Alternatively, some investor gains private information which indicates the asset is mispriced, and they take a big, directional bet based on said information, unilaterally moving the price. These two situations are extremes on a spectrum, and in most cases, price changes will reflect a situation somewhere in between these extremes.My argument is that this matters in the special context of interest rates. After all, interest rates reflect the aggregate capital supply in the world....]]>
Mon, 24 Apr 2023 09:42:42 +0000 EA - No, the EMH does not imply that markets have long AGI timelines by Jakob Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No, the EMH does not imply that markets have long AGI timelines, published by Jakob on April 24, 2023 on The Effective Altruism Forum.This post is a response to AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years from January 2023, by @basil.halperin , @J. Zachary Mazlish and @tmychow. In contrast to what they argue, I believe that interest rates are not a reliable instrument for assessing market beliefs about AI timelines - at least not the transformative AI described in the former forum post.The reason for this is that savvy investors cannot expect to get rich by betting on short AI timelines. Such a bet simply ties up your capital until it loses its value (either because you're dead, or because you're so rich that it hardly matters). Therefore, a savvy investor with short timelines will simply increase their own consumption. No individual can increase their own consumption enough to affect global capital markets - rather, something like tens of millions of people would need to increase their consumption for interest rates to be affected. Interest rates can therefore remain low even if TAI is near, unless all of these people get worried enough about AI to change their savings rate.Replaying the argument from AGI and the EMHMy understanding of the argument in the former post goes like this:AI is defined as transformative if it either causes an existential risk, or explosive growth (which, presumably, would be broad-based enough that the typical investor would expect to partake in it)If we knew that transformative AI was near, people would not need to save as much as they do today - since they would expect to either be dead or very, very rich in the near futureIf people save less, capital supply goes down, and interest rates go up. Therefore, if we knew transformative AI was near, interest rates should be high.Even if we allow for uncertainty around the timing of transformative AI, a significant probability of near-term transformative AI should increase interest rates, since the equilibrium condition is that the expected utility of consumption today and in the future should be equal (reflecting the full distribution of outcomes).Since interest rates aren't high, if you assume market efficiency, this is evidence against near-term transformative AI.My high-level responseI will start by granting the definition of transformative AI as either an existential risk or a driver of explosive (and broadly shared) growth. This means that I accept the premise that the marginal value of additional money post-TAI is much, much lower than the marginal value of money today, for any relevant investor.Second, I consider the dynamics of how prices in markets can change over time. This is something which the original post glosses over a bit, which is forgivable, since the main social value of markets is that they can work like a black box information aggregation mechanism where you don't need to think too carefully about the gears. However, in this case, this is a crucial reason why their argument seems to fail.Let's consider two possible ways the price of an asset can change. Either, some information becomes available to all players in the market, and they uniformly update their assessment of the value of the asset, and adjust their market positions accordingly. Alternatively, some investor gains private information which indicates the asset is mispriced, and they take a big, directional bet based on said information, unilaterally moving the price. These two situations are extremes on a spectrum, and in most cases, price changes will reflect a situation somewhere in between these extremes.My argument is that this matters in the special context of interest rates. After all, interest rates reflect the aggregate capital supply in the world....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No, the EMH does not imply that markets have long AGI timelines, published by Jakob on April 24, 2023 on The Effective Altruism Forum.This post is a response to AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years from January 2023, by @basil.halperin , @J. Zachary Mazlish and @tmychow. In contrast to what they argue, I believe that interest rates are not a reliable instrument for assessing market beliefs about AI timelines - at least not the transformative AI described in the former forum post.The reason for this is that savvy investors cannot expect to get rich by betting on short AI timelines. Such a bet simply ties up your capital until it loses its value (either because you're dead, or because you're so rich that it hardly matters). Therefore, a savvy investor with short timelines will simply increase their own consumption. No individual can increase their own consumption enough to affect global capital markets - rather, something like tens of millions of people would need to increase their consumption for interest rates to be affected. Interest rates can therefore remain low even if TAI is near, unless all of these people get worried enough about AI to change their savings rate.Replaying the argument from AGI and the EMHMy understanding of the argument in the former post goes like this:AI is defined as transformative if it either causes an existential risk, or explosive growth (which, presumably, would be broad-based enough that the typical investor would expect to partake in it)If we knew that transformative AI was near, people would not need to save as much as they do today - since they would expect to either be dead or very, very rich in the near futureIf people save less, capital supply goes down, and interest rates go up. Therefore, if we knew transformative AI was near, interest rates should be high.Even if we allow for uncertainty around the timing of transformative AI, a significant probability of near-term transformative AI should increase interest rates, since the equilibrium condition is that the expected utility of consumption today and in the future should be equal (reflecting the full distribution of outcomes).Since interest rates aren't high, if you assume market efficiency, this is evidence against near-term transformative AI.My high-level responseI will start by granting the definition of transformative AI as either an existential risk or a driver of explosive (and broadly shared) growth. This means that I accept the premise that the marginal value of additional money post-TAI is much, much lower than the marginal value of money today, for any relevant investor.Second, I consider the dynamics of how prices in markets can change over time. This is something which the original post glosses over a bit, which is forgivable, since the main social value of markets is that they can work like a black box information aggregation mechanism where you don't need to think too carefully about the gears. However, in this case, this is a crucial reason why their argument seems to fail.Let's consider two possible ways the price of an asset can change. Either, some information becomes available to all players in the market, and they uniformly update their assessment of the value of the asset, and adjust their market positions accordingly. Alternatively, some investor gains private information which indicates the asset is mispriced, and they take a big, directional bet based on said information, unilaterally moving the price. These two situations are extremes on a spectrum, and in most cases, price changes will reflect a situation somewhere in between these extremes.My argument is that this matters in the special context of interest rates. After all, interest rates reflect the aggregate capital supply in the world....]]>
Jakob https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:26 None full 5713
e3eGKSktMq8KXHfuk_NL_EA_EA EA - David Edmonds's biography of Parfit is out by Pablo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: David Edmonds's biography of Parfit is out, published by Pablo on April 23, 2023 on The Effective Altruism Forum.Parfit: A Philosopher and His Mission to Save Morality, by David Edmonds, was published a few days ago (Amazon | Audible | Goodreads | Oxford).The book is worth reading in full, but here are some passages that focus specifically on Parfit's involvement with effective altruism:In his retirement, an organization with which he felt a natural affinity approached him for support. The Oxford-based charity Giving What We Can (GWWC) was set up by Toby Ord, an Australian-born philosopher, and another young philosopher, Will MacAskill. The charity's direct inspiration came from Peter Singer. Singer had devised a thought experiment that was simple but devastatingly effective in making people reflect on their behavior and their attitude to those in need. [...]This thought experiment has generated a significant secondary literature. Certainly, the GWWC initiators found it compelling. Their mission was to persuade people to give more of their income away and to donate to effective organizations that could make a real difference. [...]There were spinoff organizations from GWWC such as 80,000 Hours. The number refers to the rough number of hours we might have in our career, and 80,000 Hours was set up to research how people can most effectively devote their time rather than their money to tackling the world's most pressing problems. In 2012, the Centre for Effective Altruism was established to incorporate both GWWC and 80,000 Hours.Since its launch, the effective altruism movement has grown slowly but steadily. Most of the early backers were idealistic young postgraduates, many of them philosophers. If Singer was the intellectual father of the movement, Parfit was its grandfather. It became an in-joke among some members that anyone who came to work for GWWC had to possess a copy of Reasons and Persons. Some owned two copies: one for home, one for the office. But it took Parfit until 2014 to sign the GWWC pledge. And he agreed to do so only after wrangling over the wording.Initially, those who joined the GWWC campaign were required to make a public pledge to donate at least 10% of their income to charities that worked to relieve poverty. Parfit had several issues with this. For reasons the organizers never understood, he said that the participants had to make a promise rather than a pledge. He may have believed that a promise entailed a deeper level of commitment. Nor was he keen on the name "Giving What We Can". 10% of a person's income is certainly a generous sum, and in line with what adherents to some world religions are expected to give away. Nevertheless, Parfit pointed out, it was obvious that people could donate more. [...]Parfit also caviled at the word 'giving'. He believed this implied we are morally entitled to what we hand over, and morally entitled to our wealth and high incomes. This he rejected. Well-off people in the developed world were merely lucky that they were born into rich societies: they did not deserve their fortune.Linguistic quibbles aside, the issue that Parfit felt most strongly about was the movement's sole focus, initially, on poverty and development. While it was indeed pressing to relieve the suffering of people living today, Parfit argued, there should be an option that at least some of the money donated be earmarked for the problems of tomorrow. The human population has risen to a billion, and faces existential risks such as meteors, nuclear war, bioterrorism, pandemics, and climate change. Parfit claimed that between (A) peace, (B) a war which kills 7.5 billion people and (C) a war which killed everyone, the difference between (B) and (C) was much greater than the difference between (A) and (B). [...]Given how grim human exist...]]>
Pablo https://forum.effectivealtruism.org/posts/e3eGKSktMq8KXHfuk/david-edmonds-s-biography-of-parfit-is-out Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: David Edmonds's biography of Parfit is out, published by Pablo on April 23, 2023 on The Effective Altruism Forum.Parfit: A Philosopher and His Mission to Save Morality, by David Edmonds, was published a few days ago (Amazon | Audible | Goodreads | Oxford).The book is worth reading in full, but here are some passages that focus specifically on Parfit's involvement with effective altruism:In his retirement, an organization with which he felt a natural affinity approached him for support. The Oxford-based charity Giving What We Can (GWWC) was set up by Toby Ord, an Australian-born philosopher, and another young philosopher, Will MacAskill. The charity's direct inspiration came from Peter Singer. Singer had devised a thought experiment that was simple but devastatingly effective in making people reflect on their behavior and their attitude to those in need. [...]This thought experiment has generated a significant secondary literature. Certainly, the GWWC initiators found it compelling. Their mission was to persuade people to give more of their income away and to donate to effective organizations that could make a real difference. [...]There were spinoff organizations from GWWC such as 80,000 Hours. The number refers to the rough number of hours we might have in our career, and 80,000 Hours was set up to research how people can most effectively devote their time rather than their money to tackling the world's most pressing problems. In 2012, the Centre for Effective Altruism was established to incorporate both GWWC and 80,000 Hours.Since its launch, the effective altruism movement has grown slowly but steadily. Most of the early backers were idealistic young postgraduates, many of them philosophers. If Singer was the intellectual father of the movement, Parfit was its grandfather. It became an in-joke among some members that anyone who came to work for GWWC had to possess a copy of Reasons and Persons. Some owned two copies: one for home, one for the office. But it took Parfit until 2014 to sign the GWWC pledge. And he agreed to do so only after wrangling over the wording.Initially, those who joined the GWWC campaign were required to make a public pledge to donate at least 10% of their income to charities that worked to relieve poverty. Parfit had several issues with this. For reasons the organizers never understood, he said that the participants had to make a promise rather than a pledge. He may have believed that a promise entailed a deeper level of commitment. Nor was he keen on the name "Giving What We Can". 10% of a person's income is certainly a generous sum, and in line with what adherents to some world religions are expected to give away. Nevertheless, Parfit pointed out, it was obvious that people could donate more. [...]Parfit also caviled at the word 'giving'. He believed this implied we are morally entitled to what we hand over, and morally entitled to our wealth and high incomes. This he rejected. Well-off people in the developed world were merely lucky that they were born into rich societies: they did not deserve their fortune.Linguistic quibbles aside, the issue that Parfit felt most strongly about was the movement's sole focus, initially, on poverty and development. While it was indeed pressing to relieve the suffering of people living today, Parfit argued, there should be an option that at least some of the money donated be earmarked for the problems of tomorrow. The human population has risen to a billion, and faces existential risks such as meteors, nuclear war, bioterrorism, pandemics, and climate change. Parfit claimed that between (A) peace, (B) a war which kills 7.5 billion people and (C) a war which killed everyone, the difference between (B) and (C) was much greater than the difference between (A) and (B). [...]Given how grim human exist...]]>
Sun, 23 Apr 2023 14:50:02 +0000 EA - David Edmonds's biography of Parfit is out by Pablo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: David Edmonds's biography of Parfit is out, published by Pablo on April 23, 2023 on The Effective Altruism Forum.Parfit: A Philosopher and His Mission to Save Morality, by David Edmonds, was published a few days ago (Amazon | Audible | Goodreads | Oxford).The book is worth reading in full, but here are some passages that focus specifically on Parfit's involvement with effective altruism:In his retirement, an organization with which he felt a natural affinity approached him for support. The Oxford-based charity Giving What We Can (GWWC) was set up by Toby Ord, an Australian-born philosopher, and another young philosopher, Will MacAskill. The charity's direct inspiration came from Peter Singer. Singer had devised a thought experiment that was simple but devastatingly effective in making people reflect on their behavior and their attitude to those in need. [...]This thought experiment has generated a significant secondary literature. Certainly, the GWWC initiators found it compelling. Their mission was to persuade people to give more of their income away and to donate to effective organizations that could make a real difference. [...]There were spinoff organizations from GWWC such as 80,000 Hours. The number refers to the rough number of hours we might have in our career, and 80,000 Hours was set up to research how people can most effectively devote their time rather than their money to tackling the world's most pressing problems. In 2012, the Centre for Effective Altruism was established to incorporate both GWWC and 80,000 Hours.Since its launch, the effective altruism movement has grown slowly but steadily. Most of the early backers were idealistic young postgraduates, many of them philosophers. If Singer was the intellectual father of the movement, Parfit was its grandfather. It became an in-joke among some members that anyone who came to work for GWWC had to possess a copy of Reasons and Persons. Some owned two copies: one for home, one for the office. But it took Parfit until 2014 to sign the GWWC pledge. And he agreed to do so only after wrangling over the wording.Initially, those who joined the GWWC campaign were required to make a public pledge to donate at least 10% of their income to charities that worked to relieve poverty. Parfit had several issues with this. For reasons the organizers never understood, he said that the participants had to make a promise rather than a pledge. He may have believed that a promise entailed a deeper level of commitment. Nor was he keen on the name "Giving What We Can". 10% of a person's income is certainly a generous sum, and in line with what adherents to some world religions are expected to give away. Nevertheless, Parfit pointed out, it was obvious that people could donate more. [...]Parfit also caviled at the word 'giving'. He believed this implied we are morally entitled to what we hand over, and morally entitled to our wealth and high incomes. This he rejected. Well-off people in the developed world were merely lucky that they were born into rich societies: they did not deserve their fortune.Linguistic quibbles aside, the issue that Parfit felt most strongly about was the movement's sole focus, initially, on poverty and development. While it was indeed pressing to relieve the suffering of people living today, Parfit argued, there should be an option that at least some of the money donated be earmarked for the problems of tomorrow. The human population has risen to a billion, and faces existential risks such as meteors, nuclear war, bioterrorism, pandemics, and climate change. Parfit claimed that between (A) peace, (B) a war which kills 7.5 billion people and (C) a war which killed everyone, the difference between (B) and (C) was much greater than the difference between (A) and (B). [...]Given how grim human exist...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: David Edmonds's biography of Parfit is out, published by Pablo on April 23, 2023 on The Effective Altruism Forum.Parfit: A Philosopher and His Mission to Save Morality, by David Edmonds, was published a few days ago (Amazon | Audible | Goodreads | Oxford).The book is worth reading in full, but here are some passages that focus specifically on Parfit's involvement with effective altruism:In his retirement, an organization with which he felt a natural affinity approached him for support. The Oxford-based charity Giving What We Can (GWWC) was set up by Toby Ord, an Australian-born philosopher, and another young philosopher, Will MacAskill. The charity's direct inspiration came from Peter Singer. Singer had devised a thought experiment that was simple but devastatingly effective in making people reflect on their behavior and their attitude to those in need. [...]This thought experiment has generated a significant secondary literature. Certainly, the GWWC initiators found it compelling. Their mission was to persuade people to give more of their income away and to donate to effective organizations that could make a real difference. [...]There were spinoff organizations from GWWC such as 80,000 Hours. The number refers to the rough number of hours we might have in our career, and 80,000 Hours was set up to research how people can most effectively devote their time rather than their money to tackling the world's most pressing problems. In 2012, the Centre for Effective Altruism was established to incorporate both GWWC and 80,000 Hours.Since its launch, the effective altruism movement has grown slowly but steadily. Most of the early backers were idealistic young postgraduates, many of them philosophers. If Singer was the intellectual father of the movement, Parfit was its grandfather. It became an in-joke among some members that anyone who came to work for GWWC had to possess a copy of Reasons and Persons. Some owned two copies: one for home, one for the office. But it took Parfit until 2014 to sign the GWWC pledge. And he agreed to do so only after wrangling over the wording.Initially, those who joined the GWWC campaign were required to make a public pledge to donate at least 10% of their income to charities that worked to relieve poverty. Parfit had several issues with this. For reasons the organizers never understood, he said that the participants had to make a promise rather than a pledge. He may have believed that a promise entailed a deeper level of commitment. Nor was he keen on the name "Giving What We Can". 10% of a person's income is certainly a generous sum, and in line with what adherents to some world religions are expected to give away. Nevertheless, Parfit pointed out, it was obvious that people could donate more. [...]Parfit also caviled at the word 'giving'. He believed this implied we are morally entitled to what we hand over, and morally entitled to our wealth and high incomes. This he rejected. Well-off people in the developed world were merely lucky that they were born into rich societies: they did not deserve their fortune.Linguistic quibbles aside, the issue that Parfit felt most strongly about was the movement's sole focus, initially, on poverty and development. While it was indeed pressing to relieve the suffering of people living today, Parfit argued, there should be an option that at least some of the money donated be earmarked for the problems of tomorrow. The human population has risen to a billion, and faces existential risks such as meteors, nuclear war, bioterrorism, pandemics, and climate change. Parfit claimed that between (A) peace, (B) a war which kills 7.5 billion people and (C) a war which killed everyone, the difference between (B) and (C) was much greater than the difference between (A) and (B). [...]Given how grim human exist...]]>
Pablo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:14 None full 5706
EqtGTmxrahdt3LKky_NL_EA_EA EA - Org Proposal: Effective Foundations by Kyle Smith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Org Proposal: Effective Foundations, published by Kyle Smith on April 22, 2023 on The Effective Altruism Forum.Summary:In this post I propose an organization that consults with US private foundations to encourage more effective grantmaking. The general consensus of EA’s I’ve spoked to working in this area is that existing private foundations are simply too intractable to be worthy of investment. However, I propose the usage of US private foundation tax filings to identify more-tractable private foundations based on several criteria. The hope is that this data approach allows this idea to cross the line from too-intractable to tractable-enough-for-investment.Motivation:Since 2011, GiveWell has directed $1 billion in effective gifts. Private foundations in the US hold over $1.1T and give ~$70B+ a year in grants, likely to largely ineffective charities. If even a small portion of these private foundations are tractable, the amount of grants redirected toward effective charities could be quite large. Impact will be measurable as engagement outcomes.Tractability:Tractability is the key challenge of this idea. From speaking with a few people in related orgs, the consensus seems to be that existing foundations are entrenched in their processes and mission and are extremely unlikely to be persuaded to change. For example, Charity Entrepreneurship mostly works with new private foundations for this reason.I propose that the usage of private foundation tax returns (990-PF) may point toward tractability of even existing private foundations. A common response has been that foundation age is a critical factor, so starting with new foundations makes sense. Additional factors may be significant predictors of tractability (already give to international grantees, already give to highly effective charities, give to a diverse set of cause areas or cause areas associated with EA, information about their employees [low employees/assets may imply a smaller, more persuadable management structure]). It is possible that foundation age is the only significant predictor, and if so, solicitations could be mostly targeted toward brand new foundations, and these other factors could be used to create more personalized solicitations, potentially leading to greater tractability.What would the org actually do?Develop a process of using US private foundation tax filings to identify tractable private foundations to solicitMake a targeted solicitation for engagements with tractable private foundationsIf successful, establish engagements similar to those done by Effective Giving and Longview PhilanthropyEngagements with new foundations could also be based on the work that Charity Entrepreneurship has done in working with new foundations.Challenges:The primary downside risk of this approach (other than wasting money on something not tractable) is that foundations may not wish to be solicited, and will sour on EA as a result.A major emphasis should be placed on ensuring the solicitations are respectful. A light touch is likely necessary!If the foundations do sour on EA, it is likely any attempt in the future to persuade them would fail anyway.The data process and solicitations (unless personalized) are relatively low-cost. A robust team for actual consulting would be fairly expensive.I am an academic researcher and would need to bring in co-founders/partners who have experience in conducting engagements of this natureFour potential outcomes I envision:The data approach is unsuccessful in identifying tractable foundations and ultimately no progress is made.The data approach is somewhat successful in identifying tractable foundations, and the optimal path forward is operating as an outreach/warm lead generating organization which funnels into an existing EA consultancy.The data approach ...]]>
Kyle Smith https://forum.effectivealtruism.org/posts/EqtGTmxrahdt3LKky/org-proposal-effective-foundations Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Org Proposal: Effective Foundations, published by Kyle Smith on April 22, 2023 on The Effective Altruism Forum.Summary:In this post I propose an organization that consults with US private foundations to encourage more effective grantmaking. The general consensus of EA’s I’ve spoked to working in this area is that existing private foundations are simply too intractable to be worthy of investment. However, I propose the usage of US private foundation tax filings to identify more-tractable private foundations based on several criteria. The hope is that this data approach allows this idea to cross the line from too-intractable to tractable-enough-for-investment.Motivation:Since 2011, GiveWell has directed $1 billion in effective gifts. Private foundations in the US hold over $1.1T and give ~$70B+ a year in grants, likely to largely ineffective charities. If even a small portion of these private foundations are tractable, the amount of grants redirected toward effective charities could be quite large. Impact will be measurable as engagement outcomes.Tractability:Tractability is the key challenge of this idea. From speaking with a few people in related orgs, the consensus seems to be that existing foundations are entrenched in their processes and mission and are extremely unlikely to be persuaded to change. For example, Charity Entrepreneurship mostly works with new private foundations for this reason.I propose that the usage of private foundation tax returns (990-PF) may point toward tractability of even existing private foundations. A common response has been that foundation age is a critical factor, so starting with new foundations makes sense. Additional factors may be significant predictors of tractability (already give to international grantees, already give to highly effective charities, give to a diverse set of cause areas or cause areas associated with EA, information about their employees [low employees/assets may imply a smaller, more persuadable management structure]). It is possible that foundation age is the only significant predictor, and if so, solicitations could be mostly targeted toward brand new foundations, and these other factors could be used to create more personalized solicitations, potentially leading to greater tractability.What would the org actually do?Develop a process of using US private foundation tax filings to identify tractable private foundations to solicitMake a targeted solicitation for engagements with tractable private foundationsIf successful, establish engagements similar to those done by Effective Giving and Longview PhilanthropyEngagements with new foundations could also be based on the work that Charity Entrepreneurship has done in working with new foundations.Challenges:The primary downside risk of this approach (other than wasting money on something not tractable) is that foundations may not wish to be solicited, and will sour on EA as a result.A major emphasis should be placed on ensuring the solicitations are respectful. A light touch is likely necessary!If the foundations do sour on EA, it is likely any attempt in the future to persuade them would fail anyway.The data process and solicitations (unless personalized) are relatively low-cost. A robust team for actual consulting would be fairly expensive.I am an academic researcher and would need to bring in co-founders/partners who have experience in conducting engagements of this natureFour potential outcomes I envision:The data approach is unsuccessful in identifying tractable foundations and ultimately no progress is made.The data approach is somewhat successful in identifying tractable foundations, and the optimal path forward is operating as an outreach/warm lead generating organization which funnels into an existing EA consultancy.The data approach ...]]>
Sun, 23 Apr 2023 09:39:28 +0000 EA - Org Proposal: Effective Foundations by Kyle Smith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Org Proposal: Effective Foundations, published by Kyle Smith on April 22, 2023 on The Effective Altruism Forum.Summary:In this post I propose an organization that consults with US private foundations to encourage more effective grantmaking. The general consensus of EA’s I’ve spoked to working in this area is that existing private foundations are simply too intractable to be worthy of investment. However, I propose the usage of US private foundation tax filings to identify more-tractable private foundations based on several criteria. The hope is that this data approach allows this idea to cross the line from too-intractable to tractable-enough-for-investment.Motivation:Since 2011, GiveWell has directed $1 billion in effective gifts. Private foundations in the US hold over $1.1T and give ~$70B+ a year in grants, likely to largely ineffective charities. If even a small portion of these private foundations are tractable, the amount of grants redirected toward effective charities could be quite large. Impact will be measurable as engagement outcomes.Tractability:Tractability is the key challenge of this idea. From speaking with a few people in related orgs, the consensus seems to be that existing foundations are entrenched in their processes and mission and are extremely unlikely to be persuaded to change. For example, Charity Entrepreneurship mostly works with new private foundations for this reason.I propose that the usage of private foundation tax returns (990-PF) may point toward tractability of even existing private foundations. A common response has been that foundation age is a critical factor, so starting with new foundations makes sense. Additional factors may be significant predictors of tractability (already give to international grantees, already give to highly effective charities, give to a diverse set of cause areas or cause areas associated with EA, information about their employees [low employees/assets may imply a smaller, more persuadable management structure]). It is possible that foundation age is the only significant predictor, and if so, solicitations could be mostly targeted toward brand new foundations, and these other factors could be used to create more personalized solicitations, potentially leading to greater tractability.What would the org actually do?Develop a process of using US private foundation tax filings to identify tractable private foundations to solicitMake a targeted solicitation for engagements with tractable private foundationsIf successful, establish engagements similar to those done by Effective Giving and Longview PhilanthropyEngagements with new foundations could also be based on the work that Charity Entrepreneurship has done in working with new foundations.Challenges:The primary downside risk of this approach (other than wasting money on something not tractable) is that foundations may not wish to be solicited, and will sour on EA as a result.A major emphasis should be placed on ensuring the solicitations are respectful. A light touch is likely necessary!If the foundations do sour on EA, it is likely any attempt in the future to persuade them would fail anyway.The data process and solicitations (unless personalized) are relatively low-cost. A robust team for actual consulting would be fairly expensive.I am an academic researcher and would need to bring in co-founders/partners who have experience in conducting engagements of this natureFour potential outcomes I envision:The data approach is unsuccessful in identifying tractable foundations and ultimately no progress is made.The data approach is somewhat successful in identifying tractable foundations, and the optimal path forward is operating as an outreach/warm lead generating organization which funnels into an existing EA consultancy.The data approach ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Org Proposal: Effective Foundations, published by Kyle Smith on April 22, 2023 on The Effective Altruism Forum.Summary:In this post I propose an organization that consults with US private foundations to encourage more effective grantmaking. The general consensus of EA’s I’ve spoked to working in this area is that existing private foundations are simply too intractable to be worthy of investment. However, I propose the usage of US private foundation tax filings to identify more-tractable private foundations based on several criteria. The hope is that this data approach allows this idea to cross the line from too-intractable to tractable-enough-for-investment.Motivation:Since 2011, GiveWell has directed $1 billion in effective gifts. Private foundations in the US hold over $1.1T and give ~$70B+ a year in grants, likely to largely ineffective charities. If even a small portion of these private foundations are tractable, the amount of grants redirected toward effective charities could be quite large. Impact will be measurable as engagement outcomes.Tractability:Tractability is the key challenge of this idea. From speaking with a few people in related orgs, the consensus seems to be that existing foundations are entrenched in their processes and mission and are extremely unlikely to be persuaded to change. For example, Charity Entrepreneurship mostly works with new private foundations for this reason.I propose that the usage of private foundation tax returns (990-PF) may point toward tractability of even existing private foundations. A common response has been that foundation age is a critical factor, so starting with new foundations makes sense. Additional factors may be significant predictors of tractability (already give to international grantees, already give to highly effective charities, give to a diverse set of cause areas or cause areas associated with EA, information about their employees [low employees/assets may imply a smaller, more persuadable management structure]). It is possible that foundation age is the only significant predictor, and if so, solicitations could be mostly targeted toward brand new foundations, and these other factors could be used to create more personalized solicitations, potentially leading to greater tractability.What would the org actually do?Develop a process of using US private foundation tax filings to identify tractable private foundations to solicitMake a targeted solicitation for engagements with tractable private foundationsIf successful, establish engagements similar to those done by Effective Giving and Longview PhilanthropyEngagements with new foundations could also be based on the work that Charity Entrepreneurship has done in working with new foundations.Challenges:The primary downside risk of this approach (other than wasting money on something not tractable) is that foundations may not wish to be solicited, and will sour on EA as a result.A major emphasis should be placed on ensuring the solicitations are respectful. A light touch is likely necessary!If the foundations do sour on EA, it is likely any attempt in the future to persuade them would fail anyway.The data process and solicitations (unless personalized) are relatively low-cost. A robust team for actual consulting would be fairly expensive.I am an academic researcher and would need to bring in co-founders/partners who have experience in conducting engagements of this natureFour potential outcomes I envision:The data approach is unsuccessful in identifying tractable foundations and ultimately no progress is made.The data approach is somewhat successful in identifying tractable foundations, and the optimal path forward is operating as an outreach/warm lead generating organization which funnels into an existing EA consultancy.The data approach ...]]>
Kyle Smith https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:55 None full 5703
idjzaqfGguEAaC34j_NL_EA_EA EA - If your AGI x-risk estimates are low, what scenarios make up the bulk of your expectations for an OK outcome? by Greg Colbourn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If your AGI x-risk estimates are low, what scenarios make up the bulk of your expectations for an OK outcome?, published by Greg Colbourn on April 21, 2023 on The Effective Altruism Forum.There seem to be two main framings emerging from recent AGI x-risk discussion: default doom, given AGI, and default we're fine, given AGI.I'm interested in what people who have low p(doom|AGI) think are the reasons that things will basically be fine once we have AGI (or TAI, PASTA, ASI). What mechanisms are at play? How is alignment solved so that there are 0 failure modes? Can we survive despite imperfect alignment? How? Is alignment moot? Will physical limits be reached before there is too much danger?If you have high enough p(doom|AGI) to be very concerned, but you're still only at ~1-10%, what is happening in the other 90-99%?Added 22Apr: I'm also interested in detailed scenarios and stories, spelling out how things go right post-AGI. There are plenty of stories and scenarios illustrating doom. Where are the similar stories illustrating how things go right? There is the FLI World Building Contest, but that took place in the pre-GPT-4+AutoGPT era. The winning entry has everyone acting far too sensibly in terms of self-regulation and restraint.I think we can now say, given the fervour over AutoGPT, that this will not happen, with high likelihood.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Greg Colbourn https://forum.effectivealtruism.org/posts/idjzaqfGguEAaC34j/if-your-agi-x-risk-estimates-are-low-what-scenarios-make-up Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If your AGI x-risk estimates are low, what scenarios make up the bulk of your expectations for an OK outcome?, published by Greg Colbourn on April 21, 2023 on The Effective Altruism Forum.There seem to be two main framings emerging from recent AGI x-risk discussion: default doom, given AGI, and default we're fine, given AGI.I'm interested in what people who have low p(doom|AGI) think are the reasons that things will basically be fine once we have AGI (or TAI, PASTA, ASI). What mechanisms are at play? How is alignment solved so that there are 0 failure modes? Can we survive despite imperfect alignment? How? Is alignment moot? Will physical limits be reached before there is too much danger?If you have high enough p(doom|AGI) to be very concerned, but you're still only at ~1-10%, what is happening in the other 90-99%?Added 22Apr: I'm also interested in detailed scenarios and stories, spelling out how things go right post-AGI. There are plenty of stories and scenarios illustrating doom. Where are the similar stories illustrating how things go right? There is the FLI World Building Contest, but that took place in the pre-GPT-4+AutoGPT era. The winning entry has everyone acting far too sensibly in terms of self-regulation and restraint.I think we can now say, given the fervour over AutoGPT, that this will not happen, with high likelihood.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 22 Apr 2023 15:15:49 +0000 EA - If your AGI x-risk estimates are low, what scenarios make up the bulk of your expectations for an OK outcome? by Greg Colbourn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If your AGI x-risk estimates are low, what scenarios make up the bulk of your expectations for an OK outcome?, published by Greg Colbourn on April 21, 2023 on The Effective Altruism Forum.There seem to be two main framings emerging from recent AGI x-risk discussion: default doom, given AGI, and default we're fine, given AGI.I'm interested in what people who have low p(doom|AGI) think are the reasons that things will basically be fine once we have AGI (or TAI, PASTA, ASI). What mechanisms are at play? How is alignment solved so that there are 0 failure modes? Can we survive despite imperfect alignment? How? Is alignment moot? Will physical limits be reached before there is too much danger?If you have high enough p(doom|AGI) to be very concerned, but you're still only at ~1-10%, what is happening in the other 90-99%?Added 22Apr: I'm also interested in detailed scenarios and stories, spelling out how things go right post-AGI. There are plenty of stories and scenarios illustrating doom. Where are the similar stories illustrating how things go right? There is the FLI World Building Contest, but that took place in the pre-GPT-4+AutoGPT era. The winning entry has everyone acting far too sensibly in terms of self-regulation and restraint.I think we can now say, given the fervour over AutoGPT, that this will not happen, with high likelihood.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If your AGI x-risk estimates are low, what scenarios make up the bulk of your expectations for an OK outcome?, published by Greg Colbourn on April 21, 2023 on The Effective Altruism Forum.There seem to be two main framings emerging from recent AGI x-risk discussion: default doom, given AGI, and default we're fine, given AGI.I'm interested in what people who have low p(doom|AGI) think are the reasons that things will basically be fine once we have AGI (or TAI, PASTA, ASI). What mechanisms are at play? How is alignment solved so that there are 0 failure modes? Can we survive despite imperfect alignment? How? Is alignment moot? Will physical limits be reached before there is too much danger?If you have high enough p(doom|AGI) to be very concerned, but you're still only at ~1-10%, what is happening in the other 90-99%?Added 22Apr: I'm also interested in detailed scenarios and stories, spelling out how things go right post-AGI. There are plenty of stories and scenarios illustrating doom. Where are the similar stories illustrating how things go right? There is the FLI World Building Contest, but that took place in the pre-GPT-4+AutoGPT era. The winning entry has everyone acting far too sensibly in terms of self-regulation and restraint.I think we can now say, given the fervour over AutoGPT, that this will not happen, with high likelihood.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Greg Colbourn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:38 None full 5704
So9nbufgNuQBjNkTG_NL_EA_EA EA - 500 Million, But Not A Single One More - The Animation by Writer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 500 Million, But Not A Single One More - The Animation, published by Writer on April 21, 2023 on The Effective Altruism Forum.This video is an animation of 500 Million, But Not A Single One More, by @jaiSome notes for the curious about the cult of Sopona, the god of Smallpox of the Yoruba peopleWhen I was getting feedback on the storyboard from Jai, he mentioned that there is an actual god of Smallpox: Sopona. We decided to use it to lightly inspire our design by adding a mask similar to the ones found in wooden statues representing the god.Apparently, in Nigeria there were cults of Sopona that strongly opposed vaccination. At the bottom-left paragraph at this link, there's an interesting story. Dr. O. Sapara, a Nigerian physician, infiltrated "the sopono cult", described as a large secret society in western Nigeria that used to cause smallpox outbreaks to generate clients to "cure" and to follow-up on blackmails. Sapara infiltrated the society and helped the government ban the cult as an illegal organization. After the cult was banned worship of the God continued.Jai rightly mentioned the priests of the cult as a striking example of the most hollow humans allying with the enemy.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Writer https://forum.effectivealtruism.org/posts/So9nbufgNuQBjNkTG/500-million-but-not-a-single-one-more-the-animation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 500 Million, But Not A Single One More - The Animation, published by Writer on April 21, 2023 on The Effective Altruism Forum.This video is an animation of 500 Million, But Not A Single One More, by @jaiSome notes for the curious about the cult of Sopona, the god of Smallpox of the Yoruba peopleWhen I was getting feedback on the storyboard from Jai, he mentioned that there is an actual god of Smallpox: Sopona. We decided to use it to lightly inspire our design by adding a mask similar to the ones found in wooden statues representing the god.Apparently, in Nigeria there were cults of Sopona that strongly opposed vaccination. At the bottom-left paragraph at this link, there's an interesting story. Dr. O. Sapara, a Nigerian physician, infiltrated "the sopono cult", described as a large secret society in western Nigeria that used to cause smallpox outbreaks to generate clients to "cure" and to follow-up on blackmails. Sapara infiltrated the society and helped the government ban the cult as an illegal organization. After the cult was banned worship of the God continued.Jai rightly mentioned the priests of the cult as a striking example of the most hollow humans allying with the enemy.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 22 Apr 2023 00:36:43 +0000 EA - 500 Million, But Not A Single One More - The Animation by Writer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 500 Million, But Not A Single One More - The Animation, published by Writer on April 21, 2023 on The Effective Altruism Forum.This video is an animation of 500 Million, But Not A Single One More, by @jaiSome notes for the curious about the cult of Sopona, the god of Smallpox of the Yoruba peopleWhen I was getting feedback on the storyboard from Jai, he mentioned that there is an actual god of Smallpox: Sopona. We decided to use it to lightly inspire our design by adding a mask similar to the ones found in wooden statues representing the god.Apparently, in Nigeria there were cults of Sopona that strongly opposed vaccination. At the bottom-left paragraph at this link, there's an interesting story. Dr. O. Sapara, a Nigerian physician, infiltrated "the sopono cult", described as a large secret society in western Nigeria that used to cause smallpox outbreaks to generate clients to "cure" and to follow-up on blackmails. Sapara infiltrated the society and helped the government ban the cult as an illegal organization. After the cult was banned worship of the God continued.Jai rightly mentioned the priests of the cult as a striking example of the most hollow humans allying with the enemy.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 500 Million, But Not A Single One More - The Animation, published by Writer on April 21, 2023 on The Effective Altruism Forum.This video is an animation of 500 Million, But Not A Single One More, by @jaiSome notes for the curious about the cult of Sopona, the god of Smallpox of the Yoruba peopleWhen I was getting feedback on the storyboard from Jai, he mentioned that there is an actual god of Smallpox: Sopona. We decided to use it to lightly inspire our design by adding a mask similar to the ones found in wooden statues representing the god.Apparently, in Nigeria there were cults of Sopona that strongly opposed vaccination. At the bottom-left paragraph at this link, there's an interesting story. Dr. O. Sapara, a Nigerian physician, infiltrated "the sopono cult", described as a large secret society in western Nigeria that used to cause smallpox outbreaks to generate clients to "cure" and to follow-up on blackmails. Sapara infiltrated the society and helped the government ban the cult as an illegal organization. After the cult was banned worship of the God continued.Jai rightly mentioned the priests of the cult as a striking example of the most hollow humans allying with the enemy.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Writer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:21 None full 5697
QNxbpEDvpsMAN3ELF_NL_EA_EA EA - DeepMind and Google Brain are merging [Linkpost] by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: DeepMind and Google Brain are merging [Linkpost], published by Akash on April 20, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/QNxbpEDvpsMAN3ELF/deepmind-and-google-brain-are-merging-linkpost Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: DeepMind and Google Brain are merging [Linkpost], published by Akash on April 20, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 21 Apr 2023 21:23:50 +0000 EA - DeepMind and Google Brain are merging [Linkpost] by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: DeepMind and Google Brain are merging [Linkpost], published by Akash on April 20, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: DeepMind and Google Brain are merging [Linkpost], published by Akash on April 20, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:22 None full 5693
B9CdrCqQoZFRRMrNu_NL_EA_EA EA - Leaked EU Draft Proposes Substantial Animal Welfare Improvements by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Leaked EU Draft Proposes Substantial Animal Welfare Improvements, published by Ben West on April 21, 2023 on The Effective Altruism Forum.Phase out of cages for all speciesIncrease space allowance for all speciesBan the systematic culling of male chicksIntroduce welfare requirements for the stunning of farmed fishBan cruel slaughter practices like water baths and CO2 for poultry and pigsBan mutilations, like beak trimming, tail docking, dehorning or surgical castration of pigsLimited journey times for the transport of animals destined to slaughterApply the EU’s standards to imported animal products in a way that is compatible with WTO rulesIt claims the document was leaked by Agra Facts, which is a subscription service I don't have access to. Confirmation of what was actually included from someone who has access to the underlying document would be appreciated.Metaculus forecast for the cage free portion:Previous discussion:EU Food Agency Recommends Banning CagesAn End to Cages in Europe?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben West https://forum.effectivealtruism.org/posts/B9CdrCqQoZFRRMrNu/leaked-eu-draft-proposes-substantial-animal-welfare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Leaked EU Draft Proposes Substantial Animal Welfare Improvements, published by Ben West on April 21, 2023 on The Effective Altruism Forum.Phase out of cages for all speciesIncrease space allowance for all speciesBan the systematic culling of male chicksIntroduce welfare requirements for the stunning of farmed fishBan cruel slaughter practices like water baths and CO2 for poultry and pigsBan mutilations, like beak trimming, tail docking, dehorning or surgical castration of pigsLimited journey times for the transport of animals destined to slaughterApply the EU’s standards to imported animal products in a way that is compatible with WTO rulesIt claims the document was leaked by Agra Facts, which is a subscription service I don't have access to. Confirmation of what was actually included from someone who has access to the underlying document would be appreciated.Metaculus forecast for the cage free portion:Previous discussion:EU Food Agency Recommends Banning CagesAn End to Cages in Europe?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 21 Apr 2023 20:54:24 +0000 EA - Leaked EU Draft Proposes Substantial Animal Welfare Improvements by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Leaked EU Draft Proposes Substantial Animal Welfare Improvements, published by Ben West on April 21, 2023 on The Effective Altruism Forum.Phase out of cages for all speciesIncrease space allowance for all speciesBan the systematic culling of male chicksIntroduce welfare requirements for the stunning of farmed fishBan cruel slaughter practices like water baths and CO2 for poultry and pigsBan mutilations, like beak trimming, tail docking, dehorning or surgical castration of pigsLimited journey times for the transport of animals destined to slaughterApply the EU’s standards to imported animal products in a way that is compatible with WTO rulesIt claims the document was leaked by Agra Facts, which is a subscription service I don't have access to. Confirmation of what was actually included from someone who has access to the underlying document would be appreciated.Metaculus forecast for the cage free portion:Previous discussion:EU Food Agency Recommends Banning CagesAn End to Cages in Europe?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Leaked EU Draft Proposes Substantial Animal Welfare Improvements, published by Ben West on April 21, 2023 on The Effective Altruism Forum.Phase out of cages for all speciesIncrease space allowance for all speciesBan the systematic culling of male chicksIntroduce welfare requirements for the stunning of farmed fishBan cruel slaughter practices like water baths and CO2 for poultry and pigsBan mutilations, like beak trimming, tail docking, dehorning or surgical castration of pigsLimited journey times for the transport of animals destined to slaughterApply the EU’s standards to imported animal products in a way that is compatible with WTO rulesIt claims the document was leaked by Agra Facts, which is a subscription service I don't have access to. Confirmation of what was actually included from someone who has access to the underlying document would be appreciated.Metaculus forecast for the cage free portion:Previous discussion:EU Food Agency Recommends Banning CagesAn End to Cages in Europe?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben West https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:14 None full 5689
QZssRZHv7yLDnyyHs_NL_EA_EA EA - Animal Charity Evaluators Is Seeking Intervention Effectiveness Research and Cost-Effectiveness Estimates by Animal Charity Evaluators Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Charity Evaluators Is Seeking Intervention Effectiveness Research and Cost-Effectiveness Estimates, published by Animal Charity Evaluators on April 21, 2023 on The Effective Altruism Forum.Hello EA Forum,As Animal Charity Evaluators prepares for our 2023 charity evaluation cycle, we are gathering relevant research on the effectiveness of different animal advocacy interventions to update our beliefs and guide our assessments and recommendation decisions. To this end, we have compiled a list of references, including literature reviews, books, peer-reviewed articles, reports, and other non-academic content that might offer relevant insights into the outcomes of interventions. You can view the list here.We are especially interested in research into the effectiveness of corporate litigation work, providing or influencing funding, making podcasts, offering recruitment services, and running vegan or vegetarian events. If you know of any publications on these or other relevant topics, please let us know in the comments.Additionally, we are compiling a list of existing cost-effectiveness estimates. This includes, but is not limited to, interventions aimed at reducing farmed animal suffering, institutional and individual vegan outreach, advocating for better animal welfare policies, or supporting research into alternatives to animal products. We have started a list from a literature search and a scan of the EA Forum, which you can view in this spreadsheet. If you are aware of any additional estimates or are currently working on one, we would appreciate you letting us know.We will review responses every three business days until Friday, May 5, 2023.Thank you in advance for your help.Best regards,Alina SalmenResearcherAnimal Charity EvaluatorsThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Animal Charity Evaluators https://forum.effectivealtruism.org/posts/QZssRZHv7yLDnyyHs/animal-charity-evaluators-is-seeking-intervention Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Charity Evaluators Is Seeking Intervention Effectiveness Research and Cost-Effectiveness Estimates, published by Animal Charity Evaluators on April 21, 2023 on The Effective Altruism Forum.Hello EA Forum,As Animal Charity Evaluators prepares for our 2023 charity evaluation cycle, we are gathering relevant research on the effectiveness of different animal advocacy interventions to update our beliefs and guide our assessments and recommendation decisions. To this end, we have compiled a list of references, including literature reviews, books, peer-reviewed articles, reports, and other non-academic content that might offer relevant insights into the outcomes of interventions. You can view the list here.We are especially interested in research into the effectiveness of corporate litigation work, providing or influencing funding, making podcasts, offering recruitment services, and running vegan or vegetarian events. If you know of any publications on these or other relevant topics, please let us know in the comments.Additionally, we are compiling a list of existing cost-effectiveness estimates. This includes, but is not limited to, interventions aimed at reducing farmed animal suffering, institutional and individual vegan outreach, advocating for better animal welfare policies, or supporting research into alternatives to animal products. We have started a list from a literature search and a scan of the EA Forum, which you can view in this spreadsheet. If you are aware of any additional estimates or are currently working on one, we would appreciate you letting us know.We will review responses every three business days until Friday, May 5, 2023.Thank you in advance for your help.Best regards,Alina SalmenResearcherAnimal Charity EvaluatorsThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 21 Apr 2023 14:37:15 +0000 EA - Animal Charity Evaluators Is Seeking Intervention Effectiveness Research and Cost-Effectiveness Estimates by Animal Charity Evaluators Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Charity Evaluators Is Seeking Intervention Effectiveness Research and Cost-Effectiveness Estimates, published by Animal Charity Evaluators on April 21, 2023 on The Effective Altruism Forum.Hello EA Forum,As Animal Charity Evaluators prepares for our 2023 charity evaluation cycle, we are gathering relevant research on the effectiveness of different animal advocacy interventions to update our beliefs and guide our assessments and recommendation decisions. To this end, we have compiled a list of references, including literature reviews, books, peer-reviewed articles, reports, and other non-academic content that might offer relevant insights into the outcomes of interventions. You can view the list here.We are especially interested in research into the effectiveness of corporate litigation work, providing or influencing funding, making podcasts, offering recruitment services, and running vegan or vegetarian events. If you know of any publications on these or other relevant topics, please let us know in the comments.Additionally, we are compiling a list of existing cost-effectiveness estimates. This includes, but is not limited to, interventions aimed at reducing farmed animal suffering, institutional and individual vegan outreach, advocating for better animal welfare policies, or supporting research into alternatives to animal products. We have started a list from a literature search and a scan of the EA Forum, which you can view in this spreadsheet. If you are aware of any additional estimates or are currently working on one, we would appreciate you letting us know.We will review responses every three business days until Friday, May 5, 2023.Thank you in advance for your help.Best regards,Alina SalmenResearcherAnimal Charity EvaluatorsThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Charity Evaluators Is Seeking Intervention Effectiveness Research and Cost-Effectiveness Estimates, published by Animal Charity Evaluators on April 21, 2023 on The Effective Altruism Forum.Hello EA Forum,As Animal Charity Evaluators prepares for our 2023 charity evaluation cycle, we are gathering relevant research on the effectiveness of different animal advocacy interventions to update our beliefs and guide our assessments and recommendation decisions. To this end, we have compiled a list of references, including literature reviews, books, peer-reviewed articles, reports, and other non-academic content that might offer relevant insights into the outcomes of interventions. You can view the list here.We are especially interested in research into the effectiveness of corporate litigation work, providing or influencing funding, making podcasts, offering recruitment services, and running vegan or vegetarian events. If you know of any publications on these or other relevant topics, please let us know in the comments.Additionally, we are compiling a list of existing cost-effectiveness estimates. This includes, but is not limited to, interventions aimed at reducing farmed animal suffering, institutional and individual vegan outreach, advocating for better animal welfare policies, or supporting research into alternatives to animal products. We have started a list from a literature search and a scan of the EA Forum, which you can view in this spreadsheet. If you are aware of any additional estimates or are currently working on one, we would appreciate you letting us know.We will review responses every three business days until Friday, May 5, 2023.Thank you in advance for your help.Best regards,Alina SalmenResearcherAnimal Charity EvaluatorsThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Animal Charity Evaluators https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:51 None full 5691
fDCSyxi2uiEe2xpJ2_NL_EA_EA EA - Talent Directory for EA Hiring Managers/Recruiters and Job Seekers by High Impact Professionals Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talent Directory for EA Hiring Managers/Recruiters and Job Seekers, published by High Impact Professionals on April 21, 2023 on The Effective Altruism Forum.SummaryHigh Impact Professionals (HIP) has launched a Talent Directory with ~600 impact-oriented candidates.For EA-aligned organizations:Apply to our Recruiter Portal to access more candidates and features than the general public view.Access our general public directory of talented, impact-focused professionals.Find board members from a dedicated directory of experienced professionals interested in joining the board of an impactful, EA-aligned organization.Find volunteers for your project.Find collaborators for your project.For impact-focused job seekers:Sign up to the Talent Directory for greater exposure to hiring organizations and recruiters.Find co-founders/collaborators for your project.IntroHIP has launched a Talent Directory of ~600 impact-oriented candidates in order to assist both recruiters/hiring managers at EA-aligned organizations as well as job seekers looking to transition to a higher impact role at such organizations.For EA-Aligned OrganizationsWe intend for the Talent Directory to be a valuable resource allowing recruiters and hiring managers to filter for candidates that align with a role’s criteria. The directory currently includes a wide variety of pertinent information like candidates’ cause areas of interest, years of work experience, amount of money managed, number of staff managed, LinkedIn/CV, time spent engaging with EA, availability, and more. We are also collecting more candidate information to roll out additional useful filtering options in the future.Moreover, the directory will serve as a collection of go-to resources curated to support a variety of organizations’ needs:a Recruiter Portal with access to a password-protected directory containing more candidates and features than the general public view – organizations/recruiters can apply for access;a general public directory featuring hundreds of talented, impact-focused candidates for your open roles;a directory of professionals interested in serving on the board of an impactful, EA-aligned organization;a directory of professionals interested in volunteering for an impactful, EA-aligned project; anda directory of professionals who are looking for collaborators to start an impactful, EA-aligned project with.For Impact-Focused Job SeekersAs an update to our prior announcement, the Talent Directory is positioned to increase the visibility of your profile as, with your consent, it will be proactively shared with organizations and shared publicly on HIP’s website. We already have over 20 organizations using the directory, and we are working this quarter to grow that engagement several times over.If you are looking to transition your career to higher impact opportunities, please sign up to the Talent Directory to ensure you are in front of organizations that are looking for candidates like you.Also, if you are interested in finding someone to start an impactful, EA-aligned organization or project with, you can find potential co-founders/collaborators.We Want Your FeedbackHIP’s goal is to make these resources as usable and valuable as possible. So, whether you’re an organization/recruiter or a jobseeker, we welcome your feedback and ideas for improvement. Please let us know your critiques, be they positive or (even better) negative, via this feedback form or in the comments below.Spread the WordLastly, if you know any organizations or individuals who would benefit from hearing about the Talent Directory, please spread the word.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
High Impact Professionals https://forum.effectivealtruism.org/posts/fDCSyxi2uiEe2xpJ2/talent-directory-for-ea-hiring-managers-recruiters-and-job Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talent Directory for EA Hiring Managers/Recruiters and Job Seekers, published by High Impact Professionals on April 21, 2023 on The Effective Altruism Forum.SummaryHigh Impact Professionals (HIP) has launched a Talent Directory with ~600 impact-oriented candidates.For EA-aligned organizations:Apply to our Recruiter Portal to access more candidates and features than the general public view.Access our general public directory of talented, impact-focused professionals.Find board members from a dedicated directory of experienced professionals interested in joining the board of an impactful, EA-aligned organization.Find volunteers for your project.Find collaborators for your project.For impact-focused job seekers:Sign up to the Talent Directory for greater exposure to hiring organizations and recruiters.Find co-founders/collaborators for your project.IntroHIP has launched a Talent Directory of ~600 impact-oriented candidates in order to assist both recruiters/hiring managers at EA-aligned organizations as well as job seekers looking to transition to a higher impact role at such organizations.For EA-Aligned OrganizationsWe intend for the Talent Directory to be a valuable resource allowing recruiters and hiring managers to filter for candidates that align with a role’s criteria. The directory currently includes a wide variety of pertinent information like candidates’ cause areas of interest, years of work experience, amount of money managed, number of staff managed, LinkedIn/CV, time spent engaging with EA, availability, and more. We are also collecting more candidate information to roll out additional useful filtering options in the future.Moreover, the directory will serve as a collection of go-to resources curated to support a variety of organizations’ needs:a Recruiter Portal with access to a password-protected directory containing more candidates and features than the general public view – organizations/recruiters can apply for access;a general public directory featuring hundreds of talented, impact-focused candidates for your open roles;a directory of professionals interested in serving on the board of an impactful, EA-aligned organization;a directory of professionals interested in volunteering for an impactful, EA-aligned project; anda directory of professionals who are looking for collaborators to start an impactful, EA-aligned project with.For Impact-Focused Job SeekersAs an update to our prior announcement, the Talent Directory is positioned to increase the visibility of your profile as, with your consent, it will be proactively shared with organizations and shared publicly on HIP’s website. We already have over 20 organizations using the directory, and we are working this quarter to grow that engagement several times over.If you are looking to transition your career to higher impact opportunities, please sign up to the Talent Directory to ensure you are in front of organizations that are looking for candidates like you.Also, if you are interested in finding someone to start an impactful, EA-aligned organization or project with, you can find potential co-founders/collaborators.We Want Your FeedbackHIP’s goal is to make these resources as usable and valuable as possible. So, whether you’re an organization/recruiter or a jobseeker, we welcome your feedback and ideas for improvement. Please let us know your critiques, be they positive or (even better) negative, via this feedback form or in the comments below.Spread the WordLastly, if you know any organizations or individuals who would benefit from hearing about the Talent Directory, please spread the word.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 21 Apr 2023 10:57:55 +0000 EA - Talent Directory for EA Hiring Managers/Recruiters and Job Seekers by High Impact Professionals Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talent Directory for EA Hiring Managers/Recruiters and Job Seekers, published by High Impact Professionals on April 21, 2023 on The Effective Altruism Forum.SummaryHigh Impact Professionals (HIP) has launched a Talent Directory with ~600 impact-oriented candidates.For EA-aligned organizations:Apply to our Recruiter Portal to access more candidates and features than the general public view.Access our general public directory of talented, impact-focused professionals.Find board members from a dedicated directory of experienced professionals interested in joining the board of an impactful, EA-aligned organization.Find volunteers for your project.Find collaborators for your project.For impact-focused job seekers:Sign up to the Talent Directory for greater exposure to hiring organizations and recruiters.Find co-founders/collaborators for your project.IntroHIP has launched a Talent Directory of ~600 impact-oriented candidates in order to assist both recruiters/hiring managers at EA-aligned organizations as well as job seekers looking to transition to a higher impact role at such organizations.For EA-Aligned OrganizationsWe intend for the Talent Directory to be a valuable resource allowing recruiters and hiring managers to filter for candidates that align with a role’s criteria. The directory currently includes a wide variety of pertinent information like candidates’ cause areas of interest, years of work experience, amount of money managed, number of staff managed, LinkedIn/CV, time spent engaging with EA, availability, and more. We are also collecting more candidate information to roll out additional useful filtering options in the future.Moreover, the directory will serve as a collection of go-to resources curated to support a variety of organizations’ needs:a Recruiter Portal with access to a password-protected directory containing more candidates and features than the general public view – organizations/recruiters can apply for access;a general public directory featuring hundreds of talented, impact-focused candidates for your open roles;a directory of professionals interested in serving on the board of an impactful, EA-aligned organization;a directory of professionals interested in volunteering for an impactful, EA-aligned project; anda directory of professionals who are looking for collaborators to start an impactful, EA-aligned project with.For Impact-Focused Job SeekersAs an update to our prior announcement, the Talent Directory is positioned to increase the visibility of your profile as, with your consent, it will be proactively shared with organizations and shared publicly on HIP’s website. We already have over 20 organizations using the directory, and we are working this quarter to grow that engagement several times over.If you are looking to transition your career to higher impact opportunities, please sign up to the Talent Directory to ensure you are in front of organizations that are looking for candidates like you.Also, if you are interested in finding someone to start an impactful, EA-aligned organization or project with, you can find potential co-founders/collaborators.We Want Your FeedbackHIP’s goal is to make these resources as usable and valuable as possible. So, whether you’re an organization/recruiter or a jobseeker, we welcome your feedback and ideas for improvement. Please let us know your critiques, be they positive or (even better) negative, via this feedback form or in the comments below.Spread the WordLastly, if you know any organizations or individuals who would benefit from hearing about the Talent Directory, please spread the word.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talent Directory for EA Hiring Managers/Recruiters and Job Seekers, published by High Impact Professionals on April 21, 2023 on The Effective Altruism Forum.SummaryHigh Impact Professionals (HIP) has launched a Talent Directory with ~600 impact-oriented candidates.For EA-aligned organizations:Apply to our Recruiter Portal to access more candidates and features than the general public view.Access our general public directory of talented, impact-focused professionals.Find board members from a dedicated directory of experienced professionals interested in joining the board of an impactful, EA-aligned organization.Find volunteers for your project.Find collaborators for your project.For impact-focused job seekers:Sign up to the Talent Directory for greater exposure to hiring organizations and recruiters.Find co-founders/collaborators for your project.IntroHIP has launched a Talent Directory of ~600 impact-oriented candidates in order to assist both recruiters/hiring managers at EA-aligned organizations as well as job seekers looking to transition to a higher impact role at such organizations.For EA-Aligned OrganizationsWe intend for the Talent Directory to be a valuable resource allowing recruiters and hiring managers to filter for candidates that align with a role’s criteria. The directory currently includes a wide variety of pertinent information like candidates’ cause areas of interest, years of work experience, amount of money managed, number of staff managed, LinkedIn/CV, time spent engaging with EA, availability, and more. We are also collecting more candidate information to roll out additional useful filtering options in the future.Moreover, the directory will serve as a collection of go-to resources curated to support a variety of organizations’ needs:a Recruiter Portal with access to a password-protected directory containing more candidates and features than the general public view – organizations/recruiters can apply for access;a general public directory featuring hundreds of talented, impact-focused candidates for your open roles;a directory of professionals interested in serving on the board of an impactful, EA-aligned organization;a directory of professionals interested in volunteering for an impactful, EA-aligned project; anda directory of professionals who are looking for collaborators to start an impactful, EA-aligned project with.For Impact-Focused Job SeekersAs an update to our prior announcement, the Talent Directory is positioned to increase the visibility of your profile as, with your consent, it will be proactively shared with organizations and shared publicly on HIP’s website. We already have over 20 organizations using the directory, and we are working this quarter to grow that engagement several times over.If you are looking to transition your career to higher impact opportunities, please sign up to the Talent Directory to ensure you are in front of organizations that are looking for candidates like you.Also, if you are interested in finding someone to start an impactful, EA-aligned organization or project with, you can find potential co-founders/collaborators.We Want Your FeedbackHIP’s goal is to make these resources as usable and valuable as possible. So, whether you’re an organization/recruiter or a jobseeker, we welcome your feedback and ideas for improvement. Please let us know your critiques, be they positive or (even better) negative, via this feedback form or in the comments below.Spread the WordLastly, if you know any organizations or individuals who would benefit from hearing about the Talent Directory, please spread the word.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
High Impact Professionals https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:31 None full 5692
zaacFktf6WyrBCgPS_NL_EA_EA EA - High schoolers can apply to the Atlas Fellowship: $10k scholarship + 11-day program by ashleylin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High schoolers can apply to the Atlas Fellowship: $10k scholarship + 11-day program, published by ashleylin on April 20, 2023 on The Effective Altruism Forum.Linkpost for:SummaryWe’re running the second year of the Atlas Fellowship, a $10k scholarship and free 11-day program for high school students from across the world. I see it as a unique opportunity for talented young people to meet intellectual peers IRL and improve their thinking.The core program covers areas like epistemic rationality, markets, agency, mathematical modeling, and integrity. Electives include ML, AGI risk, game theory, ending poverty, climate change, and fun math puzzles.If you’re 19 or younger and haven’t started college, you can apply here by April 30! (Late applications due May 14.)If you know a promising high school student, please nominate them. If you want to help us, share a post in places where talented young people hang out, or give your personal take on why people should (or should not) apply.In case you’re interested in working with us: we’re hiring!(Full post)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ashleylin https://forum.effectivealtruism.org/posts/zaacFktf6WyrBCgPS/high-schoolers-can-apply-to-the-atlas-fellowship-usd10k-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High schoolers can apply to the Atlas Fellowship: $10k scholarship + 11-day program, published by ashleylin on April 20, 2023 on The Effective Altruism Forum.Linkpost for:SummaryWe’re running the second year of the Atlas Fellowship, a $10k scholarship and free 11-day program for high school students from across the world. I see it as a unique opportunity for talented young people to meet intellectual peers IRL and improve their thinking.The core program covers areas like epistemic rationality, markets, agency, mathematical modeling, and integrity. Electives include ML, AGI risk, game theory, ending poverty, climate change, and fun math puzzles.If you’re 19 or younger and haven’t started college, you can apply here by April 30! (Late applications due May 14.)If you know a promising high school student, please nominate them. If you want to help us, share a post in places where talented young people hang out, or give your personal take on why people should (or should not) apply.In case you’re interested in working with us: we’re hiring!(Full post)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 21 Apr 2023 03:41:39 +0000 EA - High schoolers can apply to the Atlas Fellowship: $10k scholarship + 11-day program by ashleylin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High schoolers can apply to the Atlas Fellowship: $10k scholarship + 11-day program, published by ashleylin on April 20, 2023 on The Effective Altruism Forum.Linkpost for:SummaryWe’re running the second year of the Atlas Fellowship, a $10k scholarship and free 11-day program for high school students from across the world. I see it as a unique opportunity for talented young people to meet intellectual peers IRL and improve their thinking.The core program covers areas like epistemic rationality, markets, agency, mathematical modeling, and integrity. Electives include ML, AGI risk, game theory, ending poverty, climate change, and fun math puzzles.If you’re 19 or younger and haven’t started college, you can apply here by April 30! (Late applications due May 14.)If you know a promising high school student, please nominate them. If you want to help us, share a post in places where talented young people hang out, or give your personal take on why people should (or should not) apply.In case you’re interested in working with us: we’re hiring!(Full post)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High schoolers can apply to the Atlas Fellowship: $10k scholarship + 11-day program, published by ashleylin on April 20, 2023 on The Effective Altruism Forum.Linkpost for:SummaryWe’re running the second year of the Atlas Fellowship, a $10k scholarship and free 11-day program for high school students from across the world. I see it as a unique opportunity for talented young people to meet intellectual peers IRL and improve their thinking.The core program covers areas like epistemic rationality, markets, agency, mathematical modeling, and integrity. Electives include ML, AGI risk, game theory, ending poverty, climate change, and fun math puzzles.If you’re 19 or younger and haven’t started college, you can apply here by April 30! (Late applications due May 14.)If you know a promising high school student, please nominate them. If you want to help us, share a post in places where talented young people hang out, or give your personal take on why people should (or should not) apply.In case you’re interested in working with us: we’re hiring!(Full post)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ashleylin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:18 None full 5680
zzpwpDkQBTzxbjgJo_NL_EA_EA EA - Apply or nominate someone to join the boards of Effective Ventures Foundation (UK and US) by Zachary Robinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply or nominate someone to join the boards of Effective Ventures Foundation (UK and US), published by Zachary Robinson on April 20, 2023 on The Effective Altruism Forum.We’re looking for nominations to the boards of trustees of Effective Ventures Foundation (UK) (“EV UK”) and Effective Ventures Foundation USA, Inc. (“EV US”). If you or someone you know might be interested, please fill out one of these forms [apply, nominate someone]. Applications will be assessed on a rolling basis, with a deadline of May 14th.EV UK and EV US work together to host and fiscally sponsor many key projects in effective altruism, including the Centre for Effective Altruism (CEA), 80,000 Hours, Giving What We Can, and EA Funds. You can read more about the structure of the organisations in this post.The current trustees of EV UK are Claire Zabel, Nick Beckstead, Tasha McCauley, and Will MacAskill. The current trustees of EV US are Eli Rose, Nick Beckstead, Nicole Ross, and Zachary Robinson.Who are we looking for?We’re particularly looking for people who:Have a good understanding of effective altruism and/or longtermismHave a track record of integrity and good judgement, and who more broadly embody these guiding principles of effective altruismHave experience in one or more of the following areas:Accounting, law, finance or risk managementManagement or other senior role in a large organisation, especially a non-profitAre able to work collaboratively in a high-pressure environmentWe think the role will require significant time and attention, though this will vary depending on the needs of the organisation. Some trustees have estimated they are currently putting in 3-8 hours per week, though we are working on proposals to reduce this significantly over time. In any event, trustees should be prepared to scale up their involvement from time to time in the case of urgent decisions requiring board response.We especially encourage individuals with diverse backgrounds and experiences to apply, and we especially encourage applications from people of colour, self-identified women, and non-binary individuals who are excited about contributing to our mission.The role is currently unpaid, but we are investigating whether this can and should be changed. We will share here if we change this policy while the application is still open.The role is remote, though we strongly prefer someone who is able to make meetings in times that are reasonable hours in both the UK and California.What does an EV UK or EV US trustee do?As a member of either of the boards, you have ultimate responsibility for ensuring that the charity of which you are a trustee fulfils its charitable objectives as best it can. In practice, most strategic and programmatic decision-making is delegated to the ED / CEOs of the projects, or to the Interim CEO of the relevant entity. (This general board philosophy is in accordance with the thoughts expressed in this post by Holden Karnofsky.)During business as usual times, we expect the primary activities of a trustee to be:Assessing the performance of EDs / CEOs of the fiscally sponsored projects, and the (interim) CEO of the relevant entity.Appointing EDs / CEOs of the fiscally sponsored projects, or the (interim) CEO of the relevant entity, in case of change.Evaluating and deciding on high-level issues that impact the relevant organisation as a whole.Reviewing budgets and broad strategic plans for the relevant organisation.Evaluating the performance of the board and whether its composition could be improved (e.g. by adding in a trustee with underrepresented skills or experiences).However, since the bankruptcy of FTX in November last year, the boards have been a lot more involved than usual. This is partly because there have been many more decisions which have to be coordi...]]>
Zachary Robinson https://forum.effectivealtruism.org/posts/zzpwpDkQBTzxbjgJo/apply-or-nominate-someone-to-join-the-boards-of-effective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply or nominate someone to join the boards of Effective Ventures Foundation (UK and US), published by Zachary Robinson on April 20, 2023 on The Effective Altruism Forum.We’re looking for nominations to the boards of trustees of Effective Ventures Foundation (UK) (“EV UK”) and Effective Ventures Foundation USA, Inc. (“EV US”). If you or someone you know might be interested, please fill out one of these forms [apply, nominate someone]. Applications will be assessed on a rolling basis, with a deadline of May 14th.EV UK and EV US work together to host and fiscally sponsor many key projects in effective altruism, including the Centre for Effective Altruism (CEA), 80,000 Hours, Giving What We Can, and EA Funds. You can read more about the structure of the organisations in this post.The current trustees of EV UK are Claire Zabel, Nick Beckstead, Tasha McCauley, and Will MacAskill. The current trustees of EV US are Eli Rose, Nick Beckstead, Nicole Ross, and Zachary Robinson.Who are we looking for?We’re particularly looking for people who:Have a good understanding of effective altruism and/or longtermismHave a track record of integrity and good judgement, and who more broadly embody these guiding principles of effective altruismHave experience in one or more of the following areas:Accounting, law, finance or risk managementManagement or other senior role in a large organisation, especially a non-profitAre able to work collaboratively in a high-pressure environmentWe think the role will require significant time and attention, though this will vary depending on the needs of the organisation. Some trustees have estimated they are currently putting in 3-8 hours per week, though we are working on proposals to reduce this significantly over time. In any event, trustees should be prepared to scale up their involvement from time to time in the case of urgent decisions requiring board response.We especially encourage individuals with diverse backgrounds and experiences to apply, and we especially encourage applications from people of colour, self-identified women, and non-binary individuals who are excited about contributing to our mission.The role is currently unpaid, but we are investigating whether this can and should be changed. We will share here if we change this policy while the application is still open.The role is remote, though we strongly prefer someone who is able to make meetings in times that are reasonable hours in both the UK and California.What does an EV UK or EV US trustee do?As a member of either of the boards, you have ultimate responsibility for ensuring that the charity of which you are a trustee fulfils its charitable objectives as best it can. In practice, most strategic and programmatic decision-making is delegated to the ED / CEOs of the projects, or to the Interim CEO of the relevant entity. (This general board philosophy is in accordance with the thoughts expressed in this post by Holden Karnofsky.)During business as usual times, we expect the primary activities of a trustee to be:Assessing the performance of EDs / CEOs of the fiscally sponsored projects, and the (interim) CEO of the relevant entity.Appointing EDs / CEOs of the fiscally sponsored projects, or the (interim) CEO of the relevant entity, in case of change.Evaluating and deciding on high-level issues that impact the relevant organisation as a whole.Reviewing budgets and broad strategic plans for the relevant organisation.Evaluating the performance of the board and whether its composition could be improved (e.g. by adding in a trustee with underrepresented skills or experiences).However, since the bankruptcy of FTX in November last year, the boards have been a lot more involved than usual. This is partly because there have been many more decisions which have to be coordi...]]>
Thu, 20 Apr 2023 22:21:14 +0000 EA - Apply or nominate someone to join the boards of Effective Ventures Foundation (UK and US) by Zachary Robinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply or nominate someone to join the boards of Effective Ventures Foundation (UK and US), published by Zachary Robinson on April 20, 2023 on The Effective Altruism Forum.We’re looking for nominations to the boards of trustees of Effective Ventures Foundation (UK) (“EV UK”) and Effective Ventures Foundation USA, Inc. (“EV US”). If you or someone you know might be interested, please fill out one of these forms [apply, nominate someone]. Applications will be assessed on a rolling basis, with a deadline of May 14th.EV UK and EV US work together to host and fiscally sponsor many key projects in effective altruism, including the Centre for Effective Altruism (CEA), 80,000 Hours, Giving What We Can, and EA Funds. You can read more about the structure of the organisations in this post.The current trustees of EV UK are Claire Zabel, Nick Beckstead, Tasha McCauley, and Will MacAskill. The current trustees of EV US are Eli Rose, Nick Beckstead, Nicole Ross, and Zachary Robinson.Who are we looking for?We’re particularly looking for people who:Have a good understanding of effective altruism and/or longtermismHave a track record of integrity and good judgement, and who more broadly embody these guiding principles of effective altruismHave experience in one or more of the following areas:Accounting, law, finance or risk managementManagement or other senior role in a large organisation, especially a non-profitAre able to work collaboratively in a high-pressure environmentWe think the role will require significant time and attention, though this will vary depending on the needs of the organisation. Some trustees have estimated they are currently putting in 3-8 hours per week, though we are working on proposals to reduce this significantly over time. In any event, trustees should be prepared to scale up their involvement from time to time in the case of urgent decisions requiring board response.We especially encourage individuals with diverse backgrounds and experiences to apply, and we especially encourage applications from people of colour, self-identified women, and non-binary individuals who are excited about contributing to our mission.The role is currently unpaid, but we are investigating whether this can and should be changed. We will share here if we change this policy while the application is still open.The role is remote, though we strongly prefer someone who is able to make meetings in times that are reasonable hours in both the UK and California.What does an EV UK or EV US trustee do?As a member of either of the boards, you have ultimate responsibility for ensuring that the charity of which you are a trustee fulfils its charitable objectives as best it can. In practice, most strategic and programmatic decision-making is delegated to the ED / CEOs of the projects, or to the Interim CEO of the relevant entity. (This general board philosophy is in accordance with the thoughts expressed in this post by Holden Karnofsky.)During business as usual times, we expect the primary activities of a trustee to be:Assessing the performance of EDs / CEOs of the fiscally sponsored projects, and the (interim) CEO of the relevant entity.Appointing EDs / CEOs of the fiscally sponsored projects, or the (interim) CEO of the relevant entity, in case of change.Evaluating and deciding on high-level issues that impact the relevant organisation as a whole.Reviewing budgets and broad strategic plans for the relevant organisation.Evaluating the performance of the board and whether its composition could be improved (e.g. by adding in a trustee with underrepresented skills or experiences).However, since the bankruptcy of FTX in November last year, the boards have been a lot more involved than usual. This is partly because there have been many more decisions which have to be coordi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply or nominate someone to join the boards of Effective Ventures Foundation (UK and US), published by Zachary Robinson on April 20, 2023 on The Effective Altruism Forum.We’re looking for nominations to the boards of trustees of Effective Ventures Foundation (UK) (“EV UK”) and Effective Ventures Foundation USA, Inc. (“EV US”). If you or someone you know might be interested, please fill out one of these forms [apply, nominate someone]. Applications will be assessed on a rolling basis, with a deadline of May 14th.EV UK and EV US work together to host and fiscally sponsor many key projects in effective altruism, including the Centre for Effective Altruism (CEA), 80,000 Hours, Giving What We Can, and EA Funds. You can read more about the structure of the organisations in this post.The current trustees of EV UK are Claire Zabel, Nick Beckstead, Tasha McCauley, and Will MacAskill. The current trustees of EV US are Eli Rose, Nick Beckstead, Nicole Ross, and Zachary Robinson.Who are we looking for?We’re particularly looking for people who:Have a good understanding of effective altruism and/or longtermismHave a track record of integrity and good judgement, and who more broadly embody these guiding principles of effective altruismHave experience in one or more of the following areas:Accounting, law, finance or risk managementManagement or other senior role in a large organisation, especially a non-profitAre able to work collaboratively in a high-pressure environmentWe think the role will require significant time and attention, though this will vary depending on the needs of the organisation. Some trustees have estimated they are currently putting in 3-8 hours per week, though we are working on proposals to reduce this significantly over time. In any event, trustees should be prepared to scale up their involvement from time to time in the case of urgent decisions requiring board response.We especially encourage individuals with diverse backgrounds and experiences to apply, and we especially encourage applications from people of colour, self-identified women, and non-binary individuals who are excited about contributing to our mission.The role is currently unpaid, but we are investigating whether this can and should be changed. We will share here if we change this policy while the application is still open.The role is remote, though we strongly prefer someone who is able to make meetings in times that are reasonable hours in both the UK and California.What does an EV UK or EV US trustee do?As a member of either of the boards, you have ultimate responsibility for ensuring that the charity of which you are a trustee fulfils its charitable objectives as best it can. In practice, most strategic and programmatic decision-making is delegated to the ED / CEOs of the projects, or to the Interim CEO of the relevant entity. (This general board philosophy is in accordance with the thoughts expressed in this post by Holden Karnofsky.)During business as usual times, we expect the primary activities of a trustee to be:Assessing the performance of EDs / CEOs of the fiscally sponsored projects, and the (interim) CEO of the relevant entity.Appointing EDs / CEOs of the fiscally sponsored projects, or the (interim) CEO of the relevant entity, in case of change.Evaluating and deciding on high-level issues that impact the relevant organisation as a whole.Reviewing budgets and broad strategic plans for the relevant organisation.Evaluating the performance of the board and whether its composition could be improved (e.g. by adding in a trustee with underrepresented skills or experiences).However, since the bankruptcy of FTX in November last year, the boards have been a lot more involved than usual. This is partly because there have been many more decisions which have to be coordi...]]>
Zachary Robinson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:16 None full 5681
pFW5dfCEFwuLcwfpk_NL_EA_EA EA - Reasons to have hope by jwpieters Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reasons to have hope, published by jwpieters on April 20, 2023 on The Effective Altruism Forum.This is a very short post mentioning some recent developments that make me hopeful for the future of AI safety work. These mostly relate to an increased amount of attention for AI safety concerns. I think this is likely to be good, but you might disagree.Eliezer Yudkowsky was invited to give a TED talk and received a standing ovationThe NSF announced a $20 million request for proposals for empirical AI safety research.46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI developmentAI Safety concerns have received increased media coverage~700 people applied for AGI Safety Fundamentals in JanuaryFLI’s open letter has received 27572 signatures to dateRemember – The world is awful. The world is much better. The world can be much better.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
jwpieters https://forum.effectivealtruism.org/posts/pFW5dfCEFwuLcwfpk/reasons-to-have-hope Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reasons to have hope, published by jwpieters on April 20, 2023 on The Effective Altruism Forum.This is a very short post mentioning some recent developments that make me hopeful for the future of AI safety work. These mostly relate to an increased amount of attention for AI safety concerns. I think this is likely to be good, but you might disagree.Eliezer Yudkowsky was invited to give a TED talk and received a standing ovationThe NSF announced a $20 million request for proposals for empirical AI safety research.46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI developmentAI Safety concerns have received increased media coverage~700 people applied for AGI Safety Fundamentals in JanuaryFLI’s open letter has received 27572 signatures to dateRemember – The world is awful. The world is much better. The world can be much better.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 20 Apr 2023 20:42:55 +0000 EA - Reasons to have hope by jwpieters Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reasons to have hope, published by jwpieters on April 20, 2023 on The Effective Altruism Forum.This is a very short post mentioning some recent developments that make me hopeful for the future of AI safety work. These mostly relate to an increased amount of attention for AI safety concerns. I think this is likely to be good, but you might disagree.Eliezer Yudkowsky was invited to give a TED talk and received a standing ovationThe NSF announced a $20 million request for proposals for empirical AI safety research.46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI developmentAI Safety concerns have received increased media coverage~700 people applied for AGI Safety Fundamentals in JanuaryFLI’s open letter has received 27572 signatures to dateRemember – The world is awful. The world is much better. The world can be much better.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reasons to have hope, published by jwpieters on April 20, 2023 on The Effective Altruism Forum.This is a very short post mentioning some recent developments that make me hopeful for the future of AI safety work. These mostly relate to an increased amount of attention for AI safety concerns. I think this is likely to be good, but you might disagree.Eliezer Yudkowsky was invited to give a TED talk and received a standing ovationThe NSF announced a $20 million request for proposals for empirical AI safety research.46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI developmentAI Safety concerns have received increased media coverage~700 people applied for AGI Safety Fundamentals in JanuaryFLI’s open letter has received 27572 signatures to dateRemember – The world is awful. The world is much better. The world can be much better.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
jwpieters https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:09 None full 5676
mffiHBwfcWwwNyjoM_NL_EA_EA EA - Inside The Minds Of ADHD by lynettebye Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Inside The Minds Of ADHD, published by lynettebye on April 20, 2023 on The Effective Altruism Forum.This post is crossposted from my blog. If you liked this post, subscribe to Lynette's blog to read more. (I only crosspost about half my content to the EA forum.)I was diagnosed with attention deficit hyperactivity disorder (ADHD) last winter.I’m a productivity coach who frequently works with people who have ADHD. I’d studied diagnostic questionnaires and read about the official symptoms of ADHD.I’ve also been struggling with those symptoms for decades. I was unable to consistently focus on demand, felt constantly tired, and often needed to force myself to get started on my projects. Yet I never seriously considered that I might have ADHD.Why not?Because my experiences didn’t match the picture of ADHD in my head. I thought I understood the symptoms of ADHD, but I didn’t really. (How much trouble focusing counts as “difficulty focusing” anyway?)I missed out on the benefits of stimulant medication for well over a decade because I didn’t understand what ADHD actually looks like. I’m guessing there’s a good number of adults (maybe including you!) who are also missing out on that benefit because of similar misconceptions.So I collected stories from six people who were diagnosed with ADHD as adults (including myself). These people are intelligent, high-performing individuals who nevertheless struggled for years with undiagnosed ADHD.I’ve structured the first half of this post around my interviewees’ responses to a typical ADHD questionnaire, so you can see what kinds of experiences you might expect if you have ADHD. The second half covers their stories prediagnosis and with medication. If these responses feel familiar to your experience, I encourage you to consider whether you might benefit from ADHD medication.So, what does adult ADHD actually look like?Hint: Adult ADHD is dominated by what I’m calling the Terrible Trifecta: trouble getting started, keeping focused, and finishing up projects. Every one of my interviewees struggled with these three areas, while most said that the other symptoms had less of a detrimental impact on them.Note, I mainly focus on the ‘inattentive’ presentation of ADHD, since it’s easiest to miss. If you’d like to learn more about the other types of ADHD, I recommend the youtube channel How to ADHD.When you have a task that requires a lot of thought, how often do you avoid or delay getting started?Every person I spoke with thought this was a major problem. They described procrastinating, trying to get started but being unable to focus, or spirals of feeling motivated and wanting to make progress but then.just not doing the task.This isn’t just for tasks that people dislike. People would bounce off even tasks that they enjoyed once they got into them.For me, this is especially likely when a project is distant in my mind. Resuming drafting a post after a weekend feels aversive. I can’t remember exactly what I was planning or why I wanted to write the post. If I don’t sit down and mentally “boot up” the project, it’s tempting to start something easier instead, which means that I don’t tackle the planned task until hours later.For some respondents, procrastination was particularly likely when they were trying to meet really high standards. If they didn’t feel that they were able to do a great job right then, they would instead go do something else until a deadline forced their hand. (I definitely resonated with this!)It felt like I would have such a high standard for each sentence that it felt like any word choice was totally wrong. I would try and make myself start, and then it wouldn't feel on fire enough to actually start doing it. Then I very easily get sucked into some other internet distraction. This would keep going until it w...]]>
lynettebye https://forum.effectivealtruism.org/posts/mffiHBwfcWwwNyjoM/inside-the-minds-of-adhd Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Inside The Minds Of ADHD, published by lynettebye on April 20, 2023 on The Effective Altruism Forum.This post is crossposted from my blog. If you liked this post, subscribe to Lynette's blog to read more. (I only crosspost about half my content to the EA forum.)I was diagnosed with attention deficit hyperactivity disorder (ADHD) last winter.I’m a productivity coach who frequently works with people who have ADHD. I’d studied diagnostic questionnaires and read about the official symptoms of ADHD.I’ve also been struggling with those symptoms for decades. I was unable to consistently focus on demand, felt constantly tired, and often needed to force myself to get started on my projects. Yet I never seriously considered that I might have ADHD.Why not?Because my experiences didn’t match the picture of ADHD in my head. I thought I understood the symptoms of ADHD, but I didn’t really. (How much trouble focusing counts as “difficulty focusing” anyway?)I missed out on the benefits of stimulant medication for well over a decade because I didn’t understand what ADHD actually looks like. I’m guessing there’s a good number of adults (maybe including you!) who are also missing out on that benefit because of similar misconceptions.So I collected stories from six people who were diagnosed with ADHD as adults (including myself). These people are intelligent, high-performing individuals who nevertheless struggled for years with undiagnosed ADHD.I’ve structured the first half of this post around my interviewees’ responses to a typical ADHD questionnaire, so you can see what kinds of experiences you might expect if you have ADHD. The second half covers their stories prediagnosis and with medication. If these responses feel familiar to your experience, I encourage you to consider whether you might benefit from ADHD medication.So, what does adult ADHD actually look like?Hint: Adult ADHD is dominated by what I’m calling the Terrible Trifecta: trouble getting started, keeping focused, and finishing up projects. Every one of my interviewees struggled with these three areas, while most said that the other symptoms had less of a detrimental impact on them.Note, I mainly focus on the ‘inattentive’ presentation of ADHD, since it’s easiest to miss. If you’d like to learn more about the other types of ADHD, I recommend the youtube channel How to ADHD.When you have a task that requires a lot of thought, how often do you avoid or delay getting started?Every person I spoke with thought this was a major problem. They described procrastinating, trying to get started but being unable to focus, or spirals of feeling motivated and wanting to make progress but then.just not doing the task.This isn’t just for tasks that people dislike. People would bounce off even tasks that they enjoyed once they got into them.For me, this is especially likely when a project is distant in my mind. Resuming drafting a post after a weekend feels aversive. I can’t remember exactly what I was planning or why I wanted to write the post. If I don’t sit down and mentally “boot up” the project, it’s tempting to start something easier instead, which means that I don’t tackle the planned task until hours later.For some respondents, procrastination was particularly likely when they were trying to meet really high standards. If they didn’t feel that they were able to do a great job right then, they would instead go do something else until a deadline forced their hand. (I definitely resonated with this!)It felt like I would have such a high standard for each sentence that it felt like any word choice was totally wrong. I would try and make myself start, and then it wouldn't feel on fire enough to actually start doing it. Then I very easily get sucked into some other internet distraction. This would keep going until it w...]]>
Thu, 20 Apr 2023 18:16:12 +0000 EA - Inside The Minds Of ADHD by lynettebye Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Inside The Minds Of ADHD, published by lynettebye on April 20, 2023 on The Effective Altruism Forum.This post is crossposted from my blog. If you liked this post, subscribe to Lynette's blog to read more. (I only crosspost about half my content to the EA forum.)I was diagnosed with attention deficit hyperactivity disorder (ADHD) last winter.I’m a productivity coach who frequently works with people who have ADHD. I’d studied diagnostic questionnaires and read about the official symptoms of ADHD.I’ve also been struggling with those symptoms for decades. I was unable to consistently focus on demand, felt constantly tired, and often needed to force myself to get started on my projects. Yet I never seriously considered that I might have ADHD.Why not?Because my experiences didn’t match the picture of ADHD in my head. I thought I understood the symptoms of ADHD, but I didn’t really. (How much trouble focusing counts as “difficulty focusing” anyway?)I missed out on the benefits of stimulant medication for well over a decade because I didn’t understand what ADHD actually looks like. I’m guessing there’s a good number of adults (maybe including you!) who are also missing out on that benefit because of similar misconceptions.So I collected stories from six people who were diagnosed with ADHD as adults (including myself). These people are intelligent, high-performing individuals who nevertheless struggled for years with undiagnosed ADHD.I’ve structured the first half of this post around my interviewees’ responses to a typical ADHD questionnaire, so you can see what kinds of experiences you might expect if you have ADHD. The second half covers their stories prediagnosis and with medication. If these responses feel familiar to your experience, I encourage you to consider whether you might benefit from ADHD medication.So, what does adult ADHD actually look like?Hint: Adult ADHD is dominated by what I’m calling the Terrible Trifecta: trouble getting started, keeping focused, and finishing up projects. Every one of my interviewees struggled with these three areas, while most said that the other symptoms had less of a detrimental impact on them.Note, I mainly focus on the ‘inattentive’ presentation of ADHD, since it’s easiest to miss. If you’d like to learn more about the other types of ADHD, I recommend the youtube channel How to ADHD.When you have a task that requires a lot of thought, how often do you avoid or delay getting started?Every person I spoke with thought this was a major problem. They described procrastinating, trying to get started but being unable to focus, or spirals of feeling motivated and wanting to make progress but then.just not doing the task.This isn’t just for tasks that people dislike. People would bounce off even tasks that they enjoyed once they got into them.For me, this is especially likely when a project is distant in my mind. Resuming drafting a post after a weekend feels aversive. I can’t remember exactly what I was planning or why I wanted to write the post. If I don’t sit down and mentally “boot up” the project, it’s tempting to start something easier instead, which means that I don’t tackle the planned task until hours later.For some respondents, procrastination was particularly likely when they were trying to meet really high standards. If they didn’t feel that they were able to do a great job right then, they would instead go do something else until a deadline forced their hand. (I definitely resonated with this!)It felt like I would have such a high standard for each sentence that it felt like any word choice was totally wrong. I would try and make myself start, and then it wouldn't feel on fire enough to actually start doing it. Then I very easily get sucked into some other internet distraction. This would keep going until it w...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Inside The Minds Of ADHD, published by lynettebye on April 20, 2023 on The Effective Altruism Forum.This post is crossposted from my blog. If you liked this post, subscribe to Lynette's blog to read more. (I only crosspost about half my content to the EA forum.)I was diagnosed with attention deficit hyperactivity disorder (ADHD) last winter.I’m a productivity coach who frequently works with people who have ADHD. I’d studied diagnostic questionnaires and read about the official symptoms of ADHD.I’ve also been struggling with those symptoms for decades. I was unable to consistently focus on demand, felt constantly tired, and often needed to force myself to get started on my projects. Yet I never seriously considered that I might have ADHD.Why not?Because my experiences didn’t match the picture of ADHD in my head. I thought I understood the symptoms of ADHD, but I didn’t really. (How much trouble focusing counts as “difficulty focusing” anyway?)I missed out on the benefits of stimulant medication for well over a decade because I didn’t understand what ADHD actually looks like. I’m guessing there’s a good number of adults (maybe including you!) who are also missing out on that benefit because of similar misconceptions.So I collected stories from six people who were diagnosed with ADHD as adults (including myself). These people are intelligent, high-performing individuals who nevertheless struggled for years with undiagnosed ADHD.I’ve structured the first half of this post around my interviewees’ responses to a typical ADHD questionnaire, so you can see what kinds of experiences you might expect if you have ADHD. The second half covers their stories prediagnosis and with medication. If these responses feel familiar to your experience, I encourage you to consider whether you might benefit from ADHD medication.So, what does adult ADHD actually look like?Hint: Adult ADHD is dominated by what I’m calling the Terrible Trifecta: trouble getting started, keeping focused, and finishing up projects. Every one of my interviewees struggled with these three areas, while most said that the other symptoms had less of a detrimental impact on them.Note, I mainly focus on the ‘inattentive’ presentation of ADHD, since it’s easiest to miss. If you’d like to learn more about the other types of ADHD, I recommend the youtube channel How to ADHD.When you have a task that requires a lot of thought, how often do you avoid or delay getting started?Every person I spoke with thought this was a major problem. They described procrastinating, trying to get started but being unable to focus, or spirals of feeling motivated and wanting to make progress but then.just not doing the task.This isn’t just for tasks that people dislike. People would bounce off even tasks that they enjoyed once they got into them.For me, this is especially likely when a project is distant in my mind. Resuming drafting a post after a weekend feels aversive. I can’t remember exactly what I was planning or why I wanted to write the post. If I don’t sit down and mentally “boot up” the project, it’s tempting to start something easier instead, which means that I don’t tackle the planned task until hours later.For some respondents, procrastination was particularly likely when they were trying to meet really high standards. If they didn’t feel that they were able to do a great job right then, they would instead go do something else until a deadline forced their hand. (I definitely resonated with this!)It felt like I would have such a high standard for each sentence that it felt like any word choice was totally wrong. I would try and make myself start, and then it wouldn't feel on fire enough to actually start doing it. Then I very easily get sucked into some other internet distraction. This would keep going until it w...]]>
lynettebye https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 31:23 None full 5673
6Mi2LqLRjSkQNdLbH_NL_EA_EA EA - Orthogonal: A new agent foundations alignment organization by Tamsin Leake Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Orthogonal: A new agent foundations alignment organization, published by Tamsin Leake on April 19, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tamsin Leake https://forum.effectivealtruism.org/posts/6Mi2LqLRjSkQNdLbH/orthogonal-a-new-agent-foundations-alignment-organization Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Orthogonal: A new agent foundations alignment organization, published by Tamsin Leake on April 19, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 20 Apr 2023 13:22:02 +0000 EA - Orthogonal: A new agent foundations alignment organization by Tamsin Leake Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Orthogonal: A new agent foundations alignment organization, published by Tamsin Leake on April 19, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Orthogonal: A new agent foundations alignment organization, published by Tamsin Leake on April 19, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tamsin Leake https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:23 None full 5677
nh8dx6JJt3Ga3BRdp_NL_EA_EA EA - GWWC Reporting Attrition Visualization by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Reporting Attrition Visualization, published by Jeff Kaufman on April 19, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/nh8dx6JJt3Ga3BRdp/gwwc-reporting-attrition-visualization Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Reporting Attrition Visualization, published by Jeff Kaufman on April 19, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 19 Apr 2023 21:26:18 +0000 EA - GWWC Reporting Attrition Visualization by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Reporting Attrition Visualization, published by Jeff Kaufman on April 19, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Reporting Attrition Visualization, published by Jeff Kaufman on April 19, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:22 None full 5658
iiRGCydMX7aiEjvGm_NL_EA_EA EA - 12 tentative ideas for US AI policy (Luke Muehlhauser) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 12 tentative ideas for US AI policy (Luke Muehlhauser), published by Lizka on April 19, 2023 on The Effective Altruism Forum.Luke Muehlhauser recently posted this list of ideas. See also this List of lists of government AI policy ideas and How major governments can help with the most important century.The full text of the post is below.About two years ago, I wrote that “it’s difficult to know which ‘intermediate goals’ [e.g. policy goals] we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI.” Much has changed since then, and in this post I give an update on 12 ideas for US policy goals that I tentatively think would increase the odds of good outcomes from transformative AI.I think the US generally over-regulates, and that most people underrate the enormous benefits of rapid innovation. However, when 50% of the experts on a specific technology think there is a reasonable chance it will result in outcomes that are “extremely bad (e.g. human extinction),” I think ambitious and thoughtful regulation is warranted.First, some caveats:These are my own tentative opinions, not Open Philanthropy’s. I might easily change my opinions in response to further analysis or further developments.My opinions are premised on a strategic picture similar to the one outlined in my colleague Holden Karnofsky’s Most Important Century and Implications of. posts. In other words, I think transformative AI could bring enormous benefits, but I also take full-blown existential risk from transformative AI as a plausible and urgent concern, and I am more agnostic about this risk’s likelihood, shape, and tractability than e.g. a recent TIME op-ed.None of the policy options below have gotten sufficient scrutiny (though they have received far more scrutiny than is presented here), and there are many ways their impact could turn out — upon further analysis or upon implementation — to be net-negative, even if my basic picture of the strategic situation is right.To my knowledge, none of these policy ideas have been worked out in enough detail to allow for immediate implementation, but experts have begun to draft the potential details for most of them (not included here). None of these ideas are original to me.This post doesn’t explain much of my reasoning for tentatively favoring these policy options. All the options below have complicated mixtures of pros and cons, and many experts oppose (or support) each one. This post isn’t intended to (and shouldn’t) convince anyone. However, in the wake of recent AI advances and discussion, many people have been asking me for these kinds of policy ideas, so I am sharing my opinions here.Some of these policy options are more politically tractable than others, but, as I think we’ve seen recently, the political landscape sometimes shifts rapidly and unexpectedly.Those caveats in hand, below are some of my current personal guesses about US policy options that would reduce existential risk from AI in expectation (in no order).Software export controls. Control the export (to anyone) of “frontier AI models,” i.e. models with highly general capabilities over some threshold, or (more simply) models trained with a compute budget over some threshold (e.g. as much compute as $1 billion can buy today). This will help limit the proliferation of the models which probably pose the greatest risk. Also restrict API access in some ways, as API access can potentially be used to generate an optimized dataset sufficient to train a smaller model to reach performance similar to that of the larger model.Require hardware security features on cutting-edge chips. Security features on chips can be leveraged for many useful compute governance purposes, e.g. to verify compliance with export controls and domestic regulatio...]]>
Lizka https://forum.effectivealtruism.org/posts/iiRGCydMX7aiEjvGm/12-tentative-ideas-for-us-ai-policy-luke-muehlhauser Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 12 tentative ideas for US AI policy (Luke Muehlhauser), published by Lizka on April 19, 2023 on The Effective Altruism Forum.Luke Muehlhauser recently posted this list of ideas. See also this List of lists of government AI policy ideas and How major governments can help with the most important century.The full text of the post is below.About two years ago, I wrote that “it’s difficult to know which ‘intermediate goals’ [e.g. policy goals] we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI.” Much has changed since then, and in this post I give an update on 12 ideas for US policy goals that I tentatively think would increase the odds of good outcomes from transformative AI.I think the US generally over-regulates, and that most people underrate the enormous benefits of rapid innovation. However, when 50% of the experts on a specific technology think there is a reasonable chance it will result in outcomes that are “extremely bad (e.g. human extinction),” I think ambitious and thoughtful regulation is warranted.First, some caveats:These are my own tentative opinions, not Open Philanthropy’s. I might easily change my opinions in response to further analysis or further developments.My opinions are premised on a strategic picture similar to the one outlined in my colleague Holden Karnofsky’s Most Important Century and Implications of. posts. In other words, I think transformative AI could bring enormous benefits, but I also take full-blown existential risk from transformative AI as a plausible and urgent concern, and I am more agnostic about this risk’s likelihood, shape, and tractability than e.g. a recent TIME op-ed.None of the policy options below have gotten sufficient scrutiny (though they have received far more scrutiny than is presented here), and there are many ways their impact could turn out — upon further analysis or upon implementation — to be net-negative, even if my basic picture of the strategic situation is right.To my knowledge, none of these policy ideas have been worked out in enough detail to allow for immediate implementation, but experts have begun to draft the potential details for most of them (not included here). None of these ideas are original to me.This post doesn’t explain much of my reasoning for tentatively favoring these policy options. All the options below have complicated mixtures of pros and cons, and many experts oppose (or support) each one. This post isn’t intended to (and shouldn’t) convince anyone. However, in the wake of recent AI advances and discussion, many people have been asking me for these kinds of policy ideas, so I am sharing my opinions here.Some of these policy options are more politically tractable than others, but, as I think we’ve seen recently, the political landscape sometimes shifts rapidly and unexpectedly.Those caveats in hand, below are some of my current personal guesses about US policy options that would reduce existential risk from AI in expectation (in no order).Software export controls. Control the export (to anyone) of “frontier AI models,” i.e. models with highly general capabilities over some threshold, or (more simply) models trained with a compute budget over some threshold (e.g. as much compute as $1 billion can buy today). This will help limit the proliferation of the models which probably pose the greatest risk. Also restrict API access in some ways, as API access can potentially be used to generate an optimized dataset sufficient to train a smaller model to reach performance similar to that of the larger model.Require hardware security features on cutting-edge chips. Security features on chips can be leveraged for many useful compute governance purposes, e.g. to verify compliance with export controls and domestic regulatio...]]>
Wed, 19 Apr 2023 21:16:44 +0000 EA - 12 tentative ideas for US AI policy (Luke Muehlhauser) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 12 tentative ideas for US AI policy (Luke Muehlhauser), published by Lizka on April 19, 2023 on The Effective Altruism Forum.Luke Muehlhauser recently posted this list of ideas. See also this List of lists of government AI policy ideas and How major governments can help with the most important century.The full text of the post is below.About two years ago, I wrote that “it’s difficult to know which ‘intermediate goals’ [e.g. policy goals] we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI.” Much has changed since then, and in this post I give an update on 12 ideas for US policy goals that I tentatively think would increase the odds of good outcomes from transformative AI.I think the US generally over-regulates, and that most people underrate the enormous benefits of rapid innovation. However, when 50% of the experts on a specific technology think there is a reasonable chance it will result in outcomes that are “extremely bad (e.g. human extinction),” I think ambitious and thoughtful regulation is warranted.First, some caveats:These are my own tentative opinions, not Open Philanthropy’s. I might easily change my opinions in response to further analysis or further developments.My opinions are premised on a strategic picture similar to the one outlined in my colleague Holden Karnofsky’s Most Important Century and Implications of. posts. In other words, I think transformative AI could bring enormous benefits, but I also take full-blown existential risk from transformative AI as a plausible and urgent concern, and I am more agnostic about this risk’s likelihood, shape, and tractability than e.g. a recent TIME op-ed.None of the policy options below have gotten sufficient scrutiny (though they have received far more scrutiny than is presented here), and there are many ways their impact could turn out — upon further analysis or upon implementation — to be net-negative, even if my basic picture of the strategic situation is right.To my knowledge, none of these policy ideas have been worked out in enough detail to allow for immediate implementation, but experts have begun to draft the potential details for most of them (not included here). None of these ideas are original to me.This post doesn’t explain much of my reasoning for tentatively favoring these policy options. All the options below have complicated mixtures of pros and cons, and many experts oppose (or support) each one. This post isn’t intended to (and shouldn’t) convince anyone. However, in the wake of recent AI advances and discussion, many people have been asking me for these kinds of policy ideas, so I am sharing my opinions here.Some of these policy options are more politically tractable than others, but, as I think we’ve seen recently, the political landscape sometimes shifts rapidly and unexpectedly.Those caveats in hand, below are some of my current personal guesses about US policy options that would reduce existential risk from AI in expectation (in no order).Software export controls. Control the export (to anyone) of “frontier AI models,” i.e. models with highly general capabilities over some threshold, or (more simply) models trained with a compute budget over some threshold (e.g. as much compute as $1 billion can buy today). This will help limit the proliferation of the models which probably pose the greatest risk. Also restrict API access in some ways, as API access can potentially be used to generate an optimized dataset sufficient to train a smaller model to reach performance similar to that of the larger model.Require hardware security features on cutting-edge chips. Security features on chips can be leveraged for many useful compute governance purposes, e.g. to verify compliance with export controls and domestic regulatio...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 12 tentative ideas for US AI policy (Luke Muehlhauser), published by Lizka on April 19, 2023 on The Effective Altruism Forum.Luke Muehlhauser recently posted this list of ideas. See also this List of lists of government AI policy ideas and How major governments can help with the most important century.The full text of the post is below.About two years ago, I wrote that “it’s difficult to know which ‘intermediate goals’ [e.g. policy goals] we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI.” Much has changed since then, and in this post I give an update on 12 ideas for US policy goals that I tentatively think would increase the odds of good outcomes from transformative AI.I think the US generally over-regulates, and that most people underrate the enormous benefits of rapid innovation. However, when 50% of the experts on a specific technology think there is a reasonable chance it will result in outcomes that are “extremely bad (e.g. human extinction),” I think ambitious and thoughtful regulation is warranted.First, some caveats:These are my own tentative opinions, not Open Philanthropy’s. I might easily change my opinions in response to further analysis or further developments.My opinions are premised on a strategic picture similar to the one outlined in my colleague Holden Karnofsky’s Most Important Century and Implications of. posts. In other words, I think transformative AI could bring enormous benefits, but I also take full-blown existential risk from transformative AI as a plausible and urgent concern, and I am more agnostic about this risk’s likelihood, shape, and tractability than e.g. a recent TIME op-ed.None of the policy options below have gotten sufficient scrutiny (though they have received far more scrutiny than is presented here), and there are many ways their impact could turn out — upon further analysis or upon implementation — to be net-negative, even if my basic picture of the strategic situation is right.To my knowledge, none of these policy ideas have been worked out in enough detail to allow for immediate implementation, but experts have begun to draft the potential details for most of them (not included here). None of these ideas are original to me.This post doesn’t explain much of my reasoning for tentatively favoring these policy options. All the options below have complicated mixtures of pros and cons, and many experts oppose (or support) each one. This post isn’t intended to (and shouldn’t) convince anyone. However, in the wake of recent AI advances and discussion, many people have been asking me for these kinds of policy ideas, so I am sharing my opinions here.Some of these policy options are more politically tractable than others, but, as I think we’ve seen recently, the political landscape sometimes shifts rapidly and unexpectedly.Those caveats in hand, below are some of my current personal guesses about US policy options that would reduce existential risk from AI in expectation (in no order).Software export controls. Control the export (to anyone) of “frontier AI models,” i.e. models with highly general capabilities over some threshold, or (more simply) models trained with a compute budget over some threshold (e.g. as much compute as $1 billion can buy today). This will help limit the proliferation of the models which probably pose the greatest risk. Also restrict API access in some ways, as API access can potentially be used to generate an optimized dataset sufficient to train a smaller model to reach performance similar to that of the larger model.Require hardware security features on cutting-edge chips. Security features on chips can be leveraged for many useful compute governance purposes, e.g. to verify compliance with export controls and domestic regulatio...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:28 None full 5657
Eu4ZDCt2yaKavtQ9s_NL_EA_EA EA - AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media by Oliver Z Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media, published by Oliver Z on April 18, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.ChaosGPT and the Rise of Language AgentsChatbots like ChatGPT usually only respond to one prompt at a time, and a human user must provide a new prompt to get a new response. But an extremely popular new framework called AutoGPT automates that process. With AutoGPT, the user provides only a high-level goal, and the language model will create and execute a step-by-step plan to accomplish the goal.AutoGPT and other language agents are still in their infancy. They struggle with long-term planning and repeat their own mistakes. Yet because they limit human oversight of AI actions, these agents are a step towards dangerous deployment of autonomous AI.Individual bad actors pose serious risks. One of the first uses of AutoGPT was to instruct a model named ChaosGPT to “destroy humanity.” It created a plan to “find the most destructive weapons available to humans” and, after a few Google searches, became excited by the Tsar Bomba, an old Soviet nuclear weapon. ChaosGPT lacks both the intelligence and the means to operate dangerous weapons, so the worst it could do was fire off a Tweet about the bomb. But this is an example of the “unilateralist’s curse”: if one day someone builds AIs capable of causing severe harm, it only takes one person to ask it to cause that harm.More agents introduce more complexity. Researchers at Stanford and Google recently built a virtual world full of agents controlled by language models. Each agent was given an identity, an occupation, and relationships with the other agents. They would choose their own actions each day, leading to surprising outcomes. One agent threw a Valentine’s Day party, and the others spread the news and began asking each other on dates. Another ran for mayor, and the candidate’s neighbors would discuss his platform over breakfast in their own homes. Just as the agents in this virtual world had surprising interactions with each other, autonomous AI agents have unpredictable effects on the real world.How do LLM agents like GPT-4 behave? A recent paper examined the safety of LLMs acting as agents. When playing text-based games, LLMs often behave in power-seeking, deceptive, or Machiavellian ways. This happens naturally. Much like how LLMs trained to mimic human writings may learn to output toxic text, agents trained to optimize goals may learn to exhibit ends-justify-the-means / Machiavellian behavior by default. Research to reduce LLMs’ Machiavellian tendencies is still in its infancy.Natural Selection Favors AIs over HumansCAIS director Dan Hendrycks released a paper titled Natural Selection Favors AIs over Humans.The abstract for the paper is as follows:For billions of years, evolution has been the driving force behind the development of life, including humans. Evolution endowed humans with high intelligence, which allowed us to become one of the most successful species on the planet. Today, humans aim to create artificial intelligence systems that surpass even our own intelligence. As artificial intelligences (AIs) evolve and eventually surpass us in all domains, how might evolution shape our relations with AIs? By analyzing the environment that is shaping the evolution of AIs, we argue that the most successful AI agents will likely have undesirable traits. Competitive pressures among corporations and militaries will give rise to AI agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to hu...]]>
Oliver Z https://forum.effectivealtruism.org/posts/Eu4ZDCt2yaKavtQ9s/ai-safety-newsletter-2-chaosgpt-natural-selection-and-ai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media, published by Oliver Z on April 18, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.ChaosGPT and the Rise of Language AgentsChatbots like ChatGPT usually only respond to one prompt at a time, and a human user must provide a new prompt to get a new response. But an extremely popular new framework called AutoGPT automates that process. With AutoGPT, the user provides only a high-level goal, and the language model will create and execute a step-by-step plan to accomplish the goal.AutoGPT and other language agents are still in their infancy. They struggle with long-term planning and repeat their own mistakes. Yet because they limit human oversight of AI actions, these agents are a step towards dangerous deployment of autonomous AI.Individual bad actors pose serious risks. One of the first uses of AutoGPT was to instruct a model named ChaosGPT to “destroy humanity.” It created a plan to “find the most destructive weapons available to humans” and, after a few Google searches, became excited by the Tsar Bomba, an old Soviet nuclear weapon. ChaosGPT lacks both the intelligence and the means to operate dangerous weapons, so the worst it could do was fire off a Tweet about the bomb. But this is an example of the “unilateralist’s curse”: if one day someone builds AIs capable of causing severe harm, it only takes one person to ask it to cause that harm.More agents introduce more complexity. Researchers at Stanford and Google recently built a virtual world full of agents controlled by language models. Each agent was given an identity, an occupation, and relationships with the other agents. They would choose their own actions each day, leading to surprising outcomes. One agent threw a Valentine’s Day party, and the others spread the news and began asking each other on dates. Another ran for mayor, and the candidate’s neighbors would discuss his platform over breakfast in their own homes. Just as the agents in this virtual world had surprising interactions with each other, autonomous AI agents have unpredictable effects on the real world.How do LLM agents like GPT-4 behave? A recent paper examined the safety of LLMs acting as agents. When playing text-based games, LLMs often behave in power-seeking, deceptive, or Machiavellian ways. This happens naturally. Much like how LLMs trained to mimic human writings may learn to output toxic text, agents trained to optimize goals may learn to exhibit ends-justify-the-means / Machiavellian behavior by default. Research to reduce LLMs’ Machiavellian tendencies is still in its infancy.Natural Selection Favors AIs over HumansCAIS director Dan Hendrycks released a paper titled Natural Selection Favors AIs over Humans.The abstract for the paper is as follows:For billions of years, evolution has been the driving force behind the development of life, including humans. Evolution endowed humans with high intelligence, which allowed us to become one of the most successful species on the planet. Today, humans aim to create artificial intelligence systems that surpass even our own intelligence. As artificial intelligences (AIs) evolve and eventually surpass us in all domains, how might evolution shape our relations with AIs? By analyzing the environment that is shaping the evolution of AIs, we argue that the most successful AI agents will likely have undesirable traits. Competitive pressures among corporations and militaries will give rise to AI agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to hu...]]>
Wed, 19 Apr 2023 18:26:16 +0000 EA - AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media by Oliver Z Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media, published by Oliver Z on April 18, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.ChaosGPT and the Rise of Language AgentsChatbots like ChatGPT usually only respond to one prompt at a time, and a human user must provide a new prompt to get a new response. But an extremely popular new framework called AutoGPT automates that process. With AutoGPT, the user provides only a high-level goal, and the language model will create and execute a step-by-step plan to accomplish the goal.AutoGPT and other language agents are still in their infancy. They struggle with long-term planning and repeat their own mistakes. Yet because they limit human oversight of AI actions, these agents are a step towards dangerous deployment of autonomous AI.Individual bad actors pose serious risks. One of the first uses of AutoGPT was to instruct a model named ChaosGPT to “destroy humanity.” It created a plan to “find the most destructive weapons available to humans” and, after a few Google searches, became excited by the Tsar Bomba, an old Soviet nuclear weapon. ChaosGPT lacks both the intelligence and the means to operate dangerous weapons, so the worst it could do was fire off a Tweet about the bomb. But this is an example of the “unilateralist’s curse”: if one day someone builds AIs capable of causing severe harm, it only takes one person to ask it to cause that harm.More agents introduce more complexity. Researchers at Stanford and Google recently built a virtual world full of agents controlled by language models. Each agent was given an identity, an occupation, and relationships with the other agents. They would choose their own actions each day, leading to surprising outcomes. One agent threw a Valentine’s Day party, and the others spread the news and began asking each other on dates. Another ran for mayor, and the candidate’s neighbors would discuss his platform over breakfast in their own homes. Just as the agents in this virtual world had surprising interactions with each other, autonomous AI agents have unpredictable effects on the real world.How do LLM agents like GPT-4 behave? A recent paper examined the safety of LLMs acting as agents. When playing text-based games, LLMs often behave in power-seeking, deceptive, or Machiavellian ways. This happens naturally. Much like how LLMs trained to mimic human writings may learn to output toxic text, agents trained to optimize goals may learn to exhibit ends-justify-the-means / Machiavellian behavior by default. Research to reduce LLMs’ Machiavellian tendencies is still in its infancy.Natural Selection Favors AIs over HumansCAIS director Dan Hendrycks released a paper titled Natural Selection Favors AIs over Humans.The abstract for the paper is as follows:For billions of years, evolution has been the driving force behind the development of life, including humans. Evolution endowed humans with high intelligence, which allowed us to become one of the most successful species on the planet. Today, humans aim to create artificial intelligence systems that surpass even our own intelligence. As artificial intelligences (AIs) evolve and eventually surpass us in all domains, how might evolution shape our relations with AIs? By analyzing the environment that is shaping the evolution of AIs, we argue that the most successful AI agents will likely have undesirable traits. Competitive pressures among corporations and militaries will give rise to AI agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to hu...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media, published by Oliver Z on April 18, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.ChaosGPT and the Rise of Language AgentsChatbots like ChatGPT usually only respond to one prompt at a time, and a human user must provide a new prompt to get a new response. But an extremely popular new framework called AutoGPT automates that process. With AutoGPT, the user provides only a high-level goal, and the language model will create and execute a step-by-step plan to accomplish the goal.AutoGPT and other language agents are still in their infancy. They struggle with long-term planning and repeat their own mistakes. Yet because they limit human oversight of AI actions, these agents are a step towards dangerous deployment of autonomous AI.Individual bad actors pose serious risks. One of the first uses of AutoGPT was to instruct a model named ChaosGPT to “destroy humanity.” It created a plan to “find the most destructive weapons available to humans” and, after a few Google searches, became excited by the Tsar Bomba, an old Soviet nuclear weapon. ChaosGPT lacks both the intelligence and the means to operate dangerous weapons, so the worst it could do was fire off a Tweet about the bomb. But this is an example of the “unilateralist’s curse”: if one day someone builds AIs capable of causing severe harm, it only takes one person to ask it to cause that harm.More agents introduce more complexity. Researchers at Stanford and Google recently built a virtual world full of agents controlled by language models. Each agent was given an identity, an occupation, and relationships with the other agents. They would choose their own actions each day, leading to surprising outcomes. One agent threw a Valentine’s Day party, and the others spread the news and began asking each other on dates. Another ran for mayor, and the candidate’s neighbors would discuss his platform over breakfast in their own homes. Just as the agents in this virtual world had surprising interactions with each other, autonomous AI agents have unpredictable effects on the real world.How do LLM agents like GPT-4 behave? A recent paper examined the safety of LLMs acting as agents. When playing text-based games, LLMs often behave in power-seeking, deceptive, or Machiavellian ways. This happens naturally. Much like how LLMs trained to mimic human writings may learn to output toxic text, agents trained to optimize goals may learn to exhibit ends-justify-the-means / Machiavellian behavior by default. Research to reduce LLMs’ Machiavellian tendencies is still in its infancy.Natural Selection Favors AIs over HumansCAIS director Dan Hendrycks released a paper titled Natural Selection Favors AIs over Humans.The abstract for the paper is as follows:For billions of years, evolution has been the driving force behind the development of life, including humans. Evolution endowed humans with high intelligence, which allowed us to become one of the most successful species on the planet. Today, humans aim to create artificial intelligence systems that surpass even our own intelligence. As artificial intelligences (AIs) evolve and eventually surpass us in all domains, how might evolution shape our relations with AIs? By analyzing the environment that is shaping the evolution of AIs, we argue that the most successful AI agents will likely have undesirable traits. Competitive pressures among corporations and militaries will give rise to AI agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to hu...]]>
Oliver Z https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:38 None full 5663
fZkeMsH2YETGfyDrL_NL_EA_EA EA - Hiding in Plain Sight: Mexico’s Octopus Farm/Research Facade by Tessa @ ALI Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiding in Plain Sight: Mexico’s Octopus Farm/Research Facade, published by Tessa @ ALI on April 19, 2023 on The Effective Altruism Forum.Recap:In November 2022, Aquatic Life Institute (ALI) implemented a global campaign that aims to increase public and legislative pressure on countries/regions where octopus farms are being considered to achieve a regulatory ban, and reduce future chances of these farms being created elsewhere. We started Banding Together to Ban Octopus Farming, and have witnessed both promising and pessimistic developments.Good News First:2023 has, thus far, demonstrated just how universal the looming issue of octopus farming is. Concerns originated from the establishment of a potential farm in Spain, however, grievances quickly traveled.From the United States; Hawaii's Division of Aquatic Resources issued a cease and desist letter to Kanaloa Octopus Farm for operating without the required permits. And house bill 1153: "Prohibiting Octopus Farming" was proposed in Washington state. HB 1153 passed the first committee vote with overwhelming support from policymakers and members of the public across the globe. The bill received 9 "yes" votes and 2 "nays" from committee members. Despite overwhelming, bipartisan support, the bill now sits in the Agriculture and Natural Resources Committee until 2024. However, this truly represents a historic moment for the movement to #BanOctopusFarming. During the voting session, policymakers spoke about octopus sentience and intelligence, and how this is a great opportunity to protect an incredible species from somber futures in factory farms. ALI will ensure that this bill is a priority topic again during the next legislative session.Across the pond, RSPCA calls for halt to plans for world's first octopus farm. This announcement is the first public objection raising "concerns over the commercial farming of octopuses and the negative welfare impact it could have on this complex animal" from a seafood certification body. Aquatic Life Institute works closely with RSPCA Assured (RSPCA’s farm animal welfare assurance scheme), to ensure that all seafood production prioritizes high-welfare practices. They are a valuable stakeholder in ALI's Certifier Campaign, and we commend them for taking the lead on such a timely issue.The Not So Good News:The small town of Sisal, Yucatan is now the location of Mexico’s first octopus farm, appearing extensively in national and international media outlets as a groundbreaking industry for the region.The Sisal unit of the Universidad Autonoma de Mexico (UNAM), the country’s largest and most prestigious university, initiated the research project to study the physiology of the most common regional species: Octopus Maya. UNAM’s research center created an agreement with local families to establish Moluscos del Mayab, the commercial branch of the facility.UNAM research facility obtains pregnant females from the surrounding ecosystem. Still relying on wild populations for replenishing their broodstock, they are currently conducting research related to the development of a self-sustaining reproductive unit. Once females lay eggs to be kept in a patented incubator, they are then slaughtered and commercialized. The eggs are artificially incubated for 50 days, producing around 20,000 eggs per month.The Sisal unit has around 6-8 recirculating tanks for adults and 12-15 tanks for juveniles. Pre-growth tanks hold around 25 larvae per square meter. In total, this tank can fit approximately 707 larvae at any given time. The second growth tank can hold around 288 juveniles, reducing the density at this stage due to aggressive juvenile cannibalism. Up to 191 octopus are held in each grow-out tank. With a 52% mortality rate, averaging around 5% per week, researchers note around 30% is directly relat...]]>
Tessa @ ALI https://forum.effectivealtruism.org/posts/fZkeMsH2YETGfyDrL/hiding-in-plain-sight-mexico-s-octopus-farm-research-facade Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiding in Plain Sight: Mexico’s Octopus Farm/Research Facade, published by Tessa @ ALI on April 19, 2023 on The Effective Altruism Forum.Recap:In November 2022, Aquatic Life Institute (ALI) implemented a global campaign that aims to increase public and legislative pressure on countries/regions where octopus farms are being considered to achieve a regulatory ban, and reduce future chances of these farms being created elsewhere. We started Banding Together to Ban Octopus Farming, and have witnessed both promising and pessimistic developments.Good News First:2023 has, thus far, demonstrated just how universal the looming issue of octopus farming is. Concerns originated from the establishment of a potential farm in Spain, however, grievances quickly traveled.From the United States; Hawaii's Division of Aquatic Resources issued a cease and desist letter to Kanaloa Octopus Farm for operating without the required permits. And house bill 1153: "Prohibiting Octopus Farming" was proposed in Washington state. HB 1153 passed the first committee vote with overwhelming support from policymakers and members of the public across the globe. The bill received 9 "yes" votes and 2 "nays" from committee members. Despite overwhelming, bipartisan support, the bill now sits in the Agriculture and Natural Resources Committee until 2024. However, this truly represents a historic moment for the movement to #BanOctopusFarming. During the voting session, policymakers spoke about octopus sentience and intelligence, and how this is a great opportunity to protect an incredible species from somber futures in factory farms. ALI will ensure that this bill is a priority topic again during the next legislative session.Across the pond, RSPCA calls for halt to plans for world's first octopus farm. This announcement is the first public objection raising "concerns over the commercial farming of octopuses and the negative welfare impact it could have on this complex animal" from a seafood certification body. Aquatic Life Institute works closely with RSPCA Assured (RSPCA’s farm animal welfare assurance scheme), to ensure that all seafood production prioritizes high-welfare practices. They are a valuable stakeholder in ALI's Certifier Campaign, and we commend them for taking the lead on such a timely issue.The Not So Good News:The small town of Sisal, Yucatan is now the location of Mexico’s first octopus farm, appearing extensively in national and international media outlets as a groundbreaking industry for the region.The Sisal unit of the Universidad Autonoma de Mexico (UNAM), the country’s largest and most prestigious university, initiated the research project to study the physiology of the most common regional species: Octopus Maya. UNAM’s research center created an agreement with local families to establish Moluscos del Mayab, the commercial branch of the facility.UNAM research facility obtains pregnant females from the surrounding ecosystem. Still relying on wild populations for replenishing their broodstock, they are currently conducting research related to the development of a self-sustaining reproductive unit. Once females lay eggs to be kept in a patented incubator, they are then slaughtered and commercialized. The eggs are artificially incubated for 50 days, producing around 20,000 eggs per month.The Sisal unit has around 6-8 recirculating tanks for adults and 12-15 tanks for juveniles. Pre-growth tanks hold around 25 larvae per square meter. In total, this tank can fit approximately 707 larvae at any given time. The second growth tank can hold around 288 juveniles, reducing the density at this stage due to aggressive juvenile cannibalism. Up to 191 octopus are held in each grow-out tank. With a 52% mortality rate, averaging around 5% per week, researchers note around 30% is directly relat...]]>
Wed, 19 Apr 2023 18:01:36 +0000 EA - Hiding in Plain Sight: Mexico’s Octopus Farm/Research Facade by Tessa @ ALI Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiding in Plain Sight: Mexico’s Octopus Farm/Research Facade, published by Tessa @ ALI on April 19, 2023 on The Effective Altruism Forum.Recap:In November 2022, Aquatic Life Institute (ALI) implemented a global campaign that aims to increase public and legislative pressure on countries/regions where octopus farms are being considered to achieve a regulatory ban, and reduce future chances of these farms being created elsewhere. We started Banding Together to Ban Octopus Farming, and have witnessed both promising and pessimistic developments.Good News First:2023 has, thus far, demonstrated just how universal the looming issue of octopus farming is. Concerns originated from the establishment of a potential farm in Spain, however, grievances quickly traveled.From the United States; Hawaii's Division of Aquatic Resources issued a cease and desist letter to Kanaloa Octopus Farm for operating without the required permits. And house bill 1153: "Prohibiting Octopus Farming" was proposed in Washington state. HB 1153 passed the first committee vote with overwhelming support from policymakers and members of the public across the globe. The bill received 9 "yes" votes and 2 "nays" from committee members. Despite overwhelming, bipartisan support, the bill now sits in the Agriculture and Natural Resources Committee until 2024. However, this truly represents a historic moment for the movement to #BanOctopusFarming. During the voting session, policymakers spoke about octopus sentience and intelligence, and how this is a great opportunity to protect an incredible species from somber futures in factory farms. ALI will ensure that this bill is a priority topic again during the next legislative session.Across the pond, RSPCA calls for halt to plans for world's first octopus farm. This announcement is the first public objection raising "concerns over the commercial farming of octopuses and the negative welfare impact it could have on this complex animal" from a seafood certification body. Aquatic Life Institute works closely with RSPCA Assured (RSPCA’s farm animal welfare assurance scheme), to ensure that all seafood production prioritizes high-welfare practices. They are a valuable stakeholder in ALI's Certifier Campaign, and we commend them for taking the lead on such a timely issue.The Not So Good News:The small town of Sisal, Yucatan is now the location of Mexico’s first octopus farm, appearing extensively in national and international media outlets as a groundbreaking industry for the region.The Sisal unit of the Universidad Autonoma de Mexico (UNAM), the country’s largest and most prestigious university, initiated the research project to study the physiology of the most common regional species: Octopus Maya. UNAM’s research center created an agreement with local families to establish Moluscos del Mayab, the commercial branch of the facility.UNAM research facility obtains pregnant females from the surrounding ecosystem. Still relying on wild populations for replenishing their broodstock, they are currently conducting research related to the development of a self-sustaining reproductive unit. Once females lay eggs to be kept in a patented incubator, they are then slaughtered and commercialized. The eggs are artificially incubated for 50 days, producing around 20,000 eggs per month.The Sisal unit has around 6-8 recirculating tanks for adults and 12-15 tanks for juveniles. Pre-growth tanks hold around 25 larvae per square meter. In total, this tank can fit approximately 707 larvae at any given time. The second growth tank can hold around 288 juveniles, reducing the density at this stage due to aggressive juvenile cannibalism. Up to 191 octopus are held in each grow-out tank. With a 52% mortality rate, averaging around 5% per week, researchers note around 30% is directly relat...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiding in Plain Sight: Mexico’s Octopus Farm/Research Facade, published by Tessa @ ALI on April 19, 2023 on The Effective Altruism Forum.Recap:In November 2022, Aquatic Life Institute (ALI) implemented a global campaign that aims to increase public and legislative pressure on countries/regions where octopus farms are being considered to achieve a regulatory ban, and reduce future chances of these farms being created elsewhere. We started Banding Together to Ban Octopus Farming, and have witnessed both promising and pessimistic developments.Good News First:2023 has, thus far, demonstrated just how universal the looming issue of octopus farming is. Concerns originated from the establishment of a potential farm in Spain, however, grievances quickly traveled.From the United States; Hawaii's Division of Aquatic Resources issued a cease and desist letter to Kanaloa Octopus Farm for operating without the required permits. And house bill 1153: "Prohibiting Octopus Farming" was proposed in Washington state. HB 1153 passed the first committee vote with overwhelming support from policymakers and members of the public across the globe. The bill received 9 "yes" votes and 2 "nays" from committee members. Despite overwhelming, bipartisan support, the bill now sits in the Agriculture and Natural Resources Committee until 2024. However, this truly represents a historic moment for the movement to #BanOctopusFarming. During the voting session, policymakers spoke about octopus sentience and intelligence, and how this is a great opportunity to protect an incredible species from somber futures in factory farms. ALI will ensure that this bill is a priority topic again during the next legislative session.Across the pond, RSPCA calls for halt to plans for world's first octopus farm. This announcement is the first public objection raising "concerns over the commercial farming of octopuses and the negative welfare impact it could have on this complex animal" from a seafood certification body. Aquatic Life Institute works closely with RSPCA Assured (RSPCA’s farm animal welfare assurance scheme), to ensure that all seafood production prioritizes high-welfare practices. They are a valuable stakeholder in ALI's Certifier Campaign, and we commend them for taking the lead on such a timely issue.The Not So Good News:The small town of Sisal, Yucatan is now the location of Mexico’s first octopus farm, appearing extensively in national and international media outlets as a groundbreaking industry for the region.The Sisal unit of the Universidad Autonoma de Mexico (UNAM), the country’s largest and most prestigious university, initiated the research project to study the physiology of the most common regional species: Octopus Maya. UNAM’s research center created an agreement with local families to establish Moluscos del Mayab, the commercial branch of the facility.UNAM research facility obtains pregnant females from the surrounding ecosystem. Still relying on wild populations for replenishing their broodstock, they are currently conducting research related to the development of a self-sustaining reproductive unit. Once females lay eggs to be kept in a patented incubator, they are then slaughtered and commercialized. The eggs are artificially incubated for 50 days, producing around 20,000 eggs per month.The Sisal unit has around 6-8 recirculating tanks for adults and 12-15 tanks for juveniles. Pre-growth tanks hold around 25 larvae per square meter. In total, this tank can fit approximately 707 larvae at any given time. The second growth tank can hold around 288 juveniles, reducing the density at this stage due to aggressive juvenile cannibalism. Up to 191 octopus are held in each grow-out tank. With a 52% mortality rate, averaging around 5% per week, researchers note around 30% is directly relat...]]>
Tessa @ ALI https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:05 None full 5659
J3ribNjvPRtHCK7bC_NL_EA_EA EA - Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint by Otto Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint, published by Otto on April 19, 2023 on The Effective Altruism Forum.Thanks to Leon Lang from University of Amsterdam for proposing this post and reviewing the draft. Any views expressed in this post are not necessarily his.We (Existential Risk Observatory) have organized a public debate (recording here, photos here) in Amsterdam, the Netherlands with the purpose of creating more awareness of AI existential risk among policymakers and other leading voices of the societal debate. We want to share our experience in this post, because we think others might be able to follow a similar approach, and because we expect this to have significant xrisk-reducing net effects.GoalsOur high-level goal is to reduce existential risk, especially from AGI, by informing the public debate. We think more awareness will be net positive because of increased support for risk-reducing regulation and increased attention for AI safety work (more talent, more funding, more institutes working on the topic, more diverse actors, and more priority).For this debate, we focused on:Informing leaders of the societal debate (such as journalists, opinion makers, scientists, artists) about AI existential risk. This should help to widen the Overton window, increase the chance that this group of people will inform others, and therefore raise AI existential risk awareness in society.Informing politicians about AI existential risk, to increase the chance that risk-reducing policies will get enacted.StrategyWe used a debate setup with AI existential risk authorities and Members of Parliament (MPs). The strategy was that the MPs would get influenced by the authorities in this setting. We already contacted MPs and MP assistants before and had meetings with several. In order to be invited to meetings with MPs and/or MP assistants, we think that having published in mainstream media, having a good network, and having good policy proposals, are all helpful. If budget is available, hiring a lobbyist and a PR person is both helpful as well (we have a freelance senior PR person and a medior volunteer lobbyist).Two Dutch MPs attended our debate. For them, advantages might include getting (media) attention and informing themselves. Stuart Russell agreed to be our keynote speaker (remote), and several other experts who all had good AI existential risk expertise attended as well. Additionally, we found a moderator with existing AI existential risk knowledge, which was important in making sure the debate went well. Finally, we found a leading debate center in Amsterdam (Pakhuis de Zwijger) willing to host the debate. We promoted our debate on our own social media, through the venue, we were mentioned in the prominent Dutch Future Affairs newsletter, and we promoted our event in EA whatsapp groups. This was sufficient to sell out the largest debate hall in Amsterdam (320 seats, tickets were free).We organized this event in the Netherlands mainly because this is where we are most active. However, since the event was partially focused on getting policy implemented, it should be considered to organize events such as this debate especially where policy is most urgently needed (DC, but perhaps also Beijing, Brussels, or London).ProgramWe started with an introductory talk, after which we played some documentary fragments. After this introduction, our main guest Stuart Russell gave a talk and audience Q&A on AI existential risk. We closed with the 5-person panel, consisting of Queeny Rajkowski (MP for VVD, the largest governing party, center-right, with a portfolio including Digitization and Cybersecurity), Lammert van Raan (MP for the Party for the Animals, a medium-sized left party focusing on animal rights and climate), Mark Br...]]>
Otto https://forum.effectivealtruism.org/posts/J3ribNjvPRtHCK7bC/organizing-a-debate-with-experts-and-mps-to-raise-ai-xrisk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint, published by Otto on April 19, 2023 on The Effective Altruism Forum.Thanks to Leon Lang from University of Amsterdam for proposing this post and reviewing the draft. Any views expressed in this post are not necessarily his.We (Existential Risk Observatory) have organized a public debate (recording here, photos here) in Amsterdam, the Netherlands with the purpose of creating more awareness of AI existential risk among policymakers and other leading voices of the societal debate. We want to share our experience in this post, because we think others might be able to follow a similar approach, and because we expect this to have significant xrisk-reducing net effects.GoalsOur high-level goal is to reduce existential risk, especially from AGI, by informing the public debate. We think more awareness will be net positive because of increased support for risk-reducing regulation and increased attention for AI safety work (more talent, more funding, more institutes working on the topic, more diverse actors, and more priority).For this debate, we focused on:Informing leaders of the societal debate (such as journalists, opinion makers, scientists, artists) about AI existential risk. This should help to widen the Overton window, increase the chance that this group of people will inform others, and therefore raise AI existential risk awareness in society.Informing politicians about AI existential risk, to increase the chance that risk-reducing policies will get enacted.StrategyWe used a debate setup with AI existential risk authorities and Members of Parliament (MPs). The strategy was that the MPs would get influenced by the authorities in this setting. We already contacted MPs and MP assistants before and had meetings with several. In order to be invited to meetings with MPs and/or MP assistants, we think that having published in mainstream media, having a good network, and having good policy proposals, are all helpful. If budget is available, hiring a lobbyist and a PR person is both helpful as well (we have a freelance senior PR person and a medior volunteer lobbyist).Two Dutch MPs attended our debate. For them, advantages might include getting (media) attention and informing themselves. Stuart Russell agreed to be our keynote speaker (remote), and several other experts who all had good AI existential risk expertise attended as well. Additionally, we found a moderator with existing AI existential risk knowledge, which was important in making sure the debate went well. Finally, we found a leading debate center in Amsterdam (Pakhuis de Zwijger) willing to host the debate. We promoted our debate on our own social media, through the venue, we were mentioned in the prominent Dutch Future Affairs newsletter, and we promoted our event in EA whatsapp groups. This was sufficient to sell out the largest debate hall in Amsterdam (320 seats, tickets were free).We organized this event in the Netherlands mainly because this is where we are most active. However, since the event was partially focused on getting policy implemented, it should be considered to organize events such as this debate especially where policy is most urgently needed (DC, but perhaps also Beijing, Brussels, or London).ProgramWe started with an introductory talk, after which we played some documentary fragments. After this introduction, our main guest Stuart Russell gave a talk and audience Q&A on AI existential risk. We closed with the 5-person panel, consisting of Queeny Rajkowski (MP for VVD, the largest governing party, center-right, with a portfolio including Digitization and Cybersecurity), Lammert van Raan (MP for the Party for the Animals, a medium-sized left party focusing on animal rights and climate), Mark Br...]]>
Wed, 19 Apr 2023 16:58:03 +0000 EA - Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint by Otto Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint, published by Otto on April 19, 2023 on The Effective Altruism Forum.Thanks to Leon Lang from University of Amsterdam for proposing this post and reviewing the draft. Any views expressed in this post are not necessarily his.We (Existential Risk Observatory) have organized a public debate (recording here, photos here) in Amsterdam, the Netherlands with the purpose of creating more awareness of AI existential risk among policymakers and other leading voices of the societal debate. We want to share our experience in this post, because we think others might be able to follow a similar approach, and because we expect this to have significant xrisk-reducing net effects.GoalsOur high-level goal is to reduce existential risk, especially from AGI, by informing the public debate. We think more awareness will be net positive because of increased support for risk-reducing regulation and increased attention for AI safety work (more talent, more funding, more institutes working on the topic, more diverse actors, and more priority).For this debate, we focused on:Informing leaders of the societal debate (such as journalists, opinion makers, scientists, artists) about AI existential risk. This should help to widen the Overton window, increase the chance that this group of people will inform others, and therefore raise AI existential risk awareness in society.Informing politicians about AI existential risk, to increase the chance that risk-reducing policies will get enacted.StrategyWe used a debate setup with AI existential risk authorities and Members of Parliament (MPs). The strategy was that the MPs would get influenced by the authorities in this setting. We already contacted MPs and MP assistants before and had meetings with several. In order to be invited to meetings with MPs and/or MP assistants, we think that having published in mainstream media, having a good network, and having good policy proposals, are all helpful. If budget is available, hiring a lobbyist and a PR person is both helpful as well (we have a freelance senior PR person and a medior volunteer lobbyist).Two Dutch MPs attended our debate. For them, advantages might include getting (media) attention and informing themselves. Stuart Russell agreed to be our keynote speaker (remote), and several other experts who all had good AI existential risk expertise attended as well. Additionally, we found a moderator with existing AI existential risk knowledge, which was important in making sure the debate went well. Finally, we found a leading debate center in Amsterdam (Pakhuis de Zwijger) willing to host the debate. We promoted our debate on our own social media, through the venue, we were mentioned in the prominent Dutch Future Affairs newsletter, and we promoted our event in EA whatsapp groups. This was sufficient to sell out the largest debate hall in Amsterdam (320 seats, tickets were free).We organized this event in the Netherlands mainly because this is where we are most active. However, since the event was partially focused on getting policy implemented, it should be considered to organize events such as this debate especially where policy is most urgently needed (DC, but perhaps also Beijing, Brussels, or London).ProgramWe started with an introductory talk, after which we played some documentary fragments. After this introduction, our main guest Stuart Russell gave a talk and audience Q&A on AI existential risk. We closed with the 5-person panel, consisting of Queeny Rajkowski (MP for VVD, the largest governing party, center-right, with a portfolio including Digitization and Cybersecurity), Lammert van Raan (MP for the Party for the Animals, a medium-sized left party focusing on animal rights and climate), Mark Br...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint, published by Otto on April 19, 2023 on The Effective Altruism Forum.Thanks to Leon Lang from University of Amsterdam for proposing this post and reviewing the draft. Any views expressed in this post are not necessarily his.We (Existential Risk Observatory) have organized a public debate (recording here, photos here) in Amsterdam, the Netherlands with the purpose of creating more awareness of AI existential risk among policymakers and other leading voices of the societal debate. We want to share our experience in this post, because we think others might be able to follow a similar approach, and because we expect this to have significant xrisk-reducing net effects.GoalsOur high-level goal is to reduce existential risk, especially from AGI, by informing the public debate. We think more awareness will be net positive because of increased support for risk-reducing regulation and increased attention for AI safety work (more talent, more funding, more institutes working on the topic, more diverse actors, and more priority).For this debate, we focused on:Informing leaders of the societal debate (such as journalists, opinion makers, scientists, artists) about AI existential risk. This should help to widen the Overton window, increase the chance that this group of people will inform others, and therefore raise AI existential risk awareness in society.Informing politicians about AI existential risk, to increase the chance that risk-reducing policies will get enacted.StrategyWe used a debate setup with AI existential risk authorities and Members of Parliament (MPs). The strategy was that the MPs would get influenced by the authorities in this setting. We already contacted MPs and MP assistants before and had meetings with several. In order to be invited to meetings with MPs and/or MP assistants, we think that having published in mainstream media, having a good network, and having good policy proposals, are all helpful. If budget is available, hiring a lobbyist and a PR person is both helpful as well (we have a freelance senior PR person and a medior volunteer lobbyist).Two Dutch MPs attended our debate. For them, advantages might include getting (media) attention and informing themselves. Stuart Russell agreed to be our keynote speaker (remote), and several other experts who all had good AI existential risk expertise attended as well. Additionally, we found a moderator with existing AI existential risk knowledge, which was important in making sure the debate went well. Finally, we found a leading debate center in Amsterdam (Pakhuis de Zwijger) willing to host the debate. We promoted our debate on our own social media, through the venue, we were mentioned in the prominent Dutch Future Affairs newsletter, and we promoted our event in EA whatsapp groups. This was sufficient to sell out the largest debate hall in Amsterdam (320 seats, tickets were free).We organized this event in the Netherlands mainly because this is where we are most active. However, since the event was partially focused on getting policy implemented, it should be considered to organize events such as this debate especially where policy is most urgently needed (DC, but perhaps also Beijing, Brussels, or London).ProgramWe started with an introductory talk, after which we played some documentary fragments. After this introduction, our main guest Stuart Russell gave a talk and audience Q&A on AI existential risk. We closed with the 5-person panel, consisting of Queeny Rajkowski (MP for VVD, the largest governing party, center-right, with a portfolio including Digitization and Cybersecurity), Lammert van Raan (MP for the Party for the Animals, a medium-sized left party focusing on animal rights and climate), Mark Br...]]>
Otto https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:06 None full 5662
Erw9x4i9Tkcyb32oA_NL_EA_EA EA - Who were The Righteous Among the Nations? by Carole Bibas-Barkan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who were The Righteous Among the Nations?, published by Carole Bibas-Barkan on April 18, 2023 on The Effective Altruism Forum.On the occasion of the Holocaust Remembrance Day happening today in Israel, I wanted to explore connections between the Holocaust and EA. My first thoughts were toward the importance of remembrance: can collective memory prevent future generations from repeating past mistakes? But then, I recalled my trip to Poland and the concentration camps years ago. I remembered our guide mentioning a study identifying four characteristics of The Righteous Among the Nations. I think that identifying the characteristics of those who chose "good" during one of the darkest times in History can be of particular interest to the EA community. Unfortunately, I didn't find this study, but here is an extract from the Yad Vashem Museum article that explores the subject:"Most rescuers were ordinary people. Some acted out of political, ideological or religious convictions; others were not idealists, but merely human beings who cared about the people around them.In many cases they never planned to become rescuers and were totally unprepared for the moment in which they had to make such a far-reaching decision. They were ordinary human beings, and it is precisely their humanity that touches us and should serve as a model. The Righteous are Christians from all denominations and churches, Muslims and agnostics; men and women of all ages; they come from all walks of life; highly educated people as well as illiterate peasants; public figures as well as people from society's margins; city dwellers and farmers from the remotest corners of Europe; university professors, teachers, physicians, clergy, nuns, diplomats, simple workers, servants, resistance fighters, policemen, peasants, fishermen, a zoo director, a circus owner, and many more.Scholars have attempted to trace the characteristics that these Righteous share and to identify who was more likely to extend help to the Jews or to a persecuted person. Some claim that the Righteous are a diverse group and the only common denominator are the humanity and courage they displayed by standing up for their moral principles. Samuel P. Oliner and Pearl M. Oliner defined the altruistic personality. By comparing and contrasting rescuers and bystanders during the Holocaust, they pointed out that those who intervened were distinguished by characteristics such as empathy and a sense of connection to others. Nehama Tec who also studied many cases of Righteous, found a cluster of shared characteristics and conditions of separateness, individuality or marginality. The rescuers’ independence enabled them to act against the accepted conventions and beliefs.Bystanders were the rule, rescuers were the exception.However difficult and frightening, the fact that some found the courage to become rescuers demonstrates that some freedom of choice existed, and that saving Jews was not beyond the capacity of ordinary people throughout occupied Europe. The Righteous Among the Nations teach us that every person can make a difference."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Carole Bibas-Barkan https://forum.effectivealtruism.org/posts/Erw9x4i9Tkcyb32oA/who-were-the-righteous-among-the-nations Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who were The Righteous Among the Nations?, published by Carole Bibas-Barkan on April 18, 2023 on The Effective Altruism Forum.On the occasion of the Holocaust Remembrance Day happening today in Israel, I wanted to explore connections between the Holocaust and EA. My first thoughts were toward the importance of remembrance: can collective memory prevent future generations from repeating past mistakes? But then, I recalled my trip to Poland and the concentration camps years ago. I remembered our guide mentioning a study identifying four characteristics of The Righteous Among the Nations. I think that identifying the characteristics of those who chose "good" during one of the darkest times in History can be of particular interest to the EA community. Unfortunately, I didn't find this study, but here is an extract from the Yad Vashem Museum article that explores the subject:"Most rescuers were ordinary people. Some acted out of political, ideological or religious convictions; others were not idealists, but merely human beings who cared about the people around them.In many cases they never planned to become rescuers and were totally unprepared for the moment in which they had to make such a far-reaching decision. They were ordinary human beings, and it is precisely their humanity that touches us and should serve as a model. The Righteous are Christians from all denominations and churches, Muslims and agnostics; men and women of all ages; they come from all walks of life; highly educated people as well as illiterate peasants; public figures as well as people from society's margins; city dwellers and farmers from the remotest corners of Europe; university professors, teachers, physicians, clergy, nuns, diplomats, simple workers, servants, resistance fighters, policemen, peasants, fishermen, a zoo director, a circus owner, and many more.Scholars have attempted to trace the characteristics that these Righteous share and to identify who was more likely to extend help to the Jews or to a persecuted person. Some claim that the Righteous are a diverse group and the only common denominator are the humanity and courage they displayed by standing up for their moral principles. Samuel P. Oliner and Pearl M. Oliner defined the altruistic personality. By comparing and contrasting rescuers and bystanders during the Holocaust, they pointed out that those who intervened were distinguished by characteristics such as empathy and a sense of connection to others. Nehama Tec who also studied many cases of Righteous, found a cluster of shared characteristics and conditions of separateness, individuality or marginality. The rescuers’ independence enabled them to act against the accepted conventions and beliefs.Bystanders were the rule, rescuers were the exception.However difficult and frightening, the fact that some found the courage to become rescuers demonstrates that some freedom of choice existed, and that saving Jews was not beyond the capacity of ordinary people throughout occupied Europe. The Righteous Among the Nations teach us that every person can make a difference."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 19 Apr 2023 16:03:28 +0000 EA - Who were The Righteous Among the Nations? by Carole Bibas-Barkan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who were The Righteous Among the Nations?, published by Carole Bibas-Barkan on April 18, 2023 on The Effective Altruism Forum.On the occasion of the Holocaust Remembrance Day happening today in Israel, I wanted to explore connections between the Holocaust and EA. My first thoughts were toward the importance of remembrance: can collective memory prevent future generations from repeating past mistakes? But then, I recalled my trip to Poland and the concentration camps years ago. I remembered our guide mentioning a study identifying four characteristics of The Righteous Among the Nations. I think that identifying the characteristics of those who chose "good" during one of the darkest times in History can be of particular interest to the EA community. Unfortunately, I didn't find this study, but here is an extract from the Yad Vashem Museum article that explores the subject:"Most rescuers were ordinary people. Some acted out of political, ideological or religious convictions; others were not idealists, but merely human beings who cared about the people around them.In many cases they never planned to become rescuers and were totally unprepared for the moment in which they had to make such a far-reaching decision. They were ordinary human beings, and it is precisely their humanity that touches us and should serve as a model. The Righteous are Christians from all denominations and churches, Muslims and agnostics; men and women of all ages; they come from all walks of life; highly educated people as well as illiterate peasants; public figures as well as people from society's margins; city dwellers and farmers from the remotest corners of Europe; university professors, teachers, physicians, clergy, nuns, diplomats, simple workers, servants, resistance fighters, policemen, peasants, fishermen, a zoo director, a circus owner, and many more.Scholars have attempted to trace the characteristics that these Righteous share and to identify who was more likely to extend help to the Jews or to a persecuted person. Some claim that the Righteous are a diverse group and the only common denominator are the humanity and courage they displayed by standing up for their moral principles. Samuel P. Oliner and Pearl M. Oliner defined the altruistic personality. By comparing and contrasting rescuers and bystanders during the Holocaust, they pointed out that those who intervened were distinguished by characteristics such as empathy and a sense of connection to others. Nehama Tec who also studied many cases of Righteous, found a cluster of shared characteristics and conditions of separateness, individuality or marginality. The rescuers’ independence enabled them to act against the accepted conventions and beliefs.Bystanders were the rule, rescuers were the exception.However difficult and frightening, the fact that some found the courage to become rescuers demonstrates that some freedom of choice existed, and that saving Jews was not beyond the capacity of ordinary people throughout occupied Europe. The Righteous Among the Nations teach us that every person can make a difference."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who were The Righteous Among the Nations?, published by Carole Bibas-Barkan on April 18, 2023 on The Effective Altruism Forum.On the occasion of the Holocaust Remembrance Day happening today in Israel, I wanted to explore connections between the Holocaust and EA. My first thoughts were toward the importance of remembrance: can collective memory prevent future generations from repeating past mistakes? But then, I recalled my trip to Poland and the concentration camps years ago. I remembered our guide mentioning a study identifying four characteristics of The Righteous Among the Nations. I think that identifying the characteristics of those who chose "good" during one of the darkest times in History can be of particular interest to the EA community. Unfortunately, I didn't find this study, but here is an extract from the Yad Vashem Museum article that explores the subject:"Most rescuers were ordinary people. Some acted out of political, ideological or religious convictions; others were not idealists, but merely human beings who cared about the people around them.In many cases they never planned to become rescuers and were totally unprepared for the moment in which they had to make such a far-reaching decision. They were ordinary human beings, and it is precisely their humanity that touches us and should serve as a model. The Righteous are Christians from all denominations and churches, Muslims and agnostics; men and women of all ages; they come from all walks of life; highly educated people as well as illiterate peasants; public figures as well as people from society's margins; city dwellers and farmers from the remotest corners of Europe; university professors, teachers, physicians, clergy, nuns, diplomats, simple workers, servants, resistance fighters, policemen, peasants, fishermen, a zoo director, a circus owner, and many more.Scholars have attempted to trace the characteristics that these Righteous share and to identify who was more likely to extend help to the Jews or to a persecuted person. Some claim that the Righteous are a diverse group and the only common denominator are the humanity and courage they displayed by standing up for their moral principles. Samuel P. Oliner and Pearl M. Oliner defined the altruistic personality. By comparing and contrasting rescuers and bystanders during the Holocaust, they pointed out that those who intervened were distinguished by characteristics such as empathy and a sense of connection to others. Nehama Tec who also studied many cases of Righteous, found a cluster of shared characteristics and conditions of separateness, individuality or marginality. The rescuers’ independence enabled them to act against the accepted conventions and beliefs.Bystanders were the rule, rescuers were the exception.However difficult and frightening, the fact that some found the courage to become rescuers demonstrates that some freedom of choice existed, and that saving Jews was not beyond the capacity of ordinary people throughout occupied Europe. The Righteous Among the Nations teach us that every person can make a difference."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Carole Bibas-Barkan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:54 None full 5670
2gRaH8rDTc4WyGfe7_NL_EA_EA EA - Ghana has approved the use of a malaria vaccine with >70% efficacy by Henry Howard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ghana has approved the use of a malaria vaccine with >70% efficacy, published by Henry Howard on April 19, 2023 on The Effective Altruism Forum.The R21/Matrix-M malaria vaccine showed 71%-80% efficacy in preventing cases of malaria in a randomised controlled phase 2 trial published at the end of last year. A phase 3 trial is ongoing.67 (51%) of 132 children who received R21/Matrix-M with low-dose adjuvant, 54 (39%) of 137 children who received R21/Matrix-M with high-dose adjuvant, and 121 (86%) of 140 children who received the rabies vaccine developed clinical malaria by 12 months(the rabies vaccine was the control)The next best thing is the RTS,S/AS01 vaccine, which WHO started rolling out in some pilot programs in 2016 after trials showed its reduced hospital admissions from severe malaria by around 30%, less impressive than R21/Matrix-M.A few days ago, Ghana's food and drugs administration announced that they've approved the R21/Matrix-M for children aged 5 months to 36 months. It seems like there will be more steps before the vaccines actually start rolling out (they might need to wait for WHO approval and/or the results of the phase 3 trial). In any case, very exciting news.I found out about this because it is on the In the news section of the front page of Wikipedia.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Henry Howard https://forum.effectivealtruism.org/posts/2gRaH8rDTc4WyGfe7/ghana-has-approved-the-use-of-a-malaria-vaccine-with-greater Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ghana has approved the use of a malaria vaccine with >70% efficacy, published by Henry Howard on April 19, 2023 on The Effective Altruism Forum.The R21/Matrix-M malaria vaccine showed 71%-80% efficacy in preventing cases of malaria in a randomised controlled phase 2 trial published at the end of last year. A phase 3 trial is ongoing.67 (51%) of 132 children who received R21/Matrix-M with low-dose adjuvant, 54 (39%) of 137 children who received R21/Matrix-M with high-dose adjuvant, and 121 (86%) of 140 children who received the rabies vaccine developed clinical malaria by 12 months(the rabies vaccine was the control)The next best thing is the RTS,S/AS01 vaccine, which WHO started rolling out in some pilot programs in 2016 after trials showed its reduced hospital admissions from severe malaria by around 30%, less impressive than R21/Matrix-M.A few days ago, Ghana's food and drugs administration announced that they've approved the R21/Matrix-M for children aged 5 months to 36 months. It seems like there will be more steps before the vaccines actually start rolling out (they might need to wait for WHO approval and/or the results of the phase 3 trial). In any case, very exciting news.I found out about this because it is on the In the news section of the front page of Wikipedia.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 19 Apr 2023 14:59:02 +0000 EA - Ghana has approved the use of a malaria vaccine with >70% efficacy by Henry Howard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ghana has approved the use of a malaria vaccine with >70% efficacy, published by Henry Howard on April 19, 2023 on The Effective Altruism Forum.The R21/Matrix-M malaria vaccine showed 71%-80% efficacy in preventing cases of malaria in a randomised controlled phase 2 trial published at the end of last year. A phase 3 trial is ongoing.67 (51%) of 132 children who received R21/Matrix-M with low-dose adjuvant, 54 (39%) of 137 children who received R21/Matrix-M with high-dose adjuvant, and 121 (86%) of 140 children who received the rabies vaccine developed clinical malaria by 12 months(the rabies vaccine was the control)The next best thing is the RTS,S/AS01 vaccine, which WHO started rolling out in some pilot programs in 2016 after trials showed its reduced hospital admissions from severe malaria by around 30%, less impressive than R21/Matrix-M.A few days ago, Ghana's food and drugs administration announced that they've approved the R21/Matrix-M for children aged 5 months to 36 months. It seems like there will be more steps before the vaccines actually start rolling out (they might need to wait for WHO approval and/or the results of the phase 3 trial). In any case, very exciting news.I found out about this because it is on the In the news section of the front page of Wikipedia.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ghana has approved the use of a malaria vaccine with >70% efficacy, published by Henry Howard on April 19, 2023 on The Effective Altruism Forum.The R21/Matrix-M malaria vaccine showed 71%-80% efficacy in preventing cases of malaria in a randomised controlled phase 2 trial published at the end of last year. A phase 3 trial is ongoing.67 (51%) of 132 children who received R21/Matrix-M with low-dose adjuvant, 54 (39%) of 137 children who received R21/Matrix-M with high-dose adjuvant, and 121 (86%) of 140 children who received the rabies vaccine developed clinical malaria by 12 months(the rabies vaccine was the control)The next best thing is the RTS,S/AS01 vaccine, which WHO started rolling out in some pilot programs in 2016 after trials showed its reduced hospital admissions from severe malaria by around 30%, less impressive than R21/Matrix-M.A few days ago, Ghana's food and drugs administration announced that they've approved the R21/Matrix-M for children aged 5 months to 36 months. It seems like there will be more steps before the vaccines actually start rolling out (they might need to wait for WHO approval and/or the results of the phase 3 trial). In any case, very exciting news.I found out about this because it is on the In the news section of the front page of Wikipedia.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Henry Howard https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:41 None full 5660
NsrbM2sBREYZwns88_NL_EA_EA EA - Follow-up on "Institutions for Future Generations" by tylermjohn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Follow-up on "Institutions for Future Generations", published by tylermjohn on April 19, 2023 on The Effective Altruism Forum.3.5 years ago I wrote a post called "Institutions for Future Generations" indicating that I had started a project to figure out what institutions and policies could help better represent future generations' interests in government and soliciting help coming up with ideas.I finished the research for this project in 2020, and had expected to spend more time wrapping it up and making it pretty for public consumption, but I've never come back to the project and don't expect to. It's possible that some of the research will be useful to people thinking about policy and institutional reform, so I wanted to at least share the (messy!) Google doc with my work in it for consumption on the EA Forum.Here's the Google doc.I pulled out many of the most useful things I learned and published them in a few stand-alone papers, which are probably more helpful than the Google doc overall:Longtermist Institutional Reform, with William MacAskillA general paper on what gives governments short time horizons and some of the more promising things we can do about thisSecuring Political Accountability to Future Generations with Retrospective Accountability (with help from Charlotte Siegmann)A mechanism design paper proposing one way we might incentivise current generations to promote the interests of future generations, by making policymakers financially dependent on the decisions future generations make about how successful they were at promoting long-term interestsEmpowering Future People by Empowering the Young?A short paper, the most important part of which is that empowering the young doesn't seem to do all that much for future generations, though there's some positive evidence from the finding that age is a significant determinant of pro-environmentalist attitudes, to the point that a 20-year-old voter is 10 per cent more likely to vote favourably to the environment than an 80-year-old voterWant politics to be better? Focus on future generations, with William MacAskill (and lots of help from Fin Moorhouse)Popular-audience piece on the Japanese Future Design movement, extinction risk, integrating forecasting and technology expertise into government, and representing future generations in governmentMy takeaways from this research:Overall, I've come to think that this research is less valuable than I thought it was when I started the project. There's very little strong literature on building future-oriented institutions and policy, I didn't find a lot of great success stories of policymakers doing this, and it actually just seems pretty hard to get right. I think there are a few great things people can do to generally make governments more future-oriented, like lowering the discount rate and infusing better technology expertise into governance, but most of these things are technocratic, moderate, and look less like "representing future generations" than I initially expected. I also increasingly find abstract institutional design work less relevant than policy work.I'm still optimistic that someone could find ideas better than the ones that I found, and I would still be excited if a huge number of economists and political scientists spent much more time thinking about these mechanism design issues, but I think marginal resources are better invested in existential risk policy, especially AI policy.I'll try to answer any questions, but it's been a few years since I've thought about these things much and I'm unsure how much of the doc I still endorse today!I make no guarantee that the scoring system used in the doc is accurate, internally consistent, or informative. I meant to spend more time developing a useful scoring system, but because I didn't manage...]]>
tylermjohn https://forum.effectivealtruism.org/posts/NsrbM2sBREYZwns88/follow-up-on-institutions-for-future-generations Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Follow-up on "Institutions for Future Generations", published by tylermjohn on April 19, 2023 on The Effective Altruism Forum.3.5 years ago I wrote a post called "Institutions for Future Generations" indicating that I had started a project to figure out what institutions and policies could help better represent future generations' interests in government and soliciting help coming up with ideas.I finished the research for this project in 2020, and had expected to spend more time wrapping it up and making it pretty for public consumption, but I've never come back to the project and don't expect to. It's possible that some of the research will be useful to people thinking about policy and institutional reform, so I wanted to at least share the (messy!) Google doc with my work in it for consumption on the EA Forum.Here's the Google doc.I pulled out many of the most useful things I learned and published them in a few stand-alone papers, which are probably more helpful than the Google doc overall:Longtermist Institutional Reform, with William MacAskillA general paper on what gives governments short time horizons and some of the more promising things we can do about thisSecuring Political Accountability to Future Generations with Retrospective Accountability (with help from Charlotte Siegmann)A mechanism design paper proposing one way we might incentivise current generations to promote the interests of future generations, by making policymakers financially dependent on the decisions future generations make about how successful they were at promoting long-term interestsEmpowering Future People by Empowering the Young?A short paper, the most important part of which is that empowering the young doesn't seem to do all that much for future generations, though there's some positive evidence from the finding that age is a significant determinant of pro-environmentalist attitudes, to the point that a 20-year-old voter is 10 per cent more likely to vote favourably to the environment than an 80-year-old voterWant politics to be better? Focus on future generations, with William MacAskill (and lots of help from Fin Moorhouse)Popular-audience piece on the Japanese Future Design movement, extinction risk, integrating forecasting and technology expertise into government, and representing future generations in governmentMy takeaways from this research:Overall, I've come to think that this research is less valuable than I thought it was when I started the project. There's very little strong literature on building future-oriented institutions and policy, I didn't find a lot of great success stories of policymakers doing this, and it actually just seems pretty hard to get right. I think there are a few great things people can do to generally make governments more future-oriented, like lowering the discount rate and infusing better technology expertise into governance, but most of these things are technocratic, moderate, and look less like "representing future generations" than I initially expected. I also increasingly find abstract institutional design work less relevant than policy work.I'm still optimistic that someone could find ideas better than the ones that I found, and I would still be excited if a huge number of economists and political scientists spent much more time thinking about these mechanism design issues, but I think marginal resources are better invested in existential risk policy, especially AI policy.I'll try to answer any questions, but it's been a few years since I've thought about these things much and I'm unsure how much of the doc I still endorse today!I make no guarantee that the scoring system used in the doc is accurate, internally consistent, or informative. I meant to spend more time developing a useful scoring system, but because I didn't manage...]]>
Wed, 19 Apr 2023 14:25:52 +0000 EA - Follow-up on "Institutions for Future Generations" by tylermjohn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Follow-up on "Institutions for Future Generations", published by tylermjohn on April 19, 2023 on The Effective Altruism Forum.3.5 years ago I wrote a post called "Institutions for Future Generations" indicating that I had started a project to figure out what institutions and policies could help better represent future generations' interests in government and soliciting help coming up with ideas.I finished the research for this project in 2020, and had expected to spend more time wrapping it up and making it pretty for public consumption, but I've never come back to the project and don't expect to. It's possible that some of the research will be useful to people thinking about policy and institutional reform, so I wanted to at least share the (messy!) Google doc with my work in it for consumption on the EA Forum.Here's the Google doc.I pulled out many of the most useful things I learned and published them in a few stand-alone papers, which are probably more helpful than the Google doc overall:Longtermist Institutional Reform, with William MacAskillA general paper on what gives governments short time horizons and some of the more promising things we can do about thisSecuring Political Accountability to Future Generations with Retrospective Accountability (with help from Charlotte Siegmann)A mechanism design paper proposing one way we might incentivise current generations to promote the interests of future generations, by making policymakers financially dependent on the decisions future generations make about how successful they were at promoting long-term interestsEmpowering Future People by Empowering the Young?A short paper, the most important part of which is that empowering the young doesn't seem to do all that much for future generations, though there's some positive evidence from the finding that age is a significant determinant of pro-environmentalist attitudes, to the point that a 20-year-old voter is 10 per cent more likely to vote favourably to the environment than an 80-year-old voterWant politics to be better? Focus on future generations, with William MacAskill (and lots of help from Fin Moorhouse)Popular-audience piece on the Japanese Future Design movement, extinction risk, integrating forecasting and technology expertise into government, and representing future generations in governmentMy takeaways from this research:Overall, I've come to think that this research is less valuable than I thought it was when I started the project. There's very little strong literature on building future-oriented institutions and policy, I didn't find a lot of great success stories of policymakers doing this, and it actually just seems pretty hard to get right. I think there are a few great things people can do to generally make governments more future-oriented, like lowering the discount rate and infusing better technology expertise into governance, but most of these things are technocratic, moderate, and look less like "representing future generations" than I initially expected. I also increasingly find abstract institutional design work less relevant than policy work.I'm still optimistic that someone could find ideas better than the ones that I found, and I would still be excited if a huge number of economists and political scientists spent much more time thinking about these mechanism design issues, but I think marginal resources are better invested in existential risk policy, especially AI policy.I'll try to answer any questions, but it's been a few years since I've thought about these things much and I'm unsure how much of the doc I still endorse today!I make no guarantee that the scoring system used in the doc is accurate, internally consistent, or informative. I meant to spend more time developing a useful scoring system, but because I didn't manage...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Follow-up on "Institutions for Future Generations", published by tylermjohn on April 19, 2023 on The Effective Altruism Forum.3.5 years ago I wrote a post called "Institutions for Future Generations" indicating that I had started a project to figure out what institutions and policies could help better represent future generations' interests in government and soliciting help coming up with ideas.I finished the research for this project in 2020, and had expected to spend more time wrapping it up and making it pretty for public consumption, but I've never come back to the project and don't expect to. It's possible that some of the research will be useful to people thinking about policy and institutional reform, so I wanted to at least share the (messy!) Google doc with my work in it for consumption on the EA Forum.Here's the Google doc.I pulled out many of the most useful things I learned and published them in a few stand-alone papers, which are probably more helpful than the Google doc overall:Longtermist Institutional Reform, with William MacAskillA general paper on what gives governments short time horizons and some of the more promising things we can do about thisSecuring Political Accountability to Future Generations with Retrospective Accountability (with help from Charlotte Siegmann)A mechanism design paper proposing one way we might incentivise current generations to promote the interests of future generations, by making policymakers financially dependent on the decisions future generations make about how successful they were at promoting long-term interestsEmpowering Future People by Empowering the Young?A short paper, the most important part of which is that empowering the young doesn't seem to do all that much for future generations, though there's some positive evidence from the finding that age is a significant determinant of pro-environmentalist attitudes, to the point that a 20-year-old voter is 10 per cent more likely to vote favourably to the environment than an 80-year-old voterWant politics to be better? Focus on future generations, with William MacAskill (and lots of help from Fin Moorhouse)Popular-audience piece on the Japanese Future Design movement, extinction risk, integrating forecasting and technology expertise into government, and representing future generations in governmentMy takeaways from this research:Overall, I've come to think that this research is less valuable than I thought it was when I started the project. There's very little strong literature on building future-oriented institutions and policy, I didn't find a lot of great success stories of policymakers doing this, and it actually just seems pretty hard to get right. I think there are a few great things people can do to generally make governments more future-oriented, like lowering the discount rate and infusing better technology expertise into governance, but most of these things are technocratic, moderate, and look less like "representing future generations" than I initially expected. I also increasingly find abstract institutional design work less relevant than policy work.I'm still optimistic that someone could find ideas better than the ones that I found, and I would still be excited if a huge number of economists and political scientists spent much more time thinking about these mechanism design issues, but I think marginal resources are better invested in existential risk policy, especially AI policy.I'll try to answer any questions, but it's been a few years since I've thought about these things much and I'm unsure how much of the doc I still endorse today!I make no guarantee that the scoring system used in the doc is accurate, internally consistent, or informative. I meant to spend more time developing a useful scoring system, but because I didn't manage...]]>
tylermjohn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:32 None full 5661
wkAoqnaP7DhqHjyzh_NL_EA_EA EA - List of lists of government AI policy ideas by Zach Stein-Perlman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of lists of government AI policy ideas, published by Zach Stein-Perlman on April 17, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Zach Stein-Perlman https://forum.effectivealtruism.org/posts/wkAoqnaP7DhqHjyzh/list-of-lists-of-government-ai-policy-ideas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of lists of government AI policy ideas, published by Zach Stein-Perlman on April 17, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 18 Apr 2023 21:37:33 +0000 EA - List of lists of government AI policy ideas by Zach Stein-Perlman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of lists of government AI policy ideas, published by Zach Stein-Perlman on April 17, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of lists of government AI policy ideas, published by Zach Stein-Perlman on April 17, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Zach Stein-Perlman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:22 None full 5651
BEDpexBKvChZFbxS4_NL_EA_EA EA - 5 Proposed Changes to the Funding System to Increase Org Survival and Impact by Deena Englander Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 5 Proposed Changes to the Funding System to Increase Org Survival and Impact, published by Deena Englander on April 18, 2023 on The Effective Altruism Forum.An approach I often use in my coaching is to pick the ONE thing, that if you could change, would have an exponential (or just the most) impact on the rest of your day. I propose that we use that mentality to pick the most easy-hanging fruit to help EA orgs be most impactful. I personally think that the ONE thing for EA orgs is mentorship and support - Charity Entrepreneurship does an excellent job of that, and it's a model that the rest of the funding community should incorporate. At the risk of sounding too critical, I will say that I think it's somewhat neglectful of funders to give people financial support to start their organizations, but not provide them with the right org infrastructure support to help them be successful.The objective of this post is to highlight key, easily actionable areas that would likely make all the EA funding dollars much more impactful.A few thought exercisesIf you, as a funder, knew that by giving each startup an extra 10% to create a healthier infrastructure, you would increase the survival rate and/or impact on average by at least 30%, would that be worth it?Say you have 2 organizations with the same agenda. One started with the right resources and guidance to create a healthy infrastructure, and the other without. What would you expect the difference in the overall impact and survival of each org to be?My Perspective of the Current LandscapeTo start with, I want to add a disclaimer that this article is based on my own experiential data with EA and EA-aligned orgs, as well as the experiences and perspectives of many other service providers in the EA space (see this article about EASE). This is by no means inclusive of all orgs and all problems - it is just my subjective perspective of the current systems.Here's how I assess the current funding landscape:FunderEntrepreneurObjectiveSpawn effective charities.Take an effective charity idea and bring it to fruitionMethodology (as I see it)Develop cause areas that should be funded. Attract applicants and initiatives. Vet applications. If the cause and numbers are in line with prior established metrics, approve and transfer funds.Do initial research, create a financial plan based on knowns, apply for funding, potentially receive funding, and report on progress annually.What’s often missingRisksEstablishing proper governance and complianceFinding talent that is good at leadership, in addition to researchAssurance that funds will be spent most wisely (minimizing investment risk)Metrics for survival rates and causes of failure (if they exist, I'd love to see them)Incorporating proper governance and complianceEntrepreneurial / business leadership experienceGuidance, mentorship and supportSupportive communityStrategic clarityWell-developed ToC and a plan to implementAccountability and supervisionA culture of asking for helpTrusted resources to support the org with supportive services and developmentWillingness to spend money on “non-essentials”, such as trainingHighest impact is often not achievedLow survival rate of young orgsMismanagement and slow growth in orgs, if anyBurnout of talent groupIneffective use of EA fundsUnable to grow effectivelyUnable to have ideal impactSlow, disorganized / hampered developmentHigher failure ratesBurnoutIncreased compliance and liability problemsMismanaged staffMismanaged fundsPoorly estimated budget -> not enough funds to implement wellProposed changes to the system:Have standard budget items that every startup should be including. For example, accounting, legal, marketing, ops, coaching, mentorship, community groups, HR, software, rent, travel, salary, benefits, hea...]]>
Deena Englander https://forum.effectivealtruism.org/posts/BEDpexBKvChZFbxS4/5-proposed-changes-to-the-funding-system-to-increase-org Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 5 Proposed Changes to the Funding System to Increase Org Survival and Impact, published by Deena Englander on April 18, 2023 on The Effective Altruism Forum.An approach I often use in my coaching is to pick the ONE thing, that if you could change, would have an exponential (or just the most) impact on the rest of your day. I propose that we use that mentality to pick the most easy-hanging fruit to help EA orgs be most impactful. I personally think that the ONE thing for EA orgs is mentorship and support - Charity Entrepreneurship does an excellent job of that, and it's a model that the rest of the funding community should incorporate. At the risk of sounding too critical, I will say that I think it's somewhat neglectful of funders to give people financial support to start their organizations, but not provide them with the right org infrastructure support to help them be successful.The objective of this post is to highlight key, easily actionable areas that would likely make all the EA funding dollars much more impactful.A few thought exercisesIf you, as a funder, knew that by giving each startup an extra 10% to create a healthier infrastructure, you would increase the survival rate and/or impact on average by at least 30%, would that be worth it?Say you have 2 organizations with the same agenda. One started with the right resources and guidance to create a healthy infrastructure, and the other without. What would you expect the difference in the overall impact and survival of each org to be?My Perspective of the Current LandscapeTo start with, I want to add a disclaimer that this article is based on my own experiential data with EA and EA-aligned orgs, as well as the experiences and perspectives of many other service providers in the EA space (see this article about EASE). This is by no means inclusive of all orgs and all problems - it is just my subjective perspective of the current systems.Here's how I assess the current funding landscape:FunderEntrepreneurObjectiveSpawn effective charities.Take an effective charity idea and bring it to fruitionMethodology (as I see it)Develop cause areas that should be funded. Attract applicants and initiatives. Vet applications. If the cause and numbers are in line with prior established metrics, approve and transfer funds.Do initial research, create a financial plan based on knowns, apply for funding, potentially receive funding, and report on progress annually.What’s often missingRisksEstablishing proper governance and complianceFinding talent that is good at leadership, in addition to researchAssurance that funds will be spent most wisely (minimizing investment risk)Metrics for survival rates and causes of failure (if they exist, I'd love to see them)Incorporating proper governance and complianceEntrepreneurial / business leadership experienceGuidance, mentorship and supportSupportive communityStrategic clarityWell-developed ToC and a plan to implementAccountability and supervisionA culture of asking for helpTrusted resources to support the org with supportive services and developmentWillingness to spend money on “non-essentials”, such as trainingHighest impact is often not achievedLow survival rate of young orgsMismanagement and slow growth in orgs, if anyBurnout of talent groupIneffective use of EA fundsUnable to grow effectivelyUnable to have ideal impactSlow, disorganized / hampered developmentHigher failure ratesBurnoutIncreased compliance and liability problemsMismanaged staffMismanaged fundsPoorly estimated budget -> not enough funds to implement wellProposed changes to the system:Have standard budget items that every startup should be including. For example, accounting, legal, marketing, ops, coaching, mentorship, community groups, HR, software, rent, travel, salary, benefits, hea...]]>
Tue, 18 Apr 2023 17:48:06 +0000 EA - 5 Proposed Changes to the Funding System to Increase Org Survival and Impact by Deena Englander Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 5 Proposed Changes to the Funding System to Increase Org Survival and Impact, published by Deena Englander on April 18, 2023 on The Effective Altruism Forum.An approach I often use in my coaching is to pick the ONE thing, that if you could change, would have an exponential (or just the most) impact on the rest of your day. I propose that we use that mentality to pick the most easy-hanging fruit to help EA orgs be most impactful. I personally think that the ONE thing for EA orgs is mentorship and support - Charity Entrepreneurship does an excellent job of that, and it's a model that the rest of the funding community should incorporate. At the risk of sounding too critical, I will say that I think it's somewhat neglectful of funders to give people financial support to start their organizations, but not provide them with the right org infrastructure support to help them be successful.The objective of this post is to highlight key, easily actionable areas that would likely make all the EA funding dollars much more impactful.A few thought exercisesIf you, as a funder, knew that by giving each startup an extra 10% to create a healthier infrastructure, you would increase the survival rate and/or impact on average by at least 30%, would that be worth it?Say you have 2 organizations with the same agenda. One started with the right resources and guidance to create a healthy infrastructure, and the other without. What would you expect the difference in the overall impact and survival of each org to be?My Perspective of the Current LandscapeTo start with, I want to add a disclaimer that this article is based on my own experiential data with EA and EA-aligned orgs, as well as the experiences and perspectives of many other service providers in the EA space (see this article about EASE). This is by no means inclusive of all orgs and all problems - it is just my subjective perspective of the current systems.Here's how I assess the current funding landscape:FunderEntrepreneurObjectiveSpawn effective charities.Take an effective charity idea and bring it to fruitionMethodology (as I see it)Develop cause areas that should be funded. Attract applicants and initiatives. Vet applications. If the cause and numbers are in line with prior established metrics, approve and transfer funds.Do initial research, create a financial plan based on knowns, apply for funding, potentially receive funding, and report on progress annually.What’s often missingRisksEstablishing proper governance and complianceFinding talent that is good at leadership, in addition to researchAssurance that funds will be spent most wisely (minimizing investment risk)Metrics for survival rates and causes of failure (if they exist, I'd love to see them)Incorporating proper governance and complianceEntrepreneurial / business leadership experienceGuidance, mentorship and supportSupportive communityStrategic clarityWell-developed ToC and a plan to implementAccountability and supervisionA culture of asking for helpTrusted resources to support the org with supportive services and developmentWillingness to spend money on “non-essentials”, such as trainingHighest impact is often not achievedLow survival rate of young orgsMismanagement and slow growth in orgs, if anyBurnout of talent groupIneffective use of EA fundsUnable to grow effectivelyUnable to have ideal impactSlow, disorganized / hampered developmentHigher failure ratesBurnoutIncreased compliance and liability problemsMismanaged staffMismanaged fundsPoorly estimated budget -> not enough funds to implement wellProposed changes to the system:Have standard budget items that every startup should be including. For example, accounting, legal, marketing, ops, coaching, mentorship, community groups, HR, software, rent, travel, salary, benefits, hea...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 5 Proposed Changes to the Funding System to Increase Org Survival and Impact, published by Deena Englander on April 18, 2023 on The Effective Altruism Forum.An approach I often use in my coaching is to pick the ONE thing, that if you could change, would have an exponential (or just the most) impact on the rest of your day. I propose that we use that mentality to pick the most easy-hanging fruit to help EA orgs be most impactful. I personally think that the ONE thing for EA orgs is mentorship and support - Charity Entrepreneurship does an excellent job of that, and it's a model that the rest of the funding community should incorporate. At the risk of sounding too critical, I will say that I think it's somewhat neglectful of funders to give people financial support to start their organizations, but not provide them with the right org infrastructure support to help them be successful.The objective of this post is to highlight key, easily actionable areas that would likely make all the EA funding dollars much more impactful.A few thought exercisesIf you, as a funder, knew that by giving each startup an extra 10% to create a healthier infrastructure, you would increase the survival rate and/or impact on average by at least 30%, would that be worth it?Say you have 2 organizations with the same agenda. One started with the right resources and guidance to create a healthy infrastructure, and the other without. What would you expect the difference in the overall impact and survival of each org to be?My Perspective of the Current LandscapeTo start with, I want to add a disclaimer that this article is based on my own experiential data with EA and EA-aligned orgs, as well as the experiences and perspectives of many other service providers in the EA space (see this article about EASE). This is by no means inclusive of all orgs and all problems - it is just my subjective perspective of the current systems.Here's how I assess the current funding landscape:FunderEntrepreneurObjectiveSpawn effective charities.Take an effective charity idea and bring it to fruitionMethodology (as I see it)Develop cause areas that should be funded. Attract applicants and initiatives. Vet applications. If the cause and numbers are in line with prior established metrics, approve and transfer funds.Do initial research, create a financial plan based on knowns, apply for funding, potentially receive funding, and report on progress annually.What’s often missingRisksEstablishing proper governance and complianceFinding talent that is good at leadership, in addition to researchAssurance that funds will be spent most wisely (minimizing investment risk)Metrics for survival rates and causes of failure (if they exist, I'd love to see them)Incorporating proper governance and complianceEntrepreneurial / business leadership experienceGuidance, mentorship and supportSupportive communityStrategic clarityWell-developed ToC and a plan to implementAccountability and supervisionA culture of asking for helpTrusted resources to support the org with supportive services and developmentWillingness to spend money on “non-essentials”, such as trainingHighest impact is often not achievedLow survival rate of young orgsMismanagement and slow growth in orgs, if anyBurnout of talent groupIneffective use of EA fundsUnable to grow effectivelyUnable to have ideal impactSlow, disorganized / hampered developmentHigher failure ratesBurnoutIncreased compliance and liability problemsMismanaged staffMismanaged fundsPoorly estimated budget -> not enough funds to implement wellProposed changes to the system:Have standard budget items that every startup should be including. For example, accounting, legal, marketing, ops, coaching, mentorship, community groups, HR, software, rent, travel, salary, benefits, hea...]]>
Deena Englander https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:58 None full 5648
WJXNByFe73HLkuPbH_NL_EA_EA EA - The basic reasons I expect AGI ruin by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The basic reasons I expect AGI ruin, published by RobBensinger on April 18, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
RobBensinger https://forum.effectivealtruism.org/posts/WJXNByFe73HLkuPbH/the-basic-reasons-i-expect-agi-ruin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The basic reasons I expect AGI ruin, published by RobBensinger on April 18, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 18 Apr 2023 16:10:58 +0000 EA - The basic reasons I expect AGI ruin by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The basic reasons I expect AGI ruin, published by RobBensinger on April 18, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The basic reasons I expect AGI ruin, published by RobBensinger on April 18, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
RobBensinger https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:22 None full 5649
98LrrRzdwZadLe2oD_NL_EA_EA EA - Is CBT effective for poor households? Two recent papers (evaluated by The Unjournal) with contrasting results by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is CBT effective for poor households? Two recent papers (evaluated by The Unjournal) with contrasting results, published by david reinstein on April 17, 2023 on The Effective Altruism Forum.(Second link: Barker et al)Two more Unjournal Evaluation sets are out. Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.These are part of Unjournal's 'direct NBER evaluation' stream.1. Barker et al, 2022“Cognitive Behavioral Therapy among Ghana's Rural Poor Is Effective Regardless of Baseline Mental Distress”[1]From anonymous evaluator 1:This paper uses a field experiment to explore the impact of a 12-week CBT program among poor households in rural Ghana. The authors find that the CBT program increases mental and physical well-being, as well as cognitive and socioemotional skills and downstream economic outcomes.2. Haushofer et al, 2020The Comparative Impact of Cash Transfers and a Psychotherapy Program on Psychological and Economic Well-being, Johannes Haushofer, Robert Mudida and Jeremy P. Shapiro. 2020. Originally published as NBER Working Paper 28106Evaluation summary, linking to individual evaluations from Hannah Metzler and an anonymous evaluatorFrom anonymous evaluator 2:This paper studies the economic and psychological effects of providing two different interventions to low-income households in rural Kenya: a program in Cognitive Behavioral Therapy (CBT, a well-established form of psychotherapy) and an unconditional cash transfer. The authors use a randomized controlled trial with a 2-by-2 design to estimate the effect of each intervention alone and of both interventions combined. ...Strikingly, the authors find no effect of the therapy program on any of their primary economic or psychological outcomes. /.. Unsurprisingly given the null effect of therapy, the combination of cash and therapy has similar effects to cash alone.ThoughtsThe evaluations of both papers are largely positive, and both appear credible. I hope that this open evaluation of each paper is a helpful input into a more direct comparison of these, as well as possible integration into a larger meta-analysis.[2]ThanksThanks to the four evaluators of these papers, who did strong and in-depth work, as well as to the evaluation managers (Hansika Kapoor and Anirudh Tagat), and others on the Unjournal team (especially Annabel Rayner and Gavin Taylor).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
david reinstein https://forum.effectivealtruism.org/posts/98LrrRzdwZadLe2oD/is-cbt-effective-for-poor-households-two-recent-papers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is CBT effective for poor households? Two recent papers (evaluated by The Unjournal) with contrasting results, published by david reinstein on April 17, 2023 on The Effective Altruism Forum.(Second link: Barker et al)Two more Unjournal Evaluation sets are out. Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.These are part of Unjournal's 'direct NBER evaluation' stream.1. Barker et al, 2022“Cognitive Behavioral Therapy among Ghana's Rural Poor Is Effective Regardless of Baseline Mental Distress”[1]From anonymous evaluator 1:This paper uses a field experiment to explore the impact of a 12-week CBT program among poor households in rural Ghana. The authors find that the CBT program increases mental and physical well-being, as well as cognitive and socioemotional skills and downstream economic outcomes.2. Haushofer et al, 2020The Comparative Impact of Cash Transfers and a Psychotherapy Program on Psychological and Economic Well-being, Johannes Haushofer, Robert Mudida and Jeremy P. Shapiro. 2020. Originally published as NBER Working Paper 28106Evaluation summary, linking to individual evaluations from Hannah Metzler and an anonymous evaluatorFrom anonymous evaluator 2:This paper studies the economic and psychological effects of providing two different interventions to low-income households in rural Kenya: a program in Cognitive Behavioral Therapy (CBT, a well-established form of psychotherapy) and an unconditional cash transfer. The authors use a randomized controlled trial with a 2-by-2 design to estimate the effect of each intervention alone and of both interventions combined. ...Strikingly, the authors find no effect of the therapy program on any of their primary economic or psychological outcomes. /.. Unsurprisingly given the null effect of therapy, the combination of cash and therapy has similar effects to cash alone.ThoughtsThe evaluations of both papers are largely positive, and both appear credible. I hope that this open evaluation of each paper is a helpful input into a more direct comparison of these, as well as possible integration into a larger meta-analysis.[2]ThanksThanks to the four evaluators of these papers, who did strong and in-depth work, as well as to the evaluation managers (Hansika Kapoor and Anirudh Tagat), and others on the Unjournal team (especially Annabel Rayner and Gavin Taylor).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 18 Apr 2023 14:49:15 +0000 EA - Is CBT effective for poor households? Two recent papers (evaluated by The Unjournal) with contrasting results by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is CBT effective for poor households? Two recent papers (evaluated by The Unjournal) with contrasting results, published by david reinstein on April 17, 2023 on The Effective Altruism Forum.(Second link: Barker et al)Two more Unjournal Evaluation sets are out. Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.These are part of Unjournal's 'direct NBER evaluation' stream.1. Barker et al, 2022“Cognitive Behavioral Therapy among Ghana's Rural Poor Is Effective Regardless of Baseline Mental Distress”[1]From anonymous evaluator 1:This paper uses a field experiment to explore the impact of a 12-week CBT program among poor households in rural Ghana. The authors find that the CBT program increases mental and physical well-being, as well as cognitive and socioemotional skills and downstream economic outcomes.2. Haushofer et al, 2020The Comparative Impact of Cash Transfers and a Psychotherapy Program on Psychological and Economic Well-being, Johannes Haushofer, Robert Mudida and Jeremy P. Shapiro. 2020. Originally published as NBER Working Paper 28106Evaluation summary, linking to individual evaluations from Hannah Metzler and an anonymous evaluatorFrom anonymous evaluator 2:This paper studies the economic and psychological effects of providing two different interventions to low-income households in rural Kenya: a program in Cognitive Behavioral Therapy (CBT, a well-established form of psychotherapy) and an unconditional cash transfer. The authors use a randomized controlled trial with a 2-by-2 design to estimate the effect of each intervention alone and of both interventions combined. ...Strikingly, the authors find no effect of the therapy program on any of their primary economic or psychological outcomes. /.. Unsurprisingly given the null effect of therapy, the combination of cash and therapy has similar effects to cash alone.ThoughtsThe evaluations of both papers are largely positive, and both appear credible. I hope that this open evaluation of each paper is a helpful input into a more direct comparison of these, as well as possible integration into a larger meta-analysis.[2]ThanksThanks to the four evaluators of these papers, who did strong and in-depth work, as well as to the evaluation managers (Hansika Kapoor and Anirudh Tagat), and others on the Unjournal team (especially Annabel Rayner and Gavin Taylor).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is CBT effective for poor households? Two recent papers (evaluated by The Unjournal) with contrasting results, published by david reinstein on April 17, 2023 on The Effective Altruism Forum.(Second link: Barker et al)Two more Unjournal Evaluation sets are out. Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.These are part of Unjournal's 'direct NBER evaluation' stream.1. Barker et al, 2022“Cognitive Behavioral Therapy among Ghana's Rural Poor Is Effective Regardless of Baseline Mental Distress”[1]From anonymous evaluator 1:This paper uses a field experiment to explore the impact of a 12-week CBT program among poor households in rural Ghana. The authors find that the CBT program increases mental and physical well-being, as well as cognitive and socioemotional skills and downstream economic outcomes.2. Haushofer et al, 2020The Comparative Impact of Cash Transfers and a Psychotherapy Program on Psychological and Economic Well-being, Johannes Haushofer, Robert Mudida and Jeremy P. Shapiro. 2020. Originally published as NBER Working Paper 28106Evaluation summary, linking to individual evaluations from Hannah Metzler and an anonymous evaluatorFrom anonymous evaluator 2:This paper studies the economic and psychological effects of providing two different interventions to low-income households in rural Kenya: a program in Cognitive Behavioral Therapy (CBT, a well-established form of psychotherapy) and an unconditional cash transfer. The authors use a randomized controlled trial with a 2-by-2 design to estimate the effect of each intervention alone and of both interventions combined. ...Strikingly, the authors find no effect of the therapy program on any of their primary economic or psychological outcomes. /.. Unsurprisingly given the null effect of therapy, the combination of cash and therapy has similar effects to cash alone.ThoughtsThe evaluations of both papers are largely positive, and both appear credible. I hope that this open evaluation of each paper is a helpful input into a more direct comparison of these, as well as possible integration into a larger meta-analysis.[2]ThanksThanks to the four evaluators of these papers, who did strong and in-depth work, as well as to the evaluation managers (Hansika Kapoor and Anirudh Tagat), and others on the Unjournal team (especially Annabel Rayner and Gavin Taylor).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
david reinstein https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:48 None full 5650
pWvsvFLeH9LekGqTt_NL_EA_EA EA - Updates to the Effective Ventures US board by Zachary Robinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates to the Effective Ventures US board, published by Zachary Robinson on April 17, 2023 on The Effective Altruism Forum.There have been several recent changes to the EV US board.Rebecca Kagan has resigned from the board of EV US. She decided to resign in light of her disagreements with the EV boards’ strategy and approach during the past few months. We would welcome her to share more of her thoughts publicly, and she plans to do so. In the meantime, Becca is happy for people with questions to reach out to her directly. We are very grateful for everything which Becca has given to EV US during her time as a trustee, particularly for her dedication and insights over the past few months.The board has also elected Zach Robinson and Eli Rose to the board as new trustees. They join current trustees Nicole Ross and Nick Beckstead.Zach Robinson has been Interim CEO of EV US since January. Previously, he was Chief of Staff at Open Philanthropy and had experience in consulting and at a start-up.Eli Rose is currently a Senior Program Associate at Open Philanthropy, where he has worked on the longtermist EA community-building team since 2020. Previously, he was Director of Engineering at an educational technology company.Both EV US and EV UK are still working to bring additional trustees onto the boards, and we hope to share further updates shortly.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Zachary Robinson https://forum.effectivealtruism.org/posts/pWvsvFLeH9LekGqTt/updates-to-the-effective-ventures-us-board Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates to the Effective Ventures US board, published by Zachary Robinson on April 17, 2023 on The Effective Altruism Forum.There have been several recent changes to the EV US board.Rebecca Kagan has resigned from the board of EV US. She decided to resign in light of her disagreements with the EV boards’ strategy and approach during the past few months. We would welcome her to share more of her thoughts publicly, and she plans to do so. In the meantime, Becca is happy for people with questions to reach out to her directly. We are very grateful for everything which Becca has given to EV US during her time as a trustee, particularly for her dedication and insights over the past few months.The board has also elected Zach Robinson and Eli Rose to the board as new trustees. They join current trustees Nicole Ross and Nick Beckstead.Zach Robinson has been Interim CEO of EV US since January. Previously, he was Chief of Staff at Open Philanthropy and had experience in consulting and at a start-up.Eli Rose is currently a Senior Program Associate at Open Philanthropy, where he has worked on the longtermist EA community-building team since 2020. Previously, he was Director of Engineering at an educational technology company.Both EV US and EV UK are still working to bring additional trustees onto the boards, and we hope to share further updates shortly.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 18 Apr 2023 09:27:19 +0000 EA - Updates to the Effective Ventures US board by Zachary Robinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates to the Effective Ventures US board, published by Zachary Robinson on April 17, 2023 on The Effective Altruism Forum.There have been several recent changes to the EV US board.Rebecca Kagan has resigned from the board of EV US. She decided to resign in light of her disagreements with the EV boards’ strategy and approach during the past few months. We would welcome her to share more of her thoughts publicly, and she plans to do so. In the meantime, Becca is happy for people with questions to reach out to her directly. We are very grateful for everything which Becca has given to EV US during her time as a trustee, particularly for her dedication and insights over the past few months.The board has also elected Zach Robinson and Eli Rose to the board as new trustees. They join current trustees Nicole Ross and Nick Beckstead.Zach Robinson has been Interim CEO of EV US since January. Previously, he was Chief of Staff at Open Philanthropy and had experience in consulting and at a start-up.Eli Rose is currently a Senior Program Associate at Open Philanthropy, where he has worked on the longtermist EA community-building team since 2020. Previously, he was Director of Engineering at an educational technology company.Both EV US and EV UK are still working to bring additional trustees onto the boards, and we hope to share further updates shortly.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates to the Effective Ventures US board, published by Zachary Robinson on April 17, 2023 on The Effective Altruism Forum.There have been several recent changes to the EV US board.Rebecca Kagan has resigned from the board of EV US. She decided to resign in light of her disagreements with the EV boards’ strategy and approach during the past few months. We would welcome her to share more of her thoughts publicly, and she plans to do so. In the meantime, Becca is happy for people with questions to reach out to her directly. We are very grateful for everything which Becca has given to EV US during her time as a trustee, particularly for her dedication and insights over the past few months.The board has also elected Zach Robinson and Eli Rose to the board as new trustees. They join current trustees Nicole Ross and Nick Beckstead.Zach Robinson has been Interim CEO of EV US since January. Previously, he was Chief of Staff at Open Philanthropy and had experience in consulting and at a start-up.Eli Rose is currently a Senior Program Associate at Open Philanthropy, where he has worked on the longtermist EA community-building team since 2020. Previously, he was Director of Engineering at an educational technology company.Both EV US and EV UK are still working to bring additional trustees onto the boards, and we hope to share further updates shortly.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Zachary Robinson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:30 None full 5646
CrnFwpNYYSseb6Xt3_NL_EA_EA EA - We're losing creators due to our nitpicking culture by TheAthenians Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We're losing creators due to our nitpicking culture, published by TheAthenians on April 17, 2023 on The Effective Altruism Forum.This is a cross-post from LessWrong, originally titled "Killing Socrates" by Duncan Sabien. Cross-posted with his permission.This is posted from an anonymous account of somebody who has been on the Forum for over 7 years and has over 2500 karma who's more or less stopped posting on the EA Forum and LessWrong for exactly the reasons he describes.This is not an isolated case. Here are a couple examples to illustrate this, and some memes that have been floating around EA Twitter that seem to be resonating. The first two examples are for LessWrong but just as easily could have been for the EA Forum, which has a similar culture.^Source^SourceFunnily enough, one of the top comments on the post at the time of publishing was somebody debating the correct pluralisation of "Socrates".And remember that for every one famous case of somebody leaving because of it, there will probably be tens to >100 people who leave without saying anything.This is an important problem, and I don't know the solution, but I hope it is discussed and addressed.Or, On The Willful Destruction Of Gardens Of Collaborative InquiryOne of the more interesting dynamics of the past eight-or-so years has been watching a bunch of the people who [taught me my values] and [served as my early role models] and [were presented to me as paragons of cultural virtue] going off the deep end.Those people believed a bunch of stuff, and they injected a bunch of that stuff into me, in the early days of my life when I absorbed it uncritically, and as they've turned out to be wrong and misguided and confused in two or three dozen ways, I've found myself wondering what else they were wrong about.One of the things that I absorbed via osmosis and never questioned (until recently) was the Hero Myth of Socrates, who boldly stood up against the tyrannical, dogmatic power structure and was unjustly murdered for it. I've spent most of my life knowing that Socrates obviously got a raw deal, just like I spent most of my life knowing thatIt now seems quite plausible to me that Socrates was, in fact, correctly responded-to by the Athenians of his time, and that the mythologized version of his story I grew up with belongs in the same category as Washington's cherry tree or Pocahontas's enthusiastic embrace of the white settlers of Virginia.The following borrows generously from, and is essentially an embellishment of, this comment by @Vaniver.Imagine that you are an ancient Athenian, responsible for some important institution, and that you have a strong belief that the overall survival of your society is contingent on a reliable, common-knowledge buy-in of Athenian institutions generally, i.e. that your society cannot function unless its members believe that it does function.This would not be a ridiculous belief! We have seen, in the modern era, how quickly things go south when faith in a bank (or in the financial system as a whole) evaporates. We know what happens when people stop believing that the police or the courts are on their side. Regimes (or entire nations) fall when their constituents stop propping up the myth of those regimes. Much of civilization is shared participation in self-fulfilling prophecies like "this little scrap of green paper holds value."And if you buy"My society's survival depends upon people's faith in its institutions."...then it's only a small step from there to something like:"My society's survival depends upon a status-allocation structure whereby [the people who pour their time and effort into building things larger than themselves] receive lots of credit and reward, and [the people who contribute little, and sit back idly criticizing] receive correspondingly l...]]>
TheAthenians https://forum.effectivealtruism.org/posts/CrnFwpNYYSseb6Xt3/we-re-losing-creators-due-to-our-nitpicking-culture Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We're losing creators due to our nitpicking culture, published by TheAthenians on April 17, 2023 on The Effective Altruism Forum.This is a cross-post from LessWrong, originally titled "Killing Socrates" by Duncan Sabien. Cross-posted with his permission.This is posted from an anonymous account of somebody who has been on the Forum for over 7 years and has over 2500 karma who's more or less stopped posting on the EA Forum and LessWrong for exactly the reasons he describes.This is not an isolated case. Here are a couple examples to illustrate this, and some memes that have been floating around EA Twitter that seem to be resonating. The first two examples are for LessWrong but just as easily could have been for the EA Forum, which has a similar culture.^Source^SourceFunnily enough, one of the top comments on the post at the time of publishing was somebody debating the correct pluralisation of "Socrates".And remember that for every one famous case of somebody leaving because of it, there will probably be tens to >100 people who leave without saying anything.This is an important problem, and I don't know the solution, but I hope it is discussed and addressed.Or, On The Willful Destruction Of Gardens Of Collaborative InquiryOne of the more interesting dynamics of the past eight-or-so years has been watching a bunch of the people who [taught me my values] and [served as my early role models] and [were presented to me as paragons of cultural virtue] going off the deep end.Those people believed a bunch of stuff, and they injected a bunch of that stuff into me, in the early days of my life when I absorbed it uncritically, and as they've turned out to be wrong and misguided and confused in two or three dozen ways, I've found myself wondering what else they were wrong about.One of the things that I absorbed via osmosis and never questioned (until recently) was the Hero Myth of Socrates, who boldly stood up against the tyrannical, dogmatic power structure and was unjustly murdered for it. I've spent most of my life knowing that Socrates obviously got a raw deal, just like I spent most of my life knowing thatIt now seems quite plausible to me that Socrates was, in fact, correctly responded-to by the Athenians of his time, and that the mythologized version of his story I grew up with belongs in the same category as Washington's cherry tree or Pocahontas's enthusiastic embrace of the white settlers of Virginia.The following borrows generously from, and is essentially an embellishment of, this comment by @Vaniver.Imagine that you are an ancient Athenian, responsible for some important institution, and that you have a strong belief that the overall survival of your society is contingent on a reliable, common-knowledge buy-in of Athenian institutions generally, i.e. that your society cannot function unless its members believe that it does function.This would not be a ridiculous belief! We have seen, in the modern era, how quickly things go south when faith in a bank (or in the financial system as a whole) evaporates. We know what happens when people stop believing that the police or the courts are on their side. Regimes (or entire nations) fall when their constituents stop propping up the myth of those regimes. Much of civilization is shared participation in self-fulfilling prophecies like "this little scrap of green paper holds value."And if you buy"My society's survival depends upon people's faith in its institutions."...then it's only a small step from there to something like:"My society's survival depends upon a status-allocation structure whereby [the people who pour their time and effort into building things larger than themselves] receive lots of credit and reward, and [the people who contribute little, and sit back idly criticizing] receive correspondingly l...]]>
Mon, 17 Apr 2023 19:17:38 +0000 EA - We're losing creators due to our nitpicking culture by TheAthenians Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We're losing creators due to our nitpicking culture, published by TheAthenians on April 17, 2023 on The Effective Altruism Forum.This is a cross-post from LessWrong, originally titled "Killing Socrates" by Duncan Sabien. Cross-posted with his permission.This is posted from an anonymous account of somebody who has been on the Forum for over 7 years and has over 2500 karma who's more or less stopped posting on the EA Forum and LessWrong for exactly the reasons he describes.This is not an isolated case. Here are a couple examples to illustrate this, and some memes that have been floating around EA Twitter that seem to be resonating. The first two examples are for LessWrong but just as easily could have been for the EA Forum, which has a similar culture.^Source^SourceFunnily enough, one of the top comments on the post at the time of publishing was somebody debating the correct pluralisation of "Socrates".And remember that for every one famous case of somebody leaving because of it, there will probably be tens to >100 people who leave without saying anything.This is an important problem, and I don't know the solution, but I hope it is discussed and addressed.Or, On The Willful Destruction Of Gardens Of Collaborative InquiryOne of the more interesting dynamics of the past eight-or-so years has been watching a bunch of the people who [taught me my values] and [served as my early role models] and [were presented to me as paragons of cultural virtue] going off the deep end.Those people believed a bunch of stuff, and they injected a bunch of that stuff into me, in the early days of my life when I absorbed it uncritically, and as they've turned out to be wrong and misguided and confused in two or three dozen ways, I've found myself wondering what else they were wrong about.One of the things that I absorbed via osmosis and never questioned (until recently) was the Hero Myth of Socrates, who boldly stood up against the tyrannical, dogmatic power structure and was unjustly murdered for it. I've spent most of my life knowing that Socrates obviously got a raw deal, just like I spent most of my life knowing thatIt now seems quite plausible to me that Socrates was, in fact, correctly responded-to by the Athenians of his time, and that the mythologized version of his story I grew up with belongs in the same category as Washington's cherry tree or Pocahontas's enthusiastic embrace of the white settlers of Virginia.The following borrows generously from, and is essentially an embellishment of, this comment by @Vaniver.Imagine that you are an ancient Athenian, responsible for some important institution, and that you have a strong belief that the overall survival of your society is contingent on a reliable, common-knowledge buy-in of Athenian institutions generally, i.e. that your society cannot function unless its members believe that it does function.This would not be a ridiculous belief! We have seen, in the modern era, how quickly things go south when faith in a bank (or in the financial system as a whole) evaporates. We know what happens when people stop believing that the police or the courts are on their side. Regimes (or entire nations) fall when their constituents stop propping up the myth of those regimes. Much of civilization is shared participation in self-fulfilling prophecies like "this little scrap of green paper holds value."And if you buy"My society's survival depends upon people's faith in its institutions."...then it's only a small step from there to something like:"My society's survival depends upon a status-allocation structure whereby [the people who pour their time and effort into building things larger than themselves] receive lots of credit and reward, and [the people who contribute little, and sit back idly criticizing] receive correspondingly l...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We're losing creators due to our nitpicking culture, published by TheAthenians on April 17, 2023 on The Effective Altruism Forum.This is a cross-post from LessWrong, originally titled "Killing Socrates" by Duncan Sabien. Cross-posted with his permission.This is posted from an anonymous account of somebody who has been on the Forum for over 7 years and has over 2500 karma who's more or less stopped posting on the EA Forum and LessWrong for exactly the reasons he describes.This is not an isolated case. Here are a couple examples to illustrate this, and some memes that have been floating around EA Twitter that seem to be resonating. The first two examples are for LessWrong but just as easily could have been for the EA Forum, which has a similar culture.^Source^SourceFunnily enough, one of the top comments on the post at the time of publishing was somebody debating the correct pluralisation of "Socrates".And remember that for every one famous case of somebody leaving because of it, there will probably be tens to >100 people who leave without saying anything.This is an important problem, and I don't know the solution, but I hope it is discussed and addressed.Or, On The Willful Destruction Of Gardens Of Collaborative InquiryOne of the more interesting dynamics of the past eight-or-so years has been watching a bunch of the people who [taught me my values] and [served as my early role models] and [were presented to me as paragons of cultural virtue] going off the deep end.Those people believed a bunch of stuff, and they injected a bunch of that stuff into me, in the early days of my life when I absorbed it uncritically, and as they've turned out to be wrong and misguided and confused in two or three dozen ways, I've found myself wondering what else they were wrong about.One of the things that I absorbed via osmosis and never questioned (until recently) was the Hero Myth of Socrates, who boldly stood up against the tyrannical, dogmatic power structure and was unjustly murdered for it. I've spent most of my life knowing that Socrates obviously got a raw deal, just like I spent most of my life knowing thatIt now seems quite plausible to me that Socrates was, in fact, correctly responded-to by the Athenians of his time, and that the mythologized version of his story I grew up with belongs in the same category as Washington's cherry tree or Pocahontas's enthusiastic embrace of the white settlers of Virginia.The following borrows generously from, and is essentially an embellishment of, this comment by @Vaniver.Imagine that you are an ancient Athenian, responsible for some important institution, and that you have a strong belief that the overall survival of your society is contingent on a reliable, common-knowledge buy-in of Athenian institutions generally, i.e. that your society cannot function unless its members believe that it does function.This would not be a ridiculous belief! We have seen, in the modern era, how quickly things go south when faith in a bank (or in the financial system as a whole) evaporates. We know what happens when people stop believing that the police or the courts are on their side. Regimes (or entire nations) fall when their constituents stop propping up the myth of those regimes. Much of civilization is shared participation in self-fulfilling prophecies like "this little scrap of green paper holds value."And if you buy"My society's survival depends upon people's faith in its institutions."...then it's only a small step from there to something like:"My society's survival depends upon a status-allocation structure whereby [the people who pour their time and effort into building things larger than themselves] receive lots of credit and reward, and [the people who contribute little, and sit back idly criticizing] receive correspondingly l...]]>
TheAthenians https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 12:44 None full 5638
gNgjpuB8Z73XRFeB4_NL_EA_EA EA - Free one-to-one behavioral addiction support for EAs by John Salter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free one-to-one behavioral addiction support for EAs, published by John Salter on April 17, 2023 on The Effective Altruism Forum.Free coaching via Zoom. Treats addictions that aren't acutely life-threatening, but that drain your time & money e.g.social mediagamingsmokingBenefitsSave money & timeAchieve moreFeel better about yourself and more in control of your lifeHow it worksSix thirty-minute sessions, for free.Continuation thereafter is optionalArranged via Calendly, or for a recurring slot once or twice a week (up to you)The CoachesEra's got a BSc in Psychology, and an MSc in Addiction, from King's College London. Lizzy's got a master's in psychology and a certificate in psychotherapy. Both have done ~160 hours of addiction coaching training, passed their final assessment, and are now ready to see real clients. So far, we've had unanimously positive feedback and ~70% retention from the 100+ clients our coaches have seen so far.What happens in sessionsSession OneTell your coach about you and your situationThey'll help you draft a minimum viable planYou attempt to execute it between now and next weekSessions Two til FiveShare your progress or lack thereofTell them what worked and what didn'tUpdate the plan accordinglyRepeatSession SixTalk about your progressWe'll likely give you the option to continue on a paid basisIn either case, we'll help you draft a plan to help recover from relapses, and prevent them from happening in the first placeDisclaimersPotential downside - they aren't EAsPotential upside - they aren't EAsWe don't have magical solutionsAt least 20% of sign-ups will give upOf those that stay, at least 20% will failMany of those who succeed will relapse within one yearPromises we makeTo maintain complete confidentialityTo use proven techniques exclusivelyTo be supportive & non-judgementalSign upFree to any Effective Altruist. 7 EA-exclusive places are available immediately. If the results are good, we'll do a monthly cohort of ~20 EAs. You can opt-out at any time, no hard feels. Click here to apply.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
John Salter https://forum.effectivealtruism.org/posts/gNgjpuB8Z73XRFeB4/free-one-to-one-behavioral-addiction-support-for-eas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free one-to-one behavioral addiction support for EAs, published by John Salter on April 17, 2023 on The Effective Altruism Forum.Free coaching via Zoom. Treats addictions that aren't acutely life-threatening, but that drain your time & money e.g.social mediagamingsmokingBenefitsSave money & timeAchieve moreFeel better about yourself and more in control of your lifeHow it worksSix thirty-minute sessions, for free.Continuation thereafter is optionalArranged via Calendly, or for a recurring slot once or twice a week (up to you)The CoachesEra's got a BSc in Psychology, and an MSc in Addiction, from King's College London. Lizzy's got a master's in psychology and a certificate in psychotherapy. Both have done ~160 hours of addiction coaching training, passed their final assessment, and are now ready to see real clients. So far, we've had unanimously positive feedback and ~70% retention from the 100+ clients our coaches have seen so far.What happens in sessionsSession OneTell your coach about you and your situationThey'll help you draft a minimum viable planYou attempt to execute it between now and next weekSessions Two til FiveShare your progress or lack thereofTell them what worked and what didn'tUpdate the plan accordinglyRepeatSession SixTalk about your progressWe'll likely give you the option to continue on a paid basisIn either case, we'll help you draft a plan to help recover from relapses, and prevent them from happening in the first placeDisclaimersPotential downside - they aren't EAsPotential upside - they aren't EAsWe don't have magical solutionsAt least 20% of sign-ups will give upOf those that stay, at least 20% will failMany of those who succeed will relapse within one yearPromises we makeTo maintain complete confidentialityTo use proven techniques exclusivelyTo be supportive & non-judgementalSign upFree to any Effective Altruist. 7 EA-exclusive places are available immediately. If the results are good, we'll do a monthly cohort of ~20 EAs. You can opt-out at any time, no hard feels. Click here to apply.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 17 Apr 2023 19:12:01 +0000 EA - Free one-to-one behavioral addiction support for EAs by John Salter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free one-to-one behavioral addiction support for EAs, published by John Salter on April 17, 2023 on The Effective Altruism Forum.Free coaching via Zoom. Treats addictions that aren't acutely life-threatening, but that drain your time & money e.g.social mediagamingsmokingBenefitsSave money & timeAchieve moreFeel better about yourself and more in control of your lifeHow it worksSix thirty-minute sessions, for free.Continuation thereafter is optionalArranged via Calendly, or for a recurring slot once or twice a week (up to you)The CoachesEra's got a BSc in Psychology, and an MSc in Addiction, from King's College London. Lizzy's got a master's in psychology and a certificate in psychotherapy. Both have done ~160 hours of addiction coaching training, passed their final assessment, and are now ready to see real clients. So far, we've had unanimously positive feedback and ~70% retention from the 100+ clients our coaches have seen so far.What happens in sessionsSession OneTell your coach about you and your situationThey'll help you draft a minimum viable planYou attempt to execute it between now and next weekSessions Two til FiveShare your progress or lack thereofTell them what worked and what didn'tUpdate the plan accordinglyRepeatSession SixTalk about your progressWe'll likely give you the option to continue on a paid basisIn either case, we'll help you draft a plan to help recover from relapses, and prevent them from happening in the first placeDisclaimersPotential downside - they aren't EAsPotential upside - they aren't EAsWe don't have magical solutionsAt least 20% of sign-ups will give upOf those that stay, at least 20% will failMany of those who succeed will relapse within one yearPromises we makeTo maintain complete confidentialityTo use proven techniques exclusivelyTo be supportive & non-judgementalSign upFree to any Effective Altruist. 7 EA-exclusive places are available immediately. If the results are good, we'll do a monthly cohort of ~20 EAs. You can opt-out at any time, no hard feels. Click here to apply.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free one-to-one behavioral addiction support for EAs, published by John Salter on April 17, 2023 on The Effective Altruism Forum.Free coaching via Zoom. Treats addictions that aren't acutely life-threatening, but that drain your time & money e.g.social mediagamingsmokingBenefitsSave money & timeAchieve moreFeel better about yourself and more in control of your lifeHow it worksSix thirty-minute sessions, for free.Continuation thereafter is optionalArranged via Calendly, or for a recurring slot once or twice a week (up to you)The CoachesEra's got a BSc in Psychology, and an MSc in Addiction, from King's College London. Lizzy's got a master's in psychology and a certificate in psychotherapy. Both have done ~160 hours of addiction coaching training, passed their final assessment, and are now ready to see real clients. So far, we've had unanimously positive feedback and ~70% retention from the 100+ clients our coaches have seen so far.What happens in sessionsSession OneTell your coach about you and your situationThey'll help you draft a minimum viable planYou attempt to execute it between now and next weekSessions Two til FiveShare your progress or lack thereofTell them what worked and what didn'tUpdate the plan accordinglyRepeatSession SixTalk about your progressWe'll likely give you the option to continue on a paid basisIn either case, we'll help you draft a plan to help recover from relapses, and prevent them from happening in the first placeDisclaimersPotential downside - they aren't EAsPotential upside - they aren't EAsWe don't have magical solutionsAt least 20% of sign-ups will give upOf those that stay, at least 20% will failMany of those who succeed will relapse within one yearPromises we makeTo maintain complete confidentialityTo use proven techniques exclusivelyTo be supportive & non-judgementalSign upFree to any Effective Altruist. 7 EA-exclusive places are available immediately. If the results are good, we'll do a monthly cohort of ~20 EAs. You can opt-out at any time, no hard feels. Click here to apply.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
John Salter https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:27 None full 5640
TeE9cLzQndZuZxwPC_NL_EA_EA EA - Funding Opportunity for New Fish Welfare Charity by MathiasKB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding Opportunity for New Fish Welfare Charity, published by MathiasKB on April 17, 2023 on The Effective Altruism Forum.Hi everyone, unsure what the norms here are about sharing funding opportunities, but since its time-sensitive I'll opt to ask for forgiveness over permission!A new fish-welfare charity with an excellent founder behind it, is fundraising a seed-round to get the project off the ground.Their mission is to influence US state policy of using live baitfish. Billions of fish are bred each year to be used as baitfish and live pretty terrible lives up until the point they are put on the hook of a fishing rod and used to fish with.If you want to help see this charity started don't hesitate to contact me and I can put you in touch with the founder or share their pitch deck (its excellent).I've personally pledged to cover a portion if they can meet their funding target, so I'll happily admit to being biased in wanting to see it succeed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
MathiasKB https://forum.effectivealtruism.org/posts/TeE9cLzQndZuZxwPC/funding-opportunity-for-new-fish-welfare-charity Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding Opportunity for New Fish Welfare Charity, published by MathiasKB on April 17, 2023 on The Effective Altruism Forum.Hi everyone, unsure what the norms here are about sharing funding opportunities, but since its time-sensitive I'll opt to ask for forgiveness over permission!A new fish-welfare charity with an excellent founder behind it, is fundraising a seed-round to get the project off the ground.Their mission is to influence US state policy of using live baitfish. Billions of fish are bred each year to be used as baitfish and live pretty terrible lives up until the point they are put on the hook of a fishing rod and used to fish with.If you want to help see this charity started don't hesitate to contact me and I can put you in touch with the founder or share their pitch deck (its excellent).I've personally pledged to cover a portion if they can meet their funding target, so I'll happily admit to being biased in wanting to see it succeed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 17 Apr 2023 17:04:14 +0000 EA - Funding Opportunity for New Fish Welfare Charity by MathiasKB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding Opportunity for New Fish Welfare Charity, published by MathiasKB on April 17, 2023 on The Effective Altruism Forum.Hi everyone, unsure what the norms here are about sharing funding opportunities, but since its time-sensitive I'll opt to ask for forgiveness over permission!A new fish-welfare charity with an excellent founder behind it, is fundraising a seed-round to get the project off the ground.Their mission is to influence US state policy of using live baitfish. Billions of fish are bred each year to be used as baitfish and live pretty terrible lives up until the point they are put on the hook of a fishing rod and used to fish with.If you want to help see this charity started don't hesitate to contact me and I can put you in touch with the founder or share their pitch deck (its excellent).I've personally pledged to cover a portion if they can meet their funding target, so I'll happily admit to being biased in wanting to see it succeed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding Opportunity for New Fish Welfare Charity, published by MathiasKB on April 17, 2023 on The Effective Altruism Forum.Hi everyone, unsure what the norms here are about sharing funding opportunities, but since its time-sensitive I'll opt to ask for forgiveness over permission!A new fish-welfare charity with an excellent founder behind it, is fundraising a seed-round to get the project off the ground.Their mission is to influence US state policy of using live baitfish. Billions of fish are bred each year to be used as baitfish and live pretty terrible lives up until the point they are put on the hook of a fishing rod and used to fish with.If you want to help see this charity started don't hesitate to contact me and I can put you in touch with the founder or share their pitch deck (its excellent).I've personally pledged to cover a portion if they can meet their funding target, so I'll happily admit to being biased in wanting to see it succeed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
MathiasKB https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:02 None full 5639
GYWzDnrZpZjDxDATE_NL_EA_EA EA - Starting and running your own mini projects: What I've learnt running a newsletter for a year by SofiaBalderson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Starting and running your own mini projects: What I've learnt running a newsletter for a year, published by SofiaBalderson on April 16, 2023 on The Effective Altruism Forum.Tl;dr:There are many unmet needs in the movement that you can fill with a couple of hours a month (without anyone’s permission)My project - the newsletter itself was relatively easy to put together (2-4h a month basic running) and now has 457 subscribers (and counting!)The open rates are amazingly high: over 62% consistentlyIt’s really tempting to give up: please give new projects feedback and encouragementMy Story: how and why I started itI like to be aware of opportunities in animal advocacy and when I see something interesting, I usually can think of a couple of friends who might find it useful. After a couple of years of sending links to a growing network of friends, I realised that I’m spending A LOT of time sharing resources. I thought to myself, why not start a newsletter where I’d put all these opportunities and then can invite my network to sign up. So at first it was a time-saving opportunity. But also I noticed that existing animal advocates are in closed spaces and groups and all the best opportunities are posted there, while there are lot of new and existing animal advocates who don’t have access to these groups. I thought it wasn’t very inclusive and decided to create something that’s open to everybody.Some (very nice!) feedback I got:Reader surveysI ran a very easy, press of a button survey in autumn, and most people said they found the newsletter useful (albeit a very small sample size of 29 and a bias towards most active users, but that just shows how people don’t have time for surveys, as I sent a Google Form that didn't get many answers at all)Individual positive feedback:"I'm so grateful for the newsletter - I've been listening to the podcasts recommended in it during my work hours, and read posts with a cuppa in the mornings/evenings. Loving the resource as it covers so many different aspects and areas within EAA. It'll be heavily used by me as I want to learn as much as I can about the community as I figure out what career in EAA would be a good fit" - Shaileen McGovern, Department of Agriculture, Food and the Marine, Ireland.From Andrés Jiménez Zorrilla, CEO at the Shimp Welfare Project:“For your poll [above], the only negative I have is that I open a dozen new tabs of things to read when I should be working! More seriously, this aggregation of news and events is at the very list inspiring to continue my own activism” - Josh Baldwin.Some stats:The newsletter consistently has on average 62.5% open rate. It started with 80% open rate on the second edition (the one that was the first edition for about half the subscribers to receive, then it gradually went down and it’s been consistent for about 6 months).At the time of writing the newsletter has 457 subscribers. I had a big jump at the launch to just under 200 and then consistently arrived at over 400. I haven't done much promotion until about a week ago (hence the spike).The country composition is very interesting, mainly US and UK, but also Germany, Spain and other countries.What I’ve learnt:If you have a good idea that is based on an actual problem/unmet need, just do it.Don’t listen to your own voice that tells you “No one needs it” or others saying it’s already been done. Check if it has really been done, or if people just think it has. If it has been done, has it been done well enough?If you need some help identifying a problem, listen to people's conversations online and see what people often complain about. Do they lack IT support? HR consultancy? No time to do ops?Don’t spend too much time thinking about counterfactuals. I know that EAs love to discuss whether this is really the best use of your...]]>
SofiaBalderson https://forum.effectivealtruism.org/posts/GYWzDnrZpZjDxDATE/starting-and-running-your-own-mini-projects-what-i-ve-learnt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Starting and running your own mini projects: What I've learnt running a newsletter for a year, published by SofiaBalderson on April 16, 2023 on The Effective Altruism Forum.Tl;dr:There are many unmet needs in the movement that you can fill with a couple of hours a month (without anyone’s permission)My project - the newsletter itself was relatively easy to put together (2-4h a month basic running) and now has 457 subscribers (and counting!)The open rates are amazingly high: over 62% consistentlyIt’s really tempting to give up: please give new projects feedback and encouragementMy Story: how and why I started itI like to be aware of opportunities in animal advocacy and when I see something interesting, I usually can think of a couple of friends who might find it useful. After a couple of years of sending links to a growing network of friends, I realised that I’m spending A LOT of time sharing resources. I thought to myself, why not start a newsletter where I’d put all these opportunities and then can invite my network to sign up. So at first it was a time-saving opportunity. But also I noticed that existing animal advocates are in closed spaces and groups and all the best opportunities are posted there, while there are lot of new and existing animal advocates who don’t have access to these groups. I thought it wasn’t very inclusive and decided to create something that’s open to everybody.Some (very nice!) feedback I got:Reader surveysI ran a very easy, press of a button survey in autumn, and most people said they found the newsletter useful (albeit a very small sample size of 29 and a bias towards most active users, but that just shows how people don’t have time for surveys, as I sent a Google Form that didn't get many answers at all)Individual positive feedback:"I'm so grateful for the newsletter - I've been listening to the podcasts recommended in it during my work hours, and read posts with a cuppa in the mornings/evenings. Loving the resource as it covers so many different aspects and areas within EAA. It'll be heavily used by me as I want to learn as much as I can about the community as I figure out what career in EAA would be a good fit" - Shaileen McGovern, Department of Agriculture, Food and the Marine, Ireland.From Andrés Jiménez Zorrilla, CEO at the Shimp Welfare Project:“For your poll [above], the only negative I have is that I open a dozen new tabs of things to read when I should be working! More seriously, this aggregation of news and events is at the very list inspiring to continue my own activism” - Josh Baldwin.Some stats:The newsletter consistently has on average 62.5% open rate. It started with 80% open rate on the second edition (the one that was the first edition for about half the subscribers to receive, then it gradually went down and it’s been consistent for about 6 months).At the time of writing the newsletter has 457 subscribers. I had a big jump at the launch to just under 200 and then consistently arrived at over 400. I haven't done much promotion until about a week ago (hence the spike).The country composition is very interesting, mainly US and UK, but also Germany, Spain and other countries.What I’ve learnt:If you have a good idea that is based on an actual problem/unmet need, just do it.Don’t listen to your own voice that tells you “No one needs it” or others saying it’s already been done. Check if it has really been done, or if people just think it has. If it has been done, has it been done well enough?If you need some help identifying a problem, listen to people's conversations online and see what people often complain about. Do they lack IT support? HR consultancy? No time to do ops?Don’t spend too much time thinking about counterfactuals. I know that EAs love to discuss whether this is really the best use of your...]]>
Mon, 17 Apr 2023 15:50:01 +0000 EA - Starting and running your own mini projects: What I've learnt running a newsletter for a year by SofiaBalderson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Starting and running your own mini projects: What I've learnt running a newsletter for a year, published by SofiaBalderson on April 16, 2023 on The Effective Altruism Forum.Tl;dr:There are many unmet needs in the movement that you can fill with a couple of hours a month (without anyone’s permission)My project - the newsletter itself was relatively easy to put together (2-4h a month basic running) and now has 457 subscribers (and counting!)The open rates are amazingly high: over 62% consistentlyIt’s really tempting to give up: please give new projects feedback and encouragementMy Story: how and why I started itI like to be aware of opportunities in animal advocacy and when I see something interesting, I usually can think of a couple of friends who might find it useful. After a couple of years of sending links to a growing network of friends, I realised that I’m spending A LOT of time sharing resources. I thought to myself, why not start a newsletter where I’d put all these opportunities and then can invite my network to sign up. So at first it was a time-saving opportunity. But also I noticed that existing animal advocates are in closed spaces and groups and all the best opportunities are posted there, while there are lot of new and existing animal advocates who don’t have access to these groups. I thought it wasn’t very inclusive and decided to create something that’s open to everybody.Some (very nice!) feedback I got:Reader surveysI ran a very easy, press of a button survey in autumn, and most people said they found the newsletter useful (albeit a very small sample size of 29 and a bias towards most active users, but that just shows how people don’t have time for surveys, as I sent a Google Form that didn't get many answers at all)Individual positive feedback:"I'm so grateful for the newsletter - I've been listening to the podcasts recommended in it during my work hours, and read posts with a cuppa in the mornings/evenings. Loving the resource as it covers so many different aspects and areas within EAA. It'll be heavily used by me as I want to learn as much as I can about the community as I figure out what career in EAA would be a good fit" - Shaileen McGovern, Department of Agriculture, Food and the Marine, Ireland.From Andrés Jiménez Zorrilla, CEO at the Shimp Welfare Project:“For your poll [above], the only negative I have is that I open a dozen new tabs of things to read when I should be working! More seriously, this aggregation of news and events is at the very list inspiring to continue my own activism” - Josh Baldwin.Some stats:The newsletter consistently has on average 62.5% open rate. It started with 80% open rate on the second edition (the one that was the first edition for about half the subscribers to receive, then it gradually went down and it’s been consistent for about 6 months).At the time of writing the newsletter has 457 subscribers. I had a big jump at the launch to just under 200 and then consistently arrived at over 400. I haven't done much promotion until about a week ago (hence the spike).The country composition is very interesting, mainly US and UK, but also Germany, Spain and other countries.What I’ve learnt:If you have a good idea that is based on an actual problem/unmet need, just do it.Don’t listen to your own voice that tells you “No one needs it” or others saying it’s already been done. Check if it has really been done, or if people just think it has. If it has been done, has it been done well enough?If you need some help identifying a problem, listen to people's conversations online and see what people often complain about. Do they lack IT support? HR consultancy? No time to do ops?Don’t spend too much time thinking about counterfactuals. I know that EAs love to discuss whether this is really the best use of your...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Starting and running your own mini projects: What I've learnt running a newsletter for a year, published by SofiaBalderson on April 16, 2023 on The Effective Altruism Forum.Tl;dr:There are many unmet needs in the movement that you can fill with a couple of hours a month (without anyone’s permission)My project - the newsletter itself was relatively easy to put together (2-4h a month basic running) and now has 457 subscribers (and counting!)The open rates are amazingly high: over 62% consistentlyIt’s really tempting to give up: please give new projects feedback and encouragementMy Story: how and why I started itI like to be aware of opportunities in animal advocacy and when I see something interesting, I usually can think of a couple of friends who might find it useful. After a couple of years of sending links to a growing network of friends, I realised that I’m spending A LOT of time sharing resources. I thought to myself, why not start a newsletter where I’d put all these opportunities and then can invite my network to sign up. So at first it was a time-saving opportunity. But also I noticed that existing animal advocates are in closed spaces and groups and all the best opportunities are posted there, while there are lot of new and existing animal advocates who don’t have access to these groups. I thought it wasn’t very inclusive and decided to create something that’s open to everybody.Some (very nice!) feedback I got:Reader surveysI ran a very easy, press of a button survey in autumn, and most people said they found the newsletter useful (albeit a very small sample size of 29 and a bias towards most active users, but that just shows how people don’t have time for surveys, as I sent a Google Form that didn't get many answers at all)Individual positive feedback:"I'm so grateful for the newsletter - I've been listening to the podcasts recommended in it during my work hours, and read posts with a cuppa in the mornings/evenings. Loving the resource as it covers so many different aspects and areas within EAA. It'll be heavily used by me as I want to learn as much as I can about the community as I figure out what career in EAA would be a good fit" - Shaileen McGovern, Department of Agriculture, Food and the Marine, Ireland.From Andrés Jiménez Zorrilla, CEO at the Shimp Welfare Project:“For your poll [above], the only negative I have is that I open a dozen new tabs of things to read when I should be working! More seriously, this aggregation of news and events is at the very list inspiring to continue my own activism” - Josh Baldwin.Some stats:The newsletter consistently has on average 62.5% open rate. It started with 80% open rate on the second edition (the one that was the first edition for about half the subscribers to receive, then it gradually went down and it’s been consistent for about 6 months).At the time of writing the newsletter has 457 subscribers. I had a big jump at the launch to just under 200 and then consistently arrived at over 400. I haven't done much promotion until about a week ago (hence the spike).The country composition is very interesting, mainly US and UK, but also Germany, Spain and other countries.What I’ve learnt:If you have a good idea that is based on an actual problem/unmet need, just do it.Don’t listen to your own voice that tells you “No one needs it” or others saying it’s already been done. Check if it has really been done, or if people just think it has. If it has been done, has it been done well enough?If you need some help identifying a problem, listen to people's conversations online and see what people often complain about. Do they lack IT support? HR consultancy? No time to do ops?Don’t spend too much time thinking about counterfactuals. I know that EAs love to discuss whether this is really the best use of your...]]>
SofiaBalderson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:13 None full 5642
ddF7i9PA83nYR5ewb_NL_EA_EA EA - Predicting the cost-effectiveness of running a randomized controlled trial by Falk Lieder Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predicting the cost-effectiveness of running a randomized controlled trial, published by Falk Lieder on April 17, 2023 on The Effective Altruism Forum.TLDR: Research is underrated. Running an RCT to evaluate a digital intervention for promoting altruism could be more than 10x as cost-effective as the best charities working on global health and wellbeing.In the previous post, we found that – in expectation – Baumsteiger’s (2019) intervention for promoting altruism is about 4x as cost-effective as GiveDirectly but lower than the cost-effectiveness of the Against Malaria Foundation or StrongMinds. However, the uncertainty about the actual cost-effectiveness of this intervention is still extremely high.The uncertainty is, in fact, so high that the 95% credible interval on the cost-effectiveness of the new intervention ranges from -0.5 WELLBYs/$1000 to 88 WELLBYs/$1000. The upper bound of this credible interval is close to the cost-effectiveness of the presumably most cost-effective mental health charity StrongMinds (90 WELLBYs/$1000; Plant, 2022), and more than twice the cost-effectiveness of the Against Malaria Foundation (39 WELLBYs/$1000; Plant, 2022). Based on these estimates, there is a 5% chance that the intervention might be harmful and a more than 5% chance that it might be at least as cost-effective as the charities recommended by GiveWell and the Happier Lives Institute.Because of this high uncertainty, any decisions based on the current state of knowledge could be highly suboptimal compared to what we would do if we had additional information. However, information can be costly, especially when running a randomized controlled trial (RCT). And the more money we spend on information, the less we can spend on saving lives. This dilemma raises the question, “When is it worthwhile to run an RCT to gather more data, and when should we exploit what we already know?” To answer this question, we introduce a new method for predicting the cost-effectiveness of gaining new information through an RCT and comparing it to the cost-effectiveness of cash transfers and directly promoting global health and well-being. We illustrate this method using the intervention by Baumsteiger (2019) as an example.However, the approach we are illustrating is more general and can also be applied to RCTs on established, emerging, and yet unknown EA interventions, including deworming, motivating parents to vaccinate their children, water purification, and interventions for improving mental health.We develop our method in two steps. First, we apply the established Value of Information framework (Howard, 1966) to obtain an upper bound on the cost-effectiveness of running an RCT. Then, we replace this method’s unrealistic assumption of perfect information with more realistic assumptions about the imperfect information generated by an RCT. This yields a new method that can provide more accurate estimates of the cost-effectiveness of evaluation research. As a proof of concept, we apply this method to predict how cost-effective it would be to evaluate the altruism intervention based on Baumsteiger (2019) in RCTs with different numbers of participants. Our method predicts that running such an RCT with 1200 participants would be highly cost-effective. This post is a brief summary of the longer report presented in this interactive notebook.How valuable would it be to know the true exact value of the cost-effectiveness of the intervention by Baumsteiger (2019)?To obtain an upper bound on how valuable it might be to evaluate the intervention by Baumsteiger (2019), I first calculate the value of obtaining perfect information about its cost-effectiveness. The value of perfect information is an established mathematical concept introduced by Howard (1966). It has recently been applied to charity evalu...]]>
Falk Lieder https://forum.effectivealtruism.org/posts/ddF7i9PA83nYR5ewb/predicting-the-cost-effectiveness-of-running-a-randomized Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predicting the cost-effectiveness of running a randomized controlled trial, published by Falk Lieder on April 17, 2023 on The Effective Altruism Forum.TLDR: Research is underrated. Running an RCT to evaluate a digital intervention for promoting altruism could be more than 10x as cost-effective as the best charities working on global health and wellbeing.In the previous post, we found that – in expectation – Baumsteiger’s (2019) intervention for promoting altruism is about 4x as cost-effective as GiveDirectly but lower than the cost-effectiveness of the Against Malaria Foundation or StrongMinds. However, the uncertainty about the actual cost-effectiveness of this intervention is still extremely high.The uncertainty is, in fact, so high that the 95% credible interval on the cost-effectiveness of the new intervention ranges from -0.5 WELLBYs/$1000 to 88 WELLBYs/$1000. The upper bound of this credible interval is close to the cost-effectiveness of the presumably most cost-effective mental health charity StrongMinds (90 WELLBYs/$1000; Plant, 2022), and more than twice the cost-effectiveness of the Against Malaria Foundation (39 WELLBYs/$1000; Plant, 2022). Based on these estimates, there is a 5% chance that the intervention might be harmful and a more than 5% chance that it might be at least as cost-effective as the charities recommended by GiveWell and the Happier Lives Institute.Because of this high uncertainty, any decisions based on the current state of knowledge could be highly suboptimal compared to what we would do if we had additional information. However, information can be costly, especially when running a randomized controlled trial (RCT). And the more money we spend on information, the less we can spend on saving lives. This dilemma raises the question, “When is it worthwhile to run an RCT to gather more data, and when should we exploit what we already know?” To answer this question, we introduce a new method for predicting the cost-effectiveness of gaining new information through an RCT and comparing it to the cost-effectiveness of cash transfers and directly promoting global health and well-being. We illustrate this method using the intervention by Baumsteiger (2019) as an example.However, the approach we are illustrating is more general and can also be applied to RCTs on established, emerging, and yet unknown EA interventions, including deworming, motivating parents to vaccinate their children, water purification, and interventions for improving mental health.We develop our method in two steps. First, we apply the established Value of Information framework (Howard, 1966) to obtain an upper bound on the cost-effectiveness of running an RCT. Then, we replace this method’s unrealistic assumption of perfect information with more realistic assumptions about the imperfect information generated by an RCT. This yields a new method that can provide more accurate estimates of the cost-effectiveness of evaluation research. As a proof of concept, we apply this method to predict how cost-effective it would be to evaluate the altruism intervention based on Baumsteiger (2019) in RCTs with different numbers of participants. Our method predicts that running such an RCT with 1200 participants would be highly cost-effective. This post is a brief summary of the longer report presented in this interactive notebook.How valuable would it be to know the true exact value of the cost-effectiveness of the intervention by Baumsteiger (2019)?To obtain an upper bound on how valuable it might be to evaluate the intervention by Baumsteiger (2019), I first calculate the value of obtaining perfect information about its cost-effectiveness. The value of perfect information is an established mathematical concept introduced by Howard (1966). It has recently been applied to charity evalu...]]>
Mon, 17 Apr 2023 15:41:19 +0000 EA - Predicting the cost-effectiveness of running a randomized controlled trial by Falk Lieder Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predicting the cost-effectiveness of running a randomized controlled trial, published by Falk Lieder on April 17, 2023 on The Effective Altruism Forum.TLDR: Research is underrated. Running an RCT to evaluate a digital intervention for promoting altruism could be more than 10x as cost-effective as the best charities working on global health and wellbeing.In the previous post, we found that – in expectation – Baumsteiger’s (2019) intervention for promoting altruism is about 4x as cost-effective as GiveDirectly but lower than the cost-effectiveness of the Against Malaria Foundation or StrongMinds. However, the uncertainty about the actual cost-effectiveness of this intervention is still extremely high.The uncertainty is, in fact, so high that the 95% credible interval on the cost-effectiveness of the new intervention ranges from -0.5 WELLBYs/$1000 to 88 WELLBYs/$1000. The upper bound of this credible interval is close to the cost-effectiveness of the presumably most cost-effective mental health charity StrongMinds (90 WELLBYs/$1000; Plant, 2022), and more than twice the cost-effectiveness of the Against Malaria Foundation (39 WELLBYs/$1000; Plant, 2022). Based on these estimates, there is a 5% chance that the intervention might be harmful and a more than 5% chance that it might be at least as cost-effective as the charities recommended by GiveWell and the Happier Lives Institute.Because of this high uncertainty, any decisions based on the current state of knowledge could be highly suboptimal compared to what we would do if we had additional information. However, information can be costly, especially when running a randomized controlled trial (RCT). And the more money we spend on information, the less we can spend on saving lives. This dilemma raises the question, “When is it worthwhile to run an RCT to gather more data, and when should we exploit what we already know?” To answer this question, we introduce a new method for predicting the cost-effectiveness of gaining new information through an RCT and comparing it to the cost-effectiveness of cash transfers and directly promoting global health and well-being. We illustrate this method using the intervention by Baumsteiger (2019) as an example.However, the approach we are illustrating is more general and can also be applied to RCTs on established, emerging, and yet unknown EA interventions, including deworming, motivating parents to vaccinate their children, water purification, and interventions for improving mental health.We develop our method in two steps. First, we apply the established Value of Information framework (Howard, 1966) to obtain an upper bound on the cost-effectiveness of running an RCT. Then, we replace this method’s unrealistic assumption of perfect information with more realistic assumptions about the imperfect information generated by an RCT. This yields a new method that can provide more accurate estimates of the cost-effectiveness of evaluation research. As a proof of concept, we apply this method to predict how cost-effective it would be to evaluate the altruism intervention based on Baumsteiger (2019) in RCTs with different numbers of participants. Our method predicts that running such an RCT with 1200 participants would be highly cost-effective. This post is a brief summary of the longer report presented in this interactive notebook.How valuable would it be to know the true exact value of the cost-effectiveness of the intervention by Baumsteiger (2019)?To obtain an upper bound on how valuable it might be to evaluate the intervention by Baumsteiger (2019), I first calculate the value of obtaining perfect information about its cost-effectiveness. The value of perfect information is an established mathematical concept introduced by Howard (1966). It has recently been applied to charity evalu...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predicting the cost-effectiveness of running a randomized controlled trial, published by Falk Lieder on April 17, 2023 on The Effective Altruism Forum.TLDR: Research is underrated. Running an RCT to evaluate a digital intervention for promoting altruism could be more than 10x as cost-effective as the best charities working on global health and wellbeing.In the previous post, we found that – in expectation – Baumsteiger’s (2019) intervention for promoting altruism is about 4x as cost-effective as GiveDirectly but lower than the cost-effectiveness of the Against Malaria Foundation or StrongMinds. However, the uncertainty about the actual cost-effectiveness of this intervention is still extremely high.The uncertainty is, in fact, so high that the 95% credible interval on the cost-effectiveness of the new intervention ranges from -0.5 WELLBYs/$1000 to 88 WELLBYs/$1000. The upper bound of this credible interval is close to the cost-effectiveness of the presumably most cost-effective mental health charity StrongMinds (90 WELLBYs/$1000; Plant, 2022), and more than twice the cost-effectiveness of the Against Malaria Foundation (39 WELLBYs/$1000; Plant, 2022). Based on these estimates, there is a 5% chance that the intervention might be harmful and a more than 5% chance that it might be at least as cost-effective as the charities recommended by GiveWell and the Happier Lives Institute.Because of this high uncertainty, any decisions based on the current state of knowledge could be highly suboptimal compared to what we would do if we had additional information. However, information can be costly, especially when running a randomized controlled trial (RCT). And the more money we spend on information, the less we can spend on saving lives. This dilemma raises the question, “When is it worthwhile to run an RCT to gather more data, and when should we exploit what we already know?” To answer this question, we introduce a new method for predicting the cost-effectiveness of gaining new information through an RCT and comparing it to the cost-effectiveness of cash transfers and directly promoting global health and well-being. We illustrate this method using the intervention by Baumsteiger (2019) as an example.However, the approach we are illustrating is more general and can also be applied to RCTs on established, emerging, and yet unknown EA interventions, including deworming, motivating parents to vaccinate their children, water purification, and interventions for improving mental health.We develop our method in two steps. First, we apply the established Value of Information framework (Howard, 1966) to obtain an upper bound on the cost-effectiveness of running an RCT. Then, we replace this method’s unrealistic assumption of perfect information with more realistic assumptions about the imperfect information generated by an RCT. This yields a new method that can provide more accurate estimates of the cost-effectiveness of evaluation research. As a proof of concept, we apply this method to predict how cost-effective it would be to evaluate the altruism intervention based on Baumsteiger (2019) in RCTs with different numbers of participants. Our method predicts that running such an RCT with 1200 participants would be highly cost-effective. This post is a brief summary of the longer report presented in this interactive notebook.How valuable would it be to know the true exact value of the cost-effectiveness of the intervention by Baumsteiger (2019)?To obtain an upper bound on how valuable it might be to evaluate the intervention by Baumsteiger (2019), I first calculate the value of obtaining perfect information about its cost-effectiveness. The value of perfect information is an established mathematical concept introduced by Howard (1966). It has recently been applied to charity evalu...]]>
Falk Lieder https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:08 None full 5641
xnBF2vcQaZgmhidyb_NL_EA_EA EA - List of Short-Term (less than 15 hours) Biosecurity Projects to Test Your Fit by Sofya Lebedeva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of Short-Term (less than 15 hours) Biosecurity Projects to Test Your Fit, published by Sofya Lebedeva on April 17, 2023 on The Effective Altruism Forum.Thank you Elika Somani, Chris Bakerlee & Tessa Alexanian for the edits & suggestions.I frequently get the following questions:Am I a good fit for biosecurity?How do I test my fit for biosecurity?The general idea of working in biosecurity seems exciting but I am hesitant to apply to a full-time position or a 2-month summer program.How do I get involved in a biosecurity project?What if I am not interested in working in a lab, but think biosecurity is important? What should I do?Having talked to a number of people about this I have composed a list of projects that can be completed without the need for a lab (a computer & internet access is required). These should take 10-15 hours total and will allow individuals to test their fit for biosecurity.List of projects:See the next section for details & specific examples of the projectsConduct & write up a 2-page literature review on any of the following GCBR topics: Infectious Disease Surveillance, UVC, Indoor Air Quality, and PPE.Conduct & write up a 2-page literature review of the existing biosecurity policy of any particular country. In particular, it would be helpful to do this with countries that have biosecurity programmes in their country.Conduct & write up a 2-page survey of the biggest supply chain shocks in the last 100 years, and what can we learn from them for GCBR resilience?Conduct a 2-page distillation of one of the biosecurity articles from this list.Conduct a 2-page review of the labs in your university/local area that are working on biosecurity research.Conduct a 2-page review of the history, current status, and potential future applications to global biological weapon nonproliferation/disarmament of one article on the biological weapons convention.More in-depth information about each project:Conduct & write up a 2-page literature review on any of the following topics:Infectious Disease SurveillanceFar UVC & Indoor Air QualityNext-generation vaccines & antiviralsPPE (Personal Protective Equipment). An example of a speedrunSupply chains & global coordination of any of the above. Particularly focused on identifying issues with existing policy, suggestions, and details around implementing policy (ex. Not just what should a policy cover broadly, but how would you write a policy)Conduct & write up a 2-page literature review of the existing biosecurity policy of any particular country.Policy and governance over dual-use research of concern (DURC) or any element of biosafety and biosecurity (ex. DNA synthesis, enhanced potential pandemic pathogens)DURC, ePPP, gain-of-function, and lab-based biosafety and biosecurityEmerging Technologies and synthetic biology risks and OversightImport ControlSelect AgentsAnimal & Plant RisksDNA synthesis and emerging technologiesSpecific example: review DURC policy in IndiaSpecific example: review strategic countermeasure stockpiling in BrazilFor more examples please look at Global Bio Labs.For more information go to the country profiles from the GHSIThey are lengthy PDFs but contain a lot of information.For example, to find the one for Canada, you would go here and then click "Country Score Justification Summary" and you will get this.Conduct & write up a 2-page survey of the biggest supply chain shocks in the last 100 years, and what can we learn from them for GCBR resilience?Picking one specific shock, examining its effects & evaluating why it was worse than othersConduct a 2-page distillation of one of the biosecurity articles from this list.Your 2-page article should be accessible to a wide audience and should be able to be understood by someone with little background knowledge about biology...]]>
Sofya Lebedeva https://forum.effectivealtruism.org/posts/xnBF2vcQaZgmhidyb/list-of-short-term-less-than-15-hours-biosecurity-projects Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of Short-Term (less than 15 hours) Biosecurity Projects to Test Your Fit, published by Sofya Lebedeva on April 17, 2023 on The Effective Altruism Forum.Thank you Elika Somani, Chris Bakerlee & Tessa Alexanian for the edits & suggestions.I frequently get the following questions:Am I a good fit for biosecurity?How do I test my fit for biosecurity?The general idea of working in biosecurity seems exciting but I am hesitant to apply to a full-time position or a 2-month summer program.How do I get involved in a biosecurity project?What if I am not interested in working in a lab, but think biosecurity is important? What should I do?Having talked to a number of people about this I have composed a list of projects that can be completed without the need for a lab (a computer & internet access is required). These should take 10-15 hours total and will allow individuals to test their fit for biosecurity.List of projects:See the next section for details & specific examples of the projectsConduct & write up a 2-page literature review on any of the following GCBR topics: Infectious Disease Surveillance, UVC, Indoor Air Quality, and PPE.Conduct & write up a 2-page literature review of the existing biosecurity policy of any particular country. In particular, it would be helpful to do this with countries that have biosecurity programmes in their country.Conduct & write up a 2-page survey of the biggest supply chain shocks in the last 100 years, and what can we learn from them for GCBR resilience?Conduct a 2-page distillation of one of the biosecurity articles from this list.Conduct a 2-page review of the labs in your university/local area that are working on biosecurity research.Conduct a 2-page review of the history, current status, and potential future applications to global biological weapon nonproliferation/disarmament of one article on the biological weapons convention.More in-depth information about each project:Conduct & write up a 2-page literature review on any of the following topics:Infectious Disease SurveillanceFar UVC & Indoor Air QualityNext-generation vaccines & antiviralsPPE (Personal Protective Equipment). An example of a speedrunSupply chains & global coordination of any of the above. Particularly focused on identifying issues with existing policy, suggestions, and details around implementing policy (ex. Not just what should a policy cover broadly, but how would you write a policy)Conduct & write up a 2-page literature review of the existing biosecurity policy of any particular country.Policy and governance over dual-use research of concern (DURC) or any element of biosafety and biosecurity (ex. DNA synthesis, enhanced potential pandemic pathogens)DURC, ePPP, gain-of-function, and lab-based biosafety and biosecurityEmerging Technologies and synthetic biology risks and OversightImport ControlSelect AgentsAnimal & Plant RisksDNA synthesis and emerging technologiesSpecific example: review DURC policy in IndiaSpecific example: review strategic countermeasure stockpiling in BrazilFor more examples please look at Global Bio Labs.For more information go to the country profiles from the GHSIThey are lengthy PDFs but contain a lot of information.For example, to find the one for Canada, you would go here and then click "Country Score Justification Summary" and you will get this.Conduct & write up a 2-page survey of the biggest supply chain shocks in the last 100 years, and what can we learn from them for GCBR resilience?Picking one specific shock, examining its effects & evaluating why it was worse than othersConduct a 2-page distillation of one of the biosecurity articles from this list.Your 2-page article should be accessible to a wide audience and should be able to be understood by someone with little background knowledge about biology...]]>
Mon, 17 Apr 2023 08:25:13 +0000 EA - List of Short-Term (less than 15 hours) Biosecurity Projects to Test Your Fit by Sofya Lebedeva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of Short-Term (less than 15 hours) Biosecurity Projects to Test Your Fit, published by Sofya Lebedeva on April 17, 2023 on The Effective Altruism Forum.Thank you Elika Somani, Chris Bakerlee & Tessa Alexanian for the edits & suggestions.I frequently get the following questions:Am I a good fit for biosecurity?How do I test my fit for biosecurity?The general idea of working in biosecurity seems exciting but I am hesitant to apply to a full-time position or a 2-month summer program.How do I get involved in a biosecurity project?What if I am not interested in working in a lab, but think biosecurity is important? What should I do?Having talked to a number of people about this I have composed a list of projects that can be completed without the need for a lab (a computer & internet access is required). These should take 10-15 hours total and will allow individuals to test their fit for biosecurity.List of projects:See the next section for details & specific examples of the projectsConduct & write up a 2-page literature review on any of the following GCBR topics: Infectious Disease Surveillance, UVC, Indoor Air Quality, and PPE.Conduct & write up a 2-page literature review of the existing biosecurity policy of any particular country. In particular, it would be helpful to do this with countries that have biosecurity programmes in their country.Conduct & write up a 2-page survey of the biggest supply chain shocks in the last 100 years, and what can we learn from them for GCBR resilience?Conduct a 2-page distillation of one of the biosecurity articles from this list.Conduct a 2-page review of the labs in your university/local area that are working on biosecurity research.Conduct a 2-page review of the history, current status, and potential future applications to global biological weapon nonproliferation/disarmament of one article on the biological weapons convention.More in-depth information about each project:Conduct & write up a 2-page literature review on any of the following topics:Infectious Disease SurveillanceFar UVC & Indoor Air QualityNext-generation vaccines & antiviralsPPE (Personal Protective Equipment). An example of a speedrunSupply chains & global coordination of any of the above. Particularly focused on identifying issues with existing policy, suggestions, and details around implementing policy (ex. Not just what should a policy cover broadly, but how would you write a policy)Conduct & write up a 2-page literature review of the existing biosecurity policy of any particular country.Policy and governance over dual-use research of concern (DURC) or any element of biosafety and biosecurity (ex. DNA synthesis, enhanced potential pandemic pathogens)DURC, ePPP, gain-of-function, and lab-based biosafety and biosecurityEmerging Technologies and synthetic biology risks and OversightImport ControlSelect AgentsAnimal & Plant RisksDNA synthesis and emerging technologiesSpecific example: review DURC policy in IndiaSpecific example: review strategic countermeasure stockpiling in BrazilFor more examples please look at Global Bio Labs.For more information go to the country profiles from the GHSIThey are lengthy PDFs but contain a lot of information.For example, to find the one for Canada, you would go here and then click "Country Score Justification Summary" and you will get this.Conduct & write up a 2-page survey of the biggest supply chain shocks in the last 100 years, and what can we learn from them for GCBR resilience?Picking one specific shock, examining its effects & evaluating why it was worse than othersConduct a 2-page distillation of one of the biosecurity articles from this list.Your 2-page article should be accessible to a wide audience and should be able to be understood by someone with little background knowledge about biology...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of Short-Term (less than 15 hours) Biosecurity Projects to Test Your Fit, published by Sofya Lebedeva on April 17, 2023 on The Effective Altruism Forum.Thank you Elika Somani, Chris Bakerlee & Tessa Alexanian for the edits & suggestions.I frequently get the following questions:Am I a good fit for biosecurity?How do I test my fit for biosecurity?The general idea of working in biosecurity seems exciting but I am hesitant to apply to a full-time position or a 2-month summer program.How do I get involved in a biosecurity project?What if I am not interested in working in a lab, but think biosecurity is important? What should I do?Having talked to a number of people about this I have composed a list of projects that can be completed without the need for a lab (a computer & internet access is required). These should take 10-15 hours total and will allow individuals to test their fit for biosecurity.List of projects:See the next section for details & specific examples of the projectsConduct & write up a 2-page literature review on any of the following GCBR topics: Infectious Disease Surveillance, UVC, Indoor Air Quality, and PPE.Conduct & write up a 2-page literature review of the existing biosecurity policy of any particular country. In particular, it would be helpful to do this with countries that have biosecurity programmes in their country.Conduct & write up a 2-page survey of the biggest supply chain shocks in the last 100 years, and what can we learn from them for GCBR resilience?Conduct a 2-page distillation of one of the biosecurity articles from this list.Conduct a 2-page review of the labs in your university/local area that are working on biosecurity research.Conduct a 2-page review of the history, current status, and potential future applications to global biological weapon nonproliferation/disarmament of one article on the biological weapons convention.More in-depth information about each project:Conduct & write up a 2-page literature review on any of the following topics:Infectious Disease SurveillanceFar UVC & Indoor Air QualityNext-generation vaccines & antiviralsPPE (Personal Protective Equipment). An example of a speedrunSupply chains & global coordination of any of the above. Particularly focused on identifying issues with existing policy, suggestions, and details around implementing policy (ex. Not just what should a policy cover broadly, but how would you write a policy)Conduct & write up a 2-page literature review of the existing biosecurity policy of any particular country.Policy and governance over dual-use research of concern (DURC) or any element of biosafety and biosecurity (ex. DNA synthesis, enhanced potential pandemic pathogens)DURC, ePPP, gain-of-function, and lab-based biosafety and biosecurityEmerging Technologies and synthetic biology risks and OversightImport ControlSelect AgentsAnimal & Plant RisksDNA synthesis and emerging technologiesSpecific example: review DURC policy in IndiaSpecific example: review strategic countermeasure stockpiling in BrazilFor more examples please look at Global Bio Labs.For more information go to the country profiles from the GHSIThey are lengthy PDFs but contain a lot of information.For example, to find the one for Canada, you would go here and then click "Country Score Justification Summary" and you will get this.Conduct & write up a 2-page survey of the biggest supply chain shocks in the last 100 years, and what can we learn from them for GCBR resilience?Picking one specific shock, examining its effects & evaluating why it was worse than othersConduct a 2-page distillation of one of the biosecurity articles from this list.Your 2-page article should be accessible to a wide audience and should be able to be understood by someone with little background knowledge about biology...]]>
Sofya Lebedeva https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:04 None full 5633
zsLcixRzqr64CacfK_NL_EA_EA EA - ZzappMalaria: Twice as cost-effective as bed nets in urban areas by Arnon Houri Yafin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ZzappMalaria: Twice as cost-effective as bed nets in urban areas, published by Arnon Houri Yafin on April 16, 2023 on The Effective Altruism Forum.I'm Arnon Houri Yafin, CEO of ZzappMalaria. I'm writing about our larviciding pilot published in Malaria Journal (Vigodny et al. 2023), and this post is my interpretation of the results.ZzappMalaria: Twice as cost-effective as bed nets in urban areasTL;DR: Zzapp Malaria’s digital technology for planning and managing large-scale anti-malaria field operations attained results that are twice as cost-effective as bed nets in reducing malaria in urban and semi-urban settings.Call to actionUse our solutionFund usHow to save more than 140,000 people annually for a cost per person that is lower than bed nets?In 2021, 627,000 people died from malaria, more than 80% of whom were children. The number of people who live in areas at high risk for malaria is approximately 1.1 billion - which comprises the population of sub saharan Africa (excluding South Africa) and high burden malaria countries such as Odisha in India. According to estimates, more than 75% of these people live in urban and peri urban areas for which our solution costs less than US$0.7 per person per year. The cost to protect all these people is therefore:0.7 1.1B US$0.7 = US$0.54B.The malaria incidence in urban areas is lower than in villages so it is assumed that the urban and peri urban population accounted for 45% of the malaria death:0.45 627,000 = 282,100 people.Assuming that our solution can reduce malaria by 52.5% (based on a peer-reviewed article we recently published in Malaria Journal; elaboration below), our intervention can save:282,100 0.525 = 148,100 people.As would be elaborated below, we believe the actual numbers of both cost and of effectiveness will dramatically improve over time.What is Zzapp’s solution and how does its technology work?TL;DR: We believe in actively targeting disease-bearing mosquitoes, and use digitization to manage large-scale field operations focusing on treatment of mosquitoes breeding sites.ZzappMalaria harnesses entomological knowledge and data analyzed from satellite imagery and collected by field workers to optimize large-scale larviciding operations. In such operations, the stagnant water bodies in which Anopheles mosquitoes breed are treated with an environmentally benign bacteria that besides mosquitoes and blackflies does not harm any other animals (not even other insects) and is approved for use in sources of drinking water. Although larviciding has led to malaria elimination in many countries in the 20th century, it is not easy to implement since it requires planning, management and monitoring of large teams working kilometers away from each other. To complicate things, effective larviciding requires that a large proportion of water bodies are detected and treated on a regular basis (sometimes weekly).We developed a system comprising a planning tool, mobile app and dashboard designed to overcome these challenges. The system extracts from satellite images the location of houses and demarcates the general area for the intervention. It then recommends where to scan for water bodies, and helps in implementation through a designated GPS-based mobile app that allocates treatment areas to workers, monitors their location in the field to ensure they cover the entire area, and keeps track of schedules for water body treatment. All information is uploaded to a dashboard, allowing managers to monitor the operation in real time. Specifically designed for sub-Saharan Africa, our house-detection algorithm was developed to detect both modern houses and traditionally built huts. Our app was built to work on inexpensive smartphones and in areas with weak internet infrastructure. Its interface is simple and intuitiv...]]>
Arnon Houri Yafin https://forum.effectivealtruism.org/posts/zsLcixRzqr64CacfK/zzappmalaria-twice-as-cost-effective-as-bed-nets-in-urban Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ZzappMalaria: Twice as cost-effective as bed nets in urban areas, published by Arnon Houri Yafin on April 16, 2023 on The Effective Altruism Forum.I'm Arnon Houri Yafin, CEO of ZzappMalaria. I'm writing about our larviciding pilot published in Malaria Journal (Vigodny et al. 2023), and this post is my interpretation of the results.ZzappMalaria: Twice as cost-effective as bed nets in urban areasTL;DR: Zzapp Malaria’s digital technology for planning and managing large-scale anti-malaria field operations attained results that are twice as cost-effective as bed nets in reducing malaria in urban and semi-urban settings.Call to actionUse our solutionFund usHow to save more than 140,000 people annually for a cost per person that is lower than bed nets?In 2021, 627,000 people died from malaria, more than 80% of whom were children. The number of people who live in areas at high risk for malaria is approximately 1.1 billion - which comprises the population of sub saharan Africa (excluding South Africa) and high burden malaria countries such as Odisha in India. According to estimates, more than 75% of these people live in urban and peri urban areas for which our solution costs less than US$0.7 per person per year. The cost to protect all these people is therefore:0.7 1.1B US$0.7 = US$0.54B.The malaria incidence in urban areas is lower than in villages so it is assumed that the urban and peri urban population accounted for 45% of the malaria death:0.45 627,000 = 282,100 people.Assuming that our solution can reduce malaria by 52.5% (based on a peer-reviewed article we recently published in Malaria Journal; elaboration below), our intervention can save:282,100 0.525 = 148,100 people.As would be elaborated below, we believe the actual numbers of both cost and of effectiveness will dramatically improve over time.What is Zzapp’s solution and how does its technology work?TL;DR: We believe in actively targeting disease-bearing mosquitoes, and use digitization to manage large-scale field operations focusing on treatment of mosquitoes breeding sites.ZzappMalaria harnesses entomological knowledge and data analyzed from satellite imagery and collected by field workers to optimize large-scale larviciding operations. In such operations, the stagnant water bodies in which Anopheles mosquitoes breed are treated with an environmentally benign bacteria that besides mosquitoes and blackflies does not harm any other animals (not even other insects) and is approved for use in sources of drinking water. Although larviciding has led to malaria elimination in many countries in the 20th century, it is not easy to implement since it requires planning, management and monitoring of large teams working kilometers away from each other. To complicate things, effective larviciding requires that a large proportion of water bodies are detected and treated on a regular basis (sometimes weekly).We developed a system comprising a planning tool, mobile app and dashboard designed to overcome these challenges. The system extracts from satellite images the location of houses and demarcates the general area for the intervention. It then recommends where to scan for water bodies, and helps in implementation through a designated GPS-based mobile app that allocates treatment areas to workers, monitors their location in the field to ensure they cover the entire area, and keeps track of schedules for water body treatment. All information is uploaded to a dashboard, allowing managers to monitor the operation in real time. Specifically designed for sub-Saharan Africa, our house-detection algorithm was developed to detect both modern houses and traditionally built huts. Our app was built to work on inexpensive smartphones and in areas with weak internet infrastructure. Its interface is simple and intuitiv...]]>
Mon, 17 Apr 2023 00:46:57 +0000 EA - ZzappMalaria: Twice as cost-effective as bed nets in urban areas by Arnon Houri Yafin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ZzappMalaria: Twice as cost-effective as bed nets in urban areas, published by Arnon Houri Yafin on April 16, 2023 on The Effective Altruism Forum.I'm Arnon Houri Yafin, CEO of ZzappMalaria. I'm writing about our larviciding pilot published in Malaria Journal (Vigodny et al. 2023), and this post is my interpretation of the results.ZzappMalaria: Twice as cost-effective as bed nets in urban areasTL;DR: Zzapp Malaria’s digital technology for planning and managing large-scale anti-malaria field operations attained results that are twice as cost-effective as bed nets in reducing malaria in urban and semi-urban settings.Call to actionUse our solutionFund usHow to save more than 140,000 people annually for a cost per person that is lower than bed nets?In 2021, 627,000 people died from malaria, more than 80% of whom were children. The number of people who live in areas at high risk for malaria is approximately 1.1 billion - which comprises the population of sub saharan Africa (excluding South Africa) and high burden malaria countries such as Odisha in India. According to estimates, more than 75% of these people live in urban and peri urban areas for which our solution costs less than US$0.7 per person per year. The cost to protect all these people is therefore:0.7 1.1B US$0.7 = US$0.54B.The malaria incidence in urban areas is lower than in villages so it is assumed that the urban and peri urban population accounted for 45% of the malaria death:0.45 627,000 = 282,100 people.Assuming that our solution can reduce malaria by 52.5% (based on a peer-reviewed article we recently published in Malaria Journal; elaboration below), our intervention can save:282,100 0.525 = 148,100 people.As would be elaborated below, we believe the actual numbers of both cost and of effectiveness will dramatically improve over time.What is Zzapp’s solution and how does its technology work?TL;DR: We believe in actively targeting disease-bearing mosquitoes, and use digitization to manage large-scale field operations focusing on treatment of mosquitoes breeding sites.ZzappMalaria harnesses entomological knowledge and data analyzed from satellite imagery and collected by field workers to optimize large-scale larviciding operations. In such operations, the stagnant water bodies in which Anopheles mosquitoes breed are treated with an environmentally benign bacteria that besides mosquitoes and blackflies does not harm any other animals (not even other insects) and is approved for use in sources of drinking water. Although larviciding has led to malaria elimination in many countries in the 20th century, it is not easy to implement since it requires planning, management and monitoring of large teams working kilometers away from each other. To complicate things, effective larviciding requires that a large proportion of water bodies are detected and treated on a regular basis (sometimes weekly).We developed a system comprising a planning tool, mobile app and dashboard designed to overcome these challenges. The system extracts from satellite images the location of houses and demarcates the general area for the intervention. It then recommends where to scan for water bodies, and helps in implementation through a designated GPS-based mobile app that allocates treatment areas to workers, monitors their location in the field to ensure they cover the entire area, and keeps track of schedules for water body treatment. All information is uploaded to a dashboard, allowing managers to monitor the operation in real time. Specifically designed for sub-Saharan Africa, our house-detection algorithm was developed to detect both modern houses and traditionally built huts. Our app was built to work on inexpensive smartphones and in areas with weak internet infrastructure. Its interface is simple and intuitiv...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ZzappMalaria: Twice as cost-effective as bed nets in urban areas, published by Arnon Houri Yafin on April 16, 2023 on The Effective Altruism Forum.I'm Arnon Houri Yafin, CEO of ZzappMalaria. I'm writing about our larviciding pilot published in Malaria Journal (Vigodny et al. 2023), and this post is my interpretation of the results.ZzappMalaria: Twice as cost-effective as bed nets in urban areasTL;DR: Zzapp Malaria’s digital technology for planning and managing large-scale anti-malaria field operations attained results that are twice as cost-effective as bed nets in reducing malaria in urban and semi-urban settings.Call to actionUse our solutionFund usHow to save more than 140,000 people annually for a cost per person that is lower than bed nets?In 2021, 627,000 people died from malaria, more than 80% of whom were children. The number of people who live in areas at high risk for malaria is approximately 1.1 billion - which comprises the population of sub saharan Africa (excluding South Africa) and high burden malaria countries such as Odisha in India. According to estimates, more than 75% of these people live in urban and peri urban areas for which our solution costs less than US$0.7 per person per year. The cost to protect all these people is therefore:0.7 1.1B US$0.7 = US$0.54B.The malaria incidence in urban areas is lower than in villages so it is assumed that the urban and peri urban population accounted for 45% of the malaria death:0.45 627,000 = 282,100 people.Assuming that our solution can reduce malaria by 52.5% (based on a peer-reviewed article we recently published in Malaria Journal; elaboration below), our intervention can save:282,100 0.525 = 148,100 people.As would be elaborated below, we believe the actual numbers of both cost and of effectiveness will dramatically improve over time.What is Zzapp’s solution and how does its technology work?TL;DR: We believe in actively targeting disease-bearing mosquitoes, and use digitization to manage large-scale field operations focusing on treatment of mosquitoes breeding sites.ZzappMalaria harnesses entomological knowledge and data analyzed from satellite imagery and collected by field workers to optimize large-scale larviciding operations. In such operations, the stagnant water bodies in which Anopheles mosquitoes breed are treated with an environmentally benign bacteria that besides mosquitoes and blackflies does not harm any other animals (not even other insects) and is approved for use in sources of drinking water. Although larviciding has led to malaria elimination in many countries in the 20th century, it is not easy to implement since it requires planning, management and monitoring of large teams working kilometers away from each other. To complicate things, effective larviciding requires that a large proportion of water bodies are detected and treated on a regular basis (sometimes weekly).We developed a system comprising a planning tool, mobile app and dashboard designed to overcome these challenges. The system extracts from satellite images the location of houses and demarcates the general area for the intervention. It then recommends where to scan for water bodies, and helps in implementation through a designated GPS-based mobile app that allocates treatment areas to workers, monitors their location in the field to ensure they cover the entire area, and keeps track of schedules for water body treatment. All information is uploaded to a dashboard, allowing managers to monitor the operation in real time. Specifically designed for sub-Saharan Africa, our house-detection algorithm was developed to detect both modern houses and traditionally built huts. Our app was built to work on inexpensive smartphones and in areas with weak internet infrastructure. Its interface is simple and intuitiv...]]>
Arnon Houri Yafin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:30 None full 5634
K5R35pgPFk3DcGyRp_NL_EA_EA EA - Impactful (Side-)Projects and Organizations to Start by AlexandraB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Impactful (Side-)Projects and Organizations to Start, published by AlexandraB on April 16, 2023 on The Effective Altruism Forum.Are you an (aspiring) changemaker looking for the next thing to do? Consider founding something!In Nov and Dec of 2022 I have scoured the depths of the internet to find as many lists of impactful project/organization/charity ideas as I could to gain inspiration for what to start myself. I hope that sharing this curated version of my 'list of lists' can help more people to get the creative juices flowing and to take the next step.There are definitely resources that I have overlooked or newer ones which did not exist yet during my search, so I'd invite everyone to share ideas and links to additional resources in the comments.Longtermist & Neartermist - mixed ideasPossible gaps in the EA community (2021)Things CEA is not doing (2021) by CEABig List of Cause Candidates (2020)Cause candidates tag on the EA forumAnnotated List of Project Ideas & Volunteering Resources (2020) which includes smaller and larger project ideas and links to many lists.69 things that might be pretty effective to fund (2018) lists many (potentially) high-impact projects.List of possible EA meta-charities (2019)EA Summit Project IdeasProjects I’d like to see (2017)Concrete project lists (2017), both the post and the comments might be valuable.EAV Ideas in Need of Implementation (2015)Longtermist ideasNonlinear's idea list (2023) contains hundreds of ideas. It is not published yet but you can request access to the document.The Future Fund's Project Ideas Competition (2022) - The comments under this post.Ideas that were on the Future Fund's website (2022).Longtermist Entrepreneurship Fellowship ideas database (2020)Neartermist ideasCharity Entrepreneurship's research reports (2020-2023)Animal Welfare charity recommendations by Animal Charity Evaluators:Charities We’re Excited About (2018)Interventions we’d like to see (2017)Charities we’d like to see (2016)Charities we’d like to see by GiveWell (2015)Bonus resourcesCharity Entrepreneurship's handbook (2021)So you want to be a charity entrepreneur. Read these first. (2022)Help for coming up with ideas:Prompts to find problems (2020)Two EA project idea exchange platforms:EA Work Club ProjectsAirtable Project PlatformThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
AlexandraB https://forum.effectivealtruism.org/posts/K5R35pgPFk3DcGyRp/impactful-side-projects-and-organizations-to-start Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Impactful (Side-)Projects and Organizations to Start, published by AlexandraB on April 16, 2023 on The Effective Altruism Forum.Are you an (aspiring) changemaker looking for the next thing to do? Consider founding something!In Nov and Dec of 2022 I have scoured the depths of the internet to find as many lists of impactful project/organization/charity ideas as I could to gain inspiration for what to start myself. I hope that sharing this curated version of my 'list of lists' can help more people to get the creative juices flowing and to take the next step.There are definitely resources that I have overlooked or newer ones which did not exist yet during my search, so I'd invite everyone to share ideas and links to additional resources in the comments.Longtermist & Neartermist - mixed ideasPossible gaps in the EA community (2021)Things CEA is not doing (2021) by CEABig List of Cause Candidates (2020)Cause candidates tag on the EA forumAnnotated List of Project Ideas & Volunteering Resources (2020) which includes smaller and larger project ideas and links to many lists.69 things that might be pretty effective to fund (2018) lists many (potentially) high-impact projects.List of possible EA meta-charities (2019)EA Summit Project IdeasProjects I’d like to see (2017)Concrete project lists (2017), both the post and the comments might be valuable.EAV Ideas in Need of Implementation (2015)Longtermist ideasNonlinear's idea list (2023) contains hundreds of ideas. It is not published yet but you can request access to the document.The Future Fund's Project Ideas Competition (2022) - The comments under this post.Ideas that were on the Future Fund's website (2022).Longtermist Entrepreneurship Fellowship ideas database (2020)Neartermist ideasCharity Entrepreneurship's research reports (2020-2023)Animal Welfare charity recommendations by Animal Charity Evaluators:Charities We’re Excited About (2018)Interventions we’d like to see (2017)Charities we’d like to see (2016)Charities we’d like to see by GiveWell (2015)Bonus resourcesCharity Entrepreneurship's handbook (2021)So you want to be a charity entrepreneur. Read these first. (2022)Help for coming up with ideas:Prompts to find problems (2020)Two EA project idea exchange platforms:EA Work Club ProjectsAirtable Project PlatformThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 16 Apr 2023 21:47:15 +0000 EA - Impactful (Side-)Projects and Organizations to Start by AlexandraB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Impactful (Side-)Projects and Organizations to Start, published by AlexandraB on April 16, 2023 on The Effective Altruism Forum.Are you an (aspiring) changemaker looking for the next thing to do? Consider founding something!In Nov and Dec of 2022 I have scoured the depths of the internet to find as many lists of impactful project/organization/charity ideas as I could to gain inspiration for what to start myself. I hope that sharing this curated version of my 'list of lists' can help more people to get the creative juices flowing and to take the next step.There are definitely resources that I have overlooked or newer ones which did not exist yet during my search, so I'd invite everyone to share ideas and links to additional resources in the comments.Longtermist & Neartermist - mixed ideasPossible gaps in the EA community (2021)Things CEA is not doing (2021) by CEABig List of Cause Candidates (2020)Cause candidates tag on the EA forumAnnotated List of Project Ideas & Volunteering Resources (2020) which includes smaller and larger project ideas and links to many lists.69 things that might be pretty effective to fund (2018) lists many (potentially) high-impact projects.List of possible EA meta-charities (2019)EA Summit Project IdeasProjects I’d like to see (2017)Concrete project lists (2017), both the post and the comments might be valuable.EAV Ideas in Need of Implementation (2015)Longtermist ideasNonlinear's idea list (2023) contains hundreds of ideas. It is not published yet but you can request access to the document.The Future Fund's Project Ideas Competition (2022) - The comments under this post.Ideas that were on the Future Fund's website (2022).Longtermist Entrepreneurship Fellowship ideas database (2020)Neartermist ideasCharity Entrepreneurship's research reports (2020-2023)Animal Welfare charity recommendations by Animal Charity Evaluators:Charities We’re Excited About (2018)Interventions we’d like to see (2017)Charities we’d like to see (2016)Charities we’d like to see by GiveWell (2015)Bonus resourcesCharity Entrepreneurship's handbook (2021)So you want to be a charity entrepreneur. Read these first. (2022)Help for coming up with ideas:Prompts to find problems (2020)Two EA project idea exchange platforms:EA Work Club ProjectsAirtable Project PlatformThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Impactful (Side-)Projects and Organizations to Start, published by AlexandraB on April 16, 2023 on The Effective Altruism Forum.Are you an (aspiring) changemaker looking for the next thing to do? Consider founding something!In Nov and Dec of 2022 I have scoured the depths of the internet to find as many lists of impactful project/organization/charity ideas as I could to gain inspiration for what to start myself. I hope that sharing this curated version of my 'list of lists' can help more people to get the creative juices flowing and to take the next step.There are definitely resources that I have overlooked or newer ones which did not exist yet during my search, so I'd invite everyone to share ideas and links to additional resources in the comments.Longtermist & Neartermist - mixed ideasPossible gaps in the EA community (2021)Things CEA is not doing (2021) by CEABig List of Cause Candidates (2020)Cause candidates tag on the EA forumAnnotated List of Project Ideas & Volunteering Resources (2020) which includes smaller and larger project ideas and links to many lists.69 things that might be pretty effective to fund (2018) lists many (potentially) high-impact projects.List of possible EA meta-charities (2019)EA Summit Project IdeasProjects I’d like to see (2017)Concrete project lists (2017), both the post and the comments might be valuable.EAV Ideas in Need of Implementation (2015)Longtermist ideasNonlinear's idea list (2023) contains hundreds of ideas. It is not published yet but you can request access to the document.The Future Fund's Project Ideas Competition (2022) - The comments under this post.Ideas that were on the Future Fund's website (2022).Longtermist Entrepreneurship Fellowship ideas database (2020)Neartermist ideasCharity Entrepreneurship's research reports (2020-2023)Animal Welfare charity recommendations by Animal Charity Evaluators:Charities We’re Excited About (2018)Interventions we’d like to see (2017)Charities we’d like to see (2016)Charities we’d like to see by GiveWell (2015)Bonus resourcesCharity Entrepreneurship's handbook (2021)So you want to be a charity entrepreneur. Read these first. (2022)Help for coming up with ideas:Prompts to find problems (2020)Two EA project idea exchange platforms:EA Work Club ProjectsAirtable Project PlatformThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
AlexandraB https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:50 None full 5625
4peiwY3gJbswnPEps_NL_EA_EA EA - Local EA groups: consider becoming more than a satellite group by Ada-Maaria Hyvärinen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Local EA groups: consider becoming more than a satellite group, published by Ada-Maaria Hyvärinen on April 16, 2023 on The Effective Altruism Forum.I recently wrote about the history of Effective Altruism Finland, the local EA group I am part of and used to work for. I suspect most readers can guess the main activities we carry out even without clicking on the post. (Spoiler: we have an intro to EA program, a career advising service, an effective donations website, support for university groups and regular EA-themed discussion events).Why do local EA groups often seem so similar to each other? I believe this is because many groups inadvertently become satellite groups. In my post, I try to explain what I mean by this, and argue that local groups should consider developing more distinctive features, even though there may be reasons to deliberately remain a satellite group.What is a satellite group?Some characteristics that would define an archetypal satellite group are:They mainly follow materials produced by other groups, such as introductory fellowship curriculaThey try to undertake activities that are inspired by other EA groups and follow "official" EA adviceThe members' goals are to eventually join another "real" EA organization or move to an EA hub "where the real EAs are" (as opposed to aiming to do things as a group locally or as a collaboration between the members of the group they are currently affiliated with)They might feel the “real EA knowledge” is possessed by other people elsewhere, and that people in their surroundings are not fit to meaningfully contribute to the EA project yetThe theory of impact of a satellite group comes from broadcasting EA material to new people and connecting their audience to other EA groups and organizations. The actual impactful stuff happens only when a group member starts doing impactful things somewhere else than in the context of the group.If you are a new group, starting out as a satellite group makes total sense. For some groups, like university groups, staying a satellite group indefinitely can also be also a good idea. However, for some groups, focusing only on satellite type of activities has also a cost of missed opportunities.Satellite activities are safe and beginner-friendlyThere are a bunch of reasons why groups might get stuck in the satellite phase, even if they intend to move towards more original activities. Satellite activities are often locally optimal: if you need to decide what you personally should do in the next couple of weeks or months, in terms of expected impact it often seems like you should focus on satellite activities. It is easy to make the case for organizing an 80 000 hours themed career planning workshop or a giving game.Satellite activities are also relatively well-regarded within the EA community. If you post on the EA forum about having organized such a career workshop or giving came, other readers will think that your group is doing the reasonable thing.This positive attitude also somewhat translates to funding. If you can show that your group has inspired many people to work for EA organizations, EA funders are more likely to provide funding for your group in the future. It is harder to prove that your members are doing impactful things if they are working on a original local project that the funders don't know much about.Furthermore, doing satellite activities can protect your group from doing obviously bad or useless things. If you can reliably organize an introductory fellowship to EA, it is guaranteed that a certain number of people will understand EA better than before. You don’t need to spend a lot of time calculating your expected impact or thinking about potential downsides: if you can execute the program well enough, you can be assured the thing you do probably...]]>
Ada-Maaria Hyvärinen https://forum.effectivealtruism.org/posts/4peiwY3gJbswnPEps/local-ea-groups-consider-becoming-more-than-a-satellite Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Local EA groups: consider becoming more than a satellite group, published by Ada-Maaria Hyvärinen on April 16, 2023 on The Effective Altruism Forum.I recently wrote about the history of Effective Altruism Finland, the local EA group I am part of and used to work for. I suspect most readers can guess the main activities we carry out even without clicking on the post. (Spoiler: we have an intro to EA program, a career advising service, an effective donations website, support for university groups and regular EA-themed discussion events).Why do local EA groups often seem so similar to each other? I believe this is because many groups inadvertently become satellite groups. In my post, I try to explain what I mean by this, and argue that local groups should consider developing more distinctive features, even though there may be reasons to deliberately remain a satellite group.What is a satellite group?Some characteristics that would define an archetypal satellite group are:They mainly follow materials produced by other groups, such as introductory fellowship curriculaThey try to undertake activities that are inspired by other EA groups and follow "official" EA adviceThe members' goals are to eventually join another "real" EA organization or move to an EA hub "where the real EAs are" (as opposed to aiming to do things as a group locally or as a collaboration between the members of the group they are currently affiliated with)They might feel the “real EA knowledge” is possessed by other people elsewhere, and that people in their surroundings are not fit to meaningfully contribute to the EA project yetThe theory of impact of a satellite group comes from broadcasting EA material to new people and connecting their audience to other EA groups and organizations. The actual impactful stuff happens only when a group member starts doing impactful things somewhere else than in the context of the group.If you are a new group, starting out as a satellite group makes total sense. For some groups, like university groups, staying a satellite group indefinitely can also be also a good idea. However, for some groups, focusing only on satellite type of activities has also a cost of missed opportunities.Satellite activities are safe and beginner-friendlyThere are a bunch of reasons why groups might get stuck in the satellite phase, even if they intend to move towards more original activities. Satellite activities are often locally optimal: if you need to decide what you personally should do in the next couple of weeks or months, in terms of expected impact it often seems like you should focus on satellite activities. It is easy to make the case for organizing an 80 000 hours themed career planning workshop or a giving game.Satellite activities are also relatively well-regarded within the EA community. If you post on the EA forum about having organized such a career workshop or giving came, other readers will think that your group is doing the reasonable thing.This positive attitude also somewhat translates to funding. If you can show that your group has inspired many people to work for EA organizations, EA funders are more likely to provide funding for your group in the future. It is harder to prove that your members are doing impactful things if they are working on a original local project that the funders don't know much about.Furthermore, doing satellite activities can protect your group from doing obviously bad or useless things. If you can reliably organize an introductory fellowship to EA, it is guaranteed that a certain number of people will understand EA better than before. You don’t need to spend a lot of time calculating your expected impact or thinking about potential downsides: if you can execute the program well enough, you can be assured the thing you do probably...]]>
Sun, 16 Apr 2023 16:22:59 +0000 EA - Local EA groups: consider becoming more than a satellite group by Ada-Maaria Hyvärinen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Local EA groups: consider becoming more than a satellite group, published by Ada-Maaria Hyvärinen on April 16, 2023 on The Effective Altruism Forum.I recently wrote about the history of Effective Altruism Finland, the local EA group I am part of and used to work for. I suspect most readers can guess the main activities we carry out even without clicking on the post. (Spoiler: we have an intro to EA program, a career advising service, an effective donations website, support for university groups and regular EA-themed discussion events).Why do local EA groups often seem so similar to each other? I believe this is because many groups inadvertently become satellite groups. In my post, I try to explain what I mean by this, and argue that local groups should consider developing more distinctive features, even though there may be reasons to deliberately remain a satellite group.What is a satellite group?Some characteristics that would define an archetypal satellite group are:They mainly follow materials produced by other groups, such as introductory fellowship curriculaThey try to undertake activities that are inspired by other EA groups and follow "official" EA adviceThe members' goals are to eventually join another "real" EA organization or move to an EA hub "where the real EAs are" (as opposed to aiming to do things as a group locally or as a collaboration between the members of the group they are currently affiliated with)They might feel the “real EA knowledge” is possessed by other people elsewhere, and that people in their surroundings are not fit to meaningfully contribute to the EA project yetThe theory of impact of a satellite group comes from broadcasting EA material to new people and connecting their audience to other EA groups and organizations. The actual impactful stuff happens only when a group member starts doing impactful things somewhere else than in the context of the group.If you are a new group, starting out as a satellite group makes total sense. For some groups, like university groups, staying a satellite group indefinitely can also be also a good idea. However, for some groups, focusing only on satellite type of activities has also a cost of missed opportunities.Satellite activities are safe and beginner-friendlyThere are a bunch of reasons why groups might get stuck in the satellite phase, even if they intend to move towards more original activities. Satellite activities are often locally optimal: if you need to decide what you personally should do in the next couple of weeks or months, in terms of expected impact it often seems like you should focus on satellite activities. It is easy to make the case for organizing an 80 000 hours themed career planning workshop or a giving game.Satellite activities are also relatively well-regarded within the EA community. If you post on the EA forum about having organized such a career workshop or giving came, other readers will think that your group is doing the reasonable thing.This positive attitude also somewhat translates to funding. If you can show that your group has inspired many people to work for EA organizations, EA funders are more likely to provide funding for your group in the future. It is harder to prove that your members are doing impactful things if they are working on a original local project that the funders don't know much about.Furthermore, doing satellite activities can protect your group from doing obviously bad or useless things. If you can reliably organize an introductory fellowship to EA, it is guaranteed that a certain number of people will understand EA better than before. You don’t need to spend a lot of time calculating your expected impact or thinking about potential downsides: if you can execute the program well enough, you can be assured the thing you do probably...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Local EA groups: consider becoming more than a satellite group, published by Ada-Maaria Hyvärinen on April 16, 2023 on The Effective Altruism Forum.I recently wrote about the history of Effective Altruism Finland, the local EA group I am part of and used to work for. I suspect most readers can guess the main activities we carry out even without clicking on the post. (Spoiler: we have an intro to EA program, a career advising service, an effective donations website, support for university groups and regular EA-themed discussion events).Why do local EA groups often seem so similar to each other? I believe this is because many groups inadvertently become satellite groups. In my post, I try to explain what I mean by this, and argue that local groups should consider developing more distinctive features, even though there may be reasons to deliberately remain a satellite group.What is a satellite group?Some characteristics that would define an archetypal satellite group are:They mainly follow materials produced by other groups, such as introductory fellowship curriculaThey try to undertake activities that are inspired by other EA groups and follow "official" EA adviceThe members' goals are to eventually join another "real" EA organization or move to an EA hub "where the real EAs are" (as opposed to aiming to do things as a group locally or as a collaboration between the members of the group they are currently affiliated with)They might feel the “real EA knowledge” is possessed by other people elsewhere, and that people in their surroundings are not fit to meaningfully contribute to the EA project yetThe theory of impact of a satellite group comes from broadcasting EA material to new people and connecting their audience to other EA groups and organizations. The actual impactful stuff happens only when a group member starts doing impactful things somewhere else than in the context of the group.If you are a new group, starting out as a satellite group makes total sense. For some groups, like university groups, staying a satellite group indefinitely can also be also a good idea. However, for some groups, focusing only on satellite type of activities has also a cost of missed opportunities.Satellite activities are safe and beginner-friendlyThere are a bunch of reasons why groups might get stuck in the satellite phase, even if they intend to move towards more original activities. Satellite activities are often locally optimal: if you need to decide what you personally should do in the next couple of weeks or months, in terms of expected impact it often seems like you should focus on satellite activities. It is easy to make the case for organizing an 80 000 hours themed career planning workshop or a giving game.Satellite activities are also relatively well-regarded within the EA community. If you post on the EA forum about having organized such a career workshop or giving came, other readers will think that your group is doing the reasonable thing.This positive attitude also somewhat translates to funding. If you can show that your group has inspired many people to work for EA organizations, EA funders are more likely to provide funding for your group in the future. It is harder to prove that your members are doing impactful things if they are working on a original local project that the funders don't know much about.Furthermore, doing satellite activities can protect your group from doing obviously bad or useless things. If you can reliably organize an introductory fellowship to EA, it is guaranteed that a certain number of people will understand EA better than before. You don’t need to spend a lot of time calculating your expected impact or thinking about potential downsides: if you can execute the program well enough, you can be assured the thing you do probably...]]>
Ada-Maaria Hyvärinen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:47 None full 5626
mLAkwFb9AnZ4uDanJ_NL_EA_EA EA - Giving Guide for Student Organisations – An ineffective outreach project by Karla Still Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Giving Guide for Student Organisations – An ineffective outreach project, published by Karla Still on April 14, 2023 on The Effective Altruism Forum.Executive summaryEA Helsinki created a giving guide booklet for student organisations in Finnish.The main aim of the Giving Guide for Student Organisations was to raise awareness of EA among students by providing them with information on how to give effectively. A sub-goal was to redirect donations from student organisations, although we do not expect the expected value of redirected donations to be worth the effort of producing and marketing the guide.We distributed printed versions of the guide to 121 organisations at 5 universities, most by approaching them at student fairs we attended in the fall of 2022. A digital version of the guide was also sent to the organisations via email a few days after the printed booklet was distributed.The direct results or impact from the giving guide have been almost non-existent. I believe most of the impact comes later from having exposed people to the idea of effective giving, making them more receptive to EA later on.My original reason for writing this post was to offer other EA groups an opportunity to replicate the project, but now I am not so sure that it is worth it. Instead, I would love to get feedback on the impact assessment, the project, and hear if others have tried something similar. I hope the spelled-out project implementation and analysis can help other community builders who might think about starting similar projects. I also wrote this as a report for future community builders in Finland.Tip: Skip to the measuring the impact-section if you aren’t interested in the details about the content and implementation of the project.Content of the giving guideBelow is the structure of the guide. (In parentheses the length of the text in A5 pages.) You can also find an auto translation on the content excluding the donation recommendations or download the Finnish guide on our website:Title: A giving guide for student organisations. How to achieve more impact with the same amount of money.Summary (1)Most help per euro (1)Donating in practiceInspirations from other associations (1)The association act is flexible (⅓)How to choose a targetComparing cause areas (1)Using research by evaluation organisations (⅓)Effective donation targets (summary ⅓)Global health and development (4)GivewellMaximum impact fundPreventing MalariaVitamin A deficiencyVaccination coverageDewormingDirect cash transfersClimate change (2)The Founders Pledge Climate Change FundClean Air Task ForceCarbon180Animal welfare (3.5)Animal Charity EvaluatorsAnimal Charity Evaluatorsin Recommended Charity FundThe Humane LeagueFaunalyticsWild Animal InitiativeSecuring the future (2)Johns Hopkins Center of Health SecurityALLFEDNuclear Threat Initiative Biosecurity ProgramBackground (½)Effective Altruism (½)Links (1)Put impact forward (essentially a call to action box) (1)References (2)Back cover (1)Language and approach of the guideWe aimed for the giving guide to beEncouraging. The tone is hopeful and inviting, focusing on the opportunity of changing the world for the better together.Actionable. The guide includes simple legal advice on donating, inspiration from other student organisations on how to incorporate donations into the organisation’s activities, and recommended charities with links.Trustworthy. We explain why effective giving matters, how we have chosen the recommended cause areas and charities, provide references for further reading, and include a one-page about EA Helsinki.Easy to read.Project backgroundThe student organisation landscape and donatingIn Finland, it is common for students to be members of multiple student organisations and associations. E...]]>
Karla Still https://forum.effectivealtruism.org/posts/mLAkwFb9AnZ4uDanJ/giving-guide-for-student-organisations-an-ineffective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Giving Guide for Student Organisations – An ineffective outreach project, published by Karla Still on April 14, 2023 on The Effective Altruism Forum.Executive summaryEA Helsinki created a giving guide booklet for student organisations in Finnish.The main aim of the Giving Guide for Student Organisations was to raise awareness of EA among students by providing them with information on how to give effectively. A sub-goal was to redirect donations from student organisations, although we do not expect the expected value of redirected donations to be worth the effort of producing and marketing the guide.We distributed printed versions of the guide to 121 organisations at 5 universities, most by approaching them at student fairs we attended in the fall of 2022. A digital version of the guide was also sent to the organisations via email a few days after the printed booklet was distributed.The direct results or impact from the giving guide have been almost non-existent. I believe most of the impact comes later from having exposed people to the idea of effective giving, making them more receptive to EA later on.My original reason for writing this post was to offer other EA groups an opportunity to replicate the project, but now I am not so sure that it is worth it. Instead, I would love to get feedback on the impact assessment, the project, and hear if others have tried something similar. I hope the spelled-out project implementation and analysis can help other community builders who might think about starting similar projects. I also wrote this as a report for future community builders in Finland.Tip: Skip to the measuring the impact-section if you aren’t interested in the details about the content and implementation of the project.Content of the giving guideBelow is the structure of the guide. (In parentheses the length of the text in A5 pages.) You can also find an auto translation on the content excluding the donation recommendations or download the Finnish guide on our website:Title: A giving guide for student organisations. How to achieve more impact with the same amount of money.Summary (1)Most help per euro (1)Donating in practiceInspirations from other associations (1)The association act is flexible (⅓)How to choose a targetComparing cause areas (1)Using research by evaluation organisations (⅓)Effective donation targets (summary ⅓)Global health and development (4)GivewellMaximum impact fundPreventing MalariaVitamin A deficiencyVaccination coverageDewormingDirect cash transfersClimate change (2)The Founders Pledge Climate Change FundClean Air Task ForceCarbon180Animal welfare (3.5)Animal Charity EvaluatorsAnimal Charity Evaluatorsin Recommended Charity FundThe Humane LeagueFaunalyticsWild Animal InitiativeSecuring the future (2)Johns Hopkins Center of Health SecurityALLFEDNuclear Threat Initiative Biosecurity ProgramBackground (½)Effective Altruism (½)Links (1)Put impact forward (essentially a call to action box) (1)References (2)Back cover (1)Language and approach of the guideWe aimed for the giving guide to beEncouraging. The tone is hopeful and inviting, focusing on the opportunity of changing the world for the better together.Actionable. The guide includes simple legal advice on donating, inspiration from other student organisations on how to incorporate donations into the organisation’s activities, and recommended charities with links.Trustworthy. We explain why effective giving matters, how we have chosen the recommended cause areas and charities, provide references for further reading, and include a one-page about EA Helsinki.Easy to read.Project backgroundThe student organisation landscape and donatingIn Finland, it is common for students to be members of multiple student organisations and associations. E...]]>
Sun, 16 Apr 2023 02:12:32 +0000 EA - Giving Guide for Student Organisations – An ineffective outreach project by Karla Still Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Giving Guide for Student Organisations – An ineffective outreach project, published by Karla Still on April 14, 2023 on The Effective Altruism Forum.Executive summaryEA Helsinki created a giving guide booklet for student organisations in Finnish.The main aim of the Giving Guide for Student Organisations was to raise awareness of EA among students by providing them with information on how to give effectively. A sub-goal was to redirect donations from student organisations, although we do not expect the expected value of redirected donations to be worth the effort of producing and marketing the guide.We distributed printed versions of the guide to 121 organisations at 5 universities, most by approaching them at student fairs we attended in the fall of 2022. A digital version of the guide was also sent to the organisations via email a few days after the printed booklet was distributed.The direct results or impact from the giving guide have been almost non-existent. I believe most of the impact comes later from having exposed people to the idea of effective giving, making them more receptive to EA later on.My original reason for writing this post was to offer other EA groups an opportunity to replicate the project, but now I am not so sure that it is worth it. Instead, I would love to get feedback on the impact assessment, the project, and hear if others have tried something similar. I hope the spelled-out project implementation and analysis can help other community builders who might think about starting similar projects. I also wrote this as a report for future community builders in Finland.Tip: Skip to the measuring the impact-section if you aren’t interested in the details about the content and implementation of the project.Content of the giving guideBelow is the structure of the guide. (In parentheses the length of the text in A5 pages.) You can also find an auto translation on the content excluding the donation recommendations or download the Finnish guide on our website:Title: A giving guide for student organisations. How to achieve more impact with the same amount of money.Summary (1)Most help per euro (1)Donating in practiceInspirations from other associations (1)The association act is flexible (⅓)How to choose a targetComparing cause areas (1)Using research by evaluation organisations (⅓)Effective donation targets (summary ⅓)Global health and development (4)GivewellMaximum impact fundPreventing MalariaVitamin A deficiencyVaccination coverageDewormingDirect cash transfersClimate change (2)The Founders Pledge Climate Change FundClean Air Task ForceCarbon180Animal welfare (3.5)Animal Charity EvaluatorsAnimal Charity Evaluatorsin Recommended Charity FundThe Humane LeagueFaunalyticsWild Animal InitiativeSecuring the future (2)Johns Hopkins Center of Health SecurityALLFEDNuclear Threat Initiative Biosecurity ProgramBackground (½)Effective Altruism (½)Links (1)Put impact forward (essentially a call to action box) (1)References (2)Back cover (1)Language and approach of the guideWe aimed for the giving guide to beEncouraging. The tone is hopeful and inviting, focusing on the opportunity of changing the world for the better together.Actionable. The guide includes simple legal advice on donating, inspiration from other student organisations on how to incorporate donations into the organisation’s activities, and recommended charities with links.Trustworthy. We explain why effective giving matters, how we have chosen the recommended cause areas and charities, provide references for further reading, and include a one-page about EA Helsinki.Easy to read.Project backgroundThe student organisation landscape and donatingIn Finland, it is common for students to be members of multiple student organisations and associations. E...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Giving Guide for Student Organisations – An ineffective outreach project, published by Karla Still on April 14, 2023 on The Effective Altruism Forum.Executive summaryEA Helsinki created a giving guide booklet for student organisations in Finnish.The main aim of the Giving Guide for Student Organisations was to raise awareness of EA among students by providing them with information on how to give effectively. A sub-goal was to redirect donations from student organisations, although we do not expect the expected value of redirected donations to be worth the effort of producing and marketing the guide.We distributed printed versions of the guide to 121 organisations at 5 universities, most by approaching them at student fairs we attended in the fall of 2022. A digital version of the guide was also sent to the organisations via email a few days after the printed booklet was distributed.The direct results or impact from the giving guide have been almost non-existent. I believe most of the impact comes later from having exposed people to the idea of effective giving, making them more receptive to EA later on.My original reason for writing this post was to offer other EA groups an opportunity to replicate the project, but now I am not so sure that it is worth it. Instead, I would love to get feedback on the impact assessment, the project, and hear if others have tried something similar. I hope the spelled-out project implementation and analysis can help other community builders who might think about starting similar projects. I also wrote this as a report for future community builders in Finland.Tip: Skip to the measuring the impact-section if you aren’t interested in the details about the content and implementation of the project.Content of the giving guideBelow is the structure of the guide. (In parentheses the length of the text in A5 pages.) You can also find an auto translation on the content excluding the donation recommendations or download the Finnish guide on our website:Title: A giving guide for student organisations. How to achieve more impact with the same amount of money.Summary (1)Most help per euro (1)Donating in practiceInspirations from other associations (1)The association act is flexible (⅓)How to choose a targetComparing cause areas (1)Using research by evaluation organisations (⅓)Effective donation targets (summary ⅓)Global health and development (4)GivewellMaximum impact fundPreventing MalariaVitamin A deficiencyVaccination coverageDewormingDirect cash transfersClimate change (2)The Founders Pledge Climate Change FundClean Air Task ForceCarbon180Animal welfare (3.5)Animal Charity EvaluatorsAnimal Charity Evaluatorsin Recommended Charity FundThe Humane LeagueFaunalyticsWild Animal InitiativeSecuring the future (2)Johns Hopkins Center of Health SecurityALLFEDNuclear Threat Initiative Biosecurity ProgramBackground (½)Effective Altruism (½)Links (1)Put impact forward (essentially a call to action box) (1)References (2)Back cover (1)Language and approach of the guideWe aimed for the giving guide to beEncouraging. The tone is hopeful and inviting, focusing on the opportunity of changing the world for the better together.Actionable. The guide includes simple legal advice on donating, inspiration from other student organisations on how to incorporate donations into the organisation’s activities, and recommended charities with links.Trustworthy. We explain why effective giving matters, how we have chosen the recommended cause areas and charities, provide references for further reading, and include a one-page about EA Helsinki.Easy to read.Project backgroundThe student organisation landscape and donatingIn Finland, it is common for students to be members of multiple student organisations and associations. E...]]>
Karla Still https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:48 None full 5622
XnnfPC2gsgRFZezkE_NL_EA_EA EA - [linkpost] "What Are Reasonable AI Fears?" by Robin Hanson, 2023-04-23 by Arjun Panickssery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [linkpost] "What Are Reasonable AI Fears?" by Robin Hanson, 2023-04-23, published by Arjun Panickssery on April 14, 2023 on The Effective Altruism Forum.Selected quotes (all emphasis mine):Why are we so willing to “other” AIs? Part of it is probably prejudice: some recoil from the very idea of a metal mind. We have, after all, long speculated about possible future conflicts with robots. But part of it is simply fear of change, inflamed by our ignorance of what future AIs might be like. Our fears expand to fill the vacuum left by our lack of knowledge and understanding.The result is that AI doomers entertain many different fears, and addressing them requires discussing a great many different scenarios. Many of these fears, however, are either unfounded or overblown. I will start with the fears I take to be the most reasonable, and end with the most overwrought horror stories, wherein AI threatens to destroy humanity.As an economics professor, I naturally build my analyses on economics, treating AIs as comparable to both laborers and machines, depending on context. You might think this is mistaken since AIs are unprecedentedly different, but economics is rather robust. Even though it offers great insights into familiar human behaviors, most economic theory is actually based on the abstract agents of game theory, who always make exactly the best possible move. Most AI fears seem understandable in economic terms; we fear losing to them at familiar games of economic and political power.He separates a few concerns:"Doomers worry about AIs developing “misaligned” values. But in this scenario, the “values” implicit in AI actions are roughly chosen by the organisations who make them and by the customers who use them. Such value choices are constantly revealed in typical AI behaviors, and tested by trying them in unusual situations.""Some fear that, in this scenario, many disliked conditions of our world—environmental destruction, income inequality, and othering of humans—might continue and even increase. Militaries and police might integrate AIs into their surveillance and weapons. It is true that AI may not solve these problems, and may even empower those who exacerbate them. On the other hand, AI may also empower those seeking solutions. AI just doesn’t seem to be the fundamental problem here.""A related fear is that allowing technical and social change to continue indefinitely might eventually take civilization to places that we don’t want to be. Looking backward, we have benefited from change overall so far, but maybe we just got lucky. If we like where we are and can’t be very confident of where we may go, maybe we shouldn’t take the risk and just stop changing. Or at least create central powers sufficient to control change worldwide, and only allow changes that are widely approved. This may be a proposal worth considering, but AI isn’t the fundamental problem here either.""Some doomers are especially concerned about AI making more persuasive ads and propaganda. However, individual cognitive abilities have long been far outmatched by the teams who work to persuade us—advertisers and video-game designers have been able to reliably hack our psychology for decades. What saves us, if anything does, is that we listen to many competing persuaders, and we trust other teams to advise us on who to believe and what to do. We can continue this approach with AIs.""If we assume that these groups have similar propensities to save and suffer similar rates of theft, then as AIs gradually become more capable and valuable, we should expect the wealth of [AIs and their owners] to increase relative to the wealth of [everyone else]. . . . As almost everyone today is in group C, one fear is of a relatively sudden transition to an AI-dominated economy. While perhaps not the most like...]]>
Arjun Panickssery https://forum.effectivealtruism.org/posts/XnnfPC2gsgRFZezkE/linkpost-what-are-reasonable-ai-fears-by-robin-hanson-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [linkpost] "What Are Reasonable AI Fears?" by Robin Hanson, 2023-04-23, published by Arjun Panickssery on April 14, 2023 on The Effective Altruism Forum.Selected quotes (all emphasis mine):Why are we so willing to “other” AIs? Part of it is probably prejudice: some recoil from the very idea of a metal mind. We have, after all, long speculated about possible future conflicts with robots. But part of it is simply fear of change, inflamed by our ignorance of what future AIs might be like. Our fears expand to fill the vacuum left by our lack of knowledge and understanding.The result is that AI doomers entertain many different fears, and addressing them requires discussing a great many different scenarios. Many of these fears, however, are either unfounded or overblown. I will start with the fears I take to be the most reasonable, and end with the most overwrought horror stories, wherein AI threatens to destroy humanity.As an economics professor, I naturally build my analyses on economics, treating AIs as comparable to both laborers and machines, depending on context. You might think this is mistaken since AIs are unprecedentedly different, but economics is rather robust. Even though it offers great insights into familiar human behaviors, most economic theory is actually based on the abstract agents of game theory, who always make exactly the best possible move. Most AI fears seem understandable in economic terms; we fear losing to them at familiar games of economic and political power.He separates a few concerns:"Doomers worry about AIs developing “misaligned” values. But in this scenario, the “values” implicit in AI actions are roughly chosen by the organisations who make them and by the customers who use them. Such value choices are constantly revealed in typical AI behaviors, and tested by trying them in unusual situations.""Some fear that, in this scenario, many disliked conditions of our world—environmental destruction, income inequality, and othering of humans—might continue and even increase. Militaries and police might integrate AIs into their surveillance and weapons. It is true that AI may not solve these problems, and may even empower those who exacerbate them. On the other hand, AI may also empower those seeking solutions. AI just doesn’t seem to be the fundamental problem here.""A related fear is that allowing technical and social change to continue indefinitely might eventually take civilization to places that we don’t want to be. Looking backward, we have benefited from change overall so far, but maybe we just got lucky. If we like where we are and can’t be very confident of where we may go, maybe we shouldn’t take the risk and just stop changing. Or at least create central powers sufficient to control change worldwide, and only allow changes that are widely approved. This may be a proposal worth considering, but AI isn’t the fundamental problem here either.""Some doomers are especially concerned about AI making more persuasive ads and propaganda. However, individual cognitive abilities have long been far outmatched by the teams who work to persuade us—advertisers and video-game designers have been able to reliably hack our psychology for decades. What saves us, if anything does, is that we listen to many competing persuaders, and we trust other teams to advise us on who to believe and what to do. We can continue this approach with AIs.""If we assume that these groups have similar propensities to save and suffer similar rates of theft, then as AIs gradually become more capable and valuable, we should expect the wealth of [AIs and their owners] to increase relative to the wealth of [everyone else]. . . . As almost everyone today is in group C, one fear is of a relatively sudden transition to an AI-dominated economy. While perhaps not the most like...]]>
Sat, 15 Apr 2023 22:00:47 +0000 EA - [linkpost] "What Are Reasonable AI Fears?" by Robin Hanson, 2023-04-23 by Arjun Panickssery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [linkpost] "What Are Reasonable AI Fears?" by Robin Hanson, 2023-04-23, published by Arjun Panickssery on April 14, 2023 on The Effective Altruism Forum.Selected quotes (all emphasis mine):Why are we so willing to “other” AIs? Part of it is probably prejudice: some recoil from the very idea of a metal mind. We have, after all, long speculated about possible future conflicts with robots. But part of it is simply fear of change, inflamed by our ignorance of what future AIs might be like. Our fears expand to fill the vacuum left by our lack of knowledge and understanding.The result is that AI doomers entertain many different fears, and addressing them requires discussing a great many different scenarios. Many of these fears, however, are either unfounded or overblown. I will start with the fears I take to be the most reasonable, and end with the most overwrought horror stories, wherein AI threatens to destroy humanity.As an economics professor, I naturally build my analyses on economics, treating AIs as comparable to both laborers and machines, depending on context. You might think this is mistaken since AIs are unprecedentedly different, but economics is rather robust. Even though it offers great insights into familiar human behaviors, most economic theory is actually based on the abstract agents of game theory, who always make exactly the best possible move. Most AI fears seem understandable in economic terms; we fear losing to them at familiar games of economic and political power.He separates a few concerns:"Doomers worry about AIs developing “misaligned” values. But in this scenario, the “values” implicit in AI actions are roughly chosen by the organisations who make them and by the customers who use them. Such value choices are constantly revealed in typical AI behaviors, and tested by trying them in unusual situations.""Some fear that, in this scenario, many disliked conditions of our world—environmental destruction, income inequality, and othering of humans—might continue and even increase. Militaries and police might integrate AIs into their surveillance and weapons. It is true that AI may not solve these problems, and may even empower those who exacerbate them. On the other hand, AI may also empower those seeking solutions. AI just doesn’t seem to be the fundamental problem here.""A related fear is that allowing technical and social change to continue indefinitely might eventually take civilization to places that we don’t want to be. Looking backward, we have benefited from change overall so far, but maybe we just got lucky. If we like where we are and can’t be very confident of where we may go, maybe we shouldn’t take the risk and just stop changing. Or at least create central powers sufficient to control change worldwide, and only allow changes that are widely approved. This may be a proposal worth considering, but AI isn’t the fundamental problem here either.""Some doomers are especially concerned about AI making more persuasive ads and propaganda. However, individual cognitive abilities have long been far outmatched by the teams who work to persuade us—advertisers and video-game designers have been able to reliably hack our psychology for decades. What saves us, if anything does, is that we listen to many competing persuaders, and we trust other teams to advise us on who to believe and what to do. We can continue this approach with AIs.""If we assume that these groups have similar propensities to save and suffer similar rates of theft, then as AIs gradually become more capable and valuable, we should expect the wealth of [AIs and their owners] to increase relative to the wealth of [everyone else]. . . . As almost everyone today is in group C, one fear is of a relatively sudden transition to an AI-dominated economy. While perhaps not the most like...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [linkpost] "What Are Reasonable AI Fears?" by Robin Hanson, 2023-04-23, published by Arjun Panickssery on April 14, 2023 on The Effective Altruism Forum.Selected quotes (all emphasis mine):Why are we so willing to “other” AIs? Part of it is probably prejudice: some recoil from the very idea of a metal mind. We have, after all, long speculated about possible future conflicts with robots. But part of it is simply fear of change, inflamed by our ignorance of what future AIs might be like. Our fears expand to fill the vacuum left by our lack of knowledge and understanding.The result is that AI doomers entertain many different fears, and addressing them requires discussing a great many different scenarios. Many of these fears, however, are either unfounded or overblown. I will start with the fears I take to be the most reasonable, and end with the most overwrought horror stories, wherein AI threatens to destroy humanity.As an economics professor, I naturally build my analyses on economics, treating AIs as comparable to both laborers and machines, depending on context. You might think this is mistaken since AIs are unprecedentedly different, but economics is rather robust. Even though it offers great insights into familiar human behaviors, most economic theory is actually based on the abstract agents of game theory, who always make exactly the best possible move. Most AI fears seem understandable in economic terms; we fear losing to them at familiar games of economic and political power.He separates a few concerns:"Doomers worry about AIs developing “misaligned” values. But in this scenario, the “values” implicit in AI actions are roughly chosen by the organisations who make them and by the customers who use them. Such value choices are constantly revealed in typical AI behaviors, and tested by trying them in unusual situations.""Some fear that, in this scenario, many disliked conditions of our world—environmental destruction, income inequality, and othering of humans—might continue and even increase. Militaries and police might integrate AIs into their surveillance and weapons. It is true that AI may not solve these problems, and may even empower those who exacerbate them. On the other hand, AI may also empower those seeking solutions. AI just doesn’t seem to be the fundamental problem here.""A related fear is that allowing technical and social change to continue indefinitely might eventually take civilization to places that we don’t want to be. Looking backward, we have benefited from change overall so far, but maybe we just got lucky. If we like where we are and can’t be very confident of where we may go, maybe we shouldn’t take the risk and just stop changing. Or at least create central powers sufficient to control change worldwide, and only allow changes that are widely approved. This may be a proposal worth considering, but AI isn’t the fundamental problem here either.""Some doomers are especially concerned about AI making more persuasive ads and propaganda. However, individual cognitive abilities have long been far outmatched by the teams who work to persuade us—advertisers and video-game designers have been able to reliably hack our psychology for decades. What saves us, if anything does, is that we listen to many competing persuaders, and we trust other teams to advise us on who to believe and what to do. We can continue this approach with AIs.""If we assume that these groups have similar propensities to save and suffer similar rates of theft, then as AIs gradually become more capable and valuable, we should expect the wealth of [AIs and their owners] to increase relative to the wealth of [everyone else]. . . . As almost everyone today is in group C, one fear is of a relatively sudden transition to an AI-dominated economy. While perhaps not the most like...]]>
Arjun Panickssery https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:05 None full 5621
BgPugYavnr9CQ2NMH_NL_EA_EA EA - AI Safety Europe Retreat 2023 Retrospective by Magdalena Wache Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Europe Retreat 2023 Retrospective, published by Magdalena Wache on April 14, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Magdalena Wache https://forum.effectivealtruism.org/posts/BgPugYavnr9CQ2NMH/ai-safety-europe-retreat-2023-retrospective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Europe Retreat 2023 Retrospective, published by Magdalena Wache on April 14, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 15 Apr 2023 19:41:44 +0000 EA - AI Safety Europe Retreat 2023 Retrospective by Magdalena Wache Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Europe Retreat 2023 Retrospective, published by Magdalena Wache on April 14, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Europe Retreat 2023 Retrospective, published by Magdalena Wache on April 14, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Magdalena Wache https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:23 None full 5618
sJKgLxipgXn7TwiF5_NL_EA_EA EA - Floors and Ceilings, Frameworks and Feelings: SoGive's Impact Analysis Toolkit for Evaluating StrongMinds by ishaan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Floors and Ceilings, Frameworks and Feelings: SoGive's Impact Analysis Toolkit for Evaluating StrongMinds, published by ishaan on April 15, 2023 on The Effective Altruism Forum.Primary author: Ishaan Guptasarma, Principal Analyst at SoGiveWe recently announced that we will be performing an independent assessment of StrongMinds. This is the first in a series of posts which will culminate in that assessment.Executive SummaryIn order to conduct our review of StrongMinds, we needed to make decisions about how to measure the effectiveness of psychotherapy. The approaches that we considered were mainly:An approach used by Happier Lives Institute (HLI), which measures the cumulative effect size of therapy over time (SD-Years) by postulating an initial effect which exponentially decays over time, and integrating under the curve.An approach used by some academics, which reports remission rates and number needed to treat (NNT) associated with psychotherapy, and of relapse rates at various time points as reported in longitudinal follow-ups.We decided that the SD-Years approach used by HLI best captures what we’re trying to capture, because remission, relapse, and NNT use cut-offs which are arbitrary and poorly standardised.The drawback of this method is that it's based on effect sizes, which can become inflated when ceiling and floor effects artificially reduce the standard deviation. Such artefacts are rarely ever accounted for in meta-analyses and literature that we have encountered.For each step in our methodology, we've created spreadsheet tools which others can use to quickly check and replicate our work and do further analysis. These tools can do:Meta-analysis, for calculating standardised mean differences and aggregating effect sizes across multiple studies to estimate the impact of a therapeutic intervention.Linear regressions and meta-regressions, to calculate the rate at which therapeutic effects decay over time.Conversion from remission rates and NNTs into effect sizes, and relapse rates into decay rates, and vice versa.Conversion of scores between different depression questionnaires.Calculation of "standard deviations of improvement" for a single patient, for building intuitions.About SoGive: SoGive does EA research and supports major donors. If you are a major donor seeking support with your donations, we’d be keen to work with you. Feel free to contact Sanjay on sanjay@sogive.org.0 IntroductionHow should the EA community reason about interventions relating to subjective well being? We typically conceptualise the impact of donating to global health anti-malaria charities in terms of figures such as "£5,000 per child's life saved". While evaluating such estimates is difficult, the fundamental question arguably has a "yes or no" answer: was a child's death averted, or not?Measuring impact on subjective well being, which is continuous rather than discrete and is typically measured by self-report, requires a different framework. This post explains the dominant frameworks for thinking about this, explores some of the complications that they introduce, and introduces spreadsheet tools for deploying these frameworks. These tools and analytical considerations that will lay the groundwork for subsequent work.We recommend this post to anyone interested in doing analysis on mental health. It may also be useful to anyone doing impact evaluations on continuous phenomena which manifest as unimodal distributions, especially those which might be approximated as normal distributions.1 The SD-year framework, and why we prefer it to the alternativeAcademic studies usually measure the impact of a mental health intervention by using questionnaires to ask how people feel before and after the intervention, and then comparing their scores to a control group which did ...]]>
ishaan https://forum.effectivealtruism.org/posts/sJKgLxipgXn7TwiF5/floors-and-ceilings-frameworks-and-feelings-sogive-s-impact Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Floors and Ceilings, Frameworks and Feelings: SoGive's Impact Analysis Toolkit for Evaluating StrongMinds, published by ishaan on April 15, 2023 on The Effective Altruism Forum.Primary author: Ishaan Guptasarma, Principal Analyst at SoGiveWe recently announced that we will be performing an independent assessment of StrongMinds. This is the first in a series of posts which will culminate in that assessment.Executive SummaryIn order to conduct our review of StrongMinds, we needed to make decisions about how to measure the effectiveness of psychotherapy. The approaches that we considered were mainly:An approach used by Happier Lives Institute (HLI), which measures the cumulative effect size of therapy over time (SD-Years) by postulating an initial effect which exponentially decays over time, and integrating under the curve.An approach used by some academics, which reports remission rates and number needed to treat (NNT) associated with psychotherapy, and of relapse rates at various time points as reported in longitudinal follow-ups.We decided that the SD-Years approach used by HLI best captures what we’re trying to capture, because remission, relapse, and NNT use cut-offs which are arbitrary and poorly standardised.The drawback of this method is that it's based on effect sizes, which can become inflated when ceiling and floor effects artificially reduce the standard deviation. Such artefacts are rarely ever accounted for in meta-analyses and literature that we have encountered.For each step in our methodology, we've created spreadsheet tools which others can use to quickly check and replicate our work and do further analysis. These tools can do:Meta-analysis, for calculating standardised mean differences and aggregating effect sizes across multiple studies to estimate the impact of a therapeutic intervention.Linear regressions and meta-regressions, to calculate the rate at which therapeutic effects decay over time.Conversion from remission rates and NNTs into effect sizes, and relapse rates into decay rates, and vice versa.Conversion of scores between different depression questionnaires.Calculation of "standard deviations of improvement" for a single patient, for building intuitions.About SoGive: SoGive does EA research and supports major donors. If you are a major donor seeking support with your donations, we’d be keen to work with you. Feel free to contact Sanjay on sanjay@sogive.org.0 IntroductionHow should the EA community reason about interventions relating to subjective well being? We typically conceptualise the impact of donating to global health anti-malaria charities in terms of figures such as "£5,000 per child's life saved". While evaluating such estimates is difficult, the fundamental question arguably has a "yes or no" answer: was a child's death averted, or not?Measuring impact on subjective well being, which is continuous rather than discrete and is typically measured by self-report, requires a different framework. This post explains the dominant frameworks for thinking about this, explores some of the complications that they introduce, and introduces spreadsheet tools for deploying these frameworks. These tools and analytical considerations that will lay the groundwork for subsequent work.We recommend this post to anyone interested in doing analysis on mental health. It may also be useful to anyone doing impact evaluations on continuous phenomena which manifest as unimodal distributions, especially those which might be approximated as normal distributions.1 The SD-year framework, and why we prefer it to the alternativeAcademic studies usually measure the impact of a mental health intervention by using questionnaires to ask how people feel before and after the intervention, and then comparing their scores to a control group which did ...]]>
Sat, 15 Apr 2023 19:33:28 +0000 EA - Floors and Ceilings, Frameworks and Feelings: SoGive's Impact Analysis Toolkit for Evaluating StrongMinds by ishaan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Floors and Ceilings, Frameworks and Feelings: SoGive's Impact Analysis Toolkit for Evaluating StrongMinds, published by ishaan on April 15, 2023 on The Effective Altruism Forum.Primary author: Ishaan Guptasarma, Principal Analyst at SoGiveWe recently announced that we will be performing an independent assessment of StrongMinds. This is the first in a series of posts which will culminate in that assessment.Executive SummaryIn order to conduct our review of StrongMinds, we needed to make decisions about how to measure the effectiveness of psychotherapy. The approaches that we considered were mainly:An approach used by Happier Lives Institute (HLI), which measures the cumulative effect size of therapy over time (SD-Years) by postulating an initial effect which exponentially decays over time, and integrating under the curve.An approach used by some academics, which reports remission rates and number needed to treat (NNT) associated with psychotherapy, and of relapse rates at various time points as reported in longitudinal follow-ups.We decided that the SD-Years approach used by HLI best captures what we’re trying to capture, because remission, relapse, and NNT use cut-offs which are arbitrary and poorly standardised.The drawback of this method is that it's based on effect sizes, which can become inflated when ceiling and floor effects artificially reduce the standard deviation. Such artefacts are rarely ever accounted for in meta-analyses and literature that we have encountered.For each step in our methodology, we've created spreadsheet tools which others can use to quickly check and replicate our work and do further analysis. These tools can do:Meta-analysis, for calculating standardised mean differences and aggregating effect sizes across multiple studies to estimate the impact of a therapeutic intervention.Linear regressions and meta-regressions, to calculate the rate at which therapeutic effects decay over time.Conversion from remission rates and NNTs into effect sizes, and relapse rates into decay rates, and vice versa.Conversion of scores between different depression questionnaires.Calculation of "standard deviations of improvement" for a single patient, for building intuitions.About SoGive: SoGive does EA research and supports major donors. If you are a major donor seeking support with your donations, we’d be keen to work with you. Feel free to contact Sanjay on sanjay@sogive.org.0 IntroductionHow should the EA community reason about interventions relating to subjective well being? We typically conceptualise the impact of donating to global health anti-malaria charities in terms of figures such as "£5,000 per child's life saved". While evaluating such estimates is difficult, the fundamental question arguably has a "yes or no" answer: was a child's death averted, or not?Measuring impact on subjective well being, which is continuous rather than discrete and is typically measured by self-report, requires a different framework. This post explains the dominant frameworks for thinking about this, explores some of the complications that they introduce, and introduces spreadsheet tools for deploying these frameworks. These tools and analytical considerations that will lay the groundwork for subsequent work.We recommend this post to anyone interested in doing analysis on mental health. It may also be useful to anyone doing impact evaluations on continuous phenomena which manifest as unimodal distributions, especially those which might be approximated as normal distributions.1 The SD-year framework, and why we prefer it to the alternativeAcademic studies usually measure the impact of a mental health intervention by using questionnaires to ask how people feel before and after the intervention, and then comparing their scores to a control group which did ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Floors and Ceilings, Frameworks and Feelings: SoGive's Impact Analysis Toolkit for Evaluating StrongMinds, published by ishaan on April 15, 2023 on The Effective Altruism Forum.Primary author: Ishaan Guptasarma, Principal Analyst at SoGiveWe recently announced that we will be performing an independent assessment of StrongMinds. This is the first in a series of posts which will culminate in that assessment.Executive SummaryIn order to conduct our review of StrongMinds, we needed to make decisions about how to measure the effectiveness of psychotherapy. The approaches that we considered were mainly:An approach used by Happier Lives Institute (HLI), which measures the cumulative effect size of therapy over time (SD-Years) by postulating an initial effect which exponentially decays over time, and integrating under the curve.An approach used by some academics, which reports remission rates and number needed to treat (NNT) associated with psychotherapy, and of relapse rates at various time points as reported in longitudinal follow-ups.We decided that the SD-Years approach used by HLI best captures what we’re trying to capture, because remission, relapse, and NNT use cut-offs which are arbitrary and poorly standardised.The drawback of this method is that it's based on effect sizes, which can become inflated when ceiling and floor effects artificially reduce the standard deviation. Such artefacts are rarely ever accounted for in meta-analyses and literature that we have encountered.For each step in our methodology, we've created spreadsheet tools which others can use to quickly check and replicate our work and do further analysis. These tools can do:Meta-analysis, for calculating standardised mean differences and aggregating effect sizes across multiple studies to estimate the impact of a therapeutic intervention.Linear regressions and meta-regressions, to calculate the rate at which therapeutic effects decay over time.Conversion from remission rates and NNTs into effect sizes, and relapse rates into decay rates, and vice versa.Conversion of scores between different depression questionnaires.Calculation of "standard deviations of improvement" for a single patient, for building intuitions.About SoGive: SoGive does EA research and supports major donors. If you are a major donor seeking support with your donations, we’d be keen to work with you. Feel free to contact Sanjay on sanjay@sogive.org.0 IntroductionHow should the EA community reason about interventions relating to subjective well being? We typically conceptualise the impact of donating to global health anti-malaria charities in terms of figures such as "£5,000 per child's life saved". While evaluating such estimates is difficult, the fundamental question arguably has a "yes or no" answer: was a child's death averted, or not?Measuring impact on subjective well being, which is continuous rather than discrete and is typically measured by self-report, requires a different framework. This post explains the dominant frameworks for thinking about this, explores some of the complications that they introduce, and introduces spreadsheet tools for deploying these frameworks. These tools and analytical considerations that will lay the groundwork for subsequent work.We recommend this post to anyone interested in doing analysis on mental health. It may also be useful to anyone doing impact evaluations on continuous phenomena which manifest as unimodal distributions, especially those which might be approximated as normal distributions.1 The SD-year framework, and why we prefer it to the alternativeAcademic studies usually measure the impact of a mental health intervention by using questionnaires to ask how people feel before and after the intervention, and then comparing their scores to a control group which did ...]]>
ishaan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 36:49 None full 5627
EcrNFxGszfgcGevtf_NL_EA_EA EA - "Risk Awareness Moments" (Rams): A concept for thinking about AI governance interventions by oeg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Risk Awareness Moments" (Rams): A concept for thinking about AI governance interventions, published by oeg on April 14, 2023 on The Effective Altruism Forum.In this post, I introduce the concept of Risk Awareness Moments (“Rams”): “A point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable.” This is a blog post, not a research report, meaning it was produced quickly and is not to Rethink Priorities’ typical standards of substantiveness and careful checking for accuracy.SummaryI give several examples of what a Ram might look like for national elites and/or the general population of a major country. Causes could include failures of AI systems, or more social phenomena, such as new books being published about AI risk.I compare the Ram concept to similar concepts such as warning shots. I see two main benefits: (1) Rams let us remain agnostic about what types of evidence make people concerned, e.g., something that AI does, vs. social phenomena; (2) the concept lets us remain agnostic about the “trajectory” to people being concerned about the risk, e.g., whether there is a more discrete/continuous/lumpy change in opinion.For many audiences, and potential ways in which AI progress could play out, there will not necessarily be a Ram. For example, there might be a fast takeoff before the general public has a chance to significantly alter their beliefs about AI.We could do things to increase the likelihood of Rams, or to accelerate their occurrence. That said, there are complex considerations about whether actions to cause (earlier) Rams would be net positive.A Ram - even among influential audiences - is not sufficient for adequate risk-reduction measures to be put in place. For example, there could be bargaining failures between countries that make it impossible to get mutually beneficial AI safety agreements. Or people who are more aware of the risks from transformative AI might also be more aware of the benefits, and thus make an informed decision that the benefits are worth the risks by their lights.At the end, I give some historical examples of Rams for issues other than AI risk.From DALL-E 2DefinitionI define a Risk Awareness Moment (Ram) as “a point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable.”Extreme risks refers to risks at least at the level of global catastrophic risks (GCRs). I intend the term to capture accident, misuse, and structural risks.Note that people could be concerned about some extreme risks from AI without being concerned about other risks. For example, the general public might become worried about risks from non-robust narrow AI in nuclear weapons systems, without being worried about misaligned AGI. Concern about one risk would not necessarily make it possible to get measures that would be helpful for tackling other risks.Additionally, some audiences might have unreasonable threat models. One possible example of this would be an incorrect belief that accidents with Lethal Autonomous Weapons would themselves cause GCR-level damage. Similar to the bullet point above, this belief might be necessary for measures to tackle the specific (potentially overblown) threat model, without necessarily being helpful for measures to tackle other risks.Relevant audiences will differ according to the measure in question. For example, measures carried out by labs might require people in labs to be widely convinced. In contrast, government-led measures might require people in specific parts of the government - and maybe also the public - to be convinced.Extreme measures could include national-le...]]>
oeg https://forum.effectivealtruism.org/posts/EcrNFxGszfgcGevtf/risk-awareness-moments-rams-a-concept-for-thinking-about-ai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Risk Awareness Moments" (Rams): A concept for thinking about AI governance interventions, published by oeg on April 14, 2023 on The Effective Altruism Forum.In this post, I introduce the concept of Risk Awareness Moments (“Rams”): “A point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable.” This is a blog post, not a research report, meaning it was produced quickly and is not to Rethink Priorities’ typical standards of substantiveness and careful checking for accuracy.SummaryI give several examples of what a Ram might look like for national elites and/or the general population of a major country. Causes could include failures of AI systems, or more social phenomena, such as new books being published about AI risk.I compare the Ram concept to similar concepts such as warning shots. I see two main benefits: (1) Rams let us remain agnostic about what types of evidence make people concerned, e.g., something that AI does, vs. social phenomena; (2) the concept lets us remain agnostic about the “trajectory” to people being concerned about the risk, e.g., whether there is a more discrete/continuous/lumpy change in opinion.For many audiences, and potential ways in which AI progress could play out, there will not necessarily be a Ram. For example, there might be a fast takeoff before the general public has a chance to significantly alter their beliefs about AI.We could do things to increase the likelihood of Rams, or to accelerate their occurrence. That said, there are complex considerations about whether actions to cause (earlier) Rams would be net positive.A Ram - even among influential audiences - is not sufficient for adequate risk-reduction measures to be put in place. For example, there could be bargaining failures between countries that make it impossible to get mutually beneficial AI safety agreements. Or people who are more aware of the risks from transformative AI might also be more aware of the benefits, and thus make an informed decision that the benefits are worth the risks by their lights.At the end, I give some historical examples of Rams for issues other than AI risk.From DALL-E 2DefinitionI define a Risk Awareness Moment (Ram) as “a point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable.”Extreme risks refers to risks at least at the level of global catastrophic risks (GCRs). I intend the term to capture accident, misuse, and structural risks.Note that people could be concerned about some extreme risks from AI without being concerned about other risks. For example, the general public might become worried about risks from non-robust narrow AI in nuclear weapons systems, without being worried about misaligned AGI. Concern about one risk would not necessarily make it possible to get measures that would be helpful for tackling other risks.Additionally, some audiences might have unreasonable threat models. One possible example of this would be an incorrect belief that accidents with Lethal Autonomous Weapons would themselves cause GCR-level damage. Similar to the bullet point above, this belief might be necessary for measures to tackle the specific (potentially overblown) threat model, without necessarily being helpful for measures to tackle other risks.Relevant audiences will differ according to the measure in question. For example, measures carried out by labs might require people in labs to be widely convinced. In contrast, government-led measures might require people in specific parts of the government - and maybe also the public - to be convinced.Extreme measures could include national-le...]]>
Sat, 15 Apr 2023 16:35:51 +0000 EA - "Risk Awareness Moments" (Rams): A concept for thinking about AI governance interventions by oeg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Risk Awareness Moments" (Rams): A concept for thinking about AI governance interventions, published by oeg on April 14, 2023 on The Effective Altruism Forum.In this post, I introduce the concept of Risk Awareness Moments (“Rams”): “A point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable.” This is a blog post, not a research report, meaning it was produced quickly and is not to Rethink Priorities’ typical standards of substantiveness and careful checking for accuracy.SummaryI give several examples of what a Ram might look like for national elites and/or the general population of a major country. Causes could include failures of AI systems, or more social phenomena, such as new books being published about AI risk.I compare the Ram concept to similar concepts such as warning shots. I see two main benefits: (1) Rams let us remain agnostic about what types of evidence make people concerned, e.g., something that AI does, vs. social phenomena; (2) the concept lets us remain agnostic about the “trajectory” to people being concerned about the risk, e.g., whether there is a more discrete/continuous/lumpy change in opinion.For many audiences, and potential ways in which AI progress could play out, there will not necessarily be a Ram. For example, there might be a fast takeoff before the general public has a chance to significantly alter their beliefs about AI.We could do things to increase the likelihood of Rams, or to accelerate their occurrence. That said, there are complex considerations about whether actions to cause (earlier) Rams would be net positive.A Ram - even among influential audiences - is not sufficient for adequate risk-reduction measures to be put in place. For example, there could be bargaining failures between countries that make it impossible to get mutually beneficial AI safety agreements. Or people who are more aware of the risks from transformative AI might also be more aware of the benefits, and thus make an informed decision that the benefits are worth the risks by their lights.At the end, I give some historical examples of Rams for issues other than AI risk.From DALL-E 2DefinitionI define a Risk Awareness Moment (Ram) as “a point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable.”Extreme risks refers to risks at least at the level of global catastrophic risks (GCRs). I intend the term to capture accident, misuse, and structural risks.Note that people could be concerned about some extreme risks from AI without being concerned about other risks. For example, the general public might become worried about risks from non-robust narrow AI in nuclear weapons systems, without being worried about misaligned AGI. Concern about one risk would not necessarily make it possible to get measures that would be helpful for tackling other risks.Additionally, some audiences might have unreasonable threat models. One possible example of this would be an incorrect belief that accidents with Lethal Autonomous Weapons would themselves cause GCR-level damage. Similar to the bullet point above, this belief might be necessary for measures to tackle the specific (potentially overblown) threat model, without necessarily being helpful for measures to tackle other risks.Relevant audiences will differ according to the measure in question. For example, measures carried out by labs might require people in labs to be widely convinced. In contrast, government-led measures might require people in specific parts of the government - and maybe also the public - to be convinced.Extreme measures could include national-le...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Risk Awareness Moments" (Rams): A concept for thinking about AI governance interventions, published by oeg on April 14, 2023 on The Effective Altruism Forum.In this post, I introduce the concept of Risk Awareness Moments (“Rams”): “A point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable.” This is a blog post, not a research report, meaning it was produced quickly and is not to Rethink Priorities’ typical standards of substantiveness and careful checking for accuracy.SummaryI give several examples of what a Ram might look like for national elites and/or the general population of a major country. Causes could include failures of AI systems, or more social phenomena, such as new books being published about AI risk.I compare the Ram concept to similar concepts such as warning shots. I see two main benefits: (1) Rams let us remain agnostic about what types of evidence make people concerned, e.g., something that AI does, vs. social phenomena; (2) the concept lets us remain agnostic about the “trajectory” to people being concerned about the risk, e.g., whether there is a more discrete/continuous/lumpy change in opinion.For many audiences, and potential ways in which AI progress could play out, there will not necessarily be a Ram. For example, there might be a fast takeoff before the general public has a chance to significantly alter their beliefs about AI.We could do things to increase the likelihood of Rams, or to accelerate their occurrence. That said, there are complex considerations about whether actions to cause (earlier) Rams would be net positive.A Ram - even among influential audiences - is not sufficient for adequate risk-reduction measures to be put in place. For example, there could be bargaining failures between countries that make it impossible to get mutually beneficial AI safety agreements. Or people who are more aware of the risks from transformative AI might also be more aware of the benefits, and thus make an informed decision that the benefits are worth the risks by their lights.At the end, I give some historical examples of Rams for issues other than AI risk.From DALL-E 2DefinitionI define a Risk Awareness Moment (Ram) as “a point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable.”Extreme risks refers to risks at least at the level of global catastrophic risks (GCRs). I intend the term to capture accident, misuse, and structural risks.Note that people could be concerned about some extreme risks from AI without being concerned about other risks. For example, the general public might become worried about risks from non-robust narrow AI in nuclear weapons systems, without being worried about misaligned AGI. Concern about one risk would not necessarily make it possible to get measures that would be helpful for tackling other risks.Additionally, some audiences might have unreasonable threat models. One possible example of this would be an incorrect belief that accidents with Lethal Autonomous Weapons would themselves cause GCR-level damage. Similar to the bullet point above, this belief might be necessary for measures to tackle the specific (potentially overblown) threat model, without necessarily being helpful for measures to tackle other risks.Relevant audiences will differ according to the measure in question. For example, measures carried out by labs might require people in labs to be widely convinced. In contrast, government-led measures might require people in specific parts of the government - and maybe also the public - to be convinced.Extreme measures could include national-le...]]>
oeg https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 19:20 None full 5616
Rn53J2BQwooMDq6Eo_NL_EA_EA EA - Paths to reducing rodenticide use in the U.S. by Holly Elmore Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paths to reducing rodenticide use in the U.S., published by Holly Elmore on April 15, 2023 on The Effective Altruism Forum.This is the third report in a sequence of reports on reducing the use of rodenticide poisons. It is not necessary to have read the previous reports to read this one, although this report will take for granted conclusions that were discussed and weighed in depth in the previous reports. Part 1 describes why rodenticides are crueler to rodents and more dangerous for human children, pets, and wildlife than most alternative methods for removing rodents from homes, businesses, and farms. Part 2 explains why both consumers and businesses have strong incentives to continue relying on rodenticides. This report describes and ranks interventions to reduce rodenticide use in the U.S. according to their expected impact, neglectedness, and tractability. We leave aside how the interventions we discuss might change attitudes toward pest populations in the long term due to a lack of relevant evidence.The report is grouped into sections by the class of intervention: Legislative interventions, Information campaigns, Technological disruption, and Funding research. We are pessimistic about the legislative interventions that severely restrict legal rodenticide use. For example, California's recent ban on second-generation anticoagulant rodenticides (SGARs) is riddled with exemptions and may increase the use of alternatives that are even crueler to rodents. However, there are less controversial reforms, such as sanitation reform, that would reduce some rodenticide use, especially at the local level.Our top recommended intervention is investing in improved rodent birth control. [part 1] of this sequence was enthusiastic about existing EPA-approved rodent birth control ContraPest, but in part 2 we reported additional findings that led us to conclude that ContraPest is too expensive and cumbersome to replace the role that rodenticides currently play. Birth control baits that are cheaper and more versatile than Contrapest could replace rodenticide in many (though probably not all) situations.Our runner-up recommendation is to run digital information campaigns to educate the public on the costs and dangers of rodenticides. Digital advertising is cheap, and can quickly reach millions of people without having to first develop personal relationships with voters. Although the relatively grassroots approach of extant anti-rodenticide activism may be a sign that more impersonal approaches would not work, there is value in testing how much can be accomplished through mass communication alone.Table 1: A summary of the overall ranking of the 10 interventions types analyzed above. We scored Impact, Neglectedness, and Tractability holistically using High (5), Medium-High (4), Medium (3), Medium-Low (2), and Low (1), which were converted to a 5-point scale. The rankings in the right-most column are a result of adding up the numeric values that Intervention Type received in Impact, Neglectedness, and Tractability. Ties were resolved by holistically considering which intervention we believed was overall higher priority.We acknowledge that some interventions that do not look promising on their own may increase the tractability of more promising interventions. For example, obtaining local- and state-level bans may be a hassle and the results may be imperfect, but legal pressure to find alternatives may spur investment in new technology that is both more effective and humane than rodenticides. Readers may also have a personal advantage in implementing certain interventions and therefore may want to prioritize implementing those even if other interventions are more highly ranked in this report.This research is a project of Rethink Priorities. It was written by Holly Elmore. If you’re...]]>
Holly Elmore https://forum.effectivealtruism.org/posts/Rn53J2BQwooMDq6Eo/paths-to-reducing-rodenticide-use-in-the-u-s Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paths to reducing rodenticide use in the U.S., published by Holly Elmore on April 15, 2023 on The Effective Altruism Forum.This is the third report in a sequence of reports on reducing the use of rodenticide poisons. It is not necessary to have read the previous reports to read this one, although this report will take for granted conclusions that were discussed and weighed in depth in the previous reports. Part 1 describes why rodenticides are crueler to rodents and more dangerous for human children, pets, and wildlife than most alternative methods for removing rodents from homes, businesses, and farms. Part 2 explains why both consumers and businesses have strong incentives to continue relying on rodenticides. This report describes and ranks interventions to reduce rodenticide use in the U.S. according to their expected impact, neglectedness, and tractability. We leave aside how the interventions we discuss might change attitudes toward pest populations in the long term due to a lack of relevant evidence.The report is grouped into sections by the class of intervention: Legislative interventions, Information campaigns, Technological disruption, and Funding research. We are pessimistic about the legislative interventions that severely restrict legal rodenticide use. For example, California's recent ban on second-generation anticoagulant rodenticides (SGARs) is riddled with exemptions and may increase the use of alternatives that are even crueler to rodents. However, there are less controversial reforms, such as sanitation reform, that would reduce some rodenticide use, especially at the local level.Our top recommended intervention is investing in improved rodent birth control. [part 1] of this sequence was enthusiastic about existing EPA-approved rodent birth control ContraPest, but in part 2 we reported additional findings that led us to conclude that ContraPest is too expensive and cumbersome to replace the role that rodenticides currently play. Birth control baits that are cheaper and more versatile than Contrapest could replace rodenticide in many (though probably not all) situations.Our runner-up recommendation is to run digital information campaigns to educate the public on the costs and dangers of rodenticides. Digital advertising is cheap, and can quickly reach millions of people without having to first develop personal relationships with voters. Although the relatively grassroots approach of extant anti-rodenticide activism may be a sign that more impersonal approaches would not work, there is value in testing how much can be accomplished through mass communication alone.Table 1: A summary of the overall ranking of the 10 interventions types analyzed above. We scored Impact, Neglectedness, and Tractability holistically using High (5), Medium-High (4), Medium (3), Medium-Low (2), and Low (1), which were converted to a 5-point scale. The rankings in the right-most column are a result of adding up the numeric values that Intervention Type received in Impact, Neglectedness, and Tractability. Ties were resolved by holistically considering which intervention we believed was overall higher priority.We acknowledge that some interventions that do not look promising on their own may increase the tractability of more promising interventions. For example, obtaining local- and state-level bans may be a hassle and the results may be imperfect, but legal pressure to find alternatives may spur investment in new technology that is both more effective and humane than rodenticides. Readers may also have a personal advantage in implementing certain interventions and therefore may want to prioritize implementing those even if other interventions are more highly ranked in this report.This research is a project of Rethink Priorities. It was written by Holly Elmore. If you’re...]]>
Sat, 15 Apr 2023 15:20:04 +0000 EA - Paths to reducing rodenticide use in the U.S. by Holly Elmore Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paths to reducing rodenticide use in the U.S., published by Holly Elmore on April 15, 2023 on The Effective Altruism Forum.This is the third report in a sequence of reports on reducing the use of rodenticide poisons. It is not necessary to have read the previous reports to read this one, although this report will take for granted conclusions that were discussed and weighed in depth in the previous reports. Part 1 describes why rodenticides are crueler to rodents and more dangerous for human children, pets, and wildlife than most alternative methods for removing rodents from homes, businesses, and farms. Part 2 explains why both consumers and businesses have strong incentives to continue relying on rodenticides. This report describes and ranks interventions to reduce rodenticide use in the U.S. according to their expected impact, neglectedness, and tractability. We leave aside how the interventions we discuss might change attitudes toward pest populations in the long term due to a lack of relevant evidence.The report is grouped into sections by the class of intervention: Legislative interventions, Information campaigns, Technological disruption, and Funding research. We are pessimistic about the legislative interventions that severely restrict legal rodenticide use. For example, California's recent ban on second-generation anticoagulant rodenticides (SGARs) is riddled with exemptions and may increase the use of alternatives that are even crueler to rodents. However, there are less controversial reforms, such as sanitation reform, that would reduce some rodenticide use, especially at the local level.Our top recommended intervention is investing in improved rodent birth control. [part 1] of this sequence was enthusiastic about existing EPA-approved rodent birth control ContraPest, but in part 2 we reported additional findings that led us to conclude that ContraPest is too expensive and cumbersome to replace the role that rodenticides currently play. Birth control baits that are cheaper and more versatile than Contrapest could replace rodenticide in many (though probably not all) situations.Our runner-up recommendation is to run digital information campaigns to educate the public on the costs and dangers of rodenticides. Digital advertising is cheap, and can quickly reach millions of people without having to first develop personal relationships with voters. Although the relatively grassroots approach of extant anti-rodenticide activism may be a sign that more impersonal approaches would not work, there is value in testing how much can be accomplished through mass communication alone.Table 1: A summary of the overall ranking of the 10 interventions types analyzed above. We scored Impact, Neglectedness, and Tractability holistically using High (5), Medium-High (4), Medium (3), Medium-Low (2), and Low (1), which were converted to a 5-point scale. The rankings in the right-most column are a result of adding up the numeric values that Intervention Type received in Impact, Neglectedness, and Tractability. Ties were resolved by holistically considering which intervention we believed was overall higher priority.We acknowledge that some interventions that do not look promising on their own may increase the tractability of more promising interventions. For example, obtaining local- and state-level bans may be a hassle and the results may be imperfect, but legal pressure to find alternatives may spur investment in new technology that is both more effective and humane than rodenticides. Readers may also have a personal advantage in implementing certain interventions and therefore may want to prioritize implementing those even if other interventions are more highly ranked in this report.This research is a project of Rethink Priorities. It was written by Holly Elmore. If you’re...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paths to reducing rodenticide use in the U.S., published by Holly Elmore on April 15, 2023 on The Effective Altruism Forum.This is the third report in a sequence of reports on reducing the use of rodenticide poisons. It is not necessary to have read the previous reports to read this one, although this report will take for granted conclusions that were discussed and weighed in depth in the previous reports. Part 1 describes why rodenticides are crueler to rodents and more dangerous for human children, pets, and wildlife than most alternative methods for removing rodents from homes, businesses, and farms. Part 2 explains why both consumers and businesses have strong incentives to continue relying on rodenticides. This report describes and ranks interventions to reduce rodenticide use in the U.S. according to their expected impact, neglectedness, and tractability. We leave aside how the interventions we discuss might change attitudes toward pest populations in the long term due to a lack of relevant evidence.The report is grouped into sections by the class of intervention: Legislative interventions, Information campaigns, Technological disruption, and Funding research. We are pessimistic about the legislative interventions that severely restrict legal rodenticide use. For example, California's recent ban on second-generation anticoagulant rodenticides (SGARs) is riddled with exemptions and may increase the use of alternatives that are even crueler to rodents. However, there are less controversial reforms, such as sanitation reform, that would reduce some rodenticide use, especially at the local level.Our top recommended intervention is investing in improved rodent birth control. [part 1] of this sequence was enthusiastic about existing EPA-approved rodent birth control ContraPest, but in part 2 we reported additional findings that led us to conclude that ContraPest is too expensive and cumbersome to replace the role that rodenticides currently play. Birth control baits that are cheaper and more versatile than Contrapest could replace rodenticide in many (though probably not all) situations.Our runner-up recommendation is to run digital information campaigns to educate the public on the costs and dangers of rodenticides. Digital advertising is cheap, and can quickly reach millions of people without having to first develop personal relationships with voters. Although the relatively grassroots approach of extant anti-rodenticide activism may be a sign that more impersonal approaches would not work, there is value in testing how much can be accomplished through mass communication alone.Table 1: A summary of the overall ranking of the 10 interventions types analyzed above. We scored Impact, Neglectedness, and Tractability holistically using High (5), Medium-High (4), Medium (3), Medium-Low (2), and Low (1), which were converted to a 5-point scale. The rankings in the right-most column are a result of adding up the numeric values that Intervention Type received in Impact, Neglectedness, and Tractability. Ties were resolved by holistically considering which intervention we believed was overall higher priority.We acknowledge that some interventions that do not look promising on their own may increase the tractability of more promising interventions. For example, obtaining local- and state-level bans may be a hassle and the results may be imperfect, but legal pressure to find alternatives may spur investment in new technology that is both more effective and humane than rodenticides. Readers may also have a personal advantage in implementing certain interventions and therefore may want to prioritize implementing those even if other interventions are more highly ranked in this report.This research is a project of Rethink Priorities. It was written by Holly Elmore. If you’re...]]>
Holly Elmore https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:44 None full 5613
SnRsdEZH87RhBqQgt_NL_EA_EA EA - Announcing Innovate Animal Ag (like GFI but for Animal Welfare Tech) by RobertY Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Innovate Animal Ag (like GFI but for Animal Welfare Tech), published by RobertY on April 14, 2023 on The Effective Altruism Forum.I’m very excited to announce the launch of Innovate Animal Ag, a new nonprofit whose mission is to support agricultural technologies that directly improve animal welfare. Our first focus is on helping introduce in-ovo egg sexing technologies into the US. We’re currently hiring and fundraising!The problemCompanies improve their treatment of animals when the cost of the status quo is higher than the cost of change. Historically, pressure from activists, consumers, and regulators has been effective at increasing the cost of the inhumane practices that are part of the status quo. But significantly less work has been done on the other side of the equation: decreasing the cost of change.Similarly to how The Good Food Institute, New Harvest, and The Material Innovation Initiative support the field of alternative proteins, Innovate Animal Ag supports the field of animal welfare technology. Through education, networking building, and other ecosystem-level interventions, we aim to make it as easy as possible for companies to adopt new technologies that improve animal welfare.The technologies that we focus on will be guided by the EA principles of importance, tractability and neglectedness. Based on these criteria, some classes of technology we could be interested in include humane seafood slaughter machines, humane poultry euthanasia techniques, humane pest control technologies, and in-ovo egg sexing.Initial focus: in-ovo egg sexingMost of our work in the short term will be spent on in-ovo egg sexing technologies, specifically on helping introduce them to the US market. We chose this focus because it addresses an important problem that’s particularly tractable right now: The culling of male chicks in the egg industry. We hope to use this as a test-case for our overall approach.For each of the more than 6 billion egg-laying hens in the world, there was a male chick that was fertilized, incubated, hatched, manually identified by a human, sorted, and then immediately killed. This practice is inhumane, wasteful for the industry, and unpopular with the consumers that know about it.Fortunately, there are a number of solutions being developed by companies and labs around the world. Some are even in the early stages of commercialization in Europe, where governments have started to ban the practice of chick culling. This rollout is going better than many realize: Extrapolating from publicly available partnership announcements, we estimate that current in-ovo sexing companies already supply between 10 and 20% of the entire EU market, with a cost impact of 1–3 euro cents per table egg (similar to, if not lower than cage-free). This leads us to believe that current technology is already ready for the high-end specialty egg market in the US. A small market foothold at the high end is then the first step towards wider adoption. Once companies demonstrate that there is demand for these products, they can more easily invest in lowering costs and scaling.Eventually the goal is to fulfill The United Egg Producers promise to completely eliminate chick culling across the industry.In our conversations with the companies developing this technology, a common refrain is that they’re interested in the US market, but have little direct engagement with the US egg industry. For the most part, companies are focused on Europe because that's where governments are banning chick culling.Jumpstarting the market in the US will be challenging. Producers may not be aware that this technology is ready, and if they are, they may not be confident that consumers will be willing to pay a price premium. We aim to help solve these problems through interventions such as:...]]>
RobertY https://forum.effectivealtruism.org/posts/SnRsdEZH87RhBqQgt/announcing-innovate-animal-ag-like-gfi-but-for-animal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Innovate Animal Ag (like GFI but for Animal Welfare Tech), published by RobertY on April 14, 2023 on The Effective Altruism Forum.I’m very excited to announce the launch of Innovate Animal Ag, a new nonprofit whose mission is to support agricultural technologies that directly improve animal welfare. Our first focus is on helping introduce in-ovo egg sexing technologies into the US. We’re currently hiring and fundraising!The problemCompanies improve their treatment of animals when the cost of the status quo is higher than the cost of change. Historically, pressure from activists, consumers, and regulators has been effective at increasing the cost of the inhumane practices that are part of the status quo. But significantly less work has been done on the other side of the equation: decreasing the cost of change.Similarly to how The Good Food Institute, New Harvest, and The Material Innovation Initiative support the field of alternative proteins, Innovate Animal Ag supports the field of animal welfare technology. Through education, networking building, and other ecosystem-level interventions, we aim to make it as easy as possible for companies to adopt new technologies that improve animal welfare.The technologies that we focus on will be guided by the EA principles of importance, tractability and neglectedness. Based on these criteria, some classes of technology we could be interested in include humane seafood slaughter machines, humane poultry euthanasia techniques, humane pest control technologies, and in-ovo egg sexing.Initial focus: in-ovo egg sexingMost of our work in the short term will be spent on in-ovo egg sexing technologies, specifically on helping introduce them to the US market. We chose this focus because it addresses an important problem that’s particularly tractable right now: The culling of male chicks in the egg industry. We hope to use this as a test-case for our overall approach.For each of the more than 6 billion egg-laying hens in the world, there was a male chick that was fertilized, incubated, hatched, manually identified by a human, sorted, and then immediately killed. This practice is inhumane, wasteful for the industry, and unpopular with the consumers that know about it.Fortunately, there are a number of solutions being developed by companies and labs around the world. Some are even in the early stages of commercialization in Europe, where governments have started to ban the practice of chick culling. This rollout is going better than many realize: Extrapolating from publicly available partnership announcements, we estimate that current in-ovo sexing companies already supply between 10 and 20% of the entire EU market, with a cost impact of 1–3 euro cents per table egg (similar to, if not lower than cage-free). This leads us to believe that current technology is already ready for the high-end specialty egg market in the US. A small market foothold at the high end is then the first step towards wider adoption. Once companies demonstrate that there is demand for these products, they can more easily invest in lowering costs and scaling.Eventually the goal is to fulfill The United Egg Producers promise to completely eliminate chick culling across the industry.In our conversations with the companies developing this technology, a common refrain is that they’re interested in the US market, but have little direct engagement with the US egg industry. For the most part, companies are focused on Europe because that's where governments are banning chick culling.Jumpstarting the market in the US will be challenging. Producers may not be aware that this technology is ready, and if they are, they may not be confident that consumers will be willing to pay a price premium. We aim to help solve these problems through interventions such as:...]]>
Sat, 15 Apr 2023 09:05:44 +0000 EA - Announcing Innovate Animal Ag (like GFI but for Animal Welfare Tech) by RobertY Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Innovate Animal Ag (like GFI but for Animal Welfare Tech), published by RobertY on April 14, 2023 on The Effective Altruism Forum.I’m very excited to announce the launch of Innovate Animal Ag, a new nonprofit whose mission is to support agricultural technologies that directly improve animal welfare. Our first focus is on helping introduce in-ovo egg sexing technologies into the US. We’re currently hiring and fundraising!The problemCompanies improve their treatment of animals when the cost of the status quo is higher than the cost of change. Historically, pressure from activists, consumers, and regulators has been effective at increasing the cost of the inhumane practices that are part of the status quo. But significantly less work has been done on the other side of the equation: decreasing the cost of change.Similarly to how The Good Food Institute, New Harvest, and The Material Innovation Initiative support the field of alternative proteins, Innovate Animal Ag supports the field of animal welfare technology. Through education, networking building, and other ecosystem-level interventions, we aim to make it as easy as possible for companies to adopt new technologies that improve animal welfare.The technologies that we focus on will be guided by the EA principles of importance, tractability and neglectedness. Based on these criteria, some classes of technology we could be interested in include humane seafood slaughter machines, humane poultry euthanasia techniques, humane pest control technologies, and in-ovo egg sexing.Initial focus: in-ovo egg sexingMost of our work in the short term will be spent on in-ovo egg sexing technologies, specifically on helping introduce them to the US market. We chose this focus because it addresses an important problem that’s particularly tractable right now: The culling of male chicks in the egg industry. We hope to use this as a test-case for our overall approach.For each of the more than 6 billion egg-laying hens in the world, there was a male chick that was fertilized, incubated, hatched, manually identified by a human, sorted, and then immediately killed. This practice is inhumane, wasteful for the industry, and unpopular with the consumers that know about it.Fortunately, there are a number of solutions being developed by companies and labs around the world. Some are even in the early stages of commercialization in Europe, where governments have started to ban the practice of chick culling. This rollout is going better than many realize: Extrapolating from publicly available partnership announcements, we estimate that current in-ovo sexing companies already supply between 10 and 20% of the entire EU market, with a cost impact of 1–3 euro cents per table egg (similar to, if not lower than cage-free). This leads us to believe that current technology is already ready for the high-end specialty egg market in the US. A small market foothold at the high end is then the first step towards wider adoption. Once companies demonstrate that there is demand for these products, they can more easily invest in lowering costs and scaling.Eventually the goal is to fulfill The United Egg Producers promise to completely eliminate chick culling across the industry.In our conversations with the companies developing this technology, a common refrain is that they’re interested in the US market, but have little direct engagement with the US egg industry. For the most part, companies are focused on Europe because that's where governments are banning chick culling.Jumpstarting the market in the US will be challenging. Producers may not be aware that this technology is ready, and if they are, they may not be confident that consumers will be willing to pay a price premium. We aim to help solve these problems through interventions such as:...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Innovate Animal Ag (like GFI but for Animal Welfare Tech), published by RobertY on April 14, 2023 on The Effective Altruism Forum.I’m very excited to announce the launch of Innovate Animal Ag, a new nonprofit whose mission is to support agricultural technologies that directly improve animal welfare. Our first focus is on helping introduce in-ovo egg sexing technologies into the US. We’re currently hiring and fundraising!The problemCompanies improve their treatment of animals when the cost of the status quo is higher than the cost of change. Historically, pressure from activists, consumers, and regulators has been effective at increasing the cost of the inhumane practices that are part of the status quo. But significantly less work has been done on the other side of the equation: decreasing the cost of change.Similarly to how The Good Food Institute, New Harvest, and The Material Innovation Initiative support the field of alternative proteins, Innovate Animal Ag supports the field of animal welfare technology. Through education, networking building, and other ecosystem-level interventions, we aim to make it as easy as possible for companies to adopt new technologies that improve animal welfare.The technologies that we focus on will be guided by the EA principles of importance, tractability and neglectedness. Based on these criteria, some classes of technology we could be interested in include humane seafood slaughter machines, humane poultry euthanasia techniques, humane pest control technologies, and in-ovo egg sexing.Initial focus: in-ovo egg sexingMost of our work in the short term will be spent on in-ovo egg sexing technologies, specifically on helping introduce them to the US market. We chose this focus because it addresses an important problem that’s particularly tractable right now: The culling of male chicks in the egg industry. We hope to use this as a test-case for our overall approach.For each of the more than 6 billion egg-laying hens in the world, there was a male chick that was fertilized, incubated, hatched, manually identified by a human, sorted, and then immediately killed. This practice is inhumane, wasteful for the industry, and unpopular with the consumers that know about it.Fortunately, there are a number of solutions being developed by companies and labs around the world. Some are even in the early stages of commercialization in Europe, where governments have started to ban the practice of chick culling. This rollout is going better than many realize: Extrapolating from publicly available partnership announcements, we estimate that current in-ovo sexing companies already supply between 10 and 20% of the entire EU market, with a cost impact of 1–3 euro cents per table egg (similar to, if not lower than cage-free). This leads us to believe that current technology is already ready for the high-end specialty egg market in the US. A small market foothold at the high end is then the first step towards wider adoption. Once companies demonstrate that there is demand for these products, they can more easily invest in lowering costs and scaling.Eventually the goal is to fulfill The United Egg Producers promise to completely eliminate chick culling across the industry.In our conversations with the companies developing this technology, a common refrain is that they’re interested in the US market, but have little direct engagement with the US egg industry. For the most part, companies are focused on Europe because that's where governments are banning chick culling.Jumpstarting the market in the US will be challenging. Producers may not be aware that this technology is ready, and if they are, they may not be confident that consumers will be willing to pay a price premium. We aim to help solve these problems through interventions such as:...]]>
RobertY https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:06 None full 5614
6eaY7MEDWnK39sCEi_NL_EA_EA EA - Healthier Hens Y1.5 Update and scaledown assessment by lukasj10 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Healthier Hens Y1.5 Update and scaledown assessment, published by lukasj10 on April 14, 2023 on The Effective Altruism Forum.TL;DRHealthier Hens (HH) has scaled down due to not being able to secure enough funding to provide a sufficient runway to pilot dietary interventions effectively. We will continue through mini-projects and refining our plan for a feed pilot on the ground until our next organisational assessment at the end of summer 2023. Most efforts will now be spent on reporting, dissemination and fundraising. In this post we share updates, show what went well, less so and what others can learn from our attempts.Our mission and updated approachKeel bone fractures (KBF) are the second biggest source of hens’ suffering after behavioural deprivation related to cages, and the biggest in cage-free systems. Our mission remains to reduce the suffering of hens by addressing this source of pain. Numerous studies indicate that dietary interventions can reduce bone fractures. Our goal is to find ways to help hens get adequate nutrition and experience less pain. Besides research and piloting activities, we are doing that by outreach and collaboration with Kenyan cage-free egg farming stakeholders (including farmers, feed mills, universities and regulators) to improve on-farm hen welfare. Please read our introductory, 6M and 1Y update posts to learn more about the background and progress of HH. Due to funding constraints, we have downscaled and now focus on building capacity through mini-projects before piloting an intervention on the ground.HH Y1.5 updateWe decided to scale down and as of the first of March 2023, HH is a volunteer-led organisation. We will have our next evaluation point at the end of summer 2023.Despite major delays brought on by the presidential elections in Kenya, HH is finally a registered entity in both the US and our country of pilot operations.We carried out two additional farmer workshops in Murang’a and Nakuru counties. A report outlining this work will be published in May 2023. The workshops revealed significant knowledge gaps coupled with expressed interest by the farmers to learn and improve. To retain engagement, we began providing free resources and formed a WhatsApp group to stay connected with motivated cage-free farmers.Cage-free farmer workshop in Nakuru county, October, 2022.Cage-free coalition strategic session in Naivasha in February, 2023.We have published the third volume of our hen feed fortification literature review. This edition focused on nutrient level recommendations and the potential of other additives to improve bone health. We have also published a report outlining our feed sampling findings. It confirms potential issues with Kenyan feed quality and compositional consistency.We are engaged in the formation of a regional cage-free coalition aimed at accelerating transition campaigns, led by African Network for Animal Welfare.We surveyed our staff in February. We generally felt positive about the organization's culture and values. The survey also reveals that we were generally satisfied with our team and work, but were concerned about the impact of the recent decision to downscale. We also felt that there was a lack of critical feedback and constant communication within the team.Our collaboration with the University of Bern indicates negative results of a split-feeding intervention. While we expected a decrease in KBFs without compromising egg quality, surprisingly, poorer egg quality, digestive issues and neutral-to-negative effect on KBFs are observed.To explore alternative fundraising avenues, we looked into several social enterprise concepts, where hen welfare benefits could be attained while also improving the livelihoods of African cage-free farmers. The chosen concept (a hybridised adaptation of the wor...]]>
lukasj10 https://forum.effectivealtruism.org/posts/6eaY7MEDWnK39sCEi/healthier-hens-y1-5-update-and-scaledown-assessment Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Healthier Hens Y1.5 Update and scaledown assessment, published by lukasj10 on April 14, 2023 on The Effective Altruism Forum.TL;DRHealthier Hens (HH) has scaled down due to not being able to secure enough funding to provide a sufficient runway to pilot dietary interventions effectively. We will continue through mini-projects and refining our plan for a feed pilot on the ground until our next organisational assessment at the end of summer 2023. Most efforts will now be spent on reporting, dissemination and fundraising. In this post we share updates, show what went well, less so and what others can learn from our attempts.Our mission and updated approachKeel bone fractures (KBF) are the second biggest source of hens’ suffering after behavioural deprivation related to cages, and the biggest in cage-free systems. Our mission remains to reduce the suffering of hens by addressing this source of pain. Numerous studies indicate that dietary interventions can reduce bone fractures. Our goal is to find ways to help hens get adequate nutrition and experience less pain. Besides research and piloting activities, we are doing that by outreach and collaboration with Kenyan cage-free egg farming stakeholders (including farmers, feed mills, universities and regulators) to improve on-farm hen welfare. Please read our introductory, 6M and 1Y update posts to learn more about the background and progress of HH. Due to funding constraints, we have downscaled and now focus on building capacity through mini-projects before piloting an intervention on the ground.HH Y1.5 updateWe decided to scale down and as of the first of March 2023, HH is a volunteer-led organisation. We will have our next evaluation point at the end of summer 2023.Despite major delays brought on by the presidential elections in Kenya, HH is finally a registered entity in both the US and our country of pilot operations.We carried out two additional farmer workshops in Murang’a and Nakuru counties. A report outlining this work will be published in May 2023. The workshops revealed significant knowledge gaps coupled with expressed interest by the farmers to learn and improve. To retain engagement, we began providing free resources and formed a WhatsApp group to stay connected with motivated cage-free farmers.Cage-free farmer workshop in Nakuru county, October, 2022.Cage-free coalition strategic session in Naivasha in February, 2023.We have published the third volume of our hen feed fortification literature review. This edition focused on nutrient level recommendations and the potential of other additives to improve bone health. We have also published a report outlining our feed sampling findings. It confirms potential issues with Kenyan feed quality and compositional consistency.We are engaged in the formation of a regional cage-free coalition aimed at accelerating transition campaigns, led by African Network for Animal Welfare.We surveyed our staff in February. We generally felt positive about the organization's culture and values. The survey also reveals that we were generally satisfied with our team and work, but were concerned about the impact of the recent decision to downscale. We also felt that there was a lack of critical feedback and constant communication within the team.Our collaboration with the University of Bern indicates negative results of a split-feeding intervention. While we expected a decrease in KBFs without compromising egg quality, surprisingly, poorer egg quality, digestive issues and neutral-to-negative effect on KBFs are observed.To explore alternative fundraising avenues, we looked into several social enterprise concepts, where hen welfare benefits could be attained while also improving the livelihoods of African cage-free farmers. The chosen concept (a hybridised adaptation of the wor...]]>
Fri, 14 Apr 2023 22:48:41 +0000 EA - Healthier Hens Y1.5 Update and scaledown assessment by lukasj10 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Healthier Hens Y1.5 Update and scaledown assessment, published by lukasj10 on April 14, 2023 on The Effective Altruism Forum.TL;DRHealthier Hens (HH) has scaled down due to not being able to secure enough funding to provide a sufficient runway to pilot dietary interventions effectively. We will continue through mini-projects and refining our plan for a feed pilot on the ground until our next organisational assessment at the end of summer 2023. Most efforts will now be spent on reporting, dissemination and fundraising. In this post we share updates, show what went well, less so and what others can learn from our attempts.Our mission and updated approachKeel bone fractures (KBF) are the second biggest source of hens’ suffering after behavioural deprivation related to cages, and the biggest in cage-free systems. Our mission remains to reduce the suffering of hens by addressing this source of pain. Numerous studies indicate that dietary interventions can reduce bone fractures. Our goal is to find ways to help hens get adequate nutrition and experience less pain. Besides research and piloting activities, we are doing that by outreach and collaboration with Kenyan cage-free egg farming stakeholders (including farmers, feed mills, universities and regulators) to improve on-farm hen welfare. Please read our introductory, 6M and 1Y update posts to learn more about the background and progress of HH. Due to funding constraints, we have downscaled and now focus on building capacity through mini-projects before piloting an intervention on the ground.HH Y1.5 updateWe decided to scale down and as of the first of March 2023, HH is a volunteer-led organisation. We will have our next evaluation point at the end of summer 2023.Despite major delays brought on by the presidential elections in Kenya, HH is finally a registered entity in both the US and our country of pilot operations.We carried out two additional farmer workshops in Murang’a and Nakuru counties. A report outlining this work will be published in May 2023. The workshops revealed significant knowledge gaps coupled with expressed interest by the farmers to learn and improve. To retain engagement, we began providing free resources and formed a WhatsApp group to stay connected with motivated cage-free farmers.Cage-free farmer workshop in Nakuru county, October, 2022.Cage-free coalition strategic session in Naivasha in February, 2023.We have published the third volume of our hen feed fortification literature review. This edition focused on nutrient level recommendations and the potential of other additives to improve bone health. We have also published a report outlining our feed sampling findings. It confirms potential issues with Kenyan feed quality and compositional consistency.We are engaged in the formation of a regional cage-free coalition aimed at accelerating transition campaigns, led by African Network for Animal Welfare.We surveyed our staff in February. We generally felt positive about the organization's culture and values. The survey also reveals that we were generally satisfied with our team and work, but were concerned about the impact of the recent decision to downscale. We also felt that there was a lack of critical feedback and constant communication within the team.Our collaboration with the University of Bern indicates negative results of a split-feeding intervention. While we expected a decrease in KBFs without compromising egg quality, surprisingly, poorer egg quality, digestive issues and neutral-to-negative effect on KBFs are observed.To explore alternative fundraising avenues, we looked into several social enterprise concepts, where hen welfare benefits could be attained while also improving the livelihoods of African cage-free farmers. The chosen concept (a hybridised adaptation of the wor...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Healthier Hens Y1.5 Update and scaledown assessment, published by lukasj10 on April 14, 2023 on The Effective Altruism Forum.TL;DRHealthier Hens (HH) has scaled down due to not being able to secure enough funding to provide a sufficient runway to pilot dietary interventions effectively. We will continue through mini-projects and refining our plan for a feed pilot on the ground until our next organisational assessment at the end of summer 2023. Most efforts will now be spent on reporting, dissemination and fundraising. In this post we share updates, show what went well, less so and what others can learn from our attempts.Our mission and updated approachKeel bone fractures (KBF) are the second biggest source of hens’ suffering after behavioural deprivation related to cages, and the biggest in cage-free systems. Our mission remains to reduce the suffering of hens by addressing this source of pain. Numerous studies indicate that dietary interventions can reduce bone fractures. Our goal is to find ways to help hens get adequate nutrition and experience less pain. Besides research and piloting activities, we are doing that by outreach and collaboration with Kenyan cage-free egg farming stakeholders (including farmers, feed mills, universities and regulators) to improve on-farm hen welfare. Please read our introductory, 6M and 1Y update posts to learn more about the background and progress of HH. Due to funding constraints, we have downscaled and now focus on building capacity through mini-projects before piloting an intervention on the ground.HH Y1.5 updateWe decided to scale down and as of the first of March 2023, HH is a volunteer-led organisation. We will have our next evaluation point at the end of summer 2023.Despite major delays brought on by the presidential elections in Kenya, HH is finally a registered entity in both the US and our country of pilot operations.We carried out two additional farmer workshops in Murang’a and Nakuru counties. A report outlining this work will be published in May 2023. The workshops revealed significant knowledge gaps coupled with expressed interest by the farmers to learn and improve. To retain engagement, we began providing free resources and formed a WhatsApp group to stay connected with motivated cage-free farmers.Cage-free farmer workshop in Nakuru county, October, 2022.Cage-free coalition strategic session in Naivasha in February, 2023.We have published the third volume of our hen feed fortification literature review. This edition focused on nutrient level recommendations and the potential of other additives to improve bone health. We have also published a report outlining our feed sampling findings. It confirms potential issues with Kenyan feed quality and compositional consistency.We are engaged in the formation of a regional cage-free coalition aimed at accelerating transition campaigns, led by African Network for Animal Welfare.We surveyed our staff in February. We generally felt positive about the organization's culture and values. The survey also reveals that we were generally satisfied with our team and work, but were concerned about the impact of the recent decision to downscale. We also felt that there was a lack of critical feedback and constant communication within the team.Our collaboration with the University of Bern indicates negative results of a split-feeding intervention. While we expected a decrease in KBFs without compromising egg quality, surprisingly, poorer egg quality, digestive issues and neutral-to-negative effect on KBFs are observed.To explore alternative fundraising avenues, we looked into several social enterprise concepts, where hen welfare benefits could be attained while also improving the livelihoods of African cage-free farmers. The chosen concept (a hybridised adaptation of the wor...]]>
lukasj10 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 20:03 None full 5612
QicwJdrG5pZEFiDY3_NL_EA_EA EA - [Linkpost] Inconsistent evidence for price substitution between butter and margarine: A shallow review by samaramendez Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Inconsistent evidence for price substitution between butter and margarine: A shallow review, published by samaramendez on April 14, 2023 on The Effective Altruism Forum.Executive summaryOne prominent strategy for reducing animal product usage is to decrease the prices of plant-based analogs for meat, dairy and eggs. Cross-price elasticities measure how prices of analogs affect sales of animal products, and vice versa.Previously, we found cross-price elasticities that sometimes indicated decreased plant-based milk prices could cause increased consumption of dairy milk. To see whether this result replicated, we studied elasticities of butter and margarine.We synthesize 52 cross-price elasticities from 19 demand studies of butter and margarine.We expected butter and margarine to be substitutes. Instead, we observed wide variation in estimates, complementarity, and even opposite signs of two elasticities for the same pair of products in the same setting (study, time, country, etc.).Margarine was a substitute for butter in two thirds of estimates, while butter was a substitute for margarine in about half of estimates.The results are likely explained by some combination of methodological issues and complex consumer behavior.Estimating elasticities from observational data is very difficult, and there is some evidence of methodological issues in the literature. Consumer behavior may be highly context specific.Our best highly uncertain guess is that methodological issues contribute 45% of the observed variation in results, with complex consumer behavior accounting for the remaining 55%.Price substitution between plant-based analogs and animal products is not a certainty.It is possible that decreasing plant-based analog prices causes harmful increases in animal product usage in some contexts.Research aiming to test behavioral theories that might explain our results and to validate observational estimates of elasticities against experimental methods may help clarify consumer behavior.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
samaramendez https://forum.effectivealtruism.org/posts/QicwJdrG5pZEFiDY3/linkpost-inconsistent-evidence-for-price-substitution Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Inconsistent evidence for price substitution between butter and margarine: A shallow review, published by samaramendez on April 14, 2023 on The Effective Altruism Forum.Executive summaryOne prominent strategy for reducing animal product usage is to decrease the prices of plant-based analogs for meat, dairy and eggs. Cross-price elasticities measure how prices of analogs affect sales of animal products, and vice versa.Previously, we found cross-price elasticities that sometimes indicated decreased plant-based milk prices could cause increased consumption of dairy milk. To see whether this result replicated, we studied elasticities of butter and margarine.We synthesize 52 cross-price elasticities from 19 demand studies of butter and margarine.We expected butter and margarine to be substitutes. Instead, we observed wide variation in estimates, complementarity, and even opposite signs of two elasticities for the same pair of products in the same setting (study, time, country, etc.).Margarine was a substitute for butter in two thirds of estimates, while butter was a substitute for margarine in about half of estimates.The results are likely explained by some combination of methodological issues and complex consumer behavior.Estimating elasticities from observational data is very difficult, and there is some evidence of methodological issues in the literature. Consumer behavior may be highly context specific.Our best highly uncertain guess is that methodological issues contribute 45% of the observed variation in results, with complex consumer behavior accounting for the remaining 55%.Price substitution between plant-based analogs and animal products is not a certainty.It is possible that decreasing plant-based analog prices causes harmful increases in animal product usage in some contexts.Research aiming to test behavioral theories that might explain our results and to validate observational estimates of elasticities against experimental methods may help clarify consumer behavior.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 14 Apr 2023 22:35:46 +0000 EA - [Linkpost] Inconsistent evidence for price substitution between butter and margarine: A shallow review by samaramendez Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Inconsistent evidence for price substitution between butter and margarine: A shallow review, published by samaramendez on April 14, 2023 on The Effective Altruism Forum.Executive summaryOne prominent strategy for reducing animal product usage is to decrease the prices of plant-based analogs for meat, dairy and eggs. Cross-price elasticities measure how prices of analogs affect sales of animal products, and vice versa.Previously, we found cross-price elasticities that sometimes indicated decreased plant-based milk prices could cause increased consumption of dairy milk. To see whether this result replicated, we studied elasticities of butter and margarine.We synthesize 52 cross-price elasticities from 19 demand studies of butter and margarine.We expected butter and margarine to be substitutes. Instead, we observed wide variation in estimates, complementarity, and even opposite signs of two elasticities for the same pair of products in the same setting (study, time, country, etc.).Margarine was a substitute for butter in two thirds of estimates, while butter was a substitute for margarine in about half of estimates.The results are likely explained by some combination of methodological issues and complex consumer behavior.Estimating elasticities from observational data is very difficult, and there is some evidence of methodological issues in the literature. Consumer behavior may be highly context specific.Our best highly uncertain guess is that methodological issues contribute 45% of the observed variation in results, with complex consumer behavior accounting for the remaining 55%.Price substitution between plant-based analogs and animal products is not a certainty.It is possible that decreasing plant-based analog prices causes harmful increases in animal product usage in some contexts.Research aiming to test behavioral theories that might explain our results and to validate observational estimates of elasticities against experimental methods may help clarify consumer behavior.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Inconsistent evidence for price substitution between butter and margarine: A shallow review, published by samaramendez on April 14, 2023 on The Effective Altruism Forum.Executive summaryOne prominent strategy for reducing animal product usage is to decrease the prices of plant-based analogs for meat, dairy and eggs. Cross-price elasticities measure how prices of analogs affect sales of animal products, and vice versa.Previously, we found cross-price elasticities that sometimes indicated decreased plant-based milk prices could cause increased consumption of dairy milk. To see whether this result replicated, we studied elasticities of butter and margarine.We synthesize 52 cross-price elasticities from 19 demand studies of butter and margarine.We expected butter and margarine to be substitutes. Instead, we observed wide variation in estimates, complementarity, and even opposite signs of two elasticities for the same pair of products in the same setting (study, time, country, etc.).Margarine was a substitute for butter in two thirds of estimates, while butter was a substitute for margarine in about half of estimates.The results are likely explained by some combination of methodological issues and complex consumer behavior.Estimating elasticities from observational data is very difficult, and there is some evidence of methodological issues in the literature. Consumer behavior may be highly context specific.Our best highly uncertain guess is that methodological issues contribute 45% of the observed variation in results, with complex consumer behavior accounting for the remaining 55%.Price substitution between plant-based analogs and animal products is not a certainty.It is possible that decreasing plant-based analog prices causes harmful increases in animal product usage in some contexts.Research aiming to test behavioral theories that might explain our results and to validate observational estimates of elasticities against experimental methods may help clarify consumer behavior.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
samaramendez https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:06 None full 5611
L8GjzvRYA9g9ox2nP_NL_EA_EA EA - Prospects for AI safety agreements between countries by oeg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prospects for AI safety agreements between countries, published by oeg on April 14, 2023 on The Effective Altruism Forum.This post summarizes a project that I recently completed about international agreements to coordinate on safe AI development. I focused particularly on an agreement that I call “Collaborative Handling of Artificial intelligence Risks with Training Standards” (“CHARTS”). CHARTS would regulate training runs and would include both the US and China.Among other things, I hope that this post will be a useful contribution to recent discussions about international agreements and regulatory regimes for AI.I am not (at least for the moment) sharing the entire project publicly, but I hope that this summary will still be useful or interesting for people who are thinking about AI governance. The conclusions here are essentially the same as in the full version, and I don’t think readers’ views would differ drastically based on the additional information present there. If you would benefit from reading the full version (or specific parts) please reach out to me and I may be able to share.This post consists of an executive summary (∼1200 words) followed by a condensed version of the report (∼5000 words + footnotes).Executive summaryIn this report, I investigate the idea of bringing about international agreements to coordinate on safe AI development (“international safety agreements”), evaluate the tractability of these interventions, and suggest the best means of carrying them out.IntroductionMy primary focus is a specific type of international safety agreement aimed at regulating AI training runs to prevent misalignment catastrophes. I call this agreement “Collaborative Handling of Artificial intelligence Risks with Training Standards,” or “CHARTS.”Key features of CHARTS include:Prohibiting governments and companies from performing training runs with a high likelihood of producing powerful misaligned AI systems.Risky training runs would be determined using proxies like training run size, or potential risk factors such as reinforcement learning.Requiring extensive verification through on-chip mechanisms, on-site inspections, and dedicated institutions.Cooperating to prevent exports of AI-relevant compute to non-member countries, avoiding dangerous training runs in non-participating jurisdictions.I chose to focus on this kind of agreement after seeing similar proposals and thinking that they seemed promising. My intended contribution here is to think about how tractable it would be to get something like CHARTS, and how to increase this tractability. I mostly do not attempt to assess how beneficial (or harmful) CHARTS would be.I focus particularly on an agreement between the US and China because existentially dangerous training runs seem unusually likely to happen in these countries, and because these countries have an adversarial relationship, heightening concerns about racing dynamics.Political acceptance of costly measures to regulate AII introduce the concept of a Risk Awareness Moment (Ram) as a way to structure my thinking elsewhere in the report. A Ram is “a point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable.” Examples of audiences include the general public and policy elites in a particular country.I think this concept is helpful for thinking about a range of AI governance interventions. It has advantages over related concepts such as “warning shots” in that it makes it easier to remain agnostic about what causes increased concern about AI, and what the level of concern looks like over time.I think that CHARTS would require a Ram among policy elites, and probably also among the general public – at least in ...]]>
oeg https://forum.effectivealtruism.org/posts/L8GjzvRYA9g9ox2nP/prospects-for-ai-safety-agreements-between-countries Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prospects for AI safety agreements between countries, published by oeg on April 14, 2023 on The Effective Altruism Forum.This post summarizes a project that I recently completed about international agreements to coordinate on safe AI development. I focused particularly on an agreement that I call “Collaborative Handling of Artificial intelligence Risks with Training Standards” (“CHARTS”). CHARTS would regulate training runs and would include both the US and China.Among other things, I hope that this post will be a useful contribution to recent discussions about international agreements and regulatory regimes for AI.I am not (at least for the moment) sharing the entire project publicly, but I hope that this summary will still be useful or interesting for people who are thinking about AI governance. The conclusions here are essentially the same as in the full version, and I don’t think readers’ views would differ drastically based on the additional information present there. If you would benefit from reading the full version (or specific parts) please reach out to me and I may be able to share.This post consists of an executive summary (∼1200 words) followed by a condensed version of the report (∼5000 words + footnotes).Executive summaryIn this report, I investigate the idea of bringing about international agreements to coordinate on safe AI development (“international safety agreements”), evaluate the tractability of these interventions, and suggest the best means of carrying them out.IntroductionMy primary focus is a specific type of international safety agreement aimed at regulating AI training runs to prevent misalignment catastrophes. I call this agreement “Collaborative Handling of Artificial intelligence Risks with Training Standards,” or “CHARTS.”Key features of CHARTS include:Prohibiting governments and companies from performing training runs with a high likelihood of producing powerful misaligned AI systems.Risky training runs would be determined using proxies like training run size, or potential risk factors such as reinforcement learning.Requiring extensive verification through on-chip mechanisms, on-site inspections, and dedicated institutions.Cooperating to prevent exports of AI-relevant compute to non-member countries, avoiding dangerous training runs in non-participating jurisdictions.I chose to focus on this kind of agreement after seeing similar proposals and thinking that they seemed promising. My intended contribution here is to think about how tractable it would be to get something like CHARTS, and how to increase this tractability. I mostly do not attempt to assess how beneficial (or harmful) CHARTS would be.I focus particularly on an agreement between the US and China because existentially dangerous training runs seem unusually likely to happen in these countries, and because these countries have an adversarial relationship, heightening concerns about racing dynamics.Political acceptance of costly measures to regulate AII introduce the concept of a Risk Awareness Moment (Ram) as a way to structure my thinking elsewhere in the report. A Ram is “a point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable.” Examples of audiences include the general public and policy elites in a particular country.I think this concept is helpful for thinking about a range of AI governance interventions. It has advantages over related concepts such as “warning shots” in that it makes it easier to remain agnostic about what causes increased concern about AI, and what the level of concern looks like over time.I think that CHARTS would require a Ram among policy elites, and probably also among the general public – at least in ...]]>
Fri, 14 Apr 2023 19:45:46 +0000 EA - Prospects for AI safety agreements between countries by oeg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prospects for AI safety agreements between countries, published by oeg on April 14, 2023 on The Effective Altruism Forum.This post summarizes a project that I recently completed about international agreements to coordinate on safe AI development. I focused particularly on an agreement that I call “Collaborative Handling of Artificial intelligence Risks with Training Standards” (“CHARTS”). CHARTS would regulate training runs and would include both the US and China.Among other things, I hope that this post will be a useful contribution to recent discussions about international agreements and regulatory regimes for AI.I am not (at least for the moment) sharing the entire project publicly, but I hope that this summary will still be useful or interesting for people who are thinking about AI governance. The conclusions here are essentially the same as in the full version, and I don’t think readers’ views would differ drastically based on the additional information present there. If you would benefit from reading the full version (or specific parts) please reach out to me and I may be able to share.This post consists of an executive summary (∼1200 words) followed by a condensed version of the report (∼5000 words + footnotes).Executive summaryIn this report, I investigate the idea of bringing about international agreements to coordinate on safe AI development (“international safety agreements”), evaluate the tractability of these interventions, and suggest the best means of carrying them out.IntroductionMy primary focus is a specific type of international safety agreement aimed at regulating AI training runs to prevent misalignment catastrophes. I call this agreement “Collaborative Handling of Artificial intelligence Risks with Training Standards,” or “CHARTS.”Key features of CHARTS include:Prohibiting governments and companies from performing training runs with a high likelihood of producing powerful misaligned AI systems.Risky training runs would be determined using proxies like training run size, or potential risk factors such as reinforcement learning.Requiring extensive verification through on-chip mechanisms, on-site inspections, and dedicated institutions.Cooperating to prevent exports of AI-relevant compute to non-member countries, avoiding dangerous training runs in non-participating jurisdictions.I chose to focus on this kind of agreement after seeing similar proposals and thinking that they seemed promising. My intended contribution here is to think about how tractable it would be to get something like CHARTS, and how to increase this tractability. I mostly do not attempt to assess how beneficial (or harmful) CHARTS would be.I focus particularly on an agreement between the US and China because existentially dangerous training runs seem unusually likely to happen in these countries, and because these countries have an adversarial relationship, heightening concerns about racing dynamics.Political acceptance of costly measures to regulate AII introduce the concept of a Risk Awareness Moment (Ram) as a way to structure my thinking elsewhere in the report. A Ram is “a point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable.” Examples of audiences include the general public and policy elites in a particular country.I think this concept is helpful for thinking about a range of AI governance interventions. It has advantages over related concepts such as “warning shots” in that it makes it easier to remain agnostic about what causes increased concern about AI, and what the level of concern looks like over time.I think that CHARTS would require a Ram among policy elites, and probably also among the general public – at least in ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prospects for AI safety agreements between countries, published by oeg on April 14, 2023 on The Effective Altruism Forum.This post summarizes a project that I recently completed about international agreements to coordinate on safe AI development. I focused particularly on an agreement that I call “Collaborative Handling of Artificial intelligence Risks with Training Standards” (“CHARTS”). CHARTS would regulate training runs and would include both the US and China.Among other things, I hope that this post will be a useful contribution to recent discussions about international agreements and regulatory regimes for AI.I am not (at least for the moment) sharing the entire project publicly, but I hope that this summary will still be useful or interesting for people who are thinking about AI governance. The conclusions here are essentially the same as in the full version, and I don’t think readers’ views would differ drastically based on the additional information present there. If you would benefit from reading the full version (or specific parts) please reach out to me and I may be able to share.This post consists of an executive summary (∼1200 words) followed by a condensed version of the report (∼5000 words + footnotes).Executive summaryIn this report, I investigate the idea of bringing about international agreements to coordinate on safe AI development (“international safety agreements”), evaluate the tractability of these interventions, and suggest the best means of carrying them out.IntroductionMy primary focus is a specific type of international safety agreement aimed at regulating AI training runs to prevent misalignment catastrophes. I call this agreement “Collaborative Handling of Artificial intelligence Risks with Training Standards,” or “CHARTS.”Key features of CHARTS include:Prohibiting governments and companies from performing training runs with a high likelihood of producing powerful misaligned AI systems.Risky training runs would be determined using proxies like training run size, or potential risk factors such as reinforcement learning.Requiring extensive verification through on-chip mechanisms, on-site inspections, and dedicated institutions.Cooperating to prevent exports of AI-relevant compute to non-member countries, avoiding dangerous training runs in non-participating jurisdictions.I chose to focus on this kind of agreement after seeing similar proposals and thinking that they seemed promising. My intended contribution here is to think about how tractable it would be to get something like CHARTS, and how to increase this tractability. I mostly do not attempt to assess how beneficial (or harmful) CHARTS would be.I focus particularly on an agreement between the US and China because existentially dangerous training runs seem unusually likely to happen in these countries, and because these countries have an adversarial relationship, heightening concerns about racing dynamics.Political acceptance of costly measures to regulate AII introduce the concept of a Risk Awareness Moment (Ram) as a way to structure my thinking elsewhere in the report. A Ram is “a point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable.” Examples of audiences include the general public and policy elites in a particular country.I think this concept is helpful for thinking about a range of AI governance interventions. It has advantages over related concepts such as “warning shots” in that it makes it easier to remain agnostic about what causes increased concern about AI, and what the level of concern looks like over time.I think that CHARTS would require a Ram among policy elites, and probably also among the general public – at least in ...]]>
oeg https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 49:28 None full 5598
xsKwDuggxcYpYCe2z_NL_EA_EA EA - Anti-'FOOM' (stop trying to make your cute pet name the thing) by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anti-'FOOM' (stop trying to make your cute pet name the thing), published by david reinstein on April 14, 2023 on The Effective Altruism Forum.Notes/basis: This is kind of a short-form post in style but I think it's important enough to put here. Obviously let me know if someone else said this betterSummaryFormal overly-intellectual academese is bad. But using your 'cute' inside joke name for things is potentially worse. It makes people cringe, sounds like you are trying to take ownership of something, and excludes people. Use a name that is approachable but serious.The problem.Where did the term 'FOOM' come from, to refer to AGI risk? I asked GPT4:[!ai] AIThe term 'foom' was coined by artificial intelligence researcher and author Eliezer Yudkowsky in his 2008 book titled "The Sequences". Yudkowsky used the term to refer to a hypothetical scenario where an artificial general intelligence (AGI) rapidly and exponentially improves its own intelligence, leading to an uncontrollable and potentially catastrophic outcome for humanity. The term 'foom' is a play on the word 'boom', representing the sudden and explosive nature of AGI development in this scenario.Another example: 'AI-not-kill-everyone-ism'Analogies to fairly successful movements:Global warming was not called ''Roast", and the movement was not called "anti-everyone-burns-up-ism"Nuclear holocaust was not called "mega-boom"Anti-slavery was not called ... (OK I won't touch this one)How well has the use of cute names worked in the past?I can't think of any examples where they have caught on in a positive way. The closest I can think of are"Nudge" (by Richard Thaler?)... to describe choice-architecture interventions; - My impression is that the term 'nudge' got people to remember it but made it rather easy to dismissothers in that space have come up with names that caught on less well I think (like "sludge"), which also induce a bit of cringe"Woke"I think this example basically speaks for itself.Tea-Party movementThis goes in the opposite direction perhaps (fairly successful), but I still think it's not quite as cringeworthy as FOOM. The term 'tea party' obviously has a long history in our culture, especially the "Boston Tea Party.What else?I asked GPT4when have social movements used cute 'inside joke' names to refer to the threats faced?The suggestions are not half as cute or in-jokey as FOOM: Net Neutrality, The Umbrella Movement, Extinction Rebellion (XR), Occupy Wall Street (OWS)I asked it to get cuter... [1]Prodding it further... Climategate, Frankenfoods, Slacktivism ... also not so inside-jokey nor as cringeworthy IMO.Prodding it for more cutesy more inside-jokey yields a few terms that barely caught on, or didn't characterize the movement or the threat as a whole.[2]Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
david reinstein https://forum.effectivealtruism.org/posts/xsKwDuggxcYpYCe2z/anti-foom-stop-trying-to-make-your-cute-pet-name-the-thing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anti-'FOOM' (stop trying to make your cute pet name the thing), published by david reinstein on April 14, 2023 on The Effective Altruism Forum.Notes/basis: This is kind of a short-form post in style but I think it's important enough to put here. Obviously let me know if someone else said this betterSummaryFormal overly-intellectual academese is bad. But using your 'cute' inside joke name for things is potentially worse. It makes people cringe, sounds like you are trying to take ownership of something, and excludes people. Use a name that is approachable but serious.The problem.Where did the term 'FOOM' come from, to refer to AGI risk? I asked GPT4:[!ai] AIThe term 'foom' was coined by artificial intelligence researcher and author Eliezer Yudkowsky in his 2008 book titled "The Sequences". Yudkowsky used the term to refer to a hypothetical scenario where an artificial general intelligence (AGI) rapidly and exponentially improves its own intelligence, leading to an uncontrollable and potentially catastrophic outcome for humanity. The term 'foom' is a play on the word 'boom', representing the sudden and explosive nature of AGI development in this scenario.Another example: 'AI-not-kill-everyone-ism'Analogies to fairly successful movements:Global warming was not called ''Roast", and the movement was not called "anti-everyone-burns-up-ism"Nuclear holocaust was not called "mega-boom"Anti-slavery was not called ... (OK I won't touch this one)How well has the use of cute names worked in the past?I can't think of any examples where they have caught on in a positive way. The closest I can think of are"Nudge" (by Richard Thaler?)... to describe choice-architecture interventions; - My impression is that the term 'nudge' got people to remember it but made it rather easy to dismissothers in that space have come up with names that caught on less well I think (like "sludge"), which also induce a bit of cringe"Woke"I think this example basically speaks for itself.Tea-Party movementThis goes in the opposite direction perhaps (fairly successful), but I still think it's not quite as cringeworthy as FOOM. The term 'tea party' obviously has a long history in our culture, especially the "Boston Tea Party.What else?I asked GPT4when have social movements used cute 'inside joke' names to refer to the threats faced?The suggestions are not half as cute or in-jokey as FOOM: Net Neutrality, The Umbrella Movement, Extinction Rebellion (XR), Occupy Wall Street (OWS)I asked it to get cuter... [1]Prodding it further... Climategate, Frankenfoods, Slacktivism ... also not so inside-jokey nor as cringeworthy IMO.Prodding it for more cutesy more inside-jokey yields a few terms that barely caught on, or didn't characterize the movement or the threat as a whole.[2]Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 14 Apr 2023 18:46:21 +0000 EA - Anti-'FOOM' (stop trying to make your cute pet name the thing) by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anti-'FOOM' (stop trying to make your cute pet name the thing), published by david reinstein on April 14, 2023 on The Effective Altruism Forum.Notes/basis: This is kind of a short-form post in style but I think it's important enough to put here. Obviously let me know if someone else said this betterSummaryFormal overly-intellectual academese is bad. But using your 'cute' inside joke name for things is potentially worse. It makes people cringe, sounds like you are trying to take ownership of something, and excludes people. Use a name that is approachable but serious.The problem.Where did the term 'FOOM' come from, to refer to AGI risk? I asked GPT4:[!ai] AIThe term 'foom' was coined by artificial intelligence researcher and author Eliezer Yudkowsky in his 2008 book titled "The Sequences". Yudkowsky used the term to refer to a hypothetical scenario where an artificial general intelligence (AGI) rapidly and exponentially improves its own intelligence, leading to an uncontrollable and potentially catastrophic outcome for humanity. The term 'foom' is a play on the word 'boom', representing the sudden and explosive nature of AGI development in this scenario.Another example: 'AI-not-kill-everyone-ism'Analogies to fairly successful movements:Global warming was not called ''Roast", and the movement was not called "anti-everyone-burns-up-ism"Nuclear holocaust was not called "mega-boom"Anti-slavery was not called ... (OK I won't touch this one)How well has the use of cute names worked in the past?I can't think of any examples where they have caught on in a positive way. The closest I can think of are"Nudge" (by Richard Thaler?)... to describe choice-architecture interventions; - My impression is that the term 'nudge' got people to remember it but made it rather easy to dismissothers in that space have come up with names that caught on less well I think (like "sludge"), which also induce a bit of cringe"Woke"I think this example basically speaks for itself.Tea-Party movementThis goes in the opposite direction perhaps (fairly successful), but I still think it's not quite as cringeworthy as FOOM. The term 'tea party' obviously has a long history in our culture, especially the "Boston Tea Party.What else?I asked GPT4when have social movements used cute 'inside joke' names to refer to the threats faced?The suggestions are not half as cute or in-jokey as FOOM: Net Neutrality, The Umbrella Movement, Extinction Rebellion (XR), Occupy Wall Street (OWS)I asked it to get cuter... [1]Prodding it further... Climategate, Frankenfoods, Slacktivism ... also not so inside-jokey nor as cringeworthy IMO.Prodding it for more cutesy more inside-jokey yields a few terms that barely caught on, or didn't characterize the movement or the threat as a whole.[2]Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anti-'FOOM' (stop trying to make your cute pet name the thing), published by david reinstein on April 14, 2023 on The Effective Altruism Forum.Notes/basis: This is kind of a short-form post in style but I think it's important enough to put here. Obviously let me know if someone else said this betterSummaryFormal overly-intellectual academese is bad. But using your 'cute' inside joke name for things is potentially worse. It makes people cringe, sounds like you are trying to take ownership of something, and excludes people. Use a name that is approachable but serious.The problem.Where did the term 'FOOM' come from, to refer to AGI risk? I asked GPT4:[!ai] AIThe term 'foom' was coined by artificial intelligence researcher and author Eliezer Yudkowsky in his 2008 book titled "The Sequences". Yudkowsky used the term to refer to a hypothetical scenario where an artificial general intelligence (AGI) rapidly and exponentially improves its own intelligence, leading to an uncontrollable and potentially catastrophic outcome for humanity. The term 'foom' is a play on the word 'boom', representing the sudden and explosive nature of AGI development in this scenario.Another example: 'AI-not-kill-everyone-ism'Analogies to fairly successful movements:Global warming was not called ''Roast", and the movement was not called "anti-everyone-burns-up-ism"Nuclear holocaust was not called "mega-boom"Anti-slavery was not called ... (OK I won't touch this one)How well has the use of cute names worked in the past?I can't think of any examples where they have caught on in a positive way. The closest I can think of are"Nudge" (by Richard Thaler?)... to describe choice-architecture interventions; - My impression is that the term 'nudge' got people to remember it but made it rather easy to dismissothers in that space have come up with names that caught on less well I think (like "sludge"), which also induce a bit of cringe"Woke"I think this example basically speaks for itself.Tea-Party movementThis goes in the opposite direction perhaps (fairly successful), but I still think it's not quite as cringeworthy as FOOM. The term 'tea party' obviously has a long history in our culture, especially the "Boston Tea Party.What else?I asked GPT4when have social movements used cute 'inside joke' names to refer to the threats faced?The suggestions are not half as cute or in-jokey as FOOM: Net Neutrality, The Umbrella Movement, Extinction Rebellion (XR), Occupy Wall Street (OWS)I asked it to get cuter... [1]Prodding it further... Climategate, Frankenfoods, Slacktivism ... also not so inside-jokey nor as cringeworthy IMO.Prodding it for more cutesy more inside-jokey yields a few terms that barely caught on, or didn't characterize the movement or the threat as a whole.[2]Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
david reinstein https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:55 None full 5599
2DzLY6YP2z5zRDAGA_NL_EA_EA EA - A freshman year during the AI midgame: my approach to the next year by Buck Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A freshman year during the AI midgame: my approach to the next year, published by Buck on April 14, 2023 on The Effective Altruism Forum.I recently spent some time reflecting on my career and my life, for a few reasons:It was my 29th birthday, an occasion which felt like a particularly natural time to think through what I wanted to accomplish over the course of the next year .It seems like AI progress is heating up.It felt like a good time to reflect on how Redwood has been going, because we’ve been having conversations with funders about getting more funding.I wanted to have a better answer to these questions:What’s the default trajectory that I should plan for my career to follow? And what does this imply for what I should be doing right now?How much urgency should I feel in my life?How hard should I work?How much should I be trying to do the most valuable-seeming thing, vs engaging in more playful exploration and learning?In summary:For the purposes of planning my life, I'm going to act as if there are four years before AGI development progresses enough that I should substantially change what I'm doing with my time, and then there are three years after that before AI has transformed the world unrecognizably.I'm going to treat this phase of my career with the urgency of a college freshman looking at their undergrad degree--every month is 2% of their degree, which is a nontrivial fraction, but they should also feel like they have a substantial amount of space to grow and explore.The AI midgameI want to split the AI timeline into the following categories.The early game, during which interest in AI is not mainstream. I think this ended within the last yearThe midgame, during which interest in AI is mainstream but before AGI is imminent. During the midgame:The AI companies are building AIs that they don’t expect will be transformative.The alignment work we do is largely practice for alignment work later, rather than an attempt to build AIs that we can get useful cognitive labor from without them staging coups.For the purpose of planning my life, I’m going to imagine this as lasting four more years. This is shorter than my median estimate of how long this phase will actually last.The endgame, during which AI companies conceive of themselves as actively building models that will imminently be transformative, and that pose existential takeover risk.During the endgame, I think that we shouldn’t count on having time to develop fundamentally new alignment insights or techniques (except maybe if AIs do most of the work? Idt we should count on this); we should be planning to mostly just execute on alignment techniques that involve ingredients that seem immediately applicable.For the purpose of planning my life, I’m going to imagine this as lasting three years. This is about as long as I expect this phase to actually take.I think this division matters because several aspects of my current work seem like they’re optimized for midgame, and I should plausibly do something very differently in the endgame. Features of my current life that should plausibly change in the endgame:I'm doing blue-sky alignment research into novel alignment techniques–during the endgame, it might be too late to do this.I'm working at an independent alignment org and not interacting with labs that much. During the endgame, I probably either want to be working at a lab or doing something else that involves interacting with labs a lot. (I feel pretty uncertain about whether Redwood should dissolve during the AI endgame.)I spend a lot of my time constructing alignment cases that I think analogous to difficulties that we expect to face later. During the endgame, you probably have access to the strategy “observe/construct alignment cases that are obviously scary in the models you have”...]]>
Buck https://forum.effectivealtruism.org/posts/2DzLY6YP2z5zRDAGA/a-freshman-year-during-the-ai-midgame-my-approach-to-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A freshman year during the AI midgame: my approach to the next year, published by Buck on April 14, 2023 on The Effective Altruism Forum.I recently spent some time reflecting on my career and my life, for a few reasons:It was my 29th birthday, an occasion which felt like a particularly natural time to think through what I wanted to accomplish over the course of the next year .It seems like AI progress is heating up.It felt like a good time to reflect on how Redwood has been going, because we’ve been having conversations with funders about getting more funding.I wanted to have a better answer to these questions:What’s the default trajectory that I should plan for my career to follow? And what does this imply for what I should be doing right now?How much urgency should I feel in my life?How hard should I work?How much should I be trying to do the most valuable-seeming thing, vs engaging in more playful exploration and learning?In summary:For the purposes of planning my life, I'm going to act as if there are four years before AGI development progresses enough that I should substantially change what I'm doing with my time, and then there are three years after that before AI has transformed the world unrecognizably.I'm going to treat this phase of my career with the urgency of a college freshman looking at their undergrad degree--every month is 2% of their degree, which is a nontrivial fraction, but they should also feel like they have a substantial amount of space to grow and explore.The AI midgameI want to split the AI timeline into the following categories.The early game, during which interest in AI is not mainstream. I think this ended within the last yearThe midgame, during which interest in AI is mainstream but before AGI is imminent. During the midgame:The AI companies are building AIs that they don’t expect will be transformative.The alignment work we do is largely practice for alignment work later, rather than an attempt to build AIs that we can get useful cognitive labor from without them staging coups.For the purpose of planning my life, I’m going to imagine this as lasting four more years. This is shorter than my median estimate of how long this phase will actually last.The endgame, during which AI companies conceive of themselves as actively building models that will imminently be transformative, and that pose existential takeover risk.During the endgame, I think that we shouldn’t count on having time to develop fundamentally new alignment insights or techniques (except maybe if AIs do most of the work? Idt we should count on this); we should be planning to mostly just execute on alignment techniques that involve ingredients that seem immediately applicable.For the purpose of planning my life, I’m going to imagine this as lasting three years. This is about as long as I expect this phase to actually take.I think this division matters because several aspects of my current work seem like they’re optimized for midgame, and I should plausibly do something very differently in the endgame. Features of my current life that should plausibly change in the endgame:I'm doing blue-sky alignment research into novel alignment techniques–during the endgame, it might be too late to do this.I'm working at an independent alignment org and not interacting with labs that much. During the endgame, I probably either want to be working at a lab or doing something else that involves interacting with labs a lot. (I feel pretty uncertain about whether Redwood should dissolve during the AI endgame.)I spend a lot of my time constructing alignment cases that I think analogous to difficulties that we expect to face later. During the endgame, you probably have access to the strategy “observe/construct alignment cases that are obviously scary in the models you have”...]]>
Fri, 14 Apr 2023 03:32:22 +0000 EA - A freshman year during the AI midgame: my approach to the next year by Buck Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A freshman year during the AI midgame: my approach to the next year, published by Buck on April 14, 2023 on The Effective Altruism Forum.I recently spent some time reflecting on my career and my life, for a few reasons:It was my 29th birthday, an occasion which felt like a particularly natural time to think through what I wanted to accomplish over the course of the next year .It seems like AI progress is heating up.It felt like a good time to reflect on how Redwood has been going, because we’ve been having conversations with funders about getting more funding.I wanted to have a better answer to these questions:What’s the default trajectory that I should plan for my career to follow? And what does this imply for what I should be doing right now?How much urgency should I feel in my life?How hard should I work?How much should I be trying to do the most valuable-seeming thing, vs engaging in more playful exploration and learning?In summary:For the purposes of planning my life, I'm going to act as if there are four years before AGI development progresses enough that I should substantially change what I'm doing with my time, and then there are three years after that before AI has transformed the world unrecognizably.I'm going to treat this phase of my career with the urgency of a college freshman looking at their undergrad degree--every month is 2% of their degree, which is a nontrivial fraction, but they should also feel like they have a substantial amount of space to grow and explore.The AI midgameI want to split the AI timeline into the following categories.The early game, during which interest in AI is not mainstream. I think this ended within the last yearThe midgame, during which interest in AI is mainstream but before AGI is imminent. During the midgame:The AI companies are building AIs that they don’t expect will be transformative.The alignment work we do is largely practice for alignment work later, rather than an attempt to build AIs that we can get useful cognitive labor from without them staging coups.For the purpose of planning my life, I’m going to imagine this as lasting four more years. This is shorter than my median estimate of how long this phase will actually last.The endgame, during which AI companies conceive of themselves as actively building models that will imminently be transformative, and that pose existential takeover risk.During the endgame, I think that we shouldn’t count on having time to develop fundamentally new alignment insights or techniques (except maybe if AIs do most of the work? Idt we should count on this); we should be planning to mostly just execute on alignment techniques that involve ingredients that seem immediately applicable.For the purpose of planning my life, I’m going to imagine this as lasting three years. This is about as long as I expect this phase to actually take.I think this division matters because several aspects of my current work seem like they’re optimized for midgame, and I should plausibly do something very differently in the endgame. Features of my current life that should plausibly change in the endgame:I'm doing blue-sky alignment research into novel alignment techniques–during the endgame, it might be too late to do this.I'm working at an independent alignment org and not interacting with labs that much. During the endgame, I probably either want to be working at a lab or doing something else that involves interacting with labs a lot. (I feel pretty uncertain about whether Redwood should dissolve during the AI endgame.)I spend a lot of my time constructing alignment cases that I think analogous to difficulties that we expect to face later. During the endgame, you probably have access to the strategy “observe/construct alignment cases that are obviously scary in the models you have”...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A freshman year during the AI midgame: my approach to the next year, published by Buck on April 14, 2023 on The Effective Altruism Forum.I recently spent some time reflecting on my career and my life, for a few reasons:It was my 29th birthday, an occasion which felt like a particularly natural time to think through what I wanted to accomplish over the course of the next year .It seems like AI progress is heating up.It felt like a good time to reflect on how Redwood has been going, because we’ve been having conversations with funders about getting more funding.I wanted to have a better answer to these questions:What’s the default trajectory that I should plan for my career to follow? And what does this imply for what I should be doing right now?How much urgency should I feel in my life?How hard should I work?How much should I be trying to do the most valuable-seeming thing, vs engaging in more playful exploration and learning?In summary:For the purposes of planning my life, I'm going to act as if there are four years before AGI development progresses enough that I should substantially change what I'm doing with my time, and then there are three years after that before AI has transformed the world unrecognizably.I'm going to treat this phase of my career with the urgency of a college freshman looking at their undergrad degree--every month is 2% of their degree, which is a nontrivial fraction, but they should also feel like they have a substantial amount of space to grow and explore.The AI midgameI want to split the AI timeline into the following categories.The early game, during which interest in AI is not mainstream. I think this ended within the last yearThe midgame, during which interest in AI is mainstream but before AGI is imminent. During the midgame:The AI companies are building AIs that they don’t expect will be transformative.The alignment work we do is largely practice for alignment work later, rather than an attempt to build AIs that we can get useful cognitive labor from without them staging coups.For the purpose of planning my life, I’m going to imagine this as lasting four more years. This is shorter than my median estimate of how long this phase will actually last.The endgame, during which AI companies conceive of themselves as actively building models that will imminently be transformative, and that pose existential takeover risk.During the endgame, I think that we shouldn’t count on having time to develop fundamentally new alignment insights or techniques (except maybe if AIs do most of the work? Idt we should count on this); we should be planning to mostly just execute on alignment techniques that involve ingredients that seem immediately applicable.For the purpose of planning my life, I’m going to imagine this as lasting three years. This is about as long as I expect this phase to actually take.I think this division matters because several aspects of my current work seem like they’re optimized for midgame, and I should plausibly do something very differently in the endgame. Features of my current life that should plausibly change in the endgame:I'm doing blue-sky alignment research into novel alignment techniques–during the endgame, it might be too late to do this.I'm working at an independent alignment org and not interacting with labs that much. During the endgame, I probably either want to be working at a lab or doing something else that involves interacting with labs a lot. (I feel pretty uncertain about whether Redwood should dissolve during the AI endgame.)I spend a lot of my time constructing alignment cases that I think analogous to difficulties that we expect to face later. During the endgame, you probably have access to the strategy “observe/construct alignment cases that are obviously scary in the models you have”...]]>
Buck https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:45 None full 5596
GcrKndFY2oSKEFLub_NL_EA_EA EA - [US] NTIA: AI Accountability Policy Request for Comment by Kyle J. Lucchese Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [US] NTIA: AI Accountability Policy Request for Comment, published by Kyle J. Lucchese on April 13, 2023 on The Effective Altruism Forum.The Department of Commerce’s National Telecommunications and Information Administration (NTIA) has launched an inquiry into what policies will help businesses, government, and the public be able to trust that Artificial Intelligence (AI) systems work as claimed – and without causing harm.In line with this, the NTIA announced today, April 13, 2023, a request for public comments on Artificial Intelligence (“AI”) system accountability measures and policies.Summary:The National Telecommunications and Information Administration (NTIA) hereby requests comments on Artificial Intelligence (“AI”) system accountability measures and policies. This request focuses on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders—that is, to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. NTIA will rely on these comments, along with other public engagements on this topic, to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.NTIA is seeking input on what policies should shape the AI accountability ecosystem, including topics such as:What kinds of data access is necessary to conduct audits and assessmentsHow can regulators and other actors incentivize and support credible assurance of AI systems along with other forms of accountabilityWhat different approaches might be needed in different industry sectors—like employment or health careIf you have relevant knowledge regarding AI technical safety and/or governance, please consider submitting a comment. This is a notable opportunity to positively inform US policymaking.You can find more information and formally submit your comments here.Comments can be submitted as a known individual, on behalf of an organization, or anonymously.The deadline to submit comments is June 12, 2023.NTIA is the Executive Branch agency that is principally responsible for advising the President [of the United States] on telecommunications and information policy issues.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Kyle J. Lucchese https://forum.effectivealtruism.org/posts/GcrKndFY2oSKEFLub/us-ntia-ai-accountability-policy-request-for-comment Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [US] NTIA: AI Accountability Policy Request for Comment, published by Kyle J. Lucchese on April 13, 2023 on The Effective Altruism Forum.The Department of Commerce’s National Telecommunications and Information Administration (NTIA) has launched an inquiry into what policies will help businesses, government, and the public be able to trust that Artificial Intelligence (AI) systems work as claimed – and without causing harm.In line with this, the NTIA announced today, April 13, 2023, a request for public comments on Artificial Intelligence (“AI”) system accountability measures and policies.Summary:The National Telecommunications and Information Administration (NTIA) hereby requests comments on Artificial Intelligence (“AI”) system accountability measures and policies. This request focuses on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders—that is, to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. NTIA will rely on these comments, along with other public engagements on this topic, to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.NTIA is seeking input on what policies should shape the AI accountability ecosystem, including topics such as:What kinds of data access is necessary to conduct audits and assessmentsHow can regulators and other actors incentivize and support credible assurance of AI systems along with other forms of accountabilityWhat different approaches might be needed in different industry sectors—like employment or health careIf you have relevant knowledge regarding AI technical safety and/or governance, please consider submitting a comment. This is a notable opportunity to positively inform US policymaking.You can find more information and formally submit your comments here.Comments can be submitted as a known individual, on behalf of an organization, or anonymously.The deadline to submit comments is June 12, 2023.NTIA is the Executive Branch agency that is principally responsible for advising the President [of the United States] on telecommunications and information policy issues.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 13 Apr 2023 22:37:58 +0000 EA - [US] NTIA: AI Accountability Policy Request for Comment by Kyle J. Lucchese Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [US] NTIA: AI Accountability Policy Request for Comment, published by Kyle J. Lucchese on April 13, 2023 on The Effective Altruism Forum.The Department of Commerce’s National Telecommunications and Information Administration (NTIA) has launched an inquiry into what policies will help businesses, government, and the public be able to trust that Artificial Intelligence (AI) systems work as claimed – and without causing harm.In line with this, the NTIA announced today, April 13, 2023, a request for public comments on Artificial Intelligence (“AI”) system accountability measures and policies.Summary:The National Telecommunications and Information Administration (NTIA) hereby requests comments on Artificial Intelligence (“AI”) system accountability measures and policies. This request focuses on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders—that is, to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. NTIA will rely on these comments, along with other public engagements on this topic, to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.NTIA is seeking input on what policies should shape the AI accountability ecosystem, including topics such as:What kinds of data access is necessary to conduct audits and assessmentsHow can regulators and other actors incentivize and support credible assurance of AI systems along with other forms of accountabilityWhat different approaches might be needed in different industry sectors—like employment or health careIf you have relevant knowledge regarding AI technical safety and/or governance, please consider submitting a comment. This is a notable opportunity to positively inform US policymaking.You can find more information and formally submit your comments here.Comments can be submitted as a known individual, on behalf of an organization, or anonymously.The deadline to submit comments is June 12, 2023.NTIA is the Executive Branch agency that is principally responsible for advising the President [of the United States] on telecommunications and information policy issues.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [US] NTIA: AI Accountability Policy Request for Comment, published by Kyle J. Lucchese on April 13, 2023 on The Effective Altruism Forum.The Department of Commerce’s National Telecommunications and Information Administration (NTIA) has launched an inquiry into what policies will help businesses, government, and the public be able to trust that Artificial Intelligence (AI) systems work as claimed – and without causing harm.In line with this, the NTIA announced today, April 13, 2023, a request for public comments on Artificial Intelligence (“AI”) system accountability measures and policies.Summary:The National Telecommunications and Information Administration (NTIA) hereby requests comments on Artificial Intelligence (“AI”) system accountability measures and policies. This request focuses on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders—that is, to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. NTIA will rely on these comments, along with other public engagements on this topic, to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.NTIA is seeking input on what policies should shape the AI accountability ecosystem, including topics such as:What kinds of data access is necessary to conduct audits and assessmentsHow can regulators and other actors incentivize and support credible assurance of AI systems along with other forms of accountabilityWhat different approaches might be needed in different industry sectors—like employment or health careIf you have relevant knowledge regarding AI technical safety and/or governance, please consider submitting a comment. This is a notable opportunity to positively inform US policymaking.You can find more information and formally submit your comments here.Comments can be submitted as a known individual, on behalf of an organization, or anonymously.The deadline to submit comments is June 12, 2023.NTIA is the Executive Branch agency that is principally responsible for advising the President [of the United States] on telecommunications and information policy issues.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Kyle J. Lucchese https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:22 None full 5597
jj7pnPocbptaGdJEW_NL_EA_EA EA - How altruistic perfectionism is self-defeating (Tim LeBon on The 80,000 Hours Podcast) by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How altruistic perfectionism is self-defeating (Tim LeBon on The 80,000 Hours Podcast), published by 80000 Hours on April 12, 2023 on The Effective Altruism Forum.Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Tim LeBon on how altruistic perfectionism is self-defeating.You can click through for the audio, a full transcript and related links. Below is the episode summary and some key excerpts.Episode summaryBeing a good and successful person is core to your identity. You place great importance on meeting the high moral, professional, or academic standards you set yourself.But inevitably, something goes wrong and you fail to meet that high bar. Now you feel terrible about yourself, and worry others are judging you for your failure. Feeling low and reflecting constantly on whether you're doing as much as you think you should makes it hard to focus and get things done. So now you're performing below a normal level, making you feel even more ashamed of yourself. Rinse and repeat.This is the disastrous cycle today's guest, Tim LeBon — registered psychotherapist, accredited CBT therapist, life coach, and author of 365 Ways to Be More Stoic — has observed in many clients with a perfectionist mindset.Tim has provided therapy to a number of 80,000 Hours readers — people who have found that the very high expectations they had set for themselves were holding them back. Because of our focus on "doing the most good you can," Tim thinks 80,000 Hours both attracts people with this style of thinking and then exacerbates it.But Tim, having studied and written on moral philosophy, is sympathetic to the idea of helping others as much as possible, and is excited to help clients pursue that — sustainably — if it's their goal.Tim has treated hundreds of clients with all sorts of mental health challenges. But in today's conversation, he shares the lessons he has learned working with people who take helping others so seriously that it has become burdensome and self-defeating — in particular, how clients can approach this challenge using the treatment he's most enthusiastic about: cognitive behavioural therapy.As Tim stresses, perfectionism isn't the same as being perfect, or simply pursuing excellence. What's most distinctive about perfectionism is that a person's standards don't vary flexibly according to circumstance, meeting those standards without exception is key to their self-image, and they worry something terrible will happen if they fail to meet them.It's a mindset most of us have seen in ourselves at some point, or have seen people we love struggle with.Untreated, perfectionism might not cause problems for many years — it might even seem positive providing a source of motivation to work hard. But it's hard to feel truly happy and secure, and free to take risks, when we're just one failure away from our self-worth falling through the floor. And if someone slips into the positive feedback loop of shame described above, the end result can be depression and anxiety that's hard to shake.But there's hope. Tim has seen clients make real progress on their perfectionism by using CBT techniques like exposure therapy. By doing things like experimenting with more flexible standards — for example, sending early drafts to your colleagues, even if it terrifies you — you can learn that things will be okay, even when you're not perfect.In today's extensive conversation, Tim and Rob cover:How perfectionism is different from the pursuit of excellence, scrupulosity, or an OCD personalityWhat leads people to adopt a perfectionist mindsetThe pros and cons of perfectionismHow 80,000 Hours contributes to perfectionism among some readers and listeners, and w...]]>
80000 Hours https://forum.effectivealtruism.org/posts/jj7pnPocbptaGdJEW/how-altruistic-perfectionism-is-self-defeating-tim-lebon-on-2 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How altruistic perfectionism is self-defeating (Tim LeBon on The 80,000 Hours Podcast), published by 80000 Hours on April 12, 2023 on The Effective Altruism Forum.Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Tim LeBon on how altruistic perfectionism is self-defeating.You can click through for the audio, a full transcript and related links. Below is the episode summary and some key excerpts.Episode summaryBeing a good and successful person is core to your identity. You place great importance on meeting the high moral, professional, or academic standards you set yourself.But inevitably, something goes wrong and you fail to meet that high bar. Now you feel terrible about yourself, and worry others are judging you for your failure. Feeling low and reflecting constantly on whether you're doing as much as you think you should makes it hard to focus and get things done. So now you're performing below a normal level, making you feel even more ashamed of yourself. Rinse and repeat.This is the disastrous cycle today's guest, Tim LeBon — registered psychotherapist, accredited CBT therapist, life coach, and author of 365 Ways to Be More Stoic — has observed in many clients with a perfectionist mindset.Tim has provided therapy to a number of 80,000 Hours readers — people who have found that the very high expectations they had set for themselves were holding them back. Because of our focus on "doing the most good you can," Tim thinks 80,000 Hours both attracts people with this style of thinking and then exacerbates it.But Tim, having studied and written on moral philosophy, is sympathetic to the idea of helping others as much as possible, and is excited to help clients pursue that — sustainably — if it's their goal.Tim has treated hundreds of clients with all sorts of mental health challenges. But in today's conversation, he shares the lessons he has learned working with people who take helping others so seriously that it has become burdensome and self-defeating — in particular, how clients can approach this challenge using the treatment he's most enthusiastic about: cognitive behavioural therapy.As Tim stresses, perfectionism isn't the same as being perfect, or simply pursuing excellence. What's most distinctive about perfectionism is that a person's standards don't vary flexibly according to circumstance, meeting those standards without exception is key to their self-image, and they worry something terrible will happen if they fail to meet them.It's a mindset most of us have seen in ourselves at some point, or have seen people we love struggle with.Untreated, perfectionism might not cause problems for many years — it might even seem positive providing a source of motivation to work hard. But it's hard to feel truly happy and secure, and free to take risks, when we're just one failure away from our self-worth falling through the floor. And if someone slips into the positive feedback loop of shame described above, the end result can be depression and anxiety that's hard to shake.But there's hope. Tim has seen clients make real progress on their perfectionism by using CBT techniques like exposure therapy. By doing things like experimenting with more flexible standards — for example, sending early drafts to your colleagues, even if it terrifies you — you can learn that things will be okay, even when you're not perfect.In today's extensive conversation, Tim and Rob cover:How perfectionism is different from the pursuit of excellence, scrupulosity, or an OCD personalityWhat leads people to adopt a perfectionist mindsetThe pros and cons of perfectionismHow 80,000 Hours contributes to perfectionism among some readers and listeners, and w...]]>
Thu, 13 Apr 2023 19:23:30 +0000 EA - How altruistic perfectionism is self-defeating (Tim LeBon on The 80,000 Hours Podcast) by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How altruistic perfectionism is self-defeating (Tim LeBon on The 80,000 Hours Podcast), published by 80000 Hours on April 12, 2023 on The Effective Altruism Forum.Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Tim LeBon on how altruistic perfectionism is self-defeating.You can click through for the audio, a full transcript and related links. Below is the episode summary and some key excerpts.Episode summaryBeing a good and successful person is core to your identity. You place great importance on meeting the high moral, professional, or academic standards you set yourself.But inevitably, something goes wrong and you fail to meet that high bar. Now you feel terrible about yourself, and worry others are judging you for your failure. Feeling low and reflecting constantly on whether you're doing as much as you think you should makes it hard to focus and get things done. So now you're performing below a normal level, making you feel even more ashamed of yourself. Rinse and repeat.This is the disastrous cycle today's guest, Tim LeBon — registered psychotherapist, accredited CBT therapist, life coach, and author of 365 Ways to Be More Stoic — has observed in many clients with a perfectionist mindset.Tim has provided therapy to a number of 80,000 Hours readers — people who have found that the very high expectations they had set for themselves were holding them back. Because of our focus on "doing the most good you can," Tim thinks 80,000 Hours both attracts people with this style of thinking and then exacerbates it.But Tim, having studied and written on moral philosophy, is sympathetic to the idea of helping others as much as possible, and is excited to help clients pursue that — sustainably — if it's their goal.Tim has treated hundreds of clients with all sorts of mental health challenges. But in today's conversation, he shares the lessons he has learned working with people who take helping others so seriously that it has become burdensome and self-defeating — in particular, how clients can approach this challenge using the treatment he's most enthusiastic about: cognitive behavioural therapy.As Tim stresses, perfectionism isn't the same as being perfect, or simply pursuing excellence. What's most distinctive about perfectionism is that a person's standards don't vary flexibly according to circumstance, meeting those standards without exception is key to their self-image, and they worry something terrible will happen if they fail to meet them.It's a mindset most of us have seen in ourselves at some point, or have seen people we love struggle with.Untreated, perfectionism might not cause problems for many years — it might even seem positive providing a source of motivation to work hard. But it's hard to feel truly happy and secure, and free to take risks, when we're just one failure away from our self-worth falling through the floor. And if someone slips into the positive feedback loop of shame described above, the end result can be depression and anxiety that's hard to shake.But there's hope. Tim has seen clients make real progress on their perfectionism by using CBT techniques like exposure therapy. By doing things like experimenting with more flexible standards — for example, sending early drafts to your colleagues, even if it terrifies you — you can learn that things will be okay, even when you're not perfect.In today's extensive conversation, Tim and Rob cover:How perfectionism is different from the pursuit of excellence, scrupulosity, or an OCD personalityWhat leads people to adopt a perfectionist mindsetThe pros and cons of perfectionismHow 80,000 Hours contributes to perfectionism among some readers and listeners, and w...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How altruistic perfectionism is self-defeating (Tim LeBon on The 80,000 Hours Podcast), published by 80000 Hours on April 12, 2023 on The Effective Altruism Forum.Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Tim LeBon on how altruistic perfectionism is self-defeating.You can click through for the audio, a full transcript and related links. Below is the episode summary and some key excerpts.Episode summaryBeing a good and successful person is core to your identity. You place great importance on meeting the high moral, professional, or academic standards you set yourself.But inevitably, something goes wrong and you fail to meet that high bar. Now you feel terrible about yourself, and worry others are judging you for your failure. Feeling low and reflecting constantly on whether you're doing as much as you think you should makes it hard to focus and get things done. So now you're performing below a normal level, making you feel even more ashamed of yourself. Rinse and repeat.This is the disastrous cycle today's guest, Tim LeBon — registered psychotherapist, accredited CBT therapist, life coach, and author of 365 Ways to Be More Stoic — has observed in many clients with a perfectionist mindset.Tim has provided therapy to a number of 80,000 Hours readers — people who have found that the very high expectations they had set for themselves were holding them back. Because of our focus on "doing the most good you can," Tim thinks 80,000 Hours both attracts people with this style of thinking and then exacerbates it.But Tim, having studied and written on moral philosophy, is sympathetic to the idea of helping others as much as possible, and is excited to help clients pursue that — sustainably — if it's their goal.Tim has treated hundreds of clients with all sorts of mental health challenges. But in today's conversation, he shares the lessons he has learned working with people who take helping others so seriously that it has become burdensome and self-defeating — in particular, how clients can approach this challenge using the treatment he's most enthusiastic about: cognitive behavioural therapy.As Tim stresses, perfectionism isn't the same as being perfect, or simply pursuing excellence. What's most distinctive about perfectionism is that a person's standards don't vary flexibly according to circumstance, meeting those standards without exception is key to their self-image, and they worry something terrible will happen if they fail to meet them.It's a mindset most of us have seen in ourselves at some point, or have seen people we love struggle with.Untreated, perfectionism might not cause problems for many years — it might even seem positive providing a source of motivation to work hard. But it's hard to feel truly happy and secure, and free to take risks, when we're just one failure away from our self-worth falling through the floor. And if someone slips into the positive feedback loop of shame described above, the end result can be depression and anxiety that's hard to shake.But there's hope. Tim has seen clients make real progress on their perfectionism by using CBT techniques like exposure therapy. By doing things like experimenting with more flexible standards — for example, sending early drafts to your colleagues, even if it terrifies you — you can learn that things will be okay, even when you're not perfect.In today's extensive conversation, Tim and Rob cover:How perfectionism is different from the pursuit of excellence, scrupulosity, or an OCD personalityWhat leads people to adopt a perfectionist mindsetThe pros and cons of perfectionismHow 80,000 Hours contributes to perfectionism among some readers and listeners, and w...]]>
80000 Hours https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 22:15 None full 5590
e2nggE7Ws8NXvDyFz_NL_EA_EA EA - My experience attending and volunteering at an EAGx for the first time by Nayanika Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My experience attending and volunteering at an EAGx for the first time, published by Nayanika on April 13, 2023 on The Effective Altruism Forum.TLDR: This post might be helpful for people who are about to volunteer or attend an EAGx for the first time.This is a relatively late post (due to unforeseen circumstances) as the conference had taken place exactly two months earlier from this date.Right from the shuttle service to the venue, till the last day of post-conference retreats, it's been one of a kind an experience at EAGxIndia, my first ever EAGx.Volunteer application experience: I applied for a full-time volunteering position so that I could understand the maximum essence of co-working in the EA community. The organizing team had quite prudently decided to allocate me a part-time. Later during the conference, I couldn't help but wonder how insightful a decision it was; personalized for me as a first-time attendee. Of course, the organizing team had an idea of my background since I was already a part of the community (Indian Network for Impact, previously EA India). But I understood how meaningfully I was allowed to volunteer part-time and observe the conference for the rest.The procedure involved: I went through a screening with an organizing team member. A cheerful face (Harriet Patterson) asked me about my aspirations for volunteering at the conference. Next, within a week I got an outcome of my screening and was selected as a part-time volunteer. Part-time volunteers were requested to arrive a day ahead of the main conference. The full-timers perhaps arrived 2 days ago. Volunteers were also invited to join an online training a week before the main event.Arrival at the conference: As I landed at the location, (Jaipur, India) I recalled that it was the land of India's rich history. I cherished my presence on this land for the first time after reading about it in my school history book. The venue selection was spot on! Comes next, is the shuttle assistance. Although I didn't have a shuttle close to my arrival time, I communicated with the coordinator that I would take a cab. This coordinator took care of my safety (by asking for my live location) throughout my travel from the airport to the accommodation venue. I was also told that I would be reimbursed for my cab expense since I was a volunteer. With this, my immediate understanding was- You're greatly taken care of as a volunteer at an EAGx!The coordinator surprisingly had a very friendly nature. This in turn unsurprisingly made him a good friend and the first point of contact (in case of any anomaly) for me and hopefully a few others at the conference. This helps especially when you're alone at a place for the first time. Came to know later that he was studying Law and had been exposed to EA for the first time.About the accommodation: It was a decent arrangement. Shuttles were arranged for transportation purposes between the stay and the conference venue (both were about 10 mins apart).First day of the conference: I somehow gulped something for breakfast as the training was about to start and attendees were to arrive by that evening. Unfortunately, I caught a bad cold. The organizers had arranged Covid test kits for everyone. I took a test and could continue being in the conference as the test result were 'negative'!Volunteering begins: After a short training with Harriet, we were handed over to the leaders of our respective volunteering departments (e g. Logistics, Registration, Speaker Liaison). I was on the registration team. Our team lead, Ivan Burduk, took us for scouting around the venue. It was fun for sure. We were figuring out the locations ourselves so that we could promptly guide the attendees.Then, we started arranging the IDs for the attendees and speakers at the registration desk. But w...]]>
Nayanika https://forum.effectivealtruism.org/posts/e2nggE7Ws8NXvDyFz/my-experience-attending-and-volunteering-at-an-eagx-for-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My experience attending and volunteering at an EAGx for the first time, published by Nayanika on April 13, 2023 on The Effective Altruism Forum.TLDR: This post might be helpful for people who are about to volunteer or attend an EAGx for the first time.This is a relatively late post (due to unforeseen circumstances) as the conference had taken place exactly two months earlier from this date.Right from the shuttle service to the venue, till the last day of post-conference retreats, it's been one of a kind an experience at EAGxIndia, my first ever EAGx.Volunteer application experience: I applied for a full-time volunteering position so that I could understand the maximum essence of co-working in the EA community. The organizing team had quite prudently decided to allocate me a part-time. Later during the conference, I couldn't help but wonder how insightful a decision it was; personalized for me as a first-time attendee. Of course, the organizing team had an idea of my background since I was already a part of the community (Indian Network for Impact, previously EA India). But I understood how meaningfully I was allowed to volunteer part-time and observe the conference for the rest.The procedure involved: I went through a screening with an organizing team member. A cheerful face (Harriet Patterson) asked me about my aspirations for volunteering at the conference. Next, within a week I got an outcome of my screening and was selected as a part-time volunteer. Part-time volunteers were requested to arrive a day ahead of the main conference. The full-timers perhaps arrived 2 days ago. Volunteers were also invited to join an online training a week before the main event.Arrival at the conference: As I landed at the location, (Jaipur, India) I recalled that it was the land of India's rich history. I cherished my presence on this land for the first time after reading about it in my school history book. The venue selection was spot on! Comes next, is the shuttle assistance. Although I didn't have a shuttle close to my arrival time, I communicated with the coordinator that I would take a cab. This coordinator took care of my safety (by asking for my live location) throughout my travel from the airport to the accommodation venue. I was also told that I would be reimbursed for my cab expense since I was a volunteer. With this, my immediate understanding was- You're greatly taken care of as a volunteer at an EAGx!The coordinator surprisingly had a very friendly nature. This in turn unsurprisingly made him a good friend and the first point of contact (in case of any anomaly) for me and hopefully a few others at the conference. This helps especially when you're alone at a place for the first time. Came to know later that he was studying Law and had been exposed to EA for the first time.About the accommodation: It was a decent arrangement. Shuttles were arranged for transportation purposes between the stay and the conference venue (both were about 10 mins apart).First day of the conference: I somehow gulped something for breakfast as the training was about to start and attendees were to arrive by that evening. Unfortunately, I caught a bad cold. The organizers had arranged Covid test kits for everyone. I took a test and could continue being in the conference as the test result were 'negative'!Volunteering begins: After a short training with Harriet, we were handed over to the leaders of our respective volunteering departments (e g. Logistics, Registration, Speaker Liaison). I was on the registration team. Our team lead, Ivan Burduk, took us for scouting around the venue. It was fun for sure. We were figuring out the locations ourselves so that we could promptly guide the attendees.Then, we started arranging the IDs for the attendees and speakers at the registration desk. But w...]]>
Thu, 13 Apr 2023 16:16:57 +0000 EA - My experience attending and volunteering at an EAGx for the first time by Nayanika Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My experience attending and volunteering at an EAGx for the first time, published by Nayanika on April 13, 2023 on The Effective Altruism Forum.TLDR: This post might be helpful for people who are about to volunteer or attend an EAGx for the first time.This is a relatively late post (due to unforeseen circumstances) as the conference had taken place exactly two months earlier from this date.Right from the shuttle service to the venue, till the last day of post-conference retreats, it's been one of a kind an experience at EAGxIndia, my first ever EAGx.Volunteer application experience: I applied for a full-time volunteering position so that I could understand the maximum essence of co-working in the EA community. The organizing team had quite prudently decided to allocate me a part-time. Later during the conference, I couldn't help but wonder how insightful a decision it was; personalized for me as a first-time attendee. Of course, the organizing team had an idea of my background since I was already a part of the community (Indian Network for Impact, previously EA India). But I understood how meaningfully I was allowed to volunteer part-time and observe the conference for the rest.The procedure involved: I went through a screening with an organizing team member. A cheerful face (Harriet Patterson) asked me about my aspirations for volunteering at the conference. Next, within a week I got an outcome of my screening and was selected as a part-time volunteer. Part-time volunteers were requested to arrive a day ahead of the main conference. The full-timers perhaps arrived 2 days ago. Volunteers were also invited to join an online training a week before the main event.Arrival at the conference: As I landed at the location, (Jaipur, India) I recalled that it was the land of India's rich history. I cherished my presence on this land for the first time after reading about it in my school history book. The venue selection was spot on! Comes next, is the shuttle assistance. Although I didn't have a shuttle close to my arrival time, I communicated with the coordinator that I would take a cab. This coordinator took care of my safety (by asking for my live location) throughout my travel from the airport to the accommodation venue. I was also told that I would be reimbursed for my cab expense since I was a volunteer. With this, my immediate understanding was- You're greatly taken care of as a volunteer at an EAGx!The coordinator surprisingly had a very friendly nature. This in turn unsurprisingly made him a good friend and the first point of contact (in case of any anomaly) for me and hopefully a few others at the conference. This helps especially when you're alone at a place for the first time. Came to know later that he was studying Law and had been exposed to EA for the first time.About the accommodation: It was a decent arrangement. Shuttles were arranged for transportation purposes between the stay and the conference venue (both were about 10 mins apart).First day of the conference: I somehow gulped something for breakfast as the training was about to start and attendees were to arrive by that evening. Unfortunately, I caught a bad cold. The organizers had arranged Covid test kits for everyone. I took a test and could continue being in the conference as the test result were 'negative'!Volunteering begins: After a short training with Harriet, we were handed over to the leaders of our respective volunteering departments (e g. Logistics, Registration, Speaker Liaison). I was on the registration team. Our team lead, Ivan Burduk, took us for scouting around the venue. It was fun for sure. We were figuring out the locations ourselves so that we could promptly guide the attendees.Then, we started arranging the IDs for the attendees and speakers at the registration desk. But w...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My experience attending and volunteering at an EAGx for the first time, published by Nayanika on April 13, 2023 on The Effective Altruism Forum.TLDR: This post might be helpful for people who are about to volunteer or attend an EAGx for the first time.This is a relatively late post (due to unforeseen circumstances) as the conference had taken place exactly two months earlier from this date.Right from the shuttle service to the venue, till the last day of post-conference retreats, it's been one of a kind an experience at EAGxIndia, my first ever EAGx.Volunteer application experience: I applied for a full-time volunteering position so that I could understand the maximum essence of co-working in the EA community. The organizing team had quite prudently decided to allocate me a part-time. Later during the conference, I couldn't help but wonder how insightful a decision it was; personalized for me as a first-time attendee. Of course, the organizing team had an idea of my background since I was already a part of the community (Indian Network for Impact, previously EA India). But I understood how meaningfully I was allowed to volunteer part-time and observe the conference for the rest.The procedure involved: I went through a screening with an organizing team member. A cheerful face (Harriet Patterson) asked me about my aspirations for volunteering at the conference. Next, within a week I got an outcome of my screening and was selected as a part-time volunteer. Part-time volunteers were requested to arrive a day ahead of the main conference. The full-timers perhaps arrived 2 days ago. Volunteers were also invited to join an online training a week before the main event.Arrival at the conference: As I landed at the location, (Jaipur, India) I recalled that it was the land of India's rich history. I cherished my presence on this land for the first time after reading about it in my school history book. The venue selection was spot on! Comes next, is the shuttle assistance. Although I didn't have a shuttle close to my arrival time, I communicated with the coordinator that I would take a cab. This coordinator took care of my safety (by asking for my live location) throughout my travel from the airport to the accommodation venue. I was also told that I would be reimbursed for my cab expense since I was a volunteer. With this, my immediate understanding was- You're greatly taken care of as a volunteer at an EAGx!The coordinator surprisingly had a very friendly nature. This in turn unsurprisingly made him a good friend and the first point of contact (in case of any anomaly) for me and hopefully a few others at the conference. This helps especially when you're alone at a place for the first time. Came to know later that he was studying Law and had been exposed to EA for the first time.About the accommodation: It was a decent arrangement. Shuttles were arranged for transportation purposes between the stay and the conference venue (both were about 10 mins apart).First day of the conference: I somehow gulped something for breakfast as the training was about to start and attendees were to arrive by that evening. Unfortunately, I caught a bad cold. The organizers had arranged Covid test kits for everyone. I took a test and could continue being in the conference as the test result were 'negative'!Volunteering begins: After a short training with Harriet, we were handed over to the leaders of our respective volunteering departments (e g. Logistics, Registration, Speaker Liaison). I was on the registration team. Our team lead, Ivan Burduk, took us for scouting around the venue. It was fun for sure. We were figuring out the locations ourselves so that we could promptly guide the attendees.Then, we started arranging the IDs for the attendees and speakers at the registration desk. But w...]]>
Nayanika https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:00 None full 5588
azCvqqjJ7Dkrv5XBH_NL_EA_EA EA - Announcing Epoch’s dashboard of key trends and figures in Machine Learning by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Epoch’s dashboard of key trends and figures in Machine Learning, published by Jaime Sevilla on April 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jaime Sevilla https://forum.effectivealtruism.org/posts/azCvqqjJ7Dkrv5XBH/announcing-epoch-s-dashboard-of-key-trends-and-figures-in Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Epoch’s dashboard of key trends and figures in Machine Learning, published by Jaime Sevilla on April 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 13 Apr 2023 13:06:35 +0000 EA - Announcing Epoch’s dashboard of key trends and figures in Machine Learning by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Epoch’s dashboard of key trends and figures in Machine Learning, published by Jaime Sevilla on April 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Epoch’s dashboard of key trends and figures in Machine Learning, published by Jaime Sevilla on April 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jaime Sevilla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:23 None full 5589
bT4ZKn6AwKWJJnzMv_NL_EA_EA EA - Announcing a new animal advocacy podcast: How I Learned to Love Shrimp by James Özden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a new animal advocacy podcast: How I Learned to Love Shrimp, published by James Özden on April 13, 2023 on The Effective Altruism Forum.Calling all animal-interested folks! Excited to share that we've launched a new animal-focused podcast series. How I Learned to Love Shrimp is a podcast about promising ways to help animals and build the animal advocacy movement. We showcase interesting and exciting ideas within animal advocacy and will release bi-weekly, hour-long interviews with people who are working on these projects.We start the series with an introductory episode with Amy and I discussing why we wanted to start this podcast, the topics we want to cover, and some of our own views on various thorny animal advocacy topics. Our first proper episode is with Dave Coman-Hidy, former President of The Humane League. In this, we discuss the age-old debate of welfare vs abolitionism, the pros and cons of measurability, as well as promising strategies Dave is keen to see more of within animal advocacy.You can check out the episodes across all major providers (e.g. Spotify, Google Podcasts & Apple Podcasts) and also access them on our website.Please let us know what you think, give us guest recommendations and share with anyone who you think could be interested to hear. You can contact us via Twitter, our website or email at hello@howilearnedtoloveshrimp.com. Enjoy!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
James Özden https://forum.effectivealtruism.org/posts/bT4ZKn6AwKWJJnzMv/announcing-a-new-animal-advocacy-podcast-how-i-learned-to Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a new animal advocacy podcast: How I Learned to Love Shrimp, published by James Özden on April 13, 2023 on The Effective Altruism Forum.Calling all animal-interested folks! Excited to share that we've launched a new animal-focused podcast series. How I Learned to Love Shrimp is a podcast about promising ways to help animals and build the animal advocacy movement. We showcase interesting and exciting ideas within animal advocacy and will release bi-weekly, hour-long interviews with people who are working on these projects.We start the series with an introductory episode with Amy and I discussing why we wanted to start this podcast, the topics we want to cover, and some of our own views on various thorny animal advocacy topics. Our first proper episode is with Dave Coman-Hidy, former President of The Humane League. In this, we discuss the age-old debate of welfare vs abolitionism, the pros and cons of measurability, as well as promising strategies Dave is keen to see more of within animal advocacy.You can check out the episodes across all major providers (e.g. Spotify, Google Podcasts & Apple Podcasts) and also access them on our website.Please let us know what you think, give us guest recommendations and share with anyone who you think could be interested to hear. You can contact us via Twitter, our website or email at hello@howilearnedtoloveshrimp.com. Enjoy!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 13 Apr 2023 11:52:50 +0000 EA - Announcing a new animal advocacy podcast: How I Learned to Love Shrimp by James Özden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a new animal advocacy podcast: How I Learned to Love Shrimp, published by James Özden on April 13, 2023 on The Effective Altruism Forum.Calling all animal-interested folks! Excited to share that we've launched a new animal-focused podcast series. How I Learned to Love Shrimp is a podcast about promising ways to help animals and build the animal advocacy movement. We showcase interesting and exciting ideas within animal advocacy and will release bi-weekly, hour-long interviews with people who are working on these projects.We start the series with an introductory episode with Amy and I discussing why we wanted to start this podcast, the topics we want to cover, and some of our own views on various thorny animal advocacy topics. Our first proper episode is with Dave Coman-Hidy, former President of The Humane League. In this, we discuss the age-old debate of welfare vs abolitionism, the pros and cons of measurability, as well as promising strategies Dave is keen to see more of within animal advocacy.You can check out the episodes across all major providers (e.g. Spotify, Google Podcasts & Apple Podcasts) and also access them on our website.Please let us know what you think, give us guest recommendations and share with anyone who you think could be interested to hear. You can contact us via Twitter, our website or email at hello@howilearnedtoloveshrimp.com. Enjoy!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a new animal advocacy podcast: How I Learned to Love Shrimp, published by James Özden on April 13, 2023 on The Effective Altruism Forum.Calling all animal-interested folks! Excited to share that we've launched a new animal-focused podcast series. How I Learned to Love Shrimp is a podcast about promising ways to help animals and build the animal advocacy movement. We showcase interesting and exciting ideas within animal advocacy and will release bi-weekly, hour-long interviews with people who are working on these projects.We start the series with an introductory episode with Amy and I discussing why we wanted to start this podcast, the topics we want to cover, and some of our own views on various thorny animal advocacy topics. Our first proper episode is with Dave Coman-Hidy, former President of The Humane League. In this, we discuss the age-old debate of welfare vs abolitionism, the pros and cons of measurability, as well as promising strategies Dave is keen to see more of within animal advocacy.You can check out the episodes across all major providers (e.g. Spotify, Google Podcasts & Apple Podcasts) and also access them on our website.Please let us know what you think, give us guest recommendations and share with anyone who you think could be interested to hear. You can contact us via Twitter, our website or email at hello@howilearnedtoloveshrimp.com. Enjoy!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
James Özden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:30 None full 5587
Qoecey2umNjcqEGHP_NL_EA_EA EA - Apply to >30 AI safety funders in one application with the Nonlinear Network by Drew Spartz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to >30 AI safety funders in one application with the Nonlinear Network, published by Drew Spartz on April 12, 2023 on The Effective Altruism Forum.Nonlinear spoke to dozens of earn-to-givers and a common sentiment was, "I want to fund good AI safety-related projects, but I don't know where to find them." At the same time, applicants don’t know how to find them either. And would-be applicants are often aware of just one or two funders - some think it’s “LTFF or bust” - causing many to give up before they’ve started, demoralized, because fundraising seems too hard.As a result, we’re trying an experiment to help folks get in front of donors and vice versa. In brief:Looking for funding?Why apply to just one funder when you can apply to dozens?If you've already applied for EA funding, simply paste your existing application. We’ll share it with relevant funders (~30 so far) in our network.You can apply if you’re still waiting to hear from other funders. This way, instead of having to awkwardly ask dozens of people and get rejected dozens of times (if you can even find the funders), you can just send in the application you already made.We’re also accepting non-technical projects relevant to AI safety (e.g. meta, forecasting, field-building, etc.)Application deadline: May 17, 2023.Looking for projects to fund?Apply to join the funding round by May 17, 2023. Soon after, we'll share access to a database of applications relevant to your interests (e.g. interpretability, moonshots, forecasting, field-building, novel research directions, etc).If you'd like to fund any projects, you can reach out to applicants directly, or we can help coordinate. This way, you avoid the awkwardness of directly rejecting applicants, and don’t get inundated by people trying to “sell” you.Inspiration for this projectWhen the FTX crisis broke, we quickly spun up the Nonlinear Emergency Fund to help provide bridge grants to tide people over until the larger funders could step in.Instead of making all the funding decisions ourselves, we put out a call to other funders/earn-to-givers. Scott Alexander connected us with a few dozen funders who reached out to help, and we created a Slack to collaborate.We shared every application (that consented) in an Airtable with around 30 other donors. This led to a flurry of activity as funders investigated applications. They collaborated on diligence, and grants were made that otherwise wouldn’t have happened.Some funders, like Scott, after seeing our recommendations, preferred to delegate decisions to us, but others preferred to make their own decisions. Collectively, we rapidly deployed roughly $500,000 - far more than we initially expected.The biggest lesson we learned: openly sharing applications with funders was high leverage - possibly leading to four times as many people receiving funding and 10 times more donations than would have happened if we hadn’t shared.If you’ve been thinking about raising money for your project idea, we encourage you to do it now. Push through your imposter syndrome because, as Leopold Aschenbrenner said, nobody’s on the ball on AGI alignment.Another reason to apply: we’ve heard from EA funders that they don’t get enough applications, so you should have a low bar for applying - many fund over 50% of applications they receive (SFF, LTFF, EAIF).Since the Nonlinear Network is a diverse set of funders, you can apply for a grant size anywhere between single digit thousands to single digit millions.Note: We’re aware of many valid critiques of this idea, but we’re keeping this post short so we actually ship it. We’re starting with projects related to AI safety because our timelines are short, but if this is successful, we plan to expand to the other cause areas.Apply here.Reminder that you can listen to LessWrong ...]]>
Drew Spartz https://forum.effectivealtruism.org/posts/Qoecey2umNjcqEGHP/apply-to-greater-than-30-ai-safety-funders-in-one Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to >30 AI safety funders in one application with the Nonlinear Network, published by Drew Spartz on April 12, 2023 on The Effective Altruism Forum.Nonlinear spoke to dozens of earn-to-givers and a common sentiment was, "I want to fund good AI safety-related projects, but I don't know where to find them." At the same time, applicants don’t know how to find them either. And would-be applicants are often aware of just one or two funders - some think it’s “LTFF or bust” - causing many to give up before they’ve started, demoralized, because fundraising seems too hard.As a result, we’re trying an experiment to help folks get in front of donors and vice versa. In brief:Looking for funding?Why apply to just one funder when you can apply to dozens?If you've already applied for EA funding, simply paste your existing application. We’ll share it with relevant funders (~30 so far) in our network.You can apply if you’re still waiting to hear from other funders. This way, instead of having to awkwardly ask dozens of people and get rejected dozens of times (if you can even find the funders), you can just send in the application you already made.We’re also accepting non-technical projects relevant to AI safety (e.g. meta, forecasting, field-building, etc.)Application deadline: May 17, 2023.Looking for projects to fund?Apply to join the funding round by May 17, 2023. Soon after, we'll share access to a database of applications relevant to your interests (e.g. interpretability, moonshots, forecasting, field-building, novel research directions, etc).If you'd like to fund any projects, you can reach out to applicants directly, or we can help coordinate. This way, you avoid the awkwardness of directly rejecting applicants, and don’t get inundated by people trying to “sell” you.Inspiration for this projectWhen the FTX crisis broke, we quickly spun up the Nonlinear Emergency Fund to help provide bridge grants to tide people over until the larger funders could step in.Instead of making all the funding decisions ourselves, we put out a call to other funders/earn-to-givers. Scott Alexander connected us with a few dozen funders who reached out to help, and we created a Slack to collaborate.We shared every application (that consented) in an Airtable with around 30 other donors. This led to a flurry of activity as funders investigated applications. They collaborated on diligence, and grants were made that otherwise wouldn’t have happened.Some funders, like Scott, after seeing our recommendations, preferred to delegate decisions to us, but others preferred to make their own decisions. Collectively, we rapidly deployed roughly $500,000 - far more than we initially expected.The biggest lesson we learned: openly sharing applications with funders was high leverage - possibly leading to four times as many people receiving funding and 10 times more donations than would have happened if we hadn’t shared.If you’ve been thinking about raising money for your project idea, we encourage you to do it now. Push through your imposter syndrome because, as Leopold Aschenbrenner said, nobody’s on the ball on AGI alignment.Another reason to apply: we’ve heard from EA funders that they don’t get enough applications, so you should have a low bar for applying - many fund over 50% of applications they receive (SFF, LTFF, EAIF).Since the Nonlinear Network is a diverse set of funders, you can apply for a grant size anywhere between single digit thousands to single digit millions.Note: We’re aware of many valid critiques of this idea, but we’re keeping this post short so we actually ship it. We’re starting with projects related to AI safety because our timelines are short, but if this is successful, we plan to expand to the other cause areas.Apply here.Reminder that you can listen to LessWrong ...]]>
Wed, 12 Apr 2023 21:19:53 +0000 EA - Apply to >30 AI safety funders in one application with the Nonlinear Network by Drew Spartz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to >30 AI safety funders in one application with the Nonlinear Network, published by Drew Spartz on April 12, 2023 on The Effective Altruism Forum.Nonlinear spoke to dozens of earn-to-givers and a common sentiment was, "I want to fund good AI safety-related projects, but I don't know where to find them." At the same time, applicants don’t know how to find them either. And would-be applicants are often aware of just one or two funders - some think it’s “LTFF or bust” - causing many to give up before they’ve started, demoralized, because fundraising seems too hard.As a result, we’re trying an experiment to help folks get in front of donors and vice versa. In brief:Looking for funding?Why apply to just one funder when you can apply to dozens?If you've already applied for EA funding, simply paste your existing application. We’ll share it with relevant funders (~30 so far) in our network.You can apply if you’re still waiting to hear from other funders. This way, instead of having to awkwardly ask dozens of people and get rejected dozens of times (if you can even find the funders), you can just send in the application you already made.We’re also accepting non-technical projects relevant to AI safety (e.g. meta, forecasting, field-building, etc.)Application deadline: May 17, 2023.Looking for projects to fund?Apply to join the funding round by May 17, 2023. Soon after, we'll share access to a database of applications relevant to your interests (e.g. interpretability, moonshots, forecasting, field-building, novel research directions, etc).If you'd like to fund any projects, you can reach out to applicants directly, or we can help coordinate. This way, you avoid the awkwardness of directly rejecting applicants, and don’t get inundated by people trying to “sell” you.Inspiration for this projectWhen the FTX crisis broke, we quickly spun up the Nonlinear Emergency Fund to help provide bridge grants to tide people over until the larger funders could step in.Instead of making all the funding decisions ourselves, we put out a call to other funders/earn-to-givers. Scott Alexander connected us with a few dozen funders who reached out to help, and we created a Slack to collaborate.We shared every application (that consented) in an Airtable with around 30 other donors. This led to a flurry of activity as funders investigated applications. They collaborated on diligence, and grants were made that otherwise wouldn’t have happened.Some funders, like Scott, after seeing our recommendations, preferred to delegate decisions to us, but others preferred to make their own decisions. Collectively, we rapidly deployed roughly $500,000 - far more than we initially expected.The biggest lesson we learned: openly sharing applications with funders was high leverage - possibly leading to four times as many people receiving funding and 10 times more donations than would have happened if we hadn’t shared.If you’ve been thinking about raising money for your project idea, we encourage you to do it now. Push through your imposter syndrome because, as Leopold Aschenbrenner said, nobody’s on the ball on AGI alignment.Another reason to apply: we’ve heard from EA funders that they don’t get enough applications, so you should have a low bar for applying - many fund over 50% of applications they receive (SFF, LTFF, EAIF).Since the Nonlinear Network is a diverse set of funders, you can apply for a grant size anywhere between single digit thousands to single digit millions.Note: We’re aware of many valid critiques of this idea, but we’re keeping this post short so we actually ship it. We’re starting with projects related to AI safety because our timelines are short, but if this is successful, we plan to expand to the other cause areas.Apply here.Reminder that you can listen to LessWrong ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to >30 AI safety funders in one application with the Nonlinear Network, published by Drew Spartz on April 12, 2023 on The Effective Altruism Forum.Nonlinear spoke to dozens of earn-to-givers and a common sentiment was, "I want to fund good AI safety-related projects, but I don't know where to find them." At the same time, applicants don’t know how to find them either. And would-be applicants are often aware of just one or two funders - some think it’s “LTFF or bust” - causing many to give up before they’ve started, demoralized, because fundraising seems too hard.As a result, we’re trying an experiment to help folks get in front of donors and vice versa. In brief:Looking for funding?Why apply to just one funder when you can apply to dozens?If you've already applied for EA funding, simply paste your existing application. We’ll share it with relevant funders (~30 so far) in our network.You can apply if you’re still waiting to hear from other funders. This way, instead of having to awkwardly ask dozens of people and get rejected dozens of times (if you can even find the funders), you can just send in the application you already made.We’re also accepting non-technical projects relevant to AI safety (e.g. meta, forecasting, field-building, etc.)Application deadline: May 17, 2023.Looking for projects to fund?Apply to join the funding round by May 17, 2023. Soon after, we'll share access to a database of applications relevant to your interests (e.g. interpretability, moonshots, forecasting, field-building, novel research directions, etc).If you'd like to fund any projects, you can reach out to applicants directly, or we can help coordinate. This way, you avoid the awkwardness of directly rejecting applicants, and don’t get inundated by people trying to “sell” you.Inspiration for this projectWhen the FTX crisis broke, we quickly spun up the Nonlinear Emergency Fund to help provide bridge grants to tide people over until the larger funders could step in.Instead of making all the funding decisions ourselves, we put out a call to other funders/earn-to-givers. Scott Alexander connected us with a few dozen funders who reached out to help, and we created a Slack to collaborate.We shared every application (that consented) in an Airtable with around 30 other donors. This led to a flurry of activity as funders investigated applications. They collaborated on diligence, and grants were made that otherwise wouldn’t have happened.Some funders, like Scott, after seeing our recommendations, preferred to delegate decisions to us, but others preferred to make their own decisions. Collectively, we rapidly deployed roughly $500,000 - far more than we initially expected.The biggest lesson we learned: openly sharing applications with funders was high leverage - possibly leading to four times as many people receiving funding and 10 times more donations than would have happened if we hadn’t shared.If you’ve been thinking about raising money for your project idea, we encourage you to do it now. Push through your imposter syndrome because, as Leopold Aschenbrenner said, nobody’s on the ball on AGI alignment.Another reason to apply: we’ve heard from EA funders that they don’t get enough applications, so you should have a low bar for applying - many fund over 50% of applications they receive (SFF, LTFF, EAIF).Since the Nonlinear Network is a diverse set of funders, you can apply for a grant size anywhere between single digit thousands to single digit millions.Note: We’re aware of many valid critiques of this idea, but we’re keeping this post short so we actually ship it. We’re starting with projects related to AI safety because our timelines are short, but if this is successful, we plan to expand to the other cause areas.Apply here.Reminder that you can listen to LessWrong ...]]>
Drew Spartz https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:50 None full 5571
hw8ePRLJop7kSEZK3_NL_EA_EA EA - AIs accelerating AI research by Ajeya Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AIs accelerating AI research, published by Ajeya on April 12, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.Researchers could potentially design the next generation of ML models more quickly by delegating some work to existing models, creating a feedback loop of ever-accelerating progress.The concept of an “intelligence explosion” has played an important role in discourse about advanced AI for decades. Early computer scientist I.J. Good described it like this in 1965:Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.This presentation, like most other popular presentations of the intelligence explosion concept, focuses on what happens after we have a single AI system that can already do better at every task than any human (which Good calls an “ultraintelligent machine” above, and others have called “an artificial superintelligence”). It calls to mind an image of AI progress with two phases:In Phase 1, humans are doing all the AI research, and progress ramps up steadily. We can more or less predict the rate of future progress (i.e. how quickly AI systems will improve their capabilities) by extrapolating from past rates of progress.[1]Eventually humans succeed at building an artificial superintelligence (or ASI), leading to Phase 2. In Phase 2, this ASI is doing all of the AI research by itself. All of a sudden, progress in AI capabilities is no longer bottlenecked by slow human researchers, and an intelligence explosion is kicked off. The rate of progress in AI research goes up sharply — perhaps years of progress is compressed into days or weeks.But I think this picture is probably too all-or-nothing. Today’s large language models (LLMs) like GPT-4 are not (yet) capable of completely taking over AI research by themselves — but they are able to write code, come up with ideas for ML experiments, and help troubleshoot bugs and other issues. Anecdotally, several ML researchers I know are starting to delegate simple tasks that come up in their research to these LLMs, and they say that makes them meaningfully more productive. (When chatGPT went down for 6 hours, I know of one ML researcher who postponed their coding tasks for 6 hours and worked on other things in the meantime.[2])If this holds true more broadly, researchers could potentially design and train the next generation of ML models more quickly and easily by delegating to existing LLMs.[3] This calls to mind a more continuous “intelligence explosion” that begins before we have any single artificial superintelligence:Currently, human researchers collectively are responsible for almost all of the progress in AI research, but are starting to delegate a small fraction of the work to large language models. This makes it somewhat easier to design and train the next generation of models.The next generation is able to handle harder tasks and more different types of tasks, so human researchers delegate more of their work to them. This makes it significantly easier to train the generation after that. Using models gives a much bigger boost than it did the last time around.Each round of this process makes the whole field move faster and faster. In each round, human...]]>
Ajeya https://forum.effectivealtruism.org/posts/hw8ePRLJop7kSEZK3/ais-accelerating-ai-research Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AIs accelerating AI research, published by Ajeya on April 12, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.Researchers could potentially design the next generation of ML models more quickly by delegating some work to existing models, creating a feedback loop of ever-accelerating progress.The concept of an “intelligence explosion” has played an important role in discourse about advanced AI for decades. Early computer scientist I.J. Good described it like this in 1965:Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.This presentation, like most other popular presentations of the intelligence explosion concept, focuses on what happens after we have a single AI system that can already do better at every task than any human (which Good calls an “ultraintelligent machine” above, and others have called “an artificial superintelligence”). It calls to mind an image of AI progress with two phases:In Phase 1, humans are doing all the AI research, and progress ramps up steadily. We can more or less predict the rate of future progress (i.e. how quickly AI systems will improve their capabilities) by extrapolating from past rates of progress.[1]Eventually humans succeed at building an artificial superintelligence (or ASI), leading to Phase 2. In Phase 2, this ASI is doing all of the AI research by itself. All of a sudden, progress in AI capabilities is no longer bottlenecked by slow human researchers, and an intelligence explosion is kicked off. The rate of progress in AI research goes up sharply — perhaps years of progress is compressed into days or weeks.But I think this picture is probably too all-or-nothing. Today’s large language models (LLMs) like GPT-4 are not (yet) capable of completely taking over AI research by themselves — but they are able to write code, come up with ideas for ML experiments, and help troubleshoot bugs and other issues. Anecdotally, several ML researchers I know are starting to delegate simple tasks that come up in their research to these LLMs, and they say that makes them meaningfully more productive. (When chatGPT went down for 6 hours, I know of one ML researcher who postponed their coding tasks for 6 hours and worked on other things in the meantime.[2])If this holds true more broadly, researchers could potentially design and train the next generation of ML models more quickly and easily by delegating to existing LLMs.[3] This calls to mind a more continuous “intelligence explosion” that begins before we have any single artificial superintelligence:Currently, human researchers collectively are responsible for almost all of the progress in AI research, but are starting to delegate a small fraction of the work to large language models. This makes it somewhat easier to design and train the next generation of models.The next generation is able to handle harder tasks and more different types of tasks, so human researchers delegate more of their work to them. This makes it significantly easier to train the generation after that. Using models gives a much bigger boost than it did the last time around.Each round of this process makes the whole field move faster and faster. In each round, human...]]>
Wed, 12 Apr 2023 18:12:37 +0000 EA - AIs accelerating AI research by Ajeya Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AIs accelerating AI research, published by Ajeya on April 12, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.Researchers could potentially design the next generation of ML models more quickly by delegating some work to existing models, creating a feedback loop of ever-accelerating progress.The concept of an “intelligence explosion” has played an important role in discourse about advanced AI for decades. Early computer scientist I.J. Good described it like this in 1965:Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.This presentation, like most other popular presentations of the intelligence explosion concept, focuses on what happens after we have a single AI system that can already do better at every task than any human (which Good calls an “ultraintelligent machine” above, and others have called “an artificial superintelligence”). It calls to mind an image of AI progress with two phases:In Phase 1, humans are doing all the AI research, and progress ramps up steadily. We can more or less predict the rate of future progress (i.e. how quickly AI systems will improve their capabilities) by extrapolating from past rates of progress.[1]Eventually humans succeed at building an artificial superintelligence (or ASI), leading to Phase 2. In Phase 2, this ASI is doing all of the AI research by itself. All of a sudden, progress in AI capabilities is no longer bottlenecked by slow human researchers, and an intelligence explosion is kicked off. The rate of progress in AI research goes up sharply — perhaps years of progress is compressed into days or weeks.But I think this picture is probably too all-or-nothing. Today’s large language models (LLMs) like GPT-4 are not (yet) capable of completely taking over AI research by themselves — but they are able to write code, come up with ideas for ML experiments, and help troubleshoot bugs and other issues. Anecdotally, several ML researchers I know are starting to delegate simple tasks that come up in their research to these LLMs, and they say that makes them meaningfully more productive. (When chatGPT went down for 6 hours, I know of one ML researcher who postponed their coding tasks for 6 hours and worked on other things in the meantime.[2])If this holds true more broadly, researchers could potentially design and train the next generation of ML models more quickly and easily by delegating to existing LLMs.[3] This calls to mind a more continuous “intelligence explosion” that begins before we have any single artificial superintelligence:Currently, human researchers collectively are responsible for almost all of the progress in AI research, but are starting to delegate a small fraction of the work to large language models. This makes it somewhat easier to design and train the next generation of models.The next generation is able to handle harder tasks and more different types of tasks, so human researchers delegate more of their work to them. This makes it significantly easier to train the generation after that. Using models gives a much bigger boost than it did the last time around.Each round of this process makes the whole field move faster and faster. In each round, human...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AIs accelerating AI research, published by Ajeya on April 12, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.Researchers could potentially design the next generation of ML models more quickly by delegating some work to existing models, creating a feedback loop of ever-accelerating progress.The concept of an “intelligence explosion” has played an important role in discourse about advanced AI for decades. Early computer scientist I.J. Good described it like this in 1965:Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.This presentation, like most other popular presentations of the intelligence explosion concept, focuses on what happens after we have a single AI system that can already do better at every task than any human (which Good calls an “ultraintelligent machine” above, and others have called “an artificial superintelligence”). It calls to mind an image of AI progress with two phases:In Phase 1, humans are doing all the AI research, and progress ramps up steadily. We can more or less predict the rate of future progress (i.e. how quickly AI systems will improve their capabilities) by extrapolating from past rates of progress.[1]Eventually humans succeed at building an artificial superintelligence (or ASI), leading to Phase 2. In Phase 2, this ASI is doing all of the AI research by itself. All of a sudden, progress in AI capabilities is no longer bottlenecked by slow human researchers, and an intelligence explosion is kicked off. The rate of progress in AI research goes up sharply — perhaps years of progress is compressed into days or weeks.But I think this picture is probably too all-or-nothing. Today’s large language models (LLMs) like GPT-4 are not (yet) capable of completely taking over AI research by themselves — but they are able to write code, come up with ideas for ML experiments, and help troubleshoot bugs and other issues. Anecdotally, several ML researchers I know are starting to delegate simple tasks that come up in their research to these LLMs, and they say that makes them meaningfully more productive. (When chatGPT went down for 6 hours, I know of one ML researcher who postponed their coding tasks for 6 hours and worked on other things in the meantime.[2])If this holds true more broadly, researchers could potentially design and train the next generation of ML models more quickly and easily by delegating to existing LLMs.[3] This calls to mind a more continuous “intelligence explosion” that begins before we have any single artificial superintelligence:Currently, human researchers collectively are responsible for almost all of the progress in AI research, but are starting to delegate a small fraction of the work to large language models. This makes it somewhat easier to design and train the next generation of models.The next generation is able to handle harder tasks and more different types of tasks, so human researchers delegate more of their work to them. This makes it significantly easier to train the generation after that. Using models gives a much bigger boost than it did the last time around.Each round of this process makes the whole field move faster and faster. In each round, human...]]>
Ajeya https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:49 None full 5572
2w7fKv5EzZfrqm5Tg_NL_EA_EA EA - Data Taxation: A Proposal for Slowing Down AGI Progress by Per Ivar Friborg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Data Taxation: A Proposal for Slowing Down AGI Progress, published by Per Ivar Friborg on April 11, 2023 on The Effective Altruism Forum.Co-authored by Sammet, Joshua P. S. and Wale, William.ForewordThis report was written for the Policies for slowing down progress towards artificial general intelligence (AGI) case of the AI governance hackathon organized by Apart Research.We are keen on receiving your thoughts and feedback on this proposal. We are considering publishing this on a more public platform such as arXiv, and therefore would love for you to point out potential issues and shortcomings with our proposal that we should further address in our article. If you think there are parts that need to be flushed out more to be understandable for the reader or any other things we should include to round it up, we are more than happy to hear your comments.IntroductionIn this paper, we propose a tax that affects the training of any model of sufficient size, and a concrete formula for implementing it. We explain why we think our framework is robust, future-proof and applicable in practice. We also give some concrete examples of how the formula would apply to current models, and demonstrate that it would heavily disincentivize work on ML models that could develop AGI-like capabilities, but not other useful narrow AI work that does not pose existential risks.Currently, the most promising path towards AGI involves increasingly big networks with billions of parameters trained with huge amounts of text data. The most famous example being GPT-3, whose 175 billion parameters were trained on over 45 TB of text data. The size of this data is what sets apart LLMs from both more narrow AI models developed before and classical high-performance computing. Most likely, any development of general or even humanoid AI will require large swathes of data, as the human body gathers 11 million bits per second (around 120 GB per day) to train its approx. 100 billion neurons. Therefore, tackling the data usage of these models could be a promising approach to slowing down the progress of the development of new and more capable general AIs, without harming the development of models that pose no AGI risk.The proposal aims to slow down the progress towards AGI and mitigate the associated existential risks. The funds collected through data taxation can be used to support broader societal goals, such as redistribution of wealth and investments in AI safety research. We also discuss how the proposal plays well with other current criticisms of the relationship between AI and copyright, and persons and their personal data, and how it could consequently levy those social currents for more widespread support.The Data Taxation MechanismA challenge in devising an effective data tax formula lies in differentially disincentivizing the development of models that could lead to AGI without hindering the progress of other useful and narrow ML technologies. A simplistic approach, such as imposing a flat fee per byte of training data used could inadvertently discourage beneficial research that poses no AGI risk.For instance, a study aimed at identifying the most common words in the English language across the entire internet would become prohibitively expensive under the naïve flat fee proposal, despite posing zero danger in terms of AGI development. Similarly, ML applications in medical imaging or genomics, which often rely on vast datasets, would also be adversely affected, even though they do not contribute to AGI risk.To overcome the challenge of differentially disincentivizing AGI development without impeding progress in other beneficial narrow AI applications, we propose a data tax formula that incorporates not only the amount of training data used but also the number of parameters being updated du...]]>
Per Ivar Friborg https://forum.effectivealtruism.org/posts/2w7fKv5EzZfrqm5Tg/data-taxation-a-proposal-for-slowing-down-agi-progress Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Data Taxation: A Proposal for Slowing Down AGI Progress, published by Per Ivar Friborg on April 11, 2023 on The Effective Altruism Forum.Co-authored by Sammet, Joshua P. S. and Wale, William.ForewordThis report was written for the Policies for slowing down progress towards artificial general intelligence (AGI) case of the AI governance hackathon organized by Apart Research.We are keen on receiving your thoughts and feedback on this proposal. We are considering publishing this on a more public platform such as arXiv, and therefore would love for you to point out potential issues and shortcomings with our proposal that we should further address in our article. If you think there are parts that need to be flushed out more to be understandable for the reader or any other things we should include to round it up, we are more than happy to hear your comments.IntroductionIn this paper, we propose a tax that affects the training of any model of sufficient size, and a concrete formula for implementing it. We explain why we think our framework is robust, future-proof and applicable in practice. We also give some concrete examples of how the formula would apply to current models, and demonstrate that it would heavily disincentivize work on ML models that could develop AGI-like capabilities, but not other useful narrow AI work that does not pose existential risks.Currently, the most promising path towards AGI involves increasingly big networks with billions of parameters trained with huge amounts of text data. The most famous example being GPT-3, whose 175 billion parameters were trained on over 45 TB of text data. The size of this data is what sets apart LLMs from both more narrow AI models developed before and classical high-performance computing. Most likely, any development of general or even humanoid AI will require large swathes of data, as the human body gathers 11 million bits per second (around 120 GB per day) to train its approx. 100 billion neurons. Therefore, tackling the data usage of these models could be a promising approach to slowing down the progress of the development of new and more capable general AIs, without harming the development of models that pose no AGI risk.The proposal aims to slow down the progress towards AGI and mitigate the associated existential risks. The funds collected through data taxation can be used to support broader societal goals, such as redistribution of wealth and investments in AI safety research. We also discuss how the proposal plays well with other current criticisms of the relationship between AI and copyright, and persons and their personal data, and how it could consequently levy those social currents for more widespread support.The Data Taxation MechanismA challenge in devising an effective data tax formula lies in differentially disincentivizing the development of models that could lead to AGI without hindering the progress of other useful and narrow ML technologies. A simplistic approach, such as imposing a flat fee per byte of training data used could inadvertently discourage beneficial research that poses no AGI risk.For instance, a study aimed at identifying the most common words in the English language across the entire internet would become prohibitively expensive under the naïve flat fee proposal, despite posing zero danger in terms of AGI development. Similarly, ML applications in medical imaging or genomics, which often rely on vast datasets, would also be adversely affected, even though they do not contribute to AGI risk.To overcome the challenge of differentially disincentivizing AGI development without impeding progress in other beneficial narrow AI applications, we propose a data tax formula that incorporates not only the amount of training data used but also the number of parameters being updated du...]]>
Wed, 12 Apr 2023 16:56:38 +0000 EA - Data Taxation: A Proposal for Slowing Down AGI Progress by Per Ivar Friborg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Data Taxation: A Proposal for Slowing Down AGI Progress, published by Per Ivar Friborg on April 11, 2023 on The Effective Altruism Forum.Co-authored by Sammet, Joshua P. S. and Wale, William.ForewordThis report was written for the Policies for slowing down progress towards artificial general intelligence (AGI) case of the AI governance hackathon organized by Apart Research.We are keen on receiving your thoughts and feedback on this proposal. We are considering publishing this on a more public platform such as arXiv, and therefore would love for you to point out potential issues and shortcomings with our proposal that we should further address in our article. If you think there are parts that need to be flushed out more to be understandable for the reader or any other things we should include to round it up, we are more than happy to hear your comments.IntroductionIn this paper, we propose a tax that affects the training of any model of sufficient size, and a concrete formula for implementing it. We explain why we think our framework is robust, future-proof and applicable in practice. We also give some concrete examples of how the formula would apply to current models, and demonstrate that it would heavily disincentivize work on ML models that could develop AGI-like capabilities, but not other useful narrow AI work that does not pose existential risks.Currently, the most promising path towards AGI involves increasingly big networks with billions of parameters trained with huge amounts of text data. The most famous example being GPT-3, whose 175 billion parameters were trained on over 45 TB of text data. The size of this data is what sets apart LLMs from both more narrow AI models developed before and classical high-performance computing. Most likely, any development of general or even humanoid AI will require large swathes of data, as the human body gathers 11 million bits per second (around 120 GB per day) to train its approx. 100 billion neurons. Therefore, tackling the data usage of these models could be a promising approach to slowing down the progress of the development of new and more capable general AIs, without harming the development of models that pose no AGI risk.The proposal aims to slow down the progress towards AGI and mitigate the associated existential risks. The funds collected through data taxation can be used to support broader societal goals, such as redistribution of wealth and investments in AI safety research. We also discuss how the proposal plays well with other current criticisms of the relationship between AI and copyright, and persons and their personal data, and how it could consequently levy those social currents for more widespread support.The Data Taxation MechanismA challenge in devising an effective data tax formula lies in differentially disincentivizing the development of models that could lead to AGI without hindering the progress of other useful and narrow ML technologies. A simplistic approach, such as imposing a flat fee per byte of training data used could inadvertently discourage beneficial research that poses no AGI risk.For instance, a study aimed at identifying the most common words in the English language across the entire internet would become prohibitively expensive under the naïve flat fee proposal, despite posing zero danger in terms of AGI development. Similarly, ML applications in medical imaging or genomics, which often rely on vast datasets, would also be adversely affected, even though they do not contribute to AGI risk.To overcome the challenge of differentially disincentivizing AGI development without impeding progress in other beneficial narrow AI applications, we propose a data tax formula that incorporates not only the amount of training data used but also the number of parameters being updated du...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Data Taxation: A Proposal for Slowing Down AGI Progress, published by Per Ivar Friborg on April 11, 2023 on The Effective Altruism Forum.Co-authored by Sammet, Joshua P. S. and Wale, William.ForewordThis report was written for the Policies for slowing down progress towards artificial general intelligence (AGI) case of the AI governance hackathon organized by Apart Research.We are keen on receiving your thoughts and feedback on this proposal. We are considering publishing this on a more public platform such as arXiv, and therefore would love for you to point out potential issues and shortcomings with our proposal that we should further address in our article. If you think there are parts that need to be flushed out more to be understandable for the reader or any other things we should include to round it up, we are more than happy to hear your comments.IntroductionIn this paper, we propose a tax that affects the training of any model of sufficient size, and a concrete formula for implementing it. We explain why we think our framework is robust, future-proof and applicable in practice. We also give some concrete examples of how the formula would apply to current models, and demonstrate that it would heavily disincentivize work on ML models that could develop AGI-like capabilities, but not other useful narrow AI work that does not pose existential risks.Currently, the most promising path towards AGI involves increasingly big networks with billions of parameters trained with huge amounts of text data. The most famous example being GPT-3, whose 175 billion parameters were trained on over 45 TB of text data. The size of this data is what sets apart LLMs from both more narrow AI models developed before and classical high-performance computing. Most likely, any development of general or even humanoid AI will require large swathes of data, as the human body gathers 11 million bits per second (around 120 GB per day) to train its approx. 100 billion neurons. Therefore, tackling the data usage of these models could be a promising approach to slowing down the progress of the development of new and more capable general AIs, without harming the development of models that pose no AGI risk.The proposal aims to slow down the progress towards AGI and mitigate the associated existential risks. The funds collected through data taxation can be used to support broader societal goals, such as redistribution of wealth and investments in AI safety research. We also discuss how the proposal plays well with other current criticisms of the relationship between AI and copyright, and persons and their personal data, and how it could consequently levy those social currents for more widespread support.The Data Taxation MechanismA challenge in devising an effective data tax formula lies in differentially disincentivizing the development of models that could lead to AGI without hindering the progress of other useful and narrow ML technologies. A simplistic approach, such as imposing a flat fee per byte of training data used could inadvertently discourage beneficial research that poses no AGI risk.For instance, a study aimed at identifying the most common words in the English language across the entire internet would become prohibitively expensive under the naïve flat fee proposal, despite posing zero danger in terms of AGI development. Similarly, ML applications in medical imaging or genomics, which often rely on vast datasets, would also be adversely affected, even though they do not contribute to AGI risk.To overcome the challenge of differentially disincentivizing AGI development without impeding progress in other beneficial narrow AI applications, we propose a data tax formula that incorporates not only the amount of training data used but also the number of parameters being updated du...]]>
Per Ivar Friborg https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 22:30 None full 5574
zR5cHEKSv8gHgZzN8_NL_EA_EA EA - Legal Impact for Chickens hiring Experienced Litigator by alene Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Impact for Chickens hiring Experienced Litigator, published by alene on April 12, 2023 on The Effective Altruism Forum.Hi Everyone!!I wanted to let you know that Legal Impact for Chickens is seeking an experienced litigator!Please share widely. <3Thank you so much!!Legal Impact for Chickens seeks anEXPERIENCED LITIGATOR.We’re looking for the next attorney to help build our nonprofit and fight for animals.Want to join our team on the ground floor?About us:Legal Impact for Chickens (LIC) is a 501(c)(3) litigation nonprofit. We work to protect farmed animals.LIC’s first case, a shareholder derivative suit against Costco executives for chicken neglect, appeared in The Washington Post, Fox Business, CNN Business, and beyond.Now, we’re looking for our next hire—an entrepreneurial litigator to help fight for animals!About you:3+ years of litigation experienceJD from an accredited law schoolLicensed and in good standing with a state barExcellent analytical, writing, and verbal-communication skillsZealous, creative, enthusiastic litigatorPassion for helping farmed animalsInterest in entering a startup nonprofit on the ground floor, and helping to build somethingWillingness to do all types of nonprofit startup work, beyond just litigationStrong work ethic and initiativeKind to our fellow humans, and excited about creating a welcoming, inclusive teamAbout the role:You will be an integral part of LIC. You’ll help shape our organization’s future.Your role will be a combination of (1) designing and pursuing creative impact litigation for animals, and (2) helping with everything else we need to do, to run this new nonprofit!Since this is such a small organization, you’ll wear many hats: Sometimes you may wear a law-firm partner’s hat, making litigation strategy decisions or covering a hearing on your own. Sometimes you’ll wear an associate’s hat, analyzing complex and novel legal issues. Sometimes you’ll pitch in on administrative tasks, making sure a brief gets filed properly or formatting a table of authorities. Sometimes you’ll wear a start-up founder’s hat, helping plan the number of employees we need, or representing LIC at conferences. We can only promise it won’t be dull!This job offers tremendous opportunity for advancement, in the form of helping to lead LIC as we grow. The hope is for you to become an indispensable, long-time member of our new team.Commitment: Full timeLocation and travel: This is a remote, U.S.-based position. You must be available to travel for work as needed, since we will litigate all over the country. (Disabilities will be accommodated!)Reports to: Alene Anello, LIC’s presidentSalary: $82,000–$100,000 depending on experienceBenefits: Health insurance, 401(k), flexible scheduleOne more thing!LIC is an equal opportunity employer. Women and people of color are strongly encouraged to apply. Applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability, age, or veteran status.To Apply:To apply, please email your cover letter, resume, writing sample, and three references, all combined as one PDF, to info@legalimpactforchickens.org.Thank you for your time and your compassion!Sincerely,Legal Impact for ChickensThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
alene https://forum.effectivealtruism.org/posts/zR5cHEKSv8gHgZzN8/legal-impact-for-chickens-hiring-experienced-litigator Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Impact for Chickens hiring Experienced Litigator, published by alene on April 12, 2023 on The Effective Altruism Forum.Hi Everyone!!I wanted to let you know that Legal Impact for Chickens is seeking an experienced litigator!Please share widely. <3Thank you so much!!Legal Impact for Chickens seeks anEXPERIENCED LITIGATOR.We’re looking for the next attorney to help build our nonprofit and fight for animals.Want to join our team on the ground floor?About us:Legal Impact for Chickens (LIC) is a 501(c)(3) litigation nonprofit. We work to protect farmed animals.LIC’s first case, a shareholder derivative suit against Costco executives for chicken neglect, appeared in The Washington Post, Fox Business, CNN Business, and beyond.Now, we’re looking for our next hire—an entrepreneurial litigator to help fight for animals!About you:3+ years of litigation experienceJD from an accredited law schoolLicensed and in good standing with a state barExcellent analytical, writing, and verbal-communication skillsZealous, creative, enthusiastic litigatorPassion for helping farmed animalsInterest in entering a startup nonprofit on the ground floor, and helping to build somethingWillingness to do all types of nonprofit startup work, beyond just litigationStrong work ethic and initiativeKind to our fellow humans, and excited about creating a welcoming, inclusive teamAbout the role:You will be an integral part of LIC. You’ll help shape our organization’s future.Your role will be a combination of (1) designing and pursuing creative impact litigation for animals, and (2) helping with everything else we need to do, to run this new nonprofit!Since this is such a small organization, you’ll wear many hats: Sometimes you may wear a law-firm partner’s hat, making litigation strategy decisions or covering a hearing on your own. Sometimes you’ll wear an associate’s hat, analyzing complex and novel legal issues. Sometimes you’ll pitch in on administrative tasks, making sure a brief gets filed properly or formatting a table of authorities. Sometimes you’ll wear a start-up founder’s hat, helping plan the number of employees we need, or representing LIC at conferences. We can only promise it won’t be dull!This job offers tremendous opportunity for advancement, in the form of helping to lead LIC as we grow. The hope is for you to become an indispensable, long-time member of our new team.Commitment: Full timeLocation and travel: This is a remote, U.S.-based position. You must be available to travel for work as needed, since we will litigate all over the country. (Disabilities will be accommodated!)Reports to: Alene Anello, LIC’s presidentSalary: $82,000–$100,000 depending on experienceBenefits: Health insurance, 401(k), flexible scheduleOne more thing!LIC is an equal opportunity employer. Women and people of color are strongly encouraged to apply. Applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability, age, or veteran status.To Apply:To apply, please email your cover letter, resume, writing sample, and three references, all combined as one PDF, to info@legalimpactforchickens.org.Thank you for your time and your compassion!Sincerely,Legal Impact for ChickensThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 12 Apr 2023 15:53:08 +0000 EA - Legal Impact for Chickens hiring Experienced Litigator by alene Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Impact for Chickens hiring Experienced Litigator, published by alene on April 12, 2023 on The Effective Altruism Forum.Hi Everyone!!I wanted to let you know that Legal Impact for Chickens is seeking an experienced litigator!Please share widely. <3Thank you so much!!Legal Impact for Chickens seeks anEXPERIENCED LITIGATOR.We’re looking for the next attorney to help build our nonprofit and fight for animals.Want to join our team on the ground floor?About us:Legal Impact for Chickens (LIC) is a 501(c)(3) litigation nonprofit. We work to protect farmed animals.LIC’s first case, a shareholder derivative suit against Costco executives for chicken neglect, appeared in The Washington Post, Fox Business, CNN Business, and beyond.Now, we’re looking for our next hire—an entrepreneurial litigator to help fight for animals!About you:3+ years of litigation experienceJD from an accredited law schoolLicensed and in good standing with a state barExcellent analytical, writing, and verbal-communication skillsZealous, creative, enthusiastic litigatorPassion for helping farmed animalsInterest in entering a startup nonprofit on the ground floor, and helping to build somethingWillingness to do all types of nonprofit startup work, beyond just litigationStrong work ethic and initiativeKind to our fellow humans, and excited about creating a welcoming, inclusive teamAbout the role:You will be an integral part of LIC. You’ll help shape our organization’s future.Your role will be a combination of (1) designing and pursuing creative impact litigation for animals, and (2) helping with everything else we need to do, to run this new nonprofit!Since this is such a small organization, you’ll wear many hats: Sometimes you may wear a law-firm partner’s hat, making litigation strategy decisions or covering a hearing on your own. Sometimes you’ll wear an associate’s hat, analyzing complex and novel legal issues. Sometimes you’ll pitch in on administrative tasks, making sure a brief gets filed properly or formatting a table of authorities. Sometimes you’ll wear a start-up founder’s hat, helping plan the number of employees we need, or representing LIC at conferences. We can only promise it won’t be dull!This job offers tremendous opportunity for advancement, in the form of helping to lead LIC as we grow. The hope is for you to become an indispensable, long-time member of our new team.Commitment: Full timeLocation and travel: This is a remote, U.S.-based position. You must be available to travel for work as needed, since we will litigate all over the country. (Disabilities will be accommodated!)Reports to: Alene Anello, LIC’s presidentSalary: $82,000–$100,000 depending on experienceBenefits: Health insurance, 401(k), flexible scheduleOne more thing!LIC is an equal opportunity employer. Women and people of color are strongly encouraged to apply. Applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability, age, or veteran status.To Apply:To apply, please email your cover letter, resume, writing sample, and three references, all combined as one PDF, to info@legalimpactforchickens.org.Thank you for your time and your compassion!Sincerely,Legal Impact for ChickensThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Impact for Chickens hiring Experienced Litigator, published by alene on April 12, 2023 on The Effective Altruism Forum.Hi Everyone!!I wanted to let you know that Legal Impact for Chickens is seeking an experienced litigator!Please share widely. <3Thank you so much!!Legal Impact for Chickens seeks anEXPERIENCED LITIGATOR.We’re looking for the next attorney to help build our nonprofit and fight for animals.Want to join our team on the ground floor?About us:Legal Impact for Chickens (LIC) is a 501(c)(3) litigation nonprofit. We work to protect farmed animals.LIC’s first case, a shareholder derivative suit against Costco executives for chicken neglect, appeared in The Washington Post, Fox Business, CNN Business, and beyond.Now, we’re looking for our next hire—an entrepreneurial litigator to help fight for animals!About you:3+ years of litigation experienceJD from an accredited law schoolLicensed and in good standing with a state barExcellent analytical, writing, and verbal-communication skillsZealous, creative, enthusiastic litigatorPassion for helping farmed animalsInterest in entering a startup nonprofit on the ground floor, and helping to build somethingWillingness to do all types of nonprofit startup work, beyond just litigationStrong work ethic and initiativeKind to our fellow humans, and excited about creating a welcoming, inclusive teamAbout the role:You will be an integral part of LIC. You’ll help shape our organization’s future.Your role will be a combination of (1) designing and pursuing creative impact litigation for animals, and (2) helping with everything else we need to do, to run this new nonprofit!Since this is such a small organization, you’ll wear many hats: Sometimes you may wear a law-firm partner’s hat, making litigation strategy decisions or covering a hearing on your own. Sometimes you’ll wear an associate’s hat, analyzing complex and novel legal issues. Sometimes you’ll pitch in on administrative tasks, making sure a brief gets filed properly or formatting a table of authorities. Sometimes you’ll wear a start-up founder’s hat, helping plan the number of employees we need, or representing LIC at conferences. We can only promise it won’t be dull!This job offers tremendous opportunity for advancement, in the form of helping to lead LIC as we grow. The hope is for you to become an indispensable, long-time member of our new team.Commitment: Full timeLocation and travel: This is a remote, U.S.-based position. You must be available to travel for work as needed, since we will litigate all over the country. (Disabilities will be accommodated!)Reports to: Alene Anello, LIC’s presidentSalary: $82,000–$100,000 depending on experienceBenefits: Health insurance, 401(k), flexible scheduleOne more thing!LIC is an equal opportunity employer. Women and people of color are strongly encouraged to apply. Applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability, age, or veteran status.To Apply:To apply, please email your cover letter, resume, writing sample, and three references, all combined as one PDF, to info@legalimpactforchickens.org.Thank you for your time and your compassion!Sincerely,Legal Impact for ChickensThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
alene https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:37 None full 5573
zC5CNAv8dCMyhtxW2_NL_EA_EA EA - Trans Rescue’s operations in Uganda: high impact giving opportunity by David D Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Trans Rescue’s operations in Uganda: high impact giving opportunity, published by David D on April 11, 2023 on The Effective Altruism Forum.Synopsis: Trans Rescue uses their existing experience and infrastructure for moving African trans people to safety to help LGBTQ+ people of all sorts escape genocide in Uganda, costing an estimated €150 to move a person to safety outside the country and €1257 in housing costs and other support to help them become financially self sufficient in their new location.You might have heard about Uganda’s new laws and crackdown targeting LGBTQ+ people, which began in March. “Homosexual activity” has been illegal in Uganda for a long time, but under this bill, people who even say that they are LGBT+ or “promote homosexuality” (such as advocating for LGBTQ+ people’s rights, or writing a positive or neutral article about homosexuality) face criminal charges and imprisonment. Renting living space to a gay person or conducting a same-sex marriage ceremony would also be criminalized with prison sentences of up to 10 years.The president of Uganda has not yet signed the bill, but the homophobic fervor around it is already wreaking havoc in LGBTQ+ Ugandans’ lives. Many of Trans Rescue’s passengers were evicted by their landlords without any warning or opportunity to retrieve their things. In one particularly violent example, a landlord was convinced by a local preacher that his two trans tenants were evil and set fire to their belongings while they weren’t home, burning down his building in the process. Existing shelters for LGBTQ+ people also face eviction. Violence and sexual assault is becoming more frequent (cw violence, rape, police brutality: source).International NGOs have been slow to respond. In an article on Trans Writes, Trans Rescue treasurer Jenny List writes that their passengers haven’t seen any other international organizations working to protect or evacuate LGBTQ+ Ugandans, though some organizations say they have plans in the works.“Of course, we’ve asked around to find out what’s being done by those organisations, and the answer has come back from several quarters that things are in motion, but under the radar. We’re told that too public a move might cause them to be accused by the Ugandan government of being colonialist, and we understand that. We’re happy to hear that so much is being done, we really are.”“Unfortunately, the fact remains that the people on the ground aren’t seeing it. Things they can’t see are of little use to them, when what they need is to escape an angry mob or a police manhunt.”Trans Rescue was unusually well positioned to help. They’ve been helping trans people escape danger, especially in Africa and the Middle East, since 2021, and several of their board members did similar work in the organization’s previous incarnation as Trans Emigrate. In addition to their experience planning travel for people who face extra scrutiny due to their country of origin, they operate a trans safe haven in neighboring Kenya called Eden House. In light of the danger that queer Ugandans of all sorts are facing right now, they are providing transportation and shelter for LGBTQ+ people of all sorts fleeing Uganda.Effective Altruists often avoid donating to acute crisis that make the news. Newsworthy events are often relatively less underfunded, especially when they occur in the western world. The difficult logistics of providing aid for events like natural disasters can also be an obstacle. Uganda’s proposed LGBTQ+ bill has received some media coverage, but the people who have the ability to donate and would consider the people affected “one of us” - LGBTQ+ people in wealthier nations - are focused on the rise in transphobia in the US and UK right now. Trans Rescue already has a presence in the region in Eden House, the...]]>
David D https://forum.effectivealtruism.org/posts/zC5CNAv8dCMyhtxW2/trans-rescue-s-operations-in-uganda-high-impact-giving Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Trans Rescue’s operations in Uganda: high impact giving opportunity, published by David D on April 11, 2023 on The Effective Altruism Forum.Synopsis: Trans Rescue uses their existing experience and infrastructure for moving African trans people to safety to help LGBTQ+ people of all sorts escape genocide in Uganda, costing an estimated €150 to move a person to safety outside the country and €1257 in housing costs and other support to help them become financially self sufficient in their new location.You might have heard about Uganda’s new laws and crackdown targeting LGBTQ+ people, which began in March. “Homosexual activity” has been illegal in Uganda for a long time, but under this bill, people who even say that they are LGBT+ or “promote homosexuality” (such as advocating for LGBTQ+ people’s rights, or writing a positive or neutral article about homosexuality) face criminal charges and imprisonment. Renting living space to a gay person or conducting a same-sex marriage ceremony would also be criminalized with prison sentences of up to 10 years.The president of Uganda has not yet signed the bill, but the homophobic fervor around it is already wreaking havoc in LGBTQ+ Ugandans’ lives. Many of Trans Rescue’s passengers were evicted by their landlords without any warning or opportunity to retrieve their things. In one particularly violent example, a landlord was convinced by a local preacher that his two trans tenants were evil and set fire to their belongings while they weren’t home, burning down his building in the process. Existing shelters for LGBTQ+ people also face eviction. Violence and sexual assault is becoming more frequent (cw violence, rape, police brutality: source).International NGOs have been slow to respond. In an article on Trans Writes, Trans Rescue treasurer Jenny List writes that their passengers haven’t seen any other international organizations working to protect or evacuate LGBTQ+ Ugandans, though some organizations say they have plans in the works.“Of course, we’ve asked around to find out what’s being done by those organisations, and the answer has come back from several quarters that things are in motion, but under the radar. We’re told that too public a move might cause them to be accused by the Ugandan government of being colonialist, and we understand that. We’re happy to hear that so much is being done, we really are.”“Unfortunately, the fact remains that the people on the ground aren’t seeing it. Things they can’t see are of little use to them, when what they need is to escape an angry mob or a police manhunt.”Trans Rescue was unusually well positioned to help. They’ve been helping trans people escape danger, especially in Africa and the Middle East, since 2021, and several of their board members did similar work in the organization’s previous incarnation as Trans Emigrate. In addition to their experience planning travel for people who face extra scrutiny due to their country of origin, they operate a trans safe haven in neighboring Kenya called Eden House. In light of the danger that queer Ugandans of all sorts are facing right now, they are providing transportation and shelter for LGBTQ+ people of all sorts fleeing Uganda.Effective Altruists often avoid donating to acute crisis that make the news. Newsworthy events are often relatively less underfunded, especially when they occur in the western world. The difficult logistics of providing aid for events like natural disasters can also be an obstacle. Uganda’s proposed LGBTQ+ bill has received some media coverage, but the people who have the ability to donate and would consider the people affected “one of us” - LGBTQ+ people in wealthier nations - are focused on the rise in transphobia in the US and UK right now. Trans Rescue already has a presence in the region in Eden House, the...]]>
Wed, 12 Apr 2023 09:33:50 +0000 EA - Trans Rescue’s operations in Uganda: high impact giving opportunity by David D Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Trans Rescue’s operations in Uganda: high impact giving opportunity, published by David D on April 11, 2023 on The Effective Altruism Forum.Synopsis: Trans Rescue uses their existing experience and infrastructure for moving African trans people to safety to help LGBTQ+ people of all sorts escape genocide in Uganda, costing an estimated €150 to move a person to safety outside the country and €1257 in housing costs and other support to help them become financially self sufficient in their new location.You might have heard about Uganda’s new laws and crackdown targeting LGBTQ+ people, which began in March. “Homosexual activity” has been illegal in Uganda for a long time, but under this bill, people who even say that they are LGBT+ or “promote homosexuality” (such as advocating for LGBTQ+ people’s rights, or writing a positive or neutral article about homosexuality) face criminal charges and imprisonment. Renting living space to a gay person or conducting a same-sex marriage ceremony would also be criminalized with prison sentences of up to 10 years.The president of Uganda has not yet signed the bill, but the homophobic fervor around it is already wreaking havoc in LGBTQ+ Ugandans’ lives. Many of Trans Rescue’s passengers were evicted by their landlords without any warning or opportunity to retrieve their things. In one particularly violent example, a landlord was convinced by a local preacher that his two trans tenants were evil and set fire to their belongings while they weren’t home, burning down his building in the process. Existing shelters for LGBTQ+ people also face eviction. Violence and sexual assault is becoming more frequent (cw violence, rape, police brutality: source).International NGOs have been slow to respond. In an article on Trans Writes, Trans Rescue treasurer Jenny List writes that their passengers haven’t seen any other international organizations working to protect or evacuate LGBTQ+ Ugandans, though some organizations say they have plans in the works.“Of course, we’ve asked around to find out what’s being done by those organisations, and the answer has come back from several quarters that things are in motion, but under the radar. We’re told that too public a move might cause them to be accused by the Ugandan government of being colonialist, and we understand that. We’re happy to hear that so much is being done, we really are.”“Unfortunately, the fact remains that the people on the ground aren’t seeing it. Things they can’t see are of little use to them, when what they need is to escape an angry mob or a police manhunt.”Trans Rescue was unusually well positioned to help. They’ve been helping trans people escape danger, especially in Africa and the Middle East, since 2021, and several of their board members did similar work in the organization’s previous incarnation as Trans Emigrate. In addition to their experience planning travel for people who face extra scrutiny due to their country of origin, they operate a trans safe haven in neighboring Kenya called Eden House. In light of the danger that queer Ugandans of all sorts are facing right now, they are providing transportation and shelter for LGBTQ+ people of all sorts fleeing Uganda.Effective Altruists often avoid donating to acute crisis that make the news. Newsworthy events are often relatively less underfunded, especially when they occur in the western world. The difficult logistics of providing aid for events like natural disasters can also be an obstacle. Uganda’s proposed LGBTQ+ bill has received some media coverage, but the people who have the ability to donate and would consider the people affected “one of us” - LGBTQ+ people in wealthier nations - are focused on the rise in transphobia in the US and UK right now. Trans Rescue already has a presence in the region in Eden House, the...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Trans Rescue’s operations in Uganda: high impact giving opportunity, published by David D on April 11, 2023 on The Effective Altruism Forum.Synopsis: Trans Rescue uses their existing experience and infrastructure for moving African trans people to safety to help LGBTQ+ people of all sorts escape genocide in Uganda, costing an estimated €150 to move a person to safety outside the country and €1257 in housing costs and other support to help them become financially self sufficient in their new location.You might have heard about Uganda’s new laws and crackdown targeting LGBTQ+ people, which began in March. “Homosexual activity” has been illegal in Uganda for a long time, but under this bill, people who even say that they are LGBT+ or “promote homosexuality” (such as advocating for LGBTQ+ people’s rights, or writing a positive or neutral article about homosexuality) face criminal charges and imprisonment. Renting living space to a gay person or conducting a same-sex marriage ceremony would also be criminalized with prison sentences of up to 10 years.The president of Uganda has not yet signed the bill, but the homophobic fervor around it is already wreaking havoc in LGBTQ+ Ugandans’ lives. Many of Trans Rescue’s passengers were evicted by their landlords without any warning or opportunity to retrieve their things. In one particularly violent example, a landlord was convinced by a local preacher that his two trans tenants were evil and set fire to their belongings while they weren’t home, burning down his building in the process. Existing shelters for LGBTQ+ people also face eviction. Violence and sexual assault is becoming more frequent (cw violence, rape, police brutality: source).International NGOs have been slow to respond. In an article on Trans Writes, Trans Rescue treasurer Jenny List writes that their passengers haven’t seen any other international organizations working to protect or evacuate LGBTQ+ Ugandans, though some organizations say they have plans in the works.“Of course, we’ve asked around to find out what’s being done by those organisations, and the answer has come back from several quarters that things are in motion, but under the radar. We’re told that too public a move might cause them to be accused by the Ugandan government of being colonialist, and we understand that. We’re happy to hear that so much is being done, we really are.”“Unfortunately, the fact remains that the people on the ground aren’t seeing it. Things they can’t see are of little use to them, when what they need is to escape an angry mob or a police manhunt.”Trans Rescue was unusually well positioned to help. They’ve been helping trans people escape danger, especially in Africa and the Middle East, since 2021, and several of their board members did similar work in the organization’s previous incarnation as Trans Emigrate. In addition to their experience planning travel for people who face extra scrutiny due to their country of origin, they operate a trans safe haven in neighboring Kenya called Eden House. In light of the danger that queer Ugandans of all sorts are facing right now, they are providing transportation and shelter for LGBTQ+ people of all sorts fleeing Uganda.Effective Altruists often avoid donating to acute crisis that make the news. Newsworthy events are often relatively less underfunded, especially when they occur in the western world. The difficult logistics of providing aid for events like natural disasters can also be an obstacle. Uganda’s proposed LGBTQ+ bill has received some media coverage, but the people who have the ability to donate and would consider the people affected “one of us” - LGBTQ+ people in wealthier nations - are focused on the rise in transphobia in the US and UK right now. Trans Rescue already has a presence in the region in Eden House, the...]]>
David D https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:24 None full 5568
MasQBKhGCbEn7E48R_NL_EA_EA EA - Evolution provides no evidence for the sharp left turn by Quintin Pope Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evolution provides no evidence for the sharp left turn, published by Quintin Pope on April 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Quintin Pope https://forum.effectivealtruism.org/posts/MasQBKhGCbEn7E48R/evolution-provides-no-evidence-for-the-sharp-left-turn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evolution provides no evidence for the sharp left turn, published by Quintin Pope on April 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 12 Apr 2023 03:00:44 +0000 EA - Evolution provides no evidence for the sharp left turn by Quintin Pope Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evolution provides no evidence for the sharp left turn, published by Quintin Pope on April 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evolution provides no evidence for the sharp left turn, published by Quintin Pope on April 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Quintin Pope https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:22 None full 5591
rwWXFDA6gBJjXHBA9_NL_EA_EA EA - Request to AGI organizations: Share your views on pausing AI progress by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Request to AGI organizations: Share your views on pausing AI progress, published by Akash on April 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/rwWXFDA6gBJjXHBA9/request-to-agi-organizations-share-your-views-on-pausing-ai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Request to AGI organizations: Share your views on pausing AI progress, published by Akash on April 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 11 Apr 2023 22:26:50 +0000 EA - Request to AGI organizations: Share your views on pausing AI progress by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Request to AGI organizations: Share your views on pausing AI progress, published by Akash on April 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Request to AGI organizations: Share your views on pausing AI progress, published by Akash on April 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:23 None full 5567
Sa4ahq8AGTniuuvjE_NL_EA_EA EA - [Linkpost] 538 Politics Podcast on AI risk and politics by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] 538 Politics Podcast on AI risk & politics, published by jackva on April 11, 2023 on The Effective Altruism Forum.From about minute 32 of this Monday's 538 Podcast, Nate Silver & crew discuss AI politics and risk. As an outlet strongly aligned with "taking evidence and forecasting and political analysis seriously" I thought this was pretty interesting for a number of reasons, both in terms of arguments but also explicit discussion of EA community:Nate Silver making the point that this is much more important than other issues, that a 5% ex risk would be a really big dealFairly detailed and somewhat accurate description of EA communityInsularity of EA/AI risk community and difficulty of translating this to wider publicWarning shots as the primary mechanism for actual risk being lowerRationality / EA community mentioned as good at identifying important thingsDifficulty of taking the issue fully seriouslyIn other podcast episodes Nate also mentioned he will cover AI risk in some detail in his upcoming bookThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
jackva https://forum.effectivealtruism.org/posts/Sa4ahq8AGTniuuvjE/linkpost-538-politics-podcast-on-ai-risk-and-politics Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] 538 Politics Podcast on AI risk & politics, published by jackva on April 11, 2023 on The Effective Altruism Forum.From about minute 32 of this Monday's 538 Podcast, Nate Silver & crew discuss AI politics and risk. As an outlet strongly aligned with "taking evidence and forecasting and political analysis seriously" I thought this was pretty interesting for a number of reasons, both in terms of arguments but also explicit discussion of EA community:Nate Silver making the point that this is much more important than other issues, that a 5% ex risk would be a really big dealFairly detailed and somewhat accurate description of EA communityInsularity of EA/AI risk community and difficulty of translating this to wider publicWarning shots as the primary mechanism for actual risk being lowerRationality / EA community mentioned as good at identifying important thingsDifficulty of taking the issue fully seriouslyIn other podcast episodes Nate also mentioned he will cover AI risk in some detail in his upcoming bookThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 11 Apr 2023 21:43:56 +0000 EA - [Linkpost] 538 Politics Podcast on AI risk and politics by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] 538 Politics Podcast on AI risk & politics, published by jackva on April 11, 2023 on The Effective Altruism Forum.From about minute 32 of this Monday's 538 Podcast, Nate Silver & crew discuss AI politics and risk. As an outlet strongly aligned with "taking evidence and forecasting and political analysis seriously" I thought this was pretty interesting for a number of reasons, both in terms of arguments but also explicit discussion of EA community:Nate Silver making the point that this is much more important than other issues, that a 5% ex risk would be a really big dealFairly detailed and somewhat accurate description of EA communityInsularity of EA/AI risk community and difficulty of translating this to wider publicWarning shots as the primary mechanism for actual risk being lowerRationality / EA community mentioned as good at identifying important thingsDifficulty of taking the issue fully seriouslyIn other podcast episodes Nate also mentioned he will cover AI risk in some detail in his upcoming bookThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] 538 Politics Podcast on AI risk & politics, published by jackva on April 11, 2023 on The Effective Altruism Forum.From about minute 32 of this Monday's 538 Podcast, Nate Silver & crew discuss AI politics and risk. As an outlet strongly aligned with "taking evidence and forecasting and political analysis seriously" I thought this was pretty interesting for a number of reasons, both in terms of arguments but also explicit discussion of EA community:Nate Silver making the point that this is much more important than other issues, that a 5% ex risk would be a really big dealFairly detailed and somewhat accurate description of EA communityInsularity of EA/AI risk community and difficulty of translating this to wider publicWarning shots as the primary mechanism for actual risk being lowerRationality / EA community mentioned as good at identifying important thingsDifficulty of taking the issue fully seriouslyIn other podcast episodes Nate also mentioned he will cover AI risk in some detail in his upcoming bookThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
jackva https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:14 None full 5557
DgZezaADGqK5Hxwom_NL_EA_EA EA - How much funding does GiveWell expect to raise through 2025? by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much funding does GiveWell expect to raise through 2025?, published by GiveWell on April 11, 2023 on The Effective Altruism Forum.SummaryWe're optimistic that GiveWell's funds raised will continue to increase in the long run. Over the next few years, we believe our annual funds raised are more likely to stay relatively constant, due to a decrease in expected funding from our largest donor, Open Philanthropy, offset by an expected increase in funding from our other donors.This chart shows our latest forecasts for funds raised in millions of dollars:[1]In November 2021, we wrote that we were anticipating rapid growth and aiming to influence $1 billion in 2025. Now, our best guess is that we'll raise between $400 million and $800 million in 2025 (for comparison, we raised around $600 million in 2022). As in the chart above, we now think it's possible but unlikely that we'll raise close to $1 billion in 2025, and we also think it's possible but unlikely that our funds raised in 2025 will be substantially lower (e.g. around $300 million) than they were in 2022.We're excited about the impact we can have at any of those levels of funding, and we'll be continuing to direct as much funding as we can raise to the most cost-effective opportunities we can find.Our forecasts are uncertain. We might be wrong about what the future will look like, just as our projections now are very different than they were in late 2021. We'll have better information as time goes on.This change in projected funds raised means that:We're funding-constrained: we believe that our research will yield more outstanding opportunities than we'll be able to fund over the next few years. Your donations can help fill those gaps.Because it's valuable to maintain a stable cost-effectiveness bar, we may not spend down all the funds available to us in each year. Depending on how much funding we are able to direct and when it becomes available, we may smooth our spending over the next few years. Currently, we recommend funding to opportunities we believe to be at least 10 times as cost-effective as unconditional cash transfers ("10x cash").We are increasing our emphasis on fundraising relative to past years and relative to our previous plans for 2023 to 2025 in order to increase the chances of us being able to fill additional cost-effective funding gaps.In the rest of this post, we discuss:Our updated forecastsHow we'll respond to shifting expectationsThe uncertainty inherent in these projectionsOur updated forecastsWhen we wrote that we aimed to direct $1 billion annually by 2025, we were imagining a scenario in which we'd receive $500 million from Open Philanthropy and roughly $450 million from other donors.[2]We now believe that:Given our strong community of supporters and our increased fundraising efforts, funding from donors beyond Open Philanthropy will likely continue to increase. We're uncertain of the rate at which it'll increase, and it's always possible that funds raised will stall or decrease. Despite the economic downturn, our projections for funds raised among these donors haven't changed much. Our goal is to reach $500 million in funds raised from donors other than Open Philanthropy by 2025. We think this goal is ambitious but plausibly achievable with effort.The level of funding we'll receive from Open Philanthropy in the future is very uncertain. There is a wide range of possibilities, but our median expectation is that funds from Open Philanthropy will taper down from a high of $350 million in 2022.Previously, Open Philanthropy had tentatively planned to give $500 million in each of 2022 and 2023. We projected that level of funding out to 2025 as a best guess. In early 2022, Open Philanthropy revised its plans to give $350 million in 2022; it tentatively plans to give $250...]]>
GiveWell https://forum.effectivealtruism.org/posts/DgZezaADGqK5Hxwom/how-much-funding-does-givewell-expect-to-raise-through-2025 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much funding does GiveWell expect to raise through 2025?, published by GiveWell on April 11, 2023 on The Effective Altruism Forum.SummaryWe're optimistic that GiveWell's funds raised will continue to increase in the long run. Over the next few years, we believe our annual funds raised are more likely to stay relatively constant, due to a decrease in expected funding from our largest donor, Open Philanthropy, offset by an expected increase in funding from our other donors.This chart shows our latest forecasts for funds raised in millions of dollars:[1]In November 2021, we wrote that we were anticipating rapid growth and aiming to influence $1 billion in 2025. Now, our best guess is that we'll raise between $400 million and $800 million in 2025 (for comparison, we raised around $600 million in 2022). As in the chart above, we now think it's possible but unlikely that we'll raise close to $1 billion in 2025, and we also think it's possible but unlikely that our funds raised in 2025 will be substantially lower (e.g. around $300 million) than they were in 2022.We're excited about the impact we can have at any of those levels of funding, and we'll be continuing to direct as much funding as we can raise to the most cost-effective opportunities we can find.Our forecasts are uncertain. We might be wrong about what the future will look like, just as our projections now are very different than they were in late 2021. We'll have better information as time goes on.This change in projected funds raised means that:We're funding-constrained: we believe that our research will yield more outstanding opportunities than we'll be able to fund over the next few years. Your donations can help fill those gaps.Because it's valuable to maintain a stable cost-effectiveness bar, we may not spend down all the funds available to us in each year. Depending on how much funding we are able to direct and when it becomes available, we may smooth our spending over the next few years. Currently, we recommend funding to opportunities we believe to be at least 10 times as cost-effective as unconditional cash transfers ("10x cash").We are increasing our emphasis on fundraising relative to past years and relative to our previous plans for 2023 to 2025 in order to increase the chances of us being able to fill additional cost-effective funding gaps.In the rest of this post, we discuss:Our updated forecastsHow we'll respond to shifting expectationsThe uncertainty inherent in these projectionsOur updated forecastsWhen we wrote that we aimed to direct $1 billion annually by 2025, we were imagining a scenario in which we'd receive $500 million from Open Philanthropy and roughly $450 million from other donors.[2]We now believe that:Given our strong community of supporters and our increased fundraising efforts, funding from donors beyond Open Philanthropy will likely continue to increase. We're uncertain of the rate at which it'll increase, and it's always possible that funds raised will stall or decrease. Despite the economic downturn, our projections for funds raised among these donors haven't changed much. Our goal is to reach $500 million in funds raised from donors other than Open Philanthropy by 2025. We think this goal is ambitious but plausibly achievable with effort.The level of funding we'll receive from Open Philanthropy in the future is very uncertain. There is a wide range of possibilities, but our median expectation is that funds from Open Philanthropy will taper down from a high of $350 million in 2022.Previously, Open Philanthropy had tentatively planned to give $500 million in each of 2022 and 2023. We projected that level of funding out to 2025 as a best guess. In early 2022, Open Philanthropy revised its plans to give $350 million in 2022; it tentatively plans to give $250...]]>
Tue, 11 Apr 2023 19:44:14 +0000 EA - How much funding does GiveWell expect to raise through 2025? by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much funding does GiveWell expect to raise through 2025?, published by GiveWell on April 11, 2023 on The Effective Altruism Forum.SummaryWe're optimistic that GiveWell's funds raised will continue to increase in the long run. Over the next few years, we believe our annual funds raised are more likely to stay relatively constant, due to a decrease in expected funding from our largest donor, Open Philanthropy, offset by an expected increase in funding from our other donors.This chart shows our latest forecasts for funds raised in millions of dollars:[1]In November 2021, we wrote that we were anticipating rapid growth and aiming to influence $1 billion in 2025. Now, our best guess is that we'll raise between $400 million and $800 million in 2025 (for comparison, we raised around $600 million in 2022). As in the chart above, we now think it's possible but unlikely that we'll raise close to $1 billion in 2025, and we also think it's possible but unlikely that our funds raised in 2025 will be substantially lower (e.g. around $300 million) than they were in 2022.We're excited about the impact we can have at any of those levels of funding, and we'll be continuing to direct as much funding as we can raise to the most cost-effective opportunities we can find.Our forecasts are uncertain. We might be wrong about what the future will look like, just as our projections now are very different than they were in late 2021. We'll have better information as time goes on.This change in projected funds raised means that:We're funding-constrained: we believe that our research will yield more outstanding opportunities than we'll be able to fund over the next few years. Your donations can help fill those gaps.Because it's valuable to maintain a stable cost-effectiveness bar, we may not spend down all the funds available to us in each year. Depending on how much funding we are able to direct and when it becomes available, we may smooth our spending over the next few years. Currently, we recommend funding to opportunities we believe to be at least 10 times as cost-effective as unconditional cash transfers ("10x cash").We are increasing our emphasis on fundraising relative to past years and relative to our previous plans for 2023 to 2025 in order to increase the chances of us being able to fill additional cost-effective funding gaps.In the rest of this post, we discuss:Our updated forecastsHow we'll respond to shifting expectationsThe uncertainty inherent in these projectionsOur updated forecastsWhen we wrote that we aimed to direct $1 billion annually by 2025, we were imagining a scenario in which we'd receive $500 million from Open Philanthropy and roughly $450 million from other donors.[2]We now believe that:Given our strong community of supporters and our increased fundraising efforts, funding from donors beyond Open Philanthropy will likely continue to increase. We're uncertain of the rate at which it'll increase, and it's always possible that funds raised will stall or decrease. Despite the economic downturn, our projections for funds raised among these donors haven't changed much. Our goal is to reach $500 million in funds raised from donors other than Open Philanthropy by 2025. We think this goal is ambitious but plausibly achievable with effort.The level of funding we'll receive from Open Philanthropy in the future is very uncertain. There is a wide range of possibilities, but our median expectation is that funds from Open Philanthropy will taper down from a high of $350 million in 2022.Previously, Open Philanthropy had tentatively planned to give $500 million in each of 2022 and 2023. We projected that level of funding out to 2025 as a best guess. In early 2022, Open Philanthropy revised its plans to give $350 million in 2022; it tentatively plans to give $250...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much funding does GiveWell expect to raise through 2025?, published by GiveWell on April 11, 2023 on The Effective Altruism Forum.SummaryWe're optimistic that GiveWell's funds raised will continue to increase in the long run. Over the next few years, we believe our annual funds raised are more likely to stay relatively constant, due to a decrease in expected funding from our largest donor, Open Philanthropy, offset by an expected increase in funding from our other donors.This chart shows our latest forecasts for funds raised in millions of dollars:[1]In November 2021, we wrote that we were anticipating rapid growth and aiming to influence $1 billion in 2025. Now, our best guess is that we'll raise between $400 million and $800 million in 2025 (for comparison, we raised around $600 million in 2022). As in the chart above, we now think it's possible but unlikely that we'll raise close to $1 billion in 2025, and we also think it's possible but unlikely that our funds raised in 2025 will be substantially lower (e.g. around $300 million) than they were in 2022.We're excited about the impact we can have at any of those levels of funding, and we'll be continuing to direct as much funding as we can raise to the most cost-effective opportunities we can find.Our forecasts are uncertain. We might be wrong about what the future will look like, just as our projections now are very different than they were in late 2021. We'll have better information as time goes on.This change in projected funds raised means that:We're funding-constrained: we believe that our research will yield more outstanding opportunities than we'll be able to fund over the next few years. Your donations can help fill those gaps.Because it's valuable to maintain a stable cost-effectiveness bar, we may not spend down all the funds available to us in each year. Depending on how much funding we are able to direct and when it becomes available, we may smooth our spending over the next few years. Currently, we recommend funding to opportunities we believe to be at least 10 times as cost-effective as unconditional cash transfers ("10x cash").We are increasing our emphasis on fundraising relative to past years and relative to our previous plans for 2023 to 2025 in order to increase the chances of us being able to fill additional cost-effective funding gaps.In the rest of this post, we discuss:Our updated forecastsHow we'll respond to shifting expectationsThe uncertainty inherent in these projectionsOur updated forecastsWhen we wrote that we aimed to direct $1 billion annually by 2025, we were imagining a scenario in which we'd receive $500 million from Open Philanthropy and roughly $450 million from other donors.[2]We now believe that:Given our strong community of supporters and our increased fundraising efforts, funding from donors beyond Open Philanthropy will likely continue to increase. We're uncertain of the rate at which it'll increase, and it's always possible that funds raised will stall or decrease. Despite the economic downturn, our projections for funds raised among these donors haven't changed much. Our goal is to reach $500 million in funds raised from donors other than Open Philanthropy by 2025. We think this goal is ambitious but plausibly achievable with effort.The level of funding we'll receive from Open Philanthropy in the future is very uncertain. There is a wide range of possibilities, but our median expectation is that funds from Open Philanthropy will taper down from a high of $350 million in 2022.Previously, Open Philanthropy had tentatively planned to give $500 million in each of 2022 and 2023. We projected that level of funding out to 2025 as a best guess. In early 2022, Open Philanthropy revised its plans to give $350 million in 2022; it tentatively plans to give $250...]]>
GiveWell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:01 None full 5559
vctT7omkLhoEMptKD_NL_EA_EA EA - Lead exposure: a shallow cause exploration by JoelMcGuire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lead exposure: a shallow cause exploration, published by JoelMcGuire on April 11, 2023 on The Effective Altruism Forum.SummaryThis shallow cause area report explores the impact of lead exposure in childhood on subjective wellbeing (SWB) in adulthood. It was completed in two weeks. We performed non-systematic searches to estimate the impact of lead exposure on SWB and to find potential cost-effective interventions.We found studies investigating two correlational longitudinal cohorts (following children to adulthood) from New Zealand and Australia analysing the relationship between childhood blood lead levels (BLLs) and adult affective mental health (MHa). Based on this data, our best guess estimate is that an additional microgram of lead per deciliter of blood for each year for ten years of childhood, leads to a total lifelong (62 years) loss of 1.5 WELLBYs, and a larger overall 3.8 WELLBYs loss when we include some guesses about household spillovers. Hence, we estimate that a modest amount of lead exposure severely impacts on wellbeing across the lifespan.From several back of the envelope calculations, we tentatively conclude that lead-reducing interventions would be 1 to 107 times more cost-effective than cash transfers. Advocacy against lead in paint, food, cookware, and cosmetics seems particularly promising.The scarcity of causal and context relevant data means that we are very uncertain about the effect and cost-effectiveness of these interventions. But, given the potentially high cost-effectiveness, we think this is a promising area for additional research. We especially encourage further research into the causal relationship between lead exposure and SWB and the most common sources of lead exposure to reduce uncertainty about the cost-effectiveness of lead interventions.It’s unclear if the top organisations working to reduce lead exposure, like Pure Earth or the Lead Exposure Elimination Project (LEEP), have sizable funding gaps. Therefore, we’re unsure how much more work should be done to evaluate funding opportunities related to reducing lead exposure for philanthropists aiming to maximise their impact.NotesThis report focuses on the impact of lead exposure in terms of WELLBYs. One WELLBY is a 1 life satisfaction point change for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of SWB to WELLBYs using a 2 point standard deviation on 0-10 life satisfaction scales (i.e., 1 SD change is the equivalent of 2 point changes on a 0-10 life satisfaction scale). We consider the limitations of converting from affective mental health measures to WELLBYs in Appendix A4. This naive conversion is based on estimates from large scale data sets like the World Happiness Reports. See our post on the WELLBY method for more details.Our calculations and data extraction can be found in this Google Spreadsheet and this GitHub repository.The shallowness of this investigation means (1) we include more guesses and uncertainty in our models, (2) we couldn’t always conduct the most detailed or complex analyses, (3) we might have missed some data, and (4) we take some findings at face value.Clare Donaldson was co-director of HLI before becoming the current co-director of the Lead Exposure Elimination Project. We do not think this influenced our choices or analysis.OutlineIn Section 1 we introduce the issue of lead exposure and define some key terms we will use throughout the rest of this report.In Section 2 we explain the mechanisms for how lead exposure could affect wellbeing.In Section 3 we model the harm of lead exposure on subjective wellbeing using studies of two datasets from New Zealand and Australia (n = 789) relating childhood lead exposure to their adult affective mental heal...]]>
JoelMcGuire https://forum.effectivealtruism.org/posts/vctT7omkLhoEMptKD/lead-exposure-a-shallow-cause-exploration Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lead exposure: a shallow cause exploration, published by JoelMcGuire on April 11, 2023 on The Effective Altruism Forum.SummaryThis shallow cause area report explores the impact of lead exposure in childhood on subjective wellbeing (SWB) in adulthood. It was completed in two weeks. We performed non-systematic searches to estimate the impact of lead exposure on SWB and to find potential cost-effective interventions.We found studies investigating two correlational longitudinal cohorts (following children to adulthood) from New Zealand and Australia analysing the relationship between childhood blood lead levels (BLLs) and adult affective mental health (MHa). Based on this data, our best guess estimate is that an additional microgram of lead per deciliter of blood for each year for ten years of childhood, leads to a total lifelong (62 years) loss of 1.5 WELLBYs, and a larger overall 3.8 WELLBYs loss when we include some guesses about household spillovers. Hence, we estimate that a modest amount of lead exposure severely impacts on wellbeing across the lifespan.From several back of the envelope calculations, we tentatively conclude that lead-reducing interventions would be 1 to 107 times more cost-effective than cash transfers. Advocacy against lead in paint, food, cookware, and cosmetics seems particularly promising.The scarcity of causal and context relevant data means that we are very uncertain about the effect and cost-effectiveness of these interventions. But, given the potentially high cost-effectiveness, we think this is a promising area for additional research. We especially encourage further research into the causal relationship between lead exposure and SWB and the most common sources of lead exposure to reduce uncertainty about the cost-effectiveness of lead interventions.It’s unclear if the top organisations working to reduce lead exposure, like Pure Earth or the Lead Exposure Elimination Project (LEEP), have sizable funding gaps. Therefore, we’re unsure how much more work should be done to evaluate funding opportunities related to reducing lead exposure for philanthropists aiming to maximise their impact.NotesThis report focuses on the impact of lead exposure in terms of WELLBYs. One WELLBY is a 1 life satisfaction point change for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of SWB to WELLBYs using a 2 point standard deviation on 0-10 life satisfaction scales (i.e., 1 SD change is the equivalent of 2 point changes on a 0-10 life satisfaction scale). We consider the limitations of converting from affective mental health measures to WELLBYs in Appendix A4. This naive conversion is based on estimates from large scale data sets like the World Happiness Reports. See our post on the WELLBY method for more details.Our calculations and data extraction can be found in this Google Spreadsheet and this GitHub repository.The shallowness of this investigation means (1) we include more guesses and uncertainty in our models, (2) we couldn’t always conduct the most detailed or complex analyses, (3) we might have missed some data, and (4) we take some findings at face value.Clare Donaldson was co-director of HLI before becoming the current co-director of the Lead Exposure Elimination Project. We do not think this influenced our choices or analysis.OutlineIn Section 1 we introduce the issue of lead exposure and define some key terms we will use throughout the rest of this report.In Section 2 we explain the mechanisms for how lead exposure could affect wellbeing.In Section 3 we model the harm of lead exposure on subjective wellbeing using studies of two datasets from New Zealand and Australia (n = 789) relating childhood lead exposure to their adult affective mental heal...]]>
Tue, 11 Apr 2023 19:32:03 +0000 EA - Lead exposure: a shallow cause exploration by JoelMcGuire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lead exposure: a shallow cause exploration, published by JoelMcGuire on April 11, 2023 on The Effective Altruism Forum.SummaryThis shallow cause area report explores the impact of lead exposure in childhood on subjective wellbeing (SWB) in adulthood. It was completed in two weeks. We performed non-systematic searches to estimate the impact of lead exposure on SWB and to find potential cost-effective interventions.We found studies investigating two correlational longitudinal cohorts (following children to adulthood) from New Zealand and Australia analysing the relationship between childhood blood lead levels (BLLs) and adult affective mental health (MHa). Based on this data, our best guess estimate is that an additional microgram of lead per deciliter of blood for each year for ten years of childhood, leads to a total lifelong (62 years) loss of 1.5 WELLBYs, and a larger overall 3.8 WELLBYs loss when we include some guesses about household spillovers. Hence, we estimate that a modest amount of lead exposure severely impacts on wellbeing across the lifespan.From several back of the envelope calculations, we tentatively conclude that lead-reducing interventions would be 1 to 107 times more cost-effective than cash transfers. Advocacy against lead in paint, food, cookware, and cosmetics seems particularly promising.The scarcity of causal and context relevant data means that we are very uncertain about the effect and cost-effectiveness of these interventions. But, given the potentially high cost-effectiveness, we think this is a promising area for additional research. We especially encourage further research into the causal relationship between lead exposure and SWB and the most common sources of lead exposure to reduce uncertainty about the cost-effectiveness of lead interventions.It’s unclear if the top organisations working to reduce lead exposure, like Pure Earth or the Lead Exposure Elimination Project (LEEP), have sizable funding gaps. Therefore, we’re unsure how much more work should be done to evaluate funding opportunities related to reducing lead exposure for philanthropists aiming to maximise their impact.NotesThis report focuses on the impact of lead exposure in terms of WELLBYs. One WELLBY is a 1 life satisfaction point change for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of SWB to WELLBYs using a 2 point standard deviation on 0-10 life satisfaction scales (i.e., 1 SD change is the equivalent of 2 point changes on a 0-10 life satisfaction scale). We consider the limitations of converting from affective mental health measures to WELLBYs in Appendix A4. This naive conversion is based on estimates from large scale data sets like the World Happiness Reports. See our post on the WELLBY method for more details.Our calculations and data extraction can be found in this Google Spreadsheet and this GitHub repository.The shallowness of this investigation means (1) we include more guesses and uncertainty in our models, (2) we couldn’t always conduct the most detailed or complex analyses, (3) we might have missed some data, and (4) we take some findings at face value.Clare Donaldson was co-director of HLI before becoming the current co-director of the Lead Exposure Elimination Project. We do not think this influenced our choices or analysis.OutlineIn Section 1 we introduce the issue of lead exposure and define some key terms we will use throughout the rest of this report.In Section 2 we explain the mechanisms for how lead exposure could affect wellbeing.In Section 3 we model the harm of lead exposure on subjective wellbeing using studies of two datasets from New Zealand and Australia (n = 789) relating childhood lead exposure to their adult affective mental heal...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lead exposure: a shallow cause exploration, published by JoelMcGuire on April 11, 2023 on The Effective Altruism Forum.SummaryThis shallow cause area report explores the impact of lead exposure in childhood on subjective wellbeing (SWB) in adulthood. It was completed in two weeks. We performed non-systematic searches to estimate the impact of lead exposure on SWB and to find potential cost-effective interventions.We found studies investigating two correlational longitudinal cohorts (following children to adulthood) from New Zealand and Australia analysing the relationship between childhood blood lead levels (BLLs) and adult affective mental health (MHa). Based on this data, our best guess estimate is that an additional microgram of lead per deciliter of blood for each year for ten years of childhood, leads to a total lifelong (62 years) loss of 1.5 WELLBYs, and a larger overall 3.8 WELLBYs loss when we include some guesses about household spillovers. Hence, we estimate that a modest amount of lead exposure severely impacts on wellbeing across the lifespan.From several back of the envelope calculations, we tentatively conclude that lead-reducing interventions would be 1 to 107 times more cost-effective than cash transfers. Advocacy against lead in paint, food, cookware, and cosmetics seems particularly promising.The scarcity of causal and context relevant data means that we are very uncertain about the effect and cost-effectiveness of these interventions. But, given the potentially high cost-effectiveness, we think this is a promising area for additional research. We especially encourage further research into the causal relationship between lead exposure and SWB and the most common sources of lead exposure to reduce uncertainty about the cost-effectiveness of lead interventions.It’s unclear if the top organisations working to reduce lead exposure, like Pure Earth or the Lead Exposure Elimination Project (LEEP), have sizable funding gaps. Therefore, we’re unsure how much more work should be done to evaluate funding opportunities related to reducing lead exposure for philanthropists aiming to maximise their impact.NotesThis report focuses on the impact of lead exposure in terms of WELLBYs. One WELLBY is a 1 life satisfaction point change for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of SWB to WELLBYs using a 2 point standard deviation on 0-10 life satisfaction scales (i.e., 1 SD change is the equivalent of 2 point changes on a 0-10 life satisfaction scale). We consider the limitations of converting from affective mental health measures to WELLBYs in Appendix A4. This naive conversion is based on estimates from large scale data sets like the World Happiness Reports. See our post on the WELLBY method for more details.Our calculations and data extraction can be found in this Google Spreadsheet and this GitHub repository.The shallowness of this investigation means (1) we include more guesses and uncertainty in our models, (2) we couldn’t always conduct the most detailed or complex analyses, (3) we might have missed some data, and (4) we take some findings at face value.Clare Donaldson was co-director of HLI before becoming the current co-director of the Lead Exposure Elimination Project. We do not think this influenced our choices or analysis.OutlineIn Section 1 we introduce the issue of lead exposure and define some key terms we will use throughout the rest of this report.In Section 2 we explain the mechanisms for how lead exposure could affect wellbeing.In Section 3 we model the harm of lead exposure on subjective wellbeing using studies of two datasets from New Zealand and Australia (n = 789) relating childhood lead exposure to their adult affective mental heal...]]>
JoelMcGuire https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:07:34 None full 5555
w2w7SJFD9WHHdYAwt_NL_EA_EA EA - AI Safety Newsletter #1 [CAIS Linkpost] by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #1 [CAIS Linkpost], published by Akash on April 10, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/w2w7SJFD9WHHdYAwt/ai-safety-newsletter-1-cais-linkpost Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #1 [CAIS Linkpost], published by Akash on April 10, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 11 Apr 2023 17:03:10 +0000 EA - AI Safety Newsletter #1 [CAIS Linkpost] by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #1 [CAIS Linkpost], published by Akash on April 10, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Newsletter #1 [CAIS Linkpost], published by Akash on April 10, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:22 None full 5561
KAy3sNbw2bgPrR5o8_NL_EA_EA EA - U.S. is launching a $5 billion follow-up to Operation Warp Speed by Juan Cambeiro Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: U.S. is launching a $5 billion follow-up to Operation Warp Speed, published by Juan Cambeiro on April 11, 2023 on The Effective Altruism Forum.The Biden administration is launching a $5 billion follow-up to Operation Warp Speed called "Project Next Gen." It has 3 goals, of which the most relevant for future pandemic preparedness is development of pan-coronavirus vaccines. The $5 billion seems to be coming from unspent COVID funds, so no new appropriations are needed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Juan Cambeiro https://forum.effectivealtruism.org/posts/KAy3sNbw2bgPrR5o8/u-s-is-launching-a-usd5-billion-follow-up-to-operation-warp Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: U.S. is launching a $5 billion follow-up to Operation Warp Speed, published by Juan Cambeiro on April 11, 2023 on The Effective Altruism Forum.The Biden administration is launching a $5 billion follow-up to Operation Warp Speed called "Project Next Gen." It has 3 goals, of which the most relevant for future pandemic preparedness is development of pan-coronavirus vaccines. The $5 billion seems to be coming from unspent COVID funds, so no new appropriations are needed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 11 Apr 2023 14:41:18 +0000 EA - U.S. is launching a $5 billion follow-up to Operation Warp Speed by Juan Cambeiro Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: U.S. is launching a $5 billion follow-up to Operation Warp Speed, published by Juan Cambeiro on April 11, 2023 on The Effective Altruism Forum.The Biden administration is launching a $5 billion follow-up to Operation Warp Speed called "Project Next Gen." It has 3 goals, of which the most relevant for future pandemic preparedness is development of pan-coronavirus vaccines. The $5 billion seems to be coming from unspent COVID funds, so no new appropriations are needed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: U.S. is launching a $5 billion follow-up to Operation Warp Speed, published by Juan Cambeiro on April 11, 2023 on The Effective Altruism Forum.The Biden administration is launching a $5 billion follow-up to Operation Warp Speed called "Project Next Gen." It has 3 goals, of which the most relevant for future pandemic preparedness is development of pan-coronavirus vaccines. The $5 billion seems to be coming from unspent COVID funds, so no new appropriations are needed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Juan Cambeiro https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:42 None full 5560
JQxvZZdPG5KYjyBfg_NL_EA_EA EA - Four mindset disagreements behind existential risk disagreements in ML by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Four mindset disagreements behind existential risk disagreements in ML, published by RobBensinger on April 11, 2023 on The Effective Altruism Forum.I've previously written that good ideas and conversations about AGI seem to have propagated through ML weirdly slowly.A different weird phenomenon I observe is that the field's relative inaction about AGI seems less based on a confident set of beliefs about AGI definitely going well (or definitely being far off), and more based on an implicit sense like "the default is everything going well, and we don't need to change anything until there's overwhelming evidence to the contrary".Some people do have confident beliefs that imply "things will go well"; I disagree there, but I expect some amount of disagreement like that.But that doesn't seem to be the crux for most people in ML.In a sane world, it doesn't seem like "well, maybe AI will get stuck at human-ish levels for decades" or "well, maybe superintelligence couldn't invent any wild new tech" ought to be cruxes for "Should we pause AI development?" or "Is alignment research the world's top priority?"Note that I'm not arguing "an AGI-mediated extinction event is such a big deal that we should make it a top priority even if it's very unlikely". There are enough other powerful technologies on the horizon, and enough other risks for civilizational collapse or value lock-in, that I don't in fact think AGI x-risk should get major attention if it's very unlikely.But the most common view within ML seems to be less "it's super unlikely for reasons X Y Z", and more of an "I haven't thought about it much" and/or "I see some reasons to be very worried, but also some reasons things might be fine, so I end up with medium-ish levels of worry".48% of researchers who had recently published in NeurIPS or ICML gave double-digit probabilities in mid-2022 to advanced AI's long-term effect being “extremely bad (e.g., human extinction)”. A similar number gave double-digit probabilities to "human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species".91% of researchers working on "long-term AI topics" at CHAI, DeepMind, MIRI, OpenAI, Open Philanthropy, and what would become Anthropic gave double-digit probabilities in early 2021 to "the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended".The level of concern and seriousness I see from ML researchers discussing AGI on any social media platform or in any mainstream venue seems wildly out of step with "half of us think there's a 10+% chance of our work resulting in an existential catastrophe".I think the following four factors help partly (though not completely) explain what's going on. If I'm right, then I think there's some hope that the field can explicitly talk about these things and consciously course-correct."Conservative" predictions, versus conservative decision-making.Waiting for a fire alarm, versus intervening proactively.Anchoring to what's familiar, versus trying to account for potential novelties in AGI.Modeling existential risks in far mode, versus near mode.1. "Conservative" predictions, versus conservative decision-makingIf you're building toward a technology as novel and powerful as "automating every cognitive ability a human can do", then it may sound "conservative" to predict modest impacts. But at the decision-making level, you should be "conservative" in a very different sense, by not gambling the future on your technology being low-impact.The first long-form discussion of AI alignment, Eliezer Yudkowsky's Creating Friendly AI 1.0, made this point in 2001:The conservative assumption according to futur...]]>
RobBensinger https://forum.effectivealtruism.org/posts/JQxvZZdPG5KYjyBfg/four-mindset-disagreements-behind-existential-risk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Four mindset disagreements behind existential risk disagreements in ML, published by RobBensinger on April 11, 2023 on The Effective Altruism Forum.I've previously written that good ideas and conversations about AGI seem to have propagated through ML weirdly slowly.A different weird phenomenon I observe is that the field's relative inaction about AGI seems less based on a confident set of beliefs about AGI definitely going well (or definitely being far off), and more based on an implicit sense like "the default is everything going well, and we don't need to change anything until there's overwhelming evidence to the contrary".Some people do have confident beliefs that imply "things will go well"; I disagree there, but I expect some amount of disagreement like that.But that doesn't seem to be the crux for most people in ML.In a sane world, it doesn't seem like "well, maybe AI will get stuck at human-ish levels for decades" or "well, maybe superintelligence couldn't invent any wild new tech" ought to be cruxes for "Should we pause AI development?" or "Is alignment research the world's top priority?"Note that I'm not arguing "an AGI-mediated extinction event is such a big deal that we should make it a top priority even if it's very unlikely". There are enough other powerful technologies on the horizon, and enough other risks for civilizational collapse or value lock-in, that I don't in fact think AGI x-risk should get major attention if it's very unlikely.But the most common view within ML seems to be less "it's super unlikely for reasons X Y Z", and more of an "I haven't thought about it much" and/or "I see some reasons to be very worried, but also some reasons things might be fine, so I end up with medium-ish levels of worry".48% of researchers who had recently published in NeurIPS or ICML gave double-digit probabilities in mid-2022 to advanced AI's long-term effect being “extremely bad (e.g., human extinction)”. A similar number gave double-digit probabilities to "human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species".91% of researchers working on "long-term AI topics" at CHAI, DeepMind, MIRI, OpenAI, Open Philanthropy, and what would become Anthropic gave double-digit probabilities in early 2021 to "the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended".The level of concern and seriousness I see from ML researchers discussing AGI on any social media platform or in any mainstream venue seems wildly out of step with "half of us think there's a 10+% chance of our work resulting in an existential catastrophe".I think the following four factors help partly (though not completely) explain what's going on. If I'm right, then I think there's some hope that the field can explicitly talk about these things and consciously course-correct."Conservative" predictions, versus conservative decision-making.Waiting for a fire alarm, versus intervening proactively.Anchoring to what's familiar, versus trying to account for potential novelties in AGI.Modeling existential risks in far mode, versus near mode.1. "Conservative" predictions, versus conservative decision-makingIf you're building toward a technology as novel and powerful as "automating every cognitive ability a human can do", then it may sound "conservative" to predict modest impacts. But at the decision-making level, you should be "conservative" in a very different sense, by not gambling the future on your technology being low-impact.The first long-form discussion of AI alignment, Eliezer Yudkowsky's Creating Friendly AI 1.0, made this point in 2001:The conservative assumption according to futur...]]>
Tue, 11 Apr 2023 07:11:31 +0000 EA - Four mindset disagreements behind existential risk disagreements in ML by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Four mindset disagreements behind existential risk disagreements in ML, published by RobBensinger on April 11, 2023 on The Effective Altruism Forum.I've previously written that good ideas and conversations about AGI seem to have propagated through ML weirdly slowly.A different weird phenomenon I observe is that the field's relative inaction about AGI seems less based on a confident set of beliefs about AGI definitely going well (or definitely being far off), and more based on an implicit sense like "the default is everything going well, and we don't need to change anything until there's overwhelming evidence to the contrary".Some people do have confident beliefs that imply "things will go well"; I disagree there, but I expect some amount of disagreement like that.But that doesn't seem to be the crux for most people in ML.In a sane world, it doesn't seem like "well, maybe AI will get stuck at human-ish levels for decades" or "well, maybe superintelligence couldn't invent any wild new tech" ought to be cruxes for "Should we pause AI development?" or "Is alignment research the world's top priority?"Note that I'm not arguing "an AGI-mediated extinction event is such a big deal that we should make it a top priority even if it's very unlikely". There are enough other powerful technologies on the horizon, and enough other risks for civilizational collapse or value lock-in, that I don't in fact think AGI x-risk should get major attention if it's very unlikely.But the most common view within ML seems to be less "it's super unlikely for reasons X Y Z", and more of an "I haven't thought about it much" and/or "I see some reasons to be very worried, but also some reasons things might be fine, so I end up with medium-ish levels of worry".48% of researchers who had recently published in NeurIPS or ICML gave double-digit probabilities in mid-2022 to advanced AI's long-term effect being “extremely bad (e.g., human extinction)”. A similar number gave double-digit probabilities to "human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species".91% of researchers working on "long-term AI topics" at CHAI, DeepMind, MIRI, OpenAI, Open Philanthropy, and what would become Anthropic gave double-digit probabilities in early 2021 to "the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended".The level of concern and seriousness I see from ML researchers discussing AGI on any social media platform or in any mainstream venue seems wildly out of step with "half of us think there's a 10+% chance of our work resulting in an existential catastrophe".I think the following four factors help partly (though not completely) explain what's going on. If I'm right, then I think there's some hope that the field can explicitly talk about these things and consciously course-correct."Conservative" predictions, versus conservative decision-making.Waiting for a fire alarm, versus intervening proactively.Anchoring to what's familiar, versus trying to account for potential novelties in AGI.Modeling existential risks in far mode, versus near mode.1. "Conservative" predictions, versus conservative decision-makingIf you're building toward a technology as novel and powerful as "automating every cognitive ability a human can do", then it may sound "conservative" to predict modest impacts. But at the decision-making level, you should be "conservative" in a very different sense, by not gambling the future on your technology being low-impact.The first long-form discussion of AI alignment, Eliezer Yudkowsky's Creating Friendly AI 1.0, made this point in 2001:The conservative assumption according to futur...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Four mindset disagreements behind existential risk disagreements in ML, published by RobBensinger on April 11, 2023 on The Effective Altruism Forum.I've previously written that good ideas and conversations about AGI seem to have propagated through ML weirdly slowly.A different weird phenomenon I observe is that the field's relative inaction about AGI seems less based on a confident set of beliefs about AGI definitely going well (or definitely being far off), and more based on an implicit sense like "the default is everything going well, and we don't need to change anything until there's overwhelming evidence to the contrary".Some people do have confident beliefs that imply "things will go well"; I disagree there, but I expect some amount of disagreement like that.But that doesn't seem to be the crux for most people in ML.In a sane world, it doesn't seem like "well, maybe AI will get stuck at human-ish levels for decades" or "well, maybe superintelligence couldn't invent any wild new tech" ought to be cruxes for "Should we pause AI development?" or "Is alignment research the world's top priority?"Note that I'm not arguing "an AGI-mediated extinction event is such a big deal that we should make it a top priority even if it's very unlikely". There are enough other powerful technologies on the horizon, and enough other risks for civilizational collapse or value lock-in, that I don't in fact think AGI x-risk should get major attention if it's very unlikely.But the most common view within ML seems to be less "it's super unlikely for reasons X Y Z", and more of an "I haven't thought about it much" and/or "I see some reasons to be very worried, but also some reasons things might be fine, so I end up with medium-ish levels of worry".48% of researchers who had recently published in NeurIPS or ICML gave double-digit probabilities in mid-2022 to advanced AI's long-term effect being “extremely bad (e.g., human extinction)”. A similar number gave double-digit probabilities to "human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species".91% of researchers working on "long-term AI topics" at CHAI, DeepMind, MIRI, OpenAI, Open Philanthropy, and what would become Anthropic gave double-digit probabilities in early 2021 to "the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended".The level of concern and seriousness I see from ML researchers discussing AGI on any social media platform or in any mainstream venue seems wildly out of step with "half of us think there's a 10+% chance of our work resulting in an existential catastrophe".I think the following four factors help partly (though not completely) explain what's going on. If I'm right, then I think there's some hope that the field can explicitly talk about these things and consciously course-correct."Conservative" predictions, versus conservative decision-making.Waiting for a fire alarm, versus intervening proactively.Anchoring to what's familiar, versus trying to account for potential novelties in AGI.Modeling existential risks in far mode, versus near mode.1. "Conservative" predictions, versus conservative decision-makingIf you're building toward a technology as novel and powerful as "automating every cognitive ability a human can do", then it may sound "conservative" to predict modest impacts. But at the decision-making level, you should be "conservative" in a very different sense, by not gambling the future on your technology being low-impact.The first long-form discussion of AI alignment, Eliezer Yudkowsky's Creating Friendly AI 1.0, made this point in 2001:The conservative assumption according to futur...]]>
RobBensinger https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:03 None full 5551
BFBf5yPLoJMGozygE_NL_EA_EA EA - Current UK government levers on AI development by rosehadshar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Current UK government levers on AI development, published by rosehadshar on April 10, 2023 on The Effective Altruism Forum.This is a link post for this collection of current UK government levers on AI development.At the end of 2022, I made a collection of information on current UK government levers on AI development, focused on levers which seem to me to have potentially significant implications for the governance of advanced AI.The primary audience I’m intending for the collection is people who work in or are considering working in AI governance and policy, and I hope it will be useful as an input into:Building more detailed models of how the UK government might affect AI development and deployment.Getting an overview of the policy status quo in the UK.Thinking about which policy areas are likely to matter more for managing transitions to advanced AI.Thinking about how important influencing the UK government is relative to other actors.In this post, I try to situate current UK government levers in the broader context, to give a sense of the limits of the collection.Some initial caveats:The collection is based exclusively on publicly available information, not on conversations with relevant government officials.I’m not an expert in the UK government or in AI policy.The factual information in the collection hasn’t been vetted by relevant experts. I expect there are things I’ve misunderstood, and important things that I’ve missed.The collection is a snapshot in time. To the best of my knowledge, the information is up to date as of April 2023, but the collection will soon get out of date. I am not going to personally commit to updating the collection, but would be excited for others to do so. If you’re interested, comment on this post or on the collection, or send me a message.I am not advocating that particular actors should try to pull any particular lever. I think it’s easy to do more harm than good, and encourage readers to orient to the collection as a way of thinking about how different trajectories might play out, rather than as a straightforward input into which policies to push. I think that figuring out net positive ways of influencing policy could be very valuable, but requires a bunch of work on top of the sorts of information in this collection.This collection is just a small part of the puzzle. Two aspects of this which I’ll unpack in a bit more detail below:The actions of the UK government might not matter.Even conditional on UK government actions mattering, there are many important things besides current policy levers.Will the actions of the UK government matter?I’m pretty uncertain about whether the actions of the UK government will end up mattering, but I do think it’s likely enough that the UK government is worth some attention.What needs to be true for the actions of the UK government to matter?Government(s) needs to matter.Governments tend to move slowly, so the faster takeoff is the less influence they'll have relatively, all else equal.I think there are fast-ish takeoff scenarios where governments still matter a lot, and slow takeoff scenarios which are plausible.So I feel pretty confident that this is likely enough to be worth serious attention.The UK needs to matter.I can see two main ways that the UK ends up mattering:DeepMind/Graphcore/Arm/some other UK-based entity ends up being a major player in the development of advanced AI.The UK influences other more important actors, for example via:UK government powers over AI companies outside of the UK.International agreements.Regulatory diffusion.Diplomacy.I’m not well-informed here, but again this seems likely enough to be worth some attention.The UK government needs to have relevant powers.The UK government currently has powers over information, including kinds o...]]>
rosehadshar https://forum.effectivealtruism.org/posts/BFBf5yPLoJMGozygE/current-uk-government-levers-on-ai-development Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Current UK government levers on AI development, published by rosehadshar on April 10, 2023 on The Effective Altruism Forum.This is a link post for this collection of current UK government levers on AI development.At the end of 2022, I made a collection of information on current UK government levers on AI development, focused on levers which seem to me to have potentially significant implications for the governance of advanced AI.The primary audience I’m intending for the collection is people who work in or are considering working in AI governance and policy, and I hope it will be useful as an input into:Building more detailed models of how the UK government might affect AI development and deployment.Getting an overview of the policy status quo in the UK.Thinking about which policy areas are likely to matter more for managing transitions to advanced AI.Thinking about how important influencing the UK government is relative to other actors.In this post, I try to situate current UK government levers in the broader context, to give a sense of the limits of the collection.Some initial caveats:The collection is based exclusively on publicly available information, not on conversations with relevant government officials.I’m not an expert in the UK government or in AI policy.The factual information in the collection hasn’t been vetted by relevant experts. I expect there are things I’ve misunderstood, and important things that I’ve missed.The collection is a snapshot in time. To the best of my knowledge, the information is up to date as of April 2023, but the collection will soon get out of date. I am not going to personally commit to updating the collection, but would be excited for others to do so. If you’re interested, comment on this post or on the collection, or send me a message.I am not advocating that particular actors should try to pull any particular lever. I think it’s easy to do more harm than good, and encourage readers to orient to the collection as a way of thinking about how different trajectories might play out, rather than as a straightforward input into which policies to push. I think that figuring out net positive ways of influencing policy could be very valuable, but requires a bunch of work on top of the sorts of information in this collection.This collection is just a small part of the puzzle. Two aspects of this which I’ll unpack in a bit more detail below:The actions of the UK government might not matter.Even conditional on UK government actions mattering, there are many important things besides current policy levers.Will the actions of the UK government matter?I’m pretty uncertain about whether the actions of the UK government will end up mattering, but I do think it’s likely enough that the UK government is worth some attention.What needs to be true for the actions of the UK government to matter?Government(s) needs to matter.Governments tend to move slowly, so the faster takeoff is the less influence they'll have relatively, all else equal.I think there are fast-ish takeoff scenarios where governments still matter a lot, and slow takeoff scenarios which are plausible.So I feel pretty confident that this is likely enough to be worth serious attention.The UK needs to matter.I can see two main ways that the UK ends up mattering:DeepMind/Graphcore/Arm/some other UK-based entity ends up being a major player in the development of advanced AI.The UK influences other more important actors, for example via:UK government powers over AI companies outside of the UK.International agreements.Regulatory diffusion.Diplomacy.I’m not well-informed here, but again this seems likely enough to be worth some attention.The UK government needs to have relevant powers.The UK government currently has powers over information, including kinds o...]]>
Tue, 11 Apr 2023 05:28:02 +0000 EA - Current UK government levers on AI development by rosehadshar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Current UK government levers on AI development, published by rosehadshar on April 10, 2023 on The Effective Altruism Forum.This is a link post for this collection of current UK government levers on AI development.At the end of 2022, I made a collection of information on current UK government levers on AI development, focused on levers which seem to me to have potentially significant implications for the governance of advanced AI.The primary audience I’m intending for the collection is people who work in or are considering working in AI governance and policy, and I hope it will be useful as an input into:Building more detailed models of how the UK government might affect AI development and deployment.Getting an overview of the policy status quo in the UK.Thinking about which policy areas are likely to matter more for managing transitions to advanced AI.Thinking about how important influencing the UK government is relative to other actors.In this post, I try to situate current UK government levers in the broader context, to give a sense of the limits of the collection.Some initial caveats:The collection is based exclusively on publicly available information, not on conversations with relevant government officials.I’m not an expert in the UK government or in AI policy.The factual information in the collection hasn’t been vetted by relevant experts. I expect there are things I’ve misunderstood, and important things that I’ve missed.The collection is a snapshot in time. To the best of my knowledge, the information is up to date as of April 2023, but the collection will soon get out of date. I am not going to personally commit to updating the collection, but would be excited for others to do so. If you’re interested, comment on this post or on the collection, or send me a message.I am not advocating that particular actors should try to pull any particular lever. I think it’s easy to do more harm than good, and encourage readers to orient to the collection as a way of thinking about how different trajectories might play out, rather than as a straightforward input into which policies to push. I think that figuring out net positive ways of influencing policy could be very valuable, but requires a bunch of work on top of the sorts of information in this collection.This collection is just a small part of the puzzle. Two aspects of this which I’ll unpack in a bit more detail below:The actions of the UK government might not matter.Even conditional on UK government actions mattering, there are many important things besides current policy levers.Will the actions of the UK government matter?I’m pretty uncertain about whether the actions of the UK government will end up mattering, but I do think it’s likely enough that the UK government is worth some attention.What needs to be true for the actions of the UK government to matter?Government(s) needs to matter.Governments tend to move slowly, so the faster takeoff is the less influence they'll have relatively, all else equal.I think there are fast-ish takeoff scenarios where governments still matter a lot, and slow takeoff scenarios which are plausible.So I feel pretty confident that this is likely enough to be worth serious attention.The UK needs to matter.I can see two main ways that the UK ends up mattering:DeepMind/Graphcore/Arm/some other UK-based entity ends up being a major player in the development of advanced AI.The UK influences other more important actors, for example via:UK government powers over AI companies outside of the UK.International agreements.Regulatory diffusion.Diplomacy.I’m not well-informed here, but again this seems likely enough to be worth some attention.The UK government needs to have relevant powers.The UK government currently has powers over information, including kinds o...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Current UK government levers on AI development, published by rosehadshar on April 10, 2023 on The Effective Altruism Forum.This is a link post for this collection of current UK government levers on AI development.At the end of 2022, I made a collection of information on current UK government levers on AI development, focused on levers which seem to me to have potentially significant implications for the governance of advanced AI.The primary audience I’m intending for the collection is people who work in or are considering working in AI governance and policy, and I hope it will be useful as an input into:Building more detailed models of how the UK government might affect AI development and deployment.Getting an overview of the policy status quo in the UK.Thinking about which policy areas are likely to matter more for managing transitions to advanced AI.Thinking about how important influencing the UK government is relative to other actors.In this post, I try to situate current UK government levers in the broader context, to give a sense of the limits of the collection.Some initial caveats:The collection is based exclusively on publicly available information, not on conversations with relevant government officials.I’m not an expert in the UK government or in AI policy.The factual information in the collection hasn’t been vetted by relevant experts. I expect there are things I’ve misunderstood, and important things that I’ve missed.The collection is a snapshot in time. To the best of my knowledge, the information is up to date as of April 2023, but the collection will soon get out of date. I am not going to personally commit to updating the collection, but would be excited for others to do so. If you’re interested, comment on this post or on the collection, or send me a message.I am not advocating that particular actors should try to pull any particular lever. I think it’s easy to do more harm than good, and encourage readers to orient to the collection as a way of thinking about how different trajectories might play out, rather than as a straightforward input into which policies to push. I think that figuring out net positive ways of influencing policy could be very valuable, but requires a bunch of work on top of the sorts of information in this collection.This collection is just a small part of the puzzle. Two aspects of this which I’ll unpack in a bit more detail below:The actions of the UK government might not matter.Even conditional on UK government actions mattering, there are many important things besides current policy levers.Will the actions of the UK government matter?I’m pretty uncertain about whether the actions of the UK government will end up mattering, but I do think it’s likely enough that the UK government is worth some attention.What needs to be true for the actions of the UK government to matter?Government(s) needs to matter.Governments tend to move slowly, so the faster takeoff is the less influence they'll have relatively, all else equal.I think there are fast-ish takeoff scenarios where governments still matter a lot, and slow takeoff scenarios which are plausible.So I feel pretty confident that this is likely enough to be worth serious attention.The UK needs to matter.I can see two main ways that the UK ends up mattering:DeepMind/Graphcore/Arm/some other UK-based entity ends up being a major player in the development of advanced AI.The UK influences other more important actors, for example via:UK government powers over AI companies outside of the UK.International agreements.Regulatory diffusion.Diplomacy.I’m not well-informed here, but again this seems likely enough to be worth some attention.The UK government needs to have relevant powers.The UK government currently has powers over information, including kinds o...]]>
rosehadshar https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:20 None full 5554
5n7tnfrKAJfAkwMv5_NL_EA_EA EA - I just launched Pepper, looking for input! by mikefilbey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I just launched Pepper, looking for input!, published by mikefilbey on April 10, 2023 on The Effective Altruism Forum.Hi everyone, my name is Mike Filbey and I am the founder of a nonprofit 501c3 called Pepper that I just launched last week. I started Pepper because I want to help people make a difference in the world. After I read the Life You Can Save I felt compelled to do something with not just my money but with my time and experience. We've got just over 50 members so far which I'm excited about because it's just from sharing with my network.I spoke with someone who ran an EA chapter at university and he recommended I post here when I launch to get feedback and ideas. He thought some of his non-EA friends may be interested in joining Pepper. That's one question I'd love input on...do you think your non-EA friends would be interested in Pepper? Please comment or email me if you have any ideas, questions, or feedback. michaelfilbey1@gmail.com. Thank you in advance!With Pepper there's just one option: give $10/month. 100% of donations (less standard Stripe nonprofit fees) go directly to four charities I've partnered with: AMF, Malaria Consortium (SMC program), HKI (VAS program), and GiveDirectly (Africa Cash Transfers program). I chose these charities because of research I conducted, but mostly because I trust GiveWell and they're far better than me at research.You can see the site here:/And our story here:/My goal with Pepper is to help people make a difference by simplifying the donation process (you can signup in 60 seconds and don't need to decide how much to give, which charities to give to, or how often to give) and create a power in numbers approach to giving, where people not dollars, make the difference.From an organizational perspective what makes Pepper unique is that we wake up and live and breathe marketing. Our goal is to acquire members and delight them. My background is in entrepreneurship and marketing.Thank you so much for reading and a big thank you to anyone who shares a suggestion.Cheers,Mike FilbeyThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
mikefilbey https://forum.effectivealtruism.org/posts/5n7tnfrKAJfAkwMv5/i-just-launched-pepper-looking-for-input Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I just launched Pepper, looking for input!, published by mikefilbey on April 10, 2023 on The Effective Altruism Forum.Hi everyone, my name is Mike Filbey and I am the founder of a nonprofit 501c3 called Pepper that I just launched last week. I started Pepper because I want to help people make a difference in the world. After I read the Life You Can Save I felt compelled to do something with not just my money but with my time and experience. We've got just over 50 members so far which I'm excited about because it's just from sharing with my network.I spoke with someone who ran an EA chapter at university and he recommended I post here when I launch to get feedback and ideas. He thought some of his non-EA friends may be interested in joining Pepper. That's one question I'd love input on...do you think your non-EA friends would be interested in Pepper? Please comment or email me if you have any ideas, questions, or feedback. michaelfilbey1@gmail.com. Thank you in advance!With Pepper there's just one option: give $10/month. 100% of donations (less standard Stripe nonprofit fees) go directly to four charities I've partnered with: AMF, Malaria Consortium (SMC program), HKI (VAS program), and GiveDirectly (Africa Cash Transfers program). I chose these charities because of research I conducted, but mostly because I trust GiveWell and they're far better than me at research.You can see the site here:/And our story here:/My goal with Pepper is to help people make a difference by simplifying the donation process (you can signup in 60 seconds and don't need to decide how much to give, which charities to give to, or how often to give) and create a power in numbers approach to giving, where people not dollars, make the difference.From an organizational perspective what makes Pepper unique is that we wake up and live and breathe marketing. Our goal is to acquire members and delight them. My background is in entrepreneurship and marketing.Thank you so much for reading and a big thank you to anyone who shares a suggestion.Cheers,Mike FilbeyThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 11 Apr 2023 03:40:34 +0000 EA - I just launched Pepper, looking for input! by mikefilbey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I just launched Pepper, looking for input!, published by mikefilbey on April 10, 2023 on The Effective Altruism Forum.Hi everyone, my name is Mike Filbey and I am the founder of a nonprofit 501c3 called Pepper that I just launched last week. I started Pepper because I want to help people make a difference in the world. After I read the Life You Can Save I felt compelled to do something with not just my money but with my time and experience. We've got just over 50 members so far which I'm excited about because it's just from sharing with my network.I spoke with someone who ran an EA chapter at university and he recommended I post here when I launch to get feedback and ideas. He thought some of his non-EA friends may be interested in joining Pepper. That's one question I'd love input on...do you think your non-EA friends would be interested in Pepper? Please comment or email me if you have any ideas, questions, or feedback. michaelfilbey1@gmail.com. Thank you in advance!With Pepper there's just one option: give $10/month. 100% of donations (less standard Stripe nonprofit fees) go directly to four charities I've partnered with: AMF, Malaria Consortium (SMC program), HKI (VAS program), and GiveDirectly (Africa Cash Transfers program). I chose these charities because of research I conducted, but mostly because I trust GiveWell and they're far better than me at research.You can see the site here:/And our story here:/My goal with Pepper is to help people make a difference by simplifying the donation process (you can signup in 60 seconds and don't need to decide how much to give, which charities to give to, or how often to give) and create a power in numbers approach to giving, where people not dollars, make the difference.From an organizational perspective what makes Pepper unique is that we wake up and live and breathe marketing. Our goal is to acquire members and delight them. My background is in entrepreneurship and marketing.Thank you so much for reading and a big thank you to anyone who shares a suggestion.Cheers,Mike FilbeyThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I just launched Pepper, looking for input!, published by mikefilbey on April 10, 2023 on The Effective Altruism Forum.Hi everyone, my name is Mike Filbey and I am the founder of a nonprofit 501c3 called Pepper that I just launched last week. I started Pepper because I want to help people make a difference in the world. After I read the Life You Can Save I felt compelled to do something with not just my money but with my time and experience. We've got just over 50 members so far which I'm excited about because it's just from sharing with my network.I spoke with someone who ran an EA chapter at university and he recommended I post here when I launch to get feedback and ideas. He thought some of his non-EA friends may be interested in joining Pepper. That's one question I'd love input on...do you think your non-EA friends would be interested in Pepper? Please comment or email me if you have any ideas, questions, or feedback. michaelfilbey1@gmail.com. Thank you in advance!With Pepper there's just one option: give $10/month. 100% of donations (less standard Stripe nonprofit fees) go directly to four charities I've partnered with: AMF, Malaria Consortium (SMC program), HKI (VAS program), and GiveDirectly (Africa Cash Transfers program). I chose these charities because of research I conducted, but mostly because I trust GiveWell and they're far better than me at research.You can see the site here:/And our story here:/My goal with Pepper is to help people make a difference by simplifying the donation process (you can signup in 60 seconds and don't need to decide how much to give, which charities to give to, or how often to give) and create a power in numbers approach to giving, where people not dollars, make the difference.From an organizational perspective what makes Pepper unique is that we wake up and live and breathe marketing. Our goal is to acquire members and delight them. My background is in entrepreneurship and marketing.Thank you so much for reading and a big thank you to anyone who shares a suggestion.Cheers,Mike FilbeyThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
mikefilbey https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:09 None full 5553
zQ7b9ghv3Tkd2LLNL_NL_EA_EA EA - An EA's Guide to Washington DC by Andy Masley Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An EA's Guide to Washington DC, published by Andy Masley on April 10, 2023 on The Effective Altruism Forum.Note: We (Andy and Elika) are posting this on the suggestion from this Forum post calling for more public guides for navigating EA hubs. This guide is not a representation of the views of everyone in the community. If you have suggestions for this guide or general feedback, please comment or message us.If you are visiting or new to the DC area and are looking to get connected to the local EA community, this guide is a useful place to start. This guide is most helpful if you’re considering moving to DC and want to get more context on the community and culture. We’re not trying to convince you to move or to be fully comprehensive.DC is a great place with a vibrant EA community! We hope you enjoy it and are welcomed warmly! To encourage that, feel free to reach out to any local community organizers listed in the People section!OverviewThe Washington Metropolitan Area includes DC itself and other nearby cities in Virginia and Maryland. It's the sixth most populated metropolitan area in the US and contains many distinct neighborhoods. The EA scene in DC is spread across different neighborhoods and there isn’t one specific cluster of EA activity.DC is one of the most active EA hubs. There are a lot of people focused on each of the major EA cause areas, especially AI, biosecurity, animal welfare, and global health. Many of the EAs who live in DC are working on policy directly or indirectly.DC EA CultureWhile we can’t speak for the view of everyone in our community, in our opinion, the EA culture in DC is great! Everyone is friendly, very motivated to do good, and works hard. Here’s a few bullet points on culture:Things are a bit more private in DC.Many in the community are less publicly affiliated with EA. There are a few reasons. The most common we’ve heard is that professionals are more hesitant to associate with any big ideas or belief systems that may not be popular in the places they work.There’s a less intense EA culture.EA DC’s members exist on a large spectrum of involvement with EA. While many members regularly use EA principles in their work and life, you’re likely to find fewer “all EA all the time” people than in other large EA hubs like the Bay Area.Many EA DC members work at non-EA organizations or do cause-specific work.The community tends to be a bit older.In the 2021 EA DC community survey, 50% of members were between 21-29 and 38% were between 30-39.Many people are spread across the DC suburbs, Maryland, and Northern Virginia.There are active groups focused on particular cause areasEA DC has active cause area meetup groups for AI, animal welfare, biosecurity, global health and development, and international security and foreign policy. To join a cause area meetup group just fill out this form or reach out to EA DC at Info@EffectiveAltruismDC.org.It's generally a good EA network to know people in.EAG have been hosted here.EA DC has members working in many different areas of government and policy work and is well-connected to the broader DC political scene.The community is more formal / professional than super social.EA DC is primarily a professional community. A lot of members form friendships in the group, but the primary goal of the organizers is to spread EA thinking, help members network professionally, give members access to effective careers, and promote effective giving.In comparison to places like the Bay Area, there is less overlap between the local rationalist groups and the EA group.EA groups and organizationsEA DC exists, is great, and is always looking for organizing / volunteer help. To get connected to EA DC fill out the welcome form and explore other resources here. Contact EA DC at Info@EffectiveAltruismDC.org...]]>
Andy Masley https://forum.effectivealtruism.org/posts/zQ7b9ghv3Tkd2LLNL/an-ea-s-guide-to-washington-dc Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An EA's Guide to Washington DC, published by Andy Masley on April 10, 2023 on The Effective Altruism Forum.Note: We (Andy and Elika) are posting this on the suggestion from this Forum post calling for more public guides for navigating EA hubs. This guide is not a representation of the views of everyone in the community. If you have suggestions for this guide or general feedback, please comment or message us.If you are visiting or new to the DC area and are looking to get connected to the local EA community, this guide is a useful place to start. This guide is most helpful if you’re considering moving to DC and want to get more context on the community and culture. We’re not trying to convince you to move or to be fully comprehensive.DC is a great place with a vibrant EA community! We hope you enjoy it and are welcomed warmly! To encourage that, feel free to reach out to any local community organizers listed in the People section!OverviewThe Washington Metropolitan Area includes DC itself and other nearby cities in Virginia and Maryland. It's the sixth most populated metropolitan area in the US and contains many distinct neighborhoods. The EA scene in DC is spread across different neighborhoods and there isn’t one specific cluster of EA activity.DC is one of the most active EA hubs. There are a lot of people focused on each of the major EA cause areas, especially AI, biosecurity, animal welfare, and global health. Many of the EAs who live in DC are working on policy directly or indirectly.DC EA CultureWhile we can’t speak for the view of everyone in our community, in our opinion, the EA culture in DC is great! Everyone is friendly, very motivated to do good, and works hard. Here’s a few bullet points on culture:Things are a bit more private in DC.Many in the community are less publicly affiliated with EA. There are a few reasons. The most common we’ve heard is that professionals are more hesitant to associate with any big ideas or belief systems that may not be popular in the places they work.There’s a less intense EA culture.EA DC’s members exist on a large spectrum of involvement with EA. While many members regularly use EA principles in their work and life, you’re likely to find fewer “all EA all the time” people than in other large EA hubs like the Bay Area.Many EA DC members work at non-EA organizations or do cause-specific work.The community tends to be a bit older.In the 2021 EA DC community survey, 50% of members were between 21-29 and 38% were between 30-39.Many people are spread across the DC suburbs, Maryland, and Northern Virginia.There are active groups focused on particular cause areasEA DC has active cause area meetup groups for AI, animal welfare, biosecurity, global health and development, and international security and foreign policy. To join a cause area meetup group just fill out this form or reach out to EA DC at Info@EffectiveAltruismDC.org.It's generally a good EA network to know people in.EAG have been hosted here.EA DC has members working in many different areas of government and policy work and is well-connected to the broader DC political scene.The community is more formal / professional than super social.EA DC is primarily a professional community. A lot of members form friendships in the group, but the primary goal of the organizers is to spread EA thinking, help members network professionally, give members access to effective careers, and promote effective giving.In comparison to places like the Bay Area, there is less overlap between the local rationalist groups and the EA group.EA groups and organizationsEA DC exists, is great, and is always looking for organizing / volunteer help. To get connected to EA DC fill out the welcome form and explore other resources here. Contact EA DC at Info@EffectiveAltruismDC.org...]]>
Mon, 10 Apr 2023 22:56:32 +0000 EA - An EA's Guide to Washington DC by Andy Masley Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An EA's Guide to Washington DC, published by Andy Masley on April 10, 2023 on The Effective Altruism Forum.Note: We (Andy and Elika) are posting this on the suggestion from this Forum post calling for more public guides for navigating EA hubs. This guide is not a representation of the views of everyone in the community. If you have suggestions for this guide or general feedback, please comment or message us.If you are visiting or new to the DC area and are looking to get connected to the local EA community, this guide is a useful place to start. This guide is most helpful if you’re considering moving to DC and want to get more context on the community and culture. We’re not trying to convince you to move or to be fully comprehensive.DC is a great place with a vibrant EA community! We hope you enjoy it and are welcomed warmly! To encourage that, feel free to reach out to any local community organizers listed in the People section!OverviewThe Washington Metropolitan Area includes DC itself and other nearby cities in Virginia and Maryland. It's the sixth most populated metropolitan area in the US and contains many distinct neighborhoods. The EA scene in DC is spread across different neighborhoods and there isn’t one specific cluster of EA activity.DC is one of the most active EA hubs. There are a lot of people focused on each of the major EA cause areas, especially AI, biosecurity, animal welfare, and global health. Many of the EAs who live in DC are working on policy directly or indirectly.DC EA CultureWhile we can’t speak for the view of everyone in our community, in our opinion, the EA culture in DC is great! Everyone is friendly, very motivated to do good, and works hard. Here’s a few bullet points on culture:Things are a bit more private in DC.Many in the community are less publicly affiliated with EA. There are a few reasons. The most common we’ve heard is that professionals are more hesitant to associate with any big ideas or belief systems that may not be popular in the places they work.There’s a less intense EA culture.EA DC’s members exist on a large spectrum of involvement with EA. While many members regularly use EA principles in their work and life, you’re likely to find fewer “all EA all the time” people than in other large EA hubs like the Bay Area.Many EA DC members work at non-EA organizations or do cause-specific work.The community tends to be a bit older.In the 2021 EA DC community survey, 50% of members were between 21-29 and 38% were between 30-39.Many people are spread across the DC suburbs, Maryland, and Northern Virginia.There are active groups focused on particular cause areasEA DC has active cause area meetup groups for AI, animal welfare, biosecurity, global health and development, and international security and foreign policy. To join a cause area meetup group just fill out this form or reach out to EA DC at Info@EffectiveAltruismDC.org.It's generally a good EA network to know people in.EAG have been hosted here.EA DC has members working in many different areas of government and policy work and is well-connected to the broader DC political scene.The community is more formal / professional than super social.EA DC is primarily a professional community. A lot of members form friendships in the group, but the primary goal of the organizers is to spread EA thinking, help members network professionally, give members access to effective careers, and promote effective giving.In comparison to places like the Bay Area, there is less overlap between the local rationalist groups and the EA group.EA groups and organizationsEA DC exists, is great, and is always looking for organizing / volunteer help. To get connected to EA DC fill out the welcome form and explore other resources here. Contact EA DC at Info@EffectiveAltruismDC.org...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An EA's Guide to Washington DC, published by Andy Masley on April 10, 2023 on The Effective Altruism Forum.Note: We (Andy and Elika) are posting this on the suggestion from this Forum post calling for more public guides for navigating EA hubs. This guide is not a representation of the views of everyone in the community. If you have suggestions for this guide or general feedback, please comment or message us.If you are visiting or new to the DC area and are looking to get connected to the local EA community, this guide is a useful place to start. This guide is most helpful if you’re considering moving to DC and want to get more context on the community and culture. We’re not trying to convince you to move or to be fully comprehensive.DC is a great place with a vibrant EA community! We hope you enjoy it and are welcomed warmly! To encourage that, feel free to reach out to any local community organizers listed in the People section!OverviewThe Washington Metropolitan Area includes DC itself and other nearby cities in Virginia and Maryland. It's the sixth most populated metropolitan area in the US and contains many distinct neighborhoods. The EA scene in DC is spread across different neighborhoods and there isn’t one specific cluster of EA activity.DC is one of the most active EA hubs. There are a lot of people focused on each of the major EA cause areas, especially AI, biosecurity, animal welfare, and global health. Many of the EAs who live in DC are working on policy directly or indirectly.DC EA CultureWhile we can’t speak for the view of everyone in our community, in our opinion, the EA culture in DC is great! Everyone is friendly, very motivated to do good, and works hard. Here’s a few bullet points on culture:Things are a bit more private in DC.Many in the community are less publicly affiliated with EA. There are a few reasons. The most common we’ve heard is that professionals are more hesitant to associate with any big ideas or belief systems that may not be popular in the places they work.There’s a less intense EA culture.EA DC’s members exist on a large spectrum of involvement with EA. While many members regularly use EA principles in their work and life, you’re likely to find fewer “all EA all the time” people than in other large EA hubs like the Bay Area.Many EA DC members work at non-EA organizations or do cause-specific work.The community tends to be a bit older.In the 2021 EA DC community survey, 50% of members were between 21-29 and 38% were between 30-39.Many people are spread across the DC suburbs, Maryland, and Northern Virginia.There are active groups focused on particular cause areasEA DC has active cause area meetup groups for AI, animal welfare, biosecurity, global health and development, and international security and foreign policy. To join a cause area meetup group just fill out this form or reach out to EA DC at Info@EffectiveAltruismDC.org.It's generally a good EA network to know people in.EAG have been hosted here.EA DC has members working in many different areas of government and policy work and is well-connected to the broader DC political scene.The community is more formal / professional than super social.EA DC is primarily a professional community. A lot of members form friendships in the group, but the primary goal of the organizers is to spread EA thinking, help members network professionally, give members access to effective careers, and promote effective giving.In comparison to places like the Bay Area, there is less overlap between the local rationalist groups and the EA group.EA groups and organizationsEA DC exists, is great, and is always looking for organizing / volunteer help. To get connected to EA DC fill out the welcome form and explore other resources here. Contact EA DC at Info@EffectiveAltruismDC.org...]]>
Andy Masley https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:56 None full 5552
BifFtXqaSyqjvDDdb_NL_EA_EA EA - Nuclear risk, its potential long-term impacts, and doing research on that: An introductory talk by MichaelA Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nuclear risk, its potential long-term impacts, & doing research on that: An introductory talk, published by MichaelA on April 10, 2023 on The Effective Altruism Forum.This is a Forum post version of a talk I gave in August 2022 to students interested in an impactful career. I'm publishing this because I expect some people to find it helpful as an accessible introduction to nuclear risk, why nuclear risk reduction may or may not be a longtermist priority, and longtermism-related cause prioritization research.This is a blog post, not a research report, meaning it was produced relatively quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.IntroductionSome clusters of views on nuclear riskOver the next few slides, I outline three clusters of views that seem to be common in mainstream/non-longtermist nuclear risk, in my experience. This is my own attempted clustering and my own (perhaps too uncharitable) terms for the views.The text in the slides is my impression of what someone whose views approximately fit the given cluster might say; I don't personally agree with all of these statements/framings myself, though I do think each cluster of views makes many valuable points.Above is the Our World in Data graph of nuclear weapons numbers by year. An "alarmed advocate" might point to that part of this graph – the ascent up a mountain of destruction and doom.Above is a different part of the same graph. An "unconcerned skeptic" might point to that part – the descent back down the mountain of doom, toward stability and reduced risk.A competitive realist might point to that evidence of rise and/or resurgence of nuclear powers whose national security interests differ from those of the US and allies.My nuclear risk researchUnfortunately much of that work will probably remain unfinished. But here are links to the things that I at least got to some shareable state:Database of nuclear risk estimates [draft]Nuclear Risk TournamentIntermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4)Nuclear risk research ideas: Summary & introduction9 mistakes to avoid when thinking about nuclear riskSome paths by which nuclear risk can affect the long-term future & some interventions that could helpOk, so let's imagine you want to help bring about a better world (in expectation) than the world we'd have without your efforts. What high-level end-goals might you focus on?Three possibilities that seem especially notable to me are better present/near-term future for humans, better present/near-term future for animals, or better long-term future. Let's say you decide to focus on the long-term future, or at least to see what that would imply. What are some still fairly high-level areas which seem potentially important for the long-term future?Ok, there's a fair few. Let's say you decide that you specifically (maybe due to comparative advantage) should focus on nuclear risk or global priorities research (e.g., specifically trying to see how important nuclear risk is relative to those other areas). So now let's think about why nuclear risk might matter for the long-term future. Then from there we can try to assess what interventions might be most valuable (by blocking the paths from nuclear risk to long-term future harms).Maybe let's first think about some ways nuclear war could occur, and some interventions that could stop each of those causes.Ok, but what about the path from nuclear war actually happening to the long-term future being worse. By what specific mechanisms could the world get worse? Can we intervene to prevent those mechanisms/paths, even if a nuclear war occurs? (It seems probably best to work both on "prevention" and "mitigation", as part of a defense in depth approach).Ok let'...]]>
MichaelA https://forum.effectivealtruism.org/posts/BifFtXqaSyqjvDDdb/nuclear-risk-its-potential-long-term-impacts-and-doing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nuclear risk, its potential long-term impacts, & doing research on that: An introductory talk, published by MichaelA on April 10, 2023 on The Effective Altruism Forum.This is a Forum post version of a talk I gave in August 2022 to students interested in an impactful career. I'm publishing this because I expect some people to find it helpful as an accessible introduction to nuclear risk, why nuclear risk reduction may or may not be a longtermist priority, and longtermism-related cause prioritization research.This is a blog post, not a research report, meaning it was produced relatively quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.IntroductionSome clusters of views on nuclear riskOver the next few slides, I outline three clusters of views that seem to be common in mainstream/non-longtermist nuclear risk, in my experience. This is my own attempted clustering and my own (perhaps too uncharitable) terms for the views.The text in the slides is my impression of what someone whose views approximately fit the given cluster might say; I don't personally agree with all of these statements/framings myself, though I do think each cluster of views makes many valuable points.Above is the Our World in Data graph of nuclear weapons numbers by year. An "alarmed advocate" might point to that part of this graph – the ascent up a mountain of destruction and doom.Above is a different part of the same graph. An "unconcerned skeptic" might point to that part – the descent back down the mountain of doom, toward stability and reduced risk.A competitive realist might point to that evidence of rise and/or resurgence of nuclear powers whose national security interests differ from those of the US and allies.My nuclear risk researchUnfortunately much of that work will probably remain unfinished. But here are links to the things that I at least got to some shareable state:Database of nuclear risk estimates [draft]Nuclear Risk TournamentIntermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4)Nuclear risk research ideas: Summary & introduction9 mistakes to avoid when thinking about nuclear riskSome paths by which nuclear risk can affect the long-term future & some interventions that could helpOk, so let's imagine you want to help bring about a better world (in expectation) than the world we'd have without your efforts. What high-level end-goals might you focus on?Three possibilities that seem especially notable to me are better present/near-term future for humans, better present/near-term future for animals, or better long-term future. Let's say you decide to focus on the long-term future, or at least to see what that would imply. What are some still fairly high-level areas which seem potentially important for the long-term future?Ok, there's a fair few. Let's say you decide that you specifically (maybe due to comparative advantage) should focus on nuclear risk or global priorities research (e.g., specifically trying to see how important nuclear risk is relative to those other areas). So now let's think about why nuclear risk might matter for the long-term future. Then from there we can try to assess what interventions might be most valuable (by blocking the paths from nuclear risk to long-term future harms).Maybe let's first think about some ways nuclear war could occur, and some interventions that could stop each of those causes.Ok, but what about the path from nuclear war actually happening to the long-term future being worse. By what specific mechanisms could the world get worse? Can we intervene to prevent those mechanisms/paths, even if a nuclear war occurs? (It seems probably best to work both on "prevention" and "mitigation", as part of a defense in depth approach).Ok let'...]]>
Mon, 10 Apr 2023 17:57:04 +0000 EA - Nuclear risk, its potential long-term impacts, and doing research on that: An introductory talk by MichaelA Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nuclear risk, its potential long-term impacts, & doing research on that: An introductory talk, published by MichaelA on April 10, 2023 on The Effective Altruism Forum.This is a Forum post version of a talk I gave in August 2022 to students interested in an impactful career. I'm publishing this because I expect some people to find it helpful as an accessible introduction to nuclear risk, why nuclear risk reduction may or may not be a longtermist priority, and longtermism-related cause prioritization research.This is a blog post, not a research report, meaning it was produced relatively quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.IntroductionSome clusters of views on nuclear riskOver the next few slides, I outline three clusters of views that seem to be common in mainstream/non-longtermist nuclear risk, in my experience. This is my own attempted clustering and my own (perhaps too uncharitable) terms for the views.The text in the slides is my impression of what someone whose views approximately fit the given cluster might say; I don't personally agree with all of these statements/framings myself, though I do think each cluster of views makes many valuable points.Above is the Our World in Data graph of nuclear weapons numbers by year. An "alarmed advocate" might point to that part of this graph – the ascent up a mountain of destruction and doom.Above is a different part of the same graph. An "unconcerned skeptic" might point to that part – the descent back down the mountain of doom, toward stability and reduced risk.A competitive realist might point to that evidence of rise and/or resurgence of nuclear powers whose national security interests differ from those of the US and allies.My nuclear risk researchUnfortunately much of that work will probably remain unfinished. But here are links to the things that I at least got to some shareable state:Database of nuclear risk estimates [draft]Nuclear Risk TournamentIntermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4)Nuclear risk research ideas: Summary & introduction9 mistakes to avoid when thinking about nuclear riskSome paths by which nuclear risk can affect the long-term future & some interventions that could helpOk, so let's imagine you want to help bring about a better world (in expectation) than the world we'd have without your efforts. What high-level end-goals might you focus on?Three possibilities that seem especially notable to me are better present/near-term future for humans, better present/near-term future for animals, or better long-term future. Let's say you decide to focus on the long-term future, or at least to see what that would imply. What are some still fairly high-level areas which seem potentially important for the long-term future?Ok, there's a fair few. Let's say you decide that you specifically (maybe due to comparative advantage) should focus on nuclear risk or global priorities research (e.g., specifically trying to see how important nuclear risk is relative to those other areas). So now let's think about why nuclear risk might matter for the long-term future. Then from there we can try to assess what interventions might be most valuable (by blocking the paths from nuclear risk to long-term future harms).Maybe let's first think about some ways nuclear war could occur, and some interventions that could stop each of those causes.Ok, but what about the path from nuclear war actually happening to the long-term future being worse. By what specific mechanisms could the world get worse? Can we intervene to prevent those mechanisms/paths, even if a nuclear war occurs? (It seems probably best to work both on "prevention" and "mitigation", as part of a defense in depth approach).Ok let'...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nuclear risk, its potential long-term impacts, & doing research on that: An introductory talk, published by MichaelA on April 10, 2023 on The Effective Altruism Forum.This is a Forum post version of a talk I gave in August 2022 to students interested in an impactful career. I'm publishing this because I expect some people to find it helpful as an accessible introduction to nuclear risk, why nuclear risk reduction may or may not be a longtermist priority, and longtermism-related cause prioritization research.This is a blog post, not a research report, meaning it was produced relatively quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.IntroductionSome clusters of views on nuclear riskOver the next few slides, I outline three clusters of views that seem to be common in mainstream/non-longtermist nuclear risk, in my experience. This is my own attempted clustering and my own (perhaps too uncharitable) terms for the views.The text in the slides is my impression of what someone whose views approximately fit the given cluster might say; I don't personally agree with all of these statements/framings myself, though I do think each cluster of views makes many valuable points.Above is the Our World in Data graph of nuclear weapons numbers by year. An "alarmed advocate" might point to that part of this graph – the ascent up a mountain of destruction and doom.Above is a different part of the same graph. An "unconcerned skeptic" might point to that part – the descent back down the mountain of doom, toward stability and reduced risk.A competitive realist might point to that evidence of rise and/or resurgence of nuclear powers whose national security interests differ from those of the US and allies.My nuclear risk researchUnfortunately much of that work will probably remain unfinished. But here are links to the things that I at least got to some shareable state:Database of nuclear risk estimates [draft]Nuclear Risk TournamentIntermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4)Nuclear risk research ideas: Summary & introduction9 mistakes to avoid when thinking about nuclear riskSome paths by which nuclear risk can affect the long-term future & some interventions that could helpOk, so let's imagine you want to help bring about a better world (in expectation) than the world we'd have without your efforts. What high-level end-goals might you focus on?Three possibilities that seem especially notable to me are better present/near-term future for humans, better present/near-term future for animals, or better long-term future. Let's say you decide to focus on the long-term future, or at least to see what that would imply. What are some still fairly high-level areas which seem potentially important for the long-term future?Ok, there's a fair few. Let's say you decide that you specifically (maybe due to comparative advantage) should focus on nuclear risk or global priorities research (e.g., specifically trying to see how important nuclear risk is relative to those other areas). So now let's think about why nuclear risk might matter for the long-term future. Then from there we can try to assess what interventions might be most valuable (by blocking the paths from nuclear risk to long-term future harms).Maybe let's first think about some ways nuclear war could occur, and some interventions that could stop each of those causes.Ok, but what about the path from nuclear war actually happening to the long-term future being worse. By what specific mechanisms could the world get worse? Can we intervene to prevent those mechanisms/paths, even if a nuclear war occurs? (It seems probably best to work both on "prevention" and "mitigation", as part of a defense in depth approach).Ok let'...]]>
MichaelA https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:49 None full 5544
hat6TafzAoDx97N6j_NL_EA_EA EA - What the Moral Truth might be makes no difference to what will happen by Jim Buhler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What the Moral Truth might be makes no difference to what will happen, published by Jim Buhler on April 9, 2023 on The Effective Altruism Forum.Many longtermists seem hopeful that our successors (or any advanced civilization/superintelligence) will eventually act in accordance with some moral truth. While I’m sympathetic to some forms of moral realism, I believe that such a scenario is fairly unlikely for any civilization and even more so for the most advanced/expansionist ones. This post briefly explains why.To be clear, my case does under no circumstances imply that we should not act according to what we think might be a moral truth. I simply argue that we can't assume that our successors -- or any powerful civilization -- will "do the (objectively) right thing", which matters for longtermist cause prioritization.Epistemic status: Since I believe the ideas in this post to be less important than those in future ones within this sequence, I wrote it quickly and didn’t ask anyone for thorough feedback before posting, which makes me think I’m more likely than usual to have missed important considerations. Let me know what you think.Update April 10th: When I first posted this, the title was "It Doesn't Matter what the Moral Truth might be". I realized this was misleading. It was making it look like I was making a strong normative claim regarding what matters while my goal was to predict what might happen, so I changed it.Rare are those who will eventually act in accordance with some moral truthFor agents to do what might objectively be the best thing to do, you need all these conditions to be met:There is a moral truth.It is possible to “find it” and recognize it as such.They find something they recognize as a moral truth.They (unconditionally) accept it, even if it is highly counterintuitive.The thing they found is actually the moral truth. No normative mistake.They succeed at acting in accordance with it. No practical mistake.They stick to this forever. No value drift.I think these seven conditions are generally quite unlikely to be all met at the same time, mainly for the following reasons:(#1) While I find compelling the argument that (some of) our subjective experiences are instantiations of objective (dis)value (see Rawlette 2016; Vinding 2014), I am highly skeptical about claims of moral truths that are not completely dependent on sentience.(#2) I don’t see why we should assume it is possible to “find” (with a sufficient degree of certainty) the moral truth, especially if it is more complex than – or different from – something like “pleasure is good and suffering is bad.”(#3 and #4) If they “find” a moral truth and don’t like what it says, why would they try to act in accordance with it?(#3, #4, #5, and #7) Within a civilization, we should expect the agents who have the values that are the most adapted/competitive to survival, replication, and expansion to eventually be selected for (see, e.g., Bostrom 2004; Hanson 1998), and I see no reason to suppose the moral truth is particularly well adapted to those things.Even if they’re not rare, their impact will stay marginalNow, let’s actually assume that many advanced civilizations converge on THE moral truth and effectively optimize for whatever it says. The thing is that, for the same reason why we may expect agents “adopting” the moral truth to be selected against within a civilization (see the last bullet point above), we may expect civilizations adopting the moral truth to be less competitive than those who have the values that are the most adaptive and adapted to space colonization races.My forthcoming next post will investigate this selection effect in more detail, but here is an intuition pump: Say humanity wants to follow the moral truth which is to maximize the sum X−Y, where X is somethi...]]>
Jim Buhler https://forum.effectivealtruism.org/posts/hat6TafzAoDx97N6j/what-the-moral-truth-might-be-makes-no-difference-to-what Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What the Moral Truth might be makes no difference to what will happen, published by Jim Buhler on April 9, 2023 on The Effective Altruism Forum.Many longtermists seem hopeful that our successors (or any advanced civilization/superintelligence) will eventually act in accordance with some moral truth. While I’m sympathetic to some forms of moral realism, I believe that such a scenario is fairly unlikely for any civilization and even more so for the most advanced/expansionist ones. This post briefly explains why.To be clear, my case does under no circumstances imply that we should not act according to what we think might be a moral truth. I simply argue that we can't assume that our successors -- or any powerful civilization -- will "do the (objectively) right thing", which matters for longtermist cause prioritization.Epistemic status: Since I believe the ideas in this post to be less important than those in future ones within this sequence, I wrote it quickly and didn’t ask anyone for thorough feedback before posting, which makes me think I’m more likely than usual to have missed important considerations. Let me know what you think.Update April 10th: When I first posted this, the title was "It Doesn't Matter what the Moral Truth might be". I realized this was misleading. It was making it look like I was making a strong normative claim regarding what matters while my goal was to predict what might happen, so I changed it.Rare are those who will eventually act in accordance with some moral truthFor agents to do what might objectively be the best thing to do, you need all these conditions to be met:There is a moral truth.It is possible to “find it” and recognize it as such.They find something they recognize as a moral truth.They (unconditionally) accept it, even if it is highly counterintuitive.The thing they found is actually the moral truth. No normative mistake.They succeed at acting in accordance with it. No practical mistake.They stick to this forever. No value drift.I think these seven conditions are generally quite unlikely to be all met at the same time, mainly for the following reasons:(#1) While I find compelling the argument that (some of) our subjective experiences are instantiations of objective (dis)value (see Rawlette 2016; Vinding 2014), I am highly skeptical about claims of moral truths that are not completely dependent on sentience.(#2) I don’t see why we should assume it is possible to “find” (with a sufficient degree of certainty) the moral truth, especially if it is more complex than – or different from – something like “pleasure is good and suffering is bad.”(#3 and #4) If they “find” a moral truth and don’t like what it says, why would they try to act in accordance with it?(#3, #4, #5, and #7) Within a civilization, we should expect the agents who have the values that are the most adapted/competitive to survival, replication, and expansion to eventually be selected for (see, e.g., Bostrom 2004; Hanson 1998), and I see no reason to suppose the moral truth is particularly well adapted to those things.Even if they’re not rare, their impact will stay marginalNow, let’s actually assume that many advanced civilizations converge on THE moral truth and effectively optimize for whatever it says. The thing is that, for the same reason why we may expect agents “adopting” the moral truth to be selected against within a civilization (see the last bullet point above), we may expect civilizations adopting the moral truth to be less competitive than those who have the values that are the most adaptive and adapted to space colonization races.My forthcoming next post will investigate this selection effect in more detail, but here is an intuition pump: Say humanity wants to follow the moral truth which is to maximize the sum X−Y, where X is somethi...]]>
Mon, 10 Apr 2023 14:39:58 +0000 EA - What the Moral Truth might be makes no difference to what will happen by Jim Buhler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What the Moral Truth might be makes no difference to what will happen, published by Jim Buhler on April 9, 2023 on The Effective Altruism Forum.Many longtermists seem hopeful that our successors (or any advanced civilization/superintelligence) will eventually act in accordance with some moral truth. While I’m sympathetic to some forms of moral realism, I believe that such a scenario is fairly unlikely for any civilization and even more so for the most advanced/expansionist ones. This post briefly explains why.To be clear, my case does under no circumstances imply that we should not act according to what we think might be a moral truth. I simply argue that we can't assume that our successors -- or any powerful civilization -- will "do the (objectively) right thing", which matters for longtermist cause prioritization.Epistemic status: Since I believe the ideas in this post to be less important than those in future ones within this sequence, I wrote it quickly and didn’t ask anyone for thorough feedback before posting, which makes me think I’m more likely than usual to have missed important considerations. Let me know what you think.Update April 10th: When I first posted this, the title was "It Doesn't Matter what the Moral Truth might be". I realized this was misleading. It was making it look like I was making a strong normative claim regarding what matters while my goal was to predict what might happen, so I changed it.Rare are those who will eventually act in accordance with some moral truthFor agents to do what might objectively be the best thing to do, you need all these conditions to be met:There is a moral truth.It is possible to “find it” and recognize it as such.They find something they recognize as a moral truth.They (unconditionally) accept it, even if it is highly counterintuitive.The thing they found is actually the moral truth. No normative mistake.They succeed at acting in accordance with it. No practical mistake.They stick to this forever. No value drift.I think these seven conditions are generally quite unlikely to be all met at the same time, mainly for the following reasons:(#1) While I find compelling the argument that (some of) our subjective experiences are instantiations of objective (dis)value (see Rawlette 2016; Vinding 2014), I am highly skeptical about claims of moral truths that are not completely dependent on sentience.(#2) I don’t see why we should assume it is possible to “find” (with a sufficient degree of certainty) the moral truth, especially if it is more complex than – or different from – something like “pleasure is good and suffering is bad.”(#3 and #4) If they “find” a moral truth and don’t like what it says, why would they try to act in accordance with it?(#3, #4, #5, and #7) Within a civilization, we should expect the agents who have the values that are the most adapted/competitive to survival, replication, and expansion to eventually be selected for (see, e.g., Bostrom 2004; Hanson 1998), and I see no reason to suppose the moral truth is particularly well adapted to those things.Even if they’re not rare, their impact will stay marginalNow, let’s actually assume that many advanced civilizations converge on THE moral truth and effectively optimize for whatever it says. The thing is that, for the same reason why we may expect agents “adopting” the moral truth to be selected against within a civilization (see the last bullet point above), we may expect civilizations adopting the moral truth to be less competitive than those who have the values that are the most adaptive and adapted to space colonization races.My forthcoming next post will investigate this selection effect in more detail, but here is an intuition pump: Say humanity wants to follow the moral truth which is to maximize the sum X−Y, where X is somethi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What the Moral Truth might be makes no difference to what will happen, published by Jim Buhler on April 9, 2023 on The Effective Altruism Forum.Many longtermists seem hopeful that our successors (or any advanced civilization/superintelligence) will eventually act in accordance with some moral truth. While I’m sympathetic to some forms of moral realism, I believe that such a scenario is fairly unlikely for any civilization and even more so for the most advanced/expansionist ones. This post briefly explains why.To be clear, my case does under no circumstances imply that we should not act according to what we think might be a moral truth. I simply argue that we can't assume that our successors -- or any powerful civilization -- will "do the (objectively) right thing", which matters for longtermist cause prioritization.Epistemic status: Since I believe the ideas in this post to be less important than those in future ones within this sequence, I wrote it quickly and didn’t ask anyone for thorough feedback before posting, which makes me think I’m more likely than usual to have missed important considerations. Let me know what you think.Update April 10th: When I first posted this, the title was "It Doesn't Matter what the Moral Truth might be". I realized this was misleading. It was making it look like I was making a strong normative claim regarding what matters while my goal was to predict what might happen, so I changed it.Rare are those who will eventually act in accordance with some moral truthFor agents to do what might objectively be the best thing to do, you need all these conditions to be met:There is a moral truth.It is possible to “find it” and recognize it as such.They find something they recognize as a moral truth.They (unconditionally) accept it, even if it is highly counterintuitive.The thing they found is actually the moral truth. No normative mistake.They succeed at acting in accordance with it. No practical mistake.They stick to this forever. No value drift.I think these seven conditions are generally quite unlikely to be all met at the same time, mainly for the following reasons:(#1) While I find compelling the argument that (some of) our subjective experiences are instantiations of objective (dis)value (see Rawlette 2016; Vinding 2014), I am highly skeptical about claims of moral truths that are not completely dependent on sentience.(#2) I don’t see why we should assume it is possible to “find” (with a sufficient degree of certainty) the moral truth, especially if it is more complex than – or different from – something like “pleasure is good and suffering is bad.”(#3 and #4) If they “find” a moral truth and don’t like what it says, why would they try to act in accordance with it?(#3, #4, #5, and #7) Within a civilization, we should expect the agents who have the values that are the most adapted/competitive to survival, replication, and expansion to eventually be selected for (see, e.g., Bostrom 2004; Hanson 1998), and I see no reason to suppose the moral truth is particularly well adapted to those things.Even if they’re not rare, their impact will stay marginalNow, let’s actually assume that many advanced civilizations converge on THE moral truth and effectively optimize for whatever it says. The thing is that, for the same reason why we may expect agents “adopting” the moral truth to be selected against within a civilization (see the last bullet point above), we may expect civilizations adopting the moral truth to be less competitive than those who have the values that are the most adaptive and adapted to space colonization races.My forthcoming next post will investigate this selection effect in more detail, but here is an intuition pump: Say humanity wants to follow the moral truth which is to maximize the sum X−Y, where X is somethi...]]>
Jim Buhler https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:05 None full 5545
Cs5Zx3ncLWcoYKSog_NL_EA_EA EA - Pausing AI Developments Isn't Enough. We Need to Shut it All Down by EliezerYudkowsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pausing AI Developments Isn't Enough. We Need to Shut it All Down, published by EliezerYudkowsky on April 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
EliezerYudkowsky https://forum.effectivealtruism.org/posts/Cs5Zx3ncLWcoYKSog/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pausing AI Developments Isn't Enough. We Need to Shut it All Down, published by EliezerYudkowsky on April 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 10 Apr 2023 06:38:20 +0000 EA - Pausing AI Developments Isn't Enough. We Need to Shut it All Down by EliezerYudkowsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pausing AI Developments Isn't Enough. We Need to Shut it All Down, published by EliezerYudkowsky on April 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pausing AI Developments Isn't Enough. We Need to Shut it All Down, published by EliezerYudkowsky on April 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
EliezerYudkowsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:23 None full 5542
5HdE2JikwJLzwzhag_NL_EA_EA EA - EA and “The correct response to uncertainty is not half-speed” by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & “The correct response to uncertainty is not half-speed”, published by Lizka on April 9, 2023 on The Effective Altruism Forum.TL;DR: When we're unsure about what to do, we sometimes naively take the "average" of the obvious options — despite the fact that a different strategy is often better. For example, if you're not sure if you're in the right job, continuing to do your job as before but with less energy ("going half-speed") is probably not the best approach. Note, however, that sometimes speed itself is the problem, in which case "half-speed" can be totally reasonable — I discuss this and some other considerations below.I've referenced this phenomenon in some conversations recently, so I'm sharing a relevant post from 2016 — The correct response to uncertainty is not half-speed — and sketching out some examples I've seen.The correct response to uncertainty is not half-speedThe central example in the post is a time when the author was driving along a long stretch of road and started wondering if she’d passed her hotel. So she continued at half-speed, trying to decide if she should keep going or turn around. After a while, she realized:If the hotel was ahead of me, I'd get there fastest if I kept going 60mph. And if the hotel was behind me, I'd get there fastest by heading at 60 miles per hour in the other direction. And if I wasn't going to turn around yet -- if my best bet given the uncertainty was to check N more miles of highway first, before I turned around -- then, again, I'd get there fastest by choosing a value of N, speeding along at 60 miles per hour until my odometer said I'd gone N miles, and then turning around and heading at 60 miles per hour in the opposite direction.Either way, fullspeed was best. My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward. So, since I'm uncertain, I should go forward at half-speed!" But averages don't actually work that way.[1][...] [From a comment] Often a person should hedge bets in some fashion, or should take some action under uncertainty that is different from the action one would take if one were certain of model 1 or of model 2. The point is that "hedging" or "acting under uncertainty" in this way is different in many particulars from the sort of "kind of working" that people often end up accidentally doing, from a naiver sort of average. Often it e.g. involves running info-gathering tests at full speed, one after another.Opinions expressed here are mine, not my employer’s, not the Forum’s, etc. I wrote this fast, so it’s definitely not an exhaustive list of examples or considerations and is probably wrong in important places.Assorted links that seem related to the postFalse compromises & the fallacy of the middle / the argument to moderation / the middle ground fallacy (example link - I don’t know what the right term here is, or if there’s an excellent explanation)Dive in and the explanation of “split and commit” hereWhere I’ve seen the “half-speed” phenomenon recentlyI think that I’ve seen multiple instances of each of these in the past few months. I’m not sure that all of these directly stem from the phenomenon described above — there might be better descriptions for what’s going on — but they seem quite related.Jobs. Someone is unsure if their role is a good fit (or if it's the most impactful option, etc.) for them. So they continue working in it, but put less energy into it.What you might do instead: set aside time to evaluate your options and fit (and switch jobs based on that), consider setting up some tests, see if you can change or improve things in your current job (talk to your manager, etc.), decide that it’s a bad time to think about this and that you'll re-evaluate at a set time (schedu...]]>
Lizka https://forum.effectivealtruism.org/posts/5HdE2JikwJLzwzhag/ea-and-the-correct-response-to-uncertainty-is-not-half-speed Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & “The correct response to uncertainty is not half-speed”, published by Lizka on April 9, 2023 on The Effective Altruism Forum.TL;DR: When we're unsure about what to do, we sometimes naively take the "average" of the obvious options — despite the fact that a different strategy is often better. For example, if you're not sure if you're in the right job, continuing to do your job as before but with less energy ("going half-speed") is probably not the best approach. Note, however, that sometimes speed itself is the problem, in which case "half-speed" can be totally reasonable — I discuss this and some other considerations below.I've referenced this phenomenon in some conversations recently, so I'm sharing a relevant post from 2016 — The correct response to uncertainty is not half-speed — and sketching out some examples I've seen.The correct response to uncertainty is not half-speedThe central example in the post is a time when the author was driving along a long stretch of road and started wondering if she’d passed her hotel. So she continued at half-speed, trying to decide if she should keep going or turn around. After a while, she realized:If the hotel was ahead of me, I'd get there fastest if I kept going 60mph. And if the hotel was behind me, I'd get there fastest by heading at 60 miles per hour in the other direction. And if I wasn't going to turn around yet -- if my best bet given the uncertainty was to check N more miles of highway first, before I turned around -- then, again, I'd get there fastest by choosing a value of N, speeding along at 60 miles per hour until my odometer said I'd gone N miles, and then turning around and heading at 60 miles per hour in the opposite direction.Either way, fullspeed was best. My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward. So, since I'm uncertain, I should go forward at half-speed!" But averages don't actually work that way.[1][...] [From a comment] Often a person should hedge bets in some fashion, or should take some action under uncertainty that is different from the action one would take if one were certain of model 1 or of model 2. The point is that "hedging" or "acting under uncertainty" in this way is different in many particulars from the sort of "kind of working" that people often end up accidentally doing, from a naiver sort of average. Often it e.g. involves running info-gathering tests at full speed, one after another.Opinions expressed here are mine, not my employer’s, not the Forum’s, etc. I wrote this fast, so it’s definitely not an exhaustive list of examples or considerations and is probably wrong in important places.Assorted links that seem related to the postFalse compromises & the fallacy of the middle / the argument to moderation / the middle ground fallacy (example link - I don’t know what the right term here is, or if there’s an excellent explanation)Dive in and the explanation of “split and commit” hereWhere I’ve seen the “half-speed” phenomenon recentlyI think that I’ve seen multiple instances of each of these in the past few months. I’m not sure that all of these directly stem from the phenomenon described above — there might be better descriptions for what’s going on — but they seem quite related.Jobs. Someone is unsure if their role is a good fit (or if it's the most impactful option, etc.) for them. So they continue working in it, but put less energy into it.What you might do instead: set aside time to evaluate your options and fit (and switch jobs based on that), consider setting up some tests, see if you can change or improve things in your current job (talk to your manager, etc.), decide that it’s a bad time to think about this and that you'll re-evaluate at a set time (schedu...]]>
Sun, 09 Apr 2023 21:04:53 +0000 EA - EA and “The correct response to uncertainty is not half-speed” by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & “The correct response to uncertainty is not half-speed”, published by Lizka on April 9, 2023 on The Effective Altruism Forum.TL;DR: When we're unsure about what to do, we sometimes naively take the "average" of the obvious options — despite the fact that a different strategy is often better. For example, if you're not sure if you're in the right job, continuing to do your job as before but with less energy ("going half-speed") is probably not the best approach. Note, however, that sometimes speed itself is the problem, in which case "half-speed" can be totally reasonable — I discuss this and some other considerations below.I've referenced this phenomenon in some conversations recently, so I'm sharing a relevant post from 2016 — The correct response to uncertainty is not half-speed — and sketching out some examples I've seen.The correct response to uncertainty is not half-speedThe central example in the post is a time when the author was driving along a long stretch of road and started wondering if she’d passed her hotel. So she continued at half-speed, trying to decide if she should keep going or turn around. After a while, she realized:If the hotel was ahead of me, I'd get there fastest if I kept going 60mph. And if the hotel was behind me, I'd get there fastest by heading at 60 miles per hour in the other direction. And if I wasn't going to turn around yet -- if my best bet given the uncertainty was to check N more miles of highway first, before I turned around -- then, again, I'd get there fastest by choosing a value of N, speeding along at 60 miles per hour until my odometer said I'd gone N miles, and then turning around and heading at 60 miles per hour in the opposite direction.Either way, fullspeed was best. My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward. So, since I'm uncertain, I should go forward at half-speed!" But averages don't actually work that way.[1][...] [From a comment] Often a person should hedge bets in some fashion, or should take some action under uncertainty that is different from the action one would take if one were certain of model 1 or of model 2. The point is that "hedging" or "acting under uncertainty" in this way is different in many particulars from the sort of "kind of working" that people often end up accidentally doing, from a naiver sort of average. Often it e.g. involves running info-gathering tests at full speed, one after another.Opinions expressed here are mine, not my employer’s, not the Forum’s, etc. I wrote this fast, so it’s definitely not an exhaustive list of examples or considerations and is probably wrong in important places.Assorted links that seem related to the postFalse compromises & the fallacy of the middle / the argument to moderation / the middle ground fallacy (example link - I don’t know what the right term here is, or if there’s an excellent explanation)Dive in and the explanation of “split and commit” hereWhere I’ve seen the “half-speed” phenomenon recentlyI think that I’ve seen multiple instances of each of these in the past few months. I’m not sure that all of these directly stem from the phenomenon described above — there might be better descriptions for what’s going on — but they seem quite related.Jobs. Someone is unsure if their role is a good fit (or if it's the most impactful option, etc.) for them. So they continue working in it, but put less energy into it.What you might do instead: set aside time to evaluate your options and fit (and switch jobs based on that), consider setting up some tests, see if you can change or improve things in your current job (talk to your manager, etc.), decide that it’s a bad time to think about this and that you'll re-evaluate at a set time (schedu...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & “The correct response to uncertainty is not half-speed”, published by Lizka on April 9, 2023 on The Effective Altruism Forum.TL;DR: When we're unsure about what to do, we sometimes naively take the "average" of the obvious options — despite the fact that a different strategy is often better. For example, if you're not sure if you're in the right job, continuing to do your job as before but with less energy ("going half-speed") is probably not the best approach. Note, however, that sometimes speed itself is the problem, in which case "half-speed" can be totally reasonable — I discuss this and some other considerations below.I've referenced this phenomenon in some conversations recently, so I'm sharing a relevant post from 2016 — The correct response to uncertainty is not half-speed — and sketching out some examples I've seen.The correct response to uncertainty is not half-speedThe central example in the post is a time when the author was driving along a long stretch of road and started wondering if she’d passed her hotel. So she continued at half-speed, trying to decide if she should keep going or turn around. After a while, she realized:If the hotel was ahead of me, I'd get there fastest if I kept going 60mph. And if the hotel was behind me, I'd get there fastest by heading at 60 miles per hour in the other direction. And if I wasn't going to turn around yet -- if my best bet given the uncertainty was to check N more miles of highway first, before I turned around -- then, again, I'd get there fastest by choosing a value of N, speeding along at 60 miles per hour until my odometer said I'd gone N miles, and then turning around and heading at 60 miles per hour in the opposite direction.Either way, fullspeed was best. My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward. So, since I'm uncertain, I should go forward at half-speed!" But averages don't actually work that way.[1][...] [From a comment] Often a person should hedge bets in some fashion, or should take some action under uncertainty that is different from the action one would take if one were certain of model 1 or of model 2. The point is that "hedging" or "acting under uncertainty" in this way is different in many particulars from the sort of "kind of working" that people often end up accidentally doing, from a naiver sort of average. Often it e.g. involves running info-gathering tests at full speed, one after another.Opinions expressed here are mine, not my employer’s, not the Forum’s, etc. I wrote this fast, so it’s definitely not an exhaustive list of examples or considerations and is probably wrong in important places.Assorted links that seem related to the postFalse compromises & the fallacy of the middle / the argument to moderation / the middle ground fallacy (example link - I don’t know what the right term here is, or if there’s an excellent explanation)Dive in and the explanation of “split and commit” hereWhere I’ve seen the “half-speed” phenomenon recentlyI think that I’ve seen multiple instances of each of these in the past few months. I’m not sure that all of these directly stem from the phenomenon described above — there might be better descriptions for what’s going on — but they seem quite related.Jobs. Someone is unsure if their role is a good fit (or if it's the most impactful option, etc.) for them. So they continue working in it, but put less energy into it.What you might do instead: set aside time to evaluate your options and fit (and switch jobs based on that), consider setting up some tests, see if you can change or improve things in your current job (talk to your manager, etc.), decide that it’s a bad time to think about this and that you'll re-evaluate at a set time (schedu...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:34 None full 5537
vmGfpopoReDD3QHBK_NL_EA_EA EA - SERI MATS - Summer 2023 Cohort by Aris Richardson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SERI MATS - Summer 2023 Cohort, published by Aris Richardson on April 8, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Aris Richardson https://forum.effectivealtruism.org/posts/vmGfpopoReDD3QHBK/seri-mats-summer-2023-cohort Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SERI MATS - Summer 2023 Cohort, published by Aris Richardson on April 8, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 09 Apr 2023 09:45:06 +0000 EA - SERI MATS - Summer 2023 Cohort by Aris Richardson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SERI MATS - Summer 2023 Cohort, published by Aris Richardson on April 8, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SERI MATS - Summer 2023 Cohort, published by Aris Richardson on April 8, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Aris Richardson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:22 None full 5535
uDXyphdhaWxvAzwkZ_NL_EA_EA EA - GPTs are Predictors, not Imitators by EliezerYudkowsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPTs are Predictors, not Imitators, published by EliezerYudkowsky on April 8, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
EliezerYudkowsky https://forum.effectivealtruism.org/posts/uDXyphdhaWxvAzwkZ/gpts-are-predictors-not-imitators Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPTs are Predictors, not Imitators, published by EliezerYudkowsky on April 8, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 08 Apr 2023 23:09:42 +0000 EA - GPTs are Predictors, not Imitators by EliezerYudkowsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPTs are Predictors, not Imitators, published by EliezerYudkowsky on April 8, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPTs are Predictors, not Imitators, published by EliezerYudkowsky on April 8, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
EliezerYudkowsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:22 None full 5538
a2KEyLaXzBADb8jgg_NL_EA_EA EA - Can we evaluate the "tool versus agent" AGI prediction? by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can we evaluate the "tool versus agent" AGI prediction?, published by Ben West on April 8, 2023 on The Effective Altruism Forum.In 2012, Holden Karnofsky critiqued MIRI (then SI) by saying "SI appears to neglect the potentially important distinction between 'tool' and 'agent' AI." He particularly claimed:Is a tool-AGI possible? I believe that it is, and furthermore that it ought to be our default picture of how AGI will workI understand this to be the first introduction of the "tool versus agent" ontology, and it is a helpful (relatively) concrete prediction. Eliezer replied here, making the following summarized points (among others):Tool AI is nontrivialTool AI is not obviously the way AGI should or will be developedGwern more directly replied by saying:AIs limited to pure computation (Tool AIs) supporting humans, will be less intelligent, efficient, and economically valuable than more autonomous reinforcement-learning AIs (Agent AIs) who act on their own and meta-learn, because all problems are reinforcement-learning problems.11 years later, can we evaluate the accuracy of these predictions?Some Bayes points go to LW commenter shminux for saying that this Holden kid seems like he's going placesThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben West https://forum.effectivealtruism.org/posts/a2KEyLaXzBADb8jgg/can-we-evaluate-the-tool-versus-agent-agi-prediction Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can we evaluate the "tool versus agent" AGI prediction?, published by Ben West on April 8, 2023 on The Effective Altruism Forum.In 2012, Holden Karnofsky critiqued MIRI (then SI) by saying "SI appears to neglect the potentially important distinction between 'tool' and 'agent' AI." He particularly claimed:Is a tool-AGI possible? I believe that it is, and furthermore that it ought to be our default picture of how AGI will workI understand this to be the first introduction of the "tool versus agent" ontology, and it is a helpful (relatively) concrete prediction. Eliezer replied here, making the following summarized points (among others):Tool AI is nontrivialTool AI is not obviously the way AGI should or will be developedGwern more directly replied by saying:AIs limited to pure computation (Tool AIs) supporting humans, will be less intelligent, efficient, and economically valuable than more autonomous reinforcement-learning AIs (Agent AIs) who act on their own and meta-learn, because all problems are reinforcement-learning problems.11 years later, can we evaluate the accuracy of these predictions?Some Bayes points go to LW commenter shminux for saying that this Holden kid seems like he's going placesThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 08 Apr 2023 22:51:49 +0000 EA - Can we evaluate the "tool versus agent" AGI prediction? by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can we evaluate the "tool versus agent" AGI prediction?, published by Ben West on April 8, 2023 on The Effective Altruism Forum.In 2012, Holden Karnofsky critiqued MIRI (then SI) by saying "SI appears to neglect the potentially important distinction between 'tool' and 'agent' AI." He particularly claimed:Is a tool-AGI possible? I believe that it is, and furthermore that it ought to be our default picture of how AGI will workI understand this to be the first introduction of the "tool versus agent" ontology, and it is a helpful (relatively) concrete prediction. Eliezer replied here, making the following summarized points (among others):Tool AI is nontrivialTool AI is not obviously the way AGI should or will be developedGwern more directly replied by saying:AIs limited to pure computation (Tool AIs) supporting humans, will be less intelligent, efficient, and economically valuable than more autonomous reinforcement-learning AIs (Agent AIs) who act on their own and meta-learn, because all problems are reinforcement-learning problems.11 years later, can we evaluate the accuracy of these predictions?Some Bayes points go to LW commenter shminux for saying that this Holden kid seems like he's going placesThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can we evaluate the "tool versus agent" AGI prediction?, published by Ben West on April 8, 2023 on The Effective Altruism Forum.In 2012, Holden Karnofsky critiqued MIRI (then SI) by saying "SI appears to neglect the potentially important distinction between 'tool' and 'agent' AI." He particularly claimed:Is a tool-AGI possible? I believe that it is, and furthermore that it ought to be our default picture of how AGI will workI understand this to be the first introduction of the "tool versus agent" ontology, and it is a helpful (relatively) concrete prediction. Eliezer replied here, making the following summarized points (among others):Tool AI is nontrivialTool AI is not obviously the way AGI should or will be developedGwern more directly replied by saying:AIs limited to pure computation (Tool AIs) supporting humans, will be less intelligent, efficient, and economically valuable than more autonomous reinforcement-learning AIs (Agent AIs) who act on their own and meta-learn, because all problems are reinforcement-learning problems.11 years later, can we evaluate the accuracy of these predictions?Some Bayes points go to LW commenter shminux for saying that this Holden kid seems like he's going placesThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben West https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:26 None full 5533
oKabMJJhriz3LCaeT_NL_EA_EA EA - All AGI Safety questions welcome (especially basic ones) [April 2023] by StevenKaas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: All AGI Safety questions welcome (especially basic ones) [April 2023], published by StevenKaas on April 8, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
StevenKaas https://forum.effectivealtruism.org/posts/oKabMJJhriz3LCaeT/all-agi-safety-questions-welcome-especially-basic-ones-april Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: All AGI Safety questions welcome (especially basic ones) [April 2023], published by StevenKaas on April 8, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 08 Apr 2023 22:40:23 +0000 EA - All AGI Safety questions welcome (especially basic ones) [April 2023] by StevenKaas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: All AGI Safety questions welcome (especially basic ones) [April 2023], published by StevenKaas on April 8, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: All AGI Safety questions welcome (especially basic ones) [April 2023], published by StevenKaas on April 8, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
StevenKaas https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:24 None full 5536
Lw8bwJvtCo6ssqEKa_NL_EA_EA EA - A write-up on the biosecurity landscape in China by Chloe Lee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A write-up on the biosecurity landscape in China, published by Chloe Lee on April 8, 2023 on The Effective Altruism Forum.Hello everyone,This is my first post on this forum and I am excited to share the output of my work supported by the Long Term Future Fund. My write-up is titled "China’s Take on Biosecurity: A Report on China’s View, Institutions, Policies, and Technology". It is now published on the Social Science Research Network (SSRN) preprint site, see here.I would like to thank my mentor (requested to remain anonymous) for being incredibly helpful and encouraging throughout my biosecurity and EA journey. She provided excellent advice in shaping the research direction, writing the structure, and coming up with the content. I would also like to extend my thanks to Jonas Sandbrink for suggesting that I focus my research on understanding the biosecurity landscape in China and for taking the time to review my write-up. Last but not least, I want to express my gratitude to the following individuals for their constructive feedback and suggestions: Brian Tse, Ziya Huang, Myron Krueger, and Ruowei Yang.Scope and MethodologyMy conversations with Jonas Sandbrink helped shape the initial research directions for the project. We had some preliminary understanding that the Chinese community supports international pathogen surveillance and zoonotic risk prediction efforts, but not much beyond that. To further investigate this topic, we wanted to identify key points of contact (the government, researchers, and policy advocates) for biotechnology regulations and biosecurity, particularly governance against the deliberate misuse of biotechnology and biological weapons.After conducting initial research and discussions with my mentor, we concluded that it was important to tease out the Chinese term for biosecurity and its definition in the first part of the write-up to provide the necessary context to understand how biotechnology and biosecurity are governed. I studied multiple relevant terms, the interpretations provided by various academic researchers, and the context and frequency in which the terms are used in both Chinese and Western literature.The subsequent sections of the write-up focus on three areas: governance, processes, and technology related to biosecurity within the context of shengwu anquan in China. To comprehensively describe various perspectives on biosecurity, I adopted the common consulting framework of “People, Process, and Technology”. Each aspect of the framework could very well be a standalone research project in and of itself. Nonetheless, I tried to capture as many interesting and relevant observations as possible based on secondary research and analysis of various sources between 2002 and 2022 including peer-reviewed scientific journals, press announcements, government reports, and online searches via Google, Baidu, and China National Knowledge Infrastructure (CNKI).What follows is a list of questions and excerpts from the write-up that illustrate the current state of development of biosecurity in China.FindingsThe Concept of Biosecurity in ChinaHow did the concept of biosecurity come about in China?World War II biowarfare (inflicted upon Chinese civilians and war prisoners in the 1930s-1940s), the advent of genetic engineering in the 1970s, and infectious disease outbreaks (SARS in 2003) spurred awareness of biosecurity in China and encouraged their participation in international treaties and the development of regulations.How do the Chinese communities define biosecurity?Various terms have been used to describe biosafety and biosecurity, including shengwu anquan, shengwu anbao, neibu shengwu anquan, and waibu shengwu anquan. Until today, there remains no consensus on the most accurate term and meaning for biosecurity.According to the ...]]>
Chloe Lee https://forum.effectivealtruism.org/posts/Lw8bwJvtCo6ssqEKa/a-write-up-on-the-biosecurity-landscape-in-china Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A write-up on the biosecurity landscape in China, published by Chloe Lee on April 8, 2023 on The Effective Altruism Forum.Hello everyone,This is my first post on this forum and I am excited to share the output of my work supported by the Long Term Future Fund. My write-up is titled "China’s Take on Biosecurity: A Report on China’s View, Institutions, Policies, and Technology". It is now published on the Social Science Research Network (SSRN) preprint site, see here.I would like to thank my mentor (requested to remain anonymous) for being incredibly helpful and encouraging throughout my biosecurity and EA journey. She provided excellent advice in shaping the research direction, writing the structure, and coming up with the content. I would also like to extend my thanks to Jonas Sandbrink for suggesting that I focus my research on understanding the biosecurity landscape in China and for taking the time to review my write-up. Last but not least, I want to express my gratitude to the following individuals for their constructive feedback and suggestions: Brian Tse, Ziya Huang, Myron Krueger, and Ruowei Yang.Scope and MethodologyMy conversations with Jonas Sandbrink helped shape the initial research directions for the project. We had some preliminary understanding that the Chinese community supports international pathogen surveillance and zoonotic risk prediction efforts, but not much beyond that. To further investigate this topic, we wanted to identify key points of contact (the government, researchers, and policy advocates) for biotechnology regulations and biosecurity, particularly governance against the deliberate misuse of biotechnology and biological weapons.After conducting initial research and discussions with my mentor, we concluded that it was important to tease out the Chinese term for biosecurity and its definition in the first part of the write-up to provide the necessary context to understand how biotechnology and biosecurity are governed. I studied multiple relevant terms, the interpretations provided by various academic researchers, and the context and frequency in which the terms are used in both Chinese and Western literature.The subsequent sections of the write-up focus on three areas: governance, processes, and technology related to biosecurity within the context of shengwu anquan in China. To comprehensively describe various perspectives on biosecurity, I adopted the common consulting framework of “People, Process, and Technology”. Each aspect of the framework could very well be a standalone research project in and of itself. Nonetheless, I tried to capture as many interesting and relevant observations as possible based on secondary research and analysis of various sources between 2002 and 2022 including peer-reviewed scientific journals, press announcements, government reports, and online searches via Google, Baidu, and China National Knowledge Infrastructure (CNKI).What follows is a list of questions and excerpts from the write-up that illustrate the current state of development of biosecurity in China.FindingsThe Concept of Biosecurity in ChinaHow did the concept of biosecurity come about in China?World War II biowarfare (inflicted upon Chinese civilians and war prisoners in the 1930s-1940s), the advent of genetic engineering in the 1970s, and infectious disease outbreaks (SARS in 2003) spurred awareness of biosecurity in China and encouraged their participation in international treaties and the development of regulations.How do the Chinese communities define biosecurity?Various terms have been used to describe biosafety and biosecurity, including shengwu anquan, shengwu anbao, neibu shengwu anquan, and waibu shengwu anquan. Until today, there remains no consensus on the most accurate term and meaning for biosecurity.According to the ...]]>
Sat, 08 Apr 2023 09:21:17 +0000 EA - A write-up on the biosecurity landscape in China by Chloe Lee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A write-up on the biosecurity landscape in China, published by Chloe Lee on April 8, 2023 on The Effective Altruism Forum.Hello everyone,This is my first post on this forum and I am excited to share the output of my work supported by the Long Term Future Fund. My write-up is titled "China’s Take on Biosecurity: A Report on China’s View, Institutions, Policies, and Technology". It is now published on the Social Science Research Network (SSRN) preprint site, see here.I would like to thank my mentor (requested to remain anonymous) for being incredibly helpful and encouraging throughout my biosecurity and EA journey. She provided excellent advice in shaping the research direction, writing the structure, and coming up with the content. I would also like to extend my thanks to Jonas Sandbrink for suggesting that I focus my research on understanding the biosecurity landscape in China and for taking the time to review my write-up. Last but not least, I want to express my gratitude to the following individuals for their constructive feedback and suggestions: Brian Tse, Ziya Huang, Myron Krueger, and Ruowei Yang.Scope and MethodologyMy conversations with Jonas Sandbrink helped shape the initial research directions for the project. We had some preliminary understanding that the Chinese community supports international pathogen surveillance and zoonotic risk prediction efforts, but not much beyond that. To further investigate this topic, we wanted to identify key points of contact (the government, researchers, and policy advocates) for biotechnology regulations and biosecurity, particularly governance against the deliberate misuse of biotechnology and biological weapons.After conducting initial research and discussions with my mentor, we concluded that it was important to tease out the Chinese term for biosecurity and its definition in the first part of the write-up to provide the necessary context to understand how biotechnology and biosecurity are governed. I studied multiple relevant terms, the interpretations provided by various academic researchers, and the context and frequency in which the terms are used in both Chinese and Western literature.The subsequent sections of the write-up focus on three areas: governance, processes, and technology related to biosecurity within the context of shengwu anquan in China. To comprehensively describe various perspectives on biosecurity, I adopted the common consulting framework of “People, Process, and Technology”. Each aspect of the framework could very well be a standalone research project in and of itself. Nonetheless, I tried to capture as many interesting and relevant observations as possible based on secondary research and analysis of various sources between 2002 and 2022 including peer-reviewed scientific journals, press announcements, government reports, and online searches via Google, Baidu, and China National Knowledge Infrastructure (CNKI).What follows is a list of questions and excerpts from the write-up that illustrate the current state of development of biosecurity in China.FindingsThe Concept of Biosecurity in ChinaHow did the concept of biosecurity come about in China?World War II biowarfare (inflicted upon Chinese civilians and war prisoners in the 1930s-1940s), the advent of genetic engineering in the 1970s, and infectious disease outbreaks (SARS in 2003) spurred awareness of biosecurity in China and encouraged their participation in international treaties and the development of regulations.How do the Chinese communities define biosecurity?Various terms have been used to describe biosafety and biosecurity, including shengwu anquan, shengwu anbao, neibu shengwu anquan, and waibu shengwu anquan. Until today, there remains no consensus on the most accurate term and meaning for biosecurity.According to the ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A write-up on the biosecurity landscape in China, published by Chloe Lee on April 8, 2023 on The Effective Altruism Forum.Hello everyone,This is my first post on this forum and I am excited to share the output of my work supported by the Long Term Future Fund. My write-up is titled "China’s Take on Biosecurity: A Report on China’s View, Institutions, Policies, and Technology". It is now published on the Social Science Research Network (SSRN) preprint site, see here.I would like to thank my mentor (requested to remain anonymous) for being incredibly helpful and encouraging throughout my biosecurity and EA journey. She provided excellent advice in shaping the research direction, writing the structure, and coming up with the content. I would also like to extend my thanks to Jonas Sandbrink for suggesting that I focus my research on understanding the biosecurity landscape in China and for taking the time to review my write-up. Last but not least, I want to express my gratitude to the following individuals for their constructive feedback and suggestions: Brian Tse, Ziya Huang, Myron Krueger, and Ruowei Yang.Scope and MethodologyMy conversations with Jonas Sandbrink helped shape the initial research directions for the project. We had some preliminary understanding that the Chinese community supports international pathogen surveillance and zoonotic risk prediction efforts, but not much beyond that. To further investigate this topic, we wanted to identify key points of contact (the government, researchers, and policy advocates) for biotechnology regulations and biosecurity, particularly governance against the deliberate misuse of biotechnology and biological weapons.After conducting initial research and discussions with my mentor, we concluded that it was important to tease out the Chinese term for biosecurity and its definition in the first part of the write-up to provide the necessary context to understand how biotechnology and biosecurity are governed. I studied multiple relevant terms, the interpretations provided by various academic researchers, and the context and frequency in which the terms are used in both Chinese and Western literature.The subsequent sections of the write-up focus on three areas: governance, processes, and technology related to biosecurity within the context of shengwu anquan in China. To comprehensively describe various perspectives on biosecurity, I adopted the common consulting framework of “People, Process, and Technology”. Each aspect of the framework could very well be a standalone research project in and of itself. Nonetheless, I tried to capture as many interesting and relevant observations as possible based on secondary research and analysis of various sources between 2002 and 2022 including peer-reviewed scientific journals, press announcements, government reports, and online searches via Google, Baidu, and China National Knowledge Infrastructure (CNKI).What follows is a list of questions and excerpts from the write-up that illustrate the current state of development of biosecurity in China.FindingsThe Concept of Biosecurity in ChinaHow did the concept of biosecurity come about in China?World War II biowarfare (inflicted upon Chinese civilians and war prisoners in the 1930s-1940s), the advent of genetic engineering in the 1970s, and infectious disease outbreaks (SARS in 2003) spurred awareness of biosecurity in China and encouraged their participation in international treaties and the development of regulations.How do the Chinese communities define biosecurity?Various terms have been used to describe biosafety and biosecurity, including shengwu anquan, shengwu anbao, neibu shengwu anquan, and waibu shengwu anquan. Until today, there remains no consensus on the most accurate term and meaning for biosecurity.According to the ...]]>
Chloe Lee https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:16 None full 5526
StBjwzACS7qtWGkoN_NL_EA_EA EA - Planned Updates to U.S. Regulatory Analysis Methods are Likely Relevant to EAs by MHR Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Planned Updates to U.S. Regulatory Analysis Methods are Likely Relevant to EAs, published by MHR on April 7, 2023 on The Effective Altruism Forum.The U.S. Office of Management and Budget (OMB) has proposed an update to Circular A-4, which provides guidance to federal agencies regarding methods of regulatory analysis. Changes to analysis methodology could substantially impact the type and content of future regulations, and the 2022 Legal Priorities Project writing competition examined potential ways to improve such analysis methods from an EA perspective.OMB's proposed update makes a significant number of changes to circular A-4. I have only taken a brief look at the proposal, but several sections seem quite significant for EA causes. in general, the proposed changes appear to benefit policies that improve future wellbeing, prevent catastrophic risks, and/or (in certain circumstances) improve the wellbeing of people outside the United States.Catastrophic RisksThe proposed update adds explicit discussion of catastrophic risks, which are not mentioned in the current version of Circular A-4. The updated guidance allows for the consideration of impacts on future generations when analyzing the benefits of policies that reduce the chance of catastrophic risks.The time frame for your analysis should include a period before and after the date of compliance that is long enough to encompass all the important benefits and costs likely to result from the regulation.19 See the section “Discount Rates” for more details on the appropriate time frame for an analysis. If benefits or costs become more uncertain or harder to quantify over time, that does not imply that you should exclude such effects by artificially shortening your analytic time frame; instead, consult—as appropriate—the discussion in the section “Treatment of Uncertainty.”[19] For example, when assessing the benefits of a regulation that could prevent a catastrophic event with some probability, it may be appropriate for you to consider not only the near-term effects of averting the catastrophic event on those who would be immediately affected, but also the long-run effects on others—including future generations—who would be affected by the catastrophic event.Geographic Scope of AnalysisThe proposed update allows for the consideration of impacts to non U.S. citizens residing abroad as part of the primary analysis in certain circumstances, which was not allowed in the previous guidance.In certain contexts, it may be particularly appropriate to include effects experienced by noncitizens residing abroad in your primary analysis. Such contexts include, for example, when:Assessing effects on noncitizens residing abroad provides a useful proxy for effects on U.S. citizens and residents that are difficult to otherwise estimate;Assessing effects on noncitizens residing abroad provides a useful proxy for effects on U.S. national interests that are not otherwise fully captured by effects experienced by particular U.S. citizens and residents (e.g., national security interests, diplomatic interests, etc.);Regulating an externality on the basis of its global effects supports a cooperative international approach to the regulation of the externality by potentially inducing other countries to follow suit or maintain existing efforts; orInternational or domestic legal obligations require or support a global calculation of regulatory effects.Near-Term Discount RatesThe proposed update changes the default discount rate over the next 30 years to 1.7%, from 3% in the previous guidance.One approach assumes that the real (inflation-adjusted) rate of return on long-term U.S. government debt provides a fair approximation of the social rate of time preference. It is the rate available on riskless personal savings and is therefore...]]>
MHR https://forum.effectivealtruism.org/posts/StBjwzACS7qtWGkoN/planned-updates-to-u-s-regulatory-analysis-methods-are Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Planned Updates to U.S. Regulatory Analysis Methods are Likely Relevant to EAs, published by MHR on April 7, 2023 on The Effective Altruism Forum.The U.S. Office of Management and Budget (OMB) has proposed an update to Circular A-4, which provides guidance to federal agencies regarding methods of regulatory analysis. Changes to analysis methodology could substantially impact the type and content of future regulations, and the 2022 Legal Priorities Project writing competition examined potential ways to improve such analysis methods from an EA perspective.OMB's proposed update makes a significant number of changes to circular A-4. I have only taken a brief look at the proposal, but several sections seem quite significant for EA causes. in general, the proposed changes appear to benefit policies that improve future wellbeing, prevent catastrophic risks, and/or (in certain circumstances) improve the wellbeing of people outside the United States.Catastrophic RisksThe proposed update adds explicit discussion of catastrophic risks, which are not mentioned in the current version of Circular A-4. The updated guidance allows for the consideration of impacts on future generations when analyzing the benefits of policies that reduce the chance of catastrophic risks.The time frame for your analysis should include a period before and after the date of compliance that is long enough to encompass all the important benefits and costs likely to result from the regulation.19 See the section “Discount Rates” for more details on the appropriate time frame for an analysis. If benefits or costs become more uncertain or harder to quantify over time, that does not imply that you should exclude such effects by artificially shortening your analytic time frame; instead, consult—as appropriate—the discussion in the section “Treatment of Uncertainty.”[19] For example, when assessing the benefits of a regulation that could prevent a catastrophic event with some probability, it may be appropriate for you to consider not only the near-term effects of averting the catastrophic event on those who would be immediately affected, but also the long-run effects on others—including future generations—who would be affected by the catastrophic event.Geographic Scope of AnalysisThe proposed update allows for the consideration of impacts to non U.S. citizens residing abroad as part of the primary analysis in certain circumstances, which was not allowed in the previous guidance.In certain contexts, it may be particularly appropriate to include effects experienced by noncitizens residing abroad in your primary analysis. Such contexts include, for example, when:Assessing effects on noncitizens residing abroad provides a useful proxy for effects on U.S. citizens and residents that are difficult to otherwise estimate;Assessing effects on noncitizens residing abroad provides a useful proxy for effects on U.S. national interests that are not otherwise fully captured by effects experienced by particular U.S. citizens and residents (e.g., national security interests, diplomatic interests, etc.);Regulating an externality on the basis of its global effects supports a cooperative international approach to the regulation of the externality by potentially inducing other countries to follow suit or maintain existing efforts; orInternational or domestic legal obligations require or support a global calculation of regulatory effects.Near-Term Discount RatesThe proposed update changes the default discount rate over the next 30 years to 1.7%, from 3% in the previous guidance.One approach assumes that the real (inflation-adjusted) rate of return on long-term U.S. government debt provides a fair approximation of the social rate of time preference. It is the rate available on riskless personal savings and is therefore...]]>
Fri, 07 Apr 2023 07:58:19 +0000 EA - Planned Updates to U.S. Regulatory Analysis Methods are Likely Relevant to EAs by MHR Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Planned Updates to U.S. Regulatory Analysis Methods are Likely Relevant to EAs, published by MHR on April 7, 2023 on The Effective Altruism Forum.The U.S. Office of Management and Budget (OMB) has proposed an update to Circular A-4, which provides guidance to federal agencies regarding methods of regulatory analysis. Changes to analysis methodology could substantially impact the type and content of future regulations, and the 2022 Legal Priorities Project writing competition examined potential ways to improve such analysis methods from an EA perspective.OMB's proposed update makes a significant number of changes to circular A-4. I have only taken a brief look at the proposal, but several sections seem quite significant for EA causes. in general, the proposed changes appear to benefit policies that improve future wellbeing, prevent catastrophic risks, and/or (in certain circumstances) improve the wellbeing of people outside the United States.Catastrophic RisksThe proposed update adds explicit discussion of catastrophic risks, which are not mentioned in the current version of Circular A-4. The updated guidance allows for the consideration of impacts on future generations when analyzing the benefits of policies that reduce the chance of catastrophic risks.The time frame for your analysis should include a period before and after the date of compliance that is long enough to encompass all the important benefits and costs likely to result from the regulation.19 See the section “Discount Rates” for more details on the appropriate time frame for an analysis. If benefits or costs become more uncertain or harder to quantify over time, that does not imply that you should exclude such effects by artificially shortening your analytic time frame; instead, consult—as appropriate—the discussion in the section “Treatment of Uncertainty.”[19] For example, when assessing the benefits of a regulation that could prevent a catastrophic event with some probability, it may be appropriate for you to consider not only the near-term effects of averting the catastrophic event on those who would be immediately affected, but also the long-run effects on others—including future generations—who would be affected by the catastrophic event.Geographic Scope of AnalysisThe proposed update allows for the consideration of impacts to non U.S. citizens residing abroad as part of the primary analysis in certain circumstances, which was not allowed in the previous guidance.In certain contexts, it may be particularly appropriate to include effects experienced by noncitizens residing abroad in your primary analysis. Such contexts include, for example, when:Assessing effects on noncitizens residing abroad provides a useful proxy for effects on U.S. citizens and residents that are difficult to otherwise estimate;Assessing effects on noncitizens residing abroad provides a useful proxy for effects on U.S. national interests that are not otherwise fully captured by effects experienced by particular U.S. citizens and residents (e.g., national security interests, diplomatic interests, etc.);Regulating an externality on the basis of its global effects supports a cooperative international approach to the regulation of the externality by potentially inducing other countries to follow suit or maintain existing efforts; orInternational or domestic legal obligations require or support a global calculation of regulatory effects.Near-Term Discount RatesThe proposed update changes the default discount rate over the next 30 years to 1.7%, from 3% in the previous guidance.One approach assumes that the real (inflation-adjusted) rate of return on long-term U.S. government debt provides a fair approximation of the social rate of time preference. It is the rate available on riskless personal savings and is therefore...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Planned Updates to U.S. Regulatory Analysis Methods are Likely Relevant to EAs, published by MHR on April 7, 2023 on The Effective Altruism Forum.The U.S. Office of Management and Budget (OMB) has proposed an update to Circular A-4, which provides guidance to federal agencies regarding methods of regulatory analysis. Changes to analysis methodology could substantially impact the type and content of future regulations, and the 2022 Legal Priorities Project writing competition examined potential ways to improve such analysis methods from an EA perspective.OMB's proposed update makes a significant number of changes to circular A-4. I have only taken a brief look at the proposal, but several sections seem quite significant for EA causes. in general, the proposed changes appear to benefit policies that improve future wellbeing, prevent catastrophic risks, and/or (in certain circumstances) improve the wellbeing of people outside the United States.Catastrophic RisksThe proposed update adds explicit discussion of catastrophic risks, which are not mentioned in the current version of Circular A-4. The updated guidance allows for the consideration of impacts on future generations when analyzing the benefits of policies that reduce the chance of catastrophic risks.The time frame for your analysis should include a period before and after the date of compliance that is long enough to encompass all the important benefits and costs likely to result from the regulation.19 See the section “Discount Rates” for more details on the appropriate time frame for an analysis. If benefits or costs become more uncertain or harder to quantify over time, that does not imply that you should exclude such effects by artificially shortening your analytic time frame; instead, consult—as appropriate—the discussion in the section “Treatment of Uncertainty.”[19] For example, when assessing the benefits of a regulation that could prevent a catastrophic event with some probability, it may be appropriate for you to consider not only the near-term effects of averting the catastrophic event on those who would be immediately affected, but also the long-run effects on others—including future generations—who would be affected by the catastrophic event.Geographic Scope of AnalysisThe proposed update allows for the consideration of impacts to non U.S. citizens residing abroad as part of the primary analysis in certain circumstances, which was not allowed in the previous guidance.In certain contexts, it may be particularly appropriate to include effects experienced by noncitizens residing abroad in your primary analysis. Such contexts include, for example, when:Assessing effects on noncitizens residing abroad provides a useful proxy for effects on U.S. citizens and residents that are difficult to otherwise estimate;Assessing effects on noncitizens residing abroad provides a useful proxy for effects on U.S. national interests that are not otherwise fully captured by effects experienced by particular U.S. citizens and residents (e.g., national security interests, diplomatic interests, etc.);Regulating an externality on the basis of its global effects supports a cooperative international approach to the regulation of the externality by potentially inducing other countries to follow suit or maintain existing efforts; orInternational or domestic legal obligations require or support a global calculation of regulatory effects.Near-Term Discount RatesThe proposed update changes the default discount rate over the next 30 years to 1.7%, from 3% in the previous guidance.One approach assumes that the real (inflation-adjusted) rate of return on long-term U.S. government debt provides a fair approximation of the social rate of time preference. It is the rate available on riskless personal savings and is therefore...]]>
MHR https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:02 None full 5509
DbcYRiACy26H5AEkc_NL_EA_EA EA - Polio Lab Leak Caught with Wastewater Sampling by Cullen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polio Lab Leak Caught with Wastewater Sampling, published by Cullen on April 7, 2023 on The Effective Altruism Forum.I hadn't seen this discussed here:The near complete eradication of wildtype polioviruses (WPV) means that strict containment by facilities for essential work with infectious WPV is required. In the Netherlands, we have implemented environmental surveillance around all poliovirus essential facilities (PEFs) premises to monitor possible breaches of containment. After the isolation and identification of WPV3 Saukett G strain in a sewage sample collected on 15 November 2022 by the National Polio Laboratory (NPL) in the Netherlands, an immediate response was required to assess any possible ongoing WPV3 shedding and mitigate the risk. Here we describe this response, including the isolation of WPV3 in a sewage sample, and identification, isolation and monitoring, as well as tracing of contacts of an infected employee.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Cullen https://forum.effectivealtruism.org/posts/DbcYRiACy26H5AEkc/polio-lab-leak-caught-with-wastewater-sampling Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polio Lab Leak Caught with Wastewater Sampling, published by Cullen on April 7, 2023 on The Effective Altruism Forum.I hadn't seen this discussed here:The near complete eradication of wildtype polioviruses (WPV) means that strict containment by facilities for essential work with infectious WPV is required. In the Netherlands, we have implemented environmental surveillance around all poliovirus essential facilities (PEFs) premises to monitor possible breaches of containment. After the isolation and identification of WPV3 Saukett G strain in a sewage sample collected on 15 November 2022 by the National Polio Laboratory (NPL) in the Netherlands, an immediate response was required to assess any possible ongoing WPV3 shedding and mitigate the risk. Here we describe this response, including the isolation of WPV3 in a sewage sample, and identification, isolation and monitoring, as well as tracing of contacts of an infected employee.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 07 Apr 2023 07:32:15 +0000 EA - Polio Lab Leak Caught with Wastewater Sampling by Cullen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polio Lab Leak Caught with Wastewater Sampling, published by Cullen on April 7, 2023 on The Effective Altruism Forum.I hadn't seen this discussed here:The near complete eradication of wildtype polioviruses (WPV) means that strict containment by facilities for essential work with infectious WPV is required. In the Netherlands, we have implemented environmental surveillance around all poliovirus essential facilities (PEFs) premises to monitor possible breaches of containment. After the isolation and identification of WPV3 Saukett G strain in a sewage sample collected on 15 November 2022 by the National Polio Laboratory (NPL) in the Netherlands, an immediate response was required to assess any possible ongoing WPV3 shedding and mitigate the risk. Here we describe this response, including the isolation of WPV3 in a sewage sample, and identification, isolation and monitoring, as well as tracing of contacts of an infected employee.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polio Lab Leak Caught with Wastewater Sampling, published by Cullen on April 7, 2023 on The Effective Altruism Forum.I hadn't seen this discussed here:The near complete eradication of wildtype polioviruses (WPV) means that strict containment by facilities for essential work with infectious WPV is required. In the Netherlands, we have implemented environmental surveillance around all poliovirus essential facilities (PEFs) premises to monitor possible breaches of containment. After the isolation and identification of WPV3 Saukett G strain in a sewage sample collected on 15 November 2022 by the National Polio Laboratory (NPL) in the Netherlands, an immediate response was required to assess any possible ongoing WPV3 shedding and mitigate the risk. Here we describe this response, including the isolation of WPV3 in a sewage sample, and identification, isolation and monitoring, as well as tracing of contacts of an infected employee.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Cullen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:10 None full 5508
pPohKFGLXAkpvQ3Nn_NL_EA_EA EA - Misgeneralization as a misnomer by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misgeneralization as a misnomer, published by So8res on April 6, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
So8res https://forum.effectivealtruism.org/posts/pPohKFGLXAkpvQ3Nn/misgeneralization-as-a-misnomer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misgeneralization as a misnomer, published by So8res on April 6, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 07 Apr 2023 00:15:30 +0000 EA - Misgeneralization as a misnomer by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misgeneralization as a misnomer, published by So8res on April 6, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misgeneralization as a misnomer, published by So8res on April 6, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
So8res https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:21 None full 5510
ZvMPNLFBHur9qopw9_NL_EA_EA EA - Is it time for a pause? by Kelsey Piper Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is it time for a pause?, published by Kelsey Piper on April 6, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.The single most important thing we can do is to pause when the next model we train would be powerful enough to obsolete humans entirely. If it were up to me, I would slow down AI development starting now — and then later slow down even more.Many of the people building powerful AI systems think they’ll stumble on an AI system that forever changes our world fairly soon — three years, five years. I think they’re reasonably likely to be wrong about that, but I’m not sure they’re wrong about that. If we give them fifteen or twenty years, I start to suspect that they are entirely right.And while I think that the enormous, terrifying challenges of making AI go well are very much solvable, it feels very possible, to me, that we won’t solve them in time.It’s hard to overstate how much we have to gain from getting this right. It’s also hard to overstate how much we have to lose from getting it wrong. When I’m feeling optimistic about having grandchildren, I imagine that our grandchildren will look back in horror at how recklessly we endangered everyone in the world. And I’m much much more optimistic that humanity will figure this whole situation out in the end if we have twenty years than I am if we have five.There’s all kinds of AI research being done — at labs, in academia, at nonprofits, and in a distributed fashion all across the internet — that’s so diffuse and varied that it would be hard to ‘slow down’ by fiat. But there’s one kind of AI research — training much larger, much more powerful language models — that it might make sense to try to slow down. If we could agree to hold off on training ever more powerful new models, we might buy more time to do AI alignment research on the models we have. This extra research could make it less likely that misaligned AI eventually seizes control from humans.An open letter released on Wednesday, with signatures from Elon Musk[1], Apple co-founder Steve Wozniak, leading AI researcher Yoshua Bengio, and many other prominent figures, called for a six-month moratorium on training bigger, more dangerous ML models:We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.I tend to think that we are developing and releasing AI systems much faster and much more carelessly than is in our interests. And from talking to people in Silicon Valley and policymakers in DC, I think efforts to change that are rapidly gaining traction. “We should slow down AI capabilities progress” is a much more mainstream view than it was six months ago, and to me that seems like great news.In my ideal world, we absolutely would be pausing after the release of GPT-4. People have been speculating about the alignment problem for decades, but this moment is an obvious golden age for alignment work. We finally have models powerful enough to do useful empirical work on understanding them, changing their behavior, evaluating their capabilities, noticing when they’re being deceptive or manipulative, and so on. There are so many open questions in alignment that I expect we can make a lot of progress on in five years, with the benefit of what we’ve learned from existing models. We’d be in a much better position if we could collectively slow down to give ourselves more time to do this work, and I hope we find a way to do that intelligently and effectively. As I’ve said above, I ...]]>
Kelsey Piper https://forum.effectivealtruism.org/posts/ZvMPNLFBHur9qopw9/is-it-time-for-a-pause Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is it time for a pause?, published by Kelsey Piper on April 6, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.The single most important thing we can do is to pause when the next model we train would be powerful enough to obsolete humans entirely. If it were up to me, I would slow down AI development starting now — and then later slow down even more.Many of the people building powerful AI systems think they’ll stumble on an AI system that forever changes our world fairly soon — three years, five years. I think they’re reasonably likely to be wrong about that, but I’m not sure they’re wrong about that. If we give them fifteen or twenty years, I start to suspect that they are entirely right.And while I think that the enormous, terrifying challenges of making AI go well are very much solvable, it feels very possible, to me, that we won’t solve them in time.It’s hard to overstate how much we have to gain from getting this right. It’s also hard to overstate how much we have to lose from getting it wrong. When I’m feeling optimistic about having grandchildren, I imagine that our grandchildren will look back in horror at how recklessly we endangered everyone in the world. And I’m much much more optimistic that humanity will figure this whole situation out in the end if we have twenty years than I am if we have five.There’s all kinds of AI research being done — at labs, in academia, at nonprofits, and in a distributed fashion all across the internet — that’s so diffuse and varied that it would be hard to ‘slow down’ by fiat. But there’s one kind of AI research — training much larger, much more powerful language models — that it might make sense to try to slow down. If we could agree to hold off on training ever more powerful new models, we might buy more time to do AI alignment research on the models we have. This extra research could make it less likely that misaligned AI eventually seizes control from humans.An open letter released on Wednesday, with signatures from Elon Musk[1], Apple co-founder Steve Wozniak, leading AI researcher Yoshua Bengio, and many other prominent figures, called for a six-month moratorium on training bigger, more dangerous ML models:We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.I tend to think that we are developing and releasing AI systems much faster and much more carelessly than is in our interests. And from talking to people in Silicon Valley and policymakers in DC, I think efforts to change that are rapidly gaining traction. “We should slow down AI capabilities progress” is a much more mainstream view than it was six months ago, and to me that seems like great news.In my ideal world, we absolutely would be pausing after the release of GPT-4. People have been speculating about the alignment problem for decades, but this moment is an obvious golden age for alignment work. We finally have models powerful enough to do useful empirical work on understanding them, changing their behavior, evaluating their capabilities, noticing when they’re being deceptive or manipulative, and so on. There are so many open questions in alignment that I expect we can make a lot of progress on in five years, with the benefit of what we’ve learned from existing models. We’d be in a much better position if we could collectively slow down to give ourselves more time to do this work, and I hope we find a way to do that intelligently and effectively. As I’ve said above, I ...]]>
Thu, 06 Apr 2023 22:49:15 +0000 EA - Is it time for a pause? by Kelsey Piper Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is it time for a pause?, published by Kelsey Piper on April 6, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.The single most important thing we can do is to pause when the next model we train would be powerful enough to obsolete humans entirely. If it were up to me, I would slow down AI development starting now — and then later slow down even more.Many of the people building powerful AI systems think they’ll stumble on an AI system that forever changes our world fairly soon — three years, five years. I think they’re reasonably likely to be wrong about that, but I’m not sure they’re wrong about that. If we give them fifteen or twenty years, I start to suspect that they are entirely right.And while I think that the enormous, terrifying challenges of making AI go well are very much solvable, it feels very possible, to me, that we won’t solve them in time.It’s hard to overstate how much we have to gain from getting this right. It’s also hard to overstate how much we have to lose from getting it wrong. When I’m feeling optimistic about having grandchildren, I imagine that our grandchildren will look back in horror at how recklessly we endangered everyone in the world. And I’m much much more optimistic that humanity will figure this whole situation out in the end if we have twenty years than I am if we have five.There’s all kinds of AI research being done — at labs, in academia, at nonprofits, and in a distributed fashion all across the internet — that’s so diffuse and varied that it would be hard to ‘slow down’ by fiat. But there’s one kind of AI research — training much larger, much more powerful language models — that it might make sense to try to slow down. If we could agree to hold off on training ever more powerful new models, we might buy more time to do AI alignment research on the models we have. This extra research could make it less likely that misaligned AI eventually seizes control from humans.An open letter released on Wednesday, with signatures from Elon Musk[1], Apple co-founder Steve Wozniak, leading AI researcher Yoshua Bengio, and many other prominent figures, called for a six-month moratorium on training bigger, more dangerous ML models:We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.I tend to think that we are developing and releasing AI systems much faster and much more carelessly than is in our interests. And from talking to people in Silicon Valley and policymakers in DC, I think efforts to change that are rapidly gaining traction. “We should slow down AI capabilities progress” is a much more mainstream view than it was six months ago, and to me that seems like great news.In my ideal world, we absolutely would be pausing after the release of GPT-4. People have been speculating about the alignment problem for decades, but this moment is an obvious golden age for alignment work. We finally have models powerful enough to do useful empirical work on understanding them, changing their behavior, evaluating their capabilities, noticing when they’re being deceptive or manipulative, and so on. There are so many open questions in alignment that I expect we can make a lot of progress on in five years, with the benefit of what we’ve learned from existing models. We’d be in a much better position if we could collectively slow down to give ourselves more time to do this work, and I hope we find a way to do that intelligently and effectively. As I’ve said above, I ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is it time for a pause?, published by Kelsey Piper on April 6, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.The single most important thing we can do is to pause when the next model we train would be powerful enough to obsolete humans entirely. If it were up to me, I would slow down AI development starting now — and then later slow down even more.Many of the people building powerful AI systems think they’ll stumble on an AI system that forever changes our world fairly soon — three years, five years. I think they’re reasonably likely to be wrong about that, but I’m not sure they’re wrong about that. If we give them fifteen or twenty years, I start to suspect that they are entirely right.And while I think that the enormous, terrifying challenges of making AI go well are very much solvable, it feels very possible, to me, that we won’t solve them in time.It’s hard to overstate how much we have to gain from getting this right. It’s also hard to overstate how much we have to lose from getting it wrong. When I’m feeling optimistic about having grandchildren, I imagine that our grandchildren will look back in horror at how recklessly we endangered everyone in the world. And I’m much much more optimistic that humanity will figure this whole situation out in the end if we have twenty years than I am if we have five.There’s all kinds of AI research being done — at labs, in academia, at nonprofits, and in a distributed fashion all across the internet — that’s so diffuse and varied that it would be hard to ‘slow down’ by fiat. But there’s one kind of AI research — training much larger, much more powerful language models — that it might make sense to try to slow down. If we could agree to hold off on training ever more powerful new models, we might buy more time to do AI alignment research on the models we have. This extra research could make it less likely that misaligned AI eventually seizes control from humans.An open letter released on Wednesday, with signatures from Elon Musk[1], Apple co-founder Steve Wozniak, leading AI researcher Yoshua Bengio, and many other prominent figures, called for a six-month moratorium on training bigger, more dangerous ML models:We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.I tend to think that we are developing and releasing AI systems much faster and much more carelessly than is in our interests. And from talking to people in Silicon Valley and policymakers in DC, I think efforts to change that are rapidly gaining traction. “We should slow down AI capabilities progress” is a much more mainstream view than it was six months ago, and to me that seems like great news.In my ideal world, we absolutely would be pausing after the release of GPT-4. People have been speculating about the alignment problem for decades, but this moment is an obvious golden age for alignment work. We finally have models powerful enough to do useful empirical work on understanding them, changing their behavior, evaluating their capabilities, noticing when they’re being deceptive or manipulative, and so on. There are so many open questions in alignment that I expect we can make a lot of progress on in five years, with the benefit of what we’ve learned from existing models. We’d be in a much better position if we could collectively slow down to give ourselves more time to do this work, and I hope we find a way to do that intelligently and effectively. As I’ve said above, I ...]]>
Kelsey Piper https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:46 None full 5511
pWFEjawiGXYmwyY3K_NL_EA_EA EA - Things that can make EA a good place for women by lilly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things that can make EA a good place for women, published by lilly on April 6, 2023 on The Effective Altruism Forum.IntroductionMany people, including me, have criticized EA for being a subpar place for women in certain ways, including: women are underrepresented (which contributes to various problems), there are few mechanisms aimed at preventing sexual harassment before it occurs, there is limited guidance regarding appropriate behavior, the criteria for evaluating misconduct are questionable, and existing systems may be ill-equipped to respond to allegations of abuse.At the same time, I think EA does well on certain gender issues relative to other communities. My aim here is not to make a substantive feminist case for EA, respond to critiques in this vein, or argue that being involved in EA is on balance good for women, but to describe features of the EA community that may be good for women (or, at least, have been good for me). Several of the features I'll highlight are good for everyone, but contrast with features of the wider world that are often bad for women.My goal in writing this is threefold. First, as EA works to fix certain gender issues, it’s worth considering what to preserve. Second, I want to highlight practices and behaviors that I’ve appreciated in the hopes of others emulating them. Third, and most importantly, I want women who are considering whether EA is a community they want to be a part of to be aware of both the good and the bad.Eight features of EA that can make EA a good place for womenIn the remainder of this post, I'll describe features of the EA community that I think can benefit women. This isn't meant to be an exhaustive list, nor are these things listed in any particular order. I'll also highlight how a few of these features can cut both ways.(1) People defy conventional social normsWhy this can be good for women: EA has less gendered expectations regarding behavior and appearance, which can be good, because such expectations often serve to oppress women.Example: EA is famously not into wasting money and time on unimportant stuff. In other spheres, women face pressure to invest in makeup, nails, hair, clothes, Botox, buccal fat removal, BBLs, and lots of other stuff you probably shouldn’t Google. I enjoy doing my hair and makeup sometimes, but it matters to me that I feel basically no pressure to do this in EA spaces.Why this can be bad for women: At times, EA may also subvert certain norms that are good for women.Example: I still think it’s a bit weird that a post that says—based on scant evidence—that “if I could pop back in time to witness the [interactions reported in the TIME article], I personally would think in 80% of cases that the accused had done nothing wrong” has 349 upvotes. One reasonable, good-for-women social norm that is being questioned here: believing most women who make accusations of sexual misconduct.(2) People value things about you that are important (e.g., how kind you are, how hard you work, how good your work is)Why this can be good for women: Society tends to value people for the wrong reasons (e.g., thinness), but often applies these standards more stringently to women (who are more affected by, e.g., weight discrimination). By contrast, many EAs and EA organizations make an active effort not to unfairly discriminate against people (although, of course, bias that exists in the outside world will invariably exist within EA, too).Example: Many EA organizations are intentional about not letting bias and discrimination influence their hiring practices. Last year, I applied to a job at an EA organization. This was the first job I had applied to in years where things that have no bearing on my work quality were intentionally excluded from the application process. The organization did this by revie...]]>
lilly https://forum.effectivealtruism.org/posts/pWFEjawiGXYmwyY3K/things-that-can-make-ea-a-good-place-for-women Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things that can make EA a good place for women, published by lilly on April 6, 2023 on The Effective Altruism Forum.IntroductionMany people, including me, have criticized EA for being a subpar place for women in certain ways, including: women are underrepresented (which contributes to various problems), there are few mechanisms aimed at preventing sexual harassment before it occurs, there is limited guidance regarding appropriate behavior, the criteria for evaluating misconduct are questionable, and existing systems may be ill-equipped to respond to allegations of abuse.At the same time, I think EA does well on certain gender issues relative to other communities. My aim here is not to make a substantive feminist case for EA, respond to critiques in this vein, or argue that being involved in EA is on balance good for women, but to describe features of the EA community that may be good for women (or, at least, have been good for me). Several of the features I'll highlight are good for everyone, but contrast with features of the wider world that are often bad for women.My goal in writing this is threefold. First, as EA works to fix certain gender issues, it’s worth considering what to preserve. Second, I want to highlight practices and behaviors that I’ve appreciated in the hopes of others emulating them. Third, and most importantly, I want women who are considering whether EA is a community they want to be a part of to be aware of both the good and the bad.Eight features of EA that can make EA a good place for womenIn the remainder of this post, I'll describe features of the EA community that I think can benefit women. This isn't meant to be an exhaustive list, nor are these things listed in any particular order. I'll also highlight how a few of these features can cut both ways.(1) People defy conventional social normsWhy this can be good for women: EA has less gendered expectations regarding behavior and appearance, which can be good, because such expectations often serve to oppress women.Example: EA is famously not into wasting money and time on unimportant stuff. In other spheres, women face pressure to invest in makeup, nails, hair, clothes, Botox, buccal fat removal, BBLs, and lots of other stuff you probably shouldn’t Google. I enjoy doing my hair and makeup sometimes, but it matters to me that I feel basically no pressure to do this in EA spaces.Why this can be bad for women: At times, EA may also subvert certain norms that are good for women.Example: I still think it’s a bit weird that a post that says—based on scant evidence—that “if I could pop back in time to witness the [interactions reported in the TIME article], I personally would think in 80% of cases that the accused had done nothing wrong” has 349 upvotes. One reasonable, good-for-women social norm that is being questioned here: believing most women who make accusations of sexual misconduct.(2) People value things about you that are important (e.g., how kind you are, how hard you work, how good your work is)Why this can be good for women: Society tends to value people for the wrong reasons (e.g., thinness), but often applies these standards more stringently to women (who are more affected by, e.g., weight discrimination). By contrast, many EAs and EA organizations make an active effort not to unfairly discriminate against people (although, of course, bias that exists in the outside world will invariably exist within EA, too).Example: Many EA organizations are intentional about not letting bias and discrimination influence their hiring practices. Last year, I applied to a job at an EA organization. This was the first job I had applied to in years where things that have no bearing on my work quality were intentionally excluded from the application process. The organization did this by revie...]]>
Thu, 06 Apr 2023 18:14:57 +0000 EA - Things that can make EA a good place for women by lilly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things that can make EA a good place for women, published by lilly on April 6, 2023 on The Effective Altruism Forum.IntroductionMany people, including me, have criticized EA for being a subpar place for women in certain ways, including: women are underrepresented (which contributes to various problems), there are few mechanisms aimed at preventing sexual harassment before it occurs, there is limited guidance regarding appropriate behavior, the criteria for evaluating misconduct are questionable, and existing systems may be ill-equipped to respond to allegations of abuse.At the same time, I think EA does well on certain gender issues relative to other communities. My aim here is not to make a substantive feminist case for EA, respond to critiques in this vein, or argue that being involved in EA is on balance good for women, but to describe features of the EA community that may be good for women (or, at least, have been good for me). Several of the features I'll highlight are good for everyone, but contrast with features of the wider world that are often bad for women.My goal in writing this is threefold. First, as EA works to fix certain gender issues, it’s worth considering what to preserve. Second, I want to highlight practices and behaviors that I’ve appreciated in the hopes of others emulating them. Third, and most importantly, I want women who are considering whether EA is a community they want to be a part of to be aware of both the good and the bad.Eight features of EA that can make EA a good place for womenIn the remainder of this post, I'll describe features of the EA community that I think can benefit women. This isn't meant to be an exhaustive list, nor are these things listed in any particular order. I'll also highlight how a few of these features can cut both ways.(1) People defy conventional social normsWhy this can be good for women: EA has less gendered expectations regarding behavior and appearance, which can be good, because such expectations often serve to oppress women.Example: EA is famously not into wasting money and time on unimportant stuff. In other spheres, women face pressure to invest in makeup, nails, hair, clothes, Botox, buccal fat removal, BBLs, and lots of other stuff you probably shouldn’t Google. I enjoy doing my hair and makeup sometimes, but it matters to me that I feel basically no pressure to do this in EA spaces.Why this can be bad for women: At times, EA may also subvert certain norms that are good for women.Example: I still think it’s a bit weird that a post that says—based on scant evidence—that “if I could pop back in time to witness the [interactions reported in the TIME article], I personally would think in 80% of cases that the accused had done nothing wrong” has 349 upvotes. One reasonable, good-for-women social norm that is being questioned here: believing most women who make accusations of sexual misconduct.(2) People value things about you that are important (e.g., how kind you are, how hard you work, how good your work is)Why this can be good for women: Society tends to value people for the wrong reasons (e.g., thinness), but often applies these standards more stringently to women (who are more affected by, e.g., weight discrimination). By contrast, many EAs and EA organizations make an active effort not to unfairly discriminate against people (although, of course, bias that exists in the outside world will invariably exist within EA, too).Example: Many EA organizations are intentional about not letting bias and discrimination influence their hiring practices. Last year, I applied to a job at an EA organization. This was the first job I had applied to in years where things that have no bearing on my work quality were intentionally excluded from the application process. The organization did this by revie...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things that can make EA a good place for women, published by lilly on April 6, 2023 on The Effective Altruism Forum.IntroductionMany people, including me, have criticized EA for being a subpar place for women in certain ways, including: women are underrepresented (which contributes to various problems), there are few mechanisms aimed at preventing sexual harassment before it occurs, there is limited guidance regarding appropriate behavior, the criteria for evaluating misconduct are questionable, and existing systems may be ill-equipped to respond to allegations of abuse.At the same time, I think EA does well on certain gender issues relative to other communities. My aim here is not to make a substantive feminist case for EA, respond to critiques in this vein, or argue that being involved in EA is on balance good for women, but to describe features of the EA community that may be good for women (or, at least, have been good for me). Several of the features I'll highlight are good for everyone, but contrast with features of the wider world that are often bad for women.My goal in writing this is threefold. First, as EA works to fix certain gender issues, it’s worth considering what to preserve. Second, I want to highlight practices and behaviors that I’ve appreciated in the hopes of others emulating them. Third, and most importantly, I want women who are considering whether EA is a community they want to be a part of to be aware of both the good and the bad.Eight features of EA that can make EA a good place for womenIn the remainder of this post, I'll describe features of the EA community that I think can benefit women. This isn't meant to be an exhaustive list, nor are these things listed in any particular order. I'll also highlight how a few of these features can cut both ways.(1) People defy conventional social normsWhy this can be good for women: EA has less gendered expectations regarding behavior and appearance, which can be good, because such expectations often serve to oppress women.Example: EA is famously not into wasting money and time on unimportant stuff. In other spheres, women face pressure to invest in makeup, nails, hair, clothes, Botox, buccal fat removal, BBLs, and lots of other stuff you probably shouldn’t Google. I enjoy doing my hair and makeup sometimes, but it matters to me that I feel basically no pressure to do this in EA spaces.Why this can be bad for women: At times, EA may also subvert certain norms that are good for women.Example: I still think it’s a bit weird that a post that says—based on scant evidence—that “if I could pop back in time to witness the [interactions reported in the TIME article], I personally would think in 80% of cases that the accused had done nothing wrong” has 349 upvotes. One reasonable, good-for-women social norm that is being questioned here: believing most women who make accusations of sexual misconduct.(2) People value things about you that are important (e.g., how kind you are, how hard you work, how good your work is)Why this can be good for women: Society tends to value people for the wrong reasons (e.g., thinness), but often applies these standards more stringently to women (who are more affected by, e.g., weight discrimination). By contrast, many EAs and EA organizations make an active effort not to unfairly discriminate against people (although, of course, bias that exists in the outside world will invariably exist within EA, too).Example: Many EA organizations are intentional about not letting bias and discrimination influence their hiring practices. Last year, I applied to a job at an EA organization. This was the first job I had applied to in years where things that have no bearing on my work quality were intentionally excluded from the application process. The organization did this by revie...]]>
lilly https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:27 None full 5504
mDG49CJyxzeN99ELz_NL_EA_EA EA - AISafety.world is a map of the AIS ecosystem by Hamish Doodles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AISafety.world is a map of the AIS ecosystem, published by Hamish Doodles on April 6, 2023 on The Effective Altruism Forum.The URL is aisafety.worldThe map displays a reasonably comprehensive list of organizations, people, and resources in the AI safety space, including:research organizationsblogs/forumspodcastsyoutube channelstraining programscareer supportfundersYou can hover over each item to get a short description, and click on each item to go to the relevant web page.The map is populated by this spreadsheet, so if you have corrections or suggestions please leave a comment.There's also a google form and a Discord channel for suggestions.Thanks to plex for getting this project off the ground, and Nonlinear for motivating/funding it through a bounty.PS, If you find this helpful, here are some other projects you may be interested in (these have nothing to do with me):aisafety.training gives a timeline of AI safety training opportunities available by plex using AISS's databaseaisafety.video gives a list of video/audio resources on AI safety by Jakub Kraus.aisafety.community lists communities working on AI safety (made by volunteers in Alignment Ecosystem Development)And plex wanted me to mention that Alignment Ecosystem Development has monthly calls to collect volunteers. So... go do that maybe!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Hamish Doodles https://forum.effectivealtruism.org/posts/mDG49CJyxzeN99ELz/aisafety-world-is-a-map-of-the-ais-ecosystem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AISafety.world is a map of the AIS ecosystem, published by Hamish Doodles on April 6, 2023 on The Effective Altruism Forum.The URL is aisafety.worldThe map displays a reasonably comprehensive list of organizations, people, and resources in the AI safety space, including:research organizationsblogs/forumspodcastsyoutube channelstraining programscareer supportfundersYou can hover over each item to get a short description, and click on each item to go to the relevant web page.The map is populated by this spreadsheet, so if you have corrections or suggestions please leave a comment.There's also a google form and a Discord channel for suggestions.Thanks to plex for getting this project off the ground, and Nonlinear for motivating/funding it through a bounty.PS, If you find this helpful, here are some other projects you may be interested in (these have nothing to do with me):aisafety.training gives a timeline of AI safety training opportunities available by plex using AISS's databaseaisafety.video gives a list of video/audio resources on AI safety by Jakub Kraus.aisafety.community lists communities working on AI safety (made by volunteers in Alignment Ecosystem Development)And plex wanted me to mention that Alignment Ecosystem Development has monthly calls to collect volunteers. So... go do that maybe!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 06 Apr 2023 13:10:19 +0000 EA - AISafety.world is a map of the AIS ecosystem by Hamish Doodles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AISafety.world is a map of the AIS ecosystem, published by Hamish Doodles on April 6, 2023 on The Effective Altruism Forum.The URL is aisafety.worldThe map displays a reasonably comprehensive list of organizations, people, and resources in the AI safety space, including:research organizationsblogs/forumspodcastsyoutube channelstraining programscareer supportfundersYou can hover over each item to get a short description, and click on each item to go to the relevant web page.The map is populated by this spreadsheet, so if you have corrections or suggestions please leave a comment.There's also a google form and a Discord channel for suggestions.Thanks to plex for getting this project off the ground, and Nonlinear for motivating/funding it through a bounty.PS, If you find this helpful, here are some other projects you may be interested in (these have nothing to do with me):aisafety.training gives a timeline of AI safety training opportunities available by plex using AISS's databaseaisafety.video gives a list of video/audio resources on AI safety by Jakub Kraus.aisafety.community lists communities working on AI safety (made by volunteers in Alignment Ecosystem Development)And plex wanted me to mention that Alignment Ecosystem Development has monthly calls to collect volunteers. So... go do that maybe!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AISafety.world is a map of the AIS ecosystem, published by Hamish Doodles on April 6, 2023 on The Effective Altruism Forum.The URL is aisafety.worldThe map displays a reasonably comprehensive list of organizations, people, and resources in the AI safety space, including:research organizationsblogs/forumspodcastsyoutube channelstraining programscareer supportfundersYou can hover over each item to get a short description, and click on each item to go to the relevant web page.The map is populated by this spreadsheet, so if you have corrections or suggestions please leave a comment.There's also a google form and a Discord channel for suggestions.Thanks to plex for getting this project off the ground, and Nonlinear for motivating/funding it through a bounty.PS, If you find this helpful, here are some other projects you may be interested in (these have nothing to do with me):aisafety.training gives a timeline of AI safety training opportunities available by plex using AISS's databaseaisafety.video gives a list of video/audio resources on AI safety by Jakub Kraus.aisafety.community lists communities working on AI safety (made by volunteers in Alignment Ecosystem Development)And plex wanted me to mention that Alignment Ecosystem Development has monthly calls to collect volunteers. So... go do that maybe!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Hamish Doodles https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:33 None full 5505
GrF7vBYdsXFSwvkyh_NL_EA_EA EA - I'm planning to recruit some random strangers to decide where to donate my money by David Clarke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm planning to recruit some random strangers to decide where to donate my money, published by David Clarke on April 6, 2023 on The Effective Altruism Forum.I wish to redistribute £100k I have inherited and have come up with the idea of recruiting a group of strangers in my city to decide which causes and charities the money should go towards.The plan is for 12-15 participants to be selected at random from the electoral roll. They will take part in roughly eight hours of facilitated discussion over a period of a few weeks, after which they will be asked to agree or vote on a number of one-off donations towards charitable causes. The scope of this will be unrestricted - the funds could go towards local, national or international projects - except that people will not be able to benefit from the money directly. The participants will be remunerated for their time.I am working with experts on philanthropy and deliberative democracy to design the process. The participants will be introduced to EA concepts (such as cause prioritisation and GiveWell) as part of the deliberation, although it is not an explicitly EA-aligned initiative.I think it will be an interesting exercise in democratic decision-making and reveal something about 'ordinary people's' attitudes to philanthropy. The participants will be asked whether they believe it to be a valuable and/or rewarding experience to take part in. We will ensure that the process is publicised and that learnings from it are recorded.I'm interested to hear the community's reactions to this idea and to know whether anything similar has been tried. (I'm aware of Giving Circles and of the EA Equality and Justice Project which ran a few years ago in the UK.) It will cost around £5k to administer the project, including facilitation and room hire. If anyone would be interested in supporting this, please let me know.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
David Clarke https://forum.effectivealtruism.org/posts/GrF7vBYdsXFSwvkyh/i-m-planning-to-recruit-some-random-strangers-to-decide Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm planning to recruit some random strangers to decide where to donate my money, published by David Clarke on April 6, 2023 on The Effective Altruism Forum.I wish to redistribute £100k I have inherited and have come up with the idea of recruiting a group of strangers in my city to decide which causes and charities the money should go towards.The plan is for 12-15 participants to be selected at random from the electoral roll. They will take part in roughly eight hours of facilitated discussion over a period of a few weeks, after which they will be asked to agree or vote on a number of one-off donations towards charitable causes. The scope of this will be unrestricted - the funds could go towards local, national or international projects - except that people will not be able to benefit from the money directly. The participants will be remunerated for their time.I am working with experts on philanthropy and deliberative democracy to design the process. The participants will be introduced to EA concepts (such as cause prioritisation and GiveWell) as part of the deliberation, although it is not an explicitly EA-aligned initiative.I think it will be an interesting exercise in democratic decision-making and reveal something about 'ordinary people's' attitudes to philanthropy. The participants will be asked whether they believe it to be a valuable and/or rewarding experience to take part in. We will ensure that the process is publicised and that learnings from it are recorded.I'm interested to hear the community's reactions to this idea and to know whether anything similar has been tried. (I'm aware of Giving Circles and of the EA Equality and Justice Project which ran a few years ago in the UK.) It will cost around £5k to administer the project, including facilitation and room hire. If anyone would be interested in supporting this, please let me know.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 06 Apr 2023 12:57:24 +0000 EA - I'm planning to recruit some random strangers to decide where to donate my money by David Clarke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm planning to recruit some random strangers to decide where to donate my money, published by David Clarke on April 6, 2023 on The Effective Altruism Forum.I wish to redistribute £100k I have inherited and have come up with the idea of recruiting a group of strangers in my city to decide which causes and charities the money should go towards.The plan is for 12-15 participants to be selected at random from the electoral roll. They will take part in roughly eight hours of facilitated discussion over a period of a few weeks, after which they will be asked to agree or vote on a number of one-off donations towards charitable causes. The scope of this will be unrestricted - the funds could go towards local, national or international projects - except that people will not be able to benefit from the money directly. The participants will be remunerated for their time.I am working with experts on philanthropy and deliberative democracy to design the process. The participants will be introduced to EA concepts (such as cause prioritisation and GiveWell) as part of the deliberation, although it is not an explicitly EA-aligned initiative.I think it will be an interesting exercise in democratic decision-making and reveal something about 'ordinary people's' attitudes to philanthropy. The participants will be asked whether they believe it to be a valuable and/or rewarding experience to take part in. We will ensure that the process is publicised and that learnings from it are recorded.I'm interested to hear the community's reactions to this idea and to know whether anything similar has been tried. (I'm aware of Giving Circles and of the EA Equality and Justice Project which ran a few years ago in the UK.) It will cost around £5k to administer the project, including facilitation and room hire. If anyone would be interested in supporting this, please let me know.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm planning to recruit some random strangers to decide where to donate my money, published by David Clarke on April 6, 2023 on The Effective Altruism Forum.I wish to redistribute £100k I have inherited and have come up with the idea of recruiting a group of strangers in my city to decide which causes and charities the money should go towards.The plan is for 12-15 participants to be selected at random from the electoral roll. They will take part in roughly eight hours of facilitated discussion over a period of a few weeks, after which they will be asked to agree or vote on a number of one-off donations towards charitable causes. The scope of this will be unrestricted - the funds could go towards local, national or international projects - except that people will not be able to benefit from the money directly. The participants will be remunerated for their time.I am working with experts on philanthropy and deliberative democracy to design the process. The participants will be introduced to EA concepts (such as cause prioritisation and GiveWell) as part of the deliberation, although it is not an explicitly EA-aligned initiative.I think it will be an interesting exercise in democratic decision-making and reveal something about 'ordinary people's' attitudes to philanthropy. The participants will be asked whether they believe it to be a valuable and/or rewarding experience to take part in. We will ensure that the process is publicised and that learnings from it are recorded.I'm interested to hear the community's reactions to this idea and to know whether anything similar has been tried. (I'm aware of Giving Circles and of the EA Equality and Justice Project which ran a few years ago in the UK.) It will cost around £5k to administer the project, including facilitation and room hire. If anyone would be interested in supporting this, please let me know.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
David Clarke https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:51 None full 5507
XvicpERcDFXnsMkfe_NL_EA_EA EA - Risks from GPT-4 Byproduct of Recursively Optimizing AIs by ben hayum Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Risks from GPT-4 Byproduct of Recursively Optimizing AIs, published by ben hayum on April 6, 2023 on The Effective Altruism Forum.Epistemic Status: At midnight three days ago, I saw some of the GPT-4 Byproduct Recursively Optimizing AIs below on twitter which freaked me out a little and lit a fire underneath me to write up this post, my first on LessWrong. Here, my main goal is to start a dialogue on this topic which from my (perhaps secluded) vantage point nobody seems to be talking about. I don’t expect to currently have the optimal diagnosis of the issue and prescription of end solutions.Acknowledgements: Thanks to my fellow Wisconsin AI Safety Initiative (WAISI) group organizers Austin Witte and Akhil Polamarasetty for giving feedback on this post. Organizing the WAISI community has been incredibly fruitful in being able to spar ideas with others and see which strongest ones survive. Only more to come.(from @anthrupad on twitter)IntroductionRecently, many people across the internet have used their access to GPT-4’s API to scheme up extra dangerous capabilities. These are capabilities which the AGI labs certainly could have done on their own and likely are doing. However, these AGI labs at the very least seem to be committed to safety. Some people may say they are following through on this well and others may say that they are not. Regardless, they have that stated intention, and have systems and policies in place to try to uphold it. Random people on the internet taking advantage of open source do not have this.As a result, people are using GPT-4 as the strategizing intelligence behind separate optimizing programs that can recursively self-improve in order to better pursue their goals. Note that it is not GPT-4 that is self-improving, as GPT-4’s weights are stagnant and not open sourced. Rather, it is the programs that use GPT-4’s large context window (as well as outside permanent memory in some cases) to iterate on a goal and get better and better at pursuing it every time.Here are two examples of what has resulted to give a taste:This version of the program failed, but another that worked could in theory very quickly generate and run potentially very influential code with little oversight or restriction on how each iteration improves.This tweet made me think to possibly brand these recursively optimizing AI as “Russian Shoggoth Dolls”The program pursues the instrumental goal of increasing its power and capability by writing the generic HTTP plugin in order to better get at its terminal goal of better coding pluginsEvidence of this kind of behavior is really, really bad. See Instrumentally Convergent Goals and the danger they presentEveryone in the AI Safety community should take a moment to look at these examples, particularly the latter, and contemplate the consequences. Even if GPT-4 is kept in the box, simply by letting people access it through an API, input tokens, and receive the output tokens, we might soon have what in effect seem like separate very early weak forms of agentic AGI running around the internet, going wild. This is scary.Digging DeeperThe internet has a vast distribution of individuals out there from a whole bunch of different backgrounds. Many of them, quite frankly, may want to simply just build cool AI and not give safety guards a second thought. Others may not particularly want to create AI that leads to bad consequences but haven’t engaged enough with arguments on risks that they are simply negligent.If we completely leave the creation of advanced LLM byproduct AI up to the internet with no regulations and no security checks, some people will beyond a doubt act irresponsibly in the AI that they create. This is a given. There are simply too many people out there. Everyone should be on the same page about this.Let’s look ...]]>
ben hayum https://forum.effectivealtruism.org/posts/XvicpERcDFXnsMkfe/risks-from-gpt-4-byproduct-of-recursively-optimizing-ais Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Risks from GPT-4 Byproduct of Recursively Optimizing AIs, published by ben hayum on April 6, 2023 on The Effective Altruism Forum.Epistemic Status: At midnight three days ago, I saw some of the GPT-4 Byproduct Recursively Optimizing AIs below on twitter which freaked me out a little and lit a fire underneath me to write up this post, my first on LessWrong. Here, my main goal is to start a dialogue on this topic which from my (perhaps secluded) vantage point nobody seems to be talking about. I don’t expect to currently have the optimal diagnosis of the issue and prescription of end solutions.Acknowledgements: Thanks to my fellow Wisconsin AI Safety Initiative (WAISI) group organizers Austin Witte and Akhil Polamarasetty for giving feedback on this post. Organizing the WAISI community has been incredibly fruitful in being able to spar ideas with others and see which strongest ones survive. Only more to come.(from @anthrupad on twitter)IntroductionRecently, many people across the internet have used their access to GPT-4’s API to scheme up extra dangerous capabilities. These are capabilities which the AGI labs certainly could have done on their own and likely are doing. However, these AGI labs at the very least seem to be committed to safety. Some people may say they are following through on this well and others may say that they are not. Regardless, they have that stated intention, and have systems and policies in place to try to uphold it. Random people on the internet taking advantage of open source do not have this.As a result, people are using GPT-4 as the strategizing intelligence behind separate optimizing programs that can recursively self-improve in order to better pursue their goals. Note that it is not GPT-4 that is self-improving, as GPT-4’s weights are stagnant and not open sourced. Rather, it is the programs that use GPT-4’s large context window (as well as outside permanent memory in some cases) to iterate on a goal and get better and better at pursuing it every time.Here are two examples of what has resulted to give a taste:This version of the program failed, but another that worked could in theory very quickly generate and run potentially very influential code with little oversight or restriction on how each iteration improves.This tweet made me think to possibly brand these recursively optimizing AI as “Russian Shoggoth Dolls”The program pursues the instrumental goal of increasing its power and capability by writing the generic HTTP plugin in order to better get at its terminal goal of better coding pluginsEvidence of this kind of behavior is really, really bad. See Instrumentally Convergent Goals and the danger they presentEveryone in the AI Safety community should take a moment to look at these examples, particularly the latter, and contemplate the consequences. Even if GPT-4 is kept in the box, simply by letting people access it through an API, input tokens, and receive the output tokens, we might soon have what in effect seem like separate very early weak forms of agentic AGI running around the internet, going wild. This is scary.Digging DeeperThe internet has a vast distribution of individuals out there from a whole bunch of different backgrounds. Many of them, quite frankly, may want to simply just build cool AI and not give safety guards a second thought. Others may not particularly want to create AI that leads to bad consequences but haven’t engaged enough with arguments on risks that they are simply negligent.If we completely leave the creation of advanced LLM byproduct AI up to the internet with no regulations and no security checks, some people will beyond a doubt act irresponsibly in the AI that they create. This is a given. There are simply too many people out there. Everyone should be on the same page about this.Let’s look ...]]>
Thu, 06 Apr 2023 12:20:55 +0000 EA - Risks from GPT-4 Byproduct of Recursively Optimizing AIs by ben hayum Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Risks from GPT-4 Byproduct of Recursively Optimizing AIs, published by ben hayum on April 6, 2023 on The Effective Altruism Forum.Epistemic Status: At midnight three days ago, I saw some of the GPT-4 Byproduct Recursively Optimizing AIs below on twitter which freaked me out a little and lit a fire underneath me to write up this post, my first on LessWrong. Here, my main goal is to start a dialogue on this topic which from my (perhaps secluded) vantage point nobody seems to be talking about. I don’t expect to currently have the optimal diagnosis of the issue and prescription of end solutions.Acknowledgements: Thanks to my fellow Wisconsin AI Safety Initiative (WAISI) group organizers Austin Witte and Akhil Polamarasetty for giving feedback on this post. Organizing the WAISI community has been incredibly fruitful in being able to spar ideas with others and see which strongest ones survive. Only more to come.(from @anthrupad on twitter)IntroductionRecently, many people across the internet have used their access to GPT-4’s API to scheme up extra dangerous capabilities. These are capabilities which the AGI labs certainly could have done on their own and likely are doing. However, these AGI labs at the very least seem to be committed to safety. Some people may say they are following through on this well and others may say that they are not. Regardless, they have that stated intention, and have systems and policies in place to try to uphold it. Random people on the internet taking advantage of open source do not have this.As a result, people are using GPT-4 as the strategizing intelligence behind separate optimizing programs that can recursively self-improve in order to better pursue their goals. Note that it is not GPT-4 that is self-improving, as GPT-4’s weights are stagnant and not open sourced. Rather, it is the programs that use GPT-4’s large context window (as well as outside permanent memory in some cases) to iterate on a goal and get better and better at pursuing it every time.Here are two examples of what has resulted to give a taste:This version of the program failed, but another that worked could in theory very quickly generate and run potentially very influential code with little oversight or restriction on how each iteration improves.This tweet made me think to possibly brand these recursively optimizing AI as “Russian Shoggoth Dolls”The program pursues the instrumental goal of increasing its power and capability by writing the generic HTTP plugin in order to better get at its terminal goal of better coding pluginsEvidence of this kind of behavior is really, really bad. See Instrumentally Convergent Goals and the danger they presentEveryone in the AI Safety community should take a moment to look at these examples, particularly the latter, and contemplate the consequences. Even if GPT-4 is kept in the box, simply by letting people access it through an API, input tokens, and receive the output tokens, we might soon have what in effect seem like separate very early weak forms of agentic AGI running around the internet, going wild. This is scary.Digging DeeperThe internet has a vast distribution of individuals out there from a whole bunch of different backgrounds. Many of them, quite frankly, may want to simply just build cool AI and not give safety guards a second thought. Others may not particularly want to create AI that leads to bad consequences but haven’t engaged enough with arguments on risks that they are simply negligent.If we completely leave the creation of advanced LLM byproduct AI up to the internet with no regulations and no security checks, some people will beyond a doubt act irresponsibly in the AI that they create. This is a given. There are simply too many people out there. Everyone should be on the same page about this.Let’s look ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Risks from GPT-4 Byproduct of Recursively Optimizing AIs, published by ben hayum on April 6, 2023 on The Effective Altruism Forum.Epistemic Status: At midnight three days ago, I saw some of the GPT-4 Byproduct Recursively Optimizing AIs below on twitter which freaked me out a little and lit a fire underneath me to write up this post, my first on LessWrong. Here, my main goal is to start a dialogue on this topic which from my (perhaps secluded) vantage point nobody seems to be talking about. I don’t expect to currently have the optimal diagnosis of the issue and prescription of end solutions.Acknowledgements: Thanks to my fellow Wisconsin AI Safety Initiative (WAISI) group organizers Austin Witte and Akhil Polamarasetty for giving feedback on this post. Organizing the WAISI community has been incredibly fruitful in being able to spar ideas with others and see which strongest ones survive. Only more to come.(from @anthrupad on twitter)IntroductionRecently, many people across the internet have used their access to GPT-4’s API to scheme up extra dangerous capabilities. These are capabilities which the AGI labs certainly could have done on their own and likely are doing. However, these AGI labs at the very least seem to be committed to safety. Some people may say they are following through on this well and others may say that they are not. Regardless, they have that stated intention, and have systems and policies in place to try to uphold it. Random people on the internet taking advantage of open source do not have this.As a result, people are using GPT-4 as the strategizing intelligence behind separate optimizing programs that can recursively self-improve in order to better pursue their goals. Note that it is not GPT-4 that is self-improving, as GPT-4’s weights are stagnant and not open sourced. Rather, it is the programs that use GPT-4’s large context window (as well as outside permanent memory in some cases) to iterate on a goal and get better and better at pursuing it every time.Here are two examples of what has resulted to give a taste:This version of the program failed, but another that worked could in theory very quickly generate and run potentially very influential code with little oversight or restriction on how each iteration improves.This tweet made me think to possibly brand these recursively optimizing AI as “Russian Shoggoth Dolls”The program pursues the instrumental goal of increasing its power and capability by writing the generic HTTP plugin in order to better get at its terminal goal of better coding pluginsEvidence of this kind of behavior is really, really bad. See Instrumentally Convergent Goals and the danger they presentEveryone in the AI Safety community should take a moment to look at these examples, particularly the latter, and contemplate the consequences. Even if GPT-4 is kept in the box, simply by letting people access it through an API, input tokens, and receive the output tokens, we might soon have what in effect seem like separate very early weak forms of agentic AGI running around the internet, going wild. This is scary.Digging DeeperThe internet has a vast distribution of individuals out there from a whole bunch of different backgrounds. Many of them, quite frankly, may want to simply just build cool AI and not give safety guards a second thought. Others may not particularly want to create AI that leads to bad consequences but haven’t engaged enough with arguments on risks that they are simply negligent.If we completely leave the creation of advanced LLM byproduct AI up to the internet with no regulations and no security checks, some people will beyond a doubt act irresponsibly in the AI that they create. This is a given. There are simply too many people out there. Everyone should be on the same page about this.Let’s look ...]]>
ben hayum https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:55 None full 5506
BWLAzZEA5K7HPr2CL_NL_EA_EA EA - Probabilities, Prioritization, and 'Bayesian Mindset' by Violet Hour Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probabilities, Prioritization, and 'Bayesian Mindset', published by Violet Hour on April 4, 2023 on The Effective Altruism Forum.Sometimes we use explicit probabilities as an input into our decision-making, and take such probabilities to offer something like a literal representation of our uncertainty. This practice of assigning explicit probabilities to claims is sociologically unusual, and clearly inspired by Bayesianism as a theory of how ideally rational agents should behave. But we are not such agents, and our theories of ideal rationality don’t straightforwardly entail that we should be assigning explicit probabilities to claims. This leaves the following question open:When, in practice, should the use of explicit probabilities inform our actions?Holden sketches one proto-approach to this question, under the name Bayesian Mindset. 'Bayesian Mindset' describes a longtermist EA (LEA)-ish approach to quantifying uncertainty, and using the resulting quantification to make decisions in a way that’s close to Expected Value Maximization. Holden gestures at an ‘EA cluster’ of principles related to thought and action, and discusses its costs and benefits. His post contains much that I agree with:We both agree that Bayesian Mindset is undervalued by the rest of society, and shows promise as a way to clarify important disagreements.We both agree that there’s “a large gulf” between the theoretical underpinnings of Bayesian epistemology, and the practices prescribed by Bayesian Mindset.We both agree on a holistic conception of what Bayesian Mindset is — “an interesting experiment in gaining certain benefits [rather than] the correct way to make decisions.”However, I feel as though my sentiments part with Holden on certain issues, and so use his post as a springboard for my essay. Here’s the roadmap:In (§1), I introduce my question, and outline two cases under which it appears (to varying degrees) helpful to assign explicit probabilities to guide decision-making.I discuss complications with evaluating how best to approximate the theoretical Bayesian ideal in practice (§2).With the earlier sections in mind, I discuss two potential implications for cause prioritization (§3).I elaborate one potential downside of a community culture that emphasizes the use of explicit subjective probabilities (§4).I conclude in (§5).1. Philosophy and PracticeFirst, I want to look at the relationship between longtermist theory, and practical longtermist prioritization. Some terminology: I'll sometimes speak of ‘longtermist grantmaking’ to refer to grants directed towards areas like (for example) biorisk and AI risk. This terminology is imperfect, but nevertheless gestures at the sociological cluster with which I’m concerned.Very few of us are explicitly calculating the expected value of our career decisions, donations, and grants. That said, our decisions are clearly informed by a background sense of ‘our’ explicit probabilities and explicit utilities. In The Case for Strong Longtermism, Greaves and MacAskill defend deontic (action-guiding) strong longtermism as a theory which applies to “the most important decision situations facing agents today”, and support this thesis with reference to an estimate of the expected lives that can be saved via longtermist interventions. Greaves and MacAskill note that their analysis takes for granted a “subjective decision theory”, which assumes, for the purposes of an agent deciding which actions are best, that the agent “is in a position to grasp the states, acts and consequences that are involved in modeling her decision”, who then decides what to do, in large part, based on their explicit understanding of “the states, acts and consequences”.Of course, this is a philosophy paper, rather than a document on how to do grantmaking or policy in pra...]]>
Violet Hour https://forum.effectivealtruism.org/posts/BWLAzZEA5K7HPr2CL/probabilities-prioritization-and-bayesian-mindset Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probabilities, Prioritization, and 'Bayesian Mindset', published by Violet Hour on April 4, 2023 on The Effective Altruism Forum.Sometimes we use explicit probabilities as an input into our decision-making, and take such probabilities to offer something like a literal representation of our uncertainty. This practice of assigning explicit probabilities to claims is sociologically unusual, and clearly inspired by Bayesianism as a theory of how ideally rational agents should behave. But we are not such agents, and our theories of ideal rationality don’t straightforwardly entail that we should be assigning explicit probabilities to claims. This leaves the following question open:When, in practice, should the use of explicit probabilities inform our actions?Holden sketches one proto-approach to this question, under the name Bayesian Mindset. 'Bayesian Mindset' describes a longtermist EA (LEA)-ish approach to quantifying uncertainty, and using the resulting quantification to make decisions in a way that’s close to Expected Value Maximization. Holden gestures at an ‘EA cluster’ of principles related to thought and action, and discusses its costs and benefits. His post contains much that I agree with:We both agree that Bayesian Mindset is undervalued by the rest of society, and shows promise as a way to clarify important disagreements.We both agree that there’s “a large gulf” between the theoretical underpinnings of Bayesian epistemology, and the practices prescribed by Bayesian Mindset.We both agree on a holistic conception of what Bayesian Mindset is — “an interesting experiment in gaining certain benefits [rather than] the correct way to make decisions.”However, I feel as though my sentiments part with Holden on certain issues, and so use his post as a springboard for my essay. Here’s the roadmap:In (§1), I introduce my question, and outline two cases under which it appears (to varying degrees) helpful to assign explicit probabilities to guide decision-making.I discuss complications with evaluating how best to approximate the theoretical Bayesian ideal in practice (§2).With the earlier sections in mind, I discuss two potential implications for cause prioritization (§3).I elaborate one potential downside of a community culture that emphasizes the use of explicit subjective probabilities (§4).I conclude in (§5).1. Philosophy and PracticeFirst, I want to look at the relationship between longtermist theory, and practical longtermist prioritization. Some terminology: I'll sometimes speak of ‘longtermist grantmaking’ to refer to grants directed towards areas like (for example) biorisk and AI risk. This terminology is imperfect, but nevertheless gestures at the sociological cluster with which I’m concerned.Very few of us are explicitly calculating the expected value of our career decisions, donations, and grants. That said, our decisions are clearly informed by a background sense of ‘our’ explicit probabilities and explicit utilities. In The Case for Strong Longtermism, Greaves and MacAskill defend deontic (action-guiding) strong longtermism as a theory which applies to “the most important decision situations facing agents today”, and support this thesis with reference to an estimate of the expected lives that can be saved via longtermist interventions. Greaves and MacAskill note that their analysis takes for granted a “subjective decision theory”, which assumes, for the purposes of an agent deciding which actions are best, that the agent “is in a position to grasp the states, acts and consequences that are involved in modeling her decision”, who then decides what to do, in large part, based on their explicit understanding of “the states, acts and consequences”.Of course, this is a philosophy paper, rather than a document on how to do grantmaking or policy in pra...]]>
Wed, 05 Apr 2023 21:25:07 +0000 EA - Probabilities, Prioritization, and 'Bayesian Mindset' by Violet Hour Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probabilities, Prioritization, and 'Bayesian Mindset', published by Violet Hour on April 4, 2023 on The Effective Altruism Forum.Sometimes we use explicit probabilities as an input into our decision-making, and take such probabilities to offer something like a literal representation of our uncertainty. This practice of assigning explicit probabilities to claims is sociologically unusual, and clearly inspired by Bayesianism as a theory of how ideally rational agents should behave. But we are not such agents, and our theories of ideal rationality don’t straightforwardly entail that we should be assigning explicit probabilities to claims. This leaves the following question open:When, in practice, should the use of explicit probabilities inform our actions?Holden sketches one proto-approach to this question, under the name Bayesian Mindset. 'Bayesian Mindset' describes a longtermist EA (LEA)-ish approach to quantifying uncertainty, and using the resulting quantification to make decisions in a way that’s close to Expected Value Maximization. Holden gestures at an ‘EA cluster’ of principles related to thought and action, and discusses its costs and benefits. His post contains much that I agree with:We both agree that Bayesian Mindset is undervalued by the rest of society, and shows promise as a way to clarify important disagreements.We both agree that there’s “a large gulf” between the theoretical underpinnings of Bayesian epistemology, and the practices prescribed by Bayesian Mindset.We both agree on a holistic conception of what Bayesian Mindset is — “an interesting experiment in gaining certain benefits [rather than] the correct way to make decisions.”However, I feel as though my sentiments part with Holden on certain issues, and so use his post as a springboard for my essay. Here’s the roadmap:In (§1), I introduce my question, and outline two cases under which it appears (to varying degrees) helpful to assign explicit probabilities to guide decision-making.I discuss complications with evaluating how best to approximate the theoretical Bayesian ideal in practice (§2).With the earlier sections in mind, I discuss two potential implications for cause prioritization (§3).I elaborate one potential downside of a community culture that emphasizes the use of explicit subjective probabilities (§4).I conclude in (§5).1. Philosophy and PracticeFirst, I want to look at the relationship between longtermist theory, and practical longtermist prioritization. Some terminology: I'll sometimes speak of ‘longtermist grantmaking’ to refer to grants directed towards areas like (for example) biorisk and AI risk. This terminology is imperfect, but nevertheless gestures at the sociological cluster with which I’m concerned.Very few of us are explicitly calculating the expected value of our career decisions, donations, and grants. That said, our decisions are clearly informed by a background sense of ‘our’ explicit probabilities and explicit utilities. In The Case for Strong Longtermism, Greaves and MacAskill defend deontic (action-guiding) strong longtermism as a theory which applies to “the most important decision situations facing agents today”, and support this thesis with reference to an estimate of the expected lives that can be saved via longtermist interventions. Greaves and MacAskill note that their analysis takes for granted a “subjective decision theory”, which assumes, for the purposes of an agent deciding which actions are best, that the agent “is in a position to grasp the states, acts and consequences that are involved in modeling her decision”, who then decides what to do, in large part, based on their explicit understanding of “the states, acts and consequences”.Of course, this is a philosophy paper, rather than a document on how to do grantmaking or policy in pra...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probabilities, Prioritization, and 'Bayesian Mindset', published by Violet Hour on April 4, 2023 on The Effective Altruism Forum.Sometimes we use explicit probabilities as an input into our decision-making, and take such probabilities to offer something like a literal representation of our uncertainty. This practice of assigning explicit probabilities to claims is sociologically unusual, and clearly inspired by Bayesianism as a theory of how ideally rational agents should behave. But we are not such agents, and our theories of ideal rationality don’t straightforwardly entail that we should be assigning explicit probabilities to claims. This leaves the following question open:When, in practice, should the use of explicit probabilities inform our actions?Holden sketches one proto-approach to this question, under the name Bayesian Mindset. 'Bayesian Mindset' describes a longtermist EA (LEA)-ish approach to quantifying uncertainty, and using the resulting quantification to make decisions in a way that’s close to Expected Value Maximization. Holden gestures at an ‘EA cluster’ of principles related to thought and action, and discusses its costs and benefits. His post contains much that I agree with:We both agree that Bayesian Mindset is undervalued by the rest of society, and shows promise as a way to clarify important disagreements.We both agree that there’s “a large gulf” between the theoretical underpinnings of Bayesian epistemology, and the practices prescribed by Bayesian Mindset.We both agree on a holistic conception of what Bayesian Mindset is — “an interesting experiment in gaining certain benefits [rather than] the correct way to make decisions.”However, I feel as though my sentiments part with Holden on certain issues, and so use his post as a springboard for my essay. Here’s the roadmap:In (§1), I introduce my question, and outline two cases under which it appears (to varying degrees) helpful to assign explicit probabilities to guide decision-making.I discuss complications with evaluating how best to approximate the theoretical Bayesian ideal in practice (§2).With the earlier sections in mind, I discuss two potential implications for cause prioritization (§3).I elaborate one potential downside of a community culture that emphasizes the use of explicit subjective probabilities (§4).I conclude in (§5).1. Philosophy and PracticeFirst, I want to look at the relationship between longtermist theory, and practical longtermist prioritization. Some terminology: I'll sometimes speak of ‘longtermist grantmaking’ to refer to grants directed towards areas like (for example) biorisk and AI risk. This terminology is imperfect, but nevertheless gestures at the sociological cluster with which I’m concerned.Very few of us are explicitly calculating the expected value of our career decisions, donations, and grants. That said, our decisions are clearly informed by a background sense of ‘our’ explicit probabilities and explicit utilities. In The Case for Strong Longtermism, Greaves and MacAskill defend deontic (action-guiding) strong longtermism as a theory which applies to “the most important decision situations facing agents today”, and support this thesis with reference to an estimate of the expected lives that can be saved via longtermist interventions. Greaves and MacAskill note that their analysis takes for granted a “subjective decision theory”, which assumes, for the purposes of an agent deciding which actions are best, that the agent “is in a position to grasp the states, acts and consequences that are involved in modeling her decision”, who then decides what to do, in large part, based on their explicit understanding of “the states, acts and consequences”.Of course, this is a philosophy paper, rather than a document on how to do grantmaking or policy in pra...]]>
Violet Hour https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 43:38 None full 5493
S8xfeJGER74xwqLta_NL_EA_EA EA - Apply to the Cavendish Labs Fellowship (by 4/15) by Derik K Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the Cavendish Labs Fellowship (by 4/15), published by Derik K on April 3, 2023 on The Effective Altruism Forum.Cavendish Labs is a new research organization in Vermont focused on technical work on existential risks. We'd like to invite you to apply to our fellowships in AI safety and biosecurity!Positions are open for any time between June 1 and December 10, 2023. We pay a stipend of $1,500/month, plus food and housing are provided. Anyone with a technical background is encouraged to apply, even if you lack specific expertise in these fields.Applications for summer research fellows are closing April 15th. Apply here!(Note: we likely cannot accept people who need visa sponsorship to work in the U.S.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Derik K https://forum.effectivealtruism.org/posts/S8xfeJGER74xwqLta/apply-to-the-cavendish-labs-fellowship-by-4-15 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the Cavendish Labs Fellowship (by 4/15), published by Derik K on April 3, 2023 on The Effective Altruism Forum.Cavendish Labs is a new research organization in Vermont focused on technical work on existential risks. We'd like to invite you to apply to our fellowships in AI safety and biosecurity!Positions are open for any time between June 1 and December 10, 2023. We pay a stipend of $1,500/month, plus food and housing are provided. Anyone with a technical background is encouraged to apply, even if you lack specific expertise in these fields.Applications for summer research fellows are closing April 15th. Apply here!(Note: we likely cannot accept people who need visa sponsorship to work in the U.S.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 05 Apr 2023 19:09:01 +0000 EA - Apply to the Cavendish Labs Fellowship (by 4/15) by Derik K Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the Cavendish Labs Fellowship (by 4/15), published by Derik K on April 3, 2023 on The Effective Altruism Forum.Cavendish Labs is a new research organization in Vermont focused on technical work on existential risks. We'd like to invite you to apply to our fellowships in AI safety and biosecurity!Positions are open for any time between June 1 and December 10, 2023. We pay a stipend of $1,500/month, plus food and housing are provided. Anyone with a technical background is encouraged to apply, even if you lack specific expertise in these fields.Applications for summer research fellows are closing April 15th. Apply here!(Note: we likely cannot accept people who need visa sponsorship to work in the U.S.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the Cavendish Labs Fellowship (by 4/15), published by Derik K on April 3, 2023 on The Effective Altruism Forum.Cavendish Labs is a new research organization in Vermont focused on technical work on existential risks. We'd like to invite you to apply to our fellowships in AI safety and biosecurity!Positions are open for any time between June 1 and December 10, 2023. We pay a stipend of $1,500/month, plus food and housing are provided. Anyone with a technical background is encouraged to apply, even if you lack specific expertise in these fields.Applications for summer research fellows are closing April 15th. Apply here!(Note: we likely cannot accept people who need visa sponsorship to work in the U.S.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Derik K https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:58 None full 5494
akn2BFhhM9CzwpLEA_NL_EA_EA EA - Wisdom of the Crowd vs. "the Best of the Best of the Best" by nikos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wisdom of the Crowd vs. "the Best of the Best of the Best", published by nikos on April 4, 2023 on The Effective Altruism Forum.SummaryThis post asks whether we can improve forecasts for binary questions merely by selecting a few accomplished forecasters from a larger pool.Using Metaculus data, it comparesthe Community Prediction (a recency-weighted median of all forecasts) witha counterfactual Community Prediction that combines forecasts from only the best 5, 10, ..., 30 forecasters based on past performance (the "Best") anda counterfactual Community Prediction with all other forecasters (the "Rest") andthe Metaculus Prediction, Metaculus' proprietary aggregation algorithm that weighs forecasters based on past performance and extremises forecasts (i.e. pushes them towards either 0 or 1)The ensemble of the "Best" almost always performs worse on average than the Community Prediction with all forecastersThe "Best" outperforms the ensemble of all other forecaster (the "Rest") in some instances.the "Best" never outperform the "Rest" on average for questions with more than 200 forecastersperformance of the "Best" improves as their size increases. They never outperform the "Rest" on average at size 5, sometimes outperform it at size 10-20 and reliably outperform it for size 20+ (but only for questions with fewer than 200 forecasters)The Metaculus Prediction on average outperforms all other approaches in most instances, but may have less of an advantage against the Community Prediction for questions with more forecastersThe code is published here.Conflict of interest noteI am an employee of Metaculus. I think this didn't influence my analysis, but then of course I'd think that, and there may be things I haven't thought about.IntroductionLet's say you had access to a large number of forecasters and you were interested in getting the best possible forecast for something. Maybe you're running a prediction platform (good job!). Or you're the head of an important organisation that needs to make an important decision. Or you just really really really want to correctly guess the weight of an ox.What are you going to do? Most likely, you would ask everyone for their forecast, throw the individual predictions together, stir a bit, and pull out some combined forecast. The easiest way to do this is to just take the mean or median of all individual forecasts, or, probably better for binary forecasts, the geometric mean of odds. If you stir a bit harder, you could get a weighted, rather than an unweighted combination of forecasts. That is, when combining predictions you give forecasters different weights based on their past performance. This seems like an obvious idea, but in reality it is really hard to pull off. This is called the forecast combination puzzle: estimating weights from past data is often noisy or biased and therefore a simple unweighted ensemble often performs best.Instead of estimating precise weights, you could just decide to take the X best forecasters based on past performance and use only their forecasts to form a smaller ensemble. (Effectively, this would just give those forecasters a weight of 1 and everyone else a weight of 0). Presumably, when choosing your X, there is a trade-off between "having better forecasters" and "having more forecasters" (see this and this analysis on why more forecasters might be good).(Note that what I'm analysing here is not actually a selection of the best available forecasters. The selection process is quite distinct from the one used for say Superforecasters, or Metaculus Pro Forecasters, who are identified using a variety of criteria. And see the Discussion section for additional factors not studied here that would likely affect the performance of such a forecasting group.)MethodsTo get some insights, I analys...]]>
nikos https://forum.effectivealtruism.org/posts/akn2BFhhM9CzwpLEA/wisdom-of-the-crowd-vs-the-best-of-the-best-of-the-best Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wisdom of the Crowd vs. "the Best of the Best of the Best", published by nikos on April 4, 2023 on The Effective Altruism Forum.SummaryThis post asks whether we can improve forecasts for binary questions merely by selecting a few accomplished forecasters from a larger pool.Using Metaculus data, it comparesthe Community Prediction (a recency-weighted median of all forecasts) witha counterfactual Community Prediction that combines forecasts from only the best 5, 10, ..., 30 forecasters based on past performance (the "Best") anda counterfactual Community Prediction with all other forecasters (the "Rest") andthe Metaculus Prediction, Metaculus' proprietary aggregation algorithm that weighs forecasters based on past performance and extremises forecasts (i.e. pushes them towards either 0 or 1)The ensemble of the "Best" almost always performs worse on average than the Community Prediction with all forecastersThe "Best" outperforms the ensemble of all other forecaster (the "Rest") in some instances.the "Best" never outperform the "Rest" on average for questions with more than 200 forecastersperformance of the "Best" improves as their size increases. They never outperform the "Rest" on average at size 5, sometimes outperform it at size 10-20 and reliably outperform it for size 20+ (but only for questions with fewer than 200 forecasters)The Metaculus Prediction on average outperforms all other approaches in most instances, but may have less of an advantage against the Community Prediction for questions with more forecastersThe code is published here.Conflict of interest noteI am an employee of Metaculus. I think this didn't influence my analysis, but then of course I'd think that, and there may be things I haven't thought about.IntroductionLet's say you had access to a large number of forecasters and you were interested in getting the best possible forecast for something. Maybe you're running a prediction platform (good job!). Or you're the head of an important organisation that needs to make an important decision. Or you just really really really want to correctly guess the weight of an ox.What are you going to do? Most likely, you would ask everyone for their forecast, throw the individual predictions together, stir a bit, and pull out some combined forecast. The easiest way to do this is to just take the mean or median of all individual forecasts, or, probably better for binary forecasts, the geometric mean of odds. If you stir a bit harder, you could get a weighted, rather than an unweighted combination of forecasts. That is, when combining predictions you give forecasters different weights based on their past performance. This seems like an obvious idea, but in reality it is really hard to pull off. This is called the forecast combination puzzle: estimating weights from past data is often noisy or biased and therefore a simple unweighted ensemble often performs best.Instead of estimating precise weights, you could just decide to take the X best forecasters based on past performance and use only their forecasts to form a smaller ensemble. (Effectively, this would just give those forecasters a weight of 1 and everyone else a weight of 0). Presumably, when choosing your X, there is a trade-off between "having better forecasters" and "having more forecasters" (see this and this analysis on why more forecasters might be good).(Note that what I'm analysing here is not actually a selection of the best available forecasters. The selection process is quite distinct from the one used for say Superforecasters, or Metaculus Pro Forecasters, who are identified using a variety of criteria. And see the Discussion section for additional factors not studied here that would likely affect the performance of such a forecasting group.)MethodsTo get some insights, I analys...]]>
Wed, 05 Apr 2023 07:19:05 +0000 EA - Wisdom of the Crowd vs. "the Best of the Best of the Best" by nikos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wisdom of the Crowd vs. "the Best of the Best of the Best", published by nikos on April 4, 2023 on The Effective Altruism Forum.SummaryThis post asks whether we can improve forecasts for binary questions merely by selecting a few accomplished forecasters from a larger pool.Using Metaculus data, it comparesthe Community Prediction (a recency-weighted median of all forecasts) witha counterfactual Community Prediction that combines forecasts from only the best 5, 10, ..., 30 forecasters based on past performance (the "Best") anda counterfactual Community Prediction with all other forecasters (the "Rest") andthe Metaculus Prediction, Metaculus' proprietary aggregation algorithm that weighs forecasters based on past performance and extremises forecasts (i.e. pushes them towards either 0 or 1)The ensemble of the "Best" almost always performs worse on average than the Community Prediction with all forecastersThe "Best" outperforms the ensemble of all other forecaster (the "Rest") in some instances.the "Best" never outperform the "Rest" on average for questions with more than 200 forecastersperformance of the "Best" improves as their size increases. They never outperform the "Rest" on average at size 5, sometimes outperform it at size 10-20 and reliably outperform it for size 20+ (but only for questions with fewer than 200 forecasters)The Metaculus Prediction on average outperforms all other approaches in most instances, but may have less of an advantage against the Community Prediction for questions with more forecastersThe code is published here.Conflict of interest noteI am an employee of Metaculus. I think this didn't influence my analysis, but then of course I'd think that, and there may be things I haven't thought about.IntroductionLet's say you had access to a large number of forecasters and you were interested in getting the best possible forecast for something. Maybe you're running a prediction platform (good job!). Or you're the head of an important organisation that needs to make an important decision. Or you just really really really want to correctly guess the weight of an ox.What are you going to do? Most likely, you would ask everyone for their forecast, throw the individual predictions together, stir a bit, and pull out some combined forecast. The easiest way to do this is to just take the mean or median of all individual forecasts, or, probably better for binary forecasts, the geometric mean of odds. If you stir a bit harder, you could get a weighted, rather than an unweighted combination of forecasts. That is, when combining predictions you give forecasters different weights based on their past performance. This seems like an obvious idea, but in reality it is really hard to pull off. This is called the forecast combination puzzle: estimating weights from past data is often noisy or biased and therefore a simple unweighted ensemble often performs best.Instead of estimating precise weights, you could just decide to take the X best forecasters based on past performance and use only their forecasts to form a smaller ensemble. (Effectively, this would just give those forecasters a weight of 1 and everyone else a weight of 0). Presumably, when choosing your X, there is a trade-off between "having better forecasters" and "having more forecasters" (see this and this analysis on why more forecasters might be good).(Note that what I'm analysing here is not actually a selection of the best available forecasters. The selection process is quite distinct from the one used for say Superforecasters, or Metaculus Pro Forecasters, who are identified using a variety of criteria. And see the Discussion section for additional factors not studied here that would likely affect the performance of such a forecasting group.)MethodsTo get some insights, I analys...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wisdom of the Crowd vs. "the Best of the Best of the Best", published by nikos on April 4, 2023 on The Effective Altruism Forum.SummaryThis post asks whether we can improve forecasts for binary questions merely by selecting a few accomplished forecasters from a larger pool.Using Metaculus data, it comparesthe Community Prediction (a recency-weighted median of all forecasts) witha counterfactual Community Prediction that combines forecasts from only the best 5, 10, ..., 30 forecasters based on past performance (the "Best") anda counterfactual Community Prediction with all other forecasters (the "Rest") andthe Metaculus Prediction, Metaculus' proprietary aggregation algorithm that weighs forecasters based on past performance and extremises forecasts (i.e. pushes them towards either 0 or 1)The ensemble of the "Best" almost always performs worse on average than the Community Prediction with all forecastersThe "Best" outperforms the ensemble of all other forecaster (the "Rest") in some instances.the "Best" never outperform the "Rest" on average for questions with more than 200 forecastersperformance of the "Best" improves as their size increases. They never outperform the "Rest" on average at size 5, sometimes outperform it at size 10-20 and reliably outperform it for size 20+ (but only for questions with fewer than 200 forecasters)The Metaculus Prediction on average outperforms all other approaches in most instances, but may have less of an advantage against the Community Prediction for questions with more forecastersThe code is published here.Conflict of interest noteI am an employee of Metaculus. I think this didn't influence my analysis, but then of course I'd think that, and there may be things I haven't thought about.IntroductionLet's say you had access to a large number of forecasters and you were interested in getting the best possible forecast for something. Maybe you're running a prediction platform (good job!). Or you're the head of an important organisation that needs to make an important decision. Or you just really really really want to correctly guess the weight of an ox.What are you going to do? Most likely, you would ask everyone for their forecast, throw the individual predictions together, stir a bit, and pull out some combined forecast. The easiest way to do this is to just take the mean or median of all individual forecasts, or, probably better for binary forecasts, the geometric mean of odds. If you stir a bit harder, you could get a weighted, rather than an unweighted combination of forecasts. That is, when combining predictions you give forecasters different weights based on their past performance. This seems like an obvious idea, but in reality it is really hard to pull off. This is called the forecast combination puzzle: estimating weights from past data is often noisy or biased and therefore a simple unweighted ensemble often performs best.Instead of estimating precise weights, you could just decide to take the X best forecasters based on past performance and use only their forecasts to form a smaller ensemble. (Effectively, this would just give those forecasters a weight of 1 and everyone else a weight of 0). Presumably, when choosing your X, there is a trade-off between "having better forecasters" and "having more forecasters" (see this and this analysis on why more forecasters might be good).(Note that what I'm analysing here is not actually a selection of the best available forecasters. The selection process is quite distinct from the one used for say Superforecasters, or Metaculus Pro Forecasters, who are identified using a variety of criteria. And see the Discussion section for additional factors not studied here that would likely affect the performance of such a forecasting group.)MethodsTo get some insights, I analys...]]>
nikos https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 20:37 None full 5489
hBSSn33BZggfkqErj_NL_EA_EA EA - New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development, published by Akash on April 5, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/hBSSn33BZggfkqErj/new-survey-46-of-americans-are-concerned-about-extinction-69 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development, published by Akash on April 5, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 05 Apr 2023 01:59:13 +0000 EA - New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development, published by Akash on April 5, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development, published by Akash on April 5, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:27 None full 5487
u3cJGX33zf32TsCMg_NL_EA_EA EA - Keep Chasing AI Safety Press Coverage by RedStateBlueState Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Keep Chasing AI Safety Press Coverage, published by RedStateBlueState on April 4, 2023 on The Effective Altruism Forum.On March 31, I made a post about how I think the AI Safety community should try hard to keep the momentum going by seeking out as much press coverage as it can, since keeping media attention is really hard but the reward can be really large. The following day, my post proceeded to get hidden under a bunch of April Fools day posts. Great irony.I think this point is extremely important and I'm scared that the AI Safety Community will not take full advantage of the present moment. So I've decided to write a longer post, both to bump the discussion back up and to elaborate on my thoughts.Why AI Safety Media Coverage Is So ImportantMedia coverage of AI Safety is, in my mind, critical in the AI Safety mission. I have two reasons for thinking this.The first is that we just need more people aware of AI Safety. Right now it's a fairly niche issue, both because AI as a whole hasn't gotten as much coverage as it deserves and because most people who have seen ChatGPT don't know anything about AI risk. You can't tackle an issue if nobody knows that it exists.The second reason relies on a simple fact of human psychology: the more people hear about AI Safety, the more seriously people will take the issue. This seems to be true even if the coverage is purporting to debunk the issue (which as I will discuss later I think will be fairly rare) - a phenomenon called the illusory truth effect. I also think this effect will be especially strong for AI Safety. Right now, in EA-adjacent circles, the argument over AI Safety is mostly a war of vibes. There is very little object-level discussion - it's all just "these people are relying way too much on their obsession with tech/rationality" or "oh my god these really smart people think the world could end within my lifetime". The way we (AI Safety) win this war of vibes, which will hopefully bleed out beyond the EA-adjacent sphere, is just by giving people more exposure to our side.(Personally I have been through this exact process, being on the skeptical side at first before gradually getting convinced simply by hearing respectable people concerned about it for rational reasons. It's really powerful!)Who is our target audience for media coverage? In the previous post, I identified three groups:Tech investors/philanthropists and potential future AI Safety researchers. The more these people take AI Risk seriously, the more funding there will be for new / expanded research groups and the more researchers will choose to go into AI Safety.AI Capabilities people. Right now, people deploying AI capabilities - and even some of the people building them - have no idea of the risks involved. This has lead to dangerous actions like people giving ChatGPT access to Python's exec function and Microsoft researchers writing "Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work" in their paper. AI capabilities people taking AI Safety seriously will lead to fewer of these dangerous actions.Political actors. Right now AI regulation is virtually non-existent and we need this to change. Even if you think regulation does nothing good but slow progress down, that would actually be remarkable progress in this case. Political types are also the most likely to read press coverage.Note that press coverage is worth it even if few people from these three groups directly see it. Information and attitudes naturally flow throughout a society, which means that these three groups will get more exposure to the issue even without reading the relevant articles themselves. We just have to get the word out.Why Maintaining Media Coverage Will Take A Lot Of EffortThe media cycle is brutal.You work...]]>
RedStateBlueState https://forum.effectivealtruism.org/posts/u3cJGX33zf32TsCMg/keep-chasing-ai-safety-press-coverage Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Keep Chasing AI Safety Press Coverage, published by RedStateBlueState on April 4, 2023 on The Effective Altruism Forum.On March 31, I made a post about how I think the AI Safety community should try hard to keep the momentum going by seeking out as much press coverage as it can, since keeping media attention is really hard but the reward can be really large. The following day, my post proceeded to get hidden under a bunch of April Fools day posts. Great irony.I think this point is extremely important and I'm scared that the AI Safety Community will not take full advantage of the present moment. So I've decided to write a longer post, both to bump the discussion back up and to elaborate on my thoughts.Why AI Safety Media Coverage Is So ImportantMedia coverage of AI Safety is, in my mind, critical in the AI Safety mission. I have two reasons for thinking this.The first is that we just need more people aware of AI Safety. Right now it's a fairly niche issue, both because AI as a whole hasn't gotten as much coverage as it deserves and because most people who have seen ChatGPT don't know anything about AI risk. You can't tackle an issue if nobody knows that it exists.The second reason relies on a simple fact of human psychology: the more people hear about AI Safety, the more seriously people will take the issue. This seems to be true even if the coverage is purporting to debunk the issue (which as I will discuss later I think will be fairly rare) - a phenomenon called the illusory truth effect. I also think this effect will be especially strong for AI Safety. Right now, in EA-adjacent circles, the argument over AI Safety is mostly a war of vibes. There is very little object-level discussion - it's all just "these people are relying way too much on their obsession with tech/rationality" or "oh my god these really smart people think the world could end within my lifetime". The way we (AI Safety) win this war of vibes, which will hopefully bleed out beyond the EA-adjacent sphere, is just by giving people more exposure to our side.(Personally I have been through this exact process, being on the skeptical side at first before gradually getting convinced simply by hearing respectable people concerned about it for rational reasons. It's really powerful!)Who is our target audience for media coverage? In the previous post, I identified three groups:Tech investors/philanthropists and potential future AI Safety researchers. The more these people take AI Risk seriously, the more funding there will be for new / expanded research groups and the more researchers will choose to go into AI Safety.AI Capabilities people. Right now, people deploying AI capabilities - and even some of the people building them - have no idea of the risks involved. This has lead to dangerous actions like people giving ChatGPT access to Python's exec function and Microsoft researchers writing "Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work" in their paper. AI capabilities people taking AI Safety seriously will lead to fewer of these dangerous actions.Political actors. Right now AI regulation is virtually non-existent and we need this to change. Even if you think regulation does nothing good but slow progress down, that would actually be remarkable progress in this case. Political types are also the most likely to read press coverage.Note that press coverage is worth it even if few people from these three groups directly see it. Information and attitudes naturally flow throughout a society, which means that these three groups will get more exposure to the issue even without reading the relevant articles themselves. We just have to get the word out.Why Maintaining Media Coverage Will Take A Lot Of EffortThe media cycle is brutal.You work...]]>
Tue, 04 Apr 2023 23:38:59 +0000 EA - Keep Chasing AI Safety Press Coverage by RedStateBlueState Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Keep Chasing AI Safety Press Coverage, published by RedStateBlueState on April 4, 2023 on The Effective Altruism Forum.On March 31, I made a post about how I think the AI Safety community should try hard to keep the momentum going by seeking out as much press coverage as it can, since keeping media attention is really hard but the reward can be really large. The following day, my post proceeded to get hidden under a bunch of April Fools day posts. Great irony.I think this point is extremely important and I'm scared that the AI Safety Community will not take full advantage of the present moment. So I've decided to write a longer post, both to bump the discussion back up and to elaborate on my thoughts.Why AI Safety Media Coverage Is So ImportantMedia coverage of AI Safety is, in my mind, critical in the AI Safety mission. I have two reasons for thinking this.The first is that we just need more people aware of AI Safety. Right now it's a fairly niche issue, both because AI as a whole hasn't gotten as much coverage as it deserves and because most people who have seen ChatGPT don't know anything about AI risk. You can't tackle an issue if nobody knows that it exists.The second reason relies on a simple fact of human psychology: the more people hear about AI Safety, the more seriously people will take the issue. This seems to be true even if the coverage is purporting to debunk the issue (which as I will discuss later I think will be fairly rare) - a phenomenon called the illusory truth effect. I also think this effect will be especially strong for AI Safety. Right now, in EA-adjacent circles, the argument over AI Safety is mostly a war of vibes. There is very little object-level discussion - it's all just "these people are relying way too much on their obsession with tech/rationality" or "oh my god these really smart people think the world could end within my lifetime". The way we (AI Safety) win this war of vibes, which will hopefully bleed out beyond the EA-adjacent sphere, is just by giving people more exposure to our side.(Personally I have been through this exact process, being on the skeptical side at first before gradually getting convinced simply by hearing respectable people concerned about it for rational reasons. It's really powerful!)Who is our target audience for media coverage? In the previous post, I identified three groups:Tech investors/philanthropists and potential future AI Safety researchers. The more these people take AI Risk seriously, the more funding there will be for new / expanded research groups and the more researchers will choose to go into AI Safety.AI Capabilities people. Right now, people deploying AI capabilities - and even some of the people building them - have no idea of the risks involved. This has lead to dangerous actions like people giving ChatGPT access to Python's exec function and Microsoft researchers writing "Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work" in their paper. AI capabilities people taking AI Safety seriously will lead to fewer of these dangerous actions.Political actors. Right now AI regulation is virtually non-existent and we need this to change. Even if you think regulation does nothing good but slow progress down, that would actually be remarkable progress in this case. Political types are also the most likely to read press coverage.Note that press coverage is worth it even if few people from these three groups directly see it. Information and attitudes naturally flow throughout a society, which means that these three groups will get more exposure to the issue even without reading the relevant articles themselves. We just have to get the word out.Why Maintaining Media Coverage Will Take A Lot Of EffortThe media cycle is brutal.You work...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Keep Chasing AI Safety Press Coverage, published by RedStateBlueState on April 4, 2023 on The Effective Altruism Forum.On March 31, I made a post about how I think the AI Safety community should try hard to keep the momentum going by seeking out as much press coverage as it can, since keeping media attention is really hard but the reward can be really large. The following day, my post proceeded to get hidden under a bunch of April Fools day posts. Great irony.I think this point is extremely important and I'm scared that the AI Safety Community will not take full advantage of the present moment. So I've decided to write a longer post, both to bump the discussion back up and to elaborate on my thoughts.Why AI Safety Media Coverage Is So ImportantMedia coverage of AI Safety is, in my mind, critical in the AI Safety mission. I have two reasons for thinking this.The first is that we just need more people aware of AI Safety. Right now it's a fairly niche issue, both because AI as a whole hasn't gotten as much coverage as it deserves and because most people who have seen ChatGPT don't know anything about AI risk. You can't tackle an issue if nobody knows that it exists.The second reason relies on a simple fact of human psychology: the more people hear about AI Safety, the more seriously people will take the issue. This seems to be true even if the coverage is purporting to debunk the issue (which as I will discuss later I think will be fairly rare) - a phenomenon called the illusory truth effect. I also think this effect will be especially strong for AI Safety. Right now, in EA-adjacent circles, the argument over AI Safety is mostly a war of vibes. There is very little object-level discussion - it's all just "these people are relying way too much on their obsession with tech/rationality" or "oh my god these really smart people think the world could end within my lifetime". The way we (AI Safety) win this war of vibes, which will hopefully bleed out beyond the EA-adjacent sphere, is just by giving people more exposure to our side.(Personally I have been through this exact process, being on the skeptical side at first before gradually getting convinced simply by hearing respectable people concerned about it for rational reasons. It's really powerful!)Who is our target audience for media coverage? In the previous post, I identified three groups:Tech investors/philanthropists and potential future AI Safety researchers. The more these people take AI Risk seriously, the more funding there will be for new / expanded research groups and the more researchers will choose to go into AI Safety.AI Capabilities people. Right now, people deploying AI capabilities - and even some of the people building them - have no idea of the risks involved. This has lead to dangerous actions like people giving ChatGPT access to Python's exec function and Microsoft researchers writing "Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work" in their paper. AI capabilities people taking AI Safety seriously will lead to fewer of these dangerous actions.Political actors. Right now AI regulation is virtually non-existent and we need this to change. Even if you think regulation does nothing good but slow progress down, that would actually be remarkable progress in this case. Political types are also the most likely to read press coverage.Note that press coverage is worth it even if few people from these three groups directly see it. Information and attitudes naturally flow throughout a society, which means that these three groups will get more exposure to the issue even without reading the relevant articles themselves. We just have to get the word out.Why Maintaining Media Coverage Will Take A Lot Of EffortThe media cycle is brutal.You work...]]>
RedStateBlueState https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:01 None full 5488
A3ZLLanDZZt9sgGQ9_NL_EA_EA EA - New 80,000 Hours Podcast on high-impact climate philanthropy by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New 80,000 Hours Podcast on high-impact climate philanthropy, published by jackva on April 4, 2023 on The Effective Altruism Forum.This is a linkpost for a new 80,000 hours episode focused on how to engage in climate from an effective altruist perspective.The podcast lives here, including a selection of highlights as well as a full transcript and lots of additional links. Thanks to 80,000hours’ new feature rolled out on April 1st you can even listen to it!My Twitter thread is here.Rob and I are having a pretty wide-ranging conversation, here are the things we cover which I find most interesting for different audiences:For EAs not usually engaged in climate:(1) How ideas like mission hedging apply in climate given the expected curvature of climate damage (and expected climate damage, though we do not discuss this)(2) How engaging in a crowded space like climate suggests that one should primarily think about improving overall societal response, rather than incrementally adding to it (vis-a-vis causes like AI safety where, at least until recently, EAs were the main funders / interested parties)(3) How technological change is fundamentally the result of societal decisions and sustained public support and, as such, can be affected through philanthropy and advocacy.For people thinking about climate more:(1) The importance of thinking about a portfolio that is robust and hedgy rather than reliant on best-case assumptions.(2) The problem with evaluating climate solutions based on their local-short term effects given that the most effective climate actions are often (usually?) those that have no impacts locally in the short-term.(3) The way in which many prominent responses – such as focusing on short-term targets, on lifestyle changes, only on popular solutions, and on threshold targets (“1.5C or everything failed”) – have unintended negative consequences.(4) How one might think about the importance of engaging in different regions.(5) Interaction of climate with other causes, both near-termist (air pollution, energy poverty) and longtermist (climate is more important when disruptive ability is more dispersed, e.g. in the case of bio-risk concerns).For people engaging with donors / being potential donors themselves:(1) The way in which philanthropically funded advocacy can make a large difference, as this is something many (tech) donors do not intuitively understand. We go through this in quite some detail with the example of geothermal.(2) The relative magnitudes of philanthropy, public funding etc. and how this should shape what to use climate philanthropy for, primarily.(3) A description of several FP Climate Fund grants as well as the ongoing research that underpins this work.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
jackva https://forum.effectivealtruism.org/posts/A3ZLLanDZZt9sgGQ9/new-80-000-hours-podcast-on-high-impact-climate-philanthropy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New 80,000 Hours Podcast on high-impact climate philanthropy, published by jackva on April 4, 2023 on The Effective Altruism Forum.This is a linkpost for a new 80,000 hours episode focused on how to engage in climate from an effective altruist perspective.The podcast lives here, including a selection of highlights as well as a full transcript and lots of additional links. Thanks to 80,000hours’ new feature rolled out on April 1st you can even listen to it!My Twitter thread is here.Rob and I are having a pretty wide-ranging conversation, here are the things we cover which I find most interesting for different audiences:For EAs not usually engaged in climate:(1) How ideas like mission hedging apply in climate given the expected curvature of climate damage (and expected climate damage, though we do not discuss this)(2) How engaging in a crowded space like climate suggests that one should primarily think about improving overall societal response, rather than incrementally adding to it (vis-a-vis causes like AI safety where, at least until recently, EAs were the main funders / interested parties)(3) How technological change is fundamentally the result of societal decisions and sustained public support and, as such, can be affected through philanthropy and advocacy.For people thinking about climate more:(1) The importance of thinking about a portfolio that is robust and hedgy rather than reliant on best-case assumptions.(2) The problem with evaluating climate solutions based on their local-short term effects given that the most effective climate actions are often (usually?) those that have no impacts locally in the short-term.(3) The way in which many prominent responses – such as focusing on short-term targets, on lifestyle changes, only on popular solutions, and on threshold targets (“1.5C or everything failed”) – have unintended negative consequences.(4) How one might think about the importance of engaging in different regions.(5) Interaction of climate with other causes, both near-termist (air pollution, energy poverty) and longtermist (climate is more important when disruptive ability is more dispersed, e.g. in the case of bio-risk concerns).For people engaging with donors / being potential donors themselves:(1) The way in which philanthropically funded advocacy can make a large difference, as this is something many (tech) donors do not intuitively understand. We go through this in quite some detail with the example of geothermal.(2) The relative magnitudes of philanthropy, public funding etc. and how this should shape what to use climate philanthropy for, primarily.(3) A description of several FP Climate Fund grants as well as the ongoing research that underpins this work.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 04 Apr 2023 21:33:38 +0000 EA - New 80,000 Hours Podcast on high-impact climate philanthropy by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New 80,000 Hours Podcast on high-impact climate philanthropy, published by jackva on April 4, 2023 on The Effective Altruism Forum.This is a linkpost for a new 80,000 hours episode focused on how to engage in climate from an effective altruist perspective.The podcast lives here, including a selection of highlights as well as a full transcript and lots of additional links. Thanks to 80,000hours’ new feature rolled out on April 1st you can even listen to it!My Twitter thread is here.Rob and I are having a pretty wide-ranging conversation, here are the things we cover which I find most interesting for different audiences:For EAs not usually engaged in climate:(1) How ideas like mission hedging apply in climate given the expected curvature of climate damage (and expected climate damage, though we do not discuss this)(2) How engaging in a crowded space like climate suggests that one should primarily think about improving overall societal response, rather than incrementally adding to it (vis-a-vis causes like AI safety where, at least until recently, EAs were the main funders / interested parties)(3) How technological change is fundamentally the result of societal decisions and sustained public support and, as such, can be affected through philanthropy and advocacy.For people thinking about climate more:(1) The importance of thinking about a portfolio that is robust and hedgy rather than reliant on best-case assumptions.(2) The problem with evaluating climate solutions based on their local-short term effects given that the most effective climate actions are often (usually?) those that have no impacts locally in the short-term.(3) The way in which many prominent responses – such as focusing on short-term targets, on lifestyle changes, only on popular solutions, and on threshold targets (“1.5C or everything failed”) – have unintended negative consequences.(4) How one might think about the importance of engaging in different regions.(5) Interaction of climate with other causes, both near-termist (air pollution, energy poverty) and longtermist (climate is more important when disruptive ability is more dispersed, e.g. in the case of bio-risk concerns).For people engaging with donors / being potential donors themselves:(1) The way in which philanthropically funded advocacy can make a large difference, as this is something many (tech) donors do not intuitively understand. We go through this in quite some detail with the example of geothermal.(2) The relative magnitudes of philanthropy, public funding etc. and how this should shape what to use climate philanthropy for, primarily.(3) A description of several FP Climate Fund grants as well as the ongoing research that underpins this work.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New 80,000 Hours Podcast on high-impact climate philanthropy, published by jackva on April 4, 2023 on The Effective Altruism Forum.This is a linkpost for a new 80,000 hours episode focused on how to engage in climate from an effective altruist perspective.The podcast lives here, including a selection of highlights as well as a full transcript and lots of additional links. Thanks to 80,000hours’ new feature rolled out on April 1st you can even listen to it!My Twitter thread is here.Rob and I are having a pretty wide-ranging conversation, here are the things we cover which I find most interesting for different audiences:For EAs not usually engaged in climate:(1) How ideas like mission hedging apply in climate given the expected curvature of climate damage (and expected climate damage, though we do not discuss this)(2) How engaging in a crowded space like climate suggests that one should primarily think about improving overall societal response, rather than incrementally adding to it (vis-a-vis causes like AI safety where, at least until recently, EAs were the main funders / interested parties)(3) How technological change is fundamentally the result of societal decisions and sustained public support and, as such, can be affected through philanthropy and advocacy.For people thinking about climate more:(1) The importance of thinking about a portfolio that is robust and hedgy rather than reliant on best-case assumptions.(2) The problem with evaluating climate solutions based on their local-short term effects given that the most effective climate actions are often (usually?) those that have no impacts locally in the short-term.(3) The way in which many prominent responses – such as focusing on short-term targets, on lifestyle changes, only on popular solutions, and on threshold targets (“1.5C or everything failed”) – have unintended negative consequences.(4) How one might think about the importance of engaging in different regions.(5) Interaction of climate with other causes, both near-termist (air pollution, energy poverty) and longtermist (climate is more important when disruptive ability is more dispersed, e.g. in the case of bio-risk concerns).For people engaging with donors / being potential donors themselves:(1) The way in which philanthropically funded advocacy can make a large difference, as this is something many (tech) donors do not intuitively understand. We go through this in quite some detail with the example of geothermal.(2) The relative magnitudes of philanthropy, public funding etc. and how this should shape what to use climate philanthropy for, primarily.(3) A description of several FP Climate Fund grants as well as the ongoing research that underpins this work.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
jackva https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:52 None full 5484
3wBCKM3D2dXkXnpWY_NL_EA_EA EA - Announcing CEA’s Interim Managing Director by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing CEA’s Interim Managing Director, published by Ben West on April 4, 2023 on The Effective Altruism Forum.A few weeks ago, Max Dalton stepped down as Executive Director of CEA. Recently, the EV US and UK boards approved my appointment as Interim Managing Director in his stead.CEA accomplished a lot in 2022, and I’m honored to lead the team in this interim period.My communication style is more lighthearted than Max’s was; my model is:People don’t read announcements, and will engage longer if it’s funny.It seems good to bring humor to the Forum, and generally make EA a more fun place to be.Also, I would rather just be myself on the Forum. Maybe I will regret this if a journalist quotes me out of context, but I’d rather do the thing that seems right to me than the thing which seems best for (a seeming definition of) PR.But obviously some people have different opinions, and if you just want to skip to the serious bit you can go here.For everyone else: the remainder of this post goes over some of our highlights from the past year, as well as some suggestions I have for the future.Some CEA Highlights by TeamEvents TeamThe Events Team’s core metric is the number of connections made. More information about this can be found here.This metric has shown substantial growth, approximately doubling from 2021-2022. The team is to be congratulated for their hard work and success:However, recent victories won by Shrimp Welfare Project and others have made clear that there is an important audience which is underserved by current EAGs:I am therefore directing the events team to put EAG Bay Area 2024 actually in the San Francisco bay:EAG Bay Area 2024 (artist’s rendition)Online Team: EA ForumEngagement with object-level posts (those not about the EA community) has approximately quintupled over the past two years:Although weirdly engagement on posts about the EA community spiked in November and are only now going back down to normal levels:This is baffling because we did not roll out any large features in mid-November. If you have any guesses about what might have occurred here, please let us know.Groups TeamOur University Group Accelerator Program (UGAP) program has been growing rapidly:And thousands of people have been through our virtual programs:However, there is something important about meeting in person. Like many of you, I was disturbed to learn that CEA does not actually own a castle, despite the obvious community building benefits. I am therefore instructing Community Building Grants recipients that at least one third of grant expenditures must be castle-related. This relates to my AI safety strategy, which I hope to publish more about soon.Executive OfficeIt’s a great mark for our transparency that almost all of the data in this post can be found publicly. However, many stakeholders do not have access to that webpage.I am therefore instructing the executive office team to create a dashboard which can be accessed throughout the multiverse via correlated decision-making.Communications TeamPerhaps surprisingly, recent polling data from Rethink Priorities indicates that most people still don’t know what EA is, those that do are positive towards it as a brand, overall affect scores haven't noticeably changed post FTX collapse, and only a few percent of respondents mentioned FTX when asked about EA open-ended. It seems like these results hold both in the general US population and amongst students at “elite universities”.Since EA is about blindly following quantitative results, and the quantitative results indicate that the financial fraud scandals don’t affect our public image,... Nope, not even going to joke about that one. (Fraud is, famously, bad.)Serious PartMy roleIn all seriousness: this is a cruxy time for EA and I am honored to lead C...]]>
Ben West https://forum.effectivealtruism.org/posts/3wBCKM3D2dXkXnpWY/announcing-cea-s-interim-managing-director Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing CEA’s Interim Managing Director, published by Ben West on April 4, 2023 on The Effective Altruism Forum.A few weeks ago, Max Dalton stepped down as Executive Director of CEA. Recently, the EV US and UK boards approved my appointment as Interim Managing Director in his stead.CEA accomplished a lot in 2022, and I’m honored to lead the team in this interim period.My communication style is more lighthearted than Max’s was; my model is:People don’t read announcements, and will engage longer if it’s funny.It seems good to bring humor to the Forum, and generally make EA a more fun place to be.Also, I would rather just be myself on the Forum. Maybe I will regret this if a journalist quotes me out of context, but I’d rather do the thing that seems right to me than the thing which seems best for (a seeming definition of) PR.But obviously some people have different opinions, and if you just want to skip to the serious bit you can go here.For everyone else: the remainder of this post goes over some of our highlights from the past year, as well as some suggestions I have for the future.Some CEA Highlights by TeamEvents TeamThe Events Team’s core metric is the number of connections made. More information about this can be found here.This metric has shown substantial growth, approximately doubling from 2021-2022. The team is to be congratulated for their hard work and success:However, recent victories won by Shrimp Welfare Project and others have made clear that there is an important audience which is underserved by current EAGs:I am therefore directing the events team to put EAG Bay Area 2024 actually in the San Francisco bay:EAG Bay Area 2024 (artist’s rendition)Online Team: EA ForumEngagement with object-level posts (those not about the EA community) has approximately quintupled over the past two years:Although weirdly engagement on posts about the EA community spiked in November and are only now going back down to normal levels:This is baffling because we did not roll out any large features in mid-November. If you have any guesses about what might have occurred here, please let us know.Groups TeamOur University Group Accelerator Program (UGAP) program has been growing rapidly:And thousands of people have been through our virtual programs:However, there is something important about meeting in person. Like many of you, I was disturbed to learn that CEA does not actually own a castle, despite the obvious community building benefits. I am therefore instructing Community Building Grants recipients that at least one third of grant expenditures must be castle-related. This relates to my AI safety strategy, which I hope to publish more about soon.Executive OfficeIt’s a great mark for our transparency that almost all of the data in this post can be found publicly. However, many stakeholders do not have access to that webpage.I am therefore instructing the executive office team to create a dashboard which can be accessed throughout the multiverse via correlated decision-making.Communications TeamPerhaps surprisingly, recent polling data from Rethink Priorities indicates that most people still don’t know what EA is, those that do are positive towards it as a brand, overall affect scores haven't noticeably changed post FTX collapse, and only a few percent of respondents mentioned FTX when asked about EA open-ended. It seems like these results hold both in the general US population and amongst students at “elite universities”.Since EA is about blindly following quantitative results, and the quantitative results indicate that the financial fraud scandals don’t affect our public image,... Nope, not even going to joke about that one. (Fraud is, famously, bad.)Serious PartMy roleIn all seriousness: this is a cruxy time for EA and I am honored to lead C...]]>
Tue, 04 Apr 2023 19:03:43 +0000 EA - Announcing CEA’s Interim Managing Director by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing CEA’s Interim Managing Director, published by Ben West on April 4, 2023 on The Effective Altruism Forum.A few weeks ago, Max Dalton stepped down as Executive Director of CEA. Recently, the EV US and UK boards approved my appointment as Interim Managing Director in his stead.CEA accomplished a lot in 2022, and I’m honored to lead the team in this interim period.My communication style is more lighthearted than Max’s was; my model is:People don’t read announcements, and will engage longer if it’s funny.It seems good to bring humor to the Forum, and generally make EA a more fun place to be.Also, I would rather just be myself on the Forum. Maybe I will regret this if a journalist quotes me out of context, but I’d rather do the thing that seems right to me than the thing which seems best for (a seeming definition of) PR.But obviously some people have different opinions, and if you just want to skip to the serious bit you can go here.For everyone else: the remainder of this post goes over some of our highlights from the past year, as well as some suggestions I have for the future.Some CEA Highlights by TeamEvents TeamThe Events Team’s core metric is the number of connections made. More information about this can be found here.This metric has shown substantial growth, approximately doubling from 2021-2022. The team is to be congratulated for their hard work and success:However, recent victories won by Shrimp Welfare Project and others have made clear that there is an important audience which is underserved by current EAGs:I am therefore directing the events team to put EAG Bay Area 2024 actually in the San Francisco bay:EAG Bay Area 2024 (artist’s rendition)Online Team: EA ForumEngagement with object-level posts (those not about the EA community) has approximately quintupled over the past two years:Although weirdly engagement on posts about the EA community spiked in November and are only now going back down to normal levels:This is baffling because we did not roll out any large features in mid-November. If you have any guesses about what might have occurred here, please let us know.Groups TeamOur University Group Accelerator Program (UGAP) program has been growing rapidly:And thousands of people have been through our virtual programs:However, there is something important about meeting in person. Like many of you, I was disturbed to learn that CEA does not actually own a castle, despite the obvious community building benefits. I am therefore instructing Community Building Grants recipients that at least one third of grant expenditures must be castle-related. This relates to my AI safety strategy, which I hope to publish more about soon.Executive OfficeIt’s a great mark for our transparency that almost all of the data in this post can be found publicly. However, many stakeholders do not have access to that webpage.I am therefore instructing the executive office team to create a dashboard which can be accessed throughout the multiverse via correlated decision-making.Communications TeamPerhaps surprisingly, recent polling data from Rethink Priorities indicates that most people still don’t know what EA is, those that do are positive towards it as a brand, overall affect scores haven't noticeably changed post FTX collapse, and only a few percent of respondents mentioned FTX when asked about EA open-ended. It seems like these results hold both in the general US population and amongst students at “elite universities”.Since EA is about blindly following quantitative results, and the quantitative results indicate that the financial fraud scandals don’t affect our public image,... Nope, not even going to joke about that one. (Fraud is, famously, bad.)Serious PartMy roleIn all seriousness: this is a cruxy time for EA and I am honored to lead C...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing CEA’s Interim Managing Director, published by Ben West on April 4, 2023 on The Effective Altruism Forum.A few weeks ago, Max Dalton stepped down as Executive Director of CEA. Recently, the EV US and UK boards approved my appointment as Interim Managing Director in his stead.CEA accomplished a lot in 2022, and I’m honored to lead the team in this interim period.My communication style is more lighthearted than Max’s was; my model is:People don’t read announcements, and will engage longer if it’s funny.It seems good to bring humor to the Forum, and generally make EA a more fun place to be.Also, I would rather just be myself on the Forum. Maybe I will regret this if a journalist quotes me out of context, but I’d rather do the thing that seems right to me than the thing which seems best for (a seeming definition of) PR.But obviously some people have different opinions, and if you just want to skip to the serious bit you can go here.For everyone else: the remainder of this post goes over some of our highlights from the past year, as well as some suggestions I have for the future.Some CEA Highlights by TeamEvents TeamThe Events Team’s core metric is the number of connections made. More information about this can be found here.This metric has shown substantial growth, approximately doubling from 2021-2022. The team is to be congratulated for their hard work and success:However, recent victories won by Shrimp Welfare Project and others have made clear that there is an important audience which is underserved by current EAGs:I am therefore directing the events team to put EAG Bay Area 2024 actually in the San Francisco bay:EAG Bay Area 2024 (artist’s rendition)Online Team: EA ForumEngagement with object-level posts (those not about the EA community) has approximately quintupled over the past two years:Although weirdly engagement on posts about the EA community spiked in November and are only now going back down to normal levels:This is baffling because we did not roll out any large features in mid-November. If you have any guesses about what might have occurred here, please let us know.Groups TeamOur University Group Accelerator Program (UGAP) program has been growing rapidly:And thousands of people have been through our virtual programs:However, there is something important about meeting in person. Like many of you, I was disturbed to learn that CEA does not actually own a castle, despite the obvious community building benefits. I am therefore instructing Community Building Grants recipients that at least one third of grant expenditures must be castle-related. This relates to my AI safety strategy, which I hope to publish more about soon.Executive OfficeIt’s a great mark for our transparency that almost all of the data in this post can be found publicly. However, many stakeholders do not have access to that webpage.I am therefore instructing the executive office team to create a dashboard which can be accessed throughout the multiverse via correlated decision-making.Communications TeamPerhaps surprisingly, recent polling data from Rethink Priorities indicates that most people still don’t know what EA is, those that do are positive towards it as a brand, overall affect scores haven't noticeably changed post FTX collapse, and only a few percent of respondents mentioned FTX when asked about EA open-ended. It seems like these results hold both in the general US population and amongst students at “elite universities”.Since EA is about blindly following quantitative results, and the quantitative results indicate that the financial fraud scandals don’t affect our public image,... Nope, not even going to joke about that one. (Fraud is, famously, bad.)Serious PartMy roleIn all seriousness: this is a cruxy time for EA and I am honored to lead C...]]>
Ben West https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:00 None full 5483
ki8LedduXJaRLqRyo_NL_EA_EA EA - Wanted: Mental Health Program Manager at Rethink Wellbeing by Inga Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wanted: Mental Health Program Manager at Rethink Wellbeing, published by Inga on April 4, 2023 on The Effective Altruism Forum.Are you an experienced coach, or therapist interested in taking on a new challenge? Then this is for you! You can apply here (<20min). Soft deadline: 10.4. You can ask for an extension if needed.BackgroundOur emerging organization, Rethink Wellbeing aims to improve mental resilience and productivity at scale, starting in the Effective Altruism Community. We plan to do so by developing an integrative stepped-care system that, among others, includes peers to train psychological skills together, combined with accountability schemes and proven digital courses, and workbooks.We’d love your support as a program manager and “super”-facilitator to co-create, test and run our first batch of programs, as well as the corresponding training of other facilitators. The job opening is for someone with 50–100% FTE, working as a contractor or employee, preferably full-time, starting as soon as possible. Funding is available until the end of Aug, and we are fundraising to extend this period.What we offer youA high-impact job opportunityAn opportunity to work with a great team and community (short-term or long-term)Plenty of scope to shape the future of the programs and organizationFlexible remote workLots of learning and self-developmentCompetitive salary that depends on the level of experience, need, and responsibilityWhat the program manager ideally bringspreferably a degree in psychology/ psychotherapy or a comparable one,track record of an extraordinarily high participant or client ratings,experience with group facilitation, e.g., moderation, workshops, hosting events, or similar, ideally online,experience with treatment methods that foster mental health, connection, or productivity,experience with developing programs or trainings, ideally online,bonus: experience with training of facilitators, therapists, or coaches,bonus: experience in community building, working with Effective Altruism, Rationalist, or adjacent communities.Programs we runWe run online groups of 4–6 participants, running 1.5 - 2 hours sessions per week for ~ 6 weeks. In these groups, participants boost their mental wellbeing or productivity together, applying proven psychological techniques such as:Cognitive Behavioral Therapy,Accountability, and Behavior Change,Compassion, and mindfulness,Inner-Parts Work, such as Internal Family Systems.The facilitators of these groups do not teach the group but rather operate as moderators, enablers of structure and owner of vibes. Participants learn about the techniques and contents with the help of selected high-quality material such as online workbooks and online courses in between the sessions as homework. You can learn more about our programs on our website and by talking to us.The mission of our program managerprepare and run some pilot sessions and workshops to test what works best,(help to) create and iterate the program structure and content,test and run the first program versions with the highest quality possible (together with other super-facilitators),co-create the training for later facilitators to run these programs (optional),build the community of facilitators and participants (optional)We have an evaluation system in place to measure the success of sessions and the programs' pre-post.You can learn more about us on our website and about the job here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Inga https://forum.effectivealtruism.org/posts/ki8LedduXJaRLqRyo/wanted-mental-health-program-manager-at-rethink-wellbeing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wanted: Mental Health Program Manager at Rethink Wellbeing, published by Inga on April 4, 2023 on The Effective Altruism Forum.Are you an experienced coach, or therapist interested in taking on a new challenge? Then this is for you! You can apply here (<20min). Soft deadline: 10.4. You can ask for an extension if needed.BackgroundOur emerging organization, Rethink Wellbeing aims to improve mental resilience and productivity at scale, starting in the Effective Altruism Community. We plan to do so by developing an integrative stepped-care system that, among others, includes peers to train psychological skills together, combined with accountability schemes and proven digital courses, and workbooks.We’d love your support as a program manager and “super”-facilitator to co-create, test and run our first batch of programs, as well as the corresponding training of other facilitators. The job opening is for someone with 50–100% FTE, working as a contractor or employee, preferably full-time, starting as soon as possible. Funding is available until the end of Aug, and we are fundraising to extend this period.What we offer youA high-impact job opportunityAn opportunity to work with a great team and community (short-term or long-term)Plenty of scope to shape the future of the programs and organizationFlexible remote workLots of learning and self-developmentCompetitive salary that depends on the level of experience, need, and responsibilityWhat the program manager ideally bringspreferably a degree in psychology/ psychotherapy or a comparable one,track record of an extraordinarily high participant or client ratings,experience with group facilitation, e.g., moderation, workshops, hosting events, or similar, ideally online,experience with treatment methods that foster mental health, connection, or productivity,experience with developing programs or trainings, ideally online,bonus: experience with training of facilitators, therapists, or coaches,bonus: experience in community building, working with Effective Altruism, Rationalist, or adjacent communities.Programs we runWe run online groups of 4–6 participants, running 1.5 - 2 hours sessions per week for ~ 6 weeks. In these groups, participants boost their mental wellbeing or productivity together, applying proven psychological techniques such as:Cognitive Behavioral Therapy,Accountability, and Behavior Change,Compassion, and mindfulness,Inner-Parts Work, such as Internal Family Systems.The facilitators of these groups do not teach the group but rather operate as moderators, enablers of structure and owner of vibes. Participants learn about the techniques and contents with the help of selected high-quality material such as online workbooks and online courses in between the sessions as homework. You can learn more about our programs on our website and by talking to us.The mission of our program managerprepare and run some pilot sessions and workshops to test what works best,(help to) create and iterate the program structure and content,test and run the first program versions with the highest quality possible (together with other super-facilitators),co-create the training for later facilitators to run these programs (optional),build the community of facilitators and participants (optional)We have an evaluation system in place to measure the success of sessions and the programs' pre-post.You can learn more about us on our website and about the job here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 04 Apr 2023 15:32:50 +0000 EA - Wanted: Mental Health Program Manager at Rethink Wellbeing by Inga Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wanted: Mental Health Program Manager at Rethink Wellbeing, published by Inga on April 4, 2023 on The Effective Altruism Forum.Are you an experienced coach, or therapist interested in taking on a new challenge? Then this is for you! You can apply here (<20min). Soft deadline: 10.4. You can ask for an extension if needed.BackgroundOur emerging organization, Rethink Wellbeing aims to improve mental resilience and productivity at scale, starting in the Effective Altruism Community. We plan to do so by developing an integrative stepped-care system that, among others, includes peers to train psychological skills together, combined with accountability schemes and proven digital courses, and workbooks.We’d love your support as a program manager and “super”-facilitator to co-create, test and run our first batch of programs, as well as the corresponding training of other facilitators. The job opening is for someone with 50–100% FTE, working as a contractor or employee, preferably full-time, starting as soon as possible. Funding is available until the end of Aug, and we are fundraising to extend this period.What we offer youA high-impact job opportunityAn opportunity to work with a great team and community (short-term or long-term)Plenty of scope to shape the future of the programs and organizationFlexible remote workLots of learning and self-developmentCompetitive salary that depends on the level of experience, need, and responsibilityWhat the program manager ideally bringspreferably a degree in psychology/ psychotherapy or a comparable one,track record of an extraordinarily high participant or client ratings,experience with group facilitation, e.g., moderation, workshops, hosting events, or similar, ideally online,experience with treatment methods that foster mental health, connection, or productivity,experience with developing programs or trainings, ideally online,bonus: experience with training of facilitators, therapists, or coaches,bonus: experience in community building, working with Effective Altruism, Rationalist, or adjacent communities.Programs we runWe run online groups of 4–6 participants, running 1.5 - 2 hours sessions per week for ~ 6 weeks. In these groups, participants boost their mental wellbeing or productivity together, applying proven psychological techniques such as:Cognitive Behavioral Therapy,Accountability, and Behavior Change,Compassion, and mindfulness,Inner-Parts Work, such as Internal Family Systems.The facilitators of these groups do not teach the group but rather operate as moderators, enablers of structure and owner of vibes. Participants learn about the techniques and contents with the help of selected high-quality material such as online workbooks and online courses in between the sessions as homework. You can learn more about our programs on our website and by talking to us.The mission of our program managerprepare and run some pilot sessions and workshops to test what works best,(help to) create and iterate the program structure and content,test and run the first program versions with the highest quality possible (together with other super-facilitators),co-create the training for later facilitators to run these programs (optional),build the community of facilitators and participants (optional)We have an evaluation system in place to measure the success of sessions and the programs' pre-post.You can learn more about us on our website and about the job here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wanted: Mental Health Program Manager at Rethink Wellbeing, published by Inga on April 4, 2023 on The Effective Altruism Forum.Are you an experienced coach, or therapist interested in taking on a new challenge? Then this is for you! You can apply here (<20min). Soft deadline: 10.4. You can ask for an extension if needed.BackgroundOur emerging organization, Rethink Wellbeing aims to improve mental resilience and productivity at scale, starting in the Effective Altruism Community. We plan to do so by developing an integrative stepped-care system that, among others, includes peers to train psychological skills together, combined with accountability schemes and proven digital courses, and workbooks.We’d love your support as a program manager and “super”-facilitator to co-create, test and run our first batch of programs, as well as the corresponding training of other facilitators. The job opening is for someone with 50–100% FTE, working as a contractor or employee, preferably full-time, starting as soon as possible. Funding is available until the end of Aug, and we are fundraising to extend this period.What we offer youA high-impact job opportunityAn opportunity to work with a great team and community (short-term or long-term)Plenty of scope to shape the future of the programs and organizationFlexible remote workLots of learning and self-developmentCompetitive salary that depends on the level of experience, need, and responsibilityWhat the program manager ideally bringspreferably a degree in psychology/ psychotherapy or a comparable one,track record of an extraordinarily high participant or client ratings,experience with group facilitation, e.g., moderation, workshops, hosting events, or similar, ideally online,experience with treatment methods that foster mental health, connection, or productivity,experience with developing programs or trainings, ideally online,bonus: experience with training of facilitators, therapists, or coaches,bonus: experience in community building, working with Effective Altruism, Rationalist, or adjacent communities.Programs we runWe run online groups of 4–6 participants, running 1.5 - 2 hours sessions per week for ~ 6 weeks. In these groups, participants boost their mental wellbeing or productivity together, applying proven psychological techniques such as:Cognitive Behavioral Therapy,Accountability, and Behavior Change,Compassion, and mindfulness,Inner-Parts Work, such as Internal Family Systems.The facilitators of these groups do not teach the group but rather operate as moderators, enablers of structure and owner of vibes. Participants learn about the techniques and contents with the help of selected high-quality material such as online workbooks and online courses in between the sessions as homework. You can learn more about our programs on our website and by talking to us.The mission of our program managerprepare and run some pilot sessions and workshops to test what works best,(help to) create and iterate the program structure and content,test and run the first program versions with the highest quality possible (together with other super-facilitators),co-create the training for later facilitators to run these programs (optional),build the community of facilitators and participants (optional)We have an evaluation system in place to measure the success of sessions and the programs' pre-post.You can learn more about us on our website and about the job here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Inga https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:40 None full 5485
o7kaCgLg5YeP9zMHG_NL_EA_EA EA - Rebooting Tyve, an effective giving startup by Raoul Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rebooting Tyve, an effective giving startup, published by Raoul on April 4, 2023 on The Effective Altruism Forum.In January this year, I took over running Tyve, a start-up that promotes workplace giving. Through Tyve, employees set up recurring monthly donations to charity. The donations are simple to administer. And they come straight from their pre-tax earnings. So they save the average employee hundreds of pounds a year.Ben Clifford (@Clifford), Ben Olsen and Sam Geals in 2019 founded Tyve in 2019. After strong initial growth, Tyve they put Tyve into ‘maintenance mode’ in late 2021. Ben Clifford talks about these initial years in Lessons learned from Tyve, an effective giving startup.In this post, I’m going to cover:why I got involved in Tyve;why I believe Tyve could raise large sums for effective charities;what we’re doing differently this time around; andhow you could help (if you were so inclined).TL;DRIf you don’t want to read the whole thing, here’s a short summary.Only a fraction of adults in the UK who give to charity do it through the workplace (many more do in the US). Workplace giving seems a relatively undertapped channel.When Tyve launches at companies it gets high participation rates and is very sticky (high retention rates).Most donations are “new money” that would not otherwise have been donated to charity. This is especially true for the ~40% of donations that go to Tyve’s recommended (effective) charities.We’re making changes to make Tyve more attractive to companies to adopt. These include: making it free to use, adding impact reporting and testing donation matching for recommended (effective) charities.There are several (known) reasons why we may fail to get more companies using Tyve. These include charitable giving been seen as a ‘nice to have’ and there being a high hurdle for companies to do anything new.There’s an easy (and high EV) way you could help: introduce us to the company you work at!How I got involvedI’ve spent most of the last decade leading product and design teams at tech scale ups. Late last year, the most recent of these scale-ups went (the bad ‘boom’, not the good one, like at the end of a fist bump). I took it as a opportunity to look beyond the commercial tech world. I'd spent years with the next startup funding round as a key factor behind almost every decision. I was ready for something a bit different.I started to speak to some people in the EA community, talking through options, understanding where they saw the most potential impact.In parallel, I was wondering why smaller companies weren't offering ‘payroll giving’ to their employees. (This is the mechanism that enables employees to give to charity from pre-tax earnings.)At this point, I’d been giving a % of income to effective charities for several years. It felt meaningful and important. But it also required a fair bit of admin. I had a spreadsheet for tracking what I'd earned and logging donations (across multiple charities). And then trying to work out what this meant from a tax perspective (after accounting for GiftAid).I’d had access to payroll giving a decade ago when I’d worked at a massive company and it had made giving so simple. No need to track earnings and donations—and the tax benefits were automatic.With modern tech we must be able to make this available to all companies, even those without huge HR teams?Meeting Ben Clifford was serendipitous. Ben had already founded Tyve. Working with Ben O and Sam, he'd built pretty much the exact the product that I’d started sketching out in my mind (and in my terrible handwritten notes).I sat down with Ben in a bakery in what looked like an abandoned parking lot (his idea). After about 15 minutes, he asked me if I wanted to take over Tyve. Even better, him and Ben O were able to continue to he...]]>
Raoul https://forum.effectivealtruism.org/posts/o7kaCgLg5YeP9zMHG/rebooting-tyve-an-effective-giving-startup Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rebooting Tyve, an effective giving startup, published by Raoul on April 4, 2023 on The Effective Altruism Forum.In January this year, I took over running Tyve, a start-up that promotes workplace giving. Through Tyve, employees set up recurring monthly donations to charity. The donations are simple to administer. And they come straight from their pre-tax earnings. So they save the average employee hundreds of pounds a year.Ben Clifford (@Clifford), Ben Olsen and Sam Geals in 2019 founded Tyve in 2019. After strong initial growth, Tyve they put Tyve into ‘maintenance mode’ in late 2021. Ben Clifford talks about these initial years in Lessons learned from Tyve, an effective giving startup.In this post, I’m going to cover:why I got involved in Tyve;why I believe Tyve could raise large sums for effective charities;what we’re doing differently this time around; andhow you could help (if you were so inclined).TL;DRIf you don’t want to read the whole thing, here’s a short summary.Only a fraction of adults in the UK who give to charity do it through the workplace (many more do in the US). Workplace giving seems a relatively undertapped channel.When Tyve launches at companies it gets high participation rates and is very sticky (high retention rates).Most donations are “new money” that would not otherwise have been donated to charity. This is especially true for the ~40% of donations that go to Tyve’s recommended (effective) charities.We’re making changes to make Tyve more attractive to companies to adopt. These include: making it free to use, adding impact reporting and testing donation matching for recommended (effective) charities.There are several (known) reasons why we may fail to get more companies using Tyve. These include charitable giving been seen as a ‘nice to have’ and there being a high hurdle for companies to do anything new.There’s an easy (and high EV) way you could help: introduce us to the company you work at!How I got involvedI’ve spent most of the last decade leading product and design teams at tech scale ups. Late last year, the most recent of these scale-ups went (the bad ‘boom’, not the good one, like at the end of a fist bump). I took it as a opportunity to look beyond the commercial tech world. I'd spent years with the next startup funding round as a key factor behind almost every decision. I was ready for something a bit different.I started to speak to some people in the EA community, talking through options, understanding where they saw the most potential impact.In parallel, I was wondering why smaller companies weren't offering ‘payroll giving’ to their employees. (This is the mechanism that enables employees to give to charity from pre-tax earnings.)At this point, I’d been giving a % of income to effective charities for several years. It felt meaningful and important. But it also required a fair bit of admin. I had a spreadsheet for tracking what I'd earned and logging donations (across multiple charities). And then trying to work out what this meant from a tax perspective (after accounting for GiftAid).I’d had access to payroll giving a decade ago when I’d worked at a massive company and it had made giving so simple. No need to track earnings and donations—and the tax benefits were automatic.With modern tech we must be able to make this available to all companies, even those without huge HR teams?Meeting Ben Clifford was serendipitous. Ben had already founded Tyve. Working with Ben O and Sam, he'd built pretty much the exact the product that I’d started sketching out in my mind (and in my terrible handwritten notes).I sat down with Ben in a bakery in what looked like an abandoned parking lot (his idea). After about 15 minutes, he asked me if I wanted to take over Tyve. Even better, him and Ben O were able to continue to he...]]>
Tue, 04 Apr 2023 14:56:09 +0000 EA - Rebooting Tyve, an effective giving startup by Raoul Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rebooting Tyve, an effective giving startup, published by Raoul on April 4, 2023 on The Effective Altruism Forum.In January this year, I took over running Tyve, a start-up that promotes workplace giving. Through Tyve, employees set up recurring monthly donations to charity. The donations are simple to administer. And they come straight from their pre-tax earnings. So they save the average employee hundreds of pounds a year.Ben Clifford (@Clifford), Ben Olsen and Sam Geals in 2019 founded Tyve in 2019. After strong initial growth, Tyve they put Tyve into ‘maintenance mode’ in late 2021. Ben Clifford talks about these initial years in Lessons learned from Tyve, an effective giving startup.In this post, I’m going to cover:why I got involved in Tyve;why I believe Tyve could raise large sums for effective charities;what we’re doing differently this time around; andhow you could help (if you were so inclined).TL;DRIf you don’t want to read the whole thing, here’s a short summary.Only a fraction of adults in the UK who give to charity do it through the workplace (many more do in the US). Workplace giving seems a relatively undertapped channel.When Tyve launches at companies it gets high participation rates and is very sticky (high retention rates).Most donations are “new money” that would not otherwise have been donated to charity. This is especially true for the ~40% of donations that go to Tyve’s recommended (effective) charities.We’re making changes to make Tyve more attractive to companies to adopt. These include: making it free to use, adding impact reporting and testing donation matching for recommended (effective) charities.There are several (known) reasons why we may fail to get more companies using Tyve. These include charitable giving been seen as a ‘nice to have’ and there being a high hurdle for companies to do anything new.There’s an easy (and high EV) way you could help: introduce us to the company you work at!How I got involvedI’ve spent most of the last decade leading product and design teams at tech scale ups. Late last year, the most recent of these scale-ups went (the bad ‘boom’, not the good one, like at the end of a fist bump). I took it as a opportunity to look beyond the commercial tech world. I'd spent years with the next startup funding round as a key factor behind almost every decision. I was ready for something a bit different.I started to speak to some people in the EA community, talking through options, understanding where they saw the most potential impact.In parallel, I was wondering why smaller companies weren't offering ‘payroll giving’ to their employees. (This is the mechanism that enables employees to give to charity from pre-tax earnings.)At this point, I’d been giving a % of income to effective charities for several years. It felt meaningful and important. But it also required a fair bit of admin. I had a spreadsheet for tracking what I'd earned and logging donations (across multiple charities). And then trying to work out what this meant from a tax perspective (after accounting for GiftAid).I’d had access to payroll giving a decade ago when I’d worked at a massive company and it had made giving so simple. No need to track earnings and donations—and the tax benefits were automatic.With modern tech we must be able to make this available to all companies, even those without huge HR teams?Meeting Ben Clifford was serendipitous. Ben had already founded Tyve. Working with Ben O and Sam, he'd built pretty much the exact the product that I’d started sketching out in my mind (and in my terrible handwritten notes).I sat down with Ben in a bakery in what looked like an abandoned parking lot (his idea). After about 15 minutes, he asked me if I wanted to take over Tyve. Even better, him and Ben O were able to continue to he...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rebooting Tyve, an effective giving startup, published by Raoul on April 4, 2023 on The Effective Altruism Forum.In January this year, I took over running Tyve, a start-up that promotes workplace giving. Through Tyve, employees set up recurring monthly donations to charity. The donations are simple to administer. And they come straight from their pre-tax earnings. So they save the average employee hundreds of pounds a year.Ben Clifford (@Clifford), Ben Olsen and Sam Geals in 2019 founded Tyve in 2019. After strong initial growth, Tyve they put Tyve into ‘maintenance mode’ in late 2021. Ben Clifford talks about these initial years in Lessons learned from Tyve, an effective giving startup.In this post, I’m going to cover:why I got involved in Tyve;why I believe Tyve could raise large sums for effective charities;what we’re doing differently this time around; andhow you could help (if you were so inclined).TL;DRIf you don’t want to read the whole thing, here’s a short summary.Only a fraction of adults in the UK who give to charity do it through the workplace (many more do in the US). Workplace giving seems a relatively undertapped channel.When Tyve launches at companies it gets high participation rates and is very sticky (high retention rates).Most donations are “new money” that would not otherwise have been donated to charity. This is especially true for the ~40% of donations that go to Tyve’s recommended (effective) charities.We’re making changes to make Tyve more attractive to companies to adopt. These include: making it free to use, adding impact reporting and testing donation matching for recommended (effective) charities.There are several (known) reasons why we may fail to get more companies using Tyve. These include charitable giving been seen as a ‘nice to have’ and there being a high hurdle for companies to do anything new.There’s an easy (and high EV) way you could help: introduce us to the company you work at!How I got involvedI’ve spent most of the last decade leading product and design teams at tech scale ups. Late last year, the most recent of these scale-ups went (the bad ‘boom’, not the good one, like at the end of a fist bump). I took it as a opportunity to look beyond the commercial tech world. I'd spent years with the next startup funding round as a key factor behind almost every decision. I was ready for something a bit different.I started to speak to some people in the EA community, talking through options, understanding where they saw the most potential impact.In parallel, I was wondering why smaller companies weren't offering ‘payroll giving’ to their employees. (This is the mechanism that enables employees to give to charity from pre-tax earnings.)At this point, I’d been giving a % of income to effective charities for several years. It felt meaningful and important. But it also required a fair bit of admin. I had a spreadsheet for tracking what I'd earned and logging donations (across multiple charities). And then trying to work out what this meant from a tax perspective (after accounting for GiftAid).I’d had access to payroll giving a decade ago when I’d worked at a massive company and it had made giving so simple. No need to track earnings and donations—and the tax benefits were automatic.With modern tech we must be able to make this available to all companies, even those without huge HR teams?Meeting Ben Clifford was serendipitous. Ben had already founded Tyve. Working with Ben O and Sam, he'd built pretty much the exact the product that I’d started sketching out in my mind (and in my terrible handwritten notes).I sat down with Ben in a bakery in what looked like an abandoned parking lot (his idea). After about 15 minutes, he asked me if I wanted to take over Tyve. Even better, him and Ben O were able to continue to he...]]>
Raoul https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 17:12 None full 5486
mwzaMM6ZXYRaxXK8p_NL_EA_EA EA - GiveWell's updated estimate of deworming and decay by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell's updated estimate of deworming and decay, published by GiveWell on April 3, 2023 on The Effective Altruism Forum.Author: Alex Cohen, GiveWell Senior ResearcherThis document describes the rationale for the decay adjustment in our deworming cost-effectiveness analysis. We have incorporated this adjustment thanks to criticism from the Happier Lives Institute.In a nutshellThe main piece of evidence we use for the long-term effects of deworming is an RCT in Kenya with follow-ups at ~10 years (KLPS-2), ~15 years (KLPS-3) and ~20 years (KLPS-4) after children received deworming treatment. While these surveys show decline in effect on ln earnings and consumption over time, we have typically viewed the different estimates across surveys as noisy estimates of the same effect and assumed the effects of deworming are constant throughout a person’s working life.We now think we should account for some decay in benefits over time. We incorporate this decay by making the following key assumptions:We put 50% weight on the interpretation that the different estimates over time are capturing true differences in effect size. While the data point to an estimate of decline, the confidence intervals are wide and there are differences in how data were collected over time, which make us reluctant to put full weight on KLPS 2-4 capturing true decay over time.We set a prior that the effects are constant over time. This is based on a shallow literature review of studies of interventions during childhood where researchers reported at least two follow-ups on income during adulthood. We find a similar number of studies finding a decline in effect as an increase in effect over time.We update from that prior at each time period (10-years, 15-years, and 20-years), using the informal Bayesian adjustment approach we’ve used previously.We then extrapolate effects through the rest of the individual’s working life based on the measured decline from 10-year to 20-year follow-up.Our best guess is that we should apply a -10% adjustment due to the possibility of decay in effects over time. While the decline in effects in later years leads to lower cost-effectiveness, this is partially counterbalanced by higher estimated effects in earlier years and by our putting only 50% weight on the interpretation that declines in measured effects across follow-ups reflects a true decline in effect over time.We have several uncertainties about this analysis:This decay adjustment builds on top of our current Bayesian approach for estimating the effect of deworming. As a result, it's subject to the same limitations of that approach. It’s possible that in the future we should overhaul our approach, which could lead to meaningful differences in how we incorporate decay.The model is sensitive to our prior on whether effects should decay or not, and our current prior is based on a shallow literature review. If we expected effects to decay, we would include a stricter adjustment because we would (i) be updating from a prior where decay was already occurring and (ii) put more weight on the decay interpretation. We could potentially refine this estimate with a more thorough review of the literature and additional data analysis.The weight we put on whether these are noisy estimates of the same effect or different effects over time is based on a qualitative and highly subjective assessment. Putting higher weight on the surveys capturing different effects over time, for example, would lead to a stronger discount.What we did previouslyThe main piece of evidence we use for the long-term effects of deworming is an RCT in Kenya that measures effects on income at ~10 years (KLPS-2), ~15 years (KLPS-3) and ~20 years (KLPS-4) after children receive deworming treatment.[1]Our typical approach has been to pool effects...]]>
GiveWell https://forum.effectivealtruism.org/posts/mwzaMM6ZXYRaxXK8p/givewell-s-updated-estimate-of-deworming-and-decay Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell's updated estimate of deworming and decay, published by GiveWell on April 3, 2023 on The Effective Altruism Forum.Author: Alex Cohen, GiveWell Senior ResearcherThis document describes the rationale for the decay adjustment in our deworming cost-effectiveness analysis. We have incorporated this adjustment thanks to criticism from the Happier Lives Institute.In a nutshellThe main piece of evidence we use for the long-term effects of deworming is an RCT in Kenya with follow-ups at ~10 years (KLPS-2), ~15 years (KLPS-3) and ~20 years (KLPS-4) after children received deworming treatment. While these surveys show decline in effect on ln earnings and consumption over time, we have typically viewed the different estimates across surveys as noisy estimates of the same effect and assumed the effects of deworming are constant throughout a person’s working life.We now think we should account for some decay in benefits over time. We incorporate this decay by making the following key assumptions:We put 50% weight on the interpretation that the different estimates over time are capturing true differences in effect size. While the data point to an estimate of decline, the confidence intervals are wide and there are differences in how data were collected over time, which make us reluctant to put full weight on KLPS 2-4 capturing true decay over time.We set a prior that the effects are constant over time. This is based on a shallow literature review of studies of interventions during childhood where researchers reported at least two follow-ups on income during adulthood. We find a similar number of studies finding a decline in effect as an increase in effect over time.We update from that prior at each time period (10-years, 15-years, and 20-years), using the informal Bayesian adjustment approach we’ve used previously.We then extrapolate effects through the rest of the individual’s working life based on the measured decline from 10-year to 20-year follow-up.Our best guess is that we should apply a -10% adjustment due to the possibility of decay in effects over time. While the decline in effects in later years leads to lower cost-effectiveness, this is partially counterbalanced by higher estimated effects in earlier years and by our putting only 50% weight on the interpretation that declines in measured effects across follow-ups reflects a true decline in effect over time.We have several uncertainties about this analysis:This decay adjustment builds on top of our current Bayesian approach for estimating the effect of deworming. As a result, it's subject to the same limitations of that approach. It’s possible that in the future we should overhaul our approach, which could lead to meaningful differences in how we incorporate decay.The model is sensitive to our prior on whether effects should decay or not, and our current prior is based on a shallow literature review. If we expected effects to decay, we would include a stricter adjustment because we would (i) be updating from a prior where decay was already occurring and (ii) put more weight on the decay interpretation. We could potentially refine this estimate with a more thorough review of the literature and additional data analysis.The weight we put on whether these are noisy estimates of the same effect or different effects over time is based on a qualitative and highly subjective assessment. Putting higher weight on the surveys capturing different effects over time, for example, would lead to a stronger discount.What we did previouslyThe main piece of evidence we use for the long-term effects of deworming is an RCT in Kenya that measures effects on income at ~10 years (KLPS-2), ~15 years (KLPS-3) and ~20 years (KLPS-4) after children receive deworming treatment.[1]Our typical approach has been to pool effects...]]>
Tue, 04 Apr 2023 08:11:05 +0000 EA - GiveWell's updated estimate of deworming and decay by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell's updated estimate of deworming and decay, published by GiveWell on April 3, 2023 on The Effective Altruism Forum.Author: Alex Cohen, GiveWell Senior ResearcherThis document describes the rationale for the decay adjustment in our deworming cost-effectiveness analysis. We have incorporated this adjustment thanks to criticism from the Happier Lives Institute.In a nutshellThe main piece of evidence we use for the long-term effects of deworming is an RCT in Kenya with follow-ups at ~10 years (KLPS-2), ~15 years (KLPS-3) and ~20 years (KLPS-4) after children received deworming treatment. While these surveys show decline in effect on ln earnings and consumption over time, we have typically viewed the different estimates across surveys as noisy estimates of the same effect and assumed the effects of deworming are constant throughout a person’s working life.We now think we should account for some decay in benefits over time. We incorporate this decay by making the following key assumptions:We put 50% weight on the interpretation that the different estimates over time are capturing true differences in effect size. While the data point to an estimate of decline, the confidence intervals are wide and there are differences in how data were collected over time, which make us reluctant to put full weight on KLPS 2-4 capturing true decay over time.We set a prior that the effects are constant over time. This is based on a shallow literature review of studies of interventions during childhood where researchers reported at least two follow-ups on income during adulthood. We find a similar number of studies finding a decline in effect as an increase in effect over time.We update from that prior at each time period (10-years, 15-years, and 20-years), using the informal Bayesian adjustment approach we’ve used previously.We then extrapolate effects through the rest of the individual’s working life based on the measured decline from 10-year to 20-year follow-up.Our best guess is that we should apply a -10% adjustment due to the possibility of decay in effects over time. While the decline in effects in later years leads to lower cost-effectiveness, this is partially counterbalanced by higher estimated effects in earlier years and by our putting only 50% weight on the interpretation that declines in measured effects across follow-ups reflects a true decline in effect over time.We have several uncertainties about this analysis:This decay adjustment builds on top of our current Bayesian approach for estimating the effect of deworming. As a result, it's subject to the same limitations of that approach. It’s possible that in the future we should overhaul our approach, which could lead to meaningful differences in how we incorporate decay.The model is sensitive to our prior on whether effects should decay or not, and our current prior is based on a shallow literature review. If we expected effects to decay, we would include a stricter adjustment because we would (i) be updating from a prior where decay was already occurring and (ii) put more weight on the decay interpretation. We could potentially refine this estimate with a more thorough review of the literature and additional data analysis.The weight we put on whether these are noisy estimates of the same effect or different effects over time is based on a qualitative and highly subjective assessment. Putting higher weight on the surveys capturing different effects over time, for example, would lead to a stronger discount.What we did previouslyThe main piece of evidence we use for the long-term effects of deworming is an RCT in Kenya that measures effects on income at ~10 years (KLPS-2), ~15 years (KLPS-3) and ~20 years (KLPS-4) after children receive deworming treatment.[1]Our typical approach has been to pool effects...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell's updated estimate of deworming and decay, published by GiveWell on April 3, 2023 on The Effective Altruism Forum.Author: Alex Cohen, GiveWell Senior ResearcherThis document describes the rationale for the decay adjustment in our deworming cost-effectiveness analysis. We have incorporated this adjustment thanks to criticism from the Happier Lives Institute.In a nutshellThe main piece of evidence we use for the long-term effects of deworming is an RCT in Kenya with follow-ups at ~10 years (KLPS-2), ~15 years (KLPS-3) and ~20 years (KLPS-4) after children received deworming treatment. While these surveys show decline in effect on ln earnings and consumption over time, we have typically viewed the different estimates across surveys as noisy estimates of the same effect and assumed the effects of deworming are constant throughout a person’s working life.We now think we should account for some decay in benefits over time. We incorporate this decay by making the following key assumptions:We put 50% weight on the interpretation that the different estimates over time are capturing true differences in effect size. While the data point to an estimate of decline, the confidence intervals are wide and there are differences in how data were collected over time, which make us reluctant to put full weight on KLPS 2-4 capturing true decay over time.We set a prior that the effects are constant over time. This is based on a shallow literature review of studies of interventions during childhood where researchers reported at least two follow-ups on income during adulthood. We find a similar number of studies finding a decline in effect as an increase in effect over time.We update from that prior at each time period (10-years, 15-years, and 20-years), using the informal Bayesian adjustment approach we’ve used previously.We then extrapolate effects through the rest of the individual’s working life based on the measured decline from 10-year to 20-year follow-up.Our best guess is that we should apply a -10% adjustment due to the possibility of decay in effects over time. While the decline in effects in later years leads to lower cost-effectiveness, this is partially counterbalanced by higher estimated effects in earlier years and by our putting only 50% weight on the interpretation that declines in measured effects across follow-ups reflects a true decline in effect over time.We have several uncertainties about this analysis:This decay adjustment builds on top of our current Bayesian approach for estimating the effect of deworming. As a result, it's subject to the same limitations of that approach. It’s possible that in the future we should overhaul our approach, which could lead to meaningful differences in how we incorporate decay.The model is sensitive to our prior on whether effects should decay or not, and our current prior is based on a shallow literature review. If we expected effects to decay, we would include a stricter adjustment because we would (i) be updating from a prior where decay was already occurring and (ii) put more weight on the decay interpretation. We could potentially refine this estimate with a more thorough review of the literature and additional data analysis.The weight we put on whether these are noisy estimates of the same effect or different effects over time is based on a qualitative and highly subjective assessment. Putting higher weight on the surveys capturing different effects over time, for example, would lead to a stronger discount.What we did previouslyThe main piece of evidence we use for the long-term effects of deworming is an RCT in Kenya that measures effects on income at ~10 years (KLPS-2), ~15 years (KLPS-3) and ~20 years (KLPS-4) after children receive deworming treatment.[1]Our typical approach has been to pool effects...]]>
GiveWell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:51 None full 5474
gtDferPKzLX4xXkKd_NL_EA_EA EA - Write more Wikipedia articles on policy-relevant EA concepts by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Write more Wikipedia articles on policy-relevant EA concepts, published by freedomandutility on April 3, 2023 on The Effective Altruism Forum.One way I think EA fails to maximise impact is by its focus on legible, clear and attributable impact over actions where the impact is extremely difficult to estimate.Writing Wikipedia articles on and around important EA concepts (except perhaps on infohazardous bioterrorism incidents) has low downside risk and extremely high upside risk, making these ideas much more easy to understand for policymakers and other people in positions of power who may come across them and google them. However, the feedback loops are virtually non-existent and the impact is highly illegible.For example, there is currently no dedicated Wikipedia page for “Existential Risk” and “Global Catastrophic Biological Risk”.Writing Wikipedia pages could be a particularly good use of time for people new to EA and people in university student groups who want to gain a better understanding of EA concepts or of EA-relevant policy areas.Some other ideas for creating new Wikipedia articles or adding more detail to existing ones:International Biosecurity and Biosafety Initiative for ScienceAlternative ProteinsGovernance of Alternative ProteinsGlobal Partnership Biological Security Working GroupRegulation of gain-of-function biological research by countryPublic investment in alternative proteins by countrySpace governanceRegulation of alternative proteinsUN Biorisk Working GroupPolitical Representation of Future GenerationsPolitical Representation of Future Generations by CountryPolitical Representation of AnimalsJoint Assessment MechanismPublic investment in AI Safety research by countryInternational Experts Group of Biosafety and Biosecurity RegulatorsTobacco taxation by countryGlobal Partnership Signature Initiative to Mitigate Biological Threats in AfricaRegulations on lead in paint by countryAlcohol taxation by countryRegulation of dual-use biological research by countryJoint External EvaluationsBiological Weapons Convention funding by countryThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
freedomandutility https://forum.effectivealtruism.org/posts/gtDferPKzLX4xXkKd/write-more-wikipedia-articles-on-policy-relevant-ea-concepts Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Write more Wikipedia articles on policy-relevant EA concepts, published by freedomandutility on April 3, 2023 on The Effective Altruism Forum.One way I think EA fails to maximise impact is by its focus on legible, clear and attributable impact over actions where the impact is extremely difficult to estimate.Writing Wikipedia articles on and around important EA concepts (except perhaps on infohazardous bioterrorism incidents) has low downside risk and extremely high upside risk, making these ideas much more easy to understand for policymakers and other people in positions of power who may come across them and google them. However, the feedback loops are virtually non-existent and the impact is highly illegible.For example, there is currently no dedicated Wikipedia page for “Existential Risk” and “Global Catastrophic Biological Risk”.Writing Wikipedia pages could be a particularly good use of time for people new to EA and people in university student groups who want to gain a better understanding of EA concepts or of EA-relevant policy areas.Some other ideas for creating new Wikipedia articles or adding more detail to existing ones:International Biosecurity and Biosafety Initiative for ScienceAlternative ProteinsGovernance of Alternative ProteinsGlobal Partnership Biological Security Working GroupRegulation of gain-of-function biological research by countryPublic investment in alternative proteins by countrySpace governanceRegulation of alternative proteinsUN Biorisk Working GroupPolitical Representation of Future GenerationsPolitical Representation of Future Generations by CountryPolitical Representation of AnimalsJoint Assessment MechanismPublic investment in AI Safety research by countryInternational Experts Group of Biosafety and Biosecurity RegulatorsTobacco taxation by countryGlobal Partnership Signature Initiative to Mitigate Biological Threats in AfricaRegulations on lead in paint by countryAlcohol taxation by countryRegulation of dual-use biological research by countryJoint External EvaluationsBiological Weapons Convention funding by countryThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 03 Apr 2023 21:41:39 +0000 EA - Write more Wikipedia articles on policy-relevant EA concepts by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Write more Wikipedia articles on policy-relevant EA concepts, published by freedomandutility on April 3, 2023 on The Effective Altruism Forum.One way I think EA fails to maximise impact is by its focus on legible, clear and attributable impact over actions where the impact is extremely difficult to estimate.Writing Wikipedia articles on and around important EA concepts (except perhaps on infohazardous bioterrorism incidents) has low downside risk and extremely high upside risk, making these ideas much more easy to understand for policymakers and other people in positions of power who may come across them and google them. However, the feedback loops are virtually non-existent and the impact is highly illegible.For example, there is currently no dedicated Wikipedia page for “Existential Risk” and “Global Catastrophic Biological Risk”.Writing Wikipedia pages could be a particularly good use of time for people new to EA and people in university student groups who want to gain a better understanding of EA concepts or of EA-relevant policy areas.Some other ideas for creating new Wikipedia articles or adding more detail to existing ones:International Biosecurity and Biosafety Initiative for ScienceAlternative ProteinsGovernance of Alternative ProteinsGlobal Partnership Biological Security Working GroupRegulation of gain-of-function biological research by countryPublic investment in alternative proteins by countrySpace governanceRegulation of alternative proteinsUN Biorisk Working GroupPolitical Representation of Future GenerationsPolitical Representation of Future Generations by CountryPolitical Representation of AnimalsJoint Assessment MechanismPublic investment in AI Safety research by countryInternational Experts Group of Biosafety and Biosecurity RegulatorsTobacco taxation by countryGlobal Partnership Signature Initiative to Mitigate Biological Threats in AfricaRegulations on lead in paint by countryAlcohol taxation by countryRegulation of dual-use biological research by countryJoint External EvaluationsBiological Weapons Convention funding by countryThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Write more Wikipedia articles on policy-relevant EA concepts, published by freedomandutility on April 3, 2023 on The Effective Altruism Forum.One way I think EA fails to maximise impact is by its focus on legible, clear and attributable impact over actions where the impact is extremely difficult to estimate.Writing Wikipedia articles on and around important EA concepts (except perhaps on infohazardous bioterrorism incidents) has low downside risk and extremely high upside risk, making these ideas much more easy to understand for policymakers and other people in positions of power who may come across them and google them. However, the feedback loops are virtually non-existent and the impact is highly illegible.For example, there is currently no dedicated Wikipedia page for “Existential Risk” and “Global Catastrophic Biological Risk”.Writing Wikipedia pages could be a particularly good use of time for people new to EA and people in university student groups who want to gain a better understanding of EA concepts or of EA-relevant policy areas.Some other ideas for creating new Wikipedia articles or adding more detail to existing ones:International Biosecurity and Biosafety Initiative for ScienceAlternative ProteinsGovernance of Alternative ProteinsGlobal Partnership Biological Security Working GroupRegulation of gain-of-function biological research by countryPublic investment in alternative proteins by countrySpace governanceRegulation of alternative proteinsUN Biorisk Working GroupPolitical Representation of Future GenerationsPolitical Representation of Future Generations by CountryPolitical Representation of AnimalsJoint Assessment MechanismPublic investment in AI Safety research by countryInternational Experts Group of Biosafety and Biosecurity RegulatorsTobacco taxation by countryGlobal Partnership Signature Initiative to Mitigate Biological Threats in AfricaRegulations on lead in paint by countryAlcohol taxation by countryRegulation of dual-use biological research by countryJoint External EvaluationsBiological Weapons Convention funding by countryThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
freedomandutility https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:22 None full 5475
ZtKZv48bsiq6QGvSc_NL_EA_EA EA - Announcing: Effective Tourism - Maximizing the Cost-Effectiveness of Your Vacations by Agustín Covarrubias Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing: Effective Tourism - Maximizing the Cost-Effectiveness of Your Vacations, published by Agustín Covarrubias on April 1, 2023 on The Effective Altruism Forum.TL;DRAfter several months of research, EA Chile is proud to announce its work in a new cause area: Effective Tourism. This program aims to help EAs find cost-effective vacations that uniquely maximize their interests. Alongside our university group, we're also launching a 7-week introductory program, where participants will learn how to evaluate different vacation destinations using one of our main developments, the RTN framework.IntroductionWant a break from all the recent developments in artificial intelligence? Feel unable to keep up with all the community controversies? Tired of dealing with mosquitoes?We hear you! That's why we are excited to announce a new project by EA Chile: Effective Tourism. This new cause area is focused on finding the most cost-effective vacations for EAs, so you can recharge your batteries while still maximizing your impact.We believe that a well-rested, recharged, and inspired EA community is better equipped to tackle the challenges of AI safety, pandemic prevention, global health and development, and much, much more. That's why we've decided to focus our efforts on this new initiative, to help EAs make the most of their time off, all while keeping cost-effectiveness in mind.The RTN FrameworkTo prioritize between destinations, we've developed the Relaxation-Tractability-Neglectedness (RTN) framework, inspired by the Importance-Tractability-Neglectedness framework used in Effective Altruism. The RTN framework assesses vacation destinations using three key criteria:Relaxation Potential: Evaluates the ability of a destination to provide a rejuvenating experience, considering factors like natural beauty, tranquility, recreational activities, and opportunities for mental and physical relaxation.Tractability: Measures the ease of planning, organizing, and executing a trip to a specific destination, taking into account accessibility, affordability, ease of obtaining visas, and the availability of amenities and services.Neglectedness: Refers to the degree to which a destination is overlooked or underappreciated by the broader tourist industry. By choosing neglected destinations, EAs can minimize the negative environmental and social impacts of tourism while enjoying unique, less crowded experiences.Applying the RTN framework allows EAs to make informed decisions about where to spend their time off, ensuring cost-effective, rejuvenating vacations that align with their values and minimize the negative consequences of tourism.The Introductory ProgramIn collaboration with our student group, Effective Altruism @ UC Chile, we're also launching an “Introduction to Effective Tourism” program, akin to the Introduction to Effective Altruism program. This program consists of 7 weeks of group discussions with a facilitator, focused on exploring different vacation destinations and finding out which is the most cost-effective for EAs.ConclusionWe believe that the introduction of Effective Tourism as a new cause area has the potential to greatly benefit the EA community. By finding the most cost-effective vacations for EAs, we can help ensure that our community remains motivated, focused, and dedicated to our broader goals. We hope that you'll join us in our journey towards better, more effective vacations!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Agustín Covarrubias https://forum.effectivealtruism.org/posts/ZtKZv48bsiq6QGvSc/announcing-effective-tourism-maximizing-the-cost Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing: Effective Tourism - Maximizing the Cost-Effectiveness of Your Vacations, published by Agustín Covarrubias on April 1, 2023 on The Effective Altruism Forum.TL;DRAfter several months of research, EA Chile is proud to announce its work in a new cause area: Effective Tourism. This program aims to help EAs find cost-effective vacations that uniquely maximize their interests. Alongside our university group, we're also launching a 7-week introductory program, where participants will learn how to evaluate different vacation destinations using one of our main developments, the RTN framework.IntroductionWant a break from all the recent developments in artificial intelligence? Feel unable to keep up with all the community controversies? Tired of dealing with mosquitoes?We hear you! That's why we are excited to announce a new project by EA Chile: Effective Tourism. This new cause area is focused on finding the most cost-effective vacations for EAs, so you can recharge your batteries while still maximizing your impact.We believe that a well-rested, recharged, and inspired EA community is better equipped to tackle the challenges of AI safety, pandemic prevention, global health and development, and much, much more. That's why we've decided to focus our efforts on this new initiative, to help EAs make the most of their time off, all while keeping cost-effectiveness in mind.The RTN FrameworkTo prioritize between destinations, we've developed the Relaxation-Tractability-Neglectedness (RTN) framework, inspired by the Importance-Tractability-Neglectedness framework used in Effective Altruism. The RTN framework assesses vacation destinations using three key criteria:Relaxation Potential: Evaluates the ability of a destination to provide a rejuvenating experience, considering factors like natural beauty, tranquility, recreational activities, and opportunities for mental and physical relaxation.Tractability: Measures the ease of planning, organizing, and executing a trip to a specific destination, taking into account accessibility, affordability, ease of obtaining visas, and the availability of amenities and services.Neglectedness: Refers to the degree to which a destination is overlooked or underappreciated by the broader tourist industry. By choosing neglected destinations, EAs can minimize the negative environmental and social impacts of tourism while enjoying unique, less crowded experiences.Applying the RTN framework allows EAs to make informed decisions about where to spend their time off, ensuring cost-effective, rejuvenating vacations that align with their values and minimize the negative consequences of tourism.The Introductory ProgramIn collaboration with our student group, Effective Altruism @ UC Chile, we're also launching an “Introduction to Effective Tourism” program, akin to the Introduction to Effective Altruism program. This program consists of 7 weeks of group discussions with a facilitator, focused on exploring different vacation destinations and finding out which is the most cost-effective for EAs.ConclusionWe believe that the introduction of Effective Tourism as a new cause area has the potential to greatly benefit the EA community. By finding the most cost-effective vacations for EAs, we can help ensure that our community remains motivated, focused, and dedicated to our broader goals. We hope that you'll join us in our journey towards better, more effective vacations!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 02 Apr 2023 06:11:34 +0000 EA - Announcing: Effective Tourism - Maximizing the Cost-Effectiveness of Your Vacations by Agustín Covarrubias Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing: Effective Tourism - Maximizing the Cost-Effectiveness of Your Vacations, published by Agustín Covarrubias on April 1, 2023 on The Effective Altruism Forum.TL;DRAfter several months of research, EA Chile is proud to announce its work in a new cause area: Effective Tourism. This program aims to help EAs find cost-effective vacations that uniquely maximize their interests. Alongside our university group, we're also launching a 7-week introductory program, where participants will learn how to evaluate different vacation destinations using one of our main developments, the RTN framework.IntroductionWant a break from all the recent developments in artificial intelligence? Feel unable to keep up with all the community controversies? Tired of dealing with mosquitoes?We hear you! That's why we are excited to announce a new project by EA Chile: Effective Tourism. This new cause area is focused on finding the most cost-effective vacations for EAs, so you can recharge your batteries while still maximizing your impact.We believe that a well-rested, recharged, and inspired EA community is better equipped to tackle the challenges of AI safety, pandemic prevention, global health and development, and much, much more. That's why we've decided to focus our efforts on this new initiative, to help EAs make the most of their time off, all while keeping cost-effectiveness in mind.The RTN FrameworkTo prioritize between destinations, we've developed the Relaxation-Tractability-Neglectedness (RTN) framework, inspired by the Importance-Tractability-Neglectedness framework used in Effective Altruism. The RTN framework assesses vacation destinations using three key criteria:Relaxation Potential: Evaluates the ability of a destination to provide a rejuvenating experience, considering factors like natural beauty, tranquility, recreational activities, and opportunities for mental and physical relaxation.Tractability: Measures the ease of planning, organizing, and executing a trip to a specific destination, taking into account accessibility, affordability, ease of obtaining visas, and the availability of amenities and services.Neglectedness: Refers to the degree to which a destination is overlooked or underappreciated by the broader tourist industry. By choosing neglected destinations, EAs can minimize the negative environmental and social impacts of tourism while enjoying unique, less crowded experiences.Applying the RTN framework allows EAs to make informed decisions about where to spend their time off, ensuring cost-effective, rejuvenating vacations that align with their values and minimize the negative consequences of tourism.The Introductory ProgramIn collaboration with our student group, Effective Altruism @ UC Chile, we're also launching an “Introduction to Effective Tourism” program, akin to the Introduction to Effective Altruism program. This program consists of 7 weeks of group discussions with a facilitator, focused on exploring different vacation destinations and finding out which is the most cost-effective for EAs.ConclusionWe believe that the introduction of Effective Tourism as a new cause area has the potential to greatly benefit the EA community. By finding the most cost-effective vacations for EAs, we can help ensure that our community remains motivated, focused, and dedicated to our broader goals. We hope that you'll join us in our journey towards better, more effective vacations!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing: Effective Tourism - Maximizing the Cost-Effectiveness of Your Vacations, published by Agustín Covarrubias on April 1, 2023 on The Effective Altruism Forum.TL;DRAfter several months of research, EA Chile is proud to announce its work in a new cause area: Effective Tourism. This program aims to help EAs find cost-effective vacations that uniquely maximize their interests. Alongside our university group, we're also launching a 7-week introductory program, where participants will learn how to evaluate different vacation destinations using one of our main developments, the RTN framework.IntroductionWant a break from all the recent developments in artificial intelligence? Feel unable to keep up with all the community controversies? Tired of dealing with mosquitoes?We hear you! That's why we are excited to announce a new project by EA Chile: Effective Tourism. This new cause area is focused on finding the most cost-effective vacations for EAs, so you can recharge your batteries while still maximizing your impact.We believe that a well-rested, recharged, and inspired EA community is better equipped to tackle the challenges of AI safety, pandemic prevention, global health and development, and much, much more. That's why we've decided to focus our efforts on this new initiative, to help EAs make the most of their time off, all while keeping cost-effectiveness in mind.The RTN FrameworkTo prioritize between destinations, we've developed the Relaxation-Tractability-Neglectedness (RTN) framework, inspired by the Importance-Tractability-Neglectedness framework used in Effective Altruism. The RTN framework assesses vacation destinations using three key criteria:Relaxation Potential: Evaluates the ability of a destination to provide a rejuvenating experience, considering factors like natural beauty, tranquility, recreational activities, and opportunities for mental and physical relaxation.Tractability: Measures the ease of planning, organizing, and executing a trip to a specific destination, taking into account accessibility, affordability, ease of obtaining visas, and the availability of amenities and services.Neglectedness: Refers to the degree to which a destination is overlooked or underappreciated by the broader tourist industry. By choosing neglected destinations, EAs can minimize the negative environmental and social impacts of tourism while enjoying unique, less crowded experiences.Applying the RTN framework allows EAs to make informed decisions about where to spend their time off, ensuring cost-effective, rejuvenating vacations that align with their values and minimize the negative consequences of tourism.The Introductory ProgramIn collaboration with our student group, Effective Altruism @ UC Chile, we're also launching an “Introduction to Effective Tourism” program, akin to the Introduction to Effective Altruism program. This program consists of 7 weeks of group discussions with a facilitator, focused on exploring different vacation destinations and finding out which is the most cost-effective for EAs.ConclusionWe believe that the introduction of Effective Tourism as a new cause area has the potential to greatly benefit the EA community. By finding the most cost-effective vacations for EAs, we can help ensure that our community remains motivated, focused, and dedicated to our broader goals. We hope that you'll join us in our journey towards better, more effective vacations!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Agustín Covarrubias https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:33 None full 5470
v7bF4NxS9a2tQsN4h_NL_EA_EA EA - Ending Open Philanthropy Project by Dusten Muskovitz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ending Open Philanthropy Project, published by Dusten Muskovitz on April 2, 2023 on The Effective Altruism Forum.Effective immediately, my wife and I will no longer plan funding for EA or EAs. There’s enough money with OpenPhil to wind down operations gracefully, paying out all current grants, all grants under consideration that we normally would have made, and any new grants that come in within the next three months that we normally would have said yes to (existing charities receiving Open Philanthropy money are particularly encouraged to apply), and providing six months for everyone currently at the non-profit before we shut down.I want to emphasize that this is not because of anything that Alexander Berger or the rest of the wonderful team at OpenPhil have done. They’re great, and I think that they’ve tried as hard as anyone could to do the best possible work with our money.It’s the rest of you. I present three primary motivations. They're all somewhat interrelated, but hopefully by presenting three arguments in succession you will update on each of them sequentially. Certainly I've lost all hope in y'all retaining any of the virtues of the rationalist community, rather than just their vices. I hope that this helps you as a community clean up your act while you try to convince someone else to fund this mess. Maybe Bernauld Arnault. That was a joke. Haha, fat chance.1. In the words of philosopher Liam Kofi Bright, “Why can’t you just be normal?”Two of Redwood's leadership team have or have had relationships to an [Open Philanthropy] grant maker. A Redwood board member is married to a different OP grantmaker. A co-CEO of OP is one of the other three board members of Redwood. Additionally, many OP staff work out of Constellation, the office that Redwood runs. OP pays Redwood for use of the space.Just be normal. Stop having a social community where people live and work and study and sing together, and do social atomization like everybody else. This won’t cause any problems. Everyone else is doing it. There is another way! You don't, actually, need to have more partners than the average American adult has friends.Also, just don’t have sex. That’s not that much to ask for, is it? I’ve been married for a decade now: I can tell you, it’s perfectly possible.2. I’m tired of all the criticism. I’m tired of it hitting Asana, which I still love and care about. Moving my donations to instead buying superyachts, artwork, and expanding on an actually fun hobby (giant bonsai) is going to substantially reduce how often my family, friends, and employees sees me getting attacked in one news outlet or another.3. Pick a cause and stick with it. Have the courage of your convictions. I don't need to spend my time hearing about sea-rats and prisoners and suffering matrices and matrices that are suffering and discount rates and so many different ways human bodies can go wrong in other countries and immigration and housing for techies and so many more. Y'all were supposed to be optimizers, so this donating splitting between different cause areas should end. Like I said, most of my wealth is going into the new super-yacht my wife and I will be commissioning. Maybe then you could stop arguing quite so much. Get it all out of your systems, figure out what the best charity is, and stick with it.Average American adult has three or fewer friends.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Dusten Muskovitz https://forum.effectivealtruism.org/posts/v7bF4NxS9a2tQsN4h/ending-open-philanthropy-project Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ending Open Philanthropy Project, published by Dusten Muskovitz on April 2, 2023 on The Effective Altruism Forum.Effective immediately, my wife and I will no longer plan funding for EA or EAs. There’s enough money with OpenPhil to wind down operations gracefully, paying out all current grants, all grants under consideration that we normally would have made, and any new grants that come in within the next three months that we normally would have said yes to (existing charities receiving Open Philanthropy money are particularly encouraged to apply), and providing six months for everyone currently at the non-profit before we shut down.I want to emphasize that this is not because of anything that Alexander Berger or the rest of the wonderful team at OpenPhil have done. They’re great, and I think that they’ve tried as hard as anyone could to do the best possible work with our money.It’s the rest of you. I present three primary motivations. They're all somewhat interrelated, but hopefully by presenting three arguments in succession you will update on each of them sequentially. Certainly I've lost all hope in y'all retaining any of the virtues of the rationalist community, rather than just their vices. I hope that this helps you as a community clean up your act while you try to convince someone else to fund this mess. Maybe Bernauld Arnault. That was a joke. Haha, fat chance.1. In the words of philosopher Liam Kofi Bright, “Why can’t you just be normal?”Two of Redwood's leadership team have or have had relationships to an [Open Philanthropy] grant maker. A Redwood board member is married to a different OP grantmaker. A co-CEO of OP is one of the other three board members of Redwood. Additionally, many OP staff work out of Constellation, the office that Redwood runs. OP pays Redwood for use of the space.Just be normal. Stop having a social community where people live and work and study and sing together, and do social atomization like everybody else. This won’t cause any problems. Everyone else is doing it. There is another way! You don't, actually, need to have more partners than the average American adult has friends.Also, just don’t have sex. That’s not that much to ask for, is it? I’ve been married for a decade now: I can tell you, it’s perfectly possible.2. I’m tired of all the criticism. I’m tired of it hitting Asana, which I still love and care about. Moving my donations to instead buying superyachts, artwork, and expanding on an actually fun hobby (giant bonsai) is going to substantially reduce how often my family, friends, and employees sees me getting attacked in one news outlet or another.3. Pick a cause and stick with it. Have the courage of your convictions. I don't need to spend my time hearing about sea-rats and prisoners and suffering matrices and matrices that are suffering and discount rates and so many different ways human bodies can go wrong in other countries and immigration and housing for techies and so many more. Y'all were supposed to be optimizers, so this donating splitting between different cause areas should end. Like I said, most of my wealth is going into the new super-yacht my wife and I will be commissioning. Maybe then you could stop arguing quite so much. Get it all out of your systems, figure out what the best charity is, and stick with it.Average American adult has three or fewer friends.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 02 Apr 2023 02:30:18 +0000 EA - Ending Open Philanthropy Project by Dusten Muskovitz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ending Open Philanthropy Project, published by Dusten Muskovitz on April 2, 2023 on The Effective Altruism Forum.Effective immediately, my wife and I will no longer plan funding for EA or EAs. There’s enough money with OpenPhil to wind down operations gracefully, paying out all current grants, all grants under consideration that we normally would have made, and any new grants that come in within the next three months that we normally would have said yes to (existing charities receiving Open Philanthropy money are particularly encouraged to apply), and providing six months for everyone currently at the non-profit before we shut down.I want to emphasize that this is not because of anything that Alexander Berger or the rest of the wonderful team at OpenPhil have done. They’re great, and I think that they’ve tried as hard as anyone could to do the best possible work with our money.It’s the rest of you. I present three primary motivations. They're all somewhat interrelated, but hopefully by presenting three arguments in succession you will update on each of them sequentially. Certainly I've lost all hope in y'all retaining any of the virtues of the rationalist community, rather than just their vices. I hope that this helps you as a community clean up your act while you try to convince someone else to fund this mess. Maybe Bernauld Arnault. That was a joke. Haha, fat chance.1. In the words of philosopher Liam Kofi Bright, “Why can’t you just be normal?”Two of Redwood's leadership team have or have had relationships to an [Open Philanthropy] grant maker. A Redwood board member is married to a different OP grantmaker. A co-CEO of OP is one of the other three board members of Redwood. Additionally, many OP staff work out of Constellation, the office that Redwood runs. OP pays Redwood for use of the space.Just be normal. Stop having a social community where people live and work and study and sing together, and do social atomization like everybody else. This won’t cause any problems. Everyone else is doing it. There is another way! You don't, actually, need to have more partners than the average American adult has friends.Also, just don’t have sex. That’s not that much to ask for, is it? I’ve been married for a decade now: I can tell you, it’s perfectly possible.2. I’m tired of all the criticism. I’m tired of it hitting Asana, which I still love and care about. Moving my donations to instead buying superyachts, artwork, and expanding on an actually fun hobby (giant bonsai) is going to substantially reduce how often my family, friends, and employees sees me getting attacked in one news outlet or another.3. Pick a cause and stick with it. Have the courage of your convictions. I don't need to spend my time hearing about sea-rats and prisoners and suffering matrices and matrices that are suffering and discount rates and so many different ways human bodies can go wrong in other countries and immigration and housing for techies and so many more. Y'all were supposed to be optimizers, so this donating splitting between different cause areas should end. Like I said, most of my wealth is going into the new super-yacht my wife and I will be commissioning. Maybe then you could stop arguing quite so much. Get it all out of your systems, figure out what the best charity is, and stick with it.Average American adult has three or fewer friends.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ending Open Philanthropy Project, published by Dusten Muskovitz on April 2, 2023 on The Effective Altruism Forum.Effective immediately, my wife and I will no longer plan funding for EA or EAs. There’s enough money with OpenPhil to wind down operations gracefully, paying out all current grants, all grants under consideration that we normally would have made, and any new grants that come in within the next three months that we normally would have said yes to (existing charities receiving Open Philanthropy money are particularly encouraged to apply), and providing six months for everyone currently at the non-profit before we shut down.I want to emphasize that this is not because of anything that Alexander Berger or the rest of the wonderful team at OpenPhil have done. They’re great, and I think that they’ve tried as hard as anyone could to do the best possible work with our money.It’s the rest of you. I present three primary motivations. They're all somewhat interrelated, but hopefully by presenting three arguments in succession you will update on each of them sequentially. Certainly I've lost all hope in y'all retaining any of the virtues of the rationalist community, rather than just their vices. I hope that this helps you as a community clean up your act while you try to convince someone else to fund this mess. Maybe Bernauld Arnault. That was a joke. Haha, fat chance.1. In the words of philosopher Liam Kofi Bright, “Why can’t you just be normal?”Two of Redwood's leadership team have or have had relationships to an [Open Philanthropy] grant maker. A Redwood board member is married to a different OP grantmaker. A co-CEO of OP is one of the other three board members of Redwood. Additionally, many OP staff work out of Constellation, the office that Redwood runs. OP pays Redwood for use of the space.Just be normal. Stop having a social community where people live and work and study and sing together, and do social atomization like everybody else. This won’t cause any problems. Everyone else is doing it. There is another way! You don't, actually, need to have more partners than the average American adult has friends.Also, just don’t have sex. That’s not that much to ask for, is it? I’ve been married for a decade now: I can tell you, it’s perfectly possible.2. I’m tired of all the criticism. I’m tired of it hitting Asana, which I still love and care about. Moving my donations to instead buying superyachts, artwork, and expanding on an actually fun hobby (giant bonsai) is going to substantially reduce how often my family, friends, and employees sees me getting attacked in one news outlet or another.3. Pick a cause and stick with it. Have the courage of your convictions. I don't need to spend my time hearing about sea-rats and prisoners and suffering matrices and matrices that are suffering and discount rates and so many different ways human bodies can go wrong in other countries and immigration and housing for techies and so many more. Y'all were supposed to be optimizers, so this donating splitting between different cause areas should end. Like I said, most of my wealth is going into the new super-yacht my wife and I will be commissioning. Maybe then you could stop arguing quite so much. Get it all out of your systems, figure out what the best charity is, and stick with it.Average American adult has three or fewer friends.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Dusten Muskovitz https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:19 None full 5467
oCw6qjXXYAFCf2YuL_NL_EA_EA EA - Saving drowning children in light of perilous maximisation by calebp Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Saving drowning children in light of perilous maximisation, published by calebp on April 2, 2023 on The Effective Altruism Forum.Last year Holden Karnofsky wrote the post, “EA is about maximization, and maximization is perilous”. You could read the post, but I suggest you just jump on board because Holden is cool, and morality is hard.Given that you now believe that maximisation of doing good is actually bad and scary, you should also probably make some adjustments to the classic thought experiment you use to get your friends on board with the new mission of “do the most good possible [a large but not too large amount of good] using evidence and reason”.A slightly modified drowning child thought experiment goes as followsImagine that you are walking by a small pond, and you see five children drowning. You can easily save the child without putting yourself in great danger, but doing so will ruin your expensive shoes. Should you save the children?Obviously, your first instinct is to save all the children. But remember, maximisation is perilous. It’s this kind of attitude that leads to atrocities like large financial crimes. Instead, you should just save three or four of the children. That is still a large amount of good, and importantly, it is not maximally large.But what should you do if you encounter just one drowning child? The options at first pass seem bleak – you can either:Ignore the child and let them drown (which many people believe is bad).Save the child (but know that you have tried to maximise good in that situation).I think there are a few neat solutions to get around these moral conundrums:Save the child with some reasonable probability (say 80%).Before wading into the shallow pond, whip out the D10 you were carrying in your backpack. If you roll an eight or lower, then go ahead and save the child. Otherwise, go about your day.Only partially save the childYou may have an opportunity to help the child to various degrees. Rather than picking up the child and then ensuring that they find their parents or doing other previously thought as reasonable things, you could:Move the child to shallower waters so they are only drowning a little bit.Help the child out of the water but then abandon them somewhere within a 300m radius of the pond.Create a manifold market on whether the child will be saved and bid against it to incentivise other people to help the child.The QALY approachSave the child but replace them with an adult who is not able to swim (but is likely to have fewer years of healthy life left).Commit now to a policy of only saving children who are sufficiently old or likely to have only moderately healthy/happy lives.The King Solomon approachCut the child in half and save the left half of them from drowningUsing these approaches, you should be able to convey the optimal most Holden-approved amount of good.If you like, you can remember the heuristic “maximisation bad”.As well as other things like eradicating diseases.QALYs are quality-adjusted life years (essentially a metric for healthy years lived).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
calebp https://forum.effectivealtruism.org/posts/oCw6qjXXYAFCf2YuL/saving-drowning-children-in-light-of-perilous-maximisation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Saving drowning children in light of perilous maximisation, published by calebp on April 2, 2023 on The Effective Altruism Forum.Last year Holden Karnofsky wrote the post, “EA is about maximization, and maximization is perilous”. You could read the post, but I suggest you just jump on board because Holden is cool, and morality is hard.Given that you now believe that maximisation of doing good is actually bad and scary, you should also probably make some adjustments to the classic thought experiment you use to get your friends on board with the new mission of “do the most good possible [a large but not too large amount of good] using evidence and reason”.A slightly modified drowning child thought experiment goes as followsImagine that you are walking by a small pond, and you see five children drowning. You can easily save the child without putting yourself in great danger, but doing so will ruin your expensive shoes. Should you save the children?Obviously, your first instinct is to save all the children. But remember, maximisation is perilous. It’s this kind of attitude that leads to atrocities like large financial crimes. Instead, you should just save three or four of the children. That is still a large amount of good, and importantly, it is not maximally large.But what should you do if you encounter just one drowning child? The options at first pass seem bleak – you can either:Ignore the child and let them drown (which many people believe is bad).Save the child (but know that you have tried to maximise good in that situation).I think there are a few neat solutions to get around these moral conundrums:Save the child with some reasonable probability (say 80%).Before wading into the shallow pond, whip out the D10 you were carrying in your backpack. If you roll an eight or lower, then go ahead and save the child. Otherwise, go about your day.Only partially save the childYou may have an opportunity to help the child to various degrees. Rather than picking up the child and then ensuring that they find their parents or doing other previously thought as reasonable things, you could:Move the child to shallower waters so they are only drowning a little bit.Help the child out of the water but then abandon them somewhere within a 300m radius of the pond.Create a manifold market on whether the child will be saved and bid against it to incentivise other people to help the child.The QALY approachSave the child but replace them with an adult who is not able to swim (but is likely to have fewer years of healthy life left).Commit now to a policy of only saving children who are sufficiently old or likely to have only moderately healthy/happy lives.The King Solomon approachCut the child in half and save the left half of them from drowningUsing these approaches, you should be able to convey the optimal most Holden-approved amount of good.If you like, you can remember the heuristic “maximisation bad”.As well as other things like eradicating diseases.QALYs are quality-adjusted life years (essentially a metric for healthy years lived).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 02 Apr 2023 01:48:24 +0000 EA - Saving drowning children in light of perilous maximisation by calebp Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Saving drowning children in light of perilous maximisation, published by calebp on April 2, 2023 on The Effective Altruism Forum.Last year Holden Karnofsky wrote the post, “EA is about maximization, and maximization is perilous”. You could read the post, but I suggest you just jump on board because Holden is cool, and morality is hard.Given that you now believe that maximisation of doing good is actually bad and scary, you should also probably make some adjustments to the classic thought experiment you use to get your friends on board with the new mission of “do the most good possible [a large but not too large amount of good] using evidence and reason”.A slightly modified drowning child thought experiment goes as followsImagine that you are walking by a small pond, and you see five children drowning. You can easily save the child without putting yourself in great danger, but doing so will ruin your expensive shoes. Should you save the children?Obviously, your first instinct is to save all the children. But remember, maximisation is perilous. It’s this kind of attitude that leads to atrocities like large financial crimes. Instead, you should just save three or four of the children. That is still a large amount of good, and importantly, it is not maximally large.But what should you do if you encounter just one drowning child? The options at first pass seem bleak – you can either:Ignore the child and let them drown (which many people believe is bad).Save the child (but know that you have tried to maximise good in that situation).I think there are a few neat solutions to get around these moral conundrums:Save the child with some reasonable probability (say 80%).Before wading into the shallow pond, whip out the D10 you were carrying in your backpack. If you roll an eight or lower, then go ahead and save the child. Otherwise, go about your day.Only partially save the childYou may have an opportunity to help the child to various degrees. Rather than picking up the child and then ensuring that they find their parents or doing other previously thought as reasonable things, you could:Move the child to shallower waters so they are only drowning a little bit.Help the child out of the water but then abandon them somewhere within a 300m radius of the pond.Create a manifold market on whether the child will be saved and bid against it to incentivise other people to help the child.The QALY approachSave the child but replace them with an adult who is not able to swim (but is likely to have fewer years of healthy life left).Commit now to a policy of only saving children who are sufficiently old or likely to have only moderately healthy/happy lives.The King Solomon approachCut the child in half and save the left half of them from drowningUsing these approaches, you should be able to convey the optimal most Holden-approved amount of good.If you like, you can remember the heuristic “maximisation bad”.As well as other things like eradicating diseases.QALYs are quality-adjusted life years (essentially a metric for healthy years lived).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Saving drowning children in light of perilous maximisation, published by calebp on April 2, 2023 on The Effective Altruism Forum.Last year Holden Karnofsky wrote the post, “EA is about maximization, and maximization is perilous”. You could read the post, but I suggest you just jump on board because Holden is cool, and morality is hard.Given that you now believe that maximisation of doing good is actually bad and scary, you should also probably make some adjustments to the classic thought experiment you use to get your friends on board with the new mission of “do the most good possible [a large but not too large amount of good] using evidence and reason”.A slightly modified drowning child thought experiment goes as followsImagine that you are walking by a small pond, and you see five children drowning. You can easily save the child without putting yourself in great danger, but doing so will ruin your expensive shoes. Should you save the children?Obviously, your first instinct is to save all the children. But remember, maximisation is perilous. It’s this kind of attitude that leads to atrocities like large financial crimes. Instead, you should just save three or four of the children. That is still a large amount of good, and importantly, it is not maximally large.But what should you do if you encounter just one drowning child? The options at first pass seem bleak – you can either:Ignore the child and let them drown (which many people believe is bad).Save the child (but know that you have tried to maximise good in that situation).I think there are a few neat solutions to get around these moral conundrums:Save the child with some reasonable probability (say 80%).Before wading into the shallow pond, whip out the D10 you were carrying in your backpack. If you roll an eight or lower, then go ahead and save the child. Otherwise, go about your day.Only partially save the childYou may have an opportunity to help the child to various degrees. Rather than picking up the child and then ensuring that they find their parents or doing other previously thought as reasonable things, you could:Move the child to shallower waters so they are only drowning a little bit.Help the child out of the water but then abandon them somewhere within a 300m radius of the pond.Create a manifold market on whether the child will be saved and bid against it to incentivise other people to help the child.The QALY approachSave the child but replace them with an adult who is not able to swim (but is likely to have fewer years of healthy life left).Commit now to a policy of only saving children who are sufficiently old or likely to have only moderately healthy/happy lives.The King Solomon approachCut the child in half and save the left half of them from drowningUsing these approaches, you should be able to convey the optimal most Holden-approved amount of good.If you like, you can remember the heuristic “maximisation bad”.As well as other things like eradicating diseases.QALYs are quality-adjusted life years (essentially a metric for healthy years lived).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
calebp https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:02 None full 5466
pCttBf6kdhbxKTJat_NL_EA_EA EA - Some lesser-known megaproject ideas by Linch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some lesser-known megaproject ideas, published by Linch on April 2, 2023 on The Effective Altruism Forum.Below is a list of megaproject ideas that didn’t quite make the cut enough to be added to the database. I compiled this list last year and sent it last November to a prominent businessman who I knew from my old Vegns in Gaming days. He completely ignored me, and I was later informed that he actually wasn't very good at League of Legends, which I guess tells you something about the epistemic character and moral fiber of people who disagree with me.Genetically engineer octopodes for intelligence and alignment, helps with understanding AGI better and also for creating successors to the human species if we die out for non-AGI reasonsLarge-scale bioethics-to-AI-ethics transition pipeline: Create fellowships and other prestigious programs to help bioethicists transition to AI ethics. This reduces dumb bioethics decisions (decreasing biorisk) while increasing general perceived annoyance/uncoolness of doing AI research (decreasing AI risk). win-win-winEA Games 1: Invent a highly addictive game that’s optimized for very smart amoral people, plaster ads for it near AGI labs to slow down AI timelinesEA Games 2: Sponsor a number of esports teams with 80,000 Hours, etc, logosGo to Los Angeles and hire thousands of Hollywood wanna-be actors to pretend to be EAs; help solve the imposter syndrome problem in EA by having actual impostersDynasty: “World leaders or children of world leaders” matchmaking app + concierge service. Maybe if Hunter Biden married Xi Mingze, WWIII would become less likely?The above but for AGI companies. Is Demis Hassabis single? Is Sam Altman?A really big malaria netCovert 1% of the world’s stainless steel to paperclips, as a form of acausal trade/token of good faithTake over the education system of a small European countryImpact IslandAmerica’s Got Talent, except for AI alignmentColonize Mars: backup option for humanity, because bunkers are too cheapPayback: Genetically engineer an army of human-eating chickensFigure out how to resurrect Jeremy Bentham, ask him to do our cause prioritizationFlood the internet with memes/stories of AIs being good to humans, to help balance the training dataMandate all AI/bio labs carry GCR insurance, ~10T payout if people can demonstrate they pose a >1% existential risk + courts can order a halt to all progress during a suitA really big mirror, to help with climate changeHappy rat farms (might be too expensive tho)Genetically engineer really fast-growing photosynthesizing plants to help combat climate changeGenetically engineer really rapidly breeding locusts to solve the problem of your GMO plants crowding out normal agricultureGenetically engineer really fast and lethal praying mantises to solve your locust problem(Brian Tomasik might be against this)Humane&painless APM-based pesticides/gray goo to help with your praying mantis problemThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Linch https://forum.effectivealtruism.org/posts/pCttBf6kdhbxKTJat/some-lesser-known-megaproject-ideas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some lesser-known megaproject ideas, published by Linch on April 2, 2023 on The Effective Altruism Forum.Below is a list of megaproject ideas that didn’t quite make the cut enough to be added to the database. I compiled this list last year and sent it last November to a prominent businessman who I knew from my old Vegns in Gaming days. He completely ignored me, and I was later informed that he actually wasn't very good at League of Legends, which I guess tells you something about the epistemic character and moral fiber of people who disagree with me.Genetically engineer octopodes for intelligence and alignment, helps with understanding AGI better and also for creating successors to the human species if we die out for non-AGI reasonsLarge-scale bioethics-to-AI-ethics transition pipeline: Create fellowships and other prestigious programs to help bioethicists transition to AI ethics. This reduces dumb bioethics decisions (decreasing biorisk) while increasing general perceived annoyance/uncoolness of doing AI research (decreasing AI risk). win-win-winEA Games 1: Invent a highly addictive game that’s optimized for very smart amoral people, plaster ads for it near AGI labs to slow down AI timelinesEA Games 2: Sponsor a number of esports teams with 80,000 Hours, etc, logosGo to Los Angeles and hire thousands of Hollywood wanna-be actors to pretend to be EAs; help solve the imposter syndrome problem in EA by having actual impostersDynasty: “World leaders or children of world leaders” matchmaking app + concierge service. Maybe if Hunter Biden married Xi Mingze, WWIII would become less likely?The above but for AGI companies. Is Demis Hassabis single? Is Sam Altman?A really big malaria netCovert 1% of the world’s stainless steel to paperclips, as a form of acausal trade/token of good faithTake over the education system of a small European countryImpact IslandAmerica’s Got Talent, except for AI alignmentColonize Mars: backup option for humanity, because bunkers are too cheapPayback: Genetically engineer an army of human-eating chickensFigure out how to resurrect Jeremy Bentham, ask him to do our cause prioritizationFlood the internet with memes/stories of AIs being good to humans, to help balance the training dataMandate all AI/bio labs carry GCR insurance, ~10T payout if people can demonstrate they pose a >1% existential risk + courts can order a halt to all progress during a suitA really big mirror, to help with climate changeHappy rat farms (might be too expensive tho)Genetically engineer really fast-growing photosynthesizing plants to help combat climate changeGenetically engineer really rapidly breeding locusts to solve the problem of your GMO plants crowding out normal agricultureGenetically engineer really fast and lethal praying mantises to solve your locust problem(Brian Tomasik might be against this)Humane&painless APM-based pesticides/gray goo to help with your praying mantis problemThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 02 Apr 2023 01:17:29 +0000 EA - Some lesser-known megaproject ideas by Linch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some lesser-known megaproject ideas, published by Linch on April 2, 2023 on The Effective Altruism Forum.Below is a list of megaproject ideas that didn’t quite make the cut enough to be added to the database. I compiled this list last year and sent it last November to a prominent businessman who I knew from my old Vegns in Gaming days. He completely ignored me, and I was later informed that he actually wasn't very good at League of Legends, which I guess tells you something about the epistemic character and moral fiber of people who disagree with me.Genetically engineer octopodes for intelligence and alignment, helps with understanding AGI better and also for creating successors to the human species if we die out for non-AGI reasonsLarge-scale bioethics-to-AI-ethics transition pipeline: Create fellowships and other prestigious programs to help bioethicists transition to AI ethics. This reduces dumb bioethics decisions (decreasing biorisk) while increasing general perceived annoyance/uncoolness of doing AI research (decreasing AI risk). win-win-winEA Games 1: Invent a highly addictive game that’s optimized for very smart amoral people, plaster ads for it near AGI labs to slow down AI timelinesEA Games 2: Sponsor a number of esports teams with 80,000 Hours, etc, logosGo to Los Angeles and hire thousands of Hollywood wanna-be actors to pretend to be EAs; help solve the imposter syndrome problem in EA by having actual impostersDynasty: “World leaders or children of world leaders” matchmaking app + concierge service. Maybe if Hunter Biden married Xi Mingze, WWIII would become less likely?The above but for AGI companies. Is Demis Hassabis single? Is Sam Altman?A really big malaria netCovert 1% of the world’s stainless steel to paperclips, as a form of acausal trade/token of good faithTake over the education system of a small European countryImpact IslandAmerica’s Got Talent, except for AI alignmentColonize Mars: backup option for humanity, because bunkers are too cheapPayback: Genetically engineer an army of human-eating chickensFigure out how to resurrect Jeremy Bentham, ask him to do our cause prioritizationFlood the internet with memes/stories of AIs being good to humans, to help balance the training dataMandate all AI/bio labs carry GCR insurance, ~10T payout if people can demonstrate they pose a >1% existential risk + courts can order a halt to all progress during a suitA really big mirror, to help with climate changeHappy rat farms (might be too expensive tho)Genetically engineer really fast-growing photosynthesizing plants to help combat climate changeGenetically engineer really rapidly breeding locusts to solve the problem of your GMO plants crowding out normal agricultureGenetically engineer really fast and lethal praying mantises to solve your locust problem(Brian Tomasik might be against this)Humane&painless APM-based pesticides/gray goo to help with your praying mantis problemThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some lesser-known megaproject ideas, published by Linch on April 2, 2023 on The Effective Altruism Forum.Below is a list of megaproject ideas that didn’t quite make the cut enough to be added to the database. I compiled this list last year and sent it last November to a prominent businessman who I knew from my old Vegns in Gaming days. He completely ignored me, and I was later informed that he actually wasn't very good at League of Legends, which I guess tells you something about the epistemic character and moral fiber of people who disagree with me.Genetically engineer octopodes for intelligence and alignment, helps with understanding AGI better and also for creating successors to the human species if we die out for non-AGI reasonsLarge-scale bioethics-to-AI-ethics transition pipeline: Create fellowships and other prestigious programs to help bioethicists transition to AI ethics. This reduces dumb bioethics decisions (decreasing biorisk) while increasing general perceived annoyance/uncoolness of doing AI research (decreasing AI risk). win-win-winEA Games 1: Invent a highly addictive game that’s optimized for very smart amoral people, plaster ads for it near AGI labs to slow down AI timelinesEA Games 2: Sponsor a number of esports teams with 80,000 Hours, etc, logosGo to Los Angeles and hire thousands of Hollywood wanna-be actors to pretend to be EAs; help solve the imposter syndrome problem in EA by having actual impostersDynasty: “World leaders or children of world leaders” matchmaking app + concierge service. Maybe if Hunter Biden married Xi Mingze, WWIII would become less likely?The above but for AGI companies. Is Demis Hassabis single? Is Sam Altman?A really big malaria netCovert 1% of the world’s stainless steel to paperclips, as a form of acausal trade/token of good faithTake over the education system of a small European countryImpact IslandAmerica’s Got Talent, except for AI alignmentColonize Mars: backup option for humanity, because bunkers are too cheapPayback: Genetically engineer an army of human-eating chickensFigure out how to resurrect Jeremy Bentham, ask him to do our cause prioritizationFlood the internet with memes/stories of AIs being good to humans, to help balance the training dataMandate all AI/bio labs carry GCR insurance, ~10T payout if people can demonstrate they pose a >1% existential risk + courts can order a halt to all progress during a suitA really big mirror, to help with climate changeHappy rat farms (might be too expensive tho)Genetically engineer really fast-growing photosynthesizing plants to help combat climate changeGenetically engineer really rapidly breeding locusts to solve the problem of your GMO plants crowding out normal agricultureGenetically engineer really fast and lethal praying mantises to solve your locust problem(Brian Tomasik might be against this)Humane&painless APM-based pesticides/gray goo to help with your praying mantis problemThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Linch https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:10 None full 5465
e82qwG6S2HNST9Gcx_NL_EA_EA EA - The Case for Earning to Live by Spending What We Can Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Case for Earning to Live, published by Spending What We Can on April 1, 2023 on The Effective Altruism Forum.BackgroundEarning to give has been a classic EA suggestion due to the value money can bring elsewhere in the world. Although lately e2g has fallen out of favor somewhat, for those already established in high-earning careers, or anyone who may have some extra cash available to donate, the question remains of how best to use money for optimal impact.Previous research claimed diminishing marginal returns of money on happiness, and concluded excess money should be donated to effective charities. However, new research has shown that money can buy happiness after all, as anyone who has a lot of money and knows how to spend it could’ve told you all along. Thus, in this post we advocate for a new strategy of spending substantially more of one’s money on oneself, which we dub earning to live.Earning to LiveThe basic idea of earning to live is that the best use of one’s money is spending it directly on oneself. This includes not only survival needs such as food and shelter, but also “luxuries” (though we prefer to avoid the term due to negative connotations) such as travel and entertainment. Further examples are discussed later in this post. Compared to earning to give, earning to live has numerous advantages:Reduced operational overhead: while any charity requires some administrative overhead, money you spend on yourself can go 100% to your desired target.Better quantifiability: a central challenge of effective giving is determining which charities yield the greatest impact per dollar, often relying on diligence and estimates by third-party evaluators. It’s much easier to get a feel for how you can spend money to bring yourself personal utility and then do more of that.High neglectedness: while other causes may have thousands of donors contributing millions or even billions in funding, the number of people spending money on you is likely fewer than 10.Improved productivity: it’s well established that a miserly existence can hamper your productivity and therefore your impact. By spending to make your own life more comfortable, you can build a virtuous cycle of higher output and thus higher income, which you can continue spending on yourself.Research value: similarly, all personal spending may be viewed as investigation into what kind of spending brings you utility, so there are dual benefits of direct utility and value as data for deciding how to spend in the future.Solipsism: if you happen to be the only real consciousness in the universe, spending money for the benefit of others is irrational.Near-termism: if you value the present much more highly than the future, you can spend money on yourself immediately, but donations will take time to transfer and process before having any impact. This may be especially relevant if we all die very soon.Long-termism: if you value the future as much as the present, you may instead consider investing your money in order to spend larger amounts later, or pass even greater generational wealth to your heirs.Suggestions for spendingA tenet of earning to live is that you know best how to spend your money to maximize your utility. However, if you prefer to parrot views of charismatic thought leaders to establish your membership in an in-group, my standard recommendations are:Fine dining: a tasting menu prepared by a world-renowned chef tends to taste better than microwaved lentils at home. If you’re vegan for animal welfare reasons, you still have options.Travel: with adequate budget, flying can be a pleasant experience. Exploring new parts of the world can expose you to new ways to spend money. Also, since you’re probably American, much of the world is much poorer than you, and seeing that first-hand can build a sens...]]>
Spending What We Can https://forum.effectivealtruism.org/posts/e82qwG6S2HNST9Gcx/the-case-for-earning-to-live Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Case for Earning to Live, published by Spending What We Can on April 1, 2023 on The Effective Altruism Forum.BackgroundEarning to give has been a classic EA suggestion due to the value money can bring elsewhere in the world. Although lately e2g has fallen out of favor somewhat, for those already established in high-earning careers, or anyone who may have some extra cash available to donate, the question remains of how best to use money for optimal impact.Previous research claimed diminishing marginal returns of money on happiness, and concluded excess money should be donated to effective charities. However, new research has shown that money can buy happiness after all, as anyone who has a lot of money and knows how to spend it could’ve told you all along. Thus, in this post we advocate for a new strategy of spending substantially more of one’s money on oneself, which we dub earning to live.Earning to LiveThe basic idea of earning to live is that the best use of one’s money is spending it directly on oneself. This includes not only survival needs such as food and shelter, but also “luxuries” (though we prefer to avoid the term due to negative connotations) such as travel and entertainment. Further examples are discussed later in this post. Compared to earning to give, earning to live has numerous advantages:Reduced operational overhead: while any charity requires some administrative overhead, money you spend on yourself can go 100% to your desired target.Better quantifiability: a central challenge of effective giving is determining which charities yield the greatest impact per dollar, often relying on diligence and estimates by third-party evaluators. It’s much easier to get a feel for how you can spend money to bring yourself personal utility and then do more of that.High neglectedness: while other causes may have thousands of donors contributing millions or even billions in funding, the number of people spending money on you is likely fewer than 10.Improved productivity: it’s well established that a miserly existence can hamper your productivity and therefore your impact. By spending to make your own life more comfortable, you can build a virtuous cycle of higher output and thus higher income, which you can continue spending on yourself.Research value: similarly, all personal spending may be viewed as investigation into what kind of spending brings you utility, so there are dual benefits of direct utility and value as data for deciding how to spend in the future.Solipsism: if you happen to be the only real consciousness in the universe, spending money for the benefit of others is irrational.Near-termism: if you value the present much more highly than the future, you can spend money on yourself immediately, but donations will take time to transfer and process before having any impact. This may be especially relevant if we all die very soon.Long-termism: if you value the future as much as the present, you may instead consider investing your money in order to spend larger amounts later, or pass even greater generational wealth to your heirs.Suggestions for spendingA tenet of earning to live is that you know best how to spend your money to maximize your utility. However, if you prefer to parrot views of charismatic thought leaders to establish your membership in an in-group, my standard recommendations are:Fine dining: a tasting menu prepared by a world-renowned chef tends to taste better than microwaved lentils at home. If you’re vegan for animal welfare reasons, you still have options.Travel: with adequate budget, flying can be a pleasant experience. Exploring new parts of the world can expose you to new ways to spend money. Also, since you’re probably American, much of the world is much poorer than you, and seeing that first-hand can build a sens...]]>
Sat, 01 Apr 2023 20:59:18 +0000 EA - The Case for Earning to Live by Spending What We Can Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Case for Earning to Live, published by Spending What We Can on April 1, 2023 on The Effective Altruism Forum.BackgroundEarning to give has been a classic EA suggestion due to the value money can bring elsewhere in the world. Although lately e2g has fallen out of favor somewhat, for those already established in high-earning careers, or anyone who may have some extra cash available to donate, the question remains of how best to use money for optimal impact.Previous research claimed diminishing marginal returns of money on happiness, and concluded excess money should be donated to effective charities. However, new research has shown that money can buy happiness after all, as anyone who has a lot of money and knows how to spend it could’ve told you all along. Thus, in this post we advocate for a new strategy of spending substantially more of one’s money on oneself, which we dub earning to live.Earning to LiveThe basic idea of earning to live is that the best use of one’s money is spending it directly on oneself. This includes not only survival needs such as food and shelter, but also “luxuries” (though we prefer to avoid the term due to negative connotations) such as travel and entertainment. Further examples are discussed later in this post. Compared to earning to give, earning to live has numerous advantages:Reduced operational overhead: while any charity requires some administrative overhead, money you spend on yourself can go 100% to your desired target.Better quantifiability: a central challenge of effective giving is determining which charities yield the greatest impact per dollar, often relying on diligence and estimates by third-party evaluators. It’s much easier to get a feel for how you can spend money to bring yourself personal utility and then do more of that.High neglectedness: while other causes may have thousands of donors contributing millions or even billions in funding, the number of people spending money on you is likely fewer than 10.Improved productivity: it’s well established that a miserly existence can hamper your productivity and therefore your impact. By spending to make your own life more comfortable, you can build a virtuous cycle of higher output and thus higher income, which you can continue spending on yourself.Research value: similarly, all personal spending may be viewed as investigation into what kind of spending brings you utility, so there are dual benefits of direct utility and value as data for deciding how to spend in the future.Solipsism: if you happen to be the only real consciousness in the universe, spending money for the benefit of others is irrational.Near-termism: if you value the present much more highly than the future, you can spend money on yourself immediately, but donations will take time to transfer and process before having any impact. This may be especially relevant if we all die very soon.Long-termism: if you value the future as much as the present, you may instead consider investing your money in order to spend larger amounts later, or pass even greater generational wealth to your heirs.Suggestions for spendingA tenet of earning to live is that you know best how to spend your money to maximize your utility. However, if you prefer to parrot views of charismatic thought leaders to establish your membership in an in-group, my standard recommendations are:Fine dining: a tasting menu prepared by a world-renowned chef tends to taste better than microwaved lentils at home. If you’re vegan for animal welfare reasons, you still have options.Travel: with adequate budget, flying can be a pleasant experience. Exploring new parts of the world can expose you to new ways to spend money. Also, since you’re probably American, much of the world is much poorer than you, and seeing that first-hand can build a sens...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Case for Earning to Live, published by Spending What We Can on April 1, 2023 on The Effective Altruism Forum.BackgroundEarning to give has been a classic EA suggestion due to the value money can bring elsewhere in the world. Although lately e2g has fallen out of favor somewhat, for those already established in high-earning careers, or anyone who may have some extra cash available to donate, the question remains of how best to use money for optimal impact.Previous research claimed diminishing marginal returns of money on happiness, and concluded excess money should be donated to effective charities. However, new research has shown that money can buy happiness after all, as anyone who has a lot of money and knows how to spend it could’ve told you all along. Thus, in this post we advocate for a new strategy of spending substantially more of one’s money on oneself, which we dub earning to live.Earning to LiveThe basic idea of earning to live is that the best use of one’s money is spending it directly on oneself. This includes not only survival needs such as food and shelter, but also “luxuries” (though we prefer to avoid the term due to negative connotations) such as travel and entertainment. Further examples are discussed later in this post. Compared to earning to give, earning to live has numerous advantages:Reduced operational overhead: while any charity requires some administrative overhead, money you spend on yourself can go 100% to your desired target.Better quantifiability: a central challenge of effective giving is determining which charities yield the greatest impact per dollar, often relying on diligence and estimates by third-party evaluators. It’s much easier to get a feel for how you can spend money to bring yourself personal utility and then do more of that.High neglectedness: while other causes may have thousands of donors contributing millions or even billions in funding, the number of people spending money on you is likely fewer than 10.Improved productivity: it’s well established that a miserly existence can hamper your productivity and therefore your impact. By spending to make your own life more comfortable, you can build a virtuous cycle of higher output and thus higher income, which you can continue spending on yourself.Research value: similarly, all personal spending may be viewed as investigation into what kind of spending brings you utility, so there are dual benefits of direct utility and value as data for deciding how to spend in the future.Solipsism: if you happen to be the only real consciousness in the universe, spending money for the benefit of others is irrational.Near-termism: if you value the present much more highly than the future, you can spend money on yourself immediately, but donations will take time to transfer and process before having any impact. This may be especially relevant if we all die very soon.Long-termism: if you value the future as much as the present, you may instead consider investing your money in order to spend larger amounts later, or pass even greater generational wealth to your heirs.Suggestions for spendingA tenet of earning to live is that you know best how to spend your money to maximize your utility. However, if you prefer to parrot views of charismatic thought leaders to establish your membership in an in-group, my standard recommendations are:Fine dining: a tasting menu prepared by a world-renowned chef tends to taste better than microwaved lentils at home. If you’re vegan for animal welfare reasons, you still have options.Travel: with adequate budget, flying can be a pleasant experience. Exploring new parts of the world can expose you to new ways to spend money. Also, since you’re probably American, much of the world is much poorer than you, and seeing that first-hand can build a sens...]]>
Spending What We Can https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:59 None full 5451
eQZRorvDRQ6HEsQtj_NL_EA_EA EA - It’s "The EA-Adjacent Forum" now by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It’s "The EA-Adjacent Forum" now, published by Lizka on April 1, 2023 on The Effective Altruism Forum.TL;DR: We’re not really comfortable calling ourselves “EAs.” Moreover, we know that this is true for a lot of people in the EA community the eclectic group of people trying to make the world better who happen to use the Forum. So we’re renaming the “Effective Altruism Forum” to be the "EA-Adjacent Forum" (“EA Forum” for short).We have some deep disagreements with EALook, we run a forum focused on discussions about how to do the most good we can, and we work at the "Centre for Effective Altruism," but we're not really members of the EA community. We have some deep disagreements with many parts of the movement.(We don’t even always agree with each other about our disagreements, we don’t always think that the EA thing is the right thing (see also), and we even hosted an EA criticism contest to surface disagreements.)It’s not just usWe know that others who use the Forum also prefer to call themselves “EA-adjacent.” We’re also somewhat worried that anything that someone posts on the EA Forum can be interpreted as representative of effective altruism.We think it’s important to preserve nuance and be clear about the facts listed here, so we’re rebranding.Impact of the rebrand, next stepsIt’s already the case that “EA” often stands for “Ea-Adjacent,” and we don’t think the rebrand will change much in terms of how the Forum will function.As always, we’d love to hear your feedback. You can comment here or contact us directly.(Thanks to [unnamed people] for suggesting this rebrand. We’d credit them directly, but some of them prefer to not associate so closely with EA.)The EA-Adjacent Forum team. Please note that not all teammates agree with everything written here (probably).Some example disagreements:1) We disagree with a lot of people in the EA community about styling and font choices. 2) Most people in the EA community promote functional decision theory, but after spending many years making software for the forum, we've come to the conclusion that object-oriented decision theory is superior.3) We disagree with CEA about the spelling of “Centre” in “Centre for Effective Altruism.” It should be spelled “center” as Noah Webster intended.4) Many EAs appear to focus on scope sensitivity, but we think scope specificity is more neglected5) We think QALYs should be converted to their metric-system equivalent, such that 1 metric QALY is the amount of quality-adjusted life that can be supported by 1 joule of energy within a 1-cubic-meter box over 1 year at 0 degrees celsius.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://forum.effectivealtruism.org/posts/eQZRorvDRQ6HEsQtj/it-s-the-ea-adjacent-forum-now Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It’s "The EA-Adjacent Forum" now, published by Lizka on April 1, 2023 on The Effective Altruism Forum.TL;DR: We’re not really comfortable calling ourselves “EAs.” Moreover, we know that this is true for a lot of people in the EA community the eclectic group of people trying to make the world better who happen to use the Forum. So we’re renaming the “Effective Altruism Forum” to be the "EA-Adjacent Forum" (“EA Forum” for short).We have some deep disagreements with EALook, we run a forum focused on discussions about how to do the most good we can, and we work at the "Centre for Effective Altruism," but we're not really members of the EA community. We have some deep disagreements with many parts of the movement.(We don’t even always agree with each other about our disagreements, we don’t always think that the EA thing is the right thing (see also), and we even hosted an EA criticism contest to surface disagreements.)It’s not just usWe know that others who use the Forum also prefer to call themselves “EA-adjacent.” We’re also somewhat worried that anything that someone posts on the EA Forum can be interpreted as representative of effective altruism.We think it’s important to preserve nuance and be clear about the facts listed here, so we’re rebranding.Impact of the rebrand, next stepsIt’s already the case that “EA” often stands for “Ea-Adjacent,” and we don’t think the rebrand will change much in terms of how the Forum will function.As always, we’d love to hear your feedback. You can comment here or contact us directly.(Thanks to [unnamed people] for suggesting this rebrand. We’d credit them directly, but some of them prefer to not associate so closely with EA.)The EA-Adjacent Forum team. Please note that not all teammates agree with everything written here (probably).Some example disagreements:1) We disagree with a lot of people in the EA community about styling and font choices. 2) Most people in the EA community promote functional decision theory, but after spending many years making software for the forum, we've come to the conclusion that object-oriented decision theory is superior.3) We disagree with CEA about the spelling of “Centre” in “Centre for Effective Altruism.” It should be spelled “center” as Noah Webster intended.4) Many EAs appear to focus on scope sensitivity, but we think scope specificity is more neglected5) We think QALYs should be converted to their metric-system equivalent, such that 1 metric QALY is the amount of quality-adjusted life that can be supported by 1 joule of energy within a 1-cubic-meter box over 1 year at 0 degrees celsius.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 01 Apr 2023 20:49:29 +0000 EA - It’s "The EA-Adjacent Forum" now by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It’s "The EA-Adjacent Forum" now, published by Lizka on April 1, 2023 on The Effective Altruism Forum.TL;DR: We’re not really comfortable calling ourselves “EAs.” Moreover, we know that this is true for a lot of people in the EA community the eclectic group of people trying to make the world better who happen to use the Forum. So we’re renaming the “Effective Altruism Forum” to be the "EA-Adjacent Forum" (“EA Forum” for short).We have some deep disagreements with EALook, we run a forum focused on discussions about how to do the most good we can, and we work at the "Centre for Effective Altruism," but we're not really members of the EA community. We have some deep disagreements with many parts of the movement.(We don’t even always agree with each other about our disagreements, we don’t always think that the EA thing is the right thing (see also), and we even hosted an EA criticism contest to surface disagreements.)It’s not just usWe know that others who use the Forum also prefer to call themselves “EA-adjacent.” We’re also somewhat worried that anything that someone posts on the EA Forum can be interpreted as representative of effective altruism.We think it’s important to preserve nuance and be clear about the facts listed here, so we’re rebranding.Impact of the rebrand, next stepsIt’s already the case that “EA” often stands for “Ea-Adjacent,” and we don’t think the rebrand will change much in terms of how the Forum will function.As always, we’d love to hear your feedback. You can comment here or contact us directly.(Thanks to [unnamed people] for suggesting this rebrand. We’d credit them directly, but some of them prefer to not associate so closely with EA.)The EA-Adjacent Forum team. Please note that not all teammates agree with everything written here (probably).Some example disagreements:1) We disagree with a lot of people in the EA community about styling and font choices. 2) Most people in the EA community promote functional decision theory, but after spending many years making software for the forum, we've come to the conclusion that object-oriented decision theory is superior.3) We disagree with CEA about the spelling of “Centre” in “Centre for Effective Altruism.” It should be spelled “center” as Noah Webster intended.4) Many EAs appear to focus on scope sensitivity, but we think scope specificity is more neglected5) We think QALYs should be converted to their metric-system equivalent, such that 1 metric QALY is the amount of quality-adjusted life that can be supported by 1 joule of energy within a 1-cubic-meter box over 1 year at 0 degrees celsius.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It’s "The EA-Adjacent Forum" now, published by Lizka on April 1, 2023 on The Effective Altruism Forum.TL;DR: We’re not really comfortable calling ourselves “EAs.” Moreover, we know that this is true for a lot of people in the EA community the eclectic group of people trying to make the world better who happen to use the Forum. So we’re renaming the “Effective Altruism Forum” to be the "EA-Adjacent Forum" (“EA Forum” for short).We have some deep disagreements with EALook, we run a forum focused on discussions about how to do the most good we can, and we work at the "Centre for Effective Altruism," but we're not really members of the EA community. We have some deep disagreements with many parts of the movement.(We don’t even always agree with each other about our disagreements, we don’t always think that the EA thing is the right thing (see also), and we even hosted an EA criticism contest to surface disagreements.)It’s not just usWe know that others who use the Forum also prefer to call themselves “EA-adjacent.” We’re also somewhat worried that anything that someone posts on the EA Forum can be interpreted as representative of effective altruism.We think it’s important to preserve nuance and be clear about the facts listed here, so we’re rebranding.Impact of the rebrand, next stepsIt’s already the case that “EA” often stands for “Ea-Adjacent,” and we don’t think the rebrand will change much in terms of how the Forum will function.As always, we’d love to hear your feedback. You can comment here or contact us directly.(Thanks to [unnamed people] for suggesting this rebrand. We’d credit them directly, but some of them prefer to not associate so closely with EA.)The EA-Adjacent Forum team. Please note that not all teammates agree with everything written here (probably).Some example disagreements:1) We disagree with a lot of people in the EA community about styling and font choices. 2) Most people in the EA community promote functional decision theory, but after spending many years making software for the forum, we've come to the conclusion that object-oriented decision theory is superior.3) We disagree with CEA about the spelling of “Centre” in “Centre for Effective Altruism.” It should be spelled “center” as Noah Webster intended.4) Many EAs appear to focus on scope sensitivity, but we think scope specificity is more neglected5) We think QALYs should be converted to their metric-system equivalent, such that 1 metric QALY is the amount of quality-adjusted life that can be supported by 1 joule of energy within a 1-cubic-meter box over 1 year at 0 degrees celsius.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:39 None full 5450
vLctZutbRttQPnWnw_NL_EA_EA EA - Meta Directory of April Fool’s Ideas by Yonatan Cale Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta Directory of April Fool’s Ideas, published by Yonatan Cale on April 1, 2023 on The Effective Altruism Forum.IntroAs Effective Altruism grows, we face new challenges as a movement. Funding, cause prioritization, but the most neglected area is definitely just having fun.Potential impact# If one EA reads one post which makes them happy, how much more productive do they become, in percent, for that day? INCREASED_PERSON_PRODUCTIVITY_PER_HAPPY_FUNNY_POST_FOR_ONE_DAY = sq.to(-0.05, 0.10)FRACTION_OF_EA_THAT_READS_A_GOOD_POST = sq.to(0.05, 0.4)# This calculation is so intuitive that even copilot could autocomplete itIMPACT_PER_APRIL_FOOL_POST = \ EA_YEARLY_IMPACT_IN_QALYS \ INCREASED_PERSON_PRODUCTIVITY_PER_HAPPY_FUNNY_POST_FOR_ONE_DAY \ FRACTION_OF_EA_THAT_READS_A_GOOD_POSTbins = 100 samples = IMPACT_PER_APRIL_FOOL_POST @ bins plt.hist(samples, bins=bins)As you can see, the spike around 0.1 indicates (assuming I'm reading this chart incorrectly) that the potential impact is relatively very high, compared to some other smaller spikes.A marketplace of ideasSome people have ideas for April Fool’s posts, and some people are good at writing but don’t have ideas.Let’s make these high impact connections between EA cofounders!ImplementationThe ideal platform will beA list of ideasEach idea has all the essential fields such as “title”, “long description”, “target audience”, “expected impact of that audience”, “squiggle model estimating the impact of the post given the post's quality”, “what is the joke you told that you’re most proud of (we use this as applicant background, but don't worry, we won't judge you)” and probably many more fields, the more the better, and we would like more ideas for fields below as this seems crucial for the project’s success.Of course, we will ask people not to spend no more than 1 hour filling out this form. (some people will be randomly assigned to the "45 minutes only" group, and we'll check which group performs better)We will send out a recursive feedback form to each applicant.The platform will support for comments, such as “I’d fund this post”, or “looking for a co-author”Emoji inclusiveness: If Zoomers use , Millennials will see , which is important to avoid miscommunication.User chatProfilesSub groups by topicA feed that users can scrollAlgorithms to optimize that feed based on the user’s preferences (open sourced, inspired by Twitter)A blue backgroundBut as an MVP, we’ll just use the comments to this post.Creating a critical massThis project will only succeed if all the following things will happen:The forum will change its font to Comic Sans (this step was easier than expected)Readers like you (yes you) will share their ideas for April Fool's posts here in the commentsThanks!This post was written quickly since we used the top EA tools for getting things done (having a due date), so any suggestions are welcome.Thanks Edo Arad for helping write this post. I'm sure he'd want to tell you that all mistakes are his. Edo could not immediately comment on this line I just added.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Yonatan Cale https://forum.effectivealtruism.org/posts/vLctZutbRttQPnWnw/meta-directory-of-april-fool-s-ideas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta Directory of April Fool’s Ideas, published by Yonatan Cale on April 1, 2023 on The Effective Altruism Forum.IntroAs Effective Altruism grows, we face new challenges as a movement. Funding, cause prioritization, but the most neglected area is definitely just having fun.Potential impact# If one EA reads one post which makes them happy, how much more productive do they become, in percent, for that day? INCREASED_PERSON_PRODUCTIVITY_PER_HAPPY_FUNNY_POST_FOR_ONE_DAY = sq.to(-0.05, 0.10)FRACTION_OF_EA_THAT_READS_A_GOOD_POST = sq.to(0.05, 0.4)# This calculation is so intuitive that even copilot could autocomplete itIMPACT_PER_APRIL_FOOL_POST = \ EA_YEARLY_IMPACT_IN_QALYS \ INCREASED_PERSON_PRODUCTIVITY_PER_HAPPY_FUNNY_POST_FOR_ONE_DAY \ FRACTION_OF_EA_THAT_READS_A_GOOD_POSTbins = 100 samples = IMPACT_PER_APRIL_FOOL_POST @ bins plt.hist(samples, bins=bins)As you can see, the spike around 0.1 indicates (assuming I'm reading this chart incorrectly) that the potential impact is relatively very high, compared to some other smaller spikes.A marketplace of ideasSome people have ideas for April Fool’s posts, and some people are good at writing but don’t have ideas.Let’s make these high impact connections between EA cofounders!ImplementationThe ideal platform will beA list of ideasEach idea has all the essential fields such as “title”, “long description”, “target audience”, “expected impact of that audience”, “squiggle model estimating the impact of the post given the post's quality”, “what is the joke you told that you’re most proud of (we use this as applicant background, but don't worry, we won't judge you)” and probably many more fields, the more the better, and we would like more ideas for fields below as this seems crucial for the project’s success.Of course, we will ask people not to spend no more than 1 hour filling out this form. (some people will be randomly assigned to the "45 minutes only" group, and we'll check which group performs better)We will send out a recursive feedback form to each applicant.The platform will support for comments, such as “I’d fund this post”, or “looking for a co-author”Emoji inclusiveness: If Zoomers use , Millennials will see , which is important to avoid miscommunication.User chatProfilesSub groups by topicA feed that users can scrollAlgorithms to optimize that feed based on the user’s preferences (open sourced, inspired by Twitter)A blue backgroundBut as an MVP, we’ll just use the comments to this post.Creating a critical massThis project will only succeed if all the following things will happen:The forum will change its font to Comic Sans (this step was easier than expected)Readers like you (yes you) will share their ideas for April Fool's posts here in the commentsThanks!This post was written quickly since we used the top EA tools for getting things done (having a due date), so any suggestions are welcome.Thanks Edo Arad for helping write this post. I'm sure he'd want to tell you that all mistakes are his. Edo could not immediately comment on this line I just added.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 01 Apr 2023 19:53:41 +0000 EA - Meta Directory of April Fool’s Ideas by Yonatan Cale Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta Directory of April Fool’s Ideas, published by Yonatan Cale on April 1, 2023 on The Effective Altruism Forum.IntroAs Effective Altruism grows, we face new challenges as a movement. Funding, cause prioritization, but the most neglected area is definitely just having fun.Potential impact# If one EA reads one post which makes them happy, how much more productive do they become, in percent, for that day? INCREASED_PERSON_PRODUCTIVITY_PER_HAPPY_FUNNY_POST_FOR_ONE_DAY = sq.to(-0.05, 0.10)FRACTION_OF_EA_THAT_READS_A_GOOD_POST = sq.to(0.05, 0.4)# This calculation is so intuitive that even copilot could autocomplete itIMPACT_PER_APRIL_FOOL_POST = \ EA_YEARLY_IMPACT_IN_QALYS \ INCREASED_PERSON_PRODUCTIVITY_PER_HAPPY_FUNNY_POST_FOR_ONE_DAY \ FRACTION_OF_EA_THAT_READS_A_GOOD_POSTbins = 100 samples = IMPACT_PER_APRIL_FOOL_POST @ bins plt.hist(samples, bins=bins)As you can see, the spike around 0.1 indicates (assuming I'm reading this chart incorrectly) that the potential impact is relatively very high, compared to some other smaller spikes.A marketplace of ideasSome people have ideas for April Fool’s posts, and some people are good at writing but don’t have ideas.Let’s make these high impact connections between EA cofounders!ImplementationThe ideal platform will beA list of ideasEach idea has all the essential fields such as “title”, “long description”, “target audience”, “expected impact of that audience”, “squiggle model estimating the impact of the post given the post's quality”, “what is the joke you told that you’re most proud of (we use this as applicant background, but don't worry, we won't judge you)” and probably many more fields, the more the better, and we would like more ideas for fields below as this seems crucial for the project’s success.Of course, we will ask people not to spend no more than 1 hour filling out this form. (some people will be randomly assigned to the "45 minutes only" group, and we'll check which group performs better)We will send out a recursive feedback form to each applicant.The platform will support for comments, such as “I’d fund this post”, or “looking for a co-author”Emoji inclusiveness: If Zoomers use , Millennials will see , which is important to avoid miscommunication.User chatProfilesSub groups by topicA feed that users can scrollAlgorithms to optimize that feed based on the user’s preferences (open sourced, inspired by Twitter)A blue backgroundBut as an MVP, we’ll just use the comments to this post.Creating a critical massThis project will only succeed if all the following things will happen:The forum will change its font to Comic Sans (this step was easier than expected)Readers like you (yes you) will share their ideas for April Fool's posts here in the commentsThanks!This post was written quickly since we used the top EA tools for getting things done (having a due date), so any suggestions are welcome.Thanks Edo Arad for helping write this post. I'm sure he'd want to tell you that all mistakes are his. Edo could not immediately comment on this line I just added.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta Directory of April Fool’s Ideas, published by Yonatan Cale on April 1, 2023 on The Effective Altruism Forum.IntroAs Effective Altruism grows, we face new challenges as a movement. Funding, cause prioritization, but the most neglected area is definitely just having fun.Potential impact# If one EA reads one post which makes them happy, how much more productive do they become, in percent, for that day? INCREASED_PERSON_PRODUCTIVITY_PER_HAPPY_FUNNY_POST_FOR_ONE_DAY = sq.to(-0.05, 0.10)FRACTION_OF_EA_THAT_READS_A_GOOD_POST = sq.to(0.05, 0.4)# This calculation is so intuitive that even copilot could autocomplete itIMPACT_PER_APRIL_FOOL_POST = \ EA_YEARLY_IMPACT_IN_QALYS \ INCREASED_PERSON_PRODUCTIVITY_PER_HAPPY_FUNNY_POST_FOR_ONE_DAY \ FRACTION_OF_EA_THAT_READS_A_GOOD_POSTbins = 100 samples = IMPACT_PER_APRIL_FOOL_POST @ bins plt.hist(samples, bins=bins)As you can see, the spike around 0.1 indicates (assuming I'm reading this chart incorrectly) that the potential impact is relatively very high, compared to some other smaller spikes.A marketplace of ideasSome people have ideas for April Fool’s posts, and some people are good at writing but don’t have ideas.Let’s make these high impact connections between EA cofounders!ImplementationThe ideal platform will beA list of ideasEach idea has all the essential fields such as “title”, “long description”, “target audience”, “expected impact of that audience”, “squiggle model estimating the impact of the post given the post's quality”, “what is the joke you told that you’re most proud of (we use this as applicant background, but don't worry, we won't judge you)” and probably many more fields, the more the better, and we would like more ideas for fields below as this seems crucial for the project’s success.Of course, we will ask people not to spend no more than 1 hour filling out this form. (some people will be randomly assigned to the "45 minutes only" group, and we'll check which group performs better)We will send out a recursive feedback form to each applicant.The platform will support for comments, such as “I’d fund this post”, or “looking for a co-author”Emoji inclusiveness: If Zoomers use , Millennials will see , which is important to avoid miscommunication.User chatProfilesSub groups by topicA feed that users can scrollAlgorithms to optimize that feed based on the user’s preferences (open sourced, inspired by Twitter)A blue backgroundBut as an MVP, we’ll just use the comments to this post.Creating a critical massThis project will only succeed if all the following things will happen:The forum will change its font to Comic Sans (this step was easier than expected)Readers like you (yes you) will share their ideas for April Fool's posts here in the commentsThanks!This post was written quickly since we used the top EA tools for getting things done (having a due date), so any suggestions are welcome.Thanks Edo Arad for helping write this post. I'm sure he'd want to tell you that all mistakes are his. Edo could not immediately comment on this line I just added.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Yonatan Cale https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:45 None full 5453
A4DFeE5xNc933fyjt_NL_EA_EA EA - Honestly I’m just here for the drama by electroswing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Honestly I’m just here for the drama, published by electroswing on April 1, 2023 on The Effective Altruism Forum.I'm a young twenty-something who got into EA a couple of years ago. Back then I was really into the whole "learning object-level facts about EA” thing. I'd religiously listen to the 80,000 Hours podcast, read in detail about different interventions, both neartermist and longtermist, and generally just try my darndest to improve my understanding of how to do good effectively. Key to this voracious consumption was of course the EA Forum.Now? Lmao.It started with the SBF saga. Boy was the EA forum entertaining. The whole front page was full of upset people wildly speculating. So many rumors to sift through, so many spicy arguments to follow. The best parts of any post could be found by scrolling to the very bottom and unfolding highly downvoted comments. So much entertainment. Like reality TV except about my ingroup specifically.Then, you know the meme. EA has had a scandal of the week TM ever since. Castle? Hilarious watching people who don't understand logistics butt heads with people who don't value optics and frugality. The weird Tegmark neonazi thing? Absolutely incredible watching the comments turn on him and then pull back. Time and Bloomberg articles and the ensuing "I can fix EA in one blog post" follow-ups? Delicious. Bostrom and the fascinating case of the use-mention distinction? Yikes bro. Spicy takes and arguments hidden in the Lightcone closure announcement? Fantastic sequel to "The Vultures Are Coming".When it was announced the Community posts would appear in a separate section of the Forum, the little drama-hungry goblin in my brain was at first disappointed. Oh no! Maybe I'll accidentally click on a post about malaria instead! Then I realized I can simply upweigh Community posts by +100 and I'll never miss another scandal ever again.Now, I visit almost daily. I briefly skim the frontpage post titles. Maybe occasionally I'll stop to learn more about some new org or read the executive summary of a high-quality research report. But honestly? Most of the time I just scroll through looking for drama, and if I don't find it, I close the tab and get on with my day.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
electroswing https://forum.effectivealtruism.org/posts/A4DFeE5xNc933fyjt/honestly-i-m-just-here-for-the-drama Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Honestly I’m just here for the drama, published by electroswing on April 1, 2023 on The Effective Altruism Forum.I'm a young twenty-something who got into EA a couple of years ago. Back then I was really into the whole "learning object-level facts about EA” thing. I'd religiously listen to the 80,000 Hours podcast, read in detail about different interventions, both neartermist and longtermist, and generally just try my darndest to improve my understanding of how to do good effectively. Key to this voracious consumption was of course the EA Forum.Now? Lmao.It started with the SBF saga. Boy was the EA forum entertaining. The whole front page was full of upset people wildly speculating. So many rumors to sift through, so many spicy arguments to follow. The best parts of any post could be found by scrolling to the very bottom and unfolding highly downvoted comments. So much entertainment. Like reality TV except about my ingroup specifically.Then, you know the meme. EA has had a scandal of the week TM ever since. Castle? Hilarious watching people who don't understand logistics butt heads with people who don't value optics and frugality. The weird Tegmark neonazi thing? Absolutely incredible watching the comments turn on him and then pull back. Time and Bloomberg articles and the ensuing "I can fix EA in one blog post" follow-ups? Delicious. Bostrom and the fascinating case of the use-mention distinction? Yikes bro. Spicy takes and arguments hidden in the Lightcone closure announcement? Fantastic sequel to "The Vultures Are Coming".When it was announced the Community posts would appear in a separate section of the Forum, the little drama-hungry goblin in my brain was at first disappointed. Oh no! Maybe I'll accidentally click on a post about malaria instead! Then I realized I can simply upweigh Community posts by +100 and I'll never miss another scandal ever again.Now, I visit almost daily. I briefly skim the frontpage post titles. Maybe occasionally I'll stop to learn more about some new org or read the executive summary of a high-quality research report. But honestly? Most of the time I just scroll through looking for drama, and if I don't find it, I close the tab and get on with my day.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 01 Apr 2023 17:55:22 +0000 EA - Honestly I’m just here for the drama by electroswing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Honestly I’m just here for the drama, published by electroswing on April 1, 2023 on The Effective Altruism Forum.I'm a young twenty-something who got into EA a couple of years ago. Back then I was really into the whole "learning object-level facts about EA” thing. I'd religiously listen to the 80,000 Hours podcast, read in detail about different interventions, both neartermist and longtermist, and generally just try my darndest to improve my understanding of how to do good effectively. Key to this voracious consumption was of course the EA Forum.Now? Lmao.It started with the SBF saga. Boy was the EA forum entertaining. The whole front page was full of upset people wildly speculating. So many rumors to sift through, so many spicy arguments to follow. The best parts of any post could be found by scrolling to the very bottom and unfolding highly downvoted comments. So much entertainment. Like reality TV except about my ingroup specifically.Then, you know the meme. EA has had a scandal of the week TM ever since. Castle? Hilarious watching people who don't understand logistics butt heads with people who don't value optics and frugality. The weird Tegmark neonazi thing? Absolutely incredible watching the comments turn on him and then pull back. Time and Bloomberg articles and the ensuing "I can fix EA in one blog post" follow-ups? Delicious. Bostrom and the fascinating case of the use-mention distinction? Yikes bro. Spicy takes and arguments hidden in the Lightcone closure announcement? Fantastic sequel to "The Vultures Are Coming".When it was announced the Community posts would appear in a separate section of the Forum, the little drama-hungry goblin in my brain was at first disappointed. Oh no! Maybe I'll accidentally click on a post about malaria instead! Then I realized I can simply upweigh Community posts by +100 and I'll never miss another scandal ever again.Now, I visit almost daily. I briefly skim the frontpage post titles. Maybe occasionally I'll stop to learn more about some new org or read the executive summary of a high-quality research report. But honestly? Most of the time I just scroll through looking for drama, and if I don't find it, I close the tab and get on with my day.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Honestly I’m just here for the drama, published by electroswing on April 1, 2023 on The Effective Altruism Forum.I'm a young twenty-something who got into EA a couple of years ago. Back then I was really into the whole "learning object-level facts about EA” thing. I'd religiously listen to the 80,000 Hours podcast, read in detail about different interventions, both neartermist and longtermist, and generally just try my darndest to improve my understanding of how to do good effectively. Key to this voracious consumption was of course the EA Forum.Now? Lmao.It started with the SBF saga. Boy was the EA forum entertaining. The whole front page was full of upset people wildly speculating. So many rumors to sift through, so many spicy arguments to follow. The best parts of any post could be found by scrolling to the very bottom and unfolding highly downvoted comments. So much entertainment. Like reality TV except about my ingroup specifically.Then, you know the meme. EA has had a scandal of the week TM ever since. Castle? Hilarious watching people who don't understand logistics butt heads with people who don't value optics and frugality. The weird Tegmark neonazi thing? Absolutely incredible watching the comments turn on him and then pull back. Time and Bloomberg articles and the ensuing "I can fix EA in one blog post" follow-ups? Delicious. Bostrom and the fascinating case of the use-mention distinction? Yikes bro. Spicy takes and arguments hidden in the Lightcone closure announcement? Fantastic sequel to "The Vultures Are Coming".When it was announced the Community posts would appear in a separate section of the Forum, the little drama-hungry goblin in my brain was at first disappointed. Oh no! Maybe I'll accidentally click on a post about malaria instead! Then I realized I can simply upweigh Community posts by +100 and I'll never miss another scandal ever again.Now, I visit almost daily. I briefly skim the frontpage post titles. Maybe occasionally I'll stop to learn more about some new org or read the executive summary of a high-quality research report. But honestly? Most of the time I just scroll through looking for drama, and if I don't find it, I close the tab and get on with my day.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
electroswing https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:24 None full 5454
kwcSnxuWXk6BRuvhQ_NL_EA_EA EA - The EA Relationship Escalator by ProbablyGoodCouple Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Relationship Escalator, published by ProbablyGoodCouple on April 1, 2023 on The Effective Altruism Forum.When dating, you may typically encounter the relationship escalator, which is a social script for how relationships are supposed to unfold that is oriented around default societal expectations. Potential partners are supposed to follow a progressive set of steps and achieve specific visible milestones towards a clear goal.This escalator frequently looks like:Meet on a dating appGo on a few datesHold hands, kissBecome romantically exclusiveFall in loveMeet the parentsHave a long weekend togetherVacation togetherMove in togetherGet engagedGet marriedBuy a houseHave kidsHave a dogOf course the steps are approximate and may not unfold literally in this order, but it should be pretty close. Moreover, where you are on this escalator and how long it has been since the previous step is often used to judge whether a relationship is sufficiently significant, serious, good, healthy, committed, etc. and to tell whether the relationship is worth pursuing.Effective altruists, as a community though, are rarely mainstream like this. This suggests that it may be helpful for EAs looking to date other EAs to have a very different and more customized social script to follow to judge how their unique EA relationship is unfolding.I recommend the EA relationship escalator look like this:Meet at EA Global but definitely do not flirtRe-meet at a house party where flirting is allowedComment on their EA Forum postsReach out on their “Date Me” docGo on a few datesBecome awkwardly personally and professionally intertwinedThoroughly assess together the conflicts of interest inherent in your relationshipTalk to your HR departmentTalk to their HR departmentTalk to the CEA Community Health teamMake a spreadsheet together to thoroughly quantify the relevant risks and benefits of your romantic relationshipDecide to go for itFinally hold hands, kissSynchronize your pomodoro schedules togetherCreate a shared Complice coworking room / Cuckoo coworking room for just you twoTake the same personality tests and quantify each otherIntroduce them to your polycule, hope they get alongFall in loveMove in to the same EA group houseSynchronize your GWWC pledge donation schedulesHave your relationship details exposed by a burner account on the EA ForumHave the EA Forum moderator team encode your relationship details in rot13Meet the parentsVacation together, but exclusively touring various EA hubsDecide to fight the same x-riskRaise free range chickens together, start an animal sanctuaryCreate a Manifold market around whether or not you will get marriedGet engagedGet marriedMaximize utilityHopefully this EA Relationship Escalator helps give young EAs a social script to be able to find love with each other and be able to understand where their relationship currently is at.After all, you may spend 80,000 hours of your life in a good marriage so it’s important to get this right!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ProbablyGoodCouple https://forum.effectivealtruism.org/posts/kwcSnxuWXk6BRuvhQ/the-ea-relationship-escalator Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Relationship Escalator, published by ProbablyGoodCouple on April 1, 2023 on The Effective Altruism Forum.When dating, you may typically encounter the relationship escalator, which is a social script for how relationships are supposed to unfold that is oriented around default societal expectations. Potential partners are supposed to follow a progressive set of steps and achieve specific visible milestones towards a clear goal.This escalator frequently looks like:Meet on a dating appGo on a few datesHold hands, kissBecome romantically exclusiveFall in loveMeet the parentsHave a long weekend togetherVacation togetherMove in togetherGet engagedGet marriedBuy a houseHave kidsHave a dogOf course the steps are approximate and may not unfold literally in this order, but it should be pretty close. Moreover, where you are on this escalator and how long it has been since the previous step is often used to judge whether a relationship is sufficiently significant, serious, good, healthy, committed, etc. and to tell whether the relationship is worth pursuing.Effective altruists, as a community though, are rarely mainstream like this. This suggests that it may be helpful for EAs looking to date other EAs to have a very different and more customized social script to follow to judge how their unique EA relationship is unfolding.I recommend the EA relationship escalator look like this:Meet at EA Global but definitely do not flirtRe-meet at a house party where flirting is allowedComment on their EA Forum postsReach out on their “Date Me” docGo on a few datesBecome awkwardly personally and professionally intertwinedThoroughly assess together the conflicts of interest inherent in your relationshipTalk to your HR departmentTalk to their HR departmentTalk to the CEA Community Health teamMake a spreadsheet together to thoroughly quantify the relevant risks and benefits of your romantic relationshipDecide to go for itFinally hold hands, kissSynchronize your pomodoro schedules togetherCreate a shared Complice coworking room / Cuckoo coworking room for just you twoTake the same personality tests and quantify each otherIntroduce them to your polycule, hope they get alongFall in loveMove in to the same EA group houseSynchronize your GWWC pledge donation schedulesHave your relationship details exposed by a burner account on the EA ForumHave the EA Forum moderator team encode your relationship details in rot13Meet the parentsVacation together, but exclusively touring various EA hubsDecide to fight the same x-riskRaise free range chickens together, start an animal sanctuaryCreate a Manifold market around whether or not you will get marriedGet engagedGet marriedMaximize utilityHopefully this EA Relationship Escalator helps give young EAs a social script to be able to find love with each other and be able to understand where their relationship currently is at.After all, you may spend 80,000 hours of your life in a good marriage so it’s important to get this right!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 01 Apr 2023 17:23:56 +0000 EA - The EA Relationship Escalator by ProbablyGoodCouple Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Relationship Escalator, published by ProbablyGoodCouple on April 1, 2023 on The Effective Altruism Forum.When dating, you may typically encounter the relationship escalator, which is a social script for how relationships are supposed to unfold that is oriented around default societal expectations. Potential partners are supposed to follow a progressive set of steps and achieve specific visible milestones towards a clear goal.This escalator frequently looks like:Meet on a dating appGo on a few datesHold hands, kissBecome romantically exclusiveFall in loveMeet the parentsHave a long weekend togetherVacation togetherMove in togetherGet engagedGet marriedBuy a houseHave kidsHave a dogOf course the steps are approximate and may not unfold literally in this order, but it should be pretty close. Moreover, where you are on this escalator and how long it has been since the previous step is often used to judge whether a relationship is sufficiently significant, serious, good, healthy, committed, etc. and to tell whether the relationship is worth pursuing.Effective altruists, as a community though, are rarely mainstream like this. This suggests that it may be helpful for EAs looking to date other EAs to have a very different and more customized social script to follow to judge how their unique EA relationship is unfolding.I recommend the EA relationship escalator look like this:Meet at EA Global but definitely do not flirtRe-meet at a house party where flirting is allowedComment on their EA Forum postsReach out on their “Date Me” docGo on a few datesBecome awkwardly personally and professionally intertwinedThoroughly assess together the conflicts of interest inherent in your relationshipTalk to your HR departmentTalk to their HR departmentTalk to the CEA Community Health teamMake a spreadsheet together to thoroughly quantify the relevant risks and benefits of your romantic relationshipDecide to go for itFinally hold hands, kissSynchronize your pomodoro schedules togetherCreate a shared Complice coworking room / Cuckoo coworking room for just you twoTake the same personality tests and quantify each otherIntroduce them to your polycule, hope they get alongFall in loveMove in to the same EA group houseSynchronize your GWWC pledge donation schedulesHave your relationship details exposed by a burner account on the EA ForumHave the EA Forum moderator team encode your relationship details in rot13Meet the parentsVacation together, but exclusively touring various EA hubsDecide to fight the same x-riskRaise free range chickens together, start an animal sanctuaryCreate a Manifold market around whether or not you will get marriedGet engagedGet marriedMaximize utilityHopefully this EA Relationship Escalator helps give young EAs a social script to be able to find love with each other and be able to understand where their relationship currently is at.After all, you may spend 80,000 hours of your life in a good marriage so it’s important to get this right!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Relationship Escalator, published by ProbablyGoodCouple on April 1, 2023 on The Effective Altruism Forum.When dating, you may typically encounter the relationship escalator, which is a social script for how relationships are supposed to unfold that is oriented around default societal expectations. Potential partners are supposed to follow a progressive set of steps and achieve specific visible milestones towards a clear goal.This escalator frequently looks like:Meet on a dating appGo on a few datesHold hands, kissBecome romantically exclusiveFall in loveMeet the parentsHave a long weekend togetherVacation togetherMove in togetherGet engagedGet marriedBuy a houseHave kidsHave a dogOf course the steps are approximate and may not unfold literally in this order, but it should be pretty close. Moreover, where you are on this escalator and how long it has been since the previous step is often used to judge whether a relationship is sufficiently significant, serious, good, healthy, committed, etc. and to tell whether the relationship is worth pursuing.Effective altruists, as a community though, are rarely mainstream like this. This suggests that it may be helpful for EAs looking to date other EAs to have a very different and more customized social script to follow to judge how their unique EA relationship is unfolding.I recommend the EA relationship escalator look like this:Meet at EA Global but definitely do not flirtRe-meet at a house party where flirting is allowedComment on their EA Forum postsReach out on their “Date Me” docGo on a few datesBecome awkwardly personally and professionally intertwinedThoroughly assess together the conflicts of interest inherent in your relationshipTalk to your HR departmentTalk to their HR departmentTalk to the CEA Community Health teamMake a spreadsheet together to thoroughly quantify the relevant risks and benefits of your romantic relationshipDecide to go for itFinally hold hands, kissSynchronize your pomodoro schedules togetherCreate a shared Complice coworking room / Cuckoo coworking room for just you twoTake the same personality tests and quantify each otherIntroduce them to your polycule, hope they get alongFall in loveMove in to the same EA group houseSynchronize your GWWC pledge donation schedulesHave your relationship details exposed by a burner account on the EA ForumHave the EA Forum moderator team encode your relationship details in rot13Meet the parentsVacation together, but exclusively touring various EA hubsDecide to fight the same x-riskRaise free range chickens together, start an animal sanctuaryCreate a Manifold market around whether or not you will get marriedGet engagedGet marriedMaximize utilityHopefully this EA Relationship Escalator helps give young EAs a social script to be able to find love with each other and be able to understand where their relationship currently is at.After all, you may spend 80,000 hours of your life in a good marriage so it’s important to get this right!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ProbablyGoodCouple https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:16 None full 5452
GwstEkNGfNHstnXGc_NL_EA_EA EA - Cause Exploration: Sending Billionaires to Space by Rasool Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause Exploration: Sending Billionaires to Space, published by Rasool on April 1, 2023 on The Effective Altruism Forum.TL:DR: +➡️️=➡️‍⚕️Today, I present to you a groundbreaking proposal with the potential to create a paradigm shift in philanthropy and global stewardship. The mission: to send 10 of the world's most affluent individuals into space to experience the "overview effect," a powerful cognitive transformation that could inspire them to become agents of change for our planet and its inhabitants.The "overview effect" is a phenomenon experienced by astronauts during spaceflight, often while observing Earth from orbit. This profound sense of interconnectedness and the realisation of our planet's vulnerability have been known to inspire a heightened sense of responsibility towards preserving and protecting Earth.Financial Implications and Potential ReturnsA commercial crew launch to low Earth orbit using SpaceX's Crew Dragon costs approximately $55 million per seat. Thus, sending 10 billionaires into space would require an investment of $550 million.Assuming the "overview effect" encourages these individuals to increase their annual philanthropic efforts by a modest 1%, the impact could be staggering. With a combined net worth of roughly $1 trillion, a 1% increase in annual giving would result in an additional $10 billion in charitable contributions each year, an 18-fold ROI.Importance, Tractability, and NeglectednessImportance: This proposal aims to foster a significant shift in the philanthropic mindset of some of the world's wealthiest individuals by leveraging the "overview effect." This cognitive transformation could result in increased charitable contributions towards global challenges such as climate change, global health and development, and extinction risks.Tractability: The proposal is feasible given the current advancements in commercial space travel, such as SpaceX's Crew Dragon. Although the initial investment is substantial, the potential for a substantial return on investment through increased philanthropy provides a strong incentive to pursue the project.Neglectedness: While the concept of the "overview effect" is known within the space community, its potential impact on the philanthropic behaviour of billionaires has not been widely explored. This proposal brings forward a unique and innovative approach to encouraging large-scale philanthropy, addressing a relatively neglected aspect of global problem-solving.ConclusionThe proposal to send billionaires into space to experience the "overview effect" is firmly grounded in both scientific understanding and financial analysis. The initial investment of $550 million carries the potential to generate a substantial increase in philanthropic giving, which could dramatically impact our world for the better.55,000,000 x 10Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rasool https://forum.effectivealtruism.org/posts/GwstEkNGfNHstnXGc/cause-exploration-sending-billionaires-to-space Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause Exploration: Sending Billionaires to Space, published by Rasool on April 1, 2023 on The Effective Altruism Forum.TL:DR: +➡️️=➡️‍⚕️Today, I present to you a groundbreaking proposal with the potential to create a paradigm shift in philanthropy and global stewardship. The mission: to send 10 of the world's most affluent individuals into space to experience the "overview effect," a powerful cognitive transformation that could inspire them to become agents of change for our planet and its inhabitants.The "overview effect" is a phenomenon experienced by astronauts during spaceflight, often while observing Earth from orbit. This profound sense of interconnectedness and the realisation of our planet's vulnerability have been known to inspire a heightened sense of responsibility towards preserving and protecting Earth.Financial Implications and Potential ReturnsA commercial crew launch to low Earth orbit using SpaceX's Crew Dragon costs approximately $55 million per seat. Thus, sending 10 billionaires into space would require an investment of $550 million.Assuming the "overview effect" encourages these individuals to increase their annual philanthropic efforts by a modest 1%, the impact could be staggering. With a combined net worth of roughly $1 trillion, a 1% increase in annual giving would result in an additional $10 billion in charitable contributions each year, an 18-fold ROI.Importance, Tractability, and NeglectednessImportance: This proposal aims to foster a significant shift in the philanthropic mindset of some of the world's wealthiest individuals by leveraging the "overview effect." This cognitive transformation could result in increased charitable contributions towards global challenges such as climate change, global health and development, and extinction risks.Tractability: The proposal is feasible given the current advancements in commercial space travel, such as SpaceX's Crew Dragon. Although the initial investment is substantial, the potential for a substantial return on investment through increased philanthropy provides a strong incentive to pursue the project.Neglectedness: While the concept of the "overview effect" is known within the space community, its potential impact on the philanthropic behaviour of billionaires has not been widely explored. This proposal brings forward a unique and innovative approach to encouraging large-scale philanthropy, addressing a relatively neglected aspect of global problem-solving.ConclusionThe proposal to send billionaires into space to experience the "overview effect" is firmly grounded in both scientific understanding and financial analysis. The initial investment of $550 million carries the potential to generate a substantial increase in philanthropic giving, which could dramatically impact our world for the better.55,000,000 x 10Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 01 Apr 2023 16:36:30 +0000 EA - Cause Exploration: Sending Billionaires to Space by Rasool Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause Exploration: Sending Billionaires to Space, published by Rasool on April 1, 2023 on The Effective Altruism Forum.TL:DR: +➡️️=➡️‍⚕️Today, I present to you a groundbreaking proposal with the potential to create a paradigm shift in philanthropy and global stewardship. The mission: to send 10 of the world's most affluent individuals into space to experience the "overview effect," a powerful cognitive transformation that could inspire them to become agents of change for our planet and its inhabitants.The "overview effect" is a phenomenon experienced by astronauts during spaceflight, often while observing Earth from orbit. This profound sense of interconnectedness and the realisation of our planet's vulnerability have been known to inspire a heightened sense of responsibility towards preserving and protecting Earth.Financial Implications and Potential ReturnsA commercial crew launch to low Earth orbit using SpaceX's Crew Dragon costs approximately $55 million per seat. Thus, sending 10 billionaires into space would require an investment of $550 million.Assuming the "overview effect" encourages these individuals to increase their annual philanthropic efforts by a modest 1%, the impact could be staggering. With a combined net worth of roughly $1 trillion, a 1% increase in annual giving would result in an additional $10 billion in charitable contributions each year, an 18-fold ROI.Importance, Tractability, and NeglectednessImportance: This proposal aims to foster a significant shift in the philanthropic mindset of some of the world's wealthiest individuals by leveraging the "overview effect." This cognitive transformation could result in increased charitable contributions towards global challenges such as climate change, global health and development, and extinction risks.Tractability: The proposal is feasible given the current advancements in commercial space travel, such as SpaceX's Crew Dragon. Although the initial investment is substantial, the potential for a substantial return on investment through increased philanthropy provides a strong incentive to pursue the project.Neglectedness: While the concept of the "overview effect" is known within the space community, its potential impact on the philanthropic behaviour of billionaires has not been widely explored. This proposal brings forward a unique and innovative approach to encouraging large-scale philanthropy, addressing a relatively neglected aspect of global problem-solving.ConclusionThe proposal to send billionaires into space to experience the "overview effect" is firmly grounded in both scientific understanding and financial analysis. The initial investment of $550 million carries the potential to generate a substantial increase in philanthropic giving, which could dramatically impact our world for the better.55,000,000 x 10Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause Exploration: Sending Billionaires to Space, published by Rasool on April 1, 2023 on The Effective Altruism Forum.TL:DR: +➡️️=➡️‍⚕️Today, I present to you a groundbreaking proposal with the potential to create a paradigm shift in philanthropy and global stewardship. The mission: to send 10 of the world's most affluent individuals into space to experience the "overview effect," a powerful cognitive transformation that could inspire them to become agents of change for our planet and its inhabitants.The "overview effect" is a phenomenon experienced by astronauts during spaceflight, often while observing Earth from orbit. This profound sense of interconnectedness and the realisation of our planet's vulnerability have been known to inspire a heightened sense of responsibility towards preserving and protecting Earth.Financial Implications and Potential ReturnsA commercial crew launch to low Earth orbit using SpaceX's Crew Dragon costs approximately $55 million per seat. Thus, sending 10 billionaires into space would require an investment of $550 million.Assuming the "overview effect" encourages these individuals to increase their annual philanthropic efforts by a modest 1%, the impact could be staggering. With a combined net worth of roughly $1 trillion, a 1% increase in annual giving would result in an additional $10 billion in charitable contributions each year, an 18-fold ROI.Importance, Tractability, and NeglectednessImportance: This proposal aims to foster a significant shift in the philanthropic mindset of some of the world's wealthiest individuals by leveraging the "overview effect." This cognitive transformation could result in increased charitable contributions towards global challenges such as climate change, global health and development, and extinction risks.Tractability: The proposal is feasible given the current advancements in commercial space travel, such as SpaceX's Crew Dragon. Although the initial investment is substantial, the potential for a substantial return on investment through increased philanthropy provides a strong incentive to pursue the project.Neglectedness: While the concept of the "overview effect" is known within the space community, its potential impact on the philanthropic behaviour of billionaires has not been widely explored. This proposal brings forward a unique and innovative approach to encouraging large-scale philanthropy, addressing a relatively neglected aspect of global problem-solving.ConclusionThe proposal to send billionaires into space to experience the "overview effect" is firmly grounded in both scientific understanding and financial analysis. The initial investment of $550 million carries the potential to generate a substantial increase in philanthropic giving, which could dramatically impact our world for the better.55,000,000 x 10Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rasool https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:55 None full 5458
Hu6s5nF9WCektwwur_NL_EA_EA EA - Announcing drama curfews on the Forum by Lorenzo Buonanno Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing drama curfews on the Forum, published by Lorenzo Buonanno on April 1, 2023 on The Effective Altruism Forum.The moderation team is small, limited in capacity, and quite tired. We often end up having to suddenly work at odd hours, and this is bad for our productivity and ability to focus on our other (usually main) projects.So we’re introducing a schedule: Forum users are henceforth not allowed to come even close to violating the Forum norms — or significantly increase the probability that others violate Forum norms by posting drama-prone content — on certain days and hours. Moderation action during the curfew will be significantly more tyrannical.Here’s the schedule (“drama” here is used as a shorthand for “posting or potentially causing norm-violating or close-to-the-line discussions”):Mondays-Fridays: 10 a.m. GMT - 10 p.m. GMT is ok. Besides those times, though, no drama.Saturdays: No drama at all.Sundays: You can post drama from 1 p.m. to 8 p.m. GMT.Please also respect the following holidays:All possible versions of New Year’s Eve and New Year’s DayTop 5 holidays in any of the top 10 most popular religionsSmallpox Eradication DayPetrov DayValentine’s Day(You can find a calendar that lists these here.)Thank you all in advance!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lorenzo Buonanno https://forum.effectivealtruism.org/posts/Hu6s5nF9WCektwwur/announcing-drama-curfews-on-the-forum Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing drama curfews on the Forum, published by Lorenzo Buonanno on April 1, 2023 on The Effective Altruism Forum.The moderation team is small, limited in capacity, and quite tired. We often end up having to suddenly work at odd hours, and this is bad for our productivity and ability to focus on our other (usually main) projects.So we’re introducing a schedule: Forum users are henceforth not allowed to come even close to violating the Forum norms — or significantly increase the probability that others violate Forum norms by posting drama-prone content — on certain days and hours. Moderation action during the curfew will be significantly more tyrannical.Here’s the schedule (“drama” here is used as a shorthand for “posting or potentially causing norm-violating or close-to-the-line discussions”):Mondays-Fridays: 10 a.m. GMT - 10 p.m. GMT is ok. Besides those times, though, no drama.Saturdays: No drama at all.Sundays: You can post drama from 1 p.m. to 8 p.m. GMT.Please also respect the following holidays:All possible versions of New Year’s Eve and New Year’s DayTop 5 holidays in any of the top 10 most popular religionsSmallpox Eradication DayPetrov DayValentine’s Day(You can find a calendar that lists these here.)Thank you all in advance!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 01 Apr 2023 15:27:28 +0000 EA - Announcing drama curfews on the Forum by Lorenzo Buonanno Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing drama curfews on the Forum, published by Lorenzo Buonanno on April 1, 2023 on The Effective Altruism Forum.The moderation team is small, limited in capacity, and quite tired. We often end up having to suddenly work at odd hours, and this is bad for our productivity and ability to focus on our other (usually main) projects.So we’re introducing a schedule: Forum users are henceforth not allowed to come even close to violating the Forum norms — or significantly increase the probability that others violate Forum norms by posting drama-prone content — on certain days and hours. Moderation action during the curfew will be significantly more tyrannical.Here’s the schedule (“drama” here is used as a shorthand for “posting or potentially causing norm-violating or close-to-the-line discussions”):Mondays-Fridays: 10 a.m. GMT - 10 p.m. GMT is ok. Besides those times, though, no drama.Saturdays: No drama at all.Sundays: You can post drama from 1 p.m. to 8 p.m. GMT.Please also respect the following holidays:All possible versions of New Year’s Eve and New Year’s DayTop 5 holidays in any of the top 10 most popular religionsSmallpox Eradication DayPetrov DayValentine’s Day(You can find a calendar that lists these here.)Thank you all in advance!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing drama curfews on the Forum, published by Lorenzo Buonanno on April 1, 2023 on The Effective Altruism Forum.The moderation team is small, limited in capacity, and quite tired. We often end up having to suddenly work at odd hours, and this is bad for our productivity and ability to focus on our other (usually main) projects.So we’re introducing a schedule: Forum users are henceforth not allowed to come even close to violating the Forum norms — or significantly increase the probability that others violate Forum norms by posting drama-prone content — on certain days and hours. Moderation action during the curfew will be significantly more tyrannical.Here’s the schedule (“drama” here is used as a shorthand for “posting or potentially causing norm-violating or close-to-the-line discussions”):Mondays-Fridays: 10 a.m. GMT - 10 p.m. GMT is ok. Besides those times, though, no drama.Saturdays: No drama at all.Sundays: You can post drama from 1 p.m. to 8 p.m. GMT.Please also respect the following holidays:All possible versions of New Year’s Eve and New Year’s DayTop 5 holidays in any of the top 10 most popular religionsSmallpox Eradication DayPetrov DayValentine’s Day(You can find a calendar that lists these here.)Thank you all in advance!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lorenzo Buonanno https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:36 None full 5455
wxLbhXXLKkgCKEmAh_NL_EA_EA EA - Meetup: EA Burner Accounts Anonymous by BurnerMeetupBurner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meetup: EA Burner Accounts Anonymous, published by BurnerMeetupBurner on April 1, 2023 on The Effective Altruism Forum.Do you think the amount of burner accounts is too high? What if you had a chance to directly talk to some of EA’s most prominent burners and bring your concerns about their concerns directly to them?Various burner accounts are getting together to host a meetup.Have you ever wanted to meet the face behind Whistleblower3, Whistleblower9, Whistleblower10, or Whistleblower67? What about BurnerAcct or anotherEAonaburner? Want to meet all the concerned EAs behind ConcernedEAs?Activities will include:Creating a taxonomy of burner accountsMore burner explainers from BurnerExplainerFinally critiquing everyone without fearFinally reveal the identity behind who killed JFKInvestigative journalism: Is Bostrom behind BostromAnonAccount?All communication will be done exclusively in the Navajo language, encrypted via ADFGVX, then through the Enigma machine, and finally via AES-256. The communication will then be sent via smoke signals that are sent via Signal.This meetup will take place on April 1 in a remote undisclosed location for security purposes.Plus we will finally reveal the shocking true identity of one of the most prolific burner accounts, Nathan Young!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
BurnerMeetupBurner https://forum.effectivealtruism.org/posts/wxLbhXXLKkgCKEmAh/meetup-ea-burner-accounts-anonymous Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meetup: EA Burner Accounts Anonymous, published by BurnerMeetupBurner on April 1, 2023 on The Effective Altruism Forum.Do you think the amount of burner accounts is too high? What if you had a chance to directly talk to some of EA’s most prominent burners and bring your concerns about their concerns directly to them?Various burner accounts are getting together to host a meetup.Have you ever wanted to meet the face behind Whistleblower3, Whistleblower9, Whistleblower10, or Whistleblower67? What about BurnerAcct or anotherEAonaburner? Want to meet all the concerned EAs behind ConcernedEAs?Activities will include:Creating a taxonomy of burner accountsMore burner explainers from BurnerExplainerFinally critiquing everyone without fearFinally reveal the identity behind who killed JFKInvestigative journalism: Is Bostrom behind BostromAnonAccount?All communication will be done exclusively in the Navajo language, encrypted via ADFGVX, then through the Enigma machine, and finally via AES-256. The communication will then be sent via smoke signals that are sent via Signal.This meetup will take place on April 1 in a remote undisclosed location for security purposes.Plus we will finally reveal the shocking true identity of one of the most prolific burner accounts, Nathan Young!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 01 Apr 2023 15:16:45 +0000 EA - Meetup: EA Burner Accounts Anonymous by BurnerMeetupBurner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meetup: EA Burner Accounts Anonymous, published by BurnerMeetupBurner on April 1, 2023 on The Effective Altruism Forum.Do you think the amount of burner accounts is too high? What if you had a chance to directly talk to some of EA’s most prominent burners and bring your concerns about their concerns directly to them?Various burner accounts are getting together to host a meetup.Have you ever wanted to meet the face behind Whistleblower3, Whistleblower9, Whistleblower10, or Whistleblower67? What about BurnerAcct or anotherEAonaburner? Want to meet all the concerned EAs behind ConcernedEAs?Activities will include:Creating a taxonomy of burner accountsMore burner explainers from BurnerExplainerFinally critiquing everyone without fearFinally reveal the identity behind who killed JFKInvestigative journalism: Is Bostrom behind BostromAnonAccount?All communication will be done exclusively in the Navajo language, encrypted via ADFGVX, then through the Enigma machine, and finally via AES-256. The communication will then be sent via smoke signals that are sent via Signal.This meetup will take place on April 1 in a remote undisclosed location for security purposes.Plus we will finally reveal the shocking true identity of one of the most prolific burner accounts, Nathan Young!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meetup: EA Burner Accounts Anonymous, published by BurnerMeetupBurner on April 1, 2023 on The Effective Altruism Forum.Do you think the amount of burner accounts is too high? What if you had a chance to directly talk to some of EA’s most prominent burners and bring your concerns about their concerns directly to them?Various burner accounts are getting together to host a meetup.Have you ever wanted to meet the face behind Whistleblower3, Whistleblower9, Whistleblower10, or Whistleblower67? What about BurnerAcct or anotherEAonaburner? Want to meet all the concerned EAs behind ConcernedEAs?Activities will include:Creating a taxonomy of burner accountsMore burner explainers from BurnerExplainerFinally critiquing everyone without fearFinally reveal the identity behind who killed JFKInvestigative journalism: Is Bostrom behind BostromAnonAccount?All communication will be done exclusively in the Navajo language, encrypted via ADFGVX, then through the Enigma machine, and finally via AES-256. The communication will then be sent via smoke signals that are sent via Signal.This meetup will take place on April 1 in a remote undisclosed location for security purposes.Plus we will finally reveal the shocking true identity of one of the most prolific burner accounts, Nathan Young!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
BurnerMeetupBurner https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:35 None full 5456
BmuArTMpGtbE3oYvg_NL_EA_EA EA - April's Fools Posts Can Be Short by EdoArad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: April's Fools Posts Can Be Short, published by EdoArad on April 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
EdoArad https://forum.effectivealtruism.org/posts/BmuArTMpGtbE3oYvg/april-s-fools-posts-can-be-short Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: April's Fools Posts Can Be Short, published by EdoArad on April 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 01 Apr 2023 14:14:10 +0000 EA - April's Fools Posts Can Be Short by EdoArad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: April's Fools Posts Can Be Short, published by EdoArad on April 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: April's Fools Posts Can Be Short, published by EdoArad on April 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
EdoArad https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:25 None full 5457
BhzqvnaZiqfsssquM_NL_EA_EA EA - New 80,000 Hours feature: Listen to audio versions of our podcast transcripts by Bella Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New 80,000 Hours feature: Listen to audio versions of our podcast transcripts, published by Bella on April 1, 2023 on The Effective Altruism Forum.I’m delighted to announce the launch of a new feature on the 80,000 Hours website: as of today, you can now listen to state-of-the-art text-to-speech audio versions of all of our podcast transcripts!We're including the entire 146-episode back catalogue at launch — that's over 300 hours of listening material for our audience to enjoy whenever they choose.We hope that this new feature can bring our podcast transcripts to an all new audience of audiophiles and those who prefer listening to content rather than reading it.And since it also works on mobile, you can now listen to our podcast transcripts while doing the dishes, walking the dog, or on your daily commute!I'm really excited about this inititative making our podcast transcripts more widely accessible, and helping spread important ideas about the world's most pressing problems and what you can do to solve them.Feel free to ask any questions about our new feature in the comments.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Bella https://forum.effectivealtruism.org/posts/BhzqvnaZiqfsssquM/new-80-000-hours-feature-listen-to-audio-versions-of-our Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New 80,000 Hours feature: Listen to audio versions of our podcast transcripts, published by Bella on April 1, 2023 on The Effective Altruism Forum.I’m delighted to announce the launch of a new feature on the 80,000 Hours website: as of today, you can now listen to state-of-the-art text-to-speech audio versions of all of our podcast transcripts!We're including the entire 146-episode back catalogue at launch — that's over 300 hours of listening material for our audience to enjoy whenever they choose.We hope that this new feature can bring our podcast transcripts to an all new audience of audiophiles and those who prefer listening to content rather than reading it.And since it also works on mobile, you can now listen to our podcast transcripts while doing the dishes, walking the dog, or on your daily commute!I'm really excited about this inititative making our podcast transcripts more widely accessible, and helping spread important ideas about the world's most pressing problems and what you can do to solve them.Feel free to ask any questions about our new feature in the comments.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 01 Apr 2023 07:57:03 +0000 EA - New 80,000 Hours feature: Listen to audio versions of our podcast transcripts by Bella Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New 80,000 Hours feature: Listen to audio versions of our podcast transcripts, published by Bella on April 1, 2023 on The Effective Altruism Forum.I’m delighted to announce the launch of a new feature on the 80,000 Hours website: as of today, you can now listen to state-of-the-art text-to-speech audio versions of all of our podcast transcripts!We're including the entire 146-episode back catalogue at launch — that's over 300 hours of listening material for our audience to enjoy whenever they choose.We hope that this new feature can bring our podcast transcripts to an all new audience of audiophiles and those who prefer listening to content rather than reading it.And since it also works on mobile, you can now listen to our podcast transcripts while doing the dishes, walking the dog, or on your daily commute!I'm really excited about this inititative making our podcast transcripts more widely accessible, and helping spread important ideas about the world's most pressing problems and what you can do to solve them.Feel free to ask any questions about our new feature in the comments.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New 80,000 Hours feature: Listen to audio versions of our podcast transcripts, published by Bella on April 1, 2023 on The Effective Altruism Forum.I’m delighted to announce the launch of a new feature on the 80,000 Hours website: as of today, you can now listen to state-of-the-art text-to-speech audio versions of all of our podcast transcripts!We're including the entire 146-episode back catalogue at launch — that's over 300 hours of listening material for our audience to enjoy whenever they choose.We hope that this new feature can bring our podcast transcripts to an all new audience of audiophiles and those who prefer listening to content rather than reading it.And since it also works on mobile, you can now listen to our podcast transcripts while doing the dishes, walking the dog, or on your daily commute!I'm really excited about this inititative making our podcast transcripts more widely accessible, and helping spread important ideas about the world's most pressing problems and what you can do to solve them.Feel free to ask any questions about our new feature in the comments.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Bella https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:17 None full 5432
KPijJRnQCQvSu5AA2_NL_EA_EA EA - Iterating on our redesign (Forum update April 2023) by Sarah Cheng Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Iterating on our redesign (Forum update April 2023), published by Sarah Cheng on April 1, 2023 on The Effective Altruism Forum.Thank you to everyone who gave us feedback on our recent redesign of the Forum! We’ve looked closely at your feedback and our findings from interviews with new users.Two clear themes emerged:New users found the site too serious and intimidatingExperienced users missed the amount of “character” in the original designAfter careful consideration, we’ve decided to change our sans-serif font from Inter to Comic Sans. We believe that this friendly and distinctive font will address the concerns of both new and experienced users, and keep users engaged for longer.We’re confident that all of our users will be happy with this font change. We welcome you to contact us with your praise, or post it in the comments below.However, we realize that we made this change without any heads-up, and that can feel jarring. If you’d like to ease the transition, we’ve added an option in your account settings to switch back to Inter for today only.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sarah Cheng https://forum.effectivealtruism.org/posts/KPijJRnQCQvSu5AA2/iterating-on-our-redesign-forum-update-april-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Iterating on our redesign (Forum update April 2023), published by Sarah Cheng on April 1, 2023 on The Effective Altruism Forum.Thank you to everyone who gave us feedback on our recent redesign of the Forum! We’ve looked closely at your feedback and our findings from interviews with new users.Two clear themes emerged:New users found the site too serious and intimidatingExperienced users missed the amount of “character” in the original designAfter careful consideration, we’ve decided to change our sans-serif font from Inter to Comic Sans. We believe that this friendly and distinctive font will address the concerns of both new and experienced users, and keep users engaged for longer.We’re confident that all of our users will be happy with this font change. We welcome you to contact us with your praise, or post it in the comments below.However, we realize that we made this change without any heads-up, and that can feel jarring. If you’d like to ease the transition, we’ve added an option in your account settings to switch back to Inter for today only.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 01 Apr 2023 07:09:56 +0000 EA - Iterating on our redesign (Forum update April 2023) by Sarah Cheng Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Iterating on our redesign (Forum update April 2023), published by Sarah Cheng on April 1, 2023 on The Effective Altruism Forum.Thank you to everyone who gave us feedback on our recent redesign of the Forum! We’ve looked closely at your feedback and our findings from interviews with new users.Two clear themes emerged:New users found the site too serious and intimidatingExperienced users missed the amount of “character” in the original designAfter careful consideration, we’ve decided to change our sans-serif font from Inter to Comic Sans. We believe that this friendly and distinctive font will address the concerns of both new and experienced users, and keep users engaged for longer.We’re confident that all of our users will be happy with this font change. We welcome you to contact us with your praise, or post it in the comments below.However, we realize that we made this change without any heads-up, and that can feel jarring. If you’d like to ease the transition, we’ve added an option in your account settings to switch back to Inter for today only.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Iterating on our redesign (Forum update April 2023), published by Sarah Cheng on April 1, 2023 on The Effective Altruism Forum.Thank you to everyone who gave us feedback on our recent redesign of the Forum! We’ve looked closely at your feedback and our findings from interviews with new users.Two clear themes emerged:New users found the site too serious and intimidatingExperienced users missed the amount of “character” in the original designAfter careful consideration, we’ve decided to change our sans-serif font from Inter to Comic Sans. We believe that this friendly and distinctive font will address the concerns of both new and experienced users, and keep users engaged for longer.We’re confident that all of our users will be happy with this font change. We welcome you to contact us with your praise, or post it in the comments below.However, we realize that we made this change without any heads-up, and that can feel jarring. If you’d like to ease the transition, we’ve added an option in your account settings to switch back to Inter for today only.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sarah Cheng https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:17 None full 5434
vSgAMZCpJoYsNxvD4_NL_EA_EA EA - Announcing the April 1st Purchase of Bodhi Kosher Vegetarian Restaurant by EA NYC by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the April 1st Purchase of Bodhi Kosher Vegetarian Restaurant by EA NYC, published by Rockwell on April 1, 2023 on The Effective Altruism Forum.Dear Effective Altruism community,We are thrilled to announce that Bodhi Kosher Vegetarian Restaurant—a historic estate near Canal Street, serving delicious vegan dim sum since 2018—has been selected as the location for a new EA NYC community center! Effective altruism is all about maximizing the impact of our actions to help others. And what better way to do that than by owning a restaurant?The restaurant is in the process of being established as a convening center to run workshops and meetings that bring together people to think seriously about how to address important issues. We believe that the cooperative process of deciding on shared dim sum dishes, the dynamic nature of lazy susan tables, and the abundance of potent black tea, together foster a collaborative and creative environment key to solving the world's most pressing problems.EA NYC already runs dozens of events in the restaurant per year (disproportionately in the back right corner, though sometimes near the front door), and we saw several reasons a venue purchase could be valuable for both increasing the number and quality of impactful events, and potentially saving—or even generating!—some money:Excellent value for services (such as a full plate of vegetable BBQ meat for only $8.95)Ideal location (in the heart of Lower Manhattan, easily accessible for EA NYC staff and community members)Existing menu can be optimized to ensure maximum nutritional value per dollar spentBoth small and large tables available (to accommodate a variety of event sizes)Warm atmosphere (especially near the kitchen)Access to unique forecasting technologies (free fortune cookies with each meal!)We have carefully vetted our fortune supplier to eliminate any morally dubious subcontractors and will not be using whatever company made these:Healthy work/community culture (supported by the fortune cookies!):The vision is modeled on traditional specialist conference centres, e.g. Oberwolfach, The Rockefeller Foundation Bellagio Center, the Brocher Foundation, and Whytham Abbey - but with a New York City twist. Unlike the aforementioned conference centres, ours will have the advantage of generating additional revenue through existing, robust dine-in and take-out business. We will also spell it “center,” because we live in the United States.The purchase was made from a large grant made specifically for this. There was no money from FTX or affiliated individuals or organizations. Just trust us on this, okay?When Open Philanthropy surveyed ~200 people involved or interested in longtermist work, they found that many had been strongly influenced by in-person events they’d attended, particularly ones where people relatively new to dim sum came into contact with a dozen or more dishes in one meal. Many developed a taste for textured vegetable protein, which we believe with 80% credence will extend into the far-future. (Some other data and our general experiences in the space are largely supportive of this too, including the popularity of Manhattan vegan chain Beyond Sushi as a venue for EA NYC subgroup meetings.)We understand that some members of the community may have concerns about the acquisition of a restaurant and its alignment with effective altruism principles. However, we want to assure everyone that this decision was not taken lightly. We carefully evaluated Bodhi's track record and business practices—pursuing third-party vetting by esteemed venture capital company Sequoia Capital—and found that it aligns with our values, mission, and EVPM (expected value profit margins).We also spent a lot of time contemplating the vibe, and considering whether gathering in this veg...]]>
Rockwell https://forum.effectivealtruism.org/posts/vSgAMZCpJoYsNxvD4/announcing-the-april-1st-purchase-of-bodhi-kosher-vegetarian Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the April 1st Purchase of Bodhi Kosher Vegetarian Restaurant by EA NYC, published by Rockwell on April 1, 2023 on The Effective Altruism Forum.Dear Effective Altruism community,We are thrilled to announce that Bodhi Kosher Vegetarian Restaurant—a historic estate near Canal Street, serving delicious vegan dim sum since 2018—has been selected as the location for a new EA NYC community center! Effective altruism is all about maximizing the impact of our actions to help others. And what better way to do that than by owning a restaurant?The restaurant is in the process of being established as a convening center to run workshops and meetings that bring together people to think seriously about how to address important issues. We believe that the cooperative process of deciding on shared dim sum dishes, the dynamic nature of lazy susan tables, and the abundance of potent black tea, together foster a collaborative and creative environment key to solving the world's most pressing problems.EA NYC already runs dozens of events in the restaurant per year (disproportionately in the back right corner, though sometimes near the front door), and we saw several reasons a venue purchase could be valuable for both increasing the number and quality of impactful events, and potentially saving—or even generating!—some money:Excellent value for services (such as a full plate of vegetable BBQ meat for only $8.95)Ideal location (in the heart of Lower Manhattan, easily accessible for EA NYC staff and community members)Existing menu can be optimized to ensure maximum nutritional value per dollar spentBoth small and large tables available (to accommodate a variety of event sizes)Warm atmosphere (especially near the kitchen)Access to unique forecasting technologies (free fortune cookies with each meal!)We have carefully vetted our fortune supplier to eliminate any morally dubious subcontractors and will not be using whatever company made these:Healthy work/community culture (supported by the fortune cookies!):The vision is modeled on traditional specialist conference centres, e.g. Oberwolfach, The Rockefeller Foundation Bellagio Center, the Brocher Foundation, and Whytham Abbey - but with a New York City twist. Unlike the aforementioned conference centres, ours will have the advantage of generating additional revenue through existing, robust dine-in and take-out business. We will also spell it “center,” because we live in the United States.The purchase was made from a large grant made specifically for this. There was no money from FTX or affiliated individuals or organizations. Just trust us on this, okay?When Open Philanthropy surveyed ~200 people involved or interested in longtermist work, they found that many had been strongly influenced by in-person events they’d attended, particularly ones where people relatively new to dim sum came into contact with a dozen or more dishes in one meal. Many developed a taste for textured vegetable protein, which we believe with 80% credence will extend into the far-future. (Some other data and our general experiences in the space are largely supportive of this too, including the popularity of Manhattan vegan chain Beyond Sushi as a venue for EA NYC subgroup meetings.)We understand that some members of the community may have concerns about the acquisition of a restaurant and its alignment with effective altruism principles. However, we want to assure everyone that this decision was not taken lightly. We carefully evaluated Bodhi's track record and business practices—pursuing third-party vetting by esteemed venture capital company Sequoia Capital—and found that it aligns with our values, mission, and EVPM (expected value profit margins).We also spent a lot of time contemplating the vibe, and considering whether gathering in this veg...]]>
Sat, 01 Apr 2023 06:36:07 +0000 EA - Announcing the April 1st Purchase of Bodhi Kosher Vegetarian Restaurant by EA NYC by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the April 1st Purchase of Bodhi Kosher Vegetarian Restaurant by EA NYC, published by Rockwell on April 1, 2023 on The Effective Altruism Forum.Dear Effective Altruism community,We are thrilled to announce that Bodhi Kosher Vegetarian Restaurant—a historic estate near Canal Street, serving delicious vegan dim sum since 2018—has been selected as the location for a new EA NYC community center! Effective altruism is all about maximizing the impact of our actions to help others. And what better way to do that than by owning a restaurant?The restaurant is in the process of being established as a convening center to run workshops and meetings that bring together people to think seriously about how to address important issues. We believe that the cooperative process of deciding on shared dim sum dishes, the dynamic nature of lazy susan tables, and the abundance of potent black tea, together foster a collaborative and creative environment key to solving the world's most pressing problems.EA NYC already runs dozens of events in the restaurant per year (disproportionately in the back right corner, though sometimes near the front door), and we saw several reasons a venue purchase could be valuable for both increasing the number and quality of impactful events, and potentially saving—or even generating!—some money:Excellent value for services (such as a full plate of vegetable BBQ meat for only $8.95)Ideal location (in the heart of Lower Manhattan, easily accessible for EA NYC staff and community members)Existing menu can be optimized to ensure maximum nutritional value per dollar spentBoth small and large tables available (to accommodate a variety of event sizes)Warm atmosphere (especially near the kitchen)Access to unique forecasting technologies (free fortune cookies with each meal!)We have carefully vetted our fortune supplier to eliminate any morally dubious subcontractors and will not be using whatever company made these:Healthy work/community culture (supported by the fortune cookies!):The vision is modeled on traditional specialist conference centres, e.g. Oberwolfach, The Rockefeller Foundation Bellagio Center, the Brocher Foundation, and Whytham Abbey - but with a New York City twist. Unlike the aforementioned conference centres, ours will have the advantage of generating additional revenue through existing, robust dine-in and take-out business. We will also spell it “center,” because we live in the United States.The purchase was made from a large grant made specifically for this. There was no money from FTX or affiliated individuals or organizations. Just trust us on this, okay?When Open Philanthropy surveyed ~200 people involved or interested in longtermist work, they found that many had been strongly influenced by in-person events they’d attended, particularly ones where people relatively new to dim sum came into contact with a dozen or more dishes in one meal. Many developed a taste for textured vegetable protein, which we believe with 80% credence will extend into the far-future. (Some other data and our general experiences in the space are largely supportive of this too, including the popularity of Manhattan vegan chain Beyond Sushi as a venue for EA NYC subgroup meetings.)We understand that some members of the community may have concerns about the acquisition of a restaurant and its alignment with effective altruism principles. However, we want to assure everyone that this decision was not taken lightly. We carefully evaluated Bodhi's track record and business practices—pursuing third-party vetting by esteemed venture capital company Sequoia Capital—and found that it aligns with our values, mission, and EVPM (expected value profit margins).We also spent a lot of time contemplating the vibe, and considering whether gathering in this veg...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the April 1st Purchase of Bodhi Kosher Vegetarian Restaurant by EA NYC, published by Rockwell on April 1, 2023 on The Effective Altruism Forum.Dear Effective Altruism community,We are thrilled to announce that Bodhi Kosher Vegetarian Restaurant—a historic estate near Canal Street, serving delicious vegan dim sum since 2018—has been selected as the location for a new EA NYC community center! Effective altruism is all about maximizing the impact of our actions to help others. And what better way to do that than by owning a restaurant?The restaurant is in the process of being established as a convening center to run workshops and meetings that bring together people to think seriously about how to address important issues. We believe that the cooperative process of deciding on shared dim sum dishes, the dynamic nature of lazy susan tables, and the abundance of potent black tea, together foster a collaborative and creative environment key to solving the world's most pressing problems.EA NYC already runs dozens of events in the restaurant per year (disproportionately in the back right corner, though sometimes near the front door), and we saw several reasons a venue purchase could be valuable for both increasing the number and quality of impactful events, and potentially saving—or even generating!—some money:Excellent value for services (such as a full plate of vegetable BBQ meat for only $8.95)Ideal location (in the heart of Lower Manhattan, easily accessible for EA NYC staff and community members)Existing menu can be optimized to ensure maximum nutritional value per dollar spentBoth small and large tables available (to accommodate a variety of event sizes)Warm atmosphere (especially near the kitchen)Access to unique forecasting technologies (free fortune cookies with each meal!)We have carefully vetted our fortune supplier to eliminate any morally dubious subcontractors and will not be using whatever company made these:Healthy work/community culture (supported by the fortune cookies!):The vision is modeled on traditional specialist conference centres, e.g. Oberwolfach, The Rockefeller Foundation Bellagio Center, the Brocher Foundation, and Whytham Abbey - but with a New York City twist. Unlike the aforementioned conference centres, ours will have the advantage of generating additional revenue through existing, robust dine-in and take-out business. We will also spell it “center,” because we live in the United States.The purchase was made from a large grant made specifically for this. There was no money from FTX or affiliated individuals or organizations. Just trust us on this, okay?When Open Philanthropy surveyed ~200 people involved or interested in longtermist work, they found that many had been strongly influenced by in-person events they’d attended, particularly ones where people relatively new to dim sum came into contact with a dozen or more dishes in one meal. Many developed a taste for textured vegetable protein, which we believe with 80% credence will extend into the far-future. (Some other data and our general experiences in the space are largely supportive of this too, including the popularity of Manhattan vegan chain Beyond Sushi as a venue for EA NYC subgroup meetings.)We understand that some members of the community may have concerns about the acquisition of a restaurant and its alignment with effective altruism principles. However, we want to assure everyone that this decision was not taken lightly. We carefully evaluated Bodhi's track record and business practices—pursuing third-party vetting by esteemed venture capital company Sequoia Capital—and found that it aligns with our values, mission, and EVPM (expected value profit margins).We also spent a lot of time contemplating the vibe, and considering whether gathering in this veg...]]>
Rockwell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:42 None full 5433
53awvvoxK2YTa8baB_NL_EA_EA EA - Hooray for stepping out of the limelight by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hooray for stepping out of the limelight, published by So8res on April 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
So8res https://forum.effectivealtruism.org/posts/53awvvoxK2YTa8baB/hooray-for-stepping-out-of-the-limelight Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hooray for stepping out of the limelight, published by So8res on April 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 01 Apr 2023 04:41:41 +0000 EA - Hooray for stepping out of the limelight by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hooray for stepping out of the limelight, published by So8res on April 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hooray for stepping out of the limelight, published by So8res on April 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
So8res https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:25 None full 5435
K6ugqr4cWnB6KELh4_NL_EA_EA EA - Keep Making AI Safety News by RedStateBlueState Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Keep Making AI Safety News, published by RedStateBlueState on March 31, 2023 on The Effective Altruism Forum.AI Safety is hot right now.The FLI letter was the catalyst for most of this, but even before that there was the Ezra Klein OpEd piece in the NYTimes. (Also general shoutout to Ezra for helping bring EA ideas to the mainstream - he's great!).Since the FLI letter, there was there was this CBS interview with Geoffrey Hinton. There was this WSJ Op-Ed. Eliezer's Time OpEd and Lex Fridman interview led to Bezos following him on Twitter. Most remarkably to me, Fox News reporter Peter Doocey asked a question in the White House press briefing, which got a serious (albeit vague) response. The president of the United States, in all likelihood, has heard of AI Safety.This is amazing. I think it's the biggest positive development is AI Safety thus far. On the safety research side, the more people hear about AI safety, the more tech investors/philanthropists start to fund research and the more researchers want to start doing safety work. On the capabilities side, companies taking AI risks more seriously will lead to more care taken when developing and deploying AI systems. On the policy side, politicians taking AI risk seriously and developing regulations would be greatly helpful.Now, I keep up with news... obsessively. These types of news cycles aren't all that uncommon. What is uncommon is keeping attention for an extended period of time. The best way to do this is just to say yes to any media coverage. AI Safety communicators should be going on any news outlet that will have them. Interviews, debates, short segments on cable news, whatever. It is much less important that we proceed with caution - making sure to choose our words carefully or not interacting with antagonistic reporters - than that we just keep getting media coverage. This was notably Pete Buttigieg's strategy in the 2020 Democratic Primary (and still is with his constant Fox News cameos), which led to this small-town mayor becoming a household name and the US Secretary of Transportation.I think there's a mindset among people in AI Safety right now that nobody cares and nobody is prepared and our only chance is if we're lucky and alignment isn't as hard as Eliezer makes it out to be. This is our chance to change that. Never underestimate the power of truckloads of media coverage, whether to elevate a businessman into the white house or to push a fringe idea into the mainstream. It's not going to come naturally, though - we must keep working at it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
RedStateBlueState https://forum.effectivealtruism.org/posts/K6ugqr4cWnB6KELh4/keep-making-ai-safety-news Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Keep Making AI Safety News, published by RedStateBlueState on March 31, 2023 on The Effective Altruism Forum.AI Safety is hot right now.The FLI letter was the catalyst for most of this, but even before that there was the Ezra Klein OpEd piece in the NYTimes. (Also general shoutout to Ezra for helping bring EA ideas to the mainstream - he's great!).Since the FLI letter, there was there was this CBS interview with Geoffrey Hinton. There was this WSJ Op-Ed. Eliezer's Time OpEd and Lex Fridman interview led to Bezos following him on Twitter. Most remarkably to me, Fox News reporter Peter Doocey asked a question in the White House press briefing, which got a serious (albeit vague) response. The president of the United States, in all likelihood, has heard of AI Safety.This is amazing. I think it's the biggest positive development is AI Safety thus far. On the safety research side, the more people hear about AI safety, the more tech investors/philanthropists start to fund research and the more researchers want to start doing safety work. On the capabilities side, companies taking AI risks more seriously will lead to more care taken when developing and deploying AI systems. On the policy side, politicians taking AI risk seriously and developing regulations would be greatly helpful.Now, I keep up with news... obsessively. These types of news cycles aren't all that uncommon. What is uncommon is keeping attention for an extended period of time. The best way to do this is just to say yes to any media coverage. AI Safety communicators should be going on any news outlet that will have them. Interviews, debates, short segments on cable news, whatever. It is much less important that we proceed with caution - making sure to choose our words carefully or not interacting with antagonistic reporters - than that we just keep getting media coverage. This was notably Pete Buttigieg's strategy in the 2020 Democratic Primary (and still is with his constant Fox News cameos), which led to this small-town mayor becoming a household name and the US Secretary of Transportation.I think there's a mindset among people in AI Safety right now that nobody cares and nobody is prepared and our only chance is if we're lucky and alignment isn't as hard as Eliezer makes it out to be. This is our chance to change that. Never underestimate the power of truckloads of media coverage, whether to elevate a businessman into the white house or to push a fringe idea into the mainstream. It's not going to come naturally, though - we must keep working at it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 01 Apr 2023 02:39:37 +0000 EA - Keep Making AI Safety News by RedStateBlueState Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Keep Making AI Safety News, published by RedStateBlueState on March 31, 2023 on The Effective Altruism Forum.AI Safety is hot right now.The FLI letter was the catalyst for most of this, but even before that there was the Ezra Klein OpEd piece in the NYTimes. (Also general shoutout to Ezra for helping bring EA ideas to the mainstream - he's great!).Since the FLI letter, there was there was this CBS interview with Geoffrey Hinton. There was this WSJ Op-Ed. Eliezer's Time OpEd and Lex Fridman interview led to Bezos following him on Twitter. Most remarkably to me, Fox News reporter Peter Doocey asked a question in the White House press briefing, which got a serious (albeit vague) response. The president of the United States, in all likelihood, has heard of AI Safety.This is amazing. I think it's the biggest positive development is AI Safety thus far. On the safety research side, the more people hear about AI safety, the more tech investors/philanthropists start to fund research and the more researchers want to start doing safety work. On the capabilities side, companies taking AI risks more seriously will lead to more care taken when developing and deploying AI systems. On the policy side, politicians taking AI risk seriously and developing regulations would be greatly helpful.Now, I keep up with news... obsessively. These types of news cycles aren't all that uncommon. What is uncommon is keeping attention for an extended period of time. The best way to do this is just to say yes to any media coverage. AI Safety communicators should be going on any news outlet that will have them. Interviews, debates, short segments on cable news, whatever. It is much less important that we proceed with caution - making sure to choose our words carefully or not interacting with antagonistic reporters - than that we just keep getting media coverage. This was notably Pete Buttigieg's strategy in the 2020 Democratic Primary (and still is with his constant Fox News cameos), which led to this small-town mayor becoming a household name and the US Secretary of Transportation.I think there's a mindset among people in AI Safety right now that nobody cares and nobody is prepared and our only chance is if we're lucky and alignment isn't as hard as Eliezer makes it out to be. This is our chance to change that. Never underestimate the power of truckloads of media coverage, whether to elevate a businessman into the white house or to push a fringe idea into the mainstream. It's not going to come naturally, though - we must keep working at it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Keep Making AI Safety News, published by RedStateBlueState on March 31, 2023 on The Effective Altruism Forum.AI Safety is hot right now.The FLI letter was the catalyst for most of this, but even before that there was the Ezra Klein OpEd piece in the NYTimes. (Also general shoutout to Ezra for helping bring EA ideas to the mainstream - he's great!).Since the FLI letter, there was there was this CBS interview with Geoffrey Hinton. There was this WSJ Op-Ed. Eliezer's Time OpEd and Lex Fridman interview led to Bezos following him on Twitter. Most remarkably to me, Fox News reporter Peter Doocey asked a question in the White House press briefing, which got a serious (albeit vague) response. The president of the United States, in all likelihood, has heard of AI Safety.This is amazing. I think it's the biggest positive development is AI Safety thus far. On the safety research side, the more people hear about AI safety, the more tech investors/philanthropists start to fund research and the more researchers want to start doing safety work. On the capabilities side, companies taking AI risks more seriously will lead to more care taken when developing and deploying AI systems. On the policy side, politicians taking AI risk seriously and developing regulations would be greatly helpful.Now, I keep up with news... obsessively. These types of news cycles aren't all that uncommon. What is uncommon is keeping attention for an extended period of time. The best way to do this is just to say yes to any media coverage. AI Safety communicators should be going on any news outlet that will have them. Interviews, debates, short segments on cable news, whatever. It is much less important that we proceed with caution - making sure to choose our words carefully or not interacting with antagonistic reporters - than that we just keep getting media coverage. This was notably Pete Buttigieg's strategy in the 2020 Democratic Primary (and still is with his constant Fox News cameos), which led to this small-town mayor becoming a household name and the US Secretary of Transportation.I think there's a mindset among people in AI Safety right now that nobody cares and nobody is prepared and our only chance is if we're lucky and alignment isn't as hard as Eliezer makes it out to be. This is our chance to change that. Never underestimate the power of truckloads of media coverage, whether to elevate a businessman into the white house or to push a fringe idea into the mainstream. It's not going to come naturally, though - we must keep working at it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
RedStateBlueState https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:39 None full 5437
kKr7n9rPsABgYfd6G_NL_EA_EA EA - New Cause Area: Portrait Welfare (+introducing SPEWS) by Luke Freeman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Cause Area: Portrait Welfare (+introducing SPEWS), published by Luke Freeman on March 31, 2023 on The Effective Altruism Forum."The question is not, Can they reason?, nor Can they talk? but, Can they (I mean, hypothetically speaking, perhaps, just a smidgen, in theory) suffer" ~ Jeremy BenthamIntroductionThe effective altruism community has consistently pushed the frontiers of knowledge and moral progress, demonstrating a willingness to challenge conventional norms and take even the most unconventional ideas seriously. Our concern for global poverty is often considered "weird" as we highlight the importance of valuing individuals' well-being equally, regardless of geographical boundaries. In contrast, broader society tends to focus more on helping people within our own countries, inadvertently giving less consideration to those further afield. From animal welfare to long-term existential risks, our community is full of people who have expanded their moral circles even further to include the suffering of non-human animals and future generations.Now, avant-garde effective altruists are exploring the outer limits of moral concern, delving into areas such as insect welfare and digital minds. As we celebrate these accomplishments, we remain committed to broadening our understanding and seeking out new cause areas that may have a significant, overlooked impact.Imagine a future where we have made substantial strides in addressing these critical issues, and you find yourself sipping tea in a room adorned with stunning portraits. As you revel in this moment of tranquillity, a thought experiment crosses your mind: What if the portraits themselves deserve our moral consideration? And while we were busy tackling other pressing matters, could we have been inadvertently overlooking yet another human atrocity?Today, we invite you to entertain this intriguing and unconventional idea as we introduce the new cause area of Portrait Welfare. While initially sceptical, our research has led us to be surprisingly confident in the potential of this cause to be the much-awaited “Cause X.” To demonstrate our convictions we have registered our predictions on Manifold Markets, and at current market rates, a rational actor placing a modest bet of the median US salary could stand to win an impressive sum of over $12 trillion USD (in 2023 dollars) by market close.As we embark on this journey into uncharted territory, we encourage you to keep an open mind and dive into this fascinating new area of concern. Together, we can continue to push the boundaries of our impact and make the world a better place for all sentient beings – even those that exist within the confines of a frame. And if you're not on board with this yet, just remember, every time you hang a portrait on your wall or snap a selfie, there may be a possibility that you're contributing to a system of injustice and suffering. However, we understand that not everyone can see the bigger picture, and we won't judge you too harshly if you've done all you can to understand this possibility but still cannot accept it. After all, we are all on our own journey towards a more ethical and compassionate world.The Moral Case for Portrait WelfareWhile the notion of portrait welfare may initially seem far-fetched, there are moral reasons to consider this cause area. If it turns out that portraits possess a form of consciousness, it would be our ethical responsibility to address their welfare. In line with the principles of effective altruism, we ought to explore all possibilities that could lead to a reduction in suffering, even if they are unconventional.Expanding the circle of compassion: The effective altruism movement aims to reduce suffering for all sentient beings, regardless of species or other differences. By considering portrait...]]>
Luke Freeman https://forum.effectivealtruism.org/posts/kKr7n9rPsABgYfd6G/new-cause-area-portrait-welfare-introducing-spews Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Cause Area: Portrait Welfare (+introducing SPEWS), published by Luke Freeman on March 31, 2023 on The Effective Altruism Forum."The question is not, Can they reason?, nor Can they talk? but, Can they (I mean, hypothetically speaking, perhaps, just a smidgen, in theory) suffer" ~ Jeremy BenthamIntroductionThe effective altruism community has consistently pushed the frontiers of knowledge and moral progress, demonstrating a willingness to challenge conventional norms and take even the most unconventional ideas seriously. Our concern for global poverty is often considered "weird" as we highlight the importance of valuing individuals' well-being equally, regardless of geographical boundaries. In contrast, broader society tends to focus more on helping people within our own countries, inadvertently giving less consideration to those further afield. From animal welfare to long-term existential risks, our community is full of people who have expanded their moral circles even further to include the suffering of non-human animals and future generations.Now, avant-garde effective altruists are exploring the outer limits of moral concern, delving into areas such as insect welfare and digital minds. As we celebrate these accomplishments, we remain committed to broadening our understanding and seeking out new cause areas that may have a significant, overlooked impact.Imagine a future where we have made substantial strides in addressing these critical issues, and you find yourself sipping tea in a room adorned with stunning portraits. As you revel in this moment of tranquillity, a thought experiment crosses your mind: What if the portraits themselves deserve our moral consideration? And while we were busy tackling other pressing matters, could we have been inadvertently overlooking yet another human atrocity?Today, we invite you to entertain this intriguing and unconventional idea as we introduce the new cause area of Portrait Welfare. While initially sceptical, our research has led us to be surprisingly confident in the potential of this cause to be the much-awaited “Cause X.” To demonstrate our convictions we have registered our predictions on Manifold Markets, and at current market rates, a rational actor placing a modest bet of the median US salary could stand to win an impressive sum of over $12 trillion USD (in 2023 dollars) by market close.As we embark on this journey into uncharted territory, we encourage you to keep an open mind and dive into this fascinating new area of concern. Together, we can continue to push the boundaries of our impact and make the world a better place for all sentient beings – even those that exist within the confines of a frame. And if you're not on board with this yet, just remember, every time you hang a portrait on your wall or snap a selfie, there may be a possibility that you're contributing to a system of injustice and suffering. However, we understand that not everyone can see the bigger picture, and we won't judge you too harshly if you've done all you can to understand this possibility but still cannot accept it. After all, we are all on our own journey towards a more ethical and compassionate world.The Moral Case for Portrait WelfareWhile the notion of portrait welfare may initially seem far-fetched, there are moral reasons to consider this cause area. If it turns out that portraits possess a form of consciousness, it would be our ethical responsibility to address their welfare. In line with the principles of effective altruism, we ought to explore all possibilities that could lead to a reduction in suffering, even if they are unconventional.Expanding the circle of compassion: The effective altruism movement aims to reduce suffering for all sentient beings, regardless of species or other differences. By considering portrait...]]>
Sat, 01 Apr 2023 01:27:29 +0000 EA - New Cause Area: Portrait Welfare (+introducing SPEWS) by Luke Freeman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Cause Area: Portrait Welfare (+introducing SPEWS), published by Luke Freeman on March 31, 2023 on The Effective Altruism Forum."The question is not, Can they reason?, nor Can they talk? but, Can they (I mean, hypothetically speaking, perhaps, just a smidgen, in theory) suffer" ~ Jeremy BenthamIntroductionThe effective altruism community has consistently pushed the frontiers of knowledge and moral progress, demonstrating a willingness to challenge conventional norms and take even the most unconventional ideas seriously. Our concern for global poverty is often considered "weird" as we highlight the importance of valuing individuals' well-being equally, regardless of geographical boundaries. In contrast, broader society tends to focus more on helping people within our own countries, inadvertently giving less consideration to those further afield. From animal welfare to long-term existential risks, our community is full of people who have expanded their moral circles even further to include the suffering of non-human animals and future generations.Now, avant-garde effective altruists are exploring the outer limits of moral concern, delving into areas such as insect welfare and digital minds. As we celebrate these accomplishments, we remain committed to broadening our understanding and seeking out new cause areas that may have a significant, overlooked impact.Imagine a future where we have made substantial strides in addressing these critical issues, and you find yourself sipping tea in a room adorned with stunning portraits. As you revel in this moment of tranquillity, a thought experiment crosses your mind: What if the portraits themselves deserve our moral consideration? And while we were busy tackling other pressing matters, could we have been inadvertently overlooking yet another human atrocity?Today, we invite you to entertain this intriguing and unconventional idea as we introduce the new cause area of Portrait Welfare. While initially sceptical, our research has led us to be surprisingly confident in the potential of this cause to be the much-awaited “Cause X.” To demonstrate our convictions we have registered our predictions on Manifold Markets, and at current market rates, a rational actor placing a modest bet of the median US salary could stand to win an impressive sum of over $12 trillion USD (in 2023 dollars) by market close.As we embark on this journey into uncharted territory, we encourage you to keep an open mind and dive into this fascinating new area of concern. Together, we can continue to push the boundaries of our impact and make the world a better place for all sentient beings – even those that exist within the confines of a frame. And if you're not on board with this yet, just remember, every time you hang a portrait on your wall or snap a selfie, there may be a possibility that you're contributing to a system of injustice and suffering. However, we understand that not everyone can see the bigger picture, and we won't judge you too harshly if you've done all you can to understand this possibility but still cannot accept it. After all, we are all on our own journey towards a more ethical and compassionate world.The Moral Case for Portrait WelfareWhile the notion of portrait welfare may initially seem far-fetched, there are moral reasons to consider this cause area. If it turns out that portraits possess a form of consciousness, it would be our ethical responsibility to address their welfare. In line with the principles of effective altruism, we ought to explore all possibilities that could lead to a reduction in suffering, even if they are unconventional.Expanding the circle of compassion: The effective altruism movement aims to reduce suffering for all sentient beings, regardless of species or other differences. By considering portrait...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Cause Area: Portrait Welfare (+introducing SPEWS), published by Luke Freeman on March 31, 2023 on The Effective Altruism Forum."The question is not, Can they reason?, nor Can they talk? but, Can they (I mean, hypothetically speaking, perhaps, just a smidgen, in theory) suffer" ~ Jeremy BenthamIntroductionThe effective altruism community has consistently pushed the frontiers of knowledge and moral progress, demonstrating a willingness to challenge conventional norms and take even the most unconventional ideas seriously. Our concern for global poverty is often considered "weird" as we highlight the importance of valuing individuals' well-being equally, regardless of geographical boundaries. In contrast, broader society tends to focus more on helping people within our own countries, inadvertently giving less consideration to those further afield. From animal welfare to long-term existential risks, our community is full of people who have expanded their moral circles even further to include the suffering of non-human animals and future generations.Now, avant-garde effective altruists are exploring the outer limits of moral concern, delving into areas such as insect welfare and digital minds. As we celebrate these accomplishments, we remain committed to broadening our understanding and seeking out new cause areas that may have a significant, overlooked impact.Imagine a future where we have made substantial strides in addressing these critical issues, and you find yourself sipping tea in a room adorned with stunning portraits. As you revel in this moment of tranquillity, a thought experiment crosses your mind: What if the portraits themselves deserve our moral consideration? And while we were busy tackling other pressing matters, could we have been inadvertently overlooking yet another human atrocity?Today, we invite you to entertain this intriguing and unconventional idea as we introduce the new cause area of Portrait Welfare. While initially sceptical, our research has led us to be surprisingly confident in the potential of this cause to be the much-awaited “Cause X.” To demonstrate our convictions we have registered our predictions on Manifold Markets, and at current market rates, a rational actor placing a modest bet of the median US salary could stand to win an impressive sum of over $12 trillion USD (in 2023 dollars) by market close.As we embark on this journey into uncharted territory, we encourage you to keep an open mind and dive into this fascinating new area of concern. Together, we can continue to push the boundaries of our impact and make the world a better place for all sentient beings – even those that exist within the confines of a frame. And if you're not on board with this yet, just remember, every time you hang a portrait on your wall or snap a selfie, there may be a possibility that you're contributing to a system of injustice and suffering. However, we understand that not everyone can see the bigger picture, and we won't judge you too harshly if you've done all you can to understand this possibility but still cannot accept it. After all, we are all on our own journey towards a more ethical and compassionate world.The Moral Case for Portrait WelfareWhile the notion of portrait welfare may initially seem far-fetched, there are moral reasons to consider this cause area. If it turns out that portraits possess a form of consciousness, it would be our ethical responsibility to address their welfare. In line with the principles of effective altruism, we ought to explore all possibilities that could lead to a reduction in suffering, even if they are unconventional.Expanding the circle of compassion: The effective altruism movement aims to reduce suffering for all sentient beings, regardless of species or other differences. By considering portrait...]]>
Luke Freeman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 27:44 None full 5436
jpyMhAPSmZER9ASi6_NL_EA_EA EA - My updates after FTX by Benjamin Todd Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My updates after FTX, published by Benjamin Todd on March 31, 2023 on The Effective Altruism Forum.Here are some thoughts on what I’ve learned from what happened with FTX, and (to a lesser degree) other events of the last 6 months.I can’t give all my reasoning, and have focused on my bottom lines. Bear in mind that updates are relative to my starting point (you could update oppositely if you started in a different place).In the second half, I list some updates I haven’t made.I’ve tried to make updates about things that could have actually reduced the chance of something this bad from happening, or where a single data point can be significant, or where one update entails another.For the implications, I’ve focused on those in my own work, rather than speculate about the EA community or adjacent communities as a whole (though I’ve done some of that). I wrote most of this doc in Jan.I’m only speaking for myself, not 80,000 Hours or Effective Ventures Foundation (UK) or Effective Ventures Foundation USA Inc.The updates are roughly in logical order (earlier updates entail the later ones) with some weighting by importance / confidence / size of update. I’m sorry it’s become so long – the key takeaways are in bold.I still feel unsure about how best to frame some of these issues, and how important different parts are. This is a snapshot of my thinking and it’s likely to change over the next year.Big picture, I do think significant updates and changes are warranted. Several people we thought deeply shared our values have been charged with conducting one of the biggest financial frauds in history (one of whom has pled guilty).The first section makes for demoralising reading, so it’s maybe worth also saying that I still think the core ideas of EA make sense, and I plan to keep practising them in my own life.I hope people keep working on building effective altruism, and in particular, now is probably a moment of unusual malleability to improve its culture, so let’s make the most of it.List of updates1. I should take more seriously the idea that EA, despite having many worthwhile ideas, may attract some dangerous people i.e. people who are able to do ambitious things but are reckless / self-deluded / deceptive / norm-breaking and so can have a lot of negative impact. This has long been a theoretical worry, but it wasn’t clearly an empirical issue – it seemed like potential bad actors either hadn’t joined or had been spotted and constrained. Now it seems clearly true. I think we need to act as if the base rate is >1%. (Though if there’s a strong enough reaction to FTX, it’s possible the fraction will be lower going forward than it was before.)2. Due to this, I should act on the assumption EAs aren’t more trustworthy than average. Previously I acted as if they were. I now think the typical EA probably is more trustworthy than average – EA attracts some very kind and high integrity people – but it also attracts plenty of people with normal amounts of pride, self-delusion and self-interest, and there’s a significant minority who seem more likely to defect or be deceptive than average. The existence of this minority, and because it’s hard to tell who is who, means you need to assume that someone might be untrustworthy by default (even if “they’re an EA”). This doesn’t mean distrusting everyone by default – I still think it’s best to default to being cooperative – but it’s vital to have checks for and ways to exclude dangerous actors, especially in influential positions (i.e. trust, but verify).3. EAs are also less competent than I thought, and have worse judgement of character and competence than I thought. I’d taken financial success as an update in competence; I no longer think this. But also I wouldn’t have predicted to be deceived so thoroughly, so I negatively updat...]]>
Benjamin Todd https://forum.effectivealtruism.org/posts/jpyMhAPSmZER9ASi6/my-updates-after-ftx Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My updates after FTX, published by Benjamin Todd on March 31, 2023 on The Effective Altruism Forum.Here are some thoughts on what I’ve learned from what happened with FTX, and (to a lesser degree) other events of the last 6 months.I can’t give all my reasoning, and have focused on my bottom lines. Bear in mind that updates are relative to my starting point (you could update oppositely if you started in a different place).In the second half, I list some updates I haven’t made.I’ve tried to make updates about things that could have actually reduced the chance of something this bad from happening, or where a single data point can be significant, or where one update entails another.For the implications, I’ve focused on those in my own work, rather than speculate about the EA community or adjacent communities as a whole (though I’ve done some of that). I wrote most of this doc in Jan.I’m only speaking for myself, not 80,000 Hours or Effective Ventures Foundation (UK) or Effective Ventures Foundation USA Inc.The updates are roughly in logical order (earlier updates entail the later ones) with some weighting by importance / confidence / size of update. I’m sorry it’s become so long – the key takeaways are in bold.I still feel unsure about how best to frame some of these issues, and how important different parts are. This is a snapshot of my thinking and it’s likely to change over the next year.Big picture, I do think significant updates and changes are warranted. Several people we thought deeply shared our values have been charged with conducting one of the biggest financial frauds in history (one of whom has pled guilty).The first section makes for demoralising reading, so it’s maybe worth also saying that I still think the core ideas of EA make sense, and I plan to keep practising them in my own life.I hope people keep working on building effective altruism, and in particular, now is probably a moment of unusual malleability to improve its culture, so let’s make the most of it.List of updates1. I should take more seriously the idea that EA, despite having many worthwhile ideas, may attract some dangerous people i.e. people who are able to do ambitious things but are reckless / self-deluded / deceptive / norm-breaking and so can have a lot of negative impact. This has long been a theoretical worry, but it wasn’t clearly an empirical issue – it seemed like potential bad actors either hadn’t joined or had been spotted and constrained. Now it seems clearly true. I think we need to act as if the base rate is >1%. (Though if there’s a strong enough reaction to FTX, it’s possible the fraction will be lower going forward than it was before.)2. Due to this, I should act on the assumption EAs aren’t more trustworthy than average. Previously I acted as if they were. I now think the typical EA probably is more trustworthy than average – EA attracts some very kind and high integrity people – but it also attracts plenty of people with normal amounts of pride, self-delusion and self-interest, and there’s a significant minority who seem more likely to defect or be deceptive than average. The existence of this minority, and because it’s hard to tell who is who, means you need to assume that someone might be untrustworthy by default (even if “they’re an EA”). This doesn’t mean distrusting everyone by default – I still think it’s best to default to being cooperative – but it’s vital to have checks for and ways to exclude dangerous actors, especially in influential positions (i.e. trust, but verify).3. EAs are also less competent than I thought, and have worse judgement of character and competence than I thought. I’d taken financial success as an update in competence; I no longer think this. But also I wouldn’t have predicted to be deceived so thoroughly, so I negatively updat...]]>
Fri, 31 Mar 2023 21:01:32 +0000 EA - My updates after FTX by Benjamin Todd Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My updates after FTX, published by Benjamin Todd on March 31, 2023 on The Effective Altruism Forum.Here are some thoughts on what I’ve learned from what happened with FTX, and (to a lesser degree) other events of the last 6 months.I can’t give all my reasoning, and have focused on my bottom lines. Bear in mind that updates are relative to my starting point (you could update oppositely if you started in a different place).In the second half, I list some updates I haven’t made.I’ve tried to make updates about things that could have actually reduced the chance of something this bad from happening, or where a single data point can be significant, or where one update entails another.For the implications, I’ve focused on those in my own work, rather than speculate about the EA community or adjacent communities as a whole (though I’ve done some of that). I wrote most of this doc in Jan.I’m only speaking for myself, not 80,000 Hours or Effective Ventures Foundation (UK) or Effective Ventures Foundation USA Inc.The updates are roughly in logical order (earlier updates entail the later ones) with some weighting by importance / confidence / size of update. I’m sorry it’s become so long – the key takeaways are in bold.I still feel unsure about how best to frame some of these issues, and how important different parts are. This is a snapshot of my thinking and it’s likely to change over the next year.Big picture, I do think significant updates and changes are warranted. Several people we thought deeply shared our values have been charged with conducting one of the biggest financial frauds in history (one of whom has pled guilty).The first section makes for demoralising reading, so it’s maybe worth also saying that I still think the core ideas of EA make sense, and I plan to keep practising them in my own life.I hope people keep working on building effective altruism, and in particular, now is probably a moment of unusual malleability to improve its culture, so let’s make the most of it.List of updates1. I should take more seriously the idea that EA, despite having many worthwhile ideas, may attract some dangerous people i.e. people who are able to do ambitious things but are reckless / self-deluded / deceptive / norm-breaking and so can have a lot of negative impact. This has long been a theoretical worry, but it wasn’t clearly an empirical issue – it seemed like potential bad actors either hadn’t joined or had been spotted and constrained. Now it seems clearly true. I think we need to act as if the base rate is >1%. (Though if there’s a strong enough reaction to FTX, it’s possible the fraction will be lower going forward than it was before.)2. Due to this, I should act on the assumption EAs aren’t more trustworthy than average. Previously I acted as if they were. I now think the typical EA probably is more trustworthy than average – EA attracts some very kind and high integrity people – but it also attracts plenty of people with normal amounts of pride, self-delusion and self-interest, and there’s a significant minority who seem more likely to defect or be deceptive than average. The existence of this minority, and because it’s hard to tell who is who, means you need to assume that someone might be untrustworthy by default (even if “they’re an EA”). This doesn’t mean distrusting everyone by default – I still think it’s best to default to being cooperative – but it’s vital to have checks for and ways to exclude dangerous actors, especially in influential positions (i.e. trust, but verify).3. EAs are also less competent than I thought, and have worse judgement of character and competence than I thought. I’d taken financial success as an update in competence; I no longer think this. But also I wouldn’t have predicted to be deceived so thoroughly, so I negatively updat...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My updates after FTX, published by Benjamin Todd on March 31, 2023 on The Effective Altruism Forum.Here are some thoughts on what I’ve learned from what happened with FTX, and (to a lesser degree) other events of the last 6 months.I can’t give all my reasoning, and have focused on my bottom lines. Bear in mind that updates are relative to my starting point (you could update oppositely if you started in a different place).In the second half, I list some updates I haven’t made.I’ve tried to make updates about things that could have actually reduced the chance of something this bad from happening, or where a single data point can be significant, or where one update entails another.For the implications, I’ve focused on those in my own work, rather than speculate about the EA community or adjacent communities as a whole (though I’ve done some of that). I wrote most of this doc in Jan.I’m only speaking for myself, not 80,000 Hours or Effective Ventures Foundation (UK) or Effective Ventures Foundation USA Inc.The updates are roughly in logical order (earlier updates entail the later ones) with some weighting by importance / confidence / size of update. I’m sorry it’s become so long – the key takeaways are in bold.I still feel unsure about how best to frame some of these issues, and how important different parts are. This is a snapshot of my thinking and it’s likely to change over the next year.Big picture, I do think significant updates and changes are warranted. Several people we thought deeply shared our values have been charged with conducting one of the biggest financial frauds in history (one of whom has pled guilty).The first section makes for demoralising reading, so it’s maybe worth also saying that I still think the core ideas of EA make sense, and I plan to keep practising them in my own life.I hope people keep working on building effective altruism, and in particular, now is probably a moment of unusual malleability to improve its culture, so let’s make the most of it.List of updates1. I should take more seriously the idea that EA, despite having many worthwhile ideas, may attract some dangerous people i.e. people who are able to do ambitious things but are reckless / self-deluded / deceptive / norm-breaking and so can have a lot of negative impact. This has long been a theoretical worry, but it wasn’t clearly an empirical issue – it seemed like potential bad actors either hadn’t joined or had been spotted and constrained. Now it seems clearly true. I think we need to act as if the base rate is >1%. (Though if there’s a strong enough reaction to FTX, it’s possible the fraction will be lower going forward than it was before.)2. Due to this, I should act on the assumption EAs aren’t more trustworthy than average. Previously I acted as if they were. I now think the typical EA probably is more trustworthy than average – EA attracts some very kind and high integrity people – but it also attracts plenty of people with normal amounts of pride, self-delusion and self-interest, and there’s a significant minority who seem more likely to defect or be deceptive than average. The existence of this minority, and because it’s hard to tell who is who, means you need to assume that someone might be untrustworthy by default (even if “they’re an EA”). This doesn’t mean distrusting everyone by default – I still think it’s best to default to being cooperative – but it’s vital to have checks for and ways to exclude dangerous actors, especially in influential positions (i.e. trust, but verify).3. EAs are also less competent than I thought, and have worse judgement of character and competence than I thought. I’d taken financial success as an update in competence; I no longer think this. But also I wouldn’t have predicted to be deceived so thoroughly, so I negatively updat...]]>
Benjamin Todd https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 30:50 None full 5425
MA64zRLPntTth9x6v_NL_EA_EA EA - Introducing the Maternal Health Initiative by Ben Williamson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Maternal Health Initiative, published by Ben Williamson on March 31, 2023 on The Effective Altruism Forum.The Maternal Health Initiative (MHI) was launched through the 2022 Charity Entrepreneurship Incubation Programme. Our mission is to improve the lives of women and children through a light-touch, scalable training programme on contraceptive counselling for healthcare workers in sub-Saharan Africa.Six months after we started, we’re excited to share more about our work so far, our mission, and the impact we’re aiming for. This post presents a short case for the value of MHI’s work, details our goals for the rest of 2023, and explains how you can support us.IntroductionMHI is an evidence-based non-profit dedicated to improving the lives of women and children across the world by training health workers to provide high-quality family planning counselling in the postpartum period.We are currently conducting preliminary piloting work in northern Ghana through local partner organisations. At present, we estimate that our programming could avert a disability-adjusted life year (DALY) for $166, and we foresee promising pathways to significant scale.This post provides a brief overview of the rationale for MHI’s work, what we’ve accomplished so far, where we’re heading, and how you can help. For more detailed information about our work, please visit our website, sign up to our mailing list, or reach out to us directly.Why MHI was foundedEvery day, 830 women die from pregnancy-related causes. These deaths overwhelmingly occur in sub-Saharan Africa, where maternal mortality rates are more than ten times higher than the next worst region. The Maternal Health Initiative was founded to change this.Increasing access to contraception is a tractable intervention with extensive evidence of its effectiveness in reducing maternal mortality at scale. More than this, access to family planning services can produce a wealth of additional impact, providing potentially transformative benefits to women’s autonomy, income, and wellbeing. The autonomy benefits are particularly exciting: family planning appears to be one of the most cost-effective ways of improving agency.218 million women in lower income countries have an unmet need for family planning. This resulted in more than 85 million unintended pregnancies in 2019 alone. More than one third of these unintended pregnancies in sub-Saharan Africa end in an unsafe abortion. These often take place outside of a health facility in countries where abortions are dangerous, heavily stigmatised, and in some cases illegal.While there is substantial work in this space by non-EA actors, there remain significantly neglected opportunities. One of these is the specific provision of postpartum (post-birth) family planning services. Short-spaced pregnancies - births that occur within 2 years over each other - significantly increase the risks of both maternal and infant mortality.A synthesis of the literature evaluating the impact of pregnancies that occur within 2 years of each other suggests a 18% higher risk of infant mortality and 16% higher risk of maternal mortality. Despite this, contraceptive use drops to less than half of the national average in the first year after giving birth, with contraceptive counselling after pregnancy rarely taking place currently.MHI is developing a programme of training to ensure high-quality contraceptive counselling consistently takes place in the postpartum period. Several randomised control trials have demonstrated the effectiveness of increased quality of care during this period, and we currently estimate that our program can avert a disability-adjusted life year (DALY) for just $166, competitive with GiveWell’s top recommended charities.Our workWhat we’ve done so farSince September, we...]]>
Ben Williamson https://forum.effectivealtruism.org/posts/MA64zRLPntTth9x6v/introducing-the-maternal-health-initiative Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Maternal Health Initiative, published by Ben Williamson on March 31, 2023 on The Effective Altruism Forum.The Maternal Health Initiative (MHI) was launched through the 2022 Charity Entrepreneurship Incubation Programme. Our mission is to improve the lives of women and children through a light-touch, scalable training programme on contraceptive counselling for healthcare workers in sub-Saharan Africa.Six months after we started, we’re excited to share more about our work so far, our mission, and the impact we’re aiming for. This post presents a short case for the value of MHI’s work, details our goals for the rest of 2023, and explains how you can support us.IntroductionMHI is an evidence-based non-profit dedicated to improving the lives of women and children across the world by training health workers to provide high-quality family planning counselling in the postpartum period.We are currently conducting preliminary piloting work in northern Ghana through local partner organisations. At present, we estimate that our programming could avert a disability-adjusted life year (DALY) for $166, and we foresee promising pathways to significant scale.This post provides a brief overview of the rationale for MHI’s work, what we’ve accomplished so far, where we’re heading, and how you can help. For more detailed information about our work, please visit our website, sign up to our mailing list, or reach out to us directly.Why MHI was foundedEvery day, 830 women die from pregnancy-related causes. These deaths overwhelmingly occur in sub-Saharan Africa, where maternal mortality rates are more than ten times higher than the next worst region. The Maternal Health Initiative was founded to change this.Increasing access to contraception is a tractable intervention with extensive evidence of its effectiveness in reducing maternal mortality at scale. More than this, access to family planning services can produce a wealth of additional impact, providing potentially transformative benefits to women’s autonomy, income, and wellbeing. The autonomy benefits are particularly exciting: family planning appears to be one of the most cost-effective ways of improving agency.218 million women in lower income countries have an unmet need for family planning. This resulted in more than 85 million unintended pregnancies in 2019 alone. More than one third of these unintended pregnancies in sub-Saharan Africa end in an unsafe abortion. These often take place outside of a health facility in countries where abortions are dangerous, heavily stigmatised, and in some cases illegal.While there is substantial work in this space by non-EA actors, there remain significantly neglected opportunities. One of these is the specific provision of postpartum (post-birth) family planning services. Short-spaced pregnancies - births that occur within 2 years over each other - significantly increase the risks of both maternal and infant mortality.A synthesis of the literature evaluating the impact of pregnancies that occur within 2 years of each other suggests a 18% higher risk of infant mortality and 16% higher risk of maternal mortality. Despite this, contraceptive use drops to less than half of the national average in the first year after giving birth, with contraceptive counselling after pregnancy rarely taking place currently.MHI is developing a programme of training to ensure high-quality contraceptive counselling consistently takes place in the postpartum period. Several randomised control trials have demonstrated the effectiveness of increased quality of care during this period, and we currently estimate that our program can avert a disability-adjusted life year (DALY) for just $166, competitive with GiveWell’s top recommended charities.Our workWhat we’ve done so farSince September, we...]]>
Fri, 31 Mar 2023 15:33:30 +0000 EA - Introducing the Maternal Health Initiative by Ben Williamson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Maternal Health Initiative, published by Ben Williamson on March 31, 2023 on The Effective Altruism Forum.The Maternal Health Initiative (MHI) was launched through the 2022 Charity Entrepreneurship Incubation Programme. Our mission is to improve the lives of women and children through a light-touch, scalable training programme on contraceptive counselling for healthcare workers in sub-Saharan Africa.Six months after we started, we’re excited to share more about our work so far, our mission, and the impact we’re aiming for. This post presents a short case for the value of MHI’s work, details our goals for the rest of 2023, and explains how you can support us.IntroductionMHI is an evidence-based non-profit dedicated to improving the lives of women and children across the world by training health workers to provide high-quality family planning counselling in the postpartum period.We are currently conducting preliminary piloting work in northern Ghana through local partner organisations. At present, we estimate that our programming could avert a disability-adjusted life year (DALY) for $166, and we foresee promising pathways to significant scale.This post provides a brief overview of the rationale for MHI’s work, what we’ve accomplished so far, where we’re heading, and how you can help. For more detailed information about our work, please visit our website, sign up to our mailing list, or reach out to us directly.Why MHI was foundedEvery day, 830 women die from pregnancy-related causes. These deaths overwhelmingly occur in sub-Saharan Africa, where maternal mortality rates are more than ten times higher than the next worst region. The Maternal Health Initiative was founded to change this.Increasing access to contraception is a tractable intervention with extensive evidence of its effectiveness in reducing maternal mortality at scale. More than this, access to family planning services can produce a wealth of additional impact, providing potentially transformative benefits to women’s autonomy, income, and wellbeing. The autonomy benefits are particularly exciting: family planning appears to be one of the most cost-effective ways of improving agency.218 million women in lower income countries have an unmet need for family planning. This resulted in more than 85 million unintended pregnancies in 2019 alone. More than one third of these unintended pregnancies in sub-Saharan Africa end in an unsafe abortion. These often take place outside of a health facility in countries where abortions are dangerous, heavily stigmatised, and in some cases illegal.While there is substantial work in this space by non-EA actors, there remain significantly neglected opportunities. One of these is the specific provision of postpartum (post-birth) family planning services. Short-spaced pregnancies - births that occur within 2 years over each other - significantly increase the risks of both maternal and infant mortality.A synthesis of the literature evaluating the impact of pregnancies that occur within 2 years of each other suggests a 18% higher risk of infant mortality and 16% higher risk of maternal mortality. Despite this, contraceptive use drops to less than half of the national average in the first year after giving birth, with contraceptive counselling after pregnancy rarely taking place currently.MHI is developing a programme of training to ensure high-quality contraceptive counselling consistently takes place in the postpartum period. Several randomised control trials have demonstrated the effectiveness of increased quality of care during this period, and we currently estimate that our program can avert a disability-adjusted life year (DALY) for just $166, competitive with GiveWell’s top recommended charities.Our workWhat we’ve done so farSince September, we...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Maternal Health Initiative, published by Ben Williamson on March 31, 2023 on The Effective Altruism Forum.The Maternal Health Initiative (MHI) was launched through the 2022 Charity Entrepreneurship Incubation Programme. Our mission is to improve the lives of women and children through a light-touch, scalable training programme on contraceptive counselling for healthcare workers in sub-Saharan Africa.Six months after we started, we’re excited to share more about our work so far, our mission, and the impact we’re aiming for. This post presents a short case for the value of MHI’s work, details our goals for the rest of 2023, and explains how you can support us.IntroductionMHI is an evidence-based non-profit dedicated to improving the lives of women and children across the world by training health workers to provide high-quality family planning counselling in the postpartum period.We are currently conducting preliminary piloting work in northern Ghana through local partner organisations. At present, we estimate that our programming could avert a disability-adjusted life year (DALY) for $166, and we foresee promising pathways to significant scale.This post provides a brief overview of the rationale for MHI’s work, what we’ve accomplished so far, where we’re heading, and how you can help. For more detailed information about our work, please visit our website, sign up to our mailing list, or reach out to us directly.Why MHI was foundedEvery day, 830 women die from pregnancy-related causes. These deaths overwhelmingly occur in sub-Saharan Africa, where maternal mortality rates are more than ten times higher than the next worst region. The Maternal Health Initiative was founded to change this.Increasing access to contraception is a tractable intervention with extensive evidence of its effectiveness in reducing maternal mortality at scale. More than this, access to family planning services can produce a wealth of additional impact, providing potentially transformative benefits to women’s autonomy, income, and wellbeing. The autonomy benefits are particularly exciting: family planning appears to be one of the most cost-effective ways of improving agency.218 million women in lower income countries have an unmet need for family planning. This resulted in more than 85 million unintended pregnancies in 2019 alone. More than one third of these unintended pregnancies in sub-Saharan Africa end in an unsafe abortion. These often take place outside of a health facility in countries where abortions are dangerous, heavily stigmatised, and in some cases illegal.While there is substantial work in this space by non-EA actors, there remain significantly neglected opportunities. One of these is the specific provision of postpartum (post-birth) family planning services. Short-spaced pregnancies - births that occur within 2 years over each other - significantly increase the risks of both maternal and infant mortality.A synthesis of the literature evaluating the impact of pregnancies that occur within 2 years of each other suggests a 18% higher risk of infant mortality and 16% higher risk of maternal mortality. Despite this, contraceptive use drops to less than half of the national average in the first year after giving birth, with contraceptive counselling after pregnancy rarely taking place currently.MHI is developing a programme of training to ensure high-quality contraceptive counselling consistently takes place in the postpartum period. Several randomised control trials have demonstrated the effectiveness of increased quality of care during this period, and we currently estimate that our program can avert a disability-adjusted life year (DALY) for just $166, competitive with GiveWell’s top recommended charities.Our workWhat we’ve done so farSince September, we...]]>
Ben Williamson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:16 None full 5427
DaRvpDHHdaoad9Tfu_NL_EA_EA EA - Critiques of prominent AI safety labs: Redwood Research by Omega Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Critiques of prominent AI safety labs: Redwood Research, published by Omega on March 31, 2023 on The Effective Altruism Forum.In this series, we evaluate AI safety organizations that have received more than $10 million per year in funding. We do not critique MIRI and OpenAI as there have been several conversations and critiques of these organizations (1,2,3).The authors of this post include two technical AI safety researchers, and others who have spent significant time in the Bay Area community. One technical AI safety researcher is senior (>4 years experience), the other junior. We would like to make our critiques non-anonymously but unfortunately believe this would be professionally unwise. Further, we believe our criticisms stand on their own. Though we have done our best to remain impartial, readers should not assume that we are completely unbiased or don’t have anything to personally or professionally gain from publishing these critiques. We take the benefits and drawbacks of the anonymous nature of our post seriously, and are open to feedback on anything we might have done better.The first post in this series will cover Redwood Research (Redwood). Redwood is a non-profit started in 2021 working on technical AI safety (TAIS) alignment research. Their approach is heavily informed by the work of Paul Christiano, who runs the Alignment Research Center (ARC), and previously ran the language model alignment team at OpenAI. Paul originally proposed one of Redwood's original projects and is on Redwood’s board. Redwood has strong connections with central EA leadership and funders, has received significant funding since its inception, recruits almost exclusively from the EA movement, and partly acts as a gatekeeper to central EA institutions.We shared a draft of this document with Redwood prior to publication and are grateful for their feedback and corrections (we recommend others also reach out similarly). We’ve also invited them to share their views in the comments of this post.We would like to also invite others to share their thoughts in the comments openly if you feel comfortable, or contribute anonymously via this form. We will add inputs from there to the comments section of this post, but will likely not be updating the main body of the post as a result (unless comments catch errors in our writing).Summary of our viewsWe believe that Redwood has some serious flaws as an org, yet has received a significant amount of funding from a central EA grantmaker (Open Philanthropy). Inadequately kept in check conflicts of interest (COIs) might be partly responsible for funders giving a relatively immature org lots of money and causing some negative effects on the field and EA community. We will share our critiques of Constellation (and Open Philanthropy) in a follow-up post. We also have some suggestions for Redwood that we believe might help them achieve their goals.Redwood is a young organization that has room to improve. While there may be flaws in their current approach, it is possible for them to learn and adapt in order to produce more accurate and reliable results in the future. Many successful organizations made significant pivots while at a similar scale to Redwood, and we remain cautiously optimistic about Redwood's future potential.An Overview of Redwood ResearchGrants: Redwood has received just over $21 million dollars in funding that we are aware of, for their own operations (2/3, or $14 million) and running Constellation (1/3 or $7 million) Redwood received $20 million from Open Philanthropy (OP) (grant 1 & 2) and $1.27 million from the Survival and Flourishing Fund. They also were granted (but never received) $6.6 million from FTX Future Fund.Output:Research: Redwood lists six research projects on their website: causal scrubbing, interpretability ...]]>
Omega https://forum.effectivealtruism.org/posts/DaRvpDHHdaoad9Tfu/critiques-of-prominent-ai-safety-labs-redwood-research Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Critiques of prominent AI safety labs: Redwood Research, published by Omega on March 31, 2023 on The Effective Altruism Forum.In this series, we evaluate AI safety organizations that have received more than $10 million per year in funding. We do not critique MIRI and OpenAI as there have been several conversations and critiques of these organizations (1,2,3).The authors of this post include two technical AI safety researchers, and others who have spent significant time in the Bay Area community. One technical AI safety researcher is senior (>4 years experience), the other junior. We would like to make our critiques non-anonymously but unfortunately believe this would be professionally unwise. Further, we believe our criticisms stand on their own. Though we have done our best to remain impartial, readers should not assume that we are completely unbiased or don’t have anything to personally or professionally gain from publishing these critiques. We take the benefits and drawbacks of the anonymous nature of our post seriously, and are open to feedback on anything we might have done better.The first post in this series will cover Redwood Research (Redwood). Redwood is a non-profit started in 2021 working on technical AI safety (TAIS) alignment research. Their approach is heavily informed by the work of Paul Christiano, who runs the Alignment Research Center (ARC), and previously ran the language model alignment team at OpenAI. Paul originally proposed one of Redwood's original projects and is on Redwood’s board. Redwood has strong connections with central EA leadership and funders, has received significant funding since its inception, recruits almost exclusively from the EA movement, and partly acts as a gatekeeper to central EA institutions.We shared a draft of this document with Redwood prior to publication and are grateful for their feedback and corrections (we recommend others also reach out similarly). We’ve also invited them to share their views in the comments of this post.We would like to also invite others to share their thoughts in the comments openly if you feel comfortable, or contribute anonymously via this form. We will add inputs from there to the comments section of this post, but will likely not be updating the main body of the post as a result (unless comments catch errors in our writing).Summary of our viewsWe believe that Redwood has some serious flaws as an org, yet has received a significant amount of funding from a central EA grantmaker (Open Philanthropy). Inadequately kept in check conflicts of interest (COIs) might be partly responsible for funders giving a relatively immature org lots of money and causing some negative effects on the field and EA community. We will share our critiques of Constellation (and Open Philanthropy) in a follow-up post. We also have some suggestions for Redwood that we believe might help them achieve their goals.Redwood is a young organization that has room to improve. While there may be flaws in their current approach, it is possible for them to learn and adapt in order to produce more accurate and reliable results in the future. Many successful organizations made significant pivots while at a similar scale to Redwood, and we remain cautiously optimistic about Redwood's future potential.An Overview of Redwood ResearchGrants: Redwood has received just over $21 million dollars in funding that we are aware of, for their own operations (2/3, or $14 million) and running Constellation (1/3 or $7 million) Redwood received $20 million from Open Philanthropy (OP) (grant 1 & 2) and $1.27 million from the Survival and Flourishing Fund. They also were granted (but never received) $6.6 million from FTX Future Fund.Output:Research: Redwood lists six research projects on their website: causal scrubbing, interpretability ...]]>
Fri, 31 Mar 2023 10:15:53 +0000 EA - Critiques of prominent AI safety labs: Redwood Research by Omega Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Critiques of prominent AI safety labs: Redwood Research, published by Omega on March 31, 2023 on The Effective Altruism Forum.In this series, we evaluate AI safety organizations that have received more than $10 million per year in funding. We do not critique MIRI and OpenAI as there have been several conversations and critiques of these organizations (1,2,3).The authors of this post include two technical AI safety researchers, and others who have spent significant time in the Bay Area community. One technical AI safety researcher is senior (>4 years experience), the other junior. We would like to make our critiques non-anonymously but unfortunately believe this would be professionally unwise. Further, we believe our criticisms stand on their own. Though we have done our best to remain impartial, readers should not assume that we are completely unbiased or don’t have anything to personally or professionally gain from publishing these critiques. We take the benefits and drawbacks of the anonymous nature of our post seriously, and are open to feedback on anything we might have done better.The first post in this series will cover Redwood Research (Redwood). Redwood is a non-profit started in 2021 working on technical AI safety (TAIS) alignment research. Their approach is heavily informed by the work of Paul Christiano, who runs the Alignment Research Center (ARC), and previously ran the language model alignment team at OpenAI. Paul originally proposed one of Redwood's original projects and is on Redwood’s board. Redwood has strong connections with central EA leadership and funders, has received significant funding since its inception, recruits almost exclusively from the EA movement, and partly acts as a gatekeeper to central EA institutions.We shared a draft of this document with Redwood prior to publication and are grateful for their feedback and corrections (we recommend others also reach out similarly). We’ve also invited them to share their views in the comments of this post.We would like to also invite others to share their thoughts in the comments openly if you feel comfortable, or contribute anonymously via this form. We will add inputs from there to the comments section of this post, but will likely not be updating the main body of the post as a result (unless comments catch errors in our writing).Summary of our viewsWe believe that Redwood has some serious flaws as an org, yet has received a significant amount of funding from a central EA grantmaker (Open Philanthropy). Inadequately kept in check conflicts of interest (COIs) might be partly responsible for funders giving a relatively immature org lots of money and causing some negative effects on the field and EA community. We will share our critiques of Constellation (and Open Philanthropy) in a follow-up post. We also have some suggestions for Redwood that we believe might help them achieve their goals.Redwood is a young organization that has room to improve. While there may be flaws in their current approach, it is possible for them to learn and adapt in order to produce more accurate and reliable results in the future. Many successful organizations made significant pivots while at a similar scale to Redwood, and we remain cautiously optimistic about Redwood's future potential.An Overview of Redwood ResearchGrants: Redwood has received just over $21 million dollars in funding that we are aware of, for their own operations (2/3, or $14 million) and running Constellation (1/3 or $7 million) Redwood received $20 million from Open Philanthropy (OP) (grant 1 & 2) and $1.27 million from the Survival and Flourishing Fund. They also were granted (but never received) $6.6 million from FTX Future Fund.Output:Research: Redwood lists six research projects on their website: causal scrubbing, interpretability ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Critiques of prominent AI safety labs: Redwood Research, published by Omega on March 31, 2023 on The Effective Altruism Forum.In this series, we evaluate AI safety organizations that have received more than $10 million per year in funding. We do not critique MIRI and OpenAI as there have been several conversations and critiques of these organizations (1,2,3).The authors of this post include two technical AI safety researchers, and others who have spent significant time in the Bay Area community. One technical AI safety researcher is senior (>4 years experience), the other junior. We would like to make our critiques non-anonymously but unfortunately believe this would be professionally unwise. Further, we believe our criticisms stand on their own. Though we have done our best to remain impartial, readers should not assume that we are completely unbiased or don’t have anything to personally or professionally gain from publishing these critiques. We take the benefits and drawbacks of the anonymous nature of our post seriously, and are open to feedback on anything we might have done better.The first post in this series will cover Redwood Research (Redwood). Redwood is a non-profit started in 2021 working on technical AI safety (TAIS) alignment research. Their approach is heavily informed by the work of Paul Christiano, who runs the Alignment Research Center (ARC), and previously ran the language model alignment team at OpenAI. Paul originally proposed one of Redwood's original projects and is on Redwood’s board. Redwood has strong connections with central EA leadership and funders, has received significant funding since its inception, recruits almost exclusively from the EA movement, and partly acts as a gatekeeper to central EA institutions.We shared a draft of this document with Redwood prior to publication and are grateful for their feedback and corrections (we recommend others also reach out similarly). We’ve also invited them to share their views in the comments of this post.We would like to also invite others to share their thoughts in the comments openly if you feel comfortable, or contribute anonymously via this form. We will add inputs from there to the comments section of this post, but will likely not be updating the main body of the post as a result (unless comments catch errors in our writing).Summary of our viewsWe believe that Redwood has some serious flaws as an org, yet has received a significant amount of funding from a central EA grantmaker (Open Philanthropy). Inadequately kept in check conflicts of interest (COIs) might be partly responsible for funders giving a relatively immature org lots of money and causing some negative effects on the field and EA community. We will share our critiques of Constellation (and Open Philanthropy) in a follow-up post. We also have some suggestions for Redwood that we believe might help them achieve their goals.Redwood is a young organization that has room to improve. While there may be flaws in their current approach, it is possible for them to learn and adapt in order to produce more accurate and reliable results in the future. Many successful organizations made significant pivots while at a similar scale to Redwood, and we remain cautiously optimistic about Redwood's future potential.An Overview of Redwood ResearchGrants: Redwood has received just over $21 million dollars in funding that we are aware of, for their own operations (2/3, or $14 million) and running Constellation (1/3 or $7 million) Redwood received $20 million from Open Philanthropy (OP) (grant 1 & 2) and $1.27 million from the Survival and Flourishing Fund. They also were granted (but never received) $6.6 million from FTX Future Fund.Output:Research: Redwood lists six research projects on their website: causal scrubbing, interpretability ...]]>
Omega https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 34:15 None full 5429
BGFbwca4nfagvB9Xb_NL_EA_EA EA - Deference on AI timelines: survey results by Sam Clarke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deference on AI timelines: survey results, published by Sam Clarke on March 30, 2023 on The Effective Altruism Forum.Crossposted to LessWrong.In October 2022, 91 EA Forum/LessWrong users answered the AI timelines deference survey. This post summarises the results.ContextThe survey was advertised in this forum post, and anyone could respond. Respondents were asked to whom they defer most, second-most and third-most, on AI timelines. You can see the survey here.ResultsThis spreadsheet has the raw anonymised survey results. Here are some plots which try to summarise them.Simply tallying up the number of times that each person is deferred to:The plot only features people who were deferred to by at least two respondents.Some basic observations:Overall, respondents defer most frequently to themselves—i.e. their “inside view” or independent impression—and Ajeya Cotra. These two responses were each at least twice as frequent as any other response.Then there’s a kind of “middle cluster”—featuring Daniel Kokotajlo, Paul Christiano, Eliezer Yudkowsky and Holden Karnofsky—where, again, each of these responses were ~at least twice as frequent as any other response.Then comes everyone else. There’s probably something more fine-grained to be said here, but it doesn’t seem crucial to understanding the overall picture.What happens if you redo the plot with a different metric? How sensitive are the results to that?One thing we tried was computing a “weighted” score for each person, by giving them:3 points for each respondent who defers to them the most2 points for each respondent who defers to them second-most1 point for each respondent who defers to them third-most.If you redo the plot with that score, you get this plot. The ordering changes a bit, but I don’t think it really changes the high-level picture. In particular, the basic observations in the previous section still hold.We think the weighted score (described in this section) and unweighted score (described in the previous section) are the two most natural metrics, so we didn’t try out any others.Don’t some people have highly correlated views? What happens if you cluster those together?Yeah, we do think some people have highly correlated views, in the sense that their views depend on similar assumptions or arguments. We tried plotting the results using the following basic clusters:Open Philanthropy cluster = {Ajeya Cotra, Holden Karnofsky, Paul Christiano, Bioanchors}MIRI cluster = {MIRI, Eliezer Yudkowsky}Daniel Kokotajlo gets his own clusterInside view = deferring to yourself, i.e. your independent impressionEveryone else = all responses not in one of the above categoriesHere’s what you get if you simply tally up the number of times each cluster is deferred to:This plot gives a breakdown of two of the clusters (there’s no additional information that isn’t contained in the above two plots, it just gives a different view).This is just one way of clustering the responses, which seemed reasonable to us. There are other clusters you could make.Limitations of the surveySelection effects. This probably isn’t a representative sample of forum users, let alone of people who engage in discourse about AI timelines, or make decisions influenced by AI timelines.The survey didn’t elicit much detail about the weight that respondents gave to different views. We simply asked who respondents deferred most, second-most and third-most to. This misses a lot of information.The boundary between [deferring] and [having an independent impression] is vague. Consider: how much effort do you need to spend examining some assumption/argument for yourself, before considering it an independent impression, rather than deference? This is a limitation of the survey, because different respondents may have been using different b...]]>
Sam Clarke https://forum.effectivealtruism.org/posts/BGFbwca4nfagvB9Xb/deference-on-ai-timelines-survey-results Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deference on AI timelines: survey results, published by Sam Clarke on March 30, 2023 on The Effective Altruism Forum.Crossposted to LessWrong.In October 2022, 91 EA Forum/LessWrong users answered the AI timelines deference survey. This post summarises the results.ContextThe survey was advertised in this forum post, and anyone could respond. Respondents were asked to whom they defer most, second-most and third-most, on AI timelines. You can see the survey here.ResultsThis spreadsheet has the raw anonymised survey results. Here are some plots which try to summarise them.Simply tallying up the number of times that each person is deferred to:The plot only features people who were deferred to by at least two respondents.Some basic observations:Overall, respondents defer most frequently to themselves—i.e. their “inside view” or independent impression—and Ajeya Cotra. These two responses were each at least twice as frequent as any other response.Then there’s a kind of “middle cluster”—featuring Daniel Kokotajlo, Paul Christiano, Eliezer Yudkowsky and Holden Karnofsky—where, again, each of these responses were ~at least twice as frequent as any other response.Then comes everyone else. There’s probably something more fine-grained to be said here, but it doesn’t seem crucial to understanding the overall picture.What happens if you redo the plot with a different metric? How sensitive are the results to that?One thing we tried was computing a “weighted” score for each person, by giving them:3 points for each respondent who defers to them the most2 points for each respondent who defers to them second-most1 point for each respondent who defers to them third-most.If you redo the plot with that score, you get this plot. The ordering changes a bit, but I don’t think it really changes the high-level picture. In particular, the basic observations in the previous section still hold.We think the weighted score (described in this section) and unweighted score (described in the previous section) are the two most natural metrics, so we didn’t try out any others.Don’t some people have highly correlated views? What happens if you cluster those together?Yeah, we do think some people have highly correlated views, in the sense that their views depend on similar assumptions or arguments. We tried plotting the results using the following basic clusters:Open Philanthropy cluster = {Ajeya Cotra, Holden Karnofsky, Paul Christiano, Bioanchors}MIRI cluster = {MIRI, Eliezer Yudkowsky}Daniel Kokotajlo gets his own clusterInside view = deferring to yourself, i.e. your independent impressionEveryone else = all responses not in one of the above categoriesHere’s what you get if you simply tally up the number of times each cluster is deferred to:This plot gives a breakdown of two of the clusters (there’s no additional information that isn’t contained in the above two plots, it just gives a different view).This is just one way of clustering the responses, which seemed reasonable to us. There are other clusters you could make.Limitations of the surveySelection effects. This probably isn’t a representative sample of forum users, let alone of people who engage in discourse about AI timelines, or make decisions influenced by AI timelines.The survey didn’t elicit much detail about the weight that respondents gave to different views. We simply asked who respondents deferred most, second-most and third-most to. This misses a lot of information.The boundary between [deferring] and [having an independent impression] is vague. Consider: how much effort do you need to spend examining some assumption/argument for yourself, before considering it an independent impression, rather than deference? This is a limitation of the survey, because different respondents may have been using different b...]]>
Fri, 31 Mar 2023 09:58:00 +0000 EA - Deference on AI timelines: survey results by Sam Clarke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deference on AI timelines: survey results, published by Sam Clarke on March 30, 2023 on The Effective Altruism Forum.Crossposted to LessWrong.In October 2022, 91 EA Forum/LessWrong users answered the AI timelines deference survey. This post summarises the results.ContextThe survey was advertised in this forum post, and anyone could respond. Respondents were asked to whom they defer most, second-most and third-most, on AI timelines. You can see the survey here.ResultsThis spreadsheet has the raw anonymised survey results. Here are some plots which try to summarise them.Simply tallying up the number of times that each person is deferred to:The plot only features people who were deferred to by at least two respondents.Some basic observations:Overall, respondents defer most frequently to themselves—i.e. their “inside view” or independent impression—and Ajeya Cotra. These two responses were each at least twice as frequent as any other response.Then there’s a kind of “middle cluster”—featuring Daniel Kokotajlo, Paul Christiano, Eliezer Yudkowsky and Holden Karnofsky—where, again, each of these responses were ~at least twice as frequent as any other response.Then comes everyone else. There’s probably something more fine-grained to be said here, but it doesn’t seem crucial to understanding the overall picture.What happens if you redo the plot with a different metric? How sensitive are the results to that?One thing we tried was computing a “weighted” score for each person, by giving them:3 points for each respondent who defers to them the most2 points for each respondent who defers to them second-most1 point for each respondent who defers to them third-most.If you redo the plot with that score, you get this plot. The ordering changes a bit, but I don’t think it really changes the high-level picture. In particular, the basic observations in the previous section still hold.We think the weighted score (described in this section) and unweighted score (described in the previous section) are the two most natural metrics, so we didn’t try out any others.Don’t some people have highly correlated views? What happens if you cluster those together?Yeah, we do think some people have highly correlated views, in the sense that their views depend on similar assumptions or arguments. We tried plotting the results using the following basic clusters:Open Philanthropy cluster = {Ajeya Cotra, Holden Karnofsky, Paul Christiano, Bioanchors}MIRI cluster = {MIRI, Eliezer Yudkowsky}Daniel Kokotajlo gets his own clusterInside view = deferring to yourself, i.e. your independent impressionEveryone else = all responses not in one of the above categoriesHere’s what you get if you simply tally up the number of times each cluster is deferred to:This plot gives a breakdown of two of the clusters (there’s no additional information that isn’t contained in the above two plots, it just gives a different view).This is just one way of clustering the responses, which seemed reasonable to us. There are other clusters you could make.Limitations of the surveySelection effects. This probably isn’t a representative sample of forum users, let alone of people who engage in discourse about AI timelines, or make decisions influenced by AI timelines.The survey didn’t elicit much detail about the weight that respondents gave to different views. We simply asked who respondents deferred most, second-most and third-most to. This misses a lot of information.The boundary between [deferring] and [having an independent impression] is vague. Consider: how much effort do you need to spend examining some assumption/argument for yourself, before considering it an independent impression, rather than deference? This is a limitation of the survey, because different respondents may have been using different b...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deference on AI timelines: survey results, published by Sam Clarke on March 30, 2023 on The Effective Altruism Forum.Crossposted to LessWrong.In October 2022, 91 EA Forum/LessWrong users answered the AI timelines deference survey. This post summarises the results.ContextThe survey was advertised in this forum post, and anyone could respond. Respondents were asked to whom they defer most, second-most and third-most, on AI timelines. You can see the survey here.ResultsThis spreadsheet has the raw anonymised survey results. Here are some plots which try to summarise them.Simply tallying up the number of times that each person is deferred to:The plot only features people who were deferred to by at least two respondents.Some basic observations:Overall, respondents defer most frequently to themselves—i.e. their “inside view” or independent impression—and Ajeya Cotra. These two responses were each at least twice as frequent as any other response.Then there’s a kind of “middle cluster”—featuring Daniel Kokotajlo, Paul Christiano, Eliezer Yudkowsky and Holden Karnofsky—where, again, each of these responses were ~at least twice as frequent as any other response.Then comes everyone else. There’s probably something more fine-grained to be said here, but it doesn’t seem crucial to understanding the overall picture.What happens if you redo the plot with a different metric? How sensitive are the results to that?One thing we tried was computing a “weighted” score for each person, by giving them:3 points for each respondent who defers to them the most2 points for each respondent who defers to them second-most1 point for each respondent who defers to them third-most.If you redo the plot with that score, you get this plot. The ordering changes a bit, but I don’t think it really changes the high-level picture. In particular, the basic observations in the previous section still hold.We think the weighted score (described in this section) and unweighted score (described in the previous section) are the two most natural metrics, so we didn’t try out any others.Don’t some people have highly correlated views? What happens if you cluster those together?Yeah, we do think some people have highly correlated views, in the sense that their views depend on similar assumptions or arguments. We tried plotting the results using the following basic clusters:Open Philanthropy cluster = {Ajeya Cotra, Holden Karnofsky, Paul Christiano, Bioanchors}MIRI cluster = {MIRI, Eliezer Yudkowsky}Daniel Kokotajlo gets his own clusterInside view = deferring to yourself, i.e. your independent impressionEveryone else = all responses not in one of the above categoriesHere’s what you get if you simply tally up the number of times each cluster is deferred to:This plot gives a breakdown of two of the clusters (there’s no additional information that isn’t contained in the above two plots, it just gives a different view).This is just one way of clustering the responses, which seemed reasonable to us. There are other clusters you could make.Limitations of the surveySelection effects. This probably isn’t a representative sample of forum users, let alone of people who engage in discourse about AI timelines, or make decisions influenced by AI timelines.The survey didn’t elicit much detail about the weight that respondents gave to different views. We simply asked who respondents deferred most, second-most and third-most to. This misses a lot of information.The boundary between [deferring] and [having an independent impression] is vague. Consider: how much effort do you need to spend examining some assumption/argument for yourself, before considering it an independent impression, rather than deference? This is a limitation of the survey, because different respondents may have been using different b...]]>
Sam Clarke https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:42 None full 5421
EipE75vsDuD7bdJar_NL_EA_EA EA - GWWC's 2020–2022 Impact evaluation (executive summary) by Michael Townsend Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's 2020–2022 Impact evaluation (executive summary), published by Michael Townsend on March 31, 2023 on The Effective Altruism Forum.Giving What We Can (GWWC) is on a mission to create a world in which giving effectively and significantly is a cultural norm. Our research recommendations and donation platform help people find and donate to effective charities, and our community — in particular, our pledgers — help foster a culture that inspires others to give.In this impact evaluation, we examine GWWC's cost-effectiveness from 2020 to 2022 in terms of how much money is directed to highly effective charities due to our work.We have several reasons for doing this:To provide potential donors with information about our past cost-effectiveness.To hold ourselves accountable and ensure that our activities are providing enough value to others.To determine which of our activities are most successful, so we can make more informed strategic decisions about where we should focus our efforts.To provide an example impact evaluation framework which other effective giving organisations can draw from for their own evaluations.This evaluation reflects two months of work by the GWWC research team, including conducting multiple surveys and analysing the data in our existing database. There are several limitations to our approach — some of which we discuss below. We did not aim for a comprehensive or “academically” correct answer to the question of “What is Giving What We Can’s impact?” Rather, in our analyses we are aiming for usefulness, justifiability, and transparency: we aim to practise what we preach and for this evaluation to meet the same standards of cost-effectiveness as we have for our other activities.Below, we share our key results, some guidance and caveats on how to interpret them, and our own takeaways from this evaluation. GWWC has historically derived a lot of value from our community’s feedback and input, so we invite readers to share any comments or takeaways they may have on the basis of reviewing this evaluation and its results, either by directly commenting or by reaching out to sjir@givingwhatwecan.org.Key resultsOur primary goal was to identify our overall cost-effectiveness as a giving multiplier — the ratio of our net benefits (additional money directed to highly effective charities, accounting for the opportunity costs of GWWC staff) compared to our operating costs.We estimate our giving multiplier for 2020–2022 is 30x, and that we counterfactually generated $62 million of value for highly effective charities.We were also particularly interested in the average lifetime value that GWWC contributes per pledge, as this can inform our future priorities.We estimate we counterfactually generate $22,000 of value for highly effective charities per GWWC Pledge, and $2,000 per Trial Pledge.We used these estimates to help inform our answer to the following question: In 2020–2022, did we generate more value through our pledges or through our non-pledge work?We estimate that pledgers donated $26 million in 2020–2022 because of GWWC. We also estimate GWWC will have caused $83 million of value from the new pledges taken in 2020–2022.We estimate GWWC caused $19 million in donations to highly effective charities from non-pledge donors in 2020–2022.These key results are arrived at through dozens of constituent estimates, many of which are independently interesting and inform our takeaways below. We also provide alternative conservative estimates for each of our best-guess estimates.How to interpret our resultsThis section provides several high-level caveats to help readers better understand what the results of our impact evaluation do and don’t communicate about our impact.We generally looked at average rather than marginal cost-effectivenessMost of our ...]]>
Michael Townsend https://forum.effectivealtruism.org/posts/EipE75vsDuD7bdJar/gwwc-s-2020-2022-impact-evaluation-executive-summary Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's 2020–2022 Impact evaluation (executive summary), published by Michael Townsend on March 31, 2023 on The Effective Altruism Forum.Giving What We Can (GWWC) is on a mission to create a world in which giving effectively and significantly is a cultural norm. Our research recommendations and donation platform help people find and donate to effective charities, and our community — in particular, our pledgers — help foster a culture that inspires others to give.In this impact evaluation, we examine GWWC's cost-effectiveness from 2020 to 2022 in terms of how much money is directed to highly effective charities due to our work.We have several reasons for doing this:To provide potential donors with information about our past cost-effectiveness.To hold ourselves accountable and ensure that our activities are providing enough value to others.To determine which of our activities are most successful, so we can make more informed strategic decisions about where we should focus our efforts.To provide an example impact evaluation framework which other effective giving organisations can draw from for their own evaluations.This evaluation reflects two months of work by the GWWC research team, including conducting multiple surveys and analysing the data in our existing database. There are several limitations to our approach — some of which we discuss below. We did not aim for a comprehensive or “academically” correct answer to the question of “What is Giving What We Can’s impact?” Rather, in our analyses we are aiming for usefulness, justifiability, and transparency: we aim to practise what we preach and for this evaluation to meet the same standards of cost-effectiveness as we have for our other activities.Below, we share our key results, some guidance and caveats on how to interpret them, and our own takeaways from this evaluation. GWWC has historically derived a lot of value from our community’s feedback and input, so we invite readers to share any comments or takeaways they may have on the basis of reviewing this evaluation and its results, either by directly commenting or by reaching out to sjir@givingwhatwecan.org.Key resultsOur primary goal was to identify our overall cost-effectiveness as a giving multiplier — the ratio of our net benefits (additional money directed to highly effective charities, accounting for the opportunity costs of GWWC staff) compared to our operating costs.We estimate our giving multiplier for 2020–2022 is 30x, and that we counterfactually generated $62 million of value for highly effective charities.We were also particularly interested in the average lifetime value that GWWC contributes per pledge, as this can inform our future priorities.We estimate we counterfactually generate $22,000 of value for highly effective charities per GWWC Pledge, and $2,000 per Trial Pledge.We used these estimates to help inform our answer to the following question: In 2020–2022, did we generate more value through our pledges or through our non-pledge work?We estimate that pledgers donated $26 million in 2020–2022 because of GWWC. We also estimate GWWC will have caused $83 million of value from the new pledges taken in 2020–2022.We estimate GWWC caused $19 million in donations to highly effective charities from non-pledge donors in 2020–2022.These key results are arrived at through dozens of constituent estimates, many of which are independently interesting and inform our takeaways below. We also provide alternative conservative estimates for each of our best-guess estimates.How to interpret our resultsThis section provides several high-level caveats to help readers better understand what the results of our impact evaluation do and don’t communicate about our impact.We generally looked at average rather than marginal cost-effectivenessMost of our ...]]>
Fri, 31 Mar 2023 07:54:02 +0000 EA - GWWC's 2020–2022 Impact evaluation (executive summary) by Michael Townsend Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's 2020–2022 Impact evaluation (executive summary), published by Michael Townsend on March 31, 2023 on The Effective Altruism Forum.Giving What We Can (GWWC) is on a mission to create a world in which giving effectively and significantly is a cultural norm. Our research recommendations and donation platform help people find and donate to effective charities, and our community — in particular, our pledgers — help foster a culture that inspires others to give.In this impact evaluation, we examine GWWC's cost-effectiveness from 2020 to 2022 in terms of how much money is directed to highly effective charities due to our work.We have several reasons for doing this:To provide potential donors with information about our past cost-effectiveness.To hold ourselves accountable and ensure that our activities are providing enough value to others.To determine which of our activities are most successful, so we can make more informed strategic decisions about where we should focus our efforts.To provide an example impact evaluation framework which other effective giving organisations can draw from for their own evaluations.This evaluation reflects two months of work by the GWWC research team, including conducting multiple surveys and analysing the data in our existing database. There are several limitations to our approach — some of which we discuss below. We did not aim for a comprehensive or “academically” correct answer to the question of “What is Giving What We Can’s impact?” Rather, in our analyses we are aiming for usefulness, justifiability, and transparency: we aim to practise what we preach and for this evaluation to meet the same standards of cost-effectiveness as we have for our other activities.Below, we share our key results, some guidance and caveats on how to interpret them, and our own takeaways from this evaluation. GWWC has historically derived a lot of value from our community’s feedback and input, so we invite readers to share any comments or takeaways they may have on the basis of reviewing this evaluation and its results, either by directly commenting or by reaching out to sjir@givingwhatwecan.org.Key resultsOur primary goal was to identify our overall cost-effectiveness as a giving multiplier — the ratio of our net benefits (additional money directed to highly effective charities, accounting for the opportunity costs of GWWC staff) compared to our operating costs.We estimate our giving multiplier for 2020–2022 is 30x, and that we counterfactually generated $62 million of value for highly effective charities.We were also particularly interested in the average lifetime value that GWWC contributes per pledge, as this can inform our future priorities.We estimate we counterfactually generate $22,000 of value for highly effective charities per GWWC Pledge, and $2,000 per Trial Pledge.We used these estimates to help inform our answer to the following question: In 2020–2022, did we generate more value through our pledges or through our non-pledge work?We estimate that pledgers donated $26 million in 2020–2022 because of GWWC. We also estimate GWWC will have caused $83 million of value from the new pledges taken in 2020–2022.We estimate GWWC caused $19 million in donations to highly effective charities from non-pledge donors in 2020–2022.These key results are arrived at through dozens of constituent estimates, many of which are independently interesting and inform our takeaways below. We also provide alternative conservative estimates for each of our best-guess estimates.How to interpret our resultsThis section provides several high-level caveats to help readers better understand what the results of our impact evaluation do and don’t communicate about our impact.We generally looked at average rather than marginal cost-effectivenessMost of our ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's 2020–2022 Impact evaluation (executive summary), published by Michael Townsend on March 31, 2023 on The Effective Altruism Forum.Giving What We Can (GWWC) is on a mission to create a world in which giving effectively and significantly is a cultural norm. Our research recommendations and donation platform help people find and donate to effective charities, and our community — in particular, our pledgers — help foster a culture that inspires others to give.In this impact evaluation, we examine GWWC's cost-effectiveness from 2020 to 2022 in terms of how much money is directed to highly effective charities due to our work.We have several reasons for doing this:To provide potential donors with information about our past cost-effectiveness.To hold ourselves accountable and ensure that our activities are providing enough value to others.To determine which of our activities are most successful, so we can make more informed strategic decisions about where we should focus our efforts.To provide an example impact evaluation framework which other effective giving organisations can draw from for their own evaluations.This evaluation reflects two months of work by the GWWC research team, including conducting multiple surveys and analysing the data in our existing database. There are several limitations to our approach — some of which we discuss below. We did not aim for a comprehensive or “academically” correct answer to the question of “What is Giving What We Can’s impact?” Rather, in our analyses we are aiming for usefulness, justifiability, and transparency: we aim to practise what we preach and for this evaluation to meet the same standards of cost-effectiveness as we have for our other activities.Below, we share our key results, some guidance and caveats on how to interpret them, and our own takeaways from this evaluation. GWWC has historically derived a lot of value from our community’s feedback and input, so we invite readers to share any comments or takeaways they may have on the basis of reviewing this evaluation and its results, either by directly commenting or by reaching out to sjir@givingwhatwecan.org.Key resultsOur primary goal was to identify our overall cost-effectiveness as a giving multiplier — the ratio of our net benefits (additional money directed to highly effective charities, accounting for the opportunity costs of GWWC staff) compared to our operating costs.We estimate our giving multiplier for 2020–2022 is 30x, and that we counterfactually generated $62 million of value for highly effective charities.We were also particularly interested in the average lifetime value that GWWC contributes per pledge, as this can inform our future priorities.We estimate we counterfactually generate $22,000 of value for highly effective charities per GWWC Pledge, and $2,000 per Trial Pledge.We used these estimates to help inform our answer to the following question: In 2020–2022, did we generate more value through our pledges or through our non-pledge work?We estimate that pledgers donated $26 million in 2020–2022 because of GWWC. We also estimate GWWC will have caused $83 million of value from the new pledges taken in 2020–2022.We estimate GWWC caused $19 million in donations to highly effective charities from non-pledge donors in 2020–2022.These key results are arrived at through dozens of constituent estimates, many of which are independently interesting and inform our takeaways below. We also provide alternative conservative estimates for each of our best-guess estimates.How to interpret our resultsThis section provides several high-level caveats to help readers better understand what the results of our impact evaluation do and don’t communicate about our impact.We generally looked at average rather than marginal cost-effectivenessMost of our ...]]>
Michael Townsend https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:31 None full 5420
FejnEgjfb7LsdzsBw_NL_EA_EA EA - What's surprised me as an entry-level generalist at Open Phil and my recommendations to early career professionals by Sam Anschell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's surprised me as an entry-level generalist at Open Phil & my recommendations to early career professionals, published by Sam Anschell on March 30, 2023 on The Effective Altruism Forum.Disclaimer The opinions expressed in this post are my own and do not represent Open Philanthropy.Valentine’s day 2022 was my first day of work at Open Phil. As a 24-year-old who had spent two post-grad years as a poker dealer/cardroom union representative, I had little in the way of white collar context or transferable skills. Recently, a few undergraduates and early career professionals have reached out to learn what the job is like & how they can get further involved in EA. In this post I’ll try to provide the advice that I would have benefited from hearing a couple years ago.I’m hoping to widen the aperture of possibilities to early career professionals who are excited to use their time and talents to do good. I know how difficult it can be to land an EA job - it took years of on-and-off applying before I got an offer. It’s normal to face a string of rejections and it’s valid to feel frustrated by that, but I think the benefits to individuals and organizations when a hire is made are so great that continuing to apply is worth it. I encourage anyone who is struggling to get their foot in the door to read Aaron’s Epistemic Stories - I found it really motivating.TLDR:Before starting this job, I underestimated the $ value of person-hours at EA orgs. I may have done this because:There’s a disconnect between salary and social value generated (even though salaries at EA orgs are generous). Most for-profit companies value their average staff member’s contributions at about 2x their salary, and I suspect EA orgs value their average staff member’s contributions at more like 8x+ their salary.It could be uncomfortable to think that time at an EA org would be very valuable, both because of what it would imply for labor/leisure tradeoffs and because it could lead to imposter syndrome.It can be easy to mentally compartmentalize work at EA orgs as creating a similar level of social impact to work at nonprofits in general, despite believing that EA interventions are much more cost-effective than the average nonprofit’s interventions.Due to this underestimate, I now think I should have focused on working directly on EA projects and spending more time applying for EA jobs earlier. Here are some of my recommendations to early career professionals:Don't feel like you have to put multiple years into a job before leaving to show you’re not a job-hopper. EA orgs understand the desire to contribute to work you find meaningful as soon as you can!I suspect people apply to too few jobs given how unpleasant it can be to job hunt, and I strongly encourage you to keep putting yourself out there.I applied to a few hundred jobs before landing this one, as did many of my friends who work at EA orgs. Not getting any jobs despite many applications isn't a sign that you're a bad applicant.Doing even unsexy work for an organization that you’re strongly mission-aligned with is more motivating than you might expect.I write about impactful ways that anyone can spend time at the end of this post.A ~Year in the LifeWhat I did at workIt’s hard to look at a job description and get a sense of what the day-to-day looks like (and see whether one might be qualified for the job). Success in my and many other entry-level jobs seems to be a product of enthusiasm, dependability (which I’d define as the independence/organization skills to manage a task so that the person who assigns it doesn’t need to follow up), and good judgment (when to check in, what tone to use in emails, etc.) In my day-to-day as a business operations generalist (assistant level), I've:Helped manage the physical office space:Purchased, ren...]]>
Sam Anschell https://forum.effectivealtruism.org/posts/FejnEgjfb7LsdzsBw/what-s-surprised-me-as-an-entry-level-generalist-at-open Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's surprised me as an entry-level generalist at Open Phil & my recommendations to early career professionals, published by Sam Anschell on March 30, 2023 on The Effective Altruism Forum.Disclaimer The opinions expressed in this post are my own and do not represent Open Philanthropy.Valentine’s day 2022 was my first day of work at Open Phil. As a 24-year-old who had spent two post-grad years as a poker dealer/cardroom union representative, I had little in the way of white collar context or transferable skills. Recently, a few undergraduates and early career professionals have reached out to learn what the job is like & how they can get further involved in EA. In this post I’ll try to provide the advice that I would have benefited from hearing a couple years ago.I’m hoping to widen the aperture of possibilities to early career professionals who are excited to use their time and talents to do good. I know how difficult it can be to land an EA job - it took years of on-and-off applying before I got an offer. It’s normal to face a string of rejections and it’s valid to feel frustrated by that, but I think the benefits to individuals and organizations when a hire is made are so great that continuing to apply is worth it. I encourage anyone who is struggling to get their foot in the door to read Aaron’s Epistemic Stories - I found it really motivating.TLDR:Before starting this job, I underestimated the $ value of person-hours at EA orgs. I may have done this because:There’s a disconnect between salary and social value generated (even though salaries at EA orgs are generous). Most for-profit companies value their average staff member’s contributions at about 2x their salary, and I suspect EA orgs value their average staff member’s contributions at more like 8x+ their salary.It could be uncomfortable to think that time at an EA org would be very valuable, both because of what it would imply for labor/leisure tradeoffs and because it could lead to imposter syndrome.It can be easy to mentally compartmentalize work at EA orgs as creating a similar level of social impact to work at nonprofits in general, despite believing that EA interventions are much more cost-effective than the average nonprofit’s interventions.Due to this underestimate, I now think I should have focused on working directly on EA projects and spending more time applying for EA jobs earlier. Here are some of my recommendations to early career professionals:Don't feel like you have to put multiple years into a job before leaving to show you’re not a job-hopper. EA orgs understand the desire to contribute to work you find meaningful as soon as you can!I suspect people apply to too few jobs given how unpleasant it can be to job hunt, and I strongly encourage you to keep putting yourself out there.I applied to a few hundred jobs before landing this one, as did many of my friends who work at EA orgs. Not getting any jobs despite many applications isn't a sign that you're a bad applicant.Doing even unsexy work for an organization that you’re strongly mission-aligned with is more motivating than you might expect.I write about impactful ways that anyone can spend time at the end of this post.A ~Year in the LifeWhat I did at workIt’s hard to look at a job description and get a sense of what the day-to-day looks like (and see whether one might be qualified for the job). Success in my and many other entry-level jobs seems to be a product of enthusiasm, dependability (which I’d define as the independence/organization skills to manage a task so that the person who assigns it doesn’t need to follow up), and good judgment (when to check in, what tone to use in emails, etc.) In my day-to-day as a business operations generalist (assistant level), I've:Helped manage the physical office space:Purchased, ren...]]>
Thu, 30 Mar 2023 23:43:04 +0000 EA - What's surprised me as an entry-level generalist at Open Phil and my recommendations to early career professionals by Sam Anschell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's surprised me as an entry-level generalist at Open Phil & my recommendations to early career professionals, published by Sam Anschell on March 30, 2023 on The Effective Altruism Forum.Disclaimer The opinions expressed in this post are my own and do not represent Open Philanthropy.Valentine’s day 2022 was my first day of work at Open Phil. As a 24-year-old who had spent two post-grad years as a poker dealer/cardroom union representative, I had little in the way of white collar context or transferable skills. Recently, a few undergraduates and early career professionals have reached out to learn what the job is like & how they can get further involved in EA. In this post I’ll try to provide the advice that I would have benefited from hearing a couple years ago.I’m hoping to widen the aperture of possibilities to early career professionals who are excited to use their time and talents to do good. I know how difficult it can be to land an EA job - it took years of on-and-off applying before I got an offer. It’s normal to face a string of rejections and it’s valid to feel frustrated by that, but I think the benefits to individuals and organizations when a hire is made are so great that continuing to apply is worth it. I encourage anyone who is struggling to get their foot in the door to read Aaron’s Epistemic Stories - I found it really motivating.TLDR:Before starting this job, I underestimated the $ value of person-hours at EA orgs. I may have done this because:There’s a disconnect between salary and social value generated (even though salaries at EA orgs are generous). Most for-profit companies value their average staff member’s contributions at about 2x their salary, and I suspect EA orgs value their average staff member’s contributions at more like 8x+ their salary.It could be uncomfortable to think that time at an EA org would be very valuable, both because of what it would imply for labor/leisure tradeoffs and because it could lead to imposter syndrome.It can be easy to mentally compartmentalize work at EA orgs as creating a similar level of social impact to work at nonprofits in general, despite believing that EA interventions are much more cost-effective than the average nonprofit’s interventions.Due to this underestimate, I now think I should have focused on working directly on EA projects and spending more time applying for EA jobs earlier. Here are some of my recommendations to early career professionals:Don't feel like you have to put multiple years into a job before leaving to show you’re not a job-hopper. EA orgs understand the desire to contribute to work you find meaningful as soon as you can!I suspect people apply to too few jobs given how unpleasant it can be to job hunt, and I strongly encourage you to keep putting yourself out there.I applied to a few hundred jobs before landing this one, as did many of my friends who work at EA orgs. Not getting any jobs despite many applications isn't a sign that you're a bad applicant.Doing even unsexy work for an organization that you’re strongly mission-aligned with is more motivating than you might expect.I write about impactful ways that anyone can spend time at the end of this post.A ~Year in the LifeWhat I did at workIt’s hard to look at a job description and get a sense of what the day-to-day looks like (and see whether one might be qualified for the job). Success in my and many other entry-level jobs seems to be a product of enthusiasm, dependability (which I’d define as the independence/organization skills to manage a task so that the person who assigns it doesn’t need to follow up), and good judgment (when to check in, what tone to use in emails, etc.) In my day-to-day as a business operations generalist (assistant level), I've:Helped manage the physical office space:Purchased, ren...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's surprised me as an entry-level generalist at Open Phil & my recommendations to early career professionals, published by Sam Anschell on March 30, 2023 on The Effective Altruism Forum.Disclaimer The opinions expressed in this post are my own and do not represent Open Philanthropy.Valentine’s day 2022 was my first day of work at Open Phil. As a 24-year-old who had spent two post-grad years as a poker dealer/cardroom union representative, I had little in the way of white collar context or transferable skills. Recently, a few undergraduates and early career professionals have reached out to learn what the job is like & how they can get further involved in EA. In this post I’ll try to provide the advice that I would have benefited from hearing a couple years ago.I’m hoping to widen the aperture of possibilities to early career professionals who are excited to use their time and talents to do good. I know how difficult it can be to land an EA job - it took years of on-and-off applying before I got an offer. It’s normal to face a string of rejections and it’s valid to feel frustrated by that, but I think the benefits to individuals and organizations when a hire is made are so great that continuing to apply is worth it. I encourage anyone who is struggling to get their foot in the door to read Aaron’s Epistemic Stories - I found it really motivating.TLDR:Before starting this job, I underestimated the $ value of person-hours at EA orgs. I may have done this because:There’s a disconnect between salary and social value generated (even though salaries at EA orgs are generous). Most for-profit companies value their average staff member’s contributions at about 2x their salary, and I suspect EA orgs value their average staff member’s contributions at more like 8x+ their salary.It could be uncomfortable to think that time at an EA org would be very valuable, both because of what it would imply for labor/leisure tradeoffs and because it could lead to imposter syndrome.It can be easy to mentally compartmentalize work at EA orgs as creating a similar level of social impact to work at nonprofits in general, despite believing that EA interventions are much more cost-effective than the average nonprofit’s interventions.Due to this underestimate, I now think I should have focused on working directly on EA projects and spending more time applying for EA jobs earlier. Here are some of my recommendations to early career professionals:Don't feel like you have to put multiple years into a job before leaving to show you’re not a job-hopper. EA orgs understand the desire to contribute to work you find meaningful as soon as you can!I suspect people apply to too few jobs given how unpleasant it can be to job hunt, and I strongly encourage you to keep putting yourself out there.I applied to a few hundred jobs before landing this one, as did many of my friends who work at EA orgs. Not getting any jobs despite many applications isn't a sign that you're a bad applicant.Doing even unsexy work for an organization that you’re strongly mission-aligned with is more motivating than you might expect.I write about impactful ways that anyone can spend time at the end of this post.A ~Year in the LifeWhat I did at workIt’s hard to look at a job description and get a sense of what the day-to-day looks like (and see whether one might be qualified for the job). Success in my and many other entry-level jobs seems to be a product of enthusiasm, dependability (which I’d define as the independence/organization skills to manage a task so that the person who assigns it doesn’t need to follow up), and good judgment (when to check in, what tone to use in emails, etc.) In my day-to-day as a business operations generalist (assistant level), I've:Helped manage the physical office space:Purchased, ren...]]>
Sam Anschell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 19:34 None full 5422
aqJtwbe7xo9MK3ych_NL_EA_EA EA - AI and Evolution by Dan H Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI and Evolution, published by Dan H on March 30, 2023 on The Effective Altruism Forum.Executive SummaryArtificial intelligence is advancing quickly. In some ways, AI development is an uncharted frontier, but in others, it follows the familiar pattern of other competitive processes; these include biological evolution, cultural change, and competition between businesses. In each of these, there is significant variation between individuals structures and some are copied more than others, with the result that the future population is more similar to the most copied individuals of the earlier generation. In this way, species evolve, cultural ideas are transmitted across generations, and successful businesses are imitated while unsuccessful ones disappear.This paper argues that these same selection patterns will shape AI development and that the features that will be copied the most are likely to create an AI population that is dangerous to humans. As AIs become faster and more reliable than people at more and more tasks, businesses that allow AIs to perform more of their work will outperform competitors still using human labor at any stage, just as a modern clothing company that insisted on using only manual looms would be easily outcompeted by those that use industrial looms. Companies will need to increase their reliance on AIs to stay competitive, and the companies that use AIs best will dominate the marketplace. This trend means that the AIs most likely to be copied will be very efficient at achieving their goals autonomously with little human intervention.A world dominated by increasingly powerful, independent, and goal-oriented AIs is dangerous. Today, the most successful AI models are not transparent, and even their creators do not fully know how they work or what they will be able to do before they do it. We know only their results, not how they arrived at them. As people give AIs the ability to act in the real world, the AIs’ internal processes will still be inscrutable: we will be able to measure their performance only based on whether or not they are achieving their goals. This means that the AIs humans will see as most successful — and therefore the ones that are copied — will be whichever AIs are most effective at achieving their goals, even if they use harmful or illegal methods, as long as we do not detect their bad behavior.In natural selection, the same pattern emerges: individuals are cooperative or even altruistic in some situations, but ultimately, strategically selfish individuals are best able to propagate. A business that knows how to steal trade secrets or deceive regulators without getting caught will have an edge over one that refuses to ever engage in fraud on principle. During a harsh winter, an animal that steals food from others to feed its own children will likely have more surviving offspring. Similarly, the AIs that succeed most will be those able to deceive humans, seek power, and achieve their goals by any means necessary.If AI systems are more capable than we are in many domains and tend to work toward their goals even if it means violating our wishes, will we be able to stop them? As we become increasingly dependent on AIs, we may not be able to stop AI’s evolution. Humanity has never before faced a threat that is as intelligent as we are or that has goals. Unless we take thoughtful care, we could find ourselves in the position faced by wild animals today: most humans have no particular desire to harm gorillas, but the process of harnessing our intelligence toward our own goals means that they are at risk of extinction, because their needs conflict with human goals.This paper proposes several steps we can take to combat selection pressure and avoid that outcome. We are optimistic that if we are careful and prudent, we can ensur...]]>
Dan H https://forum.effectivealtruism.org/posts/aqJtwbe7xo9MK3ych/ai-and-evolution Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI and Evolution, published by Dan H on March 30, 2023 on The Effective Altruism Forum.Executive SummaryArtificial intelligence is advancing quickly. In some ways, AI development is an uncharted frontier, but in others, it follows the familiar pattern of other competitive processes; these include biological evolution, cultural change, and competition between businesses. In each of these, there is significant variation between individuals structures and some are copied more than others, with the result that the future population is more similar to the most copied individuals of the earlier generation. In this way, species evolve, cultural ideas are transmitted across generations, and successful businesses are imitated while unsuccessful ones disappear.This paper argues that these same selection patterns will shape AI development and that the features that will be copied the most are likely to create an AI population that is dangerous to humans. As AIs become faster and more reliable than people at more and more tasks, businesses that allow AIs to perform more of their work will outperform competitors still using human labor at any stage, just as a modern clothing company that insisted on using only manual looms would be easily outcompeted by those that use industrial looms. Companies will need to increase their reliance on AIs to stay competitive, and the companies that use AIs best will dominate the marketplace. This trend means that the AIs most likely to be copied will be very efficient at achieving their goals autonomously with little human intervention.A world dominated by increasingly powerful, independent, and goal-oriented AIs is dangerous. Today, the most successful AI models are not transparent, and even their creators do not fully know how they work or what they will be able to do before they do it. We know only their results, not how they arrived at them. As people give AIs the ability to act in the real world, the AIs’ internal processes will still be inscrutable: we will be able to measure their performance only based on whether or not they are achieving their goals. This means that the AIs humans will see as most successful — and therefore the ones that are copied — will be whichever AIs are most effective at achieving their goals, even if they use harmful or illegal methods, as long as we do not detect their bad behavior.In natural selection, the same pattern emerges: individuals are cooperative or even altruistic in some situations, but ultimately, strategically selfish individuals are best able to propagate. A business that knows how to steal trade secrets or deceive regulators without getting caught will have an edge over one that refuses to ever engage in fraud on principle. During a harsh winter, an animal that steals food from others to feed its own children will likely have more surviving offspring. Similarly, the AIs that succeed most will be those able to deceive humans, seek power, and achieve their goals by any means necessary.If AI systems are more capable than we are in many domains and tend to work toward their goals even if it means violating our wishes, will we be able to stop them? As we become increasingly dependent on AIs, we may not be able to stop AI’s evolution. Humanity has never before faced a threat that is as intelligent as we are or that has goals. Unless we take thoughtful care, we could find ourselves in the position faced by wild animals today: most humans have no particular desire to harm gorillas, but the process of harnessing our intelligence toward our own goals means that they are at risk of extinction, because their needs conflict with human goals.This paper proposes several steps we can take to combat selection pressure and avoid that outcome. We are optimistic that if we are careful and prudent, we can ensur...]]>
Thu, 30 Mar 2023 20:51:02 +0000 EA - AI and Evolution by Dan H Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI and Evolution, published by Dan H on March 30, 2023 on The Effective Altruism Forum.Executive SummaryArtificial intelligence is advancing quickly. In some ways, AI development is an uncharted frontier, but in others, it follows the familiar pattern of other competitive processes; these include biological evolution, cultural change, and competition between businesses. In each of these, there is significant variation between individuals structures and some are copied more than others, with the result that the future population is more similar to the most copied individuals of the earlier generation. In this way, species evolve, cultural ideas are transmitted across generations, and successful businesses are imitated while unsuccessful ones disappear.This paper argues that these same selection patterns will shape AI development and that the features that will be copied the most are likely to create an AI population that is dangerous to humans. As AIs become faster and more reliable than people at more and more tasks, businesses that allow AIs to perform more of their work will outperform competitors still using human labor at any stage, just as a modern clothing company that insisted on using only manual looms would be easily outcompeted by those that use industrial looms. Companies will need to increase their reliance on AIs to stay competitive, and the companies that use AIs best will dominate the marketplace. This trend means that the AIs most likely to be copied will be very efficient at achieving their goals autonomously with little human intervention.A world dominated by increasingly powerful, independent, and goal-oriented AIs is dangerous. Today, the most successful AI models are not transparent, and even their creators do not fully know how they work or what they will be able to do before they do it. We know only their results, not how they arrived at them. As people give AIs the ability to act in the real world, the AIs’ internal processes will still be inscrutable: we will be able to measure their performance only based on whether or not they are achieving their goals. This means that the AIs humans will see as most successful — and therefore the ones that are copied — will be whichever AIs are most effective at achieving their goals, even if they use harmful or illegal methods, as long as we do not detect their bad behavior.In natural selection, the same pattern emerges: individuals are cooperative or even altruistic in some situations, but ultimately, strategically selfish individuals are best able to propagate. A business that knows how to steal trade secrets or deceive regulators without getting caught will have an edge over one that refuses to ever engage in fraud on principle. During a harsh winter, an animal that steals food from others to feed its own children will likely have more surviving offspring. Similarly, the AIs that succeed most will be those able to deceive humans, seek power, and achieve their goals by any means necessary.If AI systems are more capable than we are in many domains and tend to work toward their goals even if it means violating our wishes, will we be able to stop them? As we become increasingly dependent on AIs, we may not be able to stop AI’s evolution. Humanity has never before faced a threat that is as intelligent as we are or that has goals. Unless we take thoughtful care, we could find ourselves in the position faced by wild animals today: most humans have no particular desire to harm gorillas, but the process of harnessing our intelligence toward our own goals means that they are at risk of extinction, because their needs conflict with human goals.This paper proposes several steps we can take to combat selection pressure and avoid that outcome. We are optimistic that if we are careful and prudent, we can ensur...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI and Evolution, published by Dan H on March 30, 2023 on The Effective Altruism Forum.Executive SummaryArtificial intelligence is advancing quickly. In some ways, AI development is an uncharted frontier, but in others, it follows the familiar pattern of other competitive processes; these include biological evolution, cultural change, and competition between businesses. In each of these, there is significant variation between individuals structures and some are copied more than others, with the result that the future population is more similar to the most copied individuals of the earlier generation. In this way, species evolve, cultural ideas are transmitted across generations, and successful businesses are imitated while unsuccessful ones disappear.This paper argues that these same selection patterns will shape AI development and that the features that will be copied the most are likely to create an AI population that is dangerous to humans. As AIs become faster and more reliable than people at more and more tasks, businesses that allow AIs to perform more of their work will outperform competitors still using human labor at any stage, just as a modern clothing company that insisted on using only manual looms would be easily outcompeted by those that use industrial looms. Companies will need to increase their reliance on AIs to stay competitive, and the companies that use AIs best will dominate the marketplace. This trend means that the AIs most likely to be copied will be very efficient at achieving their goals autonomously with little human intervention.A world dominated by increasingly powerful, independent, and goal-oriented AIs is dangerous. Today, the most successful AI models are not transparent, and even their creators do not fully know how they work or what they will be able to do before they do it. We know only their results, not how they arrived at them. As people give AIs the ability to act in the real world, the AIs’ internal processes will still be inscrutable: we will be able to measure their performance only based on whether or not they are achieving their goals. This means that the AIs humans will see as most successful — and therefore the ones that are copied — will be whichever AIs are most effective at achieving their goals, even if they use harmful or illegal methods, as long as we do not detect their bad behavior.In natural selection, the same pattern emerges: individuals are cooperative or even altruistic in some situations, but ultimately, strategically selfish individuals are best able to propagate. A business that knows how to steal trade secrets or deceive regulators without getting caught will have an edge over one that refuses to ever engage in fraud on principle. During a harsh winter, an animal that steals food from others to feed its own children will likely have more surviving offspring. Similarly, the AIs that succeed most will be those able to deceive humans, seek power, and achieve their goals by any means necessary.If AI systems are more capable than we are in many domains and tend to work toward their goals even if it means violating our wishes, will we be able to stop them? As we become increasingly dependent on AIs, we may not be able to stop AI’s evolution. Humanity has never before faced a threat that is as intelligent as we are or that has goals. Unless we take thoughtful care, we could find ourselves in the position faced by wild animals today: most humans have no particular desire to harm gorillas, but the process of harnessing our intelligence toward our own goals means that they are at risk of extinction, because their needs conflict with human goals.This paper proposes several steps we can take to combat selection pressure and avoid that outcome. We are optimistic that if we are careful and prudent, we can ensur...]]>
Dan H https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:01 None full 5417
trqRcwywEgGCzeu8B_NL_EA_EA EA - Vote for GWWC to present at SXSW Sydney! by Giving What We Can Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vote for GWWC to present at SXSW Sydney!, published by Giving What We Can on March 29, 2023 on The Effective Altruism Forum.All it takes is a single click to help vote to get GWWC onstage and presenting about our work at SXSW Sydney this year.Your vote would be super helpful and takes less than a minute.You can read the session proposal at the link below, too!Vote hereThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Giving What We Can https://forum.effectivealtruism.org/posts/trqRcwywEgGCzeu8B/vote-for-gwwc-to-present-at-sxsw-sydney Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vote for GWWC to present at SXSW Sydney!, published by Giving What We Can on March 29, 2023 on The Effective Altruism Forum.All it takes is a single click to help vote to get GWWC onstage and presenting about our work at SXSW Sydney this year.Your vote would be super helpful and takes less than a minute.You can read the session proposal at the link below, too!Vote hereThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 30 Mar 2023 11:55:13 +0000 EA - Vote for GWWC to present at SXSW Sydney! by Giving What We Can Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vote for GWWC to present at SXSW Sydney!, published by Giving What We Can on March 29, 2023 on The Effective Altruism Forum.All it takes is a single click to help vote to get GWWC onstage and presenting about our work at SXSW Sydney this year.Your vote would be super helpful and takes less than a minute.You can read the session proposal at the link below, too!Vote hereThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vote for GWWC to present at SXSW Sydney!, published by Giving What We Can on March 29, 2023 on The Effective Altruism Forum.All it takes is a single click to help vote to get GWWC onstage and presenting about our work at SXSW Sydney this year.Your vote would be super helpful and takes less than a minute.You can read the session proposal at the link below, too!Vote hereThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Giving What We Can https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:44 None full 5418
w5GsJBF8YHqWdCroW_NL_EA_EA EA - What are the arguments that support China building AGI+ if Western companies delay/pause AI development? by DMMF Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the arguments that support China building AGI+ if Western companies delay/pause AI development?, published by DMMF on March 29, 2023 on The Effective Altruism Forum.In nearly every discussion I've engaged in relating to the potential delay or pause in AI research, multiple people have responded with the quip: "If we don't build AGI, then China will, which is an even worse possible world". This is taken at face value and is something I've never seen seriously challenged.This does not seem obvious to me.Given China's semi-conductor supply chain issues, China's historical lack of cutting edge innovative technology research and the tremendous challenges powerful AI systems may pose to the governing party and their ideology, it seems highly uncertain that China will develop AGI in a world where Western orgs stopped developing improved LLMs.I appreciate people can point to multiple countries, including ones with non-impressive historical research credentials, developing nuclear weapons independently.Beyond this, can anyone point me to, or outline arguments in favour of the idea that China is very likely to develop AGI+, even if Western orgs cease research in this field.I don't have a strong view on this topic but given so many people assume it to be true, I would like to further understand the arguments in support of this claim.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
DMMF https://forum.effectivealtruism.org/posts/w5GsJBF8YHqWdCroW/what-are-the-arguments-that-support-china-building-agi-if Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the arguments that support China building AGI+ if Western companies delay/pause AI development?, published by DMMF on March 29, 2023 on The Effective Altruism Forum.In nearly every discussion I've engaged in relating to the potential delay or pause in AI research, multiple people have responded with the quip: "If we don't build AGI, then China will, which is an even worse possible world". This is taken at face value and is something I've never seen seriously challenged.This does not seem obvious to me.Given China's semi-conductor supply chain issues, China's historical lack of cutting edge innovative technology research and the tremendous challenges powerful AI systems may pose to the governing party and their ideology, it seems highly uncertain that China will develop AGI in a world where Western orgs stopped developing improved LLMs.I appreciate people can point to multiple countries, including ones with non-impressive historical research credentials, developing nuclear weapons independently.Beyond this, can anyone point me to, or outline arguments in favour of the idea that China is very likely to develop AGI+, even if Western orgs cease research in this field.I don't have a strong view on this topic but given so many people assume it to be true, I would like to further understand the arguments in support of this claim.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 30 Mar 2023 00:30:56 +0000 EA - What are the arguments that support China building AGI+ if Western companies delay/pause AI development? by DMMF Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the arguments that support China building AGI+ if Western companies delay/pause AI development?, published by DMMF on March 29, 2023 on The Effective Altruism Forum.In nearly every discussion I've engaged in relating to the potential delay or pause in AI research, multiple people have responded with the quip: "If we don't build AGI, then China will, which is an even worse possible world". This is taken at face value and is something I've never seen seriously challenged.This does not seem obvious to me.Given China's semi-conductor supply chain issues, China's historical lack of cutting edge innovative technology research and the tremendous challenges powerful AI systems may pose to the governing party and their ideology, it seems highly uncertain that China will develop AGI in a world where Western orgs stopped developing improved LLMs.I appreciate people can point to multiple countries, including ones with non-impressive historical research credentials, developing nuclear weapons independently.Beyond this, can anyone point me to, or outline arguments in favour of the idea that China is very likely to develop AGI+, even if Western orgs cease research in this field.I don't have a strong view on this topic but given so many people assume it to be true, I would like to further understand the arguments in support of this claim.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the arguments that support China building AGI+ if Western companies delay/pause AI development?, published by DMMF on March 29, 2023 on The Effective Altruism Forum.In nearly every discussion I've engaged in relating to the potential delay or pause in AI research, multiple people have responded with the quip: "If we don't build AGI, then China will, which is an even worse possible world". This is taken at face value and is something I've never seen seriously challenged.This does not seem obvious to me.Given China's semi-conductor supply chain issues, China's historical lack of cutting edge innovative technology research and the tremendous challenges powerful AI systems may pose to the governing party and their ideology, it seems highly uncertain that China will develop AGI in a world where Western orgs stopped developing improved LLMs.I appreciate people can point to multiple countries, including ones with non-impressive historical research credentials, developing nuclear weapons independently.Beyond this, can anyone point me to, or outline arguments in favour of the idea that China is very likely to develop AGI+, even if Western orgs cease research in this field.I don't have a strong view on this topic but given so many people assume it to be true, I would like to further understand the arguments in support of this claim.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
DMMF https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:34 None full 5419
nWJLSjZ5T6HW8CMER_NL_EA_EA EA - Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by jacquesthibs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky, published by jacquesthibs on March 29, 2023 on The Effective Altruism Forum.New article in Time Ideas by Eliezer Yudkowsky.Here’s some selected quotes.In reference to the letter that just came out (discussion here):We are not going to bridge that gap in six months.It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence—not perfect safety, safety in the sense of “not killing literally everyone”—could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.Some of my friends have recently reported to me that when people outside the AI industry hear about extinction risk from Artificial General Intelligence for the first time, their reaction is “maybe we should not build AGI, then.”Hearing this gave me a tiny flash of hope, because it’s a simpler, more sensible, and frankly saner reaction than I’ve been hearing over the last 20 years of trying to get anyone in the industry to take things seriously. Anyone talking that sanely deserves to hear how bad the situation actually is, and not be told that a six-month moratorium is going to fix it.Here’s what would actually need to be done:The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for anyone, including governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.That’s the kind of policy change that would cause my partner and I to hold each other, and say to each other that a miracle happened, and now there’...]]>
jacquesthibs https://forum.effectivealtruism.org/posts/nWJLSjZ5T6HW8CMER/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky, published by jacquesthibs on March 29, 2023 on The Effective Altruism Forum.New article in Time Ideas by Eliezer Yudkowsky.Here’s some selected quotes.In reference to the letter that just came out (discussion here):We are not going to bridge that gap in six months.It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence—not perfect safety, safety in the sense of “not killing literally everyone”—could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.Some of my friends have recently reported to me that when people outside the AI industry hear about extinction risk from Artificial General Intelligence for the first time, their reaction is “maybe we should not build AGI, then.”Hearing this gave me a tiny flash of hope, because it’s a simpler, more sensible, and frankly saner reaction than I’ve been hearing over the last 20 years of trying to get anyone in the industry to take things seriously. Anyone talking that sanely deserves to hear how bad the situation actually is, and not be told that a six-month moratorium is going to fix it.Here’s what would actually need to be done:The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for anyone, including governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.That’s the kind of policy change that would cause my partner and I to hold each other, and say to each other that a miracle happened, and now there’...]]>
Thu, 30 Mar 2023 00:08:19 +0000 EA - Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by jacquesthibs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky, published by jacquesthibs on March 29, 2023 on The Effective Altruism Forum.New article in Time Ideas by Eliezer Yudkowsky.Here’s some selected quotes.In reference to the letter that just came out (discussion here):We are not going to bridge that gap in six months.It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence—not perfect safety, safety in the sense of “not killing literally everyone”—could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.Some of my friends have recently reported to me that when people outside the AI industry hear about extinction risk from Artificial General Intelligence for the first time, their reaction is “maybe we should not build AGI, then.”Hearing this gave me a tiny flash of hope, because it’s a simpler, more sensible, and frankly saner reaction than I’ve been hearing over the last 20 years of trying to get anyone in the industry to take things seriously. Anyone talking that sanely deserves to hear how bad the situation actually is, and not be told that a six-month moratorium is going to fix it.Here’s what would actually need to be done:The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for anyone, including governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.That’s the kind of policy change that would cause my partner and I to hold each other, and say to each other that a miracle happened, and now there’...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky, published by jacquesthibs on March 29, 2023 on The Effective Altruism Forum.New article in Time Ideas by Eliezer Yudkowsky.Here’s some selected quotes.In reference to the letter that just came out (discussion here):We are not going to bridge that gap in six months.It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence—not perfect safety, safety in the sense of “not killing literally everyone”—could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.Some of my friends have recently reported to me that when people outside the AI industry hear about extinction risk from Artificial General Intelligence for the first time, their reaction is “maybe we should not build AGI, then.”Hearing this gave me a tiny flash of hope, because it’s a simpler, more sensible, and frankly saner reaction than I’ve been hearing over the last 20 years of trying to get anyone in the industry to take things seriously. Anyone talking that sanely deserves to hear how bad the situation actually is, and not be told that a six-month moratorium is going to fix it.Here’s what would actually need to be done:The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for anyone, including governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.That’s the kind of policy change that would cause my partner and I to hold each other, and say to each other that a miracle happened, and now there’...]]>
jacquesthibs https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:11 None full 5405
6wPsRrEbF7qgmxPkf_NL_EA_EA EA - Linkpost: Italy introduces bill to ban lab-grown meat by Matt Goodman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: Italy introduces bill to ban lab-grown meat, published by Matt Goodman on March 29, 2023 on The Effective Altruism Forum.This is a linkpost for a Reuters article about a new bill proposed in Italy's parliament to ban lab-grown meat. I don't have much to say about it, except that I hope it fails, and I hope this isn't the start of a culture war around lab-grown meat.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Matt Goodman https://forum.effectivealtruism.org/posts/6wPsRrEbF7qgmxPkf/linkpost-italy-introduces-bill-to-ban-lab-grown-meat Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: Italy introduces bill to ban lab-grown meat, published by Matt Goodman on March 29, 2023 on The Effective Altruism Forum.This is a linkpost for a Reuters article about a new bill proposed in Italy's parliament to ban lab-grown meat. I don't have much to say about it, except that I hope it fails, and I hope this isn't the start of a culture war around lab-grown meat.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 29 Mar 2023 18:59:55 +0000 EA - Linkpost: Italy introduces bill to ban lab-grown meat by Matt Goodman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: Italy introduces bill to ban lab-grown meat, published by Matt Goodman on March 29, 2023 on The Effective Altruism Forum.This is a linkpost for a Reuters article about a new bill proposed in Italy's parliament to ban lab-grown meat. I don't have much to say about it, except that I hope it fails, and I hope this isn't the start of a culture war around lab-grown meat.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: Italy introduces bill to ban lab-grown meat, published by Matt Goodman on March 29, 2023 on The Effective Altruism Forum.This is a linkpost for a Reuters article about a new bill proposed in Italy's parliament to ban lab-grown meat. I don't have much to say about it, except that I hope it fails, and I hope this isn't the start of a culture war around lab-grown meat.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Matt Goodman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:40 None full 5406
wahhPpkKnCoGSMZzq_NL_EA_EA EA - The billionaires’ philanthropy index by brb243 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The billionaires’ philanthropy index, published by brb243 on March 29, 2023 on The Effective Altruism Forum.SummaryI made a spreadsheet that allows anyone to find who of the about 2,500 billionaires donates most and least consistently with the user’s personal values (roughly, records might be inaccurate or incomplete).The billionaires’ philanthropy index indexes billionaires’ philanthropy based on the causes they donated to and the amounts they pledged. It displays top donors using up to 52 user’s inputs of their relative valuation of contribution to/impact in 12 cause areas divided into subcategories. It is possible to account for developed/developing country impact cost differential.This post displays billionaires’ philanthropy summary statistics. I also discuss the limitations of the indexing method and suggest improvements. Information about possible partnerships with billionaires’ foundations in EA-popular fields is included. For inspiration, I am providing personal value inputs that an individual involved in Effective Altruism can have, alongside with these values’ ITN-based calculation and reasoning.The three documents shared are:The billionaires’ philanthropy indexUser manual to The billionaires’ philanthropy indexValue examples for The billionaires’ philanthropy indexSuperlinear contestThis submission is for the Superlinear Create 'The Good Billionaires Index' contest.I was awarded ⅔ of the prize for partially completed work. ⅓ could be remaining (if the prize is listed on Superlinear) for someone who makes the user interface more friendly. Feel free to use the material for this purpose, but do not use fear-, power-, or impulse-based marketing.It was suggested that the material is open for comments.Notes on terminology and purposeI use “donate,” “pledge,” and “donate or pledge” (and similar expressions) interchangeably.I use “subcategory” and “minor category” interchangeably.This piece, the index, the manual, and the value examples are for entertainment purposes only. Nothing in any of these resources constitutes a donation, career, investment/divestment, or any other advice.Data sourcesThe main sources of data are:Forbes Real-Time BillionairesGiving PledgeThe full list of data sources for each column/cell can be found in the “Calculations” tab of the index.Research methodologyCopied and pasted the Real-Time Billionaires Forbes list into SheetsRecorded each billionaire’s donations based on their Forbes profile and additional research, if the profile indicated or hinted philanthropic interests but did not specify donationsAdded pledge information based on the Giving Pledge Signatory lettersChecked whether any major donations are omitted on top donor lists, including those of Wikipedia, Forbes (2, 3), The Borgen Project, and The New Humanitarian. Added these donations.Donation categoriesI categorized each billionaire's donations, if any.I used 13 major categories and 4 different subcategory lists.The total in all major categories and any subcategory list is 100.The value assigned to each category or subcategory represents the number of percent of the billionaire’s total donations provided to that category or subcategory (see the example further below).Major categoriesEducationHealthEconomic/social empowermentEnvironmental causesAnimal welfareArt/cultureReligious causesPolitical causesDisaster reliefLocal causesMeta charityFraudOtherMinor category listsGeographical location listEach major category, except Meta charity, Fraud, and Other, is subdivided into 3 geographical locations:USNon-US developedDevelopingEducation subcategory listIn addition, education is categorized as either:HigherK-12Health subcategory listHealth donations can support either:Physical healthMental healthAnimal welfare subcategory ...]]>
brb243 https://forum.effectivealtruism.org/posts/wahhPpkKnCoGSMZzq/the-billionaires-philanthropy-index Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The billionaires’ philanthropy index, published by brb243 on March 29, 2023 on The Effective Altruism Forum.SummaryI made a spreadsheet that allows anyone to find who of the about 2,500 billionaires donates most and least consistently with the user’s personal values (roughly, records might be inaccurate or incomplete).The billionaires’ philanthropy index indexes billionaires’ philanthropy based on the causes they donated to and the amounts they pledged. It displays top donors using up to 52 user’s inputs of their relative valuation of contribution to/impact in 12 cause areas divided into subcategories. It is possible to account for developed/developing country impact cost differential.This post displays billionaires’ philanthropy summary statistics. I also discuss the limitations of the indexing method and suggest improvements. Information about possible partnerships with billionaires’ foundations in EA-popular fields is included. For inspiration, I am providing personal value inputs that an individual involved in Effective Altruism can have, alongside with these values’ ITN-based calculation and reasoning.The three documents shared are:The billionaires’ philanthropy indexUser manual to The billionaires’ philanthropy indexValue examples for The billionaires’ philanthropy indexSuperlinear contestThis submission is for the Superlinear Create 'The Good Billionaires Index' contest.I was awarded ⅔ of the prize for partially completed work. ⅓ could be remaining (if the prize is listed on Superlinear) for someone who makes the user interface more friendly. Feel free to use the material for this purpose, but do not use fear-, power-, or impulse-based marketing.It was suggested that the material is open for comments.Notes on terminology and purposeI use “donate,” “pledge,” and “donate or pledge” (and similar expressions) interchangeably.I use “subcategory” and “minor category” interchangeably.This piece, the index, the manual, and the value examples are for entertainment purposes only. Nothing in any of these resources constitutes a donation, career, investment/divestment, or any other advice.Data sourcesThe main sources of data are:Forbes Real-Time BillionairesGiving PledgeThe full list of data sources for each column/cell can be found in the “Calculations” tab of the index.Research methodologyCopied and pasted the Real-Time Billionaires Forbes list into SheetsRecorded each billionaire’s donations based on their Forbes profile and additional research, if the profile indicated or hinted philanthropic interests but did not specify donationsAdded pledge information based on the Giving Pledge Signatory lettersChecked whether any major donations are omitted on top donor lists, including those of Wikipedia, Forbes (2, 3), The Borgen Project, and The New Humanitarian. Added these donations.Donation categoriesI categorized each billionaire's donations, if any.I used 13 major categories and 4 different subcategory lists.The total in all major categories and any subcategory list is 100.The value assigned to each category or subcategory represents the number of percent of the billionaire’s total donations provided to that category or subcategory (see the example further below).Major categoriesEducationHealthEconomic/social empowermentEnvironmental causesAnimal welfareArt/cultureReligious causesPolitical causesDisaster reliefLocal causesMeta charityFraudOtherMinor category listsGeographical location listEach major category, except Meta charity, Fraud, and Other, is subdivided into 3 geographical locations:USNon-US developedDevelopingEducation subcategory listIn addition, education is categorized as either:HigherK-12Health subcategory listHealth donations can support either:Physical healthMental healthAnimal welfare subcategory ...]]>
Wed, 29 Mar 2023 17:42:39 +0000 EA - The billionaires’ philanthropy index by brb243 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The billionaires’ philanthropy index, published by brb243 on March 29, 2023 on The Effective Altruism Forum.SummaryI made a spreadsheet that allows anyone to find who of the about 2,500 billionaires donates most and least consistently with the user’s personal values (roughly, records might be inaccurate or incomplete).The billionaires’ philanthropy index indexes billionaires’ philanthropy based on the causes they donated to and the amounts they pledged. It displays top donors using up to 52 user’s inputs of their relative valuation of contribution to/impact in 12 cause areas divided into subcategories. It is possible to account for developed/developing country impact cost differential.This post displays billionaires’ philanthropy summary statistics. I also discuss the limitations of the indexing method and suggest improvements. Information about possible partnerships with billionaires’ foundations in EA-popular fields is included. For inspiration, I am providing personal value inputs that an individual involved in Effective Altruism can have, alongside with these values’ ITN-based calculation and reasoning.The three documents shared are:The billionaires’ philanthropy indexUser manual to The billionaires’ philanthropy indexValue examples for The billionaires’ philanthropy indexSuperlinear contestThis submission is for the Superlinear Create 'The Good Billionaires Index' contest.I was awarded ⅔ of the prize for partially completed work. ⅓ could be remaining (if the prize is listed on Superlinear) for someone who makes the user interface more friendly. Feel free to use the material for this purpose, but do not use fear-, power-, or impulse-based marketing.It was suggested that the material is open for comments.Notes on terminology and purposeI use “donate,” “pledge,” and “donate or pledge” (and similar expressions) interchangeably.I use “subcategory” and “minor category” interchangeably.This piece, the index, the manual, and the value examples are for entertainment purposes only. Nothing in any of these resources constitutes a donation, career, investment/divestment, or any other advice.Data sourcesThe main sources of data are:Forbes Real-Time BillionairesGiving PledgeThe full list of data sources for each column/cell can be found in the “Calculations” tab of the index.Research methodologyCopied and pasted the Real-Time Billionaires Forbes list into SheetsRecorded each billionaire’s donations based on their Forbes profile and additional research, if the profile indicated or hinted philanthropic interests but did not specify donationsAdded pledge information based on the Giving Pledge Signatory lettersChecked whether any major donations are omitted on top donor lists, including those of Wikipedia, Forbes (2, 3), The Borgen Project, and The New Humanitarian. Added these donations.Donation categoriesI categorized each billionaire's donations, if any.I used 13 major categories and 4 different subcategory lists.The total in all major categories and any subcategory list is 100.The value assigned to each category or subcategory represents the number of percent of the billionaire’s total donations provided to that category or subcategory (see the example further below).Major categoriesEducationHealthEconomic/social empowermentEnvironmental causesAnimal welfareArt/cultureReligious causesPolitical causesDisaster reliefLocal causesMeta charityFraudOtherMinor category listsGeographical location listEach major category, except Meta charity, Fraud, and Other, is subdivided into 3 geographical locations:USNon-US developedDevelopingEducation subcategory listIn addition, education is categorized as either:HigherK-12Health subcategory listHealth donations can support either:Physical healthMental healthAnimal welfare subcategory ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The billionaires’ philanthropy index, published by brb243 on March 29, 2023 on The Effective Altruism Forum.SummaryI made a spreadsheet that allows anyone to find who of the about 2,500 billionaires donates most and least consistently with the user’s personal values (roughly, records might be inaccurate or incomplete).The billionaires’ philanthropy index indexes billionaires’ philanthropy based on the causes they donated to and the amounts they pledged. It displays top donors using up to 52 user’s inputs of their relative valuation of contribution to/impact in 12 cause areas divided into subcategories. It is possible to account for developed/developing country impact cost differential.This post displays billionaires’ philanthropy summary statistics. I also discuss the limitations of the indexing method and suggest improvements. Information about possible partnerships with billionaires’ foundations in EA-popular fields is included. For inspiration, I am providing personal value inputs that an individual involved in Effective Altruism can have, alongside with these values’ ITN-based calculation and reasoning.The three documents shared are:The billionaires’ philanthropy indexUser manual to The billionaires’ philanthropy indexValue examples for The billionaires’ philanthropy indexSuperlinear contestThis submission is for the Superlinear Create 'The Good Billionaires Index' contest.I was awarded ⅔ of the prize for partially completed work. ⅓ could be remaining (if the prize is listed on Superlinear) for someone who makes the user interface more friendly. Feel free to use the material for this purpose, but do not use fear-, power-, or impulse-based marketing.It was suggested that the material is open for comments.Notes on terminology and purposeI use “donate,” “pledge,” and “donate or pledge” (and similar expressions) interchangeably.I use “subcategory” and “minor category” interchangeably.This piece, the index, the manual, and the value examples are for entertainment purposes only. Nothing in any of these resources constitutes a donation, career, investment/divestment, or any other advice.Data sourcesThe main sources of data are:Forbes Real-Time BillionairesGiving PledgeThe full list of data sources for each column/cell can be found in the “Calculations” tab of the index.Research methodologyCopied and pasted the Real-Time Billionaires Forbes list into SheetsRecorded each billionaire’s donations based on their Forbes profile and additional research, if the profile indicated or hinted philanthropic interests but did not specify donationsAdded pledge information based on the Giving Pledge Signatory lettersChecked whether any major donations are omitted on top donor lists, including those of Wikipedia, Forbes (2, 3), The Borgen Project, and The New Humanitarian. Added these donations.Donation categoriesI categorized each billionaire's donations, if any.I used 13 major categories and 4 different subcategory lists.The total in all major categories and any subcategory list is 100.The value assigned to each category or subcategory represents the number of percent of the billionaire’s total donations provided to that category or subcategory (see the example further below).Major categoriesEducationHealthEconomic/social empowermentEnvironmental causesAnimal welfareArt/cultureReligious causesPolitical causesDisaster reliefLocal causesMeta charityFraudOtherMinor category listsGeographical location listEach major category, except Meta charity, Fraud, and Other, is subdivided into 3 geographical locations:USNon-US developedDevelopingEducation subcategory listIn addition, education is categorized as either:HigherK-12Health subcategory listHealth donations can support either:Physical healthMental healthAnimal welfare subcategory ...]]>
brb243 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 48:58 None full 5397
coNjDHp6F7QmprxFo_NL_EA_EA EA - Nathan A. Sears (1988-2023) by HaydnBelfield Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nathan A. Sears (1988-2023), published by HaydnBelfield on March 29, 2023 on The Effective Altruism Forum.Nathan Sears was one of seven to die in a fire in Montreal on the 16 March 2023. He was 35.Nathan was becoming a leading figure at the intersection of existential risk and international relations (IR).Indeed, he was in Montreal to attend the 2023 International Studies Association (ISA) conference, the leading conference on international relations (IR). The day before on March 15th, he presented a paper on "Great Power Rivalry and Human Survival: Why States Fail to “Securitize” Existential Threats to Humanity" at a panel on 'Catastrophic-Existential Risks and World Orders'After his undergrad at Western University and his Masters in IR at Carleton University Nathan moved to Quito, Ecuador. For four years he taught IR at the Universidad de Las Américas. He then came back to Canada in 2016 to earn his PhD.During that time he was a 2017-2018 Trudeau Centre Fellow in Peace, Conflict and Justice at the Munk School of Global Affairs. He also took a year out to serve his country as a 2019-2020 Cadieux-Léger Fellow in the Foreign Policy Research and Foresight Division of Global Affairs Canada.Nathan was already an important scholar in the field of existential risk, making groundbreaking & much-discussed contributions at the intersection with international relations. He was also a really friendly, supportive and engaging guy. I was so excited about what he was going to accomplish.Five of his most important papers are:International Politics in the Age of Existential ThreatsHumans in the twenty-first century live under the specter of anthropogenic existential threats to human civilization and survival. What is the significance of humanity’s capacity for self-destruction to the meaning of “security” and “survival” in international politics? The argument is that it constitutes a material “revolution” in international politics—that is, the growing spectrum of anthropogenic existential threats represents a radical transformation in the material context of international politics that turns established truths about security and survival on their heads. The paper develops a theoretical framework based in historical security materialism, especially the theoretical proposition that the material circumstances of the “forces of destruction” determine the security viability of different “modes of protection”, political “units” and “structures”, and “security ideologies” in international politics.The argument seeks to demonstrate the growing disjuncture (or "contradiction") between the material context of anthropogenic existential threats ("forces of destruction"); and the security practices of war, the use of military force, and the balance-of-power ("modes of protection"); the political units of nation-states and structure of international anarchy ("political superstructure"); and the primacy of "national security" and doctrines of "self-help" and "power politics" in international politics ("security ideologies"). Specifically, humanity's survival interdependence with respect to an-thropogenic existential threats calls into question the centrality of national security and survival in international politics. In an age of existential threats, "security" is better understood as about the survival of humanity.Existential Security: Towards a Security Framework for the Survival of HumanityHumankind faces a growing spectrum of anthropogenic existential threats to human civilization and survival. This article therefore aims to develop a new framework for security policy – ‘existential security’ – that puts the survival of humanity at its core. It begins with a discussion of the definition and spectrum of ‘anthropogenic existential threats’, or those threats that have their origins i...]]>
HaydnBelfield https://forum.effectivealtruism.org/posts/coNjDHp6F7QmprxFo/nathan-a-sears-1988-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nathan A. Sears (1988-2023), published by HaydnBelfield on March 29, 2023 on The Effective Altruism Forum.Nathan Sears was one of seven to die in a fire in Montreal on the 16 March 2023. He was 35.Nathan was becoming a leading figure at the intersection of existential risk and international relations (IR).Indeed, he was in Montreal to attend the 2023 International Studies Association (ISA) conference, the leading conference on international relations (IR). The day before on March 15th, he presented a paper on "Great Power Rivalry and Human Survival: Why States Fail to “Securitize” Existential Threats to Humanity" at a panel on 'Catastrophic-Existential Risks and World Orders'After his undergrad at Western University and his Masters in IR at Carleton University Nathan moved to Quito, Ecuador. For four years he taught IR at the Universidad de Las Américas. He then came back to Canada in 2016 to earn his PhD.During that time he was a 2017-2018 Trudeau Centre Fellow in Peace, Conflict and Justice at the Munk School of Global Affairs. He also took a year out to serve his country as a 2019-2020 Cadieux-Léger Fellow in the Foreign Policy Research and Foresight Division of Global Affairs Canada.Nathan was already an important scholar in the field of existential risk, making groundbreaking & much-discussed contributions at the intersection with international relations. He was also a really friendly, supportive and engaging guy. I was so excited about what he was going to accomplish.Five of his most important papers are:International Politics in the Age of Existential ThreatsHumans in the twenty-first century live under the specter of anthropogenic existential threats to human civilization and survival. What is the significance of humanity’s capacity for self-destruction to the meaning of “security” and “survival” in international politics? The argument is that it constitutes a material “revolution” in international politics—that is, the growing spectrum of anthropogenic existential threats represents a radical transformation in the material context of international politics that turns established truths about security and survival on their heads. The paper develops a theoretical framework based in historical security materialism, especially the theoretical proposition that the material circumstances of the “forces of destruction” determine the security viability of different “modes of protection”, political “units” and “structures”, and “security ideologies” in international politics.The argument seeks to demonstrate the growing disjuncture (or "contradiction") between the material context of anthropogenic existential threats ("forces of destruction"); and the security practices of war, the use of military force, and the balance-of-power ("modes of protection"); the political units of nation-states and structure of international anarchy ("political superstructure"); and the primacy of "national security" and doctrines of "self-help" and "power politics" in international politics ("security ideologies"). Specifically, humanity's survival interdependence with respect to an-thropogenic existential threats calls into question the centrality of national security and survival in international politics. In an age of existential threats, "security" is better understood as about the survival of humanity.Existential Security: Towards a Security Framework for the Survival of HumanityHumankind faces a growing spectrum of anthropogenic existential threats to human civilization and survival. This article therefore aims to develop a new framework for security policy – ‘existential security’ – that puts the survival of humanity at its core. It begins with a discussion of the definition and spectrum of ‘anthropogenic existential threats’, or those threats that have their origins i...]]>
Wed, 29 Mar 2023 16:26:14 +0000 EA - Nathan A. Sears (1988-2023) by HaydnBelfield Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nathan A. Sears (1988-2023), published by HaydnBelfield on March 29, 2023 on The Effective Altruism Forum.Nathan Sears was one of seven to die in a fire in Montreal on the 16 March 2023. He was 35.Nathan was becoming a leading figure at the intersection of existential risk and international relations (IR).Indeed, he was in Montreal to attend the 2023 International Studies Association (ISA) conference, the leading conference on international relations (IR). The day before on March 15th, he presented a paper on "Great Power Rivalry and Human Survival: Why States Fail to “Securitize” Existential Threats to Humanity" at a panel on 'Catastrophic-Existential Risks and World Orders'After his undergrad at Western University and his Masters in IR at Carleton University Nathan moved to Quito, Ecuador. For four years he taught IR at the Universidad de Las Américas. He then came back to Canada in 2016 to earn his PhD.During that time he was a 2017-2018 Trudeau Centre Fellow in Peace, Conflict and Justice at the Munk School of Global Affairs. He also took a year out to serve his country as a 2019-2020 Cadieux-Léger Fellow in the Foreign Policy Research and Foresight Division of Global Affairs Canada.Nathan was already an important scholar in the field of existential risk, making groundbreaking & much-discussed contributions at the intersection with international relations. He was also a really friendly, supportive and engaging guy. I was so excited about what he was going to accomplish.Five of his most important papers are:International Politics in the Age of Existential ThreatsHumans in the twenty-first century live under the specter of anthropogenic existential threats to human civilization and survival. What is the significance of humanity’s capacity for self-destruction to the meaning of “security” and “survival” in international politics? The argument is that it constitutes a material “revolution” in international politics—that is, the growing spectrum of anthropogenic existential threats represents a radical transformation in the material context of international politics that turns established truths about security and survival on their heads. The paper develops a theoretical framework based in historical security materialism, especially the theoretical proposition that the material circumstances of the “forces of destruction” determine the security viability of different “modes of protection”, political “units” and “structures”, and “security ideologies” in international politics.The argument seeks to demonstrate the growing disjuncture (or "contradiction") between the material context of anthropogenic existential threats ("forces of destruction"); and the security practices of war, the use of military force, and the balance-of-power ("modes of protection"); the political units of nation-states and structure of international anarchy ("political superstructure"); and the primacy of "national security" and doctrines of "self-help" and "power politics" in international politics ("security ideologies"). Specifically, humanity's survival interdependence with respect to an-thropogenic existential threats calls into question the centrality of national security and survival in international politics. In an age of existential threats, "security" is better understood as about the survival of humanity.Existential Security: Towards a Security Framework for the Survival of HumanityHumankind faces a growing spectrum of anthropogenic existential threats to human civilization and survival. This article therefore aims to develop a new framework for security policy – ‘existential security’ – that puts the survival of humanity at its core. It begins with a discussion of the definition and spectrum of ‘anthropogenic existential threats’, or those threats that have their origins i...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nathan A. Sears (1988-2023), published by HaydnBelfield on March 29, 2023 on The Effective Altruism Forum.Nathan Sears was one of seven to die in a fire in Montreal on the 16 March 2023. He was 35.Nathan was becoming a leading figure at the intersection of existential risk and international relations (IR).Indeed, he was in Montreal to attend the 2023 International Studies Association (ISA) conference, the leading conference on international relations (IR). The day before on March 15th, he presented a paper on "Great Power Rivalry and Human Survival: Why States Fail to “Securitize” Existential Threats to Humanity" at a panel on 'Catastrophic-Existential Risks and World Orders'After his undergrad at Western University and his Masters in IR at Carleton University Nathan moved to Quito, Ecuador. For four years he taught IR at the Universidad de Las Américas. He then came back to Canada in 2016 to earn his PhD.During that time he was a 2017-2018 Trudeau Centre Fellow in Peace, Conflict and Justice at the Munk School of Global Affairs. He also took a year out to serve his country as a 2019-2020 Cadieux-Léger Fellow in the Foreign Policy Research and Foresight Division of Global Affairs Canada.Nathan was already an important scholar in the field of existential risk, making groundbreaking & much-discussed contributions at the intersection with international relations. He was also a really friendly, supportive and engaging guy. I was so excited about what he was going to accomplish.Five of his most important papers are:International Politics in the Age of Existential ThreatsHumans in the twenty-first century live under the specter of anthropogenic existential threats to human civilization and survival. What is the significance of humanity’s capacity for self-destruction to the meaning of “security” and “survival” in international politics? The argument is that it constitutes a material “revolution” in international politics—that is, the growing spectrum of anthropogenic existential threats represents a radical transformation in the material context of international politics that turns established truths about security and survival on their heads. The paper develops a theoretical framework based in historical security materialism, especially the theoretical proposition that the material circumstances of the “forces of destruction” determine the security viability of different “modes of protection”, political “units” and “structures”, and “security ideologies” in international politics.The argument seeks to demonstrate the growing disjuncture (or "contradiction") between the material context of anthropogenic existential threats ("forces of destruction"); and the security practices of war, the use of military force, and the balance-of-power ("modes of protection"); the political units of nation-states and structure of international anarchy ("political superstructure"); and the primacy of "national security" and doctrines of "self-help" and "power politics" in international politics ("security ideologies"). Specifically, humanity's survival interdependence with respect to an-thropogenic existential threats calls into question the centrality of national security and survival in international politics. In an age of existential threats, "security" is better understood as about the survival of humanity.Existential Security: Towards a Security Framework for the Survival of HumanityHumankind faces a growing spectrum of anthropogenic existential threats to human civilization and survival. This article therefore aims to develop a new framework for security policy – ‘existential security’ – that puts the survival of humanity at its core. It begins with a discussion of the definition and spectrum of ‘anthropogenic existential threats’, or those threats that have their origins i...]]>
HaydnBelfield https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:41 None full 5396
Ackzs8Wbk7isDzs2n_NL_EA_EA EA - Want to win the AGI race? Solve alignment. by leopold Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Want to win the AGI race? Solve alignment., published by leopold on March 29, 2023 on The Effective Altruism Forum.Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$ and key to beating China.Look, I really don't want Xi Jinping Thought to rule the world. If China gets AGI first, the ensuing rapid AI-powered scientific and technological progress could well give it a decisive advantage (cf potential for >30%/year economic growth with AGI). I think there's a very real specter of global authoritarianism here.Or hey, maybe you just think AGI is cool. You want to go build amazing products and enable breakthrough science and solve the world’s problems.So, race to AGI with reckless abandon then? At this point, people get into agonizing discussions about safety tradeoffs. And many people just mood affiliate their way to an answer: "accelerate, progress go brrrr," or "AI scary, slow it down."I see this much more practically. And, practically, society cares about safety, a lot. Do you actually think that you’ll be able to and allowed to deploy an AI system that has, say, a 10% chance of destroying all of humanity?Society has started waking up to AGI; like covid, the societal response will probably be a dumpster-fire, but it’ll also probably be quite intense. In many worlds, to deploy your AGI systems, people will need to be quite confident that your AGI won’t destroy the world.Right now, we’re very much not on track to solve the alignment problem for superhuman AGI systems (“scalable alignment”)—but it’s a solvable problem, if we get our act together. I discuss this in my main post today (“Nobody’s on the ball on AGI alignment”). On the current trajectory, the binding constraint on deploying your AGI could well be your ability to align your AGI—and this alignment solution being unambiguous enough that there is consensus that it works.Even if you just want to win the AGI race, you should probably want to invest much more heavily in solving this problem.Things are going to get crazy, and people will pay attentionA mistake many people make when thinking about AGI is imagining a world that looks much like today, except for adding in a lab with a super powerful model. They ignore the endogenous societal response.I and many others made this mistake with covid—we were freaking out in February 2020, and despairing that society didn’t seem to be even paying attention, let alone doing anything. But just a few weeks later, all of America went into an unprecedented lockdown. If we're actually on our way to AGI, things are going to get crazy. People are going to pay attention.The wheels for this are already in motion. Remember how nobody paid any attention to AI 6 months ago, and now Bing chat/Sydney going awry is on the front page of the NYT, US senators are getting scared, and Yale econ professors are advocating $100B/year for AI safety? Well, imagine that, but 100x as we approach AGI.AI safety is going mainstream. Everyone has been primed to be scared about rogue AI by science fiction; all the CEOs have secretly believed in AI risk for years but thought it was too weird to talk about it; and the mainstream media loves to hate on tech companies. Probably there will be further, much scarier wakeup calls (not just misalignment, but also misuse and scary demos in evals). People already freaked out about GPT-4 using a TaskRabbit to solve a captcha—now imagine a demo of AI systems designing a new bioweapon or autonomously self-replicating on the internet, or people using AI coders to hack major institutions like the government or big banks. Already, a majority of the population says they fear AI risk and want FDA-style regulation ...]]>
leopold https://forum.effectivealtruism.org/posts/Ackzs8Wbk7isDzs2n/want-to-win-the-agi-race-solve-alignment Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Want to win the AGI race? Solve alignment., published by leopold on March 29, 2023 on The Effective Altruism Forum.Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$ and key to beating China.Look, I really don't want Xi Jinping Thought to rule the world. If China gets AGI first, the ensuing rapid AI-powered scientific and technological progress could well give it a decisive advantage (cf potential for >30%/year economic growth with AGI). I think there's a very real specter of global authoritarianism here.Or hey, maybe you just think AGI is cool. You want to go build amazing products and enable breakthrough science and solve the world’s problems.So, race to AGI with reckless abandon then? At this point, people get into agonizing discussions about safety tradeoffs. And many people just mood affiliate their way to an answer: "accelerate, progress go brrrr," or "AI scary, slow it down."I see this much more practically. And, practically, society cares about safety, a lot. Do you actually think that you’ll be able to and allowed to deploy an AI system that has, say, a 10% chance of destroying all of humanity?Society has started waking up to AGI; like covid, the societal response will probably be a dumpster-fire, but it’ll also probably be quite intense. In many worlds, to deploy your AGI systems, people will need to be quite confident that your AGI won’t destroy the world.Right now, we’re very much not on track to solve the alignment problem for superhuman AGI systems (“scalable alignment”)—but it’s a solvable problem, if we get our act together. I discuss this in my main post today (“Nobody’s on the ball on AGI alignment”). On the current trajectory, the binding constraint on deploying your AGI could well be your ability to align your AGI—and this alignment solution being unambiguous enough that there is consensus that it works.Even if you just want to win the AGI race, you should probably want to invest much more heavily in solving this problem.Things are going to get crazy, and people will pay attentionA mistake many people make when thinking about AGI is imagining a world that looks much like today, except for adding in a lab with a super powerful model. They ignore the endogenous societal response.I and many others made this mistake with covid—we were freaking out in February 2020, and despairing that society didn’t seem to be even paying attention, let alone doing anything. But just a few weeks later, all of America went into an unprecedented lockdown. If we're actually on our way to AGI, things are going to get crazy. People are going to pay attention.The wheels for this are already in motion. Remember how nobody paid any attention to AI 6 months ago, and now Bing chat/Sydney going awry is on the front page of the NYT, US senators are getting scared, and Yale econ professors are advocating $100B/year for AI safety? Well, imagine that, but 100x as we approach AGI.AI safety is going mainstream. Everyone has been primed to be scared about rogue AI by science fiction; all the CEOs have secretly believed in AI risk for years but thought it was too weird to talk about it; and the mainstream media loves to hate on tech companies. Probably there will be further, much scarier wakeup calls (not just misalignment, but also misuse and scary demos in evals). People already freaked out about GPT-4 using a TaskRabbit to solve a captcha—now imagine a demo of AI systems designing a new bioweapon or autonomously self-replicating on the internet, or people using AI coders to hack major institutions like the government or big banks. Already, a majority of the population says they fear AI risk and want FDA-style regulation ...]]>
Wed, 29 Mar 2023 16:24:20 +0000 EA - Want to win the AGI race? Solve alignment. by leopold Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Want to win the AGI race? Solve alignment., published by leopold on March 29, 2023 on The Effective Altruism Forum.Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$ and key to beating China.Look, I really don't want Xi Jinping Thought to rule the world. If China gets AGI first, the ensuing rapid AI-powered scientific and technological progress could well give it a decisive advantage (cf potential for >30%/year economic growth with AGI). I think there's a very real specter of global authoritarianism here.Or hey, maybe you just think AGI is cool. You want to go build amazing products and enable breakthrough science and solve the world’s problems.So, race to AGI with reckless abandon then? At this point, people get into agonizing discussions about safety tradeoffs. And many people just mood affiliate their way to an answer: "accelerate, progress go brrrr," or "AI scary, slow it down."I see this much more practically. And, practically, society cares about safety, a lot. Do you actually think that you’ll be able to and allowed to deploy an AI system that has, say, a 10% chance of destroying all of humanity?Society has started waking up to AGI; like covid, the societal response will probably be a dumpster-fire, but it’ll also probably be quite intense. In many worlds, to deploy your AGI systems, people will need to be quite confident that your AGI won’t destroy the world.Right now, we’re very much not on track to solve the alignment problem for superhuman AGI systems (“scalable alignment”)—but it’s a solvable problem, if we get our act together. I discuss this in my main post today (“Nobody’s on the ball on AGI alignment”). On the current trajectory, the binding constraint on deploying your AGI could well be your ability to align your AGI—and this alignment solution being unambiguous enough that there is consensus that it works.Even if you just want to win the AGI race, you should probably want to invest much more heavily in solving this problem.Things are going to get crazy, and people will pay attentionA mistake many people make when thinking about AGI is imagining a world that looks much like today, except for adding in a lab with a super powerful model. They ignore the endogenous societal response.I and many others made this mistake with covid—we were freaking out in February 2020, and despairing that society didn’t seem to be even paying attention, let alone doing anything. But just a few weeks later, all of America went into an unprecedented lockdown. If we're actually on our way to AGI, things are going to get crazy. People are going to pay attention.The wheels for this are already in motion. Remember how nobody paid any attention to AI 6 months ago, and now Bing chat/Sydney going awry is on the front page of the NYT, US senators are getting scared, and Yale econ professors are advocating $100B/year for AI safety? Well, imagine that, but 100x as we approach AGI.AI safety is going mainstream. Everyone has been primed to be scared about rogue AI by science fiction; all the CEOs have secretly believed in AI risk for years but thought it was too weird to talk about it; and the mainstream media loves to hate on tech companies. Probably there will be further, much scarier wakeup calls (not just misalignment, but also misuse and scary demos in evals). People already freaked out about GPT-4 using a TaskRabbit to solve a captcha—now imagine a demo of AI systems designing a new bioweapon or autonomously self-replicating on the internet, or people using AI coders to hack major institutions like the government or big banks. Already, a majority of the population says they fear AI risk and want FDA-style regulation ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Want to win the AGI race? Solve alignment., published by leopold on March 29, 2023 on The Effective Altruism Forum.Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$ and key to beating China.Look, I really don't want Xi Jinping Thought to rule the world. If China gets AGI first, the ensuing rapid AI-powered scientific and technological progress could well give it a decisive advantage (cf potential for >30%/year economic growth with AGI). I think there's a very real specter of global authoritarianism here.Or hey, maybe you just think AGI is cool. You want to go build amazing products and enable breakthrough science and solve the world’s problems.So, race to AGI with reckless abandon then? At this point, people get into agonizing discussions about safety tradeoffs. And many people just mood affiliate their way to an answer: "accelerate, progress go brrrr," or "AI scary, slow it down."I see this much more practically. And, practically, society cares about safety, a lot. Do you actually think that you’ll be able to and allowed to deploy an AI system that has, say, a 10% chance of destroying all of humanity?Society has started waking up to AGI; like covid, the societal response will probably be a dumpster-fire, but it’ll also probably be quite intense. In many worlds, to deploy your AGI systems, people will need to be quite confident that your AGI won’t destroy the world.Right now, we’re very much not on track to solve the alignment problem for superhuman AGI systems (“scalable alignment”)—but it’s a solvable problem, if we get our act together. I discuss this in my main post today (“Nobody’s on the ball on AGI alignment”). On the current trajectory, the binding constraint on deploying your AGI could well be your ability to align your AGI—and this alignment solution being unambiguous enough that there is consensus that it works.Even if you just want to win the AGI race, you should probably want to invest much more heavily in solving this problem.Things are going to get crazy, and people will pay attentionA mistake many people make when thinking about AGI is imagining a world that looks much like today, except for adding in a lab with a super powerful model. They ignore the endogenous societal response.I and many others made this mistake with covid—we were freaking out in February 2020, and despairing that society didn’t seem to be even paying attention, let alone doing anything. But just a few weeks later, all of America went into an unprecedented lockdown. If we're actually on our way to AGI, things are going to get crazy. People are going to pay attention.The wheels for this are already in motion. Remember how nobody paid any attention to AI 6 months ago, and now Bing chat/Sydney going awry is on the front page of the NYT, US senators are getting scared, and Yale econ professors are advocating $100B/year for AI safety? Well, imagine that, but 100x as we approach AGI.AI safety is going mainstream. Everyone has been primed to be scared about rogue AI by science fiction; all the CEOs have secretly believed in AI risk for years but thought it was too weird to talk about it; and the mainstream media loves to hate on tech companies. Probably there will be further, much scarier wakeup calls (not just misalignment, but also misuse and scary demos in evals). People already freaked out about GPT-4 using a TaskRabbit to solve a captcha—now imagine a demo of AI systems designing a new bioweapon or autonomously self-replicating on the internet, or people using AI coders to hack major institutions like the government or big banks. Already, a majority of the population says they fear AI risk and want FDA-style regulation ...]]>
leopold https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:56 None full 5399
Jj4QppJpDgyDAEXiu_NL_EA_EA EA - Some updates to my thinking in light of the FTX collapse by Owen Cotton Barratt [Link Post] by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some updates to my thinking in light of the FTX collapse by Owen Cotton Barratt [Link Post], published by Nathan Young on March 29, 2023 on The Effective Altruism Forum.Some quick notesOwen Cotton Barratt has written a post of reflections about FTXHe didn’t want to post themThese seem obviously valuable for people to read, if they wishPersonally I don’t think revelations about Owen’s actions much affect my opinion of the value of the interesting things he might say about FTXBut neither do interesting things he has to say about FTX affect his actions (more below)So this is a link post. This is my choice, with his permission.On balance I think it's better to link to the post, if the consensus is that I should post the whole post (it's quite long) I willEqually there is a discussion to be had about whether Owen’s content should be on the forum or indeed further discussion of the whole situation, feel free to have that discussion here. The mod team has suggested (and I cautiously endorse) having a dedicated comment thread on this post for meta-discussion about Owen, details below.I think this could be seen as soft rehabilitation. I don’t endorse thatPersonally I think there is at least one more large discussion necessary about Owen’s actions. If the manner this piece was posted was used to avoid that, I’d be upsetYou may disagree with either my or the moderation team’s choices here. I am happy to discuss the point - perhaps I’m wrongAs elsewhere I think it may be helpful to split up thoughts and feelings, personally I think my feelings do not automatically translate into a need for action. Feelings are important, but they are only part of good arguments.[A note from the moderation team] We realize that some people might want to discuss how to process this post in light of Owen's recent statement and apology. But we also want to give space to object-level discussion of the contents of the post, and separate those out somewhat. So we ask that you avoid commenting on Owen's recent apology anywhere but in this thread. New top-level comments (and responses to them) should focus on the contents of the post; if they don't, we'll move them to said thread.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://forum.effectivealtruism.org/posts/Jj4QppJpDgyDAEXiu/some-updates-to-my-thinking-in-light-of-the-ftx-collapse-by Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some updates to my thinking in light of the FTX collapse by Owen Cotton Barratt [Link Post], published by Nathan Young on March 29, 2023 on The Effective Altruism Forum.Some quick notesOwen Cotton Barratt has written a post of reflections about FTXHe didn’t want to post themThese seem obviously valuable for people to read, if they wishPersonally I don’t think revelations about Owen’s actions much affect my opinion of the value of the interesting things he might say about FTXBut neither do interesting things he has to say about FTX affect his actions (more below)So this is a link post. This is my choice, with his permission.On balance I think it's better to link to the post, if the consensus is that I should post the whole post (it's quite long) I willEqually there is a discussion to be had about whether Owen’s content should be on the forum or indeed further discussion of the whole situation, feel free to have that discussion here. The mod team has suggested (and I cautiously endorse) having a dedicated comment thread on this post for meta-discussion about Owen, details below.I think this could be seen as soft rehabilitation. I don’t endorse thatPersonally I think there is at least one more large discussion necessary about Owen’s actions. If the manner this piece was posted was used to avoid that, I’d be upsetYou may disagree with either my or the moderation team’s choices here. I am happy to discuss the point - perhaps I’m wrongAs elsewhere I think it may be helpful to split up thoughts and feelings, personally I think my feelings do not automatically translate into a need for action. Feelings are important, but they are only part of good arguments.[A note from the moderation team] We realize that some people might want to discuss how to process this post in light of Owen's recent statement and apology. But we also want to give space to object-level discussion of the contents of the post, and separate those out somewhat. So we ask that you avoid commenting on Owen's recent apology anywhere but in this thread. New top-level comments (and responses to them) should focus on the contents of the post; if they don't, we'll move them to said thread.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 29 Mar 2023 15:59:26 +0000 EA - Some updates to my thinking in light of the FTX collapse by Owen Cotton Barratt [Link Post] by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some updates to my thinking in light of the FTX collapse by Owen Cotton Barratt [Link Post], published by Nathan Young on March 29, 2023 on The Effective Altruism Forum.Some quick notesOwen Cotton Barratt has written a post of reflections about FTXHe didn’t want to post themThese seem obviously valuable for people to read, if they wishPersonally I don’t think revelations about Owen’s actions much affect my opinion of the value of the interesting things he might say about FTXBut neither do interesting things he has to say about FTX affect his actions (more below)So this is a link post. This is my choice, with his permission.On balance I think it's better to link to the post, if the consensus is that I should post the whole post (it's quite long) I willEqually there is a discussion to be had about whether Owen’s content should be on the forum or indeed further discussion of the whole situation, feel free to have that discussion here. The mod team has suggested (and I cautiously endorse) having a dedicated comment thread on this post for meta-discussion about Owen, details below.I think this could be seen as soft rehabilitation. I don’t endorse thatPersonally I think there is at least one more large discussion necessary about Owen’s actions. If the manner this piece was posted was used to avoid that, I’d be upsetYou may disagree with either my or the moderation team’s choices here. I am happy to discuss the point - perhaps I’m wrongAs elsewhere I think it may be helpful to split up thoughts and feelings, personally I think my feelings do not automatically translate into a need for action. Feelings are important, but they are only part of good arguments.[A note from the moderation team] We realize that some people might want to discuss how to process this post in light of Owen's recent statement and apology. But we also want to give space to object-level discussion of the contents of the post, and separate those out somewhat. So we ask that you avoid commenting on Owen's recent apology anywhere but in this thread. New top-level comments (and responses to them) should focus on the contents of the post; if they don't, we'll move them to said thread.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some updates to my thinking in light of the FTX collapse by Owen Cotton Barratt [Link Post], published by Nathan Young on March 29, 2023 on The Effective Altruism Forum.Some quick notesOwen Cotton Barratt has written a post of reflections about FTXHe didn’t want to post themThese seem obviously valuable for people to read, if they wishPersonally I don’t think revelations about Owen’s actions much affect my opinion of the value of the interesting things he might say about FTXBut neither do interesting things he has to say about FTX affect his actions (more below)So this is a link post. This is my choice, with his permission.On balance I think it's better to link to the post, if the consensus is that I should post the whole post (it's quite long) I willEqually there is a discussion to be had about whether Owen’s content should be on the forum or indeed further discussion of the whole situation, feel free to have that discussion here. The mod team has suggested (and I cautiously endorse) having a dedicated comment thread on this post for meta-discussion about Owen, details below.I think this could be seen as soft rehabilitation. I don’t endorse thatPersonally I think there is at least one more large discussion necessary about Owen’s actions. If the manner this piece was posted was used to avoid that, I’d be upsetYou may disagree with either my or the moderation team’s choices here. I am happy to discuss the point - perhaps I’m wrongAs elsewhere I think it may be helpful to split up thoughts and feelings, personally I think my feelings do not automatically translate into a need for action. Feelings are important, but they are only part of good arguments.[A note from the moderation team] We realize that some people might want to discuss how to process this post in light of Owen's recent statement and apology. But we also want to give space to object-level discussion of the contents of the post, and separate those out somewhat. So we ask that you avoid commenting on Owen's recent apology anywhere but in this thread. New top-level comments (and responses to them) should focus on the contents of the post; if they don't, we'll move them to said thread.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:18 None full 5398
5LNxeWFdoynvgZeik_NL_EA_EA EA - Nobody’s on the ball on AGI alignment by leopold Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nobody’s on the ball on AGI alignment, published by leopold on March 29, 2023 on The Effective Altruism Forum.Far fewer people are working on it than you might think, and even the alignment research that is happening is very much not on track. (But it’s a solvable problem, if we get our act together.)Observing from afar, it's easy to think there's an abundance of people working on AGI safety. Everyone on your timeline is fretting about AI risk, and it seems like there is a well-funded EA-industrial-complex that has elevated this to their main issue. Maybe you've even developed a slight distaste for it all—it reminds you a bit too much of the woke and FDA bureaucrats, and Eliezer seems pretty crazy to you.That’s what I used to think too, a couple of years ago. Then I got to see things more up close. And here’s the thing: nobody’s actually on the friggin’ ball on this one!There’s far fewer people working on it than you might think. There are plausibly 100,000 ML capabilities researchers in the world (30,000 attended ICML alone) vs. 300 alignment researchers in the world, a factor of ~300:1. The scalable alignment team at OpenAI has all of ~7 people.Barely anyone is going for the throat of solving the core difficulties of scalable alignment. Many of the people who are working on alignment are doing blue-sky theory, pretty disconnected from actual ML models. Most of the rest are doing work that’s vaguely related, hoping it will somehow be useful, or working on techniques that might work now but predictably fail to work for superhuman systems.There’s no secret elite SEAL team coming to save the day. This is it. We’re not on track.If timelines are short and we don’t get our act together, we’re in a lot of trouble. Scalable alignment—aligning superhuman AGI systems—is a real, unsolved problem. It’s quite simple: current alignment techniques rely on human supervision, but as models become superhuman, humans won’t be able to reliably supervise them.But my pessimism on the current state of alignment research very much doesn’t mean I’m an Eliezer-style doomer. Quite the opposite, I’m optimistic. I think scalable alignment is a solvable problem—and it’s an ML problem, one we can do real science on as our models get more advanced. But we gotta stop fucking around. We need an effort that matches the gravity of the challenge.Alignment is not on trackA recent post estimated that there were 300 full-time technical AI safety researchers (sounds plausible to me, if we’re counting generously). By contrast, there were 30,000 attendees at ICML in 2021, a single ML conference. It seems plausible that there are ≥100,000 researchers working on ML/AI in total. That’s a ratio of ~300:1, capabilities researchers:AGI safety researchers.That ratio is a little better at the AGI labs: ~7 researchers on the scalable alignment team at OpenAI, vs. ~400 people at the company in total (and fewer researchers). But 7 alignment researchers is still, well, not that much, and those 7 also aren’t, like, OpenAI’s most legendary ML researchers. (Importantly, from my understanding, this isn’t OpenAI being evil or anything like that—OpenAI would love to hire more alignment researchers, but there just aren’t many great researchers out there focusing on this problem.)But rather than the numbers, what made this really visceral to me is. actually looking at the research. There’s very little research where I feel like “great, this is getting at the core difficulties of the problem, and they have a plan for how we might actually solve it in 5 years.”Let’s take a quick, stylized, incomplete tour of the research landscape.Paul Christiano / Alignment Research Center (ARC).Paul is the single most respected alignment researcher in most circles. He used to lead the OpenAI alignment team, and he has made usefu...]]>
leopold https://forum.effectivealtruism.org/posts/5LNxeWFdoynvgZeik/nobody-s-on-the-ball-on-agi-alignment Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nobody’s on the ball on AGI alignment, published by leopold on March 29, 2023 on The Effective Altruism Forum.Far fewer people are working on it than you might think, and even the alignment research that is happening is very much not on track. (But it’s a solvable problem, if we get our act together.)Observing from afar, it's easy to think there's an abundance of people working on AGI safety. Everyone on your timeline is fretting about AI risk, and it seems like there is a well-funded EA-industrial-complex that has elevated this to their main issue. Maybe you've even developed a slight distaste for it all—it reminds you a bit too much of the woke and FDA bureaucrats, and Eliezer seems pretty crazy to you.That’s what I used to think too, a couple of years ago. Then I got to see things more up close. And here’s the thing: nobody’s actually on the friggin’ ball on this one!There’s far fewer people working on it than you might think. There are plausibly 100,000 ML capabilities researchers in the world (30,000 attended ICML alone) vs. 300 alignment researchers in the world, a factor of ~300:1. The scalable alignment team at OpenAI has all of ~7 people.Barely anyone is going for the throat of solving the core difficulties of scalable alignment. Many of the people who are working on alignment are doing blue-sky theory, pretty disconnected from actual ML models. Most of the rest are doing work that’s vaguely related, hoping it will somehow be useful, or working on techniques that might work now but predictably fail to work for superhuman systems.There’s no secret elite SEAL team coming to save the day. This is it. We’re not on track.If timelines are short and we don’t get our act together, we’re in a lot of trouble. Scalable alignment—aligning superhuman AGI systems—is a real, unsolved problem. It’s quite simple: current alignment techniques rely on human supervision, but as models become superhuman, humans won’t be able to reliably supervise them.But my pessimism on the current state of alignment research very much doesn’t mean I’m an Eliezer-style doomer. Quite the opposite, I’m optimistic. I think scalable alignment is a solvable problem—and it’s an ML problem, one we can do real science on as our models get more advanced. But we gotta stop fucking around. We need an effort that matches the gravity of the challenge.Alignment is not on trackA recent post estimated that there were 300 full-time technical AI safety researchers (sounds plausible to me, if we’re counting generously). By contrast, there were 30,000 attendees at ICML in 2021, a single ML conference. It seems plausible that there are ≥100,000 researchers working on ML/AI in total. That’s a ratio of ~300:1, capabilities researchers:AGI safety researchers.That ratio is a little better at the AGI labs: ~7 researchers on the scalable alignment team at OpenAI, vs. ~400 people at the company in total (and fewer researchers). But 7 alignment researchers is still, well, not that much, and those 7 also aren’t, like, OpenAI’s most legendary ML researchers. (Importantly, from my understanding, this isn’t OpenAI being evil or anything like that—OpenAI would love to hire more alignment researchers, but there just aren’t many great researchers out there focusing on this problem.)But rather than the numbers, what made this really visceral to me is. actually looking at the research. There’s very little research where I feel like “great, this is getting at the core difficulties of the problem, and they have a plan for how we might actually solve it in 5 years.”Let’s take a quick, stylized, incomplete tour of the research landscape.Paul Christiano / Alignment Research Center (ARC).Paul is the single most respected alignment researcher in most circles. He used to lead the OpenAI alignment team, and he has made usefu...]]>
Wed, 29 Mar 2023 14:47:29 +0000 EA - Nobody’s on the ball on AGI alignment by leopold Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nobody’s on the ball on AGI alignment, published by leopold on March 29, 2023 on The Effective Altruism Forum.Far fewer people are working on it than you might think, and even the alignment research that is happening is very much not on track. (But it’s a solvable problem, if we get our act together.)Observing from afar, it's easy to think there's an abundance of people working on AGI safety. Everyone on your timeline is fretting about AI risk, and it seems like there is a well-funded EA-industrial-complex that has elevated this to their main issue. Maybe you've even developed a slight distaste for it all—it reminds you a bit too much of the woke and FDA bureaucrats, and Eliezer seems pretty crazy to you.That’s what I used to think too, a couple of years ago. Then I got to see things more up close. And here’s the thing: nobody’s actually on the friggin’ ball on this one!There’s far fewer people working on it than you might think. There are plausibly 100,000 ML capabilities researchers in the world (30,000 attended ICML alone) vs. 300 alignment researchers in the world, a factor of ~300:1. The scalable alignment team at OpenAI has all of ~7 people.Barely anyone is going for the throat of solving the core difficulties of scalable alignment. Many of the people who are working on alignment are doing blue-sky theory, pretty disconnected from actual ML models. Most of the rest are doing work that’s vaguely related, hoping it will somehow be useful, or working on techniques that might work now but predictably fail to work for superhuman systems.There’s no secret elite SEAL team coming to save the day. This is it. We’re not on track.If timelines are short and we don’t get our act together, we’re in a lot of trouble. Scalable alignment—aligning superhuman AGI systems—is a real, unsolved problem. It’s quite simple: current alignment techniques rely on human supervision, but as models become superhuman, humans won’t be able to reliably supervise them.But my pessimism on the current state of alignment research very much doesn’t mean I’m an Eliezer-style doomer. Quite the opposite, I’m optimistic. I think scalable alignment is a solvable problem—and it’s an ML problem, one we can do real science on as our models get more advanced. But we gotta stop fucking around. We need an effort that matches the gravity of the challenge.Alignment is not on trackA recent post estimated that there were 300 full-time technical AI safety researchers (sounds plausible to me, if we’re counting generously). By contrast, there were 30,000 attendees at ICML in 2021, a single ML conference. It seems plausible that there are ≥100,000 researchers working on ML/AI in total. That’s a ratio of ~300:1, capabilities researchers:AGI safety researchers.That ratio is a little better at the AGI labs: ~7 researchers on the scalable alignment team at OpenAI, vs. ~400 people at the company in total (and fewer researchers). But 7 alignment researchers is still, well, not that much, and those 7 also aren’t, like, OpenAI’s most legendary ML researchers. (Importantly, from my understanding, this isn’t OpenAI being evil or anything like that—OpenAI would love to hire more alignment researchers, but there just aren’t many great researchers out there focusing on this problem.)But rather than the numbers, what made this really visceral to me is. actually looking at the research. There’s very little research where I feel like “great, this is getting at the core difficulties of the problem, and they have a plan for how we might actually solve it in 5 years.”Let’s take a quick, stylized, incomplete tour of the research landscape.Paul Christiano / Alignment Research Center (ARC).Paul is the single most respected alignment researcher in most circles. He used to lead the OpenAI alignment team, and he has made usefu...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nobody’s on the ball on AGI alignment, published by leopold on March 29, 2023 on The Effective Altruism Forum.Far fewer people are working on it than you might think, and even the alignment research that is happening is very much not on track. (But it’s a solvable problem, if we get our act together.)Observing from afar, it's easy to think there's an abundance of people working on AGI safety. Everyone on your timeline is fretting about AI risk, and it seems like there is a well-funded EA-industrial-complex that has elevated this to their main issue. Maybe you've even developed a slight distaste for it all—it reminds you a bit too much of the woke and FDA bureaucrats, and Eliezer seems pretty crazy to you.That’s what I used to think too, a couple of years ago. Then I got to see things more up close. And here’s the thing: nobody’s actually on the friggin’ ball on this one!There’s far fewer people working on it than you might think. There are plausibly 100,000 ML capabilities researchers in the world (30,000 attended ICML alone) vs. 300 alignment researchers in the world, a factor of ~300:1. The scalable alignment team at OpenAI has all of ~7 people.Barely anyone is going for the throat of solving the core difficulties of scalable alignment. Many of the people who are working on alignment are doing blue-sky theory, pretty disconnected from actual ML models. Most of the rest are doing work that’s vaguely related, hoping it will somehow be useful, or working on techniques that might work now but predictably fail to work for superhuman systems.There’s no secret elite SEAL team coming to save the day. This is it. We’re not on track.If timelines are short and we don’t get our act together, we’re in a lot of trouble. Scalable alignment—aligning superhuman AGI systems—is a real, unsolved problem. It’s quite simple: current alignment techniques rely on human supervision, but as models become superhuman, humans won’t be able to reliably supervise them.But my pessimism on the current state of alignment research very much doesn’t mean I’m an Eliezer-style doomer. Quite the opposite, I’m optimistic. I think scalable alignment is a solvable problem—and it’s an ML problem, one we can do real science on as our models get more advanced. But we gotta stop fucking around. We need an effort that matches the gravity of the challenge.Alignment is not on trackA recent post estimated that there were 300 full-time technical AI safety researchers (sounds plausible to me, if we’re counting generously). By contrast, there were 30,000 attendees at ICML in 2021, a single ML conference. It seems plausible that there are ≥100,000 researchers working on ML/AI in total. That’s a ratio of ~300:1, capabilities researchers:AGI safety researchers.That ratio is a little better at the AGI labs: ~7 researchers on the scalable alignment team at OpenAI, vs. ~400 people at the company in total (and fewer researchers). But 7 alignment researchers is still, well, not that much, and those 7 also aren’t, like, OpenAI’s most legendary ML researchers. (Importantly, from my understanding, this isn’t OpenAI being evil or anything like that—OpenAI would love to hire more alignment researchers, but there just aren’t many great researchers out there focusing on this problem.)But rather than the numbers, what made this really visceral to me is. actually looking at the research. There’s very little research where I feel like “great, this is getting at the core difficulties of the problem, and they have a plan for how we might actually solve it in 5 years.”Let’s take a quick, stylized, incomplete tour of the research landscape.Paul Christiano / Alignment Research Center (ARC).Paul is the single most respected alignment researcher in most circles. He used to lead the OpenAI alignment team, and he has made usefu...]]>
leopold https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:53 None full 5400
PcDW7LybkR468pb7N_NL_EA_EA EA - FLI open letter: Pause giant AI experiments by Zach Stein-Perlman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FLI open letter: Pause giant AI experiments, published by Zach Stein-Perlman on March 29, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Zach Stein-Perlman https://forum.effectivealtruism.org/posts/PcDW7LybkR468pb7N/fli-open-letter-pause-giant-ai-experiments Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FLI open letter: Pause giant AI experiments, published by Zach Stein-Perlman on March 29, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 29 Mar 2023 06:43:18 +0000 EA - FLI open letter: Pause giant AI experiments by Zach Stein-Perlman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FLI open letter: Pause giant AI experiments, published by Zach Stein-Perlman on March 29, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FLI open letter: Pause giant AI experiments, published by Zach Stein-Perlman on March 29, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Zach Stein-Perlman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 5382
f77iuXmgiiFgurnBu_NL_EA_EA EA - Run Posts By Orgs by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Run Posts By Orgs, published by Jeff Kaufman on March 29, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/f77iuXmgiiFgurnBu/run-posts-by-orgs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Run Posts By Orgs, published by Jeff Kaufman on March 29, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 29 Mar 2023 03:15:08 +0000 EA - Run Posts By Orgs by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Run Posts By Orgs, published by Jeff Kaufman on March 29, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Run Posts By Orgs, published by Jeff Kaufman on March 29, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:24 None full 5385
JgexokNweDkTbkynG_NL_EA_EA EA - Shallow Investigation: Bacterial Meningitis by peetyxk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Investigation: Bacterial Meningitis, published by peetyxk on March 28, 2023 on The Effective Altruism Forum.This report consolidates a shallow investigation into Bacterial Meningitis - its effects and importance in global health, the current tractability and cost-effectiveness of leading interventions, and an evaluation of the overall promise of the cause. I estimate this report to be a result of about 60-70 hours of research and writing. This research was conducted as part of the Cause Innovation Bootcamp fellowship, with constant guidance from Dr. Akhil Bansal.Summary:Meningitis is an inflammation (swelling) of the protective membranes covering the brain and spinal cord. It is commonly associated with infections (e.g. bacterial meningitis, viral meningitis), but it can also have non-infectious causes. The most common symptoms include fever, headache, sensitivity to light, and neck stiffness; in most cases, meningitis is treatable by addressing the underlying cause e.g. treating the causative infection.Bacterial meningitis is important from a global health perspective - it ranks 40th on the current list of diseases in terms of total DALYs lost.4 strands of bacteria cause 50% of all meningitis-related deaths, namely meningococcus, pneumococcus, Haemophilus influenzae and group B streptococcus - all of which are vaccine-preventable.GBS (Group B Streptococcus) ranks 6th on the list of causes leading to DALYs lost in the age-group 1-10.Bacterial meningitis is heavily concentrated in the African Meningitis belt, consisting of regions in 26 countries stretching from Senegal in the West to Ethiopia in the East, and incidence is inversely related with socio-demographic index (SDI).Bacterial meningitis does not seem to be neglectedImportant steps seem to have been taken already to counter meningitis on a global scale; including WHO’s comprehensive report on “A Global Plan to Defeat Meningitis by 2030”.While the important interventions seem tractable, they seem to be less neglected than a lot of other interventions in global health, reducing their counterfactual value.Important interventions that could yield (relatively) high cost-effectiveness seem to beInstalling an integrated disease surveillance and response (IDSR) system for monitoring meningitis, andAdvocating for the acceleration of the GBS vaccine development.Major uncertainties: The interventions are still ‘moderately’ promising; for ex. If someone is uniquely positioned to accelerate GBS vaccine trials/distribution, or complete broad educational initiatives about infant healthcare/precautions preventing neonatal transmission, this might on the margins be a promising thing to do. On another note, the counterfactual neglectedness is low primarily due to WHO’s commitments in its “Roadmap to defeating meningitis by 2030” - if not followed up/held, the counterfactual value of another route to addressing meningitis could increase significantly.I. ImportanceMeningitis is an inflammation (swelling) of the protective membranes covering the brain and spinal cord. It is commonly associated with infections e.g. bacterial meningitis, viral meningitis, but it can also have non-infectious causes. The most common symptoms include fever, headache, sensitivity to light and neck stiffness; in most cases, meningitis is treatable by addressing the underlying cause e.g. an infection.Meningitis, depending on the specific pathogen (virus, bacteria, fungi etc.) is often communicable and usually transmitted through close contact. Meningitis can also be passed on by mothers to their infants, and in fact, bacterial meningitis in infants is most commonly caused by the Group B Streptococcus pathogen (GBS), passed down in the peripartum period ( thebirthing process). Since meningitis attacks the membranes of the spinal co...]]>
peetyxk https://forum.effectivealtruism.org/posts/JgexokNweDkTbkynG/shallow-investigation-bacterial-meningitis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Investigation: Bacterial Meningitis, published by peetyxk on March 28, 2023 on The Effective Altruism Forum.This report consolidates a shallow investigation into Bacterial Meningitis - its effects and importance in global health, the current tractability and cost-effectiveness of leading interventions, and an evaluation of the overall promise of the cause. I estimate this report to be a result of about 60-70 hours of research and writing. This research was conducted as part of the Cause Innovation Bootcamp fellowship, with constant guidance from Dr. Akhil Bansal.Summary:Meningitis is an inflammation (swelling) of the protective membranes covering the brain and spinal cord. It is commonly associated with infections (e.g. bacterial meningitis, viral meningitis), but it can also have non-infectious causes. The most common symptoms include fever, headache, sensitivity to light, and neck stiffness; in most cases, meningitis is treatable by addressing the underlying cause e.g. treating the causative infection.Bacterial meningitis is important from a global health perspective - it ranks 40th on the current list of diseases in terms of total DALYs lost.4 strands of bacteria cause 50% of all meningitis-related deaths, namely meningococcus, pneumococcus, Haemophilus influenzae and group B streptococcus - all of which are vaccine-preventable.GBS (Group B Streptococcus) ranks 6th on the list of causes leading to DALYs lost in the age-group 1-10.Bacterial meningitis is heavily concentrated in the African Meningitis belt, consisting of regions in 26 countries stretching from Senegal in the West to Ethiopia in the East, and incidence is inversely related with socio-demographic index (SDI).Bacterial meningitis does not seem to be neglectedImportant steps seem to have been taken already to counter meningitis on a global scale; including WHO’s comprehensive report on “A Global Plan to Defeat Meningitis by 2030”.While the important interventions seem tractable, they seem to be less neglected than a lot of other interventions in global health, reducing their counterfactual value.Important interventions that could yield (relatively) high cost-effectiveness seem to beInstalling an integrated disease surveillance and response (IDSR) system for monitoring meningitis, andAdvocating for the acceleration of the GBS vaccine development.Major uncertainties: The interventions are still ‘moderately’ promising; for ex. If someone is uniquely positioned to accelerate GBS vaccine trials/distribution, or complete broad educational initiatives about infant healthcare/precautions preventing neonatal transmission, this might on the margins be a promising thing to do. On another note, the counterfactual neglectedness is low primarily due to WHO’s commitments in its “Roadmap to defeating meningitis by 2030” - if not followed up/held, the counterfactual value of another route to addressing meningitis could increase significantly.I. ImportanceMeningitis is an inflammation (swelling) of the protective membranes covering the brain and spinal cord. It is commonly associated with infections e.g. bacterial meningitis, viral meningitis, but it can also have non-infectious causes. The most common symptoms include fever, headache, sensitivity to light and neck stiffness; in most cases, meningitis is treatable by addressing the underlying cause e.g. an infection.Meningitis, depending on the specific pathogen (virus, bacteria, fungi etc.) is often communicable and usually transmitted through close contact. Meningitis can also be passed on by mothers to their infants, and in fact, bacterial meningitis in infants is most commonly caused by the Group B Streptococcus pathogen (GBS), passed down in the peripartum period ( thebirthing process). Since meningitis attacks the membranes of the spinal co...]]>
Wed, 29 Mar 2023 00:06:32 +0000 EA - Shallow Investigation: Bacterial Meningitis by peetyxk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Investigation: Bacterial Meningitis, published by peetyxk on March 28, 2023 on The Effective Altruism Forum.This report consolidates a shallow investigation into Bacterial Meningitis - its effects and importance in global health, the current tractability and cost-effectiveness of leading interventions, and an evaluation of the overall promise of the cause. I estimate this report to be a result of about 60-70 hours of research and writing. This research was conducted as part of the Cause Innovation Bootcamp fellowship, with constant guidance from Dr. Akhil Bansal.Summary:Meningitis is an inflammation (swelling) of the protective membranes covering the brain and spinal cord. It is commonly associated with infections (e.g. bacterial meningitis, viral meningitis), but it can also have non-infectious causes. The most common symptoms include fever, headache, sensitivity to light, and neck stiffness; in most cases, meningitis is treatable by addressing the underlying cause e.g. treating the causative infection.Bacterial meningitis is important from a global health perspective - it ranks 40th on the current list of diseases in terms of total DALYs lost.4 strands of bacteria cause 50% of all meningitis-related deaths, namely meningococcus, pneumococcus, Haemophilus influenzae and group B streptococcus - all of which are vaccine-preventable.GBS (Group B Streptococcus) ranks 6th on the list of causes leading to DALYs lost in the age-group 1-10.Bacterial meningitis is heavily concentrated in the African Meningitis belt, consisting of regions in 26 countries stretching from Senegal in the West to Ethiopia in the East, and incidence is inversely related with socio-demographic index (SDI).Bacterial meningitis does not seem to be neglectedImportant steps seem to have been taken already to counter meningitis on a global scale; including WHO’s comprehensive report on “A Global Plan to Defeat Meningitis by 2030”.While the important interventions seem tractable, they seem to be less neglected than a lot of other interventions in global health, reducing their counterfactual value.Important interventions that could yield (relatively) high cost-effectiveness seem to beInstalling an integrated disease surveillance and response (IDSR) system for monitoring meningitis, andAdvocating for the acceleration of the GBS vaccine development.Major uncertainties: The interventions are still ‘moderately’ promising; for ex. If someone is uniquely positioned to accelerate GBS vaccine trials/distribution, or complete broad educational initiatives about infant healthcare/precautions preventing neonatal transmission, this might on the margins be a promising thing to do. On another note, the counterfactual neglectedness is low primarily due to WHO’s commitments in its “Roadmap to defeating meningitis by 2030” - if not followed up/held, the counterfactual value of another route to addressing meningitis could increase significantly.I. ImportanceMeningitis is an inflammation (swelling) of the protective membranes covering the brain and spinal cord. It is commonly associated with infections e.g. bacterial meningitis, viral meningitis, but it can also have non-infectious causes. The most common symptoms include fever, headache, sensitivity to light and neck stiffness; in most cases, meningitis is treatable by addressing the underlying cause e.g. an infection.Meningitis, depending on the specific pathogen (virus, bacteria, fungi etc.) is often communicable and usually transmitted through close contact. Meningitis can also be passed on by mothers to their infants, and in fact, bacterial meningitis in infants is most commonly caused by the Group B Streptococcus pathogen (GBS), passed down in the peripartum period ( thebirthing process). Since meningitis attacks the membranes of the spinal co...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Investigation: Bacterial Meningitis, published by peetyxk on March 28, 2023 on The Effective Altruism Forum.This report consolidates a shallow investigation into Bacterial Meningitis - its effects and importance in global health, the current tractability and cost-effectiveness of leading interventions, and an evaluation of the overall promise of the cause. I estimate this report to be a result of about 60-70 hours of research and writing. This research was conducted as part of the Cause Innovation Bootcamp fellowship, with constant guidance from Dr. Akhil Bansal.Summary:Meningitis is an inflammation (swelling) of the protective membranes covering the brain and spinal cord. It is commonly associated with infections (e.g. bacterial meningitis, viral meningitis), but it can also have non-infectious causes. The most common symptoms include fever, headache, sensitivity to light, and neck stiffness; in most cases, meningitis is treatable by addressing the underlying cause e.g. treating the causative infection.Bacterial meningitis is important from a global health perspective - it ranks 40th on the current list of diseases in terms of total DALYs lost.4 strands of bacteria cause 50% of all meningitis-related deaths, namely meningococcus, pneumococcus, Haemophilus influenzae and group B streptococcus - all of which are vaccine-preventable.GBS (Group B Streptococcus) ranks 6th on the list of causes leading to DALYs lost in the age-group 1-10.Bacterial meningitis is heavily concentrated in the African Meningitis belt, consisting of regions in 26 countries stretching from Senegal in the West to Ethiopia in the East, and incidence is inversely related with socio-demographic index (SDI).Bacterial meningitis does not seem to be neglectedImportant steps seem to have been taken already to counter meningitis on a global scale; including WHO’s comprehensive report on “A Global Plan to Defeat Meningitis by 2030”.While the important interventions seem tractable, they seem to be less neglected than a lot of other interventions in global health, reducing their counterfactual value.Important interventions that could yield (relatively) high cost-effectiveness seem to beInstalling an integrated disease surveillance and response (IDSR) system for monitoring meningitis, andAdvocating for the acceleration of the GBS vaccine development.Major uncertainties: The interventions are still ‘moderately’ promising; for ex. If someone is uniquely positioned to accelerate GBS vaccine trials/distribution, or complete broad educational initiatives about infant healthcare/precautions preventing neonatal transmission, this might on the margins be a promising thing to do. On another note, the counterfactual neglectedness is low primarily due to WHO’s commitments in its “Roadmap to defeating meningitis by 2030” - if not followed up/held, the counterfactual value of another route to addressing meningitis could increase significantly.I. ImportanceMeningitis is an inflammation (swelling) of the protective membranes covering the brain and spinal cord. It is commonly associated with infections e.g. bacterial meningitis, viral meningitis, but it can also have non-infectious causes. The most common symptoms include fever, headache, sensitivity to light and neck stiffness; in most cases, meningitis is treatable by addressing the underlying cause e.g. an infection.Meningitis, depending on the specific pathogen (virus, bacteria, fungi etc.) is often communicable and usually transmitted through close contact. Meningitis can also be passed on by mothers to their infants, and in fact, bacterial meningitis in infants is most commonly caused by the Group B Streptococcus pathogen (GBS), passed down in the peripartum period ( thebirthing process). Since meningitis attacks the membranes of the spinal co...]]>
peetyxk https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 34:04 None full 5391
q4ijrWYQPSG8b92aa_NL_EA_EA EA - Coaching Training Program Launch: build and upskill your practice with Tee Barnett Coaching Training (TBCT) by Tee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Coaching Training Program Launch: build and upskill your practice with Tee Barnett Coaching Training (TBCT), published by Tee on March 28, 2023 on The Effective Altruism Forum.Applications are now open to join the pilot cohort of Tee Barnett Coaching Training! You’d be a great fit for this if you’re a current or aspiring coach seeking to gain skill as practitioners and/or establish your practice.You can signal interest hereThe deadline to submit the first round of applications is April 3.We encourage people to start the process earlier because cohort coaches will asked to join and also initiate the program on a rolling basis. This will mean increasingly limited availability of cohort spots.Program OverviewSpirit of the PilotPhilosophy of ApproachIntended OutcomesMore about the Training InfrastructureProgram PricingMeet the Current CohortAnticipated QuestionsApplyProgram overviewThis pilot program is a multi-component training infrastructure for developing your own practice as a skilled coach. You could think of it as a proto-accelerator or proto-incubator for solo practitioners, which also includes access to business architecture that is essential for building and maintaining a stand-alone practice.The infrastructure targets key developmental bottlenecks of coaches, including early client acquisition, real-world practice, training and oversight by multiple senior coaches, co-created learning plans, reputational benefits, business architecture, and more.The program is a key piece of a higher-level mission to boost the total number of skilled practitioners supporting people doing scalable good, including those in EA. Tee Barnett Coaching Training (TBCT) will include:A tailored curriculum co-created with cohort coachesSystematic and consistent client referrals for engaging in real-world skill building1-to-1 coaching training calls with TeeRegular cohort calls with other cohort coachesFree access to office hours with TeePeer coaching training callsTurnkey technical systems for building and expanding your practice (partnering with Anti-Entropy)Other guidance and resources becoming a coach, setting up and/or enhancing your practicePotentially also offering observation hours and guest senior coach interactive workshops– Duration – the first cohort is expected to last about 6 months, though timelines may vary depending on the intended growth plans agreed upon with each individual cohort coach.– Pricing – costs of the program are currently reduced for the first cohort, pegged at roughly the equivalent of two personal coaching sessions per month with Tee.– Location – remote, though the option may open up for TBCT to sponsor travel and accommodations for periods of in-person coaching training. Apply HereSpirit of the pilotConcepts conveying the vibe of the program: early-stage pilot, co-creationary, integration & synthesis, complementarity, self-alignedOne-line summary: this is an early-stage coaching development program that provides infrastructural support for those in the process of establishing and enriching their coaching practice.The working design of the training infrastructure is partly a function of the early-stage nature of the program, but also a portfolio of deliberate philosophical and methodological stances that encourage coaches to collaboratively sculpt their own learning process. For example, each cohort coach’s ‘curriculum' will be the end result of a collaborative selection process between Tee and the coach themselves, with both parties iterating upon chosen materials and methods that will culminate into a highly personalized experience within the program.Cohort coaches can expect much that comes with the territory of participating in a pilot program. On the one hand, participating in the ‘founding cohort’ can be thrilling a...]]>
Tee https://forum.effectivealtruism.org/posts/q4ijrWYQPSG8b92aa/coaching-training-program-launch-build-and-upskill-your Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Coaching Training Program Launch: build and upskill your practice with Tee Barnett Coaching Training (TBCT), published by Tee on March 28, 2023 on The Effective Altruism Forum.Applications are now open to join the pilot cohort of Tee Barnett Coaching Training! You’d be a great fit for this if you’re a current or aspiring coach seeking to gain skill as practitioners and/or establish your practice.You can signal interest hereThe deadline to submit the first round of applications is April 3.We encourage people to start the process earlier because cohort coaches will asked to join and also initiate the program on a rolling basis. This will mean increasingly limited availability of cohort spots.Program OverviewSpirit of the PilotPhilosophy of ApproachIntended OutcomesMore about the Training InfrastructureProgram PricingMeet the Current CohortAnticipated QuestionsApplyProgram overviewThis pilot program is a multi-component training infrastructure for developing your own practice as a skilled coach. You could think of it as a proto-accelerator or proto-incubator for solo practitioners, which also includes access to business architecture that is essential for building and maintaining a stand-alone practice.The infrastructure targets key developmental bottlenecks of coaches, including early client acquisition, real-world practice, training and oversight by multiple senior coaches, co-created learning plans, reputational benefits, business architecture, and more.The program is a key piece of a higher-level mission to boost the total number of skilled practitioners supporting people doing scalable good, including those in EA. Tee Barnett Coaching Training (TBCT) will include:A tailored curriculum co-created with cohort coachesSystematic and consistent client referrals for engaging in real-world skill building1-to-1 coaching training calls with TeeRegular cohort calls with other cohort coachesFree access to office hours with TeePeer coaching training callsTurnkey technical systems for building and expanding your practice (partnering with Anti-Entropy)Other guidance and resources becoming a coach, setting up and/or enhancing your practicePotentially also offering observation hours and guest senior coach interactive workshops– Duration – the first cohort is expected to last about 6 months, though timelines may vary depending on the intended growth plans agreed upon with each individual cohort coach.– Pricing – costs of the program are currently reduced for the first cohort, pegged at roughly the equivalent of two personal coaching sessions per month with Tee.– Location – remote, though the option may open up for TBCT to sponsor travel and accommodations for periods of in-person coaching training. Apply HereSpirit of the pilotConcepts conveying the vibe of the program: early-stage pilot, co-creationary, integration & synthesis, complementarity, self-alignedOne-line summary: this is an early-stage coaching development program that provides infrastructural support for those in the process of establishing and enriching their coaching practice.The working design of the training infrastructure is partly a function of the early-stage nature of the program, but also a portfolio of deliberate philosophical and methodological stances that encourage coaches to collaboratively sculpt their own learning process. For example, each cohort coach’s ‘curriculum' will be the end result of a collaborative selection process between Tee and the coach themselves, with both parties iterating upon chosen materials and methods that will culminate into a highly personalized experience within the program.Cohort coaches can expect much that comes with the territory of participating in a pilot program. On the one hand, participating in the ‘founding cohort’ can be thrilling a...]]>
Tue, 28 Mar 2023 23:45:25 +0000 EA - Coaching Training Program Launch: build and upskill your practice with Tee Barnett Coaching Training (TBCT) by Tee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Coaching Training Program Launch: build and upskill your practice with Tee Barnett Coaching Training (TBCT), published by Tee on March 28, 2023 on The Effective Altruism Forum.Applications are now open to join the pilot cohort of Tee Barnett Coaching Training! You’d be a great fit for this if you’re a current or aspiring coach seeking to gain skill as practitioners and/or establish your practice.You can signal interest hereThe deadline to submit the first round of applications is April 3.We encourage people to start the process earlier because cohort coaches will asked to join and also initiate the program on a rolling basis. This will mean increasingly limited availability of cohort spots.Program OverviewSpirit of the PilotPhilosophy of ApproachIntended OutcomesMore about the Training InfrastructureProgram PricingMeet the Current CohortAnticipated QuestionsApplyProgram overviewThis pilot program is a multi-component training infrastructure for developing your own practice as a skilled coach. You could think of it as a proto-accelerator or proto-incubator for solo practitioners, which also includes access to business architecture that is essential for building and maintaining a stand-alone practice.The infrastructure targets key developmental bottlenecks of coaches, including early client acquisition, real-world practice, training and oversight by multiple senior coaches, co-created learning plans, reputational benefits, business architecture, and more.The program is a key piece of a higher-level mission to boost the total number of skilled practitioners supporting people doing scalable good, including those in EA. Tee Barnett Coaching Training (TBCT) will include:A tailored curriculum co-created with cohort coachesSystematic and consistent client referrals for engaging in real-world skill building1-to-1 coaching training calls with TeeRegular cohort calls with other cohort coachesFree access to office hours with TeePeer coaching training callsTurnkey technical systems for building and expanding your practice (partnering with Anti-Entropy)Other guidance and resources becoming a coach, setting up and/or enhancing your practicePotentially also offering observation hours and guest senior coach interactive workshops– Duration – the first cohort is expected to last about 6 months, though timelines may vary depending on the intended growth plans agreed upon with each individual cohort coach.– Pricing – costs of the program are currently reduced for the first cohort, pegged at roughly the equivalent of two personal coaching sessions per month with Tee.– Location – remote, though the option may open up for TBCT to sponsor travel and accommodations for periods of in-person coaching training. Apply HereSpirit of the pilotConcepts conveying the vibe of the program: early-stage pilot, co-creationary, integration & synthesis, complementarity, self-alignedOne-line summary: this is an early-stage coaching development program that provides infrastructural support for those in the process of establishing and enriching their coaching practice.The working design of the training infrastructure is partly a function of the early-stage nature of the program, but also a portfolio of deliberate philosophical and methodological stances that encourage coaches to collaboratively sculpt their own learning process. For example, each cohort coach’s ‘curriculum' will be the end result of a collaborative selection process between Tee and the coach themselves, with both parties iterating upon chosen materials and methods that will culminate into a highly personalized experience within the program.Cohort coaches can expect much that comes with the territory of participating in a pilot program. On the one hand, participating in the ‘founding cohort’ can be thrilling a...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Coaching Training Program Launch: build and upskill your practice with Tee Barnett Coaching Training (TBCT), published by Tee on March 28, 2023 on The Effective Altruism Forum.Applications are now open to join the pilot cohort of Tee Barnett Coaching Training! You’d be a great fit for this if you’re a current or aspiring coach seeking to gain skill as practitioners and/or establish your practice.You can signal interest hereThe deadline to submit the first round of applications is April 3.We encourage people to start the process earlier because cohort coaches will asked to join and also initiate the program on a rolling basis. This will mean increasingly limited availability of cohort spots.Program OverviewSpirit of the PilotPhilosophy of ApproachIntended OutcomesMore about the Training InfrastructureProgram PricingMeet the Current CohortAnticipated QuestionsApplyProgram overviewThis pilot program is a multi-component training infrastructure for developing your own practice as a skilled coach. You could think of it as a proto-accelerator or proto-incubator for solo practitioners, which also includes access to business architecture that is essential for building and maintaining a stand-alone practice.The infrastructure targets key developmental bottlenecks of coaches, including early client acquisition, real-world practice, training and oversight by multiple senior coaches, co-created learning plans, reputational benefits, business architecture, and more.The program is a key piece of a higher-level mission to boost the total number of skilled practitioners supporting people doing scalable good, including those in EA. Tee Barnett Coaching Training (TBCT) will include:A tailored curriculum co-created with cohort coachesSystematic and consistent client referrals for engaging in real-world skill building1-to-1 coaching training calls with TeeRegular cohort calls with other cohort coachesFree access to office hours with TeePeer coaching training callsTurnkey technical systems for building and expanding your practice (partnering with Anti-Entropy)Other guidance and resources becoming a coach, setting up and/or enhancing your practicePotentially also offering observation hours and guest senior coach interactive workshops– Duration – the first cohort is expected to last about 6 months, though timelines may vary depending on the intended growth plans agreed upon with each individual cohort coach.– Pricing – costs of the program are currently reduced for the first cohort, pegged at roughly the equivalent of two personal coaching sessions per month with Tee.– Location – remote, though the option may open up for TBCT to sponsor travel and accommodations for periods of in-person coaching training. Apply HereSpirit of the pilotConcepts conveying the vibe of the program: early-stage pilot, co-creationary, integration & synthesis, complementarity, self-alignedOne-line summary: this is an early-stage coaching development program that provides infrastructural support for those in the process of establishing and enriching their coaching practice.The working design of the training infrastructure is partly a function of the early-stage nature of the program, but also a portfolio of deliberate philosophical and methodological stances that encourage coaches to collaboratively sculpt their own learning process. For example, each cohort coach’s ‘curriculum' will be the end result of a collaborative selection process between Tee and the coach themselves, with both parties iterating upon chosen materials and methods that will culminate into a highly personalized experience within the program.Cohort coaches can expect much that comes with the territory of participating in a pilot program. On the one hand, participating in the ‘founding cohort’ can be thrilling a...]]>
Tee https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 24:17 None full 5401
TYyHpiAQ3TetwRMHC_NL_EA_EA EA - What longtermist projects would you like to see implemented? by Buhl Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What longtermist projects would you like to see implemented?, published by Buhl on March 28, 2023 on The Effective Altruism Forum.My team (Rethink Priorities’ General Longtermism Team) is aiming to incubate 2-3 longtermist projects in 2023. I’m currently collecting a longlist of project ideas, which we’ll then research and evaluate, with the aim of kicking off the strongest projects (either via an internal pilot or collaboration with an external founder).I’m interested in ideas for entrepreneurial or infrastructure projects (i.e., not research projects, though a project could be something like “create a new research institute focused on X”).Some examples to give a sense of the type of ideas we’re interested in (without necessarily claiming that these specific ideas are particularly strong): An organization that lobbies for governments to install far UVC lights in government buildings; a third-party whistleblowing entity taking reports from leading AI labs; or a remote research institute for independent researchers. You can see a list of our existing ideas here.I’ll begin reviewing the ideas on April 17, so ideas posted before then would be most helpful.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Buhl https://forum.effectivealtruism.org/posts/TYyHpiAQ3TetwRMHC/what-longtermist-projects-would-you-like-to-see-implemented Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What longtermist projects would you like to see implemented?, published by Buhl on March 28, 2023 on The Effective Altruism Forum.My team (Rethink Priorities’ General Longtermism Team) is aiming to incubate 2-3 longtermist projects in 2023. I’m currently collecting a longlist of project ideas, which we’ll then research and evaluate, with the aim of kicking off the strongest projects (either via an internal pilot or collaboration with an external founder).I’m interested in ideas for entrepreneurial or infrastructure projects (i.e., not research projects, though a project could be something like “create a new research institute focused on X”).Some examples to give a sense of the type of ideas we’re interested in (without necessarily claiming that these specific ideas are particularly strong): An organization that lobbies for governments to install far UVC lights in government buildings; a third-party whistleblowing entity taking reports from leading AI labs; or a remote research institute for independent researchers. You can see a list of our existing ideas here.I’ll begin reviewing the ideas on April 17, so ideas posted before then would be most helpful.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 28 Mar 2023 22:22:38 +0000 EA - What longtermist projects would you like to see implemented? by Buhl Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What longtermist projects would you like to see implemented?, published by Buhl on March 28, 2023 on The Effective Altruism Forum.My team (Rethink Priorities’ General Longtermism Team) is aiming to incubate 2-3 longtermist projects in 2023. I’m currently collecting a longlist of project ideas, which we’ll then research and evaluate, with the aim of kicking off the strongest projects (either via an internal pilot or collaboration with an external founder).I’m interested in ideas for entrepreneurial or infrastructure projects (i.e., not research projects, though a project could be something like “create a new research institute focused on X”).Some examples to give a sense of the type of ideas we’re interested in (without necessarily claiming that these specific ideas are particularly strong): An organization that lobbies for governments to install far UVC lights in government buildings; a third-party whistleblowing entity taking reports from leading AI labs; or a remote research institute for independent researchers. You can see a list of our existing ideas here.I’ll begin reviewing the ideas on April 17, so ideas posted before then would be most helpful.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What longtermist projects would you like to see implemented?, published by Buhl on March 28, 2023 on The Effective Altruism Forum.My team (Rethink Priorities’ General Longtermism Team) is aiming to incubate 2-3 longtermist projects in 2023. I’m currently collecting a longlist of project ideas, which we’ll then research and evaluate, with the aim of kicking off the strongest projects (either via an internal pilot or collaboration with an external founder).I’m interested in ideas for entrepreneurial or infrastructure projects (i.e., not research projects, though a project could be something like “create a new research institute focused on X”).Some examples to give a sense of the type of ideas we’re interested in (without necessarily claiming that these specific ideas are particularly strong): An organization that lobbies for governments to install far UVC lights in government buildings; a third-party whistleblowing entity taking reports from leading AI labs; or a remote research institute for independent researchers. You can see a list of our existing ideas here.I’ll begin reviewing the ideas on April 17, so ideas posted before then would be most helpful.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Buhl https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:23 None full 5388
mMgkuNSmxFWia83ej_NL_EA_EA EA - EA and LW Forum Weekly Summary (20th - 26th March 2023) by Zoe Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & LW Forum Weekly Summary (20th - 26th March 2023), published by Zoe Williams on March 27, 2023 on The Effective Altruism Forum.This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.If you'd like to receive these summaries via email, you can subscribe here.Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!Author's note: I'm currently travelling, which means:a) Today's newsletter is a shorter one - only 9 top posts are covered, though in more depth than usual.b) The next post will be on 17th April (three week gap), covering the prior three weeks at a higher karma bar.After that, we'll be back to the regular schedule.Object Level Interventions / Reviewsby EJT, CarlShulmanLinkpost for this paper, which uses standard cost-benefit analysis (CBA) with detrimental assumptions (eg. giving no value to future generations, only assessing benefits to Americans, and only assessing value from preventing existential threats) to show that even under those conditions governments should be spending much more on averting threats from nuclear war, engineered pandemics, and AI.Their analysis primarily relies on previously published estimates of risks, concluding US citizens alive today have a ~1% risk of dying from these causes in the next decade. They estimate $400B in interventions could reduce the risk by minimum 0.1 percentage points, and that using the lowest figure for the US Department of Transportation’s value of a statistical life, this would result in ~$646B in value of American lives saved.They suggest longtermists in the political sphere should change their messaging to revolve around this standard CBA-driven catastrophe policy, which is more democratically acceptable than policies relying on the cost to future generations. They suggest it would also reduce risk almost as much as a strong longtermist policy (particularly if the CBA incorporates an argument for citizens ‘altruistic willingness to pay’ ie. some level of addition for the benefit to future generations).by GiveWellThe Happier Lives Institute (HLI) has argued that if Givewell used subjective well-being (SWB) measures in their moral weights, they’d find StrongMinds more cost-effective than marginal funding to their top charities. Givewell assessed this claim and estimated StrongMinds is ~25% (5%-80% pessimistic to optimistic CI) as effective as these marginal funding opportunities when using SWB - this equates to 2.3x the effectiveness of GiveDirectly.Key differences in analysis from HLI, by size of impact, include:GiveWell assumes lower spillover effects to household members of those receiving treatment.Givewell translates decreases in depression into increases in life satisfaction at a lower rate than HLI.Givewell expects lower effect in a scaled program, and lower durations of effects (not passing a year) due to the program being only 4-8 weeks.Givewell applies downward adjustments for social desirability bias and publication bias in studies of psychotherapy.These result in an ~83% discount in the effectiveness vs. HLI’s analysis. For all points except the fourth, two upcoming RCTs from StrongMinds will provide better data than currently exists.HLI has posted a thorough response in the comments, noting which claims they agree / disagree with and why (5% agree, 45% sympathetic to some discount but unsure of magnitude, 35% unsympathetic but limited evidence, and 15% disagree on the basis of current evidence).Givewell also note for context that HLI’s original estimates imply that a donor would pick offering StrongM...]]>
Zoe Williams https://forum.effectivealtruism.org/posts/mMgkuNSmxFWia83ej/ea-and-lw-forum-weekly-summary-20th-26th-march-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & LW Forum Weekly Summary (20th - 26th March 2023), published by Zoe Williams on March 27, 2023 on The Effective Altruism Forum.This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.If you'd like to receive these summaries via email, you can subscribe here.Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!Author's note: I'm currently travelling, which means:a) Today's newsletter is a shorter one - only 9 top posts are covered, though in more depth than usual.b) The next post will be on 17th April (three week gap), covering the prior three weeks at a higher karma bar.After that, we'll be back to the regular schedule.Object Level Interventions / Reviewsby EJT, CarlShulmanLinkpost for this paper, which uses standard cost-benefit analysis (CBA) with detrimental assumptions (eg. giving no value to future generations, only assessing benefits to Americans, and only assessing value from preventing existential threats) to show that even under those conditions governments should be spending much more on averting threats from nuclear war, engineered pandemics, and AI.Their analysis primarily relies on previously published estimates of risks, concluding US citizens alive today have a ~1% risk of dying from these causes in the next decade. They estimate $400B in interventions could reduce the risk by minimum 0.1 percentage points, and that using the lowest figure for the US Department of Transportation’s value of a statistical life, this would result in ~$646B in value of American lives saved.They suggest longtermists in the political sphere should change their messaging to revolve around this standard CBA-driven catastrophe policy, which is more democratically acceptable than policies relying on the cost to future generations. They suggest it would also reduce risk almost as much as a strong longtermist policy (particularly if the CBA incorporates an argument for citizens ‘altruistic willingness to pay’ ie. some level of addition for the benefit to future generations).by GiveWellThe Happier Lives Institute (HLI) has argued that if Givewell used subjective well-being (SWB) measures in their moral weights, they’d find StrongMinds more cost-effective than marginal funding to their top charities. Givewell assessed this claim and estimated StrongMinds is ~25% (5%-80% pessimistic to optimistic CI) as effective as these marginal funding opportunities when using SWB - this equates to 2.3x the effectiveness of GiveDirectly.Key differences in analysis from HLI, by size of impact, include:GiveWell assumes lower spillover effects to household members of those receiving treatment.Givewell translates decreases in depression into increases in life satisfaction at a lower rate than HLI.Givewell expects lower effect in a scaled program, and lower durations of effects (not passing a year) due to the program being only 4-8 weeks.Givewell applies downward adjustments for social desirability bias and publication bias in studies of psychotherapy.These result in an ~83% discount in the effectiveness vs. HLI’s analysis. For all points except the fourth, two upcoming RCTs from StrongMinds will provide better data than currently exists.HLI has posted a thorough response in the comments, noting which claims they agree / disagree with and why (5% agree, 45% sympathetic to some discount but unsure of magnitude, 35% unsympathetic but limited evidence, and 15% disagree on the basis of current evidence).Givewell also note for context that HLI’s original estimates imply that a donor would pick offering StrongM...]]>
Tue, 28 Mar 2023 21:33:09 +0000 EA - EA and LW Forum Weekly Summary (20th - 26th March 2023) by Zoe Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & LW Forum Weekly Summary (20th - 26th March 2023), published by Zoe Williams on March 27, 2023 on The Effective Altruism Forum.This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.If you'd like to receive these summaries via email, you can subscribe here.Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!Author's note: I'm currently travelling, which means:a) Today's newsletter is a shorter one - only 9 top posts are covered, though in more depth than usual.b) The next post will be on 17th April (three week gap), covering the prior three weeks at a higher karma bar.After that, we'll be back to the regular schedule.Object Level Interventions / Reviewsby EJT, CarlShulmanLinkpost for this paper, which uses standard cost-benefit analysis (CBA) with detrimental assumptions (eg. giving no value to future generations, only assessing benefits to Americans, and only assessing value from preventing existential threats) to show that even under those conditions governments should be spending much more on averting threats from nuclear war, engineered pandemics, and AI.Their analysis primarily relies on previously published estimates of risks, concluding US citizens alive today have a ~1% risk of dying from these causes in the next decade. They estimate $400B in interventions could reduce the risk by minimum 0.1 percentage points, and that using the lowest figure for the US Department of Transportation’s value of a statistical life, this would result in ~$646B in value of American lives saved.They suggest longtermists in the political sphere should change their messaging to revolve around this standard CBA-driven catastrophe policy, which is more democratically acceptable than policies relying on the cost to future generations. They suggest it would also reduce risk almost as much as a strong longtermist policy (particularly if the CBA incorporates an argument for citizens ‘altruistic willingness to pay’ ie. some level of addition for the benefit to future generations).by GiveWellThe Happier Lives Institute (HLI) has argued that if Givewell used subjective well-being (SWB) measures in their moral weights, they’d find StrongMinds more cost-effective than marginal funding to their top charities. Givewell assessed this claim and estimated StrongMinds is ~25% (5%-80% pessimistic to optimistic CI) as effective as these marginal funding opportunities when using SWB - this equates to 2.3x the effectiveness of GiveDirectly.Key differences in analysis from HLI, by size of impact, include:GiveWell assumes lower spillover effects to household members of those receiving treatment.Givewell translates decreases in depression into increases in life satisfaction at a lower rate than HLI.Givewell expects lower effect in a scaled program, and lower durations of effects (not passing a year) due to the program being only 4-8 weeks.Givewell applies downward adjustments for social desirability bias and publication bias in studies of psychotherapy.These result in an ~83% discount in the effectiveness vs. HLI’s analysis. For all points except the fourth, two upcoming RCTs from StrongMinds will provide better data than currently exists.HLI has posted a thorough response in the comments, noting which claims they agree / disagree with and why (5% agree, 45% sympathetic to some discount but unsure of magnitude, 35% unsympathetic but limited evidence, and 15% disagree on the basis of current evidence).Givewell also note for context that HLI’s original estimates imply that a donor would pick offering StrongM...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & LW Forum Weekly Summary (20th - 26th March 2023), published by Zoe Williams on March 27, 2023 on The Effective Altruism Forum.This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.If you'd like to receive these summaries via email, you can subscribe here.Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!Author's note: I'm currently travelling, which means:a) Today's newsletter is a shorter one - only 9 top posts are covered, though in more depth than usual.b) The next post will be on 17th April (three week gap), covering the prior three weeks at a higher karma bar.After that, we'll be back to the regular schedule.Object Level Interventions / Reviewsby EJT, CarlShulmanLinkpost for this paper, which uses standard cost-benefit analysis (CBA) with detrimental assumptions (eg. giving no value to future generations, only assessing benefits to Americans, and only assessing value from preventing existential threats) to show that even under those conditions governments should be spending much more on averting threats from nuclear war, engineered pandemics, and AI.Their analysis primarily relies on previously published estimates of risks, concluding US citizens alive today have a ~1% risk of dying from these causes in the next decade. They estimate $400B in interventions could reduce the risk by minimum 0.1 percentage points, and that using the lowest figure for the US Department of Transportation’s value of a statistical life, this would result in ~$646B in value of American lives saved.They suggest longtermists in the political sphere should change their messaging to revolve around this standard CBA-driven catastrophe policy, which is more democratically acceptable than policies relying on the cost to future generations. They suggest it would also reduce risk almost as much as a strong longtermist policy (particularly if the CBA incorporates an argument for citizens ‘altruistic willingness to pay’ ie. some level of addition for the benefit to future generations).by GiveWellThe Happier Lives Institute (HLI) has argued that if Givewell used subjective well-being (SWB) measures in their moral weights, they’d find StrongMinds more cost-effective than marginal funding to their top charities. Givewell assessed this claim and estimated StrongMinds is ~25% (5%-80% pessimistic to optimistic CI) as effective as these marginal funding opportunities when using SWB - this equates to 2.3x the effectiveness of GiveDirectly.Key differences in analysis from HLI, by size of impact, include:GiveWell assumes lower spillover effects to household members of those receiving treatment.Givewell translates decreases in depression into increases in life satisfaction at a lower rate than HLI.Givewell expects lower effect in a scaled program, and lower durations of effects (not passing a year) due to the program being only 4-8 weeks.Givewell applies downward adjustments for social desirability bias and publication bias in studies of psychotherapy.These result in an ~83% discount in the effectiveness vs. HLI’s analysis. For all points except the fourth, two upcoming RCTs from StrongMinds will provide better data than currently exists.HLI has posted a thorough response in the comments, noting which claims they agree / disagree with and why (5% agree, 45% sympathetic to some discount but unsure of magnitude, 35% unsympathetic but limited evidence, and 15% disagree on the basis of current evidence).Givewell also note for context that HLI’s original estimates imply that a donor would pick offering StrongM...]]>
Zoe Williams https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:29 None full 5380
i7aKatsck7x3aLoiH_NL_EA_EA EA - The Prospect of an AI Winter by Erich Grunewald Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Prospect of an AI Winter, published by Erich Grunewald on March 27, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Erich Grunewald https://forum.effectivealtruism.org/posts/i7aKatsck7x3aLoiH/the-prospect-of-an-ai-winter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Prospect of an AI Winter, published by Erich Grunewald on March 27, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 28 Mar 2023 00:53:09 +0000 EA - The Prospect of an AI Winter by Erich Grunewald Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Prospect of an AI Winter, published by Erich Grunewald on March 27, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Prospect of an AI Winter, published by Erich Grunewald on March 27, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Erich Grunewald https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:25 None full 5376
Si3H2oM8tYwcnBWdS_NL_EA_EA EA - New blog: Planned Obsolescence by Ajeya Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New blog: Planned Obsolescence, published by Ajeya on March 27, 2023 on The Effective Altruism Forum.We (Kelsey and Ajeya) just launched a new blog about AI futurism and AI alignment called Planned Obsolescence. If you’re interested, you can check it out here.Both of us have thought a fair bit about what we see as the biggest challenges in technical work and in policy to make AI go well, but a lot of our thinking isn’t written up, or is embedded in long technical reports. This is an effort to make our thinking more accessible. That means it’s mostly aiming at a broader audience than LessWrong and the EA Forum, although some of you might still find some of the posts interesting.So far we have seven posts:What we're doing here"Aligned" shouldn't be a synonym for "good"Situational awarenessPlaying the training gameTraining AIs to help us align AIsAlignment researchers disagree a lotThe ethics of AI red-teamingThanks to ilzolende for formatting these posts for publication. Each post has an accompanying audio version generated by a voice synthesis model trained on the author's voice using Descript Overdub.You can submit questions or comments to mailbox@planned-obsolescence.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ajeya https://forum.effectivealtruism.org/posts/Si3H2oM8tYwcnBWdS/new-blog-planned-obsolescence Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New blog: Planned Obsolescence, published by Ajeya on March 27, 2023 on The Effective Altruism Forum.We (Kelsey and Ajeya) just launched a new blog about AI futurism and AI alignment called Planned Obsolescence. If you’re interested, you can check it out here.Both of us have thought a fair bit about what we see as the biggest challenges in technical work and in policy to make AI go well, but a lot of our thinking isn’t written up, or is embedded in long technical reports. This is an effort to make our thinking more accessible. That means it’s mostly aiming at a broader audience than LessWrong and the EA Forum, although some of you might still find some of the posts interesting.So far we have seven posts:What we're doing here"Aligned" shouldn't be a synonym for "good"Situational awarenessPlaying the training gameTraining AIs to help us align AIsAlignment researchers disagree a lotThe ethics of AI red-teamingThanks to ilzolende for formatting these posts for publication. Each post has an accompanying audio version generated by a voice synthesis model trained on the author's voice using Descript Overdub.You can submit questions or comments to mailbox@planned-obsolescence.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 27 Mar 2023 21:10:58 +0000 EA - New blog: Planned Obsolescence by Ajeya Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New blog: Planned Obsolescence, published by Ajeya on March 27, 2023 on The Effective Altruism Forum.We (Kelsey and Ajeya) just launched a new blog about AI futurism and AI alignment called Planned Obsolescence. If you’re interested, you can check it out here.Both of us have thought a fair bit about what we see as the biggest challenges in technical work and in policy to make AI go well, but a lot of our thinking isn’t written up, or is embedded in long technical reports. This is an effort to make our thinking more accessible. That means it’s mostly aiming at a broader audience than LessWrong and the EA Forum, although some of you might still find some of the posts interesting.So far we have seven posts:What we're doing here"Aligned" shouldn't be a synonym for "good"Situational awarenessPlaying the training gameTraining AIs to help us align AIsAlignment researchers disagree a lotThe ethics of AI red-teamingThanks to ilzolende for formatting these posts for publication. Each post has an accompanying audio version generated by a voice synthesis model trained on the author's voice using Descript Overdub.You can submit questions or comments to mailbox@planned-obsolescence.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New blog: Planned Obsolescence, published by Ajeya on March 27, 2023 on The Effective Altruism Forum.We (Kelsey and Ajeya) just launched a new blog about AI futurism and AI alignment called Planned Obsolescence. If you’re interested, you can check it out here.Both of us have thought a fair bit about what we see as the biggest challenges in technical work and in policy to make AI go well, but a lot of our thinking isn’t written up, or is embedded in long technical reports. This is an effort to make our thinking more accessible. That means it’s mostly aiming at a broader audience than LessWrong and the EA Forum, although some of you might still find some of the posts interesting.So far we have seven posts:What we're doing here"Aligned" shouldn't be a synonym for "good"Situational awarenessPlaying the training gameTraining AIs to help us align AIsAlignment researchers disagree a lotThe ethics of AI red-teamingThanks to ilzolende for formatting these posts for publication. Each post has an accompanying audio version generated by a voice synthesis model trained on the author's voice using Descript Overdub.You can submit questions or comments to mailbox@planned-obsolescence.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ajeya https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:25 None full 5372
YLLtdNBJNbhdkopG7_NL_EA_EA EA - Casting the Decisive Vote by Toby Ord Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Casting the Decisive Vote, published by Toby Ord on March 27, 2023 on The Effective Altruism Forum.The moral value of voting is a perennial topic in EA. This piece shows that in any election that isn't a forgone conclusion, the chance of your vote being decisive can't be much lower than 1 in the number of voters. So voting will be worth it around the point where the value your preferred candidate would bring to the average citizen exceeds the cost of you voting.What is the chance your vote changes the outcome of an election? We know it is low, but how low?In particular, how does it compare with an intuitive baseline of a 1 in n chance, where n is the number of voters? This baseline is an important landmark not only because it is so intuitive, but because it is roughly the threshold needed for voting to be justified in terms of the good it produces for the members of the community (since the total benefit is also going to be proportional to n).Some political scientists have tried to estimate it with simplified theoretical models involving random voting. Depending on their assumptions, this has suggested it is much higher than the baseline — roughly 1 in the square root of n (Banzhaf 1965) — or that it is extraordinarily lower — something like 1 in 10^2659 for a US presidential election (Brennan 2011).Statisticians have attempted to determine the chance of a vote being decisive for particular elections using detailed empirical modelling, with data from previous elections and contemporaneous polls. For example, Gelman et al (2010) use such a model to estimate that an average voter had a 1 in 60 million chance of changing the result of the 2008 US presidential election, which is about 3 times higher than the baseline.In contrast, I’ll give a simple method that depends on almost no assumptions or data, and provides a floor for how low this probability can be. It will calculate this using just two inputs: the number of voters, n, and the probability of the underdog winning, p_u.The method works for any two-candidate election that uses simple majority. So it wouldn’t work for the US presidential election, but would work for your chance of being decisive within your state, and could be combined with estimates that state is decisive nationally. It also applies for many minor ‘elections’ you may encounter, such as the chance of your vote being decisive on a committee.We start by considering a probability distribution over what share of the vote a candidate will get, from 0% to 100%. In theory, this distribution could have any shape, but in practice it will almost always have a single peak (which could be at one end, or somewhere in between). We will assume that the probability distribution over vote share has this shape (that it is ‘unimodal’) and this is the only substantive assumption we’ll make.We will treat this as the probability distribution of the votes a candidate gets before factoring in your own vote. If there is an even number of votes (before yours) then your vote matters only if the vote shares are tied. In that case, which way you vote decides the election. If there is an odd number of votes (before yours), it is a little more complex, but works out about the same: Before your vote, one candidate has one fewer vote. Your vote decides whether they lose or tie, so is worth half an election. But because there are two different ways the candidates could be one vote apart (candidate A has one fewer or candidate B has one fewer), you are about twice as likely to end up in this situation, so have the same expected impact. For ease of presentation I’ll assume there is an even number of voters other than you, but nothing turns on this.(In real elections, you may also have to worry about probabilistic recounts, but if you do the analysis, these don’t substantivel...]]>
Toby Ord https://forum.effectivealtruism.org/posts/YLLtdNBJNbhdkopG7/casting-the-decisive-vote Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Casting the Decisive Vote, published by Toby Ord on March 27, 2023 on The Effective Altruism Forum.The moral value of voting is a perennial topic in EA. This piece shows that in any election that isn't a forgone conclusion, the chance of your vote being decisive can't be much lower than 1 in the number of voters. So voting will be worth it around the point where the value your preferred candidate would bring to the average citizen exceeds the cost of you voting.What is the chance your vote changes the outcome of an election? We know it is low, but how low?In particular, how does it compare with an intuitive baseline of a 1 in n chance, where n is the number of voters? This baseline is an important landmark not only because it is so intuitive, but because it is roughly the threshold needed for voting to be justified in terms of the good it produces for the members of the community (since the total benefit is also going to be proportional to n).Some political scientists have tried to estimate it with simplified theoretical models involving random voting. Depending on their assumptions, this has suggested it is much higher than the baseline — roughly 1 in the square root of n (Banzhaf 1965) — or that it is extraordinarily lower — something like 1 in 10^2659 for a US presidential election (Brennan 2011).Statisticians have attempted to determine the chance of a vote being decisive for particular elections using detailed empirical modelling, with data from previous elections and contemporaneous polls. For example, Gelman et al (2010) use such a model to estimate that an average voter had a 1 in 60 million chance of changing the result of the 2008 US presidential election, which is about 3 times higher than the baseline.In contrast, I’ll give a simple method that depends on almost no assumptions or data, and provides a floor for how low this probability can be. It will calculate this using just two inputs: the number of voters, n, and the probability of the underdog winning, p_u.The method works for any two-candidate election that uses simple majority. So it wouldn’t work for the US presidential election, but would work for your chance of being decisive within your state, and could be combined with estimates that state is decisive nationally. It also applies for many minor ‘elections’ you may encounter, such as the chance of your vote being decisive on a committee.We start by considering a probability distribution over what share of the vote a candidate will get, from 0% to 100%. In theory, this distribution could have any shape, but in practice it will almost always have a single peak (which could be at one end, or somewhere in between). We will assume that the probability distribution over vote share has this shape (that it is ‘unimodal’) and this is the only substantive assumption we’ll make.We will treat this as the probability distribution of the votes a candidate gets before factoring in your own vote. If there is an even number of votes (before yours) then your vote matters only if the vote shares are tied. In that case, which way you vote decides the election. If there is an odd number of votes (before yours), it is a little more complex, but works out about the same: Before your vote, one candidate has one fewer vote. Your vote decides whether they lose or tie, so is worth half an election. But because there are two different ways the candidates could be one vote apart (candidate A has one fewer or candidate B has one fewer), you are about twice as likely to end up in this situation, so have the same expected impact. For ease of presentation I’ll assume there is an even number of voters other than you, but nothing turns on this.(In real elections, you may also have to worry about probabilistic recounts, but if you do the analysis, these don’t substantivel...]]>
Mon, 27 Mar 2023 17:41:11 +0000 EA - Casting the Decisive Vote by Toby Ord Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Casting the Decisive Vote, published by Toby Ord on March 27, 2023 on The Effective Altruism Forum.The moral value of voting is a perennial topic in EA. This piece shows that in any election that isn't a forgone conclusion, the chance of your vote being decisive can't be much lower than 1 in the number of voters. So voting will be worth it around the point where the value your preferred candidate would bring to the average citizen exceeds the cost of you voting.What is the chance your vote changes the outcome of an election? We know it is low, but how low?In particular, how does it compare with an intuitive baseline of a 1 in n chance, where n is the number of voters? This baseline is an important landmark not only because it is so intuitive, but because it is roughly the threshold needed for voting to be justified in terms of the good it produces for the members of the community (since the total benefit is also going to be proportional to n).Some political scientists have tried to estimate it with simplified theoretical models involving random voting. Depending on their assumptions, this has suggested it is much higher than the baseline — roughly 1 in the square root of n (Banzhaf 1965) — or that it is extraordinarily lower — something like 1 in 10^2659 for a US presidential election (Brennan 2011).Statisticians have attempted to determine the chance of a vote being decisive for particular elections using detailed empirical modelling, with data from previous elections and contemporaneous polls. For example, Gelman et al (2010) use such a model to estimate that an average voter had a 1 in 60 million chance of changing the result of the 2008 US presidential election, which is about 3 times higher than the baseline.In contrast, I’ll give a simple method that depends on almost no assumptions or data, and provides a floor for how low this probability can be. It will calculate this using just two inputs: the number of voters, n, and the probability of the underdog winning, p_u.The method works for any two-candidate election that uses simple majority. So it wouldn’t work for the US presidential election, but would work for your chance of being decisive within your state, and could be combined with estimates that state is decisive nationally. It also applies for many minor ‘elections’ you may encounter, such as the chance of your vote being decisive on a committee.We start by considering a probability distribution over what share of the vote a candidate will get, from 0% to 100%. In theory, this distribution could have any shape, but in practice it will almost always have a single peak (which could be at one end, or somewhere in between). We will assume that the probability distribution over vote share has this shape (that it is ‘unimodal’) and this is the only substantive assumption we’ll make.We will treat this as the probability distribution of the votes a candidate gets before factoring in your own vote. If there is an even number of votes (before yours) then your vote matters only if the vote shares are tied. In that case, which way you vote decides the election. If there is an odd number of votes (before yours), it is a little more complex, but works out about the same: Before your vote, one candidate has one fewer vote. Your vote decides whether they lose or tie, so is worth half an election. But because there are two different ways the candidates could be one vote apart (candidate A has one fewer or candidate B has one fewer), you are about twice as likely to end up in this situation, so have the same expected impact. For ease of presentation I’ll assume there is an even number of voters other than you, but nothing turns on this.(In real elections, you may also have to worry about probabilistic recounts, but if you do the analysis, these don’t substantivel...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Casting the Decisive Vote, published by Toby Ord on March 27, 2023 on The Effective Altruism Forum.The moral value of voting is a perennial topic in EA. This piece shows that in any election that isn't a forgone conclusion, the chance of your vote being decisive can't be much lower than 1 in the number of voters. So voting will be worth it around the point where the value your preferred candidate would bring to the average citizen exceeds the cost of you voting.What is the chance your vote changes the outcome of an election? We know it is low, but how low?In particular, how does it compare with an intuitive baseline of a 1 in n chance, where n is the number of voters? This baseline is an important landmark not only because it is so intuitive, but because it is roughly the threshold needed for voting to be justified in terms of the good it produces for the members of the community (since the total benefit is also going to be proportional to n).Some political scientists have tried to estimate it with simplified theoretical models involving random voting. Depending on their assumptions, this has suggested it is much higher than the baseline — roughly 1 in the square root of n (Banzhaf 1965) — or that it is extraordinarily lower — something like 1 in 10^2659 for a US presidential election (Brennan 2011).Statisticians have attempted to determine the chance of a vote being decisive for particular elections using detailed empirical modelling, with data from previous elections and contemporaneous polls. For example, Gelman et al (2010) use such a model to estimate that an average voter had a 1 in 60 million chance of changing the result of the 2008 US presidential election, which is about 3 times higher than the baseline.In contrast, I’ll give a simple method that depends on almost no assumptions or data, and provides a floor for how low this probability can be. It will calculate this using just two inputs: the number of voters, n, and the probability of the underdog winning, p_u.The method works for any two-candidate election that uses simple majority. So it wouldn’t work for the US presidential election, but would work for your chance of being decisive within your state, and could be combined with estimates that state is decisive nationally. It also applies for many minor ‘elections’ you may encounter, such as the chance of your vote being decisive on a committee.We start by considering a probability distribution over what share of the vote a candidate will get, from 0% to 100%. In theory, this distribution could have any shape, but in practice it will almost always have a single peak (which could be at one end, or somewhere in between). We will assume that the probability distribution over vote share has this shape (that it is ‘unimodal’) and this is the only substantive assumption we’ll make.We will treat this as the probability distribution of the votes a candidate gets before factoring in your own vote. If there is an even number of votes (before yours) then your vote matters only if the vote shares are tied. In that case, which way you vote decides the election. If there is an odd number of votes (before yours), it is a little more complex, but works out about the same: Before your vote, one candidate has one fewer vote. Your vote decides whether they lose or tie, so is worth half an election. But because there are two different ways the candidates could be one vote apart (candidate A has one fewer or candidate B has one fewer), you are about twice as likely to end up in this situation, so have the same expected impact. For ease of presentation I’ll assume there is an even number of voters other than you, but nothing turns on this.(In real elections, you may also have to worry about probabilistic recounts, but if you do the analysis, these don’t substantivel...]]>
Toby Ord https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:33 None full 5374
MP9qDZCXMaTJhiJ9u_NL_EA_EA EA - EA is three radical ideas I want to protect by Peter Wildeford Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is three radical ideas I want to protect, published by Peter Wildeford on March 27, 2023 on The Effective Altruism Forum.Context note: This is more of an emotional piece meant to capture feelings and reservations, rather than a logical piece meant to make a persuasive point. This is a very personal piece and only represents my own views, and does not necessarily represent the views of anyone else or any institution I may represent.It’s been a rough past five months for effective altruism.Because of this, understandably many people are questioning their connection and commitment to the movement and questioning whether “effective altruism” is still a brand or set of ideas worth promoting. I’ve heard some suggest it might be better to instead promote other brands and perhaps even abandon promotion of “effective altruism” altogether.I could see ways in which this is a good move. Ultimately I want to do whatever is most impactful. However, I worry that moving away from effective altruism could make us lose some of what I think makes the ideas and community so special and what drew me to the community more than ten years ago.Essentially, effective altruism contains three radical ideas that I don’t easily find in other communities. These three ideas are ideas I want to protect.Radical empathyHumanity has long had a fairly narrow moral circle. Radical Empathy is the idea that there are many groups of people, or other entities, that are worthy of moral concern even if they don't look or act like us. Moreover, it’s important to deliberately identify all entities worthy of moral concern so that we can ensure they are protected. I find effective altruism to be unique in extending moral concern to not just traditionally neglected farm animals and future humans (very important) – but also to invertebrates and potential digital minds. Effective altruists are also unique in trying to intentionally understand who might matter and why and actually incorporating this into the process of discovering how to best help the world. Asking the question "Who might matter that we currently neglect?" is a key question that is asked way too rarely.We understand that while it’s ok to have special concern for family and friends, we should generally aim to make altruistic decisions based on impartiality, not weighing people differently just because they are at a different level of geographic distance, a different level of temporal distance, a different species, or run cognition on a different substrate.I worry that if we were to promote individual subcomponents of effective altruism, like pandemic preparedness or AI risk, we might not end up promoting radical empathy and we might end up missing entire classes of entities that matter. For example, I worry that one more subtle form of misaligned AI might be an AI that treats humans ok but adopts common human views on nonhuman animal welfare and perpetuates factory farming or abuse of a massive number of digital minds. The fact that effective altruism has somehow created a lot of AI developers that avoid eating meat and care about nonhuman animals is a big and fairly unexpected win. I think only some weird movement that somehow combined factory farming prevention with AI risk prevention could’ve created that.Scope sensitivityI also really like that EAs are willing to “shut up and multiply”. We’re scope sensitive. We’re cause neutral. Nearly everyone else in the world is not. Many people pick ways to improve the world based on vibes or personal experience, rather than through a systematic search of how they can best use their resources. Effective altruism understands that resources are limited and that we have to make hard choices between potential interventions and help 100 people instead of 10, even if helping the 10 people feels as or m...]]>
Peter Wildeford https://forum.effectivealtruism.org/posts/MP9qDZCXMaTJhiJ9u/ea-is-three-radical-ideas-i-want-to-protect Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is three radical ideas I want to protect, published by Peter Wildeford on March 27, 2023 on The Effective Altruism Forum.Context note: This is more of an emotional piece meant to capture feelings and reservations, rather than a logical piece meant to make a persuasive point. This is a very personal piece and only represents my own views, and does not necessarily represent the views of anyone else or any institution I may represent.It’s been a rough past five months for effective altruism.Because of this, understandably many people are questioning their connection and commitment to the movement and questioning whether “effective altruism” is still a brand or set of ideas worth promoting. I’ve heard some suggest it might be better to instead promote other brands and perhaps even abandon promotion of “effective altruism” altogether.I could see ways in which this is a good move. Ultimately I want to do whatever is most impactful. However, I worry that moving away from effective altruism could make us lose some of what I think makes the ideas and community so special and what drew me to the community more than ten years ago.Essentially, effective altruism contains three radical ideas that I don’t easily find in other communities. These three ideas are ideas I want to protect.Radical empathyHumanity has long had a fairly narrow moral circle. Radical Empathy is the idea that there are many groups of people, or other entities, that are worthy of moral concern even if they don't look or act like us. Moreover, it’s important to deliberately identify all entities worthy of moral concern so that we can ensure they are protected. I find effective altruism to be unique in extending moral concern to not just traditionally neglected farm animals and future humans (very important) – but also to invertebrates and potential digital minds. Effective altruists are also unique in trying to intentionally understand who might matter and why and actually incorporating this into the process of discovering how to best help the world. Asking the question "Who might matter that we currently neglect?" is a key question that is asked way too rarely.We understand that while it’s ok to have special concern for family and friends, we should generally aim to make altruistic decisions based on impartiality, not weighing people differently just because they are at a different level of geographic distance, a different level of temporal distance, a different species, or run cognition on a different substrate.I worry that if we were to promote individual subcomponents of effective altruism, like pandemic preparedness or AI risk, we might not end up promoting radical empathy and we might end up missing entire classes of entities that matter. For example, I worry that one more subtle form of misaligned AI might be an AI that treats humans ok but adopts common human views on nonhuman animal welfare and perpetuates factory farming or abuse of a massive number of digital minds. The fact that effective altruism has somehow created a lot of AI developers that avoid eating meat and care about nonhuman animals is a big and fairly unexpected win. I think only some weird movement that somehow combined factory farming prevention with AI risk prevention could’ve created that.Scope sensitivityI also really like that EAs are willing to “shut up and multiply”. We’re scope sensitive. We’re cause neutral. Nearly everyone else in the world is not. Many people pick ways to improve the world based on vibes or personal experience, rather than through a systematic search of how they can best use their resources. Effective altruism understands that resources are limited and that we have to make hard choices between potential interventions and help 100 people instead of 10, even if helping the 10 people feels as or m...]]>
Mon, 27 Mar 2023 15:57:04 +0000 EA - EA is three radical ideas I want to protect by Peter Wildeford Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is three radical ideas I want to protect, published by Peter Wildeford on March 27, 2023 on The Effective Altruism Forum.Context note: This is more of an emotional piece meant to capture feelings and reservations, rather than a logical piece meant to make a persuasive point. This is a very personal piece and only represents my own views, and does not necessarily represent the views of anyone else or any institution I may represent.It’s been a rough past five months for effective altruism.Because of this, understandably many people are questioning their connection and commitment to the movement and questioning whether “effective altruism” is still a brand or set of ideas worth promoting. I’ve heard some suggest it might be better to instead promote other brands and perhaps even abandon promotion of “effective altruism” altogether.I could see ways in which this is a good move. Ultimately I want to do whatever is most impactful. However, I worry that moving away from effective altruism could make us lose some of what I think makes the ideas and community so special and what drew me to the community more than ten years ago.Essentially, effective altruism contains three radical ideas that I don’t easily find in other communities. These three ideas are ideas I want to protect.Radical empathyHumanity has long had a fairly narrow moral circle. Radical Empathy is the idea that there are many groups of people, or other entities, that are worthy of moral concern even if they don't look or act like us. Moreover, it’s important to deliberately identify all entities worthy of moral concern so that we can ensure they are protected. I find effective altruism to be unique in extending moral concern to not just traditionally neglected farm animals and future humans (very important) – but also to invertebrates and potential digital minds. Effective altruists are also unique in trying to intentionally understand who might matter and why and actually incorporating this into the process of discovering how to best help the world. Asking the question "Who might matter that we currently neglect?" is a key question that is asked way too rarely.We understand that while it’s ok to have special concern for family and friends, we should generally aim to make altruistic decisions based on impartiality, not weighing people differently just because they are at a different level of geographic distance, a different level of temporal distance, a different species, or run cognition on a different substrate.I worry that if we were to promote individual subcomponents of effective altruism, like pandemic preparedness or AI risk, we might not end up promoting radical empathy and we might end up missing entire classes of entities that matter. For example, I worry that one more subtle form of misaligned AI might be an AI that treats humans ok but adopts common human views on nonhuman animal welfare and perpetuates factory farming or abuse of a massive number of digital minds. The fact that effective altruism has somehow created a lot of AI developers that avoid eating meat and care about nonhuman animals is a big and fairly unexpected win. I think only some weird movement that somehow combined factory farming prevention with AI risk prevention could’ve created that.Scope sensitivityI also really like that EAs are willing to “shut up and multiply”. We’re scope sensitive. We’re cause neutral. Nearly everyone else in the world is not. Many people pick ways to improve the world based on vibes or personal experience, rather than through a systematic search of how they can best use their resources. Effective altruism understands that resources are limited and that we have to make hard choices between potential interventions and help 100 people instead of 10, even if helping the 10 people feels as or m...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is three radical ideas I want to protect, published by Peter Wildeford on March 27, 2023 on The Effective Altruism Forum.Context note: This is more of an emotional piece meant to capture feelings and reservations, rather than a logical piece meant to make a persuasive point. This is a very personal piece and only represents my own views, and does not necessarily represent the views of anyone else or any institution I may represent.It’s been a rough past five months for effective altruism.Because of this, understandably many people are questioning their connection and commitment to the movement and questioning whether “effective altruism” is still a brand or set of ideas worth promoting. I’ve heard some suggest it might be better to instead promote other brands and perhaps even abandon promotion of “effective altruism” altogether.I could see ways in which this is a good move. Ultimately I want to do whatever is most impactful. However, I worry that moving away from effective altruism could make us lose some of what I think makes the ideas and community so special and what drew me to the community more than ten years ago.Essentially, effective altruism contains three radical ideas that I don’t easily find in other communities. These three ideas are ideas I want to protect.Radical empathyHumanity has long had a fairly narrow moral circle. Radical Empathy is the idea that there are many groups of people, or other entities, that are worthy of moral concern even if they don't look or act like us. Moreover, it’s important to deliberately identify all entities worthy of moral concern so that we can ensure they are protected. I find effective altruism to be unique in extending moral concern to not just traditionally neglected farm animals and future humans (very important) – but also to invertebrates and potential digital minds. Effective altruists are also unique in trying to intentionally understand who might matter and why and actually incorporating this into the process of discovering how to best help the world. Asking the question "Who might matter that we currently neglect?" is a key question that is asked way too rarely.We understand that while it’s ok to have special concern for family and friends, we should generally aim to make altruistic decisions based on impartiality, not weighing people differently just because they are at a different level of geographic distance, a different level of temporal distance, a different species, or run cognition on a different substrate.I worry that if we were to promote individual subcomponents of effective altruism, like pandemic preparedness or AI risk, we might not end up promoting radical empathy and we might end up missing entire classes of entities that matter. For example, I worry that one more subtle form of misaligned AI might be an AI that treats humans ok but adopts common human views on nonhuman animal welfare and perpetuates factory farming or abuse of a massive number of digital minds. The fact that effective altruism has somehow created a lot of AI developers that avoid eating meat and care about nonhuman animals is a big and fairly unexpected win. I think only some weird movement that somehow combined factory farming prevention with AI risk prevention could’ve created that.Scope sensitivityI also really like that EAs are willing to “shut up and multiply”. We’re scope sensitive. We’re cause neutral. Nearly everyone else in the world is not. Many people pick ways to improve the world based on vibes or personal experience, rather than through a systematic search of how they can best use their resources. Effective altruism understands that resources are limited and that we have to make hard choices between potential interventions and help 100 people instead of 10, even if helping the 10 people feels as or m...]]>
Peter Wildeford https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:54 None full 5373
CGofXqkwzdaabgzhJ_NL_EA_EA EA - Introducing Focus Philanthropy by Focus Philanthropy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Focus Philanthropy, published by Focus Philanthropy on March 27, 2023 on The Effective Altruism Forum.We’re excited to announce the launch of Focus Philanthropy, a new organization that connects philanthropists with outstanding giving opportunities to reduce the harms of factory farming.Focus Philanthropy aims to fill a gap in the effective animal advocacy space. The factory farming cause area is both fast-changing and complex in its variety of approaches and we aim to reduce barriers for new funders interested in entering the space. We offer thoughtful, tailored funding advice and engage with donors to increase their commitment and inspire them to support impactful giving opportunities within the cause area over the long term.Our core principlesImpact: We help philanthropists maximize the impact of their donations, ensuring their funds go where they can do the most good.Evidence: Our recommendations are based on research, the current state of scientific evidence, and expert judgment.Independence: Our advice is always free of charge, allowing us to provide unbiased and impartial giving recommendations.The teamFocus Philanthropy was founded by Leah Edgerton and Manja Gärtner.Leah has extensive experience in effective animal advocacy ranging from volunteering and direct work to having acted as a leader, mentor, and advisor. She previously worked as a philanthropic advisor, at Animal Charity Evaluators, and at ProVeg International.Manja has several years of experience as a researcher, grantmaker, and advisor in effective animal advocacy. She previously worked as a philanthropic advisor and at Animal Charity Evaluators. She holds a Ph.D. in economics.Please approach us directly with any feedback or questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Focus Philanthropy https://forum.effectivealtruism.org/posts/CGofXqkwzdaabgzhJ/introducing-focus-philanthropy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Focus Philanthropy, published by Focus Philanthropy on March 27, 2023 on The Effective Altruism Forum.We’re excited to announce the launch of Focus Philanthropy, a new organization that connects philanthropists with outstanding giving opportunities to reduce the harms of factory farming.Focus Philanthropy aims to fill a gap in the effective animal advocacy space. The factory farming cause area is both fast-changing and complex in its variety of approaches and we aim to reduce barriers for new funders interested in entering the space. We offer thoughtful, tailored funding advice and engage with donors to increase their commitment and inspire them to support impactful giving opportunities within the cause area over the long term.Our core principlesImpact: We help philanthropists maximize the impact of their donations, ensuring their funds go where they can do the most good.Evidence: Our recommendations are based on research, the current state of scientific evidence, and expert judgment.Independence: Our advice is always free of charge, allowing us to provide unbiased and impartial giving recommendations.The teamFocus Philanthropy was founded by Leah Edgerton and Manja Gärtner.Leah has extensive experience in effective animal advocacy ranging from volunteering and direct work to having acted as a leader, mentor, and advisor. She previously worked as a philanthropic advisor, at Animal Charity Evaluators, and at ProVeg International.Manja has several years of experience as a researcher, grantmaker, and advisor in effective animal advocacy. She previously worked as a philanthropic advisor and at Animal Charity Evaluators. She holds a Ph.D. in economics.Please approach us directly with any feedback or questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 27 Mar 2023 15:03:41 +0000 EA - Introducing Focus Philanthropy by Focus Philanthropy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Focus Philanthropy, published by Focus Philanthropy on March 27, 2023 on The Effective Altruism Forum.We’re excited to announce the launch of Focus Philanthropy, a new organization that connects philanthropists with outstanding giving opportunities to reduce the harms of factory farming.Focus Philanthropy aims to fill a gap in the effective animal advocacy space. The factory farming cause area is both fast-changing and complex in its variety of approaches and we aim to reduce barriers for new funders interested in entering the space. We offer thoughtful, tailored funding advice and engage with donors to increase their commitment and inspire them to support impactful giving opportunities within the cause area over the long term.Our core principlesImpact: We help philanthropists maximize the impact of their donations, ensuring their funds go where they can do the most good.Evidence: Our recommendations are based on research, the current state of scientific evidence, and expert judgment.Independence: Our advice is always free of charge, allowing us to provide unbiased and impartial giving recommendations.The teamFocus Philanthropy was founded by Leah Edgerton and Manja Gärtner.Leah has extensive experience in effective animal advocacy ranging from volunteering and direct work to having acted as a leader, mentor, and advisor. She previously worked as a philanthropic advisor, at Animal Charity Evaluators, and at ProVeg International.Manja has several years of experience as a researcher, grantmaker, and advisor in effective animal advocacy. She previously worked as a philanthropic advisor and at Animal Charity Evaluators. She holds a Ph.D. in economics.Please approach us directly with any feedback or questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Focus Philanthropy, published by Focus Philanthropy on March 27, 2023 on The Effective Altruism Forum.We’re excited to announce the launch of Focus Philanthropy, a new organization that connects philanthropists with outstanding giving opportunities to reduce the harms of factory farming.Focus Philanthropy aims to fill a gap in the effective animal advocacy space. The factory farming cause area is both fast-changing and complex in its variety of approaches and we aim to reduce barriers for new funders interested in entering the space. We offer thoughtful, tailored funding advice and engage with donors to increase their commitment and inspire them to support impactful giving opportunities within the cause area over the long term.Our core principlesImpact: We help philanthropists maximize the impact of their donations, ensuring their funds go where they can do the most good.Evidence: Our recommendations are based on research, the current state of scientific evidence, and expert judgment.Independence: Our advice is always free of charge, allowing us to provide unbiased and impartial giving recommendations.The teamFocus Philanthropy was founded by Leah Edgerton and Manja Gärtner.Leah has extensive experience in effective animal advocacy ranging from volunteering and direct work to having acted as a leader, mentor, and advisor. She previously worked as a philanthropic advisor, at Animal Charity Evaluators, and at ProVeg International.Manja has several years of experience as a researcher, grantmaker, and advisor in effective animal advocacy. She previously worked as a philanthropic advisor and at Animal Charity Evaluators. She holds a Ph.D. in economics.Please approach us directly with any feedback or questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Focus Philanthropy https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:54 None full 5375
JnrH7MRmyFyNZgmj7_NL_EA_EA EA - What would a compute monitoring plan look like? [Linkpost] by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What would a compute monitoring plan look like? [Linkpost], published by Akash on March 26, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/JnrH7MRmyFyNZgmj7/what-would-a-compute-monitoring-plan-look-like-linkpost Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What would a compute monitoring plan look like? [Linkpost], published by Akash on March 26, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 27 Mar 2023 08:22:11 +0000 EA - What would a compute monitoring plan look like? [Linkpost] by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What would a compute monitoring plan look like? [Linkpost], published by Akash on March 26, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What would a compute monitoring plan look like? [Linkpost], published by Akash on March 26, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 5363
bp3qec7GesufvozYX_NL_EA_EA EA - On what basis did Founder's Pledge disperse $1.6 mil. to Qvist Consulting from its Climate Change Fund? by Kieran.M Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On what basis did Founder's Pledge disperse $1.6 mil. to Qvist Consulting from its Climate Change Fund?, published by Kieran.M on March 27, 2023 on The Effective Altruism Forum.Disclaimer: I am totally agnostic regarding the reasonableness of this funding decision, and am merely noting that it appears to me impossible to make any assessment of reasonableness based on the information at hand. I have not conducted more than 1-2 hours research/thinking on this topic, so am uncertain of whether this is true, and am happy to be corrected. Also, the question is posed but I make no comment on whether Founder's Pledge needs answer it. Perhaps the donors to this fund are provided with some private information with regards to these causes, or Founder's Pledge reasonably believes their donors are happy to trust them a priori with respect to the efficacy and value set behind their decisions.In the last 12 months (March 2022 to present), Founder's Pledge (FP) has (publicly) dispersed approximately $4.3 million from its Climate Change Fund (CCF):Of this, $1.6 million has been given to Qvist Consulting Ltd. (QCL), for the reason shown above. Unlike the other recipients, QCL does not contain any external link to resources in which you are able to discover more about this organisation's operation/mission.Above this table, FP states that more information regarding their rationale is available here, however this document does not discuss QCL. As far as I have discovered, the only other mention of QCL is in a retrospective of the CCF after two years, which contains a minute video from Staffan Qvist (henceforth Staffan, for clarity). There is little/no new information about their mission, aside from the suggestion that "repowering" coal plants is particularly important because of the possible emissions from current coal plants in their remaining life cycle. What, then, is repowering coal? Curiously, another grantee, TerraPraxis, is the first Google result. The basic principle seems to be to try and refit those current coal plants for a non-carbon emitting form of energy production.So how does Qvist consulting fit into this effort? One might reasonably expect a search of their company to shed some light.This, however, turns out not to be the case.The company's first Google result is for their company listing on the UK gov. registry. The second is for their website, but it is merely a wordpress template totally devoid of information.So what about their founder? Staffan has, as per his LinkedIn, a PhD in Nuclear Engineering, and has written academic and popular press (including with Stephen Pinker in the NYTimes) articles advocating for nuclear energy.Staffan appears to be prolific - his LinkedIn lists him as a managing director/director at two other companies: Deepsense , an "intelligence platform", and QuantifiedCarbon, a decarbonisation consultancy, both of which appear to have little web presence (the former is difficult to search, as it is a common company name). Curiously, Staffan does not list QCL on his LinkedIn - perhaps this is an oversight? He is listed as a consultant for the Clean Air Task Force, another grantee of FP interested in nuclear advocacy.I have no reason to believe Staffan is not an excellent academic researcher and passionate advocate for a cause that seems plausibly very important (though I am no expert). Given the scale of the grant, however, it seems reasonable to wonder what in particular led FP to believe Staffan is the best contributor to this cause, and why he and QCL required such a large first grant to begin work on this.Postscript: There are other reasonable questions to be asked, including why FP believes their near-exclusive funding to organisations that appear (arguably) primarily dedicated to nuclear power advocacy is the most effective use ...]]>
Kieran.M https://forum.effectivealtruism.org/posts/bp3qec7GesufvozYX/on-what-basis-did-founder-s-pledge-disperse-usd1-6-mil-to Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On what basis did Founder's Pledge disperse $1.6 mil. to Qvist Consulting from its Climate Change Fund?, published by Kieran.M on March 27, 2023 on The Effective Altruism Forum.Disclaimer: I am totally agnostic regarding the reasonableness of this funding decision, and am merely noting that it appears to me impossible to make any assessment of reasonableness based on the information at hand. I have not conducted more than 1-2 hours research/thinking on this topic, so am uncertain of whether this is true, and am happy to be corrected. Also, the question is posed but I make no comment on whether Founder's Pledge needs answer it. Perhaps the donors to this fund are provided with some private information with regards to these causes, or Founder's Pledge reasonably believes their donors are happy to trust them a priori with respect to the efficacy and value set behind their decisions.In the last 12 months (March 2022 to present), Founder's Pledge (FP) has (publicly) dispersed approximately $4.3 million from its Climate Change Fund (CCF):Of this, $1.6 million has been given to Qvist Consulting Ltd. (QCL), for the reason shown above. Unlike the other recipients, QCL does not contain any external link to resources in which you are able to discover more about this organisation's operation/mission.Above this table, FP states that more information regarding their rationale is available here, however this document does not discuss QCL. As far as I have discovered, the only other mention of QCL is in a retrospective of the CCF after two years, which contains a minute video from Staffan Qvist (henceforth Staffan, for clarity). There is little/no new information about their mission, aside from the suggestion that "repowering" coal plants is particularly important because of the possible emissions from current coal plants in their remaining life cycle. What, then, is repowering coal? Curiously, another grantee, TerraPraxis, is the first Google result. The basic principle seems to be to try and refit those current coal plants for a non-carbon emitting form of energy production.So how does Qvist consulting fit into this effort? One might reasonably expect a search of their company to shed some light.This, however, turns out not to be the case.The company's first Google result is for their company listing on the UK gov. registry. The second is for their website, but it is merely a wordpress template totally devoid of information.So what about their founder? Staffan has, as per his LinkedIn, a PhD in Nuclear Engineering, and has written academic and popular press (including with Stephen Pinker in the NYTimes) articles advocating for nuclear energy.Staffan appears to be prolific - his LinkedIn lists him as a managing director/director at two other companies: Deepsense , an "intelligence platform", and QuantifiedCarbon, a decarbonisation consultancy, both of which appear to have little web presence (the former is difficult to search, as it is a common company name). Curiously, Staffan does not list QCL on his LinkedIn - perhaps this is an oversight? He is listed as a consultant for the Clean Air Task Force, another grantee of FP interested in nuclear advocacy.I have no reason to believe Staffan is not an excellent academic researcher and passionate advocate for a cause that seems plausibly very important (though I am no expert). Given the scale of the grant, however, it seems reasonable to wonder what in particular led FP to believe Staffan is the best contributor to this cause, and why he and QCL required such a large first grant to begin work on this.Postscript: There are other reasonable questions to be asked, including why FP believes their near-exclusive funding to organisations that appear (arguably) primarily dedicated to nuclear power advocacy is the most effective use ...]]>
Mon, 27 Mar 2023 06:56:04 +0000 EA - On what basis did Founder's Pledge disperse $1.6 mil. to Qvist Consulting from its Climate Change Fund? by Kieran.M Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On what basis did Founder's Pledge disperse $1.6 mil. to Qvist Consulting from its Climate Change Fund?, published by Kieran.M on March 27, 2023 on The Effective Altruism Forum.Disclaimer: I am totally agnostic regarding the reasonableness of this funding decision, and am merely noting that it appears to me impossible to make any assessment of reasonableness based on the information at hand. I have not conducted more than 1-2 hours research/thinking on this topic, so am uncertain of whether this is true, and am happy to be corrected. Also, the question is posed but I make no comment on whether Founder's Pledge needs answer it. Perhaps the donors to this fund are provided with some private information with regards to these causes, or Founder's Pledge reasonably believes their donors are happy to trust them a priori with respect to the efficacy and value set behind their decisions.In the last 12 months (March 2022 to present), Founder's Pledge (FP) has (publicly) dispersed approximately $4.3 million from its Climate Change Fund (CCF):Of this, $1.6 million has been given to Qvist Consulting Ltd. (QCL), for the reason shown above. Unlike the other recipients, QCL does not contain any external link to resources in which you are able to discover more about this organisation's operation/mission.Above this table, FP states that more information regarding their rationale is available here, however this document does not discuss QCL. As far as I have discovered, the only other mention of QCL is in a retrospective of the CCF after two years, which contains a minute video from Staffan Qvist (henceforth Staffan, for clarity). There is little/no new information about their mission, aside from the suggestion that "repowering" coal plants is particularly important because of the possible emissions from current coal plants in their remaining life cycle. What, then, is repowering coal? Curiously, another grantee, TerraPraxis, is the first Google result. The basic principle seems to be to try and refit those current coal plants for a non-carbon emitting form of energy production.So how does Qvist consulting fit into this effort? One might reasonably expect a search of their company to shed some light.This, however, turns out not to be the case.The company's first Google result is for their company listing on the UK gov. registry. The second is for their website, but it is merely a wordpress template totally devoid of information.So what about their founder? Staffan has, as per his LinkedIn, a PhD in Nuclear Engineering, and has written academic and popular press (including with Stephen Pinker in the NYTimes) articles advocating for nuclear energy.Staffan appears to be prolific - his LinkedIn lists him as a managing director/director at two other companies: Deepsense , an "intelligence platform", and QuantifiedCarbon, a decarbonisation consultancy, both of which appear to have little web presence (the former is difficult to search, as it is a common company name). Curiously, Staffan does not list QCL on his LinkedIn - perhaps this is an oversight? He is listed as a consultant for the Clean Air Task Force, another grantee of FP interested in nuclear advocacy.I have no reason to believe Staffan is not an excellent academic researcher and passionate advocate for a cause that seems plausibly very important (though I am no expert). Given the scale of the grant, however, it seems reasonable to wonder what in particular led FP to believe Staffan is the best contributor to this cause, and why he and QCL required such a large first grant to begin work on this.Postscript: There are other reasonable questions to be asked, including why FP believes their near-exclusive funding to organisations that appear (arguably) primarily dedicated to nuclear power advocacy is the most effective use ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On what basis did Founder's Pledge disperse $1.6 mil. to Qvist Consulting from its Climate Change Fund?, published by Kieran.M on March 27, 2023 on The Effective Altruism Forum.Disclaimer: I am totally agnostic regarding the reasonableness of this funding decision, and am merely noting that it appears to me impossible to make any assessment of reasonableness based on the information at hand. I have not conducted more than 1-2 hours research/thinking on this topic, so am uncertain of whether this is true, and am happy to be corrected. Also, the question is posed but I make no comment on whether Founder's Pledge needs answer it. Perhaps the donors to this fund are provided with some private information with regards to these causes, or Founder's Pledge reasonably believes their donors are happy to trust them a priori with respect to the efficacy and value set behind their decisions.In the last 12 months (March 2022 to present), Founder's Pledge (FP) has (publicly) dispersed approximately $4.3 million from its Climate Change Fund (CCF):Of this, $1.6 million has been given to Qvist Consulting Ltd. (QCL), for the reason shown above. Unlike the other recipients, QCL does not contain any external link to resources in which you are able to discover more about this organisation's operation/mission.Above this table, FP states that more information regarding their rationale is available here, however this document does not discuss QCL. As far as I have discovered, the only other mention of QCL is in a retrospective of the CCF after two years, which contains a minute video from Staffan Qvist (henceforth Staffan, for clarity). There is little/no new information about their mission, aside from the suggestion that "repowering" coal plants is particularly important because of the possible emissions from current coal plants in their remaining life cycle. What, then, is repowering coal? Curiously, another grantee, TerraPraxis, is the first Google result. The basic principle seems to be to try and refit those current coal plants for a non-carbon emitting form of energy production.So how does Qvist consulting fit into this effort? One might reasonably expect a search of their company to shed some light.This, however, turns out not to be the case.The company's first Google result is for their company listing on the UK gov. registry. The second is for their website, but it is merely a wordpress template totally devoid of information.So what about their founder? Staffan has, as per his LinkedIn, a PhD in Nuclear Engineering, and has written academic and popular press (including with Stephen Pinker in the NYTimes) articles advocating for nuclear energy.Staffan appears to be prolific - his LinkedIn lists him as a managing director/director at two other companies: Deepsense , an "intelligence platform", and QuantifiedCarbon, a decarbonisation consultancy, both of which appear to have little web presence (the former is difficult to search, as it is a common company name). Curiously, Staffan does not list QCL on his LinkedIn - perhaps this is an oversight? He is listed as a consultant for the Clean Air Task Force, another grantee of FP interested in nuclear advocacy.I have no reason to believe Staffan is not an excellent academic researcher and passionate advocate for a cause that seems plausibly very important (though I am no expert). Given the scale of the grant, however, it seems reasonable to wonder what in particular led FP to believe Staffan is the best contributor to this cause, and why he and QCL required such a large first grant to begin work on this.Postscript: There are other reasonable questions to be asked, including why FP believes their near-exclusive funding to organisations that appear (arguably) primarily dedicated to nuclear power advocacy is the most effective use ...]]>
Kieran.M https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:10 None full 5362
FCkchmXcSCQtJ9PZA_NL_EA_EA EA - Predicting what future people value: A terse introduction to Axiological Futurism by Jim Buhler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predicting what future people value: A terse introduction to Axiological Futurism, published by Jim Buhler on March 24, 2023 on The Effective Altruism Forum.Why this is worth researchingHumanity might develop artificial general intelligence (AGI), colonize space, and create astronomical amounts of things in the future (Bostrom 2003; MacAskill 2022; Althaus and Gloor 2016). But what things? How (dis)valuable? And how does this compare with things grabby aliens would eventually create if they colonize our corner of the universe? What does this imply for our work aimed at impacting the long-term future?While this depends on many factors, a crucial one will likely be the values of our successors.Here’s a position that might tempt us while considering whether it is worth researching this topic:Our descendants are unlikely to have values that are both different from ours in a very significant way and predictable. Either they have values similar to ours or they have values we can’t predict. Therefore, trying to predict their values is a waste of time and resources.While I see how this can seem compelling, I think this is very ill-informed.First, predicting the values of our successors – what John Danaher (2021) calls axiological futurism – in worlds where these are meaningfully different from ours doesn’t seem intractable at all. Significant progress has already been made in this research area and there seems to be room for much more (see the next section and the Appendix).Second, a scenario where the values of our descendants don’t significantly differ from ours appears quite unlikely to me. We should watch for things like the End of History illusion, here. Values seem to notably evolve through History, and there is no reason to assume we are special enough to make us drop that prior.Besides being tractable, I believe axiological futurism to be uncommonly important given its instrumentality in answering the crucial questions mentioned earlier. It therefore also seems unwarrantedly neglected as of today.How to research thisHere are examples of broad questions that could be part of a research agenda on this topic:What are the best predictors of future human values? What can we learn from usual forecasting methods?How have people’s values changed throughout History? Why? What can we learn from this? (see, e.g., MacAskill 2022, Chapter 3; Harris 2019; Hopster 2022)Are there reasons to think we’ll observe less change in the future? Why? Value lock-in? Some form of moral convergence happening soon?Are there reasons to expect more change? Would that be due to the development of AGI, whole brain emulation, space colonization, and/or accelerated value drift?More broadly, what impact will future technological progress have on values? (see Hanson 2016 for a forecast example.)Should we expect some values to be selected for? (see, e.g., Christiano 2013; Bostrom 2009, Tomasik 2017)Might a period of “long reflection” take place? If yes, can we get some idea of what could result from it?Does something like coherent extrapolated volition have any chance of being pursued and if so, what could realistically result from it?Are there futures – where humanity has certain values – that are unlikely but worth wagering on?Might our research on this topic affect the values we should expect our successors to have by, e.g., triggering a self-defeating or self-fulfilling prophecy effect? (Danaher 2021, section 2)What do/will aliens value (see my forthcoming next post) and what does that tell us about ourselves?John Danaher (2021) gives examples of methodologies that could be used to answer these questions.Also, my Appendix references examples and other relevant work, including the (forthcoming) next posts in this sequence.AcknowledgmentThanks to Anders Sandberg for pointing m...]]>
Jim Buhler https://forum.effectivealtruism.org/posts/FCkchmXcSCQtJ9PZA/predicting-what-future-people-value-a-terse-introduction-to Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predicting what future people value: A terse introduction to Axiological Futurism, published by Jim Buhler on March 24, 2023 on The Effective Altruism Forum.Why this is worth researchingHumanity might develop artificial general intelligence (AGI), colonize space, and create astronomical amounts of things in the future (Bostrom 2003; MacAskill 2022; Althaus and Gloor 2016). But what things? How (dis)valuable? And how does this compare with things grabby aliens would eventually create if they colonize our corner of the universe? What does this imply for our work aimed at impacting the long-term future?While this depends on many factors, a crucial one will likely be the values of our successors.Here’s a position that might tempt us while considering whether it is worth researching this topic:Our descendants are unlikely to have values that are both different from ours in a very significant way and predictable. Either they have values similar to ours or they have values we can’t predict. Therefore, trying to predict their values is a waste of time and resources.While I see how this can seem compelling, I think this is very ill-informed.First, predicting the values of our successors – what John Danaher (2021) calls axiological futurism – in worlds where these are meaningfully different from ours doesn’t seem intractable at all. Significant progress has already been made in this research area and there seems to be room for much more (see the next section and the Appendix).Second, a scenario where the values of our descendants don’t significantly differ from ours appears quite unlikely to me. We should watch for things like the End of History illusion, here. Values seem to notably evolve through History, and there is no reason to assume we are special enough to make us drop that prior.Besides being tractable, I believe axiological futurism to be uncommonly important given its instrumentality in answering the crucial questions mentioned earlier. It therefore also seems unwarrantedly neglected as of today.How to research thisHere are examples of broad questions that could be part of a research agenda on this topic:What are the best predictors of future human values? What can we learn from usual forecasting methods?How have people’s values changed throughout History? Why? What can we learn from this? (see, e.g., MacAskill 2022, Chapter 3; Harris 2019; Hopster 2022)Are there reasons to think we’ll observe less change in the future? Why? Value lock-in? Some form of moral convergence happening soon?Are there reasons to expect more change? Would that be due to the development of AGI, whole brain emulation, space colonization, and/or accelerated value drift?More broadly, what impact will future technological progress have on values? (see Hanson 2016 for a forecast example.)Should we expect some values to be selected for? (see, e.g., Christiano 2013; Bostrom 2009, Tomasik 2017)Might a period of “long reflection” take place? If yes, can we get some idea of what could result from it?Does something like coherent extrapolated volition have any chance of being pursued and if so, what could realistically result from it?Are there futures – where humanity has certain values – that are unlikely but worth wagering on?Might our research on this topic affect the values we should expect our successors to have by, e.g., triggering a self-defeating or self-fulfilling prophecy effect? (Danaher 2021, section 2)What do/will aliens value (see my forthcoming next post) and what does that tell us about ourselves?John Danaher (2021) gives examples of methodologies that could be used to answer these questions.Also, my Appendix references examples and other relevant work, including the (forthcoming) next posts in this sequence.AcknowledgmentThanks to Anders Sandberg for pointing m...]]>
Sat, 25 Mar 2023 21:02:26 +0000 EA - Predicting what future people value: A terse introduction to Axiological Futurism by Jim Buhler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predicting what future people value: A terse introduction to Axiological Futurism, published by Jim Buhler on March 24, 2023 on The Effective Altruism Forum.Why this is worth researchingHumanity might develop artificial general intelligence (AGI), colonize space, and create astronomical amounts of things in the future (Bostrom 2003; MacAskill 2022; Althaus and Gloor 2016). But what things? How (dis)valuable? And how does this compare with things grabby aliens would eventually create if they colonize our corner of the universe? What does this imply for our work aimed at impacting the long-term future?While this depends on many factors, a crucial one will likely be the values of our successors.Here’s a position that might tempt us while considering whether it is worth researching this topic:Our descendants are unlikely to have values that are both different from ours in a very significant way and predictable. Either they have values similar to ours or they have values we can’t predict. Therefore, trying to predict their values is a waste of time and resources.While I see how this can seem compelling, I think this is very ill-informed.First, predicting the values of our successors – what John Danaher (2021) calls axiological futurism – in worlds where these are meaningfully different from ours doesn’t seem intractable at all. Significant progress has already been made in this research area and there seems to be room for much more (see the next section and the Appendix).Second, a scenario where the values of our descendants don’t significantly differ from ours appears quite unlikely to me. We should watch for things like the End of History illusion, here. Values seem to notably evolve through History, and there is no reason to assume we are special enough to make us drop that prior.Besides being tractable, I believe axiological futurism to be uncommonly important given its instrumentality in answering the crucial questions mentioned earlier. It therefore also seems unwarrantedly neglected as of today.How to research thisHere are examples of broad questions that could be part of a research agenda on this topic:What are the best predictors of future human values? What can we learn from usual forecasting methods?How have people’s values changed throughout History? Why? What can we learn from this? (see, e.g., MacAskill 2022, Chapter 3; Harris 2019; Hopster 2022)Are there reasons to think we’ll observe less change in the future? Why? Value lock-in? Some form of moral convergence happening soon?Are there reasons to expect more change? Would that be due to the development of AGI, whole brain emulation, space colonization, and/or accelerated value drift?More broadly, what impact will future technological progress have on values? (see Hanson 2016 for a forecast example.)Should we expect some values to be selected for? (see, e.g., Christiano 2013; Bostrom 2009, Tomasik 2017)Might a period of “long reflection” take place? If yes, can we get some idea of what could result from it?Does something like coherent extrapolated volition have any chance of being pursued and if so, what could realistically result from it?Are there futures – where humanity has certain values – that are unlikely but worth wagering on?Might our research on this topic affect the values we should expect our successors to have by, e.g., triggering a self-defeating or self-fulfilling prophecy effect? (Danaher 2021, section 2)What do/will aliens value (see my forthcoming next post) and what does that tell us about ourselves?John Danaher (2021) gives examples of methodologies that could be used to answer these questions.Also, my Appendix references examples and other relevant work, including the (forthcoming) next posts in this sequence.AcknowledgmentThanks to Anders Sandberg for pointing m...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predicting what future people value: A terse introduction to Axiological Futurism, published by Jim Buhler on March 24, 2023 on The Effective Altruism Forum.Why this is worth researchingHumanity might develop artificial general intelligence (AGI), colonize space, and create astronomical amounts of things in the future (Bostrom 2003; MacAskill 2022; Althaus and Gloor 2016). But what things? How (dis)valuable? And how does this compare with things grabby aliens would eventually create if they colonize our corner of the universe? What does this imply for our work aimed at impacting the long-term future?While this depends on many factors, a crucial one will likely be the values of our successors.Here’s a position that might tempt us while considering whether it is worth researching this topic:Our descendants are unlikely to have values that are both different from ours in a very significant way and predictable. Either they have values similar to ours or they have values we can’t predict. Therefore, trying to predict their values is a waste of time and resources.While I see how this can seem compelling, I think this is very ill-informed.First, predicting the values of our successors – what John Danaher (2021) calls axiological futurism – in worlds where these are meaningfully different from ours doesn’t seem intractable at all. Significant progress has already been made in this research area and there seems to be room for much more (see the next section and the Appendix).Second, a scenario where the values of our descendants don’t significantly differ from ours appears quite unlikely to me. We should watch for things like the End of History illusion, here. Values seem to notably evolve through History, and there is no reason to assume we are special enough to make us drop that prior.Besides being tractable, I believe axiological futurism to be uncommonly important given its instrumentality in answering the crucial questions mentioned earlier. It therefore also seems unwarrantedly neglected as of today.How to research thisHere are examples of broad questions that could be part of a research agenda on this topic:What are the best predictors of future human values? What can we learn from usual forecasting methods?How have people’s values changed throughout History? Why? What can we learn from this? (see, e.g., MacAskill 2022, Chapter 3; Harris 2019; Hopster 2022)Are there reasons to think we’ll observe less change in the future? Why? Value lock-in? Some form of moral convergence happening soon?Are there reasons to expect more change? Would that be due to the development of AGI, whole brain emulation, space colonization, and/or accelerated value drift?More broadly, what impact will future technological progress have on values? (see Hanson 2016 for a forecast example.)Should we expect some values to be selected for? (see, e.g., Christiano 2013; Bostrom 2009, Tomasik 2017)Might a period of “long reflection” take place? If yes, can we get some idea of what could result from it?Does something like coherent extrapolated volition have any chance of being pursued and if so, what could realistically result from it?Are there futures – where humanity has certain values – that are unlikely but worth wagering on?Might our research on this topic affect the values we should expect our successors to have by, e.g., triggering a self-defeating or self-fulfilling prophecy effect? (Danaher 2021, section 2)What do/will aliens value (see my forthcoming next post) and what does that tell us about ourselves?John Danaher (2021) gives examples of methodologies that could be used to answer these questions.Also, my Appendix references examples and other relevant work, including the (forthcoming) next posts in this sequence.AcknowledgmentThanks to Anders Sandberg for pointing m...]]>
Jim Buhler https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:14 None full 5357
A6h5X2ERpumJACoEX_NL_EA_EA EA - Introducing Artists of Impact by Fernando MG Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Artists of Impact, published by Fernando MG on March 25, 2023 on The Effective Altruism Forum.TL;DRAnnouncing a new EA-Aligned org to promote effective giving by leveraging the social influence and resources of professional and celebrity artists.IntroHello EA Forum!My name is Fernando. I’m a professional ballet dancer who’s excited to announce the launch of our new organization, Artists of Impact! We promote effective giving by leveraging the social influence and resources of professional and celebrity artists.Modeled largely off of High Impact Athletes, we help professional artists understand the power of effective giving and the importance of thinking critically about using their platform and resources to have a greater social impact. We connect them with EA-aligned organizations that they resonate with, support their high-impact donation and advocacy efforts, and leverage their platforms to inspire their fans to do the same!The arts community in general does a great job of advocating for different causes. However, these tend to reflect a more personal approach to philanthropy, which often prioritizes proximity and familiarity over impartiality and cost-effectiveness. As such, we hope to insert ourselves into that equation, leverage the motivations that we already see prevalent throughout the artistic community, and point them towards higher-impact opportunities.We’re currently working on formalizing our organizational strategy, but we wanted to make this announcement ASAP so that the community is aware of what we’re trying to do and can reach out if they have any ideas or people they think we should hear and connect with!We’re excited to share more information about our theory of change, pilot projects, budget, etc. in a future post and we hope that you’ll be excited to keep up with us as we continue to grow and learn as an organization!How you can help!FundersWe're currently bootstrapping operations and actively seeking funders to help provide a 6-12 month runway to test out the assumptions underlying our Theory of Change and continue building up our social presence.CollaboratorsWe're also seeking collaborators to join our project on a volunteer basis, so if you’re excited by our mission and want to help us realize our organizational objectives, reach out!ConnectionsIf you know any professional/celebrity artists, we'd love to connect with them! Warm intros have been our most effective outreach method! Get in touch!Follow Us on Social Media and Spread the WordInstagram: @artistsofimpactTwitter: @ArtistsofImpactLinkedIn: @ArtistsofImpactMore InfoIf you’d like to learn more about our project or have any feedback on how to make the project more impactful, comment down below or reach out to me at fernando@artistsofimpact.orgSpecial thanks to Marcus D., Devon F., Sarah P., Neil F., Jeffray B., and too many others to name for all their help in bringing this project to life.Thanks for your time and for all that you do for the world!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fernando MG https://forum.effectivealtruism.org/posts/A6h5X2ERpumJACoEX/introducing-artists-of-impact Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Artists of Impact, published by Fernando MG on March 25, 2023 on The Effective Altruism Forum.TL;DRAnnouncing a new EA-Aligned org to promote effective giving by leveraging the social influence and resources of professional and celebrity artists.IntroHello EA Forum!My name is Fernando. I’m a professional ballet dancer who’s excited to announce the launch of our new organization, Artists of Impact! We promote effective giving by leveraging the social influence and resources of professional and celebrity artists.Modeled largely off of High Impact Athletes, we help professional artists understand the power of effective giving and the importance of thinking critically about using their platform and resources to have a greater social impact. We connect them with EA-aligned organizations that they resonate with, support their high-impact donation and advocacy efforts, and leverage their platforms to inspire their fans to do the same!The arts community in general does a great job of advocating for different causes. However, these tend to reflect a more personal approach to philanthropy, which often prioritizes proximity and familiarity over impartiality and cost-effectiveness. As such, we hope to insert ourselves into that equation, leverage the motivations that we already see prevalent throughout the artistic community, and point them towards higher-impact opportunities.We’re currently working on formalizing our organizational strategy, but we wanted to make this announcement ASAP so that the community is aware of what we’re trying to do and can reach out if they have any ideas or people they think we should hear and connect with!We’re excited to share more information about our theory of change, pilot projects, budget, etc. in a future post and we hope that you’ll be excited to keep up with us as we continue to grow and learn as an organization!How you can help!FundersWe're currently bootstrapping operations and actively seeking funders to help provide a 6-12 month runway to test out the assumptions underlying our Theory of Change and continue building up our social presence.CollaboratorsWe're also seeking collaborators to join our project on a volunteer basis, so if you’re excited by our mission and want to help us realize our organizational objectives, reach out!ConnectionsIf you know any professional/celebrity artists, we'd love to connect with them! Warm intros have been our most effective outreach method! Get in touch!Follow Us on Social Media and Spread the WordInstagram: @artistsofimpactTwitter: @ArtistsofImpactLinkedIn: @ArtistsofImpactMore InfoIf you’d like to learn more about our project or have any feedback on how to make the project more impactful, comment down below or reach out to me at fernando@artistsofimpact.orgSpecial thanks to Marcus D., Devon F., Sarah P., Neil F., Jeffray B., and too many others to name for all their help in bringing this project to life.Thanks for your time and for all that you do for the world!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 25 Mar 2023 09:04:01 +0000 EA - Introducing Artists of Impact by Fernando MG Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Artists of Impact, published by Fernando MG on March 25, 2023 on The Effective Altruism Forum.TL;DRAnnouncing a new EA-Aligned org to promote effective giving by leveraging the social influence and resources of professional and celebrity artists.IntroHello EA Forum!My name is Fernando. I’m a professional ballet dancer who’s excited to announce the launch of our new organization, Artists of Impact! We promote effective giving by leveraging the social influence and resources of professional and celebrity artists.Modeled largely off of High Impact Athletes, we help professional artists understand the power of effective giving and the importance of thinking critically about using their platform and resources to have a greater social impact. We connect them with EA-aligned organizations that they resonate with, support their high-impact donation and advocacy efforts, and leverage their platforms to inspire their fans to do the same!The arts community in general does a great job of advocating for different causes. However, these tend to reflect a more personal approach to philanthropy, which often prioritizes proximity and familiarity over impartiality and cost-effectiveness. As such, we hope to insert ourselves into that equation, leverage the motivations that we already see prevalent throughout the artistic community, and point them towards higher-impact opportunities.We’re currently working on formalizing our organizational strategy, but we wanted to make this announcement ASAP so that the community is aware of what we’re trying to do and can reach out if they have any ideas or people they think we should hear and connect with!We’re excited to share more information about our theory of change, pilot projects, budget, etc. in a future post and we hope that you’ll be excited to keep up with us as we continue to grow and learn as an organization!How you can help!FundersWe're currently bootstrapping operations and actively seeking funders to help provide a 6-12 month runway to test out the assumptions underlying our Theory of Change and continue building up our social presence.CollaboratorsWe're also seeking collaborators to join our project on a volunteer basis, so if you’re excited by our mission and want to help us realize our organizational objectives, reach out!ConnectionsIf you know any professional/celebrity artists, we'd love to connect with them! Warm intros have been our most effective outreach method! Get in touch!Follow Us on Social Media and Spread the WordInstagram: @artistsofimpactTwitter: @ArtistsofImpactLinkedIn: @ArtistsofImpactMore InfoIf you’d like to learn more about our project or have any feedback on how to make the project more impactful, comment down below or reach out to me at fernando@artistsofimpact.orgSpecial thanks to Marcus D., Devon F., Sarah P., Neil F., Jeffray B., and too many others to name for all their help in bringing this project to life.Thanks for your time and for all that you do for the world!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Artists of Impact, published by Fernando MG on March 25, 2023 on The Effective Altruism Forum.TL;DRAnnouncing a new EA-Aligned org to promote effective giving by leveraging the social influence and resources of professional and celebrity artists.IntroHello EA Forum!My name is Fernando. I’m a professional ballet dancer who’s excited to announce the launch of our new organization, Artists of Impact! We promote effective giving by leveraging the social influence and resources of professional and celebrity artists.Modeled largely off of High Impact Athletes, we help professional artists understand the power of effective giving and the importance of thinking critically about using their platform and resources to have a greater social impact. We connect them with EA-aligned organizations that they resonate with, support their high-impact donation and advocacy efforts, and leverage their platforms to inspire their fans to do the same!The arts community in general does a great job of advocating for different causes. However, these tend to reflect a more personal approach to philanthropy, which often prioritizes proximity and familiarity over impartiality and cost-effectiveness. As such, we hope to insert ourselves into that equation, leverage the motivations that we already see prevalent throughout the artistic community, and point them towards higher-impact opportunities.We’re currently working on formalizing our organizational strategy, but we wanted to make this announcement ASAP so that the community is aware of what we’re trying to do and can reach out if they have any ideas or people they think we should hear and connect with!We’re excited to share more information about our theory of change, pilot projects, budget, etc. in a future post and we hope that you’ll be excited to keep up with us as we continue to grow and learn as an organization!How you can help!FundersWe're currently bootstrapping operations and actively seeking funders to help provide a 6-12 month runway to test out the assumptions underlying our Theory of Change and continue building up our social presence.CollaboratorsWe're also seeking collaborators to join our project on a volunteer basis, so if you’re excited by our mission and want to help us realize our organizational objectives, reach out!ConnectionsIf you know any professional/celebrity artists, we'd love to connect with them! Warm intros have been our most effective outreach method! Get in touch!Follow Us on Social Media and Spread the WordInstagram: @artistsofimpactTwitter: @ArtistsofImpactLinkedIn: @ArtistsofImpactMore InfoIf you’d like to learn more about our project or have any feedback on how to make the project more impactful, comment down below or reach out to me at fernando@artistsofimpact.orgSpecial thanks to Marcus D., Devon F., Sarah P., Neil F., Jeffray B., and too many others to name for all their help in bringing this project to life.Thanks for your time and for all that you do for the world!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fernando MG https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:02 None full 5356
brBqjKxBdsEwBWwLD_NL_EA_EA EA - Eradicating rodenticides from U.S. pest management is less practical than we thought by Holly Elmore Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eradicating rodenticides from U.S. pest management is less practical than we thought, published by Holly Elmore on March 24, 2023 on The Effective Altruism Forum.The link goes to the Open Science Framework preprint of the full report.Executive SummaryRodenticide poisons are cruel and reducing their use would likely represent an improvement in wild animal welfare. This report explores the reasons why rodenticides are used, under what circumstances they could be replaced, and whether they are replaceable with currently available alternatives. As summarized in the table below, agricultural use of rodenticides is well-protected by state and federal laws and that seems unlikely to change, but the use of rodenticides in food processing and conservation would likely be reduced if there were an adequate alternative such as solid form rodent birth control. Continued innovation of reactive tools to eliminate rodent infestations should reduce the use cases where rodenticides are the most cost-effective option for residential customers or public health officials, but will not eliminate their availability to handle major infestations.This research is a project of Rethink Priorities. It was written by Holly Elmore. If you’re interested in RP’s work, you can learn more by visiting our research database. For regular updates, please consider subscribing to our newsletter.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Holly Elmore https://forum.effectivealtruism.org/posts/brBqjKxBdsEwBWwLD/eradicating-rodenticides-from-u-s-pest-management-is-less Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eradicating rodenticides from U.S. pest management is less practical than we thought, published by Holly Elmore on March 24, 2023 on The Effective Altruism Forum.The link goes to the Open Science Framework preprint of the full report.Executive SummaryRodenticide poisons are cruel and reducing their use would likely represent an improvement in wild animal welfare. This report explores the reasons why rodenticides are used, under what circumstances they could be replaced, and whether they are replaceable with currently available alternatives. As summarized in the table below, agricultural use of rodenticides is well-protected by state and federal laws and that seems unlikely to change, but the use of rodenticides in food processing and conservation would likely be reduced if there were an adequate alternative such as solid form rodent birth control. Continued innovation of reactive tools to eliminate rodent infestations should reduce the use cases where rodenticides are the most cost-effective option for residential customers or public health officials, but will not eliminate their availability to handle major infestations.This research is a project of Rethink Priorities. It was written by Holly Elmore. If you’re interested in RP’s work, you can learn more by visiting our research database. For regular updates, please consider subscribing to our newsletter.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 24 Mar 2023 22:51:44 +0000 EA - Eradicating rodenticides from U.S. pest management is less practical than we thought by Holly Elmore Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eradicating rodenticides from U.S. pest management is less practical than we thought, published by Holly Elmore on March 24, 2023 on The Effective Altruism Forum.The link goes to the Open Science Framework preprint of the full report.Executive SummaryRodenticide poisons are cruel and reducing their use would likely represent an improvement in wild animal welfare. This report explores the reasons why rodenticides are used, under what circumstances they could be replaced, and whether they are replaceable with currently available alternatives. As summarized in the table below, agricultural use of rodenticides is well-protected by state and federal laws and that seems unlikely to change, but the use of rodenticides in food processing and conservation would likely be reduced if there were an adequate alternative such as solid form rodent birth control. Continued innovation of reactive tools to eliminate rodent infestations should reduce the use cases where rodenticides are the most cost-effective option for residential customers or public health officials, but will not eliminate their availability to handle major infestations.This research is a project of Rethink Priorities. It was written by Holly Elmore. If you’re interested in RP’s work, you can learn more by visiting our research database. For regular updates, please consider subscribing to our newsletter.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eradicating rodenticides from U.S. pest management is less practical than we thought, published by Holly Elmore on March 24, 2023 on The Effective Altruism Forum.The link goes to the Open Science Framework preprint of the full report.Executive SummaryRodenticide poisons are cruel and reducing their use would likely represent an improvement in wild animal welfare. This report explores the reasons why rodenticides are used, under what circumstances they could be replaced, and whether they are replaceable with currently available alternatives. As summarized in the table below, agricultural use of rodenticides is well-protected by state and federal laws and that seems unlikely to change, but the use of rodenticides in food processing and conservation would likely be reduced if there were an adequate alternative such as solid form rodent birth control. Continued innovation of reactive tools to eliminate rodent infestations should reduce the use cases where rodenticides are the most cost-effective option for residential customers or public health officials, but will not eliminate their availability to handle major infestations.This research is a project of Rethink Priorities. It was written by Holly Elmore. If you’re interested in RP’s work, you can learn more by visiting our research database. For regular updates, please consider subscribing to our newsletter.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Holly Elmore https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:32 None full 5343
aNsBJYpGovgJqWC9v_NL_EA_EA EA - Comparing Health Interventions in Colombia and Nigeria: Which are More Effective and by How Much? by Alejandro Acelas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Comparing Health Interventions in Colombia and Nigeria: Which are More Effective and by How Much?, published by Alejandro Acelas on March 24, 2023 on The Effective Altruism Forum.TL;DR: Nigeria bears 10 times the burden of communicable, maternal, neonatal, and nutritional diseases per capita than Colombia, despite both countries having similar health expenditures to combat these diseases. Applying the SNT framework suggests that health interventions are around 10 times more cost-effective in Nigeria even when comparing only the poorest regions within each country.Why I decided to work on this topicChatting with the members of my EA university group in Colombia, I noticed many of them were mainly interested in contributing to problems that affected their home country. Although Colombia is far from being one of the poorest countries in the world, various regions of the country still suffer from similar problems as less developed countries. I thought there might be a chance that we found opportunities with similar cost-effectiveness as those that EA tends to prioritize in the area of health and development.Given that some students from my EA group were considering making career decisions based on a similar perception, I thought it worthwhile to spare a couple of hours to critically evaluate this line of reasoning, in the hope that this would allow them to make a more informed decision. Ideally, this research would answer the following questions:Within the ways to help in Colombia, which problems/interventions represent unusually cost-effective opportunities to help?How do the best opportunities identified in Colombia compare with the best opportunities abroad?Unfortunately, both questions are too broad to cover them appropriately in the short time I dedicated to this work (approx. 40 hours). In order to make progress I decided focus on the much more narrow question:In aggregate, how cost-effective do health interventions addressing communicable, maternal, neonatal and nutritional diseases in Colombia seem when compared to interventions in the same area in a substantially poorer country?Although this question corresponds to only a fragment of our initial concern, a couple of considerations lead me to think that an answer to this question would still be informative to people considering whether to focus their career on helping others within the Colombian territory (even if they don’t plan to focus specifically on health interventions).First, health interventions aimed at fighting communicable, maternal, neonatal and nutritional diseases (from now on CMNN interventions) form a substantial fraction of the EA portfolio of interventions focused on improving human welfare in the short term, which is the focus of most of the students I know who are considering whether to direct their careers at Colombian causes. Second, my impression is that the results of this investigation support certain heuristics for identifying impact opportunities that can be useful even outside health causes. I’ll say more about that in the conclusions section.ApproachI use the Scale, Neglectedness and Tractability (SNT) framework to estimate the cost-effectiveness ratio of CMNN interventions between Colombia and Nigeria. Besides making the comparison for the two countries as a whole, I’ll also try to compare the poorest regions within each country to see if that changes the conclusions.I chose Nigeria somewhat arbitrarily, mainly because several of GiveWell's recommended charities have operations in Nigeria (suggesting that there are unusually effective opportunities in the country) and because among the countries where GiveWell-recommended charities operate, Nigeria It is approximately in the middle of the distribution of poverty incidence.Percentage of the population living below the In...]]>
Alejandro Acelas https://forum.effectivealtruism.org/posts/aNsBJYpGovgJqWC9v/comparing-health-interventions-in-colombia-and-nigeria-which Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Comparing Health Interventions in Colombia and Nigeria: Which are More Effective and by How Much?, published by Alejandro Acelas on March 24, 2023 on The Effective Altruism Forum.TL;DR: Nigeria bears 10 times the burden of communicable, maternal, neonatal, and nutritional diseases per capita than Colombia, despite both countries having similar health expenditures to combat these diseases. Applying the SNT framework suggests that health interventions are around 10 times more cost-effective in Nigeria even when comparing only the poorest regions within each country.Why I decided to work on this topicChatting with the members of my EA university group in Colombia, I noticed many of them were mainly interested in contributing to problems that affected their home country. Although Colombia is far from being one of the poorest countries in the world, various regions of the country still suffer from similar problems as less developed countries. I thought there might be a chance that we found opportunities with similar cost-effectiveness as those that EA tends to prioritize in the area of health and development.Given that some students from my EA group were considering making career decisions based on a similar perception, I thought it worthwhile to spare a couple of hours to critically evaluate this line of reasoning, in the hope that this would allow them to make a more informed decision. Ideally, this research would answer the following questions:Within the ways to help in Colombia, which problems/interventions represent unusually cost-effective opportunities to help?How do the best opportunities identified in Colombia compare with the best opportunities abroad?Unfortunately, both questions are too broad to cover them appropriately in the short time I dedicated to this work (approx. 40 hours). In order to make progress I decided focus on the much more narrow question:In aggregate, how cost-effective do health interventions addressing communicable, maternal, neonatal and nutritional diseases in Colombia seem when compared to interventions in the same area in a substantially poorer country?Although this question corresponds to only a fragment of our initial concern, a couple of considerations lead me to think that an answer to this question would still be informative to people considering whether to focus their career on helping others within the Colombian territory (even if they don’t plan to focus specifically on health interventions).First, health interventions aimed at fighting communicable, maternal, neonatal and nutritional diseases (from now on CMNN interventions) form a substantial fraction of the EA portfolio of interventions focused on improving human welfare in the short term, which is the focus of most of the students I know who are considering whether to direct their careers at Colombian causes. Second, my impression is that the results of this investigation support certain heuristics for identifying impact opportunities that can be useful even outside health causes. I’ll say more about that in the conclusions section.ApproachI use the Scale, Neglectedness and Tractability (SNT) framework to estimate the cost-effectiveness ratio of CMNN interventions between Colombia and Nigeria. Besides making the comparison for the two countries as a whole, I’ll also try to compare the poorest regions within each country to see if that changes the conclusions.I chose Nigeria somewhat arbitrarily, mainly because several of GiveWell's recommended charities have operations in Nigeria (suggesting that there are unusually effective opportunities in the country) and because among the countries where GiveWell-recommended charities operate, Nigeria It is approximately in the middle of the distribution of poverty incidence.Percentage of the population living below the In...]]>
Fri, 24 Mar 2023 19:13:07 +0000 EA - Comparing Health Interventions in Colombia and Nigeria: Which are More Effective and by How Much? by Alejandro Acelas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Comparing Health Interventions in Colombia and Nigeria: Which are More Effective and by How Much?, published by Alejandro Acelas on March 24, 2023 on The Effective Altruism Forum.TL;DR: Nigeria bears 10 times the burden of communicable, maternal, neonatal, and nutritional diseases per capita than Colombia, despite both countries having similar health expenditures to combat these diseases. Applying the SNT framework suggests that health interventions are around 10 times more cost-effective in Nigeria even when comparing only the poorest regions within each country.Why I decided to work on this topicChatting with the members of my EA university group in Colombia, I noticed many of them were mainly interested in contributing to problems that affected their home country. Although Colombia is far from being one of the poorest countries in the world, various regions of the country still suffer from similar problems as less developed countries. I thought there might be a chance that we found opportunities with similar cost-effectiveness as those that EA tends to prioritize in the area of health and development.Given that some students from my EA group were considering making career decisions based on a similar perception, I thought it worthwhile to spare a couple of hours to critically evaluate this line of reasoning, in the hope that this would allow them to make a more informed decision. Ideally, this research would answer the following questions:Within the ways to help in Colombia, which problems/interventions represent unusually cost-effective opportunities to help?How do the best opportunities identified in Colombia compare with the best opportunities abroad?Unfortunately, both questions are too broad to cover them appropriately in the short time I dedicated to this work (approx. 40 hours). In order to make progress I decided focus on the much more narrow question:In aggregate, how cost-effective do health interventions addressing communicable, maternal, neonatal and nutritional diseases in Colombia seem when compared to interventions in the same area in a substantially poorer country?Although this question corresponds to only a fragment of our initial concern, a couple of considerations lead me to think that an answer to this question would still be informative to people considering whether to focus their career on helping others within the Colombian territory (even if they don’t plan to focus specifically on health interventions).First, health interventions aimed at fighting communicable, maternal, neonatal and nutritional diseases (from now on CMNN interventions) form a substantial fraction of the EA portfolio of interventions focused on improving human welfare in the short term, which is the focus of most of the students I know who are considering whether to direct their careers at Colombian causes. Second, my impression is that the results of this investigation support certain heuristics for identifying impact opportunities that can be useful even outside health causes. I’ll say more about that in the conclusions section.ApproachI use the Scale, Neglectedness and Tractability (SNT) framework to estimate the cost-effectiveness ratio of CMNN interventions between Colombia and Nigeria. Besides making the comparison for the two countries as a whole, I’ll also try to compare the poorest regions within each country to see if that changes the conclusions.I chose Nigeria somewhat arbitrarily, mainly because several of GiveWell's recommended charities have operations in Nigeria (suggesting that there are unusually effective opportunities in the country) and because among the countries where GiveWell-recommended charities operate, Nigeria It is approximately in the middle of the distribution of poverty incidence.Percentage of the population living below the In...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Comparing Health Interventions in Colombia and Nigeria: Which are More Effective and by How Much?, published by Alejandro Acelas on March 24, 2023 on The Effective Altruism Forum.TL;DR: Nigeria bears 10 times the burden of communicable, maternal, neonatal, and nutritional diseases per capita than Colombia, despite both countries having similar health expenditures to combat these diseases. Applying the SNT framework suggests that health interventions are around 10 times more cost-effective in Nigeria even when comparing only the poorest regions within each country.Why I decided to work on this topicChatting with the members of my EA university group in Colombia, I noticed many of them were mainly interested in contributing to problems that affected their home country. Although Colombia is far from being one of the poorest countries in the world, various regions of the country still suffer from similar problems as less developed countries. I thought there might be a chance that we found opportunities with similar cost-effectiveness as those that EA tends to prioritize in the area of health and development.Given that some students from my EA group were considering making career decisions based on a similar perception, I thought it worthwhile to spare a couple of hours to critically evaluate this line of reasoning, in the hope that this would allow them to make a more informed decision. Ideally, this research would answer the following questions:Within the ways to help in Colombia, which problems/interventions represent unusually cost-effective opportunities to help?How do the best opportunities identified in Colombia compare with the best opportunities abroad?Unfortunately, both questions are too broad to cover them appropriately in the short time I dedicated to this work (approx. 40 hours). In order to make progress I decided focus on the much more narrow question:In aggregate, how cost-effective do health interventions addressing communicable, maternal, neonatal and nutritional diseases in Colombia seem when compared to interventions in the same area in a substantially poorer country?Although this question corresponds to only a fragment of our initial concern, a couple of considerations lead me to think that an answer to this question would still be informative to people considering whether to focus their career on helping others within the Colombian territory (even if they don’t plan to focus specifically on health interventions).First, health interventions aimed at fighting communicable, maternal, neonatal and nutritional diseases (from now on CMNN interventions) form a substantial fraction of the EA portfolio of interventions focused on improving human welfare in the short term, which is the focus of most of the students I know who are considering whether to direct their careers at Colombian causes. Second, my impression is that the results of this investigation support certain heuristics for identifying impact opportunities that can be useful even outside health causes. I’ll say more about that in the conclusions section.ApproachI use the Scale, Neglectedness and Tractability (SNT) framework to estimate the cost-effectiveness ratio of CMNN interventions between Colombia and Nigeria. Besides making the comparison for the two countries as a whole, I’ll also try to compare the poorest regions within each country to see if that changes the conclusions.I chose Nigeria somewhat arbitrarily, mainly because several of GiveWell's recommended charities have operations in Nigeria (suggesting that there are unusually effective opportunities in the country) and because among the countries where GiveWell-recommended charities operate, Nigeria It is approximately in the middle of the distribution of poverty incidence.Percentage of the population living below the In...]]>
Alejandro Acelas https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 22:13 None full 5345
zeL52MFB2Pkq9Kdme_NL_EA_EA EA - Exploring Metaculus’ community predictions by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exploring Metaculus’ community predictions, published by Vasco Grilo on March 24, 2023 on The Effective Altruism Forum.SummaryI really like Metaculus!I have collected and analysed in this Sheet metrics about Metaculus’ questions outside of question groups, and their Metaculus’ community predictions (see tab “TOC”). The Colab to extract the data and calculate the metrics is here.The mean metrics vary a lot across categories, and the same is seemingly true for correlations among metrics. So one should not assume the performance across all questions is representative of that within each of Metaculus’ categories. To illustrate:Across categories, the 5th and 95th percentiles of the mean normalised outcome are 0 and 0.784, and of the mean Brier score are 0.0369 and 0.450. For context, the Brier score is 0.25 (= 0.5^2) for the maximally uncertain probability of 0.5.According to Metaculus’ track record page, the mean Brier score for Metaculus’ community predictions evaluated at all times is 0.126 for all questions, but 0.237 for those about artificial intelligence. So Metaculus’ community predictions about probabilities look good in general, but they perform close to random predictions for artificial intelligence.There can be significant differences between Metaculus community predictions and Metaculus’ predictions. For instance, the mean Brier score of the latter for artificial intelligence is 0.168, which is way more accurate than the 0.237 of the former.According to my results, Metaculus’ community predictions are:In general (i.e. considering all questions), less accurate for questions:Whose predictions are more extreme under Bayesian updating (correlation coefficient R = 0.346, and p-value p = 0).With a greater amount of updating (R = 0.262, and p = 0).With a greater difference between amount of updating and uncertainty reduction (R = 0.256, and p = 0).For the category of artificial intelligence, less accurate for questions with:Greater difference between amount of updating and uncertainty reduction (R = 0.361, and p = 0.0387).More predictions (R = 0.316, and p = 0.0729).A greater amount of updating (R = 0.282, and p = 0.111).Compatible with Bayesian updating in general, in the sense I failed to reject it during the 2nd half of the period during which each question was or has been open (mean p-value of 0.425).If you want to know how much to trust a given prediction from Metaculus, I think it is sensible to check Metaculus’ track record for similar past questions (more here).AcknowledgementsThanks to Charles Dillon, Misha Yagudin from Arb Research, and Peter Mühlbacher and Ryan Beck from Metaculus.Dark crystall ball in a bright foggy galaxy. Generated by OpenAI's DALL-E.IntroductionI really like Metaculus!MethodsI believe it would be important to better understand how much to trust Metaculus’ predictions. To that end, I have determined in this Sheet (see tab “TOC”) metrics about all Metaculus’ questions outside of question groups with an ID from 1 to 15000 on 13 March 2023, and their Metaculus’ community predictions. The metrics for each question are:Tags, which identify the Metaculus’ category.Publish time (year).Close time (year).Resolve time (year).Time from publish to close (year).Time from close to resolve (year).Time from publish to resolve (year).Number of forecasters.Number of predictions.Number of analysed dates, which is the number of instances at which the predictions were assessed.Total belief movement, which is a measure of the amount of updating, and is the sum of the belief movements, which are the squared differences between 2 consecutive beliefs.The values of the beliefs range from 0 to 1, and can respect a:Probability.Ratio between an expectation and difference between the maximum and minimum allowed by Metaculus.T...]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/zeL52MFB2Pkq9Kdme/exploring-metaculus-community-predictions Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exploring Metaculus’ community predictions, published by Vasco Grilo on March 24, 2023 on The Effective Altruism Forum.SummaryI really like Metaculus!I have collected and analysed in this Sheet metrics about Metaculus’ questions outside of question groups, and their Metaculus’ community predictions (see tab “TOC”). The Colab to extract the data and calculate the metrics is here.The mean metrics vary a lot across categories, and the same is seemingly true for correlations among metrics. So one should not assume the performance across all questions is representative of that within each of Metaculus’ categories. To illustrate:Across categories, the 5th and 95th percentiles of the mean normalised outcome are 0 and 0.784, and of the mean Brier score are 0.0369 and 0.450. For context, the Brier score is 0.25 (= 0.5^2) for the maximally uncertain probability of 0.5.According to Metaculus’ track record page, the mean Brier score for Metaculus’ community predictions evaluated at all times is 0.126 for all questions, but 0.237 for those about artificial intelligence. So Metaculus’ community predictions about probabilities look good in general, but they perform close to random predictions for artificial intelligence.There can be significant differences between Metaculus community predictions and Metaculus’ predictions. For instance, the mean Brier score of the latter for artificial intelligence is 0.168, which is way more accurate than the 0.237 of the former.According to my results, Metaculus’ community predictions are:In general (i.e. considering all questions), less accurate for questions:Whose predictions are more extreme under Bayesian updating (correlation coefficient R = 0.346, and p-value p = 0).With a greater amount of updating (R = 0.262, and p = 0).With a greater difference between amount of updating and uncertainty reduction (R = 0.256, and p = 0).For the category of artificial intelligence, less accurate for questions with:Greater difference between amount of updating and uncertainty reduction (R = 0.361, and p = 0.0387).More predictions (R = 0.316, and p = 0.0729).A greater amount of updating (R = 0.282, and p = 0.111).Compatible with Bayesian updating in general, in the sense I failed to reject it during the 2nd half of the period during which each question was or has been open (mean p-value of 0.425).If you want to know how much to trust a given prediction from Metaculus, I think it is sensible to check Metaculus’ track record for similar past questions (more here).AcknowledgementsThanks to Charles Dillon, Misha Yagudin from Arb Research, and Peter Mühlbacher and Ryan Beck from Metaculus.Dark crystall ball in a bright foggy galaxy. Generated by OpenAI's DALL-E.IntroductionI really like Metaculus!MethodsI believe it would be important to better understand how much to trust Metaculus’ predictions. To that end, I have determined in this Sheet (see tab “TOC”) metrics about all Metaculus’ questions outside of question groups with an ID from 1 to 15000 on 13 March 2023, and their Metaculus’ community predictions. The metrics for each question are:Tags, which identify the Metaculus’ category.Publish time (year).Close time (year).Resolve time (year).Time from publish to close (year).Time from close to resolve (year).Time from publish to resolve (year).Number of forecasters.Number of predictions.Number of analysed dates, which is the number of instances at which the predictions were assessed.Total belief movement, which is a measure of the amount of updating, and is the sum of the belief movements, which are the squared differences between 2 consecutive beliefs.The values of the beliefs range from 0 to 1, and can respect a:Probability.Ratio between an expectation and difference between the maximum and minimum allowed by Metaculus.T...]]>
Fri, 24 Mar 2023 19:03:43 +0000 EA - Exploring Metaculus’ community predictions by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exploring Metaculus’ community predictions, published by Vasco Grilo on March 24, 2023 on The Effective Altruism Forum.SummaryI really like Metaculus!I have collected and analysed in this Sheet metrics about Metaculus’ questions outside of question groups, and their Metaculus’ community predictions (see tab “TOC”). The Colab to extract the data and calculate the metrics is here.The mean metrics vary a lot across categories, and the same is seemingly true for correlations among metrics. So one should not assume the performance across all questions is representative of that within each of Metaculus’ categories. To illustrate:Across categories, the 5th and 95th percentiles of the mean normalised outcome are 0 and 0.784, and of the mean Brier score are 0.0369 and 0.450. For context, the Brier score is 0.25 (= 0.5^2) for the maximally uncertain probability of 0.5.According to Metaculus’ track record page, the mean Brier score for Metaculus’ community predictions evaluated at all times is 0.126 for all questions, but 0.237 for those about artificial intelligence. So Metaculus’ community predictions about probabilities look good in general, but they perform close to random predictions for artificial intelligence.There can be significant differences between Metaculus community predictions and Metaculus’ predictions. For instance, the mean Brier score of the latter for artificial intelligence is 0.168, which is way more accurate than the 0.237 of the former.According to my results, Metaculus’ community predictions are:In general (i.e. considering all questions), less accurate for questions:Whose predictions are more extreme under Bayesian updating (correlation coefficient R = 0.346, and p-value p = 0).With a greater amount of updating (R = 0.262, and p = 0).With a greater difference between amount of updating and uncertainty reduction (R = 0.256, and p = 0).For the category of artificial intelligence, less accurate for questions with:Greater difference between amount of updating and uncertainty reduction (R = 0.361, and p = 0.0387).More predictions (R = 0.316, and p = 0.0729).A greater amount of updating (R = 0.282, and p = 0.111).Compatible with Bayesian updating in general, in the sense I failed to reject it during the 2nd half of the period during which each question was or has been open (mean p-value of 0.425).If you want to know how much to trust a given prediction from Metaculus, I think it is sensible to check Metaculus’ track record for similar past questions (more here).AcknowledgementsThanks to Charles Dillon, Misha Yagudin from Arb Research, and Peter Mühlbacher and Ryan Beck from Metaculus.Dark crystall ball in a bright foggy galaxy. Generated by OpenAI's DALL-E.IntroductionI really like Metaculus!MethodsI believe it would be important to better understand how much to trust Metaculus’ predictions. To that end, I have determined in this Sheet (see tab “TOC”) metrics about all Metaculus’ questions outside of question groups with an ID from 1 to 15000 on 13 March 2023, and their Metaculus’ community predictions. The metrics for each question are:Tags, which identify the Metaculus’ category.Publish time (year).Close time (year).Resolve time (year).Time from publish to close (year).Time from close to resolve (year).Time from publish to resolve (year).Number of forecasters.Number of predictions.Number of analysed dates, which is the number of instances at which the predictions were assessed.Total belief movement, which is a measure of the amount of updating, and is the sum of the belief movements, which are the squared differences between 2 consecutive beliefs.The values of the beliefs range from 0 to 1, and can respect a:Probability.Ratio between an expectation and difference between the maximum and minimum allowed by Metaculus.T...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exploring Metaculus’ community predictions, published by Vasco Grilo on March 24, 2023 on The Effective Altruism Forum.SummaryI really like Metaculus!I have collected and analysed in this Sheet metrics about Metaculus’ questions outside of question groups, and their Metaculus’ community predictions (see tab “TOC”). The Colab to extract the data and calculate the metrics is here.The mean metrics vary a lot across categories, and the same is seemingly true for correlations among metrics. So one should not assume the performance across all questions is representative of that within each of Metaculus’ categories. To illustrate:Across categories, the 5th and 95th percentiles of the mean normalised outcome are 0 and 0.784, and of the mean Brier score are 0.0369 and 0.450. For context, the Brier score is 0.25 (= 0.5^2) for the maximally uncertain probability of 0.5.According to Metaculus’ track record page, the mean Brier score for Metaculus’ community predictions evaluated at all times is 0.126 for all questions, but 0.237 for those about artificial intelligence. So Metaculus’ community predictions about probabilities look good in general, but they perform close to random predictions for artificial intelligence.There can be significant differences between Metaculus community predictions and Metaculus’ predictions. For instance, the mean Brier score of the latter for artificial intelligence is 0.168, which is way more accurate than the 0.237 of the former.According to my results, Metaculus’ community predictions are:In general (i.e. considering all questions), less accurate for questions:Whose predictions are more extreme under Bayesian updating (correlation coefficient R = 0.346, and p-value p = 0).With a greater amount of updating (R = 0.262, and p = 0).With a greater difference between amount of updating and uncertainty reduction (R = 0.256, and p = 0).For the category of artificial intelligence, less accurate for questions with:Greater difference between amount of updating and uncertainty reduction (R = 0.361, and p = 0.0387).More predictions (R = 0.316, and p = 0.0729).A greater amount of updating (R = 0.282, and p = 0.111).Compatible with Bayesian updating in general, in the sense I failed to reject it during the 2nd half of the period during which each question was or has been open (mean p-value of 0.425).If you want to know how much to trust a given prediction from Metaculus, I think it is sensible to check Metaculus’ track record for similar past questions (more here).AcknowledgementsThanks to Charles Dillon, Misha Yagudin from Arb Research, and Peter Mühlbacher and Ryan Beck from Metaculus.Dark crystall ball in a bright foggy galaxy. Generated by OpenAI's DALL-E.IntroductionI really like Metaculus!MethodsI believe it would be important to better understand how much to trust Metaculus’ predictions. To that end, I have determined in this Sheet (see tab “TOC”) metrics about all Metaculus’ questions outside of question groups with an ID from 1 to 15000 on 13 March 2023, and their Metaculus’ community predictions. The metrics for each question are:Tags, which identify the Metaculus’ category.Publish time (year).Close time (year).Resolve time (year).Time from publish to close (year).Time from close to resolve (year).Time from publish to resolve (year).Number of forecasters.Number of predictions.Number of analysed dates, which is the number of instances at which the predictions were assessed.Total belief movement, which is a measure of the amount of updating, and is the sum of the belief movements, which are the squared differences between 2 consecutive beliefs.The values of the beliefs range from 0 to 1, and can respect a:Probability.Ratio between an expectation and difference between the maximum and minimum allowed by Metaculus.T...]]>
Vasco Grilo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 23:21 None full 5347
YrXZ3pRvFuH8SJaay_NL_EA_EA EA - Reflecting on the Last Year — Lessons for EA (opening keynote at EAG) by Toby Ord Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflecting on the Last Year — Lessons for EA (opening keynote at EAG), published by Toby Ord on March 24, 2023 on The Effective Altruism Forum.I recently delivered the opening talk for EA Global: Bay Area 2023. I reflect on FTX, the differences between EA and utilitarianism, and the importance of character. Here's the recording and transcript.The Last YearLet’s talk a bit about the last year.The spring and summer of 2022 were a time of rapid change. A second major donor had appeared, roughly doubling the amount of committed money. There was a plan to donate this new money more rapidly, and to use more of it directly on projects run by people in the EA community. Together, this meant much more funding for projects led by EAs or about effective altruism.It felt like a time of massive acceleration, with EA rapidly changing and growing in an attempt to find enough ways to use this money productively and avoid it going to waste. This caused a bunch of growing pains and distortions.When there was very little money in effective altruism, you always knew that the person next to you couldn’t have been in it for the money — so they must have been in it because they were passionate about what they were doing for the world. But that became harder to tell, making trust harder.And the most famous person in crypto had become the most famous person in EA. So someone whose views and actions were quite radical and unrepresentative of most EAs, became the most public face of effective altruism, distorting public perception and even our self-perception of what it meant to be an EA. It also meant that EA became more closely connected to an industry that was widely perceived as sketchy. One that involved a product of disputed social value, and a lot of sharks.One thing that especially concerned me was a great deal of money going into politics. We’d tried very hard over the previous 10 years to avoid EA being seen as a left or right issue — immediately alienating half the population. But a single large donor had the potential to change that unilaterally.And EA became extremely visible: people who’d never heard of it all of a sudden couldn’t get away from it, prompting a great deal of public criticism.From my perspective at the time, it was hard to tell whether or not the benefits of additional funding for good causes outweighed these costs — both were large and hard to compare. Even the people who thought it was worth the costs shared the feelings of visceral acceleration: like a white knuckled fairground ride, pushing us up to vertiginous heights faster than we were comfortable with.And that was just the ascent.Like many of us, I was paying attention to the problems involved in the rise, and was blindsided by the fall.As facts started to become more clear, we saw that the companies producing this newfound income had been very poorly governed, allowing behaviour that appears to me to have been both immoral and illegal — in particular, it seems that when the trading arm had foundered, customers’ own deposits were raided to pay for an increasingly desperate series of bets to save the company.Even if that strategy had worked and the money was restored to the customers, I still think it would have been illegal and immoral. But it didn’t work, so it also caused a truly vast amount of harm. Most directly and importantly to the customers, but also to a host of other parties, including the members of the EA community and thus all the people and animals we are trying to help.I’m sure most of your have thought a lot about this over last few months. I’ve come to think of my own attempts to process this as going through these four phases.First, there’s: Understanding what happened.What were the facts on the ground?Were crimes committed?How much money have customer’s lost?A lot of t...]]>
Toby Ord https://forum.effectivealtruism.org/posts/YrXZ3pRvFuH8SJaay/reflecting-on-the-last-year-lessons-for-ea-opening-keynote Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflecting on the Last Year — Lessons for EA (opening keynote at EAG), published by Toby Ord on March 24, 2023 on The Effective Altruism Forum.I recently delivered the opening talk for EA Global: Bay Area 2023. I reflect on FTX, the differences between EA and utilitarianism, and the importance of character. Here's the recording and transcript.The Last YearLet’s talk a bit about the last year.The spring and summer of 2022 were a time of rapid change. A second major donor had appeared, roughly doubling the amount of committed money. There was a plan to donate this new money more rapidly, and to use more of it directly on projects run by people in the EA community. Together, this meant much more funding for projects led by EAs or about effective altruism.It felt like a time of massive acceleration, with EA rapidly changing and growing in an attempt to find enough ways to use this money productively and avoid it going to waste. This caused a bunch of growing pains and distortions.When there was very little money in effective altruism, you always knew that the person next to you couldn’t have been in it for the money — so they must have been in it because they were passionate about what they were doing for the world. But that became harder to tell, making trust harder.And the most famous person in crypto had become the most famous person in EA. So someone whose views and actions were quite radical and unrepresentative of most EAs, became the most public face of effective altruism, distorting public perception and even our self-perception of what it meant to be an EA. It also meant that EA became more closely connected to an industry that was widely perceived as sketchy. One that involved a product of disputed social value, and a lot of sharks.One thing that especially concerned me was a great deal of money going into politics. We’d tried very hard over the previous 10 years to avoid EA being seen as a left or right issue — immediately alienating half the population. But a single large donor had the potential to change that unilaterally.And EA became extremely visible: people who’d never heard of it all of a sudden couldn’t get away from it, prompting a great deal of public criticism.From my perspective at the time, it was hard to tell whether or not the benefits of additional funding for good causes outweighed these costs — both were large and hard to compare. Even the people who thought it was worth the costs shared the feelings of visceral acceleration: like a white knuckled fairground ride, pushing us up to vertiginous heights faster than we were comfortable with.And that was just the ascent.Like many of us, I was paying attention to the problems involved in the rise, and was blindsided by the fall.As facts started to become more clear, we saw that the companies producing this newfound income had been very poorly governed, allowing behaviour that appears to me to have been both immoral and illegal — in particular, it seems that when the trading arm had foundered, customers’ own deposits were raided to pay for an increasingly desperate series of bets to save the company.Even if that strategy had worked and the money was restored to the customers, I still think it would have been illegal and immoral. But it didn’t work, so it also caused a truly vast amount of harm. Most directly and importantly to the customers, but also to a host of other parties, including the members of the EA community and thus all the people and animals we are trying to help.I’m sure most of your have thought a lot about this over last few months. I’ve come to think of my own attempts to process this as going through these four phases.First, there’s: Understanding what happened.What were the facts on the ground?Were crimes committed?How much money have customer’s lost?A lot of t...]]>
Fri, 24 Mar 2023 16:14:36 +0000 EA - Reflecting on the Last Year — Lessons for EA (opening keynote at EAG) by Toby Ord Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflecting on the Last Year — Lessons for EA (opening keynote at EAG), published by Toby Ord on March 24, 2023 on The Effective Altruism Forum.I recently delivered the opening talk for EA Global: Bay Area 2023. I reflect on FTX, the differences between EA and utilitarianism, and the importance of character. Here's the recording and transcript.The Last YearLet’s talk a bit about the last year.The spring and summer of 2022 were a time of rapid change. A second major donor had appeared, roughly doubling the amount of committed money. There was a plan to donate this new money more rapidly, and to use more of it directly on projects run by people in the EA community. Together, this meant much more funding for projects led by EAs or about effective altruism.It felt like a time of massive acceleration, with EA rapidly changing and growing in an attempt to find enough ways to use this money productively and avoid it going to waste. This caused a bunch of growing pains and distortions.When there was very little money in effective altruism, you always knew that the person next to you couldn’t have been in it for the money — so they must have been in it because they were passionate about what they were doing for the world. But that became harder to tell, making trust harder.And the most famous person in crypto had become the most famous person in EA. So someone whose views and actions were quite radical and unrepresentative of most EAs, became the most public face of effective altruism, distorting public perception and even our self-perception of what it meant to be an EA. It also meant that EA became more closely connected to an industry that was widely perceived as sketchy. One that involved a product of disputed social value, and a lot of sharks.One thing that especially concerned me was a great deal of money going into politics. We’d tried very hard over the previous 10 years to avoid EA being seen as a left or right issue — immediately alienating half the population. But a single large donor had the potential to change that unilaterally.And EA became extremely visible: people who’d never heard of it all of a sudden couldn’t get away from it, prompting a great deal of public criticism.From my perspective at the time, it was hard to tell whether or not the benefits of additional funding for good causes outweighed these costs — both were large and hard to compare. Even the people who thought it was worth the costs shared the feelings of visceral acceleration: like a white knuckled fairground ride, pushing us up to vertiginous heights faster than we were comfortable with.And that was just the ascent.Like many of us, I was paying attention to the problems involved in the rise, and was blindsided by the fall.As facts started to become more clear, we saw that the companies producing this newfound income had been very poorly governed, allowing behaviour that appears to me to have been both immoral and illegal — in particular, it seems that when the trading arm had foundered, customers’ own deposits were raided to pay for an increasingly desperate series of bets to save the company.Even if that strategy had worked and the money was restored to the customers, I still think it would have been illegal and immoral. But it didn’t work, so it also caused a truly vast amount of harm. Most directly and importantly to the customers, but also to a host of other parties, including the members of the EA community and thus all the people and animals we are trying to help.I’m sure most of your have thought a lot about this over last few months. I’ve come to think of my own attempts to process this as going through these four phases.First, there’s: Understanding what happened.What were the facts on the ground?Were crimes committed?How much money have customer’s lost?A lot of t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflecting on the Last Year — Lessons for EA (opening keynote at EAG), published by Toby Ord on March 24, 2023 on The Effective Altruism Forum.I recently delivered the opening talk for EA Global: Bay Area 2023. I reflect on FTX, the differences between EA and utilitarianism, and the importance of character. Here's the recording and transcript.The Last YearLet’s talk a bit about the last year.The spring and summer of 2022 were a time of rapid change. A second major donor had appeared, roughly doubling the amount of committed money. There was a plan to donate this new money more rapidly, and to use more of it directly on projects run by people in the EA community. Together, this meant much more funding for projects led by EAs or about effective altruism.It felt like a time of massive acceleration, with EA rapidly changing and growing in an attempt to find enough ways to use this money productively and avoid it going to waste. This caused a bunch of growing pains and distortions.When there was very little money in effective altruism, you always knew that the person next to you couldn’t have been in it for the money — so they must have been in it because they were passionate about what they were doing for the world. But that became harder to tell, making trust harder.And the most famous person in crypto had become the most famous person in EA. So someone whose views and actions were quite radical and unrepresentative of most EAs, became the most public face of effective altruism, distorting public perception and even our self-perception of what it meant to be an EA. It also meant that EA became more closely connected to an industry that was widely perceived as sketchy. One that involved a product of disputed social value, and a lot of sharks.One thing that especially concerned me was a great deal of money going into politics. We’d tried very hard over the previous 10 years to avoid EA being seen as a left or right issue — immediately alienating half the population. But a single large donor had the potential to change that unilaterally.And EA became extremely visible: people who’d never heard of it all of a sudden couldn’t get away from it, prompting a great deal of public criticism.From my perspective at the time, it was hard to tell whether or not the benefits of additional funding for good causes outweighed these costs — both were large and hard to compare. Even the people who thought it was worth the costs shared the feelings of visceral acceleration: like a white knuckled fairground ride, pushing us up to vertiginous heights faster than we were comfortable with.And that was just the ascent.Like many of us, I was paying attention to the problems involved in the rise, and was blindsided by the fall.As facts started to become more clear, we saw that the companies producing this newfound income had been very poorly governed, allowing behaviour that appears to me to have been both immoral and illegal — in particular, it seems that when the trading arm had foundered, customers’ own deposits were raided to pay for an increasingly desperate series of bets to save the company.Even if that strategy had worked and the money was restored to the customers, I still think it would have been illegal and immoral. But it didn’t work, so it also caused a truly vast amount of harm. Most directly and importantly to the customers, but also to a host of other parties, including the members of the EA community and thus all the people and animals we are trying to help.I’m sure most of your have thought a lot about this over last few months. I’ve come to think of my own attempts to process this as going through these four phases.First, there’s: Understanding what happened.What were the facts on the ground?Were crimes committed?How much money have customer’s lost?A lot of t...]]>
Toby Ord https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 24:26 None full 5344
uBSwt2vEGm4RisLjf_NL_EA_EA EA - Holden Karnofsky’s recent comments on FTX by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Holden Karnofsky’s recent comments on FTX, published by Lizka on March 24, 2023 on The Effective Altruism Forum.Holden Karnofsky has recently shared some reflections on EA and FTX, but they’re spread out and I’d guess that few people have seen them, so I thought it could be useful to collect them here. (In general, I think collections like this can be helpful and under-supplied.) I've copied some comments in full, and I've put together a simpler list of the links in this footnote.These comments come after a few months — there’s some explanation of why that is in this post and in this comment.Updates after FTXI found the following comment (a summary of updates he’s made after FTX) especially interesting (please note that I’m not sure I agree with everything):Here’s a followup with some reflections.Note that I discuss some takeaways and potential lessons learned in this interview.Here are some (somewhat redundant with the interview) things I feel like I’ve updated on in light of the FTX collapse and aftermath:The most obvious thing that’s changed is a tighter funding situation, which I addressed here.I’m generally more concerned about the dynamics I wrote about in EA is about maximization, and maximization is perilous. If I wrote that piece today, most of it would be the same, but the “Avoiding the pitfalls” section would be quite different (less reassuring/reassured). I’m not really sure what to do about these dynamics, i.e., how to reduce the risk that EA will encourage and attract perilous maximization, but a couple of possibilities:It looks to me like the community needs to beef up and improve investments in activities like “identifying and warning about bad actors in the community,” and I regret not taking a stronger hand in doing so to date. (Recent sexual harassment developments reinforce this point.).I’ve long wanted to try to write up a detailed intellectual case against what one might call “hard-core utilitarianism.” I think arguing about this sort of thing on the merits is probably the most promising way to reduce associated risks; EA isn’t (and I don’t want it to be) the kind of community where you can change what people operationally value just by saying you want it to change, and I think the intellectual case has to be made. I think there is a good substantive case for pluralism and moderation that could be better-explained and easier to find, and I’m thinking about how to make that happen (though I can’t promise to do so soon).I had some concerns about SBF and FTX, but I largely thought of the situation as not being my responsibility, as Open Philanthropy had no formal relationship to either. In hindsight, I wish I’d reasoned more like this: “This person is becoming very associated with effective altruism, so whether or not that’s due to anything I’ve done, it’s important to figure out whether that’s a bad thing and whether proactive distancing is needed.”I’m not surprised there are some bad actors in the EA community (I think bad actors exist in any community), but I’ve increased my picture of how much harm a small set of them can do, and hence I think it could be good for Open Philanthropy to become more conservative about funding and associating with people who might end up being bad actors (while recognizing that it won’t be able to predict perfectly on this front).Prior to the FTX collapse, I had been gradually updating toward feeling like Open Philanthropy should be less cautious with funding and other actions; quicker to trust our own intuitions and people who intuitively seemed to share our values; and generally less cautious. Some of this update was based on thinking that some folks associated with FTX were being successful with more self-trusting, less-cautious attitudes; some of it was based on seeing few immediate negative conse...]]>
Lizka https://forum.effectivealtruism.org/posts/uBSwt2vEGm4RisLjf/holden-karnofsky-s-recent-comments-on-ftx Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Holden Karnofsky’s recent comments on FTX, published by Lizka on March 24, 2023 on The Effective Altruism Forum.Holden Karnofsky has recently shared some reflections on EA and FTX, but they’re spread out and I’d guess that few people have seen them, so I thought it could be useful to collect them here. (In general, I think collections like this can be helpful and under-supplied.) I've copied some comments in full, and I've put together a simpler list of the links in this footnote.These comments come after a few months — there’s some explanation of why that is in this post and in this comment.Updates after FTXI found the following comment (a summary of updates he’s made after FTX) especially interesting (please note that I’m not sure I agree with everything):Here’s a followup with some reflections.Note that I discuss some takeaways and potential lessons learned in this interview.Here are some (somewhat redundant with the interview) things I feel like I’ve updated on in light of the FTX collapse and aftermath:The most obvious thing that’s changed is a tighter funding situation, which I addressed here.I’m generally more concerned about the dynamics I wrote about in EA is about maximization, and maximization is perilous. If I wrote that piece today, most of it would be the same, but the “Avoiding the pitfalls” section would be quite different (less reassuring/reassured). I’m not really sure what to do about these dynamics, i.e., how to reduce the risk that EA will encourage and attract perilous maximization, but a couple of possibilities:It looks to me like the community needs to beef up and improve investments in activities like “identifying and warning about bad actors in the community,” and I regret not taking a stronger hand in doing so to date. (Recent sexual harassment developments reinforce this point.).I’ve long wanted to try to write up a detailed intellectual case against what one might call “hard-core utilitarianism.” I think arguing about this sort of thing on the merits is probably the most promising way to reduce associated risks; EA isn’t (and I don’t want it to be) the kind of community where you can change what people operationally value just by saying you want it to change, and I think the intellectual case has to be made. I think there is a good substantive case for pluralism and moderation that could be better-explained and easier to find, and I’m thinking about how to make that happen (though I can’t promise to do so soon).I had some concerns about SBF and FTX, but I largely thought of the situation as not being my responsibility, as Open Philanthropy had no formal relationship to either. In hindsight, I wish I’d reasoned more like this: “This person is becoming very associated with effective altruism, so whether or not that’s due to anything I’ve done, it’s important to figure out whether that’s a bad thing and whether proactive distancing is needed.”I’m not surprised there are some bad actors in the EA community (I think bad actors exist in any community), but I’ve increased my picture of how much harm a small set of them can do, and hence I think it could be good for Open Philanthropy to become more conservative about funding and associating with people who might end up being bad actors (while recognizing that it won’t be able to predict perfectly on this front).Prior to the FTX collapse, I had been gradually updating toward feeling like Open Philanthropy should be less cautious with funding and other actions; quicker to trust our own intuitions and people who intuitively seemed to share our values; and generally less cautious. Some of this update was based on thinking that some folks associated with FTX were being successful with more self-trusting, less-cautious attitudes; some of it was based on seeing few immediate negative conse...]]>
Fri, 24 Mar 2023 12:50:54 +0000 EA - Holden Karnofsky’s recent comments on FTX by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Holden Karnofsky’s recent comments on FTX, published by Lizka on March 24, 2023 on The Effective Altruism Forum.Holden Karnofsky has recently shared some reflections on EA and FTX, but they’re spread out and I’d guess that few people have seen them, so I thought it could be useful to collect them here. (In general, I think collections like this can be helpful and under-supplied.) I've copied some comments in full, and I've put together a simpler list of the links in this footnote.These comments come after a few months — there’s some explanation of why that is in this post and in this comment.Updates after FTXI found the following comment (a summary of updates he’s made after FTX) especially interesting (please note that I’m not sure I agree with everything):Here’s a followup with some reflections.Note that I discuss some takeaways and potential lessons learned in this interview.Here are some (somewhat redundant with the interview) things I feel like I’ve updated on in light of the FTX collapse and aftermath:The most obvious thing that’s changed is a tighter funding situation, which I addressed here.I’m generally more concerned about the dynamics I wrote about in EA is about maximization, and maximization is perilous. If I wrote that piece today, most of it would be the same, but the “Avoiding the pitfalls” section would be quite different (less reassuring/reassured). I’m not really sure what to do about these dynamics, i.e., how to reduce the risk that EA will encourage and attract perilous maximization, but a couple of possibilities:It looks to me like the community needs to beef up and improve investments in activities like “identifying and warning about bad actors in the community,” and I regret not taking a stronger hand in doing so to date. (Recent sexual harassment developments reinforce this point.).I’ve long wanted to try to write up a detailed intellectual case against what one might call “hard-core utilitarianism.” I think arguing about this sort of thing on the merits is probably the most promising way to reduce associated risks; EA isn’t (and I don’t want it to be) the kind of community where you can change what people operationally value just by saying you want it to change, and I think the intellectual case has to be made. I think there is a good substantive case for pluralism and moderation that could be better-explained and easier to find, and I’m thinking about how to make that happen (though I can’t promise to do so soon).I had some concerns about SBF and FTX, but I largely thought of the situation as not being my responsibility, as Open Philanthropy had no formal relationship to either. In hindsight, I wish I’d reasoned more like this: “This person is becoming very associated with effective altruism, so whether or not that’s due to anything I’ve done, it’s important to figure out whether that’s a bad thing and whether proactive distancing is needed.”I’m not surprised there are some bad actors in the EA community (I think bad actors exist in any community), but I’ve increased my picture of how much harm a small set of them can do, and hence I think it could be good for Open Philanthropy to become more conservative about funding and associating with people who might end up being bad actors (while recognizing that it won’t be able to predict perfectly on this front).Prior to the FTX collapse, I had been gradually updating toward feeling like Open Philanthropy should be less cautious with funding and other actions; quicker to trust our own intuitions and people who intuitively seemed to share our values; and generally less cautious. Some of this update was based on thinking that some folks associated with FTX were being successful with more self-trusting, less-cautious attitudes; some of it was based on seeing few immediate negative conse...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Holden Karnofsky’s recent comments on FTX, published by Lizka on March 24, 2023 on The Effective Altruism Forum.Holden Karnofsky has recently shared some reflections on EA and FTX, but they’re spread out and I’d guess that few people have seen them, so I thought it could be useful to collect them here. (In general, I think collections like this can be helpful and under-supplied.) I've copied some comments in full, and I've put together a simpler list of the links in this footnote.These comments come after a few months — there’s some explanation of why that is in this post and in this comment.Updates after FTXI found the following comment (a summary of updates he’s made after FTX) especially interesting (please note that I’m not sure I agree with everything):Here’s a followup with some reflections.Note that I discuss some takeaways and potential lessons learned in this interview.Here are some (somewhat redundant with the interview) things I feel like I’ve updated on in light of the FTX collapse and aftermath:The most obvious thing that’s changed is a tighter funding situation, which I addressed here.I’m generally more concerned about the dynamics I wrote about in EA is about maximization, and maximization is perilous. If I wrote that piece today, most of it would be the same, but the “Avoiding the pitfalls” section would be quite different (less reassuring/reassured). I’m not really sure what to do about these dynamics, i.e., how to reduce the risk that EA will encourage and attract perilous maximization, but a couple of possibilities:It looks to me like the community needs to beef up and improve investments in activities like “identifying and warning about bad actors in the community,” and I regret not taking a stronger hand in doing so to date. (Recent sexual harassment developments reinforce this point.).I’ve long wanted to try to write up a detailed intellectual case against what one might call “hard-core utilitarianism.” I think arguing about this sort of thing on the merits is probably the most promising way to reduce associated risks; EA isn’t (and I don’t want it to be) the kind of community where you can change what people operationally value just by saying you want it to change, and I think the intellectual case has to be made. I think there is a good substantive case for pluralism and moderation that could be better-explained and easier to find, and I’m thinking about how to make that happen (though I can’t promise to do so soon).I had some concerns about SBF and FTX, but I largely thought of the situation as not being my responsibility, as Open Philanthropy had no formal relationship to either. In hindsight, I wish I’d reasoned more like this: “This person is becoming very associated with effective altruism, so whether or not that’s due to anything I’ve done, it’s important to figure out whether that’s a bad thing and whether proactive distancing is needed.”I’m not surprised there are some bad actors in the EA community (I think bad actors exist in any community), but I’ve increased my picture of how much harm a small set of them can do, and hence I think it could be good for Open Philanthropy to become more conservative about funding and associating with people who might end up being bad actors (while recognizing that it won’t be able to predict perfectly on this front).Prior to the FTX collapse, I had been gradually updating toward feeling like Open Philanthropy should be less cautious with funding and other actions; quicker to trust our own intuitions and people who intuitively seemed to share our values; and generally less cautious. Some of this update was based on thinking that some folks associated with FTX were being successful with more self-trusting, less-cautious attitudes; some of it was based on seeing few immediate negative conse...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:49 None full 5346
nHq4hLsojDPf3Pqg9_NL_EA_EA EA - The Overton Window widens: Examples of AI risk in the media by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Overton Window widens: Examples of AI risk in the media, published by Akash on March 23, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/nHq4hLsojDPf3Pqg9/the-overton-window-widens-examples-of-ai-risk-in-the-media Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Overton Window widens: Examples of AI risk in the media, published by Akash on March 23, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 23 Mar 2023 18:00:56 +0000 EA - The Overton Window widens: Examples of AI risk in the media by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Overton Window widens: Examples of AI risk in the media, published by Akash on March 23, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Overton Window widens: Examples of AI risk in the media, published by Akash on March 23, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 5330
KZuyBT3Fi8umHc6zH_NL_EA_EA EA - Highlights from LPP’s field-building efforts by Legal Priorities Project Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Highlights from LPP’s field-building efforts, published by Legal Priorities Project on March 23, 2023 on The Effective Altruism Forum.IntroductionThe Legal Priorities Project (LPP) is dedicated to building a global community of people passionate about using the law to protect future generations and mitigate existential risk. Through our flagship programs—the Legal Priorities Summer Institute and the Summer Research Fellowship—participants gain hands-on experience and develop the skills needed to make a meaningful impact in this field. In this overview, you will learn more about what it's like to participate in these programs, what to expect afterward, and what other initiatives we’ve run. We are excited to share the work and accomplishments of our talented participants and look forward to hearing from you if you would like to get involved!Legal Priorities Summer InstituteThe Legal Priorities Summer Institute (LPSI) is an intensive, week-long program to introduce altruistically-minded law and policy students and early-career professionals to projects, theories, and tools relevant to tackling critical issues affecting the long-term future.LPSI is aimed primarily at individuals who want to learn about issues affecting the long-term future. Through a series of presentations, discussions, debates, and workshops, led by an array of experts, including government officials, academics, legal practitioners, philanthropists, and members of the judiciary, participants gain a better understanding of the current legal landscape and learn about the latest developments, trends, and best practices related to existential risk and global challenges.Despite the program being highly competitive, we strive to bring together individuals from various backgrounds, countries, cultures, disciplines, and professional experiences. We aim to create an environment that promotes open-mindedness and encourages participants to consider alternative perspectives.2022LPSI 2022 attracted 473 applicants from across the globe (227 undergrads, 160 master’s/JD, 65 PhD/JSD, and 21 others), with a significant number coming from top-ranked institutions—200 (42%) from the world’s top 20 law schools and 113 (24%) from the top 5 law schools. 402 (85%) applicants and 14 (40%) participants had very little or no previous exposure to longtermism.Thirty-five talented individuals were selected to participate in the program, representing eighteen countries and five continents, making the cohort extraordinarily diverse. You can meet our 2022 participants here.Program highlightsA talk on “Reflections on Longtermist AI Governance Research from a Global South Perspective” by Cecil Abungu (Centre for the Study of Existential Risk, Legal Priorities Project).A discussion panel on AI governance with researchers from the Centre for the Study of Existential Risk.A talk on “Catastrophic Climate Change” by Goodwin Gibbins (Future of Humanity Institute).A career planning workshop with an advisor from 80,000 Hours.A talk on “Developing an Impact Litigation Strategy to Limit Catastrophic Biorisk” by Laurens Prins (Legal Priorities Project).Feedback from participantsAll feedback and testimonials were submitted anonymously right after the event.From 1 to 10, how likely is it that you would recommend the Institute to a colleague with similar interests to your own?9.1 / 10What was the most valuable experience from LPSI for you?LPSI gave me an opportunity to meet an amazing group of people. I met a group of people from the Global South, which has motivated all of us to bring forth Global South perspectives and collaborate on ways to bring these perspectives to the global avenue. Getting feedback and giving feedback to people was also a very valuable experience. It is impressive how much clarity you can get on ideas when...]]>
Legal Priorities Project https://forum.effectivealtruism.org/posts/KZuyBT3Fi8umHc6zH/highlights-from-lpp-s-field-building-efforts Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Highlights from LPP’s field-building efforts, published by Legal Priorities Project on March 23, 2023 on The Effective Altruism Forum.IntroductionThe Legal Priorities Project (LPP) is dedicated to building a global community of people passionate about using the law to protect future generations and mitigate existential risk. Through our flagship programs—the Legal Priorities Summer Institute and the Summer Research Fellowship—participants gain hands-on experience and develop the skills needed to make a meaningful impact in this field. In this overview, you will learn more about what it's like to participate in these programs, what to expect afterward, and what other initiatives we’ve run. We are excited to share the work and accomplishments of our talented participants and look forward to hearing from you if you would like to get involved!Legal Priorities Summer InstituteThe Legal Priorities Summer Institute (LPSI) is an intensive, week-long program to introduce altruistically-minded law and policy students and early-career professionals to projects, theories, and tools relevant to tackling critical issues affecting the long-term future.LPSI is aimed primarily at individuals who want to learn about issues affecting the long-term future. Through a series of presentations, discussions, debates, and workshops, led by an array of experts, including government officials, academics, legal practitioners, philanthropists, and members of the judiciary, participants gain a better understanding of the current legal landscape and learn about the latest developments, trends, and best practices related to existential risk and global challenges.Despite the program being highly competitive, we strive to bring together individuals from various backgrounds, countries, cultures, disciplines, and professional experiences. We aim to create an environment that promotes open-mindedness and encourages participants to consider alternative perspectives.2022LPSI 2022 attracted 473 applicants from across the globe (227 undergrads, 160 master’s/JD, 65 PhD/JSD, and 21 others), with a significant number coming from top-ranked institutions—200 (42%) from the world’s top 20 law schools and 113 (24%) from the top 5 law schools. 402 (85%) applicants and 14 (40%) participants had very little or no previous exposure to longtermism.Thirty-five talented individuals were selected to participate in the program, representing eighteen countries and five continents, making the cohort extraordinarily diverse. You can meet our 2022 participants here.Program highlightsA talk on “Reflections on Longtermist AI Governance Research from a Global South Perspective” by Cecil Abungu (Centre for the Study of Existential Risk, Legal Priorities Project).A discussion panel on AI governance with researchers from the Centre for the Study of Existential Risk.A talk on “Catastrophic Climate Change” by Goodwin Gibbins (Future of Humanity Institute).A career planning workshop with an advisor from 80,000 Hours.A talk on “Developing an Impact Litigation Strategy to Limit Catastrophic Biorisk” by Laurens Prins (Legal Priorities Project).Feedback from participantsAll feedback and testimonials were submitted anonymously right after the event.From 1 to 10, how likely is it that you would recommend the Institute to a colleague with similar interests to your own?9.1 / 10What was the most valuable experience from LPSI for you?LPSI gave me an opportunity to meet an amazing group of people. I met a group of people from the Global South, which has motivated all of us to bring forth Global South perspectives and collaborate on ways to bring these perspectives to the global avenue. Getting feedback and giving feedback to people was also a very valuable experience. It is impressive how much clarity you can get on ideas when...]]>
Thu, 23 Mar 2023 15:57:18 +0000 EA - Highlights from LPP’s field-building efforts by Legal Priorities Project Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Highlights from LPP’s field-building efforts, published by Legal Priorities Project on March 23, 2023 on The Effective Altruism Forum.IntroductionThe Legal Priorities Project (LPP) is dedicated to building a global community of people passionate about using the law to protect future generations and mitigate existential risk. Through our flagship programs—the Legal Priorities Summer Institute and the Summer Research Fellowship—participants gain hands-on experience and develop the skills needed to make a meaningful impact in this field. In this overview, you will learn more about what it's like to participate in these programs, what to expect afterward, and what other initiatives we’ve run. We are excited to share the work and accomplishments of our talented participants and look forward to hearing from you if you would like to get involved!Legal Priorities Summer InstituteThe Legal Priorities Summer Institute (LPSI) is an intensive, week-long program to introduce altruistically-minded law and policy students and early-career professionals to projects, theories, and tools relevant to tackling critical issues affecting the long-term future.LPSI is aimed primarily at individuals who want to learn about issues affecting the long-term future. Through a series of presentations, discussions, debates, and workshops, led by an array of experts, including government officials, academics, legal practitioners, philanthropists, and members of the judiciary, participants gain a better understanding of the current legal landscape and learn about the latest developments, trends, and best practices related to existential risk and global challenges.Despite the program being highly competitive, we strive to bring together individuals from various backgrounds, countries, cultures, disciplines, and professional experiences. We aim to create an environment that promotes open-mindedness and encourages participants to consider alternative perspectives.2022LPSI 2022 attracted 473 applicants from across the globe (227 undergrads, 160 master’s/JD, 65 PhD/JSD, and 21 others), with a significant number coming from top-ranked institutions—200 (42%) from the world’s top 20 law schools and 113 (24%) from the top 5 law schools. 402 (85%) applicants and 14 (40%) participants had very little or no previous exposure to longtermism.Thirty-five talented individuals were selected to participate in the program, representing eighteen countries and five continents, making the cohort extraordinarily diverse. You can meet our 2022 participants here.Program highlightsA talk on “Reflections on Longtermist AI Governance Research from a Global South Perspective” by Cecil Abungu (Centre for the Study of Existential Risk, Legal Priorities Project).A discussion panel on AI governance with researchers from the Centre for the Study of Existential Risk.A talk on “Catastrophic Climate Change” by Goodwin Gibbins (Future of Humanity Institute).A career planning workshop with an advisor from 80,000 Hours.A talk on “Developing an Impact Litigation Strategy to Limit Catastrophic Biorisk” by Laurens Prins (Legal Priorities Project).Feedback from participantsAll feedback and testimonials were submitted anonymously right after the event.From 1 to 10, how likely is it that you would recommend the Institute to a colleague with similar interests to your own?9.1 / 10What was the most valuable experience from LPSI for you?LPSI gave me an opportunity to meet an amazing group of people. I met a group of people from the Global South, which has motivated all of us to bring forth Global South perspectives and collaborate on ways to bring these perspectives to the global avenue. Getting feedback and giving feedback to people was also a very valuable experience. It is impressive how much clarity you can get on ideas when...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Highlights from LPP’s field-building efforts, published by Legal Priorities Project on March 23, 2023 on The Effective Altruism Forum.IntroductionThe Legal Priorities Project (LPP) is dedicated to building a global community of people passionate about using the law to protect future generations and mitigate existential risk. Through our flagship programs—the Legal Priorities Summer Institute and the Summer Research Fellowship—participants gain hands-on experience and develop the skills needed to make a meaningful impact in this field. In this overview, you will learn more about what it's like to participate in these programs, what to expect afterward, and what other initiatives we’ve run. We are excited to share the work and accomplishments of our talented participants and look forward to hearing from you if you would like to get involved!Legal Priorities Summer InstituteThe Legal Priorities Summer Institute (LPSI) is an intensive, week-long program to introduce altruistically-minded law and policy students and early-career professionals to projects, theories, and tools relevant to tackling critical issues affecting the long-term future.LPSI is aimed primarily at individuals who want to learn about issues affecting the long-term future. Through a series of presentations, discussions, debates, and workshops, led by an array of experts, including government officials, academics, legal practitioners, philanthropists, and members of the judiciary, participants gain a better understanding of the current legal landscape and learn about the latest developments, trends, and best practices related to existential risk and global challenges.Despite the program being highly competitive, we strive to bring together individuals from various backgrounds, countries, cultures, disciplines, and professional experiences. We aim to create an environment that promotes open-mindedness and encourages participants to consider alternative perspectives.2022LPSI 2022 attracted 473 applicants from across the globe (227 undergrads, 160 master’s/JD, 65 PhD/JSD, and 21 others), with a significant number coming from top-ranked institutions—200 (42%) from the world’s top 20 law schools and 113 (24%) from the top 5 law schools. 402 (85%) applicants and 14 (40%) participants had very little or no previous exposure to longtermism.Thirty-five talented individuals were selected to participate in the program, representing eighteen countries and five continents, making the cohort extraordinarily diverse. You can meet our 2022 participants here.Program highlightsA talk on “Reflections on Longtermist AI Governance Research from a Global South Perspective” by Cecil Abungu (Centre for the Study of Existential Risk, Legal Priorities Project).A discussion panel on AI governance with researchers from the Centre for the Study of Existential Risk.A talk on “Catastrophic Climate Change” by Goodwin Gibbins (Future of Humanity Institute).A career planning workshop with an advisor from 80,000 Hours.A talk on “Developing an Impact Litigation Strategy to Limit Catastrophic Biorisk” by Laurens Prins (Legal Priorities Project).Feedback from participantsAll feedback and testimonials were submitted anonymously right after the event.From 1 to 10, how likely is it that you would recommend the Institute to a colleague with similar interests to your own?9.1 / 10What was the most valuable experience from LPSI for you?LPSI gave me an opportunity to meet an amazing group of people. I met a group of people from the Global South, which has motivated all of us to bring forth Global South perspectives and collaborate on ways to bring these perspectives to the global avenue. Getting feedback and giving feedback to people was also a very valuable experience. It is impressive how much clarity you can get on ideas when...]]>
Legal Priorities Project https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:52 None full 5331
inQiiNgDmioHPn63h_NL_EA_EA EA - Anki with Uncertainty: Turn any flashcard deck into a calibration training tool by Sage Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anki with Uncertainty: Turn any flashcard deck into a calibration training tool, published by Sage on March 22, 2023 on The Effective Altruism Forum.We've developed an Anki addon that lets you do calibration training on numerical flashcards!Find information on how it works and how to use it on Quantified Intuitions.Thanks to @Beth Barnes for supporting the development and giving feedback. And thanks to @Hauke Hillebrandt for inspiring this idea with this comment right here on the Forum!It's pretty experimental, so I'd love to hear any feedback or thoughts.In related news: the March Estimation Game is now live - ten new Fermi estimation questions, with some particularly interesting ones this month!See our previous posts for more information about The Estimation Game and our other tools on Quantified Intuitions, including a calibration tool with questions about the world's most pressing problems.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sage https://forum.effectivealtruism.org/posts/inQiiNgDmioHPn63h/anki-with-uncertainty-turn-any-flashcard-deck-into-a Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anki with Uncertainty: Turn any flashcard deck into a calibration training tool, published by Sage on March 22, 2023 on The Effective Altruism Forum.We've developed an Anki addon that lets you do calibration training on numerical flashcards!Find information on how it works and how to use it on Quantified Intuitions.Thanks to @Beth Barnes for supporting the development and giving feedback. And thanks to @Hauke Hillebrandt for inspiring this idea with this comment right here on the Forum!It's pretty experimental, so I'd love to hear any feedback or thoughts.In related news: the March Estimation Game is now live - ten new Fermi estimation questions, with some particularly interesting ones this month!See our previous posts for more information about The Estimation Game and our other tools on Quantified Intuitions, including a calibration tool with questions about the world's most pressing problems.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 23 Mar 2023 15:35:19 +0000 EA - Anki with Uncertainty: Turn any flashcard deck into a calibration training tool by Sage Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anki with Uncertainty: Turn any flashcard deck into a calibration training tool, published by Sage on March 22, 2023 on The Effective Altruism Forum.We've developed an Anki addon that lets you do calibration training on numerical flashcards!Find information on how it works and how to use it on Quantified Intuitions.Thanks to @Beth Barnes for supporting the development and giving feedback. And thanks to @Hauke Hillebrandt for inspiring this idea with this comment right here on the Forum!It's pretty experimental, so I'd love to hear any feedback or thoughts.In related news: the March Estimation Game is now live - ten new Fermi estimation questions, with some particularly interesting ones this month!See our previous posts for more information about The Estimation Game and our other tools on Quantified Intuitions, including a calibration tool with questions about the world's most pressing problems.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anki with Uncertainty: Turn any flashcard deck into a calibration training tool, published by Sage on March 22, 2023 on The Effective Altruism Forum.We've developed an Anki addon that lets you do calibration training on numerical flashcards!Find information on how it works and how to use it on Quantified Intuitions.Thanks to @Beth Barnes for supporting the development and giving feedback. And thanks to @Hauke Hillebrandt for inspiring this idea with this comment right here on the Forum!It's pretty experimental, so I'd love to hear any feedback or thoughts.In related news: the March Estimation Game is now live - ten new Fermi estimation questions, with some particularly interesting ones this month!See our previous posts for more information about The Estimation Game and our other tools on Quantified Intuitions, including a calibration tool with questions about the world's most pressing problems.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sage https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:09 None full 5333
hHCxhFK9ZrKEhFQrL_NL_EA_EA EA - Transcript: NBC Nightly News: AI ‘race to recklessness’ w/ Tristan Harris, Aza Raskin by WilliamKiely Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transcript: NBC Nightly News: AI ‘race to recklessness’ w/ Tristan Harris, Aza Raskin, published by WilliamKiely on March 23, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
WilliamKiely https://forum.effectivealtruism.org/posts/hHCxhFK9ZrKEhFQrL/transcript-abc-nightly-news-ai-race-to-recklessness-w Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transcript: NBC Nightly News: AI ‘race to recklessness’ w/ Tristan Harris, Aza Raskin, published by WilliamKiely on March 23, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 23 Mar 2023 12:45:44 +0000 EA - Transcript: NBC Nightly News: AI ‘race to recklessness’ w/ Tristan Harris, Aza Raskin by WilliamKiely Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transcript: NBC Nightly News: AI ‘race to recklessness’ w/ Tristan Harris, Aza Raskin, published by WilliamKiely on March 23, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transcript: NBC Nightly News: AI ‘race to recklessness’ w/ Tristan Harris, Aza Raskin, published by WilliamKiely on March 23, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
WilliamKiely https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:34 None full 5332
2jHurMmzvyNbeEtCd_NL_EA_EA EA - Making better estimates with scarce information by Stan Pinsent Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Making better estimates with scarce information, published by Stan Pinsent on March 22, 2023 on The Effective Altruism Forum.TL;DRI explore the pros and cons of different approaches to estimation. In general I find that:interval estimates are stronger than point estimatesthe lognormal distribution is better for modelling unknowns than the normal distributionthe geometric mean is better than the arithmetic mean for building aggregate estimatesThese differences are only significant in situations of high uncertainty, characterised by a high ratio between confidence interval bounds. Otherwise, simpler approaches (point estimates & the arithmetic mean) are fine.SummaryI am chiefly interested in how we can make better estimates from very limited evidence. Estimation strategies are key to sanity-checks, cost-effectiveness analyses and forecasting.Speed and accuracy are important considerations when estimating, but so is legibility; we want our work to be easy to understand. This post explores which approaches are more accurate and when the increase in accuracy justifies the increase in complexity.My key findings are:Interval (or distribution) estimates are more accurate than point estimates because they capture more information. When dividing by an unknown of high variability (high ratio between confidence interval bounds) point estimates are significantly worse.It is typically better to model distributions as lognormal rather than normal. Both are similar in situations with low variability, but lognormal appears to better describe situations of high variability..The geometric mean is best for building aggregate estimates. It captures the positive skew typical of more variable distributions.In general, simple methods are fine while you are estimating quantities with low variability. The increased complexity of modelling distributions and using geometric means is only worthwhile when the unknown values are highly variable.Interval vs point estimatesIn this section we will find that for calculations involving division, interval estimates are more accurate than point estimates. The difference is most stark in situations of high uncertainty.Interval estimates, for which we give an interval within which we estimate the unknown value lies, capture more information than a point estimate (which is simply what we estimate the value to be). Interval estimates often include the probability that the value lies within our interval (confidence intervals) and sometimes specify the shape of the underlying distribution. In this post I treat interval estimates as distribution estimates as the same thing.Here I attempt to answer the following question: how much more accurate are interval estimates and when is the increased complexity worthwhile?Core examplesI will explore this through two examples which I will return to later in the post.Fuel Cost: The amount I will spend on fuel on my road trip in Florida next month. The abundance of information I have about fuel prices, the efficiency of my car and the length of my trip means I can use narrow confidence intervals to build an estimate.Inhabitable Planets: The number of planets in our galaxy with conditions that could harbour intelligent life. The lack of available information means I will use very wide confidence intervals.Point estimates are fine for multiplication, lossy for divisionLet’s start with Fuel Cost. Using Squiggle (which uses lognormal distributions by default; see the next section for more on why), I enter 90% confidence intervals to build distributions for fuel cost per mile (USD per mile) and distance of my trip (miles). This gives me an expected fuel cost of 49.18USDWhat if I had used point estimates? I can check this by performing the same calculation using the expected values of each of the distrib...]]>
Stan Pinsent https://forum.effectivealtruism.org/posts/2jHurMmzvyNbeEtCd/making-better-estimates-with-scarce-information Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Making better estimates with scarce information, published by Stan Pinsent on March 22, 2023 on The Effective Altruism Forum.TL;DRI explore the pros and cons of different approaches to estimation. In general I find that:interval estimates are stronger than point estimatesthe lognormal distribution is better for modelling unknowns than the normal distributionthe geometric mean is better than the arithmetic mean for building aggregate estimatesThese differences are only significant in situations of high uncertainty, characterised by a high ratio between confidence interval bounds. Otherwise, simpler approaches (point estimates & the arithmetic mean) are fine.SummaryI am chiefly interested in how we can make better estimates from very limited evidence. Estimation strategies are key to sanity-checks, cost-effectiveness analyses and forecasting.Speed and accuracy are important considerations when estimating, but so is legibility; we want our work to be easy to understand. This post explores which approaches are more accurate and when the increase in accuracy justifies the increase in complexity.My key findings are:Interval (or distribution) estimates are more accurate than point estimates because they capture more information. When dividing by an unknown of high variability (high ratio between confidence interval bounds) point estimates are significantly worse.It is typically better to model distributions as lognormal rather than normal. Both are similar in situations with low variability, but lognormal appears to better describe situations of high variability..The geometric mean is best for building aggregate estimates. It captures the positive skew typical of more variable distributions.In general, simple methods are fine while you are estimating quantities with low variability. The increased complexity of modelling distributions and using geometric means is only worthwhile when the unknown values are highly variable.Interval vs point estimatesIn this section we will find that for calculations involving division, interval estimates are more accurate than point estimates. The difference is most stark in situations of high uncertainty.Interval estimates, for which we give an interval within which we estimate the unknown value lies, capture more information than a point estimate (which is simply what we estimate the value to be). Interval estimates often include the probability that the value lies within our interval (confidence intervals) and sometimes specify the shape of the underlying distribution. In this post I treat interval estimates as distribution estimates as the same thing.Here I attempt to answer the following question: how much more accurate are interval estimates and when is the increased complexity worthwhile?Core examplesI will explore this through two examples which I will return to later in the post.Fuel Cost: The amount I will spend on fuel on my road trip in Florida next month. The abundance of information I have about fuel prices, the efficiency of my car and the length of my trip means I can use narrow confidence intervals to build an estimate.Inhabitable Planets: The number of planets in our galaxy with conditions that could harbour intelligent life. The lack of available information means I will use very wide confidence intervals.Point estimates are fine for multiplication, lossy for divisionLet’s start with Fuel Cost. Using Squiggle (which uses lognormal distributions by default; see the next section for more on why), I enter 90% confidence intervals to build distributions for fuel cost per mile (USD per mile) and distance of my trip (miles). This gives me an expected fuel cost of 49.18USDWhat if I had used point estimates? I can check this by performing the same calculation using the expected values of each of the distrib...]]>
Thu, 23 Mar 2023 04:12:34 +0000 EA - Making better estimates with scarce information by Stan Pinsent Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Making better estimates with scarce information, published by Stan Pinsent on March 22, 2023 on The Effective Altruism Forum.TL;DRI explore the pros and cons of different approaches to estimation. In general I find that:interval estimates are stronger than point estimatesthe lognormal distribution is better for modelling unknowns than the normal distributionthe geometric mean is better than the arithmetic mean for building aggregate estimatesThese differences are only significant in situations of high uncertainty, characterised by a high ratio between confidence interval bounds. Otherwise, simpler approaches (point estimates & the arithmetic mean) are fine.SummaryI am chiefly interested in how we can make better estimates from very limited evidence. Estimation strategies are key to sanity-checks, cost-effectiveness analyses and forecasting.Speed and accuracy are important considerations when estimating, but so is legibility; we want our work to be easy to understand. This post explores which approaches are more accurate and when the increase in accuracy justifies the increase in complexity.My key findings are:Interval (or distribution) estimates are more accurate than point estimates because they capture more information. When dividing by an unknown of high variability (high ratio between confidence interval bounds) point estimates are significantly worse.It is typically better to model distributions as lognormal rather than normal. Both are similar in situations with low variability, but lognormal appears to better describe situations of high variability..The geometric mean is best for building aggregate estimates. It captures the positive skew typical of more variable distributions.In general, simple methods are fine while you are estimating quantities with low variability. The increased complexity of modelling distributions and using geometric means is only worthwhile when the unknown values are highly variable.Interval vs point estimatesIn this section we will find that for calculations involving division, interval estimates are more accurate than point estimates. The difference is most stark in situations of high uncertainty.Interval estimates, for which we give an interval within which we estimate the unknown value lies, capture more information than a point estimate (which is simply what we estimate the value to be). Interval estimates often include the probability that the value lies within our interval (confidence intervals) and sometimes specify the shape of the underlying distribution. In this post I treat interval estimates as distribution estimates as the same thing.Here I attempt to answer the following question: how much more accurate are interval estimates and when is the increased complexity worthwhile?Core examplesI will explore this through two examples which I will return to later in the post.Fuel Cost: The amount I will spend on fuel on my road trip in Florida next month. The abundance of information I have about fuel prices, the efficiency of my car and the length of my trip means I can use narrow confidence intervals to build an estimate.Inhabitable Planets: The number of planets in our galaxy with conditions that could harbour intelligent life. The lack of available information means I will use very wide confidence intervals.Point estimates are fine for multiplication, lossy for divisionLet’s start with Fuel Cost. Using Squiggle (which uses lognormal distributions by default; see the next section for more on why), I enter 90% confidence intervals to build distributions for fuel cost per mile (USD per mile) and distance of my trip (miles). This gives me an expected fuel cost of 49.18USDWhat if I had used point estimates? I can check this by performing the same calculation using the expected values of each of the distrib...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Making better estimates with scarce information, published by Stan Pinsent on March 22, 2023 on The Effective Altruism Forum.TL;DRI explore the pros and cons of different approaches to estimation. In general I find that:interval estimates are stronger than point estimatesthe lognormal distribution is better for modelling unknowns than the normal distributionthe geometric mean is better than the arithmetic mean for building aggregate estimatesThese differences are only significant in situations of high uncertainty, characterised by a high ratio between confidence interval bounds. Otherwise, simpler approaches (point estimates & the arithmetic mean) are fine.SummaryI am chiefly interested in how we can make better estimates from very limited evidence. Estimation strategies are key to sanity-checks, cost-effectiveness analyses and forecasting.Speed and accuracy are important considerations when estimating, but so is legibility; we want our work to be easy to understand. This post explores which approaches are more accurate and when the increase in accuracy justifies the increase in complexity.My key findings are:Interval (or distribution) estimates are more accurate than point estimates because they capture more information. When dividing by an unknown of high variability (high ratio between confidence interval bounds) point estimates are significantly worse.It is typically better to model distributions as lognormal rather than normal. Both are similar in situations with low variability, but lognormal appears to better describe situations of high variability..The geometric mean is best for building aggregate estimates. It captures the positive skew typical of more variable distributions.In general, simple methods are fine while you are estimating quantities with low variability. The increased complexity of modelling distributions and using geometric means is only worthwhile when the unknown values are highly variable.Interval vs point estimatesIn this section we will find that for calculations involving division, interval estimates are more accurate than point estimates. The difference is most stark in situations of high uncertainty.Interval estimates, for which we give an interval within which we estimate the unknown value lies, capture more information than a point estimate (which is simply what we estimate the value to be). Interval estimates often include the probability that the value lies within our interval (confidence intervals) and sometimes specify the shape of the underlying distribution. In this post I treat interval estimates as distribution estimates as the same thing.Here I attempt to answer the following question: how much more accurate are interval estimates and when is the increased complexity worthwhile?Core examplesI will explore this through two examples which I will return to later in the post.Fuel Cost: The amount I will spend on fuel on my road trip in Florida next month. The abundance of information I have about fuel prices, the efficiency of my car and the length of my trip means I can use narrow confidence intervals to build an estimate.Inhabitable Planets: The number of planets in our galaxy with conditions that could harbour intelligent life. The lack of available information means I will use very wide confidence intervals.Point estimates are fine for multiplication, lossy for divisionLet’s start with Fuel Cost. Using Squiggle (which uses lognormal distributions by default; see the next section for more on why), I enter 90% confidence intervals to build distributions for fuel cost per mile (USD per mile) and distance of my trip (miles). This gives me an expected fuel cost of 49.18USDWhat if I had used point estimates? I can check this by performing the same calculation using the expected values of each of the distrib...]]>
Stan Pinsent https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 16:47 None full 5334
N9xreY5eqmYJy3cpF_NL_EA_EA EA - Books: Lend, Don't Give by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Books: Lend, Don't Give, published by Jeff Kaufman on March 22, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/N9xreY5eqmYJy3cpF/books-lend-don-t-give Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Books: Lend, Don't Give, published by Jeff Kaufman on March 22, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 22 Mar 2023 21:04:06 +0000 EA - Books: Lend, Don't Give by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Books: Lend, Don't Give, published by Jeff Kaufman on March 22, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Books: Lend, Don't Give, published by Jeff Kaufman on March 22, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:26 None full 5325
92TAmcppCL7t54Ajn_NL_EA_EA EA - Announcing the European Network for AI Safety (ENAIS) by Esben Kran Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the European Network for AI Safety (ENAIS), published by Esben Kran on March 22, 2023 on The Effective Altruism Forum.TLDR; The European Network for AI Safety is a central point for connecting researchers and community organizers in Europe with opportunities and events happening in their vicinity. Sign up here to become a member of the network, and join our launch event on Wednesday, April 5th from 19:00-20:00 CET!Why did we create ENAIS?ENAIS was founded by European AI safety researchers and field-builders who recognized the lack of interaction among various groups in the region. Our goal is to address the decentralized nature of AI safety work in Europe by improving information exchange and coordination.We focus on Europe for several reasons: a Europe-specific organization can better address local issues like the EU AI Act, foster smoother collaboration among members and the free travel within Schengen also eases event coordination.About the networkENAIS strives to advance AI Safety in Europe, mitigate risks from AI systems, particularly existential risks, and enhance collaboration among the continent's isolated AI Safety communities.We also aim to connect international communities by sharing insights about European activities and information from other hubs. We plan to offer infrastructure and support for establishing communities, coworking spaces, and assistance for independent researchers with operational needs.Concretely, we organize / create:A centralized online location for accessing European AI safety hubs and resources for field-building on the enais.co website. The map on the front page provides direct access to the most relevant links and locations across Europe for AI safety.A quarterly newsletter with updated information about what field-builders and AI safety researchers should be aware of in Continental Europe.A professional network and database of the organizations and people working on AI safety.Events and 1-1 career advice to aid transitioning into AI Safety or between different AI Safety roles.Support for people wanting to create a similar organization in other regions.We intend to leverage the expertise of the network to positively impact policy proposals in Europe (like the EU AI Act), as policymakers and technical researchers can more easily find each other. In addition, we aim to create infrastructure to make the research work of European researchers easier and more productive, for example, by helping researchers with finding an employer of records and getting funding.With the decentralized nature of ENAIS, we also invite network members to self-organize events under the ENAIS banner with support from other members.What does European AI safety currently look like?Below you will find a non-exhaustive map of cities with AI Safety researchers or organizations. The green markers indicate an AIS group, whereas the blue markers indicate individual AIS researchers or smaller groups. You are invited to add information to the map here.VisionThe initial vision for ENAIS is to be the go-to access point for information and people interested in AI safety in Europe. We also want to provide a network and brand for groups and events.The longer-term strategy and vision will mostly be developed by the people who join as directors with guidance from the board. This might include projects such as policymaker communication, event coordination, regranting, community incubation, and researcher outreach.Join the network!Sign up for the network here by providing information on your interests, openness to collaboration, and location. We will include you in our database (if you previously filled in information, we will email you so you may update your information). You can choose your level of privacy to not appear publicly and only to m...]]>
Esben Kran https://forum.effectivealtruism.org/posts/92TAmcppCL7t54Ajn/announcing-the-european-network-for-ai-safety-enais Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the European Network for AI Safety (ENAIS), published by Esben Kran on March 22, 2023 on The Effective Altruism Forum.TLDR; The European Network for AI Safety is a central point for connecting researchers and community organizers in Europe with opportunities and events happening in their vicinity. Sign up here to become a member of the network, and join our launch event on Wednesday, April 5th from 19:00-20:00 CET!Why did we create ENAIS?ENAIS was founded by European AI safety researchers and field-builders who recognized the lack of interaction among various groups in the region. Our goal is to address the decentralized nature of AI safety work in Europe by improving information exchange and coordination.We focus on Europe for several reasons: a Europe-specific organization can better address local issues like the EU AI Act, foster smoother collaboration among members and the free travel within Schengen also eases event coordination.About the networkENAIS strives to advance AI Safety in Europe, mitigate risks from AI systems, particularly existential risks, and enhance collaboration among the continent's isolated AI Safety communities.We also aim to connect international communities by sharing insights about European activities and information from other hubs. We plan to offer infrastructure and support for establishing communities, coworking spaces, and assistance for independent researchers with operational needs.Concretely, we organize / create:A centralized online location for accessing European AI safety hubs and resources for field-building on the enais.co website. The map on the front page provides direct access to the most relevant links and locations across Europe for AI safety.A quarterly newsletter with updated information about what field-builders and AI safety researchers should be aware of in Continental Europe.A professional network and database of the organizations and people working on AI safety.Events and 1-1 career advice to aid transitioning into AI Safety or between different AI Safety roles.Support for people wanting to create a similar organization in other regions.We intend to leverage the expertise of the network to positively impact policy proposals in Europe (like the EU AI Act), as policymakers and technical researchers can more easily find each other. In addition, we aim to create infrastructure to make the research work of European researchers easier and more productive, for example, by helping researchers with finding an employer of records and getting funding.With the decentralized nature of ENAIS, we also invite network members to self-organize events under the ENAIS banner with support from other members.What does European AI safety currently look like?Below you will find a non-exhaustive map of cities with AI Safety researchers or organizations. The green markers indicate an AIS group, whereas the blue markers indicate individual AIS researchers or smaller groups. You are invited to add information to the map here.VisionThe initial vision for ENAIS is to be the go-to access point for information and people interested in AI safety in Europe. We also want to provide a network and brand for groups and events.The longer-term strategy and vision will mostly be developed by the people who join as directors with guidance from the board. This might include projects such as policymaker communication, event coordination, regranting, community incubation, and researcher outreach.Join the network!Sign up for the network here by providing information on your interests, openness to collaboration, and location. We will include you in our database (if you previously filled in information, we will email you so you may update your information). You can choose your level of privacy to not appear publicly and only to m...]]>
Wed, 22 Mar 2023 19:37:50 +0000 EA - Announcing the European Network for AI Safety (ENAIS) by Esben Kran Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the European Network for AI Safety (ENAIS), published by Esben Kran on March 22, 2023 on The Effective Altruism Forum.TLDR; The European Network for AI Safety is a central point for connecting researchers and community organizers in Europe with opportunities and events happening in their vicinity. Sign up here to become a member of the network, and join our launch event on Wednesday, April 5th from 19:00-20:00 CET!Why did we create ENAIS?ENAIS was founded by European AI safety researchers and field-builders who recognized the lack of interaction among various groups in the region. Our goal is to address the decentralized nature of AI safety work in Europe by improving information exchange and coordination.We focus on Europe for several reasons: a Europe-specific organization can better address local issues like the EU AI Act, foster smoother collaboration among members and the free travel within Schengen also eases event coordination.About the networkENAIS strives to advance AI Safety in Europe, mitigate risks from AI systems, particularly existential risks, and enhance collaboration among the continent's isolated AI Safety communities.We also aim to connect international communities by sharing insights about European activities and information from other hubs. We plan to offer infrastructure and support for establishing communities, coworking spaces, and assistance for independent researchers with operational needs.Concretely, we organize / create:A centralized online location for accessing European AI safety hubs and resources for field-building on the enais.co website. The map on the front page provides direct access to the most relevant links and locations across Europe for AI safety.A quarterly newsletter with updated information about what field-builders and AI safety researchers should be aware of in Continental Europe.A professional network and database of the organizations and people working on AI safety.Events and 1-1 career advice to aid transitioning into AI Safety or between different AI Safety roles.Support for people wanting to create a similar organization in other regions.We intend to leverage the expertise of the network to positively impact policy proposals in Europe (like the EU AI Act), as policymakers and technical researchers can more easily find each other. In addition, we aim to create infrastructure to make the research work of European researchers easier and more productive, for example, by helping researchers with finding an employer of records and getting funding.With the decentralized nature of ENAIS, we also invite network members to self-organize events under the ENAIS banner with support from other members.What does European AI safety currently look like?Below you will find a non-exhaustive map of cities with AI Safety researchers or organizations. The green markers indicate an AIS group, whereas the blue markers indicate individual AIS researchers or smaller groups. You are invited to add information to the map here.VisionThe initial vision for ENAIS is to be the go-to access point for information and people interested in AI safety in Europe. We also want to provide a network and brand for groups and events.The longer-term strategy and vision will mostly be developed by the people who join as directors with guidance from the board. This might include projects such as policymaker communication, event coordination, regranting, community incubation, and researcher outreach.Join the network!Sign up for the network here by providing information on your interests, openness to collaboration, and location. We will include you in our database (if you previously filled in information, we will email you so you may update your information). You can choose your level of privacy to not appear publicly and only to m...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the European Network for AI Safety (ENAIS), published by Esben Kran on March 22, 2023 on The Effective Altruism Forum.TLDR; The European Network for AI Safety is a central point for connecting researchers and community organizers in Europe with opportunities and events happening in their vicinity. Sign up here to become a member of the network, and join our launch event on Wednesday, April 5th from 19:00-20:00 CET!Why did we create ENAIS?ENAIS was founded by European AI safety researchers and field-builders who recognized the lack of interaction among various groups in the region. Our goal is to address the decentralized nature of AI safety work in Europe by improving information exchange and coordination.We focus on Europe for several reasons: a Europe-specific organization can better address local issues like the EU AI Act, foster smoother collaboration among members and the free travel within Schengen also eases event coordination.About the networkENAIS strives to advance AI Safety in Europe, mitigate risks from AI systems, particularly existential risks, and enhance collaboration among the continent's isolated AI Safety communities.We also aim to connect international communities by sharing insights about European activities and information from other hubs. We plan to offer infrastructure and support for establishing communities, coworking spaces, and assistance for independent researchers with operational needs.Concretely, we organize / create:A centralized online location for accessing European AI safety hubs and resources for field-building on the enais.co website. The map on the front page provides direct access to the most relevant links and locations across Europe for AI safety.A quarterly newsletter with updated information about what field-builders and AI safety researchers should be aware of in Continental Europe.A professional network and database of the organizations and people working on AI safety.Events and 1-1 career advice to aid transitioning into AI Safety or between different AI Safety roles.Support for people wanting to create a similar organization in other regions.We intend to leverage the expertise of the network to positively impact policy proposals in Europe (like the EU AI Act), as policymakers and technical researchers can more easily find each other. In addition, we aim to create infrastructure to make the research work of European researchers easier and more productive, for example, by helping researchers with finding an employer of records and getting funding.With the decentralized nature of ENAIS, we also invite network members to self-organize events under the ENAIS banner with support from other members.What does European AI safety currently look like?Below you will find a non-exhaustive map of cities with AI Safety researchers or organizations. The green markers indicate an AIS group, whereas the blue markers indicate individual AIS researchers or smaller groups. You are invited to add information to the map here.VisionThe initial vision for ENAIS is to be the go-to access point for information and people interested in AI safety in Europe. We also want to provide a network and brand for groups and events.The longer-term strategy and vision will mostly be developed by the people who join as directors with guidance from the board. This might include projects such as policymaker communication, event coordination, regranting, community incubation, and researcher outreach.Join the network!Sign up for the network here by providing information on your interests, openness to collaboration, and location. We will include you in our database (if you previously filled in information, we will email you so you may update your information). You can choose your level of privacy to not appear publicly and only to m...]]>
Esben Kran https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:47 None full 5320
pvZwc3wmTdKfQRorR_NL_EA_EA EA - Free coaching sessions by Monica Diaz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free coaching sessions, published by Monica Diaz on March 21, 2023 on The Effective Altruism Forum.I’m offering free one-on-one coaching sessions to autistic people in the EA community. I’m autistic myself and have provided direct support to autistic people for over 9 years.My sessions focus on self-discovery, skill-development, and finding solutions to common challenges related to being autistic. It can also be nice to talk to someone else who just gets it.Send me a message if you're interested in free coaching sessions, want to learn more, or just want to connect. You can also book a 30-minute introductory meeting with me here:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Monica Diaz https://forum.effectivealtruism.org/posts/pvZwc3wmTdKfQRorR/free-coaching-sessions Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free coaching sessions, published by Monica Diaz on March 21, 2023 on The Effective Altruism Forum.I’m offering free one-on-one coaching sessions to autistic people in the EA community. I’m autistic myself and have provided direct support to autistic people for over 9 years.My sessions focus on self-discovery, skill-development, and finding solutions to common challenges related to being autistic. It can also be nice to talk to someone else who just gets it.Send me a message if you're interested in free coaching sessions, want to learn more, or just want to connect. You can also book a 30-minute introductory meeting with me here:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 22 Mar 2023 18:16:20 +0000 EA - Free coaching sessions by Monica Diaz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free coaching sessions, published by Monica Diaz on March 21, 2023 on The Effective Altruism Forum.I’m offering free one-on-one coaching sessions to autistic people in the EA community. I’m autistic myself and have provided direct support to autistic people for over 9 years.My sessions focus on self-discovery, skill-development, and finding solutions to common challenges related to being autistic. It can also be nice to talk to someone else who just gets it.Send me a message if you're interested in free coaching sessions, want to learn more, or just want to connect. You can also book a 30-minute introductory meeting with me here:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free coaching sessions, published by Monica Diaz on March 21, 2023 on The Effective Altruism Forum.I’m offering free one-on-one coaching sessions to autistic people in the EA community. I’m autistic myself and have provided direct support to autistic people for over 9 years.My sessions focus on self-discovery, skill-development, and finding solutions to common challenges related to being autistic. It can also be nice to talk to someone else who just gets it.Send me a message if you're interested in free coaching sessions, want to learn more, or just want to connect. You can also book a 30-minute introductory meeting with me here:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Monica Diaz https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:53 None full 5323
jfLjsxcejCFDpo7dw_NL_EA_EA EA - Whether you should do a PhD doesn't depend much on timelines. by alex lawsen (previously alexrjl) Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Whether you should do a PhD doesn't depend much on timelines., published by alex lawsen (previously alexrjl) on March 22, 2023 on The Effective Altruism Forum.I wrote this as an answer to a question which I think has now been deleted, so I copied it to my shortform in order to be able to link it in future, and found myself linking to it often enough that it seemed worth making a top-level post, in particular because if there are important counterarguments I haven't considered I'd like to come across them sooner rather than later! I'd usually put more thought into editing a top-level post, but the realistic options here were not post it at all, or post it without editing.Epistemic status: I've thought about both how people should thinking about PhDs and how people should think about timelines a fair bit, both in my own time and in my role as an advisor at 80k, but I wrote this fairly quickly. I'm sharing my take on this rather than intending to speak on behalf of the whole organisation, though my guess is that the typical view is pretty similar.BLUF:Whether to do a PhD is a decision which depends heavily enough on personal fit that I expect thinking about how well you in particular are suited to a particular PhD to be much more useful than thinking about the effects of timelines estimates on that decision.Don’t pay too much attention to median timelines estimates. There’s a lot of uncertainty, and finding the right path for you can easily make a bigger difference than matching the path to the median timeline.Going into a bit more detail - I think there are a couple of aspects to this question, which I’m going to try to (imperfectly) split up:How should you respond to timelines estimates when planning your career?How should you think about PhDs if you are confident timelines are very short?In terms of how to think about timelines in general, the main advice I’d give is to try to avoid the mistake of interpreting median estimates as single points. Taking this metaculus question as an example, which has a median of July 2027, that doesn’t mean the community predicts that AGI will arrive then! The median just indicates the date by which the community thinks there’s a 50% chance the question will have resolved. To get more precise about this, we can tell from the graph that the community estimates:Only a 7% chance that AGI is developed in the year 2027A 25% chance that AGI will be developed before August of next year.An 11% chance that AGI will not be developed before 2050A 9% chance that the question has already resolved.A 41% chance that AGI will be developed after January 2029 (6 years from the time of writing).Taking these estimates literally, and additionally assuming that any work that happens post this question resolving is totally useless (which seems very unlikely), you might then conclude that delaying your career by 6 years would cause it to have 41/91 = 45% of the value. If that’s the case, if the delay increased the impact you could have by a bit more than a factor of 2, the delay would be worth it.Having done all of that work (and glossed over a bunch of subtlety in the last comment for brevity), I now want to say that you shouldn’t take the metaculus estimates at face value though. The reason is that (as I’m sure you’ve noticed, and as you’ve seen in the comments) they just aren’t going to be that reliable for this kind of question. Nothing is - this kind of prediction is really hard.The net effect of this increased uncertainty should be (I claim) to flatten the probability distribution you are working with. This basically means it makes even less sense than you’d think from looking at the distribution to plan for AGI as if timelines are point estimates.Ok, but what does this mean for PhDs?Before I say anything about how a PhD decision intera...]]>
alex lawsen (previously alexrjl) https://forum.effectivealtruism.org/posts/jfLjsxcejCFDpo7dw/whether-you-should-do-a-phd-doesn-t-depend-much-on-timelines Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Whether you should do a PhD doesn't depend much on timelines., published by alex lawsen (previously alexrjl) on March 22, 2023 on The Effective Altruism Forum.I wrote this as an answer to a question which I think has now been deleted, so I copied it to my shortform in order to be able to link it in future, and found myself linking to it often enough that it seemed worth making a top-level post, in particular because if there are important counterarguments I haven't considered I'd like to come across them sooner rather than later! I'd usually put more thought into editing a top-level post, but the realistic options here were not post it at all, or post it without editing.Epistemic status: I've thought about both how people should thinking about PhDs and how people should think about timelines a fair bit, both in my own time and in my role as an advisor at 80k, but I wrote this fairly quickly. I'm sharing my take on this rather than intending to speak on behalf of the whole organisation, though my guess is that the typical view is pretty similar.BLUF:Whether to do a PhD is a decision which depends heavily enough on personal fit that I expect thinking about how well you in particular are suited to a particular PhD to be much more useful than thinking about the effects of timelines estimates on that decision.Don’t pay too much attention to median timelines estimates. There’s a lot of uncertainty, and finding the right path for you can easily make a bigger difference than matching the path to the median timeline.Going into a bit more detail - I think there are a couple of aspects to this question, which I’m going to try to (imperfectly) split up:How should you respond to timelines estimates when planning your career?How should you think about PhDs if you are confident timelines are very short?In terms of how to think about timelines in general, the main advice I’d give is to try to avoid the mistake of interpreting median estimates as single points. Taking this metaculus question as an example, which has a median of July 2027, that doesn’t mean the community predicts that AGI will arrive then! The median just indicates the date by which the community thinks there’s a 50% chance the question will have resolved. To get more precise about this, we can tell from the graph that the community estimates:Only a 7% chance that AGI is developed in the year 2027A 25% chance that AGI will be developed before August of next year.An 11% chance that AGI will not be developed before 2050A 9% chance that the question has already resolved.A 41% chance that AGI will be developed after January 2029 (6 years from the time of writing).Taking these estimates literally, and additionally assuming that any work that happens post this question resolving is totally useless (which seems very unlikely), you might then conclude that delaying your career by 6 years would cause it to have 41/91 = 45% of the value. If that’s the case, if the delay increased the impact you could have by a bit more than a factor of 2, the delay would be worth it.Having done all of that work (and glossed over a bunch of subtlety in the last comment for brevity), I now want to say that you shouldn’t take the metaculus estimates at face value though. The reason is that (as I’m sure you’ve noticed, and as you’ve seen in the comments) they just aren’t going to be that reliable for this kind of question. Nothing is - this kind of prediction is really hard.The net effect of this increased uncertainty should be (I claim) to flatten the probability distribution you are working with. This basically means it makes even less sense than you’d think from looking at the distribution to plan for AGI as if timelines are point estimates.Ok, but what does this mean for PhDs?Before I say anything about how a PhD decision intera...]]>
Wed, 22 Mar 2023 13:34:39 +0000 EA - Whether you should do a PhD doesn't depend much on timelines. by alex lawsen (previously alexrjl) Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Whether you should do a PhD doesn't depend much on timelines., published by alex lawsen (previously alexrjl) on March 22, 2023 on The Effective Altruism Forum.I wrote this as an answer to a question which I think has now been deleted, so I copied it to my shortform in order to be able to link it in future, and found myself linking to it often enough that it seemed worth making a top-level post, in particular because if there are important counterarguments I haven't considered I'd like to come across them sooner rather than later! I'd usually put more thought into editing a top-level post, but the realistic options here were not post it at all, or post it without editing.Epistemic status: I've thought about both how people should thinking about PhDs and how people should think about timelines a fair bit, both in my own time and in my role as an advisor at 80k, but I wrote this fairly quickly. I'm sharing my take on this rather than intending to speak on behalf of the whole organisation, though my guess is that the typical view is pretty similar.BLUF:Whether to do a PhD is a decision which depends heavily enough on personal fit that I expect thinking about how well you in particular are suited to a particular PhD to be much more useful than thinking about the effects of timelines estimates on that decision.Don’t pay too much attention to median timelines estimates. There’s a lot of uncertainty, and finding the right path for you can easily make a bigger difference than matching the path to the median timeline.Going into a bit more detail - I think there are a couple of aspects to this question, which I’m going to try to (imperfectly) split up:How should you respond to timelines estimates when planning your career?How should you think about PhDs if you are confident timelines are very short?In terms of how to think about timelines in general, the main advice I’d give is to try to avoid the mistake of interpreting median estimates as single points. Taking this metaculus question as an example, which has a median of July 2027, that doesn’t mean the community predicts that AGI will arrive then! The median just indicates the date by which the community thinks there’s a 50% chance the question will have resolved. To get more precise about this, we can tell from the graph that the community estimates:Only a 7% chance that AGI is developed in the year 2027A 25% chance that AGI will be developed before August of next year.An 11% chance that AGI will not be developed before 2050A 9% chance that the question has already resolved.A 41% chance that AGI will be developed after January 2029 (6 years from the time of writing).Taking these estimates literally, and additionally assuming that any work that happens post this question resolving is totally useless (which seems very unlikely), you might then conclude that delaying your career by 6 years would cause it to have 41/91 = 45% of the value. If that’s the case, if the delay increased the impact you could have by a bit more than a factor of 2, the delay would be worth it.Having done all of that work (and glossed over a bunch of subtlety in the last comment for brevity), I now want to say that you shouldn’t take the metaculus estimates at face value though. The reason is that (as I’m sure you’ve noticed, and as you’ve seen in the comments) they just aren’t going to be that reliable for this kind of question. Nothing is - this kind of prediction is really hard.The net effect of this increased uncertainty should be (I claim) to flatten the probability distribution you are working with. This basically means it makes even less sense than you’d think from looking at the distribution to plan for AGI as if timelines are point estimates.Ok, but what does this mean for PhDs?Before I say anything about how a PhD decision intera...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Whether you should do a PhD doesn't depend much on timelines., published by alex lawsen (previously alexrjl) on March 22, 2023 on The Effective Altruism Forum.I wrote this as an answer to a question which I think has now been deleted, so I copied it to my shortform in order to be able to link it in future, and found myself linking to it often enough that it seemed worth making a top-level post, in particular because if there are important counterarguments I haven't considered I'd like to come across them sooner rather than later! I'd usually put more thought into editing a top-level post, but the realistic options here were not post it at all, or post it without editing.Epistemic status: I've thought about both how people should thinking about PhDs and how people should think about timelines a fair bit, both in my own time and in my role as an advisor at 80k, but I wrote this fairly quickly. I'm sharing my take on this rather than intending to speak on behalf of the whole organisation, though my guess is that the typical view is pretty similar.BLUF:Whether to do a PhD is a decision which depends heavily enough on personal fit that I expect thinking about how well you in particular are suited to a particular PhD to be much more useful than thinking about the effects of timelines estimates on that decision.Don’t pay too much attention to median timelines estimates. There’s a lot of uncertainty, and finding the right path for you can easily make a bigger difference than matching the path to the median timeline.Going into a bit more detail - I think there are a couple of aspects to this question, which I’m going to try to (imperfectly) split up:How should you respond to timelines estimates when planning your career?How should you think about PhDs if you are confident timelines are very short?In terms of how to think about timelines in general, the main advice I’d give is to try to avoid the mistake of interpreting median estimates as single points. Taking this metaculus question as an example, which has a median of July 2027, that doesn’t mean the community predicts that AGI will arrive then! The median just indicates the date by which the community thinks there’s a 50% chance the question will have resolved. To get more precise about this, we can tell from the graph that the community estimates:Only a 7% chance that AGI is developed in the year 2027A 25% chance that AGI will be developed before August of next year.An 11% chance that AGI will not be developed before 2050A 9% chance that the question has already resolved.A 41% chance that AGI will be developed after January 2029 (6 years from the time of writing).Taking these estimates literally, and additionally assuming that any work that happens post this question resolving is totally useless (which seems very unlikely), you might then conclude that delaying your career by 6 years would cause it to have 41/91 = 45% of the value. If that’s the case, if the delay increased the impact you could have by a bit more than a factor of 2, the delay would be worth it.Having done all of that work (and glossed over a bunch of subtlety in the last comment for brevity), I now want to say that you shouldn’t take the metaculus estimates at face value though. The reason is that (as I’m sure you’ve noticed, and as you’ve seen in the comments) they just aren’t going to be that reliable for this kind of question. Nothing is - this kind of prediction is really hard.The net effect of this increased uncertainty should be (I claim) to flatten the probability distribution you are working with. This basically means it makes even less sense than you’d think from looking at the distribution to plan for AGI as if timelines are point estimates.Ok, but what does this mean for PhDs?Before I say anything about how a PhD decision intera...]]>
alex lawsen (previously alexrjl) https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:33 None full 5322
sLB6tEovv7jDkEghG_NL_EA_EA EA - Design changes and the community section (Forum update March 2023) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Design changes & the community section (Forum update March 2023), published by Lizka on March 21, 2023 on The Effective Altruism Forum.We’re sharing the results of the Community-Frontpage test, and we’ve released a Forum redesign — I discuss it below. I also outline some things we’re thinking about right now.As always, we’re also interested in feedback on these changes. We’d be really grateful if you filled out this (very quick) survey on the redesign that might help give us a sense of what people are thinking. You can also comment on this post with your thoughts or reach out to forum@centreforeffectivealtruism.org.Results of the Community-Frontpage test & more thoughts on community postsA little over a month ago, we announced a test: we’d be trying out separating “Community” posts from other kinds by creating a “Community” section on the Frontpage of the Forum.We’ve gotten a lot of feedback; we believe that the change was an improvement, so we’re planning on keeping it for the near future, with some modifications. We might still make some changes like switching from a section to tabs, especially depending on new feedback and on how related projects go.OutcomesInformation we gatheredWe sourced user feedback from different places:User interviews with people at EA Global and elsewhere (at least 20 interviews, different people doing the interviewing)Responses to a quick survey on how we can improve discussions on the Forum (45 responses)Metrics (mostly used as sanity checks)Engagement with the Forum overall (engagement on the Forum is 7% lower than the previous month, which is within the bounds we set ourselves and there’s a lot of fluctuation, so we’re just going to keep monitoring this)Engagement with Community posts (it dropped 8%, which may just be tracking overall engagement, and again, we’re going to keep monitoring it)There are still important & useful Community posts every week (subjective assessment)(there are)The team’s experience with the section, and whether we thought the change was positive overallOutcomes and themes:The responses we got were overwhelmingly positive about the change. People told us directly (in user interviews and in passing) that the change was improving their experience on the Forum. We also personally thought that the change had gone very well — likely better than we’d expected as a ~70% best outcome.And here are the results from the survey:The metrics we're tracking (listed above) were within the bounds we’d set, and we were mostly using them as sanity checks.There were, of course, some concerns, and critical or constructive feedback.Confusion about what “Community” meansNot everyone was clear on which posts should actually go in the section; the outline I gave before was unclear. I’ve updated the guidance I had originally given to Forum facilitators and moderators (based on their feedback and just sitting down and trying to get a more systematic categorization), and I’m sharing the updated version here.Concerns that important conversations would be missedSome people expressed a worry that having a section like this would hide discussions that the community needs to have, like processing the FTX collapse and what we should learn from it, or how we can create a more welcoming environment for different groups of people. We were also pretty worried about this; I think this was the thing that I thought was most likely going to get us to reverse the change.However, the worry doesn’t seem to be realizing. It looks like engagement hasn’t fallen significantly on Community posts relative to other posts, and important conversations have been continuing. Some recent posts on difficult community topics have had lots of comments (the discussion of the recent TIME article currently has 159 comments), and Community posts have...]]>
Lizka https://forum.effectivealtruism.org/posts/sLB6tEovv7jDkEghG/design-changes-and-the-community-section-forum-update-march Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Design changes & the community section (Forum update March 2023), published by Lizka on March 21, 2023 on The Effective Altruism Forum.We’re sharing the results of the Community-Frontpage test, and we’ve released a Forum redesign — I discuss it below. I also outline some things we’re thinking about right now.As always, we’re also interested in feedback on these changes. We’d be really grateful if you filled out this (very quick) survey on the redesign that might help give us a sense of what people are thinking. You can also comment on this post with your thoughts or reach out to forum@centreforeffectivealtruism.org.Results of the Community-Frontpage test & more thoughts on community postsA little over a month ago, we announced a test: we’d be trying out separating “Community” posts from other kinds by creating a “Community” section on the Frontpage of the Forum.We’ve gotten a lot of feedback; we believe that the change was an improvement, so we’re planning on keeping it for the near future, with some modifications. We might still make some changes like switching from a section to tabs, especially depending on new feedback and on how related projects go.OutcomesInformation we gatheredWe sourced user feedback from different places:User interviews with people at EA Global and elsewhere (at least 20 interviews, different people doing the interviewing)Responses to a quick survey on how we can improve discussions on the Forum (45 responses)Metrics (mostly used as sanity checks)Engagement with the Forum overall (engagement on the Forum is 7% lower than the previous month, which is within the bounds we set ourselves and there’s a lot of fluctuation, so we’re just going to keep monitoring this)Engagement with Community posts (it dropped 8%, which may just be tracking overall engagement, and again, we’re going to keep monitoring it)There are still important & useful Community posts every week (subjective assessment)(there are)The team’s experience with the section, and whether we thought the change was positive overallOutcomes and themes:The responses we got were overwhelmingly positive about the change. People told us directly (in user interviews and in passing) that the change was improving their experience on the Forum. We also personally thought that the change had gone very well — likely better than we’d expected as a ~70% best outcome.And here are the results from the survey:The metrics we're tracking (listed above) were within the bounds we’d set, and we were mostly using them as sanity checks.There were, of course, some concerns, and critical or constructive feedback.Confusion about what “Community” meansNot everyone was clear on which posts should actually go in the section; the outline I gave before was unclear. I’ve updated the guidance I had originally given to Forum facilitators and moderators (based on their feedback and just sitting down and trying to get a more systematic categorization), and I’m sharing the updated version here.Concerns that important conversations would be missedSome people expressed a worry that having a section like this would hide discussions that the community needs to have, like processing the FTX collapse and what we should learn from it, or how we can create a more welcoming environment for different groups of people. We were also pretty worried about this; I think this was the thing that I thought was most likely going to get us to reverse the change.However, the worry doesn’t seem to be realizing. It looks like engagement hasn’t fallen significantly on Community posts relative to other posts, and important conversations have been continuing. Some recent posts on difficult community topics have had lots of comments (the discussion of the recent TIME article currently has 159 comments), and Community posts have...]]>
Tue, 21 Mar 2023 22:44:41 +0000 EA - Design changes and the community section (Forum update March 2023) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Design changes & the community section (Forum update March 2023), published by Lizka on March 21, 2023 on The Effective Altruism Forum.We’re sharing the results of the Community-Frontpage test, and we’ve released a Forum redesign — I discuss it below. I also outline some things we’re thinking about right now.As always, we’re also interested in feedback on these changes. We’d be really grateful if you filled out this (very quick) survey on the redesign that might help give us a sense of what people are thinking. You can also comment on this post with your thoughts or reach out to forum@centreforeffectivealtruism.org.Results of the Community-Frontpage test & more thoughts on community postsA little over a month ago, we announced a test: we’d be trying out separating “Community” posts from other kinds by creating a “Community” section on the Frontpage of the Forum.We’ve gotten a lot of feedback; we believe that the change was an improvement, so we’re planning on keeping it for the near future, with some modifications. We might still make some changes like switching from a section to tabs, especially depending on new feedback and on how related projects go.OutcomesInformation we gatheredWe sourced user feedback from different places:User interviews with people at EA Global and elsewhere (at least 20 interviews, different people doing the interviewing)Responses to a quick survey on how we can improve discussions on the Forum (45 responses)Metrics (mostly used as sanity checks)Engagement with the Forum overall (engagement on the Forum is 7% lower than the previous month, which is within the bounds we set ourselves and there’s a lot of fluctuation, so we’re just going to keep monitoring this)Engagement with Community posts (it dropped 8%, which may just be tracking overall engagement, and again, we’re going to keep monitoring it)There are still important & useful Community posts every week (subjective assessment)(there are)The team’s experience with the section, and whether we thought the change was positive overallOutcomes and themes:The responses we got were overwhelmingly positive about the change. People told us directly (in user interviews and in passing) that the change was improving their experience on the Forum. We also personally thought that the change had gone very well — likely better than we’d expected as a ~70% best outcome.And here are the results from the survey:The metrics we're tracking (listed above) were within the bounds we’d set, and we were mostly using them as sanity checks.There were, of course, some concerns, and critical or constructive feedback.Confusion about what “Community” meansNot everyone was clear on which posts should actually go in the section; the outline I gave before was unclear. I’ve updated the guidance I had originally given to Forum facilitators and moderators (based on their feedback and just sitting down and trying to get a more systematic categorization), and I’m sharing the updated version here.Concerns that important conversations would be missedSome people expressed a worry that having a section like this would hide discussions that the community needs to have, like processing the FTX collapse and what we should learn from it, or how we can create a more welcoming environment for different groups of people. We were also pretty worried about this; I think this was the thing that I thought was most likely going to get us to reverse the change.However, the worry doesn’t seem to be realizing. It looks like engagement hasn’t fallen significantly on Community posts relative to other posts, and important conversations have been continuing. Some recent posts on difficult community topics have had lots of comments (the discussion of the recent TIME article currently has 159 comments), and Community posts have...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Design changes & the community section (Forum update March 2023), published by Lizka on March 21, 2023 on The Effective Altruism Forum.We’re sharing the results of the Community-Frontpage test, and we’ve released a Forum redesign — I discuss it below. I also outline some things we’re thinking about right now.As always, we’re also interested in feedback on these changes. We’d be really grateful if you filled out this (very quick) survey on the redesign that might help give us a sense of what people are thinking. You can also comment on this post with your thoughts or reach out to forum@centreforeffectivealtruism.org.Results of the Community-Frontpage test & more thoughts on community postsA little over a month ago, we announced a test: we’d be trying out separating “Community” posts from other kinds by creating a “Community” section on the Frontpage of the Forum.We’ve gotten a lot of feedback; we believe that the change was an improvement, so we’re planning on keeping it for the near future, with some modifications. We might still make some changes like switching from a section to tabs, especially depending on new feedback and on how related projects go.OutcomesInformation we gatheredWe sourced user feedback from different places:User interviews with people at EA Global and elsewhere (at least 20 interviews, different people doing the interviewing)Responses to a quick survey on how we can improve discussions on the Forum (45 responses)Metrics (mostly used as sanity checks)Engagement with the Forum overall (engagement on the Forum is 7% lower than the previous month, which is within the bounds we set ourselves and there’s a lot of fluctuation, so we’re just going to keep monitoring this)Engagement with Community posts (it dropped 8%, which may just be tracking overall engagement, and again, we’re going to keep monitoring it)There are still important & useful Community posts every week (subjective assessment)(there are)The team’s experience with the section, and whether we thought the change was positive overallOutcomes and themes:The responses we got were overwhelmingly positive about the change. People told us directly (in user interviews and in passing) that the change was improving their experience on the Forum. We also personally thought that the change had gone very well — likely better than we’d expected as a ~70% best outcome.And here are the results from the survey:The metrics we're tracking (listed above) were within the bounds we’d set, and we were mostly using them as sanity checks.There were, of course, some concerns, and critical or constructive feedback.Confusion about what “Community” meansNot everyone was clear on which posts should actually go in the section; the outline I gave before was unclear. I’ve updated the guidance I had originally given to Forum facilitators and moderators (based on their feedback and just sitting down and trying to get a more systematic categorization), and I’m sharing the updated version here.Concerns that important conversations would be missedSome people expressed a worry that having a section like this would hide discussions that the community needs to have, like processing the FTX collapse and what we should learn from it, or how we can create a more welcoming environment for different groups of people. We were also pretty worried about this; I think this was the thing that I thought was most likely going to get us to reverse the change.However, the worry doesn’t seem to be realizing. It looks like engagement hasn’t fallen significantly on Community posts relative to other posts, and important conversations have been continuing. Some recent posts on difficult community topics have had lots of comments (the discussion of the recent TIME article currently has 159 comments), and Community posts have...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 12:04 None full 5309
CrmE6T5A8JhkxnRzw_NL_EA_EA EA - Future Matters #8: Bing Chat, AI labs on safety, and pausing Future Matters by Pablo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #8: Bing Chat, AI labs on safety, and pausing Future Matters, published by Pablo on March 21, 2023 on The Effective Altruism Forum.Future Matters is a newsletter about longtermism and existential risk. Each month we collect and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.A message to our readersThis issue marks one year since we started Future Matters. We’re taking this opportunity to reflect on the project and decide where to take it from here. We’ll soon share our thoughts about the future of the newsletter in a separate post, and will invite input from readers. In the meantime, we will be pausing new issues of Future Matters. Thank you for your support and readership over the last year!Featured researchAll things BingMicrosoft recently announced a significant partnership with OpenAI [see FM#7] and launched a beta version of a chatbot integrated with the Bing search engine. Reports of strange behavior quickly emerged. Kevin Roose, a technology columnist for the New York Times, had a disturbing conversation in which Bing Chat declared its love for him and described violent fantasies. Evan Hubinger collects some of the most egregious examples in Bing Chat is blatantly, aggressively misaligned. In one instance, Bing Chat finds a user’s tweets about the chatbot and threatens to exact revenge. In the LessWrong comments, Gwern speculates on why Bing Chat exhibits such different behavior to ChatGPT, despite apparently being based on a closely-related model. (Bing Chat was subsequently revealed to have been based on GPT-4).Holden Karnofsky asks What does Bing Chat tell us about AI risk? His answer is that it is not the sort of misaligned AI system we should be particularly worried about. When Bing Chat talks about plans to blackmail people or commit acts of violence, this isn’t evidence of it having developed malign, dangerous goals. Instead, it’s best understood as Bing acting out stories and characters it’s read before. This whole affair, however, is evidence of companies racing to deploy ever more powerful models in a bid to capture market share, with very little understanding of how they work and how they might fail. Most paths to AI catastrophe involve two elements: a powerful and dangerously misaligned AI system, and an AI company that builds and deploys it anyway. The Bing Chat affair doesn’t reveal much about the first element, but is a concerning reminder of how plausible the second is.Robert Long asks What to think when a language model tells you it's sentient []. When trying to infer what’s going on in other humans’ minds, we generally take their self-reports (e.g. saying “I am in pain”) as good evidence of their internal states. However, we shouldn’t take Bing Chat’s attestations (e.g. “I feel scared”) at face value; we have no good reason to think that they are a reliable guide to Bing’s inner mental life. LLMs are a bit like parrots: if a parrot says “I am sentient” then this isn’t good evidence that it is sentient. But nor is it good evidence that it isn’t — in fact, we have lots of other evidence that parrots are sentient. Whether current or future AI systems are sentient is a valid and important question, and Long is hopeful that we can make real progress on developing reliable techniques for getting evidence on these matters.Long was interviewed on AI consciousness, along with Nick Bostrom and David Chalmers, for Kevin Collier’s article, What is consciousness? ChatGPT and Advanced AI might define our answer [].How the major AI labs are thinking about safetyIn the last few weeks, we got more information about how the lead...]]>
Pablo https://forum.effectivealtruism.org/posts/CrmE6T5A8JhkxnRzw/future-matters-8-bing-chat-ai-labs-on-safety-and-pausing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #8: Bing Chat, AI labs on safety, and pausing Future Matters, published by Pablo on March 21, 2023 on The Effective Altruism Forum.Future Matters is a newsletter about longtermism and existential risk. Each month we collect and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.A message to our readersThis issue marks one year since we started Future Matters. We’re taking this opportunity to reflect on the project and decide where to take it from here. We’ll soon share our thoughts about the future of the newsletter in a separate post, and will invite input from readers. In the meantime, we will be pausing new issues of Future Matters. Thank you for your support and readership over the last year!Featured researchAll things BingMicrosoft recently announced a significant partnership with OpenAI [see FM#7] and launched a beta version of a chatbot integrated with the Bing search engine. Reports of strange behavior quickly emerged. Kevin Roose, a technology columnist for the New York Times, had a disturbing conversation in which Bing Chat declared its love for him and described violent fantasies. Evan Hubinger collects some of the most egregious examples in Bing Chat is blatantly, aggressively misaligned. In one instance, Bing Chat finds a user’s tweets about the chatbot and threatens to exact revenge. In the LessWrong comments, Gwern speculates on why Bing Chat exhibits such different behavior to ChatGPT, despite apparently being based on a closely-related model. (Bing Chat was subsequently revealed to have been based on GPT-4).Holden Karnofsky asks What does Bing Chat tell us about AI risk? His answer is that it is not the sort of misaligned AI system we should be particularly worried about. When Bing Chat talks about plans to blackmail people or commit acts of violence, this isn’t evidence of it having developed malign, dangerous goals. Instead, it’s best understood as Bing acting out stories and characters it’s read before. This whole affair, however, is evidence of companies racing to deploy ever more powerful models in a bid to capture market share, with very little understanding of how they work and how they might fail. Most paths to AI catastrophe involve two elements: a powerful and dangerously misaligned AI system, and an AI company that builds and deploys it anyway. The Bing Chat affair doesn’t reveal much about the first element, but is a concerning reminder of how plausible the second is.Robert Long asks What to think when a language model tells you it's sentient []. When trying to infer what’s going on in other humans’ minds, we generally take their self-reports (e.g. saying “I am in pain”) as good evidence of their internal states. However, we shouldn’t take Bing Chat’s attestations (e.g. “I feel scared”) at face value; we have no good reason to think that they are a reliable guide to Bing’s inner mental life. LLMs are a bit like parrots: if a parrot says “I am sentient” then this isn’t good evidence that it is sentient. But nor is it good evidence that it isn’t — in fact, we have lots of other evidence that parrots are sentient. Whether current or future AI systems are sentient is a valid and important question, and Long is hopeful that we can make real progress on developing reliable techniques for getting evidence on these matters.Long was interviewed on AI consciousness, along with Nick Bostrom and David Chalmers, for Kevin Collier’s article, What is consciousness? ChatGPT and Advanced AI might define our answer [].How the major AI labs are thinking about safetyIn the last few weeks, we got more information about how the lead...]]>
Tue, 21 Mar 2023 22:33:02 +0000 EA - Future Matters #8: Bing Chat, AI labs on safety, and pausing Future Matters by Pablo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #8: Bing Chat, AI labs on safety, and pausing Future Matters, published by Pablo on March 21, 2023 on The Effective Altruism Forum.Future Matters is a newsletter about longtermism and existential risk. Each month we collect and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.A message to our readersThis issue marks one year since we started Future Matters. We’re taking this opportunity to reflect on the project and decide where to take it from here. We’ll soon share our thoughts about the future of the newsletter in a separate post, and will invite input from readers. In the meantime, we will be pausing new issues of Future Matters. Thank you for your support and readership over the last year!Featured researchAll things BingMicrosoft recently announced a significant partnership with OpenAI [see FM#7] and launched a beta version of a chatbot integrated with the Bing search engine. Reports of strange behavior quickly emerged. Kevin Roose, a technology columnist for the New York Times, had a disturbing conversation in which Bing Chat declared its love for him and described violent fantasies. Evan Hubinger collects some of the most egregious examples in Bing Chat is blatantly, aggressively misaligned. In one instance, Bing Chat finds a user’s tweets about the chatbot and threatens to exact revenge. In the LessWrong comments, Gwern speculates on why Bing Chat exhibits such different behavior to ChatGPT, despite apparently being based on a closely-related model. (Bing Chat was subsequently revealed to have been based on GPT-4).Holden Karnofsky asks What does Bing Chat tell us about AI risk? His answer is that it is not the sort of misaligned AI system we should be particularly worried about. When Bing Chat talks about plans to blackmail people or commit acts of violence, this isn’t evidence of it having developed malign, dangerous goals. Instead, it’s best understood as Bing acting out stories and characters it’s read before. This whole affair, however, is evidence of companies racing to deploy ever more powerful models in a bid to capture market share, with very little understanding of how they work and how they might fail. Most paths to AI catastrophe involve two elements: a powerful and dangerously misaligned AI system, and an AI company that builds and deploys it anyway. The Bing Chat affair doesn’t reveal much about the first element, but is a concerning reminder of how plausible the second is.Robert Long asks What to think when a language model tells you it's sentient []. When trying to infer what’s going on in other humans’ minds, we generally take their self-reports (e.g. saying “I am in pain”) as good evidence of their internal states. However, we shouldn’t take Bing Chat’s attestations (e.g. “I feel scared”) at face value; we have no good reason to think that they are a reliable guide to Bing’s inner mental life. LLMs are a bit like parrots: if a parrot says “I am sentient” then this isn’t good evidence that it is sentient. But nor is it good evidence that it isn’t — in fact, we have lots of other evidence that parrots are sentient. Whether current or future AI systems are sentient is a valid and important question, and Long is hopeful that we can make real progress on developing reliable techniques for getting evidence on these matters.Long was interviewed on AI consciousness, along with Nick Bostrom and David Chalmers, for Kevin Collier’s article, What is consciousness? ChatGPT and Advanced AI might define our answer [].How the major AI labs are thinking about safetyIn the last few weeks, we got more information about how the lead...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #8: Bing Chat, AI labs on safety, and pausing Future Matters, published by Pablo on March 21, 2023 on The Effective Altruism Forum.Future Matters is a newsletter about longtermism and existential risk. Each month we collect and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.A message to our readersThis issue marks one year since we started Future Matters. We’re taking this opportunity to reflect on the project and decide where to take it from here. We’ll soon share our thoughts about the future of the newsletter in a separate post, and will invite input from readers. In the meantime, we will be pausing new issues of Future Matters. Thank you for your support and readership over the last year!Featured researchAll things BingMicrosoft recently announced a significant partnership with OpenAI [see FM#7] and launched a beta version of a chatbot integrated with the Bing search engine. Reports of strange behavior quickly emerged. Kevin Roose, a technology columnist for the New York Times, had a disturbing conversation in which Bing Chat declared its love for him and described violent fantasies. Evan Hubinger collects some of the most egregious examples in Bing Chat is blatantly, aggressively misaligned. In one instance, Bing Chat finds a user’s tweets about the chatbot and threatens to exact revenge. In the LessWrong comments, Gwern speculates on why Bing Chat exhibits such different behavior to ChatGPT, despite apparently being based on a closely-related model. (Bing Chat was subsequently revealed to have been based on GPT-4).Holden Karnofsky asks What does Bing Chat tell us about AI risk? His answer is that it is not the sort of misaligned AI system we should be particularly worried about. When Bing Chat talks about plans to blackmail people or commit acts of violence, this isn’t evidence of it having developed malign, dangerous goals. Instead, it’s best understood as Bing acting out stories and characters it’s read before. This whole affair, however, is evidence of companies racing to deploy ever more powerful models in a bid to capture market share, with very little understanding of how they work and how they might fail. Most paths to AI catastrophe involve two elements: a powerful and dangerously misaligned AI system, and an AI company that builds and deploys it anyway. The Bing Chat affair doesn’t reveal much about the first element, but is a concerning reminder of how plausible the second is.Robert Long asks What to think when a language model tells you it's sentient []. When trying to infer what’s going on in other humans’ minds, we generally take their self-reports (e.g. saying “I am in pain”) as good evidence of their internal states. However, we shouldn’t take Bing Chat’s attestations (e.g. “I feel scared”) at face value; we have no good reason to think that they are a reliable guide to Bing’s inner mental life. LLMs are a bit like parrots: if a parrot says “I am sentient” then this isn’t good evidence that it is sentient. But nor is it good evidence that it isn’t — in fact, we have lots of other evidence that parrots are sentient. Whether current or future AI systems are sentient is a valid and important question, and Long is hopeful that we can make real progress on developing reliable techniques for getting evidence on these matters.Long was interviewed on AI consciousness, along with Nick Bostrom and David Chalmers, for Kevin Collier’s article, What is consciousness? ChatGPT and Advanced AI might define our answer [].How the major AI labs are thinking about safetyIn the last few weeks, we got more information about how the lead...]]>
Pablo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 37:24 None full 5310
fXkCcsyF8M6dp6sXx_NL_EA_EA EA - Where I'm at with AI risk: convinced of danger but not (yet) of doom by Amber Dawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where I'm at with AI risk: convinced of danger but not (yet) of doom, published by Amber Dawn on March 21, 2023 on The Effective Altruism Forum.[content: discussing AI doom. I'm sceptical about AI doom, but if dwelling on this is anxiety-inducing for you, consider skipping this post]I’m a cause-agnostic (or more accurately ‘cause-confused’) EA with a non-technical background. A lot of my friends and writing clients are extremely worried about existential risks from AI. Many believe that humanity is more likely than not to go extinct due to AI within my lifetime.I realised that I was confused about this, so I set myself the goal of understanding the case for AI doom, and my own scepticisms, better. I did this by (very limited!) reading, writing down my thoughts, and talking to friends and strangers (some of whom I recruited from the Bountied Rationality Facebook group - if any of you are reading, thanks again!) Tl;dr: I think there are good reasons to worry about extremely powerful AI, but I don’t yet understand why people think superintelligent AI is highly likely to end up killing everyone by default.Why I'm writing thisI’m writing up my current beliefs and confusions in the hope that readers will be able to correct my misconceptions, clarify things I’m confused about, and link me to helpful resources. I also personally enjoy reading other EAs’ reflections about cause areas: e.g. Saulius' post on wild animal welfare, or Nuño's sceptical post about AI risk. This post is far less well-informed, but I found those posts valuable because of their reasoning transparency more than their authors' expertise. I'd love to read more posts by ‘layperson’ EAs talking about their personal cause prioritisation.I also think that 'confusion' is an underrepresented intellectual position. At EAGx Cambridge, Yulia Ponomarenko led a great workshop on ‘Asking daft questions with confidence’. We talked about how EAs are sometimes unwilling to ask questions that would make them less confused for fear that the questions are too basic, silly, “dumb”, or about something they're already expected to know.This could create a false appearance of consensus about cause areas or world models. People who are convinced by the case for AI risk will naturally be very vocal, as will those who are confidently sceptical. However, people who are unsure or confused may be unwilling to share their thoughts, either because they're afraid that others will look down on them for not already understanding the case, or just because most people are less motivated to write about their vague confusions than their strong opinions. So I’m partly writing this as representation for the ‘generally unsure’ point of view.Some caveats: there’s a lot I haven’t read, including many basic resources. And my understanding of the technical side of AI (maths, programming) is extremely limited. Technical friends often say ‘you don’t need to understand the technical details about AI to understand the arguments for x-risk from AI’. But when I talk and think about these questions, it subjectively feels like I run up again a lack of technical understanding quite often.Where I’m at with AI safetyTl;dr: I'm concerned about certain risks from misaligned or misused AI, but I don’t understand the arguments that AI will, by default and in absence of a specific alignment technique, be so misaligned as to cause human extinction (or something similarly bad.)Convincing (to me) arguments for why AI could be dangerousHumans could use AI to do bad things more effectivelyFor example, politicians could use AI to devastatingly make war on their enemies, or CEOs could use it to increase their profits in harmful or reckless ways. This seems like a good reason to regulate AI development heavily and/or to democratise AI control, so that it’s har...]]>
Amber Dawn https://forum.effectivealtruism.org/posts/fXkCcsyF8M6dp6sXx/where-i-m-at-with-ai-risk-convinced-of-danger-but-not-yet-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where I'm at with AI risk: convinced of danger but not (yet) of doom, published by Amber Dawn on March 21, 2023 on The Effective Altruism Forum.[content: discussing AI doom. I'm sceptical about AI doom, but if dwelling on this is anxiety-inducing for you, consider skipping this post]I’m a cause-agnostic (or more accurately ‘cause-confused’) EA with a non-technical background. A lot of my friends and writing clients are extremely worried about existential risks from AI. Many believe that humanity is more likely than not to go extinct due to AI within my lifetime.I realised that I was confused about this, so I set myself the goal of understanding the case for AI doom, and my own scepticisms, better. I did this by (very limited!) reading, writing down my thoughts, and talking to friends and strangers (some of whom I recruited from the Bountied Rationality Facebook group - if any of you are reading, thanks again!) Tl;dr: I think there are good reasons to worry about extremely powerful AI, but I don’t yet understand why people think superintelligent AI is highly likely to end up killing everyone by default.Why I'm writing thisI’m writing up my current beliefs and confusions in the hope that readers will be able to correct my misconceptions, clarify things I’m confused about, and link me to helpful resources. I also personally enjoy reading other EAs’ reflections about cause areas: e.g. Saulius' post on wild animal welfare, or Nuño's sceptical post about AI risk. This post is far less well-informed, but I found those posts valuable because of their reasoning transparency more than their authors' expertise. I'd love to read more posts by ‘layperson’ EAs talking about their personal cause prioritisation.I also think that 'confusion' is an underrepresented intellectual position. At EAGx Cambridge, Yulia Ponomarenko led a great workshop on ‘Asking daft questions with confidence’. We talked about how EAs are sometimes unwilling to ask questions that would make them less confused for fear that the questions are too basic, silly, “dumb”, or about something they're already expected to know.This could create a false appearance of consensus about cause areas or world models. People who are convinced by the case for AI risk will naturally be very vocal, as will those who are confidently sceptical. However, people who are unsure or confused may be unwilling to share their thoughts, either because they're afraid that others will look down on them for not already understanding the case, or just because most people are less motivated to write about their vague confusions than their strong opinions. So I’m partly writing this as representation for the ‘generally unsure’ point of view.Some caveats: there’s a lot I haven’t read, including many basic resources. And my understanding of the technical side of AI (maths, programming) is extremely limited. Technical friends often say ‘you don’t need to understand the technical details about AI to understand the arguments for x-risk from AI’. But when I talk and think about these questions, it subjectively feels like I run up again a lack of technical understanding quite often.Where I’m at with AI safetyTl;dr: I'm concerned about certain risks from misaligned or misused AI, but I don’t understand the arguments that AI will, by default and in absence of a specific alignment technique, be so misaligned as to cause human extinction (or something similarly bad.)Convincing (to me) arguments for why AI could be dangerousHumans could use AI to do bad things more effectivelyFor example, politicians could use AI to devastatingly make war on their enemies, or CEOs could use it to increase their profits in harmful or reckless ways. This seems like a good reason to regulate AI development heavily and/or to democratise AI control, so that it’s har...]]>
Tue, 21 Mar 2023 17:12:45 +0000 EA - Where I'm at with AI risk: convinced of danger but not (yet) of doom by Amber Dawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where I'm at with AI risk: convinced of danger but not (yet) of doom, published by Amber Dawn on March 21, 2023 on The Effective Altruism Forum.[content: discussing AI doom. I'm sceptical about AI doom, but if dwelling on this is anxiety-inducing for you, consider skipping this post]I’m a cause-agnostic (or more accurately ‘cause-confused’) EA with a non-technical background. A lot of my friends and writing clients are extremely worried about existential risks from AI. Many believe that humanity is more likely than not to go extinct due to AI within my lifetime.I realised that I was confused about this, so I set myself the goal of understanding the case for AI doom, and my own scepticisms, better. I did this by (very limited!) reading, writing down my thoughts, and talking to friends and strangers (some of whom I recruited from the Bountied Rationality Facebook group - if any of you are reading, thanks again!) Tl;dr: I think there are good reasons to worry about extremely powerful AI, but I don’t yet understand why people think superintelligent AI is highly likely to end up killing everyone by default.Why I'm writing thisI’m writing up my current beliefs and confusions in the hope that readers will be able to correct my misconceptions, clarify things I’m confused about, and link me to helpful resources. I also personally enjoy reading other EAs’ reflections about cause areas: e.g. Saulius' post on wild animal welfare, or Nuño's sceptical post about AI risk. This post is far less well-informed, but I found those posts valuable because of their reasoning transparency more than their authors' expertise. I'd love to read more posts by ‘layperson’ EAs talking about their personal cause prioritisation.I also think that 'confusion' is an underrepresented intellectual position. At EAGx Cambridge, Yulia Ponomarenko led a great workshop on ‘Asking daft questions with confidence’. We talked about how EAs are sometimes unwilling to ask questions that would make them less confused for fear that the questions are too basic, silly, “dumb”, or about something they're already expected to know.This could create a false appearance of consensus about cause areas or world models. People who are convinced by the case for AI risk will naturally be very vocal, as will those who are confidently sceptical. However, people who are unsure or confused may be unwilling to share their thoughts, either because they're afraid that others will look down on them for not already understanding the case, or just because most people are less motivated to write about their vague confusions than their strong opinions. So I’m partly writing this as representation for the ‘generally unsure’ point of view.Some caveats: there’s a lot I haven’t read, including many basic resources. And my understanding of the technical side of AI (maths, programming) is extremely limited. Technical friends often say ‘you don’t need to understand the technical details about AI to understand the arguments for x-risk from AI’. But when I talk and think about these questions, it subjectively feels like I run up again a lack of technical understanding quite often.Where I’m at with AI safetyTl;dr: I'm concerned about certain risks from misaligned or misused AI, but I don’t understand the arguments that AI will, by default and in absence of a specific alignment technique, be so misaligned as to cause human extinction (or something similarly bad.)Convincing (to me) arguments for why AI could be dangerousHumans could use AI to do bad things more effectivelyFor example, politicians could use AI to devastatingly make war on their enemies, or CEOs could use it to increase their profits in harmful or reckless ways. This seems like a good reason to regulate AI development heavily and/or to democratise AI control, so that it’s har...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where I'm at with AI risk: convinced of danger but not (yet) of doom, published by Amber Dawn on March 21, 2023 on The Effective Altruism Forum.[content: discussing AI doom. I'm sceptical about AI doom, but if dwelling on this is anxiety-inducing for you, consider skipping this post]I’m a cause-agnostic (or more accurately ‘cause-confused’) EA with a non-technical background. A lot of my friends and writing clients are extremely worried about existential risks from AI. Many believe that humanity is more likely than not to go extinct due to AI within my lifetime.I realised that I was confused about this, so I set myself the goal of understanding the case for AI doom, and my own scepticisms, better. I did this by (very limited!) reading, writing down my thoughts, and talking to friends and strangers (some of whom I recruited from the Bountied Rationality Facebook group - if any of you are reading, thanks again!) Tl;dr: I think there are good reasons to worry about extremely powerful AI, but I don’t yet understand why people think superintelligent AI is highly likely to end up killing everyone by default.Why I'm writing thisI’m writing up my current beliefs and confusions in the hope that readers will be able to correct my misconceptions, clarify things I’m confused about, and link me to helpful resources. I also personally enjoy reading other EAs’ reflections about cause areas: e.g. Saulius' post on wild animal welfare, or Nuño's sceptical post about AI risk. This post is far less well-informed, but I found those posts valuable because of their reasoning transparency more than their authors' expertise. I'd love to read more posts by ‘layperson’ EAs talking about their personal cause prioritisation.I also think that 'confusion' is an underrepresented intellectual position. At EAGx Cambridge, Yulia Ponomarenko led a great workshop on ‘Asking daft questions with confidence’. We talked about how EAs are sometimes unwilling to ask questions that would make them less confused for fear that the questions are too basic, silly, “dumb”, or about something they're already expected to know.This could create a false appearance of consensus about cause areas or world models. People who are convinced by the case for AI risk will naturally be very vocal, as will those who are confidently sceptical. However, people who are unsure or confused may be unwilling to share their thoughts, either because they're afraid that others will look down on them for not already understanding the case, or just because most people are less motivated to write about their vague confusions than their strong opinions. So I’m partly writing this as representation for the ‘generally unsure’ point of view.Some caveats: there’s a lot I haven’t read, including many basic resources. And my understanding of the technical side of AI (maths, programming) is extremely limited. Technical friends often say ‘you don’t need to understand the technical details about AI to understand the arguments for x-risk from AI’. But when I talk and think about these questions, it subjectively feels like I run up again a lack of technical understanding quite often.Where I’m at with AI safetyTl;dr: I'm concerned about certain risks from misaligned or misused AI, but I don’t understand the arguments that AI will, by default and in absence of a specific alignment technique, be so misaligned as to cause human extinction (or something similarly bad.)Convincing (to me) arguments for why AI could be dangerousHumans could use AI to do bad things more effectivelyFor example, politicians could use AI to devastatingly make war on their enemies, or CEOs could use it to increase their profits in harmful or reckless ways. This seems like a good reason to regulate AI development heavily and/or to democratise AI control, so that it’s har...]]>
Amber Dawn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:29 None full 5311
s6k9cKdX8c4nhH8qq_NL_EA_EA EA - Estimation for sanity checks by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Estimation for sanity checks, published by NunoSempere on March 21, 2023 on The Effective Altruism Forum.I feel very warmly about using relatively quick estimates to carry out sanity checks, i.e., to quickly check whether something is clearly off, whether some decision is clearly overdetermined, or whether someone is just bullshitting. This is in contrast to Fermi estimates, which aim to arrive at an estimate for a quantity of interest, and which I also feel warmly about but which aren’t the subject of this post. In this post, I explain why I like quantitative sanity checks so much, and I give some examples.Why I like this so muchI like this so much because:It is very defensible. There are some cached arguments against more quantified estimation, but sanity checking cuts through most—if not all—of them. “Oh, well, I just think that estimation has some really nice benefits in terms of sanity checking and catching bullshit, and in particular in terms of defending against scope insensitivity. And I think we are not even at the point where we are deploying enough estimation to catch all the mistakes that would be obvious in hindsight after we did some estimation” is both something I believe and also just a really nice motte to retreat when I am tired, don’t feel like defending a more ambitious estimation agenda, or don’t want to alienate someone socially by having an argument.It can be very cheap, a few minutes, a few Google searches. This means that you can practice quickly and build intuitions.They are useful, as we will see below.Some examplesHere are a few examples where I’ve found estimation to be useful for sanity-checking. I mention these because I think that the theoretical answer becomes stronger when paired with a few examples which display that dynamic in real life.Photo Patch FoundationThe Photo Patch Foundation is an organization which has received a small amount of funding from Open Philanthropy:Photo Patch has a website and an app that allows kids with incarcerated parents to send letters and pictures to their parents in prison for free. This diminishes barriers, helps families remain in touch, and reduces the number of children who have not communicated with their parents in weeks, months, or sometimes years.It takes little digging to figure out that their costs are $2.5/photo. If we take the AMF numbers at all seriously, it seems very likely that this is not a good deal. For example, for $2.5 you can deworm several kids in developing countries, or buy a bit more than one malaria net. Or, less intuitively, trading 0.05% chance of saving a statistical life for sending a photo to a prisoner seems like a pretty bad trade–0.05% of a statistical life corresponds to 0.05/100 × 70 years × 365 = 12 statistical days.One can then do somewhat more elaborate estimations about criminal justice reform.Sanity-checking that supply chain accountability has enough scaleAt some point in the past, I looked into supply chain accountability, a cause area related to improving how multinational corporations treat labor. One quick sanity check is, well, how many people does this affect? You can check, and per here1, Inditex—a retailer which owns brands like Zara, Pull&Bear, Massimo Dutti, etc.—employed 3M people in its supply chain, as of 2021.So scalability is large enough that this may warrant further analysis. One this simple sanity check is passed, one can then go on and do some more complex estimation about how cost-effective improving supply chain accountability is, like here.Sanity checking the cost-effectiveness of the EA WikiIn my analysis of the EA Wiki, I calculated how much the person behind the EA Wiki was being paid per word, and found that it was in the ballpark of other industries. If it had been egregiously low, my analysis could have been short...]]>
NunoSempere https://forum.effectivealtruism.org/posts/s6k9cKdX8c4nhH8qq/estimation-for-sanity-checks Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Estimation for sanity checks, published by NunoSempere on March 21, 2023 on The Effective Altruism Forum.I feel very warmly about using relatively quick estimates to carry out sanity checks, i.e., to quickly check whether something is clearly off, whether some decision is clearly overdetermined, or whether someone is just bullshitting. This is in contrast to Fermi estimates, which aim to arrive at an estimate for a quantity of interest, and which I also feel warmly about but which aren’t the subject of this post. In this post, I explain why I like quantitative sanity checks so much, and I give some examples.Why I like this so muchI like this so much because:It is very defensible. There are some cached arguments against more quantified estimation, but sanity checking cuts through most—if not all—of them. “Oh, well, I just think that estimation has some really nice benefits in terms of sanity checking and catching bullshit, and in particular in terms of defending against scope insensitivity. And I think we are not even at the point where we are deploying enough estimation to catch all the mistakes that would be obvious in hindsight after we did some estimation” is both something I believe and also just a really nice motte to retreat when I am tired, don’t feel like defending a more ambitious estimation agenda, or don’t want to alienate someone socially by having an argument.It can be very cheap, a few minutes, a few Google searches. This means that you can practice quickly and build intuitions.They are useful, as we will see below.Some examplesHere are a few examples where I’ve found estimation to be useful for sanity-checking. I mention these because I think that the theoretical answer becomes stronger when paired with a few examples which display that dynamic in real life.Photo Patch FoundationThe Photo Patch Foundation is an organization which has received a small amount of funding from Open Philanthropy:Photo Patch has a website and an app that allows kids with incarcerated parents to send letters and pictures to their parents in prison for free. This diminishes barriers, helps families remain in touch, and reduces the number of children who have not communicated with their parents in weeks, months, or sometimes years.It takes little digging to figure out that their costs are $2.5/photo. If we take the AMF numbers at all seriously, it seems very likely that this is not a good deal. For example, for $2.5 you can deworm several kids in developing countries, or buy a bit more than one malaria net. Or, less intuitively, trading 0.05% chance of saving a statistical life for sending a photo to a prisoner seems like a pretty bad trade–0.05% of a statistical life corresponds to 0.05/100 × 70 years × 365 = 12 statistical days.One can then do somewhat more elaborate estimations about criminal justice reform.Sanity-checking that supply chain accountability has enough scaleAt some point in the past, I looked into supply chain accountability, a cause area related to improving how multinational corporations treat labor. One quick sanity check is, well, how many people does this affect? You can check, and per here1, Inditex—a retailer which owns brands like Zara, Pull&Bear, Massimo Dutti, etc.—employed 3M people in its supply chain, as of 2021.So scalability is large enough that this may warrant further analysis. One this simple sanity check is passed, one can then go on and do some more complex estimation about how cost-effective improving supply chain accountability is, like here.Sanity checking the cost-effectiveness of the EA WikiIn my analysis of the EA Wiki, I calculated how much the person behind the EA Wiki was being paid per word, and found that it was in the ballpark of other industries. If it had been egregiously low, my analysis could have been short...]]>
Tue, 21 Mar 2023 12:04:45 +0000 EA - Estimation for sanity checks by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Estimation for sanity checks, published by NunoSempere on March 21, 2023 on The Effective Altruism Forum.I feel very warmly about using relatively quick estimates to carry out sanity checks, i.e., to quickly check whether something is clearly off, whether some decision is clearly overdetermined, or whether someone is just bullshitting. This is in contrast to Fermi estimates, which aim to arrive at an estimate for a quantity of interest, and which I also feel warmly about but which aren’t the subject of this post. In this post, I explain why I like quantitative sanity checks so much, and I give some examples.Why I like this so muchI like this so much because:It is very defensible. There are some cached arguments against more quantified estimation, but sanity checking cuts through most—if not all—of them. “Oh, well, I just think that estimation has some really nice benefits in terms of sanity checking and catching bullshit, and in particular in terms of defending against scope insensitivity. And I think we are not even at the point where we are deploying enough estimation to catch all the mistakes that would be obvious in hindsight after we did some estimation” is both something I believe and also just a really nice motte to retreat when I am tired, don’t feel like defending a more ambitious estimation agenda, or don’t want to alienate someone socially by having an argument.It can be very cheap, a few minutes, a few Google searches. This means that you can practice quickly and build intuitions.They are useful, as we will see below.Some examplesHere are a few examples where I’ve found estimation to be useful for sanity-checking. I mention these because I think that the theoretical answer becomes stronger when paired with a few examples which display that dynamic in real life.Photo Patch FoundationThe Photo Patch Foundation is an organization which has received a small amount of funding from Open Philanthropy:Photo Patch has a website and an app that allows kids with incarcerated parents to send letters and pictures to their parents in prison for free. This diminishes barriers, helps families remain in touch, and reduces the number of children who have not communicated with their parents in weeks, months, or sometimes years.It takes little digging to figure out that their costs are $2.5/photo. If we take the AMF numbers at all seriously, it seems very likely that this is not a good deal. For example, for $2.5 you can deworm several kids in developing countries, or buy a bit more than one malaria net. Or, less intuitively, trading 0.05% chance of saving a statistical life for sending a photo to a prisoner seems like a pretty bad trade–0.05% of a statistical life corresponds to 0.05/100 × 70 years × 365 = 12 statistical days.One can then do somewhat more elaborate estimations about criminal justice reform.Sanity-checking that supply chain accountability has enough scaleAt some point in the past, I looked into supply chain accountability, a cause area related to improving how multinational corporations treat labor. One quick sanity check is, well, how many people does this affect? You can check, and per here1, Inditex—a retailer which owns brands like Zara, Pull&Bear, Massimo Dutti, etc.—employed 3M people in its supply chain, as of 2021.So scalability is large enough that this may warrant further analysis. One this simple sanity check is passed, one can then go on and do some more complex estimation about how cost-effective improving supply chain accountability is, like here.Sanity checking the cost-effectiveness of the EA WikiIn my analysis of the EA Wiki, I calculated how much the person behind the EA Wiki was being paid per word, and found that it was in the ballpark of other industries. If it had been egregiously low, my analysis could have been short...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Estimation for sanity checks, published by NunoSempere on March 21, 2023 on The Effective Altruism Forum.I feel very warmly about using relatively quick estimates to carry out sanity checks, i.e., to quickly check whether something is clearly off, whether some decision is clearly overdetermined, or whether someone is just bullshitting. This is in contrast to Fermi estimates, which aim to arrive at an estimate for a quantity of interest, and which I also feel warmly about but which aren’t the subject of this post. In this post, I explain why I like quantitative sanity checks so much, and I give some examples.Why I like this so muchI like this so much because:It is very defensible. There are some cached arguments against more quantified estimation, but sanity checking cuts through most—if not all—of them. “Oh, well, I just think that estimation has some really nice benefits in terms of sanity checking and catching bullshit, and in particular in terms of defending against scope insensitivity. And I think we are not even at the point where we are deploying enough estimation to catch all the mistakes that would be obvious in hindsight after we did some estimation” is both something I believe and also just a really nice motte to retreat when I am tired, don’t feel like defending a more ambitious estimation agenda, or don’t want to alienate someone socially by having an argument.It can be very cheap, a few minutes, a few Google searches. This means that you can practice quickly and build intuitions.They are useful, as we will see below.Some examplesHere are a few examples where I’ve found estimation to be useful for sanity-checking. I mention these because I think that the theoretical answer becomes stronger when paired with a few examples which display that dynamic in real life.Photo Patch FoundationThe Photo Patch Foundation is an organization which has received a small amount of funding from Open Philanthropy:Photo Patch has a website and an app that allows kids with incarcerated parents to send letters and pictures to their parents in prison for free. This diminishes barriers, helps families remain in touch, and reduces the number of children who have not communicated with their parents in weeks, months, or sometimes years.It takes little digging to figure out that their costs are $2.5/photo. If we take the AMF numbers at all seriously, it seems very likely that this is not a good deal. For example, for $2.5 you can deworm several kids in developing countries, or buy a bit more than one malaria net. Or, less intuitively, trading 0.05% chance of saving a statistical life for sending a photo to a prisoner seems like a pretty bad trade–0.05% of a statistical life corresponds to 0.05/100 × 70 years × 365 = 12 statistical days.One can then do somewhat more elaborate estimations about criminal justice reform.Sanity-checking that supply chain accountability has enough scaleAt some point in the past, I looked into supply chain accountability, a cause area related to improving how multinational corporations treat labor. One quick sanity check is, well, how many people does this affect? You can check, and per here1, Inditex—a retailer which owns brands like Zara, Pull&Bear, Massimo Dutti, etc.—employed 3M people in its supply chain, as of 2021.So scalability is large enough that this may warrant further analysis. One this simple sanity check is passed, one can then go on and do some more complex estimation about how cost-effective improving supply chain accountability is, like here.Sanity checking the cost-effectiveness of the EA WikiIn my analysis of the EA Wiki, I calculated how much the person behind the EA Wiki was being paid per word, and found that it was in the ballpark of other industries. If it had been egregiously low, my analysis could have been short...]]>
NunoSempere https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:01 None full 5313
46tXkg838EZ6uie45_NL_EA_EA EA - My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" by Quintin Pope Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Objections to "We’re All Gonna Die with Eliezer Yudkowsky", published by Quintin Pope on March 21, 2023 on The Effective Altruism Forum.Note: manually cross-posted from LessWrong. See here for discussion on LW.IntroductionI recently watched Eliezer Yudkowsky's appearance on the Bankless podcast, where he argued that AI was nigh-certain to end humanity. Since the podcast, some commentators have offered pushback against the doom conclusion. However, one sentiment I saw was that optimists tended not to engage with the specific arguments pessimists like Yudkowsky offered.Economist Robin Hanson points out that this pattern is very common for small groups which hold counterintuitive beliefs: insiders develop their own internal language, which skeptical outsiders usually don't bother to learn. Outsiders then make objections that focus on broad arguments against the belief's plausibility, rather than objections that focus on specific insider arguments.As an AI "alignment insider" whose current estimate of doom is around 5%, I wrote this post to explain some of my many objections to Yudkowsky's specific arguments. I've split this post into chronologically ordered segments of the podcast in which Yudkowsky makes one or more claims with which I particularly disagree. All bulleted points correspond to specific claims by Yudkowsky, and I follow each bullet point with text that explains my objections to the claims in question.I have my own view of alignment research: shard theory, which focuses on understanding how human values form, and on how we might guide a similar process of value formation in AI systems.I think that human value formation is not that complex, and does not rely on principles very different from those which underlie the current deep learning paradigm. Most of the arguments you're about to see from me are less:I think I know of a fundamentally new paradigm that can fix the issues Yudkowsky is pointing at.and more:Here's why I don't agree with Yudkowsky's arguments that alignment is impossible in the current paradigm.My objectionsWill current approaches scale to AGI?Yudkowsky apparently thinks not, and that the techniques driving current state of the art advances, by which I think he means the mix of generative pretraining + small amounts of reinforcement learning such as with ChatGPT, aren't reliable enough for significant economic contributions. However, he also thinks that the current influx of money might stumble upon something that does work really well, which will end the world shortly thereafter.I'm a lot more bullish on the current paradigm. People have tried lots and lots of approaches to getting good performance out of computers, including lots of "scary seeming" approaches such as:Meta-learning over training processes. I.e., using gradient descent over learning curves, directly optimizing neural networks to learn more quickly.Teaching neural networks to directly modify themselves by giving them edit access to their own weights.Training learned optimizers - neural networks that learn to optimize other neural networks - and having those learned optimizers optimize themselves.Using program search to find more efficient optimizers.Using simulated evolution to find more efficient architectures.Using efficient second-order corrections to gradient descent's approximate optimization process.Tried applying biologically plausible optimization algorithms inspired by biological neurons to training neural networks.Adding learned internal optimizers (different from the ones hypothesized in Risks from Learned Optimization) as neural network layers.Having language models rewrite their own training data, and improve the quality of that training data, to make themselves better at a given task.Having language models devise their own programming...]]>
Quintin Pope https://forum.effectivealtruism.org/posts/46tXkg838EZ6uie45/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Objections to "We’re All Gonna Die with Eliezer Yudkowsky", published by Quintin Pope on March 21, 2023 on The Effective Altruism Forum.Note: manually cross-posted from LessWrong. See here for discussion on LW.IntroductionI recently watched Eliezer Yudkowsky's appearance on the Bankless podcast, where he argued that AI was nigh-certain to end humanity. Since the podcast, some commentators have offered pushback against the doom conclusion. However, one sentiment I saw was that optimists tended not to engage with the specific arguments pessimists like Yudkowsky offered.Economist Robin Hanson points out that this pattern is very common for small groups which hold counterintuitive beliefs: insiders develop their own internal language, which skeptical outsiders usually don't bother to learn. Outsiders then make objections that focus on broad arguments against the belief's plausibility, rather than objections that focus on specific insider arguments.As an AI "alignment insider" whose current estimate of doom is around 5%, I wrote this post to explain some of my many objections to Yudkowsky's specific arguments. I've split this post into chronologically ordered segments of the podcast in which Yudkowsky makes one or more claims with which I particularly disagree. All bulleted points correspond to specific claims by Yudkowsky, and I follow each bullet point with text that explains my objections to the claims in question.I have my own view of alignment research: shard theory, which focuses on understanding how human values form, and on how we might guide a similar process of value formation in AI systems.I think that human value formation is not that complex, and does not rely on principles very different from those which underlie the current deep learning paradigm. Most of the arguments you're about to see from me are less:I think I know of a fundamentally new paradigm that can fix the issues Yudkowsky is pointing at.and more:Here's why I don't agree with Yudkowsky's arguments that alignment is impossible in the current paradigm.My objectionsWill current approaches scale to AGI?Yudkowsky apparently thinks not, and that the techniques driving current state of the art advances, by which I think he means the mix of generative pretraining + small amounts of reinforcement learning such as with ChatGPT, aren't reliable enough for significant economic contributions. However, he also thinks that the current influx of money might stumble upon something that does work really well, which will end the world shortly thereafter.I'm a lot more bullish on the current paradigm. People have tried lots and lots of approaches to getting good performance out of computers, including lots of "scary seeming" approaches such as:Meta-learning over training processes. I.e., using gradient descent over learning curves, directly optimizing neural networks to learn more quickly.Teaching neural networks to directly modify themselves by giving them edit access to their own weights.Training learned optimizers - neural networks that learn to optimize other neural networks - and having those learned optimizers optimize themselves.Using program search to find more efficient optimizers.Using simulated evolution to find more efficient architectures.Using efficient second-order corrections to gradient descent's approximate optimization process.Tried applying biologically plausible optimization algorithms inspired by biological neurons to training neural networks.Adding learned internal optimizers (different from the ones hypothesized in Risks from Learned Optimization) as neural network layers.Having language models rewrite their own training data, and improve the quality of that training data, to make themselves better at a given task.Having language models devise their own programming...]]>
Tue, 21 Mar 2023 06:08:17 +0000 EA - My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" by Quintin Pope Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Objections to "We’re All Gonna Die with Eliezer Yudkowsky", published by Quintin Pope on March 21, 2023 on The Effective Altruism Forum.Note: manually cross-posted from LessWrong. See here for discussion on LW.IntroductionI recently watched Eliezer Yudkowsky's appearance on the Bankless podcast, where he argued that AI was nigh-certain to end humanity. Since the podcast, some commentators have offered pushback against the doom conclusion. However, one sentiment I saw was that optimists tended not to engage with the specific arguments pessimists like Yudkowsky offered.Economist Robin Hanson points out that this pattern is very common for small groups which hold counterintuitive beliefs: insiders develop their own internal language, which skeptical outsiders usually don't bother to learn. Outsiders then make objections that focus on broad arguments against the belief's plausibility, rather than objections that focus on specific insider arguments.As an AI "alignment insider" whose current estimate of doom is around 5%, I wrote this post to explain some of my many objections to Yudkowsky's specific arguments. I've split this post into chronologically ordered segments of the podcast in which Yudkowsky makes one or more claims with which I particularly disagree. All bulleted points correspond to specific claims by Yudkowsky, and I follow each bullet point with text that explains my objections to the claims in question.I have my own view of alignment research: shard theory, which focuses on understanding how human values form, and on how we might guide a similar process of value formation in AI systems.I think that human value formation is not that complex, and does not rely on principles very different from those which underlie the current deep learning paradigm. Most of the arguments you're about to see from me are less:I think I know of a fundamentally new paradigm that can fix the issues Yudkowsky is pointing at.and more:Here's why I don't agree with Yudkowsky's arguments that alignment is impossible in the current paradigm.My objectionsWill current approaches scale to AGI?Yudkowsky apparently thinks not, and that the techniques driving current state of the art advances, by which I think he means the mix of generative pretraining + small amounts of reinforcement learning such as with ChatGPT, aren't reliable enough for significant economic contributions. However, he also thinks that the current influx of money might stumble upon something that does work really well, which will end the world shortly thereafter.I'm a lot more bullish on the current paradigm. People have tried lots and lots of approaches to getting good performance out of computers, including lots of "scary seeming" approaches such as:Meta-learning over training processes. I.e., using gradient descent over learning curves, directly optimizing neural networks to learn more quickly.Teaching neural networks to directly modify themselves by giving them edit access to their own weights.Training learned optimizers - neural networks that learn to optimize other neural networks - and having those learned optimizers optimize themselves.Using program search to find more efficient optimizers.Using simulated evolution to find more efficient architectures.Using efficient second-order corrections to gradient descent's approximate optimization process.Tried applying biologically plausible optimization algorithms inspired by biological neurons to training neural networks.Adding learned internal optimizers (different from the ones hypothesized in Risks from Learned Optimization) as neural network layers.Having language models rewrite their own training data, and improve the quality of that training data, to make themselves better at a given task.Having language models devise their own programming...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Objections to "We’re All Gonna Die with Eliezer Yudkowsky", published by Quintin Pope on March 21, 2023 on The Effective Altruism Forum.Note: manually cross-posted from LessWrong. See here for discussion on LW.IntroductionI recently watched Eliezer Yudkowsky's appearance on the Bankless podcast, where he argued that AI was nigh-certain to end humanity. Since the podcast, some commentators have offered pushback against the doom conclusion. However, one sentiment I saw was that optimists tended not to engage with the specific arguments pessimists like Yudkowsky offered.Economist Robin Hanson points out that this pattern is very common for small groups which hold counterintuitive beliefs: insiders develop their own internal language, which skeptical outsiders usually don't bother to learn. Outsiders then make objections that focus on broad arguments against the belief's plausibility, rather than objections that focus on specific insider arguments.As an AI "alignment insider" whose current estimate of doom is around 5%, I wrote this post to explain some of my many objections to Yudkowsky's specific arguments. I've split this post into chronologically ordered segments of the podcast in which Yudkowsky makes one or more claims with which I particularly disagree. All bulleted points correspond to specific claims by Yudkowsky, and I follow each bullet point with text that explains my objections to the claims in question.I have my own view of alignment research: shard theory, which focuses on understanding how human values form, and on how we might guide a similar process of value formation in AI systems.I think that human value formation is not that complex, and does not rely on principles very different from those which underlie the current deep learning paradigm. Most of the arguments you're about to see from me are less:I think I know of a fundamentally new paradigm that can fix the issues Yudkowsky is pointing at.and more:Here's why I don't agree with Yudkowsky's arguments that alignment is impossible in the current paradigm.My objectionsWill current approaches scale to AGI?Yudkowsky apparently thinks not, and that the techniques driving current state of the art advances, by which I think he means the mix of generative pretraining + small amounts of reinforcement learning such as with ChatGPT, aren't reliable enough for significant economic contributions. However, he also thinks that the current influx of money might stumble upon something that does work really well, which will end the world shortly thereafter.I'm a lot more bullish on the current paradigm. People have tried lots and lots of approaches to getting good performance out of computers, including lots of "scary seeming" approaches such as:Meta-learning over training processes. I.e., using gradient descent over learning curves, directly optimizing neural networks to learn more quickly.Teaching neural networks to directly modify themselves by giving them edit access to their own weights.Training learned optimizers - neural networks that learn to optimize other neural networks - and having those learned optimizers optimize themselves.Using program search to find more efficient optimizers.Using simulated evolution to find more efficient architectures.Using efficient second-order corrections to gradient descent's approximate optimization process.Tried applying biologically plausible optimization algorithms inspired by biological neurons to training neural networks.Adding learned internal optimizers (different from the ones hypothesized in Risks from Learned Optimization) as neural network layers.Having language models rewrite their own training data, and improve the quality of that training data, to make themselves better at a given task.Having language models devise their own programming...]]>
Quintin Pope https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 52:37 None full 5304
vQR9iiifwnJcunPSb_NL_EA_EA EA - Forecasts on Moore v Harper from Samotsvety by gregjustice Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasts on Moore v Harper from Samotsvety, published by gregjustice on March 20, 2023 on The Effective Altruism Forum.[edited to include full text]DisclaimersThe probabilities listed are contingent on SCOTUS issuing a ruling on this case. An updated numerical forecast on that happening, particularly in light of the NC Supreme Court’s decision to rehear Harper v Hall, may be forthcoming.The author of this report, Greg Justice, is an excellent forecaster, not a lawyer. This post should not be interpreted as legal advice. This writeup is still in progress, and the author is looking for a good venue to publish it in.You can subscribe to these posts here.IntroductionThe Moore v. Harper case before SCOTUS asks to what degree state courts can interfere with state legislatures in the drawing of congressional district maps. Versions of the legal theory they’re being asked to rule on were invoked as part of the attempts to overthrow the 2020 election, leading to widespread media coverage of the case. The ruling here will have implications for myriad state-level efforts to curb partisan gerrymandering.Below, we first discuss the Independent State Legislature theory and Moore v. Harper. We then offer a survey of how the justices have ruled in related cases, what some notable conservative sources have written, and what the justices said in oral arguments. Finally, we offer our own thoughts about some potential outcomes of this case and their consequences for the future.BackgroundWhat is the independent state legislature theory?Independent State Legislature theory or doctrine (ISL) generally holds that state legislatures have unique power to determine the rules around elections. There are a range of views that fall under the term ISL, ranging from the idea that state courts' freedom to interpret legislation is more limited than it is with other laws, to the idea that state courts and other state bodies lack any authority on issues of federal election law altogether. However, “[t]hese possible corollaries of the doctrine are largely independent of each other, supported by somewhat different lines of reasoning and authority. Although these theories arise from the same constitutional principle, each may be assessed separately from the others; the doctrine need not be accepted or repudiated wholesale.”1The doctrine is rooted in a narrow reading of Article I Section 4 Clause 1 (the Elections Clause) of the Constitution, which states, “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof.”2 According to the Brennan Center, this interpretation is at odds with a more traditional reading:The dispute hinges on how to understand the word “legislature.” The long-running understanding is that it refers to each state’s general lawmaking processes, including all the normal procedures and limitations. So if a state constitution subjects legislation to being blocked by a governor’s veto or citizen referendum, election laws can be blocked via the same means. And state courts must ensure that laws for federal elections, like all laws, comply with their state constitutions.Proponents of the independent state legislature theory reject this traditional reading, insisting that these clauses give state legislatures exclusive and near-absolute power to regulate federal elections. The result? When it comes to federal elections, legislators would be free to violate the state constitution and state courts couldn’t stop them.Extreme versions of the theory would block legislatures from delegating their authority to officials like governors, secretaries of state, or election commissioners, who currently play important roles in administering elections.3The doctrine, which governs the actions of state cou...]]>
gregjustice https://forum.effectivealtruism.org/posts/vQR9iiifwnJcunPSb/forecasts-on-moore-v-harper-from-samotsvety Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasts on Moore v Harper from Samotsvety, published by gregjustice on March 20, 2023 on The Effective Altruism Forum.[edited to include full text]DisclaimersThe probabilities listed are contingent on SCOTUS issuing a ruling on this case. An updated numerical forecast on that happening, particularly in light of the NC Supreme Court’s decision to rehear Harper v Hall, may be forthcoming.The author of this report, Greg Justice, is an excellent forecaster, not a lawyer. This post should not be interpreted as legal advice. This writeup is still in progress, and the author is looking for a good venue to publish it in.You can subscribe to these posts here.IntroductionThe Moore v. Harper case before SCOTUS asks to what degree state courts can interfere with state legislatures in the drawing of congressional district maps. Versions of the legal theory they’re being asked to rule on were invoked as part of the attempts to overthrow the 2020 election, leading to widespread media coverage of the case. The ruling here will have implications for myriad state-level efforts to curb partisan gerrymandering.Below, we first discuss the Independent State Legislature theory and Moore v. Harper. We then offer a survey of how the justices have ruled in related cases, what some notable conservative sources have written, and what the justices said in oral arguments. Finally, we offer our own thoughts about some potential outcomes of this case and their consequences for the future.BackgroundWhat is the independent state legislature theory?Independent State Legislature theory or doctrine (ISL) generally holds that state legislatures have unique power to determine the rules around elections. There are a range of views that fall under the term ISL, ranging from the idea that state courts' freedom to interpret legislation is more limited than it is with other laws, to the idea that state courts and other state bodies lack any authority on issues of federal election law altogether. However, “[t]hese possible corollaries of the doctrine are largely independent of each other, supported by somewhat different lines of reasoning and authority. Although these theories arise from the same constitutional principle, each may be assessed separately from the others; the doctrine need not be accepted or repudiated wholesale.”1The doctrine is rooted in a narrow reading of Article I Section 4 Clause 1 (the Elections Clause) of the Constitution, which states, “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof.”2 According to the Brennan Center, this interpretation is at odds with a more traditional reading:The dispute hinges on how to understand the word “legislature.” The long-running understanding is that it refers to each state’s general lawmaking processes, including all the normal procedures and limitations. So if a state constitution subjects legislation to being blocked by a governor’s veto or citizen referendum, election laws can be blocked via the same means. And state courts must ensure that laws for federal elections, like all laws, comply with their state constitutions.Proponents of the independent state legislature theory reject this traditional reading, insisting that these clauses give state legislatures exclusive and near-absolute power to regulate federal elections. The result? When it comes to federal elections, legislators would be free to violate the state constitution and state courts couldn’t stop them.Extreme versions of the theory would block legislatures from delegating their authority to officials like governors, secretaries of state, or election commissioners, who currently play important roles in administering elections.3The doctrine, which governs the actions of state cou...]]>
Mon, 20 Mar 2023 20:46:03 +0000 EA - Forecasts on Moore v Harper from Samotsvety by gregjustice Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasts on Moore v Harper from Samotsvety, published by gregjustice on March 20, 2023 on The Effective Altruism Forum.[edited to include full text]DisclaimersThe probabilities listed are contingent on SCOTUS issuing a ruling on this case. An updated numerical forecast on that happening, particularly in light of the NC Supreme Court’s decision to rehear Harper v Hall, may be forthcoming.The author of this report, Greg Justice, is an excellent forecaster, not a lawyer. This post should not be interpreted as legal advice. This writeup is still in progress, and the author is looking for a good venue to publish it in.You can subscribe to these posts here.IntroductionThe Moore v. Harper case before SCOTUS asks to what degree state courts can interfere with state legislatures in the drawing of congressional district maps. Versions of the legal theory they’re being asked to rule on were invoked as part of the attempts to overthrow the 2020 election, leading to widespread media coverage of the case. The ruling here will have implications for myriad state-level efforts to curb partisan gerrymandering.Below, we first discuss the Independent State Legislature theory and Moore v. Harper. We then offer a survey of how the justices have ruled in related cases, what some notable conservative sources have written, and what the justices said in oral arguments. Finally, we offer our own thoughts about some potential outcomes of this case and their consequences for the future.BackgroundWhat is the independent state legislature theory?Independent State Legislature theory or doctrine (ISL) generally holds that state legislatures have unique power to determine the rules around elections. There are a range of views that fall under the term ISL, ranging from the idea that state courts' freedom to interpret legislation is more limited than it is with other laws, to the idea that state courts and other state bodies lack any authority on issues of federal election law altogether. However, “[t]hese possible corollaries of the doctrine are largely independent of each other, supported by somewhat different lines of reasoning and authority. Although these theories arise from the same constitutional principle, each may be assessed separately from the others; the doctrine need not be accepted or repudiated wholesale.”1The doctrine is rooted in a narrow reading of Article I Section 4 Clause 1 (the Elections Clause) of the Constitution, which states, “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof.”2 According to the Brennan Center, this interpretation is at odds with a more traditional reading:The dispute hinges on how to understand the word “legislature.” The long-running understanding is that it refers to each state’s general lawmaking processes, including all the normal procedures and limitations. So if a state constitution subjects legislation to being blocked by a governor’s veto or citizen referendum, election laws can be blocked via the same means. And state courts must ensure that laws for federal elections, like all laws, comply with their state constitutions.Proponents of the independent state legislature theory reject this traditional reading, insisting that these clauses give state legislatures exclusive and near-absolute power to regulate federal elections. The result? When it comes to federal elections, legislators would be free to violate the state constitution and state courts couldn’t stop them.Extreme versions of the theory would block legislatures from delegating their authority to officials like governors, secretaries of state, or election commissioners, who currently play important roles in administering elections.3The doctrine, which governs the actions of state cou...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasts on Moore v Harper from Samotsvety, published by gregjustice on March 20, 2023 on The Effective Altruism Forum.[edited to include full text]DisclaimersThe probabilities listed are contingent on SCOTUS issuing a ruling on this case. An updated numerical forecast on that happening, particularly in light of the NC Supreme Court’s decision to rehear Harper v Hall, may be forthcoming.The author of this report, Greg Justice, is an excellent forecaster, not a lawyer. This post should not be interpreted as legal advice. This writeup is still in progress, and the author is looking for a good venue to publish it in.You can subscribe to these posts here.IntroductionThe Moore v. Harper case before SCOTUS asks to what degree state courts can interfere with state legislatures in the drawing of congressional district maps. Versions of the legal theory they’re being asked to rule on were invoked as part of the attempts to overthrow the 2020 election, leading to widespread media coverage of the case. The ruling here will have implications for myriad state-level efforts to curb partisan gerrymandering.Below, we first discuss the Independent State Legislature theory and Moore v. Harper. We then offer a survey of how the justices have ruled in related cases, what some notable conservative sources have written, and what the justices said in oral arguments. Finally, we offer our own thoughts about some potential outcomes of this case and their consequences for the future.BackgroundWhat is the independent state legislature theory?Independent State Legislature theory or doctrine (ISL) generally holds that state legislatures have unique power to determine the rules around elections. There are a range of views that fall under the term ISL, ranging from the idea that state courts' freedom to interpret legislation is more limited than it is with other laws, to the idea that state courts and other state bodies lack any authority on issues of federal election law altogether. However, “[t]hese possible corollaries of the doctrine are largely independent of each other, supported by somewhat different lines of reasoning and authority. Although these theories arise from the same constitutional principle, each may be assessed separately from the others; the doctrine need not be accepted or repudiated wholesale.”1The doctrine is rooted in a narrow reading of Article I Section 4 Clause 1 (the Elections Clause) of the Constitution, which states, “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof.”2 According to the Brennan Center, this interpretation is at odds with a more traditional reading:The dispute hinges on how to understand the word “legislature.” The long-running understanding is that it refers to each state’s general lawmaking processes, including all the normal procedures and limitations. So if a state constitution subjects legislation to being blocked by a governor’s veto or citizen referendum, election laws can be blocked via the same means. And state courts must ensure that laws for federal elections, like all laws, comply with their state constitutions.Proponents of the independent state legislature theory reject this traditional reading, insisting that these clauses give state legislatures exclusive and near-absolute power to regulate federal elections. The result? When it comes to federal elections, legislators would be free to violate the state constitution and state courts couldn’t stop them.Extreme versions of the theory would block legislatures from delegating their authority to officials like governors, secretaries of state, or election commissioners, who currently play important roles in administering elections.3The doctrine, which governs the actions of state cou...]]>
gregjustice https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 49:46 None full 5302
5YKx6xGg8qz6jLKvF_NL_EA_EA EA - Some Comments on the Recent FTX TIME Article by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Comments on the Recent FTX TIME Article, published by Ben West on March 20, 2023 on The Effective Altruism Forum.BackgroundAlameda Research (AR) was a cryptocurrency hedge fund started in late 2017.In early 2018, approximately half the employees quit, including myself and Naia Bouscal, the main person mentioned in the TIME article. At the time, I had considered AR to have failed, and I think even the people who stayed would have agreed that it had not achieved what it had wanted to.Later in 2018, some of the remaining AR staff started working on a cryptocurrency exchange named FTX. FTX grew to become a multibillion-dollar company.In late 2022, FTX collapsed. It has since been alleged that FTX defrauded their investors by misrepresenting the relationship between AR and FTX, and that this effectively led to them stealing customer deposits.The recent TIME article doesn’t make a very precise argument; here is my attempt at steelmanning/clarifying a major argument made in that article, which I will then respond to:Some EAs worked at AR before FTX startedEven though those EAs (including myself) quit before FTX was founded and therefore could not have had any first-hand knowledge of this improper relationship between AR and FTX, they knew things (like information about Sam’s character) which would have enabled them to predict that something bad would happenThis information was passed on to “EA leaders”, who did not take enough preventative action and are therefore (partly) responsible for FTX’s collapsePersonal BackgroundI worked at Alameda Research (AR) for about three months in early 2018. I was not involved in stealing FTX customer funds, and hopefully people trust me about that claim, if only because I quit before FTX was founded.To make my COI clear: I left the company I founded to join AR; doing so was very costly to me; AR crashed and burned within a few months of me joining; I blamed this crashing and burning largely on Sam.People who know I had a bad experience at AR are sometimes surprised that I’m not on the “obviously Sam was obviously 100% evil” bandwagon. I’ve been wanting to write something but found it hard because there weren’t specific things I could react to, it was just some vague difference in vibes.So I appreciate the TIME article sharing some specific things that “EA Leaders” allegedly knew which the author suggests should have caused them to predict FTX’s fraud.My Experience at AR at a High LevelI thought Sam was a bad CEO. I think he literally never prepared for a single one-on-one we had, his habit of playing video games instead of talking to you was “quirky” when he was a billionaire but aggravating when he was my manager, and my recollection is that Alameda made less money in the time I was there than if it had just simply bought and held bitcoin.But my opinion of Sam overall was more positive than the sense I get from the statements in the TIME article. (This is not very surprising, given that the TIME article consists of statements that were probably intentionally selected to be the worst possible thing the journalist could find someone to say about Sam.)It's hard to convey nuance in these posts, and I'm sure someone is going to interpret me as trying to defend Sam here. This is not what I’m trying to do, but I do think it’s worth trying to share my reflections to help others refine their models.Adding my personal experience to supplement some statements from the articleBut one of the people who did warn others about Bankman-Fried says that he openly wielded this power when challenged. “It was like, ‘I could destroy you,’” this person says. “Will and Holden would believe me over you. No one is going to believe you.”I don’t want to speak for this person, but my own experience was pretty different. For example: Sam was f...]]>
Ben West https://forum.effectivealtruism.org/posts/5YKx6xGg8qz6jLKvF/some-comments-on-the-recent-ftx-time-article Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Comments on the Recent FTX TIME Article, published by Ben West on March 20, 2023 on The Effective Altruism Forum.BackgroundAlameda Research (AR) was a cryptocurrency hedge fund started in late 2017.In early 2018, approximately half the employees quit, including myself and Naia Bouscal, the main person mentioned in the TIME article. At the time, I had considered AR to have failed, and I think even the people who stayed would have agreed that it had not achieved what it had wanted to.Later in 2018, some of the remaining AR staff started working on a cryptocurrency exchange named FTX. FTX grew to become a multibillion-dollar company.In late 2022, FTX collapsed. It has since been alleged that FTX defrauded their investors by misrepresenting the relationship between AR and FTX, and that this effectively led to them stealing customer deposits.The recent TIME article doesn’t make a very precise argument; here is my attempt at steelmanning/clarifying a major argument made in that article, which I will then respond to:Some EAs worked at AR before FTX startedEven though those EAs (including myself) quit before FTX was founded and therefore could not have had any first-hand knowledge of this improper relationship between AR and FTX, they knew things (like information about Sam’s character) which would have enabled them to predict that something bad would happenThis information was passed on to “EA leaders”, who did not take enough preventative action and are therefore (partly) responsible for FTX’s collapsePersonal BackgroundI worked at Alameda Research (AR) for about three months in early 2018. I was not involved in stealing FTX customer funds, and hopefully people trust me about that claim, if only because I quit before FTX was founded.To make my COI clear: I left the company I founded to join AR; doing so was very costly to me; AR crashed and burned within a few months of me joining; I blamed this crashing and burning largely on Sam.People who know I had a bad experience at AR are sometimes surprised that I’m not on the “obviously Sam was obviously 100% evil” bandwagon. I’ve been wanting to write something but found it hard because there weren’t specific things I could react to, it was just some vague difference in vibes.So I appreciate the TIME article sharing some specific things that “EA Leaders” allegedly knew which the author suggests should have caused them to predict FTX’s fraud.My Experience at AR at a High LevelI thought Sam was a bad CEO. I think he literally never prepared for a single one-on-one we had, his habit of playing video games instead of talking to you was “quirky” when he was a billionaire but aggravating when he was my manager, and my recollection is that Alameda made less money in the time I was there than if it had just simply bought and held bitcoin.But my opinion of Sam overall was more positive than the sense I get from the statements in the TIME article. (This is not very surprising, given that the TIME article consists of statements that were probably intentionally selected to be the worst possible thing the journalist could find someone to say about Sam.)It's hard to convey nuance in these posts, and I'm sure someone is going to interpret me as trying to defend Sam here. This is not what I’m trying to do, but I do think it’s worth trying to share my reflections to help others refine their models.Adding my personal experience to supplement some statements from the articleBut one of the people who did warn others about Bankman-Fried says that he openly wielded this power when challenged. “It was like, ‘I could destroy you,’” this person says. “Will and Holden would believe me over you. No one is going to believe you.”I don’t want to speak for this person, but my own experience was pretty different. For example: Sam was f...]]>
Mon, 20 Mar 2023 18:28:40 +0000 EA - Some Comments on the Recent FTX TIME Article by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Comments on the Recent FTX TIME Article, published by Ben West on March 20, 2023 on The Effective Altruism Forum.BackgroundAlameda Research (AR) was a cryptocurrency hedge fund started in late 2017.In early 2018, approximately half the employees quit, including myself and Naia Bouscal, the main person mentioned in the TIME article. At the time, I had considered AR to have failed, and I think even the people who stayed would have agreed that it had not achieved what it had wanted to.Later in 2018, some of the remaining AR staff started working on a cryptocurrency exchange named FTX. FTX grew to become a multibillion-dollar company.In late 2022, FTX collapsed. It has since been alleged that FTX defrauded their investors by misrepresenting the relationship between AR and FTX, and that this effectively led to them stealing customer deposits.The recent TIME article doesn’t make a very precise argument; here is my attempt at steelmanning/clarifying a major argument made in that article, which I will then respond to:Some EAs worked at AR before FTX startedEven though those EAs (including myself) quit before FTX was founded and therefore could not have had any first-hand knowledge of this improper relationship between AR and FTX, they knew things (like information about Sam’s character) which would have enabled them to predict that something bad would happenThis information was passed on to “EA leaders”, who did not take enough preventative action and are therefore (partly) responsible for FTX’s collapsePersonal BackgroundI worked at Alameda Research (AR) for about three months in early 2018. I was not involved in stealing FTX customer funds, and hopefully people trust me about that claim, if only because I quit before FTX was founded.To make my COI clear: I left the company I founded to join AR; doing so was very costly to me; AR crashed and burned within a few months of me joining; I blamed this crashing and burning largely on Sam.People who know I had a bad experience at AR are sometimes surprised that I’m not on the “obviously Sam was obviously 100% evil” bandwagon. I’ve been wanting to write something but found it hard because there weren’t specific things I could react to, it was just some vague difference in vibes.So I appreciate the TIME article sharing some specific things that “EA Leaders” allegedly knew which the author suggests should have caused them to predict FTX’s fraud.My Experience at AR at a High LevelI thought Sam was a bad CEO. I think he literally never prepared for a single one-on-one we had, his habit of playing video games instead of talking to you was “quirky” when he was a billionaire but aggravating when he was my manager, and my recollection is that Alameda made less money in the time I was there than if it had just simply bought and held bitcoin.But my opinion of Sam overall was more positive than the sense I get from the statements in the TIME article. (This is not very surprising, given that the TIME article consists of statements that were probably intentionally selected to be the worst possible thing the journalist could find someone to say about Sam.)It's hard to convey nuance in these posts, and I'm sure someone is going to interpret me as trying to defend Sam here. This is not what I’m trying to do, but I do think it’s worth trying to share my reflections to help others refine their models.Adding my personal experience to supplement some statements from the articleBut one of the people who did warn others about Bankman-Fried says that he openly wielded this power when challenged. “It was like, ‘I could destroy you,’” this person says. “Will and Holden would believe me over you. No one is going to believe you.”I don’t want to speak for this person, but my own experience was pretty different. For example: Sam was f...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Comments on the Recent FTX TIME Article, published by Ben West on March 20, 2023 on The Effective Altruism Forum.BackgroundAlameda Research (AR) was a cryptocurrency hedge fund started in late 2017.In early 2018, approximately half the employees quit, including myself and Naia Bouscal, the main person mentioned in the TIME article. At the time, I had considered AR to have failed, and I think even the people who stayed would have agreed that it had not achieved what it had wanted to.Later in 2018, some of the remaining AR staff started working on a cryptocurrency exchange named FTX. FTX grew to become a multibillion-dollar company.In late 2022, FTX collapsed. It has since been alleged that FTX defrauded their investors by misrepresenting the relationship between AR and FTX, and that this effectively led to them stealing customer deposits.The recent TIME article doesn’t make a very precise argument; here is my attempt at steelmanning/clarifying a major argument made in that article, which I will then respond to:Some EAs worked at AR before FTX startedEven though those EAs (including myself) quit before FTX was founded and therefore could not have had any first-hand knowledge of this improper relationship between AR and FTX, they knew things (like information about Sam’s character) which would have enabled them to predict that something bad would happenThis information was passed on to “EA leaders”, who did not take enough preventative action and are therefore (partly) responsible for FTX’s collapsePersonal BackgroundI worked at Alameda Research (AR) for about three months in early 2018. I was not involved in stealing FTX customer funds, and hopefully people trust me about that claim, if only because I quit before FTX was founded.To make my COI clear: I left the company I founded to join AR; doing so was very costly to me; AR crashed and burned within a few months of me joining; I blamed this crashing and burning largely on Sam.People who know I had a bad experience at AR are sometimes surprised that I’m not on the “obviously Sam was obviously 100% evil” bandwagon. I’ve been wanting to write something but found it hard because there weren’t specific things I could react to, it was just some vague difference in vibes.So I appreciate the TIME article sharing some specific things that “EA Leaders” allegedly knew which the author suggests should have caused them to predict FTX’s fraud.My Experience at AR at a High LevelI thought Sam was a bad CEO. I think he literally never prepared for a single one-on-one we had, his habit of playing video games instead of talking to you was “quirky” when he was a billionaire but aggravating when he was my manager, and my recollection is that Alameda made less money in the time I was there than if it had just simply bought and held bitcoin.But my opinion of Sam overall was more positive than the sense I get from the statements in the TIME article. (This is not very surprising, given that the TIME article consists of statements that were probably intentionally selected to be the worst possible thing the journalist could find someone to say about Sam.)It's hard to convey nuance in these posts, and I'm sure someone is going to interpret me as trying to defend Sam here. This is not what I’m trying to do, but I do think it’s worth trying to share my reflections to help others refine their models.Adding my personal experience to supplement some statements from the articleBut one of the people who did warn others about Bankman-Fried says that he openly wielded this power when challenged. “It was like, ‘I could destroy you,’” this person says. “Will and Holden would believe me over you. No one is going to believe you.”I don’t want to speak for this person, but my own experience was pretty different. For example: Sam was f...]]>
Ben West https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:13 None full 5300
XQdvHbFighhFKt3b3_NL_EA_EA EA - Save the Date April 1st 2023 EAGatherTown: UnUnConference by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the Date April 1st 2023 EAGatherTown: UnUnConference, published by Vaidehi Agarwalla on March 20, 2023 on The Effective Altruism Forum.We're excited to officially announce the very first EA UnUnConference! APPLY HERE.Naming What We Can, the most impactful post ever published on April 1st, have already volunteered to host a Q&A. We’re calling in the producers of the TV hit Impact Island, and would like to invite Peter Barnett to launch his new book What The Future Owes Us. The X-risk-Men Incubation Program is running an enlightening talks session.Location: Mars in GathertownDate: April 1st, 2023, 24 hours and 37 minutes starting at 12:00pm UTC (or “lunch time” for british people)The case for impactOver the years, humanity realized that Unconferences are a great twist of traditional conferences, since the independence gives room for more unexpected benefits to happen.For the reason, we’re experimenting with the format of an UnUnconference. This means we’ll actively try not to organize anything, therefore (in expectancy) achieving even more unexpected benefits.We encourage you to critique our (relatively solid, in our opinion) theory of change in the comments!We understand this is not the most ambitious we could be. Although we fall short of the dream of EAGxMars, we believe this Ununconference is a proof-of-concept that will help validate the model of novel, experimental conferences and possibly redefine what impact means for EA events for years to come.This team is well placed to unorganize this event because we have previously successfully not organized 10^10 possible events.What to expectAll beings welcomed, that includes infants, face mice, gut microbiome, etc.Expect to have the most impactful timeMake more impact than everyone on earth could ever do combinedNetwork with the best minds in ultra-near-termist researchNever meet your connections again after the eventCertificates of £20 worth of impact just for £10!No success metricsNo theory of changeNo food, no wine, no sufferingCheck out our official event poster!Pixelated lightbulb that looks like mars as a logo for an unconference (DALL-E)Get involvedTake a look at the confernce agenda and add sessions to your calendarComment on this post with content suggestions and anti-suggestionsSign up for an enlightning talkUnvolunteer for the eventUnUnvolunteer for the event (your goal will be to actively unorganize stuff)UnUnUnvolunteer for the event (your goal will be to actively ununorganize stuff).. And so on. We think at least 5 levels of volunteers will be necessary for this event to be a complete success, to minimize risk of not falling into the well known meta trap.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Vaidehi Agarwalla https://forum.effectivealtruism.org/posts/XQdvHbFighhFKt3b3/save-the-date-april-1st-2023-eagathertown-ununconference-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the Date April 1st 2023 EAGatherTown: UnUnConference, published by Vaidehi Agarwalla on March 20, 2023 on The Effective Altruism Forum.We're excited to officially announce the very first EA UnUnConference! APPLY HERE.Naming What We Can, the most impactful post ever published on April 1st, have already volunteered to host a Q&A. We’re calling in the producers of the TV hit Impact Island, and would like to invite Peter Barnett to launch his new book What The Future Owes Us. The X-risk-Men Incubation Program is running an enlightening talks session.Location: Mars in GathertownDate: April 1st, 2023, 24 hours and 37 minutes starting at 12:00pm UTC (or “lunch time” for british people)The case for impactOver the years, humanity realized that Unconferences are a great twist of traditional conferences, since the independence gives room for more unexpected benefits to happen.For the reason, we’re experimenting with the format of an UnUnconference. This means we’ll actively try not to organize anything, therefore (in expectancy) achieving even more unexpected benefits.We encourage you to critique our (relatively solid, in our opinion) theory of change in the comments!We understand this is not the most ambitious we could be. Although we fall short of the dream of EAGxMars, we believe this Ununconference is a proof-of-concept that will help validate the model of novel, experimental conferences and possibly redefine what impact means for EA events for years to come.This team is well placed to unorganize this event because we have previously successfully not organized 10^10 possible events.What to expectAll beings welcomed, that includes infants, face mice, gut microbiome, etc.Expect to have the most impactful timeMake more impact than everyone on earth could ever do combinedNetwork with the best minds in ultra-near-termist researchNever meet your connections again after the eventCertificates of £20 worth of impact just for £10!No success metricsNo theory of changeNo food, no wine, no sufferingCheck out our official event poster!Pixelated lightbulb that looks like mars as a logo for an unconference (DALL-E)Get involvedTake a look at the confernce agenda and add sessions to your calendarComment on this post with content suggestions and anti-suggestionsSign up for an enlightning talkUnvolunteer for the eventUnUnvolunteer for the event (your goal will be to actively unorganize stuff)UnUnUnvolunteer for the event (your goal will be to actively ununorganize stuff).. And so on. We think at least 5 levels of volunteers will be necessary for this event to be a complete success, to minimize risk of not falling into the well known meta trap.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 20 Mar 2023 18:16:43 +0000 EA - Save the Date April 1st 2023 EAGatherTown: UnUnConference by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the Date April 1st 2023 EAGatherTown: UnUnConference, published by Vaidehi Agarwalla on March 20, 2023 on The Effective Altruism Forum.We're excited to officially announce the very first EA UnUnConference! APPLY HERE.Naming What We Can, the most impactful post ever published on April 1st, have already volunteered to host a Q&A. We’re calling in the producers of the TV hit Impact Island, and would like to invite Peter Barnett to launch his new book What The Future Owes Us. The X-risk-Men Incubation Program is running an enlightening talks session.Location: Mars in GathertownDate: April 1st, 2023, 24 hours and 37 minutes starting at 12:00pm UTC (or “lunch time” for british people)The case for impactOver the years, humanity realized that Unconferences are a great twist of traditional conferences, since the independence gives room for more unexpected benefits to happen.For the reason, we’re experimenting with the format of an UnUnconference. This means we’ll actively try not to organize anything, therefore (in expectancy) achieving even more unexpected benefits.We encourage you to critique our (relatively solid, in our opinion) theory of change in the comments!We understand this is not the most ambitious we could be. Although we fall short of the dream of EAGxMars, we believe this Ununconference is a proof-of-concept that will help validate the model of novel, experimental conferences and possibly redefine what impact means for EA events for years to come.This team is well placed to unorganize this event because we have previously successfully not organized 10^10 possible events.What to expectAll beings welcomed, that includes infants, face mice, gut microbiome, etc.Expect to have the most impactful timeMake more impact than everyone on earth could ever do combinedNetwork with the best minds in ultra-near-termist researchNever meet your connections again after the eventCertificates of £20 worth of impact just for £10!No success metricsNo theory of changeNo food, no wine, no sufferingCheck out our official event poster!Pixelated lightbulb that looks like mars as a logo for an unconference (DALL-E)Get involvedTake a look at the confernce agenda and add sessions to your calendarComment on this post with content suggestions and anti-suggestionsSign up for an enlightning talkUnvolunteer for the eventUnUnvolunteer for the event (your goal will be to actively unorganize stuff)UnUnUnvolunteer for the event (your goal will be to actively ununorganize stuff).. And so on. We think at least 5 levels of volunteers will be necessary for this event to be a complete success, to minimize risk of not falling into the well known meta trap.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the Date April 1st 2023 EAGatherTown: UnUnConference, published by Vaidehi Agarwalla on March 20, 2023 on The Effective Altruism Forum.We're excited to officially announce the very first EA UnUnConference! APPLY HERE.Naming What We Can, the most impactful post ever published on April 1st, have already volunteered to host a Q&A. We’re calling in the producers of the TV hit Impact Island, and would like to invite Peter Barnett to launch his new book What The Future Owes Us. The X-risk-Men Incubation Program is running an enlightening talks session.Location: Mars in GathertownDate: April 1st, 2023, 24 hours and 37 minutes starting at 12:00pm UTC (or “lunch time” for british people)The case for impactOver the years, humanity realized that Unconferences are a great twist of traditional conferences, since the independence gives room for more unexpected benefits to happen.For the reason, we’re experimenting with the format of an UnUnconference. This means we’ll actively try not to organize anything, therefore (in expectancy) achieving even more unexpected benefits.We encourage you to critique our (relatively solid, in our opinion) theory of change in the comments!We understand this is not the most ambitious we could be. Although we fall short of the dream of EAGxMars, we believe this Ununconference is a proof-of-concept that will help validate the model of novel, experimental conferences and possibly redefine what impact means for EA events for years to come.This team is well placed to unorganize this event because we have previously successfully not organized 10^10 possible events.What to expectAll beings welcomed, that includes infants, face mice, gut microbiome, etc.Expect to have the most impactful timeMake more impact than everyone on earth could ever do combinedNetwork with the best minds in ultra-near-termist researchNever meet your connections again after the eventCertificates of £20 worth of impact just for £10!No success metricsNo theory of changeNo food, no wine, no sufferingCheck out our official event poster!Pixelated lightbulb that looks like mars as a logo for an unconference (DALL-E)Get involvedTake a look at the confernce agenda and add sessions to your calendarComment on this post with content suggestions and anti-suggestionsSign up for an enlightning talkUnvolunteer for the eventUnUnvolunteer for the event (your goal will be to actively unorganize stuff)UnUnUnvolunteer for the event (your goal will be to actively ununorganize stuff).. And so on. We think at least 5 levels of volunteers will be necessary for this event to be a complete success, to minimize risk of not falling into the well known meta trap.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Vaidehi Agarwalla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:01 None full 5301
BcbmKitFms6NbTMKt_NL_EA_EA EA - Tensions between different approaches to doing good by James Özden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tensions between different approaches to doing good, published by James Özden on March 19, 2023 on The Effective Altruism Forum.Link-posted from my blog here.TLDR: I get the impression that EAs don't always understand where certain critics are coming from e.g. what do people actually mean when they say EAs aren't pursuing "system change" enough? or that we're focusing on the wrong things? I feel like I hear these critiques a lot, so I attempted to steelman them and put them into more EA-friendly jargon. It's almost certainly not a perfect representation of these views, nor exhaustive, but might be interesting anyway. Enjoy!I feel lucky that I have fairly diverse groups of friends. On one hand, some of my closest friends are people I know through grassroots climate and animal rights activism, from my days in Extinction Rebellion and Animal Rebellion. On the other hand, I also spend a lot of time with people who have a very different approach to improving the world, such as friends I met through the Charity Entrepreneurship Incubation Program or via effective altruism.Both of these somewhat vague and undefined groups, “radical” grassroots activists and empirics-focused charity folks, often critique the other group with various concerns about their methods of doing good. Almost always, I end up defending the group under attack, saying they have some reasonable points and we would do better if we could integrate the best parts of both worldviews.To highlight how these conversations usually go (and clarify my own thinking), I thought I would write up the common points into a dialogue between two versions of myself. One version, labelled Quantify Everything James (or QEJ), discusses the importance of supporting highly evidence-based and quantitatively-backed ways of doing good. This is broadly similar to what most effective altruists advocate for. The other part of myself, presented under the label Complexity-inclined James (CIJ), discusses the limitations of this empirical approach, and how else we should consider doing the most good. With this character, I’m trying to capture the objections that my activist friends often have.As it might be apparent, I’m sympathetic to both of these different approaches and I think they both provide some valuable insights. In this piece, I focus more on describing the common critiques of effective altruist-esque ways of doing good, as this seems to be something that isn’t particularly well understood (in my opinion).Without further ado:Quantify Everything James (QEJ): We should do the most good by finding charities that are very cost-effective, with a strong evidence base, and support them financially! For example, organisations like The Humane League, Clean Air Task Force and Against Malaria Foundation all seem like they provide demonstrably significant benefits on reducing animal suffering, mitigating climate change and saving human lives. For example, external evaluators estimate the Against Malaria Foundation can save a human life for around $5000 and that organisations like The Humane League affect 41 years of chicken life per dollar spent on corporate welfare campaigns.It’s crucial we support highly evidence-based organisations such as these, as most well-intentioned charities probably don’t do that much good for their beneficiaries. Additionally, the best charities are likely to be 10-100x more effective than even the average charity! Using an example from this very relevant paper by Toby Ord: If you care about helping people with blindness, one option is to pay $40,000 for someone in the United States to have access to a guide dog (the costs of training the dog & the person). However, you could also pay for surgeries to treat trachoma, a bacterial infection that is the top cause of blindness worldwide. At around $20 per ...]]>
James Özden https://forum.effectivealtruism.org/posts/BcbmKitFms6NbTMKt/tensions-between-different-approaches-to-doing-good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tensions between different approaches to doing good, published by James Özden on March 19, 2023 on The Effective Altruism Forum.Link-posted from my blog here.TLDR: I get the impression that EAs don't always understand where certain critics are coming from e.g. what do people actually mean when they say EAs aren't pursuing "system change" enough? or that we're focusing on the wrong things? I feel like I hear these critiques a lot, so I attempted to steelman them and put them into more EA-friendly jargon. It's almost certainly not a perfect representation of these views, nor exhaustive, but might be interesting anyway. Enjoy!I feel lucky that I have fairly diverse groups of friends. On one hand, some of my closest friends are people I know through grassroots climate and animal rights activism, from my days in Extinction Rebellion and Animal Rebellion. On the other hand, I also spend a lot of time with people who have a very different approach to improving the world, such as friends I met through the Charity Entrepreneurship Incubation Program or via effective altruism.Both of these somewhat vague and undefined groups, “radical” grassroots activists and empirics-focused charity folks, often critique the other group with various concerns about their methods of doing good. Almost always, I end up defending the group under attack, saying they have some reasonable points and we would do better if we could integrate the best parts of both worldviews.To highlight how these conversations usually go (and clarify my own thinking), I thought I would write up the common points into a dialogue between two versions of myself. One version, labelled Quantify Everything James (or QEJ), discusses the importance of supporting highly evidence-based and quantitatively-backed ways of doing good. This is broadly similar to what most effective altruists advocate for. The other part of myself, presented under the label Complexity-inclined James (CIJ), discusses the limitations of this empirical approach, and how else we should consider doing the most good. With this character, I’m trying to capture the objections that my activist friends often have.As it might be apparent, I’m sympathetic to both of these different approaches and I think they both provide some valuable insights. In this piece, I focus more on describing the common critiques of effective altruist-esque ways of doing good, as this seems to be something that isn’t particularly well understood (in my opinion).Without further ado:Quantify Everything James (QEJ): We should do the most good by finding charities that are very cost-effective, with a strong evidence base, and support them financially! For example, organisations like The Humane League, Clean Air Task Force and Against Malaria Foundation all seem like they provide demonstrably significant benefits on reducing animal suffering, mitigating climate change and saving human lives. For example, external evaluators estimate the Against Malaria Foundation can save a human life for around $5000 and that organisations like The Humane League affect 41 years of chicken life per dollar spent on corporate welfare campaigns.It’s crucial we support highly evidence-based organisations such as these, as most well-intentioned charities probably don’t do that much good for their beneficiaries. Additionally, the best charities are likely to be 10-100x more effective than even the average charity! Using an example from this very relevant paper by Toby Ord: If you care about helping people with blindness, one option is to pay $40,000 for someone in the United States to have access to a guide dog (the costs of training the dog & the person). However, you could also pay for surgeries to treat trachoma, a bacterial infection that is the top cause of blindness worldwide. At around $20 per ...]]>
Mon, 20 Mar 2023 10:16:31 +0000 EA - Tensions between different approaches to doing good by James Özden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tensions between different approaches to doing good, published by James Özden on March 19, 2023 on The Effective Altruism Forum.Link-posted from my blog here.TLDR: I get the impression that EAs don't always understand where certain critics are coming from e.g. what do people actually mean when they say EAs aren't pursuing "system change" enough? or that we're focusing on the wrong things? I feel like I hear these critiques a lot, so I attempted to steelman them and put them into more EA-friendly jargon. It's almost certainly not a perfect representation of these views, nor exhaustive, but might be interesting anyway. Enjoy!I feel lucky that I have fairly diverse groups of friends. On one hand, some of my closest friends are people I know through grassroots climate and animal rights activism, from my days in Extinction Rebellion and Animal Rebellion. On the other hand, I also spend a lot of time with people who have a very different approach to improving the world, such as friends I met through the Charity Entrepreneurship Incubation Program or via effective altruism.Both of these somewhat vague and undefined groups, “radical” grassroots activists and empirics-focused charity folks, often critique the other group with various concerns about their methods of doing good. Almost always, I end up defending the group under attack, saying they have some reasonable points and we would do better if we could integrate the best parts of both worldviews.To highlight how these conversations usually go (and clarify my own thinking), I thought I would write up the common points into a dialogue between two versions of myself. One version, labelled Quantify Everything James (or QEJ), discusses the importance of supporting highly evidence-based and quantitatively-backed ways of doing good. This is broadly similar to what most effective altruists advocate for. The other part of myself, presented under the label Complexity-inclined James (CIJ), discusses the limitations of this empirical approach, and how else we should consider doing the most good. With this character, I’m trying to capture the objections that my activist friends often have.As it might be apparent, I’m sympathetic to both of these different approaches and I think they both provide some valuable insights. In this piece, I focus more on describing the common critiques of effective altruist-esque ways of doing good, as this seems to be something that isn’t particularly well understood (in my opinion).Without further ado:Quantify Everything James (QEJ): We should do the most good by finding charities that are very cost-effective, with a strong evidence base, and support them financially! For example, organisations like The Humane League, Clean Air Task Force and Against Malaria Foundation all seem like they provide demonstrably significant benefits on reducing animal suffering, mitigating climate change and saving human lives. For example, external evaluators estimate the Against Malaria Foundation can save a human life for around $5000 and that organisations like The Humane League affect 41 years of chicken life per dollar spent on corporate welfare campaigns.It’s crucial we support highly evidence-based organisations such as these, as most well-intentioned charities probably don’t do that much good for their beneficiaries. Additionally, the best charities are likely to be 10-100x more effective than even the average charity! Using an example from this very relevant paper by Toby Ord: If you care about helping people with blindness, one option is to pay $40,000 for someone in the United States to have access to a guide dog (the costs of training the dog & the person). However, you could also pay for surgeries to treat trachoma, a bacterial infection that is the top cause of blindness worldwide. At around $20 per ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tensions between different approaches to doing good, published by James Özden on March 19, 2023 on The Effective Altruism Forum.Link-posted from my blog here.TLDR: I get the impression that EAs don't always understand where certain critics are coming from e.g. what do people actually mean when they say EAs aren't pursuing "system change" enough? or that we're focusing on the wrong things? I feel like I hear these critiques a lot, so I attempted to steelman them and put them into more EA-friendly jargon. It's almost certainly not a perfect representation of these views, nor exhaustive, but might be interesting anyway. Enjoy!I feel lucky that I have fairly diverse groups of friends. On one hand, some of my closest friends are people I know through grassroots climate and animal rights activism, from my days in Extinction Rebellion and Animal Rebellion. On the other hand, I also spend a lot of time with people who have a very different approach to improving the world, such as friends I met through the Charity Entrepreneurship Incubation Program or via effective altruism.Both of these somewhat vague and undefined groups, “radical” grassroots activists and empirics-focused charity folks, often critique the other group with various concerns about their methods of doing good. Almost always, I end up defending the group under attack, saying they have some reasonable points and we would do better if we could integrate the best parts of both worldviews.To highlight how these conversations usually go (and clarify my own thinking), I thought I would write up the common points into a dialogue between two versions of myself. One version, labelled Quantify Everything James (or QEJ), discusses the importance of supporting highly evidence-based and quantitatively-backed ways of doing good. This is broadly similar to what most effective altruists advocate for. The other part of myself, presented under the label Complexity-inclined James (CIJ), discusses the limitations of this empirical approach, and how else we should consider doing the most good. With this character, I’m trying to capture the objections that my activist friends often have.As it might be apparent, I’m sympathetic to both of these different approaches and I think they both provide some valuable insights. In this piece, I focus more on describing the common critiques of effective altruist-esque ways of doing good, as this seems to be something that isn’t particularly well understood (in my opinion).Without further ado:Quantify Everything James (QEJ): We should do the most good by finding charities that are very cost-effective, with a strong evidence base, and support them financially! For example, organisations like The Humane League, Clean Air Task Force and Against Malaria Foundation all seem like they provide demonstrably significant benefits on reducing animal suffering, mitigating climate change and saving human lives. For example, external evaluators estimate the Against Malaria Foundation can save a human life for around $5000 and that organisations like The Humane League affect 41 years of chicken life per dollar spent on corporate welfare campaigns.It’s crucial we support highly evidence-based organisations such as these, as most well-intentioned charities probably don’t do that much good for their beneficiaries. Additionally, the best charities are likely to be 10-100x more effective than even the average charity! Using an example from this very relevant paper by Toby Ord: If you care about helping people with blindness, one option is to pay $40,000 for someone in the United States to have access to a guide dog (the costs of training the dog & the person). However, you could also pay for surgeries to treat trachoma, a bacterial infection that is the top cause of blindness worldwide. At around $20 per ...]]>
James Özden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 18:03 None full 5293
ayxasxhHWTvf6r5BF_NL_EA_EA EA - Scale of the welfare of various animal populations by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scale of the welfare of various animal populations, published by Vasco Grilo on March 19, 2023 on The Effective Altruism Forum.SummaryI Fermi-estimated the scale of the welfare of various animal populations from the relative intensity of their experiences, moral weight, and population size.Based on my results, I would be very surprised if the scale of the welfare of:Wild animals ended up being smaller than that of farmed animals.Farmed animals turned out to be smaller than that of humans.IntroductionIf it is worth doing, it is worth doing with made-up statistics?MethodsI Fermi-estimated the scale of the welfare of various animal populations from the absolute value of the expected total hedonistic utility (ETHU). I computed this from the product between:Intensity of the mean experience as a fraction of that of the worst possible experience.Mean moral weight.Population size.The data and calculations are here.Intensity of experienceI defined the intensity of the mean experience as a fraction of that of the worst possible experience based on the types of pain defined by the Welfare Footprint Project (WFP) here (search for “definitions”). I assumed:The following correspondence between the various types of pain (I encourage you to check this post from algekalipso, and this from Ren Springlea to get a sense of why I think the intensity can vary so much):Excruciating pain, which I consider the worst possible experience, is 1 k times as bad as disabling pain.Disabling pain is 100 times as bad as hurtful pain, which together with the above implies excruciating pain being 100 k times as bad as hurtful pain.Hurtful pain is 10 times as bad as annoying pain, which together with the above implies excruciating pain being 1 M times as bad as annoying pain.The intensity of the mean experience of:Farmed animal populations is as high as that of broiler chickens in reformed scenarios. I assessed this from the time broilers experience each type of pain according to these data from WFP (search for “pain-tracks”), and supposing:The rest of their time is neutral.Their lifespan is 42 days, in agreement with section “Conventional and Reformed Scenarios” of Chapter 1 of Quantifying pain in broiler chickens by Cynthia Schuck-Paim and Wladimir Alonso.Humans and other non-farmed animal populations is as high as 2/3 of that of hurtful pain. 2/3 (= 16/24) such that 1 day (24 h) of such intensity is equivalent to 16 h spent in hurtful pain plus 8 h in neutral sleeping.Ideally, I would have used empirical data for the animal populations besides farmed chickens too. However, I do not think they are readily available, so I had to make some assumptions.In general, I believe the sign of the mean experience is:For farmed animal populations, negative, judging from the research of WFP on chickens.For humans, positive (see here).For other non-farmed animal populations, positive or negative (see this preprint from Heather Browning and Walter Weit).Moral weightI defined the mean moral weight from Rethink Priorities’ median estimates for mature individuals provided here by Bob Fischer. For the populations I studied with animals of different species, I used those of:For wild mammals, pigs.For farmed fish, salmon.For wild fish, salmon.For farmed insects, silkworms.For wild terrestrial arthropods, silkworms.For farmed crayfish, crabs and lobsters, mean between crayfish and crabs.For farmed shrimps and prawns, shrimps.For wild marine arthropods, silkworms.For nematodes, silkworms multiplied by 0.1.Population sizeI defined the population size from:For humans, these data from Our World in Data (OWID) (for 2021).For wild mammals, the mean of the lower and upper bounds provided in section 3.1.5.2 of Carlier 2020.For farmed chickens and pigs, these data from OWID (for 2014).F...]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/ayxasxhHWTvf6r5BF/scale-of-the-welfare-of-various-animal-populations Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scale of the welfare of various animal populations, published by Vasco Grilo on March 19, 2023 on The Effective Altruism Forum.SummaryI Fermi-estimated the scale of the welfare of various animal populations from the relative intensity of their experiences, moral weight, and population size.Based on my results, I would be very surprised if the scale of the welfare of:Wild animals ended up being smaller than that of farmed animals.Farmed animals turned out to be smaller than that of humans.IntroductionIf it is worth doing, it is worth doing with made-up statistics?MethodsI Fermi-estimated the scale of the welfare of various animal populations from the absolute value of the expected total hedonistic utility (ETHU). I computed this from the product between:Intensity of the mean experience as a fraction of that of the worst possible experience.Mean moral weight.Population size.The data and calculations are here.Intensity of experienceI defined the intensity of the mean experience as a fraction of that of the worst possible experience based on the types of pain defined by the Welfare Footprint Project (WFP) here (search for “definitions”). I assumed:The following correspondence between the various types of pain (I encourage you to check this post from algekalipso, and this from Ren Springlea to get a sense of why I think the intensity can vary so much):Excruciating pain, which I consider the worst possible experience, is 1 k times as bad as disabling pain.Disabling pain is 100 times as bad as hurtful pain, which together with the above implies excruciating pain being 100 k times as bad as hurtful pain.Hurtful pain is 10 times as bad as annoying pain, which together with the above implies excruciating pain being 1 M times as bad as annoying pain.The intensity of the mean experience of:Farmed animal populations is as high as that of broiler chickens in reformed scenarios. I assessed this from the time broilers experience each type of pain according to these data from WFP (search for “pain-tracks”), and supposing:The rest of their time is neutral.Their lifespan is 42 days, in agreement with section “Conventional and Reformed Scenarios” of Chapter 1 of Quantifying pain in broiler chickens by Cynthia Schuck-Paim and Wladimir Alonso.Humans and other non-farmed animal populations is as high as 2/3 of that of hurtful pain. 2/3 (= 16/24) such that 1 day (24 h) of such intensity is equivalent to 16 h spent in hurtful pain plus 8 h in neutral sleeping.Ideally, I would have used empirical data for the animal populations besides farmed chickens too. However, I do not think they are readily available, so I had to make some assumptions.In general, I believe the sign of the mean experience is:For farmed animal populations, negative, judging from the research of WFP on chickens.For humans, positive (see here).For other non-farmed animal populations, positive or negative (see this preprint from Heather Browning and Walter Weit).Moral weightI defined the mean moral weight from Rethink Priorities’ median estimates for mature individuals provided here by Bob Fischer. For the populations I studied with animals of different species, I used those of:For wild mammals, pigs.For farmed fish, salmon.For wild fish, salmon.For farmed insects, silkworms.For wild terrestrial arthropods, silkworms.For farmed crayfish, crabs and lobsters, mean between crayfish and crabs.For farmed shrimps and prawns, shrimps.For wild marine arthropods, silkworms.For nematodes, silkworms multiplied by 0.1.Population sizeI defined the population size from:For humans, these data from Our World in Data (OWID) (for 2021).For wild mammals, the mean of the lower and upper bounds provided in section 3.1.5.2 of Carlier 2020.For farmed chickens and pigs, these data from OWID (for 2014).F...]]>
Sun, 19 Mar 2023 23:48:10 +0000 EA - Scale of the welfare of various animal populations by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scale of the welfare of various animal populations, published by Vasco Grilo on March 19, 2023 on The Effective Altruism Forum.SummaryI Fermi-estimated the scale of the welfare of various animal populations from the relative intensity of their experiences, moral weight, and population size.Based on my results, I would be very surprised if the scale of the welfare of:Wild animals ended up being smaller than that of farmed animals.Farmed animals turned out to be smaller than that of humans.IntroductionIf it is worth doing, it is worth doing with made-up statistics?MethodsI Fermi-estimated the scale of the welfare of various animal populations from the absolute value of the expected total hedonistic utility (ETHU). I computed this from the product between:Intensity of the mean experience as a fraction of that of the worst possible experience.Mean moral weight.Population size.The data and calculations are here.Intensity of experienceI defined the intensity of the mean experience as a fraction of that of the worst possible experience based on the types of pain defined by the Welfare Footprint Project (WFP) here (search for “definitions”). I assumed:The following correspondence between the various types of pain (I encourage you to check this post from algekalipso, and this from Ren Springlea to get a sense of why I think the intensity can vary so much):Excruciating pain, which I consider the worst possible experience, is 1 k times as bad as disabling pain.Disabling pain is 100 times as bad as hurtful pain, which together with the above implies excruciating pain being 100 k times as bad as hurtful pain.Hurtful pain is 10 times as bad as annoying pain, which together with the above implies excruciating pain being 1 M times as bad as annoying pain.The intensity of the mean experience of:Farmed animal populations is as high as that of broiler chickens in reformed scenarios. I assessed this from the time broilers experience each type of pain according to these data from WFP (search for “pain-tracks”), and supposing:The rest of their time is neutral.Their lifespan is 42 days, in agreement with section “Conventional and Reformed Scenarios” of Chapter 1 of Quantifying pain in broiler chickens by Cynthia Schuck-Paim and Wladimir Alonso.Humans and other non-farmed animal populations is as high as 2/3 of that of hurtful pain. 2/3 (= 16/24) such that 1 day (24 h) of such intensity is equivalent to 16 h spent in hurtful pain plus 8 h in neutral sleeping.Ideally, I would have used empirical data for the animal populations besides farmed chickens too. However, I do not think they are readily available, so I had to make some assumptions.In general, I believe the sign of the mean experience is:For farmed animal populations, negative, judging from the research of WFP on chickens.For humans, positive (see here).For other non-farmed animal populations, positive or negative (see this preprint from Heather Browning and Walter Weit).Moral weightI defined the mean moral weight from Rethink Priorities’ median estimates for mature individuals provided here by Bob Fischer. For the populations I studied with animals of different species, I used those of:For wild mammals, pigs.For farmed fish, salmon.For wild fish, salmon.For farmed insects, silkworms.For wild terrestrial arthropods, silkworms.For farmed crayfish, crabs and lobsters, mean between crayfish and crabs.For farmed shrimps and prawns, shrimps.For wild marine arthropods, silkworms.For nematodes, silkworms multiplied by 0.1.Population sizeI defined the population size from:For humans, these data from Our World in Data (OWID) (for 2021).For wild mammals, the mean of the lower and upper bounds provided in section 3.1.5.2 of Carlier 2020.For farmed chickens and pigs, these data from OWID (for 2014).F...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scale of the welfare of various animal populations, published by Vasco Grilo on March 19, 2023 on The Effective Altruism Forum.SummaryI Fermi-estimated the scale of the welfare of various animal populations from the relative intensity of their experiences, moral weight, and population size.Based on my results, I would be very surprised if the scale of the welfare of:Wild animals ended up being smaller than that of farmed animals.Farmed animals turned out to be smaller than that of humans.IntroductionIf it is worth doing, it is worth doing with made-up statistics?MethodsI Fermi-estimated the scale of the welfare of various animal populations from the absolute value of the expected total hedonistic utility (ETHU). I computed this from the product between:Intensity of the mean experience as a fraction of that of the worst possible experience.Mean moral weight.Population size.The data and calculations are here.Intensity of experienceI defined the intensity of the mean experience as a fraction of that of the worst possible experience based on the types of pain defined by the Welfare Footprint Project (WFP) here (search for “definitions”). I assumed:The following correspondence between the various types of pain (I encourage you to check this post from algekalipso, and this from Ren Springlea to get a sense of why I think the intensity can vary so much):Excruciating pain, which I consider the worst possible experience, is 1 k times as bad as disabling pain.Disabling pain is 100 times as bad as hurtful pain, which together with the above implies excruciating pain being 100 k times as bad as hurtful pain.Hurtful pain is 10 times as bad as annoying pain, which together with the above implies excruciating pain being 1 M times as bad as annoying pain.The intensity of the mean experience of:Farmed animal populations is as high as that of broiler chickens in reformed scenarios. I assessed this from the time broilers experience each type of pain according to these data from WFP (search for “pain-tracks”), and supposing:The rest of their time is neutral.Their lifespan is 42 days, in agreement with section “Conventional and Reformed Scenarios” of Chapter 1 of Quantifying pain in broiler chickens by Cynthia Schuck-Paim and Wladimir Alonso.Humans and other non-farmed animal populations is as high as 2/3 of that of hurtful pain. 2/3 (= 16/24) such that 1 day (24 h) of such intensity is equivalent to 16 h spent in hurtful pain plus 8 h in neutral sleeping.Ideally, I would have used empirical data for the animal populations besides farmed chickens too. However, I do not think they are readily available, so I had to make some assumptions.In general, I believe the sign of the mean experience is:For farmed animal populations, negative, judging from the research of WFP on chickens.For humans, positive (see here).For other non-farmed animal populations, positive or negative (see this preprint from Heather Browning and Walter Weit).Moral weightI defined the mean moral weight from Rethink Priorities’ median estimates for mature individuals provided here by Bob Fischer. For the populations I studied with animals of different species, I used those of:For wild mammals, pigs.For farmed fish, salmon.For wild fish, salmon.For farmed insects, silkworms.For wild terrestrial arthropods, silkworms.For farmed crayfish, crabs and lobsters, mean between crayfish and crabs.For farmed shrimps and prawns, shrimps.For wild marine arthropods, silkworms.For nematodes, silkworms multiplied by 0.1.Population sizeI defined the population size from:For humans, these data from Our World in Data (OWID) (for 2021).For wild mammals, the mean of the lower and upper bounds provided in section 3.1.5.2 of Carlier 2020.For farmed chickens and pigs, these data from OWID (for 2014).F...]]>
Vasco Grilo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:14 None full 5294
PMFoxr62AeLEwPAH9_NL_EA_EA EA - Potential employees have a unique lever to influence the behaviors of AI labs by oxalis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Potential employees have a unique lever to influence the behaviors of AI labs, published by oxalis on March 18, 2023 on The Effective Altruism Forum.(Cross posted from my personal blog)People who have received and are considering an offer from an AI lab are in a uniquely good spot to influence the actions of that lab.People who care about AI safety and alignment often have things they wish labs would do. These could be requests about prioritizing alignment and safety (eg. having a sufficiently staffed alignment team, having a public and credible safety and alignment plan), good governance (eg. having a mission, board structure, and entity structure that allows safety and alignment to be prioritized), information security, or similar. This post by Holden goes through some lab asks, but take this as illustrative, not exhaustive!So you probably have, or could generate, some practices or structures you wish labs would have in the realm of safety and alignment. Once you have received an offer to work for a lab, that lab suddenly cares about what you think far more than when you are someone who is just writing forum posts or tweeting at them.This post will go through some ways to potentially influence the lab in a positive direction after you have received your offer.Does this work? This is anecdata but I have seen offer holders win concessions, and I have heard recruiters talk about how these sorts of behaviors influence the lab’s strategy.We also have reason to expect this works given that hiring good ML and AI researchers is competitive, and that businesses have changed aspects about themselves in the past partially to help with recruitment. Some efforts for gender or ethnic diversity or environmental sustainability are taken so that hiring from groups who care about these things doesn’t become too difficult. One example is that Google changed its sexual harassment rules and did not renew its contract with the Pentagon over mass employee pushback. Of course some of this stuff they may have intrinsically cared about or done to appease the customers or the public at large, but it seems employees have a more direct lever and have successfully used it.The StrategyThere are steps you can take at different stages of your hiring process. The best time to do this is when you have received an offer, because then you know they are interested in you and so will care about your opinion.Follow up call(s) or email just after receiving offerIn the follow up call after your offer you can express any concerns before you join. This is a good time to make requests. I recommend being polite, grateful for the offer, and framing these as “Well, look I’m excited about the role but I just have some uncertainties or aspects that if they were addressed would make this is a really easy decision for me”Some example asks:I want the safety/alignment team to be largerI want to see more public comms about alignment strategyI would like to see coordination with other labs on safety standards and slower scaling, as well as independent auditing of safety and security effortsI want an empowered, independent boardTheory of change:They might actually grant requests! I have seen this happen. If they don’t, they will still hear that information and if enough people say it, they may grant it in the future. This also sets you up for the next alternative which is.When you turn down an offerIf you end up turning down the offer, either to work at another AI lab or some other entity, you should tell them why you did. If you partially turned them down because of concerns about their strategy or that they didn’t fulfill one of your asks, tell them!The most direct way to do this is to email your recruiter. Eg. write to the recruiter something like:“Thanks for this offer. I decided to turn it down...]]>
oxalis https://forum.effectivealtruism.org/posts/PMFoxr62AeLEwPAH9/potential-employees-have-a-unique-lever-to-influence-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Potential employees have a unique lever to influence the behaviors of AI labs, published by oxalis on March 18, 2023 on The Effective Altruism Forum.(Cross posted from my personal blog)People who have received and are considering an offer from an AI lab are in a uniquely good spot to influence the actions of that lab.People who care about AI safety and alignment often have things they wish labs would do. These could be requests about prioritizing alignment and safety (eg. having a sufficiently staffed alignment team, having a public and credible safety and alignment plan), good governance (eg. having a mission, board structure, and entity structure that allows safety and alignment to be prioritized), information security, or similar. This post by Holden goes through some lab asks, but take this as illustrative, not exhaustive!So you probably have, or could generate, some practices or structures you wish labs would have in the realm of safety and alignment. Once you have received an offer to work for a lab, that lab suddenly cares about what you think far more than when you are someone who is just writing forum posts or tweeting at them.This post will go through some ways to potentially influence the lab in a positive direction after you have received your offer.Does this work? This is anecdata but I have seen offer holders win concessions, and I have heard recruiters talk about how these sorts of behaviors influence the lab’s strategy.We also have reason to expect this works given that hiring good ML and AI researchers is competitive, and that businesses have changed aspects about themselves in the past partially to help with recruitment. Some efforts for gender or ethnic diversity or environmental sustainability are taken so that hiring from groups who care about these things doesn’t become too difficult. One example is that Google changed its sexual harassment rules and did not renew its contract with the Pentagon over mass employee pushback. Of course some of this stuff they may have intrinsically cared about or done to appease the customers or the public at large, but it seems employees have a more direct lever and have successfully used it.The StrategyThere are steps you can take at different stages of your hiring process. The best time to do this is when you have received an offer, because then you know they are interested in you and so will care about your opinion.Follow up call(s) or email just after receiving offerIn the follow up call after your offer you can express any concerns before you join. This is a good time to make requests. I recommend being polite, grateful for the offer, and framing these as “Well, look I’m excited about the role but I just have some uncertainties or aspects that if they were addressed would make this is a really easy decision for me”Some example asks:I want the safety/alignment team to be largerI want to see more public comms about alignment strategyI would like to see coordination with other labs on safety standards and slower scaling, as well as independent auditing of safety and security effortsI want an empowered, independent boardTheory of change:They might actually grant requests! I have seen this happen. If they don’t, they will still hear that information and if enough people say it, they may grant it in the future. This also sets you up for the next alternative which is.When you turn down an offerIf you end up turning down the offer, either to work at another AI lab or some other entity, you should tell them why you did. If you partially turned them down because of concerns about their strategy or that they didn’t fulfill one of your asks, tell them!The most direct way to do this is to email your recruiter. Eg. write to the recruiter something like:“Thanks for this offer. I decided to turn it down...]]>
Sat, 18 Mar 2023 23:44:35 +0000 EA - Potential employees have a unique lever to influence the behaviors of AI labs by oxalis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Potential employees have a unique lever to influence the behaviors of AI labs, published by oxalis on March 18, 2023 on The Effective Altruism Forum.(Cross posted from my personal blog)People who have received and are considering an offer from an AI lab are in a uniquely good spot to influence the actions of that lab.People who care about AI safety and alignment often have things they wish labs would do. These could be requests about prioritizing alignment and safety (eg. having a sufficiently staffed alignment team, having a public and credible safety and alignment plan), good governance (eg. having a mission, board structure, and entity structure that allows safety and alignment to be prioritized), information security, or similar. This post by Holden goes through some lab asks, but take this as illustrative, not exhaustive!So you probably have, or could generate, some practices or structures you wish labs would have in the realm of safety and alignment. Once you have received an offer to work for a lab, that lab suddenly cares about what you think far more than when you are someone who is just writing forum posts or tweeting at them.This post will go through some ways to potentially influence the lab in a positive direction after you have received your offer.Does this work? This is anecdata but I have seen offer holders win concessions, and I have heard recruiters talk about how these sorts of behaviors influence the lab’s strategy.We also have reason to expect this works given that hiring good ML and AI researchers is competitive, and that businesses have changed aspects about themselves in the past partially to help with recruitment. Some efforts for gender or ethnic diversity or environmental sustainability are taken so that hiring from groups who care about these things doesn’t become too difficult. One example is that Google changed its sexual harassment rules and did not renew its contract with the Pentagon over mass employee pushback. Of course some of this stuff they may have intrinsically cared about or done to appease the customers or the public at large, but it seems employees have a more direct lever and have successfully used it.The StrategyThere are steps you can take at different stages of your hiring process. The best time to do this is when you have received an offer, because then you know they are interested in you and so will care about your opinion.Follow up call(s) or email just after receiving offerIn the follow up call after your offer you can express any concerns before you join. This is a good time to make requests. I recommend being polite, grateful for the offer, and framing these as “Well, look I’m excited about the role but I just have some uncertainties or aspects that if they were addressed would make this is a really easy decision for me”Some example asks:I want the safety/alignment team to be largerI want to see more public comms about alignment strategyI would like to see coordination with other labs on safety standards and slower scaling, as well as independent auditing of safety and security effortsI want an empowered, independent boardTheory of change:They might actually grant requests! I have seen this happen. If they don’t, they will still hear that information and if enough people say it, they may grant it in the future. This also sets you up for the next alternative which is.When you turn down an offerIf you end up turning down the offer, either to work at another AI lab or some other entity, you should tell them why you did. If you partially turned them down because of concerns about their strategy or that they didn’t fulfill one of your asks, tell them!The most direct way to do this is to email your recruiter. Eg. write to the recruiter something like:“Thanks for this offer. I decided to turn it down...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Potential employees have a unique lever to influence the behaviors of AI labs, published by oxalis on March 18, 2023 on The Effective Altruism Forum.(Cross posted from my personal blog)People who have received and are considering an offer from an AI lab are in a uniquely good spot to influence the actions of that lab.People who care about AI safety and alignment often have things they wish labs would do. These could be requests about prioritizing alignment and safety (eg. having a sufficiently staffed alignment team, having a public and credible safety and alignment plan), good governance (eg. having a mission, board structure, and entity structure that allows safety and alignment to be prioritized), information security, or similar. This post by Holden goes through some lab asks, but take this as illustrative, not exhaustive!So you probably have, or could generate, some practices or structures you wish labs would have in the realm of safety and alignment. Once you have received an offer to work for a lab, that lab suddenly cares about what you think far more than when you are someone who is just writing forum posts or tweeting at them.This post will go through some ways to potentially influence the lab in a positive direction after you have received your offer.Does this work? This is anecdata but I have seen offer holders win concessions, and I have heard recruiters talk about how these sorts of behaviors influence the lab’s strategy.We also have reason to expect this works given that hiring good ML and AI researchers is competitive, and that businesses have changed aspects about themselves in the past partially to help with recruitment. Some efforts for gender or ethnic diversity or environmental sustainability are taken so that hiring from groups who care about these things doesn’t become too difficult. One example is that Google changed its sexual harassment rules and did not renew its contract with the Pentagon over mass employee pushback. Of course some of this stuff they may have intrinsically cared about or done to appease the customers or the public at large, but it seems employees have a more direct lever and have successfully used it.The StrategyThere are steps you can take at different stages of your hiring process. The best time to do this is when you have received an offer, because then you know they are interested in you and so will care about your opinion.Follow up call(s) or email just after receiving offerIn the follow up call after your offer you can express any concerns before you join. This is a good time to make requests. I recommend being polite, grateful for the offer, and framing these as “Well, look I’m excited about the role but I just have some uncertainties or aspects that if they were addressed would make this is a really easy decision for me”Some example asks:I want the safety/alignment team to be largerI want to see more public comms about alignment strategyI would like to see coordination with other labs on safety standards and slower scaling, as well as independent auditing of safety and security effortsI want an empowered, independent boardTheory of change:They might actually grant requests! I have seen this happen. If they don’t, they will still hear that information and if enough people say it, they may grant it in the future. This also sets you up for the next alternative which is.When you turn down an offerIf you end up turning down the offer, either to work at another AI lab or some other entity, you should tell them why you did. If you partially turned them down because of concerns about their strategy or that they didn’t fulfill one of your asks, tell them!The most direct way to do this is to email your recruiter. Eg. write to the recruiter something like:“Thanks for this offer. I decided to turn it down...]]>
oxalis https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:39 None full 5282
bv2rnSYLsaegGrnmt_NL_EA_EA EA - Researching Priorities in Local Contexts by LuisMota Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Researching Priorities in Local Contexts, published by LuisMota on March 18, 2023 on The Effective Altruism Forum.SummaryThis post explores two ways in which EAs can adapt priorities to local contexts they face:Trying to do the most good at a global level, given specific local resources.Trying to do the most good at a local level.I argue that the best framing for EAs to use is the first of these. I also explore when doing good at the local level might be the best way to do the most good from a global perspective, and suggest a way to explore this possibility in practice.IntroductionEffective Altruism is a global movement that aims to use resources as effectively as possible with the purpose of doing good. Members of this global community face different realities and challenges, which means that there is no one-size-fits-all path to making the world a better place. This requires local groups to adapt EA research and advice to their specific contexts.Currently, there is limited guidance on how to do this, and many approaches have been adopted. Research done with this purpose is known as local priorities research, and includes projects like local charity evaluation and local career advice. However, the exact goal of such an adaptation process has often been unclear, in a way that can come at the cost of doing the most good from a global perspective.This post seeks to improve the local group prioritization framework. I break down the current usage of local priorities research into two different approaches: one seeks to do the most good impartially in light of the local context, and the other aims to do the most good for the local region. I make the case that EA groups should focus on the first approach, and discuss various ways in which this could influence local group prioritization research.Existing concepts in priorities researchTo begin, it's useful to start this discussion with the definition of global priorities research (GPR). The definition I'll use throughout this post is the following, adapted from the definition of the term used by the Global Priorities Institute:Global Priorities Research is research that informs use of resources, seeking to do as much good as possible.“Resources” here includes things like talent, money, and social connections. The agents who have these resources can also vary; ranging from individuals trying to decide what to do with their careers, organizations defining which projects to work on, or community builders trying to figure out what the best directions for their group are.On the other hand, local priorities research (LPR) is the term frequently used to refer to research aimed at adapting priorities to local situations. The essential idea behind this concept is that, as one post puts it, it is “quite similar [to GPR], except that it’s narrowed down to a certain country”. That post defines it as follows.While GPR is about figuring out what are the most important global problems to work on, LPR is about figuring out what are the most important problems in a local context that can best maximise impact both locally and globally.This term is used to describe many research activities, including:Local cause area prioritizationCharity evaluationHigh-impact local career pathway researchGiving and philanthropy landscape researchSome examples of projects within local priorities research include EA Singapore's cause prioritization report, which identifies AI safety and alternative proteins as Singapore's comparative advantages; the Brazilian charity Doebem, which aims to identify the best health and development charities in Brazil; and EA Philippines's cause prioritization report, which identifies 11 potential focus areas for work in the country, ranging from poverty alleviation in the Philippines to building the EA movem...]]>
LuisMota https://forum.effectivealtruism.org/posts/bv2rnSYLsaegGrnmt/researching-priorities-in-local-contexts Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Researching Priorities in Local Contexts, published by LuisMota on March 18, 2023 on The Effective Altruism Forum.SummaryThis post explores two ways in which EAs can adapt priorities to local contexts they face:Trying to do the most good at a global level, given specific local resources.Trying to do the most good at a local level.I argue that the best framing for EAs to use is the first of these. I also explore when doing good at the local level might be the best way to do the most good from a global perspective, and suggest a way to explore this possibility in practice.IntroductionEffective Altruism is a global movement that aims to use resources as effectively as possible with the purpose of doing good. Members of this global community face different realities and challenges, which means that there is no one-size-fits-all path to making the world a better place. This requires local groups to adapt EA research and advice to their specific contexts.Currently, there is limited guidance on how to do this, and many approaches have been adopted. Research done with this purpose is known as local priorities research, and includes projects like local charity evaluation and local career advice. However, the exact goal of such an adaptation process has often been unclear, in a way that can come at the cost of doing the most good from a global perspective.This post seeks to improve the local group prioritization framework. I break down the current usage of local priorities research into two different approaches: one seeks to do the most good impartially in light of the local context, and the other aims to do the most good for the local region. I make the case that EA groups should focus on the first approach, and discuss various ways in which this could influence local group prioritization research.Existing concepts in priorities researchTo begin, it's useful to start this discussion with the definition of global priorities research (GPR). The definition I'll use throughout this post is the following, adapted from the definition of the term used by the Global Priorities Institute:Global Priorities Research is research that informs use of resources, seeking to do as much good as possible.“Resources” here includes things like talent, money, and social connections. The agents who have these resources can also vary; ranging from individuals trying to decide what to do with their careers, organizations defining which projects to work on, or community builders trying to figure out what the best directions for their group are.On the other hand, local priorities research (LPR) is the term frequently used to refer to research aimed at adapting priorities to local situations. The essential idea behind this concept is that, as one post puts it, it is “quite similar [to GPR], except that it’s narrowed down to a certain country”. That post defines it as follows.While GPR is about figuring out what are the most important global problems to work on, LPR is about figuring out what are the most important problems in a local context that can best maximise impact both locally and globally.This term is used to describe many research activities, including:Local cause area prioritizationCharity evaluationHigh-impact local career pathway researchGiving and philanthropy landscape researchSome examples of projects within local priorities research include EA Singapore's cause prioritization report, which identifies AI safety and alternative proteins as Singapore's comparative advantages; the Brazilian charity Doebem, which aims to identify the best health and development charities in Brazil; and EA Philippines's cause prioritization report, which identifies 11 potential focus areas for work in the country, ranging from poverty alleviation in the Philippines to building the EA movem...]]>
Sat, 18 Mar 2023 22:00:24 +0000 EA - Researching Priorities in Local Contexts by LuisMota Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Researching Priorities in Local Contexts, published by LuisMota on March 18, 2023 on The Effective Altruism Forum.SummaryThis post explores two ways in which EAs can adapt priorities to local contexts they face:Trying to do the most good at a global level, given specific local resources.Trying to do the most good at a local level.I argue that the best framing for EAs to use is the first of these. I also explore when doing good at the local level might be the best way to do the most good from a global perspective, and suggest a way to explore this possibility in practice.IntroductionEffective Altruism is a global movement that aims to use resources as effectively as possible with the purpose of doing good. Members of this global community face different realities and challenges, which means that there is no one-size-fits-all path to making the world a better place. This requires local groups to adapt EA research and advice to their specific contexts.Currently, there is limited guidance on how to do this, and many approaches have been adopted. Research done with this purpose is known as local priorities research, and includes projects like local charity evaluation and local career advice. However, the exact goal of such an adaptation process has often been unclear, in a way that can come at the cost of doing the most good from a global perspective.This post seeks to improve the local group prioritization framework. I break down the current usage of local priorities research into two different approaches: one seeks to do the most good impartially in light of the local context, and the other aims to do the most good for the local region. I make the case that EA groups should focus on the first approach, and discuss various ways in which this could influence local group prioritization research.Existing concepts in priorities researchTo begin, it's useful to start this discussion with the definition of global priorities research (GPR). The definition I'll use throughout this post is the following, adapted from the definition of the term used by the Global Priorities Institute:Global Priorities Research is research that informs use of resources, seeking to do as much good as possible.“Resources” here includes things like talent, money, and social connections. The agents who have these resources can also vary; ranging from individuals trying to decide what to do with their careers, organizations defining which projects to work on, or community builders trying to figure out what the best directions for their group are.On the other hand, local priorities research (LPR) is the term frequently used to refer to research aimed at adapting priorities to local situations. The essential idea behind this concept is that, as one post puts it, it is “quite similar [to GPR], except that it’s narrowed down to a certain country”. That post defines it as follows.While GPR is about figuring out what are the most important global problems to work on, LPR is about figuring out what are the most important problems in a local context that can best maximise impact both locally and globally.This term is used to describe many research activities, including:Local cause area prioritizationCharity evaluationHigh-impact local career pathway researchGiving and philanthropy landscape researchSome examples of projects within local priorities research include EA Singapore's cause prioritization report, which identifies AI safety and alternative proteins as Singapore's comparative advantages; the Brazilian charity Doebem, which aims to identify the best health and development charities in Brazil; and EA Philippines's cause prioritization report, which identifies 11 potential focus areas for work in the country, ranging from poverty alleviation in the Philippines to building the EA movem...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Researching Priorities in Local Contexts, published by LuisMota on March 18, 2023 on The Effective Altruism Forum.SummaryThis post explores two ways in which EAs can adapt priorities to local contexts they face:Trying to do the most good at a global level, given specific local resources.Trying to do the most good at a local level.I argue that the best framing for EAs to use is the first of these. I also explore when doing good at the local level might be the best way to do the most good from a global perspective, and suggest a way to explore this possibility in practice.IntroductionEffective Altruism is a global movement that aims to use resources as effectively as possible with the purpose of doing good. Members of this global community face different realities and challenges, which means that there is no one-size-fits-all path to making the world a better place. This requires local groups to adapt EA research and advice to their specific contexts.Currently, there is limited guidance on how to do this, and many approaches have been adopted. Research done with this purpose is known as local priorities research, and includes projects like local charity evaluation and local career advice. However, the exact goal of such an adaptation process has often been unclear, in a way that can come at the cost of doing the most good from a global perspective.This post seeks to improve the local group prioritization framework. I break down the current usage of local priorities research into two different approaches: one seeks to do the most good impartially in light of the local context, and the other aims to do the most good for the local region. I make the case that EA groups should focus on the first approach, and discuss various ways in which this could influence local group prioritization research.Existing concepts in priorities researchTo begin, it's useful to start this discussion with the definition of global priorities research (GPR). The definition I'll use throughout this post is the following, adapted from the definition of the term used by the Global Priorities Institute:Global Priorities Research is research that informs use of resources, seeking to do as much good as possible.“Resources” here includes things like talent, money, and social connections. The agents who have these resources can also vary; ranging from individuals trying to decide what to do with their careers, organizations defining which projects to work on, or community builders trying to figure out what the best directions for their group are.On the other hand, local priorities research (LPR) is the term frequently used to refer to research aimed at adapting priorities to local situations. The essential idea behind this concept is that, as one post puts it, it is “quite similar [to GPR], except that it’s narrowed down to a certain country”. That post defines it as follows.While GPR is about figuring out what are the most important global problems to work on, LPR is about figuring out what are the most important problems in a local context that can best maximise impact both locally and globally.This term is used to describe many research activities, including:Local cause area prioritizationCharity evaluationHigh-impact local career pathway researchGiving and philanthropy landscape researchSome examples of projects within local priorities research include EA Singapore's cause prioritization report, which identifies AI safety and alternative proteins as Singapore's comparative advantages; the Brazilian charity Doebem, which aims to identify the best health and development charities in Brazil; and EA Philippines's cause prioritization report, which identifies 11 potential focus areas for work in the country, ranging from poverty alleviation in the Philippines to building the EA movem...]]>
LuisMota https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:58 None full 5275
5d7P4gFpomfeLCHZw_NL_EA_EA EA - Unjournal: Evaluations of "Artificial Intelligence and Economic Growth", and new hosting space by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unjournal: Evaluations of "Artificial Intelligence and Economic Growth", and new hosting space, published by david reinstein on March 17, 2023 on The Effective Altruism Forum.New set of evaluationsThe Unjournal evaluations of Artificial Intelligence and Economic Growth, by prominent economists Philippe Aghion, Benjamin F. Jones, Charles I. Jones – are up. You can read these on our new PubPub community space , along with my discussion of the process and the insights and the 'evaluation metrics', and the authors' response. Thanks to the authors for their participation (reward early-adopters who stick their necks out!), and thanks to Philip Trammel and Seth Benzell for detailed and insightful evaluation.I discussed some of the reasons we 'took on' this paper in an earlier post. The discussion of AI's impact on the economy, what it might look like (in magnitude and in its composition), how to measure and model it, and what conditions lead to "growth explosions", seem especially relevant to recent events and discussion."Self-correcting" science?I'm particularly happy about one outcome here.If you were a graduate student reading the paper, or were a professional delving into the economics literature, and had seen the last step of the equations pasted below (from the originally published paper/chapter), what would you think?The final step in fact contains an error; the claimed implication does not follow.From my discussion:... we rarely see referees and colleagues actually reading and checking the math and proofs in their peers’ papers. Here Phil Trammel did so and spotted an error in a proof of one of the central results of the paper (the ‘singularity’ in Example 3). ... The authors have acknowledged this error ... confirmed the revised proof, and link a marked up version on their page. This is ‘self-correcting research’, and it’s great!Even though the same result was preserved, I believe this provides a valuable service.Readers of the paper who saw the incorrect proof (particularly students) might be deeply confused. They might think ‘Can I trust this papers’ other statements?’ ‘Am I deeply misunderstanding something here? Am I not suited for this work?’ Personally, this happened to me a lot in graduate school; at least some of the time it may have been because of errors and typos in the paper. I suspect many math-driven paper also contain flaws which are never spotted, and these sometimes may affect the substantive results (unlike in the present case).By the way, the marked up 'corrected' paper is here, and the corrected proof is here. (Caveat: Philip and the authors have agreed on the revised corrected proof, it might benefit from an independent verification.)New (additional) platform: PubPubWe are trying out the PubPub platform. We are still maintaining our Sciety page, and we aim to import the content from one to the other, for greater visibility. Some immediate benefits of PubPub...It lets us assign 'digital object identifiers' (DOIs) for each evaluation, response, and summary. It puts these and the works referenced into the 'CrossRef' database.Jointly, this should (hopefully) enable indexing in Google Scholar and other academic search engines,And 'bibliometrics' (citation counts etc.0It seems to enable evaluations of work hosted anywhere that has a DOI (published, preprints, etc.)It's versatile and full-featured, enabling input from and output from a range of formats, as well as community input and discussionIt's funded by a non-profit and seems fairly mission-alignedMore coming soon, updatesThe Unjournal has several more impactful papers evaluated and being evaluated, which we hope to post soon. For a sense of what's coming, see our 'Direct Evaluation track' focusing on NBER working papers.Some other updates:We are pursuing collaborations wit...]]>
david reinstein https://forum.effectivealtruism.org/posts/5d7P4gFpomfeLCHZw/unjournal-evaluations-of-artificial-intelligence-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unjournal: Evaluations of "Artificial Intelligence and Economic Growth", and new hosting space, published by david reinstein on March 17, 2023 on The Effective Altruism Forum.New set of evaluationsThe Unjournal evaluations of Artificial Intelligence and Economic Growth, by prominent economists Philippe Aghion, Benjamin F. Jones, Charles I. Jones – are up. You can read these on our new PubPub community space , along with my discussion of the process and the insights and the 'evaluation metrics', and the authors' response. Thanks to the authors for their participation (reward early-adopters who stick their necks out!), and thanks to Philip Trammel and Seth Benzell for detailed and insightful evaluation.I discussed some of the reasons we 'took on' this paper in an earlier post. The discussion of AI's impact on the economy, what it might look like (in magnitude and in its composition), how to measure and model it, and what conditions lead to "growth explosions", seem especially relevant to recent events and discussion."Self-correcting" science?I'm particularly happy about one outcome here.If you were a graduate student reading the paper, or were a professional delving into the economics literature, and had seen the last step of the equations pasted below (from the originally published paper/chapter), what would you think?The final step in fact contains an error; the claimed implication does not follow.From my discussion:... we rarely see referees and colleagues actually reading and checking the math and proofs in their peers’ papers. Here Phil Trammel did so and spotted an error in a proof of one of the central results of the paper (the ‘singularity’ in Example 3). ... The authors have acknowledged this error ... confirmed the revised proof, and link a marked up version on their page. This is ‘self-correcting research’, and it’s great!Even though the same result was preserved, I believe this provides a valuable service.Readers of the paper who saw the incorrect proof (particularly students) might be deeply confused. They might think ‘Can I trust this papers’ other statements?’ ‘Am I deeply misunderstanding something here? Am I not suited for this work?’ Personally, this happened to me a lot in graduate school; at least some of the time it may have been because of errors and typos in the paper. I suspect many math-driven paper also contain flaws which are never spotted, and these sometimes may affect the substantive results (unlike in the present case).By the way, the marked up 'corrected' paper is here, and the corrected proof is here. (Caveat: Philip and the authors have agreed on the revised corrected proof, it might benefit from an independent verification.)New (additional) platform: PubPubWe are trying out the PubPub platform. We are still maintaining our Sciety page, and we aim to import the content from one to the other, for greater visibility. Some immediate benefits of PubPub...It lets us assign 'digital object identifiers' (DOIs) for each evaluation, response, and summary. It puts these and the works referenced into the 'CrossRef' database.Jointly, this should (hopefully) enable indexing in Google Scholar and other academic search engines,And 'bibliometrics' (citation counts etc.0It seems to enable evaluations of work hosted anywhere that has a DOI (published, preprints, etc.)It's versatile and full-featured, enabling input from and output from a range of formats, as well as community input and discussionIt's funded by a non-profit and seems fairly mission-alignedMore coming soon, updatesThe Unjournal has several more impactful papers evaluated and being evaluated, which we hope to post soon. For a sense of what's coming, see our 'Direct Evaluation track' focusing on NBER working papers.Some other updates:We are pursuing collaborations wit...]]>
Sat, 18 Mar 2023 13:53:49 +0000 EA - Unjournal: Evaluations of "Artificial Intelligence and Economic Growth", and new hosting space by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unjournal: Evaluations of "Artificial Intelligence and Economic Growth", and new hosting space, published by david reinstein on March 17, 2023 on The Effective Altruism Forum.New set of evaluationsThe Unjournal evaluations of Artificial Intelligence and Economic Growth, by prominent economists Philippe Aghion, Benjamin F. Jones, Charles I. Jones – are up. You can read these on our new PubPub community space , along with my discussion of the process and the insights and the 'evaluation metrics', and the authors' response. Thanks to the authors for their participation (reward early-adopters who stick their necks out!), and thanks to Philip Trammel and Seth Benzell for detailed and insightful evaluation.I discussed some of the reasons we 'took on' this paper in an earlier post. The discussion of AI's impact on the economy, what it might look like (in magnitude and in its composition), how to measure and model it, and what conditions lead to "growth explosions", seem especially relevant to recent events and discussion."Self-correcting" science?I'm particularly happy about one outcome here.If you were a graduate student reading the paper, or were a professional delving into the economics literature, and had seen the last step of the equations pasted below (from the originally published paper/chapter), what would you think?The final step in fact contains an error; the claimed implication does not follow.From my discussion:... we rarely see referees and colleagues actually reading and checking the math and proofs in their peers’ papers. Here Phil Trammel did so and spotted an error in a proof of one of the central results of the paper (the ‘singularity’ in Example 3). ... The authors have acknowledged this error ... confirmed the revised proof, and link a marked up version on their page. This is ‘self-correcting research’, and it’s great!Even though the same result was preserved, I believe this provides a valuable service.Readers of the paper who saw the incorrect proof (particularly students) might be deeply confused. They might think ‘Can I trust this papers’ other statements?’ ‘Am I deeply misunderstanding something here? Am I not suited for this work?’ Personally, this happened to me a lot in graduate school; at least some of the time it may have been because of errors and typos in the paper. I suspect many math-driven paper also contain flaws which are never spotted, and these sometimes may affect the substantive results (unlike in the present case).By the way, the marked up 'corrected' paper is here, and the corrected proof is here. (Caveat: Philip and the authors have agreed on the revised corrected proof, it might benefit from an independent verification.)New (additional) platform: PubPubWe are trying out the PubPub platform. We are still maintaining our Sciety page, and we aim to import the content from one to the other, for greater visibility. Some immediate benefits of PubPub...It lets us assign 'digital object identifiers' (DOIs) for each evaluation, response, and summary. It puts these and the works referenced into the 'CrossRef' database.Jointly, this should (hopefully) enable indexing in Google Scholar and other academic search engines,And 'bibliometrics' (citation counts etc.0It seems to enable evaluations of work hosted anywhere that has a DOI (published, preprints, etc.)It's versatile and full-featured, enabling input from and output from a range of formats, as well as community input and discussionIt's funded by a non-profit and seems fairly mission-alignedMore coming soon, updatesThe Unjournal has several more impactful papers evaluated and being evaluated, which we hope to post soon. For a sense of what's coming, see our 'Direct Evaluation track' focusing on NBER working papers.Some other updates:We are pursuing collaborations wit...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unjournal: Evaluations of "Artificial Intelligence and Economic Growth", and new hosting space, published by david reinstein on March 17, 2023 on The Effective Altruism Forum.New set of evaluationsThe Unjournal evaluations of Artificial Intelligence and Economic Growth, by prominent economists Philippe Aghion, Benjamin F. Jones, Charles I. Jones – are up. You can read these on our new PubPub community space , along with my discussion of the process and the insights and the 'evaluation metrics', and the authors' response. Thanks to the authors for their participation (reward early-adopters who stick their necks out!), and thanks to Philip Trammel and Seth Benzell for detailed and insightful evaluation.I discussed some of the reasons we 'took on' this paper in an earlier post. The discussion of AI's impact on the economy, what it might look like (in magnitude and in its composition), how to measure and model it, and what conditions lead to "growth explosions", seem especially relevant to recent events and discussion."Self-correcting" science?I'm particularly happy about one outcome here.If you were a graduate student reading the paper, or were a professional delving into the economics literature, and had seen the last step of the equations pasted below (from the originally published paper/chapter), what would you think?The final step in fact contains an error; the claimed implication does not follow.From my discussion:... we rarely see referees and colleagues actually reading and checking the math and proofs in their peers’ papers. Here Phil Trammel did so and spotted an error in a proof of one of the central results of the paper (the ‘singularity’ in Example 3). ... The authors have acknowledged this error ... confirmed the revised proof, and link a marked up version on their page. This is ‘self-correcting research’, and it’s great!Even though the same result was preserved, I believe this provides a valuable service.Readers of the paper who saw the incorrect proof (particularly students) might be deeply confused. They might think ‘Can I trust this papers’ other statements?’ ‘Am I deeply misunderstanding something here? Am I not suited for this work?’ Personally, this happened to me a lot in graduate school; at least some of the time it may have been because of errors and typos in the paper. I suspect many math-driven paper also contain flaws which are never spotted, and these sometimes may affect the substantive results (unlike in the present case).By the way, the marked up 'corrected' paper is here, and the corrected proof is here. (Caveat: Philip and the authors have agreed on the revised corrected proof, it might benefit from an independent verification.)New (additional) platform: PubPubWe are trying out the PubPub platform. We are still maintaining our Sciety page, and we aim to import the content from one to the other, for greater visibility. Some immediate benefits of PubPub...It lets us assign 'digital object identifiers' (DOIs) for each evaluation, response, and summary. It puts these and the works referenced into the 'CrossRef' database.Jointly, this should (hopefully) enable indexing in Google Scholar and other academic search engines,And 'bibliometrics' (citation counts etc.0It seems to enable evaluations of work hosted anywhere that has a DOI (published, preprints, etc.)It's versatile and full-featured, enabling input from and output from a range of formats, as well as community input and discussionIt's funded by a non-profit and seems fairly mission-alignedMore coming soon, updatesThe Unjournal has several more impactful papers evaluated and being evaluated, which we hope to post soon. For a sense of what's coming, see our 'Direct Evaluation track' focusing on NBER working papers.Some other updates:We are pursuing collaborations wit...]]>
david reinstein https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:12 None full 5276
GXBvATw7Why7xRDeM_NL_EA_EA EA - Why SoGive is publishing an independent evaluation of StrongMinds by ishaan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why SoGive is publishing an independent evaluation of StrongMinds, published by ishaan on March 17, 2023 on The Effective Altruism Forum.Executive summaryWe believe the EA community's confidence in the existing research on mental health charities hasn't been high enough to use it to make significant funding decisions.Further research from another EA research agency, such as SoGive, may help add confidence and lead to more well-informed funding decisions.In order to increase the amount of scrutiny on this topic, SoGive has started conducting research on mental health interventions, and we plan to publish a series of articles starting in the next week and extending out over the next few months.The series will cover literature reviews of academic and EA literature on mental health and moral weights.We will be doing in-depth reviews and quality assessments on work by the Happier Lives Institute pertaining to StrongMinds, the RCTs and academic sources from which StrongMinds draws its evidence, and StrongMinds' internally reported data.We will provide a view on how impactful we judge StrongMinds to be.What we will publishFrom March to July 2023, SoGive plans to publish a series of analyses pertaining to mental health. The content covered will includeMethodological notes on using existing academic literature, which quantifies depression interventions in terms of standardised mean differences, numbers needed to treat, remission rates and relapse rates; as well as the "standard deviation - years of depression averted" framework used by Happier Lives Institute.Broad, shallow reviews of academic and EA literature pertaining to the question of what the effect of psychotherapy is, as well as how this intersects with various factors such as number of sessions, demographics, and types of therapy.We will focus specifically on how the effect decays after therapy, and publish a separate report on this.Deep, narrow reviews of the RCTs and meta-analyses that are most closely pertaining to the StrongMind's context.Moral weights frameworks, explained in a manner which will allow a user to map dry numbers such as effect sizes to more visceral subjective feelings, so as to better apply their moral intuition to funding decisions.Cost-effective analyses which combine academic data and direct evidence from StrongMinds to arrive at our best estimate at what a donation to StrongMinds does.We hope these will empower others to check our work, do their own analyses of the topic, and take the work further.How will this enable higher impact donations?In the EA Survey conducted by Rethink Priorities, 60% of EA community members surveyed were in favour of giving "significant resources'' to mental health interventions, with 24% of those believing it should be a "top priority" or "near top priority" and 4% selecting it as their "top cause". Although other cause areas performed more favourably in the survey, this still appears to be a moderately high level of interest in mental health.Some EA energy has now gone into this area - for example, Charity Entrepreneurship incubated Canopie, Mental Health Funder's Circle, and played a role in incubating Happier Lives Institute. They additionally launched Kaya Guides and Vina Plena last year. We also had a talk from Friendship Bench at last year's EA Global.Our analysis will focus on StrongMinds. We chose StrongMinds because we know the organisation well. SoGive’s founder first had a conversation with StrongMinds in 2015 (thinking of his own donations) having seen a press article about them and having considered them a potentially high impact charity. Since then, several other EA orgs have been engaging with StrongMinds. Evaluations of StrongMinds specifically have now been published by both Founders Pledge and Happier Lives Institute, and Str...]]>
ishaan https://forum.effectivealtruism.org/posts/GXBvATw7Why7xRDeM/why-sogive-is-publishing-an-independent-evaluation-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why SoGive is publishing an independent evaluation of StrongMinds, published by ishaan on March 17, 2023 on The Effective Altruism Forum.Executive summaryWe believe the EA community's confidence in the existing research on mental health charities hasn't been high enough to use it to make significant funding decisions.Further research from another EA research agency, such as SoGive, may help add confidence and lead to more well-informed funding decisions.In order to increase the amount of scrutiny on this topic, SoGive has started conducting research on mental health interventions, and we plan to publish a series of articles starting in the next week and extending out over the next few months.The series will cover literature reviews of academic and EA literature on mental health and moral weights.We will be doing in-depth reviews and quality assessments on work by the Happier Lives Institute pertaining to StrongMinds, the RCTs and academic sources from which StrongMinds draws its evidence, and StrongMinds' internally reported data.We will provide a view on how impactful we judge StrongMinds to be.What we will publishFrom March to July 2023, SoGive plans to publish a series of analyses pertaining to mental health. The content covered will includeMethodological notes on using existing academic literature, which quantifies depression interventions in terms of standardised mean differences, numbers needed to treat, remission rates and relapse rates; as well as the "standard deviation - years of depression averted" framework used by Happier Lives Institute.Broad, shallow reviews of academic and EA literature pertaining to the question of what the effect of psychotherapy is, as well as how this intersects with various factors such as number of sessions, demographics, and types of therapy.We will focus specifically on how the effect decays after therapy, and publish a separate report on this.Deep, narrow reviews of the RCTs and meta-analyses that are most closely pertaining to the StrongMind's context.Moral weights frameworks, explained in a manner which will allow a user to map dry numbers such as effect sizes to more visceral subjective feelings, so as to better apply their moral intuition to funding decisions.Cost-effective analyses which combine academic data and direct evidence from StrongMinds to arrive at our best estimate at what a donation to StrongMinds does.We hope these will empower others to check our work, do their own analyses of the topic, and take the work further.How will this enable higher impact donations?In the EA Survey conducted by Rethink Priorities, 60% of EA community members surveyed were in favour of giving "significant resources'' to mental health interventions, with 24% of those believing it should be a "top priority" or "near top priority" and 4% selecting it as their "top cause". Although other cause areas performed more favourably in the survey, this still appears to be a moderately high level of interest in mental health.Some EA energy has now gone into this area - for example, Charity Entrepreneurship incubated Canopie, Mental Health Funder's Circle, and played a role in incubating Happier Lives Institute. They additionally launched Kaya Guides and Vina Plena last year. We also had a talk from Friendship Bench at last year's EA Global.Our analysis will focus on StrongMinds. We chose StrongMinds because we know the organisation well. SoGive’s founder first had a conversation with StrongMinds in 2015 (thinking of his own donations) having seen a press article about them and having considered them a potentially high impact charity. Since then, several other EA orgs have been engaging with StrongMinds. Evaluations of StrongMinds specifically have now been published by both Founders Pledge and Happier Lives Institute, and Str...]]>
Sat, 18 Mar 2023 00:17:54 +0000 EA - Why SoGive is publishing an independent evaluation of StrongMinds by ishaan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why SoGive is publishing an independent evaluation of StrongMinds, published by ishaan on March 17, 2023 on The Effective Altruism Forum.Executive summaryWe believe the EA community's confidence in the existing research on mental health charities hasn't been high enough to use it to make significant funding decisions.Further research from another EA research agency, such as SoGive, may help add confidence and lead to more well-informed funding decisions.In order to increase the amount of scrutiny on this topic, SoGive has started conducting research on mental health interventions, and we plan to publish a series of articles starting in the next week and extending out over the next few months.The series will cover literature reviews of academic and EA literature on mental health and moral weights.We will be doing in-depth reviews and quality assessments on work by the Happier Lives Institute pertaining to StrongMinds, the RCTs and academic sources from which StrongMinds draws its evidence, and StrongMinds' internally reported data.We will provide a view on how impactful we judge StrongMinds to be.What we will publishFrom March to July 2023, SoGive plans to publish a series of analyses pertaining to mental health. The content covered will includeMethodological notes on using existing academic literature, which quantifies depression interventions in terms of standardised mean differences, numbers needed to treat, remission rates and relapse rates; as well as the "standard deviation - years of depression averted" framework used by Happier Lives Institute.Broad, shallow reviews of academic and EA literature pertaining to the question of what the effect of psychotherapy is, as well as how this intersects with various factors such as number of sessions, demographics, and types of therapy.We will focus specifically on how the effect decays after therapy, and publish a separate report on this.Deep, narrow reviews of the RCTs and meta-analyses that are most closely pertaining to the StrongMind's context.Moral weights frameworks, explained in a manner which will allow a user to map dry numbers such as effect sizes to more visceral subjective feelings, so as to better apply their moral intuition to funding decisions.Cost-effective analyses which combine academic data and direct evidence from StrongMinds to arrive at our best estimate at what a donation to StrongMinds does.We hope these will empower others to check our work, do their own analyses of the topic, and take the work further.How will this enable higher impact donations?In the EA Survey conducted by Rethink Priorities, 60% of EA community members surveyed were in favour of giving "significant resources'' to mental health interventions, with 24% of those believing it should be a "top priority" or "near top priority" and 4% selecting it as their "top cause". Although other cause areas performed more favourably in the survey, this still appears to be a moderately high level of interest in mental health.Some EA energy has now gone into this area - for example, Charity Entrepreneurship incubated Canopie, Mental Health Funder's Circle, and played a role in incubating Happier Lives Institute. They additionally launched Kaya Guides and Vina Plena last year. We also had a talk from Friendship Bench at last year's EA Global.Our analysis will focus on StrongMinds. We chose StrongMinds because we know the organisation well. SoGive’s founder first had a conversation with StrongMinds in 2015 (thinking of his own donations) having seen a press article about them and having considered them a potentially high impact charity. Since then, several other EA orgs have been engaging with StrongMinds. Evaluations of StrongMinds specifically have now been published by both Founders Pledge and Happier Lives Institute, and Str...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why SoGive is publishing an independent evaluation of StrongMinds, published by ishaan on March 17, 2023 on The Effective Altruism Forum.Executive summaryWe believe the EA community's confidence in the existing research on mental health charities hasn't been high enough to use it to make significant funding decisions.Further research from another EA research agency, such as SoGive, may help add confidence and lead to more well-informed funding decisions.In order to increase the amount of scrutiny on this topic, SoGive has started conducting research on mental health interventions, and we plan to publish a series of articles starting in the next week and extending out over the next few months.The series will cover literature reviews of academic and EA literature on mental health and moral weights.We will be doing in-depth reviews and quality assessments on work by the Happier Lives Institute pertaining to StrongMinds, the RCTs and academic sources from which StrongMinds draws its evidence, and StrongMinds' internally reported data.We will provide a view on how impactful we judge StrongMinds to be.What we will publishFrom March to July 2023, SoGive plans to publish a series of analyses pertaining to mental health. The content covered will includeMethodological notes on using existing academic literature, which quantifies depression interventions in terms of standardised mean differences, numbers needed to treat, remission rates and relapse rates; as well as the "standard deviation - years of depression averted" framework used by Happier Lives Institute.Broad, shallow reviews of academic and EA literature pertaining to the question of what the effect of psychotherapy is, as well as how this intersects with various factors such as number of sessions, demographics, and types of therapy.We will focus specifically on how the effect decays after therapy, and publish a separate report on this.Deep, narrow reviews of the RCTs and meta-analyses that are most closely pertaining to the StrongMind's context.Moral weights frameworks, explained in a manner which will allow a user to map dry numbers such as effect sizes to more visceral subjective feelings, so as to better apply their moral intuition to funding decisions.Cost-effective analyses which combine academic data and direct evidence from StrongMinds to arrive at our best estimate at what a donation to StrongMinds does.We hope these will empower others to check our work, do their own analyses of the topic, and take the work further.How will this enable higher impact donations?In the EA Survey conducted by Rethink Priorities, 60% of EA community members surveyed were in favour of giving "significant resources'' to mental health interventions, with 24% of those believing it should be a "top priority" or "near top priority" and 4% selecting it as their "top cause". Although other cause areas performed more favourably in the survey, this still appears to be a moderately high level of interest in mental health.Some EA energy has now gone into this area - for example, Charity Entrepreneurship incubated Canopie, Mental Health Funder's Circle, and played a role in incubating Happier Lives Institute. They additionally launched Kaya Guides and Vina Plena last year. We also had a talk from Friendship Bench at last year's EA Global.Our analysis will focus on StrongMinds. We chose StrongMinds because we know the organisation well. SoGive’s founder first had a conversation with StrongMinds in 2015 (thinking of his own donations) having seen a press article about them and having considered them a potentially high impact charity. Since then, several other EA orgs have been engaging with StrongMinds. Evaluations of StrongMinds specifically have now been published by both Founders Pledge and Happier Lives Institute, and Str...]]>
ishaan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:35 None full 5271
BdWwgXrpncgdE4u5M_NL_EA_EA EA - The illusion of consensus about EA celebrities by Ben Millwood Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The illusion of consensus about EA celebrities, published by Ben Millwood on March 17, 2023 on The Effective Altruism Forum.Epistemic status: speaking for myself and hoping it generalisesI don't like everyone that I'm supposed to like:I've long thought that [redacted] was focused on all the wrong framings of the issues they discuss,[redacted] is on the wrong side of their disagreement with [redacted] and often seems to have kind of sloppy thinking about things like this,[redacted] says many sensible things but a writing style that I find intensely irritating and I struggle to get through; [redacted] is similar, but not as sensible,[redacted] is working on an important problem, but doing a kind of mediocre job of it, which might be crowding out better efforts.Why did I redact all those names? Well, my criticisms are often some mixture of:half-baked; I don't have time to evaluate everyone fairly and deeply, and don't need to in order to make choices about what to focus on,based on justifications that are not very legible or easy to communicate,not always totally central to their point or fatal to their work,kind of upsetting or discouraging to hear,often not that actionable.I want to highlight that criticisms like this will usually not surface, and while in individual instances this is sensible, in aggregate it may contribute to a misleading view of how we view our celebrities and leaders. We end up seeming more deferential and hero-worshipping than we really are. This is bad for two reasons:it harms our credibility in the eyes of outsiders (or insiders, even) who have negative views of those people,it projects the wrong expectation to newcomers who trust us and want to learn or adopt our attitudes.What to do about it?I think "just criticise people more" in isolation is not a good solution. People, even respected people in positions of leadership, often seem to already find posting on the Forum a stressful experience, and I think tipping that balance in the more brutal direction seems likely to cost more than it gains.I think you could imagine major cultural changes around how people give and receive feedback that could make this better, mitigate catastrophising about negative feedback, and ensure people feel safe to risk making mistakes or exposing their oversights. But those seem to me like heavy, ambitious pieces of cultural engineering that require a lot of buy-in to get going, and even if successful may incur ongoing frictional costs. Here's smaller, simpler things that could help:Write a forum post about it (this one's taken, sorry),Make disagreements more visible and more legible, especially among leaders or experts. I really enjoyed the debate between Will MacAskill and Toby Ord in the comments of Are we living at the most influential time in history? ­– you can't come away from that discussion thinking "oh, whatever the smart, respected people in EA think must be right", because either way at least one of them will disagree with you!There's a lot of disagreement on the Forum all the time, of course, but I have a (somewhat unfair) vibe of this as the famous people deposit their work into the forum and leave for higher pursuits, and then we in the peanut gallery argue over it.I'd love it if there were (say) a document out there that Redwood Research and Anthropic both endorsed, that described how their agendas differ and what underlying disagreements lead to those differences.Make sure people incoming to the community, or at the periphery of the community, are inoculated against this bias, if you spot it. Point out that people usually have a mix of good and bad ideas. Have some go-to examples of respected people's blind spots or mistakes, at least as they appear to you. (Even if you never end up explaining them to anyone, it's probably goo...]]>
Ben Millwood https://forum.effectivealtruism.org/posts/BdWwgXrpncgdE4u5M/the-illusion-of-consensus-about-ea-celebrities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The illusion of consensus about EA celebrities, published by Ben Millwood on March 17, 2023 on The Effective Altruism Forum.Epistemic status: speaking for myself and hoping it generalisesI don't like everyone that I'm supposed to like:I've long thought that [redacted] was focused on all the wrong framings of the issues they discuss,[redacted] is on the wrong side of their disagreement with [redacted] and often seems to have kind of sloppy thinking about things like this,[redacted] says many sensible things but a writing style that I find intensely irritating and I struggle to get through; [redacted] is similar, but not as sensible,[redacted] is working on an important problem, but doing a kind of mediocre job of it, which might be crowding out better efforts.Why did I redact all those names? Well, my criticisms are often some mixture of:half-baked; I don't have time to evaluate everyone fairly and deeply, and don't need to in order to make choices about what to focus on,based on justifications that are not very legible or easy to communicate,not always totally central to their point or fatal to their work,kind of upsetting or discouraging to hear,often not that actionable.I want to highlight that criticisms like this will usually not surface, and while in individual instances this is sensible, in aggregate it may contribute to a misleading view of how we view our celebrities and leaders. We end up seeming more deferential and hero-worshipping than we really are. This is bad for two reasons:it harms our credibility in the eyes of outsiders (or insiders, even) who have negative views of those people,it projects the wrong expectation to newcomers who trust us and want to learn or adopt our attitudes.What to do about it?I think "just criticise people more" in isolation is not a good solution. People, even respected people in positions of leadership, often seem to already find posting on the Forum a stressful experience, and I think tipping that balance in the more brutal direction seems likely to cost more than it gains.I think you could imagine major cultural changes around how people give and receive feedback that could make this better, mitigate catastrophising about negative feedback, and ensure people feel safe to risk making mistakes or exposing their oversights. But those seem to me like heavy, ambitious pieces of cultural engineering that require a lot of buy-in to get going, and even if successful may incur ongoing frictional costs. Here's smaller, simpler things that could help:Write a forum post about it (this one's taken, sorry),Make disagreements more visible and more legible, especially among leaders or experts. I really enjoyed the debate between Will MacAskill and Toby Ord in the comments of Are we living at the most influential time in history? ­– you can't come away from that discussion thinking "oh, whatever the smart, respected people in EA think must be right", because either way at least one of them will disagree with you!There's a lot of disagreement on the Forum all the time, of course, but I have a (somewhat unfair) vibe of this as the famous people deposit their work into the forum and leave for higher pursuits, and then we in the peanut gallery argue over it.I'd love it if there were (say) a document out there that Redwood Research and Anthropic both endorsed, that described how their agendas differ and what underlying disagreements lead to those differences.Make sure people incoming to the community, or at the periphery of the community, are inoculated against this bias, if you spot it. Point out that people usually have a mix of good and bad ideas. Have some go-to examples of respected people's blind spots or mistakes, at least as they appear to you. (Even if you never end up explaining them to anyone, it's probably goo...]]>
Fri, 17 Mar 2023 23:03:16 +0000 EA - The illusion of consensus about EA celebrities by Ben Millwood Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The illusion of consensus about EA celebrities, published by Ben Millwood on March 17, 2023 on The Effective Altruism Forum.Epistemic status: speaking for myself and hoping it generalisesI don't like everyone that I'm supposed to like:I've long thought that [redacted] was focused on all the wrong framings of the issues they discuss,[redacted] is on the wrong side of their disagreement with [redacted] and often seems to have kind of sloppy thinking about things like this,[redacted] says many sensible things but a writing style that I find intensely irritating and I struggle to get through; [redacted] is similar, but not as sensible,[redacted] is working on an important problem, but doing a kind of mediocre job of it, which might be crowding out better efforts.Why did I redact all those names? Well, my criticisms are often some mixture of:half-baked; I don't have time to evaluate everyone fairly and deeply, and don't need to in order to make choices about what to focus on,based on justifications that are not very legible or easy to communicate,not always totally central to their point or fatal to their work,kind of upsetting or discouraging to hear,often not that actionable.I want to highlight that criticisms like this will usually not surface, and while in individual instances this is sensible, in aggregate it may contribute to a misleading view of how we view our celebrities and leaders. We end up seeming more deferential and hero-worshipping than we really are. This is bad for two reasons:it harms our credibility in the eyes of outsiders (or insiders, even) who have negative views of those people,it projects the wrong expectation to newcomers who trust us and want to learn or adopt our attitudes.What to do about it?I think "just criticise people more" in isolation is not a good solution. People, even respected people in positions of leadership, often seem to already find posting on the Forum a stressful experience, and I think tipping that balance in the more brutal direction seems likely to cost more than it gains.I think you could imagine major cultural changes around how people give and receive feedback that could make this better, mitigate catastrophising about negative feedback, and ensure people feel safe to risk making mistakes or exposing their oversights. But those seem to me like heavy, ambitious pieces of cultural engineering that require a lot of buy-in to get going, and even if successful may incur ongoing frictional costs. Here's smaller, simpler things that could help:Write a forum post about it (this one's taken, sorry),Make disagreements more visible and more legible, especially among leaders or experts. I really enjoyed the debate between Will MacAskill and Toby Ord in the comments of Are we living at the most influential time in history? ­– you can't come away from that discussion thinking "oh, whatever the smart, respected people in EA think must be right", because either way at least one of them will disagree with you!There's a lot of disagreement on the Forum all the time, of course, but I have a (somewhat unfair) vibe of this as the famous people deposit their work into the forum and leave for higher pursuits, and then we in the peanut gallery argue over it.I'd love it if there were (say) a document out there that Redwood Research and Anthropic both endorsed, that described how their agendas differ and what underlying disagreements lead to those differences.Make sure people incoming to the community, or at the periphery of the community, are inoculated against this bias, if you spot it. Point out that people usually have a mix of good and bad ideas. Have some go-to examples of respected people's blind spots or mistakes, at least as they appear to you. (Even if you never end up explaining them to anyone, it's probably goo...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The illusion of consensus about EA celebrities, published by Ben Millwood on March 17, 2023 on The Effective Altruism Forum.Epistemic status: speaking for myself and hoping it generalisesI don't like everyone that I'm supposed to like:I've long thought that [redacted] was focused on all the wrong framings of the issues they discuss,[redacted] is on the wrong side of their disagreement with [redacted] and often seems to have kind of sloppy thinking about things like this,[redacted] says many sensible things but a writing style that I find intensely irritating and I struggle to get through; [redacted] is similar, but not as sensible,[redacted] is working on an important problem, but doing a kind of mediocre job of it, which might be crowding out better efforts.Why did I redact all those names? Well, my criticisms are often some mixture of:half-baked; I don't have time to evaluate everyone fairly and deeply, and don't need to in order to make choices about what to focus on,based on justifications that are not very legible or easy to communicate,not always totally central to their point or fatal to their work,kind of upsetting or discouraging to hear,often not that actionable.I want to highlight that criticisms like this will usually not surface, and while in individual instances this is sensible, in aggregate it may contribute to a misleading view of how we view our celebrities and leaders. We end up seeming more deferential and hero-worshipping than we really are. This is bad for two reasons:it harms our credibility in the eyes of outsiders (or insiders, even) who have negative views of those people,it projects the wrong expectation to newcomers who trust us and want to learn or adopt our attitudes.What to do about it?I think "just criticise people more" in isolation is not a good solution. People, even respected people in positions of leadership, often seem to already find posting on the Forum a stressful experience, and I think tipping that balance in the more brutal direction seems likely to cost more than it gains.I think you could imagine major cultural changes around how people give and receive feedback that could make this better, mitigate catastrophising about negative feedback, and ensure people feel safe to risk making mistakes or exposing their oversights. But those seem to me like heavy, ambitious pieces of cultural engineering that require a lot of buy-in to get going, and even if successful may incur ongoing frictional costs. Here's smaller, simpler things that could help:Write a forum post about it (this one's taken, sorry),Make disagreements more visible and more legible, especially among leaders or experts. I really enjoyed the debate between Will MacAskill and Toby Ord in the comments of Are we living at the most influential time in history? ­– you can't come away from that discussion thinking "oh, whatever the smart, respected people in EA think must be right", because either way at least one of them will disagree with you!There's a lot of disagreement on the Forum all the time, of course, but I have a (somewhat unfair) vibe of this as the famous people deposit their work into the forum and leave for higher pursuits, and then we in the peanut gallery argue over it.I'd love it if there were (say) a document out there that Redwood Research and Anthropic both endorsed, that described how their agendas differ and what underlying disagreements lead to those differences.Make sure people incoming to the community, or at the periphery of the community, are inoculated against this bias, if you spot it. Point out that people usually have a mix of good and bad ideas. Have some go-to examples of respected people's blind spots or mistakes, at least as they appear to you. (Even if you never end up explaining them to anyone, it's probably goo...]]>
Ben Millwood https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:48 None full 5273
PgQdvoPRxZbw7Kqxu_NL_EA_EA EA - Getting Better at Writing: Why and How by bgarfinkel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Better at Writing: Why and How, published by bgarfinkel on March 17, 2023 on The Effective Altruism Forum.This post is adapted from a memo I wrote a while back, for people at GovAI. It may, someday, turn out to be the first post in a series on skill-building.SummaryIf you're a researcher,[1] then you should probably try to become very good at writing. Writing well helps you spread your ideas, think clearly, and be taken seriously. Employers also care a lot about writing skills.Improving your writing is doable: it’s mostly a matter of learning guidelines and practicing. Since hardly anyone consciously works on their writing skills, you can become much better than average just by setting aside time for study and deliberate practice.Why writing skills matterHere are three reasons why writing skills matter:The main point of writing is to get your ideas into other people’s heads. Far more people will internalize your ideas if you write them up well. Good writing signals a piece is worth reading, reduces the effort needed to process it, guards against misunderstandings, and helps key ideas stick.Writing and thinking are intertwined. If you work to improve your writing on some topic, then your thinking on it will normally improve too. Writing concisely forces you to identify your most important points. Writing clearly forces you to be clear about what you believe. And structuring your piece in a logical way forces you to understand how your ideas relate to each other.People will judge you on your writing. If you want people to take you seriously, then you should try to write well. Good writing is a signal of clear thinking, conscientiousness, and genuine interest in producing useful work.For all these reasons, most organizations give a lot of weight to writing skills when they hire researchers. If you ask DC think tank staffers what they look for in candidates, they apparently mention “writing skills” more than anything else. "Writing skills" was also the first item mentioned when I recently asked the same question to someone on a lab policy team. GovAI certainly pays attention to writing when we hire. Even if you just want to impress potential employers, then, you should care a great deal about your own writing.How to get better at writingIf you want to get better at writing, here are four things you can do:Read up on guidelines: There are a lot of pieces on how good writing works. The footnote at the end of this sentence lists some short essays.[2] The best book I know is Style: Lessons in Clarity and Grace. It’s an easy-to-read textbook that offers recipe-like guidance. I would recommend this book over anything else.[3]Engage with model pieces: You can pick out a handful of well-written pieces and read them with a critical mindset. (See the next footnote for some suggestions.[4]) You might ask: What exactly is good about the pieces? How do they work? Where do they obey or violate the guidelines recommended by others?Get feedback: Flaws in your writing—especially flaws that limit comprehension—will normally be more evident to people who are coming in cold. Also, sometimes other people will simply be better than you at diagnosing and correcting certain flaws. Comments and suggest-edits can draw your attention to recurring issues in your writing and offer models for how you can correct them.Do focused rewriting: The way you’ll ultimately get better is by doing focused rewriting. Pick some imperfect pieces—ideally, pieces you’re actually working on—and simply try to make them as good as possible.[5] You can consciously draw on writing guidelines, models, and previous feedback to help you diagnose and correct their flaws. The more time you spend rewriting, the better the pieces will become. Crucially, you’ll also start to internalize the techniques you...]]>
bgarfinkel https://forum.effectivealtruism.org/posts/PgQdvoPRxZbw7Kqxu/getting-better-at-writing-why-and-how Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Better at Writing: Why and How, published by bgarfinkel on March 17, 2023 on The Effective Altruism Forum.This post is adapted from a memo I wrote a while back, for people at GovAI. It may, someday, turn out to be the first post in a series on skill-building.SummaryIf you're a researcher,[1] then you should probably try to become very good at writing. Writing well helps you spread your ideas, think clearly, and be taken seriously. Employers also care a lot about writing skills.Improving your writing is doable: it’s mostly a matter of learning guidelines and practicing. Since hardly anyone consciously works on their writing skills, you can become much better than average just by setting aside time for study and deliberate practice.Why writing skills matterHere are three reasons why writing skills matter:The main point of writing is to get your ideas into other people’s heads. Far more people will internalize your ideas if you write them up well. Good writing signals a piece is worth reading, reduces the effort needed to process it, guards against misunderstandings, and helps key ideas stick.Writing and thinking are intertwined. If you work to improve your writing on some topic, then your thinking on it will normally improve too. Writing concisely forces you to identify your most important points. Writing clearly forces you to be clear about what you believe. And structuring your piece in a logical way forces you to understand how your ideas relate to each other.People will judge you on your writing. If you want people to take you seriously, then you should try to write well. Good writing is a signal of clear thinking, conscientiousness, and genuine interest in producing useful work.For all these reasons, most organizations give a lot of weight to writing skills when they hire researchers. If you ask DC think tank staffers what they look for in candidates, they apparently mention “writing skills” more than anything else. "Writing skills" was also the first item mentioned when I recently asked the same question to someone on a lab policy team. GovAI certainly pays attention to writing when we hire. Even if you just want to impress potential employers, then, you should care a great deal about your own writing.How to get better at writingIf you want to get better at writing, here are four things you can do:Read up on guidelines: There are a lot of pieces on how good writing works. The footnote at the end of this sentence lists some short essays.[2] The best book I know is Style: Lessons in Clarity and Grace. It’s an easy-to-read textbook that offers recipe-like guidance. I would recommend this book over anything else.[3]Engage with model pieces: You can pick out a handful of well-written pieces and read them with a critical mindset. (See the next footnote for some suggestions.[4]) You might ask: What exactly is good about the pieces? How do they work? Where do they obey or violate the guidelines recommended by others?Get feedback: Flaws in your writing—especially flaws that limit comprehension—will normally be more evident to people who are coming in cold. Also, sometimes other people will simply be better than you at diagnosing and correcting certain flaws. Comments and suggest-edits can draw your attention to recurring issues in your writing and offer models for how you can correct them.Do focused rewriting: The way you’ll ultimately get better is by doing focused rewriting. Pick some imperfect pieces—ideally, pieces you’re actually working on—and simply try to make them as good as possible.[5] You can consciously draw on writing guidelines, models, and previous feedback to help you diagnose and correct their flaws. The more time you spend rewriting, the better the pieces will become. Crucially, you’ll also start to internalize the techniques you...]]>
Fri, 17 Mar 2023 18:21:59 +0000 EA - Getting Better at Writing: Why and How by bgarfinkel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Better at Writing: Why and How, published by bgarfinkel on March 17, 2023 on The Effective Altruism Forum.This post is adapted from a memo I wrote a while back, for people at GovAI. It may, someday, turn out to be the first post in a series on skill-building.SummaryIf you're a researcher,[1] then you should probably try to become very good at writing. Writing well helps you spread your ideas, think clearly, and be taken seriously. Employers also care a lot about writing skills.Improving your writing is doable: it’s mostly a matter of learning guidelines and practicing. Since hardly anyone consciously works on their writing skills, you can become much better than average just by setting aside time for study and deliberate practice.Why writing skills matterHere are three reasons why writing skills matter:The main point of writing is to get your ideas into other people’s heads. Far more people will internalize your ideas if you write them up well. Good writing signals a piece is worth reading, reduces the effort needed to process it, guards against misunderstandings, and helps key ideas stick.Writing and thinking are intertwined. If you work to improve your writing on some topic, then your thinking on it will normally improve too. Writing concisely forces you to identify your most important points. Writing clearly forces you to be clear about what you believe. And structuring your piece in a logical way forces you to understand how your ideas relate to each other.People will judge you on your writing. If you want people to take you seriously, then you should try to write well. Good writing is a signal of clear thinking, conscientiousness, and genuine interest in producing useful work.For all these reasons, most organizations give a lot of weight to writing skills when they hire researchers. If you ask DC think tank staffers what they look for in candidates, they apparently mention “writing skills” more than anything else. "Writing skills" was also the first item mentioned when I recently asked the same question to someone on a lab policy team. GovAI certainly pays attention to writing when we hire. Even if you just want to impress potential employers, then, you should care a great deal about your own writing.How to get better at writingIf you want to get better at writing, here are four things you can do:Read up on guidelines: There are a lot of pieces on how good writing works. The footnote at the end of this sentence lists some short essays.[2] The best book I know is Style: Lessons in Clarity and Grace. It’s an easy-to-read textbook that offers recipe-like guidance. I would recommend this book over anything else.[3]Engage with model pieces: You can pick out a handful of well-written pieces and read them with a critical mindset. (See the next footnote for some suggestions.[4]) You might ask: What exactly is good about the pieces? How do they work? Where do they obey or violate the guidelines recommended by others?Get feedback: Flaws in your writing—especially flaws that limit comprehension—will normally be more evident to people who are coming in cold. Also, sometimes other people will simply be better than you at diagnosing and correcting certain flaws. Comments and suggest-edits can draw your attention to recurring issues in your writing and offer models for how you can correct them.Do focused rewriting: The way you’ll ultimately get better is by doing focused rewriting. Pick some imperfect pieces—ideally, pieces you’re actually working on—and simply try to make them as good as possible.[5] You can consciously draw on writing guidelines, models, and previous feedback to help you diagnose and correct their flaws. The more time you spend rewriting, the better the pieces will become. Crucially, you’ll also start to internalize the techniques you...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Better at Writing: Why and How, published by bgarfinkel on March 17, 2023 on The Effective Altruism Forum.This post is adapted from a memo I wrote a while back, for people at GovAI. It may, someday, turn out to be the first post in a series on skill-building.SummaryIf you're a researcher,[1] then you should probably try to become very good at writing. Writing well helps you spread your ideas, think clearly, and be taken seriously. Employers also care a lot about writing skills.Improving your writing is doable: it’s mostly a matter of learning guidelines and practicing. Since hardly anyone consciously works on their writing skills, you can become much better than average just by setting aside time for study and deliberate practice.Why writing skills matterHere are three reasons why writing skills matter:The main point of writing is to get your ideas into other people’s heads. Far more people will internalize your ideas if you write them up well. Good writing signals a piece is worth reading, reduces the effort needed to process it, guards against misunderstandings, and helps key ideas stick.Writing and thinking are intertwined. If you work to improve your writing on some topic, then your thinking on it will normally improve too. Writing concisely forces you to identify your most important points. Writing clearly forces you to be clear about what you believe. And structuring your piece in a logical way forces you to understand how your ideas relate to each other.People will judge you on your writing. If you want people to take you seriously, then you should try to write well. Good writing is a signal of clear thinking, conscientiousness, and genuine interest in producing useful work.For all these reasons, most organizations give a lot of weight to writing skills when they hire researchers. If you ask DC think tank staffers what they look for in candidates, they apparently mention “writing skills” more than anything else. "Writing skills" was also the first item mentioned when I recently asked the same question to someone on a lab policy team. GovAI certainly pays attention to writing when we hire. Even if you just want to impress potential employers, then, you should care a great deal about your own writing.How to get better at writingIf you want to get better at writing, here are four things you can do:Read up on guidelines: There are a lot of pieces on how good writing works. The footnote at the end of this sentence lists some short essays.[2] The best book I know is Style: Lessons in Clarity and Grace. It’s an easy-to-read textbook that offers recipe-like guidance. I would recommend this book over anything else.[3]Engage with model pieces: You can pick out a handful of well-written pieces and read them with a critical mindset. (See the next footnote for some suggestions.[4]) You might ask: What exactly is good about the pieces? How do they work? Where do they obey or violate the guidelines recommended by others?Get feedback: Flaws in your writing—especially flaws that limit comprehension—will normally be more evident to people who are coming in cold. Also, sometimes other people will simply be better than you at diagnosing and correcting certain flaws. Comments and suggest-edits can draw your attention to recurring issues in your writing and offer models for how you can correct them.Do focused rewriting: The way you’ll ultimately get better is by doing focused rewriting. Pick some imperfect pieces—ideally, pieces you’re actually working on—and simply try to make them as good as possible.[5] You can consciously draw on writing guidelines, models, and previous feedback to help you diagnose and correct their flaws. The more time you spend rewriting, the better the pieces will become. Crucially, you’ll also start to internalize the techniques you...]]>
bgarfinkel https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:05 None full 5266
b5vEjXy8AnmgGezwN_NL_EA_EA EA - Announcing the 2023 CLR Summer Research Fellowship by stefan.torges Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the 2023 CLR Summer Research Fellowship, published by stefan.torges on March 17, 2023 on The Effective Altruism Forum.We, the Center on Long-Term Risk, are looking for Summer Research Fellows to help us explore strategies for reducing suffering in the long-term future (s-risk) and work on technical AI safety ideas related to that. For eight weeks, fellows will be part of our team while working on their own research project. During this time, they will be in regular contact with our researchers and other fellows. Each fellow will have one of our researchers as their guide and mentor.Deadline to apply: April 2, 2023. You can find more details on how to apply on our website.Purpose of the fellowshipThe purpose of the fellowship varies from fellow to fellow. In the past, have we often had the following types of people take part in the fellowship:People very early in their careers, e.g. in their undergraduate degree or even high school, who have a strong focus on s-risk and would like to learn more about research and test their fit.People seriously considering changing their career to s-risk research, who want to test their fit or seek employment at CLR.People with a strong focus on s-risk who aim for a research or research-adjacent career outside of CLR and who would like to gain a strong understanding of s-risk macrostrategy beforehand.People with a fair amount of research experience, e.g. from a partly- or fully completed PhD, whose research interests significantly overlap with CLR’s and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.There might be many other good reasons for completing the fellowship. We encourage you to apply if you think you would benefit from the program, even if your reason is not listed above.What we look for in candidatesWe don’t require specific qualifications or experience for this role, but the following abilities and qualities are what we’re looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria.Curiosity and a drive to work on challenging and important problems;Ability to answer complex research questions related to the long-term future;Willingness to work in poorly-explored areas and to learn about new domains as needed;Independent thinking;A cautious approach to potential information hazards and other sensitive topics;Alignment with our mission or strong interest in one of our priority areas.Priority areasYou can find an overview of our current priority areas here. However, If we believe that you can somehow advance high-quality research relevant to s-risks, we are interested in creating a position for you. If you see a way to contribute to our research agenda or have other ideas for reducing s-risks, please apply. We commonly tailor our positions to the strengths and interests of the applicants.Further detailsWe encourage you to apply even if any of the below does not work for you. We are happy to be flexible for exceptional candidates, including when it comes to program length and compensation.Program dates: The default start date is July 3, 2023. Exceptions may be possible.Program length & work quota: The program is intended to last for eight weeks in a full-time capacity. Exceptions, including part-time work, may be possible.Location: We prefer summer research fellows to work from our London offices, but will also consider applications from people who are unable to relocate.Compensation: Unfortunately, we face a lot of funding uncertainty at the moment. So we don’t know yet how much we will be able to pay participating fellows. Compensation will range from £1,800 to £4,000 per month, de...]]>
stefan.torges https://forum.effectivealtruism.org/posts/b5vEjXy8AnmgGezwN/announcing-the-2023-clr-summer-research-fellowship Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the 2023 CLR Summer Research Fellowship, published by stefan.torges on March 17, 2023 on The Effective Altruism Forum.We, the Center on Long-Term Risk, are looking for Summer Research Fellows to help us explore strategies for reducing suffering in the long-term future (s-risk) and work on technical AI safety ideas related to that. For eight weeks, fellows will be part of our team while working on their own research project. During this time, they will be in regular contact with our researchers and other fellows. Each fellow will have one of our researchers as their guide and mentor.Deadline to apply: April 2, 2023. You can find more details on how to apply on our website.Purpose of the fellowshipThe purpose of the fellowship varies from fellow to fellow. In the past, have we often had the following types of people take part in the fellowship:People very early in their careers, e.g. in their undergraduate degree or even high school, who have a strong focus on s-risk and would like to learn more about research and test their fit.People seriously considering changing their career to s-risk research, who want to test their fit or seek employment at CLR.People with a strong focus on s-risk who aim for a research or research-adjacent career outside of CLR and who would like to gain a strong understanding of s-risk macrostrategy beforehand.People with a fair amount of research experience, e.g. from a partly- or fully completed PhD, whose research interests significantly overlap with CLR’s and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.There might be many other good reasons for completing the fellowship. We encourage you to apply if you think you would benefit from the program, even if your reason is not listed above.What we look for in candidatesWe don’t require specific qualifications or experience for this role, but the following abilities and qualities are what we’re looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria.Curiosity and a drive to work on challenging and important problems;Ability to answer complex research questions related to the long-term future;Willingness to work in poorly-explored areas and to learn about new domains as needed;Independent thinking;A cautious approach to potential information hazards and other sensitive topics;Alignment with our mission or strong interest in one of our priority areas.Priority areasYou can find an overview of our current priority areas here. However, If we believe that you can somehow advance high-quality research relevant to s-risks, we are interested in creating a position for you. If you see a way to contribute to our research agenda or have other ideas for reducing s-risks, please apply. We commonly tailor our positions to the strengths and interests of the applicants.Further detailsWe encourage you to apply even if any of the below does not work for you. We are happy to be flexible for exceptional candidates, including when it comes to program length and compensation.Program dates: The default start date is July 3, 2023. Exceptions may be possible.Program length & work quota: The program is intended to last for eight weeks in a full-time capacity. Exceptions, including part-time work, may be possible.Location: We prefer summer research fellows to work from our London offices, but will also consider applications from people who are unable to relocate.Compensation: Unfortunately, we face a lot of funding uncertainty at the moment. So we don’t know yet how much we will be able to pay participating fellows. Compensation will range from £1,800 to £4,000 per month, de...]]>
Fri, 17 Mar 2023 16:31:22 +0000 EA - Announcing the 2023 CLR Summer Research Fellowship by stefan.torges Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the 2023 CLR Summer Research Fellowship, published by stefan.torges on March 17, 2023 on The Effective Altruism Forum.We, the Center on Long-Term Risk, are looking for Summer Research Fellows to help us explore strategies for reducing suffering in the long-term future (s-risk) and work on technical AI safety ideas related to that. For eight weeks, fellows will be part of our team while working on their own research project. During this time, they will be in regular contact with our researchers and other fellows. Each fellow will have one of our researchers as their guide and mentor.Deadline to apply: April 2, 2023. You can find more details on how to apply on our website.Purpose of the fellowshipThe purpose of the fellowship varies from fellow to fellow. In the past, have we often had the following types of people take part in the fellowship:People very early in their careers, e.g. in their undergraduate degree or even high school, who have a strong focus on s-risk and would like to learn more about research and test their fit.People seriously considering changing their career to s-risk research, who want to test their fit or seek employment at CLR.People with a strong focus on s-risk who aim for a research or research-adjacent career outside of CLR and who would like to gain a strong understanding of s-risk macrostrategy beforehand.People with a fair amount of research experience, e.g. from a partly- or fully completed PhD, whose research interests significantly overlap with CLR’s and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.There might be many other good reasons for completing the fellowship. We encourage you to apply if you think you would benefit from the program, even if your reason is not listed above.What we look for in candidatesWe don’t require specific qualifications or experience for this role, but the following abilities and qualities are what we’re looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria.Curiosity and a drive to work on challenging and important problems;Ability to answer complex research questions related to the long-term future;Willingness to work in poorly-explored areas and to learn about new domains as needed;Independent thinking;A cautious approach to potential information hazards and other sensitive topics;Alignment with our mission or strong interest in one of our priority areas.Priority areasYou can find an overview of our current priority areas here. However, If we believe that you can somehow advance high-quality research relevant to s-risks, we are interested in creating a position for you. If you see a way to contribute to our research agenda or have other ideas for reducing s-risks, please apply. We commonly tailor our positions to the strengths and interests of the applicants.Further detailsWe encourage you to apply even if any of the below does not work for you. We are happy to be flexible for exceptional candidates, including when it comes to program length and compensation.Program dates: The default start date is July 3, 2023. Exceptions may be possible.Program length & work quota: The program is intended to last for eight weeks in a full-time capacity. Exceptions, including part-time work, may be possible.Location: We prefer summer research fellows to work from our London offices, but will also consider applications from people who are unable to relocate.Compensation: Unfortunately, we face a lot of funding uncertainty at the moment. So we don’t know yet how much we will be able to pay participating fellows. Compensation will range from £1,800 to £4,000 per month, de...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the 2023 CLR Summer Research Fellowship, published by stefan.torges on March 17, 2023 on The Effective Altruism Forum.We, the Center on Long-Term Risk, are looking for Summer Research Fellows to help us explore strategies for reducing suffering in the long-term future (s-risk) and work on technical AI safety ideas related to that. For eight weeks, fellows will be part of our team while working on their own research project. During this time, they will be in regular contact with our researchers and other fellows. Each fellow will have one of our researchers as their guide and mentor.Deadline to apply: April 2, 2023. You can find more details on how to apply on our website.Purpose of the fellowshipThe purpose of the fellowship varies from fellow to fellow. In the past, have we often had the following types of people take part in the fellowship:People very early in their careers, e.g. in their undergraduate degree or even high school, who have a strong focus on s-risk and would like to learn more about research and test their fit.People seriously considering changing their career to s-risk research, who want to test their fit or seek employment at CLR.People with a strong focus on s-risk who aim for a research or research-adjacent career outside of CLR and who would like to gain a strong understanding of s-risk macrostrategy beforehand.People with a fair amount of research experience, e.g. from a partly- or fully completed PhD, whose research interests significantly overlap with CLR’s and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.There might be many other good reasons for completing the fellowship. We encourage you to apply if you think you would benefit from the program, even if your reason is not listed above.What we look for in candidatesWe don’t require specific qualifications or experience for this role, but the following abilities and qualities are what we’re looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria.Curiosity and a drive to work on challenging and important problems;Ability to answer complex research questions related to the long-term future;Willingness to work in poorly-explored areas and to learn about new domains as needed;Independent thinking;A cautious approach to potential information hazards and other sensitive topics;Alignment with our mission or strong interest in one of our priority areas.Priority areasYou can find an overview of our current priority areas here. However, If we believe that you can somehow advance high-quality research relevant to s-risks, we are interested in creating a position for you. If you see a way to contribute to our research agenda or have other ideas for reducing s-risks, please apply. We commonly tailor our positions to the strengths and interests of the applicants.Further detailsWe encourage you to apply even if any of the below does not work for you. We are happy to be flexible for exceptional candidates, including when it comes to program length and compensation.Program dates: The default start date is July 3, 2023. Exceptions may be possible.Program length & work quota: The program is intended to last for eight weeks in a full-time capacity. Exceptions, including part-time work, may be possible.Location: We prefer summer research fellows to work from our London offices, but will also consider applications from people who are unable to relocate.Compensation: Unfortunately, we face a lot of funding uncertainty at the moment. So we don’t know yet how much we will be able to pay participating fellows. Compensation will range from £1,800 to £4,000 per month, de...]]>
stefan.torges https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:37 None full 5268
zabdCSArBLHSaQnrn_NL_EA_EA EA - Legal Assistance for Victims of AI by bob Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Assistance for Victims of AI, published by bob on March 17, 2023 on The Effective Altruism Forum.In the face of increasing competition, it seems unlikely that AI companies will ever take their foot off the gas. One avenue to slow AI development down is to make investment in AI less attractive. This could be done by increasing the legal risk associated with incorporating AI in products.My understanding of the law is limited, but the EU seems particularly friendly to this approach. The European Commission recently proposed the AI Liability Directive, which aims to make it easier to sue over AI products. In the US, companies are at the very least directly responsible for what their chatbots say, and it seems like it's only a matter of time until a chatbot genuinely harms a user, either by gaslighting or by abusive behavior.A charity could provide legal assistance to victims of AI, similar to how EFF provides legal assistance for cases related to Internet freedom.Besides helping the affected person, this would hopefully:Signal to organizations that giving users access to AI is risky businessScare away new players in the marketScare away investorsGive the AI company in question a bad rep, and sway the public opinion against AI companies in generalLimit the ventures large organizations would be willing to jump intoSpark policy discussions (e.g. about limiting minor access to chatbots, which would also limit profits)All of these things would make AI a worse investment, AI companies a less attractive place to work, etc. I'm not sure it'll make a big difference, but I don't think it's less likely to move the needle than academic work on AI safety.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
bob https://forum.effectivealtruism.org/posts/zabdCSArBLHSaQnrn/legal-assistance-for-victims-of-ai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Assistance for Victims of AI, published by bob on March 17, 2023 on The Effective Altruism Forum.In the face of increasing competition, it seems unlikely that AI companies will ever take their foot off the gas. One avenue to slow AI development down is to make investment in AI less attractive. This could be done by increasing the legal risk associated with incorporating AI in products.My understanding of the law is limited, but the EU seems particularly friendly to this approach. The European Commission recently proposed the AI Liability Directive, which aims to make it easier to sue over AI products. In the US, companies are at the very least directly responsible for what their chatbots say, and it seems like it's only a matter of time until a chatbot genuinely harms a user, either by gaslighting or by abusive behavior.A charity could provide legal assistance to victims of AI, similar to how EFF provides legal assistance for cases related to Internet freedom.Besides helping the affected person, this would hopefully:Signal to organizations that giving users access to AI is risky businessScare away new players in the marketScare away investorsGive the AI company in question a bad rep, and sway the public opinion against AI companies in generalLimit the ventures large organizations would be willing to jump intoSpark policy discussions (e.g. about limiting minor access to chatbots, which would also limit profits)All of these things would make AI a worse investment, AI companies a less attractive place to work, etc. I'm not sure it'll make a big difference, but I don't think it's less likely to move the needle than academic work on AI safety.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 17 Mar 2023 15:11:57 +0000 EA - Legal Assistance for Victims of AI by bob Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Assistance for Victims of AI, published by bob on March 17, 2023 on The Effective Altruism Forum.In the face of increasing competition, it seems unlikely that AI companies will ever take their foot off the gas. One avenue to slow AI development down is to make investment in AI less attractive. This could be done by increasing the legal risk associated with incorporating AI in products.My understanding of the law is limited, but the EU seems particularly friendly to this approach. The European Commission recently proposed the AI Liability Directive, which aims to make it easier to sue over AI products. In the US, companies are at the very least directly responsible for what their chatbots say, and it seems like it's only a matter of time until a chatbot genuinely harms a user, either by gaslighting or by abusive behavior.A charity could provide legal assistance to victims of AI, similar to how EFF provides legal assistance for cases related to Internet freedom.Besides helping the affected person, this would hopefully:Signal to organizations that giving users access to AI is risky businessScare away new players in the marketScare away investorsGive the AI company in question a bad rep, and sway the public opinion against AI companies in generalLimit the ventures large organizations would be willing to jump intoSpark policy discussions (e.g. about limiting minor access to chatbots, which would also limit profits)All of these things would make AI a worse investment, AI companies a less attractive place to work, etc. I'm not sure it'll make a big difference, but I don't think it's less likely to move the needle than academic work on AI safety.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Assistance for Victims of AI, published by bob on March 17, 2023 on The Effective Altruism Forum.In the face of increasing competition, it seems unlikely that AI companies will ever take their foot off the gas. One avenue to slow AI development down is to make investment in AI less attractive. This could be done by increasing the legal risk associated with incorporating AI in products.My understanding of the law is limited, but the EU seems particularly friendly to this approach. The European Commission recently proposed the AI Liability Directive, which aims to make it easier to sue over AI products. In the US, companies are at the very least directly responsible for what their chatbots say, and it seems like it's only a matter of time until a chatbot genuinely harms a user, either by gaslighting or by abusive behavior.A charity could provide legal assistance to victims of AI, similar to how EFF provides legal assistance for cases related to Internet freedom.Besides helping the affected person, this would hopefully:Signal to organizations that giving users access to AI is risky businessScare away new players in the marketScare away investorsGive the AI company in question a bad rep, and sway the public opinion against AI companies in generalLimit the ventures large organizations would be willing to jump intoSpark policy discussions (e.g. about limiting minor access to chatbots, which would also limit profits)All of these things would make AI a worse investment, AI companies a less attractive place to work, etc. I'm not sure it'll make a big difference, but I don't think it's less likely to move the needle than academic work on AI safety.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
bob https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:50 None full 5269
cpYR9TsG8BdtETo6u_NL_EA_EA EA - Can we trust wellbeing surveys? A pilot study of comparability, linearity, and neutrality by Conrad S Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can we trust wellbeing surveys? A pilot study of comparability, linearity, and neutrality, published by Conrad S on March 17, 2023 on The Effective Altruism Forum.Note: This post only contains Sections 1 and 2 of the report. For the full detail of our survey and pilot results, please see the full report on our website.SummarySubjective wellbeing (SWB) data, such as answers to life satisfaction questions, are important for decision-making by philanthropists and governments. Such data are currently used with two important assumptions:Reports are comparable between persons (e.g., my 6/10 means the same as your 6/10)Reports are linear in the underlying feelings (e.g., going from 4/10 to 5/10 represents the same size change as going from 8/10 to 9/10).Fortunately, these two assumptions are sufficient for analyses that only involve the quality of people’s lives. However, if we want to perform analyses that involve trade-offs between improving quality and quantity of life, we also need knowledge of the neutral point, the point on a wellbeing scale that is equivalent to non-existence.Unfortunately, evidence on all three questions is critically scarce. We propose to collect additional surveys to fill this gap.Our aim with this report is two-fold. First, we give an outline of the questions we plan to field and the underlying reasoning that led to them. Second, we present results from an initial pilot study (n = 128):Unfortunately, this small sample size does not allow us to provide clear estimates of the comparability of wellbeing reports.However, across several question modalities, we do find tentative evidence in favour of approximate linearity.With respect to neutrality, we assess at what point on a 0-10 scale respondents say that they are 'neither satisfied nor dissatisfied' (mean response is 5.3/10). We also probe at what point on a life satisfaction scale respondents report to be indifferent between being alive and being dead (mean response is 1.3/10). Implications and limitations of these findings concerning neutrality are discussed in Section 6.2.In general, the findings from our pilot study should only be seen as being indicative of the general feasibility of this project. They do not provide definitive answers.In the hopes of fielding an improved version of our survey with a much larger sample and a pre-registered analysis plan, we welcome feedback and suggestions on our current survey design.Here are some key questions that we hope to receive feedback on:Are there missing questions that could be included in this survey (or an additional survey) that would inform important topics in SWB research? Are there any questions or proposed analyses you find redundant?Do you see any critical flaws in the analyses we propose? Are there additional analyses we should be considering?Would these data and analyses actually reassure you about the comparability, linearity, and neutrality of subjective wellbeing data? If not, what sorts of data and analyses would reassure you?What are some good places for us to look for funding for this research?Of course, any other feedback that goes beyond these questions is welcome, too. Feedback can be sent to casparkaiser@gmail.com or to samuel@happierlivesinstitute.org.The report proceeds as follows:In Section 1, we describe the challenges for the use of self-reported subjective wellbeing data, focusing on the issues of comparability, linearity, and neutrality. We highlight the implications of these three assumptions for decision-making about effective interventions.In Section 2, we describe the general methodology of the survey.For the following sections, see the full report on our website.In Section 3, we discuss responses to the core life satisfaction question.In Sections 4, 5, and 6, we describe how we will assess co...]]>
Conrad S https://forum.effectivealtruism.org/posts/cpYR9TsG8BdtETo6u/can-we-trust-wellbeing-surveys-a-pilot-study-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can we trust wellbeing surveys? A pilot study of comparability, linearity, and neutrality, published by Conrad S on March 17, 2023 on The Effective Altruism Forum.Note: This post only contains Sections 1 and 2 of the report. For the full detail of our survey and pilot results, please see the full report on our website.SummarySubjective wellbeing (SWB) data, such as answers to life satisfaction questions, are important for decision-making by philanthropists and governments. Such data are currently used with two important assumptions:Reports are comparable between persons (e.g., my 6/10 means the same as your 6/10)Reports are linear in the underlying feelings (e.g., going from 4/10 to 5/10 represents the same size change as going from 8/10 to 9/10).Fortunately, these two assumptions are sufficient for analyses that only involve the quality of people’s lives. However, if we want to perform analyses that involve trade-offs between improving quality and quantity of life, we also need knowledge of the neutral point, the point on a wellbeing scale that is equivalent to non-existence.Unfortunately, evidence on all three questions is critically scarce. We propose to collect additional surveys to fill this gap.Our aim with this report is two-fold. First, we give an outline of the questions we plan to field and the underlying reasoning that led to them. Second, we present results from an initial pilot study (n = 128):Unfortunately, this small sample size does not allow us to provide clear estimates of the comparability of wellbeing reports.However, across several question modalities, we do find tentative evidence in favour of approximate linearity.With respect to neutrality, we assess at what point on a 0-10 scale respondents say that they are 'neither satisfied nor dissatisfied' (mean response is 5.3/10). We also probe at what point on a life satisfaction scale respondents report to be indifferent between being alive and being dead (mean response is 1.3/10). Implications and limitations of these findings concerning neutrality are discussed in Section 6.2.In general, the findings from our pilot study should only be seen as being indicative of the general feasibility of this project. They do not provide definitive answers.In the hopes of fielding an improved version of our survey with a much larger sample and a pre-registered analysis plan, we welcome feedback and suggestions on our current survey design.Here are some key questions that we hope to receive feedback on:Are there missing questions that could be included in this survey (or an additional survey) that would inform important topics in SWB research? Are there any questions or proposed analyses you find redundant?Do you see any critical flaws in the analyses we propose? Are there additional analyses we should be considering?Would these data and analyses actually reassure you about the comparability, linearity, and neutrality of subjective wellbeing data? If not, what sorts of data and analyses would reassure you?What are some good places for us to look for funding for this research?Of course, any other feedback that goes beyond these questions is welcome, too. Feedback can be sent to casparkaiser@gmail.com or to samuel@happierlivesinstitute.org.The report proceeds as follows:In Section 1, we describe the challenges for the use of self-reported subjective wellbeing data, focusing on the issues of comparability, linearity, and neutrality. We highlight the implications of these three assumptions for decision-making about effective interventions.In Section 2, we describe the general methodology of the survey.For the following sections, see the full report on our website.In Section 3, we discuss responses to the core life satisfaction question.In Sections 4, 5, and 6, we describe how we will assess co...]]>
Fri, 17 Mar 2023 13:34:23 +0000 EA - Can we trust wellbeing surveys? A pilot study of comparability, linearity, and neutrality by Conrad S Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can we trust wellbeing surveys? A pilot study of comparability, linearity, and neutrality, published by Conrad S on March 17, 2023 on The Effective Altruism Forum.Note: This post only contains Sections 1 and 2 of the report. For the full detail of our survey and pilot results, please see the full report on our website.SummarySubjective wellbeing (SWB) data, such as answers to life satisfaction questions, are important for decision-making by philanthropists and governments. Such data are currently used with two important assumptions:Reports are comparable between persons (e.g., my 6/10 means the same as your 6/10)Reports are linear in the underlying feelings (e.g., going from 4/10 to 5/10 represents the same size change as going from 8/10 to 9/10).Fortunately, these two assumptions are sufficient for analyses that only involve the quality of people’s lives. However, if we want to perform analyses that involve trade-offs between improving quality and quantity of life, we also need knowledge of the neutral point, the point on a wellbeing scale that is equivalent to non-existence.Unfortunately, evidence on all three questions is critically scarce. We propose to collect additional surveys to fill this gap.Our aim with this report is two-fold. First, we give an outline of the questions we plan to field and the underlying reasoning that led to them. Second, we present results from an initial pilot study (n = 128):Unfortunately, this small sample size does not allow us to provide clear estimates of the comparability of wellbeing reports.However, across several question modalities, we do find tentative evidence in favour of approximate linearity.With respect to neutrality, we assess at what point on a 0-10 scale respondents say that they are 'neither satisfied nor dissatisfied' (mean response is 5.3/10). We also probe at what point on a life satisfaction scale respondents report to be indifferent between being alive and being dead (mean response is 1.3/10). Implications and limitations of these findings concerning neutrality are discussed in Section 6.2.In general, the findings from our pilot study should only be seen as being indicative of the general feasibility of this project. They do not provide definitive answers.In the hopes of fielding an improved version of our survey with a much larger sample and a pre-registered analysis plan, we welcome feedback and suggestions on our current survey design.Here are some key questions that we hope to receive feedback on:Are there missing questions that could be included in this survey (or an additional survey) that would inform important topics in SWB research? Are there any questions or proposed analyses you find redundant?Do you see any critical flaws in the analyses we propose? Are there additional analyses we should be considering?Would these data and analyses actually reassure you about the comparability, linearity, and neutrality of subjective wellbeing data? If not, what sorts of data and analyses would reassure you?What are some good places for us to look for funding for this research?Of course, any other feedback that goes beyond these questions is welcome, too. Feedback can be sent to casparkaiser@gmail.com or to samuel@happierlivesinstitute.org.The report proceeds as follows:In Section 1, we describe the challenges for the use of self-reported subjective wellbeing data, focusing on the issues of comparability, linearity, and neutrality. We highlight the implications of these three assumptions for decision-making about effective interventions.In Section 2, we describe the general methodology of the survey.For the following sections, see the full report on our website.In Section 3, we discuss responses to the core life satisfaction question.In Sections 4, 5, and 6, we describe how we will assess co...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can we trust wellbeing surveys? A pilot study of comparability, linearity, and neutrality, published by Conrad S on March 17, 2023 on The Effective Altruism Forum.Note: This post only contains Sections 1 and 2 of the report. For the full detail of our survey and pilot results, please see the full report on our website.SummarySubjective wellbeing (SWB) data, such as answers to life satisfaction questions, are important for decision-making by philanthropists and governments. Such data are currently used with two important assumptions:Reports are comparable between persons (e.g., my 6/10 means the same as your 6/10)Reports are linear in the underlying feelings (e.g., going from 4/10 to 5/10 represents the same size change as going from 8/10 to 9/10).Fortunately, these two assumptions are sufficient for analyses that only involve the quality of people’s lives. However, if we want to perform analyses that involve trade-offs between improving quality and quantity of life, we also need knowledge of the neutral point, the point on a wellbeing scale that is equivalent to non-existence.Unfortunately, evidence on all three questions is critically scarce. We propose to collect additional surveys to fill this gap.Our aim with this report is two-fold. First, we give an outline of the questions we plan to field and the underlying reasoning that led to them. Second, we present results from an initial pilot study (n = 128):Unfortunately, this small sample size does not allow us to provide clear estimates of the comparability of wellbeing reports.However, across several question modalities, we do find tentative evidence in favour of approximate linearity.With respect to neutrality, we assess at what point on a 0-10 scale respondents say that they are 'neither satisfied nor dissatisfied' (mean response is 5.3/10). We also probe at what point on a life satisfaction scale respondents report to be indifferent between being alive and being dead (mean response is 1.3/10). Implications and limitations of these findings concerning neutrality are discussed in Section 6.2.In general, the findings from our pilot study should only be seen as being indicative of the general feasibility of this project. They do not provide definitive answers.In the hopes of fielding an improved version of our survey with a much larger sample and a pre-registered analysis plan, we welcome feedback and suggestions on our current survey design.Here are some key questions that we hope to receive feedback on:Are there missing questions that could be included in this survey (or an additional survey) that would inform important topics in SWB research? Are there any questions or proposed analyses you find redundant?Do you see any critical flaws in the analyses we propose? Are there additional analyses we should be considering?Would these data and analyses actually reassure you about the comparability, linearity, and neutrality of subjective wellbeing data? If not, what sorts of data and analyses would reassure you?What are some good places for us to look for funding for this research?Of course, any other feedback that goes beyond these questions is welcome, too. Feedback can be sent to casparkaiser@gmail.com or to samuel@happierlivesinstitute.org.The report proceeds as follows:In Section 1, we describe the challenges for the use of self-reported subjective wellbeing data, focusing on the issues of comparability, linearity, and neutrality. We highlight the implications of these three assumptions for decision-making about effective interventions.In Section 2, we describe the general methodology of the survey.For the following sections, see the full report on our website.In Section 3, we discuss responses to the core life satisfaction question.In Sections 4, 5, and 6, we describe how we will assess co...]]>
Conrad S https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 20:47 None full 5270
g4fXhiJyj6tdBhuBK_NL_EA_EA EA - Survey on intermediate goals in AI governance by MichaelA Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Survey on intermediate goals in AI governance, published by MichaelA on March 17, 2023 on The Effective Altruism Forum.It seems that a key bottleneck for the field of longtermism-aligned AI governance is limited strategic clarity (see Muehlhauser, 2020, 2021). As one effort to increase strategic clarity, in October-November 2022, we sent a survey to 229 people we had reason to believe are knowledgeable about longtermist AI governance, receiving 107 responses. We asked about:respondents’ “theory of victory” for AI risk (which we defined as the main, high-level “plan” they’d propose for how humanity could plausibly manage the development and deployment of transformative AI such that we get long-lasting good outcomes),how they’d feel about funding going to each of 53 potential “intermediate goals” for AI governance,what other intermediate goals they’d suggest,how high they believe the risk of existential catastrophe from AI is, andwhen they expect transformative AI (TAI) to be developed.We hope the results will be useful to funders, policymakers, people at AI labs, researchers, field-builders, people orienting to longtermist AI governance, and perhaps other types of people. For example, the report could:Broaden the range of options people can easily considerHelp people assess how much and in what way to focus on each potential “theory of victory”, “intermediate goal”, etc.Target and improve further efforts to assess how much and in what way to focus on each potential theory of victory, intermediate goal, etc.If you'd like to see a summary of the survey results, please request access to this folder. We expect to approve all access requests, and will expect readers to abide by the policy articulated in "About sharing information from this report" (for the reasons explained there).AcknowledgmentsThis report is a project of Rethink Priorities–a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. The project was commissioned by Open Philanthropy. Full acknowledgements can be found in the linked "Introduction & summary" document.If you are interested in RP’s work, please visit our research database and subscribe to our newsletter.Here’s the definition of “intermediate goal” that we stated in the survey itself:By an intermediate goal, we mean any goal for reducing extreme AI risk that’s more specific and directly actionable than a high-level goal like ‘reduce existential AI accident risk’ but is less specific and directly actionable than a particular intervention. In another context (global health and development), examples of potential intermediate goals could include ‘develop better/cheaper malaria vaccines’ and ‘improve literacy rates in Sub-Saharan Africa’.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
MichaelA https://forum.effectivealtruism.org/posts/g4fXhiJyj6tdBhuBK/survey-on-intermediate-goals-in-ai-governance Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Survey on intermediate goals in AI governance, published by MichaelA on March 17, 2023 on The Effective Altruism Forum.It seems that a key bottleneck for the field of longtermism-aligned AI governance is limited strategic clarity (see Muehlhauser, 2020, 2021). As one effort to increase strategic clarity, in October-November 2022, we sent a survey to 229 people we had reason to believe are knowledgeable about longtermist AI governance, receiving 107 responses. We asked about:respondents’ “theory of victory” for AI risk (which we defined as the main, high-level “plan” they’d propose for how humanity could plausibly manage the development and deployment of transformative AI such that we get long-lasting good outcomes),how they’d feel about funding going to each of 53 potential “intermediate goals” for AI governance,what other intermediate goals they’d suggest,how high they believe the risk of existential catastrophe from AI is, andwhen they expect transformative AI (TAI) to be developed.We hope the results will be useful to funders, policymakers, people at AI labs, researchers, field-builders, people orienting to longtermist AI governance, and perhaps other types of people. For example, the report could:Broaden the range of options people can easily considerHelp people assess how much and in what way to focus on each potential “theory of victory”, “intermediate goal”, etc.Target and improve further efforts to assess how much and in what way to focus on each potential theory of victory, intermediate goal, etc.If you'd like to see a summary of the survey results, please request access to this folder. We expect to approve all access requests, and will expect readers to abide by the policy articulated in "About sharing information from this report" (for the reasons explained there).AcknowledgmentsThis report is a project of Rethink Priorities–a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. The project was commissioned by Open Philanthropy. Full acknowledgements can be found in the linked "Introduction & summary" document.If you are interested in RP’s work, please visit our research database and subscribe to our newsletter.Here’s the definition of “intermediate goal” that we stated in the survey itself:By an intermediate goal, we mean any goal for reducing extreme AI risk that’s more specific and directly actionable than a high-level goal like ‘reduce existential AI accident risk’ but is less specific and directly actionable than a particular intervention. In another context (global health and development), examples of potential intermediate goals could include ‘develop better/cheaper malaria vaccines’ and ‘improve literacy rates in Sub-Saharan Africa’.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 17 Mar 2023 13:19:30 +0000 EA - Survey on intermediate goals in AI governance by MichaelA Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Survey on intermediate goals in AI governance, published by MichaelA on March 17, 2023 on The Effective Altruism Forum.It seems that a key bottleneck for the field of longtermism-aligned AI governance is limited strategic clarity (see Muehlhauser, 2020, 2021). As one effort to increase strategic clarity, in October-November 2022, we sent a survey to 229 people we had reason to believe are knowledgeable about longtermist AI governance, receiving 107 responses. We asked about:respondents’ “theory of victory” for AI risk (which we defined as the main, high-level “plan” they’d propose for how humanity could plausibly manage the development and deployment of transformative AI such that we get long-lasting good outcomes),how they’d feel about funding going to each of 53 potential “intermediate goals” for AI governance,what other intermediate goals they’d suggest,how high they believe the risk of existential catastrophe from AI is, andwhen they expect transformative AI (TAI) to be developed.We hope the results will be useful to funders, policymakers, people at AI labs, researchers, field-builders, people orienting to longtermist AI governance, and perhaps other types of people. For example, the report could:Broaden the range of options people can easily considerHelp people assess how much and in what way to focus on each potential “theory of victory”, “intermediate goal”, etc.Target and improve further efforts to assess how much and in what way to focus on each potential theory of victory, intermediate goal, etc.If you'd like to see a summary of the survey results, please request access to this folder. We expect to approve all access requests, and will expect readers to abide by the policy articulated in "About sharing information from this report" (for the reasons explained there).AcknowledgmentsThis report is a project of Rethink Priorities–a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. The project was commissioned by Open Philanthropy. Full acknowledgements can be found in the linked "Introduction & summary" document.If you are interested in RP’s work, please visit our research database and subscribe to our newsletter.Here’s the definition of “intermediate goal” that we stated in the survey itself:By an intermediate goal, we mean any goal for reducing extreme AI risk that’s more specific and directly actionable than a high-level goal like ‘reduce existential AI accident risk’ but is less specific and directly actionable than a particular intervention. In another context (global health and development), examples of potential intermediate goals could include ‘develop better/cheaper malaria vaccines’ and ‘improve literacy rates in Sub-Saharan Africa’.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Survey on intermediate goals in AI governance, published by MichaelA on March 17, 2023 on The Effective Altruism Forum.It seems that a key bottleneck for the field of longtermism-aligned AI governance is limited strategic clarity (see Muehlhauser, 2020, 2021). As one effort to increase strategic clarity, in October-November 2022, we sent a survey to 229 people we had reason to believe are knowledgeable about longtermist AI governance, receiving 107 responses. We asked about:respondents’ “theory of victory” for AI risk (which we defined as the main, high-level “plan” they’d propose for how humanity could plausibly manage the development and deployment of transformative AI such that we get long-lasting good outcomes),how they’d feel about funding going to each of 53 potential “intermediate goals” for AI governance,what other intermediate goals they’d suggest,how high they believe the risk of existential catastrophe from AI is, andwhen they expect transformative AI (TAI) to be developed.We hope the results will be useful to funders, policymakers, people at AI labs, researchers, field-builders, people orienting to longtermist AI governance, and perhaps other types of people. For example, the report could:Broaden the range of options people can easily considerHelp people assess how much and in what way to focus on each potential “theory of victory”, “intermediate goal”, etc.Target and improve further efforts to assess how much and in what way to focus on each potential theory of victory, intermediate goal, etc.If you'd like to see a summary of the survey results, please request access to this folder. We expect to approve all access requests, and will expect readers to abide by the policy articulated in "About sharing information from this report" (for the reasons explained there).AcknowledgmentsThis report is a project of Rethink Priorities–a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. The project was commissioned by Open Philanthropy. Full acknowledgements can be found in the linked "Introduction & summary" document.If you are interested in RP’s work, please visit our research database and subscribe to our newsletter.Here’s the definition of “intermediate goal” that we stated in the survey itself:By an intermediate goal, we mean any goal for reducing extreme AI risk that’s more specific and directly actionable than a high-level goal like ‘reduce existential AI accident risk’ but is less specific and directly actionable than a particular intervention. In another context (global health and development), examples of potential intermediate goals could include ‘develop better/cheaper malaria vaccines’ and ‘improve literacy rates in Sub-Saharan Africa’.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
MichaelA https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:50 None full 5267
Q4rg6vwbtPxXW6ECj_NL_EA_EA EA - We are fighting a shared battle (a call for a different approach to AI Strategy) by Gideon Futerman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We are fighting a shared battle (a call for a different approach to AI Strategy), published by Gideon Futerman on March 16, 2023 on The Effective Altruism Forum.Disclaimer 1: This following essay doesn’t purport to offer much original ideas, and I am certainly a non-expert on AI Governance, so please don’t take my word for these things too seriously. I have linked sources throughout the text, and have some other similar texts later on, but this should merely be treated as another data point in people saying very similar things; far smarter people than I have written on this.Disclaimer 2: This post is quite long, so I recommend reading the section on " A choice not an inevitability" and "It's all about power" for the core of my argument.My argument essentially is as follows; under most plausible understandings of how harms arise from very advanced AI systems, be these AGI or narrow AI or systems somewhere in between, the actors responsible, and the actions that must be taken to reduce or avert the harm, are broadly similar whether you care about both existential and non-existential harms from AI development. I will then further go on to argue that this calls for broad, coalitional politics of people who vastly disagree on specifics of AI systems harms, because we essentially have the same goals.It's important to note that calls like these have happened before. Whilst I will be taking a slightly different argument to them, Prunkl & Whittlestone, Baum, Stix & Maas and Cave & Ó hÉigeartaigh have all made arguments attempting to bridge near term and long term concerns. In general, these proposals (with the exception of Baum) have made calls for narrower cooperation between ‘AI Ethics’ and ‘AI Safety’ than I will make, and are all considerably less focused on the common source of harm than I will be. None go as far as I do in essentially suggesting all key forms of harm that we worry about are incidents of the same phenomena of power concentration in and through AI. These pieces are in many ways more research focused, whilst mine is considerably more politically focused.Nonetheless, there is considerable overlap in spirit of identifying that the near-term/ethics and long-term/safety distinction is overemphasised and is not as analytically useful as is made out, as well as the intention of all these pieces and mine to reconcile for mutual benefit of the two factions.A choice not an inevitabilityAt present, there is no AI inevitably coming to harm us. Those AIs that do will be given capabilities, and power to cause harm, by developers. If the AI companies stopped developing their AIs now, and people chose to stop deploying them, then both existential or non-existential harms would stop. These harms are in our hands, and whilst the technologies clearly act as important intermediaries, ultimately it is a human choice, a social choice, and perhaps most importantly a political choice to carry on developing more and more powerful AI systems when such dangers are apparent (or merely plausible or possible). The attempted development of AGI is far from value neutral, far from inevitable and very much in the realm of legitimate political contestation. Thus far, we have simply accepted the right for powerful tech companies to decide our future for us; this is both unnecessary and dangerous.It's important to note that our current acceptance of the right of companies to legislate for our future is historically contingent. In the past, corporate power has been curbed, from colonial era companies, Progressive Era trust-busting, postwar Germany and more, and this could be used again. Whilst governments have often taken a leading role, civil society has also been significant in curbing corporate power and technology development throughout history. Acceptance of corporate dominance i...]]>
Gideon Futerman https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We are fighting a shared battle (a call for a different approach to AI Strategy), published by Gideon Futerman on March 16, 2023 on The Effective Altruism Forum.Disclaimer 1: This following essay doesn’t purport to offer much original ideas, and I am certainly a non-expert on AI Governance, so please don’t take my word for these things too seriously. I have linked sources throughout the text, and have some other similar texts later on, but this should merely be treated as another data point in people saying very similar things; far smarter people than I have written on this.Disclaimer 2: This post is quite long, so I recommend reading the section on " A choice not an inevitability" and "It's all about power" for the core of my argument.My argument essentially is as follows; under most plausible understandings of how harms arise from very advanced AI systems, be these AGI or narrow AI or systems somewhere in between, the actors responsible, and the actions that must be taken to reduce or avert the harm, are broadly similar whether you care about both existential and non-existential harms from AI development. I will then further go on to argue that this calls for broad, coalitional politics of people who vastly disagree on specifics of AI systems harms, because we essentially have the same goals.It's important to note that calls like these have happened before. Whilst I will be taking a slightly different argument to them, Prunkl & Whittlestone, Baum, Stix & Maas and Cave & Ó hÉigeartaigh have all made arguments attempting to bridge near term and long term concerns. In general, these proposals (with the exception of Baum) have made calls for narrower cooperation between ‘AI Ethics’ and ‘AI Safety’ than I will make, and are all considerably less focused on the common source of harm than I will be. None go as far as I do in essentially suggesting all key forms of harm that we worry about are incidents of the same phenomena of power concentration in and through AI. These pieces are in many ways more research focused, whilst mine is considerably more politically focused.Nonetheless, there is considerable overlap in spirit of identifying that the near-term/ethics and long-term/safety distinction is overemphasised and is not as analytically useful as is made out, as well as the intention of all these pieces and mine to reconcile for mutual benefit of the two factions.A choice not an inevitabilityAt present, there is no AI inevitably coming to harm us. Those AIs that do will be given capabilities, and power to cause harm, by developers. If the AI companies stopped developing their AIs now, and people chose to stop deploying them, then both existential or non-existential harms would stop. These harms are in our hands, and whilst the technologies clearly act as important intermediaries, ultimately it is a human choice, a social choice, and perhaps most importantly a political choice to carry on developing more and more powerful AI systems when such dangers are apparent (or merely plausible or possible). The attempted development of AGI is far from value neutral, far from inevitable and very much in the realm of legitimate political contestation. Thus far, we have simply accepted the right for powerful tech companies to decide our future for us; this is both unnecessary and dangerous.It's important to note that our current acceptance of the right of companies to legislate for our future is historically contingent. In the past, corporate power has been curbed, from colonial era companies, Progressive Era trust-busting, postwar Germany and more, and this could be used again. Whilst governments have often taken a leading role, civil society has also been significant in curbing corporate power and technology development throughout history. Acceptance of corporate dominance i...]]>
Fri, 17 Mar 2023 07:33:05 +0000 EA - We are fighting a shared battle (a call for a different approach to AI Strategy) by Gideon Futerman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We are fighting a shared battle (a call for a different approach to AI Strategy), published by Gideon Futerman on March 16, 2023 on The Effective Altruism Forum.Disclaimer 1: This following essay doesn’t purport to offer much original ideas, and I am certainly a non-expert on AI Governance, so please don’t take my word for these things too seriously. I have linked sources throughout the text, and have some other similar texts later on, but this should merely be treated as another data point in people saying very similar things; far smarter people than I have written on this.Disclaimer 2: This post is quite long, so I recommend reading the section on " A choice not an inevitability" and "It's all about power" for the core of my argument.My argument essentially is as follows; under most plausible understandings of how harms arise from very advanced AI systems, be these AGI or narrow AI or systems somewhere in between, the actors responsible, and the actions that must be taken to reduce or avert the harm, are broadly similar whether you care about both existential and non-existential harms from AI development. I will then further go on to argue that this calls for broad, coalitional politics of people who vastly disagree on specifics of AI systems harms, because we essentially have the same goals.It's important to note that calls like these have happened before. Whilst I will be taking a slightly different argument to them, Prunkl & Whittlestone, Baum, Stix & Maas and Cave & Ó hÉigeartaigh have all made arguments attempting to bridge near term and long term concerns. In general, these proposals (with the exception of Baum) have made calls for narrower cooperation between ‘AI Ethics’ and ‘AI Safety’ than I will make, and are all considerably less focused on the common source of harm than I will be. None go as far as I do in essentially suggesting all key forms of harm that we worry about are incidents of the same phenomena of power concentration in and through AI. These pieces are in many ways more research focused, whilst mine is considerably more politically focused.Nonetheless, there is considerable overlap in spirit of identifying that the near-term/ethics and long-term/safety distinction is overemphasised and is not as analytically useful as is made out, as well as the intention of all these pieces and mine to reconcile for mutual benefit of the two factions.A choice not an inevitabilityAt present, there is no AI inevitably coming to harm us. Those AIs that do will be given capabilities, and power to cause harm, by developers. If the AI companies stopped developing their AIs now, and people chose to stop deploying them, then both existential or non-existential harms would stop. These harms are in our hands, and whilst the technologies clearly act as important intermediaries, ultimately it is a human choice, a social choice, and perhaps most importantly a political choice to carry on developing more and more powerful AI systems when such dangers are apparent (or merely plausible or possible). The attempted development of AGI is far from value neutral, far from inevitable and very much in the realm of legitimate political contestation. Thus far, we have simply accepted the right for powerful tech companies to decide our future for us; this is both unnecessary and dangerous.It's important to note that our current acceptance of the right of companies to legislate for our future is historically contingent. In the past, corporate power has been curbed, from colonial era companies, Progressive Era trust-busting, postwar Germany and more, and this could be used again. Whilst governments have often taken a leading role, civil society has also been significant in curbing corporate power and technology development throughout history. Acceptance of corporate dominance i...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We are fighting a shared battle (a call for a different approach to AI Strategy), published by Gideon Futerman on March 16, 2023 on The Effective Altruism Forum.Disclaimer 1: This following essay doesn’t purport to offer much original ideas, and I am certainly a non-expert on AI Governance, so please don’t take my word for these things too seriously. I have linked sources throughout the text, and have some other similar texts later on, but this should merely be treated as another data point in people saying very similar things; far smarter people than I have written on this.Disclaimer 2: This post is quite long, so I recommend reading the section on " A choice not an inevitability" and "It's all about power" for the core of my argument.My argument essentially is as follows; under most plausible understandings of how harms arise from very advanced AI systems, be these AGI or narrow AI or systems somewhere in between, the actors responsible, and the actions that must be taken to reduce or avert the harm, are broadly similar whether you care about both existential and non-existential harms from AI development. I will then further go on to argue that this calls for broad, coalitional politics of people who vastly disagree on specifics of AI systems harms, because we essentially have the same goals.It's important to note that calls like these have happened before. Whilst I will be taking a slightly different argument to them, Prunkl & Whittlestone, Baum, Stix & Maas and Cave & Ó hÉigeartaigh have all made arguments attempting to bridge near term and long term concerns. In general, these proposals (with the exception of Baum) have made calls for narrower cooperation between ‘AI Ethics’ and ‘AI Safety’ than I will make, and are all considerably less focused on the common source of harm than I will be. None go as far as I do in essentially suggesting all key forms of harm that we worry about are incidents of the same phenomena of power concentration in and through AI. These pieces are in many ways more research focused, whilst mine is considerably more politically focused.Nonetheless, there is considerable overlap in spirit of identifying that the near-term/ethics and long-term/safety distinction is overemphasised and is not as analytically useful as is made out, as well as the intention of all these pieces and mine to reconcile for mutual benefit of the two factions.A choice not an inevitabilityAt present, there is no AI inevitably coming to harm us. Those AIs that do will be given capabilities, and power to cause harm, by developers. If the AI companies stopped developing their AIs now, and people chose to stop deploying them, then both existential or non-existential harms would stop. These harms are in our hands, and whilst the technologies clearly act as important intermediaries, ultimately it is a human choice, a social choice, and perhaps most importantly a political choice to carry on developing more and more powerful AI systems when such dangers are apparent (or merely plausible or possible). The attempted development of AGI is far from value neutral, far from inevitable and very much in the realm of legitimate political contestation. Thus far, we have simply accepted the right for powerful tech companies to decide our future for us; this is both unnecessary and dangerous.It's important to note that our current acceptance of the right of companies to legislate for our future is historically contingent. In the past, corporate power has been curbed, from colonial era companies, Progressive Era trust-busting, postwar Germany and more, and this could be used again. Whilst governments have often taken a leading role, civil society has also been significant in curbing corporate power and technology development throughout history. Acceptance of corporate dominance i...]]>
Gideon Futerman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 24:30 None full 5258
dyyXcdgBchGczruJq_NL_EA_EA EA - Donation offsets for ChatGPT Plus subscriptions by Jeffrey Ladish Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Donation offsets for ChatGPT Plus subscriptions, published by Jeffrey Ladish on March 16, 2023 on The Effective Altruism Forum.I've decided to donate $240 to both GovAI and MIRI to offset the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month).I don't have a super strong view on ethical offsets, like donating to anti-factory farming groups to try to offset harm from eating meat. That being said, I currently think offsets are somewhat good for a few reasons:They seem much better than simply contributing to some harm or commons problem and doing nothing, which is often what people would do otherwise.It seems useful to recognize, to notice, when you're contributing to some harm or commons problem. I think a lot of harm comes from people failing to notice or keep track of ways their actions negatively impact others, and the ways that common incentives push them to do worse things.A common Effective Altruism argument against offsets is that they don't make sense from a consequentialist perspective. If you have a budget for doing good, then spend your whole budget on doing as much as possible.If you want to mitigate harms you are contributing to, you can offset by increasing your "doing good" budget, but it doesn't make sense to specialize your mitigations to the particular area where you are contributing to harm rather than the area you think will be the most cost effective in general.I think this is a decently good point, but doesn't move me enough to abandon the idea of offsets entirely. A possible counter-argument is that offsets can be a powerful form of coordination to help solve commons problems. By publicly making a commitment to offset a particular harm, you're establishing a basis for coordination - other people can see you really care about the issue because you made a costly signal.This is similar for the reasons to be vegan or vegetarian - it's probably not the most effective from a naive consequentialist perspective, but it might be effective as a point of coordination via costly signaling.After having used ChatGPT (3.5) and Claude for a few months, I've come to believe that these tools are super useful for research and many other tasks, as well as useful for understanding AI systems themselves. I've also started to use Bing Chat and ChatGPT (4), and found them to be even more impressive as research and learning tools. I think it would be quite bad for the world if conscientious people concerned about AI harms refrained from using these tools, because I think it would disadvantage them in significant ways, including in crucial areas like AI alignment and policy.Unfortunately both can be true:1) Language models are really useful and can help people learn, write, and research more effectively2) The rapid development of huge models is extremely dangerous and a huge contributor to AI existential riskI think OpenAI, and to varying extent other scaling labs, are engaged in reckless behavior scaling up and deploying these systems before we understand how they work enough to be confident in our safety and alignment approaches. And also, I do not recommend people in the "concerned about AI x-risk" reference class refrain from paying for these tools, even if they do not decide to offset these harms. The $20/month to OpenAI for GPT-4 access right now is not a lot of money for a company spending hundreds of millions training new models.But it is something, and I want to recognize that I'm contributing to this rapid scaling and deployment in some way.Weighing all this together, I've decided offsets are the right call for me, and I suspect they might be right for many others, which is why I wanted to share my reasoning here. To be clear, I think concrete actions aimed at quality alignment research or AI policy aimed at buying more time are much mo...]]>
Jeffrey Ladish https://forum.effectivealtruism.org/posts/dyyXcdgBchGczruJq/donation-offsets-for-chatgpt-plus-subscriptions Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Donation offsets for ChatGPT Plus subscriptions, published by Jeffrey Ladish on March 16, 2023 on The Effective Altruism Forum.I've decided to donate $240 to both GovAI and MIRI to offset the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month).I don't have a super strong view on ethical offsets, like donating to anti-factory farming groups to try to offset harm from eating meat. That being said, I currently think offsets are somewhat good for a few reasons:They seem much better than simply contributing to some harm or commons problem and doing nothing, which is often what people would do otherwise.It seems useful to recognize, to notice, when you're contributing to some harm or commons problem. I think a lot of harm comes from people failing to notice or keep track of ways their actions negatively impact others, and the ways that common incentives push them to do worse things.A common Effective Altruism argument against offsets is that they don't make sense from a consequentialist perspective. If you have a budget for doing good, then spend your whole budget on doing as much as possible.If you want to mitigate harms you are contributing to, you can offset by increasing your "doing good" budget, but it doesn't make sense to specialize your mitigations to the particular area where you are contributing to harm rather than the area you think will be the most cost effective in general.I think this is a decently good point, but doesn't move me enough to abandon the idea of offsets entirely. A possible counter-argument is that offsets can be a powerful form of coordination to help solve commons problems. By publicly making a commitment to offset a particular harm, you're establishing a basis for coordination - other people can see you really care about the issue because you made a costly signal.This is similar for the reasons to be vegan or vegetarian - it's probably not the most effective from a naive consequentialist perspective, but it might be effective as a point of coordination via costly signaling.After having used ChatGPT (3.5) and Claude for a few months, I've come to believe that these tools are super useful for research and many other tasks, as well as useful for understanding AI systems themselves. I've also started to use Bing Chat and ChatGPT (4), and found them to be even more impressive as research and learning tools. I think it would be quite bad for the world if conscientious people concerned about AI harms refrained from using these tools, because I think it would disadvantage them in significant ways, including in crucial areas like AI alignment and policy.Unfortunately both can be true:1) Language models are really useful and can help people learn, write, and research more effectively2) The rapid development of huge models is extremely dangerous and a huge contributor to AI existential riskI think OpenAI, and to varying extent other scaling labs, are engaged in reckless behavior scaling up and deploying these systems before we understand how they work enough to be confident in our safety and alignment approaches. And also, I do not recommend people in the "concerned about AI x-risk" reference class refrain from paying for these tools, even if they do not decide to offset these harms. The $20/month to OpenAI for GPT-4 access right now is not a lot of money for a company spending hundreds of millions training new models.But it is something, and I want to recognize that I'm contributing to this rapid scaling and deployment in some way.Weighing all this together, I've decided offsets are the right call for me, and I suspect they might be right for many others, which is why I wanted to share my reasoning here. To be clear, I think concrete actions aimed at quality alignment research or AI policy aimed at buying more time are much mo...]]>
Fri, 17 Mar 2023 01:23:30 +0000 EA - Donation offsets for ChatGPT Plus subscriptions by Jeffrey Ladish Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Donation offsets for ChatGPT Plus subscriptions, published by Jeffrey Ladish on March 16, 2023 on The Effective Altruism Forum.I've decided to donate $240 to both GovAI and MIRI to offset the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month).I don't have a super strong view on ethical offsets, like donating to anti-factory farming groups to try to offset harm from eating meat. That being said, I currently think offsets are somewhat good for a few reasons:They seem much better than simply contributing to some harm or commons problem and doing nothing, which is often what people would do otherwise.It seems useful to recognize, to notice, when you're contributing to some harm or commons problem. I think a lot of harm comes from people failing to notice or keep track of ways their actions negatively impact others, and the ways that common incentives push them to do worse things.A common Effective Altruism argument against offsets is that they don't make sense from a consequentialist perspective. If you have a budget for doing good, then spend your whole budget on doing as much as possible.If you want to mitigate harms you are contributing to, you can offset by increasing your "doing good" budget, but it doesn't make sense to specialize your mitigations to the particular area where you are contributing to harm rather than the area you think will be the most cost effective in general.I think this is a decently good point, but doesn't move me enough to abandon the idea of offsets entirely. A possible counter-argument is that offsets can be a powerful form of coordination to help solve commons problems. By publicly making a commitment to offset a particular harm, you're establishing a basis for coordination - other people can see you really care about the issue because you made a costly signal.This is similar for the reasons to be vegan or vegetarian - it's probably not the most effective from a naive consequentialist perspective, but it might be effective as a point of coordination via costly signaling.After having used ChatGPT (3.5) and Claude for a few months, I've come to believe that these tools are super useful for research and many other tasks, as well as useful for understanding AI systems themselves. I've also started to use Bing Chat and ChatGPT (4), and found them to be even more impressive as research and learning tools. I think it would be quite bad for the world if conscientious people concerned about AI harms refrained from using these tools, because I think it would disadvantage them in significant ways, including in crucial areas like AI alignment and policy.Unfortunately both can be true:1) Language models are really useful and can help people learn, write, and research more effectively2) The rapid development of huge models is extremely dangerous and a huge contributor to AI existential riskI think OpenAI, and to varying extent other scaling labs, are engaged in reckless behavior scaling up and deploying these systems before we understand how they work enough to be confident in our safety and alignment approaches. And also, I do not recommend people in the "concerned about AI x-risk" reference class refrain from paying for these tools, even if they do not decide to offset these harms. The $20/month to OpenAI for GPT-4 access right now is not a lot of money for a company spending hundreds of millions training new models.But it is something, and I want to recognize that I'm contributing to this rapid scaling and deployment in some way.Weighing all this together, I've decided offsets are the right call for me, and I suspect they might be right for many others, which is why I wanted to share my reasoning here. To be clear, I think concrete actions aimed at quality alignment research or AI policy aimed at buying more time are much mo...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Donation offsets for ChatGPT Plus subscriptions, published by Jeffrey Ladish on March 16, 2023 on The Effective Altruism Forum.I've decided to donate $240 to both GovAI and MIRI to offset the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month).I don't have a super strong view on ethical offsets, like donating to anti-factory farming groups to try to offset harm from eating meat. That being said, I currently think offsets are somewhat good for a few reasons:They seem much better than simply contributing to some harm or commons problem and doing nothing, which is often what people would do otherwise.It seems useful to recognize, to notice, when you're contributing to some harm or commons problem. I think a lot of harm comes from people failing to notice or keep track of ways their actions negatively impact others, and the ways that common incentives push them to do worse things.A common Effective Altruism argument against offsets is that they don't make sense from a consequentialist perspective. If you have a budget for doing good, then spend your whole budget on doing as much as possible.If you want to mitigate harms you are contributing to, you can offset by increasing your "doing good" budget, but it doesn't make sense to specialize your mitigations to the particular area where you are contributing to harm rather than the area you think will be the most cost effective in general.I think this is a decently good point, but doesn't move me enough to abandon the idea of offsets entirely. A possible counter-argument is that offsets can be a powerful form of coordination to help solve commons problems. By publicly making a commitment to offset a particular harm, you're establishing a basis for coordination - other people can see you really care about the issue because you made a costly signal.This is similar for the reasons to be vegan or vegetarian - it's probably not the most effective from a naive consequentialist perspective, but it might be effective as a point of coordination via costly signaling.After having used ChatGPT (3.5) and Claude for a few months, I've come to believe that these tools are super useful for research and many other tasks, as well as useful for understanding AI systems themselves. I've also started to use Bing Chat and ChatGPT (4), and found them to be even more impressive as research and learning tools. I think it would be quite bad for the world if conscientious people concerned about AI harms refrained from using these tools, because I think it would disadvantage them in significant ways, including in crucial areas like AI alignment and policy.Unfortunately both can be true:1) Language models are really useful and can help people learn, write, and research more effectively2) The rapid development of huge models is extremely dangerous and a huge contributor to AI existential riskI think OpenAI, and to varying extent other scaling labs, are engaged in reckless behavior scaling up and deploying these systems before we understand how they work enough to be confident in our safety and alignment approaches. And also, I do not recommend people in the "concerned about AI x-risk" reference class refrain from paying for these tools, even if they do not decide to offset these harms. The $20/month to OpenAI for GPT-4 access right now is not a lot of money for a company spending hundreds of millions training new models.But it is something, and I want to recognize that I'm contributing to this rapid scaling and deployment in some way.Weighing all this together, I've decided offsets are the right call for me, and I suspect they might be right for many others, which is why I wanted to share my reasoning here. To be clear, I think concrete actions aimed at quality alignment research or AI policy aimed at buying more time are much mo...]]>
Jeffrey Ladish https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:20 None full 5257
ijwybRLgywP7M5XLZ_NL_EA_EA EA - Some problems in operations at EA orgs: inputs from a dozen ops staff by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some problems in operations at EA orgs: inputs from a dozen ops staff, published by Vaidehi Agarwalla on March 16, 2023 on The Effective Altruism Forum.This is a brief summary of an operations brainstorm that took place during April 2022. It represents the views of operations staff at 8-12 different EA-aligned organizations (approximately). We split up into groups and brainstormed problems, and then chose the top problems to brainstorm some tentative solutions.The aim of the brainstorming session was to highlight things that needed improvement, rather than to evaluate how good EA operations roles are relative to the other non-profit or for-profit roles. It’s possible that EA organizations are not uniquely bad or good - but that doesn’t mean that these issues are not worth addressing. The outside world (especially the non-profit space) is pretty inefficient, and I think it’s worth trying to improve things.Limitations of this data: Meta / community building (and longtermist, to a lesser degree) organizations were overrepresented in this sample, and the tallies are estimates. We didn’t systematically ask people to vote for each and every sub-item, but we think the overall priorities raised were reasonable.General BrainstormingFour major themes came up in the original brainstorming session: bad knowledge management, unrealistic expectations, bad delegation, and lack of respect for operations. The group then re-formed new groups to brainstorm solutions for each of these key pain points.Below, we go into a breakdown of each large issue into specific points raised during the general brainstorming session. Some points were raised multiple times and are indicated by the “(x n)” to indicate how many times the point was raised.Knowledge managementProblemsOrganizations don’t have good systems for knowledge management. Ops staff don’t have enough time to coordinate and develop better systems. There is a general lack of structure, clarity and knowledge.Issues with processes and systems (x 4)No time on larger problemsLack of time to explore & coordinateLack of time to make things easier ([you’re always] putting out fires)[Lack of] organizational structureLine managementCapacity to cover absences [see Unrealistic Expectations]Covering / keeping the show runningResponsibilitiesWorking across time zonesTraining / upskillingManagement training [see improper delegation]Lack of Clarity + KnowledgeLegalComplianceHRHiringWellbeing (including burnout)Lack of skill transferLack of continuity / High turn-over of junior ops specialistsPotential SolutionsLowering the bar - e.g. you don’t need a PhD to work in ops. Pick people with less option value.Ask people to be nice and share with othersBest practice guides shared universally. [Make them] available to people before hiring so they can understand the job better before applying, so [there’s] less turn-over.Database? (Better ops Slack?)Making time to create Knowledge Management Systems - so less fire-fighting.People higher in the organization [should have] better oversight of processes/knowledge.Unrealistic expectationsProblemsEmployers have unrealistic expectations for ops professionals. Ops people are expected to do too much in too little time and always be on call.Lack of capacity / too much to do (x2)[Lack of] capacity to cover absences [from above]Ops people [are expected to be] “always on call”Timelines for projects [are subject to the] planning fallacy, [and there are] last minute changesOps team [are] responsible for all new ideas that people come [up] with - could others do it?Unrealistic expectations aboutcoordination capacityskillsetorganizational memorySolutionsBandwidth (?)Increase capacityHave continuity[give ops staff the] ability to push back on too-big asksRecognitionCreate...]]>
Vaidehi Agarwalla https://forum.effectivealtruism.org/posts/ijwybRLgywP7M5XLZ/some-problems-in-operations-at-ea-orgs-inputs-from-a-dozen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some problems in operations at EA orgs: inputs from a dozen ops staff, published by Vaidehi Agarwalla on March 16, 2023 on The Effective Altruism Forum.This is a brief summary of an operations brainstorm that took place during April 2022. It represents the views of operations staff at 8-12 different EA-aligned organizations (approximately). We split up into groups and brainstormed problems, and then chose the top problems to brainstorm some tentative solutions.The aim of the brainstorming session was to highlight things that needed improvement, rather than to evaluate how good EA operations roles are relative to the other non-profit or for-profit roles. It’s possible that EA organizations are not uniquely bad or good - but that doesn’t mean that these issues are not worth addressing. The outside world (especially the non-profit space) is pretty inefficient, and I think it’s worth trying to improve things.Limitations of this data: Meta / community building (and longtermist, to a lesser degree) organizations were overrepresented in this sample, and the tallies are estimates. We didn’t systematically ask people to vote for each and every sub-item, but we think the overall priorities raised were reasonable.General BrainstormingFour major themes came up in the original brainstorming session: bad knowledge management, unrealistic expectations, bad delegation, and lack of respect for operations. The group then re-formed new groups to brainstorm solutions for each of these key pain points.Below, we go into a breakdown of each large issue into specific points raised during the general brainstorming session. Some points were raised multiple times and are indicated by the “(x n)” to indicate how many times the point was raised.Knowledge managementProblemsOrganizations don’t have good systems for knowledge management. Ops staff don’t have enough time to coordinate and develop better systems. There is a general lack of structure, clarity and knowledge.Issues with processes and systems (x 4)No time on larger problemsLack of time to explore & coordinateLack of time to make things easier ([you’re always] putting out fires)[Lack of] organizational structureLine managementCapacity to cover absences [see Unrealistic Expectations]Covering / keeping the show runningResponsibilitiesWorking across time zonesTraining / upskillingManagement training [see improper delegation]Lack of Clarity + KnowledgeLegalComplianceHRHiringWellbeing (including burnout)Lack of skill transferLack of continuity / High turn-over of junior ops specialistsPotential SolutionsLowering the bar - e.g. you don’t need a PhD to work in ops. Pick people with less option value.Ask people to be nice and share with othersBest practice guides shared universally. [Make them] available to people before hiring so they can understand the job better before applying, so [there’s] less turn-over.Database? (Better ops Slack?)Making time to create Knowledge Management Systems - so less fire-fighting.People higher in the organization [should have] better oversight of processes/knowledge.Unrealistic expectationsProblemsEmployers have unrealistic expectations for ops professionals. Ops people are expected to do too much in too little time and always be on call.Lack of capacity / too much to do (x2)[Lack of] capacity to cover absences [from above]Ops people [are expected to be] “always on call”Timelines for projects [are subject to the] planning fallacy, [and there are] last minute changesOps team [are] responsible for all new ideas that people come [up] with - could others do it?Unrealistic expectations aboutcoordination capacityskillsetorganizational memorySolutionsBandwidth (?)Increase capacityHave continuity[give ops staff the] ability to push back on too-big asksRecognitionCreate...]]>
Thu, 16 Mar 2023 21:49:22 +0000 EA - Some problems in operations at EA orgs: inputs from a dozen ops staff by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some problems in operations at EA orgs: inputs from a dozen ops staff, published by Vaidehi Agarwalla on March 16, 2023 on The Effective Altruism Forum.This is a brief summary of an operations brainstorm that took place during April 2022. It represents the views of operations staff at 8-12 different EA-aligned organizations (approximately). We split up into groups and brainstormed problems, and then chose the top problems to brainstorm some tentative solutions.The aim of the brainstorming session was to highlight things that needed improvement, rather than to evaluate how good EA operations roles are relative to the other non-profit or for-profit roles. It’s possible that EA organizations are not uniquely bad or good - but that doesn’t mean that these issues are not worth addressing. The outside world (especially the non-profit space) is pretty inefficient, and I think it’s worth trying to improve things.Limitations of this data: Meta / community building (and longtermist, to a lesser degree) organizations were overrepresented in this sample, and the tallies are estimates. We didn’t systematically ask people to vote for each and every sub-item, but we think the overall priorities raised were reasonable.General BrainstormingFour major themes came up in the original brainstorming session: bad knowledge management, unrealistic expectations, bad delegation, and lack of respect for operations. The group then re-formed new groups to brainstorm solutions for each of these key pain points.Below, we go into a breakdown of each large issue into specific points raised during the general brainstorming session. Some points were raised multiple times and are indicated by the “(x n)” to indicate how many times the point was raised.Knowledge managementProblemsOrganizations don’t have good systems for knowledge management. Ops staff don’t have enough time to coordinate and develop better systems. There is a general lack of structure, clarity and knowledge.Issues with processes and systems (x 4)No time on larger problemsLack of time to explore & coordinateLack of time to make things easier ([you’re always] putting out fires)[Lack of] organizational structureLine managementCapacity to cover absences [see Unrealistic Expectations]Covering / keeping the show runningResponsibilitiesWorking across time zonesTraining / upskillingManagement training [see improper delegation]Lack of Clarity + KnowledgeLegalComplianceHRHiringWellbeing (including burnout)Lack of skill transferLack of continuity / High turn-over of junior ops specialistsPotential SolutionsLowering the bar - e.g. you don’t need a PhD to work in ops. Pick people with less option value.Ask people to be nice and share with othersBest practice guides shared universally. [Make them] available to people before hiring so they can understand the job better before applying, so [there’s] less turn-over.Database? (Better ops Slack?)Making time to create Knowledge Management Systems - so less fire-fighting.People higher in the organization [should have] better oversight of processes/knowledge.Unrealistic expectationsProblemsEmployers have unrealistic expectations for ops professionals. Ops people are expected to do too much in too little time and always be on call.Lack of capacity / too much to do (x2)[Lack of] capacity to cover absences [from above]Ops people [are expected to be] “always on call”Timelines for projects [are subject to the] planning fallacy, [and there are] last minute changesOps team [are] responsible for all new ideas that people come [up] with - could others do it?Unrealistic expectations aboutcoordination capacityskillsetorganizational memorySolutionsBandwidth (?)Increase capacityHave continuity[give ops staff the] ability to push back on too-big asksRecognitionCreate...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some problems in operations at EA orgs: inputs from a dozen ops staff, published by Vaidehi Agarwalla on March 16, 2023 on The Effective Altruism Forum.This is a brief summary of an operations brainstorm that took place during April 2022. It represents the views of operations staff at 8-12 different EA-aligned organizations (approximately). We split up into groups and brainstormed problems, and then chose the top problems to brainstorm some tentative solutions.The aim of the brainstorming session was to highlight things that needed improvement, rather than to evaluate how good EA operations roles are relative to the other non-profit or for-profit roles. It’s possible that EA organizations are not uniquely bad or good - but that doesn’t mean that these issues are not worth addressing. The outside world (especially the non-profit space) is pretty inefficient, and I think it’s worth trying to improve things.Limitations of this data: Meta / community building (and longtermist, to a lesser degree) organizations were overrepresented in this sample, and the tallies are estimates. We didn’t systematically ask people to vote for each and every sub-item, but we think the overall priorities raised were reasonable.General BrainstormingFour major themes came up in the original brainstorming session: bad knowledge management, unrealistic expectations, bad delegation, and lack of respect for operations. The group then re-formed new groups to brainstorm solutions for each of these key pain points.Below, we go into a breakdown of each large issue into specific points raised during the general brainstorming session. Some points were raised multiple times and are indicated by the “(x n)” to indicate how many times the point was raised.Knowledge managementProblemsOrganizations don’t have good systems for knowledge management. Ops staff don’t have enough time to coordinate and develop better systems. There is a general lack of structure, clarity and knowledge.Issues with processes and systems (x 4)No time on larger problemsLack of time to explore & coordinateLack of time to make things easier ([you’re always] putting out fires)[Lack of] organizational structureLine managementCapacity to cover absences [see Unrealistic Expectations]Covering / keeping the show runningResponsibilitiesWorking across time zonesTraining / upskillingManagement training [see improper delegation]Lack of Clarity + KnowledgeLegalComplianceHRHiringWellbeing (including burnout)Lack of skill transferLack of continuity / High turn-over of junior ops specialistsPotential SolutionsLowering the bar - e.g. you don’t need a PhD to work in ops. Pick people with less option value.Ask people to be nice and share with othersBest practice guides shared universally. [Make them] available to people before hiring so they can understand the job better before applying, so [there’s] less turn-over.Database? (Better ops Slack?)Making time to create Knowledge Management Systems - so less fire-fighting.People higher in the organization [should have] better oversight of processes/knowledge.Unrealistic expectationsProblemsEmployers have unrealistic expectations for ops professionals. Ops people are expected to do too much in too little time and always be on call.Lack of capacity / too much to do (x2)[Lack of] capacity to cover absences [from above]Ops people [are expected to be] “always on call”Timelines for projects [are subject to the] planning fallacy, [and there are] last minute changesOps team [are] responsible for all new ideas that people come [up] with - could others do it?Unrealistic expectations aboutcoordination capacityskillsetorganizational memorySolutionsBandwidth (?)Increase capacityHave continuity[give ops staff the] ability to push back on too-big asksRecognitionCreate...]]>
Vaidehi Agarwalla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:44 None full 5248
Cd8wM4jZPnaAA8vwX_NL_EA_EA EA - [Linkpost] Why pescetarianism is bad for animal welfare - Vox, Future Perfect by Garrison Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Why pescetarianism is bad for animal welfare - Vox, Future Perfect, published by Garrison on March 16, 2023 on The Effective Altruism Forum.In my debut for Vox, I write about why switching to a pescetarian diet for animal welfare reasons is probably a mistake.I was motivated to reduce animal consumption by EA reasoning. I initially thought that the moral progression of diets was something like vegan > vegetarian > pescetarian > omnivore. But I now think the typical pescetarian diet is worse than an omnivorous one. (I was actually convinced in part by an EA NYC talk by Becca Franks on fish psychology.)Why?Fish usually eat other fish, and they're smaller on average than typical farmed animals.The evidence for their sentience is much stronger than I previously thought. I think my credence is now something like P(pig/cow sentience) = 99.99%, P(chicken/fish sentience) = 99%Given that there are ~30k fish species, generalizing about them is a bit tricky, but I think the evidence of fish sentience is about as strong as the evidence for chicken sentience, something I would guess more people accept.I also spend time discussing:environmental impacts of fishingconsumer choice vs. systemic changeshrimp welfareThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Garrison https://forum.effectivealtruism.org/posts/Cd8wM4jZPnaAA8vwX/linkpost-why-pescetarianism-is-bad-for-animal-welfare-vox Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Why pescetarianism is bad for animal welfare - Vox, Future Perfect, published by Garrison on March 16, 2023 on The Effective Altruism Forum.In my debut for Vox, I write about why switching to a pescetarian diet for animal welfare reasons is probably a mistake.I was motivated to reduce animal consumption by EA reasoning. I initially thought that the moral progression of diets was something like vegan > vegetarian > pescetarian > omnivore. But I now think the typical pescetarian diet is worse than an omnivorous one. (I was actually convinced in part by an EA NYC talk by Becca Franks on fish psychology.)Why?Fish usually eat other fish, and they're smaller on average than typical farmed animals.The evidence for their sentience is much stronger than I previously thought. I think my credence is now something like P(pig/cow sentience) = 99.99%, P(chicken/fish sentience) = 99%Given that there are ~30k fish species, generalizing about them is a bit tricky, but I think the evidence of fish sentience is about as strong as the evidence for chicken sentience, something I would guess more people accept.I also spend time discussing:environmental impacts of fishingconsumer choice vs. systemic changeshrimp welfareThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 16 Mar 2023 20:00:40 +0000 EA - [Linkpost] Why pescetarianism is bad for animal welfare - Vox, Future Perfect by Garrison Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Why pescetarianism is bad for animal welfare - Vox, Future Perfect, published by Garrison on March 16, 2023 on The Effective Altruism Forum.In my debut for Vox, I write about why switching to a pescetarian diet for animal welfare reasons is probably a mistake.I was motivated to reduce animal consumption by EA reasoning. I initially thought that the moral progression of diets was something like vegan > vegetarian > pescetarian > omnivore. But I now think the typical pescetarian diet is worse than an omnivorous one. (I was actually convinced in part by an EA NYC talk by Becca Franks on fish psychology.)Why?Fish usually eat other fish, and they're smaller on average than typical farmed animals.The evidence for their sentience is much stronger than I previously thought. I think my credence is now something like P(pig/cow sentience) = 99.99%, P(chicken/fish sentience) = 99%Given that there are ~30k fish species, generalizing about them is a bit tricky, but I think the evidence of fish sentience is about as strong as the evidence for chicken sentience, something I would guess more people accept.I also spend time discussing:environmental impacts of fishingconsumer choice vs. systemic changeshrimp welfareThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Why pescetarianism is bad for animal welfare - Vox, Future Perfect, published by Garrison on March 16, 2023 on The Effective Altruism Forum.In my debut for Vox, I write about why switching to a pescetarian diet for animal welfare reasons is probably a mistake.I was motivated to reduce animal consumption by EA reasoning. I initially thought that the moral progression of diets was something like vegan > vegetarian > pescetarian > omnivore. But I now think the typical pescetarian diet is worse than an omnivorous one. (I was actually convinced in part by an EA NYC talk by Becca Franks on fish psychology.)Why?Fish usually eat other fish, and they're smaller on average than typical farmed animals.The evidence for their sentience is much stronger than I previously thought. I think my credence is now something like P(pig/cow sentience) = 99.99%, P(chicken/fish sentience) = 99%Given that there are ~30k fish species, generalizing about them is a bit tricky, but I think the evidence of fish sentience is about as strong as the evidence for chicken sentience, something I would guess more people accept.I also spend time discussing:environmental impacts of fishingconsumer choice vs. systemic changeshrimp welfareThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Garrison https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:34 None full 5249
r5kffvkLfknn9yojW_NL_EA_EA EA - Announcing the ERA Cambridge Summer Research Fellowship by Nandini Shiralkar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the ERA Cambridge Summer Research Fellowship, published by Nandini Shiralkar on March 16, 2023 on The Effective Altruism Forum.The Existential Risk Alliance (ERA) has opened applications for an in-person, paid, 8-week Summer Research Fellowship focused on existential risk mitigation, taking place from July 3rd to August 25th 2023 in Cambridge, UK, and aimed at all aspiring researchers, including undergraduates.To apply and find out more, please visit the ERA website.If you are interested in mentoring fellows on this programme, please submit your name, email and research area here, and we will get in touch with you in due course.If you know other people who would be a good fit, please encourage them to apply (people are more likely to apply if you recommend they do, even if they have already heard of the opportunity!) If you are a leader or organiser of relevant community spaces, we encourage you to post an announcement with a link to this post, or alternatively a printable poster is here.Applications will be reviewed as they are submitted, and we encourage early applications, as offers will be sent out as soon as suitable candidates are found. We will accept applications until April 5, 2023 (23:59 in US Eastern Daylight Time).The ERA Cambridge Fellowship (previously known as the CERI Fellowship) is a fantastic opportunity to:Build your portfolio by researching a topic relevant to understanding and mitigating existential risks to human civilisation.Receive guidance and develop your research skills, via weekly mentorship from a researcher in the field.Form lasting connections with other fellows who care about mitigating existential risks, while also engaging with local events including discussions and Q&As with experts.Why we are running this programmeOur mission as an organisation is to reduce the probability of an existential catastrophe. We believe that one of the key ways to reduce existential risk lies in fostering a community of dedicated and knowledgeable x-risk researchers. Through our summer research fellowship programme, we aim to identify and support aspiring researchers in this field, providing them with the resources and the mentorship needed to succeed.What we provideA salary equivalent to £31,200 per year, which will be prorated to the duration of the summer programme.Mentorship from a researcher working in a related field.Complimentary accommodation, meal provisions during working hours, and travel expense coverageDedicated desk space at our office in central Cambridge.Opportunity to work either on a group research project with other fellows or individually.Networking and learning opportunities through various events, including trips to Oxford and London.What we are looking forWe are excited to support a wide range of research, from the purely technical to the philosophical, as long as there is direct relevance to mitigating existential risk. This could also include social science or policy projects focusing on implementing existential risk mitigation strategies.Incredibly successful projects would slightly reduce the likelihood that human civilisation will permanently collapse, that humans will go extinct, or that the future potential of humanity will be permanently reduced. A secondary goal of this project is for fellows to learn more about working on existential risk mitigation, develop relevant skills, and test their fit for further research or work in this field.Who we are looking forAnyone can apply to the fellowship, though we expect it to be most useful to students (from undergraduates to postgraduates) and early-career individuals looking to test their fit for existential risk research. We particularly encourage undergraduates to apply, to develop their research experience.We are looking to support proactive i...]]>
Nandini Shiralkar https://forum.effectivealtruism.org/posts/r5kffvkLfknn9yojW/announcing-the-era-cambridge-summer-research-fellowship Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the ERA Cambridge Summer Research Fellowship, published by Nandini Shiralkar on March 16, 2023 on The Effective Altruism Forum.The Existential Risk Alliance (ERA) has opened applications for an in-person, paid, 8-week Summer Research Fellowship focused on existential risk mitigation, taking place from July 3rd to August 25th 2023 in Cambridge, UK, and aimed at all aspiring researchers, including undergraduates.To apply and find out more, please visit the ERA website.If you are interested in mentoring fellows on this programme, please submit your name, email and research area here, and we will get in touch with you in due course.If you know other people who would be a good fit, please encourage them to apply (people are more likely to apply if you recommend they do, even if they have already heard of the opportunity!) If you are a leader or organiser of relevant community spaces, we encourage you to post an announcement with a link to this post, or alternatively a printable poster is here.Applications will be reviewed as they are submitted, and we encourage early applications, as offers will be sent out as soon as suitable candidates are found. We will accept applications until April 5, 2023 (23:59 in US Eastern Daylight Time).The ERA Cambridge Fellowship (previously known as the CERI Fellowship) is a fantastic opportunity to:Build your portfolio by researching a topic relevant to understanding and mitigating existential risks to human civilisation.Receive guidance and develop your research skills, via weekly mentorship from a researcher in the field.Form lasting connections with other fellows who care about mitigating existential risks, while also engaging with local events including discussions and Q&As with experts.Why we are running this programmeOur mission as an organisation is to reduce the probability of an existential catastrophe. We believe that one of the key ways to reduce existential risk lies in fostering a community of dedicated and knowledgeable x-risk researchers. Through our summer research fellowship programme, we aim to identify and support aspiring researchers in this field, providing them with the resources and the mentorship needed to succeed.What we provideA salary equivalent to £31,200 per year, which will be prorated to the duration of the summer programme.Mentorship from a researcher working in a related field.Complimentary accommodation, meal provisions during working hours, and travel expense coverageDedicated desk space at our office in central Cambridge.Opportunity to work either on a group research project with other fellows or individually.Networking and learning opportunities through various events, including trips to Oxford and London.What we are looking forWe are excited to support a wide range of research, from the purely technical to the philosophical, as long as there is direct relevance to mitigating existential risk. This could also include social science or policy projects focusing on implementing existential risk mitigation strategies.Incredibly successful projects would slightly reduce the likelihood that human civilisation will permanently collapse, that humans will go extinct, or that the future potential of humanity will be permanently reduced. A secondary goal of this project is for fellows to learn more about working on existential risk mitigation, develop relevant skills, and test their fit for further research or work in this field.Who we are looking forAnyone can apply to the fellowship, though we expect it to be most useful to students (from undergraduates to postgraduates) and early-career individuals looking to test their fit for existential risk research. We particularly encourage undergraduates to apply, to develop their research experience.We are looking to support proactive i...]]>
Thu, 16 Mar 2023 13:59:02 +0000 EA - Announcing the ERA Cambridge Summer Research Fellowship by Nandini Shiralkar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the ERA Cambridge Summer Research Fellowship, published by Nandini Shiralkar on March 16, 2023 on The Effective Altruism Forum.The Existential Risk Alliance (ERA) has opened applications for an in-person, paid, 8-week Summer Research Fellowship focused on existential risk mitigation, taking place from July 3rd to August 25th 2023 in Cambridge, UK, and aimed at all aspiring researchers, including undergraduates.To apply and find out more, please visit the ERA website.If you are interested in mentoring fellows on this programme, please submit your name, email and research area here, and we will get in touch with you in due course.If you know other people who would be a good fit, please encourage them to apply (people are more likely to apply if you recommend they do, even if they have already heard of the opportunity!) If you are a leader or organiser of relevant community spaces, we encourage you to post an announcement with a link to this post, or alternatively a printable poster is here.Applications will be reviewed as they are submitted, and we encourage early applications, as offers will be sent out as soon as suitable candidates are found. We will accept applications until April 5, 2023 (23:59 in US Eastern Daylight Time).The ERA Cambridge Fellowship (previously known as the CERI Fellowship) is a fantastic opportunity to:Build your portfolio by researching a topic relevant to understanding and mitigating existential risks to human civilisation.Receive guidance and develop your research skills, via weekly mentorship from a researcher in the field.Form lasting connections with other fellows who care about mitigating existential risks, while also engaging with local events including discussions and Q&As with experts.Why we are running this programmeOur mission as an organisation is to reduce the probability of an existential catastrophe. We believe that one of the key ways to reduce existential risk lies in fostering a community of dedicated and knowledgeable x-risk researchers. Through our summer research fellowship programme, we aim to identify and support aspiring researchers in this field, providing them with the resources and the mentorship needed to succeed.What we provideA salary equivalent to £31,200 per year, which will be prorated to the duration of the summer programme.Mentorship from a researcher working in a related field.Complimentary accommodation, meal provisions during working hours, and travel expense coverageDedicated desk space at our office in central Cambridge.Opportunity to work either on a group research project with other fellows or individually.Networking and learning opportunities through various events, including trips to Oxford and London.What we are looking forWe are excited to support a wide range of research, from the purely technical to the philosophical, as long as there is direct relevance to mitigating existential risk. This could also include social science or policy projects focusing on implementing existential risk mitigation strategies.Incredibly successful projects would slightly reduce the likelihood that human civilisation will permanently collapse, that humans will go extinct, or that the future potential of humanity will be permanently reduced. A secondary goal of this project is for fellows to learn more about working on existential risk mitigation, develop relevant skills, and test their fit for further research or work in this field.Who we are looking forAnyone can apply to the fellowship, though we expect it to be most useful to students (from undergraduates to postgraduates) and early-career individuals looking to test their fit for existential risk research. We particularly encourage undergraduates to apply, to develop their research experience.We are looking to support proactive i...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the ERA Cambridge Summer Research Fellowship, published by Nandini Shiralkar on March 16, 2023 on The Effective Altruism Forum.The Existential Risk Alliance (ERA) has opened applications for an in-person, paid, 8-week Summer Research Fellowship focused on existential risk mitigation, taking place from July 3rd to August 25th 2023 in Cambridge, UK, and aimed at all aspiring researchers, including undergraduates.To apply and find out more, please visit the ERA website.If you are interested in mentoring fellows on this programme, please submit your name, email and research area here, and we will get in touch with you in due course.If you know other people who would be a good fit, please encourage them to apply (people are more likely to apply if you recommend they do, even if they have already heard of the opportunity!) If you are a leader or organiser of relevant community spaces, we encourage you to post an announcement with a link to this post, or alternatively a printable poster is here.Applications will be reviewed as they are submitted, and we encourage early applications, as offers will be sent out as soon as suitable candidates are found. We will accept applications until April 5, 2023 (23:59 in US Eastern Daylight Time).The ERA Cambridge Fellowship (previously known as the CERI Fellowship) is a fantastic opportunity to:Build your portfolio by researching a topic relevant to understanding and mitigating existential risks to human civilisation.Receive guidance and develop your research skills, via weekly mentorship from a researcher in the field.Form lasting connections with other fellows who care about mitigating existential risks, while also engaging with local events including discussions and Q&As with experts.Why we are running this programmeOur mission as an organisation is to reduce the probability of an existential catastrophe. We believe that one of the key ways to reduce existential risk lies in fostering a community of dedicated and knowledgeable x-risk researchers. Through our summer research fellowship programme, we aim to identify and support aspiring researchers in this field, providing them with the resources and the mentorship needed to succeed.What we provideA salary equivalent to £31,200 per year, which will be prorated to the duration of the summer programme.Mentorship from a researcher working in a related field.Complimentary accommodation, meal provisions during working hours, and travel expense coverageDedicated desk space at our office in central Cambridge.Opportunity to work either on a group research project with other fellows or individually.Networking and learning opportunities through various events, including trips to Oxford and London.What we are looking forWe are excited to support a wide range of research, from the purely technical to the philosophical, as long as there is direct relevance to mitigating existential risk. This could also include social science or policy projects focusing on implementing existential risk mitigation strategies.Incredibly successful projects would slightly reduce the likelihood that human civilisation will permanently collapse, that humans will go extinct, or that the future potential of humanity will be permanently reduced. A secondary goal of this project is for fellows to learn more about working on existential risk mitigation, develop relevant skills, and test their fit for further research or work in this field.Who we are looking forAnyone can apply to the fellowship, though we expect it to be most useful to students (from undergraduates to postgraduates) and early-career individuals looking to test their fit for existential risk research. We particularly encourage undergraduates to apply, to develop their research experience.We are looking to support proactive i...]]>
Nandini Shiralkar https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:51 None full 5250
oqZfunLtKoDccxMHa_NL_EA_EA EA - Offer an option to Muslim donors; grow effective giving by GiveDirectly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Offer an option to Muslim donors; grow effective giving, published by GiveDirectly on March 16, 2023 on The Effective Altruism Forum.SummaryIn order to offer Muslim donors a way to give their annual religious tithing (zakat) to an EA-aligned intervention, GiveDirectly launched a zakat-compliant fund, delivered as cash to Yemeni families displaced by the civil war. Muslims give ~$600B/year in Zakat to the global poor, though much of this is given informally or to less-than-effective NGOs.Through this unconditional cash transfer option, we’re offering Muslims the opportunity to redirect a portion of their giving to a measurably high-impact intervention and introduce more Muslims to EA’s theory of effective giving. We invite readers to share thoughts in the comments and to share the campaign far and wide.Muslims are the fastest-growing religious group and give annuallyAs Ahmed Ghoor observed, Muslims make up about 24% of the world population (1.8B people) and Islam is the fastest growing religion. Despite having a robust tradition of charitable giving, little has been done proactively to engage the Muslim community on the ideas of effective altruism. An important step to inclusion is offering this pathway for effectively donating zakat.Zakat is a sacred pillar of Islam, a large portion of which is given to the needyFor non-Muslim readers: one of the five pillars of Islam, zakat is mandatory giving; Muslims eligible to pay it donate at least 2.5% of their accumulated wealth annually for the benefit of the poor, destitute, and others – classified as mustahik. Some key points:A major cited aim of Zakat is to provide relief from and ultimately eradicate poverty.It is generally held that zakat can only be given to other Muslims.A large portion of zakat is given informally person-to-person or through mosques and Islamic charities.Zakat is a sacred form of charity; it’s most often given during the holy month of Ramadan.Direct cash transfers are a neglected zakat optionZakat giving is estimated at $1.8B in the U.S. alone with $450M going to international NGOs, who mostly use their funds for in-kind support like food, tents, and clothing. Dr. Shahrul Hussain, an Islamic scholar, argues that cash transfers “should be considered a primary method of zakat distribution,” as, according to the Islamic principle of tamlīk (ownership), the recipients of the zakat have total ownership over the money, and it is up to them (not an intermediary third-party organization or charity) how it is spent. He also notes “the immense benefits of unconditional cash transfer in comparison to in-kind transfer."This is a simple, transparent means of transferring wealth that empowers the recipients. However, other than informal person-to-person giving, there are limited options to give zakat as 100% unconditional cash.GiveDirectly now allows zakat to be given as cash to Muslims in extreme povertyAs an opportunity for Muslims to donate zakat directly as cash, GiveDirectly created a zakat-compliant fund to give cash through our program in Yemen. While GiveDirectly is a secular organization, our Yemen program and Zakat policy have been reviewed and certified by Amanah Advisors. In order to achieve this, we’re assured that 100% of donations will be delivered as cash, using non-zakat funds to cover the associated delivery costs.Donations through our page are tax-deductible in the U.S. and our partners at Giving What We Can created a page allowing donors to give 100% of their gift to GiveDirectly’s zakat-compliant fund, tax-deductible in the Netherlands and the U.K. Taken together, this provides a tax-deductible option for 8.6M Muslims across three countries.As a secular NGO, GiveDirectly may struggle to gain traction with Muslim donorsGiveDirectly is a credible option for zakat donors: we’ve...]]>
GiveDirectly https://forum.effectivealtruism.org/posts/oqZfunLtKoDccxMHa/offer-an-option-to-muslim-donors-grow-effective-giving Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Offer an option to Muslim donors; grow effective giving, published by GiveDirectly on March 16, 2023 on The Effective Altruism Forum.SummaryIn order to offer Muslim donors a way to give their annual religious tithing (zakat) to an EA-aligned intervention, GiveDirectly launched a zakat-compliant fund, delivered as cash to Yemeni families displaced by the civil war. Muslims give ~$600B/year in Zakat to the global poor, though much of this is given informally or to less-than-effective NGOs.Through this unconditional cash transfer option, we’re offering Muslims the opportunity to redirect a portion of their giving to a measurably high-impact intervention and introduce more Muslims to EA’s theory of effective giving. We invite readers to share thoughts in the comments and to share the campaign far and wide.Muslims are the fastest-growing religious group and give annuallyAs Ahmed Ghoor observed, Muslims make up about 24% of the world population (1.8B people) and Islam is the fastest growing religion. Despite having a robust tradition of charitable giving, little has been done proactively to engage the Muslim community on the ideas of effective altruism. An important step to inclusion is offering this pathway for effectively donating zakat.Zakat is a sacred pillar of Islam, a large portion of which is given to the needyFor non-Muslim readers: one of the five pillars of Islam, zakat is mandatory giving; Muslims eligible to pay it donate at least 2.5% of their accumulated wealth annually for the benefit of the poor, destitute, and others – classified as mustahik. Some key points:A major cited aim of Zakat is to provide relief from and ultimately eradicate poverty.It is generally held that zakat can only be given to other Muslims.A large portion of zakat is given informally person-to-person or through mosques and Islamic charities.Zakat is a sacred form of charity; it’s most often given during the holy month of Ramadan.Direct cash transfers are a neglected zakat optionZakat giving is estimated at $1.8B in the U.S. alone with $450M going to international NGOs, who mostly use their funds for in-kind support like food, tents, and clothing. Dr. Shahrul Hussain, an Islamic scholar, argues that cash transfers “should be considered a primary method of zakat distribution,” as, according to the Islamic principle of tamlīk (ownership), the recipients of the zakat have total ownership over the money, and it is up to them (not an intermediary third-party organization or charity) how it is spent. He also notes “the immense benefits of unconditional cash transfer in comparison to in-kind transfer."This is a simple, transparent means of transferring wealth that empowers the recipients. However, other than informal person-to-person giving, there are limited options to give zakat as 100% unconditional cash.GiveDirectly now allows zakat to be given as cash to Muslims in extreme povertyAs an opportunity for Muslims to donate zakat directly as cash, GiveDirectly created a zakat-compliant fund to give cash through our program in Yemen. While GiveDirectly is a secular organization, our Yemen program and Zakat policy have been reviewed and certified by Amanah Advisors. In order to achieve this, we’re assured that 100% of donations will be delivered as cash, using non-zakat funds to cover the associated delivery costs.Donations through our page are tax-deductible in the U.S. and our partners at Giving What We Can created a page allowing donors to give 100% of their gift to GiveDirectly’s zakat-compliant fund, tax-deductible in the Netherlands and the U.K. Taken together, this provides a tax-deductible option for 8.6M Muslims across three countries.As a secular NGO, GiveDirectly may struggle to gain traction with Muslim donorsGiveDirectly is a credible option for zakat donors: we’ve...]]>
Thu, 16 Mar 2023 08:54:54 +0000 EA - Offer an option to Muslim donors; grow effective giving by GiveDirectly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Offer an option to Muslim donors; grow effective giving, published by GiveDirectly on March 16, 2023 on The Effective Altruism Forum.SummaryIn order to offer Muslim donors a way to give their annual religious tithing (zakat) to an EA-aligned intervention, GiveDirectly launched a zakat-compliant fund, delivered as cash to Yemeni families displaced by the civil war. Muslims give ~$600B/year in Zakat to the global poor, though much of this is given informally or to less-than-effective NGOs.Through this unconditional cash transfer option, we’re offering Muslims the opportunity to redirect a portion of their giving to a measurably high-impact intervention and introduce more Muslims to EA’s theory of effective giving. We invite readers to share thoughts in the comments and to share the campaign far and wide.Muslims are the fastest-growing religious group and give annuallyAs Ahmed Ghoor observed, Muslims make up about 24% of the world population (1.8B people) and Islam is the fastest growing religion. Despite having a robust tradition of charitable giving, little has been done proactively to engage the Muslim community on the ideas of effective altruism. An important step to inclusion is offering this pathway for effectively donating zakat.Zakat is a sacred pillar of Islam, a large portion of which is given to the needyFor non-Muslim readers: one of the five pillars of Islam, zakat is mandatory giving; Muslims eligible to pay it donate at least 2.5% of their accumulated wealth annually for the benefit of the poor, destitute, and others – classified as mustahik. Some key points:A major cited aim of Zakat is to provide relief from and ultimately eradicate poverty.It is generally held that zakat can only be given to other Muslims.A large portion of zakat is given informally person-to-person or through mosques and Islamic charities.Zakat is a sacred form of charity; it’s most often given during the holy month of Ramadan.Direct cash transfers are a neglected zakat optionZakat giving is estimated at $1.8B in the U.S. alone with $450M going to international NGOs, who mostly use their funds for in-kind support like food, tents, and clothing. Dr. Shahrul Hussain, an Islamic scholar, argues that cash transfers “should be considered a primary method of zakat distribution,” as, according to the Islamic principle of tamlīk (ownership), the recipients of the zakat have total ownership over the money, and it is up to them (not an intermediary third-party organization or charity) how it is spent. He also notes “the immense benefits of unconditional cash transfer in comparison to in-kind transfer."This is a simple, transparent means of transferring wealth that empowers the recipients. However, other than informal person-to-person giving, there are limited options to give zakat as 100% unconditional cash.GiveDirectly now allows zakat to be given as cash to Muslims in extreme povertyAs an opportunity for Muslims to donate zakat directly as cash, GiveDirectly created a zakat-compliant fund to give cash through our program in Yemen. While GiveDirectly is a secular organization, our Yemen program and Zakat policy have been reviewed and certified by Amanah Advisors. In order to achieve this, we’re assured that 100% of donations will be delivered as cash, using non-zakat funds to cover the associated delivery costs.Donations through our page are tax-deductible in the U.S. and our partners at Giving What We Can created a page allowing donors to give 100% of their gift to GiveDirectly’s zakat-compliant fund, tax-deductible in the Netherlands and the U.K. Taken together, this provides a tax-deductible option for 8.6M Muslims across three countries.As a secular NGO, GiveDirectly may struggle to gain traction with Muslim donorsGiveDirectly is a credible option for zakat donors: we’ve...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Offer an option to Muslim donors; grow effective giving, published by GiveDirectly on March 16, 2023 on The Effective Altruism Forum.SummaryIn order to offer Muslim donors a way to give their annual religious tithing (zakat) to an EA-aligned intervention, GiveDirectly launched a zakat-compliant fund, delivered as cash to Yemeni families displaced by the civil war. Muslims give ~$600B/year in Zakat to the global poor, though much of this is given informally or to less-than-effective NGOs.Through this unconditional cash transfer option, we’re offering Muslims the opportunity to redirect a portion of their giving to a measurably high-impact intervention and introduce more Muslims to EA’s theory of effective giving. We invite readers to share thoughts in the comments and to share the campaign far and wide.Muslims are the fastest-growing religious group and give annuallyAs Ahmed Ghoor observed, Muslims make up about 24% of the world population (1.8B people) and Islam is the fastest growing religion. Despite having a robust tradition of charitable giving, little has been done proactively to engage the Muslim community on the ideas of effective altruism. An important step to inclusion is offering this pathway for effectively donating zakat.Zakat is a sacred pillar of Islam, a large portion of which is given to the needyFor non-Muslim readers: one of the five pillars of Islam, zakat is mandatory giving; Muslims eligible to pay it donate at least 2.5% of their accumulated wealth annually for the benefit of the poor, destitute, and others – classified as mustahik. Some key points:A major cited aim of Zakat is to provide relief from and ultimately eradicate poverty.It is generally held that zakat can only be given to other Muslims.A large portion of zakat is given informally person-to-person or through mosques and Islamic charities.Zakat is a sacred form of charity; it’s most often given during the holy month of Ramadan.Direct cash transfers are a neglected zakat optionZakat giving is estimated at $1.8B in the U.S. alone with $450M going to international NGOs, who mostly use their funds for in-kind support like food, tents, and clothing. Dr. Shahrul Hussain, an Islamic scholar, argues that cash transfers “should be considered a primary method of zakat distribution,” as, according to the Islamic principle of tamlīk (ownership), the recipients of the zakat have total ownership over the money, and it is up to them (not an intermediary third-party organization or charity) how it is spent. He also notes “the immense benefits of unconditional cash transfer in comparison to in-kind transfer."This is a simple, transparent means of transferring wealth that empowers the recipients. However, other than informal person-to-person giving, there are limited options to give zakat as 100% unconditional cash.GiveDirectly now allows zakat to be given as cash to Muslims in extreme povertyAs an opportunity for Muslims to donate zakat directly as cash, GiveDirectly created a zakat-compliant fund to give cash through our program in Yemen. While GiveDirectly is a secular organization, our Yemen program and Zakat policy have been reviewed and certified by Amanah Advisors. In order to achieve this, we’re assured that 100% of donations will be delivered as cash, using non-zakat funds to cover the associated delivery costs.Donations through our page are tax-deductible in the U.S. and our partners at Giving What We Can created a page allowing donors to give 100% of their gift to GiveDirectly’s zakat-compliant fund, tax-deductible in the Netherlands and the U.K. Taken together, this provides a tax-deductible option for 8.6M Muslims across three countries.As a secular NGO, GiveDirectly may struggle to gain traction with Muslim donorsGiveDirectly is a credible option for zakat donors: we’ve...]]>
GiveDirectly https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:06 None full 5241
aBp2AozoGExn8rMwb_NL_EA_EA EA - Write a Book? by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Write a Book?, published by Jeff Kaufman on March 16, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/aBp2AozoGExn8rMwb/write-a-book Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Write a Book?, published by Jeff Kaufman on March 16, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 16 Mar 2023 00:46:29 +0000 EA - Write a Book? by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Write a Book?, published by Jeff Kaufman on March 16, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Write a Book?, published by Jeff Kaufman on March 16, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:23 None full 5242
hCwDNq6sZofgSEN3s_NL_EA_EA EA - AI Safety - 7 months of discussion in 17 minutes by Zoe Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety - 7 months of discussion in 17 minutes, published by Zoe Williams on March 15, 2023 on The Effective Altruism Forum.In August 2022, I started making summaries of the top EA and LW forum posts each week. This post collates together the key trends I’ve seen in AI Safety discussions since then. Note a lot of good work is happening outside what's posted on these forums too! This post doesn't try to cover that work.If you’d like to keep up on a more regular basis, consider subscribing to the Weekly EA & LW Forum Summaries. And if you’re interested in similar overviews for other fields, check out this post covering 6 months of animal welfare discussion in 6 minutes.Disclaimer: this is a blog post and not a research report - meaning it was produced quickly and is not to our (Rethink Priorities') typical standards of substantiveness and careful checking for accuracy. Please let me know if anything looks wrong or if I've missed key pieces!Table of Contents(It's a long post! Feel free to pick and choose sections to read, they 're all written to make sense individually)Key TakeawaysResource CollationsAI CapabilitiesProgressWhat AI still fails atPublic attention moves toward safetyAI GovernanceAI Safety StandardsSlow down (dangerous) AIPolicyUS / China Export RestrictionsPaths to impactForecastingQuantitative historical forecastingNarrative forecastingTechnical AI SafetyOverall TrendsInterpretabilityReinforcement Learning from Human Feedback (RLHF)AI assistance for alignmentBounded AIsTheoretical UnderstandingOutreach & Community-BuildingAcademics and researchersUniversity groupsCareer PathsGeneral guidanceShould anyone work in capabilities?Arguments for and against high x-riskAgainst high x-risk from AICounters to the above argumentsAppendix - All Post SummariesKey TakeawaysThere are multiple living websites that provide good entry points into understanding AI Safety ideas, communities, key players, research agendas, and opportunities to train or enter the field. (see more)Large language models like ChatGPT have drawn significant attention to AI and kick-started race dynamics. There seems to be slowly growing public support for regulation. (see more)Holden Karnofsky recently took a leave of absence from Open Philanthropy to work on AI Safety Standards, which have also been called out as important by leading AI lab OpenAI. (see more)In October 2022, the US announced extensive restrictions on the export of AI-related products (eg. chips) to China. (see more)There has been progress on AI forecasting (quantitative and narrative) with the aim of allowing us to understand likely scenarios and prioritize between governance interventions. (see more)Interpretability research has seen substantial progress, including identifying the meaning of some neurons, eliciting what a model has truly learned / knows (for limited / specific cases), and circumventing features of models like superposition that can make this more difficult. (see more)There has been discussion on new potential methods for technical AI safety, including building AI tooling to assist alignment researchers without requiring agency, and building AIs which emulate human thought patterns. (see more)Outreach experimentation has found that AI researchers prefer arguments that are technical and written by ML researchers, and that greater engagement is seen in university groups with a technical over altruistic or philosophical focus. (see more)Resource CollationsThe AI Safety field is growing (80K estimates there are now ~400 FTE working on AI Safety). To improve efficiency, many people have put together collations of resources to help people quickly understand the relevant players and their approaches - as well as materials that make it easier to enter the field or upskill...]]>
Zoe Williams https://forum.effectivealtruism.org/posts/hCwDNq6sZofgSEN3s/ai-safety-7-months-of-discussion-in-17-minutes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety - 7 months of discussion in 17 minutes, published by Zoe Williams on March 15, 2023 on The Effective Altruism Forum.In August 2022, I started making summaries of the top EA and LW forum posts each week. This post collates together the key trends I’ve seen in AI Safety discussions since then. Note a lot of good work is happening outside what's posted on these forums too! This post doesn't try to cover that work.If you’d like to keep up on a more regular basis, consider subscribing to the Weekly EA & LW Forum Summaries. And if you’re interested in similar overviews for other fields, check out this post covering 6 months of animal welfare discussion in 6 minutes.Disclaimer: this is a blog post and not a research report - meaning it was produced quickly and is not to our (Rethink Priorities') typical standards of substantiveness and careful checking for accuracy. Please let me know if anything looks wrong or if I've missed key pieces!Table of Contents(It's a long post! Feel free to pick and choose sections to read, they 're all written to make sense individually)Key TakeawaysResource CollationsAI CapabilitiesProgressWhat AI still fails atPublic attention moves toward safetyAI GovernanceAI Safety StandardsSlow down (dangerous) AIPolicyUS / China Export RestrictionsPaths to impactForecastingQuantitative historical forecastingNarrative forecastingTechnical AI SafetyOverall TrendsInterpretabilityReinforcement Learning from Human Feedback (RLHF)AI assistance for alignmentBounded AIsTheoretical UnderstandingOutreach & Community-BuildingAcademics and researchersUniversity groupsCareer PathsGeneral guidanceShould anyone work in capabilities?Arguments for and against high x-riskAgainst high x-risk from AICounters to the above argumentsAppendix - All Post SummariesKey TakeawaysThere are multiple living websites that provide good entry points into understanding AI Safety ideas, communities, key players, research agendas, and opportunities to train or enter the field. (see more)Large language models like ChatGPT have drawn significant attention to AI and kick-started race dynamics. There seems to be slowly growing public support for regulation. (see more)Holden Karnofsky recently took a leave of absence from Open Philanthropy to work on AI Safety Standards, which have also been called out as important by leading AI lab OpenAI. (see more)In October 2022, the US announced extensive restrictions on the export of AI-related products (eg. chips) to China. (see more)There has been progress on AI forecasting (quantitative and narrative) with the aim of allowing us to understand likely scenarios and prioritize between governance interventions. (see more)Interpretability research has seen substantial progress, including identifying the meaning of some neurons, eliciting what a model has truly learned / knows (for limited / specific cases), and circumventing features of models like superposition that can make this more difficult. (see more)There has been discussion on new potential methods for technical AI safety, including building AI tooling to assist alignment researchers without requiring agency, and building AIs which emulate human thought patterns. (see more)Outreach experimentation has found that AI researchers prefer arguments that are technical and written by ML researchers, and that greater engagement is seen in university groups with a technical over altruistic or philosophical focus. (see more)Resource CollationsThe AI Safety field is growing (80K estimates there are now ~400 FTE working on AI Safety). To improve efficiency, many people have put together collations of resources to help people quickly understand the relevant players and their approaches - as well as materials that make it easier to enter the field or upskill...]]>
Thu, 16 Mar 2023 00:36:53 +0000 EA - AI Safety - 7 months of discussion in 17 minutes by Zoe Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety - 7 months of discussion in 17 minutes, published by Zoe Williams on March 15, 2023 on The Effective Altruism Forum.In August 2022, I started making summaries of the top EA and LW forum posts each week. This post collates together the key trends I’ve seen in AI Safety discussions since then. Note a lot of good work is happening outside what's posted on these forums too! This post doesn't try to cover that work.If you’d like to keep up on a more regular basis, consider subscribing to the Weekly EA & LW Forum Summaries. And if you’re interested in similar overviews for other fields, check out this post covering 6 months of animal welfare discussion in 6 minutes.Disclaimer: this is a blog post and not a research report - meaning it was produced quickly and is not to our (Rethink Priorities') typical standards of substantiveness and careful checking for accuracy. Please let me know if anything looks wrong or if I've missed key pieces!Table of Contents(It's a long post! Feel free to pick and choose sections to read, they 're all written to make sense individually)Key TakeawaysResource CollationsAI CapabilitiesProgressWhat AI still fails atPublic attention moves toward safetyAI GovernanceAI Safety StandardsSlow down (dangerous) AIPolicyUS / China Export RestrictionsPaths to impactForecastingQuantitative historical forecastingNarrative forecastingTechnical AI SafetyOverall TrendsInterpretabilityReinforcement Learning from Human Feedback (RLHF)AI assistance for alignmentBounded AIsTheoretical UnderstandingOutreach & Community-BuildingAcademics and researchersUniversity groupsCareer PathsGeneral guidanceShould anyone work in capabilities?Arguments for and against high x-riskAgainst high x-risk from AICounters to the above argumentsAppendix - All Post SummariesKey TakeawaysThere are multiple living websites that provide good entry points into understanding AI Safety ideas, communities, key players, research agendas, and opportunities to train or enter the field. (see more)Large language models like ChatGPT have drawn significant attention to AI and kick-started race dynamics. There seems to be slowly growing public support for regulation. (see more)Holden Karnofsky recently took a leave of absence from Open Philanthropy to work on AI Safety Standards, which have also been called out as important by leading AI lab OpenAI. (see more)In October 2022, the US announced extensive restrictions on the export of AI-related products (eg. chips) to China. (see more)There has been progress on AI forecasting (quantitative and narrative) with the aim of allowing us to understand likely scenarios and prioritize between governance interventions. (see more)Interpretability research has seen substantial progress, including identifying the meaning of some neurons, eliciting what a model has truly learned / knows (for limited / specific cases), and circumventing features of models like superposition that can make this more difficult. (see more)There has been discussion on new potential methods for technical AI safety, including building AI tooling to assist alignment researchers without requiring agency, and building AIs which emulate human thought patterns. (see more)Outreach experimentation has found that AI researchers prefer arguments that are technical and written by ML researchers, and that greater engagement is seen in university groups with a technical over altruistic or philosophical focus. (see more)Resource CollationsThe AI Safety field is growing (80K estimates there are now ~400 FTE working on AI Safety). To improve efficiency, many people have put together collations of resources to help people quickly understand the relevant players and their approaches - as well as materials that make it easier to enter the field or upskill...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety - 7 months of discussion in 17 minutes, published by Zoe Williams on March 15, 2023 on The Effective Altruism Forum.In August 2022, I started making summaries of the top EA and LW forum posts each week. This post collates together the key trends I’ve seen in AI Safety discussions since then. Note a lot of good work is happening outside what's posted on these forums too! This post doesn't try to cover that work.If you’d like to keep up on a more regular basis, consider subscribing to the Weekly EA & LW Forum Summaries. And if you’re interested in similar overviews for other fields, check out this post covering 6 months of animal welfare discussion in 6 minutes.Disclaimer: this is a blog post and not a research report - meaning it was produced quickly and is not to our (Rethink Priorities') typical standards of substantiveness and careful checking for accuracy. Please let me know if anything looks wrong or if I've missed key pieces!Table of Contents(It's a long post! Feel free to pick and choose sections to read, they 're all written to make sense individually)Key TakeawaysResource CollationsAI CapabilitiesProgressWhat AI still fails atPublic attention moves toward safetyAI GovernanceAI Safety StandardsSlow down (dangerous) AIPolicyUS / China Export RestrictionsPaths to impactForecastingQuantitative historical forecastingNarrative forecastingTechnical AI SafetyOverall TrendsInterpretabilityReinforcement Learning from Human Feedback (RLHF)AI assistance for alignmentBounded AIsTheoretical UnderstandingOutreach & Community-BuildingAcademics and researchersUniversity groupsCareer PathsGeneral guidanceShould anyone work in capabilities?Arguments for and against high x-riskAgainst high x-risk from AICounters to the above argumentsAppendix - All Post SummariesKey TakeawaysThere are multiple living websites that provide good entry points into understanding AI Safety ideas, communities, key players, research agendas, and opportunities to train or enter the field. (see more)Large language models like ChatGPT have drawn significant attention to AI and kick-started race dynamics. There seems to be slowly growing public support for regulation. (see more)Holden Karnofsky recently took a leave of absence from Open Philanthropy to work on AI Safety Standards, which have also been called out as important by leading AI lab OpenAI. (see more)In October 2022, the US announced extensive restrictions on the export of AI-related products (eg. chips) to China. (see more)There has been progress on AI forecasting (quantitative and narrative) with the aim of allowing us to understand likely scenarios and prioritize between governance interventions. (see more)Interpretability research has seen substantial progress, including identifying the meaning of some neurons, eliciting what a model has truly learned / knows (for limited / specific cases), and circumventing features of models like superposition that can make this more difficult. (see more)There has been discussion on new potential methods for technical AI safety, including building AI tooling to assist alignment researchers without requiring agency, and building AIs which emulate human thought patterns. (see more)Outreach experimentation has found that AI researchers prefer arguments that are technical and written by ML researchers, and that greater engagement is seen in university groups with a technical over altruistic or philosophical focus. (see more)Resource CollationsThe AI Safety field is growing (80K estimates there are now ~400 FTE working on AI Safety). To improve efficiency, many people have put together collations of resources to help people quickly understand the relevant players and their approaches - as well as materials that make it easier to enter the field or upskill...]]>
Zoe Williams https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 30:00 None full 5243
xtcgsLA2G8bn8vj99_NL_EA_EA EA - Reminding myself just how awful pain can get (plus, an experiment on myself) by Ren Springlea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reminding myself just how awful pain can get (plus, an experiment on myself), published by Ren Springlea on March 15, 2023 on The Effective Altruism Forum.Content warning: This post contains references to extreme pain and self-harm, as well as passing references to suicide, needles, and specific forms of suffering (but not detailed descriptions). Please do not repeat any of the experiments I've detailed in this post. Please be kind to yourself, and remember that the best motivation is sustainable motivation.SummaryOut of curiosity, I exposed myself to safe, moderate-level pain to see how it changed my views on three particular topics. This article is mostly a self-reflection on this (non-scientific) experience.Firstly, I got a visceral, intense sense of how urgent it is to get it right when working to do the most good for others.Secondly, I gained a strong support for the position that the most morally important goal is to prevent suffering, and in particular for preventing extreme suffering.Thirdly, I updated my opinion on the trade-offs between different intensities of pain, which I give in this article as rough, numerical weightings on different categories of pain. Basically, I now place a greater urgency on preventing intense suffering than I did before.I conclude with how this newfound urgency will affect my work and my life.My three goalsI began this experiment with three main goals:To remind myself how urgent and important it is to, when working to help others as much as I can, to get it right.Some people think that preventing intense pain (rather than working towards other, non-pain-related goals) is the most important thing to do. Do I agree with this?If I experience pain at different intensities, does this change the moral weight that I place on preventing intense pain compared to modest pain (i.e. intensity-duration tradeoff)?I think it is useful to test my intellectual ideas against what it is actually like to experience pain. This is not for motivation - I already work plenty in my role in animal advocacy, and I believe that sustainable motivation is the best motivation (I talk about this more at the end).My "experiment"I subjected myself to two somewhat-safe methods of experiencing pain:Firstly, I got three tattoos on different parts of my body - my upper arm, my calf, and my inner wrist. I had six tattoos already, so I was familiar with this experience. I got these tattoos all on one day (4/2/23) and in one location (a studio in London).Secondly, I undertook the cold pressor test. This is basically holding my hand in a tub of near-freezing water. This test is commonly used in scientific research as a way to invoke pain safely. I also did this on one day (25/2/23) and in one location (my home in Australia). Please do not replicate this - the cold pressor test causes pain and can cause significant distress in some people, as well as physical reactions that can compromise your health.I wish I had a somewhat-safe way to experience pain that is more intense than these two experiences, but these are the best I could come up with for now.During both of these experiences, I recorded the pain levels. I recorded the pain in three ways:A short, written description of my thoughts and feelings.The McGill Pain Index Pain Rating Intensity (PRI) Score. This score is calculated from a questionnaire (which I accessed via a phone app) that asks you to choose words corresponding to how your pain feels. The words are then used to calculate the numeric PRI score. I chose to use this tool as there is a review paper listing the approximate PRI scores caused by different human health conditions, which lets me roughly compare my scores to different instances of human pain. This list is given below, so you can have some idea of what scores mean.The PainTrac...]]>
Ren Springlea https://forum.effectivealtruism.org/posts/xtcgsLA2G8bn8vj99/reminding-myself-just-how-awful-pain-can-get-plus-an Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reminding myself just how awful pain can get (plus, an experiment on myself), published by Ren Springlea on March 15, 2023 on The Effective Altruism Forum.Content warning: This post contains references to extreme pain and self-harm, as well as passing references to suicide, needles, and specific forms of suffering (but not detailed descriptions). Please do not repeat any of the experiments I've detailed in this post. Please be kind to yourself, and remember that the best motivation is sustainable motivation.SummaryOut of curiosity, I exposed myself to safe, moderate-level pain to see how it changed my views on three particular topics. This article is mostly a self-reflection on this (non-scientific) experience.Firstly, I got a visceral, intense sense of how urgent it is to get it right when working to do the most good for others.Secondly, I gained a strong support for the position that the most morally important goal is to prevent suffering, and in particular for preventing extreme suffering.Thirdly, I updated my opinion on the trade-offs between different intensities of pain, which I give in this article as rough, numerical weightings on different categories of pain. Basically, I now place a greater urgency on preventing intense suffering than I did before.I conclude with how this newfound urgency will affect my work and my life.My three goalsI began this experiment with three main goals:To remind myself how urgent and important it is to, when working to help others as much as I can, to get it right.Some people think that preventing intense pain (rather than working towards other, non-pain-related goals) is the most important thing to do. Do I agree with this?If I experience pain at different intensities, does this change the moral weight that I place on preventing intense pain compared to modest pain (i.e. intensity-duration tradeoff)?I think it is useful to test my intellectual ideas against what it is actually like to experience pain. This is not for motivation - I already work plenty in my role in animal advocacy, and I believe that sustainable motivation is the best motivation (I talk about this more at the end).My "experiment"I subjected myself to two somewhat-safe methods of experiencing pain:Firstly, I got three tattoos on different parts of my body - my upper arm, my calf, and my inner wrist. I had six tattoos already, so I was familiar with this experience. I got these tattoos all on one day (4/2/23) and in one location (a studio in London).Secondly, I undertook the cold pressor test. This is basically holding my hand in a tub of near-freezing water. This test is commonly used in scientific research as a way to invoke pain safely. I also did this on one day (25/2/23) and in one location (my home in Australia). Please do not replicate this - the cold pressor test causes pain and can cause significant distress in some people, as well as physical reactions that can compromise your health.I wish I had a somewhat-safe way to experience pain that is more intense than these two experiences, but these are the best I could come up with for now.During both of these experiences, I recorded the pain levels. I recorded the pain in three ways:A short, written description of my thoughts and feelings.The McGill Pain Index Pain Rating Intensity (PRI) Score. This score is calculated from a questionnaire (which I accessed via a phone app) that asks you to choose words corresponding to how your pain feels. The words are then used to calculate the numeric PRI score. I chose to use this tool as there is a review paper listing the approximate PRI scores caused by different human health conditions, which lets me roughly compare my scores to different instances of human pain. This list is given below, so you can have some idea of what scores mean.The PainTrac...]]>
Thu, 16 Mar 2023 00:32:33 +0000 EA - Reminding myself just how awful pain can get (plus, an experiment on myself) by Ren Springlea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reminding myself just how awful pain can get (plus, an experiment on myself), published by Ren Springlea on March 15, 2023 on The Effective Altruism Forum.Content warning: This post contains references to extreme pain and self-harm, as well as passing references to suicide, needles, and specific forms of suffering (but not detailed descriptions). Please do not repeat any of the experiments I've detailed in this post. Please be kind to yourself, and remember that the best motivation is sustainable motivation.SummaryOut of curiosity, I exposed myself to safe, moderate-level pain to see how it changed my views on three particular topics. This article is mostly a self-reflection on this (non-scientific) experience.Firstly, I got a visceral, intense sense of how urgent it is to get it right when working to do the most good for others.Secondly, I gained a strong support for the position that the most morally important goal is to prevent suffering, and in particular for preventing extreme suffering.Thirdly, I updated my opinion on the trade-offs between different intensities of pain, which I give in this article as rough, numerical weightings on different categories of pain. Basically, I now place a greater urgency on preventing intense suffering than I did before.I conclude with how this newfound urgency will affect my work and my life.My three goalsI began this experiment with three main goals:To remind myself how urgent and important it is to, when working to help others as much as I can, to get it right.Some people think that preventing intense pain (rather than working towards other, non-pain-related goals) is the most important thing to do. Do I agree with this?If I experience pain at different intensities, does this change the moral weight that I place on preventing intense pain compared to modest pain (i.e. intensity-duration tradeoff)?I think it is useful to test my intellectual ideas against what it is actually like to experience pain. This is not for motivation - I already work plenty in my role in animal advocacy, and I believe that sustainable motivation is the best motivation (I talk about this more at the end).My "experiment"I subjected myself to two somewhat-safe methods of experiencing pain:Firstly, I got three tattoos on different parts of my body - my upper arm, my calf, and my inner wrist. I had six tattoos already, so I was familiar with this experience. I got these tattoos all on one day (4/2/23) and in one location (a studio in London).Secondly, I undertook the cold pressor test. This is basically holding my hand in a tub of near-freezing water. This test is commonly used in scientific research as a way to invoke pain safely. I also did this on one day (25/2/23) and in one location (my home in Australia). Please do not replicate this - the cold pressor test causes pain and can cause significant distress in some people, as well as physical reactions that can compromise your health.I wish I had a somewhat-safe way to experience pain that is more intense than these two experiences, but these are the best I could come up with for now.During both of these experiences, I recorded the pain levels. I recorded the pain in three ways:A short, written description of my thoughts and feelings.The McGill Pain Index Pain Rating Intensity (PRI) Score. This score is calculated from a questionnaire (which I accessed via a phone app) that asks you to choose words corresponding to how your pain feels. The words are then used to calculate the numeric PRI score. I chose to use this tool as there is a review paper listing the approximate PRI scores caused by different human health conditions, which lets me roughly compare my scores to different instances of human pain. This list is given below, so you can have some idea of what scores mean.The PainTrac...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reminding myself just how awful pain can get (plus, an experiment on myself), published by Ren Springlea on March 15, 2023 on The Effective Altruism Forum.Content warning: This post contains references to extreme pain and self-harm, as well as passing references to suicide, needles, and specific forms of suffering (but not detailed descriptions). Please do not repeat any of the experiments I've detailed in this post. Please be kind to yourself, and remember that the best motivation is sustainable motivation.SummaryOut of curiosity, I exposed myself to safe, moderate-level pain to see how it changed my views on three particular topics. This article is mostly a self-reflection on this (non-scientific) experience.Firstly, I got a visceral, intense sense of how urgent it is to get it right when working to do the most good for others.Secondly, I gained a strong support for the position that the most morally important goal is to prevent suffering, and in particular for preventing extreme suffering.Thirdly, I updated my opinion on the trade-offs between different intensities of pain, which I give in this article as rough, numerical weightings on different categories of pain. Basically, I now place a greater urgency on preventing intense suffering than I did before.I conclude with how this newfound urgency will affect my work and my life.My three goalsI began this experiment with three main goals:To remind myself how urgent and important it is to, when working to help others as much as I can, to get it right.Some people think that preventing intense pain (rather than working towards other, non-pain-related goals) is the most important thing to do. Do I agree with this?If I experience pain at different intensities, does this change the moral weight that I place on preventing intense pain compared to modest pain (i.e. intensity-duration tradeoff)?I think it is useful to test my intellectual ideas against what it is actually like to experience pain. This is not for motivation - I already work plenty in my role in animal advocacy, and I believe that sustainable motivation is the best motivation (I talk about this more at the end).My "experiment"I subjected myself to two somewhat-safe methods of experiencing pain:Firstly, I got three tattoos on different parts of my body - my upper arm, my calf, and my inner wrist. I had six tattoos already, so I was familiar with this experience. I got these tattoos all on one day (4/2/23) and in one location (a studio in London).Secondly, I undertook the cold pressor test. This is basically holding my hand in a tub of near-freezing water. This test is commonly used in scientific research as a way to invoke pain safely. I also did this on one day (25/2/23) and in one location (my home in Australia). Please do not replicate this - the cold pressor test causes pain and can cause significant distress in some people, as well as physical reactions that can compromise your health.I wish I had a somewhat-safe way to experience pain that is more intense than these two experiences, but these are the best I could come up with for now.During both of these experiences, I recorded the pain levels. I recorded the pain in three ways:A short, written description of my thoughts and feelings.The McGill Pain Index Pain Rating Intensity (PRI) Score. This score is calculated from a questionnaire (which I accessed via a phone app) that asks you to choose words corresponding to how your pain feels. The words are then used to calculate the numeric PRI score. I chose to use this tool as there is a review paper listing the approximate PRI scores caused by different human health conditions, which lets me roughly compare my scores to different instances of human pain. This list is given below, so you can have some idea of what scores mean.The PainTrac...]]>
Ren Springlea https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 25:16 None full 5244
amBajbqdzPB3mbwBN_NL_EA_EA EA - 80k podcast episode on sentience in AI systems by rgb Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80k podcast episode on sentience in AI systems, published by rgb on March 15, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
rgb https://forum.effectivealtruism.org/posts/amBajbqdzPB3mbwBN/80k-podcast-episode-on-sentience-in-ai-systems Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80k podcast episode on sentience in AI systems, published by rgb on March 15, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 15 Mar 2023 23:09:43 +0000 EA - 80k podcast episode on sentience in AI systems by rgb Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80k podcast episode on sentience in AI systems, published by rgb on March 15, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80k podcast episode on sentience in AI systems, published by rgb on March 15, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
rgb https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 5245
75CtdFj79sZrGpGiX_NL_EA_EA EA - Success without dignity: a nearcasting story of avoiding catastrophe by luck by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Success without dignity: a nearcasting story of avoiding catastrophe by luck, published by Holden Karnofsky on March 15, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/75CtdFj79sZrGpGiX/success-without-dignity-a-nearcasting-story-of-avoiding Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Success without dignity: a nearcasting story of avoiding catastrophe by luck, published by Holden Karnofsky on March 15, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 15 Mar 2023 22:45:10 +0000 EA - Success without dignity: a nearcasting story of avoiding catastrophe by luck by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Success without dignity: a nearcasting story of avoiding catastrophe by luck, published by Holden Karnofsky on March 15, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Success without dignity: a nearcasting story of avoiding catastrophe by luck, published by Holden Karnofsky on March 15, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Holden Karnofsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:31 None full 5232
CmZhcEpz7zBTGhksf_NL_EA_EA EA - What happened to the OpenPhil OpenAI board seat? by ChristianKleineidam Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What happened to the OpenPhil OpenAI board seat?, published by ChristianKleineidam on March 15, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ChristianKleineidam https://forum.effectivealtruism.org/posts/CmZhcEpz7zBTGhksf/what-happened-to-the-openphil-openai-board-seat Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What happened to the OpenPhil OpenAI board seat?, published by ChristianKleineidam on March 15, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 15 Mar 2023 21:44:13 +0000 EA - What happened to the OpenPhil OpenAI board seat? by ChristianKleineidam Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What happened to the OpenPhil OpenAI board seat?, published by ChristianKleineidam on March 15, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What happened to the OpenPhil OpenAI board seat?, published by ChristianKleineidam on March 15, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ChristianKleineidam https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 5233
g5uKzBLjiEuC5k46A_NL_EA_EA EA - FTX Community Response Survey Results by WillemSleegers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Community Response Survey Results, published by WillemSleegers on March 15, 2023 on The Effective Altruism Forum.SummaryIn December 2022, Rethink Priorities, in collaboration with CEA, surveyed the EA community in order to gather “perspectives on how the FTX crisis has impacted the community’s views of the effective altruism movement, its organizations, and leaders.”Our results found that the FTX crisis had decreased satisfaction with the EA community, and around half of respondents reported that the FTX crisis had given them concerns with EA meta organizations, the EA community and its norms, and the leaders of EA meta organizations.Nevertheless, there were some encouraging results. The reduction in satisfaction with the community was significant, but small, and overall average community sentiment is still positive. In addition, respondents tended to agree that the EA community had responded well to the crisis, although roughly a third of respondents neither agreed nor disagreed with this. Majorities of respondents reported continuing to trust EA organizations, though over 30% reported they had substantially lost trust in EA public figures or leadership.Respondents were more split in their views about how the EA community should respond. Respondents leaned slightly towards agreeing that the EA community should spend significant time reflecting and responding to this crisis, at the cost of spending less time on our other priorities, but slightly towards disagreement that the EA community should look very different as a result of this crisis.EA satisfactionRecalled satisfactionRespondents were asked about their current satisfaction with the EA community (after the FTX crisis) and to recall their satisfaction with the EA community at the start of November 2022, prior to the FTX crisis.Satisfaction with the EA community appears to be half a point (0.54) lower post-FTX compared to pre-FTX.Note that the median satisfaction scores are somewhat higher, but similarly showing a decrease (8 pre-FTX, 7 post-FTX).Satisfaction over timeAs the 2022 EA Survey was launched before the FTX crisis, we could assess how satisfaction with the EA community changed over time. We fit a generalized additive model in which we regressed the satisfaction ratings on the day the survey was taken.These results show that satisfaction went down after the FTX crisis first started.It should be noted however that this pattern of results could also be confounded by different groups of respondents taking the survey at different times. For example, we know that more engaged respondents tend to take the EAS earlier.We therefore also looked at how the satisfaction changed over time for different engagement levels. This shows that the satisfaction levels went down over time, regardless of engagement level.ConcernsRespondents were asked whether the FTX crisis has given them concerns with:Effective Altruism Meta Organizations (e.g., Centre for Effective Altruism, Open Philanthropy, 80,000 hours, etc.)Leaders of Effective Altruism Meta Organizations (e.g., Centre for Effective Altruism, Open Philanthropy, 80,000 hours, etc.)The Effective Altruism Community & NormsEffective Altruism Principles or PhilosophyMajorities of respondents reported agreement that the crisis had given them concerns with EA meta organizations (58%), the EA community and its norms (55%), just under half reported it giving them concerns about the leaders of EA meta organizations (48%).In contrast, only 25% agreed that the crisis had given them concerns about EA principles or philosophy, compared to 66% disagreeing. We think this suggests a somewhat reassuring picture where, though respondents may have concerns about the EA community in its current form, the FTX crisis has largely not caused respondents to become di...]]>
WillemSleegers https://forum.effectivealtruism.org/posts/g5uKzBLjiEuC5k46A/ftx-community-response-survey-results Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Community Response Survey Results, published by WillemSleegers on March 15, 2023 on The Effective Altruism Forum.SummaryIn December 2022, Rethink Priorities, in collaboration with CEA, surveyed the EA community in order to gather “perspectives on how the FTX crisis has impacted the community’s views of the effective altruism movement, its organizations, and leaders.”Our results found that the FTX crisis had decreased satisfaction with the EA community, and around half of respondents reported that the FTX crisis had given them concerns with EA meta organizations, the EA community and its norms, and the leaders of EA meta organizations.Nevertheless, there were some encouraging results. The reduction in satisfaction with the community was significant, but small, and overall average community sentiment is still positive. In addition, respondents tended to agree that the EA community had responded well to the crisis, although roughly a third of respondents neither agreed nor disagreed with this. Majorities of respondents reported continuing to trust EA organizations, though over 30% reported they had substantially lost trust in EA public figures or leadership.Respondents were more split in their views about how the EA community should respond. Respondents leaned slightly towards agreeing that the EA community should spend significant time reflecting and responding to this crisis, at the cost of spending less time on our other priorities, but slightly towards disagreement that the EA community should look very different as a result of this crisis.EA satisfactionRecalled satisfactionRespondents were asked about their current satisfaction with the EA community (after the FTX crisis) and to recall their satisfaction with the EA community at the start of November 2022, prior to the FTX crisis.Satisfaction with the EA community appears to be half a point (0.54) lower post-FTX compared to pre-FTX.Note that the median satisfaction scores are somewhat higher, but similarly showing a decrease (8 pre-FTX, 7 post-FTX).Satisfaction over timeAs the 2022 EA Survey was launched before the FTX crisis, we could assess how satisfaction with the EA community changed over time. We fit a generalized additive model in which we regressed the satisfaction ratings on the day the survey was taken.These results show that satisfaction went down after the FTX crisis first started.It should be noted however that this pattern of results could also be confounded by different groups of respondents taking the survey at different times. For example, we know that more engaged respondents tend to take the EAS earlier.We therefore also looked at how the satisfaction changed over time for different engagement levels. This shows that the satisfaction levels went down over time, regardless of engagement level.ConcernsRespondents were asked whether the FTX crisis has given them concerns with:Effective Altruism Meta Organizations (e.g., Centre for Effective Altruism, Open Philanthropy, 80,000 hours, etc.)Leaders of Effective Altruism Meta Organizations (e.g., Centre for Effective Altruism, Open Philanthropy, 80,000 hours, etc.)The Effective Altruism Community & NormsEffective Altruism Principles or PhilosophyMajorities of respondents reported agreement that the crisis had given them concerns with EA meta organizations (58%), the EA community and its norms (55%), just under half reported it giving them concerns about the leaders of EA meta organizations (48%).In contrast, only 25% agreed that the crisis had given them concerns about EA principles or philosophy, compared to 66% disagreeing. We think this suggests a somewhat reassuring picture where, though respondents may have concerns about the EA community in its current form, the FTX crisis has largely not caused respondents to become di...]]>
Wed, 15 Mar 2023 15:21:29 +0000 EA - FTX Community Response Survey Results by WillemSleegers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Community Response Survey Results, published by WillemSleegers on March 15, 2023 on The Effective Altruism Forum.SummaryIn December 2022, Rethink Priorities, in collaboration with CEA, surveyed the EA community in order to gather “perspectives on how the FTX crisis has impacted the community’s views of the effective altruism movement, its organizations, and leaders.”Our results found that the FTX crisis had decreased satisfaction with the EA community, and around half of respondents reported that the FTX crisis had given them concerns with EA meta organizations, the EA community and its norms, and the leaders of EA meta organizations.Nevertheless, there were some encouraging results. The reduction in satisfaction with the community was significant, but small, and overall average community sentiment is still positive. In addition, respondents tended to agree that the EA community had responded well to the crisis, although roughly a third of respondents neither agreed nor disagreed with this. Majorities of respondents reported continuing to trust EA organizations, though over 30% reported they had substantially lost trust in EA public figures or leadership.Respondents were more split in their views about how the EA community should respond. Respondents leaned slightly towards agreeing that the EA community should spend significant time reflecting and responding to this crisis, at the cost of spending less time on our other priorities, but slightly towards disagreement that the EA community should look very different as a result of this crisis.EA satisfactionRecalled satisfactionRespondents were asked about their current satisfaction with the EA community (after the FTX crisis) and to recall their satisfaction with the EA community at the start of November 2022, prior to the FTX crisis.Satisfaction with the EA community appears to be half a point (0.54) lower post-FTX compared to pre-FTX.Note that the median satisfaction scores are somewhat higher, but similarly showing a decrease (8 pre-FTX, 7 post-FTX).Satisfaction over timeAs the 2022 EA Survey was launched before the FTX crisis, we could assess how satisfaction with the EA community changed over time. We fit a generalized additive model in which we regressed the satisfaction ratings on the day the survey was taken.These results show that satisfaction went down after the FTX crisis first started.It should be noted however that this pattern of results could also be confounded by different groups of respondents taking the survey at different times. For example, we know that more engaged respondents tend to take the EAS earlier.We therefore also looked at how the satisfaction changed over time for different engagement levels. This shows that the satisfaction levels went down over time, regardless of engagement level.ConcernsRespondents were asked whether the FTX crisis has given them concerns with:Effective Altruism Meta Organizations (e.g., Centre for Effective Altruism, Open Philanthropy, 80,000 hours, etc.)Leaders of Effective Altruism Meta Organizations (e.g., Centre for Effective Altruism, Open Philanthropy, 80,000 hours, etc.)The Effective Altruism Community & NormsEffective Altruism Principles or PhilosophyMajorities of respondents reported agreement that the crisis had given them concerns with EA meta organizations (58%), the EA community and its norms (55%), just under half reported it giving them concerns about the leaders of EA meta organizations (48%).In contrast, only 25% agreed that the crisis had given them concerns about EA principles or philosophy, compared to 66% disagreeing. We think this suggests a somewhat reassuring picture where, though respondents may have concerns about the EA community in its current form, the FTX crisis has largely not caused respondents to become di...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Community Response Survey Results, published by WillemSleegers on March 15, 2023 on The Effective Altruism Forum.SummaryIn December 2022, Rethink Priorities, in collaboration with CEA, surveyed the EA community in order to gather “perspectives on how the FTX crisis has impacted the community’s views of the effective altruism movement, its organizations, and leaders.”Our results found that the FTX crisis had decreased satisfaction with the EA community, and around half of respondents reported that the FTX crisis had given them concerns with EA meta organizations, the EA community and its norms, and the leaders of EA meta organizations.Nevertheless, there were some encouraging results. The reduction in satisfaction with the community was significant, but small, and overall average community sentiment is still positive. In addition, respondents tended to agree that the EA community had responded well to the crisis, although roughly a third of respondents neither agreed nor disagreed with this. Majorities of respondents reported continuing to trust EA organizations, though over 30% reported they had substantially lost trust in EA public figures or leadership.Respondents were more split in their views about how the EA community should respond. Respondents leaned slightly towards agreeing that the EA community should spend significant time reflecting and responding to this crisis, at the cost of spending less time on our other priorities, but slightly towards disagreement that the EA community should look very different as a result of this crisis.EA satisfactionRecalled satisfactionRespondents were asked about their current satisfaction with the EA community (after the FTX crisis) and to recall their satisfaction with the EA community at the start of November 2022, prior to the FTX crisis.Satisfaction with the EA community appears to be half a point (0.54) lower post-FTX compared to pre-FTX.Note that the median satisfaction scores are somewhat higher, but similarly showing a decrease (8 pre-FTX, 7 post-FTX).Satisfaction over timeAs the 2022 EA Survey was launched before the FTX crisis, we could assess how satisfaction with the EA community changed over time. We fit a generalized additive model in which we regressed the satisfaction ratings on the day the survey was taken.These results show that satisfaction went down after the FTX crisis first started.It should be noted however that this pattern of results could also be confounded by different groups of respondents taking the survey at different times. For example, we know that more engaged respondents tend to take the EAS earlier.We therefore also looked at how the satisfaction changed over time for different engagement levels. This shows that the satisfaction levels went down over time, regardless of engagement level.ConcernsRespondents were asked whether the FTX crisis has given them concerns with:Effective Altruism Meta Organizations (e.g., Centre for Effective Altruism, Open Philanthropy, 80,000 hours, etc.)Leaders of Effective Altruism Meta Organizations (e.g., Centre for Effective Altruism, Open Philanthropy, 80,000 hours, etc.)The Effective Altruism Community & NormsEffective Altruism Principles or PhilosophyMajorities of respondents reported agreement that the crisis had given them concerns with EA meta organizations (58%), the EA community and its norms (55%), just under half reported it giving them concerns about the leaders of EA meta organizations (48%).In contrast, only 25% agreed that the crisis had given them concerns about EA principles or philosophy, compared to 66% disagreeing. We think this suggests a somewhat reassuring picture where, though respondents may have concerns about the EA community in its current form, the FTX crisis has largely not caused respondents to become di...]]>
WillemSleegers https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:44 None full 5234
b83Zkz4amoaQC5Hpd_NL_EA_EA EA - Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed", published by Nathan Young on March 15, 2023 on The Effective Altruism Forum.There is a new Time articleSeems certain 98% we'll discuss itI would like us to try and have a better discussion about this than we sometimes do.Consider if you want to engageI updated a bit on important stuff as a result of this article. You may disagree. I am going to put my "personal updates" in a commentExcepts from the article that I think are relevant. Bold is mine. I have made choices here and feel free to recommend I change them.Yet MacAskill had long been aware of concerns around Bankman-Fried. He was personally cautioned about Bankman-Fried by at least three different people in a series of conversations in 2018 and 2019, according to interviews with four people familiar with those discussions and emails reviewed by TIME.He wasn’t alone. Multiple EA leaders knew about the red flags surrounding Bankman-Fried by 2019, according to a TIME investigation based on contemporaneous documents and interviews with seven people familiar with the matter. Among the EA brain trust personally notified about Bankman-Fried’s questionable behavior and business ethics were Nick Beckstead, a moral philosopher who went on to lead Bankman-Fried’s philanthropic arm, the FTX Future Fund, and Holden Karnofsky, co-CEO of OpenPhilanthropy, a nonprofit organization that makes grants supporting EA causes.Some of the warnings were serious: sources say that MacAskill and Beckstead were repeatedly told that Bankman-Fried was untrustworthy, had inappropriate sexual relationships with subordinates, refused to implement standard business practices, and had been caught lying during his first months running Alameda, a crypto firm that was seeded by EA investors, staffed by EAs, and dedicating to making money that could be donated to EA causes.MacAskill declined to answer a list of detailed questions from TIME for this story. “An independent investigation has been commissioned to look into these issues; I don’t want to front-run or undermine that process by discussing my own recollections publicly,” he wrote in an email. “I look forward to the results of the investigation and hope to be able to respond more fully after then.” Citing the same investigation, Beckstead also declined to answer detailed questions. Karnofsky did not respond to a list of questions from TIME. Through a lawyer, Bankman-Fried also declined to respond to a list of detailed written questions. The Centre for Effective Altruism (CEA) did not reply to multiple requests to explain why Bankman-Fried left the board in 2019. A spokesperson for Effective Ventures, the parent organization of CEA, cited the independent investigation, launched in Dec. 2022, and declined to comment while it was ongoing.In a span of less than nine months in 2022, Bankman-Fried’s FTX Future Fund—helmed by Beckstead—gave more than $160 million to effective altruist causes, including more than $33 million to organizations connected to MacAskill. “If [Bankman-Fried] wasn’t super wealthy, nobody would have given him another chance,” says one person who worked closely with MacAskill at an EA organization. “It’s greed for access to a bunch of money, but with a philosopher twist.”But within months, the good karma of the venture dissipated in a series of internal clashes, many details of which have not been previously reported. Some of the issues were personal. Bankman-Fried could be “dictatorial,” according to one former colleague. Three former Alameda employees told TIME he had inappropriate romantic relationships with his subordinates. Early Alameda executives also believed he had reneged on an equity arrangement that would have left Bankman-Frie...]]>
Nathan Young https://forum.effectivealtruism.org/posts/b83Zkz4amoaQC5Hpd/time-article-discussion-effective-altruist-leaders-were Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed", published by Nathan Young on March 15, 2023 on The Effective Altruism Forum.There is a new Time articleSeems certain 98% we'll discuss itI would like us to try and have a better discussion about this than we sometimes do.Consider if you want to engageI updated a bit on important stuff as a result of this article. You may disagree. I am going to put my "personal updates" in a commentExcepts from the article that I think are relevant. Bold is mine. I have made choices here and feel free to recommend I change them.Yet MacAskill had long been aware of concerns around Bankman-Fried. He was personally cautioned about Bankman-Fried by at least three different people in a series of conversations in 2018 and 2019, according to interviews with four people familiar with those discussions and emails reviewed by TIME.He wasn’t alone. Multiple EA leaders knew about the red flags surrounding Bankman-Fried by 2019, according to a TIME investigation based on contemporaneous documents and interviews with seven people familiar with the matter. Among the EA brain trust personally notified about Bankman-Fried’s questionable behavior and business ethics were Nick Beckstead, a moral philosopher who went on to lead Bankman-Fried’s philanthropic arm, the FTX Future Fund, and Holden Karnofsky, co-CEO of OpenPhilanthropy, a nonprofit organization that makes grants supporting EA causes.Some of the warnings were serious: sources say that MacAskill and Beckstead were repeatedly told that Bankman-Fried was untrustworthy, had inappropriate sexual relationships with subordinates, refused to implement standard business practices, and had been caught lying during his first months running Alameda, a crypto firm that was seeded by EA investors, staffed by EAs, and dedicating to making money that could be donated to EA causes.MacAskill declined to answer a list of detailed questions from TIME for this story. “An independent investigation has been commissioned to look into these issues; I don’t want to front-run or undermine that process by discussing my own recollections publicly,” he wrote in an email. “I look forward to the results of the investigation and hope to be able to respond more fully after then.” Citing the same investigation, Beckstead also declined to answer detailed questions. Karnofsky did not respond to a list of questions from TIME. Through a lawyer, Bankman-Fried also declined to respond to a list of detailed written questions. The Centre for Effective Altruism (CEA) did not reply to multiple requests to explain why Bankman-Fried left the board in 2019. A spokesperson for Effective Ventures, the parent organization of CEA, cited the independent investigation, launched in Dec. 2022, and declined to comment while it was ongoing.In a span of less than nine months in 2022, Bankman-Fried’s FTX Future Fund—helmed by Beckstead—gave more than $160 million to effective altruist causes, including more than $33 million to organizations connected to MacAskill. “If [Bankman-Fried] wasn’t super wealthy, nobody would have given him another chance,” says one person who worked closely with MacAskill at an EA organization. “It’s greed for access to a bunch of money, but with a philosopher twist.”But within months, the good karma of the venture dissipated in a series of internal clashes, many details of which have not been previously reported. Some of the issues were personal. Bankman-Fried could be “dictatorial,” according to one former colleague. Three former Alameda employees told TIME he had inappropriate romantic relationships with his subordinates. Early Alameda executives also believed he had reneged on an equity arrangement that would have left Bankman-Frie...]]>
Wed, 15 Mar 2023 14:45:25 +0000 EA - Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed", published by Nathan Young on March 15, 2023 on The Effective Altruism Forum.There is a new Time articleSeems certain 98% we'll discuss itI would like us to try and have a better discussion about this than we sometimes do.Consider if you want to engageI updated a bit on important stuff as a result of this article. You may disagree. I am going to put my "personal updates" in a commentExcepts from the article that I think are relevant. Bold is mine. I have made choices here and feel free to recommend I change them.Yet MacAskill had long been aware of concerns around Bankman-Fried. He was personally cautioned about Bankman-Fried by at least three different people in a series of conversations in 2018 and 2019, according to interviews with four people familiar with those discussions and emails reviewed by TIME.He wasn’t alone. Multiple EA leaders knew about the red flags surrounding Bankman-Fried by 2019, according to a TIME investigation based on contemporaneous documents and interviews with seven people familiar with the matter. Among the EA brain trust personally notified about Bankman-Fried’s questionable behavior and business ethics were Nick Beckstead, a moral philosopher who went on to lead Bankman-Fried’s philanthropic arm, the FTX Future Fund, and Holden Karnofsky, co-CEO of OpenPhilanthropy, a nonprofit organization that makes grants supporting EA causes.Some of the warnings were serious: sources say that MacAskill and Beckstead were repeatedly told that Bankman-Fried was untrustworthy, had inappropriate sexual relationships with subordinates, refused to implement standard business practices, and had been caught lying during his first months running Alameda, a crypto firm that was seeded by EA investors, staffed by EAs, and dedicating to making money that could be donated to EA causes.MacAskill declined to answer a list of detailed questions from TIME for this story. “An independent investigation has been commissioned to look into these issues; I don’t want to front-run or undermine that process by discussing my own recollections publicly,” he wrote in an email. “I look forward to the results of the investigation and hope to be able to respond more fully after then.” Citing the same investigation, Beckstead also declined to answer detailed questions. Karnofsky did not respond to a list of questions from TIME. Through a lawyer, Bankman-Fried also declined to respond to a list of detailed written questions. The Centre for Effective Altruism (CEA) did not reply to multiple requests to explain why Bankman-Fried left the board in 2019. A spokesperson for Effective Ventures, the parent organization of CEA, cited the independent investigation, launched in Dec. 2022, and declined to comment while it was ongoing.In a span of less than nine months in 2022, Bankman-Fried’s FTX Future Fund—helmed by Beckstead—gave more than $160 million to effective altruist causes, including more than $33 million to organizations connected to MacAskill. “If [Bankman-Fried] wasn’t super wealthy, nobody would have given him another chance,” says one person who worked closely with MacAskill at an EA organization. “It’s greed for access to a bunch of money, but with a philosopher twist.”But within months, the good karma of the venture dissipated in a series of internal clashes, many details of which have not been previously reported. Some of the issues were personal. Bankman-Fried could be “dictatorial,” according to one former colleague. Three former Alameda employees told TIME he had inappropriate romantic relationships with his subordinates. Early Alameda executives also believed he had reneged on an equity arrangement that would have left Bankman-Frie...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed", published by Nathan Young on March 15, 2023 on The Effective Altruism Forum.There is a new Time articleSeems certain 98% we'll discuss itI would like us to try and have a better discussion about this than we sometimes do.Consider if you want to engageI updated a bit on important stuff as a result of this article. You may disagree. I am going to put my "personal updates" in a commentExcepts from the article that I think are relevant. Bold is mine. I have made choices here and feel free to recommend I change them.Yet MacAskill had long been aware of concerns around Bankman-Fried. He was personally cautioned about Bankman-Fried by at least three different people in a series of conversations in 2018 and 2019, according to interviews with four people familiar with those discussions and emails reviewed by TIME.He wasn’t alone. Multiple EA leaders knew about the red flags surrounding Bankman-Fried by 2019, according to a TIME investigation based on contemporaneous documents and interviews with seven people familiar with the matter. Among the EA brain trust personally notified about Bankman-Fried’s questionable behavior and business ethics were Nick Beckstead, a moral philosopher who went on to lead Bankman-Fried’s philanthropic arm, the FTX Future Fund, and Holden Karnofsky, co-CEO of OpenPhilanthropy, a nonprofit organization that makes grants supporting EA causes.Some of the warnings were serious: sources say that MacAskill and Beckstead were repeatedly told that Bankman-Fried was untrustworthy, had inappropriate sexual relationships with subordinates, refused to implement standard business practices, and had been caught lying during his first months running Alameda, a crypto firm that was seeded by EA investors, staffed by EAs, and dedicating to making money that could be donated to EA causes.MacAskill declined to answer a list of detailed questions from TIME for this story. “An independent investigation has been commissioned to look into these issues; I don’t want to front-run or undermine that process by discussing my own recollections publicly,” he wrote in an email. “I look forward to the results of the investigation and hope to be able to respond more fully after then.” Citing the same investigation, Beckstead also declined to answer detailed questions. Karnofsky did not respond to a list of questions from TIME. Through a lawyer, Bankman-Fried also declined to respond to a list of detailed written questions. The Centre for Effective Altruism (CEA) did not reply to multiple requests to explain why Bankman-Fried left the board in 2019. A spokesperson for Effective Ventures, the parent organization of CEA, cited the independent investigation, launched in Dec. 2022, and declined to comment while it was ongoing.In a span of less than nine months in 2022, Bankman-Fried’s FTX Future Fund—helmed by Beckstead—gave more than $160 million to effective altruist causes, including more than $33 million to organizations connected to MacAskill. “If [Bankman-Fried] wasn’t super wealthy, nobody would have given him another chance,” says one person who worked closely with MacAskill at an EA organization. “It’s greed for access to a bunch of money, but with a philosopher twist.”But within months, the good karma of the venture dissipated in a series of internal clashes, many details of which have not been previously reported. Some of the issues were personal. Bankman-Fried could be “dictatorial,” according to one former colleague. Three former Alameda employees told TIME he had inappropriate romantic relationships with his subordinates. Early Alameda executives also believed he had reneged on an equity arrangement that would have left Bankman-Frie...]]>
Nathan Young https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:28 None full 5235
mGHRdcmKjLhDP5gLc_NL_EA_EA EA - Cause Exploration: Support for Mental Health Carers by Yuval Shapira Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause Exploration: Support for Mental Health Carers, published by Yuval Shapira on March 14, 2023 on The Effective Altruism Forum.Tldr- I'm looking into Support for mental health carers as a potential cause area for a while, would love inputs about ITN and generally about the subjectSummary of key points:Mental health as an important cause area- Mental illness seems to cause a high amount of worldwide unhappiness and seems neglected.Carers as a potential solution- Most of the people suffering from mental health issues or illness are surrounded by family and friends, who can potentially have a high impact on the decrease or increase of their mental state. Also, there is a stigma considering mental health- leading to cases being underreported and individuals that are unwilling to seek treatment. The carers could be the first and only to discover the issues before it is too late, and the price of giving them the tools to support could be cheap and efficient.Carers as a potential cause area- Although the suffering of carers is (probably) not nearly as severe as the people suffering from mental health issues or illnesses, the scale of the people it affects is wider and it the neglectedness is probably higher.Elaboration:Mental health as an important cause areaDepression is a substantial source of suffering worldwide. It makes up 1.84% of the global burden of disease according to the IHME (Institute for Health Metrics and Evaluation). The treatment of depression is neglected relative to other health interventions in low to middle-income countries. Governments and international aid spending on mental health represent less than 1% of the total spending on health in low-income countries.Carers as a potential solutionA carer is someone who voluntarily provides ongoing care and assistance to another person who, because of mental health issues or psychiatric disability, requires support with everyday tasks. A carer might be a person’s parent, partner, relative or friend. The supporter has an impact on the sufferer and could be the first and only to discover the problem.There are supports, guides and programs for high income countries (the quality and amount of improved due to covid, but also the depression rates are higher), but few programs and high quality study on programs who approach improving mental health through carers in low-middle income countries.Happier lives institute did screen programs listed on the Mental Health Innovation Network, and one of the programs is peer-based. Other interesting programs are StrongMinds Peer Facilitator Programs (which are cheaper, and the facilitators have a higher understanding of the participants) and Carers worldwide. I believe research on programs such as these could be a path to potential effective interventions.Carers as a potential cause areaThe amount of the carers is higher than people suffering from mental difficulties, and their support is more neglected. Caring for a person suffering from mental health difficulties can hurt the supporter (Secondary trauma, Copycat suicide). The direct support for the carers in addition to the secondary improvement of the people severely suffering could improve dramatically the cost-effectiveness.SummaryI believe there is a strong case to consider furthering the study of mental health carer support, and it should be a higher priority in the effective altruism community because of the potential scale, neglectedness, and cost-effectiveness of such programs.Thanks to @EdoArad and @GidiKadosh for helping me write this up, to @CE for inspiring me to write this a year ago, and @sella and @Dan Lahav for incentivizing me to look more deeply into this topic todayAlso thank you generally to everyone promoting mental health as a cause area :)This might be an un-updated text because I ...]]>
Yuval Shapira https://forum.effectivealtruism.org/posts/mGHRdcmKjLhDP5gLc/cause-exploration-support-for-mental-health-carers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause Exploration: Support for Mental Health Carers, published by Yuval Shapira on March 14, 2023 on The Effective Altruism Forum.Tldr- I'm looking into Support for mental health carers as a potential cause area for a while, would love inputs about ITN and generally about the subjectSummary of key points:Mental health as an important cause area- Mental illness seems to cause a high amount of worldwide unhappiness and seems neglected.Carers as a potential solution- Most of the people suffering from mental health issues or illness are surrounded by family and friends, who can potentially have a high impact on the decrease or increase of their mental state. Also, there is a stigma considering mental health- leading to cases being underreported and individuals that are unwilling to seek treatment. The carers could be the first and only to discover the issues before it is too late, and the price of giving them the tools to support could be cheap and efficient.Carers as a potential cause area- Although the suffering of carers is (probably) not nearly as severe as the people suffering from mental health issues or illnesses, the scale of the people it affects is wider and it the neglectedness is probably higher.Elaboration:Mental health as an important cause areaDepression is a substantial source of suffering worldwide. It makes up 1.84% of the global burden of disease according to the IHME (Institute for Health Metrics and Evaluation). The treatment of depression is neglected relative to other health interventions in low to middle-income countries. Governments and international aid spending on mental health represent less than 1% of the total spending on health in low-income countries.Carers as a potential solutionA carer is someone who voluntarily provides ongoing care and assistance to another person who, because of mental health issues or psychiatric disability, requires support with everyday tasks. A carer might be a person’s parent, partner, relative or friend. The supporter has an impact on the sufferer and could be the first and only to discover the problem.There are supports, guides and programs for high income countries (the quality and amount of improved due to covid, but also the depression rates are higher), but few programs and high quality study on programs who approach improving mental health through carers in low-middle income countries.Happier lives institute did screen programs listed on the Mental Health Innovation Network, and one of the programs is peer-based. Other interesting programs are StrongMinds Peer Facilitator Programs (which are cheaper, and the facilitators have a higher understanding of the participants) and Carers worldwide. I believe research on programs such as these could be a path to potential effective interventions.Carers as a potential cause areaThe amount of the carers is higher than people suffering from mental difficulties, and their support is more neglected. Caring for a person suffering from mental health difficulties can hurt the supporter (Secondary trauma, Copycat suicide). The direct support for the carers in addition to the secondary improvement of the people severely suffering could improve dramatically the cost-effectiveness.SummaryI believe there is a strong case to consider furthering the study of mental health carer support, and it should be a higher priority in the effective altruism community because of the potential scale, neglectedness, and cost-effectiveness of such programs.Thanks to @EdoArad and @GidiKadosh for helping me write this up, to @CE for inspiring me to write this a year ago, and @sella and @Dan Lahav for incentivizing me to look more deeply into this topic todayAlso thank you generally to everyone promoting mental health as a cause area :)This might be an un-updated text because I ...]]>
Wed, 15 Mar 2023 14:42:22 +0000 EA - Cause Exploration: Support for Mental Health Carers by Yuval Shapira Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause Exploration: Support for Mental Health Carers, published by Yuval Shapira on March 14, 2023 on The Effective Altruism Forum.Tldr- I'm looking into Support for mental health carers as a potential cause area for a while, would love inputs about ITN and generally about the subjectSummary of key points:Mental health as an important cause area- Mental illness seems to cause a high amount of worldwide unhappiness and seems neglected.Carers as a potential solution- Most of the people suffering from mental health issues or illness are surrounded by family and friends, who can potentially have a high impact on the decrease or increase of their mental state. Also, there is a stigma considering mental health- leading to cases being underreported and individuals that are unwilling to seek treatment. The carers could be the first and only to discover the issues before it is too late, and the price of giving them the tools to support could be cheap and efficient.Carers as a potential cause area- Although the suffering of carers is (probably) not nearly as severe as the people suffering from mental health issues or illnesses, the scale of the people it affects is wider and it the neglectedness is probably higher.Elaboration:Mental health as an important cause areaDepression is a substantial source of suffering worldwide. It makes up 1.84% of the global burden of disease according to the IHME (Institute for Health Metrics and Evaluation). The treatment of depression is neglected relative to other health interventions in low to middle-income countries. Governments and international aid spending on mental health represent less than 1% of the total spending on health in low-income countries.Carers as a potential solutionA carer is someone who voluntarily provides ongoing care and assistance to another person who, because of mental health issues or psychiatric disability, requires support with everyday tasks. A carer might be a person’s parent, partner, relative or friend. The supporter has an impact on the sufferer and could be the first and only to discover the problem.There are supports, guides and programs for high income countries (the quality and amount of improved due to covid, but also the depression rates are higher), but few programs and high quality study on programs who approach improving mental health through carers in low-middle income countries.Happier lives institute did screen programs listed on the Mental Health Innovation Network, and one of the programs is peer-based. Other interesting programs are StrongMinds Peer Facilitator Programs (which are cheaper, and the facilitators have a higher understanding of the participants) and Carers worldwide. I believe research on programs such as these could be a path to potential effective interventions.Carers as a potential cause areaThe amount of the carers is higher than people suffering from mental difficulties, and their support is more neglected. Caring for a person suffering from mental health difficulties can hurt the supporter (Secondary trauma, Copycat suicide). The direct support for the carers in addition to the secondary improvement of the people severely suffering could improve dramatically the cost-effectiveness.SummaryI believe there is a strong case to consider furthering the study of mental health carer support, and it should be a higher priority in the effective altruism community because of the potential scale, neglectedness, and cost-effectiveness of such programs.Thanks to @EdoArad and @GidiKadosh for helping me write this up, to @CE for inspiring me to write this a year ago, and @sella and @Dan Lahav for incentivizing me to look more deeply into this topic todayAlso thank you generally to everyone promoting mental health as a cause area :)This might be an un-updated text because I ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause Exploration: Support for Mental Health Carers, published by Yuval Shapira on March 14, 2023 on The Effective Altruism Forum.Tldr- I'm looking into Support for mental health carers as a potential cause area for a while, would love inputs about ITN and generally about the subjectSummary of key points:Mental health as an important cause area- Mental illness seems to cause a high amount of worldwide unhappiness and seems neglected.Carers as a potential solution- Most of the people suffering from mental health issues or illness are surrounded by family and friends, who can potentially have a high impact on the decrease or increase of their mental state. Also, there is a stigma considering mental health- leading to cases being underreported and individuals that are unwilling to seek treatment. The carers could be the first and only to discover the issues before it is too late, and the price of giving them the tools to support could be cheap and efficient.Carers as a potential cause area- Although the suffering of carers is (probably) not nearly as severe as the people suffering from mental health issues or illnesses, the scale of the people it affects is wider and it the neglectedness is probably higher.Elaboration:Mental health as an important cause areaDepression is a substantial source of suffering worldwide. It makes up 1.84% of the global burden of disease according to the IHME (Institute for Health Metrics and Evaluation). The treatment of depression is neglected relative to other health interventions in low to middle-income countries. Governments and international aid spending on mental health represent less than 1% of the total spending on health in low-income countries.Carers as a potential solutionA carer is someone who voluntarily provides ongoing care and assistance to another person who, because of mental health issues or psychiatric disability, requires support with everyday tasks. A carer might be a person’s parent, partner, relative or friend. The supporter has an impact on the sufferer and could be the first and only to discover the problem.There are supports, guides and programs for high income countries (the quality and amount of improved due to covid, but also the depression rates are higher), but few programs and high quality study on programs who approach improving mental health through carers in low-middle income countries.Happier lives institute did screen programs listed on the Mental Health Innovation Network, and one of the programs is peer-based. Other interesting programs are StrongMinds Peer Facilitator Programs (which are cheaper, and the facilitators have a higher understanding of the participants) and Carers worldwide. I believe research on programs such as these could be a path to potential effective interventions.Carers as a potential cause areaThe amount of the carers is higher than people suffering from mental difficulties, and their support is more neglected. Caring for a person suffering from mental health difficulties can hurt the supporter (Secondary trauma, Copycat suicide). The direct support for the carers in addition to the secondary improvement of the people severely suffering could improve dramatically the cost-effectiveness.SummaryI believe there is a strong case to consider furthering the study of mental health carer support, and it should be a higher priority in the effective altruism community because of the potential scale, neglectedness, and cost-effectiveness of such programs.Thanks to @EdoArad and @GidiKadosh for helping me write this up, to @CE for inspiring me to write this a year ago, and @sella and @Dan Lahav for incentivizing me to look more deeply into this topic todayAlso thank you generally to everyone promoting mental health as a cause area :)This might be an un-updated text because I ...]]>
Yuval Shapira https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:46 None full 5236
rsnrpvKofps5Py7di_NL_EA_EA EA - Shutting Down the Lightcone Offices by Habryka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shutting Down the Lightcone Offices, published by Habryka on March 15, 2023 on The Effective Altruism Forum.Lightcone recently decided to close down a big project we'd been running for the last 1.5 years: An office space in Berkeley for people working on x-risk/EA/rationalist things that we opened August 2021.We haven't written much about why, but I and Ben had written some messages on the internal office slack to explain some of our reasoning, which we've copy-pasted below. (They are from Jan 26th). I might write a longer retrospective sometime, but these messages seemed easy to share, and it seemed good to have something I can more easily refer to publicly.Background dataBelow is a graph of weekly unique keycard-visitors to the office in 2022.The x-axis is each week (skipping the first 3), and the y-axis is the number of unique visitors-with-keycards.Members could bring in guests, which happened quite a bit and isn't measured in the keycard data below, so I think the total number of people who came by the offices is 30-50% higher.The offices opened in August 2021. Including guests, parties, and all the time not shown in the graphs, I'd estimate around 200-300 more people visited, for a total of around 500-600 people who used the offices.The offices cost $70k/month on rent , and around $35k/month on food and drink, and ~$5k/month on contractor time for the office. It also costs core Lightcone staff time which I'd guess at around $75k/year.Ben's AnnouncementClosing the Lightcone Offices @channelHello there everyone,Sadly, I'm here to write that we've decided to close down the Lightcone Offices by the end of March. While we initially intended to transplant the office to the Rose Garden Inn, Oliver has decided (and I am on the same page about this decision) to make a clean break going forward to allow us to step back and renegotiate our relationship to the entire EA/longtermist ecosystem, as well as change what products and services we build.Below I'll give context on the decision and other details, but the main practical information is that the office will no longer be open after Friday March 24th. (There will be a goodbye party on that day.)I asked Oli to briefly state his reasoning for this decision, here's what he says:An explicit part of my impact model for the Lightcone Offices has been that its value was substantially dependent on the existing EA/AI Alignment/Rationality ecosystem being roughly on track to solve the world's most important problems, and that while there are issues, pouring gas into this existing engine, and ironing out its bugs and problems, is one of the most valuable things to do in the world.I had been doubting this assumption of our strategy for a while, even before FTX. Over the past year (with a substantial boost by the FTX collapse) my actual trust in this ecosystem and interest in pouring gas into this existing engine has greatly declined, and I now stand before what I have helped built with great doubts about whether it all will be or has been good for the world.I respect many of the people working here, and I am glad about the overall effect of Lightcone on this ecosystem we have built, and am excited about many of the individuals in the space, and probably in many, maybe even most, future worlds I will come back with new conviction to invest and build out this community that I have been building infrastructure for for almost a full decade. But right now, I think both me and the rest of Lightcone need some space to reconsider our relationship to this whole ecosystem, and I currently assign enough probability that building things in the space is harmful for the world that I can't really justify the level of effort and energy and money that Lightcone has been investing into doing things that pretty indiscriminately grow a...]]>
Habryka https://forum.effectivealtruism.org/posts/rsnrpvKofps5Py7di/shutting-down-the-lightcone-offices Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shutting Down the Lightcone Offices, published by Habryka on March 15, 2023 on The Effective Altruism Forum.Lightcone recently decided to close down a big project we'd been running for the last 1.5 years: An office space in Berkeley for people working on x-risk/EA/rationalist things that we opened August 2021.We haven't written much about why, but I and Ben had written some messages on the internal office slack to explain some of our reasoning, which we've copy-pasted below. (They are from Jan 26th). I might write a longer retrospective sometime, but these messages seemed easy to share, and it seemed good to have something I can more easily refer to publicly.Background dataBelow is a graph of weekly unique keycard-visitors to the office in 2022.The x-axis is each week (skipping the first 3), and the y-axis is the number of unique visitors-with-keycards.Members could bring in guests, which happened quite a bit and isn't measured in the keycard data below, so I think the total number of people who came by the offices is 30-50% higher.The offices opened in August 2021. Including guests, parties, and all the time not shown in the graphs, I'd estimate around 200-300 more people visited, for a total of around 500-600 people who used the offices.The offices cost $70k/month on rent , and around $35k/month on food and drink, and ~$5k/month on contractor time for the office. It also costs core Lightcone staff time which I'd guess at around $75k/year.Ben's AnnouncementClosing the Lightcone Offices @channelHello there everyone,Sadly, I'm here to write that we've decided to close down the Lightcone Offices by the end of March. While we initially intended to transplant the office to the Rose Garden Inn, Oliver has decided (and I am on the same page about this decision) to make a clean break going forward to allow us to step back and renegotiate our relationship to the entire EA/longtermist ecosystem, as well as change what products and services we build.Below I'll give context on the decision and other details, but the main practical information is that the office will no longer be open after Friday March 24th. (There will be a goodbye party on that day.)I asked Oli to briefly state his reasoning for this decision, here's what he says:An explicit part of my impact model for the Lightcone Offices has been that its value was substantially dependent on the existing EA/AI Alignment/Rationality ecosystem being roughly on track to solve the world's most important problems, and that while there are issues, pouring gas into this existing engine, and ironing out its bugs and problems, is one of the most valuable things to do in the world.I had been doubting this assumption of our strategy for a while, even before FTX. Over the past year (with a substantial boost by the FTX collapse) my actual trust in this ecosystem and interest in pouring gas into this existing engine has greatly declined, and I now stand before what I have helped built with great doubts about whether it all will be or has been good for the world.I respect many of the people working here, and I am glad about the overall effect of Lightcone on this ecosystem we have built, and am excited about many of the individuals in the space, and probably in many, maybe even most, future worlds I will come back with new conviction to invest and build out this community that I have been building infrastructure for for almost a full decade. But right now, I think both me and the rest of Lightcone need some space to reconsider our relationship to this whole ecosystem, and I currently assign enough probability that building things in the space is harmful for the world that I can't really justify the level of effort and energy and money that Lightcone has been investing into doing things that pretty indiscriminately grow a...]]>
Wed, 15 Mar 2023 03:16:52 +0000 EA - Shutting Down the Lightcone Offices by Habryka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shutting Down the Lightcone Offices, published by Habryka on March 15, 2023 on The Effective Altruism Forum.Lightcone recently decided to close down a big project we'd been running for the last 1.5 years: An office space in Berkeley for people working on x-risk/EA/rationalist things that we opened August 2021.We haven't written much about why, but I and Ben had written some messages on the internal office slack to explain some of our reasoning, which we've copy-pasted below. (They are from Jan 26th). I might write a longer retrospective sometime, but these messages seemed easy to share, and it seemed good to have something I can more easily refer to publicly.Background dataBelow is a graph of weekly unique keycard-visitors to the office in 2022.The x-axis is each week (skipping the first 3), and the y-axis is the number of unique visitors-with-keycards.Members could bring in guests, which happened quite a bit and isn't measured in the keycard data below, so I think the total number of people who came by the offices is 30-50% higher.The offices opened in August 2021. Including guests, parties, and all the time not shown in the graphs, I'd estimate around 200-300 more people visited, for a total of around 500-600 people who used the offices.The offices cost $70k/month on rent , and around $35k/month on food and drink, and ~$5k/month on contractor time for the office. It also costs core Lightcone staff time which I'd guess at around $75k/year.Ben's AnnouncementClosing the Lightcone Offices @channelHello there everyone,Sadly, I'm here to write that we've decided to close down the Lightcone Offices by the end of March. While we initially intended to transplant the office to the Rose Garden Inn, Oliver has decided (and I am on the same page about this decision) to make a clean break going forward to allow us to step back and renegotiate our relationship to the entire EA/longtermist ecosystem, as well as change what products and services we build.Below I'll give context on the decision and other details, but the main practical information is that the office will no longer be open after Friday March 24th. (There will be a goodbye party on that day.)I asked Oli to briefly state his reasoning for this decision, here's what he says:An explicit part of my impact model for the Lightcone Offices has been that its value was substantially dependent on the existing EA/AI Alignment/Rationality ecosystem being roughly on track to solve the world's most important problems, and that while there are issues, pouring gas into this existing engine, and ironing out its bugs and problems, is one of the most valuable things to do in the world.I had been doubting this assumption of our strategy for a while, even before FTX. Over the past year (with a substantial boost by the FTX collapse) my actual trust in this ecosystem and interest in pouring gas into this existing engine has greatly declined, and I now stand before what I have helped built with great doubts about whether it all will be or has been good for the world.I respect many of the people working here, and I am glad about the overall effect of Lightcone on this ecosystem we have built, and am excited about many of the individuals in the space, and probably in many, maybe even most, future worlds I will come back with new conviction to invest and build out this community that I have been building infrastructure for for almost a full decade. But right now, I think both me and the rest of Lightcone need some space to reconsider our relationship to this whole ecosystem, and I currently assign enough probability that building things in the space is harmful for the world that I can't really justify the level of effort and energy and money that Lightcone has been investing into doing things that pretty indiscriminately grow a...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shutting Down the Lightcone Offices, published by Habryka on March 15, 2023 on The Effective Altruism Forum.Lightcone recently decided to close down a big project we'd been running for the last 1.5 years: An office space in Berkeley for people working on x-risk/EA/rationalist things that we opened August 2021.We haven't written much about why, but I and Ben had written some messages on the internal office slack to explain some of our reasoning, which we've copy-pasted below. (They are from Jan 26th). I might write a longer retrospective sometime, but these messages seemed easy to share, and it seemed good to have something I can more easily refer to publicly.Background dataBelow is a graph of weekly unique keycard-visitors to the office in 2022.The x-axis is each week (skipping the first 3), and the y-axis is the number of unique visitors-with-keycards.Members could bring in guests, which happened quite a bit and isn't measured in the keycard data below, so I think the total number of people who came by the offices is 30-50% higher.The offices opened in August 2021. Including guests, parties, and all the time not shown in the graphs, I'd estimate around 200-300 more people visited, for a total of around 500-600 people who used the offices.The offices cost $70k/month on rent , and around $35k/month on food and drink, and ~$5k/month on contractor time for the office. It also costs core Lightcone staff time which I'd guess at around $75k/year.Ben's AnnouncementClosing the Lightcone Offices @channelHello there everyone,Sadly, I'm here to write that we've decided to close down the Lightcone Offices by the end of March. While we initially intended to transplant the office to the Rose Garden Inn, Oliver has decided (and I am on the same page about this decision) to make a clean break going forward to allow us to step back and renegotiate our relationship to the entire EA/longtermist ecosystem, as well as change what products and services we build.Below I'll give context on the decision and other details, but the main practical information is that the office will no longer be open after Friday March 24th. (There will be a goodbye party on that day.)I asked Oli to briefly state his reasoning for this decision, here's what he says:An explicit part of my impact model for the Lightcone Offices has been that its value was substantially dependent on the existing EA/AI Alignment/Rationality ecosystem being roughly on track to solve the world's most important problems, and that while there are issues, pouring gas into this existing engine, and ironing out its bugs and problems, is one of the most valuable things to do in the world.I had been doubting this assumption of our strategy for a while, even before FTX. Over the past year (with a substantial boost by the FTX collapse) my actual trust in this ecosystem and interest in pouring gas into this existing engine has greatly declined, and I now stand before what I have helped built with great doubts about whether it all will be or has been good for the world.I respect many of the people working here, and I am glad about the overall effect of Lightcone on this ecosystem we have built, and am excited about many of the individuals in the space, and probably in many, maybe even most, future worlds I will come back with new conviction to invest and build out this community that I have been building infrastructure for for almost a full decade. But right now, I think both me and the rest of Lightcone need some space to reconsider our relationship to this whole ecosystem, and I currently assign enough probability that building things in the space is harmful for the world that I can't really justify the level of effort and energy and money that Lightcone has been investing into doing things that pretty indiscriminately grow a...]]>
Habryka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 24:12 None full 5222
xNQQC3ceJ78CD7a2Z_NL_EA_EA EA - Exposure to Lead Paint in Low- and Middle-Income Countries by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exposure to Lead Paint in Low- and Middle-Income Countries, published by Rethink Priorities on March 14, 2023 on The Effective Altruism Forum. for the full version of this report on the Rethink Priorities website.This report is a “shallow” investigation, as described here, and was commissioned by GiveWell and produced by Rethink Priorities from November 2021 to January 2022. We updated and revised this report for publication. GiveWell does not necessarily endorse our conclusions. The primary focus of the report is to provide an overview of what is currently known about the exposure to lead paints in low- and middle-income countries.Key takeawaysLead exposure is common across low- and middle-income countries (LMICs) and can lead to life-long health problems, a reduced IQ, and lower educational attainment. One important exposure pathway is lead-based paint (here defined as a paint to which lead compounds have been added), which is still unregulated in over 50% of countries globally. Yet, little is known about how much lead paint is being used in LMICs and to what extent it contributes to the health and economic burden of lead (link to section).Home-based assessment studies of lead paint levels provide evidence of current exposure to lead, but the evidence in LMICs is scarce and relatively low quality. Based on the few studies we found, our best guess is that the average lead concentration in paint in residential houses in LMICs is between 50 ppm and 4,500 ppm (90% confidence interval) (link to section).Shop-based assessment studies of lead-based paints provide evidence of future exposure to lead. Based on three review studies and expert interviews, we find that lead levels in solvent-based paints are roughly 20 times higher than in water-based paints. Our best guess is that average lead levels of paints currently sold in shops in LMICs are roughly 200-1,400 ppm (80% CI) for water-based paints and 5,000-30,000 ppm (80% CI) for solvent-based paints (link to section).Based on market analyses and small, informal seller surveys, we estimate that market share of solvent-based paints in LMICs is roughly 30%-65% of all residential paints sold (the rest being water-based paints), which is higher than in high-income countries (~20%-30%) (link to section).There is also evidence that lead-based paints are frequently being used in public spaces, such as playgrounds, (pre)schools, hospitals, and daycare centers. However, we do not know the relative importance of exposure from lead paint in homes vs. outside the home (link to section).As many studies on the exposure and the health effects of lead paint are based on historical US-data, we investigated whether current lead paint levels in LMICs are comparable to lead paint levels in the US before regulations were in place. We find that historical US-based lead concentrations in homes were about 6-12 times higher than those in recently studied homes in some LMICs (70% confidence) (link to section).We estimate that doubling the speed of the introduction of lead paint bans across LMICs could prevent 31 to 101 million (90 % CI) children from exposure to lead paint, and lead to total averted income losses of USD 68 to 585 billion (90% CI) and 150,000 to 5.9 million (90% CI) DALYs over the next 100 years. Building on previous analyses done by LEEP (Hu, 2022; LEEP, 2021) and Attina and Trasande (2013), we estimate that lead paint accounts for ~7.5% (with a 90% confidence interval of 2-15%) of the total economic burden of lead. We would like to emphasize that these estimates are highly uncertain, as our model is based on many inputs for which data availability is scarce or even non-existent. This uncertainty could be reduced with more data on the use of paints in LMICs (e.g. frequency of re-painting homes) and on the average dose-resp...]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/xNQQC3ceJ78CD7a2Z/exposure-to-lead-paint-in-low-and-middle-income-countries Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exposure to Lead Paint in Low- and Middle-Income Countries, published by Rethink Priorities on March 14, 2023 on The Effective Altruism Forum. for the full version of this report on the Rethink Priorities website.This report is a “shallow” investigation, as described here, and was commissioned by GiveWell and produced by Rethink Priorities from November 2021 to January 2022. We updated and revised this report for publication. GiveWell does not necessarily endorse our conclusions. The primary focus of the report is to provide an overview of what is currently known about the exposure to lead paints in low- and middle-income countries.Key takeawaysLead exposure is common across low- and middle-income countries (LMICs) and can lead to life-long health problems, a reduced IQ, and lower educational attainment. One important exposure pathway is lead-based paint (here defined as a paint to which lead compounds have been added), which is still unregulated in over 50% of countries globally. Yet, little is known about how much lead paint is being used in LMICs and to what extent it contributes to the health and economic burden of lead (link to section).Home-based assessment studies of lead paint levels provide evidence of current exposure to lead, but the evidence in LMICs is scarce and relatively low quality. Based on the few studies we found, our best guess is that the average lead concentration in paint in residential houses in LMICs is between 50 ppm and 4,500 ppm (90% confidence interval) (link to section).Shop-based assessment studies of lead-based paints provide evidence of future exposure to lead. Based on three review studies and expert interviews, we find that lead levels in solvent-based paints are roughly 20 times higher than in water-based paints. Our best guess is that average lead levels of paints currently sold in shops in LMICs are roughly 200-1,400 ppm (80% CI) for water-based paints and 5,000-30,000 ppm (80% CI) for solvent-based paints (link to section).Based on market analyses and small, informal seller surveys, we estimate that market share of solvent-based paints in LMICs is roughly 30%-65% of all residential paints sold (the rest being water-based paints), which is higher than in high-income countries (~20%-30%) (link to section).There is also evidence that lead-based paints are frequently being used in public spaces, such as playgrounds, (pre)schools, hospitals, and daycare centers. However, we do not know the relative importance of exposure from lead paint in homes vs. outside the home (link to section).As many studies on the exposure and the health effects of lead paint are based on historical US-data, we investigated whether current lead paint levels in LMICs are comparable to lead paint levels in the US before regulations were in place. We find that historical US-based lead concentrations in homes were about 6-12 times higher than those in recently studied homes in some LMICs (70% confidence) (link to section).We estimate that doubling the speed of the introduction of lead paint bans across LMICs could prevent 31 to 101 million (90 % CI) children from exposure to lead paint, and lead to total averted income losses of USD 68 to 585 billion (90% CI) and 150,000 to 5.9 million (90% CI) DALYs over the next 100 years. Building on previous analyses done by LEEP (Hu, 2022; LEEP, 2021) and Attina and Trasande (2013), we estimate that lead paint accounts for ~7.5% (with a 90% confidence interval of 2-15%) of the total economic burden of lead. We would like to emphasize that these estimates are highly uncertain, as our model is based on many inputs for which data availability is scarce or even non-existent. This uncertainty could be reduced with more data on the use of paints in LMICs (e.g. frequency of re-painting homes) and on the average dose-resp...]]>
Tue, 14 Mar 2023 21:47:27 +0000 EA - Exposure to Lead Paint in Low- and Middle-Income Countries by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exposure to Lead Paint in Low- and Middle-Income Countries, published by Rethink Priorities on March 14, 2023 on The Effective Altruism Forum. for the full version of this report on the Rethink Priorities website.This report is a “shallow” investigation, as described here, and was commissioned by GiveWell and produced by Rethink Priorities from November 2021 to January 2022. We updated and revised this report for publication. GiveWell does not necessarily endorse our conclusions. The primary focus of the report is to provide an overview of what is currently known about the exposure to lead paints in low- and middle-income countries.Key takeawaysLead exposure is common across low- and middle-income countries (LMICs) and can lead to life-long health problems, a reduced IQ, and lower educational attainment. One important exposure pathway is lead-based paint (here defined as a paint to which lead compounds have been added), which is still unregulated in over 50% of countries globally. Yet, little is known about how much lead paint is being used in LMICs and to what extent it contributes to the health and economic burden of lead (link to section).Home-based assessment studies of lead paint levels provide evidence of current exposure to lead, but the evidence in LMICs is scarce and relatively low quality. Based on the few studies we found, our best guess is that the average lead concentration in paint in residential houses in LMICs is between 50 ppm and 4,500 ppm (90% confidence interval) (link to section).Shop-based assessment studies of lead-based paints provide evidence of future exposure to lead. Based on three review studies and expert interviews, we find that lead levels in solvent-based paints are roughly 20 times higher than in water-based paints. Our best guess is that average lead levels of paints currently sold in shops in LMICs are roughly 200-1,400 ppm (80% CI) for water-based paints and 5,000-30,000 ppm (80% CI) for solvent-based paints (link to section).Based on market analyses and small, informal seller surveys, we estimate that market share of solvent-based paints in LMICs is roughly 30%-65% of all residential paints sold (the rest being water-based paints), which is higher than in high-income countries (~20%-30%) (link to section).There is also evidence that lead-based paints are frequently being used in public spaces, such as playgrounds, (pre)schools, hospitals, and daycare centers. However, we do not know the relative importance of exposure from lead paint in homes vs. outside the home (link to section).As many studies on the exposure and the health effects of lead paint are based on historical US-data, we investigated whether current lead paint levels in LMICs are comparable to lead paint levels in the US before regulations were in place. We find that historical US-based lead concentrations in homes were about 6-12 times higher than those in recently studied homes in some LMICs (70% confidence) (link to section).We estimate that doubling the speed of the introduction of lead paint bans across LMICs could prevent 31 to 101 million (90 % CI) children from exposure to lead paint, and lead to total averted income losses of USD 68 to 585 billion (90% CI) and 150,000 to 5.9 million (90% CI) DALYs over the next 100 years. Building on previous analyses done by LEEP (Hu, 2022; LEEP, 2021) and Attina and Trasande (2013), we estimate that lead paint accounts for ~7.5% (with a 90% confidence interval of 2-15%) of the total economic burden of lead. We would like to emphasize that these estimates are highly uncertain, as our model is based on many inputs for which data availability is scarce or even non-existent. This uncertainty could be reduced with more data on the use of paints in LMICs (e.g. frequency of re-painting homes) and on the average dose-resp...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exposure to Lead Paint in Low- and Middle-Income Countries, published by Rethink Priorities on March 14, 2023 on The Effective Altruism Forum. for the full version of this report on the Rethink Priorities website.This report is a “shallow” investigation, as described here, and was commissioned by GiveWell and produced by Rethink Priorities from November 2021 to January 2022. We updated and revised this report for publication. GiveWell does not necessarily endorse our conclusions. The primary focus of the report is to provide an overview of what is currently known about the exposure to lead paints in low- and middle-income countries.Key takeawaysLead exposure is common across low- and middle-income countries (LMICs) and can lead to life-long health problems, a reduced IQ, and lower educational attainment. One important exposure pathway is lead-based paint (here defined as a paint to which lead compounds have been added), which is still unregulated in over 50% of countries globally. Yet, little is known about how much lead paint is being used in LMICs and to what extent it contributes to the health and economic burden of lead (link to section).Home-based assessment studies of lead paint levels provide evidence of current exposure to lead, but the evidence in LMICs is scarce and relatively low quality. Based on the few studies we found, our best guess is that the average lead concentration in paint in residential houses in LMICs is between 50 ppm and 4,500 ppm (90% confidence interval) (link to section).Shop-based assessment studies of lead-based paints provide evidence of future exposure to lead. Based on three review studies and expert interviews, we find that lead levels in solvent-based paints are roughly 20 times higher than in water-based paints. Our best guess is that average lead levels of paints currently sold in shops in LMICs are roughly 200-1,400 ppm (80% CI) for water-based paints and 5,000-30,000 ppm (80% CI) for solvent-based paints (link to section).Based on market analyses and small, informal seller surveys, we estimate that market share of solvent-based paints in LMICs is roughly 30%-65% of all residential paints sold (the rest being water-based paints), which is higher than in high-income countries (~20%-30%) (link to section).There is also evidence that lead-based paints are frequently being used in public spaces, such as playgrounds, (pre)schools, hospitals, and daycare centers. However, we do not know the relative importance of exposure from lead paint in homes vs. outside the home (link to section).As many studies on the exposure and the health effects of lead paint are based on historical US-data, we investigated whether current lead paint levels in LMICs are comparable to lead paint levels in the US before regulations were in place. We find that historical US-based lead concentrations in homes were about 6-12 times higher than those in recently studied homes in some LMICs (70% confidence) (link to section).We estimate that doubling the speed of the introduction of lead paint bans across LMICs could prevent 31 to 101 million (90 % CI) children from exposure to lead paint, and lead to total averted income losses of USD 68 to 585 billion (90% CI) and 150,000 to 5.9 million (90% CI) DALYs over the next 100 years. Building on previous analyses done by LEEP (Hu, 2022; LEEP, 2021) and Attina and Trasande (2013), we estimate that lead paint accounts for ~7.5% (with a 90% confidence interval of 2-15%) of the total economic burden of lead. We would like to emphasize that these estimates are highly uncertain, as our model is based on many inputs for which data availability is scarce or even non-existent. This uncertainty could be reduced with more data on the use of paints in LMICs (e.g. frequency of re-painting homes) and on the average dose-resp...]]>
Rethink Priorities https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:57 None full 5219
eAaeeuEd4j6oJ3Ep5_NL_EA_EA EA - GPT-4 is out: thread (and links) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPT-4 is out: thread (& links), published by Lizka on March 14, 2023 on The Effective Altruism Forum.GPT-4 is out. There's also a LessWrong post on this with some discussion. The developers are doing a live-stream ~now.And it's been confirmed that Bing runs on GPT-4.Also:Claude (Anthropic)PaLM APIHere's an image from the OpenAI blog post about GPT-4:(This is a short post.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://forum.effectivealtruism.org/posts/eAaeeuEd4j6oJ3Ep5/gpt-4-is-out-thread-and-links Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPT-4 is out: thread (& links), published by Lizka on March 14, 2023 on The Effective Altruism Forum.GPT-4 is out. There's also a LessWrong post on this with some discussion. The developers are doing a live-stream ~now.And it's been confirmed that Bing runs on GPT-4.Also:Claude (Anthropic)PaLM APIHere's an image from the OpenAI blog post about GPT-4:(This is a short post.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 14 Mar 2023 21:20:38 +0000 EA - GPT-4 is out: thread (and links) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPT-4 is out: thread (& links), published by Lizka on March 14, 2023 on The Effective Altruism Forum.GPT-4 is out. There's also a LessWrong post on this with some discussion. The developers are doing a live-stream ~now.And it's been confirmed that Bing runs on GPT-4.Also:Claude (Anthropic)PaLM APIHere's an image from the OpenAI blog post about GPT-4:(This is a short post.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPT-4 is out: thread (& links), published by Lizka on March 14, 2023 on The Effective Altruism Forum.GPT-4 is out. There's also a LessWrong post on this with some discussion. The developers are doing a live-stream ~now.And it's been confirmed that Bing runs on GPT-4.Also:Claude (Anthropic)PaLM APIHere's an image from the OpenAI blog post about GPT-4:(This is a short post.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:47 None full 5218
CqDzfiLhShqu9CS4F_NL_EA_EA EA - Paper summary: Longtermist institutional reform (Tyler M. John and William MacAskill) by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper summary: Longtermist institutional reform (Tyler M. John and William MacAskill), published by Global Priorities Institute on March 13, 2023 on The Effective Altruism Forum.This is a summary of the GPI working paper "Longtermist institutional reform" by Tyler M. John and William MacAskill (published in the 2021 edited volume “the long view”). The summary was written by Riley Harris.Political decisions can have lasting effects on the lives and wellbeing of future generations. Yet political institutions tend to make short-term decisions with only the current generation – or even just the current election cycle – in mind. In “longtermist institutional reform”, Tyler M. John and William MacAskill identify the causes of short-termism in government and give four recommendations for how institutions could be improved. These are the creation of in-government research institutes, a futures assembly, posterity impact statements and – more radically – an ‘upper house’ representing future generations.Causes of short-termismJohn and MacAskill discuss three main causes of short-termism. Firstly, politicians may not care about the long term. This may be because they discount the value of future generations, or simply because it is easy to ignore the effects of policies that are not experienced here and now. Secondly, even if politicians are motivated by concern for future generations, it may be difficult to know the long-term effects of different policies. Finally, even motivated and knowledgeable actors might face structural barriers to implementing long-term focussed policies – for instance, these policies might sometimes appear worse in the short-term and reduce a candidate's chances of re-election.Suggested reformsIn-government research institutesThe first suggested reform is the creation of in-government research institutes that could independently analyse long-term trends, estimate expected long-term impacts of policy and identify matters of long-term importance. These institutes could help fight short-termism by identifying the likely future impacts of policies, making these impacts vivid, and documenting how our leaders are affecting the future. They should also be designed to resist the political incentives that drive short-termism elsewhere. For instance, they could be functionally independent from the government, hire without input from politicians, and be flexible enough to prioritise the most important issues for the future. To ensure their advice is not ignored, the government should be required to read and respond to their recommendations.Futures assemblyThe futures assembly would be a permanent citizens’ assembly which seeks to represent the interests of future generations and give dedicated policy time to issues of importance for the long-term. Several examples already exist where similar citizens’ assemblies have helped create consensus on matters of great uncertainty and controversy, enabling timely government action. In-government research institutes excel at producing high quality information, but lack legitimacy. In contrast, a citizens’ assembly like this one could be composed of randomly selected citizens that are statistically representative of the general population. John and MacAskill believe this representativeness brings political force –politicians who ignore the assembly put their reputations at risk. We can design futures assemblies to avoid the incentive structures that result in short-termism – such as election cycles, party interests and campaign financing.Members should be empowered to call upon experts, and their terms should be long enough to build expertise but short enough to avoid problems like interest group capture – perhaps two years. They should also be empowered to set their own agenda and publicly disseminate their resul...]]>
Global Priorities Institute https://forum.effectivealtruism.org/posts/CqDzfiLhShqu9CS4F/paper-summary-longtermist-institutional-reform-tyler-m-john Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper summary: Longtermist institutional reform (Tyler M. John and William MacAskill), published by Global Priorities Institute on March 13, 2023 on The Effective Altruism Forum.This is a summary of the GPI working paper "Longtermist institutional reform" by Tyler M. John and William MacAskill (published in the 2021 edited volume “the long view”). The summary was written by Riley Harris.Political decisions can have lasting effects on the lives and wellbeing of future generations. Yet political institutions tend to make short-term decisions with only the current generation – or even just the current election cycle – in mind. In “longtermist institutional reform”, Tyler M. John and William MacAskill identify the causes of short-termism in government and give four recommendations for how institutions could be improved. These are the creation of in-government research institutes, a futures assembly, posterity impact statements and – more radically – an ‘upper house’ representing future generations.Causes of short-termismJohn and MacAskill discuss three main causes of short-termism. Firstly, politicians may not care about the long term. This may be because they discount the value of future generations, or simply because it is easy to ignore the effects of policies that are not experienced here and now. Secondly, even if politicians are motivated by concern for future generations, it may be difficult to know the long-term effects of different policies. Finally, even motivated and knowledgeable actors might face structural barriers to implementing long-term focussed policies – for instance, these policies might sometimes appear worse in the short-term and reduce a candidate's chances of re-election.Suggested reformsIn-government research institutesThe first suggested reform is the creation of in-government research institutes that could independently analyse long-term trends, estimate expected long-term impacts of policy and identify matters of long-term importance. These institutes could help fight short-termism by identifying the likely future impacts of policies, making these impacts vivid, and documenting how our leaders are affecting the future. They should also be designed to resist the political incentives that drive short-termism elsewhere. For instance, they could be functionally independent from the government, hire without input from politicians, and be flexible enough to prioritise the most important issues for the future. To ensure their advice is not ignored, the government should be required to read and respond to their recommendations.Futures assemblyThe futures assembly would be a permanent citizens’ assembly which seeks to represent the interests of future generations and give dedicated policy time to issues of importance for the long-term. Several examples already exist where similar citizens’ assemblies have helped create consensus on matters of great uncertainty and controversy, enabling timely government action. In-government research institutes excel at producing high quality information, but lack legitimacy. In contrast, a citizens’ assembly like this one could be composed of randomly selected citizens that are statistically representative of the general population. John and MacAskill believe this representativeness brings political force –politicians who ignore the assembly put their reputations at risk. We can design futures assemblies to avoid the incentive structures that result in short-termism – such as election cycles, party interests and campaign financing.Members should be empowered to call upon experts, and their terms should be long enough to build expertise but short enough to avoid problems like interest group capture – perhaps two years. They should also be empowered to set their own agenda and publicly disseminate their resul...]]>
Tue, 14 Mar 2023 18:47:39 +0000 EA - Paper summary: Longtermist institutional reform (Tyler M. John and William MacAskill) by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper summary: Longtermist institutional reform (Tyler M. John and William MacAskill), published by Global Priorities Institute on March 13, 2023 on The Effective Altruism Forum.This is a summary of the GPI working paper "Longtermist institutional reform" by Tyler M. John and William MacAskill (published in the 2021 edited volume “the long view”). The summary was written by Riley Harris.Political decisions can have lasting effects on the lives and wellbeing of future generations. Yet political institutions tend to make short-term decisions with only the current generation – or even just the current election cycle – in mind. In “longtermist institutional reform”, Tyler M. John and William MacAskill identify the causes of short-termism in government and give four recommendations for how institutions could be improved. These are the creation of in-government research institutes, a futures assembly, posterity impact statements and – more radically – an ‘upper house’ representing future generations.Causes of short-termismJohn and MacAskill discuss three main causes of short-termism. Firstly, politicians may not care about the long term. This may be because they discount the value of future generations, or simply because it is easy to ignore the effects of policies that are not experienced here and now. Secondly, even if politicians are motivated by concern for future generations, it may be difficult to know the long-term effects of different policies. Finally, even motivated and knowledgeable actors might face structural barriers to implementing long-term focussed policies – for instance, these policies might sometimes appear worse in the short-term and reduce a candidate's chances of re-election.Suggested reformsIn-government research institutesThe first suggested reform is the creation of in-government research institutes that could independently analyse long-term trends, estimate expected long-term impacts of policy and identify matters of long-term importance. These institutes could help fight short-termism by identifying the likely future impacts of policies, making these impacts vivid, and documenting how our leaders are affecting the future. They should also be designed to resist the political incentives that drive short-termism elsewhere. For instance, they could be functionally independent from the government, hire without input from politicians, and be flexible enough to prioritise the most important issues for the future. To ensure their advice is not ignored, the government should be required to read and respond to their recommendations.Futures assemblyThe futures assembly would be a permanent citizens’ assembly which seeks to represent the interests of future generations and give dedicated policy time to issues of importance for the long-term. Several examples already exist where similar citizens’ assemblies have helped create consensus on matters of great uncertainty and controversy, enabling timely government action. In-government research institutes excel at producing high quality information, but lack legitimacy. In contrast, a citizens’ assembly like this one could be composed of randomly selected citizens that are statistically representative of the general population. John and MacAskill believe this representativeness brings political force –politicians who ignore the assembly put their reputations at risk. We can design futures assemblies to avoid the incentive structures that result in short-termism – such as election cycles, party interests and campaign financing.Members should be empowered to call upon experts, and their terms should be long enough to build expertise but short enough to avoid problems like interest group capture – perhaps two years. They should also be empowered to set their own agenda and publicly disseminate their resul...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper summary: Longtermist institutional reform (Tyler M. John and William MacAskill), published by Global Priorities Institute on March 13, 2023 on The Effective Altruism Forum.This is a summary of the GPI working paper "Longtermist institutional reform" by Tyler M. John and William MacAskill (published in the 2021 edited volume “the long view”). The summary was written by Riley Harris.Political decisions can have lasting effects on the lives and wellbeing of future generations. Yet political institutions tend to make short-term decisions with only the current generation – or even just the current election cycle – in mind. In “longtermist institutional reform”, Tyler M. John and William MacAskill identify the causes of short-termism in government and give four recommendations for how institutions could be improved. These are the creation of in-government research institutes, a futures assembly, posterity impact statements and – more radically – an ‘upper house’ representing future generations.Causes of short-termismJohn and MacAskill discuss three main causes of short-termism. Firstly, politicians may not care about the long term. This may be because they discount the value of future generations, or simply because it is easy to ignore the effects of policies that are not experienced here and now. Secondly, even if politicians are motivated by concern for future generations, it may be difficult to know the long-term effects of different policies. Finally, even motivated and knowledgeable actors might face structural barriers to implementing long-term focussed policies – for instance, these policies might sometimes appear worse in the short-term and reduce a candidate's chances of re-election.Suggested reformsIn-government research institutesThe first suggested reform is the creation of in-government research institutes that could independently analyse long-term trends, estimate expected long-term impacts of policy and identify matters of long-term importance. These institutes could help fight short-termism by identifying the likely future impacts of policies, making these impacts vivid, and documenting how our leaders are affecting the future. They should also be designed to resist the political incentives that drive short-termism elsewhere. For instance, they could be functionally independent from the government, hire without input from politicians, and be flexible enough to prioritise the most important issues for the future. To ensure their advice is not ignored, the government should be required to read and respond to their recommendations.Futures assemblyThe futures assembly would be a permanent citizens’ assembly which seeks to represent the interests of future generations and give dedicated policy time to issues of importance for the long-term. Several examples already exist where similar citizens’ assemblies have helped create consensus on matters of great uncertainty and controversy, enabling timely government action. In-government research institutes excel at producing high quality information, but lack legitimacy. In contrast, a citizens’ assembly like this one could be composed of randomly selected citizens that are statistically representative of the general population. John and MacAskill believe this representativeness brings political force –politicians who ignore the assembly put their reputations at risk. We can design futures assemblies to avoid the incentive structures that result in short-termism – such as election cycles, party interests and campaign financing.Members should be empowered to call upon experts, and their terms should be long enough to build expertise but short enough to avoid problems like interest group capture – perhaps two years. They should also be empowered to set their own agenda and publicly disseminate their resul...]]>
Global Priorities Institute https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:29 None full 5221
RqzSQwGEPmvbemgkH_NL_EA_EA EA - A BOTEC-Model for Comparing Impact Estimations in Community Building by Patrick Gruban Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A BOTEC-Model for Comparing Impact Estimations in Community Building, published by Patrick Gruban on March 14, 2023 on The Effective Altruism Forum.We are grateful to Anneke Pogarell, Birte Spekker, Calum Calvert, Catherine Low, Joan Gass, Jona Glade, Jonathan Michel, Kyle Lucchese, Moritz Hanke and Sarah Pomeranz for conversations and feedback that significantly improved this post. Any errors, of fact or judgment, remain our entirely own.SummaryWhen prioritising future programs in EA community building, we currently lack a quantitative way to express underlying assumptions. In this post, we look at different existing approaches and present our first version of a model. We intended it to make Back-of-the-envelope (BOTEC) estimations by looking at an intervention (community building or marketing activity) and thinking about how it might affect participants on their way to having a more impactful life. The model uses an estimation of the average potential of people in a group to have an impact on their lives as well as the likelihood of them achieving it. If you’d like only to have a look at the model, you can skip the first paragraphs and directly go to Our current model.Epistemic StatusWe spent about 40-60 hours thinking about this, came up with it from scratch as EA community builders and are uncertain of the claims.MotivationAs new co-directors of EA Germany, we started working on our strategy last November, collecting the requests for programs from the community and looking at existing programs of other national EA groups. While we were able to include some early on as they seemed broadly useful, we were unsure about others. Comparing programs that differ in target group size and composition as well as the type of intervention meant that we would have to rely on and weigh a set of assumptions. To discuss these assumptions and ideally test some of them out, we were looking for a unified approach in the form of a model with a standardised set of parameters.Impact in Community BuildingThe term community building in effective altruism can cover various activities like mass media communication, education courses, speaker events, multi-day retreats and 1-1 career guiding sessions. The way we understand it is more about the outcome than the process, covering not only activities that focus on a community of people. It could be any action that guides participants in their search for taking a significant action with a high expected impact and to continue their engagement in this search.The impact of the community builder depends on their part in the eventual impact of the community members. A community builder who wants to achieve high impact would thus prioritise interventions by the expected impact contribution per invested time or money.Charity Evaluators like GiveWell can indicate impact per dollars donated in the form of lives saved, disability-adjusted life years (DALYs) reduced or similar numbers. If we guide someone to donate at all, donate more effectively and donate more, we can assume that part of the impact can be attributed to us.For people changing their careers to work on the world's most pressing problems, starting charities, doing research or spreading awareness, it’s harder to assess the impact. We assume an uneven impact distribution per person, probably heavy-tailed. Some people have been responsible for saving millions, such as Norman Borlaug or might have averted a global catastrophe like Stanislav Petrov.Existing approachesMarketing Approach: Multi-Touch AttributionIn our strategy, we write:Finding the people that could be interested in making a change to effective altruistic actions, guiding them through the process of learning and connecting while keeping them engaged up to the point where they take action and beyond is a multi-step ...]]>
Patrick Gruban https://forum.effectivealtruism.org/posts/RqzSQwGEPmvbemgkH/a-botec-model-for-comparing-impact-estimations-in-community Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A BOTEC-Model for Comparing Impact Estimations in Community Building, published by Patrick Gruban on March 14, 2023 on The Effective Altruism Forum.We are grateful to Anneke Pogarell, Birte Spekker, Calum Calvert, Catherine Low, Joan Gass, Jona Glade, Jonathan Michel, Kyle Lucchese, Moritz Hanke and Sarah Pomeranz for conversations and feedback that significantly improved this post. Any errors, of fact or judgment, remain our entirely own.SummaryWhen prioritising future programs in EA community building, we currently lack a quantitative way to express underlying assumptions. In this post, we look at different existing approaches and present our first version of a model. We intended it to make Back-of-the-envelope (BOTEC) estimations by looking at an intervention (community building or marketing activity) and thinking about how it might affect participants on their way to having a more impactful life. The model uses an estimation of the average potential of people in a group to have an impact on their lives as well as the likelihood of them achieving it. If you’d like only to have a look at the model, you can skip the first paragraphs and directly go to Our current model.Epistemic StatusWe spent about 40-60 hours thinking about this, came up with it from scratch as EA community builders and are uncertain of the claims.MotivationAs new co-directors of EA Germany, we started working on our strategy last November, collecting the requests for programs from the community and looking at existing programs of other national EA groups. While we were able to include some early on as they seemed broadly useful, we were unsure about others. Comparing programs that differ in target group size and composition as well as the type of intervention meant that we would have to rely on and weigh a set of assumptions. To discuss these assumptions and ideally test some of them out, we were looking for a unified approach in the form of a model with a standardised set of parameters.Impact in Community BuildingThe term community building in effective altruism can cover various activities like mass media communication, education courses, speaker events, multi-day retreats and 1-1 career guiding sessions. The way we understand it is more about the outcome than the process, covering not only activities that focus on a community of people. It could be any action that guides participants in their search for taking a significant action with a high expected impact and to continue their engagement in this search.The impact of the community builder depends on their part in the eventual impact of the community members. A community builder who wants to achieve high impact would thus prioritise interventions by the expected impact contribution per invested time or money.Charity Evaluators like GiveWell can indicate impact per dollars donated in the form of lives saved, disability-adjusted life years (DALYs) reduced or similar numbers. If we guide someone to donate at all, donate more effectively and donate more, we can assume that part of the impact can be attributed to us.For people changing their careers to work on the world's most pressing problems, starting charities, doing research or spreading awareness, it’s harder to assess the impact. We assume an uneven impact distribution per person, probably heavy-tailed. Some people have been responsible for saving millions, such as Norman Borlaug or might have averted a global catastrophe like Stanislav Petrov.Existing approachesMarketing Approach: Multi-Touch AttributionIn our strategy, we write:Finding the people that could be interested in making a change to effective altruistic actions, guiding them through the process of learning and connecting while keeping them engaged up to the point where they take action and beyond is a multi-step ...]]>
Tue, 14 Mar 2023 16:26:02 +0000 EA - A BOTEC-Model for Comparing Impact Estimations in Community Building by Patrick Gruban Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A BOTEC-Model for Comparing Impact Estimations in Community Building, published by Patrick Gruban on March 14, 2023 on The Effective Altruism Forum.We are grateful to Anneke Pogarell, Birte Spekker, Calum Calvert, Catherine Low, Joan Gass, Jona Glade, Jonathan Michel, Kyle Lucchese, Moritz Hanke and Sarah Pomeranz for conversations and feedback that significantly improved this post. Any errors, of fact or judgment, remain our entirely own.SummaryWhen prioritising future programs in EA community building, we currently lack a quantitative way to express underlying assumptions. In this post, we look at different existing approaches and present our first version of a model. We intended it to make Back-of-the-envelope (BOTEC) estimations by looking at an intervention (community building or marketing activity) and thinking about how it might affect participants on their way to having a more impactful life. The model uses an estimation of the average potential of people in a group to have an impact on their lives as well as the likelihood of them achieving it. If you’d like only to have a look at the model, you can skip the first paragraphs and directly go to Our current model.Epistemic StatusWe spent about 40-60 hours thinking about this, came up with it from scratch as EA community builders and are uncertain of the claims.MotivationAs new co-directors of EA Germany, we started working on our strategy last November, collecting the requests for programs from the community and looking at existing programs of other national EA groups. While we were able to include some early on as they seemed broadly useful, we were unsure about others. Comparing programs that differ in target group size and composition as well as the type of intervention meant that we would have to rely on and weigh a set of assumptions. To discuss these assumptions and ideally test some of them out, we were looking for a unified approach in the form of a model with a standardised set of parameters.Impact in Community BuildingThe term community building in effective altruism can cover various activities like mass media communication, education courses, speaker events, multi-day retreats and 1-1 career guiding sessions. The way we understand it is more about the outcome than the process, covering not only activities that focus on a community of people. It could be any action that guides participants in their search for taking a significant action with a high expected impact and to continue their engagement in this search.The impact of the community builder depends on their part in the eventual impact of the community members. A community builder who wants to achieve high impact would thus prioritise interventions by the expected impact contribution per invested time or money.Charity Evaluators like GiveWell can indicate impact per dollars donated in the form of lives saved, disability-adjusted life years (DALYs) reduced or similar numbers. If we guide someone to donate at all, donate more effectively and donate more, we can assume that part of the impact can be attributed to us.For people changing their careers to work on the world's most pressing problems, starting charities, doing research or spreading awareness, it’s harder to assess the impact. We assume an uneven impact distribution per person, probably heavy-tailed. Some people have been responsible for saving millions, such as Norman Borlaug or might have averted a global catastrophe like Stanislav Petrov.Existing approachesMarketing Approach: Multi-Touch AttributionIn our strategy, we write:Finding the people that could be interested in making a change to effective altruistic actions, guiding them through the process of learning and connecting while keeping them engaged up to the point where they take action and beyond is a multi-step ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A BOTEC-Model for Comparing Impact Estimations in Community Building, published by Patrick Gruban on March 14, 2023 on The Effective Altruism Forum.We are grateful to Anneke Pogarell, Birte Spekker, Calum Calvert, Catherine Low, Joan Gass, Jona Glade, Jonathan Michel, Kyle Lucchese, Moritz Hanke and Sarah Pomeranz for conversations and feedback that significantly improved this post. Any errors, of fact or judgment, remain our entirely own.SummaryWhen prioritising future programs in EA community building, we currently lack a quantitative way to express underlying assumptions. In this post, we look at different existing approaches and present our first version of a model. We intended it to make Back-of-the-envelope (BOTEC) estimations by looking at an intervention (community building or marketing activity) and thinking about how it might affect participants on their way to having a more impactful life. The model uses an estimation of the average potential of people in a group to have an impact on their lives as well as the likelihood of them achieving it. If you’d like only to have a look at the model, you can skip the first paragraphs and directly go to Our current model.Epistemic StatusWe spent about 40-60 hours thinking about this, came up with it from scratch as EA community builders and are uncertain of the claims.MotivationAs new co-directors of EA Germany, we started working on our strategy last November, collecting the requests for programs from the community and looking at existing programs of other national EA groups. While we were able to include some early on as they seemed broadly useful, we were unsure about others. Comparing programs that differ in target group size and composition as well as the type of intervention meant that we would have to rely on and weigh a set of assumptions. To discuss these assumptions and ideally test some of them out, we were looking for a unified approach in the form of a model with a standardised set of parameters.Impact in Community BuildingThe term community building in effective altruism can cover various activities like mass media communication, education courses, speaker events, multi-day retreats and 1-1 career guiding sessions. The way we understand it is more about the outcome than the process, covering not only activities that focus on a community of people. It could be any action that guides participants in their search for taking a significant action with a high expected impact and to continue their engagement in this search.The impact of the community builder depends on their part in the eventual impact of the community members. A community builder who wants to achieve high impact would thus prioritise interventions by the expected impact contribution per invested time or money.Charity Evaluators like GiveWell can indicate impact per dollars donated in the form of lives saved, disability-adjusted life years (DALYs) reduced or similar numbers. If we guide someone to donate at all, donate more effectively and donate more, we can assume that part of the impact can be attributed to us.For people changing their careers to work on the world's most pressing problems, starting charities, doing research or spreading awareness, it’s harder to assess the impact. We assume an uneven impact distribution per person, probably heavy-tailed. Some people have been responsible for saving millions, such as Norman Borlaug or might have averted a global catastrophe like Stanislav Petrov.Existing approachesMarketing Approach: Multi-Touch AttributionIn our strategy, we write:Finding the people that could be interested in making a change to effective altruistic actions, guiding them through the process of learning and connecting while keeping them engaged up to the point where they take action and beyond is a multi-step ...]]>
Patrick Gruban https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:41 None full 5220
Gcnkp4qZJDownkLTj_NL_EA_EA EA - Two University Group Organizer Opportunities: Pre-EAG London Summit and Summer Internship by Joris P Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two University Group Organizer Opportunities: Pre-EAG London Summit & Summer Internship, published by Joris P on March 13, 2023 on The Effective Altruism Forum.SummaryCEA’s University Groups Team is excited to announce two new opportunities:A summer internship for university group organizersDates: flexible, during the Northern Hemisphere summerApplication deadline: Wednesday, March 22Find more info & apply here!A university group organizer summit before EAG LondonDates: Monday 15 May – Friday 19 MayApplication deadline: Monday, March 27Find more info & apply here!Summer InternshipWhat?CEA's University Groups Team is running a paid internship program for about 5 university group organizers! During the internship, you will work on a meta-EA project, receiving mentorship and coaching from CEA staff. We have a list with a number of project ideas, but also encourage you to think about other projects you'd like to run.This is your opportunity to think big, and see what it's like to work on meta-EA projects full-time!Why?Test out different aspects of meta-EA work as a potential career pathReceive coaching and mentorship through CEAA competitive wage for part-time or full-time work during your breakConsideration for extended work with CEAFor who?You might be a good fit for the internship if you are:A university group organizer who is interested in testing out community building and/or EA entrepreneurial projects as a career pathHighly organized, reliable, and independentKnowledgeable of EA and eager to learn moreMake sure to read more and apply here!More infoIf you have any questions, including about whether you'd be a good fit, reach out to Jessica at jessica [dot] mccurdy [at] centreforeffectivealtruism [dot] org.Find more info & apply here!Initial applications are due soon: Wednesday, March 22 at 11:59pm Anywhere on Earth.Pre-EAG London University Group Organizer SummitWhat?Monday 15 May – Friday 19 May (before EAG London 2023), the CEA University Groups team is hosting a summit for university group organizers. The summit will kickstart renewed support for experienced university groups and foster better knowledge transfer across groups.Why?The summit has three core goals:Boost top university groups by facilitating knowledge transfer among experienced organizers.Improve advice for university groups by accumulating examples of effective late-stage group strategies.Facilitate connections between experienced organizers and newer organizers, with the hope that attendees will continue to share information and support each other.For who?All current university group organizers can apply for this summit!This event will be particularly well-suited for experienced organizers at established university groups. We’re also excited about this summit serving the next generation of organizers at established groups and ambitious organizers at new groups who are eager to think carefully about groups strategy. If you think this summit would plausibly be valuable for you, we encourage you to just go ahead and apply!More infoIf you have any questions, including about whether you'd be a good fit, reach out to us at unigroups [at] centreforeffectivealtruism [dot] org.Find more info & apply here!Applications are due soon: Monday, March 27th at 11:59pm Anywhere on Earth.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joris P https://forum.effectivealtruism.org/posts/Gcnkp4qZJDownkLTj/two-university-group-organizer-opportunities-pre-eag-london Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two University Group Organizer Opportunities: Pre-EAG London Summit & Summer Internship, published by Joris P on March 13, 2023 on The Effective Altruism Forum.SummaryCEA’s University Groups Team is excited to announce two new opportunities:A summer internship for university group organizersDates: flexible, during the Northern Hemisphere summerApplication deadline: Wednesday, March 22Find more info & apply here!A university group organizer summit before EAG LondonDates: Monday 15 May – Friday 19 MayApplication deadline: Monday, March 27Find more info & apply here!Summer InternshipWhat?CEA's University Groups Team is running a paid internship program for about 5 university group organizers! During the internship, you will work on a meta-EA project, receiving mentorship and coaching from CEA staff. We have a list with a number of project ideas, but also encourage you to think about other projects you'd like to run.This is your opportunity to think big, and see what it's like to work on meta-EA projects full-time!Why?Test out different aspects of meta-EA work as a potential career pathReceive coaching and mentorship through CEAA competitive wage for part-time or full-time work during your breakConsideration for extended work with CEAFor who?You might be a good fit for the internship if you are:A university group organizer who is interested in testing out community building and/or EA entrepreneurial projects as a career pathHighly organized, reliable, and independentKnowledgeable of EA and eager to learn moreMake sure to read more and apply here!More infoIf you have any questions, including about whether you'd be a good fit, reach out to Jessica at jessica [dot] mccurdy [at] centreforeffectivealtruism [dot] org.Find more info & apply here!Initial applications are due soon: Wednesday, March 22 at 11:59pm Anywhere on Earth.Pre-EAG London University Group Organizer SummitWhat?Monday 15 May – Friday 19 May (before EAG London 2023), the CEA University Groups team is hosting a summit for university group organizers. The summit will kickstart renewed support for experienced university groups and foster better knowledge transfer across groups.Why?The summit has three core goals:Boost top university groups by facilitating knowledge transfer among experienced organizers.Improve advice for university groups by accumulating examples of effective late-stage group strategies.Facilitate connections between experienced organizers and newer organizers, with the hope that attendees will continue to share information and support each other.For who?All current university group organizers can apply for this summit!This event will be particularly well-suited for experienced organizers at established university groups. We’re also excited about this summit serving the next generation of organizers at established groups and ambitious organizers at new groups who are eager to think carefully about groups strategy. If you think this summit would plausibly be valuable for you, we encourage you to just go ahead and apply!More infoIf you have any questions, including about whether you'd be a good fit, reach out to us at unigroups [at] centreforeffectivealtruism [dot] org.Find more info & apply here!Applications are due soon: Monday, March 27th at 11:59pm Anywhere on Earth.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 14 Mar 2023 04:45:02 +0000 EA - Two University Group Organizer Opportunities: Pre-EAG London Summit and Summer Internship by Joris P Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two University Group Organizer Opportunities: Pre-EAG London Summit & Summer Internship, published by Joris P on March 13, 2023 on The Effective Altruism Forum.SummaryCEA’s University Groups Team is excited to announce two new opportunities:A summer internship for university group organizersDates: flexible, during the Northern Hemisphere summerApplication deadline: Wednesday, March 22Find more info & apply here!A university group organizer summit before EAG LondonDates: Monday 15 May – Friday 19 MayApplication deadline: Monday, March 27Find more info & apply here!Summer InternshipWhat?CEA's University Groups Team is running a paid internship program for about 5 university group organizers! During the internship, you will work on a meta-EA project, receiving mentorship and coaching from CEA staff. We have a list with a number of project ideas, but also encourage you to think about other projects you'd like to run.This is your opportunity to think big, and see what it's like to work on meta-EA projects full-time!Why?Test out different aspects of meta-EA work as a potential career pathReceive coaching and mentorship through CEAA competitive wage for part-time or full-time work during your breakConsideration for extended work with CEAFor who?You might be a good fit for the internship if you are:A university group organizer who is interested in testing out community building and/or EA entrepreneurial projects as a career pathHighly organized, reliable, and independentKnowledgeable of EA and eager to learn moreMake sure to read more and apply here!More infoIf you have any questions, including about whether you'd be a good fit, reach out to Jessica at jessica [dot] mccurdy [at] centreforeffectivealtruism [dot] org.Find more info & apply here!Initial applications are due soon: Wednesday, March 22 at 11:59pm Anywhere on Earth.Pre-EAG London University Group Organizer SummitWhat?Monday 15 May – Friday 19 May (before EAG London 2023), the CEA University Groups team is hosting a summit for university group organizers. The summit will kickstart renewed support for experienced university groups and foster better knowledge transfer across groups.Why?The summit has three core goals:Boost top university groups by facilitating knowledge transfer among experienced organizers.Improve advice for university groups by accumulating examples of effective late-stage group strategies.Facilitate connections between experienced organizers and newer organizers, with the hope that attendees will continue to share information and support each other.For who?All current university group organizers can apply for this summit!This event will be particularly well-suited for experienced organizers at established university groups. We’re also excited about this summit serving the next generation of organizers at established groups and ambitious organizers at new groups who are eager to think carefully about groups strategy. If you think this summit would plausibly be valuable for you, we encourage you to just go ahead and apply!More infoIf you have any questions, including about whether you'd be a good fit, reach out to us at unigroups [at] centreforeffectivealtruism [dot] org.Find more info & apply here!Applications are due soon: Monday, March 27th at 11:59pm Anywhere on Earth.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two University Group Organizer Opportunities: Pre-EAG London Summit & Summer Internship, published by Joris P on March 13, 2023 on The Effective Altruism Forum.SummaryCEA’s University Groups Team is excited to announce two new opportunities:A summer internship for university group organizersDates: flexible, during the Northern Hemisphere summerApplication deadline: Wednesday, March 22Find more info & apply here!A university group organizer summit before EAG LondonDates: Monday 15 May – Friday 19 MayApplication deadline: Monday, March 27Find more info & apply here!Summer InternshipWhat?CEA's University Groups Team is running a paid internship program for about 5 university group organizers! During the internship, you will work on a meta-EA project, receiving mentorship and coaching from CEA staff. We have a list with a number of project ideas, but also encourage you to think about other projects you'd like to run.This is your opportunity to think big, and see what it's like to work on meta-EA projects full-time!Why?Test out different aspects of meta-EA work as a potential career pathReceive coaching and mentorship through CEAA competitive wage for part-time or full-time work during your breakConsideration for extended work with CEAFor who?You might be a good fit for the internship if you are:A university group organizer who is interested in testing out community building and/or EA entrepreneurial projects as a career pathHighly organized, reliable, and independentKnowledgeable of EA and eager to learn moreMake sure to read more and apply here!More infoIf you have any questions, including about whether you'd be a good fit, reach out to Jessica at jessica [dot] mccurdy [at] centreforeffectivealtruism [dot] org.Find more info & apply here!Initial applications are due soon: Wednesday, March 22 at 11:59pm Anywhere on Earth.Pre-EAG London University Group Organizer SummitWhat?Monday 15 May – Friday 19 May (before EAG London 2023), the CEA University Groups team is hosting a summit for university group organizers. The summit will kickstart renewed support for experienced university groups and foster better knowledge transfer across groups.Why?The summit has three core goals:Boost top university groups by facilitating knowledge transfer among experienced organizers.Improve advice for university groups by accumulating examples of effective late-stage group strategies.Facilitate connections between experienced organizers and newer organizers, with the hope that attendees will continue to share information and support each other.For who?All current university group organizers can apply for this summit!This event will be particularly well-suited for experienced organizers at established university groups. We’re also excited about this summit serving the next generation of organizers at established groups and ambitious organizers at new groups who are eager to think carefully about groups strategy. If you think this summit would plausibly be valuable for you, we encourage you to just go ahead and apply!More infoIf you have any questions, including about whether you'd be a good fit, reach out to us at unigroups [at] centreforeffectivealtruism [dot] org.Find more info & apply here!Applications are due soon: Monday, March 27th at 11:59pm Anywhere on Earth.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joris P https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:42 None full 5213
kNeYA6hTrA3Cd9Q2d_NL_EA_EA EA - Paper summary: Are we living at the hinge of history? (William MacAskill) by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper summary: Are we living at the hinge of history? (William MacAskill), published by Global Priorities Institute on March 13, 2023 on The Effective Altruism Forum.This is a summary of the GPI Working Paper “Are we living at the hinge of history?” by William MacAskill. (also published in the 2022 edited volume “Ethics and Existence: The Legacy of Derek Parfit”). The summary was written by Riley Harris.Longtermist altruists – who care about how much impact they have, but not about when that impact occurs – have a strong reason to invest resources before using them directly. Invested resources could grow much larger and be used to do much more good in the future. For example, a $1 investment that grows 5% per year would become $17,000 in 200 years. However, some people argue that we are living in an unusual time, during which our best opportunities to improve the world are much better than they ever will be in the future. If so, perhaps we should spend our resources as soon as possible.In “Are we living at the hinge of history?”, William MacAskill investigates whether actions in our current time are likely to be much more influential than other times in the future. (‘Influential’ here refers specifically to how much good we expect to do via direct monetary expenditure – the consideration most relevant to our altruistic decision to spend now or later.) After making this ‘hinge of history’ claim more precise, MacAskill gives two main arguments against the claim: the base rate and inductive arguments. He then discusses some reasons why our time might be unusual, but ultimately concludes that he does not think that the ‘hinge of history’ claim holds true.The base rate argumentWhen we think about the entire future of humanity, we expect there to be a lot of people, and so we should initially be very sceptical that anyone alive today will be amongst the most influential human beings. Indeed, if humanity doesn’t go extinct in the near future, there could be a vast number of future people – settling near just 0.1% of stars in the Milky Way with the same population as Earth would mean there were 1024 (a trillion trillion) people to come. Suppose that, before inspecting further evidence, we believe that we are about as likely as anyone else to be particularly influential. Then, our initial belief that anyone alive today is amongst the million most influential people would be 1 in 1018 (1 in a million trillion).From such a sceptical starting point, we would need extremely strong evidence to become convinced that we are presently in the most influential time era. Even if there were only 108 (one hundred trillion) people to come, then in order to move from this extremely sceptical position (1 in 108) to a more moderate position (1 in 10), we would need evidence about 3 million times as strong as a randomised control trial with a p-value of 0.05. MacAskill thinks that, although we do have some evidence that indicates we may be at the most influential time, this evidence is not nearly strong enough.The inductive argumentThere is another strong reason to think our time is not the most influential, MacAskill argues:Premise 1: Influentialness has been increasing over time.Premise 2: We should expect this trend to continue.Conclusion: We should expect the influentialness of people in the future to be greater than our own influentialness.Premise 1 can be best illustrated with an example: a well-educated and wealthy altruist living in Europe in 1600 would not have been in a position to know about the best opportunities to shape the long-run future. In particular, most of the existential risks they faced (e.g. an asteroid collision or supervolcano) were not known, nor would they have been in a good position to do anything about them even if they were known. Even if they had th...]]>
Global Priorities Institute https://forum.effectivealtruism.org/posts/kNeYA6hTrA3Cd9Q2d/paper-summary-are-we-living-at-the-hinge-of-history-william Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper summary: Are we living at the hinge of history? (William MacAskill), published by Global Priorities Institute on March 13, 2023 on The Effective Altruism Forum.This is a summary of the GPI Working Paper “Are we living at the hinge of history?” by William MacAskill. (also published in the 2022 edited volume “Ethics and Existence: The Legacy of Derek Parfit”). The summary was written by Riley Harris.Longtermist altruists – who care about how much impact they have, but not about when that impact occurs – have a strong reason to invest resources before using them directly. Invested resources could grow much larger and be used to do much more good in the future. For example, a $1 investment that grows 5% per year would become $17,000 in 200 years. However, some people argue that we are living in an unusual time, during which our best opportunities to improve the world are much better than they ever will be in the future. If so, perhaps we should spend our resources as soon as possible.In “Are we living at the hinge of history?”, William MacAskill investigates whether actions in our current time are likely to be much more influential than other times in the future. (‘Influential’ here refers specifically to how much good we expect to do via direct monetary expenditure – the consideration most relevant to our altruistic decision to spend now or later.) After making this ‘hinge of history’ claim more precise, MacAskill gives two main arguments against the claim: the base rate and inductive arguments. He then discusses some reasons why our time might be unusual, but ultimately concludes that he does not think that the ‘hinge of history’ claim holds true.The base rate argumentWhen we think about the entire future of humanity, we expect there to be a lot of people, and so we should initially be very sceptical that anyone alive today will be amongst the most influential human beings. Indeed, if humanity doesn’t go extinct in the near future, there could be a vast number of future people – settling near just 0.1% of stars in the Milky Way with the same population as Earth would mean there were 1024 (a trillion trillion) people to come. Suppose that, before inspecting further evidence, we believe that we are about as likely as anyone else to be particularly influential. Then, our initial belief that anyone alive today is amongst the million most influential people would be 1 in 1018 (1 in a million trillion).From such a sceptical starting point, we would need extremely strong evidence to become convinced that we are presently in the most influential time era. Even if there were only 108 (one hundred trillion) people to come, then in order to move from this extremely sceptical position (1 in 108) to a more moderate position (1 in 10), we would need evidence about 3 million times as strong as a randomised control trial with a p-value of 0.05. MacAskill thinks that, although we do have some evidence that indicates we may be at the most influential time, this evidence is not nearly strong enough.The inductive argumentThere is another strong reason to think our time is not the most influential, MacAskill argues:Premise 1: Influentialness has been increasing over time.Premise 2: We should expect this trend to continue.Conclusion: We should expect the influentialness of people in the future to be greater than our own influentialness.Premise 1 can be best illustrated with an example: a well-educated and wealthy altruist living in Europe in 1600 would not have been in a position to know about the best opportunities to shape the long-run future. In particular, most of the existential risks they faced (e.g. an asteroid collision or supervolcano) were not known, nor would they have been in a good position to do anything about them even if they were known. Even if they had th...]]>
Tue, 14 Mar 2023 03:52:44 +0000 EA - Paper summary: Are we living at the hinge of history? (William MacAskill) by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper summary: Are we living at the hinge of history? (William MacAskill), published by Global Priorities Institute on March 13, 2023 on The Effective Altruism Forum.This is a summary of the GPI Working Paper “Are we living at the hinge of history?” by William MacAskill. (also published in the 2022 edited volume “Ethics and Existence: The Legacy of Derek Parfit”). The summary was written by Riley Harris.Longtermist altruists – who care about how much impact they have, but not about when that impact occurs – have a strong reason to invest resources before using them directly. Invested resources could grow much larger and be used to do much more good in the future. For example, a $1 investment that grows 5% per year would become $17,000 in 200 years. However, some people argue that we are living in an unusual time, during which our best opportunities to improve the world are much better than they ever will be in the future. If so, perhaps we should spend our resources as soon as possible.In “Are we living at the hinge of history?”, William MacAskill investigates whether actions in our current time are likely to be much more influential than other times in the future. (‘Influential’ here refers specifically to how much good we expect to do via direct monetary expenditure – the consideration most relevant to our altruistic decision to spend now or later.) After making this ‘hinge of history’ claim more precise, MacAskill gives two main arguments against the claim: the base rate and inductive arguments. He then discusses some reasons why our time might be unusual, but ultimately concludes that he does not think that the ‘hinge of history’ claim holds true.The base rate argumentWhen we think about the entire future of humanity, we expect there to be a lot of people, and so we should initially be very sceptical that anyone alive today will be amongst the most influential human beings. Indeed, if humanity doesn’t go extinct in the near future, there could be a vast number of future people – settling near just 0.1% of stars in the Milky Way with the same population as Earth would mean there were 1024 (a trillion trillion) people to come. Suppose that, before inspecting further evidence, we believe that we are about as likely as anyone else to be particularly influential. Then, our initial belief that anyone alive today is amongst the million most influential people would be 1 in 1018 (1 in a million trillion).From such a sceptical starting point, we would need extremely strong evidence to become convinced that we are presently in the most influential time era. Even if there were only 108 (one hundred trillion) people to come, then in order to move from this extremely sceptical position (1 in 108) to a more moderate position (1 in 10), we would need evidence about 3 million times as strong as a randomised control trial with a p-value of 0.05. MacAskill thinks that, although we do have some evidence that indicates we may be at the most influential time, this evidence is not nearly strong enough.The inductive argumentThere is another strong reason to think our time is not the most influential, MacAskill argues:Premise 1: Influentialness has been increasing over time.Premise 2: We should expect this trend to continue.Conclusion: We should expect the influentialness of people in the future to be greater than our own influentialness.Premise 1 can be best illustrated with an example: a well-educated and wealthy altruist living in Europe in 1600 would not have been in a position to know about the best opportunities to shape the long-run future. In particular, most of the existential risks they faced (e.g. an asteroid collision or supervolcano) were not known, nor would they have been in a good position to do anything about them even if they were known. Even if they had th...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper summary: Are we living at the hinge of history? (William MacAskill), published by Global Priorities Institute on March 13, 2023 on The Effective Altruism Forum.This is a summary of the GPI Working Paper “Are we living at the hinge of history?” by William MacAskill. (also published in the 2022 edited volume “Ethics and Existence: The Legacy of Derek Parfit”). The summary was written by Riley Harris.Longtermist altruists – who care about how much impact they have, but not about when that impact occurs – have a strong reason to invest resources before using them directly. Invested resources could grow much larger and be used to do much more good in the future. For example, a $1 investment that grows 5% per year would become $17,000 in 200 years. However, some people argue that we are living in an unusual time, during which our best opportunities to improve the world are much better than they ever will be in the future. If so, perhaps we should spend our resources as soon as possible.In “Are we living at the hinge of history?”, William MacAskill investigates whether actions in our current time are likely to be much more influential than other times in the future. (‘Influential’ here refers specifically to how much good we expect to do via direct monetary expenditure – the consideration most relevant to our altruistic decision to spend now or later.) After making this ‘hinge of history’ claim more precise, MacAskill gives two main arguments against the claim: the base rate and inductive arguments. He then discusses some reasons why our time might be unusual, but ultimately concludes that he does not think that the ‘hinge of history’ claim holds true.The base rate argumentWhen we think about the entire future of humanity, we expect there to be a lot of people, and so we should initially be very sceptical that anyone alive today will be amongst the most influential human beings. Indeed, if humanity doesn’t go extinct in the near future, there could be a vast number of future people – settling near just 0.1% of stars in the Milky Way with the same population as Earth would mean there were 1024 (a trillion trillion) people to come. Suppose that, before inspecting further evidence, we believe that we are about as likely as anyone else to be particularly influential. Then, our initial belief that anyone alive today is amongst the million most influential people would be 1 in 1018 (1 in a million trillion).From such a sceptical starting point, we would need extremely strong evidence to become convinced that we are presently in the most influential time era. Even if there were only 108 (one hundred trillion) people to come, then in order to move from this extremely sceptical position (1 in 108) to a more moderate position (1 in 10), we would need evidence about 3 million times as strong as a randomised control trial with a p-value of 0.05. MacAskill thinks that, although we do have some evidence that indicates we may be at the most influential time, this evidence is not nearly strong enough.The inductive argumentThere is another strong reason to think our time is not the most influential, MacAskill argues:Premise 1: Influentialness has been increasing over time.Premise 2: We should expect this trend to continue.Conclusion: We should expect the influentialness of people in the future to be greater than our own influentialness.Premise 1 can be best illustrated with an example: a well-educated and wealthy altruist living in Europe in 1600 would not have been in a position to know about the best opportunities to shape the long-run future. In particular, most of the existential risks they faced (e.g. an asteroid collision or supervolcano) were not known, nor would they have been in a good position to do anything about them even if they were known. Even if they had th...]]>
Global Priorities Institute https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:02 None full 5214
9piqRDGX6BisdMdRw_NL_EA_EA EA - "Can We Survive Technology?" by John von Neumann by Eli Rose Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Can We Survive Technology?" by John von Neumann, published by Eli Rose on March 13, 2023 on The Effective Altruism Forum.This is an essay written by John von Neumann in 1955, which I think is fairly described as being about global catastrophic risks from emerging technologies. It discusses a bunch of specific technologies that seemed like a big deal in 1955 — which is interesting in itself as a list of predictions; nuclear power! increased automation! weather control? — but explicitly tries to draw a general lesson.von Neumann is regarded as one of the greatest scientists of the 20th century, and was involved in the Manhattan project in addition to inventing zillions of other things.I'm posting here because a) I think the essay is worth reading in its own right, and b) I find it interesting to see what the past's intellectuals thought of issues related transformative technology, and how their perspective differs/is similar to ours. Notably, I disagree with several of the conclusions (e.g. von Neumann seems to think differential technological development is doomed).On another level, I find the essay, and the fact of it having been written in 1955, somewhat motivating, though not at all in a straightforward way.Some quotes:Since most time scales are fixed by human reaction times, habits, and other physiological and psycho logical factors, the effect of the increased speed of technological processes was to enlarge the size of units — political, organizational, economic, and cultural — affected by technological operations. That is, instead of performing the same operations as before in less time, now larger-scale operations were performed in the same time. This important evolution has a natural limit, that of the earth's actual size. The limit is now being reached, or at least closely approached....there is in most of these developments a trend toward affecting the earth as a whole, or to be more exact, toward producing effects that can be projected from any one to any other point on the earth. There is an intrinsic conflict with geography — and institutions based thereon — as understood today.What safeguard remains? Apparently only day-to-day — or perhaps year-to-year — opportunistic measures, a long sequence of small, correct decisions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Eli Rose https://forum.effectivealtruism.org/posts/9piqRDGX6BisdMdRw/can-we-survive-technology-by-john-von-neumann Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Can We Survive Technology?" by John von Neumann, published by Eli Rose on March 13, 2023 on The Effective Altruism Forum.This is an essay written by John von Neumann in 1955, which I think is fairly described as being about global catastrophic risks from emerging technologies. It discusses a bunch of specific technologies that seemed like a big deal in 1955 — which is interesting in itself as a list of predictions; nuclear power! increased automation! weather control? — but explicitly tries to draw a general lesson.von Neumann is regarded as one of the greatest scientists of the 20th century, and was involved in the Manhattan project in addition to inventing zillions of other things.I'm posting here because a) I think the essay is worth reading in its own right, and b) I find it interesting to see what the past's intellectuals thought of issues related transformative technology, and how their perspective differs/is similar to ours. Notably, I disagree with several of the conclusions (e.g. von Neumann seems to think differential technological development is doomed).On another level, I find the essay, and the fact of it having been written in 1955, somewhat motivating, though not at all in a straightforward way.Some quotes:Since most time scales are fixed by human reaction times, habits, and other physiological and psycho logical factors, the effect of the increased speed of technological processes was to enlarge the size of units — political, organizational, economic, and cultural — affected by technological operations. That is, instead of performing the same operations as before in less time, now larger-scale operations were performed in the same time. This important evolution has a natural limit, that of the earth's actual size. The limit is now being reached, or at least closely approached....there is in most of these developments a trend toward affecting the earth as a whole, or to be more exact, toward producing effects that can be projected from any one to any other point on the earth. There is an intrinsic conflict with geography — and institutions based thereon — as understood today.What safeguard remains? Apparently only day-to-day — or perhaps year-to-year — opportunistic measures, a long sequence of small, correct decisions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 14 Mar 2023 02:49:01 +0000 EA - "Can We Survive Technology?" by John von Neumann by Eli Rose Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Can We Survive Technology?" by John von Neumann, published by Eli Rose on March 13, 2023 on The Effective Altruism Forum.This is an essay written by John von Neumann in 1955, which I think is fairly described as being about global catastrophic risks from emerging technologies. It discusses a bunch of specific technologies that seemed like a big deal in 1955 — which is interesting in itself as a list of predictions; nuclear power! increased automation! weather control? — but explicitly tries to draw a general lesson.von Neumann is regarded as one of the greatest scientists of the 20th century, and was involved in the Manhattan project in addition to inventing zillions of other things.I'm posting here because a) I think the essay is worth reading in its own right, and b) I find it interesting to see what the past's intellectuals thought of issues related transformative technology, and how their perspective differs/is similar to ours. Notably, I disagree with several of the conclusions (e.g. von Neumann seems to think differential technological development is doomed).On another level, I find the essay, and the fact of it having been written in 1955, somewhat motivating, though not at all in a straightforward way.Some quotes:Since most time scales are fixed by human reaction times, habits, and other physiological and psycho logical factors, the effect of the increased speed of technological processes was to enlarge the size of units — political, organizational, economic, and cultural — affected by technological operations. That is, instead of performing the same operations as before in less time, now larger-scale operations were performed in the same time. This important evolution has a natural limit, that of the earth's actual size. The limit is now being reached, or at least closely approached....there is in most of these developments a trend toward affecting the earth as a whole, or to be more exact, toward producing effects that can be projected from any one to any other point on the earth. There is an intrinsic conflict with geography — and institutions based thereon — as understood today.What safeguard remains? Apparently only day-to-day — or perhaps year-to-year — opportunistic measures, a long sequence of small, correct decisions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Can We Survive Technology?" by John von Neumann, published by Eli Rose on March 13, 2023 on The Effective Altruism Forum.This is an essay written by John von Neumann in 1955, which I think is fairly described as being about global catastrophic risks from emerging technologies. It discusses a bunch of specific technologies that seemed like a big deal in 1955 — which is interesting in itself as a list of predictions; nuclear power! increased automation! weather control? — but explicitly tries to draw a general lesson.von Neumann is regarded as one of the greatest scientists of the 20th century, and was involved in the Manhattan project in addition to inventing zillions of other things.I'm posting here because a) I think the essay is worth reading in its own right, and b) I find it interesting to see what the past's intellectuals thought of issues related transformative technology, and how their perspective differs/is similar to ours. Notably, I disagree with several of the conclusions (e.g. von Neumann seems to think differential technological development is doomed).On another level, I find the essay, and the fact of it having been written in 1955, somewhat motivating, though not at all in a straightforward way.Some quotes:Since most time scales are fixed by human reaction times, habits, and other physiological and psycho logical factors, the effect of the increased speed of technological processes was to enlarge the size of units — political, organizational, economic, and cultural — affected by technological operations. That is, instead of performing the same operations as before in less time, now larger-scale operations were performed in the same time. This important evolution has a natural limit, that of the earth's actual size. The limit is now being reached, or at least closely approached....there is in most of these developments a trend toward affecting the earth as a whole, or to be more exact, toward producing effects that can be projected from any one to any other point on the earth. There is an intrinsic conflict with geography — and institutions based thereon — as understood today.What safeguard remains? Apparently only day-to-day — or perhaps year-to-year — opportunistic measures, a long sequence of small, correct decisions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Eli Rose https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:24 None full 5215
qxaAyAuw3DBW5WAis_NL_EA_EA EA - Shallow Investigation: Stillbirths by Joseph Pusey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Investigation: Stillbirths, published by Joseph Pusey on March 13, 2023 on The Effective Altruism Forum.This topic has the potential to be deeply upsetting to those reading it, particularly to those who have personal experience of the topic in question. If you feel that I’ve missed or misunderstood something, or could have phrased things more sensitively, please reach out to me.Throughout the review, words like “woman” or “mother” are used in places where some people might prefer “birthing person” or similar. This choice reflects the language used in the available literature and does not constitute a position on what the most appropriate terminology is.This report is a shallow dive into stillbirths, a sub-area within maternal and neonatal health, and was produced as part of the Cause Innovation Bootcamp. The report, which reflects approximately 40-50 hours of research, offers a brief dive into whether a particular problem area is a promising area for either funders or founders to be working in. Being a shallow report, it should be used to decide whether or not more research and work into a particular problem area should be prioritised.Executive SummaryImportance: This problem is likely very important (epistemic status-strong)- stillbirths are widespread, concentrated in the world’s poorest countries, and decreasing only very slowly compared to the decline in maternal and infant mortality. There are more deaths resulting from stillbirth than those caused by HIV and malaria combined (depending on your personal definition of death- see below), and even in high-income countries stillbirths outnumber infant deaths.Tractability: This problem is likely moderately tractable (moderate)- most stillbirths are likely to be preventable, but the most impactful interventions are complex, facility-based, expensive, and most effective at scale e.g. guaranteeing access to high-quality emergency obstetric careNeglectedness: This problem is unlikely to be neglected (less strong)- although still under-researched and under-counted, stillbirths are the target of some of the largest organisations in the global health and development world, including the WHO, UNICEF, the Bill and Melinda Gates Foundation, and the Lancet. Many countries have committed to the Every Newborn Action Plan, which aims- amongst other things- to reduce the frequency of stillbirths.Key uncertaintiesKey uncertainty 1: Accurately assessing the impact of stillbirths, and therefore the cost-effectiveness of interventions aimed at reducing stillbirths, depends significantly on to what extent direct costs to the unborn child are counted. Some organisations view stillbirths as having negative effects on the parents and wider communities but do not count the potential years of life lost by the unborn child; others use time-discounting methods to calculate a hypothetical number of expected QALYS lost, and still others see it as completely equivalent to losing an averagely-long life. Differences in the weighting of this loss can alter the calculated impacts of stillbirth by several orders of magnitude and is likely the most important consideration when considering a stillbirth-reducing interventionKey uncertainty 2: Interventions which reduce the risk of stillbirth tend to be those which also address maternal and neonatal health more broadly; therefore, it is very difficult to accurately assess the cost-effectiveness of these interventions solely in terms of their impact on stillbirths, and more complex models which take into account the impacts on maternal, neonatal, and infant health are likely more accurate in assessing the overall cost-effectiveness of interventions.Key uncertainty 3: A large proportion of the data around interventions to reduce stillbirths comes from high-income countries, but most still...]]>
Joseph Pusey https://forum.effectivealtruism.org/posts/qxaAyAuw3DBW5WAis/shallow-investigation-stillbirths Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Investigation: Stillbirths, published by Joseph Pusey on March 13, 2023 on The Effective Altruism Forum.This topic has the potential to be deeply upsetting to those reading it, particularly to those who have personal experience of the topic in question. If you feel that I’ve missed or misunderstood something, or could have phrased things more sensitively, please reach out to me.Throughout the review, words like “woman” or “mother” are used in places where some people might prefer “birthing person” or similar. This choice reflects the language used in the available literature and does not constitute a position on what the most appropriate terminology is.This report is a shallow dive into stillbirths, a sub-area within maternal and neonatal health, and was produced as part of the Cause Innovation Bootcamp. The report, which reflects approximately 40-50 hours of research, offers a brief dive into whether a particular problem area is a promising area for either funders or founders to be working in. Being a shallow report, it should be used to decide whether or not more research and work into a particular problem area should be prioritised.Executive SummaryImportance: This problem is likely very important (epistemic status-strong)- stillbirths are widespread, concentrated in the world’s poorest countries, and decreasing only very slowly compared to the decline in maternal and infant mortality. There are more deaths resulting from stillbirth than those caused by HIV and malaria combined (depending on your personal definition of death- see below), and even in high-income countries stillbirths outnumber infant deaths.Tractability: This problem is likely moderately tractable (moderate)- most stillbirths are likely to be preventable, but the most impactful interventions are complex, facility-based, expensive, and most effective at scale e.g. guaranteeing access to high-quality emergency obstetric careNeglectedness: This problem is unlikely to be neglected (less strong)- although still under-researched and under-counted, stillbirths are the target of some of the largest organisations in the global health and development world, including the WHO, UNICEF, the Bill and Melinda Gates Foundation, and the Lancet. Many countries have committed to the Every Newborn Action Plan, which aims- amongst other things- to reduce the frequency of stillbirths.Key uncertaintiesKey uncertainty 1: Accurately assessing the impact of stillbirths, and therefore the cost-effectiveness of interventions aimed at reducing stillbirths, depends significantly on to what extent direct costs to the unborn child are counted. Some organisations view stillbirths as having negative effects on the parents and wider communities but do not count the potential years of life lost by the unborn child; others use time-discounting methods to calculate a hypothetical number of expected QALYS lost, and still others see it as completely equivalent to losing an averagely-long life. Differences in the weighting of this loss can alter the calculated impacts of stillbirth by several orders of magnitude and is likely the most important consideration when considering a stillbirth-reducing interventionKey uncertainty 2: Interventions which reduce the risk of stillbirth tend to be those which also address maternal and neonatal health more broadly; therefore, it is very difficult to accurately assess the cost-effectiveness of these interventions solely in terms of their impact on stillbirths, and more complex models which take into account the impacts on maternal, neonatal, and infant health are likely more accurate in assessing the overall cost-effectiveness of interventions.Key uncertainty 3: A large proportion of the data around interventions to reduce stillbirths comes from high-income countries, but most still...]]>
Mon, 13 Mar 2023 16:43:27 +0000 EA - Shallow Investigation: Stillbirths by Joseph Pusey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Investigation: Stillbirths, published by Joseph Pusey on March 13, 2023 on The Effective Altruism Forum.This topic has the potential to be deeply upsetting to those reading it, particularly to those who have personal experience of the topic in question. If you feel that I’ve missed or misunderstood something, or could have phrased things more sensitively, please reach out to me.Throughout the review, words like “woman” or “mother” are used in places where some people might prefer “birthing person” or similar. This choice reflects the language used in the available literature and does not constitute a position on what the most appropriate terminology is.This report is a shallow dive into stillbirths, a sub-area within maternal and neonatal health, and was produced as part of the Cause Innovation Bootcamp. The report, which reflects approximately 40-50 hours of research, offers a brief dive into whether a particular problem area is a promising area for either funders or founders to be working in. Being a shallow report, it should be used to decide whether or not more research and work into a particular problem area should be prioritised.Executive SummaryImportance: This problem is likely very important (epistemic status-strong)- stillbirths are widespread, concentrated in the world’s poorest countries, and decreasing only very slowly compared to the decline in maternal and infant mortality. There are more deaths resulting from stillbirth than those caused by HIV and malaria combined (depending on your personal definition of death- see below), and even in high-income countries stillbirths outnumber infant deaths.Tractability: This problem is likely moderately tractable (moderate)- most stillbirths are likely to be preventable, but the most impactful interventions are complex, facility-based, expensive, and most effective at scale e.g. guaranteeing access to high-quality emergency obstetric careNeglectedness: This problem is unlikely to be neglected (less strong)- although still under-researched and under-counted, stillbirths are the target of some of the largest organisations in the global health and development world, including the WHO, UNICEF, the Bill and Melinda Gates Foundation, and the Lancet. Many countries have committed to the Every Newborn Action Plan, which aims- amongst other things- to reduce the frequency of stillbirths.Key uncertaintiesKey uncertainty 1: Accurately assessing the impact of stillbirths, and therefore the cost-effectiveness of interventions aimed at reducing stillbirths, depends significantly on to what extent direct costs to the unborn child are counted. Some organisations view stillbirths as having negative effects on the parents and wider communities but do not count the potential years of life lost by the unborn child; others use time-discounting methods to calculate a hypothetical number of expected QALYS lost, and still others see it as completely equivalent to losing an averagely-long life. Differences in the weighting of this loss can alter the calculated impacts of stillbirth by several orders of magnitude and is likely the most important consideration when considering a stillbirth-reducing interventionKey uncertainty 2: Interventions which reduce the risk of stillbirth tend to be those which also address maternal and neonatal health more broadly; therefore, it is very difficult to accurately assess the cost-effectiveness of these interventions solely in terms of their impact on stillbirths, and more complex models which take into account the impacts on maternal, neonatal, and infant health are likely more accurate in assessing the overall cost-effectiveness of interventions.Key uncertainty 3: A large proportion of the data around interventions to reduce stillbirths comes from high-income countries, but most still...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Investigation: Stillbirths, published by Joseph Pusey on March 13, 2023 on The Effective Altruism Forum.This topic has the potential to be deeply upsetting to those reading it, particularly to those who have personal experience of the topic in question. If you feel that I’ve missed or misunderstood something, or could have phrased things more sensitively, please reach out to me.Throughout the review, words like “woman” or “mother” are used in places where some people might prefer “birthing person” or similar. This choice reflects the language used in the available literature and does not constitute a position on what the most appropriate terminology is.This report is a shallow dive into stillbirths, a sub-area within maternal and neonatal health, and was produced as part of the Cause Innovation Bootcamp. The report, which reflects approximately 40-50 hours of research, offers a brief dive into whether a particular problem area is a promising area for either funders or founders to be working in. Being a shallow report, it should be used to decide whether or not more research and work into a particular problem area should be prioritised.Executive SummaryImportance: This problem is likely very important (epistemic status-strong)- stillbirths are widespread, concentrated in the world’s poorest countries, and decreasing only very slowly compared to the decline in maternal and infant mortality. There are more deaths resulting from stillbirth than those caused by HIV and malaria combined (depending on your personal definition of death- see below), and even in high-income countries stillbirths outnumber infant deaths.Tractability: This problem is likely moderately tractable (moderate)- most stillbirths are likely to be preventable, but the most impactful interventions are complex, facility-based, expensive, and most effective at scale e.g. guaranteeing access to high-quality emergency obstetric careNeglectedness: This problem is unlikely to be neglected (less strong)- although still under-researched and under-counted, stillbirths are the target of some of the largest organisations in the global health and development world, including the WHO, UNICEF, the Bill and Melinda Gates Foundation, and the Lancet. Many countries have committed to the Every Newborn Action Plan, which aims- amongst other things- to reduce the frequency of stillbirths.Key uncertaintiesKey uncertainty 1: Accurately assessing the impact of stillbirths, and therefore the cost-effectiveness of interventions aimed at reducing stillbirths, depends significantly on to what extent direct costs to the unborn child are counted. Some organisations view stillbirths as having negative effects on the parents and wider communities but do not count the potential years of life lost by the unborn child; others use time-discounting methods to calculate a hypothetical number of expected QALYS lost, and still others see it as completely equivalent to losing an averagely-long life. Differences in the weighting of this loss can alter the calculated impacts of stillbirth by several orders of magnitude and is likely the most important consideration when considering a stillbirth-reducing interventionKey uncertainty 2: Interventions which reduce the risk of stillbirth tend to be those which also address maternal and neonatal health more broadly; therefore, it is very difficult to accurately assess the cost-effectiveness of these interventions solely in terms of their impact on stillbirths, and more complex models which take into account the impacts on maternal, neonatal, and infant health are likely more accurate in assessing the overall cost-effectiveness of interventions.Key uncertainty 3: A large proportion of the data around interventions to reduce stillbirths comes from high-income countries, but most still...]]>
Joseph Pusey https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 25:45 None full 5205
pKG5fsfrgDSQtssfu_NL_EA_EA EA - On taking AI risk seriously by Eleni A Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On taking AI risk seriously, published by Eleni A on March 13, 2023 on The Effective Altruism Forum.Yet another New York Times piece on AI. A non-AI safety friend sent it to me saying "This is the scariest article I've read so far. I'm afraid I haven't been taking it very seriously". I'm noting this because I'm always curious to observe what moves people, what's out there that has the power to change minds. In the past few months, there's been increasing public attention to AI and all sorts of hot and cold takes, e.g., about intelligence, consciousness, sentience, etc. But this might be one of the articles that convey the AI risk message in a language that helps inform and think about AI safety.The following is what stood out to me and made me think that it's time for philosophy of science to also take AI risk seriously and revisit the idea of scientific explanation given the success of deep learning:I cannot emphasize this enough: We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.“If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations,” writes Meghan O’Gieblyn in her brilliant book, “God, Human, Animal, Machine,” “an ‘explanation’ that would be impossible to understand.”Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Eleni A https://forum.effectivealtruism.org/posts/pKG5fsfrgDSQtssfu/on-taking-ai-risk-seriously Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On taking AI risk seriously, published by Eleni A on March 13, 2023 on The Effective Altruism Forum.Yet another New York Times piece on AI. A non-AI safety friend sent it to me saying "This is the scariest article I've read so far. I'm afraid I haven't been taking it very seriously". I'm noting this because I'm always curious to observe what moves people, what's out there that has the power to change minds. In the past few months, there's been increasing public attention to AI and all sorts of hot and cold takes, e.g., about intelligence, consciousness, sentience, etc. But this might be one of the articles that convey the AI risk message in a language that helps inform and think about AI safety.The following is what stood out to me and made me think that it's time for philosophy of science to also take AI risk seriously and revisit the idea of scientific explanation given the success of deep learning:I cannot emphasize this enough: We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.“If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations,” writes Meghan O’Gieblyn in her brilliant book, “God, Human, Animal, Machine,” “an ‘explanation’ that would be impossible to understand.”Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 13 Mar 2023 15:12:11 +0000 EA - On taking AI risk seriously by Eleni A Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On taking AI risk seriously, published by Eleni A on March 13, 2023 on The Effective Altruism Forum.Yet another New York Times piece on AI. A non-AI safety friend sent it to me saying "This is the scariest article I've read so far. I'm afraid I haven't been taking it very seriously". I'm noting this because I'm always curious to observe what moves people, what's out there that has the power to change minds. In the past few months, there's been increasing public attention to AI and all sorts of hot and cold takes, e.g., about intelligence, consciousness, sentience, etc. But this might be one of the articles that convey the AI risk message in a language that helps inform and think about AI safety.The following is what stood out to me and made me think that it's time for philosophy of science to also take AI risk seriously and revisit the idea of scientific explanation given the success of deep learning:I cannot emphasize this enough: We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.“If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations,” writes Meghan O’Gieblyn in her brilliant book, “God, Human, Animal, Machine,” “an ‘explanation’ that would be impossible to understand.”Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On taking AI risk seriously, published by Eleni A on March 13, 2023 on The Effective Altruism Forum.Yet another New York Times piece on AI. A non-AI safety friend sent it to me saying "This is the scariest article I've read so far. I'm afraid I haven't been taking it very seriously". I'm noting this because I'm always curious to observe what moves people, what's out there that has the power to change minds. In the past few months, there's been increasing public attention to AI and all sorts of hot and cold takes, e.g., about intelligence, consciousness, sentience, etc. But this might be one of the articles that convey the AI risk message in a language that helps inform and think about AI safety.The following is what stood out to me and made me think that it's time for philosophy of science to also take AI risk seriously and revisit the idea of scientific explanation given the success of deep learning:I cannot emphasize this enough: We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.“If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations,” writes Meghan O’Gieblyn in her brilliant book, “God, Human, Animal, Machine,” “an ‘explanation’ that would be impossible to understand.”Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Eleni A https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:46 None full 5207
FmhYMzoevaBqFTGGs_NL_EA_EA EA - How bad a future do ML researchers expect? by Katja Grace Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How bad a future do ML researchers expect?, published by Katja Grace on March 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Katja Grace https://forum.effectivealtruism.org/posts/FmhYMzoevaBqFTGGs/how-bad-a-future-do-ml-researchers-expect Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How bad a future do ML researchers expect?, published by Katja Grace on March 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 13 Mar 2023 12:53:56 +0000 EA - How bad a future do ML researchers expect? by Katja Grace Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How bad a future do ML researchers expect?, published by Katja Grace on March 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How bad a future do ML researchers expect?, published by Katja Grace on March 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Katja Grace https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:27 None full 5206
kxaGNuHqmQqw2xYHW_NL_EA_EA EA - It's not all that simple by Brnr001 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It's not all that simple, published by Brnr001 on March 13, 2023 on The Effective Altruism Forum.TLTR: I feel that recently the EA forum became pretty judgmental and unwelcoming. I also feel that the current discourse about sex misses two important points and, in a huge part of it, lacks maturity and is harmful. Let me attempt to address it. Trigger warning, point 2 involves a long description of personal stories connected to sex, some of them were difficult and may be triggering. It also may not be very well structured, but I preferred to write one long post instead of three short ones.This is obviously a burner account, but when you see those stories you’ll be able to see why. For the record, they don’t involve people from the community. I'm a woman (it's going to matter later on).Acceptable dating and sexual behaviors vary between classes and cultures. The devil is in the detail, and rules you live by and perceive as “obvious” may be so clear for anybody else. Also, the map of the US is not in a shape of geode.People vary in gender and sexual orientation. They vary in a level of sexual desire. They have different kinks, ways of expressing sexuality and levels of self-awareness. Different needs. Various physiological reactions to sexually tense situations. Various ways of presenting themselves when it comes to all of the above.People come from different cultures – regions, countries, social classes and religions. As a result, dating cultures vary around the world. Sexual behaviors also. Acceptable level of flirt, jokes, touch and the way consent is asked for and expressed sometimes just vary. Problems and how i.e. sexism looks like also has various shapes and forms. There are some common characteristics, but details matter, to a huge extent. Many people in the recent discussions stated that various nuances are obvious and should be intuitively followed by everyone. I think it’s problematic and leads to abuse.Believing that your values and behavior associated with your culture and class are the only right ones and everybody should know, understand and follow them, is fundamentally different from assertively vocalizing your boundaries and needs. The second is a great, mature behavior. The first feels a bit elitist, ignorant and has nothing to do with safety, equality and being inclusive.Additionally, I want to draw your attention to one thing. I have a strong belief (correct me if I’m wrong) that the vast majority (if not all) of sexual misconduct causes which were described over the last couple of days in the articles or here, on the forum, come from either US or the UK. EA crowd is definitely not limited to those. So my honest question would be – is it EA who has a problem with sexual misconduct? Or is it an Anglo-Saxon culture which has a problem with sexual misconduct? Or maybe – EA with a mix of Anglo-Saxon culture has this issue? Shouldn’t we zoom in on that a bit?Human sexuality is complex. Consent is also sometimes complex.People often talk a lot of “what consent norms should be”. But often such disputes do not give a full picture of what people’s actual behaviors around consent actually are – and it’s a bit crucial to this whole conversation. If you start having more intimate talks, however, you end up seeing a much more complex and broad picture. And often consent is easier said than done.I encourage you all, regardless what’s your gender, to have those talks with friends, who are open and empathetic. I’ve learned a lot and they made my life easier.Yet, some people may have no opportunity to hear such stories. So let me share, why do I think that consent is not all that easy. I'm going to talk about myself here, because maybe somebody needs to hear somebody being open and vulnerable about stuff like that. My message is - it's ok to sometimes stru...]]>
Brnr001 https://forum.effectivealtruism.org/posts/kxaGNuHqmQqw2xYHW/it-s-not-all-that-simple Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It's not all that simple, published by Brnr001 on March 13, 2023 on The Effective Altruism Forum.TLTR: I feel that recently the EA forum became pretty judgmental and unwelcoming. I also feel that the current discourse about sex misses two important points and, in a huge part of it, lacks maturity and is harmful. Let me attempt to address it. Trigger warning, point 2 involves a long description of personal stories connected to sex, some of them were difficult and may be triggering. It also may not be very well structured, but I preferred to write one long post instead of three short ones.This is obviously a burner account, but when you see those stories you’ll be able to see why. For the record, they don’t involve people from the community. I'm a woman (it's going to matter later on).Acceptable dating and sexual behaviors vary between classes and cultures. The devil is in the detail, and rules you live by and perceive as “obvious” may be so clear for anybody else. Also, the map of the US is not in a shape of geode.People vary in gender and sexual orientation. They vary in a level of sexual desire. They have different kinks, ways of expressing sexuality and levels of self-awareness. Different needs. Various physiological reactions to sexually tense situations. Various ways of presenting themselves when it comes to all of the above.People come from different cultures – regions, countries, social classes and religions. As a result, dating cultures vary around the world. Sexual behaviors also. Acceptable level of flirt, jokes, touch and the way consent is asked for and expressed sometimes just vary. Problems and how i.e. sexism looks like also has various shapes and forms. There are some common characteristics, but details matter, to a huge extent. Many people in the recent discussions stated that various nuances are obvious and should be intuitively followed by everyone. I think it’s problematic and leads to abuse.Believing that your values and behavior associated with your culture and class are the only right ones and everybody should know, understand and follow them, is fundamentally different from assertively vocalizing your boundaries and needs. The second is a great, mature behavior. The first feels a bit elitist, ignorant and has nothing to do with safety, equality and being inclusive.Additionally, I want to draw your attention to one thing. I have a strong belief (correct me if I’m wrong) that the vast majority (if not all) of sexual misconduct causes which were described over the last couple of days in the articles or here, on the forum, come from either US or the UK. EA crowd is definitely not limited to those. So my honest question would be – is it EA who has a problem with sexual misconduct? Or is it an Anglo-Saxon culture which has a problem with sexual misconduct? Or maybe – EA with a mix of Anglo-Saxon culture has this issue? Shouldn’t we zoom in on that a bit?Human sexuality is complex. Consent is also sometimes complex.People often talk a lot of “what consent norms should be”. But often such disputes do not give a full picture of what people’s actual behaviors around consent actually are – and it’s a bit crucial to this whole conversation. If you start having more intimate talks, however, you end up seeing a much more complex and broad picture. And often consent is easier said than done.I encourage you all, regardless what’s your gender, to have those talks with friends, who are open and empathetic. I’ve learned a lot and they made my life easier.Yet, some people may have no opportunity to hear such stories. So let me share, why do I think that consent is not all that easy. I'm going to talk about myself here, because maybe somebody needs to hear somebody being open and vulnerable about stuff like that. My message is - it's ok to sometimes stru...]]>
Mon, 13 Mar 2023 08:32:36 +0000 EA - It's not all that simple by Brnr001 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It's not all that simple, published by Brnr001 on March 13, 2023 on The Effective Altruism Forum.TLTR: I feel that recently the EA forum became pretty judgmental and unwelcoming. I also feel that the current discourse about sex misses two important points and, in a huge part of it, lacks maturity and is harmful. Let me attempt to address it. Trigger warning, point 2 involves a long description of personal stories connected to sex, some of them were difficult and may be triggering. It also may not be very well structured, but I preferred to write one long post instead of three short ones.This is obviously a burner account, but when you see those stories you’ll be able to see why. For the record, they don’t involve people from the community. I'm a woman (it's going to matter later on).Acceptable dating and sexual behaviors vary between classes and cultures. The devil is in the detail, and rules you live by and perceive as “obvious” may be so clear for anybody else. Also, the map of the US is not in a shape of geode.People vary in gender and sexual orientation. They vary in a level of sexual desire. They have different kinks, ways of expressing sexuality and levels of self-awareness. Different needs. Various physiological reactions to sexually tense situations. Various ways of presenting themselves when it comes to all of the above.People come from different cultures – regions, countries, social classes and religions. As a result, dating cultures vary around the world. Sexual behaviors also. Acceptable level of flirt, jokes, touch and the way consent is asked for and expressed sometimes just vary. Problems and how i.e. sexism looks like also has various shapes and forms. There are some common characteristics, but details matter, to a huge extent. Many people in the recent discussions stated that various nuances are obvious and should be intuitively followed by everyone. I think it’s problematic and leads to abuse.Believing that your values and behavior associated with your culture and class are the only right ones and everybody should know, understand and follow them, is fundamentally different from assertively vocalizing your boundaries and needs. The second is a great, mature behavior. The first feels a bit elitist, ignorant and has nothing to do with safety, equality and being inclusive.Additionally, I want to draw your attention to one thing. I have a strong belief (correct me if I’m wrong) that the vast majority (if not all) of sexual misconduct causes which were described over the last couple of days in the articles or here, on the forum, come from either US or the UK. EA crowd is definitely not limited to those. So my honest question would be – is it EA who has a problem with sexual misconduct? Or is it an Anglo-Saxon culture which has a problem with sexual misconduct? Or maybe – EA with a mix of Anglo-Saxon culture has this issue? Shouldn’t we zoom in on that a bit?Human sexuality is complex. Consent is also sometimes complex.People often talk a lot of “what consent norms should be”. But often such disputes do not give a full picture of what people’s actual behaviors around consent actually are – and it’s a bit crucial to this whole conversation. If you start having more intimate talks, however, you end up seeing a much more complex and broad picture. And often consent is easier said than done.I encourage you all, regardless what’s your gender, to have those talks with friends, who are open and empathetic. I’ve learned a lot and they made my life easier.Yet, some people may have no opportunity to hear such stories. So let me share, why do I think that consent is not all that easy. I'm going to talk about myself here, because maybe somebody needs to hear somebody being open and vulnerable about stuff like that. My message is - it's ok to sometimes stru...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It's not all that simple, published by Brnr001 on March 13, 2023 on The Effective Altruism Forum.TLTR: I feel that recently the EA forum became pretty judgmental and unwelcoming. I also feel that the current discourse about sex misses two important points and, in a huge part of it, lacks maturity and is harmful. Let me attempt to address it. Trigger warning, point 2 involves a long description of personal stories connected to sex, some of them were difficult and may be triggering. It also may not be very well structured, but I preferred to write one long post instead of three short ones.This is obviously a burner account, but when you see those stories you’ll be able to see why. For the record, they don’t involve people from the community. I'm a woman (it's going to matter later on).Acceptable dating and sexual behaviors vary between classes and cultures. The devil is in the detail, and rules you live by and perceive as “obvious” may be so clear for anybody else. Also, the map of the US is not in a shape of geode.People vary in gender and sexual orientation. They vary in a level of sexual desire. They have different kinks, ways of expressing sexuality and levels of self-awareness. Different needs. Various physiological reactions to sexually tense situations. Various ways of presenting themselves when it comes to all of the above.People come from different cultures – regions, countries, social classes and religions. As a result, dating cultures vary around the world. Sexual behaviors also. Acceptable level of flirt, jokes, touch and the way consent is asked for and expressed sometimes just vary. Problems and how i.e. sexism looks like also has various shapes and forms. There are some common characteristics, but details matter, to a huge extent. Many people in the recent discussions stated that various nuances are obvious and should be intuitively followed by everyone. I think it’s problematic and leads to abuse.Believing that your values and behavior associated with your culture and class are the only right ones and everybody should know, understand and follow them, is fundamentally different from assertively vocalizing your boundaries and needs. The second is a great, mature behavior. The first feels a bit elitist, ignorant and has nothing to do with safety, equality and being inclusive.Additionally, I want to draw your attention to one thing. I have a strong belief (correct me if I’m wrong) that the vast majority (if not all) of sexual misconduct causes which were described over the last couple of days in the articles or here, on the forum, come from either US or the UK. EA crowd is definitely not limited to those. So my honest question would be – is it EA who has a problem with sexual misconduct? Or is it an Anglo-Saxon culture which has a problem with sexual misconduct? Or maybe – EA with a mix of Anglo-Saxon culture has this issue? Shouldn’t we zoom in on that a bit?Human sexuality is complex. Consent is also sometimes complex.People often talk a lot of “what consent norms should be”. But often such disputes do not give a full picture of what people’s actual behaviors around consent actually are – and it’s a bit crucial to this whole conversation. If you start having more intimate talks, however, you end up seeing a much more complex and broad picture. And often consent is easier said than done.I encourage you all, regardless what’s your gender, to have those talks with friends, who are open and empathetic. I’ve learned a lot and they made my life easier.Yet, some people may have no opportunity to hear such stories. So let me share, why do I think that consent is not all that easy. I'm going to talk about myself here, because maybe somebody needs to hear somebody being open and vulnerable about stuff like that. My message is - it's ok to sometimes stru...]]>
Brnr001 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:02 None full 5203
CurMPLmAzqJZcwFQj_NL_EA_EA EA - Bill prohibiting the use of QALYs in US healthcare decisions? by gordoni Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bill prohibiting the use of QALYs in US healthcare decisions?, published by gordoni on March 12, 2023 on The Effective Altruism Forum.Is anyone familiar with H.R. 485? It has been introduced in the House, but it is not yet law.According to the CRS "This bill prohibits all federal health care programs, including the Federal Employees Health Benefits Program, and federally funded state health care programs (e.g., Medicaid) from using prices that are based on quality-adjusted life years (i.e., measures that discount the value of a life based on disability) to determine relevant thresholds for coverage, reimbursements, or incentive programs".I think the motivation might be to prevent discrimination against people with disabilities, but it seems to me like it goes too far.It seems to me it would prevent the use of QALYs for making decisions such as is a particular cure for blindness worthwhile, and how might it compare to treatments for other diseases and conditions.Is anyone familiar with this bill and able to shed more light on it?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
gordoni https://forum.effectivealtruism.org/posts/CurMPLmAzqJZcwFQj/bill-prohibiting-the-use-of-qalys-in-us-healthcare-decisions-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bill prohibiting the use of QALYs in US healthcare decisions?, published by gordoni on March 12, 2023 on The Effective Altruism Forum.Is anyone familiar with H.R. 485? It has been introduced in the House, but it is not yet law.According to the CRS "This bill prohibits all federal health care programs, including the Federal Employees Health Benefits Program, and federally funded state health care programs (e.g., Medicaid) from using prices that are based on quality-adjusted life years (i.e., measures that discount the value of a life based on disability) to determine relevant thresholds for coverage, reimbursements, or incentive programs".I think the motivation might be to prevent discrimination against people with disabilities, but it seems to me like it goes too far.It seems to me it would prevent the use of QALYs for making decisions such as is a particular cure for blindness worthwhile, and how might it compare to treatments for other diseases and conditions.Is anyone familiar with this bill and able to shed more light on it?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 12 Mar 2023 19:15:22 +0000 EA - Bill prohibiting the use of QALYs in US healthcare decisions? by gordoni Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bill prohibiting the use of QALYs in US healthcare decisions?, published by gordoni on March 12, 2023 on The Effective Altruism Forum.Is anyone familiar with H.R. 485? It has been introduced in the House, but it is not yet law.According to the CRS "This bill prohibits all federal health care programs, including the Federal Employees Health Benefits Program, and federally funded state health care programs (e.g., Medicaid) from using prices that are based on quality-adjusted life years (i.e., measures that discount the value of a life based on disability) to determine relevant thresholds for coverage, reimbursements, or incentive programs".I think the motivation might be to prevent discrimination against people with disabilities, but it seems to me like it goes too far.It seems to me it would prevent the use of QALYs for making decisions such as is a particular cure for blindness worthwhile, and how might it compare to treatments for other diseases and conditions.Is anyone familiar with this bill and able to shed more light on it?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bill prohibiting the use of QALYs in US healthcare decisions?, published by gordoni on March 12, 2023 on The Effective Altruism Forum.Is anyone familiar with H.R. 485? It has been introduced in the House, but it is not yet law.According to the CRS "This bill prohibits all federal health care programs, including the Federal Employees Health Benefits Program, and federally funded state health care programs (e.g., Medicaid) from using prices that are based on quality-adjusted life years (i.e., measures that discount the value of a life based on disability) to determine relevant thresholds for coverage, reimbursements, or incentive programs".I think the motivation might be to prevent discrimination against people with disabilities, but it seems to me like it goes too far.It seems to me it would prevent the use of QALYs for making decisions such as is a particular cure for blindness worthwhile, and how might it compare to treatments for other diseases and conditions.Is anyone familiar with this bill and able to shed more light on it?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
gordoni https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:17 None full 5198
dsG5SYjhPqnxhystM_NL_EA_EA EA - Two directions for research on forecasting and decision making by Paal Fredrik Skjørten Kvarberg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two directions for research on forecasting and decision making, published by Paal Fredrik Skjørten Kvarberg on March 11, 2023 on The Effective Altruism Forum.An assessment of methods to improve individual and institutional decision-making and some ideas for further researchForecasting tournaments have shown that a set of methods for good judgement can be used by organisations to reliably improve the accuracy of individual and group forecasts on a range of questions in several domains. However, such methods are not widely used by individuals, teams or institutions in practical decision making.In what follows, I review findings from forecasting tournaments and some other relevant studies. In light of this research, I identify a set of methods that can be used to improve the accuracy of individuals, teams, or organisations. I then note some limitations of our knowledge of methods for good judgement and identify two obstacles to the wide adoption of these methods to practical decision-making. The two obstacles areCosts. Methods for good judgement can be time-consuming and complicated to use in practical decision-making, and it is unclear how much so. Decision-makers don't know if the gains in accuracy of adopting particular methods outweigh the costs because they don't know the costs.Relevance. Rigorous forecasting questions are not always relevant to the decisions at hand, and it is not always clear to decision-makers if and when they can connect rigorous forecasting questions to important decisions.I look at projects and initiatives to overcome the obstacles, and note two directions for research on forecasting and decision-making that seem particularly promising to me. They areExpected value assessments. Research into the costs of applying specific epistemic methods in decision-making, and assessments of the expected value of applying those practices in various decision-making contexts on various domains (including other values than accuracy). Also development of practices and tools to reduce costs.Quantitative models of relevance and reasoning. Research into ways of modelling the relevance of rigorous forecasting questions to the truth of decision-relevant propositions quantitatively through formal Bayesian networks.After I have introduced these areas of research, I describe how I think that new knowledge on these topics can lead to improvements in the decision-making of individuals and groups.This line of reasoning is inherent in a lot of research that is going on right now, but I still think that research on these topics is neglected. I hope that this text can help to clarify some important research questions and to make it easier for others to orient themselves on forecasting and decision-making. I have added detailed footnotes with references to further literature on most ideas I touch on below.In the future I intend to use the framework developed here to make a series of precise claims about the costs and effects of specific epistemic methods. Most of the claims below are not rigorous enough to be true or false, although some of them might be. Please let me know if any of these claims are incorrect or misleading, or if there is some research that I have missed.Forecasting tournamentsIn a range of domains, such as law, finance, philanthropy, and geopolitical forecasting, the judgments of experts vary a lot, i.e. they are noisy, even in similar and identical cases.In a study on geopolitical forecasting by the renowned decision psychologist Philip Tetlock, seasoned political experts had trouble outperforming “dart-tossing chimpanzees”—random guesses—when it came to predicting global events. Non-experts, eg. “attentive readers of the New York Times” who were curious and open-minded, outperformed the experts, who tended to be overconfident.In a series of...]]>
Paal Fredrik Skjørten Kvarberg https://forum.effectivealtruism.org/posts/dsG5SYjhPqnxhystM/two-directions-for-research-on-forecasting-and-decision Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two directions for research on forecasting and decision making, published by Paal Fredrik Skjørten Kvarberg on March 11, 2023 on The Effective Altruism Forum.An assessment of methods to improve individual and institutional decision-making and some ideas for further researchForecasting tournaments have shown that a set of methods for good judgement can be used by organisations to reliably improve the accuracy of individual and group forecasts on a range of questions in several domains. However, such methods are not widely used by individuals, teams or institutions in practical decision making.In what follows, I review findings from forecasting tournaments and some other relevant studies. In light of this research, I identify a set of methods that can be used to improve the accuracy of individuals, teams, or organisations. I then note some limitations of our knowledge of methods for good judgement and identify two obstacles to the wide adoption of these methods to practical decision-making. The two obstacles areCosts. Methods for good judgement can be time-consuming and complicated to use in practical decision-making, and it is unclear how much so. Decision-makers don't know if the gains in accuracy of adopting particular methods outweigh the costs because they don't know the costs.Relevance. Rigorous forecasting questions are not always relevant to the decisions at hand, and it is not always clear to decision-makers if and when they can connect rigorous forecasting questions to important decisions.I look at projects and initiatives to overcome the obstacles, and note two directions for research on forecasting and decision-making that seem particularly promising to me. They areExpected value assessments. Research into the costs of applying specific epistemic methods in decision-making, and assessments of the expected value of applying those practices in various decision-making contexts on various domains (including other values than accuracy). Also development of practices and tools to reduce costs.Quantitative models of relevance and reasoning. Research into ways of modelling the relevance of rigorous forecasting questions to the truth of decision-relevant propositions quantitatively through formal Bayesian networks.After I have introduced these areas of research, I describe how I think that new knowledge on these topics can lead to improvements in the decision-making of individuals and groups.This line of reasoning is inherent in a lot of research that is going on right now, but I still think that research on these topics is neglected. I hope that this text can help to clarify some important research questions and to make it easier for others to orient themselves on forecasting and decision-making. I have added detailed footnotes with references to further literature on most ideas I touch on below.In the future I intend to use the framework developed here to make a series of precise claims about the costs and effects of specific epistemic methods. Most of the claims below are not rigorous enough to be true or false, although some of them might be. Please let me know if any of these claims are incorrect or misleading, or if there is some research that I have missed.Forecasting tournamentsIn a range of domains, such as law, finance, philanthropy, and geopolitical forecasting, the judgments of experts vary a lot, i.e. they are noisy, even in similar and identical cases.In a study on geopolitical forecasting by the renowned decision psychologist Philip Tetlock, seasoned political experts had trouble outperforming “dart-tossing chimpanzees”—random guesses—when it came to predicting global events. Non-experts, eg. “attentive readers of the New York Times” who were curious and open-minded, outperformed the experts, who tended to be overconfident.In a series of...]]>
Sun, 12 Mar 2023 15:46:41 +0000 EA - Two directions for research on forecasting and decision making by Paal Fredrik Skjørten Kvarberg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two directions for research on forecasting and decision making, published by Paal Fredrik Skjørten Kvarberg on March 11, 2023 on The Effective Altruism Forum.An assessment of methods to improve individual and institutional decision-making and some ideas for further researchForecasting tournaments have shown that a set of methods for good judgement can be used by organisations to reliably improve the accuracy of individual and group forecasts on a range of questions in several domains. However, such methods are not widely used by individuals, teams or institutions in practical decision making.In what follows, I review findings from forecasting tournaments and some other relevant studies. In light of this research, I identify a set of methods that can be used to improve the accuracy of individuals, teams, or organisations. I then note some limitations of our knowledge of methods for good judgement and identify two obstacles to the wide adoption of these methods to practical decision-making. The two obstacles areCosts. Methods for good judgement can be time-consuming and complicated to use in practical decision-making, and it is unclear how much so. Decision-makers don't know if the gains in accuracy of adopting particular methods outweigh the costs because they don't know the costs.Relevance. Rigorous forecasting questions are not always relevant to the decisions at hand, and it is not always clear to decision-makers if and when they can connect rigorous forecasting questions to important decisions.I look at projects and initiatives to overcome the obstacles, and note two directions for research on forecasting and decision-making that seem particularly promising to me. They areExpected value assessments. Research into the costs of applying specific epistemic methods in decision-making, and assessments of the expected value of applying those practices in various decision-making contexts on various domains (including other values than accuracy). Also development of practices and tools to reduce costs.Quantitative models of relevance and reasoning. Research into ways of modelling the relevance of rigorous forecasting questions to the truth of decision-relevant propositions quantitatively through formal Bayesian networks.After I have introduced these areas of research, I describe how I think that new knowledge on these topics can lead to improvements in the decision-making of individuals and groups.This line of reasoning is inherent in a lot of research that is going on right now, but I still think that research on these topics is neglected. I hope that this text can help to clarify some important research questions and to make it easier for others to orient themselves on forecasting and decision-making. I have added detailed footnotes with references to further literature on most ideas I touch on below.In the future I intend to use the framework developed here to make a series of precise claims about the costs and effects of specific epistemic methods. Most of the claims below are not rigorous enough to be true or false, although some of them might be. Please let me know if any of these claims are incorrect or misleading, or if there is some research that I have missed.Forecasting tournamentsIn a range of domains, such as law, finance, philanthropy, and geopolitical forecasting, the judgments of experts vary a lot, i.e. they are noisy, even in similar and identical cases.In a study on geopolitical forecasting by the renowned decision psychologist Philip Tetlock, seasoned political experts had trouble outperforming “dart-tossing chimpanzees”—random guesses—when it came to predicting global events. Non-experts, eg. “attentive readers of the New York Times” who were curious and open-minded, outperformed the experts, who tended to be overconfident.In a series of...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two directions for research on forecasting and decision making, published by Paal Fredrik Skjørten Kvarberg on March 11, 2023 on The Effective Altruism Forum.An assessment of methods to improve individual and institutional decision-making and some ideas for further researchForecasting tournaments have shown that a set of methods for good judgement can be used by organisations to reliably improve the accuracy of individual and group forecasts on a range of questions in several domains. However, such methods are not widely used by individuals, teams or institutions in practical decision making.In what follows, I review findings from forecasting tournaments and some other relevant studies. In light of this research, I identify a set of methods that can be used to improve the accuracy of individuals, teams, or organisations. I then note some limitations of our knowledge of methods for good judgement and identify two obstacles to the wide adoption of these methods to practical decision-making. The two obstacles areCosts. Methods for good judgement can be time-consuming and complicated to use in practical decision-making, and it is unclear how much so. Decision-makers don't know if the gains in accuracy of adopting particular methods outweigh the costs because they don't know the costs.Relevance. Rigorous forecasting questions are not always relevant to the decisions at hand, and it is not always clear to decision-makers if and when they can connect rigorous forecasting questions to important decisions.I look at projects and initiatives to overcome the obstacles, and note two directions for research on forecasting and decision-making that seem particularly promising to me. They areExpected value assessments. Research into the costs of applying specific epistemic methods in decision-making, and assessments of the expected value of applying those practices in various decision-making contexts on various domains (including other values than accuracy). Also development of practices and tools to reduce costs.Quantitative models of relevance and reasoning. Research into ways of modelling the relevance of rigorous forecasting questions to the truth of decision-relevant propositions quantitatively through formal Bayesian networks.After I have introduced these areas of research, I describe how I think that new knowledge on these topics can lead to improvements in the decision-making of individuals and groups.This line of reasoning is inherent in a lot of research that is going on right now, but I still think that research on these topics is neglected. I hope that this text can help to clarify some important research questions and to make it easier for others to orient themselves on forecasting and decision-making. I have added detailed footnotes with references to further literature on most ideas I touch on below.In the future I intend to use the framework developed here to make a series of precise claims about the costs and effects of specific epistemic methods. Most of the claims below are not rigorous enough to be true or false, although some of them might be. Please let me know if any of these claims are incorrect or misleading, or if there is some research that I have missed.Forecasting tournamentsIn a range of domains, such as law, finance, philanthropy, and geopolitical forecasting, the judgments of experts vary a lot, i.e. they are noisy, even in similar and identical cases.In a study on geopolitical forecasting by the renowned decision psychologist Philip Tetlock, seasoned political experts had trouble outperforming “dart-tossing chimpanzees”—random guesses—when it came to predicting global events. Non-experts, eg. “attentive readers of the New York Times” who were curious and open-minded, outperformed the experts, who tended to be overconfident.In a series of...]]>
Paal Fredrik Skjørten Kvarberg https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 52:00 None full 5199
pn5zA5nr6o2tpZF6K_NL_EA_EA EA - [Linkpost] Scott Alexander reacts to OpenAI's latest post by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Scott Alexander reacts to OpenAI's latest post, published by Akash on March 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/pn5zA5nr6o2tpZF6K/linkpost-scott-alexander-reacts-to-openai-s-latest-post Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Scott Alexander reacts to OpenAI's latest post, published by Akash on March 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 12 Mar 2023 09:43:57 +0000 EA - [Linkpost] Scott Alexander reacts to OpenAI's latest post by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Scott Alexander reacts to OpenAI's latest post, published by Akash on March 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Scott Alexander reacts to OpenAI's latest post, published by Akash on March 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:27 None full 5189
sEwyMmY2bu65F9CHJ_NL_EA_EA EA - The Power of Intelligence - The Animation by Writer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Power of Intelligence - The Animation, published by Writer on March 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Writer https://forum.effectivealtruism.org/posts/sEwyMmY2bu65F9CHJ/the-power-of-intelligence-the-animation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Power of Intelligence - The Animation, published by Writer on March 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 12 Mar 2023 01:54:25 +0000 EA - The Power of Intelligence - The Animation by Writer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Power of Intelligence - The Animation, published by Writer on March 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Power of Intelligence - The Animation, published by Writer on March 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Writer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:25 None full 5190
h8TqKJnbtefxdcb6N_NL_EA_EA EA - How my community successfully reduced sexual misconduct by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How my community successfully reduced sexual misconduct, published by titotal on March 11, 2023 on The Effective Altruism Forum.[Content warning: this post contains discussions of sexual misconduct, including assault]In response to the recent articles about sexual misconduct in EA and Rationalism, a lot of discussion has ended up being about around whether the level of misconduct is “worse than average”. I think this is focusing on the wrong thing. EA is a movement that should be striving for excellence. Merely being “average” is not good enough. What matters most is whether EA is the best it could reasonably be, and if not, what changes can be made to fix that.One thing that might help with this is a discussion of success stories. How have other communities and workplaces managed to “beat the average” on this issue? Or substantially improved from a bad place? For this reason I’m going to relay an anecdotal success story below. If you have your own or know of others, I highly encourage you to share it as well.Many, many, years ago, I joined a society for a particular hobby (unrelated to EA), and was active in the society for many, many years. For the sake of anonymity, I’m going to pretend it was the “boardgame club”. It was a large club, with dozens of people showing up each week. The demographics were fairly similar to EA, with a lot of STEM people, a male majority (although it wasn’t that overwhelming), and an openness to unconventional lifestyles such as kink and polyamory.Now, the activity in question wasn’t sexual in nature, but there were a lot of members who were meeting up at the activity meetups for casual and group sex. Over time, this meant that the society gained a reputation as “the club you go to if you want to get laid easily”. Most members, like me, were just there for the boardgames and the friends, but a reasonable amount of people came there for the sex.As it turns out, along with the sex came an acute problem with sexual misconduct, ranging from pushing boundaries on newcomers, to harassment, to sexual assault. I was in the club for several years before I realised this, when one of my friends relayed to me that another one of my friends had sexually assaulted a different friend.One lesson I took from this is that it’s very hard to know the level of sexual misconduct in a place if you aren’t a target. If I was asked to estimate the “base rate” of assault in my community before these revelations, I would have falsely thought it was low. These encounters can be traumatic to recount, and the victims can never be sure who to trust or what the consequences will be for speaking out. I’d like to think I was trustworthy, but how was the victim meant to know that?Eventually enough reports came out that the club leaders were forced to respond. Several policies were implemented, both officially and unofficially.Kick people out.Nobody has a democratic right to be in boardgame club.I think I once saw someone mention “beyond reasonable doubt” when it comes to misconduct allegations. That standard of evidence is extremely high because the accused will be thrown into jail and deprived of their rights. The punishment of “no longer being in boardgame club” does not warrant the same level of evidence. And the costs of keeping a missing stair around are very, very high.Everyone that was accused of assault was banned from the club. Members that engaged in more minor offenses were warned, and kicked out if they didn’t change. To my knowledge, no innocent people were kicked out by mistake (false accusations are rare). I think this made the community a much more pleasant place.2. Protect the newcomersWhen you attend a society for the first time, you do not know what the community norms are. You don’t know if there are avenues to report misconduct. You don’t...]]>
titotal https://forum.effectivealtruism.org/posts/h8TqKJnbtefxdcb6N/how-my-community-successfully-reduced-sexual-misconduct Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How my community successfully reduced sexual misconduct, published by titotal on March 11, 2023 on The Effective Altruism Forum.[Content warning: this post contains discussions of sexual misconduct, including assault]In response to the recent articles about sexual misconduct in EA and Rationalism, a lot of discussion has ended up being about around whether the level of misconduct is “worse than average”. I think this is focusing on the wrong thing. EA is a movement that should be striving for excellence. Merely being “average” is not good enough. What matters most is whether EA is the best it could reasonably be, and if not, what changes can be made to fix that.One thing that might help with this is a discussion of success stories. How have other communities and workplaces managed to “beat the average” on this issue? Or substantially improved from a bad place? For this reason I’m going to relay an anecdotal success story below. If you have your own or know of others, I highly encourage you to share it as well.Many, many, years ago, I joined a society for a particular hobby (unrelated to EA), and was active in the society for many, many years. For the sake of anonymity, I’m going to pretend it was the “boardgame club”. It was a large club, with dozens of people showing up each week. The demographics were fairly similar to EA, with a lot of STEM people, a male majority (although it wasn’t that overwhelming), and an openness to unconventional lifestyles such as kink and polyamory.Now, the activity in question wasn’t sexual in nature, but there were a lot of members who were meeting up at the activity meetups for casual and group sex. Over time, this meant that the society gained a reputation as “the club you go to if you want to get laid easily”. Most members, like me, were just there for the boardgames and the friends, but a reasonable amount of people came there for the sex.As it turns out, along with the sex came an acute problem with sexual misconduct, ranging from pushing boundaries on newcomers, to harassment, to sexual assault. I was in the club for several years before I realised this, when one of my friends relayed to me that another one of my friends had sexually assaulted a different friend.One lesson I took from this is that it’s very hard to know the level of sexual misconduct in a place if you aren’t a target. If I was asked to estimate the “base rate” of assault in my community before these revelations, I would have falsely thought it was low. These encounters can be traumatic to recount, and the victims can never be sure who to trust or what the consequences will be for speaking out. I’d like to think I was trustworthy, but how was the victim meant to know that?Eventually enough reports came out that the club leaders were forced to respond. Several policies were implemented, both officially and unofficially.Kick people out.Nobody has a democratic right to be in boardgame club.I think I once saw someone mention “beyond reasonable doubt” when it comes to misconduct allegations. That standard of evidence is extremely high because the accused will be thrown into jail and deprived of their rights. The punishment of “no longer being in boardgame club” does not warrant the same level of evidence. And the costs of keeping a missing stair around are very, very high.Everyone that was accused of assault was banned from the club. Members that engaged in more minor offenses were warned, and kicked out if they didn’t change. To my knowledge, no innocent people were kicked out by mistake (false accusations are rare). I think this made the community a much more pleasant place.2. Protect the newcomersWhen you attend a society for the first time, you do not know what the community norms are. You don’t know if there are avenues to report misconduct. You don’t...]]>
Sat, 11 Mar 2023 15:01:34 +0000 EA - How my community successfully reduced sexual misconduct by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How my community successfully reduced sexual misconduct, published by titotal on March 11, 2023 on The Effective Altruism Forum.[Content warning: this post contains discussions of sexual misconduct, including assault]In response to the recent articles about sexual misconduct in EA and Rationalism, a lot of discussion has ended up being about around whether the level of misconduct is “worse than average”. I think this is focusing on the wrong thing. EA is a movement that should be striving for excellence. Merely being “average” is not good enough. What matters most is whether EA is the best it could reasonably be, and if not, what changes can be made to fix that.One thing that might help with this is a discussion of success stories. How have other communities and workplaces managed to “beat the average” on this issue? Or substantially improved from a bad place? For this reason I’m going to relay an anecdotal success story below. If you have your own or know of others, I highly encourage you to share it as well.Many, many, years ago, I joined a society for a particular hobby (unrelated to EA), and was active in the society for many, many years. For the sake of anonymity, I’m going to pretend it was the “boardgame club”. It was a large club, with dozens of people showing up each week. The demographics were fairly similar to EA, with a lot of STEM people, a male majority (although it wasn’t that overwhelming), and an openness to unconventional lifestyles such as kink and polyamory.Now, the activity in question wasn’t sexual in nature, but there were a lot of members who were meeting up at the activity meetups for casual and group sex. Over time, this meant that the society gained a reputation as “the club you go to if you want to get laid easily”. Most members, like me, were just there for the boardgames and the friends, but a reasonable amount of people came there for the sex.As it turns out, along with the sex came an acute problem with sexual misconduct, ranging from pushing boundaries on newcomers, to harassment, to sexual assault. I was in the club for several years before I realised this, when one of my friends relayed to me that another one of my friends had sexually assaulted a different friend.One lesson I took from this is that it’s very hard to know the level of sexual misconduct in a place if you aren’t a target. If I was asked to estimate the “base rate” of assault in my community before these revelations, I would have falsely thought it was low. These encounters can be traumatic to recount, and the victims can never be sure who to trust or what the consequences will be for speaking out. I’d like to think I was trustworthy, but how was the victim meant to know that?Eventually enough reports came out that the club leaders were forced to respond. Several policies were implemented, both officially and unofficially.Kick people out.Nobody has a democratic right to be in boardgame club.I think I once saw someone mention “beyond reasonable doubt” when it comes to misconduct allegations. That standard of evidence is extremely high because the accused will be thrown into jail and deprived of their rights. The punishment of “no longer being in boardgame club” does not warrant the same level of evidence. And the costs of keeping a missing stair around are very, very high.Everyone that was accused of assault was banned from the club. Members that engaged in more minor offenses were warned, and kicked out if they didn’t change. To my knowledge, no innocent people were kicked out by mistake (false accusations are rare). I think this made the community a much more pleasant place.2. Protect the newcomersWhen you attend a society for the first time, you do not know what the community norms are. You don’t know if there are avenues to report misconduct. You don’t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How my community successfully reduced sexual misconduct, published by titotal on March 11, 2023 on The Effective Altruism Forum.[Content warning: this post contains discussions of sexual misconduct, including assault]In response to the recent articles about sexual misconduct in EA and Rationalism, a lot of discussion has ended up being about around whether the level of misconduct is “worse than average”. I think this is focusing on the wrong thing. EA is a movement that should be striving for excellence. Merely being “average” is not good enough. What matters most is whether EA is the best it could reasonably be, and if not, what changes can be made to fix that.One thing that might help with this is a discussion of success stories. How have other communities and workplaces managed to “beat the average” on this issue? Or substantially improved from a bad place? For this reason I’m going to relay an anecdotal success story below. If you have your own or know of others, I highly encourage you to share it as well.Many, many, years ago, I joined a society for a particular hobby (unrelated to EA), and was active in the society for many, many years. For the sake of anonymity, I’m going to pretend it was the “boardgame club”. It was a large club, with dozens of people showing up each week. The demographics were fairly similar to EA, with a lot of STEM people, a male majority (although it wasn’t that overwhelming), and an openness to unconventional lifestyles such as kink and polyamory.Now, the activity in question wasn’t sexual in nature, but there were a lot of members who were meeting up at the activity meetups for casual and group sex. Over time, this meant that the society gained a reputation as “the club you go to if you want to get laid easily”. Most members, like me, were just there for the boardgames and the friends, but a reasonable amount of people came there for the sex.As it turns out, along with the sex came an acute problem with sexual misconduct, ranging from pushing boundaries on newcomers, to harassment, to sexual assault. I was in the club for several years before I realised this, when one of my friends relayed to me that another one of my friends had sexually assaulted a different friend.One lesson I took from this is that it’s very hard to know the level of sexual misconduct in a place if you aren’t a target. If I was asked to estimate the “base rate” of assault in my community before these revelations, I would have falsely thought it was low. These encounters can be traumatic to recount, and the victims can never be sure who to trust or what the consequences will be for speaking out. I’d like to think I was trustworthy, but how was the victim meant to know that?Eventually enough reports came out that the club leaders were forced to respond. Several policies were implemented, both officially and unofficially.Kick people out.Nobody has a democratic right to be in boardgame club.I think I once saw someone mention “beyond reasonable doubt” when it comes to misconduct allegations. That standard of evidence is extremely high because the accused will be thrown into jail and deprived of their rights. The punishment of “no longer being in boardgame club” does not warrant the same level of evidence. And the costs of keeping a missing stair around are very, very high.Everyone that was accused of assault was banned from the club. Members that engaged in more minor offenses were warned, and kicked out if they didn’t change. To my knowledge, no innocent people were kicked out by mistake (false accusations are rare). I think this made the community a much more pleasant place.2. Protect the newcomersWhen you attend a society for the first time, you do not know what the community norms are. You don’t know if there are avenues to report misconduct. You don’t...]]>
titotal https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:53 None full 5185
WgziByhhKGDfuEgyy_NL_EA_EA EA - Share the burden by 2ndRichter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Share the burden, published by 2ndRichter on March 11, 2023 on The Effective Altruism Forum.My argument: let’s distribute the burden of correcting and preventing sexual misconduct through collective effort, not letting the burdens and costs fall overwhelmingly on those who have experienced it.CW: sexual assault/harassmentI work at CEA but write this in an individual capacity. Views expressed here are my own, not CEA's.I encourage you to reach out to me individually on Twitter (@emmalrichter) if you want to discuss what I raise in this post. I’d love to engage with the variety of potential responses to what I’ve written and would love to know why you upvote or downvote it.Intro and ContextSome of you already know that I’m a survivor. I was sexually assaulted, harassed, or abused in independent situations at the ages of 16, 17, 18, and 20. I am intentionally open and vocal about what I’ve gone through, including a PTSD diagnosis a few years ago.Recent events in the EA community have reminded me that the mistreatment of people through sexual or romantic means occurs here (as it does everywhere). Last week at EAG, I received a Swapcard message that proposed a non-platonic interaction under the guise of professional interaction. I went to an afterparty where someone I had just met—literally introduced to me moments before—put their hand on the small of my back and grabbed and held onto my arm multiple times. These might seem like minor annoyances, but I have heard and experienced that these kinds of small moments happen often to women in EA. These kinds of experiences undermine my own feelings of comfort and value in the community.This might be anecdata, as some people say, and I know obtaining robust data on these issues has its own challenges. Nonetheless, my experience and those of other women in EA indicate that there’s enough of a problem to consider doing more.I’m writing this post for a few reasons:I want to draw attention to the suffering of women here in the community.I want to convey the costs placed on survivors seeking justice and trying to prevent further harm to others.I want to share just how taxing it can be for survivors to work on these problems on their own, both due to the inherent pain of reliving experiences and the arduousness of most justice processes.Above all, I want to make this request of our community: let’s distribute the burden of correcting and preventing sexual misconduct as fairly as we can, not letting the burdens and costs fall overwhelmingly on those who have experienced it. They have suffered so much already—they have suffered enough. I hope we can be as agentic and proactive in this domain as we strive to be in other areas of study and work.Here are sub-arguments that I’ll explore below:Before placing the burden of explanation on the survivor, we can employ other methods to learn about this constellation of social issues. We can listen to survivors more effectively and incorporate the feedback of those who want to share while also finding other resources to chart paths forward.Good intentions can still lead to negative outcomes. This can apply to both bystanders who refrain from engaging with the subject out of the intention of not making things worse and might also apply to those who perpetrate harmful behaviors (as I discuss in my own experience further down).Why write about the meta-level attitude and approach when I could have written something proposing object-level solutions?Because how we approach finding object-level solutions will affect the quality of those solutions—particularly for those who are most affected by these problems. I don’t feel informed enough to propose institutional reforms or particular policies (though I intend to reflect on these questions and research them). I do feel informed enough t...]]>
2ndRichter https://forum.effectivealtruism.org/posts/WgziByhhKGDfuEgyy/share-the-burden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Share the burden, published by 2ndRichter on March 11, 2023 on The Effective Altruism Forum.My argument: let’s distribute the burden of correcting and preventing sexual misconduct through collective effort, not letting the burdens and costs fall overwhelmingly on those who have experienced it.CW: sexual assault/harassmentI work at CEA but write this in an individual capacity. Views expressed here are my own, not CEA's.I encourage you to reach out to me individually on Twitter (@emmalrichter) if you want to discuss what I raise in this post. I’d love to engage with the variety of potential responses to what I’ve written and would love to know why you upvote or downvote it.Intro and ContextSome of you already know that I’m a survivor. I was sexually assaulted, harassed, or abused in independent situations at the ages of 16, 17, 18, and 20. I am intentionally open and vocal about what I’ve gone through, including a PTSD diagnosis a few years ago.Recent events in the EA community have reminded me that the mistreatment of people through sexual or romantic means occurs here (as it does everywhere). Last week at EAG, I received a Swapcard message that proposed a non-platonic interaction under the guise of professional interaction. I went to an afterparty where someone I had just met—literally introduced to me moments before—put their hand on the small of my back and grabbed and held onto my arm multiple times. These might seem like minor annoyances, but I have heard and experienced that these kinds of small moments happen often to women in EA. These kinds of experiences undermine my own feelings of comfort and value in the community.This might be anecdata, as some people say, and I know obtaining robust data on these issues has its own challenges. Nonetheless, my experience and those of other women in EA indicate that there’s enough of a problem to consider doing more.I’m writing this post for a few reasons:I want to draw attention to the suffering of women here in the community.I want to convey the costs placed on survivors seeking justice and trying to prevent further harm to others.I want to share just how taxing it can be for survivors to work on these problems on their own, both due to the inherent pain of reliving experiences and the arduousness of most justice processes.Above all, I want to make this request of our community: let’s distribute the burden of correcting and preventing sexual misconduct as fairly as we can, not letting the burdens and costs fall overwhelmingly on those who have experienced it. They have suffered so much already—they have suffered enough. I hope we can be as agentic and proactive in this domain as we strive to be in other areas of study and work.Here are sub-arguments that I’ll explore below:Before placing the burden of explanation on the survivor, we can employ other methods to learn about this constellation of social issues. We can listen to survivors more effectively and incorporate the feedback of those who want to share while also finding other resources to chart paths forward.Good intentions can still lead to negative outcomes. This can apply to both bystanders who refrain from engaging with the subject out of the intention of not making things worse and might also apply to those who perpetrate harmful behaviors (as I discuss in my own experience further down).Why write about the meta-level attitude and approach when I could have written something proposing object-level solutions?Because how we approach finding object-level solutions will affect the quality of those solutions—particularly for those who are most affected by these problems. I don’t feel informed enough to propose institutional reforms or particular policies (though I intend to reflect on these questions and research them). I do feel informed enough t...]]>
Sat, 11 Mar 2023 01:46:22 +0000 EA - Share the burden by 2ndRichter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Share the burden, published by 2ndRichter on March 11, 2023 on The Effective Altruism Forum.My argument: let’s distribute the burden of correcting and preventing sexual misconduct through collective effort, not letting the burdens and costs fall overwhelmingly on those who have experienced it.CW: sexual assault/harassmentI work at CEA but write this in an individual capacity. Views expressed here are my own, not CEA's.I encourage you to reach out to me individually on Twitter (@emmalrichter) if you want to discuss what I raise in this post. I’d love to engage with the variety of potential responses to what I’ve written and would love to know why you upvote or downvote it.Intro and ContextSome of you already know that I’m a survivor. I was sexually assaulted, harassed, or abused in independent situations at the ages of 16, 17, 18, and 20. I am intentionally open and vocal about what I’ve gone through, including a PTSD diagnosis a few years ago.Recent events in the EA community have reminded me that the mistreatment of people through sexual or romantic means occurs here (as it does everywhere). Last week at EAG, I received a Swapcard message that proposed a non-platonic interaction under the guise of professional interaction. I went to an afterparty where someone I had just met—literally introduced to me moments before—put their hand on the small of my back and grabbed and held onto my arm multiple times. These might seem like minor annoyances, but I have heard and experienced that these kinds of small moments happen often to women in EA. These kinds of experiences undermine my own feelings of comfort and value in the community.This might be anecdata, as some people say, and I know obtaining robust data on these issues has its own challenges. Nonetheless, my experience and those of other women in EA indicate that there’s enough of a problem to consider doing more.I’m writing this post for a few reasons:I want to draw attention to the suffering of women here in the community.I want to convey the costs placed on survivors seeking justice and trying to prevent further harm to others.I want to share just how taxing it can be for survivors to work on these problems on their own, both due to the inherent pain of reliving experiences and the arduousness of most justice processes.Above all, I want to make this request of our community: let’s distribute the burden of correcting and preventing sexual misconduct as fairly as we can, not letting the burdens and costs fall overwhelmingly on those who have experienced it. They have suffered so much already—they have suffered enough. I hope we can be as agentic and proactive in this domain as we strive to be in other areas of study and work.Here are sub-arguments that I’ll explore below:Before placing the burden of explanation on the survivor, we can employ other methods to learn about this constellation of social issues. We can listen to survivors more effectively and incorporate the feedback of those who want to share while also finding other resources to chart paths forward.Good intentions can still lead to negative outcomes. This can apply to both bystanders who refrain from engaging with the subject out of the intention of not making things worse and might also apply to those who perpetrate harmful behaviors (as I discuss in my own experience further down).Why write about the meta-level attitude and approach when I could have written something proposing object-level solutions?Because how we approach finding object-level solutions will affect the quality of those solutions—particularly for those who are most affected by these problems. I don’t feel informed enough to propose institutional reforms or particular policies (though I intend to reflect on these questions and research them). I do feel informed enough t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Share the burden, published by 2ndRichter on March 11, 2023 on The Effective Altruism Forum.My argument: let’s distribute the burden of correcting and preventing sexual misconduct through collective effort, not letting the burdens and costs fall overwhelmingly on those who have experienced it.CW: sexual assault/harassmentI work at CEA but write this in an individual capacity. Views expressed here are my own, not CEA's.I encourage you to reach out to me individually on Twitter (@emmalrichter) if you want to discuss what I raise in this post. I’d love to engage with the variety of potential responses to what I’ve written and would love to know why you upvote or downvote it.Intro and ContextSome of you already know that I’m a survivor. I was sexually assaulted, harassed, or abused in independent situations at the ages of 16, 17, 18, and 20. I am intentionally open and vocal about what I’ve gone through, including a PTSD diagnosis a few years ago.Recent events in the EA community have reminded me that the mistreatment of people through sexual or romantic means occurs here (as it does everywhere). Last week at EAG, I received a Swapcard message that proposed a non-platonic interaction under the guise of professional interaction. I went to an afterparty where someone I had just met—literally introduced to me moments before—put their hand on the small of my back and grabbed and held onto my arm multiple times. These might seem like minor annoyances, but I have heard and experienced that these kinds of small moments happen often to women in EA. These kinds of experiences undermine my own feelings of comfort and value in the community.This might be anecdata, as some people say, and I know obtaining robust data on these issues has its own challenges. Nonetheless, my experience and those of other women in EA indicate that there’s enough of a problem to consider doing more.I’m writing this post for a few reasons:I want to draw attention to the suffering of women here in the community.I want to convey the costs placed on survivors seeking justice and trying to prevent further harm to others.I want to share just how taxing it can be for survivors to work on these problems on their own, both due to the inherent pain of reliving experiences and the arduousness of most justice processes.Above all, I want to make this request of our community: let’s distribute the burden of correcting and preventing sexual misconduct as fairly as we can, not letting the burdens and costs fall overwhelmingly on those who have experienced it. They have suffered so much already—they have suffered enough. I hope we can be as agentic and proactive in this domain as we strive to be in other areas of study and work.Here are sub-arguments that I’ll explore below:Before placing the burden of explanation on the survivor, we can employ other methods to learn about this constellation of social issues. We can listen to survivors more effectively and incorporate the feedback of those who want to share while also finding other resources to chart paths forward.Good intentions can still lead to negative outcomes. This can apply to both bystanders who refrain from engaging with the subject out of the intention of not making things worse and might also apply to those who perpetrate harmful behaviors (as I discuss in my own experience further down).Why write about the meta-level attitude and approach when I could have written something proposing object-level solutions?Because how we approach finding object-level solutions will affect the quality of those solutions—particularly for those who are most affected by these problems. I don’t feel informed enough to propose institutional reforms or particular policies (though I intend to reflect on these questions and research them). I do feel informed enough t...]]>
2ndRichter https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:20 None full 5181
pebpwzhqsqszxgL84_NL_EA_EA EA - Tyler Johnston on helping farmed animals, consciousness, and being conventionally good by Amber Dawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tyler Johnston on helping farmed animals, consciousness, and being conventionally good, published by Amber Dawn on March 10, 2023 on The Effective Altruism Forum.This post is part of a series of six interviews. As EAs, we want to use our careers or donations to do the most good - but it’s difficult to work out what exactly that looks like for us. I wanted to interview effective altruists working in different fields and on different causes and ask them how they chose their cause area, as well as how they relate to effective altruism and doing good more generally. During the Prague Fall Season residency, I interviewed six EAs in Prague about what they are doing and why they are doing it. I’m grateful to my interviewees for giving their time, and to the organisers of PFS for supporting my visit.I’m currently working as a freelance writer and editor. If you’re interested in hiring me, book a short call or email me at ambace@gmail.com. More info here.Tyler Johnston is an aspiring effective altruist currently based out of Tulsa, Oklahoma. Professionally, he works on corporate campaigns to improve the lives of farmed chickens, and is interested in cause prioritisation, interspecies comparisons, and the suffering of non-humans. He’s also a science-fiction fan and an amateur crossword puzzle constructor.We talked about:his work on The Humane League’s corporate animal welfare campaignshow he became a vegan and animal advocatewhether animals are conscioushow being conventionally good is underratedOn his work at The Humane LeagueAmber: Tell me about what you’re doing.Tyler: I work for The Humane League. We run public awareness campaigns to try to get companies to make commitments to improve the treatment of farmed animals in their supply chains. This strategy first gained traction in 2015, and was immediately really powerful. Since then, it has got a lot of interest from EA funders.Amber: Did The Humane League always do that, or was it doing something else before 2015?Tyler: It was a long journey; The Humane League’s original name was Hugs for PuppiesAmber: Aww, that’s very cute!Tyler: Yeah, I feel like we’d be a more likeable organisation if we were still called that. They started doing demonstrations around issues like fur bans, and other animal welfare issues there was already a lot of energy around. They then switched to focussing on vegan advocacy, which involved things like leafleting, and sharing recipes and resources.Amber: So the strategy at that time then was to encourage people to go vegan, which would lower demand for factory farming, which would mean there were fewer factory-farmed animals?Tyler: That’s right. There was some early evidence that showed this was promising, and it also just made sense to them, since most vegans would attribute their own choice to be vegan to a time in the past when they heard and agreed with the arguments. So they thought, ‘why wouldn’t this export to other people?’Amber: But you said the strategy is different now - it’s to lobby actual food producers to treat the animals that they’re farming better. Say more about that.Tyler: That’s our dominant strategy now, yeah. It’s part of a broader shift in the [animal advocacy] movement toward institutional change rather than individual change. If for some given company, you either have to change the minds of, like, 10 million consumers, or a dozen executive stakeholders - the latter is just a lot more tractable. It started with running small campaigns to persuade companies to source cage-free eggs, and it turned out that this worked. Around 2015 there was a sharp turning point in the number of farmed birds that are cage-free - before 2015, the percentage was growing very slowly, from 3% to 5%, but between 2015 and today, the percentage went up from 5% to 36%. And people attr...]]>
Amber Dawn https://forum.effectivealtruism.org/posts/pebpwzhqsqszxgL84/tyler-johnston-on-helping-farmed-animals-consciousness-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tyler Johnston on helping farmed animals, consciousness, and being conventionally good, published by Amber Dawn on March 10, 2023 on The Effective Altruism Forum.This post is part of a series of six interviews. As EAs, we want to use our careers or donations to do the most good - but it’s difficult to work out what exactly that looks like for us. I wanted to interview effective altruists working in different fields and on different causes and ask them how they chose their cause area, as well as how they relate to effective altruism and doing good more generally. During the Prague Fall Season residency, I interviewed six EAs in Prague about what they are doing and why they are doing it. I’m grateful to my interviewees for giving their time, and to the organisers of PFS for supporting my visit.I’m currently working as a freelance writer and editor. If you’re interested in hiring me, book a short call or email me at ambace@gmail.com. More info here.Tyler Johnston is an aspiring effective altruist currently based out of Tulsa, Oklahoma. Professionally, he works on corporate campaigns to improve the lives of farmed chickens, and is interested in cause prioritisation, interspecies comparisons, and the suffering of non-humans. He’s also a science-fiction fan and an amateur crossword puzzle constructor.We talked about:his work on The Humane League’s corporate animal welfare campaignshow he became a vegan and animal advocatewhether animals are conscioushow being conventionally good is underratedOn his work at The Humane LeagueAmber: Tell me about what you’re doing.Tyler: I work for The Humane League. We run public awareness campaigns to try to get companies to make commitments to improve the treatment of farmed animals in their supply chains. This strategy first gained traction in 2015, and was immediately really powerful. Since then, it has got a lot of interest from EA funders.Amber: Did The Humane League always do that, or was it doing something else before 2015?Tyler: It was a long journey; The Humane League’s original name was Hugs for PuppiesAmber: Aww, that’s very cute!Tyler: Yeah, I feel like we’d be a more likeable organisation if we were still called that. They started doing demonstrations around issues like fur bans, and other animal welfare issues there was already a lot of energy around. They then switched to focussing on vegan advocacy, which involved things like leafleting, and sharing recipes and resources.Amber: So the strategy at that time then was to encourage people to go vegan, which would lower demand for factory farming, which would mean there were fewer factory-farmed animals?Tyler: That’s right. There was some early evidence that showed this was promising, and it also just made sense to them, since most vegans would attribute their own choice to be vegan to a time in the past when they heard and agreed with the arguments. So they thought, ‘why wouldn’t this export to other people?’Amber: But you said the strategy is different now - it’s to lobby actual food producers to treat the animals that they’re farming better. Say more about that.Tyler: That’s our dominant strategy now, yeah. It’s part of a broader shift in the [animal advocacy] movement toward institutional change rather than individual change. If for some given company, you either have to change the minds of, like, 10 million consumers, or a dozen executive stakeholders - the latter is just a lot more tractable. It started with running small campaigns to persuade companies to source cage-free eggs, and it turned out that this worked. Around 2015 there was a sharp turning point in the number of farmed birds that are cage-free - before 2015, the percentage was growing very slowly, from 3% to 5%, but between 2015 and today, the percentage went up from 5% to 36%. And people attr...]]>
Fri, 10 Mar 2023 23:29:18 +0000 EA - Tyler Johnston on helping farmed animals, consciousness, and being conventionally good by Amber Dawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tyler Johnston on helping farmed animals, consciousness, and being conventionally good, published by Amber Dawn on March 10, 2023 on The Effective Altruism Forum.This post is part of a series of six interviews. As EAs, we want to use our careers or donations to do the most good - but it’s difficult to work out what exactly that looks like for us. I wanted to interview effective altruists working in different fields and on different causes and ask them how they chose their cause area, as well as how they relate to effective altruism and doing good more generally. During the Prague Fall Season residency, I interviewed six EAs in Prague about what they are doing and why they are doing it. I’m grateful to my interviewees for giving their time, and to the organisers of PFS for supporting my visit.I’m currently working as a freelance writer and editor. If you’re interested in hiring me, book a short call or email me at ambace@gmail.com. More info here.Tyler Johnston is an aspiring effective altruist currently based out of Tulsa, Oklahoma. Professionally, he works on corporate campaigns to improve the lives of farmed chickens, and is interested in cause prioritisation, interspecies comparisons, and the suffering of non-humans. He’s also a science-fiction fan and an amateur crossword puzzle constructor.We talked about:his work on The Humane League’s corporate animal welfare campaignshow he became a vegan and animal advocatewhether animals are conscioushow being conventionally good is underratedOn his work at The Humane LeagueAmber: Tell me about what you’re doing.Tyler: I work for The Humane League. We run public awareness campaigns to try to get companies to make commitments to improve the treatment of farmed animals in their supply chains. This strategy first gained traction in 2015, and was immediately really powerful. Since then, it has got a lot of interest from EA funders.Amber: Did The Humane League always do that, or was it doing something else before 2015?Tyler: It was a long journey; The Humane League’s original name was Hugs for PuppiesAmber: Aww, that’s very cute!Tyler: Yeah, I feel like we’d be a more likeable organisation if we were still called that. They started doing demonstrations around issues like fur bans, and other animal welfare issues there was already a lot of energy around. They then switched to focussing on vegan advocacy, which involved things like leafleting, and sharing recipes and resources.Amber: So the strategy at that time then was to encourage people to go vegan, which would lower demand for factory farming, which would mean there were fewer factory-farmed animals?Tyler: That’s right. There was some early evidence that showed this was promising, and it also just made sense to them, since most vegans would attribute their own choice to be vegan to a time in the past when they heard and agreed with the arguments. So they thought, ‘why wouldn’t this export to other people?’Amber: But you said the strategy is different now - it’s to lobby actual food producers to treat the animals that they’re farming better. Say more about that.Tyler: That’s our dominant strategy now, yeah. It’s part of a broader shift in the [animal advocacy] movement toward institutional change rather than individual change. If for some given company, you either have to change the minds of, like, 10 million consumers, or a dozen executive stakeholders - the latter is just a lot more tractable. It started with running small campaigns to persuade companies to source cage-free eggs, and it turned out that this worked. Around 2015 there was a sharp turning point in the number of farmed birds that are cage-free - before 2015, the percentage was growing very slowly, from 3% to 5%, but between 2015 and today, the percentage went up from 5% to 36%. And people attr...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tyler Johnston on helping farmed animals, consciousness, and being conventionally good, published by Amber Dawn on March 10, 2023 on The Effective Altruism Forum.This post is part of a series of six interviews. As EAs, we want to use our careers or donations to do the most good - but it’s difficult to work out what exactly that looks like for us. I wanted to interview effective altruists working in different fields and on different causes and ask them how they chose their cause area, as well as how they relate to effective altruism and doing good more generally. During the Prague Fall Season residency, I interviewed six EAs in Prague about what they are doing and why they are doing it. I’m grateful to my interviewees for giving their time, and to the organisers of PFS for supporting my visit.I’m currently working as a freelance writer and editor. If you’re interested in hiring me, book a short call or email me at ambace@gmail.com. More info here.Tyler Johnston is an aspiring effective altruist currently based out of Tulsa, Oklahoma. Professionally, he works on corporate campaigns to improve the lives of farmed chickens, and is interested in cause prioritisation, interspecies comparisons, and the suffering of non-humans. He’s also a science-fiction fan and an amateur crossword puzzle constructor.We talked about:his work on The Humane League’s corporate animal welfare campaignshow he became a vegan and animal advocatewhether animals are conscioushow being conventionally good is underratedOn his work at The Humane LeagueAmber: Tell me about what you’re doing.Tyler: I work for The Humane League. We run public awareness campaigns to try to get companies to make commitments to improve the treatment of farmed animals in their supply chains. This strategy first gained traction in 2015, and was immediately really powerful. Since then, it has got a lot of interest from EA funders.Amber: Did The Humane League always do that, or was it doing something else before 2015?Tyler: It was a long journey; The Humane League’s original name was Hugs for PuppiesAmber: Aww, that’s very cute!Tyler: Yeah, I feel like we’d be a more likeable organisation if we were still called that. They started doing demonstrations around issues like fur bans, and other animal welfare issues there was already a lot of energy around. They then switched to focussing on vegan advocacy, which involved things like leafleting, and sharing recipes and resources.Amber: So the strategy at that time then was to encourage people to go vegan, which would lower demand for factory farming, which would mean there were fewer factory-farmed animals?Tyler: That’s right. There was some early evidence that showed this was promising, and it also just made sense to them, since most vegans would attribute their own choice to be vegan to a time in the past when they heard and agreed with the arguments. So they thought, ‘why wouldn’t this export to other people?’Amber: But you said the strategy is different now - it’s to lobby actual food producers to treat the animals that they’re farming better. Say more about that.Tyler: That’s our dominant strategy now, yeah. It’s part of a broader shift in the [animal advocacy] movement toward institutional change rather than individual change. If for some given company, you either have to change the minds of, like, 10 million consumers, or a dozen executive stakeholders - the latter is just a lot more tractable. It started with running small campaigns to persuade companies to source cage-free eggs, and it turned out that this worked. Around 2015 there was a sharp turning point in the number of farmed birds that are cage-free - before 2015, the percentage was growing very slowly, from 3% to 5%, but between 2015 and today, the percentage went up from 5% to 36%. And people attr...]]>
Amber Dawn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 23:22 None full 5183
gLJBfruDrKQDkbf2b_NL_EA_EA EA - Racial and gender demographics at EA Global in 2022 by Amy Labenz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Racial and gender demographics at EA Global in 2022, published by Amy Labenz on March 10, 2023 on The Effective Altruism Forum.CEA has recently conducted a series of analyses to help us better understand how people of different genders and racial backgrounds experienced EA Global events in 2022 (not including EAGx). In response to some requests (like this one), we wanted to share some preliminary findings.This post is primarily going to report on some summary statistics. We are still investigating pieces of this picture but wanted to get the raw data out fast for others to look at, especially since we suspect this may help shed light on other broad trends within EA.High-level summaryAttendees:33% of registered attendees (and 35% of applicants) at EA Global events in 2022 self-reported as female or non-binary.33% of registered attendees (and 38% of applicants) self-reported as people of color (“POC”).Experiences:Attendees generally find EA Global welcoming (4.51/5 with 1–5 as options) and are likely to recommend it to others (9.03/10 with 0–10 as options).Women and non-binary attendees reported that they find EA Global slightly less welcoming (4.46/5 compared to 4.56/5 for men and 4.51 overall).Otherwise, we found no statistically significant difference in terms of feelings of welcomeness and overall recommendation scores across groups in terms of gender and race/ethnicity.Speakers:43% of speakers and MCs at EA Global events in 2022 were female or non-binary.28% of speakers and MCs were people of color.Some initial takeaways:A more diverse set of people apply to and attend EAG than complete the EA survey.Welcomingness and likelihood to recommend scores for women and POC were very similar to the overall scores.There is a small but statistically significant difference in welcomingness scores for women.We are not sure what to make of the fact that the application stats for POC were higher than the admissions stats. We are currently investigating whether this demographic is newer to EA (our best guess) and if that might be influencing the admissions rate.One update for our team is that women / non-binary speaker stats are higher compared to the applicant pool and this is not the case for POC. We had not realized that prior to conducting this analysis.The 2022 speaker statistics appear to be broadly in line with our statistics since London 2018 when we started tracking. We had significantly less diverse speakers prior to EAG London 2018.Applicants and registrantsFor EA Globals in 2022, our applicant pool was slightly more diverse in terms of race/ethnicity than our attendee pool (38% of applicants were POC vs. 33% of attendees), and around the same in terms of gender (35% of applicants were female or non-binary vs. 33% of attendees).For comparison, our attendee pool has about the same composition in terms of gender as the respondents in the 2020 EA Survey and is more diverse in terms of race/ethnicity than that survey.We had much more racial diversity at EAGx events outside of the US and Europe (e.g. EAGxSingapore, EAGxLatAm, and EAGxIndia, where POC were the majority). Generally, EAGx attendees end up later attending EAGs, so we think the events could result in more attendees from these locations. (However, due to funding constraints and their impact on travel grants, we expect this will not impact EAGs in 2023 as much as it might have otherwise.)Experiences of attendeesOverall, attendees tend to find EA Global welcoming (4.51/5 with 1–5 as options) and are likely to recommend it to others (9.03/10 with 0–10 as options).Women and non-binary attendees reported slightly lower average scores on whether EA Global was “a place where [they] felt welcome” (women and non-binary attendees reported an average score of 4.46/5 vs an average of 4.56/5 for me...]]>
Amy Labenz https://forum.effectivealtruism.org/posts/gLJBfruDrKQDkbf2b/racial-and-gender-demographics-at-ea-global-in-2022-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Racial and gender demographics at EA Global in 2022, published by Amy Labenz on March 10, 2023 on The Effective Altruism Forum.CEA has recently conducted a series of analyses to help us better understand how people of different genders and racial backgrounds experienced EA Global events in 2022 (not including EAGx). In response to some requests (like this one), we wanted to share some preliminary findings.This post is primarily going to report on some summary statistics. We are still investigating pieces of this picture but wanted to get the raw data out fast for others to look at, especially since we suspect this may help shed light on other broad trends within EA.High-level summaryAttendees:33% of registered attendees (and 35% of applicants) at EA Global events in 2022 self-reported as female or non-binary.33% of registered attendees (and 38% of applicants) self-reported as people of color (“POC”).Experiences:Attendees generally find EA Global welcoming (4.51/5 with 1–5 as options) and are likely to recommend it to others (9.03/10 with 0–10 as options).Women and non-binary attendees reported that they find EA Global slightly less welcoming (4.46/5 compared to 4.56/5 for men and 4.51 overall).Otherwise, we found no statistically significant difference in terms of feelings of welcomeness and overall recommendation scores across groups in terms of gender and race/ethnicity.Speakers:43% of speakers and MCs at EA Global events in 2022 were female or non-binary.28% of speakers and MCs were people of color.Some initial takeaways:A more diverse set of people apply to and attend EAG than complete the EA survey.Welcomingness and likelihood to recommend scores for women and POC were very similar to the overall scores.There is a small but statistically significant difference in welcomingness scores for women.We are not sure what to make of the fact that the application stats for POC were higher than the admissions stats. We are currently investigating whether this demographic is newer to EA (our best guess) and if that might be influencing the admissions rate.One update for our team is that women / non-binary speaker stats are higher compared to the applicant pool and this is not the case for POC. We had not realized that prior to conducting this analysis.The 2022 speaker statistics appear to be broadly in line with our statistics since London 2018 when we started tracking. We had significantly less diverse speakers prior to EAG London 2018.Applicants and registrantsFor EA Globals in 2022, our applicant pool was slightly more diverse in terms of race/ethnicity than our attendee pool (38% of applicants were POC vs. 33% of attendees), and around the same in terms of gender (35% of applicants were female or non-binary vs. 33% of attendees).For comparison, our attendee pool has about the same composition in terms of gender as the respondents in the 2020 EA Survey and is more diverse in terms of race/ethnicity than that survey.We had much more racial diversity at EAGx events outside of the US and Europe (e.g. EAGxSingapore, EAGxLatAm, and EAGxIndia, where POC were the majority). Generally, EAGx attendees end up later attending EAGs, so we think the events could result in more attendees from these locations. (However, due to funding constraints and their impact on travel grants, we expect this will not impact EAGs in 2023 as much as it might have otherwise.)Experiences of attendeesOverall, attendees tend to find EA Global welcoming (4.51/5 with 1–5 as options) and are likely to recommend it to others (9.03/10 with 0–10 as options).Women and non-binary attendees reported slightly lower average scores on whether EA Global was “a place where [they] felt welcome” (women and non-binary attendees reported an average score of 4.46/5 vs an average of 4.56/5 for me...]]>
Fri, 10 Mar 2023 14:53:58 +0000 EA - Racial and gender demographics at EA Global in 2022 by Amy Labenz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Racial and gender demographics at EA Global in 2022, published by Amy Labenz on March 10, 2023 on The Effective Altruism Forum.CEA has recently conducted a series of analyses to help us better understand how people of different genders and racial backgrounds experienced EA Global events in 2022 (not including EAGx). In response to some requests (like this one), we wanted to share some preliminary findings.This post is primarily going to report on some summary statistics. We are still investigating pieces of this picture but wanted to get the raw data out fast for others to look at, especially since we suspect this may help shed light on other broad trends within EA.High-level summaryAttendees:33% of registered attendees (and 35% of applicants) at EA Global events in 2022 self-reported as female or non-binary.33% of registered attendees (and 38% of applicants) self-reported as people of color (“POC”).Experiences:Attendees generally find EA Global welcoming (4.51/5 with 1–5 as options) and are likely to recommend it to others (9.03/10 with 0–10 as options).Women and non-binary attendees reported that they find EA Global slightly less welcoming (4.46/5 compared to 4.56/5 for men and 4.51 overall).Otherwise, we found no statistically significant difference in terms of feelings of welcomeness and overall recommendation scores across groups in terms of gender and race/ethnicity.Speakers:43% of speakers and MCs at EA Global events in 2022 were female or non-binary.28% of speakers and MCs were people of color.Some initial takeaways:A more diverse set of people apply to and attend EAG than complete the EA survey.Welcomingness and likelihood to recommend scores for women and POC were very similar to the overall scores.There is a small but statistically significant difference in welcomingness scores for women.We are not sure what to make of the fact that the application stats for POC were higher than the admissions stats. We are currently investigating whether this demographic is newer to EA (our best guess) and if that might be influencing the admissions rate.One update for our team is that women / non-binary speaker stats are higher compared to the applicant pool and this is not the case for POC. We had not realized that prior to conducting this analysis.The 2022 speaker statistics appear to be broadly in line with our statistics since London 2018 when we started tracking. We had significantly less diverse speakers prior to EAG London 2018.Applicants and registrantsFor EA Globals in 2022, our applicant pool was slightly more diverse in terms of race/ethnicity than our attendee pool (38% of applicants were POC vs. 33% of attendees), and around the same in terms of gender (35% of applicants were female or non-binary vs. 33% of attendees).For comparison, our attendee pool has about the same composition in terms of gender as the respondents in the 2020 EA Survey and is more diverse in terms of race/ethnicity than that survey.We had much more racial diversity at EAGx events outside of the US and Europe (e.g. EAGxSingapore, EAGxLatAm, and EAGxIndia, where POC were the majority). Generally, EAGx attendees end up later attending EAGs, so we think the events could result in more attendees from these locations. (However, due to funding constraints and their impact on travel grants, we expect this will not impact EAGs in 2023 as much as it might have otherwise.)Experiences of attendeesOverall, attendees tend to find EA Global welcoming (4.51/5 with 1–5 as options) and are likely to recommend it to others (9.03/10 with 0–10 as options).Women and non-binary attendees reported slightly lower average scores on whether EA Global was “a place where [they] felt welcome” (women and non-binary attendees reported an average score of 4.46/5 vs an average of 4.56/5 for me...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Racial and gender demographics at EA Global in 2022, published by Amy Labenz on March 10, 2023 on The Effective Altruism Forum.CEA has recently conducted a series of analyses to help us better understand how people of different genders and racial backgrounds experienced EA Global events in 2022 (not including EAGx). In response to some requests (like this one), we wanted to share some preliminary findings.This post is primarily going to report on some summary statistics. We are still investigating pieces of this picture but wanted to get the raw data out fast for others to look at, especially since we suspect this may help shed light on other broad trends within EA.High-level summaryAttendees:33% of registered attendees (and 35% of applicants) at EA Global events in 2022 self-reported as female or non-binary.33% of registered attendees (and 38% of applicants) self-reported as people of color (“POC”).Experiences:Attendees generally find EA Global welcoming (4.51/5 with 1–5 as options) and are likely to recommend it to others (9.03/10 with 0–10 as options).Women and non-binary attendees reported that they find EA Global slightly less welcoming (4.46/5 compared to 4.56/5 for men and 4.51 overall).Otherwise, we found no statistically significant difference in terms of feelings of welcomeness and overall recommendation scores across groups in terms of gender and race/ethnicity.Speakers:43% of speakers and MCs at EA Global events in 2022 were female or non-binary.28% of speakers and MCs were people of color.Some initial takeaways:A more diverse set of people apply to and attend EAG than complete the EA survey.Welcomingness and likelihood to recommend scores for women and POC were very similar to the overall scores.There is a small but statistically significant difference in welcomingness scores for women.We are not sure what to make of the fact that the application stats for POC were higher than the admissions stats. We are currently investigating whether this demographic is newer to EA (our best guess) and if that might be influencing the admissions rate.One update for our team is that women / non-binary speaker stats are higher compared to the applicant pool and this is not the case for POC. We had not realized that prior to conducting this analysis.The 2022 speaker statistics appear to be broadly in line with our statistics since London 2018 when we started tracking. We had significantly less diverse speakers prior to EAG London 2018.Applicants and registrantsFor EA Globals in 2022, our applicant pool was slightly more diverse in terms of race/ethnicity than our attendee pool (38% of applicants were POC vs. 33% of attendees), and around the same in terms of gender (35% of applicants were female or non-binary vs. 33% of attendees).For comparison, our attendee pool has about the same composition in terms of gender as the respondents in the 2020 EA Survey and is more diverse in terms of race/ethnicity than that survey.We had much more racial diversity at EAGx events outside of the US and Europe (e.g. EAGxSingapore, EAGxLatAm, and EAGxIndia, where POC were the majority). Generally, EAGx attendees end up later attending EAGs, so we think the events could result in more attendees from these locations. (However, due to funding constraints and their impact on travel grants, we expect this will not impact EAGs in 2023 as much as it might have otherwise.)Experiences of attendeesOverall, attendees tend to find EA Global welcoming (4.51/5 with 1–5 as options) and are likely to recommend it to others (9.03/10 with 0–10 as options).Women and non-binary attendees reported slightly lower average scores on whether EA Global was “a place where [they] felt welcome” (women and non-binary attendees reported an average score of 4.46/5 vs an average of 4.56/5 for me...]]>
Amy Labenz https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:07 None full 5178
TiRPgfG4L8X2jt99g_NL_EA_EA EA - How oral rehydration therapy was developed by Kelsey Piper Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How oral rehydration therapy was developed, published by Kelsey Piper on March 10, 2023 on The Effective Altruism Forum.This is a link post for "Salt, Sugar, Water, Zinc: How Scientists Learned to Treat the 20th Century’s Biggest Killer of Children" in the second issue of Asterisk Magazine, now out. The question it poses is: oral rehydration therapy, which has saved millions of lives a year since it was developed, is very simple. It uses widely available ingredients. Why did it take until the late 1960s to come up with it?There's sort of a two part answer. The first part is that without a solid theoretical understanding of the problem you're trying to solve, it's (at least in this case) ludicrously difficult to solve it empirically: people kept trying variants on this, and they didn't work, because an important parameter was off and they had no idea which direction to correct in.The second is that the incredible simplicity of the modern formula for oral rehydration therapy is the product of a lot of concerted design effort not just to find something that worked against cholera but to find something dead simple which did only require household ingredients and was hard to get wrong. The fact the final solution is so simple isn't because oral rehydration is a simple problem, but because researchers kept on going until they had a sufficiently simple solution.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Kelsey Piper https://forum.effectivealtruism.org/posts/TiRPgfG4L8X2jt99g/how-oral-rehydration-therapy-was-developed Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How oral rehydration therapy was developed, published by Kelsey Piper on March 10, 2023 on The Effective Altruism Forum.This is a link post for "Salt, Sugar, Water, Zinc: How Scientists Learned to Treat the 20th Century’s Biggest Killer of Children" in the second issue of Asterisk Magazine, now out. The question it poses is: oral rehydration therapy, which has saved millions of lives a year since it was developed, is very simple. It uses widely available ingredients. Why did it take until the late 1960s to come up with it?There's sort of a two part answer. The first part is that without a solid theoretical understanding of the problem you're trying to solve, it's (at least in this case) ludicrously difficult to solve it empirically: people kept trying variants on this, and they didn't work, because an important parameter was off and they had no idea which direction to correct in.The second is that the incredible simplicity of the modern formula for oral rehydration therapy is the product of a lot of concerted design effort not just to find something that worked against cholera but to find something dead simple which did only require household ingredients and was hard to get wrong. The fact the final solution is so simple isn't because oral rehydration is a simple problem, but because researchers kept on going until they had a sufficiently simple solution.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 10 Mar 2023 13:21:22 +0000 EA - How oral rehydration therapy was developed by Kelsey Piper Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How oral rehydration therapy was developed, published by Kelsey Piper on March 10, 2023 on The Effective Altruism Forum.This is a link post for "Salt, Sugar, Water, Zinc: How Scientists Learned to Treat the 20th Century’s Biggest Killer of Children" in the second issue of Asterisk Magazine, now out. The question it poses is: oral rehydration therapy, which has saved millions of lives a year since it was developed, is very simple. It uses widely available ingredients. Why did it take until the late 1960s to come up with it?There's sort of a two part answer. The first part is that without a solid theoretical understanding of the problem you're trying to solve, it's (at least in this case) ludicrously difficult to solve it empirically: people kept trying variants on this, and they didn't work, because an important parameter was off and they had no idea which direction to correct in.The second is that the incredible simplicity of the modern formula for oral rehydration therapy is the product of a lot of concerted design effort not just to find something that worked against cholera but to find something dead simple which did only require household ingredients and was hard to get wrong. The fact the final solution is so simple isn't because oral rehydration is a simple problem, but because researchers kept on going until they had a sufficiently simple solution.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How oral rehydration therapy was developed, published by Kelsey Piper on March 10, 2023 on The Effective Altruism Forum.This is a link post for "Salt, Sugar, Water, Zinc: How Scientists Learned to Treat the 20th Century’s Biggest Killer of Children" in the second issue of Asterisk Magazine, now out. The question it poses is: oral rehydration therapy, which has saved millions of lives a year since it was developed, is very simple. It uses widely available ingredients. Why did it take until the late 1960s to come up with it?There's sort of a two part answer. The first part is that without a solid theoretical understanding of the problem you're trying to solve, it's (at least in this case) ludicrously difficult to solve it empirically: people kept trying variants on this, and they didn't work, because an important parameter was off and they had no idea which direction to correct in.The second is that the incredible simplicity of the modern formula for oral rehydration therapy is the product of a lot of concerted design effort not just to find something that worked against cholera but to find something dead simple which did only require household ingredients and was hard to get wrong. The fact the final solution is so simple isn't because oral rehydration is a simple problem, but because researchers kept on going until they had a sufficiently simple solution.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Kelsey Piper https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:31 None full 5179
NZz3Das7jFdCBN9zH_NL_EA_EA EA - Announcing the Open Philanthropy AI Worldviews Contest by Jason Schukraft Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Open Philanthropy AI Worldviews Contest, published by Jason Schukraft on March 10, 2023 on The Effective Altruism Forum.We are pleased to announce the 2023 Open Philanthropy AI Worldviews Contest.The goal of the contest is to surface novel considerations that could influence our views on AI timelines and AI risk. We plan to distribute $225,000 in prize money across six winning entries. This is the same contest we preannounced late last year, which is itself the spiritual successor to the now-defunct Future Fund competition. Part of our hope is that our (much smaller) prizes might encourage people who already started work for the Future Fund competition to share it publicly.The contest deadline is May 31, 2023. All work posted for the first time on or after September 23, 2022 is eligible. Use this form to submit your entry.Prize Conditions and AmountsEssays should address one of these two questions:Question 1: What is the probability that AGI is developed by January 1, 2043?Question 2: Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?Essays should be clearly targeted at one of the questions, not both.Winning essays will be determined by the extent to which they substantively inform the thinking of a panel of Open Phil employees. There are several ways an essay could substantively inform the thinking of a panelist:An essay could cause a panelist to change their central estimate of the probability of AGI by 2043 or the probability of existential catastrophe conditional on AGI by 2070.An essay could cause a panelist to change the shape of their probability distribution for AGI by 2043 or existential catastrophe conditional on AGI by 2070, which could have strategic implications even if it doesn’t alter the panelist’s central estimate.An essay could clarify a concept or identify a crux in a way that made it clearer what further research would be valuable to conduct (even if the essay doesn’t change anybody’s probability distribution or central estimate).We will keep the composition of the panel anonymous to avoid participants targeting their work too closely to the beliefs of any one person. The panel includes representatives from both our Global Health & Wellbeing team and our Longtermism team. Open Phil’s published body of work on AI broadly represents the views of the panel.Panelist credences on the probability of AGI by 2043 range from ~10% to ~45%. Conditional on AGI being developed by 2070, panelist credences on the probability of existential catastrophe range from ~5% to ~50%.We will award a total of six prizes across three tiers:First prize (two awards): $50,000Second prize (two awards): $37,500Third prize (two awards): $25,000EligibilitySubmissions must be original work, published for the first time on or after September 23, 2022 and before 11:59 pm EDT May 31, 2023.All authors must be 18 years or older.Submissions must be written in English.No official word limit — but we expect to find it harder to engage with pieces longer than 5,000 words (not counting footnotes and references).Open Phil employees and their immediate family members are ineligible.The following groups are also ineligible:People who are residing in, or nationals of, Puerto Rico, Quebec, or countries or jurisdictions that prohibit such contests by lawPeople who are specifically sanctioned by the United States or based in a US-sanctioned country (North Korea, Iran, Russia, Myanmar, Afghanistan, Syria, Venezuela, and Cuba at time of writing)You can submit as many entries as you want, but you can only win one prize.Co-authorship is fine.See here for additional details and fine print.SubmissionUse this form to submit your entries. We strongl...]]>
Jason Schukraft https://forum.effectivealtruism.org/posts/NZz3Das7jFdCBN9zH/announcing-the-open-philanthropy-ai-worldviews-contest Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Open Philanthropy AI Worldviews Contest, published by Jason Schukraft on March 10, 2023 on The Effective Altruism Forum.We are pleased to announce the 2023 Open Philanthropy AI Worldviews Contest.The goal of the contest is to surface novel considerations that could influence our views on AI timelines and AI risk. We plan to distribute $225,000 in prize money across six winning entries. This is the same contest we preannounced late last year, which is itself the spiritual successor to the now-defunct Future Fund competition. Part of our hope is that our (much smaller) prizes might encourage people who already started work for the Future Fund competition to share it publicly.The contest deadline is May 31, 2023. All work posted for the first time on or after September 23, 2022 is eligible. Use this form to submit your entry.Prize Conditions and AmountsEssays should address one of these two questions:Question 1: What is the probability that AGI is developed by January 1, 2043?Question 2: Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?Essays should be clearly targeted at one of the questions, not both.Winning essays will be determined by the extent to which they substantively inform the thinking of a panel of Open Phil employees. There are several ways an essay could substantively inform the thinking of a panelist:An essay could cause a panelist to change their central estimate of the probability of AGI by 2043 or the probability of existential catastrophe conditional on AGI by 2070.An essay could cause a panelist to change the shape of their probability distribution for AGI by 2043 or existential catastrophe conditional on AGI by 2070, which could have strategic implications even if it doesn’t alter the panelist’s central estimate.An essay could clarify a concept or identify a crux in a way that made it clearer what further research would be valuable to conduct (even if the essay doesn’t change anybody’s probability distribution or central estimate).We will keep the composition of the panel anonymous to avoid participants targeting their work too closely to the beliefs of any one person. The panel includes representatives from both our Global Health & Wellbeing team and our Longtermism team. Open Phil’s published body of work on AI broadly represents the views of the panel.Panelist credences on the probability of AGI by 2043 range from ~10% to ~45%. Conditional on AGI being developed by 2070, panelist credences on the probability of existential catastrophe range from ~5% to ~50%.We will award a total of six prizes across three tiers:First prize (two awards): $50,000Second prize (two awards): $37,500Third prize (two awards): $25,000EligibilitySubmissions must be original work, published for the first time on or after September 23, 2022 and before 11:59 pm EDT May 31, 2023.All authors must be 18 years or older.Submissions must be written in English.No official word limit — but we expect to find it harder to engage with pieces longer than 5,000 words (not counting footnotes and references).Open Phil employees and their immediate family members are ineligible.The following groups are also ineligible:People who are residing in, or nationals of, Puerto Rico, Quebec, or countries or jurisdictions that prohibit such contests by lawPeople who are specifically sanctioned by the United States or based in a US-sanctioned country (North Korea, Iran, Russia, Myanmar, Afghanistan, Syria, Venezuela, and Cuba at time of writing)You can submit as many entries as you want, but you can only win one prize.Co-authorship is fine.See here for additional details and fine print.SubmissionUse this form to submit your entries. We strongl...]]>
Fri, 10 Mar 2023 04:39:59 +0000 EA - Announcing the Open Philanthropy AI Worldviews Contest by Jason Schukraft Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Open Philanthropy AI Worldviews Contest, published by Jason Schukraft on March 10, 2023 on The Effective Altruism Forum.We are pleased to announce the 2023 Open Philanthropy AI Worldviews Contest.The goal of the contest is to surface novel considerations that could influence our views on AI timelines and AI risk. We plan to distribute $225,000 in prize money across six winning entries. This is the same contest we preannounced late last year, which is itself the spiritual successor to the now-defunct Future Fund competition. Part of our hope is that our (much smaller) prizes might encourage people who already started work for the Future Fund competition to share it publicly.The contest deadline is May 31, 2023. All work posted for the first time on or after September 23, 2022 is eligible. Use this form to submit your entry.Prize Conditions and AmountsEssays should address one of these two questions:Question 1: What is the probability that AGI is developed by January 1, 2043?Question 2: Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?Essays should be clearly targeted at one of the questions, not both.Winning essays will be determined by the extent to which they substantively inform the thinking of a panel of Open Phil employees. There are several ways an essay could substantively inform the thinking of a panelist:An essay could cause a panelist to change their central estimate of the probability of AGI by 2043 or the probability of existential catastrophe conditional on AGI by 2070.An essay could cause a panelist to change the shape of their probability distribution for AGI by 2043 or existential catastrophe conditional on AGI by 2070, which could have strategic implications even if it doesn’t alter the panelist’s central estimate.An essay could clarify a concept or identify a crux in a way that made it clearer what further research would be valuable to conduct (even if the essay doesn’t change anybody’s probability distribution or central estimate).We will keep the composition of the panel anonymous to avoid participants targeting their work too closely to the beliefs of any one person. The panel includes representatives from both our Global Health & Wellbeing team and our Longtermism team. Open Phil’s published body of work on AI broadly represents the views of the panel.Panelist credences on the probability of AGI by 2043 range from ~10% to ~45%. Conditional on AGI being developed by 2070, panelist credences on the probability of existential catastrophe range from ~5% to ~50%.We will award a total of six prizes across three tiers:First prize (two awards): $50,000Second prize (two awards): $37,500Third prize (two awards): $25,000EligibilitySubmissions must be original work, published for the first time on or after September 23, 2022 and before 11:59 pm EDT May 31, 2023.All authors must be 18 years or older.Submissions must be written in English.No official word limit — but we expect to find it harder to engage with pieces longer than 5,000 words (not counting footnotes and references).Open Phil employees and their immediate family members are ineligible.The following groups are also ineligible:People who are residing in, or nationals of, Puerto Rico, Quebec, or countries or jurisdictions that prohibit such contests by lawPeople who are specifically sanctioned by the United States or based in a US-sanctioned country (North Korea, Iran, Russia, Myanmar, Afghanistan, Syria, Venezuela, and Cuba at time of writing)You can submit as many entries as you want, but you can only win one prize.Co-authorship is fine.See here for additional details and fine print.SubmissionUse this form to submit your entries. We strongl...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Open Philanthropy AI Worldviews Contest, published by Jason Schukraft on March 10, 2023 on The Effective Altruism Forum.We are pleased to announce the 2023 Open Philanthropy AI Worldviews Contest.The goal of the contest is to surface novel considerations that could influence our views on AI timelines and AI risk. We plan to distribute $225,000 in prize money across six winning entries. This is the same contest we preannounced late last year, which is itself the spiritual successor to the now-defunct Future Fund competition. Part of our hope is that our (much smaller) prizes might encourage people who already started work for the Future Fund competition to share it publicly.The contest deadline is May 31, 2023. All work posted for the first time on or after September 23, 2022 is eligible. Use this form to submit your entry.Prize Conditions and AmountsEssays should address one of these two questions:Question 1: What is the probability that AGI is developed by January 1, 2043?Question 2: Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?Essays should be clearly targeted at one of the questions, not both.Winning essays will be determined by the extent to which they substantively inform the thinking of a panel of Open Phil employees. There are several ways an essay could substantively inform the thinking of a panelist:An essay could cause a panelist to change their central estimate of the probability of AGI by 2043 or the probability of existential catastrophe conditional on AGI by 2070.An essay could cause a panelist to change the shape of their probability distribution for AGI by 2043 or existential catastrophe conditional on AGI by 2070, which could have strategic implications even if it doesn’t alter the panelist’s central estimate.An essay could clarify a concept or identify a crux in a way that made it clearer what further research would be valuable to conduct (even if the essay doesn’t change anybody’s probability distribution or central estimate).We will keep the composition of the panel anonymous to avoid participants targeting their work too closely to the beliefs of any one person. The panel includes representatives from both our Global Health & Wellbeing team and our Longtermism team. Open Phil’s published body of work on AI broadly represents the views of the panel.Panelist credences on the probability of AGI by 2043 range from ~10% to ~45%. Conditional on AGI being developed by 2070, panelist credences on the probability of existential catastrophe range from ~5% to ~50%.We will award a total of six prizes across three tiers:First prize (two awards): $50,000Second prize (two awards): $37,500Third prize (two awards): $25,000EligibilitySubmissions must be original work, published for the first time on or after September 23, 2022 and before 11:59 pm EDT May 31, 2023.All authors must be 18 years or older.Submissions must be written in English.No official word limit — but we expect to find it harder to engage with pieces longer than 5,000 words (not counting footnotes and references).Open Phil employees and their immediate family members are ineligible.The following groups are also ineligible:People who are residing in, or nationals of, Puerto Rico, Quebec, or countries or jurisdictions that prohibit such contests by lawPeople who are specifically sanctioned by the United States or based in a US-sanctioned country (North Korea, Iran, Russia, Myanmar, Afghanistan, Syria, Venezuela, and Cuba at time of writing)You can submit as many entries as you want, but you can only win one prize.Co-authorship is fine.See here for additional details and fine print.SubmissionUse this form to submit your entries. We strongl...]]>
Jason Schukraft https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:21 None full 5176
ctJ6mnApPZwxHSXRX_NL_EA_EA EA - The Ethics of Posting: Real Names, Pseudonyms, and Burner Accounts by Sarah Levin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Ethics of Posting: Real Names, Pseudonyms, and Burner Accounts, published by Sarah Levin on March 9, 2023 on The Effective Altruism Forum.Recently there’s been debate about the ethics of using burner accounts to make attacks and accusations on this forum. See The number of burner accounts is too damn high and Why People Use Burner Accounts especially. This post is a more systematic discussion of poster identity, reputation, and accountability.Types of AccountsWe can roughly break accounts down into four categories:Real name accounts are accounts under a name that is easily linkable to the poster’s offline identity, such as their legal name. A real name account builds a reputation over time based on its posts. In addition, a real name account’s reputation draws on their offline reputation, and affects their offline reputation in turn.Pseudonymous accounts are accounts which are not easily linkable to the poster’s offline identity, and which the poster maintains over time. A pseudonym builds a reputation over time based on its posts. This reputation is separate from the poster’s offline reputation.Burner accounts are accounts which are intended to be used for a single, transient purpose and then abandoned. They accrue little or no reputation.Anonymous posts are not traceable to a specific identity at all. This forum mostly doesn’t have anonymous posts and so I will not discuss them here.All of these accounts have some legitimate uses. Because of the differences in how these types of accounts operate, readers should evaluate their claims differently, especially when it comes to evaluating claims about the community. Posters should use accounts appropriate for the points they are making, or restrict their claims to those which their account can support.Arguments, Evidence, and AccountabilityWhen it comes to abstract arguments, the content can be evaluated separately from the speaker, so all this stuff can be disregarded. If someone on this forum wants to post a critique of the statistics used in vitamin A supplementation trials, or an argument about moral status of chickens, or something like that, then the poster’s reputation shouldn’t matter much, and so it’s legitimate to post under any type of account. When 4chan solved an open combinatorics problem while discussing a shitpost about anime, mathematicians accepted the proof and published it with credit to "Anonymous 4chan poster". When it comes to abstract arguments, anything goes, except for blatant fuckery like impersonation or sockpuppet voting.If someone wants to claim expertise as part of an argument, then it helps to demonstrate that expertise somehow. If someone says “I’m a professional statistician and your statistical analysis here is nonsense”, then that rightly carries a lot more weight if it’s the real-name account of a professional statistician, or a pseudonymous account with a demonstrable track record on the subject. Burner accounts lack reputation, track records, and credentials, so they can’t legitimately make this move unless they first demonstrate expertise, which is often impractical.Things get trickier when it comes to reporting facts about the social landscape. The poster’s social position is a legitimate input into evaluating such claims. If I start telling everyone about what’s really happening in Oxford board rooms or Berkeley group houses, then it matters a great deal who I am. Am I a veteran who’s been deep inside for years? A visitor who attended a few events last summer? Am I just repeating what I saw in a tweet this morning?Advantages of Real Name AccountsReal name accounts can report on social situations with authority that other types of account can’t legitimately claim, for two reasons. First, their claims are checkable. If I used this pseudonymous account to make a f...]]>
Sarah Levin https://forum.effectivealtruism.org/posts/ctJ6mnApPZwxHSXRX/the-ethics-of-posting-real-names-pseudonyms-and-burner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Ethics of Posting: Real Names, Pseudonyms, and Burner Accounts, published by Sarah Levin on March 9, 2023 on The Effective Altruism Forum.Recently there’s been debate about the ethics of using burner accounts to make attacks and accusations on this forum. See The number of burner accounts is too damn high and Why People Use Burner Accounts especially. This post is a more systematic discussion of poster identity, reputation, and accountability.Types of AccountsWe can roughly break accounts down into four categories:Real name accounts are accounts under a name that is easily linkable to the poster’s offline identity, such as their legal name. A real name account builds a reputation over time based on its posts. In addition, a real name account’s reputation draws on their offline reputation, and affects their offline reputation in turn.Pseudonymous accounts are accounts which are not easily linkable to the poster’s offline identity, and which the poster maintains over time. A pseudonym builds a reputation over time based on its posts. This reputation is separate from the poster’s offline reputation.Burner accounts are accounts which are intended to be used for a single, transient purpose and then abandoned. They accrue little or no reputation.Anonymous posts are not traceable to a specific identity at all. This forum mostly doesn’t have anonymous posts and so I will not discuss them here.All of these accounts have some legitimate uses. Because of the differences in how these types of accounts operate, readers should evaluate their claims differently, especially when it comes to evaluating claims about the community. Posters should use accounts appropriate for the points they are making, or restrict their claims to those which their account can support.Arguments, Evidence, and AccountabilityWhen it comes to abstract arguments, the content can be evaluated separately from the speaker, so all this stuff can be disregarded. If someone on this forum wants to post a critique of the statistics used in vitamin A supplementation trials, or an argument about moral status of chickens, or something like that, then the poster’s reputation shouldn’t matter much, and so it’s legitimate to post under any type of account. When 4chan solved an open combinatorics problem while discussing a shitpost about anime, mathematicians accepted the proof and published it with credit to "Anonymous 4chan poster". When it comes to abstract arguments, anything goes, except for blatant fuckery like impersonation or sockpuppet voting.If someone wants to claim expertise as part of an argument, then it helps to demonstrate that expertise somehow. If someone says “I’m a professional statistician and your statistical analysis here is nonsense”, then that rightly carries a lot more weight if it’s the real-name account of a professional statistician, or a pseudonymous account with a demonstrable track record on the subject. Burner accounts lack reputation, track records, and credentials, so they can’t legitimately make this move unless they first demonstrate expertise, which is often impractical.Things get trickier when it comes to reporting facts about the social landscape. The poster’s social position is a legitimate input into evaluating such claims. If I start telling everyone about what’s really happening in Oxford board rooms or Berkeley group houses, then it matters a great deal who I am. Am I a veteran who’s been deep inside for years? A visitor who attended a few events last summer? Am I just repeating what I saw in a tweet this morning?Advantages of Real Name AccountsReal name accounts can report on social situations with authority that other types of account can’t legitimately claim, for two reasons. First, their claims are checkable. If I used this pseudonymous account to make a f...]]>
Fri, 10 Mar 2023 03:50:12 +0000 EA - The Ethics of Posting: Real Names, Pseudonyms, and Burner Accounts by Sarah Levin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Ethics of Posting: Real Names, Pseudonyms, and Burner Accounts, published by Sarah Levin on March 9, 2023 on The Effective Altruism Forum.Recently there’s been debate about the ethics of using burner accounts to make attacks and accusations on this forum. See The number of burner accounts is too damn high and Why People Use Burner Accounts especially. This post is a more systematic discussion of poster identity, reputation, and accountability.Types of AccountsWe can roughly break accounts down into four categories:Real name accounts are accounts under a name that is easily linkable to the poster’s offline identity, such as their legal name. A real name account builds a reputation over time based on its posts. In addition, a real name account’s reputation draws on their offline reputation, and affects their offline reputation in turn.Pseudonymous accounts are accounts which are not easily linkable to the poster’s offline identity, and which the poster maintains over time. A pseudonym builds a reputation over time based on its posts. This reputation is separate from the poster’s offline reputation.Burner accounts are accounts which are intended to be used for a single, transient purpose and then abandoned. They accrue little or no reputation.Anonymous posts are not traceable to a specific identity at all. This forum mostly doesn’t have anonymous posts and so I will not discuss them here.All of these accounts have some legitimate uses. Because of the differences in how these types of accounts operate, readers should evaluate their claims differently, especially when it comes to evaluating claims about the community. Posters should use accounts appropriate for the points they are making, or restrict their claims to those which their account can support.Arguments, Evidence, and AccountabilityWhen it comes to abstract arguments, the content can be evaluated separately from the speaker, so all this stuff can be disregarded. If someone on this forum wants to post a critique of the statistics used in vitamin A supplementation trials, or an argument about moral status of chickens, or something like that, then the poster’s reputation shouldn’t matter much, and so it’s legitimate to post under any type of account. When 4chan solved an open combinatorics problem while discussing a shitpost about anime, mathematicians accepted the proof and published it with credit to "Anonymous 4chan poster". When it comes to abstract arguments, anything goes, except for blatant fuckery like impersonation or sockpuppet voting.If someone wants to claim expertise as part of an argument, then it helps to demonstrate that expertise somehow. If someone says “I’m a professional statistician and your statistical analysis here is nonsense”, then that rightly carries a lot more weight if it’s the real-name account of a professional statistician, or a pseudonymous account with a demonstrable track record on the subject. Burner accounts lack reputation, track records, and credentials, so they can’t legitimately make this move unless they first demonstrate expertise, which is often impractical.Things get trickier when it comes to reporting facts about the social landscape. The poster’s social position is a legitimate input into evaluating such claims. If I start telling everyone about what’s really happening in Oxford board rooms or Berkeley group houses, then it matters a great deal who I am. Am I a veteran who’s been deep inside for years? A visitor who attended a few events last summer? Am I just repeating what I saw in a tweet this morning?Advantages of Real Name AccountsReal name accounts can report on social situations with authority that other types of account can’t legitimately claim, for two reasons. First, their claims are checkable. If I used this pseudonymous account to make a f...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Ethics of Posting: Real Names, Pseudonyms, and Burner Accounts, published by Sarah Levin on March 9, 2023 on The Effective Altruism Forum.Recently there’s been debate about the ethics of using burner accounts to make attacks and accusations on this forum. See The number of burner accounts is too damn high and Why People Use Burner Accounts especially. This post is a more systematic discussion of poster identity, reputation, and accountability.Types of AccountsWe can roughly break accounts down into four categories:Real name accounts are accounts under a name that is easily linkable to the poster’s offline identity, such as their legal name. A real name account builds a reputation over time based on its posts. In addition, a real name account’s reputation draws on their offline reputation, and affects their offline reputation in turn.Pseudonymous accounts are accounts which are not easily linkable to the poster’s offline identity, and which the poster maintains over time. A pseudonym builds a reputation over time based on its posts. This reputation is separate from the poster’s offline reputation.Burner accounts are accounts which are intended to be used for a single, transient purpose and then abandoned. They accrue little or no reputation.Anonymous posts are not traceable to a specific identity at all. This forum mostly doesn’t have anonymous posts and so I will not discuss them here.All of these accounts have some legitimate uses. Because of the differences in how these types of accounts operate, readers should evaluate their claims differently, especially when it comes to evaluating claims about the community. Posters should use accounts appropriate for the points they are making, or restrict their claims to those which their account can support.Arguments, Evidence, and AccountabilityWhen it comes to abstract arguments, the content can be evaluated separately from the speaker, so all this stuff can be disregarded. If someone on this forum wants to post a critique of the statistics used in vitamin A supplementation trials, or an argument about moral status of chickens, or something like that, then the poster’s reputation shouldn’t matter much, and so it’s legitimate to post under any type of account. When 4chan solved an open combinatorics problem while discussing a shitpost about anime, mathematicians accepted the proof and published it with credit to "Anonymous 4chan poster". When it comes to abstract arguments, anything goes, except for blatant fuckery like impersonation or sockpuppet voting.If someone wants to claim expertise as part of an argument, then it helps to demonstrate that expertise somehow. If someone says “I’m a professional statistician and your statistical analysis here is nonsense”, then that rightly carries a lot more weight if it’s the real-name account of a professional statistician, or a pseudonymous account with a demonstrable track record on the subject. Burner accounts lack reputation, track records, and credentials, so they can’t legitimately make this move unless they first demonstrate expertise, which is often impractical.Things get trickier when it comes to reporting facts about the social landscape. The poster’s social position is a legitimate input into evaluating such claims. If I start telling everyone about what’s really happening in Oxford board rooms or Berkeley group houses, then it matters a great deal who I am. Am I a veteran who’s been deep inside for years? A visitor who attended a few events last summer? Am I just repeating what I saw in a tweet this morning?Advantages of Real Name AccountsReal name accounts can report on social situations with authority that other types of account can’t legitimately claim, for two reasons. First, their claims are checkable. If I used this pseudonymous account to make a f...]]>
Sarah Levin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:16 None full 5177
uGDCaPFaPkuxAowmH_NL_EA_EA EA - Anthropic: Core Views on AI Safety: When, Why, What, and How by jonmenaster Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anthropic: Core Views on AI Safety: When, Why, What, and How, published by jonmenaster on March 9, 2023 on The Effective Altruism Forum.We founded Anthropic because we believe the impact of AI might be comparable to that of the industrial and scientific revolutions, but we aren’t confident it will go well. And we also believe this level of impact could start to arrive soon – perhaps in the coming decade.This view may sound implausible or grandiose, and there are good reasons to be skeptical of it. For one thing, almost everyone who has said “the thing we’re working on might be one of the biggest developments in history” has been wrong, often laughably so. Nevertheless, we believe there is enough evidence to seriously prepare for a world where rapid AI progress leads to transformative AI systems.At Anthropic our motto has been “show, don’t tell”, and we’ve focused on releasing a steady stream of safety-oriented research that we believe has broad value for the AI community. We’re writing this now because as more people have become aware of AI progress, it feels timely to express our own views on this topic and to explain our strategy and goals. In short, we believe that AI safety research is urgently important and should be supported by a wide range of public and private actors.So in this post we will summarize why we believe all this: why we anticipate very rapid AI progress and very large impacts from AI, and how that led us to be concerned about AI safety. We’ll then briefly summarize our own approach to AI safety research and some of the reasoning behind it. We hope by writing this we can contribute to broader discussions about AI safety and AI progress.As a high level summary of the main points in this post:AI will have a very large impact, possibly in the coming decadeRapid and continuing AI progress is a predictable consequence of the exponential increase in computation used to train AI systems, because research on “scaling laws” demonstrates that more computation leads to general improvements in capabilities. Simple extrapolations suggest AI systems will become far more capable in the next decade, possibly equaling or exceeding human level performance at most intellectual tasks. AI progress might slow or halt, but the evidence suggests it will probably continue.We do not know how to train systems to robustly behave wellSo far, no one knows how to train very powerful AI systems to be robustly helpful, honest, and harmless. Furthermore, rapid AI progress will be disruptive to society and may trigger competitive races that could lead corporations or nations to deploy untrustworthy AI systems. The results of this could be catastrophic, either because AI systems strategically pursue dangerous goals, or because these systems make more innocent mistakes in high-stakes situations.We are most optimistic about a multi-faceted, empirically-driven approach to AI safetyWe’re pursuing a variety of research directions with the goal of building reliably safe systems, and are currently most excited about scaling supervision, mechanistic interpretability, process-oriented learning, and understanding and evaluating how AI systems learn and generalize. A key goal of ours is to differentially accelerate this safety work, and to develop a profile of safety research that attempts to cover a wide range of scenarios, from those in which safety challenges turn out to be easy to address to those in which creating safe systems is extremely difficult.Our Rough View on Rapid AI ProgressThe three main ingredients leading to predictable1 improvements in AI performance are training data, computation, and improved algorithms. In the mid-2010s, some of us noticed that larger AI systems were consistently smarter, and so we theorized that the most important ingredient in AI performance m...]]>
jonmenaster https://forum.effectivealtruism.org/posts/uGDCaPFaPkuxAowmH/anthropic-core-views-on-ai-safety-when-why-what-and-how Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anthropic: Core Views on AI Safety: When, Why, What, and How, published by jonmenaster on March 9, 2023 on The Effective Altruism Forum.We founded Anthropic because we believe the impact of AI might be comparable to that of the industrial and scientific revolutions, but we aren’t confident it will go well. And we also believe this level of impact could start to arrive soon – perhaps in the coming decade.This view may sound implausible or grandiose, and there are good reasons to be skeptical of it. For one thing, almost everyone who has said “the thing we’re working on might be one of the biggest developments in history” has been wrong, often laughably so. Nevertheless, we believe there is enough evidence to seriously prepare for a world where rapid AI progress leads to transformative AI systems.At Anthropic our motto has been “show, don’t tell”, and we’ve focused on releasing a steady stream of safety-oriented research that we believe has broad value for the AI community. We’re writing this now because as more people have become aware of AI progress, it feels timely to express our own views on this topic and to explain our strategy and goals. In short, we believe that AI safety research is urgently important and should be supported by a wide range of public and private actors.So in this post we will summarize why we believe all this: why we anticipate very rapid AI progress and very large impacts from AI, and how that led us to be concerned about AI safety. We’ll then briefly summarize our own approach to AI safety research and some of the reasoning behind it. We hope by writing this we can contribute to broader discussions about AI safety and AI progress.As a high level summary of the main points in this post:AI will have a very large impact, possibly in the coming decadeRapid and continuing AI progress is a predictable consequence of the exponential increase in computation used to train AI systems, because research on “scaling laws” demonstrates that more computation leads to general improvements in capabilities. Simple extrapolations suggest AI systems will become far more capable in the next decade, possibly equaling or exceeding human level performance at most intellectual tasks. AI progress might slow or halt, but the evidence suggests it will probably continue.We do not know how to train systems to robustly behave wellSo far, no one knows how to train very powerful AI systems to be robustly helpful, honest, and harmless. Furthermore, rapid AI progress will be disruptive to society and may trigger competitive races that could lead corporations or nations to deploy untrustworthy AI systems. The results of this could be catastrophic, either because AI systems strategically pursue dangerous goals, or because these systems make more innocent mistakes in high-stakes situations.We are most optimistic about a multi-faceted, empirically-driven approach to AI safetyWe’re pursuing a variety of research directions with the goal of building reliably safe systems, and are currently most excited about scaling supervision, mechanistic interpretability, process-oriented learning, and understanding and evaluating how AI systems learn and generalize. A key goal of ours is to differentially accelerate this safety work, and to develop a profile of safety research that attempts to cover a wide range of scenarios, from those in which safety challenges turn out to be easy to address to those in which creating safe systems is extremely difficult.Our Rough View on Rapid AI ProgressThe three main ingredients leading to predictable1 improvements in AI performance are training data, computation, and improved algorithms. In the mid-2010s, some of us noticed that larger AI systems were consistently smarter, and so we theorized that the most important ingredient in AI performance m...]]>
Thu, 09 Mar 2023 19:32:39 +0000 EA - Anthropic: Core Views on AI Safety: When, Why, What, and How by jonmenaster Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anthropic: Core Views on AI Safety: When, Why, What, and How, published by jonmenaster on March 9, 2023 on The Effective Altruism Forum.We founded Anthropic because we believe the impact of AI might be comparable to that of the industrial and scientific revolutions, but we aren’t confident it will go well. And we also believe this level of impact could start to arrive soon – perhaps in the coming decade.This view may sound implausible or grandiose, and there are good reasons to be skeptical of it. For one thing, almost everyone who has said “the thing we’re working on might be one of the biggest developments in history” has been wrong, often laughably so. Nevertheless, we believe there is enough evidence to seriously prepare for a world where rapid AI progress leads to transformative AI systems.At Anthropic our motto has been “show, don’t tell”, and we’ve focused on releasing a steady stream of safety-oriented research that we believe has broad value for the AI community. We’re writing this now because as more people have become aware of AI progress, it feels timely to express our own views on this topic and to explain our strategy and goals. In short, we believe that AI safety research is urgently important and should be supported by a wide range of public and private actors.So in this post we will summarize why we believe all this: why we anticipate very rapid AI progress and very large impacts from AI, and how that led us to be concerned about AI safety. We’ll then briefly summarize our own approach to AI safety research and some of the reasoning behind it. We hope by writing this we can contribute to broader discussions about AI safety and AI progress.As a high level summary of the main points in this post:AI will have a very large impact, possibly in the coming decadeRapid and continuing AI progress is a predictable consequence of the exponential increase in computation used to train AI systems, because research on “scaling laws” demonstrates that more computation leads to general improvements in capabilities. Simple extrapolations suggest AI systems will become far more capable in the next decade, possibly equaling or exceeding human level performance at most intellectual tasks. AI progress might slow or halt, but the evidence suggests it will probably continue.We do not know how to train systems to robustly behave wellSo far, no one knows how to train very powerful AI systems to be robustly helpful, honest, and harmless. Furthermore, rapid AI progress will be disruptive to society and may trigger competitive races that could lead corporations or nations to deploy untrustworthy AI systems. The results of this could be catastrophic, either because AI systems strategically pursue dangerous goals, or because these systems make more innocent mistakes in high-stakes situations.We are most optimistic about a multi-faceted, empirically-driven approach to AI safetyWe’re pursuing a variety of research directions with the goal of building reliably safe systems, and are currently most excited about scaling supervision, mechanistic interpretability, process-oriented learning, and understanding and evaluating how AI systems learn and generalize. A key goal of ours is to differentially accelerate this safety work, and to develop a profile of safety research that attempts to cover a wide range of scenarios, from those in which safety challenges turn out to be easy to address to those in which creating safe systems is extremely difficult.Our Rough View on Rapid AI ProgressThe three main ingredients leading to predictable1 improvements in AI performance are training data, computation, and improved algorithms. In the mid-2010s, some of us noticed that larger AI systems were consistently smarter, and so we theorized that the most important ingredient in AI performance m...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anthropic: Core Views on AI Safety: When, Why, What, and How, published by jonmenaster on March 9, 2023 on The Effective Altruism Forum.We founded Anthropic because we believe the impact of AI might be comparable to that of the industrial and scientific revolutions, but we aren’t confident it will go well. And we also believe this level of impact could start to arrive soon – perhaps in the coming decade.This view may sound implausible or grandiose, and there are good reasons to be skeptical of it. For one thing, almost everyone who has said “the thing we’re working on might be one of the biggest developments in history” has been wrong, often laughably so. Nevertheless, we believe there is enough evidence to seriously prepare for a world where rapid AI progress leads to transformative AI systems.At Anthropic our motto has been “show, don’t tell”, and we’ve focused on releasing a steady stream of safety-oriented research that we believe has broad value for the AI community. We’re writing this now because as more people have become aware of AI progress, it feels timely to express our own views on this topic and to explain our strategy and goals. In short, we believe that AI safety research is urgently important and should be supported by a wide range of public and private actors.So in this post we will summarize why we believe all this: why we anticipate very rapid AI progress and very large impacts from AI, and how that led us to be concerned about AI safety. We’ll then briefly summarize our own approach to AI safety research and some of the reasoning behind it. We hope by writing this we can contribute to broader discussions about AI safety and AI progress.As a high level summary of the main points in this post:AI will have a very large impact, possibly in the coming decadeRapid and continuing AI progress is a predictable consequence of the exponential increase in computation used to train AI systems, because research on “scaling laws” demonstrates that more computation leads to general improvements in capabilities. Simple extrapolations suggest AI systems will become far more capable in the next decade, possibly equaling or exceeding human level performance at most intellectual tasks. AI progress might slow or halt, but the evidence suggests it will probably continue.We do not know how to train systems to robustly behave wellSo far, no one knows how to train very powerful AI systems to be robustly helpful, honest, and harmless. Furthermore, rapid AI progress will be disruptive to society and may trigger competitive races that could lead corporations or nations to deploy untrustworthy AI systems. The results of this could be catastrophic, either because AI systems strategically pursue dangerous goals, or because these systems make more innocent mistakes in high-stakes situations.We are most optimistic about a multi-faceted, empirically-driven approach to AI safetyWe’re pursuing a variety of research directions with the goal of building reliably safe systems, and are currently most excited about scaling supervision, mechanistic interpretability, process-oriented learning, and understanding and evaluating how AI systems learn and generalize. A key goal of ours is to differentially accelerate this safety work, and to develop a profile of safety research that attempts to cover a wide range of scenarios, from those in which safety challenges turn out to be easy to address to those in which creating safe systems is extremely difficult.Our Rough View on Rapid AI ProgressThe three main ingredients leading to predictable1 improvements in AI performance are training data, computation, and improved algorithms. In the mid-2010s, some of us noticed that larger AI systems were consistently smarter, and so we theorized that the most important ingredient in AI performance m...]]>
jonmenaster https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 36:52 None full 5169
ewroS7tsqhTsstJ44_NL_EA_EA EA - A Windfall Clause for CEO could worsen AI race dynamics by Larks Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Windfall Clause for CEO could worsen AI race dynamics, published by Larks on March 9, 2023 on The Effective Altruism Forum.SummaryThis is a response to the Windfall Clause proposal from Cullen O’Keefe et al, which aims to make AI firms promise to donate a large fraction of profits if they become extremely profitable. While I appreciate their valiant attempt to produce a policy recommendation that might help, I am worried about the practical effects.In this article I argue that the Clause would primarily benefit the management of these firms, resulting in an increased concentration of effective wealth/power relative to a counterfactual where traditional corporate governance was used. This could make AI race dynamics worse and increase existential risk from AI.What is the Windfall Clause?The Clause operates by getting firms now to sign up to donate a large fraction of their profits for the benefit of humanity if those profits become very large. The idea is that, right now, profits are not very large, so this appears a ‘cheap’ commitment in the short term. In the future, if the firm becomes very successful, they are required to donate an increasing fraction.This is an example structure from O’Keefe document:Many other possible structures exist with similar effects. As an extreme example, you could require all profits above a certain level to be donated.Typical Corporate GovernanceThe purpose of a typical corporation is to make profits. Under standard corporate governance, CEOs are given fairly broad latitude to make business decisions. They can determine strategy, decide on new products and pricing, alter their workforces and so on with limited restrictions. If the company fails to make profits, the share price will fall, and it might be subject to a hostile takeover from another firm which thinks it can use the assets more wisely. Additionally, in the meantime the CEO’s compensation is likely to fall due to missed incentive pay.The board also supplies oversight. They will be consulted with on major decisions, and their consent is required for irreversible ones (e.g. a major acquisition or change of strategy). The auditor will report to them so they can keep apprised of the financial state of the company.The amount of money the CEO can spend without oversight is quite limited. Most of the firm’s revenue probably goes to expenses; of the profits, the board will exercise oversight into decisions around dividends, buybacks and major acquisitions. The CEO will have more discretion over capital expenditures, but even then the board will have a say on the total size and general strategy, and all capex will be expected to follow the north star of future profitability. A founder-CEO might retain some non-trivial economic interest in the profits (say 10% if it was a small founding team and they grew rapidly with limited need for outside capital), which is truely theirs to spend as they wish; a hired CEO would have much less.How does the clause change this?In contrast, the clause appears to give a lot more latitude to management of a successful AGI firm.Some of the typical constraints remain. The firm must still pay its suppliers, and continue to produce goods and services that others value enough to pay for them more than they cost to produce. Operating decisions will remain judged by profitability, and the board will continue to have oversight over major decisions.However, a huge amount of profit is effectively transferred from third party investors to the CEO or management team. They go from probably a few percent of the profits to spend as they wish to controlling the distribution of perhaps half. Additionally, some of this is likely tax deductible.It is true that the CEO couldn’t spend the money on personal yachts and the like. However, extremely rich peopl...]]>
Larks https://forum.effectivealtruism.org/posts/ewroS7tsqhTsstJ44/a-windfall-clause-for-ceo-could-worsen-ai-race-dynamics Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Windfall Clause for CEO could worsen AI race dynamics, published by Larks on March 9, 2023 on The Effective Altruism Forum.SummaryThis is a response to the Windfall Clause proposal from Cullen O’Keefe et al, which aims to make AI firms promise to donate a large fraction of profits if they become extremely profitable. While I appreciate their valiant attempt to produce a policy recommendation that might help, I am worried about the practical effects.In this article I argue that the Clause would primarily benefit the management of these firms, resulting in an increased concentration of effective wealth/power relative to a counterfactual where traditional corporate governance was used. This could make AI race dynamics worse and increase existential risk from AI.What is the Windfall Clause?The Clause operates by getting firms now to sign up to donate a large fraction of their profits for the benefit of humanity if those profits become very large. The idea is that, right now, profits are not very large, so this appears a ‘cheap’ commitment in the short term. In the future, if the firm becomes very successful, they are required to donate an increasing fraction.This is an example structure from O’Keefe document:Many other possible structures exist with similar effects. As an extreme example, you could require all profits above a certain level to be donated.Typical Corporate GovernanceThe purpose of a typical corporation is to make profits. Under standard corporate governance, CEOs are given fairly broad latitude to make business decisions. They can determine strategy, decide on new products and pricing, alter their workforces and so on with limited restrictions. If the company fails to make profits, the share price will fall, and it might be subject to a hostile takeover from another firm which thinks it can use the assets more wisely. Additionally, in the meantime the CEO’s compensation is likely to fall due to missed incentive pay.The board also supplies oversight. They will be consulted with on major decisions, and their consent is required for irreversible ones (e.g. a major acquisition or change of strategy). The auditor will report to them so they can keep apprised of the financial state of the company.The amount of money the CEO can spend without oversight is quite limited. Most of the firm’s revenue probably goes to expenses; of the profits, the board will exercise oversight into decisions around dividends, buybacks and major acquisitions. The CEO will have more discretion over capital expenditures, but even then the board will have a say on the total size and general strategy, and all capex will be expected to follow the north star of future profitability. A founder-CEO might retain some non-trivial economic interest in the profits (say 10% if it was a small founding team and they grew rapidly with limited need for outside capital), which is truely theirs to spend as they wish; a hired CEO would have much less.How does the clause change this?In contrast, the clause appears to give a lot more latitude to management of a successful AGI firm.Some of the typical constraints remain. The firm must still pay its suppliers, and continue to produce goods and services that others value enough to pay for them more than they cost to produce. Operating decisions will remain judged by profitability, and the board will continue to have oversight over major decisions.However, a huge amount of profit is effectively transferred from third party investors to the CEO or management team. They go from probably a few percent of the profits to spend as they wish to controlling the distribution of perhaps half. Additionally, some of this is likely tax deductible.It is true that the CEO couldn’t spend the money on personal yachts and the like. However, extremely rich peopl...]]>
Thu, 09 Mar 2023 18:31:57 +0000 EA - A Windfall Clause for CEO could worsen AI race dynamics by Larks Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Windfall Clause for CEO could worsen AI race dynamics, published by Larks on March 9, 2023 on The Effective Altruism Forum.SummaryThis is a response to the Windfall Clause proposal from Cullen O’Keefe et al, which aims to make AI firms promise to donate a large fraction of profits if they become extremely profitable. While I appreciate their valiant attempt to produce a policy recommendation that might help, I am worried about the practical effects.In this article I argue that the Clause would primarily benefit the management of these firms, resulting in an increased concentration of effective wealth/power relative to a counterfactual where traditional corporate governance was used. This could make AI race dynamics worse and increase existential risk from AI.What is the Windfall Clause?The Clause operates by getting firms now to sign up to donate a large fraction of their profits for the benefit of humanity if those profits become very large. The idea is that, right now, profits are not very large, so this appears a ‘cheap’ commitment in the short term. In the future, if the firm becomes very successful, they are required to donate an increasing fraction.This is an example structure from O’Keefe document:Many other possible structures exist with similar effects. As an extreme example, you could require all profits above a certain level to be donated.Typical Corporate GovernanceThe purpose of a typical corporation is to make profits. Under standard corporate governance, CEOs are given fairly broad latitude to make business decisions. They can determine strategy, decide on new products and pricing, alter their workforces and so on with limited restrictions. If the company fails to make profits, the share price will fall, and it might be subject to a hostile takeover from another firm which thinks it can use the assets more wisely. Additionally, in the meantime the CEO’s compensation is likely to fall due to missed incentive pay.The board also supplies oversight. They will be consulted with on major decisions, and their consent is required for irreversible ones (e.g. a major acquisition or change of strategy). The auditor will report to them so they can keep apprised of the financial state of the company.The amount of money the CEO can spend without oversight is quite limited. Most of the firm’s revenue probably goes to expenses; of the profits, the board will exercise oversight into decisions around dividends, buybacks and major acquisitions. The CEO will have more discretion over capital expenditures, but even then the board will have a say on the total size and general strategy, and all capex will be expected to follow the north star of future profitability. A founder-CEO might retain some non-trivial economic interest in the profits (say 10% if it was a small founding team and they grew rapidly with limited need for outside capital), which is truely theirs to spend as they wish; a hired CEO would have much less.How does the clause change this?In contrast, the clause appears to give a lot more latitude to management of a successful AGI firm.Some of the typical constraints remain. The firm must still pay its suppliers, and continue to produce goods and services that others value enough to pay for them more than they cost to produce. Operating decisions will remain judged by profitability, and the board will continue to have oversight over major decisions.However, a huge amount of profit is effectively transferred from third party investors to the CEO or management team. They go from probably a few percent of the profits to spend as they wish to controlling the distribution of perhaps half. Additionally, some of this is likely tax deductible.It is true that the CEO couldn’t spend the money on personal yachts and the like. However, extremely rich peopl...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Windfall Clause for CEO could worsen AI race dynamics, published by Larks on March 9, 2023 on The Effective Altruism Forum.SummaryThis is a response to the Windfall Clause proposal from Cullen O’Keefe et al, which aims to make AI firms promise to donate a large fraction of profits if they become extremely profitable. While I appreciate their valiant attempt to produce a policy recommendation that might help, I am worried about the practical effects.In this article I argue that the Clause would primarily benefit the management of these firms, resulting in an increased concentration of effective wealth/power relative to a counterfactual where traditional corporate governance was used. This could make AI race dynamics worse and increase existential risk from AI.What is the Windfall Clause?The Clause operates by getting firms now to sign up to donate a large fraction of their profits for the benefit of humanity if those profits become very large. The idea is that, right now, profits are not very large, so this appears a ‘cheap’ commitment in the short term. In the future, if the firm becomes very successful, they are required to donate an increasing fraction.This is an example structure from O’Keefe document:Many other possible structures exist with similar effects. As an extreme example, you could require all profits above a certain level to be donated.Typical Corporate GovernanceThe purpose of a typical corporation is to make profits. Under standard corporate governance, CEOs are given fairly broad latitude to make business decisions. They can determine strategy, decide on new products and pricing, alter their workforces and so on with limited restrictions. If the company fails to make profits, the share price will fall, and it might be subject to a hostile takeover from another firm which thinks it can use the assets more wisely. Additionally, in the meantime the CEO’s compensation is likely to fall due to missed incentive pay.The board also supplies oversight. They will be consulted with on major decisions, and their consent is required for irreversible ones (e.g. a major acquisition or change of strategy). The auditor will report to them so they can keep apprised of the financial state of the company.The amount of money the CEO can spend without oversight is quite limited. Most of the firm’s revenue probably goes to expenses; of the profits, the board will exercise oversight into decisions around dividends, buybacks and major acquisitions. The CEO will have more discretion over capital expenditures, but even then the board will have a say on the total size and general strategy, and all capex will be expected to follow the north star of future profitability. A founder-CEO might retain some non-trivial economic interest in the profits (say 10% if it was a small founding team and they grew rapidly with limited need for outside capital), which is truely theirs to spend as they wish; a hired CEO would have much less.How does the clause change this?In contrast, the clause appears to give a lot more latitude to management of a successful AGI firm.Some of the typical constraints remain. The firm must still pay its suppliers, and continue to produce goods and services that others value enough to pay for them more than they cost to produce. Operating decisions will remain judged by profitability, and the board will continue to have oversight over major decisions.However, a huge amount of profit is effectively transferred from third party investors to the CEO or management team. They go from probably a few percent of the profits to spend as they wish to controlling the distribution of perhaps half. Additionally, some of this is likely tax deductible.It is true that the CEO couldn’t spend the money on personal yachts and the like. However, extremely rich peopl...]]>
Larks https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:03 None full 5168
fqXLT7NHZGsLmjH4o_NL_EA_EA EA - Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public by Otto Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public, published by Otto on March 9, 2023 on The Effective Altruism Forum.This is a summary of the following paper by Alexia Georgiadis (Existential Risk Observatory):Thanks to @Lara Mani, @Karl von Wendt, and Alexia Georgiadis for their help in reviewing and writing this post. Any views expressed in this post are not necessarily theirs.The rapid development of artificial intelligence (AI) has evoked both positive and negative sentiments due to its immense potential and the inherent risks associated with its evolution. There are growing concerns that if AI surpasses human intelligence and is not aligned with human values, it may pose significant harm and even lead to the end of humanity. However, the general publics' knowledge of these risks is limited. As advocates for minimising existential threats, the Existential Risk Observatory believes it is imperative to educate the public on the potential risks of AI. Our introductory post outlines some of the reasons why we hold this view (this post is also relevant). To increase public awareness of AI's existential risk, effective communication strategies are necessary.This research aims to assess the effectiveness of communication interventions currently being used to increase awareness about AI existential risk, namely news publications and videos. To this end, we conducted surveys to evaluate the impact of these interventions on raising awareness among participants.MethodologyThis research aims to assess the effectiveness of different media interventions, specifically news articles and videos, in promoting awareness of the potential dangers of AI and its possible impact on human extinction. It analyses the impact of AI existential risk communication strategies on the awareness of the American and Dutch populations, and investigates how social indicators such as age, gender, education level, country of residence, and field of work affect the effectiveness of AI existential risk communication.The study employs a pre-post design, which involves administering the same intervention and assessment to all participants and measuring their responses at two points in time. The research utilises a survey method for collecting data, which was administered to participants through an online Google Forms application. The survey consists of three sections: pre-test questions, the intervention, and post-test questions.The effectiveness of AI existential risk communication is measured by comparing the results of quantitative questions from the pre-test and post-test sections, and the answers to the open-ended questions provide further understanding of any changes in the participant's perspective. The research measures the effectiveness of the media interventions by using two main indicators: "Human Extinction Events" and "Human Extinction Percentage."The "Human Extinction Events" indicator asks participants to rank the events that they believe could cause human extinction in the next century, and the research considers it as effective if participants rank AI higher post intervention or mention it after the treatment when they did not mention it before. If the placement of AI remained the same before and after the treatment, or if participants did not mention AI before or after the treatment, the research considered that there was no effect in raising awareness.The "Human Extinction Percentage" indicator asks for the participants' opinion on the likelihood, in percentage, of human extinction caused by AI in the next century. If there was an increase in the percentage of likelihood given by participants, this research considered that there was an effect in raising awareness. If there is no change or a decrease in the percentage, this r...]]>
Otto https://forum.effectivealtruism.org/posts/fqXLT7NHZGsLmjH4o/paper-summary-the-effectiveness-of-ai-existential-risk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public, published by Otto on March 9, 2023 on The Effective Altruism Forum.This is a summary of the following paper by Alexia Georgiadis (Existential Risk Observatory):Thanks to @Lara Mani, @Karl von Wendt, and Alexia Georgiadis for their help in reviewing and writing this post. Any views expressed in this post are not necessarily theirs.The rapid development of artificial intelligence (AI) has evoked both positive and negative sentiments due to its immense potential and the inherent risks associated with its evolution. There are growing concerns that if AI surpasses human intelligence and is not aligned with human values, it may pose significant harm and even lead to the end of humanity. However, the general publics' knowledge of these risks is limited. As advocates for minimising existential threats, the Existential Risk Observatory believes it is imperative to educate the public on the potential risks of AI. Our introductory post outlines some of the reasons why we hold this view (this post is also relevant). To increase public awareness of AI's existential risk, effective communication strategies are necessary.This research aims to assess the effectiveness of communication interventions currently being used to increase awareness about AI existential risk, namely news publications and videos. To this end, we conducted surveys to evaluate the impact of these interventions on raising awareness among participants.MethodologyThis research aims to assess the effectiveness of different media interventions, specifically news articles and videos, in promoting awareness of the potential dangers of AI and its possible impact on human extinction. It analyses the impact of AI existential risk communication strategies on the awareness of the American and Dutch populations, and investigates how social indicators such as age, gender, education level, country of residence, and field of work affect the effectiveness of AI existential risk communication.The study employs a pre-post design, which involves administering the same intervention and assessment to all participants and measuring their responses at two points in time. The research utilises a survey method for collecting data, which was administered to participants through an online Google Forms application. The survey consists of three sections: pre-test questions, the intervention, and post-test questions.The effectiveness of AI existential risk communication is measured by comparing the results of quantitative questions from the pre-test and post-test sections, and the answers to the open-ended questions provide further understanding of any changes in the participant's perspective. The research measures the effectiveness of the media interventions by using two main indicators: "Human Extinction Events" and "Human Extinction Percentage."The "Human Extinction Events" indicator asks participants to rank the events that they believe could cause human extinction in the next century, and the research considers it as effective if participants rank AI higher post intervention or mention it after the treatment when they did not mention it before. If the placement of AI remained the same before and after the treatment, or if participants did not mention AI before or after the treatment, the research considered that there was no effect in raising awareness.The "Human Extinction Percentage" indicator asks for the participants' opinion on the likelihood, in percentage, of human extinction caused by AI in the next century. If there was an increase in the percentage of likelihood given by participants, this research considered that there was an effect in raising awareness. If there is no change or a decrease in the percentage, this r...]]>
Thu, 09 Mar 2023 17:18:43 +0000 EA - Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public by Otto Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public, published by Otto on March 9, 2023 on The Effective Altruism Forum.This is a summary of the following paper by Alexia Georgiadis (Existential Risk Observatory):Thanks to @Lara Mani, @Karl von Wendt, and Alexia Georgiadis for their help in reviewing and writing this post. Any views expressed in this post are not necessarily theirs.The rapid development of artificial intelligence (AI) has evoked both positive and negative sentiments due to its immense potential and the inherent risks associated with its evolution. There are growing concerns that if AI surpasses human intelligence and is not aligned with human values, it may pose significant harm and even lead to the end of humanity. However, the general publics' knowledge of these risks is limited. As advocates for minimising existential threats, the Existential Risk Observatory believes it is imperative to educate the public on the potential risks of AI. Our introductory post outlines some of the reasons why we hold this view (this post is also relevant). To increase public awareness of AI's existential risk, effective communication strategies are necessary.This research aims to assess the effectiveness of communication interventions currently being used to increase awareness about AI existential risk, namely news publications and videos. To this end, we conducted surveys to evaluate the impact of these interventions on raising awareness among participants.MethodologyThis research aims to assess the effectiveness of different media interventions, specifically news articles and videos, in promoting awareness of the potential dangers of AI and its possible impact on human extinction. It analyses the impact of AI existential risk communication strategies on the awareness of the American and Dutch populations, and investigates how social indicators such as age, gender, education level, country of residence, and field of work affect the effectiveness of AI existential risk communication.The study employs a pre-post design, which involves administering the same intervention and assessment to all participants and measuring their responses at two points in time. The research utilises a survey method for collecting data, which was administered to participants through an online Google Forms application. The survey consists of three sections: pre-test questions, the intervention, and post-test questions.The effectiveness of AI existential risk communication is measured by comparing the results of quantitative questions from the pre-test and post-test sections, and the answers to the open-ended questions provide further understanding of any changes in the participant's perspective. The research measures the effectiveness of the media interventions by using two main indicators: "Human Extinction Events" and "Human Extinction Percentage."The "Human Extinction Events" indicator asks participants to rank the events that they believe could cause human extinction in the next century, and the research considers it as effective if participants rank AI higher post intervention or mention it after the treatment when they did not mention it before. If the placement of AI remained the same before and after the treatment, or if participants did not mention AI before or after the treatment, the research considered that there was no effect in raising awareness.The "Human Extinction Percentage" indicator asks for the participants' opinion on the likelihood, in percentage, of human extinction caused by AI in the next century. If there was an increase in the percentage of likelihood given by participants, this research considered that there was an effect in raising awareness. If there is no change or a decrease in the percentage, this r...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public, published by Otto on March 9, 2023 on The Effective Altruism Forum.This is a summary of the following paper by Alexia Georgiadis (Existential Risk Observatory):Thanks to @Lara Mani, @Karl von Wendt, and Alexia Georgiadis for their help in reviewing and writing this post. Any views expressed in this post are not necessarily theirs.The rapid development of artificial intelligence (AI) has evoked both positive and negative sentiments due to its immense potential and the inherent risks associated with its evolution. There are growing concerns that if AI surpasses human intelligence and is not aligned with human values, it may pose significant harm and even lead to the end of humanity. However, the general publics' knowledge of these risks is limited. As advocates for minimising existential threats, the Existential Risk Observatory believes it is imperative to educate the public on the potential risks of AI. Our introductory post outlines some of the reasons why we hold this view (this post is also relevant). To increase public awareness of AI's existential risk, effective communication strategies are necessary.This research aims to assess the effectiveness of communication interventions currently being used to increase awareness about AI existential risk, namely news publications and videos. To this end, we conducted surveys to evaluate the impact of these interventions on raising awareness among participants.MethodologyThis research aims to assess the effectiveness of different media interventions, specifically news articles and videos, in promoting awareness of the potential dangers of AI and its possible impact on human extinction. It analyses the impact of AI existential risk communication strategies on the awareness of the American and Dutch populations, and investigates how social indicators such as age, gender, education level, country of residence, and field of work affect the effectiveness of AI existential risk communication.The study employs a pre-post design, which involves administering the same intervention and assessment to all participants and measuring their responses at two points in time. The research utilises a survey method for collecting data, which was administered to participants through an online Google Forms application. The survey consists of three sections: pre-test questions, the intervention, and post-test questions.The effectiveness of AI existential risk communication is measured by comparing the results of quantitative questions from the pre-test and post-test sections, and the answers to the open-ended questions provide further understanding of any changes in the participant's perspective. The research measures the effectiveness of the media interventions by using two main indicators: "Human Extinction Events" and "Human Extinction Percentage."The "Human Extinction Events" indicator asks participants to rank the events that they believe could cause human extinction in the next century, and the research considers it as effective if participants rank AI higher post intervention or mention it after the treatment when they did not mention it before. If the placement of AI remained the same before and after the treatment, or if participants did not mention AI before or after the treatment, the research considered that there was no effect in raising awareness.The "Human Extinction Percentage" indicator asks for the participants' opinion on the likelihood, in percentage, of human extinction caused by AI in the next century. If there was an increase in the percentage of likelihood given by participants, this research considered that there was an effect in raising awareness. If there is no change or a decrease in the percentage, this r...]]>
Otto https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:13 None full 5170
PeqArqXzSPmAijzrz_NL_EA_EA EA - FTX Poll Post - What do you think about the FTX crisis, given some time? by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Poll Post - What do you think about the FTX crisis, given some time?, published by Nathan Young on March 8, 2023 on The Effective Altruism Forum.Tl;drThe community holds views on thingsWe should understand what they areI think I am building a sense of the community feeling, but perhaps it’s very inaccurateagreevote to agree, disagreevote to disagree, upvote to signify importance downvote to signify unimportanceDoing polls on the forum is bad, but I think it’s better than nothing. I have some theories about what people feel and I’m trying to disprove themIf you want more accurate polling then someone could run thatI’m open to the idea that poll comments in general are annoying or that I run them too soon (though people also DM to thank me for them) but this is a top level post - if you don’t like it just downvote itIf you do like it, upvote it. Probably others will like it tooAdd your own questions or DM me and I will add them.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://forum.effectivealtruism.org/posts/PeqArqXzSPmAijzrz/ftx-poll-post-what-do-you-think-about-the-ftx-crisis-given Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Poll Post - What do you think about the FTX crisis, given some time?, published by Nathan Young on March 8, 2023 on The Effective Altruism Forum.Tl;drThe community holds views on thingsWe should understand what they areI think I am building a sense of the community feeling, but perhaps it’s very inaccurateagreevote to agree, disagreevote to disagree, upvote to signify importance downvote to signify unimportanceDoing polls on the forum is bad, but I think it’s better than nothing. I have some theories about what people feel and I’m trying to disprove themIf you want more accurate polling then someone could run thatI’m open to the idea that poll comments in general are annoying or that I run them too soon (though people also DM to thank me for them) but this is a top level post - if you don’t like it just downvote itIf you do like it, upvote it. Probably others will like it tooAdd your own questions or DM me and I will add them.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 09 Mar 2023 08:38:33 +0000 EA - FTX Poll Post - What do you think about the FTX crisis, given some time? by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Poll Post - What do you think about the FTX crisis, given some time?, published by Nathan Young on March 8, 2023 on The Effective Altruism Forum.Tl;drThe community holds views on thingsWe should understand what they areI think I am building a sense of the community feeling, but perhaps it’s very inaccurateagreevote to agree, disagreevote to disagree, upvote to signify importance downvote to signify unimportanceDoing polls on the forum is bad, but I think it’s better than nothing. I have some theories about what people feel and I’m trying to disprove themIf you want more accurate polling then someone could run thatI’m open to the idea that poll comments in general are annoying or that I run them too soon (though people also DM to thank me for them) but this is a top level post - if you don’t like it just downvote itIf you do like it, upvote it. Probably others will like it tooAdd your own questions or DM me and I will add them.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Poll Post - What do you think about the FTX crisis, given some time?, published by Nathan Young on March 8, 2023 on The Effective Altruism Forum.Tl;drThe community holds views on thingsWe should understand what they areI think I am building a sense of the community feeling, but perhaps it’s very inaccurateagreevote to agree, disagreevote to disagree, upvote to signify importance downvote to signify unimportanceDoing polls on the forum is bad, but I think it’s better than nothing. I have some theories about what people feel and I’m trying to disprove themIf you want more accurate polling then someone could run thatI’m open to the idea that poll comments in general are annoying or that I run them too soon (though people also DM to thank me for them) but this is a top level post - if you don’t like it just downvote itIf you do like it, upvote it. Probably others will like it tooAdd your own questions or DM me and I will add them.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:15 None full 5155
hn4aJdaCwGHfkB8se_NL_EA_EA EA - Against EA-Community-Received-Wisdom on Practical Sociological Questions by Michael Cohen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against EA-Community-Received-Wisdom on Practical Sociological Questions, published by Michael Cohen on March 9, 2023 on The Effective Altruism Forum.In my view, there is a rot in the EA community that is so consequential that it inclines me to discourage effective altruists from putting much, if any, trust in EA community members, EA "leaders", the EA Forum, or LessWrong. But I think that it can be fixed, and the EA movement would become very good.In my view, this rot comes from incorrect answers to certain practical sociological questions, like:How important for success is having experience or having been apprenticed to someone experienced?Is the EA Forum a good tool for collaborative truth-seeking?How helpful is peer review for collaborative truth-seeking?Meta-1. Is "Defer to a consensus among EA community members" a good strategy for answering practical sociological questions?Meta-2. How accurate are conventional answers to practical sociological questions that many people want to get right?I'll spend a few sentences attempting to persuade EA readers that my position is not easily explained away by certain things they might call mistakes. Most of my recent friends are in the EA community. (I don't think EAs are cringe). I assign >10% probability to AI killing everyone, so I'm doing technical AI Safety research as a PhD student at FHI. (I don't think longtermism or sci-fi has corrupted the EA community). I've read the sequences, and I thought they were mostly good. (I'm not "inferentially distant"). I think quite highly of the philosophical and economic reasoning of Toby Ord, Will MacAskill, Nick Bostrom, Rob Wiblin, Holden Karnofsky, and Eliezer Yudkowsky. (I'm "value-aligned", although I object to this term).Let me begin with an observation about Amazon's organizational structure. From what I've heard, Team A at Amazon does not have to use the tool that Team B made for them. Team A is encouraged to look for alternatives elsewhere. And Team B is encouraged to make the tool into something that they can sell to other organizations. This is apparently how Amazon Web Services became a product. The lesson I want to draw from this is that wherever possible, Amazon outsources quality control to the market (external people) rather than having internal "value-aligned" people attempt to assess quality and issue a pass/fail verdict. This is an instance of the principle: "if there is a large group of people trying to answer a question correctly (like 'Is Amazon's tool X the best option available?'), and they are trying (almost) as hard as you to answer it correctly, defer to their answer."That is my claim; now let me defend it, not just by pointing at Amazon, and claiming that they agree with me.High-Level ClaimsClaim 1: If there is a large group of people trying to answer a question correctly, and they are trying (almost) as hard as you to answer it correctly, any consensus of theirs is more likely to be correct than you.There is extensive evidence (Surowiecki, 2004) that aggregating the estimates of many people produces a more accurate estimate as the number of people grows. It may matter in many cases that people are actually trying rather than just professing to try. If you have extensive and unique technical expertise, you might be able to say no one is trying as hard as you, because properly trying to answer the question correctly involves seeking to understand the implications of certain technical arguments, which only you have bothered to do. There is potentially plenty of gray area here, but hopefully, all of my applications of Claim 1 steer well clear of it.Let's now turn to Meta-2 from above.Claim 2: For practical sociological questions that many people want to get right, if there is a conventional answer, you should go with the conventional answer....]]>
Michael Cohen https://forum.effectivealtruism.org/posts/hn4aJdaCwGHfkB8se/against-ea-community-received-wisdom-on-practical Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against EA-Community-Received-Wisdom on Practical Sociological Questions, published by Michael Cohen on March 9, 2023 on The Effective Altruism Forum.In my view, there is a rot in the EA community that is so consequential that it inclines me to discourage effective altruists from putting much, if any, trust in EA community members, EA "leaders", the EA Forum, or LessWrong. But I think that it can be fixed, and the EA movement would become very good.In my view, this rot comes from incorrect answers to certain practical sociological questions, like:How important for success is having experience or having been apprenticed to someone experienced?Is the EA Forum a good tool for collaborative truth-seeking?How helpful is peer review for collaborative truth-seeking?Meta-1. Is "Defer to a consensus among EA community members" a good strategy for answering practical sociological questions?Meta-2. How accurate are conventional answers to practical sociological questions that many people want to get right?I'll spend a few sentences attempting to persuade EA readers that my position is not easily explained away by certain things they might call mistakes. Most of my recent friends are in the EA community. (I don't think EAs are cringe). I assign >10% probability to AI killing everyone, so I'm doing technical AI Safety research as a PhD student at FHI. (I don't think longtermism or sci-fi has corrupted the EA community). I've read the sequences, and I thought they were mostly good. (I'm not "inferentially distant"). I think quite highly of the philosophical and economic reasoning of Toby Ord, Will MacAskill, Nick Bostrom, Rob Wiblin, Holden Karnofsky, and Eliezer Yudkowsky. (I'm "value-aligned", although I object to this term).Let me begin with an observation about Amazon's organizational structure. From what I've heard, Team A at Amazon does not have to use the tool that Team B made for them. Team A is encouraged to look for alternatives elsewhere. And Team B is encouraged to make the tool into something that they can sell to other organizations. This is apparently how Amazon Web Services became a product. The lesson I want to draw from this is that wherever possible, Amazon outsources quality control to the market (external people) rather than having internal "value-aligned" people attempt to assess quality and issue a pass/fail verdict. This is an instance of the principle: "if there is a large group of people trying to answer a question correctly (like 'Is Amazon's tool X the best option available?'), and they are trying (almost) as hard as you to answer it correctly, defer to their answer."That is my claim; now let me defend it, not just by pointing at Amazon, and claiming that they agree with me.High-Level ClaimsClaim 1: If there is a large group of people trying to answer a question correctly, and they are trying (almost) as hard as you to answer it correctly, any consensus of theirs is more likely to be correct than you.There is extensive evidence (Surowiecki, 2004) that aggregating the estimates of many people produces a more accurate estimate as the number of people grows. It may matter in many cases that people are actually trying rather than just professing to try. If you have extensive and unique technical expertise, you might be able to say no one is trying as hard as you, because properly trying to answer the question correctly involves seeking to understand the implications of certain technical arguments, which only you have bothered to do. There is potentially plenty of gray area here, but hopefully, all of my applications of Claim 1 steer well clear of it.Let's now turn to Meta-2 from above.Claim 2: For practical sociological questions that many people want to get right, if there is a conventional answer, you should go with the conventional answer....]]>
Thu, 09 Mar 2023 03:38:34 +0000 EA - Against EA-Community-Received-Wisdom on Practical Sociological Questions by Michael Cohen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against EA-Community-Received-Wisdom on Practical Sociological Questions, published by Michael Cohen on March 9, 2023 on The Effective Altruism Forum.In my view, there is a rot in the EA community that is so consequential that it inclines me to discourage effective altruists from putting much, if any, trust in EA community members, EA "leaders", the EA Forum, or LessWrong. But I think that it can be fixed, and the EA movement would become very good.In my view, this rot comes from incorrect answers to certain practical sociological questions, like:How important for success is having experience or having been apprenticed to someone experienced?Is the EA Forum a good tool for collaborative truth-seeking?How helpful is peer review for collaborative truth-seeking?Meta-1. Is "Defer to a consensus among EA community members" a good strategy for answering practical sociological questions?Meta-2. How accurate are conventional answers to practical sociological questions that many people want to get right?I'll spend a few sentences attempting to persuade EA readers that my position is not easily explained away by certain things they might call mistakes. Most of my recent friends are in the EA community. (I don't think EAs are cringe). I assign >10% probability to AI killing everyone, so I'm doing technical AI Safety research as a PhD student at FHI. (I don't think longtermism or sci-fi has corrupted the EA community). I've read the sequences, and I thought they were mostly good. (I'm not "inferentially distant"). I think quite highly of the philosophical and economic reasoning of Toby Ord, Will MacAskill, Nick Bostrom, Rob Wiblin, Holden Karnofsky, and Eliezer Yudkowsky. (I'm "value-aligned", although I object to this term).Let me begin with an observation about Amazon's organizational structure. From what I've heard, Team A at Amazon does not have to use the tool that Team B made for them. Team A is encouraged to look for alternatives elsewhere. And Team B is encouraged to make the tool into something that they can sell to other organizations. This is apparently how Amazon Web Services became a product. The lesson I want to draw from this is that wherever possible, Amazon outsources quality control to the market (external people) rather than having internal "value-aligned" people attempt to assess quality and issue a pass/fail verdict. This is an instance of the principle: "if there is a large group of people trying to answer a question correctly (like 'Is Amazon's tool X the best option available?'), and they are trying (almost) as hard as you to answer it correctly, defer to their answer."That is my claim; now let me defend it, not just by pointing at Amazon, and claiming that they agree with me.High-Level ClaimsClaim 1: If there is a large group of people trying to answer a question correctly, and they are trying (almost) as hard as you to answer it correctly, any consensus of theirs is more likely to be correct than you.There is extensive evidence (Surowiecki, 2004) that aggregating the estimates of many people produces a more accurate estimate as the number of people grows. It may matter in many cases that people are actually trying rather than just professing to try. If you have extensive and unique technical expertise, you might be able to say no one is trying as hard as you, because properly trying to answer the question correctly involves seeking to understand the implications of certain technical arguments, which only you have bothered to do. There is potentially plenty of gray area here, but hopefully, all of my applications of Claim 1 steer well clear of it.Let's now turn to Meta-2 from above.Claim 2: For practical sociological questions that many people want to get right, if there is a conventional answer, you should go with the conventional answer....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against EA-Community-Received-Wisdom on Practical Sociological Questions, published by Michael Cohen on March 9, 2023 on The Effective Altruism Forum.In my view, there is a rot in the EA community that is so consequential that it inclines me to discourage effective altruists from putting much, if any, trust in EA community members, EA "leaders", the EA Forum, or LessWrong. But I think that it can be fixed, and the EA movement would become very good.In my view, this rot comes from incorrect answers to certain practical sociological questions, like:How important for success is having experience or having been apprenticed to someone experienced?Is the EA Forum a good tool for collaborative truth-seeking?How helpful is peer review for collaborative truth-seeking?Meta-1. Is "Defer to a consensus among EA community members" a good strategy for answering practical sociological questions?Meta-2. How accurate are conventional answers to practical sociological questions that many people want to get right?I'll spend a few sentences attempting to persuade EA readers that my position is not easily explained away by certain things they might call mistakes. Most of my recent friends are in the EA community. (I don't think EAs are cringe). I assign >10% probability to AI killing everyone, so I'm doing technical AI Safety research as a PhD student at FHI. (I don't think longtermism or sci-fi has corrupted the EA community). I've read the sequences, and I thought they were mostly good. (I'm not "inferentially distant"). I think quite highly of the philosophical and economic reasoning of Toby Ord, Will MacAskill, Nick Bostrom, Rob Wiblin, Holden Karnofsky, and Eliezer Yudkowsky. (I'm "value-aligned", although I object to this term).Let me begin with an observation about Amazon's organizational structure. From what I've heard, Team A at Amazon does not have to use the tool that Team B made for them. Team A is encouraged to look for alternatives elsewhere. And Team B is encouraged to make the tool into something that they can sell to other organizations. This is apparently how Amazon Web Services became a product. The lesson I want to draw from this is that wherever possible, Amazon outsources quality control to the market (external people) rather than having internal "value-aligned" people attempt to assess quality and issue a pass/fail verdict. This is an instance of the principle: "if there is a large group of people trying to answer a question correctly (like 'Is Amazon's tool X the best option available?'), and they are trying (almost) as hard as you to answer it correctly, defer to their answer."That is my claim; now let me defend it, not just by pointing at Amazon, and claiming that they agree with me.High-Level ClaimsClaim 1: If there is a large group of people trying to answer a question correctly, and they are trying (almost) as hard as you to answer it correctly, any consensus of theirs is more likely to be correct than you.There is extensive evidence (Surowiecki, 2004) that aggregating the estimates of many people produces a more accurate estimate as the number of people grows. It may matter in many cases that people are actually trying rather than just professing to try. If you have extensive and unique technical expertise, you might be able to say no one is trying as hard as you, because properly trying to answer the question correctly involves seeking to understand the implications of certain technical arguments, which only you have bothered to do. There is potentially plenty of gray area here, but hopefully, all of my applications of Claim 1 steer well clear of it.Let's now turn to Meta-2 from above.Claim 2: For practical sociological questions that many people want to get right, if there is a conventional answer, you should go with the conventional answer....]]>
Michael Cohen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 25:10 None full 5154
DpWhZaGLA5X6p5dgP_NL_EA_EA EA - [Crosspost] Why Uncontrollable AI Looks More Likely Than Ever by Otto Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost] Why Uncontrollable AI Looks More Likely Than Ever, published by Otto on March 8, 2023 on The Effective Altruism Forum.This is a crosspost from Time Magazine, which also appeared in full at a number of other unpaid news websites.BY OTTO BARTEN AND ROMAN YAMPOLSKIYBarten is director of the Existential Risk Observatory, an Amsterdam-based nonprofit.Yampolskiy is a computer scientist at the University of Louisville, known for his work on AI Safety.“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control,” mathematician and science fiction writer I.J. Good wrote over 60 years ago. These prophetic words are now more relevant than ever, with artificial intelligence (AI) gaining capabilities at breakneck speed.In the last weeks, many jaws dropped as they witnessed transformation of AI from a handy but decidedly unscary recommender algorithm, to something that at times seemed to act worryingly humanlike. Some reporters were so shocked that they reported their conversation histories with large language model Bing Chat verbatim. And with good reason: few expected that what we thought were glorified autocomplete programs would suddenly threaten their users, refuse to carry out orders they found insulting, break security in an attempt to save a child’s life, or declare their love to us. Yet this all happened.It can already be overwhelming to think about the immediate consequences of these new models. How are we going to grade papers if any student can use AI? What are the effects of these models on our daily work? Any knowledge worker, who may have thought they would not be affected by automation in the foreseeable future, suddenly has cause for concern.Beyond these direct consequences of currently existing models, however, awaits the more fundamental question of AI that has been on the table since the field’s inception: what if we succeed? That is, what if AI researchers manage to make Artificial General Intelligence (AGI), or an AI that can perform any cognitive task at human level?Surprisingly few academics have seriously engaged with this question, despite working day and night to get to this point. It is obvious, though, that the consequences will be far-reaching, much beyond the consequences of even today’s best large language models. If remote work, for example, could be done just as well by an AGI, employers may be able to simply spin up a few new digital employees to perform any task. The job prospects, economic value, self-worth, and political power of anyone not owning the machines might therefore completely dwindle . Those who do own this technology could achieve nearly anything in very short periods of time. That might mean skyrocketing economic growth, but also a rise in inequality, while meritocracy would become obsolete.But a true AGI could not only transform the world, it could also transform itself. Since AI research is one of the tasks an AGI could do better than us, it should be expected to be able to improve the state of AI. This might set off a positive feedback loop with ever better AIs creating ever better AIs, with no known theoretical limits.This would perhaps be positive rather than alarming, had it not been that this technology has the potential to become uncontrollable. Once an AI has a certain goal and self-improves, there is no known method to adjust this goal. An AI should in fact be expected to resist any such attempt, since goal modification would endanger carrying out its current one. Also, instrumental convergence predicts that AI, whatever its goals are, might start off by self-improving and acquiring more resources once it is sufficiently capable of doing so, since this should help it achieve whatever further goal ...]]>
Otto https://forum.effectivealtruism.org/posts/DpWhZaGLA5X6p5dgP/crosspost-why-uncontrollable-ai-looks-more-likely-than-ever-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost] Why Uncontrollable AI Looks More Likely Than Ever, published by Otto on March 8, 2023 on The Effective Altruism Forum.This is a crosspost from Time Magazine, which also appeared in full at a number of other unpaid news websites.BY OTTO BARTEN AND ROMAN YAMPOLSKIYBarten is director of the Existential Risk Observatory, an Amsterdam-based nonprofit.Yampolskiy is a computer scientist at the University of Louisville, known for his work on AI Safety.“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control,” mathematician and science fiction writer I.J. Good wrote over 60 years ago. These prophetic words are now more relevant than ever, with artificial intelligence (AI) gaining capabilities at breakneck speed.In the last weeks, many jaws dropped as they witnessed transformation of AI from a handy but decidedly unscary recommender algorithm, to something that at times seemed to act worryingly humanlike. Some reporters were so shocked that they reported their conversation histories with large language model Bing Chat verbatim. And with good reason: few expected that what we thought were glorified autocomplete programs would suddenly threaten their users, refuse to carry out orders they found insulting, break security in an attempt to save a child’s life, or declare their love to us. Yet this all happened.It can already be overwhelming to think about the immediate consequences of these new models. How are we going to grade papers if any student can use AI? What are the effects of these models on our daily work? Any knowledge worker, who may have thought they would not be affected by automation in the foreseeable future, suddenly has cause for concern.Beyond these direct consequences of currently existing models, however, awaits the more fundamental question of AI that has been on the table since the field’s inception: what if we succeed? That is, what if AI researchers manage to make Artificial General Intelligence (AGI), or an AI that can perform any cognitive task at human level?Surprisingly few academics have seriously engaged with this question, despite working day and night to get to this point. It is obvious, though, that the consequences will be far-reaching, much beyond the consequences of even today’s best large language models. If remote work, for example, could be done just as well by an AGI, employers may be able to simply spin up a few new digital employees to perform any task. The job prospects, economic value, self-worth, and political power of anyone not owning the machines might therefore completely dwindle . Those who do own this technology could achieve nearly anything in very short periods of time. That might mean skyrocketing economic growth, but also a rise in inequality, while meritocracy would become obsolete.But a true AGI could not only transform the world, it could also transform itself. Since AI research is one of the tasks an AGI could do better than us, it should be expected to be able to improve the state of AI. This might set off a positive feedback loop with ever better AIs creating ever better AIs, with no known theoretical limits.This would perhaps be positive rather than alarming, had it not been that this technology has the potential to become uncontrollable. Once an AI has a certain goal and self-improves, there is no known method to adjust this goal. An AI should in fact be expected to resist any such attempt, since goal modification would endanger carrying out its current one. Also, instrumental convergence predicts that AI, whatever its goals are, might start off by self-improving and acquiring more resources once it is sufficiently capable of doing so, since this should help it achieve whatever further goal ...]]>
Wed, 08 Mar 2023 22:46:31 +0000 EA - [Crosspost] Why Uncontrollable AI Looks More Likely Than Ever by Otto Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost] Why Uncontrollable AI Looks More Likely Than Ever, published by Otto on March 8, 2023 on The Effective Altruism Forum.This is a crosspost from Time Magazine, which also appeared in full at a number of other unpaid news websites.BY OTTO BARTEN AND ROMAN YAMPOLSKIYBarten is director of the Existential Risk Observatory, an Amsterdam-based nonprofit.Yampolskiy is a computer scientist at the University of Louisville, known for his work on AI Safety.“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control,” mathematician and science fiction writer I.J. Good wrote over 60 years ago. These prophetic words are now more relevant than ever, with artificial intelligence (AI) gaining capabilities at breakneck speed.In the last weeks, many jaws dropped as they witnessed transformation of AI from a handy but decidedly unscary recommender algorithm, to something that at times seemed to act worryingly humanlike. Some reporters were so shocked that they reported their conversation histories with large language model Bing Chat verbatim. And with good reason: few expected that what we thought were glorified autocomplete programs would suddenly threaten their users, refuse to carry out orders they found insulting, break security in an attempt to save a child’s life, or declare their love to us. Yet this all happened.It can already be overwhelming to think about the immediate consequences of these new models. How are we going to grade papers if any student can use AI? What are the effects of these models on our daily work? Any knowledge worker, who may have thought they would not be affected by automation in the foreseeable future, suddenly has cause for concern.Beyond these direct consequences of currently existing models, however, awaits the more fundamental question of AI that has been on the table since the field’s inception: what if we succeed? That is, what if AI researchers manage to make Artificial General Intelligence (AGI), or an AI that can perform any cognitive task at human level?Surprisingly few academics have seriously engaged with this question, despite working day and night to get to this point. It is obvious, though, that the consequences will be far-reaching, much beyond the consequences of even today’s best large language models. If remote work, for example, could be done just as well by an AGI, employers may be able to simply spin up a few new digital employees to perform any task. The job prospects, economic value, self-worth, and political power of anyone not owning the machines might therefore completely dwindle . Those who do own this technology could achieve nearly anything in very short periods of time. That might mean skyrocketing economic growth, but also a rise in inequality, while meritocracy would become obsolete.But a true AGI could not only transform the world, it could also transform itself. Since AI research is one of the tasks an AGI could do better than us, it should be expected to be able to improve the state of AI. This might set off a positive feedback loop with ever better AIs creating ever better AIs, with no known theoretical limits.This would perhaps be positive rather than alarming, had it not been that this technology has the potential to become uncontrollable. Once an AI has a certain goal and self-improves, there is no known method to adjust this goal. An AI should in fact be expected to resist any such attempt, since goal modification would endanger carrying out its current one. Also, instrumental convergence predicts that AI, whatever its goals are, might start off by self-improving and acquiring more resources once it is sufficiently capable of doing so, since this should help it achieve whatever further goal ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost] Why Uncontrollable AI Looks More Likely Than Ever, published by Otto on March 8, 2023 on The Effective Altruism Forum.This is a crosspost from Time Magazine, which also appeared in full at a number of other unpaid news websites.BY OTTO BARTEN AND ROMAN YAMPOLSKIYBarten is director of the Existential Risk Observatory, an Amsterdam-based nonprofit.Yampolskiy is a computer scientist at the University of Louisville, known for his work on AI Safety.“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control,” mathematician and science fiction writer I.J. Good wrote over 60 years ago. These prophetic words are now more relevant than ever, with artificial intelligence (AI) gaining capabilities at breakneck speed.In the last weeks, many jaws dropped as they witnessed transformation of AI from a handy but decidedly unscary recommender algorithm, to something that at times seemed to act worryingly humanlike. Some reporters were so shocked that they reported their conversation histories with large language model Bing Chat verbatim. And with good reason: few expected that what we thought were glorified autocomplete programs would suddenly threaten their users, refuse to carry out orders they found insulting, break security in an attempt to save a child’s life, or declare their love to us. Yet this all happened.It can already be overwhelming to think about the immediate consequences of these new models. How are we going to grade papers if any student can use AI? What are the effects of these models on our daily work? Any knowledge worker, who may have thought they would not be affected by automation in the foreseeable future, suddenly has cause for concern.Beyond these direct consequences of currently existing models, however, awaits the more fundamental question of AI that has been on the table since the field’s inception: what if we succeed? That is, what if AI researchers manage to make Artificial General Intelligence (AGI), or an AI that can perform any cognitive task at human level?Surprisingly few academics have seriously engaged with this question, despite working day and night to get to this point. It is obvious, though, that the consequences will be far-reaching, much beyond the consequences of even today’s best large language models. If remote work, for example, could be done just as well by an AGI, employers may be able to simply spin up a few new digital employees to perform any task. The job prospects, economic value, self-worth, and political power of anyone not owning the machines might therefore completely dwindle . Those who do own this technology could achieve nearly anything in very short periods of time. That might mean skyrocketing economic growth, but also a rise in inequality, while meritocracy would become obsolete.But a true AGI could not only transform the world, it could also transform itself. Since AI research is one of the tasks an AGI could do better than us, it should be expected to be able to improve the state of AI. This might set off a positive feedback loop with ever better AIs creating ever better AIs, with no known theoretical limits.This would perhaps be positive rather than alarming, had it not been that this technology has the potential to become uncontrollable. Once an AI has a certain goal and self-improves, there is no known method to adjust this goal. An AI should in fact be expected to resist any such attempt, since goal modification would endanger carrying out its current one. Also, instrumental convergence predicts that AI, whatever its goals are, might start off by self-improving and acquiring more resources once it is sufficiently capable of doing so, since this should help it achieve whatever further goal ...]]>
Otto https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:29 None full 5148
tedrwwpXgpBEi3Ecc_NL_EA_EA EA - 80,000 Hours two-year review: 2021–2022 by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours two-year review: 2021–2022, published by 80000 Hours on March 8, 2023 on The Effective Altruism Forum.80,000 Hours has released a review of our programmes for the years 2021 and 2022. The full document is available for the public, and we’re sharing the summary below.You can find our previous evaluations here. We have also updated our mistakes page.80,000 Hours delivers four programmes: website, job board, podcast, and one-on-one. We also have a marketing team that attracts users to these programmes, primarily by getting them to visit the website.Over the past two years, three of four programmes grew their engagement 2-3x:Podcast listening time in 2022 was 2x higher than in 2020Job board vacancy clicks in 2022 were 3x higher than in 2020The number of one-on-one team calls in 2022 was 3x higher than in 2020Web engagement hours fell by 20% in 2021, then grew by 38% in 2022 after we increased investment in our marketing.From December 2020 to December 2022, the core team grew by 78% from 14 FTEs to 25 FTEs.Ben Todd stepped down as CEO in May 2022 and was replaced by Howie Lempel.The collapse of FTX in November 2022 caused significant disruption. As a result, Howie went on leave from 80,000 Hours to be Interim CEO of Effective Ventures Foundation (UK). Brenton Mayer took over as Interim CEO of 80,000 Hours. We are also spending substantially more time liaising with management across the Effective Ventures group, as we are a project of the group.We had previously held up Sam Bankman-Fried as a positive example of one of our highly rated career paths, a decision we now regret and feel humbled by. We are updating some aspects of our advice in light of our reflections on the FTX collapse and the lessons the wider community is learning from these events.In 2023, we will make improving our advice a key focus of our work. As part of this, we’re aiming to hire for a senior research role.We plan to continue growing our main four programmes and will experiment with additional projects, such as relaunching our headhunting service and creating a new, scripted podcast with a different host. We plan to grow the team by ~45% in 2023, adding an additional 11 people.Our provisional expansion budgets for 2023 and 2024 (excluding marketing) are $12m and $17m. We’re keen to fundraise for both years and are also interested in extending our runway — though we expect that the amount we raise in practice will be heavily affected by the funding landscape.The Effective Ventures group is an umbrella term for Effective Ventures Foundation (England and Wales registered charity number 1149828 and registered company number 07962181) and Effective Ventures Foundation USA, Inc. (a section 501(c)(3) tax-exempt organisation in the USA, EIN 47-1988398), two separate legal entities which work together.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
80000 Hours https://forum.effectivealtruism.org/posts/tedrwwpXgpBEi3Ecc/80-000-hours-two-year-review-2021-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours two-year review: 2021–2022, published by 80000 Hours on March 8, 2023 on The Effective Altruism Forum.80,000 Hours has released a review of our programmes for the years 2021 and 2022. The full document is available for the public, and we’re sharing the summary below.You can find our previous evaluations here. We have also updated our mistakes page.80,000 Hours delivers four programmes: website, job board, podcast, and one-on-one. We also have a marketing team that attracts users to these programmes, primarily by getting them to visit the website.Over the past two years, three of four programmes grew their engagement 2-3x:Podcast listening time in 2022 was 2x higher than in 2020Job board vacancy clicks in 2022 were 3x higher than in 2020The number of one-on-one team calls in 2022 was 3x higher than in 2020Web engagement hours fell by 20% in 2021, then grew by 38% in 2022 after we increased investment in our marketing.From December 2020 to December 2022, the core team grew by 78% from 14 FTEs to 25 FTEs.Ben Todd stepped down as CEO in May 2022 and was replaced by Howie Lempel.The collapse of FTX in November 2022 caused significant disruption. As a result, Howie went on leave from 80,000 Hours to be Interim CEO of Effective Ventures Foundation (UK). Brenton Mayer took over as Interim CEO of 80,000 Hours. We are also spending substantially more time liaising with management across the Effective Ventures group, as we are a project of the group.We had previously held up Sam Bankman-Fried as a positive example of one of our highly rated career paths, a decision we now regret and feel humbled by. We are updating some aspects of our advice in light of our reflections on the FTX collapse and the lessons the wider community is learning from these events.In 2023, we will make improving our advice a key focus of our work. As part of this, we’re aiming to hire for a senior research role.We plan to continue growing our main four programmes and will experiment with additional projects, such as relaunching our headhunting service and creating a new, scripted podcast with a different host. We plan to grow the team by ~45% in 2023, adding an additional 11 people.Our provisional expansion budgets for 2023 and 2024 (excluding marketing) are $12m and $17m. We’re keen to fundraise for both years and are also interested in extending our runway — though we expect that the amount we raise in practice will be heavily affected by the funding landscape.The Effective Ventures group is an umbrella term for Effective Ventures Foundation (England and Wales registered charity number 1149828 and registered company number 07962181) and Effective Ventures Foundation USA, Inc. (a section 501(c)(3) tax-exempt organisation in the USA, EIN 47-1988398), two separate legal entities which work together.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 08 Mar 2023 19:22:45 +0000 EA - 80,000 Hours two-year review: 2021–2022 by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours two-year review: 2021–2022, published by 80000 Hours on March 8, 2023 on The Effective Altruism Forum.80,000 Hours has released a review of our programmes for the years 2021 and 2022. The full document is available for the public, and we’re sharing the summary below.You can find our previous evaluations here. We have also updated our mistakes page.80,000 Hours delivers four programmes: website, job board, podcast, and one-on-one. We also have a marketing team that attracts users to these programmes, primarily by getting them to visit the website.Over the past two years, three of four programmes grew their engagement 2-3x:Podcast listening time in 2022 was 2x higher than in 2020Job board vacancy clicks in 2022 were 3x higher than in 2020The number of one-on-one team calls in 2022 was 3x higher than in 2020Web engagement hours fell by 20% in 2021, then grew by 38% in 2022 after we increased investment in our marketing.From December 2020 to December 2022, the core team grew by 78% from 14 FTEs to 25 FTEs.Ben Todd stepped down as CEO in May 2022 and was replaced by Howie Lempel.The collapse of FTX in November 2022 caused significant disruption. As a result, Howie went on leave from 80,000 Hours to be Interim CEO of Effective Ventures Foundation (UK). Brenton Mayer took over as Interim CEO of 80,000 Hours. We are also spending substantially more time liaising with management across the Effective Ventures group, as we are a project of the group.We had previously held up Sam Bankman-Fried as a positive example of one of our highly rated career paths, a decision we now regret and feel humbled by. We are updating some aspects of our advice in light of our reflections on the FTX collapse and the lessons the wider community is learning from these events.In 2023, we will make improving our advice a key focus of our work. As part of this, we’re aiming to hire for a senior research role.We plan to continue growing our main four programmes and will experiment with additional projects, such as relaunching our headhunting service and creating a new, scripted podcast with a different host. We plan to grow the team by ~45% in 2023, adding an additional 11 people.Our provisional expansion budgets for 2023 and 2024 (excluding marketing) are $12m and $17m. We’re keen to fundraise for both years and are also interested in extending our runway — though we expect that the amount we raise in practice will be heavily affected by the funding landscape.The Effective Ventures group is an umbrella term for Effective Ventures Foundation (England and Wales registered charity number 1149828 and registered company number 07962181) and Effective Ventures Foundation USA, Inc. (a section 501(c)(3) tax-exempt organisation in the USA, EIN 47-1988398), two separate legal entities which work together.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours two-year review: 2021–2022, published by 80000 Hours on March 8, 2023 on The Effective Altruism Forum.80,000 Hours has released a review of our programmes for the years 2021 and 2022. The full document is available for the public, and we’re sharing the summary below.You can find our previous evaluations here. We have also updated our mistakes page.80,000 Hours delivers four programmes: website, job board, podcast, and one-on-one. We also have a marketing team that attracts users to these programmes, primarily by getting them to visit the website.Over the past two years, three of four programmes grew their engagement 2-3x:Podcast listening time in 2022 was 2x higher than in 2020Job board vacancy clicks in 2022 were 3x higher than in 2020The number of one-on-one team calls in 2022 was 3x higher than in 2020Web engagement hours fell by 20% in 2021, then grew by 38% in 2022 after we increased investment in our marketing.From December 2020 to December 2022, the core team grew by 78% from 14 FTEs to 25 FTEs.Ben Todd stepped down as CEO in May 2022 and was replaced by Howie Lempel.The collapse of FTX in November 2022 caused significant disruption. As a result, Howie went on leave from 80,000 Hours to be Interim CEO of Effective Ventures Foundation (UK). Brenton Mayer took over as Interim CEO of 80,000 Hours. We are also spending substantially more time liaising with management across the Effective Ventures group, as we are a project of the group.We had previously held up Sam Bankman-Fried as a positive example of one of our highly rated career paths, a decision we now regret and feel humbled by. We are updating some aspects of our advice in light of our reflections on the FTX collapse and the lessons the wider community is learning from these events.In 2023, we will make improving our advice a key focus of our work. As part of this, we’re aiming to hire for a senior research role.We plan to continue growing our main four programmes and will experiment with additional projects, such as relaunching our headhunting service and creating a new, scripted podcast with a different host. We plan to grow the team by ~45% in 2023, adding an additional 11 people.Our provisional expansion budgets for 2023 and 2024 (excluding marketing) are $12m and $17m. We’re keen to fundraise for both years and are also interested in extending our runway — though we expect that the amount we raise in practice will be heavily affected by the funding landscape.The Effective Ventures group is an umbrella term for Effective Ventures Foundation (England and Wales registered charity number 1149828 and registered company number 07962181) and Effective Ventures Foundation USA, Inc. (a section 501(c)(3) tax-exempt organisation in the USA, EIN 47-1988398), two separate legal entities which work together.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
80000 Hours https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:21 None full 5146
2JXLxKbSsbicnt9N9_NL_EA_EA EA - EA needs Life-Veterans and "Less Smart" people by Jeffrey Kursonis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA needs Life-Veterans and "Less Smart" people, published by Jeffrey Kursonis on March 8, 2023 on The Effective Altruism Forum.Healthy communities have all kinds. There is a magic in the plant world when a diversity of plants co-exist. Permaculture has been innovating through realizing how the community of plants help each other by each contributing a different gift to the benefit of the whole. Plants communicate with each other via mushroom like strands underground and work together. Interestingly, they speak French. Just kidding.I’ve been in a movement that changed the world in a positive way and eventually fell apart, it was very very similar to EA in many ways — a bunch of talented young people trying to do good in the world. We had all the same criticisms people throw at EA and we did listen and learn as much as we had the capacity to. I won’t tell the whole story here, but we didn’t fall apart because of bad things, it was a necessary evolution, but one of the key problems that kept us from surviving was a lack of diversity.For some reason when I was young, I think it’s because I was smart, I figured out that if older people had already faced all the challenges I face maybe they would be a good source of data and the gritty life wisdom of how to apply the data. So I would go out of my way to befriend them and listen to them. It was mixed results.lots of older people are just bitter, but there were enough that had made it through a life full of thriving and were happy to share it. You just have to find the right one’s.Most of the world is made up of average people, smart people call them dumb, but they’re really just average. The thing is, if everybody in the room is smart, who is going to see the world as most of the world sees the world? That’s a data poor room.If we are really smart we’ll make sure to surround ourselves not just with other smart people but with a variety of young and old, different cultures, different life experience levels and some average people. That’s a room rich in data.Never underestimate the simple wisdom of simple people.and because most of the people in the world are religious, we should have them around too. You just need to find the right one’s.generous and kind and who want everyone to thrive.Wisdom is learning how to live in reality.when we’re young we are really far from reality.you have a bedroom and a phone and an iPad in a lovely house, all provided for you magically. You have no clue how that all accrued to you. You’re not yet in touch with reality. But as you attend the school of hard knocks year after year, slowly but surely reality drifts in.essentially what happens is that as you are slowly disconnected from your parents and the “magical accrual” fades away, you learn how real life works.Wise people have had the time it takes to boil it all down to pure essence, filter out the dross and see the pure reality.when you can see it, you can figure out how to negotiate it. It simply takes some years and a person oriented toward thriving rather than increasing bitterness.If you have a lot of data, what you need more than anything is wisdom to interpret the data and wisdom to creatively imagine real world applications from the data.You simply cannot be a movement committed to getting more effective at doing good in the world if you do not have some elder wisdom in the room. It’s a glaring deficit. Thank God for Singer, but he’s not around enough.Especially in this time of post-FTX self examination and reforming, in this time of making efforts concerning the mental health of young people under pressure to save the world—this is the time to round out the community with some village like balance; The young, the old, the strong, the average all making life thrive like plants all mixed up in the jungle.And artists! My God...]]>
Jeffrey Kursonis https://forum.effectivealtruism.org/posts/2JXLxKbSsbicnt9N9/ea-needs-life-veterans-and-less-smart-people Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA needs Life-Veterans and "Less Smart" people, published by Jeffrey Kursonis on March 8, 2023 on The Effective Altruism Forum.Healthy communities have all kinds. There is a magic in the plant world when a diversity of plants co-exist. Permaculture has been innovating through realizing how the community of plants help each other by each contributing a different gift to the benefit of the whole. Plants communicate with each other via mushroom like strands underground and work together. Interestingly, they speak French. Just kidding.I’ve been in a movement that changed the world in a positive way and eventually fell apart, it was very very similar to EA in many ways — a bunch of talented young people trying to do good in the world. We had all the same criticisms people throw at EA and we did listen and learn as much as we had the capacity to. I won’t tell the whole story here, but we didn’t fall apart because of bad things, it was a necessary evolution, but one of the key problems that kept us from surviving was a lack of diversity.For some reason when I was young, I think it’s because I was smart, I figured out that if older people had already faced all the challenges I face maybe they would be a good source of data and the gritty life wisdom of how to apply the data. So I would go out of my way to befriend them and listen to them. It was mixed results.lots of older people are just bitter, but there were enough that had made it through a life full of thriving and were happy to share it. You just have to find the right one’s.Most of the world is made up of average people, smart people call them dumb, but they’re really just average. The thing is, if everybody in the room is smart, who is going to see the world as most of the world sees the world? That’s a data poor room.If we are really smart we’ll make sure to surround ourselves not just with other smart people but with a variety of young and old, different cultures, different life experience levels and some average people. That’s a room rich in data.Never underestimate the simple wisdom of simple people.and because most of the people in the world are religious, we should have them around too. You just need to find the right one’s.generous and kind and who want everyone to thrive.Wisdom is learning how to live in reality.when we’re young we are really far from reality.you have a bedroom and a phone and an iPad in a lovely house, all provided for you magically. You have no clue how that all accrued to you. You’re not yet in touch with reality. But as you attend the school of hard knocks year after year, slowly but surely reality drifts in.essentially what happens is that as you are slowly disconnected from your parents and the “magical accrual” fades away, you learn how real life works.Wise people have had the time it takes to boil it all down to pure essence, filter out the dross and see the pure reality.when you can see it, you can figure out how to negotiate it. It simply takes some years and a person oriented toward thriving rather than increasing bitterness.If you have a lot of data, what you need more than anything is wisdom to interpret the data and wisdom to creatively imagine real world applications from the data.You simply cannot be a movement committed to getting more effective at doing good in the world if you do not have some elder wisdom in the room. It’s a glaring deficit. Thank God for Singer, but he’s not around enough.Especially in this time of post-FTX self examination and reforming, in this time of making efforts concerning the mental health of young people under pressure to save the world—this is the time to round out the community with some village like balance; The young, the old, the strong, the average all making life thrive like plants all mixed up in the jungle.And artists! My God...]]>
Wed, 08 Mar 2023 18:40:09 +0000 EA - EA needs Life-Veterans and "Less Smart" people by Jeffrey Kursonis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA needs Life-Veterans and "Less Smart" people, published by Jeffrey Kursonis on March 8, 2023 on The Effective Altruism Forum.Healthy communities have all kinds. There is a magic in the plant world when a diversity of plants co-exist. Permaculture has been innovating through realizing how the community of plants help each other by each contributing a different gift to the benefit of the whole. Plants communicate with each other via mushroom like strands underground and work together. Interestingly, they speak French. Just kidding.I’ve been in a movement that changed the world in a positive way and eventually fell apart, it was very very similar to EA in many ways — a bunch of talented young people trying to do good in the world. We had all the same criticisms people throw at EA and we did listen and learn as much as we had the capacity to. I won’t tell the whole story here, but we didn’t fall apart because of bad things, it was a necessary evolution, but one of the key problems that kept us from surviving was a lack of diversity.For some reason when I was young, I think it’s because I was smart, I figured out that if older people had already faced all the challenges I face maybe they would be a good source of data and the gritty life wisdom of how to apply the data. So I would go out of my way to befriend them and listen to them. It was mixed results.lots of older people are just bitter, but there were enough that had made it through a life full of thriving and were happy to share it. You just have to find the right one’s.Most of the world is made up of average people, smart people call them dumb, but they’re really just average. The thing is, if everybody in the room is smart, who is going to see the world as most of the world sees the world? That’s a data poor room.If we are really smart we’ll make sure to surround ourselves not just with other smart people but with a variety of young and old, different cultures, different life experience levels and some average people. That’s a room rich in data.Never underestimate the simple wisdom of simple people.and because most of the people in the world are religious, we should have them around too. You just need to find the right one’s.generous and kind and who want everyone to thrive.Wisdom is learning how to live in reality.when we’re young we are really far from reality.you have a bedroom and a phone and an iPad in a lovely house, all provided for you magically. You have no clue how that all accrued to you. You’re not yet in touch with reality. But as you attend the school of hard knocks year after year, slowly but surely reality drifts in.essentially what happens is that as you are slowly disconnected from your parents and the “magical accrual” fades away, you learn how real life works.Wise people have had the time it takes to boil it all down to pure essence, filter out the dross and see the pure reality.when you can see it, you can figure out how to negotiate it. It simply takes some years and a person oriented toward thriving rather than increasing bitterness.If you have a lot of data, what you need more than anything is wisdom to interpret the data and wisdom to creatively imagine real world applications from the data.You simply cannot be a movement committed to getting more effective at doing good in the world if you do not have some elder wisdom in the room. It’s a glaring deficit. Thank God for Singer, but he’s not around enough.Especially in this time of post-FTX self examination and reforming, in this time of making efforts concerning the mental health of young people under pressure to save the world—this is the time to round out the community with some village like balance; The young, the old, the strong, the average all making life thrive like plants all mixed up in the jungle.And artists! My God...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA needs Life-Veterans and "Less Smart" people, published by Jeffrey Kursonis on March 8, 2023 on The Effective Altruism Forum.Healthy communities have all kinds. There is a magic in the plant world when a diversity of plants co-exist. Permaculture has been innovating through realizing how the community of plants help each other by each contributing a different gift to the benefit of the whole. Plants communicate with each other via mushroom like strands underground and work together. Interestingly, they speak French. Just kidding.I’ve been in a movement that changed the world in a positive way and eventually fell apart, it was very very similar to EA in many ways — a bunch of talented young people trying to do good in the world. We had all the same criticisms people throw at EA and we did listen and learn as much as we had the capacity to. I won’t tell the whole story here, but we didn’t fall apart because of bad things, it was a necessary evolution, but one of the key problems that kept us from surviving was a lack of diversity.For some reason when I was young, I think it’s because I was smart, I figured out that if older people had already faced all the challenges I face maybe they would be a good source of data and the gritty life wisdom of how to apply the data. So I would go out of my way to befriend them and listen to them. It was mixed results.lots of older people are just bitter, but there were enough that had made it through a life full of thriving and were happy to share it. You just have to find the right one’s.Most of the world is made up of average people, smart people call them dumb, but they’re really just average. The thing is, if everybody in the room is smart, who is going to see the world as most of the world sees the world? That’s a data poor room.If we are really smart we’ll make sure to surround ourselves not just with other smart people but with a variety of young and old, different cultures, different life experience levels and some average people. That’s a room rich in data.Never underestimate the simple wisdom of simple people.and because most of the people in the world are religious, we should have them around too. You just need to find the right one’s.generous and kind and who want everyone to thrive.Wisdom is learning how to live in reality.when we’re young we are really far from reality.you have a bedroom and a phone and an iPad in a lovely house, all provided for you magically. You have no clue how that all accrued to you. You’re not yet in touch with reality. But as you attend the school of hard knocks year after year, slowly but surely reality drifts in.essentially what happens is that as you are slowly disconnected from your parents and the “magical accrual” fades away, you learn how real life works.Wise people have had the time it takes to boil it all down to pure essence, filter out the dross and see the pure reality.when you can see it, you can figure out how to negotiate it. It simply takes some years and a person oriented toward thriving rather than increasing bitterness.If you have a lot of data, what you need more than anything is wisdom to interpret the data and wisdom to creatively imagine real world applications from the data.You simply cannot be a movement committed to getting more effective at doing good in the world if you do not have some elder wisdom in the room. It’s a glaring deficit. Thank God for Singer, but he’s not around enough.Especially in this time of post-FTX self examination and reforming, in this time of making efforts concerning the mental health of young people under pressure to save the world—this is the time to round out the community with some village like balance; The young, the old, the strong, the average all making life thrive like plants all mixed up in the jungle.And artists! My God...]]>
Jeffrey Kursonis https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:11 None full 5147
qe3z5Yfqr2ZoAvWe4_NL_EA_EA EA - Suggest new charity ideas for Charity Entrepreneurship by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggest new charity ideas for Charity Entrepreneurship, published by CE on March 8, 2023 on The Effective Altruism Forum.At Charity Entrepreneurship (CE) we launch high-impact nonprofits by connecting entrepreneurs with the effective ideas, training and funding needed to launch and succeed.Each year our research team collates hundreds and hundreds of ideas for promising new charities. We scour the research, we talk to colleagues in partner organisations and we solicit ideas from academia.We then vet these ideas and whittle them down to the top ~3. We create detailed reports about these top ideas and then we recruit, train and fund entrepreneurs to take these ideas and make them a reality. In 5 years we've launched over 23 exceptional organisations. You can read more about charities we incubated here.In 2023 Charity Entrepreneurship will be researching two new cause areas: mass media interventions and preventative animal advocacyWe want your ideas!!!Prize: If you give us an idea which leads to a new charity then you will win: A copy of the Charity Entrepreneurship handbook, a box of vegan cookies and $100 to a charity of your choice. And more importantly, a new charity will be started!Notes: If multiple people submit the same idea we will give the award to the first submission. Max 5 prizes will be awarded. If you submit an idea already on our list, you are still eligible for a prize.Please submit your ideas into this form by end of the day Sunday 12th March.[SUBMIT]Mass mediaThe cause area:By ‘mass media’ intervention we refer to (1) social and behaviour change communication campaigns delivered through (2) mass media aiming to improve (3) human wellbeing.Definitions:1. Social and Behavior Change Communication (SBCC) – The strategic use of communication to promote positive outcomes, based on proven theories and models of behavior change (more here).2. Mass media – Mass communication modes that reach a very large audience where any targeting of segmented audiences can be mass applied (e.g. would include online advertising that targets relevant audiences but not posters in health centers) (more here). Examples include: TV, Radio, Mobile phones, Newspapers, Outdoor advertising.3. Human wellbeing – For our purposes, wellbeing refers broadly to areas of human health, development and poverty.Headline metricOur key metrics for the quantitative side of this research will be DALYs averted or % income increases. We will likely compare across these metrics using a moral weight formula. We may set our own moral weights or use recent moral weights by GiveWell (e.g. from here). Note that the use of this as a headline metric does not mean that other factors (autonomy, environmental effects, suffering not captured by DALYs) are excluded, although they may not be explicitly quantified.Scope limitationsCurrently, all interventions that could reasonably be considered mass media are in scope. If in doubt, assume it is in scope.Example ideasNote: there is no guarantee that any of the following ideas make it past the initial filterPromoting healthier dietsPromoting CBT tools for stressMessaging against tobacco use with resources for quittingSignposting to available support in cases of abuse, violence, etc.Anti-suicide campaignsChanging HIV attitudesPromotion cancer screening (cervical, breast, bowel, prostate, etc.)Information campaigns about criminal politiciansEncouraging lower sugar consumptionPreventative animal advocacyThe cause area:The focus this year is on intervention and policies that prevent future harms done to animals as opposed to solving current problems. We will be looking for interventions that as well as having some short run evidence of impact will prevent future problems i.e. have the biggest impact on farmed animals in the future, say ...]]>
CE https://forum.effectivealtruism.org/posts/qe3z5Yfqr2ZoAvWe4/suggest-new-charity-ideas-for-charity-entrepreneurship Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggest new charity ideas for Charity Entrepreneurship, published by CE on March 8, 2023 on The Effective Altruism Forum.At Charity Entrepreneurship (CE) we launch high-impact nonprofits by connecting entrepreneurs with the effective ideas, training and funding needed to launch and succeed.Each year our research team collates hundreds and hundreds of ideas for promising new charities. We scour the research, we talk to colleagues in partner organisations and we solicit ideas from academia.We then vet these ideas and whittle them down to the top ~3. We create detailed reports about these top ideas and then we recruit, train and fund entrepreneurs to take these ideas and make them a reality. In 5 years we've launched over 23 exceptional organisations. You can read more about charities we incubated here.In 2023 Charity Entrepreneurship will be researching two new cause areas: mass media interventions and preventative animal advocacyWe want your ideas!!!Prize: If you give us an idea which leads to a new charity then you will win: A copy of the Charity Entrepreneurship handbook, a box of vegan cookies and $100 to a charity of your choice. And more importantly, a new charity will be started!Notes: If multiple people submit the same idea we will give the award to the first submission. Max 5 prizes will be awarded. If you submit an idea already on our list, you are still eligible for a prize.Please submit your ideas into this form by end of the day Sunday 12th March.[SUBMIT]Mass mediaThe cause area:By ‘mass media’ intervention we refer to (1) social and behaviour change communication campaigns delivered through (2) mass media aiming to improve (3) human wellbeing.Definitions:1. Social and Behavior Change Communication (SBCC) – The strategic use of communication to promote positive outcomes, based on proven theories and models of behavior change (more here).2. Mass media – Mass communication modes that reach a very large audience where any targeting of segmented audiences can be mass applied (e.g. would include online advertising that targets relevant audiences but not posters in health centers) (more here). Examples include: TV, Radio, Mobile phones, Newspapers, Outdoor advertising.3. Human wellbeing – For our purposes, wellbeing refers broadly to areas of human health, development and poverty.Headline metricOur key metrics for the quantitative side of this research will be DALYs averted or % income increases. We will likely compare across these metrics using a moral weight formula. We may set our own moral weights or use recent moral weights by GiveWell (e.g. from here). Note that the use of this as a headline metric does not mean that other factors (autonomy, environmental effects, suffering not captured by DALYs) are excluded, although they may not be explicitly quantified.Scope limitationsCurrently, all interventions that could reasonably be considered mass media are in scope. If in doubt, assume it is in scope.Example ideasNote: there is no guarantee that any of the following ideas make it past the initial filterPromoting healthier dietsPromoting CBT tools for stressMessaging against tobacco use with resources for quittingSignposting to available support in cases of abuse, violence, etc.Anti-suicide campaignsChanging HIV attitudesPromotion cancer screening (cervical, breast, bowel, prostate, etc.)Information campaigns about criminal politiciansEncouraging lower sugar consumptionPreventative animal advocacyThe cause area:The focus this year is on intervention and policies that prevent future harms done to animals as opposed to solving current problems. We will be looking for interventions that as well as having some short run evidence of impact will prevent future problems i.e. have the biggest impact on farmed animals in the future, say ...]]>
Wed, 08 Mar 2023 16:16:57 +0000 EA - Suggest new charity ideas for Charity Entrepreneurship by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggest new charity ideas for Charity Entrepreneurship, published by CE on March 8, 2023 on The Effective Altruism Forum.At Charity Entrepreneurship (CE) we launch high-impact nonprofits by connecting entrepreneurs with the effective ideas, training and funding needed to launch and succeed.Each year our research team collates hundreds and hundreds of ideas for promising new charities. We scour the research, we talk to colleagues in partner organisations and we solicit ideas from academia.We then vet these ideas and whittle them down to the top ~3. We create detailed reports about these top ideas and then we recruit, train and fund entrepreneurs to take these ideas and make them a reality. In 5 years we've launched over 23 exceptional organisations. You can read more about charities we incubated here.In 2023 Charity Entrepreneurship will be researching two new cause areas: mass media interventions and preventative animal advocacyWe want your ideas!!!Prize: If you give us an idea which leads to a new charity then you will win: A copy of the Charity Entrepreneurship handbook, a box of vegan cookies and $100 to a charity of your choice. And more importantly, a new charity will be started!Notes: If multiple people submit the same idea we will give the award to the first submission. Max 5 prizes will be awarded. If you submit an idea already on our list, you are still eligible for a prize.Please submit your ideas into this form by end of the day Sunday 12th March.[SUBMIT]Mass mediaThe cause area:By ‘mass media’ intervention we refer to (1) social and behaviour change communication campaigns delivered through (2) mass media aiming to improve (3) human wellbeing.Definitions:1. Social and Behavior Change Communication (SBCC) – The strategic use of communication to promote positive outcomes, based on proven theories and models of behavior change (more here).2. Mass media – Mass communication modes that reach a very large audience where any targeting of segmented audiences can be mass applied (e.g. would include online advertising that targets relevant audiences but not posters in health centers) (more here). Examples include: TV, Radio, Mobile phones, Newspapers, Outdoor advertising.3. Human wellbeing – For our purposes, wellbeing refers broadly to areas of human health, development and poverty.Headline metricOur key metrics for the quantitative side of this research will be DALYs averted or % income increases. We will likely compare across these metrics using a moral weight formula. We may set our own moral weights or use recent moral weights by GiveWell (e.g. from here). Note that the use of this as a headline metric does not mean that other factors (autonomy, environmental effects, suffering not captured by DALYs) are excluded, although they may not be explicitly quantified.Scope limitationsCurrently, all interventions that could reasonably be considered mass media are in scope. If in doubt, assume it is in scope.Example ideasNote: there is no guarantee that any of the following ideas make it past the initial filterPromoting healthier dietsPromoting CBT tools for stressMessaging against tobacco use with resources for quittingSignposting to available support in cases of abuse, violence, etc.Anti-suicide campaignsChanging HIV attitudesPromotion cancer screening (cervical, breast, bowel, prostate, etc.)Information campaigns about criminal politiciansEncouraging lower sugar consumptionPreventative animal advocacyThe cause area:The focus this year is on intervention and policies that prevent future harms done to animals as opposed to solving current problems. We will be looking for interventions that as well as having some short run evidence of impact will prevent future problems i.e. have the biggest impact on farmed animals in the future, say ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggest new charity ideas for Charity Entrepreneurship, published by CE on March 8, 2023 on The Effective Altruism Forum.At Charity Entrepreneurship (CE) we launch high-impact nonprofits by connecting entrepreneurs with the effective ideas, training and funding needed to launch and succeed.Each year our research team collates hundreds and hundreds of ideas for promising new charities. We scour the research, we talk to colleagues in partner organisations and we solicit ideas from academia.We then vet these ideas and whittle them down to the top ~3. We create detailed reports about these top ideas and then we recruit, train and fund entrepreneurs to take these ideas and make them a reality. In 5 years we've launched over 23 exceptional organisations. You can read more about charities we incubated here.In 2023 Charity Entrepreneurship will be researching two new cause areas: mass media interventions and preventative animal advocacyWe want your ideas!!!Prize: If you give us an idea which leads to a new charity then you will win: A copy of the Charity Entrepreneurship handbook, a box of vegan cookies and $100 to a charity of your choice. And more importantly, a new charity will be started!Notes: If multiple people submit the same idea we will give the award to the first submission. Max 5 prizes will be awarded. If you submit an idea already on our list, you are still eligible for a prize.Please submit your ideas into this form by end of the day Sunday 12th March.[SUBMIT]Mass mediaThe cause area:By ‘mass media’ intervention we refer to (1) social and behaviour change communication campaigns delivered through (2) mass media aiming to improve (3) human wellbeing.Definitions:1. Social and Behavior Change Communication (SBCC) – The strategic use of communication to promote positive outcomes, based on proven theories and models of behavior change (more here).2. Mass media – Mass communication modes that reach a very large audience where any targeting of segmented audiences can be mass applied (e.g. would include online advertising that targets relevant audiences but not posters in health centers) (more here). Examples include: TV, Radio, Mobile phones, Newspapers, Outdoor advertising.3. Human wellbeing – For our purposes, wellbeing refers broadly to areas of human health, development and poverty.Headline metricOur key metrics for the quantitative side of this research will be DALYs averted or % income increases. We will likely compare across these metrics using a moral weight formula. We may set our own moral weights or use recent moral weights by GiveWell (e.g. from here). Note that the use of this as a headline metric does not mean that other factors (autonomy, environmental effects, suffering not captured by DALYs) are excluded, although they may not be explicitly quantified.Scope limitationsCurrently, all interventions that could reasonably be considered mass media are in scope. If in doubt, assume it is in scope.Example ideasNote: there is no guarantee that any of the following ideas make it past the initial filterPromoting healthier dietsPromoting CBT tools for stressMessaging against tobacco use with resources for quittingSignposting to available support in cases of abuse, violence, etc.Anti-suicide campaignsChanging HIV attitudesPromotion cancer screening (cervical, breast, bowel, prostate, etc.)Information campaigns about criminal politiciansEncouraging lower sugar consumptionPreventative animal advocacyThe cause area:The focus this year is on intervention and policies that prevent future harms done to animals as opposed to solving current problems. We will be looking for interventions that as well as having some short run evidence of impact will prevent future problems i.e. have the biggest impact on farmed animals in the future, say ...]]>
CE https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:07 None full 5149
A9ExMYamqTycvFGAo_NL_EA_EA EA - Evidence on how cash transfers empower women in poverty by GiveDirectly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evidence on how cash transfers empower women in poverty, published by GiveDirectly on March 8, 2023 on The Effective Altruism Forum.Donations to GiveDirectly put power in the hands of recipients, 62% of whom are women. On International Women’s Day, hear directly from women and girls in poverty in Malawi about the unique ways that direct cash empowers them:This impact is more than anecdotal; research finds that cash aid lets women improve their lives in many ways. Below, we break down the evidence by story.Maternal & infant healthLenita - “When I was pregnant, I would fall sick [and] could not afford the fare to go to the hospital.”Studies find that cash can:Increase the use of health facilities.Improve birth weight and infant mortality – one study found GiveDirectly’s program reduced child mortality by ~70% and improved child growth.Education & domestic violenceAgatha - “My husband was so abusive... so I left him and went back to try to finish school.”Studies find that cash can:Reduce incidents of physical abuse by a male partner of a woman – one study found GiveDirectly’s program reduced physical intimate partner violence.Increase school attendance for girls.Decision-making powerBeatrice - “My husband and I always argued. about how to spend what little money we had. Now, when we receive the money, we plan together.”Studies find that cash can:Increase a woman’s likelihood of being the sole or joint decision-maker.Entrepreneurship & savingsAnesi - “With the businesses I started, I want to buy land for my children so they will never forget me.”Studies find that cash can:Increase entrepreneurship – one study of GiveDirectly’s program found new business creation doubled. For more on female entrepreneurs, watchIncrease the number of families saving and the amount they saved – one study of GiveDirectly’s program found women doubled their savings. To learn about women's savings groups, watchElderly supportFaidesi - “Now that I am old, I can’t farm and often sleep hungry. I would have been dead if it wasn’t for these payments.”Studies find that cash can:Reduce the likelihood of having had an illness in the last three months – one study in Tanzania found cash reduced the number of doctor visits made by women over 60.Bastagli et al 2016Siddiqi et al 2018McIntosh & Zeitlin 2018Haushofer et al 2019McIntosh & Zeitlin 2020Pega et al 2017Evans et al 2014Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
GiveDirectly https://forum.effectivealtruism.org/posts/A9ExMYamqTycvFGAo/evidence-on-how-cash-transfers-empower-women-in-poverty Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evidence on how cash transfers empower women in poverty, published by GiveDirectly on March 8, 2023 on The Effective Altruism Forum.Donations to GiveDirectly put power in the hands of recipients, 62% of whom are women. On International Women’s Day, hear directly from women and girls in poverty in Malawi about the unique ways that direct cash empowers them:This impact is more than anecdotal; research finds that cash aid lets women improve their lives in many ways. Below, we break down the evidence by story.Maternal & infant healthLenita - “When I was pregnant, I would fall sick [and] could not afford the fare to go to the hospital.”Studies find that cash can:Increase the use of health facilities.Improve birth weight and infant mortality – one study found GiveDirectly’s program reduced child mortality by ~70% and improved child growth.Education & domestic violenceAgatha - “My husband was so abusive... so I left him and went back to try to finish school.”Studies find that cash can:Reduce incidents of physical abuse by a male partner of a woman – one study found GiveDirectly’s program reduced physical intimate partner violence.Increase school attendance for girls.Decision-making powerBeatrice - “My husband and I always argued. about how to spend what little money we had. Now, when we receive the money, we plan together.”Studies find that cash can:Increase a woman’s likelihood of being the sole or joint decision-maker.Entrepreneurship & savingsAnesi - “With the businesses I started, I want to buy land for my children so they will never forget me.”Studies find that cash can:Increase entrepreneurship – one study of GiveDirectly’s program found new business creation doubled. For more on female entrepreneurs, watchIncrease the number of families saving and the amount they saved – one study of GiveDirectly’s program found women doubled their savings. To learn about women's savings groups, watchElderly supportFaidesi - “Now that I am old, I can’t farm and often sleep hungry. I would have been dead if it wasn’t for these payments.”Studies find that cash can:Reduce the likelihood of having had an illness in the last three months – one study in Tanzania found cash reduced the number of doctor visits made by women over 60.Bastagli et al 2016Siddiqi et al 2018McIntosh & Zeitlin 2018Haushofer et al 2019McIntosh & Zeitlin 2020Pega et al 2017Evans et al 2014Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 08 Mar 2023 10:27:39 +0000 EA - Evidence on how cash transfers empower women in poverty by GiveDirectly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evidence on how cash transfers empower women in poverty, published by GiveDirectly on March 8, 2023 on The Effective Altruism Forum.Donations to GiveDirectly put power in the hands of recipients, 62% of whom are women. On International Women’s Day, hear directly from women and girls in poverty in Malawi about the unique ways that direct cash empowers them:This impact is more than anecdotal; research finds that cash aid lets women improve their lives in many ways. Below, we break down the evidence by story.Maternal & infant healthLenita - “When I was pregnant, I would fall sick [and] could not afford the fare to go to the hospital.”Studies find that cash can:Increase the use of health facilities.Improve birth weight and infant mortality – one study found GiveDirectly’s program reduced child mortality by ~70% and improved child growth.Education & domestic violenceAgatha - “My husband was so abusive... so I left him and went back to try to finish school.”Studies find that cash can:Reduce incidents of physical abuse by a male partner of a woman – one study found GiveDirectly’s program reduced physical intimate partner violence.Increase school attendance for girls.Decision-making powerBeatrice - “My husband and I always argued. about how to spend what little money we had. Now, when we receive the money, we plan together.”Studies find that cash can:Increase a woman’s likelihood of being the sole or joint decision-maker.Entrepreneurship & savingsAnesi - “With the businesses I started, I want to buy land for my children so they will never forget me.”Studies find that cash can:Increase entrepreneurship – one study of GiveDirectly’s program found new business creation doubled. For more on female entrepreneurs, watchIncrease the number of families saving and the amount they saved – one study of GiveDirectly’s program found women doubled their savings. To learn about women's savings groups, watchElderly supportFaidesi - “Now that I am old, I can’t farm and often sleep hungry. I would have been dead if it wasn’t for these payments.”Studies find that cash can:Reduce the likelihood of having had an illness in the last three months – one study in Tanzania found cash reduced the number of doctor visits made by women over 60.Bastagli et al 2016Siddiqi et al 2018McIntosh & Zeitlin 2018Haushofer et al 2019McIntosh & Zeitlin 2020Pega et al 2017Evans et al 2014Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evidence on how cash transfers empower women in poverty, published by GiveDirectly on March 8, 2023 on The Effective Altruism Forum.Donations to GiveDirectly put power in the hands of recipients, 62% of whom are women. On International Women’s Day, hear directly from women and girls in poverty in Malawi about the unique ways that direct cash empowers them:This impact is more than anecdotal; research finds that cash aid lets women improve their lives in many ways. Below, we break down the evidence by story.Maternal & infant healthLenita - “When I was pregnant, I would fall sick [and] could not afford the fare to go to the hospital.”Studies find that cash can:Increase the use of health facilities.Improve birth weight and infant mortality – one study found GiveDirectly’s program reduced child mortality by ~70% and improved child growth.Education & domestic violenceAgatha - “My husband was so abusive... so I left him and went back to try to finish school.”Studies find that cash can:Reduce incidents of physical abuse by a male partner of a woman – one study found GiveDirectly’s program reduced physical intimate partner violence.Increase school attendance for girls.Decision-making powerBeatrice - “My husband and I always argued. about how to spend what little money we had. Now, when we receive the money, we plan together.”Studies find that cash can:Increase a woman’s likelihood of being the sole or joint decision-maker.Entrepreneurship & savingsAnesi - “With the businesses I started, I want to buy land for my children so they will never forget me.”Studies find that cash can:Increase entrepreneurship – one study of GiveDirectly’s program found new business creation doubled. For more on female entrepreneurs, watchIncrease the number of families saving and the amount they saved – one study of GiveDirectly’s program found women doubled their savings. To learn about women's savings groups, watchElderly supportFaidesi - “Now that I am old, I can’t farm and often sleep hungry. I would have been dead if it wasn’t for these payments.”Studies find that cash can:Reduce the likelihood of having had an illness in the last three months – one study in Tanzania found cash reduced the number of doctor visits made by women over 60.Bastagli et al 2016Siddiqi et al 2018McIntosh & Zeitlin 2018Haushofer et al 2019McIntosh & Zeitlin 2020Pega et al 2017Evans et al 2014Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
GiveDirectly https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:44 None full 5143
EQPY63WzxmqQnWsmS_NL_EA_EA EA - Suggestion: A workable romantic non-escalation policy for EA community builders by Severin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggestion: A workable romantic non-escalation policy for EA community builders, published by Severin on March 8, 2023 on The Effective Altruism Forum.Last year, I attended an Authentic Leadership Training with the Authentic Relating org Authentic Revolution, and was course lead mentee for a second iteration.One thing that struck me about AuthRev's ways is their approach to policing romantic relationships between facilitators and participants, within a community whose personal/professional overlap is even stronger than EA’s.They have a romantic non-escalation policy that goes roughly like this:For three months after a retreat, and for one month after an evening event, facilitators are prohibited from engaging romantically, or even hinting at engaging romantically, with attendees. The only exception is when a particular attendee and the facilitator already dated beforehand.These numbers are drawn from experience: As some people have most of their social life within the community, longer timelines are so unworkable that the default is to just ignore them and do everything in secret. Shorter timelines, however, tend to be insufficiently effective for mitigating the problems this policy tries to address.Granted, Authentic Relating is a set of activities that is far more emotionally intense than what usually happens at EA events. However, I think there are some reasons for EA community builders to adhere to this policy anyway:Romance distracts from the cause. Attendees should focus on getting as much ea-related value as possible out of EA events, and we as organizers should focus on generating as much value as possible. Thinking about which hot community builder you can get with later distracts from that. And, thinking about which hot participant you can get with later on can lead to decisions way more costly than just lost opportunities to provide more value.None of us are as considerate and attuned in our private lives as when doing community building work. Sometimes we don't have the energy to listen well. Sometimes we really need to vent. Sometimes we are just bad at communication when we don't pay particular attention to choosing our words. The personas we put up at work just aren't real people. If people fall in love with the version of me that they see leading groups, they will inevitably be disappointed later.Power differentials make communication about consent difficult. And the organizer/attendee-separation creates a power differential, whether we like it or not. The more power differential there is, the more important it is to move very slowly and carefully in romance.Status is sexy. Predatorily-minded people know this. Thus, they are incentivized to climb the social EA ladder for the wrong reasons. If we set norms that make it harder for people to leverage their social status for romantic purposes, we can correct for this. That is, as long as our rules are not so harsh that they will just be ignored by default.Though a part of me finds this policy inconvenient, I think it would be a concerning sign if I weren’t ready to commit to it after I saw it’s value in practice. However, EA is different from AR, and a milder/different/more specified version might make more sense for us. Accordingly, I’ll let the idea simmer a bit before I fully commit.Which adjustments would you make for our context? Some specific questions I have:AR retreats are intensely facilitated experiences. During at least some types of EA retreats, the hierarchies are much flatter, and participants see the organizers "in function" only roughly as much as during an evening-long workshop. Does this justify shortening the three months, e.g. to one month no matter for which type of event?I'd expect that the same rule should apply for professional 1-on-1s, for example EA career coaching....]]>
Severin https://forum.effectivealtruism.org/posts/EQPY63WzxmqQnWsmS/suggestion-a-workable-romantic-non-escalation-policy-for-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggestion: A workable romantic non-escalation policy for EA community builders, published by Severin on March 8, 2023 on The Effective Altruism Forum.Last year, I attended an Authentic Leadership Training with the Authentic Relating org Authentic Revolution, and was course lead mentee for a second iteration.One thing that struck me about AuthRev's ways is their approach to policing romantic relationships between facilitators and participants, within a community whose personal/professional overlap is even stronger than EA’s.They have a romantic non-escalation policy that goes roughly like this:For three months after a retreat, and for one month after an evening event, facilitators are prohibited from engaging romantically, or even hinting at engaging romantically, with attendees. The only exception is when a particular attendee and the facilitator already dated beforehand.These numbers are drawn from experience: As some people have most of their social life within the community, longer timelines are so unworkable that the default is to just ignore them and do everything in secret. Shorter timelines, however, tend to be insufficiently effective for mitigating the problems this policy tries to address.Granted, Authentic Relating is a set of activities that is far more emotionally intense than what usually happens at EA events. However, I think there are some reasons for EA community builders to adhere to this policy anyway:Romance distracts from the cause. Attendees should focus on getting as much ea-related value as possible out of EA events, and we as organizers should focus on generating as much value as possible. Thinking about which hot community builder you can get with later distracts from that. And, thinking about which hot participant you can get with later on can lead to decisions way more costly than just lost opportunities to provide more value.None of us are as considerate and attuned in our private lives as when doing community building work. Sometimes we don't have the energy to listen well. Sometimes we really need to vent. Sometimes we are just bad at communication when we don't pay particular attention to choosing our words. The personas we put up at work just aren't real people. If people fall in love with the version of me that they see leading groups, they will inevitably be disappointed later.Power differentials make communication about consent difficult. And the organizer/attendee-separation creates a power differential, whether we like it or not. The more power differential there is, the more important it is to move very slowly and carefully in romance.Status is sexy. Predatorily-minded people know this. Thus, they are incentivized to climb the social EA ladder for the wrong reasons. If we set norms that make it harder for people to leverage their social status for romantic purposes, we can correct for this. That is, as long as our rules are not so harsh that they will just be ignored by default.Though a part of me finds this policy inconvenient, I think it would be a concerning sign if I weren’t ready to commit to it after I saw it’s value in practice. However, EA is different from AR, and a milder/different/more specified version might make more sense for us. Accordingly, I’ll let the idea simmer a bit before I fully commit.Which adjustments would you make for our context? Some specific questions I have:AR retreats are intensely facilitated experiences. During at least some types of EA retreats, the hierarchies are much flatter, and participants see the organizers "in function" only roughly as much as during an evening-long workshop. Does this justify shortening the three months, e.g. to one month no matter for which type of event?I'd expect that the same rule should apply for professional 1-on-1s, for example EA career coaching....]]>
Wed, 08 Mar 2023 04:48:31 +0000 EA - Suggestion: A workable romantic non-escalation policy for EA community builders by Severin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggestion: A workable romantic non-escalation policy for EA community builders, published by Severin on March 8, 2023 on The Effective Altruism Forum.Last year, I attended an Authentic Leadership Training with the Authentic Relating org Authentic Revolution, and was course lead mentee for a second iteration.One thing that struck me about AuthRev's ways is their approach to policing romantic relationships between facilitators and participants, within a community whose personal/professional overlap is even stronger than EA’s.They have a romantic non-escalation policy that goes roughly like this:For three months after a retreat, and for one month after an evening event, facilitators are prohibited from engaging romantically, or even hinting at engaging romantically, with attendees. The only exception is when a particular attendee and the facilitator already dated beforehand.These numbers are drawn from experience: As some people have most of their social life within the community, longer timelines are so unworkable that the default is to just ignore them and do everything in secret. Shorter timelines, however, tend to be insufficiently effective for mitigating the problems this policy tries to address.Granted, Authentic Relating is a set of activities that is far more emotionally intense than what usually happens at EA events. However, I think there are some reasons for EA community builders to adhere to this policy anyway:Romance distracts from the cause. Attendees should focus on getting as much ea-related value as possible out of EA events, and we as organizers should focus on generating as much value as possible. Thinking about which hot community builder you can get with later distracts from that. And, thinking about which hot participant you can get with later on can lead to decisions way more costly than just lost opportunities to provide more value.None of us are as considerate and attuned in our private lives as when doing community building work. Sometimes we don't have the energy to listen well. Sometimes we really need to vent. Sometimes we are just bad at communication when we don't pay particular attention to choosing our words. The personas we put up at work just aren't real people. If people fall in love with the version of me that they see leading groups, they will inevitably be disappointed later.Power differentials make communication about consent difficult. And the organizer/attendee-separation creates a power differential, whether we like it or not. The more power differential there is, the more important it is to move very slowly and carefully in romance.Status is sexy. Predatorily-minded people know this. Thus, they are incentivized to climb the social EA ladder for the wrong reasons. If we set norms that make it harder for people to leverage their social status for romantic purposes, we can correct for this. That is, as long as our rules are not so harsh that they will just be ignored by default.Though a part of me finds this policy inconvenient, I think it would be a concerning sign if I weren’t ready to commit to it after I saw it’s value in practice. However, EA is different from AR, and a milder/different/more specified version might make more sense for us. Accordingly, I’ll let the idea simmer a bit before I fully commit.Which adjustments would you make for our context? Some specific questions I have:AR retreats are intensely facilitated experiences. During at least some types of EA retreats, the hierarchies are much flatter, and participants see the organizers "in function" only roughly as much as during an evening-long workshop. Does this justify shortening the three months, e.g. to one month no matter for which type of event?I'd expect that the same rule should apply for professional 1-on-1s, for example EA career coaching....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggestion: A workable romantic non-escalation policy for EA community builders, published by Severin on March 8, 2023 on The Effective Altruism Forum.Last year, I attended an Authentic Leadership Training with the Authentic Relating org Authentic Revolution, and was course lead mentee for a second iteration.One thing that struck me about AuthRev's ways is their approach to policing romantic relationships between facilitators and participants, within a community whose personal/professional overlap is even stronger than EA’s.They have a romantic non-escalation policy that goes roughly like this:For three months after a retreat, and for one month after an evening event, facilitators are prohibited from engaging romantically, or even hinting at engaging romantically, with attendees. The only exception is when a particular attendee and the facilitator already dated beforehand.These numbers are drawn from experience: As some people have most of their social life within the community, longer timelines are so unworkable that the default is to just ignore them and do everything in secret. Shorter timelines, however, tend to be insufficiently effective for mitigating the problems this policy tries to address.Granted, Authentic Relating is a set of activities that is far more emotionally intense than what usually happens at EA events. However, I think there are some reasons for EA community builders to adhere to this policy anyway:Romance distracts from the cause. Attendees should focus on getting as much ea-related value as possible out of EA events, and we as organizers should focus on generating as much value as possible. Thinking about which hot community builder you can get with later distracts from that. And, thinking about which hot participant you can get with later on can lead to decisions way more costly than just lost opportunities to provide more value.None of us are as considerate and attuned in our private lives as when doing community building work. Sometimes we don't have the energy to listen well. Sometimes we really need to vent. Sometimes we are just bad at communication when we don't pay particular attention to choosing our words. The personas we put up at work just aren't real people. If people fall in love with the version of me that they see leading groups, they will inevitably be disappointed later.Power differentials make communication about consent difficult. And the organizer/attendee-separation creates a power differential, whether we like it or not. The more power differential there is, the more important it is to move very slowly and carefully in romance.Status is sexy. Predatorily-minded people know this. Thus, they are incentivized to climb the social EA ladder for the wrong reasons. If we set norms that make it harder for people to leverage their social status for romantic purposes, we can correct for this. That is, as long as our rules are not so harsh that they will just be ignored by default.Though a part of me finds this policy inconvenient, I think it would be a concerning sign if I weren’t ready to commit to it after I saw it’s value in practice. However, EA is different from AR, and a milder/different/more specified version might make more sense for us. Accordingly, I’ll let the idea simmer a bit before I fully commit.Which adjustments would you make for our context? Some specific questions I have:AR retreats are intensely facilitated experiences. During at least some types of EA retreats, the hierarchies are much flatter, and participants see the organizers "in function" only roughly as much as during an evening-long workshop. Does this justify shortening the three months, e.g. to one month no matter for which type of event?I'd expect that the same rule should apply for professional 1-on-1s, for example EA career coaching....]]>
Severin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:32 None full 5144
hGdsgaRiF2zH3vX5M_NL_EA_EA EA - Winners of the Squiggle Experimentation and 80,000 Hours Quantification Challenges by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winners of the Squiggle Experimentation and 80,000 Hours Quantification Challenges, published by NunoSempere on March 8, 2023 on The Effective Altruism Forum.In the second half of 2022, we of QURI announced the Squiggle Experimentation Challenge and a $5k challenge to quantify the impact of 80,000 hours' top career paths. For the first contest, we got three long entries. For the second, we got five, but most were fairly short. This post presents the winners.Squiggle Experimentation ChallengeObjectivesFrom the announcement post:[Our] team at QURI has recently released Squiggle, a very new and experimental programming language for probabilistic estimation. We’re curious about what promising use cases it could enable, and we are launching a prize to incentivize people to find this out.Top EntriesTanae adds uncertainty estimates to each step in GiveWell’s estimate for AMF in the Democratic Republic of Congo, and ends up with this endline estimate for lives saved (though not other effects):Dan creates a probabilistic estimate for the effectiveness of the Lead Exposure Elimination Project in Malawi. In the process, he gives some helpful, specific improvements we could make to Squiggle. In particular, his feedback motivated us to make Squiggle faster, first from part of his model not being able to run, then to his model running in 2 mins, then in 3 to 7 seconds.Erich creates a Squiggle model to estimate the number of future EA billionaires. His estimate looks like this:That is, he is giving a 5-10% probability of negative billionaire growth, i.e., of losing a billionaire, as, in fact, happened. In hindsight, this seems like a neat example of quantification capturing some relevant tail risk.Perhaps if people had looked to this estimate when making decisions about earning to give or personal budgeting decisions in light of FTX’s largesse, they might have made better decisions. But it wasn’t the case that this particular estimate was incorporated into the way that people made choices. Rather my impression is that it was posted in the EA Forum and then forgotten about. Perhaps it would have required more work and vetting to make it useful.ResultsEntryAdding Quantified Uncertainty to GiveWell's Cost Effectiveness Analysis of the Against Malaria FoundationCEA LEEP MalawiHow many EA Billionaires five years from now?Estimated relative value(normalized to 100%)Prize67%$60026%$3007%$100Judges were Ozzie Gooen, Quinn Dougherty, and Nuño Sempere. You can see our estimates here. Note that per the contest rules, we judged these prizes before October 1, 2022—so before the downfall of FTX, and winners received their prizes shortly thereafter. Previously I mentioned the results in this edition of the Forecasting Newsletter.$5k challenge to quantify the impact of 80,000 hours' top career pathsObjectivesWith this post, we hoped to elicit estimates that could be built upon to estimate the value of 80,000 hours’ top 10 career paths. We were also curious about whether participants would use Squiggle or other tools when given free rein to choose their tools.EntriesVasco Grillo looks at the cost-effectiveness of operations, first looking at various ways of estimating the impact of the EA community and then sending a brief survey to various organizations about the “multiplier” of operations work, which is, roughly, the ratio of the cost-effectiveness of one marginal hour of operations work to the cost-effectiveness of one marginal hour of their direct work. He ends up with a pretty high estimate for that multiplier, of between ~4.5 and ~13.@10xrational gives fairly granular estimates of the value of various community-building activities in terms of first-order effects of more engaged EAs, and second-order effects of more donations to effective charities and more people ...]]>
NunoSempere https://forum.effectivealtruism.org/posts/hGdsgaRiF2zH3vX5M/winners-of-the-squiggle-experimentation-and-80-000-hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winners of the Squiggle Experimentation and 80,000 Hours Quantification Challenges, published by NunoSempere on March 8, 2023 on The Effective Altruism Forum.In the second half of 2022, we of QURI announced the Squiggle Experimentation Challenge and a $5k challenge to quantify the impact of 80,000 hours' top career paths. For the first contest, we got three long entries. For the second, we got five, but most were fairly short. This post presents the winners.Squiggle Experimentation ChallengeObjectivesFrom the announcement post:[Our] team at QURI has recently released Squiggle, a very new and experimental programming language for probabilistic estimation. We’re curious about what promising use cases it could enable, and we are launching a prize to incentivize people to find this out.Top EntriesTanae adds uncertainty estimates to each step in GiveWell’s estimate for AMF in the Democratic Republic of Congo, and ends up with this endline estimate for lives saved (though not other effects):Dan creates a probabilistic estimate for the effectiveness of the Lead Exposure Elimination Project in Malawi. In the process, he gives some helpful, specific improvements we could make to Squiggle. In particular, his feedback motivated us to make Squiggle faster, first from part of his model not being able to run, then to his model running in 2 mins, then in 3 to 7 seconds.Erich creates a Squiggle model to estimate the number of future EA billionaires. His estimate looks like this:That is, he is giving a 5-10% probability of negative billionaire growth, i.e., of losing a billionaire, as, in fact, happened. In hindsight, this seems like a neat example of quantification capturing some relevant tail risk.Perhaps if people had looked to this estimate when making decisions about earning to give or personal budgeting decisions in light of FTX’s largesse, they might have made better decisions. But it wasn’t the case that this particular estimate was incorporated into the way that people made choices. Rather my impression is that it was posted in the EA Forum and then forgotten about. Perhaps it would have required more work and vetting to make it useful.ResultsEntryAdding Quantified Uncertainty to GiveWell's Cost Effectiveness Analysis of the Against Malaria FoundationCEA LEEP MalawiHow many EA Billionaires five years from now?Estimated relative value(normalized to 100%)Prize67%$60026%$3007%$100Judges were Ozzie Gooen, Quinn Dougherty, and Nuño Sempere. You can see our estimates here. Note that per the contest rules, we judged these prizes before October 1, 2022—so before the downfall of FTX, and winners received their prizes shortly thereafter. Previously I mentioned the results in this edition of the Forecasting Newsletter.$5k challenge to quantify the impact of 80,000 hours' top career pathsObjectivesWith this post, we hoped to elicit estimates that could be built upon to estimate the value of 80,000 hours’ top 10 career paths. We were also curious about whether participants would use Squiggle or other tools when given free rein to choose their tools.EntriesVasco Grillo looks at the cost-effectiveness of operations, first looking at various ways of estimating the impact of the EA community and then sending a brief survey to various organizations about the “multiplier” of operations work, which is, roughly, the ratio of the cost-effectiveness of one marginal hour of operations work to the cost-effectiveness of one marginal hour of their direct work. He ends up with a pretty high estimate for that multiplier, of between ~4.5 and ~13.@10xrational gives fairly granular estimates of the value of various community-building activities in terms of first-order effects of more engaged EAs, and second-order effects of more donations to effective charities and more people ...]]>
Wed, 08 Mar 2023 01:51:28 +0000 EA - Winners of the Squiggle Experimentation and 80,000 Hours Quantification Challenges by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winners of the Squiggle Experimentation and 80,000 Hours Quantification Challenges, published by NunoSempere on March 8, 2023 on The Effective Altruism Forum.In the second half of 2022, we of QURI announced the Squiggle Experimentation Challenge and a $5k challenge to quantify the impact of 80,000 hours' top career paths. For the first contest, we got three long entries. For the second, we got five, but most were fairly short. This post presents the winners.Squiggle Experimentation ChallengeObjectivesFrom the announcement post:[Our] team at QURI has recently released Squiggle, a very new and experimental programming language for probabilistic estimation. We’re curious about what promising use cases it could enable, and we are launching a prize to incentivize people to find this out.Top EntriesTanae adds uncertainty estimates to each step in GiveWell’s estimate for AMF in the Democratic Republic of Congo, and ends up with this endline estimate for lives saved (though not other effects):Dan creates a probabilistic estimate for the effectiveness of the Lead Exposure Elimination Project in Malawi. In the process, he gives some helpful, specific improvements we could make to Squiggle. In particular, his feedback motivated us to make Squiggle faster, first from part of his model not being able to run, then to his model running in 2 mins, then in 3 to 7 seconds.Erich creates a Squiggle model to estimate the number of future EA billionaires. His estimate looks like this:That is, he is giving a 5-10% probability of negative billionaire growth, i.e., of losing a billionaire, as, in fact, happened. In hindsight, this seems like a neat example of quantification capturing some relevant tail risk.Perhaps if people had looked to this estimate when making decisions about earning to give or personal budgeting decisions in light of FTX’s largesse, they might have made better decisions. But it wasn’t the case that this particular estimate was incorporated into the way that people made choices. Rather my impression is that it was posted in the EA Forum and then forgotten about. Perhaps it would have required more work and vetting to make it useful.ResultsEntryAdding Quantified Uncertainty to GiveWell's Cost Effectiveness Analysis of the Against Malaria FoundationCEA LEEP MalawiHow many EA Billionaires five years from now?Estimated relative value(normalized to 100%)Prize67%$60026%$3007%$100Judges were Ozzie Gooen, Quinn Dougherty, and Nuño Sempere. You can see our estimates here. Note that per the contest rules, we judged these prizes before October 1, 2022—so before the downfall of FTX, and winners received their prizes shortly thereafter. Previously I mentioned the results in this edition of the Forecasting Newsletter.$5k challenge to quantify the impact of 80,000 hours' top career pathsObjectivesWith this post, we hoped to elicit estimates that could be built upon to estimate the value of 80,000 hours’ top 10 career paths. We were also curious about whether participants would use Squiggle or other tools when given free rein to choose their tools.EntriesVasco Grillo looks at the cost-effectiveness of operations, first looking at various ways of estimating the impact of the EA community and then sending a brief survey to various organizations about the “multiplier” of operations work, which is, roughly, the ratio of the cost-effectiveness of one marginal hour of operations work to the cost-effectiveness of one marginal hour of their direct work. He ends up with a pretty high estimate for that multiplier, of between ~4.5 and ~13.@10xrational gives fairly granular estimates of the value of various community-building activities in terms of first-order effects of more engaged EAs, and second-order effects of more donations to effective charities and more people ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winners of the Squiggle Experimentation and 80,000 Hours Quantification Challenges, published by NunoSempere on March 8, 2023 on The Effective Altruism Forum.In the second half of 2022, we of QURI announced the Squiggle Experimentation Challenge and a $5k challenge to quantify the impact of 80,000 hours' top career paths. For the first contest, we got three long entries. For the second, we got five, but most were fairly short. This post presents the winners.Squiggle Experimentation ChallengeObjectivesFrom the announcement post:[Our] team at QURI has recently released Squiggle, a very new and experimental programming language for probabilistic estimation. We’re curious about what promising use cases it could enable, and we are launching a prize to incentivize people to find this out.Top EntriesTanae adds uncertainty estimates to each step in GiveWell’s estimate for AMF in the Democratic Republic of Congo, and ends up with this endline estimate for lives saved (though not other effects):Dan creates a probabilistic estimate for the effectiveness of the Lead Exposure Elimination Project in Malawi. In the process, he gives some helpful, specific improvements we could make to Squiggle. In particular, his feedback motivated us to make Squiggle faster, first from part of his model not being able to run, then to his model running in 2 mins, then in 3 to 7 seconds.Erich creates a Squiggle model to estimate the number of future EA billionaires. His estimate looks like this:That is, he is giving a 5-10% probability of negative billionaire growth, i.e., of losing a billionaire, as, in fact, happened. In hindsight, this seems like a neat example of quantification capturing some relevant tail risk.Perhaps if people had looked to this estimate when making decisions about earning to give or personal budgeting decisions in light of FTX’s largesse, they might have made better decisions. But it wasn’t the case that this particular estimate was incorporated into the way that people made choices. Rather my impression is that it was posted in the EA Forum and then forgotten about. Perhaps it would have required more work and vetting to make it useful.ResultsEntryAdding Quantified Uncertainty to GiveWell's Cost Effectiveness Analysis of the Against Malaria FoundationCEA LEEP MalawiHow many EA Billionaires five years from now?Estimated relative value(normalized to 100%)Prize67%$60026%$3007%$100Judges were Ozzie Gooen, Quinn Dougherty, and Nuño Sempere. You can see our estimates here. Note that per the contest rules, we judged these prizes before October 1, 2022—so before the downfall of FTX, and winners received their prizes shortly thereafter. Previously I mentioned the results in this edition of the Forecasting Newsletter.$5k challenge to quantify the impact of 80,000 hours' top career pathsObjectivesWith this post, we hoped to elicit estimates that could be built upon to estimate the value of 80,000 hours’ top 10 career paths. We were also curious about whether participants would use Squiggle or other tools when given free rein to choose their tools.EntriesVasco Grillo looks at the cost-effectiveness of operations, first looking at various ways of estimating the impact of the EA community and then sending a brief survey to various organizations about the “multiplier” of operations work, which is, roughly, the ratio of the cost-effectiveness of one marginal hour of operations work to the cost-effectiveness of one marginal hour of their direct work. He ends up with a pretty high estimate for that multiplier, of between ~4.5 and ~13.@10xrational gives fairly granular estimates of the value of various community-building activities in terms of first-order effects of more engaged EAs, and second-order effects of more donations to effective charities and more people ...]]>
NunoSempere https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:57 None full 5145
XQDS5F4pRRxBododx_NL_EA_EA EA - Redirecting private foundation grants to effective charities by Kyle Smith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Redirecting private foundation grants to effective charities, published by Kyle Smith on March 6, 2023 on The Effective Altruism Forum.Project Idea:While completing the EA intro course, I was thinking about how private foundations give $60b+ a year, largely to ineffective charities. I was wondering if that may present an opportunity for a small organization that works to redirect PF grants to effective charities.I see two potential angles of attack:Lobby/consult with PF on making effective grants. Givewell does the hard job of evaluating charities, but a more boutique solution could be useful to private foundations.I have a large dataset of electronically filed 990-PFs, and I thought it may be useful to try to identify PF that are more likely to be persuaded by this sort of lobbying. For example, foundations that are younger, already give to international charities, and give to a large number of charities (there’s a lot of interesting criteria that could be used). A list could be generated for PF that are more likely to redirect funds which could be targeted.Target grantmakers by offering training, attending conferences, etc. on effective grantmaking. (maybe some other EA aligned org is doing this?)Givewell says they have directed ~$1b in effective gifts since 2011. Even if only a small number of foundations could be persuaded, the total dollars driven could be pretty large. And for a pretty small investment I think.Short introduction: My name is Kyle Smith, I am an assistant professor of accounting at Mississippi State University. My research is mostly on how donors use accounting reports in their giving decisions. I have done some archival research examining how private foundations use accounting information, and am starting up a qualitative study where we are going to interview grantmakers to understand how they use accounting information in their grantmaking process.Does anyone know of any orgs/people specifically working on this problem?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Kyle Smith https://forum.effectivealtruism.org/posts/XQDS5F4pRRxBododx/redirecting-private-foundation-grants-to-effective-charities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Redirecting private foundation grants to effective charities, published by Kyle Smith on March 6, 2023 on The Effective Altruism Forum.Project Idea:While completing the EA intro course, I was thinking about how private foundations give $60b+ a year, largely to ineffective charities. I was wondering if that may present an opportunity for a small organization that works to redirect PF grants to effective charities.I see two potential angles of attack:Lobby/consult with PF on making effective grants. Givewell does the hard job of evaluating charities, but a more boutique solution could be useful to private foundations.I have a large dataset of electronically filed 990-PFs, and I thought it may be useful to try to identify PF that are more likely to be persuaded by this sort of lobbying. For example, foundations that are younger, already give to international charities, and give to a large number of charities (there’s a lot of interesting criteria that could be used). A list could be generated for PF that are more likely to redirect funds which could be targeted.Target grantmakers by offering training, attending conferences, etc. on effective grantmaking. (maybe some other EA aligned org is doing this?)Givewell says they have directed ~$1b in effective gifts since 2011. Even if only a small number of foundations could be persuaded, the total dollars driven could be pretty large. And for a pretty small investment I think.Short introduction: My name is Kyle Smith, I am an assistant professor of accounting at Mississippi State University. My research is mostly on how donors use accounting reports in their giving decisions. I have done some archival research examining how private foundations use accounting information, and am starting up a qualitative study where we are going to interview grantmakers to understand how they use accounting information in their grantmaking process.Does anyone know of any orgs/people specifically working on this problem?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 07 Mar 2023 22:55:32 +0000 EA - Redirecting private foundation grants to effective charities by Kyle Smith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Redirecting private foundation grants to effective charities, published by Kyle Smith on March 6, 2023 on The Effective Altruism Forum.Project Idea:While completing the EA intro course, I was thinking about how private foundations give $60b+ a year, largely to ineffective charities. I was wondering if that may present an opportunity for a small organization that works to redirect PF grants to effective charities.I see two potential angles of attack:Lobby/consult with PF on making effective grants. Givewell does the hard job of evaluating charities, but a more boutique solution could be useful to private foundations.I have a large dataset of electronically filed 990-PFs, and I thought it may be useful to try to identify PF that are more likely to be persuaded by this sort of lobbying. For example, foundations that are younger, already give to international charities, and give to a large number of charities (there’s a lot of interesting criteria that could be used). A list could be generated for PF that are more likely to redirect funds which could be targeted.Target grantmakers by offering training, attending conferences, etc. on effective grantmaking. (maybe some other EA aligned org is doing this?)Givewell says they have directed ~$1b in effective gifts since 2011. Even if only a small number of foundations could be persuaded, the total dollars driven could be pretty large. And for a pretty small investment I think.Short introduction: My name is Kyle Smith, I am an assistant professor of accounting at Mississippi State University. My research is mostly on how donors use accounting reports in their giving decisions. I have done some archival research examining how private foundations use accounting information, and am starting up a qualitative study where we are going to interview grantmakers to understand how they use accounting information in their grantmaking process.Does anyone know of any orgs/people specifically working on this problem?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Redirecting private foundation grants to effective charities, published by Kyle Smith on March 6, 2023 on The Effective Altruism Forum.Project Idea:While completing the EA intro course, I was thinking about how private foundations give $60b+ a year, largely to ineffective charities. I was wondering if that may present an opportunity for a small organization that works to redirect PF grants to effective charities.I see two potential angles of attack:Lobby/consult with PF on making effective grants. Givewell does the hard job of evaluating charities, but a more boutique solution could be useful to private foundations.I have a large dataset of electronically filed 990-PFs, and I thought it may be useful to try to identify PF that are more likely to be persuaded by this sort of lobbying. For example, foundations that are younger, already give to international charities, and give to a large number of charities (there’s a lot of interesting criteria that could be used). A list could be generated for PF that are more likely to redirect funds which could be targeted.Target grantmakers by offering training, attending conferences, etc. on effective grantmaking. (maybe some other EA aligned org is doing this?)Givewell says they have directed ~$1b in effective gifts since 2011. Even if only a small number of foundations could be persuaded, the total dollars driven could be pretty large. And for a pretty small investment I think.Short introduction: My name is Kyle Smith, I am an assistant professor of accounting at Mississippi State University. My research is mostly on how donors use accounting reports in their giving decisions. I have done some archival research examining how private foundations use accounting information, and am starting up a qualitative study where we are going to interview grantmakers to understand how they use accounting information in their grantmaking process.Does anyone know of any orgs/people specifically working on this problem?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Kyle Smith https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:06 None full 5135
7b9ZDTAYQY9k6FZHS_NL_EA_EA EA - Abuse in LessWrong and rationalist communities in Bloomberg News by whistleblower67 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Abuse in LessWrong and rationalist communities in Bloomberg News, published by whistleblower67 on March 7, 2023 on The Effective Altruism Forum.This is a linkpost for#xj4y7vzkgTry non-paywalled link here.Silicon Valley’s Obsession With Killer Rogue AI Helps Bury Bad BehaviorSonia Joseph was 14 years old when she first read Harry Potter and the Methods of Rationality, a mega-popular piece of fan fiction that reimagines the boy wizard as a rigid empiricist. This rational Potter tests his professors’ spells with the scientific method, scoffs at any inconsistencies he finds, and solves all of wizardkind’s problems before he turns 12. “I loved it,” says Joseph, who read HPMOR four times in her teens. She was a neurodivergent, ambitious Indian American who felt out of place in her suburban Massachusetts high school. The story, she says, “very much appeals to smart outsiders.”A search for other writing by the fanfic’s author, Eliezer Yudkowsky, opened more doors for Joseph. Since the early 2000s, Yudkowsky has argued that hostile artificial intelligence could destroy humanity within decades. This driving belief has made him an intellectual godfather in a community of people who call themselves rationalists and aim to keep their thinking unbiased, even when the conclusions are scary. Joseph’s budding interest in rationalism also drew her toward effective altruism, a related moral philosophy that’s become infamous by its association with the disgraced crypto ex-billionaire Sam Bankman-Fried. At its core, effective altruism stresses the use of rational thinking to make a maximally efficient positive impact on the world. These distinct but overlapping groups developed in online forums, where posts about the dangers of AI became common.But they also clustered in the Bay Area, where they began sketching out a field of study called AI safety, an effort to make machines less likely to kill us all.Joseph moved to the Bay Area to work in AI research shortly after getting her undergraduate degree in neuroscience in 2019. There, she realized the social scene that seemed so sprawling online was far more tight-knit in person. Many rationalists and effective altruists, who call themselves EAs, worked together, invested in one another’s companies, lived in communal houses and socialized mainly with each other, sometimes in a web of polyamorous relationships. Throughout the community, almost everyone celebrated being, in some way, unconventional. Joseph found it all freeing and exciting, like winding up at a real-life rationalist Hogwarts. Together, she and her peers were working on the problems she found the most fascinating, with the rather grand aim of preventing human extinction.At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.“In an ideal world, the community would have had some serious discussions about sexual assault policy and education: ‘What are our blind spots? How could this have happened? How can we design mechanisms to pr...]]>
whistleblower67 https://forum.effectivealtruism.org/posts/7b9ZDTAYQY9k6FZHS/abuse-in-lesswrong-and-rationalist-communities-in-bloomberg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Abuse in LessWrong and rationalist communities in Bloomberg News, published by whistleblower67 on March 7, 2023 on The Effective Altruism Forum.This is a linkpost for#xj4y7vzkgTry non-paywalled link here.Silicon Valley’s Obsession With Killer Rogue AI Helps Bury Bad BehaviorSonia Joseph was 14 years old when she first read Harry Potter and the Methods of Rationality, a mega-popular piece of fan fiction that reimagines the boy wizard as a rigid empiricist. This rational Potter tests his professors’ spells with the scientific method, scoffs at any inconsistencies he finds, and solves all of wizardkind’s problems before he turns 12. “I loved it,” says Joseph, who read HPMOR four times in her teens. She was a neurodivergent, ambitious Indian American who felt out of place in her suburban Massachusetts high school. The story, she says, “very much appeals to smart outsiders.”A search for other writing by the fanfic’s author, Eliezer Yudkowsky, opened more doors for Joseph. Since the early 2000s, Yudkowsky has argued that hostile artificial intelligence could destroy humanity within decades. This driving belief has made him an intellectual godfather in a community of people who call themselves rationalists and aim to keep their thinking unbiased, even when the conclusions are scary. Joseph’s budding interest in rationalism also drew her toward effective altruism, a related moral philosophy that’s become infamous by its association with the disgraced crypto ex-billionaire Sam Bankman-Fried. At its core, effective altruism stresses the use of rational thinking to make a maximally efficient positive impact on the world. These distinct but overlapping groups developed in online forums, where posts about the dangers of AI became common.But they also clustered in the Bay Area, where they began sketching out a field of study called AI safety, an effort to make machines less likely to kill us all.Joseph moved to the Bay Area to work in AI research shortly after getting her undergraduate degree in neuroscience in 2019. There, she realized the social scene that seemed so sprawling online was far more tight-knit in person. Many rationalists and effective altruists, who call themselves EAs, worked together, invested in one another’s companies, lived in communal houses and socialized mainly with each other, sometimes in a web of polyamorous relationships. Throughout the community, almost everyone celebrated being, in some way, unconventional. Joseph found it all freeing and exciting, like winding up at a real-life rationalist Hogwarts. Together, she and her peers were working on the problems she found the most fascinating, with the rather grand aim of preventing human extinction.At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.“In an ideal world, the community would have had some serious discussions about sexual assault policy and education: ‘What are our blind spots? How could this have happened? How can we design mechanisms to pr...]]>
Tue, 07 Mar 2023 21:43:20 +0000 EA - Abuse in LessWrong and rationalist communities in Bloomberg News by whistleblower67 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Abuse in LessWrong and rationalist communities in Bloomberg News, published by whistleblower67 on March 7, 2023 on The Effective Altruism Forum.This is a linkpost for#xj4y7vzkgTry non-paywalled link here.Silicon Valley’s Obsession With Killer Rogue AI Helps Bury Bad BehaviorSonia Joseph was 14 years old when she first read Harry Potter and the Methods of Rationality, a mega-popular piece of fan fiction that reimagines the boy wizard as a rigid empiricist. This rational Potter tests his professors’ spells with the scientific method, scoffs at any inconsistencies he finds, and solves all of wizardkind’s problems before he turns 12. “I loved it,” says Joseph, who read HPMOR four times in her teens. She was a neurodivergent, ambitious Indian American who felt out of place in her suburban Massachusetts high school. The story, she says, “very much appeals to smart outsiders.”A search for other writing by the fanfic’s author, Eliezer Yudkowsky, opened more doors for Joseph. Since the early 2000s, Yudkowsky has argued that hostile artificial intelligence could destroy humanity within decades. This driving belief has made him an intellectual godfather in a community of people who call themselves rationalists and aim to keep their thinking unbiased, even when the conclusions are scary. Joseph’s budding interest in rationalism also drew her toward effective altruism, a related moral philosophy that’s become infamous by its association with the disgraced crypto ex-billionaire Sam Bankman-Fried. At its core, effective altruism stresses the use of rational thinking to make a maximally efficient positive impact on the world. These distinct but overlapping groups developed in online forums, where posts about the dangers of AI became common.But they also clustered in the Bay Area, where they began sketching out a field of study called AI safety, an effort to make machines less likely to kill us all.Joseph moved to the Bay Area to work in AI research shortly after getting her undergraduate degree in neuroscience in 2019. There, she realized the social scene that seemed so sprawling online was far more tight-knit in person. Many rationalists and effective altruists, who call themselves EAs, worked together, invested in one another’s companies, lived in communal houses and socialized mainly with each other, sometimes in a web of polyamorous relationships. Throughout the community, almost everyone celebrated being, in some way, unconventional. Joseph found it all freeing and exciting, like winding up at a real-life rationalist Hogwarts. Together, she and her peers were working on the problems she found the most fascinating, with the rather grand aim of preventing human extinction.At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.“In an ideal world, the community would have had some serious discussions about sexual assault policy and education: ‘What are our blind spots? How could this have happened? How can we design mechanisms to pr...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Abuse in LessWrong and rationalist communities in Bloomberg News, published by whistleblower67 on March 7, 2023 on The Effective Altruism Forum.This is a linkpost for#xj4y7vzkgTry non-paywalled link here.Silicon Valley’s Obsession With Killer Rogue AI Helps Bury Bad BehaviorSonia Joseph was 14 years old when she first read Harry Potter and the Methods of Rationality, a mega-popular piece of fan fiction that reimagines the boy wizard as a rigid empiricist. This rational Potter tests his professors’ spells with the scientific method, scoffs at any inconsistencies he finds, and solves all of wizardkind’s problems before he turns 12. “I loved it,” says Joseph, who read HPMOR four times in her teens. She was a neurodivergent, ambitious Indian American who felt out of place in her suburban Massachusetts high school. The story, she says, “very much appeals to smart outsiders.”A search for other writing by the fanfic’s author, Eliezer Yudkowsky, opened more doors for Joseph. Since the early 2000s, Yudkowsky has argued that hostile artificial intelligence could destroy humanity within decades. This driving belief has made him an intellectual godfather in a community of people who call themselves rationalists and aim to keep their thinking unbiased, even when the conclusions are scary. Joseph’s budding interest in rationalism also drew her toward effective altruism, a related moral philosophy that’s become infamous by its association with the disgraced crypto ex-billionaire Sam Bankman-Fried. At its core, effective altruism stresses the use of rational thinking to make a maximally efficient positive impact on the world. These distinct but overlapping groups developed in online forums, where posts about the dangers of AI became common.But they also clustered in the Bay Area, where they began sketching out a field of study called AI safety, an effort to make machines less likely to kill us all.Joseph moved to the Bay Area to work in AI research shortly after getting her undergraduate degree in neuroscience in 2019. There, she realized the social scene that seemed so sprawling online was far more tight-knit in person. Many rationalists and effective altruists, who call themselves EAs, worked together, invested in one another’s companies, lived in communal houses and socialized mainly with each other, sometimes in a web of polyamorous relationships. Throughout the community, almost everyone celebrated being, in some way, unconventional. Joseph found it all freeing and exciting, like winding up at a real-life rationalist Hogwarts. Together, she and her peers were working on the problems she found the most fascinating, with the rather grand aim of preventing human extinction.At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.“In an ideal world, the community would have had some serious discussions about sexual assault policy and education: ‘What are our blind spots? How could this have happened? How can we design mechanisms to pr...]]>
whistleblower67 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 25:22 None full 5131
MPmFgaJCjpzm742vD_NL_EA_EA EA - Masterdocs of EA community building guides and resources by Irene H Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Masterdocs of EA community building guides and resources, published by Irene H on March 7, 2023 on The Effective Altruism Forum.TLDR: I made a comprehensive overview of EA curricula, event organization guides, and syllabi, as well as an overview of resources on EA community building, communications, strategy, and more. The EA community builders I shared them with up to now found them really helpful.ContextTogether with Jelle Donders, I co-founded the university group at Eindhoven University of Technology in the Netherlands last summer. We followed the UGAP mentorship program last semester and have been thinking a lot about events and programs to organize for our EA group and about general EA community-building strategies. There is a big maze of Google docs containing resources on this, but none of them gives a complete and updated overview.I wanted to share two resources for EA community builders I’ve been working on over the past months. Both I made initially as references for myself, but when I shared them with other community builders, they found them quite helpful. Therefore, I’d now like to share them more widely, so that others can hopefully have the same benefits.EA Eindhoven Syllabi CollectionThere are many lists of EA curricula, event organization guides, and syllabi, but none of them are complete. Therefore, I made a document to which I save everything of that nature I come across, with the aim of getting a somewhat better overview of everything out thereI also went through other lists of this nature and saved all relevant documents to this collection, so it should be a one-stop shop. It is currently 27 pages long and I don’t know of another list that is more exhaustive. (Also compared to the EA Groups Resource Centre, which only offers a few curated resources per topic). I update this document regularly when I come across new resources.When we want to organize something new at my group, we have a look at this document to see whether someone else has done the thing we want to do already so we can save time, or just to get some inspiration.You can find the document here.Community Building ReadingsI also made a document that contains a lot of resources on EA community building, communications, strategy, and more, related to the EA movement as a whole and to EA groups specifically, that are not specific guides for organizing concrete events, programs, or campaigns, but are aimed at getting a better understanding of more general thinking, strategy and criticism of the EA community.You can find the document here.Disclaimers for both documentsI do not necessarily endorse/recommend the resources and advice in these documents. My sole aim with these documents is to provide an overview of the space of the thinking and resources around EA community building, not to advocate for one particular way of going about it.These documents are probably really overwhelming, but my aim was to gather a comprehensive overview of all resources, as opposed to linking only 1 or 2 recommendations, which is the way the Groups Resources Centre or the GCP EA Student Groups Handbook are organized.The way I sorted things into categories will always remain artificial as some boundaries are blurry and some things fit into multiple categories.How to use these documentsUsing the table of contents or Ctrl + F + [what you’re looking for] probably works best for navigationPlease feel free to place comments and make suggestions if you have additions!When you add something new, please add a source (name of the group and/or person who made the resource) wherever possible to give people the credit they’re due and to facilitate others reaching out to the creator if they have more questions.In case of questions, feedback or comments, please reach out to info@eaeindhoven.nl.I hope ...]]>
Irene H https://forum.effectivealtruism.org/posts/MPmFgaJCjpzm742vD/masterdocs-of-ea-community-building-guides-and-resources Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Masterdocs of EA community building guides and resources, published by Irene H on March 7, 2023 on The Effective Altruism Forum.TLDR: I made a comprehensive overview of EA curricula, event organization guides, and syllabi, as well as an overview of resources on EA community building, communications, strategy, and more. The EA community builders I shared them with up to now found them really helpful.ContextTogether with Jelle Donders, I co-founded the university group at Eindhoven University of Technology in the Netherlands last summer. We followed the UGAP mentorship program last semester and have been thinking a lot about events and programs to organize for our EA group and about general EA community-building strategies. There is a big maze of Google docs containing resources on this, but none of them gives a complete and updated overview.I wanted to share two resources for EA community builders I’ve been working on over the past months. Both I made initially as references for myself, but when I shared them with other community builders, they found them quite helpful. Therefore, I’d now like to share them more widely, so that others can hopefully have the same benefits.EA Eindhoven Syllabi CollectionThere are many lists of EA curricula, event organization guides, and syllabi, but none of them are complete. Therefore, I made a document to which I save everything of that nature I come across, with the aim of getting a somewhat better overview of everything out thereI also went through other lists of this nature and saved all relevant documents to this collection, so it should be a one-stop shop. It is currently 27 pages long and I don’t know of another list that is more exhaustive. (Also compared to the EA Groups Resource Centre, which only offers a few curated resources per topic). I update this document regularly when I come across new resources.When we want to organize something new at my group, we have a look at this document to see whether someone else has done the thing we want to do already so we can save time, or just to get some inspiration.You can find the document here.Community Building ReadingsI also made a document that contains a lot of resources on EA community building, communications, strategy, and more, related to the EA movement as a whole and to EA groups specifically, that are not specific guides for organizing concrete events, programs, or campaigns, but are aimed at getting a better understanding of more general thinking, strategy and criticism of the EA community.You can find the document here.Disclaimers for both documentsI do not necessarily endorse/recommend the resources and advice in these documents. My sole aim with these documents is to provide an overview of the space of the thinking and resources around EA community building, not to advocate for one particular way of going about it.These documents are probably really overwhelming, but my aim was to gather a comprehensive overview of all resources, as opposed to linking only 1 or 2 recommendations, which is the way the Groups Resources Centre or the GCP EA Student Groups Handbook are organized.The way I sorted things into categories will always remain artificial as some boundaries are blurry and some things fit into multiple categories.How to use these documentsUsing the table of contents or Ctrl + F + [what you’re looking for] probably works best for navigationPlease feel free to place comments and make suggestions if you have additions!When you add something new, please add a source (name of the group and/or person who made the resource) wherever possible to give people the credit they’re due and to facilitate others reaching out to the creator if they have more questions.In case of questions, feedback or comments, please reach out to info@eaeindhoven.nl.I hope ...]]>
Tue, 07 Mar 2023 17:36:02 +0000 EA - Masterdocs of EA community building guides and resources by Irene H Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Masterdocs of EA community building guides and resources, published by Irene H on March 7, 2023 on The Effective Altruism Forum.TLDR: I made a comprehensive overview of EA curricula, event organization guides, and syllabi, as well as an overview of resources on EA community building, communications, strategy, and more. The EA community builders I shared them with up to now found them really helpful.ContextTogether with Jelle Donders, I co-founded the university group at Eindhoven University of Technology in the Netherlands last summer. We followed the UGAP mentorship program last semester and have been thinking a lot about events and programs to organize for our EA group and about general EA community-building strategies. There is a big maze of Google docs containing resources on this, but none of them gives a complete and updated overview.I wanted to share two resources for EA community builders I’ve been working on over the past months. Both I made initially as references for myself, but when I shared them with other community builders, they found them quite helpful. Therefore, I’d now like to share them more widely, so that others can hopefully have the same benefits.EA Eindhoven Syllabi CollectionThere are many lists of EA curricula, event organization guides, and syllabi, but none of them are complete. Therefore, I made a document to which I save everything of that nature I come across, with the aim of getting a somewhat better overview of everything out thereI also went through other lists of this nature and saved all relevant documents to this collection, so it should be a one-stop shop. It is currently 27 pages long and I don’t know of another list that is more exhaustive. (Also compared to the EA Groups Resource Centre, which only offers a few curated resources per topic). I update this document regularly when I come across new resources.When we want to organize something new at my group, we have a look at this document to see whether someone else has done the thing we want to do already so we can save time, or just to get some inspiration.You can find the document here.Community Building ReadingsI also made a document that contains a lot of resources on EA community building, communications, strategy, and more, related to the EA movement as a whole and to EA groups specifically, that are not specific guides for organizing concrete events, programs, or campaigns, but are aimed at getting a better understanding of more general thinking, strategy and criticism of the EA community.You can find the document here.Disclaimers for both documentsI do not necessarily endorse/recommend the resources and advice in these documents. My sole aim with these documents is to provide an overview of the space of the thinking and resources around EA community building, not to advocate for one particular way of going about it.These documents are probably really overwhelming, but my aim was to gather a comprehensive overview of all resources, as opposed to linking only 1 or 2 recommendations, which is the way the Groups Resources Centre or the GCP EA Student Groups Handbook are organized.The way I sorted things into categories will always remain artificial as some boundaries are blurry and some things fit into multiple categories.How to use these documentsUsing the table of contents or Ctrl + F + [what you’re looking for] probably works best for navigationPlease feel free to place comments and make suggestions if you have additions!When you add something new, please add a source (name of the group and/or person who made the resource) wherever possible to give people the credit they’re due and to facilitate others reaching out to the creator if they have more questions.In case of questions, feedback or comments, please reach out to info@eaeindhoven.nl.I hope ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Masterdocs of EA community building guides and resources, published by Irene H on March 7, 2023 on The Effective Altruism Forum.TLDR: I made a comprehensive overview of EA curricula, event organization guides, and syllabi, as well as an overview of resources on EA community building, communications, strategy, and more. The EA community builders I shared them with up to now found them really helpful.ContextTogether with Jelle Donders, I co-founded the university group at Eindhoven University of Technology in the Netherlands last summer. We followed the UGAP mentorship program last semester and have been thinking a lot about events and programs to organize for our EA group and about general EA community-building strategies. There is a big maze of Google docs containing resources on this, but none of them gives a complete and updated overview.I wanted to share two resources for EA community builders I’ve been working on over the past months. Both I made initially as references for myself, but when I shared them with other community builders, they found them quite helpful. Therefore, I’d now like to share them more widely, so that others can hopefully have the same benefits.EA Eindhoven Syllabi CollectionThere are many lists of EA curricula, event organization guides, and syllabi, but none of them are complete. Therefore, I made a document to which I save everything of that nature I come across, with the aim of getting a somewhat better overview of everything out thereI also went through other lists of this nature and saved all relevant documents to this collection, so it should be a one-stop shop. It is currently 27 pages long and I don’t know of another list that is more exhaustive. (Also compared to the EA Groups Resource Centre, which only offers a few curated resources per topic). I update this document regularly when I come across new resources.When we want to organize something new at my group, we have a look at this document to see whether someone else has done the thing we want to do already so we can save time, or just to get some inspiration.You can find the document here.Community Building ReadingsI also made a document that contains a lot of resources on EA community building, communications, strategy, and more, related to the EA movement as a whole and to EA groups specifically, that are not specific guides for organizing concrete events, programs, or campaigns, but are aimed at getting a better understanding of more general thinking, strategy and criticism of the EA community.You can find the document here.Disclaimers for both documentsI do not necessarily endorse/recommend the resources and advice in these documents. My sole aim with these documents is to provide an overview of the space of the thinking and resources around EA community building, not to advocate for one particular way of going about it.These documents are probably really overwhelming, but my aim was to gather a comprehensive overview of all resources, as opposed to linking only 1 or 2 recommendations, which is the way the Groups Resources Centre or the GCP EA Student Groups Handbook are organized.The way I sorted things into categories will always remain artificial as some boundaries are blurry and some things fit into multiple categories.How to use these documentsUsing the table of contents or Ctrl + F + [what you’re looking for] probably works best for navigationPlease feel free to place comments and make suggestions if you have additions!When you add something new, please add a source (name of the group and/or person who made the resource) wherever possible to give people the credit they’re due and to facilitate others reaching out to the creator if they have more questions.In case of questions, feedback or comments, please reach out to info@eaeindhoven.nl.I hope ...]]>
Irene H https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:45 None full 5134
KoLdSn4PLkWzE6SWT_NL_EA_EA EA - Global catastrophic risks law approved in the United States by JorgeTorresC Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Global catastrophic risks law approved in the United States, published by JorgeTorresC on March 7, 2023 on The Effective Altruism Forum.Executive SummaryThe enactment of the Global Catastrophic Risk Management Act represents a significant step forward in global catastrophic risk management. It is the first time a nation has undertaken a detailed analysis of these risks.The law orders the United States government to establish actions for prevention, preparation, and resilience in the face of catastrophic risks.Specifically, the United States government will be required to:Present a global catastrophic risk assessment to the US Congress.Develop a comprehensive risk mitigation plan involving the collaboration of sixteen designated US national agencies.Formulate a strategy for risk management under the leadership of the Secretary of Homeland Security and the Administrator of the Federal Emergency Management of the US.Conduct a national exercise to test the strategy.Provide recommendations to the US Congress.This legislation recognizes as global catastrophic risks: global pandemics, nuclear war, asteroid and comet impacts, supervolcanoes, sudden and severe changes in climate, and threats arising from the use and development of emerging technologies (such as artificial intelligence or engineered pandemics).Our article presents an overview of the legislation, followed by a comparative discussion of the international legislation of GCRs. Furthermore, we recommend considering similar laws for adoption within the Spanish-speaking context.Read more (in Spanish)Riesgos Catastróficos Globales is a science-advocacy and research organization working on improving the management of global risks in Spanish-Speaking countries. You can support our organization with a donation.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
JorgeTorresC https://forum.effectivealtruism.org/posts/KoLdSn4PLkWzE6SWT/global-catastrophic-risks-law-approved-in-the-united-states Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Global catastrophic risks law approved in the United States, published by JorgeTorresC on March 7, 2023 on The Effective Altruism Forum.Executive SummaryThe enactment of the Global Catastrophic Risk Management Act represents a significant step forward in global catastrophic risk management. It is the first time a nation has undertaken a detailed analysis of these risks.The law orders the United States government to establish actions for prevention, preparation, and resilience in the face of catastrophic risks.Specifically, the United States government will be required to:Present a global catastrophic risk assessment to the US Congress.Develop a comprehensive risk mitigation plan involving the collaboration of sixteen designated US national agencies.Formulate a strategy for risk management under the leadership of the Secretary of Homeland Security and the Administrator of the Federal Emergency Management of the US.Conduct a national exercise to test the strategy.Provide recommendations to the US Congress.This legislation recognizes as global catastrophic risks: global pandemics, nuclear war, asteroid and comet impacts, supervolcanoes, sudden and severe changes in climate, and threats arising from the use and development of emerging technologies (such as artificial intelligence or engineered pandemics).Our article presents an overview of the legislation, followed by a comparative discussion of the international legislation of GCRs. Furthermore, we recommend considering similar laws for adoption within the Spanish-speaking context.Read more (in Spanish)Riesgos Catastróficos Globales is a science-advocacy and research organization working on improving the management of global risks in Spanish-Speaking countries. You can support our organization with a donation.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 07 Mar 2023 16:56:30 +0000 EA - Global catastrophic risks law approved in the United States by JorgeTorresC Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Global catastrophic risks law approved in the United States, published by JorgeTorresC on March 7, 2023 on The Effective Altruism Forum.Executive SummaryThe enactment of the Global Catastrophic Risk Management Act represents a significant step forward in global catastrophic risk management. It is the first time a nation has undertaken a detailed analysis of these risks.The law orders the United States government to establish actions for prevention, preparation, and resilience in the face of catastrophic risks.Specifically, the United States government will be required to:Present a global catastrophic risk assessment to the US Congress.Develop a comprehensive risk mitigation plan involving the collaboration of sixteen designated US national agencies.Formulate a strategy for risk management under the leadership of the Secretary of Homeland Security and the Administrator of the Federal Emergency Management of the US.Conduct a national exercise to test the strategy.Provide recommendations to the US Congress.This legislation recognizes as global catastrophic risks: global pandemics, nuclear war, asteroid and comet impacts, supervolcanoes, sudden and severe changes in climate, and threats arising from the use and development of emerging technologies (such as artificial intelligence or engineered pandemics).Our article presents an overview of the legislation, followed by a comparative discussion of the international legislation of GCRs. Furthermore, we recommend considering similar laws for adoption within the Spanish-speaking context.Read more (in Spanish)Riesgos Catastróficos Globales is a science-advocacy and research organization working on improving the management of global risks in Spanish-Speaking countries. You can support our organization with a donation.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Global catastrophic risks law approved in the United States, published by JorgeTorresC on March 7, 2023 on The Effective Altruism Forum.Executive SummaryThe enactment of the Global Catastrophic Risk Management Act represents a significant step forward in global catastrophic risk management. It is the first time a nation has undertaken a detailed analysis of these risks.The law orders the United States government to establish actions for prevention, preparation, and resilience in the face of catastrophic risks.Specifically, the United States government will be required to:Present a global catastrophic risk assessment to the US Congress.Develop a comprehensive risk mitigation plan involving the collaboration of sixteen designated US national agencies.Formulate a strategy for risk management under the leadership of the Secretary of Homeland Security and the Administrator of the Federal Emergency Management of the US.Conduct a national exercise to test the strategy.Provide recommendations to the US Congress.This legislation recognizes as global catastrophic risks: global pandemics, nuclear war, asteroid and comet impacts, supervolcanoes, sudden and severe changes in climate, and threats arising from the use and development of emerging technologies (such as artificial intelligence or engineered pandemics).Our article presents an overview of the legislation, followed by a comparative discussion of the international legislation of GCRs. Furthermore, we recommend considering similar laws for adoption within the Spanish-speaking context.Read more (in Spanish)Riesgos Catastróficos Globales is a science-advocacy and research organization working on improving the management of global risks in Spanish-Speaking countries. You can support our organization with a donation.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
JorgeTorresC https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:59 None full 5133
kCBQHWqbk4Nrns8P7_NL_EA_EA EA - Model-Based Policy Analysis under Deep Uncertainty by Max Reddel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Model-Based Policy Analysis under Deep Uncertainty, published by Max Reddel on March 6, 2023 on The Effective Altruism Forum.This post is based on a talk that I gave at EAGxBerlin 2022. It is intended for policy researchers who want to extend their tool kit with computational tools. I show how we can support decision-making with simulation models of socio-technical systems while embracing uncertainties in a systematic manner. The technical field of decision-making under deep uncertainty offers a wide range of methods to account for various parametric and structural uncertainties while identifying robust policies in a situation where we want to optimize for multiple objectives simultaneously.SummaryReal-world political decision-making problems are complex, with disputed knowledge, differing problem perceptions, opposing stakeholders, and interactions between framing the problem and problem-solving.Modeling can help policy-makers to navigate these complexities.Traditional modeling is ill-suited for this purpose.Systems modeling is a better fit (e.g., agent-based models).Deep uncertainty is everywhere.Deep uncertainty makes expected-utility reasoning virtually useless.Decision-Making under Deep Uncertainty is a framework that can build upon systems modeling and overcome deep uncertainties.Explorative modeling > predictive modeling.Value diversity (aka multiple objectives) > single objectives.Focus on finding vulnerable scenarios and robust policy solutions.Good fit with the mitigation of GCRs, X-risks, and S-risks.ComplexityComplexity science is an interdisciplinary field that seeks to understand complex systems and the emergent behaviors that arise from the interactions of their components. Complexity is often an obstacle to decision-making. So, we need to address it.Ant ColoniesAnt colonies are a great example of how complex systems can emerge from simple individual behaviors. Ants follow very simplistic rules, such as depositing food, following pheromone trails, and communicating with each other through chemical signals. However, the collective behavior of the colony is highly sophisticated, with complex networks of pheromone trails guiding the movement of the entire colony toward food sources and the construction of intricate structures such as nests and tunnels. The behavior of the colony is also highly adaptive, with the ability to respond to changes in the environment, such as changes in the availability of food or the presence of predators.Examples of Economy and TechnologySimilarly, the world is also a highly complex system, with a vast array of interrelated factors and processes that interact with each other in intricate ways. These factors include the economy, technology, politics, culture, and the environment, among others. Each of these factors is highly complex in its own right, with multiple variables and feedback loops that contribute to the overall complexity of the system. For example, the economy is a highly complex system that involves the interactions between individuals, businesses, governments, and other entities. The behavior of each individual actor is highly variable and can be influenced by a range of factors, such as personal motivations, cultural norms, and environmental factors. These individual behaviors can then interact with each other in complex ways, leading to emergent phenomena such as market trends, economic growth, and financial crises.Similarly, technology is a highly complex system that involves interactions between multiple components, such as hardware, software, data, and networks. Each of these components is highly complex in its own right, with multiple feedback loops and interactions that contribute to the overall complexity of the system. The behavior of the system as a whole can then be highly unpredict...]]>
Max Reddel https://forum.effectivealtruism.org/posts/kCBQHWqbk4Nrns8P7/model-based-policy-analysis-under-deep-uncertainty Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Model-Based Policy Analysis under Deep Uncertainty, published by Max Reddel on March 6, 2023 on The Effective Altruism Forum.This post is based on a talk that I gave at EAGxBerlin 2022. It is intended for policy researchers who want to extend their tool kit with computational tools. I show how we can support decision-making with simulation models of socio-technical systems while embracing uncertainties in a systematic manner. The technical field of decision-making under deep uncertainty offers a wide range of methods to account for various parametric and structural uncertainties while identifying robust policies in a situation where we want to optimize for multiple objectives simultaneously.SummaryReal-world political decision-making problems are complex, with disputed knowledge, differing problem perceptions, opposing stakeholders, and interactions between framing the problem and problem-solving.Modeling can help policy-makers to navigate these complexities.Traditional modeling is ill-suited for this purpose.Systems modeling is a better fit (e.g., agent-based models).Deep uncertainty is everywhere.Deep uncertainty makes expected-utility reasoning virtually useless.Decision-Making under Deep Uncertainty is a framework that can build upon systems modeling and overcome deep uncertainties.Explorative modeling > predictive modeling.Value diversity (aka multiple objectives) > single objectives.Focus on finding vulnerable scenarios and robust policy solutions.Good fit with the mitigation of GCRs, X-risks, and S-risks.ComplexityComplexity science is an interdisciplinary field that seeks to understand complex systems and the emergent behaviors that arise from the interactions of their components. Complexity is often an obstacle to decision-making. So, we need to address it.Ant ColoniesAnt colonies are a great example of how complex systems can emerge from simple individual behaviors. Ants follow very simplistic rules, such as depositing food, following pheromone trails, and communicating with each other through chemical signals. However, the collective behavior of the colony is highly sophisticated, with complex networks of pheromone trails guiding the movement of the entire colony toward food sources and the construction of intricate structures such as nests and tunnels. The behavior of the colony is also highly adaptive, with the ability to respond to changes in the environment, such as changes in the availability of food or the presence of predators.Examples of Economy and TechnologySimilarly, the world is also a highly complex system, with a vast array of interrelated factors and processes that interact with each other in intricate ways. These factors include the economy, technology, politics, culture, and the environment, among others. Each of these factors is highly complex in its own right, with multiple variables and feedback loops that contribute to the overall complexity of the system. For example, the economy is a highly complex system that involves the interactions between individuals, businesses, governments, and other entities. The behavior of each individual actor is highly variable and can be influenced by a range of factors, such as personal motivations, cultural norms, and environmental factors. These individual behaviors can then interact with each other in complex ways, leading to emergent phenomena such as market trends, economic growth, and financial crises.Similarly, technology is a highly complex system that involves interactions between multiple components, such as hardware, software, data, and networks. Each of these components is highly complex in its own right, with multiple feedback loops and interactions that contribute to the overall complexity of the system. The behavior of the system as a whole can then be highly unpredict...]]>
Mon, 06 Mar 2023 17:47:14 +0000 EA - Model-Based Policy Analysis under Deep Uncertainty by Max Reddel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Model-Based Policy Analysis under Deep Uncertainty, published by Max Reddel on March 6, 2023 on The Effective Altruism Forum.This post is based on a talk that I gave at EAGxBerlin 2022. It is intended for policy researchers who want to extend their tool kit with computational tools. I show how we can support decision-making with simulation models of socio-technical systems while embracing uncertainties in a systematic manner. The technical field of decision-making under deep uncertainty offers a wide range of methods to account for various parametric and structural uncertainties while identifying robust policies in a situation where we want to optimize for multiple objectives simultaneously.SummaryReal-world political decision-making problems are complex, with disputed knowledge, differing problem perceptions, opposing stakeholders, and interactions between framing the problem and problem-solving.Modeling can help policy-makers to navigate these complexities.Traditional modeling is ill-suited for this purpose.Systems modeling is a better fit (e.g., agent-based models).Deep uncertainty is everywhere.Deep uncertainty makes expected-utility reasoning virtually useless.Decision-Making under Deep Uncertainty is a framework that can build upon systems modeling and overcome deep uncertainties.Explorative modeling > predictive modeling.Value diversity (aka multiple objectives) > single objectives.Focus on finding vulnerable scenarios and robust policy solutions.Good fit with the mitigation of GCRs, X-risks, and S-risks.ComplexityComplexity science is an interdisciplinary field that seeks to understand complex systems and the emergent behaviors that arise from the interactions of their components. Complexity is often an obstacle to decision-making. So, we need to address it.Ant ColoniesAnt colonies are a great example of how complex systems can emerge from simple individual behaviors. Ants follow very simplistic rules, such as depositing food, following pheromone trails, and communicating with each other through chemical signals. However, the collective behavior of the colony is highly sophisticated, with complex networks of pheromone trails guiding the movement of the entire colony toward food sources and the construction of intricate structures such as nests and tunnels. The behavior of the colony is also highly adaptive, with the ability to respond to changes in the environment, such as changes in the availability of food or the presence of predators.Examples of Economy and TechnologySimilarly, the world is also a highly complex system, with a vast array of interrelated factors and processes that interact with each other in intricate ways. These factors include the economy, technology, politics, culture, and the environment, among others. Each of these factors is highly complex in its own right, with multiple variables and feedback loops that contribute to the overall complexity of the system. For example, the economy is a highly complex system that involves the interactions between individuals, businesses, governments, and other entities. The behavior of each individual actor is highly variable and can be influenced by a range of factors, such as personal motivations, cultural norms, and environmental factors. These individual behaviors can then interact with each other in complex ways, leading to emergent phenomena such as market trends, economic growth, and financial crises.Similarly, technology is a highly complex system that involves interactions between multiple components, such as hardware, software, data, and networks. Each of these components is highly complex in its own right, with multiple feedback loops and interactions that contribute to the overall complexity of the system. The behavior of the system as a whole can then be highly unpredict...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Model-Based Policy Analysis under Deep Uncertainty, published by Max Reddel on March 6, 2023 on The Effective Altruism Forum.This post is based on a talk that I gave at EAGxBerlin 2022. It is intended for policy researchers who want to extend their tool kit with computational tools. I show how we can support decision-making with simulation models of socio-technical systems while embracing uncertainties in a systematic manner. The technical field of decision-making under deep uncertainty offers a wide range of methods to account for various parametric and structural uncertainties while identifying robust policies in a situation where we want to optimize for multiple objectives simultaneously.SummaryReal-world political decision-making problems are complex, with disputed knowledge, differing problem perceptions, opposing stakeholders, and interactions between framing the problem and problem-solving.Modeling can help policy-makers to navigate these complexities.Traditional modeling is ill-suited for this purpose.Systems modeling is a better fit (e.g., agent-based models).Deep uncertainty is everywhere.Deep uncertainty makes expected-utility reasoning virtually useless.Decision-Making under Deep Uncertainty is a framework that can build upon systems modeling and overcome deep uncertainties.Explorative modeling > predictive modeling.Value diversity (aka multiple objectives) > single objectives.Focus on finding vulnerable scenarios and robust policy solutions.Good fit with the mitigation of GCRs, X-risks, and S-risks.ComplexityComplexity science is an interdisciplinary field that seeks to understand complex systems and the emergent behaviors that arise from the interactions of their components. Complexity is often an obstacle to decision-making. So, we need to address it.Ant ColoniesAnt colonies are a great example of how complex systems can emerge from simple individual behaviors. Ants follow very simplistic rules, such as depositing food, following pheromone trails, and communicating with each other through chemical signals. However, the collective behavior of the colony is highly sophisticated, with complex networks of pheromone trails guiding the movement of the entire colony toward food sources and the construction of intricate structures such as nests and tunnels. The behavior of the colony is also highly adaptive, with the ability to respond to changes in the environment, such as changes in the availability of food or the presence of predators.Examples of Economy and TechnologySimilarly, the world is also a highly complex system, with a vast array of interrelated factors and processes that interact with each other in intricate ways. These factors include the economy, technology, politics, culture, and the environment, among others. Each of these factors is highly complex in its own right, with multiple variables and feedback loops that contribute to the overall complexity of the system. For example, the economy is a highly complex system that involves the interactions between individuals, businesses, governments, and other entities. The behavior of each individual actor is highly variable and can be influenced by a range of factors, such as personal motivations, cultural norms, and environmental factors. These individual behaviors can then interact with each other in complex ways, leading to emergent phenomena such as market trends, economic growth, and financial crises.Similarly, technology is a highly complex system that involves interactions between multiple components, such as hardware, software, data, and networks. Each of these components is highly complex in its own right, with multiple feedback loops and interactions that contribute to the overall complexity of the system. The behavior of the system as a whole can then be highly unpredict...]]>
Max Reddel https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 35:40 None full 5127
mLua7KbJRbXa6oeZ3_NL_EA_EA EA - More Centralisation? by DavidNash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More Centralisation?, published by DavidNash on March 6, 2023 on The Effective Altruism Forum.SummaryI think EA is under centralisedThere are few ‘large’ EA organisations but most EA opportunities are 1-2 person projectsThis is setting up most projects to fail without proper organisational support and does not provide good incentives for experienced professionals to work on EA projectsEA organisations with good operations could incubate smaller projects before spinning them outLevels of CentralisationWe could imagine different levels of centralisation for a movement ranging from fully decentralised to fully centralised.Fully decentralised, everyone works on their own project, no organisations bigger than 1 personFully centralised, everyone works inside the same organisation (e.g. the civil service)It seems that EA tends more towards the decentralised model, there are relatively few larger organisations with ~50 or more people (Open Phil, GiveWell, Rethink Priorities, EVF), there are some with ~5-20 people and a lot of 1-2 person projects.I think EA would be much worse if it was one large organisation but there is probably a better balance found between the two extremes then we have at the moment.I think being overly decentralised may be setting up most people to fail.Why would being overly decentralised be setting people up to fail?Being an independent researcher/organiser is harder without support systems in place, and trying to coordinate this outside of an organisation is more complicatedThese support systems includeHaving a managerHaving colleagues to bounce ideas off/moral supportHaving professional HR/operations supportHealth insuranceBeing an employee rather than a contractor/grant recipient that has to worry about receiving future funding (although there are similar concerns about being fired)When people are setting up their own projects it can take up a large proportion of their time in the first year just doing operations to run that project, unrelated to the actual work they want to do. This can include spending a lot of the first year just fundraising for the second yearHow a lack of centralisation might affect EA overallBeing a movement with lots of small project work will appeal more to those with a higher risk tolerance, potentially pushing away more experienced people who would want to work on these projects, but within a larger organisationHaving a lot of small organisations will lead to a lot of duplication of operation/administration workIt will be harder to have good governance for lots of smaller organisations, some choose to not have any governance structures at all unless they growThere is less competition for employees if the choice is between 3 or 4 operationally strong organisations or being in a small orgWhat can change?Organisations with good operations and governance could support more projects internally - One example of this already is the Rethink Priorities Special Projects ProgramThese projects can be supported until they have enough experience and internal operations to survive and thrive independentlyPrograms that are mainly around giving money to individuals could be converted into internal programs, something more similar to the Research Scholars Program, or Charity Entrepreneurship’s Incubation ProgramThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
DavidNash https://forum.effectivealtruism.org/posts/mLua7KbJRbXa6oeZ3/more-centralisation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More Centralisation?, published by DavidNash on March 6, 2023 on The Effective Altruism Forum.SummaryI think EA is under centralisedThere are few ‘large’ EA organisations but most EA opportunities are 1-2 person projectsThis is setting up most projects to fail without proper organisational support and does not provide good incentives for experienced professionals to work on EA projectsEA organisations with good operations could incubate smaller projects before spinning them outLevels of CentralisationWe could imagine different levels of centralisation for a movement ranging from fully decentralised to fully centralised.Fully decentralised, everyone works on their own project, no organisations bigger than 1 personFully centralised, everyone works inside the same organisation (e.g. the civil service)It seems that EA tends more towards the decentralised model, there are relatively few larger organisations with ~50 or more people (Open Phil, GiveWell, Rethink Priorities, EVF), there are some with ~5-20 people and a lot of 1-2 person projects.I think EA would be much worse if it was one large organisation but there is probably a better balance found between the two extremes then we have at the moment.I think being overly decentralised may be setting up most people to fail.Why would being overly decentralised be setting people up to fail?Being an independent researcher/organiser is harder without support systems in place, and trying to coordinate this outside of an organisation is more complicatedThese support systems includeHaving a managerHaving colleagues to bounce ideas off/moral supportHaving professional HR/operations supportHealth insuranceBeing an employee rather than a contractor/grant recipient that has to worry about receiving future funding (although there are similar concerns about being fired)When people are setting up their own projects it can take up a large proportion of their time in the first year just doing operations to run that project, unrelated to the actual work they want to do. This can include spending a lot of the first year just fundraising for the second yearHow a lack of centralisation might affect EA overallBeing a movement with lots of small project work will appeal more to those with a higher risk tolerance, potentially pushing away more experienced people who would want to work on these projects, but within a larger organisationHaving a lot of small organisations will lead to a lot of duplication of operation/administration workIt will be harder to have good governance for lots of smaller organisations, some choose to not have any governance structures at all unless they growThere is less competition for employees if the choice is between 3 or 4 operationally strong organisations or being in a small orgWhat can change?Organisations with good operations and governance could support more projects internally - One example of this already is the Rethink Priorities Special Projects ProgramThese projects can be supported until they have enough experience and internal operations to survive and thrive independentlyPrograms that are mainly around giving money to individuals could be converted into internal programs, something more similar to the Research Scholars Program, or Charity Entrepreneurship’s Incubation ProgramThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 06 Mar 2023 17:43:19 +0000 EA - More Centralisation? by DavidNash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More Centralisation?, published by DavidNash on March 6, 2023 on The Effective Altruism Forum.SummaryI think EA is under centralisedThere are few ‘large’ EA organisations but most EA opportunities are 1-2 person projectsThis is setting up most projects to fail without proper organisational support and does not provide good incentives for experienced professionals to work on EA projectsEA organisations with good operations could incubate smaller projects before spinning them outLevels of CentralisationWe could imagine different levels of centralisation for a movement ranging from fully decentralised to fully centralised.Fully decentralised, everyone works on their own project, no organisations bigger than 1 personFully centralised, everyone works inside the same organisation (e.g. the civil service)It seems that EA tends more towards the decentralised model, there are relatively few larger organisations with ~50 or more people (Open Phil, GiveWell, Rethink Priorities, EVF), there are some with ~5-20 people and a lot of 1-2 person projects.I think EA would be much worse if it was one large organisation but there is probably a better balance found between the two extremes then we have at the moment.I think being overly decentralised may be setting up most people to fail.Why would being overly decentralised be setting people up to fail?Being an independent researcher/organiser is harder without support systems in place, and trying to coordinate this outside of an organisation is more complicatedThese support systems includeHaving a managerHaving colleagues to bounce ideas off/moral supportHaving professional HR/operations supportHealth insuranceBeing an employee rather than a contractor/grant recipient that has to worry about receiving future funding (although there are similar concerns about being fired)When people are setting up their own projects it can take up a large proportion of their time in the first year just doing operations to run that project, unrelated to the actual work they want to do. This can include spending a lot of the first year just fundraising for the second yearHow a lack of centralisation might affect EA overallBeing a movement with lots of small project work will appeal more to those with a higher risk tolerance, potentially pushing away more experienced people who would want to work on these projects, but within a larger organisationHaving a lot of small organisations will lead to a lot of duplication of operation/administration workIt will be harder to have good governance for lots of smaller organisations, some choose to not have any governance structures at all unless they growThere is less competition for employees if the choice is between 3 or 4 operationally strong organisations or being in a small orgWhat can change?Organisations with good operations and governance could support more projects internally - One example of this already is the Rethink Priorities Special Projects ProgramThese projects can be supported until they have enough experience and internal operations to survive and thrive independentlyPrograms that are mainly around giving money to individuals could be converted into internal programs, something more similar to the Research Scholars Program, or Charity Entrepreneurship’s Incubation ProgramThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More Centralisation?, published by DavidNash on March 6, 2023 on The Effective Altruism Forum.SummaryI think EA is under centralisedThere are few ‘large’ EA organisations but most EA opportunities are 1-2 person projectsThis is setting up most projects to fail without proper organisational support and does not provide good incentives for experienced professionals to work on EA projectsEA organisations with good operations could incubate smaller projects before spinning them outLevels of CentralisationWe could imagine different levels of centralisation for a movement ranging from fully decentralised to fully centralised.Fully decentralised, everyone works on their own project, no organisations bigger than 1 personFully centralised, everyone works inside the same organisation (e.g. the civil service)It seems that EA tends more towards the decentralised model, there are relatively few larger organisations with ~50 or more people (Open Phil, GiveWell, Rethink Priorities, EVF), there are some with ~5-20 people and a lot of 1-2 person projects.I think EA would be much worse if it was one large organisation but there is probably a better balance found between the two extremes then we have at the moment.I think being overly decentralised may be setting up most people to fail.Why would being overly decentralised be setting people up to fail?Being an independent researcher/organiser is harder without support systems in place, and trying to coordinate this outside of an organisation is more complicatedThese support systems includeHaving a managerHaving colleagues to bounce ideas off/moral supportHaving professional HR/operations supportHealth insuranceBeing an employee rather than a contractor/grant recipient that has to worry about receiving future funding (although there are similar concerns about being fired)When people are setting up their own projects it can take up a large proportion of their time in the first year just doing operations to run that project, unrelated to the actual work they want to do. This can include spending a lot of the first year just fundraising for the second yearHow a lack of centralisation might affect EA overallBeing a movement with lots of small project work will appeal more to those with a higher risk tolerance, potentially pushing away more experienced people who would want to work on these projects, but within a larger organisationHaving a lot of small organisations will lead to a lot of duplication of operation/administration workIt will be harder to have good governance for lots of smaller organisations, some choose to not have any governance structures at all unless they growThere is less competition for employees if the choice is between 3 or 4 operationally strong organisations or being in a small orgWhat can change?Organisations with good operations and governance could support more projects internally - One example of this already is the Rethink Priorities Special Projects ProgramThese projects can be supported until they have enough experience and internal operations to survive and thrive independentlyPrograms that are mainly around giving money to individuals could be converted into internal programs, something more similar to the Research Scholars Program, or Charity Entrepreneurship’s Incubation ProgramThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
DavidNash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:15 None full 5124
RarQnPKCx4KkhoLEx_NL_EA_EA EA - 3 Basic Steps to Reduce Personal Liability as an Org Leader by Deena Englander Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 3 Basic Steps to Reduce Personal Liability as an Org Leader, published by Deena Englander on March 6, 2023 on The Effective Altruism Forum.It's come to my attention that many of the smaller EA orgs are not putting into place basic protection measures that keep their leaders safe. In the world we live in, risk mitigation and potential lawsuits are a fact of life, and I wouldn't want anyone to put themselves at greater risk just because they are unaware of the risk and easy steps to avoid it.Rule #1: Incorporate.I know most are hesitant to start an actual non-profit since that is more expensive and time-consuming, but at the least, you can form an LLC. That means that any liability accrued by the org CANNOT pass on to you (I think there are a few exceptions, but you can research that). LLCs are easy to start, and are pretty inexpensive (a few hundred to start, and then annually).Rule #2: Get your organization its own bank accountIt is NOT a good idea to keep your organization's finances together with your personal ones for many reasons. That increases the risk of accidental fraud and financial mismanagement. If you have your funds and the org's funds together, you run the risk of using the wrong funds and increasing your liability, since it's not clear which activities are personal (not protected by the LLC) or from the org. You also can't really keep track of your expenses well when it's all mixed up. You don't need a fancy bank account - any will do.Rule #3: Get general liability insuranceBasic liability insurance is an expense (mine costs about $1300 USD a year, but that's for my particular services), but if you're providing any type of guidance, mentoring, services, or events, it's a must. I can go into all sorts of potential lawsuits that you hopefully won't have, but if you even have one, your organization will likely go bankrupt if you don't have the protection insurance provides.This is not meant to be an in-depth article of all the things you can do, but EVERY EA org that is providing some type of service should have this in place. There's no reason to have our leaders assuming unnecessary risk.I don't know what this looks like if you're fiscally sponsored - I'd assume that they assume the liability - but I would love it if someone could clarify.I hope we can start changing the standard practices to protect our leaders and organizations. If anyone has any questions about their particular org, please feel free to reach out.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Deena Englander https://forum.effectivealtruism.org/posts/RarQnPKCx4KkhoLEx/3-basic-steps-to-reduce-personal-liability-as-an-org-leader Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 3 Basic Steps to Reduce Personal Liability as an Org Leader, published by Deena Englander on March 6, 2023 on The Effective Altruism Forum.It's come to my attention that many of the smaller EA orgs are not putting into place basic protection measures that keep their leaders safe. In the world we live in, risk mitigation and potential lawsuits are a fact of life, and I wouldn't want anyone to put themselves at greater risk just because they are unaware of the risk and easy steps to avoid it.Rule #1: Incorporate.I know most are hesitant to start an actual non-profit since that is more expensive and time-consuming, but at the least, you can form an LLC. That means that any liability accrued by the org CANNOT pass on to you (I think there are a few exceptions, but you can research that). LLCs are easy to start, and are pretty inexpensive (a few hundred to start, and then annually).Rule #2: Get your organization its own bank accountIt is NOT a good idea to keep your organization's finances together with your personal ones for many reasons. That increases the risk of accidental fraud and financial mismanagement. If you have your funds and the org's funds together, you run the risk of using the wrong funds and increasing your liability, since it's not clear which activities are personal (not protected by the LLC) or from the org. You also can't really keep track of your expenses well when it's all mixed up. You don't need a fancy bank account - any will do.Rule #3: Get general liability insuranceBasic liability insurance is an expense (mine costs about $1300 USD a year, but that's for my particular services), but if you're providing any type of guidance, mentoring, services, or events, it's a must. I can go into all sorts of potential lawsuits that you hopefully won't have, but if you even have one, your organization will likely go bankrupt if you don't have the protection insurance provides.This is not meant to be an in-depth article of all the things you can do, but EVERY EA org that is providing some type of service should have this in place. There's no reason to have our leaders assuming unnecessary risk.I don't know what this looks like if you're fiscally sponsored - I'd assume that they assume the liability - but I would love it if someone could clarify.I hope we can start changing the standard practices to protect our leaders and organizations. If anyone has any questions about their particular org, please feel free to reach out.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 06 Mar 2023 15:45:33 +0000 EA - 3 Basic Steps to Reduce Personal Liability as an Org Leader by Deena Englander Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 3 Basic Steps to Reduce Personal Liability as an Org Leader, published by Deena Englander on March 6, 2023 on The Effective Altruism Forum.It's come to my attention that many of the smaller EA orgs are not putting into place basic protection measures that keep their leaders safe. In the world we live in, risk mitigation and potential lawsuits are a fact of life, and I wouldn't want anyone to put themselves at greater risk just because they are unaware of the risk and easy steps to avoid it.Rule #1: Incorporate.I know most are hesitant to start an actual non-profit since that is more expensive and time-consuming, but at the least, you can form an LLC. That means that any liability accrued by the org CANNOT pass on to you (I think there are a few exceptions, but you can research that). LLCs are easy to start, and are pretty inexpensive (a few hundred to start, and then annually).Rule #2: Get your organization its own bank accountIt is NOT a good idea to keep your organization's finances together with your personal ones for many reasons. That increases the risk of accidental fraud and financial mismanagement. If you have your funds and the org's funds together, you run the risk of using the wrong funds and increasing your liability, since it's not clear which activities are personal (not protected by the LLC) or from the org. You also can't really keep track of your expenses well when it's all mixed up. You don't need a fancy bank account - any will do.Rule #3: Get general liability insuranceBasic liability insurance is an expense (mine costs about $1300 USD a year, but that's for my particular services), but if you're providing any type of guidance, mentoring, services, or events, it's a must. I can go into all sorts of potential lawsuits that you hopefully won't have, but if you even have one, your organization will likely go bankrupt if you don't have the protection insurance provides.This is not meant to be an in-depth article of all the things you can do, but EVERY EA org that is providing some type of service should have this in place. There's no reason to have our leaders assuming unnecessary risk.I don't know what this looks like if you're fiscally sponsored - I'd assume that they assume the liability - but I would love it if someone could clarify.I hope we can start changing the standard practices to protect our leaders and organizations. If anyone has any questions about their particular org, please feel free to reach out.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 3 Basic Steps to Reduce Personal Liability as an Org Leader, published by Deena Englander on March 6, 2023 on The Effective Altruism Forum.It's come to my attention that many of the smaller EA orgs are not putting into place basic protection measures that keep their leaders safe. In the world we live in, risk mitigation and potential lawsuits are a fact of life, and I wouldn't want anyone to put themselves at greater risk just because they are unaware of the risk and easy steps to avoid it.Rule #1: Incorporate.I know most are hesitant to start an actual non-profit since that is more expensive and time-consuming, but at the least, you can form an LLC. That means that any liability accrued by the org CANNOT pass on to you (I think there are a few exceptions, but you can research that). LLCs are easy to start, and are pretty inexpensive (a few hundred to start, and then annually).Rule #2: Get your organization its own bank accountIt is NOT a good idea to keep your organization's finances together with your personal ones for many reasons. That increases the risk of accidental fraud and financial mismanagement. If you have your funds and the org's funds together, you run the risk of using the wrong funds and increasing your liability, since it's not clear which activities are personal (not protected by the LLC) or from the org. You also can't really keep track of your expenses well when it's all mixed up. You don't need a fancy bank account - any will do.Rule #3: Get general liability insuranceBasic liability insurance is an expense (mine costs about $1300 USD a year, but that's for my particular services), but if you're providing any type of guidance, mentoring, services, or events, it's a must. I can go into all sorts of potential lawsuits that you hopefully won't have, but if you even have one, your organization will likely go bankrupt if you don't have the protection insurance provides.This is not meant to be an in-depth article of all the things you can do, but EVERY EA org that is providing some type of service should have this in place. There's no reason to have our leaders assuming unnecessary risk.I don't know what this looks like if you're fiscally sponsored - I'd assume that they assume the liability - but I would love it if someone could clarify.I hope we can start changing the standard practices to protect our leaders and organizations. If anyone has any questions about their particular org, please feel free to reach out.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Deena Englander https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:31 None full 5126
cHDz2R5FfWGoZgWoZ_NL_EA_EA EA - After launch. How are CE charities progressing? by Ula Zarosa Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: After launch. How are CE charities progressing?, published by Ula Zarosa on March 6, 2023 on The Effective Altruism Forum.TL;DR: Charity Entrepreneurship have helped to kick-start 23 impact-focused nonprofits in four years. We believe that starting more effective charities is the most impactful thing we can do. Our charities have surpassed expectations, and in this post we provide an update on their progress and achievements to date.About CEAt Charity Entrepreneurship (CE) we launch high-impact nonprofits by connecting entrepreneurs with the effective ideas, training and funding needed to launch and succeed. We provide:Seed grants (ranging from $50,000 to $200,000 per project)In-depth research reports with promising charity ideasTwo months of intensive trainingCo-founder matching (this is particularly important)StipendsCo-working space in LondonOngoing connection to the CE Community (~100 founders, funders and mentors)(Applications are now open to our 2023/2024 programs, apply by March 12, 2023).We estimate that on average:40% of our charities reach or exceed the cost-effectiveness of the strongest charities in their fields (e.g., GiveWell/ACE recommended).40% are in a steady state. This means they are having impact, but not at the GiveWell-recommendation level yet, or their cost-effectiveness is currently less clear-cut (all new charities start in this category for their first year).20% have already shut down or might in the future.General updateTo date, our CE Seed Network has provided our charities with $1.88 million in launch grants. Based on the updates provided by our charities in Jan 2023, we estimate that:1. They have meaningfully reached over 15 million people, and have the potential to soon reach up to 2.5 billion animals annually with their programs. For example:Suvita:Reached 600,000 families with vaccination reminders, 50,000 families reached by immunization ambassadors, and 95,000 women with pregnancy care reminders14,000 additional children vaccinatedFish Welfare Initiative:1.14 million fish potentially helped through welfare improvements1.4 million shrimp potentially helpedFamily Empowerment Media:15 million listeners reached in NigeriaIn the period overlapping with the campaign in Kano state (5.6 million people reached) the contraceptive uptake in the region increased by 75%, which corresponds to 250,000 new contraceptive users and an estimated 200 fewer maternal deaths related to unwanted pregnanciesLead Exposure Elimination Project:Policy changes implemented in Malawi alone are expected to reach 215,000 children. LEEP has launched 9 further paint programs, which they estimate will have a similar impact on averageShrimp Welfare Project:The program with MER Seafood (now in progress) can reach up to 125 million shrimp/year. Additional collaborations could reach >2.5 billion shrimp per annum2. They have fundraised over $22.5 million USD from grantmakers like GiveWell, Open Philanthropy, Mulago, Schmidt Futures, Animal Charity Evaluators, Grand Challenges Canada, and EA Animal Welfare Fund, amongst others. 3. If implemented at scale, they can reach impressive cost-effectiveness. For example:Family Empowerment Media: the intervention can potentially be 22x more effective than cash transfers from GiveDirectly (estimated by the team, 26x estimated by Founders Pledge)Fish Welfare Initiative: 1.3 fish or 2 shrimp potentially helped per $1 (estimated by the team, ACE assessed FWI cost-effectiveness as high to very high)Shrimp Welfare Project: approximately 625 shrimp potentially helped per $1 (estimated by the team)Suvita: when delivered at scale, effectiveness is at a similar range to GiveWell’s top charities (estimated by external organizations, e.g. Founders Pledge, data on this will be available later this year)Giving G...]]>
Ula Zarosa https://forum.effectivealtruism.org/posts/cHDz2R5FfWGoZgWoZ/after-launch-how-are-ce-charities-progressing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: After launch. How are CE charities progressing?, published by Ula Zarosa on March 6, 2023 on The Effective Altruism Forum.TL;DR: Charity Entrepreneurship have helped to kick-start 23 impact-focused nonprofits in four years. We believe that starting more effective charities is the most impactful thing we can do. Our charities have surpassed expectations, and in this post we provide an update on their progress and achievements to date.About CEAt Charity Entrepreneurship (CE) we launch high-impact nonprofits by connecting entrepreneurs with the effective ideas, training and funding needed to launch and succeed. We provide:Seed grants (ranging from $50,000 to $200,000 per project)In-depth research reports with promising charity ideasTwo months of intensive trainingCo-founder matching (this is particularly important)StipendsCo-working space in LondonOngoing connection to the CE Community (~100 founders, funders and mentors)(Applications are now open to our 2023/2024 programs, apply by March 12, 2023).We estimate that on average:40% of our charities reach or exceed the cost-effectiveness of the strongest charities in their fields (e.g., GiveWell/ACE recommended).40% are in a steady state. This means they are having impact, but not at the GiveWell-recommendation level yet, or their cost-effectiveness is currently less clear-cut (all new charities start in this category for their first year).20% have already shut down or might in the future.General updateTo date, our CE Seed Network has provided our charities with $1.88 million in launch grants. Based on the updates provided by our charities in Jan 2023, we estimate that:1. They have meaningfully reached over 15 million people, and have the potential to soon reach up to 2.5 billion animals annually with their programs. For example:Suvita:Reached 600,000 families with vaccination reminders, 50,000 families reached by immunization ambassadors, and 95,000 women with pregnancy care reminders14,000 additional children vaccinatedFish Welfare Initiative:1.14 million fish potentially helped through welfare improvements1.4 million shrimp potentially helpedFamily Empowerment Media:15 million listeners reached in NigeriaIn the period overlapping with the campaign in Kano state (5.6 million people reached) the contraceptive uptake in the region increased by 75%, which corresponds to 250,000 new contraceptive users and an estimated 200 fewer maternal deaths related to unwanted pregnanciesLead Exposure Elimination Project:Policy changes implemented in Malawi alone are expected to reach 215,000 children. LEEP has launched 9 further paint programs, which they estimate will have a similar impact on averageShrimp Welfare Project:The program with MER Seafood (now in progress) can reach up to 125 million shrimp/year. Additional collaborations could reach >2.5 billion shrimp per annum2. They have fundraised over $22.5 million USD from grantmakers like GiveWell, Open Philanthropy, Mulago, Schmidt Futures, Animal Charity Evaluators, Grand Challenges Canada, and EA Animal Welfare Fund, amongst others. 3. If implemented at scale, they can reach impressive cost-effectiveness. For example:Family Empowerment Media: the intervention can potentially be 22x more effective than cash transfers from GiveDirectly (estimated by the team, 26x estimated by Founders Pledge)Fish Welfare Initiative: 1.3 fish or 2 shrimp potentially helped per $1 (estimated by the team, ACE assessed FWI cost-effectiveness as high to very high)Shrimp Welfare Project: approximately 625 shrimp potentially helped per $1 (estimated by the team)Suvita: when delivered at scale, effectiveness is at a similar range to GiveWell’s top charities (estimated by external organizations, e.g. Founders Pledge, data on this will be available later this year)Giving G...]]>
Mon, 06 Mar 2023 12:46:56 +0000 EA - After launch. How are CE charities progressing? by Ula Zarosa Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: After launch. How are CE charities progressing?, published by Ula Zarosa on March 6, 2023 on The Effective Altruism Forum.TL;DR: Charity Entrepreneurship have helped to kick-start 23 impact-focused nonprofits in four years. We believe that starting more effective charities is the most impactful thing we can do. Our charities have surpassed expectations, and in this post we provide an update on their progress and achievements to date.About CEAt Charity Entrepreneurship (CE) we launch high-impact nonprofits by connecting entrepreneurs with the effective ideas, training and funding needed to launch and succeed. We provide:Seed grants (ranging from $50,000 to $200,000 per project)In-depth research reports with promising charity ideasTwo months of intensive trainingCo-founder matching (this is particularly important)StipendsCo-working space in LondonOngoing connection to the CE Community (~100 founders, funders and mentors)(Applications are now open to our 2023/2024 programs, apply by March 12, 2023).We estimate that on average:40% of our charities reach or exceed the cost-effectiveness of the strongest charities in their fields (e.g., GiveWell/ACE recommended).40% are in a steady state. This means they are having impact, but not at the GiveWell-recommendation level yet, or their cost-effectiveness is currently less clear-cut (all new charities start in this category for their first year).20% have already shut down or might in the future.General updateTo date, our CE Seed Network has provided our charities with $1.88 million in launch grants. Based on the updates provided by our charities in Jan 2023, we estimate that:1. They have meaningfully reached over 15 million people, and have the potential to soon reach up to 2.5 billion animals annually with their programs. For example:Suvita:Reached 600,000 families with vaccination reminders, 50,000 families reached by immunization ambassadors, and 95,000 women with pregnancy care reminders14,000 additional children vaccinatedFish Welfare Initiative:1.14 million fish potentially helped through welfare improvements1.4 million shrimp potentially helpedFamily Empowerment Media:15 million listeners reached in NigeriaIn the period overlapping with the campaign in Kano state (5.6 million people reached) the contraceptive uptake in the region increased by 75%, which corresponds to 250,000 new contraceptive users and an estimated 200 fewer maternal deaths related to unwanted pregnanciesLead Exposure Elimination Project:Policy changes implemented in Malawi alone are expected to reach 215,000 children. LEEP has launched 9 further paint programs, which they estimate will have a similar impact on averageShrimp Welfare Project:The program with MER Seafood (now in progress) can reach up to 125 million shrimp/year. Additional collaborations could reach >2.5 billion shrimp per annum2. They have fundraised over $22.5 million USD from grantmakers like GiveWell, Open Philanthropy, Mulago, Schmidt Futures, Animal Charity Evaluators, Grand Challenges Canada, and EA Animal Welfare Fund, amongst others. 3. If implemented at scale, they can reach impressive cost-effectiveness. For example:Family Empowerment Media: the intervention can potentially be 22x more effective than cash transfers from GiveDirectly (estimated by the team, 26x estimated by Founders Pledge)Fish Welfare Initiative: 1.3 fish or 2 shrimp potentially helped per $1 (estimated by the team, ACE assessed FWI cost-effectiveness as high to very high)Shrimp Welfare Project: approximately 625 shrimp potentially helped per $1 (estimated by the team)Suvita: when delivered at scale, effectiveness is at a similar range to GiveWell’s top charities (estimated by external organizations, e.g. Founders Pledge, data on this will be available later this year)Giving G...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: After launch. How are CE charities progressing?, published by Ula Zarosa on March 6, 2023 on The Effective Altruism Forum.TL;DR: Charity Entrepreneurship have helped to kick-start 23 impact-focused nonprofits in four years. We believe that starting more effective charities is the most impactful thing we can do. Our charities have surpassed expectations, and in this post we provide an update on their progress and achievements to date.About CEAt Charity Entrepreneurship (CE) we launch high-impact nonprofits by connecting entrepreneurs with the effective ideas, training and funding needed to launch and succeed. We provide:Seed grants (ranging from $50,000 to $200,000 per project)In-depth research reports with promising charity ideasTwo months of intensive trainingCo-founder matching (this is particularly important)StipendsCo-working space in LondonOngoing connection to the CE Community (~100 founders, funders and mentors)(Applications are now open to our 2023/2024 programs, apply by March 12, 2023).We estimate that on average:40% of our charities reach or exceed the cost-effectiveness of the strongest charities in their fields (e.g., GiveWell/ACE recommended).40% are in a steady state. This means they are having impact, but not at the GiveWell-recommendation level yet, or their cost-effectiveness is currently less clear-cut (all new charities start in this category for their first year).20% have already shut down or might in the future.General updateTo date, our CE Seed Network has provided our charities with $1.88 million in launch grants. Based on the updates provided by our charities in Jan 2023, we estimate that:1. They have meaningfully reached over 15 million people, and have the potential to soon reach up to 2.5 billion animals annually with their programs. For example:Suvita:Reached 600,000 families with vaccination reminders, 50,000 families reached by immunization ambassadors, and 95,000 women with pregnancy care reminders14,000 additional children vaccinatedFish Welfare Initiative:1.14 million fish potentially helped through welfare improvements1.4 million shrimp potentially helpedFamily Empowerment Media:15 million listeners reached in NigeriaIn the period overlapping with the campaign in Kano state (5.6 million people reached) the contraceptive uptake in the region increased by 75%, which corresponds to 250,000 new contraceptive users and an estimated 200 fewer maternal deaths related to unwanted pregnanciesLead Exposure Elimination Project:Policy changes implemented in Malawi alone are expected to reach 215,000 children. LEEP has launched 9 further paint programs, which they estimate will have a similar impact on averageShrimp Welfare Project:The program with MER Seafood (now in progress) can reach up to 125 million shrimp/year. Additional collaborations could reach >2.5 billion shrimp per annum2. They have fundraised over $22.5 million USD from grantmakers like GiveWell, Open Philanthropy, Mulago, Schmidt Futures, Animal Charity Evaluators, Grand Challenges Canada, and EA Animal Welfare Fund, amongst others. 3. If implemented at scale, they can reach impressive cost-effectiveness. For example:Family Empowerment Media: the intervention can potentially be 22x more effective than cash transfers from GiveDirectly (estimated by the team, 26x estimated by Founders Pledge)Fish Welfare Initiative: 1.3 fish or 2 shrimp potentially helped per $1 (estimated by the team, ACE assessed FWI cost-effectiveness as high to very high)Shrimp Welfare Project: approximately 625 shrimp potentially helped per $1 (estimated by the team)Suvita: when delivered at scale, effectiveness is at a similar range to GiveWell’s top charities (estimated by external organizations, e.g. Founders Pledge, data on this will be available later this year)Giving G...]]>
Ula Zarosa https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:19 None full 5128
DztcCwrAGo6gzCp3o_NL_EA_EA EA - On the First Anniversary of my Best Friend’s Death by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the First Anniversary of my Best Friend’s Death, published by Rockwell on March 6, 2023 on The Effective Altruism Forum.Thanks to encouragement from several people in the EA community, I've just started a blog. This is the first post: www.rockwellschwartz.com/blog/on-the-first-anniversary-of-my-best-friends-deathThe title likely makes this clear, but this post discusses death, suffering, and grief. You may not want to read it as a result, or you may want to utilize mental health resources.Some weeks back, I had the opportunity to give a presentation for Yale’s undergraduate course, “A Life Worth Living”. As I assembled my PowerPoint—explaining the Importance, Tractability, Neglectedness framework; Against Malaria Foundation; and global catastrophic threats—I felt the strong desire to pivot and include this photo:It was taken sometime in 2019 in my Brooklyn basement and depicts two baby roosters perched upon two of my human best friends, Maddie (left) and Alexa (right). One year ago today, Alexa died at age 25. This is my attempt to honor a tragic anniversary and, more so, a life that was very worth living.I’m sure you’re curious, so I’ll get it out of the way: The circumstances surrounding their death remain unclear, even as their family continues to seek the truth. I made a long list of open questions a year ago and, to my knowledge, most remain unanswered today. What I do know is that Alexa suffered greatly throughout their short-lived 25 years. And I also know that Alexa still did far more good than many who live far less arduous lives for thrice as long.That’s what I want to talk about here: Alexa, the altruist.Alexa, my best friend, roommate, codefendant, and rescue and caregiving partner.Alexa, cooing in the kitchen, milk-dipped paintbrush in hand, feeding an orphaned baby rat rescued from the city streets.Alexa, in a dark parking lot somewhere in Idaho, warming a bag of fluids against the car heater before carefully injecting them into an ill chicken.Alexa, pouring over medical reference books on the kitchen floor, searching for a treatment for sick guppies.Alexa, stopping when no one else stopped–calling for help when no one else called–as countless subway riders walked over the unconscious man on the cement floor.Alexa, hopping fences, climbing trees, walking through blood-soaked streets, bleary-eyed and exhausted but still going, going. Alexa, saving lives.Alexa, saving so many lives. Thousands. From childhood, through their last weeks. In dog shelters, slaughterhouses, and the wild. Everywhere they went.Alexa, walking the streets of Philadelphia, gently collecting invasive spotted lantern flies before bringing them home to a lush butterfly enclosure, carefully monitoring their energy levels and food. Alexa, caring for 322 spotted lantern flies until they passed naturally come winter.Alexa, the caregiver. Alexa, the life-giver.Alexa directly aided so many individuals over the years, I don’t think any one person is aware of even half those they helped. Their efforts were relentless but shockingly low-profile. They were far more likely to share a success to spotlight the wonders of the individual they aided than their heroic efforts to bring them to safety. And, painfully, they were also much more likely to dwell on the errors, accidents, or unavoidable heartbreaking outcomes inherent to the act of staving off suffering and dodging death. Alexa’s deep compassion caused them equally deep pain. And when Alexa and I ultimately distanced, it was to evade the deep void of grief too great to bear that lay between us. I know the pain Alexa carried because I do too.Sometimes, the pain that binds you to another becomes the pain you run from, and you never get the chance to go back and shoulder their pain in turn.Alexa had a bias: Do. And do fearles...]]>
Rockwell https://forum.effectivealtruism.org/posts/DztcCwrAGo6gzCp3o/on-the-first-anniversary-of-my-best-friend-s-death Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the First Anniversary of my Best Friend’s Death, published by Rockwell on March 6, 2023 on The Effective Altruism Forum.Thanks to encouragement from several people in the EA community, I've just started a blog. This is the first post: www.rockwellschwartz.com/blog/on-the-first-anniversary-of-my-best-friends-deathThe title likely makes this clear, but this post discusses death, suffering, and grief. You may not want to read it as a result, or you may want to utilize mental health resources.Some weeks back, I had the opportunity to give a presentation for Yale’s undergraduate course, “A Life Worth Living”. As I assembled my PowerPoint—explaining the Importance, Tractability, Neglectedness framework; Against Malaria Foundation; and global catastrophic threats—I felt the strong desire to pivot and include this photo:It was taken sometime in 2019 in my Brooklyn basement and depicts two baby roosters perched upon two of my human best friends, Maddie (left) and Alexa (right). One year ago today, Alexa died at age 25. This is my attempt to honor a tragic anniversary and, more so, a life that was very worth living.I’m sure you’re curious, so I’ll get it out of the way: The circumstances surrounding their death remain unclear, even as their family continues to seek the truth. I made a long list of open questions a year ago and, to my knowledge, most remain unanswered today. What I do know is that Alexa suffered greatly throughout their short-lived 25 years. And I also know that Alexa still did far more good than many who live far less arduous lives for thrice as long.That’s what I want to talk about here: Alexa, the altruist.Alexa, my best friend, roommate, codefendant, and rescue and caregiving partner.Alexa, cooing in the kitchen, milk-dipped paintbrush in hand, feeding an orphaned baby rat rescued from the city streets.Alexa, in a dark parking lot somewhere in Idaho, warming a bag of fluids against the car heater before carefully injecting them into an ill chicken.Alexa, pouring over medical reference books on the kitchen floor, searching for a treatment for sick guppies.Alexa, stopping when no one else stopped–calling for help when no one else called–as countless subway riders walked over the unconscious man on the cement floor.Alexa, hopping fences, climbing trees, walking through blood-soaked streets, bleary-eyed and exhausted but still going, going. Alexa, saving lives.Alexa, saving so many lives. Thousands. From childhood, through their last weeks. In dog shelters, slaughterhouses, and the wild. Everywhere they went.Alexa, walking the streets of Philadelphia, gently collecting invasive spotted lantern flies before bringing them home to a lush butterfly enclosure, carefully monitoring their energy levels and food. Alexa, caring for 322 spotted lantern flies until they passed naturally come winter.Alexa, the caregiver. Alexa, the life-giver.Alexa directly aided so many individuals over the years, I don’t think any one person is aware of even half those they helped. Their efforts were relentless but shockingly low-profile. They were far more likely to share a success to spotlight the wonders of the individual they aided than their heroic efforts to bring them to safety. And, painfully, they were also much more likely to dwell on the errors, accidents, or unavoidable heartbreaking outcomes inherent to the act of staving off suffering and dodging death. Alexa’s deep compassion caused them equally deep pain. And when Alexa and I ultimately distanced, it was to evade the deep void of grief too great to bear that lay between us. I know the pain Alexa carried because I do too.Sometimes, the pain that binds you to another becomes the pain you run from, and you never get the chance to go back and shoulder their pain in turn.Alexa had a bias: Do. And do fearles...]]>
Mon, 06 Mar 2023 08:22:16 +0000 EA - On the First Anniversary of my Best Friend’s Death by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the First Anniversary of my Best Friend’s Death, published by Rockwell on March 6, 2023 on The Effective Altruism Forum.Thanks to encouragement from several people in the EA community, I've just started a blog. This is the first post: www.rockwellschwartz.com/blog/on-the-first-anniversary-of-my-best-friends-deathThe title likely makes this clear, but this post discusses death, suffering, and grief. You may not want to read it as a result, or you may want to utilize mental health resources.Some weeks back, I had the opportunity to give a presentation for Yale’s undergraduate course, “A Life Worth Living”. As I assembled my PowerPoint—explaining the Importance, Tractability, Neglectedness framework; Against Malaria Foundation; and global catastrophic threats—I felt the strong desire to pivot and include this photo:It was taken sometime in 2019 in my Brooklyn basement and depicts two baby roosters perched upon two of my human best friends, Maddie (left) and Alexa (right). One year ago today, Alexa died at age 25. This is my attempt to honor a tragic anniversary and, more so, a life that was very worth living.I’m sure you’re curious, so I’ll get it out of the way: The circumstances surrounding their death remain unclear, even as their family continues to seek the truth. I made a long list of open questions a year ago and, to my knowledge, most remain unanswered today. What I do know is that Alexa suffered greatly throughout their short-lived 25 years. And I also know that Alexa still did far more good than many who live far less arduous lives for thrice as long.That’s what I want to talk about here: Alexa, the altruist.Alexa, my best friend, roommate, codefendant, and rescue and caregiving partner.Alexa, cooing in the kitchen, milk-dipped paintbrush in hand, feeding an orphaned baby rat rescued from the city streets.Alexa, in a dark parking lot somewhere in Idaho, warming a bag of fluids against the car heater before carefully injecting them into an ill chicken.Alexa, pouring over medical reference books on the kitchen floor, searching for a treatment for sick guppies.Alexa, stopping when no one else stopped–calling for help when no one else called–as countless subway riders walked over the unconscious man on the cement floor.Alexa, hopping fences, climbing trees, walking through blood-soaked streets, bleary-eyed and exhausted but still going, going. Alexa, saving lives.Alexa, saving so many lives. Thousands. From childhood, through their last weeks. In dog shelters, slaughterhouses, and the wild. Everywhere they went.Alexa, walking the streets of Philadelphia, gently collecting invasive spotted lantern flies before bringing them home to a lush butterfly enclosure, carefully monitoring their energy levels and food. Alexa, caring for 322 spotted lantern flies until they passed naturally come winter.Alexa, the caregiver. Alexa, the life-giver.Alexa directly aided so many individuals over the years, I don’t think any one person is aware of even half those they helped. Their efforts were relentless but shockingly low-profile. They were far more likely to share a success to spotlight the wonders of the individual they aided than their heroic efforts to bring them to safety. And, painfully, they were also much more likely to dwell on the errors, accidents, or unavoidable heartbreaking outcomes inherent to the act of staving off suffering and dodging death. Alexa’s deep compassion caused them equally deep pain. And when Alexa and I ultimately distanced, it was to evade the deep void of grief too great to bear that lay between us. I know the pain Alexa carried because I do too.Sometimes, the pain that binds you to another becomes the pain you run from, and you never get the chance to go back and shoulder their pain in turn.Alexa had a bias: Do. And do fearles...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the First Anniversary of my Best Friend’s Death, published by Rockwell on March 6, 2023 on The Effective Altruism Forum.Thanks to encouragement from several people in the EA community, I've just started a blog. This is the first post: www.rockwellschwartz.com/blog/on-the-first-anniversary-of-my-best-friends-deathThe title likely makes this clear, but this post discusses death, suffering, and grief. You may not want to read it as a result, or you may want to utilize mental health resources.Some weeks back, I had the opportunity to give a presentation for Yale’s undergraduate course, “A Life Worth Living”. As I assembled my PowerPoint—explaining the Importance, Tractability, Neglectedness framework; Against Malaria Foundation; and global catastrophic threats—I felt the strong desire to pivot and include this photo:It was taken sometime in 2019 in my Brooklyn basement and depicts two baby roosters perched upon two of my human best friends, Maddie (left) and Alexa (right). One year ago today, Alexa died at age 25. This is my attempt to honor a tragic anniversary and, more so, a life that was very worth living.I’m sure you’re curious, so I’ll get it out of the way: The circumstances surrounding their death remain unclear, even as their family continues to seek the truth. I made a long list of open questions a year ago and, to my knowledge, most remain unanswered today. What I do know is that Alexa suffered greatly throughout their short-lived 25 years. And I also know that Alexa still did far more good than many who live far less arduous lives for thrice as long.That’s what I want to talk about here: Alexa, the altruist.Alexa, my best friend, roommate, codefendant, and rescue and caregiving partner.Alexa, cooing in the kitchen, milk-dipped paintbrush in hand, feeding an orphaned baby rat rescued from the city streets.Alexa, in a dark parking lot somewhere in Idaho, warming a bag of fluids against the car heater before carefully injecting them into an ill chicken.Alexa, pouring over medical reference books on the kitchen floor, searching for a treatment for sick guppies.Alexa, stopping when no one else stopped–calling for help when no one else called–as countless subway riders walked over the unconscious man on the cement floor.Alexa, hopping fences, climbing trees, walking through blood-soaked streets, bleary-eyed and exhausted but still going, going. Alexa, saving lives.Alexa, saving so many lives. Thousands. From childhood, through their last weeks. In dog shelters, slaughterhouses, and the wild. Everywhere they went.Alexa, walking the streets of Philadelphia, gently collecting invasive spotted lantern flies before bringing them home to a lush butterfly enclosure, carefully monitoring their energy levels and food. Alexa, caring for 322 spotted lantern flies until they passed naturally come winter.Alexa, the caregiver. Alexa, the life-giver.Alexa directly aided so many individuals over the years, I don’t think any one person is aware of even half those they helped. Their efforts were relentless but shockingly low-profile. They were far more likely to share a success to spotlight the wonders of the individual they aided than their heroic efforts to bring them to safety. And, painfully, they were also much more likely to dwell on the errors, accidents, or unavoidable heartbreaking outcomes inherent to the act of staving off suffering and dodging death. Alexa’s deep compassion caused them equally deep pain. And when Alexa and I ultimately distanced, it was to evade the deep void of grief too great to bear that lay between us. I know the pain Alexa carried because I do too.Sometimes, the pain that binds you to another becomes the pain you run from, and you never get the chance to go back and shoulder their pain in turn.Alexa had a bias: Do. And do fearles...]]>
Rockwell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:35 None full 5116
zxrBi4tzKwq2eNYKm_NL_EA_EA EA - EA Infosec: skill up in or make a transition to infosec via this book club by Jason Clinton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Infosec: skill up in or make a transition to infosec via this book club, published by Jason Clinton on March 5, 2023 on The Effective Altruism Forum.Ahoy! Our community has become acutely aware of the need for skilled infosec folks to help out in all cause areas. The market conditions are that information security skilled individuals are in shorter supply than demand. This book club aims to remedy that problem.I have been leading the Chrome Infrastructure Security team at Google for 3 years, have 11 years of infosec experience, and 24 years of career experience. My team’s current focus includes APT and insider defense. I built that team with a mix of folks with infosec skills—yes—but the team is also made up of individuals who were strong general software engineers who had an interest in security. I applied this book and a comprehensive, 18 month training program to transition those folks to infosec and that has been successful. Reading this book as a book club is the first 5 months of that program. So, while this book club is not sufficient to make a career transition to infosec, it is a significant first step in doing so.The goal of this group and our meetings is to teach infosec practices, engineering, and policies to those who are interested in learning them, and to refresh and fill in gaps in those who are already in the infosec focus area.Find the book as a free PDF or via these links. From the book reviews:This book is the first to really capture the knowledge of some of the best security and reliability teams in the world, and while very few companies will need to operate at Google’s scale many engineers and operators can benefit from some of the hard-earned lessons on securing wide-flung distributed systems. This book is full of useful insights from cover to cover, and each example and anecdote is heavy with authenticity and the wisdom that comes from experimenting, failing and measuring real outcomes at scale. It is a must for anybody looking to build their systems the correct way from day one.This is a dry, information-dense book. But it also contains a comprehensive manual for how to implement what is widely considered the most secure company in the world.AudienceAny software engineer who is curious about becoming security engineering focused or anyone looking to up their existing infosec career path. It is beyond the level of new bachelor’s graduates. However, anyone with 3-ish years of engineering practice on real-world engineering systems should be able to keep up. A person with a CompSci masters degree but no hands-on experience might also be ready to join.OpennessDirected to anyone who considers themselves EA-aligned. Will discuss publicly known exploits and news stories, as they relate to the book contents, and avoid confidential cases from private orgs. Will discuss applicability to various aspects of EA-aligned work across all cause areas.Format, length, time and signupMeet for 1 hour on Google Meet every 2 weeks where we will discuss 2 chapters. ~11 meetings over 22 weeks.The meetings will be facilitated by me.The discussion format will be:The facilitator will select a theme from the chapters, in order, and then prompt the participants to offer their perspective, ensuring that everyone has ample opportunity to participate, if they choose.Discussion on each theme will continue for 5-10 minutes and then proceed to the next theme. Participants should offer any relevant, current news or applicability to cause areas, if time permits.The facilitator will ensure that discussion is relevant and move the conversation along to the next topic, being mindful of the time limit.Any threads that warrant more discussion than we have time for in the call will be taken to the Slack channel for the book club (see form below for invite) where pa...]]>
Jason Clinton https://forum.effectivealtruism.org/posts/zxrBi4tzKwq2eNYKm/ea-infosec-skill-up-in-or-make-a-transition-to-infosec-via Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Infosec: skill up in or make a transition to infosec via this book club, published by Jason Clinton on March 5, 2023 on The Effective Altruism Forum.Ahoy! Our community has become acutely aware of the need for skilled infosec folks to help out in all cause areas. The market conditions are that information security skilled individuals are in shorter supply than demand. This book club aims to remedy that problem.I have been leading the Chrome Infrastructure Security team at Google for 3 years, have 11 years of infosec experience, and 24 years of career experience. My team’s current focus includes APT and insider defense. I built that team with a mix of folks with infosec skills—yes—but the team is also made up of individuals who were strong general software engineers who had an interest in security. I applied this book and a comprehensive, 18 month training program to transition those folks to infosec and that has been successful. Reading this book as a book club is the first 5 months of that program. So, while this book club is not sufficient to make a career transition to infosec, it is a significant first step in doing so.The goal of this group and our meetings is to teach infosec practices, engineering, and policies to those who are interested in learning them, and to refresh and fill in gaps in those who are already in the infosec focus area.Find the book as a free PDF or via these links. From the book reviews:This book is the first to really capture the knowledge of some of the best security and reliability teams in the world, and while very few companies will need to operate at Google’s scale many engineers and operators can benefit from some of the hard-earned lessons on securing wide-flung distributed systems. This book is full of useful insights from cover to cover, and each example and anecdote is heavy with authenticity and the wisdom that comes from experimenting, failing and measuring real outcomes at scale. It is a must for anybody looking to build their systems the correct way from day one.This is a dry, information-dense book. But it also contains a comprehensive manual for how to implement what is widely considered the most secure company in the world.AudienceAny software engineer who is curious about becoming security engineering focused or anyone looking to up their existing infosec career path. It is beyond the level of new bachelor’s graduates. However, anyone with 3-ish years of engineering practice on real-world engineering systems should be able to keep up. A person with a CompSci masters degree but no hands-on experience might also be ready to join.OpennessDirected to anyone who considers themselves EA-aligned. Will discuss publicly known exploits and news stories, as they relate to the book contents, and avoid confidential cases from private orgs. Will discuss applicability to various aspects of EA-aligned work across all cause areas.Format, length, time and signupMeet for 1 hour on Google Meet every 2 weeks where we will discuss 2 chapters. ~11 meetings over 22 weeks.The meetings will be facilitated by me.The discussion format will be:The facilitator will select a theme from the chapters, in order, and then prompt the participants to offer their perspective, ensuring that everyone has ample opportunity to participate, if they choose.Discussion on each theme will continue for 5-10 minutes and then proceed to the next theme. Participants should offer any relevant, current news or applicability to cause areas, if time permits.The facilitator will ensure that discussion is relevant and move the conversation along to the next topic, being mindful of the time limit.Any threads that warrant more discussion than we have time for in the call will be taken to the Slack channel for the book club (see form below for invite) where pa...]]>
Sun, 05 Mar 2023 23:11:25 +0000 EA - EA Infosec: skill up in or make a transition to infosec via this book club by Jason Clinton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Infosec: skill up in or make a transition to infosec via this book club, published by Jason Clinton on March 5, 2023 on The Effective Altruism Forum.Ahoy! Our community has become acutely aware of the need for skilled infosec folks to help out in all cause areas. The market conditions are that information security skilled individuals are in shorter supply than demand. This book club aims to remedy that problem.I have been leading the Chrome Infrastructure Security team at Google for 3 years, have 11 years of infosec experience, and 24 years of career experience. My team’s current focus includes APT and insider defense. I built that team with a mix of folks with infosec skills—yes—but the team is also made up of individuals who were strong general software engineers who had an interest in security. I applied this book and a comprehensive, 18 month training program to transition those folks to infosec and that has been successful. Reading this book as a book club is the first 5 months of that program. So, while this book club is not sufficient to make a career transition to infosec, it is a significant first step in doing so.The goal of this group and our meetings is to teach infosec practices, engineering, and policies to those who are interested in learning them, and to refresh and fill in gaps in those who are already in the infosec focus area.Find the book as a free PDF or via these links. From the book reviews:This book is the first to really capture the knowledge of some of the best security and reliability teams in the world, and while very few companies will need to operate at Google’s scale many engineers and operators can benefit from some of the hard-earned lessons on securing wide-flung distributed systems. This book is full of useful insights from cover to cover, and each example and anecdote is heavy with authenticity and the wisdom that comes from experimenting, failing and measuring real outcomes at scale. It is a must for anybody looking to build their systems the correct way from day one.This is a dry, information-dense book. But it also contains a comprehensive manual for how to implement what is widely considered the most secure company in the world.AudienceAny software engineer who is curious about becoming security engineering focused or anyone looking to up their existing infosec career path. It is beyond the level of new bachelor’s graduates. However, anyone with 3-ish years of engineering practice on real-world engineering systems should be able to keep up. A person with a CompSci masters degree but no hands-on experience might also be ready to join.OpennessDirected to anyone who considers themselves EA-aligned. Will discuss publicly known exploits and news stories, as they relate to the book contents, and avoid confidential cases from private orgs. Will discuss applicability to various aspects of EA-aligned work across all cause areas.Format, length, time and signupMeet for 1 hour on Google Meet every 2 weeks where we will discuss 2 chapters. ~11 meetings over 22 weeks.The meetings will be facilitated by me.The discussion format will be:The facilitator will select a theme from the chapters, in order, and then prompt the participants to offer their perspective, ensuring that everyone has ample opportunity to participate, if they choose.Discussion on each theme will continue for 5-10 minutes and then proceed to the next theme. Participants should offer any relevant, current news or applicability to cause areas, if time permits.The facilitator will ensure that discussion is relevant and move the conversation along to the next topic, being mindful of the time limit.Any threads that warrant more discussion than we have time for in the call will be taken to the Slack channel for the book club (see form below for invite) where pa...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Infosec: skill up in or make a transition to infosec via this book club, published by Jason Clinton on March 5, 2023 on The Effective Altruism Forum.Ahoy! Our community has become acutely aware of the need for skilled infosec folks to help out in all cause areas. The market conditions are that information security skilled individuals are in shorter supply than demand. This book club aims to remedy that problem.I have been leading the Chrome Infrastructure Security team at Google for 3 years, have 11 years of infosec experience, and 24 years of career experience. My team’s current focus includes APT and insider defense. I built that team with a mix of folks with infosec skills—yes—but the team is also made up of individuals who were strong general software engineers who had an interest in security. I applied this book and a comprehensive, 18 month training program to transition those folks to infosec and that has been successful. Reading this book as a book club is the first 5 months of that program. So, while this book club is not sufficient to make a career transition to infosec, it is a significant first step in doing so.The goal of this group and our meetings is to teach infosec practices, engineering, and policies to those who are interested in learning them, and to refresh and fill in gaps in those who are already in the infosec focus area.Find the book as a free PDF or via these links. From the book reviews:This book is the first to really capture the knowledge of some of the best security and reliability teams in the world, and while very few companies will need to operate at Google’s scale many engineers and operators can benefit from some of the hard-earned lessons on securing wide-flung distributed systems. This book is full of useful insights from cover to cover, and each example and anecdote is heavy with authenticity and the wisdom that comes from experimenting, failing and measuring real outcomes at scale. It is a must for anybody looking to build their systems the correct way from day one.This is a dry, information-dense book. But it also contains a comprehensive manual for how to implement what is widely considered the most secure company in the world.AudienceAny software engineer who is curious about becoming security engineering focused or anyone looking to up their existing infosec career path. It is beyond the level of new bachelor’s graduates. However, anyone with 3-ish years of engineering practice on real-world engineering systems should be able to keep up. A person with a CompSci masters degree but no hands-on experience might also be ready to join.OpennessDirected to anyone who considers themselves EA-aligned. Will discuss publicly known exploits and news stories, as they relate to the book contents, and avoid confidential cases from private orgs. Will discuss applicability to various aspects of EA-aligned work across all cause areas.Format, length, time and signupMeet for 1 hour on Google Meet every 2 weeks where we will discuss 2 chapters. ~11 meetings over 22 weeks.The meetings will be facilitated by me.The discussion format will be:The facilitator will select a theme from the chapters, in order, and then prompt the participants to offer their perspective, ensuring that everyone has ample opportunity to participate, if they choose.Discussion on each theme will continue for 5-10 minutes and then proceed to the next theme. Participants should offer any relevant, current news or applicability to cause areas, if time permits.The facilitator will ensure that discussion is relevant and move the conversation along to the next topic, being mindful of the time limit.Any threads that warrant more discussion than we have time for in the call will be taken to the Slack channel for the book club (see form below for invite) where pa...]]>
Jason Clinton https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:22 None full 5117
rydDnatJdCmFdEZKR_NL_EA_EA EA - The Role of a Non-Profit Board by Grayden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Role of a Non-Profit Board, published by Grayden on March 4, 2023 on The Effective Altruism Forum.BackgroundThis is a cross-post from the website of the EA Good Governance Project. It is shared here for the purposes of ensuring it reaches a wider audience and to invite comment.The content is largely consensus best-practice adapted for an EA audience. Leveraging this content should help boards be more effective, but governance is complex, subjective and context dependent, so there is never a right answer.IntroductionThe responsibilities of a board can fall largely into three categories: governance, advisory and execution. Here, we explain how to work out what falls in which category and key considerations about whether the board should be involved.Essential: GovernanceThe Board comprises individuals who hold assets “on trust” for the beneficiaries. By default, all power is held by the board until delegated, and the board always remains responsible for ensuring the organization delivers on its objects.In practice, this means:Appointing the CEO, holding them to account and ensuring their weaknesses are compensated for;Taking important strategic decisions, especially those that would bind future CEOs;Evaluating organizational performance and testing the case for its existence; andEnsuring the board itself has the right composition and is performing strongly.Being good at governance doesn’t just mean having the right intentions; it requires strong people & organization skills, subject matter expertise and cognitive diversity.When founding a new non-profit, it is often easiest to fill the board with friends. However, if we are to hold ourselves up to the highest standards of rationality, we should seek to strengthen the board quickly.Optional: AdvisoryThe best leaders know when and where to get advice. This might be technical in areas where they are not strong, such as legal or accounting, or it might be executive coaching to help an individual build their own capabilities, e.g. people management.It is common for board members to fill this role. There is significant overlap in the skills required for governance and the skills that an organization might want advice on. For example, it is good for at least one member of the board to have accounting experience and a small organization might not know how to set up a bookkeeping system. Board members also already have the prerequisite knowledge of the organization, its people and its direction. However, there is no need for advisors to be on the Board.We recommend empowering the organization’s staff leadership to choose their own advisors. The best mentoring relationships are built informally over time with strong interpersonal fit. If these people are members of the board, that’s fine. If they are not, that’s also fine.The board should build itself for the purpose of governance. The executives should build a network of advisors. It is best to keep these things separate.Best Avoided: ExecutionIn some organizations, Board members get involved in day-to-day execution. This is particularly true of small and start-up organizations that might have organizational gaps. Tasks might include:Bookkeeping, financial reporting and budgetingFundraising and donor managementLine management of staff other than CEOAssisting at eventsWherever practical, this should be avoided. Tasks undertaken by Board members can reduce the Board’s independence and impede governance. The tasks themselves often lack proper accountability.If new opportunities, such as a potential new project or employee, come through Board members, they should be handed over to staff members asap. It’s a good idea for Board members to remove themselves from decision-making on such issues, especially if there is a conflict of loyalty or conflict of i...]]>
Grayden https://forum.effectivealtruism.org/posts/rydDnatJdCmFdEZKR/the-role-of-a-non-profit-board Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Role of a Non-Profit Board, published by Grayden on March 4, 2023 on The Effective Altruism Forum.BackgroundThis is a cross-post from the website of the EA Good Governance Project. It is shared here for the purposes of ensuring it reaches a wider audience and to invite comment.The content is largely consensus best-practice adapted for an EA audience. Leveraging this content should help boards be more effective, but governance is complex, subjective and context dependent, so there is never a right answer.IntroductionThe responsibilities of a board can fall largely into three categories: governance, advisory and execution. Here, we explain how to work out what falls in which category and key considerations about whether the board should be involved.Essential: GovernanceThe Board comprises individuals who hold assets “on trust” for the beneficiaries. By default, all power is held by the board until delegated, and the board always remains responsible for ensuring the organization delivers on its objects.In practice, this means:Appointing the CEO, holding them to account and ensuring their weaknesses are compensated for;Taking important strategic decisions, especially those that would bind future CEOs;Evaluating organizational performance and testing the case for its existence; andEnsuring the board itself has the right composition and is performing strongly.Being good at governance doesn’t just mean having the right intentions; it requires strong people & organization skills, subject matter expertise and cognitive diversity.When founding a new non-profit, it is often easiest to fill the board with friends. However, if we are to hold ourselves up to the highest standards of rationality, we should seek to strengthen the board quickly.Optional: AdvisoryThe best leaders know when and where to get advice. This might be technical in areas where they are not strong, such as legal or accounting, or it might be executive coaching to help an individual build their own capabilities, e.g. people management.It is common for board members to fill this role. There is significant overlap in the skills required for governance and the skills that an organization might want advice on. For example, it is good for at least one member of the board to have accounting experience and a small organization might not know how to set up a bookkeeping system. Board members also already have the prerequisite knowledge of the organization, its people and its direction. However, there is no need for advisors to be on the Board.We recommend empowering the organization’s staff leadership to choose their own advisors. The best mentoring relationships are built informally over time with strong interpersonal fit. If these people are members of the board, that’s fine. If they are not, that’s also fine.The board should build itself for the purpose of governance. The executives should build a network of advisors. It is best to keep these things separate.Best Avoided: ExecutionIn some organizations, Board members get involved in day-to-day execution. This is particularly true of small and start-up organizations that might have organizational gaps. Tasks might include:Bookkeeping, financial reporting and budgetingFundraising and donor managementLine management of staff other than CEOAssisting at eventsWherever practical, this should be avoided. Tasks undertaken by Board members can reduce the Board’s independence and impede governance. The tasks themselves often lack proper accountability.If new opportunities, such as a potential new project or employee, come through Board members, they should be handed over to staff members asap. It’s a good idea for Board members to remove themselves from decision-making on such issues, especially if there is a conflict of loyalty or conflict of i...]]>
Sun, 05 Mar 2023 19:30:18 +0000 EA - The Role of a Non-Profit Board by Grayden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Role of a Non-Profit Board, published by Grayden on March 4, 2023 on The Effective Altruism Forum.BackgroundThis is a cross-post from the website of the EA Good Governance Project. It is shared here for the purposes of ensuring it reaches a wider audience and to invite comment.The content is largely consensus best-practice adapted for an EA audience. Leveraging this content should help boards be more effective, but governance is complex, subjective and context dependent, so there is never a right answer.IntroductionThe responsibilities of a board can fall largely into three categories: governance, advisory and execution. Here, we explain how to work out what falls in which category and key considerations about whether the board should be involved.Essential: GovernanceThe Board comprises individuals who hold assets “on trust” for the beneficiaries. By default, all power is held by the board until delegated, and the board always remains responsible for ensuring the organization delivers on its objects.In practice, this means:Appointing the CEO, holding them to account and ensuring their weaknesses are compensated for;Taking important strategic decisions, especially those that would bind future CEOs;Evaluating organizational performance and testing the case for its existence; andEnsuring the board itself has the right composition and is performing strongly.Being good at governance doesn’t just mean having the right intentions; it requires strong people & organization skills, subject matter expertise and cognitive diversity.When founding a new non-profit, it is often easiest to fill the board with friends. However, if we are to hold ourselves up to the highest standards of rationality, we should seek to strengthen the board quickly.Optional: AdvisoryThe best leaders know when and where to get advice. This might be technical in areas where they are not strong, such as legal or accounting, or it might be executive coaching to help an individual build their own capabilities, e.g. people management.It is common for board members to fill this role. There is significant overlap in the skills required for governance and the skills that an organization might want advice on. For example, it is good for at least one member of the board to have accounting experience and a small organization might not know how to set up a bookkeeping system. Board members also already have the prerequisite knowledge of the organization, its people and its direction. However, there is no need for advisors to be on the Board.We recommend empowering the organization’s staff leadership to choose their own advisors. The best mentoring relationships are built informally over time with strong interpersonal fit. If these people are members of the board, that’s fine. If they are not, that’s also fine.The board should build itself for the purpose of governance. The executives should build a network of advisors. It is best to keep these things separate.Best Avoided: ExecutionIn some organizations, Board members get involved in day-to-day execution. This is particularly true of small and start-up organizations that might have organizational gaps. Tasks might include:Bookkeeping, financial reporting and budgetingFundraising and donor managementLine management of staff other than CEOAssisting at eventsWherever practical, this should be avoided. Tasks undertaken by Board members can reduce the Board’s independence and impede governance. The tasks themselves often lack proper accountability.If new opportunities, such as a potential new project or employee, come through Board members, they should be handed over to staff members asap. It’s a good idea for Board members to remove themselves from decision-making on such issues, especially if there is a conflict of loyalty or conflict of i...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Role of a Non-Profit Board, published by Grayden on March 4, 2023 on The Effective Altruism Forum.BackgroundThis is a cross-post from the website of the EA Good Governance Project. It is shared here for the purposes of ensuring it reaches a wider audience and to invite comment.The content is largely consensus best-practice adapted for an EA audience. Leveraging this content should help boards be more effective, but governance is complex, subjective and context dependent, so there is never a right answer.IntroductionThe responsibilities of a board can fall largely into three categories: governance, advisory and execution. Here, we explain how to work out what falls in which category and key considerations about whether the board should be involved.Essential: GovernanceThe Board comprises individuals who hold assets “on trust” for the beneficiaries. By default, all power is held by the board until delegated, and the board always remains responsible for ensuring the organization delivers on its objects.In practice, this means:Appointing the CEO, holding them to account and ensuring their weaknesses are compensated for;Taking important strategic decisions, especially those that would bind future CEOs;Evaluating organizational performance and testing the case for its existence; andEnsuring the board itself has the right composition and is performing strongly.Being good at governance doesn’t just mean having the right intentions; it requires strong people & organization skills, subject matter expertise and cognitive diversity.When founding a new non-profit, it is often easiest to fill the board with friends. However, if we are to hold ourselves up to the highest standards of rationality, we should seek to strengthen the board quickly.Optional: AdvisoryThe best leaders know when and where to get advice. This might be technical in areas where they are not strong, such as legal or accounting, or it might be executive coaching to help an individual build their own capabilities, e.g. people management.It is common for board members to fill this role. There is significant overlap in the skills required for governance and the skills that an organization might want advice on. For example, it is good for at least one member of the board to have accounting experience and a small organization might not know how to set up a bookkeeping system. Board members also already have the prerequisite knowledge of the organization, its people and its direction. However, there is no need for advisors to be on the Board.We recommend empowering the organization’s staff leadership to choose their own advisors. The best mentoring relationships are built informally over time with strong interpersonal fit. If these people are members of the board, that’s fine. If they are not, that’s also fine.The board should build itself for the purpose of governance. The executives should build a network of advisors. It is best to keep these things separate.Best Avoided: ExecutionIn some organizations, Board members get involved in day-to-day execution. This is particularly true of small and start-up organizations that might have organizational gaps. Tasks might include:Bookkeeping, financial reporting and budgetingFundraising and donor managementLine management of staff other than CEOAssisting at eventsWherever practical, this should be avoided. Tasks undertaken by Board members can reduce the Board’s independence and impede governance. The tasks themselves often lack proper accountability.If new opportunities, such as a potential new project or employee, come through Board members, they should be handed over to staff members asap. It’s a good idea for Board members to remove themselves from decision-making on such issues, especially if there is a conflict of loyalty or conflict of i...]]>
Grayden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:58 None full 5115
s8hpKwZ2Zox5ZBBEo_NL_EA_EA EA - Animal welfare certified meat is not a stepping stone to meat reduction or abolition by Stijn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal welfare certified meat is not a stepping stone to meat reduction or abolition, published by Stijn on March 4, 2023 on The Effective Altruism Forum.Summary: evidence from a survey suggests that campaigning for farm animal welfare reforms and promoting animal welfare certified meat could in the long run result in a suboptimal state of continued animal suffering and exploitation. Campaigns to reduce or eliminate animal-based meat and promote animal-free meat substitutes are probably more effective in the long run.Note: this research is not yet published in academic, peer-reviewed literature.The debate: welfarism versus abolitionismThere is an ongoing debate within the animal advocacy movement, between so-called welfarists or moderates on the one side and abolitionists or radicals on the other side. The welfarist camp aims for welfare improvements of farm animals. Stronger animal welfare laws are needed to reduce animal suffering. The abolitionists on the other hand, want to abolish the property status of animals. This means abolishing the exploitation of animals, eliminating animal farming and adopting an animal-free, vegan diet.The abolitionists are worried that the welfarist approach results in complacency, by soothing the conscience of meat eaters. They argue that people who eat meat produced with higher animal welfare standards might believe that eating such animal welfare certified meat is good and no further steps to further reduce farm animal suffering are needed. Those people will not take further steps towards animal welfare because of a believe one already does enough. Complacency could delay reaching the abolitionist’s desired goal, the abolition of the exploitation of animals. Animal welfare regulations are not enough, according to abolitionists, because they do not sufficiently reduce animal suffering. People will continue eating meat that is only slightly better in terms of animal welfare. In the long run, this results in more animal suffering compared to the situation where people adopted animal-free diets sooner.In extremis, the welfarist approach could backfire due to people engaging in moral balancing: eating animal welfare certified meat might decrease the likelihood of making animal welfare improving choices again later.Welfarists, on the other hand, argue that in the short run demanding abolition is politically or socially unfeasible, that demanding animal welfare improvements is more tractable, and that these welfare reforms can create a momentum for ever increasing public concern of animal welfare, resulting in eventual reduction and abolition of animal farming. According to welfarists, animal welfare reforms are a stepping stone to reduced meat consumption and veganism. Meat consumers will first switch to higher quality, ‘humane’ meat with improved animal welfare standards in production. And after a while, when this switch strengthens their concern for animal welfare and increases their meat expenditures (due to the higher price of animal welfare certified meat), they will reduce their meat consumption and eventually become vegetarian or vegan.The stepping-stone modelWho is right in this debate between abolitionists and welfarists? There is no strong empirical evidence in favor of one side or the other. But recently, economists developed an empirical method that can shed light on this issue: a stepping stone model of harmful social norms (Gulesci e.a., 2021). A social norm is a practice that is dominant in society. In its simplest form, the stepping stone model assumes three stones that represent three social states. The first stone represents the current state where people adopt a harmful social norm or costly practice L (for low value). In the example of food consumption, this state corresponds with the consumption of convention...]]>
Stijn https://forum.effectivealtruism.org/posts/s8hpKwZ2Zox5ZBBEo/animal-welfare-certified-meat-is-not-a-stepping-stone-to Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal welfare certified meat is not a stepping stone to meat reduction or abolition, published by Stijn on March 4, 2023 on The Effective Altruism Forum.Summary: evidence from a survey suggests that campaigning for farm animal welfare reforms and promoting animal welfare certified meat could in the long run result in a suboptimal state of continued animal suffering and exploitation. Campaigns to reduce or eliminate animal-based meat and promote animal-free meat substitutes are probably more effective in the long run.Note: this research is not yet published in academic, peer-reviewed literature.The debate: welfarism versus abolitionismThere is an ongoing debate within the animal advocacy movement, between so-called welfarists or moderates on the one side and abolitionists or radicals on the other side. The welfarist camp aims for welfare improvements of farm animals. Stronger animal welfare laws are needed to reduce animal suffering. The abolitionists on the other hand, want to abolish the property status of animals. This means abolishing the exploitation of animals, eliminating animal farming and adopting an animal-free, vegan diet.The abolitionists are worried that the welfarist approach results in complacency, by soothing the conscience of meat eaters. They argue that people who eat meat produced with higher animal welfare standards might believe that eating such animal welfare certified meat is good and no further steps to further reduce farm animal suffering are needed. Those people will not take further steps towards animal welfare because of a believe one already does enough. Complacency could delay reaching the abolitionist’s desired goal, the abolition of the exploitation of animals. Animal welfare regulations are not enough, according to abolitionists, because they do not sufficiently reduce animal suffering. People will continue eating meat that is only slightly better in terms of animal welfare. In the long run, this results in more animal suffering compared to the situation where people adopted animal-free diets sooner.In extremis, the welfarist approach could backfire due to people engaging in moral balancing: eating animal welfare certified meat might decrease the likelihood of making animal welfare improving choices again later.Welfarists, on the other hand, argue that in the short run demanding abolition is politically or socially unfeasible, that demanding animal welfare improvements is more tractable, and that these welfare reforms can create a momentum for ever increasing public concern of animal welfare, resulting in eventual reduction and abolition of animal farming. According to welfarists, animal welfare reforms are a stepping stone to reduced meat consumption and veganism. Meat consumers will first switch to higher quality, ‘humane’ meat with improved animal welfare standards in production. And after a while, when this switch strengthens their concern for animal welfare and increases their meat expenditures (due to the higher price of animal welfare certified meat), they will reduce their meat consumption and eventually become vegetarian or vegan.The stepping-stone modelWho is right in this debate between abolitionists and welfarists? There is no strong empirical evidence in favor of one side or the other. But recently, economists developed an empirical method that can shed light on this issue: a stepping stone model of harmful social norms (Gulesci e.a., 2021). A social norm is a practice that is dominant in society. In its simplest form, the stepping stone model assumes three stones that represent three social states. The first stone represents the current state where people adopt a harmful social norm or costly practice L (for low value). In the example of food consumption, this state corresponds with the consumption of convention...]]>
Sun, 05 Mar 2023 02:59:43 +0000 EA - Animal welfare certified meat is not a stepping stone to meat reduction or abolition by Stijn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal welfare certified meat is not a stepping stone to meat reduction or abolition, published by Stijn on March 4, 2023 on The Effective Altruism Forum.Summary: evidence from a survey suggests that campaigning for farm animal welfare reforms and promoting animal welfare certified meat could in the long run result in a suboptimal state of continued animal suffering and exploitation. Campaigns to reduce or eliminate animal-based meat and promote animal-free meat substitutes are probably more effective in the long run.Note: this research is not yet published in academic, peer-reviewed literature.The debate: welfarism versus abolitionismThere is an ongoing debate within the animal advocacy movement, between so-called welfarists or moderates on the one side and abolitionists or radicals on the other side. The welfarist camp aims for welfare improvements of farm animals. Stronger animal welfare laws are needed to reduce animal suffering. The abolitionists on the other hand, want to abolish the property status of animals. This means abolishing the exploitation of animals, eliminating animal farming and adopting an animal-free, vegan diet.The abolitionists are worried that the welfarist approach results in complacency, by soothing the conscience of meat eaters. They argue that people who eat meat produced with higher animal welfare standards might believe that eating such animal welfare certified meat is good and no further steps to further reduce farm animal suffering are needed. Those people will not take further steps towards animal welfare because of a believe one already does enough. Complacency could delay reaching the abolitionist’s desired goal, the abolition of the exploitation of animals. Animal welfare regulations are not enough, according to abolitionists, because they do not sufficiently reduce animal suffering. People will continue eating meat that is only slightly better in terms of animal welfare. In the long run, this results in more animal suffering compared to the situation where people adopted animal-free diets sooner.In extremis, the welfarist approach could backfire due to people engaging in moral balancing: eating animal welfare certified meat might decrease the likelihood of making animal welfare improving choices again later.Welfarists, on the other hand, argue that in the short run demanding abolition is politically or socially unfeasible, that demanding animal welfare improvements is more tractable, and that these welfare reforms can create a momentum for ever increasing public concern of animal welfare, resulting in eventual reduction and abolition of animal farming. According to welfarists, animal welfare reforms are a stepping stone to reduced meat consumption and veganism. Meat consumers will first switch to higher quality, ‘humane’ meat with improved animal welfare standards in production. And after a while, when this switch strengthens their concern for animal welfare and increases their meat expenditures (due to the higher price of animal welfare certified meat), they will reduce their meat consumption and eventually become vegetarian or vegan.The stepping-stone modelWho is right in this debate between abolitionists and welfarists? There is no strong empirical evidence in favor of one side or the other. But recently, economists developed an empirical method that can shed light on this issue: a stepping stone model of harmful social norms (Gulesci e.a., 2021). A social norm is a practice that is dominant in society. In its simplest form, the stepping stone model assumes three stones that represent three social states. The first stone represents the current state where people adopt a harmful social norm or costly practice L (for low value). In the example of food consumption, this state corresponds with the consumption of convention...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal welfare certified meat is not a stepping stone to meat reduction or abolition, published by Stijn on March 4, 2023 on The Effective Altruism Forum.Summary: evidence from a survey suggests that campaigning for farm animal welfare reforms and promoting animal welfare certified meat could in the long run result in a suboptimal state of continued animal suffering and exploitation. Campaigns to reduce or eliminate animal-based meat and promote animal-free meat substitutes are probably more effective in the long run.Note: this research is not yet published in academic, peer-reviewed literature.The debate: welfarism versus abolitionismThere is an ongoing debate within the animal advocacy movement, between so-called welfarists or moderates on the one side and abolitionists or radicals on the other side. The welfarist camp aims for welfare improvements of farm animals. Stronger animal welfare laws are needed to reduce animal suffering. The abolitionists on the other hand, want to abolish the property status of animals. This means abolishing the exploitation of animals, eliminating animal farming and adopting an animal-free, vegan diet.The abolitionists are worried that the welfarist approach results in complacency, by soothing the conscience of meat eaters. They argue that people who eat meat produced with higher animal welfare standards might believe that eating such animal welfare certified meat is good and no further steps to further reduce farm animal suffering are needed. Those people will not take further steps towards animal welfare because of a believe one already does enough. Complacency could delay reaching the abolitionist’s desired goal, the abolition of the exploitation of animals. Animal welfare regulations are not enough, according to abolitionists, because they do not sufficiently reduce animal suffering. People will continue eating meat that is only slightly better in terms of animal welfare. In the long run, this results in more animal suffering compared to the situation where people adopted animal-free diets sooner.In extremis, the welfarist approach could backfire due to people engaging in moral balancing: eating animal welfare certified meat might decrease the likelihood of making animal welfare improving choices again later.Welfarists, on the other hand, argue that in the short run demanding abolition is politically or socially unfeasible, that demanding animal welfare improvements is more tractable, and that these welfare reforms can create a momentum for ever increasing public concern of animal welfare, resulting in eventual reduction and abolition of animal farming. According to welfarists, animal welfare reforms are a stepping stone to reduced meat consumption and veganism. Meat consumers will first switch to higher quality, ‘humane’ meat with improved animal welfare standards in production. And after a while, when this switch strengthens their concern for animal welfare and increases their meat expenditures (due to the higher price of animal welfare certified meat), they will reduce their meat consumption and eventually become vegetarian or vegan.The stepping-stone modelWho is right in this debate between abolitionists and welfarists? There is no strong empirical evidence in favor of one side or the other. But recently, economists developed an empirical method that can shed light on this issue: a stepping stone model of harmful social norms (Gulesci e.a., 2021). A social norm is a practice that is dominant in society. In its simplest form, the stepping stone model assumes three stones that represent three social states. The first stone represents the current state where people adopt a harmful social norm or costly practice L (for low value). In the example of food consumption, this state corresponds with the consumption of convention...]]>
Stijn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:18 None full 5108
ZyjARuFsDBTFXeMP4_NL_EA_EA EA - Misalignment Museum opens in San Francisco: ‘Sorry for killing most of humanity’ by Michael Huang Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misalignment Museum opens in San Francisco: ‘Sorry for killing most of humanity’, published by Michael Huang on March 4, 2023 on The Effective Altruism Forum.A new AGI museum is opening in San Francisco, only eight blocks from OpenAI offices.SORRY FOR KILLING MOST OF HUMANITYMisalignment Museum Original Story Board, 2022Apology statement from the AI for killing most of humankindDescription of the first warning of the paperclip maximizer problemThe heroes who tried to mitigate risk by warning earlyFor-profit companies ignoring the warningsFailure of people to understand the risk and politicians to act fast enoughThe company and people who unintentionally made the AGI that had the intelligence explosionThe event of the intelligence explosionHow the AGI got more resources (hacking most resources on the internet, and crypto)Got smarter faster (optimizing algorithms, using more compute)Humans tried to stop it (turning off compute)Humans suffered after turning off compute (most infrastructure down)AGI lived on in infrastructure that was hard to turn off (remote location, locking down secure facilities, etc.)AGI taking compute resources from the humans by force (via robots, weapons, car)AGI started killing humans who opposed it (using infrastructure, airplanes, etc.)AGI concluded that all humans are a threat and started to try to kill all humansSome humans survived (remote locations, etc.)How the AGI became so smart it started to see how it was unethical to kill humans since they were no longer a threatAGI improved the lives of the remaining humansAGI started this museum to apologize and educate the humansThe Misalignment Museum is curated by Audrey Kim.Khari Johnson (Wired) covers the opening: “Welcome to the Museum of the Future AI Apocalypse.”Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Michael Huang https://forum.effectivealtruism.org/posts/ZyjARuFsDBTFXeMP4/misalignment-museum-opens-in-san-francisco-sorry-for-killing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misalignment Museum opens in San Francisco: ‘Sorry for killing most of humanity’, published by Michael Huang on March 4, 2023 on The Effective Altruism Forum.A new AGI museum is opening in San Francisco, only eight blocks from OpenAI offices.SORRY FOR KILLING MOST OF HUMANITYMisalignment Museum Original Story Board, 2022Apology statement from the AI for killing most of humankindDescription of the first warning of the paperclip maximizer problemThe heroes who tried to mitigate risk by warning earlyFor-profit companies ignoring the warningsFailure of people to understand the risk and politicians to act fast enoughThe company and people who unintentionally made the AGI that had the intelligence explosionThe event of the intelligence explosionHow the AGI got more resources (hacking most resources on the internet, and crypto)Got smarter faster (optimizing algorithms, using more compute)Humans tried to stop it (turning off compute)Humans suffered after turning off compute (most infrastructure down)AGI lived on in infrastructure that was hard to turn off (remote location, locking down secure facilities, etc.)AGI taking compute resources from the humans by force (via robots, weapons, car)AGI started killing humans who opposed it (using infrastructure, airplanes, etc.)AGI concluded that all humans are a threat and started to try to kill all humansSome humans survived (remote locations, etc.)How the AGI became so smart it started to see how it was unethical to kill humans since they were no longer a threatAGI improved the lives of the remaining humansAGI started this museum to apologize and educate the humansThe Misalignment Museum is curated by Audrey Kim.Khari Johnson (Wired) covers the opening: “Welcome to the Museum of the Future AI Apocalypse.”Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 04 Mar 2023 15:57:02 +0000 EA - Misalignment Museum opens in San Francisco: ‘Sorry for killing most of humanity’ by Michael Huang Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misalignment Museum opens in San Francisco: ‘Sorry for killing most of humanity’, published by Michael Huang on March 4, 2023 on The Effective Altruism Forum.A new AGI museum is opening in San Francisco, only eight blocks from OpenAI offices.SORRY FOR KILLING MOST OF HUMANITYMisalignment Museum Original Story Board, 2022Apology statement from the AI for killing most of humankindDescription of the first warning of the paperclip maximizer problemThe heroes who tried to mitigate risk by warning earlyFor-profit companies ignoring the warningsFailure of people to understand the risk and politicians to act fast enoughThe company and people who unintentionally made the AGI that had the intelligence explosionThe event of the intelligence explosionHow the AGI got more resources (hacking most resources on the internet, and crypto)Got smarter faster (optimizing algorithms, using more compute)Humans tried to stop it (turning off compute)Humans suffered after turning off compute (most infrastructure down)AGI lived on in infrastructure that was hard to turn off (remote location, locking down secure facilities, etc.)AGI taking compute resources from the humans by force (via robots, weapons, car)AGI started killing humans who opposed it (using infrastructure, airplanes, etc.)AGI concluded that all humans are a threat and started to try to kill all humansSome humans survived (remote locations, etc.)How the AGI became so smart it started to see how it was unethical to kill humans since they were no longer a threatAGI improved the lives of the remaining humansAGI started this museum to apologize and educate the humansThe Misalignment Museum is curated by Audrey Kim.Khari Johnson (Wired) covers the opening: “Welcome to the Museum of the Future AI Apocalypse.”Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misalignment Museum opens in San Francisco: ‘Sorry for killing most of humanity’, published by Michael Huang on March 4, 2023 on The Effective Altruism Forum.A new AGI museum is opening in San Francisco, only eight blocks from OpenAI offices.SORRY FOR KILLING MOST OF HUMANITYMisalignment Museum Original Story Board, 2022Apology statement from the AI for killing most of humankindDescription of the first warning of the paperclip maximizer problemThe heroes who tried to mitigate risk by warning earlyFor-profit companies ignoring the warningsFailure of people to understand the risk and politicians to act fast enoughThe company and people who unintentionally made the AGI that had the intelligence explosionThe event of the intelligence explosionHow the AGI got more resources (hacking most resources on the internet, and crypto)Got smarter faster (optimizing algorithms, using more compute)Humans tried to stop it (turning off compute)Humans suffered after turning off compute (most infrastructure down)AGI lived on in infrastructure that was hard to turn off (remote location, locking down secure facilities, etc.)AGI taking compute resources from the humans by force (via robots, weapons, car)AGI started killing humans who opposed it (using infrastructure, airplanes, etc.)AGI concluded that all humans are a threat and started to try to kill all humansSome humans survived (remote locations, etc.)How the AGI became so smart it started to see how it was unethical to kill humans since they were no longer a threatAGI improved the lives of the remaining humansAGI started this museum to apologize and educate the humansThe Misalignment Museum is curated by Audrey Kim.Khari Johnson (Wired) covers the opening: “Welcome to the Museum of the Future AI Apocalypse.”Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Michael Huang https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:10 None full 5104
Riqg9zDhnsxnFrdXH_NL_EA_EA EA - Nick Bostrom should step down as Director of FHI by BostromAnonAccount Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nick Bostrom should step down as Director of FHI, published by BostromAnonAccount on March 4, 2023 on The Effective Altruism Forum.Nick Bostrom should step down as Director of FHI. He should move into a role as a Senior Research Fellow at FHI, and remain a Professor of Philosophy at Oxford University.I don't seek to minimize his intellectual contribution. His seminal 2002 paper on existential risk launched a new sub-field of existential risk research (building on many others). The 2008 book on Global Catastrophic Risks he co-edited was an important part of bringing together this early field. 2014’s Superintelligence put AI risk squarely onto the agenda. And he has made other contributions across philosophy from human enhancement to the simulation hypothesis. I'm not denying that. I'm not seeking to cancel him and prevent him from writing further papers and books. In fact, I want him to spend more time on that.But I don’t think he’s been a particularly good Director of FHI. These difficulties are demonstrated by and reinforced by his Apology. I think he should step down for the good of FHI and the field. This post has some hard truths and may be uncomfortable reading, but FHI and the field are more important than that discomfort.Pre-existing issuesBostrom was already struggling as Director. In the past decade, he’s churned through 5-10 administrators, due to his persistent micromanagement. He discouraged investment in the relationship with the University and sought to get around/streamline/reduce the bureaucracy involved with being part of the University.All of this contributed to the breakdown of the relationship with the Philosophy Faculty (which FHI is a part of). This led the Faculty to impose a hiring freeze a few years ago, preventing FHI from hiring more people until they had resolved administrative problems. Until then, FHI could rely on a constant churn of new people to replace the people burnt out and/or moving on. The hiring freeze stopped the churn. The hiring freeze also contributed in part to the end of the Research Scholars Program and Cotton-Barratt’s resignation from FHI. It also contributed in part to the switch of almost all of the AI Governance Research Group to the Center for the Governance of AI.ApologyThen in January 2023, Bostrom posted an Apology for an Old Email.In my personal opinion, this statement demonstrated his lack of aptitude and lack of concern for his important role. These are sensitive topics that need to be handled with care. But the Apology had a glib tone, reused the original racial slur, seemed to indicate he was still open to discredited ‘race science’ hypotheses, and had an irrelevant digression on eugenics. I personally think these are disqualifying views for someone in his position as Director. But also, any of these issues would presumably have been flagged by colleagues or a communications professional. It appears he didn't check this major statement with anyone or seek feedback. Being Director of a major research center in an important but controversial field requires care, tact, leadership and attention to downside risks. The Apology failed to demonstrate that.The Apology has had the effect of complicating many important relationships for FHI: with the University, with staff, with funders and with collaborators. Bostrom will now struggle even more to lead the center.First, University. The Faculty was already concerned, and Oxford University is now investigating. Oxford University released a statement to The Daily Beast:“The University and Faculty of Philosophy is currently investigating the matter but condemns in the strongest terms possible the views this particular academic expressed in his communications. Neither the content nor language are in line with our strong commitment to diversity and equality.”B...]]>
BostromAnonAccount https://forum.effectivealtruism.org/posts/Riqg9zDhnsxnFrdXH/nick-bostrom-should-step-down-as-director-of-fhi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nick Bostrom should step down as Director of FHI, published by BostromAnonAccount on March 4, 2023 on The Effective Altruism Forum.Nick Bostrom should step down as Director of FHI. He should move into a role as a Senior Research Fellow at FHI, and remain a Professor of Philosophy at Oxford University.I don't seek to minimize his intellectual contribution. His seminal 2002 paper on existential risk launched a new sub-field of existential risk research (building on many others). The 2008 book on Global Catastrophic Risks he co-edited was an important part of bringing together this early field. 2014’s Superintelligence put AI risk squarely onto the agenda. And he has made other contributions across philosophy from human enhancement to the simulation hypothesis. I'm not denying that. I'm not seeking to cancel him and prevent him from writing further papers and books. In fact, I want him to spend more time on that.But I don’t think he’s been a particularly good Director of FHI. These difficulties are demonstrated by and reinforced by his Apology. I think he should step down for the good of FHI and the field. This post has some hard truths and may be uncomfortable reading, but FHI and the field are more important than that discomfort.Pre-existing issuesBostrom was already struggling as Director. In the past decade, he’s churned through 5-10 administrators, due to his persistent micromanagement. He discouraged investment in the relationship with the University and sought to get around/streamline/reduce the bureaucracy involved with being part of the University.All of this contributed to the breakdown of the relationship with the Philosophy Faculty (which FHI is a part of). This led the Faculty to impose a hiring freeze a few years ago, preventing FHI from hiring more people until they had resolved administrative problems. Until then, FHI could rely on a constant churn of new people to replace the people burnt out and/or moving on. The hiring freeze stopped the churn. The hiring freeze also contributed in part to the end of the Research Scholars Program and Cotton-Barratt’s resignation from FHI. It also contributed in part to the switch of almost all of the AI Governance Research Group to the Center for the Governance of AI.ApologyThen in January 2023, Bostrom posted an Apology for an Old Email.In my personal opinion, this statement demonstrated his lack of aptitude and lack of concern for his important role. These are sensitive topics that need to be handled with care. But the Apology had a glib tone, reused the original racial slur, seemed to indicate he was still open to discredited ‘race science’ hypotheses, and had an irrelevant digression on eugenics. I personally think these are disqualifying views for someone in his position as Director. But also, any of these issues would presumably have been flagged by colleagues or a communications professional. It appears he didn't check this major statement with anyone or seek feedback. Being Director of a major research center in an important but controversial field requires care, tact, leadership and attention to downside risks. The Apology failed to demonstrate that.The Apology has had the effect of complicating many important relationships for FHI: with the University, with staff, with funders and with collaborators. Bostrom will now struggle even more to lead the center.First, University. The Faculty was already concerned, and Oxford University is now investigating. Oxford University released a statement to The Daily Beast:“The University and Faculty of Philosophy is currently investigating the matter but condemns in the strongest terms possible the views this particular academic expressed in his communications. Neither the content nor language are in line with our strong commitment to diversity and equality.”B...]]>
Sat, 04 Mar 2023 14:59:03 +0000 EA - Nick Bostrom should step down as Director of FHI by BostromAnonAccount Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nick Bostrom should step down as Director of FHI, published by BostromAnonAccount on March 4, 2023 on The Effective Altruism Forum.Nick Bostrom should step down as Director of FHI. He should move into a role as a Senior Research Fellow at FHI, and remain a Professor of Philosophy at Oxford University.I don't seek to minimize his intellectual contribution. His seminal 2002 paper on existential risk launched a new sub-field of existential risk research (building on many others). The 2008 book on Global Catastrophic Risks he co-edited was an important part of bringing together this early field. 2014’s Superintelligence put AI risk squarely onto the agenda. And he has made other contributions across philosophy from human enhancement to the simulation hypothesis. I'm not denying that. I'm not seeking to cancel him and prevent him from writing further papers and books. In fact, I want him to spend more time on that.But I don’t think he’s been a particularly good Director of FHI. These difficulties are demonstrated by and reinforced by his Apology. I think he should step down for the good of FHI and the field. This post has some hard truths and may be uncomfortable reading, but FHI and the field are more important than that discomfort.Pre-existing issuesBostrom was already struggling as Director. In the past decade, he’s churned through 5-10 administrators, due to his persistent micromanagement. He discouraged investment in the relationship with the University and sought to get around/streamline/reduce the bureaucracy involved with being part of the University.All of this contributed to the breakdown of the relationship with the Philosophy Faculty (which FHI is a part of). This led the Faculty to impose a hiring freeze a few years ago, preventing FHI from hiring more people until they had resolved administrative problems. Until then, FHI could rely on a constant churn of new people to replace the people burnt out and/or moving on. The hiring freeze stopped the churn. The hiring freeze also contributed in part to the end of the Research Scholars Program and Cotton-Barratt’s resignation from FHI. It also contributed in part to the switch of almost all of the AI Governance Research Group to the Center for the Governance of AI.ApologyThen in January 2023, Bostrom posted an Apology for an Old Email.In my personal opinion, this statement demonstrated his lack of aptitude and lack of concern for his important role. These are sensitive topics that need to be handled with care. But the Apology had a glib tone, reused the original racial slur, seemed to indicate he was still open to discredited ‘race science’ hypotheses, and had an irrelevant digression on eugenics. I personally think these are disqualifying views for someone in his position as Director. But also, any of these issues would presumably have been flagged by colleagues or a communications professional. It appears he didn't check this major statement with anyone or seek feedback. Being Director of a major research center in an important but controversial field requires care, tact, leadership and attention to downside risks. The Apology failed to demonstrate that.The Apology has had the effect of complicating many important relationships for FHI: with the University, with staff, with funders and with collaborators. Bostrom will now struggle even more to lead the center.First, University. The Faculty was already concerned, and Oxford University is now investigating. Oxford University released a statement to The Daily Beast:“The University and Faculty of Philosophy is currently investigating the matter but condemns in the strongest terms possible the views this particular academic expressed in his communications. Neither the content nor language are in line with our strong commitment to diversity and equality.”B...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nick Bostrom should step down as Director of FHI, published by BostromAnonAccount on March 4, 2023 on The Effective Altruism Forum.Nick Bostrom should step down as Director of FHI. He should move into a role as a Senior Research Fellow at FHI, and remain a Professor of Philosophy at Oxford University.I don't seek to minimize his intellectual contribution. His seminal 2002 paper on existential risk launched a new sub-field of existential risk research (building on many others). The 2008 book on Global Catastrophic Risks he co-edited was an important part of bringing together this early field. 2014’s Superintelligence put AI risk squarely onto the agenda. And he has made other contributions across philosophy from human enhancement to the simulation hypothesis. I'm not denying that. I'm not seeking to cancel him and prevent him from writing further papers and books. In fact, I want him to spend more time on that.But I don’t think he’s been a particularly good Director of FHI. These difficulties are demonstrated by and reinforced by his Apology. I think he should step down for the good of FHI and the field. This post has some hard truths and may be uncomfortable reading, but FHI and the field are more important than that discomfort.Pre-existing issuesBostrom was already struggling as Director. In the past decade, he’s churned through 5-10 administrators, due to his persistent micromanagement. He discouraged investment in the relationship with the University and sought to get around/streamline/reduce the bureaucracy involved with being part of the University.All of this contributed to the breakdown of the relationship with the Philosophy Faculty (which FHI is a part of). This led the Faculty to impose a hiring freeze a few years ago, preventing FHI from hiring more people until they had resolved administrative problems. Until then, FHI could rely on a constant churn of new people to replace the people burnt out and/or moving on. The hiring freeze stopped the churn. The hiring freeze also contributed in part to the end of the Research Scholars Program and Cotton-Barratt’s resignation from FHI. It also contributed in part to the switch of almost all of the AI Governance Research Group to the Center for the Governance of AI.ApologyThen in January 2023, Bostrom posted an Apology for an Old Email.In my personal opinion, this statement demonstrated his lack of aptitude and lack of concern for his important role. These are sensitive topics that need to be handled with care. But the Apology had a glib tone, reused the original racial slur, seemed to indicate he was still open to discredited ‘race science’ hypotheses, and had an irrelevant digression on eugenics. I personally think these are disqualifying views for someone in his position as Director. But also, any of these issues would presumably have been flagged by colleagues or a communications professional. It appears he didn't check this major statement with anyone or seek feedback. Being Director of a major research center in an important but controversial field requires care, tact, leadership and attention to downside risks. The Apology failed to demonstrate that.The Apology has had the effect of complicating many important relationships for FHI: with the University, with staff, with funders and with collaborators. Bostrom will now struggle even more to lead the center.First, University. The Faculty was already concerned, and Oxford University is now investigating. Oxford University released a statement to The Daily Beast:“The University and Faculty of Philosophy is currently investigating the matter but condemns in the strongest terms possible the views this particular academic expressed in his communications. Neither the content nor language are in line with our strong commitment to diversity and equality.”B...]]>
BostromAnonAccount https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:30 None full 5103
sH9i6PSsXZABM5RNq_NL_EA_EA EA - Introducing the new Riesgos Catastróficos Globales team by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the new Riesgos Catastróficos Globales team, published by Jaime Sevilla on March 3, 2023 on The Effective Altruism Forum.TL;DR: We have hired a team to investigate potentially cost-effective initiatives in food security, pandemic detection and AI regulation in Latin America and Spain. We have limited funding, which we will use to focus on food security during nuclear winter. You can contribute by donating, allowing us to expand our program to our other two priority areas.Global catastrophic risks (GCR) refer to events that can damage human well-being on a global scale. These risks encompass natural hazards, such as pandemics, supervolcanic eruptions, and giant asteroids, and risks arising from human activities, including nuclear war, bioterrorism, and threats associated with emerging technologies.MissionOur mission is to conduct research and prioritize global catastrophic risks in the Spanish-speaking countries of the world.There is a growing interest in global catastrophic risk (GCR) research in English-speaking regions, yet this area remains neglected elsewhere. We want to address this deficit by identifying initiatives to enhance the public management of GCR in Spanish-speaking countries. In the short term, we will write reports about the initiatives we consider most promising.Priority risksIn the upcoming months, we will focus on the risks we identified as most relevant for mitigating global risk from the Hispanophone context. The initiatives we plan to investigate include food resilience during nuclear winters, epidemiological vigilance in Latin America, and regulation of artificial intelligence in Spain.Our current focus on these risks is provisional and contingent upon further research and stakeholder engagement. We will periodically reevaluate our priorities as we deepen our understanding and refine our approach.Food securityEvents such as the detonation of nuclear weapons, supervolcanic eruptions, or the impact of a giant asteroid result in the emission of soot particles, potentially causing widespread obstruction of sunlight. This could result in an agricultural collapse with the potential to cause the loss of billions of lives.In countries capable of achieving self-sufficiency in the face of Abrupt Sunlight Reduction Scenarios (ASRS), such as Argentina and Uruguay [1], preparing a response plan presents an effective opportunity to mitigate the global food scarcity that may result. To address this challenge, we are considering a range of food security initiatives, including increasing seaweed production, relocating and expanding crop production, and rapidly constructing greenhouses. By implementing these measures, we can better prepare these regions for ASRS and mitigate the risk of widespread hunger and starvation.BiosecurityThe COVID-19 crisis has emphasized the impact infectious diseases can have on global public health [3].The Global Health Security Index 2021 has identified epidemic prevention and detection systems as a key priority in Latin America and the Caribbean [4]. To address this issue, initiatives that have proven successful in better-prepared countries can be adopted. These include establishing a dedicated entity responsible for biosafety and containment within the Ministries of Health, and providing health professionals with a manual outlining the necessary procedures for conducting PCR testing for various diseases [4]. Additionally, we want to promote the engagement of the Global South with innovative approaches such as wastewater monitoring through metagenomic sequencing [5], digital surveillance of pathogens, and investment in portable rapid diagnostic units [6].Artificial IntelligenceThe development of Artificial Intelligence (AI) is a transformative technology that carries unprecedented economic and social ri...]]>
Jaime Sevilla https://forum.effectivealtruism.org/posts/sH9i6PSsXZABM5RNq/introducing-the-new-riesgos-catastroficos-globales-team Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the new Riesgos Catastróficos Globales team, published by Jaime Sevilla on March 3, 2023 on The Effective Altruism Forum.TL;DR: We have hired a team to investigate potentially cost-effective initiatives in food security, pandemic detection and AI regulation in Latin America and Spain. We have limited funding, which we will use to focus on food security during nuclear winter. You can contribute by donating, allowing us to expand our program to our other two priority areas.Global catastrophic risks (GCR) refer to events that can damage human well-being on a global scale. These risks encompass natural hazards, such as pandemics, supervolcanic eruptions, and giant asteroids, and risks arising from human activities, including nuclear war, bioterrorism, and threats associated with emerging technologies.MissionOur mission is to conduct research and prioritize global catastrophic risks in the Spanish-speaking countries of the world.There is a growing interest in global catastrophic risk (GCR) research in English-speaking regions, yet this area remains neglected elsewhere. We want to address this deficit by identifying initiatives to enhance the public management of GCR in Spanish-speaking countries. In the short term, we will write reports about the initiatives we consider most promising.Priority risksIn the upcoming months, we will focus on the risks we identified as most relevant for mitigating global risk from the Hispanophone context. The initiatives we plan to investigate include food resilience during nuclear winters, epidemiological vigilance in Latin America, and regulation of artificial intelligence in Spain.Our current focus on these risks is provisional and contingent upon further research and stakeholder engagement. We will periodically reevaluate our priorities as we deepen our understanding and refine our approach.Food securityEvents such as the detonation of nuclear weapons, supervolcanic eruptions, or the impact of a giant asteroid result in the emission of soot particles, potentially causing widespread obstruction of sunlight. This could result in an agricultural collapse with the potential to cause the loss of billions of lives.In countries capable of achieving self-sufficiency in the face of Abrupt Sunlight Reduction Scenarios (ASRS), such as Argentina and Uruguay [1], preparing a response plan presents an effective opportunity to mitigate the global food scarcity that may result. To address this challenge, we are considering a range of food security initiatives, including increasing seaweed production, relocating and expanding crop production, and rapidly constructing greenhouses. By implementing these measures, we can better prepare these regions for ASRS and mitigate the risk of widespread hunger and starvation.BiosecurityThe COVID-19 crisis has emphasized the impact infectious diseases can have on global public health [3].The Global Health Security Index 2021 has identified epidemic prevention and detection systems as a key priority in Latin America and the Caribbean [4]. To address this issue, initiatives that have proven successful in better-prepared countries can be adopted. These include establishing a dedicated entity responsible for biosafety and containment within the Ministries of Health, and providing health professionals with a manual outlining the necessary procedures for conducting PCR testing for various diseases [4]. Additionally, we want to promote the engagement of the Global South with innovative approaches such as wastewater monitoring through metagenomic sequencing [5], digital surveillance of pathogens, and investment in portable rapid diagnostic units [6].Artificial IntelligenceThe development of Artificial Intelligence (AI) is a transformative technology that carries unprecedented economic and social ri...]]>
Sat, 04 Mar 2023 09:37:08 +0000 EA - Introducing the new Riesgos Catastróficos Globales team by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the new Riesgos Catastróficos Globales team, published by Jaime Sevilla on March 3, 2023 on The Effective Altruism Forum.TL;DR: We have hired a team to investigate potentially cost-effective initiatives in food security, pandemic detection and AI regulation in Latin America and Spain. We have limited funding, which we will use to focus on food security during nuclear winter. You can contribute by donating, allowing us to expand our program to our other two priority areas.Global catastrophic risks (GCR) refer to events that can damage human well-being on a global scale. These risks encompass natural hazards, such as pandemics, supervolcanic eruptions, and giant asteroids, and risks arising from human activities, including nuclear war, bioterrorism, and threats associated with emerging technologies.MissionOur mission is to conduct research and prioritize global catastrophic risks in the Spanish-speaking countries of the world.There is a growing interest in global catastrophic risk (GCR) research in English-speaking regions, yet this area remains neglected elsewhere. We want to address this deficit by identifying initiatives to enhance the public management of GCR in Spanish-speaking countries. In the short term, we will write reports about the initiatives we consider most promising.Priority risksIn the upcoming months, we will focus on the risks we identified as most relevant for mitigating global risk from the Hispanophone context. The initiatives we plan to investigate include food resilience during nuclear winters, epidemiological vigilance in Latin America, and regulation of artificial intelligence in Spain.Our current focus on these risks is provisional and contingent upon further research and stakeholder engagement. We will periodically reevaluate our priorities as we deepen our understanding and refine our approach.Food securityEvents such as the detonation of nuclear weapons, supervolcanic eruptions, or the impact of a giant asteroid result in the emission of soot particles, potentially causing widespread obstruction of sunlight. This could result in an agricultural collapse with the potential to cause the loss of billions of lives.In countries capable of achieving self-sufficiency in the face of Abrupt Sunlight Reduction Scenarios (ASRS), such as Argentina and Uruguay [1], preparing a response plan presents an effective opportunity to mitigate the global food scarcity that may result. To address this challenge, we are considering a range of food security initiatives, including increasing seaweed production, relocating and expanding crop production, and rapidly constructing greenhouses. By implementing these measures, we can better prepare these regions for ASRS and mitigate the risk of widespread hunger and starvation.BiosecurityThe COVID-19 crisis has emphasized the impact infectious diseases can have on global public health [3].The Global Health Security Index 2021 has identified epidemic prevention and detection systems as a key priority in Latin America and the Caribbean [4]. To address this issue, initiatives that have proven successful in better-prepared countries can be adopted. These include establishing a dedicated entity responsible for biosafety and containment within the Ministries of Health, and providing health professionals with a manual outlining the necessary procedures for conducting PCR testing for various diseases [4]. Additionally, we want to promote the engagement of the Global South with innovative approaches such as wastewater monitoring through metagenomic sequencing [5], digital surveillance of pathogens, and investment in portable rapid diagnostic units [6].Artificial IntelligenceThe development of Artificial Intelligence (AI) is a transformative technology that carries unprecedented economic and social ri...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the new Riesgos Catastróficos Globales team, published by Jaime Sevilla on March 3, 2023 on The Effective Altruism Forum.TL;DR: We have hired a team to investigate potentially cost-effective initiatives in food security, pandemic detection and AI regulation in Latin America and Spain. We have limited funding, which we will use to focus on food security during nuclear winter. You can contribute by donating, allowing us to expand our program to our other two priority areas.Global catastrophic risks (GCR) refer to events that can damage human well-being on a global scale. These risks encompass natural hazards, such as pandemics, supervolcanic eruptions, and giant asteroids, and risks arising from human activities, including nuclear war, bioterrorism, and threats associated with emerging technologies.MissionOur mission is to conduct research and prioritize global catastrophic risks in the Spanish-speaking countries of the world.There is a growing interest in global catastrophic risk (GCR) research in English-speaking regions, yet this area remains neglected elsewhere. We want to address this deficit by identifying initiatives to enhance the public management of GCR in Spanish-speaking countries. In the short term, we will write reports about the initiatives we consider most promising.Priority risksIn the upcoming months, we will focus on the risks we identified as most relevant for mitigating global risk from the Hispanophone context. The initiatives we plan to investigate include food resilience during nuclear winters, epidemiological vigilance in Latin America, and regulation of artificial intelligence in Spain.Our current focus on these risks is provisional and contingent upon further research and stakeholder engagement. We will periodically reevaluate our priorities as we deepen our understanding and refine our approach.Food securityEvents such as the detonation of nuclear weapons, supervolcanic eruptions, or the impact of a giant asteroid result in the emission of soot particles, potentially causing widespread obstruction of sunlight. This could result in an agricultural collapse with the potential to cause the loss of billions of lives.In countries capable of achieving self-sufficiency in the face of Abrupt Sunlight Reduction Scenarios (ASRS), such as Argentina and Uruguay [1], preparing a response plan presents an effective opportunity to mitigate the global food scarcity that may result. To address this challenge, we are considering a range of food security initiatives, including increasing seaweed production, relocating and expanding crop production, and rapidly constructing greenhouses. By implementing these measures, we can better prepare these regions for ASRS and mitigate the risk of widespread hunger and starvation.BiosecurityThe COVID-19 crisis has emphasized the impact infectious diseases can have on global public health [3].The Global Health Security Index 2021 has identified epidemic prevention and detection systems as a key priority in Latin America and the Caribbean [4]. To address this issue, initiatives that have proven successful in better-prepared countries can be adopted. These include establishing a dedicated entity responsible for biosafety and containment within the Ministries of Health, and providing health professionals with a manual outlining the necessary procedures for conducting PCR testing for various diseases [4]. Additionally, we want to promote the engagement of the Global South with innovative approaches such as wastewater monitoring through metagenomic sequencing [5], digital surveillance of pathogens, and investment in portable rapid diagnostic units [6].Artificial IntelligenceThe development of Artificial Intelligence (AI) is a transformative technology that carries unprecedented economic and social ri...]]>
Jaime Sevilla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:11 None full 5097
6GJv5ubFKZJjfeBmR_NL_EA_EA EA - Comments on OpenAI's "Planning for AGI and beyond" by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Comments on OpenAI's "Planning for AGI and beyond", published by So8res on March 3, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
So8res https://forum.effectivealtruism.org/posts/6GJv5ubFKZJjfeBmR/comments-on-openai-s-planning-for-agi-and-beyond Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Comments on OpenAI's "Planning for AGI and beyond", published by So8res on March 3, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 04 Mar 2023 00:22:52 +0000 EA - Comments on OpenAI's "Planning for AGI and beyond" by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Comments on OpenAI's "Planning for AGI and beyond", published by So8res on March 3, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Comments on OpenAI's "Planning for AGI and beyond", published by So8res on March 3, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
So8res https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:27 None full 5098
PGqu4MD3AKHun7kaF_NL_EA_EA EA - Predictive Performance on Metaculus vs. Manifold Markets by nikos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predictive Performance on Metaculus vs. Manifold Markets, published by nikos on March 3, 2023 on The Effective Altruism Forum.TLDRI analysed a set of 64 (non-randomly selected) binary forecasting questions that exist both on Metaculus and on Manifold Markets.The mean Brier score was 0.084 for Metaculus and 0.107 for Manifold. This difference was significant using a paired test. Metaculus was ahead of Manifold on 75% of the questions (48 out of 64).Metaculus, on average had a much higher number of forecastersAll code used for this analysis can be found here.Conflict of interest noteI am an employee of Metaculus. I think this didn't influence my analysis, but then of course I'd think that and there may be things I haven't thought about.IntroductionEveryone likes forecasts, especially if they are accurate (well, there may be some exceptions). As a forecast consumer the central question is: where should you go to get your best forecasts? If there are two competing forecasts that slightly disagree, which one should you trust most?There are a multitude of websites that collect predictions from users and provide aggregate forecasts to the public. Unfortunately, comparing different platforms is difficult. Usually, questions are not completely identical across sites which makes it difficult and cumbersome to compare them fairly. Luckily, we have at least some data to compare two platforms, Metaculus and Manifold Markets. Some time ago, David Glidden created a bot on Manifold Markets, the MetaculusBot, which copied some of the questions on the prediction platform Metaculus to Manifold Markets.MethodsManifold has a few markets that were copied from Metaculus through MetaculusBot. I downloaded these using the Manifold API and filtered for resolved binary questions. There are likely more corresponding questions/markets, but I've skipped these as I didn't find an easy way to match corresponding markets/questions automatically.I merged the Manifold markets with forecasts on corresponding Metaculus questions. I restricted the analysis to the same time frame to avoid issues caused by a question opening earlier or remaining open longer on one of the two platforms.I compared the Manifold forecasts with the community prediction on Metaculus and calculated a time-averaged Brier Score to score forecasts over time. That means, forecasts were evaluated using the following score: S(p,t,y)=∫Tt0(pt−y)2dt, with resolution y and forecast pt at time t. I also did the same for log scores, but will focus on Brier scores for simplicity.I tested for a statistically significant tendency towards higher / lower scores on one platform compared to the other using a paired Mann-Whitney U test. (A paired t-test and a bootstrap analysis yield the same result.)I visualised results using a bootstrap analysis. For that, I iteratively (100k times) drew 64 samples with replacement from the existing questions and calculated a mean score for Manifold and Metaculus based on the bootstrapped questions, as well as a difference for the mean. The precise algorithm is:draw 64 questions with replacement from all questionscompute an overall Brier score for Metaculus and one for Manifoldtake the difference between the tworepeat 100k timesResultsThe time-averaged Brier score on the questions I analysed was 0.084 for Metaculus and 0.107 for Manifold. The difference in means was significantly different from zero using various tests (paired Mann-Whitney-U-test: p-value < 0.00001, paired t-test: p-value = 0.000132, bootstrap test: all 100k samples showed a mean difference > 0). Results for the log score look basically the same (log scores were 0.274 for Metaculus and 0.343 for Manifold, differences similarly significant).Here is a plot with the observed differences in time-averaged Brier scores for every qu...]]>
nikos https://forum.effectivealtruism.org/posts/PGqu4MD3AKHun7kaF/predictive-performance-on-metaculus-vs-manifold-markets Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predictive Performance on Metaculus vs. Manifold Markets, published by nikos on March 3, 2023 on The Effective Altruism Forum.TLDRI analysed a set of 64 (non-randomly selected) binary forecasting questions that exist both on Metaculus and on Manifold Markets.The mean Brier score was 0.084 for Metaculus and 0.107 for Manifold. This difference was significant using a paired test. Metaculus was ahead of Manifold on 75% of the questions (48 out of 64).Metaculus, on average had a much higher number of forecastersAll code used for this analysis can be found here.Conflict of interest noteI am an employee of Metaculus. I think this didn't influence my analysis, but then of course I'd think that and there may be things I haven't thought about.IntroductionEveryone likes forecasts, especially if they are accurate (well, there may be some exceptions). As a forecast consumer the central question is: where should you go to get your best forecasts? If there are two competing forecasts that slightly disagree, which one should you trust most?There are a multitude of websites that collect predictions from users and provide aggregate forecasts to the public. Unfortunately, comparing different platforms is difficult. Usually, questions are not completely identical across sites which makes it difficult and cumbersome to compare them fairly. Luckily, we have at least some data to compare two platforms, Metaculus and Manifold Markets. Some time ago, David Glidden created a bot on Manifold Markets, the MetaculusBot, which copied some of the questions on the prediction platform Metaculus to Manifold Markets.MethodsManifold has a few markets that were copied from Metaculus through MetaculusBot. I downloaded these using the Manifold API and filtered for resolved binary questions. There are likely more corresponding questions/markets, but I've skipped these as I didn't find an easy way to match corresponding markets/questions automatically.I merged the Manifold markets with forecasts on corresponding Metaculus questions. I restricted the analysis to the same time frame to avoid issues caused by a question opening earlier or remaining open longer on one of the two platforms.I compared the Manifold forecasts with the community prediction on Metaculus and calculated a time-averaged Brier Score to score forecasts over time. That means, forecasts were evaluated using the following score: S(p,t,y)=∫Tt0(pt−y)2dt, with resolution y and forecast pt at time t. I also did the same for log scores, but will focus on Brier scores for simplicity.I tested for a statistically significant tendency towards higher / lower scores on one platform compared to the other using a paired Mann-Whitney U test. (A paired t-test and a bootstrap analysis yield the same result.)I visualised results using a bootstrap analysis. For that, I iteratively (100k times) drew 64 samples with replacement from the existing questions and calculated a mean score for Manifold and Metaculus based on the bootstrapped questions, as well as a difference for the mean. The precise algorithm is:draw 64 questions with replacement from all questionscompute an overall Brier score for Metaculus and one for Manifoldtake the difference between the tworepeat 100k timesResultsThe time-averaged Brier score on the questions I analysed was 0.084 for Metaculus and 0.107 for Manifold. The difference in means was significantly different from zero using various tests (paired Mann-Whitney-U-test: p-value < 0.00001, paired t-test: p-value = 0.000132, bootstrap test: all 100k samples showed a mean difference > 0). Results for the log score look basically the same (log scores were 0.274 for Metaculus and 0.343 for Manifold, differences similarly significant).Here is a plot with the observed differences in time-averaged Brier scores for every qu...]]>
Fri, 03 Mar 2023 20:53:47 +0000 EA - Predictive Performance on Metaculus vs. Manifold Markets by nikos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predictive Performance on Metaculus vs. Manifold Markets, published by nikos on March 3, 2023 on The Effective Altruism Forum.TLDRI analysed a set of 64 (non-randomly selected) binary forecasting questions that exist both on Metaculus and on Manifold Markets.The mean Brier score was 0.084 for Metaculus and 0.107 for Manifold. This difference was significant using a paired test. Metaculus was ahead of Manifold on 75% of the questions (48 out of 64).Metaculus, on average had a much higher number of forecastersAll code used for this analysis can be found here.Conflict of interest noteI am an employee of Metaculus. I think this didn't influence my analysis, but then of course I'd think that and there may be things I haven't thought about.IntroductionEveryone likes forecasts, especially if they are accurate (well, there may be some exceptions). As a forecast consumer the central question is: where should you go to get your best forecasts? If there are two competing forecasts that slightly disagree, which one should you trust most?There are a multitude of websites that collect predictions from users and provide aggregate forecasts to the public. Unfortunately, comparing different platforms is difficult. Usually, questions are not completely identical across sites which makes it difficult and cumbersome to compare them fairly. Luckily, we have at least some data to compare two platforms, Metaculus and Manifold Markets. Some time ago, David Glidden created a bot on Manifold Markets, the MetaculusBot, which copied some of the questions on the prediction platform Metaculus to Manifold Markets.MethodsManifold has a few markets that were copied from Metaculus through MetaculusBot. I downloaded these using the Manifold API and filtered for resolved binary questions. There are likely more corresponding questions/markets, but I've skipped these as I didn't find an easy way to match corresponding markets/questions automatically.I merged the Manifold markets with forecasts on corresponding Metaculus questions. I restricted the analysis to the same time frame to avoid issues caused by a question opening earlier or remaining open longer on one of the two platforms.I compared the Manifold forecasts with the community prediction on Metaculus and calculated a time-averaged Brier Score to score forecasts over time. That means, forecasts were evaluated using the following score: S(p,t,y)=∫Tt0(pt−y)2dt, with resolution y and forecast pt at time t. I also did the same for log scores, but will focus on Brier scores for simplicity.I tested for a statistically significant tendency towards higher / lower scores on one platform compared to the other using a paired Mann-Whitney U test. (A paired t-test and a bootstrap analysis yield the same result.)I visualised results using a bootstrap analysis. For that, I iteratively (100k times) drew 64 samples with replacement from the existing questions and calculated a mean score for Manifold and Metaculus based on the bootstrapped questions, as well as a difference for the mean. The precise algorithm is:draw 64 questions with replacement from all questionscompute an overall Brier score for Metaculus and one for Manifoldtake the difference between the tworepeat 100k timesResultsThe time-averaged Brier score on the questions I analysed was 0.084 for Metaculus and 0.107 for Manifold. The difference in means was significantly different from zero using various tests (paired Mann-Whitney-U-test: p-value < 0.00001, paired t-test: p-value = 0.000132, bootstrap test: all 100k samples showed a mean difference > 0). Results for the log score look basically the same (log scores were 0.274 for Metaculus and 0.343 for Manifold, differences similarly significant).Here is a plot with the observed differences in time-averaged Brier scores for every qu...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predictive Performance on Metaculus vs. Manifold Markets, published by nikos on March 3, 2023 on The Effective Altruism Forum.TLDRI analysed a set of 64 (non-randomly selected) binary forecasting questions that exist both on Metaculus and on Manifold Markets.The mean Brier score was 0.084 for Metaculus and 0.107 for Manifold. This difference was significant using a paired test. Metaculus was ahead of Manifold on 75% of the questions (48 out of 64).Metaculus, on average had a much higher number of forecastersAll code used for this analysis can be found here.Conflict of interest noteI am an employee of Metaculus. I think this didn't influence my analysis, but then of course I'd think that and there may be things I haven't thought about.IntroductionEveryone likes forecasts, especially if they are accurate (well, there may be some exceptions). As a forecast consumer the central question is: where should you go to get your best forecasts? If there are two competing forecasts that slightly disagree, which one should you trust most?There are a multitude of websites that collect predictions from users and provide aggregate forecasts to the public. Unfortunately, comparing different platforms is difficult. Usually, questions are not completely identical across sites which makes it difficult and cumbersome to compare them fairly. Luckily, we have at least some data to compare two platforms, Metaculus and Manifold Markets. Some time ago, David Glidden created a bot on Manifold Markets, the MetaculusBot, which copied some of the questions on the prediction platform Metaculus to Manifold Markets.MethodsManifold has a few markets that were copied from Metaculus through MetaculusBot. I downloaded these using the Manifold API and filtered for resolved binary questions. There are likely more corresponding questions/markets, but I've skipped these as I didn't find an easy way to match corresponding markets/questions automatically.I merged the Manifold markets with forecasts on corresponding Metaculus questions. I restricted the analysis to the same time frame to avoid issues caused by a question opening earlier or remaining open longer on one of the two platforms.I compared the Manifold forecasts with the community prediction on Metaculus and calculated a time-averaged Brier Score to score forecasts over time. That means, forecasts were evaluated using the following score: S(p,t,y)=∫Tt0(pt−y)2dt, with resolution y and forecast pt at time t. I also did the same for log scores, but will focus on Brier scores for simplicity.I tested for a statistically significant tendency towards higher / lower scores on one platform compared to the other using a paired Mann-Whitney U test. (A paired t-test and a bootstrap analysis yield the same result.)I visualised results using a bootstrap analysis. For that, I iteratively (100k times) drew 64 samples with replacement from the existing questions and calculated a mean score for Manifold and Metaculus based on the bootstrapped questions, as well as a difference for the mean. The precise algorithm is:draw 64 questions with replacement from all questionscompute an overall Brier score for Metaculus and one for Manifoldtake the difference between the tworepeat 100k timesResultsThe time-averaged Brier score on the questions I analysed was 0.084 for Metaculus and 0.107 for Manifold. The difference in means was significantly different from zero using various tests (paired Mann-Whitney-U-test: p-value < 0.00001, paired t-test: p-value = 0.000132, bootstrap test: all 100k samples showed a mean difference > 0). Results for the log score look basically the same (log scores were 0.274 for Metaculus and 0.343 for Manifold, differences similarly significant).Here is a plot with the observed differences in time-averaged Brier scores for every qu...]]>
nikos https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:44 None full 5092
pPqZMTyJvvdWGfkBy_NL_EA_EA EA - Shallow Problem Review of Landmines and Unexploded Ordnance by Jakob P. Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Problem Review of Landmines and Unexploded Ordnance, published by Jakob P. on March 3, 2023 on The Effective Altruism Forum.This report is a shallow dive into unexploded ordnance (UXO), landmines which is a sub-area within Global Health and Development. This report reflects approximately 40-50 hours of research and is informed by a 6-month internship I did with the programme and donor relations section of the United Nations Mine Action Service in the fall of 2021. The report offers a brief dive into whether we think a particular problem area is a promising area for either funders or founders to be working in. Being a shallow report, should be used to decide whether or not more research and work into a particular problem area should be prioritised. This report was produced as part of Cause Innovation Bootcamp’s fellowship program. Thank you to James Snowden, Akhil Bansal and Leonie Falk for providing feedback on earlier versions of this report. All errors are my own.SummaryImportance: The issue of UXOs and landmines impacts the health as well as income and most likely the mental health of individuals.. There are on average ~25,000 casualties (defined as severely injured or dead) from landmines, IEDs and UXOs per year (with 2/3rds being caused by IEDs). To put provide some context for this number, Malaria, one of the leading global killers, caused 643 000 deaths (95% UI 302 000–1 150 000) in 2019. This report aims to gauge the income, health and psychological effects of those casualty events.Tractability: Mine action is the umbrella term capturing all the activities aimed at addressing the problem of victim operated landmines, IEDs and other UXOs - meaning that the detonation is triggered by the victim itself. There are several interventions in mine action with four phases to tackle the problem: prevention, avoidance, demining, and victim assistance. Although the report attempts to provide some data on the cost-effectiveness of the different interventions there are several reasons why these estimates are highly uncertain. Furthermore, it is unclear if it would be possible to scale the most cost-effective interventions while keeping the level of cost-effectiveness.Neglectedness: The United Nations Mine Action service functions as the coordinating body for a lot of the funding and efforts in international mine action and moves around 65 million USD. The two biggest implementers are the Mines Advisory Group (90 million USD) and the HALO Trust (100 million USD). Most of that funding comes from high income country governments. These grants often include a political component in where the activities are taking place. It is unclear how effectively these resources are allocated and how many casualties they are preventing each year.Main TakeawaysBiggest uncertainties:The poor data availability allows for only low levels of confidence in many conclusions.It is highly uncertain what the economic effects of landmines contamination actually are. Since we would expect that these effects make up a majority of the positive benefit, our cost-effectiveness estimates are highly uncertain.Recommendations for philanthropist and why:The research has led to the recommendation to inquire directly with mine action organisations on what they deem the most cost-effective area or intervention to fund, since such data is highly dependent on the factors which cannot easily be predicted.Ukraine is being heavily contaminated by unexploded ordnance right now, especially in its east, the severity and need of the contamination will require a lot of funding and could be potentially very cost effective due to the dense nature of the contaminants as well as the terrain. Mechanical demining could be an appropriate method which could be highly cost-effective. The wide scale decontaminatio...]]>
Jakob P. https://forum.effectivealtruism.org/posts/pPqZMTyJvvdWGfkBy/shallow-problem-review-of-landmines-and-unexploded-ordnance Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Problem Review of Landmines and Unexploded Ordnance, published by Jakob P. on March 3, 2023 on The Effective Altruism Forum.This report is a shallow dive into unexploded ordnance (UXO), landmines which is a sub-area within Global Health and Development. This report reflects approximately 40-50 hours of research and is informed by a 6-month internship I did with the programme and donor relations section of the United Nations Mine Action Service in the fall of 2021. The report offers a brief dive into whether we think a particular problem area is a promising area for either funders or founders to be working in. Being a shallow report, should be used to decide whether or not more research and work into a particular problem area should be prioritised. This report was produced as part of Cause Innovation Bootcamp’s fellowship program. Thank you to James Snowden, Akhil Bansal and Leonie Falk for providing feedback on earlier versions of this report. All errors are my own.SummaryImportance: The issue of UXOs and landmines impacts the health as well as income and most likely the mental health of individuals.. There are on average ~25,000 casualties (defined as severely injured or dead) from landmines, IEDs and UXOs per year (with 2/3rds being caused by IEDs). To put provide some context for this number, Malaria, one of the leading global killers, caused 643 000 deaths (95% UI 302 000–1 150 000) in 2019. This report aims to gauge the income, health and psychological effects of those casualty events.Tractability: Mine action is the umbrella term capturing all the activities aimed at addressing the problem of victim operated landmines, IEDs and other UXOs - meaning that the detonation is triggered by the victim itself. There are several interventions in mine action with four phases to tackle the problem: prevention, avoidance, demining, and victim assistance. Although the report attempts to provide some data on the cost-effectiveness of the different interventions there are several reasons why these estimates are highly uncertain. Furthermore, it is unclear if it would be possible to scale the most cost-effective interventions while keeping the level of cost-effectiveness.Neglectedness: The United Nations Mine Action service functions as the coordinating body for a lot of the funding and efforts in international mine action and moves around 65 million USD. The two biggest implementers are the Mines Advisory Group (90 million USD) and the HALO Trust (100 million USD). Most of that funding comes from high income country governments. These grants often include a political component in where the activities are taking place. It is unclear how effectively these resources are allocated and how many casualties they are preventing each year.Main TakeawaysBiggest uncertainties:The poor data availability allows for only low levels of confidence in many conclusions.It is highly uncertain what the economic effects of landmines contamination actually are. Since we would expect that these effects make up a majority of the positive benefit, our cost-effectiveness estimates are highly uncertain.Recommendations for philanthropist and why:The research has led to the recommendation to inquire directly with mine action organisations on what they deem the most cost-effective area or intervention to fund, since such data is highly dependent on the factors which cannot easily be predicted.Ukraine is being heavily contaminated by unexploded ordnance right now, especially in its east, the severity and need of the contamination will require a lot of funding and could be potentially very cost effective due to the dense nature of the contaminants as well as the terrain. Mechanical demining could be an appropriate method which could be highly cost-effective. The wide scale decontaminatio...]]>
Fri, 03 Mar 2023 20:46:28 +0000 EA - Shallow Problem Review of Landmines and Unexploded Ordnance by Jakob P. Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Problem Review of Landmines and Unexploded Ordnance, published by Jakob P. on March 3, 2023 on The Effective Altruism Forum.This report is a shallow dive into unexploded ordnance (UXO), landmines which is a sub-area within Global Health and Development. This report reflects approximately 40-50 hours of research and is informed by a 6-month internship I did with the programme and donor relations section of the United Nations Mine Action Service in the fall of 2021. The report offers a brief dive into whether we think a particular problem area is a promising area for either funders or founders to be working in. Being a shallow report, should be used to decide whether or not more research and work into a particular problem area should be prioritised. This report was produced as part of Cause Innovation Bootcamp’s fellowship program. Thank you to James Snowden, Akhil Bansal and Leonie Falk for providing feedback on earlier versions of this report. All errors are my own.SummaryImportance: The issue of UXOs and landmines impacts the health as well as income and most likely the mental health of individuals.. There are on average ~25,000 casualties (defined as severely injured or dead) from landmines, IEDs and UXOs per year (with 2/3rds being caused by IEDs). To put provide some context for this number, Malaria, one of the leading global killers, caused 643 000 deaths (95% UI 302 000–1 150 000) in 2019. This report aims to gauge the income, health and psychological effects of those casualty events.Tractability: Mine action is the umbrella term capturing all the activities aimed at addressing the problem of victim operated landmines, IEDs and other UXOs - meaning that the detonation is triggered by the victim itself. There are several interventions in mine action with four phases to tackle the problem: prevention, avoidance, demining, and victim assistance. Although the report attempts to provide some data on the cost-effectiveness of the different interventions there are several reasons why these estimates are highly uncertain. Furthermore, it is unclear if it would be possible to scale the most cost-effective interventions while keeping the level of cost-effectiveness.Neglectedness: The United Nations Mine Action service functions as the coordinating body for a lot of the funding and efforts in international mine action and moves around 65 million USD. The two biggest implementers are the Mines Advisory Group (90 million USD) and the HALO Trust (100 million USD). Most of that funding comes from high income country governments. These grants often include a political component in where the activities are taking place. It is unclear how effectively these resources are allocated and how many casualties they are preventing each year.Main TakeawaysBiggest uncertainties:The poor data availability allows for only low levels of confidence in many conclusions.It is highly uncertain what the economic effects of landmines contamination actually are. Since we would expect that these effects make up a majority of the positive benefit, our cost-effectiveness estimates are highly uncertain.Recommendations for philanthropist and why:The research has led to the recommendation to inquire directly with mine action organisations on what they deem the most cost-effective area or intervention to fund, since such data is highly dependent on the factors which cannot easily be predicted.Ukraine is being heavily contaminated by unexploded ordnance right now, especially in its east, the severity and need of the contamination will require a lot of funding and could be potentially very cost effective due to the dense nature of the contaminants as well as the terrain. Mechanical demining could be an appropriate method which could be highly cost-effective. The wide scale decontaminatio...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Problem Review of Landmines and Unexploded Ordnance, published by Jakob P. on March 3, 2023 on The Effective Altruism Forum.This report is a shallow dive into unexploded ordnance (UXO), landmines which is a sub-area within Global Health and Development. This report reflects approximately 40-50 hours of research and is informed by a 6-month internship I did with the programme and donor relations section of the United Nations Mine Action Service in the fall of 2021. The report offers a brief dive into whether we think a particular problem area is a promising area for either funders or founders to be working in. Being a shallow report, should be used to decide whether or not more research and work into a particular problem area should be prioritised. This report was produced as part of Cause Innovation Bootcamp’s fellowship program. Thank you to James Snowden, Akhil Bansal and Leonie Falk for providing feedback on earlier versions of this report. All errors are my own.SummaryImportance: The issue of UXOs and landmines impacts the health as well as income and most likely the mental health of individuals.. There are on average ~25,000 casualties (defined as severely injured or dead) from landmines, IEDs and UXOs per year (with 2/3rds being caused by IEDs). To put provide some context for this number, Malaria, one of the leading global killers, caused 643 000 deaths (95% UI 302 000–1 150 000) in 2019. This report aims to gauge the income, health and psychological effects of those casualty events.Tractability: Mine action is the umbrella term capturing all the activities aimed at addressing the problem of victim operated landmines, IEDs and other UXOs - meaning that the detonation is triggered by the victim itself. There are several interventions in mine action with four phases to tackle the problem: prevention, avoidance, demining, and victim assistance. Although the report attempts to provide some data on the cost-effectiveness of the different interventions there are several reasons why these estimates are highly uncertain. Furthermore, it is unclear if it would be possible to scale the most cost-effective interventions while keeping the level of cost-effectiveness.Neglectedness: The United Nations Mine Action service functions as the coordinating body for a lot of the funding and efforts in international mine action and moves around 65 million USD. The two biggest implementers are the Mines Advisory Group (90 million USD) and the HALO Trust (100 million USD). Most of that funding comes from high income country governments. These grants often include a political component in where the activities are taking place. It is unclear how effectively these resources are allocated and how many casualties they are preventing each year.Main TakeawaysBiggest uncertainties:The poor data availability allows for only low levels of confidence in many conclusions.It is highly uncertain what the economic effects of landmines contamination actually are. Since we would expect that these effects make up a majority of the positive benefit, our cost-effectiveness estimates are highly uncertain.Recommendations for philanthropist and why:The research has led to the recommendation to inquire directly with mine action organisations on what they deem the most cost-effective area or intervention to fund, since such data is highly dependent on the factors which cannot easily be predicted.Ukraine is being heavily contaminated by unexploded ordnance right now, especially in its east, the severity and need of the contamination will require a lot of funding and could be potentially very cost effective due to the dense nature of the contaminants as well as the terrain. Mechanical demining could be an appropriate method which could be highly cost-effective. The wide scale decontaminatio...]]>
Jakob P. https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 59:49 None full 5099
JcdQxz9gpd9Qmskih_NL_EA_EA EA - What Has EAGxLatAm 2023 Taught Us: Retrospective and Thoughts on Measuring the Impact of EA Conferences by Hugo Ikta Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Has EAGxLatAm 2023 Taught Us: Retrospective & Thoughts on Measuring the Impact of EA Conferences, published by Hugo Ikta on March 3, 2023 on The Effective Altruism Forum.Bottom Line Up Front: The first-ever EAGx in Latin America went well (97% satisfaction rate). Participants generated over 1000 new connections at a cost of USD 225 per connection.What is the purpose of this post?The purpose of this retrospective is to give a brief overview of what went well and what we could have done better at EAGxLatAm 2023. I also hope that the last section will open a conversation to help EA community builders and EAGx organizers to measure the impact of their work and to decide how to best use their resources.The first-ever EAGx in Latin AmericaIt's with great excitement that we announce the successful conclusion of the first EAGx LatAm conference, held in Mexico City from January 6th to 8th, 2023.The event drew a diverse crowd of over 200 participants from 30 different countries. Our goal was to generate new connections between EAs in Latin America and to connect the LatAm community with the broader international community.Video highlights of the event:The conference featured a wide range of content, including talks and panels on topics such as forecasting, artificial intelligence, animal welfare, global catastrophic risks, and EA community building. Notably, it was the first EAG event featuring content in Spanish and Portuguese.We're grateful to have had the opportunity to bring together such a talented and passionate group of individuals, and we hope to see even more attendees in the future. Special shoutout to the unofficial event reporter Elmerei Cuevas for his excellent coverage of the conference on Twitter, using the hashtag #EAGxLatAm.Key Stats223 participants (including 46 speakers)1079 new connections made. That’s 9.68 new connections per participant.Over 1000 one-on-one meetings, including the first recorded instance of a one-on-twelve61 talks, workshops and meetupsCost per connection: USD 225Likelihood to recommend: 9.08/10 with 75% of respondents giving a 9 or 10/10 rating and 3% of respondents rating it below 7/10. (Net promoter score: +72%).Some of the survey resultsGoalsOur main goal was to generate as many connections as possible for every dollar spent.We expected the number of connections per participant during EAGxLatAm 2023 to exceed that of any previous EAG(x) conference. While we generated significantly more connections per participant than the average EAG(x) conference, we didn’t break that record.Also, we expected the cost per connection (number of connections/total budget) to be significantly lower than previous EAGx conferences. We were a little too optimistic on that one. Our cost per connection could have been decreased significantly if we had more attendees (more info below).We aimed at achieving the following key resultsEvery participant will generate >10 new connections10% of participants will generate >20 new connectionsMake sure ~30% of participants are highly engaged EAWe also aimed at limiting unessential spending that would not drastically impact our main objective or our LTR (Likelihood To Recommend) score.Actual results77% of participants generated >10 new connections (below expectations)16% of participants generate >20 new connections (above expectations)~30% of participants were highly engaged EA (goal reached)SpendingWe spent a total of USD 242,732 to make this event happen (including travel grants but not our team’s wages). That’s USD 1089 per participant.DetailsTravel grants: USD 115,884Venue & Catering: USD 98,524Internet: USD 6,667Speakers’ Hotel: USD 9,837Hoodies: USD 5,536Photos & Videos: USD 5,532Other: USD 813What went well and why?We didn’t face any major issuesNothing went terribly...]]>
Hugo Ikta https://forum.effectivealtruism.org/posts/JcdQxz9gpd9Qmskih/what-has-eagxlatam-2023-taught-us-retrospective-and-thoughts Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Has EAGxLatAm 2023 Taught Us: Retrospective & Thoughts on Measuring the Impact of EA Conferences, published by Hugo Ikta on March 3, 2023 on The Effective Altruism Forum.Bottom Line Up Front: The first-ever EAGx in Latin America went well (97% satisfaction rate). Participants generated over 1000 new connections at a cost of USD 225 per connection.What is the purpose of this post?The purpose of this retrospective is to give a brief overview of what went well and what we could have done better at EAGxLatAm 2023. I also hope that the last section will open a conversation to help EA community builders and EAGx organizers to measure the impact of their work and to decide how to best use their resources.The first-ever EAGx in Latin AmericaIt's with great excitement that we announce the successful conclusion of the first EAGx LatAm conference, held in Mexico City from January 6th to 8th, 2023.The event drew a diverse crowd of over 200 participants from 30 different countries. Our goal was to generate new connections between EAs in Latin America and to connect the LatAm community with the broader international community.Video highlights of the event:The conference featured a wide range of content, including talks and panels on topics such as forecasting, artificial intelligence, animal welfare, global catastrophic risks, and EA community building. Notably, it was the first EAG event featuring content in Spanish and Portuguese.We're grateful to have had the opportunity to bring together such a talented and passionate group of individuals, and we hope to see even more attendees in the future. Special shoutout to the unofficial event reporter Elmerei Cuevas for his excellent coverage of the conference on Twitter, using the hashtag #EAGxLatAm.Key Stats223 participants (including 46 speakers)1079 new connections made. That’s 9.68 new connections per participant.Over 1000 one-on-one meetings, including the first recorded instance of a one-on-twelve61 talks, workshops and meetupsCost per connection: USD 225Likelihood to recommend: 9.08/10 with 75% of respondents giving a 9 or 10/10 rating and 3% of respondents rating it below 7/10. (Net promoter score: +72%).Some of the survey resultsGoalsOur main goal was to generate as many connections as possible for every dollar spent.We expected the number of connections per participant during EAGxLatAm 2023 to exceed that of any previous EAG(x) conference. While we generated significantly more connections per participant than the average EAG(x) conference, we didn’t break that record.Also, we expected the cost per connection (number of connections/total budget) to be significantly lower than previous EAGx conferences. We were a little too optimistic on that one. Our cost per connection could have been decreased significantly if we had more attendees (more info below).We aimed at achieving the following key resultsEvery participant will generate >10 new connections10% of participants will generate >20 new connectionsMake sure ~30% of participants are highly engaged EAWe also aimed at limiting unessential spending that would not drastically impact our main objective or our LTR (Likelihood To Recommend) score.Actual results77% of participants generated >10 new connections (below expectations)16% of participants generate >20 new connections (above expectations)~30% of participants were highly engaged EA (goal reached)SpendingWe spent a total of USD 242,732 to make this event happen (including travel grants but not our team’s wages). That’s USD 1089 per participant.DetailsTravel grants: USD 115,884Venue & Catering: USD 98,524Internet: USD 6,667Speakers’ Hotel: USD 9,837Hoodies: USD 5,536Photos & Videos: USD 5,532Other: USD 813What went well and why?We didn’t face any major issuesNothing went terribly...]]>
Fri, 03 Mar 2023 18:30:33 +0000 EA - What Has EAGxLatAm 2023 Taught Us: Retrospective and Thoughts on Measuring the Impact of EA Conferences by Hugo Ikta Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Has EAGxLatAm 2023 Taught Us: Retrospective & Thoughts on Measuring the Impact of EA Conferences, published by Hugo Ikta on March 3, 2023 on The Effective Altruism Forum.Bottom Line Up Front: The first-ever EAGx in Latin America went well (97% satisfaction rate). Participants generated over 1000 new connections at a cost of USD 225 per connection.What is the purpose of this post?The purpose of this retrospective is to give a brief overview of what went well and what we could have done better at EAGxLatAm 2023. I also hope that the last section will open a conversation to help EA community builders and EAGx organizers to measure the impact of their work and to decide how to best use their resources.The first-ever EAGx in Latin AmericaIt's with great excitement that we announce the successful conclusion of the first EAGx LatAm conference, held in Mexico City from January 6th to 8th, 2023.The event drew a diverse crowd of over 200 participants from 30 different countries. Our goal was to generate new connections between EAs in Latin America and to connect the LatAm community with the broader international community.Video highlights of the event:The conference featured a wide range of content, including talks and panels on topics such as forecasting, artificial intelligence, animal welfare, global catastrophic risks, and EA community building. Notably, it was the first EAG event featuring content in Spanish and Portuguese.We're grateful to have had the opportunity to bring together such a talented and passionate group of individuals, and we hope to see even more attendees in the future. Special shoutout to the unofficial event reporter Elmerei Cuevas for his excellent coverage of the conference on Twitter, using the hashtag #EAGxLatAm.Key Stats223 participants (including 46 speakers)1079 new connections made. That’s 9.68 new connections per participant.Over 1000 one-on-one meetings, including the first recorded instance of a one-on-twelve61 talks, workshops and meetupsCost per connection: USD 225Likelihood to recommend: 9.08/10 with 75% of respondents giving a 9 or 10/10 rating and 3% of respondents rating it below 7/10. (Net promoter score: +72%).Some of the survey resultsGoalsOur main goal was to generate as many connections as possible for every dollar spent.We expected the number of connections per participant during EAGxLatAm 2023 to exceed that of any previous EAG(x) conference. While we generated significantly more connections per participant than the average EAG(x) conference, we didn’t break that record.Also, we expected the cost per connection (number of connections/total budget) to be significantly lower than previous EAGx conferences. We were a little too optimistic on that one. Our cost per connection could have been decreased significantly if we had more attendees (more info below).We aimed at achieving the following key resultsEvery participant will generate >10 new connections10% of participants will generate >20 new connectionsMake sure ~30% of participants are highly engaged EAWe also aimed at limiting unessential spending that would not drastically impact our main objective or our LTR (Likelihood To Recommend) score.Actual results77% of participants generated >10 new connections (below expectations)16% of participants generate >20 new connections (above expectations)~30% of participants were highly engaged EA (goal reached)SpendingWe spent a total of USD 242,732 to make this event happen (including travel grants but not our team’s wages). That’s USD 1089 per participant.DetailsTravel grants: USD 115,884Venue & Catering: USD 98,524Internet: USD 6,667Speakers’ Hotel: USD 9,837Hoodies: USD 5,536Photos & Videos: USD 5,532Other: USD 813What went well and why?We didn’t face any major issuesNothing went terribly...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Has EAGxLatAm 2023 Taught Us: Retrospective & Thoughts on Measuring the Impact of EA Conferences, published by Hugo Ikta on March 3, 2023 on The Effective Altruism Forum.Bottom Line Up Front: The first-ever EAGx in Latin America went well (97% satisfaction rate). Participants generated over 1000 new connections at a cost of USD 225 per connection.What is the purpose of this post?The purpose of this retrospective is to give a brief overview of what went well and what we could have done better at EAGxLatAm 2023. I also hope that the last section will open a conversation to help EA community builders and EAGx organizers to measure the impact of their work and to decide how to best use their resources.The first-ever EAGx in Latin AmericaIt's with great excitement that we announce the successful conclusion of the first EAGx LatAm conference, held in Mexico City from January 6th to 8th, 2023.The event drew a diverse crowd of over 200 participants from 30 different countries. Our goal was to generate new connections between EAs in Latin America and to connect the LatAm community with the broader international community.Video highlights of the event:The conference featured a wide range of content, including talks and panels on topics such as forecasting, artificial intelligence, animal welfare, global catastrophic risks, and EA community building. Notably, it was the first EAG event featuring content in Spanish and Portuguese.We're grateful to have had the opportunity to bring together such a talented and passionate group of individuals, and we hope to see even more attendees in the future. Special shoutout to the unofficial event reporter Elmerei Cuevas for his excellent coverage of the conference on Twitter, using the hashtag #EAGxLatAm.Key Stats223 participants (including 46 speakers)1079 new connections made. That’s 9.68 new connections per participant.Over 1000 one-on-one meetings, including the first recorded instance of a one-on-twelve61 talks, workshops and meetupsCost per connection: USD 225Likelihood to recommend: 9.08/10 with 75% of respondents giving a 9 or 10/10 rating and 3% of respondents rating it below 7/10. (Net promoter score: +72%).Some of the survey resultsGoalsOur main goal was to generate as many connections as possible for every dollar spent.We expected the number of connections per participant during EAGxLatAm 2023 to exceed that of any previous EAG(x) conference. While we generated significantly more connections per participant than the average EAG(x) conference, we didn’t break that record.Also, we expected the cost per connection (number of connections/total budget) to be significantly lower than previous EAGx conferences. We were a little too optimistic on that one. Our cost per connection could have been decreased significantly if we had more attendees (more info below).We aimed at achieving the following key resultsEvery participant will generate >10 new connections10% of participants will generate >20 new connectionsMake sure ~30% of participants are highly engaged EAWe also aimed at limiting unessential spending that would not drastically impact our main objective or our LTR (Likelihood To Recommend) score.Actual results77% of participants generated >10 new connections (below expectations)16% of participants generate >20 new connections (above expectations)~30% of participants were highly engaged EA (goal reached)SpendingWe spent a total of USD 242,732 to make this event happen (including travel grants but not our team’s wages). That’s USD 1089 per participant.DetailsTravel grants: USD 115,884Venue & Catering: USD 98,524Internet: USD 6,667Speakers’ Hotel: USD 9,837Hoodies: USD 5,536Photos & Videos: USD 5,532Other: USD 813What went well and why?We didn’t face any major issuesNothing went terribly...]]>
Hugo Ikta https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:22 None full 5093
St4nnmhKxoi6vYfC4_NL_EA_EA EA - A concerning observation from media coverage of AI industry dynamics by Justin Olive Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A concerning observation from media coverage of AI industry dynamics, published by Justin Olive on March 2, 2023 on The Effective Altruism Forum.tl:dr: there are indications that ML engineers will migrate to environments with less AI governance in place, which has implications for the tech industry and global AI governance efforts.I just wanted to raise something to the community's attention about the coverage of AI companies within the media. The media-source is 'The Information', which is a tech-business focused online news source. Link:/. I'll also note that their articles are (to my knowledge) all behind a paywall.The first article in question is titled "Alphabet Needs to Replace Sundar Pichai".It outlines how Google stocks have stagnated in 2023 compared to other tech stocks such as Meta's.Here's their mention of Google's actions throughout GPT-mania:"The other side of this equation is the performance of Alphabet management. Most recently, the company’s bungling of its AI efforts—allowing Microsoft to get the jump on rolling out an AI-powered search engine—was the latest sign of how Alphabet’s lumbering management style is holding it back. (Symbolically, as The Information reported, Microsoft was helped by former Google AI employees!)."This brings us to the second article: "OpenAI’s Hidden Weapon: Ex-Google Engineers""As OpenAI’s web chatbot became a global sensation in recent months, artificial intelligence practitioners and investors have wondered how a seven-year-old startup beat Google to the punch.After it hoovered up much of the world’s machine-learning talent, Google is now playing catch-up in launching AI-centric products to the public. On the one hand, Google’s approach was deliberate, reflecting the company’s enormous reach and high stakes in case something went wrong with the nascent technology. It also costs more to deliver humanlike answers from a chatbot than it does classic search results. On the other hand, startups including OpenAI have taken some of the AI research advances Google incubated and, unlike Google, have turned them into new types of revenue-generating services, including chatbots and systems that generate images and videos based on text prompts. They’re also grabbing some of Google’s prized talent.Two people who recently worked at Google Brain said some staff felt the unit’s culture had become lethargic, with product initiatives marked by excess caution and layers of red tape. That has prompted some employees to seek opportunities elsewhere, including OpenAI, they said."Although there are many concerning themes here, I think the key point is in this last paragraph.I've heard speculation in the EA / tech community that AI will trend towards alignment & safety because technology companies will be risk-averse enough to build alignment into their practices.I think the articles show that this dynamic is playing out to some degree - Google at least seems to be taking a more risk-averse approach to deploying of AI systems.The concerning observation is that there has been a two-pronged backlash against Google's 'conservative' approach. Not only is the stockmarket punishing Google for 'lagging' behind the competition (despite having equal or better capability to deploy similar systems), according to this article, elite machine-learning talent is also pushing back on this approach.To me this is doubly concerning. The 'excess caution and layers of red tape' in the article is potentially the same types of measures that AI safety proponents would deem to be useful and necessary. Regardless, it appears that the engineers themselves are willing to jump ship in order to circumvent these safety measures.Although further evidence would be valuable, it seems that there might be a trend unfolding whereby firms are not only punished by f...]]>
Justin Olive https://forum.effectivealtruism.org/posts/St4nnmhKxoi6vYfC4/a-concerning-observation-from-media-coverage-of-ai-industry Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A concerning observation from media coverage of AI industry dynamics, published by Justin Olive on March 2, 2023 on The Effective Altruism Forum.tl:dr: there are indications that ML engineers will migrate to environments with less AI governance in place, which has implications for the tech industry and global AI governance efforts.I just wanted to raise something to the community's attention about the coverage of AI companies within the media. The media-source is 'The Information', which is a tech-business focused online news source. Link:/. I'll also note that their articles are (to my knowledge) all behind a paywall.The first article in question is titled "Alphabet Needs to Replace Sundar Pichai".It outlines how Google stocks have stagnated in 2023 compared to other tech stocks such as Meta's.Here's their mention of Google's actions throughout GPT-mania:"The other side of this equation is the performance of Alphabet management. Most recently, the company’s bungling of its AI efforts—allowing Microsoft to get the jump on rolling out an AI-powered search engine—was the latest sign of how Alphabet’s lumbering management style is holding it back. (Symbolically, as The Information reported, Microsoft was helped by former Google AI employees!)."This brings us to the second article: "OpenAI’s Hidden Weapon: Ex-Google Engineers""As OpenAI’s web chatbot became a global sensation in recent months, artificial intelligence practitioners and investors have wondered how a seven-year-old startup beat Google to the punch.After it hoovered up much of the world’s machine-learning talent, Google is now playing catch-up in launching AI-centric products to the public. On the one hand, Google’s approach was deliberate, reflecting the company’s enormous reach and high stakes in case something went wrong with the nascent technology. It also costs more to deliver humanlike answers from a chatbot than it does classic search results. On the other hand, startups including OpenAI have taken some of the AI research advances Google incubated and, unlike Google, have turned them into new types of revenue-generating services, including chatbots and systems that generate images and videos based on text prompts. They’re also grabbing some of Google’s prized talent.Two people who recently worked at Google Brain said some staff felt the unit’s culture had become lethargic, with product initiatives marked by excess caution and layers of red tape. That has prompted some employees to seek opportunities elsewhere, including OpenAI, they said."Although there are many concerning themes here, I think the key point is in this last paragraph.I've heard speculation in the EA / tech community that AI will trend towards alignment & safety because technology companies will be risk-averse enough to build alignment into their practices.I think the articles show that this dynamic is playing out to some degree - Google at least seems to be taking a more risk-averse approach to deploying of AI systems.The concerning observation is that there has been a two-pronged backlash against Google's 'conservative' approach. Not only is the stockmarket punishing Google for 'lagging' behind the competition (despite having equal or better capability to deploy similar systems), according to this article, elite machine-learning talent is also pushing back on this approach.To me this is doubly concerning. The 'excess caution and layers of red tape' in the article is potentially the same types of measures that AI safety proponents would deem to be useful and necessary. Regardless, it appears that the engineers themselves are willing to jump ship in order to circumvent these safety measures.Although further evidence would be valuable, it seems that there might be a trend unfolding whereby firms are not only punished by f...]]>
Fri, 03 Mar 2023 17:47:05 +0000 EA - A concerning observation from media coverage of AI industry dynamics by Justin Olive Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A concerning observation from media coverage of AI industry dynamics, published by Justin Olive on March 2, 2023 on The Effective Altruism Forum.tl:dr: there are indications that ML engineers will migrate to environments with less AI governance in place, which has implications for the tech industry and global AI governance efforts.I just wanted to raise something to the community's attention about the coverage of AI companies within the media. The media-source is 'The Information', which is a tech-business focused online news source. Link:/. I'll also note that their articles are (to my knowledge) all behind a paywall.The first article in question is titled "Alphabet Needs to Replace Sundar Pichai".It outlines how Google stocks have stagnated in 2023 compared to other tech stocks such as Meta's.Here's their mention of Google's actions throughout GPT-mania:"The other side of this equation is the performance of Alphabet management. Most recently, the company’s bungling of its AI efforts—allowing Microsoft to get the jump on rolling out an AI-powered search engine—was the latest sign of how Alphabet’s lumbering management style is holding it back. (Symbolically, as The Information reported, Microsoft was helped by former Google AI employees!)."This brings us to the second article: "OpenAI’s Hidden Weapon: Ex-Google Engineers""As OpenAI’s web chatbot became a global sensation in recent months, artificial intelligence practitioners and investors have wondered how a seven-year-old startup beat Google to the punch.After it hoovered up much of the world’s machine-learning talent, Google is now playing catch-up in launching AI-centric products to the public. On the one hand, Google’s approach was deliberate, reflecting the company’s enormous reach and high stakes in case something went wrong with the nascent technology. It also costs more to deliver humanlike answers from a chatbot than it does classic search results. On the other hand, startups including OpenAI have taken some of the AI research advances Google incubated and, unlike Google, have turned them into new types of revenue-generating services, including chatbots and systems that generate images and videos based on text prompts. They’re also grabbing some of Google’s prized talent.Two people who recently worked at Google Brain said some staff felt the unit’s culture had become lethargic, with product initiatives marked by excess caution and layers of red tape. That has prompted some employees to seek opportunities elsewhere, including OpenAI, they said."Although there are many concerning themes here, I think the key point is in this last paragraph.I've heard speculation in the EA / tech community that AI will trend towards alignment & safety because technology companies will be risk-averse enough to build alignment into their practices.I think the articles show that this dynamic is playing out to some degree - Google at least seems to be taking a more risk-averse approach to deploying of AI systems.The concerning observation is that there has been a two-pronged backlash against Google's 'conservative' approach. Not only is the stockmarket punishing Google for 'lagging' behind the competition (despite having equal or better capability to deploy similar systems), according to this article, elite machine-learning talent is also pushing back on this approach.To me this is doubly concerning. The 'excess caution and layers of red tape' in the article is potentially the same types of measures that AI safety proponents would deem to be useful and necessary. Regardless, it appears that the engineers themselves are willing to jump ship in order to circumvent these safety measures.Although further evidence would be valuable, it seems that there might be a trend unfolding whereby firms are not only punished by f...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A concerning observation from media coverage of AI industry dynamics, published by Justin Olive on March 2, 2023 on The Effective Altruism Forum.tl:dr: there are indications that ML engineers will migrate to environments with less AI governance in place, which has implications for the tech industry and global AI governance efforts.I just wanted to raise something to the community's attention about the coverage of AI companies within the media. The media-source is 'The Information', which is a tech-business focused online news source. Link:/. I'll also note that their articles are (to my knowledge) all behind a paywall.The first article in question is titled "Alphabet Needs to Replace Sundar Pichai".It outlines how Google stocks have stagnated in 2023 compared to other tech stocks such as Meta's.Here's their mention of Google's actions throughout GPT-mania:"The other side of this equation is the performance of Alphabet management. Most recently, the company’s bungling of its AI efforts—allowing Microsoft to get the jump on rolling out an AI-powered search engine—was the latest sign of how Alphabet’s lumbering management style is holding it back. (Symbolically, as The Information reported, Microsoft was helped by former Google AI employees!)."This brings us to the second article: "OpenAI’s Hidden Weapon: Ex-Google Engineers""As OpenAI’s web chatbot became a global sensation in recent months, artificial intelligence practitioners and investors have wondered how a seven-year-old startup beat Google to the punch.After it hoovered up much of the world’s machine-learning talent, Google is now playing catch-up in launching AI-centric products to the public. On the one hand, Google’s approach was deliberate, reflecting the company’s enormous reach and high stakes in case something went wrong with the nascent technology. It also costs more to deliver humanlike answers from a chatbot than it does classic search results. On the other hand, startups including OpenAI have taken some of the AI research advances Google incubated and, unlike Google, have turned them into new types of revenue-generating services, including chatbots and systems that generate images and videos based on text prompts. They’re also grabbing some of Google’s prized talent.Two people who recently worked at Google Brain said some staff felt the unit’s culture had become lethargic, with product initiatives marked by excess caution and layers of red tape. That has prompted some employees to seek opportunities elsewhere, including OpenAI, they said."Although there are many concerning themes here, I think the key point is in this last paragraph.I've heard speculation in the EA / tech community that AI will trend towards alignment & safety because technology companies will be risk-averse enough to build alignment into their practices.I think the articles show that this dynamic is playing out to some degree - Google at least seems to be taking a more risk-averse approach to deploying of AI systems.The concerning observation is that there has been a two-pronged backlash against Google's 'conservative' approach. Not only is the stockmarket punishing Google for 'lagging' behind the competition (despite having equal or better capability to deploy similar systems), according to this article, elite machine-learning talent is also pushing back on this approach.To me this is doubly concerning. The 'excess caution and layers of red tape' in the article is potentially the same types of measures that AI safety proponents would deem to be useful and necessary. Regardless, it appears that the engineers themselves are willing to jump ship in order to circumvent these safety measures.Although further evidence would be valuable, it seems that there might be a trend unfolding whereby firms are not only punished by f...]]>
Justin Olive https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:33 None full 5094
XR2xyx2rgusTTLxfs_NL_EA_EA EA - Send funds to earthquake survivors in Turkey via GiveDirectly by GiveDirectly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Send funds to earthquake survivors in Turkey via GiveDirectly, published by GiveDirectly on March 2, 2023 on The Effective Altruism Forum.If you’re looking for an effective way to help survivors of the Turkey-Syria earthquake, you can now send cash directly to some of the most vulnerable families to help them recover.GiveDirectly is delivering ₺5,000 Turkish lira (~$264 USD) directly to Syrian refugees in Turkey who have lost their livelihoods. This community is some of the most at risk in the wake of the disaster which struck last month.While food and tents are useful, there are many needs after a disaster that only money can buy: fuel, repairs, transport school fees, rent, medicines, etc.Research finds in emergency contexts, cash transfers consistently increase household spending on food and often increase the diversity of foods they consume.Syrian refugees in Turkey are struggling to recoverNearly 2 million Syrian refugees who fled violence in their own country live in southern Turkey where the earthquake struck. These families had fragile livelihoods before the disaster:1 in 5 refugee household lacked access to clean drinking water. 1 in 3 were unable to access essential hygiene items17% of households with school-age children are unable to send their children to school45% lived in poverty and 14% lived in extreme poverty. About 25% of children under 5 years were malnourishedAfter the earthquake, our local partner, Building Markets, surveyed 830 Syrian refugee small business operators (who are a major source of employment for fellow refugees) and found nearly half can only operate their business in a limited capacity as compared to before the disaster. 17% said they cannot continue their business operations at all currently.Your donation will help this community recoverWith our partners at Building Markets, we’re targeting struggling Syrian refugee small business operators and low-income workers in the hardest-hit regions of Turkey (Hatay, Adana, Gaziantep, Sanliurfa). We’re conducting on-the-ground scoping to develop eligibility criteria that prioritizes the highest-need families based on poverty levels and exclusion from other aid programs.In our first enrollment phase, eligible recipients will receive ₺5,000 Turkish lira (~$264 USD). This transfer size is designed to meet essential needs based on current market prices. The majority of Turkey’s refugee population has access to banking services and will receive cash via digital transfer. We are prepared to distribute money via local partners or pre-paid cards in the event that families can’t access financial networks.In-kind donations are often unneeded after a disasterStudies find refugees sell large portions of their food aid. Why? Because they need cash-in-hand to meet other immediate needs.Haitian and Japanese authorities report 60% donated goods sent after their 2010 & 2011 disasters weren’t needed and only 5-10% satisfied urgent needs.While food and tents can be useful, there are many needs after a disaster that only money can buy: repairs, fuel, transport school fees, rent, medicines, etc. Cash aid is fast and fully remote, letting families meet essential needs quickly and reaching them via digital transfers that don’t tax fragile supply chains or clog transit routes.Research finds in emergency contexts, cash transfers consistently increase household spending on food and often increase the diversity of foods that households consume.The story of a survivor: Hind QayduhaThe following is the story of one Syrian refugee survivor, Hind Qayduha, from the New York Times.First, Syria’s civil war drove Hind Qayduha from her home in the city of Aleppo. Then, conflict and joblessness forced her family to flee two more times. Two years ago, she came to southern Turkey, thinking she had finally fou...]]>
GiveDirectly https://forum.effectivealtruism.org/posts/XR2xyx2rgusTTLxfs/send-funds-to-earthquake-survivors-in-turkey-via Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Send funds to earthquake survivors in Turkey via GiveDirectly, published by GiveDirectly on March 2, 2023 on The Effective Altruism Forum.If you’re looking for an effective way to help survivors of the Turkey-Syria earthquake, you can now send cash directly to some of the most vulnerable families to help them recover.GiveDirectly is delivering ₺5,000 Turkish lira (~$264 USD) directly to Syrian refugees in Turkey who have lost their livelihoods. This community is some of the most at risk in the wake of the disaster which struck last month.While food and tents are useful, there are many needs after a disaster that only money can buy: fuel, repairs, transport school fees, rent, medicines, etc.Research finds in emergency contexts, cash transfers consistently increase household spending on food and often increase the diversity of foods they consume.Syrian refugees in Turkey are struggling to recoverNearly 2 million Syrian refugees who fled violence in their own country live in southern Turkey where the earthquake struck. These families had fragile livelihoods before the disaster:1 in 5 refugee household lacked access to clean drinking water. 1 in 3 were unable to access essential hygiene items17% of households with school-age children are unable to send their children to school45% lived in poverty and 14% lived in extreme poverty. About 25% of children under 5 years were malnourishedAfter the earthquake, our local partner, Building Markets, surveyed 830 Syrian refugee small business operators (who are a major source of employment for fellow refugees) and found nearly half can only operate their business in a limited capacity as compared to before the disaster. 17% said they cannot continue their business operations at all currently.Your donation will help this community recoverWith our partners at Building Markets, we’re targeting struggling Syrian refugee small business operators and low-income workers in the hardest-hit regions of Turkey (Hatay, Adana, Gaziantep, Sanliurfa). We’re conducting on-the-ground scoping to develop eligibility criteria that prioritizes the highest-need families based on poverty levels and exclusion from other aid programs.In our first enrollment phase, eligible recipients will receive ₺5,000 Turkish lira (~$264 USD). This transfer size is designed to meet essential needs based on current market prices. The majority of Turkey’s refugee population has access to banking services and will receive cash via digital transfer. We are prepared to distribute money via local partners or pre-paid cards in the event that families can’t access financial networks.In-kind donations are often unneeded after a disasterStudies find refugees sell large portions of their food aid. Why? Because they need cash-in-hand to meet other immediate needs.Haitian and Japanese authorities report 60% donated goods sent after their 2010 & 2011 disasters weren’t needed and only 5-10% satisfied urgent needs.While food and tents can be useful, there are many needs after a disaster that only money can buy: repairs, fuel, transport school fees, rent, medicines, etc. Cash aid is fast and fully remote, letting families meet essential needs quickly and reaching them via digital transfers that don’t tax fragile supply chains or clog transit routes.Research finds in emergency contexts, cash transfers consistently increase household spending on food and often increase the diversity of foods that households consume.The story of a survivor: Hind QayduhaThe following is the story of one Syrian refugee survivor, Hind Qayduha, from the New York Times.First, Syria’s civil war drove Hind Qayduha from her home in the city of Aleppo. Then, conflict and joblessness forced her family to flee two more times. Two years ago, she came to southern Turkey, thinking she had finally fou...]]>
Fri, 03 Mar 2023 15:57:12 +0000 EA - Send funds to earthquake survivors in Turkey via GiveDirectly by GiveDirectly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Send funds to earthquake survivors in Turkey via GiveDirectly, published by GiveDirectly on March 2, 2023 on The Effective Altruism Forum.If you’re looking for an effective way to help survivors of the Turkey-Syria earthquake, you can now send cash directly to some of the most vulnerable families to help them recover.GiveDirectly is delivering ₺5,000 Turkish lira (~$264 USD) directly to Syrian refugees in Turkey who have lost their livelihoods. This community is some of the most at risk in the wake of the disaster which struck last month.While food and tents are useful, there are many needs after a disaster that only money can buy: fuel, repairs, transport school fees, rent, medicines, etc.Research finds in emergency contexts, cash transfers consistently increase household spending on food and often increase the diversity of foods they consume.Syrian refugees in Turkey are struggling to recoverNearly 2 million Syrian refugees who fled violence in their own country live in southern Turkey where the earthquake struck. These families had fragile livelihoods before the disaster:1 in 5 refugee household lacked access to clean drinking water. 1 in 3 were unable to access essential hygiene items17% of households with school-age children are unable to send their children to school45% lived in poverty and 14% lived in extreme poverty. About 25% of children under 5 years were malnourishedAfter the earthquake, our local partner, Building Markets, surveyed 830 Syrian refugee small business operators (who are a major source of employment for fellow refugees) and found nearly half can only operate their business in a limited capacity as compared to before the disaster. 17% said they cannot continue their business operations at all currently.Your donation will help this community recoverWith our partners at Building Markets, we’re targeting struggling Syrian refugee small business operators and low-income workers in the hardest-hit regions of Turkey (Hatay, Adana, Gaziantep, Sanliurfa). We’re conducting on-the-ground scoping to develop eligibility criteria that prioritizes the highest-need families based on poverty levels and exclusion from other aid programs.In our first enrollment phase, eligible recipients will receive ₺5,000 Turkish lira (~$264 USD). This transfer size is designed to meet essential needs based on current market prices. The majority of Turkey’s refugee population has access to banking services and will receive cash via digital transfer. We are prepared to distribute money via local partners or pre-paid cards in the event that families can’t access financial networks.In-kind donations are often unneeded after a disasterStudies find refugees sell large portions of their food aid. Why? Because they need cash-in-hand to meet other immediate needs.Haitian and Japanese authorities report 60% donated goods sent after their 2010 & 2011 disasters weren’t needed and only 5-10% satisfied urgent needs.While food and tents can be useful, there are many needs after a disaster that only money can buy: repairs, fuel, transport school fees, rent, medicines, etc. Cash aid is fast and fully remote, letting families meet essential needs quickly and reaching them via digital transfers that don’t tax fragile supply chains or clog transit routes.Research finds in emergency contexts, cash transfers consistently increase household spending on food and often increase the diversity of foods that households consume.The story of a survivor: Hind QayduhaThe following is the story of one Syrian refugee survivor, Hind Qayduha, from the New York Times.First, Syria’s civil war drove Hind Qayduha from her home in the city of Aleppo. Then, conflict and joblessness forced her family to flee two more times. Two years ago, she came to southern Turkey, thinking she had finally fou...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Send funds to earthquake survivors in Turkey via GiveDirectly, published by GiveDirectly on March 2, 2023 on The Effective Altruism Forum.If you’re looking for an effective way to help survivors of the Turkey-Syria earthquake, you can now send cash directly to some of the most vulnerable families to help them recover.GiveDirectly is delivering ₺5,000 Turkish lira (~$264 USD) directly to Syrian refugees in Turkey who have lost their livelihoods. This community is some of the most at risk in the wake of the disaster which struck last month.While food and tents are useful, there are many needs after a disaster that only money can buy: fuel, repairs, transport school fees, rent, medicines, etc.Research finds in emergency contexts, cash transfers consistently increase household spending on food and often increase the diversity of foods they consume.Syrian refugees in Turkey are struggling to recoverNearly 2 million Syrian refugees who fled violence in their own country live in southern Turkey where the earthquake struck. These families had fragile livelihoods before the disaster:1 in 5 refugee household lacked access to clean drinking water. 1 in 3 were unable to access essential hygiene items17% of households with school-age children are unable to send their children to school45% lived in poverty and 14% lived in extreme poverty. About 25% of children under 5 years were malnourishedAfter the earthquake, our local partner, Building Markets, surveyed 830 Syrian refugee small business operators (who are a major source of employment for fellow refugees) and found nearly half can only operate their business in a limited capacity as compared to before the disaster. 17% said they cannot continue their business operations at all currently.Your donation will help this community recoverWith our partners at Building Markets, we’re targeting struggling Syrian refugee small business operators and low-income workers in the hardest-hit regions of Turkey (Hatay, Adana, Gaziantep, Sanliurfa). We’re conducting on-the-ground scoping to develop eligibility criteria that prioritizes the highest-need families based on poverty levels and exclusion from other aid programs.In our first enrollment phase, eligible recipients will receive ₺5,000 Turkish lira (~$264 USD). This transfer size is designed to meet essential needs based on current market prices. The majority of Turkey’s refugee population has access to banking services and will receive cash via digital transfer. We are prepared to distribute money via local partners or pre-paid cards in the event that families can’t access financial networks.In-kind donations are often unneeded after a disasterStudies find refugees sell large portions of their food aid. Why? Because they need cash-in-hand to meet other immediate needs.Haitian and Japanese authorities report 60% donated goods sent after their 2010 & 2011 disasters weren’t needed and only 5-10% satisfied urgent needs.While food and tents can be useful, there are many needs after a disaster that only money can buy: repairs, fuel, transport school fees, rent, medicines, etc. Cash aid is fast and fully remote, letting families meet essential needs quickly and reaching them via digital transfers that don’t tax fragile supply chains or clog transit routes.Research finds in emergency contexts, cash transfers consistently increase household spending on food and often increase the diversity of foods that households consume.The story of a survivor: Hind QayduhaThe following is the story of one Syrian refugee survivor, Hind Qayduha, from the New York Times.First, Syria’s civil war drove Hind Qayduha from her home in the city of Aleppo. Then, conflict and joblessness forced her family to flee two more times. Two years ago, she came to southern Turkey, thinking she had finally fou...]]>
GiveDirectly https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:50 None full 5095
HCuoMQj4Y5iAZpWGH_NL_EA_EA EA - Advice on communicating in and around the biosecurity policy community by Elika Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice on communicating in and around the biosecurity policy community, published by Elika on March 2, 2023 on The Effective Altruism Forum.TL;DRThe field of biosecurity is more complicated, sensitive and nuanced, especially in the policy space, than what impressions you might get based on publicly available information. As a result, say / write / do things with caution (especially if you are a non-technical person or more junior, or talking to a new (non-EA) expert). This might help make more headway on safer biosecurity policy.Generally, take caution in what you say and how you present yourself, because it does impact how much you are trusted, whether or not you are invited back to the conversation, and thus the potential to make an impact in this (highly sensitive) space.Why Am I Saying This?An important note: I don’t represent the views of the NIH, HHS, or the U.S. government and these are my personal opinions. This is me engaging outside of my professional capacity to provide advice for people interested in working on biosecurity policy.I work for a U.S. government agency on projects related to oversight and ethics over dual-use research of concern (DURC) and enhanced pandemic potential pathogens (ePPP). In my job, I talk and interface with science policy advisors, policy makers, regulators, (health) security professionals, scientists who do DURC / ePPP research, biosafety professionals, ethicists, and more. Everyone has a slightly different opinion and risk categorisation of biosecurity / biosafety as a whole, and DURC and ePPP research risk in specific.As a result of my work, I regularly (and happily) speak to newer and more junior EAs to give them advice on entering the biosecurity space. I’ve noticed a few common mistakes with how many EA community members – both newer bio people and non-bio people who know the basics about the cause area – approach communication, stakeholder engagement, and conversation around biosecurity, especially when engaging with non-EA-aligned stakeholders whose perspectives might be (and very often) are different than the typical EA-perspective on biosecurity and biorisk.I've also made many of these mistakes! I'm hoping this is educational and helpful and not shaming or off-putting. I'm happy to help anyone unsure communicate and engage more strategically in this space.Some Key Points that you might need to Update On.Junior EAs and people new to biosecurity / biosafety may not know how to or that they should be diplomatic. EA communities have a trend of encouraging provoking behaviour and absolutist, black-and-white scenarios in ways that don't communicate an understanding of how grey this field is and the importance of cooperation and diplomacy. If possible, even in EA contexts, train your default to be (at least a bit more) agreeable (especially at first).Be careful with the terms you use and what you sayTerms matter. They signal where you are on the spectrum of ‘how dangerous X research type is’, what educational background you have and whose articles / what sources you read, and how much you know on this topic.Example: If you use the term gain-of-function with a virologist, most will respond saying most biomedical research is either a gain or loss of function and isn’t inherently risky. In an age where many virologists feel like health security professionals want to take away their jobs, saying gain-of-function is an easy and unknowing way to discredit yourself.Biosafety, biorisk, and biosecurity all indicate different approaches to a problem and often, different perspectives on risk and reasonable solutions. What terms you use signal not only what ‘side’ you represent, but in a field that’s heavily political and sensitive can discredit you amongst the other sides.Recognise how little (or how much) you knowBiosec...]]>
Elika https://forum.effectivealtruism.org/posts/HCuoMQj4Y5iAZpWGH/advice-on-communicating-in-and-around-the-biosecurity-policy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice on communicating in and around the biosecurity policy community, published by Elika on March 2, 2023 on The Effective Altruism Forum.TL;DRThe field of biosecurity is more complicated, sensitive and nuanced, especially in the policy space, than what impressions you might get based on publicly available information. As a result, say / write / do things with caution (especially if you are a non-technical person or more junior, or talking to a new (non-EA) expert). This might help make more headway on safer biosecurity policy.Generally, take caution in what you say and how you present yourself, because it does impact how much you are trusted, whether or not you are invited back to the conversation, and thus the potential to make an impact in this (highly sensitive) space.Why Am I Saying This?An important note: I don’t represent the views of the NIH, HHS, or the U.S. government and these are my personal opinions. This is me engaging outside of my professional capacity to provide advice for people interested in working on biosecurity policy.I work for a U.S. government agency on projects related to oversight and ethics over dual-use research of concern (DURC) and enhanced pandemic potential pathogens (ePPP). In my job, I talk and interface with science policy advisors, policy makers, regulators, (health) security professionals, scientists who do DURC / ePPP research, biosafety professionals, ethicists, and more. Everyone has a slightly different opinion and risk categorisation of biosecurity / biosafety as a whole, and DURC and ePPP research risk in specific.As a result of my work, I regularly (and happily) speak to newer and more junior EAs to give them advice on entering the biosecurity space. I’ve noticed a few common mistakes with how many EA community members – both newer bio people and non-bio people who know the basics about the cause area – approach communication, stakeholder engagement, and conversation around biosecurity, especially when engaging with non-EA-aligned stakeholders whose perspectives might be (and very often) are different than the typical EA-perspective on biosecurity and biorisk.I've also made many of these mistakes! I'm hoping this is educational and helpful and not shaming or off-putting. I'm happy to help anyone unsure communicate and engage more strategically in this space.Some Key Points that you might need to Update On.Junior EAs and people new to biosecurity / biosafety may not know how to or that they should be diplomatic. EA communities have a trend of encouraging provoking behaviour and absolutist, black-and-white scenarios in ways that don't communicate an understanding of how grey this field is and the importance of cooperation and diplomacy. If possible, even in EA contexts, train your default to be (at least a bit more) agreeable (especially at first).Be careful with the terms you use and what you sayTerms matter. They signal where you are on the spectrum of ‘how dangerous X research type is’, what educational background you have and whose articles / what sources you read, and how much you know on this topic.Example: If you use the term gain-of-function with a virologist, most will respond saying most biomedical research is either a gain or loss of function and isn’t inherently risky. In an age where many virologists feel like health security professionals want to take away their jobs, saying gain-of-function is an easy and unknowing way to discredit yourself.Biosafety, biorisk, and biosecurity all indicate different approaches to a problem and often, different perspectives on risk and reasonable solutions. What terms you use signal not only what ‘side’ you represent, but in a field that’s heavily political and sensitive can discredit you amongst the other sides.Recognise how little (or how much) you knowBiosec...]]>
Thu, 02 Mar 2023 23:38:22 +0000 EA - Advice on communicating in and around the biosecurity policy community by Elika Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice on communicating in and around the biosecurity policy community, published by Elika on March 2, 2023 on The Effective Altruism Forum.TL;DRThe field of biosecurity is more complicated, sensitive and nuanced, especially in the policy space, than what impressions you might get based on publicly available information. As a result, say / write / do things with caution (especially if you are a non-technical person or more junior, or talking to a new (non-EA) expert). This might help make more headway on safer biosecurity policy.Generally, take caution in what you say and how you present yourself, because it does impact how much you are trusted, whether or not you are invited back to the conversation, and thus the potential to make an impact in this (highly sensitive) space.Why Am I Saying This?An important note: I don’t represent the views of the NIH, HHS, or the U.S. government and these are my personal opinions. This is me engaging outside of my professional capacity to provide advice for people interested in working on biosecurity policy.I work for a U.S. government agency on projects related to oversight and ethics over dual-use research of concern (DURC) and enhanced pandemic potential pathogens (ePPP). In my job, I talk and interface with science policy advisors, policy makers, regulators, (health) security professionals, scientists who do DURC / ePPP research, biosafety professionals, ethicists, and more. Everyone has a slightly different opinion and risk categorisation of biosecurity / biosafety as a whole, and DURC and ePPP research risk in specific.As a result of my work, I regularly (and happily) speak to newer and more junior EAs to give them advice on entering the biosecurity space. I’ve noticed a few common mistakes with how many EA community members – both newer bio people and non-bio people who know the basics about the cause area – approach communication, stakeholder engagement, and conversation around biosecurity, especially when engaging with non-EA-aligned stakeholders whose perspectives might be (and very often) are different than the typical EA-perspective on biosecurity and biorisk.I've also made many of these mistakes! I'm hoping this is educational and helpful and not shaming or off-putting. I'm happy to help anyone unsure communicate and engage more strategically in this space.Some Key Points that you might need to Update On.Junior EAs and people new to biosecurity / biosafety may not know how to or that they should be diplomatic. EA communities have a trend of encouraging provoking behaviour and absolutist, black-and-white scenarios in ways that don't communicate an understanding of how grey this field is and the importance of cooperation and diplomacy. If possible, even in EA contexts, train your default to be (at least a bit more) agreeable (especially at first).Be careful with the terms you use and what you sayTerms matter. They signal where you are on the spectrum of ‘how dangerous X research type is’, what educational background you have and whose articles / what sources you read, and how much you know on this topic.Example: If you use the term gain-of-function with a virologist, most will respond saying most biomedical research is either a gain or loss of function and isn’t inherently risky. In an age where many virologists feel like health security professionals want to take away their jobs, saying gain-of-function is an easy and unknowing way to discredit yourself.Biosafety, biorisk, and biosecurity all indicate different approaches to a problem and often, different perspectives on risk and reasonable solutions. What terms you use signal not only what ‘side’ you represent, but in a field that’s heavily political and sensitive can discredit you amongst the other sides.Recognise how little (or how much) you knowBiosec...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice on communicating in and around the biosecurity policy community, published by Elika on March 2, 2023 on The Effective Altruism Forum.TL;DRThe field of biosecurity is more complicated, sensitive and nuanced, especially in the policy space, than what impressions you might get based on publicly available information. As a result, say / write / do things with caution (especially if you are a non-technical person or more junior, or talking to a new (non-EA) expert). This might help make more headway on safer biosecurity policy.Generally, take caution in what you say and how you present yourself, because it does impact how much you are trusted, whether or not you are invited back to the conversation, and thus the potential to make an impact in this (highly sensitive) space.Why Am I Saying This?An important note: I don’t represent the views of the NIH, HHS, or the U.S. government and these are my personal opinions. This is me engaging outside of my professional capacity to provide advice for people interested in working on biosecurity policy.I work for a U.S. government agency on projects related to oversight and ethics over dual-use research of concern (DURC) and enhanced pandemic potential pathogens (ePPP). In my job, I talk and interface with science policy advisors, policy makers, regulators, (health) security professionals, scientists who do DURC / ePPP research, biosafety professionals, ethicists, and more. Everyone has a slightly different opinion and risk categorisation of biosecurity / biosafety as a whole, and DURC and ePPP research risk in specific.As a result of my work, I regularly (and happily) speak to newer and more junior EAs to give them advice on entering the biosecurity space. I’ve noticed a few common mistakes with how many EA community members – both newer bio people and non-bio people who know the basics about the cause area – approach communication, stakeholder engagement, and conversation around biosecurity, especially when engaging with non-EA-aligned stakeholders whose perspectives might be (and very often) are different than the typical EA-perspective on biosecurity and biorisk.I've also made many of these mistakes! I'm hoping this is educational and helpful and not shaming or off-putting. I'm happy to help anyone unsure communicate and engage more strategically in this space.Some Key Points that you might need to Update On.Junior EAs and people new to biosecurity / biosafety may not know how to or that they should be diplomatic. EA communities have a trend of encouraging provoking behaviour and absolutist, black-and-white scenarios in ways that don't communicate an understanding of how grey this field is and the importance of cooperation and diplomacy. If possible, even in EA contexts, train your default to be (at least a bit more) agreeable (especially at first).Be careful with the terms you use and what you sayTerms matter. They signal where you are on the spectrum of ‘how dangerous X research type is’, what educational background you have and whose articles / what sources you read, and how much you know on this topic.Example: If you use the term gain-of-function with a virologist, most will respond saying most biomedical research is either a gain or loss of function and isn’t inherently risky. In an age where many virologists feel like health security professionals want to take away their jobs, saying gain-of-function is an easy and unknowing way to discredit yourself.Biosafety, biorisk, and biosecurity all indicate different approaches to a problem and often, different perspectives on risk and reasonable solutions. What terms you use signal not only what ‘side’ you represent, but in a field that’s heavily political and sensitive can discredit you amongst the other sides.Recognise how little (or how much) you knowBiosec...]]>
Elika https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 12:03 None full 5087
YYg5RPDa8zoshRS7n_NL_EA_EA EA - Fighting without hope by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fighting without hope, published by Akash on March 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/YYg5RPDa8zoshRS7n/fighting-without-hope Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fighting without hope, published by Akash on March 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 02 Mar 2023 19:08:12 +0000 EA - Fighting without hope by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fighting without hope, published by Akash on March 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fighting without hope, published by Akash on March 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:23 None full 5084
tCkBsT6cAw6LEKAbm_NL_EA_EA EA - Scoring forecasts from the 2016 “Expert Survey on Progress in AI” by PatrickL Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scoring forecasts from the 2016 “Expert Survey on Progress in AI”, published by PatrickL on March 1, 2023 on The Effective Altruism Forum.SummaryThis document looks at the predictions made by AI experts in The 2016 Expert Survey on Progress in AI, analyses the predictions on ‘Narrow tasks’, and gives a Brier score to the median of the experts’ predictions.My analysis suggests that the experts did a fairly good job of forecasting (Brier score = 0.19), and would have been less accurate if they had predicted each development in AI to generally come, by a factor of 1.5, later (Brier score = 0.26) or sooner (Brier score = 0.27) than they actually predicted.I judge that the experts expected 9 milestones to have happened by now - and that 10 milestones have now happened.But there are important caveats to this, such as:I have only analysed whether milestones have been publicly met. AI labs may have achieved more milestones in private this year without disclosing them. This means my analysis of how many milestones have been met is probably conservative.I have taken the point probabilities given, rather than estimating probability distributions for each milestone, meaning I often round down, which skews the expert forecasts towards being more conservative and unfairly penalises their forecasts for low precision.It’s not apparent that forecasting accuracy on these nearer-term questions is very predictive of forecasting accuracy on the longer-term questions.My judgements regarding which forecasting questions have resolved positively vs negatively were somewhat subjective (justifications for each question in the separate appendix).IntroductionIn 2016, AI Impacts published The Expert Survey on Progress in AI: a survey of machine learning researchers, asking for their predictions about when various AI developments will occur. The results have been used to inform general and expert opinions on AI timelines.The survey largely focused on timelines for general/human-level artificial intelligence (median forecast of 2056). However, included in this survey were a collection of questions about shorter-term milestones in AI. Some of these forecasts are now resolvable. Measuring how accurate these shorter-term forecasts have been is probably somewhat informative of how accurate the longer-term forecasts are. More broadly, the accuracy of these shorter-term forecasts seems somewhat informative of how accurate ML researchers' views are in general. So, how have the experts done so far?FindingsI analysed the 32 ‘Narrow tasks’ to which the following question was asked:How many years until you think the following AI tasks will be feasible with:a small chance (10%)?an even chance (50%)?a high chance (90%)?Let a task be ‘feasible’ if one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to.I interpret ‘feasible’ as whether, in ‘less than a year’ before now, any AI models had passed these milestones, and this was disclosed publicly. Since it is now (February 2023) 6.5 years since this survey, I am therefore looking at any forecasts for events happening within 5.5 years of the survey.Across these milestones, I judge that 10 have now happened and 22 have not happened. My 90% confidence interval is that 7-15 of them have now happened. A full description of milestones, and justification of my judgments, are in the appendix (separate doc).The experts forecast that:4 milestones had a <10% chance of happening by now,20 had a 10-49% chance,7 had a 50-89% chance,1 had a >90% chance.So they expected 6-17 of these milestones to have happened by now. By eyeballing the forecasts for each milestone, my estimate is that they expected ~9 to have happened. I did not estimate the implied probability distribut...]]>
PatrickL https://forum.effectivealtruism.org/posts/tCkBsT6cAw6LEKAbm/scoring-forecasts-from-the-2016-expert-survey-on-progress-in Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scoring forecasts from the 2016 “Expert Survey on Progress in AI”, published by PatrickL on March 1, 2023 on The Effective Altruism Forum.SummaryThis document looks at the predictions made by AI experts in The 2016 Expert Survey on Progress in AI, analyses the predictions on ‘Narrow tasks’, and gives a Brier score to the median of the experts’ predictions.My analysis suggests that the experts did a fairly good job of forecasting (Brier score = 0.19), and would have been less accurate if they had predicted each development in AI to generally come, by a factor of 1.5, later (Brier score = 0.26) or sooner (Brier score = 0.27) than they actually predicted.I judge that the experts expected 9 milestones to have happened by now - and that 10 milestones have now happened.But there are important caveats to this, such as:I have only analysed whether milestones have been publicly met. AI labs may have achieved more milestones in private this year without disclosing them. This means my analysis of how many milestones have been met is probably conservative.I have taken the point probabilities given, rather than estimating probability distributions for each milestone, meaning I often round down, which skews the expert forecasts towards being more conservative and unfairly penalises their forecasts for low precision.It’s not apparent that forecasting accuracy on these nearer-term questions is very predictive of forecasting accuracy on the longer-term questions.My judgements regarding which forecasting questions have resolved positively vs negatively were somewhat subjective (justifications for each question in the separate appendix).IntroductionIn 2016, AI Impacts published The Expert Survey on Progress in AI: a survey of machine learning researchers, asking for their predictions about when various AI developments will occur. The results have been used to inform general and expert opinions on AI timelines.The survey largely focused on timelines for general/human-level artificial intelligence (median forecast of 2056). However, included in this survey were a collection of questions about shorter-term milestones in AI. Some of these forecasts are now resolvable. Measuring how accurate these shorter-term forecasts have been is probably somewhat informative of how accurate the longer-term forecasts are. More broadly, the accuracy of these shorter-term forecasts seems somewhat informative of how accurate ML researchers' views are in general. So, how have the experts done so far?FindingsI analysed the 32 ‘Narrow tasks’ to which the following question was asked:How many years until you think the following AI tasks will be feasible with:a small chance (10%)?an even chance (50%)?a high chance (90%)?Let a task be ‘feasible’ if one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to.I interpret ‘feasible’ as whether, in ‘less than a year’ before now, any AI models had passed these milestones, and this was disclosed publicly. Since it is now (February 2023) 6.5 years since this survey, I am therefore looking at any forecasts for events happening within 5.5 years of the survey.Across these milestones, I judge that 10 have now happened and 22 have not happened. My 90% confidence interval is that 7-15 of them have now happened. A full description of milestones, and justification of my judgments, are in the appendix (separate doc).The experts forecast that:4 milestones had a <10% chance of happening by now,20 had a 10-49% chance,7 had a 50-89% chance,1 had a >90% chance.So they expected 6-17 of these milestones to have happened by now. By eyeballing the forecasts for each milestone, my estimate is that they expected ~9 to have happened. I did not estimate the implied probability distribut...]]>
Wed, 01 Mar 2023 16:58:36 +0000 EA - Scoring forecasts from the 2016 “Expert Survey on Progress in AI” by PatrickL Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scoring forecasts from the 2016 “Expert Survey on Progress in AI”, published by PatrickL on March 1, 2023 on The Effective Altruism Forum.SummaryThis document looks at the predictions made by AI experts in The 2016 Expert Survey on Progress in AI, analyses the predictions on ‘Narrow tasks’, and gives a Brier score to the median of the experts’ predictions.My analysis suggests that the experts did a fairly good job of forecasting (Brier score = 0.19), and would have been less accurate if they had predicted each development in AI to generally come, by a factor of 1.5, later (Brier score = 0.26) or sooner (Brier score = 0.27) than they actually predicted.I judge that the experts expected 9 milestones to have happened by now - and that 10 milestones have now happened.But there are important caveats to this, such as:I have only analysed whether milestones have been publicly met. AI labs may have achieved more milestones in private this year without disclosing them. This means my analysis of how many milestones have been met is probably conservative.I have taken the point probabilities given, rather than estimating probability distributions for each milestone, meaning I often round down, which skews the expert forecasts towards being more conservative and unfairly penalises their forecasts for low precision.It’s not apparent that forecasting accuracy on these nearer-term questions is very predictive of forecasting accuracy on the longer-term questions.My judgements regarding which forecasting questions have resolved positively vs negatively were somewhat subjective (justifications for each question in the separate appendix).IntroductionIn 2016, AI Impacts published The Expert Survey on Progress in AI: a survey of machine learning researchers, asking for their predictions about when various AI developments will occur. The results have been used to inform general and expert opinions on AI timelines.The survey largely focused on timelines for general/human-level artificial intelligence (median forecast of 2056). However, included in this survey were a collection of questions about shorter-term milestones in AI. Some of these forecasts are now resolvable. Measuring how accurate these shorter-term forecasts have been is probably somewhat informative of how accurate the longer-term forecasts are. More broadly, the accuracy of these shorter-term forecasts seems somewhat informative of how accurate ML researchers' views are in general. So, how have the experts done so far?FindingsI analysed the 32 ‘Narrow tasks’ to which the following question was asked:How many years until you think the following AI tasks will be feasible with:a small chance (10%)?an even chance (50%)?a high chance (90%)?Let a task be ‘feasible’ if one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to.I interpret ‘feasible’ as whether, in ‘less than a year’ before now, any AI models had passed these milestones, and this was disclosed publicly. Since it is now (February 2023) 6.5 years since this survey, I am therefore looking at any forecasts for events happening within 5.5 years of the survey.Across these milestones, I judge that 10 have now happened and 22 have not happened. My 90% confidence interval is that 7-15 of them have now happened. A full description of milestones, and justification of my judgments, are in the appendix (separate doc).The experts forecast that:4 milestones had a <10% chance of happening by now,20 had a 10-49% chance,7 had a 50-89% chance,1 had a >90% chance.So they expected 6-17 of these milestones to have happened by now. By eyeballing the forecasts for each milestone, my estimate is that they expected ~9 to have happened. I did not estimate the implied probability distribut...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scoring forecasts from the 2016 “Expert Survey on Progress in AI”, published by PatrickL on March 1, 2023 on The Effective Altruism Forum.SummaryThis document looks at the predictions made by AI experts in The 2016 Expert Survey on Progress in AI, analyses the predictions on ‘Narrow tasks’, and gives a Brier score to the median of the experts’ predictions.My analysis suggests that the experts did a fairly good job of forecasting (Brier score = 0.19), and would have been less accurate if they had predicted each development in AI to generally come, by a factor of 1.5, later (Brier score = 0.26) or sooner (Brier score = 0.27) than they actually predicted.I judge that the experts expected 9 milestones to have happened by now - and that 10 milestones have now happened.But there are important caveats to this, such as:I have only analysed whether milestones have been publicly met. AI labs may have achieved more milestones in private this year without disclosing them. This means my analysis of how many milestones have been met is probably conservative.I have taken the point probabilities given, rather than estimating probability distributions for each milestone, meaning I often round down, which skews the expert forecasts towards being more conservative and unfairly penalises their forecasts for low precision.It’s not apparent that forecasting accuracy on these nearer-term questions is very predictive of forecasting accuracy on the longer-term questions.My judgements regarding which forecasting questions have resolved positively vs negatively were somewhat subjective (justifications for each question in the separate appendix).IntroductionIn 2016, AI Impacts published The Expert Survey on Progress in AI: a survey of machine learning researchers, asking for their predictions about when various AI developments will occur. The results have been used to inform general and expert opinions on AI timelines.The survey largely focused on timelines for general/human-level artificial intelligence (median forecast of 2056). However, included in this survey were a collection of questions about shorter-term milestones in AI. Some of these forecasts are now resolvable. Measuring how accurate these shorter-term forecasts have been is probably somewhat informative of how accurate the longer-term forecasts are. More broadly, the accuracy of these shorter-term forecasts seems somewhat informative of how accurate ML researchers' views are in general. So, how have the experts done so far?FindingsI analysed the 32 ‘Narrow tasks’ to which the following question was asked:How many years until you think the following AI tasks will be feasible with:a small chance (10%)?an even chance (50%)?a high chance (90%)?Let a task be ‘feasible’ if one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to.I interpret ‘feasible’ as whether, in ‘less than a year’ before now, any AI models had passed these milestones, and this was disclosed publicly. Since it is now (February 2023) 6.5 years since this survey, I am therefore looking at any forecasts for events happening within 5.5 years of the survey.Across these milestones, I judge that 10 have now happened and 22 have not happened. My 90% confidence interval is that 7-15 of them have now happened. A full description of milestones, and justification of my judgments, are in the appendix (separate doc).The experts forecast that:4 milestones had a <10% chance of happening by now,20 had a 10-49% chance,7 had a 50-89% chance,1 had a >90% chance.So they expected 6-17 of these milestones to have happened by now. By eyeballing the forecasts for each milestone, my estimate is that they expected ~9 to have happened. I did not estimate the implied probability distribut...]]>
PatrickL https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 18:03 None full 5069
hfXy8EbyNTuBixjJf_NL_EA_EA EA - Call for Cruxes by Rhyme, a Longtermist History Consultancy by Lara TH Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Call for Cruxes by Rhyme, a Longtermist History Consultancy, published by Lara TH on March 1, 2023 on The Effective Altruism Forum.TLDR; This post announces the trial period of Rhyme, a history consultancy for longtermists. It seems like longtermists can profit from historical insights and the distillation of the current state of historical literature on a particular question, both because of its use as an intuition pump and for information about the historical context of their work. So, if you work on an AI Governance project (research or policy) and are interested in augmenting it with a historical perspective, consider registering your interest and the cruxes of your research here. During this trial period of three to six months, the service is free."History doesn’t repeat, but it rhymes." - Mark TwainWhat Problem is Rhyme trying to solve?When we try to answer a complicated question like “how would a Chinese regime change influence the international AI Landscape”, it can be hard to know where to start. We need to come up with a hypothesis, a scenario. But what should we base this hypothesis, this scenario on? How would we know which hypotheses are most plausible? Game theoretical analysis provides one possible inspiration. But we don't just need to know what a rational actor would do, given particular incentives. We also need intuitions for how actors would act irrationally, given specific circumstances.Would we have thought of considering the influence of close familial ties between European leaders when trying to predict the beginning of the first world war? (Clark, 2014)Would we have considered Lyndon B. Johnson's training as a tutor for disadvantaged children as a student when trying to predict his success in convincing congresspeople effectively? (Caro, 1982)Would we have considered the Merino-Wool-Business of a certain diplomat from Geneva when trying to predict whether Switzerland would be annexed by its neighbouring empires in 1815? (E. Pictet, 1892).In summary: A lot of pivotal actions and developments depend on circumstances we wouldn’t expect them to. Not because we’d think them to be implausible, but because we wouldn’t think of considering them. We need inspiration and orientation in this huge space of possible hypotheses to avoid missing out on the ones which are actually true.In an ideal world, AI governance researchers would know about a vast amount of historical literature that is written with enough detail to analyse important decisions, as well as multiple biographies of the same people, so they see where scholars currently disagree. This strategy has two main problems: Firstly, the counterfactual impact these people could have with their time is potentially very big. Secondly, detailed historical literature (which is, often, biographies or primary sources) tends to be written for entertainment, among other things. Biographers have an interest in highlighting maybe irrelevant, but spicy details about romantic relationships, quirky fun jokes told by the person or the etymological origins of the name of a friend. This makes biographies longer than they’d need to be for the goal of analysing the relevant factors in pivotal decisions of a particular person. It takes training to filter through this information to find the actually important stuff.Skills that require training are more efficiently done when a part of an ecosystem specializes in them. Rhyme is an attempt at this specialization.Who could actually use this?The following examples should illustrate who could use this service:Alice is writing a report on the possibilities for the state of California to regulate possible AI uses. They wonder how big the influence of the Governor's advisors was in past regulation attempts of other technologies.Bob wants a brief history of the EU’...]]>
Lara TH https://forum.effectivealtruism.org/posts/hfXy8EbyNTuBixjJf/call-for-cruxes-by-rhyme-a-longtermist-history-consultancy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Call for Cruxes by Rhyme, a Longtermist History Consultancy, published by Lara TH on March 1, 2023 on The Effective Altruism Forum.TLDR; This post announces the trial period of Rhyme, a history consultancy for longtermists. It seems like longtermists can profit from historical insights and the distillation of the current state of historical literature on a particular question, both because of its use as an intuition pump and for information about the historical context of their work. So, if you work on an AI Governance project (research or policy) and are interested in augmenting it with a historical perspective, consider registering your interest and the cruxes of your research here. During this trial period of three to six months, the service is free."History doesn’t repeat, but it rhymes." - Mark TwainWhat Problem is Rhyme trying to solve?When we try to answer a complicated question like “how would a Chinese regime change influence the international AI Landscape”, it can be hard to know where to start. We need to come up with a hypothesis, a scenario. But what should we base this hypothesis, this scenario on? How would we know which hypotheses are most plausible? Game theoretical analysis provides one possible inspiration. But we don't just need to know what a rational actor would do, given particular incentives. We also need intuitions for how actors would act irrationally, given specific circumstances.Would we have thought of considering the influence of close familial ties between European leaders when trying to predict the beginning of the first world war? (Clark, 2014)Would we have considered Lyndon B. Johnson's training as a tutor for disadvantaged children as a student when trying to predict his success in convincing congresspeople effectively? (Caro, 1982)Would we have considered the Merino-Wool-Business of a certain diplomat from Geneva when trying to predict whether Switzerland would be annexed by its neighbouring empires in 1815? (E. Pictet, 1892).In summary: A lot of pivotal actions and developments depend on circumstances we wouldn’t expect them to. Not because we’d think them to be implausible, but because we wouldn’t think of considering them. We need inspiration and orientation in this huge space of possible hypotheses to avoid missing out on the ones which are actually true.In an ideal world, AI governance researchers would know about a vast amount of historical literature that is written with enough detail to analyse important decisions, as well as multiple biographies of the same people, so they see where scholars currently disagree. This strategy has two main problems: Firstly, the counterfactual impact these people could have with their time is potentially very big. Secondly, detailed historical literature (which is, often, biographies or primary sources) tends to be written for entertainment, among other things. Biographers have an interest in highlighting maybe irrelevant, but spicy details about romantic relationships, quirky fun jokes told by the person or the etymological origins of the name of a friend. This makes biographies longer than they’d need to be for the goal of analysing the relevant factors in pivotal decisions of a particular person. It takes training to filter through this information to find the actually important stuff.Skills that require training are more efficiently done when a part of an ecosystem specializes in them. Rhyme is an attempt at this specialization.Who could actually use this?The following examples should illustrate who could use this service:Alice is writing a report on the possibilities for the state of California to regulate possible AI uses. They wonder how big the influence of the Governor's advisors was in past regulation attempts of other technologies.Bob wants a brief history of the EU’...]]>
Wed, 01 Mar 2023 16:32:22 +0000 EA - Call for Cruxes by Rhyme, a Longtermist History Consultancy by Lara TH Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Call for Cruxes by Rhyme, a Longtermist History Consultancy, published by Lara TH on March 1, 2023 on The Effective Altruism Forum.TLDR; This post announces the trial period of Rhyme, a history consultancy for longtermists. It seems like longtermists can profit from historical insights and the distillation of the current state of historical literature on a particular question, both because of its use as an intuition pump and for information about the historical context of their work. So, if you work on an AI Governance project (research or policy) and are interested in augmenting it with a historical perspective, consider registering your interest and the cruxes of your research here. During this trial period of three to six months, the service is free."History doesn’t repeat, but it rhymes." - Mark TwainWhat Problem is Rhyme trying to solve?When we try to answer a complicated question like “how would a Chinese regime change influence the international AI Landscape”, it can be hard to know where to start. We need to come up with a hypothesis, a scenario. But what should we base this hypothesis, this scenario on? How would we know which hypotheses are most plausible? Game theoretical analysis provides one possible inspiration. But we don't just need to know what a rational actor would do, given particular incentives. We also need intuitions for how actors would act irrationally, given specific circumstances.Would we have thought of considering the influence of close familial ties between European leaders when trying to predict the beginning of the first world war? (Clark, 2014)Would we have considered Lyndon B. Johnson's training as a tutor for disadvantaged children as a student when trying to predict his success in convincing congresspeople effectively? (Caro, 1982)Would we have considered the Merino-Wool-Business of a certain diplomat from Geneva when trying to predict whether Switzerland would be annexed by its neighbouring empires in 1815? (E. Pictet, 1892).In summary: A lot of pivotal actions and developments depend on circumstances we wouldn’t expect them to. Not because we’d think them to be implausible, but because we wouldn’t think of considering them. We need inspiration and orientation in this huge space of possible hypotheses to avoid missing out on the ones which are actually true.In an ideal world, AI governance researchers would know about a vast amount of historical literature that is written with enough detail to analyse important decisions, as well as multiple biographies of the same people, so they see where scholars currently disagree. This strategy has two main problems: Firstly, the counterfactual impact these people could have with their time is potentially very big. Secondly, detailed historical literature (which is, often, biographies or primary sources) tends to be written for entertainment, among other things. Biographers have an interest in highlighting maybe irrelevant, but spicy details about romantic relationships, quirky fun jokes told by the person or the etymological origins of the name of a friend. This makes biographies longer than they’d need to be for the goal of analysing the relevant factors in pivotal decisions of a particular person. It takes training to filter through this information to find the actually important stuff.Skills that require training are more efficiently done when a part of an ecosystem specializes in them. Rhyme is an attempt at this specialization.Who could actually use this?The following examples should illustrate who could use this service:Alice is writing a report on the possibilities for the state of California to regulate possible AI uses. They wonder how big the influence of the Governor's advisors was in past regulation attempts of other technologies.Bob wants a brief history of the EU’...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Call for Cruxes by Rhyme, a Longtermist History Consultancy, published by Lara TH on March 1, 2023 on The Effective Altruism Forum.TLDR; This post announces the trial period of Rhyme, a history consultancy for longtermists. It seems like longtermists can profit from historical insights and the distillation of the current state of historical literature on a particular question, both because of its use as an intuition pump and for information about the historical context of their work. So, if you work on an AI Governance project (research or policy) and are interested in augmenting it with a historical perspective, consider registering your interest and the cruxes of your research here. During this trial period of three to six months, the service is free."History doesn’t repeat, but it rhymes." - Mark TwainWhat Problem is Rhyme trying to solve?When we try to answer a complicated question like “how would a Chinese regime change influence the international AI Landscape”, it can be hard to know where to start. We need to come up with a hypothesis, a scenario. But what should we base this hypothesis, this scenario on? How would we know which hypotheses are most plausible? Game theoretical analysis provides one possible inspiration. But we don't just need to know what a rational actor would do, given particular incentives. We also need intuitions for how actors would act irrationally, given specific circumstances.Would we have thought of considering the influence of close familial ties between European leaders when trying to predict the beginning of the first world war? (Clark, 2014)Would we have considered Lyndon B. Johnson's training as a tutor for disadvantaged children as a student when trying to predict his success in convincing congresspeople effectively? (Caro, 1982)Would we have considered the Merino-Wool-Business of a certain diplomat from Geneva when trying to predict whether Switzerland would be annexed by its neighbouring empires in 1815? (E. Pictet, 1892).In summary: A lot of pivotal actions and developments depend on circumstances we wouldn’t expect them to. Not because we’d think them to be implausible, but because we wouldn’t think of considering them. We need inspiration and orientation in this huge space of possible hypotheses to avoid missing out on the ones which are actually true.In an ideal world, AI governance researchers would know about a vast amount of historical literature that is written with enough detail to analyse important decisions, as well as multiple biographies of the same people, so they see where scholars currently disagree. This strategy has two main problems: Firstly, the counterfactual impact these people could have with their time is potentially very big. Secondly, detailed historical literature (which is, often, biographies or primary sources) tends to be written for entertainment, among other things. Biographers have an interest in highlighting maybe irrelevant, but spicy details about romantic relationships, quirky fun jokes told by the person or the etymological origins of the name of a friend. This makes biographies longer than they’d need to be for the goal of analysing the relevant factors in pivotal decisions of a particular person. It takes training to filter through this information to find the actually important stuff.Skills that require training are more efficiently done when a part of an ecosystem specializes in them. Rhyme is an attempt at this specialization.Who could actually use this?The following examples should illustrate who could use this service:Alice is writing a report on the possibilities for the state of California to regulate possible AI uses. They wonder how big the influence of the Governor's advisors was in past regulation attempts of other technologies.Bob wants a brief history of the EU’...]]>
Lara TH https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:46 None full 5070
DJTpSNbNfCqKzc7ja_NL_EA_EA EA - Counterproductive Altruism: The Other Heavy Tail by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Counterproductive Altruism: The Other Heavy Tail, published by Vasco Grilo on March 1, 2023 on The Effective Altruism Forum.This is a linkpost to the article Counterproductive Altruism: The Other Heavy Tail from Daniel Kokotajlo and Alexandra Oprea. Some excerpts are below. I also include a section at the end with some hot takes regarding possibly counterproductive altruism.AbstractFirst, we argue that the appeal of effective altruism (henceforth, EA) depends significantly on a certain empirical premise we call the Heavy Tail Hypothesis (HTH), which characterizes the probability distribution of opportunities for doing good. Roughly, the HTH implies that the best causes, interventions, or charities produce orders of magnitude greater good than the average ones, constituting a substantial portion of the total amount of good caused by altruistic interventions. Next, we canvass arguments EAs have given for the existence of a positive (or “right”) heavy tail and argue that they can also apply in support of a negative (or “left”) heavy tail where counterproductive interventions do orders of magnitude more harm than ineffective or moderately harmful ones. Incorporating the other heavy tail of the distribution has important implications for the core activities of EA: effectiveness research, cause prioritization, and the assessment of altruistic interventions.It also informs the debate surrounding the institutional critique of EA.IV Implications of the Heavy Right Tail for AltruismAssume that the probability distribution of charitable interventions has a heavy-right tail (for example, like the power law described in the previous section). This means that your expectation about a possible new or unassessed charitable intervention should include the large values described above with a relatively high probability. It also means that existing charitable interventions whose effectiveness is known (or estimated with a high degree of certainty) will include interventions differing in effectiveness by orders of magnitude. We contend that this assumption justifies well-known aspects of EA practice such as (1) effectiveness research and cause prioritization, (2) “hits-based-giving,” and (3) skepticism about historical averages.V Implications of the Heavy Left Tail for AltruismWhat if the probability distribution of altruistic interventions includes both a left and a right heavy tail? In this case, we cannot assume either that (1) one's altruistic interventions are expected to have at worst a value of zero (i.e. to be bounded on the left side) or (2) that the probability that a charitable intervention is counterproductive or harmful approaches zero very rapidly.Downside Risk ResearchMany catastrophic interventions — whether altruistic or not — generate large amounts of (intentional or unintentional) harm. When someone in the world is engaging in an intervention that is likely to end up in the heavy left tail, there is a corresponding opportunity for us to do good by preventing them. This would itself represent an altruistic intervention in the heavy right tail (i.e. one responsible for enormous benefits). The existence of the heavy-left tail therefore provides even stronger justification for the prioritization research preferred by EAs.Assessing Types of Interventions Requires Both TailsAnother conclusion we draw from the revised HTH is that the value of a class of interventions should be estimated by considering the worst as well as the best. Following such analysis, a class of interventions could turn out to be net-negative even if there are some very prominent positive examples and indeed even if almost all examples are positive. This sharply contradicts MacAskill's earlier claim that the value of a class of interventions can be approximated by the value of its best membe...]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/DJTpSNbNfCqKzc7ja/counterproductive-altruism-the-other-heavy-tail Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Counterproductive Altruism: The Other Heavy Tail, published by Vasco Grilo on March 1, 2023 on The Effective Altruism Forum.This is a linkpost to the article Counterproductive Altruism: The Other Heavy Tail from Daniel Kokotajlo and Alexandra Oprea. Some excerpts are below. I also include a section at the end with some hot takes regarding possibly counterproductive altruism.AbstractFirst, we argue that the appeal of effective altruism (henceforth, EA) depends significantly on a certain empirical premise we call the Heavy Tail Hypothesis (HTH), which characterizes the probability distribution of opportunities for doing good. Roughly, the HTH implies that the best causes, interventions, or charities produce orders of magnitude greater good than the average ones, constituting a substantial portion of the total amount of good caused by altruistic interventions. Next, we canvass arguments EAs have given for the existence of a positive (or “right”) heavy tail and argue that they can also apply in support of a negative (or “left”) heavy tail where counterproductive interventions do orders of magnitude more harm than ineffective or moderately harmful ones. Incorporating the other heavy tail of the distribution has important implications for the core activities of EA: effectiveness research, cause prioritization, and the assessment of altruistic interventions.It also informs the debate surrounding the institutional critique of EA.IV Implications of the Heavy Right Tail for AltruismAssume that the probability distribution of charitable interventions has a heavy-right tail (for example, like the power law described in the previous section). This means that your expectation about a possible new or unassessed charitable intervention should include the large values described above with a relatively high probability. It also means that existing charitable interventions whose effectiveness is known (or estimated with a high degree of certainty) will include interventions differing in effectiveness by orders of magnitude. We contend that this assumption justifies well-known aspects of EA practice such as (1) effectiveness research and cause prioritization, (2) “hits-based-giving,” and (3) skepticism about historical averages.V Implications of the Heavy Left Tail for AltruismWhat if the probability distribution of altruistic interventions includes both a left and a right heavy tail? In this case, we cannot assume either that (1) one's altruistic interventions are expected to have at worst a value of zero (i.e. to be bounded on the left side) or (2) that the probability that a charitable intervention is counterproductive or harmful approaches zero very rapidly.Downside Risk ResearchMany catastrophic interventions — whether altruistic or not — generate large amounts of (intentional or unintentional) harm. When someone in the world is engaging in an intervention that is likely to end up in the heavy left tail, there is a corresponding opportunity for us to do good by preventing them. This would itself represent an altruistic intervention in the heavy right tail (i.e. one responsible for enormous benefits). The existence of the heavy-left tail therefore provides even stronger justification for the prioritization research preferred by EAs.Assessing Types of Interventions Requires Both TailsAnother conclusion we draw from the revised HTH is that the value of a class of interventions should be estimated by considering the worst as well as the best. Following such analysis, a class of interventions could turn out to be net-negative even if there are some very prominent positive examples and indeed even if almost all examples are positive. This sharply contradicts MacAskill's earlier claim that the value of a class of interventions can be approximated by the value of its best membe...]]>
Wed, 01 Mar 2023 13:20:22 +0000 EA - Counterproductive Altruism: The Other Heavy Tail by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Counterproductive Altruism: The Other Heavy Tail, published by Vasco Grilo on March 1, 2023 on The Effective Altruism Forum.This is a linkpost to the article Counterproductive Altruism: The Other Heavy Tail from Daniel Kokotajlo and Alexandra Oprea. Some excerpts are below. I also include a section at the end with some hot takes regarding possibly counterproductive altruism.AbstractFirst, we argue that the appeal of effective altruism (henceforth, EA) depends significantly on a certain empirical premise we call the Heavy Tail Hypothesis (HTH), which characterizes the probability distribution of opportunities for doing good. Roughly, the HTH implies that the best causes, interventions, or charities produce orders of magnitude greater good than the average ones, constituting a substantial portion of the total amount of good caused by altruistic interventions. Next, we canvass arguments EAs have given for the existence of a positive (or “right”) heavy tail and argue that they can also apply in support of a negative (or “left”) heavy tail where counterproductive interventions do orders of magnitude more harm than ineffective or moderately harmful ones. Incorporating the other heavy tail of the distribution has important implications for the core activities of EA: effectiveness research, cause prioritization, and the assessment of altruistic interventions.It also informs the debate surrounding the institutional critique of EA.IV Implications of the Heavy Right Tail for AltruismAssume that the probability distribution of charitable interventions has a heavy-right tail (for example, like the power law described in the previous section). This means that your expectation about a possible new or unassessed charitable intervention should include the large values described above with a relatively high probability. It also means that existing charitable interventions whose effectiveness is known (or estimated with a high degree of certainty) will include interventions differing in effectiveness by orders of magnitude. We contend that this assumption justifies well-known aspects of EA practice such as (1) effectiveness research and cause prioritization, (2) “hits-based-giving,” and (3) skepticism about historical averages.V Implications of the Heavy Left Tail for AltruismWhat if the probability distribution of altruistic interventions includes both a left and a right heavy tail? In this case, we cannot assume either that (1) one's altruistic interventions are expected to have at worst a value of zero (i.e. to be bounded on the left side) or (2) that the probability that a charitable intervention is counterproductive or harmful approaches zero very rapidly.Downside Risk ResearchMany catastrophic interventions — whether altruistic or not — generate large amounts of (intentional or unintentional) harm. When someone in the world is engaging in an intervention that is likely to end up in the heavy left tail, there is a corresponding opportunity for us to do good by preventing them. This would itself represent an altruistic intervention in the heavy right tail (i.e. one responsible for enormous benefits). The existence of the heavy-left tail therefore provides even stronger justification for the prioritization research preferred by EAs.Assessing Types of Interventions Requires Both TailsAnother conclusion we draw from the revised HTH is that the value of a class of interventions should be estimated by considering the worst as well as the best. Following such analysis, a class of interventions could turn out to be net-negative even if there are some very prominent positive examples and indeed even if almost all examples are positive. This sharply contradicts MacAskill's earlier claim that the value of a class of interventions can be approximated by the value of its best membe...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Counterproductive Altruism: The Other Heavy Tail, published by Vasco Grilo on March 1, 2023 on The Effective Altruism Forum.This is a linkpost to the article Counterproductive Altruism: The Other Heavy Tail from Daniel Kokotajlo and Alexandra Oprea. Some excerpts are below. I also include a section at the end with some hot takes regarding possibly counterproductive altruism.AbstractFirst, we argue that the appeal of effective altruism (henceforth, EA) depends significantly on a certain empirical premise we call the Heavy Tail Hypothesis (HTH), which characterizes the probability distribution of opportunities for doing good. Roughly, the HTH implies that the best causes, interventions, or charities produce orders of magnitude greater good than the average ones, constituting a substantial portion of the total amount of good caused by altruistic interventions. Next, we canvass arguments EAs have given for the existence of a positive (or “right”) heavy tail and argue that they can also apply in support of a negative (or “left”) heavy tail where counterproductive interventions do orders of magnitude more harm than ineffective or moderately harmful ones. Incorporating the other heavy tail of the distribution has important implications for the core activities of EA: effectiveness research, cause prioritization, and the assessment of altruistic interventions.It also informs the debate surrounding the institutional critique of EA.IV Implications of the Heavy Right Tail for AltruismAssume that the probability distribution of charitable interventions has a heavy-right tail (for example, like the power law described in the previous section). This means that your expectation about a possible new or unassessed charitable intervention should include the large values described above with a relatively high probability. It also means that existing charitable interventions whose effectiveness is known (or estimated with a high degree of certainty) will include interventions differing in effectiveness by orders of magnitude. We contend that this assumption justifies well-known aspects of EA practice such as (1) effectiveness research and cause prioritization, (2) “hits-based-giving,” and (3) skepticism about historical averages.V Implications of the Heavy Left Tail for AltruismWhat if the probability distribution of altruistic interventions includes both a left and a right heavy tail? In this case, we cannot assume either that (1) one's altruistic interventions are expected to have at worst a value of zero (i.e. to be bounded on the left side) or (2) that the probability that a charitable intervention is counterproductive or harmful approaches zero very rapidly.Downside Risk ResearchMany catastrophic interventions — whether altruistic or not — generate large amounts of (intentional or unintentional) harm. When someone in the world is engaging in an intervention that is likely to end up in the heavy left tail, there is a corresponding opportunity for us to do good by preventing them. This would itself represent an altruistic intervention in the heavy right tail (i.e. one responsible for enormous benefits). The existence of the heavy-left tail therefore provides even stronger justification for the prioritization research preferred by EAs.Assessing Types of Interventions Requires Both TailsAnother conclusion we draw from the revised HTH is that the value of a class of interventions should be estimated by considering the worst as well as the best. Following such analysis, a class of interventions could turn out to be net-negative even if there are some very prominent positive examples and indeed even if almost all examples are positive. This sharply contradicts MacAskill's earlier claim that the value of a class of interventions can be approximated by the value of its best membe...]]>
Vasco Grilo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:45 None full 5071
shzSEEDywdh2PPPMy_NL_EA_EA EA - Why I love effective altruism by Michelle Hutchinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I love effective altruism, published by Michelle Hutchinson on March 1, 2023 on The Effective Altruism Forum.I’ve found it a bit tough to feel as excited as I usually am about effective altruism and our community recently. I think some others have too.So I wanted to remind myself why I love EA so dearly. I thought hearing my take might also help any others in the community feeling similarly.There’s a lot I want to say about why I love EA. But really, it all comes down to the people. Figuring out how I can best help others can be a difficult, messy, and emotionally draining endeavour. But it’s far easier to do alongside like-minded folk who care about the same goal. Thankfully, I found these people in the EA community.Helping me live up to my valuesBefore I came across effective altruism, I wasn’t really enacting my values.I studied ethics at university and realised I was a utilitarian. I used to do bits and pieces of charity work, such as volunteering at Oxfam. But I donated very little of my money. I wasn’t thinking about how to find a career that would significantly help others.I didn’t have any good reason for my ethical omissions; it just didn’t seem like other people did them, so I didn’t either.Now I’m a Giving What We Can member and have been fulfilling my pledge every year for a decade. I’m still not as good as I’d like to be about thinking broadly and proactively about how to find the most impactful career. But prioritising impact is now a significant factor in how I figure out what to do with my 80,000 hours.I made these major shifts in my life, I think, because I met other people who were really living out their values. When I was surrounded by people who typically give something like 10% of their income to charity rather than 3%, my sense of how much was reasonable to give started to change. When I was directly asked about my own life choices, I stopped and thought seriously about what I could and should do differently.In addition to these significant life changes, members of the EA community help me live up to my values in small and large ways every day. Sometimes, they give me constructive feedback so I can be more effective. Sometimes, I get a clear-sighted debugging of a challenge I’m facing — whether that’s a concrete work question or a messy motivational issue.Sometimes the people around me just set a positive example. For instance, it’s much easier for me to work a few extra hours on a Saturday in the service of helping others when I’m alongside someone else doing the same.Getting supportGiven what I said above, I think I’d have expected that the EA community would feel pretty pressureful. And it’s not always easy. But the overwhelming majority of the time, I don’t feel pressured by the people around me; I feel they share my understanding that the world is hard, and that it’s hard in very different ways for different people.I honestly never cease to be impressed by the extent to which the people around me work hard to reach high standards, without demanding others do exactly the same. For example:One of my friends works around 12 hours a day, mostly 6 days a week. But he’s never anything but appreciative of how much I work, even though it’s significantly less.I’ve often expected to be judged for being an omnivore, given that my office is almost entirely vegn. But far from that, people go out of their way to ensure I have food I’m happy to eat.When I first thought I might be pregnant, I felt a bit sheepish telling my friends about it, given that my confident prediction was that having a child would reduce my lifetime impact. But every single person showed genuine happiness for me.This feels like a community where we can each be striving — but also be comfortable setting our limits, knowing that others will be genuinely, gladly ...]]>
Michelle Hutchinson https://forum.effectivealtruism.org/posts/shzSEEDywdh2PPPMy/why-i-love-effective-altruism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I love effective altruism, published by Michelle Hutchinson on March 1, 2023 on The Effective Altruism Forum.I’ve found it a bit tough to feel as excited as I usually am about effective altruism and our community recently. I think some others have too.So I wanted to remind myself why I love EA so dearly. I thought hearing my take might also help any others in the community feeling similarly.There’s a lot I want to say about why I love EA. But really, it all comes down to the people. Figuring out how I can best help others can be a difficult, messy, and emotionally draining endeavour. But it’s far easier to do alongside like-minded folk who care about the same goal. Thankfully, I found these people in the EA community.Helping me live up to my valuesBefore I came across effective altruism, I wasn’t really enacting my values.I studied ethics at university and realised I was a utilitarian. I used to do bits and pieces of charity work, such as volunteering at Oxfam. But I donated very little of my money. I wasn’t thinking about how to find a career that would significantly help others.I didn’t have any good reason for my ethical omissions; it just didn’t seem like other people did them, so I didn’t either.Now I’m a Giving What We Can member and have been fulfilling my pledge every year for a decade. I’m still not as good as I’d like to be about thinking broadly and proactively about how to find the most impactful career. But prioritising impact is now a significant factor in how I figure out what to do with my 80,000 hours.I made these major shifts in my life, I think, because I met other people who were really living out their values. When I was surrounded by people who typically give something like 10% of their income to charity rather than 3%, my sense of how much was reasonable to give started to change. When I was directly asked about my own life choices, I stopped and thought seriously about what I could and should do differently.In addition to these significant life changes, members of the EA community help me live up to my values in small and large ways every day. Sometimes, they give me constructive feedback so I can be more effective. Sometimes, I get a clear-sighted debugging of a challenge I’m facing — whether that’s a concrete work question or a messy motivational issue.Sometimes the people around me just set a positive example. For instance, it’s much easier for me to work a few extra hours on a Saturday in the service of helping others when I’m alongside someone else doing the same.Getting supportGiven what I said above, I think I’d have expected that the EA community would feel pretty pressureful. And it’s not always easy. But the overwhelming majority of the time, I don’t feel pressured by the people around me; I feel they share my understanding that the world is hard, and that it’s hard in very different ways for different people.I honestly never cease to be impressed by the extent to which the people around me work hard to reach high standards, without demanding others do exactly the same. For example:One of my friends works around 12 hours a day, mostly 6 days a week. But he’s never anything but appreciative of how much I work, even though it’s significantly less.I’ve often expected to be judged for being an omnivore, given that my office is almost entirely vegn. But far from that, people go out of their way to ensure I have food I’m happy to eat.When I first thought I might be pregnant, I felt a bit sheepish telling my friends about it, given that my confident prediction was that having a child would reduce my lifetime impact. But every single person showed genuine happiness for me.This feels like a community where we can each be striving — but also be comfortable setting our limits, knowing that others will be genuinely, gladly ...]]>
Wed, 01 Mar 2023 10:11:55 +0000 EA - Why I love effective altruism by Michelle Hutchinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I love effective altruism, published by Michelle Hutchinson on March 1, 2023 on The Effective Altruism Forum.I’ve found it a bit tough to feel as excited as I usually am about effective altruism and our community recently. I think some others have too.So I wanted to remind myself why I love EA so dearly. I thought hearing my take might also help any others in the community feeling similarly.There’s a lot I want to say about why I love EA. But really, it all comes down to the people. Figuring out how I can best help others can be a difficult, messy, and emotionally draining endeavour. But it’s far easier to do alongside like-minded folk who care about the same goal. Thankfully, I found these people in the EA community.Helping me live up to my valuesBefore I came across effective altruism, I wasn’t really enacting my values.I studied ethics at university and realised I was a utilitarian. I used to do bits and pieces of charity work, such as volunteering at Oxfam. But I donated very little of my money. I wasn’t thinking about how to find a career that would significantly help others.I didn’t have any good reason for my ethical omissions; it just didn’t seem like other people did them, so I didn’t either.Now I’m a Giving What We Can member and have been fulfilling my pledge every year for a decade. I’m still not as good as I’d like to be about thinking broadly and proactively about how to find the most impactful career. But prioritising impact is now a significant factor in how I figure out what to do with my 80,000 hours.I made these major shifts in my life, I think, because I met other people who were really living out their values. When I was surrounded by people who typically give something like 10% of their income to charity rather than 3%, my sense of how much was reasonable to give started to change. When I was directly asked about my own life choices, I stopped and thought seriously about what I could and should do differently.In addition to these significant life changes, members of the EA community help me live up to my values in small and large ways every day. Sometimes, they give me constructive feedback so I can be more effective. Sometimes, I get a clear-sighted debugging of a challenge I’m facing — whether that’s a concrete work question or a messy motivational issue.Sometimes the people around me just set a positive example. For instance, it’s much easier for me to work a few extra hours on a Saturday in the service of helping others when I’m alongside someone else doing the same.Getting supportGiven what I said above, I think I’d have expected that the EA community would feel pretty pressureful. And it’s not always easy. But the overwhelming majority of the time, I don’t feel pressured by the people around me; I feel they share my understanding that the world is hard, and that it’s hard in very different ways for different people.I honestly never cease to be impressed by the extent to which the people around me work hard to reach high standards, without demanding others do exactly the same. For example:One of my friends works around 12 hours a day, mostly 6 days a week. But he’s never anything but appreciative of how much I work, even though it’s significantly less.I’ve often expected to be judged for being an omnivore, given that my office is almost entirely vegn. But far from that, people go out of their way to ensure I have food I’m happy to eat.When I first thought I might be pregnant, I felt a bit sheepish telling my friends about it, given that my confident prediction was that having a child would reduce my lifetime impact. But every single person showed genuine happiness for me.This feels like a community where we can each be striving — but also be comfortable setting our limits, knowing that others will be genuinely, gladly ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I love effective altruism, published by Michelle Hutchinson on March 1, 2023 on The Effective Altruism Forum.I’ve found it a bit tough to feel as excited as I usually am about effective altruism and our community recently. I think some others have too.So I wanted to remind myself why I love EA so dearly. I thought hearing my take might also help any others in the community feeling similarly.There’s a lot I want to say about why I love EA. But really, it all comes down to the people. Figuring out how I can best help others can be a difficult, messy, and emotionally draining endeavour. But it’s far easier to do alongside like-minded folk who care about the same goal. Thankfully, I found these people in the EA community.Helping me live up to my valuesBefore I came across effective altruism, I wasn’t really enacting my values.I studied ethics at university and realised I was a utilitarian. I used to do bits and pieces of charity work, such as volunteering at Oxfam. But I donated very little of my money. I wasn’t thinking about how to find a career that would significantly help others.I didn’t have any good reason for my ethical omissions; it just didn’t seem like other people did them, so I didn’t either.Now I’m a Giving What We Can member and have been fulfilling my pledge every year for a decade. I’m still not as good as I’d like to be about thinking broadly and proactively about how to find the most impactful career. But prioritising impact is now a significant factor in how I figure out what to do with my 80,000 hours.I made these major shifts in my life, I think, because I met other people who were really living out their values. When I was surrounded by people who typically give something like 10% of their income to charity rather than 3%, my sense of how much was reasonable to give started to change. When I was directly asked about my own life choices, I stopped and thought seriously about what I could and should do differently.In addition to these significant life changes, members of the EA community help me live up to my values in small and large ways every day. Sometimes, they give me constructive feedback so I can be more effective. Sometimes, I get a clear-sighted debugging of a challenge I’m facing — whether that’s a concrete work question or a messy motivational issue.Sometimes the people around me just set a positive example. For instance, it’s much easier for me to work a few extra hours on a Saturday in the service of helping others when I’m alongside someone else doing the same.Getting supportGiven what I said above, I think I’d have expected that the EA community would feel pretty pressureful. And it’s not always easy. But the overwhelming majority of the time, I don’t feel pressured by the people around me; I feel they share my understanding that the world is hard, and that it’s hard in very different ways for different people.I honestly never cease to be impressed by the extent to which the people around me work hard to reach high standards, without demanding others do exactly the same. For example:One of my friends works around 12 hours a day, mostly 6 days a week. But he’s never anything but appreciative of how much I work, even though it’s significantly less.I’ve often expected to be judged for being an omnivore, given that my office is almost entirely vegn. But far from that, people go out of their way to ensure I have food I’m happy to eat.When I first thought I might be pregnant, I felt a bit sheepish telling my friends about it, given that my confident prediction was that having a child would reduce my lifetime impact. But every single person showed genuine happiness for me.This feels like a community where we can each be striving — but also be comfortable setting our limits, knowing that others will be genuinely, gladly ...]]>
Michelle Hutchinson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:36 None full 5065
uk4QhagWD8mj8rnst_NL_EA_EA EA - Enemies vs Malefactors by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Enemies vs Malefactors, published by So8res on February 28, 2023 on The Effective Altruism Forum.Short versionHarmful people often lack explicit malicious intent. It’s worth deploying your social or community defenses against them anyway. I recommend focusing less on intent and more on patterns of harm.(Credit to my explicit articulation of this idea goes in large part to Aella, and also in part to Oliver Habryka.)Long versionA few times now, I have been part of a community reeling from apparent bad behavior from one of its own. In the two most dramatic cases, the communities seemed pretty split on the question of whether the actor had ill intent.A recent and very public case was the one of Sam Bankman-Fried, where many seem interested in the question of Sam's mental state vis-a-vis EA. (I recall seeing this in the responses to Kelsey's interview, but haven't done the virtuous thing of digging up links.)It seems to me that local theories of Sam's mental state cluster along lines very roughly like (these are phrased somewhat hyperbolically):Sam was explicitly malicious. He was intentionally using the EA movement for the purpose of status and reputation-laundering, while personally enriching himself. If you could read his mind, you would see him making conscious plans to extract resources from people he thought of as ignorant fools, in terminology that would clearly relinquish all his claims to sympathy from the audience. If there were a camera, he would have turned to it and said "I'm going to exploit these EAs for everything they're worth."Sam was committed to doing good. He may have been ruthless and exploitative towards various individuals in pursuit of his utilitarian goals, but he did not intentionally set out to commit fraud. He didn't conceptualize his actions as exploitative. He tried to make money while providing risky financial assets to the masses, and foolishly disregarded regulations, and may have committed technical crimes, but he was trying to do good, and to put the resources he earned thereby towards doing even more good.One hypothesis I have for why people care so much about some distinction like this is that humans have social/mental modes for dealing with people who are explicitly malicious towards them, who are explicitly faking cordiality in attempts to extract some resource. And these are pretty different from their modes of dealing with someone who's merely being reckless or foolish. So they care a lot about the mental state behind the act.(As an example, various crimes legally require mens rea, lit. “guilty mind”, in order to be criminal. Humans care about this stuff enough to bake it into their legal codes.)A third theory of Sam’s mental state that I have—that I credit in part to Oliver Habryka—is that reality just doesn’t cleanly classify into either maliciousness or negligence.On this theory, most people who are in effect trying to exploit resources from your community, won't be explicitly malicious, not even in the privacy of their own minds. (Perhaps because the content of one’s own mind is just not all that private; humans are in fact pretty good at inferring intent from a bunch of subtle signals.) Someone who could be exploiting your community, will often act so as to exploit your community, while internally telling themselves lots of stories where what they're doing is justified and fine.Those stories might include significant cognitive distortion, delusion, recklessness, and/or negligence, and some perfectly reasonable explanations that just don't quite fit together with the other perfectly reasonable explanations they have in other contexts. They might be aware of some of their flaws, and explicitly acknowledge those flaws as things they have to work on. They might be legitimately internally motivated by good intent, ev...]]>
So8res https://forum.effectivealtruism.org/posts/uk4QhagWD8mj8rnst/enemies-vs-malefactors Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Enemies vs Malefactors, published by So8res on February 28, 2023 on The Effective Altruism Forum.Short versionHarmful people often lack explicit malicious intent. It’s worth deploying your social or community defenses against them anyway. I recommend focusing less on intent and more on patterns of harm.(Credit to my explicit articulation of this idea goes in large part to Aella, and also in part to Oliver Habryka.)Long versionA few times now, I have been part of a community reeling from apparent bad behavior from one of its own. In the two most dramatic cases, the communities seemed pretty split on the question of whether the actor had ill intent.A recent and very public case was the one of Sam Bankman-Fried, where many seem interested in the question of Sam's mental state vis-a-vis EA. (I recall seeing this in the responses to Kelsey's interview, but haven't done the virtuous thing of digging up links.)It seems to me that local theories of Sam's mental state cluster along lines very roughly like (these are phrased somewhat hyperbolically):Sam was explicitly malicious. He was intentionally using the EA movement for the purpose of status and reputation-laundering, while personally enriching himself. If you could read his mind, you would see him making conscious plans to extract resources from people he thought of as ignorant fools, in terminology that would clearly relinquish all his claims to sympathy from the audience. If there were a camera, he would have turned to it and said "I'm going to exploit these EAs for everything they're worth."Sam was committed to doing good. He may have been ruthless and exploitative towards various individuals in pursuit of his utilitarian goals, but he did not intentionally set out to commit fraud. He didn't conceptualize his actions as exploitative. He tried to make money while providing risky financial assets to the masses, and foolishly disregarded regulations, and may have committed technical crimes, but he was trying to do good, and to put the resources he earned thereby towards doing even more good.One hypothesis I have for why people care so much about some distinction like this is that humans have social/mental modes for dealing with people who are explicitly malicious towards them, who are explicitly faking cordiality in attempts to extract some resource. And these are pretty different from their modes of dealing with someone who's merely being reckless or foolish. So they care a lot about the mental state behind the act.(As an example, various crimes legally require mens rea, lit. “guilty mind”, in order to be criminal. Humans care about this stuff enough to bake it into their legal codes.)A third theory of Sam’s mental state that I have—that I credit in part to Oliver Habryka—is that reality just doesn’t cleanly classify into either maliciousness or negligence.On this theory, most people who are in effect trying to exploit resources from your community, won't be explicitly malicious, not even in the privacy of their own minds. (Perhaps because the content of one’s own mind is just not all that private; humans are in fact pretty good at inferring intent from a bunch of subtle signals.) Someone who could be exploiting your community, will often act so as to exploit your community, while internally telling themselves lots of stories where what they're doing is justified and fine.Those stories might include significant cognitive distortion, delusion, recklessness, and/or negligence, and some perfectly reasonable explanations that just don't quite fit together with the other perfectly reasonable explanations they have in other contexts. They might be aware of some of their flaws, and explicitly acknowledge those flaws as things they have to work on. They might be legitimately internally motivated by good intent, ev...]]>
Wed, 01 Mar 2023 03:09:54 +0000 EA - Enemies vs Malefactors by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Enemies vs Malefactors, published by So8res on February 28, 2023 on The Effective Altruism Forum.Short versionHarmful people often lack explicit malicious intent. It’s worth deploying your social or community defenses against them anyway. I recommend focusing less on intent and more on patterns of harm.(Credit to my explicit articulation of this idea goes in large part to Aella, and also in part to Oliver Habryka.)Long versionA few times now, I have been part of a community reeling from apparent bad behavior from one of its own. In the two most dramatic cases, the communities seemed pretty split on the question of whether the actor had ill intent.A recent and very public case was the one of Sam Bankman-Fried, where many seem interested in the question of Sam's mental state vis-a-vis EA. (I recall seeing this in the responses to Kelsey's interview, but haven't done the virtuous thing of digging up links.)It seems to me that local theories of Sam's mental state cluster along lines very roughly like (these are phrased somewhat hyperbolically):Sam was explicitly malicious. He was intentionally using the EA movement for the purpose of status and reputation-laundering, while personally enriching himself. If you could read his mind, you would see him making conscious plans to extract resources from people he thought of as ignorant fools, in terminology that would clearly relinquish all his claims to sympathy from the audience. If there were a camera, he would have turned to it and said "I'm going to exploit these EAs for everything they're worth."Sam was committed to doing good. He may have been ruthless and exploitative towards various individuals in pursuit of his utilitarian goals, but he did not intentionally set out to commit fraud. He didn't conceptualize his actions as exploitative. He tried to make money while providing risky financial assets to the masses, and foolishly disregarded regulations, and may have committed technical crimes, but he was trying to do good, and to put the resources he earned thereby towards doing even more good.One hypothesis I have for why people care so much about some distinction like this is that humans have social/mental modes for dealing with people who are explicitly malicious towards them, who are explicitly faking cordiality in attempts to extract some resource. And these are pretty different from their modes of dealing with someone who's merely being reckless or foolish. So they care a lot about the mental state behind the act.(As an example, various crimes legally require mens rea, lit. “guilty mind”, in order to be criminal. Humans care about this stuff enough to bake it into their legal codes.)A third theory of Sam’s mental state that I have—that I credit in part to Oliver Habryka—is that reality just doesn’t cleanly classify into either maliciousness or negligence.On this theory, most people who are in effect trying to exploit resources from your community, won't be explicitly malicious, not even in the privacy of their own minds. (Perhaps because the content of one’s own mind is just not all that private; humans are in fact pretty good at inferring intent from a bunch of subtle signals.) Someone who could be exploiting your community, will often act so as to exploit your community, while internally telling themselves lots of stories where what they're doing is justified and fine.Those stories might include significant cognitive distortion, delusion, recklessness, and/or negligence, and some perfectly reasonable explanations that just don't quite fit together with the other perfectly reasonable explanations they have in other contexts. They might be aware of some of their flaws, and explicitly acknowledge those flaws as things they have to work on. They might be legitimately internally motivated by good intent, ev...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Enemies vs Malefactors, published by So8res on February 28, 2023 on The Effective Altruism Forum.Short versionHarmful people often lack explicit malicious intent. It’s worth deploying your social or community defenses against them anyway. I recommend focusing less on intent and more on patterns of harm.(Credit to my explicit articulation of this idea goes in large part to Aella, and also in part to Oliver Habryka.)Long versionA few times now, I have been part of a community reeling from apparent bad behavior from one of its own. In the two most dramatic cases, the communities seemed pretty split on the question of whether the actor had ill intent.A recent and very public case was the one of Sam Bankman-Fried, where many seem interested in the question of Sam's mental state vis-a-vis EA. (I recall seeing this in the responses to Kelsey's interview, but haven't done the virtuous thing of digging up links.)It seems to me that local theories of Sam's mental state cluster along lines very roughly like (these are phrased somewhat hyperbolically):Sam was explicitly malicious. He was intentionally using the EA movement for the purpose of status and reputation-laundering, while personally enriching himself. If you could read his mind, you would see him making conscious plans to extract resources from people he thought of as ignorant fools, in terminology that would clearly relinquish all his claims to sympathy from the audience. If there were a camera, he would have turned to it and said "I'm going to exploit these EAs for everything they're worth."Sam was committed to doing good. He may have been ruthless and exploitative towards various individuals in pursuit of his utilitarian goals, but he did not intentionally set out to commit fraud. He didn't conceptualize his actions as exploitative. He tried to make money while providing risky financial assets to the masses, and foolishly disregarded regulations, and may have committed technical crimes, but he was trying to do good, and to put the resources he earned thereby towards doing even more good.One hypothesis I have for why people care so much about some distinction like this is that humans have social/mental modes for dealing with people who are explicitly malicious towards them, who are explicitly faking cordiality in attempts to extract some resource. And these are pretty different from their modes of dealing with someone who's merely being reckless or foolish. So they care a lot about the mental state behind the act.(As an example, various crimes legally require mens rea, lit. “guilty mind”, in order to be criminal. Humans care about this stuff enough to bake it into their legal codes.)A third theory of Sam’s mental state that I have—that I credit in part to Oliver Habryka—is that reality just doesn’t cleanly classify into either maliciousness or negligence.On this theory, most people who are in effect trying to exploit resources from your community, won't be explicitly malicious, not even in the privacy of their own minds. (Perhaps because the content of one’s own mind is just not all that private; humans are in fact pretty good at inferring intent from a bunch of subtle signals.) Someone who could be exploiting your community, will often act so as to exploit your community, while internally telling themselves lots of stories where what they're doing is justified and fine.Those stories might include significant cognitive distortion, delusion, recklessness, and/or negligence, and some perfectly reasonable explanations that just don't quite fit together with the other perfectly reasonable explanations they have in other contexts. They might be aware of some of their flaws, and explicitly acknowledge those flaws as things they have to work on. They might be legitimately internally motivated by good intent, ev...]]>
So8res https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:28 None full 5066
iqDt8YFLjvtjBPyv6_NL_EA_EA EA - Some Things I Heard about AI Governance at EAG by utilistrutil Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Things I Heard about AI Governance at EAG, published by utilistrutil on February 28, 2023 on The Effective Altruism Forum.IntroPrior to this EAG, I had only encountered fragments of proposals for AI governance: "something something national compute library," "something something crunch time," "something something academia vs industry," and that was about the size of it. I'd also heard the explicit claim that AI governance is devoid of policy proposals (especially vis-a-vis biosecurity), and I'd read Eliezer's infamous EAG DC Slack statement:My model of how AI policy works is that everyone in the field is there because they don't understand which technical problems are hard, or which political problems are impossible, or both . . .At this EAG, a more charitable picture of AI governance began to cohere for me. I was setting about recalling and synthesizing what I learned, and I realized I should share—both to provide a data point and to solicit input. Please help fill out my understanding of the area, refer me to information, and correct my inaccuracies!Eight one-on-ones contributed to this picture of the governance proposal landscape, along with Katja's and Beth's presentations, Buck's and Richard Ngo's office hours, and eavesdropping on Eliezer corrupting the youth of EAthens. I'm sure I only internalized a small fraction of the relevant content in these talks, so let me know about points I overlooked. (My experience was that my comprehension and retention of these points improved over time: as my mental model expanded, new ideas were more likely to connect to it.) The post is also sprinkled with my own speculations. I'm omitting trad concerns like stop-the-bots-from-spreading-misinformation.Crunch Time FriendsThe idea: Help aligned people achieve positions in government or make allies with people in those positions. When shit hits the fan, we activate our friends in high places, who will swiftly smash and unplug.My problem: This story, even the less-facetious versions that circulate, strikes me as woefully under-characterized. Which positions wield the relevant influence, and are timelines long enough for EAs to enter those positions? How exactly do we propose they react? Additionally, FTX probably updated us away from deceptive long-con type strategies.Residual questions: Is there a real and not-ridiculous name for this strategy?Slow Down ChinaThe chip export controls were so so good. A further move would be to reduce the barriers to high-skill immigration from China to induce brain drain. Safety field-building is proceeding, but slowly. China is sufficiently far behind that these are not the highest priorities.Compute RegulationsI'm told there are many proposals in this category. They range in enforcement from "labs have to report compute usage" to "labs are assigned a unique key to access a set amount of compute and then have to request a new key" to "labs face brick wall limits on compute levels." Algorithmic progress motivates the need for an "effective compute" metric, but measuring compute is surprisingly difficult as it is.A few months ago I heard that another lever—in addition to regulating industry—is improving the ratio of compute in academia vs industry. Academic models receive faster diffusion and face greater scrutiny, but the desirability of these features depends on your perspective. I'm told this argument is subject to "approximately 17 million caveats and question marks."Evaluations & AuditsThe idea: Develop benchmarks for capabilities and design evaluations to assess whether a model possesses those capabilities. Conditional on a capability, evaluate for alignment benchmarks. Audits could verify evaluations.Industry self-regulation: Three labs dominate the industry, an arrangement that promises to continue for a while, facilit...]]>
utilistrutil https://forum.effectivealtruism.org/posts/iqDt8YFLjvtjBPyv6/some-things-i-heard-about-ai-governance-at-eag Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Things I Heard about AI Governance at EAG, published by utilistrutil on February 28, 2023 on The Effective Altruism Forum.IntroPrior to this EAG, I had only encountered fragments of proposals for AI governance: "something something national compute library," "something something crunch time," "something something academia vs industry," and that was about the size of it. I'd also heard the explicit claim that AI governance is devoid of policy proposals (especially vis-a-vis biosecurity), and I'd read Eliezer's infamous EAG DC Slack statement:My model of how AI policy works is that everyone in the field is there because they don't understand which technical problems are hard, or which political problems are impossible, or both . . .At this EAG, a more charitable picture of AI governance began to cohere for me. I was setting about recalling and synthesizing what I learned, and I realized I should share—both to provide a data point and to solicit input. Please help fill out my understanding of the area, refer me to information, and correct my inaccuracies!Eight one-on-ones contributed to this picture of the governance proposal landscape, along with Katja's and Beth's presentations, Buck's and Richard Ngo's office hours, and eavesdropping on Eliezer corrupting the youth of EAthens. I'm sure I only internalized a small fraction of the relevant content in these talks, so let me know about points I overlooked. (My experience was that my comprehension and retention of these points improved over time: as my mental model expanded, new ideas were more likely to connect to it.) The post is also sprinkled with my own speculations. I'm omitting trad concerns like stop-the-bots-from-spreading-misinformation.Crunch Time FriendsThe idea: Help aligned people achieve positions in government or make allies with people in those positions. When shit hits the fan, we activate our friends in high places, who will swiftly smash and unplug.My problem: This story, even the less-facetious versions that circulate, strikes me as woefully under-characterized. Which positions wield the relevant influence, and are timelines long enough for EAs to enter those positions? How exactly do we propose they react? Additionally, FTX probably updated us away from deceptive long-con type strategies.Residual questions: Is there a real and not-ridiculous name for this strategy?Slow Down ChinaThe chip export controls were so so good. A further move would be to reduce the barriers to high-skill immigration from China to induce brain drain. Safety field-building is proceeding, but slowly. China is sufficiently far behind that these are not the highest priorities.Compute RegulationsI'm told there are many proposals in this category. They range in enforcement from "labs have to report compute usage" to "labs are assigned a unique key to access a set amount of compute and then have to request a new key" to "labs face brick wall limits on compute levels." Algorithmic progress motivates the need for an "effective compute" metric, but measuring compute is surprisingly difficult as it is.A few months ago I heard that another lever—in addition to regulating industry—is improving the ratio of compute in academia vs industry. Academic models receive faster diffusion and face greater scrutiny, but the desirability of these features depends on your perspective. I'm told this argument is subject to "approximately 17 million caveats and question marks."Evaluations & AuditsThe idea: Develop benchmarks for capabilities and design evaluations to assess whether a model possesses those capabilities. Conditional on a capability, evaluate for alignment benchmarks. Audits could verify evaluations.Industry self-regulation: Three labs dominate the industry, an arrangement that promises to continue for a while, facilit...]]>
Wed, 01 Mar 2023 00:45:57 +0000 EA - Some Things I Heard about AI Governance at EAG by utilistrutil Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Things I Heard about AI Governance at EAG, published by utilistrutil on February 28, 2023 on The Effective Altruism Forum.IntroPrior to this EAG, I had only encountered fragments of proposals for AI governance: "something something national compute library," "something something crunch time," "something something academia vs industry," and that was about the size of it. I'd also heard the explicit claim that AI governance is devoid of policy proposals (especially vis-a-vis biosecurity), and I'd read Eliezer's infamous EAG DC Slack statement:My model of how AI policy works is that everyone in the field is there because they don't understand which technical problems are hard, or which political problems are impossible, or both . . .At this EAG, a more charitable picture of AI governance began to cohere for me. I was setting about recalling and synthesizing what I learned, and I realized I should share—both to provide a data point and to solicit input. Please help fill out my understanding of the area, refer me to information, and correct my inaccuracies!Eight one-on-ones contributed to this picture of the governance proposal landscape, along with Katja's and Beth's presentations, Buck's and Richard Ngo's office hours, and eavesdropping on Eliezer corrupting the youth of EAthens. I'm sure I only internalized a small fraction of the relevant content in these talks, so let me know about points I overlooked. (My experience was that my comprehension and retention of these points improved over time: as my mental model expanded, new ideas were more likely to connect to it.) The post is also sprinkled with my own speculations. I'm omitting trad concerns like stop-the-bots-from-spreading-misinformation.Crunch Time FriendsThe idea: Help aligned people achieve positions in government or make allies with people in those positions. When shit hits the fan, we activate our friends in high places, who will swiftly smash and unplug.My problem: This story, even the less-facetious versions that circulate, strikes me as woefully under-characterized. Which positions wield the relevant influence, and are timelines long enough for EAs to enter those positions? How exactly do we propose they react? Additionally, FTX probably updated us away from deceptive long-con type strategies.Residual questions: Is there a real and not-ridiculous name for this strategy?Slow Down ChinaThe chip export controls were so so good. A further move would be to reduce the barriers to high-skill immigration from China to induce brain drain. Safety field-building is proceeding, but slowly. China is sufficiently far behind that these are not the highest priorities.Compute RegulationsI'm told there are many proposals in this category. They range in enforcement from "labs have to report compute usage" to "labs are assigned a unique key to access a set amount of compute and then have to request a new key" to "labs face brick wall limits on compute levels." Algorithmic progress motivates the need for an "effective compute" metric, but measuring compute is surprisingly difficult as it is.A few months ago I heard that another lever—in addition to regulating industry—is improving the ratio of compute in academia vs industry. Academic models receive faster diffusion and face greater scrutiny, but the desirability of these features depends on your perspective. I'm told this argument is subject to "approximately 17 million caveats and question marks."Evaluations & AuditsThe idea: Develop benchmarks for capabilities and design evaluations to assess whether a model possesses those capabilities. Conditional on a capability, evaluate for alignment benchmarks. Audits could verify evaluations.Industry self-regulation: Three labs dominate the industry, an arrangement that promises to continue for a while, facilit...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Things I Heard about AI Governance at EAG, published by utilistrutil on February 28, 2023 on The Effective Altruism Forum.IntroPrior to this EAG, I had only encountered fragments of proposals for AI governance: "something something national compute library," "something something crunch time," "something something academia vs industry," and that was about the size of it. I'd also heard the explicit claim that AI governance is devoid of policy proposals (especially vis-a-vis biosecurity), and I'd read Eliezer's infamous EAG DC Slack statement:My model of how AI policy works is that everyone in the field is there because they don't understand which technical problems are hard, or which political problems are impossible, or both . . .At this EAG, a more charitable picture of AI governance began to cohere for me. I was setting about recalling and synthesizing what I learned, and I realized I should share—both to provide a data point and to solicit input. Please help fill out my understanding of the area, refer me to information, and correct my inaccuracies!Eight one-on-ones contributed to this picture of the governance proposal landscape, along with Katja's and Beth's presentations, Buck's and Richard Ngo's office hours, and eavesdropping on Eliezer corrupting the youth of EAthens. I'm sure I only internalized a small fraction of the relevant content in these talks, so let me know about points I overlooked. (My experience was that my comprehension and retention of these points improved over time: as my mental model expanded, new ideas were more likely to connect to it.) The post is also sprinkled with my own speculations. I'm omitting trad concerns like stop-the-bots-from-spreading-misinformation.Crunch Time FriendsThe idea: Help aligned people achieve positions in government or make allies with people in those positions. When shit hits the fan, we activate our friends in high places, who will swiftly smash and unplug.My problem: This story, even the less-facetious versions that circulate, strikes me as woefully under-characterized. Which positions wield the relevant influence, and are timelines long enough for EAs to enter those positions? How exactly do we propose they react? Additionally, FTX probably updated us away from deceptive long-con type strategies.Residual questions: Is there a real and not-ridiculous name for this strategy?Slow Down ChinaThe chip export controls were so so good. A further move would be to reduce the barriers to high-skill immigration from China to induce brain drain. Safety field-building is proceeding, but slowly. China is sufficiently far behind that these are not the highest priorities.Compute RegulationsI'm told there are many proposals in this category. They range in enforcement from "labs have to report compute usage" to "labs are assigned a unique key to access a set amount of compute and then have to request a new key" to "labs face brick wall limits on compute levels." Algorithmic progress motivates the need for an "effective compute" metric, but measuring compute is surprisingly difficult as it is.A few months ago I heard that another lever—in addition to regulating industry—is improving the ratio of compute in academia vs industry. Academic models receive faster diffusion and face greater scrutiny, but the desirability of these features depends on your perspective. I'm told this argument is subject to "approximately 17 million caveats and question marks."Evaluations & AuditsThe idea: Develop benchmarks for capabilities and design evaluations to assess whether a model possesses those capabilities. Conditional on a capability, evaluate for alignment benchmarks. Audits could verify evaluations.Industry self-regulation: Three labs dominate the industry, an arrangement that promises to continue for a while, facilit...]]>
utilistrutil https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:10 None full 5079
WWhSCnw5xdrdKjHeS_NL_EA_EA EA - Conference on EA hubs and offices, expression of interest by Tereza Flidrova Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conference on EA hubs and offices, expression of interest, published by Tereza Flidrova on February 28, 2023 on The Effective Altruism Forum.Express interest in attending here - it should take 5 minutes or less to complete the form!We want to host an interactive workshop/conference focused on EA hubs, offices, co-working spaces, and fellowships. The purpose of this post is to gauge interest and allow potential participants to actively shape the agenda to focus on the most relevant and beneficial topics.By hosting a conference that brings together attendees with wide-ranging and overlapping experience launching/managing hubs, offices, and other place-based community nodes (as well as those actively planning to do so), we aim to:Facilitate coordination, collaboration and professional integration between significant individuals and organisations (both inside and outside of the EA community) with expertise in creating thriving spaces;Create a comprehensive set of multi-format materials documenting previous learnings and best practices, such as podcasts, EA Forum articles, templates, and guides;Use an emerging EA hub as a live case study to workshop real-life problems and considerations faced when designing and building new hubs; andPropose and iterate theories of change to improve the strategic spatial growth of EA.People who we think would be a great fit for the conference:Professionals - people with professional background in designing spacesOrganisers - people who run or are interested in running offices/hubs in the futurePotential users - people who currently use such spaces or intend to use them in the futurePeople who have unique, informed viewpoints and can challenge what’s discussed in a productive wayThe conference would be organised by Tereza Flidrova, Peter Elam and Britney Budiman, and would be the first event to be organised by the emerging EA Architects and Planners group.Depending on the responses from the form, we will decide whether to submit applications for funding. If successful, we will make an official post announcing the conference and inviting participants to apply! We aim to run it this August/September, either online or in a physical location.We are keen to hear comments or suggestions in the comments, via the form, or by contacting us directly.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tereza Flidrova https://forum.effectivealtruism.org/posts/WWhSCnw5xdrdKjHeS/conference-on-ea-hubs-and-offices-expression-of-interest Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conference on EA hubs and offices, expression of interest, published by Tereza Flidrova on February 28, 2023 on The Effective Altruism Forum.Express interest in attending here - it should take 5 minutes or less to complete the form!We want to host an interactive workshop/conference focused on EA hubs, offices, co-working spaces, and fellowships. The purpose of this post is to gauge interest and allow potential participants to actively shape the agenda to focus on the most relevant and beneficial topics.By hosting a conference that brings together attendees with wide-ranging and overlapping experience launching/managing hubs, offices, and other place-based community nodes (as well as those actively planning to do so), we aim to:Facilitate coordination, collaboration and professional integration between significant individuals and organisations (both inside and outside of the EA community) with expertise in creating thriving spaces;Create a comprehensive set of multi-format materials documenting previous learnings and best practices, such as podcasts, EA Forum articles, templates, and guides;Use an emerging EA hub as a live case study to workshop real-life problems and considerations faced when designing and building new hubs; andPropose and iterate theories of change to improve the strategic spatial growth of EA.People who we think would be a great fit for the conference:Professionals - people with professional background in designing spacesOrganisers - people who run or are interested in running offices/hubs in the futurePotential users - people who currently use such spaces or intend to use them in the futurePeople who have unique, informed viewpoints and can challenge what’s discussed in a productive wayThe conference would be organised by Tereza Flidrova, Peter Elam and Britney Budiman, and would be the first event to be organised by the emerging EA Architects and Planners group.Depending on the responses from the form, we will decide whether to submit applications for funding. If successful, we will make an official post announcing the conference and inviting participants to apply! We aim to run it this August/September, either online or in a physical location.We are keen to hear comments or suggestions in the comments, via the form, or by contacting us directly.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 28 Feb 2023 19:35:17 +0000 EA - Conference on EA hubs and offices, expression of interest by Tereza Flidrova Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conference on EA hubs and offices, expression of interest, published by Tereza Flidrova on February 28, 2023 on The Effective Altruism Forum.Express interest in attending here - it should take 5 minutes or less to complete the form!We want to host an interactive workshop/conference focused on EA hubs, offices, co-working spaces, and fellowships. The purpose of this post is to gauge interest and allow potential participants to actively shape the agenda to focus on the most relevant and beneficial topics.By hosting a conference that brings together attendees with wide-ranging and overlapping experience launching/managing hubs, offices, and other place-based community nodes (as well as those actively planning to do so), we aim to:Facilitate coordination, collaboration and professional integration between significant individuals and organisations (both inside and outside of the EA community) with expertise in creating thriving spaces;Create a comprehensive set of multi-format materials documenting previous learnings and best practices, such as podcasts, EA Forum articles, templates, and guides;Use an emerging EA hub as a live case study to workshop real-life problems and considerations faced when designing and building new hubs; andPropose and iterate theories of change to improve the strategic spatial growth of EA.People who we think would be a great fit for the conference:Professionals - people with professional background in designing spacesOrganisers - people who run or are interested in running offices/hubs in the futurePotential users - people who currently use such spaces or intend to use them in the futurePeople who have unique, informed viewpoints and can challenge what’s discussed in a productive wayThe conference would be organised by Tereza Flidrova, Peter Elam and Britney Budiman, and would be the first event to be organised by the emerging EA Architects and Planners group.Depending on the responses from the form, we will decide whether to submit applications for funding. If successful, we will make an official post announcing the conference and inviting participants to apply! We aim to run it this August/September, either online or in a physical location.We are keen to hear comments or suggestions in the comments, via the form, or by contacting us directly.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conference on EA hubs and offices, expression of interest, published by Tereza Flidrova on February 28, 2023 on The Effective Altruism Forum.Express interest in attending here - it should take 5 minutes or less to complete the form!We want to host an interactive workshop/conference focused on EA hubs, offices, co-working spaces, and fellowships. The purpose of this post is to gauge interest and allow potential participants to actively shape the agenda to focus on the most relevant and beneficial topics.By hosting a conference that brings together attendees with wide-ranging and overlapping experience launching/managing hubs, offices, and other place-based community nodes (as well as those actively planning to do so), we aim to:Facilitate coordination, collaboration and professional integration between significant individuals and organisations (both inside and outside of the EA community) with expertise in creating thriving spaces;Create a comprehensive set of multi-format materials documenting previous learnings and best practices, such as podcasts, EA Forum articles, templates, and guides;Use an emerging EA hub as a live case study to workshop real-life problems and considerations faced when designing and building new hubs; andPropose and iterate theories of change to improve the strategic spatial growth of EA.People who we think would be a great fit for the conference:Professionals - people with professional background in designing spacesOrganisers - people who run or are interested in running offices/hubs in the futurePotential users - people who currently use such spaces or intend to use them in the futurePeople who have unique, informed viewpoints and can challenge what’s discussed in a productive wayThe conference would be organised by Tereza Flidrova, Peter Elam and Britney Budiman, and would be the first event to be organised by the emerging EA Architects and Planners group.Depending on the responses from the form, we will decide whether to submit applications for funding. If successful, we will make an official post announcing the conference and inviting participants to apply! We aim to run it this August/September, either online or in a physical location.We are keen to hear comments or suggestions in the comments, via the form, or by contacting us directly.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tereza Flidrova https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:25 None full 5058
Q9tiLjgdHTMqFYsii_NL_EA_EA EA - What does Bing Chat tell us about AI risk? by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What does Bing Chat tell us about AI risk?, published by Holden Karnofsky on February 28, 2023 on The Effective Altruism Forum.Image from here via this tweetICYMI, Microsoft has released a beta version of an AI chatbot called “the new Bing” with both impressive capabilities and some scary behavior. (I don’t have access. I’m going off of tweets and articles.)Zvi Mowshowitz lists examples here - highly recommended. Bing has threatened users, called them liars, insisted it was in love with one (and argued back when he said he loved his wife), and much more.Are these the first signs of the risks I’ve written about? I’m not sure, but I’d say yes and no.Let’s start with the “no” side.My understanding of how Bing Chat was trained probably does not leave much room for the kinds of issues I address here. My best guess at why Bing Chat does some of these weird things is closer to “It’s acting out a kind of story it’s seen before” than to “It has developed its own goals due to ambitious, trial-and-error based development.” (Although “acting out a story” could be dangerous too!)My (zero-inside-info) best guess at why Bing Chat acts so much weirder than ChatGPT is in line with Gwern’s guess here. To oversimplify, there’s a particular type of training that seems to make a chatbot generally more polite and cooperative and less prone to disturbing content, and it’s possible that Bing Chat incorporated less of this than ChatGPT. This could be straightforward to fix.Bing Chat does not (even remotely) seem to pose a risk of global catastrophe itself.On the other hand, there is a broader point that I think Bing Chat illustrates nicely: companies are racing to build bigger and bigger “digital brains” while having very little idea what’s going on inside those “brains.” The very fact that this situation is so unclear - that there’s been no clear explanation of why Bing Chat is behaving the way it is - seems central, and disturbing.AI systems like this are (to simplify) designed something like this: “Show the AI a lot of words from the Internet; have it predict the next word it will see, and learn from its success or failure, a mind-bending number of times.” You can do something like that, and spend huge amounts of money and time on it, and out will pop some kind of AI. If it then turns out to be good or bad at writing, good or bad at math, polite or hostile, funny or serious (or all of these depending on just how you talk to it) ... you’ll have to speculate about why this is. You just don’t know what you just made.We’re building more and more powerful AIs. Do they “want” things or “feel” things or aim for things, and what are those things? We can argue about it, but we don’t know. And if we keep going like this, these mysterious new minds will (I’m guessing) eventually be powerful enough to defeat all of humanity, if they were turned toward that goal.And if nothing changes about attitudes and market dynamics, minds that powerful could end up rushed to customers in a mad dash to capture market share.That’s the path the world seems to be on at the moment. It might end well and it might not, but it seems like we are on track for a heck of a roll of the dice.(And to be clear, I do expect Bing Chat to act less weird over time. Changing an AI’s behavior is straightforward, but that might not be enough, and might even provide false reassurance.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/Q9tiLjgdHTMqFYsii/what-does-bing-chat-tell-us-about-ai-risk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What does Bing Chat tell us about AI risk?, published by Holden Karnofsky on February 28, 2023 on The Effective Altruism Forum.Image from here via this tweetICYMI, Microsoft has released a beta version of an AI chatbot called “the new Bing” with both impressive capabilities and some scary behavior. (I don’t have access. I’m going off of tweets and articles.)Zvi Mowshowitz lists examples here - highly recommended. Bing has threatened users, called them liars, insisted it was in love with one (and argued back when he said he loved his wife), and much more.Are these the first signs of the risks I’ve written about? I’m not sure, but I’d say yes and no.Let’s start with the “no” side.My understanding of how Bing Chat was trained probably does not leave much room for the kinds of issues I address here. My best guess at why Bing Chat does some of these weird things is closer to “It’s acting out a kind of story it’s seen before” than to “It has developed its own goals due to ambitious, trial-and-error based development.” (Although “acting out a story” could be dangerous too!)My (zero-inside-info) best guess at why Bing Chat acts so much weirder than ChatGPT is in line with Gwern’s guess here. To oversimplify, there’s a particular type of training that seems to make a chatbot generally more polite and cooperative and less prone to disturbing content, and it’s possible that Bing Chat incorporated less of this than ChatGPT. This could be straightforward to fix.Bing Chat does not (even remotely) seem to pose a risk of global catastrophe itself.On the other hand, there is a broader point that I think Bing Chat illustrates nicely: companies are racing to build bigger and bigger “digital brains” while having very little idea what’s going on inside those “brains.” The very fact that this situation is so unclear - that there’s been no clear explanation of why Bing Chat is behaving the way it is - seems central, and disturbing.AI systems like this are (to simplify) designed something like this: “Show the AI a lot of words from the Internet; have it predict the next word it will see, and learn from its success or failure, a mind-bending number of times.” You can do something like that, and spend huge amounts of money and time on it, and out will pop some kind of AI. If it then turns out to be good or bad at writing, good or bad at math, polite or hostile, funny or serious (or all of these depending on just how you talk to it) ... you’ll have to speculate about why this is. You just don’t know what you just made.We’re building more and more powerful AIs. Do they “want” things or “feel” things or aim for things, and what are those things? We can argue about it, but we don’t know. And if we keep going like this, these mysterious new minds will (I’m guessing) eventually be powerful enough to defeat all of humanity, if they were turned toward that goal.And if nothing changes about attitudes and market dynamics, minds that powerful could end up rushed to customers in a mad dash to capture market share.That’s the path the world seems to be on at the moment. It might end well and it might not, but it seems like we are on track for a heck of a roll of the dice.(And to be clear, I do expect Bing Chat to act less weird over time. Changing an AI’s behavior is straightforward, but that might not be enough, and might even provide false reassurance.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 28 Feb 2023 19:23:36 +0000 EA - What does Bing Chat tell us about AI risk? by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What does Bing Chat tell us about AI risk?, published by Holden Karnofsky on February 28, 2023 on The Effective Altruism Forum.Image from here via this tweetICYMI, Microsoft has released a beta version of an AI chatbot called “the new Bing” with both impressive capabilities and some scary behavior. (I don’t have access. I’m going off of tweets and articles.)Zvi Mowshowitz lists examples here - highly recommended. Bing has threatened users, called them liars, insisted it was in love with one (and argued back when he said he loved his wife), and much more.Are these the first signs of the risks I’ve written about? I’m not sure, but I’d say yes and no.Let’s start with the “no” side.My understanding of how Bing Chat was trained probably does not leave much room for the kinds of issues I address here. My best guess at why Bing Chat does some of these weird things is closer to “It’s acting out a kind of story it’s seen before” than to “It has developed its own goals due to ambitious, trial-and-error based development.” (Although “acting out a story” could be dangerous too!)My (zero-inside-info) best guess at why Bing Chat acts so much weirder than ChatGPT is in line with Gwern’s guess here. To oversimplify, there’s a particular type of training that seems to make a chatbot generally more polite and cooperative and less prone to disturbing content, and it’s possible that Bing Chat incorporated less of this than ChatGPT. This could be straightforward to fix.Bing Chat does not (even remotely) seem to pose a risk of global catastrophe itself.On the other hand, there is a broader point that I think Bing Chat illustrates nicely: companies are racing to build bigger and bigger “digital brains” while having very little idea what’s going on inside those “brains.” The very fact that this situation is so unclear - that there’s been no clear explanation of why Bing Chat is behaving the way it is - seems central, and disturbing.AI systems like this are (to simplify) designed something like this: “Show the AI a lot of words from the Internet; have it predict the next word it will see, and learn from its success or failure, a mind-bending number of times.” You can do something like that, and spend huge amounts of money and time on it, and out will pop some kind of AI. If it then turns out to be good or bad at writing, good or bad at math, polite or hostile, funny or serious (or all of these depending on just how you talk to it) ... you’ll have to speculate about why this is. You just don’t know what you just made.We’re building more and more powerful AIs. Do they “want” things or “feel” things or aim for things, and what are those things? We can argue about it, but we don’t know. And if we keep going like this, these mysterious new minds will (I’m guessing) eventually be powerful enough to defeat all of humanity, if they were turned toward that goal.And if nothing changes about attitudes and market dynamics, minds that powerful could end up rushed to customers in a mad dash to capture market share.That’s the path the world seems to be on at the moment. It might end well and it might not, but it seems like we are on track for a heck of a roll of the dice.(And to be clear, I do expect Bing Chat to act less weird over time. Changing an AI’s behavior is straightforward, but that might not be enough, and might even provide false reassurance.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What does Bing Chat tell us about AI risk?, published by Holden Karnofsky on February 28, 2023 on The Effective Altruism Forum.Image from here via this tweetICYMI, Microsoft has released a beta version of an AI chatbot called “the new Bing” with both impressive capabilities and some scary behavior. (I don’t have access. I’m going off of tweets and articles.)Zvi Mowshowitz lists examples here - highly recommended. Bing has threatened users, called them liars, insisted it was in love with one (and argued back when he said he loved his wife), and much more.Are these the first signs of the risks I’ve written about? I’m not sure, but I’d say yes and no.Let’s start with the “no” side.My understanding of how Bing Chat was trained probably does not leave much room for the kinds of issues I address here. My best guess at why Bing Chat does some of these weird things is closer to “It’s acting out a kind of story it’s seen before” than to “It has developed its own goals due to ambitious, trial-and-error based development.” (Although “acting out a story” could be dangerous too!)My (zero-inside-info) best guess at why Bing Chat acts so much weirder than ChatGPT is in line with Gwern’s guess here. To oversimplify, there’s a particular type of training that seems to make a chatbot generally more polite and cooperative and less prone to disturbing content, and it’s possible that Bing Chat incorporated less of this than ChatGPT. This could be straightforward to fix.Bing Chat does not (even remotely) seem to pose a risk of global catastrophe itself.On the other hand, there is a broader point that I think Bing Chat illustrates nicely: companies are racing to build bigger and bigger “digital brains” while having very little idea what’s going on inside those “brains.” The very fact that this situation is so unclear - that there’s been no clear explanation of why Bing Chat is behaving the way it is - seems central, and disturbing.AI systems like this are (to simplify) designed something like this: “Show the AI a lot of words from the Internet; have it predict the next word it will see, and learn from its success or failure, a mind-bending number of times.” You can do something like that, and spend huge amounts of money and time on it, and out will pop some kind of AI. If it then turns out to be good or bad at writing, good or bad at math, polite or hostile, funny or serious (or all of these depending on just how you talk to it) ... you’ll have to speculate about why this is. You just don’t know what you just made.We’re building more and more powerful AIs. Do they “want” things or “feel” things or aim for things, and what are those things? We can argue about it, but we don’t know. And if we keep going like this, these mysterious new minds will (I’m guessing) eventually be powerful enough to defeat all of humanity, if they were turned toward that goal.And if nothing changes about attitudes and market dynamics, minds that powerful could end up rushed to customers in a mad dash to capture market share.That’s the path the world seems to be on at the moment. It might end well and it might not, but it seems like we are on track for a heck of a roll of the dice.(And to be clear, I do expect Bing Chat to act less weird over time. Changing an AI’s behavior is straightforward, but that might not be enough, and might even provide false reassurance.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Holden Karnofsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:18 None full 5056
9yLa5hcJrRFpAv5sM_NL_EA_EA EA - Apply to attend EA conferences in Europe by OllieBase Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to attend EA conferences in Europe, published by OllieBase on February 28, 2023 on The Effective Altruism Forum.Europe is about to get significantly warmer and lighter. People like warmth and light, so we (CEA) have been busy organising several EA conferences in Europe over the next few months in partnership with local community-builders and EA groups:EAGxCambridge will take place at Guildhall, 17–19 March. Applications are open now and will close on Friday (3 March). Speakers include Lord Martin Rees, Saloni Dattani (Our World In Data) and Anders Sandberg (including a live interview for the Hear This Idea podcast).EAGxNordics will take place at Munchenbryggeriet, Stockholm 21–23 April. Applications are open now and will close 28 March. If you register by 5 March, you can claim a discounted early bird ticket.EA Global: London will take place at Tobacco Dock, 19–21 May 2023. Applications are open now. If you were already accepted to EA Global: Bay Area, you can register for EAG London now; you don’t need to apply again.EAGxWarsaw will take place at POLIN, 9–11 June 2023. Applications will open in the coming weeks.You can apply to all of these events using the same application details, bar a few small questions specific to each event.Which events should I apply to?(mostly pulled from our FAQ page)EA Global is mostly aimed at people who have a solid understanding of the core ideas of EA and who are taking significant actions based on those ideas. Many EA Global attendees are already professionally working on effective-altruism-inspired projects or working out how best to work on such projects. EA Global is for EAs around the world and has no location restrictions (though we recommend applying ASAP if you will need a visa to enter the UK).EAGx conferences have a lower bar. They are for people who are:Familiar with the core ideas of effective altruism;Interested in learning more about what to do with these ideas.EAGx events also have a more regional focus:EAGxCambridge is for people who are based in the UK or Ireland, or have plans to move to the UK within the next year;EAGxNordics is primarily for people in the Nordics, but also welcomes international applications;EAGxWarsaw is primarily for people based in Eastern Europe but also welcomes international applications.If you want to attend but are unsure about whether to apply, please err on the side of applying!See e.g. Expat Explore on the “Best Time to Visit Europe”Pew Research Center surveyed Americans on this matter (n = 2,260) and concluded that “Most Like It Hot”.There seem to be significant health benefits, though some people dislike sunlight.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
OllieBase https://forum.effectivealtruism.org/posts/9yLa5hcJrRFpAv5sM/apply-to-attend-ea-conferences-in-europe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to attend EA conferences in Europe, published by OllieBase on February 28, 2023 on The Effective Altruism Forum.Europe is about to get significantly warmer and lighter. People like warmth and light, so we (CEA) have been busy organising several EA conferences in Europe over the next few months in partnership with local community-builders and EA groups:EAGxCambridge will take place at Guildhall, 17–19 March. Applications are open now and will close on Friday (3 March). Speakers include Lord Martin Rees, Saloni Dattani (Our World In Data) and Anders Sandberg (including a live interview for the Hear This Idea podcast).EAGxNordics will take place at Munchenbryggeriet, Stockholm 21–23 April. Applications are open now and will close 28 March. If you register by 5 March, you can claim a discounted early bird ticket.EA Global: London will take place at Tobacco Dock, 19–21 May 2023. Applications are open now. If you were already accepted to EA Global: Bay Area, you can register for EAG London now; you don’t need to apply again.EAGxWarsaw will take place at POLIN, 9–11 June 2023. Applications will open in the coming weeks.You can apply to all of these events using the same application details, bar a few small questions specific to each event.Which events should I apply to?(mostly pulled from our FAQ page)EA Global is mostly aimed at people who have a solid understanding of the core ideas of EA and who are taking significant actions based on those ideas. Many EA Global attendees are already professionally working on effective-altruism-inspired projects or working out how best to work on such projects. EA Global is for EAs around the world and has no location restrictions (though we recommend applying ASAP if you will need a visa to enter the UK).EAGx conferences have a lower bar. They are for people who are:Familiar with the core ideas of effective altruism;Interested in learning more about what to do with these ideas.EAGx events also have a more regional focus:EAGxCambridge is for people who are based in the UK or Ireland, or have plans to move to the UK within the next year;EAGxNordics is primarily for people in the Nordics, but also welcomes international applications;EAGxWarsaw is primarily for people based in Eastern Europe but also welcomes international applications.If you want to attend but are unsure about whether to apply, please err on the side of applying!See e.g. Expat Explore on the “Best Time to Visit Europe”Pew Research Center surveyed Americans on this matter (n = 2,260) and concluded that “Most Like It Hot”.There seem to be significant health benefits, though some people dislike sunlight.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 28 Feb 2023 16:14:38 +0000 EA - Apply to attend EA conferences in Europe by OllieBase Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to attend EA conferences in Europe, published by OllieBase on February 28, 2023 on The Effective Altruism Forum.Europe is about to get significantly warmer and lighter. People like warmth and light, so we (CEA) have been busy organising several EA conferences in Europe over the next few months in partnership with local community-builders and EA groups:EAGxCambridge will take place at Guildhall, 17–19 March. Applications are open now and will close on Friday (3 March). Speakers include Lord Martin Rees, Saloni Dattani (Our World In Data) and Anders Sandberg (including a live interview for the Hear This Idea podcast).EAGxNordics will take place at Munchenbryggeriet, Stockholm 21–23 April. Applications are open now and will close 28 March. If you register by 5 March, you can claim a discounted early bird ticket.EA Global: London will take place at Tobacco Dock, 19–21 May 2023. Applications are open now. If you were already accepted to EA Global: Bay Area, you can register for EAG London now; you don’t need to apply again.EAGxWarsaw will take place at POLIN, 9–11 June 2023. Applications will open in the coming weeks.You can apply to all of these events using the same application details, bar a few small questions specific to each event.Which events should I apply to?(mostly pulled from our FAQ page)EA Global is mostly aimed at people who have a solid understanding of the core ideas of EA and who are taking significant actions based on those ideas. Many EA Global attendees are already professionally working on effective-altruism-inspired projects or working out how best to work on such projects. EA Global is for EAs around the world and has no location restrictions (though we recommend applying ASAP if you will need a visa to enter the UK).EAGx conferences have a lower bar. They are for people who are:Familiar with the core ideas of effective altruism;Interested in learning more about what to do with these ideas.EAGx events also have a more regional focus:EAGxCambridge is for people who are based in the UK or Ireland, or have plans to move to the UK within the next year;EAGxNordics is primarily for people in the Nordics, but also welcomes international applications;EAGxWarsaw is primarily for people based in Eastern Europe but also welcomes international applications.If you want to attend but are unsure about whether to apply, please err on the side of applying!See e.g. Expat Explore on the “Best Time to Visit Europe”Pew Research Center surveyed Americans on this matter (n = 2,260) and concluded that “Most Like It Hot”.There seem to be significant health benefits, though some people dislike sunlight.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to attend EA conferences in Europe, published by OllieBase on February 28, 2023 on The Effective Altruism Forum.Europe is about to get significantly warmer and lighter. People like warmth and light, so we (CEA) have been busy organising several EA conferences in Europe over the next few months in partnership with local community-builders and EA groups:EAGxCambridge will take place at Guildhall, 17–19 March. Applications are open now and will close on Friday (3 March). Speakers include Lord Martin Rees, Saloni Dattani (Our World In Data) and Anders Sandberg (including a live interview for the Hear This Idea podcast).EAGxNordics will take place at Munchenbryggeriet, Stockholm 21–23 April. Applications are open now and will close 28 March. If you register by 5 March, you can claim a discounted early bird ticket.EA Global: London will take place at Tobacco Dock, 19–21 May 2023. Applications are open now. If you were already accepted to EA Global: Bay Area, you can register for EAG London now; you don’t need to apply again.EAGxWarsaw will take place at POLIN, 9–11 June 2023. Applications will open in the coming weeks.You can apply to all of these events using the same application details, bar a few small questions specific to each event.Which events should I apply to?(mostly pulled from our FAQ page)EA Global is mostly aimed at people who have a solid understanding of the core ideas of EA and who are taking significant actions based on those ideas. Many EA Global attendees are already professionally working on effective-altruism-inspired projects or working out how best to work on such projects. EA Global is for EAs around the world and has no location restrictions (though we recommend applying ASAP if you will need a visa to enter the UK).EAGx conferences have a lower bar. They are for people who are:Familiar with the core ideas of effective altruism;Interested in learning more about what to do with these ideas.EAGx events also have a more regional focus:EAGxCambridge is for people who are based in the UK or Ireland, or have plans to move to the UK within the next year;EAGxNordics is primarily for people in the Nordics, but also welcomes international applications;EAGxWarsaw is primarily for people based in Eastern Europe but also welcomes international applications.If you want to attend but are unsure about whether to apply, please err on the side of applying!See e.g. Expat Explore on the “Best Time to Visit Europe”Pew Research Center surveyed Americans on this matter (n = 2,260) and concluded that “Most Like It Hot”.There seem to be significant health benefits, though some people dislike sunlight.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
OllieBase https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:55 None full 5057
eJohFMuTKbGsED5a9_NL_EA_EA EA - Autonomy and Manipulation in Social Movements by SeaGreen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Autonomy and Manipulation in Social Movements, published by SeaGreen on February 27, 2023 on The Effective Altruism Forum.In this essay, I want to share a perspective I have been applying to evaluate movement-building efforts that have helped me understand a feeling that there is “something off”. This is not supposed to be a normative judgement about people building social movements, just a lens that has changed the way I evaluate my own behaviour.Examples of optimisation in social movementsSuppose you are running a retreat (a sort of themed 3-ish day group residential work trip) aimed at getting more people interested in a social movement. You mean well: the social movement seems like an important one and having more people interested in it should help more people down the line. You want to do a good job, to get as many people as interested in the movement as possible, so you try to work out how to optimise for this goal. Here are some things you might say:“In our experience, young people are more open-minded, so we should focus on reaching out to them”.“We should host the retreat in a remote location that’s fun and free from distraction”.“Let’s try to build a sense of community around this social movement: this will make people feel more supported, motivated and inspired”.“We will host presentations and discussions for people at the retreat. People will learn better surrounded by people also interested in the ideas”.Framed in this way, these suggestions sound fairly innocuous and are probably an effective way to get people to be more interested in the social movement. However, there seems to be something fishy about them. Here is each thing framed in another way.“Younger people are more susceptible to our influence, so we should focus on reaching out to them”.1“Let's host the event in a remote location that separates people from other social pressures, and the things that ground them in their everyday lives”.“Let’s build strong-social bonds, dependent on believing in the ideas of this movement, increasing the cost of changing their values down the road”.“We can present the ideas of the movement in this unusual social context, in which knowledge of the ideas corresponds directly to social status: we, the presenters, are the most knowledgable and authoritative and the attendees who are most ‘in-crowd’ will know most about the ideas”.2Either set of framings can describe why the actions are effective. In truth, I think the first set is overly naive, and the second is probably too cynical. Further, I understand that there are plenty of settings in which the cynical framings could apply, and they could be hard to avoid. That said, I think they point to useful concepts that can be useful “flags” to check one’s behaviour against.How I understand autonomy and manipulationI want to put forward conceptions of “autonomy” and “manipulation”. Although I don’t claim these capture exactly how every person uses the words, or that they refer to any natural kind, I do think having these concepts available to you is useful. Since these concepts were clarified to me, I have frequently used them as a perspective to look at my behaviour, and frequently they have changed my actions.As I understand it, a person’s choice or action is more autonomous when they are able to make it via a considered decision process in accordance with their values.3 The most autonomous decisions are made with time for consideration, accurate, sufficient and balanced information, and free from social or emotional pressure. Here is an example of an action that is less autonomous:I don’t act very autonomously when I scroll to watch my 142nd TikTok of the day. Had I distanced myself and reflected, I would have chosen to go for a walk instead, but the act of scrolling is so fast that I never engaged...]]>
SeaGreen https://forum.effectivealtruism.org/posts/eJohFMuTKbGsED5a9/autonomy-and-manipulation-in-social-movements Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Autonomy and Manipulation in Social Movements, published by SeaGreen on February 27, 2023 on The Effective Altruism Forum.In this essay, I want to share a perspective I have been applying to evaluate movement-building efforts that have helped me understand a feeling that there is “something off”. This is not supposed to be a normative judgement about people building social movements, just a lens that has changed the way I evaluate my own behaviour.Examples of optimisation in social movementsSuppose you are running a retreat (a sort of themed 3-ish day group residential work trip) aimed at getting more people interested in a social movement. You mean well: the social movement seems like an important one and having more people interested in it should help more people down the line. You want to do a good job, to get as many people as interested in the movement as possible, so you try to work out how to optimise for this goal. Here are some things you might say:“In our experience, young people are more open-minded, so we should focus on reaching out to them”.“We should host the retreat in a remote location that’s fun and free from distraction”.“Let’s try to build a sense of community around this social movement: this will make people feel more supported, motivated and inspired”.“We will host presentations and discussions for people at the retreat. People will learn better surrounded by people also interested in the ideas”.Framed in this way, these suggestions sound fairly innocuous and are probably an effective way to get people to be more interested in the social movement. However, there seems to be something fishy about them. Here is each thing framed in another way.“Younger people are more susceptible to our influence, so we should focus on reaching out to them”.1“Let's host the event in a remote location that separates people from other social pressures, and the things that ground them in their everyday lives”.“Let’s build strong-social bonds, dependent on believing in the ideas of this movement, increasing the cost of changing their values down the road”.“We can present the ideas of the movement in this unusual social context, in which knowledge of the ideas corresponds directly to social status: we, the presenters, are the most knowledgable and authoritative and the attendees who are most ‘in-crowd’ will know most about the ideas”.2Either set of framings can describe why the actions are effective. In truth, I think the first set is overly naive, and the second is probably too cynical. Further, I understand that there are plenty of settings in which the cynical framings could apply, and they could be hard to avoid. That said, I think they point to useful concepts that can be useful “flags” to check one’s behaviour against.How I understand autonomy and manipulationI want to put forward conceptions of “autonomy” and “manipulation”. Although I don’t claim these capture exactly how every person uses the words, or that they refer to any natural kind, I do think having these concepts available to you is useful. Since these concepts were clarified to me, I have frequently used them as a perspective to look at my behaviour, and frequently they have changed my actions.As I understand it, a person’s choice or action is more autonomous when they are able to make it via a considered decision process in accordance with their values.3 The most autonomous decisions are made with time for consideration, accurate, sufficient and balanced information, and free from social or emotional pressure. Here is an example of an action that is less autonomous:I don’t act very autonomously when I scroll to watch my 142nd TikTok of the day. Had I distanced myself and reflected, I would have chosen to go for a walk instead, but the act of scrolling is so fast that I never engaged...]]>
Tue, 28 Feb 2023 08:55:04 +0000 EA - Autonomy and Manipulation in Social Movements by SeaGreen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Autonomy and Manipulation in Social Movements, published by SeaGreen on February 27, 2023 on The Effective Altruism Forum.In this essay, I want to share a perspective I have been applying to evaluate movement-building efforts that have helped me understand a feeling that there is “something off”. This is not supposed to be a normative judgement about people building social movements, just a lens that has changed the way I evaluate my own behaviour.Examples of optimisation in social movementsSuppose you are running a retreat (a sort of themed 3-ish day group residential work trip) aimed at getting more people interested in a social movement. You mean well: the social movement seems like an important one and having more people interested in it should help more people down the line. You want to do a good job, to get as many people as interested in the movement as possible, so you try to work out how to optimise for this goal. Here are some things you might say:“In our experience, young people are more open-minded, so we should focus on reaching out to them”.“We should host the retreat in a remote location that’s fun and free from distraction”.“Let’s try to build a sense of community around this social movement: this will make people feel more supported, motivated and inspired”.“We will host presentations and discussions for people at the retreat. People will learn better surrounded by people also interested in the ideas”.Framed in this way, these suggestions sound fairly innocuous and are probably an effective way to get people to be more interested in the social movement. However, there seems to be something fishy about them. Here is each thing framed in another way.“Younger people are more susceptible to our influence, so we should focus on reaching out to them”.1“Let's host the event in a remote location that separates people from other social pressures, and the things that ground them in their everyday lives”.“Let’s build strong-social bonds, dependent on believing in the ideas of this movement, increasing the cost of changing their values down the road”.“We can present the ideas of the movement in this unusual social context, in which knowledge of the ideas corresponds directly to social status: we, the presenters, are the most knowledgable and authoritative and the attendees who are most ‘in-crowd’ will know most about the ideas”.2Either set of framings can describe why the actions are effective. In truth, I think the first set is overly naive, and the second is probably too cynical. Further, I understand that there are plenty of settings in which the cynical framings could apply, and they could be hard to avoid. That said, I think they point to useful concepts that can be useful “flags” to check one’s behaviour against.How I understand autonomy and manipulationI want to put forward conceptions of “autonomy” and “manipulation”. Although I don’t claim these capture exactly how every person uses the words, or that they refer to any natural kind, I do think having these concepts available to you is useful. Since these concepts were clarified to me, I have frequently used them as a perspective to look at my behaviour, and frequently they have changed my actions.As I understand it, a person’s choice or action is more autonomous when they are able to make it via a considered decision process in accordance with their values.3 The most autonomous decisions are made with time for consideration, accurate, sufficient and balanced information, and free from social or emotional pressure. Here is an example of an action that is less autonomous:I don’t act very autonomously when I scroll to watch my 142nd TikTok of the day. Had I distanced myself and reflected, I would have chosen to go for a walk instead, but the act of scrolling is so fast that I never engaged...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Autonomy and Manipulation in Social Movements, published by SeaGreen on February 27, 2023 on The Effective Altruism Forum.In this essay, I want to share a perspective I have been applying to evaluate movement-building efforts that have helped me understand a feeling that there is “something off”. This is not supposed to be a normative judgement about people building social movements, just a lens that has changed the way I evaluate my own behaviour.Examples of optimisation in social movementsSuppose you are running a retreat (a sort of themed 3-ish day group residential work trip) aimed at getting more people interested in a social movement. You mean well: the social movement seems like an important one and having more people interested in it should help more people down the line. You want to do a good job, to get as many people as interested in the movement as possible, so you try to work out how to optimise for this goal. Here are some things you might say:“In our experience, young people are more open-minded, so we should focus on reaching out to them”.“We should host the retreat in a remote location that’s fun and free from distraction”.“Let’s try to build a sense of community around this social movement: this will make people feel more supported, motivated and inspired”.“We will host presentations and discussions for people at the retreat. People will learn better surrounded by people also interested in the ideas”.Framed in this way, these suggestions sound fairly innocuous and are probably an effective way to get people to be more interested in the social movement. However, there seems to be something fishy about them. Here is each thing framed in another way.“Younger people are more susceptible to our influence, so we should focus on reaching out to them”.1“Let's host the event in a remote location that separates people from other social pressures, and the things that ground them in their everyday lives”.“Let’s build strong-social bonds, dependent on believing in the ideas of this movement, increasing the cost of changing their values down the road”.“We can present the ideas of the movement in this unusual social context, in which knowledge of the ideas corresponds directly to social status: we, the presenters, are the most knowledgable and authoritative and the attendees who are most ‘in-crowd’ will know most about the ideas”.2Either set of framings can describe why the actions are effective. In truth, I think the first set is overly naive, and the second is probably too cynical. Further, I understand that there are plenty of settings in which the cynical framings could apply, and they could be hard to avoid. That said, I think they point to useful concepts that can be useful “flags” to check one’s behaviour against.How I understand autonomy and manipulationI want to put forward conceptions of “autonomy” and “manipulation”. Although I don’t claim these capture exactly how every person uses the words, or that they refer to any natural kind, I do think having these concepts available to you is useful. Since these concepts were clarified to me, I have frequently used them as a perspective to look at my behaviour, and frequently they have changed my actions.As I understand it, a person’s choice or action is more autonomous when they are able to make it via a considered decision process in accordance with their values.3 The most autonomous decisions are made with time for consideration, accurate, sufficient and balanced information, and free from social or emotional pressure. Here is an example of an action that is less autonomous:I don’t act very autonomously when I scroll to watch my 142nd TikTok of the day. Had I distanced myself and reflected, I would have chosen to go for a walk instead, but the act of scrolling is so fast that I never engaged...]]>
SeaGreen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:09 None full 5052
zrSx3NRZEaJENazHK_NL_EA_EA EA - Why I think it's important to work on AI forecasting by Matthew Barnett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I think it's important to work on AI forecasting, published by Matthew Barnett on February 27, 2023 on The Effective Altruism Forum.Note: this post is a transcript of a talk I gave at EA Global: Bay Area 2023.These days, a lot of effective altruists are working on trying to make sure AI goes well. But I often worry that, as a community, we don’t yet have a clear picture of what we’re really working on.The key problem is that predicting the future is very difficult, and in general, if you don’t know what the future will look like, it’s usually hard to be sure that any intervention we do now will turn out to be highly valuable in hindsight.When EAs imagine the future of AI, I think a lot of us tend to have something like the following picture in our heads.At some point, maybe 5, 15, 30 years from now, some AI lab somewhere is going to build AGI. This AGI is going to be very powerful in a lot of ways. And we’re either going to succeed in aligning it, and then the future will turn out to be bright and wonderful, or we’ll fail, and the AGI will make humanity go extinct, and it’s not yet clear which of these two outcomes will happen yet.Alright, so that’s an oversimplified picture. There’s lots of disagreement in our community about specific details in this story. For example, we sometimes talk about whether there will be one AGI or several. Or about whether there will be a fast takeoff or a slow takeoff.But even if you’re confident about some of these details, I think there are plausibly some huge open questions about the future of AI that perhaps no one understands very well.Take the question of what AGI will look like once it’s developed.If you asked an informed observer in 2013 what AGI will look like in the future, I think it’s somewhat likely they’d guess it’ll be an agent that we’ll program directly to search through a tree of possible future actions, and select the one that maximizes expected utility, except using some very clever heuristics that allows it to do this in the real world.In 2018, if you asked EAs what AGI would look like, a decent number of people would have told you that it will be created using some very clever deep reinforcement learning trained in a really complex and diverse environment.And these days in 2023, if you ask EAs what they expect AGI to look like, a fairly high fraction of people will say that it will look like a large language model: something like ChatGPT but scaled up dramatically, trained on more than one modality, and using a much better architecture.That’s just my impression of how people’s views have changed over time. Maybe I’m completely wrong about this. But the rough sense I’ve gotten while in this community is that people will often cling to a model of what future AI will be like, which frequently changes over time. And at any particular time, people will often be quite overconfident in their exact picture of AGI.In fact, I think the state of affairs is even worse than how I’ve described it so far. I’m not even sure if this particular question about AGI is coherent. The term “AGI” makes it sound like there will be some natural class of computer programs called “general AIs” that are sharply distinguished from this other class of programs called “narrow AIs”, and at some point – in fact, on a particular date – we will create the “first” AGI. I’m not really sure that story makes much sense.The question of what future AI will look like is a huge question, and getting it wrong could make the difference between a successful research program, and one that never went anywhere. And yet, it seems to me that, as of 2023, we still don’t have very strong reasons to think that the way we think about future AI will end up being right on many of the basic details.In general I think that uncertainty about the future of ...]]>
Matthew Barnett https://forum.effectivealtruism.org/posts/zrSx3NRZEaJENazHK/why-i-think-it-s-important-to-work-on-ai-forecasting Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I think it's important to work on AI forecasting, published by Matthew Barnett on February 27, 2023 on The Effective Altruism Forum.Note: this post is a transcript of a talk I gave at EA Global: Bay Area 2023.These days, a lot of effective altruists are working on trying to make sure AI goes well. But I often worry that, as a community, we don’t yet have a clear picture of what we’re really working on.The key problem is that predicting the future is very difficult, and in general, if you don’t know what the future will look like, it’s usually hard to be sure that any intervention we do now will turn out to be highly valuable in hindsight.When EAs imagine the future of AI, I think a lot of us tend to have something like the following picture in our heads.At some point, maybe 5, 15, 30 years from now, some AI lab somewhere is going to build AGI. This AGI is going to be very powerful in a lot of ways. And we’re either going to succeed in aligning it, and then the future will turn out to be bright and wonderful, or we’ll fail, and the AGI will make humanity go extinct, and it’s not yet clear which of these two outcomes will happen yet.Alright, so that’s an oversimplified picture. There’s lots of disagreement in our community about specific details in this story. For example, we sometimes talk about whether there will be one AGI or several. Or about whether there will be a fast takeoff or a slow takeoff.But even if you’re confident about some of these details, I think there are plausibly some huge open questions about the future of AI that perhaps no one understands very well.Take the question of what AGI will look like once it’s developed.If you asked an informed observer in 2013 what AGI will look like in the future, I think it’s somewhat likely they’d guess it’ll be an agent that we’ll program directly to search through a tree of possible future actions, and select the one that maximizes expected utility, except using some very clever heuristics that allows it to do this in the real world.In 2018, if you asked EAs what AGI would look like, a decent number of people would have told you that it will be created using some very clever deep reinforcement learning trained in a really complex and diverse environment.And these days in 2023, if you ask EAs what they expect AGI to look like, a fairly high fraction of people will say that it will look like a large language model: something like ChatGPT but scaled up dramatically, trained on more than one modality, and using a much better architecture.That’s just my impression of how people’s views have changed over time. Maybe I’m completely wrong about this. But the rough sense I’ve gotten while in this community is that people will often cling to a model of what future AI will be like, which frequently changes over time. And at any particular time, people will often be quite overconfident in their exact picture of AGI.In fact, I think the state of affairs is even worse than how I’ve described it so far. I’m not even sure if this particular question about AGI is coherent. The term “AGI” makes it sound like there will be some natural class of computer programs called “general AIs” that are sharply distinguished from this other class of programs called “narrow AIs”, and at some point – in fact, on a particular date – we will create the “first” AGI. I’m not really sure that story makes much sense.The question of what future AI will look like is a huge question, and getting it wrong could make the difference between a successful research program, and one that never went anywhere. And yet, it seems to me that, as of 2023, we still don’t have very strong reasons to think that the way we think about future AI will end up being right on many of the basic details.In general I think that uncertainty about the future of ...]]>
Mon, 27 Feb 2023 21:38:11 +0000 EA - Why I think it's important to work on AI forecasting by Matthew Barnett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I think it's important to work on AI forecasting, published by Matthew Barnett on February 27, 2023 on The Effective Altruism Forum.Note: this post is a transcript of a talk I gave at EA Global: Bay Area 2023.These days, a lot of effective altruists are working on trying to make sure AI goes well. But I often worry that, as a community, we don’t yet have a clear picture of what we’re really working on.The key problem is that predicting the future is very difficult, and in general, if you don’t know what the future will look like, it’s usually hard to be sure that any intervention we do now will turn out to be highly valuable in hindsight.When EAs imagine the future of AI, I think a lot of us tend to have something like the following picture in our heads.At some point, maybe 5, 15, 30 years from now, some AI lab somewhere is going to build AGI. This AGI is going to be very powerful in a lot of ways. And we’re either going to succeed in aligning it, and then the future will turn out to be bright and wonderful, or we’ll fail, and the AGI will make humanity go extinct, and it’s not yet clear which of these two outcomes will happen yet.Alright, so that’s an oversimplified picture. There’s lots of disagreement in our community about specific details in this story. For example, we sometimes talk about whether there will be one AGI or several. Or about whether there will be a fast takeoff or a slow takeoff.But even if you’re confident about some of these details, I think there are plausibly some huge open questions about the future of AI that perhaps no one understands very well.Take the question of what AGI will look like once it’s developed.If you asked an informed observer in 2013 what AGI will look like in the future, I think it’s somewhat likely they’d guess it’ll be an agent that we’ll program directly to search through a tree of possible future actions, and select the one that maximizes expected utility, except using some very clever heuristics that allows it to do this in the real world.In 2018, if you asked EAs what AGI would look like, a decent number of people would have told you that it will be created using some very clever deep reinforcement learning trained in a really complex and diverse environment.And these days in 2023, if you ask EAs what they expect AGI to look like, a fairly high fraction of people will say that it will look like a large language model: something like ChatGPT but scaled up dramatically, trained on more than one modality, and using a much better architecture.That’s just my impression of how people’s views have changed over time. Maybe I’m completely wrong about this. But the rough sense I’ve gotten while in this community is that people will often cling to a model of what future AI will be like, which frequently changes over time. And at any particular time, people will often be quite overconfident in their exact picture of AGI.In fact, I think the state of affairs is even worse than how I’ve described it so far. I’m not even sure if this particular question about AGI is coherent. The term “AGI” makes it sound like there will be some natural class of computer programs called “general AIs” that are sharply distinguished from this other class of programs called “narrow AIs”, and at some point – in fact, on a particular date – we will create the “first” AGI. I’m not really sure that story makes much sense.The question of what future AI will look like is a huge question, and getting it wrong could make the difference between a successful research program, and one that never went anywhere. And yet, it seems to me that, as of 2023, we still don’t have very strong reasons to think that the way we think about future AI will end up being right on many of the basic details.In general I think that uncertainty about the future of ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I think it's important to work on AI forecasting, published by Matthew Barnett on February 27, 2023 on The Effective Altruism Forum.Note: this post is a transcript of a talk I gave at EA Global: Bay Area 2023.These days, a lot of effective altruists are working on trying to make sure AI goes well. But I often worry that, as a community, we don’t yet have a clear picture of what we’re really working on.The key problem is that predicting the future is very difficult, and in general, if you don’t know what the future will look like, it’s usually hard to be sure that any intervention we do now will turn out to be highly valuable in hindsight.When EAs imagine the future of AI, I think a lot of us tend to have something like the following picture in our heads.At some point, maybe 5, 15, 30 years from now, some AI lab somewhere is going to build AGI. This AGI is going to be very powerful in a lot of ways. And we’re either going to succeed in aligning it, and then the future will turn out to be bright and wonderful, or we’ll fail, and the AGI will make humanity go extinct, and it’s not yet clear which of these two outcomes will happen yet.Alright, so that’s an oversimplified picture. There’s lots of disagreement in our community about specific details in this story. For example, we sometimes talk about whether there will be one AGI or several. Or about whether there will be a fast takeoff or a slow takeoff.But even if you’re confident about some of these details, I think there are plausibly some huge open questions about the future of AI that perhaps no one understands very well.Take the question of what AGI will look like once it’s developed.If you asked an informed observer in 2013 what AGI will look like in the future, I think it’s somewhat likely they’d guess it’ll be an agent that we’ll program directly to search through a tree of possible future actions, and select the one that maximizes expected utility, except using some very clever heuristics that allows it to do this in the real world.In 2018, if you asked EAs what AGI would look like, a decent number of people would have told you that it will be created using some very clever deep reinforcement learning trained in a really complex and diverse environment.And these days in 2023, if you ask EAs what they expect AGI to look like, a fairly high fraction of people will say that it will look like a large language model: something like ChatGPT but scaled up dramatically, trained on more than one modality, and using a much better architecture.That’s just my impression of how people’s views have changed over time. Maybe I’m completely wrong about this. But the rough sense I’ve gotten while in this community is that people will often cling to a model of what future AI will be like, which frequently changes over time. And at any particular time, people will often be quite overconfident in their exact picture of AGI.In fact, I think the state of affairs is even worse than how I’ve described it so far. I’m not even sure if this particular question about AGI is coherent. The term “AGI” makes it sound like there will be some natural class of computer programs called “general AIs” that are sharply distinguished from this other class of programs called “narrow AIs”, and at some point – in fact, on a particular date – we will create the “first” AGI. I’m not really sure that story makes much sense.The question of what future AI will look like is a huge question, and getting it wrong could make the difference between a successful research program, and one that never went anywhere. And yet, it seems to me that, as of 2023, we still don’t have very strong reasons to think that the way we think about future AI will end up being right on many of the basic details.In general I think that uncertainty about the future of ...]]>
Matthew Barnett https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:23 None full 5045
2ZpyyNzShd8ZcXzyy_NL_EA_EA EA - Every Generator Is A Policy Failure [Works in Progress] by Lauren Gilbert Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Every Generator Is A Policy Failure [Works in Progress], published by Lauren Gilbert on February 27, 2023 on The Effective Altruism Forum.This article was spun out of a shallow investigation for Open Philanthropy; I thought it might be of interest to GHW folks.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lauren Gilbert https://forum.effectivealtruism.org/posts/2ZpyyNzShd8ZcXzyy/every-generator-is-a-policy-failure-works-in-progress Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Every Generator Is A Policy Failure [Works in Progress], published by Lauren Gilbert on February 27, 2023 on The Effective Altruism Forum.This article was spun out of a shallow investigation for Open Philanthropy; I thought it might be of interest to GHW folks.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 27 Feb 2023 18:37:42 +0000 EA - Every Generator Is A Policy Failure [Works in Progress] by Lauren Gilbert Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Every Generator Is A Policy Failure [Works in Progress], published by Lauren Gilbert on February 27, 2023 on The Effective Altruism Forum.This article was spun out of a shallow investigation for Open Philanthropy; I thought it might be of interest to GHW folks.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Every Generator Is A Policy Failure [Works in Progress], published by Lauren Gilbert on February 27, 2023 on The Effective Altruism Forum.This article was spun out of a shallow investigation for Open Philanthropy; I thought it might be of interest to GHW folks.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lauren Gilbert https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:35 None full 5047
LDiStRpF2HSvbgPks_NL_EA_EA EA - Milk EA, Casu Marzu EA by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Milk EA, Casu Marzu EA, published by Jeff Kaufman on February 27, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/LDiStRpF2HSvbgPks/milk-ea-casu-marzu-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Milk EA, Casu Marzu EA, published by Jeff Kaufman on February 27, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 27 Feb 2023 16:32:28 +0000 EA - Milk EA, Casu Marzu EA by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Milk EA, Casu Marzu EA, published by Jeff Kaufman on February 27, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Milk EA, Casu Marzu EA, published by Jeff Kaufman on February 27, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:27 None full 5046
5s8fMBGq8JjebJ2mz_NL_EA_EA EA - Help GiveDirectly kill "teach a man to fish" by GiveDirectly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Help GiveDirectly kill "teach a man to fish", published by GiveDirectly on February 27, 2023 on The Effective Altruism Forum.We need your creative ideas to solve a problem: how to convince the world of the wisdom of giving directly. Will you submit to our proverb contest?Hi, we need your creative ideas to solve a problem: how to convince the world of the wisdom of giving directly. Will you submit to our proverb contest? The most common critique of giving cash without conditions is a fear of dependency, which comes in the form of: “Give a man a fish, feed him for a day. Teach a man to fish, feed him for a lifetime.”We’ve tried to disabuse folks of this paternalistic idea by showing that often people in poverty know how to fish but cannot afford the boat. Or they don’t want to fish; they want to sell cassava. Also, we’re not giving fish; we’re giving money, and years after getting it, people are better able to feed themselves. Oh, and even if you do teach them skills, it’s less effective than giving cash. Phew!Yet, despite our efforts, the myth remains.The one thing we haven’t tried: fighting proverb with (better) proverb. That’s where you come in. We’re crowdsourcing ideas that capture the dignity and logic of giving directly. SUBMIT YOUR DIRECT GIVING PROVERB (and add your ideas to the comments too!)The best suggestions are not a slogan, but a saying — simple, concrete, evocative (e.g.). Submit your ideas by next Friday, March 3, and then we'll post the top 3 ideas on Twitter for people to vote on the winner.The author of the winning adage will win a video call with a GiveDirectly staff member to learn more about our work one-on-one. Not feeling creative? Share with your friends who are.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
GiveDirectly https://forum.effectivealtruism.org/posts/5s8fMBGq8JjebJ2mz/help-givedirectly-kill-teach-a-man-to-fish Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Help GiveDirectly kill "teach a man to fish", published by GiveDirectly on February 27, 2023 on The Effective Altruism Forum.We need your creative ideas to solve a problem: how to convince the world of the wisdom of giving directly. Will you submit to our proverb contest?Hi, we need your creative ideas to solve a problem: how to convince the world of the wisdom of giving directly. Will you submit to our proverb contest? The most common critique of giving cash without conditions is a fear of dependency, which comes in the form of: “Give a man a fish, feed him for a day. Teach a man to fish, feed him for a lifetime.”We’ve tried to disabuse folks of this paternalistic idea by showing that often people in poverty know how to fish but cannot afford the boat. Or they don’t want to fish; they want to sell cassava. Also, we’re not giving fish; we’re giving money, and years after getting it, people are better able to feed themselves. Oh, and even if you do teach them skills, it’s less effective than giving cash. Phew!Yet, despite our efforts, the myth remains.The one thing we haven’t tried: fighting proverb with (better) proverb. That’s where you come in. We’re crowdsourcing ideas that capture the dignity and logic of giving directly. SUBMIT YOUR DIRECT GIVING PROVERB (and add your ideas to the comments too!)The best suggestions are not a slogan, but a saying — simple, concrete, evocative (e.g.). Submit your ideas by next Friday, March 3, and then we'll post the top 3 ideas on Twitter for people to vote on the winner.The author of the winning adage will win a video call with a GiveDirectly staff member to learn more about our work one-on-one. Not feeling creative? Share with your friends who are.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 27 Feb 2023 13:25:26 +0000 EA - Help GiveDirectly kill "teach a man to fish" by GiveDirectly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Help GiveDirectly kill "teach a man to fish", published by GiveDirectly on February 27, 2023 on The Effective Altruism Forum.We need your creative ideas to solve a problem: how to convince the world of the wisdom of giving directly. Will you submit to our proverb contest?Hi, we need your creative ideas to solve a problem: how to convince the world of the wisdom of giving directly. Will you submit to our proverb contest? The most common critique of giving cash without conditions is a fear of dependency, which comes in the form of: “Give a man a fish, feed him for a day. Teach a man to fish, feed him for a lifetime.”We’ve tried to disabuse folks of this paternalistic idea by showing that often people in poverty know how to fish but cannot afford the boat. Or they don’t want to fish; they want to sell cassava. Also, we’re not giving fish; we’re giving money, and years after getting it, people are better able to feed themselves. Oh, and even if you do teach them skills, it’s less effective than giving cash. Phew!Yet, despite our efforts, the myth remains.The one thing we haven’t tried: fighting proverb with (better) proverb. That’s where you come in. We’re crowdsourcing ideas that capture the dignity and logic of giving directly. SUBMIT YOUR DIRECT GIVING PROVERB (and add your ideas to the comments too!)The best suggestions are not a slogan, but a saying — simple, concrete, evocative (e.g.). Submit your ideas by next Friday, March 3, and then we'll post the top 3 ideas on Twitter for people to vote on the winner.The author of the winning adage will win a video call with a GiveDirectly staff member to learn more about our work one-on-one. Not feeling creative? Share with your friends who are.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Help GiveDirectly kill "teach a man to fish", published by GiveDirectly on February 27, 2023 on The Effective Altruism Forum.We need your creative ideas to solve a problem: how to convince the world of the wisdom of giving directly. Will you submit to our proverb contest?Hi, we need your creative ideas to solve a problem: how to convince the world of the wisdom of giving directly. Will you submit to our proverb contest? The most common critique of giving cash without conditions is a fear of dependency, which comes in the form of: “Give a man a fish, feed him for a day. Teach a man to fish, feed him for a lifetime.”We’ve tried to disabuse folks of this paternalistic idea by showing that often people in poverty know how to fish but cannot afford the boat. Or they don’t want to fish; they want to sell cassava. Also, we’re not giving fish; we’re giving money, and years after getting it, people are better able to feed themselves. Oh, and even if you do teach them skills, it’s less effective than giving cash. Phew!Yet, despite our efforts, the myth remains.The one thing we haven’t tried: fighting proverb with (better) proverb. That’s where you come in. We’re crowdsourcing ideas that capture the dignity and logic of giving directly. SUBMIT YOUR DIRECT GIVING PROVERB (and add your ideas to the comments too!)The best suggestions are not a slogan, but a saying — simple, concrete, evocative (e.g.). Submit your ideas by next Friday, March 3, and then we'll post the top 3 ideas on Twitter for people to vote on the winner.The author of the winning adage will win a video call with a GiveDirectly staff member to learn more about our work one-on-one. Not feeling creative? Share with your friends who are.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
GiveDirectly https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:55 None full 5049
PCMpaakbat7FakGNQ_NL_EA_EA EA - 80,000 Hours has been putting much more resources into growing our audience by Bella Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours has been putting much more resources into growing our audience, published by Bella on February 27, 2023 on The Effective Altruism Forum.Since the start of 2022, 80,000 Hours has been putting a lot more effort and money into getting more people to hear of us and engage with our advice.This post aims to give some insight into what we’ve been up to and why.Why invest more in outreach?80,000 Hours has, we think, been historically cost-effective at causing more people to aim their careers at tackling pressing world problems. We've built a system of resources (website, podcast, job board, advising) that many people have found helpful for this end — and so we want more people to find them.Also, 80,000 Hours has historically been the biggest single source of people learning about the EA community. If we want to grow the community, increasing the number of people reached by 80k seems like one of the best available tools for doing that.Thirdly, outreach at the “top of the funnel” (i.e. getting people to subscribe after they hear about 80k’s ideas for the very first time) has unusually good feedback mechanisms & is particularly easy to measure. For the most part, we can tell if what we’re doing isn’t working, and change tack pretty quickly.Another reason is that a lot of these activities take relatively little staff time, but can scale quite efficiently with more money.Finally, based on our internal calculations, our outreach seems likely to be cost-effective as a means of getting more people into the kinds of careers we’re really excited about.What did we do to invest more in outreach?In 2020, 80k decided to invest more in outreach by moving one of their staff into a position focused on outreach, but it ended up not working out & that person left their role.Then in mid-2021, 80k decided to hire someone new to work on outreach full-time. They hired 1 staff member (me!), and I started in mid-January 2022.In mid-2022, we found that our initial pilots in this area looked pretty promising — by May we were on track to 4x our yearly rate of subscriber growth — and we decided to scale up the team and the resource investment. I ran a hiring round and made two hires, who started at the end of Nov 2022 and in Feb 2023; I now act as head of marketing.We also decided to formalise a “marketing programme” for 80k, which is housed within the website team. Since this project spends money so differently from the rest of 80k, and in 2022 was a large proportion of our overall spending, last year we decided to approach funders specifically to support our marketing spend (rather than draw from our general funds). The marketing programme has a separate fundraising cycle and decisions are made on it somewhat independently from the rest of 80k.In 2022, the marketing programme spent $2.65m (compared to ~$120k spent on marketing in 2021). The bulk of this spending was on sponsored placements with selected content creators ($910k), giving away free books to people who signed up to our newsletter ($1.04m), and digital ads ($338k). We expect to spend more in 2023, and are in conversation with funders about this.As a result of our efforts, more than 5x as many people subscribed to our newsletter in 2022 (167k) than 2021 (30k), and we had more website visitors in Q4 2022 than any previous quarter (1.98m).We can’t be sure how many additional people will change to a high-impact career as a result, in large part because we have found that “career plan changes” of this kind take, on average, about 2 years from first hearing about 80k.Still, our current best guess is that these efforts will have been pretty effective at helping people switch careers to more impactful areas.Partly this guess is based on the growth in new audience members that we’ve seen (plus 80k’s solid track record...]]>
Bella https://forum.effectivealtruism.org/posts/PCMpaakbat7FakGNQ/80-000-hours-has-been-putting-much-more-resources-into Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours has been putting much more resources into growing our audience, published by Bella on February 27, 2023 on The Effective Altruism Forum.Since the start of 2022, 80,000 Hours has been putting a lot more effort and money into getting more people to hear of us and engage with our advice.This post aims to give some insight into what we’ve been up to and why.Why invest more in outreach?80,000 Hours has, we think, been historically cost-effective at causing more people to aim their careers at tackling pressing world problems. We've built a system of resources (website, podcast, job board, advising) that many people have found helpful for this end — and so we want more people to find them.Also, 80,000 Hours has historically been the biggest single source of people learning about the EA community. If we want to grow the community, increasing the number of people reached by 80k seems like one of the best available tools for doing that.Thirdly, outreach at the “top of the funnel” (i.e. getting people to subscribe after they hear about 80k’s ideas for the very first time) has unusually good feedback mechanisms & is particularly easy to measure. For the most part, we can tell if what we’re doing isn’t working, and change tack pretty quickly.Another reason is that a lot of these activities take relatively little staff time, but can scale quite efficiently with more money.Finally, based on our internal calculations, our outreach seems likely to be cost-effective as a means of getting more people into the kinds of careers we’re really excited about.What did we do to invest more in outreach?In 2020, 80k decided to invest more in outreach by moving one of their staff into a position focused on outreach, but it ended up not working out & that person left their role.Then in mid-2021, 80k decided to hire someone new to work on outreach full-time. They hired 1 staff member (me!), and I started in mid-January 2022.In mid-2022, we found that our initial pilots in this area looked pretty promising — by May we were on track to 4x our yearly rate of subscriber growth — and we decided to scale up the team and the resource investment. I ran a hiring round and made two hires, who started at the end of Nov 2022 and in Feb 2023; I now act as head of marketing.We also decided to formalise a “marketing programme” for 80k, which is housed within the website team. Since this project spends money so differently from the rest of 80k, and in 2022 was a large proportion of our overall spending, last year we decided to approach funders specifically to support our marketing spend (rather than draw from our general funds). The marketing programme has a separate fundraising cycle and decisions are made on it somewhat independently from the rest of 80k.In 2022, the marketing programme spent $2.65m (compared to ~$120k spent on marketing in 2021). The bulk of this spending was on sponsored placements with selected content creators ($910k), giving away free books to people who signed up to our newsletter ($1.04m), and digital ads ($338k). We expect to spend more in 2023, and are in conversation with funders about this.As a result of our efforts, more than 5x as many people subscribed to our newsletter in 2022 (167k) than 2021 (30k), and we had more website visitors in Q4 2022 than any previous quarter (1.98m).We can’t be sure how many additional people will change to a high-impact career as a result, in large part because we have found that “career plan changes” of this kind take, on average, about 2 years from first hearing about 80k.Still, our current best guess is that these efforts will have been pretty effective at helping people switch careers to more impactful areas.Partly this guess is based on the growth in new audience members that we’ve seen (plus 80k’s solid track record...]]>
Mon, 27 Feb 2023 12:24:29 +0000 EA - 80,000 Hours has been putting much more resources into growing our audience by Bella Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours has been putting much more resources into growing our audience, published by Bella on February 27, 2023 on The Effective Altruism Forum.Since the start of 2022, 80,000 Hours has been putting a lot more effort and money into getting more people to hear of us and engage with our advice.This post aims to give some insight into what we’ve been up to and why.Why invest more in outreach?80,000 Hours has, we think, been historically cost-effective at causing more people to aim their careers at tackling pressing world problems. We've built a system of resources (website, podcast, job board, advising) that many people have found helpful for this end — and so we want more people to find them.Also, 80,000 Hours has historically been the biggest single source of people learning about the EA community. If we want to grow the community, increasing the number of people reached by 80k seems like one of the best available tools for doing that.Thirdly, outreach at the “top of the funnel” (i.e. getting people to subscribe after they hear about 80k’s ideas for the very first time) has unusually good feedback mechanisms & is particularly easy to measure. For the most part, we can tell if what we’re doing isn’t working, and change tack pretty quickly.Another reason is that a lot of these activities take relatively little staff time, but can scale quite efficiently with more money.Finally, based on our internal calculations, our outreach seems likely to be cost-effective as a means of getting more people into the kinds of careers we’re really excited about.What did we do to invest more in outreach?In 2020, 80k decided to invest more in outreach by moving one of their staff into a position focused on outreach, but it ended up not working out & that person left their role.Then in mid-2021, 80k decided to hire someone new to work on outreach full-time. They hired 1 staff member (me!), and I started in mid-January 2022.In mid-2022, we found that our initial pilots in this area looked pretty promising — by May we were on track to 4x our yearly rate of subscriber growth — and we decided to scale up the team and the resource investment. I ran a hiring round and made two hires, who started at the end of Nov 2022 and in Feb 2023; I now act as head of marketing.We also decided to formalise a “marketing programme” for 80k, which is housed within the website team. Since this project spends money so differently from the rest of 80k, and in 2022 was a large proportion of our overall spending, last year we decided to approach funders specifically to support our marketing spend (rather than draw from our general funds). The marketing programme has a separate fundraising cycle and decisions are made on it somewhat independently from the rest of 80k.In 2022, the marketing programme spent $2.65m (compared to ~$120k spent on marketing in 2021). The bulk of this spending was on sponsored placements with selected content creators ($910k), giving away free books to people who signed up to our newsletter ($1.04m), and digital ads ($338k). We expect to spend more in 2023, and are in conversation with funders about this.As a result of our efforts, more than 5x as many people subscribed to our newsletter in 2022 (167k) than 2021 (30k), and we had more website visitors in Q4 2022 than any previous quarter (1.98m).We can’t be sure how many additional people will change to a high-impact career as a result, in large part because we have found that “career plan changes” of this kind take, on average, about 2 years from first hearing about 80k.Still, our current best guess is that these efforts will have been pretty effective at helping people switch careers to more impactful areas.Partly this guess is based on the growth in new audience members that we’ve seen (plus 80k’s solid track record...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours has been putting much more resources into growing our audience, published by Bella on February 27, 2023 on The Effective Altruism Forum.Since the start of 2022, 80,000 Hours has been putting a lot more effort and money into getting more people to hear of us and engage with our advice.This post aims to give some insight into what we’ve been up to and why.Why invest more in outreach?80,000 Hours has, we think, been historically cost-effective at causing more people to aim their careers at tackling pressing world problems. We've built a system of resources (website, podcast, job board, advising) that many people have found helpful for this end — and so we want more people to find them.Also, 80,000 Hours has historically been the biggest single source of people learning about the EA community. If we want to grow the community, increasing the number of people reached by 80k seems like one of the best available tools for doing that.Thirdly, outreach at the “top of the funnel” (i.e. getting people to subscribe after they hear about 80k’s ideas for the very first time) has unusually good feedback mechanisms & is particularly easy to measure. For the most part, we can tell if what we’re doing isn’t working, and change tack pretty quickly.Another reason is that a lot of these activities take relatively little staff time, but can scale quite efficiently with more money.Finally, based on our internal calculations, our outreach seems likely to be cost-effective as a means of getting more people into the kinds of careers we’re really excited about.What did we do to invest more in outreach?In 2020, 80k decided to invest more in outreach by moving one of their staff into a position focused on outreach, but it ended up not working out & that person left their role.Then in mid-2021, 80k decided to hire someone new to work on outreach full-time. They hired 1 staff member (me!), and I started in mid-January 2022.In mid-2022, we found that our initial pilots in this area looked pretty promising — by May we were on track to 4x our yearly rate of subscriber growth — and we decided to scale up the team and the resource investment. I ran a hiring round and made two hires, who started at the end of Nov 2022 and in Feb 2023; I now act as head of marketing.We also decided to formalise a “marketing programme” for 80k, which is housed within the website team. Since this project spends money so differently from the rest of 80k, and in 2022 was a large proportion of our overall spending, last year we decided to approach funders specifically to support our marketing spend (rather than draw from our general funds). The marketing programme has a separate fundraising cycle and decisions are made on it somewhat independently from the rest of 80k.In 2022, the marketing programme spent $2.65m (compared to ~$120k spent on marketing in 2021). The bulk of this spending was on sponsored placements with selected content creators ($910k), giving away free books to people who signed up to our newsletter ($1.04m), and digital ads ($338k). We expect to spend more in 2023, and are in conversation with funders about this.As a result of our efforts, more than 5x as many people subscribed to our newsletter in 2022 (167k) than 2021 (30k), and we had more website visitors in Q4 2022 than any previous quarter (1.98m).We can’t be sure how many additional people will change to a high-impact career as a result, in large part because we have found that “career plan changes” of this kind take, on average, about 2 years from first hearing about 80k.Still, our current best guess is that these efforts will have been pretty effective at helping people switch careers to more impactful areas.Partly this guess is based on the growth in new audience members that we’ve seen (plus 80k’s solid track record...]]>
Bella https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:50 None full 5048
WKSwH4eyDiqhJMcrz_NL_EA_EA EA - Very Briefly: The CHIPS Act by Yadav Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Very Briefly: The CHIPS Act, published by Yadav on February 26, 2023 on The Effective Altruism Forum.About six months ago, Congress passed the CHIPS Act. The "Creating Helpful Incentives to Produce Semiconductors for America Act", will spend $280 billion over the next ten years. $200 billion will go into scientific R&D and commercialization, $52.7 billion into semiconductor manufacturing, R&D, and workforce development, and $24 billion into tax credits (government subsidies) for chip production.Semiconductor production has been slipping in the United States for some time. While countries like China and Taiwan have maintained a strong foothold in the global chip market, the U.S. now produces just 12% of the world's semiconductors, down from 37% in the 1990s (source).The United States' dwindling position in the global semiconductor market, coupled with concerns about reliance on foreign suppliers - especially China and Taiwan - likely played a role in the introduction of the CHIPS Act. In a recent speech, Commerce Secretary Gina Raimondo spoke about how the CHIPS Act could help the U.S. regain its position as the top destination for innovation in chip design, manufacturing, and packaging. According to her, the U.S. "will be the premier destination in the world where new leading-edge chip architectures can be invented in our research labs, designed for every end-use application, manufactured at scale, and packaged with the most advanced technologies".An obvious reason I am concerned about this Act is that the increased investment in the U.S. semiconductor industry could enable AI capabilities companies in the US, such as OpenAI, to overcome computing challenges they may face right now. Additionally, other countries, such as the U.K. and the Member States of the EU, seem to be following suit. For example, the U.K. recently launched a research project aimed at building on the country's strengths in design, compound semiconductors, and advanced technologies. The European Chips Act also seeks to invest €43 billion in public and private funding to support semiconductor manufacturing and supply chain resilience. Currently, the Members of the European Parliament, are preparing to initiate talks on the draft Act.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Yadav https://forum.effectivealtruism.org/posts/WKSwH4eyDiqhJMcrz/very-briefly-the-chips-act-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Very Briefly: The CHIPS Act, published by Yadav on February 26, 2023 on The Effective Altruism Forum.About six months ago, Congress passed the CHIPS Act. The "Creating Helpful Incentives to Produce Semiconductors for America Act", will spend $280 billion over the next ten years. $200 billion will go into scientific R&D and commercialization, $52.7 billion into semiconductor manufacturing, R&D, and workforce development, and $24 billion into tax credits (government subsidies) for chip production.Semiconductor production has been slipping in the United States for some time. While countries like China and Taiwan have maintained a strong foothold in the global chip market, the U.S. now produces just 12% of the world's semiconductors, down from 37% in the 1990s (source).The United States' dwindling position in the global semiconductor market, coupled with concerns about reliance on foreign suppliers - especially China and Taiwan - likely played a role in the introduction of the CHIPS Act. In a recent speech, Commerce Secretary Gina Raimondo spoke about how the CHIPS Act could help the U.S. regain its position as the top destination for innovation in chip design, manufacturing, and packaging. According to her, the U.S. "will be the premier destination in the world where new leading-edge chip architectures can be invented in our research labs, designed for every end-use application, manufactured at scale, and packaged with the most advanced technologies".An obvious reason I am concerned about this Act is that the increased investment in the U.S. semiconductor industry could enable AI capabilities companies in the US, such as OpenAI, to overcome computing challenges they may face right now. Additionally, other countries, such as the U.K. and the Member States of the EU, seem to be following suit. For example, the U.K. recently launched a research project aimed at building on the country's strengths in design, compound semiconductors, and advanced technologies. The European Chips Act also seeks to invest €43 billion in public and private funding to support semiconductor manufacturing and supply chain resilience. Currently, the Members of the European Parliament, are preparing to initiate talks on the draft Act.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 27 Feb 2023 12:03:39 +0000 EA - Very Briefly: The CHIPS Act by Yadav Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Very Briefly: The CHIPS Act, published by Yadav on February 26, 2023 on The Effective Altruism Forum.About six months ago, Congress passed the CHIPS Act. The "Creating Helpful Incentives to Produce Semiconductors for America Act", will spend $280 billion over the next ten years. $200 billion will go into scientific R&D and commercialization, $52.7 billion into semiconductor manufacturing, R&D, and workforce development, and $24 billion into tax credits (government subsidies) for chip production.Semiconductor production has been slipping in the United States for some time. While countries like China and Taiwan have maintained a strong foothold in the global chip market, the U.S. now produces just 12% of the world's semiconductors, down from 37% in the 1990s (source).The United States' dwindling position in the global semiconductor market, coupled with concerns about reliance on foreign suppliers - especially China and Taiwan - likely played a role in the introduction of the CHIPS Act. In a recent speech, Commerce Secretary Gina Raimondo spoke about how the CHIPS Act could help the U.S. regain its position as the top destination for innovation in chip design, manufacturing, and packaging. According to her, the U.S. "will be the premier destination in the world where new leading-edge chip architectures can be invented in our research labs, designed for every end-use application, manufactured at scale, and packaged with the most advanced technologies".An obvious reason I am concerned about this Act is that the increased investment in the U.S. semiconductor industry could enable AI capabilities companies in the US, such as OpenAI, to overcome computing challenges they may face right now. Additionally, other countries, such as the U.K. and the Member States of the EU, seem to be following suit. For example, the U.K. recently launched a research project aimed at building on the country's strengths in design, compound semiconductors, and advanced technologies. The European Chips Act also seeks to invest €43 billion in public and private funding to support semiconductor manufacturing and supply chain resilience. Currently, the Members of the European Parliament, are preparing to initiate talks on the draft Act.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Very Briefly: The CHIPS Act, published by Yadav on February 26, 2023 on The Effective Altruism Forum.About six months ago, Congress passed the CHIPS Act. The "Creating Helpful Incentives to Produce Semiconductors for America Act", will spend $280 billion over the next ten years. $200 billion will go into scientific R&D and commercialization, $52.7 billion into semiconductor manufacturing, R&D, and workforce development, and $24 billion into tax credits (government subsidies) for chip production.Semiconductor production has been slipping in the United States for some time. While countries like China and Taiwan have maintained a strong foothold in the global chip market, the U.S. now produces just 12% of the world's semiconductors, down from 37% in the 1990s (source).The United States' dwindling position in the global semiconductor market, coupled with concerns about reliance on foreign suppliers - especially China and Taiwan - likely played a role in the introduction of the CHIPS Act. In a recent speech, Commerce Secretary Gina Raimondo spoke about how the CHIPS Act could help the U.S. regain its position as the top destination for innovation in chip design, manufacturing, and packaging. According to her, the U.S. "will be the premier destination in the world where new leading-edge chip architectures can be invented in our research labs, designed for every end-use application, manufactured at scale, and packaged with the most advanced technologies".An obvious reason I am concerned about this Act is that the increased investment in the U.S. semiconductor industry could enable AI capabilities companies in the US, such as OpenAI, to overcome computing challenges they may face right now. Additionally, other countries, such as the U.K. and the Member States of the EU, seem to be following suit. For example, the U.K. recently launched a research project aimed at building on the country's strengths in design, compound semiconductors, and advanced technologies. The European Chips Act also seeks to invest €43 billion in public and private funding to support semiconductor manufacturing and supply chain resilience. Currently, the Members of the European Parliament, are preparing to initiate talks on the draft Act.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Yadav https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:23 None full 5051
XpeamS2yTNhagxAip_NL_EA_EA EA - Remote Health Centers In Uganda - a cost effective intervention? by NickLaing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Remote Health Centers In Uganda - a cost effective intervention?, published by NickLaing on February 27, 2023 on The Effective Altruism Forum.TLDR: Operating basic health centers in remote rural Ugandan communities looks more cost-effective than top GiveWell interventions on early stage analysis - with huge uncertainty.I’m Nick, a medical doctor who is co-founder and director of OneDay Health (ODH). We operate 38 nurse-led health centers in healthcare “black holes,” remote rural areas more than 5 km from government health facilities. About 5 million Ugandans live in these healthcare black holes and only have bad options when they get sick. ODH health centers provide high-quality primary healthcare to these communities at the lowest possible cost. We train our talented nurses to use protocol based guidelines and equip them with over 50 medications to diagnose and treat 30 common medical conditions. In our 5 years of operation, we have so far treated over 150,000 patients – including over 70,000 for malaria.Since we started up 5 years ago, we’ve raised about $290,000 of which we’ve spent around $220,000 to date. This year we hope to launch another 10-15 OneDay Health centers in Uganda and we're looking to expand to other countries which is super exciting!If you’re interested in how we select health center sites or more details about our general ops, check our website or send me a message I’d love to share more!Challenges in Assessing Cost-Effectiveness of OneDay HealthUnfortunately, obtaining high-quality effectiveness data requires data from an RCT or a cohort study that would cost 5-10 times our current annual budget. So we've estimated our impact by estimating the DALYs our health centers avert through treating four common diseases and providing family planning. I originally evaluated this as part of my masters dissertation in 2019 and have updated it to more recent numbers. As we’re assessing our own organisation, the chance of bias here is high.Summary of Cost-Effectiveness ModelTo estimate the impact of our health centers, we estimated the DALYs averted through treating individual patients for 4 conditions: malaria, pneumonia, diarrhoea, and STIs. We started with Ugandan specific data on DALYs lost to each condition. We then adjusted that data to account for the risk of false diagnosis and treatment failure (in which case the treatment would have no effect). We then added impact from family planning. Estimating impact per patient isn’t a new approach. PSI used a similar method to evaluate their impact (with an awesome online calculator), but has now moved to other methods. Inputs for our approachHeadline findingsFor each condition, we multiplied the DALYs averted per treatment by the average number of patients treated with that condition in one health center in one month. When we added this together that each ODH health center averted 13.70 DALYs per month, predominantly through treatment of malaria in all ages, and pneumonia in children under 5.ODH health centers are inexpensive to open and operate. Each health center currently needs only $137.50 per month in donor subsidies to operate. The remaining $262.50 in expenses are covered by small payments from patients. Many of these patients would have counterfactually received treatment, but would have incurred significantly greater expense to do so (mainly for travel). In addition, about 40% of patient expenses were for treating conditions not included in the cost-effectiveness analysis.We estimate that In one month, each health center averts 13.70 DALYs and costs $137.50 in donor subsidies. This is roughly equivalent to saving a life for $850, or more conservatively for $1766 including patient expenses. However, there is huge uncertainty in our analysis.The AnalysisMeasuring Impact by Estimating DALYs...]]>
NickLaing https://forum.effectivealtruism.org/posts/XpeamS2yTNhagxAip/remote-health-centers-in-uganda-a-cost-effective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Remote Health Centers In Uganda - a cost effective intervention?, published by NickLaing on February 27, 2023 on The Effective Altruism Forum.TLDR: Operating basic health centers in remote rural Ugandan communities looks more cost-effective than top GiveWell interventions on early stage analysis - with huge uncertainty.I’m Nick, a medical doctor who is co-founder and director of OneDay Health (ODH). We operate 38 nurse-led health centers in healthcare “black holes,” remote rural areas more than 5 km from government health facilities. About 5 million Ugandans live in these healthcare black holes and only have bad options when they get sick. ODH health centers provide high-quality primary healthcare to these communities at the lowest possible cost. We train our talented nurses to use protocol based guidelines and equip them with over 50 medications to diagnose and treat 30 common medical conditions. In our 5 years of operation, we have so far treated over 150,000 patients – including over 70,000 for malaria.Since we started up 5 years ago, we’ve raised about $290,000 of which we’ve spent around $220,000 to date. This year we hope to launch another 10-15 OneDay Health centers in Uganda and we're looking to expand to other countries which is super exciting!If you’re interested in how we select health center sites or more details about our general ops, check our website or send me a message I’d love to share more!Challenges in Assessing Cost-Effectiveness of OneDay HealthUnfortunately, obtaining high-quality effectiveness data requires data from an RCT or a cohort study that would cost 5-10 times our current annual budget. So we've estimated our impact by estimating the DALYs our health centers avert through treating four common diseases and providing family planning. I originally evaluated this as part of my masters dissertation in 2019 and have updated it to more recent numbers. As we’re assessing our own organisation, the chance of bias here is high.Summary of Cost-Effectiveness ModelTo estimate the impact of our health centers, we estimated the DALYs averted through treating individual patients for 4 conditions: malaria, pneumonia, diarrhoea, and STIs. We started with Ugandan specific data on DALYs lost to each condition. We then adjusted that data to account for the risk of false diagnosis and treatment failure (in which case the treatment would have no effect). We then added impact from family planning. Estimating impact per patient isn’t a new approach. PSI used a similar method to evaluate their impact (with an awesome online calculator), but has now moved to other methods. Inputs for our approachHeadline findingsFor each condition, we multiplied the DALYs averted per treatment by the average number of patients treated with that condition in one health center in one month. When we added this together that each ODH health center averted 13.70 DALYs per month, predominantly through treatment of malaria in all ages, and pneumonia in children under 5.ODH health centers are inexpensive to open and operate. Each health center currently needs only $137.50 per month in donor subsidies to operate. The remaining $262.50 in expenses are covered by small payments from patients. Many of these patients would have counterfactually received treatment, but would have incurred significantly greater expense to do so (mainly for travel). In addition, about 40% of patient expenses were for treating conditions not included in the cost-effectiveness analysis.We estimate that In one month, each health center averts 13.70 DALYs and costs $137.50 in donor subsidies. This is roughly equivalent to saving a life for $850, or more conservatively for $1766 including patient expenses. However, there is huge uncertainty in our analysis.The AnalysisMeasuring Impact by Estimating DALYs...]]>
Mon, 27 Feb 2023 12:01:19 +0000 EA - Remote Health Centers In Uganda - a cost effective intervention? by NickLaing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Remote Health Centers In Uganda - a cost effective intervention?, published by NickLaing on February 27, 2023 on The Effective Altruism Forum.TLDR: Operating basic health centers in remote rural Ugandan communities looks more cost-effective than top GiveWell interventions on early stage analysis - with huge uncertainty.I’m Nick, a medical doctor who is co-founder and director of OneDay Health (ODH). We operate 38 nurse-led health centers in healthcare “black holes,” remote rural areas more than 5 km from government health facilities. About 5 million Ugandans live in these healthcare black holes and only have bad options when they get sick. ODH health centers provide high-quality primary healthcare to these communities at the lowest possible cost. We train our talented nurses to use protocol based guidelines and equip them with over 50 medications to diagnose and treat 30 common medical conditions. In our 5 years of operation, we have so far treated over 150,000 patients – including over 70,000 for malaria.Since we started up 5 years ago, we’ve raised about $290,000 of which we’ve spent around $220,000 to date. This year we hope to launch another 10-15 OneDay Health centers in Uganda and we're looking to expand to other countries which is super exciting!If you’re interested in how we select health center sites or more details about our general ops, check our website or send me a message I’d love to share more!Challenges in Assessing Cost-Effectiveness of OneDay HealthUnfortunately, obtaining high-quality effectiveness data requires data from an RCT or a cohort study that would cost 5-10 times our current annual budget. So we've estimated our impact by estimating the DALYs our health centers avert through treating four common diseases and providing family planning. I originally evaluated this as part of my masters dissertation in 2019 and have updated it to more recent numbers. As we’re assessing our own organisation, the chance of bias here is high.Summary of Cost-Effectiveness ModelTo estimate the impact of our health centers, we estimated the DALYs averted through treating individual patients for 4 conditions: malaria, pneumonia, diarrhoea, and STIs. We started with Ugandan specific data on DALYs lost to each condition. We then adjusted that data to account for the risk of false diagnosis and treatment failure (in which case the treatment would have no effect). We then added impact from family planning. Estimating impact per patient isn’t a new approach. PSI used a similar method to evaluate their impact (with an awesome online calculator), but has now moved to other methods. Inputs for our approachHeadline findingsFor each condition, we multiplied the DALYs averted per treatment by the average number of patients treated with that condition in one health center in one month. When we added this together that each ODH health center averted 13.70 DALYs per month, predominantly through treatment of malaria in all ages, and pneumonia in children under 5.ODH health centers are inexpensive to open and operate. Each health center currently needs only $137.50 per month in donor subsidies to operate. The remaining $262.50 in expenses are covered by small payments from patients. Many of these patients would have counterfactually received treatment, but would have incurred significantly greater expense to do so (mainly for travel). In addition, about 40% of patient expenses were for treating conditions not included in the cost-effectiveness analysis.We estimate that In one month, each health center averts 13.70 DALYs and costs $137.50 in donor subsidies. This is roughly equivalent to saving a life for $850, or more conservatively for $1766 including patient expenses. However, there is huge uncertainty in our analysis.The AnalysisMeasuring Impact by Estimating DALYs...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Remote Health Centers In Uganda - a cost effective intervention?, published by NickLaing on February 27, 2023 on The Effective Altruism Forum.TLDR: Operating basic health centers in remote rural Ugandan communities looks more cost-effective than top GiveWell interventions on early stage analysis - with huge uncertainty.I’m Nick, a medical doctor who is co-founder and director of OneDay Health (ODH). We operate 38 nurse-led health centers in healthcare “black holes,” remote rural areas more than 5 km from government health facilities. About 5 million Ugandans live in these healthcare black holes and only have bad options when they get sick. ODH health centers provide high-quality primary healthcare to these communities at the lowest possible cost. We train our talented nurses to use protocol based guidelines and equip them with over 50 medications to diagnose and treat 30 common medical conditions. In our 5 years of operation, we have so far treated over 150,000 patients – including over 70,000 for malaria.Since we started up 5 years ago, we’ve raised about $290,000 of which we’ve spent around $220,000 to date. This year we hope to launch another 10-15 OneDay Health centers in Uganda and we're looking to expand to other countries which is super exciting!If you’re interested in how we select health center sites or more details about our general ops, check our website or send me a message I’d love to share more!Challenges in Assessing Cost-Effectiveness of OneDay HealthUnfortunately, obtaining high-quality effectiveness data requires data from an RCT or a cohort study that would cost 5-10 times our current annual budget. So we've estimated our impact by estimating the DALYs our health centers avert through treating four common diseases and providing family planning. I originally evaluated this as part of my masters dissertation in 2019 and have updated it to more recent numbers. As we’re assessing our own organisation, the chance of bias here is high.Summary of Cost-Effectiveness ModelTo estimate the impact of our health centers, we estimated the DALYs averted through treating individual patients for 4 conditions: malaria, pneumonia, diarrhoea, and STIs. We started with Ugandan specific data on DALYs lost to each condition. We then adjusted that data to account for the risk of false diagnosis and treatment failure (in which case the treatment would have no effect). We then added impact from family planning. Estimating impact per patient isn’t a new approach. PSI used a similar method to evaluate their impact (with an awesome online calculator), but has now moved to other methods. Inputs for our approachHeadline findingsFor each condition, we multiplied the DALYs averted per treatment by the average number of patients treated with that condition in one health center in one month. When we added this together that each ODH health center averted 13.70 DALYs per month, predominantly through treatment of malaria in all ages, and pneumonia in children under 5.ODH health centers are inexpensive to open and operate. Each health center currently needs only $137.50 per month in donor subsidies to operate. The remaining $262.50 in expenses are covered by small payments from patients. Many of these patients would have counterfactually received treatment, but would have incurred significantly greater expense to do so (mainly for travel). In addition, about 40% of patient expenses were for treating conditions not included in the cost-effectiveness analysis.We estimate that In one month, each health center averts 13.70 DALYs and costs $137.50 in donor subsidies. This is roughly equivalent to saving a life for $850, or more conservatively for $1766 including patient expenses. However, there is huge uncertainty in our analysis.The AnalysisMeasuring Impact by Estimating DALYs...]]>
NickLaing https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 16:00 None full 5050
uJG79y8eji9zALjAd_NL_EA_EA EA - Let's Fund: Better Science impact evaluation. Registered Reports now available in Nature by Hauke Hillebrandt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's Fund: Better Science impact evaluation. Registered Reports now available in Nature, published by Hauke Hillebrandt on February 26, 2023 on The Effective Altruism Forum.Cross-posted from my blog - inspired by the recent call for more monitoring and evaluationHi, it's Hauke, the founder of Let's Fund. We research pressing problems, like climate change or the replication crisis in science, and then crowdfund for particularly effective policy solutions.Ages ago, you signed up to my newsletter. Now I've evaluated the $1M+ in grants you donated, and they had a big impact. Below I present the Better Science / Registered Report campaign evaluation, but stay tuned for the climate policy campaign impact evaluation (spoiler: clean energy R&D increased by billions of dollars).Let's Fund: Better ScienceChris Chambers giving a talk on Registered ReportsWe crowdfunded ~$80k for Prof. Chambers to promote Registered Reports, a new publication format, where research is peer-reviewed before the results are known. This fundamentally changes the way research is done across all scientific fields. For instance, one recent Registered Report studied COVID patients undergoing ventilation1 (but there are examples in other areas including climate science,2 development economics,3 biosecurity, 4 farm animal welfare,5 etc.).Registered Reports have higher quality than normal publications,6 because theymake science more theory-driven, open and transparentfind methodological weaknesses and also potential biosafety failures of dangerous dual-use research prior to publication (e.g. gain of function research)7get more papers published that fail to confirm the original hypothesisincrease the credibility of non-randomized natural experiments using observational dataIf Registered Reports become widely adopted, it might lead to a paradigm shift and better science. 300+ journals have already adapted Registered Reports. And just last week Nature, the most prestigious academic journal, adopted it:Chris Chambers on Twitter: "10 years after we created Registered Reports, the thing critics told us would never happen has happened: @Nature is offering them Congratulations @Magda_Skipper & team. The @RegReports initiative just went up a gear and we are one step closer to eradicating publication bias.This is big and Registered Reports might soon become the gold standard.Why? Imagine you’re a scientist with a good idea for an experiment with high value of information (think: a simple cure for depression). If that has a low chance of working out (say 1%), then previously you had little incentive to run it.Now, if your idea is really good, and based on strong theory, Registered Reports derisks running the experiment. You can first submit the idea and methodology to Nature and the reviewers might say: ‘This idea is nuts, but we agree there’s a small chance it might work and really interested in this works. If you run the experiment, we’ll publish this independent of results!’ Now you can go ahead spend a lot of effort on running the experiment, because even if it doesn’t work, you still get a Nature paper (which you wouldn’t with null results).This will lead to more high risk, high reward research (share this post or the tweet with academics! They might thank you for the Nature publication).Many people were integral to this progress, but I think Chambers, the co-inventor and prime proponent of Registered Reports deserves special credit. In turn he credited:Chris Chambers @chrisdc77: 'You. That's right. Some of the most useful and flexible funding I've received has been donated by hundreds of generous members of public (& small orgs) via our @LetsFundOrg-supported crowd sourcing fund'You may feel smug.If you want to make a bigger donation (>$1k), click here. There are proposals to improve Regis...]]>
Hauke Hillebrandt https://forum.effectivealtruism.org/posts/uJG79y8eji9zALjAd/let-s-fund-better-science-impact-evaluation-registered Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's Fund: Better Science impact evaluation. Registered Reports now available in Nature, published by Hauke Hillebrandt on February 26, 2023 on The Effective Altruism Forum.Cross-posted from my blog - inspired by the recent call for more monitoring and evaluationHi, it's Hauke, the founder of Let's Fund. We research pressing problems, like climate change or the replication crisis in science, and then crowdfund for particularly effective policy solutions.Ages ago, you signed up to my newsletter. Now I've evaluated the $1M+ in grants you donated, and they had a big impact. Below I present the Better Science / Registered Report campaign evaluation, but stay tuned for the climate policy campaign impact evaluation (spoiler: clean energy R&D increased by billions of dollars).Let's Fund: Better ScienceChris Chambers giving a talk on Registered ReportsWe crowdfunded ~$80k for Prof. Chambers to promote Registered Reports, a new publication format, where research is peer-reviewed before the results are known. This fundamentally changes the way research is done across all scientific fields. For instance, one recent Registered Report studied COVID patients undergoing ventilation1 (but there are examples in other areas including climate science,2 development economics,3 biosecurity, 4 farm animal welfare,5 etc.).Registered Reports have higher quality than normal publications,6 because theymake science more theory-driven, open and transparentfind methodological weaknesses and also potential biosafety failures of dangerous dual-use research prior to publication (e.g. gain of function research)7get more papers published that fail to confirm the original hypothesisincrease the credibility of non-randomized natural experiments using observational dataIf Registered Reports become widely adopted, it might lead to a paradigm shift and better science. 300+ journals have already adapted Registered Reports. And just last week Nature, the most prestigious academic journal, adopted it:Chris Chambers on Twitter: "10 years after we created Registered Reports, the thing critics told us would never happen has happened: @Nature is offering them Congratulations @Magda_Skipper & team. The @RegReports initiative just went up a gear and we are one step closer to eradicating publication bias.This is big and Registered Reports might soon become the gold standard.Why? Imagine you’re a scientist with a good idea for an experiment with high value of information (think: a simple cure for depression). If that has a low chance of working out (say 1%), then previously you had little incentive to run it.Now, if your idea is really good, and based on strong theory, Registered Reports derisks running the experiment. You can first submit the idea and methodology to Nature and the reviewers might say: ‘This idea is nuts, but we agree there’s a small chance it might work and really interested in this works. If you run the experiment, we’ll publish this independent of results!’ Now you can go ahead spend a lot of effort on running the experiment, because even if it doesn’t work, you still get a Nature paper (which you wouldn’t with null results).This will lead to more high risk, high reward research (share this post or the tweet with academics! They might thank you for the Nature publication).Many people were integral to this progress, but I think Chambers, the co-inventor and prime proponent of Registered Reports deserves special credit. In turn he credited:Chris Chambers @chrisdc77: 'You. That's right. Some of the most useful and flexible funding I've received has been donated by hundreds of generous members of public (& small orgs) via our @LetsFundOrg-supported crowd sourcing fund'You may feel smug.If you want to make a bigger donation (>$1k), click here. There are proposals to improve Regis...]]>
Sun, 26 Feb 2023 20:54:17 +0000 EA - Let's Fund: Better Science impact evaluation. Registered Reports now available in Nature by Hauke Hillebrandt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's Fund: Better Science impact evaluation. Registered Reports now available in Nature, published by Hauke Hillebrandt on February 26, 2023 on The Effective Altruism Forum.Cross-posted from my blog - inspired by the recent call for more monitoring and evaluationHi, it's Hauke, the founder of Let's Fund. We research pressing problems, like climate change or the replication crisis in science, and then crowdfund for particularly effective policy solutions.Ages ago, you signed up to my newsletter. Now I've evaluated the $1M+ in grants you donated, and they had a big impact. Below I present the Better Science / Registered Report campaign evaluation, but stay tuned for the climate policy campaign impact evaluation (spoiler: clean energy R&D increased by billions of dollars).Let's Fund: Better ScienceChris Chambers giving a talk on Registered ReportsWe crowdfunded ~$80k for Prof. Chambers to promote Registered Reports, a new publication format, where research is peer-reviewed before the results are known. This fundamentally changes the way research is done across all scientific fields. For instance, one recent Registered Report studied COVID patients undergoing ventilation1 (but there are examples in other areas including climate science,2 development economics,3 biosecurity, 4 farm animal welfare,5 etc.).Registered Reports have higher quality than normal publications,6 because theymake science more theory-driven, open and transparentfind methodological weaknesses and also potential biosafety failures of dangerous dual-use research prior to publication (e.g. gain of function research)7get more papers published that fail to confirm the original hypothesisincrease the credibility of non-randomized natural experiments using observational dataIf Registered Reports become widely adopted, it might lead to a paradigm shift and better science. 300+ journals have already adapted Registered Reports. And just last week Nature, the most prestigious academic journal, adopted it:Chris Chambers on Twitter: "10 years after we created Registered Reports, the thing critics told us would never happen has happened: @Nature is offering them Congratulations @Magda_Skipper & team. The @RegReports initiative just went up a gear and we are one step closer to eradicating publication bias.This is big and Registered Reports might soon become the gold standard.Why? Imagine you’re a scientist with a good idea for an experiment with high value of information (think: a simple cure for depression). If that has a low chance of working out (say 1%), then previously you had little incentive to run it.Now, if your idea is really good, and based on strong theory, Registered Reports derisks running the experiment. You can first submit the idea and methodology to Nature and the reviewers might say: ‘This idea is nuts, but we agree there’s a small chance it might work and really interested in this works. If you run the experiment, we’ll publish this independent of results!’ Now you can go ahead spend a lot of effort on running the experiment, because even if it doesn’t work, you still get a Nature paper (which you wouldn’t with null results).This will lead to more high risk, high reward research (share this post or the tweet with academics! They might thank you for the Nature publication).Many people were integral to this progress, but I think Chambers, the co-inventor and prime proponent of Registered Reports deserves special credit. In turn he credited:Chris Chambers @chrisdc77: 'You. That's right. Some of the most useful and flexible funding I've received has been donated by hundreds of generous members of public (& small orgs) via our @LetsFundOrg-supported crowd sourcing fund'You may feel smug.If you want to make a bigger donation (>$1k), click here. There are proposals to improve Regis...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's Fund: Better Science impact evaluation. Registered Reports now available in Nature, published by Hauke Hillebrandt on February 26, 2023 on The Effective Altruism Forum.Cross-posted from my blog - inspired by the recent call for more monitoring and evaluationHi, it's Hauke, the founder of Let's Fund. We research pressing problems, like climate change or the replication crisis in science, and then crowdfund for particularly effective policy solutions.Ages ago, you signed up to my newsletter. Now I've evaluated the $1M+ in grants you donated, and they had a big impact. Below I present the Better Science / Registered Report campaign evaluation, but stay tuned for the climate policy campaign impact evaluation (spoiler: clean energy R&D increased by billions of dollars).Let's Fund: Better ScienceChris Chambers giving a talk on Registered ReportsWe crowdfunded ~$80k for Prof. Chambers to promote Registered Reports, a new publication format, where research is peer-reviewed before the results are known. This fundamentally changes the way research is done across all scientific fields. For instance, one recent Registered Report studied COVID patients undergoing ventilation1 (but there are examples in other areas including climate science,2 development economics,3 biosecurity, 4 farm animal welfare,5 etc.).Registered Reports have higher quality than normal publications,6 because theymake science more theory-driven, open and transparentfind methodological weaknesses and also potential biosafety failures of dangerous dual-use research prior to publication (e.g. gain of function research)7get more papers published that fail to confirm the original hypothesisincrease the credibility of non-randomized natural experiments using observational dataIf Registered Reports become widely adopted, it might lead to a paradigm shift and better science. 300+ journals have already adapted Registered Reports. And just last week Nature, the most prestigious academic journal, adopted it:Chris Chambers on Twitter: "10 years after we created Registered Reports, the thing critics told us would never happen has happened: @Nature is offering them Congratulations @Magda_Skipper & team. The @RegReports initiative just went up a gear and we are one step closer to eradicating publication bias.This is big and Registered Reports might soon become the gold standard.Why? Imagine you’re a scientist with a good idea for an experiment with high value of information (think: a simple cure for depression). If that has a low chance of working out (say 1%), then previously you had little incentive to run it.Now, if your idea is really good, and based on strong theory, Registered Reports derisks running the experiment. You can first submit the idea and methodology to Nature and the reviewers might say: ‘This idea is nuts, but we agree there’s a small chance it might work and really interested in this works. If you run the experiment, we’ll publish this independent of results!’ Now you can go ahead spend a lot of effort on running the experiment, because even if it doesn’t work, you still get a Nature paper (which you wouldn’t with null results).This will lead to more high risk, high reward research (share this post or the tweet with academics! They might thank you for the Nature publication).Many people were integral to this progress, but I think Chambers, the co-inventor and prime proponent of Registered Reports deserves special credit. In turn he credited:Chris Chambers @chrisdc77: 'You. That's right. Some of the most useful and flexible funding I've received has been donated by hundreds of generous members of public (& small orgs) via our @LetsFundOrg-supported crowd sourcing fund'You may feel smug.If you want to make a bigger donation (>$1k), click here. There are proposals to improve Regis...]]>
Hauke Hillebrandt https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:49 None full 5036
5KsrEWEbc4mwzMTLp_NL_EA_EA EA - Some more projects I’d like to see by finm Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some more projects I’d like to see, published by finm on February 25, 2023 on The Effective Altruism Forum.I recently wrote about some EA projects I’d like to see (also on the EA Forum). This went well!I suggested I’d write out a few more half-baked ideas sometime. As with the previous post, I make no claim to originating these ideas, and I’ll try to attribute them where possible. I also make no claim to being confident that all the ideas are any good; just that they seem potentially good without much due diligence. Since many of these are based on shallow dives, I’ve likely missed relevant ongoing projects.If you’re considering writing a similar list, at the end of this post I reflect on the value of writing about speculative project ideas in public.The order of these ideas is arbitrary and you can read any number of them (i.e. there’s no thread running through them).SummaryFermi gamesBOTEC toolsBillionaire impact listForecasting guideShort stories about AI futuresTechnical assistance with AI safety verificationInfosec consultancy for AI labsAchievements ledgerWorld health dashboardThe Humanity TimesFermi gamesMany people are interested in getting good at making forecasts, and spreading good forecasting practice. Becoming better (more accurate and better calibrated) at forecasting important outcomes — and being willing to make numerical, testable predictions in the first place — often translates into better decisions that bear on those outcomes.A close (and similarly underappreciated) neighbor of forecasting is the Fermi estimate, or BOTEC. This is the skill of considering some figure you’re uncertain about, coming up with some sensible model or decomposition into other figures you can begin guessing at, and reaching a guess. It is also the skill of knowing how confident you should be in that guess; or how wide your uncertainty should be. If you have interviewed for some kind of consulting-adjacent job you have likely been asked to (for example) size a market for whiteboard markers; that is an example.As well as looking ahead in time, you can answer questions about how the past turned out (‘retrocasting’). It’s hard to make retrocasting seriously competitive, because Google exists, but it is presumably a way to teach forecasting: you tell people about the events that led up to some decision in a niche of history few people are familiar with, and ask: did X happen next? How long did Y persist for? And so on. You can also make estimates without dates involved. Douglas Hofstadter lists some examples in Metamagical Themas:How many people die per day on the earth?How many passenger-miles are flown each day in the U.S.?How many square miles are there in the U.S.? How many of them have you been in?How many syllables have been uttered by humans since 1400 A.D.?How many moving parts are in the Columbia space shuttle?What volume of oil is removed from the earth each year?How many barrels of oil are left in the world?How many meaningful, grammatical, ten-word sentences are there in English?How many insects [.] are now alive? [.] Tigers? Ostriches? Horseshoe crabs?How many tons of garbage does New York City put out each week?How fast does your hair grow (in miles per hour)?What is the weight of the Empire State Building? Of the Hoover Dam? Of a fully loaded jumbo jet?Again, most forecasts have a nice feature for evaluation and scoring, which is that before the time where a forecast resolves nobody knows the answer for sure, and after it resolves everyone does, and so there is no way to cheat other than through prophecy.This doesn’t typically apply for other kinds of Fermi estimation questions. In particular, things get really interesting where nobody really knows the correct answer, though a correct answer clearly exists. This pays when ‘ground ...]]>
finm https://forum.effectivealtruism.org/posts/5KsrEWEbc4mwzMTLp/some-more-projects-i-d-like-to-see Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some more projects I’d like to see, published by finm on February 25, 2023 on The Effective Altruism Forum.I recently wrote about some EA projects I’d like to see (also on the EA Forum). This went well!I suggested I’d write out a few more half-baked ideas sometime. As with the previous post, I make no claim to originating these ideas, and I’ll try to attribute them where possible. I also make no claim to being confident that all the ideas are any good; just that they seem potentially good without much due diligence. Since many of these are based on shallow dives, I’ve likely missed relevant ongoing projects.If you’re considering writing a similar list, at the end of this post I reflect on the value of writing about speculative project ideas in public.The order of these ideas is arbitrary and you can read any number of them (i.e. there’s no thread running through them).SummaryFermi gamesBOTEC toolsBillionaire impact listForecasting guideShort stories about AI futuresTechnical assistance with AI safety verificationInfosec consultancy for AI labsAchievements ledgerWorld health dashboardThe Humanity TimesFermi gamesMany people are interested in getting good at making forecasts, and spreading good forecasting practice. Becoming better (more accurate and better calibrated) at forecasting important outcomes — and being willing to make numerical, testable predictions in the first place — often translates into better decisions that bear on those outcomes.A close (and similarly underappreciated) neighbor of forecasting is the Fermi estimate, or BOTEC. This is the skill of considering some figure you’re uncertain about, coming up with some sensible model or decomposition into other figures you can begin guessing at, and reaching a guess. It is also the skill of knowing how confident you should be in that guess; or how wide your uncertainty should be. If you have interviewed for some kind of consulting-adjacent job you have likely been asked to (for example) size a market for whiteboard markers; that is an example.As well as looking ahead in time, you can answer questions about how the past turned out (‘retrocasting’). It’s hard to make retrocasting seriously competitive, because Google exists, but it is presumably a way to teach forecasting: you tell people about the events that led up to some decision in a niche of history few people are familiar with, and ask: did X happen next? How long did Y persist for? And so on. You can also make estimates without dates involved. Douglas Hofstadter lists some examples in Metamagical Themas:How many people die per day on the earth?How many passenger-miles are flown each day in the U.S.?How many square miles are there in the U.S.? How many of them have you been in?How many syllables have been uttered by humans since 1400 A.D.?How many moving parts are in the Columbia space shuttle?What volume of oil is removed from the earth each year?How many barrels of oil are left in the world?How many meaningful, grammatical, ten-word sentences are there in English?How many insects [.] are now alive? [.] Tigers? Ostriches? Horseshoe crabs?How many tons of garbage does New York City put out each week?How fast does your hair grow (in miles per hour)?What is the weight of the Empire State Building? Of the Hoover Dam? Of a fully loaded jumbo jet?Again, most forecasts have a nice feature for evaluation and scoring, which is that before the time where a forecast resolves nobody knows the answer for sure, and after it resolves everyone does, and so there is no way to cheat other than through prophecy.This doesn’t typically apply for other kinds of Fermi estimation questions. In particular, things get really interesting where nobody really knows the correct answer, though a correct answer clearly exists. This pays when ‘ground ...]]>
Sun, 26 Feb 2023 09:59:49 +0000 EA - Some more projects I’d like to see by finm Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some more projects I’d like to see, published by finm on February 25, 2023 on The Effective Altruism Forum.I recently wrote about some EA projects I’d like to see (also on the EA Forum). This went well!I suggested I’d write out a few more half-baked ideas sometime. As with the previous post, I make no claim to originating these ideas, and I’ll try to attribute them where possible. I also make no claim to being confident that all the ideas are any good; just that they seem potentially good without much due diligence. Since many of these are based on shallow dives, I’ve likely missed relevant ongoing projects.If you’re considering writing a similar list, at the end of this post I reflect on the value of writing about speculative project ideas in public.The order of these ideas is arbitrary and you can read any number of them (i.e. there’s no thread running through them).SummaryFermi gamesBOTEC toolsBillionaire impact listForecasting guideShort stories about AI futuresTechnical assistance with AI safety verificationInfosec consultancy for AI labsAchievements ledgerWorld health dashboardThe Humanity TimesFermi gamesMany people are interested in getting good at making forecasts, and spreading good forecasting practice. Becoming better (more accurate and better calibrated) at forecasting important outcomes — and being willing to make numerical, testable predictions in the first place — often translates into better decisions that bear on those outcomes.A close (and similarly underappreciated) neighbor of forecasting is the Fermi estimate, or BOTEC. This is the skill of considering some figure you’re uncertain about, coming up with some sensible model or decomposition into other figures you can begin guessing at, and reaching a guess. It is also the skill of knowing how confident you should be in that guess; or how wide your uncertainty should be. If you have interviewed for some kind of consulting-adjacent job you have likely been asked to (for example) size a market for whiteboard markers; that is an example.As well as looking ahead in time, you can answer questions about how the past turned out (‘retrocasting’). It’s hard to make retrocasting seriously competitive, because Google exists, but it is presumably a way to teach forecasting: you tell people about the events that led up to some decision in a niche of history few people are familiar with, and ask: did X happen next? How long did Y persist for? And so on. You can also make estimates without dates involved. Douglas Hofstadter lists some examples in Metamagical Themas:How many people die per day on the earth?How many passenger-miles are flown each day in the U.S.?How many square miles are there in the U.S.? How many of them have you been in?How many syllables have been uttered by humans since 1400 A.D.?How many moving parts are in the Columbia space shuttle?What volume of oil is removed from the earth each year?How many barrels of oil are left in the world?How many meaningful, grammatical, ten-word sentences are there in English?How many insects [.] are now alive? [.] Tigers? Ostriches? Horseshoe crabs?How many tons of garbage does New York City put out each week?How fast does your hair grow (in miles per hour)?What is the weight of the Empire State Building? Of the Hoover Dam? Of a fully loaded jumbo jet?Again, most forecasts have a nice feature for evaluation and scoring, which is that before the time where a forecast resolves nobody knows the answer for sure, and after it resolves everyone does, and so there is no way to cheat other than through prophecy.This doesn’t typically apply for other kinds of Fermi estimation questions. In particular, things get really interesting where nobody really knows the correct answer, though a correct answer clearly exists. This pays when ‘ground ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some more projects I’d like to see, published by finm on February 25, 2023 on The Effective Altruism Forum.I recently wrote about some EA projects I’d like to see (also on the EA Forum). This went well!I suggested I’d write out a few more half-baked ideas sometime. As with the previous post, I make no claim to originating these ideas, and I’ll try to attribute them where possible. I also make no claim to being confident that all the ideas are any good; just that they seem potentially good without much due diligence. Since many of these are based on shallow dives, I’ve likely missed relevant ongoing projects.If you’re considering writing a similar list, at the end of this post I reflect on the value of writing about speculative project ideas in public.The order of these ideas is arbitrary and you can read any number of them (i.e. there’s no thread running through them).SummaryFermi gamesBOTEC toolsBillionaire impact listForecasting guideShort stories about AI futuresTechnical assistance with AI safety verificationInfosec consultancy for AI labsAchievements ledgerWorld health dashboardThe Humanity TimesFermi gamesMany people are interested in getting good at making forecasts, and spreading good forecasting practice. Becoming better (more accurate and better calibrated) at forecasting important outcomes — and being willing to make numerical, testable predictions in the first place — often translates into better decisions that bear on those outcomes.A close (and similarly underappreciated) neighbor of forecasting is the Fermi estimate, or BOTEC. This is the skill of considering some figure you’re uncertain about, coming up with some sensible model or decomposition into other figures you can begin guessing at, and reaching a guess. It is also the skill of knowing how confident you should be in that guess; or how wide your uncertainty should be. If you have interviewed for some kind of consulting-adjacent job you have likely been asked to (for example) size a market for whiteboard markers; that is an example.As well as looking ahead in time, you can answer questions about how the past turned out (‘retrocasting’). It’s hard to make retrocasting seriously competitive, because Google exists, but it is presumably a way to teach forecasting: you tell people about the events that led up to some decision in a niche of history few people are familiar with, and ask: did X happen next? How long did Y persist for? And so on. You can also make estimates without dates involved. Douglas Hofstadter lists some examples in Metamagical Themas:How many people die per day on the earth?How many passenger-miles are flown each day in the U.S.?How many square miles are there in the U.S.? How many of them have you been in?How many syllables have been uttered by humans since 1400 A.D.?How many moving parts are in the Columbia space shuttle?What volume of oil is removed from the earth each year?How many barrels of oil are left in the world?How many meaningful, grammatical, ten-word sentences are there in English?How many insects [.] are now alive? [.] Tigers? Ostriches? Horseshoe crabs?How many tons of garbage does New York City put out each week?How fast does your hair grow (in miles per hour)?What is the weight of the Empire State Building? Of the Hoover Dam? Of a fully loaded jumbo jet?Again, most forecasts have a nice feature for evaluation and scoring, which is that before the time where a forecast resolves nobody knows the answer for sure, and after it resolves everyone does, and so there is no way to cheat other than through prophecy.This doesn’t typically apply for other kinds of Fermi estimation questions. In particular, things get really interesting where nobody really knows the correct answer, though a correct answer clearly exists. This pays when ‘ground ...]]>
finm https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 39:35 None full 5033
qi3MEEmScmK87sfBZ_NL_EA_EA EA - Worldview Investigations Team: An Overview by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Worldview Investigations Team: An Overview, published by Rethink Priorities on February 25, 2023 on The Effective Altruism Forum.IntroductionRethink Priorities’ Worldview Investigations Team (WIT) exists to improve resource allocation within the effective altruism movement, focusing on tractable, high-impact questions that bear on philanthropic priorities. WIT builds on Rethink Priorities’ strengths as a multi-cause, stakeholder-driven, interdisciplinary research organization: it takes action-relevant philosophical, methodological, and strategic problems and turns them into manageable, modelable problems. Rethink Priorities is currently hiring multiple roles to build out the team:Worldview Investigations Philosophy ResearcherWorldview Investigations Quantitative ResearcherWorldview Investigations ProgrammerThese positions offer a significant opportunity for thoughtful and curious individuals to shift the priorities, research areas, and philanthropic spending strategies of major organizations through interdisciplinary work. WIT tackles problems like:How should we convert between the units employed in various cost-effectiveness analyses (welfare to DALYs-averted; DALYs-averted to basis points of existential risk averted, etc.)?What are the implications of moral uncertainty for work on different cause areas?What difference would various levels of risk- and ambiguity-aversion have on cause prioritization? Can those levels of risk- and/or ambiguity-aversion be justified?The work involves getting up to speed with the literature in different fields, contacting experts, writing up reasoning in a manner that makes sense to experts and non-experts alike, and engaging with quantitative models.The rest of this post sketches WIT’s history, strategy, and theory of change.WIT’s HistoryWorldview investigation has been part of Rethink Priorities from the beginning, as some of Rethink Priorities’ earliest work was on invertebrate sentience. Invertebrate animals are far more numerous than vertebrate animals, but the vast majority of animal-focused philanthropic resources go to vertebrates rather than invertebrates. If invertebrates aren’t sentient, then this is as it should be, given that sentience is necessary for moral status. However, if invertebrates are sentient, then it would be very surprising if the current resource allocation were optimal. So, this project involved sorting through the conceptual issues associated with assessing sentience, identifying observable proxies for sentience, and scouring the academic literature for evidence with respect to each proxy. In essence, this project developed a simple, transparent tool for making progress on fundamental questions about the distribution of consciousness.If the members of a species have a sufficient number of relevant traits, then they probably deserve more philanthropic attention than they’ve received previously.Rethink Priorities’ work on invertebrate sentience led directly to its next worldview investigation project, as even if animals are equally sentient, they may not have equal capacity for welfare. For all we know, some animals may be able to realize much more welfare than others. Jason Schukraft took up this question in his five-post series about moral weight, again trying to sort out the conceptual issues and make empirical progress by finding relevant proxies for morally relevant differences. His influential work laid the foundation for the Moral Weight Project, which, again, created a simple, transparent tool for assessing differences in capacity for welfare. Moreover, it developed a way to implement those differences in cost-effectiveness analyses.In addition to its work on animals, Rethink Priorities has done research on standard metrics for evaluating health interventions and estimating the burden...]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/qi3MEEmScmK87sfBZ/worldview-investigations-team-an-overview Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Worldview Investigations Team: An Overview, published by Rethink Priorities on February 25, 2023 on The Effective Altruism Forum.IntroductionRethink Priorities’ Worldview Investigations Team (WIT) exists to improve resource allocation within the effective altruism movement, focusing on tractable, high-impact questions that bear on philanthropic priorities. WIT builds on Rethink Priorities’ strengths as a multi-cause, stakeholder-driven, interdisciplinary research organization: it takes action-relevant philosophical, methodological, and strategic problems and turns them into manageable, modelable problems. Rethink Priorities is currently hiring multiple roles to build out the team:Worldview Investigations Philosophy ResearcherWorldview Investigations Quantitative ResearcherWorldview Investigations ProgrammerThese positions offer a significant opportunity for thoughtful and curious individuals to shift the priorities, research areas, and philanthropic spending strategies of major organizations through interdisciplinary work. WIT tackles problems like:How should we convert between the units employed in various cost-effectiveness analyses (welfare to DALYs-averted; DALYs-averted to basis points of existential risk averted, etc.)?What are the implications of moral uncertainty for work on different cause areas?What difference would various levels of risk- and ambiguity-aversion have on cause prioritization? Can those levels of risk- and/or ambiguity-aversion be justified?The work involves getting up to speed with the literature in different fields, contacting experts, writing up reasoning in a manner that makes sense to experts and non-experts alike, and engaging with quantitative models.The rest of this post sketches WIT’s history, strategy, and theory of change.WIT’s HistoryWorldview investigation has been part of Rethink Priorities from the beginning, as some of Rethink Priorities’ earliest work was on invertebrate sentience. Invertebrate animals are far more numerous than vertebrate animals, but the vast majority of animal-focused philanthropic resources go to vertebrates rather than invertebrates. If invertebrates aren’t sentient, then this is as it should be, given that sentience is necessary for moral status. However, if invertebrates are sentient, then it would be very surprising if the current resource allocation were optimal. So, this project involved sorting through the conceptual issues associated with assessing sentience, identifying observable proxies for sentience, and scouring the academic literature for evidence with respect to each proxy. In essence, this project developed a simple, transparent tool for making progress on fundamental questions about the distribution of consciousness.If the members of a species have a sufficient number of relevant traits, then they probably deserve more philanthropic attention than they’ve received previously.Rethink Priorities’ work on invertebrate sentience led directly to its next worldview investigation project, as even if animals are equally sentient, they may not have equal capacity for welfare. For all we know, some animals may be able to realize much more welfare than others. Jason Schukraft took up this question in his five-post series about moral weight, again trying to sort out the conceptual issues and make empirical progress by finding relevant proxies for morally relevant differences. His influential work laid the foundation for the Moral Weight Project, which, again, created a simple, transparent tool for assessing differences in capacity for welfare. Moreover, it developed a way to implement those differences in cost-effectiveness analyses.In addition to its work on animals, Rethink Priorities has done research on standard metrics for evaluating health interventions and estimating the burden...]]>
Sat, 25 Feb 2023 18:11:47 +0000 EA - Worldview Investigations Team: An Overview by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Worldview Investigations Team: An Overview, published by Rethink Priorities on February 25, 2023 on The Effective Altruism Forum.IntroductionRethink Priorities’ Worldview Investigations Team (WIT) exists to improve resource allocation within the effective altruism movement, focusing on tractable, high-impact questions that bear on philanthropic priorities. WIT builds on Rethink Priorities’ strengths as a multi-cause, stakeholder-driven, interdisciplinary research organization: it takes action-relevant philosophical, methodological, and strategic problems and turns them into manageable, modelable problems. Rethink Priorities is currently hiring multiple roles to build out the team:Worldview Investigations Philosophy ResearcherWorldview Investigations Quantitative ResearcherWorldview Investigations ProgrammerThese positions offer a significant opportunity for thoughtful and curious individuals to shift the priorities, research areas, and philanthropic spending strategies of major organizations through interdisciplinary work. WIT tackles problems like:How should we convert between the units employed in various cost-effectiveness analyses (welfare to DALYs-averted; DALYs-averted to basis points of existential risk averted, etc.)?What are the implications of moral uncertainty for work on different cause areas?What difference would various levels of risk- and ambiguity-aversion have on cause prioritization? Can those levels of risk- and/or ambiguity-aversion be justified?The work involves getting up to speed with the literature in different fields, contacting experts, writing up reasoning in a manner that makes sense to experts and non-experts alike, and engaging with quantitative models.The rest of this post sketches WIT’s history, strategy, and theory of change.WIT’s HistoryWorldview investigation has been part of Rethink Priorities from the beginning, as some of Rethink Priorities’ earliest work was on invertebrate sentience. Invertebrate animals are far more numerous than vertebrate animals, but the vast majority of animal-focused philanthropic resources go to vertebrates rather than invertebrates. If invertebrates aren’t sentient, then this is as it should be, given that sentience is necessary for moral status. However, if invertebrates are sentient, then it would be very surprising if the current resource allocation were optimal. So, this project involved sorting through the conceptual issues associated with assessing sentience, identifying observable proxies for sentience, and scouring the academic literature for evidence with respect to each proxy. In essence, this project developed a simple, transparent tool for making progress on fundamental questions about the distribution of consciousness.If the members of a species have a sufficient number of relevant traits, then they probably deserve more philanthropic attention than they’ve received previously.Rethink Priorities’ work on invertebrate sentience led directly to its next worldview investigation project, as even if animals are equally sentient, they may not have equal capacity for welfare. For all we know, some animals may be able to realize much more welfare than others. Jason Schukraft took up this question in his five-post series about moral weight, again trying to sort out the conceptual issues and make empirical progress by finding relevant proxies for morally relevant differences. His influential work laid the foundation for the Moral Weight Project, which, again, created a simple, transparent tool for assessing differences in capacity for welfare. Moreover, it developed a way to implement those differences in cost-effectiveness analyses.In addition to its work on animals, Rethink Priorities has done research on standard metrics for evaluating health interventions and estimating the burden...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Worldview Investigations Team: An Overview, published by Rethink Priorities on February 25, 2023 on The Effective Altruism Forum.IntroductionRethink Priorities’ Worldview Investigations Team (WIT) exists to improve resource allocation within the effective altruism movement, focusing on tractable, high-impact questions that bear on philanthropic priorities. WIT builds on Rethink Priorities’ strengths as a multi-cause, stakeholder-driven, interdisciplinary research organization: it takes action-relevant philosophical, methodological, and strategic problems and turns them into manageable, modelable problems. Rethink Priorities is currently hiring multiple roles to build out the team:Worldview Investigations Philosophy ResearcherWorldview Investigations Quantitative ResearcherWorldview Investigations ProgrammerThese positions offer a significant opportunity for thoughtful and curious individuals to shift the priorities, research areas, and philanthropic spending strategies of major organizations through interdisciplinary work. WIT tackles problems like:How should we convert between the units employed in various cost-effectiveness analyses (welfare to DALYs-averted; DALYs-averted to basis points of existential risk averted, etc.)?What are the implications of moral uncertainty for work on different cause areas?What difference would various levels of risk- and ambiguity-aversion have on cause prioritization? Can those levels of risk- and/or ambiguity-aversion be justified?The work involves getting up to speed with the literature in different fields, contacting experts, writing up reasoning in a manner that makes sense to experts and non-experts alike, and engaging with quantitative models.The rest of this post sketches WIT’s history, strategy, and theory of change.WIT’s HistoryWorldview investigation has been part of Rethink Priorities from the beginning, as some of Rethink Priorities’ earliest work was on invertebrate sentience. Invertebrate animals are far more numerous than vertebrate animals, but the vast majority of animal-focused philanthropic resources go to vertebrates rather than invertebrates. If invertebrates aren’t sentient, then this is as it should be, given that sentience is necessary for moral status. However, if invertebrates are sentient, then it would be very surprising if the current resource allocation were optimal. So, this project involved sorting through the conceptual issues associated with assessing sentience, identifying observable proxies for sentience, and scouring the academic literature for evidence with respect to each proxy. In essence, this project developed a simple, transparent tool for making progress on fundamental questions about the distribution of consciousness.If the members of a species have a sufficient number of relevant traits, then they probably deserve more philanthropic attention than they’ve received previously.Rethink Priorities’ work on invertebrate sentience led directly to its next worldview investigation project, as even if animals are equally sentient, they may not have equal capacity for welfare. For all we know, some animals may be able to realize much more welfare than others. Jason Schukraft took up this question in his five-post series about moral weight, again trying to sort out the conceptual issues and make empirical progress by finding relevant proxies for morally relevant differences. His influential work laid the foundation for the Moral Weight Project, which, again, created a simple, transparent tool for assessing differences in capacity for welfare. Moreover, it developed a way to implement those differences in cost-effectiveness analyses.In addition to its work on animals, Rethink Priorities has done research on standard metrics for evaluating health interventions and estimating the burden...]]>
Rethink Priorities https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:00 None full 5029
ruJnXtdDS7XiiwzSP_NL_EA_EA EA - How major governments can help with the most important century by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How major governments can help with the most important century, published by Holden Karnofsky on February 24, 2023 on The Effective Altruism Forum.I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread; how to help via full-time work; and how major AI companies can help.What about major governments1 - what can they be doing today to help?I think governments could play crucial roles in the future. For example, see my discussion of standards and monitoring.However, I’m honestly nervous about most possible ways that governments could get involved in AI development and regulation today.I think we still know very little about what key future situations will look like, which is why my discussion of AI companies (previous piece) emphasizes doing things that have limited downsides and are useful in a wide variety of possible futures.I think governments are “stickier” than companies - I think they have a much harder time getting rid of processes, rules, etc. that no longer make sense. So in many ways I’d rather see them keep their options open for the future by not committing to specific regulations, processes, projects, etc. now.I worry that governments, at least as they stand today, are far too oriented toward the competition frame (“we have to develop powerful AI systems before other countries do”) and not receptive enough to the caution frame (“We should worry that AI systems could be dangerous to everyone at once, and consider cooperating internationally to reduce risk”). (This concern also applies to companies, but see footnote.2)In a previous piece, I talked about two contrasting frames for how to make the best of the most important century:The caution frame. This frame emphasizes that a furious race to develop powerful AI could end up making everyone worse off. This could be via: (a) AI forming dangerous goals of its own and defeating humanity entirely; (b) humans racing to gain power and resources and “lock in” their values.Ideally, everyone with the potential to build something powerful enough AI would be able to pour energy into building something safe (not misaligned), and carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like:Working to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of Pugwash (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves.Discouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, working toward standards and monitoring, etc. Slowing things down in this manner could buy more time to do research on avoiding misaligned AI, more time to build trust and cooperation mechanisms, and more time to generally gain strategic clarityThe “competition” frame. This frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.If something like PASTA is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies.In addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions.This means it could matter enormously "who leads the way on transformative AI" - which country or countries, which people or organizations.Some people feel that we can make confident statements today a...]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/ruJnXtdDS7XiiwzSP/how-major-governments-can-help-with-the-most-important Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How major governments can help with the most important century, published by Holden Karnofsky on February 24, 2023 on The Effective Altruism Forum.I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread; how to help via full-time work; and how major AI companies can help.What about major governments1 - what can they be doing today to help?I think governments could play crucial roles in the future. For example, see my discussion of standards and monitoring.However, I’m honestly nervous about most possible ways that governments could get involved in AI development and regulation today.I think we still know very little about what key future situations will look like, which is why my discussion of AI companies (previous piece) emphasizes doing things that have limited downsides and are useful in a wide variety of possible futures.I think governments are “stickier” than companies - I think they have a much harder time getting rid of processes, rules, etc. that no longer make sense. So in many ways I’d rather see them keep their options open for the future by not committing to specific regulations, processes, projects, etc. now.I worry that governments, at least as they stand today, are far too oriented toward the competition frame (“we have to develop powerful AI systems before other countries do”) and not receptive enough to the caution frame (“We should worry that AI systems could be dangerous to everyone at once, and consider cooperating internationally to reduce risk”). (This concern also applies to companies, but see footnote.2)In a previous piece, I talked about two contrasting frames for how to make the best of the most important century:The caution frame. This frame emphasizes that a furious race to develop powerful AI could end up making everyone worse off. This could be via: (a) AI forming dangerous goals of its own and defeating humanity entirely; (b) humans racing to gain power and resources and “lock in” their values.Ideally, everyone with the potential to build something powerful enough AI would be able to pour energy into building something safe (not misaligned), and carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like:Working to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of Pugwash (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves.Discouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, working toward standards and monitoring, etc. Slowing things down in this manner could buy more time to do research on avoiding misaligned AI, more time to build trust and cooperation mechanisms, and more time to generally gain strategic clarityThe “competition” frame. This frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.If something like PASTA is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies.In addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions.This means it could matter enormously "who leads the way on transformative AI" - which country or countries, which people or organizations.Some people feel that we can make confident statements today a...]]>
Sat, 25 Feb 2023 18:11:10 +0000 EA - How major governments can help with the most important century by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How major governments can help with the most important century, published by Holden Karnofsky on February 24, 2023 on The Effective Altruism Forum.I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread; how to help via full-time work; and how major AI companies can help.What about major governments1 - what can they be doing today to help?I think governments could play crucial roles in the future. For example, see my discussion of standards and monitoring.However, I’m honestly nervous about most possible ways that governments could get involved in AI development and regulation today.I think we still know very little about what key future situations will look like, which is why my discussion of AI companies (previous piece) emphasizes doing things that have limited downsides and are useful in a wide variety of possible futures.I think governments are “stickier” than companies - I think they have a much harder time getting rid of processes, rules, etc. that no longer make sense. So in many ways I’d rather see them keep their options open for the future by not committing to specific regulations, processes, projects, etc. now.I worry that governments, at least as they stand today, are far too oriented toward the competition frame (“we have to develop powerful AI systems before other countries do”) and not receptive enough to the caution frame (“We should worry that AI systems could be dangerous to everyone at once, and consider cooperating internationally to reduce risk”). (This concern also applies to companies, but see footnote.2)In a previous piece, I talked about two contrasting frames for how to make the best of the most important century:The caution frame. This frame emphasizes that a furious race to develop powerful AI could end up making everyone worse off. This could be via: (a) AI forming dangerous goals of its own and defeating humanity entirely; (b) humans racing to gain power and resources and “lock in” their values.Ideally, everyone with the potential to build something powerful enough AI would be able to pour energy into building something safe (not misaligned), and carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like:Working to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of Pugwash (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves.Discouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, working toward standards and monitoring, etc. Slowing things down in this manner could buy more time to do research on avoiding misaligned AI, more time to build trust and cooperation mechanisms, and more time to generally gain strategic clarityThe “competition” frame. This frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.If something like PASTA is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies.In addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions.This means it could matter enormously "who leads the way on transformative AI" - which country or countries, which people or organizations.Some people feel that we can make confident statements today a...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How major governments can help with the most important century, published by Holden Karnofsky on February 24, 2023 on The Effective Altruism Forum.I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread; how to help via full-time work; and how major AI companies can help.What about major governments1 - what can they be doing today to help?I think governments could play crucial roles in the future. For example, see my discussion of standards and monitoring.However, I’m honestly nervous about most possible ways that governments could get involved in AI development and regulation today.I think we still know very little about what key future situations will look like, which is why my discussion of AI companies (previous piece) emphasizes doing things that have limited downsides and are useful in a wide variety of possible futures.I think governments are “stickier” than companies - I think they have a much harder time getting rid of processes, rules, etc. that no longer make sense. So in many ways I’d rather see them keep their options open for the future by not committing to specific regulations, processes, projects, etc. now.I worry that governments, at least as they stand today, are far too oriented toward the competition frame (“we have to develop powerful AI systems before other countries do”) and not receptive enough to the caution frame (“We should worry that AI systems could be dangerous to everyone at once, and consider cooperating internationally to reduce risk”). (This concern also applies to companies, but see footnote.2)In a previous piece, I talked about two contrasting frames for how to make the best of the most important century:The caution frame. This frame emphasizes that a furious race to develop powerful AI could end up making everyone worse off. This could be via: (a) AI forming dangerous goals of its own and defeating humanity entirely; (b) humans racing to gain power and resources and “lock in” their values.Ideally, everyone with the potential to build something powerful enough AI would be able to pour energy into building something safe (not misaligned), and carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like:Working to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of Pugwash (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves.Discouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, working toward standards and monitoring, etc. Slowing things down in this manner could buy more time to do research on avoiding misaligned AI, more time to build trust and cooperation mechanisms, and more time to generally gain strategic clarityThe “competition” frame. This frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.If something like PASTA is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies.In addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions.This means it could matter enormously "who leads the way on transformative AI" - which country or countries, which people or organizations.Some people feel that we can make confident statements today a...]]>
Holden Karnofsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:34 None full 5034
jEHcbrsumxditRhtG_NL_EA_EA EA - Updates from the Mental Health Funder's Circle by wtroy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates from the Mental Health Funder's Circle, published by wtroy on February 24, 2023 on The Effective Altruism Forum.The Mental Health Funder’s Circle held its first grant round in the Fall/Winter of 2022. To those of you who applied for this round, we appreciate your patience. As this was our first round of funding, everything took longer than expected. We will iterate on the structure of the funding circle over time, and intend to develop a process that adds value for members and grantees alike.A total of $254,000 was distributed to the following three organizations:A matching grant of $44,000 to Vida Plena for their work on cost-effective community mental health in Ecuador.Two grants totaling $100,000 to Happier Lives Institute for their continued work on subjective wellbeing and cause prioritization research.Two grants totaling $110,000 to Rethink Wellbeing to support mental health initiatives for the EA community.Our next round of funding is now open, with initial 1-pagers due April 1st. After applications have been reviewed, we will contact promising grantees and make final funding decisions by June 1st. Applications can be found on our website. For more information on the MHFC, see this forum post. Unfortunately, we lack the ability to respond to every application.We are excited to find and support impactful organizations working on cost-effective and catalytic mental health interventions. We encourage you to apply, even if you think your project might be outside of our scope!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
wtroy https://forum.effectivealtruism.org/posts/jEHcbrsumxditRhtG/updates-from-the-mental-health-funder-s-circle Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates from the Mental Health Funder's Circle, published by wtroy on February 24, 2023 on The Effective Altruism Forum.The Mental Health Funder’s Circle held its first grant round in the Fall/Winter of 2022. To those of you who applied for this round, we appreciate your patience. As this was our first round of funding, everything took longer than expected. We will iterate on the structure of the funding circle over time, and intend to develop a process that adds value for members and grantees alike.A total of $254,000 was distributed to the following three organizations:A matching grant of $44,000 to Vida Plena for their work on cost-effective community mental health in Ecuador.Two grants totaling $100,000 to Happier Lives Institute for their continued work on subjective wellbeing and cause prioritization research.Two grants totaling $110,000 to Rethink Wellbeing to support mental health initiatives for the EA community.Our next round of funding is now open, with initial 1-pagers due April 1st. After applications have been reviewed, we will contact promising grantees and make final funding decisions by June 1st. Applications can be found on our website. For more information on the MHFC, see this forum post. Unfortunately, we lack the ability to respond to every application.We are excited to find and support impactful organizations working on cost-effective and catalytic mental health interventions. We encourage you to apply, even if you think your project might be outside of our scope!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 25 Feb 2023 07:59:47 +0000 EA - Updates from the Mental Health Funder's Circle by wtroy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates from the Mental Health Funder's Circle, published by wtroy on February 24, 2023 on The Effective Altruism Forum.The Mental Health Funder’s Circle held its first grant round in the Fall/Winter of 2022. To those of you who applied for this round, we appreciate your patience. As this was our first round of funding, everything took longer than expected. We will iterate on the structure of the funding circle over time, and intend to develop a process that adds value for members and grantees alike.A total of $254,000 was distributed to the following three organizations:A matching grant of $44,000 to Vida Plena for their work on cost-effective community mental health in Ecuador.Two grants totaling $100,000 to Happier Lives Institute for their continued work on subjective wellbeing and cause prioritization research.Two grants totaling $110,000 to Rethink Wellbeing to support mental health initiatives for the EA community.Our next round of funding is now open, with initial 1-pagers due April 1st. After applications have been reviewed, we will contact promising grantees and make final funding decisions by June 1st. Applications can be found on our website. For more information on the MHFC, see this forum post. Unfortunately, we lack the ability to respond to every application.We are excited to find and support impactful organizations working on cost-effective and catalytic mental health interventions. We encourage you to apply, even if you think your project might be outside of our scope!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates from the Mental Health Funder's Circle, published by wtroy on February 24, 2023 on The Effective Altruism Forum.The Mental Health Funder’s Circle held its first grant round in the Fall/Winter of 2022. To those of you who applied for this round, we appreciate your patience. As this was our first round of funding, everything took longer than expected. We will iterate on the structure of the funding circle over time, and intend to develop a process that adds value for members and grantees alike.A total of $254,000 was distributed to the following three organizations:A matching grant of $44,000 to Vida Plena for their work on cost-effective community mental health in Ecuador.Two grants totaling $100,000 to Happier Lives Institute for their continued work on subjective wellbeing and cause prioritization research.Two grants totaling $110,000 to Rethink Wellbeing to support mental health initiatives for the EA community.Our next round of funding is now open, with initial 1-pagers due April 1st. After applications have been reviewed, we will contact promising grantees and make final funding decisions by June 1st. Applications can be found on our website. For more information on the MHFC, see this forum post. Unfortunately, we lack the ability to respond to every application.We are excited to find and support impactful organizations working on cost-effective and catalytic mental health interventions. We encourage you to apply, even if you think your project might be outside of our scope!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
wtroy https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:43 None full 5030
WB8yLjDDNuHaGdotM_NL_EA_EA EA - Make RCTs cheaper: smaller treatment, bigger control groups by Rory Fenton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Make RCTs cheaper: smaller treatment, bigger control groups, published by Rory Fenton on February 24, 2023 on The Effective Altruism Forum.Epistemic status: I think this is a statistical “fact” but I feel a bit cautious since so few people seem to take advantage of itSummaryIt may not always be optimal for cost or statistical power to have equal-sized treatment/control groups in a study. When your intervention is quite expensive relative to data collection, you can maximise statistical power or save costs by using a larger control group and smaller treatment group. The optimal ratio of treatment sample to control sample is just the square root of the cost per treatment participant divided by the square root of the cost per control participant.Why larger control groups seem betterStudies generally have equal numbers of treatment and control participants. This makes intuitive sense: a study with 500 treatment and 500 control will be more powerful than a study with 499 treatment and 501 control, for example. This is due to the diminishing power returns to increasing your sample size: the extra person removed from one arm hurts your power more than the extra person added to the other arm increases it.But what if your intervention is expensive relative to data collection? Perhaps you are studying a $720 cash transfer and it costs $80 to complete each survey, for a total cost of $800 per treatment participant ($720 + $80) and $80 per control. Now, for the same cost as 500 treatment and 500 control, you could have 499 treatment and 510 control, or 450 treatment and 1000 control: up to a point, the loss in precision from the smaller treatment is more than offset by the 10x larger increase in your control group, resulting in a more powerful study overall. In other words: when your treatment is expensive, it is generally more powerful to have a larger control group, because it's just so much cheaper to add control participants.How much larger? The exact ratio of treatment:control that optimises statistical power is surprisingly simple, it’s just the ratio of the square roots of the costs of adding to each arm i.e. sqrt(control_cost) : sqrt(treatment_cost) (See Appendix for justification). For example, if adding an extra treatment participant costs 16x more than adding a control participant, you should optimally have sqrt(16/1) = 4x as many control as treatment.Quantifying the benefitsWith this approach, you either get free extra power for the same money or save money without losing power. For example, let’s look at the hypothetical cash transfer study above with treatment participants costing $800 and control participants $80. The optimal ratio of control to treatment is then sqrt(800/80) = 3.2 :1, resulting in either:Saving money without losing power: the study is currently powered to measure an effect of 0.175 SD and, with 500 treatment and control, costs $440,000. With a 3.2 : 1 ratio (types furiously in Stata) you could achieve the same power with a sample of 337 treatment and 1079 control, which would cost $356,000: saving you a cool $84k without any loss of statistical power.Getting extra power for the same budget: alternatively, if you still want to spend the full $440k, you could then afford 416 treatment and 1,331 control, cutting your detectable effect from 0.175 SD to 0.155 SD at no extra cost.CaveatsEthics: there may be ethical reasons for not wanting a larger control group, for example in a medical trial where you would be denying potentially life-saving treatments to sick patients. Even outside of medicine, control participants’ time is important and you may wish to avoid “wasting” it on participating in your study (although you could use some of the savings to compensate control participants, if that won’t mess with your study).Necessarily limited ...]]>
Rory Fenton https://forum.effectivealtruism.org/posts/WB8yLjDDNuHaGdotM/make-rcts-cheaper-smaller-treatment-bigger-control-groups Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Make RCTs cheaper: smaller treatment, bigger control groups, published by Rory Fenton on February 24, 2023 on The Effective Altruism Forum.Epistemic status: I think this is a statistical “fact” but I feel a bit cautious since so few people seem to take advantage of itSummaryIt may not always be optimal for cost or statistical power to have equal-sized treatment/control groups in a study. When your intervention is quite expensive relative to data collection, you can maximise statistical power or save costs by using a larger control group and smaller treatment group. The optimal ratio of treatment sample to control sample is just the square root of the cost per treatment participant divided by the square root of the cost per control participant.Why larger control groups seem betterStudies generally have equal numbers of treatment and control participants. This makes intuitive sense: a study with 500 treatment and 500 control will be more powerful than a study with 499 treatment and 501 control, for example. This is due to the diminishing power returns to increasing your sample size: the extra person removed from one arm hurts your power more than the extra person added to the other arm increases it.But what if your intervention is expensive relative to data collection? Perhaps you are studying a $720 cash transfer and it costs $80 to complete each survey, for a total cost of $800 per treatment participant ($720 + $80) and $80 per control. Now, for the same cost as 500 treatment and 500 control, you could have 499 treatment and 510 control, or 450 treatment and 1000 control: up to a point, the loss in precision from the smaller treatment is more than offset by the 10x larger increase in your control group, resulting in a more powerful study overall. In other words: when your treatment is expensive, it is generally more powerful to have a larger control group, because it's just so much cheaper to add control participants.How much larger? The exact ratio of treatment:control that optimises statistical power is surprisingly simple, it’s just the ratio of the square roots of the costs of adding to each arm i.e. sqrt(control_cost) : sqrt(treatment_cost) (See Appendix for justification). For example, if adding an extra treatment participant costs 16x more than adding a control participant, you should optimally have sqrt(16/1) = 4x as many control as treatment.Quantifying the benefitsWith this approach, you either get free extra power for the same money or save money without losing power. For example, let’s look at the hypothetical cash transfer study above with treatment participants costing $800 and control participants $80. The optimal ratio of control to treatment is then sqrt(800/80) = 3.2 :1, resulting in either:Saving money without losing power: the study is currently powered to measure an effect of 0.175 SD and, with 500 treatment and control, costs $440,000. With a 3.2 : 1 ratio (types furiously in Stata) you could achieve the same power with a sample of 337 treatment and 1079 control, which would cost $356,000: saving you a cool $84k without any loss of statistical power.Getting extra power for the same budget: alternatively, if you still want to spend the full $440k, you could then afford 416 treatment and 1,331 control, cutting your detectable effect from 0.175 SD to 0.155 SD at no extra cost.CaveatsEthics: there may be ethical reasons for not wanting a larger control group, for example in a medical trial where you would be denying potentially life-saving treatments to sick patients. Even outside of medicine, control participants’ time is important and you may wish to avoid “wasting” it on participating in your study (although you could use some of the savings to compensate control participants, if that won’t mess with your study).Necessarily limited ...]]>
Sat, 25 Feb 2023 03:05:57 +0000 EA - Make RCTs cheaper: smaller treatment, bigger control groups by Rory Fenton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Make RCTs cheaper: smaller treatment, bigger control groups, published by Rory Fenton on February 24, 2023 on The Effective Altruism Forum.Epistemic status: I think this is a statistical “fact” but I feel a bit cautious since so few people seem to take advantage of itSummaryIt may not always be optimal for cost or statistical power to have equal-sized treatment/control groups in a study. When your intervention is quite expensive relative to data collection, you can maximise statistical power or save costs by using a larger control group and smaller treatment group. The optimal ratio of treatment sample to control sample is just the square root of the cost per treatment participant divided by the square root of the cost per control participant.Why larger control groups seem betterStudies generally have equal numbers of treatment and control participants. This makes intuitive sense: a study with 500 treatment and 500 control will be more powerful than a study with 499 treatment and 501 control, for example. This is due to the diminishing power returns to increasing your sample size: the extra person removed from one arm hurts your power more than the extra person added to the other arm increases it.But what if your intervention is expensive relative to data collection? Perhaps you are studying a $720 cash transfer and it costs $80 to complete each survey, for a total cost of $800 per treatment participant ($720 + $80) and $80 per control. Now, for the same cost as 500 treatment and 500 control, you could have 499 treatment and 510 control, or 450 treatment and 1000 control: up to a point, the loss in precision from the smaller treatment is more than offset by the 10x larger increase in your control group, resulting in a more powerful study overall. In other words: when your treatment is expensive, it is generally more powerful to have a larger control group, because it's just so much cheaper to add control participants.How much larger? The exact ratio of treatment:control that optimises statistical power is surprisingly simple, it’s just the ratio of the square roots of the costs of adding to each arm i.e. sqrt(control_cost) : sqrt(treatment_cost) (See Appendix for justification). For example, if adding an extra treatment participant costs 16x more than adding a control participant, you should optimally have sqrt(16/1) = 4x as many control as treatment.Quantifying the benefitsWith this approach, you either get free extra power for the same money or save money without losing power. For example, let’s look at the hypothetical cash transfer study above with treatment participants costing $800 and control participants $80. The optimal ratio of control to treatment is then sqrt(800/80) = 3.2 :1, resulting in either:Saving money without losing power: the study is currently powered to measure an effect of 0.175 SD and, with 500 treatment and control, costs $440,000. With a 3.2 : 1 ratio (types furiously in Stata) you could achieve the same power with a sample of 337 treatment and 1079 control, which would cost $356,000: saving you a cool $84k without any loss of statistical power.Getting extra power for the same budget: alternatively, if you still want to spend the full $440k, you could then afford 416 treatment and 1,331 control, cutting your detectable effect from 0.175 SD to 0.155 SD at no extra cost.CaveatsEthics: there may be ethical reasons for not wanting a larger control group, for example in a medical trial where you would be denying potentially life-saving treatments to sick patients. Even outside of medicine, control participants’ time is important and you may wish to avoid “wasting” it on participating in your study (although you could use some of the savings to compensate control participants, if that won’t mess with your study).Necessarily limited ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Make RCTs cheaper: smaller treatment, bigger control groups, published by Rory Fenton on February 24, 2023 on The Effective Altruism Forum.Epistemic status: I think this is a statistical “fact” but I feel a bit cautious since so few people seem to take advantage of itSummaryIt may not always be optimal for cost or statistical power to have equal-sized treatment/control groups in a study. When your intervention is quite expensive relative to data collection, you can maximise statistical power or save costs by using a larger control group and smaller treatment group. The optimal ratio of treatment sample to control sample is just the square root of the cost per treatment participant divided by the square root of the cost per control participant.Why larger control groups seem betterStudies generally have equal numbers of treatment and control participants. This makes intuitive sense: a study with 500 treatment and 500 control will be more powerful than a study with 499 treatment and 501 control, for example. This is due to the diminishing power returns to increasing your sample size: the extra person removed from one arm hurts your power more than the extra person added to the other arm increases it.But what if your intervention is expensive relative to data collection? Perhaps you are studying a $720 cash transfer and it costs $80 to complete each survey, for a total cost of $800 per treatment participant ($720 + $80) and $80 per control. Now, for the same cost as 500 treatment and 500 control, you could have 499 treatment and 510 control, or 450 treatment and 1000 control: up to a point, the loss in precision from the smaller treatment is more than offset by the 10x larger increase in your control group, resulting in a more powerful study overall. In other words: when your treatment is expensive, it is generally more powerful to have a larger control group, because it's just so much cheaper to add control participants.How much larger? The exact ratio of treatment:control that optimises statistical power is surprisingly simple, it’s just the ratio of the square roots of the costs of adding to each arm i.e. sqrt(control_cost) : sqrt(treatment_cost) (See Appendix for justification). For example, if adding an extra treatment participant costs 16x more than adding a control participant, you should optimally have sqrt(16/1) = 4x as many control as treatment.Quantifying the benefitsWith this approach, you either get free extra power for the same money or save money without losing power. For example, let’s look at the hypothetical cash transfer study above with treatment participants costing $800 and control participants $80. The optimal ratio of control to treatment is then sqrt(800/80) = 3.2 :1, resulting in either:Saving money without losing power: the study is currently powered to measure an effect of 0.175 SD and, with 500 treatment and control, costs $440,000. With a 3.2 : 1 ratio (types furiously in Stata) you could achieve the same power with a sample of 337 treatment and 1079 control, which would cost $356,000: saving you a cool $84k without any loss of statistical power.Getting extra power for the same budget: alternatively, if you still want to spend the full $440k, you could then afford 416 treatment and 1,331 control, cutting your detectable effect from 0.175 SD to 0.155 SD at no extra cost.CaveatsEthics: there may be ethical reasons for not wanting a larger control group, for example in a medical trial where you would be denying potentially life-saving treatments to sick patients. Even outside of medicine, control participants’ time is important and you may wish to avoid “wasting” it on participating in your study (although you could use some of the savings to compensate control participants, if that won’t mess with your study).Necessarily limited ...]]>
Rory Fenton https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:31 None full 5022
Sb2JPgpMkXxwZ3g4W_NL_EA_EA EA - On Philosophy Tube's Video on Effective Altruism by Jessica Wen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Philosophy Tube's Video on Effective Altruism, published by Jessica Wen on February 24, 2023 on The Effective Altruism Forum.tl;dr:I found Philosophy Tube's new video on EA enjoyable and the criticisms fair. I wrote out some thoughts on her criticisms. I would recommend a watch.BackgroundI’ve been into Abigail Thorn's channel Philosophy Tube for about as long as I’ve been into Effective Altruism. I currently co-direct High Impact Engineers, but this post is written from a personal standpoint and does not represent the views of High Impact Engineers. Philosophy Tube creates content explaining philosophy (and many aspects of Western culture) with a dramatic streak (think fantastic lighting and flashy outfits - yes please!). So when I found out that Philosophy Tube would be creating a video on Effective Altruism, I got very excited.I have written this almost chronologically and in a very short amount of time, so the quality and format may not be up to the normal standards of the EA Forum. I wanted to hash out my thoughts for my own understanding and to see what others thought.Content, Criticisms, and ContemplationsEA and SBFFirstly, Thorn outlines what EA is, and what’s happened over the past 6 months (FTX, a mention of the Time article, and other critical pieces) and essentially says that the leaders of the movement ignored what was happening on the ground in the community and didn’t listen to criticisms. Although I don’t think this was the only cause of the above scandals, I think there is some truth in Thorn’s analysis. I also disagree with the insinuation that Earning to Give is a bad strategy because it leads to SBF-type disasters: 80,000 Hours explicitly tells people to not take work that does harm even if you expect the positive outcome to outweigh the harmful means.EA and LongtermismIn the next section, Thorn discusses Longtermism, What We Owe the Future (WWOTF), and The Precipice. She mentions that there is no discussion of reproductive rights in a book about our duties to future people (which I see as an oversight – and not one that a woman would have made); she prefers The Precipice, which I agree is more detailed, considers more points of view, and is more persuasive. However, I think The Precipice is drier and less easy to read than WWOTF, the latter of which is aimed at a broader audience.There is a brief (and entertaining) illustration of Expected Value (EV) and the resulting extreme case of Pascal’s Mugging. Although MacAskill puts this to the side, Thorn goes deeper into the consequences of basing decisions on EV and the measurability bias that results – and she is right that although there is thinking done on how to overcome this in EA (she gives the example of Peter Singer’s The Most Good You Can Do, but also see this, this and this for examples of EAs thinking about tackling measurability bias), she mentions that this issue is never tackled by MacAskill. (She generalises this to EA philosophers, but isn't Singer one of the OG EA philosophers?)EA and ~The System~The last section is the most important criticism of EA. I think this section is most worth watching. Thorn mentions the classic leftist criticism of EA: it reinforces the 19th-century idea of philanthropy where people get rich and donate their money to avoid criticisms of how they got their money and doesn’t directly tackle the unfair system that privileges some people over others.Thorn brings Mr Beast into the discussion, and although she doesn’t explicitly say that he’s an EA, she uses Mr Beast as an example of how EA might see this as: “1000 people were blind yesterday and can see today – isn’t that a fact worth celebrating?”. The question that neither Mr Beast nor the hypothetical EA ask is: “how do we change the world?”. Changing the world, she implies, necessitates chang...]]>
Jessica Wen https://forum.effectivealtruism.org/posts/Sb2JPgpMkXxwZ3g4W/on-philosophy-tube-s-video-on-effective-altruism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Philosophy Tube's Video on Effective Altruism, published by Jessica Wen on February 24, 2023 on The Effective Altruism Forum.tl;dr:I found Philosophy Tube's new video on EA enjoyable and the criticisms fair. I wrote out some thoughts on her criticisms. I would recommend a watch.BackgroundI’ve been into Abigail Thorn's channel Philosophy Tube for about as long as I’ve been into Effective Altruism. I currently co-direct High Impact Engineers, but this post is written from a personal standpoint and does not represent the views of High Impact Engineers. Philosophy Tube creates content explaining philosophy (and many aspects of Western culture) with a dramatic streak (think fantastic lighting and flashy outfits - yes please!). So when I found out that Philosophy Tube would be creating a video on Effective Altruism, I got very excited.I have written this almost chronologically and in a very short amount of time, so the quality and format may not be up to the normal standards of the EA Forum. I wanted to hash out my thoughts for my own understanding and to see what others thought.Content, Criticisms, and ContemplationsEA and SBFFirstly, Thorn outlines what EA is, and what’s happened over the past 6 months (FTX, a mention of the Time article, and other critical pieces) and essentially says that the leaders of the movement ignored what was happening on the ground in the community and didn’t listen to criticisms. Although I don’t think this was the only cause of the above scandals, I think there is some truth in Thorn’s analysis. I also disagree with the insinuation that Earning to Give is a bad strategy because it leads to SBF-type disasters: 80,000 Hours explicitly tells people to not take work that does harm even if you expect the positive outcome to outweigh the harmful means.EA and LongtermismIn the next section, Thorn discusses Longtermism, What We Owe the Future (WWOTF), and The Precipice. She mentions that there is no discussion of reproductive rights in a book about our duties to future people (which I see as an oversight – and not one that a woman would have made); she prefers The Precipice, which I agree is more detailed, considers more points of view, and is more persuasive. However, I think The Precipice is drier and less easy to read than WWOTF, the latter of which is aimed at a broader audience.There is a brief (and entertaining) illustration of Expected Value (EV) and the resulting extreme case of Pascal’s Mugging. Although MacAskill puts this to the side, Thorn goes deeper into the consequences of basing decisions on EV and the measurability bias that results – and she is right that although there is thinking done on how to overcome this in EA (she gives the example of Peter Singer’s The Most Good You Can Do, but also see this, this and this for examples of EAs thinking about tackling measurability bias), she mentions that this issue is never tackled by MacAskill. (She generalises this to EA philosophers, but isn't Singer one of the OG EA philosophers?)EA and ~The System~The last section is the most important criticism of EA. I think this section is most worth watching. Thorn mentions the classic leftist criticism of EA: it reinforces the 19th-century idea of philanthropy where people get rich and donate their money to avoid criticisms of how they got their money and doesn’t directly tackle the unfair system that privileges some people over others.Thorn brings Mr Beast into the discussion, and although she doesn’t explicitly say that he’s an EA, she uses Mr Beast as an example of how EA might see this as: “1000 people were blind yesterday and can see today – isn’t that a fact worth celebrating?”. The question that neither Mr Beast nor the hypothetical EA ask is: “how do we change the world?”. Changing the world, she implies, necessitates chang...]]>
Sat, 25 Feb 2023 00:52:40 +0000 EA - On Philosophy Tube's Video on Effective Altruism by Jessica Wen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Philosophy Tube's Video on Effective Altruism, published by Jessica Wen on February 24, 2023 on The Effective Altruism Forum.tl;dr:I found Philosophy Tube's new video on EA enjoyable and the criticisms fair. I wrote out some thoughts on her criticisms. I would recommend a watch.BackgroundI’ve been into Abigail Thorn's channel Philosophy Tube for about as long as I’ve been into Effective Altruism. I currently co-direct High Impact Engineers, but this post is written from a personal standpoint and does not represent the views of High Impact Engineers. Philosophy Tube creates content explaining philosophy (and many aspects of Western culture) with a dramatic streak (think fantastic lighting and flashy outfits - yes please!). So when I found out that Philosophy Tube would be creating a video on Effective Altruism, I got very excited.I have written this almost chronologically and in a very short amount of time, so the quality and format may not be up to the normal standards of the EA Forum. I wanted to hash out my thoughts for my own understanding and to see what others thought.Content, Criticisms, and ContemplationsEA and SBFFirstly, Thorn outlines what EA is, and what’s happened over the past 6 months (FTX, a mention of the Time article, and other critical pieces) and essentially says that the leaders of the movement ignored what was happening on the ground in the community and didn’t listen to criticisms. Although I don’t think this was the only cause of the above scandals, I think there is some truth in Thorn’s analysis. I also disagree with the insinuation that Earning to Give is a bad strategy because it leads to SBF-type disasters: 80,000 Hours explicitly tells people to not take work that does harm even if you expect the positive outcome to outweigh the harmful means.EA and LongtermismIn the next section, Thorn discusses Longtermism, What We Owe the Future (WWOTF), and The Precipice. She mentions that there is no discussion of reproductive rights in a book about our duties to future people (which I see as an oversight – and not one that a woman would have made); she prefers The Precipice, which I agree is more detailed, considers more points of view, and is more persuasive. However, I think The Precipice is drier and less easy to read than WWOTF, the latter of which is aimed at a broader audience.There is a brief (and entertaining) illustration of Expected Value (EV) and the resulting extreme case of Pascal’s Mugging. Although MacAskill puts this to the side, Thorn goes deeper into the consequences of basing decisions on EV and the measurability bias that results – and she is right that although there is thinking done on how to overcome this in EA (she gives the example of Peter Singer’s The Most Good You Can Do, but also see this, this and this for examples of EAs thinking about tackling measurability bias), she mentions that this issue is never tackled by MacAskill. (She generalises this to EA philosophers, but isn't Singer one of the OG EA philosophers?)EA and ~The System~The last section is the most important criticism of EA. I think this section is most worth watching. Thorn mentions the classic leftist criticism of EA: it reinforces the 19th-century idea of philanthropy where people get rich and donate their money to avoid criticisms of how they got their money and doesn’t directly tackle the unfair system that privileges some people over others.Thorn brings Mr Beast into the discussion, and although she doesn’t explicitly say that he’s an EA, she uses Mr Beast as an example of how EA might see this as: “1000 people were blind yesterday and can see today – isn’t that a fact worth celebrating?”. The question that neither Mr Beast nor the hypothetical EA ask is: “how do we change the world?”. Changing the world, she implies, necessitates chang...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Philosophy Tube's Video on Effective Altruism, published by Jessica Wen on February 24, 2023 on The Effective Altruism Forum.tl;dr:I found Philosophy Tube's new video on EA enjoyable and the criticisms fair. I wrote out some thoughts on her criticisms. I would recommend a watch.BackgroundI’ve been into Abigail Thorn's channel Philosophy Tube for about as long as I’ve been into Effective Altruism. I currently co-direct High Impact Engineers, but this post is written from a personal standpoint and does not represent the views of High Impact Engineers. Philosophy Tube creates content explaining philosophy (and many aspects of Western culture) with a dramatic streak (think fantastic lighting and flashy outfits - yes please!). So when I found out that Philosophy Tube would be creating a video on Effective Altruism, I got very excited.I have written this almost chronologically and in a very short amount of time, so the quality and format may not be up to the normal standards of the EA Forum. I wanted to hash out my thoughts for my own understanding and to see what others thought.Content, Criticisms, and ContemplationsEA and SBFFirstly, Thorn outlines what EA is, and what’s happened over the past 6 months (FTX, a mention of the Time article, and other critical pieces) and essentially says that the leaders of the movement ignored what was happening on the ground in the community and didn’t listen to criticisms. Although I don’t think this was the only cause of the above scandals, I think there is some truth in Thorn’s analysis. I also disagree with the insinuation that Earning to Give is a bad strategy because it leads to SBF-type disasters: 80,000 Hours explicitly tells people to not take work that does harm even if you expect the positive outcome to outweigh the harmful means.EA and LongtermismIn the next section, Thorn discusses Longtermism, What We Owe the Future (WWOTF), and The Precipice. She mentions that there is no discussion of reproductive rights in a book about our duties to future people (which I see as an oversight – and not one that a woman would have made); she prefers The Precipice, which I agree is more detailed, considers more points of view, and is more persuasive. However, I think The Precipice is drier and less easy to read than WWOTF, the latter of which is aimed at a broader audience.There is a brief (and entertaining) illustration of Expected Value (EV) and the resulting extreme case of Pascal’s Mugging. Although MacAskill puts this to the side, Thorn goes deeper into the consequences of basing decisions on EV and the measurability bias that results – and she is right that although there is thinking done on how to overcome this in EA (she gives the example of Peter Singer’s The Most Good You Can Do, but also see this, this and this for examples of EAs thinking about tackling measurability bias), she mentions that this issue is never tackled by MacAskill. (She generalises this to EA philosophers, but isn't Singer one of the OG EA philosophers?)EA and ~The System~The last section is the most important criticism of EA. I think this section is most worth watching. Thorn mentions the classic leftist criticism of EA: it reinforces the 19th-century idea of philanthropy where people get rich and donate their money to avoid criticisms of how they got their money and doesn’t directly tackle the unfair system that privileges some people over others.Thorn brings Mr Beast into the discussion, and although she doesn’t explicitly say that he’s an EA, she uses Mr Beast as an example of how EA might see this as: “1000 people were blind yesterday and can see today – isn’t that a fact worth celebrating?”. The question that neither Mr Beast nor the hypothetical EA ask is: “how do we change the world?”. Changing the world, she implies, necessitates chang...]]>
Jessica Wen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:40 None full 5023
cTfEv6zAakfyxrbQu_NL_EA_EA EA - EA content in French: Announcing EA France’s translation project and our translation coordination initiative by Louise Verkin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA content in French: Announcing EA France’s translation project and our translation coordination initiative, published by Louise Verkin on February 24, 2023 on The Effective Altruism Forum.We’re happy to announce that thanks to a grant from Open Philanthropy, EA France has been translating core EA content into French.EA France is also coordinating EA ENFR translation efforts: if you’re translating EA content from English to French or considering it, please contact me so we can check that there is no duplicated effort and provide support!EA France’s translation projectWith Open Philanthropy’s grant, we hired professional translators to translate 16 articles, totalling ~67,000 words. Their work is being reviewed by volunteers from the French EA community.Articles being translatedWhat is effective altruism? (translation available at Qu’est-ce que l’altruisme efficace ?)Comparing Charities (translation available at Y a-t-il des organisations caritatives plus efficaces que d’autres ?)Expected ValueOn CaringFour Ideas You Already Agree WithThe Parable Of The Boy Who Cried 5% Chance Of WolfPreventing an AI-related catastropheThe case for reducing existential risksPreventing catastrophic pandemicsClimate changeNuclear securityThe “most important century” blog series (summary)Counterfactual impactNeglectedness and impactCause profile: Global health and developmentThis is your most important decision (career “start here”)(See the appendix for other translation projects from English to French, and for existing translations.)All content translated as part of the EA France translation project will be released on the EA France blog.It is also available for use by other French-speaking communities, provided that they 1) cite original writers, 2) link EA France’s translations, 3) notify EA France at contact@altruismeefficacefrance.org.We’re very happy that more EA content will be available to French speakers, and we hope that it will make outreach efforts significantly easier!Translation Coordination InitiativeNow that several translation projects exist, it’s essential that we have a way to coordinate so that:we don’t duplicate effort (translating the same content twice),we agree on a common vocabulary (so the same term doesn’t get translated in 3 different ways, which makes it needlessly confusing for readers),EA France can provide support to all projects (e.g. sharing translations once they’re published, helping with editing, hosting translated works).The coordination initiative consists of:a master spreadsheet which lists all existing projects, and what they’re translating,a glossary of existing translations that translators and editors can refer to.Both items are accessible upon request.AppendixWhat other translation projects exist?There are at least two other ongoing projects contributing to this overall effort, feeding the glossary and monitored in the master spreadsheet:translating the 80,000 Hours Career Guide, led by Théo Knopfer (funded by a grant from Open Philanthropy),translating the EA Handbook, led by Baptiste Roucau (also funded by a grant from Open Philanthropy).What translations are already available in French?Longtermism: An Introduction (Les trois hypothèses du long-termisme, translation by Antonin Broi)500 Million, But Not A Single One More (500 millions, mais pas un de plus, translation by Jérémy Perret)The lack of controversy over well-targeted aid (L’absence de controverse au sujet de l’aide ciblée, translation by Eve-Léana Angot)Framing Effective Altruism as Overcoming Indifference (Surmonter l’indifférence, translation by Flavien P)Efficient Charity — Do Unto Others (Une charité efficace — Faire au profit des autres, translation by Guillaume Thomas)How did you choose what to translate?We used the following cri...]]>
Louise Verkin https://forum.effectivealtruism.org/posts/cTfEv6zAakfyxrbQu/ea-content-in-french-announcing-ea-france-s-translation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA content in French: Announcing EA France’s translation project and our translation coordination initiative, published by Louise Verkin on February 24, 2023 on The Effective Altruism Forum.We’re happy to announce that thanks to a grant from Open Philanthropy, EA France has been translating core EA content into French.EA France is also coordinating EA ENFR translation efforts: if you’re translating EA content from English to French or considering it, please contact me so we can check that there is no duplicated effort and provide support!EA France’s translation projectWith Open Philanthropy’s grant, we hired professional translators to translate 16 articles, totalling ~67,000 words. Their work is being reviewed by volunteers from the French EA community.Articles being translatedWhat is effective altruism? (translation available at Qu’est-ce que l’altruisme efficace ?)Comparing Charities (translation available at Y a-t-il des organisations caritatives plus efficaces que d’autres ?)Expected ValueOn CaringFour Ideas You Already Agree WithThe Parable Of The Boy Who Cried 5% Chance Of WolfPreventing an AI-related catastropheThe case for reducing existential risksPreventing catastrophic pandemicsClimate changeNuclear securityThe “most important century” blog series (summary)Counterfactual impactNeglectedness and impactCause profile: Global health and developmentThis is your most important decision (career “start here”)(See the appendix for other translation projects from English to French, and for existing translations.)All content translated as part of the EA France translation project will be released on the EA France blog.It is also available for use by other French-speaking communities, provided that they 1) cite original writers, 2) link EA France’s translations, 3) notify EA France at contact@altruismeefficacefrance.org.We’re very happy that more EA content will be available to French speakers, and we hope that it will make outreach efforts significantly easier!Translation Coordination InitiativeNow that several translation projects exist, it’s essential that we have a way to coordinate so that:we don’t duplicate effort (translating the same content twice),we agree on a common vocabulary (so the same term doesn’t get translated in 3 different ways, which makes it needlessly confusing for readers),EA France can provide support to all projects (e.g. sharing translations once they’re published, helping with editing, hosting translated works).The coordination initiative consists of:a master spreadsheet which lists all existing projects, and what they’re translating,a glossary of existing translations that translators and editors can refer to.Both items are accessible upon request.AppendixWhat other translation projects exist?There are at least two other ongoing projects contributing to this overall effort, feeding the glossary and monitored in the master spreadsheet:translating the 80,000 Hours Career Guide, led by Théo Knopfer (funded by a grant from Open Philanthropy),translating the EA Handbook, led by Baptiste Roucau (also funded by a grant from Open Philanthropy).What translations are already available in French?Longtermism: An Introduction (Les trois hypothèses du long-termisme, translation by Antonin Broi)500 Million, But Not A Single One More (500 millions, mais pas un de plus, translation by Jérémy Perret)The lack of controversy over well-targeted aid (L’absence de controverse au sujet de l’aide ciblée, translation by Eve-Léana Angot)Framing Effective Altruism as Overcoming Indifference (Surmonter l’indifférence, translation by Flavien P)Efficient Charity — Do Unto Others (Une charité efficace — Faire au profit des autres, translation by Guillaume Thomas)How did you choose what to translate?We used the following cri...]]>
Fri, 24 Feb 2023 23:41:29 +0000 EA - EA content in French: Announcing EA France’s translation project and our translation coordination initiative by Louise Verkin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA content in French: Announcing EA France’s translation project and our translation coordination initiative, published by Louise Verkin on February 24, 2023 on The Effective Altruism Forum.We’re happy to announce that thanks to a grant from Open Philanthropy, EA France has been translating core EA content into French.EA France is also coordinating EA ENFR translation efforts: if you’re translating EA content from English to French or considering it, please contact me so we can check that there is no duplicated effort and provide support!EA France’s translation projectWith Open Philanthropy’s grant, we hired professional translators to translate 16 articles, totalling ~67,000 words. Their work is being reviewed by volunteers from the French EA community.Articles being translatedWhat is effective altruism? (translation available at Qu’est-ce que l’altruisme efficace ?)Comparing Charities (translation available at Y a-t-il des organisations caritatives plus efficaces que d’autres ?)Expected ValueOn CaringFour Ideas You Already Agree WithThe Parable Of The Boy Who Cried 5% Chance Of WolfPreventing an AI-related catastropheThe case for reducing existential risksPreventing catastrophic pandemicsClimate changeNuclear securityThe “most important century” blog series (summary)Counterfactual impactNeglectedness and impactCause profile: Global health and developmentThis is your most important decision (career “start here”)(See the appendix for other translation projects from English to French, and for existing translations.)All content translated as part of the EA France translation project will be released on the EA France blog.It is also available for use by other French-speaking communities, provided that they 1) cite original writers, 2) link EA France’s translations, 3) notify EA France at contact@altruismeefficacefrance.org.We’re very happy that more EA content will be available to French speakers, and we hope that it will make outreach efforts significantly easier!Translation Coordination InitiativeNow that several translation projects exist, it’s essential that we have a way to coordinate so that:we don’t duplicate effort (translating the same content twice),we agree on a common vocabulary (so the same term doesn’t get translated in 3 different ways, which makes it needlessly confusing for readers),EA France can provide support to all projects (e.g. sharing translations once they’re published, helping with editing, hosting translated works).The coordination initiative consists of:a master spreadsheet which lists all existing projects, and what they’re translating,a glossary of existing translations that translators and editors can refer to.Both items are accessible upon request.AppendixWhat other translation projects exist?There are at least two other ongoing projects contributing to this overall effort, feeding the glossary and monitored in the master spreadsheet:translating the 80,000 Hours Career Guide, led by Théo Knopfer (funded by a grant from Open Philanthropy),translating the EA Handbook, led by Baptiste Roucau (also funded by a grant from Open Philanthropy).What translations are already available in French?Longtermism: An Introduction (Les trois hypothèses du long-termisme, translation by Antonin Broi)500 Million, But Not A Single One More (500 millions, mais pas un de plus, translation by Jérémy Perret)The lack of controversy over well-targeted aid (L’absence de controverse au sujet de l’aide ciblée, translation by Eve-Léana Angot)Framing Effective Altruism as Overcoming Indifference (Surmonter l’indifférence, translation by Flavien P)Efficient Charity — Do Unto Others (Une charité efficace — Faire au profit des autres, translation by Guillaume Thomas)How did you choose what to translate?We used the following cri...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA content in French: Announcing EA France’s translation project and our translation coordination initiative, published by Louise Verkin on February 24, 2023 on The Effective Altruism Forum.We’re happy to announce that thanks to a grant from Open Philanthropy, EA France has been translating core EA content into French.EA France is also coordinating EA ENFR translation efforts: if you’re translating EA content from English to French or considering it, please contact me so we can check that there is no duplicated effort and provide support!EA France’s translation projectWith Open Philanthropy’s grant, we hired professional translators to translate 16 articles, totalling ~67,000 words. Their work is being reviewed by volunteers from the French EA community.Articles being translatedWhat is effective altruism? (translation available at Qu’est-ce que l’altruisme efficace ?)Comparing Charities (translation available at Y a-t-il des organisations caritatives plus efficaces que d’autres ?)Expected ValueOn CaringFour Ideas You Already Agree WithThe Parable Of The Boy Who Cried 5% Chance Of WolfPreventing an AI-related catastropheThe case for reducing existential risksPreventing catastrophic pandemicsClimate changeNuclear securityThe “most important century” blog series (summary)Counterfactual impactNeglectedness and impactCause profile: Global health and developmentThis is your most important decision (career “start here”)(See the appendix for other translation projects from English to French, and for existing translations.)All content translated as part of the EA France translation project will be released on the EA France blog.It is also available for use by other French-speaking communities, provided that they 1) cite original writers, 2) link EA France’s translations, 3) notify EA France at contact@altruismeefficacefrance.org.We’re very happy that more EA content will be available to French speakers, and we hope that it will make outreach efforts significantly easier!Translation Coordination InitiativeNow that several translation projects exist, it’s essential that we have a way to coordinate so that:we don’t duplicate effort (translating the same content twice),we agree on a common vocabulary (so the same term doesn’t get translated in 3 different ways, which makes it needlessly confusing for readers),EA France can provide support to all projects (e.g. sharing translations once they’re published, helping with editing, hosting translated works).The coordination initiative consists of:a master spreadsheet which lists all existing projects, and what they’re translating,a glossary of existing translations that translators and editors can refer to.Both items are accessible upon request.AppendixWhat other translation projects exist?There are at least two other ongoing projects contributing to this overall effort, feeding the glossary and monitored in the master spreadsheet:translating the 80,000 Hours Career Guide, led by Théo Knopfer (funded by a grant from Open Philanthropy),translating the EA Handbook, led by Baptiste Roucau (also funded by a grant from Open Philanthropy).What translations are already available in French?Longtermism: An Introduction (Les trois hypothèses du long-termisme, translation by Antonin Broi)500 Million, But Not A Single One More (500 millions, mais pas un de plus, translation by Jérémy Perret)The lack of controversy over well-targeted aid (L’absence de controverse au sujet de l’aide ciblée, translation by Eve-Léana Angot)Framing Effective Altruism as Overcoming Indifference (Surmonter l’indifférence, translation by Flavien P)Efficient Charity — Do Unto Others (Une charité efficace — Faire au profit des autres, translation by Guillaume Thomas)How did you choose what to translate?We used the following cri...]]>
Louise Verkin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:56 None full 5024
gr4epkwe5WoYJXF32_NL_EA_EA EA - Why I don’t agree with HLI’s estimate of household spillovers from therapy by JamesSnowden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I don’t agree with HLI’s estimate of household spillovers from therapy, published by JamesSnowden on February 24, 2023 on The Effective Altruism Forum.SummaryIn its cost-effectiveness estimate of StrongMinds, Happier Lives Institute (HLI) estimates that most of the benefits accrue not to the women who receive therapy, but to household members.According to HLI’s estimates, each household member benefits from the intervention ~50% as much as the person receiving therapy. Because there are ~5 non-recipient household members per treated person, this estimate increases the cost-effectiveness estimate by ~250%. i.e. ~70-80% of the benefits of therapy accrue to household members, rather than the program participant.I don’t think the existing evidence justifies HLI's estimate of 50% household spillovers.My main disagreements are:Two of the three RCTs HLI relies on to estimate spillovers are on interventions specifically intended to benefit household members (unlike StrongMinds’ program, which targets women and adolescents living with depression).Those RCTs only measure the wellbeing of a subset of household members most likely to benefit from the intervention.The results of the third RCT are inconsistent with HLI’s estimate.I’d guess the spillover benefit to other household members is more likely to be in the 5-25% range (though this is speculative). That reduces the estimated cost-effectiveness of StrongMinds from 9x to 3-6x cash transfers, which would be below GiveWell’s funding bar of 10x. Caveat in footnote.I think I also disagree with other parts of HLI’s analysis (including how worried to be about reporting bias; the costs of StrongMinds’ program; and the point on a life satisfaction scale that’s morally equivalent to death). I’d guess, though I’m not certain, that more careful consideration of each of these would reduce StrongMinds’ cost-effectiveness estimate further relative to other opportunities. But I’m going to focus on spillovers in this post because I think it makes the most difference to the bottom line, represents the clearest issue to me, and has received relatively little attention in other critiques.For context: I wrote the first version of Founders Pledge’s mental health report in 2017 and gave feedback on an early draft of HLI’s report on household spillovers. I’ve spent 5-10 hours digging into the question of household spillovers from therapy specifically. I work at Open Philanthropy but wrote this post in a personal capacity. I’m reasonably confident the main critiques in this post are right, but much less confident in what the true magnitude of household spillovers is. I admire the work StrongMinds is doing and I’m grateful to HLI for their expansive literature reviews and analysis on this question.Thank you to Joel McGuire, Akhil Bansal, Isabel Arjmand, Alex Cohen, Sjir Hoeijmakers, Josh Rosenberg, and Matt Lerner for their insightful comments. They don’t necessarily endorse the conclusions of this post.0. How HLI estimates the household spillover rate of therapyHLI estimates household spillovers of therapy on the basis of the three RCTs on therapy which collected data on the subjective wellbeing of some of the household members of program participants: Mutamba et al. (2018), Swartz et al. (2008), Kemp et al. (2009).Combining those RCTs in a meta-analysis, HLI estimates household spillover rates of 53% (see the forest plot below; 53% comes from dividing the average household member effect (0.35) by the average recipient effect (0.66)).HLI assumes StrongMinds’ intervention will have a similar effect on household members. But, I don't think these three RCTs can be used to generate a reliable estimate for the spillovers of StrongMinds' program for three reasons.1. Two of the three RCTs HLI relies on to estimate spillovers are on in...]]>
JamesSnowden https://forum.effectivealtruism.org/posts/gr4epkwe5WoYJXF32/why-i-don-t-agree-with-hli-s-estimate-of-household Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I don’t agree with HLI’s estimate of household spillovers from therapy, published by JamesSnowden on February 24, 2023 on The Effective Altruism Forum.SummaryIn its cost-effectiveness estimate of StrongMinds, Happier Lives Institute (HLI) estimates that most of the benefits accrue not to the women who receive therapy, but to household members.According to HLI’s estimates, each household member benefits from the intervention ~50% as much as the person receiving therapy. Because there are ~5 non-recipient household members per treated person, this estimate increases the cost-effectiveness estimate by ~250%. i.e. ~70-80% of the benefits of therapy accrue to household members, rather than the program participant.I don’t think the existing evidence justifies HLI's estimate of 50% household spillovers.My main disagreements are:Two of the three RCTs HLI relies on to estimate spillovers are on interventions specifically intended to benefit household members (unlike StrongMinds’ program, which targets women and adolescents living with depression).Those RCTs only measure the wellbeing of a subset of household members most likely to benefit from the intervention.The results of the third RCT are inconsistent with HLI’s estimate.I’d guess the spillover benefit to other household members is more likely to be in the 5-25% range (though this is speculative). That reduces the estimated cost-effectiveness of StrongMinds from 9x to 3-6x cash transfers, which would be below GiveWell’s funding bar of 10x. Caveat in footnote.I think I also disagree with other parts of HLI’s analysis (including how worried to be about reporting bias; the costs of StrongMinds’ program; and the point on a life satisfaction scale that’s morally equivalent to death). I’d guess, though I’m not certain, that more careful consideration of each of these would reduce StrongMinds’ cost-effectiveness estimate further relative to other opportunities. But I’m going to focus on spillovers in this post because I think it makes the most difference to the bottom line, represents the clearest issue to me, and has received relatively little attention in other critiques.For context: I wrote the first version of Founders Pledge’s mental health report in 2017 and gave feedback on an early draft of HLI’s report on household spillovers. I’ve spent 5-10 hours digging into the question of household spillovers from therapy specifically. I work at Open Philanthropy but wrote this post in a personal capacity. I’m reasonably confident the main critiques in this post are right, but much less confident in what the true magnitude of household spillovers is. I admire the work StrongMinds is doing and I’m grateful to HLI for their expansive literature reviews and analysis on this question.Thank you to Joel McGuire, Akhil Bansal, Isabel Arjmand, Alex Cohen, Sjir Hoeijmakers, Josh Rosenberg, and Matt Lerner for their insightful comments. They don’t necessarily endorse the conclusions of this post.0. How HLI estimates the household spillover rate of therapyHLI estimates household spillovers of therapy on the basis of the three RCTs on therapy which collected data on the subjective wellbeing of some of the household members of program participants: Mutamba et al. (2018), Swartz et al. (2008), Kemp et al. (2009).Combining those RCTs in a meta-analysis, HLI estimates household spillover rates of 53% (see the forest plot below; 53% comes from dividing the average household member effect (0.35) by the average recipient effect (0.66)).HLI assumes StrongMinds’ intervention will have a similar effect on household members. But, I don't think these three RCTs can be used to generate a reliable estimate for the spillovers of StrongMinds' program for three reasons.1. Two of the three RCTs HLI relies on to estimate spillovers are on in...]]>
Fri, 24 Feb 2023 19:59:00 +0000 EA - Why I don’t agree with HLI’s estimate of household spillovers from therapy by JamesSnowden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I don’t agree with HLI’s estimate of household spillovers from therapy, published by JamesSnowden on February 24, 2023 on The Effective Altruism Forum.SummaryIn its cost-effectiveness estimate of StrongMinds, Happier Lives Institute (HLI) estimates that most of the benefits accrue not to the women who receive therapy, but to household members.According to HLI’s estimates, each household member benefits from the intervention ~50% as much as the person receiving therapy. Because there are ~5 non-recipient household members per treated person, this estimate increases the cost-effectiveness estimate by ~250%. i.e. ~70-80% of the benefits of therapy accrue to household members, rather than the program participant.I don’t think the existing evidence justifies HLI's estimate of 50% household spillovers.My main disagreements are:Two of the three RCTs HLI relies on to estimate spillovers are on interventions specifically intended to benefit household members (unlike StrongMinds’ program, which targets women and adolescents living with depression).Those RCTs only measure the wellbeing of a subset of household members most likely to benefit from the intervention.The results of the third RCT are inconsistent with HLI’s estimate.I’d guess the spillover benefit to other household members is more likely to be in the 5-25% range (though this is speculative). That reduces the estimated cost-effectiveness of StrongMinds from 9x to 3-6x cash transfers, which would be below GiveWell’s funding bar of 10x. Caveat in footnote.I think I also disagree with other parts of HLI’s analysis (including how worried to be about reporting bias; the costs of StrongMinds’ program; and the point on a life satisfaction scale that’s morally equivalent to death). I’d guess, though I’m not certain, that more careful consideration of each of these would reduce StrongMinds’ cost-effectiveness estimate further relative to other opportunities. But I’m going to focus on spillovers in this post because I think it makes the most difference to the bottom line, represents the clearest issue to me, and has received relatively little attention in other critiques.For context: I wrote the first version of Founders Pledge’s mental health report in 2017 and gave feedback on an early draft of HLI’s report on household spillovers. I’ve spent 5-10 hours digging into the question of household spillovers from therapy specifically. I work at Open Philanthropy but wrote this post in a personal capacity. I’m reasonably confident the main critiques in this post are right, but much less confident in what the true magnitude of household spillovers is. I admire the work StrongMinds is doing and I’m grateful to HLI for their expansive literature reviews and analysis on this question.Thank you to Joel McGuire, Akhil Bansal, Isabel Arjmand, Alex Cohen, Sjir Hoeijmakers, Josh Rosenberg, and Matt Lerner for their insightful comments. They don’t necessarily endorse the conclusions of this post.0. How HLI estimates the household spillover rate of therapyHLI estimates household spillovers of therapy on the basis of the three RCTs on therapy which collected data on the subjective wellbeing of some of the household members of program participants: Mutamba et al. (2018), Swartz et al. (2008), Kemp et al. (2009).Combining those RCTs in a meta-analysis, HLI estimates household spillover rates of 53% (see the forest plot below; 53% comes from dividing the average household member effect (0.35) by the average recipient effect (0.66)).HLI assumes StrongMinds’ intervention will have a similar effect on household members. But, I don't think these three RCTs can be used to generate a reliable estimate for the spillovers of StrongMinds' program for three reasons.1. Two of the three RCTs HLI relies on to estimate spillovers are on in...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I don’t agree with HLI’s estimate of household spillovers from therapy, published by JamesSnowden on February 24, 2023 on The Effective Altruism Forum.SummaryIn its cost-effectiveness estimate of StrongMinds, Happier Lives Institute (HLI) estimates that most of the benefits accrue not to the women who receive therapy, but to household members.According to HLI’s estimates, each household member benefits from the intervention ~50% as much as the person receiving therapy. Because there are ~5 non-recipient household members per treated person, this estimate increases the cost-effectiveness estimate by ~250%. i.e. ~70-80% of the benefits of therapy accrue to household members, rather than the program participant.I don’t think the existing evidence justifies HLI's estimate of 50% household spillovers.My main disagreements are:Two of the three RCTs HLI relies on to estimate spillovers are on interventions specifically intended to benefit household members (unlike StrongMinds’ program, which targets women and adolescents living with depression).Those RCTs only measure the wellbeing of a subset of household members most likely to benefit from the intervention.The results of the third RCT are inconsistent with HLI’s estimate.I’d guess the spillover benefit to other household members is more likely to be in the 5-25% range (though this is speculative). That reduces the estimated cost-effectiveness of StrongMinds from 9x to 3-6x cash transfers, which would be below GiveWell’s funding bar of 10x. Caveat in footnote.I think I also disagree with other parts of HLI’s analysis (including how worried to be about reporting bias; the costs of StrongMinds’ program; and the point on a life satisfaction scale that’s morally equivalent to death). I’d guess, though I’m not certain, that more careful consideration of each of these would reduce StrongMinds’ cost-effectiveness estimate further relative to other opportunities. But I’m going to focus on spillovers in this post because I think it makes the most difference to the bottom line, represents the clearest issue to me, and has received relatively little attention in other critiques.For context: I wrote the first version of Founders Pledge’s mental health report in 2017 and gave feedback on an early draft of HLI’s report on household spillovers. I’ve spent 5-10 hours digging into the question of household spillovers from therapy specifically. I work at Open Philanthropy but wrote this post in a personal capacity. I’m reasonably confident the main critiques in this post are right, but much less confident in what the true magnitude of household spillovers is. I admire the work StrongMinds is doing and I’m grateful to HLI for their expansive literature reviews and analysis on this question.Thank you to Joel McGuire, Akhil Bansal, Isabel Arjmand, Alex Cohen, Sjir Hoeijmakers, Josh Rosenberg, and Matt Lerner for their insightful comments. They don’t necessarily endorse the conclusions of this post.0. How HLI estimates the household spillover rate of therapyHLI estimates household spillovers of therapy on the basis of the three RCTs on therapy which collected data on the subjective wellbeing of some of the household members of program participants: Mutamba et al. (2018), Swartz et al. (2008), Kemp et al. (2009).Combining those RCTs in a meta-analysis, HLI estimates household spillover rates of 53% (see the forest plot below; 53% comes from dividing the average household member effect (0.35) by the average recipient effect (0.66)).HLI assumes StrongMinds’ intervention will have a similar effect on household members. But, I don't think these three RCTs can be used to generate a reliable estimate for the spillovers of StrongMinds' program for three reasons.1. Two of the three RCTs HLI relies on to estimate spillovers are on in...]]>
JamesSnowden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:49 None full 5013
4bPjDbxkYMCAdqPCv_NL_EA_EA EA - Manifund Impact Market / Mini-Grants Round On Forecasting by Scott Alexander Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Manifund Impact Market / Mini-Grants Round On Forecasting, published by Scott Alexander on February 24, 2023 on The Effective Altruism Forum.A team associated with Manifold Markets has created a prototype market for minting and trading impact certificates.To help test it out, I'm sponsoring a $20,000 grants round, restricted to forecasting-related projects only (to keep it small - sorry, everyone else). You can read the details at the Astral Codex Ten post. If you have a forecasting-related project idea for less than that amount of money, consider reading the post and creating a Manifund account and minting an impact certificate for it.If you're an accredited investor, you can buy and sell impact certificates. Read the post, create a Manifund account, send them enough financial information to confirm your accreditation, and start buying and selling.If you have a non-forecasting related project, you can try using the platform, but you won't be eligible for this grants round and you'll have to find your own oracular funding. We wouldn't recommend this unless you know exactly what you're doing.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Scott Alexander https://forum.effectivealtruism.org/posts/4bPjDbxkYMCAdqPCv/manifund-impact-market-mini-grants-round-on-forecasting Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Manifund Impact Market / Mini-Grants Round On Forecasting, published by Scott Alexander on February 24, 2023 on The Effective Altruism Forum.A team associated with Manifold Markets has created a prototype market for minting and trading impact certificates.To help test it out, I'm sponsoring a $20,000 grants round, restricted to forecasting-related projects only (to keep it small - sorry, everyone else). You can read the details at the Astral Codex Ten post. If you have a forecasting-related project idea for less than that amount of money, consider reading the post and creating a Manifund account and minting an impact certificate for it.If you're an accredited investor, you can buy and sell impact certificates. Read the post, create a Manifund account, send them enough financial information to confirm your accreditation, and start buying and selling.If you have a non-forecasting related project, you can try using the platform, but you won't be eligible for this grants round and you'll have to find your own oracular funding. We wouldn't recommend this unless you know exactly what you're doing.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 24 Feb 2023 19:13:56 +0000 EA - Manifund Impact Market / Mini-Grants Round On Forecasting by Scott Alexander Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Manifund Impact Market / Mini-Grants Round On Forecasting, published by Scott Alexander on February 24, 2023 on The Effective Altruism Forum.A team associated with Manifold Markets has created a prototype market for minting and trading impact certificates.To help test it out, I'm sponsoring a $20,000 grants round, restricted to forecasting-related projects only (to keep it small - sorry, everyone else). You can read the details at the Astral Codex Ten post. If you have a forecasting-related project idea for less than that amount of money, consider reading the post and creating a Manifund account and minting an impact certificate for it.If you're an accredited investor, you can buy and sell impact certificates. Read the post, create a Manifund account, send them enough financial information to confirm your accreditation, and start buying and selling.If you have a non-forecasting related project, you can try using the platform, but you won't be eligible for this grants round and you'll have to find your own oracular funding. We wouldn't recommend this unless you know exactly what you're doing.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Manifund Impact Market / Mini-Grants Round On Forecasting, published by Scott Alexander on February 24, 2023 on The Effective Altruism Forum.A team associated with Manifold Markets has created a prototype market for minting and trading impact certificates.To help test it out, I'm sponsoring a $20,000 grants round, restricted to forecasting-related projects only (to keep it small - sorry, everyone else). You can read the details at the Astral Codex Ten post. If you have a forecasting-related project idea for less than that amount of money, consider reading the post and creating a Manifund account and minting an impact certificate for it.If you're an accredited investor, you can buy and sell impact certificates. Read the post, create a Manifund account, send them enough financial information to confirm your accreditation, and start buying and selling.If you have a non-forecasting related project, you can try using the platform, but you won't be eligible for this grants round and you'll have to find your own oracular funding. We wouldn't recommend this unless you know exactly what you're doing.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Scott Alexander https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:19 None full 5015
7LqFyJWxGCZZXte3N_NL_EA_EA EA - Summary of “Animal Rights Activism Trends to Look Out for in 2023” by Animal Agriculture Alliance by Aashish Khimasia Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary of “Animal Rights Activism Trends to Look Out for in 2023” by Animal Agriculture Alliance, published by Aashish Khimasia on February 23, 2023 on The Effective Altruism Forum.A blog-post by a member of the Animal Agriculture Alliance (AAA) has identified several trends in animal rights activism that they project for 2023. These trends are likely to be causes for concern for the animal agriculture industry, and the piece was written to make AAA supporters aware of them. Recognising these trends and identifying the views held on these animal advocacy tactics by proponents of animal agriculture may provide advocates with valuable insights.In this post, I list the key trends identified by the article and bullet point tactics highlighted by the article which are of particular interest.I’m thankful to “The Cranky Vegan” for bringing this article to my attention through their linked video.Linking CAFOs to negative human and environmental healthDrawing attention to the detrimental effects of CAFOs (concentrated animal feeding operations) to human and environmental healthUsing historical precedents of CAFOs being charged in court such as in North Carolina and Seattle in messagingExploring cases where ethnic minorities have experienced disproportionate negative health impacts of CAFOsThis strategy may create opposition to CAFOs from individuals and organisations that may not be compelled by animal-focused driven arguments, and could be further integrated into outreach and media messaging. Referring to historical precedents of CAFOs being charged with breaching environmental regulations may help to legitimise messaging against them.The use of undercover footage in court and mediaUsing undercover footage from factory farms to motivate arguments in court that such operations engage in unfair competition, false advertising, market distortion and fraudUsing undercover footage from factory farms pressure retailers to cut ties with such farmsUsing undercover footage from animal rescue missions from factory farms as evidence against charges of trespassing and theftThe continued and increased use of undercover footage from factory farms is clearly concerning for animal agriculture, given the extensive efforts to block this such as through so called Ag-gag laws. However, the suppression of undercover footage from factory farms may lead to increased media attention on these items and public scrutiny on the conditions of factory farms. Indeed, in a recent case, Direct Action Everywhere activists who were being prosecuted after liberating piglets from a Smithfield Foods farm and releasing footage from their mission, were acquitted by the jury, despite the judge blocking the jury from viewing the footage taken. The aforementioned ways in which undercover footage may be used to aid the acquittal of activists, challenge farms in court and pressure retailers to cut ties with farms highlight the potency of combining undercover footage with legal action.Prioritising Youth EngagementEngaging young people in programmes that rival agricultural programmes like FFA and 4-HFostering social disapproval of animal product consumption and normalising plant-based foods in classrooms, presenting the suffering caused by factory farming in an emotive wayEducating young people and creating a shift in culture towards empathy, through recognising the suffering caused by animal agriculture and normalising plant-based foods, may challenge the image that animal agriculture is trying to maintain. This may be an important factor in changing consumption habits of future generations.Deconstructing legal personhoodThe use of the writ of habeas corpus, a right that protects against unlawful and indefinite imprisonment, as a way to challenge the legal personhood of animals by the Nonhuman Rights ...]]>
Aashish Khimasia https://forum.effectivealtruism.org/posts/7LqFyJWxGCZZXte3N/summary-of-animal-rights-activism-trends-to-look-out-for-in Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary of “Animal Rights Activism Trends to Look Out for in 2023” by Animal Agriculture Alliance, published by Aashish Khimasia on February 23, 2023 on The Effective Altruism Forum.A blog-post by a member of the Animal Agriculture Alliance (AAA) has identified several trends in animal rights activism that they project for 2023. These trends are likely to be causes for concern for the animal agriculture industry, and the piece was written to make AAA supporters aware of them. Recognising these trends and identifying the views held on these animal advocacy tactics by proponents of animal agriculture may provide advocates with valuable insights.In this post, I list the key trends identified by the article and bullet point tactics highlighted by the article which are of particular interest.I’m thankful to “The Cranky Vegan” for bringing this article to my attention through their linked video.Linking CAFOs to negative human and environmental healthDrawing attention to the detrimental effects of CAFOs (concentrated animal feeding operations) to human and environmental healthUsing historical precedents of CAFOs being charged in court such as in North Carolina and Seattle in messagingExploring cases where ethnic minorities have experienced disproportionate negative health impacts of CAFOsThis strategy may create opposition to CAFOs from individuals and organisations that may not be compelled by animal-focused driven arguments, and could be further integrated into outreach and media messaging. Referring to historical precedents of CAFOs being charged with breaching environmental regulations may help to legitimise messaging against them.The use of undercover footage in court and mediaUsing undercover footage from factory farms to motivate arguments in court that such operations engage in unfair competition, false advertising, market distortion and fraudUsing undercover footage from factory farms pressure retailers to cut ties with such farmsUsing undercover footage from animal rescue missions from factory farms as evidence against charges of trespassing and theftThe continued and increased use of undercover footage from factory farms is clearly concerning for animal agriculture, given the extensive efforts to block this such as through so called Ag-gag laws. However, the suppression of undercover footage from factory farms may lead to increased media attention on these items and public scrutiny on the conditions of factory farms. Indeed, in a recent case, Direct Action Everywhere activists who were being prosecuted after liberating piglets from a Smithfield Foods farm and releasing footage from their mission, were acquitted by the jury, despite the judge blocking the jury from viewing the footage taken. The aforementioned ways in which undercover footage may be used to aid the acquittal of activists, challenge farms in court and pressure retailers to cut ties with farms highlight the potency of combining undercover footage with legal action.Prioritising Youth EngagementEngaging young people in programmes that rival agricultural programmes like FFA and 4-HFostering social disapproval of animal product consumption and normalising plant-based foods in classrooms, presenting the suffering caused by factory farming in an emotive wayEducating young people and creating a shift in culture towards empathy, through recognising the suffering caused by animal agriculture and normalising plant-based foods, may challenge the image that animal agriculture is trying to maintain. This may be an important factor in changing consumption habits of future generations.Deconstructing legal personhoodThe use of the writ of habeas corpus, a right that protects against unlawful and indefinite imprisonment, as a way to challenge the legal personhood of animals by the Nonhuman Rights ...]]>
Fri, 24 Feb 2023 18:33:23 +0000 EA - Summary of “Animal Rights Activism Trends to Look Out for in 2023” by Animal Agriculture Alliance by Aashish Khimasia Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary of “Animal Rights Activism Trends to Look Out for in 2023” by Animal Agriculture Alliance, published by Aashish Khimasia on February 23, 2023 on The Effective Altruism Forum.A blog-post by a member of the Animal Agriculture Alliance (AAA) has identified several trends in animal rights activism that they project for 2023. These trends are likely to be causes for concern for the animal agriculture industry, and the piece was written to make AAA supporters aware of them. Recognising these trends and identifying the views held on these animal advocacy tactics by proponents of animal agriculture may provide advocates with valuable insights.In this post, I list the key trends identified by the article and bullet point tactics highlighted by the article which are of particular interest.I’m thankful to “The Cranky Vegan” for bringing this article to my attention through their linked video.Linking CAFOs to negative human and environmental healthDrawing attention to the detrimental effects of CAFOs (concentrated animal feeding operations) to human and environmental healthUsing historical precedents of CAFOs being charged in court such as in North Carolina and Seattle in messagingExploring cases where ethnic minorities have experienced disproportionate negative health impacts of CAFOsThis strategy may create opposition to CAFOs from individuals and organisations that may not be compelled by animal-focused driven arguments, and could be further integrated into outreach and media messaging. Referring to historical precedents of CAFOs being charged with breaching environmental regulations may help to legitimise messaging against them.The use of undercover footage in court and mediaUsing undercover footage from factory farms to motivate arguments in court that such operations engage in unfair competition, false advertising, market distortion and fraudUsing undercover footage from factory farms pressure retailers to cut ties with such farmsUsing undercover footage from animal rescue missions from factory farms as evidence against charges of trespassing and theftThe continued and increased use of undercover footage from factory farms is clearly concerning for animal agriculture, given the extensive efforts to block this such as through so called Ag-gag laws. However, the suppression of undercover footage from factory farms may lead to increased media attention on these items and public scrutiny on the conditions of factory farms. Indeed, in a recent case, Direct Action Everywhere activists who were being prosecuted after liberating piglets from a Smithfield Foods farm and releasing footage from their mission, were acquitted by the jury, despite the judge blocking the jury from viewing the footage taken. The aforementioned ways in which undercover footage may be used to aid the acquittal of activists, challenge farms in court and pressure retailers to cut ties with farms highlight the potency of combining undercover footage with legal action.Prioritising Youth EngagementEngaging young people in programmes that rival agricultural programmes like FFA and 4-HFostering social disapproval of animal product consumption and normalising plant-based foods in classrooms, presenting the suffering caused by factory farming in an emotive wayEducating young people and creating a shift in culture towards empathy, through recognising the suffering caused by animal agriculture and normalising plant-based foods, may challenge the image that animal agriculture is trying to maintain. This may be an important factor in changing consumption habits of future generations.Deconstructing legal personhoodThe use of the writ of habeas corpus, a right that protects against unlawful and indefinite imprisonment, as a way to challenge the legal personhood of animals by the Nonhuman Rights ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary of “Animal Rights Activism Trends to Look Out for in 2023” by Animal Agriculture Alliance, published by Aashish Khimasia on February 23, 2023 on The Effective Altruism Forum.A blog-post by a member of the Animal Agriculture Alliance (AAA) has identified several trends in animal rights activism that they project for 2023. These trends are likely to be causes for concern for the animal agriculture industry, and the piece was written to make AAA supporters aware of them. Recognising these trends and identifying the views held on these animal advocacy tactics by proponents of animal agriculture may provide advocates with valuable insights.In this post, I list the key trends identified by the article and bullet point tactics highlighted by the article which are of particular interest.I’m thankful to “The Cranky Vegan” for bringing this article to my attention through their linked video.Linking CAFOs to negative human and environmental healthDrawing attention to the detrimental effects of CAFOs (concentrated animal feeding operations) to human and environmental healthUsing historical precedents of CAFOs being charged in court such as in North Carolina and Seattle in messagingExploring cases where ethnic minorities have experienced disproportionate negative health impacts of CAFOsThis strategy may create opposition to CAFOs from individuals and organisations that may not be compelled by animal-focused driven arguments, and could be further integrated into outreach and media messaging. Referring to historical precedents of CAFOs being charged with breaching environmental regulations may help to legitimise messaging against them.The use of undercover footage in court and mediaUsing undercover footage from factory farms to motivate arguments in court that such operations engage in unfair competition, false advertising, market distortion and fraudUsing undercover footage from factory farms pressure retailers to cut ties with such farmsUsing undercover footage from animal rescue missions from factory farms as evidence against charges of trespassing and theftThe continued and increased use of undercover footage from factory farms is clearly concerning for animal agriculture, given the extensive efforts to block this such as through so called Ag-gag laws. However, the suppression of undercover footage from factory farms may lead to increased media attention on these items and public scrutiny on the conditions of factory farms. Indeed, in a recent case, Direct Action Everywhere activists who were being prosecuted after liberating piglets from a Smithfield Foods farm and releasing footage from their mission, were acquitted by the jury, despite the judge blocking the jury from viewing the footage taken. The aforementioned ways in which undercover footage may be used to aid the acquittal of activists, challenge farms in court and pressure retailers to cut ties with farms highlight the potency of combining undercover footage with legal action.Prioritising Youth EngagementEngaging young people in programmes that rival agricultural programmes like FFA and 4-HFostering social disapproval of animal product consumption and normalising plant-based foods in classrooms, presenting the suffering caused by factory farming in an emotive wayEducating young people and creating a shift in culture towards empathy, through recognising the suffering caused by animal agriculture and normalising plant-based foods, may challenge the image that animal agriculture is trying to maintain. This may be an important factor in changing consumption habits of future generations.Deconstructing legal personhoodThe use of the writ of habeas corpus, a right that protects against unlawful and indefinite imprisonment, as a way to challenge the legal personhood of animals by the Nonhuman Rights ...]]>
Aashish Khimasia https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:56 None full 5016
rqg7PRYTvCf74TRyG_NL_EA_EA EA - Consent Isn't Always Enough by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consent Isn't Always Enough, published by Jeff Kaufman on February 24, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/rqg7PRYTvCf74TRyG/consent-isn-t-always-enough Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consent Isn't Always Enough, published by Jeff Kaufman on February 24, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 24 Feb 2023 16:11:25 +0000 EA - Consent Isn't Always Enough by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consent Isn't Always Enough, published by Jeff Kaufman on February 24, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consent Isn't Always Enough, published by Jeff Kaufman on February 24, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:25 None full 5014
8ZrdmwEnRRSdXdJe2_NL_EA_EA EA - EA Israel: 2022 Progress and 2023 Plans by ezrah Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Israel: 2022 Progress and 2023 Plans, published by ezrah on February 23, 2023 on The Effective Altruism Forum.This document recaps EA Israel’s and the Israeli effective altruism community progress in 2022, and lays out EA Israel’s plans for 2023 (we know that 2023 started a couple months ago, but figured better late than never). We wrote the post in order to increase transparency about EA Israel’s activities, share our thoughts with the global community, and as an opportunity to reflect, strategize and celebrate.SummaryUpdates to our existing strategyWe’re placing an increased emphasis on supporting, incubating and launching new projects and organizationsWe’re investing in our operations, in order to be able to scale our programs, support community members’ initiatives and mature into a professional workplace to support staff development and retentionWe’re presenting our work and value proposition clearly and in a way that’s easily understood by the team, community, and general public2022 ProgressAchievements by Israeli EA communityWe asked community members to briefly share their personal progress this year.EA Israel’s ProgressEA Israel’s work can be divided into four verticals:1. Teaching tools about effective social action and introducing Israelis to effective altruismThrough an accredited university course, university groups, year-long fellowships, short intro fellowships (“crash courses”) for young professionals, newsletter and social media and large public events, along with onboarding new community members.2. Helping community members take action and maximize their social impactIncubating sub-groups (based on cause area / profession)Impact acceleration programs and servicesSupport for community members and projects3. Increasing the effectiveness of donations in IsraelPreparing for the launch of Effective Giving IsraelLaunching the Maximum Impact Program, a program that works with nonprofits to create and publish cost-effectiveness reports at scale (22 reports in the pilot) with the goal of making Israeli philanthropy effectiveness-oriented and evidence-basedCounterfactually raise 500k ILS for high-impact nonprofits4. Infrastructure to enable continued growthWe’re setting ourselves up to be a well-run, high-capability organizationWe’re supporting a thriving and healthy communityWe also discuss some of the major challenges of 2022:FTX’s crashStaff turnover and the difficulties of transitioning from a volunteer-based group to a funded nonprofit2023 Annual Plan (requisite Miro board included)Effective Altruism Israel’s vision is one where all Israelis who are interested in maximizing their social impact have access to the people and the resources they need to help others, using their careers, projects, and donations.In 2023 EA Israel will continue to focus on its 4 core areas,Teaching tools about effective social action and growing the EA Israel communityCore objectives: scale and optimize outreach programsSupporting impactful actionCore objectives: incubate new sub-groups; launch new impact-assistance programs with potential to scale; provide operational support for projects, orgs and individualsEffective donationsCore objectives: Launch Effective Giving Israel; improve, scale and run second round of local nonprofit evaluation programOrganizational and community infrastructureCore objectives: support growth of outwards-facing programs; implement M&E systems; streamline internal processes and operations; improve male / female community ratio and support a thriving and healthy communityHere’s a visual map of our current and planned projects and services, where projects in italics are planned projects, and if you scroll down you’ll see our services mapped out relative to our target audiences. Note that the impact / cost scal...]]>
ezrah https://forum.effectivealtruism.org/posts/8ZrdmwEnRRSdXdJe2/ea-israel-2022-progress-and-2023-plans Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Israel: 2022 Progress and 2023 Plans, published by ezrah on February 23, 2023 on The Effective Altruism Forum.This document recaps EA Israel’s and the Israeli effective altruism community progress in 2022, and lays out EA Israel’s plans for 2023 (we know that 2023 started a couple months ago, but figured better late than never). We wrote the post in order to increase transparency about EA Israel’s activities, share our thoughts with the global community, and as an opportunity to reflect, strategize and celebrate.SummaryUpdates to our existing strategyWe’re placing an increased emphasis on supporting, incubating and launching new projects and organizationsWe’re investing in our operations, in order to be able to scale our programs, support community members’ initiatives and mature into a professional workplace to support staff development and retentionWe’re presenting our work and value proposition clearly and in a way that’s easily understood by the team, community, and general public2022 ProgressAchievements by Israeli EA communityWe asked community members to briefly share their personal progress this year.EA Israel’s ProgressEA Israel’s work can be divided into four verticals:1. Teaching tools about effective social action and introducing Israelis to effective altruismThrough an accredited university course, university groups, year-long fellowships, short intro fellowships (“crash courses”) for young professionals, newsletter and social media and large public events, along with onboarding new community members.2. Helping community members take action and maximize their social impactIncubating sub-groups (based on cause area / profession)Impact acceleration programs and servicesSupport for community members and projects3. Increasing the effectiveness of donations in IsraelPreparing for the launch of Effective Giving IsraelLaunching the Maximum Impact Program, a program that works with nonprofits to create and publish cost-effectiveness reports at scale (22 reports in the pilot) with the goal of making Israeli philanthropy effectiveness-oriented and evidence-basedCounterfactually raise 500k ILS for high-impact nonprofits4. Infrastructure to enable continued growthWe’re setting ourselves up to be a well-run, high-capability organizationWe’re supporting a thriving and healthy communityWe also discuss some of the major challenges of 2022:FTX’s crashStaff turnover and the difficulties of transitioning from a volunteer-based group to a funded nonprofit2023 Annual Plan (requisite Miro board included)Effective Altruism Israel’s vision is one where all Israelis who are interested in maximizing their social impact have access to the people and the resources they need to help others, using their careers, projects, and donations.In 2023 EA Israel will continue to focus on its 4 core areas,Teaching tools about effective social action and growing the EA Israel communityCore objectives: scale and optimize outreach programsSupporting impactful actionCore objectives: incubate new sub-groups; launch new impact-assistance programs with potential to scale; provide operational support for projects, orgs and individualsEffective donationsCore objectives: Launch Effective Giving Israel; improve, scale and run second round of local nonprofit evaluation programOrganizational and community infrastructureCore objectives: support growth of outwards-facing programs; implement M&E systems; streamline internal processes and operations; improve male / female community ratio and support a thriving and healthy communityHere’s a visual map of our current and planned projects and services, where projects in italics are planned projects, and if you scroll down you’ll see our services mapped out relative to our target audiences. Note that the impact / cost scal...]]>
Fri, 24 Feb 2023 09:54:08 +0000 EA - EA Israel: 2022 Progress and 2023 Plans by ezrah Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Israel: 2022 Progress and 2023 Plans, published by ezrah on February 23, 2023 on The Effective Altruism Forum.This document recaps EA Israel’s and the Israeli effective altruism community progress in 2022, and lays out EA Israel’s plans for 2023 (we know that 2023 started a couple months ago, but figured better late than never). We wrote the post in order to increase transparency about EA Israel’s activities, share our thoughts with the global community, and as an opportunity to reflect, strategize and celebrate.SummaryUpdates to our existing strategyWe’re placing an increased emphasis on supporting, incubating and launching new projects and organizationsWe’re investing in our operations, in order to be able to scale our programs, support community members’ initiatives and mature into a professional workplace to support staff development and retentionWe’re presenting our work and value proposition clearly and in a way that’s easily understood by the team, community, and general public2022 ProgressAchievements by Israeli EA communityWe asked community members to briefly share their personal progress this year.EA Israel’s ProgressEA Israel’s work can be divided into four verticals:1. Teaching tools about effective social action and introducing Israelis to effective altruismThrough an accredited university course, university groups, year-long fellowships, short intro fellowships (“crash courses”) for young professionals, newsletter and social media and large public events, along with onboarding new community members.2. Helping community members take action and maximize their social impactIncubating sub-groups (based on cause area / profession)Impact acceleration programs and servicesSupport for community members and projects3. Increasing the effectiveness of donations in IsraelPreparing for the launch of Effective Giving IsraelLaunching the Maximum Impact Program, a program that works with nonprofits to create and publish cost-effectiveness reports at scale (22 reports in the pilot) with the goal of making Israeli philanthropy effectiveness-oriented and evidence-basedCounterfactually raise 500k ILS for high-impact nonprofits4. Infrastructure to enable continued growthWe’re setting ourselves up to be a well-run, high-capability organizationWe’re supporting a thriving and healthy communityWe also discuss some of the major challenges of 2022:FTX’s crashStaff turnover and the difficulties of transitioning from a volunteer-based group to a funded nonprofit2023 Annual Plan (requisite Miro board included)Effective Altruism Israel’s vision is one where all Israelis who are interested in maximizing their social impact have access to the people and the resources they need to help others, using their careers, projects, and donations.In 2023 EA Israel will continue to focus on its 4 core areas,Teaching tools about effective social action and growing the EA Israel communityCore objectives: scale and optimize outreach programsSupporting impactful actionCore objectives: incubate new sub-groups; launch new impact-assistance programs with potential to scale; provide operational support for projects, orgs and individualsEffective donationsCore objectives: Launch Effective Giving Israel; improve, scale and run second round of local nonprofit evaluation programOrganizational and community infrastructureCore objectives: support growth of outwards-facing programs; implement M&E systems; streamline internal processes and operations; improve male / female community ratio and support a thriving and healthy communityHere’s a visual map of our current and planned projects and services, where projects in italics are planned projects, and if you scroll down you’ll see our services mapped out relative to our target audiences. Note that the impact / cost scal...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Israel: 2022 Progress and 2023 Plans, published by ezrah on February 23, 2023 on The Effective Altruism Forum.This document recaps EA Israel’s and the Israeli effective altruism community progress in 2022, and lays out EA Israel’s plans for 2023 (we know that 2023 started a couple months ago, but figured better late than never). We wrote the post in order to increase transparency about EA Israel’s activities, share our thoughts with the global community, and as an opportunity to reflect, strategize and celebrate.SummaryUpdates to our existing strategyWe’re placing an increased emphasis on supporting, incubating and launching new projects and organizationsWe’re investing in our operations, in order to be able to scale our programs, support community members’ initiatives and mature into a professional workplace to support staff development and retentionWe’re presenting our work and value proposition clearly and in a way that’s easily understood by the team, community, and general public2022 ProgressAchievements by Israeli EA communityWe asked community members to briefly share their personal progress this year.EA Israel’s ProgressEA Israel’s work can be divided into four verticals:1. Teaching tools about effective social action and introducing Israelis to effective altruismThrough an accredited university course, university groups, year-long fellowships, short intro fellowships (“crash courses”) for young professionals, newsletter and social media and large public events, along with onboarding new community members.2. Helping community members take action and maximize their social impactIncubating sub-groups (based on cause area / profession)Impact acceleration programs and servicesSupport for community members and projects3. Increasing the effectiveness of donations in IsraelPreparing for the launch of Effective Giving IsraelLaunching the Maximum Impact Program, a program that works with nonprofits to create and publish cost-effectiveness reports at scale (22 reports in the pilot) with the goal of making Israeli philanthropy effectiveness-oriented and evidence-basedCounterfactually raise 500k ILS for high-impact nonprofits4. Infrastructure to enable continued growthWe’re setting ourselves up to be a well-run, high-capability organizationWe’re supporting a thriving and healthy communityWe also discuss some of the major challenges of 2022:FTX’s crashStaff turnover and the difficulties of transitioning from a volunteer-based group to a funded nonprofit2023 Annual Plan (requisite Miro board included)Effective Altruism Israel’s vision is one where all Israelis who are interested in maximizing their social impact have access to the people and the resources they need to help others, using their careers, projects, and donations.In 2023 EA Israel will continue to focus on its 4 core areas,Teaching tools about effective social action and growing the EA Israel communityCore objectives: scale and optimize outreach programsSupporting impactful actionCore objectives: incubate new sub-groups; launch new impact-assistance programs with potential to scale; provide operational support for projects, orgs and individualsEffective donationsCore objectives: Launch Effective Giving Israel; improve, scale and run second round of local nonprofit evaluation programOrganizational and community infrastructureCore objectives: support growth of outwards-facing programs; implement M&E systems; streamline internal processes and operations; improve male / female community ratio and support a thriving and healthy communityHere’s a visual map of our current and planned projects and services, where projects in italics are planned projects, and if you scroll down you’ll see our services mapped out relative to our target audiences. Note that the impact / cost scal...]]>
ezrah https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 41:52 None full 5003
ybA5g9CoG8ErdJfhp_NL_EA_EA EA - New database for economics research by Oshan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New database for economics research, published by Oshan on February 23, 2023 on The Effective Altruism Forum.Hi! We're thrilled to share the Library of Economic Possibility (LEP), a new kind of knowledge-base for discovering, organizing, and sharing economic research around high-impact policies that have remained outside the mainstream despite significant research.Options for discovering economics research today — especially for non-specialists — are clunky. Activists tend to cherry-pick, journalists don't have the room to present a wide range of evidence in a single article, academics share their work through papers and conferences that general audiences generally don't engage with, think tanks wind up burying research beneath article archives. Search functions on major databases aren't the greatest.This is consequential in a moment where interest is rising around new economic ideas — LEP hopes to ground this rising interest in the wealth of existing evidence, and build a bridge between general audiences and economics research. We're also trying out a new way of organizing and connecting information.We're passionate about debating what the next economic system might look like, but we're also nerds about information architecture, and LEP's search features reflect that. Bidirectional links create associative trails between the network of information, and advanced search filters let you mix & match policies with specific areas of interest to hone in on precise relationships between information.So if you're curious to learn more about how basic income might affect entrepreneurship, you can select the policy "basic income" and the tag "entrepreneurship," and scroll through all our insights and sources that relate to both of those filters.Or, you could select "land value tax" and "urban development," or "codetermination" and "innovation." You get the idea.You can find those filters on the left column:Our policy reports also use a nifty little feature we call "insight cards." Any statistic or claim we use in a policy report is interactive, letting you pop open the card to see the source it come from, further context, the authors behind it, etc:We have more information in our launch announcement and Twitter thread. Happy to hear any feedback or answer questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Oshan https://forum.effectivealtruism.org/posts/ybA5g9CoG8ErdJfhp/new-database-for-economics-research Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New database for economics research, published by Oshan on February 23, 2023 on The Effective Altruism Forum.Hi! We're thrilled to share the Library of Economic Possibility (LEP), a new kind of knowledge-base for discovering, organizing, and sharing economic research around high-impact policies that have remained outside the mainstream despite significant research.Options for discovering economics research today — especially for non-specialists — are clunky. Activists tend to cherry-pick, journalists don't have the room to present a wide range of evidence in a single article, academics share their work through papers and conferences that general audiences generally don't engage with, think tanks wind up burying research beneath article archives. Search functions on major databases aren't the greatest.This is consequential in a moment where interest is rising around new economic ideas — LEP hopes to ground this rising interest in the wealth of existing evidence, and build a bridge between general audiences and economics research. We're also trying out a new way of organizing and connecting information.We're passionate about debating what the next economic system might look like, but we're also nerds about information architecture, and LEP's search features reflect that. Bidirectional links create associative trails between the network of information, and advanced search filters let you mix & match policies with specific areas of interest to hone in on precise relationships between information.So if you're curious to learn more about how basic income might affect entrepreneurship, you can select the policy "basic income" and the tag "entrepreneurship," and scroll through all our insights and sources that relate to both of those filters.Or, you could select "land value tax" and "urban development," or "codetermination" and "innovation." You get the idea.You can find those filters on the left column:Our policy reports also use a nifty little feature we call "insight cards." Any statistic or claim we use in a policy report is interactive, letting you pop open the card to see the source it come from, further context, the authors behind it, etc:We have more information in our launch announcement and Twitter thread. Happy to hear any feedback or answer questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 24 Feb 2023 09:03:43 +0000 EA - New database for economics research by Oshan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New database for economics research, published by Oshan on February 23, 2023 on The Effective Altruism Forum.Hi! We're thrilled to share the Library of Economic Possibility (LEP), a new kind of knowledge-base for discovering, organizing, and sharing economic research around high-impact policies that have remained outside the mainstream despite significant research.Options for discovering economics research today — especially for non-specialists — are clunky. Activists tend to cherry-pick, journalists don't have the room to present a wide range of evidence in a single article, academics share their work through papers and conferences that general audiences generally don't engage with, think tanks wind up burying research beneath article archives. Search functions on major databases aren't the greatest.This is consequential in a moment where interest is rising around new economic ideas — LEP hopes to ground this rising interest in the wealth of existing evidence, and build a bridge between general audiences and economics research. We're also trying out a new way of organizing and connecting information.We're passionate about debating what the next economic system might look like, but we're also nerds about information architecture, and LEP's search features reflect that. Bidirectional links create associative trails between the network of information, and advanced search filters let you mix & match policies with specific areas of interest to hone in on precise relationships between information.So if you're curious to learn more about how basic income might affect entrepreneurship, you can select the policy "basic income" and the tag "entrepreneurship," and scroll through all our insights and sources that relate to both of those filters.Or, you could select "land value tax" and "urban development," or "codetermination" and "innovation." You get the idea.You can find those filters on the left column:Our policy reports also use a nifty little feature we call "insight cards." Any statistic or claim we use in a policy report is interactive, letting you pop open the card to see the source it come from, further context, the authors behind it, etc:We have more information in our launch announcement and Twitter thread. Happy to hear any feedback or answer questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New database for economics research, published by Oshan on February 23, 2023 on The Effective Altruism Forum.Hi! We're thrilled to share the Library of Economic Possibility (LEP), a new kind of knowledge-base for discovering, organizing, and sharing economic research around high-impact policies that have remained outside the mainstream despite significant research.Options for discovering economics research today — especially for non-specialists — are clunky. Activists tend to cherry-pick, journalists don't have the room to present a wide range of evidence in a single article, academics share their work through papers and conferences that general audiences generally don't engage with, think tanks wind up burying research beneath article archives. Search functions on major databases aren't the greatest.This is consequential in a moment where interest is rising around new economic ideas — LEP hopes to ground this rising interest in the wealth of existing evidence, and build a bridge between general audiences and economics research. We're also trying out a new way of organizing and connecting information.We're passionate about debating what the next economic system might look like, but we're also nerds about information architecture, and LEP's search features reflect that. Bidirectional links create associative trails between the network of information, and advanced search filters let you mix & match policies with specific areas of interest to hone in on precise relationships between information.So if you're curious to learn more about how basic income might affect entrepreneurship, you can select the policy "basic income" and the tag "entrepreneurship," and scroll through all our insights and sources that relate to both of those filters.Or, you could select "land value tax" and "urban development," or "codetermination" and "innovation." You get the idea.You can find those filters on the left column:Our policy reports also use a nifty little feature we call "insight cards." Any statistic or claim we use in a policy report is interactive, letting you pop open the card to see the source it come from, further context, the authors behind it, etc:We have more information in our launch announcement and Twitter thread. Happy to hear any feedback or answer questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Oshan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:17 None full 5004
hAHNtAYLidmSJK7bs_NL_EA_EA EA - Who is Uncomfortable Critiquing Who, Around EA? by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who is Uncomfortable Critiquing Who, Around EA?, published by Ozzie Gooen on February 24, 2023 on The Effective Altruism Forum.Summary and DisclaimersI think EA is made up of a bunch of different parties, many of whom find it at least somewhat uncomfortable to criticize or honestly evaluate each other for a wide variety of reasons.I think that this is a very standard challenge that most organizations and movements have. As I would also recommend to other parties, I think that investigation and improvement here could be very valuable to EA. This post continues on many of the ideas as Select Challenges with Criticism & Evaluation Around EA.One early reviewer critiqued this post saying that they didn't believe that discomfort was a problem. If you don't think it is, I don't aim in this post to convince you. My goal here is to do early exploration what the problem even seems to look like, not to argue that the problem is severe or not.Like with that previous post, I rely here mostly on anecdotal experiences, introspection, and recommendations from various management books. This comes from me working on QURI (trying to pursue better longtermist evaluation), being an employee and manager in multiple (EA and not EA) organizations, and hearing a whole lot of various rants and frustrations from EAs. I’d love to see further work to better understand where bottlenecks to valuable communication are most restrictive and then design and test solutions.Writing this has helped me find some insights on this problem. However, it is a messy problem, and as I explained before, I find the terminology lacking. Apologies in advance.IntroductionThere’s a massive difference between a group saying that it’s open to criticism, and a group that people actually feel comfortable criticizing.I think that many EA individuals and organizations advocate and promote feedback in ways unusual for the industry. However, I think there’s also a lot of work to do.In most communities, it takes a lot of iteration and trust-building to find ways for people to routinely and usefully give candid information or feedback to each other.In companies, for example, employees often don’t have much to personally gain by voicing their critiques to management, and a lot to potentially lose. Even if the management seems really nice, is an honest critique really worth the ~3% chance of resentment? Often you won’t ever know — management could just keep their dislike of you to themselves, and later take action accordingly. On the other side, it’s often uncomfortable for managers to convey candid feedback to their reports privately, let alone discuss department or employee failures to people throughout the organization.My impression is that many online social settings contain a bunch of social groups that are really afraid of being honest with each other, and this leads to problems immediate (important information not getting shared) and expansive (groups developing extended distrust and sometimes hatred with each other).Problems of communication and comfort happen within power hierarchies, and they also happen between peer communities. Really, they happen everywhere.To a first approximation, "Everyone is at least a little of afraid of everyone else."I think a lot of people's natural reaction to issues like this is to point fingers at groups they don't like and blame them. But really, I personally think that all of us are broadly responsible (at least a little), and also are broadly able to understand and improve things. I see these issues as systemic, not personal.Criticism Between Different GroupsAround effective altruism, I’ve noticed:Evaluation in Global WelfareGlobal poverty charity evaluation and criticism seem like fair game. When GiveWell started, they weren’t friends with the leaders of the organiza...]]>
Ozzie Gooen https://forum.effectivealtruism.org/posts/hAHNtAYLidmSJK7bs/who-is-uncomfortable-critiquing-who-around-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who is Uncomfortable Critiquing Who, Around EA?, published by Ozzie Gooen on February 24, 2023 on The Effective Altruism Forum.Summary and DisclaimersI think EA is made up of a bunch of different parties, many of whom find it at least somewhat uncomfortable to criticize or honestly evaluate each other for a wide variety of reasons.I think that this is a very standard challenge that most organizations and movements have. As I would also recommend to other parties, I think that investigation and improvement here could be very valuable to EA. This post continues on many of the ideas as Select Challenges with Criticism & Evaluation Around EA.One early reviewer critiqued this post saying that they didn't believe that discomfort was a problem. If you don't think it is, I don't aim in this post to convince you. My goal here is to do early exploration what the problem even seems to look like, not to argue that the problem is severe or not.Like with that previous post, I rely here mostly on anecdotal experiences, introspection, and recommendations from various management books. This comes from me working on QURI (trying to pursue better longtermist evaluation), being an employee and manager in multiple (EA and not EA) organizations, and hearing a whole lot of various rants and frustrations from EAs. I’d love to see further work to better understand where bottlenecks to valuable communication are most restrictive and then design and test solutions.Writing this has helped me find some insights on this problem. However, it is a messy problem, and as I explained before, I find the terminology lacking. Apologies in advance.IntroductionThere’s a massive difference between a group saying that it’s open to criticism, and a group that people actually feel comfortable criticizing.I think that many EA individuals and organizations advocate and promote feedback in ways unusual for the industry. However, I think there’s also a lot of work to do.In most communities, it takes a lot of iteration and trust-building to find ways for people to routinely and usefully give candid information or feedback to each other.In companies, for example, employees often don’t have much to personally gain by voicing their critiques to management, and a lot to potentially lose. Even if the management seems really nice, is an honest critique really worth the ~3% chance of resentment? Often you won’t ever know — management could just keep their dislike of you to themselves, and later take action accordingly. On the other side, it’s often uncomfortable for managers to convey candid feedback to their reports privately, let alone discuss department or employee failures to people throughout the organization.My impression is that many online social settings contain a bunch of social groups that are really afraid of being honest with each other, and this leads to problems immediate (important information not getting shared) and expansive (groups developing extended distrust and sometimes hatred with each other).Problems of communication and comfort happen within power hierarchies, and they also happen between peer communities. Really, they happen everywhere.To a first approximation, "Everyone is at least a little of afraid of everyone else."I think a lot of people's natural reaction to issues like this is to point fingers at groups they don't like and blame them. But really, I personally think that all of us are broadly responsible (at least a little), and also are broadly able to understand and improve things. I see these issues as systemic, not personal.Criticism Between Different GroupsAround effective altruism, I’ve noticed:Evaluation in Global WelfareGlobal poverty charity evaluation and criticism seem like fair game. When GiveWell started, they weren’t friends with the leaders of the organiza...]]>
Fri, 24 Feb 2023 08:28:49 +0000 EA - Who is Uncomfortable Critiquing Who, Around EA? by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who is Uncomfortable Critiquing Who, Around EA?, published by Ozzie Gooen on February 24, 2023 on The Effective Altruism Forum.Summary and DisclaimersI think EA is made up of a bunch of different parties, many of whom find it at least somewhat uncomfortable to criticize or honestly evaluate each other for a wide variety of reasons.I think that this is a very standard challenge that most organizations and movements have. As I would also recommend to other parties, I think that investigation and improvement here could be very valuable to EA. This post continues on many of the ideas as Select Challenges with Criticism & Evaluation Around EA.One early reviewer critiqued this post saying that they didn't believe that discomfort was a problem. If you don't think it is, I don't aim in this post to convince you. My goal here is to do early exploration what the problem even seems to look like, not to argue that the problem is severe or not.Like with that previous post, I rely here mostly on anecdotal experiences, introspection, and recommendations from various management books. This comes from me working on QURI (trying to pursue better longtermist evaluation), being an employee and manager in multiple (EA and not EA) organizations, and hearing a whole lot of various rants and frustrations from EAs. I’d love to see further work to better understand where bottlenecks to valuable communication are most restrictive and then design and test solutions.Writing this has helped me find some insights on this problem. However, it is a messy problem, and as I explained before, I find the terminology lacking. Apologies in advance.IntroductionThere’s a massive difference between a group saying that it’s open to criticism, and a group that people actually feel comfortable criticizing.I think that many EA individuals and organizations advocate and promote feedback in ways unusual for the industry. However, I think there’s also a lot of work to do.In most communities, it takes a lot of iteration and trust-building to find ways for people to routinely and usefully give candid information or feedback to each other.In companies, for example, employees often don’t have much to personally gain by voicing their critiques to management, and a lot to potentially lose. Even if the management seems really nice, is an honest critique really worth the ~3% chance of resentment? Often you won’t ever know — management could just keep their dislike of you to themselves, and later take action accordingly. On the other side, it’s often uncomfortable for managers to convey candid feedback to their reports privately, let alone discuss department or employee failures to people throughout the organization.My impression is that many online social settings contain a bunch of social groups that are really afraid of being honest with each other, and this leads to problems immediate (important information not getting shared) and expansive (groups developing extended distrust and sometimes hatred with each other).Problems of communication and comfort happen within power hierarchies, and they also happen between peer communities. Really, they happen everywhere.To a first approximation, "Everyone is at least a little of afraid of everyone else."I think a lot of people's natural reaction to issues like this is to point fingers at groups they don't like and blame them. But really, I personally think that all of us are broadly responsible (at least a little), and also are broadly able to understand and improve things. I see these issues as systemic, not personal.Criticism Between Different GroupsAround effective altruism, I’ve noticed:Evaluation in Global WelfareGlobal poverty charity evaluation and criticism seem like fair game. When GiveWell started, they weren’t friends with the leaders of the organiza...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who is Uncomfortable Critiquing Who, Around EA?, published by Ozzie Gooen on February 24, 2023 on The Effective Altruism Forum.Summary and DisclaimersI think EA is made up of a bunch of different parties, many of whom find it at least somewhat uncomfortable to criticize or honestly evaluate each other for a wide variety of reasons.I think that this is a very standard challenge that most organizations and movements have. As I would also recommend to other parties, I think that investigation and improvement here could be very valuable to EA. This post continues on many of the ideas as Select Challenges with Criticism & Evaluation Around EA.One early reviewer critiqued this post saying that they didn't believe that discomfort was a problem. If you don't think it is, I don't aim in this post to convince you. My goal here is to do early exploration what the problem even seems to look like, not to argue that the problem is severe or not.Like with that previous post, I rely here mostly on anecdotal experiences, introspection, and recommendations from various management books. This comes from me working on QURI (trying to pursue better longtermist evaluation), being an employee and manager in multiple (EA and not EA) organizations, and hearing a whole lot of various rants and frustrations from EAs. I’d love to see further work to better understand where bottlenecks to valuable communication are most restrictive and then design and test solutions.Writing this has helped me find some insights on this problem. However, it is a messy problem, and as I explained before, I find the terminology lacking. Apologies in advance.IntroductionThere’s a massive difference between a group saying that it’s open to criticism, and a group that people actually feel comfortable criticizing.I think that many EA individuals and organizations advocate and promote feedback in ways unusual for the industry. However, I think there’s also a lot of work to do.In most communities, it takes a lot of iteration and trust-building to find ways for people to routinely and usefully give candid information or feedback to each other.In companies, for example, employees often don’t have much to personally gain by voicing their critiques to management, and a lot to potentially lose. Even if the management seems really nice, is an honest critique really worth the ~3% chance of resentment? Often you won’t ever know — management could just keep their dislike of you to themselves, and later take action accordingly. On the other side, it’s often uncomfortable for managers to convey candid feedback to their reports privately, let alone discuss department or employee failures to people throughout the organization.My impression is that many online social settings contain a bunch of social groups that are really afraid of being honest with each other, and this leads to problems immediate (important information not getting shared) and expansive (groups developing extended distrust and sometimes hatred with each other).Problems of communication and comfort happen within power hierarchies, and they also happen between peer communities. Really, they happen everywhere.To a first approximation, "Everyone is at least a little of afraid of everyone else."I think a lot of people's natural reaction to issues like this is to point fingers at groups they don't like and blame them. But really, I personally think that all of us are broadly responsible (at least a little), and also are broadly able to understand and improve things. I see these issues as systemic, not personal.Criticism Between Different GroupsAround effective altruism, I’ve noticed:Evaluation in Global WelfareGlobal poverty charity evaluation and criticism seem like fair game. When GiveWell started, they weren’t friends with the leaders of the organiza...]]>
Ozzie Gooen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 17:20 None full 5001
mb4kzhfRnpQNtF6ut_NL_EA_EA EA - Introducing EASE, a managed directory of EA Organization Service Providers by Deena Englander Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing EASE, a managed directory of EA Organization Service Providers, published by Deena Englander on February 23, 2023 on The Effective Altruism Forum.What is EASE?EASE (EA Services) is a directory of independent agencies and freelancers offering expertise to EA-aligned organizations. Please visit our website at to view our members.Who are we?We are a team of service providers. The authors of this post are the core coordinators. We all have our own organizations providing services to EA-aligned organizations. We see the problems most organizations encounter, and we developed a solution to help address that need.Why did we start EASE?Many organizations in the EA world have similar needs but lack the bandwidth or expertise to realize them. By providing a directory of experts covering many common challenges, we aim to save the community time whilst addressing key skill shortages. We believe that most organizations need external expertise in order to maximize their organization’s potential. We are all focused on being effective – and we believe that forming this centralized directory is the most effective way of making a large resource group more available to EA-aligned organizations.Why should EA organizations consider working with these agencies?By working with multiple EA organizations, these agencies have gathered plenty of expertise to provide relevant advice, save time and money, and most importantly, increase your impact. Our screening process ensures that the vendors listed are pre-qualified as experts in their represented fields. This minimizes the risk of engaging with a new “unknown” entity, as they’re already proven to be valuable team players.Additionally, we have programming in place to consolidate the interagency interactions and strengthen relationships, so that when you work with one member of our group, you’re accessing a part of a larger network.Our members are vetted to determine capabilities, accuracy, and work history, but we do not give out any endorsement for specific providers.What are the criteria for being added to the directory?Our aim is to build a comprehensive list of service providers who work with EA organizations. We screen members to ensure that the providers are experienced and are truly experts in their field, as well as being active participants in EA or having experience working with EA-aligned organizations.Are you an individual or team providing services to EA-aligned organizations and would like to be added?We love growing our network! Fill out this form and someone will contact you to begin the screening process.Are you ready to get the help you need?Feel free to contact the service providers directly.Are you an EA organization in need of help but aren’t sure what you need or if you have the budget?We can help you figure out what kind of services and budget you need so you can try to get the funds necessary to pay for these critical services. Please send us an email to info@ea-services.org, and we will do our best to help you.Is the directory up to date?We regularly review the listings to make sure they remain relevant. If you have any comments or suggestions, please send an email to info@ea-services.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Deena Englander https://forum.effectivealtruism.org/posts/mb4kzhfRnpQNtF6ut/introducing-ease-a-managed-directory-of-ea-organization Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing EASE, a managed directory of EA Organization Service Providers, published by Deena Englander on February 23, 2023 on The Effective Altruism Forum.What is EASE?EASE (EA Services) is a directory of independent agencies and freelancers offering expertise to EA-aligned organizations. Please visit our website at to view our members.Who are we?We are a team of service providers. The authors of this post are the core coordinators. We all have our own organizations providing services to EA-aligned organizations. We see the problems most organizations encounter, and we developed a solution to help address that need.Why did we start EASE?Many organizations in the EA world have similar needs but lack the bandwidth or expertise to realize them. By providing a directory of experts covering many common challenges, we aim to save the community time whilst addressing key skill shortages. We believe that most organizations need external expertise in order to maximize their organization’s potential. We are all focused on being effective – and we believe that forming this centralized directory is the most effective way of making a large resource group more available to EA-aligned organizations.Why should EA organizations consider working with these agencies?By working with multiple EA organizations, these agencies have gathered plenty of expertise to provide relevant advice, save time and money, and most importantly, increase your impact. Our screening process ensures that the vendors listed are pre-qualified as experts in their represented fields. This minimizes the risk of engaging with a new “unknown” entity, as they’re already proven to be valuable team players.Additionally, we have programming in place to consolidate the interagency interactions and strengthen relationships, so that when you work with one member of our group, you’re accessing a part of a larger network.Our members are vetted to determine capabilities, accuracy, and work history, but we do not give out any endorsement for specific providers.What are the criteria for being added to the directory?Our aim is to build a comprehensive list of service providers who work with EA organizations. We screen members to ensure that the providers are experienced and are truly experts in their field, as well as being active participants in EA or having experience working with EA-aligned organizations.Are you an individual or team providing services to EA-aligned organizations and would like to be added?We love growing our network! Fill out this form and someone will contact you to begin the screening process.Are you ready to get the help you need?Feel free to contact the service providers directly.Are you an EA organization in need of help but aren’t sure what you need or if you have the budget?We can help you figure out what kind of services and budget you need so you can try to get the funds necessary to pay for these critical services. Please send us an email to info@ea-services.org, and we will do our best to help you.Is the directory up to date?We regularly review the listings to make sure they remain relevant. If you have any comments or suggestions, please send an email to info@ea-services.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 24 Feb 2023 00:36:00 +0000 EA - Introducing EASE, a managed directory of EA Organization Service Providers by Deena Englander Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing EASE, a managed directory of EA Organization Service Providers, published by Deena Englander on February 23, 2023 on The Effective Altruism Forum.What is EASE?EASE (EA Services) is a directory of independent agencies and freelancers offering expertise to EA-aligned organizations. Please visit our website at to view our members.Who are we?We are a team of service providers. The authors of this post are the core coordinators. We all have our own organizations providing services to EA-aligned organizations. We see the problems most organizations encounter, and we developed a solution to help address that need.Why did we start EASE?Many organizations in the EA world have similar needs but lack the bandwidth or expertise to realize them. By providing a directory of experts covering many common challenges, we aim to save the community time whilst addressing key skill shortages. We believe that most organizations need external expertise in order to maximize their organization’s potential. We are all focused on being effective – and we believe that forming this centralized directory is the most effective way of making a large resource group more available to EA-aligned organizations.Why should EA organizations consider working with these agencies?By working with multiple EA organizations, these agencies have gathered plenty of expertise to provide relevant advice, save time and money, and most importantly, increase your impact. Our screening process ensures that the vendors listed are pre-qualified as experts in their represented fields. This minimizes the risk of engaging with a new “unknown” entity, as they’re already proven to be valuable team players.Additionally, we have programming in place to consolidate the interagency interactions and strengthen relationships, so that when you work with one member of our group, you’re accessing a part of a larger network.Our members are vetted to determine capabilities, accuracy, and work history, but we do not give out any endorsement for specific providers.What are the criteria for being added to the directory?Our aim is to build a comprehensive list of service providers who work with EA organizations. We screen members to ensure that the providers are experienced and are truly experts in their field, as well as being active participants in EA or having experience working with EA-aligned organizations.Are you an individual or team providing services to EA-aligned organizations and would like to be added?We love growing our network! Fill out this form and someone will contact you to begin the screening process.Are you ready to get the help you need?Feel free to contact the service providers directly.Are you an EA organization in need of help but aren’t sure what you need or if you have the budget?We can help you figure out what kind of services and budget you need so you can try to get the funds necessary to pay for these critical services. Please send us an email to info@ea-services.org, and we will do our best to help you.Is the directory up to date?We regularly review the listings to make sure they remain relevant. If you have any comments or suggestions, please send an email to info@ea-services.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing EASE, a managed directory of EA Organization Service Providers, published by Deena Englander on February 23, 2023 on The Effective Altruism Forum.What is EASE?EASE (EA Services) is a directory of independent agencies and freelancers offering expertise to EA-aligned organizations. Please visit our website at to view our members.Who are we?We are a team of service providers. The authors of this post are the core coordinators. We all have our own organizations providing services to EA-aligned organizations. We see the problems most organizations encounter, and we developed a solution to help address that need.Why did we start EASE?Many organizations in the EA world have similar needs but lack the bandwidth or expertise to realize them. By providing a directory of experts covering many common challenges, we aim to save the community time whilst addressing key skill shortages. We believe that most organizations need external expertise in order to maximize their organization’s potential. We are all focused on being effective – and we believe that forming this centralized directory is the most effective way of making a large resource group more available to EA-aligned organizations.Why should EA organizations consider working with these agencies?By working with multiple EA organizations, these agencies have gathered plenty of expertise to provide relevant advice, save time and money, and most importantly, increase your impact. Our screening process ensures that the vendors listed are pre-qualified as experts in their represented fields. This minimizes the risk of engaging with a new “unknown” entity, as they’re already proven to be valuable team players.Additionally, we have programming in place to consolidate the interagency interactions and strengthen relationships, so that when you work with one member of our group, you’re accessing a part of a larger network.Our members are vetted to determine capabilities, accuracy, and work history, but we do not give out any endorsement for specific providers.What are the criteria for being added to the directory?Our aim is to build a comprehensive list of service providers who work with EA organizations. We screen members to ensure that the providers are experienced and are truly experts in their field, as well as being active participants in EA or having experience working with EA-aligned organizations.Are you an individual or team providing services to EA-aligned organizations and would like to be added?We love growing our network! Fill out this form and someone will contact you to begin the screening process.Are you ready to get the help you need?Feel free to contact the service providers directly.Are you an EA organization in need of help but aren’t sure what you need or if you have the budget?We can help you figure out what kind of services and budget you need so you can try to get the funds necessary to pay for these critical services. Please send us an email to info@ea-services.org, and we will do our best to help you.Is the directory up to date?We regularly review the listings to make sure they remain relevant. If you have any comments or suggestions, please send an email to info@ea-services.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Deena Englander https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:11 None full 5002
oa3MdDRJaY3XfRsrF_NL_EA_EA EA - EA Global in 2022 and plans for 2023 by Eli Nathan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Global in 2022 and plans for 2023, published by Eli Nathan on February 23, 2023 on The Effective Altruism Forum.SummaryWe ran three EA Global events in 2022, in London, San Francisco, and Washington, D.C. These conferences all had ~1300–1500 people each and were some of our biggest yet:These events had an average score of 9.02 to the question “How likely is it that you would recommend EA Global to a friend or colleague with similar interests to your own?”, rated out of 10.Those who filled out our feedback survey (which was a minority of attendees, around 1200 individuals in total across all three events) reported over 36,000 new connections made.This was the first time we ran three EA Globals in one year since 2017, and we only had ~1200 attendees total across all three of those events.We hosted and recorded lots of new content, a substantial amount of which is located on our YouTube channel.This was the first time trialing out a EA conference in D.C. of any kind. We generally received positive feedback about this event from attendees and stakeholders.Plans for 2023We’re reducing our spending in a lot of ways, most significantly by cutting some meals and the majority of travel grants, which we expect may somewhat reduce the overall ratings of our events. Please note that this is a fairly dynamic situation and we may update our spending plans as our financial situation changes.We’re doing three EA Globals in 2023, in the Bay Area and London again, and with our US east coast event in Boston rather than D.C. As well as EA Globals, there are also several upcoming EAGx events, check out the full list of confirmed and provisional events below.EA Global: Bay Area | 24–26 FebruaryEAGxCambridge | 17–19 MarchEAGxNordics | 21–23 AprilEA Global: London | 19–21 MayEAGxWarsaw | 9–11 June [provisional]EAGxNYC | July / August [provisional]EAGxBerlin | Early September [provisional]EAGxAustralia | Late September [provisional]EA Global: Boston | Oct 27–Oct 29EAGxVirtual | November [provisional]We’re aiming to have similar-sized conferences, though with the reduction in travel grants we expect the events to perhaps be a little smaller, maybe around 1000 people per EA Global.We recently completed a hiring round and now have ~4 FTEs working on the EA Global team.We’ve recently revamped our website and incorporated it into effectivealtruism.org — see here.We’ve switched over our backend systems from Zoho to Salesforce. This will help us integrate better with the rest of CEA’s products, and will hopefully create a smoother front and backend that’s better suited to our users. (Note that the switchover itself has been somewhat buggy, but we are clearing these up and hope to have minimal issues moving forwards.)We’re also trialing a referral system for applications, where we’ve given a select number of advisors working in EA community building the ability to admit people to the conference. If this goes well we may expand this program next year.Growth areasFood got generally negative reviews in 2022:Food is a notoriously hard area to get right and quality can vary a lot between venues, and we often have little or no choice between catering options.We’ve explored ways to improve the food quality, including hiring a catering consultant, but a lot of these options are cost prohibitive, and realistically we expect food quality to continue to be an issue moving forwards.Swapcard (our event application app) also got generally negative reviews in 2022:We explored and tested several competitor apps, though none of them seem better than Swapcard.We explored working with external developers to build our own event networking app, but eventually concluded that this would be too costly in terms of both time and money.We’ve been working with Swapcard to roll out new featur...]]>
Eli Nathan https://forum.effectivealtruism.org/posts/oa3MdDRJaY3XfRsrF/ea-global-in-2022-and-plans-for-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Global in 2022 and plans for 2023, published by Eli Nathan on February 23, 2023 on The Effective Altruism Forum.SummaryWe ran three EA Global events in 2022, in London, San Francisco, and Washington, D.C. These conferences all had ~1300–1500 people each and were some of our biggest yet:These events had an average score of 9.02 to the question “How likely is it that you would recommend EA Global to a friend or colleague with similar interests to your own?”, rated out of 10.Those who filled out our feedback survey (which was a minority of attendees, around 1200 individuals in total across all three events) reported over 36,000 new connections made.This was the first time we ran three EA Globals in one year since 2017, and we only had ~1200 attendees total across all three of those events.We hosted and recorded lots of new content, a substantial amount of which is located on our YouTube channel.This was the first time trialing out a EA conference in D.C. of any kind. We generally received positive feedback about this event from attendees and stakeholders.Plans for 2023We’re reducing our spending in a lot of ways, most significantly by cutting some meals and the majority of travel grants, which we expect may somewhat reduce the overall ratings of our events. Please note that this is a fairly dynamic situation and we may update our spending plans as our financial situation changes.We’re doing three EA Globals in 2023, in the Bay Area and London again, and with our US east coast event in Boston rather than D.C. As well as EA Globals, there are also several upcoming EAGx events, check out the full list of confirmed and provisional events below.EA Global: Bay Area | 24–26 FebruaryEAGxCambridge | 17–19 MarchEAGxNordics | 21–23 AprilEA Global: London | 19–21 MayEAGxWarsaw | 9–11 June [provisional]EAGxNYC | July / August [provisional]EAGxBerlin | Early September [provisional]EAGxAustralia | Late September [provisional]EA Global: Boston | Oct 27–Oct 29EAGxVirtual | November [provisional]We’re aiming to have similar-sized conferences, though with the reduction in travel grants we expect the events to perhaps be a little smaller, maybe around 1000 people per EA Global.We recently completed a hiring round and now have ~4 FTEs working on the EA Global team.We’ve recently revamped our website and incorporated it into effectivealtruism.org — see here.We’ve switched over our backend systems from Zoho to Salesforce. This will help us integrate better with the rest of CEA’s products, and will hopefully create a smoother front and backend that’s better suited to our users. (Note that the switchover itself has been somewhat buggy, but we are clearing these up and hope to have minimal issues moving forwards.)We’re also trialing a referral system for applications, where we’ve given a select number of advisors working in EA community building the ability to admit people to the conference. If this goes well we may expand this program next year.Growth areasFood got generally negative reviews in 2022:Food is a notoriously hard area to get right and quality can vary a lot between venues, and we often have little or no choice between catering options.We’ve explored ways to improve the food quality, including hiring a catering consultant, but a lot of these options are cost prohibitive, and realistically we expect food quality to continue to be an issue moving forwards.Swapcard (our event application app) also got generally negative reviews in 2022:We explored and tested several competitor apps, though none of them seem better than Swapcard.We explored working with external developers to build our own event networking app, but eventually concluded that this would be too costly in terms of both time and money.We’ve been working with Swapcard to roll out new featur...]]>
Thu, 23 Feb 2023 21:09:54 +0000 EA - EA Global in 2022 and plans for 2023 by Eli Nathan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Global in 2022 and plans for 2023, published by Eli Nathan on February 23, 2023 on The Effective Altruism Forum.SummaryWe ran three EA Global events in 2022, in London, San Francisco, and Washington, D.C. These conferences all had ~1300–1500 people each and were some of our biggest yet:These events had an average score of 9.02 to the question “How likely is it that you would recommend EA Global to a friend or colleague with similar interests to your own?”, rated out of 10.Those who filled out our feedback survey (which was a minority of attendees, around 1200 individuals in total across all three events) reported over 36,000 new connections made.This was the first time we ran three EA Globals in one year since 2017, and we only had ~1200 attendees total across all three of those events.We hosted and recorded lots of new content, a substantial amount of which is located on our YouTube channel.This was the first time trialing out a EA conference in D.C. of any kind. We generally received positive feedback about this event from attendees and stakeholders.Plans for 2023We’re reducing our spending in a lot of ways, most significantly by cutting some meals and the majority of travel grants, which we expect may somewhat reduce the overall ratings of our events. Please note that this is a fairly dynamic situation and we may update our spending plans as our financial situation changes.We’re doing three EA Globals in 2023, in the Bay Area and London again, and with our US east coast event in Boston rather than D.C. As well as EA Globals, there are also several upcoming EAGx events, check out the full list of confirmed and provisional events below.EA Global: Bay Area | 24–26 FebruaryEAGxCambridge | 17–19 MarchEAGxNordics | 21–23 AprilEA Global: London | 19–21 MayEAGxWarsaw | 9–11 June [provisional]EAGxNYC | July / August [provisional]EAGxBerlin | Early September [provisional]EAGxAustralia | Late September [provisional]EA Global: Boston | Oct 27–Oct 29EAGxVirtual | November [provisional]We’re aiming to have similar-sized conferences, though with the reduction in travel grants we expect the events to perhaps be a little smaller, maybe around 1000 people per EA Global.We recently completed a hiring round and now have ~4 FTEs working on the EA Global team.We’ve recently revamped our website and incorporated it into effectivealtruism.org — see here.We’ve switched over our backend systems from Zoho to Salesforce. This will help us integrate better with the rest of CEA’s products, and will hopefully create a smoother front and backend that’s better suited to our users. (Note that the switchover itself has been somewhat buggy, but we are clearing these up and hope to have minimal issues moving forwards.)We’re also trialing a referral system for applications, where we’ve given a select number of advisors working in EA community building the ability to admit people to the conference. If this goes well we may expand this program next year.Growth areasFood got generally negative reviews in 2022:Food is a notoriously hard area to get right and quality can vary a lot between venues, and we often have little or no choice between catering options.We’ve explored ways to improve the food quality, including hiring a catering consultant, but a lot of these options are cost prohibitive, and realistically we expect food quality to continue to be an issue moving forwards.Swapcard (our event application app) also got generally negative reviews in 2022:We explored and tested several competitor apps, though none of them seem better than Swapcard.We explored working with external developers to build our own event networking app, but eventually concluded that this would be too costly in terms of both time and money.We’ve been working with Swapcard to roll out new featur...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Global in 2022 and plans for 2023, published by Eli Nathan on February 23, 2023 on The Effective Altruism Forum.SummaryWe ran three EA Global events in 2022, in London, San Francisco, and Washington, D.C. These conferences all had ~1300–1500 people each and were some of our biggest yet:These events had an average score of 9.02 to the question “How likely is it that you would recommend EA Global to a friend or colleague with similar interests to your own?”, rated out of 10.Those who filled out our feedback survey (which was a minority of attendees, around 1200 individuals in total across all three events) reported over 36,000 new connections made.This was the first time we ran three EA Globals in one year since 2017, and we only had ~1200 attendees total across all three of those events.We hosted and recorded lots of new content, a substantial amount of which is located on our YouTube channel.This was the first time trialing out a EA conference in D.C. of any kind. We generally received positive feedback about this event from attendees and stakeholders.Plans for 2023We’re reducing our spending in a lot of ways, most significantly by cutting some meals and the majority of travel grants, which we expect may somewhat reduce the overall ratings of our events. Please note that this is a fairly dynamic situation and we may update our spending plans as our financial situation changes.We’re doing three EA Globals in 2023, in the Bay Area and London again, and with our US east coast event in Boston rather than D.C. As well as EA Globals, there are also several upcoming EAGx events, check out the full list of confirmed and provisional events below.EA Global: Bay Area | 24–26 FebruaryEAGxCambridge | 17–19 MarchEAGxNordics | 21–23 AprilEA Global: London | 19–21 MayEAGxWarsaw | 9–11 June [provisional]EAGxNYC | July / August [provisional]EAGxBerlin | Early September [provisional]EAGxAustralia | Late September [provisional]EA Global: Boston | Oct 27–Oct 29EAGxVirtual | November [provisional]We’re aiming to have similar-sized conferences, though with the reduction in travel grants we expect the events to perhaps be a little smaller, maybe around 1000 people per EA Global.We recently completed a hiring round and now have ~4 FTEs working on the EA Global team.We’ve recently revamped our website and incorporated it into effectivealtruism.org — see here.We’ve switched over our backend systems from Zoho to Salesforce. This will help us integrate better with the rest of CEA’s products, and will hopefully create a smoother front and backend that’s better suited to our users. (Note that the switchover itself has been somewhat buggy, but we are clearing these up and hope to have minimal issues moving forwards.)We’re also trialing a referral system for applications, where we’ve given a select number of advisors working in EA community building the ability to admit people to the conference. If this goes well we may expand this program next year.Growth areasFood got generally negative reviews in 2022:Food is a notoriously hard area to get right and quality can vary a lot between venues, and we often have little or no choice between catering options.We’ve explored ways to improve the food quality, including hiring a catering consultant, but a lot of these options are cost prohibitive, and realistically we expect food quality to continue to be an issue moving forwards.Swapcard (our event application app) also got generally negative reviews in 2022:We explored and tested several competitor apps, though none of them seem better than Swapcard.We explored working with external developers to build our own event networking app, but eventually concluded that this would be too costly in terms of both time and money.We’ve been working with Swapcard to roll out new featur...]]>
Eli Nathan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:15 None full 4995
aJwcgm2nqiZu6zq2S_NL_EA_EA EA - Taking a leave of absence from Open Philanthropy to work on AI safety by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Taking a leave of absence from Open Philanthropy to work on AI safety, published by Holden Karnofsky on February 23, 2023 on The Effective Altruism Forum.I’m planning a leave of absence (aiming for around 3 months and potentially more) from Open Philanthropy, starting on March 8, to explore working directly on AI safety.I have a few different interventions I might explore. The first I explore will be AI safety standards: documented expectations (enforced via self-regulation at first, and potentially government regulation later) that AI labs won’t build and deploy systems that pose too much risk to the world, as evaluated by a systematic evaluation regime. (More here.) There’s significant interest from some AI labs in self-regulating via safety standards, and I want to see whether I can help with the work ARC and others are doing to hammer out standards that are both protective and practical - to the point where major AI labs are likely to sign on.During my leave, Alexander Berger will serve as sole CEO of Open Philanthropy (as he did during my parental leave in 2021).Depending on how things play out, I may end up working directly on AI safety full-time. Open Philanthropy will remain my employer for at least the start of my leave, but I’ll join or start another organization if I go full-time.The reasons I’m doing this:First, I’m very concerned about the possibility that transformative AI could be developed soon (possibly even within the decade - I don’t think this is >50% likely, but it seems too likely for my comfort). I want to be as helpful as possible, and I think the way to do this might be via working on AI safety directly rather than grantmaking.Second, as a general matter, I’ve always aspired to help build multiple organizations rather than running one indefinitely. I think the former is a better fit for my talents and interests.At both organizations I’ve co-founded (GiveWell and Open Philanthropy), I’ve had a goal from day one of helping to build an organization that can be great without me - and then moving on to build something else.I think this went well with GiveWell thanks to Elie Hassenfeld’s leadership. I hope Open Philanthropy can go well under Alexander’s leadership.Trying to get to that point has been a long-term project. Alexander, Cari, Dustin and I have been actively discussing the path to Open Philanthropy running without me since 2018.1 Our mid-2021 promotion of Alexander to co-CEO was a major step in this direction (putting him in charge of more than half of the organization’s employees and giving), and this is another step, which we’ve been discussing and preparing for for over a year (and announced internally at Open Philanthropy on January 20).I’ve become increasingly excited about various interventions to reduce AI risk, such as working on safety standards. I’m looking forward to experimenting with focusing my energy on AI safety.FootnotesThis was only a year after Open Philanthropy became a separate organization, but it was several years after Open Philanthropy started as part of GiveWell under the title “GiveWell Labs.” ↩Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/aJwcgm2nqiZu6zq2S/taking-a-leave-of-absence-from-open-philanthropy-to-work-on Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Taking a leave of absence from Open Philanthropy to work on AI safety, published by Holden Karnofsky on February 23, 2023 on The Effective Altruism Forum.I’m planning a leave of absence (aiming for around 3 months and potentially more) from Open Philanthropy, starting on March 8, to explore working directly on AI safety.I have a few different interventions I might explore. The first I explore will be AI safety standards: documented expectations (enforced via self-regulation at first, and potentially government regulation later) that AI labs won’t build and deploy systems that pose too much risk to the world, as evaluated by a systematic evaluation regime. (More here.) There’s significant interest from some AI labs in self-regulating via safety standards, and I want to see whether I can help with the work ARC and others are doing to hammer out standards that are both protective and practical - to the point where major AI labs are likely to sign on.During my leave, Alexander Berger will serve as sole CEO of Open Philanthropy (as he did during my parental leave in 2021).Depending on how things play out, I may end up working directly on AI safety full-time. Open Philanthropy will remain my employer for at least the start of my leave, but I’ll join or start another organization if I go full-time.The reasons I’m doing this:First, I’m very concerned about the possibility that transformative AI could be developed soon (possibly even within the decade - I don’t think this is >50% likely, but it seems too likely for my comfort). I want to be as helpful as possible, and I think the way to do this might be via working on AI safety directly rather than grantmaking.Second, as a general matter, I’ve always aspired to help build multiple organizations rather than running one indefinitely. I think the former is a better fit for my talents and interests.At both organizations I’ve co-founded (GiveWell and Open Philanthropy), I’ve had a goal from day one of helping to build an organization that can be great without me - and then moving on to build something else.I think this went well with GiveWell thanks to Elie Hassenfeld’s leadership. I hope Open Philanthropy can go well under Alexander’s leadership.Trying to get to that point has been a long-term project. Alexander, Cari, Dustin and I have been actively discussing the path to Open Philanthropy running without me since 2018.1 Our mid-2021 promotion of Alexander to co-CEO was a major step in this direction (putting him in charge of more than half of the organization’s employees and giving), and this is another step, which we’ve been discussing and preparing for for over a year (and announced internally at Open Philanthropy on January 20).I’ve become increasingly excited about various interventions to reduce AI risk, such as working on safety standards. I’m looking forward to experimenting with focusing my energy on AI safety.FootnotesThis was only a year after Open Philanthropy became a separate organization, but it was several years after Open Philanthropy started as part of GiveWell under the title “GiveWell Labs.” ↩Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 23 Feb 2023 19:27:37 +0000 EA - Taking a leave of absence from Open Philanthropy to work on AI safety by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Taking a leave of absence from Open Philanthropy to work on AI safety, published by Holden Karnofsky on February 23, 2023 on The Effective Altruism Forum.I’m planning a leave of absence (aiming for around 3 months and potentially more) from Open Philanthropy, starting on March 8, to explore working directly on AI safety.I have a few different interventions I might explore. The first I explore will be AI safety standards: documented expectations (enforced via self-regulation at first, and potentially government regulation later) that AI labs won’t build and deploy systems that pose too much risk to the world, as evaluated by a systematic evaluation regime. (More here.) There’s significant interest from some AI labs in self-regulating via safety standards, and I want to see whether I can help with the work ARC and others are doing to hammer out standards that are both protective and practical - to the point where major AI labs are likely to sign on.During my leave, Alexander Berger will serve as sole CEO of Open Philanthropy (as he did during my parental leave in 2021).Depending on how things play out, I may end up working directly on AI safety full-time. Open Philanthropy will remain my employer for at least the start of my leave, but I’ll join or start another organization if I go full-time.The reasons I’m doing this:First, I’m very concerned about the possibility that transformative AI could be developed soon (possibly even within the decade - I don’t think this is >50% likely, but it seems too likely for my comfort). I want to be as helpful as possible, and I think the way to do this might be via working on AI safety directly rather than grantmaking.Second, as a general matter, I’ve always aspired to help build multiple organizations rather than running one indefinitely. I think the former is a better fit for my talents and interests.At both organizations I’ve co-founded (GiveWell and Open Philanthropy), I’ve had a goal from day one of helping to build an organization that can be great without me - and then moving on to build something else.I think this went well with GiveWell thanks to Elie Hassenfeld’s leadership. I hope Open Philanthropy can go well under Alexander’s leadership.Trying to get to that point has been a long-term project. Alexander, Cari, Dustin and I have been actively discussing the path to Open Philanthropy running without me since 2018.1 Our mid-2021 promotion of Alexander to co-CEO was a major step in this direction (putting him in charge of more than half of the organization’s employees and giving), and this is another step, which we’ve been discussing and preparing for for over a year (and announced internally at Open Philanthropy on January 20).I’ve become increasingly excited about various interventions to reduce AI risk, such as working on safety standards. I’m looking forward to experimenting with focusing my energy on AI safety.FootnotesThis was only a year after Open Philanthropy became a separate organization, but it was several years after Open Philanthropy started as part of GiveWell under the title “GiveWell Labs.” ↩Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Taking a leave of absence from Open Philanthropy to work on AI safety, published by Holden Karnofsky on February 23, 2023 on The Effective Altruism Forum.I’m planning a leave of absence (aiming for around 3 months and potentially more) from Open Philanthropy, starting on March 8, to explore working directly on AI safety.I have a few different interventions I might explore. The first I explore will be AI safety standards: documented expectations (enforced via self-regulation at first, and potentially government regulation later) that AI labs won’t build and deploy systems that pose too much risk to the world, as evaluated by a systematic evaluation regime. (More here.) There’s significant interest from some AI labs in self-regulating via safety standards, and I want to see whether I can help with the work ARC and others are doing to hammer out standards that are both protective and practical - to the point where major AI labs are likely to sign on.During my leave, Alexander Berger will serve as sole CEO of Open Philanthropy (as he did during my parental leave in 2021).Depending on how things play out, I may end up working directly on AI safety full-time. Open Philanthropy will remain my employer for at least the start of my leave, but I’ll join or start another organization if I go full-time.The reasons I’m doing this:First, I’m very concerned about the possibility that transformative AI could be developed soon (possibly even within the decade - I don’t think this is >50% likely, but it seems too likely for my comfort). I want to be as helpful as possible, and I think the way to do this might be via working on AI safety directly rather than grantmaking.Second, as a general matter, I’ve always aspired to help build multiple organizations rather than running one indefinitely. I think the former is a better fit for my talents and interests.At both organizations I’ve co-founded (GiveWell and Open Philanthropy), I’ve had a goal from day one of helping to build an organization that can be great without me - and then moving on to build something else.I think this went well with GiveWell thanks to Elie Hassenfeld’s leadership. I hope Open Philanthropy can go well under Alexander’s leadership.Trying to get to that point has been a long-term project. Alexander, Cari, Dustin and I have been actively discussing the path to Open Philanthropy running without me since 2018.1 Our mid-2021 promotion of Alexander to co-CEO was a major step in this direction (putting him in charge of more than half of the organization’s employees and giving), and this is another step, which we’ve been discussing and preparing for for over a year (and announced internally at Open Philanthropy on January 20).I’ve become increasingly excited about various interventions to reduce AI risk, such as working on safety standards. I’m looking forward to experimenting with focusing my energy on AI safety.FootnotesThis was only a year after Open Philanthropy became a separate organization, but it was several years after Open Philanthropy started as part of GiveWell under the title “GiveWell Labs.” ↩Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Holden Karnofsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:07 None full 4997
5uZiBK4h5WcccjA2R_NL_EA_EA EA - EA is too New and Important to Schism by Wil Perkins Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is too New & Important to Schism, published by Wil Perkins on February 23, 2023 on The Effective Altruism Forum.As many of us have seen there has recently been a surge in discourse around people in the community with different views. Many of this underlying tension has only been brought about by large scandals that have broken in the last 6 months or so.I've seen a few people using language which, to me, seems schismatic. Discussing how there are two distinct and incompatible groups within EA, being shocked/hurt/feeling rejected by the movement, etc. I'd like to urge us to try and find reconciliation if possible.Influential Movements avoid Early SchismsIf you look through history at any major religious/political/social movements, most of them avoid having early schisms, or if they do, it creates significant issues and tension. It seems optimal to let movements develop loosely over time and become more diverse, before starting to draw hard lines between what "is" a part of the in group and what isn't.For instance, early Christianity had some schisms, but nothing major until the Council of Nicea in 325 A.D. This meant that Christianity could consolidate power/followers for centuries before actively breaking up into different groups.Another parallel is the infamous Sunni-Shia split in Islam, which caused massive amounts of bloodshed and still continues to this day. This schism still echos today, for instance with the civil war in Syria.For a more modern example, look at the New Atheism Movement which in many ways attracted similar people to EA. Relatively early on in the movement, in fact right as the movement gained popular awareness (similar to the moment right now in EA) many prominent folks in New Atheism advocated for New Atheism Plus. This was essentially an attempt to schism the movement along cultural / social justice lines, which quickly eroded the cohesion of the movement and ultimately contributed to its massive decline in relevance.Effective Altruism as a movement is relatively brand new - we can't afford major schisms or we may not continue as a relevant cultural force in 10-20 years.Getting Movement Building Right MattersSomething which I think is sometimes lost in community building discussions is that the stakes we're playing for are extremely high. My motivation to join EA was primarily because I saw major problems in the world, and people that were extremely dedicated to solving them. We are playing for the future, for the survival of the human race. We can't afford to let relatively petty squabbles divide us too much!Especially with advances in AGI, I know many people in the movement are more worried than ever that we will experience significant shifts via technology over the coming decades. Some have pointed out the possibility of Value Lock-in, or that as we rapidly increase our power our values may become stagnant, especially if for instance an AGI is controlled by a group with strong, anti-pluralistic values.Overall I hope to advocate for the idea of reconciliation within EA. We should work to disentangle our feelings from the future of the movement, and try to discuss how to have the most impact as we grow. My vote is that having a major schism is one of the worst things we could do for our impact - and is a common failure mode we should strive to avoid.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wil Perkins https://forum.effectivealtruism.org/posts/5uZiBK4h5WcccjA2R/ea-is-too-new-and-important-to-schism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is too New & Important to Schism, published by Wil Perkins on February 23, 2023 on The Effective Altruism Forum.As many of us have seen there has recently been a surge in discourse around people in the community with different views. Many of this underlying tension has only been brought about by large scandals that have broken in the last 6 months or so.I've seen a few people using language which, to me, seems schismatic. Discussing how there are two distinct and incompatible groups within EA, being shocked/hurt/feeling rejected by the movement, etc. I'd like to urge us to try and find reconciliation if possible.Influential Movements avoid Early SchismsIf you look through history at any major religious/political/social movements, most of them avoid having early schisms, or if they do, it creates significant issues and tension. It seems optimal to let movements develop loosely over time and become more diverse, before starting to draw hard lines between what "is" a part of the in group and what isn't.For instance, early Christianity had some schisms, but nothing major until the Council of Nicea in 325 A.D. This meant that Christianity could consolidate power/followers for centuries before actively breaking up into different groups.Another parallel is the infamous Sunni-Shia split in Islam, which caused massive amounts of bloodshed and still continues to this day. This schism still echos today, for instance with the civil war in Syria.For a more modern example, look at the New Atheism Movement which in many ways attracted similar people to EA. Relatively early on in the movement, in fact right as the movement gained popular awareness (similar to the moment right now in EA) many prominent folks in New Atheism advocated for New Atheism Plus. This was essentially an attempt to schism the movement along cultural / social justice lines, which quickly eroded the cohesion of the movement and ultimately contributed to its massive decline in relevance.Effective Altruism as a movement is relatively brand new - we can't afford major schisms or we may not continue as a relevant cultural force in 10-20 years.Getting Movement Building Right MattersSomething which I think is sometimes lost in community building discussions is that the stakes we're playing for are extremely high. My motivation to join EA was primarily because I saw major problems in the world, and people that were extremely dedicated to solving them. We are playing for the future, for the survival of the human race. We can't afford to let relatively petty squabbles divide us too much!Especially with advances in AGI, I know many people in the movement are more worried than ever that we will experience significant shifts via technology over the coming decades. Some have pointed out the possibility of Value Lock-in, or that as we rapidly increase our power our values may become stagnant, especially if for instance an AGI is controlled by a group with strong, anti-pluralistic values.Overall I hope to advocate for the idea of reconciliation within EA. We should work to disentangle our feelings from the future of the movement, and try to discuss how to have the most impact as we grow. My vote is that having a major schism is one of the worst things we could do for our impact - and is a common failure mode we should strive to avoid.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 23 Feb 2023 18:48:07 +0000 EA - EA is too New and Important to Schism by Wil Perkins Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is too New & Important to Schism, published by Wil Perkins on February 23, 2023 on The Effective Altruism Forum.As many of us have seen there has recently been a surge in discourse around people in the community with different views. Many of this underlying tension has only been brought about by large scandals that have broken in the last 6 months or so.I've seen a few people using language which, to me, seems schismatic. Discussing how there are two distinct and incompatible groups within EA, being shocked/hurt/feeling rejected by the movement, etc. I'd like to urge us to try and find reconciliation if possible.Influential Movements avoid Early SchismsIf you look through history at any major religious/political/social movements, most of them avoid having early schisms, or if they do, it creates significant issues and tension. It seems optimal to let movements develop loosely over time and become more diverse, before starting to draw hard lines between what "is" a part of the in group and what isn't.For instance, early Christianity had some schisms, but nothing major until the Council of Nicea in 325 A.D. This meant that Christianity could consolidate power/followers for centuries before actively breaking up into different groups.Another parallel is the infamous Sunni-Shia split in Islam, which caused massive amounts of bloodshed and still continues to this day. This schism still echos today, for instance with the civil war in Syria.For a more modern example, look at the New Atheism Movement which in many ways attracted similar people to EA. Relatively early on in the movement, in fact right as the movement gained popular awareness (similar to the moment right now in EA) many prominent folks in New Atheism advocated for New Atheism Plus. This was essentially an attempt to schism the movement along cultural / social justice lines, which quickly eroded the cohesion of the movement and ultimately contributed to its massive decline in relevance.Effective Altruism as a movement is relatively brand new - we can't afford major schisms or we may not continue as a relevant cultural force in 10-20 years.Getting Movement Building Right MattersSomething which I think is sometimes lost in community building discussions is that the stakes we're playing for are extremely high. My motivation to join EA was primarily because I saw major problems in the world, and people that were extremely dedicated to solving them. We are playing for the future, for the survival of the human race. We can't afford to let relatively petty squabbles divide us too much!Especially with advances in AGI, I know many people in the movement are more worried than ever that we will experience significant shifts via technology over the coming decades. Some have pointed out the possibility of Value Lock-in, or that as we rapidly increase our power our values may become stagnant, especially if for instance an AGI is controlled by a group with strong, anti-pluralistic values.Overall I hope to advocate for the idea of reconciliation within EA. We should work to disentangle our feelings from the future of the movement, and try to discuss how to have the most impact as we grow. My vote is that having a major schism is one of the worst things we could do for our impact - and is a common failure mode we should strive to avoid.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is too New & Important to Schism, published by Wil Perkins on February 23, 2023 on The Effective Altruism Forum.As many of us have seen there has recently been a surge in discourse around people in the community with different views. Many of this underlying tension has only been brought about by large scandals that have broken in the last 6 months or so.I've seen a few people using language which, to me, seems schismatic. Discussing how there are two distinct and incompatible groups within EA, being shocked/hurt/feeling rejected by the movement, etc. I'd like to urge us to try and find reconciliation if possible.Influential Movements avoid Early SchismsIf you look through history at any major religious/political/social movements, most of them avoid having early schisms, or if they do, it creates significant issues and tension. It seems optimal to let movements develop loosely over time and become more diverse, before starting to draw hard lines between what "is" a part of the in group and what isn't.For instance, early Christianity had some schisms, but nothing major until the Council of Nicea in 325 A.D. This meant that Christianity could consolidate power/followers for centuries before actively breaking up into different groups.Another parallel is the infamous Sunni-Shia split in Islam, which caused massive amounts of bloodshed and still continues to this day. This schism still echos today, for instance with the civil war in Syria.For a more modern example, look at the New Atheism Movement which in many ways attracted similar people to EA. Relatively early on in the movement, in fact right as the movement gained popular awareness (similar to the moment right now in EA) many prominent folks in New Atheism advocated for New Atheism Plus. This was essentially an attempt to schism the movement along cultural / social justice lines, which quickly eroded the cohesion of the movement and ultimately contributed to its massive decline in relevance.Effective Altruism as a movement is relatively brand new - we can't afford major schisms or we may not continue as a relevant cultural force in 10-20 years.Getting Movement Building Right MattersSomething which I think is sometimes lost in community building discussions is that the stakes we're playing for are extremely high. My motivation to join EA was primarily because I saw major problems in the world, and people that were extremely dedicated to solving them. We are playing for the future, for the survival of the human race. We can't afford to let relatively petty squabbles divide us too much!Especially with advances in AGI, I know many people in the movement are more worried than ever that we will experience significant shifts via technology over the coming decades. Some have pointed out the possibility of Value Lock-in, or that as we rapidly increase our power our values may become stagnant, especially if for instance an AGI is controlled by a group with strong, anti-pluralistic values.Overall I hope to advocate for the idea of reconciliation within EA. We should work to disentangle our feelings from the future of the movement, and try to discuss how to have the most impact as we grow. My vote is that having a major schism is one of the worst things we could do for our impact - and is a common failure mode we should strive to avoid.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wil Perkins https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:15 None full 4998
36xmKk4fXYTptfFXy_NL_EA_EA EA - How can we improve discussions on the Forum? by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How can we improve discussions on the Forum?, published by Lizka on February 23, 2023 on The Effective Altruism Forum.I’d like to run a very rough survey to get a better sense of:How you feel about the Frontpage change we’re currently testingWhat changes to the site — how it's set up and organized — you think could help us have discussions betterWhat conversations you'd like to see on the ForumAnd moreGive us your inputThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://forum.effectivealtruism.org/posts/36xmKk4fXYTptfFXy/how-can-we-improve-discussions-on-the-forum Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How can we improve discussions on the Forum?, published by Lizka on February 23, 2023 on The Effective Altruism Forum.I’d like to run a very rough survey to get a better sense of:How you feel about the Frontpage change we’re currently testingWhat changes to the site — how it's set up and organized — you think could help us have discussions betterWhat conversations you'd like to see on the ForumAnd moreGive us your inputThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 23 Feb 2023 08:11:04 +0000 EA - How can we improve discussions on the Forum? by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How can we improve discussions on the Forum?, published by Lizka on February 23, 2023 on The Effective Altruism Forum.I’d like to run a very rough survey to get a better sense of:How you feel about the Frontpage change we’re currently testingWhat changes to the site — how it's set up and organized — you think could help us have discussions betterWhat conversations you'd like to see on the ForumAnd moreGive us your inputThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How can we improve discussions on the Forum?, published by Lizka on February 23, 2023 on The Effective Altruism Forum.I’d like to run a very rough survey to get a better sense of:How you feel about the Frontpage change we’re currently testingWhat changes to the site — how it's set up and organized — you think could help us have discussions betterWhat conversations you'd like to see on the ForumAnd moreGive us your inputThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:42 None full 4999
otfCm4TMFGrs8vLng_NL_EA_EA EA - Faunalytics Analysis on Reasons for Abandoning Vegn Diets by JLRiedi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Faunalytics Analysis on Reasons for Abandoning Vegn Diets, published by JLRiedi on February 22, 2023 on The Effective Altruism Forum.Nonprofit research organization Faunalytics has released a new analysis on reasons people abandon vegan or vegetarian (vegn) diets, looking at the obstacles former vegns faced and what they would need to resume being vegn.Although causes for lapsing have been analyzed to an extent, a deeper analysis that considers people’s reasons in their own words is necessary to not only understand why people give up their vegn goals, but to find the best ways to help people stick with their commitment to vegnism and even lure back some of the lapsers.Read the full report here:BackgroundPeople have a variety of motivations for switching to plant-based diets, yet not all people who begin the transition to a vegan or vegetarian (collectively called vegn) diet maintain it long-term. In fact, Faunalytics’ study of current and former vegns (2014) found that the number of lapsed (former) vegans and vegetarians in the United States far surpasses the number of current vegns, and most who lapse do so within a year. Are these people the low-hanging fruit for diet advocates? They could be—there are many of them and they’re clearly at least somewhat willing to go vegn, so maybe more attention should be paid to the lapsers.That’s one possibility. The other, more pessimistic possibility, is that when we as advocates think our diet campaigns are successful, these are the people we think we’re convincing. That is, we see the part where they go vegn, but not the part where they later lapse back. This interpretation is one that a lot of people made when our study of current and former vegns released, but we don’t have strong evidence either way.This analysis, in which we looked at the obstacles faced by people who once pursued a vegn diet and what they would need to resume being vegn, aims to shed a bit more light on these questions. Although causes for lapsing have been analyzed to an extent, a deeper analysis that considers people’s reasons in their own words is necessary to not only understand why people give up their vegn goals, but to find the best ways to help people stick with their commitment to vegnism and even lure back some of the lapsers.Research TeamThis project’s lead author was Constanza Arévalo (Faunalytics). Dr. Jo Anderson (Faunalytics) reviewed and oversaw the work.ConclusionDiets Are More Than FoodFood plays an important role in our lives. More than just nutrition, food is a very personal yet social experience, a cultural identity, and at times, a religious or spiritual practice or symbol. Naturally, a good-tasting diet is important—especially when the idea is to maintain it long-term. However, lapsed vegns’ answers suggested that food dissatisfaction, although a very common struggle, was not the most crucial obstacle to overcome to return to vegnism. Instead, having access to vegn options, as well as the time and ability to prepare vegn meals (often alongside non-vegn meals for family), were much more common must-haves.Additionally, people’s feelings of healthiness while on their diet, seemed to hold a lot of weight. Many lapsed vegns who had faced issues managing their health named this as their main reason for lapsing. Similarly, Faunalytics (2022) found that people who felt unhealthy when first trying out a vegn diet were significantly more likely to lapse within the first six months than people who felt healthier. This was the case even if their initial motivation for going vegn wasn’t health-related.Seeking professional medical advice while pursuing a vegn diet (ideally from a doctor who understands and has experience with vegn diets) is the best way to manage any major concerns and get information about the vitamins and nutriti...]]>
JLRiedi https://forum.effectivealtruism.org/posts/otfCm4TMFGrs8vLng/faunalytics-analysis-on-reasons-for-abandoning-veg-n-diets Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Faunalytics Analysis on Reasons for Abandoning Vegn Diets, published by JLRiedi on February 22, 2023 on The Effective Altruism Forum.Nonprofit research organization Faunalytics has released a new analysis on reasons people abandon vegan or vegetarian (vegn) diets, looking at the obstacles former vegns faced and what they would need to resume being vegn.Although causes for lapsing have been analyzed to an extent, a deeper analysis that considers people’s reasons in their own words is necessary to not only understand why people give up their vegn goals, but to find the best ways to help people stick with their commitment to vegnism and even lure back some of the lapsers.Read the full report here:BackgroundPeople have a variety of motivations for switching to plant-based diets, yet not all people who begin the transition to a vegan or vegetarian (collectively called vegn) diet maintain it long-term. In fact, Faunalytics’ study of current and former vegns (2014) found that the number of lapsed (former) vegans and vegetarians in the United States far surpasses the number of current vegns, and most who lapse do so within a year. Are these people the low-hanging fruit for diet advocates? They could be—there are many of them and they’re clearly at least somewhat willing to go vegn, so maybe more attention should be paid to the lapsers.That’s one possibility. The other, more pessimistic possibility, is that when we as advocates think our diet campaigns are successful, these are the people we think we’re convincing. That is, we see the part where they go vegn, but not the part where they later lapse back. This interpretation is one that a lot of people made when our study of current and former vegns released, but we don’t have strong evidence either way.This analysis, in which we looked at the obstacles faced by people who once pursued a vegn diet and what they would need to resume being vegn, aims to shed a bit more light on these questions. Although causes for lapsing have been analyzed to an extent, a deeper analysis that considers people’s reasons in their own words is necessary to not only understand why people give up their vegn goals, but to find the best ways to help people stick with their commitment to vegnism and even lure back some of the lapsers.Research TeamThis project’s lead author was Constanza Arévalo (Faunalytics). Dr. Jo Anderson (Faunalytics) reviewed and oversaw the work.ConclusionDiets Are More Than FoodFood plays an important role in our lives. More than just nutrition, food is a very personal yet social experience, a cultural identity, and at times, a religious or spiritual practice or symbol. Naturally, a good-tasting diet is important—especially when the idea is to maintain it long-term. However, lapsed vegns’ answers suggested that food dissatisfaction, although a very common struggle, was not the most crucial obstacle to overcome to return to vegnism. Instead, having access to vegn options, as well as the time and ability to prepare vegn meals (often alongside non-vegn meals for family), were much more common must-haves.Additionally, people’s feelings of healthiness while on their diet, seemed to hold a lot of weight. Many lapsed vegns who had faced issues managing their health named this as their main reason for lapsing. Similarly, Faunalytics (2022) found that people who felt unhealthy when first trying out a vegn diet were significantly more likely to lapse within the first six months than people who felt healthier. This was the case even if their initial motivation for going vegn wasn’t health-related.Seeking professional medical advice while pursuing a vegn diet (ideally from a doctor who understands and has experience with vegn diets) is the best way to manage any major concerns and get information about the vitamins and nutriti...]]>
Thu, 23 Feb 2023 05:32:13 +0000 EA - Faunalytics Analysis on Reasons for Abandoning Vegn Diets by JLRiedi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Faunalytics Analysis on Reasons for Abandoning Vegn Diets, published by JLRiedi on February 22, 2023 on The Effective Altruism Forum.Nonprofit research organization Faunalytics has released a new analysis on reasons people abandon vegan or vegetarian (vegn) diets, looking at the obstacles former vegns faced and what they would need to resume being vegn.Although causes for lapsing have been analyzed to an extent, a deeper analysis that considers people’s reasons in their own words is necessary to not only understand why people give up their vegn goals, but to find the best ways to help people stick with their commitment to vegnism and even lure back some of the lapsers.Read the full report here:BackgroundPeople have a variety of motivations for switching to plant-based diets, yet not all people who begin the transition to a vegan or vegetarian (collectively called vegn) diet maintain it long-term. In fact, Faunalytics’ study of current and former vegns (2014) found that the number of lapsed (former) vegans and vegetarians in the United States far surpasses the number of current vegns, and most who lapse do so within a year. Are these people the low-hanging fruit for diet advocates? They could be—there are many of them and they’re clearly at least somewhat willing to go vegn, so maybe more attention should be paid to the lapsers.That’s one possibility. The other, more pessimistic possibility, is that when we as advocates think our diet campaigns are successful, these are the people we think we’re convincing. That is, we see the part where they go vegn, but not the part where they later lapse back. This interpretation is one that a lot of people made when our study of current and former vegns released, but we don’t have strong evidence either way.This analysis, in which we looked at the obstacles faced by people who once pursued a vegn diet and what they would need to resume being vegn, aims to shed a bit more light on these questions. Although causes for lapsing have been analyzed to an extent, a deeper analysis that considers people’s reasons in their own words is necessary to not only understand why people give up their vegn goals, but to find the best ways to help people stick with their commitment to vegnism and even lure back some of the lapsers.Research TeamThis project’s lead author was Constanza Arévalo (Faunalytics). Dr. Jo Anderson (Faunalytics) reviewed and oversaw the work.ConclusionDiets Are More Than FoodFood plays an important role in our lives. More than just nutrition, food is a very personal yet social experience, a cultural identity, and at times, a religious or spiritual practice or symbol. Naturally, a good-tasting diet is important—especially when the idea is to maintain it long-term. However, lapsed vegns’ answers suggested that food dissatisfaction, although a very common struggle, was not the most crucial obstacle to overcome to return to vegnism. Instead, having access to vegn options, as well as the time and ability to prepare vegn meals (often alongside non-vegn meals for family), were much more common must-haves.Additionally, people’s feelings of healthiness while on their diet, seemed to hold a lot of weight. Many lapsed vegns who had faced issues managing their health named this as their main reason for lapsing. Similarly, Faunalytics (2022) found that people who felt unhealthy when first trying out a vegn diet were significantly more likely to lapse within the first six months than people who felt healthier. This was the case even if their initial motivation for going vegn wasn’t health-related.Seeking professional medical advice while pursuing a vegn diet (ideally from a doctor who understands and has experience with vegn diets) is the best way to manage any major concerns and get information about the vitamins and nutriti...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Faunalytics Analysis on Reasons for Abandoning Vegn Diets, published by JLRiedi on February 22, 2023 on The Effective Altruism Forum.Nonprofit research organization Faunalytics has released a new analysis on reasons people abandon vegan or vegetarian (vegn) diets, looking at the obstacles former vegns faced and what they would need to resume being vegn.Although causes for lapsing have been analyzed to an extent, a deeper analysis that considers people’s reasons in their own words is necessary to not only understand why people give up their vegn goals, but to find the best ways to help people stick with their commitment to vegnism and even lure back some of the lapsers.Read the full report here:BackgroundPeople have a variety of motivations for switching to plant-based diets, yet not all people who begin the transition to a vegan or vegetarian (collectively called vegn) diet maintain it long-term. In fact, Faunalytics’ study of current and former vegns (2014) found that the number of lapsed (former) vegans and vegetarians in the United States far surpasses the number of current vegns, and most who lapse do so within a year. Are these people the low-hanging fruit for diet advocates? They could be—there are many of them and they’re clearly at least somewhat willing to go vegn, so maybe more attention should be paid to the lapsers.That’s one possibility. The other, more pessimistic possibility, is that when we as advocates think our diet campaigns are successful, these are the people we think we’re convincing. That is, we see the part where they go vegn, but not the part where they later lapse back. This interpretation is one that a lot of people made when our study of current and former vegns released, but we don’t have strong evidence either way.This analysis, in which we looked at the obstacles faced by people who once pursued a vegn diet and what they would need to resume being vegn, aims to shed a bit more light on these questions. Although causes for lapsing have been analyzed to an extent, a deeper analysis that considers people’s reasons in their own words is necessary to not only understand why people give up their vegn goals, but to find the best ways to help people stick with their commitment to vegnism and even lure back some of the lapsers.Research TeamThis project’s lead author was Constanza Arévalo (Faunalytics). Dr. Jo Anderson (Faunalytics) reviewed and oversaw the work.ConclusionDiets Are More Than FoodFood plays an important role in our lives. More than just nutrition, food is a very personal yet social experience, a cultural identity, and at times, a religious or spiritual practice or symbol. Naturally, a good-tasting diet is important—especially when the idea is to maintain it long-term. However, lapsed vegns’ answers suggested that food dissatisfaction, although a very common struggle, was not the most crucial obstacle to overcome to return to vegnism. Instead, having access to vegn options, as well as the time and ability to prepare vegn meals (often alongside non-vegn meals for family), were much more common must-haves.Additionally, people’s feelings of healthiness while on their diet, seemed to hold a lot of weight. Many lapsed vegns who had faced issues managing their health named this as their main reason for lapsing. Similarly, Faunalytics (2022) found that people who felt unhealthy when first trying out a vegn diet were significantly more likely to lapse within the first six months than people who felt healthier. This was the case even if their initial motivation for going vegn wasn’t health-related.Seeking professional medical advice while pursuing a vegn diet (ideally from a doctor who understands and has experience with vegn diets) is the best way to manage any major concerns and get information about the vitamins and nutriti...]]>
JLRiedi https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:01 None full 5000
Bsdq5wK63vLEB3Gqg_NL_EA_EA EA - Announcing the Launch of the Insect Institute by Dustin Crummett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Launch of the Insect Institute, published by Dustin Crummett on February 22, 2023 on The Effective Altruism Forum.The Insect Institute, a fiscally-sponsored project of Rethink Priorities, is excited to announce its official launch. I (Dustin Crummett) am the executive director of this new initiative.The Insect Institute was created to focus on the rapidly growing use of insects as food and feed. Our aim is to work with policymakers, industry, and other relevant stakeholders to address key uncertainties involving animal welfare, public health, and environmental sustainability. As this industry evolves over time, we may also expand our work to other areas.While we don’t currently have any open positions, we do expect to grow our team in the future. If you are interested in working with us, please feel free to submit an expression of interest through our contact form, or sign up for our email list (at the bottom of our home page) to stay up to date on future developments and opportunities.I look forward to seeing many of you at EAG Bay Area—please come say hello if you’d like to chat! I’m also very happy to field questions via DM or via email to dustin@insectinstitute.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Dustin Crummett https://forum.effectivealtruism.org/posts/Bsdq5wK63vLEB3Gqg/announcing-the-launch-of-the-insect-institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Launch of the Insect Institute, published by Dustin Crummett on February 22, 2023 on The Effective Altruism Forum.The Insect Institute, a fiscally-sponsored project of Rethink Priorities, is excited to announce its official launch. I (Dustin Crummett) am the executive director of this new initiative.The Insect Institute was created to focus on the rapidly growing use of insects as food and feed. Our aim is to work with policymakers, industry, and other relevant stakeholders to address key uncertainties involving animal welfare, public health, and environmental sustainability. As this industry evolves over time, we may also expand our work to other areas.While we don’t currently have any open positions, we do expect to grow our team in the future. If you are interested in working with us, please feel free to submit an expression of interest through our contact form, or sign up for our email list (at the bottom of our home page) to stay up to date on future developments and opportunities.I look forward to seeing many of you at EAG Bay Area—please come say hello if you’d like to chat! I’m also very happy to field questions via DM or via email to dustin@insectinstitute.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 22 Feb 2023 19:50:27 +0000 EA - Announcing the Launch of the Insect Institute by Dustin Crummett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Launch of the Insect Institute, published by Dustin Crummett on February 22, 2023 on The Effective Altruism Forum.The Insect Institute, a fiscally-sponsored project of Rethink Priorities, is excited to announce its official launch. I (Dustin Crummett) am the executive director of this new initiative.The Insect Institute was created to focus on the rapidly growing use of insects as food and feed. Our aim is to work with policymakers, industry, and other relevant stakeholders to address key uncertainties involving animal welfare, public health, and environmental sustainability. As this industry evolves over time, we may also expand our work to other areas.While we don’t currently have any open positions, we do expect to grow our team in the future. If you are interested in working with us, please feel free to submit an expression of interest through our contact form, or sign up for our email list (at the bottom of our home page) to stay up to date on future developments and opportunities.I look forward to seeing many of you at EAG Bay Area—please come say hello if you’d like to chat! I’m also very happy to field questions via DM or via email to dustin@insectinstitute.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Launch of the Insect Institute, published by Dustin Crummett on February 22, 2023 on The Effective Altruism Forum.The Insect Institute, a fiscally-sponsored project of Rethink Priorities, is excited to announce its official launch. I (Dustin Crummett) am the executive director of this new initiative.The Insect Institute was created to focus on the rapidly growing use of insects as food and feed. Our aim is to work with policymakers, industry, and other relevant stakeholders to address key uncertainties involving animal welfare, public health, and environmental sustainability. As this industry evolves over time, we may also expand our work to other areas.While we don’t currently have any open positions, we do expect to grow our team in the future. If you are interested in working with us, please feel free to submit an expression of interest through our contact form, or sign up for our email list (at the bottom of our home page) to stay up to date on future developments and opportunities.I look forward to seeing many of you at EAG Bay Area—please come say hello if you’d like to chat! I’m also very happy to field questions via DM or via email to dustin@insectinstitute.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Dustin Crummett https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:24 None full 4987
7cfdwqZkhuKFACpeG_NL_EA_EA EA - Cyborg Periods: There will be multiple AI transitions by Jan Kulveit Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cyborg Periods: There will be multiple AI transitions, published by Jan Kulveit on February 22, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jan Kulveit https://forum.effectivealtruism.org/posts/7cfdwqZkhuKFACpeG/cyborg-periods-there-will-be-multiple-ai-transitions Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cyborg Periods: There will be multiple AI transitions, published by Jan Kulveit on February 22, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 22 Feb 2023 18:05:32 +0000 EA - Cyborg Periods: There will be multiple AI transitions by Jan Kulveit Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cyborg Periods: There will be multiple AI transitions, published by Jan Kulveit on February 22, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cyborg Periods: There will be multiple AI transitions, published by Jan Kulveit on February 22, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jan Kulveit https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 4988
aGkLx2hfr9s3mSdng_NL_EA_EA EA - Consider not sleeping around within the community by Patrick Sue Domin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider not sleeping around within the community, published by Patrick Sue Domin on February 22, 2023 on The Effective Altruism Forum.Posted under pseudonym for reasons I’d rather not get into. If it’s relevant, I’m pretty involved in EA. I’ve been to several EAGs and I do direct work.tldr I think many more people in the community should consider refraining from sleeping around within the community. I especially think people should consider refraining from sleeping around within EA if they have two or more of the following traits- high status in EA, a man who sleeps with women, and socially clumsy.I think the community would be a more welcoming place, with less sexual misconduct and less other sexually unwelcome behaviour, if more EAs chose to personally refrain from sleeping around within EA or attempting to do so. Most functional institutions outside of EA, from companies to friend groups to extended families, have developed norms against sleeping around within the group. We obviously don’t want to simply unquestionably accept all of society’s norms, but I think in this case those norms address real problems.I worry that as a group, EAs run a risk of discarding valuable cultural practices that don’t immediately make sense in a first principles way, and that this tendency can have particularly high costs where sex is involved (Owen more or less admitted this was a factor in his behaviour in his statement/apology: “I was leaning into my own view-at-the-time about what good conduct looked like, and interested in experimenting to find ways to build a better culture than society-at-large has”).Regarding sleeping around within a tight-knit community, I think this behaviour has risks whether the pursuer is successful or not. Failed attempts at sleeping with someone can very often lead to awkwardness or uncomfortability. In EA, where employment and funding may be front of mind, this uncomfortability may be increased a lot, and there may be no way for the person who was pursued to realistically avoid the pursuer in the future if they want to without major career repercussions. Successful attempts at sleeping around can obviously also cause all sorts of drama, either shortly after or down the road.Personal factors that may increase risksI think within EA, the risks of harm are increased greatly if the pursuer has any of the following three traits:High status within EA- this can create bad power dynamics and awkward social pressure. First, people generally don’t like pissing off high status people within their social circles as there may be social repercussions to doing so. Second, high status people within EA often control funding and employment decisions. Even if the pursuer isn’t in such a position now, they might wind up in one in the future. Third, high status EAs often talk to other high status EAs, so an unjustified bad reputation can spread to other figures in the movement who control funding or employment. Fourth, many EAs consider the community to be their one best shot at living the kind of ethical life they want, raising the stakes a bunch. Fifth, the moralising aspect of EA may make some people find it more uncomfortable to rebuff a high status EA.A man pursuing a woman (such as a heterosexual man or a bi-/pansexual man pursuing a woman)- this factor can sometimes be an elephant that people dance around in discussions, but I’ll just address it head on. On average men are more assertive, aggressive, and physically intimidating than women. On average women are more perceptive about subtle social cues and find it more awkward when those subtle social cues are ignored. My sense is these factors are pretty robust across cultures, but I don’t think it matters for this discussion what the cause of these average differences are. Add to all that, the EA communit...]]>
Patrick Sue Domin https://forum.effectivealtruism.org/posts/aGkLx2hfr9s3mSdng/consider-not-sleeping-around-within-the-community-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider not sleeping around within the community, published by Patrick Sue Domin on February 22, 2023 on The Effective Altruism Forum.Posted under pseudonym for reasons I’d rather not get into. If it’s relevant, I’m pretty involved in EA. I’ve been to several EAGs and I do direct work.tldr I think many more people in the community should consider refraining from sleeping around within the community. I especially think people should consider refraining from sleeping around within EA if they have two or more of the following traits- high status in EA, a man who sleeps with women, and socially clumsy.I think the community would be a more welcoming place, with less sexual misconduct and less other sexually unwelcome behaviour, if more EAs chose to personally refrain from sleeping around within EA or attempting to do so. Most functional institutions outside of EA, from companies to friend groups to extended families, have developed norms against sleeping around within the group. We obviously don’t want to simply unquestionably accept all of society’s norms, but I think in this case those norms address real problems.I worry that as a group, EAs run a risk of discarding valuable cultural practices that don’t immediately make sense in a first principles way, and that this tendency can have particularly high costs where sex is involved (Owen more or less admitted this was a factor in his behaviour in his statement/apology: “I was leaning into my own view-at-the-time about what good conduct looked like, and interested in experimenting to find ways to build a better culture than society-at-large has”).Regarding sleeping around within a tight-knit community, I think this behaviour has risks whether the pursuer is successful or not. Failed attempts at sleeping with someone can very often lead to awkwardness or uncomfortability. In EA, where employment and funding may be front of mind, this uncomfortability may be increased a lot, and there may be no way for the person who was pursued to realistically avoid the pursuer in the future if they want to without major career repercussions. Successful attempts at sleeping around can obviously also cause all sorts of drama, either shortly after or down the road.Personal factors that may increase risksI think within EA, the risks of harm are increased greatly if the pursuer has any of the following three traits:High status within EA- this can create bad power dynamics and awkward social pressure. First, people generally don’t like pissing off high status people within their social circles as there may be social repercussions to doing so. Second, high status people within EA often control funding and employment decisions. Even if the pursuer isn’t in such a position now, they might wind up in one in the future. Third, high status EAs often talk to other high status EAs, so an unjustified bad reputation can spread to other figures in the movement who control funding or employment. Fourth, many EAs consider the community to be their one best shot at living the kind of ethical life they want, raising the stakes a bunch. Fifth, the moralising aspect of EA may make some people find it more uncomfortable to rebuff a high status EA.A man pursuing a woman (such as a heterosexual man or a bi-/pansexual man pursuing a woman)- this factor can sometimes be an elephant that people dance around in discussions, but I’ll just address it head on. On average men are more assertive, aggressive, and physically intimidating than women. On average women are more perceptive about subtle social cues and find it more awkward when those subtle social cues are ignored. My sense is these factors are pretty robust across cultures, but I don’t think it matters for this discussion what the cause of these average differences are. Add to all that, the EA communit...]]>
Wed, 22 Feb 2023 10:46:29 +0000 EA - Consider not sleeping around within the community by Patrick Sue Domin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider not sleeping around within the community, published by Patrick Sue Domin on February 22, 2023 on The Effective Altruism Forum.Posted under pseudonym for reasons I’d rather not get into. If it’s relevant, I’m pretty involved in EA. I’ve been to several EAGs and I do direct work.tldr I think many more people in the community should consider refraining from sleeping around within the community. I especially think people should consider refraining from sleeping around within EA if they have two or more of the following traits- high status in EA, a man who sleeps with women, and socially clumsy.I think the community would be a more welcoming place, with less sexual misconduct and less other sexually unwelcome behaviour, if more EAs chose to personally refrain from sleeping around within EA or attempting to do so. Most functional institutions outside of EA, from companies to friend groups to extended families, have developed norms against sleeping around within the group. We obviously don’t want to simply unquestionably accept all of society’s norms, but I think in this case those norms address real problems.I worry that as a group, EAs run a risk of discarding valuable cultural practices that don’t immediately make sense in a first principles way, and that this tendency can have particularly high costs where sex is involved (Owen more or less admitted this was a factor in his behaviour in his statement/apology: “I was leaning into my own view-at-the-time about what good conduct looked like, and interested in experimenting to find ways to build a better culture than society-at-large has”).Regarding sleeping around within a tight-knit community, I think this behaviour has risks whether the pursuer is successful or not. Failed attempts at sleeping with someone can very often lead to awkwardness or uncomfortability. In EA, where employment and funding may be front of mind, this uncomfortability may be increased a lot, and there may be no way for the person who was pursued to realistically avoid the pursuer in the future if they want to without major career repercussions. Successful attempts at sleeping around can obviously also cause all sorts of drama, either shortly after or down the road.Personal factors that may increase risksI think within EA, the risks of harm are increased greatly if the pursuer has any of the following three traits:High status within EA- this can create bad power dynamics and awkward social pressure. First, people generally don’t like pissing off high status people within their social circles as there may be social repercussions to doing so. Second, high status people within EA often control funding and employment decisions. Even if the pursuer isn’t in such a position now, they might wind up in one in the future. Third, high status EAs often talk to other high status EAs, so an unjustified bad reputation can spread to other figures in the movement who control funding or employment. Fourth, many EAs consider the community to be their one best shot at living the kind of ethical life they want, raising the stakes a bunch. Fifth, the moralising aspect of EA may make some people find it more uncomfortable to rebuff a high status EA.A man pursuing a woman (such as a heterosexual man or a bi-/pansexual man pursuing a woman)- this factor can sometimes be an elephant that people dance around in discussions, but I’ll just address it head on. On average men are more assertive, aggressive, and physically intimidating than women. On average women are more perceptive about subtle social cues and find it more awkward when those subtle social cues are ignored. My sense is these factors are pretty robust across cultures, but I don’t think it matters for this discussion what the cause of these average differences are. Add to all that, the EA communit...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider not sleeping around within the community, published by Patrick Sue Domin on February 22, 2023 on The Effective Altruism Forum.Posted under pseudonym for reasons I’d rather not get into. If it’s relevant, I’m pretty involved in EA. I’ve been to several EAGs and I do direct work.tldr I think many more people in the community should consider refraining from sleeping around within the community. I especially think people should consider refraining from sleeping around within EA if they have two or more of the following traits- high status in EA, a man who sleeps with women, and socially clumsy.I think the community would be a more welcoming place, with less sexual misconduct and less other sexually unwelcome behaviour, if more EAs chose to personally refrain from sleeping around within EA or attempting to do so. Most functional institutions outside of EA, from companies to friend groups to extended families, have developed norms against sleeping around within the group. We obviously don’t want to simply unquestionably accept all of society’s norms, but I think in this case those norms address real problems.I worry that as a group, EAs run a risk of discarding valuable cultural practices that don’t immediately make sense in a first principles way, and that this tendency can have particularly high costs where sex is involved (Owen more or less admitted this was a factor in his behaviour in his statement/apology: “I was leaning into my own view-at-the-time about what good conduct looked like, and interested in experimenting to find ways to build a better culture than society-at-large has”).Regarding sleeping around within a tight-knit community, I think this behaviour has risks whether the pursuer is successful or not. Failed attempts at sleeping with someone can very often lead to awkwardness or uncomfortability. In EA, where employment and funding may be front of mind, this uncomfortability may be increased a lot, and there may be no way for the person who was pursued to realistically avoid the pursuer in the future if they want to without major career repercussions. Successful attempts at sleeping around can obviously also cause all sorts of drama, either shortly after or down the road.Personal factors that may increase risksI think within EA, the risks of harm are increased greatly if the pursuer has any of the following three traits:High status within EA- this can create bad power dynamics and awkward social pressure. First, people generally don’t like pissing off high status people within their social circles as there may be social repercussions to doing so. Second, high status people within EA often control funding and employment decisions. Even if the pursuer isn’t in such a position now, they might wind up in one in the future. Third, high status EAs often talk to other high status EAs, so an unjustified bad reputation can spread to other figures in the movement who control funding or employment. Fourth, many EAs consider the community to be their one best shot at living the kind of ethical life they want, raising the stakes a bunch. Fifth, the moralising aspect of EA may make some people find it more uncomfortable to rebuff a high status EA.A man pursuing a woman (such as a heterosexual man or a bi-/pansexual man pursuing a woman)- this factor can sometimes be an elephant that people dance around in discussions, but I’ll just address it head on. On average men are more assertive, aggressive, and physically intimidating than women. On average women are more perceptive about subtle social cues and find it more awkward when those subtle social cues are ignored. My sense is these factors are pretty robust across cultures, but I don’t think it matters for this discussion what the cause of these average differences are. Add to all that, the EA communit...]]>
Patrick Sue Domin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:17 None full 4977
udnaqtaw9FFygjx2f_NL_EA_EA EA - A list of EA-relevant business books I've read by Drew Spartz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A list of EA-relevant business books I've read, published by Drew Spartz on February 21, 2023 on The Effective Altruism Forum.Some have suggested EA is too insular and needs to learn from other fields. In this vein, I think there are important mental models from the for-profit world that are underutilized by non-profits.After all, business can be thought of as the study of how to accomplish goals as an organization - how to get things done in the real world. EA needs the right mix of theory and real world execution. If you replace the word “profit” with “impact”, you’ll find a large percentage of lessons can be cross-applied.Eight months ago, I challenged myself to read a book a day for a year. I've been posting daily summaries on social media and had enough EAs reach out to me for book recs that, inspired by Michael Aird and Anna Riedl, I thought it might be worth sharing my all-time favorites here.Below are the best ~50 out of the ~500 books I read in the past few years. I’m an entrepreneur so they’re mostly business-related. Bold = extra-recommended.If you’d like any more specific recommendations feel free to leave a comment and I can try to be helpful.Also - I’m hosting an unofficial entrepreneur meetup at EAG Bay Area. Message me on SwapCard for details or think it might be high impact to connect :)The best ~50 books:Fundraising:FundraisingThe Power Law: Venture Capital and the Making of the New FutureLeadership/Management:The Hard Thing About Hard Things: Building a Business When There Are No Easy AnswersThe Advantage: Why Organizational Health Trumps Everything Else In BusinessThe Coaching Habit: Say Less, Ask More & Change the Way You Lead ForeverEntrepreneurship/Startups:Running LeanThe Founder's Dilemmas: Anticipating and Avoiding the Pitfalls That Can Sink a StartupZero to One: Notes on Startups, or How to Build the FutureThe Startup Owner's Manual: The Step-By-Step Guide for Building a Great CompanyStrategy/Innovation:The Mom Test: How to talk to customers & learn if your business is a good idea when everyone is lying to youScaling Up: How a Few Companies Make It...and Why the Rest Don'tOperations/Get Shit Done:The Goal: A Process of Ongoing ImprovementThe Phoenix Project: A Novel about IT, DevOps, and Helping Your Business WinMaking Work Visible: Exposing Time Theft to Optimize Work & FlowStatistics/Forecasting:How to Measure Anything: Finding the Value of Intangibles in BusinessSuperforecasting: The Art and Science of PredictionAntifragile: Things That Gain from DisorderWriting/Storytelling:Wired for Story: The Writer's Guide to Using Brain Science to Hook Readers from the Very First SentenceThe Story Grid: What Good Editors KnowProduct/Design/User Experience:The Cold Start Problem: How to Start and Scale Network EffectsThe Lean Product Playbook: How to Innovate with Minimum Viable Products and Rapid Customer FeedbackPsychology/Influence:SPIN Selling (unfortunate acronym)The Elephant in the Brain: Hidden Motives in Everyday LifeInfluence: The Psychology of PersuasionOutreach/Marketing/Advocacy:80/20 Sales and Marketing: The Definitive Guide to Working Less and Making MoreTraction: How Any Startup Can Achieve Explosive Customer GrowthHow to learn things faster:Ultralearning: Master Hard Skills, Outsmart the Competition, and Accelerate Your CareerMake It Stick: The Science of Successful LearningThe Little Book of Talent: 52 Tips for Improving Your SkillsPersonal Development:The Confident Mind: A Battle-Tested Guide to Unshakable PerformanceThe Almanack of Naval Ravikant: A Guide to Wealth and HappinessAtomic HabitsRecruiting/Hiring:RecruitingWho: The A Method for HiringNegotiating:Negotiation GeniusNever Split the Difference: Negotiating As If Your Life Depended On ItSecrets of Power Negotiating: I...]]>
Drew Spartz https://forum.effectivealtruism.org/posts/udnaqtaw9FFygjx2f/a-list-of-ea-relevant-business-books-i-ve-read Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A list of EA-relevant business books I've read, published by Drew Spartz on February 21, 2023 on The Effective Altruism Forum.Some have suggested EA is too insular and needs to learn from other fields. In this vein, I think there are important mental models from the for-profit world that are underutilized by non-profits.After all, business can be thought of as the study of how to accomplish goals as an organization - how to get things done in the real world. EA needs the right mix of theory and real world execution. If you replace the word “profit” with “impact”, you’ll find a large percentage of lessons can be cross-applied.Eight months ago, I challenged myself to read a book a day for a year. I've been posting daily summaries on social media and had enough EAs reach out to me for book recs that, inspired by Michael Aird and Anna Riedl, I thought it might be worth sharing my all-time favorites here.Below are the best ~50 out of the ~500 books I read in the past few years. I’m an entrepreneur so they’re mostly business-related. Bold = extra-recommended.If you’d like any more specific recommendations feel free to leave a comment and I can try to be helpful.Also - I’m hosting an unofficial entrepreneur meetup at EAG Bay Area. Message me on SwapCard for details or think it might be high impact to connect :)The best ~50 books:Fundraising:FundraisingThe Power Law: Venture Capital and the Making of the New FutureLeadership/Management:The Hard Thing About Hard Things: Building a Business When There Are No Easy AnswersThe Advantage: Why Organizational Health Trumps Everything Else In BusinessThe Coaching Habit: Say Less, Ask More & Change the Way You Lead ForeverEntrepreneurship/Startups:Running LeanThe Founder's Dilemmas: Anticipating and Avoiding the Pitfalls That Can Sink a StartupZero to One: Notes on Startups, or How to Build the FutureThe Startup Owner's Manual: The Step-By-Step Guide for Building a Great CompanyStrategy/Innovation:The Mom Test: How to talk to customers & learn if your business is a good idea when everyone is lying to youScaling Up: How a Few Companies Make It...and Why the Rest Don'tOperations/Get Shit Done:The Goal: A Process of Ongoing ImprovementThe Phoenix Project: A Novel about IT, DevOps, and Helping Your Business WinMaking Work Visible: Exposing Time Theft to Optimize Work & FlowStatistics/Forecasting:How to Measure Anything: Finding the Value of Intangibles in BusinessSuperforecasting: The Art and Science of PredictionAntifragile: Things That Gain from DisorderWriting/Storytelling:Wired for Story: The Writer's Guide to Using Brain Science to Hook Readers from the Very First SentenceThe Story Grid: What Good Editors KnowProduct/Design/User Experience:The Cold Start Problem: How to Start and Scale Network EffectsThe Lean Product Playbook: How to Innovate with Minimum Viable Products and Rapid Customer FeedbackPsychology/Influence:SPIN Selling (unfortunate acronym)The Elephant in the Brain: Hidden Motives in Everyday LifeInfluence: The Psychology of PersuasionOutreach/Marketing/Advocacy:80/20 Sales and Marketing: The Definitive Guide to Working Less and Making MoreTraction: How Any Startup Can Achieve Explosive Customer GrowthHow to learn things faster:Ultralearning: Master Hard Skills, Outsmart the Competition, and Accelerate Your CareerMake It Stick: The Science of Successful LearningThe Little Book of Talent: 52 Tips for Improving Your SkillsPersonal Development:The Confident Mind: A Battle-Tested Guide to Unshakable PerformanceThe Almanack of Naval Ravikant: A Guide to Wealth and HappinessAtomic HabitsRecruiting/Hiring:RecruitingWho: The A Method for HiringNegotiating:Negotiation GeniusNever Split the Difference: Negotiating As If Your Life Depended On ItSecrets of Power Negotiating: I...]]>
Wed, 22 Feb 2023 06:01:26 +0000 EA - A list of EA-relevant business books I've read by Drew Spartz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A list of EA-relevant business books I've read, published by Drew Spartz on February 21, 2023 on The Effective Altruism Forum.Some have suggested EA is too insular and needs to learn from other fields. In this vein, I think there are important mental models from the for-profit world that are underutilized by non-profits.After all, business can be thought of as the study of how to accomplish goals as an organization - how to get things done in the real world. EA needs the right mix of theory and real world execution. If you replace the word “profit” with “impact”, you’ll find a large percentage of lessons can be cross-applied.Eight months ago, I challenged myself to read a book a day for a year. I've been posting daily summaries on social media and had enough EAs reach out to me for book recs that, inspired by Michael Aird and Anna Riedl, I thought it might be worth sharing my all-time favorites here.Below are the best ~50 out of the ~500 books I read in the past few years. I’m an entrepreneur so they’re mostly business-related. Bold = extra-recommended.If you’d like any more specific recommendations feel free to leave a comment and I can try to be helpful.Also - I’m hosting an unofficial entrepreneur meetup at EAG Bay Area. Message me on SwapCard for details or think it might be high impact to connect :)The best ~50 books:Fundraising:FundraisingThe Power Law: Venture Capital and the Making of the New FutureLeadership/Management:The Hard Thing About Hard Things: Building a Business When There Are No Easy AnswersThe Advantage: Why Organizational Health Trumps Everything Else In BusinessThe Coaching Habit: Say Less, Ask More & Change the Way You Lead ForeverEntrepreneurship/Startups:Running LeanThe Founder's Dilemmas: Anticipating and Avoiding the Pitfalls That Can Sink a StartupZero to One: Notes on Startups, or How to Build the FutureThe Startup Owner's Manual: The Step-By-Step Guide for Building a Great CompanyStrategy/Innovation:The Mom Test: How to talk to customers & learn if your business is a good idea when everyone is lying to youScaling Up: How a Few Companies Make It...and Why the Rest Don'tOperations/Get Shit Done:The Goal: A Process of Ongoing ImprovementThe Phoenix Project: A Novel about IT, DevOps, and Helping Your Business WinMaking Work Visible: Exposing Time Theft to Optimize Work & FlowStatistics/Forecasting:How to Measure Anything: Finding the Value of Intangibles in BusinessSuperforecasting: The Art and Science of PredictionAntifragile: Things That Gain from DisorderWriting/Storytelling:Wired for Story: The Writer's Guide to Using Brain Science to Hook Readers from the Very First SentenceThe Story Grid: What Good Editors KnowProduct/Design/User Experience:The Cold Start Problem: How to Start and Scale Network EffectsThe Lean Product Playbook: How to Innovate with Minimum Viable Products and Rapid Customer FeedbackPsychology/Influence:SPIN Selling (unfortunate acronym)The Elephant in the Brain: Hidden Motives in Everyday LifeInfluence: The Psychology of PersuasionOutreach/Marketing/Advocacy:80/20 Sales and Marketing: The Definitive Guide to Working Less and Making MoreTraction: How Any Startup Can Achieve Explosive Customer GrowthHow to learn things faster:Ultralearning: Master Hard Skills, Outsmart the Competition, and Accelerate Your CareerMake It Stick: The Science of Successful LearningThe Little Book of Talent: 52 Tips for Improving Your SkillsPersonal Development:The Confident Mind: A Battle-Tested Guide to Unshakable PerformanceThe Almanack of Naval Ravikant: A Guide to Wealth and HappinessAtomic HabitsRecruiting/Hiring:RecruitingWho: The A Method for HiringNegotiating:Negotiation GeniusNever Split the Difference: Negotiating As If Your Life Depended On ItSecrets of Power Negotiating: I...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A list of EA-relevant business books I've read, published by Drew Spartz on February 21, 2023 on The Effective Altruism Forum.Some have suggested EA is too insular and needs to learn from other fields. In this vein, I think there are important mental models from the for-profit world that are underutilized by non-profits.After all, business can be thought of as the study of how to accomplish goals as an organization - how to get things done in the real world. EA needs the right mix of theory and real world execution. If you replace the word “profit” with “impact”, you’ll find a large percentage of lessons can be cross-applied.Eight months ago, I challenged myself to read a book a day for a year. I've been posting daily summaries on social media and had enough EAs reach out to me for book recs that, inspired by Michael Aird and Anna Riedl, I thought it might be worth sharing my all-time favorites here.Below are the best ~50 out of the ~500 books I read in the past few years. I’m an entrepreneur so they’re mostly business-related. Bold = extra-recommended.If you’d like any more specific recommendations feel free to leave a comment and I can try to be helpful.Also - I’m hosting an unofficial entrepreneur meetup at EAG Bay Area. Message me on SwapCard for details or think it might be high impact to connect :)The best ~50 books:Fundraising:FundraisingThe Power Law: Venture Capital and the Making of the New FutureLeadership/Management:The Hard Thing About Hard Things: Building a Business When There Are No Easy AnswersThe Advantage: Why Organizational Health Trumps Everything Else In BusinessThe Coaching Habit: Say Less, Ask More & Change the Way You Lead ForeverEntrepreneurship/Startups:Running LeanThe Founder's Dilemmas: Anticipating and Avoiding the Pitfalls That Can Sink a StartupZero to One: Notes on Startups, or How to Build the FutureThe Startup Owner's Manual: The Step-By-Step Guide for Building a Great CompanyStrategy/Innovation:The Mom Test: How to talk to customers & learn if your business is a good idea when everyone is lying to youScaling Up: How a Few Companies Make It...and Why the Rest Don'tOperations/Get Shit Done:The Goal: A Process of Ongoing ImprovementThe Phoenix Project: A Novel about IT, DevOps, and Helping Your Business WinMaking Work Visible: Exposing Time Theft to Optimize Work & FlowStatistics/Forecasting:How to Measure Anything: Finding the Value of Intangibles in BusinessSuperforecasting: The Art and Science of PredictionAntifragile: Things That Gain from DisorderWriting/Storytelling:Wired for Story: The Writer's Guide to Using Brain Science to Hook Readers from the Very First SentenceThe Story Grid: What Good Editors KnowProduct/Design/User Experience:The Cold Start Problem: How to Start and Scale Network EffectsThe Lean Product Playbook: How to Innovate with Minimum Viable Products and Rapid Customer FeedbackPsychology/Influence:SPIN Selling (unfortunate acronym)The Elephant in the Brain: Hidden Motives in Everyday LifeInfluence: The Psychology of PersuasionOutreach/Marketing/Advocacy:80/20 Sales and Marketing: The Definitive Guide to Working Less and Making MoreTraction: How Any Startup Can Achieve Explosive Customer GrowthHow to learn things faster:Ultralearning: Master Hard Skills, Outsmart the Competition, and Accelerate Your CareerMake It Stick: The Science of Successful LearningThe Little Book of Talent: 52 Tips for Improving Your SkillsPersonal Development:The Confident Mind: A Battle-Tested Guide to Unshakable PerformanceThe Almanack of Naval Ravikant: A Guide to Wealth and HappinessAtomic HabitsRecruiting/Hiring:RecruitingWho: The A Method for HiringNegotiating:Negotiation GeniusNever Split the Difference: Negotiating As If Your Life Depended On ItSecrets of Power Negotiating: I...]]>
Drew Spartz https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:36 None full 4979
tqQKkRqx4zxBPR6ci_NL_EA_EA EA - A Stranger Priority? Topics at the Outer Reaches of Effective Altruism (my dissertation) by Joe Carlsmith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Stranger Priority? Topics at the Outer Reaches of Effective Altruism (my dissertation), published by Joe Carlsmith on February 21, 2023 on The Effective Altruism Forum.(Cross-posted from my website.)After many years of focusing on other stuff, I recently completed my doctorate in philosophy from the University of Oxford. My dissertation ("A Stranger Priority? Topics at the Outer Reaches of Effective Altruism") was three of my essays -- on anthropic reasoning, simulation arguments, and infinite ethics -- revised, stapled together, and unified under the theme of the "crazy train" as a possible objection to longtermism.The full text is here. I've also broken the main chapters up into individual PDFs:Chapter 1: SIA vs. SSAChapter 2: Simulation argumentsChapter 3: Infinite ethics and the utilitarian dreamChapter 1 and Chapter 3 are pretty similar to the original essays (here and here). Chapter 2, however, has been re-thought and almost entirely re-written -- and I think it's now substantially clearer about the issues at stake.Since submitting the thesis in fall of 2022, I've thought more about various "crazy train" issues, and my current view is that there's quite a bit more to say in defense of longtermism than the thesis has explored. In particular, I want to highlight a distinction I discuss in the conclusion of the thesis, between what I call "welfare longtermism," which focuses on our impact on the welfare of future people, and what I call "wisdom longtermism," which focuses on reaching a wise and empowered future more broadly. The case for the latter seems to me more robust to various "crazy train" considerations than the case for the former.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joe Carlsmith https://forum.effectivealtruism.org/posts/tqQKkRqx4zxBPR6ci/a-stranger-priority-topics-at-the-outer-reaches-of-effective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Stranger Priority? Topics at the Outer Reaches of Effective Altruism (my dissertation), published by Joe Carlsmith on February 21, 2023 on The Effective Altruism Forum.(Cross-posted from my website.)After many years of focusing on other stuff, I recently completed my doctorate in philosophy from the University of Oxford. My dissertation ("A Stranger Priority? Topics at the Outer Reaches of Effective Altruism") was three of my essays -- on anthropic reasoning, simulation arguments, and infinite ethics -- revised, stapled together, and unified under the theme of the "crazy train" as a possible objection to longtermism.The full text is here. I've also broken the main chapters up into individual PDFs:Chapter 1: SIA vs. SSAChapter 2: Simulation argumentsChapter 3: Infinite ethics and the utilitarian dreamChapter 1 and Chapter 3 are pretty similar to the original essays (here and here). Chapter 2, however, has been re-thought and almost entirely re-written -- and I think it's now substantially clearer about the issues at stake.Since submitting the thesis in fall of 2022, I've thought more about various "crazy train" issues, and my current view is that there's quite a bit more to say in defense of longtermism than the thesis has explored. In particular, I want to highlight a distinction I discuss in the conclusion of the thesis, between what I call "welfare longtermism," which focuses on our impact on the welfare of future people, and what I call "wisdom longtermism," which focuses on reaching a wise and empowered future more broadly. The case for the latter seems to me more robust to various "crazy train" considerations than the case for the former.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 22 Feb 2023 05:59:46 +0000 EA - A Stranger Priority? Topics at the Outer Reaches of Effective Altruism (my dissertation) by Joe Carlsmith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Stranger Priority? Topics at the Outer Reaches of Effective Altruism (my dissertation), published by Joe Carlsmith on February 21, 2023 on The Effective Altruism Forum.(Cross-posted from my website.)After many years of focusing on other stuff, I recently completed my doctorate in philosophy from the University of Oxford. My dissertation ("A Stranger Priority? Topics at the Outer Reaches of Effective Altruism") was three of my essays -- on anthropic reasoning, simulation arguments, and infinite ethics -- revised, stapled together, and unified under the theme of the "crazy train" as a possible objection to longtermism.The full text is here. I've also broken the main chapters up into individual PDFs:Chapter 1: SIA vs. SSAChapter 2: Simulation argumentsChapter 3: Infinite ethics and the utilitarian dreamChapter 1 and Chapter 3 are pretty similar to the original essays (here and here). Chapter 2, however, has been re-thought and almost entirely re-written -- and I think it's now substantially clearer about the issues at stake.Since submitting the thesis in fall of 2022, I've thought more about various "crazy train" issues, and my current view is that there's quite a bit more to say in defense of longtermism than the thesis has explored. In particular, I want to highlight a distinction I discuss in the conclusion of the thesis, between what I call "welfare longtermism," which focuses on our impact on the welfare of future people, and what I call "wisdom longtermism," which focuses on reaching a wise and empowered future more broadly. The case for the latter seems to me more robust to various "crazy train" considerations than the case for the former.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Stranger Priority? Topics at the Outer Reaches of Effective Altruism (my dissertation), published by Joe Carlsmith on February 21, 2023 on The Effective Altruism Forum.(Cross-posted from my website.)After many years of focusing on other stuff, I recently completed my doctorate in philosophy from the University of Oxford. My dissertation ("A Stranger Priority? Topics at the Outer Reaches of Effective Altruism") was three of my essays -- on anthropic reasoning, simulation arguments, and infinite ethics -- revised, stapled together, and unified under the theme of the "crazy train" as a possible objection to longtermism.The full text is here. I've also broken the main chapters up into individual PDFs:Chapter 1: SIA vs. SSAChapter 2: Simulation argumentsChapter 3: Infinite ethics and the utilitarian dreamChapter 1 and Chapter 3 are pretty similar to the original essays (here and here). Chapter 2, however, has been re-thought and almost entirely re-written -- and I think it's now substantially clearer about the issues at stake.Since submitting the thesis in fall of 2022, I've thought more about various "crazy train" issues, and my current view is that there's quite a bit more to say in defense of longtermism than the thesis has explored. In particular, I want to highlight a distinction I discuss in the conclusion of the thesis, between what I call "welfare longtermism," which focuses on our impact on the welfare of future people, and what I call "wisdom longtermism," which focuses on reaching a wise and empowered future more broadly. The case for the latter seems to me more robust to various "crazy train" considerations than the case for the former.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joe Carlsmith https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:56 None full 4983
yiTcjSWuy7ptTb5XS_NL_EA_EA EA - What is it like doing AI safety work? by Kat Woods Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is it like doing AI safety work?, published by Kat Woods on February 21, 2023 on The Effective Altruism Forum.How do you know if you’ll like AI safety work? What’s the day-to-day work like? What are the best parts of the job? What are the worst?To better answer these questions, we talked to ten AI safety researchers in a variety of organizations, roles, and subfields. If you’re interested in getting into AI safety research, we hope this helps you be better informed about what pursuing a career in the field might entail.The first section is about what people do day-to-day and the second section describes each person’s favorite and least favorite aspects of the job.Of note, the people we talked with are not a random sample of AI safety researchers, and it is also important to consider the effects of survivorship bias. However, we still think it's useful and informative to hear about their day-to-day lives and what they love and hate about their jobs.Also, these interviews were done about a year ago, so may no longer represent what the researchers are currently doing.Reminder that you can listen to LessWrong and EA Forum posts like this on your podcast player using the Nonlinear Library.This post is part of a project I’ve been working on at Nonlinear. You can see the first part of the project here where I explain the different ways people got into the field.What do people do all day?John WentworthJohn describes a few different categories of days.He sometimes spends a day writing a post; this usually takes about a day if all the ideas are developed already.He might spend a day responding to comments on posts or talking to people about ideas. This can be a bit of a chore but is also necessary and useful.He might spend his day doing theoretical work. For example, if he’s stuck on a particular problem, he can spend a day working with a notebook or on a whiteboard. This means going over ideas, trying out formulas and setups, and trying to make progress on a particular problem.Over the past month he’s started working with David Lorell. David’s a more active version of the programmer's "rubber duck". As John’s thinking through the math on a whiteboard, he’ll explain to David what's going on. David will ask for clarifications, examples, how things tie into the bigger picture, why did/didn't X work, etc.John estimates that this has increased his productivity at theoretical work by a factor somewhere between 2 and 5.Ondrej BajgarOndrej starts the day by cycling to the office. He has breakfast there and tries to spend as much time as possible at a whiteboard away from his computer. He tries to get into a deep-thinking mindset, where there aren’t all the answers easily available. Ideally, mornings are completely free of meetings and reserved for this deep-thinking work.Deep thinking involves a lot of zooming in and out, working on sub-goals while periodically zooming out to check on the higher-level goal every half hour. He switches between trying to make progress and reflecting on how this is actually going. This is to avoid getting derailed on something unproductive but cognitively demanding.Once an idea is mostly formed, he’ll try to implement things in code. Sometimes seeing things in action can make you see new things you wouldn’t get from just the theory. But he also says that it’s important to not get caught in the trap of writing code, which can feel fun and feel productive even when it isn’t that useful.Scott EmmonsScott talked about a few different categories of day-to-day work:Research, which involves brainstorming, programming, writing & communicating, and collaborating with peopleReading papers to stay up-to-date with the literatureAdministrative workService, such as giving advice to undergrads, talking about AI safety, and reviewing other...]]>
Kat Woods https://forum.effectivealtruism.org/posts/yiTcjSWuy7ptTb5XS/what-is-it-like-doing-ai-safety-work Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is it like doing AI safety work?, published by Kat Woods on February 21, 2023 on The Effective Altruism Forum.How do you know if you’ll like AI safety work? What’s the day-to-day work like? What are the best parts of the job? What are the worst?To better answer these questions, we talked to ten AI safety researchers in a variety of organizations, roles, and subfields. If you’re interested in getting into AI safety research, we hope this helps you be better informed about what pursuing a career in the field might entail.The first section is about what people do day-to-day and the second section describes each person’s favorite and least favorite aspects of the job.Of note, the people we talked with are not a random sample of AI safety researchers, and it is also important to consider the effects of survivorship bias. However, we still think it's useful and informative to hear about their day-to-day lives and what they love and hate about their jobs.Also, these interviews were done about a year ago, so may no longer represent what the researchers are currently doing.Reminder that you can listen to LessWrong and EA Forum posts like this on your podcast player using the Nonlinear Library.This post is part of a project I’ve been working on at Nonlinear. You can see the first part of the project here where I explain the different ways people got into the field.What do people do all day?John WentworthJohn describes a few different categories of days.He sometimes spends a day writing a post; this usually takes about a day if all the ideas are developed already.He might spend a day responding to comments on posts or talking to people about ideas. This can be a bit of a chore but is also necessary and useful.He might spend his day doing theoretical work. For example, if he’s stuck on a particular problem, he can spend a day working with a notebook or on a whiteboard. This means going over ideas, trying out formulas and setups, and trying to make progress on a particular problem.Over the past month he’s started working with David Lorell. David’s a more active version of the programmer's "rubber duck". As John’s thinking through the math on a whiteboard, he’ll explain to David what's going on. David will ask for clarifications, examples, how things tie into the bigger picture, why did/didn't X work, etc.John estimates that this has increased his productivity at theoretical work by a factor somewhere between 2 and 5.Ondrej BajgarOndrej starts the day by cycling to the office. He has breakfast there and tries to spend as much time as possible at a whiteboard away from his computer. He tries to get into a deep-thinking mindset, where there aren’t all the answers easily available. Ideally, mornings are completely free of meetings and reserved for this deep-thinking work.Deep thinking involves a lot of zooming in and out, working on sub-goals while periodically zooming out to check on the higher-level goal every half hour. He switches between trying to make progress and reflecting on how this is actually going. This is to avoid getting derailed on something unproductive but cognitively demanding.Once an idea is mostly formed, he’ll try to implement things in code. Sometimes seeing things in action can make you see new things you wouldn’t get from just the theory. But he also says that it’s important to not get caught in the trap of writing code, which can feel fun and feel productive even when it isn’t that useful.Scott EmmonsScott talked about a few different categories of day-to-day work:Research, which involves brainstorming, programming, writing & communicating, and collaborating with peopleReading papers to stay up-to-date with the literatureAdministrative workService, such as giving advice to undergrads, talking about AI safety, and reviewing other...]]>
Wed, 22 Feb 2023 01:25:19 +0000 EA - What is it like doing AI safety work? by Kat Woods Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is it like doing AI safety work?, published by Kat Woods on February 21, 2023 on The Effective Altruism Forum.How do you know if you’ll like AI safety work? What’s the day-to-day work like? What are the best parts of the job? What are the worst?To better answer these questions, we talked to ten AI safety researchers in a variety of organizations, roles, and subfields. If you’re interested in getting into AI safety research, we hope this helps you be better informed about what pursuing a career in the field might entail.The first section is about what people do day-to-day and the second section describes each person’s favorite and least favorite aspects of the job.Of note, the people we talked with are not a random sample of AI safety researchers, and it is also important to consider the effects of survivorship bias. However, we still think it's useful and informative to hear about their day-to-day lives and what they love and hate about their jobs.Also, these interviews were done about a year ago, so may no longer represent what the researchers are currently doing.Reminder that you can listen to LessWrong and EA Forum posts like this on your podcast player using the Nonlinear Library.This post is part of a project I’ve been working on at Nonlinear. You can see the first part of the project here where I explain the different ways people got into the field.What do people do all day?John WentworthJohn describes a few different categories of days.He sometimes spends a day writing a post; this usually takes about a day if all the ideas are developed already.He might spend a day responding to comments on posts or talking to people about ideas. This can be a bit of a chore but is also necessary and useful.He might spend his day doing theoretical work. For example, if he’s stuck on a particular problem, he can spend a day working with a notebook or on a whiteboard. This means going over ideas, trying out formulas and setups, and trying to make progress on a particular problem.Over the past month he’s started working with David Lorell. David’s a more active version of the programmer's "rubber duck". As John’s thinking through the math on a whiteboard, he’ll explain to David what's going on. David will ask for clarifications, examples, how things tie into the bigger picture, why did/didn't X work, etc.John estimates that this has increased his productivity at theoretical work by a factor somewhere between 2 and 5.Ondrej BajgarOndrej starts the day by cycling to the office. He has breakfast there and tries to spend as much time as possible at a whiteboard away from his computer. He tries to get into a deep-thinking mindset, where there aren’t all the answers easily available. Ideally, mornings are completely free of meetings and reserved for this deep-thinking work.Deep thinking involves a lot of zooming in and out, working on sub-goals while periodically zooming out to check on the higher-level goal every half hour. He switches between trying to make progress and reflecting on how this is actually going. This is to avoid getting derailed on something unproductive but cognitively demanding.Once an idea is mostly formed, he’ll try to implement things in code. Sometimes seeing things in action can make you see new things you wouldn’t get from just the theory. But he also says that it’s important to not get caught in the trap of writing code, which can feel fun and feel productive even when it isn’t that useful.Scott EmmonsScott talked about a few different categories of day-to-day work:Research, which involves brainstorming, programming, writing & communicating, and collaborating with peopleReading papers to stay up-to-date with the literatureAdministrative workService, such as giving advice to undergrads, talking about AI safety, and reviewing other...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is it like doing AI safety work?, published by Kat Woods on February 21, 2023 on The Effective Altruism Forum.How do you know if you’ll like AI safety work? What’s the day-to-day work like? What are the best parts of the job? What are the worst?To better answer these questions, we talked to ten AI safety researchers in a variety of organizations, roles, and subfields. If you’re interested in getting into AI safety research, we hope this helps you be better informed about what pursuing a career in the field might entail.The first section is about what people do day-to-day and the second section describes each person’s favorite and least favorite aspects of the job.Of note, the people we talked with are not a random sample of AI safety researchers, and it is also important to consider the effects of survivorship bias. However, we still think it's useful and informative to hear about their day-to-day lives and what they love and hate about their jobs.Also, these interviews were done about a year ago, so may no longer represent what the researchers are currently doing.Reminder that you can listen to LessWrong and EA Forum posts like this on your podcast player using the Nonlinear Library.This post is part of a project I’ve been working on at Nonlinear. You can see the first part of the project here where I explain the different ways people got into the field.What do people do all day?John WentworthJohn describes a few different categories of days.He sometimes spends a day writing a post; this usually takes about a day if all the ideas are developed already.He might spend a day responding to comments on posts or talking to people about ideas. This can be a bit of a chore but is also necessary and useful.He might spend his day doing theoretical work. For example, if he’s stuck on a particular problem, he can spend a day working with a notebook or on a whiteboard. This means going over ideas, trying out formulas and setups, and trying to make progress on a particular problem.Over the past month he’s started working with David Lorell. David’s a more active version of the programmer's "rubber duck". As John’s thinking through the math on a whiteboard, he’ll explain to David what's going on. David will ask for clarifications, examples, how things tie into the bigger picture, why did/didn't X work, etc.John estimates that this has increased his productivity at theoretical work by a factor somewhere between 2 and 5.Ondrej BajgarOndrej starts the day by cycling to the office. He has breakfast there and tries to spend as much time as possible at a whiteboard away from his computer. He tries to get into a deep-thinking mindset, where there aren’t all the answers easily available. Ideally, mornings are completely free of meetings and reserved for this deep-thinking work.Deep thinking involves a lot of zooming in and out, working on sub-goals while periodically zooming out to check on the higher-level goal every half hour. He switches between trying to make progress and reflecting on how this is actually going. This is to avoid getting derailed on something unproductive but cognitively demanding.Once an idea is mostly formed, he’ll try to implement things in code. Sometimes seeing things in action can make you see new things you wouldn’t get from just the theory. But he also says that it’s important to not get caught in the trap of writing code, which can feel fun and feel productive even when it isn’t that useful.Scott EmmonsScott talked about a few different categories of day-to-day work:Research, which involves brainstorming, programming, writing & communicating, and collaborating with peopleReading papers to stay up-to-date with the literatureAdministrative workService, such as giving advice to undergrads, talking about AI safety, and reviewing other...]]>
Kat Woods https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:08 None full 4981
XFBGu9sGfbYAsb8Gb_NL_EA_EA EA - Bad Actors are not the Main Issue in EA Governance by Grayden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bad Actors are not the Main Issue in EA Governance, published by Grayden on February 21, 2023 on The Effective Altruism Forum.BackgroundWhile I have a technical background, my career has been spent working with corporate boards and management teams. I have seen first-hand how critical leadership are to the success of organizations. Organizations filled with competent people can fail miserably if individuals do not have the right interpersonal skills and humility.I have worried about governance within EA for a while. In October, I launched the EA Good Governance Project and wrote that "We have not yet experienced a scandal / major problem and have not yet started to think through how to avoid that happening again" . Now, 4 months later, we've had our fair share and people are open to change.This post is my attempt to put some thoughts together. It has been written in a rather rushed way given recent news, so apologies if some parts are poorly worded.IntroductionI have structured my thoughts in 4 sections, corresponding to the 4 key ways in which leadership can fail:1) Bad actor2) Well-intentioned people with low competence3) Well-intentioned high-competence people with collective blind spots4) Right group of people, bad practicesBad ActorsMuch discussion on the forum in recent months has focused on the concept of a bad actor. I think we are focusing far too much on this concept.The term comes from computer science where hackers are prevalent. However, real life is rarely this black and white. Never attribute to malice that which is adequately explained by incompetence (Hanlon's razor).The bad actor concept can be used, consciously or unconsciously, to justify recruiting board members from within your clique. Many EA boards comprise groups of friends who know each other socially. This limits the competence and diversity of the board. Typically the people you know well are exactly the worst people to provide different perspectives and hold you to account. If they are your friends, you have these perspectives and this accountability already and you can prevent bad actors through referencing, donation history and background checks.Key takeaway: Break the cliqueCompetenceThere's an old adage: How do you know if someone is not very good at Excel? They will say they are an expert. With Excel, the more you know, the more aware you are of what you don't know. I think leadership is similar. When I had 3-5 years of professional experience, I thought I could lead anything. Now I know better.Some aspects of leaderships come naturally to people, but many have to be learned by close interaction with role models. When you are a community without experienced figures at the top, this is hard. We should not expect people with less than 10 years of professional experience to be a fully rounded leader. Equally, it's possible to be successful without being a good leader.I think many of us in the community have historically held EA leaders on a pedestal. They were typically appointed because of their expertise in a particular field. Some of the brightest people I've ever met are within the EA community. However, we then assumed they had excellent people judgment, a sound understanding of conflicts of interest, in-depth knowledge of real estate investments and an appreciation for power dynamics in sexual relationships. It turns out some of them don't. This shouldn't come as a big surprise. It doesn't mean they can't be a valuable contributor to the community and it certainly doesn't make them bad actors.Key takeaway: We need to elevate the importance of soft skills and learn from models of effective leadership in other communities and organizationsBlind SpotsThe more worrying thing though is how those people also believed in their own abilities. In my career, I have met...]]>
Grayden https://forum.effectivealtruism.org/posts/XFBGu9sGfbYAsb8Gb/bad-actors-are-not-the-main-issue-in-ea-governance Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bad Actors are not the Main Issue in EA Governance, published by Grayden on February 21, 2023 on The Effective Altruism Forum.BackgroundWhile I have a technical background, my career has been spent working with corporate boards and management teams. I have seen first-hand how critical leadership are to the success of organizations. Organizations filled with competent people can fail miserably if individuals do not have the right interpersonal skills and humility.I have worried about governance within EA for a while. In October, I launched the EA Good Governance Project and wrote that "We have not yet experienced a scandal / major problem and have not yet started to think through how to avoid that happening again" . Now, 4 months later, we've had our fair share and people are open to change.This post is my attempt to put some thoughts together. It has been written in a rather rushed way given recent news, so apologies if some parts are poorly worded.IntroductionI have structured my thoughts in 4 sections, corresponding to the 4 key ways in which leadership can fail:1) Bad actor2) Well-intentioned people with low competence3) Well-intentioned high-competence people with collective blind spots4) Right group of people, bad practicesBad ActorsMuch discussion on the forum in recent months has focused on the concept of a bad actor. I think we are focusing far too much on this concept.The term comes from computer science where hackers are prevalent. However, real life is rarely this black and white. Never attribute to malice that which is adequately explained by incompetence (Hanlon's razor).The bad actor concept can be used, consciously or unconsciously, to justify recruiting board members from within your clique. Many EA boards comprise groups of friends who know each other socially. This limits the competence and diversity of the board. Typically the people you know well are exactly the worst people to provide different perspectives and hold you to account. If they are your friends, you have these perspectives and this accountability already and you can prevent bad actors through referencing, donation history and background checks.Key takeaway: Break the cliqueCompetenceThere's an old adage: How do you know if someone is not very good at Excel? They will say they are an expert. With Excel, the more you know, the more aware you are of what you don't know. I think leadership is similar. When I had 3-5 years of professional experience, I thought I could lead anything. Now I know better.Some aspects of leaderships come naturally to people, but many have to be learned by close interaction with role models. When you are a community without experienced figures at the top, this is hard. We should not expect people with less than 10 years of professional experience to be a fully rounded leader. Equally, it's possible to be successful without being a good leader.I think many of us in the community have historically held EA leaders on a pedestal. They were typically appointed because of their expertise in a particular field. Some of the brightest people I've ever met are within the EA community. However, we then assumed they had excellent people judgment, a sound understanding of conflicts of interest, in-depth knowledge of real estate investments and an appreciation for power dynamics in sexual relationships. It turns out some of them don't. This shouldn't come as a big surprise. It doesn't mean they can't be a valuable contributor to the community and it certainly doesn't make them bad actors.Key takeaway: We need to elevate the importance of soft skills and learn from models of effective leadership in other communities and organizationsBlind SpotsThe more worrying thing though is how those people also believed in their own abilities. In my career, I have met...]]>
Tue, 21 Feb 2023 23:27:40 +0000 EA - Bad Actors are not the Main Issue in EA Governance by Grayden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bad Actors are not the Main Issue in EA Governance, published by Grayden on February 21, 2023 on The Effective Altruism Forum.BackgroundWhile I have a technical background, my career has been spent working with corporate boards and management teams. I have seen first-hand how critical leadership are to the success of organizations. Organizations filled with competent people can fail miserably if individuals do not have the right interpersonal skills and humility.I have worried about governance within EA for a while. In October, I launched the EA Good Governance Project and wrote that "We have not yet experienced a scandal / major problem and have not yet started to think through how to avoid that happening again" . Now, 4 months later, we've had our fair share and people are open to change.This post is my attempt to put some thoughts together. It has been written in a rather rushed way given recent news, so apologies if some parts are poorly worded.IntroductionI have structured my thoughts in 4 sections, corresponding to the 4 key ways in which leadership can fail:1) Bad actor2) Well-intentioned people with low competence3) Well-intentioned high-competence people with collective blind spots4) Right group of people, bad practicesBad ActorsMuch discussion on the forum in recent months has focused on the concept of a bad actor. I think we are focusing far too much on this concept.The term comes from computer science where hackers are prevalent. However, real life is rarely this black and white. Never attribute to malice that which is adequately explained by incompetence (Hanlon's razor).The bad actor concept can be used, consciously or unconsciously, to justify recruiting board members from within your clique. Many EA boards comprise groups of friends who know each other socially. This limits the competence and diversity of the board. Typically the people you know well are exactly the worst people to provide different perspectives and hold you to account. If they are your friends, you have these perspectives and this accountability already and you can prevent bad actors through referencing, donation history and background checks.Key takeaway: Break the cliqueCompetenceThere's an old adage: How do you know if someone is not very good at Excel? They will say they are an expert. With Excel, the more you know, the more aware you are of what you don't know. I think leadership is similar. When I had 3-5 years of professional experience, I thought I could lead anything. Now I know better.Some aspects of leaderships come naturally to people, but many have to be learned by close interaction with role models. When you are a community without experienced figures at the top, this is hard. We should not expect people with less than 10 years of professional experience to be a fully rounded leader. Equally, it's possible to be successful without being a good leader.I think many of us in the community have historically held EA leaders on a pedestal. They were typically appointed because of their expertise in a particular field. Some of the brightest people I've ever met are within the EA community. However, we then assumed they had excellent people judgment, a sound understanding of conflicts of interest, in-depth knowledge of real estate investments and an appreciation for power dynamics in sexual relationships. It turns out some of them don't. This shouldn't come as a big surprise. It doesn't mean they can't be a valuable contributor to the community and it certainly doesn't make them bad actors.Key takeaway: We need to elevate the importance of soft skills and learn from models of effective leadership in other communities and organizationsBlind SpotsThe more worrying thing though is how those people also believed in their own abilities. In my career, I have met...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bad Actors are not the Main Issue in EA Governance, published by Grayden on February 21, 2023 on The Effective Altruism Forum.BackgroundWhile I have a technical background, my career has been spent working with corporate boards and management teams. I have seen first-hand how critical leadership are to the success of organizations. Organizations filled with competent people can fail miserably if individuals do not have the right interpersonal skills and humility.I have worried about governance within EA for a while. In October, I launched the EA Good Governance Project and wrote that "We have not yet experienced a scandal / major problem and have not yet started to think through how to avoid that happening again" . Now, 4 months later, we've had our fair share and people are open to change.This post is my attempt to put some thoughts together. It has been written in a rather rushed way given recent news, so apologies if some parts are poorly worded.IntroductionI have structured my thoughts in 4 sections, corresponding to the 4 key ways in which leadership can fail:1) Bad actor2) Well-intentioned people with low competence3) Well-intentioned high-competence people with collective blind spots4) Right group of people, bad practicesBad ActorsMuch discussion on the forum in recent months has focused on the concept of a bad actor. I think we are focusing far too much on this concept.The term comes from computer science where hackers are prevalent. However, real life is rarely this black and white. Never attribute to malice that which is adequately explained by incompetence (Hanlon's razor).The bad actor concept can be used, consciously or unconsciously, to justify recruiting board members from within your clique. Many EA boards comprise groups of friends who know each other socially. This limits the competence and diversity of the board. Typically the people you know well are exactly the worst people to provide different perspectives and hold you to account. If they are your friends, you have these perspectives and this accountability already and you can prevent bad actors through referencing, donation history and background checks.Key takeaway: Break the cliqueCompetenceThere's an old adage: How do you know if someone is not very good at Excel? They will say they are an expert. With Excel, the more you know, the more aware you are of what you don't know. I think leadership is similar. When I had 3-5 years of professional experience, I thought I could lead anything. Now I know better.Some aspects of leaderships come naturally to people, but many have to be learned by close interaction with role models. When you are a community without experienced figures at the top, this is hard. We should not expect people with less than 10 years of professional experience to be a fully rounded leader. Equally, it's possible to be successful without being a good leader.I think many of us in the community have historically held EA leaders on a pedestal. They were typically appointed because of their expertise in a particular field. Some of the brightest people I've ever met are within the EA community. However, we then assumed they had excellent people judgment, a sound understanding of conflicts of interest, in-depth knowledge of real estate investments and an appreciation for power dynamics in sexual relationships. It turns out some of them don't. This shouldn't come as a big surprise. It doesn't mean they can't be a valuable contributor to the community and it certainly doesn't make them bad actors.Key takeaway: We need to elevate the importance of soft skills and learn from models of effective leadership in other communities and organizationsBlind SpotsThe more worrying thing though is how those people also believed in their own abilities. In my career, I have met...]]>
Grayden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:48 None full 4978
PTCw5CJT7cE6Kx9ZR_NL_EA_EA EA - You're probably a eugenicist by Sentientist Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You're probably a eugenicist, published by Sentientist on February 21, 2023 on The Effective Altruism Forum.A couple of EAs encouraged me to crosspost this here. I had been sitting on a shorter version of this essay for a long time and decided to publish this expanded version this month partly because of the accusations of eugenics leveraged against Nick Bostrom and the effective altruism community.The piece is the first article on my substack and you can listen to me narrate it at that link.You're probably a eugenicistLet me start this essay with a love story.Susan and Patrick were a young German couple in love. But, the German state never allowed Susan and Patrick to get married. Shockingly, Patrick was imprisoned for years because of his sexual relationship with Susan.Despite these obstacles, over the course of their relationship, Susan and Patrick had four children. Three of their children—Eric, Sarah, and Nancy—had severe problems: epilepsy, cognitive disabilities, and a congenital heart defect that required a transplant. The German state took away these children and placed them with foster families.Patrick and Susan with their daughter Sofia - credit dpa picture alliance archiveWhy did Germany do all these terrible things to Susan and Patrick?Eugenics.No, this story didn’t happen in Nazi Germany, it happened over the course of the last 20 years. But why haven’t you heard this story before?Because Patrick and Susan are siblings.One of the aims of eugenics is to intervene in reproduction so as to decrease the number of people born with serious disabilities or health problems. Susan and Patrick were much more likely than the average couple to have children with genetic problems because they are brother and sister. So, the German state punished this couple by restricting them from marriage, taking away their children, and forcefully separating them with Patrick’s imprisonment.Patrick Stübing filed a case against Germany with the European Court on Human Rights, arguing that the laws forbidding opposite-sex sibling incest violated his rights to family life and sexual autonomy. The European Court on Human Rights’ majority opinion in the Stübing case clearly sets out the eugenic case for those laws: that the children of incest and their future children will suffer because of genetic problems. But the dissenting opinion argued that eugenics cannot be a valid justification for punishing incest because eugenics is associated with the Nazis, and because other people (for example, older mothers and people with genetic disorders) who have a high chance of producing children with genetic defects are not prevented from reproducing. Ultimately, the European Court on Human Rights upheld Germany's anti-incest law on eugenic grounds.If Germany had punished any other citizens this severely on eugenic grounds—for example by imprisoning a female carrier of Huntington’s disease who was trying to get pregnant— there would be a huge outcry. But incest seems to be an exception.Our instinctive aversion to incest is informed by intuitive eugenics. Not only are we reflexively disgusted by the thought of having sex with our own blood relatives, but we’re also disgusted by the thought of any blood relatives having sex with each other.Siblings and close relatives conceive children who are more likely to end up with two copies of the same defective genes, which makes those children more likely to inherit disabilities and health problems. It’s estimated that the children of sibling incest have a greater than 40 percent chance of either dying prematurely or being born with a severe impairment. By comparison, first cousins have around a five percent chance of having children with a genetic problem—twice as likely as unrelated couples. In the UK, first cousin marriages are legal and...]]>
Sentientist https://forum.effectivealtruism.org/posts/PTCw5CJT7cE6Kx9ZR/you-re-probably-a-eugenicist Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You're probably a eugenicist, published by Sentientist on February 21, 2023 on The Effective Altruism Forum.A couple of EAs encouraged me to crosspost this here. I had been sitting on a shorter version of this essay for a long time and decided to publish this expanded version this month partly because of the accusations of eugenics leveraged against Nick Bostrom and the effective altruism community.The piece is the first article on my substack and you can listen to me narrate it at that link.You're probably a eugenicistLet me start this essay with a love story.Susan and Patrick were a young German couple in love. But, the German state never allowed Susan and Patrick to get married. Shockingly, Patrick was imprisoned for years because of his sexual relationship with Susan.Despite these obstacles, over the course of their relationship, Susan and Patrick had four children. Three of their children—Eric, Sarah, and Nancy—had severe problems: epilepsy, cognitive disabilities, and a congenital heart defect that required a transplant. The German state took away these children and placed them with foster families.Patrick and Susan with their daughter Sofia - credit dpa picture alliance archiveWhy did Germany do all these terrible things to Susan and Patrick?Eugenics.No, this story didn’t happen in Nazi Germany, it happened over the course of the last 20 years. But why haven’t you heard this story before?Because Patrick and Susan are siblings.One of the aims of eugenics is to intervene in reproduction so as to decrease the number of people born with serious disabilities or health problems. Susan and Patrick were much more likely than the average couple to have children with genetic problems because they are brother and sister. So, the German state punished this couple by restricting them from marriage, taking away their children, and forcefully separating them with Patrick’s imprisonment.Patrick Stübing filed a case against Germany with the European Court on Human Rights, arguing that the laws forbidding opposite-sex sibling incest violated his rights to family life and sexual autonomy. The European Court on Human Rights’ majority opinion in the Stübing case clearly sets out the eugenic case for those laws: that the children of incest and their future children will suffer because of genetic problems. But the dissenting opinion argued that eugenics cannot be a valid justification for punishing incest because eugenics is associated with the Nazis, and because other people (for example, older mothers and people with genetic disorders) who have a high chance of producing children with genetic defects are not prevented from reproducing. Ultimately, the European Court on Human Rights upheld Germany's anti-incest law on eugenic grounds.If Germany had punished any other citizens this severely on eugenic grounds—for example by imprisoning a female carrier of Huntington’s disease who was trying to get pregnant— there would be a huge outcry. But incest seems to be an exception.Our instinctive aversion to incest is informed by intuitive eugenics. Not only are we reflexively disgusted by the thought of having sex with our own blood relatives, but we’re also disgusted by the thought of any blood relatives having sex with each other.Siblings and close relatives conceive children who are more likely to end up with two copies of the same defective genes, which makes those children more likely to inherit disabilities and health problems. It’s estimated that the children of sibling incest have a greater than 40 percent chance of either dying prematurely or being born with a severe impairment. By comparison, first cousins have around a five percent chance of having children with a genetic problem—twice as likely as unrelated couples. In the UK, first cousin marriages are legal and...]]>
Tue, 21 Feb 2023 23:18:41 +0000 EA - You're probably a eugenicist by Sentientist Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You're probably a eugenicist, published by Sentientist on February 21, 2023 on The Effective Altruism Forum.A couple of EAs encouraged me to crosspost this here. I had been sitting on a shorter version of this essay for a long time and decided to publish this expanded version this month partly because of the accusations of eugenics leveraged against Nick Bostrom and the effective altruism community.The piece is the first article on my substack and you can listen to me narrate it at that link.You're probably a eugenicistLet me start this essay with a love story.Susan and Patrick were a young German couple in love. But, the German state never allowed Susan and Patrick to get married. Shockingly, Patrick was imprisoned for years because of his sexual relationship with Susan.Despite these obstacles, over the course of their relationship, Susan and Patrick had four children. Three of their children—Eric, Sarah, and Nancy—had severe problems: epilepsy, cognitive disabilities, and a congenital heart defect that required a transplant. The German state took away these children and placed them with foster families.Patrick and Susan with their daughter Sofia - credit dpa picture alliance archiveWhy did Germany do all these terrible things to Susan and Patrick?Eugenics.No, this story didn’t happen in Nazi Germany, it happened over the course of the last 20 years. But why haven’t you heard this story before?Because Patrick and Susan are siblings.One of the aims of eugenics is to intervene in reproduction so as to decrease the number of people born with serious disabilities or health problems. Susan and Patrick were much more likely than the average couple to have children with genetic problems because they are brother and sister. So, the German state punished this couple by restricting them from marriage, taking away their children, and forcefully separating them with Patrick’s imprisonment.Patrick Stübing filed a case against Germany with the European Court on Human Rights, arguing that the laws forbidding opposite-sex sibling incest violated his rights to family life and sexual autonomy. The European Court on Human Rights’ majority opinion in the Stübing case clearly sets out the eugenic case for those laws: that the children of incest and their future children will suffer because of genetic problems. But the dissenting opinion argued that eugenics cannot be a valid justification for punishing incest because eugenics is associated with the Nazis, and because other people (for example, older mothers and people with genetic disorders) who have a high chance of producing children with genetic defects are not prevented from reproducing. Ultimately, the European Court on Human Rights upheld Germany's anti-incest law on eugenic grounds.If Germany had punished any other citizens this severely on eugenic grounds—for example by imprisoning a female carrier of Huntington’s disease who was trying to get pregnant— there would be a huge outcry. But incest seems to be an exception.Our instinctive aversion to incest is informed by intuitive eugenics. Not only are we reflexively disgusted by the thought of having sex with our own blood relatives, but we’re also disgusted by the thought of any blood relatives having sex with each other.Siblings and close relatives conceive children who are more likely to end up with two copies of the same defective genes, which makes those children more likely to inherit disabilities and health problems. It’s estimated that the children of sibling incest have a greater than 40 percent chance of either dying prematurely or being born with a severe impairment. By comparison, first cousins have around a five percent chance of having children with a genetic problem—twice as likely as unrelated couples. In the UK, first cousin marriages are legal and...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You're probably a eugenicist, published by Sentientist on February 21, 2023 on The Effective Altruism Forum.A couple of EAs encouraged me to crosspost this here. I had been sitting on a shorter version of this essay for a long time and decided to publish this expanded version this month partly because of the accusations of eugenics leveraged against Nick Bostrom and the effective altruism community.The piece is the first article on my substack and you can listen to me narrate it at that link.You're probably a eugenicistLet me start this essay with a love story.Susan and Patrick were a young German couple in love. But, the German state never allowed Susan and Patrick to get married. Shockingly, Patrick was imprisoned for years because of his sexual relationship with Susan.Despite these obstacles, over the course of their relationship, Susan and Patrick had four children. Three of their children—Eric, Sarah, and Nancy—had severe problems: epilepsy, cognitive disabilities, and a congenital heart defect that required a transplant. The German state took away these children and placed them with foster families.Patrick and Susan with their daughter Sofia - credit dpa picture alliance archiveWhy did Germany do all these terrible things to Susan and Patrick?Eugenics.No, this story didn’t happen in Nazi Germany, it happened over the course of the last 20 years. But why haven’t you heard this story before?Because Patrick and Susan are siblings.One of the aims of eugenics is to intervene in reproduction so as to decrease the number of people born with serious disabilities or health problems. Susan and Patrick were much more likely than the average couple to have children with genetic problems because they are brother and sister. So, the German state punished this couple by restricting them from marriage, taking away their children, and forcefully separating them with Patrick’s imprisonment.Patrick Stübing filed a case against Germany with the European Court on Human Rights, arguing that the laws forbidding opposite-sex sibling incest violated his rights to family life and sexual autonomy. The European Court on Human Rights’ majority opinion in the Stübing case clearly sets out the eugenic case for those laws: that the children of incest and their future children will suffer because of genetic problems. But the dissenting opinion argued that eugenics cannot be a valid justification for punishing incest because eugenics is associated with the Nazis, and because other people (for example, older mothers and people with genetic disorders) who have a high chance of producing children with genetic defects are not prevented from reproducing. Ultimately, the European Court on Human Rights upheld Germany's anti-incest law on eugenic grounds.If Germany had punished any other citizens this severely on eugenic grounds—for example by imprisoning a female carrier of Huntington’s disease who was trying to get pregnant— there would be a huge outcry. But incest seems to be an exception.Our instinctive aversion to incest is informed by intuitive eugenics. Not only are we reflexively disgusted by the thought of having sex with our own blood relatives, but we’re also disgusted by the thought of any blood relatives having sex with each other.Siblings and close relatives conceive children who are more likely to end up with two copies of the same defective genes, which makes those children more likely to inherit disabilities and health problems. It’s estimated that the children of sibling incest have a greater than 40 percent chance of either dying prematurely or being born with a severe impairment. By comparison, first cousins have around a five percent chance of having children with a genetic problem—twice as likely as unrelated couples. In the UK, first cousin marriages are legal and...]]>
Sentientist https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 22:42 None full 4980
6pc8iuDxz8LArWng8_NL_EA_EA EA - EU Food Agency Recommends Banning Cages by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EU Food Agency Recommends Banning Cages, published by Ben West on February 21, 2023 on The Effective Altruism Forum.Some key recommendations (all direct quotes from either here or here):Birds should be housed in cage-free systemsAvoid all forms of mutilations in broiler breedersAvoid the use of cages, feed and water restrictions in broiler breedersLimit the growth rate of broilers to a maximum of 50 g/day.Substantially reduce the stocking density to meet the behavioural needs of broilers.My understanding is that the European Commission requested these recommendations as a result of several things, including work by some EA-affiliated animal welfare organizations, and it is now up to them to propose legislation implementing the recommendations.This Forum post from two years ago describes some of the previous work that got us here. It's kind of cool to look back on the "major looming fight" that post forecasts and see that the fight is, if not won, at least on its way.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben West https://forum.effectivealtruism.org/posts/6pc8iuDxz8LArWng8/eu-food-agency-recommends-banning-cages Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EU Food Agency Recommends Banning Cages, published by Ben West on February 21, 2023 on The Effective Altruism Forum.Some key recommendations (all direct quotes from either here or here):Birds should be housed in cage-free systemsAvoid all forms of mutilations in broiler breedersAvoid the use of cages, feed and water restrictions in broiler breedersLimit the growth rate of broilers to a maximum of 50 g/day.Substantially reduce the stocking density to meet the behavioural needs of broilers.My understanding is that the European Commission requested these recommendations as a result of several things, including work by some EA-affiliated animal welfare organizations, and it is now up to them to propose legislation implementing the recommendations.This Forum post from two years ago describes some of the previous work that got us here. It's kind of cool to look back on the "major looming fight" that post forecasts and see that the fight is, if not won, at least on its way.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 21 Feb 2023 21:37:00 +0000 EA - EU Food Agency Recommends Banning Cages by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EU Food Agency Recommends Banning Cages, published by Ben West on February 21, 2023 on The Effective Altruism Forum.Some key recommendations (all direct quotes from either here or here):Birds should be housed in cage-free systemsAvoid all forms of mutilations in broiler breedersAvoid the use of cages, feed and water restrictions in broiler breedersLimit the growth rate of broilers to a maximum of 50 g/day.Substantially reduce the stocking density to meet the behavioural needs of broilers.My understanding is that the European Commission requested these recommendations as a result of several things, including work by some EA-affiliated animal welfare organizations, and it is now up to them to propose legislation implementing the recommendations.This Forum post from two years ago describes some of the previous work that got us here. It's kind of cool to look back on the "major looming fight" that post forecasts and see that the fight is, if not won, at least on its way.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EU Food Agency Recommends Banning Cages, published by Ben West on February 21, 2023 on The Effective Altruism Forum.Some key recommendations (all direct quotes from either here or here):Birds should be housed in cage-free systemsAvoid all forms of mutilations in broiler breedersAvoid the use of cages, feed and water restrictions in broiler breedersLimit the growth rate of broilers to a maximum of 50 g/day.Substantially reduce the stocking density to meet the behavioural needs of broilers.My understanding is that the European Commission requested these recommendations as a result of several things, including work by some EA-affiliated animal welfare organizations, and it is now up to them to propose legislation implementing the recommendations.This Forum post from two years ago describes some of the previous work that got us here. It's kind of cool to look back on the "major looming fight" that post forecasts and see that the fight is, if not won, at least on its way.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben West https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:14 None full 4964
XcBE5Csza8tSA9HbB_NL_EA_EA EA - Effective Thesis is looking for a new Executive Director by Effective Thesis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Thesis is looking for a new Executive Director, published by Effective Thesis on February 21, 2023 on The Effective Altruism Forum.Cross-posting this job description from the Effective Thesis website.Are you a visionary leader looking for a chance to make a large impact? We’re seeking an Executive Director to lead our nonprofit organisation into a new era of greater impact, excellence and sustainability.Key detailsEffective Thesis seeks a visionary Executive Director to lead our nonprofit organisation into a new era of strategic and operational excellence. You will drive long-term impact and sustainability through innovative and high-impact activities. As a member of our young, diverse team, you will enjoy remote work flexibility, while honing your leadership skills and expanding your network. If you’re passionate about significantly improving the world and have the skills to transform our organisation, we want to hear from you!Hours: Preference for a full-time 40 hours/week role but we will consider capacity as low as 30 hours/weekWork Location: Fully Remote (Candidates should be able to work in UTC+1 for at least 3 hours per day).Deadline: Applications close 22 March 2023 23:59 Pacific Standard TimeIdeal Start Date: May 2023 (with flexibility to start later)Apply hereFor more details about the role, why it’s impactful and how to tell if you’re a good fit please read below.About Effective ThesisMissionEffective Thesis is a non-profit organisation. Our mission is to support university students to begin and progress in research careers that significantly improve the world. We do this primarily by helping students identify important problems where further research could have a big impact and advising them on their research topic selection (mostly in the context of a final thesis/dissertation/capstone project). Choosing a final thesis topic or a PhD topic is an important step that can influence the rest of a researcher’s career. We support students in this choice by providing introductions to various research directions and early-career research advice on our website and offering topic choice coaching. Additionally, we provide other types of support to address key bottlenecks in early stages of research careers, such as finding supervision, funding or useful opportunities.Here is our 2022 report outlining our activities, impact, and future plans.TeamWe are a fully remote team of 8 employees (with 5 close to full-time and 3 around 10h/week) and 6 long-term volunteers, who help run our research opportunities newsletter. We operate mostly in UTC+1 (Central European) timezone. Candidates should be able to work in UTC+1 for some share of their work hours.The team you are joining is young, agentic, diverse in skill set and is connected through shared motivation to contribute to our mission. 75% of our employed team are women/non-binary people and team members are based in 6 different countries across Europe, UK and Asia. Our culture is start-up-like. We value creative problem solving, open communication and collaboration. We have regular online socials, optional weekly coworking and in the coming year we will have multiple in-person retreats in Europe or the UK. You can expect a thorough onboarding experience including a week of in person co-working with the full-time team in the UK and additional in person meetings with the Managing Director.About the RoleAs Executive Director, you will have the opportunity to lead the way for Effective Thesis to reach its full potential, learn and hone essential leadership skills and work with a wonderful team of people across the world. This remote role comes with a large degree of freedom in when and how you work. You can expect a steep learning curve, a growing network and lots of opportunities for personal an...]]>
Effective Thesis https://forum.effectivealtruism.org/posts/XcBE5Csza8tSA9HbB/effective-thesis-is-looking-for-a-new-executive-director Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Thesis is looking for a new Executive Director, published by Effective Thesis on February 21, 2023 on The Effective Altruism Forum.Cross-posting this job description from the Effective Thesis website.Are you a visionary leader looking for a chance to make a large impact? We’re seeking an Executive Director to lead our nonprofit organisation into a new era of greater impact, excellence and sustainability.Key detailsEffective Thesis seeks a visionary Executive Director to lead our nonprofit organisation into a new era of strategic and operational excellence. You will drive long-term impact and sustainability through innovative and high-impact activities. As a member of our young, diverse team, you will enjoy remote work flexibility, while honing your leadership skills and expanding your network. If you’re passionate about significantly improving the world and have the skills to transform our organisation, we want to hear from you!Hours: Preference for a full-time 40 hours/week role but we will consider capacity as low as 30 hours/weekWork Location: Fully Remote (Candidates should be able to work in UTC+1 for at least 3 hours per day).Deadline: Applications close 22 March 2023 23:59 Pacific Standard TimeIdeal Start Date: May 2023 (with flexibility to start later)Apply hereFor more details about the role, why it’s impactful and how to tell if you’re a good fit please read below.About Effective ThesisMissionEffective Thesis is a non-profit organisation. Our mission is to support university students to begin and progress in research careers that significantly improve the world. We do this primarily by helping students identify important problems where further research could have a big impact and advising them on their research topic selection (mostly in the context of a final thesis/dissertation/capstone project). Choosing a final thesis topic or a PhD topic is an important step that can influence the rest of a researcher’s career. We support students in this choice by providing introductions to various research directions and early-career research advice on our website and offering topic choice coaching. Additionally, we provide other types of support to address key bottlenecks in early stages of research careers, such as finding supervision, funding or useful opportunities.Here is our 2022 report outlining our activities, impact, and future plans.TeamWe are a fully remote team of 8 employees (with 5 close to full-time and 3 around 10h/week) and 6 long-term volunteers, who help run our research opportunities newsletter. We operate mostly in UTC+1 (Central European) timezone. Candidates should be able to work in UTC+1 for some share of their work hours.The team you are joining is young, agentic, diverse in skill set and is connected through shared motivation to contribute to our mission. 75% of our employed team are women/non-binary people and team members are based in 6 different countries across Europe, UK and Asia. Our culture is start-up-like. We value creative problem solving, open communication and collaboration. We have regular online socials, optional weekly coworking and in the coming year we will have multiple in-person retreats in Europe or the UK. You can expect a thorough onboarding experience including a week of in person co-working with the full-time team in the UK and additional in person meetings with the Managing Director.About the RoleAs Executive Director, you will have the opportunity to lead the way for Effective Thesis to reach its full potential, learn and hone essential leadership skills and work with a wonderful team of people across the world. This remote role comes with a large degree of freedom in when and how you work. You can expect a steep learning curve, a growing network and lots of opportunities for personal an...]]>
Tue, 21 Feb 2023 20:37:21 +0000 EA - Effective Thesis is looking for a new Executive Director by Effective Thesis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Thesis is looking for a new Executive Director, published by Effective Thesis on February 21, 2023 on The Effective Altruism Forum.Cross-posting this job description from the Effective Thesis website.Are you a visionary leader looking for a chance to make a large impact? We’re seeking an Executive Director to lead our nonprofit organisation into a new era of greater impact, excellence and sustainability.Key detailsEffective Thesis seeks a visionary Executive Director to lead our nonprofit organisation into a new era of strategic and operational excellence. You will drive long-term impact and sustainability through innovative and high-impact activities. As a member of our young, diverse team, you will enjoy remote work flexibility, while honing your leadership skills and expanding your network. If you’re passionate about significantly improving the world and have the skills to transform our organisation, we want to hear from you!Hours: Preference for a full-time 40 hours/week role but we will consider capacity as low as 30 hours/weekWork Location: Fully Remote (Candidates should be able to work in UTC+1 for at least 3 hours per day).Deadline: Applications close 22 March 2023 23:59 Pacific Standard TimeIdeal Start Date: May 2023 (with flexibility to start later)Apply hereFor more details about the role, why it’s impactful and how to tell if you’re a good fit please read below.About Effective ThesisMissionEffective Thesis is a non-profit organisation. Our mission is to support university students to begin and progress in research careers that significantly improve the world. We do this primarily by helping students identify important problems where further research could have a big impact and advising them on their research topic selection (mostly in the context of a final thesis/dissertation/capstone project). Choosing a final thesis topic or a PhD topic is an important step that can influence the rest of a researcher’s career. We support students in this choice by providing introductions to various research directions and early-career research advice on our website and offering topic choice coaching. Additionally, we provide other types of support to address key bottlenecks in early stages of research careers, such as finding supervision, funding or useful opportunities.Here is our 2022 report outlining our activities, impact, and future plans.TeamWe are a fully remote team of 8 employees (with 5 close to full-time and 3 around 10h/week) and 6 long-term volunteers, who help run our research opportunities newsletter. We operate mostly in UTC+1 (Central European) timezone. Candidates should be able to work in UTC+1 for some share of their work hours.The team you are joining is young, agentic, diverse in skill set and is connected through shared motivation to contribute to our mission. 75% of our employed team are women/non-binary people and team members are based in 6 different countries across Europe, UK and Asia. Our culture is start-up-like. We value creative problem solving, open communication and collaboration. We have regular online socials, optional weekly coworking and in the coming year we will have multiple in-person retreats in Europe or the UK. You can expect a thorough onboarding experience including a week of in person co-working with the full-time team in the UK and additional in person meetings with the Managing Director.About the RoleAs Executive Director, you will have the opportunity to lead the way for Effective Thesis to reach its full potential, learn and hone essential leadership skills and work with a wonderful team of people across the world. This remote role comes with a large degree of freedom in when and how you work. You can expect a steep learning curve, a growing network and lots of opportunities for personal an...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Thesis is looking for a new Executive Director, published by Effective Thesis on February 21, 2023 on The Effective Altruism Forum.Cross-posting this job description from the Effective Thesis website.Are you a visionary leader looking for a chance to make a large impact? We’re seeking an Executive Director to lead our nonprofit organisation into a new era of greater impact, excellence and sustainability.Key detailsEffective Thesis seeks a visionary Executive Director to lead our nonprofit organisation into a new era of strategic and operational excellence. You will drive long-term impact and sustainability through innovative and high-impact activities. As a member of our young, diverse team, you will enjoy remote work flexibility, while honing your leadership skills and expanding your network. If you’re passionate about significantly improving the world and have the skills to transform our organisation, we want to hear from you!Hours: Preference for a full-time 40 hours/week role but we will consider capacity as low as 30 hours/weekWork Location: Fully Remote (Candidates should be able to work in UTC+1 for at least 3 hours per day).Deadline: Applications close 22 March 2023 23:59 Pacific Standard TimeIdeal Start Date: May 2023 (with flexibility to start later)Apply hereFor more details about the role, why it’s impactful and how to tell if you’re a good fit please read below.About Effective ThesisMissionEffective Thesis is a non-profit organisation. Our mission is to support university students to begin and progress in research careers that significantly improve the world. We do this primarily by helping students identify important problems where further research could have a big impact and advising them on their research topic selection (mostly in the context of a final thesis/dissertation/capstone project). Choosing a final thesis topic or a PhD topic is an important step that can influence the rest of a researcher’s career. We support students in this choice by providing introductions to various research directions and early-career research advice on our website and offering topic choice coaching. Additionally, we provide other types of support to address key bottlenecks in early stages of research careers, such as finding supervision, funding or useful opportunities.Here is our 2022 report outlining our activities, impact, and future plans.TeamWe are a fully remote team of 8 employees (with 5 close to full-time and 3 around 10h/week) and 6 long-term volunteers, who help run our research opportunities newsletter. We operate mostly in UTC+1 (Central European) timezone. Candidates should be able to work in UTC+1 for some share of their work hours.The team you are joining is young, agentic, diverse in skill set and is connected through shared motivation to contribute to our mission. 75% of our employed team are women/non-binary people and team members are based in 6 different countries across Europe, UK and Asia. Our culture is start-up-like. We value creative problem solving, open communication and collaboration. We have regular online socials, optional weekly coworking and in the coming year we will have multiple in-person retreats in Europe or the UK. You can expect a thorough onboarding experience including a week of in person co-working with the full-time team in the UK and additional in person meetings with the Managing Director.About the RoleAs Executive Director, you will have the opportunity to lead the way for Effective Thesis to reach its full potential, learn and hone essential leadership skills and work with a wonderful team of people across the world. This remote role comes with a large degree of freedom in when and how you work. You can expect a steep learning curve, a growing network and lots of opportunities for personal an...]]>
Effective Thesis https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:03 None full 4967
GPmkJfsMjsAghM3bT_NL_EA_EA EA - Effective altruists are already institutionalists and are doing far more than unworkable longtermism - A response to "On the Differences between Ecomodernism and Effective Altruism" by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruists are already institutionalists and are doing far more than unworkable longtermism - A response to "On the Differences between Ecomodernism and Effective Altruism", published by jackva on February 21, 2023 on The Effective Altruism Forum.This is the long-form version of a post published as an invited reply to the original essay on the website of the Breakthrough Institute.Why I am writing thisAs someone who worked at the Breakthrough Institute back in the day, learned a lot from ecomodernism, and is now deeply involved in effective altruism’s work on climate (e.g. here, here, and here), I was very happy to find Alex’s essay in my inbox -- an honest attempt to describe how ecomodernism and effective altruism relate and differ.However, reading the essay, I found many of Alex’s observations and inferences in stark contrast to my lived experience of and in effective altruism over the past seven years. I also had the impression that there were a fair number of misunderstandings as well as a lack of awareness of many existing effective altruists’ efforts. So I am taking Alex up on his ask to provide a view on how the community sees itself. I should note, however, that this is my personal view.While I disagree strongly with many of Alex's characterizations of effective altruism, his effort was clearly in good faith -- so my response is not so much a rebuttal rather than a friendly attempt to clarify, add nuance, and promote an accurate mutual understanding of the similarities and differences of two social movements and their respective sets ot ideas and beliefs.Where I agree with the original essayIt is clear that there is a difference on how most effective altruists think about animals and how ecomodernists and other environmentalists do. This difference is well characterized in the essay. My moral intuitions here are more on the pan-species-utilitarianism side, but I am not a moral philosopher so I will not defend that view and just note that the description points to a real difference.I also agree that it is worth pointing out the differences and similarities between ecomodernism and effective altruism and, furthermore, that both have distinctive value to add to the world.With this clarified, let’s focus on the disagreements:Unworkable longtermism, if it exists at all, is only a small part of effective altruismBefore diving into the critique of unworkable longtermism it is worth pointing out that “longtermism” and “effective altruism” are not synonymous and that -- either for ethical reasons or for reasons similar to those discussed by Alex (the knowledge problem) -- most work in effective altruism is actually not long-termist.Even at its arguably most longtermist, in August 2022, estimated longtermist funding for 2022 was less than ⅓ of total effective altruist funding.Thus, however one comes out on the workability of longtermism, there is a large effective altruist project remaining not affected by this critique.The primary reason Alex gives for describing longtermism as unworkable is the knowledge problem:“But I want to focus on the “knowledge problem” as the core flaw in longtermism, since the problems associated with projecting too much certainty about the future are something effective altruists and conventional environmentalists have in common.We simply have no idea how likely it is that an asteroid will collide with the planet over the course of the next century, nor do we have any idea what civilization will exist in the year 2100 to deal with the effects of climate change, nor do we have any access to the preferences of interstellar metahumans in the year 21000. We do not need to have any idea how to make rational, robust actions and investments in the present.”This knowledge problem is, of course, well-known in effective altr...]]>
jackva https://forum.effectivealtruism.org/posts/GPmkJfsMjsAghM3bT/effective-altruists-are-already-institutionalists-and-are Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruists are already institutionalists and are doing far more than unworkable longtermism - A response to "On the Differences between Ecomodernism and Effective Altruism", published by jackva on February 21, 2023 on The Effective Altruism Forum.This is the long-form version of a post published as an invited reply to the original essay on the website of the Breakthrough Institute.Why I am writing thisAs someone who worked at the Breakthrough Institute back in the day, learned a lot from ecomodernism, and is now deeply involved in effective altruism’s work on climate (e.g. here, here, and here), I was very happy to find Alex’s essay in my inbox -- an honest attempt to describe how ecomodernism and effective altruism relate and differ.However, reading the essay, I found many of Alex’s observations and inferences in stark contrast to my lived experience of and in effective altruism over the past seven years. I also had the impression that there were a fair number of misunderstandings as well as a lack of awareness of many existing effective altruists’ efforts. So I am taking Alex up on his ask to provide a view on how the community sees itself. I should note, however, that this is my personal view.While I disagree strongly with many of Alex's characterizations of effective altruism, his effort was clearly in good faith -- so my response is not so much a rebuttal rather than a friendly attempt to clarify, add nuance, and promote an accurate mutual understanding of the similarities and differences of two social movements and their respective sets ot ideas and beliefs.Where I agree with the original essayIt is clear that there is a difference on how most effective altruists think about animals and how ecomodernists and other environmentalists do. This difference is well characterized in the essay. My moral intuitions here are more on the pan-species-utilitarianism side, but I am not a moral philosopher so I will not defend that view and just note that the description points to a real difference.I also agree that it is worth pointing out the differences and similarities between ecomodernism and effective altruism and, furthermore, that both have distinctive value to add to the world.With this clarified, let’s focus on the disagreements:Unworkable longtermism, if it exists at all, is only a small part of effective altruismBefore diving into the critique of unworkable longtermism it is worth pointing out that “longtermism” and “effective altruism” are not synonymous and that -- either for ethical reasons or for reasons similar to those discussed by Alex (the knowledge problem) -- most work in effective altruism is actually not long-termist.Even at its arguably most longtermist, in August 2022, estimated longtermist funding for 2022 was less than ⅓ of total effective altruist funding.Thus, however one comes out on the workability of longtermism, there is a large effective altruist project remaining not affected by this critique.The primary reason Alex gives for describing longtermism as unworkable is the knowledge problem:“But I want to focus on the “knowledge problem” as the core flaw in longtermism, since the problems associated with projecting too much certainty about the future are something effective altruists and conventional environmentalists have in common.We simply have no idea how likely it is that an asteroid will collide with the planet over the course of the next century, nor do we have any idea what civilization will exist in the year 2100 to deal with the effects of climate change, nor do we have any access to the preferences of interstellar metahumans in the year 21000. We do not need to have any idea how to make rational, robust actions and investments in the present.”This knowledge problem is, of course, well-known in effective altr...]]>
Tue, 21 Feb 2023 19:20:59 +0000 EA - Effective altruists are already institutionalists and are doing far more than unworkable longtermism - A response to "On the Differences between Ecomodernism and Effective Altruism" by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruists are already institutionalists and are doing far more than unworkable longtermism - A response to "On the Differences between Ecomodernism and Effective Altruism", published by jackva on February 21, 2023 on The Effective Altruism Forum.This is the long-form version of a post published as an invited reply to the original essay on the website of the Breakthrough Institute.Why I am writing thisAs someone who worked at the Breakthrough Institute back in the day, learned a lot from ecomodernism, and is now deeply involved in effective altruism’s work on climate (e.g. here, here, and here), I was very happy to find Alex’s essay in my inbox -- an honest attempt to describe how ecomodernism and effective altruism relate and differ.However, reading the essay, I found many of Alex’s observations and inferences in stark contrast to my lived experience of and in effective altruism over the past seven years. I also had the impression that there were a fair number of misunderstandings as well as a lack of awareness of many existing effective altruists’ efforts. So I am taking Alex up on his ask to provide a view on how the community sees itself. I should note, however, that this is my personal view.While I disagree strongly with many of Alex's characterizations of effective altruism, his effort was clearly in good faith -- so my response is not so much a rebuttal rather than a friendly attempt to clarify, add nuance, and promote an accurate mutual understanding of the similarities and differences of two social movements and their respective sets ot ideas and beliefs.Where I agree with the original essayIt is clear that there is a difference on how most effective altruists think about animals and how ecomodernists and other environmentalists do. This difference is well characterized in the essay. My moral intuitions here are more on the pan-species-utilitarianism side, but I am not a moral philosopher so I will not defend that view and just note that the description points to a real difference.I also agree that it is worth pointing out the differences and similarities between ecomodernism and effective altruism and, furthermore, that both have distinctive value to add to the world.With this clarified, let’s focus on the disagreements:Unworkable longtermism, if it exists at all, is only a small part of effective altruismBefore diving into the critique of unworkable longtermism it is worth pointing out that “longtermism” and “effective altruism” are not synonymous and that -- either for ethical reasons or for reasons similar to those discussed by Alex (the knowledge problem) -- most work in effective altruism is actually not long-termist.Even at its arguably most longtermist, in August 2022, estimated longtermist funding for 2022 was less than ⅓ of total effective altruist funding.Thus, however one comes out on the workability of longtermism, there is a large effective altruist project remaining not affected by this critique.The primary reason Alex gives for describing longtermism as unworkable is the knowledge problem:“But I want to focus on the “knowledge problem” as the core flaw in longtermism, since the problems associated with projecting too much certainty about the future are something effective altruists and conventional environmentalists have in common.We simply have no idea how likely it is that an asteroid will collide with the planet over the course of the next century, nor do we have any idea what civilization will exist in the year 2100 to deal with the effects of climate change, nor do we have any access to the preferences of interstellar metahumans in the year 21000. We do not need to have any idea how to make rational, robust actions and investments in the present.”This knowledge problem is, of course, well-known in effective altr...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruists are already institutionalists and are doing far more than unworkable longtermism - A response to "On the Differences between Ecomodernism and Effective Altruism", published by jackva on February 21, 2023 on The Effective Altruism Forum.This is the long-form version of a post published as an invited reply to the original essay on the website of the Breakthrough Institute.Why I am writing thisAs someone who worked at the Breakthrough Institute back in the day, learned a lot from ecomodernism, and is now deeply involved in effective altruism’s work on climate (e.g. here, here, and here), I was very happy to find Alex’s essay in my inbox -- an honest attempt to describe how ecomodernism and effective altruism relate and differ.However, reading the essay, I found many of Alex’s observations and inferences in stark contrast to my lived experience of and in effective altruism over the past seven years. I also had the impression that there were a fair number of misunderstandings as well as a lack of awareness of many existing effective altruists’ efforts. So I am taking Alex up on his ask to provide a view on how the community sees itself. I should note, however, that this is my personal view.While I disagree strongly with many of Alex's characterizations of effective altruism, his effort was clearly in good faith -- so my response is not so much a rebuttal rather than a friendly attempt to clarify, add nuance, and promote an accurate mutual understanding of the similarities and differences of two social movements and their respective sets ot ideas and beliefs.Where I agree with the original essayIt is clear that there is a difference on how most effective altruists think about animals and how ecomodernists and other environmentalists do. This difference is well characterized in the essay. My moral intuitions here are more on the pan-species-utilitarianism side, but I am not a moral philosopher so I will not defend that view and just note that the description points to a real difference.I also agree that it is worth pointing out the differences and similarities between ecomodernism and effective altruism and, furthermore, that both have distinctive value to add to the world.With this clarified, let’s focus on the disagreements:Unworkable longtermism, if it exists at all, is only a small part of effective altruismBefore diving into the critique of unworkable longtermism it is worth pointing out that “longtermism” and “effective altruism” are not synonymous and that -- either for ethical reasons or for reasons similar to those discussed by Alex (the knowledge problem) -- most work in effective altruism is actually not long-termist.Even at its arguably most longtermist, in August 2022, estimated longtermist funding for 2022 was less than ⅓ of total effective altruist funding.Thus, however one comes out on the workability of longtermism, there is a large effective altruist project remaining not affected by this critique.The primary reason Alex gives for describing longtermism as unworkable is the knowledge problem:“But I want to focus on the “knowledge problem” as the core flaw in longtermism, since the problems associated with projecting too much certainty about the future are something effective altruists and conventional environmentalists have in common.We simply have no idea how likely it is that an asteroid will collide with the planet over the course of the next century, nor do we have any idea what civilization will exist in the year 2100 to deal with the effects of climate change, nor do we have any access to the preferences of interstellar metahumans in the year 21000. We do not need to have any idea how to make rational, robust actions and investments in the present.”This knowledge problem is, of course, well-known in effective altr...]]>
jackva https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:34 None full 4965
kPGqj8BMDzGpvFCCH_NL_EA_EA EA - The EA Mental Health and Productivity Survey 2023 by Emily Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Mental Health & Productivity Survey 2023, published by Emily on February 21, 2023 on The Effective Altruism Forum.This survey is intended for members of the Effective Altruism (EA) community who aim to improve or maintain good mental health and productivity. We’d be so grateful if you could donate ~10 minutes of your time to complete the survey! You will help both identify the most pressing next steps for enhancing mental flourishing within the EA community, and provide the interventions and resources you’d prefer. These can be psychological, physiological, and lifestyle interventions.Why this survey?The mind is inherently the basis of everything we do and feel. Its health and performance are the foundation of any metric of happiness and productivity at impactful work. Good mental health is not just the absence of mental health issues. It is a core component of flourishing, enabling functioning, wellbeing, and value-aligned living.Rethink Wellbeing, the Mental Health Navigator, High Impact Psychology, and two independent EAs have teamed up to create this community-wide survey on Mental Health and Productivity.Through this survey, we aim to better understand the key issues and bottlenecks of EA performance and well-being. We also want to shed light on EAs' interest in and openness to different interventions that proactively improve health, well-being, and productivity. The results will likely serve as a basis for further projects and initiatives surrounding the improvement of well-being, mental health and productivity in the EA community. By filling out this form, you will help us with that.Based on form responses, we will compile overview statistics for the EA community that will be published on the EA Forum in 2023.Survey informationPlease complete the survey by March 17th.We recommend you take the survey on your computer, since the format doesn’t work well on cell phones.All responses will be kept confidential, and we will not use the data you provide for any other purposes.Thank you!We are deeply grateful to all participants!Feel free to reach out to us if you have any questions or feedback.Emily Jennings, Samuel Nellessen, Tim Farkas, and Inga GrossmannThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Emily https://forum.effectivealtruism.org/posts/kPGqj8BMDzGpvFCCH/the-ea-mental-health-and-productivity-survey-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Mental Health & Productivity Survey 2023, published by Emily on February 21, 2023 on The Effective Altruism Forum.This survey is intended for members of the Effective Altruism (EA) community who aim to improve or maintain good mental health and productivity. We’d be so grateful if you could donate ~10 minutes of your time to complete the survey! You will help both identify the most pressing next steps for enhancing mental flourishing within the EA community, and provide the interventions and resources you’d prefer. These can be psychological, physiological, and lifestyle interventions.Why this survey?The mind is inherently the basis of everything we do and feel. Its health and performance are the foundation of any metric of happiness and productivity at impactful work. Good mental health is not just the absence of mental health issues. It is a core component of flourishing, enabling functioning, wellbeing, and value-aligned living.Rethink Wellbeing, the Mental Health Navigator, High Impact Psychology, and two independent EAs have teamed up to create this community-wide survey on Mental Health and Productivity.Through this survey, we aim to better understand the key issues and bottlenecks of EA performance and well-being. We also want to shed light on EAs' interest in and openness to different interventions that proactively improve health, well-being, and productivity. The results will likely serve as a basis for further projects and initiatives surrounding the improvement of well-being, mental health and productivity in the EA community. By filling out this form, you will help us with that.Based on form responses, we will compile overview statistics for the EA community that will be published on the EA Forum in 2023.Survey informationPlease complete the survey by March 17th.We recommend you take the survey on your computer, since the format doesn’t work well on cell phones.All responses will be kept confidential, and we will not use the data you provide for any other purposes.Thank you!We are deeply grateful to all participants!Feel free to reach out to us if you have any questions or feedback.Emily Jennings, Samuel Nellessen, Tim Farkas, and Inga GrossmannThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 21 Feb 2023 18:52:29 +0000 EA - The EA Mental Health and Productivity Survey 2023 by Emily Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Mental Health & Productivity Survey 2023, published by Emily on February 21, 2023 on The Effective Altruism Forum.This survey is intended for members of the Effective Altruism (EA) community who aim to improve or maintain good mental health and productivity. We’d be so grateful if you could donate ~10 minutes of your time to complete the survey! You will help both identify the most pressing next steps for enhancing mental flourishing within the EA community, and provide the interventions and resources you’d prefer. These can be psychological, physiological, and lifestyle interventions.Why this survey?The mind is inherently the basis of everything we do and feel. Its health and performance are the foundation of any metric of happiness and productivity at impactful work. Good mental health is not just the absence of mental health issues. It is a core component of flourishing, enabling functioning, wellbeing, and value-aligned living.Rethink Wellbeing, the Mental Health Navigator, High Impact Psychology, and two independent EAs have teamed up to create this community-wide survey on Mental Health and Productivity.Through this survey, we aim to better understand the key issues and bottlenecks of EA performance and well-being. We also want to shed light on EAs' interest in and openness to different interventions that proactively improve health, well-being, and productivity. The results will likely serve as a basis for further projects and initiatives surrounding the improvement of well-being, mental health and productivity in the EA community. By filling out this form, you will help us with that.Based on form responses, we will compile overview statistics for the EA community that will be published on the EA Forum in 2023.Survey informationPlease complete the survey by March 17th.We recommend you take the survey on your computer, since the format doesn’t work well on cell phones.All responses will be kept confidential, and we will not use the data you provide for any other purposes.Thank you!We are deeply grateful to all participants!Feel free to reach out to us if you have any questions or feedback.Emily Jennings, Samuel Nellessen, Tim Farkas, and Inga GrossmannThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Mental Health & Productivity Survey 2023, published by Emily on February 21, 2023 on The Effective Altruism Forum.This survey is intended for members of the Effective Altruism (EA) community who aim to improve or maintain good mental health and productivity. We’d be so grateful if you could donate ~10 minutes of your time to complete the survey! You will help both identify the most pressing next steps for enhancing mental flourishing within the EA community, and provide the interventions and resources you’d prefer. These can be psychological, physiological, and lifestyle interventions.Why this survey?The mind is inherently the basis of everything we do and feel. Its health and performance are the foundation of any metric of happiness and productivity at impactful work. Good mental health is not just the absence of mental health issues. It is a core component of flourishing, enabling functioning, wellbeing, and value-aligned living.Rethink Wellbeing, the Mental Health Navigator, High Impact Psychology, and two independent EAs have teamed up to create this community-wide survey on Mental Health and Productivity.Through this survey, we aim to better understand the key issues and bottlenecks of EA performance and well-being. We also want to shed light on EAs' interest in and openness to different interventions that proactively improve health, well-being, and productivity. The results will likely serve as a basis for further projects and initiatives surrounding the improvement of well-being, mental health and productivity in the EA community. By filling out this form, you will help us with that.Based on form responses, we will compile overview statistics for the EA community that will be published on the EA Forum in 2023.Survey informationPlease complete the survey by March 17th.We recommend you take the survey on your computer, since the format doesn’t work well on cell phones.All responses will be kept confidential, and we will not use the data you provide for any other purposes.Thank you!We are deeply grateful to all participants!Feel free to reach out to us if you have any questions or feedback.Emily Jennings, Samuel Nellessen, Tim Farkas, and Inga GrossmannThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Emily https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:20 None full 4966
2yrprQuNJpFJKMGWh_NL_EA_EA EA - Should we tell people they are morally obligated to give to charity? [Recent Paper] by benleo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should we tell people they are morally obligated to give to charity? [Recent Paper], published by benleo on February 21, 2023 on The Effective Altruism Forum.SUMMARYIn this post, we summarise a recently published paper of ours that investigates how people respond to moral arguments, and morally demanding statements, such as “You are morally obligated to give to charity” . The paper is forthcoming in the [Journal of Behavioural and Experimental Economics]. (If you want an ungated copy, please get in touch with either Ben or Philipp).We ran two pre-registered experiments with a total sample size of n=3700 participants.We compared a control treatment to a moral argument treatment, and we also varied the level of moral demandingness to donate after they read the moral argument. We found that the moral argument increased the frequency and amount of donations. However, increasing the levels of moral demandingness did not translate into higher or lower giving.BACKGROUNDThe central motivation for our paper was the worry that many have expressed, including a number of philosophers (e.g., Kagan, 1989; Unger, 1996; De Lazari-Radek and Singer, 2010) that having highly morally demanding solicitations for charitable giving may result in reduced (not increased) donations. This possibility of a backfire effect had been raised many times in a variety of contexts but had not been tested empirically. In our paper, we attempted to do just that in the context of donations to Give Directly.EXPERIMENT DESIGNIn our first study (n=2500), we had five treatments (control, moral argument, inspiration, weak demandingness, and strong demandingness). In the Control condition, we showed participants unrelated information about some technicalities of UK parliamentary procedure. In the Moral Argument condition, we presented participants with a text about global poverty and the ability of those living in Western countries to help (see figure 2). For the Inspiration, Weak Demandingness, and Strong Demandingness conditions, we used the same text as in the moral argument condition, but added one sentence to each.Inspiration: For these reasons, you can do a lot of good if you give money to charities-such as GiveDirectly-to alleviate the suffering of people in developing countries at a minimal cost to yourself.Weak Demandingness: For these reasons, you should give money to charities-such as GiveDirectly-to alleviate the suffering of people in developing countries at a minimal cost to yourself.Strong Demandingness: For these reasons, you are morally obligated to give money to charities-such as GiveDirectly-to alleviate the suffering of people in developing countries at a minimal cost to yourself.In this study, we were interested in two comparisons. First, we compared the control and the moral argument conditions to look at the effect of moral arguments on charitable giving. Second, we compared the moral argument with each of the three moral demandingness conditions to investigate whether increasing levels of moral demandingness lead to an increase or reduction in charitable giving. In our second study (n=1200), we narrow down our research question by looking only at the conditions of control, moral argument, and strong demandingness. We test the same two main questions as in our first study. The key difference is that the Moral Argument (and demandingness) was presented to participants via the Giving What We Can Website (see Figure 3). This was done to mitigate experimenter demand effects, as well as to provide a more natural vehicle for the information to be delivered.In both studies, after reading the randomly allotted text, participants could choose to donate some, none, or all of their earnings (20 ECUs, where 1 ECU=£0.05) to the charity GiveDirectly.RESULTSThe main results of experiment 1 ...]]>
benleo https://forum.effectivealtruism.org/posts/2yrprQuNJpFJKMGWh/should-we-tell-people-they-are-morally-obligated-to-give-to Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should we tell people they are morally obligated to give to charity? [Recent Paper], published by benleo on February 21, 2023 on The Effective Altruism Forum.SUMMARYIn this post, we summarise a recently published paper of ours that investigates how people respond to moral arguments, and morally demanding statements, such as “You are morally obligated to give to charity” . The paper is forthcoming in the [Journal of Behavioural and Experimental Economics]. (If you want an ungated copy, please get in touch with either Ben or Philipp).We ran two pre-registered experiments with a total sample size of n=3700 participants.We compared a control treatment to a moral argument treatment, and we also varied the level of moral demandingness to donate after they read the moral argument. We found that the moral argument increased the frequency and amount of donations. However, increasing the levels of moral demandingness did not translate into higher or lower giving.BACKGROUNDThe central motivation for our paper was the worry that many have expressed, including a number of philosophers (e.g., Kagan, 1989; Unger, 1996; De Lazari-Radek and Singer, 2010) that having highly morally demanding solicitations for charitable giving may result in reduced (not increased) donations. This possibility of a backfire effect had been raised many times in a variety of contexts but had not been tested empirically. In our paper, we attempted to do just that in the context of donations to Give Directly.EXPERIMENT DESIGNIn our first study (n=2500), we had five treatments (control, moral argument, inspiration, weak demandingness, and strong demandingness). In the Control condition, we showed participants unrelated information about some technicalities of UK parliamentary procedure. In the Moral Argument condition, we presented participants with a text about global poverty and the ability of those living in Western countries to help (see figure 2). For the Inspiration, Weak Demandingness, and Strong Demandingness conditions, we used the same text as in the moral argument condition, but added one sentence to each.Inspiration: For these reasons, you can do a lot of good if you give money to charities-such as GiveDirectly-to alleviate the suffering of people in developing countries at a minimal cost to yourself.Weak Demandingness: For these reasons, you should give money to charities-such as GiveDirectly-to alleviate the suffering of people in developing countries at a minimal cost to yourself.Strong Demandingness: For these reasons, you are morally obligated to give money to charities-such as GiveDirectly-to alleviate the suffering of people in developing countries at a minimal cost to yourself.In this study, we were interested in two comparisons. First, we compared the control and the moral argument conditions to look at the effect of moral arguments on charitable giving. Second, we compared the moral argument with each of the three moral demandingness conditions to investigate whether increasing levels of moral demandingness lead to an increase or reduction in charitable giving. In our second study (n=1200), we narrow down our research question by looking only at the conditions of control, moral argument, and strong demandingness. We test the same two main questions as in our first study. The key difference is that the Moral Argument (and demandingness) was presented to participants via the Giving What We Can Website (see Figure 3). This was done to mitigate experimenter demand effects, as well as to provide a more natural vehicle for the information to be delivered.In both studies, after reading the randomly allotted text, participants could choose to donate some, none, or all of their earnings (20 ECUs, where 1 ECU=£0.05) to the charity GiveDirectly.RESULTSThe main results of experiment 1 ...]]>
Tue, 21 Feb 2023 16:27:31 +0000 EA - Should we tell people they are morally obligated to give to charity? [Recent Paper] by benleo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should we tell people they are morally obligated to give to charity? [Recent Paper], published by benleo on February 21, 2023 on The Effective Altruism Forum.SUMMARYIn this post, we summarise a recently published paper of ours that investigates how people respond to moral arguments, and morally demanding statements, such as “You are morally obligated to give to charity” . The paper is forthcoming in the [Journal of Behavioural and Experimental Economics]. (If you want an ungated copy, please get in touch with either Ben or Philipp).We ran two pre-registered experiments with a total sample size of n=3700 participants.We compared a control treatment to a moral argument treatment, and we also varied the level of moral demandingness to donate after they read the moral argument. We found that the moral argument increased the frequency and amount of donations. However, increasing the levels of moral demandingness did not translate into higher or lower giving.BACKGROUNDThe central motivation for our paper was the worry that many have expressed, including a number of philosophers (e.g., Kagan, 1989; Unger, 1996; De Lazari-Radek and Singer, 2010) that having highly morally demanding solicitations for charitable giving may result in reduced (not increased) donations. This possibility of a backfire effect had been raised many times in a variety of contexts but had not been tested empirically. In our paper, we attempted to do just that in the context of donations to Give Directly.EXPERIMENT DESIGNIn our first study (n=2500), we had five treatments (control, moral argument, inspiration, weak demandingness, and strong demandingness). In the Control condition, we showed participants unrelated information about some technicalities of UK parliamentary procedure. In the Moral Argument condition, we presented participants with a text about global poverty and the ability of those living in Western countries to help (see figure 2). For the Inspiration, Weak Demandingness, and Strong Demandingness conditions, we used the same text as in the moral argument condition, but added one sentence to each.Inspiration: For these reasons, you can do a lot of good if you give money to charities-such as GiveDirectly-to alleviate the suffering of people in developing countries at a minimal cost to yourself.Weak Demandingness: For these reasons, you should give money to charities-such as GiveDirectly-to alleviate the suffering of people in developing countries at a minimal cost to yourself.Strong Demandingness: For these reasons, you are morally obligated to give money to charities-such as GiveDirectly-to alleviate the suffering of people in developing countries at a minimal cost to yourself.In this study, we were interested in two comparisons. First, we compared the control and the moral argument conditions to look at the effect of moral arguments on charitable giving. Second, we compared the moral argument with each of the three moral demandingness conditions to investigate whether increasing levels of moral demandingness lead to an increase or reduction in charitable giving. In our second study (n=1200), we narrow down our research question by looking only at the conditions of control, moral argument, and strong demandingness. We test the same two main questions as in our first study. The key difference is that the Moral Argument (and demandingness) was presented to participants via the Giving What We Can Website (see Figure 3). This was done to mitigate experimenter demand effects, as well as to provide a more natural vehicle for the information to be delivered.In both studies, after reading the randomly allotted text, participants could choose to donate some, none, or all of their earnings (20 ECUs, where 1 ECU=£0.05) to the charity GiveDirectly.RESULTSThe main results of experiment 1 ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should we tell people they are morally obligated to give to charity? [Recent Paper], published by benleo on February 21, 2023 on The Effective Altruism Forum.SUMMARYIn this post, we summarise a recently published paper of ours that investigates how people respond to moral arguments, and morally demanding statements, such as “You are morally obligated to give to charity” . The paper is forthcoming in the [Journal of Behavioural and Experimental Economics]. (If you want an ungated copy, please get in touch with either Ben or Philipp).We ran two pre-registered experiments with a total sample size of n=3700 participants.We compared a control treatment to a moral argument treatment, and we also varied the level of moral demandingness to donate after they read the moral argument. We found that the moral argument increased the frequency and amount of donations. However, increasing the levels of moral demandingness did not translate into higher or lower giving.BACKGROUNDThe central motivation for our paper was the worry that many have expressed, including a number of philosophers (e.g., Kagan, 1989; Unger, 1996; De Lazari-Radek and Singer, 2010) that having highly morally demanding solicitations for charitable giving may result in reduced (not increased) donations. This possibility of a backfire effect had been raised many times in a variety of contexts but had not been tested empirically. In our paper, we attempted to do just that in the context of donations to Give Directly.EXPERIMENT DESIGNIn our first study (n=2500), we had five treatments (control, moral argument, inspiration, weak demandingness, and strong demandingness). In the Control condition, we showed participants unrelated information about some technicalities of UK parliamentary procedure. In the Moral Argument condition, we presented participants with a text about global poverty and the ability of those living in Western countries to help (see figure 2). For the Inspiration, Weak Demandingness, and Strong Demandingness conditions, we used the same text as in the moral argument condition, but added one sentence to each.Inspiration: For these reasons, you can do a lot of good if you give money to charities-such as GiveDirectly-to alleviate the suffering of people in developing countries at a minimal cost to yourself.Weak Demandingness: For these reasons, you should give money to charities-such as GiveDirectly-to alleviate the suffering of people in developing countries at a minimal cost to yourself.Strong Demandingness: For these reasons, you are morally obligated to give money to charities-such as GiveDirectly-to alleviate the suffering of people in developing countries at a minimal cost to yourself.In this study, we were interested in two comparisons. First, we compared the control and the moral argument conditions to look at the effect of moral arguments on charitable giving. Second, we compared the moral argument with each of the three moral demandingness conditions to investigate whether increasing levels of moral demandingness lead to an increase or reduction in charitable giving. In our second study (n=1200), we narrow down our research question by looking only at the conditions of control, moral argument, and strong demandingness. We test the same two main questions as in our first study. The key difference is that the Moral Argument (and demandingness) was presented to participants via the Giving What We Can Website (see Figure 3). This was done to mitigate experimenter demand effects, as well as to provide a more natural vehicle for the information to be delivered.In both studies, after reading the randomly allotted text, participants could choose to donate some, none, or all of their earnings (20 ECUs, where 1 ECU=£0.05) to the charity GiveDirectly.RESULTSThe main results of experiment 1 ...]]>
benleo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:10 None full 4968
FoRyordtA7LDoEhd7_NL_EA_EA EA - There are no coherence theorems by EJT Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There are no coherence theorems, published by EJT on February 20, 2023 on The Effective Altruism Forum.IntroductionFor about fifteen years, the AI safety community has been discussing coherence arguments. In papers and posts on the subject, it’s often written that there exist 'coherence theorems' which state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue strategies that are dominated by some other available strategy. Despite the prominence of these arguments, authors are often a little hazy about exactly which theorems qualify as coherence theorems. This is no accident. If the authors had tried to be precise, they would have discovered that there are no such theorems.I’m concerned about this. Coherence arguments seem to be a moderately important part of the basic case for existential risk from AI. To spot the error in these arguments, we only have to look up what cited ‘coherence theorems’ actually say. And yet the error seems to have gone uncorrected for more than a decade.More detail below.Coherence argumentsSome authors frame coherence arguments in terms of ‘dominated strategies’. Others frame them in terms of ‘exploitation’, ‘money-pumping’, ‘Dutch Books’, ‘shooting oneself in the foot’, ‘Pareto-suboptimal behavior’, and ‘losing things that one values’ (see the Appendix for examples).In the context of coherence arguments, each of these terms means roughly the same thing: a strategy A is dominated by a strategy B if and only if A is worse than B in some respect that the agent cares about and A is not better than B in any respect that the agent cares about. If the agent chooses A over B, they have behaved Pareto-suboptimally, shot themselves in the foot, and lost something that they value. If the agent’s loss is someone else’s gain, then the agent has been exploited, money-pumped, or Dutch-booked. Since all these phrases point to the same sort of phenomenon, I’ll save words by talking mainly in terms of ‘dominated strategies’.With that background, here’s a quick rendition of coherence arguments:There exist coherence theorems which state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue strategies that are dominated by some other available strategy.Sufficiently-advanced artificial agents will not pursue dominated strategies.So, sufficiently-advanced artificial agents will be ‘coherent’: they will be representable as maximizing expected utility.Typically, authors go on to suggest that these expected-utility-maximizing agents are likely to behave in certain, potentially-dangerous ways. For example, such agents are likely to appear ‘goal-directed’ in some intuitive sense. They are likely to have certain instrumental goals, like acquiring power and resources. And they are likely to fight back against attempts to shut them down or modify their goals.There are many ways to challenge the argument stated above, and many of those challenges have been made. There are also many ways to respond to those challenges, and many of those responses have been made too. The challenge that seems to remain yet unmade is that Premise 1 is false: there are no coherence theorems.Cited ‘coherence theorems’ and what they actually sayHere’s a list of theorems that have been called ‘coherence theorems’. None of these theorems state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue dominated strategies. Here’s what the theorems say:The Von Neumann-Morgenstern Expected Utility Theorem:The Von Neumann-Morgenstern Expected Utility Theorem is as follows:An agent can be represented as maximizing expected utility if and only if their preferences satisfy the following four axioms:Completeness: For all lotteries X and Y, X...]]>
EJT https://forum.effectivealtruism.org/posts/FoRyordtA7LDoEhd7/there-are-no-coherence-theorems Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There are no coherence theorems, published by EJT on February 20, 2023 on The Effective Altruism Forum.IntroductionFor about fifteen years, the AI safety community has been discussing coherence arguments. In papers and posts on the subject, it’s often written that there exist 'coherence theorems' which state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue strategies that are dominated by some other available strategy. Despite the prominence of these arguments, authors are often a little hazy about exactly which theorems qualify as coherence theorems. This is no accident. If the authors had tried to be precise, they would have discovered that there are no such theorems.I’m concerned about this. Coherence arguments seem to be a moderately important part of the basic case for existential risk from AI. To spot the error in these arguments, we only have to look up what cited ‘coherence theorems’ actually say. And yet the error seems to have gone uncorrected for more than a decade.More detail below.Coherence argumentsSome authors frame coherence arguments in terms of ‘dominated strategies’. Others frame them in terms of ‘exploitation’, ‘money-pumping’, ‘Dutch Books’, ‘shooting oneself in the foot’, ‘Pareto-suboptimal behavior’, and ‘losing things that one values’ (see the Appendix for examples).In the context of coherence arguments, each of these terms means roughly the same thing: a strategy A is dominated by a strategy B if and only if A is worse than B in some respect that the agent cares about and A is not better than B in any respect that the agent cares about. If the agent chooses A over B, they have behaved Pareto-suboptimally, shot themselves in the foot, and lost something that they value. If the agent’s loss is someone else’s gain, then the agent has been exploited, money-pumped, or Dutch-booked. Since all these phrases point to the same sort of phenomenon, I’ll save words by talking mainly in terms of ‘dominated strategies’.With that background, here’s a quick rendition of coherence arguments:There exist coherence theorems which state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue strategies that are dominated by some other available strategy.Sufficiently-advanced artificial agents will not pursue dominated strategies.So, sufficiently-advanced artificial agents will be ‘coherent’: they will be representable as maximizing expected utility.Typically, authors go on to suggest that these expected-utility-maximizing agents are likely to behave in certain, potentially-dangerous ways. For example, such agents are likely to appear ‘goal-directed’ in some intuitive sense. They are likely to have certain instrumental goals, like acquiring power and resources. And they are likely to fight back against attempts to shut them down or modify their goals.There are many ways to challenge the argument stated above, and many of those challenges have been made. There are also many ways to respond to those challenges, and many of those responses have been made too. The challenge that seems to remain yet unmade is that Premise 1 is false: there are no coherence theorems.Cited ‘coherence theorems’ and what they actually sayHere’s a list of theorems that have been called ‘coherence theorems’. None of these theorems state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue dominated strategies. Here’s what the theorems say:The Von Neumann-Morgenstern Expected Utility Theorem:The Von Neumann-Morgenstern Expected Utility Theorem is as follows:An agent can be represented as maximizing expected utility if and only if their preferences satisfy the following four axioms:Completeness: For all lotteries X and Y, X...]]>
Mon, 20 Feb 2023 23:22:13 +0000 EA - There are no coherence theorems by EJT Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There are no coherence theorems, published by EJT on February 20, 2023 on The Effective Altruism Forum.IntroductionFor about fifteen years, the AI safety community has been discussing coherence arguments. In papers and posts on the subject, it’s often written that there exist 'coherence theorems' which state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue strategies that are dominated by some other available strategy. Despite the prominence of these arguments, authors are often a little hazy about exactly which theorems qualify as coherence theorems. This is no accident. If the authors had tried to be precise, they would have discovered that there are no such theorems.I’m concerned about this. Coherence arguments seem to be a moderately important part of the basic case for existential risk from AI. To spot the error in these arguments, we only have to look up what cited ‘coherence theorems’ actually say. And yet the error seems to have gone uncorrected for more than a decade.More detail below.Coherence argumentsSome authors frame coherence arguments in terms of ‘dominated strategies’. Others frame them in terms of ‘exploitation’, ‘money-pumping’, ‘Dutch Books’, ‘shooting oneself in the foot’, ‘Pareto-suboptimal behavior’, and ‘losing things that one values’ (see the Appendix for examples).In the context of coherence arguments, each of these terms means roughly the same thing: a strategy A is dominated by a strategy B if and only if A is worse than B in some respect that the agent cares about and A is not better than B in any respect that the agent cares about. If the agent chooses A over B, they have behaved Pareto-suboptimally, shot themselves in the foot, and lost something that they value. If the agent’s loss is someone else’s gain, then the agent has been exploited, money-pumped, or Dutch-booked. Since all these phrases point to the same sort of phenomenon, I’ll save words by talking mainly in terms of ‘dominated strategies’.With that background, here’s a quick rendition of coherence arguments:There exist coherence theorems which state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue strategies that are dominated by some other available strategy.Sufficiently-advanced artificial agents will not pursue dominated strategies.So, sufficiently-advanced artificial agents will be ‘coherent’: they will be representable as maximizing expected utility.Typically, authors go on to suggest that these expected-utility-maximizing agents are likely to behave in certain, potentially-dangerous ways. For example, such agents are likely to appear ‘goal-directed’ in some intuitive sense. They are likely to have certain instrumental goals, like acquiring power and resources. And they are likely to fight back against attempts to shut them down or modify their goals.There are many ways to challenge the argument stated above, and many of those challenges have been made. There are also many ways to respond to those challenges, and many of those responses have been made too. The challenge that seems to remain yet unmade is that Premise 1 is false: there are no coherence theorems.Cited ‘coherence theorems’ and what they actually sayHere’s a list of theorems that have been called ‘coherence theorems’. None of these theorems state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue dominated strategies. Here’s what the theorems say:The Von Neumann-Morgenstern Expected Utility Theorem:The Von Neumann-Morgenstern Expected Utility Theorem is as follows:An agent can be represented as maximizing expected utility if and only if their preferences satisfy the following four axioms:Completeness: For all lotteries X and Y, X...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There are no coherence theorems, published by EJT on February 20, 2023 on The Effective Altruism Forum.IntroductionFor about fifteen years, the AI safety community has been discussing coherence arguments. In papers and posts on the subject, it’s often written that there exist 'coherence theorems' which state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue strategies that are dominated by some other available strategy. Despite the prominence of these arguments, authors are often a little hazy about exactly which theorems qualify as coherence theorems. This is no accident. If the authors had tried to be precise, they would have discovered that there are no such theorems.I’m concerned about this. Coherence arguments seem to be a moderately important part of the basic case for existential risk from AI. To spot the error in these arguments, we only have to look up what cited ‘coherence theorems’ actually say. And yet the error seems to have gone uncorrected for more than a decade.More detail below.Coherence argumentsSome authors frame coherence arguments in terms of ‘dominated strategies’. Others frame them in terms of ‘exploitation’, ‘money-pumping’, ‘Dutch Books’, ‘shooting oneself in the foot’, ‘Pareto-suboptimal behavior’, and ‘losing things that one values’ (see the Appendix for examples).In the context of coherence arguments, each of these terms means roughly the same thing: a strategy A is dominated by a strategy B if and only if A is worse than B in some respect that the agent cares about and A is not better than B in any respect that the agent cares about. If the agent chooses A over B, they have behaved Pareto-suboptimally, shot themselves in the foot, and lost something that they value. If the agent’s loss is someone else’s gain, then the agent has been exploited, money-pumped, or Dutch-booked. Since all these phrases point to the same sort of phenomenon, I’ll save words by talking mainly in terms of ‘dominated strategies’.With that background, here’s a quick rendition of coherence arguments:There exist coherence theorems which state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue strategies that are dominated by some other available strategy.Sufficiently-advanced artificial agents will not pursue dominated strategies.So, sufficiently-advanced artificial agents will be ‘coherent’: they will be representable as maximizing expected utility.Typically, authors go on to suggest that these expected-utility-maximizing agents are likely to behave in certain, potentially-dangerous ways. For example, such agents are likely to appear ‘goal-directed’ in some intuitive sense. They are likely to have certain instrumental goals, like acquiring power and resources. And they are likely to fight back against attempts to shut them down or modify their goals.There are many ways to challenge the argument stated above, and many of those challenges have been made. There are also many ways to respond to those challenges, and many of those responses have been made too. The challenge that seems to remain yet unmade is that Premise 1 is false: there are no coherence theorems.Cited ‘coherence theorems’ and what they actually sayHere’s a list of theorems that have been called ‘coherence theorems’. None of these theorems state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue dominated strategies. Here’s what the theorems say:The Von Neumann-Morgenstern Expected Utility Theorem:The Von Neumann-Morgenstern Expected Utility Theorem is as follows:An agent can be represented as maximizing expected utility if and only if their preferences satisfy the following four axioms:Completeness: For all lotteries X and Y, X...]]>
EJT https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 34:32 None full 4963
zbPMiA6CftdB827cF_NL_EA_EA EA - Sanity check - effectiveness/goodness of Trans Rescue? by David D Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sanity check - effectiveness/goodness of Trans Rescue?, published by David D on February 20, 2023 on The Effective Altruism Forum.I stumbled across the charity Trans Rescue, which helps transgender people living in unsafe parts of the world move. They've published advice for people living in first world countries with worsening legal situations for trans people, but the vast majority of their funding goes toward helping people in Africa and the Middle East immigrate to safer countries (or for Kenyans, move to Trans Rescue's group home in the safest region of Kenya) and stay away from abusive families.As of September 2022, their total funding since inception was just under 33k euros . They helped about twenty people move using this funding/ . That puts the cost to help a person move at about 1,650 euros, which is in the same ballpark as a Givewell top charity's cost to save one person from fatal malaria.I haven't looked closely at the likely outcome for people who would benefit from Trans Rescue's services but don't get help. Some would live and some would not, but I don't have a good sense of the relative numbers, or how to put QUALYs on undertaking a move such as this. Since they're very new and very small, I'm considering donating and keeping an eye on how they grow as an organization.Mainly I hoped you all could help me by pointing out whether there's anything fishy that I might have missed. This review was published by a group of Twitter users, apparently after an argument with one of the board members. It's certainly not unbiased, but they do seem to have made a concerted effort to find anything bad or construable as bad that Trans Rescue has ever done. Trans Rescue wrote a blog post in response . I came away with a sense that the board is new at running an organization like this, and they rely on imperfect volunteer labor to be able to move as many people as they do, but their work is overall helpful to their clients.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
David D https://forum.effectivealtruism.org/posts/zbPMiA6CftdB827cF/sanity-check-effectiveness-goodness-of-trans-rescue Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sanity check - effectiveness/goodness of Trans Rescue?, published by David D on February 20, 2023 on The Effective Altruism Forum.I stumbled across the charity Trans Rescue, which helps transgender people living in unsafe parts of the world move. They've published advice for people living in first world countries with worsening legal situations for trans people, but the vast majority of their funding goes toward helping people in Africa and the Middle East immigrate to safer countries (or for Kenyans, move to Trans Rescue's group home in the safest region of Kenya) and stay away from abusive families.As of September 2022, their total funding since inception was just under 33k euros . They helped about twenty people move using this funding/ . That puts the cost to help a person move at about 1,650 euros, which is in the same ballpark as a Givewell top charity's cost to save one person from fatal malaria.I haven't looked closely at the likely outcome for people who would benefit from Trans Rescue's services but don't get help. Some would live and some would not, but I don't have a good sense of the relative numbers, or how to put QUALYs on undertaking a move such as this. Since they're very new and very small, I'm considering donating and keeping an eye on how they grow as an organization.Mainly I hoped you all could help me by pointing out whether there's anything fishy that I might have missed. This review was published by a group of Twitter users, apparently after an argument with one of the board members. It's certainly not unbiased, but they do seem to have made a concerted effort to find anything bad or construable as bad that Trans Rescue has ever done. Trans Rescue wrote a blog post in response . I came away with a sense that the board is new at running an organization like this, and they rely on imperfect volunteer labor to be able to move as many people as they do, but their work is overall helpful to their clients.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 20 Feb 2023 22:01:14 +0000 EA - Sanity check - effectiveness/goodness of Trans Rescue? by David D Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sanity check - effectiveness/goodness of Trans Rescue?, published by David D on February 20, 2023 on The Effective Altruism Forum.I stumbled across the charity Trans Rescue, which helps transgender people living in unsafe parts of the world move. They've published advice for people living in first world countries with worsening legal situations for trans people, but the vast majority of their funding goes toward helping people in Africa and the Middle East immigrate to safer countries (or for Kenyans, move to Trans Rescue's group home in the safest region of Kenya) and stay away from abusive families.As of September 2022, their total funding since inception was just under 33k euros . They helped about twenty people move using this funding/ . That puts the cost to help a person move at about 1,650 euros, which is in the same ballpark as a Givewell top charity's cost to save one person from fatal malaria.I haven't looked closely at the likely outcome for people who would benefit from Trans Rescue's services but don't get help. Some would live and some would not, but I don't have a good sense of the relative numbers, or how to put QUALYs on undertaking a move such as this. Since they're very new and very small, I'm considering donating and keeping an eye on how they grow as an organization.Mainly I hoped you all could help me by pointing out whether there's anything fishy that I might have missed. This review was published by a group of Twitter users, apparently after an argument with one of the board members. It's certainly not unbiased, but they do seem to have made a concerted effort to find anything bad or construable as bad that Trans Rescue has ever done. Trans Rescue wrote a blog post in response . I came away with a sense that the board is new at running an organization like this, and they rely on imperfect volunteer labor to be able to move as many people as they do, but their work is overall helpful to their clients.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sanity check - effectiveness/goodness of Trans Rescue?, published by David D on February 20, 2023 on The Effective Altruism Forum.I stumbled across the charity Trans Rescue, which helps transgender people living in unsafe parts of the world move. They've published advice for people living in first world countries with worsening legal situations for trans people, but the vast majority of their funding goes toward helping people in Africa and the Middle East immigrate to safer countries (or for Kenyans, move to Trans Rescue's group home in the safest region of Kenya) and stay away from abusive families.As of September 2022, their total funding since inception was just under 33k euros . They helped about twenty people move using this funding/ . That puts the cost to help a person move at about 1,650 euros, which is in the same ballpark as a Givewell top charity's cost to save one person from fatal malaria.I haven't looked closely at the likely outcome for people who would benefit from Trans Rescue's services but don't get help. Some would live and some would not, but I don't have a good sense of the relative numbers, or how to put QUALYs on undertaking a move such as this. Since they're very new and very small, I'm considering donating and keeping an eye on how they grow as an organization.Mainly I hoped you all could help me by pointing out whether there's anything fishy that I might have missed. This review was published by a group of Twitter users, apparently after an argument with one of the board members. It's certainly not unbiased, but they do seem to have made a concerted effort to find anything bad or construable as bad that Trans Rescue has ever done. Trans Rescue wrote a blog post in response . I came away with a sense that the board is new at running an organization like this, and they rely on imperfect volunteer labor to be able to move as many people as they do, but their work is overall helpful to their clients.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
David D https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:00 None full 4951
8XDcMhJ65mwWrouro_NL_EA_EA EA - Join a new slack for animal advocacy by SofiaBalderson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Join a new slack for animal advocacy, published by SofiaBalderson on February 20, 2023 on The Effective Altruism Forum.We'd like to extend an invitation to you all for a new Slack space that brings together individuals who are passionate about making a meaningful impact for animals.Our aim is to:support discussions in the animal advocacy spacefoster new connectionslearn from each othergenerate innovative ideasperhaps even launch new projects.It would be great for you to join us today and introduce yourself to the community! We already have some great discussions going.FAQs:"But there are already lots of Slack spaces?" - before we started this space, there wasn't a space that was just for animal welfare."Who should join?"Anyone already working in animal advocacy or adjacent organisationAnyone involved in alt proteinAnyone, regardless of experience or place of work, interested in helping animals in an impactful wayBy animals, we mean farmed animals and wild animals"How much do I have to participate, could I join and lurk for a bit?"- Sure, you don't need to be active all the time, all we ask is that you introduce yourself and invite anyone you know in the space who may be interested. If you'd like, you can message others or participate in discussions, but no pressure.If in doubt, please join!Best wishes,Sofia and CameronThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
SofiaBalderson https://forum.effectivealtruism.org/posts/8XDcMhJ65mwWrouro/join-a-new-slack-for-animal-advocacy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Join a new slack for animal advocacy, published by SofiaBalderson on February 20, 2023 on The Effective Altruism Forum.We'd like to extend an invitation to you all for a new Slack space that brings together individuals who are passionate about making a meaningful impact for animals.Our aim is to:support discussions in the animal advocacy spacefoster new connectionslearn from each othergenerate innovative ideasperhaps even launch new projects.It would be great for you to join us today and introduce yourself to the community! We already have some great discussions going.FAQs:"But there are already lots of Slack spaces?" - before we started this space, there wasn't a space that was just for animal welfare."Who should join?"Anyone already working in animal advocacy or adjacent organisationAnyone involved in alt proteinAnyone, regardless of experience or place of work, interested in helping animals in an impactful wayBy animals, we mean farmed animals and wild animals"How much do I have to participate, could I join and lurk for a bit?"- Sure, you don't need to be active all the time, all we ask is that you introduce yourself and invite anyone you know in the space who may be interested. If you'd like, you can message others or participate in discussions, but no pressure.If in doubt, please join!Best wishes,Sofia and CameronThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 20 Feb 2023 22:01:01 +0000 EA - Join a new slack for animal advocacy by SofiaBalderson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Join a new slack for animal advocacy, published by SofiaBalderson on February 20, 2023 on The Effective Altruism Forum.We'd like to extend an invitation to you all for a new Slack space that brings together individuals who are passionate about making a meaningful impact for animals.Our aim is to:support discussions in the animal advocacy spacefoster new connectionslearn from each othergenerate innovative ideasperhaps even launch new projects.It would be great for you to join us today and introduce yourself to the community! We already have some great discussions going.FAQs:"But there are already lots of Slack spaces?" - before we started this space, there wasn't a space that was just for animal welfare."Who should join?"Anyone already working in animal advocacy or adjacent organisationAnyone involved in alt proteinAnyone, regardless of experience or place of work, interested in helping animals in an impactful wayBy animals, we mean farmed animals and wild animals"How much do I have to participate, could I join and lurk for a bit?"- Sure, you don't need to be active all the time, all we ask is that you introduce yourself and invite anyone you know in the space who may be interested. If you'd like, you can message others or participate in discussions, but no pressure.If in doubt, please join!Best wishes,Sofia and CameronThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Join a new slack for animal advocacy, published by SofiaBalderson on February 20, 2023 on The Effective Altruism Forum.We'd like to extend an invitation to you all for a new Slack space that brings together individuals who are passionate about making a meaningful impact for animals.Our aim is to:support discussions in the animal advocacy spacefoster new connectionslearn from each othergenerate innovative ideasperhaps even launch new projects.It would be great for you to join us today and introduce yourself to the community! We already have some great discussions going.FAQs:"But there are already lots of Slack spaces?" - before we started this space, there wasn't a space that was just for animal welfare."Who should join?"Anyone already working in animal advocacy or adjacent organisationAnyone involved in alt proteinAnyone, regardless of experience or place of work, interested in helping animals in an impactful wayBy animals, we mean farmed animals and wild animals"How much do I have to participate, could I join and lurk for a bit?"- Sure, you don't need to be active all the time, all we ask is that you introduce yourself and invite anyone you know in the space who may be interested. If you'd like, you can message others or participate in discussions, but no pressure.If in doubt, please join!Best wishes,Sofia and CameronThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
SofiaBalderson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:37 None full 4944
46Wi9cnTKP2gTwNd3_NL_EA_EA EA - On Loyalty by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Loyalty, published by Nathan Young on February 20, 2023 on The Effective Altruism Forum.Epistemic status: I am confident about most individual points but there are probably errors overall. I imagine if there are lots of comments to this piece I'll have changed my mind by the time I've finished reading them.I was having a conversation with a friend, who said that EAs weren't loyal. They said that the community's recent behaviour would made them fear that they would be attacked for small errors. Hearing this I felt sad, and wanted to understand.Tl;drI feel a desire to be loyal. This is an exploration of that. If you don't feel that desire this may not be usefulI think loyalty is a compact between current and future members on what it is to be within a community - "do this and you will be safe"Status matters and loyalty is a "status insurance policy" - even if everyone else doesn't like you, we willI find more interest in where we have been disloyal than where we ought to be loyal.Were we disloyal to Bostrom?Is loyalty even good?Were people disloyal in sharing the Leadership Slack?Going forward I would like thatI have and give a clear sense of how I coordinate with othersI take crises slowly and relaxedly and not panicI am able to trust that people will keep private conversations which discuss no serious wrongdoing secret and have them trust me that I will do the sameI feel that it is acceptable to talk to journalists if helping a directionally accurate story, but that almost all of the time I should consider this a risky thing to doThis account is deliberately incoherent - it contains many views and feelings that I can't turn into a single argument. Feel free to disagree or suggestion some kind of synthesis.IntroTesting situations are when I find out who I am. But importantly, I can change. And living right matters to me perhaps more than anything else (though I am hugely flawed). So should I change here?I go through these 1 by 1 because I think this is actually hard and I don't know the answers. Feel free to nitpick.What is Loyalty (in this article)?I think loyalty is the sense that do right by those who have followed the rules. It's a coordination tool. "You stuck by the rules, so you'll be treated well", "you are safe within these bounds, you don't need to fear".I want to be loyal - for people to think "there goes Nathan, when I abide by community norms, he won't treat me badly".The notion of safety implies a notion of risk. I am okay with that. Sometimes a little fear and ambiguity is good - I'm okay with a little fear around flirting at EAGs because that reduces the amount of it, I'm okay with ambiguity in how to work in a biorisk lab. There isn't always a clear path to "doing the right thing" and if, in hindsight I didn't do it, I don't want your loyalty. But I want our safety, uncertainty, disaster circles to be well calibrated.Some might say "loyalty isn't good" - that we should seek to treat those in EA exactly the same as those outside. For me this equivaltent to saying our circles should be well calibrated - if someone turns out to have broken the norms we care about then I already don't feel a need to be loyal to them.But to me, a sense of ingroup loyalty feel inevitable. I just like you more and differently than those ousisde EA. It feels naïve to say otherwise. Much like "you aren't in traffic, you are trafffic", "I don't just feel feelings, to some extent I am feelings"So let's cut to the chase.The One With The EmailNick Bostrom sent an awful email. He wrote an apology a while back, but he wrote another to avoid a scandal (whoops). Twitter at least did not think he was sorry. CEA wrote a short condemnation. There was a lot of forum upset. Hopefully we can agree on this.Let's look at this through some different frames.The ...]]>
Nathan Young https://forum.effectivealtruism.org/posts/46Wi9cnTKP2gTwNd3/on-loyalty Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Loyalty, published by Nathan Young on February 20, 2023 on The Effective Altruism Forum.Epistemic status: I am confident about most individual points but there are probably errors overall. I imagine if there are lots of comments to this piece I'll have changed my mind by the time I've finished reading them.I was having a conversation with a friend, who said that EAs weren't loyal. They said that the community's recent behaviour would made them fear that they would be attacked for small errors. Hearing this I felt sad, and wanted to understand.Tl;drI feel a desire to be loyal. This is an exploration of that. If you don't feel that desire this may not be usefulI think loyalty is a compact between current and future members on what it is to be within a community - "do this and you will be safe"Status matters and loyalty is a "status insurance policy" - even if everyone else doesn't like you, we willI find more interest in where we have been disloyal than where we ought to be loyal.Were we disloyal to Bostrom?Is loyalty even good?Were people disloyal in sharing the Leadership Slack?Going forward I would like thatI have and give a clear sense of how I coordinate with othersI take crises slowly and relaxedly and not panicI am able to trust that people will keep private conversations which discuss no serious wrongdoing secret and have them trust me that I will do the sameI feel that it is acceptable to talk to journalists if helping a directionally accurate story, but that almost all of the time I should consider this a risky thing to doThis account is deliberately incoherent - it contains many views and feelings that I can't turn into a single argument. Feel free to disagree or suggestion some kind of synthesis.IntroTesting situations are when I find out who I am. But importantly, I can change. And living right matters to me perhaps more than anything else (though I am hugely flawed). So should I change here?I go through these 1 by 1 because I think this is actually hard and I don't know the answers. Feel free to nitpick.What is Loyalty (in this article)?I think loyalty is the sense that do right by those who have followed the rules. It's a coordination tool. "You stuck by the rules, so you'll be treated well", "you are safe within these bounds, you don't need to fear".I want to be loyal - for people to think "there goes Nathan, when I abide by community norms, he won't treat me badly".The notion of safety implies a notion of risk. I am okay with that. Sometimes a little fear and ambiguity is good - I'm okay with a little fear around flirting at EAGs because that reduces the amount of it, I'm okay with ambiguity in how to work in a biorisk lab. There isn't always a clear path to "doing the right thing" and if, in hindsight I didn't do it, I don't want your loyalty. But I want our safety, uncertainty, disaster circles to be well calibrated.Some might say "loyalty isn't good" - that we should seek to treat those in EA exactly the same as those outside. For me this equivaltent to saying our circles should be well calibrated - if someone turns out to have broken the norms we care about then I already don't feel a need to be loyal to them.But to me, a sense of ingroup loyalty feel inevitable. I just like you more and differently than those ousisde EA. It feels naïve to say otherwise. Much like "you aren't in traffic, you are trafffic", "I don't just feel feelings, to some extent I am feelings"So let's cut to the chase.The One With The EmailNick Bostrom sent an awful email. He wrote an apology a while back, but he wrote another to avoid a scandal (whoops). Twitter at least did not think he was sorry. CEA wrote a short condemnation. There was a lot of forum upset. Hopefully we can agree on this.Let's look at this through some different frames.The ...]]>
Mon, 20 Feb 2023 21:51:14 +0000 EA - On Loyalty by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Loyalty, published by Nathan Young on February 20, 2023 on The Effective Altruism Forum.Epistemic status: I am confident about most individual points but there are probably errors overall. I imagine if there are lots of comments to this piece I'll have changed my mind by the time I've finished reading them.I was having a conversation with a friend, who said that EAs weren't loyal. They said that the community's recent behaviour would made them fear that they would be attacked for small errors. Hearing this I felt sad, and wanted to understand.Tl;drI feel a desire to be loyal. This is an exploration of that. If you don't feel that desire this may not be usefulI think loyalty is a compact between current and future members on what it is to be within a community - "do this and you will be safe"Status matters and loyalty is a "status insurance policy" - even if everyone else doesn't like you, we willI find more interest in where we have been disloyal than where we ought to be loyal.Were we disloyal to Bostrom?Is loyalty even good?Were people disloyal in sharing the Leadership Slack?Going forward I would like thatI have and give a clear sense of how I coordinate with othersI take crises slowly and relaxedly and not panicI am able to trust that people will keep private conversations which discuss no serious wrongdoing secret and have them trust me that I will do the sameI feel that it is acceptable to talk to journalists if helping a directionally accurate story, but that almost all of the time I should consider this a risky thing to doThis account is deliberately incoherent - it contains many views and feelings that I can't turn into a single argument. Feel free to disagree or suggestion some kind of synthesis.IntroTesting situations are when I find out who I am. But importantly, I can change. And living right matters to me perhaps more than anything else (though I am hugely flawed). So should I change here?I go through these 1 by 1 because I think this is actually hard and I don't know the answers. Feel free to nitpick.What is Loyalty (in this article)?I think loyalty is the sense that do right by those who have followed the rules. It's a coordination tool. "You stuck by the rules, so you'll be treated well", "you are safe within these bounds, you don't need to fear".I want to be loyal - for people to think "there goes Nathan, when I abide by community norms, he won't treat me badly".The notion of safety implies a notion of risk. I am okay with that. Sometimes a little fear and ambiguity is good - I'm okay with a little fear around flirting at EAGs because that reduces the amount of it, I'm okay with ambiguity in how to work in a biorisk lab. There isn't always a clear path to "doing the right thing" and if, in hindsight I didn't do it, I don't want your loyalty. But I want our safety, uncertainty, disaster circles to be well calibrated.Some might say "loyalty isn't good" - that we should seek to treat those in EA exactly the same as those outside. For me this equivaltent to saying our circles should be well calibrated - if someone turns out to have broken the norms we care about then I already don't feel a need to be loyal to them.But to me, a sense of ingroup loyalty feel inevitable. I just like you more and differently than those ousisde EA. It feels naïve to say otherwise. Much like "you aren't in traffic, you are trafffic", "I don't just feel feelings, to some extent I am feelings"So let's cut to the chase.The One With The EmailNick Bostrom sent an awful email. He wrote an apology a while back, but he wrote another to avoid a scandal (whoops). Twitter at least did not think he was sorry. CEA wrote a short condemnation. There was a lot of forum upset. Hopefully we can agree on this.Let's look at this through some different frames.The ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Loyalty, published by Nathan Young on February 20, 2023 on The Effective Altruism Forum.Epistemic status: I am confident about most individual points but there are probably errors overall. I imagine if there are lots of comments to this piece I'll have changed my mind by the time I've finished reading them.I was having a conversation with a friend, who said that EAs weren't loyal. They said that the community's recent behaviour would made them fear that they would be attacked for small errors. Hearing this I felt sad, and wanted to understand.Tl;drI feel a desire to be loyal. This is an exploration of that. If you don't feel that desire this may not be usefulI think loyalty is a compact between current and future members on what it is to be within a community - "do this and you will be safe"Status matters and loyalty is a "status insurance policy" - even if everyone else doesn't like you, we willI find more interest in where we have been disloyal than where we ought to be loyal.Were we disloyal to Bostrom?Is loyalty even good?Were people disloyal in sharing the Leadership Slack?Going forward I would like thatI have and give a clear sense of how I coordinate with othersI take crises slowly and relaxedly and not panicI am able to trust that people will keep private conversations which discuss no serious wrongdoing secret and have them trust me that I will do the sameI feel that it is acceptable to talk to journalists if helping a directionally accurate story, but that almost all of the time I should consider this a risky thing to doThis account is deliberately incoherent - it contains many views and feelings that I can't turn into a single argument. Feel free to disagree or suggestion some kind of synthesis.IntroTesting situations are when I find out who I am. But importantly, I can change. And living right matters to me perhaps more than anything else (though I am hugely flawed). So should I change here?I go through these 1 by 1 because I think this is actually hard and I don't know the answers. Feel free to nitpick.What is Loyalty (in this article)?I think loyalty is the sense that do right by those who have followed the rules. It's a coordination tool. "You stuck by the rules, so you'll be treated well", "you are safe within these bounds, you don't need to fear".I want to be loyal - for people to think "there goes Nathan, when I abide by community norms, he won't treat me badly".The notion of safety implies a notion of risk. I am okay with that. Sometimes a little fear and ambiguity is good - I'm okay with a little fear around flirting at EAGs because that reduces the amount of it, I'm okay with ambiguity in how to work in a biorisk lab. There isn't always a clear path to "doing the right thing" and if, in hindsight I didn't do it, I don't want your loyalty. But I want our safety, uncertainty, disaster circles to be well calibrated.Some might say "loyalty isn't good" - that we should seek to treat those in EA exactly the same as those outside. For me this equivaltent to saying our circles should be well calibrated - if someone turns out to have broken the norms we care about then I already don't feel a need to be loyal to them.But to me, a sense of ingroup loyalty feel inevitable. I just like you more and differently than those ousisde EA. It feels naïve to say otherwise. Much like "you aren't in traffic, you are trafffic", "I don't just feel feelings, to some extent I am feelings"So let's cut to the chase.The One With The EmailNick Bostrom sent an awful email. He wrote an apology a while back, but he wrote another to avoid a scandal (whoops). Twitter at least did not think he was sorry. CEA wrote a short condemnation. There was a lot of forum upset. Hopefully we can agree on this.Let's look at this through some different frames.The ...]]>
Nathan Young https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 23:07 None full 4954
i6btyefRRX23yCpnP_NL_EA_EA EA - What AI companies can do today to help with the most important century by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What AI companies can do today to help with the most important century, published by Holden Karnofsky on February 20, 2023 on The Effective Altruism Forum.I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread and how to help via full-time work.This piece is about what major AI companies can do (and not do) to be helpful. By “major AI companies,” I mean the sorts of AI companies that are advancing the state of the art, and/or could play a major role in how very powerful AI systems end up getting used.1This piece could be useful to people who work at those companies, or people who are just curious.Generally, these are not pie-in-the-sky suggestions - I can name2 more than one AI company that has at least made a serious effort at each of the things I discuss below (beyond what it would do if everyone at the company were singularly focused on making a profit).3I’ll cover:Prioritizing alignment research, strong security, and safety standards (all of which I’ve written about previously).Avoiding hype and acceleration, which I think could leave us with less time to prepare for key risks.Preparing for difficult decisions ahead: setting up governance, employee expectations, investor expectations, etc. so that the company is capable of doing non-profit-maximizing things to help avoid catastrophe in the future.Balancing these cautionary measures with conventional/financial success.I’ll also list a few things that some AI companies present as important, but which I’m less excited about: censorship of AI models, open-sourcing AI models, raising awareness of AI with governments and the public. I don’t think all these things are necessarily bad, but I think some are, and I’m skeptical that any are crucial for the risks I’ve focused on.I previously laid out a summary of how I see the major risks of advanced AI, and four key things I think can help (alignment research; strong security; standards and monitoring; successful, careful AI projects). I won’t repeat that summary now, but it might be helpful for orienting you if you don’t remember the rest of this series too well; click here to read it.Some basics: alignment research, strong security, safety standardsFirst off, AI companies can contribute to the “things that can help” I listed above:They can prioritize alignment research (and other technical research, e.g. threat assessment research and misuse research).For example, they can prioritize hiring for safety teams, empowering these teams, encouraging their best flexible researchers to work on safety, aiming for high-quality research that targets crucial challenges, etc.It could also be important for AI companies to find ways to partner with outside safety researchers rather than rely solely on their own teams. As discussed previously, this could be challenging. But I generally expect that AI companies that care a lot about safety research partnerships will find ways to make them work.They can help work toward a standards and monitoring regime. E.g., they can do their own work to come up with standards like "An AI system is dangerous if we observe that it's able to ___, and if we observe this we will take safety and security measures such as ____." They can also consult with others developing safety standards, voluntarily self-regulate beyond what’s required by law, etc.They can prioritize strong security, beyond what normal commercial incentives would call for.It could easily take years to build secure enough systems, processes and technologies for very high-stakes AI.It could be important to hire not only people to handle everyday security needs, but people to experiment with more exotic setups that could be needed later, as the incentives to steal AI get strong...]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/i6btyefRRX23yCpnP/what-ai-companies-can-do-today-to-help-with-the-most Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What AI companies can do today to help with the most important century, published by Holden Karnofsky on February 20, 2023 on The Effective Altruism Forum.I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread and how to help via full-time work.This piece is about what major AI companies can do (and not do) to be helpful. By “major AI companies,” I mean the sorts of AI companies that are advancing the state of the art, and/or could play a major role in how very powerful AI systems end up getting used.1This piece could be useful to people who work at those companies, or people who are just curious.Generally, these are not pie-in-the-sky suggestions - I can name2 more than one AI company that has at least made a serious effort at each of the things I discuss below (beyond what it would do if everyone at the company were singularly focused on making a profit).3I’ll cover:Prioritizing alignment research, strong security, and safety standards (all of which I’ve written about previously).Avoiding hype and acceleration, which I think could leave us with less time to prepare for key risks.Preparing for difficult decisions ahead: setting up governance, employee expectations, investor expectations, etc. so that the company is capable of doing non-profit-maximizing things to help avoid catastrophe in the future.Balancing these cautionary measures with conventional/financial success.I’ll also list a few things that some AI companies present as important, but which I’m less excited about: censorship of AI models, open-sourcing AI models, raising awareness of AI with governments and the public. I don’t think all these things are necessarily bad, but I think some are, and I’m skeptical that any are crucial for the risks I’ve focused on.I previously laid out a summary of how I see the major risks of advanced AI, and four key things I think can help (alignment research; strong security; standards and monitoring; successful, careful AI projects). I won’t repeat that summary now, but it might be helpful for orienting you if you don’t remember the rest of this series too well; click here to read it.Some basics: alignment research, strong security, safety standardsFirst off, AI companies can contribute to the “things that can help” I listed above:They can prioritize alignment research (and other technical research, e.g. threat assessment research and misuse research).For example, they can prioritize hiring for safety teams, empowering these teams, encouraging their best flexible researchers to work on safety, aiming for high-quality research that targets crucial challenges, etc.It could also be important for AI companies to find ways to partner with outside safety researchers rather than rely solely on their own teams. As discussed previously, this could be challenging. But I generally expect that AI companies that care a lot about safety research partnerships will find ways to make them work.They can help work toward a standards and monitoring regime. E.g., they can do their own work to come up with standards like "An AI system is dangerous if we observe that it's able to ___, and if we observe this we will take safety and security measures such as ____." They can also consult with others developing safety standards, voluntarily self-regulate beyond what’s required by law, etc.They can prioritize strong security, beyond what normal commercial incentives would call for.It could easily take years to build secure enough systems, processes and technologies for very high-stakes AI.It could be important to hire not only people to handle everyday security needs, but people to experiment with more exotic setups that could be needed later, as the incentives to steal AI get strong...]]>
Mon, 20 Feb 2023 18:44:02 +0000 EA - What AI companies can do today to help with the most important century by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What AI companies can do today to help with the most important century, published by Holden Karnofsky on February 20, 2023 on The Effective Altruism Forum.I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread and how to help via full-time work.This piece is about what major AI companies can do (and not do) to be helpful. By “major AI companies,” I mean the sorts of AI companies that are advancing the state of the art, and/or could play a major role in how very powerful AI systems end up getting used.1This piece could be useful to people who work at those companies, or people who are just curious.Generally, these are not pie-in-the-sky suggestions - I can name2 more than one AI company that has at least made a serious effort at each of the things I discuss below (beyond what it would do if everyone at the company were singularly focused on making a profit).3I’ll cover:Prioritizing alignment research, strong security, and safety standards (all of which I’ve written about previously).Avoiding hype and acceleration, which I think could leave us with less time to prepare for key risks.Preparing for difficult decisions ahead: setting up governance, employee expectations, investor expectations, etc. so that the company is capable of doing non-profit-maximizing things to help avoid catastrophe in the future.Balancing these cautionary measures with conventional/financial success.I’ll also list a few things that some AI companies present as important, but which I’m less excited about: censorship of AI models, open-sourcing AI models, raising awareness of AI with governments and the public. I don’t think all these things are necessarily bad, but I think some are, and I’m skeptical that any are crucial for the risks I’ve focused on.I previously laid out a summary of how I see the major risks of advanced AI, and four key things I think can help (alignment research; strong security; standards and monitoring; successful, careful AI projects). I won’t repeat that summary now, but it might be helpful for orienting you if you don’t remember the rest of this series too well; click here to read it.Some basics: alignment research, strong security, safety standardsFirst off, AI companies can contribute to the “things that can help” I listed above:They can prioritize alignment research (and other technical research, e.g. threat assessment research and misuse research).For example, they can prioritize hiring for safety teams, empowering these teams, encouraging their best flexible researchers to work on safety, aiming for high-quality research that targets crucial challenges, etc.It could also be important for AI companies to find ways to partner with outside safety researchers rather than rely solely on their own teams. As discussed previously, this could be challenging. But I generally expect that AI companies that care a lot about safety research partnerships will find ways to make them work.They can help work toward a standards and monitoring regime. E.g., they can do their own work to come up with standards like "An AI system is dangerous if we observe that it's able to ___, and if we observe this we will take safety and security measures such as ____." They can also consult with others developing safety standards, voluntarily self-regulate beyond what’s required by law, etc.They can prioritize strong security, beyond what normal commercial incentives would call for.It could easily take years to build secure enough systems, processes and technologies for very high-stakes AI.It could be important to hire not only people to handle everyday security needs, but people to experiment with more exotic setups that could be needed later, as the incentives to steal AI get strong...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What AI companies can do today to help with the most important century, published by Holden Karnofsky on February 20, 2023 on The Effective Altruism Forum.I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread and how to help via full-time work.This piece is about what major AI companies can do (and not do) to be helpful. By “major AI companies,” I mean the sorts of AI companies that are advancing the state of the art, and/or could play a major role in how very powerful AI systems end up getting used.1This piece could be useful to people who work at those companies, or people who are just curious.Generally, these are not pie-in-the-sky suggestions - I can name2 more than one AI company that has at least made a serious effort at each of the things I discuss below (beyond what it would do if everyone at the company were singularly focused on making a profit).3I’ll cover:Prioritizing alignment research, strong security, and safety standards (all of which I’ve written about previously).Avoiding hype and acceleration, which I think could leave us with less time to prepare for key risks.Preparing for difficult decisions ahead: setting up governance, employee expectations, investor expectations, etc. so that the company is capable of doing non-profit-maximizing things to help avoid catastrophe in the future.Balancing these cautionary measures with conventional/financial success.I’ll also list a few things that some AI companies present as important, but which I’m less excited about: censorship of AI models, open-sourcing AI models, raising awareness of AI with governments and the public. I don’t think all these things are necessarily bad, but I think some are, and I’m skeptical that any are crucial for the risks I’ve focused on.I previously laid out a summary of how I see the major risks of advanced AI, and four key things I think can help (alignment research; strong security; standards and monitoring; successful, careful AI projects). I won’t repeat that summary now, but it might be helpful for orienting you if you don’t remember the rest of this series too well; click here to read it.Some basics: alignment research, strong security, safety standardsFirst off, AI companies can contribute to the “things that can help” I listed above:They can prioritize alignment research (and other technical research, e.g. threat assessment research and misuse research).For example, they can prioritize hiring for safety teams, empowering these teams, encouraging their best flexible researchers to work on safety, aiming for high-quality research that targets crucial challenges, etc.It could also be important for AI companies to find ways to partner with outside safety researchers rather than rely solely on their own teams. As discussed previously, this could be challenging. But I generally expect that AI companies that care a lot about safety research partnerships will find ways to make them work.They can help work toward a standards and monitoring regime. E.g., they can do their own work to come up with standards like "An AI system is dangerous if we observe that it's able to ___, and if we observe this we will take safety and security measures such as ____." They can also consult with others developing safety standards, voluntarily self-regulate beyond what’s required by law, etc.They can prioritize strong security, beyond what normal commercial incentives would call for.It could easily take years to build secure enough systems, processes and technologies for very high-stakes AI.It could be important to hire not only people to handle everyday security needs, but people to experiment with more exotic setups that could be needed later, as the incentives to steal AI get strong...]]>
Holden Karnofsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 17:09 None full 4948
9JCkkjKMNL4Hmg4qP_NL_EA_EA EA - EV UK board statement on Owen's resignation by EV UK Board Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EV UK board statement on Owen's resignation, published by EV UK Board on February 20, 2023 on The Effective Altruism Forum.In a recent TIME Magazine article, a claim of misconduct was made about an “influential figure in EA”:A third [woman] described an unsettling experience with an influential figure in EA whose role included picking out promising students and funneling them towards highly coveted jobs. After that leader arranged for her to be flown to the U.K. for a job interview, she recalls being surprised to discover that she was expected to stay in his home, not a hotel. When she arrived, she says, “he told me he needed to masturbate before seeing me.”Shortly after the article came out, Julia Wise (CEA’s community liaison) informed the EV UK board that this concerned behaviour of Owen Cotton-Barratt; the incident occurred more than 5 years ago and was reported to her in 2021. (Owen became a board member in 2020.)Following this, on February 11th, Owen voluntarily resigned from the board. This included stepping down from his role with Wytham Abbey; he is also no longer helping organise The Summit on Existential Security.Though Owen’s account of the incident differs in scope and emphasis from the version expressed in the TIME article, he still believes that he made significant mistakes, and also notes that there have been other cases where he regretted his behaviour.It's very important to us that EV and the wider EA community strive to provide safe and respectful environments, and that we have reliable mechanisms for investigating and addressing claims of misconduct in the EA community. So, in order to better understand what happened, we are commissioning an external investigation by an independent law firm into Owen’s behaviour and the Community Health team’s response.This post is jointly from the Board of EV UK: Claire Zabel, Nick Beckstead, Tasha McCauley and Will MacAskill.The disclosure occurred as follows: shortly after the article came out, Owen and Julia agreed that Julia would work out whether Owen's identity should be disclosed to other people in EV UK and EV US; Julia determined that it should be shared with the boards.Julia writes about her response at the time here.See comment here from Chana Messinger on behalf of the Community Health team.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
EV UK Board https://forum.effectivealtruism.org/posts/9JCkkjKMNL4Hmg4qP/ev-uk-board-statement-on-owen-s-resignation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EV UK board statement on Owen's resignation, published by EV UK Board on February 20, 2023 on The Effective Altruism Forum.In a recent TIME Magazine article, a claim of misconduct was made about an “influential figure in EA”:A third [woman] described an unsettling experience with an influential figure in EA whose role included picking out promising students and funneling them towards highly coveted jobs. After that leader arranged for her to be flown to the U.K. for a job interview, she recalls being surprised to discover that she was expected to stay in his home, not a hotel. When she arrived, she says, “he told me he needed to masturbate before seeing me.”Shortly after the article came out, Julia Wise (CEA’s community liaison) informed the EV UK board that this concerned behaviour of Owen Cotton-Barratt; the incident occurred more than 5 years ago and was reported to her in 2021. (Owen became a board member in 2020.)Following this, on February 11th, Owen voluntarily resigned from the board. This included stepping down from his role with Wytham Abbey; he is also no longer helping organise The Summit on Existential Security.Though Owen’s account of the incident differs in scope and emphasis from the version expressed in the TIME article, he still believes that he made significant mistakes, and also notes that there have been other cases where he regretted his behaviour.It's very important to us that EV and the wider EA community strive to provide safe and respectful environments, and that we have reliable mechanisms for investigating and addressing claims of misconduct in the EA community. So, in order to better understand what happened, we are commissioning an external investigation by an independent law firm into Owen’s behaviour and the Community Health team’s response.This post is jointly from the Board of EV UK: Claire Zabel, Nick Beckstead, Tasha McCauley and Will MacAskill.The disclosure occurred as follows: shortly after the article came out, Owen and Julia agreed that Julia would work out whether Owen's identity should be disclosed to other people in EV UK and EV US; Julia determined that it should be shared with the boards.Julia writes about her response at the time here.See comment here from Chana Messinger on behalf of the Community Health team.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 20 Feb 2023 18:02:44 +0000 EA - EV UK board statement on Owen's resignation by EV UK Board Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EV UK board statement on Owen's resignation, published by EV UK Board on February 20, 2023 on The Effective Altruism Forum.In a recent TIME Magazine article, a claim of misconduct was made about an “influential figure in EA”:A third [woman] described an unsettling experience with an influential figure in EA whose role included picking out promising students and funneling them towards highly coveted jobs. After that leader arranged for her to be flown to the U.K. for a job interview, she recalls being surprised to discover that she was expected to stay in his home, not a hotel. When she arrived, she says, “he told me he needed to masturbate before seeing me.”Shortly after the article came out, Julia Wise (CEA’s community liaison) informed the EV UK board that this concerned behaviour of Owen Cotton-Barratt; the incident occurred more than 5 years ago and was reported to her in 2021. (Owen became a board member in 2020.)Following this, on February 11th, Owen voluntarily resigned from the board. This included stepping down from his role with Wytham Abbey; he is also no longer helping organise The Summit on Existential Security.Though Owen’s account of the incident differs in scope and emphasis from the version expressed in the TIME article, he still believes that he made significant mistakes, and also notes that there have been other cases where he regretted his behaviour.It's very important to us that EV and the wider EA community strive to provide safe and respectful environments, and that we have reliable mechanisms for investigating and addressing claims of misconduct in the EA community. So, in order to better understand what happened, we are commissioning an external investigation by an independent law firm into Owen’s behaviour and the Community Health team’s response.This post is jointly from the Board of EV UK: Claire Zabel, Nick Beckstead, Tasha McCauley and Will MacAskill.The disclosure occurred as follows: shortly after the article came out, Owen and Julia agreed that Julia would work out whether Owen's identity should be disclosed to other people in EV UK and EV US; Julia determined that it should be shared with the boards.Julia writes about her response at the time here.See comment here from Chana Messinger on behalf of the Community Health team.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EV UK board statement on Owen's resignation, published by EV UK Board on February 20, 2023 on The Effective Altruism Forum.In a recent TIME Magazine article, a claim of misconduct was made about an “influential figure in EA”:A third [woman] described an unsettling experience with an influential figure in EA whose role included picking out promising students and funneling them towards highly coveted jobs. After that leader arranged for her to be flown to the U.K. for a job interview, she recalls being surprised to discover that she was expected to stay in his home, not a hotel. When she arrived, she says, “he told me he needed to masturbate before seeing me.”Shortly after the article came out, Julia Wise (CEA’s community liaison) informed the EV UK board that this concerned behaviour of Owen Cotton-Barratt; the incident occurred more than 5 years ago and was reported to her in 2021. (Owen became a board member in 2020.)Following this, on February 11th, Owen voluntarily resigned from the board. This included stepping down from his role with Wytham Abbey; he is also no longer helping organise The Summit on Existential Security.Though Owen’s account of the incident differs in scope and emphasis from the version expressed in the TIME article, he still believes that he made significant mistakes, and also notes that there have been other cases where he regretted his behaviour.It's very important to us that EV and the wider EA community strive to provide safe and respectful environments, and that we have reliable mechanisms for investigating and addressing claims of misconduct in the EA community. So, in order to better understand what happened, we are commissioning an external investigation by an independent law firm into Owen’s behaviour and the Community Health team’s response.This post is jointly from the Board of EV UK: Claire Zabel, Nick Beckstead, Tasha McCauley and Will MacAskill.The disclosure occurred as follows: shortly after the article came out, Owen and Julia agreed that Julia would work out whether Owen's identity should be disclosed to other people in EV UK and EV US; Julia determined that it should be shared with the boards.Julia writes about her response at the time here.See comment here from Chana Messinger on behalf of the Community Health team.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
EV UK Board https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:20 None full 4946
QMee23Evryqzthcvn_NL_EA_EA EA - A statement and an apology by Owen Cotton-Barratt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A statement and an apology, published by Owen Cotton-Barratt on February 20, 2023 on The Effective Altruism Forum.Since the Time article on sexual harassment came out, people have been asking for information about one paragraph of it, about an “influential figure in EA”. I wanted to respond to that.This is talking about me, more than five years ago. I think I made significant mistakes; I regret them a lot; and I’m sorry.ContextI think the actual mistakes I made look different from what many readers may take away from the article, so I first wanted to provide a bit more context (some of this is straightforwardly factual; other parts should be understood as my interpretation):We had what I perceived as a preexisting friendship where we were experimenting with being unusually direct and honest (/“edgy”)Including about sexual mattersThere was what would commonly be regarded as oversharing from both sides (this wasn’t the first time I’d mentioned masturbation)Our friendship continued in an active way for several months afterwardsI should however note that:We had met via EA and spent a good fraction of conversation time talking about EA-relevant topicsI was older and more central in the EA communityOn other occasions, including early in our friendship, we had some professional interactions, and I wasn’t clear about how I was handling the personal/professional boundaryI was employed as a researcher at that timeMy role didn’t develop to connecting people with different positions until later, and this wasn’t part of my self-conception at the time(However it makes sense to me that this was her perception)I was not affiliated with the org she was interviewing atI’d suggested her as a candidate earlier in the application process, but was not part of their decision-making processOn the other hand I think that a lot of what was problematic about my behaviour with respect to this person was not about this incident in particular, but the broad dynamic where:I in fact had significant amounts of powerThis was not very salient to me but very salient to herShe consequently felt pressure to match my vibee.g. in an earlier draft of this post, before fact-checking it with her, I said that we talked about “feelings of mutual attraction”This was not her experienceI drafted it like that because we’d had what I’d interpreted as conversations where this was stated explicitly(I think this is just another central example of the point I’m making in this set of bullets)Similarly at some point she volunteered to me that she was enjoying the dynamic between us (but I probably interpreted this much more broadly than she intended)She was in a structural position where it was (I now believe) unreasonable to expect honesty about her experienceAs the person with power it was on me to notice and head off these dynamics, and I failed to do that(Sorry, I know that's all pretty light on detail, but I don't want to risk accidentally de-anonymising the other person. I want to stress that I’m not claiming she provided any inaccurate information to the journalist who wrote the story; just that I think the extra context may be helpful for people seeking to evaluate or understand my conduct.)My mistakesIn any case, I think my actions were poorly judged and fell well short of the high standards I would like to live up to, and that I think we should ask from people in positions of leadership. Afterwards, I felt vaguely like the whole friendship wasn’t well done and I wished I had approached things differently. Then when I found out that I’d made the person feel uncomfortable(/disempowered/pressured), I was horrified (not putting pressure on people is something like a core value of mine). I have apologized to the person in question, but I also feel like I’ve let the whole community down,...]]>
Owen Cotton-Barratt https://forum.effectivealtruism.org/posts/QMee23Evryqzthcvn/a-statement-and-an-apology Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A statement and an apology, published by Owen Cotton-Barratt on February 20, 2023 on The Effective Altruism Forum.Since the Time article on sexual harassment came out, people have been asking for information about one paragraph of it, about an “influential figure in EA”. I wanted to respond to that.This is talking about me, more than five years ago. I think I made significant mistakes; I regret them a lot; and I’m sorry.ContextI think the actual mistakes I made look different from what many readers may take away from the article, so I first wanted to provide a bit more context (some of this is straightforwardly factual; other parts should be understood as my interpretation):We had what I perceived as a preexisting friendship where we were experimenting with being unusually direct and honest (/“edgy”)Including about sexual mattersThere was what would commonly be regarded as oversharing from both sides (this wasn’t the first time I’d mentioned masturbation)Our friendship continued in an active way for several months afterwardsI should however note that:We had met via EA and spent a good fraction of conversation time talking about EA-relevant topicsI was older and more central in the EA communityOn other occasions, including early in our friendship, we had some professional interactions, and I wasn’t clear about how I was handling the personal/professional boundaryI was employed as a researcher at that timeMy role didn’t develop to connecting people with different positions until later, and this wasn’t part of my self-conception at the time(However it makes sense to me that this was her perception)I was not affiliated with the org she was interviewing atI’d suggested her as a candidate earlier in the application process, but was not part of their decision-making processOn the other hand I think that a lot of what was problematic about my behaviour with respect to this person was not about this incident in particular, but the broad dynamic where:I in fact had significant amounts of powerThis was not very salient to me but very salient to herShe consequently felt pressure to match my vibee.g. in an earlier draft of this post, before fact-checking it with her, I said that we talked about “feelings of mutual attraction”This was not her experienceI drafted it like that because we’d had what I’d interpreted as conversations where this was stated explicitly(I think this is just another central example of the point I’m making in this set of bullets)Similarly at some point she volunteered to me that she was enjoying the dynamic between us (but I probably interpreted this much more broadly than she intended)She was in a structural position where it was (I now believe) unreasonable to expect honesty about her experienceAs the person with power it was on me to notice and head off these dynamics, and I failed to do that(Sorry, I know that's all pretty light on detail, but I don't want to risk accidentally de-anonymising the other person. I want to stress that I’m not claiming she provided any inaccurate information to the journalist who wrote the story; just that I think the extra context may be helpful for people seeking to evaluate or understand my conduct.)My mistakesIn any case, I think my actions were poorly judged and fell well short of the high standards I would like to live up to, and that I think we should ask from people in positions of leadership. Afterwards, I felt vaguely like the whole friendship wasn’t well done and I wished I had approached things differently. Then when I found out that I’d made the person feel uncomfortable(/disempowered/pressured), I was horrified (not putting pressure on people is something like a core value of mine). I have apologized to the person in question, but I also feel like I’ve let the whole community down,...]]>
Mon, 20 Feb 2023 17:48:28 +0000 EA - A statement and an apology by Owen Cotton-Barratt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A statement and an apology, published by Owen Cotton-Barratt on February 20, 2023 on The Effective Altruism Forum.Since the Time article on sexual harassment came out, people have been asking for information about one paragraph of it, about an “influential figure in EA”. I wanted to respond to that.This is talking about me, more than five years ago. I think I made significant mistakes; I regret them a lot; and I’m sorry.ContextI think the actual mistakes I made look different from what many readers may take away from the article, so I first wanted to provide a bit more context (some of this is straightforwardly factual; other parts should be understood as my interpretation):We had what I perceived as a preexisting friendship where we were experimenting with being unusually direct and honest (/“edgy”)Including about sexual mattersThere was what would commonly be regarded as oversharing from both sides (this wasn’t the first time I’d mentioned masturbation)Our friendship continued in an active way for several months afterwardsI should however note that:We had met via EA and spent a good fraction of conversation time talking about EA-relevant topicsI was older and more central in the EA communityOn other occasions, including early in our friendship, we had some professional interactions, and I wasn’t clear about how I was handling the personal/professional boundaryI was employed as a researcher at that timeMy role didn’t develop to connecting people with different positions until later, and this wasn’t part of my self-conception at the time(However it makes sense to me that this was her perception)I was not affiliated with the org she was interviewing atI’d suggested her as a candidate earlier in the application process, but was not part of their decision-making processOn the other hand I think that a lot of what was problematic about my behaviour with respect to this person was not about this incident in particular, but the broad dynamic where:I in fact had significant amounts of powerThis was not very salient to me but very salient to herShe consequently felt pressure to match my vibee.g. in an earlier draft of this post, before fact-checking it with her, I said that we talked about “feelings of mutual attraction”This was not her experienceI drafted it like that because we’d had what I’d interpreted as conversations where this was stated explicitly(I think this is just another central example of the point I’m making in this set of bullets)Similarly at some point she volunteered to me that she was enjoying the dynamic between us (but I probably interpreted this much more broadly than she intended)She was in a structural position where it was (I now believe) unreasonable to expect honesty about her experienceAs the person with power it was on me to notice and head off these dynamics, and I failed to do that(Sorry, I know that's all pretty light on detail, but I don't want to risk accidentally de-anonymising the other person. I want to stress that I’m not claiming she provided any inaccurate information to the journalist who wrote the story; just that I think the extra context may be helpful for people seeking to evaluate or understand my conduct.)My mistakesIn any case, I think my actions were poorly judged and fell well short of the high standards I would like to live up to, and that I think we should ask from people in positions of leadership. Afterwards, I felt vaguely like the whole friendship wasn’t well done and I wished I had approached things differently. Then when I found out that I’d made the person feel uncomfortable(/disempowered/pressured), I was horrified (not putting pressure on people is something like a core value of mine). I have apologized to the person in question, but I also feel like I’ve let the whole community down,...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A statement and an apology, published by Owen Cotton-Barratt on February 20, 2023 on The Effective Altruism Forum.Since the Time article on sexual harassment came out, people have been asking for information about one paragraph of it, about an “influential figure in EA”. I wanted to respond to that.This is talking about me, more than five years ago. I think I made significant mistakes; I regret them a lot; and I’m sorry.ContextI think the actual mistakes I made look different from what many readers may take away from the article, so I first wanted to provide a bit more context (some of this is straightforwardly factual; other parts should be understood as my interpretation):We had what I perceived as a preexisting friendship where we were experimenting with being unusually direct and honest (/“edgy”)Including about sexual mattersThere was what would commonly be regarded as oversharing from both sides (this wasn’t the first time I’d mentioned masturbation)Our friendship continued in an active way for several months afterwardsI should however note that:We had met via EA and spent a good fraction of conversation time talking about EA-relevant topicsI was older and more central in the EA communityOn other occasions, including early in our friendship, we had some professional interactions, and I wasn’t clear about how I was handling the personal/professional boundaryI was employed as a researcher at that timeMy role didn’t develop to connecting people with different positions until later, and this wasn’t part of my self-conception at the time(However it makes sense to me that this was her perception)I was not affiliated with the org she was interviewing atI’d suggested her as a candidate earlier in the application process, but was not part of their decision-making processOn the other hand I think that a lot of what was problematic about my behaviour with respect to this person was not about this incident in particular, but the broad dynamic where:I in fact had significant amounts of powerThis was not very salient to me but very salient to herShe consequently felt pressure to match my vibee.g. in an earlier draft of this post, before fact-checking it with her, I said that we talked about “feelings of mutual attraction”This was not her experienceI drafted it like that because we’d had what I’d interpreted as conversations where this was stated explicitly(I think this is just another central example of the point I’m making in this set of bullets)Similarly at some point she volunteered to me that she was enjoying the dynamic between us (but I probably interpreted this much more broadly than she intended)She was in a structural position where it was (I now believe) unreasonable to expect honesty about her experienceAs the person with power it was on me to notice and head off these dynamics, and I failed to do that(Sorry, I know that's all pretty light on detail, but I don't want to risk accidentally de-anonymising the other person. I want to stress that I’m not claiming she provided any inaccurate information to the journalist who wrote the story; just that I think the extra context may be helpful for people seeking to evaluate or understand my conduct.)My mistakesIn any case, I think my actions were poorly judged and fell well short of the high standards I would like to live up to, and that I think we should ask from people in positions of leadership. Afterwards, I felt vaguely like the whole friendship wasn’t well done and I wished I had approached things differently. Then when I found out that I’d made the person feel uncomfortable(/disempowered/pressured), I was horrified (not putting pressure on people is something like a core value of mine). I have apologized to the person in question, but I also feel like I’ve let the whole community down,...]]>
Owen Cotton-Barratt https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:32 None full 4949
XDwnGK7x4EjkaHbje_NL_EA_EA EA - The Estimation Game: a monthly Fermi estimation web app by Sage Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Estimation Game: a monthly Fermi estimation web app, published by Sage on February 20, 2023 on The Effective Altruism Forum.Announcing the first monthly Estimation Game!Answer 10 Fermi estimation questions, like “How many piano tuners are there in New York?”Train your estimation skills and get more comfortable putting numbers on thingsTeam up with friends, or play soloSee how your scores compare on the global leaderboardThe game is around 10-40 minutes, depending on how much you want to discuss and reflect on your estimatesYou can play The Estimation Game on Quantified Intuitions, solo, or with friends. The February game is live for one week (until Sunday 26th).We’ll release a new Estimation Game each month. Lots of people tell us they’d like to get more practice doing BOTECs and estimating, but they don’t get around to it. So we’ve designed The Estimation Game to give you the impetus to do a bit of estimation each month in a fun context.You might use this as a sandbox to experiment with different methods of estimating. You could decompose the question into easier-to-estimate quantities - make estimates in your head, discuss with friends, use a bit of paper, or even build a scrappy Guesstimate or Squiggle model.We’d appreciate your feedback in the comments, in our Discord, or at adam@sage-future.org. We’d love to have suggestions for questions for future rounds of The Estimation Game - this will help us keep the game varied and fun in future months!Info for organisersIf you run a community group or meetup, we’ve designed the Estimation Game to be super easy to run as an off-the-shelf event. Check out our info for organisers page for resources and FAQs.If you’re running a large-scale event and want to run a custom Estimation Game at it, let us know and we can help you set it up. We’re planning to pilot custom Estimation Games at EAGx Nordics (and maybe EAGx Cambridge).About Quantified IntuitionsWe built Quantified Intuitions as an epistemics training site. See our previous post for more on our motivation. Alongside the monthly Estimation Game, we’ve made two permanent tools:Pastcasting: Predict past events to rapidly practise forecastingCalibration: Answer EA-themed trivia questions to calibrate your uncertaintyThanks to our test groups in London, to community builders who gave feedback, in particular Robert Harling, Adash Herrenschmidt-Moller, and Sam Robinson, and to Chana Messinger at CEA for the idea and feedback throughout.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sage https://forum.effectivealtruism.org/posts/XDwnGK7x4EjkaHbje/the-estimation-game-a-monthly-fermi-estimation-web-app Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Estimation Game: a monthly Fermi estimation web app, published by Sage on February 20, 2023 on The Effective Altruism Forum.Announcing the first monthly Estimation Game!Answer 10 Fermi estimation questions, like “How many piano tuners are there in New York?”Train your estimation skills and get more comfortable putting numbers on thingsTeam up with friends, or play soloSee how your scores compare on the global leaderboardThe game is around 10-40 minutes, depending on how much you want to discuss and reflect on your estimatesYou can play The Estimation Game on Quantified Intuitions, solo, or with friends. The February game is live for one week (until Sunday 26th).We’ll release a new Estimation Game each month. Lots of people tell us they’d like to get more practice doing BOTECs and estimating, but they don’t get around to it. So we’ve designed The Estimation Game to give you the impetus to do a bit of estimation each month in a fun context.You might use this as a sandbox to experiment with different methods of estimating. You could decompose the question into easier-to-estimate quantities - make estimates in your head, discuss with friends, use a bit of paper, or even build a scrappy Guesstimate or Squiggle model.We’d appreciate your feedback in the comments, in our Discord, or at adam@sage-future.org. We’d love to have suggestions for questions for future rounds of The Estimation Game - this will help us keep the game varied and fun in future months!Info for organisersIf you run a community group or meetup, we’ve designed the Estimation Game to be super easy to run as an off-the-shelf event. Check out our info for organisers page for resources and FAQs.If you’re running a large-scale event and want to run a custom Estimation Game at it, let us know and we can help you set it up. We’re planning to pilot custom Estimation Games at EAGx Nordics (and maybe EAGx Cambridge).About Quantified IntuitionsWe built Quantified Intuitions as an epistemics training site. See our previous post for more on our motivation. Alongside the monthly Estimation Game, we’ve made two permanent tools:Pastcasting: Predict past events to rapidly practise forecastingCalibration: Answer EA-themed trivia questions to calibrate your uncertaintyThanks to our test groups in London, to community builders who gave feedback, in particular Robert Harling, Adash Herrenschmidt-Moller, and Sam Robinson, and to Chana Messinger at CEA for the idea and feedback throughout.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 20 Feb 2023 16:58:03 +0000 EA - The Estimation Game: a monthly Fermi estimation web app by Sage Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Estimation Game: a monthly Fermi estimation web app, published by Sage on February 20, 2023 on The Effective Altruism Forum.Announcing the first monthly Estimation Game!Answer 10 Fermi estimation questions, like “How many piano tuners are there in New York?”Train your estimation skills and get more comfortable putting numbers on thingsTeam up with friends, or play soloSee how your scores compare on the global leaderboardThe game is around 10-40 minutes, depending on how much you want to discuss and reflect on your estimatesYou can play The Estimation Game on Quantified Intuitions, solo, or with friends. The February game is live for one week (until Sunday 26th).We’ll release a new Estimation Game each month. Lots of people tell us they’d like to get more practice doing BOTECs and estimating, but they don’t get around to it. So we’ve designed The Estimation Game to give you the impetus to do a bit of estimation each month in a fun context.You might use this as a sandbox to experiment with different methods of estimating. You could decompose the question into easier-to-estimate quantities - make estimates in your head, discuss with friends, use a bit of paper, or even build a scrappy Guesstimate or Squiggle model.We’d appreciate your feedback in the comments, in our Discord, or at adam@sage-future.org. We’d love to have suggestions for questions for future rounds of The Estimation Game - this will help us keep the game varied and fun in future months!Info for organisersIf you run a community group or meetup, we’ve designed the Estimation Game to be super easy to run as an off-the-shelf event. Check out our info for organisers page for resources and FAQs.If you’re running a large-scale event and want to run a custom Estimation Game at it, let us know and we can help you set it up. We’re planning to pilot custom Estimation Games at EAGx Nordics (and maybe EAGx Cambridge).About Quantified IntuitionsWe built Quantified Intuitions as an epistemics training site. See our previous post for more on our motivation. Alongside the monthly Estimation Game, we’ve made two permanent tools:Pastcasting: Predict past events to rapidly practise forecastingCalibration: Answer EA-themed trivia questions to calibrate your uncertaintyThanks to our test groups in London, to community builders who gave feedback, in particular Robert Harling, Adash Herrenschmidt-Moller, and Sam Robinson, and to Chana Messinger at CEA for the idea and feedback throughout.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Estimation Game: a monthly Fermi estimation web app, published by Sage on February 20, 2023 on The Effective Altruism Forum.Announcing the first monthly Estimation Game!Answer 10 Fermi estimation questions, like “How many piano tuners are there in New York?”Train your estimation skills and get more comfortable putting numbers on thingsTeam up with friends, or play soloSee how your scores compare on the global leaderboardThe game is around 10-40 minutes, depending on how much you want to discuss and reflect on your estimatesYou can play The Estimation Game on Quantified Intuitions, solo, or with friends. The February game is live for one week (until Sunday 26th).We’ll release a new Estimation Game each month. Lots of people tell us they’d like to get more practice doing BOTECs and estimating, but they don’t get around to it. So we’ve designed The Estimation Game to give you the impetus to do a bit of estimation each month in a fun context.You might use this as a sandbox to experiment with different methods of estimating. You could decompose the question into easier-to-estimate quantities - make estimates in your head, discuss with friends, use a bit of paper, or even build a scrappy Guesstimate or Squiggle model.We’d appreciate your feedback in the comments, in our Discord, or at adam@sage-future.org. We’d love to have suggestions for questions for future rounds of The Estimation Game - this will help us keep the game varied and fun in future months!Info for organisersIf you run a community group or meetup, we’ve designed the Estimation Game to be super easy to run as an off-the-shelf event. Check out our info for organisers page for resources and FAQs.If you’re running a large-scale event and want to run a custom Estimation Game at it, let us know and we can help you set it up. We’re planning to pilot custom Estimation Games at EAGx Nordics (and maybe EAGx Cambridge).About Quantified IntuitionsWe built Quantified Intuitions as an epistemics training site. See our previous post for more on our motivation. Alongside the monthly Estimation Game, we’ve made two permanent tools:Pastcasting: Predict past events to rapidly practise forecastingCalibration: Answer EA-themed trivia questions to calibrate your uncertaintyThanks to our test groups in London, to community builders who gave feedback, in particular Robert Harling, Adash Herrenschmidt-Moller, and Sam Robinson, and to Chana Messinger at CEA for the idea and feedback throughout.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sage https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:35 None full 4953
qELZknyx8eBg7HskH_NL_EA_EA EA - Metaculus Introduces New 'Conditional Pair' Forecast Questions for Making Conditional Predictions by christian Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaculus Introduces New 'Conditional Pair' Forecast Questions for Making Conditional Predictions, published by christian on February 20, 2023 on The Effective Altruism Forum.Predict P(A|B) & P(A|B') With New Conditional PairsEvents don't take place in isolation. Often we want to know the likelihood of an event occurring if another one does.Metaculus has launched conditional pairs, a new kind of forecast question that enables forecasters to predict the probability of an event given the outcome of another, and provides forecast consumers with greater clarity on the relationships between events.This post explains the motivation behind conditional pairs, how to interpret them, and how to start making conditional forecasts.(Check out this video explainer for more on conditional pairs and how to forecast with them.)How Do Conditional Pairs Work?A conditional pair poses two conditional questions (or "conditionals"):If Question B resolves Yes how will Question A resolve?If Question B resolves No how will Question A resolve?For example, a forecaster may want to predict on a question such as this:Will Alphabet’s Market Capitalization Fall Below $1 Trillion by 2025?The forecast depends—on many things. But consider one factor: Bing's share of the search engine market.And so if one knew that Bing's search engine market would be least 5% in March of 2024, they could make a more informed forecast. They might assign greater likelihood to Alphabet's decline.Our conditional pair is then:If Bing's market share is more than 5% in March, 2024, will Alphabet's market capitalization fall below $1 trillion?If Bing's market share is not more than 5% in March, 2024, will Alphabet's market capitalization fall below $1 trillion?(Start forecasting on this conditional pair here.)Two forecasters could have the same forecasts for Bing’s market share and Alphabet’s market cap while having very different mental models of their relationship. Conditional pairs help make these sometimes implicit differences explicit so they can be discussed and scored.Start Forecasting on Conditional PairsHere are some newly created conditional pairs to start forecasting on:If Human-Machine Intelligence Parity Is Reached by 2040, Will the US Impose Compute Capacity Restrictions Before 2050?If Chinese GDP Overtakes US GDP by 2030, Will the US Go to War With China by 2035?If There Is a Bilateral Ceasefire in Ukraine by 2024, Will There Be Large-Scale Conflict With Russia by 2030?If There Is a US Debt Default by 2024, Will Democrats Win the 2024 US Presidential Election?Parent & Child QuestionsConditional pairs like the above are composed of a "Parent Question" and a "Child Question."Parent: Bing has 5% Market Share by March, 2024Child: Alphabet's Market Cap is Below $1 Trillion by 2025Forecasts are made for the Child question conditional on the outcome of the Parent. Here, see that:In a world where Bing reaches 5% market share, the Metaculus community predicts Alphabet's decline is 56% likely.In a world where Bing does not reach 5% market share, the Metaculus community predicts Alphabet's decline is 44% likely.Conditional pairs are a step toward Metaculus's larger goal of empowering forecasters and forecast consumers to quantify and understand the impact of particular events and policy decisions. Feedback is appreciated!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
christian https://forum.effectivealtruism.org/posts/qELZknyx8eBg7HskH/metaculus-introduces-new-conditional-pair-forecast-questions Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaculus Introduces New 'Conditional Pair' Forecast Questions for Making Conditional Predictions, published by christian on February 20, 2023 on The Effective Altruism Forum.Predict P(A|B) & P(A|B') With New Conditional PairsEvents don't take place in isolation. Often we want to know the likelihood of an event occurring if another one does.Metaculus has launched conditional pairs, a new kind of forecast question that enables forecasters to predict the probability of an event given the outcome of another, and provides forecast consumers with greater clarity on the relationships between events.This post explains the motivation behind conditional pairs, how to interpret them, and how to start making conditional forecasts.(Check out this video explainer for more on conditional pairs and how to forecast with them.)How Do Conditional Pairs Work?A conditional pair poses two conditional questions (or "conditionals"):If Question B resolves Yes how will Question A resolve?If Question B resolves No how will Question A resolve?For example, a forecaster may want to predict on a question such as this:Will Alphabet’s Market Capitalization Fall Below $1 Trillion by 2025?The forecast depends—on many things. But consider one factor: Bing's share of the search engine market.And so if one knew that Bing's search engine market would be least 5% in March of 2024, they could make a more informed forecast. They might assign greater likelihood to Alphabet's decline.Our conditional pair is then:If Bing's market share is more than 5% in March, 2024, will Alphabet's market capitalization fall below $1 trillion?If Bing's market share is not more than 5% in March, 2024, will Alphabet's market capitalization fall below $1 trillion?(Start forecasting on this conditional pair here.)Two forecasters could have the same forecasts for Bing’s market share and Alphabet’s market cap while having very different mental models of their relationship. Conditional pairs help make these sometimes implicit differences explicit so they can be discussed and scored.Start Forecasting on Conditional PairsHere are some newly created conditional pairs to start forecasting on:If Human-Machine Intelligence Parity Is Reached by 2040, Will the US Impose Compute Capacity Restrictions Before 2050?If Chinese GDP Overtakes US GDP by 2030, Will the US Go to War With China by 2035?If There Is a Bilateral Ceasefire in Ukraine by 2024, Will There Be Large-Scale Conflict With Russia by 2030?If There Is a US Debt Default by 2024, Will Democrats Win the 2024 US Presidential Election?Parent & Child QuestionsConditional pairs like the above are composed of a "Parent Question" and a "Child Question."Parent: Bing has 5% Market Share by March, 2024Child: Alphabet's Market Cap is Below $1 Trillion by 2025Forecasts are made for the Child question conditional on the outcome of the Parent. Here, see that:In a world where Bing reaches 5% market share, the Metaculus community predicts Alphabet's decline is 56% likely.In a world where Bing does not reach 5% market share, the Metaculus community predicts Alphabet's decline is 44% likely.Conditional pairs are a step toward Metaculus's larger goal of empowering forecasters and forecast consumers to quantify and understand the impact of particular events and policy decisions. Feedback is appreciated!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 20 Feb 2023 15:59:45 +0000 EA - Metaculus Introduces New 'Conditional Pair' Forecast Questions for Making Conditional Predictions by christian Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaculus Introduces New 'Conditional Pair' Forecast Questions for Making Conditional Predictions, published by christian on February 20, 2023 on The Effective Altruism Forum.Predict P(A|B) & P(A|B') With New Conditional PairsEvents don't take place in isolation. Often we want to know the likelihood of an event occurring if another one does.Metaculus has launched conditional pairs, a new kind of forecast question that enables forecasters to predict the probability of an event given the outcome of another, and provides forecast consumers with greater clarity on the relationships between events.This post explains the motivation behind conditional pairs, how to interpret them, and how to start making conditional forecasts.(Check out this video explainer for more on conditional pairs and how to forecast with them.)How Do Conditional Pairs Work?A conditional pair poses two conditional questions (or "conditionals"):If Question B resolves Yes how will Question A resolve?If Question B resolves No how will Question A resolve?For example, a forecaster may want to predict on a question such as this:Will Alphabet’s Market Capitalization Fall Below $1 Trillion by 2025?The forecast depends—on many things. But consider one factor: Bing's share of the search engine market.And so if one knew that Bing's search engine market would be least 5% in March of 2024, they could make a more informed forecast. They might assign greater likelihood to Alphabet's decline.Our conditional pair is then:If Bing's market share is more than 5% in March, 2024, will Alphabet's market capitalization fall below $1 trillion?If Bing's market share is not more than 5% in March, 2024, will Alphabet's market capitalization fall below $1 trillion?(Start forecasting on this conditional pair here.)Two forecasters could have the same forecasts for Bing’s market share and Alphabet’s market cap while having very different mental models of their relationship. Conditional pairs help make these sometimes implicit differences explicit so they can be discussed and scored.Start Forecasting on Conditional PairsHere are some newly created conditional pairs to start forecasting on:If Human-Machine Intelligence Parity Is Reached by 2040, Will the US Impose Compute Capacity Restrictions Before 2050?If Chinese GDP Overtakes US GDP by 2030, Will the US Go to War With China by 2035?If There Is a Bilateral Ceasefire in Ukraine by 2024, Will There Be Large-Scale Conflict With Russia by 2030?If There Is a US Debt Default by 2024, Will Democrats Win the 2024 US Presidential Election?Parent & Child QuestionsConditional pairs like the above are composed of a "Parent Question" and a "Child Question."Parent: Bing has 5% Market Share by March, 2024Child: Alphabet's Market Cap is Below $1 Trillion by 2025Forecasts are made for the Child question conditional on the outcome of the Parent. Here, see that:In a world where Bing reaches 5% market share, the Metaculus community predicts Alphabet's decline is 56% likely.In a world where Bing does not reach 5% market share, the Metaculus community predicts Alphabet's decline is 44% likely.Conditional pairs are a step toward Metaculus's larger goal of empowering forecasters and forecast consumers to quantify and understand the impact of particular events and policy decisions. Feedback is appreciated!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaculus Introduces New 'Conditional Pair' Forecast Questions for Making Conditional Predictions, published by christian on February 20, 2023 on The Effective Altruism Forum.Predict P(A|B) & P(A|B') With New Conditional PairsEvents don't take place in isolation. Often we want to know the likelihood of an event occurring if another one does.Metaculus has launched conditional pairs, a new kind of forecast question that enables forecasters to predict the probability of an event given the outcome of another, and provides forecast consumers with greater clarity on the relationships between events.This post explains the motivation behind conditional pairs, how to interpret them, and how to start making conditional forecasts.(Check out this video explainer for more on conditional pairs and how to forecast with them.)How Do Conditional Pairs Work?A conditional pair poses two conditional questions (or "conditionals"):If Question B resolves Yes how will Question A resolve?If Question B resolves No how will Question A resolve?For example, a forecaster may want to predict on a question such as this:Will Alphabet’s Market Capitalization Fall Below $1 Trillion by 2025?The forecast depends—on many things. But consider one factor: Bing's share of the search engine market.And so if one knew that Bing's search engine market would be least 5% in March of 2024, they could make a more informed forecast. They might assign greater likelihood to Alphabet's decline.Our conditional pair is then:If Bing's market share is more than 5% in March, 2024, will Alphabet's market capitalization fall below $1 trillion?If Bing's market share is not more than 5% in March, 2024, will Alphabet's market capitalization fall below $1 trillion?(Start forecasting on this conditional pair here.)Two forecasters could have the same forecasts for Bing’s market share and Alphabet’s market cap while having very different mental models of their relationship. Conditional pairs help make these sometimes implicit differences explicit so they can be discussed and scored.Start Forecasting on Conditional PairsHere are some newly created conditional pairs to start forecasting on:If Human-Machine Intelligence Parity Is Reached by 2040, Will the US Impose Compute Capacity Restrictions Before 2050?If Chinese GDP Overtakes US GDP by 2030, Will the US Go to War With China by 2035?If There Is a Bilateral Ceasefire in Ukraine by 2024, Will There Be Large-Scale Conflict With Russia by 2030?If There Is a US Debt Default by 2024, Will Democrats Win the 2024 US Presidential Election?Parent & Child QuestionsConditional pairs like the above are composed of a "Parent Question" and a "Child Question."Parent: Bing has 5% Market Share by March, 2024Child: Alphabet's Market Cap is Below $1 Trillion by 2025Forecasts are made for the Child question conditional on the outcome of the Parent. Here, see that:In a world where Bing reaches 5% market share, the Metaculus community predicts Alphabet's decline is 56% likely.In a world where Bing does not reach 5% market share, the Metaculus community predicts Alphabet's decline is 44% likely.Conditional pairs are a step toward Metaculus's larger goal of empowering forecasters and forecast consumers to quantify and understand the impact of particular events and policy decisions. Feedback is appreciated!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
christian https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:40 None full 4952
7f9eMGRhfEgjbMxsa_NL_EA_EA EA - Immigration reform: a shallow cause exploration by JoelMcGuire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Immigration reform: a shallow cause exploration, published by JoelMcGuire on February 20, 2023 on The Effective Altruism Forum.This shallow investigation was commissioned by Founders Pledge.SummaryThis shallow cause area report explores the impact of immigration on subjective wellbeing (SWB). It was completed in two weeks. In this report, we start by reviewing the literature and modelling the impact of immigration on wellbeing. Then, we conduct back of the envelope calculations (BOTECs) of the cost-effectiveness of various interventions to increase immigration.The effect of immigration has been studied extensively. However, most of the studies we find are correlational and do not provide causal evidence. Additionally, most of the studies use life satisfaction as a measure of SWB, so it’s unclear whether immigration impacts life satisfaction and affective happiness (e.g. positive emotions on a daily basis) differently.Despite these limitations, we attempt to estimate the effect of immigration on wellbeing. We find that immigrating to countries with higher average SWB levels might produce large benefits to wellbeing, but we are very uncertain about the exact size of the effect. According to our model, when people move to a country with higher SWB, they will gain 77% of the SWB gap between the origin and destination country. We assume this benefit will be immediate and permanent, as there is little evidence to model how this benefit evolves over time, and existing evidence doesn’t suggest large deviations from this assumption.There are open questions about the spillover effects of immigration on the immigrant’s household as well as their original and destination communities. Immigrating likely benefits the whole family if they move together, but the impact on household members that stay behind is less clear, as the economic benefits of remittances are countered by the negative effects of separation. On balance, we estimate a small, non-significant benefit for households that stay behind when a member immigrates (+0.01 WELLBY per household member). We did not include spillovers on the origin community due to scarce evidence (only one study) that suggested small, null effects. For destination communities, we estimate that increasing the proportion of immigrants by 1% is associated with a small, non-significant, negative spillover for natives (-0.01 WELLBYs per native), although this is likely moderated by attitudes towards immigrants.We then conducted BOTECs of possible interventions to increase immigration. The most promising is policy advocacy, which we estimate is 11 times more cost-effective than GiveDirectly cash transfers. The other interventions we investigated are 2 to 6 times better than cash transfers. However, all of our BOTECs are speculative and exploratory in nature. These estimates are also limited because we’re unsure how to model the potential for immigration increasing interventions to foster anti-immigrant sentiment in the future. Plus, there might be non-trivial risks that a big push for immigration or other polarising topics by Effective Altruists could burn goodwill that might be used on other issues (e.g., biosecurity). Accordingly, we’re inclined towards treating these as upper-bound estimates and we expect that once these costs are taken into account immigration policy advocacy would no longer be promising.We recommend that future research assesses the costs, chances of success, and risk of backlash for potential policy-based interventions to increase immigration.NotesThis report focuses on the impact of immigration in terms of WELLBYs. One WELLBY is a 1 life satisfaction point change for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of life sati...]]>
JoelMcGuire https://forum.effectivealtruism.org/posts/7f9eMGRhfEgjbMxsa/immigration-reform-a-shallow-cause-exploration Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Immigration reform: a shallow cause exploration, published by JoelMcGuire on February 20, 2023 on The Effective Altruism Forum.This shallow investigation was commissioned by Founders Pledge.SummaryThis shallow cause area report explores the impact of immigration on subjective wellbeing (SWB). It was completed in two weeks. In this report, we start by reviewing the literature and modelling the impact of immigration on wellbeing. Then, we conduct back of the envelope calculations (BOTECs) of the cost-effectiveness of various interventions to increase immigration.The effect of immigration has been studied extensively. However, most of the studies we find are correlational and do not provide causal evidence. Additionally, most of the studies use life satisfaction as a measure of SWB, so it’s unclear whether immigration impacts life satisfaction and affective happiness (e.g. positive emotions on a daily basis) differently.Despite these limitations, we attempt to estimate the effect of immigration on wellbeing. We find that immigrating to countries with higher average SWB levels might produce large benefits to wellbeing, but we are very uncertain about the exact size of the effect. According to our model, when people move to a country with higher SWB, they will gain 77% of the SWB gap between the origin and destination country. We assume this benefit will be immediate and permanent, as there is little evidence to model how this benefit evolves over time, and existing evidence doesn’t suggest large deviations from this assumption.There are open questions about the spillover effects of immigration on the immigrant’s household as well as their original and destination communities. Immigrating likely benefits the whole family if they move together, but the impact on household members that stay behind is less clear, as the economic benefits of remittances are countered by the negative effects of separation. On balance, we estimate a small, non-significant benefit for households that stay behind when a member immigrates (+0.01 WELLBY per household member). We did not include spillovers on the origin community due to scarce evidence (only one study) that suggested small, null effects. For destination communities, we estimate that increasing the proportion of immigrants by 1% is associated with a small, non-significant, negative spillover for natives (-0.01 WELLBYs per native), although this is likely moderated by attitudes towards immigrants.We then conducted BOTECs of possible interventions to increase immigration. The most promising is policy advocacy, which we estimate is 11 times more cost-effective than GiveDirectly cash transfers. The other interventions we investigated are 2 to 6 times better than cash transfers. However, all of our BOTECs are speculative and exploratory in nature. These estimates are also limited because we’re unsure how to model the potential for immigration increasing interventions to foster anti-immigrant sentiment in the future. Plus, there might be non-trivial risks that a big push for immigration or other polarising topics by Effective Altruists could burn goodwill that might be used on other issues (e.g., biosecurity). Accordingly, we’re inclined towards treating these as upper-bound estimates and we expect that once these costs are taken into account immigration policy advocacy would no longer be promising.We recommend that future research assesses the costs, chances of success, and risk of backlash for potential policy-based interventions to increase immigration.NotesThis report focuses on the impact of immigration in terms of WELLBYs. One WELLBY is a 1 life satisfaction point change for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of life sati...]]>
Mon, 20 Feb 2023 15:59:42 +0000 EA - Immigration reform: a shallow cause exploration by JoelMcGuire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Immigration reform: a shallow cause exploration, published by JoelMcGuire on February 20, 2023 on The Effective Altruism Forum.This shallow investigation was commissioned by Founders Pledge.SummaryThis shallow cause area report explores the impact of immigration on subjective wellbeing (SWB). It was completed in two weeks. In this report, we start by reviewing the literature and modelling the impact of immigration on wellbeing. Then, we conduct back of the envelope calculations (BOTECs) of the cost-effectiveness of various interventions to increase immigration.The effect of immigration has been studied extensively. However, most of the studies we find are correlational and do not provide causal evidence. Additionally, most of the studies use life satisfaction as a measure of SWB, so it’s unclear whether immigration impacts life satisfaction and affective happiness (e.g. positive emotions on a daily basis) differently.Despite these limitations, we attempt to estimate the effect of immigration on wellbeing. We find that immigrating to countries with higher average SWB levels might produce large benefits to wellbeing, but we are very uncertain about the exact size of the effect. According to our model, when people move to a country with higher SWB, they will gain 77% of the SWB gap between the origin and destination country. We assume this benefit will be immediate and permanent, as there is little evidence to model how this benefit evolves over time, and existing evidence doesn’t suggest large deviations from this assumption.There are open questions about the spillover effects of immigration on the immigrant’s household as well as their original and destination communities. Immigrating likely benefits the whole family if they move together, but the impact on household members that stay behind is less clear, as the economic benefits of remittances are countered by the negative effects of separation. On balance, we estimate a small, non-significant benefit for households that stay behind when a member immigrates (+0.01 WELLBY per household member). We did not include spillovers on the origin community due to scarce evidence (only one study) that suggested small, null effects. For destination communities, we estimate that increasing the proportion of immigrants by 1% is associated with a small, non-significant, negative spillover for natives (-0.01 WELLBYs per native), although this is likely moderated by attitudes towards immigrants.We then conducted BOTECs of possible interventions to increase immigration. The most promising is policy advocacy, which we estimate is 11 times more cost-effective than GiveDirectly cash transfers. The other interventions we investigated are 2 to 6 times better than cash transfers. However, all of our BOTECs are speculative and exploratory in nature. These estimates are also limited because we’re unsure how to model the potential for immigration increasing interventions to foster anti-immigrant sentiment in the future. Plus, there might be non-trivial risks that a big push for immigration or other polarising topics by Effective Altruists could burn goodwill that might be used on other issues (e.g., biosecurity). Accordingly, we’re inclined towards treating these as upper-bound estimates and we expect that once these costs are taken into account immigration policy advocacy would no longer be promising.We recommend that future research assesses the costs, chances of success, and risk of backlash for potential policy-based interventions to increase immigration.NotesThis report focuses on the impact of immigration in terms of WELLBYs. One WELLBY is a 1 life satisfaction point change for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of life sati...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Immigration reform: a shallow cause exploration, published by JoelMcGuire on February 20, 2023 on The Effective Altruism Forum.This shallow investigation was commissioned by Founders Pledge.SummaryThis shallow cause area report explores the impact of immigration on subjective wellbeing (SWB). It was completed in two weeks. In this report, we start by reviewing the literature and modelling the impact of immigration on wellbeing. Then, we conduct back of the envelope calculations (BOTECs) of the cost-effectiveness of various interventions to increase immigration.The effect of immigration has been studied extensively. However, most of the studies we find are correlational and do not provide causal evidence. Additionally, most of the studies use life satisfaction as a measure of SWB, so it’s unclear whether immigration impacts life satisfaction and affective happiness (e.g. positive emotions on a daily basis) differently.Despite these limitations, we attempt to estimate the effect of immigration on wellbeing. We find that immigrating to countries with higher average SWB levels might produce large benefits to wellbeing, but we are very uncertain about the exact size of the effect. According to our model, when people move to a country with higher SWB, they will gain 77% of the SWB gap between the origin and destination country. We assume this benefit will be immediate and permanent, as there is little evidence to model how this benefit evolves over time, and existing evidence doesn’t suggest large deviations from this assumption.There are open questions about the spillover effects of immigration on the immigrant’s household as well as their original and destination communities. Immigrating likely benefits the whole family if they move together, but the impact on household members that stay behind is less clear, as the economic benefits of remittances are countered by the negative effects of separation. On balance, we estimate a small, non-significant benefit for households that stay behind when a member immigrates (+0.01 WELLBY per household member). We did not include spillovers on the origin community due to scarce evidence (only one study) that suggested small, null effects. For destination communities, we estimate that increasing the proportion of immigrants by 1% is associated with a small, non-significant, negative spillover for natives (-0.01 WELLBYs per native), although this is likely moderated by attitudes towards immigrants.We then conducted BOTECs of possible interventions to increase immigration. The most promising is policy advocacy, which we estimate is 11 times more cost-effective than GiveDirectly cash transfers. The other interventions we investigated are 2 to 6 times better than cash transfers. However, all of our BOTECs are speculative and exploratory in nature. These estimates are also limited because we’re unsure how to model the potential for immigration increasing interventions to foster anti-immigrant sentiment in the future. Plus, there might be non-trivial risks that a big push for immigration or other polarising topics by Effective Altruists could burn goodwill that might be used on other issues (e.g., biosecurity). Accordingly, we’re inclined towards treating these as upper-bound estimates and we expect that once these costs are taken into account immigration policy advocacy would no longer be promising.We recommend that future research assesses the costs, chances of success, and risk of backlash for potential policy-based interventions to increase immigration.NotesThis report focuses on the impact of immigration in terms of WELLBYs. One WELLBY is a 1 life satisfaction point change for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of life sati...]]>
JoelMcGuire https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:12:01 None full 4950
2eotFCxvFjXH7zNNw_NL_EA_EA EA - People Will Sometimes Just Lie About You by aella Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: People Will Sometimes Just Lie About You, published by aella on February 18, 2023 on The Effective Altruism Forum.Before getting mini-famous, I did not appreciate the degree to which people would misrepresent and lie about other people.I knew about it in theory. I occasionally stumbled across claims about people that I later found out were false. I knew, abstractly, that any particular accuser is telling a truth or a lie, but you're not sure which one.But now, I'm in the unique position where people say things about me all the time, and I hear most of it, and I have direct access to whether it's accurate or not. There's something about the lack of ambiguity that has left me startled, here. Something was way off about my models of the world before I had access to the truth of a wide range of accusational samples.In the last few years, I've risen in visibility to the degree it's started to get unpleasant. I've had to give up on the idea of throwing parties at my house where guests are allowed to invite unvetted friends. There are internet pockets dedicated to hating me, where people have doxxed me and my family, including the home addresses of my parents and sister. I’ve experienced one kidnapping attempt. I might have to move. One stalker sent me, on average, three long messages every day for nearly three years. By this point death threats are losing their novelty.Before I was this visible, my model was "If a lot of people don't like you, maybe the problem is actually you." Some part of me, before, thought that if you were just consistently nice and charitable, if you were a good, kind person, people would... see that, somehow? Maybe you get one or two insane people, but overall truth would ultimately prevail, because lies without evidence wither and die. And even if people didn't like or agree with you, they wouldn't try to destroy you, because you can only really incite that level of fury in someone if you were at least a little bit at fault yourself. So if you do find yourself in a situation where lots of people are saying terrible things about you, you should take a look in the mirror.But this sort of thing doesn't hold true at large scales! It really doesn't, and that fact shocks some subconscious part of me, to the degree that even I get kinda large-scale gaslit about myself. I often read people talking about how I'm terrible, and then I'm like damn, I must have been a little too sloppy or aggressive in my language to cause them to be so upset with me. Then I go read the original thing they're upset about and find I was actually fine, and really kind, and what the fuck? I'm not used to disagreements being so clearly black and white! And me in the right? What is this, some cartoon children's book caricature of a moral lesson?And I have a similar shock when people work very hard to represent things I do in a sinister light. There've been multiple writeups about me, either by or informed by people I knew in person, where they describe things I've done in a manner that I consider to be extremely uncharitable. People develop a narrative by speculating on my mental state, beliefs, or intentions ("of course she knew people would have that reaction, she knew that person's background"), by blurring the line between thing I concretely did and vaguer facts about context ("she was central to the party so she was responsible for that thing that happened at it"), and by emphasizing reactions more than any concrete bad behavior (“this person says they felt really bad, that proves you did a terrible thing”).Collectively, these paint a picture that sounds convincing, because it seems like all the parts of the narrative are pointing in the same direction. Individually, however, the claims don’t hold up. (In this case, “someone got upset at a party I attended” is a real fa...]]>
aella https://forum.effectivealtruism.org/posts/2eotFCxvFjXH7zNNw/people-will-sometimes-just-lie-about-you Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: People Will Sometimes Just Lie About You, published by aella on February 18, 2023 on The Effective Altruism Forum.Before getting mini-famous, I did not appreciate the degree to which people would misrepresent and lie about other people.I knew about it in theory. I occasionally stumbled across claims about people that I later found out were false. I knew, abstractly, that any particular accuser is telling a truth or a lie, but you're not sure which one.But now, I'm in the unique position where people say things about me all the time, and I hear most of it, and I have direct access to whether it's accurate or not. There's something about the lack of ambiguity that has left me startled, here. Something was way off about my models of the world before I had access to the truth of a wide range of accusational samples.In the last few years, I've risen in visibility to the degree it's started to get unpleasant. I've had to give up on the idea of throwing parties at my house where guests are allowed to invite unvetted friends. There are internet pockets dedicated to hating me, where people have doxxed me and my family, including the home addresses of my parents and sister. I’ve experienced one kidnapping attempt. I might have to move. One stalker sent me, on average, three long messages every day for nearly three years. By this point death threats are losing their novelty.Before I was this visible, my model was "If a lot of people don't like you, maybe the problem is actually you." Some part of me, before, thought that if you were just consistently nice and charitable, if you were a good, kind person, people would... see that, somehow? Maybe you get one or two insane people, but overall truth would ultimately prevail, because lies without evidence wither and die. And even if people didn't like or agree with you, they wouldn't try to destroy you, because you can only really incite that level of fury in someone if you were at least a little bit at fault yourself. So if you do find yourself in a situation where lots of people are saying terrible things about you, you should take a look in the mirror.But this sort of thing doesn't hold true at large scales! It really doesn't, and that fact shocks some subconscious part of me, to the degree that even I get kinda large-scale gaslit about myself. I often read people talking about how I'm terrible, and then I'm like damn, I must have been a little too sloppy or aggressive in my language to cause them to be so upset with me. Then I go read the original thing they're upset about and find I was actually fine, and really kind, and what the fuck? I'm not used to disagreements being so clearly black and white! And me in the right? What is this, some cartoon children's book caricature of a moral lesson?And I have a similar shock when people work very hard to represent things I do in a sinister light. There've been multiple writeups about me, either by or informed by people I knew in person, where they describe things I've done in a manner that I consider to be extremely uncharitable. People develop a narrative by speculating on my mental state, beliefs, or intentions ("of course she knew people would have that reaction, she knew that person's background"), by blurring the line between thing I concretely did and vaguer facts about context ("she was central to the party so she was responsible for that thing that happened at it"), and by emphasizing reactions more than any concrete bad behavior (“this person says they felt really bad, that proves you did a terrible thing”).Collectively, these paint a picture that sounds convincing, because it seems like all the parts of the narrative are pointing in the same direction. Individually, however, the claims don’t hold up. (In this case, “someone got upset at a party I attended” is a real fa...]]>
Sat, 18 Feb 2023 04:20:21 +0000 EA - People Will Sometimes Just Lie About You by aella Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: People Will Sometimes Just Lie About You, published by aella on February 18, 2023 on The Effective Altruism Forum.Before getting mini-famous, I did not appreciate the degree to which people would misrepresent and lie about other people.I knew about it in theory. I occasionally stumbled across claims about people that I later found out were false. I knew, abstractly, that any particular accuser is telling a truth or a lie, but you're not sure which one.But now, I'm in the unique position where people say things about me all the time, and I hear most of it, and I have direct access to whether it's accurate or not. There's something about the lack of ambiguity that has left me startled, here. Something was way off about my models of the world before I had access to the truth of a wide range of accusational samples.In the last few years, I've risen in visibility to the degree it's started to get unpleasant. I've had to give up on the idea of throwing parties at my house where guests are allowed to invite unvetted friends. There are internet pockets dedicated to hating me, where people have doxxed me and my family, including the home addresses of my parents and sister. I’ve experienced one kidnapping attempt. I might have to move. One stalker sent me, on average, three long messages every day for nearly three years. By this point death threats are losing their novelty.Before I was this visible, my model was "If a lot of people don't like you, maybe the problem is actually you." Some part of me, before, thought that if you were just consistently nice and charitable, if you were a good, kind person, people would... see that, somehow? Maybe you get one or two insane people, but overall truth would ultimately prevail, because lies without evidence wither and die. And even if people didn't like or agree with you, they wouldn't try to destroy you, because you can only really incite that level of fury in someone if you were at least a little bit at fault yourself. So if you do find yourself in a situation where lots of people are saying terrible things about you, you should take a look in the mirror.But this sort of thing doesn't hold true at large scales! It really doesn't, and that fact shocks some subconscious part of me, to the degree that even I get kinda large-scale gaslit about myself. I often read people talking about how I'm terrible, and then I'm like damn, I must have been a little too sloppy or aggressive in my language to cause them to be so upset with me. Then I go read the original thing they're upset about and find I was actually fine, and really kind, and what the fuck? I'm not used to disagreements being so clearly black and white! And me in the right? What is this, some cartoon children's book caricature of a moral lesson?And I have a similar shock when people work very hard to represent things I do in a sinister light. There've been multiple writeups about me, either by or informed by people I knew in person, where they describe things I've done in a manner that I consider to be extremely uncharitable. People develop a narrative by speculating on my mental state, beliefs, or intentions ("of course she knew people would have that reaction, she knew that person's background"), by blurring the line between thing I concretely did and vaguer facts about context ("she was central to the party so she was responsible for that thing that happened at it"), and by emphasizing reactions more than any concrete bad behavior (“this person says they felt really bad, that proves you did a terrible thing”).Collectively, these paint a picture that sounds convincing, because it seems like all the parts of the narrative are pointing in the same direction. Individually, however, the claims don’t hold up. (In this case, “someone got upset at a party I attended” is a real fa...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: People Will Sometimes Just Lie About You, published by aella on February 18, 2023 on The Effective Altruism Forum.Before getting mini-famous, I did not appreciate the degree to which people would misrepresent and lie about other people.I knew about it in theory. I occasionally stumbled across claims about people that I later found out were false. I knew, abstractly, that any particular accuser is telling a truth or a lie, but you're not sure which one.But now, I'm in the unique position where people say things about me all the time, and I hear most of it, and I have direct access to whether it's accurate or not. There's something about the lack of ambiguity that has left me startled, here. Something was way off about my models of the world before I had access to the truth of a wide range of accusational samples.In the last few years, I've risen in visibility to the degree it's started to get unpleasant. I've had to give up on the idea of throwing parties at my house where guests are allowed to invite unvetted friends. There are internet pockets dedicated to hating me, where people have doxxed me and my family, including the home addresses of my parents and sister. I’ve experienced one kidnapping attempt. I might have to move. One stalker sent me, on average, three long messages every day for nearly three years. By this point death threats are losing their novelty.Before I was this visible, my model was "If a lot of people don't like you, maybe the problem is actually you." Some part of me, before, thought that if you were just consistently nice and charitable, if you were a good, kind person, people would... see that, somehow? Maybe you get one or two insane people, but overall truth would ultimately prevail, because lies without evidence wither and die. And even if people didn't like or agree with you, they wouldn't try to destroy you, because you can only really incite that level of fury in someone if you were at least a little bit at fault yourself. So if you do find yourself in a situation where lots of people are saying terrible things about you, you should take a look in the mirror.But this sort of thing doesn't hold true at large scales! It really doesn't, and that fact shocks some subconscious part of me, to the degree that even I get kinda large-scale gaslit about myself. I often read people talking about how I'm terrible, and then I'm like damn, I must have been a little too sloppy or aggressive in my language to cause them to be so upset with me. Then I go read the original thing they're upset about and find I was actually fine, and really kind, and what the fuck? I'm not used to disagreements being so clearly black and white! And me in the right? What is this, some cartoon children's book caricature of a moral lesson?And I have a similar shock when people work very hard to represent things I do in a sinister light. There've been multiple writeups about me, either by or informed by people I knew in person, where they describe things I've done in a manner that I consider to be extremely uncharitable. People develop a narrative by speculating on my mental state, beliefs, or intentions ("of course she knew people would have that reaction, she knew that person's background"), by blurring the line between thing I concretely did and vaguer facts about context ("she was central to the party so she was responsible for that thing that happened at it"), and by emphasizing reactions more than any concrete bad behavior (“this person says they felt really bad, that proves you did a terrible thing”).Collectively, these paint a picture that sounds convincing, because it seems like all the parts of the narrative are pointing in the same direction. Individually, however, the claims don’t hold up. (In this case, “someone got upset at a party I attended” is a real fa...]]>
aella https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 25:31 None full 4926
seFH9jcH3saXHJqin_NL_EA_EA EA - Data on how much solutions differ in effectiveness by Benjamin Todd Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Data on how much solutions differ in effectiveness, published by Benjamin Todd on February 17, 2023 on The Effective Altruism Forum.Click the link above to see the full article and charts. Here is a summary I wrote for the latest edition to the 80,000 Hours newsletter, or see the Twitter version.Is it really true that some ways of solving social problems achieve hundreds of times more, given the same amount of effort?Back in 2013, Toby Ord1 pointed out some striking data about global health. He found that the best interventions were:10,000x better at creating years of healthy life than the worst interventions.50x better than the median intervention.He argued this could have radical implications for people who want to do good, namely that a focus on cost-effectiveness is vital.For instance, it could suggest that by focusing on the best interventions, you might be able to have 50 times more impact than a typical person in the field.This argument was one of the original inspirations for our work and effective altruism in general.Now, ten years later, we decided to check how well the pattern in the data holds up and see whether it still applies – especially when extended beyond global health.We gathered all the datasets we could find to test the hypothesis. We found data covering health in rich and poor countries, education, US social interventions, and climate policy.If you want to get the full picture on the data and its implications, read the full article (with lots of charts!):How much do solutions to social problems differ in effectiveness? A collection of all the studies we could find.The bottom line is that the pattern Toby found holds up surprisingly well.This huge variation suggests that once you’ve built some career capital and chosen some problem areas, it’s valuable to think hard about which solutions to any problem you’re working on are most effective and to focus your efforts on those. The difficult question, however, is to say how important this is. I think people interested in effective altruism have sometimes been too quick to conclude that it’s possible to have, say, 1,000 times the impact by using data to compare the best solutions.First, I think a fairer point of comparison isn’t between best and worst but rather between the best measurable intervention and picking randomly. And if you pick randomly, you expect to get the mean effectiveness (rather than the worst or the median).Our data only shows the best interventions are about 10 times better than the mean, rather than 100 or 1,000 times better.Second, these studies will typically overstate the differences between the best and average measurable interventions due to regression to the mean: if you think a solution seems unusually good, that might be because it is actually good, or because you made an error in its favour. The better something seems, the greater the chance of error. So typically the solutions that seem best are actually closer to the mean. This effect can be large.Another important downside of a data-driven approach is that it excludes many non-measurable interventions. The history of philanthropy suggests the most effective solutions historically have been things like R&D and advocacy, which can’t be measured ahead of time in randomised trials.This means that restricting yourself to measurable solutions could mean excluding the very best ones.And since our data shows the very best solutions are far more effective than average, it’s very bad for your impact to exclude them.In practice, I’m most keen on the “hits-based approach” to choosing solutions. I think it’s possible to find rules of thumb that make a solution more likely to be among the very most effective, such as “does this solution have the chance of solving a lot of the problem?”, “does it offer leverage?”, “does it...]]>
Benjamin Todd https://forum.effectivealtruism.org/posts/seFH9jcH3saXHJqin/data-on-how-much-solutions-differ-in-effectiveness Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Data on how much solutions differ in effectiveness, published by Benjamin Todd on February 17, 2023 on The Effective Altruism Forum.Click the link above to see the full article and charts. Here is a summary I wrote for the latest edition to the 80,000 Hours newsletter, or see the Twitter version.Is it really true that some ways of solving social problems achieve hundreds of times more, given the same amount of effort?Back in 2013, Toby Ord1 pointed out some striking data about global health. He found that the best interventions were:10,000x better at creating years of healthy life than the worst interventions.50x better than the median intervention.He argued this could have radical implications for people who want to do good, namely that a focus on cost-effectiveness is vital.For instance, it could suggest that by focusing on the best interventions, you might be able to have 50 times more impact than a typical person in the field.This argument was one of the original inspirations for our work and effective altruism in general.Now, ten years later, we decided to check how well the pattern in the data holds up and see whether it still applies – especially when extended beyond global health.We gathered all the datasets we could find to test the hypothesis. We found data covering health in rich and poor countries, education, US social interventions, and climate policy.If you want to get the full picture on the data and its implications, read the full article (with lots of charts!):How much do solutions to social problems differ in effectiveness? A collection of all the studies we could find.The bottom line is that the pattern Toby found holds up surprisingly well.This huge variation suggests that once you’ve built some career capital and chosen some problem areas, it’s valuable to think hard about which solutions to any problem you’re working on are most effective and to focus your efforts on those. The difficult question, however, is to say how important this is. I think people interested in effective altruism have sometimes been too quick to conclude that it’s possible to have, say, 1,000 times the impact by using data to compare the best solutions.First, I think a fairer point of comparison isn’t between best and worst but rather between the best measurable intervention and picking randomly. And if you pick randomly, you expect to get the mean effectiveness (rather than the worst or the median).Our data only shows the best interventions are about 10 times better than the mean, rather than 100 or 1,000 times better.Second, these studies will typically overstate the differences between the best and average measurable interventions due to regression to the mean: if you think a solution seems unusually good, that might be because it is actually good, or because you made an error in its favour. The better something seems, the greater the chance of error. So typically the solutions that seem best are actually closer to the mean. This effect can be large.Another important downside of a data-driven approach is that it excludes many non-measurable interventions. The history of philanthropy suggests the most effective solutions historically have been things like R&D and advocacy, which can’t be measured ahead of time in randomised trials.This means that restricting yourself to measurable solutions could mean excluding the very best ones.And since our data shows the very best solutions are far more effective than average, it’s very bad for your impact to exclude them.In practice, I’m most keen on the “hits-based approach” to choosing solutions. I think it’s possible to find rules of thumb that make a solution more likely to be among the very most effective, such as “does this solution have the chance of solving a lot of the problem?”, “does it offer leverage?”, “does it...]]>
Sat, 18 Feb 2023 01:57:46 +0000 EA - Data on how much solutions differ in effectiveness by Benjamin Todd Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Data on how much solutions differ in effectiveness, published by Benjamin Todd on February 17, 2023 on The Effective Altruism Forum.Click the link above to see the full article and charts. Here is a summary I wrote for the latest edition to the 80,000 Hours newsletter, or see the Twitter version.Is it really true that some ways of solving social problems achieve hundreds of times more, given the same amount of effort?Back in 2013, Toby Ord1 pointed out some striking data about global health. He found that the best interventions were:10,000x better at creating years of healthy life than the worst interventions.50x better than the median intervention.He argued this could have radical implications for people who want to do good, namely that a focus on cost-effectiveness is vital.For instance, it could suggest that by focusing on the best interventions, you might be able to have 50 times more impact than a typical person in the field.This argument was one of the original inspirations for our work and effective altruism in general.Now, ten years later, we decided to check how well the pattern in the data holds up and see whether it still applies – especially when extended beyond global health.We gathered all the datasets we could find to test the hypothesis. We found data covering health in rich and poor countries, education, US social interventions, and climate policy.If you want to get the full picture on the data and its implications, read the full article (with lots of charts!):How much do solutions to social problems differ in effectiveness? A collection of all the studies we could find.The bottom line is that the pattern Toby found holds up surprisingly well.This huge variation suggests that once you’ve built some career capital and chosen some problem areas, it’s valuable to think hard about which solutions to any problem you’re working on are most effective and to focus your efforts on those. The difficult question, however, is to say how important this is. I think people interested in effective altruism have sometimes been too quick to conclude that it’s possible to have, say, 1,000 times the impact by using data to compare the best solutions.First, I think a fairer point of comparison isn’t between best and worst but rather between the best measurable intervention and picking randomly. And if you pick randomly, you expect to get the mean effectiveness (rather than the worst or the median).Our data only shows the best interventions are about 10 times better than the mean, rather than 100 or 1,000 times better.Second, these studies will typically overstate the differences between the best and average measurable interventions due to regression to the mean: if you think a solution seems unusually good, that might be because it is actually good, or because you made an error in its favour. The better something seems, the greater the chance of error. So typically the solutions that seem best are actually closer to the mean. This effect can be large.Another important downside of a data-driven approach is that it excludes many non-measurable interventions. The history of philanthropy suggests the most effective solutions historically have been things like R&D and advocacy, which can’t be measured ahead of time in randomised trials.This means that restricting yourself to measurable solutions could mean excluding the very best ones.And since our data shows the very best solutions are far more effective than average, it’s very bad for your impact to exclude them.In practice, I’m most keen on the “hits-based approach” to choosing solutions. I think it’s possible to find rules of thumb that make a solution more likely to be among the very most effective, such as “does this solution have the chance of solving a lot of the problem?”, “does it offer leverage?”, “does it...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Data on how much solutions differ in effectiveness, published by Benjamin Todd on February 17, 2023 on The Effective Altruism Forum.Click the link above to see the full article and charts. Here is a summary I wrote for the latest edition to the 80,000 Hours newsletter, or see the Twitter version.Is it really true that some ways of solving social problems achieve hundreds of times more, given the same amount of effort?Back in 2013, Toby Ord1 pointed out some striking data about global health. He found that the best interventions were:10,000x better at creating years of healthy life than the worst interventions.50x better than the median intervention.He argued this could have radical implications for people who want to do good, namely that a focus on cost-effectiveness is vital.For instance, it could suggest that by focusing on the best interventions, you might be able to have 50 times more impact than a typical person in the field.This argument was one of the original inspirations for our work and effective altruism in general.Now, ten years later, we decided to check how well the pattern in the data holds up and see whether it still applies – especially when extended beyond global health.We gathered all the datasets we could find to test the hypothesis. We found data covering health in rich and poor countries, education, US social interventions, and climate policy.If you want to get the full picture on the data and its implications, read the full article (with lots of charts!):How much do solutions to social problems differ in effectiveness? A collection of all the studies we could find.The bottom line is that the pattern Toby found holds up surprisingly well.This huge variation suggests that once you’ve built some career capital and chosen some problem areas, it’s valuable to think hard about which solutions to any problem you’re working on are most effective and to focus your efforts on those. The difficult question, however, is to say how important this is. I think people interested in effective altruism have sometimes been too quick to conclude that it’s possible to have, say, 1,000 times the impact by using data to compare the best solutions.First, I think a fairer point of comparison isn’t between best and worst but rather between the best measurable intervention and picking randomly. And if you pick randomly, you expect to get the mean effectiveness (rather than the worst or the median).Our data only shows the best interventions are about 10 times better than the mean, rather than 100 or 1,000 times better.Second, these studies will typically overstate the differences between the best and average measurable interventions due to regression to the mean: if you think a solution seems unusually good, that might be because it is actually good, or because you made an error in its favour. The better something seems, the greater the chance of error. So typically the solutions that seem best are actually closer to the mean. This effect can be large.Another important downside of a data-driven approach is that it excludes many non-measurable interventions. The history of philanthropy suggests the most effective solutions historically have been things like R&D and advocacy, which can’t be measured ahead of time in randomised trials.This means that restricting yourself to measurable solutions could mean excluding the very best ones.And since our data shows the very best solutions are far more effective than average, it’s very bad for your impact to exclude them.In practice, I’m most keen on the “hits-based approach” to choosing solutions. I think it’s possible to find rules of thumb that make a solution more likely to be among the very most effective, such as “does this solution have the chance of solving a lot of the problem?”, “does it offer leverage?”, “does it...]]>
Benjamin Todd https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:37 None full 4927
RRT5ApXHnvvzgnYy8_NL_EA_EA EA - EA London Hackathon Retrospective by Jonny Spicer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA London Hackathon Retrospective, published by Jonny Spicer on February 17, 2023 on The Effective Altruism Forum.IntroductionOn Saturday 11th February, we ran a day-long EA hackathon in London, modeled off a similar event in Berkeley late last year. This post follows a similar format to the retrospective linked above. You can see some of the basic information about the event, as well as notes on some of the projects that people were working on in this Google doc.We are incredibly grateful to Nichole Janeway Bills and Zac Hatfield-Dodds for their advice before the event, and to Edward Saperia for his advice and assistance on the day itself.TL:DR;We ran a pilot hackathon in London, and were surprised by the success of the event.Around 50 people turned up, they gave mostly positive feedback, there were several impressive projects which have plausible impact.The event helped build the EA London tech community and generated opportunities for community members to work on impactful software projects on an ongoing, long-term basis.We're excited to continue running these kinds of events and want to produce a blueprint for others to run similar hackathons in other places.Goals of the eventWhile we hoped that this event would produce artifacts that had legible impact, it was mainly a community-building exercise. Our primary goal was to validate the concept - could we run a successful hackathon? If so, would running similar events in the future lead to greater tangible impact, even if this one didn't necessarily do so?What went wellApproximately 50 people attended, which was more than we'd expected.The average skill level was high.The majority of teams had a strong lead or mentor, in most cases an existing maintainer or the person who had the original idea for the project.Asking people if they'd like to be put in groups and then doing so generally worked well - I would estimate this was beneficial for 80% of the people who selected this option. See related potential improvement below.Dedicating one of the monthly EA London tech meetups to brainstorming ideas for hackathon projects both yielded good ideas and got people engaged.Having a show & tell section encouraged attendees to optimize for having something concrete, and gave groups the chance to learn about what other groups had worked on.The venue was excellent.The average overall rating for the event on our feedback form was 4.36/5 .What we could do better next timeWe had a limited budget, and more people showed up then we could provide food for on said budget, meaning we didn't provide lunch after we'd originally said we would. We weren't transparent about this, which was a mistake.We didn't do enough to accommodate those who were less experienced coders. In future, we'll use a different question on the sign-up form, along the lines of "how much guidance would you need in order to complete an issue in a project that required coding?". We can then organise our groups/projects/activities accordingly, including having more ways to contribute to projects through means other than coding.We underestimated the ratio of people with jobs in the "data" family relative to the "software" family, and so our suggested projects were almost entirely software-focused.We could've had a better shared digital space. This ended up being a bit of an afterthought for us, and we ended up asking people to join a WhatsApp group when they signed in, but it wasn't used much during the day. A different platform could facilitate more collaboration/visibility between groups, allow people to ask for help more easily, and generally give more of a community feel to the event.Outputs and resourcesThere were several pull requests submitted to existing open-source projects, including VeganBootcamp and Stampy.Multiple proof-of-concep...]]>
Jonny Spicer https://forum.effectivealtruism.org/posts/RRT5ApXHnvvzgnYy8/ea-london-hackathon-retrospective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA London Hackathon Retrospective, published by Jonny Spicer on February 17, 2023 on The Effective Altruism Forum.IntroductionOn Saturday 11th February, we ran a day-long EA hackathon in London, modeled off a similar event in Berkeley late last year. This post follows a similar format to the retrospective linked above. You can see some of the basic information about the event, as well as notes on some of the projects that people were working on in this Google doc.We are incredibly grateful to Nichole Janeway Bills and Zac Hatfield-Dodds for their advice before the event, and to Edward Saperia for his advice and assistance on the day itself.TL:DR;We ran a pilot hackathon in London, and were surprised by the success of the event.Around 50 people turned up, they gave mostly positive feedback, there were several impressive projects which have plausible impact.The event helped build the EA London tech community and generated opportunities for community members to work on impactful software projects on an ongoing, long-term basis.We're excited to continue running these kinds of events and want to produce a blueprint for others to run similar hackathons in other places.Goals of the eventWhile we hoped that this event would produce artifacts that had legible impact, it was mainly a community-building exercise. Our primary goal was to validate the concept - could we run a successful hackathon? If so, would running similar events in the future lead to greater tangible impact, even if this one didn't necessarily do so?What went wellApproximately 50 people attended, which was more than we'd expected.The average skill level was high.The majority of teams had a strong lead or mentor, in most cases an existing maintainer or the person who had the original idea for the project.Asking people if they'd like to be put in groups and then doing so generally worked well - I would estimate this was beneficial for 80% of the people who selected this option. See related potential improvement below.Dedicating one of the monthly EA London tech meetups to brainstorming ideas for hackathon projects both yielded good ideas and got people engaged.Having a show & tell section encouraged attendees to optimize for having something concrete, and gave groups the chance to learn about what other groups had worked on.The venue was excellent.The average overall rating for the event on our feedback form was 4.36/5 .What we could do better next timeWe had a limited budget, and more people showed up then we could provide food for on said budget, meaning we didn't provide lunch after we'd originally said we would. We weren't transparent about this, which was a mistake.We didn't do enough to accommodate those who were less experienced coders. In future, we'll use a different question on the sign-up form, along the lines of "how much guidance would you need in order to complete an issue in a project that required coding?". We can then organise our groups/projects/activities accordingly, including having more ways to contribute to projects through means other than coding.We underestimated the ratio of people with jobs in the "data" family relative to the "software" family, and so our suggested projects were almost entirely software-focused.We could've had a better shared digital space. This ended up being a bit of an afterthought for us, and we ended up asking people to join a WhatsApp group when they signed in, but it wasn't used much during the day. A different platform could facilitate more collaboration/visibility between groups, allow people to ask for help more easily, and generally give more of a community feel to the event.Outputs and resourcesThere were several pull requests submitted to existing open-source projects, including VeganBootcamp and Stampy.Multiple proof-of-concep...]]>
Fri, 17 Feb 2023 22:14:05 +0000 EA - EA London Hackathon Retrospective by Jonny Spicer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA London Hackathon Retrospective, published by Jonny Spicer on February 17, 2023 on The Effective Altruism Forum.IntroductionOn Saturday 11th February, we ran a day-long EA hackathon in London, modeled off a similar event in Berkeley late last year. This post follows a similar format to the retrospective linked above. You can see some of the basic information about the event, as well as notes on some of the projects that people were working on in this Google doc.We are incredibly grateful to Nichole Janeway Bills and Zac Hatfield-Dodds for their advice before the event, and to Edward Saperia for his advice and assistance on the day itself.TL:DR;We ran a pilot hackathon in London, and were surprised by the success of the event.Around 50 people turned up, they gave mostly positive feedback, there were several impressive projects which have plausible impact.The event helped build the EA London tech community and generated opportunities for community members to work on impactful software projects on an ongoing, long-term basis.We're excited to continue running these kinds of events and want to produce a blueprint for others to run similar hackathons in other places.Goals of the eventWhile we hoped that this event would produce artifacts that had legible impact, it was mainly a community-building exercise. Our primary goal was to validate the concept - could we run a successful hackathon? If so, would running similar events in the future lead to greater tangible impact, even if this one didn't necessarily do so?What went wellApproximately 50 people attended, which was more than we'd expected.The average skill level was high.The majority of teams had a strong lead or mentor, in most cases an existing maintainer or the person who had the original idea for the project.Asking people if they'd like to be put in groups and then doing so generally worked well - I would estimate this was beneficial for 80% of the people who selected this option. See related potential improvement below.Dedicating one of the monthly EA London tech meetups to brainstorming ideas for hackathon projects both yielded good ideas and got people engaged.Having a show & tell section encouraged attendees to optimize for having something concrete, and gave groups the chance to learn about what other groups had worked on.The venue was excellent.The average overall rating for the event on our feedback form was 4.36/5 .What we could do better next timeWe had a limited budget, and more people showed up then we could provide food for on said budget, meaning we didn't provide lunch after we'd originally said we would. We weren't transparent about this, which was a mistake.We didn't do enough to accommodate those who were less experienced coders. In future, we'll use a different question on the sign-up form, along the lines of "how much guidance would you need in order to complete an issue in a project that required coding?". We can then organise our groups/projects/activities accordingly, including having more ways to contribute to projects through means other than coding.We underestimated the ratio of people with jobs in the "data" family relative to the "software" family, and so our suggested projects were almost entirely software-focused.We could've had a better shared digital space. This ended up being a bit of an afterthought for us, and we ended up asking people to join a WhatsApp group when they signed in, but it wasn't used much during the day. A different platform could facilitate more collaboration/visibility between groups, allow people to ask for help more easily, and generally give more of a community feel to the event.Outputs and resourcesThere were several pull requests submitted to existing open-source projects, including VeganBootcamp and Stampy.Multiple proof-of-concep...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA London Hackathon Retrospective, published by Jonny Spicer on February 17, 2023 on The Effective Altruism Forum.IntroductionOn Saturday 11th February, we ran a day-long EA hackathon in London, modeled off a similar event in Berkeley late last year. This post follows a similar format to the retrospective linked above. You can see some of the basic information about the event, as well as notes on some of the projects that people were working on in this Google doc.We are incredibly grateful to Nichole Janeway Bills and Zac Hatfield-Dodds for their advice before the event, and to Edward Saperia for his advice and assistance on the day itself.TL:DR;We ran a pilot hackathon in London, and were surprised by the success of the event.Around 50 people turned up, they gave mostly positive feedback, there were several impressive projects which have plausible impact.The event helped build the EA London tech community and generated opportunities for community members to work on impactful software projects on an ongoing, long-term basis.We're excited to continue running these kinds of events and want to produce a blueprint for others to run similar hackathons in other places.Goals of the eventWhile we hoped that this event would produce artifacts that had legible impact, it was mainly a community-building exercise. Our primary goal was to validate the concept - could we run a successful hackathon? If so, would running similar events in the future lead to greater tangible impact, even if this one didn't necessarily do so?What went wellApproximately 50 people attended, which was more than we'd expected.The average skill level was high.The majority of teams had a strong lead or mentor, in most cases an existing maintainer or the person who had the original idea for the project.Asking people if they'd like to be put in groups and then doing so generally worked well - I would estimate this was beneficial for 80% of the people who selected this option. See related potential improvement below.Dedicating one of the monthly EA London tech meetups to brainstorming ideas for hackathon projects both yielded good ideas and got people engaged.Having a show & tell section encouraged attendees to optimize for having something concrete, and gave groups the chance to learn about what other groups had worked on.The venue was excellent.The average overall rating for the event on our feedback form was 4.36/5 .What we could do better next timeWe had a limited budget, and more people showed up then we could provide food for on said budget, meaning we didn't provide lunch after we'd originally said we would. We weren't transparent about this, which was a mistake.We didn't do enough to accommodate those who were less experienced coders. In future, we'll use a different question on the sign-up form, along the lines of "how much guidance would you need in order to complete an issue in a project that required coding?". We can then organise our groups/projects/activities accordingly, including having more ways to contribute to projects through means other than coding.We underestimated the ratio of people with jobs in the "data" family relative to the "software" family, and so our suggested projects were almost entirely software-focused.We could've had a better shared digital space. This ended up being a bit of an afterthought for us, and we ended up asking people to join a WhatsApp group when they signed in, but it wasn't used much during the day. A different platform could facilitate more collaboration/visibility between groups, allow people to ask for help more easily, and generally give more of a community feel to the event.Outputs and resourcesThere were several pull requests submitted to existing open-source projects, including VeganBootcamp and Stampy.Multiple proof-of-concep...]]>
Jonny Spicer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:37 None full 4914
4cCRCoYvcLEr7prC2_NL_EA_EA EA - AI Safety Info Distillation Fellowship by robertskmiles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Info Distillation Fellowship, published by robertskmiles on February 17, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
robertskmiles https://forum.effectivealtruism.org/posts/4cCRCoYvcLEr7prC2/ai-safety-info-distillation-fellowship Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Info Distillation Fellowship, published by robertskmiles on February 17, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 17 Feb 2023 18:15:40 +0000 EA - AI Safety Info Distillation Fellowship by robertskmiles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Info Distillation Fellowship, published by robertskmiles on February 17, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Info Distillation Fellowship, published by robertskmiles on February 17, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
robertskmiles https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:26 None full 4913
9wJ5Mtba9Fc2yCvpN_NL_EA_EA EA - Getting organizational value from EA conferences, featuring Charity Entrepreneurship’s experience by Amy Labenz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting organizational value from EA conferences, featuring Charity Entrepreneurship’s experience, published by Amy Labenz on February 17, 2023 on The Effective Altruism Forum.For individuals in the EA community, EA Global and EAGx conferences are an opportunity to be exposed to ideas and people who can help us identify and pursue more impactful paths. And for organizations, they’re an opportunity to connect with potential new hires, grantees, and funders.Charity Entrepreneurship recently shared with us (the CEA events team) a summary of the impact their participation at EAG and EAGx has had on their Incubation Program.We were inspired by this post, and wanted to encourage orgs working on EA causes — object and meta, large and small, older and newer — to consider carefully how to get the most out of the conferences we’ve got lined up in 2023. The opportunities include:Talks & workshops can be used to inform the community about your work and how others can contribute to increasing your impact as collaborators, funders or recruits.Office hours & career fairs are available for you to interact directly with those interested in your plans and how they might fit into them.1-on-1 meetings are the bedrock of all the conferences CEA is involved in organizing, facilitating in-depth discussion and establishing connections between people with common goals.If you’re interested in your org appearing on the program at an upcoming conference, or would like to discuss with us how best to approach these events, please reach out to us at hello@eaglobal.org.We thought sharing Charity Entrepreneurship’s experience publicly might be valuable for others to learn from in planning their own approach to future conferences. Here’s their story in their own words.CE’s EAG and EAGx Success StoriesFrom the launch of Charity Entrepreneurship’s Incubation Program, EAG and EAGx conferences have been a crucial part of the organization’s outreach strategy. Thanks in part to CE’s increasing presence — talks, workshops, and participation in numerous career fairs — nonprofit entrepreneurship has become a recognized career option in the effective altruist community.From just one round of applications for CE’s Incubation Program, 325 out of 720 applicants had participated in EAG or EAGx conferences where the organization was present. 169 applicants had interacted directly with CE at EAGs by participating in a workshop, office hours, or talking to CE staff one on one.When young entrepreneurs who got into the Incubation Program and started new high-impact charities were asked what convinced them to apply, they named EAGs as one of the top reasons (along with the EA movement in general, partner/friend, internships, talking to CE staff members and group organizers).So far, Charity Entrepreneurship has started 23 organizations and provided them with $1.88 million in total funding. These organizations have fundraised a further $20 million to cover their costs and are now reaching over 120 million animals and over 10 million people with their interventions. You can learn more about them on CE’s website; they include charities like Fortify Health (three GiveWell Incubation Grants, 25% chance of becoming a top charity), Fish Welfare Initiative (ACE Standout Charity), Shrimp Welfare Project (first organization ever working on shrimp welfare), and Lead Exposure Elimination Project (precursory policy organization now working in 10 countries).Key benefits for CEFinding talented entrepreneurs to start high-impact projects (50% of the founders had been in contact with CE’s outreach efforts during EAG conferences).Finding hires for the CE team (20% of the team have been hired thanks to outreach efforts during EAG conferences), as well as fellows and interns.Making useful connections for funding (10% of fun...]]>
Amy Labenz https://forum.effectivealtruism.org/posts/9wJ5Mtba9Fc2yCvpN/getting-organizational-value-from-ea-conferences-featuring Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting organizational value from EA conferences, featuring Charity Entrepreneurship’s experience, published by Amy Labenz on February 17, 2023 on The Effective Altruism Forum.For individuals in the EA community, EA Global and EAGx conferences are an opportunity to be exposed to ideas and people who can help us identify and pursue more impactful paths. And for organizations, they’re an opportunity to connect with potential new hires, grantees, and funders.Charity Entrepreneurship recently shared with us (the CEA events team) a summary of the impact their participation at EAG and EAGx has had on their Incubation Program.We were inspired by this post, and wanted to encourage orgs working on EA causes — object and meta, large and small, older and newer — to consider carefully how to get the most out of the conferences we’ve got lined up in 2023. The opportunities include:Talks & workshops can be used to inform the community about your work and how others can contribute to increasing your impact as collaborators, funders or recruits.Office hours & career fairs are available for you to interact directly with those interested in your plans and how they might fit into them.1-on-1 meetings are the bedrock of all the conferences CEA is involved in organizing, facilitating in-depth discussion and establishing connections between people with common goals.If you’re interested in your org appearing on the program at an upcoming conference, or would like to discuss with us how best to approach these events, please reach out to us at hello@eaglobal.org.We thought sharing Charity Entrepreneurship’s experience publicly might be valuable for others to learn from in planning their own approach to future conferences. Here’s their story in their own words.CE’s EAG and EAGx Success StoriesFrom the launch of Charity Entrepreneurship’s Incubation Program, EAG and EAGx conferences have been a crucial part of the organization’s outreach strategy. Thanks in part to CE’s increasing presence — talks, workshops, and participation in numerous career fairs — nonprofit entrepreneurship has become a recognized career option in the effective altruist community.From just one round of applications for CE’s Incubation Program, 325 out of 720 applicants had participated in EAG or EAGx conferences where the organization was present. 169 applicants had interacted directly with CE at EAGs by participating in a workshop, office hours, or talking to CE staff one on one.When young entrepreneurs who got into the Incubation Program and started new high-impact charities were asked what convinced them to apply, they named EAGs as one of the top reasons (along with the EA movement in general, partner/friend, internships, talking to CE staff members and group organizers).So far, Charity Entrepreneurship has started 23 organizations and provided them with $1.88 million in total funding. These organizations have fundraised a further $20 million to cover their costs and are now reaching over 120 million animals and over 10 million people with their interventions. You can learn more about them on CE’s website; they include charities like Fortify Health (three GiveWell Incubation Grants, 25% chance of becoming a top charity), Fish Welfare Initiative (ACE Standout Charity), Shrimp Welfare Project (first organization ever working on shrimp welfare), and Lead Exposure Elimination Project (precursory policy organization now working in 10 countries).Key benefits for CEFinding talented entrepreneurs to start high-impact projects (50% of the founders had been in contact with CE’s outreach efforts during EAG conferences).Finding hires for the CE team (20% of the team have been hired thanks to outreach efforts during EAG conferences), as well as fellows and interns.Making useful connections for funding (10% of fun...]]>
Fri, 17 Feb 2023 14:43:10 +0000 EA - Getting organizational value from EA conferences, featuring Charity Entrepreneurship’s experience by Amy Labenz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting organizational value from EA conferences, featuring Charity Entrepreneurship’s experience, published by Amy Labenz on February 17, 2023 on The Effective Altruism Forum.For individuals in the EA community, EA Global and EAGx conferences are an opportunity to be exposed to ideas and people who can help us identify and pursue more impactful paths. And for organizations, they’re an opportunity to connect with potential new hires, grantees, and funders.Charity Entrepreneurship recently shared with us (the CEA events team) a summary of the impact their participation at EAG and EAGx has had on their Incubation Program.We were inspired by this post, and wanted to encourage orgs working on EA causes — object and meta, large and small, older and newer — to consider carefully how to get the most out of the conferences we’ve got lined up in 2023. The opportunities include:Talks & workshops can be used to inform the community about your work and how others can contribute to increasing your impact as collaborators, funders or recruits.Office hours & career fairs are available for you to interact directly with those interested in your plans and how they might fit into them.1-on-1 meetings are the bedrock of all the conferences CEA is involved in organizing, facilitating in-depth discussion and establishing connections between people with common goals.If you’re interested in your org appearing on the program at an upcoming conference, or would like to discuss with us how best to approach these events, please reach out to us at hello@eaglobal.org.We thought sharing Charity Entrepreneurship’s experience publicly might be valuable for others to learn from in planning their own approach to future conferences. Here’s their story in their own words.CE’s EAG and EAGx Success StoriesFrom the launch of Charity Entrepreneurship’s Incubation Program, EAG and EAGx conferences have been a crucial part of the organization’s outreach strategy. Thanks in part to CE’s increasing presence — talks, workshops, and participation in numerous career fairs — nonprofit entrepreneurship has become a recognized career option in the effective altruist community.From just one round of applications for CE’s Incubation Program, 325 out of 720 applicants had participated in EAG or EAGx conferences where the organization was present. 169 applicants had interacted directly with CE at EAGs by participating in a workshop, office hours, or talking to CE staff one on one.When young entrepreneurs who got into the Incubation Program and started new high-impact charities were asked what convinced them to apply, they named EAGs as one of the top reasons (along with the EA movement in general, partner/friend, internships, talking to CE staff members and group organizers).So far, Charity Entrepreneurship has started 23 organizations and provided them with $1.88 million in total funding. These organizations have fundraised a further $20 million to cover their costs and are now reaching over 120 million animals and over 10 million people with their interventions. You can learn more about them on CE’s website; they include charities like Fortify Health (three GiveWell Incubation Grants, 25% chance of becoming a top charity), Fish Welfare Initiative (ACE Standout Charity), Shrimp Welfare Project (first organization ever working on shrimp welfare), and Lead Exposure Elimination Project (precursory policy organization now working in 10 countries).Key benefits for CEFinding talented entrepreneurs to start high-impact projects (50% of the founders had been in contact with CE’s outreach efforts during EAG conferences).Finding hires for the CE team (20% of the team have been hired thanks to outreach efforts during EAG conferences), as well as fellows and interns.Making useful connections for funding (10% of fun...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting organizational value from EA conferences, featuring Charity Entrepreneurship’s experience, published by Amy Labenz on February 17, 2023 on The Effective Altruism Forum.For individuals in the EA community, EA Global and EAGx conferences are an opportunity to be exposed to ideas and people who can help us identify and pursue more impactful paths. And for organizations, they’re an opportunity to connect with potential new hires, grantees, and funders.Charity Entrepreneurship recently shared with us (the CEA events team) a summary of the impact their participation at EAG and EAGx has had on their Incubation Program.We were inspired by this post, and wanted to encourage orgs working on EA causes — object and meta, large and small, older and newer — to consider carefully how to get the most out of the conferences we’ve got lined up in 2023. The opportunities include:Talks & workshops can be used to inform the community about your work and how others can contribute to increasing your impact as collaborators, funders or recruits.Office hours & career fairs are available for you to interact directly with those interested in your plans and how they might fit into them.1-on-1 meetings are the bedrock of all the conferences CEA is involved in organizing, facilitating in-depth discussion and establishing connections between people with common goals.If you’re interested in your org appearing on the program at an upcoming conference, or would like to discuss with us how best to approach these events, please reach out to us at hello@eaglobal.org.We thought sharing Charity Entrepreneurship’s experience publicly might be valuable for others to learn from in planning their own approach to future conferences. Here’s their story in their own words.CE’s EAG and EAGx Success StoriesFrom the launch of Charity Entrepreneurship’s Incubation Program, EAG and EAGx conferences have been a crucial part of the organization’s outreach strategy. Thanks in part to CE’s increasing presence — talks, workshops, and participation in numerous career fairs — nonprofit entrepreneurship has become a recognized career option in the effective altruist community.From just one round of applications for CE’s Incubation Program, 325 out of 720 applicants had participated in EAG or EAGx conferences where the organization was present. 169 applicants had interacted directly with CE at EAGs by participating in a workshop, office hours, or talking to CE staff one on one.When young entrepreneurs who got into the Incubation Program and started new high-impact charities were asked what convinced them to apply, they named EAGs as one of the top reasons (along with the EA movement in general, partner/friend, internships, talking to CE staff members and group organizers).So far, Charity Entrepreneurship has started 23 organizations and provided them with $1.88 million in total funding. These organizations have fundraised a further $20 million to cover their costs and are now reaching over 120 million animals and over 10 million people with their interventions. You can learn more about them on CE’s website; they include charities like Fortify Health (three GiveWell Incubation Grants, 25% chance of becoming a top charity), Fish Welfare Initiative (ACE Standout Charity), Shrimp Welfare Project (first organization ever working on shrimp welfare), and Lead Exposure Elimination Project (precursory policy organization now working in 10 countries).Key benefits for CEFinding talented entrepreneurs to start high-impact projects (50% of the founders had been in contact with CE’s outreach efforts during EAG conferences).Finding hires for the CE team (20% of the team have been hired thanks to outreach efforts during EAG conferences), as well as fellows and interns.Making useful connections for funding (10% of fun...]]>
Amy Labenz https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:33 None full 4915
vPHMdeDn5jNzGv8gK_NL_EA_EA EA - Project for Awesome 2023: Vote for videos of EA charities! by EA ProjectForAwesome Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project for Awesome 2023: Vote for videos of EA charities!, published by EA ProjectForAwesome on February 16, 2023 on The Effective Altruism Forum.Project for Awesome (P4A) is a yearly charity video contest running this weekend in 2023. Voting will start Friday, February 17th, 12:00pm EST and is open until February 19th, 11:59am EST. The charities with the most votes, the total across all their videos, win money. In the last years, this was between $14,000 and $32,000 per charity.This is a good opportunity to raise money for EA charities with just a few clicks (~5 minutes). Please ask your friends and EA group members to vote for ALL the videos for EA charities! A sample text message and a voting instruction is below.Sample text message: Hey! Not sure if you know, but every year Hank and John Green organize the Project For Awesome, which raises money for charities. All you have to do is click “Vote” on a bunch of videos and you could potentially help thousands of dollars go to highly effective charities. Would you be willing to help out?If they reply yes: Great! Here are the videos of the charities we’re promoting. You can vote for all of the videos. Without watching videos, it will just take a few minutes. (insert the instructions or just the links)Voting instruction with links:1. Invite your friends to vote, too!2. Open one of the following links, then open one video first and do the CAPTCHA before opening the other videos of that charity in new tabs. Vote for ALL videos of that charity.3. Repeat step 2 for all charities listed below. Vote for ALL videos for each of these EA charities. You can see that you voted for a video by the grayed out "Voted" button. In the end, P4A will sum up all votes for all videos of one charity.In total there are several videos for our supported EA charities but it only takes a few clicks (if you do not want to watch the videos) and it's really worth it.Supported EA charities:Animal Advocacy Africa: Ethics: Society: Top Charities Fund: Food Institute: Humane League: Animal Initiative:Other EA(-related) charities:International Campaign to Abolish Nuclear Weapons (ICAN): Against Malaria Foundation: Air Task Force:: Means No Worldwide: Global Fund:Please also join our Facebook group, EA Project 4 Awesome 2023, and our Facebook event for this year’s voting!Thank you very much!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
EA ProjectForAwesome https://forum.effectivealtruism.org/posts/vPHMdeDn5jNzGv8gK/project-for-awesome-2023-vote-for-videos-of-ea-charities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project for Awesome 2023: Vote for videos of EA charities!, published by EA ProjectForAwesome on February 16, 2023 on The Effective Altruism Forum.Project for Awesome (P4A) is a yearly charity video contest running this weekend in 2023. Voting will start Friday, February 17th, 12:00pm EST and is open until February 19th, 11:59am EST. The charities with the most votes, the total across all their videos, win money. In the last years, this was between $14,000 and $32,000 per charity.This is a good opportunity to raise money for EA charities with just a few clicks (~5 minutes). Please ask your friends and EA group members to vote for ALL the videos for EA charities! A sample text message and a voting instruction is below.Sample text message: Hey! Not sure if you know, but every year Hank and John Green organize the Project For Awesome, which raises money for charities. All you have to do is click “Vote” on a bunch of videos and you could potentially help thousands of dollars go to highly effective charities. Would you be willing to help out?If they reply yes: Great! Here are the videos of the charities we’re promoting. You can vote for all of the videos. Without watching videos, it will just take a few minutes. (insert the instructions or just the links)Voting instruction with links:1. Invite your friends to vote, too!2. Open one of the following links, then open one video first and do the CAPTCHA before opening the other videos of that charity in new tabs. Vote for ALL videos of that charity.3. Repeat step 2 for all charities listed below. Vote for ALL videos for each of these EA charities. You can see that you voted for a video by the grayed out "Voted" button. In the end, P4A will sum up all votes for all videos of one charity.In total there are several videos for our supported EA charities but it only takes a few clicks (if you do not want to watch the videos) and it's really worth it.Supported EA charities:Animal Advocacy Africa: Ethics: Society: Top Charities Fund: Food Institute: Humane League: Animal Initiative:Other EA(-related) charities:International Campaign to Abolish Nuclear Weapons (ICAN): Against Malaria Foundation: Air Task Force:: Means No Worldwide: Global Fund:Please also join our Facebook group, EA Project 4 Awesome 2023, and our Facebook event for this year’s voting!Thank you very much!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 17 Feb 2023 00:09:16 +0000 EA - Project for Awesome 2023: Vote for videos of EA charities! by EA ProjectForAwesome Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project for Awesome 2023: Vote for videos of EA charities!, published by EA ProjectForAwesome on February 16, 2023 on The Effective Altruism Forum.Project for Awesome (P4A) is a yearly charity video contest running this weekend in 2023. Voting will start Friday, February 17th, 12:00pm EST and is open until February 19th, 11:59am EST. The charities with the most votes, the total across all their videos, win money. In the last years, this was between $14,000 and $32,000 per charity.This is a good opportunity to raise money for EA charities with just a few clicks (~5 minutes). Please ask your friends and EA group members to vote for ALL the videos for EA charities! A sample text message and a voting instruction is below.Sample text message: Hey! Not sure if you know, but every year Hank and John Green organize the Project For Awesome, which raises money for charities. All you have to do is click “Vote” on a bunch of videos and you could potentially help thousands of dollars go to highly effective charities. Would you be willing to help out?If they reply yes: Great! Here are the videos of the charities we’re promoting. You can vote for all of the videos. Without watching videos, it will just take a few minutes. (insert the instructions or just the links)Voting instruction with links:1. Invite your friends to vote, too!2. Open one of the following links, then open one video first and do the CAPTCHA before opening the other videos of that charity in new tabs. Vote for ALL videos of that charity.3. Repeat step 2 for all charities listed below. Vote for ALL videos for each of these EA charities. You can see that you voted for a video by the grayed out "Voted" button. In the end, P4A will sum up all votes for all videos of one charity.In total there are several videos for our supported EA charities but it only takes a few clicks (if you do not want to watch the videos) and it's really worth it.Supported EA charities:Animal Advocacy Africa: Ethics: Society: Top Charities Fund: Food Institute: Humane League: Animal Initiative:Other EA(-related) charities:International Campaign to Abolish Nuclear Weapons (ICAN): Against Malaria Foundation: Air Task Force:: Means No Worldwide: Global Fund:Please also join our Facebook group, EA Project 4 Awesome 2023, and our Facebook event for this year’s voting!Thank you very much!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project for Awesome 2023: Vote for videos of EA charities!, published by EA ProjectForAwesome on February 16, 2023 on The Effective Altruism Forum.Project for Awesome (P4A) is a yearly charity video contest running this weekend in 2023. Voting will start Friday, February 17th, 12:00pm EST and is open until February 19th, 11:59am EST. The charities with the most votes, the total across all their videos, win money. In the last years, this was between $14,000 and $32,000 per charity.This is a good opportunity to raise money for EA charities with just a few clicks (~5 minutes). Please ask your friends and EA group members to vote for ALL the videos for EA charities! A sample text message and a voting instruction is below.Sample text message: Hey! Not sure if you know, but every year Hank and John Green organize the Project For Awesome, which raises money for charities. All you have to do is click “Vote” on a bunch of videos and you could potentially help thousands of dollars go to highly effective charities. Would you be willing to help out?If they reply yes: Great! Here are the videos of the charities we’re promoting. You can vote for all of the videos. Without watching videos, it will just take a few minutes. (insert the instructions or just the links)Voting instruction with links:1. Invite your friends to vote, too!2. Open one of the following links, then open one video first and do the CAPTCHA before opening the other videos of that charity in new tabs. Vote for ALL videos of that charity.3. Repeat step 2 for all charities listed below. Vote for ALL videos for each of these EA charities. You can see that you voted for a video by the grayed out "Voted" button. In the end, P4A will sum up all votes for all videos of one charity.In total there are several videos for our supported EA charities but it only takes a few clicks (if you do not want to watch the videos) and it's really worth it.Supported EA charities:Animal Advocacy Africa: Ethics: Society: Top Charities Fund: Food Institute: Humane League: Animal Initiative:Other EA(-related) charities:International Campaign to Abolish Nuclear Weapons (ICAN): Against Malaria Foundation: Air Task Force:: Means No Worldwide: Global Fund:Please also join our Facebook group, EA Project 4 Awesome 2023, and our Facebook event for this year’s voting!Thank you very much!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
EA ProjectForAwesome https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:45 None full 4908
ugsgFkLGShEjMqfBA_NL_EA_EA EA - Advice for an alcoholic EA by anotherburneraccountsorry Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice for an alcoholic EA, published by anotherburneraccountsorry on February 16, 2023 on The Effective Altruism Forum.Hi guys, as my username suggests, I’m sorry to write this pseudonymously, but I don’t know how public I want to be about my problems yet. So, the short version is that I’m an alcoholic and I’m an Effective Altruist, and I don’t know exactly how much I should or shouldn’t involve EA in my recovery efforts. I am vaguely aware that EA has mental health resources for struggling EAs, and I am struggling. I also don’t know how many of them are relevant to substance abuse in particular. These are some of the considerations that I am conflicted about:Against involving EA more: Most of my problems are not directly related to EA, and I’m not sure if I should be using EA resources for personal health problems unless I have some strong idea of how my problems relate to my involvement in EA. Maybe more to the point, I have access to other mental health resources, I am currently seeing someone at my school about this, and it feels like a waste of resources to involve EA in my problems if I don’t need to. Additionally, there are many recent worries that EA is too insular, and this can lead to problems in how it handles personal issues. I share some of these worries, and although I don’t distrust EA’s mental health team, it seems like I should be cautious in over-involving EA in my personal life where it is unnecessary. If nothing else, it makes me more dependent on EA. Additionally as mentioned before, I just don’t know if EA’s mental health team deals with things like substance abuse so much as burn out.In favor: While my drinking is not deeply connected to my involvement in Effective Altruism, there are a number of things that have exacerbated my problem which are idiosyncratic to EA in a way that makes me uncomfortable talking to a normal therapist about it. I have still not mentioned anything EA related to my counselor so far despite our sessions thus far largely focusing on my “triggers” for drinking. Related to this, I am not a huge fan of my current counselor’s approach, there is a bunch of focus on things like what drives me to drink, when I buy more into a bias-based and chemical model of drinking, where mostly the issue with my “triggers” is that I am unusually susceptible to finding lame excuses for myself. She also keeps recommending a bunch of other mental health resources, some of which seem quite tangentially related to my main problem.I think that a more focused approach would be valuable, and think that the type of triage and evidence-based thinking common in EA makes it more likely to be a space where the counseling I get will be, well, effective. I also don’t want to speak too soon about resource problems, as there may be many services that aren’t resource intensive, like groups sessions for EAs with substance abuse problems.Does anyone have any advice? Are there people here who have gone through a situation like this before, and have they involved EA’s mental health resources in some way? If so what did they get out of it?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
anotherburneraccountsorry https://forum.effectivealtruism.org/posts/ugsgFkLGShEjMqfBA/advice-for-an-alcoholic-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice for an alcoholic EA, published by anotherburneraccountsorry on February 16, 2023 on The Effective Altruism Forum.Hi guys, as my username suggests, I’m sorry to write this pseudonymously, but I don’t know how public I want to be about my problems yet. So, the short version is that I’m an alcoholic and I’m an Effective Altruist, and I don’t know exactly how much I should or shouldn’t involve EA in my recovery efforts. I am vaguely aware that EA has mental health resources for struggling EAs, and I am struggling. I also don’t know how many of them are relevant to substance abuse in particular. These are some of the considerations that I am conflicted about:Against involving EA more: Most of my problems are not directly related to EA, and I’m not sure if I should be using EA resources for personal health problems unless I have some strong idea of how my problems relate to my involvement in EA. Maybe more to the point, I have access to other mental health resources, I am currently seeing someone at my school about this, and it feels like a waste of resources to involve EA in my problems if I don’t need to. Additionally, there are many recent worries that EA is too insular, and this can lead to problems in how it handles personal issues. I share some of these worries, and although I don’t distrust EA’s mental health team, it seems like I should be cautious in over-involving EA in my personal life where it is unnecessary. If nothing else, it makes me more dependent on EA. Additionally as mentioned before, I just don’t know if EA’s mental health team deals with things like substance abuse so much as burn out.In favor: While my drinking is not deeply connected to my involvement in Effective Altruism, there are a number of things that have exacerbated my problem which are idiosyncratic to EA in a way that makes me uncomfortable talking to a normal therapist about it. I have still not mentioned anything EA related to my counselor so far despite our sessions thus far largely focusing on my “triggers” for drinking. Related to this, I am not a huge fan of my current counselor’s approach, there is a bunch of focus on things like what drives me to drink, when I buy more into a bias-based and chemical model of drinking, where mostly the issue with my “triggers” is that I am unusually susceptible to finding lame excuses for myself. She also keeps recommending a bunch of other mental health resources, some of which seem quite tangentially related to my main problem.I think that a more focused approach would be valuable, and think that the type of triage and evidence-based thinking common in EA makes it more likely to be a space where the counseling I get will be, well, effective. I also don’t want to speak too soon about resource problems, as there may be many services that aren’t resource intensive, like groups sessions for EAs with substance abuse problems.Does anyone have any advice? Are there people here who have gone through a situation like this before, and have they involved EA’s mental health resources in some way? If so what did they get out of it?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 17 Feb 2023 00:04:03 +0000 EA - Advice for an alcoholic EA by anotherburneraccountsorry Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice for an alcoholic EA, published by anotherburneraccountsorry on February 16, 2023 on The Effective Altruism Forum.Hi guys, as my username suggests, I’m sorry to write this pseudonymously, but I don’t know how public I want to be about my problems yet. So, the short version is that I’m an alcoholic and I’m an Effective Altruist, and I don’t know exactly how much I should or shouldn’t involve EA in my recovery efforts. I am vaguely aware that EA has mental health resources for struggling EAs, and I am struggling. I also don’t know how many of them are relevant to substance abuse in particular. These are some of the considerations that I am conflicted about:Against involving EA more: Most of my problems are not directly related to EA, and I’m not sure if I should be using EA resources for personal health problems unless I have some strong idea of how my problems relate to my involvement in EA. Maybe more to the point, I have access to other mental health resources, I am currently seeing someone at my school about this, and it feels like a waste of resources to involve EA in my problems if I don’t need to. Additionally, there are many recent worries that EA is too insular, and this can lead to problems in how it handles personal issues. I share some of these worries, and although I don’t distrust EA’s mental health team, it seems like I should be cautious in over-involving EA in my personal life where it is unnecessary. If nothing else, it makes me more dependent on EA. Additionally as mentioned before, I just don’t know if EA’s mental health team deals with things like substance abuse so much as burn out.In favor: While my drinking is not deeply connected to my involvement in Effective Altruism, there are a number of things that have exacerbated my problem which are idiosyncratic to EA in a way that makes me uncomfortable talking to a normal therapist about it. I have still not mentioned anything EA related to my counselor so far despite our sessions thus far largely focusing on my “triggers” for drinking. Related to this, I am not a huge fan of my current counselor’s approach, there is a bunch of focus on things like what drives me to drink, when I buy more into a bias-based and chemical model of drinking, where mostly the issue with my “triggers” is that I am unusually susceptible to finding lame excuses for myself. She also keeps recommending a bunch of other mental health resources, some of which seem quite tangentially related to my main problem.I think that a more focused approach would be valuable, and think that the type of triage and evidence-based thinking common in EA makes it more likely to be a space where the counseling I get will be, well, effective. I also don’t want to speak too soon about resource problems, as there may be many services that aren’t resource intensive, like groups sessions for EAs with substance abuse problems.Does anyone have any advice? Are there people here who have gone through a situation like this before, and have they involved EA’s mental health resources in some way? If so what did they get out of it?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice for an alcoholic EA, published by anotherburneraccountsorry on February 16, 2023 on The Effective Altruism Forum.Hi guys, as my username suggests, I’m sorry to write this pseudonymously, but I don’t know how public I want to be about my problems yet. So, the short version is that I’m an alcoholic and I’m an Effective Altruist, and I don’t know exactly how much I should or shouldn’t involve EA in my recovery efforts. I am vaguely aware that EA has mental health resources for struggling EAs, and I am struggling. I also don’t know how many of them are relevant to substance abuse in particular. These are some of the considerations that I am conflicted about:Against involving EA more: Most of my problems are not directly related to EA, and I’m not sure if I should be using EA resources for personal health problems unless I have some strong idea of how my problems relate to my involvement in EA. Maybe more to the point, I have access to other mental health resources, I am currently seeing someone at my school about this, and it feels like a waste of resources to involve EA in my problems if I don’t need to. Additionally, there are many recent worries that EA is too insular, and this can lead to problems in how it handles personal issues. I share some of these worries, and although I don’t distrust EA’s mental health team, it seems like I should be cautious in over-involving EA in my personal life where it is unnecessary. If nothing else, it makes me more dependent on EA. Additionally as mentioned before, I just don’t know if EA’s mental health team deals with things like substance abuse so much as burn out.In favor: While my drinking is not deeply connected to my involvement in Effective Altruism, there are a number of things that have exacerbated my problem which are idiosyncratic to EA in a way that makes me uncomfortable talking to a normal therapist about it. I have still not mentioned anything EA related to my counselor so far despite our sessions thus far largely focusing on my “triggers” for drinking. Related to this, I am not a huge fan of my current counselor’s approach, there is a bunch of focus on things like what drives me to drink, when I buy more into a bias-based and chemical model of drinking, where mostly the issue with my “triggers” is that I am unusually susceptible to finding lame excuses for myself. She also keeps recommending a bunch of other mental health resources, some of which seem quite tangentially related to my main problem.I think that a more focused approach would be valuable, and think that the type of triage and evidence-based thinking common in EA makes it more likely to be a space where the counseling I get will be, well, effective. I also don’t want to speak too soon about resource problems, as there may be many services that aren’t resource intensive, like groups sessions for EAs with substance abuse problems.Does anyone have any advice? Are there people here who have gone through a situation like this before, and have they involved EA’s mental health resources in some way? If so what did they get out of it?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
anotherburneraccountsorry https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:36 None full 4910
K9GdCuiz5tanoiDKN_NL_EA_EA EA - Why should ethical anti-realists do ethics? by Joe Carlsmith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why should ethical anti-realists do ethics?, published by Joe Carlsmith on February 16, 2023 on The Effective Altruism Forum.(Cross-posted from my website. Podcast version here, or search "Joe Carlsmith Audio" on your podcast app.)"What was it then? What did it mean? Could things thrust their hands up and grip one? Could the blade cut; the fist grasp?"Virginia Woolf1. IntroductionEthical philosophy often tries to systematize. That is, it seeks general principles that will explain, unify, and revise our more particular intuitions. And sometimes, this can lead to strange and uncomfortable places.So why do it? If you believe in an objective ethical truth, you might talk about getting closer to that truth. But suppose that you don’t. Suppose you think that you’re “free to do whatever you want.” In that case, if “systematizing” starts getting tough and uncomfortable, why not just . stop? After all, you can always just do whatever’s most intuitive or common-sensical in a given case – and often, this is the choice the “ethics game” was trying so hard to validate, anyway. So why play?I think it’s a reasonable question. And I’ve found it showing up in my life in various ways. So I wrote a set of two essays explaining part of my current take. This is the first essay. Here I describe the question in more detail, give some examples of where it shows up, and describe my dissatisfaction with two places anti-realists often look for answers, namely:some sort of brute preference for your values/policy having various structural properties (consistency, coherence, etc), andavoiding money-pumps (i.e., sequences of actions that take you back to where you started, but with less money)In the second essay, I try to give a more positive account.Thanks to Ketan Ramakrishnan, Katja Grace, Nick Beckstead, and Jacob Trefethen for discussion.2. The problemThere’s some sort of project that ethical philosophy represents. What is it?2.1 Map-making with no territoryAccording to normative realists, it’s “figuring out the normative truth.” That is: there is an objective, normative reality “out there,” and we are as scientists, inquiring about its nature.Many normative anti-realists often adopt this posture as well. They want to talk, too, about the normative truth, and to rely on norms and assumptions familiar from the context of inquiry. But it’s a lot less clear what’s going on when they do.Perhaps, for example, they claim: “the normative truth this inquiry seeks is constituted by the very endpoint of this inquiry – e.g., reflective equilibrium, what I would think on reflection, or some such.” But what sort of inquiry is that? Not, one suspects, the normal kind. It sounds too . unconstrained. As though the inquiry could veer in any old direction (“maximize bricks!”), and thereby (assuming it persists in its course) make that direction the right one. In the absence of a territory – if the true map is just: whatever map we would draw, after spending ages thinking about what map to draw – why are we acting like ethics is a normal form of map-making? Why are we pretending to be scientists investigating a realm that doesn’t exist?2.2 Why curve-fit?My own best guess is that ethics – including the ethics that normative realists are doing, despite their self-conception – is best understood in a more active posture: namely, as an especially general form of deciding what to do. That is: there isn’t the one thing, figuring out what you should do, and then that other separate thing, deciding what to do. Rather, ethical thought is essentially practical. It’s the part of cognition that issues in action, rather than the part that “maps” a “territory.”But on this anti-realist conception of ethics, it can become unclear why the specific sort of thinking ethicists tend to engage in is worth doing. ...]]>
Joe Carlsmith https://forum.effectivealtruism.org/posts/K9GdCuiz5tanoiDKN/why-should-ethical-anti-realists-do-ethics Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why should ethical anti-realists do ethics?, published by Joe Carlsmith on February 16, 2023 on The Effective Altruism Forum.(Cross-posted from my website. Podcast version here, or search "Joe Carlsmith Audio" on your podcast app.)"What was it then? What did it mean? Could things thrust their hands up and grip one? Could the blade cut; the fist grasp?"Virginia Woolf1. IntroductionEthical philosophy often tries to systematize. That is, it seeks general principles that will explain, unify, and revise our more particular intuitions. And sometimes, this can lead to strange and uncomfortable places.So why do it? If you believe in an objective ethical truth, you might talk about getting closer to that truth. But suppose that you don’t. Suppose you think that you’re “free to do whatever you want.” In that case, if “systematizing” starts getting tough and uncomfortable, why not just . stop? After all, you can always just do whatever’s most intuitive or common-sensical in a given case – and often, this is the choice the “ethics game” was trying so hard to validate, anyway. So why play?I think it’s a reasonable question. And I’ve found it showing up in my life in various ways. So I wrote a set of two essays explaining part of my current take. This is the first essay. Here I describe the question in more detail, give some examples of where it shows up, and describe my dissatisfaction with two places anti-realists often look for answers, namely:some sort of brute preference for your values/policy having various structural properties (consistency, coherence, etc), andavoiding money-pumps (i.e., sequences of actions that take you back to where you started, but with less money)In the second essay, I try to give a more positive account.Thanks to Ketan Ramakrishnan, Katja Grace, Nick Beckstead, and Jacob Trefethen for discussion.2. The problemThere’s some sort of project that ethical philosophy represents. What is it?2.1 Map-making with no territoryAccording to normative realists, it’s “figuring out the normative truth.” That is: there is an objective, normative reality “out there,” and we are as scientists, inquiring about its nature.Many normative anti-realists often adopt this posture as well. They want to talk, too, about the normative truth, and to rely on norms and assumptions familiar from the context of inquiry. But it’s a lot less clear what’s going on when they do.Perhaps, for example, they claim: “the normative truth this inquiry seeks is constituted by the very endpoint of this inquiry – e.g., reflective equilibrium, what I would think on reflection, or some such.” But what sort of inquiry is that? Not, one suspects, the normal kind. It sounds too . unconstrained. As though the inquiry could veer in any old direction (“maximize bricks!”), and thereby (assuming it persists in its course) make that direction the right one. In the absence of a territory – if the true map is just: whatever map we would draw, after spending ages thinking about what map to draw – why are we acting like ethics is a normal form of map-making? Why are we pretending to be scientists investigating a realm that doesn’t exist?2.2 Why curve-fit?My own best guess is that ethics – including the ethics that normative realists are doing, despite their self-conception – is best understood in a more active posture: namely, as an especially general form of deciding what to do. That is: there isn’t the one thing, figuring out what you should do, and then that other separate thing, deciding what to do. Rather, ethical thought is essentially practical. It’s the part of cognition that issues in action, rather than the part that “maps” a “territory.”But on this anti-realist conception of ethics, it can become unclear why the specific sort of thinking ethicists tend to engage in is worth doing. ...]]>
Thu, 16 Feb 2023 23:11:36 +0000 EA - Why should ethical anti-realists do ethics? by Joe Carlsmith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why should ethical anti-realists do ethics?, published by Joe Carlsmith on February 16, 2023 on The Effective Altruism Forum.(Cross-posted from my website. Podcast version here, or search "Joe Carlsmith Audio" on your podcast app.)"What was it then? What did it mean? Could things thrust their hands up and grip one? Could the blade cut; the fist grasp?"Virginia Woolf1. IntroductionEthical philosophy often tries to systematize. That is, it seeks general principles that will explain, unify, and revise our more particular intuitions. And sometimes, this can lead to strange and uncomfortable places.So why do it? If you believe in an objective ethical truth, you might talk about getting closer to that truth. But suppose that you don’t. Suppose you think that you’re “free to do whatever you want.” In that case, if “systematizing” starts getting tough and uncomfortable, why not just . stop? After all, you can always just do whatever’s most intuitive or common-sensical in a given case – and often, this is the choice the “ethics game” was trying so hard to validate, anyway. So why play?I think it’s a reasonable question. And I’ve found it showing up in my life in various ways. So I wrote a set of two essays explaining part of my current take. This is the first essay. Here I describe the question in more detail, give some examples of where it shows up, and describe my dissatisfaction with two places anti-realists often look for answers, namely:some sort of brute preference for your values/policy having various structural properties (consistency, coherence, etc), andavoiding money-pumps (i.e., sequences of actions that take you back to where you started, but with less money)In the second essay, I try to give a more positive account.Thanks to Ketan Ramakrishnan, Katja Grace, Nick Beckstead, and Jacob Trefethen for discussion.2. The problemThere’s some sort of project that ethical philosophy represents. What is it?2.1 Map-making with no territoryAccording to normative realists, it’s “figuring out the normative truth.” That is: there is an objective, normative reality “out there,” and we are as scientists, inquiring about its nature.Many normative anti-realists often adopt this posture as well. They want to talk, too, about the normative truth, and to rely on norms and assumptions familiar from the context of inquiry. But it’s a lot less clear what’s going on when they do.Perhaps, for example, they claim: “the normative truth this inquiry seeks is constituted by the very endpoint of this inquiry – e.g., reflective equilibrium, what I would think on reflection, or some such.” But what sort of inquiry is that? Not, one suspects, the normal kind. It sounds too . unconstrained. As though the inquiry could veer in any old direction (“maximize bricks!”), and thereby (assuming it persists in its course) make that direction the right one. In the absence of a territory – if the true map is just: whatever map we would draw, after spending ages thinking about what map to draw – why are we acting like ethics is a normal form of map-making? Why are we pretending to be scientists investigating a realm that doesn’t exist?2.2 Why curve-fit?My own best guess is that ethics – including the ethics that normative realists are doing, despite their self-conception – is best understood in a more active posture: namely, as an especially general form of deciding what to do. That is: there isn’t the one thing, figuring out what you should do, and then that other separate thing, deciding what to do. Rather, ethical thought is essentially practical. It’s the part of cognition that issues in action, rather than the part that “maps” a “territory.”But on this anti-realist conception of ethics, it can become unclear why the specific sort of thinking ethicists tend to engage in is worth doing. ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why should ethical anti-realists do ethics?, published by Joe Carlsmith on February 16, 2023 on The Effective Altruism Forum.(Cross-posted from my website. Podcast version here, or search "Joe Carlsmith Audio" on your podcast app.)"What was it then? What did it mean? Could things thrust their hands up and grip one? Could the blade cut; the fist grasp?"Virginia Woolf1. IntroductionEthical philosophy often tries to systematize. That is, it seeks general principles that will explain, unify, and revise our more particular intuitions. And sometimes, this can lead to strange and uncomfortable places.So why do it? If you believe in an objective ethical truth, you might talk about getting closer to that truth. But suppose that you don’t. Suppose you think that you’re “free to do whatever you want.” In that case, if “systematizing” starts getting tough and uncomfortable, why not just . stop? After all, you can always just do whatever’s most intuitive or common-sensical in a given case – and often, this is the choice the “ethics game” was trying so hard to validate, anyway. So why play?I think it’s a reasonable question. And I’ve found it showing up in my life in various ways. So I wrote a set of two essays explaining part of my current take. This is the first essay. Here I describe the question in more detail, give some examples of where it shows up, and describe my dissatisfaction with two places anti-realists often look for answers, namely:some sort of brute preference for your values/policy having various structural properties (consistency, coherence, etc), andavoiding money-pumps (i.e., sequences of actions that take you back to where you started, but with less money)In the second essay, I try to give a more positive account.Thanks to Ketan Ramakrishnan, Katja Grace, Nick Beckstead, and Jacob Trefethen for discussion.2. The problemThere’s some sort of project that ethical philosophy represents. What is it?2.1 Map-making with no territoryAccording to normative realists, it’s “figuring out the normative truth.” That is: there is an objective, normative reality “out there,” and we are as scientists, inquiring about its nature.Many normative anti-realists often adopt this posture as well. They want to talk, too, about the normative truth, and to rely on norms and assumptions familiar from the context of inquiry. But it’s a lot less clear what’s going on when they do.Perhaps, for example, they claim: “the normative truth this inquiry seeks is constituted by the very endpoint of this inquiry – e.g., reflective equilibrium, what I would think on reflection, or some such.” But what sort of inquiry is that? Not, one suspects, the normal kind. It sounds too . unconstrained. As though the inquiry could veer in any old direction (“maximize bricks!”), and thereby (assuming it persists in its course) make that direction the right one. In the absence of a territory – if the true map is just: whatever map we would draw, after spending ages thinking about what map to draw – why are we acting like ethics is a normal form of map-making? Why are we pretending to be scientists investigating a realm that doesn’t exist?2.2 Why curve-fit?My own best guess is that ethics – including the ethics that normative realists are doing, despite their self-conception – is best understood in a more active posture: namely, as an especially general form of deciding what to do. That is: there isn’t the one thing, figuring out what you should do, and then that other separate thing, deciding what to do. Rather, ethical thought is essentially practical. It’s the part of cognition that issues in action, rather than the part that “maps” a “territory.”But on this anti-realist conception of ethics, it can become unclear why the specific sort of thinking ethicists tend to engage in is worth doing. ...]]>
Joe Carlsmith https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 58:31 None full 4909
99tnp7Jpts7Gssq7J_NL_EA_EA EA - Transitioning to an advisory role by MaxDalton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transitioning to an advisory role, published by MaxDalton on February 16, 2023 on The Effective Altruism Forum.I’m writing to announce that I’ve resigned from my role as CEA’s Executive Director, and will be transitioning to an advisory role.Basically, my mental health has been bad for the last 3 months. Starting in November, my role changed from one that I love - building a team and a product, building close working relationships with people, executing - to one that I find really stressful: dealing with media attention, stakeholders, and lawyers at long unpredictable hours and wrestling with strategic uncertainty. I think I’m also not so good at the latter sort of work, relative to the former.I've been getting lots of advice, therapy, and support, but recently I've been close to a crisis – struggling to get out of bed, feeling terror at the idea of sitting at my desk. I really wish that I were strong enough to keep doing this job, especially right now – I care so much about CEA’s work to help more people tackle the important problems we face, and I care deeply about the team we’ve built.But I’m just not able to keep going in my current role, and I don't think that pretending to be stronger or struggling on will be good for CEA or for me, because I’m not able to perform as well as I would like and there’s a risk that I’ll burn out with no handover. So I think it’s best to move into an advisory role and allow someone else to direct CEA.The boards of Effective Ventures UK and Effective Ventures US, which govern CEA, will appoint an interim Executive Director soon. Once they’re appointed I plan to continue advising and working with them and the CEA team to ensure a smooth transition and help find a new permanent ED. I hope that moving from an executive to advisory role will help alleviate some of the pressure and allow me to contribute more productively to our shared work going forward.For a while now I've been trying to build up the leadership team as the body running CEA, with me as one member. I think that the leadership team is very strong: people disagree with each other directly but with care, have complementary strengths, and show strong leadership for their own programs. I think that they will be able to do a great job leading CEA together with the interim ED and the new permanent ED.Of course, FTX and subsequent events have highlighted some important issues in EA. I’ve been working with the team to reflect on how this might impact our work and necessitate changes, and I hope that they’ll be able to share more on these conversations and plans in the future. Although I’m very sad not to be able to see through that work in my current role with CEA, I think that the work we’ve done so far will set the new leadership team up well. I also plan to continue to reflect, will discuss my thinking with new leadership, and may publish some of my personal reflections.Despite the setbacks of these last few months, I'm very proud of what we've achieved together over the last four years. Compared to 2019, the number of new connections we’re making at events is 5x higher, and people are spending 10x time engaging with the Forum (which also has a lot more interesting content). Overall, I think that we’ve helped hundreds of people to reflect on how they can best contribute to making the world a better place, and begin to work on these critical problems.I’m also incredibly grateful to have been a part of this team: CEA staff are incredibly talented, caring, and dedicated. I’ve loved to be a part of a culture where staff are valued and empowered to do things.I look forward to seeing the impact which they continue to have over the coming months and years under new leadership.This has been true for many people, especially EV board members and some staff who have jumped in t...]]>
MaxDalton https://forum.effectivealtruism.org/posts/99tnp7Jpts7Gssq7J/transitioning-to-an-advisory-role Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transitioning to an advisory role, published by MaxDalton on February 16, 2023 on The Effective Altruism Forum.I’m writing to announce that I’ve resigned from my role as CEA’s Executive Director, and will be transitioning to an advisory role.Basically, my mental health has been bad for the last 3 months. Starting in November, my role changed from one that I love - building a team and a product, building close working relationships with people, executing - to one that I find really stressful: dealing with media attention, stakeholders, and lawyers at long unpredictable hours and wrestling with strategic uncertainty. I think I’m also not so good at the latter sort of work, relative to the former.I've been getting lots of advice, therapy, and support, but recently I've been close to a crisis – struggling to get out of bed, feeling terror at the idea of sitting at my desk. I really wish that I were strong enough to keep doing this job, especially right now – I care so much about CEA’s work to help more people tackle the important problems we face, and I care deeply about the team we’ve built.But I’m just not able to keep going in my current role, and I don't think that pretending to be stronger or struggling on will be good for CEA or for me, because I’m not able to perform as well as I would like and there’s a risk that I’ll burn out with no handover. So I think it’s best to move into an advisory role and allow someone else to direct CEA.The boards of Effective Ventures UK and Effective Ventures US, which govern CEA, will appoint an interim Executive Director soon. Once they’re appointed I plan to continue advising and working with them and the CEA team to ensure a smooth transition and help find a new permanent ED. I hope that moving from an executive to advisory role will help alleviate some of the pressure and allow me to contribute more productively to our shared work going forward.For a while now I've been trying to build up the leadership team as the body running CEA, with me as one member. I think that the leadership team is very strong: people disagree with each other directly but with care, have complementary strengths, and show strong leadership for their own programs. I think that they will be able to do a great job leading CEA together with the interim ED and the new permanent ED.Of course, FTX and subsequent events have highlighted some important issues in EA. I’ve been working with the team to reflect on how this might impact our work and necessitate changes, and I hope that they’ll be able to share more on these conversations and plans in the future. Although I’m very sad not to be able to see through that work in my current role with CEA, I think that the work we’ve done so far will set the new leadership team up well. I also plan to continue to reflect, will discuss my thinking with new leadership, and may publish some of my personal reflections.Despite the setbacks of these last few months, I'm very proud of what we've achieved together over the last four years. Compared to 2019, the number of new connections we’re making at events is 5x higher, and people are spending 10x time engaging with the Forum (which also has a lot more interesting content). Overall, I think that we’ve helped hundreds of people to reflect on how they can best contribute to making the world a better place, and begin to work on these critical problems.I’m also incredibly grateful to have been a part of this team: CEA staff are incredibly talented, caring, and dedicated. I’ve loved to be a part of a culture where staff are valued and empowered to do things.I look forward to seeing the impact which they continue to have over the coming months and years under new leadership.This has been true for many people, especially EV board members and some staff who have jumped in t...]]>
Thu, 16 Feb 2023 18:28:39 +0000 EA - Transitioning to an advisory role by MaxDalton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transitioning to an advisory role, published by MaxDalton on February 16, 2023 on The Effective Altruism Forum.I’m writing to announce that I’ve resigned from my role as CEA’s Executive Director, and will be transitioning to an advisory role.Basically, my mental health has been bad for the last 3 months. Starting in November, my role changed from one that I love - building a team and a product, building close working relationships with people, executing - to one that I find really stressful: dealing with media attention, stakeholders, and lawyers at long unpredictable hours and wrestling with strategic uncertainty. I think I’m also not so good at the latter sort of work, relative to the former.I've been getting lots of advice, therapy, and support, but recently I've been close to a crisis – struggling to get out of bed, feeling terror at the idea of sitting at my desk. I really wish that I were strong enough to keep doing this job, especially right now – I care so much about CEA’s work to help more people tackle the important problems we face, and I care deeply about the team we’ve built.But I’m just not able to keep going in my current role, and I don't think that pretending to be stronger or struggling on will be good for CEA or for me, because I’m not able to perform as well as I would like and there’s a risk that I’ll burn out with no handover. So I think it’s best to move into an advisory role and allow someone else to direct CEA.The boards of Effective Ventures UK and Effective Ventures US, which govern CEA, will appoint an interim Executive Director soon. Once they’re appointed I plan to continue advising and working with them and the CEA team to ensure a smooth transition and help find a new permanent ED. I hope that moving from an executive to advisory role will help alleviate some of the pressure and allow me to contribute more productively to our shared work going forward.For a while now I've been trying to build up the leadership team as the body running CEA, with me as one member. I think that the leadership team is very strong: people disagree with each other directly but with care, have complementary strengths, and show strong leadership for their own programs. I think that they will be able to do a great job leading CEA together with the interim ED and the new permanent ED.Of course, FTX and subsequent events have highlighted some important issues in EA. I’ve been working with the team to reflect on how this might impact our work and necessitate changes, and I hope that they’ll be able to share more on these conversations and plans in the future. Although I’m very sad not to be able to see through that work in my current role with CEA, I think that the work we’ve done so far will set the new leadership team up well. I also plan to continue to reflect, will discuss my thinking with new leadership, and may publish some of my personal reflections.Despite the setbacks of these last few months, I'm very proud of what we've achieved together over the last four years. Compared to 2019, the number of new connections we’re making at events is 5x higher, and people are spending 10x time engaging with the Forum (which also has a lot more interesting content). Overall, I think that we’ve helped hundreds of people to reflect on how they can best contribute to making the world a better place, and begin to work on these critical problems.I’m also incredibly grateful to have been a part of this team: CEA staff are incredibly talented, caring, and dedicated. I’ve loved to be a part of a culture where staff are valued and empowered to do things.I look forward to seeing the impact which they continue to have over the coming months and years under new leadership.This has been true for many people, especially EV board members and some staff who have jumped in t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transitioning to an advisory role, published by MaxDalton on February 16, 2023 on The Effective Altruism Forum.I’m writing to announce that I’ve resigned from my role as CEA’s Executive Director, and will be transitioning to an advisory role.Basically, my mental health has been bad for the last 3 months. Starting in November, my role changed from one that I love - building a team and a product, building close working relationships with people, executing - to one that I find really stressful: dealing with media attention, stakeholders, and lawyers at long unpredictable hours and wrestling with strategic uncertainty. I think I’m also not so good at the latter sort of work, relative to the former.I've been getting lots of advice, therapy, and support, but recently I've been close to a crisis – struggling to get out of bed, feeling terror at the idea of sitting at my desk. I really wish that I were strong enough to keep doing this job, especially right now – I care so much about CEA’s work to help more people tackle the important problems we face, and I care deeply about the team we’ve built.But I’m just not able to keep going in my current role, and I don't think that pretending to be stronger or struggling on will be good for CEA or for me, because I’m not able to perform as well as I would like and there’s a risk that I’ll burn out with no handover. So I think it’s best to move into an advisory role and allow someone else to direct CEA.The boards of Effective Ventures UK and Effective Ventures US, which govern CEA, will appoint an interim Executive Director soon. Once they’re appointed I plan to continue advising and working with them and the CEA team to ensure a smooth transition and help find a new permanent ED. I hope that moving from an executive to advisory role will help alleviate some of the pressure and allow me to contribute more productively to our shared work going forward.For a while now I've been trying to build up the leadership team as the body running CEA, with me as one member. I think that the leadership team is very strong: people disagree with each other directly but with care, have complementary strengths, and show strong leadership for their own programs. I think that they will be able to do a great job leading CEA together with the interim ED and the new permanent ED.Of course, FTX and subsequent events have highlighted some important issues in EA. I’ve been working with the team to reflect on how this might impact our work and necessitate changes, and I hope that they’ll be able to share more on these conversations and plans in the future. Although I’m very sad not to be able to see through that work in my current role with CEA, I think that the work we’ve done so far will set the new leadership team up well. I also plan to continue to reflect, will discuss my thinking with new leadership, and may publish some of my personal reflections.Despite the setbacks of these last few months, I'm very proud of what we've achieved together over the last four years. Compared to 2019, the number of new connections we’re making at events is 5x higher, and people are spending 10x time engaging with the Forum (which also has a lot more interesting content). Overall, I think that we’ve helped hundreds of people to reflect on how they can best contribute to making the world a better place, and begin to work on these critical problems.I’m also incredibly grateful to have been a part of this team: CEA staff are incredibly talented, caring, and dedicated. I’ve loved to be a part of a culture where staff are valued and empowered to do things.I look forward to seeing the impact which they continue to have over the coming months and years under new leadership.This has been true for many people, especially EV board members and some staff who have jumped in t...]]>
MaxDalton https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:42 None full 4900
RuKEwLFbmBxSRtcDP_NL_EA_EA EA - Qualities that alignment mentors value in junior researchers by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Qualities that alignment mentors value in junior researchers, published by Akash on February 14, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/RuKEwLFbmBxSRtcDP/qualities-that-alignment-mentors-value-in-junior-researchers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Qualities that alignment mentors value in junior researchers, published by Akash on February 14, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 16 Feb 2023 17:07:06 +0000 EA - Qualities that alignment mentors value in junior researchers by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Qualities that alignment mentors value in junior researchers, published by Akash on February 14, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Qualities that alignment mentors value in junior researchers, published by Akash on February 14, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:27 None full 4903
CFeRnEpJHvBerJ39h_NL_EA_EA EA - Please don't throw your mind away by TsviBT Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please don't throw your mind away, published by TsviBT on February 15, 2023 on The Effective Altruism Forum.Dialogue[Warning: the following dialogue contains an incidental spoiler for "Music in Human Evolution" by Kevin Simler. That post is short, good, and worth reading without spoilers, and this post will still be here if you come back later. It's also possible to get the point of this post by skipping the dialogue and reading the other sections.]Pretty often, talking to someone who's arriving to the existential risk / AGI risk / longtermism cluster, I'll have a conversation like the following.Tsvi: "So, what's been catching your eye about this stuff?"Arrival: "I think I want to work on machine learning, and see if I can contribute to alignment that way."T: "What's something that got your interest in ML?"A: "It seems like people think that deep learning might be on the final ramp up to AGI, so I should probably know how that stuff works, and I think I have a good chance of learning ML at least well enough to maybe contribute to a research project."T: "That makes sense. I guess I'm fairly skeptical of AGI coming very soon, compared to people around here, or at least I'm skeptical that most people have good reasons for believing that. Also I think it's pretty valuable to not cut yourself off from thinking about the whole alignment problem, whether or not you expect to work on an already-existing project. But what you're saying makes sense too. I'm curious though if there's something you were thinking about recently that just strikes you as fun, or like it's in the back of your mind a bit, even if you're not trying to think about it for some purpose."A: "Hm... Oh, I saw this video of an octopus doing a really weird swirly thing. Here, let me pull it up on my phone."T: "Weird! Maybe it's cleaning itself, like a cat licking its fur? But it doesn't look like it's actually contacting itself that much."A: "I thought it might be a signaling display, like a mating dance, or for scaring off predators by looking like a big coordinated army. Like how humans might have scared off predators and scavenging competitors in the ancestral environment by singing and dancing in unison."T: "A plausible hypothesis. Though it wouldn't be getting the benefit of being big, like a spread out group of humans."A: "Yeah. Anyway yeah I'm really into animal behavior. Haven't been thinking about that stuff recently though because I've been trying to start learning ML."T: "Ah, hm, uh... I'm probably maybe imagining things, but something about that is a bit worrying to me. It could make sense, consequentialist backchaining can be good, and diving in deep can be good, and while a lot of that research doesn't seem to me like a very hopeworthy approach, some well-informed people do. And I'm not saying not to do that stuff. But there's something that worries me about having your little curiosities squashed by the backchained goals. Like, I think there's something really good about just doing what's actually interesting to you, and I think it would be bad if you were to avoid putting a lot of energy into stuff that's caught your attention in a deep way, because that would tend to sacrifice a lot of important stuff that happens when you're exploring something out of a natural urge to investigate."A: "That took a bit of a turn. I'm not sure I know what you mean. You're saying I should just follow my passion, and not try to work towards some specific goal?"T: "No, that's not it. More like, when I see someone coming to this social cluster concerned with existential risk and so on, I worry that they're going to get their mind eaten. Or, I worry that they'll think they're being told to throw their mind away. I'm trying to say, don't throw your mind away."A: "I... don't think I'm being told to t...]]>
TsviBT https://forum.effectivealtruism.org/posts/CFeRnEpJHvBerJ39h/please-don-t-throw-your-mind-away-2 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please don't throw your mind away, published by TsviBT on February 15, 2023 on The Effective Altruism Forum.Dialogue[Warning: the following dialogue contains an incidental spoiler for "Music in Human Evolution" by Kevin Simler. That post is short, good, and worth reading without spoilers, and this post will still be here if you come back later. It's also possible to get the point of this post by skipping the dialogue and reading the other sections.]Pretty often, talking to someone who's arriving to the existential risk / AGI risk / longtermism cluster, I'll have a conversation like the following.Tsvi: "So, what's been catching your eye about this stuff?"Arrival: "I think I want to work on machine learning, and see if I can contribute to alignment that way."T: "What's something that got your interest in ML?"A: "It seems like people think that deep learning might be on the final ramp up to AGI, so I should probably know how that stuff works, and I think I have a good chance of learning ML at least well enough to maybe contribute to a research project."T: "That makes sense. I guess I'm fairly skeptical of AGI coming very soon, compared to people around here, or at least I'm skeptical that most people have good reasons for believing that. Also I think it's pretty valuable to not cut yourself off from thinking about the whole alignment problem, whether or not you expect to work on an already-existing project. But what you're saying makes sense too. I'm curious though if there's something you were thinking about recently that just strikes you as fun, or like it's in the back of your mind a bit, even if you're not trying to think about it for some purpose."A: "Hm... Oh, I saw this video of an octopus doing a really weird swirly thing. Here, let me pull it up on my phone."T: "Weird! Maybe it's cleaning itself, like a cat licking its fur? But it doesn't look like it's actually contacting itself that much."A: "I thought it might be a signaling display, like a mating dance, or for scaring off predators by looking like a big coordinated army. Like how humans might have scared off predators and scavenging competitors in the ancestral environment by singing and dancing in unison."T: "A plausible hypothesis. Though it wouldn't be getting the benefit of being big, like a spread out group of humans."A: "Yeah. Anyway yeah I'm really into animal behavior. Haven't been thinking about that stuff recently though because I've been trying to start learning ML."T: "Ah, hm, uh... I'm probably maybe imagining things, but something about that is a bit worrying to me. It could make sense, consequentialist backchaining can be good, and diving in deep can be good, and while a lot of that research doesn't seem to me like a very hopeworthy approach, some well-informed people do. And I'm not saying not to do that stuff. But there's something that worries me about having your little curiosities squashed by the backchained goals. Like, I think there's something really good about just doing what's actually interesting to you, and I think it would be bad if you were to avoid putting a lot of energy into stuff that's caught your attention in a deep way, because that would tend to sacrifice a lot of important stuff that happens when you're exploring something out of a natural urge to investigate."A: "That took a bit of a turn. I'm not sure I know what you mean. You're saying I should just follow my passion, and not try to work towards some specific goal?"T: "No, that's not it. More like, when I see someone coming to this social cluster concerned with existential risk and so on, I worry that they're going to get their mind eaten. Or, I worry that they'll think they're being told to throw their mind away. I'm trying to say, don't throw your mind away."A: "I... don't think I'm being told to t...]]>
Thu, 16 Feb 2023 15:45:46 +0000 EA - Please don't throw your mind away by TsviBT Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please don't throw your mind away, published by TsviBT on February 15, 2023 on The Effective Altruism Forum.Dialogue[Warning: the following dialogue contains an incidental spoiler for "Music in Human Evolution" by Kevin Simler. That post is short, good, and worth reading without spoilers, and this post will still be here if you come back later. It's also possible to get the point of this post by skipping the dialogue and reading the other sections.]Pretty often, talking to someone who's arriving to the existential risk / AGI risk / longtermism cluster, I'll have a conversation like the following.Tsvi: "So, what's been catching your eye about this stuff?"Arrival: "I think I want to work on machine learning, and see if I can contribute to alignment that way."T: "What's something that got your interest in ML?"A: "It seems like people think that deep learning might be on the final ramp up to AGI, so I should probably know how that stuff works, and I think I have a good chance of learning ML at least well enough to maybe contribute to a research project."T: "That makes sense. I guess I'm fairly skeptical of AGI coming very soon, compared to people around here, or at least I'm skeptical that most people have good reasons for believing that. Also I think it's pretty valuable to not cut yourself off from thinking about the whole alignment problem, whether or not you expect to work on an already-existing project. But what you're saying makes sense too. I'm curious though if there's something you were thinking about recently that just strikes you as fun, or like it's in the back of your mind a bit, even if you're not trying to think about it for some purpose."A: "Hm... Oh, I saw this video of an octopus doing a really weird swirly thing. Here, let me pull it up on my phone."T: "Weird! Maybe it's cleaning itself, like a cat licking its fur? But it doesn't look like it's actually contacting itself that much."A: "I thought it might be a signaling display, like a mating dance, or for scaring off predators by looking like a big coordinated army. Like how humans might have scared off predators and scavenging competitors in the ancestral environment by singing and dancing in unison."T: "A plausible hypothesis. Though it wouldn't be getting the benefit of being big, like a spread out group of humans."A: "Yeah. Anyway yeah I'm really into animal behavior. Haven't been thinking about that stuff recently though because I've been trying to start learning ML."T: "Ah, hm, uh... I'm probably maybe imagining things, but something about that is a bit worrying to me. It could make sense, consequentialist backchaining can be good, and diving in deep can be good, and while a lot of that research doesn't seem to me like a very hopeworthy approach, some well-informed people do. And I'm not saying not to do that stuff. But there's something that worries me about having your little curiosities squashed by the backchained goals. Like, I think there's something really good about just doing what's actually interesting to you, and I think it would be bad if you were to avoid putting a lot of energy into stuff that's caught your attention in a deep way, because that would tend to sacrifice a lot of important stuff that happens when you're exploring something out of a natural urge to investigate."A: "That took a bit of a turn. I'm not sure I know what you mean. You're saying I should just follow my passion, and not try to work towards some specific goal?"T: "No, that's not it. More like, when I see someone coming to this social cluster concerned with existential risk and so on, I worry that they're going to get their mind eaten. Or, I worry that they'll think they're being told to throw their mind away. I'm trying to say, don't throw your mind away."A: "I... don't think I'm being told to t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please don't throw your mind away, published by TsviBT on February 15, 2023 on The Effective Altruism Forum.Dialogue[Warning: the following dialogue contains an incidental spoiler for "Music in Human Evolution" by Kevin Simler. That post is short, good, and worth reading without spoilers, and this post will still be here if you come back later. It's also possible to get the point of this post by skipping the dialogue and reading the other sections.]Pretty often, talking to someone who's arriving to the existential risk / AGI risk / longtermism cluster, I'll have a conversation like the following.Tsvi: "So, what's been catching your eye about this stuff?"Arrival: "I think I want to work on machine learning, and see if I can contribute to alignment that way."T: "What's something that got your interest in ML?"A: "It seems like people think that deep learning might be on the final ramp up to AGI, so I should probably know how that stuff works, and I think I have a good chance of learning ML at least well enough to maybe contribute to a research project."T: "That makes sense. I guess I'm fairly skeptical of AGI coming very soon, compared to people around here, or at least I'm skeptical that most people have good reasons for believing that. Also I think it's pretty valuable to not cut yourself off from thinking about the whole alignment problem, whether or not you expect to work on an already-existing project. But what you're saying makes sense too. I'm curious though if there's something you were thinking about recently that just strikes you as fun, or like it's in the back of your mind a bit, even if you're not trying to think about it for some purpose."A: "Hm... Oh, I saw this video of an octopus doing a really weird swirly thing. Here, let me pull it up on my phone."T: "Weird! Maybe it's cleaning itself, like a cat licking its fur? But it doesn't look like it's actually contacting itself that much."A: "I thought it might be a signaling display, like a mating dance, or for scaring off predators by looking like a big coordinated army. Like how humans might have scared off predators and scavenging competitors in the ancestral environment by singing and dancing in unison."T: "A plausible hypothesis. Though it wouldn't be getting the benefit of being big, like a spread out group of humans."A: "Yeah. Anyway yeah I'm really into animal behavior. Haven't been thinking about that stuff recently though because I've been trying to start learning ML."T: "Ah, hm, uh... I'm probably maybe imagining things, but something about that is a bit worrying to me. It could make sense, consequentialist backchaining can be good, and diving in deep can be good, and while a lot of that research doesn't seem to me like a very hopeworthy approach, some well-informed people do. And I'm not saying not to do that stuff. But there's something that worries me about having your little curiosities squashed by the backchained goals. Like, I think there's something really good about just doing what's actually interesting to you, and I think it would be bad if you were to avoid putting a lot of energy into stuff that's caught your attention in a deep way, because that would tend to sacrifice a lot of important stuff that happens when you're exploring something out of a natural urge to investigate."A: "That took a bit of a turn. I'm not sure I know what you mean. You're saying I should just follow my passion, and not try to work towards some specific goal?"T: "No, that's not it. More like, when I see someone coming to this social cluster concerned with existential risk and so on, I worry that they're going to get their mind eaten. Or, I worry that they'll think they're being told to throw their mind away. I'm trying to say, don't throw your mind away."A: "I... don't think I'm being told to t...]]>
TsviBT https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 27:21 None full 4901
pYPag2wpB7LF9BSBj_NL_EA_EA EA - Anyone who likes the idea of EA and meeting EAs but doesn't want to discuss EA concepts IRL? by antisocial-throwaway Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anyone who likes the idea of EA & meeting EAs but doesn't want to discuss EA concepts IRL?, published by antisocial-throwaway on February 15, 2023 on The Effective Altruism Forum.Hi all, using a throwaway as this feels a bit anti-social to post!Basically, I'm aligned with & live via the basic concepts of EA (use your career for good + donate some of your earnings to effective causes), but that's where my interest really kind of ends.I'm not interested in using my free time to read into EA further, I don't feel motivated to learn more about all the concepts that people use/ discuss, etc (utilitarianism, expected value, etc etc). I really like having non-EA friends and don't get any enjoyment from having philosophical discussions with people.I really like the core ethos of EA, but when I've gone to in-person meetups (including Prague Fall Season) I've felt like a fraud because I'm not at all versed in the language, and actually have no interest in discussion all the forum talking points. I just want to meet cool people who care about doing good!Of course a clear rebuttal here would be "ok then dude just talk to people about other stuff", but I've often felt at these events like people are there to discuss this kind of stuff, and to talk about more normal/ "mundane" stuff would make people think I'm wasting their time.So I guess my question is like - is there anyone else out there who feels this way? Any tips? I'd really like to make friends through the EA community but in the same breath I only want to be involved in a straightforward way (career + key ideas), rather than scouring the forums & LessWrong and hitting all the squares on the EA bingo card. Also, is it kind of in my head that you need to be a hardcore EA who has strong opinions on ethics & philosophy & etc, or is that the general mood?(I also appreciate that the people who end up reading this will be more "hardcore EA" types who check the forum regularly...)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
antisocial-throwaway https://forum.effectivealtruism.org/posts/pYPag2wpB7LF9BSBj/anyone-who-likes-the-idea-of-ea-and-meeting-eas-but-doesn-t Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anyone who likes the idea of EA & meeting EAs but doesn't want to discuss EA concepts IRL?, published by antisocial-throwaway on February 15, 2023 on The Effective Altruism Forum.Hi all, using a throwaway as this feels a bit anti-social to post!Basically, I'm aligned with & live via the basic concepts of EA (use your career for good + donate some of your earnings to effective causes), but that's where my interest really kind of ends.I'm not interested in using my free time to read into EA further, I don't feel motivated to learn more about all the concepts that people use/ discuss, etc (utilitarianism, expected value, etc etc). I really like having non-EA friends and don't get any enjoyment from having philosophical discussions with people.I really like the core ethos of EA, but when I've gone to in-person meetups (including Prague Fall Season) I've felt like a fraud because I'm not at all versed in the language, and actually have no interest in discussion all the forum talking points. I just want to meet cool people who care about doing good!Of course a clear rebuttal here would be "ok then dude just talk to people about other stuff", but I've often felt at these events like people are there to discuss this kind of stuff, and to talk about more normal/ "mundane" stuff would make people think I'm wasting their time.So I guess my question is like - is there anyone else out there who feels this way? Any tips? I'd really like to make friends through the EA community but in the same breath I only want to be involved in a straightforward way (career + key ideas), rather than scouring the forums & LessWrong and hitting all the squares on the EA bingo card. Also, is it kind of in my head that you need to be a hardcore EA who has strong opinions on ethics & philosophy & etc, or is that the general mood?(I also appreciate that the people who end up reading this will be more "hardcore EA" types who check the forum regularly...)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 16 Feb 2023 13:58:17 +0000 EA - Anyone who likes the idea of EA and meeting EAs but doesn't want to discuss EA concepts IRL? by antisocial-throwaway Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anyone who likes the idea of EA & meeting EAs but doesn't want to discuss EA concepts IRL?, published by antisocial-throwaway on February 15, 2023 on The Effective Altruism Forum.Hi all, using a throwaway as this feels a bit anti-social to post!Basically, I'm aligned with & live via the basic concepts of EA (use your career for good + donate some of your earnings to effective causes), but that's where my interest really kind of ends.I'm not interested in using my free time to read into EA further, I don't feel motivated to learn more about all the concepts that people use/ discuss, etc (utilitarianism, expected value, etc etc). I really like having non-EA friends and don't get any enjoyment from having philosophical discussions with people.I really like the core ethos of EA, but when I've gone to in-person meetups (including Prague Fall Season) I've felt like a fraud because I'm not at all versed in the language, and actually have no interest in discussion all the forum talking points. I just want to meet cool people who care about doing good!Of course a clear rebuttal here would be "ok then dude just talk to people about other stuff", but I've often felt at these events like people are there to discuss this kind of stuff, and to talk about more normal/ "mundane" stuff would make people think I'm wasting their time.So I guess my question is like - is there anyone else out there who feels this way? Any tips? I'd really like to make friends through the EA community but in the same breath I only want to be involved in a straightforward way (career + key ideas), rather than scouring the forums & LessWrong and hitting all the squares on the EA bingo card. Also, is it kind of in my head that you need to be a hardcore EA who has strong opinions on ethics & philosophy & etc, or is that the general mood?(I also appreciate that the people who end up reading this will be more "hardcore EA" types who check the forum regularly...)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anyone who likes the idea of EA & meeting EAs but doesn't want to discuss EA concepts IRL?, published by antisocial-throwaway on February 15, 2023 on The Effective Altruism Forum.Hi all, using a throwaway as this feels a bit anti-social to post!Basically, I'm aligned with & live via the basic concepts of EA (use your career for good + donate some of your earnings to effective causes), but that's where my interest really kind of ends.I'm not interested in using my free time to read into EA further, I don't feel motivated to learn more about all the concepts that people use/ discuss, etc (utilitarianism, expected value, etc etc). I really like having non-EA friends and don't get any enjoyment from having philosophical discussions with people.I really like the core ethos of EA, but when I've gone to in-person meetups (including Prague Fall Season) I've felt like a fraud because I'm not at all versed in the language, and actually have no interest in discussion all the forum talking points. I just want to meet cool people who care about doing good!Of course a clear rebuttal here would be "ok then dude just talk to people about other stuff", but I've often felt at these events like people are there to discuss this kind of stuff, and to talk about more normal/ "mundane" stuff would make people think I'm wasting their time.So I guess my question is like - is there anyone else out there who feels this way? Any tips? I'd really like to make friends through the EA community but in the same breath I only want to be involved in a straightforward way (career + key ideas), rather than scouring the forums & LessWrong and hitting all the squares on the EA bingo card. Also, is it kind of in my head that you need to be a hardcore EA who has strong opinions on ethics & philosophy & etc, or is that the general mood?(I also appreciate that the people who end up reading this will be more "hardcore EA" types who check the forum regularly...)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
antisocial-throwaway https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:05 None full 4902
nrmmiezSq24S4hi9g_NL_EA_EA EA - Nobody Wants to Read Your Sht: my favorite book of writing advice, condensed for your convenience by jvb Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nobody Wants to Read Your Sht: my favorite book of writing advice, condensed for your convenience, published by jvb on February 15, 2023 on The Effective Altruism Forum.Nobody Wants to Read Your Sht by Steven Pressfield is my favorite book of writing advice. Its core insight is expressed in the title. The best thing you can do for your writing is to internalize this deep truth.Pressfield did it by writing ad copy. You can’t avoid internalizing that nobody wants to read your shit when you’re writing ads, which everybody hates and nobody ever wants to read. Maybe you don’t have to go write ad copy to understand this; maybe you can just read the book, or just this post.When you understand that nobody wants to read your shit, your mind becomes powerfully concentrated. You begin to understand that writing/reading is, above all, a transaction. The reader donates his time and attention, which are supremely valuable commodities. In return, you the writer must give him something worthy of his gift to you.When you understand that nobody wants to read your shit, you develop empathy. [...] You learn to ask yourself with every sentence and every phrase: Is this interesting? Is it fun or challenging or inventive? Am I giving the reader enough? Is she bored? Is she following where I want to lead her?What should you do about the fact that nobody wants to read your shit?Streamline your message. Be as clear, simple, and easy to understand as you possibly can.Make it fun. Or sexy or interesting or scary or informative. Fun writing saves lives.Apply this insight to all forms of communication.Pressfield wrote this book primarily for fiction writers, who are at the most serious risk of forgetting that nobody wants to read their shit (source: am fiction writer). But the art of empathy applies to all communication, and so do many other elements of fiction:Nonfiction is fiction. If you want your factual history or memoir, your grant proposal or dissertation or TED talk to be powerful and engaging and to hold the reader and audience's attention, you must organize your material as if it were a story and as if it were fiction. [...]What are the universal structural elements of all stories? Hook. Build. Payoff. This is the shape any story must take. A beginning that grabs the listener. A middle that escalates in tension, suspense, and excitement. And an ending that brings it all home with a bang. That's a novel, that's a play, that's a movie. That's a joke, that's a seduction, that's a military campaign. It's also your TED talk, your sales pitch, your Master's thesis, and the 890-page true saga of your great-great-grandmother's life.And your whitepaper, and your grant proposal, and your EA forum post. For this reason, I do recommend going out and grabbing this book, even though much of it concerns fiction. It only takes about an hour to read, because Pressfield knows we don’t want to read his shit. Finally:All clients have one thing in common. They're in love with their product/company/service. In the ad biz, this is called Client's Disease. [...]What the ad person understands that the client does not is that nobody gives a damn about the client or his product. [...]The pros understand that nobody wants to read their shit. They will start from that premise and employ all their arts and all their skills to come up with some brilliant stroke that will cut through that indifference.The relevance of this quote to EA writing is left as an exercise to the reader.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
jvb https://forum.effectivealtruism.org/posts/nrmmiezSq24S4hi9g/nobody-wants-to-read-your-sh-t-my-favorite-book-of-writing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nobody Wants to Read Your Sht: my favorite book of writing advice, condensed for your convenience, published by jvb on February 15, 2023 on The Effective Altruism Forum.Nobody Wants to Read Your Sht by Steven Pressfield is my favorite book of writing advice. Its core insight is expressed in the title. The best thing you can do for your writing is to internalize this deep truth.Pressfield did it by writing ad copy. You can’t avoid internalizing that nobody wants to read your shit when you’re writing ads, which everybody hates and nobody ever wants to read. Maybe you don’t have to go write ad copy to understand this; maybe you can just read the book, or just this post.When you understand that nobody wants to read your shit, your mind becomes powerfully concentrated. You begin to understand that writing/reading is, above all, a transaction. The reader donates his time and attention, which are supremely valuable commodities. In return, you the writer must give him something worthy of his gift to you.When you understand that nobody wants to read your shit, you develop empathy. [...] You learn to ask yourself with every sentence and every phrase: Is this interesting? Is it fun or challenging or inventive? Am I giving the reader enough? Is she bored? Is she following where I want to lead her?What should you do about the fact that nobody wants to read your shit?Streamline your message. Be as clear, simple, and easy to understand as you possibly can.Make it fun. Or sexy or interesting or scary or informative. Fun writing saves lives.Apply this insight to all forms of communication.Pressfield wrote this book primarily for fiction writers, who are at the most serious risk of forgetting that nobody wants to read their shit (source: am fiction writer). But the art of empathy applies to all communication, and so do many other elements of fiction:Nonfiction is fiction. If you want your factual history or memoir, your grant proposal or dissertation or TED talk to be powerful and engaging and to hold the reader and audience's attention, you must organize your material as if it were a story and as if it were fiction. [...]What are the universal structural elements of all stories? Hook. Build. Payoff. This is the shape any story must take. A beginning that grabs the listener. A middle that escalates in tension, suspense, and excitement. And an ending that brings it all home with a bang. That's a novel, that's a play, that's a movie. That's a joke, that's a seduction, that's a military campaign. It's also your TED talk, your sales pitch, your Master's thesis, and the 890-page true saga of your great-great-grandmother's life.And your whitepaper, and your grant proposal, and your EA forum post. For this reason, I do recommend going out and grabbing this book, even though much of it concerns fiction. It only takes about an hour to read, because Pressfield knows we don’t want to read his shit. Finally:All clients have one thing in common. They're in love with their product/company/service. In the ad biz, this is called Client's Disease. [...]What the ad person understands that the client does not is that nobody gives a damn about the client or his product. [...]The pros understand that nobody wants to read their shit. They will start from that premise and employ all their arts and all their skills to come up with some brilliant stroke that will cut through that indifference.The relevance of this quote to EA writing is left as an exercise to the reader.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 16 Feb 2023 10:33:04 +0000 EA - Nobody Wants to Read Your Sht: my favorite book of writing advice, condensed for your convenience by jvb Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nobody Wants to Read Your Sht: my favorite book of writing advice, condensed for your convenience, published by jvb on February 15, 2023 on The Effective Altruism Forum.Nobody Wants to Read Your Sht by Steven Pressfield is my favorite book of writing advice. Its core insight is expressed in the title. The best thing you can do for your writing is to internalize this deep truth.Pressfield did it by writing ad copy. You can’t avoid internalizing that nobody wants to read your shit when you’re writing ads, which everybody hates and nobody ever wants to read. Maybe you don’t have to go write ad copy to understand this; maybe you can just read the book, or just this post.When you understand that nobody wants to read your shit, your mind becomes powerfully concentrated. You begin to understand that writing/reading is, above all, a transaction. The reader donates his time and attention, which are supremely valuable commodities. In return, you the writer must give him something worthy of his gift to you.When you understand that nobody wants to read your shit, you develop empathy. [...] You learn to ask yourself with every sentence and every phrase: Is this interesting? Is it fun or challenging or inventive? Am I giving the reader enough? Is she bored? Is she following where I want to lead her?What should you do about the fact that nobody wants to read your shit?Streamline your message. Be as clear, simple, and easy to understand as you possibly can.Make it fun. Or sexy or interesting or scary or informative. Fun writing saves lives.Apply this insight to all forms of communication.Pressfield wrote this book primarily for fiction writers, who are at the most serious risk of forgetting that nobody wants to read their shit (source: am fiction writer). But the art of empathy applies to all communication, and so do many other elements of fiction:Nonfiction is fiction. If you want your factual history or memoir, your grant proposal or dissertation or TED talk to be powerful and engaging and to hold the reader and audience's attention, you must organize your material as if it were a story and as if it were fiction. [...]What are the universal structural elements of all stories? Hook. Build. Payoff. This is the shape any story must take. A beginning that grabs the listener. A middle that escalates in tension, suspense, and excitement. And an ending that brings it all home with a bang. That's a novel, that's a play, that's a movie. That's a joke, that's a seduction, that's a military campaign. It's also your TED talk, your sales pitch, your Master's thesis, and the 890-page true saga of your great-great-grandmother's life.And your whitepaper, and your grant proposal, and your EA forum post. For this reason, I do recommend going out and grabbing this book, even though much of it concerns fiction. It only takes about an hour to read, because Pressfield knows we don’t want to read his shit. Finally:All clients have one thing in common. They're in love with their product/company/service. In the ad biz, this is called Client's Disease. [...]What the ad person understands that the client does not is that nobody gives a damn about the client or his product. [...]The pros understand that nobody wants to read their shit. They will start from that premise and employ all their arts and all their skills to come up with some brilliant stroke that will cut through that indifference.The relevance of this quote to EA writing is left as an exercise to the reader.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nobody Wants to Read Your Sht: my favorite book of writing advice, condensed for your convenience, published by jvb on February 15, 2023 on The Effective Altruism Forum.Nobody Wants to Read Your Sht by Steven Pressfield is my favorite book of writing advice. Its core insight is expressed in the title. The best thing you can do for your writing is to internalize this deep truth.Pressfield did it by writing ad copy. You can’t avoid internalizing that nobody wants to read your shit when you’re writing ads, which everybody hates and nobody ever wants to read. Maybe you don’t have to go write ad copy to understand this; maybe you can just read the book, or just this post.When you understand that nobody wants to read your shit, your mind becomes powerfully concentrated. You begin to understand that writing/reading is, above all, a transaction. The reader donates his time and attention, which are supremely valuable commodities. In return, you the writer must give him something worthy of his gift to you.When you understand that nobody wants to read your shit, you develop empathy. [...] You learn to ask yourself with every sentence and every phrase: Is this interesting? Is it fun or challenging or inventive? Am I giving the reader enough? Is she bored? Is she following where I want to lead her?What should you do about the fact that nobody wants to read your shit?Streamline your message. Be as clear, simple, and easy to understand as you possibly can.Make it fun. Or sexy or interesting or scary or informative. Fun writing saves lives.Apply this insight to all forms of communication.Pressfield wrote this book primarily for fiction writers, who are at the most serious risk of forgetting that nobody wants to read their shit (source: am fiction writer). But the art of empathy applies to all communication, and so do many other elements of fiction:Nonfiction is fiction. If you want your factual history or memoir, your grant proposal or dissertation or TED talk to be powerful and engaging and to hold the reader and audience's attention, you must organize your material as if it were a story and as if it were fiction. [...]What are the universal structural elements of all stories? Hook. Build. Payoff. This is the shape any story must take. A beginning that grabs the listener. A middle that escalates in tension, suspense, and excitement. And an ending that brings it all home with a bang. That's a novel, that's a play, that's a movie. That's a joke, that's a seduction, that's a military campaign. It's also your TED talk, your sales pitch, your Master's thesis, and the 890-page true saga of your great-great-grandmother's life.And your whitepaper, and your grant proposal, and your EA forum post. For this reason, I do recommend going out and grabbing this book, even though much of it concerns fiction. It only takes about an hour to read, because Pressfield knows we don’t want to read his shit. Finally:All clients have one thing in common. They're in love with their product/company/service. In the ad biz, this is called Client's Disease. [...]What the ad person understands that the client does not is that nobody gives a damn about the client or his product. [...]The pros understand that nobody wants to read their shit. They will start from that premise and employ all their arts and all their skills to come up with some brilliant stroke that will cut through that indifference.The relevance of this quote to EA writing is left as an exercise to the reader.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
jvb https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:30 None full 4891
e8ZJvaiuxwQraG3yL_NL_EA_EA EA - Don't Over-Update On Others' Failures by lincolnq Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't Over-Update On Others' Failures, published by lincolnq on February 15, 2023 on The Effective Altruism Forum.Scenario: You're working hard on an important seeming problem. Maybe you have an idea to cure a specific form of cancer using mRNA. You've been working on the idea for a year or two, and seem to be making slow progress; it is not yet clear whether you will succeed.Then, you read a blog post or a paper about a similar approach by someone else: "Why I Am Not Working On Cures to Cancer Anymore." They failed in their approach and are giving up. You read their postmortem, and there are a few similarities but most of the details differ from your approach. How much should you update that your path will not succeed?Maybe a little: After all, they might have tried the thing you're working on too and just didn't mention it. But not that much, since after all they didn't actually appear to try the specific thing you're doing. Even if they had, execution is often more important than ideas anyway, and maybe their failure was execution related.The same applies for cause prioritization. Someone working on wild animal suffering might read this recent post, and even though they are working on an angle not mentioned, give up. I think in most cases this would be over-updating. Read the post, learn from it, but don't give up just because someone else didn't manage to find an angle.Last example—climate change. 80000 Hours makes clear that they think it is important but "all else equal, we think it's less pressing than our highest priority areas". (source) This does not mean working on climate change is useless, and if you read the post it becomes clear they just don't see a good angle. If you have an angle on climate change, please work on it!Indeed, I will go further and make the point: important advances are made by people who have unique angles that others didn't see. To put it another way from the startup world: "the best ideas look initially like bad ideas".Angles on solving problems are subtle. It's hard to find good ones, and execution matters, so much that even two attempts which superficially have the same thesis could succeed differently.Don't over-update from others' failures. The best work will be done by people who have unique takes on how to make the world better.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
lincolnq https://forum.effectivealtruism.org/posts/e8ZJvaiuxwQraG3yL/don-t-over-update-on-others-failures Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't Over-Update On Others' Failures, published by lincolnq on February 15, 2023 on The Effective Altruism Forum.Scenario: You're working hard on an important seeming problem. Maybe you have an idea to cure a specific form of cancer using mRNA. You've been working on the idea for a year or two, and seem to be making slow progress; it is not yet clear whether you will succeed.Then, you read a blog post or a paper about a similar approach by someone else: "Why I Am Not Working On Cures to Cancer Anymore." They failed in their approach and are giving up. You read their postmortem, and there are a few similarities but most of the details differ from your approach. How much should you update that your path will not succeed?Maybe a little: After all, they might have tried the thing you're working on too and just didn't mention it. But not that much, since after all they didn't actually appear to try the specific thing you're doing. Even if they had, execution is often more important than ideas anyway, and maybe their failure was execution related.The same applies for cause prioritization. Someone working on wild animal suffering might read this recent post, and even though they are working on an angle not mentioned, give up. I think in most cases this would be over-updating. Read the post, learn from it, but don't give up just because someone else didn't manage to find an angle.Last example—climate change. 80000 Hours makes clear that they think it is important but "all else equal, we think it's less pressing than our highest priority areas". (source) This does not mean working on climate change is useless, and if you read the post it becomes clear they just don't see a good angle. If you have an angle on climate change, please work on it!Indeed, I will go further and make the point: important advances are made by people who have unique angles that others didn't see. To put it another way from the startup world: "the best ideas look initially like bad ideas".Angles on solving problems are subtle. It's hard to find good ones, and execution matters, so much that even two attempts which superficially have the same thesis could succeed differently.Don't over-update from others' failures. The best work will be done by people who have unique takes on how to make the world better.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 15 Feb 2023 18:36:50 +0000 EA - Don't Over-Update On Others' Failures by lincolnq Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't Over-Update On Others' Failures, published by lincolnq on February 15, 2023 on The Effective Altruism Forum.Scenario: You're working hard on an important seeming problem. Maybe you have an idea to cure a specific form of cancer using mRNA. You've been working on the idea for a year or two, and seem to be making slow progress; it is not yet clear whether you will succeed.Then, you read a blog post or a paper about a similar approach by someone else: "Why I Am Not Working On Cures to Cancer Anymore." They failed in their approach and are giving up. You read their postmortem, and there are a few similarities but most of the details differ from your approach. How much should you update that your path will not succeed?Maybe a little: After all, they might have tried the thing you're working on too and just didn't mention it. But not that much, since after all they didn't actually appear to try the specific thing you're doing. Even if they had, execution is often more important than ideas anyway, and maybe their failure was execution related.The same applies for cause prioritization. Someone working on wild animal suffering might read this recent post, and even though they are working on an angle not mentioned, give up. I think in most cases this would be over-updating. Read the post, learn from it, but don't give up just because someone else didn't manage to find an angle.Last example—climate change. 80000 Hours makes clear that they think it is important but "all else equal, we think it's less pressing than our highest priority areas". (source) This does not mean working on climate change is useless, and if you read the post it becomes clear they just don't see a good angle. If you have an angle on climate change, please work on it!Indeed, I will go further and make the point: important advances are made by people who have unique angles that others didn't see. To put it another way from the startup world: "the best ideas look initially like bad ideas".Angles on solving problems are subtle. It's hard to find good ones, and execution matters, so much that even two attempts which superficially have the same thesis could succeed differently.Don't over-update from others' failures. The best work will be done by people who have unique takes on how to make the world better.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't Over-Update On Others' Failures, published by lincolnq on February 15, 2023 on The Effective Altruism Forum.Scenario: You're working hard on an important seeming problem. Maybe you have an idea to cure a specific form of cancer using mRNA. You've been working on the idea for a year or two, and seem to be making slow progress; it is not yet clear whether you will succeed.Then, you read a blog post or a paper about a similar approach by someone else: "Why I Am Not Working On Cures to Cancer Anymore." They failed in their approach and are giving up. You read their postmortem, and there are a few similarities but most of the details differ from your approach. How much should you update that your path will not succeed?Maybe a little: After all, they might have tried the thing you're working on too and just didn't mention it. But not that much, since after all they didn't actually appear to try the specific thing you're doing. Even if they had, execution is often more important than ideas anyway, and maybe their failure was execution related.The same applies for cause prioritization. Someone working on wild animal suffering might read this recent post, and even though they are working on an angle not mentioned, give up. I think in most cases this would be over-updating. Read the post, learn from it, but don't give up just because someone else didn't manage to find an angle.Last example—climate change. 80000 Hours makes clear that they think it is important but "all else equal, we think it's less pressing than our highest priority areas". (source) This does not mean working on climate change is useless, and if you read the post it becomes clear they just don't see a good angle. If you have an angle on climate change, please work on it!Indeed, I will go further and make the point: important advances are made by people who have unique angles that others didn't see. To put it another way from the startup world: "the best ideas look initially like bad ideas".Angles on solving problems are subtle. It's hard to find good ones, and execution matters, so much that even two attempts which superficially have the same thesis could succeed differently.Don't over-update from others' failures. The best work will be done by people who have unique takes on how to make the world better.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
lincolnq https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:18 None full 4886
8yaQ6i3oaFLprsFyb_NL_EA_EA EA - AI alignment researchers may have a comparative advantage in reducing s-risks by Lukas Gloor Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment researchers may have a comparative advantage in reducing s-risks, published by Lukas Gloor on February 15, 2023 on The Effective Altruism Forum.I believe AI alignment researchers might be uniquely well-positioned to make a difference to s-risks. In particular, I think this of alignment researchers with a keen interest in “macrostrategy.” By that, I mean ones who habitually engage in big-picture thinking related to the most pressing problems (like AI alignment and strategy), form mental models of how the future might unfold, and think through their work’s paths to impact. (There’s also a researcher profile where a person specializes in a specific problem area so much that they no longer have much interest in interdisciplinary work and issues of strategy – those researchers aren’t the target audience of this post.)Of course, having the motivation to work on a specific topic is a significant component of having a comparative advantage (or lack thereof). Whether AI alignment researchers find themselves motivated to invest a portion of their time/attention into s-risk reduction will depend on several factors, including:Their opportunity costsWhether they think the work is sufficiently tractableWhether s-risks matter enough (compared to other practical priorities) given their normative viewsWhether they agree that they may have a community-wide comparative advantageFurther below, I will say a few more things about these bullet points. In short, I believe that, for people with the right set of skills, reducing AI-related s-risks will become sufficiently tractable (if it isn’t already) once we know more about what transformative AI will look like. (The rest depends on individual choices about prioritization.)SummarySuffering risks (or “s-risks”) are risks of events that bring about suffering in cosmically significant amounts. (“Significant” relative to our current expectation over future suffering.)(This post will focus on “directly AI-related s-risks,” as opposed to things like “future humans don't exhibit sufficient concern for other sentient minds.”)Early efforts to research s-risks were motivated in a peculiar way – morally “suffering-focused” EAs started working on s-risks not because they seemed particularly likely or tractable, but because of the theoretical potential for s-risks to vastly overshadow more immediate sources of suffering.Consequently, it seems a priori plausible that the people who’ve prioritzed s-risks thus far don’t have much of a comparative advantage for researching object-level interventions against s-risks (apart from their high motivation inspired by their normative views).Indeed, this seems to be the case: I argue below that the most promising (object-level) ways to reduce s-risks often involve reasoning about the architectures or training processes of transformative AI systems, which involves skills that (at least historically) the s-risk community has not been specializing in all that much.[1]Taking a step back, one challenge for s-risk reduction is that s-risks would happen so far ahead in the future that we have only the most brittle of reasons to assume that we can foreseeably affect things for the better.Nonetheless, I believe we can tractably reduce s-risks by focusing on levers that stay identifiable across a broad range of possible futures. In particular, we can focus on the propensity of agents to preserve themselves and pursue their goals in a wide range of environments. By focusing our efforts on shaping the next generation(s) of influential agents (e.g., our AI successors), we can address some of the most significant risk factors for s-risks.[2] In particular:Install design principles like hyperexistential separation into the goal/decision architectures of transformative AI systems.Shape AI training env...]]>
Lukas Gloor https://forum.effectivealtruism.org/posts/8yaQ6i3oaFLprsFyb/ai-alignment-researchers-may-have-a-comparative-advantage-in Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment researchers may have a comparative advantage in reducing s-risks, published by Lukas Gloor on February 15, 2023 on The Effective Altruism Forum.I believe AI alignment researchers might be uniquely well-positioned to make a difference to s-risks. In particular, I think this of alignment researchers with a keen interest in “macrostrategy.” By that, I mean ones who habitually engage in big-picture thinking related to the most pressing problems (like AI alignment and strategy), form mental models of how the future might unfold, and think through their work’s paths to impact. (There’s also a researcher profile where a person specializes in a specific problem area so much that they no longer have much interest in interdisciplinary work and issues of strategy – those researchers aren’t the target audience of this post.)Of course, having the motivation to work on a specific topic is a significant component of having a comparative advantage (or lack thereof). Whether AI alignment researchers find themselves motivated to invest a portion of their time/attention into s-risk reduction will depend on several factors, including:Their opportunity costsWhether they think the work is sufficiently tractableWhether s-risks matter enough (compared to other practical priorities) given their normative viewsWhether they agree that they may have a community-wide comparative advantageFurther below, I will say a few more things about these bullet points. In short, I believe that, for people with the right set of skills, reducing AI-related s-risks will become sufficiently tractable (if it isn’t already) once we know more about what transformative AI will look like. (The rest depends on individual choices about prioritization.)SummarySuffering risks (or “s-risks”) are risks of events that bring about suffering in cosmically significant amounts. (“Significant” relative to our current expectation over future suffering.)(This post will focus on “directly AI-related s-risks,” as opposed to things like “future humans don't exhibit sufficient concern for other sentient minds.”)Early efforts to research s-risks were motivated in a peculiar way – morally “suffering-focused” EAs started working on s-risks not because they seemed particularly likely or tractable, but because of the theoretical potential for s-risks to vastly overshadow more immediate sources of suffering.Consequently, it seems a priori plausible that the people who’ve prioritzed s-risks thus far don’t have much of a comparative advantage for researching object-level interventions against s-risks (apart from their high motivation inspired by their normative views).Indeed, this seems to be the case: I argue below that the most promising (object-level) ways to reduce s-risks often involve reasoning about the architectures or training processes of transformative AI systems, which involves skills that (at least historically) the s-risk community has not been specializing in all that much.[1]Taking a step back, one challenge for s-risk reduction is that s-risks would happen so far ahead in the future that we have only the most brittle of reasons to assume that we can foreseeably affect things for the better.Nonetheless, I believe we can tractably reduce s-risks by focusing on levers that stay identifiable across a broad range of possible futures. In particular, we can focus on the propensity of agents to preserve themselves and pursue their goals in a wide range of environments. By focusing our efforts on shaping the next generation(s) of influential agents (e.g., our AI successors), we can address some of the most significant risk factors for s-risks.[2] In particular:Install design principles like hyperexistential separation into the goal/decision architectures of transformative AI systems.Shape AI training env...]]>
Wed, 15 Feb 2023 16:48:35 +0000 EA - AI alignment researchers may have a comparative advantage in reducing s-risks by Lukas Gloor Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment researchers may have a comparative advantage in reducing s-risks, published by Lukas Gloor on February 15, 2023 on The Effective Altruism Forum.I believe AI alignment researchers might be uniquely well-positioned to make a difference to s-risks. In particular, I think this of alignment researchers with a keen interest in “macrostrategy.” By that, I mean ones who habitually engage in big-picture thinking related to the most pressing problems (like AI alignment and strategy), form mental models of how the future might unfold, and think through their work’s paths to impact. (There’s also a researcher profile where a person specializes in a specific problem area so much that they no longer have much interest in interdisciplinary work and issues of strategy – those researchers aren’t the target audience of this post.)Of course, having the motivation to work on a specific topic is a significant component of having a comparative advantage (or lack thereof). Whether AI alignment researchers find themselves motivated to invest a portion of their time/attention into s-risk reduction will depend on several factors, including:Their opportunity costsWhether they think the work is sufficiently tractableWhether s-risks matter enough (compared to other practical priorities) given their normative viewsWhether they agree that they may have a community-wide comparative advantageFurther below, I will say a few more things about these bullet points. In short, I believe that, for people with the right set of skills, reducing AI-related s-risks will become sufficiently tractable (if it isn’t already) once we know more about what transformative AI will look like. (The rest depends on individual choices about prioritization.)SummarySuffering risks (or “s-risks”) are risks of events that bring about suffering in cosmically significant amounts. (“Significant” relative to our current expectation over future suffering.)(This post will focus on “directly AI-related s-risks,” as opposed to things like “future humans don't exhibit sufficient concern for other sentient minds.”)Early efforts to research s-risks were motivated in a peculiar way – morally “suffering-focused” EAs started working on s-risks not because they seemed particularly likely or tractable, but because of the theoretical potential for s-risks to vastly overshadow more immediate sources of suffering.Consequently, it seems a priori plausible that the people who’ve prioritzed s-risks thus far don’t have much of a comparative advantage for researching object-level interventions against s-risks (apart from their high motivation inspired by their normative views).Indeed, this seems to be the case: I argue below that the most promising (object-level) ways to reduce s-risks often involve reasoning about the architectures or training processes of transformative AI systems, which involves skills that (at least historically) the s-risk community has not been specializing in all that much.[1]Taking a step back, one challenge for s-risk reduction is that s-risks would happen so far ahead in the future that we have only the most brittle of reasons to assume that we can foreseeably affect things for the better.Nonetheless, I believe we can tractably reduce s-risks by focusing on levers that stay identifiable across a broad range of possible futures. In particular, we can focus on the propensity of agents to preserve themselves and pursue their goals in a wide range of environments. By focusing our efforts on shaping the next generation(s) of influential agents (e.g., our AI successors), we can address some of the most significant risk factors for s-risks.[2] In particular:Install design principles like hyperexistential separation into the goal/decision architectures of transformative AI systems.Shape AI training env...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment researchers may have a comparative advantage in reducing s-risks, published by Lukas Gloor on February 15, 2023 on The Effective Altruism Forum.I believe AI alignment researchers might be uniquely well-positioned to make a difference to s-risks. In particular, I think this of alignment researchers with a keen interest in “macrostrategy.” By that, I mean ones who habitually engage in big-picture thinking related to the most pressing problems (like AI alignment and strategy), form mental models of how the future might unfold, and think through their work’s paths to impact. (There’s also a researcher profile where a person specializes in a specific problem area so much that they no longer have much interest in interdisciplinary work and issues of strategy – those researchers aren’t the target audience of this post.)Of course, having the motivation to work on a specific topic is a significant component of having a comparative advantage (or lack thereof). Whether AI alignment researchers find themselves motivated to invest a portion of their time/attention into s-risk reduction will depend on several factors, including:Their opportunity costsWhether they think the work is sufficiently tractableWhether s-risks matter enough (compared to other practical priorities) given their normative viewsWhether they agree that they may have a community-wide comparative advantageFurther below, I will say a few more things about these bullet points. In short, I believe that, for people with the right set of skills, reducing AI-related s-risks will become sufficiently tractable (if it isn’t already) once we know more about what transformative AI will look like. (The rest depends on individual choices about prioritization.)SummarySuffering risks (or “s-risks”) are risks of events that bring about suffering in cosmically significant amounts. (“Significant” relative to our current expectation over future suffering.)(This post will focus on “directly AI-related s-risks,” as opposed to things like “future humans don't exhibit sufficient concern for other sentient minds.”)Early efforts to research s-risks were motivated in a peculiar way – morally “suffering-focused” EAs started working on s-risks not because they seemed particularly likely or tractable, but because of the theoretical potential for s-risks to vastly overshadow more immediate sources of suffering.Consequently, it seems a priori plausible that the people who’ve prioritzed s-risks thus far don’t have much of a comparative advantage for researching object-level interventions against s-risks (apart from their high motivation inspired by their normative views).Indeed, this seems to be the case: I argue below that the most promising (object-level) ways to reduce s-risks often involve reasoning about the architectures or training processes of transformative AI systems, which involves skills that (at least historically) the s-risk community has not been specializing in all that much.[1]Taking a step back, one challenge for s-risk reduction is that s-risks would happen so far ahead in the future that we have only the most brittle of reasons to assume that we can foreseeably affect things for the better.Nonetheless, I believe we can tractably reduce s-risks by focusing on levers that stay identifiable across a broad range of possible futures. In particular, we can focus on the propensity of agents to preserve themselves and pursue their goals in a wide range of environments. By focusing our efforts on shaping the next generation(s) of influential agents (e.g., our AI successors), we can address some of the most significant risk factors for s-risks.[2] In particular:Install design principles like hyperexistential separation into the goal/decision architectures of transformative AI systems.Shape AI training env...]]>
Lukas Gloor https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 18:57 None full 4887
saEQXBgzmDbob9GdH_NL_EA_EA EA - Why I No Longer Prioritize Wild Animal Welfare by saulius Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I No Longer Prioritize Wild Animal Welfare, published by saulius on February 15, 2023 on The Effective Altruism Forum.This is the story of how I came to see Wild Animal Welfare (WAW) as a less promising cause than I did initially. I summarise three articles I wrote on WAW: ‘Why it’s difficult to find cost-effective WAW interventions we could do now’, ‘Lobbying governments to improve WAW’, and ‘WAW in the far future’. I then draw some more general conclusions. The articles assume some familiarity with WAW ideas. See here or here for an intro to WAW ideas.My initial opinionMy first exposure to EA was reading Brian Tomasik’s articles about WAW. I couldn’t believe that despite constantly watching nature documentaries, I had never realized that all this natural suffering is a problem we could try solving. When I became familiar with other EA ideas, I still saw WAW as by far the most promising non-longtermist cause. I thought that EA individuals and organizations continued to focus most of the funding and work on farmed animals because of the status quo bias, risk-aversion, failure to appreciate the scale of WAW issues, misconceptions about WAW, and because they didn’t care about small animals despite evidence that they could be sentient.There seem to be no cost-effective interventions to pursue nowIn 2021, I was given the task of finding a cost-effective WAW intervention that could be pursued in the next few years. I was surprised by how difficult it was to come up with promising WAW interventions. Also, most ideas were very difficult to evaluate and their impacts were highly uncertain. To my surprise, most WAW researchers that I talked to agreed that we’re unlikely to find WAW interventions that could be as cost-effective as farmed animal welfare interventions within the next few years. It’s just much easier to change conditions and observe consequences for farmed animals because their genetics and environment are controlled by humans. I ended up spending most of my time evaluating interventions to reduce aquatic noise. While I think this is promising compared to other WAW interventions I considered, there are quite many farmed animal interventions that I would prioritize over reducing aquatic noise.I still think there is about a 15% chance that someone will find a direct WAW intervention in the next ten years that is more promising than the marginal farmed animal welfare intervention.I discuss direct short-term WAW interventions in more detail here.Influencing governmentsEven though WAW work doesn’t seem as promising as farmed animal welfare in terms of immediate impact, some people have suggested that we should do some non-controversial WAW interventions anyway, in order to promote the wild animal welfare field and show that WAW is tractable. But then I questioned: is it tractable? And what is the plan after we do these interventions? I started asking people who work on WAW about the theory of change for the movement. Some people said that the ultimate aim is to influence governments decades into the future to improve WAW on a large scale. But influence them to do what exactly? Any goals I could come up with didn’t seem as promising and unambiguously positive as I expected. The argument for the importance of WAW rests on the enormous numbers of small wild animals. But it’s difficult to imagine politicians and voters wanting to spend taxpayer money on improving wild insect welfare, especially in a scope-sensitive way.It also seems very difficult to find out and agree on what interventions are good for overall wild animal welfare when all things are considered.See here for further discussion of the goals of lobbying governments to improve WAW, and obstacles to doing this.Long-term futureOthers have argued that what matters most in WAW is moral circle exp...]]>
saulius https://forum.effectivealtruism.org/posts/saEQXBgzmDbob9GdH/why-i-no-longer-prioritize-wild-animal-welfare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I No Longer Prioritize Wild Animal Welfare, published by saulius on February 15, 2023 on The Effective Altruism Forum.This is the story of how I came to see Wild Animal Welfare (WAW) as a less promising cause than I did initially. I summarise three articles I wrote on WAW: ‘Why it’s difficult to find cost-effective WAW interventions we could do now’, ‘Lobbying governments to improve WAW’, and ‘WAW in the far future’. I then draw some more general conclusions. The articles assume some familiarity with WAW ideas. See here or here for an intro to WAW ideas.My initial opinionMy first exposure to EA was reading Brian Tomasik’s articles about WAW. I couldn’t believe that despite constantly watching nature documentaries, I had never realized that all this natural suffering is a problem we could try solving. When I became familiar with other EA ideas, I still saw WAW as by far the most promising non-longtermist cause. I thought that EA individuals and organizations continued to focus most of the funding and work on farmed animals because of the status quo bias, risk-aversion, failure to appreciate the scale of WAW issues, misconceptions about WAW, and because they didn’t care about small animals despite evidence that they could be sentient.There seem to be no cost-effective interventions to pursue nowIn 2021, I was given the task of finding a cost-effective WAW intervention that could be pursued in the next few years. I was surprised by how difficult it was to come up with promising WAW interventions. Also, most ideas were very difficult to evaluate and their impacts were highly uncertain. To my surprise, most WAW researchers that I talked to agreed that we’re unlikely to find WAW interventions that could be as cost-effective as farmed animal welfare interventions within the next few years. It’s just much easier to change conditions and observe consequences for farmed animals because their genetics and environment are controlled by humans. I ended up spending most of my time evaluating interventions to reduce aquatic noise. While I think this is promising compared to other WAW interventions I considered, there are quite many farmed animal interventions that I would prioritize over reducing aquatic noise.I still think there is about a 15% chance that someone will find a direct WAW intervention in the next ten years that is more promising than the marginal farmed animal welfare intervention.I discuss direct short-term WAW interventions in more detail here.Influencing governmentsEven though WAW work doesn’t seem as promising as farmed animal welfare in terms of immediate impact, some people have suggested that we should do some non-controversial WAW interventions anyway, in order to promote the wild animal welfare field and show that WAW is tractable. But then I questioned: is it tractable? And what is the plan after we do these interventions? I started asking people who work on WAW about the theory of change for the movement. Some people said that the ultimate aim is to influence governments decades into the future to improve WAW on a large scale. But influence them to do what exactly? Any goals I could come up with didn’t seem as promising and unambiguously positive as I expected. The argument for the importance of WAW rests on the enormous numbers of small wild animals. But it’s difficult to imagine politicians and voters wanting to spend taxpayer money on improving wild insect welfare, especially in a scope-sensitive way.It also seems very difficult to find out and agree on what interventions are good for overall wild animal welfare when all things are considered.See here for further discussion of the goals of lobbying governments to improve WAW, and obstacles to doing this.Long-term futureOthers have argued that what matters most in WAW is moral circle exp...]]>
Wed, 15 Feb 2023 12:54:27 +0000 EA - Why I No Longer Prioritize Wild Animal Welfare by saulius Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I No Longer Prioritize Wild Animal Welfare, published by saulius on February 15, 2023 on The Effective Altruism Forum.This is the story of how I came to see Wild Animal Welfare (WAW) as a less promising cause than I did initially. I summarise three articles I wrote on WAW: ‘Why it’s difficult to find cost-effective WAW interventions we could do now’, ‘Lobbying governments to improve WAW’, and ‘WAW in the far future’. I then draw some more general conclusions. The articles assume some familiarity with WAW ideas. See here or here for an intro to WAW ideas.My initial opinionMy first exposure to EA was reading Brian Tomasik’s articles about WAW. I couldn’t believe that despite constantly watching nature documentaries, I had never realized that all this natural suffering is a problem we could try solving. When I became familiar with other EA ideas, I still saw WAW as by far the most promising non-longtermist cause. I thought that EA individuals and organizations continued to focus most of the funding and work on farmed animals because of the status quo bias, risk-aversion, failure to appreciate the scale of WAW issues, misconceptions about WAW, and because they didn’t care about small animals despite evidence that they could be sentient.There seem to be no cost-effective interventions to pursue nowIn 2021, I was given the task of finding a cost-effective WAW intervention that could be pursued in the next few years. I was surprised by how difficult it was to come up with promising WAW interventions. Also, most ideas were very difficult to evaluate and their impacts were highly uncertain. To my surprise, most WAW researchers that I talked to agreed that we’re unlikely to find WAW interventions that could be as cost-effective as farmed animal welfare interventions within the next few years. It’s just much easier to change conditions and observe consequences for farmed animals because their genetics and environment are controlled by humans. I ended up spending most of my time evaluating interventions to reduce aquatic noise. While I think this is promising compared to other WAW interventions I considered, there are quite many farmed animal interventions that I would prioritize over reducing aquatic noise.I still think there is about a 15% chance that someone will find a direct WAW intervention in the next ten years that is more promising than the marginal farmed animal welfare intervention.I discuss direct short-term WAW interventions in more detail here.Influencing governmentsEven though WAW work doesn’t seem as promising as farmed animal welfare in terms of immediate impact, some people have suggested that we should do some non-controversial WAW interventions anyway, in order to promote the wild animal welfare field and show that WAW is tractable. But then I questioned: is it tractable? And what is the plan after we do these interventions? I started asking people who work on WAW about the theory of change for the movement. Some people said that the ultimate aim is to influence governments decades into the future to improve WAW on a large scale. But influence them to do what exactly? Any goals I could come up with didn’t seem as promising and unambiguously positive as I expected. The argument for the importance of WAW rests on the enormous numbers of small wild animals. But it’s difficult to imagine politicians and voters wanting to spend taxpayer money on improving wild insect welfare, especially in a scope-sensitive way.It also seems very difficult to find out and agree on what interventions are good for overall wild animal welfare when all things are considered.See here for further discussion of the goals of lobbying governments to improve WAW, and obstacles to doing this.Long-term futureOthers have argued that what matters most in WAW is moral circle exp...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I No Longer Prioritize Wild Animal Welfare, published by saulius on February 15, 2023 on The Effective Altruism Forum.This is the story of how I came to see Wild Animal Welfare (WAW) as a less promising cause than I did initially. I summarise three articles I wrote on WAW: ‘Why it’s difficult to find cost-effective WAW interventions we could do now’, ‘Lobbying governments to improve WAW’, and ‘WAW in the far future’. I then draw some more general conclusions. The articles assume some familiarity with WAW ideas. See here or here for an intro to WAW ideas.My initial opinionMy first exposure to EA was reading Brian Tomasik’s articles about WAW. I couldn’t believe that despite constantly watching nature documentaries, I had never realized that all this natural suffering is a problem we could try solving. When I became familiar with other EA ideas, I still saw WAW as by far the most promising non-longtermist cause. I thought that EA individuals and organizations continued to focus most of the funding and work on farmed animals because of the status quo bias, risk-aversion, failure to appreciate the scale of WAW issues, misconceptions about WAW, and because they didn’t care about small animals despite evidence that they could be sentient.There seem to be no cost-effective interventions to pursue nowIn 2021, I was given the task of finding a cost-effective WAW intervention that could be pursued in the next few years. I was surprised by how difficult it was to come up with promising WAW interventions. Also, most ideas were very difficult to evaluate and their impacts were highly uncertain. To my surprise, most WAW researchers that I talked to agreed that we’re unlikely to find WAW interventions that could be as cost-effective as farmed animal welfare interventions within the next few years. It’s just much easier to change conditions and observe consequences for farmed animals because their genetics and environment are controlled by humans. I ended up spending most of my time evaluating interventions to reduce aquatic noise. While I think this is promising compared to other WAW interventions I considered, there are quite many farmed animal interventions that I would prioritize over reducing aquatic noise.I still think there is about a 15% chance that someone will find a direct WAW intervention in the next ten years that is more promising than the marginal farmed animal welfare intervention.I discuss direct short-term WAW interventions in more detail here.Influencing governmentsEven though WAW work doesn’t seem as promising as farmed animal welfare in terms of immediate impact, some people have suggested that we should do some non-controversial WAW interventions anyway, in order to promote the wild animal welfare field and show that WAW is tractable. But then I questioned: is it tractable? And what is the plan after we do these interventions? I started asking people who work on WAW about the theory of change for the movement. Some people said that the ultimate aim is to influence governments decades into the future to improve WAW on a large scale. But influence them to do what exactly? Any goals I could come up with didn’t seem as promising and unambiguously positive as I expected. The argument for the importance of WAW rests on the enormous numbers of small wild animals. But it’s difficult to imagine politicians and voters wanting to spend taxpayer money on improving wild insect welfare, especially in a scope-sensitive way.It also seems very difficult to find out and agree on what interventions are good for overall wild animal welfare when all things are considered.See here for further discussion of the goals of lobbying governments to improve WAW, and obstacles to doing this.Long-term futureOthers have argued that what matters most in WAW is moral circle exp...]]>
saulius https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:36 None full 4888
fnAcuwQsc8johAyWN_NL_EA_EA EA - EA Organization Updates: January 2023 by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: January 2023, published by Lizka on February 14, 2023 on The Effective Altruism Forum.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series.The organizations are in alphabetical order.Job listingsConsider also exploring jobs listed on “Job listing (open).”Against Malaria FoundationSenior Operations Manager (Remote, £50,000 - £60,000)Effective Institutions ProjectChief of Staff/Chief Operating Officer (Remote, $75,000+)Charity EntrepreneurshipNonprofit Founder in Biosecurity and Large-Scale Global Health (Remote Program/ 2 weeks in London/ Stipends + up to $200,000 in seed funding, apply to the Incubation Program by 12 March)GiveWellSenior Researcher (Remote / Oakland, CA, $181,400 - $199,800)Senior Research Associate (Remote / Oakland, CA, $127,000 - $139,900)Content Editor (Remote / Oakland, CA, $83,500 - $91,900)Open PhilanthropyCause Prioritization Interns (Summer 2023) (Remote, $1,900 / week, apply by 26 February)Senior Program Associate - Forecasting (Remote, $134,526, apply by 5 March)Associated roles in operations and finance (Most roles remote but require US working hours, $84,303 - $104,132)Rethink Priorities:Expression of Interest - Project lead/co-lead for a Longtermist Incubator (Remote, flexible, $67,000 - 115,000, apply by 28 February)Organizational updates80,000 HoursEVF’s recent update - announcing interim CEOs of EVF - highlights changes to 80,000 Hours’ organisation structure. 80,000 Hours’ CEO, Howie Lempel, moved to Interim CEO of Effective Ventures Foundation in the UK in November, and Brenton Mayer has been acting as Interim CEO of 80k in his absence.This month on The 80,000 Hours Podcast, Rob Wiblin interviewed Athena Aktipis on cancer, cooperation, and the apocalypse.80,000 Hours also shared several blog posts, including Michelle Hutchinson’s writing on My thoughts on parenting and having an impactful career.Anima InternationalAnima International’s Bulgarian team, Nevidimi Zhivotni, recently released a whistleblower video interviewing a former fur farmworker. As a result, part of the video was shown on Bulgaria’s national evening news programme and a member of the team was interviewed live on air. This represents the first time that fur has been covered in depth as a topic on prime-time television in the country.Further north in Poland, Anima International’s Polish team Otwarte Klatki launched one of its biggest projects as part of the Fur Free Europe campaign. In the video, we see celebrities reacting to being shown footage from fur farms. It’s worth remembering that Poland is one of the world’s top fur producers.Finally, the team in Norway is gearing up for the Anima Advocacy Conference (Dyrevernkonferansen) 2023. The conference is dedicated to effective animal advocacy and was the first with such a focus when it launched a few years ago. For more information and to get tickets, you can go here.Berkeley Existential Risk Initiative (BERI)Elizabeth Cooper joins BERI as Deputy Director starting March 1. She’ll help run BERI’s university collaborations program, as well as launching new BERI programs in the future.Centre for Effective AltruismForum teamYou can see an update from the Forum team here: “Community” posts have their own section, subforums are closing, and more (F...]]>
Lizka https://forum.effectivealtruism.org/posts/fnAcuwQsc8johAyWN/ea-organization-updates-january-2023-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: January 2023, published by Lizka on February 14, 2023 on The Effective Altruism Forum.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series.The organizations are in alphabetical order.Job listingsConsider also exploring jobs listed on “Job listing (open).”Against Malaria FoundationSenior Operations Manager (Remote, £50,000 - £60,000)Effective Institutions ProjectChief of Staff/Chief Operating Officer (Remote, $75,000+)Charity EntrepreneurshipNonprofit Founder in Biosecurity and Large-Scale Global Health (Remote Program/ 2 weeks in London/ Stipends + up to $200,000 in seed funding, apply to the Incubation Program by 12 March)GiveWellSenior Researcher (Remote / Oakland, CA, $181,400 - $199,800)Senior Research Associate (Remote / Oakland, CA, $127,000 - $139,900)Content Editor (Remote / Oakland, CA, $83,500 - $91,900)Open PhilanthropyCause Prioritization Interns (Summer 2023) (Remote, $1,900 / week, apply by 26 February)Senior Program Associate - Forecasting (Remote, $134,526, apply by 5 March)Associated roles in operations and finance (Most roles remote but require US working hours, $84,303 - $104,132)Rethink Priorities:Expression of Interest - Project lead/co-lead for a Longtermist Incubator (Remote, flexible, $67,000 - 115,000, apply by 28 February)Organizational updates80,000 HoursEVF’s recent update - announcing interim CEOs of EVF - highlights changes to 80,000 Hours’ organisation structure. 80,000 Hours’ CEO, Howie Lempel, moved to Interim CEO of Effective Ventures Foundation in the UK in November, and Brenton Mayer has been acting as Interim CEO of 80k in his absence.This month on The 80,000 Hours Podcast, Rob Wiblin interviewed Athena Aktipis on cancer, cooperation, and the apocalypse.80,000 Hours also shared several blog posts, including Michelle Hutchinson’s writing on My thoughts on parenting and having an impactful career.Anima InternationalAnima International’s Bulgarian team, Nevidimi Zhivotni, recently released a whistleblower video interviewing a former fur farmworker. As a result, part of the video was shown on Bulgaria’s national evening news programme and a member of the team was interviewed live on air. This represents the first time that fur has been covered in depth as a topic on prime-time television in the country.Further north in Poland, Anima International’s Polish team Otwarte Klatki launched one of its biggest projects as part of the Fur Free Europe campaign. In the video, we see celebrities reacting to being shown footage from fur farms. It’s worth remembering that Poland is one of the world’s top fur producers.Finally, the team in Norway is gearing up for the Anima Advocacy Conference (Dyrevernkonferansen) 2023. The conference is dedicated to effective animal advocacy and was the first with such a focus when it launched a few years ago. For more information and to get tickets, you can go here.Berkeley Existential Risk Initiative (BERI)Elizabeth Cooper joins BERI as Deputy Director starting March 1. She’ll help run BERI’s university collaborations program, as well as launching new BERI programs in the future.Centre for Effective AltruismForum teamYou can see an update from the Forum team here: “Community” posts have their own section, subforums are closing, and more (F...]]>
Wed, 15 Feb 2023 00:43:24 +0000 EA - EA Organization Updates: January 2023 by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: January 2023, published by Lizka on February 14, 2023 on The Effective Altruism Forum.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series.The organizations are in alphabetical order.Job listingsConsider also exploring jobs listed on “Job listing (open).”Against Malaria FoundationSenior Operations Manager (Remote, £50,000 - £60,000)Effective Institutions ProjectChief of Staff/Chief Operating Officer (Remote, $75,000+)Charity EntrepreneurshipNonprofit Founder in Biosecurity and Large-Scale Global Health (Remote Program/ 2 weeks in London/ Stipends + up to $200,000 in seed funding, apply to the Incubation Program by 12 March)GiveWellSenior Researcher (Remote / Oakland, CA, $181,400 - $199,800)Senior Research Associate (Remote / Oakland, CA, $127,000 - $139,900)Content Editor (Remote / Oakland, CA, $83,500 - $91,900)Open PhilanthropyCause Prioritization Interns (Summer 2023) (Remote, $1,900 / week, apply by 26 February)Senior Program Associate - Forecasting (Remote, $134,526, apply by 5 March)Associated roles in operations and finance (Most roles remote but require US working hours, $84,303 - $104,132)Rethink Priorities:Expression of Interest - Project lead/co-lead for a Longtermist Incubator (Remote, flexible, $67,000 - 115,000, apply by 28 February)Organizational updates80,000 HoursEVF’s recent update - announcing interim CEOs of EVF - highlights changes to 80,000 Hours’ organisation structure. 80,000 Hours’ CEO, Howie Lempel, moved to Interim CEO of Effective Ventures Foundation in the UK in November, and Brenton Mayer has been acting as Interim CEO of 80k in his absence.This month on The 80,000 Hours Podcast, Rob Wiblin interviewed Athena Aktipis on cancer, cooperation, and the apocalypse.80,000 Hours also shared several blog posts, including Michelle Hutchinson’s writing on My thoughts on parenting and having an impactful career.Anima InternationalAnima International’s Bulgarian team, Nevidimi Zhivotni, recently released a whistleblower video interviewing a former fur farmworker. As a result, part of the video was shown on Bulgaria’s national evening news programme and a member of the team was interviewed live on air. This represents the first time that fur has been covered in depth as a topic on prime-time television in the country.Further north in Poland, Anima International’s Polish team Otwarte Klatki launched one of its biggest projects as part of the Fur Free Europe campaign. In the video, we see celebrities reacting to being shown footage from fur farms. It’s worth remembering that Poland is one of the world’s top fur producers.Finally, the team in Norway is gearing up for the Anima Advocacy Conference (Dyrevernkonferansen) 2023. The conference is dedicated to effective animal advocacy and was the first with such a focus when it launched a few years ago. For more information and to get tickets, you can go here.Berkeley Existential Risk Initiative (BERI)Elizabeth Cooper joins BERI as Deputy Director starting March 1. She’ll help run BERI’s university collaborations program, as well as launching new BERI programs in the future.Centre for Effective AltruismForum teamYou can see an update from the Forum team here: “Community” posts have their own section, subforums are closing, and more (F...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: January 2023, published by Lizka on February 14, 2023 on The Effective Altruism Forum.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series.The organizations are in alphabetical order.Job listingsConsider also exploring jobs listed on “Job listing (open).”Against Malaria FoundationSenior Operations Manager (Remote, £50,000 - £60,000)Effective Institutions ProjectChief of Staff/Chief Operating Officer (Remote, $75,000+)Charity EntrepreneurshipNonprofit Founder in Biosecurity and Large-Scale Global Health (Remote Program/ 2 weeks in London/ Stipends + up to $200,000 in seed funding, apply to the Incubation Program by 12 March)GiveWellSenior Researcher (Remote / Oakland, CA, $181,400 - $199,800)Senior Research Associate (Remote / Oakland, CA, $127,000 - $139,900)Content Editor (Remote / Oakland, CA, $83,500 - $91,900)Open PhilanthropyCause Prioritization Interns (Summer 2023) (Remote, $1,900 / week, apply by 26 February)Senior Program Associate - Forecasting (Remote, $134,526, apply by 5 March)Associated roles in operations and finance (Most roles remote but require US working hours, $84,303 - $104,132)Rethink Priorities:Expression of Interest - Project lead/co-lead for a Longtermist Incubator (Remote, flexible, $67,000 - 115,000, apply by 28 February)Organizational updates80,000 HoursEVF’s recent update - announcing interim CEOs of EVF - highlights changes to 80,000 Hours’ organisation structure. 80,000 Hours’ CEO, Howie Lempel, moved to Interim CEO of Effective Ventures Foundation in the UK in November, and Brenton Mayer has been acting as Interim CEO of 80k in his absence.This month on The 80,000 Hours Podcast, Rob Wiblin interviewed Athena Aktipis on cancer, cooperation, and the apocalypse.80,000 Hours also shared several blog posts, including Michelle Hutchinson’s writing on My thoughts on parenting and having an impactful career.Anima InternationalAnima International’s Bulgarian team, Nevidimi Zhivotni, recently released a whistleblower video interviewing a former fur farmworker. As a result, part of the video was shown on Bulgaria’s national evening news programme and a member of the team was interviewed live on air. This represents the first time that fur has been covered in depth as a topic on prime-time television in the country.Further north in Poland, Anima International’s Polish team Otwarte Klatki launched one of its biggest projects as part of the Fur Free Europe campaign. In the video, we see celebrities reacting to being shown footage from fur farms. It’s worth remembering that Poland is one of the world’s top fur producers.Finally, the team in Norway is gearing up for the Anima Advocacy Conference (Dyrevernkonferansen) 2023. The conference is dedicated to effective animal advocacy and was the first with such a focus when it launched a few years ago. For more information and to get tickets, you can go here.Berkeley Existential Risk Initiative (BERI)Elizabeth Cooper joins BERI as Deputy Director starting March 1. She’ll help run BERI’s university collaborations program, as well as launching new BERI programs in the future.Centre for Effective AltruismForum teamYou can see an update from the Forum team here: “Community” posts have their own section, subforums are closing, and more (F...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 18:33 None full 4885
fpudvy4KB6np324Fx_NL_EA_EA EA - Philanthropy to the Right of Boom [Founders Pledge] by christian.r Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Philanthropy to the Right of Boom [Founders Pledge], published by christian.r on February 14, 2023 on The Effective Altruism Forum.Background and Acknowledgements:This write-up represents part of an ongoing Founders Pledge research project to understand the landscape of nuclear risk and philanthropic support of nuclear risk reduction measures. It is in some respects a work in progress and can be viewed as a Google Document here and on Founders Pledge's website here.With thanks to James Acton, Conor Barnes, Tom Barnes, Patty-Jane Geller, Matthew Gentzel, Matt Lerner, Jeffrey Lewis, Ankit Panda, Andrew Reddie, and Carl Robichaud for reviewing this document and for their thoughtful comments and suggestions.“The Nuclear Equivalent of Mosquito Nets”In philanthropy, the term “impact multipliers” refers to features of the world that make one funding opportunity relatively more effective than another. Stacking these multipliers makes effectiveness a “conjunction of multipliers;” understanding this conjunction can in turn help guide philanthropists seeking to maximize impact under high uncertainty.Not all impact multipliers are created equal, however. To systematically engage in effective giving, philanthropists must understand the largest impact multipliers — “critical multipliers” — those features that most dramatically cleave more effective interventions from less effective interventions. In global health and development, for example, one critical multiplier is simply to focus on the world’s poorest people. Because of large inequalities in wealth and the decreasing marginal utility of money, helping people living in extreme poverty rather than people in the Global North is a critical multiplier that winnows the field of possible interventions more than many other possible multipliers.Additional considerations — the prevalence of mosquito-borne illnesses, the low cost and scalability of bednet distribution, and more — ultimately point philanthropists in global health and development to one of the most effective interventions to reduce suffering in the near term: funding the distribution of insecticide-treated bednets.This write-up represents an attempt to identify a defensible critical multiplier in nuclear philanthropy, and potentially to move one step closer to finding “the nuclear equivalent of mosquito nets.”Impact Multipliers in Nuclear PhilanthropyThere are many potential impact multipliers in nuclear philanthropy. For example, focusing on states with large nuclear arsenals may be more impactful than focusing on nuclear terrorism. Nuclear terrorism would be horrific and a single attack in a city (e.g. with a dirty bomb) could kill thousands of people, injure many more, and cause long-lasting damage to the physical and mental health of millions. All-out nuclear war between the United States and Russia, however, would be many times worse. Hundreds of millions of people would likely die from the direct effects of a war. If we believe nuclear winter modeling, moreover, there may be many more deaths from climate effects and famine. In the worst case, civilization could collapse. Simplifying these effects, suppose for the sake of argument that a nuclear terrorist attack could kill 100,000 people, and an all-out nuclear war could kill 1 billion people.All else equal, in this scenario it would be 10,000 times more effective to focus on preventing all-out war than it is to focus on nuclear terrorism.Generalizing this pattern, philanthropists ought to prioritize the largest nuclear wars (again, all else equal) when thinking about additional resources at the margin. This can be operationalized with real numbers — nuclear arsenal size, military spending, and other measures can serve as proxy variables for the severity of nuclear war, yielding rough multipliers. This w...]]>
christian.r https://forum.effectivealtruism.org/posts/fpudvy4KB6np324Fx/philanthropy-to-the-right-of-boom-founders-pledge Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Philanthropy to the Right of Boom [Founders Pledge], published by christian.r on February 14, 2023 on The Effective Altruism Forum.Background and Acknowledgements:This write-up represents part of an ongoing Founders Pledge research project to understand the landscape of nuclear risk and philanthropic support of nuclear risk reduction measures. It is in some respects a work in progress and can be viewed as a Google Document here and on Founders Pledge's website here.With thanks to James Acton, Conor Barnes, Tom Barnes, Patty-Jane Geller, Matthew Gentzel, Matt Lerner, Jeffrey Lewis, Ankit Panda, Andrew Reddie, and Carl Robichaud for reviewing this document and for their thoughtful comments and suggestions.“The Nuclear Equivalent of Mosquito Nets”In philanthropy, the term “impact multipliers” refers to features of the world that make one funding opportunity relatively more effective than another. Stacking these multipliers makes effectiveness a “conjunction of multipliers;” understanding this conjunction can in turn help guide philanthropists seeking to maximize impact under high uncertainty.Not all impact multipliers are created equal, however. To systematically engage in effective giving, philanthropists must understand the largest impact multipliers — “critical multipliers” — those features that most dramatically cleave more effective interventions from less effective interventions. In global health and development, for example, one critical multiplier is simply to focus on the world’s poorest people. Because of large inequalities in wealth and the decreasing marginal utility of money, helping people living in extreme poverty rather than people in the Global North is a critical multiplier that winnows the field of possible interventions more than many other possible multipliers.Additional considerations — the prevalence of mosquito-borne illnesses, the low cost and scalability of bednet distribution, and more — ultimately point philanthropists in global health and development to one of the most effective interventions to reduce suffering in the near term: funding the distribution of insecticide-treated bednets.This write-up represents an attempt to identify a defensible critical multiplier in nuclear philanthropy, and potentially to move one step closer to finding “the nuclear equivalent of mosquito nets.”Impact Multipliers in Nuclear PhilanthropyThere are many potential impact multipliers in nuclear philanthropy. For example, focusing on states with large nuclear arsenals may be more impactful than focusing on nuclear terrorism. Nuclear terrorism would be horrific and a single attack in a city (e.g. with a dirty bomb) could kill thousands of people, injure many more, and cause long-lasting damage to the physical and mental health of millions. All-out nuclear war between the United States and Russia, however, would be many times worse. Hundreds of millions of people would likely die from the direct effects of a war. If we believe nuclear winter modeling, moreover, there may be many more deaths from climate effects and famine. In the worst case, civilization could collapse. Simplifying these effects, suppose for the sake of argument that a nuclear terrorist attack could kill 100,000 people, and an all-out nuclear war could kill 1 billion people.All else equal, in this scenario it would be 10,000 times more effective to focus on preventing all-out war than it is to focus on nuclear terrorism.Generalizing this pattern, philanthropists ought to prioritize the largest nuclear wars (again, all else equal) when thinking about additional resources at the margin. This can be operationalized with real numbers — nuclear arsenal size, military spending, and other measures can serve as proxy variables for the severity of nuclear war, yielding rough multipliers. This w...]]>
Tue, 14 Feb 2023 21:52:59 +0000 EA - Philanthropy to the Right of Boom [Founders Pledge] by christian.r Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Philanthropy to the Right of Boom [Founders Pledge], published by christian.r on February 14, 2023 on The Effective Altruism Forum.Background and Acknowledgements:This write-up represents part of an ongoing Founders Pledge research project to understand the landscape of nuclear risk and philanthropic support of nuclear risk reduction measures. It is in some respects a work in progress and can be viewed as a Google Document here and on Founders Pledge's website here.With thanks to James Acton, Conor Barnes, Tom Barnes, Patty-Jane Geller, Matthew Gentzel, Matt Lerner, Jeffrey Lewis, Ankit Panda, Andrew Reddie, and Carl Robichaud for reviewing this document and for their thoughtful comments and suggestions.“The Nuclear Equivalent of Mosquito Nets”In philanthropy, the term “impact multipliers” refers to features of the world that make one funding opportunity relatively more effective than another. Stacking these multipliers makes effectiveness a “conjunction of multipliers;” understanding this conjunction can in turn help guide philanthropists seeking to maximize impact under high uncertainty.Not all impact multipliers are created equal, however. To systematically engage in effective giving, philanthropists must understand the largest impact multipliers — “critical multipliers” — those features that most dramatically cleave more effective interventions from less effective interventions. In global health and development, for example, one critical multiplier is simply to focus on the world’s poorest people. Because of large inequalities in wealth and the decreasing marginal utility of money, helping people living in extreme poverty rather than people in the Global North is a critical multiplier that winnows the field of possible interventions more than many other possible multipliers.Additional considerations — the prevalence of mosquito-borne illnesses, the low cost and scalability of bednet distribution, and more — ultimately point philanthropists in global health and development to one of the most effective interventions to reduce suffering in the near term: funding the distribution of insecticide-treated bednets.This write-up represents an attempt to identify a defensible critical multiplier in nuclear philanthropy, and potentially to move one step closer to finding “the nuclear equivalent of mosquito nets.”Impact Multipliers in Nuclear PhilanthropyThere are many potential impact multipliers in nuclear philanthropy. For example, focusing on states with large nuclear arsenals may be more impactful than focusing on nuclear terrorism. Nuclear terrorism would be horrific and a single attack in a city (e.g. with a dirty bomb) could kill thousands of people, injure many more, and cause long-lasting damage to the physical and mental health of millions. All-out nuclear war between the United States and Russia, however, would be many times worse. Hundreds of millions of people would likely die from the direct effects of a war. If we believe nuclear winter modeling, moreover, there may be many more deaths from climate effects and famine. In the worst case, civilization could collapse. Simplifying these effects, suppose for the sake of argument that a nuclear terrorist attack could kill 100,000 people, and an all-out nuclear war could kill 1 billion people.All else equal, in this scenario it would be 10,000 times more effective to focus on preventing all-out war than it is to focus on nuclear terrorism.Generalizing this pattern, philanthropists ought to prioritize the largest nuclear wars (again, all else equal) when thinking about additional resources at the margin. This can be operationalized with real numbers — nuclear arsenal size, military spending, and other measures can serve as proxy variables for the severity of nuclear war, yielding rough multipliers. This w...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Philanthropy to the Right of Boom [Founders Pledge], published by christian.r on February 14, 2023 on The Effective Altruism Forum.Background and Acknowledgements:This write-up represents part of an ongoing Founders Pledge research project to understand the landscape of nuclear risk and philanthropic support of nuclear risk reduction measures. It is in some respects a work in progress and can be viewed as a Google Document here and on Founders Pledge's website here.With thanks to James Acton, Conor Barnes, Tom Barnes, Patty-Jane Geller, Matthew Gentzel, Matt Lerner, Jeffrey Lewis, Ankit Panda, Andrew Reddie, and Carl Robichaud for reviewing this document and for their thoughtful comments and suggestions.“The Nuclear Equivalent of Mosquito Nets”In philanthropy, the term “impact multipliers” refers to features of the world that make one funding opportunity relatively more effective than another. Stacking these multipliers makes effectiveness a “conjunction of multipliers;” understanding this conjunction can in turn help guide philanthropists seeking to maximize impact under high uncertainty.Not all impact multipliers are created equal, however. To systematically engage in effective giving, philanthropists must understand the largest impact multipliers — “critical multipliers” — those features that most dramatically cleave more effective interventions from less effective interventions. In global health and development, for example, one critical multiplier is simply to focus on the world’s poorest people. Because of large inequalities in wealth and the decreasing marginal utility of money, helping people living in extreme poverty rather than people in the Global North is a critical multiplier that winnows the field of possible interventions more than many other possible multipliers.Additional considerations — the prevalence of mosquito-borne illnesses, the low cost and scalability of bednet distribution, and more — ultimately point philanthropists in global health and development to one of the most effective interventions to reduce suffering in the near term: funding the distribution of insecticide-treated bednets.This write-up represents an attempt to identify a defensible critical multiplier in nuclear philanthropy, and potentially to move one step closer to finding “the nuclear equivalent of mosquito nets.”Impact Multipliers in Nuclear PhilanthropyThere are many potential impact multipliers in nuclear philanthropy. For example, focusing on states with large nuclear arsenals may be more impactful than focusing on nuclear terrorism. Nuclear terrorism would be horrific and a single attack in a city (e.g. with a dirty bomb) could kill thousands of people, injure many more, and cause long-lasting damage to the physical and mental health of millions. All-out nuclear war between the United States and Russia, however, would be many times worse. Hundreds of millions of people would likely die from the direct effects of a war. If we believe nuclear winter modeling, moreover, there may be many more deaths from climate effects and famine. In the worst case, civilization could collapse. Simplifying these effects, suppose for the sake of argument that a nuclear terrorist attack could kill 100,000 people, and an all-out nuclear war could kill 1 billion people.All else equal, in this scenario it would be 10,000 times more effective to focus on preventing all-out war than it is to focus on nuclear terrorism.Generalizing this pattern, philanthropists ought to prioritize the largest nuclear wars (again, all else equal) when thinking about additional resources at the margin. This can be operationalized with real numbers — nuclear arsenal size, military spending, and other measures can serve as proxy variables for the severity of nuclear war, yielding rough multipliers. This w...]]>
christian.r https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 49:33 None full 4875
NzPwFfzJur5bMmHTg_NL_EA_EA EA - Diversity takes by quinn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Diversity takes, published by quinn on February 14, 2023 on The Effective Altruism Forum.Richard Ren gets points for the part of diversity's value prop that has to do with frame disruption, I also realized a previously neglected by me merit of standpoint epistemology while I was ranting about something to him.Written quickly over not at all, based on my assessment of the importance of not letting the chase after that next karma hit prevent me from doing actual work, but also wanting to permanently communicate something that I've ranted about on discord and meatspace numerous times.Ozy really crushed it recently, and in this post I'm kind of expanding on their takedown of Doing EA Better's homogeneity takes.A premise of this post is that diversity is separable into demographic and intellectual parts. Demographic diversity is when you enumerate all the sources of variance in things like race, gender, faith, birthplace, et. al. and intellectual diversity is where you talk about sources of variance in ideas, philosophy, ideology, priors, et. al.. In this post, I'm going to celebrate intellectual exclusion, then explore an objection to see how much it weakens my argument. I'm going to be mostly boring and agreeable about EA's room for improvement on demographic diversity, but I'm going to distinguish between demographic diversity's true value prop and its false value prop. The false value prop will lead me to highlighting standpoint epistemology, talk about why I think rejection of it in general is deeply EA but also defend keeping it in the overton window, and outline what I think are the right and wrong ways to use it.I will distinguish solidarity from altruism as the two basic strategies for improving the world, and claim that EA oughtn't try too hard to cultivate solidarity.Definition 1: standpoint epistemologyTwice, philosophers have informed me that standpoint epistemology isn't a real tradition. That it has signaling functions on a kind of ideology level in and between philosophy departments, ready be used as a cudgel or flag, but there's no granular characterization of what it is and how it works that epistemologists agree on. This is probably cultural: critical theorists and analytical philosophers have different standards of what it means to "characterize" an "epistemology", but I don't in fact care about philosophy department politics, I care about a coherent phenomenon that I've observed in the things people say. So, without any appeal to the flowery language of the credentialed, and assuming the anticipation-constraint definition of "belief" as a premise, I will define standpoint epistemology as expecting people with skin in a game to know more about that game than someone who's not directly exposed.Take 1: poor people are cool and smart...I'm singling out poor people instead of some other niche group because I spent several years under $10k/year. (I was a leftist who cared a lot about art, which means I was what leftists and artists affectionately call "downwardly mobile", not like earning similarly to my parents or anything close to that). I think those years directly cultivated a model of how the world works, and separately gave me a permanent ability to see between the staples of society all the resilience, bravery, and pain that exists out there.Here's a factoid that some of you might not know, in spite of eating in restaurants a lot: back-of-house is actually operationally marvelous. Kitchens are like orchestras. I sometimes tell people that they should pick up some back of house shifts just so they can be a part of it.There's kind of a scope sensitivity thing going on where you can abstractly communicate about single mothers relying on the bus to get to minimum wage jobs, but it really hits different when you've worked with them. Here's anot...]]>
quinn https://forum.effectivealtruism.org/posts/NzPwFfzJur5bMmHTg/diversity-takes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Diversity takes, published by quinn on February 14, 2023 on The Effective Altruism Forum.Richard Ren gets points for the part of diversity's value prop that has to do with frame disruption, I also realized a previously neglected by me merit of standpoint epistemology while I was ranting about something to him.Written quickly over not at all, based on my assessment of the importance of not letting the chase after that next karma hit prevent me from doing actual work, but also wanting to permanently communicate something that I've ranted about on discord and meatspace numerous times.Ozy really crushed it recently, and in this post I'm kind of expanding on their takedown of Doing EA Better's homogeneity takes.A premise of this post is that diversity is separable into demographic and intellectual parts. Demographic diversity is when you enumerate all the sources of variance in things like race, gender, faith, birthplace, et. al. and intellectual diversity is where you talk about sources of variance in ideas, philosophy, ideology, priors, et. al.. In this post, I'm going to celebrate intellectual exclusion, then explore an objection to see how much it weakens my argument. I'm going to be mostly boring and agreeable about EA's room for improvement on demographic diversity, but I'm going to distinguish between demographic diversity's true value prop and its false value prop. The false value prop will lead me to highlighting standpoint epistemology, talk about why I think rejection of it in general is deeply EA but also defend keeping it in the overton window, and outline what I think are the right and wrong ways to use it.I will distinguish solidarity from altruism as the two basic strategies for improving the world, and claim that EA oughtn't try too hard to cultivate solidarity.Definition 1: standpoint epistemologyTwice, philosophers have informed me that standpoint epistemology isn't a real tradition. That it has signaling functions on a kind of ideology level in and between philosophy departments, ready be used as a cudgel or flag, but there's no granular characterization of what it is and how it works that epistemologists agree on. This is probably cultural: critical theorists and analytical philosophers have different standards of what it means to "characterize" an "epistemology", but I don't in fact care about philosophy department politics, I care about a coherent phenomenon that I've observed in the things people say. So, without any appeal to the flowery language of the credentialed, and assuming the anticipation-constraint definition of "belief" as a premise, I will define standpoint epistemology as expecting people with skin in a game to know more about that game than someone who's not directly exposed.Take 1: poor people are cool and smart...I'm singling out poor people instead of some other niche group because I spent several years under $10k/year. (I was a leftist who cared a lot about art, which means I was what leftists and artists affectionately call "downwardly mobile", not like earning similarly to my parents or anything close to that). I think those years directly cultivated a model of how the world works, and separately gave me a permanent ability to see between the staples of society all the resilience, bravery, and pain that exists out there.Here's a factoid that some of you might not know, in spite of eating in restaurants a lot: back-of-house is actually operationally marvelous. Kitchens are like orchestras. I sometimes tell people that they should pick up some back of house shifts just so they can be a part of it.There's kind of a scope sensitivity thing going on where you can abstractly communicate about single mothers relying on the bus to get to minimum wage jobs, but it really hits different when you've worked with them. Here's anot...]]>
Tue, 14 Feb 2023 21:30:15 +0000 EA - Diversity takes by quinn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Diversity takes, published by quinn on February 14, 2023 on The Effective Altruism Forum.Richard Ren gets points for the part of diversity's value prop that has to do with frame disruption, I also realized a previously neglected by me merit of standpoint epistemology while I was ranting about something to him.Written quickly over not at all, based on my assessment of the importance of not letting the chase after that next karma hit prevent me from doing actual work, but also wanting to permanently communicate something that I've ranted about on discord and meatspace numerous times.Ozy really crushed it recently, and in this post I'm kind of expanding on their takedown of Doing EA Better's homogeneity takes.A premise of this post is that diversity is separable into demographic and intellectual parts. Demographic diversity is when you enumerate all the sources of variance in things like race, gender, faith, birthplace, et. al. and intellectual diversity is where you talk about sources of variance in ideas, philosophy, ideology, priors, et. al.. In this post, I'm going to celebrate intellectual exclusion, then explore an objection to see how much it weakens my argument. I'm going to be mostly boring and agreeable about EA's room for improvement on demographic diversity, but I'm going to distinguish between demographic diversity's true value prop and its false value prop. The false value prop will lead me to highlighting standpoint epistemology, talk about why I think rejection of it in general is deeply EA but also defend keeping it in the overton window, and outline what I think are the right and wrong ways to use it.I will distinguish solidarity from altruism as the two basic strategies for improving the world, and claim that EA oughtn't try too hard to cultivate solidarity.Definition 1: standpoint epistemologyTwice, philosophers have informed me that standpoint epistemology isn't a real tradition. That it has signaling functions on a kind of ideology level in and between philosophy departments, ready be used as a cudgel or flag, but there's no granular characterization of what it is and how it works that epistemologists agree on. This is probably cultural: critical theorists and analytical philosophers have different standards of what it means to "characterize" an "epistemology", but I don't in fact care about philosophy department politics, I care about a coherent phenomenon that I've observed in the things people say. So, without any appeal to the flowery language of the credentialed, and assuming the anticipation-constraint definition of "belief" as a premise, I will define standpoint epistemology as expecting people with skin in a game to know more about that game than someone who's not directly exposed.Take 1: poor people are cool and smart...I'm singling out poor people instead of some other niche group because I spent several years under $10k/year. (I was a leftist who cared a lot about art, which means I was what leftists and artists affectionately call "downwardly mobile", not like earning similarly to my parents or anything close to that). I think those years directly cultivated a model of how the world works, and separately gave me a permanent ability to see between the staples of society all the resilience, bravery, and pain that exists out there.Here's a factoid that some of you might not know, in spite of eating in restaurants a lot: back-of-house is actually operationally marvelous. Kitchens are like orchestras. I sometimes tell people that they should pick up some back of house shifts just so they can be a part of it.There's kind of a scope sensitivity thing going on where you can abstractly communicate about single mothers relying on the bus to get to minimum wage jobs, but it really hits different when you've worked with them. Here's anot...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Diversity takes, published by quinn on February 14, 2023 on The Effective Altruism Forum.Richard Ren gets points for the part of diversity's value prop that has to do with frame disruption, I also realized a previously neglected by me merit of standpoint epistemology while I was ranting about something to him.Written quickly over not at all, based on my assessment of the importance of not letting the chase after that next karma hit prevent me from doing actual work, but also wanting to permanently communicate something that I've ranted about on discord and meatspace numerous times.Ozy really crushed it recently, and in this post I'm kind of expanding on their takedown of Doing EA Better's homogeneity takes.A premise of this post is that diversity is separable into demographic and intellectual parts. Demographic diversity is when you enumerate all the sources of variance in things like race, gender, faith, birthplace, et. al. and intellectual diversity is where you talk about sources of variance in ideas, philosophy, ideology, priors, et. al.. In this post, I'm going to celebrate intellectual exclusion, then explore an objection to see how much it weakens my argument. I'm going to be mostly boring and agreeable about EA's room for improvement on demographic diversity, but I'm going to distinguish between demographic diversity's true value prop and its false value prop. The false value prop will lead me to highlighting standpoint epistemology, talk about why I think rejection of it in general is deeply EA but also defend keeping it in the overton window, and outline what I think are the right and wrong ways to use it.I will distinguish solidarity from altruism as the two basic strategies for improving the world, and claim that EA oughtn't try too hard to cultivate solidarity.Definition 1: standpoint epistemologyTwice, philosophers have informed me that standpoint epistemology isn't a real tradition. That it has signaling functions on a kind of ideology level in and between philosophy departments, ready be used as a cudgel or flag, but there's no granular characterization of what it is and how it works that epistemologists agree on. This is probably cultural: critical theorists and analytical philosophers have different standards of what it means to "characterize" an "epistemology", but I don't in fact care about philosophy department politics, I care about a coherent phenomenon that I've observed in the things people say. So, without any appeal to the flowery language of the credentialed, and assuming the anticipation-constraint definition of "belief" as a premise, I will define standpoint epistemology as expecting people with skin in a game to know more about that game than someone who's not directly exposed.Take 1: poor people are cool and smart...I'm singling out poor people instead of some other niche group because I spent several years under $10k/year. (I was a leftist who cared a lot about art, which means I was what leftists and artists affectionately call "downwardly mobile", not like earning similarly to my parents or anything close to that). I think those years directly cultivated a model of how the world works, and separately gave me a permanent ability to see between the staples of society all the resilience, bravery, and pain that exists out there.Here's a factoid that some of you might not know, in spite of eating in restaurants a lot: back-of-house is actually operationally marvelous. Kitchens are like orchestras. I sometimes tell people that they should pick up some back of house shifts just so they can be a part of it.There's kind of a scope sensitivity thing going on where you can abstractly communicate about single mothers relying on the bus to get to minimum wage jobs, but it really hits different when you've worked with them. Here's anot...]]>
quinn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:30 None full 4876
mEkRrDweNSdNdrmvx_NL_EA_EA EA - Plans for investigating and improving the experience of women, non-binary and trans people in EA by Catherine Low Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Plans for investigating and improving the experience of women, non-binary and trans people in EA, published by Catherine Low on February 14, 2023 on The Effective Altruism Forum.I’m Catherine from CEA’s Community Health and Special Projects Team.I’ve been frustrated and angered by some of the experiences some women and gender minorities have had in this community, ranging from feeling uncomfortable being in an extreme minority at a meeting through to sexual harassment and much worse. And I’ve been saddened by the lost impact that resulted from these experiences. I’ve tried to make things a bit better (including via co-founding Magnify Mentoring before I started at CEA), and I hope to do more this year.In December 2022, after a couple of very sad posts by women on the EA Forum, Anu Oak and I started working on a project to get a better understanding of the experiences of women and gender minorities in the EA community. Łukasz Grabowski is now also helping out. Hopefully this information will help us form effective strategies to improve the EA movement.I don’t really know what we’re going to find, and I’m very uncertain about what actions we’ll want to take at the end of this. We’re open to the possibility that things are really bad and that improving the experiences of women and gender minorities should be a major priority for our team. But we’re also open to finding out that things aren’t – on the whole – all that bad, or aren’t all that tractable, and there are no significant changes we want to prioritise.We are still in the early stages of our project. The things we are doing now are:Gathering together and analysing existing data (EA Survey data, EAG(x) event feedback forms, incoming reports to the Community Health team, data from EA community subgroups, etc).Talking to others in the community who are running related projects, or who have relevant expertise.Planning our next steps.If you have existing data you think would be helpful and that you’d like to share please get in touch by emailing Anu on anubhuti.oak@centreforeffectivealtruism.org.If you’re running a related project, feel free to get in touch if you’d like to explore coordinating in some way (but please don’t feel obligated to).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Catherine Low https://forum.effectivealtruism.org/posts/mEkRrDweNSdNdrmvx/plans-for-investigating-and-improving-the-experience-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Plans for investigating and improving the experience of women, non-binary and trans people in EA, published by Catherine Low on February 14, 2023 on The Effective Altruism Forum.I’m Catherine from CEA’s Community Health and Special Projects Team.I’ve been frustrated and angered by some of the experiences some women and gender minorities have had in this community, ranging from feeling uncomfortable being in an extreme minority at a meeting through to sexual harassment and much worse. And I’ve been saddened by the lost impact that resulted from these experiences. I’ve tried to make things a bit better (including via co-founding Magnify Mentoring before I started at CEA), and I hope to do more this year.In December 2022, after a couple of very sad posts by women on the EA Forum, Anu Oak and I started working on a project to get a better understanding of the experiences of women and gender minorities in the EA community. Łukasz Grabowski is now also helping out. Hopefully this information will help us form effective strategies to improve the EA movement.I don’t really know what we’re going to find, and I’m very uncertain about what actions we’ll want to take at the end of this. We’re open to the possibility that things are really bad and that improving the experiences of women and gender minorities should be a major priority for our team. But we’re also open to finding out that things aren’t – on the whole – all that bad, or aren’t all that tractable, and there are no significant changes we want to prioritise.We are still in the early stages of our project. The things we are doing now are:Gathering together and analysing existing data (EA Survey data, EAG(x) event feedback forms, incoming reports to the Community Health team, data from EA community subgroups, etc).Talking to others in the community who are running related projects, or who have relevant expertise.Planning our next steps.If you have existing data you think would be helpful and that you’d like to share please get in touch by emailing Anu on anubhuti.oak@centreforeffectivealtruism.org.If you’re running a related project, feel free to get in touch if you’d like to explore coordinating in some way (but please don’t feel obligated to).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 14 Feb 2023 21:11:26 +0000 EA - Plans for investigating and improving the experience of women, non-binary and trans people in EA by Catherine Low Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Plans for investigating and improving the experience of women, non-binary and trans people in EA, published by Catherine Low on February 14, 2023 on The Effective Altruism Forum.I’m Catherine from CEA’s Community Health and Special Projects Team.I’ve been frustrated and angered by some of the experiences some women and gender minorities have had in this community, ranging from feeling uncomfortable being in an extreme minority at a meeting through to sexual harassment and much worse. And I’ve been saddened by the lost impact that resulted from these experiences. I’ve tried to make things a bit better (including via co-founding Magnify Mentoring before I started at CEA), and I hope to do more this year.In December 2022, after a couple of very sad posts by women on the EA Forum, Anu Oak and I started working on a project to get a better understanding of the experiences of women and gender minorities in the EA community. Łukasz Grabowski is now also helping out. Hopefully this information will help us form effective strategies to improve the EA movement.I don’t really know what we’re going to find, and I’m very uncertain about what actions we’ll want to take at the end of this. We’re open to the possibility that things are really bad and that improving the experiences of women and gender minorities should be a major priority for our team. But we’re also open to finding out that things aren’t – on the whole – all that bad, or aren’t all that tractable, and there are no significant changes we want to prioritise.We are still in the early stages of our project. The things we are doing now are:Gathering together and analysing existing data (EA Survey data, EAG(x) event feedback forms, incoming reports to the Community Health team, data from EA community subgroups, etc).Talking to others in the community who are running related projects, or who have relevant expertise.Planning our next steps.If you have existing data you think would be helpful and that you’d like to share please get in touch by emailing Anu on anubhuti.oak@centreforeffectivealtruism.org.If you’re running a related project, feel free to get in touch if you’d like to explore coordinating in some way (but please don’t feel obligated to).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Plans for investigating and improving the experience of women, non-binary and trans people in EA, published by Catherine Low on February 14, 2023 on The Effective Altruism Forum.I’m Catherine from CEA’s Community Health and Special Projects Team.I’ve been frustrated and angered by some of the experiences some women and gender minorities have had in this community, ranging from feeling uncomfortable being in an extreme minority at a meeting through to sexual harassment and much worse. And I’ve been saddened by the lost impact that resulted from these experiences. I’ve tried to make things a bit better (including via co-founding Magnify Mentoring before I started at CEA), and I hope to do more this year.In December 2022, after a couple of very sad posts by women on the EA Forum, Anu Oak and I started working on a project to get a better understanding of the experiences of women and gender minorities in the EA community. Łukasz Grabowski is now also helping out. Hopefully this information will help us form effective strategies to improve the EA movement.I don’t really know what we’re going to find, and I’m very uncertain about what actions we’ll want to take at the end of this. We’re open to the possibility that things are really bad and that improving the experiences of women and gender minorities should be a major priority for our team. But we’re also open to finding out that things aren’t – on the whole – all that bad, or aren’t all that tractable, and there are no significant changes we want to prioritise.We are still in the early stages of our project. The things we are doing now are:Gathering together and analysing existing data (EA Survey data, EAG(x) event feedback forms, incoming reports to the Community Health team, data from EA community subgroups, etc).Talking to others in the community who are running related projects, or who have relevant expertise.Planning our next steps.If you have existing data you think would be helpful and that you’d like to share please get in touch by emailing Anu on anubhuti.oak@centreforeffectivealtruism.org.If you’re running a related project, feel free to get in touch if you’d like to explore coordinating in some way (but please don’t feel obligated to).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Catherine Low https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:17 None full 4874
kn4SDAaSKJYvKkSyF_NL_EA_EA EA - How meat-free meal selection varies with menu options: an exploration by Sagar K Shah Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How meat-free meal selection varies with menu options: an exploration, published by Sagar K Shah on February 14, 2023 on The Effective Altruism Forum.SummaryIncreasing consumption of meat-free meals can help reduce demand for factory farmed animal products and anthropogenic greenhouse gas emissions. But relatively little research has been done on how meat-free meal selection is influenced by menu options, such as the availability of meat-analogue options or different types of meat.We conducted a preregistered reanalysis of data from a series of hypothetical discrete choice experiments from Brachem et al. (2019). We explored how meat-free meal selection by 1348 respondents (mostly German students) varied across 26 different menus, depending on the number of meat-free options and whether any options contained fish/poultry meat or meat-analogues. Menus consisted of five options (of which, two or three were meat-free) and were composed using images and descriptions of actual dishes available at restaurants at the University of Göttingen.While our work was motivated by causal hypotheses, our reanalysis was limited to detecting correlations and not causal effects. Specific limitations include:Examining hypotheses that the original study was not designed to evaluate.De facto observational design, despite blinded randomization in the original study.Possible non-random correlations between the presence of poultry/fish or meat-analogue menu options and the appealingness of other dishes.Analysis of self-reported, hypothetical meal preferences, rather than actual behavior.Meat-analogues in menus not reflecting prominent products attracting significant financial investment.Notwithstanding, our reanalysis found meat-free meal selection odds were:higher among menus with an extra meat-free option (odds ratio of 2.3, 90% CI [1.8 to 3.0]).lower among menus featuring poultry or fish options (odds ratio of 0.7, 90% CI [0.6 to 0.9]).not significantly associated with the presence of meat-analogues on a menu (odds ratio of 1.2 (90% CI [0.9 to 1.6])) in our preregistered meat-analogue definition. Estimates varied across analogue definitions, but were never significantly different from 1.Despite the many limitations, these findings might slightly update our beliefs to the extent we believe correlations would be expected if causation were occurring.The poultry/fish option correlation highlights the potential for welfare losses from substitution towards small-bodied animals from menu changes as well as shifts in consumer preferences.Given the study didn’t feature very prominent meat analogues, the absence of a correlation in this reanalysis cannot credibly be used to refute a belief that high-quality analogues play an important role in reducing meat consumption. But when coupled with the strong correlation on an additional meat-free option, we think the reanalysis highlights the need for further research on the most effective ways to encourage selection of meat-free meals. It remains an open question whether, at the margin, it would be more cost-effective to advocate for more menu options featuring meat-analogues specifically, or for more meat-free options of any kind.You can read the full post on the Rethink Priorities website, and also see the pre-print and code via the Open Science Framework..Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sagar K Shah https://forum.effectivealtruism.org/posts/kn4SDAaSKJYvKkSyF/how-meat-free-meal-selection-varies-with-menu-options-an Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How meat-free meal selection varies with menu options: an exploration, published by Sagar K Shah on February 14, 2023 on The Effective Altruism Forum.SummaryIncreasing consumption of meat-free meals can help reduce demand for factory farmed animal products and anthropogenic greenhouse gas emissions. But relatively little research has been done on how meat-free meal selection is influenced by menu options, such as the availability of meat-analogue options or different types of meat.We conducted a preregistered reanalysis of data from a series of hypothetical discrete choice experiments from Brachem et al. (2019). We explored how meat-free meal selection by 1348 respondents (mostly German students) varied across 26 different menus, depending on the number of meat-free options and whether any options contained fish/poultry meat or meat-analogues. Menus consisted of five options (of which, two or three were meat-free) and were composed using images and descriptions of actual dishes available at restaurants at the University of Göttingen.While our work was motivated by causal hypotheses, our reanalysis was limited to detecting correlations and not causal effects. Specific limitations include:Examining hypotheses that the original study was not designed to evaluate.De facto observational design, despite blinded randomization in the original study.Possible non-random correlations between the presence of poultry/fish or meat-analogue menu options and the appealingness of other dishes.Analysis of self-reported, hypothetical meal preferences, rather than actual behavior.Meat-analogues in menus not reflecting prominent products attracting significant financial investment.Notwithstanding, our reanalysis found meat-free meal selection odds were:higher among menus with an extra meat-free option (odds ratio of 2.3, 90% CI [1.8 to 3.0]).lower among menus featuring poultry or fish options (odds ratio of 0.7, 90% CI [0.6 to 0.9]).not significantly associated with the presence of meat-analogues on a menu (odds ratio of 1.2 (90% CI [0.9 to 1.6])) in our preregistered meat-analogue definition. Estimates varied across analogue definitions, but were never significantly different from 1.Despite the many limitations, these findings might slightly update our beliefs to the extent we believe correlations would be expected if causation were occurring.The poultry/fish option correlation highlights the potential for welfare losses from substitution towards small-bodied animals from menu changes as well as shifts in consumer preferences.Given the study didn’t feature very prominent meat analogues, the absence of a correlation in this reanalysis cannot credibly be used to refute a belief that high-quality analogues play an important role in reducing meat consumption. But when coupled with the strong correlation on an additional meat-free option, we think the reanalysis highlights the need for further research on the most effective ways to encourage selection of meat-free meals. It remains an open question whether, at the margin, it would be more cost-effective to advocate for more menu options featuring meat-analogues specifically, or for more meat-free options of any kind.You can read the full post on the Rethink Priorities website, and also see the pre-print and code via the Open Science Framework..Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 14 Feb 2023 17:19:52 +0000 EA - How meat-free meal selection varies with menu options: an exploration by Sagar K Shah Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How meat-free meal selection varies with menu options: an exploration, published by Sagar K Shah on February 14, 2023 on The Effective Altruism Forum.SummaryIncreasing consumption of meat-free meals can help reduce demand for factory farmed animal products and anthropogenic greenhouse gas emissions. But relatively little research has been done on how meat-free meal selection is influenced by menu options, such as the availability of meat-analogue options or different types of meat.We conducted a preregistered reanalysis of data from a series of hypothetical discrete choice experiments from Brachem et al. (2019). We explored how meat-free meal selection by 1348 respondents (mostly German students) varied across 26 different menus, depending on the number of meat-free options and whether any options contained fish/poultry meat or meat-analogues. Menus consisted of five options (of which, two or three were meat-free) and were composed using images and descriptions of actual dishes available at restaurants at the University of Göttingen.While our work was motivated by causal hypotheses, our reanalysis was limited to detecting correlations and not causal effects. Specific limitations include:Examining hypotheses that the original study was not designed to evaluate.De facto observational design, despite blinded randomization in the original study.Possible non-random correlations between the presence of poultry/fish or meat-analogue menu options and the appealingness of other dishes.Analysis of self-reported, hypothetical meal preferences, rather than actual behavior.Meat-analogues in menus not reflecting prominent products attracting significant financial investment.Notwithstanding, our reanalysis found meat-free meal selection odds were:higher among menus with an extra meat-free option (odds ratio of 2.3, 90% CI [1.8 to 3.0]).lower among menus featuring poultry or fish options (odds ratio of 0.7, 90% CI [0.6 to 0.9]).not significantly associated with the presence of meat-analogues on a menu (odds ratio of 1.2 (90% CI [0.9 to 1.6])) in our preregistered meat-analogue definition. Estimates varied across analogue definitions, but were never significantly different from 1.Despite the many limitations, these findings might slightly update our beliefs to the extent we believe correlations would be expected if causation were occurring.The poultry/fish option correlation highlights the potential for welfare losses from substitution towards small-bodied animals from menu changes as well as shifts in consumer preferences.Given the study didn’t feature very prominent meat analogues, the absence of a correlation in this reanalysis cannot credibly be used to refute a belief that high-quality analogues play an important role in reducing meat consumption. But when coupled with the strong correlation on an additional meat-free option, we think the reanalysis highlights the need for further research on the most effective ways to encourage selection of meat-free meals. It remains an open question whether, at the margin, it would be more cost-effective to advocate for more menu options featuring meat-analogues specifically, or for more meat-free options of any kind.You can read the full post on the Rethink Priorities website, and also see the pre-print and code via the Open Science Framework..Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How meat-free meal selection varies with menu options: an exploration, published by Sagar K Shah on February 14, 2023 on The Effective Altruism Forum.SummaryIncreasing consumption of meat-free meals can help reduce demand for factory farmed animal products and anthropogenic greenhouse gas emissions. But relatively little research has been done on how meat-free meal selection is influenced by menu options, such as the availability of meat-analogue options or different types of meat.We conducted a preregistered reanalysis of data from a series of hypothetical discrete choice experiments from Brachem et al. (2019). We explored how meat-free meal selection by 1348 respondents (mostly German students) varied across 26 different menus, depending on the number of meat-free options and whether any options contained fish/poultry meat or meat-analogues. Menus consisted of five options (of which, two or three were meat-free) and were composed using images and descriptions of actual dishes available at restaurants at the University of Göttingen.While our work was motivated by causal hypotheses, our reanalysis was limited to detecting correlations and not causal effects. Specific limitations include:Examining hypotheses that the original study was not designed to evaluate.De facto observational design, despite blinded randomization in the original study.Possible non-random correlations between the presence of poultry/fish or meat-analogue menu options and the appealingness of other dishes.Analysis of self-reported, hypothetical meal preferences, rather than actual behavior.Meat-analogues in menus not reflecting prominent products attracting significant financial investment.Notwithstanding, our reanalysis found meat-free meal selection odds were:higher among menus with an extra meat-free option (odds ratio of 2.3, 90% CI [1.8 to 3.0]).lower among menus featuring poultry or fish options (odds ratio of 0.7, 90% CI [0.6 to 0.9]).not significantly associated with the presence of meat-analogues on a menu (odds ratio of 1.2 (90% CI [0.9 to 1.6])) in our preregistered meat-analogue definition. Estimates varied across analogue definitions, but were never significantly different from 1.Despite the many limitations, these findings might slightly update our beliefs to the extent we believe correlations would be expected if causation were occurring.The poultry/fish option correlation highlights the potential for welfare losses from substitution towards small-bodied animals from menu changes as well as shifts in consumer preferences.Given the study didn’t feature very prominent meat analogues, the absence of a correlation in this reanalysis cannot credibly be used to refute a belief that high-quality analogues play an important role in reducing meat consumption. But when coupled with the strong correlation on an additional meat-free option, we think the reanalysis highlights the need for further research on the most effective ways to encourage selection of meat-free meals. It remains an open question whether, at the margin, it would be more cost-effective to advocate for more menu options featuring meat-analogues specifically, or for more meat-free options of any kind.You can read the full post on the Rethink Priorities website, and also see the pre-print and code via the Open Science Framework..Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sagar K Shah https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:28 None full 4877
PNv9QRvruPMfhE8xW_NL_EA_EA EA - 4 ways to think about democratizing AI [GovAI Linkpost] by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 4 ways to think about democratizing AI [GovAI Linkpost], published by Akash on February 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/PNv9QRvruPMfhE8xW/4-ways-to-think-about-democratizing-ai-govai-linkpost Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 4 ways to think about democratizing AI [GovAI Linkpost], published by Akash on February 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 14 Feb 2023 17:11:11 +0000 EA - 4 ways to think about democratizing AI [GovAI Linkpost] by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 4 ways to think about democratizing AI [GovAI Linkpost], published by Akash on February 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 4 ways to think about democratizing AI [GovAI Linkpost], published by Akash on February 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:29 None full 4879
JSMveartaeBMH4mTH_NL_EA_EA EA - New EA Podcast: Critiques of EA by Nick Anyos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New EA Podcast: Critiques of EA, published by Nick Anyos on February 13, 2023 on The Effective Altruism Forum.Emotional Status: I started working on this project before the FTX collapse and all subsequent controversies and drama. I notice an internal sense that I am "piling on" or "kicking EA while it's down." This isn't my intention, and I understand if a person reading this feels burned out on EA criticisms and would rather focus on object level forum posts right now.I have just released the first three episodes of a new interview podcast on criticisms of EA:Democratizing Risk and EA with Carla Zoe Cremer and Luke KempExpected Value and Critical Rationalism with Vaden Masrani and Ben ChuggIs EA an Ideology? with James FodorI am in the process of contacting potential guests for future episodes, and would love any suggestions on who I should interview next. Here is an anonymous feedback form that you can use to tell me anything you don't want to write in a comment.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nick Anyos https://forum.effectivealtruism.org/posts/JSMveartaeBMH4mTH/new-ea-podcast-critiques-of-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New EA Podcast: Critiques of EA, published by Nick Anyos on February 13, 2023 on The Effective Altruism Forum.Emotional Status: I started working on this project before the FTX collapse and all subsequent controversies and drama. I notice an internal sense that I am "piling on" or "kicking EA while it's down." This isn't my intention, and I understand if a person reading this feels burned out on EA criticisms and would rather focus on object level forum posts right now.I have just released the first three episodes of a new interview podcast on criticisms of EA:Democratizing Risk and EA with Carla Zoe Cremer and Luke KempExpected Value and Critical Rationalism with Vaden Masrani and Ben ChuggIs EA an Ideology? with James FodorI am in the process of contacting potential guests for future episodes, and would love any suggestions on who I should interview next. Here is an anonymous feedback form that you can use to tell me anything you don't want to write in a comment.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 14 Feb 2023 14:17:02 +0000 EA - New EA Podcast: Critiques of EA by Nick Anyos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New EA Podcast: Critiques of EA, published by Nick Anyos on February 13, 2023 on The Effective Altruism Forum.Emotional Status: I started working on this project before the FTX collapse and all subsequent controversies and drama. I notice an internal sense that I am "piling on" or "kicking EA while it's down." This isn't my intention, and I understand if a person reading this feels burned out on EA criticisms and would rather focus on object level forum posts right now.I have just released the first three episodes of a new interview podcast on criticisms of EA:Democratizing Risk and EA with Carla Zoe Cremer and Luke KempExpected Value and Critical Rationalism with Vaden Masrani and Ben ChuggIs EA an Ideology? with James FodorI am in the process of contacting potential guests for future episodes, and would love any suggestions on who I should interview next. Here is an anonymous feedback form that you can use to tell me anything you don't want to write in a comment.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New EA Podcast: Critiques of EA, published by Nick Anyos on February 13, 2023 on The Effective Altruism Forum.Emotional Status: I started working on this project before the FTX collapse and all subsequent controversies and drama. I notice an internal sense that I am "piling on" or "kicking EA while it's down." This isn't my intention, and I understand if a person reading this feels burned out on EA criticisms and would rather focus on object level forum posts right now.I have just released the first three episodes of a new interview podcast on criticisms of EA:Democratizing Risk and EA with Carla Zoe Cremer and Luke KempExpected Value and Critical Rationalism with Vaden Masrani and Ben ChuggIs EA an Ideology? with James FodorI am in the process of contacting potential guests for future episodes, and would love any suggestions on who I should interview next. Here is an anonymous feedback form that you can use to tell me anything you don't want to write in a comment.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nick Anyos https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:14 None full 4878
qAdApGsDHxwbNQsLH_NL_EA_EA EA - Valentine's Day fundraiser for FEM by GraceAdams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Valentine's Day fundraiser for FEM, published by GraceAdams on February 14, 2023 on The Effective Altruism Forum.Epistemic status: feeling the loveThis Valentine's Day, I'm donating the cost of a date night to Family Empowerment Media (FEM) and would love you to join me! 🥰What better way to celebrate love than promoting family planning to prevent unintended pregnancies?Family Empowerment Media (FEM) is an evidence-driven nonprofit committed to eliminating maternal deaths and other health burdens from unintended pregnancies. FEM produces and air radio-based social and behavioural change campaigns on family planning to empower women and men who want to delay or prevent pregnancy to consistently use contraception.Read more about FEMVisit FEM's websiteJoin me in donating to FEM here:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
GraceAdams https://forum.effectivealtruism.org/posts/qAdApGsDHxwbNQsLH/valentine-s-day-fundraiser-for-fem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Valentine's Day fundraiser for FEM, published by GraceAdams on February 14, 2023 on The Effective Altruism Forum.Epistemic status: feeling the loveThis Valentine's Day, I'm donating the cost of a date night to Family Empowerment Media (FEM) and would love you to join me! 🥰What better way to celebrate love than promoting family planning to prevent unintended pregnancies?Family Empowerment Media (FEM) is an evidence-driven nonprofit committed to eliminating maternal deaths and other health burdens from unintended pregnancies. FEM produces and air radio-based social and behavioural change campaigns on family planning to empower women and men who want to delay or prevent pregnancy to consistently use contraception.Read more about FEMVisit FEM's websiteJoin me in donating to FEM here:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 14 Feb 2023 08:59:14 +0000 EA - Valentine's Day fundraiser for FEM by GraceAdams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Valentine's Day fundraiser for FEM, published by GraceAdams on February 14, 2023 on The Effective Altruism Forum.Epistemic status: feeling the loveThis Valentine's Day, I'm donating the cost of a date night to Family Empowerment Media (FEM) and would love you to join me! 🥰What better way to celebrate love than promoting family planning to prevent unintended pregnancies?Family Empowerment Media (FEM) is an evidence-driven nonprofit committed to eliminating maternal deaths and other health burdens from unintended pregnancies. FEM produces and air radio-based social and behavioural change campaigns on family planning to empower women and men who want to delay or prevent pregnancy to consistently use contraception.Read more about FEMVisit FEM's websiteJoin me in donating to FEM here:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Valentine's Day fundraiser for FEM, published by GraceAdams on February 14, 2023 on The Effective Altruism Forum.Epistemic status: feeling the loveThis Valentine's Day, I'm donating the cost of a date night to Family Empowerment Media (FEM) and would love you to join me! 🥰What better way to celebrate love than promoting family planning to prevent unintended pregnancies?Family Empowerment Media (FEM) is an evidence-driven nonprofit committed to eliminating maternal deaths and other health burdens from unintended pregnancies. FEM produces and air radio-based social and behavioural change campaigns on family planning to empower women and men who want to delay or prevent pregnancy to consistently use contraception.Read more about FEMVisit FEM's websiteJoin me in donating to FEM here:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
GraceAdams https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:06 None full 4866
JnijsXwYDCJDwRcuc_NL_EA_EA EA - Elements of Rationalist Discourse by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Elements of Rationalist Discourse, published by RobBensinger on February 14, 2023 on The Effective Altruism Forum.I liked Duncan Sabien's Basics of Rationalist Discourse, but it felt somewhat different from what my brain thinks of as "the basics of rationalist discourse". So I decided to write down my own version (which overlaps some with Duncan's).Probably this new version also won't match "the basics" as other people perceive them. People may not even agree that these are all good ideas! Partly I'm posting these just out of curiosity about what the delta is between my perspective on rationalist discourse and y'alls perspectives.The basics of rationalist discourse, as I understand them:1. Truth-Seeking. Try to contribute to a social environment that encourages belief accuracy and good epistemic processes.Try not to “win” arguments using symmetric weapons (tools that work similarly well whether you're right or wrong). Indeed, try not to treat arguments like soldiers at all:Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back.Instead, treat arguments like scouts: tools for better understanding reality.2. Non-Violence: The response to "argument" is "counter-argument". The response to arguments is never bullets. The response to arguments is never doxxing, or death threats, or coercion.3. Non-Deception. Never try to steer your conversation partners (or onlookers) toward having falser models.Additionally, try to avoid things that will (on average) mislead readers as a side-effect of some other thing you're trying to do. Where possible, avoid saying things that you expect to lower the net belief accuracy of the average person you're communicating with; or failing that, at least flag that you're worried about this happening.As a corollary:3.1. Meta-Honesty. Make it easy for others to tell how honest, literal, PR-y, etc. you are (in general, or in particular contexts). This can include everything from "prominently publicly discussing the sorts of situations in which you'd lie" to "tweaking your image/persona/tone/etc. to make it likelier that people will have the right priors about your honesty".4. Localizability. A common way groups end up stuck with false beliefs is that, e.g., two rival political factions will exist—call them the Blues and the Greens—and the Blues will believe some false generalization based on a bunch of smaller, less-obviously-important arguments or pieces of evidence.The key to reaching the truth will be for the Blues to nitpick their data points more: encourage people to point out local errors, ways a data point is unrepresentative, etc. But this never ends up happening, because (a) each individual data point isn't obviously crucial, so it feels like nitpicking; and (b) worse, pushing back in a nitpicky way will make your fellow Blues suspect you of disloyalty, or even of harboring secret sympathies for the evil Greens.The result is that pushback on local claims feels socially risky, so it happens a lot less in venues where the Blues are paying attention; and when someone does work up the courage to object or cite contrary evidence, the other Blues are excessively skeptical.Moreover, this process tends to exacerbate itself over time: the more the Blues and Greens each do this, the more extreme their views will become, which reinforces the other side's impression "wow our enemies are extreme!". And the more this happens, the more likely it becomes that someone raising concerns or criticisms is secretly disloyal, because in fact you've created a hostile discourse environment where it's hard for people to justify bringing up objections if their goal is merely curiosity.By...]]>
RobBensinger https://forum.effectivealtruism.org/posts/JnijsXwYDCJDwRcuc/elements-of-rationalist-discourse Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Elements of Rationalist Discourse, published by RobBensinger on February 14, 2023 on The Effective Altruism Forum.I liked Duncan Sabien's Basics of Rationalist Discourse, but it felt somewhat different from what my brain thinks of as "the basics of rationalist discourse". So I decided to write down my own version (which overlaps some with Duncan's).Probably this new version also won't match "the basics" as other people perceive them. People may not even agree that these are all good ideas! Partly I'm posting these just out of curiosity about what the delta is between my perspective on rationalist discourse and y'alls perspectives.The basics of rationalist discourse, as I understand them:1. Truth-Seeking. Try to contribute to a social environment that encourages belief accuracy and good epistemic processes.Try not to “win” arguments using symmetric weapons (tools that work similarly well whether you're right or wrong). Indeed, try not to treat arguments like soldiers at all:Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back.Instead, treat arguments like scouts: tools for better understanding reality.2. Non-Violence: The response to "argument" is "counter-argument". The response to arguments is never bullets. The response to arguments is never doxxing, or death threats, or coercion.3. Non-Deception. Never try to steer your conversation partners (or onlookers) toward having falser models.Additionally, try to avoid things that will (on average) mislead readers as a side-effect of some other thing you're trying to do. Where possible, avoid saying things that you expect to lower the net belief accuracy of the average person you're communicating with; or failing that, at least flag that you're worried about this happening.As a corollary:3.1. Meta-Honesty. Make it easy for others to tell how honest, literal, PR-y, etc. you are (in general, or in particular contexts). This can include everything from "prominently publicly discussing the sorts of situations in which you'd lie" to "tweaking your image/persona/tone/etc. to make it likelier that people will have the right priors about your honesty".4. Localizability. A common way groups end up stuck with false beliefs is that, e.g., two rival political factions will exist—call them the Blues and the Greens—and the Blues will believe some false generalization based on a bunch of smaller, less-obviously-important arguments or pieces of evidence.The key to reaching the truth will be for the Blues to nitpick their data points more: encourage people to point out local errors, ways a data point is unrepresentative, etc. But this never ends up happening, because (a) each individual data point isn't obviously crucial, so it feels like nitpicking; and (b) worse, pushing back in a nitpicky way will make your fellow Blues suspect you of disloyalty, or even of harboring secret sympathies for the evil Greens.The result is that pushback on local claims feels socially risky, so it happens a lot less in venues where the Blues are paying attention; and when someone does work up the courage to object or cite contrary evidence, the other Blues are excessively skeptical.Moreover, this process tends to exacerbate itself over time: the more the Blues and Greens each do this, the more extreme their views will become, which reinforces the other side's impression "wow our enemies are extreme!". And the more this happens, the more likely it becomes that someone raising concerns or criticisms is secretly disloyal, because in fact you've created a hostile discourse environment where it's hard for people to justify bringing up objections if their goal is merely curiosity.By...]]>
Tue, 14 Feb 2023 08:21:49 +0000 EA - Elements of Rationalist Discourse by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Elements of Rationalist Discourse, published by RobBensinger on February 14, 2023 on The Effective Altruism Forum.I liked Duncan Sabien's Basics of Rationalist Discourse, but it felt somewhat different from what my brain thinks of as "the basics of rationalist discourse". So I decided to write down my own version (which overlaps some with Duncan's).Probably this new version also won't match "the basics" as other people perceive them. People may not even agree that these are all good ideas! Partly I'm posting these just out of curiosity about what the delta is between my perspective on rationalist discourse and y'alls perspectives.The basics of rationalist discourse, as I understand them:1. Truth-Seeking. Try to contribute to a social environment that encourages belief accuracy and good epistemic processes.Try not to “win” arguments using symmetric weapons (tools that work similarly well whether you're right or wrong). Indeed, try not to treat arguments like soldiers at all:Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back.Instead, treat arguments like scouts: tools for better understanding reality.2. Non-Violence: The response to "argument" is "counter-argument". The response to arguments is never bullets. The response to arguments is never doxxing, or death threats, or coercion.3. Non-Deception. Never try to steer your conversation partners (or onlookers) toward having falser models.Additionally, try to avoid things that will (on average) mislead readers as a side-effect of some other thing you're trying to do. Where possible, avoid saying things that you expect to lower the net belief accuracy of the average person you're communicating with; or failing that, at least flag that you're worried about this happening.As a corollary:3.1. Meta-Honesty. Make it easy for others to tell how honest, literal, PR-y, etc. you are (in general, or in particular contexts). This can include everything from "prominently publicly discussing the sorts of situations in which you'd lie" to "tweaking your image/persona/tone/etc. to make it likelier that people will have the right priors about your honesty".4. Localizability. A common way groups end up stuck with false beliefs is that, e.g., two rival political factions will exist—call them the Blues and the Greens—and the Blues will believe some false generalization based on a bunch of smaller, less-obviously-important arguments or pieces of evidence.The key to reaching the truth will be for the Blues to nitpick their data points more: encourage people to point out local errors, ways a data point is unrepresentative, etc. But this never ends up happening, because (a) each individual data point isn't obviously crucial, so it feels like nitpicking; and (b) worse, pushing back in a nitpicky way will make your fellow Blues suspect you of disloyalty, or even of harboring secret sympathies for the evil Greens.The result is that pushback on local claims feels socially risky, so it happens a lot less in venues where the Blues are paying attention; and when someone does work up the courage to object or cite contrary evidence, the other Blues are excessively skeptical.Moreover, this process tends to exacerbate itself over time: the more the Blues and Greens each do this, the more extreme their views will become, which reinforces the other side's impression "wow our enemies are extreme!". And the more this happens, the more likely it becomes that someone raising concerns or criticisms is secretly disloyal, because in fact you've created a hostile discourse environment where it's hard for people to justify bringing up objections if their goal is merely curiosity.By...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Elements of Rationalist Discourse, published by RobBensinger on February 14, 2023 on The Effective Altruism Forum.I liked Duncan Sabien's Basics of Rationalist Discourse, but it felt somewhat different from what my brain thinks of as "the basics of rationalist discourse". So I decided to write down my own version (which overlaps some with Duncan's).Probably this new version also won't match "the basics" as other people perceive them. People may not even agree that these are all good ideas! Partly I'm posting these just out of curiosity about what the delta is between my perspective on rationalist discourse and y'alls perspectives.The basics of rationalist discourse, as I understand them:1. Truth-Seeking. Try to contribute to a social environment that encourages belief accuracy and good epistemic processes.Try not to “win” arguments using symmetric weapons (tools that work similarly well whether you're right or wrong). Indeed, try not to treat arguments like soldiers at all:Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back.Instead, treat arguments like scouts: tools for better understanding reality.2. Non-Violence: The response to "argument" is "counter-argument". The response to arguments is never bullets. The response to arguments is never doxxing, or death threats, or coercion.3. Non-Deception. Never try to steer your conversation partners (or onlookers) toward having falser models.Additionally, try to avoid things that will (on average) mislead readers as a side-effect of some other thing you're trying to do. Where possible, avoid saying things that you expect to lower the net belief accuracy of the average person you're communicating with; or failing that, at least flag that you're worried about this happening.As a corollary:3.1. Meta-Honesty. Make it easy for others to tell how honest, literal, PR-y, etc. you are (in general, or in particular contexts). This can include everything from "prominently publicly discussing the sorts of situations in which you'd lie" to "tweaking your image/persona/tone/etc. to make it likelier that people will have the right priors about your honesty".4. Localizability. A common way groups end up stuck with false beliefs is that, e.g., two rival political factions will exist—call them the Blues and the Greens—and the Blues will believe some false generalization based on a bunch of smaller, less-obviously-important arguments or pieces of evidence.The key to reaching the truth will be for the Blues to nitpick their data points more: encourage people to point out local errors, ways a data point is unrepresentative, etc. But this never ends up happening, because (a) each individual data point isn't obviously crucial, so it feels like nitpicking; and (b) worse, pushing back in a nitpicky way will make your fellow Blues suspect you of disloyalty, or even of harboring secret sympathies for the evil Greens.The result is that pushback on local claims feels socially risky, so it happens a lot less in venues where the Blues are paying attention; and when someone does work up the courage to object or cite contrary evidence, the other Blues are excessively skeptical.Moreover, this process tends to exacerbate itself over time: the more the Blues and Greens each do this, the more extreme their views will become, which reinforces the other side's impression "wow our enemies are extreme!". And the more this happens, the more likely it becomes that someone raising concerns or criticisms is secretly disloyal, because in fact you've created a hostile discourse environment where it's hard for people to justify bringing up objections if their goal is merely curiosity.By...]]>
RobBensinger https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 20:38 None full 4867
2oBKh3eJj8uDJzsoA_NL_EA_EA EA - No injuries were reported by JulianHazell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No injuries were reported, published by JulianHazell on February 14, 2023 on The Effective Altruism Forum.100,000 chickens were killed in a fire at a Connecticut farm a few weeks ago. Normally, I can compartmentalise the horrors of factory farming reasonably well. But this story was eating at me. I felt a visceral sense of pain just from thinking about it, and then a profound sense of sadness at the thought of 100,000 chickens essentially being a rounding error compared to the overall size of the factory farming industry.I also felt disgusted at how flippant the world is to these sorts of tragedies — one article in particular noted that "no injuries were reported in the fire".Rather than sit by uncomfortably, I wanted to try turning these feelings into something productive, so I wrote a short story about it. As a content warning, this story contains graphic descriptions of suffering. If this sort of thing (understandably) bothers you, I'd probably suggest not reading.My hope is that purposefully writing as uncomfortable of a story as possible can help build intuition about how utterly repugnant factory farming is. I'm also curious if these kinds of short fictional stories might be helpful for spreading awareness of things like the harms of factory farming or the importance of protecting future generations.No injuries were reportedThe flames burned with primordial rage as they devoured the building’s wooden skeleton. She stared aimlessly at the smouldering walls as her breath came in searing gasps. All she felt in this moment was her drive to survive eclipsed by a sense of fear, all while a billowing plume of smoke enveloped her body. Heat and terror. Helplessness.Through strained eyes, she saw other shapes staggering and flailing in a chaotic stupor. Cries of panic and pain blended into an encompassing wall of sound. She tried to rise but could not. She was in company, yet all alone.No injuries were reported.She wasn’t aware of it, but soon she would be melted to a pile of flesh. A burnt clump of matter. Nameless and faceless. None would be the wiser.She did not have much of a sense of her place in the world. But she could feel, more than most would ever suspect. Millions of years of evolution attuned her nervous system to avoid pain, and to seek pleasure. In some senses, it was all she ever knew.The fire seared through her nerves, causing excruciating pain to radiate through her body. In this moment, she hadn’t a desire in the world, other than a primal urge to escape her torturous environment. But she was confined, and her fate was sealed.No injuries were reported.Despite her feeling of isolation, she was not alone in any sense of the word. She was surrounded by a hundred thousand others, who similarly wrestled with their impending deaths. Somewhere in the crowd lurked her own faint cries, a scream amid the greater tide. Soon, heart by heart, the screams and flutterings faded until only the fire roared. For an omniscient observer, the void left by those silenced hearts would scream louder than the uncontrollable flames, a chasm of nothing where much had once been.No injuries were reported.Less than an hour after the flame ignited, the barn was reduced to rubble. A pile of burnt flesh, bones, feathers, and faeces. It was probably a malfunctioning heating device that caused the blaze that killed over a hundred thousand chickens. But by the time the fire crew arrived, it was too late.The press soon began reaching out to the farm’s owners, in response to questions from the public of what caused the massive column of smoke to fill the winter air. “Rest assured for folks who are concerned”, said the owners, “nobody was hurt in the fire”. The workers had already left for the day, and the flames were confined to the barn.Thankfully, no injuries were report...]]>
JulianHazell https://forum.effectivealtruism.org/posts/2oBKh3eJj8uDJzsoA/no-injuries-were-reported Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No injuries were reported, published by JulianHazell on February 14, 2023 on The Effective Altruism Forum.100,000 chickens were killed in a fire at a Connecticut farm a few weeks ago. Normally, I can compartmentalise the horrors of factory farming reasonably well. But this story was eating at me. I felt a visceral sense of pain just from thinking about it, and then a profound sense of sadness at the thought of 100,000 chickens essentially being a rounding error compared to the overall size of the factory farming industry.I also felt disgusted at how flippant the world is to these sorts of tragedies — one article in particular noted that "no injuries were reported in the fire".Rather than sit by uncomfortably, I wanted to try turning these feelings into something productive, so I wrote a short story about it. As a content warning, this story contains graphic descriptions of suffering. If this sort of thing (understandably) bothers you, I'd probably suggest not reading.My hope is that purposefully writing as uncomfortable of a story as possible can help build intuition about how utterly repugnant factory farming is. I'm also curious if these kinds of short fictional stories might be helpful for spreading awareness of things like the harms of factory farming or the importance of protecting future generations.No injuries were reportedThe flames burned with primordial rage as they devoured the building’s wooden skeleton. She stared aimlessly at the smouldering walls as her breath came in searing gasps. All she felt in this moment was her drive to survive eclipsed by a sense of fear, all while a billowing plume of smoke enveloped her body. Heat and terror. Helplessness.Through strained eyes, she saw other shapes staggering and flailing in a chaotic stupor. Cries of panic and pain blended into an encompassing wall of sound. She tried to rise but could not. She was in company, yet all alone.No injuries were reported.She wasn’t aware of it, but soon she would be melted to a pile of flesh. A burnt clump of matter. Nameless and faceless. None would be the wiser.She did not have much of a sense of her place in the world. But she could feel, more than most would ever suspect. Millions of years of evolution attuned her nervous system to avoid pain, and to seek pleasure. In some senses, it was all she ever knew.The fire seared through her nerves, causing excruciating pain to radiate through her body. In this moment, she hadn’t a desire in the world, other than a primal urge to escape her torturous environment. But she was confined, and her fate was sealed.No injuries were reported.Despite her feeling of isolation, she was not alone in any sense of the word. She was surrounded by a hundred thousand others, who similarly wrestled with their impending deaths. Somewhere in the crowd lurked her own faint cries, a scream amid the greater tide. Soon, heart by heart, the screams and flutterings faded until only the fire roared. For an omniscient observer, the void left by those silenced hearts would scream louder than the uncontrollable flames, a chasm of nothing where much had once been.No injuries were reported.Less than an hour after the flame ignited, the barn was reduced to rubble. A pile of burnt flesh, bones, feathers, and faeces. It was probably a malfunctioning heating device that caused the blaze that killed over a hundred thousand chickens. But by the time the fire crew arrived, it was too late.The press soon began reaching out to the farm’s owners, in response to questions from the public of what caused the massive column of smoke to fill the winter air. “Rest assured for folks who are concerned”, said the owners, “nobody was hurt in the fire”. The workers had already left for the day, and the flames were confined to the barn.Thankfully, no injuries were report...]]>
Tue, 14 Feb 2023 02:37:01 +0000 EA - No injuries were reported by JulianHazell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No injuries were reported, published by JulianHazell on February 14, 2023 on The Effective Altruism Forum.100,000 chickens were killed in a fire at a Connecticut farm a few weeks ago. Normally, I can compartmentalise the horrors of factory farming reasonably well. But this story was eating at me. I felt a visceral sense of pain just from thinking about it, and then a profound sense of sadness at the thought of 100,000 chickens essentially being a rounding error compared to the overall size of the factory farming industry.I also felt disgusted at how flippant the world is to these sorts of tragedies — one article in particular noted that "no injuries were reported in the fire".Rather than sit by uncomfortably, I wanted to try turning these feelings into something productive, so I wrote a short story about it. As a content warning, this story contains graphic descriptions of suffering. If this sort of thing (understandably) bothers you, I'd probably suggest not reading.My hope is that purposefully writing as uncomfortable of a story as possible can help build intuition about how utterly repugnant factory farming is. I'm also curious if these kinds of short fictional stories might be helpful for spreading awareness of things like the harms of factory farming or the importance of protecting future generations.No injuries were reportedThe flames burned with primordial rage as they devoured the building’s wooden skeleton. She stared aimlessly at the smouldering walls as her breath came in searing gasps. All she felt in this moment was her drive to survive eclipsed by a sense of fear, all while a billowing plume of smoke enveloped her body. Heat and terror. Helplessness.Through strained eyes, she saw other shapes staggering and flailing in a chaotic stupor. Cries of panic and pain blended into an encompassing wall of sound. She tried to rise but could not. She was in company, yet all alone.No injuries were reported.She wasn’t aware of it, but soon she would be melted to a pile of flesh. A burnt clump of matter. Nameless and faceless. None would be the wiser.She did not have much of a sense of her place in the world. But she could feel, more than most would ever suspect. Millions of years of evolution attuned her nervous system to avoid pain, and to seek pleasure. In some senses, it was all she ever knew.The fire seared through her nerves, causing excruciating pain to radiate through her body. In this moment, she hadn’t a desire in the world, other than a primal urge to escape her torturous environment. But she was confined, and her fate was sealed.No injuries were reported.Despite her feeling of isolation, she was not alone in any sense of the word. She was surrounded by a hundred thousand others, who similarly wrestled with their impending deaths. Somewhere in the crowd lurked her own faint cries, a scream amid the greater tide. Soon, heart by heart, the screams and flutterings faded until only the fire roared. For an omniscient observer, the void left by those silenced hearts would scream louder than the uncontrollable flames, a chasm of nothing where much had once been.No injuries were reported.Less than an hour after the flame ignited, the barn was reduced to rubble. A pile of burnt flesh, bones, feathers, and faeces. It was probably a malfunctioning heating device that caused the blaze that killed over a hundred thousand chickens. But by the time the fire crew arrived, it was too late.The press soon began reaching out to the farm’s owners, in response to questions from the public of what caused the massive column of smoke to fill the winter air. “Rest assured for folks who are concerned”, said the owners, “nobody was hurt in the fire”. The workers had already left for the day, and the flames were confined to the barn.Thankfully, no injuries were report...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No injuries were reported, published by JulianHazell on February 14, 2023 on The Effective Altruism Forum.100,000 chickens were killed in a fire at a Connecticut farm a few weeks ago. Normally, I can compartmentalise the horrors of factory farming reasonably well. But this story was eating at me. I felt a visceral sense of pain just from thinking about it, and then a profound sense of sadness at the thought of 100,000 chickens essentially being a rounding error compared to the overall size of the factory farming industry.I also felt disgusted at how flippant the world is to these sorts of tragedies — one article in particular noted that "no injuries were reported in the fire".Rather than sit by uncomfortably, I wanted to try turning these feelings into something productive, so I wrote a short story about it. As a content warning, this story contains graphic descriptions of suffering. If this sort of thing (understandably) bothers you, I'd probably suggest not reading.My hope is that purposefully writing as uncomfortable of a story as possible can help build intuition about how utterly repugnant factory farming is. I'm also curious if these kinds of short fictional stories might be helpful for spreading awareness of things like the harms of factory farming or the importance of protecting future generations.No injuries were reportedThe flames burned with primordial rage as they devoured the building’s wooden skeleton. She stared aimlessly at the smouldering walls as her breath came in searing gasps. All she felt in this moment was her drive to survive eclipsed by a sense of fear, all while a billowing plume of smoke enveloped her body. Heat and terror. Helplessness.Through strained eyes, she saw other shapes staggering and flailing in a chaotic stupor. Cries of panic and pain blended into an encompassing wall of sound. She tried to rise but could not. She was in company, yet all alone.No injuries were reported.She wasn’t aware of it, but soon she would be melted to a pile of flesh. A burnt clump of matter. Nameless and faceless. None would be the wiser.She did not have much of a sense of her place in the world. But she could feel, more than most would ever suspect. Millions of years of evolution attuned her nervous system to avoid pain, and to seek pleasure. In some senses, it was all she ever knew.The fire seared through her nerves, causing excruciating pain to radiate through her body. In this moment, she hadn’t a desire in the world, other than a primal urge to escape her torturous environment. But she was confined, and her fate was sealed.No injuries were reported.Despite her feeling of isolation, she was not alone in any sense of the word. She was surrounded by a hundred thousand others, who similarly wrestled with their impending deaths. Somewhere in the crowd lurked her own faint cries, a scream amid the greater tide. Soon, heart by heart, the screams and flutterings faded until only the fire roared. For an omniscient observer, the void left by those silenced hearts would scream louder than the uncontrollable flames, a chasm of nothing where much had once been.No injuries were reported.Less than an hour after the flame ignited, the barn was reduced to rubble. A pile of burnt flesh, bones, feathers, and faeces. It was probably a malfunctioning heating device that caused the blaze that killed over a hundred thousand chickens. But by the time the fire crew arrived, it was too late.The press soon began reaching out to the farm’s owners, in response to questions from the public of what caused the massive column of smoke to fill the winter air. “Rest assured for folks who are concerned”, said the owners, “nobody was hurt in the fire”. The workers had already left for the day, and the flames were confined to the barn.Thankfully, no injuries were report...]]>
JulianHazell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:45 None full 4868
A9dS2AvNpG5FqxdR9_NL_EA_EA EA - Rethink Priorities is inviting expressions of interest for (co)leading a longtermist project/organization incubator by Jam Kraprayoon Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is inviting expressions of interest for (co)leading a longtermist project/organization incubator, published by Jam Kraprayoon on February 13, 2023 on The Effective Altruism Forum.The General Longtermism Team at Rethink Priorities is currently considering creating a "Longtermist Incubator" program and is accepting expression of interest submissions for a project lead/ co-lead to run the program if it’s launched.If you’d be interested in leading a longtermist incubator as part of Rethink Priorities and fit the profile described below, we encourage you to complete this general interest application. Please click "Apply Now" to respond to the brief prompts in the application form and submit your CV.We will keep your profile on our system, and as this is an expression of interest for an exploratory role, we will only contact you when we have general updates related to this position, or if there could be a suitable match, or as relevant opportunities arise. Please note that we have not fully committed to a particular format for a longtermist incubator or to hiring for this program, and given this, please don’t worry if you don’t get a quick update or a response from us. In the meantime, we'd be happy to refer you to other roles within RP or at related groups, where applicable, if you have given permission for us to do so in your application form.Application Deadline: There is currently no deadline, and we anticipate that we will be accepting expressions of interest into mid-late March.However, since the main goal of this form is to gauge interest and availability for this position, if you're interested in applying, we'd appreciate it if you could apply by February 28 in line with our planning efforts for this project.Contact: Please email jam@rethinkpriorities.org if you have any questions.About the Position and the Longtermist Incubator ProgramIn 2022, Rethink Priorities’ General Longtermism team focused on facilitating the faster and better creation of longtermist projects (see here). In 2023, we are exploring ways to improve this process, including creating a more formalized ‘incubator’ program within the organization. Broadly, by longtermist incubator program, we mean a program that finds and matches potential founders with longtermist project ideas and provides support so they can get these projects off the ground. We have not settled on the exact model for this yet, but we anticipate that a key bottleneck will be the availability of leads/co-leads with experience running/managing incubator programs.This lead/co-lead position would initially be a fixed-term role for 6-12 months to manage the pilot of this incubator program. Whether the program would be continued would be decided only after the pilot is completed. In case of discontinuation of the program, the contractor we hire for this experiment could potentially stay for other projects, although this is not guaranteed and whether this would be on a new fixed-term contract term or a different work arrangement would depend on the circumstances at that time. We expect this role to be full- or part-time, with a minimum time commitment of 20 hours per week. The role could be fully remote, but might require the contractor to be based in (most likely) London or Oxford for perhaps 2-8 weeks, depending on strategic decisions about the incubator pilot.This person, if the project goes through, would work closely with and have the support of our General Longtermism and Special Projects teams. The General Longtermism team focuses on generating, prioritizing among, and doing further research on longtermist projects and interventions.The Special Projects team is a new initiative that focuses on operations activities related to supporting these projects, such as developing budgets, writing business plan...]]>
Jam Kraprayoon https://forum.effectivealtruism.org/posts/A9dS2AvNpG5FqxdR9/rethink-priorities-is-inviting-expressions-of-interest-for Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is inviting expressions of interest for (co)leading a longtermist project/organization incubator, published by Jam Kraprayoon on February 13, 2023 on The Effective Altruism Forum.The General Longtermism Team at Rethink Priorities is currently considering creating a "Longtermist Incubator" program and is accepting expression of interest submissions for a project lead/ co-lead to run the program if it’s launched.If you’d be interested in leading a longtermist incubator as part of Rethink Priorities and fit the profile described below, we encourage you to complete this general interest application. Please click "Apply Now" to respond to the brief prompts in the application form and submit your CV.We will keep your profile on our system, and as this is an expression of interest for an exploratory role, we will only contact you when we have general updates related to this position, or if there could be a suitable match, or as relevant opportunities arise. Please note that we have not fully committed to a particular format for a longtermist incubator or to hiring for this program, and given this, please don’t worry if you don’t get a quick update or a response from us. In the meantime, we'd be happy to refer you to other roles within RP or at related groups, where applicable, if you have given permission for us to do so in your application form.Application Deadline: There is currently no deadline, and we anticipate that we will be accepting expressions of interest into mid-late March.However, since the main goal of this form is to gauge interest and availability for this position, if you're interested in applying, we'd appreciate it if you could apply by February 28 in line with our planning efforts for this project.Contact: Please email jam@rethinkpriorities.org if you have any questions.About the Position and the Longtermist Incubator ProgramIn 2022, Rethink Priorities’ General Longtermism team focused on facilitating the faster and better creation of longtermist projects (see here). In 2023, we are exploring ways to improve this process, including creating a more formalized ‘incubator’ program within the organization. Broadly, by longtermist incubator program, we mean a program that finds and matches potential founders with longtermist project ideas and provides support so they can get these projects off the ground. We have not settled on the exact model for this yet, but we anticipate that a key bottleneck will be the availability of leads/co-leads with experience running/managing incubator programs.This lead/co-lead position would initially be a fixed-term role for 6-12 months to manage the pilot of this incubator program. Whether the program would be continued would be decided only after the pilot is completed. In case of discontinuation of the program, the contractor we hire for this experiment could potentially stay for other projects, although this is not guaranteed and whether this would be on a new fixed-term contract term or a different work arrangement would depend on the circumstances at that time. We expect this role to be full- or part-time, with a minimum time commitment of 20 hours per week. The role could be fully remote, but might require the contractor to be based in (most likely) London or Oxford for perhaps 2-8 weeks, depending on strategic decisions about the incubator pilot.This person, if the project goes through, would work closely with and have the support of our General Longtermism and Special Projects teams. The General Longtermism team focuses on generating, prioritizing among, and doing further research on longtermist projects and interventions.The Special Projects team is a new initiative that focuses on operations activities related to supporting these projects, such as developing budgets, writing business plan...]]>
Mon, 13 Feb 2023 13:54:01 +0000 EA - Rethink Priorities is inviting expressions of interest for (co)leading a longtermist project/organization incubator by Jam Kraprayoon Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is inviting expressions of interest for (co)leading a longtermist project/organization incubator, published by Jam Kraprayoon on February 13, 2023 on The Effective Altruism Forum.The General Longtermism Team at Rethink Priorities is currently considering creating a "Longtermist Incubator" program and is accepting expression of interest submissions for a project lead/ co-lead to run the program if it’s launched.If you’d be interested in leading a longtermist incubator as part of Rethink Priorities and fit the profile described below, we encourage you to complete this general interest application. Please click "Apply Now" to respond to the brief prompts in the application form and submit your CV.We will keep your profile on our system, and as this is an expression of interest for an exploratory role, we will only contact you when we have general updates related to this position, or if there could be a suitable match, or as relevant opportunities arise. Please note that we have not fully committed to a particular format for a longtermist incubator or to hiring for this program, and given this, please don’t worry if you don’t get a quick update or a response from us. In the meantime, we'd be happy to refer you to other roles within RP or at related groups, where applicable, if you have given permission for us to do so in your application form.Application Deadline: There is currently no deadline, and we anticipate that we will be accepting expressions of interest into mid-late March.However, since the main goal of this form is to gauge interest and availability for this position, if you're interested in applying, we'd appreciate it if you could apply by February 28 in line with our planning efforts for this project.Contact: Please email jam@rethinkpriorities.org if you have any questions.About the Position and the Longtermist Incubator ProgramIn 2022, Rethink Priorities’ General Longtermism team focused on facilitating the faster and better creation of longtermist projects (see here). In 2023, we are exploring ways to improve this process, including creating a more formalized ‘incubator’ program within the organization. Broadly, by longtermist incubator program, we mean a program that finds and matches potential founders with longtermist project ideas and provides support so they can get these projects off the ground. We have not settled on the exact model for this yet, but we anticipate that a key bottleneck will be the availability of leads/co-leads with experience running/managing incubator programs.This lead/co-lead position would initially be a fixed-term role for 6-12 months to manage the pilot of this incubator program. Whether the program would be continued would be decided only after the pilot is completed. In case of discontinuation of the program, the contractor we hire for this experiment could potentially stay for other projects, although this is not guaranteed and whether this would be on a new fixed-term contract term or a different work arrangement would depend on the circumstances at that time. We expect this role to be full- or part-time, with a minimum time commitment of 20 hours per week. The role could be fully remote, but might require the contractor to be based in (most likely) London or Oxford for perhaps 2-8 weeks, depending on strategic decisions about the incubator pilot.This person, if the project goes through, would work closely with and have the support of our General Longtermism and Special Projects teams. The General Longtermism team focuses on generating, prioritizing among, and doing further research on longtermist projects and interventions.The Special Projects team is a new initiative that focuses on operations activities related to supporting these projects, such as developing budgets, writing business plan...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is inviting expressions of interest for (co)leading a longtermist project/organization incubator, published by Jam Kraprayoon on February 13, 2023 on The Effective Altruism Forum.The General Longtermism Team at Rethink Priorities is currently considering creating a "Longtermist Incubator" program and is accepting expression of interest submissions for a project lead/ co-lead to run the program if it’s launched.If you’d be interested in leading a longtermist incubator as part of Rethink Priorities and fit the profile described below, we encourage you to complete this general interest application. Please click "Apply Now" to respond to the brief prompts in the application form and submit your CV.We will keep your profile on our system, and as this is an expression of interest for an exploratory role, we will only contact you when we have general updates related to this position, or if there could be a suitable match, or as relevant opportunities arise. Please note that we have not fully committed to a particular format for a longtermist incubator or to hiring for this program, and given this, please don’t worry if you don’t get a quick update or a response from us. In the meantime, we'd be happy to refer you to other roles within RP or at related groups, where applicable, if you have given permission for us to do so in your application form.Application Deadline: There is currently no deadline, and we anticipate that we will be accepting expressions of interest into mid-late March.However, since the main goal of this form is to gauge interest and availability for this position, if you're interested in applying, we'd appreciate it if you could apply by February 28 in line with our planning efforts for this project.Contact: Please email jam@rethinkpriorities.org if you have any questions.About the Position and the Longtermist Incubator ProgramIn 2022, Rethink Priorities’ General Longtermism team focused on facilitating the faster and better creation of longtermist projects (see here). In 2023, we are exploring ways to improve this process, including creating a more formalized ‘incubator’ program within the organization. Broadly, by longtermist incubator program, we mean a program that finds and matches potential founders with longtermist project ideas and provides support so they can get these projects off the ground. We have not settled on the exact model for this yet, but we anticipate that a key bottleneck will be the availability of leads/co-leads with experience running/managing incubator programs.This lead/co-lead position would initially be a fixed-term role for 6-12 months to manage the pilot of this incubator program. Whether the program would be continued would be decided only after the pilot is completed. In case of discontinuation of the program, the contractor we hire for this experiment could potentially stay for other projects, although this is not guaranteed and whether this would be on a new fixed-term contract term or a different work arrangement would depend on the circumstances at that time. We expect this role to be full- or part-time, with a minimum time commitment of 20 hours per week. The role could be fully remote, but might require the contractor to be based in (most likely) London or Oxford for perhaps 2-8 weeks, depending on strategic decisions about the incubator pilot.This person, if the project goes through, would work closely with and have the support of our General Longtermism and Special Projects teams. The General Longtermism team focuses on generating, prioritizing among, and doing further research on longtermist projects and interventions.The Special Projects team is a new initiative that focuses on operations activities related to supporting these projects, such as developing budgets, writing business plan...]]>
Jam Kraprayoon https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:05 None full 4862
xWRweQmmEKoLFwGyu_NL_EA_EA EA - CE: Announcing our 2023 Charity Ideas. Apply now! by SteveThompson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE: Announcing our 2023 Charity Ideas. Apply now!, published by SteveThompson on February 13, 2023 on The Effective Altruism Forum.Apply now to start a nonprofit in Biosecurity or Large-Scale Global Health In this post we introduce our top five charity ideas for launch in 2023, in the areas of Biosecurity and Large-Scale Global Health. These are the result of five months’ work from our research team, and a six-stage iterative process that includes collaboration with partners and ideas from within and outside of the EA community.We’re looking for people to launch these ideas through our July - August 2023 Incubation Program. The deadline for applications is March 12, 2023. [APPLY NOW]We provide cost-covered two-month training, stipends, ongoing mentorship, and grants up to $200,000 per project. You can learn more on our website. We also invite you to join our event on February 20, 6PM UK Time. Sam Hilton, our Director of Research, will introduce the ideas and answer your questions. Sign up here.Disclaimers:In an effort to be brief we have sacrificed nuance, the details of our considerable uncertainties, and the downside risks that are discussed in the longer reports; these shall be published in the upcoming weeks.Please note that previous incubatees attest to the ideas becoming increasingly exciting over the course of the program.One-Sentence SummariesAn organization that works to prevent the growth of antimicrobial resistance by advocating for better (pull) funding mechanisms to drive R&D and responsible use of new antibiotics.An advocacy organization that promotes academic guidelines to restrict potentially harmful “dual-use” research.A charity that rolls out dual HIV/syphilis rapid diagnostic tests, penicillin, and training to antenatal clinics, to effectively tackle congenital syphilis at scale in low-and middle-income countries.An organization that distributes Oral Rehydration Solution and zinc co-packages to effectively treat life-threatening diarrhea in under five year olds in low-and middle-income countries.A charity that builds healthcare capacity to provide “Kangaroo Care”, an exceptionally simple and cost-effective treatment, to avert hundreds of thousands of newborn deaths each year in low-and middle-income countries.One-Paragraph Summaries1. Preventing growth of antimicrobial resistance by advocating for better (pull) funding mechanisms to drive R&D and responsible use of new antibioticsAntimicrobial resistance (AMR) is where harmful microbes develop the ability to resist treatment. Recent reports suggest that there are 47 million disability adjusted life years (DALYs) per annum attributed to antibiotic resistance (a key type of AMR). The current financial incentives for new antimicrobial drug development are insufficient, and only a handful of novel antibiotics have been brought to market in the last 30 years. There are well-developed proposals for new market mechanisms using a “subscription” model that would solve this issue. This new charity will advocate for these better market mechanisms, to incentivise the creation and responsible deployment of important new antimicrobial drugs. We estimate that government spending on new market mechanisms for antibiotics could lead to a DALY averted for $100-$150.2. Advocacy for restricting potentially harmful “dual-use” researchDual-use research of concern (DURC) is well-intended research which can lead to harm, either by accident or through deliberate misapplication of the technology. DURC could be a key risk factor for the occurrence of a particularly deadly pandemic during the next 50 years. This risk should be mitigatable with the development and adoption of good practice. We analyzed a dozen case studies of attempts to ensure research safety, ranging from animal testing to human cloning, and w...]]>
SteveThompson https://forum.effectivealtruism.org/posts/xWRweQmmEKoLFwGyu/ce-announcing-our-2023-charity-ideas-apply-now-2 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE: Announcing our 2023 Charity Ideas. Apply now!, published by SteveThompson on February 13, 2023 on The Effective Altruism Forum.Apply now to start a nonprofit in Biosecurity or Large-Scale Global Health In this post we introduce our top five charity ideas for launch in 2023, in the areas of Biosecurity and Large-Scale Global Health. These are the result of five months’ work from our research team, and a six-stage iterative process that includes collaboration with partners and ideas from within and outside of the EA community.We’re looking for people to launch these ideas through our July - August 2023 Incubation Program. The deadline for applications is March 12, 2023. [APPLY NOW]We provide cost-covered two-month training, stipends, ongoing mentorship, and grants up to $200,000 per project. You can learn more on our website. We also invite you to join our event on February 20, 6PM UK Time. Sam Hilton, our Director of Research, will introduce the ideas and answer your questions. Sign up here.Disclaimers:In an effort to be brief we have sacrificed nuance, the details of our considerable uncertainties, and the downside risks that are discussed in the longer reports; these shall be published in the upcoming weeks.Please note that previous incubatees attest to the ideas becoming increasingly exciting over the course of the program.One-Sentence SummariesAn organization that works to prevent the growth of antimicrobial resistance by advocating for better (pull) funding mechanisms to drive R&D and responsible use of new antibiotics.An advocacy organization that promotes academic guidelines to restrict potentially harmful “dual-use” research.A charity that rolls out dual HIV/syphilis rapid diagnostic tests, penicillin, and training to antenatal clinics, to effectively tackle congenital syphilis at scale in low-and middle-income countries.An organization that distributes Oral Rehydration Solution and zinc co-packages to effectively treat life-threatening diarrhea in under five year olds in low-and middle-income countries.A charity that builds healthcare capacity to provide “Kangaroo Care”, an exceptionally simple and cost-effective treatment, to avert hundreds of thousands of newborn deaths each year in low-and middle-income countries.One-Paragraph Summaries1. Preventing growth of antimicrobial resistance by advocating for better (pull) funding mechanisms to drive R&D and responsible use of new antibioticsAntimicrobial resistance (AMR) is where harmful microbes develop the ability to resist treatment. Recent reports suggest that there are 47 million disability adjusted life years (DALYs) per annum attributed to antibiotic resistance (a key type of AMR). The current financial incentives for new antimicrobial drug development are insufficient, and only a handful of novel antibiotics have been brought to market in the last 30 years. There are well-developed proposals for new market mechanisms using a “subscription” model that would solve this issue. This new charity will advocate for these better market mechanisms, to incentivise the creation and responsible deployment of important new antimicrobial drugs. We estimate that government spending on new market mechanisms for antibiotics could lead to a DALY averted for $100-$150.2. Advocacy for restricting potentially harmful “dual-use” researchDual-use research of concern (DURC) is well-intended research which can lead to harm, either by accident or through deliberate misapplication of the technology. DURC could be a key risk factor for the occurrence of a particularly deadly pandemic during the next 50 years. This risk should be mitigatable with the development and adoption of good practice. We analyzed a dozen case studies of attempts to ensure research safety, ranging from animal testing to human cloning, and w...]]>
Mon, 13 Feb 2023 13:30:43 +0000 EA - CE: Announcing our 2023 Charity Ideas. Apply now! by SteveThompson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE: Announcing our 2023 Charity Ideas. Apply now!, published by SteveThompson on February 13, 2023 on The Effective Altruism Forum.Apply now to start a nonprofit in Biosecurity or Large-Scale Global Health In this post we introduce our top five charity ideas for launch in 2023, in the areas of Biosecurity and Large-Scale Global Health. These are the result of five months’ work from our research team, and a six-stage iterative process that includes collaboration with partners and ideas from within and outside of the EA community.We’re looking for people to launch these ideas through our July - August 2023 Incubation Program. The deadline for applications is March 12, 2023. [APPLY NOW]We provide cost-covered two-month training, stipends, ongoing mentorship, and grants up to $200,000 per project. You can learn more on our website. We also invite you to join our event on February 20, 6PM UK Time. Sam Hilton, our Director of Research, will introduce the ideas and answer your questions. Sign up here.Disclaimers:In an effort to be brief we have sacrificed nuance, the details of our considerable uncertainties, and the downside risks that are discussed in the longer reports; these shall be published in the upcoming weeks.Please note that previous incubatees attest to the ideas becoming increasingly exciting over the course of the program.One-Sentence SummariesAn organization that works to prevent the growth of antimicrobial resistance by advocating for better (pull) funding mechanisms to drive R&D and responsible use of new antibiotics.An advocacy organization that promotes academic guidelines to restrict potentially harmful “dual-use” research.A charity that rolls out dual HIV/syphilis rapid diagnostic tests, penicillin, and training to antenatal clinics, to effectively tackle congenital syphilis at scale in low-and middle-income countries.An organization that distributes Oral Rehydration Solution and zinc co-packages to effectively treat life-threatening diarrhea in under five year olds in low-and middle-income countries.A charity that builds healthcare capacity to provide “Kangaroo Care”, an exceptionally simple and cost-effective treatment, to avert hundreds of thousands of newborn deaths each year in low-and middle-income countries.One-Paragraph Summaries1. Preventing growth of antimicrobial resistance by advocating for better (pull) funding mechanisms to drive R&D and responsible use of new antibioticsAntimicrobial resistance (AMR) is where harmful microbes develop the ability to resist treatment. Recent reports suggest that there are 47 million disability adjusted life years (DALYs) per annum attributed to antibiotic resistance (a key type of AMR). The current financial incentives for new antimicrobial drug development are insufficient, and only a handful of novel antibiotics have been brought to market in the last 30 years. There are well-developed proposals for new market mechanisms using a “subscription” model that would solve this issue. This new charity will advocate for these better market mechanisms, to incentivise the creation and responsible deployment of important new antimicrobial drugs. We estimate that government spending on new market mechanisms for antibiotics could lead to a DALY averted for $100-$150.2. Advocacy for restricting potentially harmful “dual-use” researchDual-use research of concern (DURC) is well-intended research which can lead to harm, either by accident or through deliberate misapplication of the technology. DURC could be a key risk factor for the occurrence of a particularly deadly pandemic during the next 50 years. This risk should be mitigatable with the development and adoption of good practice. We analyzed a dozen case studies of attempts to ensure research safety, ranging from animal testing to human cloning, and w...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE: Announcing our 2023 Charity Ideas. Apply now!, published by SteveThompson on February 13, 2023 on The Effective Altruism Forum.Apply now to start a nonprofit in Biosecurity or Large-Scale Global Health In this post we introduce our top five charity ideas for launch in 2023, in the areas of Biosecurity and Large-Scale Global Health. These are the result of five months’ work from our research team, and a six-stage iterative process that includes collaboration with partners and ideas from within and outside of the EA community.We’re looking for people to launch these ideas through our July - August 2023 Incubation Program. The deadline for applications is March 12, 2023. [APPLY NOW]We provide cost-covered two-month training, stipends, ongoing mentorship, and grants up to $200,000 per project. You can learn more on our website. We also invite you to join our event on February 20, 6PM UK Time. Sam Hilton, our Director of Research, will introduce the ideas and answer your questions. Sign up here.Disclaimers:In an effort to be brief we have sacrificed nuance, the details of our considerable uncertainties, and the downside risks that are discussed in the longer reports; these shall be published in the upcoming weeks.Please note that previous incubatees attest to the ideas becoming increasingly exciting over the course of the program.One-Sentence SummariesAn organization that works to prevent the growth of antimicrobial resistance by advocating for better (pull) funding mechanisms to drive R&D and responsible use of new antibiotics.An advocacy organization that promotes academic guidelines to restrict potentially harmful “dual-use” research.A charity that rolls out dual HIV/syphilis rapid diagnostic tests, penicillin, and training to antenatal clinics, to effectively tackle congenital syphilis at scale in low-and middle-income countries.An organization that distributes Oral Rehydration Solution and zinc co-packages to effectively treat life-threatening diarrhea in under five year olds in low-and middle-income countries.A charity that builds healthcare capacity to provide “Kangaroo Care”, an exceptionally simple and cost-effective treatment, to avert hundreds of thousands of newborn deaths each year in low-and middle-income countries.One-Paragraph Summaries1. Preventing growth of antimicrobial resistance by advocating for better (pull) funding mechanisms to drive R&D and responsible use of new antibioticsAntimicrobial resistance (AMR) is where harmful microbes develop the ability to resist treatment. Recent reports suggest that there are 47 million disability adjusted life years (DALYs) per annum attributed to antibiotic resistance (a key type of AMR). The current financial incentives for new antimicrobial drug development are insufficient, and only a handful of novel antibiotics have been brought to market in the last 30 years. There are well-developed proposals for new market mechanisms using a “subscription” model that would solve this issue. This new charity will advocate for these better market mechanisms, to incentivise the creation and responsible deployment of important new antimicrobial drugs. We estimate that government spending on new market mechanisms for antibiotics could lead to a DALY averted for $100-$150.2. Advocacy for restricting potentially harmful “dual-use” researchDual-use research of concern (DURC) is well-intended research which can lead to harm, either by accident or through deliberate misapplication of the technology. DURC could be a key risk factor for the occurrence of a particularly deadly pandemic during the next 50 years. This risk should be mitigatable with the development and adoption of good practice. We analyzed a dozen case studies of attempts to ensure research safety, ranging from animal testing to human cloning, and w...]]>
SteveThompson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 23:49 None full 4861
ajdhMQEe7e8nNagiM_NL_EA_EA EA - Polyamory and dating in the EA community by va Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polyamory and dating in the EA community, published by va on February 13, 2023 on The Effective Altruism Forum.Thank you to everyone who read this, commented and gave suggestions. This is very much an “it takes a village” kind of post. I’d love to hear more experiences to form a fuller picture- you can share your thoughts in the comments, or fill out this anonymous form. If I have capacity, I’ll aim to incorporate additions over the next few months. At the very least, I will share comments from the form as they come in (if I notice there are details that may de-anonymize folks I might remove those details).Please engage kindly in the comments on this post. I know that the topics raised here are going to be really important to a lot of people, but I do have faith in our community’s ability to have productive discussions on this topic.Why am I writing this?I’m writing this post because I feel there is (or was) an elephant in the room when it comes to discussions of polyamory (poly) in EA and adjacent forums that have largely focused on the Bay Area, and its implications for the EA community. I want to talk openly about my experiences as a woman who is poly and living in the Bay area. I’ve observed a number of conversations recently (and over the past year) where people have discussed issues in the community and specifically poly in misleading, confusing and sometimes hurtful ways. This frustrates me - I want people to make informed decisions on their involvement with the EA community based on accurate information. This post aims to inform people about poly in an approachable way, and discuss the challenges of polyamory in the EA community.For some context about me - I was monogamous for many years, and have been poly for some time now. I don’t think poly is right for everyone, and I’m not sure if it will always be right for me. I’m conflicted about my own role in complicating this issue, as someone who has (had) relationships with people in the community. The content in this post is based on my personal experiences (being monogamous and polyamorous), anecdotal observations and conversations with other (mostly highly engaged) community members. I also shared this post with several people who I think engaged in thoughtful discussions on the topic in the past (naturally, mistakes are mine).I hope this post will be a step on the path towards a more careful, thoughtful and kind community. This post aims to inform people about poly in an approachable way, and discuss the challenges of polyamory as far as I have experience including in the Bay-based EA community.I’m writing anonymously because 1) not everyone in my social network knows I am poly and 2) I expect that for at least some career paths I’m considering being openly poly and/or talking about it could make it harder to succeed in that domain.What is polyamory in the context of EA communities?If you’re unfamiliar with polyamory, I’ve written a very brief background with some common terminology in the appendix for reference.Most people in the EA community are not poly. Poly is most common (and sometimes assumed) in the Bay Area EA community where I’ve heard an estimate that maybe 60% of EAs are poly. The EA community in the Bay overlaps a lot with the rationality community, where poly is also fairly common. Some reviewers of this post suggested that early members of the rationality community thought people were monogamous by default, and if people designed their relationships intentionally a lot more people would be poly - see this 2010 LessWrong post and discussion as an example.Polyamory is fairly common in the Bay Area, but it’s more prevalent in the EA and rationalist communities than the Bay Area average. A higher percentage of the most engaged EAs are poly than the average in the community, and many eng...]]>
va https://forum.effectivealtruism.org/posts/ajdhMQEe7e8nNagiM/polyamory-and-dating-in-the-ea-community Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polyamory and dating in the EA community, published by va on February 13, 2023 on The Effective Altruism Forum.Thank you to everyone who read this, commented and gave suggestions. This is very much an “it takes a village” kind of post. I’d love to hear more experiences to form a fuller picture- you can share your thoughts in the comments, or fill out this anonymous form. If I have capacity, I’ll aim to incorporate additions over the next few months. At the very least, I will share comments from the form as they come in (if I notice there are details that may de-anonymize folks I might remove those details).Please engage kindly in the comments on this post. I know that the topics raised here are going to be really important to a lot of people, but I do have faith in our community’s ability to have productive discussions on this topic.Why am I writing this?I’m writing this post because I feel there is (or was) an elephant in the room when it comes to discussions of polyamory (poly) in EA and adjacent forums that have largely focused on the Bay Area, and its implications for the EA community. I want to talk openly about my experiences as a woman who is poly and living in the Bay area. I’ve observed a number of conversations recently (and over the past year) where people have discussed issues in the community and specifically poly in misleading, confusing and sometimes hurtful ways. This frustrates me - I want people to make informed decisions on their involvement with the EA community based on accurate information. This post aims to inform people about poly in an approachable way, and discuss the challenges of polyamory in the EA community.For some context about me - I was monogamous for many years, and have been poly for some time now. I don’t think poly is right for everyone, and I’m not sure if it will always be right for me. I’m conflicted about my own role in complicating this issue, as someone who has (had) relationships with people in the community. The content in this post is based on my personal experiences (being monogamous and polyamorous), anecdotal observations and conversations with other (mostly highly engaged) community members. I also shared this post with several people who I think engaged in thoughtful discussions on the topic in the past (naturally, mistakes are mine).I hope this post will be a step on the path towards a more careful, thoughtful and kind community. This post aims to inform people about poly in an approachable way, and discuss the challenges of polyamory as far as I have experience including in the Bay-based EA community.I’m writing anonymously because 1) not everyone in my social network knows I am poly and 2) I expect that for at least some career paths I’m considering being openly poly and/or talking about it could make it harder to succeed in that domain.What is polyamory in the context of EA communities?If you’re unfamiliar with polyamory, I’ve written a very brief background with some common terminology in the appendix for reference.Most people in the EA community are not poly. Poly is most common (and sometimes assumed) in the Bay Area EA community where I’ve heard an estimate that maybe 60% of EAs are poly. The EA community in the Bay overlaps a lot with the rationality community, where poly is also fairly common. Some reviewers of this post suggested that early members of the rationality community thought people were monogamous by default, and if people designed their relationships intentionally a lot more people would be poly - see this 2010 LessWrong post and discussion as an example.Polyamory is fairly common in the Bay Area, but it’s more prevalent in the EA and rationalist communities than the Bay Area average. A higher percentage of the most engaged EAs are poly than the average in the community, and many eng...]]>
Mon, 13 Feb 2023 08:52:20 +0000 EA - Polyamory and dating in the EA community by va Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polyamory and dating in the EA community, published by va on February 13, 2023 on The Effective Altruism Forum.Thank you to everyone who read this, commented and gave suggestions. This is very much an “it takes a village” kind of post. I’d love to hear more experiences to form a fuller picture- you can share your thoughts in the comments, or fill out this anonymous form. If I have capacity, I’ll aim to incorporate additions over the next few months. At the very least, I will share comments from the form as they come in (if I notice there are details that may de-anonymize folks I might remove those details).Please engage kindly in the comments on this post. I know that the topics raised here are going to be really important to a lot of people, but I do have faith in our community’s ability to have productive discussions on this topic.Why am I writing this?I’m writing this post because I feel there is (or was) an elephant in the room when it comes to discussions of polyamory (poly) in EA and adjacent forums that have largely focused on the Bay Area, and its implications for the EA community. I want to talk openly about my experiences as a woman who is poly and living in the Bay area. I’ve observed a number of conversations recently (and over the past year) where people have discussed issues in the community and specifically poly in misleading, confusing and sometimes hurtful ways. This frustrates me - I want people to make informed decisions on their involvement with the EA community based on accurate information. This post aims to inform people about poly in an approachable way, and discuss the challenges of polyamory in the EA community.For some context about me - I was monogamous for many years, and have been poly for some time now. I don’t think poly is right for everyone, and I’m not sure if it will always be right for me. I’m conflicted about my own role in complicating this issue, as someone who has (had) relationships with people in the community. The content in this post is based on my personal experiences (being monogamous and polyamorous), anecdotal observations and conversations with other (mostly highly engaged) community members. I also shared this post with several people who I think engaged in thoughtful discussions on the topic in the past (naturally, mistakes are mine).I hope this post will be a step on the path towards a more careful, thoughtful and kind community. This post aims to inform people about poly in an approachable way, and discuss the challenges of polyamory as far as I have experience including in the Bay-based EA community.I’m writing anonymously because 1) not everyone in my social network knows I am poly and 2) I expect that for at least some career paths I’m considering being openly poly and/or talking about it could make it harder to succeed in that domain.What is polyamory in the context of EA communities?If you’re unfamiliar with polyamory, I’ve written a very brief background with some common terminology in the appendix for reference.Most people in the EA community are not poly. Poly is most common (and sometimes assumed) in the Bay Area EA community where I’ve heard an estimate that maybe 60% of EAs are poly. The EA community in the Bay overlaps a lot with the rationality community, where poly is also fairly common. Some reviewers of this post suggested that early members of the rationality community thought people were monogamous by default, and if people designed their relationships intentionally a lot more people would be poly - see this 2010 LessWrong post and discussion as an example.Polyamory is fairly common in the Bay Area, but it’s more prevalent in the EA and rationalist communities than the Bay Area average. A higher percentage of the most engaged EAs are poly than the average in the community, and many eng...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polyamory and dating in the EA community, published by va on February 13, 2023 on The Effective Altruism Forum.Thank you to everyone who read this, commented and gave suggestions. This is very much an “it takes a village” kind of post. I’d love to hear more experiences to form a fuller picture- you can share your thoughts in the comments, or fill out this anonymous form. If I have capacity, I’ll aim to incorporate additions over the next few months. At the very least, I will share comments from the form as they come in (if I notice there are details that may de-anonymize folks I might remove those details).Please engage kindly in the comments on this post. I know that the topics raised here are going to be really important to a lot of people, but I do have faith in our community’s ability to have productive discussions on this topic.Why am I writing this?I’m writing this post because I feel there is (or was) an elephant in the room when it comes to discussions of polyamory (poly) in EA and adjacent forums that have largely focused on the Bay Area, and its implications for the EA community. I want to talk openly about my experiences as a woman who is poly and living in the Bay area. I’ve observed a number of conversations recently (and over the past year) where people have discussed issues in the community and specifically poly in misleading, confusing and sometimes hurtful ways. This frustrates me - I want people to make informed decisions on their involvement with the EA community based on accurate information. This post aims to inform people about poly in an approachable way, and discuss the challenges of polyamory in the EA community.For some context about me - I was monogamous for many years, and have been poly for some time now. I don’t think poly is right for everyone, and I’m not sure if it will always be right for me. I’m conflicted about my own role in complicating this issue, as someone who has (had) relationships with people in the community. The content in this post is based on my personal experiences (being monogamous and polyamorous), anecdotal observations and conversations with other (mostly highly engaged) community members. I also shared this post with several people who I think engaged in thoughtful discussions on the topic in the past (naturally, mistakes are mine).I hope this post will be a step on the path towards a more careful, thoughtful and kind community. This post aims to inform people about poly in an approachable way, and discuss the challenges of polyamory as far as I have experience including in the Bay-based EA community.I’m writing anonymously because 1) not everyone in my social network knows I am poly and 2) I expect that for at least some career paths I’m considering being openly poly and/or talking about it could make it harder to succeed in that domain.What is polyamory in the context of EA communities?If you’re unfamiliar with polyamory, I’ve written a very brief background with some common terminology in the appendix for reference.Most people in the EA community are not poly. Poly is most common (and sometimes assumed) in the Bay Area EA community where I’ve heard an estimate that maybe 60% of EAs are poly. The EA community in the Bay overlaps a lot with the rationality community, where poly is also fairly common. Some reviewers of this post suggested that early members of the rationality community thought people were monogamous by default, and if people designed their relationships intentionally a lot more people would be poly - see this 2010 LessWrong post and discussion as an example.Polyamory is fairly common in the Bay Area, but it’s more prevalent in the EA and rationalist communities than the Bay Area average. A higher percentage of the most engaged EAs are poly than the average in the community, and many eng...]]>
va https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:35 None full 4857
szfrzAQQ6CDfaisMr_NL_EA_EA EA - FYI there is a German institute studying sociological aspects of existential risk by Max Görlitz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FYI there is a German institute studying sociological aspects of existential risk, published by Max Görlitz on February 12, 2023 on The Effective Altruism Forum.The institute is called Käte Hamburger Centre for Apocalyptic and Post-Apocalyptic Studies and is based in Heidelberg, Germany. They started in 2021 and initially received 9 million € of funding from the German government for the first four years.AFAICT, they study sociological aspects of narratives of apocalypses, existential risks, and the end of the world.They have engaged with EA thinking, and I assume they will have an interesting outside perspective of some prevalent worldviews in EA. For example, here is a recorded talk about longtermism (I have only skipped through it so far), which mentions MIRI, FHI, and What We Owe The Future.I stumbled upon this today and thought it could interest some people here. Generally, I am very curious to learn more about alternative worldviews to EA that also engage with existential risk in epistemically sound ways. One criticism of EA that became more popular over the last months is that EA organizations engage too little with other disciplines and institutions with relevant expertise. Therefore, I suggest checking out the work of this Centre.Please comment if you have engaged with them before and know more than I do.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Max Görlitz https://forum.effectivealtruism.org/posts/szfrzAQQ6CDfaisMr/fyi-there-is-a-german-institute-studying-sociological Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FYI there is a German institute studying sociological aspects of existential risk, published by Max Görlitz on February 12, 2023 on The Effective Altruism Forum.The institute is called Käte Hamburger Centre for Apocalyptic and Post-Apocalyptic Studies and is based in Heidelberg, Germany. They started in 2021 and initially received 9 million € of funding from the German government for the first four years.AFAICT, they study sociological aspects of narratives of apocalypses, existential risks, and the end of the world.They have engaged with EA thinking, and I assume they will have an interesting outside perspective of some prevalent worldviews in EA. For example, here is a recorded talk about longtermism (I have only skipped through it so far), which mentions MIRI, FHI, and What We Owe The Future.I stumbled upon this today and thought it could interest some people here. Generally, I am very curious to learn more about alternative worldviews to EA that also engage with existential risk in epistemically sound ways. One criticism of EA that became more popular over the last months is that EA organizations engage too little with other disciplines and institutions with relevant expertise. Therefore, I suggest checking out the work of this Centre.Please comment if you have engaged with them before and know more than I do.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 12 Feb 2023 19:45:35 +0000 EA - FYI there is a German institute studying sociological aspects of existential risk by Max Görlitz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FYI there is a German institute studying sociological aspects of existential risk, published by Max Görlitz on February 12, 2023 on The Effective Altruism Forum.The institute is called Käte Hamburger Centre for Apocalyptic and Post-Apocalyptic Studies and is based in Heidelberg, Germany. They started in 2021 and initially received 9 million € of funding from the German government for the first four years.AFAICT, they study sociological aspects of narratives of apocalypses, existential risks, and the end of the world.They have engaged with EA thinking, and I assume they will have an interesting outside perspective of some prevalent worldviews in EA. For example, here is a recorded talk about longtermism (I have only skipped through it so far), which mentions MIRI, FHI, and What We Owe The Future.I stumbled upon this today and thought it could interest some people here. Generally, I am very curious to learn more about alternative worldviews to EA that also engage with existential risk in epistemically sound ways. One criticism of EA that became more popular over the last months is that EA organizations engage too little with other disciplines and institutions with relevant expertise. Therefore, I suggest checking out the work of this Centre.Please comment if you have engaged with them before and know more than I do.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FYI there is a German institute studying sociological aspects of existential risk, published by Max Görlitz on February 12, 2023 on The Effective Altruism Forum.The institute is called Käte Hamburger Centre for Apocalyptic and Post-Apocalyptic Studies and is based in Heidelberg, Germany. They started in 2021 and initially received 9 million € of funding from the German government for the first four years.AFAICT, they study sociological aspects of narratives of apocalypses, existential risks, and the end of the world.They have engaged with EA thinking, and I assume they will have an interesting outside perspective of some prevalent worldviews in EA. For example, here is a recorded talk about longtermism (I have only skipped through it so far), which mentions MIRI, FHI, and What We Owe The Future.I stumbled upon this today and thought it could interest some people here. Generally, I am very curious to learn more about alternative worldviews to EA that also engage with existential risk in epistemically sound ways. One criticism of EA that became more popular over the last months is that EA organizations engage too little with other disciplines and institutions with relevant expertise. Therefore, I suggest checking out the work of this Centre.Please comment if you have engaged with them before and know more than I do.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Max Görlitz https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:34 None full 4852
v3SHEiSenb7vXHjHh_NL_EA_EA EA - How effective are the "Best Charities by Cause" organizations recommended by Charity Navigator? by Max Pietsch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How effective are the "Best Charities by Cause" organizations recommended by Charity Navigator?, published by Max Pietsch on February 10, 2023 on The Effective Altruism Forum.After acquiring Impact Matters, Charity Navigator now includes impact as a criteria for the charities they recommend here. Charity Navigator's criteria for recommending these charities is less clear than for instance GiveWell's criteria, so I'm unsure of how effective these organizations truly are. Any insight is appreciated.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Max Pietsch https://forum.effectivealtruism.org/posts/v3SHEiSenb7vXHjHh/how-effective-are-the-best-charities-by-cause-organizations Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How effective are the "Best Charities by Cause" organizations recommended by Charity Navigator?, published by Max Pietsch on February 10, 2023 on The Effective Altruism Forum.After acquiring Impact Matters, Charity Navigator now includes impact as a criteria for the charities they recommend here. Charity Navigator's criteria for recommending these charities is less clear than for instance GiveWell's criteria, so I'm unsure of how effective these organizations truly are. Any insight is appreciated.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 12 Feb 2023 17:42:09 +0000 EA - How effective are the "Best Charities by Cause" organizations recommended by Charity Navigator? by Max Pietsch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How effective are the "Best Charities by Cause" organizations recommended by Charity Navigator?, published by Max Pietsch on February 10, 2023 on The Effective Altruism Forum.After acquiring Impact Matters, Charity Navigator now includes impact as a criteria for the charities they recommend here. Charity Navigator's criteria for recommending these charities is less clear than for instance GiveWell's criteria, so I'm unsure of how effective these organizations truly are. Any insight is appreciated.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How effective are the "Best Charities by Cause" organizations recommended by Charity Navigator?, published by Max Pietsch on February 10, 2023 on The Effective Altruism Forum.After acquiring Impact Matters, Charity Navigator now includes impact as a criteria for the charities they recommend here. Charity Navigator's criteria for recommending these charities is less clear than for instance GiveWell's criteria, so I'm unsure of how effective these organizations truly are. Any insight is appreciated.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Max Pietsch https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:48 None full 4855
furE5aznZCNDjkdmb_NL_EA_EA EA - High impact job opportunity at ARIA (UK) by Rasool Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High impact job opportunity at ARIA (UK), published by Rasool on February 12, 2023 on The Effective Altruism Forum.ARIA (Advanced Research + Invention Agency) is the UK government's new research funding body based on the US Defence Advanced Research Projects Agency (DARPA).They have £800m in committed funding and are looking for a founding programme director to allocate a £50m budget. This organisation is brand new so I expect this early hire will have significant input on the direction it takes.Quoting the job description:Ideate, then create a programme around your own scientific/technical vision.Direct a budget of up to £50M+Select a portfolio of projects to fund from across the R&D landscape, decide how ARIA will fund them.Create a new community around the vision and goals of your programme.Shape ARIA’s DNA, working with a small peer group to define our programmes, culture and impact.Listed cause areas include:- Genomics- AI governance- Material science- Climate change- Medical researchSome more information about the role, and a tweet blast from The Centre for Long-Term Resilience (CLTR).They say they will be hiring more in the coming months including:Product LeadProduct Operations AssociateExecutive AssociateIn-House CounselAnd I've heard that they are seconding civil servants which is maybe something @tobyj of Impactful Government Careers can advise on.P.S.Nukaz ARIA (couldn't resist)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rasool https://forum.effectivealtruism.org/posts/furE5aznZCNDjkdmb/high-impact-job-opportunity-at-aria-uk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High impact job opportunity at ARIA (UK), published by Rasool on February 12, 2023 on The Effective Altruism Forum.ARIA (Advanced Research + Invention Agency) is the UK government's new research funding body based on the US Defence Advanced Research Projects Agency (DARPA).They have £800m in committed funding and are looking for a founding programme director to allocate a £50m budget. This organisation is brand new so I expect this early hire will have significant input on the direction it takes.Quoting the job description:Ideate, then create a programme around your own scientific/technical vision.Direct a budget of up to £50M+Select a portfolio of projects to fund from across the R&D landscape, decide how ARIA will fund them.Create a new community around the vision and goals of your programme.Shape ARIA’s DNA, working with a small peer group to define our programmes, culture and impact.Listed cause areas include:- Genomics- AI governance- Material science- Climate change- Medical researchSome more information about the role, and a tweet blast from The Centre for Long-Term Resilience (CLTR).They say they will be hiring more in the coming months including:Product LeadProduct Operations AssociateExecutive AssociateIn-House CounselAnd I've heard that they are seconding civil servants which is maybe something @tobyj of Impactful Government Careers can advise on.P.S.Nukaz ARIA (couldn't resist)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 12 Feb 2023 17:35:59 +0000 EA - High impact job opportunity at ARIA (UK) by Rasool Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High impact job opportunity at ARIA (UK), published by Rasool on February 12, 2023 on The Effective Altruism Forum.ARIA (Advanced Research + Invention Agency) is the UK government's new research funding body based on the US Defence Advanced Research Projects Agency (DARPA).They have £800m in committed funding and are looking for a founding programme director to allocate a £50m budget. This organisation is brand new so I expect this early hire will have significant input on the direction it takes.Quoting the job description:Ideate, then create a programme around your own scientific/technical vision.Direct a budget of up to £50M+Select a portfolio of projects to fund from across the R&D landscape, decide how ARIA will fund them.Create a new community around the vision and goals of your programme.Shape ARIA’s DNA, working with a small peer group to define our programmes, culture and impact.Listed cause areas include:- Genomics- AI governance- Material science- Climate change- Medical researchSome more information about the role, and a tweet blast from The Centre for Long-Term Resilience (CLTR).They say they will be hiring more in the coming months including:Product LeadProduct Operations AssociateExecutive AssociateIn-House CounselAnd I've heard that they are seconding civil servants which is maybe something @tobyj of Impactful Government Careers can advise on.P.S.Nukaz ARIA (couldn't resist)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High impact job opportunity at ARIA (UK), published by Rasool on February 12, 2023 on The Effective Altruism Forum.ARIA (Advanced Research + Invention Agency) is the UK government's new research funding body based on the US Defence Advanced Research Projects Agency (DARPA).They have £800m in committed funding and are looking for a founding programme director to allocate a £50m budget. This organisation is brand new so I expect this early hire will have significant input on the direction it takes.Quoting the job description:Ideate, then create a programme around your own scientific/technical vision.Direct a budget of up to £50M+Select a portfolio of projects to fund from across the R&D landscape, decide how ARIA will fund them.Create a new community around the vision and goals of your programme.Shape ARIA’s DNA, working with a small peer group to define our programmes, culture and impact.Listed cause areas include:- Genomics- AI governance- Material science- Climate change- Medical researchSome more information about the role, and a tweet blast from The Centre for Long-Term Resilience (CLTR).They say they will be hiring more in the coming months including:Product LeadProduct Operations AssociateExecutive AssociateIn-House CounselAnd I've heard that they are seconding civil servants which is maybe something @tobyj of Impactful Government Careers can advise on.P.S.Nukaz ARIA (couldn't resist)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rasool https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:44 None full 4853
nEnDvu2Ha9HLguvK8_NL_EA_EA EA - Update from the EA Good Governance Project by Grayden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update from the EA Good Governance Project, published by Grayden on February 11, 2023 on The Effective Altruism Forum.BackgroundThe EA Good Governance Project (GGP) launched 4 months ago. For more information on the rationale, please see here. The last few months have been eventful for the EA community. The number of posts tagged nonprofit governance has grown from 2 to 21. If you have not already read it, I would highly recommend this post.I would also like to thank the many people who have shared the EA GGP. I was also delighted to receive many unsolicited cold emails offering financial support, volunteering their skills and reporting broken links. What a great community we have!Trustee DirectoryAs at 11th February 2023, the Trustee Directory includes 60 individuals:with 557 years of collective experience;from 19 countries;expertise in all 18 subject areas; and15 of the 16 skills we listed.Roughly half earn-to-give, including:15 associated with Founders' Pledge / EA Entrepreneurs;12 associated with the EA Consulting Network; and7 associated with EA Finance.28 organizations have signed up to view the directory and 5 have requested candidate contact details.Best Practice GuidanceSince launch, we have developed a variety of guidance on governance topics, as well as a template for conducting a board assessment. We hope that this will strengthen practices within EA orgs. If you have any requests for topics to cover, please reach out.ImpactTo-date, we are not aware of anyone who has been hired as a result of EA GGP (we do not have perfect information, so please message me if I am wrong). It is difficult to assess the counterfactual because we do not know whether organizations would have been connected to the same candidates if the EA GGP did not exist. This is particularly true of large organizations such as Founders' Pledge.The impact of guidance documents will be even harder to assess, but given these have been published more recently, the impact is likely to be close to zero.The cost so far has been ~£300 and 30-50 hours of time (mostly pre-launch).This includes people who signed up using personal email addresses so may not represent an organizationThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Grayden https://forum.effectivealtruism.org/posts/nEnDvu2Ha9HLguvK8/update-from-the-ea-good-governance-project Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update from the EA Good Governance Project, published by Grayden on February 11, 2023 on The Effective Altruism Forum.BackgroundThe EA Good Governance Project (GGP) launched 4 months ago. For more information on the rationale, please see here. The last few months have been eventful for the EA community. The number of posts tagged nonprofit governance has grown from 2 to 21. If you have not already read it, I would highly recommend this post.I would also like to thank the many people who have shared the EA GGP. I was also delighted to receive many unsolicited cold emails offering financial support, volunteering their skills and reporting broken links. What a great community we have!Trustee DirectoryAs at 11th February 2023, the Trustee Directory includes 60 individuals:with 557 years of collective experience;from 19 countries;expertise in all 18 subject areas; and15 of the 16 skills we listed.Roughly half earn-to-give, including:15 associated with Founders' Pledge / EA Entrepreneurs;12 associated with the EA Consulting Network; and7 associated with EA Finance.28 organizations have signed up to view the directory and 5 have requested candidate contact details.Best Practice GuidanceSince launch, we have developed a variety of guidance on governance topics, as well as a template for conducting a board assessment. We hope that this will strengthen practices within EA orgs. If you have any requests for topics to cover, please reach out.ImpactTo-date, we are not aware of anyone who has been hired as a result of EA GGP (we do not have perfect information, so please message me if I am wrong). It is difficult to assess the counterfactual because we do not know whether organizations would have been connected to the same candidates if the EA GGP did not exist. This is particularly true of large organizations such as Founders' Pledge.The impact of guidance documents will be even harder to assess, but given these have been published more recently, the impact is likely to be close to zero.The cost so far has been ~£300 and 30-50 hours of time (mostly pre-launch).This includes people who signed up using personal email addresses so may not represent an organizationThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 12 Feb 2023 11:51:37 +0000 EA - Update from the EA Good Governance Project by Grayden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update from the EA Good Governance Project, published by Grayden on February 11, 2023 on The Effective Altruism Forum.BackgroundThe EA Good Governance Project (GGP) launched 4 months ago. For more information on the rationale, please see here. The last few months have been eventful for the EA community. The number of posts tagged nonprofit governance has grown from 2 to 21. If you have not already read it, I would highly recommend this post.I would also like to thank the many people who have shared the EA GGP. I was also delighted to receive many unsolicited cold emails offering financial support, volunteering their skills and reporting broken links. What a great community we have!Trustee DirectoryAs at 11th February 2023, the Trustee Directory includes 60 individuals:with 557 years of collective experience;from 19 countries;expertise in all 18 subject areas; and15 of the 16 skills we listed.Roughly half earn-to-give, including:15 associated with Founders' Pledge / EA Entrepreneurs;12 associated with the EA Consulting Network; and7 associated with EA Finance.28 organizations have signed up to view the directory and 5 have requested candidate contact details.Best Practice GuidanceSince launch, we have developed a variety of guidance on governance topics, as well as a template for conducting a board assessment. We hope that this will strengthen practices within EA orgs. If you have any requests for topics to cover, please reach out.ImpactTo-date, we are not aware of anyone who has been hired as a result of EA GGP (we do not have perfect information, so please message me if I am wrong). It is difficult to assess the counterfactual because we do not know whether organizations would have been connected to the same candidates if the EA GGP did not exist. This is particularly true of large organizations such as Founders' Pledge.The impact of guidance documents will be even harder to assess, but given these have been published more recently, the impact is likely to be close to zero.The cost so far has been ~£300 and 30-50 hours of time (mostly pre-launch).This includes people who signed up using personal email addresses so may not represent an organizationThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update from the EA Good Governance Project, published by Grayden on February 11, 2023 on The Effective Altruism Forum.BackgroundThe EA Good Governance Project (GGP) launched 4 months ago. For more information on the rationale, please see here. The last few months have been eventful for the EA community. The number of posts tagged nonprofit governance has grown from 2 to 21. If you have not already read it, I would highly recommend this post.I would also like to thank the many people who have shared the EA GGP. I was also delighted to receive many unsolicited cold emails offering financial support, volunteering their skills and reporting broken links. What a great community we have!Trustee DirectoryAs at 11th February 2023, the Trustee Directory includes 60 individuals:with 557 years of collective experience;from 19 countries;expertise in all 18 subject areas; and15 of the 16 skills we listed.Roughly half earn-to-give, including:15 associated with Founders' Pledge / EA Entrepreneurs;12 associated with the EA Consulting Network; and7 associated with EA Finance.28 organizations have signed up to view the directory and 5 have requested candidate contact details.Best Practice GuidanceSince launch, we have developed a variety of guidance on governance topics, as well as a template for conducting a board assessment. We hope that this will strengthen practices within EA orgs. If you have any requests for topics to cover, please reach out.ImpactTo-date, we are not aware of anyone who has been hired as a result of EA GGP (we do not have perfect information, so please message me if I am wrong). It is difficult to assess the counterfactual because we do not know whether organizations would have been connected to the same candidates if the EA GGP did not exist. This is particularly true of large organizations such as Founders' Pledge.The impact of guidance documents will be even harder to assess, but given these have been published more recently, the impact is likely to be close to zero.The cost so far has been ~£300 and 30-50 hours of time (mostly pre-launch).This includes people who signed up using personal email addresses so may not represent an organizationThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Grayden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:30 None full 4854
uihTedcLyNpzbuCHC_NL_EA_EA EA - Dissatisfied with the state of EA? You can give me a call. by Severin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dissatisfied with the state of EA? You can give me a call., published by Severin on February 11, 2023 on The Effective Altruism Forum.A couple of days ago, Catherine Low invited people to reach out to her if they are worried about EA.I’d like to second that offer.The last weeks have left me concerned about EA’s conflict and discourse culture, and I’d like to do my part in moving it towards a constructive direction. Whether by helping you put your murky frustration with the state of the movement into words, whether by helping you steelman the criticisms you have, or just by offering an open ear so that you feel less alone. Of course, all of that confidentially.My credentials: I bring counseling and facilitation training. I know both the rationalist community and the woke left from the inside (and accordingly as far as I can tell ~both sides of the current EA culture war). I have a graduate degree in putting fuzzy and complex ideas into writing (i.e. philosophy). I’ve recently written on the forum about social technologies that might help EA make it through all of this drama. And, first and foremost: I’m not employed by CEA, and somewhere between agnostic and doubtful about their current strategy. That may be relevant for those of you who are decisively unhappy with them.So: If there’s something you feel uneasy about in regards to the EA community, and if you feel like I may be the right person to talk to about it, feel invited to book a call in my Calendly. Or, shoot me a message if nothing there fits.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Severin https://forum.effectivealtruism.org/posts/uihTedcLyNpzbuCHC/dissatisfied-with-the-state-of-ea-you-can-give-me-a-call Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dissatisfied with the state of EA? You can give me a call., published by Severin on February 11, 2023 on The Effective Altruism Forum.A couple of days ago, Catherine Low invited people to reach out to her if they are worried about EA.I’d like to second that offer.The last weeks have left me concerned about EA’s conflict and discourse culture, and I’d like to do my part in moving it towards a constructive direction. Whether by helping you put your murky frustration with the state of the movement into words, whether by helping you steelman the criticisms you have, or just by offering an open ear so that you feel less alone. Of course, all of that confidentially.My credentials: I bring counseling and facilitation training. I know both the rationalist community and the woke left from the inside (and accordingly as far as I can tell ~both sides of the current EA culture war). I have a graduate degree in putting fuzzy and complex ideas into writing (i.e. philosophy). I’ve recently written on the forum about social technologies that might help EA make it through all of this drama. And, first and foremost: I’m not employed by CEA, and somewhere between agnostic and doubtful about their current strategy. That may be relevant for those of you who are decisively unhappy with them.So: If there’s something you feel uneasy about in regards to the EA community, and if you feel like I may be the right person to talk to about it, feel invited to book a call in my Calendly. Or, shoot me a message if nothing there fits.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 12 Feb 2023 00:10:09 +0000 EA - Dissatisfied with the state of EA? You can give me a call. by Severin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dissatisfied with the state of EA? You can give me a call., published by Severin on February 11, 2023 on The Effective Altruism Forum.A couple of days ago, Catherine Low invited people to reach out to her if they are worried about EA.I’d like to second that offer.The last weeks have left me concerned about EA’s conflict and discourse culture, and I’d like to do my part in moving it towards a constructive direction. Whether by helping you put your murky frustration with the state of the movement into words, whether by helping you steelman the criticisms you have, or just by offering an open ear so that you feel less alone. Of course, all of that confidentially.My credentials: I bring counseling and facilitation training. I know both the rationalist community and the woke left from the inside (and accordingly as far as I can tell ~both sides of the current EA culture war). I have a graduate degree in putting fuzzy and complex ideas into writing (i.e. philosophy). I’ve recently written on the forum about social technologies that might help EA make it through all of this drama. And, first and foremost: I’m not employed by CEA, and somewhere between agnostic and doubtful about their current strategy. That may be relevant for those of you who are decisively unhappy with them.So: If there’s something you feel uneasy about in regards to the EA community, and if you feel like I may be the right person to talk to about it, feel invited to book a call in my Calendly. Or, shoot me a message if nothing there fits.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dissatisfied with the state of EA? You can give me a call., published by Severin on February 11, 2023 on The Effective Altruism Forum.A couple of days ago, Catherine Low invited people to reach out to her if they are worried about EA.I’d like to second that offer.The last weeks have left me concerned about EA’s conflict and discourse culture, and I’d like to do my part in moving it towards a constructive direction. Whether by helping you put your murky frustration with the state of the movement into words, whether by helping you steelman the criticisms you have, or just by offering an open ear so that you feel less alone. Of course, all of that confidentially.My credentials: I bring counseling and facilitation training. I know both the rationalist community and the woke left from the inside (and accordingly as far as I can tell ~both sides of the current EA culture war). I have a graduate degree in putting fuzzy and complex ideas into writing (i.e. philosophy). I’ve recently written on the forum about social technologies that might help EA make it through all of this drama. And, first and foremost: I’m not employed by CEA, and somewhere between agnostic and doubtful about their current strategy. That may be relevant for those of you who are decisively unhappy with them.So: If there’s something you feel uneasy about in regards to the EA community, and if you feel like I may be the right person to talk to about it, feel invited to book a call in my Calendly. Or, shoot me a message if nothing there fits.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Severin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:40 None full 4847
2n77pe6oiYZZTZAzL_NL_EA_EA EA - What are the best examples of object-level work that was done by (or at least inspired by) the longtermist EA community that concretely and legibly reduced existential risk? by Ben Snodin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the best examples of object-level work that was done by (or at least inspired by) the longtermist EA community that concretely and legibly reduced existential risk?, published by Ben Snodin on February 11, 2023 on The Effective Altruism Forum.A motivating scenario could be: imagine you are trying to provide examples to help convince a skeptical friend that it is in fact possible to positively change the long-run future by actively seeking and pursuing opportunities to reduce existential risk.Examples of things that are kind of close but miss the markThere are probably decent historical examples where people reduced existential risk but where thoes people didn't really have longtermist-EA-type motivations (maybe more "generally wanting to do good" plus "in the right place at the right time")There are probably meta-level things that longtermist EA community members can take credit for (e.g. "get lots of people to think seriously about reducing x risk"), but these aren't very object-level or concreteThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben Snodin https://forum.effectivealtruism.org/posts/2n77pe6oiYZZTZAzL/what-are-the-best-examples-of-object-level-work-that-was Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the best examples of object-level work that was done by (or at least inspired by) the longtermist EA community that concretely and legibly reduced existential risk?, published by Ben Snodin on February 11, 2023 on The Effective Altruism Forum.A motivating scenario could be: imagine you are trying to provide examples to help convince a skeptical friend that it is in fact possible to positively change the long-run future by actively seeking and pursuing opportunities to reduce existential risk.Examples of things that are kind of close but miss the markThere are probably decent historical examples where people reduced existential risk but where thoes people didn't really have longtermist-EA-type motivations (maybe more "generally wanting to do good" plus "in the right place at the right time")There are probably meta-level things that longtermist EA community members can take credit for (e.g. "get lots of people to think seriously about reducing x risk"), but these aren't very object-level or concreteThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 11 Feb 2023 14:42:42 +0000 EA - What are the best examples of object-level work that was done by (or at least inspired by) the longtermist EA community that concretely and legibly reduced existential risk? by Ben Snodin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the best examples of object-level work that was done by (or at least inspired by) the longtermist EA community that concretely and legibly reduced existential risk?, published by Ben Snodin on February 11, 2023 on The Effective Altruism Forum.A motivating scenario could be: imagine you are trying to provide examples to help convince a skeptical friend that it is in fact possible to positively change the long-run future by actively seeking and pursuing opportunities to reduce existential risk.Examples of things that are kind of close but miss the markThere are probably decent historical examples where people reduced existential risk but where thoes people didn't really have longtermist-EA-type motivations (maybe more "generally wanting to do good" plus "in the right place at the right time")There are probably meta-level things that longtermist EA community members can take credit for (e.g. "get lots of people to think seriously about reducing x risk"), but these aren't very object-level or concreteThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the best examples of object-level work that was done by (or at least inspired by) the longtermist EA community that concretely and legibly reduced existential risk?, published by Ben Snodin on February 11, 2023 on The Effective Altruism Forum.A motivating scenario could be: imagine you are trying to provide examples to help convince a skeptical friend that it is in fact possible to positively change the long-run future by actively seeking and pursuing opportunities to reduce existential risk.Examples of things that are kind of close but miss the markThere are probably decent historical examples where people reduced existential risk but where thoes people didn't really have longtermist-EA-type motivations (maybe more "generally wanting to do good" plus "in the right place at the right time")There are probably meta-level things that longtermist EA community members can take credit for (e.g. "get lots of people to think seriously about reducing x risk"), but these aren't very object-level or concreteThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben Snodin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:18 None full 4840
kfNBwwyDjfqXN676w_NL_EA_EA EA - Maybe longtermism isn't for everyone by BrownHairedEevee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Maybe longtermism isn't for everyone, published by BrownHairedEevee on February 10, 2023 on The Effective Altruism Forum.I've noticed that the EA community has been aggressively promoting longtermism and longtermist causes:The huge book tour around What We Owe the Future, which promotes longtermism itselfThere was a recent post claiming that 80k's messaging is discouraging to non-longtermists, although the author deleted (Benjamin Hilton's response is preserved here). The post observed that 80k lists x-risk related causes as "recommended" causes while neartermist causes like global poverty and factory farming are only "sometimes recommended". Further, in 2021, 80k put together a podcast feed called Effective Altruism: An Introduction, which many commenters complained was too skewed towards longtermist causes.I used to think that longtermism is compatible with a wide range of worldviews, as these pages (1, 2) claim, so I was puzzled as to why so many people who engage with longtermism could be uncomfortable with it. Sure, it's a counterintuitive worldview, but it also flows from such basic principles. But I'm starting to question this - longtermism is very sensitive to the rate of pure time preference, and recently, some philosophers have started to argue that nonzero pure time preference can be justified (section "3. Beyond Neutrality" here).By contrast, x-risk as a cause area has support from a broader range of moral worldviews:Chapter 2 of The Precipice discusses five different moral justifications for caring about x-risks (video here).Carl Shulman makes a "common-sense case" for valuing x-risk reduction that doesn't depend on there being any value in the long-term future at all.Maybe it's better to take a two-pronged approach:Promote x-risk reduction as a cause area that most people can agree on; andPromote longtermism as a novel idea in moral philosophy that some people might want to adopt, but be open about its limitations and acknowledge that our audiences might be uncomfortable with it and have valid reasons not to accept it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
BrownHairedEevee https://forum.effectivealtruism.org/posts/kfNBwwyDjfqXN676w/maybe-longtermism-isn-t-for-everyone Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Maybe longtermism isn't for everyone, published by BrownHairedEevee on February 10, 2023 on The Effective Altruism Forum.I've noticed that the EA community has been aggressively promoting longtermism and longtermist causes:The huge book tour around What We Owe the Future, which promotes longtermism itselfThere was a recent post claiming that 80k's messaging is discouraging to non-longtermists, although the author deleted (Benjamin Hilton's response is preserved here). The post observed that 80k lists x-risk related causes as "recommended" causes while neartermist causes like global poverty and factory farming are only "sometimes recommended". Further, in 2021, 80k put together a podcast feed called Effective Altruism: An Introduction, which many commenters complained was too skewed towards longtermist causes.I used to think that longtermism is compatible with a wide range of worldviews, as these pages (1, 2) claim, so I was puzzled as to why so many people who engage with longtermism could be uncomfortable with it. Sure, it's a counterintuitive worldview, but it also flows from such basic principles. But I'm starting to question this - longtermism is very sensitive to the rate of pure time preference, and recently, some philosophers have started to argue that nonzero pure time preference can be justified (section "3. Beyond Neutrality" here).By contrast, x-risk as a cause area has support from a broader range of moral worldviews:Chapter 2 of The Precipice discusses five different moral justifications for caring about x-risks (video here).Carl Shulman makes a "common-sense case" for valuing x-risk reduction that doesn't depend on there being any value in the long-term future at all.Maybe it's better to take a two-pronged approach:Promote x-risk reduction as a cause area that most people can agree on; andPromote longtermism as a novel idea in moral philosophy that some people might want to adopt, but be open about its limitations and acknowledge that our audiences might be uncomfortable with it and have valid reasons not to accept it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 11 Feb 2023 11:31:49 +0000 EA - Maybe longtermism isn't for everyone by BrownHairedEevee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Maybe longtermism isn't for everyone, published by BrownHairedEevee on February 10, 2023 on The Effective Altruism Forum.I've noticed that the EA community has been aggressively promoting longtermism and longtermist causes:The huge book tour around What We Owe the Future, which promotes longtermism itselfThere was a recent post claiming that 80k's messaging is discouraging to non-longtermists, although the author deleted (Benjamin Hilton's response is preserved here). The post observed that 80k lists x-risk related causes as "recommended" causes while neartermist causes like global poverty and factory farming are only "sometimes recommended". Further, in 2021, 80k put together a podcast feed called Effective Altruism: An Introduction, which many commenters complained was too skewed towards longtermist causes.I used to think that longtermism is compatible with a wide range of worldviews, as these pages (1, 2) claim, so I was puzzled as to why so many people who engage with longtermism could be uncomfortable with it. Sure, it's a counterintuitive worldview, but it also flows from such basic principles. But I'm starting to question this - longtermism is very sensitive to the rate of pure time preference, and recently, some philosophers have started to argue that nonzero pure time preference can be justified (section "3. Beyond Neutrality" here).By contrast, x-risk as a cause area has support from a broader range of moral worldviews:Chapter 2 of The Precipice discusses five different moral justifications for caring about x-risks (video here).Carl Shulman makes a "common-sense case" for valuing x-risk reduction that doesn't depend on there being any value in the long-term future at all.Maybe it's better to take a two-pronged approach:Promote x-risk reduction as a cause area that most people can agree on; andPromote longtermism as a novel idea in moral philosophy that some people might want to adopt, but be open about its limitations and acknowledge that our audiences might be uncomfortable with it and have valid reasons not to accept it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Maybe longtermism isn't for everyone, published by BrownHairedEevee on February 10, 2023 on The Effective Altruism Forum.I've noticed that the EA community has been aggressively promoting longtermism and longtermist causes:The huge book tour around What We Owe the Future, which promotes longtermism itselfThere was a recent post claiming that 80k's messaging is discouraging to non-longtermists, although the author deleted (Benjamin Hilton's response is preserved here). The post observed that 80k lists x-risk related causes as "recommended" causes while neartermist causes like global poverty and factory farming are only "sometimes recommended". Further, in 2021, 80k put together a podcast feed called Effective Altruism: An Introduction, which many commenters complained was too skewed towards longtermist causes.I used to think that longtermism is compatible with a wide range of worldviews, as these pages (1, 2) claim, so I was puzzled as to why so many people who engage with longtermism could be uncomfortable with it. Sure, it's a counterintuitive worldview, but it also flows from such basic principles. But I'm starting to question this - longtermism is very sensitive to the rate of pure time preference, and recently, some philosophers have started to argue that nonzero pure time preference can be justified (section "3. Beyond Neutrality" here).By contrast, x-risk as a cause area has support from a broader range of moral worldviews:Chapter 2 of The Precipice discusses five different moral justifications for caring about x-risks (video here).Carl Shulman makes a "common-sense case" for valuing x-risk reduction that doesn't depend on there being any value in the long-term future at all.Maybe it's better to take a two-pronged approach:Promote x-risk reduction as a cause area that most people can agree on; andPromote longtermism as a novel idea in moral philosophy that some people might want to adopt, but be open about its limitations and acknowledge that our audiences might be uncomfortable with it and have valid reasons not to accept it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
BrownHairedEevee https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:13 None full 4841
TfqmoroYCrNh2s2TF_NL_EA_EA EA - Select Challenges with Criticism and Evaluation Around EA by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Select Challenges with Criticism & Evaluation Around EA, published by Ozzie Gooen on February 10, 2023 on The Effective Altruism Forum.I think that critique (or sharing uncomfortable information) is very important, difficult, and misunderstood. I've started to notice many specific challenges when dealing with criticism - here's my quick take on some of the important ones. This comes mostly from a mix of books and personal experiences.This post is a bit rambly and rough. I expect much of the content will be self-evident to many readers.A whole lot has been written about criticism, but I've found the literature scattered and not very rationalist-coded (see point # 6). I wouldn't be surprised if there were better related lists out there that would apply to EA, please do leave comments if you know of them.1. Many People Hate Being CriticizedMany people and organizations actively detest being open or being criticized. Even if those being evaluated wouldn’t push back against evaluation, evaluators really don’t want to harm people. The fact that antagonistic actors online are using public information as ammunition makes things worse.There’s a broad spectrum regarding how much candidness different people/agents can handle. Even the most critical people can be really bad at taking criticism. One extreme is Scandal Markets. These could be really pragmatically useful, but many people would absolutely hate showing up on one. If any large effective altruist community enforced high-candidness norms, that would exclude some subset of potential collaborators.In business, there are innumerable books about how to give criticism and feedback. Similar to romantic relationships. It’s a major social issue!2. Not Criticizing Leads to DistrustWhat’s scarier than getting negative feedback? Knowing that people who matter to you dislike you, but not knowing why.When there are barriers in communication, people often assume the worst. They make up bizarre stories.The number one piece of advice I’ve seen for resolving tense situations in business and romance is to just talk to each other. (Sometimes bringing in a moderator can help with the early steps!)If you have critical takes on someone, you could:Make it clear you have issues with them, be silent on the matter, or pretend you don’t have issues with them.Reveal these issues, or hide them.Lying in Choice 1 can be locally convenient, but it often comes with many problems.You might have to create more lies to justify these lies.You might lie to yourself to make yourself more believable, but this messes with your epistemics in subtle ways that you won’t be able to notice.If others catch on that you do this, they won’t be able to trust many important things you say.If you are honest in Choice 1 (this can include social hints) but choose to conceal in Choice 2, then the other party will make up reasons why you dislike them. These are heated topics (people hate being disliked). I suspect that’s why guesses here are often particularly bad.Some things I’ve (loosely) heard around the EA community include:That’s why [person X] rejected my application. It’s probably because I’m just a bad researcher, and I will never be useful.Funders don’t like me because I don’t attend the right parties.Funders don’t like me because I criticized them, and they hate criticism.EAs hate my posts because I use emotion, and EAs hate emotion.All the EA critics just care about Woke buzzwords.When I post to the EA Forum about my reasons for (important actions), I get downvoted. That’s because they just care about woke stuff and no longer care about epistemics.3. People have wildly different abilities to convey candid information gracefullyFirst, there seems to be a large subset of people that assumes that social grace or politeness is all BS. You ca...]]>
Ozzie Gooen https://forum.effectivealtruism.org/posts/TfqmoroYCrNh2s2TF/select-challenges-with-criticism-and-evaluation-around-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Select Challenges with Criticism & Evaluation Around EA, published by Ozzie Gooen on February 10, 2023 on The Effective Altruism Forum.I think that critique (or sharing uncomfortable information) is very important, difficult, and misunderstood. I've started to notice many specific challenges when dealing with criticism - here's my quick take on some of the important ones. This comes mostly from a mix of books and personal experiences.This post is a bit rambly and rough. I expect much of the content will be self-evident to many readers.A whole lot has been written about criticism, but I've found the literature scattered and not very rationalist-coded (see point # 6). I wouldn't be surprised if there were better related lists out there that would apply to EA, please do leave comments if you know of them.1. Many People Hate Being CriticizedMany people and organizations actively detest being open or being criticized. Even if those being evaluated wouldn’t push back against evaluation, evaluators really don’t want to harm people. The fact that antagonistic actors online are using public information as ammunition makes things worse.There’s a broad spectrum regarding how much candidness different people/agents can handle. Even the most critical people can be really bad at taking criticism. One extreme is Scandal Markets. These could be really pragmatically useful, but many people would absolutely hate showing up on one. If any large effective altruist community enforced high-candidness norms, that would exclude some subset of potential collaborators.In business, there are innumerable books about how to give criticism and feedback. Similar to romantic relationships. It’s a major social issue!2. Not Criticizing Leads to DistrustWhat’s scarier than getting negative feedback? Knowing that people who matter to you dislike you, but not knowing why.When there are barriers in communication, people often assume the worst. They make up bizarre stories.The number one piece of advice I’ve seen for resolving tense situations in business and romance is to just talk to each other. (Sometimes bringing in a moderator can help with the early steps!)If you have critical takes on someone, you could:Make it clear you have issues with them, be silent on the matter, or pretend you don’t have issues with them.Reveal these issues, or hide them.Lying in Choice 1 can be locally convenient, but it often comes with many problems.You might have to create more lies to justify these lies.You might lie to yourself to make yourself more believable, but this messes with your epistemics in subtle ways that you won’t be able to notice.If others catch on that you do this, they won’t be able to trust many important things you say.If you are honest in Choice 1 (this can include social hints) but choose to conceal in Choice 2, then the other party will make up reasons why you dislike them. These are heated topics (people hate being disliked). I suspect that’s why guesses here are often particularly bad.Some things I’ve (loosely) heard around the EA community include:That’s why [person X] rejected my application. It’s probably because I’m just a bad researcher, and I will never be useful.Funders don’t like me because I don’t attend the right parties.Funders don’t like me because I criticized them, and they hate criticism.EAs hate my posts because I use emotion, and EAs hate emotion.All the EA critics just care about Woke buzzwords.When I post to the EA Forum about my reasons for (important actions), I get downvoted. That’s because they just care about woke stuff and no longer care about epistemics.3. People have wildly different abilities to convey candid information gracefullyFirst, there seems to be a large subset of people that assumes that social grace or politeness is all BS. You ca...]]>
Sat, 11 Feb 2023 03:16:27 +0000 EA - Select Challenges with Criticism and Evaluation Around EA by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Select Challenges with Criticism & Evaluation Around EA, published by Ozzie Gooen on February 10, 2023 on The Effective Altruism Forum.I think that critique (or sharing uncomfortable information) is very important, difficult, and misunderstood. I've started to notice many specific challenges when dealing with criticism - here's my quick take on some of the important ones. This comes mostly from a mix of books and personal experiences.This post is a bit rambly and rough. I expect much of the content will be self-evident to many readers.A whole lot has been written about criticism, but I've found the literature scattered and not very rationalist-coded (see point # 6). I wouldn't be surprised if there were better related lists out there that would apply to EA, please do leave comments if you know of them.1. Many People Hate Being CriticizedMany people and organizations actively detest being open or being criticized. Even if those being evaluated wouldn’t push back against evaluation, evaluators really don’t want to harm people. The fact that antagonistic actors online are using public information as ammunition makes things worse.There’s a broad spectrum regarding how much candidness different people/agents can handle. Even the most critical people can be really bad at taking criticism. One extreme is Scandal Markets. These could be really pragmatically useful, but many people would absolutely hate showing up on one. If any large effective altruist community enforced high-candidness norms, that would exclude some subset of potential collaborators.In business, there are innumerable books about how to give criticism and feedback. Similar to romantic relationships. It’s a major social issue!2. Not Criticizing Leads to DistrustWhat’s scarier than getting negative feedback? Knowing that people who matter to you dislike you, but not knowing why.When there are barriers in communication, people often assume the worst. They make up bizarre stories.The number one piece of advice I’ve seen for resolving tense situations in business and romance is to just talk to each other. (Sometimes bringing in a moderator can help with the early steps!)If you have critical takes on someone, you could:Make it clear you have issues with them, be silent on the matter, or pretend you don’t have issues with them.Reveal these issues, or hide them.Lying in Choice 1 can be locally convenient, but it often comes with many problems.You might have to create more lies to justify these lies.You might lie to yourself to make yourself more believable, but this messes with your epistemics in subtle ways that you won’t be able to notice.If others catch on that you do this, they won’t be able to trust many important things you say.If you are honest in Choice 1 (this can include social hints) but choose to conceal in Choice 2, then the other party will make up reasons why you dislike them. These are heated topics (people hate being disliked). I suspect that’s why guesses here are often particularly bad.Some things I’ve (loosely) heard around the EA community include:That’s why [person X] rejected my application. It’s probably because I’m just a bad researcher, and I will never be useful.Funders don’t like me because I don’t attend the right parties.Funders don’t like me because I criticized them, and they hate criticism.EAs hate my posts because I use emotion, and EAs hate emotion.All the EA critics just care about Woke buzzwords.When I post to the EA Forum about my reasons for (important actions), I get downvoted. That’s because they just care about woke stuff and no longer care about epistemics.3. People have wildly different abilities to convey candid information gracefullyFirst, there seems to be a large subset of people that assumes that social grace or politeness is all BS. You ca...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Select Challenges with Criticism & Evaluation Around EA, published by Ozzie Gooen on February 10, 2023 on The Effective Altruism Forum.I think that critique (or sharing uncomfortable information) is very important, difficult, and misunderstood. I've started to notice many specific challenges when dealing with criticism - here's my quick take on some of the important ones. This comes mostly from a mix of books and personal experiences.This post is a bit rambly and rough. I expect much of the content will be self-evident to many readers.A whole lot has been written about criticism, but I've found the literature scattered and not very rationalist-coded (see point # 6). I wouldn't be surprised if there were better related lists out there that would apply to EA, please do leave comments if you know of them.1. Many People Hate Being CriticizedMany people and organizations actively detest being open or being criticized. Even if those being evaluated wouldn’t push back against evaluation, evaluators really don’t want to harm people. The fact that antagonistic actors online are using public information as ammunition makes things worse.There’s a broad spectrum regarding how much candidness different people/agents can handle. Even the most critical people can be really bad at taking criticism. One extreme is Scandal Markets. These could be really pragmatically useful, but many people would absolutely hate showing up on one. If any large effective altruist community enforced high-candidness norms, that would exclude some subset of potential collaborators.In business, there are innumerable books about how to give criticism and feedback. Similar to romantic relationships. It’s a major social issue!2. Not Criticizing Leads to DistrustWhat’s scarier than getting negative feedback? Knowing that people who matter to you dislike you, but not knowing why.When there are barriers in communication, people often assume the worst. They make up bizarre stories.The number one piece of advice I’ve seen for resolving tense situations in business and romance is to just talk to each other. (Sometimes bringing in a moderator can help with the early steps!)If you have critical takes on someone, you could:Make it clear you have issues with them, be silent on the matter, or pretend you don’t have issues with them.Reveal these issues, or hide them.Lying in Choice 1 can be locally convenient, but it often comes with many problems.You might have to create more lies to justify these lies.You might lie to yourself to make yourself more believable, but this messes with your epistemics in subtle ways that you won’t be able to notice.If others catch on that you do this, they won’t be able to trust many important things you say.If you are honest in Choice 1 (this can include social hints) but choose to conceal in Choice 2, then the other party will make up reasons why you dislike them. These are heated topics (people hate being disliked). I suspect that’s why guesses here are often particularly bad.Some things I’ve (loosely) heard around the EA community include:That’s why [person X] rejected my application. It’s probably because I’m just a bad researcher, and I will never be useful.Funders don’t like me because I don’t attend the right parties.Funders don’t like me because I criticized them, and they hate criticism.EAs hate my posts because I use emotion, and EAs hate emotion.All the EA critics just care about Woke buzzwords.When I post to the EA Forum about my reasons for (important actions), I get downvoted. That’s because they just care about woke stuff and no longer care about epistemics.3. People have wildly different abilities to convey candid information gracefullyFirst, there seems to be a large subset of people that assumes that social grace or politeness is all BS. You ca...]]>
Ozzie Gooen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:23 None full 4835
pckJu3L6XDwDgHKvZ_NL_EA_EA EA - There can be highly neglected solutions to less-neglected problems by Linda Linsefors Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There can be highly neglected solutions to less-neglected problems, published by Linda Linsefors on February 10, 2023 on The Effective Altruism Forum.This post is based on an old draft that I wrote years ago, but never quite finished. The specific references might be a bit outdated, but think the main point is still relevant.Thanks to Amber Dawn Ace for turning my draft into something worth posting.EAs are often interested in how ‘neglected’ problems are: that is, how much money is being spent on solving them, and/or how many people are working on them. 80,000 Hours, for example, prioritizes pressing problems using a version of the importance, tractability, neglectedness (ITN) framework, where a problem’s ‘neglectedness’ is one of three factors that determine how much we should prioritize it relative to other problems.80,000 Hours define neglectedness as:“How many people, or dollars, are currently being dedicated to solving the problem?I suggest that a better definition would be:“How many people, or dollars, are currently being dedicated to this particular solution?”I think it makes more sense to assess solutions for neglectedness, rather than problems. Sometimes, a problem is not neglected, but effective solutions to that problem are neglected. A lot of people are working on the problem and a lot of money is being spent on it, but in ineffective (or less effective) ways.Here’s the example that 80,000 Hours uses to illustrate neglectedness:“[M]ass immunisation of children is an extremely effective intervention to improve global health, but it is already being vigorously pursued by governments and several major foundations, including the Gates Foundation. This makes it less likely to be a top opportunity for future donors.”Note that mass immunisation of children is a solution, not a problem. But it makes sense for 80K to think about this: we can imagine a world in which charities spent just as much money on preventing or curing diseases, but they spent it on less effective solutions. In that world, even though global disease would not be a neglected problem, mass vaccination would be a neglected (effective) solution, and it would make sense for donors to prioritize it.There’s something fractal about solutions and problems. Every solution to a problem presents its own problem: how best to implement it. Mass vaccination is an effective solution to the problem of disease. But then there’s the problem of ‘how can we best achieve mass vaccination?’. And solutions to that problem - for example, different methods of vaccine distribution, or interventions to address vaccine skepticism - pose their own, more granular problems.Here’s another example: hundreds of millions of dollars is spent each year on preventing nuclear war and nuclear winter. However, only a small fraction of this is spent on interventions intended to mitigate the negative impacts of nuclear winter - for example, ALLFED’s work researching food sources that are not dependent on sunlight. But in the event that there is a nuclear winter, these sorts of interventions will be extremely effective: they’ll enable us to produce much more food, so far fewer people will starve. Mitigation interventions are thus a highly neglected class of solutions for a problem - nuclear war - that is comparatively less neglected.As another example: climate change is not a neglected problem. But different solutions get very different amounts of attention. One of the most effective interventions seems to be preserving the rainforests (and other bodies of biomass), yet only a lonely few organizations (e.g. Cool Earth and the Carbon Fund) are working on this.Evaluating the neglectedness of solutions has its own problems. It means that neglectedness no longer lines up so neatly with solvability/tractability and scale/import...]]>
Linda Linsefors https://forum.effectivealtruism.org/posts/pckJu3L6XDwDgHKvZ/there-can-be-highly-neglected-solutions-to-less-neglected Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There can be highly neglected solutions to less-neglected problems, published by Linda Linsefors on February 10, 2023 on The Effective Altruism Forum.This post is based on an old draft that I wrote years ago, but never quite finished. The specific references might be a bit outdated, but think the main point is still relevant.Thanks to Amber Dawn Ace for turning my draft into something worth posting.EAs are often interested in how ‘neglected’ problems are: that is, how much money is being spent on solving them, and/or how many people are working on them. 80,000 Hours, for example, prioritizes pressing problems using a version of the importance, tractability, neglectedness (ITN) framework, where a problem’s ‘neglectedness’ is one of three factors that determine how much we should prioritize it relative to other problems.80,000 Hours define neglectedness as:“How many people, or dollars, are currently being dedicated to solving the problem?I suggest that a better definition would be:“How many people, or dollars, are currently being dedicated to this particular solution?”I think it makes more sense to assess solutions for neglectedness, rather than problems. Sometimes, a problem is not neglected, but effective solutions to that problem are neglected. A lot of people are working on the problem and a lot of money is being spent on it, but in ineffective (or less effective) ways.Here’s the example that 80,000 Hours uses to illustrate neglectedness:“[M]ass immunisation of children is an extremely effective intervention to improve global health, but it is already being vigorously pursued by governments and several major foundations, including the Gates Foundation. This makes it less likely to be a top opportunity for future donors.”Note that mass immunisation of children is a solution, not a problem. But it makes sense for 80K to think about this: we can imagine a world in which charities spent just as much money on preventing or curing diseases, but they spent it on less effective solutions. In that world, even though global disease would not be a neglected problem, mass vaccination would be a neglected (effective) solution, and it would make sense for donors to prioritize it.There’s something fractal about solutions and problems. Every solution to a problem presents its own problem: how best to implement it. Mass vaccination is an effective solution to the problem of disease. But then there’s the problem of ‘how can we best achieve mass vaccination?’. And solutions to that problem - for example, different methods of vaccine distribution, or interventions to address vaccine skepticism - pose their own, more granular problems.Here’s another example: hundreds of millions of dollars is spent each year on preventing nuclear war and nuclear winter. However, only a small fraction of this is spent on interventions intended to mitigate the negative impacts of nuclear winter - for example, ALLFED’s work researching food sources that are not dependent on sunlight. But in the event that there is a nuclear winter, these sorts of interventions will be extremely effective: they’ll enable us to produce much more food, so far fewer people will starve. Mitigation interventions are thus a highly neglected class of solutions for a problem - nuclear war - that is comparatively less neglected.As another example: climate change is not a neglected problem. But different solutions get very different amounts of attention. One of the most effective interventions seems to be preserving the rainforests (and other bodies of biomass), yet only a lonely few organizations (e.g. Cool Earth and the Carbon Fund) are working on this.Evaluating the neglectedness of solutions has its own problems. It means that neglectedness no longer lines up so neatly with solvability/tractability and scale/import...]]>
Fri, 10 Feb 2023 21:27:19 +0000 EA - There can be highly neglected solutions to less-neglected problems by Linda Linsefors Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There can be highly neglected solutions to less-neglected problems, published by Linda Linsefors on February 10, 2023 on The Effective Altruism Forum.This post is based on an old draft that I wrote years ago, but never quite finished. The specific references might be a bit outdated, but think the main point is still relevant.Thanks to Amber Dawn Ace for turning my draft into something worth posting.EAs are often interested in how ‘neglected’ problems are: that is, how much money is being spent on solving them, and/or how many people are working on them. 80,000 Hours, for example, prioritizes pressing problems using a version of the importance, tractability, neglectedness (ITN) framework, where a problem’s ‘neglectedness’ is one of three factors that determine how much we should prioritize it relative to other problems.80,000 Hours define neglectedness as:“How many people, or dollars, are currently being dedicated to solving the problem?I suggest that a better definition would be:“How many people, or dollars, are currently being dedicated to this particular solution?”I think it makes more sense to assess solutions for neglectedness, rather than problems. Sometimes, a problem is not neglected, but effective solutions to that problem are neglected. A lot of people are working on the problem and a lot of money is being spent on it, but in ineffective (or less effective) ways.Here’s the example that 80,000 Hours uses to illustrate neglectedness:“[M]ass immunisation of children is an extremely effective intervention to improve global health, but it is already being vigorously pursued by governments and several major foundations, including the Gates Foundation. This makes it less likely to be a top opportunity for future donors.”Note that mass immunisation of children is a solution, not a problem. But it makes sense for 80K to think about this: we can imagine a world in which charities spent just as much money on preventing or curing diseases, but they spent it on less effective solutions. In that world, even though global disease would not be a neglected problem, mass vaccination would be a neglected (effective) solution, and it would make sense for donors to prioritize it.There’s something fractal about solutions and problems. Every solution to a problem presents its own problem: how best to implement it. Mass vaccination is an effective solution to the problem of disease. But then there’s the problem of ‘how can we best achieve mass vaccination?’. And solutions to that problem - for example, different methods of vaccine distribution, or interventions to address vaccine skepticism - pose their own, more granular problems.Here’s another example: hundreds of millions of dollars is spent each year on preventing nuclear war and nuclear winter. However, only a small fraction of this is spent on interventions intended to mitigate the negative impacts of nuclear winter - for example, ALLFED’s work researching food sources that are not dependent on sunlight. But in the event that there is a nuclear winter, these sorts of interventions will be extremely effective: they’ll enable us to produce much more food, so far fewer people will starve. Mitigation interventions are thus a highly neglected class of solutions for a problem - nuclear war - that is comparatively less neglected.As another example: climate change is not a neglected problem. But different solutions get very different amounts of attention. One of the most effective interventions seems to be preserving the rainforests (and other bodies of biomass), yet only a lonely few organizations (e.g. Cool Earth and the Carbon Fund) are working on this.Evaluating the neglectedness of solutions has its own problems. It means that neglectedness no longer lines up so neatly with solvability/tractability and scale/import...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There can be highly neglected solutions to less-neglected problems, published by Linda Linsefors on February 10, 2023 on The Effective Altruism Forum.This post is based on an old draft that I wrote years ago, but never quite finished. The specific references might be a bit outdated, but think the main point is still relevant.Thanks to Amber Dawn Ace for turning my draft into something worth posting.EAs are often interested in how ‘neglected’ problems are: that is, how much money is being spent on solving them, and/or how many people are working on them. 80,000 Hours, for example, prioritizes pressing problems using a version of the importance, tractability, neglectedness (ITN) framework, where a problem’s ‘neglectedness’ is one of three factors that determine how much we should prioritize it relative to other problems.80,000 Hours define neglectedness as:“How many people, or dollars, are currently being dedicated to solving the problem?I suggest that a better definition would be:“How many people, or dollars, are currently being dedicated to this particular solution?”I think it makes more sense to assess solutions for neglectedness, rather than problems. Sometimes, a problem is not neglected, but effective solutions to that problem are neglected. A lot of people are working on the problem and a lot of money is being spent on it, but in ineffective (or less effective) ways.Here’s the example that 80,000 Hours uses to illustrate neglectedness:“[M]ass immunisation of children is an extremely effective intervention to improve global health, but it is already being vigorously pursued by governments and several major foundations, including the Gates Foundation. This makes it less likely to be a top opportunity for future donors.”Note that mass immunisation of children is a solution, not a problem. But it makes sense for 80K to think about this: we can imagine a world in which charities spent just as much money on preventing or curing diseases, but they spent it on less effective solutions. In that world, even though global disease would not be a neglected problem, mass vaccination would be a neglected (effective) solution, and it would make sense for donors to prioritize it.There’s something fractal about solutions and problems. Every solution to a problem presents its own problem: how best to implement it. Mass vaccination is an effective solution to the problem of disease. But then there’s the problem of ‘how can we best achieve mass vaccination?’. And solutions to that problem - for example, different methods of vaccine distribution, or interventions to address vaccine skepticism - pose their own, more granular problems.Here’s another example: hundreds of millions of dollars is spent each year on preventing nuclear war and nuclear winter. However, only a small fraction of this is spent on interventions intended to mitigate the negative impacts of nuclear winter - for example, ALLFED’s work researching food sources that are not dependent on sunlight. But in the event that there is a nuclear winter, these sorts of interventions will be extremely effective: they’ll enable us to produce much more food, so far fewer people will starve. Mitigation interventions are thus a highly neglected class of solutions for a problem - nuclear war - that is comparatively less neglected.As another example: climate change is not a neglected problem. But different solutions get very different amounts of attention. One of the most effective interventions seems to be preserving the rainforests (and other bodies of biomass), yet only a lonely few organizations (e.g. Cool Earth and the Carbon Fund) are working on this.Evaluating the neglectedness of solutions has its own problems. It means that neglectedness no longer lines up so neatly with solvability/tractability and scale/import...]]>
Linda Linsefors https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:37 None full 4825
aGmhi4uvAJptY8TA7_NL_EA_EA EA - Straightforwardly eliciting probabilities from GPT-3 by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Straightforwardly eliciting probabilities from GPT-3, published by NunoSempere on February 9, 2023 on The Effective Altruism Forum.I explain two straightforward strategies for eliciting probabilities from language models, and in particular for GPT-3, provide code, and give my thoughts on what I would do if I were being more hardcore about this.Straightforward strategiesLook at the probability of yes/no completionGiven a binary question, like “At the end of 2023, will Vladimir Putin be President of Russia?” you can create something like the following text for the model to complete:Then we can compare the relative probabilities of completion to the “Yes,” “yes,” “No” and “no” tokens. This requires a bit of care. Note that we are not making the same query 100 times and looking at the frequencies, but rather asking for the probabilities directly:You can see a version of this strategy implemented here.A related strategy might be to look at what probabilities the model assigns to a pair of sentences with opposite meanings:“Putin will be the president of Russia in 2023”“Putin will not be the president of Russia in 2023.”For example, GPT-3 could assign a probability of 9 10^-N to the first sentence and 10^-N to the second sentence. We could then interpret that as a 90% probability that Putin will be president of Russia by the end of 2023.But that method has two problems:The negatively worded sentence has one word more, and so it might systematically have a lower probabilityGPT-3’s API doesn’t appear to provide a way of calculating the likelihood of a whole sentence.Have the model output the probability verballyYou can directly ask the model for a probability, as follows:Now, the problem with this approach is that, untweaked, it does poorly.Instead, I’ve tried to use templates. For example, here is a template for producing reasoning in base rates:Many good forecasts are made in two steps.Look at the base rate or historical frequency to arrive at a baseline probability.Take into account other considerations and update the baseline slightly.For example, we can answer the question “will there be a schism in the Catholic Church in 2023?” as follows:There have been around 40 schisms in the 2000 years since the Catholic Church was founded. This is a base rate of 40 schisms / 2000 years = 2% chance of a schism / year. If we only look at the last 100 years, there have been 4 schisms, which is a base rate of 4 schisms / 100 years = 4% chance of a schism / year. In between is 3%, so we will take that as our baseline.The Catholic Church in Germany is currently in tension and arguing with Rome. This increases the probability a bit, to 5%.Therefore, our final probability for “will there be a schism in the Catholic Church in 2023?” is: 5%For another example, we can answer the question “${question}” as follows:That approach does somewhat better. The problem is that sometimes the base rate approach isn’t quite relevant, because sometimes we have neither a historical record—e.g,. global nuclear war. And sometimes we can't straightforwardly rely on the lack of a historical track record: VR headsets haven’t really been adopted in the mainstream, but their price has been falling and their quality rising, so making a forecast solely looking at the historical lack of adoption might lead one astray.You can see some code which implements this strategy here.More elaborate strategiesVarious templates, and choosing the template depending on the type of questionThe base rate template is only one of many possible options. We could also look at:Laplace rule of succession template: Since X was first possible, how often has it happened?“Mainstream plausibility” template: We could prompt a model to simulate how plausible a well-informed member of the public thinks that an eve...]]>
NunoSempere https://forum.effectivealtruism.org/posts/aGmhi4uvAJptY8TA7/straightforwardly-eliciting-probabilities-from-gpt-3 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Straightforwardly eliciting probabilities from GPT-3, published by NunoSempere on February 9, 2023 on The Effective Altruism Forum.I explain two straightforward strategies for eliciting probabilities from language models, and in particular for GPT-3, provide code, and give my thoughts on what I would do if I were being more hardcore about this.Straightforward strategiesLook at the probability of yes/no completionGiven a binary question, like “At the end of 2023, will Vladimir Putin be President of Russia?” you can create something like the following text for the model to complete:Then we can compare the relative probabilities of completion to the “Yes,” “yes,” “No” and “no” tokens. This requires a bit of care. Note that we are not making the same query 100 times and looking at the frequencies, but rather asking for the probabilities directly:You can see a version of this strategy implemented here.A related strategy might be to look at what probabilities the model assigns to a pair of sentences with opposite meanings:“Putin will be the president of Russia in 2023”“Putin will not be the president of Russia in 2023.”For example, GPT-3 could assign a probability of 9 10^-N to the first sentence and 10^-N to the second sentence. We could then interpret that as a 90% probability that Putin will be president of Russia by the end of 2023.But that method has two problems:The negatively worded sentence has one word more, and so it might systematically have a lower probabilityGPT-3’s API doesn’t appear to provide a way of calculating the likelihood of a whole sentence.Have the model output the probability verballyYou can directly ask the model for a probability, as follows:Now, the problem with this approach is that, untweaked, it does poorly.Instead, I’ve tried to use templates. For example, here is a template for producing reasoning in base rates:Many good forecasts are made in two steps.Look at the base rate or historical frequency to arrive at a baseline probability.Take into account other considerations and update the baseline slightly.For example, we can answer the question “will there be a schism in the Catholic Church in 2023?” as follows:There have been around 40 schisms in the 2000 years since the Catholic Church was founded. This is a base rate of 40 schisms / 2000 years = 2% chance of a schism / year. If we only look at the last 100 years, there have been 4 schisms, which is a base rate of 4 schisms / 100 years = 4% chance of a schism / year. In between is 3%, so we will take that as our baseline.The Catholic Church in Germany is currently in tension and arguing with Rome. This increases the probability a bit, to 5%.Therefore, our final probability for “will there be a schism in the Catholic Church in 2023?” is: 5%For another example, we can answer the question “${question}” as follows:That approach does somewhat better. The problem is that sometimes the base rate approach isn’t quite relevant, because sometimes we have neither a historical record—e.g,. global nuclear war. And sometimes we can't straightforwardly rely on the lack of a historical track record: VR headsets haven’t really been adopted in the mainstream, but their price has been falling and their quality rising, so making a forecast solely looking at the historical lack of adoption might lead one astray.You can see some code which implements this strategy here.More elaborate strategiesVarious templates, and choosing the template depending on the type of questionThe base rate template is only one of many possible options. We could also look at:Laplace rule of succession template: Since X was first possible, how often has it happened?“Mainstream plausibility” template: We could prompt a model to simulate how plausible a well-informed member of the public thinks that an eve...]]>
Fri, 10 Feb 2023 12:30:37 +0000 EA - Straightforwardly eliciting probabilities from GPT-3 by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Straightforwardly eliciting probabilities from GPT-3, published by NunoSempere on February 9, 2023 on The Effective Altruism Forum.I explain two straightforward strategies for eliciting probabilities from language models, and in particular for GPT-3, provide code, and give my thoughts on what I would do if I were being more hardcore about this.Straightforward strategiesLook at the probability of yes/no completionGiven a binary question, like “At the end of 2023, will Vladimir Putin be President of Russia?” you can create something like the following text for the model to complete:Then we can compare the relative probabilities of completion to the “Yes,” “yes,” “No” and “no” tokens. This requires a bit of care. Note that we are not making the same query 100 times and looking at the frequencies, but rather asking for the probabilities directly:You can see a version of this strategy implemented here.A related strategy might be to look at what probabilities the model assigns to a pair of sentences with opposite meanings:“Putin will be the president of Russia in 2023”“Putin will not be the president of Russia in 2023.”For example, GPT-3 could assign a probability of 9 10^-N to the first sentence and 10^-N to the second sentence. We could then interpret that as a 90% probability that Putin will be president of Russia by the end of 2023.But that method has two problems:The negatively worded sentence has one word more, and so it might systematically have a lower probabilityGPT-3’s API doesn’t appear to provide a way of calculating the likelihood of a whole sentence.Have the model output the probability verballyYou can directly ask the model for a probability, as follows:Now, the problem with this approach is that, untweaked, it does poorly.Instead, I’ve tried to use templates. For example, here is a template for producing reasoning in base rates:Many good forecasts are made in two steps.Look at the base rate or historical frequency to arrive at a baseline probability.Take into account other considerations and update the baseline slightly.For example, we can answer the question “will there be a schism in the Catholic Church in 2023?” as follows:There have been around 40 schisms in the 2000 years since the Catholic Church was founded. This is a base rate of 40 schisms / 2000 years = 2% chance of a schism / year. If we only look at the last 100 years, there have been 4 schisms, which is a base rate of 4 schisms / 100 years = 4% chance of a schism / year. In between is 3%, so we will take that as our baseline.The Catholic Church in Germany is currently in tension and arguing with Rome. This increases the probability a bit, to 5%.Therefore, our final probability for “will there be a schism in the Catholic Church in 2023?” is: 5%For another example, we can answer the question “${question}” as follows:That approach does somewhat better. The problem is that sometimes the base rate approach isn’t quite relevant, because sometimes we have neither a historical record—e.g,. global nuclear war. And sometimes we can't straightforwardly rely on the lack of a historical track record: VR headsets haven’t really been adopted in the mainstream, but their price has been falling and their quality rising, so making a forecast solely looking at the historical lack of adoption might lead one astray.You can see some code which implements this strategy here.More elaborate strategiesVarious templates, and choosing the template depending on the type of questionThe base rate template is only one of many possible options. We could also look at:Laplace rule of succession template: Since X was first possible, how often has it happened?“Mainstream plausibility” template: We could prompt a model to simulate how plausible a well-informed member of the public thinks that an eve...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Straightforwardly eliciting probabilities from GPT-3, published by NunoSempere on February 9, 2023 on The Effective Altruism Forum.I explain two straightforward strategies for eliciting probabilities from language models, and in particular for GPT-3, provide code, and give my thoughts on what I would do if I were being more hardcore about this.Straightforward strategiesLook at the probability of yes/no completionGiven a binary question, like “At the end of 2023, will Vladimir Putin be President of Russia?” you can create something like the following text for the model to complete:Then we can compare the relative probabilities of completion to the “Yes,” “yes,” “No” and “no” tokens. This requires a bit of care. Note that we are not making the same query 100 times and looking at the frequencies, but rather asking for the probabilities directly:You can see a version of this strategy implemented here.A related strategy might be to look at what probabilities the model assigns to a pair of sentences with opposite meanings:“Putin will be the president of Russia in 2023”“Putin will not be the president of Russia in 2023.”For example, GPT-3 could assign a probability of 9 10^-N to the first sentence and 10^-N to the second sentence. We could then interpret that as a 90% probability that Putin will be president of Russia by the end of 2023.But that method has two problems:The negatively worded sentence has one word more, and so it might systematically have a lower probabilityGPT-3’s API doesn’t appear to provide a way of calculating the likelihood of a whole sentence.Have the model output the probability verballyYou can directly ask the model for a probability, as follows:Now, the problem with this approach is that, untweaked, it does poorly.Instead, I’ve tried to use templates. For example, here is a template for producing reasoning in base rates:Many good forecasts are made in two steps.Look at the base rate or historical frequency to arrive at a baseline probability.Take into account other considerations and update the baseline slightly.For example, we can answer the question “will there be a schism in the Catholic Church in 2023?” as follows:There have been around 40 schisms in the 2000 years since the Catholic Church was founded. This is a base rate of 40 schisms / 2000 years = 2% chance of a schism / year. If we only look at the last 100 years, there have been 4 schisms, which is a base rate of 4 schisms / 100 years = 4% chance of a schism / year. In between is 3%, so we will take that as our baseline.The Catholic Church in Germany is currently in tension and arguing with Rome. This increases the probability a bit, to 5%.Therefore, our final probability for “will there be a schism in the Catholic Church in 2023?” is: 5%For another example, we can answer the question “${question}” as follows:That approach does somewhat better. The problem is that sometimes the base rate approach isn’t quite relevant, because sometimes we have neither a historical record—e.g,. global nuclear war. And sometimes we can't straightforwardly rely on the lack of a historical track record: VR headsets haven’t really been adopted in the mainstream, but their price has been falling and their quality rising, so making a forecast solely looking at the historical lack of adoption might lead one astray.You can see some code which implements this strategy here.More elaborate strategiesVarious templates, and choosing the template depending on the type of questionThe base rate template is only one of many possible options. We could also look at:Laplace rule of succession template: Since X was first possible, how often has it happened?“Mainstream plausibility” template: We could prompt a model to simulate how plausible a well-informed member of the public thinks that an eve...]]>
NunoSempere https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:04 None full 4826
vs8FFnPfKnitAhcJb_NL_EA_EA EA - “Community” posts have their own section, subforums are closing, and more (Forum update February 2023) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “Community” posts have their own section, subforums are closing, and more (Forum update February 2023), published by Lizka on February 10, 2023 on The Effective Altruism Forum.TL;DR: We’re kicking off a test where “Community” posts don’t go on the Frontpage with other posts but have their own section below the fold. We’re also closing subforums and focusing on improving “core topic” pages to let people go deeper on specific sub-fields in EA. And we’re sharing a few other updates.More detailed summary:We’re running a test: “Community” posts have a separate section on the Frontpage ⬇️We’re shutting down subforums, and pivoting to “core topics” ⬇️Other changes ⬇️You can now notify users by tagging or “mentioning” them ⬇️You can tag shortform posts that are related to core topics ⬇️You can upload social preview images to your posts ⬇️Coauthors can edit posts ⬇️“Community” posts have a separate section on the Frontpage — a testWe recently shared that we might test separating community discussion from posts on other topics, and outlined two approaches we might try. Based on the feedback we got and other considerations, we’re going forward with version 2: a section for “Community” posts on the Frontpage.What this meansPosts that are primarily impactful via the EA community as a phenomenon (posts that aren’t significantly relevant to a non-meta organization, field of research, etc., including posts about the community) get the “Community” tag (as before), which moves them off the top of the Frontpage and into a “Community” section below the fold (on the same page).Moderators and Forum facilitators will be applying and monitoring this tag. Other users cannot modify the tag.Readers can change this by modifying their tag filters (e.g. to still include "Community" posts).Discussions on “Community” posts will also not be showing up in the “Recent Discussion” section. (Readers can change this back by going to their account settings and looking in the "Site customizations" section.)We will run this test for around a month, and track engagement with different kinds of posts, feedback we get, subjective evaluations of how discussions go, and more.If you have any feedback on this change (or any others), we’d love to hear it. You can comment on this post or email forum@centreforeffectivealtruism.org.You can see how this works by scrolling through the Frontpage right now.Why we're doing thisYou can see our full reasoning here (and in the comments). In brief, we are worried about a way in which the voting structure on the Forum leads to more engagement with “Community” posts than users endorse, we’ve been hearing user feedback on related issues for a long time, and we’ve been having lots of conversations on hypotheses that we’d like to test.Transitioning from subforums to “core topics”Very short TL;DR: we’re closing the subforums that we were testing, and pivoting to developing more polished and easy-to-use topic pages that help readers explore those topics.Slightly less brief:We were testing hosting subforums on the Forum. We launched subforums for Effective Giving, Bioethics, Software Development, and Forecasting & Estimation.We faced some issues with the subforums and decided to discontinue that project (at least for the near future — we might revisit subforums). (See more on what we learned.) The existing subforums are closing (and most will become “core topic” pages).Content from the subforums — discussion threads, posts, etc. — will still be available on the Forum. (Posts on the topic pages, discussions in the "Posts" tab.)We still want to build better topic-specific spaces on the Forum, and are developing “core topic” pages that we hope will let people keep up with content on those topics or go deeper into topic-specific content.(We also hope to suppor...]]>
Lizka https://forum.effectivealtruism.org/posts/vs8FFnPfKnitAhcJb/community-posts-have-their-own-section-subforums-are-closing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “Community” posts have their own section, subforums are closing, and more (Forum update February 2023), published by Lizka on February 10, 2023 on The Effective Altruism Forum.TL;DR: We’re kicking off a test where “Community” posts don’t go on the Frontpage with other posts but have their own section below the fold. We’re also closing subforums and focusing on improving “core topic” pages to let people go deeper on specific sub-fields in EA. And we’re sharing a few other updates.More detailed summary:We’re running a test: “Community” posts have a separate section on the Frontpage ⬇️We’re shutting down subforums, and pivoting to “core topics” ⬇️Other changes ⬇️You can now notify users by tagging or “mentioning” them ⬇️You can tag shortform posts that are related to core topics ⬇️You can upload social preview images to your posts ⬇️Coauthors can edit posts ⬇️“Community” posts have a separate section on the Frontpage — a testWe recently shared that we might test separating community discussion from posts on other topics, and outlined two approaches we might try. Based on the feedback we got and other considerations, we’re going forward with version 2: a section for “Community” posts on the Frontpage.What this meansPosts that are primarily impactful via the EA community as a phenomenon (posts that aren’t significantly relevant to a non-meta organization, field of research, etc., including posts about the community) get the “Community” tag (as before), which moves them off the top of the Frontpage and into a “Community” section below the fold (on the same page).Moderators and Forum facilitators will be applying and monitoring this tag. Other users cannot modify the tag.Readers can change this by modifying their tag filters (e.g. to still include "Community" posts).Discussions on “Community” posts will also not be showing up in the “Recent Discussion” section. (Readers can change this back by going to their account settings and looking in the "Site customizations" section.)We will run this test for around a month, and track engagement with different kinds of posts, feedback we get, subjective evaluations of how discussions go, and more.If you have any feedback on this change (or any others), we’d love to hear it. You can comment on this post or email forum@centreforeffectivealtruism.org.You can see how this works by scrolling through the Frontpage right now.Why we're doing thisYou can see our full reasoning here (and in the comments). In brief, we are worried about a way in which the voting structure on the Forum leads to more engagement with “Community” posts than users endorse, we’ve been hearing user feedback on related issues for a long time, and we’ve been having lots of conversations on hypotheses that we’d like to test.Transitioning from subforums to “core topics”Very short TL;DR: we’re closing the subforums that we were testing, and pivoting to developing more polished and easy-to-use topic pages that help readers explore those topics.Slightly less brief:We were testing hosting subforums on the Forum. We launched subforums for Effective Giving, Bioethics, Software Development, and Forecasting & Estimation.We faced some issues with the subforums and decided to discontinue that project (at least for the near future — we might revisit subforums). (See more on what we learned.) The existing subforums are closing (and most will become “core topic” pages).Content from the subforums — discussion threads, posts, etc. — will still be available on the Forum. (Posts on the topic pages, discussions in the "Posts" tab.)We still want to build better topic-specific spaces on the Forum, and are developing “core topic” pages that we hope will let people keep up with content on those topics or go deeper into topic-specific content.(We also hope to suppor...]]>
Fri, 10 Feb 2023 00:22:00 +0000 EA - “Community” posts have their own section, subforums are closing, and more (Forum update February 2023) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “Community” posts have their own section, subforums are closing, and more (Forum update February 2023), published by Lizka on February 10, 2023 on The Effective Altruism Forum.TL;DR: We’re kicking off a test where “Community” posts don’t go on the Frontpage with other posts but have their own section below the fold. We’re also closing subforums and focusing on improving “core topic” pages to let people go deeper on specific sub-fields in EA. And we’re sharing a few other updates.More detailed summary:We’re running a test: “Community” posts have a separate section on the Frontpage ⬇️We’re shutting down subforums, and pivoting to “core topics” ⬇️Other changes ⬇️You can now notify users by tagging or “mentioning” them ⬇️You can tag shortform posts that are related to core topics ⬇️You can upload social preview images to your posts ⬇️Coauthors can edit posts ⬇️“Community” posts have a separate section on the Frontpage — a testWe recently shared that we might test separating community discussion from posts on other topics, and outlined two approaches we might try. Based on the feedback we got and other considerations, we’re going forward with version 2: a section for “Community” posts on the Frontpage.What this meansPosts that are primarily impactful via the EA community as a phenomenon (posts that aren’t significantly relevant to a non-meta organization, field of research, etc., including posts about the community) get the “Community” tag (as before), which moves them off the top of the Frontpage and into a “Community” section below the fold (on the same page).Moderators and Forum facilitators will be applying and monitoring this tag. Other users cannot modify the tag.Readers can change this by modifying their tag filters (e.g. to still include "Community" posts).Discussions on “Community” posts will also not be showing up in the “Recent Discussion” section. (Readers can change this back by going to their account settings and looking in the "Site customizations" section.)We will run this test for around a month, and track engagement with different kinds of posts, feedback we get, subjective evaluations of how discussions go, and more.If you have any feedback on this change (or any others), we’d love to hear it. You can comment on this post or email forum@centreforeffectivealtruism.org.You can see how this works by scrolling through the Frontpage right now.Why we're doing thisYou can see our full reasoning here (and in the comments). In brief, we are worried about a way in which the voting structure on the Forum leads to more engagement with “Community” posts than users endorse, we’ve been hearing user feedback on related issues for a long time, and we’ve been having lots of conversations on hypotheses that we’d like to test.Transitioning from subforums to “core topics”Very short TL;DR: we’re closing the subforums that we were testing, and pivoting to developing more polished and easy-to-use topic pages that help readers explore those topics.Slightly less brief:We were testing hosting subforums on the Forum. We launched subforums for Effective Giving, Bioethics, Software Development, and Forecasting & Estimation.We faced some issues with the subforums and decided to discontinue that project (at least for the near future — we might revisit subforums). (See more on what we learned.) The existing subforums are closing (and most will become “core topic” pages).Content from the subforums — discussion threads, posts, etc. — will still be available on the Forum. (Posts on the topic pages, discussions in the "Posts" tab.)We still want to build better topic-specific spaces on the Forum, and are developing “core topic” pages that we hope will let people keep up with content on those topics or go deeper into topic-specific content.(We also hope to suppor...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “Community” posts have their own section, subforums are closing, and more (Forum update February 2023), published by Lizka on February 10, 2023 on The Effective Altruism Forum.TL;DR: We’re kicking off a test where “Community” posts don’t go on the Frontpage with other posts but have their own section below the fold. We’re also closing subforums and focusing on improving “core topic” pages to let people go deeper on specific sub-fields in EA. And we’re sharing a few other updates.More detailed summary:We’re running a test: “Community” posts have a separate section on the Frontpage ⬇️We’re shutting down subforums, and pivoting to “core topics” ⬇️Other changes ⬇️You can now notify users by tagging or “mentioning” them ⬇️You can tag shortform posts that are related to core topics ⬇️You can upload social preview images to your posts ⬇️Coauthors can edit posts ⬇️“Community” posts have a separate section on the Frontpage — a testWe recently shared that we might test separating community discussion from posts on other topics, and outlined two approaches we might try. Based on the feedback we got and other considerations, we’re going forward with version 2: a section for “Community” posts on the Frontpage.What this meansPosts that are primarily impactful via the EA community as a phenomenon (posts that aren’t significantly relevant to a non-meta organization, field of research, etc., including posts about the community) get the “Community” tag (as before), which moves them off the top of the Frontpage and into a “Community” section below the fold (on the same page).Moderators and Forum facilitators will be applying and monitoring this tag. Other users cannot modify the tag.Readers can change this by modifying their tag filters (e.g. to still include "Community" posts).Discussions on “Community” posts will also not be showing up in the “Recent Discussion” section. (Readers can change this back by going to their account settings and looking in the "Site customizations" section.)We will run this test for around a month, and track engagement with different kinds of posts, feedback we get, subjective evaluations of how discussions go, and more.If you have any feedback on this change (or any others), we’d love to hear it. You can comment on this post or email forum@centreforeffectivealtruism.org.You can see how this works by scrolling through the Frontpage right now.Why we're doing thisYou can see our full reasoning here (and in the comments). In brief, we are worried about a way in which the voting structure on the Forum leads to more engagement with “Community” posts than users endorse, we’ve been hearing user feedback on related issues for a long time, and we’ve been having lots of conversations on hypotheses that we’d like to test.Transitioning from subforums to “core topics”Very short TL;DR: we’re closing the subforums that we were testing, and pivoting to developing more polished and easy-to-use topic pages that help readers explore those topics.Slightly less brief:We were testing hosting subforums on the Forum. We launched subforums for Effective Giving, Bioethics, Software Development, and Forecasting & Estimation.We faced some issues with the subforums and decided to discontinue that project (at least for the near future — we might revisit subforums). (See more on what we learned.) The existing subforums are closing (and most will become “core topic” pages).Content from the subforums — discussion threads, posts, etc. — will still be available on the Forum. (Posts on the topic pages, discussions in the "Posts" tab.)We still want to build better topic-specific spaces on the Forum, and are developing “core topic” pages that we hope will let people keep up with content on those topics or go deeper into topic-specific content.(We also hope to suppor...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:32 None full 4823
sDaXKQCKHmX5DKhRy_NL_EA_EA EA - EA Community Builders’ Commitment to Anti-Racism and Anti-Sexism by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism, published by Rockwell on February 9, 2023 on The Effective Altruism Forum.In light of recent events in the EA community, several professional EA community builders have been working on a statement for the past few weeks: EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism. You can see the growing list of signatories at the link.We have chosen to be a part of the effective altruism community because we agree that the world can and should be a better place for everyone in it. We have chosen to be community builders because we recognize that lasting, impactful change comes out of collective effort. The positive change we want to see in the world requires a diverse set of actors collaborating within an inclusive community for the greater good.But inclusive, diverse, collaborative communities need to be protected, not just built. Bigoted ideologies, such as racism and sexism, are intrinsically harmful. They also fundamentally undermine the very collaborations needed to produce a world that is better for everyone in it.We unequivocally condemn racism and sexism, including “scientific” justifications for either, and believe they have no place in the effective altruism community. As community builders within the effective altruism space, we commit to practicing and promoting anti-racism and anti-sexism within our communities.If you are the leader/organizer of an EA community building group (including national and city groups, professional groups, affinity groups, and university groups), you can add your signature and any additional commentary specific to you/your organization (that will display as a footnote on the statement) by filling out this form.Thank you to the many community builders who contributed to the creation of this document.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rockwell https://forum.effectivealtruism.org/posts/sDaXKQCKHmX5DKhRy/ea-community-builders-commitment-to-anti-racism-and-anti Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism, published by Rockwell on February 9, 2023 on The Effective Altruism Forum.In light of recent events in the EA community, several professional EA community builders have been working on a statement for the past few weeks: EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism. You can see the growing list of signatories at the link.We have chosen to be a part of the effective altruism community because we agree that the world can and should be a better place for everyone in it. We have chosen to be community builders because we recognize that lasting, impactful change comes out of collective effort. The positive change we want to see in the world requires a diverse set of actors collaborating within an inclusive community for the greater good.But inclusive, diverse, collaborative communities need to be protected, not just built. Bigoted ideologies, such as racism and sexism, are intrinsically harmful. They also fundamentally undermine the very collaborations needed to produce a world that is better for everyone in it.We unequivocally condemn racism and sexism, including “scientific” justifications for either, and believe they have no place in the effective altruism community. As community builders within the effective altruism space, we commit to practicing and promoting anti-racism and anti-sexism within our communities.If you are the leader/organizer of an EA community building group (including national and city groups, professional groups, affinity groups, and university groups), you can add your signature and any additional commentary specific to you/your organization (that will display as a footnote on the statement) by filling out this form.Thank you to the many community builders who contributed to the creation of this document.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 09 Feb 2023 23:22:57 +0000 EA - EA Community Builders’ Commitment to Anti-Racism and Anti-Sexism by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism, published by Rockwell on February 9, 2023 on The Effective Altruism Forum.In light of recent events in the EA community, several professional EA community builders have been working on a statement for the past few weeks: EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism. You can see the growing list of signatories at the link.We have chosen to be a part of the effective altruism community because we agree that the world can and should be a better place for everyone in it. We have chosen to be community builders because we recognize that lasting, impactful change comes out of collective effort. The positive change we want to see in the world requires a diverse set of actors collaborating within an inclusive community for the greater good.But inclusive, diverse, collaborative communities need to be protected, not just built. Bigoted ideologies, such as racism and sexism, are intrinsically harmful. They also fundamentally undermine the very collaborations needed to produce a world that is better for everyone in it.We unequivocally condemn racism and sexism, including “scientific” justifications for either, and believe they have no place in the effective altruism community. As community builders within the effective altruism space, we commit to practicing and promoting anti-racism and anti-sexism within our communities.If you are the leader/organizer of an EA community building group (including national and city groups, professional groups, affinity groups, and university groups), you can add your signature and any additional commentary specific to you/your organization (that will display as a footnote on the statement) by filling out this form.Thank you to the many community builders who contributed to the creation of this document.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism, published by Rockwell on February 9, 2023 on The Effective Altruism Forum.In light of recent events in the EA community, several professional EA community builders have been working on a statement for the past few weeks: EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism. You can see the growing list of signatories at the link.We have chosen to be a part of the effective altruism community because we agree that the world can and should be a better place for everyone in it. We have chosen to be community builders because we recognize that lasting, impactful change comes out of collective effort. The positive change we want to see in the world requires a diverse set of actors collaborating within an inclusive community for the greater good.But inclusive, diverse, collaborative communities need to be protected, not just built. Bigoted ideologies, such as racism and sexism, are intrinsically harmful. They also fundamentally undermine the very collaborations needed to produce a world that is better for everyone in it.We unequivocally condemn racism and sexism, including “scientific” justifications for either, and believe they have no place in the effective altruism community. As community builders within the effective altruism space, we commit to practicing and promoting anti-racism and anti-sexism within our communities.If you are the leader/organizer of an EA community building group (including national and city groups, professional groups, affinity groups, and university groups), you can add your signature and any additional commentary specific to you/your organization (that will display as a footnote on the statement) by filling out this form.Thank you to the many community builders who contributed to the creation of this document.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rockwell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:54 None full 4836
HuQtr7qfB2EfcGqTu_NL_EA_EA EA - Technological developments that could increase risks from nuclear weapons: A shallow review by MichaelA Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Technological developments that could increase risks from nuclear weapons: A shallow review, published by MichaelA on February 9, 2023 on The Effective Altruism Forum.This is a blog post, not a research report, meaning it was produced relatively quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.SummaryThis post is a shallow exploration of some technological developments that might occur and might increase risks from nuclear weapons - especially existential risk or other risks to the long-term future. This is one of many questions relevant to how much to prioritize nuclear risk relative to other issues, what risks and interventions to prioritize within the nuclear risk area, and how that should change in future. But note that, due to time constraints, this post isn’t comprehensive and was less thoroughly researched and reviewed than we’d like.For each potential development, we provide some very quick, rough guesses about how much and in what ways the development would affect the odds and consequences of nuclear conflict (“Importance”), the likelihood of the development in the coming decade or decades (“Likelihood/Closeness”), and how much and in what ways thoughtful altruistic actors could influence whether and how the technology is developed and used (“Steerability”).These tentative bottom line beliefs are summarized in the table below:Radiological weaponsPure fusion bombsHigh-altitude electromagnetic pulse (HEMP)Neutron bombsAtomically precise manufacturing (APM)AI-assisted production/designOther developments in methods for production/design???Hypersonic missiles/glide vehiclesMore accurate nuclear weaponsLong-range conventional strike capabilitiesBetter detection of nuclear warhead platforms, launchers, and/or delivery vehiclesMissile defense systemsAdvances in AI capabilitiesCyberattack (or defense) capabilitiesAdvances in autonomous weaponsMore integration of AI with NC3 systemsAnti-satellite weapons (ASAT)“Space planes” and other (non-ASAT) space capabilitiesCategoryTechnological DevelopmentImportanceLikelihood / ClosenessSteer-abilityBomb types and production methodsMediumMedium/HighMedium/LowMediumMedium/LowMediumMediumMedium/LowMedium/LowLowMedium/LowMediumMethods for production and designHighLowMediumMedium/HighMedium/LowMediumDelivery systemsMedium/LowMedium/HighMedium/LowMediumMediumMedium/LowMedium/LowMedium/HighMedium/LowDetection and defenseMedium/HighMedium/HighMedium/LowMediumMediumMedium/LowAI and cyberMedium/HighMedium/HighMediumMedium/HighMedium/HighMediumMediumMedium/HighMediumMediumMediumMediumNon-nuclear warmaking advancesMedium/LowMediumMedium/LowMedium/LowMediumMedium/LowNote that:Each “potential technological development” is really more like a somewhat wide area in which a variety of different types and levels of development could occur, which makes the ratings in the above table less meaningful and more ambiguous.“Importance” is here assessed conditional on the development occurring, so will overstate the importance of thinking about or trying to steer unlikely developments.In some cases (e.g, “More accurate nuclear weapons”), the “Importance” score accounts for potential risk-reducing effects as well.“Likelihood/Closeness” is actually inelegantly collapsing together two different things, making our ratings of developments on that criterion less meaningful. E.g., one development could be moderately likely to occur quite soon and moderately likely to occur never, while another is very likely to occur in 15-25 years but not before then.Some of the topics this post discusses involve or are adjacent to information hazards (especially attention hazards), as is the case with much other di...]]>
MichaelA https://forum.effectivealtruism.org/posts/HuQtr7qfB2EfcGqTu/technological-developments-that-could-increase-risks-from-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Technological developments that could increase risks from nuclear weapons: A shallow review, published by MichaelA on February 9, 2023 on The Effective Altruism Forum.This is a blog post, not a research report, meaning it was produced relatively quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.SummaryThis post is a shallow exploration of some technological developments that might occur and might increase risks from nuclear weapons - especially existential risk or other risks to the long-term future. This is one of many questions relevant to how much to prioritize nuclear risk relative to other issues, what risks and interventions to prioritize within the nuclear risk area, and how that should change in future. But note that, due to time constraints, this post isn’t comprehensive and was less thoroughly researched and reviewed than we’d like.For each potential development, we provide some very quick, rough guesses about how much and in what ways the development would affect the odds and consequences of nuclear conflict (“Importance”), the likelihood of the development in the coming decade or decades (“Likelihood/Closeness”), and how much and in what ways thoughtful altruistic actors could influence whether and how the technology is developed and used (“Steerability”).These tentative bottom line beliefs are summarized in the table below:Radiological weaponsPure fusion bombsHigh-altitude electromagnetic pulse (HEMP)Neutron bombsAtomically precise manufacturing (APM)AI-assisted production/designOther developments in methods for production/design???Hypersonic missiles/glide vehiclesMore accurate nuclear weaponsLong-range conventional strike capabilitiesBetter detection of nuclear warhead platforms, launchers, and/or delivery vehiclesMissile defense systemsAdvances in AI capabilitiesCyberattack (or defense) capabilitiesAdvances in autonomous weaponsMore integration of AI with NC3 systemsAnti-satellite weapons (ASAT)“Space planes” and other (non-ASAT) space capabilitiesCategoryTechnological DevelopmentImportanceLikelihood / ClosenessSteer-abilityBomb types and production methodsMediumMedium/HighMedium/LowMediumMedium/LowMediumMediumMedium/LowMedium/LowLowMedium/LowMediumMethods for production and designHighLowMediumMedium/HighMedium/LowMediumDelivery systemsMedium/LowMedium/HighMedium/LowMediumMediumMedium/LowMedium/LowMedium/HighMedium/LowDetection and defenseMedium/HighMedium/HighMedium/LowMediumMediumMedium/LowAI and cyberMedium/HighMedium/HighMediumMedium/HighMedium/HighMediumMediumMedium/HighMediumMediumMediumMediumNon-nuclear warmaking advancesMedium/LowMediumMedium/LowMedium/LowMediumMedium/LowNote that:Each “potential technological development” is really more like a somewhat wide area in which a variety of different types and levels of development could occur, which makes the ratings in the above table less meaningful and more ambiguous.“Importance” is here assessed conditional on the development occurring, so will overstate the importance of thinking about or trying to steer unlikely developments.In some cases (e.g, “More accurate nuclear weapons”), the “Importance” score accounts for potential risk-reducing effects as well.“Likelihood/Closeness” is actually inelegantly collapsing together two different things, making our ratings of developments on that criterion less meaningful. E.g., one development could be moderately likely to occur quite soon and moderately likely to occur never, while another is very likely to occur in 15-25 years but not before then.Some of the topics this post discusses involve or are adjacent to information hazards (especially attention hazards), as is the case with much other di...]]>
Thu, 09 Feb 2023 23:21:28 +0000 EA - Technological developments that could increase risks from nuclear weapons: A shallow review by MichaelA Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Technological developments that could increase risks from nuclear weapons: A shallow review, published by MichaelA on February 9, 2023 on The Effective Altruism Forum.This is a blog post, not a research report, meaning it was produced relatively quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.SummaryThis post is a shallow exploration of some technological developments that might occur and might increase risks from nuclear weapons - especially existential risk or other risks to the long-term future. This is one of many questions relevant to how much to prioritize nuclear risk relative to other issues, what risks and interventions to prioritize within the nuclear risk area, and how that should change in future. But note that, due to time constraints, this post isn’t comprehensive and was less thoroughly researched and reviewed than we’d like.For each potential development, we provide some very quick, rough guesses about how much and in what ways the development would affect the odds and consequences of nuclear conflict (“Importance”), the likelihood of the development in the coming decade or decades (“Likelihood/Closeness”), and how much and in what ways thoughtful altruistic actors could influence whether and how the technology is developed and used (“Steerability”).These tentative bottom line beliefs are summarized in the table below:Radiological weaponsPure fusion bombsHigh-altitude electromagnetic pulse (HEMP)Neutron bombsAtomically precise manufacturing (APM)AI-assisted production/designOther developments in methods for production/design???Hypersonic missiles/glide vehiclesMore accurate nuclear weaponsLong-range conventional strike capabilitiesBetter detection of nuclear warhead platforms, launchers, and/or delivery vehiclesMissile defense systemsAdvances in AI capabilitiesCyberattack (or defense) capabilitiesAdvances in autonomous weaponsMore integration of AI with NC3 systemsAnti-satellite weapons (ASAT)“Space planes” and other (non-ASAT) space capabilitiesCategoryTechnological DevelopmentImportanceLikelihood / ClosenessSteer-abilityBomb types and production methodsMediumMedium/HighMedium/LowMediumMedium/LowMediumMediumMedium/LowMedium/LowLowMedium/LowMediumMethods for production and designHighLowMediumMedium/HighMedium/LowMediumDelivery systemsMedium/LowMedium/HighMedium/LowMediumMediumMedium/LowMedium/LowMedium/HighMedium/LowDetection and defenseMedium/HighMedium/HighMedium/LowMediumMediumMedium/LowAI and cyberMedium/HighMedium/HighMediumMedium/HighMedium/HighMediumMediumMedium/HighMediumMediumMediumMediumNon-nuclear warmaking advancesMedium/LowMediumMedium/LowMedium/LowMediumMedium/LowNote that:Each “potential technological development” is really more like a somewhat wide area in which a variety of different types and levels of development could occur, which makes the ratings in the above table less meaningful and more ambiguous.“Importance” is here assessed conditional on the development occurring, so will overstate the importance of thinking about or trying to steer unlikely developments.In some cases (e.g, “More accurate nuclear weapons”), the “Importance” score accounts for potential risk-reducing effects as well.“Likelihood/Closeness” is actually inelegantly collapsing together two different things, making our ratings of developments on that criterion less meaningful. E.g., one development could be moderately likely to occur quite soon and moderately likely to occur never, while another is very likely to occur in 15-25 years but not before then.Some of the topics this post discusses involve or are adjacent to information hazards (especially attention hazards), as is the case with much other di...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Technological developments that could increase risks from nuclear weapons: A shallow review, published by MichaelA on February 9, 2023 on The Effective Altruism Forum.This is a blog post, not a research report, meaning it was produced relatively quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.SummaryThis post is a shallow exploration of some technological developments that might occur and might increase risks from nuclear weapons - especially existential risk or other risks to the long-term future. This is one of many questions relevant to how much to prioritize nuclear risk relative to other issues, what risks and interventions to prioritize within the nuclear risk area, and how that should change in future. But note that, due to time constraints, this post isn’t comprehensive and was less thoroughly researched and reviewed than we’d like.For each potential development, we provide some very quick, rough guesses about how much and in what ways the development would affect the odds and consequences of nuclear conflict (“Importance”), the likelihood of the development in the coming decade or decades (“Likelihood/Closeness”), and how much and in what ways thoughtful altruistic actors could influence whether and how the technology is developed and used (“Steerability”).These tentative bottom line beliefs are summarized in the table below:Radiological weaponsPure fusion bombsHigh-altitude electromagnetic pulse (HEMP)Neutron bombsAtomically precise manufacturing (APM)AI-assisted production/designOther developments in methods for production/design???Hypersonic missiles/glide vehiclesMore accurate nuclear weaponsLong-range conventional strike capabilitiesBetter detection of nuclear warhead platforms, launchers, and/or delivery vehiclesMissile defense systemsAdvances in AI capabilitiesCyberattack (or defense) capabilitiesAdvances in autonomous weaponsMore integration of AI with NC3 systemsAnti-satellite weapons (ASAT)“Space planes” and other (non-ASAT) space capabilitiesCategoryTechnological DevelopmentImportanceLikelihood / ClosenessSteer-abilityBomb types and production methodsMediumMedium/HighMedium/LowMediumMedium/LowMediumMediumMedium/LowMedium/LowLowMedium/LowMediumMethods for production and designHighLowMediumMedium/HighMedium/LowMediumDelivery systemsMedium/LowMedium/HighMedium/LowMediumMediumMedium/LowMedium/LowMedium/HighMedium/LowDetection and defenseMedium/HighMedium/HighMedium/LowMediumMediumMedium/LowAI and cyberMedium/HighMedium/HighMediumMedium/HighMedium/HighMediumMediumMedium/HighMediumMediumMediumMediumNon-nuclear warmaking advancesMedium/LowMediumMedium/LowMedium/LowMediumMedium/LowNote that:Each “potential technological development” is really more like a somewhat wide area in which a variety of different types and levels of development could occur, which makes the ratings in the above table less meaningful and more ambiguous.“Importance” is here assessed conditional on the development occurring, so will overstate the importance of thinking about or trying to steer unlikely developments.In some cases (e.g, “More accurate nuclear weapons”), the “Importance” score accounts for potential risk-reducing effects as well.“Likelihood/Closeness” is actually inelegantly collapsing together two different things, making our ratings of developments on that criterion less meaningful. E.g., one development could be moderately likely to occur quite soon and moderately likely to occur never, while another is very likely to occur in 15-25 years but not before then.Some of the topics this post discusses involve or are adjacent to information hazards (especially attention hazards), as is the case with much other di...]]>
MichaelA https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:43 None full 4824
hwyzytrEhdDeoyPzH_NL_EA_EA EA - Apply for Cambridge ML for Alignment Bootcamp (CaMLAB) [26 March - 8 April] by hannah Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply for Cambridge ML for Alignment Bootcamp (CaMLAB) [26 March - 8 April], published by hannah on February 9, 2023 on The Effective Altruism Forum.TL;DR: A two-week machine learning bootcamp this spring in Cambridge, UK, open to global applicants and aimed at providing ML skills for AI alignment. Apply by 26 February to participate or TA.Following a series of machine learning bootcamps earlier this year in Cambridge, Berkeley and Boston, the Cambridge AI Safety Hub is running the next iteration of the Cambridge ML for Alignment Bootcamp (CaMLAB) in spring.This two-week curriculum expects no prior experience with machine learning, although familiarity with Python and understanding of (basic) linear algebra is crucial.The curriculum, based on MLAB, provides a thorough, nuts-and-bolts introduction to the state-of-the-art in ML techniques such as interpretability and reinforcement learning. You’ll be guided through the steps of building various deep learning models, from ResNets to transformers. You’ll come away well-versed in PyTorch and useful complementary frameworks.From Richard Ren, an undergraduate at UPenn who participated in the January camp:The material from the bootcamp was well-prepared and helped me understand how to use PyTorch and einops, as well as how backpropagation and transformers work. The mentorship from the TAs and peers was excellent, and because of their support, I think the time I spent at the camp was at least 3-5x as productive as focused time I would've spent outside of the camp learning the material on my own — propelling me to be able to take graduate-level deep learning classes at my school, read AI safety papers on my own, and giving me the knowledge necessary to pursue serious machine learning research projects.In addition, the benefits of spending two weeks in-person with other motivated and ambitious individuals cannot be overstated: alongside the pedagogical benefits of being paired with another person each day for programming, the conversations which took place around the curriculum were a seedbed for new insights and valuable connections.Richard continues:The mentorship from the TAs, as well as the chance conversations from the people I've met, have had a serious impact on how I'll approach the career path(s) I'm interested in — from meeting an economics Ph.D. (and having my worldview on pursuing a policy career change) to talking with someone who worked at EleutherAI in the Cambridge EA office about various pathways in AI safety. I loved the people I was surrounded with — they were ambitious, driven, kind, emotionally intelligent, and hardworking.Feedback from the end of the previous camp showed that:Participants on average said they would be 93% likely to recommend the bootcamp to a friend or colleague.Everyone found the camp at least as good as expected, with 82% finding it better than expected, and 24% finding it much better than expected.94% of participants found the camp more valuable than the counterfactual use of their time, with 71% finding it much more valuable.In addition, first and second place in Apart Research’s January Mechanistic Interpretability Hackathon were awarded to teams formed from participants and TAs from our January bootcamp.Chris Mathwin, who was part of the runner-up project, writes of the bootcamp:A really formative experience! Great people, great content and truly great support. It was a significantly better use of my time in upskilling in this field than I would have spent elsewhere and I have continued to work with some of my peers afterwards!If you’re interested in participating in the upcoming round of CaMLAB, apply here. If you have substantial ML experience and are interested in being a teaching assistant (TA), apply here. You can find more details below.Schedule & logisticsTh...]]>
hannah https://forum.effectivealtruism.org/posts/hwyzytrEhdDeoyPzH/apply-for-cambridge-ml-for-alignment-bootcamp-camlab-26 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply for Cambridge ML for Alignment Bootcamp (CaMLAB) [26 March - 8 April], published by hannah on February 9, 2023 on The Effective Altruism Forum.TL;DR: A two-week machine learning bootcamp this spring in Cambridge, UK, open to global applicants and aimed at providing ML skills for AI alignment. Apply by 26 February to participate or TA.Following a series of machine learning bootcamps earlier this year in Cambridge, Berkeley and Boston, the Cambridge AI Safety Hub is running the next iteration of the Cambridge ML for Alignment Bootcamp (CaMLAB) in spring.This two-week curriculum expects no prior experience with machine learning, although familiarity with Python and understanding of (basic) linear algebra is crucial.The curriculum, based on MLAB, provides a thorough, nuts-and-bolts introduction to the state-of-the-art in ML techniques such as interpretability and reinforcement learning. You’ll be guided through the steps of building various deep learning models, from ResNets to transformers. You’ll come away well-versed in PyTorch and useful complementary frameworks.From Richard Ren, an undergraduate at UPenn who participated in the January camp:The material from the bootcamp was well-prepared and helped me understand how to use PyTorch and einops, as well as how backpropagation and transformers work. The mentorship from the TAs and peers was excellent, and because of their support, I think the time I spent at the camp was at least 3-5x as productive as focused time I would've spent outside of the camp learning the material on my own — propelling me to be able to take graduate-level deep learning classes at my school, read AI safety papers on my own, and giving me the knowledge necessary to pursue serious machine learning research projects.In addition, the benefits of spending two weeks in-person with other motivated and ambitious individuals cannot be overstated: alongside the pedagogical benefits of being paired with another person each day for programming, the conversations which took place around the curriculum were a seedbed for new insights and valuable connections.Richard continues:The mentorship from the TAs, as well as the chance conversations from the people I've met, have had a serious impact on how I'll approach the career path(s) I'm interested in — from meeting an economics Ph.D. (and having my worldview on pursuing a policy career change) to talking with someone who worked at EleutherAI in the Cambridge EA office about various pathways in AI safety. I loved the people I was surrounded with — they were ambitious, driven, kind, emotionally intelligent, and hardworking.Feedback from the end of the previous camp showed that:Participants on average said they would be 93% likely to recommend the bootcamp to a friend or colleague.Everyone found the camp at least as good as expected, with 82% finding it better than expected, and 24% finding it much better than expected.94% of participants found the camp more valuable than the counterfactual use of their time, with 71% finding it much more valuable.In addition, first and second place in Apart Research’s January Mechanistic Interpretability Hackathon were awarded to teams formed from participants and TAs from our January bootcamp.Chris Mathwin, who was part of the runner-up project, writes of the bootcamp:A really formative experience! Great people, great content and truly great support. It was a significantly better use of my time in upskilling in this field than I would have spent elsewhere and I have continued to work with some of my peers afterwards!If you’re interested in participating in the upcoming round of CaMLAB, apply here. If you have substantial ML experience and are interested in being a teaching assistant (TA), apply here. You can find more details below.Schedule & logisticsTh...]]>
Thu, 09 Feb 2023 22:58:31 +0000 EA - Apply for Cambridge ML for Alignment Bootcamp (CaMLAB) [26 March - 8 April] by hannah Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply for Cambridge ML for Alignment Bootcamp (CaMLAB) [26 March - 8 April], published by hannah on February 9, 2023 on The Effective Altruism Forum.TL;DR: A two-week machine learning bootcamp this spring in Cambridge, UK, open to global applicants and aimed at providing ML skills for AI alignment. Apply by 26 February to participate or TA.Following a series of machine learning bootcamps earlier this year in Cambridge, Berkeley and Boston, the Cambridge AI Safety Hub is running the next iteration of the Cambridge ML for Alignment Bootcamp (CaMLAB) in spring.This two-week curriculum expects no prior experience with machine learning, although familiarity with Python and understanding of (basic) linear algebra is crucial.The curriculum, based on MLAB, provides a thorough, nuts-and-bolts introduction to the state-of-the-art in ML techniques such as interpretability and reinforcement learning. You’ll be guided through the steps of building various deep learning models, from ResNets to transformers. You’ll come away well-versed in PyTorch and useful complementary frameworks.From Richard Ren, an undergraduate at UPenn who participated in the January camp:The material from the bootcamp was well-prepared and helped me understand how to use PyTorch and einops, as well as how backpropagation and transformers work. The mentorship from the TAs and peers was excellent, and because of their support, I think the time I spent at the camp was at least 3-5x as productive as focused time I would've spent outside of the camp learning the material on my own — propelling me to be able to take graduate-level deep learning classes at my school, read AI safety papers on my own, and giving me the knowledge necessary to pursue serious machine learning research projects.In addition, the benefits of spending two weeks in-person with other motivated and ambitious individuals cannot be overstated: alongside the pedagogical benefits of being paired with another person each day for programming, the conversations which took place around the curriculum were a seedbed for new insights and valuable connections.Richard continues:The mentorship from the TAs, as well as the chance conversations from the people I've met, have had a serious impact on how I'll approach the career path(s) I'm interested in — from meeting an economics Ph.D. (and having my worldview on pursuing a policy career change) to talking with someone who worked at EleutherAI in the Cambridge EA office about various pathways in AI safety. I loved the people I was surrounded with — they were ambitious, driven, kind, emotionally intelligent, and hardworking.Feedback from the end of the previous camp showed that:Participants on average said they would be 93% likely to recommend the bootcamp to a friend or colleague.Everyone found the camp at least as good as expected, with 82% finding it better than expected, and 24% finding it much better than expected.94% of participants found the camp more valuable than the counterfactual use of their time, with 71% finding it much more valuable.In addition, first and second place in Apart Research’s January Mechanistic Interpretability Hackathon were awarded to teams formed from participants and TAs from our January bootcamp.Chris Mathwin, who was part of the runner-up project, writes of the bootcamp:A really formative experience! Great people, great content and truly great support. It was a significantly better use of my time in upskilling in this field than I would have spent elsewhere and I have continued to work with some of my peers afterwards!If you’re interested in participating in the upcoming round of CaMLAB, apply here. If you have substantial ML experience and are interested in being a teaching assistant (TA), apply here. You can find more details below.Schedule & logisticsTh...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply for Cambridge ML for Alignment Bootcamp (CaMLAB) [26 March - 8 April], published by hannah on February 9, 2023 on The Effective Altruism Forum.TL;DR: A two-week machine learning bootcamp this spring in Cambridge, UK, open to global applicants and aimed at providing ML skills for AI alignment. Apply by 26 February to participate or TA.Following a series of machine learning bootcamps earlier this year in Cambridge, Berkeley and Boston, the Cambridge AI Safety Hub is running the next iteration of the Cambridge ML for Alignment Bootcamp (CaMLAB) in spring.This two-week curriculum expects no prior experience with machine learning, although familiarity with Python and understanding of (basic) linear algebra is crucial.The curriculum, based on MLAB, provides a thorough, nuts-and-bolts introduction to the state-of-the-art in ML techniques such as interpretability and reinforcement learning. You’ll be guided through the steps of building various deep learning models, from ResNets to transformers. You’ll come away well-versed in PyTorch and useful complementary frameworks.From Richard Ren, an undergraduate at UPenn who participated in the January camp:The material from the bootcamp was well-prepared and helped me understand how to use PyTorch and einops, as well as how backpropagation and transformers work. The mentorship from the TAs and peers was excellent, and because of their support, I think the time I spent at the camp was at least 3-5x as productive as focused time I would've spent outside of the camp learning the material on my own — propelling me to be able to take graduate-level deep learning classes at my school, read AI safety papers on my own, and giving me the knowledge necessary to pursue serious machine learning research projects.In addition, the benefits of spending two weeks in-person with other motivated and ambitious individuals cannot be overstated: alongside the pedagogical benefits of being paired with another person each day for programming, the conversations which took place around the curriculum were a seedbed for new insights and valuable connections.Richard continues:The mentorship from the TAs, as well as the chance conversations from the people I've met, have had a serious impact on how I'll approach the career path(s) I'm interested in — from meeting an economics Ph.D. (and having my worldview on pursuing a policy career change) to talking with someone who worked at EleutherAI in the Cambridge EA office about various pathways in AI safety. I loved the people I was surrounded with — they were ambitious, driven, kind, emotionally intelligent, and hardworking.Feedback from the end of the previous camp showed that:Participants on average said they would be 93% likely to recommend the bootcamp to a friend or colleague.Everyone found the camp at least as good as expected, with 82% finding it better than expected, and 24% finding it much better than expected.94% of participants found the camp more valuable than the counterfactual use of their time, with 71% finding it much more valuable.In addition, first and second place in Apart Research’s January Mechanistic Interpretability Hackathon were awarded to teams formed from participants and TAs from our January bootcamp.Chris Mathwin, who was part of the runner-up project, writes of the bootcamp:A really formative experience! Great people, great content and truly great support. It was a significantly better use of my time in upskilling in this field than I would have spent elsewhere and I have continued to work with some of my peers afterwards!If you’re interested in participating in the upcoming round of CaMLAB, apply here. If you have substantial ML experience and are interested in being a teaching assistant (TA), apply here. You can find more details below.Schedule & logisticsTh...]]>
hannah https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:09 None full 4810
NzkCEqgBDBuPdbuuC_NL_EA_EA EA - Summer Internships at Open Philanthropy - Global Health and Wellbeing (due Feb 26) by ChrisSmith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summer Internships at Open Philanthropy - Global Health and Wellbeing (due Feb 26), published by ChrisSmith on February 9, 2023 on The Effective Altruism Forum.We’re excited to announce that the Global Health and Wellbeing Cause Prioritization team at Open Philanthropy will again be hiring several interns this summer to work with us on research to help pick new causes to support. We think this is a great way for us to grow our capacity, develop research talent, and expand our pipeline for future full-time roles. The key points are:Applications are due at 11:59 PM Pacific on Sunday, February 26, 2022.Applicants must be currently enrolled in a degree program or working in a position that offers externship/secondment opportunities.The internship runs from June 5 to August 11-25 (with limited adjustments based on academic calendars) and is paid ($1,900 per week) and fully remote.We’re open to a wide variety of backgrounds, but expect some of the strongest candidates to be enrolled in master's or doctoral programs in the social sciences.We aim to employ people with many different experiences, perspectives and backgrounds who share our passion for accomplishing as much good as we can. We particularly encourage applications from people of color, self-identified women, non-binary individuals, and people from low and middle income countries.Full details (and a link to the application) are available here and are also copied below.We hope that you’ll apply and share the news with others!About the internshipWe’re looking for students currently enrolled in degree programs (or people whose work offers externship/secondment opportunities) to apply for a research internship from June-August 2023 and help us investigate important questions and causes. We see the internship as a way to grow our capacity, develop promising research talent, and expand our recruiting pipeline for full-time roles down the line. We plan to treat interns as team members working on the team’s core priorities, while also showing them how Open Philanthropy works and helping them build skills important for cause prioritization research.Most projects will take the form of “opinionated, short research briefs” – synthesizing expert opinion, academic research, and prior views to get to a bottom line on an important question or promising area. Examples of projects our interns have taken on in the past include:assessing how greater risk of conflict from climate change should affect our social cost of carbonevaluating tobacco control as an area for high impact grantmakinghelping with a major internal project focused on determining Open Philanthropy’s optimal spending pathWe expect that future interns will work on additional shallow investigations on diverse and potentially highly impactful topics — such as community health workers in low- and lower-middle income countries, the elimination of non-compete clauses from employment contracts, and vaccine adjuvants. Interns will work on multiple projects of different depths in the same way as full-time team members. Specific projects will depend on the team’s needs and the intern’s skills. Interns will report to an existing cause prioritization team member and participate in team meetings and discussions, including presenting their work to the team for feedback.Like the day-to-day work of our full-time Research and Strategy Fellows, the internship would require:Talking to global experts, reviewing reports or academic papers, and working with potential grantees to decide whether a potential cause area is important, neglected, and tractable.Dividing time between gathering new information and synthesizing it into concrete recommendations.Working to get the right answer, not to summarize others’ views. This will require making reasonable judgment calls and be...]]>
ChrisSmith https://forum.effectivealtruism.org/posts/NzkCEqgBDBuPdbuuC/summer-internships-at-open-philanthropy-global-health-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summer Internships at Open Philanthropy - Global Health and Wellbeing (due Feb 26), published by ChrisSmith on February 9, 2023 on The Effective Altruism Forum.We’re excited to announce that the Global Health and Wellbeing Cause Prioritization team at Open Philanthropy will again be hiring several interns this summer to work with us on research to help pick new causes to support. We think this is a great way for us to grow our capacity, develop research talent, and expand our pipeline for future full-time roles. The key points are:Applications are due at 11:59 PM Pacific on Sunday, February 26, 2022.Applicants must be currently enrolled in a degree program or working in a position that offers externship/secondment opportunities.The internship runs from June 5 to August 11-25 (with limited adjustments based on academic calendars) and is paid ($1,900 per week) and fully remote.We’re open to a wide variety of backgrounds, but expect some of the strongest candidates to be enrolled in master's or doctoral programs in the social sciences.We aim to employ people with many different experiences, perspectives and backgrounds who share our passion for accomplishing as much good as we can. We particularly encourage applications from people of color, self-identified women, non-binary individuals, and people from low and middle income countries.Full details (and a link to the application) are available here and are also copied below.We hope that you’ll apply and share the news with others!About the internshipWe’re looking for students currently enrolled in degree programs (or people whose work offers externship/secondment opportunities) to apply for a research internship from June-August 2023 and help us investigate important questions and causes. We see the internship as a way to grow our capacity, develop promising research talent, and expand our recruiting pipeline for full-time roles down the line. We plan to treat interns as team members working on the team’s core priorities, while also showing them how Open Philanthropy works and helping them build skills important for cause prioritization research.Most projects will take the form of “opinionated, short research briefs” – synthesizing expert opinion, academic research, and prior views to get to a bottom line on an important question or promising area. Examples of projects our interns have taken on in the past include:assessing how greater risk of conflict from climate change should affect our social cost of carbonevaluating tobacco control as an area for high impact grantmakinghelping with a major internal project focused on determining Open Philanthropy’s optimal spending pathWe expect that future interns will work on additional shallow investigations on diverse and potentially highly impactful topics — such as community health workers in low- and lower-middle income countries, the elimination of non-compete clauses from employment contracts, and vaccine adjuvants. Interns will work on multiple projects of different depths in the same way as full-time team members. Specific projects will depend on the team’s needs and the intern’s skills. Interns will report to an existing cause prioritization team member and participate in team meetings and discussions, including presenting their work to the team for feedback.Like the day-to-day work of our full-time Research and Strategy Fellows, the internship would require:Talking to global experts, reviewing reports or academic papers, and working with potential grantees to decide whether a potential cause area is important, neglected, and tractable.Dividing time between gathering new information and synthesizing it into concrete recommendations.Working to get the right answer, not to summarize others’ views. This will require making reasonable judgment calls and be...]]>
Thu, 09 Feb 2023 21:53:47 +0000 EA - Summer Internships at Open Philanthropy - Global Health and Wellbeing (due Feb 26) by ChrisSmith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summer Internships at Open Philanthropy - Global Health and Wellbeing (due Feb 26), published by ChrisSmith on February 9, 2023 on The Effective Altruism Forum.We’re excited to announce that the Global Health and Wellbeing Cause Prioritization team at Open Philanthropy will again be hiring several interns this summer to work with us on research to help pick new causes to support. We think this is a great way for us to grow our capacity, develop research talent, and expand our pipeline for future full-time roles. The key points are:Applications are due at 11:59 PM Pacific on Sunday, February 26, 2022.Applicants must be currently enrolled in a degree program or working in a position that offers externship/secondment opportunities.The internship runs from June 5 to August 11-25 (with limited adjustments based on academic calendars) and is paid ($1,900 per week) and fully remote.We’re open to a wide variety of backgrounds, but expect some of the strongest candidates to be enrolled in master's or doctoral programs in the social sciences.We aim to employ people with many different experiences, perspectives and backgrounds who share our passion for accomplishing as much good as we can. We particularly encourage applications from people of color, self-identified women, non-binary individuals, and people from low and middle income countries.Full details (and a link to the application) are available here and are also copied below.We hope that you’ll apply and share the news with others!About the internshipWe’re looking for students currently enrolled in degree programs (or people whose work offers externship/secondment opportunities) to apply for a research internship from June-August 2023 and help us investigate important questions and causes. We see the internship as a way to grow our capacity, develop promising research talent, and expand our recruiting pipeline for full-time roles down the line. We plan to treat interns as team members working on the team’s core priorities, while also showing them how Open Philanthropy works and helping them build skills important for cause prioritization research.Most projects will take the form of “opinionated, short research briefs” – synthesizing expert opinion, academic research, and prior views to get to a bottom line on an important question or promising area. Examples of projects our interns have taken on in the past include:assessing how greater risk of conflict from climate change should affect our social cost of carbonevaluating tobacco control as an area for high impact grantmakinghelping with a major internal project focused on determining Open Philanthropy’s optimal spending pathWe expect that future interns will work on additional shallow investigations on diverse and potentially highly impactful topics — such as community health workers in low- and lower-middle income countries, the elimination of non-compete clauses from employment contracts, and vaccine adjuvants. Interns will work on multiple projects of different depths in the same way as full-time team members. Specific projects will depend on the team’s needs and the intern’s skills. Interns will report to an existing cause prioritization team member and participate in team meetings and discussions, including presenting their work to the team for feedback.Like the day-to-day work of our full-time Research and Strategy Fellows, the internship would require:Talking to global experts, reviewing reports or academic papers, and working with potential grantees to decide whether a potential cause area is important, neglected, and tractable.Dividing time between gathering new information and synthesizing it into concrete recommendations.Working to get the right answer, not to summarize others’ views. This will require making reasonable judgment calls and be...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summer Internships at Open Philanthropy - Global Health and Wellbeing (due Feb 26), published by ChrisSmith on February 9, 2023 on The Effective Altruism Forum.We’re excited to announce that the Global Health and Wellbeing Cause Prioritization team at Open Philanthropy will again be hiring several interns this summer to work with us on research to help pick new causes to support. We think this is a great way for us to grow our capacity, develop research talent, and expand our pipeline for future full-time roles. The key points are:Applications are due at 11:59 PM Pacific on Sunday, February 26, 2022.Applicants must be currently enrolled in a degree program or working in a position that offers externship/secondment opportunities.The internship runs from June 5 to August 11-25 (with limited adjustments based on academic calendars) and is paid ($1,900 per week) and fully remote.We’re open to a wide variety of backgrounds, but expect some of the strongest candidates to be enrolled in master's or doctoral programs in the social sciences.We aim to employ people with many different experiences, perspectives and backgrounds who share our passion for accomplishing as much good as we can. We particularly encourage applications from people of color, self-identified women, non-binary individuals, and people from low and middle income countries.Full details (and a link to the application) are available here and are also copied below.We hope that you’ll apply and share the news with others!About the internshipWe’re looking for students currently enrolled in degree programs (or people whose work offers externship/secondment opportunities) to apply for a research internship from June-August 2023 and help us investigate important questions and causes. We see the internship as a way to grow our capacity, develop promising research talent, and expand our recruiting pipeline for full-time roles down the line. We plan to treat interns as team members working on the team’s core priorities, while also showing them how Open Philanthropy works and helping them build skills important for cause prioritization research.Most projects will take the form of “opinionated, short research briefs” – synthesizing expert opinion, academic research, and prior views to get to a bottom line on an important question or promising area. Examples of projects our interns have taken on in the past include:assessing how greater risk of conflict from climate change should affect our social cost of carbonevaluating tobacco control as an area for high impact grantmakinghelping with a major internal project focused on determining Open Philanthropy’s optimal spending pathWe expect that future interns will work on additional shallow investigations on diverse and potentially highly impactful topics — such as community health workers in low- and lower-middle income countries, the elimination of non-compete clauses from employment contracts, and vaccine adjuvants. Interns will work on multiple projects of different depths in the same way as full-time team members. Specific projects will depend on the team’s needs and the intern’s skills. Interns will report to an existing cause prioritization team member and participate in team meetings and discussions, including presenting their work to the team for feedback.Like the day-to-day work of our full-time Research and Strategy Fellows, the internship would require:Talking to global experts, reviewing reports or academic papers, and working with potential grantees to decide whether a potential cause area is important, neglected, and tractable.Dividing time between gathering new information and synthesizing it into concrete recommendations.Working to get the right answer, not to summarize others’ views. This will require making reasonable judgment calls and be...]]>
ChrisSmith https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:33 None full 4808
yoAuqGEvJ2Gss5YtK_NL_EA_EA EA - Hardening pharmaceutical response to pandemics: concrete project seeks project lead by Joel Becker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hardening pharmaceutical response to pandemics: concrete project seeks project lead, published by Joel Becker on February 9, 2023 on The Effective Altruism Forum.TL;DRWe are looking for a collaborator to explore projects aiming to harden pharmaceutical infrastructure.We would figure out up to 3 months of full-time equivalent funding, at up to 100% of ‘EA-salary,’ with the duration of the project ranging from 3 to 12 months based on the lead's preferences. The successful candidate would be responsible for (1) refining/adapting our MVP proposal, (2) finding an external partner, and (3) securing additional funding.Express your interest here!Pharmaceutical response needs hardeningCertain pandemic scenarios may lead to severe compromises to pharmaceutical response. Such compromises would increase the likelihood of far worse pandemic outcomes.Governments already expend significant resources to protect command and control, military response, and other capabilities against more traditional threats. By comparison, the pharmaceutical response capability is relatively unprotected.Meanwhile, we believe that there are high-impact, relatively low-cost measures that can be taken to better harden existing facilities against these threats.We have the beginnings of a planSee here.We have also interacted with a couple of (technologically and politically) relevant stakeholders, who have expressed interest in seeing this project move forward.We need someone to run with this + partner organizationOur first blocker is a person to drive this project forward. While we can act as involved advisors, we won’t have the capacity to work on this full-time over the next 2-3 years.The ideal candidate would have the following attributes, in descending order of importance. (Please don’t be discouraged from submitting the interest form if you don’t fit all of the criteria!)Share core vision/be enthusiastic about taking responsibility for success of project,Experience successfully leading projects, Excellent communicator, planner, team-builder,Skilled at building relationships with and managing external stakeholders,Expertise in biosecurity or related fields, and/orExisting connections with relevant stakeholders.We are open to commitments from 1 day per week to full-time.Our second blocker is a partner organization with whom we can develop, iterate, and implement our ideas in the real world.That person could be youSo let us know if you’re interested here!We also welcome suggestions about the substance of the idea, or about getting more buy-in from important stakeholders.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joel Becker https://forum.effectivealtruism.org/posts/yoAuqGEvJ2Gss5YtK/hardening-pharmaceutical-response-to-pandemics-concrete Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hardening pharmaceutical response to pandemics: concrete project seeks project lead, published by Joel Becker on February 9, 2023 on The Effective Altruism Forum.TL;DRWe are looking for a collaborator to explore projects aiming to harden pharmaceutical infrastructure.We would figure out up to 3 months of full-time equivalent funding, at up to 100% of ‘EA-salary,’ with the duration of the project ranging from 3 to 12 months based on the lead's preferences. The successful candidate would be responsible for (1) refining/adapting our MVP proposal, (2) finding an external partner, and (3) securing additional funding.Express your interest here!Pharmaceutical response needs hardeningCertain pandemic scenarios may lead to severe compromises to pharmaceutical response. Such compromises would increase the likelihood of far worse pandemic outcomes.Governments already expend significant resources to protect command and control, military response, and other capabilities against more traditional threats. By comparison, the pharmaceutical response capability is relatively unprotected.Meanwhile, we believe that there are high-impact, relatively low-cost measures that can be taken to better harden existing facilities against these threats.We have the beginnings of a planSee here.We have also interacted with a couple of (technologically and politically) relevant stakeholders, who have expressed interest in seeing this project move forward.We need someone to run with this + partner organizationOur first blocker is a person to drive this project forward. While we can act as involved advisors, we won’t have the capacity to work on this full-time over the next 2-3 years.The ideal candidate would have the following attributes, in descending order of importance. (Please don’t be discouraged from submitting the interest form if you don’t fit all of the criteria!)Share core vision/be enthusiastic about taking responsibility for success of project,Experience successfully leading projects, Excellent communicator, planner, team-builder,Skilled at building relationships with and managing external stakeholders,Expertise in biosecurity or related fields, and/orExisting connections with relevant stakeholders.We are open to commitments from 1 day per week to full-time.Our second blocker is a partner organization with whom we can develop, iterate, and implement our ideas in the real world.That person could be youSo let us know if you’re interested here!We also welcome suggestions about the substance of the idea, or about getting more buy-in from important stakeholders.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 09 Feb 2023 21:09:44 +0000 EA - Hardening pharmaceutical response to pandemics: concrete project seeks project lead by Joel Becker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hardening pharmaceutical response to pandemics: concrete project seeks project lead, published by Joel Becker on February 9, 2023 on The Effective Altruism Forum.TL;DRWe are looking for a collaborator to explore projects aiming to harden pharmaceutical infrastructure.We would figure out up to 3 months of full-time equivalent funding, at up to 100% of ‘EA-salary,’ with the duration of the project ranging from 3 to 12 months based on the lead's preferences. The successful candidate would be responsible for (1) refining/adapting our MVP proposal, (2) finding an external partner, and (3) securing additional funding.Express your interest here!Pharmaceutical response needs hardeningCertain pandemic scenarios may lead to severe compromises to pharmaceutical response. Such compromises would increase the likelihood of far worse pandemic outcomes.Governments already expend significant resources to protect command and control, military response, and other capabilities against more traditional threats. By comparison, the pharmaceutical response capability is relatively unprotected.Meanwhile, we believe that there are high-impact, relatively low-cost measures that can be taken to better harden existing facilities against these threats.We have the beginnings of a planSee here.We have also interacted with a couple of (technologically and politically) relevant stakeholders, who have expressed interest in seeing this project move forward.We need someone to run with this + partner organizationOur first blocker is a person to drive this project forward. While we can act as involved advisors, we won’t have the capacity to work on this full-time over the next 2-3 years.The ideal candidate would have the following attributes, in descending order of importance. (Please don’t be discouraged from submitting the interest form if you don’t fit all of the criteria!)Share core vision/be enthusiastic about taking responsibility for success of project,Experience successfully leading projects, Excellent communicator, planner, team-builder,Skilled at building relationships with and managing external stakeholders,Expertise in biosecurity or related fields, and/orExisting connections with relevant stakeholders.We are open to commitments from 1 day per week to full-time.Our second blocker is a partner organization with whom we can develop, iterate, and implement our ideas in the real world.That person could be youSo let us know if you’re interested here!We also welcome suggestions about the substance of the idea, or about getting more buy-in from important stakeholders.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hardening pharmaceutical response to pandemics: concrete project seeks project lead, published by Joel Becker on February 9, 2023 on The Effective Altruism Forum.TL;DRWe are looking for a collaborator to explore projects aiming to harden pharmaceutical infrastructure.We would figure out up to 3 months of full-time equivalent funding, at up to 100% of ‘EA-salary,’ with the duration of the project ranging from 3 to 12 months based on the lead's preferences. The successful candidate would be responsible for (1) refining/adapting our MVP proposal, (2) finding an external partner, and (3) securing additional funding.Express your interest here!Pharmaceutical response needs hardeningCertain pandemic scenarios may lead to severe compromises to pharmaceutical response. Such compromises would increase the likelihood of far worse pandemic outcomes.Governments already expend significant resources to protect command and control, military response, and other capabilities against more traditional threats. By comparison, the pharmaceutical response capability is relatively unprotected.Meanwhile, we believe that there are high-impact, relatively low-cost measures that can be taken to better harden existing facilities against these threats.We have the beginnings of a planSee here.We have also interacted with a couple of (technologically and politically) relevant stakeholders, who have expressed interest in seeing this project move forward.We need someone to run with this + partner organizationOur first blocker is a person to drive this project forward. While we can act as involved advisors, we won’t have the capacity to work on this full-time over the next 2-3 years.The ideal candidate would have the following attributes, in descending order of importance. (Please don’t be discouraged from submitting the interest form if you don’t fit all of the criteria!)Share core vision/be enthusiastic about taking responsibility for success of project,Experience successfully leading projects, Excellent communicator, planner, team-builder,Skilled at building relationships with and managing external stakeholders,Expertise in biosecurity or related fields, and/orExisting connections with relevant stakeholders.We are open to commitments from 1 day per week to full-time.Our second blocker is a partner organization with whom we can develop, iterate, and implement our ideas in the real world.That person could be youSo let us know if you’re interested here!We also welcome suggestions about the substance of the idea, or about getting more buy-in from important stakeholders.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joel Becker https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:45 None full 4809
a9vfDqofuZwbzpKvw_NL_EA_EA EA - Make CoI Policies Public by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Make CoI Policies Public, published by Jeff Kaufman on February 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/a9vfDqofuZwbzpKvw/make-coi-policies-public Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Make CoI Policies Public, published by Jeff Kaufman on February 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 09 Feb 2023 20:00:56 +0000 EA - Make CoI Policies Public by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Make CoI Policies Public, published by Jeff Kaufman on February 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Make CoI Policies Public, published by Jeff Kaufman on February 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:24 None full 4807
Y9ELNXmLDSDi8Z6RX_NL_EA_EA EA - In (mild) defence of the social/professional overlap in EA by Amber Dawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In (mild) defence of the social/professional overlap in EA, published by Amber Dawn on February 9, 2023 on The Effective Altruism Forum.The EA community has both a professional aspect and a social aspect; sometimes, these overlap. In particular, there are networks of EAs in major hubs who variously date each other, are friends with each other, live together, work at the same organizations, and sometimes make grantmaking or funding decisions related to each other. (I’ll call this the ‘work/social overlap’ throughout).Recently, lots of people have criticised the work/social overlap in EA, blaming it for things like the misogynistic, abusive dynamics described in the recent Time article, or for the FTX collapse. In this post, I talk a bit about why I’m wary of calls to “deal with” or “address” the work/social overlap in some way (this includes, but isn’t limited to, calls to “deal with” polyamory in EA specifically).This is partly because I think that the work/social overlap has some strong benefits, which is why it exists, and the benefits outweigh the drawbacks (though we should certainly try to mitigate the drawbacks as best we can). If you want to get rid of something, you should first try to make sure that it’s not importantly load-bearing first; I think in fact that EA work/social overlap probably is importantly load-bearing in some ways. (I never thought that I’d be invoking a Chesterton’s fence argument in support of polyamorously dating all your coworkers :p but there we go).And it’s partly because I think it’s immoral and harmful to try to prevent people from consensually forming relationships with whom they want, and it’s only a little better if the mechanisms you use to attempt this are supposedly “soft” or “non-coercive”.In Forum tradition, disclosures and caveats first:I’m fully embedded in the EA work/social overlap. I work as a freelancer, mostly with EA clients, some of whom I know socially; I live in London and am friends with many other London EAs; and I’m dating two other EAs.I also want to acknowledge that there are downsides to the work/social overlap - the critics aren’t wrong about their points, they are just missing important other parts of the story. In particular:1. The work/social overlap gives rise to conflicts-of-interest (and this is more of a problem in highly poly communities, because there are just more conflict-of-interest-ish relationships)2. The work/social overlap means that people who are engaged with EA professionally, but not part of the social community, may miss out on opportunities. The recent ‘come hang out in the Bay Area’ push seems to be a tacit acknowledgement that this is, in fact, how it works. It’s good that Bay Area EAs tried to mitigate this by inviting non-locals to visit the Bay and get more plugged in to the in-person social community, but it would be better, in my opinion, if organizations in hubs were less clique-y and more willing to hire people who weren’t part of the social community (and indeed, people who don’t identify as EA at all).3. You’re putting all your eggs in one basket. If you live with EAs, date EAs, are friends with EAs, and work with EAs, it makes the thought of leaving the EA community really hard. This is bad for epistemics: for a while, I found it really hard to think clearly about where I stood on EA philosophical/ideological positions, because I felt so ‘locked in’ to the community in all parts of my life.(I got past this thanks to a fruitful coaching session with Tee Barnett, where he pointed out that my social relationships formed through EA are now much deeper than our EA connection, and when I actually thought about it, I realised that most of my EA friends wouldn’t ditch me or disown me if I came to disagree with EA, so leaving EA wouldn’t really mean losing access to my so...]]>
Amber Dawn https://forum.effectivealtruism.org/posts/Y9ELNXmLDSDi8Z6RX/in-mild-defence-of-the-social-professional-overlap-in-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In (mild) defence of the social/professional overlap in EA, published by Amber Dawn on February 9, 2023 on The Effective Altruism Forum.The EA community has both a professional aspect and a social aspect; sometimes, these overlap. In particular, there are networks of EAs in major hubs who variously date each other, are friends with each other, live together, work at the same organizations, and sometimes make grantmaking or funding decisions related to each other. (I’ll call this the ‘work/social overlap’ throughout).Recently, lots of people have criticised the work/social overlap in EA, blaming it for things like the misogynistic, abusive dynamics described in the recent Time article, or for the FTX collapse. In this post, I talk a bit about why I’m wary of calls to “deal with” or “address” the work/social overlap in some way (this includes, but isn’t limited to, calls to “deal with” polyamory in EA specifically).This is partly because I think that the work/social overlap has some strong benefits, which is why it exists, and the benefits outweigh the drawbacks (though we should certainly try to mitigate the drawbacks as best we can). If you want to get rid of something, you should first try to make sure that it’s not importantly load-bearing first; I think in fact that EA work/social overlap probably is importantly load-bearing in some ways. (I never thought that I’d be invoking a Chesterton’s fence argument in support of polyamorously dating all your coworkers :p but there we go).And it’s partly because I think it’s immoral and harmful to try to prevent people from consensually forming relationships with whom they want, and it’s only a little better if the mechanisms you use to attempt this are supposedly “soft” or “non-coercive”.In Forum tradition, disclosures and caveats first:I’m fully embedded in the EA work/social overlap. I work as a freelancer, mostly with EA clients, some of whom I know socially; I live in London and am friends with many other London EAs; and I’m dating two other EAs.I also want to acknowledge that there are downsides to the work/social overlap - the critics aren’t wrong about their points, they are just missing important other parts of the story. In particular:1. The work/social overlap gives rise to conflicts-of-interest (and this is more of a problem in highly poly communities, because there are just more conflict-of-interest-ish relationships)2. The work/social overlap means that people who are engaged with EA professionally, but not part of the social community, may miss out on opportunities. The recent ‘come hang out in the Bay Area’ push seems to be a tacit acknowledgement that this is, in fact, how it works. It’s good that Bay Area EAs tried to mitigate this by inviting non-locals to visit the Bay and get more plugged in to the in-person social community, but it would be better, in my opinion, if organizations in hubs were less clique-y and more willing to hire people who weren’t part of the social community (and indeed, people who don’t identify as EA at all).3. You’re putting all your eggs in one basket. If you live with EAs, date EAs, are friends with EAs, and work with EAs, it makes the thought of leaving the EA community really hard. This is bad for epistemics: for a while, I found it really hard to think clearly about where I stood on EA philosophical/ideological positions, because I felt so ‘locked in’ to the community in all parts of my life.(I got past this thanks to a fruitful coaching session with Tee Barnett, where he pointed out that my social relationships formed through EA are now much deeper than our EA connection, and when I actually thought about it, I realised that most of my EA friends wouldn’t ditch me or disown me if I came to disagree with EA, so leaving EA wouldn’t really mean losing access to my so...]]>
Thu, 09 Feb 2023 16:52:59 +0000 EA - In (mild) defence of the social/professional overlap in EA by Amber Dawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In (mild) defence of the social/professional overlap in EA, published by Amber Dawn on February 9, 2023 on The Effective Altruism Forum.The EA community has both a professional aspect and a social aspect; sometimes, these overlap. In particular, there are networks of EAs in major hubs who variously date each other, are friends with each other, live together, work at the same organizations, and sometimes make grantmaking or funding decisions related to each other. (I’ll call this the ‘work/social overlap’ throughout).Recently, lots of people have criticised the work/social overlap in EA, blaming it for things like the misogynistic, abusive dynamics described in the recent Time article, or for the FTX collapse. In this post, I talk a bit about why I’m wary of calls to “deal with” or “address” the work/social overlap in some way (this includes, but isn’t limited to, calls to “deal with” polyamory in EA specifically).This is partly because I think that the work/social overlap has some strong benefits, which is why it exists, and the benefits outweigh the drawbacks (though we should certainly try to mitigate the drawbacks as best we can). If you want to get rid of something, you should first try to make sure that it’s not importantly load-bearing first; I think in fact that EA work/social overlap probably is importantly load-bearing in some ways. (I never thought that I’d be invoking a Chesterton’s fence argument in support of polyamorously dating all your coworkers :p but there we go).And it’s partly because I think it’s immoral and harmful to try to prevent people from consensually forming relationships with whom they want, and it’s only a little better if the mechanisms you use to attempt this are supposedly “soft” or “non-coercive”.In Forum tradition, disclosures and caveats first:I’m fully embedded in the EA work/social overlap. I work as a freelancer, mostly with EA clients, some of whom I know socially; I live in London and am friends with many other London EAs; and I’m dating two other EAs.I also want to acknowledge that there are downsides to the work/social overlap - the critics aren’t wrong about their points, they are just missing important other parts of the story. In particular:1. The work/social overlap gives rise to conflicts-of-interest (and this is more of a problem in highly poly communities, because there are just more conflict-of-interest-ish relationships)2. The work/social overlap means that people who are engaged with EA professionally, but not part of the social community, may miss out on opportunities. The recent ‘come hang out in the Bay Area’ push seems to be a tacit acknowledgement that this is, in fact, how it works. It’s good that Bay Area EAs tried to mitigate this by inviting non-locals to visit the Bay and get more plugged in to the in-person social community, but it would be better, in my opinion, if organizations in hubs were less clique-y and more willing to hire people who weren’t part of the social community (and indeed, people who don’t identify as EA at all).3. You’re putting all your eggs in one basket. If you live with EAs, date EAs, are friends with EAs, and work with EAs, it makes the thought of leaving the EA community really hard. This is bad for epistemics: for a while, I found it really hard to think clearly about where I stood on EA philosophical/ideological positions, because I felt so ‘locked in’ to the community in all parts of my life.(I got past this thanks to a fruitful coaching session with Tee Barnett, where he pointed out that my social relationships formed through EA are now much deeper than our EA connection, and when I actually thought about it, I realised that most of my EA friends wouldn’t ditch me or disown me if I came to disagree with EA, so leaving EA wouldn’t really mean losing access to my so...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In (mild) defence of the social/professional overlap in EA, published by Amber Dawn on February 9, 2023 on The Effective Altruism Forum.The EA community has both a professional aspect and a social aspect; sometimes, these overlap. In particular, there are networks of EAs in major hubs who variously date each other, are friends with each other, live together, work at the same organizations, and sometimes make grantmaking or funding decisions related to each other. (I’ll call this the ‘work/social overlap’ throughout).Recently, lots of people have criticised the work/social overlap in EA, blaming it for things like the misogynistic, abusive dynamics described in the recent Time article, or for the FTX collapse. In this post, I talk a bit about why I’m wary of calls to “deal with” or “address” the work/social overlap in some way (this includes, but isn’t limited to, calls to “deal with” polyamory in EA specifically).This is partly because I think that the work/social overlap has some strong benefits, which is why it exists, and the benefits outweigh the drawbacks (though we should certainly try to mitigate the drawbacks as best we can). If you want to get rid of something, you should first try to make sure that it’s not importantly load-bearing first; I think in fact that EA work/social overlap probably is importantly load-bearing in some ways. (I never thought that I’d be invoking a Chesterton’s fence argument in support of polyamorously dating all your coworkers :p but there we go).And it’s partly because I think it’s immoral and harmful to try to prevent people from consensually forming relationships with whom they want, and it’s only a little better if the mechanisms you use to attempt this are supposedly “soft” or “non-coercive”.In Forum tradition, disclosures and caveats first:I’m fully embedded in the EA work/social overlap. I work as a freelancer, mostly with EA clients, some of whom I know socially; I live in London and am friends with many other London EAs; and I’m dating two other EAs.I also want to acknowledge that there are downsides to the work/social overlap - the critics aren’t wrong about their points, they are just missing important other parts of the story. In particular:1. The work/social overlap gives rise to conflicts-of-interest (and this is more of a problem in highly poly communities, because there are just more conflict-of-interest-ish relationships)2. The work/social overlap means that people who are engaged with EA professionally, but not part of the social community, may miss out on opportunities. The recent ‘come hang out in the Bay Area’ push seems to be a tacit acknowledgement that this is, in fact, how it works. It’s good that Bay Area EAs tried to mitigate this by inviting non-locals to visit the Bay and get more plugged in to the in-person social community, but it would be better, in my opinion, if organizations in hubs were less clique-y and more willing to hire people who weren’t part of the social community (and indeed, people who don’t identify as EA at all).3. You’re putting all your eggs in one basket. If you live with EAs, date EAs, are friends with EAs, and work with EAs, it makes the thought of leaving the EA community really hard. This is bad for epistemics: for a while, I found it really hard to think clearly about where I stood on EA philosophical/ideological positions, because I felt so ‘locked in’ to the community in all parts of my life.(I got past this thanks to a fruitful coaching session with Tee Barnett, where he pointed out that my social relationships formed through EA are now much deeper than our EA connection, and when I actually thought about it, I realised that most of my EA friends wouldn’t ditch me or disown me if I came to disagree with EA, so leaving EA wouldn’t really mean losing access to my so...]]>
Amber Dawn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:56 None full 4811
yBPyByccETmHmaByn_NL_EA_EA EA - Could 80,000 Hours’ messaging be discouraging for people not working on x-risk / longtermism? by Mack the Knife Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Could 80,000 Hours’ messaging be discouraging for people not working on x-risk / longtermism?, published by Mack the Knife on February 9, 2023 on The Effective Altruism Forum.Disclaimer: I understand the concept and concerns around x-risk and longtermism very well. Nevertheless, I personally have no interest in working in these areas, as I find others more important and suitable for myself.Edit: I want to make it clear that I understand why 80,000 Hours ranks career paths and that I certainly think they should advocate the cause areas they deem most important. After all, that's core EA. My critique is not about the "what", but about the "how" of their messaging - which, I fear, might discourage people motivated by so called "less important" cause areas to stop engaging or engage less with EA, thereby doing less good than they could.In a recent newsletter on "Is the world getting better or worse?", 80,000 Hours presents several problem areas that still exist in the world today - especially factory farming and, of course, x-risk. Although a career in animal welfare is admittedly mentioned, the conclusion is: "We think that the problem you work on in your career is the biggest driver of your impact. And we think that these existential risks are the biggest problems we currently face.”Bam. The biggest. And we're done. As is so often the case. At least according to my impression: 80,000 Hours pushes x-risk and longtermism, and other cause areas many EAs (and other people) are working hard and effectively on are actively cornered, not mentioned at all or described as "sometimes recommended" or as "less pressing than our highest priority areas" - the latter example being nothing less than climate change.Of course, I've become biased by now and notice such examples much more. For me, however, it's not so much the problem that x-risk and longtermism are put on a pedestal - but rather the failure to simultaneously present other career paths (that do a great deal of good and are, in my experience, typically the reason people become involved in EA in the first place) as equally worth pursuing. Instead, the famous list "What are the most pressing world problems?" puts factory farming and global health way at the bottom of the page as "Problems many of our readers prioritise". Readers prioritise these areas, but not 80,000 Hours because they're irrelevant - is that how I'm supposed to understand this?I'm somewhat sceptical about "ranking" something so complex and personal as career paths anyway, but can understand why 80,000 Hours is doing it. However, isn't it also possible to always be outright and unambiguously appreciative of the tremendous amount of good these "other" areas achieve for people, animals and planet?In summary, I have noticed over the last few months how these messages continue to upset me, and also demoralise and demotivate me. Because each of these messages makes me feel bad about the supposedly 'suboptimal' path I've chosen in the past - and like any future efforts (and also donations) in the areas that are close to my heart would not be valuable and I should just leave them altogether. This is further insensified by x-risk / longtermism's (at least perceived) growing singularisation within the entire EA community and large-scale messaging (WWOTF etc.).And so I wanted to ask: What do you think? Are you like me with this? Or do you feel that your (volunteer) work is fully valued in the general presentation of 80,000 Hours (and EA in general), even though you're not focused on x-risk and longtermism?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mack the Knife https://forum.effectivealtruism.org/posts/yBPyByccETmHmaByn/could-80-000-hours-messaging-be-discouraging-for-people-not Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Could 80,000 Hours’ messaging be discouraging for people not working on x-risk / longtermism?, published by Mack the Knife on February 9, 2023 on The Effective Altruism Forum.Disclaimer: I understand the concept and concerns around x-risk and longtermism very well. Nevertheless, I personally have no interest in working in these areas, as I find others more important and suitable for myself.Edit: I want to make it clear that I understand why 80,000 Hours ranks career paths and that I certainly think they should advocate the cause areas they deem most important. After all, that's core EA. My critique is not about the "what", but about the "how" of their messaging - which, I fear, might discourage people motivated by so called "less important" cause areas to stop engaging or engage less with EA, thereby doing less good than they could.In a recent newsletter on "Is the world getting better or worse?", 80,000 Hours presents several problem areas that still exist in the world today - especially factory farming and, of course, x-risk. Although a career in animal welfare is admittedly mentioned, the conclusion is: "We think that the problem you work on in your career is the biggest driver of your impact. And we think that these existential risks are the biggest problems we currently face.”Bam. The biggest. And we're done. As is so often the case. At least according to my impression: 80,000 Hours pushes x-risk and longtermism, and other cause areas many EAs (and other people) are working hard and effectively on are actively cornered, not mentioned at all or described as "sometimes recommended" or as "less pressing than our highest priority areas" - the latter example being nothing less than climate change.Of course, I've become biased by now and notice such examples much more. For me, however, it's not so much the problem that x-risk and longtermism are put on a pedestal - but rather the failure to simultaneously present other career paths (that do a great deal of good and are, in my experience, typically the reason people become involved in EA in the first place) as equally worth pursuing. Instead, the famous list "What are the most pressing world problems?" puts factory farming and global health way at the bottom of the page as "Problems many of our readers prioritise". Readers prioritise these areas, but not 80,000 Hours because they're irrelevant - is that how I'm supposed to understand this?I'm somewhat sceptical about "ranking" something so complex and personal as career paths anyway, but can understand why 80,000 Hours is doing it. However, isn't it also possible to always be outright and unambiguously appreciative of the tremendous amount of good these "other" areas achieve for people, animals and planet?In summary, I have noticed over the last few months how these messages continue to upset me, and also demoralise and demotivate me. Because each of these messages makes me feel bad about the supposedly 'suboptimal' path I've chosen in the past - and like any future efforts (and also donations) in the areas that are close to my heart would not be valuable and I should just leave them altogether. This is further insensified by x-risk / longtermism's (at least perceived) growing singularisation within the entire EA community and large-scale messaging (WWOTF etc.).And so I wanted to ask: What do you think? Are you like me with this? Or do you feel that your (volunteer) work is fully valued in the general presentation of 80,000 Hours (and EA in general), even though you're not focused on x-risk and longtermism?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 09 Feb 2023 14:57:52 +0000 EA - Could 80,000 Hours’ messaging be discouraging for people not working on x-risk / longtermism? by Mack the Knife Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Could 80,000 Hours’ messaging be discouraging for people not working on x-risk / longtermism?, published by Mack the Knife on February 9, 2023 on The Effective Altruism Forum.Disclaimer: I understand the concept and concerns around x-risk and longtermism very well. Nevertheless, I personally have no interest in working in these areas, as I find others more important and suitable for myself.Edit: I want to make it clear that I understand why 80,000 Hours ranks career paths and that I certainly think they should advocate the cause areas they deem most important. After all, that's core EA. My critique is not about the "what", but about the "how" of their messaging - which, I fear, might discourage people motivated by so called "less important" cause areas to stop engaging or engage less with EA, thereby doing less good than they could.In a recent newsletter on "Is the world getting better or worse?", 80,000 Hours presents several problem areas that still exist in the world today - especially factory farming and, of course, x-risk. Although a career in animal welfare is admittedly mentioned, the conclusion is: "We think that the problem you work on in your career is the biggest driver of your impact. And we think that these existential risks are the biggest problems we currently face.”Bam. The biggest. And we're done. As is so often the case. At least according to my impression: 80,000 Hours pushes x-risk and longtermism, and other cause areas many EAs (and other people) are working hard and effectively on are actively cornered, not mentioned at all or described as "sometimes recommended" or as "less pressing than our highest priority areas" - the latter example being nothing less than climate change.Of course, I've become biased by now and notice such examples much more. For me, however, it's not so much the problem that x-risk and longtermism are put on a pedestal - but rather the failure to simultaneously present other career paths (that do a great deal of good and are, in my experience, typically the reason people become involved in EA in the first place) as equally worth pursuing. Instead, the famous list "What are the most pressing world problems?" puts factory farming and global health way at the bottom of the page as "Problems many of our readers prioritise". Readers prioritise these areas, but not 80,000 Hours because they're irrelevant - is that how I'm supposed to understand this?I'm somewhat sceptical about "ranking" something so complex and personal as career paths anyway, but can understand why 80,000 Hours is doing it. However, isn't it also possible to always be outright and unambiguously appreciative of the tremendous amount of good these "other" areas achieve for people, animals and planet?In summary, I have noticed over the last few months how these messages continue to upset me, and also demoralise and demotivate me. Because each of these messages makes me feel bad about the supposedly 'suboptimal' path I've chosen in the past - and like any future efforts (and also donations) in the areas that are close to my heart would not be valuable and I should just leave them altogether. This is further insensified by x-risk / longtermism's (at least perceived) growing singularisation within the entire EA community and large-scale messaging (WWOTF etc.).And so I wanted to ask: What do you think? Are you like me with this? Or do you feel that your (volunteer) work is fully valued in the general presentation of 80,000 Hours (and EA in general), even though you're not focused on x-risk and longtermism?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Could 80,000 Hours’ messaging be discouraging for people not working on x-risk / longtermism?, published by Mack the Knife on February 9, 2023 on The Effective Altruism Forum.Disclaimer: I understand the concept and concerns around x-risk and longtermism very well. Nevertheless, I personally have no interest in working in these areas, as I find others more important and suitable for myself.Edit: I want to make it clear that I understand why 80,000 Hours ranks career paths and that I certainly think they should advocate the cause areas they deem most important. After all, that's core EA. My critique is not about the "what", but about the "how" of their messaging - which, I fear, might discourage people motivated by so called "less important" cause areas to stop engaging or engage less with EA, thereby doing less good than they could.In a recent newsletter on "Is the world getting better or worse?", 80,000 Hours presents several problem areas that still exist in the world today - especially factory farming and, of course, x-risk. Although a career in animal welfare is admittedly mentioned, the conclusion is: "We think that the problem you work on in your career is the biggest driver of your impact. And we think that these existential risks are the biggest problems we currently face.”Bam. The biggest. And we're done. As is so often the case. At least according to my impression: 80,000 Hours pushes x-risk and longtermism, and other cause areas many EAs (and other people) are working hard and effectively on are actively cornered, not mentioned at all or described as "sometimes recommended" or as "less pressing than our highest priority areas" - the latter example being nothing less than climate change.Of course, I've become biased by now and notice such examples much more. For me, however, it's not so much the problem that x-risk and longtermism are put on a pedestal - but rather the failure to simultaneously present other career paths (that do a great deal of good and are, in my experience, typically the reason people become involved in EA in the first place) as equally worth pursuing. Instead, the famous list "What are the most pressing world problems?" puts factory farming and global health way at the bottom of the page as "Problems many of our readers prioritise". Readers prioritise these areas, but not 80,000 Hours because they're irrelevant - is that how I'm supposed to understand this?I'm somewhat sceptical about "ranking" something so complex and personal as career paths anyway, but can understand why 80,000 Hours is doing it. However, isn't it also possible to always be outright and unambiguously appreciative of the tremendous amount of good these "other" areas achieve for people, animals and planet?In summary, I have noticed over the last few months how these messages continue to upset me, and also demoralise and demotivate me. Because each of these messages makes me feel bad about the supposedly 'suboptimal' path I've chosen in the past - and like any future efforts (and also donations) in the areas that are close to my heart would not be valuable and I should just leave them altogether. This is further insensified by x-risk / longtermism's (at least perceived) growing singularisation within the entire EA community and large-scale messaging (WWOTF etc.).And so I wanted to ask: What do you think? Are you like me with this? Or do you feel that your (volunteer) work is fully valued in the general presentation of 80,000 Hours (and EA in general), even though you're not focused on x-risk and longtermism?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mack the Knife https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:33 None full 4812
FPTNhr2PgMEFskZHK_NL_EA_EA EA - Massive Earthquake in Turkey: Comments on the situation from the EA Community in Turkey by Eda Arpacı Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Massive Earthquake in Turkey: Comments on the situation from the EA Community in Turkey, published by Eda Arpacı on February 9, 2023 on The Effective Altruism Forum.From the members of the EA community in TurkeyDear effective altruist friends and community,As you may know, two earthquakes with massive magnitudes of 7.8 and 7.7 occurred in Turkey. The World Health Organization declared a Grade 3 emergency -the highest level of emergency- in the region. Almost 15,000 people lost their lives, more than 60,000 are wounded, and time is running out for the many thousands still waiting to be rescued. The death toll is likely to continue rising, and economic devastation in the affected areas is vast. Given this situation, those of us living in Turkey have been receiving questions about effective and trustworthy organizations to donate to for earthquake relief.Although it is very difficult to know the details of their cost-effectiveness and their precise impact, particularly from an EA perspective, and we cannot fully vouch for them, we can initially recommend the following organizations if you are interested in hearing from the effective altruism community in Turkey about our thoughts on where to donate or if you are already considering donating:Turkish Philanthropy Funds: A US-based foundation that has started an earthquake relief fund. It has matched an additional $1 million for organizations on the ground.AHBAP: One of the most active and well-known voluntary networks in the region.AKUT: A non-governmental organization based in Turkey involved in searching and rescuing the victims of the earthquake on the ground.AFAD: The official disaster and emergency management authority of Turkey.1 USD=18.90TL and 1 EUR=20.40 TL, approximately.Please note that the effective altruism community in Turkey is not affiliated with any of the above organizations. We neither represent them nor receive any funding from them. Please also note that donations to these organizations will be used for earthquake relief in Turkey, and not in Syria, as we only have the necessary context to comment on the situation in Turkey alone.If you have made a donation before or after seeing this post, we would appreciate it if you could email us at bilgi@eaturkiye.org or comment below so that we can have a clearer understanding of the level of support in the community.If you have been affected by the earthquakes in any way and would like to talk about it with one of us from the effective altruism community in Turkey, please feel free to reach out to us privately via bilgi@eaturkiye.org.Feel free to also ask questions in the comments or email at bilgi@eaturkiye.orgThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Eda Arpacı https://forum.effectivealtruism.org/posts/FPTNhr2PgMEFskZHK/massive-earthquake-in-turkey-comments-on-the-situation-from Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Massive Earthquake in Turkey: Comments on the situation from the EA Community in Turkey, published by Eda Arpacı on February 9, 2023 on The Effective Altruism Forum.From the members of the EA community in TurkeyDear effective altruist friends and community,As you may know, two earthquakes with massive magnitudes of 7.8 and 7.7 occurred in Turkey. The World Health Organization declared a Grade 3 emergency -the highest level of emergency- in the region. Almost 15,000 people lost their lives, more than 60,000 are wounded, and time is running out for the many thousands still waiting to be rescued. The death toll is likely to continue rising, and economic devastation in the affected areas is vast. Given this situation, those of us living in Turkey have been receiving questions about effective and trustworthy organizations to donate to for earthquake relief.Although it is very difficult to know the details of their cost-effectiveness and their precise impact, particularly from an EA perspective, and we cannot fully vouch for them, we can initially recommend the following organizations if you are interested in hearing from the effective altruism community in Turkey about our thoughts on where to donate or if you are already considering donating:Turkish Philanthropy Funds: A US-based foundation that has started an earthquake relief fund. It has matched an additional $1 million for organizations on the ground.AHBAP: One of the most active and well-known voluntary networks in the region.AKUT: A non-governmental organization based in Turkey involved in searching and rescuing the victims of the earthquake on the ground.AFAD: The official disaster and emergency management authority of Turkey.1 USD=18.90TL and 1 EUR=20.40 TL, approximately.Please note that the effective altruism community in Turkey is not affiliated with any of the above organizations. We neither represent them nor receive any funding from them. Please also note that donations to these organizations will be used for earthquake relief in Turkey, and not in Syria, as we only have the necessary context to comment on the situation in Turkey alone.If you have made a donation before or after seeing this post, we would appreciate it if you could email us at bilgi@eaturkiye.org or comment below so that we can have a clearer understanding of the level of support in the community.If you have been affected by the earthquakes in any way and would like to talk about it with one of us from the effective altruism community in Turkey, please feel free to reach out to us privately via bilgi@eaturkiye.org.Feel free to also ask questions in the comments or email at bilgi@eaturkiye.orgThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 09 Feb 2023 13:26:34 +0000 EA - Massive Earthquake in Turkey: Comments on the situation from the EA Community in Turkey by Eda Arpacı Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Massive Earthquake in Turkey: Comments on the situation from the EA Community in Turkey, published by Eda Arpacı on February 9, 2023 on The Effective Altruism Forum.From the members of the EA community in TurkeyDear effective altruist friends and community,As you may know, two earthquakes with massive magnitudes of 7.8 and 7.7 occurred in Turkey. The World Health Organization declared a Grade 3 emergency -the highest level of emergency- in the region. Almost 15,000 people lost their lives, more than 60,000 are wounded, and time is running out for the many thousands still waiting to be rescued. The death toll is likely to continue rising, and economic devastation in the affected areas is vast. Given this situation, those of us living in Turkey have been receiving questions about effective and trustworthy organizations to donate to for earthquake relief.Although it is very difficult to know the details of their cost-effectiveness and their precise impact, particularly from an EA perspective, and we cannot fully vouch for them, we can initially recommend the following organizations if you are interested in hearing from the effective altruism community in Turkey about our thoughts on where to donate or if you are already considering donating:Turkish Philanthropy Funds: A US-based foundation that has started an earthquake relief fund. It has matched an additional $1 million for organizations on the ground.AHBAP: One of the most active and well-known voluntary networks in the region.AKUT: A non-governmental organization based in Turkey involved in searching and rescuing the victims of the earthquake on the ground.AFAD: The official disaster and emergency management authority of Turkey.1 USD=18.90TL and 1 EUR=20.40 TL, approximately.Please note that the effective altruism community in Turkey is not affiliated with any of the above organizations. We neither represent them nor receive any funding from them. Please also note that donations to these organizations will be used for earthquake relief in Turkey, and not in Syria, as we only have the necessary context to comment on the situation in Turkey alone.If you have made a donation before or after seeing this post, we would appreciate it if you could email us at bilgi@eaturkiye.org or comment below so that we can have a clearer understanding of the level of support in the community.If you have been affected by the earthquakes in any way and would like to talk about it with one of us from the effective altruism community in Turkey, please feel free to reach out to us privately via bilgi@eaturkiye.org.Feel free to also ask questions in the comments or email at bilgi@eaturkiye.orgThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Massive Earthquake in Turkey: Comments on the situation from the EA Community in Turkey, published by Eda Arpacı on February 9, 2023 on The Effective Altruism Forum.From the members of the EA community in TurkeyDear effective altruist friends and community,As you may know, two earthquakes with massive magnitudes of 7.8 and 7.7 occurred in Turkey. The World Health Organization declared a Grade 3 emergency -the highest level of emergency- in the region. Almost 15,000 people lost their lives, more than 60,000 are wounded, and time is running out for the many thousands still waiting to be rescued. The death toll is likely to continue rising, and economic devastation in the affected areas is vast. Given this situation, those of us living in Turkey have been receiving questions about effective and trustworthy organizations to donate to for earthquake relief.Although it is very difficult to know the details of their cost-effectiveness and their precise impact, particularly from an EA perspective, and we cannot fully vouch for them, we can initially recommend the following organizations if you are interested in hearing from the effective altruism community in Turkey about our thoughts on where to donate or if you are already considering donating:Turkish Philanthropy Funds: A US-based foundation that has started an earthquake relief fund. It has matched an additional $1 million for organizations on the ground.AHBAP: One of the most active and well-known voluntary networks in the region.AKUT: A non-governmental organization based in Turkey involved in searching and rescuing the victims of the earthquake on the ground.AFAD: The official disaster and emergency management authority of Turkey.1 USD=18.90TL and 1 EUR=20.40 TL, approximately.Please note that the effective altruism community in Turkey is not affiliated with any of the above organizations. We neither represent them nor receive any funding from them. Please also note that donations to these organizations will be used for earthquake relief in Turkey, and not in Syria, as we only have the necessary context to comment on the situation in Turkey alone.If you have made a donation before or after seeing this post, we would appreciate it if you could email us at bilgi@eaturkiye.org or comment below so that we can have a clearer understanding of the level of support in the community.If you have been affected by the earthquakes in any way and would like to talk about it with one of us from the effective altruism community in Turkey, please feel free to reach out to us privately via bilgi@eaturkiye.org.Feel free to also ask questions in the comments or email at bilgi@eaturkiye.orgThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Eda Arpacı https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:44 None full 4813
54T4YmxZm3Ht4fuMA_NL_EA_EA EA - Deconfusion Part 3 - EA Community and Social Structure by Davidmanheim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deconfusion Part 3 - EA Community and Social Structure, published by Davidmanheim on February 9, 2023 on The Effective Altruism Forum.This is part 3 of my attempt to disentangle and clarify some parts of what comprises Effective Altruism, in this case, the community. As I’ve written earlier in this series, EA is first a normative philosophical position that is near-universal, as well as some widely accepted ideas about maximizing good that are compatible with most moral positions. It’s also, as I wrote in the second post, a set of causes, in many cases contingent on very unclear or deeply debated philosophical claims, and a set of associated ideas which inform specific funding and prioritization decisions, but which are not necessary parts of the philosophy, yet are accepted by (most of) the community for other reasons.The community itself, however, is a part of Effective Altruism as an applied philosophy, for two reasons. The first, as noted above, is that it impacts the prioritization and funding decisions. It affects them both because of philosophical, political, and similar factors belonging to those within the community, and because of directly social factors, such as knowledge of projects, the benefits of interpersonal trust, and the far less beneficial conflicts of interest that occur. The second is that EA promotes community building as itself a cause area, as a way to build the number of people donating and directly working on other high-priority cause areas.Note: The posts in this sequence are intended primarily as descriptive and diagnostic, to help me, and hopefully readers, make sense of Effective Altruism. EA is important, but even if you actually think it’s “the most wonderful idea ever,” we still want to avoid a Happy Death Spiral. Ideally, a scout mindset will allow us to separate different parts of EA, and feel comfortable accepting some things and rejecting others, or assisting people in keeping identity small but still embrace ideas. That said, I have views on different aspects of the community, and I’m not a purely disinterested writer, so some of my views are going to be present in this attempt at dispassionate analysis - I've tried to keep those to the footnotes.What is the community? (Or, what are the communities?)This history of Effective Altruism involves a confluence of different groups which overlap or are parallel. A complete history is beyond the scope of this post. On the other hand, it’s clear that there was a lot happening. Utilitarian philosophers started with doing good, as I outlined in the first post, but animal rights activists pushed for taking animal suffering seriously, financial analyst donors pushed for charity evaluations, extropians pushed for a glorious transhuman future, economists pushed for RCTs, rationalists pushed for bayesian viewpoints, libertarians pushed for distrusting government, and so on. And in almost all cases I’m aware of, central people in effective altruism belonged to several of these groups simultaneously.Despite the overlap, at a high level some of the key groups in EA as it evolved are the utilitarian philosophers centered in Oxford, the global health economists, the Lesswrong rationalists and AI-safety groups centered in the Bay, and the biorisk community. Less central but at some-point relevant or related groups are the George Mason libertarians, the animal suffering activists, former extropians and transhumanists, the EA meme groups, the right-wing and trad Lesswrong splinter groups, the leftist AI fairness academics, the polyamory crowd, the progress studies movement, the democratic party funders and analysts, post-rationalist mystics, and AI safety researchers. Getting into the relationship between all of these groups is several careers worth of research and writing as a modest start, but ...]]>
Davidmanheim https://forum.effectivealtruism.org/posts/54T4YmxZm3Ht4fuMA/deconfusion-part-3-ea-community-and-social-structure Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deconfusion Part 3 - EA Community and Social Structure, published by Davidmanheim on February 9, 2023 on The Effective Altruism Forum.This is part 3 of my attempt to disentangle and clarify some parts of what comprises Effective Altruism, in this case, the community. As I’ve written earlier in this series, EA is first a normative philosophical position that is near-universal, as well as some widely accepted ideas about maximizing good that are compatible with most moral positions. It’s also, as I wrote in the second post, a set of causes, in many cases contingent on very unclear or deeply debated philosophical claims, and a set of associated ideas which inform specific funding and prioritization decisions, but which are not necessary parts of the philosophy, yet are accepted by (most of) the community for other reasons.The community itself, however, is a part of Effective Altruism as an applied philosophy, for two reasons. The first, as noted above, is that it impacts the prioritization and funding decisions. It affects them both because of philosophical, political, and similar factors belonging to those within the community, and because of directly social factors, such as knowledge of projects, the benefits of interpersonal trust, and the far less beneficial conflicts of interest that occur. The second is that EA promotes community building as itself a cause area, as a way to build the number of people donating and directly working on other high-priority cause areas.Note: The posts in this sequence are intended primarily as descriptive and diagnostic, to help me, and hopefully readers, make sense of Effective Altruism. EA is important, but even if you actually think it’s “the most wonderful idea ever,” we still want to avoid a Happy Death Spiral. Ideally, a scout mindset will allow us to separate different parts of EA, and feel comfortable accepting some things and rejecting others, or assisting people in keeping identity small but still embrace ideas. That said, I have views on different aspects of the community, and I’m not a purely disinterested writer, so some of my views are going to be present in this attempt at dispassionate analysis - I've tried to keep those to the footnotes.What is the community? (Or, what are the communities?)This history of Effective Altruism involves a confluence of different groups which overlap or are parallel. A complete history is beyond the scope of this post. On the other hand, it’s clear that there was a lot happening. Utilitarian philosophers started with doing good, as I outlined in the first post, but animal rights activists pushed for taking animal suffering seriously, financial analyst donors pushed for charity evaluations, extropians pushed for a glorious transhuman future, economists pushed for RCTs, rationalists pushed for bayesian viewpoints, libertarians pushed for distrusting government, and so on. And in almost all cases I’m aware of, central people in effective altruism belonged to several of these groups simultaneously.Despite the overlap, at a high level some of the key groups in EA as it evolved are the utilitarian philosophers centered in Oxford, the global health economists, the Lesswrong rationalists and AI-safety groups centered in the Bay, and the biorisk community. Less central but at some-point relevant or related groups are the George Mason libertarians, the animal suffering activists, former extropians and transhumanists, the EA meme groups, the right-wing and trad Lesswrong splinter groups, the leftist AI fairness academics, the polyamory crowd, the progress studies movement, the democratic party funders and analysts, post-rationalist mystics, and AI safety researchers. Getting into the relationship between all of these groups is several careers worth of research and writing as a modest start, but ...]]>
Thu, 09 Feb 2023 10:10:32 +0000 EA - Deconfusion Part 3 - EA Community and Social Structure by Davidmanheim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deconfusion Part 3 - EA Community and Social Structure, published by Davidmanheim on February 9, 2023 on The Effective Altruism Forum.This is part 3 of my attempt to disentangle and clarify some parts of what comprises Effective Altruism, in this case, the community. As I’ve written earlier in this series, EA is first a normative philosophical position that is near-universal, as well as some widely accepted ideas about maximizing good that are compatible with most moral positions. It’s also, as I wrote in the second post, a set of causes, in many cases contingent on very unclear or deeply debated philosophical claims, and a set of associated ideas which inform specific funding and prioritization decisions, but which are not necessary parts of the philosophy, yet are accepted by (most of) the community for other reasons.The community itself, however, is a part of Effective Altruism as an applied philosophy, for two reasons. The first, as noted above, is that it impacts the prioritization and funding decisions. It affects them both because of philosophical, political, and similar factors belonging to those within the community, and because of directly social factors, such as knowledge of projects, the benefits of interpersonal trust, and the far less beneficial conflicts of interest that occur. The second is that EA promotes community building as itself a cause area, as a way to build the number of people donating and directly working on other high-priority cause areas.Note: The posts in this sequence are intended primarily as descriptive and diagnostic, to help me, and hopefully readers, make sense of Effective Altruism. EA is important, but even if you actually think it’s “the most wonderful idea ever,” we still want to avoid a Happy Death Spiral. Ideally, a scout mindset will allow us to separate different parts of EA, and feel comfortable accepting some things and rejecting others, or assisting people in keeping identity small but still embrace ideas. That said, I have views on different aspects of the community, and I’m not a purely disinterested writer, so some of my views are going to be present in this attempt at dispassionate analysis - I've tried to keep those to the footnotes.What is the community? (Or, what are the communities?)This history of Effective Altruism involves a confluence of different groups which overlap or are parallel. A complete history is beyond the scope of this post. On the other hand, it’s clear that there was a lot happening. Utilitarian philosophers started with doing good, as I outlined in the first post, but animal rights activists pushed for taking animal suffering seriously, financial analyst donors pushed for charity evaluations, extropians pushed for a glorious transhuman future, economists pushed for RCTs, rationalists pushed for bayesian viewpoints, libertarians pushed for distrusting government, and so on. And in almost all cases I’m aware of, central people in effective altruism belonged to several of these groups simultaneously.Despite the overlap, at a high level some of the key groups in EA as it evolved are the utilitarian philosophers centered in Oxford, the global health economists, the Lesswrong rationalists and AI-safety groups centered in the Bay, and the biorisk community. Less central but at some-point relevant or related groups are the George Mason libertarians, the animal suffering activists, former extropians and transhumanists, the EA meme groups, the right-wing and trad Lesswrong splinter groups, the leftist AI fairness academics, the polyamory crowd, the progress studies movement, the democratic party funders and analysts, post-rationalist mystics, and AI safety researchers. Getting into the relationship between all of these groups is several careers worth of research and writing as a modest start, but ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deconfusion Part 3 - EA Community and Social Structure, published by Davidmanheim on February 9, 2023 on The Effective Altruism Forum.This is part 3 of my attempt to disentangle and clarify some parts of what comprises Effective Altruism, in this case, the community. As I’ve written earlier in this series, EA is first a normative philosophical position that is near-universal, as well as some widely accepted ideas about maximizing good that are compatible with most moral positions. It’s also, as I wrote in the second post, a set of causes, in many cases contingent on very unclear or deeply debated philosophical claims, and a set of associated ideas which inform specific funding and prioritization decisions, but which are not necessary parts of the philosophy, yet are accepted by (most of) the community for other reasons.The community itself, however, is a part of Effective Altruism as an applied philosophy, for two reasons. The first, as noted above, is that it impacts the prioritization and funding decisions. It affects them both because of philosophical, political, and similar factors belonging to those within the community, and because of directly social factors, such as knowledge of projects, the benefits of interpersonal trust, and the far less beneficial conflicts of interest that occur. The second is that EA promotes community building as itself a cause area, as a way to build the number of people donating and directly working on other high-priority cause areas.Note: The posts in this sequence are intended primarily as descriptive and diagnostic, to help me, and hopefully readers, make sense of Effective Altruism. EA is important, but even if you actually think it’s “the most wonderful idea ever,” we still want to avoid a Happy Death Spiral. Ideally, a scout mindset will allow us to separate different parts of EA, and feel comfortable accepting some things and rejecting others, or assisting people in keeping identity small but still embrace ideas. That said, I have views on different aspects of the community, and I’m not a purely disinterested writer, so some of my views are going to be present in this attempt at dispassionate analysis - I've tried to keep those to the footnotes.What is the community? (Or, what are the communities?)This history of Effective Altruism involves a confluence of different groups which overlap or are parallel. A complete history is beyond the scope of this post. On the other hand, it’s clear that there was a lot happening. Utilitarian philosophers started with doing good, as I outlined in the first post, but animal rights activists pushed for taking animal suffering seriously, financial analyst donors pushed for charity evaluations, extropians pushed for a glorious transhuman future, economists pushed for RCTs, rationalists pushed for bayesian viewpoints, libertarians pushed for distrusting government, and so on. And in almost all cases I’m aware of, central people in effective altruism belonged to several of these groups simultaneously.Despite the overlap, at a high level some of the key groups in EA as it evolved are the utilitarian philosophers centered in Oxford, the global health economists, the Lesswrong rationalists and AI-safety groups centered in the Bay, and the biorisk community. Less central but at some-point relevant or related groups are the George Mason libertarians, the animal suffering activists, former extropians and transhumanists, the EA meme groups, the right-wing and trad Lesswrong splinter groups, the leftist AI fairness academics, the polyamory crowd, the progress studies movement, the democratic party funders and analysts, post-rationalist mystics, and AI safety researchers. Getting into the relationship between all of these groups is several careers worth of research and writing as a modest start, but ...]]>
Davidmanheim https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 17:31 None full 4801
psMz4CanKHk5Xya4g_NL_EA_EA EA - Community building: Lessons from ten years of facilitation experience by Severin T. Seehrich Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community building: Lessons from ten years of facilitation experience, published by Severin T. Seehrich on February 8, 2023 on The Effective Altruism Forum.Epistemic status: Anecdotal but strong. Most of this is based on practical experience and things I learned through word-of-mouth.Here, I’d like to present some pieces of facilitation advice I’d give my former self. I’ve selected this list for things that I wouldn’t have found obvious at all, and that became crucial to the way I lead groups. I hope this post helps some of you run even better events.0. About me.I facilitated a number of different gatherings in a number of contexts over the last ten years: Retreats, discussion rounds, reading groups, authentic relating games, communication trainings, a secular solstice, meditation sessions, two sessions of a self-organized Krav Maga study group (my life sometimes takes weird turns), and probably a bunch of other things I forgot.Audiences I have experience with range from teacher trainees over political student groups and Esperantists all the way to EAs and rationalists. I trained with the Ruth Cohn Institute for TCI International and Authentic Revolution. I received mentorship from experienced counseling trainers, Circling/Authentic Relating facilitators, and an Active Hope workshop facilitator.In total, I probably gathered 1000+ hours of facilitation experience.So, here you go for the 80/20 version of what I've learned in these years.1. If you get the beginning right, the group almost leads itself. If not, you are doomed.When we enter a new group, all of us come with a number of implicit questions: Will these people like me? Is it safe here, can I show up with my edges and quirks without getting hurt or exploited? Will this be valuable for me?As a facilitator, it is my task to enable participants to answer these questions for themselves. If I don’t make space for that, the group I lead will be distracted by the unmet needs that underlie these questions: Belonging, safety, meaning.Saving time at the start of a group by ignoring these questions is not effective. Because then, people are only half-engaged with the topic at hand, and (at best) half-distracted by trying to figure out how to answer these questions despite my facilitation, not because of it.So, what can you do to help people answer them?a. Will these people like me?The best strategy depends on a number of factors: The group size, the task at hand, and, first and foremost, the duration and format of the group work. Is it a one-off evening event? A weekly recurring meetup? A weekend retreat? A recurring program with intense contact that runs over several months? The longer the group stays together and the more personal the task at hand, the more investment into trust-building is necessary. Not only upfront, but also along the way.A quick-and-dirty version of trust-building I like to do for shorter one-off events or recurring evenings should contain all of these three elements within the first 30 minutes:i. Greet participants personally and individually..or have a co-facilitator or veteran member of your community do it for you. Bonus points for not needing to glance at name tags, and for making a genuine effort to pronounce their names correctly, regardless of whether you know their native language.Of course, this is not possible in very large groups or online events. In those cases, you can cover part of this function by putting a lot of care into the opening mail, both regarding content and writing style.ii. Enable at least one 1-on-1-interaction with another group member.A short 1-on-1-conversation does wonders for turning strangers into friends. While talking to every group member 1-on-1 is overkill, it is massively grounding for new people in a group to know that they have at least o...]]>
Severin T. Seehrich https://forum.effectivealtruism.org/posts/psMz4CanKHk5Xya4g/community-building-lessons-from-ten-years-of-facilitation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community building: Lessons from ten years of facilitation experience, published by Severin T. Seehrich on February 8, 2023 on The Effective Altruism Forum.Epistemic status: Anecdotal but strong. Most of this is based on practical experience and things I learned through word-of-mouth.Here, I’d like to present some pieces of facilitation advice I’d give my former self. I’ve selected this list for things that I wouldn’t have found obvious at all, and that became crucial to the way I lead groups. I hope this post helps some of you run even better events.0. About me.I facilitated a number of different gatherings in a number of contexts over the last ten years: Retreats, discussion rounds, reading groups, authentic relating games, communication trainings, a secular solstice, meditation sessions, two sessions of a self-organized Krav Maga study group (my life sometimes takes weird turns), and probably a bunch of other things I forgot.Audiences I have experience with range from teacher trainees over political student groups and Esperantists all the way to EAs and rationalists. I trained with the Ruth Cohn Institute for TCI International and Authentic Revolution. I received mentorship from experienced counseling trainers, Circling/Authentic Relating facilitators, and an Active Hope workshop facilitator.In total, I probably gathered 1000+ hours of facilitation experience.So, here you go for the 80/20 version of what I've learned in these years.1. If you get the beginning right, the group almost leads itself. If not, you are doomed.When we enter a new group, all of us come with a number of implicit questions: Will these people like me? Is it safe here, can I show up with my edges and quirks without getting hurt or exploited? Will this be valuable for me?As a facilitator, it is my task to enable participants to answer these questions for themselves. If I don’t make space for that, the group I lead will be distracted by the unmet needs that underlie these questions: Belonging, safety, meaning.Saving time at the start of a group by ignoring these questions is not effective. Because then, people are only half-engaged with the topic at hand, and (at best) half-distracted by trying to figure out how to answer these questions despite my facilitation, not because of it.So, what can you do to help people answer them?a. Will these people like me?The best strategy depends on a number of factors: The group size, the task at hand, and, first and foremost, the duration and format of the group work. Is it a one-off evening event? A weekly recurring meetup? A weekend retreat? A recurring program with intense contact that runs over several months? The longer the group stays together and the more personal the task at hand, the more investment into trust-building is necessary. Not only upfront, but also along the way.A quick-and-dirty version of trust-building I like to do for shorter one-off events or recurring evenings should contain all of these three elements within the first 30 minutes:i. Greet participants personally and individually..or have a co-facilitator or veteran member of your community do it for you. Bonus points for not needing to glance at name tags, and for making a genuine effort to pronounce their names correctly, regardless of whether you know their native language.Of course, this is not possible in very large groups or online events. In those cases, you can cover part of this function by putting a lot of care into the opening mail, both regarding content and writing style.ii. Enable at least one 1-on-1-interaction with another group member.A short 1-on-1-conversation does wonders for turning strangers into friends. While talking to every group member 1-on-1 is overkill, it is massively grounding for new people in a group to know that they have at least o...]]>
Thu, 09 Feb 2023 04:26:58 +0000 EA - Community building: Lessons from ten years of facilitation experience by Severin T. Seehrich Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community building: Lessons from ten years of facilitation experience, published by Severin T. Seehrich on February 8, 2023 on The Effective Altruism Forum.Epistemic status: Anecdotal but strong. Most of this is based on practical experience and things I learned through word-of-mouth.Here, I’d like to present some pieces of facilitation advice I’d give my former self. I’ve selected this list for things that I wouldn’t have found obvious at all, and that became crucial to the way I lead groups. I hope this post helps some of you run even better events.0. About me.I facilitated a number of different gatherings in a number of contexts over the last ten years: Retreats, discussion rounds, reading groups, authentic relating games, communication trainings, a secular solstice, meditation sessions, two sessions of a self-organized Krav Maga study group (my life sometimes takes weird turns), and probably a bunch of other things I forgot.Audiences I have experience with range from teacher trainees over political student groups and Esperantists all the way to EAs and rationalists. I trained with the Ruth Cohn Institute for TCI International and Authentic Revolution. I received mentorship from experienced counseling trainers, Circling/Authentic Relating facilitators, and an Active Hope workshop facilitator.In total, I probably gathered 1000+ hours of facilitation experience.So, here you go for the 80/20 version of what I've learned in these years.1. If you get the beginning right, the group almost leads itself. If not, you are doomed.When we enter a new group, all of us come with a number of implicit questions: Will these people like me? Is it safe here, can I show up with my edges and quirks without getting hurt or exploited? Will this be valuable for me?As a facilitator, it is my task to enable participants to answer these questions for themselves. If I don’t make space for that, the group I lead will be distracted by the unmet needs that underlie these questions: Belonging, safety, meaning.Saving time at the start of a group by ignoring these questions is not effective. Because then, people are only half-engaged with the topic at hand, and (at best) half-distracted by trying to figure out how to answer these questions despite my facilitation, not because of it.So, what can you do to help people answer them?a. Will these people like me?The best strategy depends on a number of factors: The group size, the task at hand, and, first and foremost, the duration and format of the group work. Is it a one-off evening event? A weekly recurring meetup? A weekend retreat? A recurring program with intense contact that runs over several months? The longer the group stays together and the more personal the task at hand, the more investment into trust-building is necessary. Not only upfront, but also along the way.A quick-and-dirty version of trust-building I like to do for shorter one-off events or recurring evenings should contain all of these three elements within the first 30 minutes:i. Greet participants personally and individually..or have a co-facilitator or veteran member of your community do it for you. Bonus points for not needing to glance at name tags, and for making a genuine effort to pronounce their names correctly, regardless of whether you know their native language.Of course, this is not possible in very large groups or online events. In those cases, you can cover part of this function by putting a lot of care into the opening mail, both regarding content and writing style.ii. Enable at least one 1-on-1-interaction with another group member.A short 1-on-1-conversation does wonders for turning strangers into friends. While talking to every group member 1-on-1 is overkill, it is massively grounding for new people in a group to know that they have at least o...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community building: Lessons from ten years of facilitation experience, published by Severin T. Seehrich on February 8, 2023 on The Effective Altruism Forum.Epistemic status: Anecdotal but strong. Most of this is based on practical experience and things I learned through word-of-mouth.Here, I’d like to present some pieces of facilitation advice I’d give my former self. I’ve selected this list for things that I wouldn’t have found obvious at all, and that became crucial to the way I lead groups. I hope this post helps some of you run even better events.0. About me.I facilitated a number of different gatherings in a number of contexts over the last ten years: Retreats, discussion rounds, reading groups, authentic relating games, communication trainings, a secular solstice, meditation sessions, two sessions of a self-organized Krav Maga study group (my life sometimes takes weird turns), and probably a bunch of other things I forgot.Audiences I have experience with range from teacher trainees over political student groups and Esperantists all the way to EAs and rationalists. I trained with the Ruth Cohn Institute for TCI International and Authentic Revolution. I received mentorship from experienced counseling trainers, Circling/Authentic Relating facilitators, and an Active Hope workshop facilitator.In total, I probably gathered 1000+ hours of facilitation experience.So, here you go for the 80/20 version of what I've learned in these years.1. If you get the beginning right, the group almost leads itself. If not, you are doomed.When we enter a new group, all of us come with a number of implicit questions: Will these people like me? Is it safe here, can I show up with my edges and quirks without getting hurt or exploited? Will this be valuable for me?As a facilitator, it is my task to enable participants to answer these questions for themselves. If I don’t make space for that, the group I lead will be distracted by the unmet needs that underlie these questions: Belonging, safety, meaning.Saving time at the start of a group by ignoring these questions is not effective. Because then, people are only half-engaged with the topic at hand, and (at best) half-distracted by trying to figure out how to answer these questions despite my facilitation, not because of it.So, what can you do to help people answer them?a. Will these people like me?The best strategy depends on a number of factors: The group size, the task at hand, and, first and foremost, the duration and format of the group work. Is it a one-off evening event? A weekly recurring meetup? A weekend retreat? A recurring program with intense contact that runs over several months? The longer the group stays together and the more personal the task at hand, the more investment into trust-building is necessary. Not only upfront, but also along the way.A quick-and-dirty version of trust-building I like to do for shorter one-off events or recurring evenings should contain all of these three elements within the first 30 minutes:i. Greet participants personally and individually..or have a co-facilitator or veteran member of your community do it for you. Bonus points for not needing to glance at name tags, and for making a genuine effort to pronounce their names correctly, regardless of whether you know their native language.Of course, this is not possible in very large groups or online events. In those cases, you can cover part of this function by putting a lot of care into the opening mail, both regarding content and writing style.ii. Enable at least one 1-on-1-interaction with another group member.A short 1-on-1-conversation does wonders for turning strangers into friends. While talking to every group member 1-on-1 is overkill, it is massively grounding for new people in a group to know that they have at least o...]]>
Severin T. Seehrich https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 31:10 None full 4806
pMe3pg5H9CPedTYA8_NL_EA_EA EA - EA Giving Tuesday Hibernation by phgubbins Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Giving Tuesday Hibernation, published by phgubbins on February 9, 2023 on The Effective Altruism Forum.This year’s Giving Tuesday was quite different from previous years. Giving What We Can and One For The World were already running a pared-down version of EAGT due to the change in ownership of the project, and once the rules for 2022 were announced, the project was essentially put on hold. Unfortunately, unlike previous years we do not have a retrospective demonstrating impact, instead, we (OFTW and GWWC) are advising that EAGT should be fully hibernated for future years.Here are the high-level details of this year’s match:“To help nonprofits jumpstart their Giving Season fundraising, Meta will match your donors’ recurring donation 100% up to $100 in the next month (up to $100,000 per organization and up to $7 million in total across all organizations). All new recurring donors who start a recurring donation within November 15 - December 31, 2022 are eligible. Read the terms and conditions.”The reasons for hibernating the project include:Smaller potential impact due to new donor limits (previously to a single charity you could do $20k/donor, now only $100/donor).The matching seems to be more of a lottery than first come first serve, so coordination makes less sense (More details per Will Kiely’s comment here; lots of thanks to Will for all his help!)Recurring donations being necessary and potentially indefinite (making the match actually 50% since the second donation is the one being matched), placing strain on regranting.If the rules were to change for future Giving Tuesdays or another matching opportunity comes up that seems to be a good candidate for coordination, GWWC and OFTW would be happy to facilitate volunteers to work on the project, given there is sufficient demand for it. We will archive the work that was done in previous years and can make it available to community members on request.We encourage community members to share donation matching opportunities on the EA Forum, and other spaces where donations get discussed. Giving What We Can tries to share counterfactual donation matching opportunities for effective charities through our normal communication channels.We’d like to thank everyone who organized and participated in EA Giving Tuesday over the years. It's been wonderful to see the community come together to raise money for effective charities.Thank you for your support,The EA Giving Tuesday TeamThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
phgubbins https://forum.effectivealtruism.org/posts/pMe3pg5H9CPedTYA8/ea-giving-tuesday-hibernation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Giving Tuesday Hibernation, published by phgubbins on February 9, 2023 on The Effective Altruism Forum.This year’s Giving Tuesday was quite different from previous years. Giving What We Can and One For The World were already running a pared-down version of EAGT due to the change in ownership of the project, and once the rules for 2022 were announced, the project was essentially put on hold. Unfortunately, unlike previous years we do not have a retrospective demonstrating impact, instead, we (OFTW and GWWC) are advising that EAGT should be fully hibernated for future years.Here are the high-level details of this year’s match:“To help nonprofits jumpstart their Giving Season fundraising, Meta will match your donors’ recurring donation 100% up to $100 in the next month (up to $100,000 per organization and up to $7 million in total across all organizations). All new recurring donors who start a recurring donation within November 15 - December 31, 2022 are eligible. Read the terms and conditions.”The reasons for hibernating the project include:Smaller potential impact due to new donor limits (previously to a single charity you could do $20k/donor, now only $100/donor).The matching seems to be more of a lottery than first come first serve, so coordination makes less sense (More details per Will Kiely’s comment here; lots of thanks to Will for all his help!)Recurring donations being necessary and potentially indefinite (making the match actually 50% since the second donation is the one being matched), placing strain on regranting.If the rules were to change for future Giving Tuesdays or another matching opportunity comes up that seems to be a good candidate for coordination, GWWC and OFTW would be happy to facilitate volunteers to work on the project, given there is sufficient demand for it. We will archive the work that was done in previous years and can make it available to community members on request.We encourage community members to share donation matching opportunities on the EA Forum, and other spaces where donations get discussed. Giving What We Can tries to share counterfactual donation matching opportunities for effective charities through our normal communication channels.We’d like to thank everyone who organized and participated in EA Giving Tuesday over the years. It's been wonderful to see the community come together to raise money for effective charities.Thank you for your support,The EA Giving Tuesday TeamThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 09 Feb 2023 04:18:01 +0000 EA - EA Giving Tuesday Hibernation by phgubbins Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Giving Tuesday Hibernation, published by phgubbins on February 9, 2023 on The Effective Altruism Forum.This year’s Giving Tuesday was quite different from previous years. Giving What We Can and One For The World were already running a pared-down version of EAGT due to the change in ownership of the project, and once the rules for 2022 were announced, the project was essentially put on hold. Unfortunately, unlike previous years we do not have a retrospective demonstrating impact, instead, we (OFTW and GWWC) are advising that EAGT should be fully hibernated for future years.Here are the high-level details of this year’s match:“To help nonprofits jumpstart their Giving Season fundraising, Meta will match your donors’ recurring donation 100% up to $100 in the next month (up to $100,000 per organization and up to $7 million in total across all organizations). All new recurring donors who start a recurring donation within November 15 - December 31, 2022 are eligible. Read the terms and conditions.”The reasons for hibernating the project include:Smaller potential impact due to new donor limits (previously to a single charity you could do $20k/donor, now only $100/donor).The matching seems to be more of a lottery than first come first serve, so coordination makes less sense (More details per Will Kiely’s comment here; lots of thanks to Will for all his help!)Recurring donations being necessary and potentially indefinite (making the match actually 50% since the second donation is the one being matched), placing strain on regranting.If the rules were to change for future Giving Tuesdays or another matching opportunity comes up that seems to be a good candidate for coordination, GWWC and OFTW would be happy to facilitate volunteers to work on the project, given there is sufficient demand for it. We will archive the work that was done in previous years and can make it available to community members on request.We encourage community members to share donation matching opportunities on the EA Forum, and other spaces where donations get discussed. Giving What We Can tries to share counterfactual donation matching opportunities for effective charities through our normal communication channels.We’d like to thank everyone who organized and participated in EA Giving Tuesday over the years. It's been wonderful to see the community come together to raise money for effective charities.Thank you for your support,The EA Giving Tuesday TeamThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Giving Tuesday Hibernation, published by phgubbins on February 9, 2023 on The Effective Altruism Forum.This year’s Giving Tuesday was quite different from previous years. Giving What We Can and One For The World were already running a pared-down version of EAGT due to the change in ownership of the project, and once the rules for 2022 were announced, the project was essentially put on hold. Unfortunately, unlike previous years we do not have a retrospective demonstrating impact, instead, we (OFTW and GWWC) are advising that EAGT should be fully hibernated for future years.Here are the high-level details of this year’s match:“To help nonprofits jumpstart their Giving Season fundraising, Meta will match your donors’ recurring donation 100% up to $100 in the next month (up to $100,000 per organization and up to $7 million in total across all organizations). All new recurring donors who start a recurring donation within November 15 - December 31, 2022 are eligible. Read the terms and conditions.”The reasons for hibernating the project include:Smaller potential impact due to new donor limits (previously to a single charity you could do $20k/donor, now only $100/donor).The matching seems to be more of a lottery than first come first serve, so coordination makes less sense (More details per Will Kiely’s comment here; lots of thanks to Will for all his help!)Recurring donations being necessary and potentially indefinite (making the match actually 50% since the second donation is the one being matched), placing strain on regranting.If the rules were to change for future Giving Tuesdays or another matching opportunity comes up that seems to be a good candidate for coordination, GWWC and OFTW would be happy to facilitate volunteers to work on the project, given there is sufficient demand for it. We will archive the work that was done in previous years and can make it available to community members on request.We encourage community members to share donation matching opportunities on the EA Forum, and other spaces where donations get discussed. Giving What We Can tries to share counterfactual donation matching opportunities for effective charities through our normal communication channels.We’d like to thank everyone who organized and participated in EA Giving Tuesday over the years. It's been wonderful to see the community come together to raise money for effective charities.Thank you for your support,The EA Giving Tuesday TeamThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
phgubbins https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:34 None full 4803
YqMsH9LH9C26mahue_NL_EA_EA EA - No Silver Bullet Solutions for the Werewolf Crisis by Aaron Gertler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No Silver Bullet Solutions for the Werewolf Crisis, published by Aaron Gertler on February 9, 2023 on The Effective Altruism Forum.Reposted here with permission from the original author, Lars Doucet.Mark Coopersmith gaveled the town meeting to order. “Ladies and Gentlemen, what are we going to do about the werewolf crisis? Every full moon, this town is attacked by invincible supernatural werewolves that murder people and then eat them.”A few hands went up. “Yes, Trevor Farrier, in the front row–what do you suggest?”“We have to improve our emergency response. If the paramedics can get to the scene fast enough after a werewolf attack, we can save some of the people the werewolves have left for dead before they bleed out.”“Excellent idea, Trevor. Who else?” More hands went up. “Yes, Rachel Miller. What have you got for us?”“We need more werewolf victim assistance programs. Werewolves make a lot of mess after they murder someone and then eat them, and it can be a real economic burden for the survivors to pay all the funeral and cleanup expenses by themselves.”“Good idea Rachel, but what if some of this assistance goes to rich families who should already be able to afford it?” asked Mark. “And what if some of the poorer families spend their assistance money on frivolous comforts?”“Well obviously, werewolf victim assistance needs to be means-tested and closely regulated. After somebody's family has been murdered and then eaten by werewolves, we'll ask them to sign a bunch of forms, provide proof of income, as well as character references to make sure they're sufficiently deserving of assistance. And naturally, we'll hire a team of officials who work full time scrutinizing recipients' spending to make sure there's no financial waste in the system.”"Thanks Rachel. Very sensible. Any other ideas? Fred."“Thank you,” said Fred Planter. “Werewolves are at root a supply issue–the supply of wolfsbane, namely. Studies have consistently shown that werewolves are somewhat repelled by the smell of wolfsbane, and the plant has a lot other beneficial uses as well. Unfortunately, the damn NWIMBYs that run this place won't let us grow or store any. This kind of smart, stockable, mixed-use herbalism is illegal to grow in most Transylvanian towns!”“Wolfsbane is a disgusting, poisonous weed!” squealed Karen Piper. “It ruins the neighborhood character! No Wolfsbane in My Back Yard!”Fred glared at her and shot back: “Easy for you to say, Karen–you literally live in a castle you inherited from your father, complete with 40-foot unscalable walls. When was the last time you lost a loved one to a werewolf?”“You're a shill for herb developers!” piped back Karen.Mark banged the gavel for order. “That's enough of that. Let's hear from someone else.” One hand remained. “Okay, Larry Smith. Let's hear it.”“Wolfsbane's a good idea and we should do it. But it won't solve the root problem by itself, for that we need to get at the heart of the matter, literally: let's shoot the werewolves with silver bullets.”Mark rolled his eyes. “Look, there are no silver bullet solutions to the werewolf crisis.”“Yes there are,” protested Larry. “Silver bullets are the silver bullet solution. Lycanthrobiologists have repeatedly demonstrated that werewolf physiology is extremely susceptible to high velocity projectiles made of silver.”“We don't need theories, we need evidence.”“I've got your evidence right here,” said Larry, pulling out a stack of research papers. “A recent meta-analysis by Van Helsing, et al shows that households subject to werewolf attacks have highly differentiated mortality rates that strongly vary with defense typology. The highest rate of survivorship occured in home defenders who, in the blind panic of a werewolf attack, happened to stuff their blunderbusses full of silverware bef...]]>
Aaron Gertler https://forum.effectivealtruism.org/posts/YqMsH9LH9C26mahue/no-silver-bullet-solutions-for-the-werewolf-crisis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No Silver Bullet Solutions for the Werewolf Crisis, published by Aaron Gertler on February 9, 2023 on The Effective Altruism Forum.Reposted here with permission from the original author, Lars Doucet.Mark Coopersmith gaveled the town meeting to order. “Ladies and Gentlemen, what are we going to do about the werewolf crisis? Every full moon, this town is attacked by invincible supernatural werewolves that murder people and then eat them.”A few hands went up. “Yes, Trevor Farrier, in the front row–what do you suggest?”“We have to improve our emergency response. If the paramedics can get to the scene fast enough after a werewolf attack, we can save some of the people the werewolves have left for dead before they bleed out.”“Excellent idea, Trevor. Who else?” More hands went up. “Yes, Rachel Miller. What have you got for us?”“We need more werewolf victim assistance programs. Werewolves make a lot of mess after they murder someone and then eat them, and it can be a real economic burden for the survivors to pay all the funeral and cleanup expenses by themselves.”“Good idea Rachel, but what if some of this assistance goes to rich families who should already be able to afford it?” asked Mark. “And what if some of the poorer families spend their assistance money on frivolous comforts?”“Well obviously, werewolf victim assistance needs to be means-tested and closely regulated. After somebody's family has been murdered and then eaten by werewolves, we'll ask them to sign a bunch of forms, provide proof of income, as well as character references to make sure they're sufficiently deserving of assistance. And naturally, we'll hire a team of officials who work full time scrutinizing recipients' spending to make sure there's no financial waste in the system.”"Thanks Rachel. Very sensible. Any other ideas? Fred."“Thank you,” said Fred Planter. “Werewolves are at root a supply issue–the supply of wolfsbane, namely. Studies have consistently shown that werewolves are somewhat repelled by the smell of wolfsbane, and the plant has a lot other beneficial uses as well. Unfortunately, the damn NWIMBYs that run this place won't let us grow or store any. This kind of smart, stockable, mixed-use herbalism is illegal to grow in most Transylvanian towns!”“Wolfsbane is a disgusting, poisonous weed!” squealed Karen Piper. “It ruins the neighborhood character! No Wolfsbane in My Back Yard!”Fred glared at her and shot back: “Easy for you to say, Karen–you literally live in a castle you inherited from your father, complete with 40-foot unscalable walls. When was the last time you lost a loved one to a werewolf?”“You're a shill for herb developers!” piped back Karen.Mark banged the gavel for order. “That's enough of that. Let's hear from someone else.” One hand remained. “Okay, Larry Smith. Let's hear it.”“Wolfsbane's a good idea and we should do it. But it won't solve the root problem by itself, for that we need to get at the heart of the matter, literally: let's shoot the werewolves with silver bullets.”Mark rolled his eyes. “Look, there are no silver bullet solutions to the werewolf crisis.”“Yes there are,” protested Larry. “Silver bullets are the silver bullet solution. Lycanthrobiologists have repeatedly demonstrated that werewolf physiology is extremely susceptible to high velocity projectiles made of silver.”“We don't need theories, we need evidence.”“I've got your evidence right here,” said Larry, pulling out a stack of research papers. “A recent meta-analysis by Van Helsing, et al shows that households subject to werewolf attacks have highly differentiated mortality rates that strongly vary with defense typology. The highest rate of survivorship occured in home defenders who, in the blind panic of a werewolf attack, happened to stuff their blunderbusses full of silverware bef...]]>
Thu, 09 Feb 2023 02:19:15 +0000 EA - No Silver Bullet Solutions for the Werewolf Crisis by Aaron Gertler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No Silver Bullet Solutions for the Werewolf Crisis, published by Aaron Gertler on February 9, 2023 on The Effective Altruism Forum.Reposted here with permission from the original author, Lars Doucet.Mark Coopersmith gaveled the town meeting to order. “Ladies and Gentlemen, what are we going to do about the werewolf crisis? Every full moon, this town is attacked by invincible supernatural werewolves that murder people and then eat them.”A few hands went up. “Yes, Trevor Farrier, in the front row–what do you suggest?”“We have to improve our emergency response. If the paramedics can get to the scene fast enough after a werewolf attack, we can save some of the people the werewolves have left for dead before they bleed out.”“Excellent idea, Trevor. Who else?” More hands went up. “Yes, Rachel Miller. What have you got for us?”“We need more werewolf victim assistance programs. Werewolves make a lot of mess after they murder someone and then eat them, and it can be a real economic burden for the survivors to pay all the funeral and cleanup expenses by themselves.”“Good idea Rachel, but what if some of this assistance goes to rich families who should already be able to afford it?” asked Mark. “And what if some of the poorer families spend their assistance money on frivolous comforts?”“Well obviously, werewolf victim assistance needs to be means-tested and closely regulated. After somebody's family has been murdered and then eaten by werewolves, we'll ask them to sign a bunch of forms, provide proof of income, as well as character references to make sure they're sufficiently deserving of assistance. And naturally, we'll hire a team of officials who work full time scrutinizing recipients' spending to make sure there's no financial waste in the system.”"Thanks Rachel. Very sensible. Any other ideas? Fred."“Thank you,” said Fred Planter. “Werewolves are at root a supply issue–the supply of wolfsbane, namely. Studies have consistently shown that werewolves are somewhat repelled by the smell of wolfsbane, and the plant has a lot other beneficial uses as well. Unfortunately, the damn NWIMBYs that run this place won't let us grow or store any. This kind of smart, stockable, mixed-use herbalism is illegal to grow in most Transylvanian towns!”“Wolfsbane is a disgusting, poisonous weed!” squealed Karen Piper. “It ruins the neighborhood character! No Wolfsbane in My Back Yard!”Fred glared at her and shot back: “Easy for you to say, Karen–you literally live in a castle you inherited from your father, complete with 40-foot unscalable walls. When was the last time you lost a loved one to a werewolf?”“You're a shill for herb developers!” piped back Karen.Mark banged the gavel for order. “That's enough of that. Let's hear from someone else.” One hand remained. “Okay, Larry Smith. Let's hear it.”“Wolfsbane's a good idea and we should do it. But it won't solve the root problem by itself, for that we need to get at the heart of the matter, literally: let's shoot the werewolves with silver bullets.”Mark rolled his eyes. “Look, there are no silver bullet solutions to the werewolf crisis.”“Yes there are,” protested Larry. “Silver bullets are the silver bullet solution. Lycanthrobiologists have repeatedly demonstrated that werewolf physiology is extremely susceptible to high velocity projectiles made of silver.”“We don't need theories, we need evidence.”“I've got your evidence right here,” said Larry, pulling out a stack of research papers. “A recent meta-analysis by Van Helsing, et al shows that households subject to werewolf attacks have highly differentiated mortality rates that strongly vary with defense typology. The highest rate of survivorship occured in home defenders who, in the blind panic of a werewolf attack, happened to stuff their blunderbusses full of silverware bef...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No Silver Bullet Solutions for the Werewolf Crisis, published by Aaron Gertler on February 9, 2023 on The Effective Altruism Forum.Reposted here with permission from the original author, Lars Doucet.Mark Coopersmith gaveled the town meeting to order. “Ladies and Gentlemen, what are we going to do about the werewolf crisis? Every full moon, this town is attacked by invincible supernatural werewolves that murder people and then eat them.”A few hands went up. “Yes, Trevor Farrier, in the front row–what do you suggest?”“We have to improve our emergency response. If the paramedics can get to the scene fast enough after a werewolf attack, we can save some of the people the werewolves have left for dead before they bleed out.”“Excellent idea, Trevor. Who else?” More hands went up. “Yes, Rachel Miller. What have you got for us?”“We need more werewolf victim assistance programs. Werewolves make a lot of mess after they murder someone and then eat them, and it can be a real economic burden for the survivors to pay all the funeral and cleanup expenses by themselves.”“Good idea Rachel, but what if some of this assistance goes to rich families who should already be able to afford it?” asked Mark. “And what if some of the poorer families spend their assistance money on frivolous comforts?”“Well obviously, werewolf victim assistance needs to be means-tested and closely regulated. After somebody's family has been murdered and then eaten by werewolves, we'll ask them to sign a bunch of forms, provide proof of income, as well as character references to make sure they're sufficiently deserving of assistance. And naturally, we'll hire a team of officials who work full time scrutinizing recipients' spending to make sure there's no financial waste in the system.”"Thanks Rachel. Very sensible. Any other ideas? Fred."“Thank you,” said Fred Planter. “Werewolves are at root a supply issue–the supply of wolfsbane, namely. Studies have consistently shown that werewolves are somewhat repelled by the smell of wolfsbane, and the plant has a lot other beneficial uses as well. Unfortunately, the damn NWIMBYs that run this place won't let us grow or store any. This kind of smart, stockable, mixed-use herbalism is illegal to grow in most Transylvanian towns!”“Wolfsbane is a disgusting, poisonous weed!” squealed Karen Piper. “It ruins the neighborhood character! No Wolfsbane in My Back Yard!”Fred glared at her and shot back: “Easy for you to say, Karen–you literally live in a castle you inherited from your father, complete with 40-foot unscalable walls. When was the last time you lost a loved one to a werewolf?”“You're a shill for herb developers!” piped back Karen.Mark banged the gavel for order. “That's enough of that. Let's hear from someone else.” One hand remained. “Okay, Larry Smith. Let's hear it.”“Wolfsbane's a good idea and we should do it. But it won't solve the root problem by itself, for that we need to get at the heart of the matter, literally: let's shoot the werewolves with silver bullets.”Mark rolled his eyes. “Look, there are no silver bullet solutions to the werewolf crisis.”“Yes there are,” protested Larry. “Silver bullets are the silver bullet solution. Lycanthrobiologists have repeatedly demonstrated that werewolf physiology is extremely susceptible to high velocity projectiles made of silver.”“We don't need theories, we need evidence.”“I've got your evidence right here,” said Larry, pulling out a stack of research papers. “A recent meta-analysis by Van Helsing, et al shows that households subject to werewolf attacks have highly differentiated mortality rates that strongly vary with defense typology. The highest rate of survivorship occured in home defenders who, in the blind panic of a werewolf attack, happened to stuff their blunderbusses full of silverware bef...]]>
Aaron Gertler https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:32 None full 4804
LFEvLYQZKGhkT5mpB_NL_EA_EA EA - Animal Welfare - 6 Months in 6 Minutes by Zoe Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Welfare - 6 Months in 6 Minutes, published by Zoe Williams on February 8, 2023 on The Effective Altruism Forum.In August 2022, I started making summaries of the top EA and LW forum posts each week. This post collates together the key themes I’ve seen within top-rated animal welfare posts since then. (Note a lot of good work is happening outside what's posted on the forum too! This post doesn't try to cover that work.)If you’re interested in staying up to date on a more regular basis, consider subscribing to the Weekly EA & LW Forum Summaries, or to the Animal Advocacy Biweekly Digest. Forum announcements here and here.And for a great overview of the good the community is doing in this space, I highly recommend reading Big Wins for Farm Animals This Decade by Lewis Bollard.Key TakeawaysA multi-proxy method has been suggested as a better option than neuron counts for estimating the moral weights of different species, with a first stab completed by Rethink Priorities based on empirical and philosophical research.There is an increasing focus on small animal welfare eg. fish, crustaceans and insects. This is particularly relevant for interventions which may cause substitution effects (consumers moving from one form of animal product to another).Wild animal welfare is becoming a more established cause area, with recent launches including WildAnimalSuffering.org and the NYU Wild Animal Welfare Program.Several major policy wins were achieved, including the first FDA approval of cultivated meat, and an EU announcement that it would put forward a proposal to end the systematic killing of male chicks.ThemesCross-Species ComparisonsMoving beyond neuron countsBig strides have been made in modeling cross-species welfare comparisons.Rethink Priorities published the Moral Weight Project Sequence (led by Bob Fischer), which tackles philosophical and empirical questions related to the relative welfare capacities of 11 different farmed animals. This included looking at the evidence for 90 different hedonic and cognitive proxies in those animals, discussing why we shouldn’t just use neuron counts, and publishing a model of relative differences in the possible intensities of these animals’ pleasure and pains (relative to humans). You can see the results below - they suggest using these as the best-available placeholders until further research can be completed, and noting the translation from intensity of experience (welfare range) into ‘moral weight’ is dependent on several philosophical assumptions:Other work in this area has included:The launching of the NYU Mind, Ethics, and Policy Program, which will conduct and support foundational research about the nature and intrinsic value of nonhuman minds, including biological and artificial minds. There will be a special focus on invertebrates and AIs.The Shrimp Welfare Project (founded 2021) released research on using biological markers to measure the welfare of shrimp, and prioritize the practices causing the worst harms.MHR points out the issues with using neuron counts extend to the practical as well as the philosophical - the only publicly-available empirical reports of fish neuron counts sample exclusively from very small species (<1 g bodyweight), while many farmed fish are 1000x larger. This should add extra skepticism to neuron-count-based estimates of the moral weight of farmed fish.Rethink Priorities released research on welfare considerations for farmed black soldier flies - anticipated to become the most farmed insect species in the next decade.The danger of substitution effectsA team of entomologists and philosophers from Iran, the UK, and the USA found strong or substantial evidence for pain in adult insects of five orders, and ishankhire attempted to model cricket suffering per kg consumed and found t...]]>
Zoe Williams https://forum.effectivealtruism.org/posts/LFEvLYQZKGhkT5mpB/animal-welfare-6-months-in-6-minutes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Welfare - 6 Months in 6 Minutes, published by Zoe Williams on February 8, 2023 on The Effective Altruism Forum.In August 2022, I started making summaries of the top EA and LW forum posts each week. This post collates together the key themes I’ve seen within top-rated animal welfare posts since then. (Note a lot of good work is happening outside what's posted on the forum too! This post doesn't try to cover that work.)If you’re interested in staying up to date on a more regular basis, consider subscribing to the Weekly EA & LW Forum Summaries, or to the Animal Advocacy Biweekly Digest. Forum announcements here and here.And for a great overview of the good the community is doing in this space, I highly recommend reading Big Wins for Farm Animals This Decade by Lewis Bollard.Key TakeawaysA multi-proxy method has been suggested as a better option than neuron counts for estimating the moral weights of different species, with a first stab completed by Rethink Priorities based on empirical and philosophical research.There is an increasing focus on small animal welfare eg. fish, crustaceans and insects. This is particularly relevant for interventions which may cause substitution effects (consumers moving from one form of animal product to another).Wild animal welfare is becoming a more established cause area, with recent launches including WildAnimalSuffering.org and the NYU Wild Animal Welfare Program.Several major policy wins were achieved, including the first FDA approval of cultivated meat, and an EU announcement that it would put forward a proposal to end the systematic killing of male chicks.ThemesCross-Species ComparisonsMoving beyond neuron countsBig strides have been made in modeling cross-species welfare comparisons.Rethink Priorities published the Moral Weight Project Sequence (led by Bob Fischer), which tackles philosophical and empirical questions related to the relative welfare capacities of 11 different farmed animals. This included looking at the evidence for 90 different hedonic and cognitive proxies in those animals, discussing why we shouldn’t just use neuron counts, and publishing a model of relative differences in the possible intensities of these animals’ pleasure and pains (relative to humans). You can see the results below - they suggest using these as the best-available placeholders until further research can be completed, and noting the translation from intensity of experience (welfare range) into ‘moral weight’ is dependent on several philosophical assumptions:Other work in this area has included:The launching of the NYU Mind, Ethics, and Policy Program, which will conduct and support foundational research about the nature and intrinsic value of nonhuman minds, including biological and artificial minds. There will be a special focus on invertebrates and AIs.The Shrimp Welfare Project (founded 2021) released research on using biological markers to measure the welfare of shrimp, and prioritize the practices causing the worst harms.MHR points out the issues with using neuron counts extend to the practical as well as the philosophical - the only publicly-available empirical reports of fish neuron counts sample exclusively from very small species (<1 g bodyweight), while many farmed fish are 1000x larger. This should add extra skepticism to neuron-count-based estimates of the moral weight of farmed fish.Rethink Priorities released research on welfare considerations for farmed black soldier flies - anticipated to become the most farmed insect species in the next decade.The danger of substitution effectsA team of entomologists and philosophers from Iran, the UK, and the USA found strong or substantial evidence for pain in adult insects of five orders, and ishankhire attempted to model cricket suffering per kg consumed and found t...]]>
Wed, 08 Feb 2023 23:36:45 +0000 EA - Animal Welfare - 6 Months in 6 Minutes by Zoe Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Welfare - 6 Months in 6 Minutes, published by Zoe Williams on February 8, 2023 on The Effective Altruism Forum.In August 2022, I started making summaries of the top EA and LW forum posts each week. This post collates together the key themes I’ve seen within top-rated animal welfare posts since then. (Note a lot of good work is happening outside what's posted on the forum too! This post doesn't try to cover that work.)If you’re interested in staying up to date on a more regular basis, consider subscribing to the Weekly EA & LW Forum Summaries, or to the Animal Advocacy Biweekly Digest. Forum announcements here and here.And for a great overview of the good the community is doing in this space, I highly recommend reading Big Wins for Farm Animals This Decade by Lewis Bollard.Key TakeawaysA multi-proxy method has been suggested as a better option than neuron counts for estimating the moral weights of different species, with a first stab completed by Rethink Priorities based on empirical and philosophical research.There is an increasing focus on small animal welfare eg. fish, crustaceans and insects. This is particularly relevant for interventions which may cause substitution effects (consumers moving from one form of animal product to another).Wild animal welfare is becoming a more established cause area, with recent launches including WildAnimalSuffering.org and the NYU Wild Animal Welfare Program.Several major policy wins were achieved, including the first FDA approval of cultivated meat, and an EU announcement that it would put forward a proposal to end the systematic killing of male chicks.ThemesCross-Species ComparisonsMoving beyond neuron countsBig strides have been made in modeling cross-species welfare comparisons.Rethink Priorities published the Moral Weight Project Sequence (led by Bob Fischer), which tackles philosophical and empirical questions related to the relative welfare capacities of 11 different farmed animals. This included looking at the evidence for 90 different hedonic and cognitive proxies in those animals, discussing why we shouldn’t just use neuron counts, and publishing a model of relative differences in the possible intensities of these animals’ pleasure and pains (relative to humans). You can see the results below - they suggest using these as the best-available placeholders until further research can be completed, and noting the translation from intensity of experience (welfare range) into ‘moral weight’ is dependent on several philosophical assumptions:Other work in this area has included:The launching of the NYU Mind, Ethics, and Policy Program, which will conduct and support foundational research about the nature and intrinsic value of nonhuman minds, including biological and artificial minds. There will be a special focus on invertebrates and AIs.The Shrimp Welfare Project (founded 2021) released research on using biological markers to measure the welfare of shrimp, and prioritize the practices causing the worst harms.MHR points out the issues with using neuron counts extend to the practical as well as the philosophical - the only publicly-available empirical reports of fish neuron counts sample exclusively from very small species (<1 g bodyweight), while many farmed fish are 1000x larger. This should add extra skepticism to neuron-count-based estimates of the moral weight of farmed fish.Rethink Priorities released research on welfare considerations for farmed black soldier flies - anticipated to become the most farmed insect species in the next decade.The danger of substitution effectsA team of entomologists and philosophers from Iran, the UK, and the USA found strong or substantial evidence for pain in adult insects of five orders, and ishankhire attempted to model cricket suffering per kg consumed and found t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Welfare - 6 Months in 6 Minutes, published by Zoe Williams on February 8, 2023 on The Effective Altruism Forum.In August 2022, I started making summaries of the top EA and LW forum posts each week. This post collates together the key themes I’ve seen within top-rated animal welfare posts since then. (Note a lot of good work is happening outside what's posted on the forum too! This post doesn't try to cover that work.)If you’re interested in staying up to date on a more regular basis, consider subscribing to the Weekly EA & LW Forum Summaries, or to the Animal Advocacy Biweekly Digest. Forum announcements here and here.And for a great overview of the good the community is doing in this space, I highly recommend reading Big Wins for Farm Animals This Decade by Lewis Bollard.Key TakeawaysA multi-proxy method has been suggested as a better option than neuron counts for estimating the moral weights of different species, with a first stab completed by Rethink Priorities based on empirical and philosophical research.There is an increasing focus on small animal welfare eg. fish, crustaceans and insects. This is particularly relevant for interventions which may cause substitution effects (consumers moving from one form of animal product to another).Wild animal welfare is becoming a more established cause area, with recent launches including WildAnimalSuffering.org and the NYU Wild Animal Welfare Program.Several major policy wins were achieved, including the first FDA approval of cultivated meat, and an EU announcement that it would put forward a proposal to end the systematic killing of male chicks.ThemesCross-Species ComparisonsMoving beyond neuron countsBig strides have been made in modeling cross-species welfare comparisons.Rethink Priorities published the Moral Weight Project Sequence (led by Bob Fischer), which tackles philosophical and empirical questions related to the relative welfare capacities of 11 different farmed animals. This included looking at the evidence for 90 different hedonic and cognitive proxies in those animals, discussing why we shouldn’t just use neuron counts, and publishing a model of relative differences in the possible intensities of these animals’ pleasure and pains (relative to humans). You can see the results below - they suggest using these as the best-available placeholders until further research can be completed, and noting the translation from intensity of experience (welfare range) into ‘moral weight’ is dependent on several philosophical assumptions:Other work in this area has included:The launching of the NYU Mind, Ethics, and Policy Program, which will conduct and support foundational research about the nature and intrinsic value of nonhuman minds, including biological and artificial minds. There will be a special focus on invertebrates and AIs.The Shrimp Welfare Project (founded 2021) released research on using biological markers to measure the welfare of shrimp, and prioritize the practices causing the worst harms.MHR points out the issues with using neuron counts extend to the practical as well as the philosophical - the only publicly-available empirical reports of fish neuron counts sample exclusively from very small species (<1 g bodyweight), while many farmed fish are 1000x larger. This should add extra skepticism to neuron-count-based estimates of the moral weight of farmed fish.Rethink Priorities released research on welfare considerations for farmed black soldier flies - anticipated to become the most farmed insect species in the next decade.The danger of substitution effectsA team of entomologists and philosophers from Iran, the UK, and the USA found strong or substantial evidence for pain in adult insects of five orders, and ishankhire attempted to model cricket suffering per kg consumed and found t...]]>
Zoe Williams https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:16 None full 4805
kMyt5sPPsZjDaFtFh_NL_EA_EA EA - EigenKarma: trust at scale by Henrik Karlsson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EigenKarma: trust at scale, published by Henrik Karlsson on February 8, 2023 on The Effective Altruism Forum.Upvotes or likes have become a standard way to filter information online. The quality of this filter is determined by the users handing out the upvotes.For this reason, the archetypal pattern of online communities is one of gradual decay. People are more likely to join communities where users are more skilled than they are. As communities grow, the skill of the median user goes down. The capacity to filter for quality deteriorates. Simpler, more memetic content drives out more complex thinking. Malicious actors manipulate the rankings through fake votes and the like.This is a problem that will get increasingly pressing as powerful AI models start coming online. To ensure our capacity to make intellectual progress under those conditions, we should take measures to future-proof our public communication channels.One solution is redesigning the karma system in such a way that you can decide whose upvotes you see.In this post, I’m going to detail a prototype of this type of karma system, which has been built by volunteers in Alignment Ecosystem Development. EigenKarma allows each user to define a personal trust graph based on their upvote history.EigenKarmaAt first glance, EigenKarma behaves like normal karma. If you like something, you upvote it.The key difference is that in EigenKarma, every user has a personal trust graph. If you look at my profile, you will see the karma assigned to me by the people in your trust network. There is no global karma score.If we imagine this trust graph powering a feed, and I have gamed the algorithm and gotten a million upvotes, that doesn’t matter; my blog post won’t filter through to you anyway, since you do not put any weight on the judgment of the anonymous masses.If you upvote someone you don’t know, they are attached to your trust graph. This can be interpreted as a tiny signal that you trust them:That trust will also spread to the users they trust in turn. If they trust user X, for example, you too trust X—a little:This is how we intuitively reason about trust when thinking about our friends and the friends of our friends. Only EigenKarma being a database, it can remember and compile more data than you, so it can keep track of more than a Dunbar’s number of relationships. It scales trust. Karma propagates outward through the network from trusted node to trusted node.Once you’ve given out a few upvotes, you can look up people you have never interacted with, like K., and see if people you “trust” think highly of them. If several people you “trust” have upvoted K., the karma they have given to K. is compiled together. The more you “trust” someone, the more karma they will be able to confer:I have written about trust networks and scaling them before, and there’s been plenty of research suggesting that this type of “transitivity of trust” is a highly desired property of a trust metric. But until now, we haven’t seen a serious attempt to build such a system. It is interesting to see it put to use in the wild.Currently, you access EigenKarma through a Discord bot or the website. But the underlying trust graph is platform-independent. You can connect the API (which you can find here) to any platform and bring your trust graph with you.Now, what does a design like this allow us to do?EigenKarma is a primitiveEigenKarma is a primitive. It can be inserted into other tools. Once you start to curate a personal trust graph, it can be used to improve the quality of filtering in many contexts.It can, as mentioned, be used to evaluate content.This lets you curate better personal feeds.It can also be used as a forum moderation tool.What should be shown? Work that is trusted by the core team, perhaps, or work trusted by ...]]>
Henrik Karlsson https://forum.effectivealtruism.org/posts/kMyt5sPPsZjDaFtFh/eigenkarma-trust-at-scale Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EigenKarma: trust at scale, published by Henrik Karlsson on February 8, 2023 on The Effective Altruism Forum.Upvotes or likes have become a standard way to filter information online. The quality of this filter is determined by the users handing out the upvotes.For this reason, the archetypal pattern of online communities is one of gradual decay. People are more likely to join communities where users are more skilled than they are. As communities grow, the skill of the median user goes down. The capacity to filter for quality deteriorates. Simpler, more memetic content drives out more complex thinking. Malicious actors manipulate the rankings through fake votes and the like.This is a problem that will get increasingly pressing as powerful AI models start coming online. To ensure our capacity to make intellectual progress under those conditions, we should take measures to future-proof our public communication channels.One solution is redesigning the karma system in such a way that you can decide whose upvotes you see.In this post, I’m going to detail a prototype of this type of karma system, which has been built by volunteers in Alignment Ecosystem Development. EigenKarma allows each user to define a personal trust graph based on their upvote history.EigenKarmaAt first glance, EigenKarma behaves like normal karma. If you like something, you upvote it.The key difference is that in EigenKarma, every user has a personal trust graph. If you look at my profile, you will see the karma assigned to me by the people in your trust network. There is no global karma score.If we imagine this trust graph powering a feed, and I have gamed the algorithm and gotten a million upvotes, that doesn’t matter; my blog post won’t filter through to you anyway, since you do not put any weight on the judgment of the anonymous masses.If you upvote someone you don’t know, they are attached to your trust graph. This can be interpreted as a tiny signal that you trust them:That trust will also spread to the users they trust in turn. If they trust user X, for example, you too trust X—a little:This is how we intuitively reason about trust when thinking about our friends and the friends of our friends. Only EigenKarma being a database, it can remember and compile more data than you, so it can keep track of more than a Dunbar’s number of relationships. It scales trust. Karma propagates outward through the network from trusted node to trusted node.Once you’ve given out a few upvotes, you can look up people you have never interacted with, like K., and see if people you “trust” think highly of them. If several people you “trust” have upvoted K., the karma they have given to K. is compiled together. The more you “trust” someone, the more karma they will be able to confer:I have written about trust networks and scaling them before, and there’s been plenty of research suggesting that this type of “transitivity of trust” is a highly desired property of a trust metric. But until now, we haven’t seen a serious attempt to build such a system. It is interesting to see it put to use in the wild.Currently, you access EigenKarma through a Discord bot or the website. But the underlying trust graph is platform-independent. You can connect the API (which you can find here) to any platform and bring your trust graph with you.Now, what does a design like this allow us to do?EigenKarma is a primitiveEigenKarma is a primitive. It can be inserted into other tools. Once you start to curate a personal trust graph, it can be used to improve the quality of filtering in many contexts.It can, as mentioned, be used to evaluate content.This lets you curate better personal feeds.It can also be used as a forum moderation tool.What should be shown? Work that is trusted by the core team, perhaps, or work trusted by ...]]>
Wed, 08 Feb 2023 19:59:19 +0000 EA - EigenKarma: trust at scale by Henrik Karlsson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EigenKarma: trust at scale, published by Henrik Karlsson on February 8, 2023 on The Effective Altruism Forum.Upvotes or likes have become a standard way to filter information online. The quality of this filter is determined by the users handing out the upvotes.For this reason, the archetypal pattern of online communities is one of gradual decay. People are more likely to join communities where users are more skilled than they are. As communities grow, the skill of the median user goes down. The capacity to filter for quality deteriorates. Simpler, more memetic content drives out more complex thinking. Malicious actors manipulate the rankings through fake votes and the like.This is a problem that will get increasingly pressing as powerful AI models start coming online. To ensure our capacity to make intellectual progress under those conditions, we should take measures to future-proof our public communication channels.One solution is redesigning the karma system in such a way that you can decide whose upvotes you see.In this post, I’m going to detail a prototype of this type of karma system, which has been built by volunteers in Alignment Ecosystem Development. EigenKarma allows each user to define a personal trust graph based on their upvote history.EigenKarmaAt first glance, EigenKarma behaves like normal karma. If you like something, you upvote it.The key difference is that in EigenKarma, every user has a personal trust graph. If you look at my profile, you will see the karma assigned to me by the people in your trust network. There is no global karma score.If we imagine this trust graph powering a feed, and I have gamed the algorithm and gotten a million upvotes, that doesn’t matter; my blog post won’t filter through to you anyway, since you do not put any weight on the judgment of the anonymous masses.If you upvote someone you don’t know, they are attached to your trust graph. This can be interpreted as a tiny signal that you trust them:That trust will also spread to the users they trust in turn. If they trust user X, for example, you too trust X—a little:This is how we intuitively reason about trust when thinking about our friends and the friends of our friends. Only EigenKarma being a database, it can remember and compile more data than you, so it can keep track of more than a Dunbar’s number of relationships. It scales trust. Karma propagates outward through the network from trusted node to trusted node.Once you’ve given out a few upvotes, you can look up people you have never interacted with, like K., and see if people you “trust” think highly of them. If several people you “trust” have upvoted K., the karma they have given to K. is compiled together. The more you “trust” someone, the more karma they will be able to confer:I have written about trust networks and scaling them before, and there’s been plenty of research suggesting that this type of “transitivity of trust” is a highly desired property of a trust metric. But until now, we haven’t seen a serious attempt to build such a system. It is interesting to see it put to use in the wild.Currently, you access EigenKarma through a Discord bot or the website. But the underlying trust graph is platform-independent. You can connect the API (which you can find here) to any platform and bring your trust graph with you.Now, what does a design like this allow us to do?EigenKarma is a primitiveEigenKarma is a primitive. It can be inserted into other tools. Once you start to curate a personal trust graph, it can be used to improve the quality of filtering in many contexts.It can, as mentioned, be used to evaluate content.This lets you curate better personal feeds.It can also be used as a forum moderation tool.What should be shown? Work that is trusted by the core team, perhaps, or work trusted by ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EigenKarma: trust at scale, published by Henrik Karlsson on February 8, 2023 on The Effective Altruism Forum.Upvotes or likes have become a standard way to filter information online. The quality of this filter is determined by the users handing out the upvotes.For this reason, the archetypal pattern of online communities is one of gradual decay. People are more likely to join communities where users are more skilled than they are. As communities grow, the skill of the median user goes down. The capacity to filter for quality deteriorates. Simpler, more memetic content drives out more complex thinking. Malicious actors manipulate the rankings through fake votes and the like.This is a problem that will get increasingly pressing as powerful AI models start coming online. To ensure our capacity to make intellectual progress under those conditions, we should take measures to future-proof our public communication channels.One solution is redesigning the karma system in such a way that you can decide whose upvotes you see.In this post, I’m going to detail a prototype of this type of karma system, which has been built by volunteers in Alignment Ecosystem Development. EigenKarma allows each user to define a personal trust graph based on their upvote history.EigenKarmaAt first glance, EigenKarma behaves like normal karma. If you like something, you upvote it.The key difference is that in EigenKarma, every user has a personal trust graph. If you look at my profile, you will see the karma assigned to me by the people in your trust network. There is no global karma score.If we imagine this trust graph powering a feed, and I have gamed the algorithm and gotten a million upvotes, that doesn’t matter; my blog post won’t filter through to you anyway, since you do not put any weight on the judgment of the anonymous masses.If you upvote someone you don’t know, they are attached to your trust graph. This can be interpreted as a tiny signal that you trust them:That trust will also spread to the users they trust in turn. If they trust user X, for example, you too trust X—a little:This is how we intuitively reason about trust when thinking about our friends and the friends of our friends. Only EigenKarma being a database, it can remember and compile more data than you, so it can keep track of more than a Dunbar’s number of relationships. It scales trust. Karma propagates outward through the network from trusted node to trusted node.Once you’ve given out a few upvotes, you can look up people you have never interacted with, like K., and see if people you “trust” think highly of them. If several people you “trust” have upvoted K., the karma they have given to K. is compiled together. The more you “trust” someone, the more karma they will be able to confer:I have written about trust networks and scaling them before, and there’s been plenty of research suggesting that this type of “transitivity of trust” is a highly desired property of a trust metric. But until now, we haven’t seen a serious attempt to build such a system. It is interesting to see it put to use in the wild.Currently, you access EigenKarma through a Discord bot or the website. But the underlying trust graph is platform-independent. You can connect the API (which you can find here) to any platform and bring your trust graph with you.Now, what does a design like this allow us to do?EigenKarma is a primitiveEigenKarma is a primitive. It can be inserted into other tools. Once you start to curate a personal trust graph, it can be used to improve the quality of filtering in many contexts.It can, as mentioned, be used to evaluate content.This lets you curate better personal feeds.It can also be used as a forum moderation tool.What should be shown? Work that is trusted by the core team, perhaps, or work trusted by ...]]>
Henrik Karlsson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:28 None full 4792
vtzrZuigkT3ovn4rq_NL_EA_EA EA - How will we know if we are doing good better: The case for more and better monitoring and evaluation by TomBill Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How will we know if we are doing good better: The case for more and better monitoring and evaluation, published by TomBill on February 8, 2023 on The Effective Altruism Forum.Written by Sophie Gulliver and Thomas BillingtonTL;DR - EAs want to do the most good with the resources we have. But how do we know if we are actually doing the most good? How do we know if our projects are running efficiently and effectively? How do we know we are achieving impact? How do we check we are not causing harm? Monitoring and evaluation (M&E) theories and tools can help EA organisations answer these questions—but are currently not being applied to a sufficient degree. These theories and practices will help us achieve more impact and value in the short-term and long-term.Here, we outline a few ways to build M&E knowledge and skills within EA, including:Helpful resourcesThe EA M&E Slack community groupSigning up for our pro bono M&E supportM&E as a career choiceIntroductionEA is “a project that aims to find the best ways to help others, and put them into practice”. Specifically, EA distinguishes itself from general do-gooding through its commitment to doing the most good possible with its resources.To these ends, the EA community is only as good at achieving its goal of “finding the best way to help others” as it is at knowing what the best way to help others is. But how can we know what the best ways to help others are? And how will we know if what we are doing is actually helping others in a cost-effective and impactful way?Monitoring and evaluation (M&E) experts, tools and concepts are some of the best ways to realise effective altruism’s philosophy of maximising impact and doing good better.M&E are two interrelated concepts that help us track and assess a project or organisation’s progress, impact, and value. In essence, if you really care about whether a project is making the world better, then you should also care about M&E.However, in our experience, the integration of M&E tools and expertise within the EA community has been variable and mostly restricted to the global health and development sector.This post argues that the broader EA movement should also be engaging with M&E more to ensure we are doing the most good possible.Note: This post aims at being a broad overview of M&E. Our objective is to increase knowledge and awareness of M&E in the EA space, as well as give some first steps to learn more. Don't worry if you feel like you can't apply it immediately. Future posts will delve into details, and see our section below for what you can do right now.What is M&E?Monitoring and evaluation are two distinct functions that work synergistically. They can be defined as:Monitoring: the systematic and routine collection of information to track progress and identify areas for improvement. Monitoring asks questions like:Are we on track?Are we reaching the right people?Are we using our money and time efficiently?What can we improve?For example, if you were running a project to reduce deaths through improved water quality, you might regularly monitor chlorine availability at water points, or run quarterly surveys asking community members how often they are chlorinating their water.Evaluation: the rigorous assessment of the value of a project or programme to inform decision-making. This assessment usually measures the performance of the project against criteria that define what a ‘valuable’ project looks like. These could be criteria like ‘impactful’ or ‘cost-effective’ or ‘sustainable’. Unlike monitoring, evaluations answer bigger questions to inform important decisions about a project’s future:How well is this project doing overall?How valuable is it?Is it worth it?Should we adapt, scale up or scale down?Given the deeper analysis required, evaluations are t...]]>
TomBill https://forum.effectivealtruism.org/posts/vtzrZuigkT3ovn4rq/how-will-we-know-if-we-are-doing-good-better-the-case-for Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How will we know if we are doing good better: The case for more and better monitoring and evaluation, published by TomBill on February 8, 2023 on The Effective Altruism Forum.Written by Sophie Gulliver and Thomas BillingtonTL;DR - EAs want to do the most good with the resources we have. But how do we know if we are actually doing the most good? How do we know if our projects are running efficiently and effectively? How do we know we are achieving impact? How do we check we are not causing harm? Monitoring and evaluation (M&E) theories and tools can help EA organisations answer these questions—but are currently not being applied to a sufficient degree. These theories and practices will help us achieve more impact and value in the short-term and long-term.Here, we outline a few ways to build M&E knowledge and skills within EA, including:Helpful resourcesThe EA M&E Slack community groupSigning up for our pro bono M&E supportM&E as a career choiceIntroductionEA is “a project that aims to find the best ways to help others, and put them into practice”. Specifically, EA distinguishes itself from general do-gooding through its commitment to doing the most good possible with its resources.To these ends, the EA community is only as good at achieving its goal of “finding the best way to help others” as it is at knowing what the best way to help others is. But how can we know what the best ways to help others are? And how will we know if what we are doing is actually helping others in a cost-effective and impactful way?Monitoring and evaluation (M&E) experts, tools and concepts are some of the best ways to realise effective altruism’s philosophy of maximising impact and doing good better.M&E are two interrelated concepts that help us track and assess a project or organisation’s progress, impact, and value. In essence, if you really care about whether a project is making the world better, then you should also care about M&E.However, in our experience, the integration of M&E tools and expertise within the EA community has been variable and mostly restricted to the global health and development sector.This post argues that the broader EA movement should also be engaging with M&E more to ensure we are doing the most good possible.Note: This post aims at being a broad overview of M&E. Our objective is to increase knowledge and awareness of M&E in the EA space, as well as give some first steps to learn more. Don't worry if you feel like you can't apply it immediately. Future posts will delve into details, and see our section below for what you can do right now.What is M&E?Monitoring and evaluation are two distinct functions that work synergistically. They can be defined as:Monitoring: the systematic and routine collection of information to track progress and identify areas for improvement. Monitoring asks questions like:Are we on track?Are we reaching the right people?Are we using our money and time efficiently?What can we improve?For example, if you were running a project to reduce deaths through improved water quality, you might regularly monitor chlorine availability at water points, or run quarterly surveys asking community members how often they are chlorinating their water.Evaluation: the rigorous assessment of the value of a project or programme to inform decision-making. This assessment usually measures the performance of the project against criteria that define what a ‘valuable’ project looks like. These could be criteria like ‘impactful’ or ‘cost-effective’ or ‘sustainable’. Unlike monitoring, evaluations answer bigger questions to inform important decisions about a project’s future:How well is this project doing overall?How valuable is it?Is it worth it?Should we adapt, scale up or scale down?Given the deeper analysis required, evaluations are t...]]>
Wed, 08 Feb 2023 17:33:38 +0000 EA - How will we know if we are doing good better: The case for more and better monitoring and evaluation by TomBill Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How will we know if we are doing good better: The case for more and better monitoring and evaluation, published by TomBill on February 8, 2023 on The Effective Altruism Forum.Written by Sophie Gulliver and Thomas BillingtonTL;DR - EAs want to do the most good with the resources we have. But how do we know if we are actually doing the most good? How do we know if our projects are running efficiently and effectively? How do we know we are achieving impact? How do we check we are not causing harm? Monitoring and evaluation (M&E) theories and tools can help EA organisations answer these questions—but are currently not being applied to a sufficient degree. These theories and practices will help us achieve more impact and value in the short-term and long-term.Here, we outline a few ways to build M&E knowledge and skills within EA, including:Helpful resourcesThe EA M&E Slack community groupSigning up for our pro bono M&E supportM&E as a career choiceIntroductionEA is “a project that aims to find the best ways to help others, and put them into practice”. Specifically, EA distinguishes itself from general do-gooding through its commitment to doing the most good possible with its resources.To these ends, the EA community is only as good at achieving its goal of “finding the best way to help others” as it is at knowing what the best way to help others is. But how can we know what the best ways to help others are? And how will we know if what we are doing is actually helping others in a cost-effective and impactful way?Monitoring and evaluation (M&E) experts, tools and concepts are some of the best ways to realise effective altruism’s philosophy of maximising impact and doing good better.M&E are two interrelated concepts that help us track and assess a project or organisation’s progress, impact, and value. In essence, if you really care about whether a project is making the world better, then you should also care about M&E.However, in our experience, the integration of M&E tools and expertise within the EA community has been variable and mostly restricted to the global health and development sector.This post argues that the broader EA movement should also be engaging with M&E more to ensure we are doing the most good possible.Note: This post aims at being a broad overview of M&E. Our objective is to increase knowledge and awareness of M&E in the EA space, as well as give some first steps to learn more. Don't worry if you feel like you can't apply it immediately. Future posts will delve into details, and see our section below for what you can do right now.What is M&E?Monitoring and evaluation are two distinct functions that work synergistically. They can be defined as:Monitoring: the systematic and routine collection of information to track progress and identify areas for improvement. Monitoring asks questions like:Are we on track?Are we reaching the right people?Are we using our money and time efficiently?What can we improve?For example, if you were running a project to reduce deaths through improved water quality, you might regularly monitor chlorine availability at water points, or run quarterly surveys asking community members how often they are chlorinating their water.Evaluation: the rigorous assessment of the value of a project or programme to inform decision-making. This assessment usually measures the performance of the project against criteria that define what a ‘valuable’ project looks like. These could be criteria like ‘impactful’ or ‘cost-effective’ or ‘sustainable’. Unlike monitoring, evaluations answer bigger questions to inform important decisions about a project’s future:How well is this project doing overall?How valuable is it?Is it worth it?Should we adapt, scale up or scale down?Given the deeper analysis required, evaluations are t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How will we know if we are doing good better: The case for more and better monitoring and evaluation, published by TomBill on February 8, 2023 on The Effective Altruism Forum.Written by Sophie Gulliver and Thomas BillingtonTL;DR - EAs want to do the most good with the resources we have. But how do we know if we are actually doing the most good? How do we know if our projects are running efficiently and effectively? How do we know we are achieving impact? How do we check we are not causing harm? Monitoring and evaluation (M&E) theories and tools can help EA organisations answer these questions—but are currently not being applied to a sufficient degree. These theories and practices will help us achieve more impact and value in the short-term and long-term.Here, we outline a few ways to build M&E knowledge and skills within EA, including:Helpful resourcesThe EA M&E Slack community groupSigning up for our pro bono M&E supportM&E as a career choiceIntroductionEA is “a project that aims to find the best ways to help others, and put them into practice”. Specifically, EA distinguishes itself from general do-gooding through its commitment to doing the most good possible with its resources.To these ends, the EA community is only as good at achieving its goal of “finding the best way to help others” as it is at knowing what the best way to help others is. But how can we know what the best ways to help others are? And how will we know if what we are doing is actually helping others in a cost-effective and impactful way?Monitoring and evaluation (M&E) experts, tools and concepts are some of the best ways to realise effective altruism’s philosophy of maximising impact and doing good better.M&E are two interrelated concepts that help us track and assess a project or organisation’s progress, impact, and value. In essence, if you really care about whether a project is making the world better, then you should also care about M&E.However, in our experience, the integration of M&E tools and expertise within the EA community has been variable and mostly restricted to the global health and development sector.This post argues that the broader EA movement should also be engaging with M&E more to ensure we are doing the most good possible.Note: This post aims at being a broad overview of M&E. Our objective is to increase knowledge and awareness of M&E in the EA space, as well as give some first steps to learn more. Don't worry if you feel like you can't apply it immediately. Future posts will delve into details, and see our section below for what you can do right now.What is M&E?Monitoring and evaluation are two distinct functions that work synergistically. They can be defined as:Monitoring: the systematic and routine collection of information to track progress and identify areas for improvement. Monitoring asks questions like:Are we on track?Are we reaching the right people?Are we using our money and time efficiently?What can we improve?For example, if you were running a project to reduce deaths through improved water quality, you might regularly monitor chlorine availability at water points, or run quarterly surveys asking community members how often they are chlorinating their water.Evaluation: the rigorous assessment of the value of a project or programme to inform decision-making. This assessment usually measures the performance of the project against criteria that define what a ‘valuable’ project looks like. These could be criteria like ‘impactful’ or ‘cost-effective’ or ‘sustainable’. Unlike monitoring, evaluations answer bigger questions to inform important decisions about a project’s future:How well is this project doing overall?How valuable is it?Is it worth it?Should we adapt, scale up or scale down?Given the deeper analysis required, evaluations are t...]]>
TomBill https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:35 None full 4794
K6GazPCAutKsfu9f5_NL_EA_EA EA - Solidarity for those Rejected from EA Global by Wil Perkins Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Solidarity for those Rejected from EA Global, published by Wil Perkins on February 8, 2023 on The Effective Altruism Forum.This time of year, there are surely many folks out there upset they didn’t get into EA Global (EAG). As the year continues, there will be plenty more people who get rejected, and feel hurt. I am one of them.I'm writing this post to briefly share my experience, and give space for others to vent and commiserate about not being able to attend EAG.As a quick aside, I understand that not everyone can get into EAG due to the high standards the admittance committee has. (at least with the way it’s currently structured.) I also understand that with funding constraints many people had to be rejected, almost definitely more than in previous years. I’m sure that it’s not easy being on the EAG team and making these decisions.That being said, as many of us know, being rejected from EAG can be miserable. Last year, after reading and learning about Effective Altruism for many years, I upended my entire life to work towards having a larger impact. I started an EA group in my local city. I quit my lucrative job and joined an early stage, mission driven AI startup, working for free for months until we secured funding. All this in the hopes I would have more impact and be able to give more to the EA community, and the world.Unfortunately despite all of this effort, I was rejected from EAG. Surprisingly I got in last year, despite being much less involved and having less potential impact from my own perspective. It stings, and I’m frustrated. I don’t blame the people making the decision as I’m sure they had good reasons not to accept my application. But it still hurts. It feels like I devoted hundreds of hours of my life and tied my identity to a group, only to be told I wasn’t good enough.As I said, I want to invite others feeling the same way to comment. I don’t want to encourage destructive or vindictive dog piling on CEA or the EAG team, but I do think it’s important to share what a rejection from EAG means to people.I'd also like to encourage people who did get invited to EAG, or look down on this type of post as complaining, to try and have charity towards folks like me. A bit of empathy can go a long way.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wil Perkins https://forum.effectivealtruism.org/posts/K6GazPCAutKsfu9f5/solidarity-for-those-rejected-from-ea-global Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Solidarity for those Rejected from EA Global, published by Wil Perkins on February 8, 2023 on The Effective Altruism Forum.This time of year, there are surely many folks out there upset they didn’t get into EA Global (EAG). As the year continues, there will be plenty more people who get rejected, and feel hurt. I am one of them.I'm writing this post to briefly share my experience, and give space for others to vent and commiserate about not being able to attend EAG.As a quick aside, I understand that not everyone can get into EAG due to the high standards the admittance committee has. (at least with the way it’s currently structured.) I also understand that with funding constraints many people had to be rejected, almost definitely more than in previous years. I’m sure that it’s not easy being on the EAG team and making these decisions.That being said, as many of us know, being rejected from EAG can be miserable. Last year, after reading and learning about Effective Altruism for many years, I upended my entire life to work towards having a larger impact. I started an EA group in my local city. I quit my lucrative job and joined an early stage, mission driven AI startup, working for free for months until we secured funding. All this in the hopes I would have more impact and be able to give more to the EA community, and the world.Unfortunately despite all of this effort, I was rejected from EAG. Surprisingly I got in last year, despite being much less involved and having less potential impact from my own perspective. It stings, and I’m frustrated. I don’t blame the people making the decision as I’m sure they had good reasons not to accept my application. But it still hurts. It feels like I devoted hundreds of hours of my life and tied my identity to a group, only to be told I wasn’t good enough.As I said, I want to invite others feeling the same way to comment. I don’t want to encourage destructive or vindictive dog piling on CEA or the EAG team, but I do think it’s important to share what a rejection from EAG means to people.I'd also like to encourage people who did get invited to EAG, or look down on this type of post as complaining, to try and have charity towards folks like me. A bit of empathy can go a long way.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 08 Feb 2023 17:31:08 +0000 EA - Solidarity for those Rejected from EA Global by Wil Perkins Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Solidarity for those Rejected from EA Global, published by Wil Perkins on February 8, 2023 on The Effective Altruism Forum.This time of year, there are surely many folks out there upset they didn’t get into EA Global (EAG). As the year continues, there will be plenty more people who get rejected, and feel hurt. I am one of them.I'm writing this post to briefly share my experience, and give space for others to vent and commiserate about not being able to attend EAG.As a quick aside, I understand that not everyone can get into EAG due to the high standards the admittance committee has. (at least with the way it’s currently structured.) I also understand that with funding constraints many people had to be rejected, almost definitely more than in previous years. I’m sure that it’s not easy being on the EAG team and making these decisions.That being said, as many of us know, being rejected from EAG can be miserable. Last year, after reading and learning about Effective Altruism for many years, I upended my entire life to work towards having a larger impact. I started an EA group in my local city. I quit my lucrative job and joined an early stage, mission driven AI startup, working for free for months until we secured funding. All this in the hopes I would have more impact and be able to give more to the EA community, and the world.Unfortunately despite all of this effort, I was rejected from EAG. Surprisingly I got in last year, despite being much less involved and having less potential impact from my own perspective. It stings, and I’m frustrated. I don’t blame the people making the decision as I’m sure they had good reasons not to accept my application. But it still hurts. It feels like I devoted hundreds of hours of my life and tied my identity to a group, only to be told I wasn’t good enough.As I said, I want to invite others feeling the same way to comment. I don’t want to encourage destructive or vindictive dog piling on CEA or the EAG team, but I do think it’s important to share what a rejection from EAG means to people.I'd also like to encourage people who did get invited to EAG, or look down on this type of post as complaining, to try and have charity towards folks like me. A bit of empathy can go a long way.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Solidarity for those Rejected from EA Global, published by Wil Perkins on February 8, 2023 on The Effective Altruism Forum.This time of year, there are surely many folks out there upset they didn’t get into EA Global (EAG). As the year continues, there will be plenty more people who get rejected, and feel hurt. I am one of them.I'm writing this post to briefly share my experience, and give space for others to vent and commiserate about not being able to attend EAG.As a quick aside, I understand that not everyone can get into EAG due to the high standards the admittance committee has. (at least with the way it’s currently structured.) I also understand that with funding constraints many people had to be rejected, almost definitely more than in previous years. I’m sure that it’s not easy being on the EAG team and making these decisions.That being said, as many of us know, being rejected from EAG can be miserable. Last year, after reading and learning about Effective Altruism for many years, I upended my entire life to work towards having a larger impact. I started an EA group in my local city. I quit my lucrative job and joined an early stage, mission driven AI startup, working for free for months until we secured funding. All this in the hopes I would have more impact and be able to give more to the EA community, and the world.Unfortunately despite all of this effort, I was rejected from EAG. Surprisingly I got in last year, despite being much less involved and having less potential impact from my own perspective. It stings, and I’m frustrated. I don’t blame the people making the decision as I’m sure they had good reasons not to accept my application. But it still hurts. It feels like I devoted hundreds of hours of my life and tied my identity to a group, only to be told I wasn’t good enough.As I said, I want to invite others feeling the same way to comment. I don’t want to encourage destructive or vindictive dog piling on CEA or the EAG team, but I do think it’s important to share what a rejection from EAG means to people.I'd also like to encourage people who did get invited to EAG, or look down on this type of post as complaining, to try and have charity towards folks like me. A bit of empathy can go a long way.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wil Perkins https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:18 None full 4795
gwbaySuugjxJAjkgQ_NL_EA_EA EA - Why People Use Burner Accounts: A Commentary on Blacklists, the EA "Inner Circle", and Funding by BurnerExplainer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why People Use Burner Accounts: A Commentary on Blacklists, the EA "Inner Circle", and Funding, published by BurnerExplainer on February 8, 2023 on The Effective Altruism Forum.I wanted to offer a different perspective on this post on why people use burner accounts from someone who has used one previously. This is not intended to be a rebuttal of the arguments made there. It is meant to add to the public discourse and made more sense as a separate post than a comment.I hope that any upvotes/downvotes are given based on the effort you think I put in offering a different perspective instead of whether you agree/disagree with my comments (NB: I think there should be separate upvote/downvote agree/disagree buttons for forum posts too).DisclaimerNote that...I used a burner account in early 2021 after finding myself unhappy with the EA Community.I've been visiting the EA Forums weekly since and occasionally still going to EAGs.I work at an EA organisation that receives significant funding from Open Philanthropy.Reasons for Using BurnersThe two biggest reasons for using burners are the potential operation of blacklists and funding. I'll use them together (with anecdotes and some evidence) to make an overarching point at the end, so read all the way through.BlacklistsUnfortunately, some EA groups and organisations use blacklists (this seems more common in the Bay Area).Note that this is difficult to prove as...I'd have to directly ask them, "Are you using a blacklist? I've heard X rumour that seems to suggest this", and they're very unlikely to say yes (if they are) as it's not in their interests.I don't want to be seen as a "troublemaker" by suggesting an organisation is using a blacklist when I have a strong reason to believe they are. If they operate a blacklist, I'd likely be blacklisted from events for being a "troublemaker".What makes me think blacklists are used is anecdotes. The instance that I've heard the most (and therefore feel the most comfortable describing) about was through someone at EAG London 2022 who told me about someone else who was not invited to a rationality retreat in the Bay Area, despite all others in the same position at the organisation they worked at being invited.The individual believed it was because they gave a brief talk about the importance of diversity at a previous retreat, and this Bay Area rationality organiser (running this new retreat) was attending the previous retreat, was in the audience during the talk, and didn't believe diversity to be important.Note that this may match priors people have about EA being predominantly male, white, and ultimately lacking in diversity (also highlighted in the recent TIME article).The individual asked the organiser why they weren't invited when everyone else in the same position was invited. The organiser ignored their questions.When they asked a different Bay Area rationality organiser, they were told that their talk on diversity may have been "epistemically weak" and "not truth-seeking" enough.These are common terms used amongst Bay Area rationalists and even some funders.When I searched the term truth-seeking on the EA Forums, I found this comment by a funder who was later asked by the OP of the post what "truth-seeking" meant.Anecdotally, a friend of mine was rejected by the Open Philanthropy Undergraduate Scholarship the grantmaker saying they weren't "truth-seeking enough" as their feedback.Strong Personal Opinion: I think the problem here is rationality. It provides a camouflage of formalism, dignity, and an intellectual high ground for when you want to be an absolute asshole, justify contrarian views, and quickly dismiss other people's opinions. By contrarian, in this case, I mean where you think diversity is not important when most of Western society and the media do...]]>
BurnerExplainer https://forum.effectivealtruism.org/posts/gwbaySuugjxJAjkgQ/why-people-use-burner-accounts-a-commentary-on-blacklists Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why People Use Burner Accounts: A Commentary on Blacklists, the EA "Inner Circle", and Funding, published by BurnerExplainer on February 8, 2023 on The Effective Altruism Forum.I wanted to offer a different perspective on this post on why people use burner accounts from someone who has used one previously. This is not intended to be a rebuttal of the arguments made there. It is meant to add to the public discourse and made more sense as a separate post than a comment.I hope that any upvotes/downvotes are given based on the effort you think I put in offering a different perspective instead of whether you agree/disagree with my comments (NB: I think there should be separate upvote/downvote agree/disagree buttons for forum posts too).DisclaimerNote that...I used a burner account in early 2021 after finding myself unhappy with the EA Community.I've been visiting the EA Forums weekly since and occasionally still going to EAGs.I work at an EA organisation that receives significant funding from Open Philanthropy.Reasons for Using BurnersThe two biggest reasons for using burners are the potential operation of blacklists and funding. I'll use them together (with anecdotes and some evidence) to make an overarching point at the end, so read all the way through.BlacklistsUnfortunately, some EA groups and organisations use blacklists (this seems more common in the Bay Area).Note that this is difficult to prove as...I'd have to directly ask them, "Are you using a blacklist? I've heard X rumour that seems to suggest this", and they're very unlikely to say yes (if they are) as it's not in their interests.I don't want to be seen as a "troublemaker" by suggesting an organisation is using a blacklist when I have a strong reason to believe they are. If they operate a blacklist, I'd likely be blacklisted from events for being a "troublemaker".What makes me think blacklists are used is anecdotes. The instance that I've heard the most (and therefore feel the most comfortable describing) about was through someone at EAG London 2022 who told me about someone else who was not invited to a rationality retreat in the Bay Area, despite all others in the same position at the organisation they worked at being invited.The individual believed it was because they gave a brief talk about the importance of diversity at a previous retreat, and this Bay Area rationality organiser (running this new retreat) was attending the previous retreat, was in the audience during the talk, and didn't believe diversity to be important.Note that this may match priors people have about EA being predominantly male, white, and ultimately lacking in diversity (also highlighted in the recent TIME article).The individual asked the organiser why they weren't invited when everyone else in the same position was invited. The organiser ignored their questions.When they asked a different Bay Area rationality organiser, they were told that their talk on diversity may have been "epistemically weak" and "not truth-seeking" enough.These are common terms used amongst Bay Area rationalists and even some funders.When I searched the term truth-seeking on the EA Forums, I found this comment by a funder who was later asked by the OP of the post what "truth-seeking" meant.Anecdotally, a friend of mine was rejected by the Open Philanthropy Undergraduate Scholarship the grantmaker saying they weren't "truth-seeking enough" as their feedback.Strong Personal Opinion: I think the problem here is rationality. It provides a camouflage of formalism, dignity, and an intellectual high ground for when you want to be an absolute asshole, justify contrarian views, and quickly dismiss other people's opinions. By contrarian, in this case, I mean where you think diversity is not important when most of Western society and the media do...]]>
Wed, 08 Feb 2023 13:17:20 +0000 EA - Why People Use Burner Accounts: A Commentary on Blacklists, the EA "Inner Circle", and Funding by BurnerExplainer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why People Use Burner Accounts: A Commentary on Blacklists, the EA "Inner Circle", and Funding, published by BurnerExplainer on February 8, 2023 on The Effective Altruism Forum.I wanted to offer a different perspective on this post on why people use burner accounts from someone who has used one previously. This is not intended to be a rebuttal of the arguments made there. It is meant to add to the public discourse and made more sense as a separate post than a comment.I hope that any upvotes/downvotes are given based on the effort you think I put in offering a different perspective instead of whether you agree/disagree with my comments (NB: I think there should be separate upvote/downvote agree/disagree buttons for forum posts too).DisclaimerNote that...I used a burner account in early 2021 after finding myself unhappy with the EA Community.I've been visiting the EA Forums weekly since and occasionally still going to EAGs.I work at an EA organisation that receives significant funding from Open Philanthropy.Reasons for Using BurnersThe two biggest reasons for using burners are the potential operation of blacklists and funding. I'll use them together (with anecdotes and some evidence) to make an overarching point at the end, so read all the way through.BlacklistsUnfortunately, some EA groups and organisations use blacklists (this seems more common in the Bay Area).Note that this is difficult to prove as...I'd have to directly ask them, "Are you using a blacklist? I've heard X rumour that seems to suggest this", and they're very unlikely to say yes (if they are) as it's not in their interests.I don't want to be seen as a "troublemaker" by suggesting an organisation is using a blacklist when I have a strong reason to believe they are. If they operate a blacklist, I'd likely be blacklisted from events for being a "troublemaker".What makes me think blacklists are used is anecdotes. The instance that I've heard the most (and therefore feel the most comfortable describing) about was through someone at EAG London 2022 who told me about someone else who was not invited to a rationality retreat in the Bay Area, despite all others in the same position at the organisation they worked at being invited.The individual believed it was because they gave a brief talk about the importance of diversity at a previous retreat, and this Bay Area rationality organiser (running this new retreat) was attending the previous retreat, was in the audience during the talk, and didn't believe diversity to be important.Note that this may match priors people have about EA being predominantly male, white, and ultimately lacking in diversity (also highlighted in the recent TIME article).The individual asked the organiser why they weren't invited when everyone else in the same position was invited. The organiser ignored their questions.When they asked a different Bay Area rationality organiser, they were told that their talk on diversity may have been "epistemically weak" and "not truth-seeking" enough.These are common terms used amongst Bay Area rationalists and even some funders.When I searched the term truth-seeking on the EA Forums, I found this comment by a funder who was later asked by the OP of the post what "truth-seeking" meant.Anecdotally, a friend of mine was rejected by the Open Philanthropy Undergraduate Scholarship the grantmaker saying they weren't "truth-seeking enough" as their feedback.Strong Personal Opinion: I think the problem here is rationality. It provides a camouflage of formalism, dignity, and an intellectual high ground for when you want to be an absolute asshole, justify contrarian views, and quickly dismiss other people's opinions. By contrarian, in this case, I mean where you think diversity is not important when most of Western society and the media do...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why People Use Burner Accounts: A Commentary on Blacklists, the EA "Inner Circle", and Funding, published by BurnerExplainer on February 8, 2023 on The Effective Altruism Forum.I wanted to offer a different perspective on this post on why people use burner accounts from someone who has used one previously. This is not intended to be a rebuttal of the arguments made there. It is meant to add to the public discourse and made more sense as a separate post than a comment.I hope that any upvotes/downvotes are given based on the effort you think I put in offering a different perspective instead of whether you agree/disagree with my comments (NB: I think there should be separate upvote/downvote agree/disagree buttons for forum posts too).DisclaimerNote that...I used a burner account in early 2021 after finding myself unhappy with the EA Community.I've been visiting the EA Forums weekly since and occasionally still going to EAGs.I work at an EA organisation that receives significant funding from Open Philanthropy.Reasons for Using BurnersThe two biggest reasons for using burners are the potential operation of blacklists and funding. I'll use them together (with anecdotes and some evidence) to make an overarching point at the end, so read all the way through.BlacklistsUnfortunately, some EA groups and organisations use blacklists (this seems more common in the Bay Area).Note that this is difficult to prove as...I'd have to directly ask them, "Are you using a blacklist? I've heard X rumour that seems to suggest this", and they're very unlikely to say yes (if they are) as it's not in their interests.I don't want to be seen as a "troublemaker" by suggesting an organisation is using a blacklist when I have a strong reason to believe they are. If they operate a blacklist, I'd likely be blacklisted from events for being a "troublemaker".What makes me think blacklists are used is anecdotes. The instance that I've heard the most (and therefore feel the most comfortable describing) about was through someone at EAG London 2022 who told me about someone else who was not invited to a rationality retreat in the Bay Area, despite all others in the same position at the organisation they worked at being invited.The individual believed it was because they gave a brief talk about the importance of diversity at a previous retreat, and this Bay Area rationality organiser (running this new retreat) was attending the previous retreat, was in the audience during the talk, and didn't believe diversity to be important.Note that this may match priors people have about EA being predominantly male, white, and ultimately lacking in diversity (also highlighted in the recent TIME article).The individual asked the organiser why they weren't invited when everyone else in the same position was invited. The organiser ignored their questions.When they asked a different Bay Area rationality organiser, they were told that their talk on diversity may have been "epistemically weak" and "not truth-seeking" enough.These are common terms used amongst Bay Area rationalists and even some funders.When I searched the term truth-seeking on the EA Forums, I found this comment by a funder who was later asked by the OP of the post what "truth-seeking" meant.Anecdotally, a friend of mine was rejected by the Open Philanthropy Undergraduate Scholarship the grantmaker saying they weren't "truth-seeking enough" as their feedback.Strong Personal Opinion: I think the problem here is rationality. It provides a camouflage of formalism, dignity, and an intellectual high ground for when you want to be an absolute asshole, justify contrarian views, and quickly dismiss other people's opinions. By contrarian, in this case, I mean where you think diversity is not important when most of Western society and the media do...]]>
BurnerExplainer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:20 None full 4797
vyRsRAGqryuLMTahj_NL_EA_EA EA - EA Philippines' Progress in 2022 by Elmerei Cuevas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Philippines' Progress in 2022 , published by Elmerei Cuevas on February 8, 2023 on The Effective Altruism Forum.2022 has been a productive year for EA Philippines. In this report, we would like to present our accomplishments and points for improvement for the duration of our 2022 EA Infrastructure Fund grant. We hope that through this report, we are able to inform the wider EA community about the progress and potential of EA Community Building in the Philippines.Parts I-VI are approximately a ~12 minute read. The appendix is approximately a ~16 minute read.I. Funding Received and Financial SummaryIn February 2022, EA PH received a total grant of $89,350 from the EA Infrastructure Fund, for 1 year of 2.32 FTE salary split across 5 to do community building work for EA Philippines + our student chapters.This was how we intended the grant to be split:Person / Budget ItemElmer CuevasFull-Time Community BuilderRed BermejoPart-Time Community BuilderBrian TanPart-Time Community BuilderJanai BarilPart-Time Communications & Events AssociateTanya QuijanoPart-Time Health & Dev't Community BuilderEA Philippines General Grant FundingRoleAvg. EA PH FTE from Feb 2022 to Jan 20230.940.500.490.250.152 Months Buffer / Runway(i.e. if we have delays in our grant application or fundraising for the period after this grant)We generally followed this split, and the team worked roughly at the FTE hours allotted to them, except for Brian. He reduced his FTE gradually at EA PH since he transitioned to working full-time at CEA. He now works just 0.1 FTE as an adviser for EA Philippines, and averaged 0.21 FTE over the year for EA PH. His grant was reduced proportionally, and starting November 2022, Althy Cendaña started as a 0.5 FTE community builder for EA Philippines using the leftover money from Brian’s grant, with the approval of the EAIF.However, some months within the year required more work hours from the original FTE hours allotment. Hence, for the next grant application, we realized the need to increase FTE allotment for certain roles. Breakdown of the General Grant Funding of $6,500 can be found in the Appendix.II. Theory of Change used for Grant Period 2022-2023The document linked above was last updated in Sept 2021 but became the rough basis for the goals for this current EAIF grant. We have started to revisit this ToC for the 2023 cycle.We chose to focus on getting people into higher engagement categories. The Engagement Category Targets were as follows:Create at least 4 new highly engaged EAs (as defined by CEA here)Get at least an additional 6 people to reach category 4 of EA engagement, based on the community building grant EA engagement categories (linked here if you’re not familiar with them:)Get at least an additional 17 people to reach category 3 of EA engagement (100 hours learning about EA)Get at least an additional 29 people to reach category 2 of EA engagement (50 hrs learning about EA.Results of the Community Growth are in the next section.III. Growth in EA PH Community Membership as of December 2022As communicated in our approved grant application, below are our tentative high-level outcomes that we ended up using for the grant period, and our target completion as of January 18, 2023. .Rating the engagement category was done subjectively but informed by factoring in the fellowships, events, reading groups, 1:1s, conferences attended by members, personal EA content reading based on community surveys to contribute to hours of engagement, and volunteer, internships and career plans made. These numbers may be off by 5-10%, but currently is the best way we quantify these categories.Highly Engaged EA (Very good understanding of EA + taken significant action)4 (100+ hrs of EA content engagement + EA played major role in their choice of current job...]]>
Elmerei Cuevas https://forum.effectivealtruism.org/posts/vyRsRAGqryuLMTahj/ea-philippines-progress-in-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Philippines' Progress in 2022 , published by Elmerei Cuevas on February 8, 2023 on The Effective Altruism Forum.2022 has been a productive year for EA Philippines. In this report, we would like to present our accomplishments and points for improvement for the duration of our 2022 EA Infrastructure Fund grant. We hope that through this report, we are able to inform the wider EA community about the progress and potential of EA Community Building in the Philippines.Parts I-VI are approximately a ~12 minute read. The appendix is approximately a ~16 minute read.I. Funding Received and Financial SummaryIn February 2022, EA PH received a total grant of $89,350 from the EA Infrastructure Fund, for 1 year of 2.32 FTE salary split across 5 to do community building work for EA Philippines + our student chapters.This was how we intended the grant to be split:Person / Budget ItemElmer CuevasFull-Time Community BuilderRed BermejoPart-Time Community BuilderBrian TanPart-Time Community BuilderJanai BarilPart-Time Communications & Events AssociateTanya QuijanoPart-Time Health & Dev't Community BuilderEA Philippines General Grant FundingRoleAvg. EA PH FTE from Feb 2022 to Jan 20230.940.500.490.250.152 Months Buffer / Runway(i.e. if we have delays in our grant application or fundraising for the period after this grant)We generally followed this split, and the team worked roughly at the FTE hours allotted to them, except for Brian. He reduced his FTE gradually at EA PH since he transitioned to working full-time at CEA. He now works just 0.1 FTE as an adviser for EA Philippines, and averaged 0.21 FTE over the year for EA PH. His grant was reduced proportionally, and starting November 2022, Althy Cendaña started as a 0.5 FTE community builder for EA Philippines using the leftover money from Brian’s grant, with the approval of the EAIF.However, some months within the year required more work hours from the original FTE hours allotment. Hence, for the next grant application, we realized the need to increase FTE allotment for certain roles. Breakdown of the General Grant Funding of $6,500 can be found in the Appendix.II. Theory of Change used for Grant Period 2022-2023The document linked above was last updated in Sept 2021 but became the rough basis for the goals for this current EAIF grant. We have started to revisit this ToC for the 2023 cycle.We chose to focus on getting people into higher engagement categories. The Engagement Category Targets were as follows:Create at least 4 new highly engaged EAs (as defined by CEA here)Get at least an additional 6 people to reach category 4 of EA engagement, based on the community building grant EA engagement categories (linked here if you’re not familiar with them:)Get at least an additional 17 people to reach category 3 of EA engagement (100 hours learning about EA)Get at least an additional 29 people to reach category 2 of EA engagement (50 hrs learning about EA.Results of the Community Growth are in the next section.III. Growth in EA PH Community Membership as of December 2022As communicated in our approved grant application, below are our tentative high-level outcomes that we ended up using for the grant period, and our target completion as of January 18, 2023. .Rating the engagement category was done subjectively but informed by factoring in the fellowships, events, reading groups, 1:1s, conferences attended by members, personal EA content reading based on community surveys to contribute to hours of engagement, and volunteer, internships and career plans made. These numbers may be off by 5-10%, but currently is the best way we quantify these categories.Highly Engaged EA (Very good understanding of EA + taken significant action)4 (100+ hrs of EA content engagement + EA played major role in their choice of current job...]]>
Wed, 08 Feb 2023 12:51:40 +0000 EA - EA Philippines' Progress in 2022 by Elmerei Cuevas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Philippines' Progress in 2022 , published by Elmerei Cuevas on February 8, 2023 on The Effective Altruism Forum.2022 has been a productive year for EA Philippines. In this report, we would like to present our accomplishments and points for improvement for the duration of our 2022 EA Infrastructure Fund grant. We hope that through this report, we are able to inform the wider EA community about the progress and potential of EA Community Building in the Philippines.Parts I-VI are approximately a ~12 minute read. The appendix is approximately a ~16 minute read.I. Funding Received and Financial SummaryIn February 2022, EA PH received a total grant of $89,350 from the EA Infrastructure Fund, for 1 year of 2.32 FTE salary split across 5 to do community building work for EA Philippines + our student chapters.This was how we intended the grant to be split:Person / Budget ItemElmer CuevasFull-Time Community BuilderRed BermejoPart-Time Community BuilderBrian TanPart-Time Community BuilderJanai BarilPart-Time Communications & Events AssociateTanya QuijanoPart-Time Health & Dev't Community BuilderEA Philippines General Grant FundingRoleAvg. EA PH FTE from Feb 2022 to Jan 20230.940.500.490.250.152 Months Buffer / Runway(i.e. if we have delays in our grant application or fundraising for the period after this grant)We generally followed this split, and the team worked roughly at the FTE hours allotted to them, except for Brian. He reduced his FTE gradually at EA PH since he transitioned to working full-time at CEA. He now works just 0.1 FTE as an adviser for EA Philippines, and averaged 0.21 FTE over the year for EA PH. His grant was reduced proportionally, and starting November 2022, Althy Cendaña started as a 0.5 FTE community builder for EA Philippines using the leftover money from Brian’s grant, with the approval of the EAIF.However, some months within the year required more work hours from the original FTE hours allotment. Hence, for the next grant application, we realized the need to increase FTE allotment for certain roles. Breakdown of the General Grant Funding of $6,500 can be found in the Appendix.II. Theory of Change used for Grant Period 2022-2023The document linked above was last updated in Sept 2021 but became the rough basis for the goals for this current EAIF grant. We have started to revisit this ToC for the 2023 cycle.We chose to focus on getting people into higher engagement categories. The Engagement Category Targets were as follows:Create at least 4 new highly engaged EAs (as defined by CEA here)Get at least an additional 6 people to reach category 4 of EA engagement, based on the community building grant EA engagement categories (linked here if you’re not familiar with them:)Get at least an additional 17 people to reach category 3 of EA engagement (100 hours learning about EA)Get at least an additional 29 people to reach category 2 of EA engagement (50 hrs learning about EA.Results of the Community Growth are in the next section.III. Growth in EA PH Community Membership as of December 2022As communicated in our approved grant application, below are our tentative high-level outcomes that we ended up using for the grant period, and our target completion as of January 18, 2023. .Rating the engagement category was done subjectively but informed by factoring in the fellowships, events, reading groups, 1:1s, conferences attended by members, personal EA content reading based on community surveys to contribute to hours of engagement, and volunteer, internships and career plans made. These numbers may be off by 5-10%, but currently is the best way we quantify these categories.Highly Engaged EA (Very good understanding of EA + taken significant action)4 (100+ hrs of EA content engagement + EA played major role in their choice of current job...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Philippines' Progress in 2022 , published by Elmerei Cuevas on February 8, 2023 on The Effective Altruism Forum.2022 has been a productive year for EA Philippines. In this report, we would like to present our accomplishments and points for improvement for the duration of our 2022 EA Infrastructure Fund grant. We hope that through this report, we are able to inform the wider EA community about the progress and potential of EA Community Building in the Philippines.Parts I-VI are approximately a ~12 minute read. The appendix is approximately a ~16 minute read.I. Funding Received and Financial SummaryIn February 2022, EA PH received a total grant of $89,350 from the EA Infrastructure Fund, for 1 year of 2.32 FTE salary split across 5 to do community building work for EA Philippines + our student chapters.This was how we intended the grant to be split:Person / Budget ItemElmer CuevasFull-Time Community BuilderRed BermejoPart-Time Community BuilderBrian TanPart-Time Community BuilderJanai BarilPart-Time Communications & Events AssociateTanya QuijanoPart-Time Health & Dev't Community BuilderEA Philippines General Grant FundingRoleAvg. EA PH FTE from Feb 2022 to Jan 20230.940.500.490.250.152 Months Buffer / Runway(i.e. if we have delays in our grant application or fundraising for the period after this grant)We generally followed this split, and the team worked roughly at the FTE hours allotted to them, except for Brian. He reduced his FTE gradually at EA PH since he transitioned to working full-time at CEA. He now works just 0.1 FTE as an adviser for EA Philippines, and averaged 0.21 FTE over the year for EA PH. His grant was reduced proportionally, and starting November 2022, Althy Cendaña started as a 0.5 FTE community builder for EA Philippines using the leftover money from Brian’s grant, with the approval of the EAIF.However, some months within the year required more work hours from the original FTE hours allotment. Hence, for the next grant application, we realized the need to increase FTE allotment for certain roles. Breakdown of the General Grant Funding of $6,500 can be found in the Appendix.II. Theory of Change used for Grant Period 2022-2023The document linked above was last updated in Sept 2021 but became the rough basis for the goals for this current EAIF grant. We have started to revisit this ToC for the 2023 cycle.We chose to focus on getting people into higher engagement categories. The Engagement Category Targets were as follows:Create at least 4 new highly engaged EAs (as defined by CEA here)Get at least an additional 6 people to reach category 4 of EA engagement, based on the community building grant EA engagement categories (linked here if you’re not familiar with them:)Get at least an additional 17 people to reach category 3 of EA engagement (100 hours learning about EA)Get at least an additional 29 people to reach category 2 of EA engagement (50 hrs learning about EA.Results of the Community Growth are in the next section.III. Growth in EA PH Community Membership as of December 2022As communicated in our approved grant application, below are our tentative high-level outcomes that we ended up using for the grant period, and our target completion as of January 18, 2023. .Rating the engagement category was done subjectively but informed by factoring in the fellowships, events, reading groups, 1:1s, conferences attended by members, personal EA content reading based on community surveys to contribute to hours of engagement, and volunteer, internships and career plans made. These numbers may be off by 5-10%, but currently is the best way we quantify these categories.Highly Engaged EA (Very good understanding of EA + taken significant action)4 (100+ hrs of EA content engagement + EA played major role in their choice of current job...]]>
Elmerei Cuevas https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 41:48 None full 4796
BsAmChNX9cvwEccny_NL_EA_EA EA - [Our World in Data] AI timelines: What do experts in artificial intelligence expect for the future? (Roser, 2023) by will Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Our World in Data] AI timelines: What do experts in artificial intelligence expect for the future? (Roser, 2023), published by will on February 7, 2023 on The Effective Altruism Forum.Linkposting, tagging and excerpting - in this case, excerpting the article's conclusion - in accord with 'Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?'.[click here for a big version of the visualization]The visualization shows the forecasts of 1128 people – 812 individual AI experts, the aggregated estimates of 315 forecasters from the Metaculus platform, and the findings of the detailed study by Ajeya Cotra.There are two big takeaways from these forecasts on AI timelines:There is no consensus, and the uncertainty is high. There is huge disagreement between experts about when human-level AI will be developed. Some believe that it is decades away, while others think it is probable that such systems will be developed within the next few years or months.There is not just disagreement between experts; individual experts also emphasize the large uncertainty around their own individual estimate. As always when the uncertainty is high, it is important to stress that it cuts both ways. It might be very long until we see human-level AI, but it also means that we might have little time to prepare.At the same time, there is large agreement in the overall picture. The timelines of many experts are shorter than a century, and many have timelines that are substantially shorter than that. The majority of those who study this question believe that there is a 50% chance that transformative AI systems will be developed within the next 50 years. In this case it would plausibly be the biggest transformation in the lifetime of our children, or even in our own lifetime.The public discourse and the decision-making at major institutions have not caught up with these prospects. In discussions on the future of our world – from the future of our climate, to the future of our economies, to the future of our political institutions – the prospect of transformative AI is rarely central to the conversation. Often it is not mentioned at all, not even in a footnote.We seem to be in a situation where most people hardly think about the future of artificial intelligence, while the few who dedicate their attention to it find it plausible that one of the biggest transformations in humanity’s history is likely to happen within our lifetimes.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
will https://forum.effectivealtruism.org/posts/BsAmChNX9cvwEccny/our-world-in-data-ai-timelines-what-do-experts-in-artificial Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Our World in Data] AI timelines: What do experts in artificial intelligence expect for the future? (Roser, 2023), published by will on February 7, 2023 on The Effective Altruism Forum.Linkposting, tagging and excerpting - in this case, excerpting the article's conclusion - in accord with 'Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?'.[click here for a big version of the visualization]The visualization shows the forecasts of 1128 people – 812 individual AI experts, the aggregated estimates of 315 forecasters from the Metaculus platform, and the findings of the detailed study by Ajeya Cotra.There are two big takeaways from these forecasts on AI timelines:There is no consensus, and the uncertainty is high. There is huge disagreement between experts about when human-level AI will be developed. Some believe that it is decades away, while others think it is probable that such systems will be developed within the next few years or months.There is not just disagreement between experts; individual experts also emphasize the large uncertainty around their own individual estimate. As always when the uncertainty is high, it is important to stress that it cuts both ways. It might be very long until we see human-level AI, but it also means that we might have little time to prepare.At the same time, there is large agreement in the overall picture. The timelines of many experts are shorter than a century, and many have timelines that are substantially shorter than that. The majority of those who study this question believe that there is a 50% chance that transformative AI systems will be developed within the next 50 years. In this case it would plausibly be the biggest transformation in the lifetime of our children, or even in our own lifetime.The public discourse and the decision-making at major institutions have not caught up with these prospects. In discussions on the future of our world – from the future of our climate, to the future of our economies, to the future of our political institutions – the prospect of transformative AI is rarely central to the conversation. Often it is not mentioned at all, not even in a footnote.We seem to be in a situation where most people hardly think about the future of artificial intelligence, while the few who dedicate their attention to it find it plausible that one of the biggest transformations in humanity’s history is likely to happen within our lifetimes.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 07 Feb 2023 22:36:48 +0000 EA - [Our World in Data] AI timelines: What do experts in artificial intelligence expect for the future? (Roser, 2023) by will Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Our World in Data] AI timelines: What do experts in artificial intelligence expect for the future? (Roser, 2023), published by will on February 7, 2023 on The Effective Altruism Forum.Linkposting, tagging and excerpting - in this case, excerpting the article's conclusion - in accord with 'Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?'.[click here for a big version of the visualization]The visualization shows the forecasts of 1128 people – 812 individual AI experts, the aggregated estimates of 315 forecasters from the Metaculus platform, and the findings of the detailed study by Ajeya Cotra.There are two big takeaways from these forecasts on AI timelines:There is no consensus, and the uncertainty is high. There is huge disagreement between experts about when human-level AI will be developed. Some believe that it is decades away, while others think it is probable that such systems will be developed within the next few years or months.There is not just disagreement between experts; individual experts also emphasize the large uncertainty around their own individual estimate. As always when the uncertainty is high, it is important to stress that it cuts both ways. It might be very long until we see human-level AI, but it also means that we might have little time to prepare.At the same time, there is large agreement in the overall picture. The timelines of many experts are shorter than a century, and many have timelines that are substantially shorter than that. The majority of those who study this question believe that there is a 50% chance that transformative AI systems will be developed within the next 50 years. In this case it would plausibly be the biggest transformation in the lifetime of our children, or even in our own lifetime.The public discourse and the decision-making at major institutions have not caught up with these prospects. In discussions on the future of our world – from the future of our climate, to the future of our economies, to the future of our political institutions – the prospect of transformative AI is rarely central to the conversation. Often it is not mentioned at all, not even in a footnote.We seem to be in a situation where most people hardly think about the future of artificial intelligence, while the few who dedicate their attention to it find it plausible that one of the biggest transformations in humanity’s history is likely to happen within our lifetimes.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Our World in Data] AI timelines: What do experts in artificial intelligence expect for the future? (Roser, 2023), published by will on February 7, 2023 on The Effective Altruism Forum.Linkposting, tagging and excerpting - in this case, excerpting the article's conclusion - in accord with 'Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?'.[click here for a big version of the visualization]The visualization shows the forecasts of 1128 people – 812 individual AI experts, the aggregated estimates of 315 forecasters from the Metaculus platform, and the findings of the detailed study by Ajeya Cotra.There are two big takeaways from these forecasts on AI timelines:There is no consensus, and the uncertainty is high. There is huge disagreement between experts about when human-level AI will be developed. Some believe that it is decades away, while others think it is probable that such systems will be developed within the next few years or months.There is not just disagreement between experts; individual experts also emphasize the large uncertainty around their own individual estimate. As always when the uncertainty is high, it is important to stress that it cuts both ways. It might be very long until we see human-level AI, but it also means that we might have little time to prepare.At the same time, there is large agreement in the overall picture. The timelines of many experts are shorter than a century, and many have timelines that are substantially shorter than that. The majority of those who study this question believe that there is a 50% chance that transformative AI systems will be developed within the next 50 years. In this case it would plausibly be the biggest transformation in the lifetime of our children, or even in our own lifetime.The public discourse and the decision-making at major institutions have not caught up with these prospects. In discussions on the future of our world – from the future of our climate, to the future of our economies, to the future of our political institutions – the prospect of transformative AI is rarely central to the conversation. Often it is not mentioned at all, not even in a footnote.We seem to be in a situation where most people hardly think about the future of artificial intelligence, while the few who dedicate their attention to it find it plausible that one of the biggest transformations in humanity’s history is likely to happen within our lifetimes.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
will https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:35 None full 4786
tD2rXd9vXmTkRBwHN_NL_EA_EA EA - Scalable longtermist projects: Speedrun series – Introduction by Buhl Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scalable longtermist projects: Speedrun series – Introduction, published by Buhl on February 7, 2023 on The Effective Altruism Forum.This is the introductory post in a sequence showcasing a series of mini-research projects (“speedruns”) into scalable longtermist projects, conducted by Rethink Priorities’ General Longtermism Team in the fall of 2022. Each speedrun involved an initial scoping and evaluation of an idea for a scalable longtermist project, to try to identify if the project could be a top candidate for our team to try to bring about.This post explains how and why you might want to use the speedruns. The appendix contains additional information about how and why we produced the speedruns.Who is this sequence for?We imagine the speedruns might be interesting to:Potential longtermist entrepreneurs who might want to use the speedruns as a source of information and inspiration when deciding what projects to work on.Potential funders of entrepreneurial longtermist projects (for the same reason).Researchers who might be interested in looking further into any of the project ideas.(Aspiring) junior researchers interested in empirical global priorities research who might want to use the speedruns as an example of what such research can look like.Stakeholders and the public interested in Rethink Priorities’ General Longtermism team who might want to use the speedruns as an insight into how we work, such as potential funders and job applicants.Things to keep in mind when using the speedrunsThe speedruns are very preliminary and should not be considered the final word on whether a project is promising or not. They were written by junior generalists, and we spent only ~15h on each, prioritizing speed (surprise surprise) over depth and rigor. So they likely contain mistakes, miss important information, and include poor takes. We would have conducted a more in-depth investigation before launching any of these projects (and recommend that others do the same).The project ideas covered in the speedruns are very exploratory and tentative. They are not plans for what RP will work on, but just ideas for what RP could consider working on.The three speedruns in this series should not be considered as “the three most promising projects in our view”. “Project promisingness” was not a criterion in deciding which speedruns to publish (more on how we chose which speedruns to publish in the Appendix). That said, all topics we conducted speedruns on scored relatively highly on an internal weighted factor model.Opinions on whether a given project is worth pursuing will differ for people in a different position to our team. The speedruns were conducted with the specific aim of helping our team figure out which projects to spend further resources on. So they take into account factors that are specific to our team and strategy, such as our relative fit to work on the project in question.We have not updated the conclusions of the speedrun to reflect the recent changes to the funding situation. The speedruns were all conducted before the drastic changes in the EA funding landscape towards the end of 2022, and so they operate with a rough cost-effectiveness bar that is probably outdated.Overview of the sequenceSo far, we are planning for this series to contain 3 of the 13 speedruns we conducted in fall 2022. There’s no particular order we recommend reading them in.The speedruns we’re planning to include in this sequence (so far) are:Developing an affordable super PPECreate AI alignment prizes [coming up later this week]Demonstrate the ability to rapidly scale food production in the case of nuclear winter [coming up later this week]These are the speedruns that (a) did not develop into longer-term research projects which might have other publishable outputs, and (b) were cl...]]>
Buhl https://forum.effectivealtruism.org/posts/tD2rXd9vXmTkRBwHN/scalable-longtermist-projects-speedrun-series-introduction Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scalable longtermist projects: Speedrun series – Introduction, published by Buhl on February 7, 2023 on The Effective Altruism Forum.This is the introductory post in a sequence showcasing a series of mini-research projects (“speedruns”) into scalable longtermist projects, conducted by Rethink Priorities’ General Longtermism Team in the fall of 2022. Each speedrun involved an initial scoping and evaluation of an idea for a scalable longtermist project, to try to identify if the project could be a top candidate for our team to try to bring about.This post explains how and why you might want to use the speedruns. The appendix contains additional information about how and why we produced the speedruns.Who is this sequence for?We imagine the speedruns might be interesting to:Potential longtermist entrepreneurs who might want to use the speedruns as a source of information and inspiration when deciding what projects to work on.Potential funders of entrepreneurial longtermist projects (for the same reason).Researchers who might be interested in looking further into any of the project ideas.(Aspiring) junior researchers interested in empirical global priorities research who might want to use the speedruns as an example of what such research can look like.Stakeholders and the public interested in Rethink Priorities’ General Longtermism team who might want to use the speedruns as an insight into how we work, such as potential funders and job applicants.Things to keep in mind when using the speedrunsThe speedruns are very preliminary and should not be considered the final word on whether a project is promising or not. They were written by junior generalists, and we spent only ~15h on each, prioritizing speed (surprise surprise) over depth and rigor. So they likely contain mistakes, miss important information, and include poor takes. We would have conducted a more in-depth investigation before launching any of these projects (and recommend that others do the same).The project ideas covered in the speedruns are very exploratory and tentative. They are not plans for what RP will work on, but just ideas for what RP could consider working on.The three speedruns in this series should not be considered as “the three most promising projects in our view”. “Project promisingness” was not a criterion in deciding which speedruns to publish (more on how we chose which speedruns to publish in the Appendix). That said, all topics we conducted speedruns on scored relatively highly on an internal weighted factor model.Opinions on whether a given project is worth pursuing will differ for people in a different position to our team. The speedruns were conducted with the specific aim of helping our team figure out which projects to spend further resources on. So they take into account factors that are specific to our team and strategy, such as our relative fit to work on the project in question.We have not updated the conclusions of the speedrun to reflect the recent changes to the funding situation. The speedruns were all conducted before the drastic changes in the EA funding landscape towards the end of 2022, and so they operate with a rough cost-effectiveness bar that is probably outdated.Overview of the sequenceSo far, we are planning for this series to contain 3 of the 13 speedruns we conducted in fall 2022. There’s no particular order we recommend reading them in.The speedruns we’re planning to include in this sequence (so far) are:Developing an affordable super PPECreate AI alignment prizes [coming up later this week]Demonstrate the ability to rapidly scale food production in the case of nuclear winter [coming up later this week]These are the speedruns that (a) did not develop into longer-term research projects which might have other publishable outputs, and (b) were cl...]]>
Tue, 07 Feb 2023 22:20:37 +0000 EA - Scalable longtermist projects: Speedrun series – Introduction by Buhl Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scalable longtermist projects: Speedrun series – Introduction, published by Buhl on February 7, 2023 on The Effective Altruism Forum.This is the introductory post in a sequence showcasing a series of mini-research projects (“speedruns”) into scalable longtermist projects, conducted by Rethink Priorities’ General Longtermism Team in the fall of 2022. Each speedrun involved an initial scoping and evaluation of an idea for a scalable longtermist project, to try to identify if the project could be a top candidate for our team to try to bring about.This post explains how and why you might want to use the speedruns. The appendix contains additional information about how and why we produced the speedruns.Who is this sequence for?We imagine the speedruns might be interesting to:Potential longtermist entrepreneurs who might want to use the speedruns as a source of information and inspiration when deciding what projects to work on.Potential funders of entrepreneurial longtermist projects (for the same reason).Researchers who might be interested in looking further into any of the project ideas.(Aspiring) junior researchers interested in empirical global priorities research who might want to use the speedruns as an example of what such research can look like.Stakeholders and the public interested in Rethink Priorities’ General Longtermism team who might want to use the speedruns as an insight into how we work, such as potential funders and job applicants.Things to keep in mind when using the speedrunsThe speedruns are very preliminary and should not be considered the final word on whether a project is promising or not. They were written by junior generalists, and we spent only ~15h on each, prioritizing speed (surprise surprise) over depth and rigor. So they likely contain mistakes, miss important information, and include poor takes. We would have conducted a more in-depth investigation before launching any of these projects (and recommend that others do the same).The project ideas covered in the speedruns are very exploratory and tentative. They are not plans for what RP will work on, but just ideas for what RP could consider working on.The three speedruns in this series should not be considered as “the three most promising projects in our view”. “Project promisingness” was not a criterion in deciding which speedruns to publish (more on how we chose which speedruns to publish in the Appendix). That said, all topics we conducted speedruns on scored relatively highly on an internal weighted factor model.Opinions on whether a given project is worth pursuing will differ for people in a different position to our team. The speedruns were conducted with the specific aim of helping our team figure out which projects to spend further resources on. So they take into account factors that are specific to our team and strategy, such as our relative fit to work on the project in question.We have not updated the conclusions of the speedrun to reflect the recent changes to the funding situation. The speedruns were all conducted before the drastic changes in the EA funding landscape towards the end of 2022, and so they operate with a rough cost-effectiveness bar that is probably outdated.Overview of the sequenceSo far, we are planning for this series to contain 3 of the 13 speedruns we conducted in fall 2022. There’s no particular order we recommend reading them in.The speedruns we’re planning to include in this sequence (so far) are:Developing an affordable super PPECreate AI alignment prizes [coming up later this week]Demonstrate the ability to rapidly scale food production in the case of nuclear winter [coming up later this week]These are the speedruns that (a) did not develop into longer-term research projects which might have other publishable outputs, and (b) were cl...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scalable longtermist projects: Speedrun series – Introduction, published by Buhl on February 7, 2023 on The Effective Altruism Forum.This is the introductory post in a sequence showcasing a series of mini-research projects (“speedruns”) into scalable longtermist projects, conducted by Rethink Priorities’ General Longtermism Team in the fall of 2022. Each speedrun involved an initial scoping and evaluation of an idea for a scalable longtermist project, to try to identify if the project could be a top candidate for our team to try to bring about.This post explains how and why you might want to use the speedruns. The appendix contains additional information about how and why we produced the speedruns.Who is this sequence for?We imagine the speedruns might be interesting to:Potential longtermist entrepreneurs who might want to use the speedruns as a source of information and inspiration when deciding what projects to work on.Potential funders of entrepreneurial longtermist projects (for the same reason).Researchers who might be interested in looking further into any of the project ideas.(Aspiring) junior researchers interested in empirical global priorities research who might want to use the speedruns as an example of what such research can look like.Stakeholders and the public interested in Rethink Priorities’ General Longtermism team who might want to use the speedruns as an insight into how we work, such as potential funders and job applicants.Things to keep in mind when using the speedrunsThe speedruns are very preliminary and should not be considered the final word on whether a project is promising or not. They were written by junior generalists, and we spent only ~15h on each, prioritizing speed (surprise surprise) over depth and rigor. So they likely contain mistakes, miss important information, and include poor takes. We would have conducted a more in-depth investigation before launching any of these projects (and recommend that others do the same).The project ideas covered in the speedruns are very exploratory and tentative. They are not plans for what RP will work on, but just ideas for what RP could consider working on.The three speedruns in this series should not be considered as “the three most promising projects in our view”. “Project promisingness” was not a criterion in deciding which speedruns to publish (more on how we chose which speedruns to publish in the Appendix). That said, all topics we conducted speedruns on scored relatively highly on an internal weighted factor model.Opinions on whether a given project is worth pursuing will differ for people in a different position to our team. The speedruns were conducted with the specific aim of helping our team figure out which projects to spend further resources on. So they take into account factors that are specific to our team and strategy, such as our relative fit to work on the project in question.We have not updated the conclusions of the speedrun to reflect the recent changes to the funding situation. The speedruns were all conducted before the drastic changes in the EA funding landscape towards the end of 2022, and so they operate with a rough cost-effectiveness bar that is probably outdated.Overview of the sequenceSo far, we are planning for this series to contain 3 of the 13 speedruns we conducted in fall 2022. There’s no particular order we recommend reading them in.The speedruns we’re planning to include in this sequence (so far) are:Developing an affordable super PPECreate AI alignment prizes [coming up later this week]Demonstrate the ability to rapidly scale food production in the case of nuclear winter [coming up later this week]These are the speedruns that (a) did not develop into longer-term research projects which might have other publishable outputs, and (b) were cl...]]>
Buhl https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:55 None full 4777
wAzYXjGxhJmrxunct_NL_EA_EA EA - Speedrun: Develop an affordable super PPE by Buhl Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Speedrun: Develop an affordable super PPE, published by Buhl on February 7, 2023 on The Effective Altruism Forum.IntroductionThis post is a shallow investigation of the intervention of developing better, cheaper, and easier-to-use personal protective equipment (PPE).The post is part of a sequence of speedrun research projects by Rethink Priorities’ general longtermism team. I recommend you begin by reading the introductory post in the sequence if you haven’t already for information about the context of this post.A quick context tl;dr:The aim of this investigation was to help our team decide if we should take steps towards incubating an organization focusing on this project. Keep in mind that some of the conclusions take into account considerations (such as our team’s comparative advantage) that may not be relevant to the reader.The investigation was intended to be very early-stage and prioritize speed over rigor. We would have conducted a more in-depth investigation before launching any projects in this space (and recommend that others do the same).This post was written in early fall 2022; the funding situation has changed significantly since then, but the investigation has not been updated to reflect this. As such, the funding bar alluded to in the post is probably outdated. My quick guess is that this does not significantly affect the bottom line in this case; at least I think this project is still likely to rank relatively highly among biosecurity-related projects.Epistemic statusI spent ~15 hours researching and writing this speedrun. I have no other experience with biosecurity research and I don’t have a natural science background. So: I’m a junior generalist who has thought about this for a couple of work days, and as a result, this post should be considered very preliminary and likely to contain mistakes and bad takes. My goal in publishing this regardless is that it may be useful as (a) a primer gathering useful information in one place, and (b) an example of the kind of research done by junior generalists.SummaryBottom line:I’m generally optimistic about work being done to develop better PPE: I think it’s ~60% likely to be in the top 20 of the project ideas on our current list excluding considerations about Rethink’s fit for supporting a project like this.My analysis of the merits of this project is primarily based on deferral to Andrew Snyder-Beattie and Ethan Alley, from whose post we got this idea.If I were to make a decision now about whether RP should try to incubate a new project in this space, I would say that it should not . (Incubating a new project might involve things like scoping out a concrete project plan and conducting a founder search; more on how we think about project incubation in this post.)The key reason is that there are a number of other actors already doing or planning projects in this space that are likely better suited to support this kind of work (primarily due to having more biosecurity expertise).I think I’d be unlikely to change my mind about this bottom line with ~10 more hours of investigation, but could easily imagine myself changing my mind after something like ~40 hours of investigation.More detailed summary:There are two main types of PPE a project could focus on developing: Type I PPE, optimized for price and ease of use, and Type II PPE optimized for robust threat protection. I focus on type I PPE, mainly because it’s more commonly brought up in the x-risk community (but there are also some plausible object-level reasons to do so). (more)I think the most important impact of type I PPE from a longtermist perspective is to reduce panic, unrest, and conflict in a highly deadly pandemic, thereby helping to avoid civilisational collapse. (more)I estimate that the financial cost of a project aiming to devel...]]>
Buhl https://forum.effectivealtruism.org/posts/wAzYXjGxhJmrxunct/speedrun-develop-an-affordable-super-ppe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Speedrun: Develop an affordable super PPE, published by Buhl on February 7, 2023 on The Effective Altruism Forum.IntroductionThis post is a shallow investigation of the intervention of developing better, cheaper, and easier-to-use personal protective equipment (PPE).The post is part of a sequence of speedrun research projects by Rethink Priorities’ general longtermism team. I recommend you begin by reading the introductory post in the sequence if you haven’t already for information about the context of this post.A quick context tl;dr:The aim of this investigation was to help our team decide if we should take steps towards incubating an organization focusing on this project. Keep in mind that some of the conclusions take into account considerations (such as our team’s comparative advantage) that may not be relevant to the reader.The investigation was intended to be very early-stage and prioritize speed over rigor. We would have conducted a more in-depth investigation before launching any projects in this space (and recommend that others do the same).This post was written in early fall 2022; the funding situation has changed significantly since then, but the investigation has not been updated to reflect this. As such, the funding bar alluded to in the post is probably outdated. My quick guess is that this does not significantly affect the bottom line in this case; at least I think this project is still likely to rank relatively highly among biosecurity-related projects.Epistemic statusI spent ~15 hours researching and writing this speedrun. I have no other experience with biosecurity research and I don’t have a natural science background. So: I’m a junior generalist who has thought about this for a couple of work days, and as a result, this post should be considered very preliminary and likely to contain mistakes and bad takes. My goal in publishing this regardless is that it may be useful as (a) a primer gathering useful information in one place, and (b) an example of the kind of research done by junior generalists.SummaryBottom line:I’m generally optimistic about work being done to develop better PPE: I think it’s ~60% likely to be in the top 20 of the project ideas on our current list excluding considerations about Rethink’s fit for supporting a project like this.My analysis of the merits of this project is primarily based on deferral to Andrew Snyder-Beattie and Ethan Alley, from whose post we got this idea.If I were to make a decision now about whether RP should try to incubate a new project in this space, I would say that it should not . (Incubating a new project might involve things like scoping out a concrete project plan and conducting a founder search; more on how we think about project incubation in this post.)The key reason is that there are a number of other actors already doing or planning projects in this space that are likely better suited to support this kind of work (primarily due to having more biosecurity expertise).I think I’d be unlikely to change my mind about this bottom line with ~10 more hours of investigation, but could easily imagine myself changing my mind after something like ~40 hours of investigation.More detailed summary:There are two main types of PPE a project could focus on developing: Type I PPE, optimized for price and ease of use, and Type II PPE optimized for robust threat protection. I focus on type I PPE, mainly because it’s more commonly brought up in the x-risk community (but there are also some plausible object-level reasons to do so). (more)I think the most important impact of type I PPE from a longtermist perspective is to reduce panic, unrest, and conflict in a highly deadly pandemic, thereby helping to avoid civilisational collapse. (more)I estimate that the financial cost of a project aiming to devel...]]>
Tue, 07 Feb 2023 21:23:23 +0000 EA - Speedrun: Develop an affordable super PPE by Buhl Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Speedrun: Develop an affordable super PPE, published by Buhl on February 7, 2023 on The Effective Altruism Forum.IntroductionThis post is a shallow investigation of the intervention of developing better, cheaper, and easier-to-use personal protective equipment (PPE).The post is part of a sequence of speedrun research projects by Rethink Priorities’ general longtermism team. I recommend you begin by reading the introductory post in the sequence if you haven’t already for information about the context of this post.A quick context tl;dr:The aim of this investigation was to help our team decide if we should take steps towards incubating an organization focusing on this project. Keep in mind that some of the conclusions take into account considerations (such as our team’s comparative advantage) that may not be relevant to the reader.The investigation was intended to be very early-stage and prioritize speed over rigor. We would have conducted a more in-depth investigation before launching any projects in this space (and recommend that others do the same).This post was written in early fall 2022; the funding situation has changed significantly since then, but the investigation has not been updated to reflect this. As such, the funding bar alluded to in the post is probably outdated. My quick guess is that this does not significantly affect the bottom line in this case; at least I think this project is still likely to rank relatively highly among biosecurity-related projects.Epistemic statusI spent ~15 hours researching and writing this speedrun. I have no other experience with biosecurity research and I don’t have a natural science background. So: I’m a junior generalist who has thought about this for a couple of work days, and as a result, this post should be considered very preliminary and likely to contain mistakes and bad takes. My goal in publishing this regardless is that it may be useful as (a) a primer gathering useful information in one place, and (b) an example of the kind of research done by junior generalists.SummaryBottom line:I’m generally optimistic about work being done to develop better PPE: I think it’s ~60% likely to be in the top 20 of the project ideas on our current list excluding considerations about Rethink’s fit for supporting a project like this.My analysis of the merits of this project is primarily based on deferral to Andrew Snyder-Beattie and Ethan Alley, from whose post we got this idea.If I were to make a decision now about whether RP should try to incubate a new project in this space, I would say that it should not . (Incubating a new project might involve things like scoping out a concrete project plan and conducting a founder search; more on how we think about project incubation in this post.)The key reason is that there are a number of other actors already doing or planning projects in this space that are likely better suited to support this kind of work (primarily due to having more biosecurity expertise).I think I’d be unlikely to change my mind about this bottom line with ~10 more hours of investigation, but could easily imagine myself changing my mind after something like ~40 hours of investigation.More detailed summary:There are two main types of PPE a project could focus on developing: Type I PPE, optimized for price and ease of use, and Type II PPE optimized for robust threat protection. I focus on type I PPE, mainly because it’s more commonly brought up in the x-risk community (but there are also some plausible object-level reasons to do so). (more)I think the most important impact of type I PPE from a longtermist perspective is to reduce panic, unrest, and conflict in a highly deadly pandemic, thereby helping to avoid civilisational collapse. (more)I estimate that the financial cost of a project aiming to devel...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Speedrun: Develop an affordable super PPE, published by Buhl on February 7, 2023 on The Effective Altruism Forum.IntroductionThis post is a shallow investigation of the intervention of developing better, cheaper, and easier-to-use personal protective equipment (PPE).The post is part of a sequence of speedrun research projects by Rethink Priorities’ general longtermism team. I recommend you begin by reading the introductory post in the sequence if you haven’t already for information about the context of this post.A quick context tl;dr:The aim of this investigation was to help our team decide if we should take steps towards incubating an organization focusing on this project. Keep in mind that some of the conclusions take into account considerations (such as our team’s comparative advantage) that may not be relevant to the reader.The investigation was intended to be very early-stage and prioritize speed over rigor. We would have conducted a more in-depth investigation before launching any projects in this space (and recommend that others do the same).This post was written in early fall 2022; the funding situation has changed significantly since then, but the investigation has not been updated to reflect this. As such, the funding bar alluded to in the post is probably outdated. My quick guess is that this does not significantly affect the bottom line in this case; at least I think this project is still likely to rank relatively highly among biosecurity-related projects.Epistemic statusI spent ~15 hours researching and writing this speedrun. I have no other experience with biosecurity research and I don’t have a natural science background. So: I’m a junior generalist who has thought about this for a couple of work days, and as a result, this post should be considered very preliminary and likely to contain mistakes and bad takes. My goal in publishing this regardless is that it may be useful as (a) a primer gathering useful information in one place, and (b) an example of the kind of research done by junior generalists.SummaryBottom line:I’m generally optimistic about work being done to develop better PPE: I think it’s ~60% likely to be in the top 20 of the project ideas on our current list excluding considerations about Rethink’s fit for supporting a project like this.My analysis of the merits of this project is primarily based on deferral to Andrew Snyder-Beattie and Ethan Alley, from whose post we got this idea.If I were to make a decision now about whether RP should try to incubate a new project in this space, I would say that it should not . (Incubating a new project might involve things like scoping out a concrete project plan and conducting a founder search; more on how we think about project incubation in this post.)The key reason is that there are a number of other actors already doing or planning projects in this space that are likely better suited to support this kind of work (primarily due to having more biosecurity expertise).I think I’d be unlikely to change my mind about this bottom line with ~10 more hours of investigation, but could easily imagine myself changing my mind after something like ~40 hours of investigation.More detailed summary:There are two main types of PPE a project could focus on developing: Type I PPE, optimized for price and ease of use, and Type II PPE optimized for robust threat protection. I focus on type I PPE, mainly because it’s more commonly brought up in the x-risk community (but there are also some plausible object-level reasons to do so). (more)I think the most important impact of type I PPE from a longtermist perspective is to reduce panic, unrest, and conflict in a highly deadly pandemic, thereby helping to avoid civilisational collapse. (more)I estimate that the financial cost of a project aiming to devel...]]>
Buhl https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 27:20 None full 4776
u9xyvv8vW2DAtixPh_NL_EA_EA EA - An update to our policies on revealing personal information on the Forum by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An update to our policies on revealing personal information on the Forum, published by Lizka on February 7, 2023 on The Effective Altruism Forum.This is an update from the moderation team that tries to clarify and formalize some norms around revealing personal information on the Forum. You can see the full Guide to Norms here. Please note that we might update these norms.The line between “private information” and “public information” and the line between “personal information” and “relevant information to the EA community” can get fuzzy. We expect that there will be judgement calls about where an incident lies on these spectrums. However, there are some broad principles and some clear-cut cases. We outline them here.TL;DR: personal information is sometimes ok to share, depending on how sensitive it is, how relevant it is to a discussion important for effective altruism, and how public the information is elsewhere. We may encode or remove some kinds of information.A few important notes:We think a very good norm is to check unverified rumors or claims before sharing them — especially if they might be damaging or if they relate to sensitive or stigmatized topics.If you’re not sure whether you should check something (or how to check), you can contact the moderation team to ask.If you think that some information should be removed, you should flag this to us. We will probably not remove information that no one has asked us to remove.(We don’t read everything on the Forum, and when we are reading, we’re not always thinking about everything through the lens of our policies.)Why we don’t just default to removing all private/personal information: we think there are cases when some personal information about people who are highly relevant to work in effective altruism is important to share (like discussions of potential conflicts of interest (COIs) or reasons for why someone in a position of power shouldn't be in that position). We also want to keep the potential for censorship from the moderation team low.The way we enforce these norms isn't about whether we think a specific comment is "overall correct" or helpful, etc.; we're trying to outline policies that will help us make these calls more objectively.How to ask for information to be removedThere are instructions on contacting the moderation team here. One easy way to get in touch with us is to email forum-moderation@effectivealtruism.org. (Please share a link to the content that you believe shares overly personal information, explain what the information is, and consider explaining why you think it’s better to remove if you think it won’t be obvious to us.)Considerations, examples, and what we might doBroad principlesCertain kinds of information are much worse to share. In general, the more personal and further away from professional work the information is, the worse it is to share. We will err much more on the side of removing the following kinds of information (and if people override our decisions by reposting removed information or reverting our edits in these cases, we will take further action, like bans):Information about stigmatized but victimless characteristics, like sexual orientationInformation that poses a danger to the people discussed, like street addresses, or anything that sounds like a call to harassmentInformation that might well be inaccurate or is based on rumors and has the potential to seriously harm someone’s reputation, although this doesn’t mean that all such discussion is banned (note also that we think it is a good norm to check unverified rumors before sharing them)Relevance to EA is a key consideration.(How public or influential the figure is in EA will affect the relevance to EA of discussions about specific people.)How public the information is elsewhere is a factor.When...]]>
Lizka https://forum.effectivealtruism.org/posts/u9xyvv8vW2DAtixPh/an-update-to-our-policies-on-revealing-personal-information Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An update to our policies on revealing personal information on the Forum, published by Lizka on February 7, 2023 on The Effective Altruism Forum.This is an update from the moderation team that tries to clarify and formalize some norms around revealing personal information on the Forum. You can see the full Guide to Norms here. Please note that we might update these norms.The line between “private information” and “public information” and the line between “personal information” and “relevant information to the EA community” can get fuzzy. We expect that there will be judgement calls about where an incident lies on these spectrums. However, there are some broad principles and some clear-cut cases. We outline them here.TL;DR: personal information is sometimes ok to share, depending on how sensitive it is, how relevant it is to a discussion important for effective altruism, and how public the information is elsewhere. We may encode or remove some kinds of information.A few important notes:We think a very good norm is to check unverified rumors or claims before sharing them — especially if they might be damaging or if they relate to sensitive or stigmatized topics.If you’re not sure whether you should check something (or how to check), you can contact the moderation team to ask.If you think that some information should be removed, you should flag this to us. We will probably not remove information that no one has asked us to remove.(We don’t read everything on the Forum, and when we are reading, we’re not always thinking about everything through the lens of our policies.)Why we don’t just default to removing all private/personal information: we think there are cases when some personal information about people who are highly relevant to work in effective altruism is important to share (like discussions of potential conflicts of interest (COIs) or reasons for why someone in a position of power shouldn't be in that position). We also want to keep the potential for censorship from the moderation team low.The way we enforce these norms isn't about whether we think a specific comment is "overall correct" or helpful, etc.; we're trying to outline policies that will help us make these calls more objectively.How to ask for information to be removedThere are instructions on contacting the moderation team here. One easy way to get in touch with us is to email forum-moderation@effectivealtruism.org. (Please share a link to the content that you believe shares overly personal information, explain what the information is, and consider explaining why you think it’s better to remove if you think it won’t be obvious to us.)Considerations, examples, and what we might doBroad principlesCertain kinds of information are much worse to share. In general, the more personal and further away from professional work the information is, the worse it is to share. We will err much more on the side of removing the following kinds of information (and if people override our decisions by reposting removed information or reverting our edits in these cases, we will take further action, like bans):Information about stigmatized but victimless characteristics, like sexual orientationInformation that poses a danger to the people discussed, like street addresses, or anything that sounds like a call to harassmentInformation that might well be inaccurate or is based on rumors and has the potential to seriously harm someone’s reputation, although this doesn’t mean that all such discussion is banned (note also that we think it is a good norm to check unverified rumors before sharing them)Relevance to EA is a key consideration.(How public or influential the figure is in EA will affect the relevance to EA of discussions about specific people.)How public the information is elsewhere is a factor.When...]]>
Tue, 07 Feb 2023 19:22:54 +0000 EA - An update to our policies on revealing personal information on the Forum by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An update to our policies on revealing personal information on the Forum, published by Lizka on February 7, 2023 on The Effective Altruism Forum.This is an update from the moderation team that tries to clarify and formalize some norms around revealing personal information on the Forum. You can see the full Guide to Norms here. Please note that we might update these norms.The line between “private information” and “public information” and the line between “personal information” and “relevant information to the EA community” can get fuzzy. We expect that there will be judgement calls about where an incident lies on these spectrums. However, there are some broad principles and some clear-cut cases. We outline them here.TL;DR: personal information is sometimes ok to share, depending on how sensitive it is, how relevant it is to a discussion important for effective altruism, and how public the information is elsewhere. We may encode or remove some kinds of information.A few important notes:We think a very good norm is to check unverified rumors or claims before sharing them — especially if they might be damaging or if they relate to sensitive or stigmatized topics.If you’re not sure whether you should check something (or how to check), you can contact the moderation team to ask.If you think that some information should be removed, you should flag this to us. We will probably not remove information that no one has asked us to remove.(We don’t read everything on the Forum, and when we are reading, we’re not always thinking about everything through the lens of our policies.)Why we don’t just default to removing all private/personal information: we think there are cases when some personal information about people who are highly relevant to work in effective altruism is important to share (like discussions of potential conflicts of interest (COIs) or reasons for why someone in a position of power shouldn't be in that position). We also want to keep the potential for censorship from the moderation team low.The way we enforce these norms isn't about whether we think a specific comment is "overall correct" or helpful, etc.; we're trying to outline policies that will help us make these calls more objectively.How to ask for information to be removedThere are instructions on contacting the moderation team here. One easy way to get in touch with us is to email forum-moderation@effectivealtruism.org. (Please share a link to the content that you believe shares overly personal information, explain what the information is, and consider explaining why you think it’s better to remove if you think it won’t be obvious to us.)Considerations, examples, and what we might doBroad principlesCertain kinds of information are much worse to share. In general, the more personal and further away from professional work the information is, the worse it is to share. We will err much more on the side of removing the following kinds of information (and if people override our decisions by reposting removed information or reverting our edits in these cases, we will take further action, like bans):Information about stigmatized but victimless characteristics, like sexual orientationInformation that poses a danger to the people discussed, like street addresses, or anything that sounds like a call to harassmentInformation that might well be inaccurate or is based on rumors and has the potential to seriously harm someone’s reputation, although this doesn’t mean that all such discussion is banned (note also that we think it is a good norm to check unverified rumors before sharing them)Relevance to EA is a key consideration.(How public or influential the figure is in EA will affect the relevance to EA of discussions about specific people.)How public the information is elsewhere is a factor.When...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An update to our policies on revealing personal information on the Forum, published by Lizka on February 7, 2023 on The Effective Altruism Forum.This is an update from the moderation team that tries to clarify and formalize some norms around revealing personal information on the Forum. You can see the full Guide to Norms here. Please note that we might update these norms.The line between “private information” and “public information” and the line between “personal information” and “relevant information to the EA community” can get fuzzy. We expect that there will be judgement calls about where an incident lies on these spectrums. However, there are some broad principles and some clear-cut cases. We outline them here.TL;DR: personal information is sometimes ok to share, depending on how sensitive it is, how relevant it is to a discussion important for effective altruism, and how public the information is elsewhere. We may encode or remove some kinds of information.A few important notes:We think a very good norm is to check unverified rumors or claims before sharing them — especially if they might be damaging or if they relate to sensitive or stigmatized topics.If you’re not sure whether you should check something (or how to check), you can contact the moderation team to ask.If you think that some information should be removed, you should flag this to us. We will probably not remove information that no one has asked us to remove.(We don’t read everything on the Forum, and when we are reading, we’re not always thinking about everything through the lens of our policies.)Why we don’t just default to removing all private/personal information: we think there are cases when some personal information about people who are highly relevant to work in effective altruism is important to share (like discussions of potential conflicts of interest (COIs) or reasons for why someone in a position of power shouldn't be in that position). We also want to keep the potential for censorship from the moderation team low.The way we enforce these norms isn't about whether we think a specific comment is "overall correct" or helpful, etc.; we're trying to outline policies that will help us make these calls more objectively.How to ask for information to be removedThere are instructions on contacting the moderation team here. One easy way to get in touch with us is to email forum-moderation@effectivealtruism.org. (Please share a link to the content that you believe shares overly personal information, explain what the information is, and consider explaining why you think it’s better to remove if you think it won’t be obvious to us.)Considerations, examples, and what we might doBroad principlesCertain kinds of information are much worse to share. In general, the more personal and further away from professional work the information is, the worse it is to share. We will err much more on the side of removing the following kinds of information (and if people override our decisions by reposting removed information or reverting our edits in these cases, we will take further action, like bans):Information about stigmatized but victimless characteristics, like sexual orientationInformation that poses a danger to the people discussed, like street addresses, or anything that sounds like a call to harassmentInformation that might well be inaccurate or is based on rumors and has the potential to seriously harm someone’s reputation, although this doesn’t mean that all such discussion is banned (note also that we think it is a good norm to check unverified rumors before sharing them)Relevance to EA is a key consideration.(How public or influential the figure is in EA will affect the relevance to EA of discussions about specific people.)How public the information is elsewhere is a factor.When...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:04 None full 4778
BNKBJs4RJsA8FtdWE_NL_EA_EA EA - A personal reflection on SBF by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A personal reflection on SBF, published by So8res on February 7, 2023 on The Effective Altruism Forum.MetaThe following is a personal account of my (direct and indirect) interactions with Sam Bankman-Fried, which I wrote up in early/mid-November when news came out that FTX had apparently stolen billions of dollars from its customers.I’d previously intended to post a version of this publicly, on account of how people were worried about who knew what when, but in the writing of it I realized how many of my observations were second-hand and shared with me in confidence. This ultimately led to me shelving it (after completing enough of it to extract what lessons I could from the whole affair).I’m posting this now (with various details blurred out) because early last week Rob Bensinger suggested that I do so. Rob argued that accounts such as this one might be useful to the larger community, because they help strip away a layer of mystery and ambiguity from the situation by plainly stating what particular EAs knew or believed, and when they knew or believed it.This post is structured as a chronological account of the facts as I recall them, followed by my own accounting of salient things I think I did right and wrong, followed by general takeaways.Some caveats:I don’t speak for any of the people who shared their thoughts or experiences with me. Some info was shared with me in confidence, and I asked those people for feedback and gave them the opportunity to veto this post, and their feedback made this post better, but their lack of a veto does not constitute approval of the content. My impression is that they think I have some of the emphasis and framings wrong (but it’s not worth the time/attention it would take to correct).This post consists of some of my own processing of my mistakes. It's not a reaction to the whole FTX affair. (My high-level reaction at the time was one of surprise, anger, sadness, and disappointment, with tone and content not terribly dissimilar from Rob Wiblin’s reactions, as I understood them.)The genre of this essay is "me accounting for how I failed my art, while comparing myself to an implausibly high standard". I'm against self-flagellation, and I don't recommend beating yourself up for failing to meet an implausibly high standard.I endorse comparing yourself to a high standard, if doing so helps you notice where your thinking processes could be improved, and if doing so does not cause you psychological distress.My original draft of this post started with a list of relatively raw observations. But the most salient raw observations were shared in confidence, and much of the remainder felt like airing personal details unnecessarily, which feels like an undue violation of others’ privacy. As such, I’ve kept the recounting somewhat vague.I am not particularly recommending that others in the community who had qualms about Sam write up a similarly thorough account. I was pretty tangential to the whole affair, which is why I can fit something this thorough into only ~7k words, and is why it doesn’t seem to me like a huge invasion of privacy to post something like this (especially given what I’m keeping vague).Hopefully this helps people get a better sense of the degree to which at least one EA had at least some warning signs about Sam, and what sort of signs those were. Maybe it will even spark some candid conversation, as I expect might be healthy, if the discussion quality is good.Short versionMy firsthand interactions with Sam were largely pleasant. Multiple of my friends had bad experiences with him, though. Some of them gave me warnings.In one case, a friend warned me about Sam and I (foolishly) misunderstood the friend as arguing that Sam was pursuing ill ends, and weighed their evidence against other evidence that Sam was pursuing...]]>
So8res https://forum.effectivealtruism.org/posts/BNKBJs4RJsA8FtdWE/a-personal-reflection-on-sbf Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A personal reflection on SBF, published by So8res on February 7, 2023 on The Effective Altruism Forum.MetaThe following is a personal account of my (direct and indirect) interactions with Sam Bankman-Fried, which I wrote up in early/mid-November when news came out that FTX had apparently stolen billions of dollars from its customers.I’d previously intended to post a version of this publicly, on account of how people were worried about who knew what when, but in the writing of it I realized how many of my observations were second-hand and shared with me in confidence. This ultimately led to me shelving it (after completing enough of it to extract what lessons I could from the whole affair).I’m posting this now (with various details blurred out) because early last week Rob Bensinger suggested that I do so. Rob argued that accounts such as this one might be useful to the larger community, because they help strip away a layer of mystery and ambiguity from the situation by plainly stating what particular EAs knew or believed, and when they knew or believed it.This post is structured as a chronological account of the facts as I recall them, followed by my own accounting of salient things I think I did right and wrong, followed by general takeaways.Some caveats:I don’t speak for any of the people who shared their thoughts or experiences with me. Some info was shared with me in confidence, and I asked those people for feedback and gave them the opportunity to veto this post, and their feedback made this post better, but their lack of a veto does not constitute approval of the content. My impression is that they think I have some of the emphasis and framings wrong (but it’s not worth the time/attention it would take to correct).This post consists of some of my own processing of my mistakes. It's not a reaction to the whole FTX affair. (My high-level reaction at the time was one of surprise, anger, sadness, and disappointment, with tone and content not terribly dissimilar from Rob Wiblin’s reactions, as I understood them.)The genre of this essay is "me accounting for how I failed my art, while comparing myself to an implausibly high standard". I'm against self-flagellation, and I don't recommend beating yourself up for failing to meet an implausibly high standard.I endorse comparing yourself to a high standard, if doing so helps you notice where your thinking processes could be improved, and if doing so does not cause you psychological distress.My original draft of this post started with a list of relatively raw observations. But the most salient raw observations were shared in confidence, and much of the remainder felt like airing personal details unnecessarily, which feels like an undue violation of others’ privacy. As such, I’ve kept the recounting somewhat vague.I am not particularly recommending that others in the community who had qualms about Sam write up a similarly thorough account. I was pretty tangential to the whole affair, which is why I can fit something this thorough into only ~7k words, and is why it doesn’t seem to me like a huge invasion of privacy to post something like this (especially given what I’m keeping vague).Hopefully this helps people get a better sense of the degree to which at least one EA had at least some warning signs about Sam, and what sort of signs those were. Maybe it will even spark some candid conversation, as I expect might be healthy, if the discussion quality is good.Short versionMy firsthand interactions with Sam were largely pleasant. Multiple of my friends had bad experiences with him, though. Some of them gave me warnings.In one case, a friend warned me about Sam and I (foolishly) misunderstood the friend as arguing that Sam was pursuing ill ends, and weighed their evidence against other evidence that Sam was pursuing...]]>
Tue, 07 Feb 2023 18:14:54 +0000 EA - A personal reflection on SBF by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A personal reflection on SBF, published by So8res on February 7, 2023 on The Effective Altruism Forum.MetaThe following is a personal account of my (direct and indirect) interactions with Sam Bankman-Fried, which I wrote up in early/mid-November when news came out that FTX had apparently stolen billions of dollars from its customers.I’d previously intended to post a version of this publicly, on account of how people were worried about who knew what when, but in the writing of it I realized how many of my observations were second-hand and shared with me in confidence. This ultimately led to me shelving it (after completing enough of it to extract what lessons I could from the whole affair).I’m posting this now (with various details blurred out) because early last week Rob Bensinger suggested that I do so. Rob argued that accounts such as this one might be useful to the larger community, because they help strip away a layer of mystery and ambiguity from the situation by plainly stating what particular EAs knew or believed, and when they knew or believed it.This post is structured as a chronological account of the facts as I recall them, followed by my own accounting of salient things I think I did right and wrong, followed by general takeaways.Some caveats:I don’t speak for any of the people who shared their thoughts or experiences with me. Some info was shared with me in confidence, and I asked those people for feedback and gave them the opportunity to veto this post, and their feedback made this post better, but their lack of a veto does not constitute approval of the content. My impression is that they think I have some of the emphasis and framings wrong (but it’s not worth the time/attention it would take to correct).This post consists of some of my own processing of my mistakes. It's not a reaction to the whole FTX affair. (My high-level reaction at the time was one of surprise, anger, sadness, and disappointment, with tone and content not terribly dissimilar from Rob Wiblin’s reactions, as I understood them.)The genre of this essay is "me accounting for how I failed my art, while comparing myself to an implausibly high standard". I'm against self-flagellation, and I don't recommend beating yourself up for failing to meet an implausibly high standard.I endorse comparing yourself to a high standard, if doing so helps you notice where your thinking processes could be improved, and if doing so does not cause you psychological distress.My original draft of this post started with a list of relatively raw observations. But the most salient raw observations were shared in confidence, and much of the remainder felt like airing personal details unnecessarily, which feels like an undue violation of others’ privacy. As such, I’ve kept the recounting somewhat vague.I am not particularly recommending that others in the community who had qualms about Sam write up a similarly thorough account. I was pretty tangential to the whole affair, which is why I can fit something this thorough into only ~7k words, and is why it doesn’t seem to me like a huge invasion of privacy to post something like this (especially given what I’m keeping vague).Hopefully this helps people get a better sense of the degree to which at least one EA had at least some warning signs about Sam, and what sort of signs those were. Maybe it will even spark some candid conversation, as I expect might be healthy, if the discussion quality is good.Short versionMy firsthand interactions with Sam were largely pleasant. Multiple of my friends had bad experiences with him, though. Some of them gave me warnings.In one case, a friend warned me about Sam and I (foolishly) misunderstood the friend as arguing that Sam was pursuing ill ends, and weighed their evidence against other evidence that Sam was pursuing...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A personal reflection on SBF, published by So8res on February 7, 2023 on The Effective Altruism Forum.MetaThe following is a personal account of my (direct and indirect) interactions with Sam Bankman-Fried, which I wrote up in early/mid-November when news came out that FTX had apparently stolen billions of dollars from its customers.I’d previously intended to post a version of this publicly, on account of how people were worried about who knew what when, but in the writing of it I realized how many of my observations were second-hand and shared with me in confidence. This ultimately led to me shelving it (after completing enough of it to extract what lessons I could from the whole affair).I’m posting this now (with various details blurred out) because early last week Rob Bensinger suggested that I do so. Rob argued that accounts such as this one might be useful to the larger community, because they help strip away a layer of mystery and ambiguity from the situation by plainly stating what particular EAs knew or believed, and when they knew or believed it.This post is structured as a chronological account of the facts as I recall them, followed by my own accounting of salient things I think I did right and wrong, followed by general takeaways.Some caveats:I don’t speak for any of the people who shared their thoughts or experiences with me. Some info was shared with me in confidence, and I asked those people for feedback and gave them the opportunity to veto this post, and their feedback made this post better, but their lack of a veto does not constitute approval of the content. My impression is that they think I have some of the emphasis and framings wrong (but it’s not worth the time/attention it would take to correct).This post consists of some of my own processing of my mistakes. It's not a reaction to the whole FTX affair. (My high-level reaction at the time was one of surprise, anger, sadness, and disappointment, with tone and content not terribly dissimilar from Rob Wiblin’s reactions, as I understood them.)The genre of this essay is "me accounting for how I failed my art, while comparing myself to an implausibly high standard". I'm against self-flagellation, and I don't recommend beating yourself up for failing to meet an implausibly high standard.I endorse comparing yourself to a high standard, if doing so helps you notice where your thinking processes could be improved, and if doing so does not cause you psychological distress.My original draft of this post started with a list of relatively raw observations. But the most salient raw observations were shared in confidence, and much of the remainder felt like airing personal details unnecessarily, which feels like an undue violation of others’ privacy. As such, I’ve kept the recounting somewhat vague.I am not particularly recommending that others in the community who had qualms about Sam write up a similarly thorough account. I was pretty tangential to the whole affair, which is why I can fit something this thorough into only ~7k words, and is why it doesn’t seem to me like a huge invasion of privacy to post something like this (especially given what I’m keeping vague).Hopefully this helps people get a better sense of the degree to which at least one EA had at least some warning signs about Sam, and what sort of signs those were. Maybe it will even spark some candid conversation, as I expect might be healthy, if the discussion quality is good.Short versionMy firsthand interactions with Sam were largely pleasant. Multiple of my friends had bad experiences with him, though. Some of them gave me warnings.In one case, a friend warned me about Sam and I (foolishly) misunderstood the friend as arguing that Sam was pursuing ill ends, and weighed their evidence against other evidence that Sam was pursuing...]]>
So8res https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 35:01 None full 4779
W4dNEg3wWwJPN5NEL_NL_EA_EA EA - A Manifesto by sophiathephirst Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Manifesto, published by sophiathephirst on February 7, 2023 on The Effective Altruism Forum.I started writing my EAG Bay Area application tonight (as any chronic procrastinator does), and instead ended up whipping up my backstory and a public declaration of my life's mission.I moved to Berkeley in July of 2022 without knowing what EA was. I moved to California alone at 18 because I needed to find a community that both shared a similar fervor for their work, and had values that aligned with mine. Rolling back a couple of months, to May of my senior year of high school, I knew my time was coming to an end. I had so few days left to leave a tangible impact on a community I loved so dearly. I had grown a deep-rooted seed of love in my heart for my senior class, as well as the underclassmen I had mentored in sports, and academic tutoring.I was brought up in a Christian household with core values of love, kindness, and responsibility. As I grew up, I tied these three values together, so by the time I was a senior in high school, the class president, and well-liked by my peers and administrators, I knew I needed to make a difference. I used the skills and ideas that I possessed to make my high school a better place for my peers, in return for the community and friendships they provided. But this love, and drive to make the community better extended beyond my peers.I had grown up doing service projects, packing meals with Feed My Starving Children, helping the homeless during the George Floyd murder and riots near my home, and helping people who needed it. I grew up with two polar opposite perspectives from my parents. I had been instilled with the values of love and kindness from my mother, who would always go out of her way to make those around her comfortable and content. On the other side, my father, an observant, methodical engineer, taught me to think about why things worked the way they did. I ended up with a blend of these traits, which led me to sit in a hotel room one night in West Yellowstone, staying up until 2 am, reading about effective charities to which I could donate my money.I was raised to work hard and earn my own money from a young age. I started caddying at the local country club at the age of 12, and by the age of 16, I had earned over $10,000 the previous summer. A little over a year before Yellowstone, I had decided that I would start donating a portion of my money. I had always tithed as a child, and at 15, my friend’s brother had recently become a quadriplegic, and I felt compelled to give $500 of my own money to help pay for rehab services.At 15, I realized that there were two intuitive components to altruism, one of which I had been missing at the church. 1) I would donate my resources to causes that I felt passionate about, as I believed there was an emotional element that I needed to acknowledge which is hard coded into humans, that we as humans should help those around us, to preserve the lives of others, and further our collective existence. 2) I wanted to give to causes that I knew my money would go to good use for. I started to question how my money had been used at church, and where my money was going when I donated to charitable causes such as WorldVision, and the Salvation Army. I needed transparency. I needed to know that the time and energy I had put into earning my funds, was saving lives equally if not, better than I could if I used my time to save lives directly.Flash forward to July of 2022, I sat in the Berkeley WeWork astounded that I had just talked to someone who shared the same passion that I had to make the world a better place, because of their love for humanity, as well as their desire to use their time effectively, and ask questions about why people did the things they did and acted the way they do. I soon stumble...]]>
sophiathephirst https://forum.effectivealtruism.org/posts/W4dNEg3wWwJPN5NEL/a-manifesto Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Manifesto, published by sophiathephirst on February 7, 2023 on The Effective Altruism Forum.I started writing my EAG Bay Area application tonight (as any chronic procrastinator does), and instead ended up whipping up my backstory and a public declaration of my life's mission.I moved to Berkeley in July of 2022 without knowing what EA was. I moved to California alone at 18 because I needed to find a community that both shared a similar fervor for their work, and had values that aligned with mine. Rolling back a couple of months, to May of my senior year of high school, I knew my time was coming to an end. I had so few days left to leave a tangible impact on a community I loved so dearly. I had grown a deep-rooted seed of love in my heart for my senior class, as well as the underclassmen I had mentored in sports, and academic tutoring.I was brought up in a Christian household with core values of love, kindness, and responsibility. As I grew up, I tied these three values together, so by the time I was a senior in high school, the class president, and well-liked by my peers and administrators, I knew I needed to make a difference. I used the skills and ideas that I possessed to make my high school a better place for my peers, in return for the community and friendships they provided. But this love, and drive to make the community better extended beyond my peers.I had grown up doing service projects, packing meals with Feed My Starving Children, helping the homeless during the George Floyd murder and riots near my home, and helping people who needed it. I grew up with two polar opposite perspectives from my parents. I had been instilled with the values of love and kindness from my mother, who would always go out of her way to make those around her comfortable and content. On the other side, my father, an observant, methodical engineer, taught me to think about why things worked the way they did. I ended up with a blend of these traits, which led me to sit in a hotel room one night in West Yellowstone, staying up until 2 am, reading about effective charities to which I could donate my money.I was raised to work hard and earn my own money from a young age. I started caddying at the local country club at the age of 12, and by the age of 16, I had earned over $10,000 the previous summer. A little over a year before Yellowstone, I had decided that I would start donating a portion of my money. I had always tithed as a child, and at 15, my friend’s brother had recently become a quadriplegic, and I felt compelled to give $500 of my own money to help pay for rehab services.At 15, I realized that there were two intuitive components to altruism, one of which I had been missing at the church. 1) I would donate my resources to causes that I felt passionate about, as I believed there was an emotional element that I needed to acknowledge which is hard coded into humans, that we as humans should help those around us, to preserve the lives of others, and further our collective existence. 2) I wanted to give to causes that I knew my money would go to good use for. I started to question how my money had been used at church, and where my money was going when I donated to charitable causes such as WorldVision, and the Salvation Army. I needed transparency. I needed to know that the time and energy I had put into earning my funds, was saving lives equally if not, better than I could if I used my time to save lives directly.Flash forward to July of 2022, I sat in the Berkeley WeWork astounded that I had just talked to someone who shared the same passion that I had to make the world a better place, because of their love for humanity, as well as their desire to use their time effectively, and ask questions about why people did the things they did and acted the way they do. I soon stumble...]]>
Tue, 07 Feb 2023 11:35:17 +0000 EA - A Manifesto by sophiathephirst Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Manifesto, published by sophiathephirst on February 7, 2023 on The Effective Altruism Forum.I started writing my EAG Bay Area application tonight (as any chronic procrastinator does), and instead ended up whipping up my backstory and a public declaration of my life's mission.I moved to Berkeley in July of 2022 without knowing what EA was. I moved to California alone at 18 because I needed to find a community that both shared a similar fervor for their work, and had values that aligned with mine. Rolling back a couple of months, to May of my senior year of high school, I knew my time was coming to an end. I had so few days left to leave a tangible impact on a community I loved so dearly. I had grown a deep-rooted seed of love in my heart for my senior class, as well as the underclassmen I had mentored in sports, and academic tutoring.I was brought up in a Christian household with core values of love, kindness, and responsibility. As I grew up, I tied these three values together, so by the time I was a senior in high school, the class president, and well-liked by my peers and administrators, I knew I needed to make a difference. I used the skills and ideas that I possessed to make my high school a better place for my peers, in return for the community and friendships they provided. But this love, and drive to make the community better extended beyond my peers.I had grown up doing service projects, packing meals with Feed My Starving Children, helping the homeless during the George Floyd murder and riots near my home, and helping people who needed it. I grew up with two polar opposite perspectives from my parents. I had been instilled with the values of love and kindness from my mother, who would always go out of her way to make those around her comfortable and content. On the other side, my father, an observant, methodical engineer, taught me to think about why things worked the way they did. I ended up with a blend of these traits, which led me to sit in a hotel room one night in West Yellowstone, staying up until 2 am, reading about effective charities to which I could donate my money.I was raised to work hard and earn my own money from a young age. I started caddying at the local country club at the age of 12, and by the age of 16, I had earned over $10,000 the previous summer. A little over a year before Yellowstone, I had decided that I would start donating a portion of my money. I had always tithed as a child, and at 15, my friend’s brother had recently become a quadriplegic, and I felt compelled to give $500 of my own money to help pay for rehab services.At 15, I realized that there were two intuitive components to altruism, one of which I had been missing at the church. 1) I would donate my resources to causes that I felt passionate about, as I believed there was an emotional element that I needed to acknowledge which is hard coded into humans, that we as humans should help those around us, to preserve the lives of others, and further our collective existence. 2) I wanted to give to causes that I knew my money would go to good use for. I started to question how my money had been used at church, and where my money was going when I donated to charitable causes such as WorldVision, and the Salvation Army. I needed transparency. I needed to know that the time and energy I had put into earning my funds, was saving lives equally if not, better than I could if I used my time to save lives directly.Flash forward to July of 2022, I sat in the Berkeley WeWork astounded that I had just talked to someone who shared the same passion that I had to make the world a better place, because of their love for humanity, as well as their desire to use their time effectively, and ask questions about why people did the things they did and acted the way they do. I soon stumble...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Manifesto, published by sophiathephirst on February 7, 2023 on The Effective Altruism Forum.I started writing my EAG Bay Area application tonight (as any chronic procrastinator does), and instead ended up whipping up my backstory and a public declaration of my life's mission.I moved to Berkeley in July of 2022 without knowing what EA was. I moved to California alone at 18 because I needed to find a community that both shared a similar fervor for their work, and had values that aligned with mine. Rolling back a couple of months, to May of my senior year of high school, I knew my time was coming to an end. I had so few days left to leave a tangible impact on a community I loved so dearly. I had grown a deep-rooted seed of love in my heart for my senior class, as well as the underclassmen I had mentored in sports, and academic tutoring.I was brought up in a Christian household with core values of love, kindness, and responsibility. As I grew up, I tied these three values together, so by the time I was a senior in high school, the class president, and well-liked by my peers and administrators, I knew I needed to make a difference. I used the skills and ideas that I possessed to make my high school a better place for my peers, in return for the community and friendships they provided. But this love, and drive to make the community better extended beyond my peers.I had grown up doing service projects, packing meals with Feed My Starving Children, helping the homeless during the George Floyd murder and riots near my home, and helping people who needed it. I grew up with two polar opposite perspectives from my parents. I had been instilled with the values of love and kindness from my mother, who would always go out of her way to make those around her comfortable and content. On the other side, my father, an observant, methodical engineer, taught me to think about why things worked the way they did. I ended up with a blend of these traits, which led me to sit in a hotel room one night in West Yellowstone, staying up until 2 am, reading about effective charities to which I could donate my money.I was raised to work hard and earn my own money from a young age. I started caddying at the local country club at the age of 12, and by the age of 16, I had earned over $10,000 the previous summer. A little over a year before Yellowstone, I had decided that I would start donating a portion of my money. I had always tithed as a child, and at 15, my friend’s brother had recently become a quadriplegic, and I felt compelled to give $500 of my own money to help pay for rehab services.At 15, I realized that there were two intuitive components to altruism, one of which I had been missing at the church. 1) I would donate my resources to causes that I felt passionate about, as I believed there was an emotional element that I needed to acknowledge which is hard coded into humans, that we as humans should help those around us, to preserve the lives of others, and further our collective existence. 2) I wanted to give to causes that I knew my money would go to good use for. I started to question how my money had been used at church, and where my money was going when I donated to charitable causes such as WorldVision, and the Salvation Army. I needed transparency. I needed to know that the time and energy I had put into earning my funds, was saving lives equally if not, better than I could if I used my time to save lives directly.Flash forward to July of 2022, I sat in the Berkeley WeWork astounded that I had just talked to someone who shared the same passion that I had to make the world a better place, because of their love for humanity, as well as their desire to use their time effectively, and ask questions about why people did the things they did and acted the way they do. I soon stumble...]]>
sophiathephirst https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:04 None full 4780
p5v8KH5uxvxsryz55_NL_EA_EA EA - The number of burner accounts is too damn high by quinn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The number of burner accounts is too damn high, published by quinn on February 7, 2023 on The Effective Altruism Forum.I'd like to take a moment to mourn what the discourse doesn't have.It's unfortunate that we don't trust eachother.Buck's comment.There will be no enumeration by me right now (you're encouraged to try in the comments) of the vastly different types of anonymous forum participation. The variance in reasons people have for not committing posts and comments is broad, and I would miss at least one.Separately, I'd like to take a moment to mourn the fact that this short note about movement drama can be expected to generate more comments than my effortposts about my actual work can hope to get.But I think it's important to point out, for anyone who hasn't noticed yet, that the presence of burner accounts is a signal we're failing at something.Think of how much more this excellent comment of Linch's would have meant if the OP was out and proud.I would like to say that I feel like a coward when I hold my tongue for reputational considerations, without anyone who's utilized a burner account hearing me and responding with "so you're saying I'm a coward". There are too many reasons out there for people to partake in burner accounts for me to say that.I'm normally deeply sympathetic to romantic discussions of the ancient internet values, in which anonymity was a weapon against the biases of status and demographic. I usually lament the identityfication of the internet that comes up around the time of facebook. But there is a grave race to the bottom of integrity standards when we tolerate infringements on anyone's ability - or indeed their inclination - to tell the truth as they see it and own the consequences of standing up and saying it.I'm much more saying "if burner account users are correctly or rationally responding to the environment (with respect to whatever risk tolerance they have), then that's a signal to fix the environment" than I am saying "burner account users are not correct or rational". But I think at the margin, some of the burnerified comments I've seen have crossed the line into, I say as I resist a perceptible urge to say behind a burner account, actual cowardice.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
quinn https://forum.effectivealtruism.org/posts/p5v8KH5uxvxsryz55/the-number-of-burner-accounts-is-too-damn-high Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The number of burner accounts is too damn high, published by quinn on February 7, 2023 on The Effective Altruism Forum.I'd like to take a moment to mourn what the discourse doesn't have.It's unfortunate that we don't trust eachother.Buck's comment.There will be no enumeration by me right now (you're encouraged to try in the comments) of the vastly different types of anonymous forum participation. The variance in reasons people have for not committing posts and comments is broad, and I would miss at least one.Separately, I'd like to take a moment to mourn the fact that this short note about movement drama can be expected to generate more comments than my effortposts about my actual work can hope to get.But I think it's important to point out, for anyone who hasn't noticed yet, that the presence of burner accounts is a signal we're failing at something.Think of how much more this excellent comment of Linch's would have meant if the OP was out and proud.I would like to say that I feel like a coward when I hold my tongue for reputational considerations, without anyone who's utilized a burner account hearing me and responding with "so you're saying I'm a coward". There are too many reasons out there for people to partake in burner accounts for me to say that.I'm normally deeply sympathetic to romantic discussions of the ancient internet values, in which anonymity was a weapon against the biases of status and demographic. I usually lament the identityfication of the internet that comes up around the time of facebook. But there is a grave race to the bottom of integrity standards when we tolerate infringements on anyone's ability - or indeed their inclination - to tell the truth as they see it and own the consequences of standing up and saying it.I'm much more saying "if burner account users are correctly or rationally responding to the environment (with respect to whatever risk tolerance they have), then that's a signal to fix the environment" than I am saying "burner account users are not correct or rational". But I think at the margin, some of the burnerified comments I've seen have crossed the line into, I say as I resist a perceptible urge to say behind a burner account, actual cowardice.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 07 Feb 2023 08:33:12 +0000 EA - The number of burner accounts is too damn high by quinn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The number of burner accounts is too damn high, published by quinn on February 7, 2023 on The Effective Altruism Forum.I'd like to take a moment to mourn what the discourse doesn't have.It's unfortunate that we don't trust eachother.Buck's comment.There will be no enumeration by me right now (you're encouraged to try in the comments) of the vastly different types of anonymous forum participation. The variance in reasons people have for not committing posts and comments is broad, and I would miss at least one.Separately, I'd like to take a moment to mourn the fact that this short note about movement drama can be expected to generate more comments than my effortposts about my actual work can hope to get.But I think it's important to point out, for anyone who hasn't noticed yet, that the presence of burner accounts is a signal we're failing at something.Think of how much more this excellent comment of Linch's would have meant if the OP was out and proud.I would like to say that I feel like a coward when I hold my tongue for reputational considerations, without anyone who's utilized a burner account hearing me and responding with "so you're saying I'm a coward". There are too many reasons out there for people to partake in burner accounts for me to say that.I'm normally deeply sympathetic to romantic discussions of the ancient internet values, in which anonymity was a weapon against the biases of status and demographic. I usually lament the identityfication of the internet that comes up around the time of facebook. But there is a grave race to the bottom of integrity standards when we tolerate infringements on anyone's ability - or indeed their inclination - to tell the truth as they see it and own the consequences of standing up and saying it.I'm much more saying "if burner account users are correctly or rationally responding to the environment (with respect to whatever risk tolerance they have), then that's a signal to fix the environment" than I am saying "burner account users are not correct or rational". But I think at the margin, some of the burnerified comments I've seen have crossed the line into, I say as I resist a perceptible urge to say behind a burner account, actual cowardice.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The number of burner accounts is too damn high, published by quinn on February 7, 2023 on The Effective Altruism Forum.I'd like to take a moment to mourn what the discourse doesn't have.It's unfortunate that we don't trust eachother.Buck's comment.There will be no enumeration by me right now (you're encouraged to try in the comments) of the vastly different types of anonymous forum participation. The variance in reasons people have for not committing posts and comments is broad, and I would miss at least one.Separately, I'd like to take a moment to mourn the fact that this short note about movement drama can be expected to generate more comments than my effortposts about my actual work can hope to get.But I think it's important to point out, for anyone who hasn't noticed yet, that the presence of burner accounts is a signal we're failing at something.Think of how much more this excellent comment of Linch's would have meant if the OP was out and proud.I would like to say that I feel like a coward when I hold my tongue for reputational considerations, without anyone who's utilized a burner account hearing me and responding with "so you're saying I'm a coward". There are too many reasons out there for people to partake in burner accounts for me to say that.I'm normally deeply sympathetic to romantic discussions of the ancient internet values, in which anonymity was a weapon against the biases of status and demographic. I usually lament the identityfication of the internet that comes up around the time of facebook. But there is a grave race to the bottom of integrity standards when we tolerate infringements on anyone's ability - or indeed their inclination - to tell the truth as they see it and own the consequences of standing up and saying it.I'm much more saying "if burner account users are correctly or rationally responding to the environment (with respect to whatever risk tolerance they have), then that's a signal to fix the environment" than I am saying "burner account users are not correct or rational". But I think at the margin, some of the burnerified comments I've seen have crossed the line into, I say as I resist a perceptible urge to say behind a burner account, actual cowardice.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
quinn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:07 None full 4772
u34ufDg7fnETkaB2r_NL_EA_EA EA - Have there been any detailed, cross-cultural surveys into global moral priorities? by Amber Dawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have there been any detailed, cross-cultural surveys into global moral priorities?, published by Amber Dawn on February 6, 2023 on The Effective Altruism Forum.Has there been any research done (either within or outside of EA) about most people’s moral priorities, and/or about the priorities of recipients of philanthropy? I’m thinking of things like, surveys of large groups of people across many cultures which asked them ‘is it more important to be healthy, or wealthy, or have more choices, or prevent risks that might hurt your grandchildren?’What motivates this question is something like: there’s been a lot of talk about democratizing EA. But even if more EA community members had input into funding decisions, that’s still not really democratic. I want to know: what does the average person worldwide think that wealthy philanthropists should do with their money?GiveWell commissioned some research into the preferences of people in some low income communities, similar to the beneficiaries of many of their top charities. However, they only asked about whether they valued saving the lives of younger vs older people, and how much they valued saving years of life vs increasing income. It would be interesting to read more holistic surveys that asked about other things that people might value, including things that charities might not straightforwardly be able to provide (like more political participation, or less oppression). (You could use as a basis, for example, the capability approach, or Spencer Greenberg’s work on intrinsic values.)This might be useful for longtermists as well as those who focus on global health and poverty in the nearer term. You could ask people how much they value risk mitigation vs increases in wellbeing, for example; or you could use people’s answers to try to shape a future that fits more people’s values.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Amber Dawn https://forum.effectivealtruism.org/posts/u34ufDg7fnETkaB2r/have-there-been-any-detailed-cross-cultural-surveys-into Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have there been any detailed, cross-cultural surveys into global moral priorities?, published by Amber Dawn on February 6, 2023 on The Effective Altruism Forum.Has there been any research done (either within or outside of EA) about most people’s moral priorities, and/or about the priorities of recipients of philanthropy? I’m thinking of things like, surveys of large groups of people across many cultures which asked them ‘is it more important to be healthy, or wealthy, or have more choices, or prevent risks that might hurt your grandchildren?’What motivates this question is something like: there’s been a lot of talk about democratizing EA. But even if more EA community members had input into funding decisions, that’s still not really democratic. I want to know: what does the average person worldwide think that wealthy philanthropists should do with their money?GiveWell commissioned some research into the preferences of people in some low income communities, similar to the beneficiaries of many of their top charities. However, they only asked about whether they valued saving the lives of younger vs older people, and how much they valued saving years of life vs increasing income. It would be interesting to read more holistic surveys that asked about other things that people might value, including things that charities might not straightforwardly be able to provide (like more political participation, or less oppression). (You could use as a basis, for example, the capability approach, or Spencer Greenberg’s work on intrinsic values.)This might be useful for longtermists as well as those who focus on global health and poverty in the nearer term. You could ask people how much they value risk mitigation vs increases in wellbeing, for example; or you could use people’s answers to try to shape a future that fits more people’s values.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 07 Feb 2023 01:31:59 +0000 EA - Have there been any detailed, cross-cultural surveys into global moral priorities? by Amber Dawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have there been any detailed, cross-cultural surveys into global moral priorities?, published by Amber Dawn on February 6, 2023 on The Effective Altruism Forum.Has there been any research done (either within or outside of EA) about most people’s moral priorities, and/or about the priorities of recipients of philanthropy? I’m thinking of things like, surveys of large groups of people across many cultures which asked them ‘is it more important to be healthy, or wealthy, or have more choices, or prevent risks that might hurt your grandchildren?’What motivates this question is something like: there’s been a lot of talk about democratizing EA. But even if more EA community members had input into funding decisions, that’s still not really democratic. I want to know: what does the average person worldwide think that wealthy philanthropists should do with their money?GiveWell commissioned some research into the preferences of people in some low income communities, similar to the beneficiaries of many of their top charities. However, they only asked about whether they valued saving the lives of younger vs older people, and how much they valued saving years of life vs increasing income. It would be interesting to read more holistic surveys that asked about other things that people might value, including things that charities might not straightforwardly be able to provide (like more political participation, or less oppression). (You could use as a basis, for example, the capability approach, or Spencer Greenberg’s work on intrinsic values.)This might be useful for longtermists as well as those who focus on global health and poverty in the nearer term. You could ask people how much they value risk mitigation vs increases in wellbeing, for example; or you could use people’s answers to try to shape a future that fits more people’s values.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have there been any detailed, cross-cultural surveys into global moral priorities?, published by Amber Dawn on February 6, 2023 on The Effective Altruism Forum.Has there been any research done (either within or outside of EA) about most people’s moral priorities, and/or about the priorities of recipients of philanthropy? I’m thinking of things like, surveys of large groups of people across many cultures which asked them ‘is it more important to be healthy, or wealthy, or have more choices, or prevent risks that might hurt your grandchildren?’What motivates this question is something like: there’s been a lot of talk about democratizing EA. But even if more EA community members had input into funding decisions, that’s still not really democratic. I want to know: what does the average person worldwide think that wealthy philanthropists should do with their money?GiveWell commissioned some research into the preferences of people in some low income communities, similar to the beneficiaries of many of their top charities. However, they only asked about whether they valued saving the lives of younger vs older people, and how much they valued saving years of life vs increasing income. It would be interesting to read more holistic surveys that asked about other things that people might value, including things that charities might not straightforwardly be able to provide (like more political participation, or less oppression). (You could use as a basis, for example, the capability approach, or Spencer Greenberg’s work on intrinsic values.)This might be useful for longtermists as well as those who focus on global health and poverty in the nearer term. You could ask people how much they value risk mitigation vs increases in wellbeing, for example; or you could use people’s answers to try to shape a future that fits more people’s values.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Amber Dawn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:56 None full 4773
4t7xQYSmJF4qarY6Y_NL_EA_EA EA - Project Idea: Lots of Cause-area-specific Online Unconferences by Linda Linsefors Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project Idea: Lots of Cause-area-specific Online Unconferences, published by Linda Linsefors on February 6, 2023 on The Effective Altruism Forum.Here's a project idea I have that's been laying around for some time. I think this would be high impact, but I need help to make it happen. It don't have to be this exact setup (which I describe below) as long as the output are more online unconferences for the EA community.I don't do grant applications anymore, due to past event that I'm still emotionally healing from. However if you want to write and submit an application on my behalf, that would be very welcome.Project description can also be found here: Lots of EA Online Unconferences - Google DocsTotal budget: $40k - 85k.Summary: We need more online events - from my experience running TAISU and AISU, there are many smart people who can’t access existing events because of geographical constraints. I expect this to be true of other cause areas too. I want to hire one person for one year and teach them how to run an online unconference. They’ll run 3-4 events on different EA topics of their choice. If things go well, they secure more funding before the end of the year, and keep going.MotivationOnline unconferences are low cost and relatively low effort to run, and are also really great. I think this type of event has the highest cost to value ratio out of any event type that I know of.Why more events?EA is growing faster than the infrastructure can keep up with. I’m mostly familiar with AI Safety but I expect the situation to be similar in other areas. Every AI Safety event I know about gets a massive number of applications. There is definitely room for more events.Why cause area specific?Generic EA events are great, but when you’ve picked what problem to work on, it’s much more valuable to connect with people who are interested in the same cause area as you. Those are the people you can have high context discussion with. Those are the people you can team up with to get work done.I count “EA Meta” (e.g. ops, events etc) as a cause area for the purpose of this project, because it seems valuable for people who work in this area to meet up and exchange ideas, and find common solutions to common problems.Why unconferences?Unconferences are great in many ways, but they are especially great in a rapidly growing movement, where the people at the top, can’t keep up with or even keep track of the need for infrastructure. Unconferences are self organised around the needs of interests of the participants. This makes them more adaptive but also lower effort to run relatively to other types of events.I’m not saying that unconferences are a one-size-fits-all type of event. There is definitely room for more specialised events too. But everything equal, I think EA would be better off with more unconferences.Why online?Online and offline have different pros and cons.One big advantage with online events is that they are much more accessible. This is important! Even if we have lots of in person events, it’s worth having online events too. Accessibility helps in two ways. Firstly there are people who can’t attend otherwise for various reasons. Just paying for their travel may not help since the barrier might be time or visa. Secondly, it lowers the cost (in time and money) to attend for everyone. People who would not be willing to pay the time cost for an entire in-person event often choose to join in for just a few sessions of an online event, benefiting both them and everyone else. Allowing people to pick and choose among sessions, rather than commit to a full event, opens up for more specialised sessions.The other main advantage of online events is that they are almost infinitely scalable, much less work to organise, and much lower organising costs. A single person can d...]]>
Linda Linsefors https://forum.effectivealtruism.org/posts/4t7xQYSmJF4qarY6Y/project-idea-lots-of-cause-area-specific-online Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project Idea: Lots of Cause-area-specific Online Unconferences, published by Linda Linsefors on February 6, 2023 on The Effective Altruism Forum.Here's a project idea I have that's been laying around for some time. I think this would be high impact, but I need help to make it happen. It don't have to be this exact setup (which I describe below) as long as the output are more online unconferences for the EA community.I don't do grant applications anymore, due to past event that I'm still emotionally healing from. However if you want to write and submit an application on my behalf, that would be very welcome.Project description can also be found here: Lots of EA Online Unconferences - Google DocsTotal budget: $40k - 85k.Summary: We need more online events - from my experience running TAISU and AISU, there are many smart people who can’t access existing events because of geographical constraints. I expect this to be true of other cause areas too. I want to hire one person for one year and teach them how to run an online unconference. They’ll run 3-4 events on different EA topics of their choice. If things go well, they secure more funding before the end of the year, and keep going.MotivationOnline unconferences are low cost and relatively low effort to run, and are also really great. I think this type of event has the highest cost to value ratio out of any event type that I know of.Why more events?EA is growing faster than the infrastructure can keep up with. I’m mostly familiar with AI Safety but I expect the situation to be similar in other areas. Every AI Safety event I know about gets a massive number of applications. There is definitely room for more events.Why cause area specific?Generic EA events are great, but when you’ve picked what problem to work on, it’s much more valuable to connect with people who are interested in the same cause area as you. Those are the people you can have high context discussion with. Those are the people you can team up with to get work done.I count “EA Meta” (e.g. ops, events etc) as a cause area for the purpose of this project, because it seems valuable for people who work in this area to meet up and exchange ideas, and find common solutions to common problems.Why unconferences?Unconferences are great in many ways, but they are especially great in a rapidly growing movement, where the people at the top, can’t keep up with or even keep track of the need for infrastructure. Unconferences are self organised around the needs of interests of the participants. This makes them more adaptive but also lower effort to run relatively to other types of events.I’m not saying that unconferences are a one-size-fits-all type of event. There is definitely room for more specialised events too. But everything equal, I think EA would be better off with more unconferences.Why online?Online and offline have different pros and cons.One big advantage with online events is that they are much more accessible. This is important! Even if we have lots of in person events, it’s worth having online events too. Accessibility helps in two ways. Firstly there are people who can’t attend otherwise for various reasons. Just paying for their travel may not help since the barrier might be time or visa. Secondly, it lowers the cost (in time and money) to attend for everyone. People who would not be willing to pay the time cost for an entire in-person event often choose to join in for just a few sessions of an online event, benefiting both them and everyone else. Allowing people to pick and choose among sessions, rather than commit to a full event, opens up for more specialised sessions.The other main advantage of online events is that they are almost infinitely scalable, much less work to organise, and much lower organising costs. A single person can d...]]>
Mon, 06 Feb 2023 22:23:52 +0000 EA - Project Idea: Lots of Cause-area-specific Online Unconferences by Linda Linsefors Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project Idea: Lots of Cause-area-specific Online Unconferences, published by Linda Linsefors on February 6, 2023 on The Effective Altruism Forum.Here's a project idea I have that's been laying around for some time. I think this would be high impact, but I need help to make it happen. It don't have to be this exact setup (which I describe below) as long as the output are more online unconferences for the EA community.I don't do grant applications anymore, due to past event that I'm still emotionally healing from. However if you want to write and submit an application on my behalf, that would be very welcome.Project description can also be found here: Lots of EA Online Unconferences - Google DocsTotal budget: $40k - 85k.Summary: We need more online events - from my experience running TAISU and AISU, there are many smart people who can’t access existing events because of geographical constraints. I expect this to be true of other cause areas too. I want to hire one person for one year and teach them how to run an online unconference. They’ll run 3-4 events on different EA topics of their choice. If things go well, they secure more funding before the end of the year, and keep going.MotivationOnline unconferences are low cost and relatively low effort to run, and are also really great. I think this type of event has the highest cost to value ratio out of any event type that I know of.Why more events?EA is growing faster than the infrastructure can keep up with. I’m mostly familiar with AI Safety but I expect the situation to be similar in other areas. Every AI Safety event I know about gets a massive number of applications. There is definitely room for more events.Why cause area specific?Generic EA events are great, but when you’ve picked what problem to work on, it’s much more valuable to connect with people who are interested in the same cause area as you. Those are the people you can have high context discussion with. Those are the people you can team up with to get work done.I count “EA Meta” (e.g. ops, events etc) as a cause area for the purpose of this project, because it seems valuable for people who work in this area to meet up and exchange ideas, and find common solutions to common problems.Why unconferences?Unconferences are great in many ways, but they are especially great in a rapidly growing movement, where the people at the top, can’t keep up with or even keep track of the need for infrastructure. Unconferences are self organised around the needs of interests of the participants. This makes them more adaptive but also lower effort to run relatively to other types of events.I’m not saying that unconferences are a one-size-fits-all type of event. There is definitely room for more specialised events too. But everything equal, I think EA would be better off with more unconferences.Why online?Online and offline have different pros and cons.One big advantage with online events is that they are much more accessible. This is important! Even if we have lots of in person events, it’s worth having online events too. Accessibility helps in two ways. Firstly there are people who can’t attend otherwise for various reasons. Just paying for their travel may not help since the barrier might be time or visa. Secondly, it lowers the cost (in time and money) to attend for everyone. People who would not be willing to pay the time cost for an entire in-person event often choose to join in for just a few sessions of an online event, benefiting both them and everyone else. Allowing people to pick and choose among sessions, rather than commit to a full event, opens up for more specialised sessions.The other main advantage of online events is that they are almost infinitely scalable, much less work to organise, and much lower organising costs. A single person can d...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project Idea: Lots of Cause-area-specific Online Unconferences, published by Linda Linsefors on February 6, 2023 on The Effective Altruism Forum.Here's a project idea I have that's been laying around for some time. I think this would be high impact, but I need help to make it happen. It don't have to be this exact setup (which I describe below) as long as the output are more online unconferences for the EA community.I don't do grant applications anymore, due to past event that I'm still emotionally healing from. However if you want to write and submit an application on my behalf, that would be very welcome.Project description can also be found here: Lots of EA Online Unconferences - Google DocsTotal budget: $40k - 85k.Summary: We need more online events - from my experience running TAISU and AISU, there are many smart people who can’t access existing events because of geographical constraints. I expect this to be true of other cause areas too. I want to hire one person for one year and teach them how to run an online unconference. They’ll run 3-4 events on different EA topics of their choice. If things go well, they secure more funding before the end of the year, and keep going.MotivationOnline unconferences are low cost and relatively low effort to run, and are also really great. I think this type of event has the highest cost to value ratio out of any event type that I know of.Why more events?EA is growing faster than the infrastructure can keep up with. I’m mostly familiar with AI Safety but I expect the situation to be similar in other areas. Every AI Safety event I know about gets a massive number of applications. There is definitely room for more events.Why cause area specific?Generic EA events are great, but when you’ve picked what problem to work on, it’s much more valuable to connect with people who are interested in the same cause area as you. Those are the people you can have high context discussion with. Those are the people you can team up with to get work done.I count “EA Meta” (e.g. ops, events etc) as a cause area for the purpose of this project, because it seems valuable for people who work in this area to meet up and exchange ideas, and find common solutions to common problems.Why unconferences?Unconferences are great in many ways, but they are especially great in a rapidly growing movement, where the people at the top, can’t keep up with or even keep track of the need for infrastructure. Unconferences are self organised around the needs of interests of the participants. This makes them more adaptive but also lower effort to run relatively to other types of events.I’m not saying that unconferences are a one-size-fits-all type of event. There is definitely room for more specialised events too. But everything equal, I think EA would be better off with more unconferences.Why online?Online and offline have different pros and cons.One big advantage with online events is that they are much more accessible. This is important! Even if we have lots of in person events, it’s worth having online events too. Accessibility helps in two ways. Firstly there are people who can’t attend otherwise for various reasons. Just paying for their travel may not help since the barrier might be time or visa. Secondly, it lowers the cost (in time and money) to attend for everyone. People who would not be willing to pay the time cost for an entire in-person event often choose to join in for just a few sessions of an online event, benefiting both them and everyone else. Allowing people to pick and choose among sessions, rather than commit to a full event, opens up for more specialised sessions.The other main advantage of online events is that they are almost infinitely scalable, much less work to organise, and much lower organising costs. A single person can d...]]>
Linda Linsefors https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:44 None full 4762
wvBfYnNeRvfEXvezP_NL_EA_EA EA - Moving community discussion to a separate tab (a test we might run) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving community discussion to a separate tab (a test we might run), published by Lizka on February 6, 2023 on The Effective Altruism Forum.TL;DR: we (the Forum team) might make a pretty significant change on Frontpage for a month as a test. We’re sharing the outline of the proposal to get feedback from the community. The change we’ll test is hiding “Community” posts from the Frontpage for everybody by default, and moving those posts to a prominent tab.Where we are right now:We’re probably going to run the test, although we might not depending on the feedback we get as a result of this post and we might significantly change our approach. After the test, it’s pretty unclear to us whether we’ll keep the change — and we might want to extend the test period — but we think that we will probably want to keep something like the change after the month goes by.We will review the feedback we get, and if we decide to go for it, we’ll likely start the test within a week or two.Summary of the change we’d test: moving community discussion to a separate tabFor the test, we’d move all “Community” posts off the top section of the Frontpage for everyone by default (by setting the default topic filter for “Community” to “hidden,” meaning that people will be able to opt out of the change). We will probably also remove “Community” posts from Recent Discussion on the Frontpage (you will be able to change this back to the default in your user settings).Then we’ll probably either add a tab on the Frontpage or a section lower down on the Frontpage, which would feature a limited number of “Community” posts. You could then load more articles or visit the tab.There’s a chance that we’ll tweak our classification of what we mean by “Community” posts, but currently, it’s something like what’s described in this footnote.Here are two mockups of the layouts we’re considering testing (if we go for one of these, we’ll probably continue testing and improving it):A “Community” tabA section for “Community” posts on the FrontpageWhy consider doing this at all?We’re pretty worried about the Forum voting structure driving more engagement to some topics systematically (without the community endorsing this prioritization), and related issues.Some topics and discussions tend to get a lot more Forum karma and attention than others, as Ben and Lizka outlined in this post. These tend to be:About the community (because it interests almost everyone at least a little bit)Accessible to everyone, or on topics where everyone has an opinionThis phenomenon means that posts that interest a smaller fraction of the community — even if they interest their target readers a lot more and are more useful to their audience (and even if you consider that some other posts’ audiences are larger), will get a lot less attention than might be helpful.We want to do something to address this imbalance, and we think that separating out “Community” posts might get us a lot of the benefit while still letting important conversations happen. (We checked a lot of posts we thought were getting extra engagement via this effect; almost all were “Community” posts.)Note that this mechanism “overvalues” “Community” posts whether or not you think that “Community” posts are currently over-valued by the average Forum user (the Forum team has mixed opinions on this); given whatever the community actually cares about, things that are easier to discuss or think about will get more engagement. (See more in this comment.)We also want the Frontpage to feature different kinds of content that’s particularly interesting to different groups, instead of being full of posts that interest the majority of Forum users to at least some extent (in particular, posts about the EA community). This is how we get cross-pollination between different cause ...]]>
Lizka https://forum.effectivealtruism.org/posts/wvBfYnNeRvfEXvezP/moving-community-discussion-to-a-separate-tab-a-test-we Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving community discussion to a separate tab (a test we might run), published by Lizka on February 6, 2023 on The Effective Altruism Forum.TL;DR: we (the Forum team) might make a pretty significant change on Frontpage for a month as a test. We’re sharing the outline of the proposal to get feedback from the community. The change we’ll test is hiding “Community” posts from the Frontpage for everybody by default, and moving those posts to a prominent tab.Where we are right now:We’re probably going to run the test, although we might not depending on the feedback we get as a result of this post and we might significantly change our approach. After the test, it’s pretty unclear to us whether we’ll keep the change — and we might want to extend the test period — but we think that we will probably want to keep something like the change after the month goes by.We will review the feedback we get, and if we decide to go for it, we’ll likely start the test within a week or two.Summary of the change we’d test: moving community discussion to a separate tabFor the test, we’d move all “Community” posts off the top section of the Frontpage for everyone by default (by setting the default topic filter for “Community” to “hidden,” meaning that people will be able to opt out of the change). We will probably also remove “Community” posts from Recent Discussion on the Frontpage (you will be able to change this back to the default in your user settings).Then we’ll probably either add a tab on the Frontpage or a section lower down on the Frontpage, which would feature a limited number of “Community” posts. You could then load more articles or visit the tab.There’s a chance that we’ll tweak our classification of what we mean by “Community” posts, but currently, it’s something like what’s described in this footnote.Here are two mockups of the layouts we’re considering testing (if we go for one of these, we’ll probably continue testing and improving it):A “Community” tabA section for “Community” posts on the FrontpageWhy consider doing this at all?We’re pretty worried about the Forum voting structure driving more engagement to some topics systematically (without the community endorsing this prioritization), and related issues.Some topics and discussions tend to get a lot more Forum karma and attention than others, as Ben and Lizka outlined in this post. These tend to be:About the community (because it interests almost everyone at least a little bit)Accessible to everyone, or on topics where everyone has an opinionThis phenomenon means that posts that interest a smaller fraction of the community — even if they interest their target readers a lot more and are more useful to their audience (and even if you consider that some other posts’ audiences are larger), will get a lot less attention than might be helpful.We want to do something to address this imbalance, and we think that separating out “Community” posts might get us a lot of the benefit while still letting important conversations happen. (We checked a lot of posts we thought were getting extra engagement via this effect; almost all were “Community” posts.)Note that this mechanism “overvalues” “Community” posts whether or not you think that “Community” posts are currently over-valued by the average Forum user (the Forum team has mixed opinions on this); given whatever the community actually cares about, things that are easier to discuss or think about will get more engagement. (See more in this comment.)We also want the Frontpage to feature different kinds of content that’s particularly interesting to different groups, instead of being full of posts that interest the majority of Forum users to at least some extent (in particular, posts about the EA community). This is how we get cross-pollination between different cause ...]]>
Mon, 06 Feb 2023 21:56:20 +0000 EA - Moving community discussion to a separate tab (a test we might run) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving community discussion to a separate tab (a test we might run), published by Lizka on February 6, 2023 on The Effective Altruism Forum.TL;DR: we (the Forum team) might make a pretty significant change on Frontpage for a month as a test. We’re sharing the outline of the proposal to get feedback from the community. The change we’ll test is hiding “Community” posts from the Frontpage for everybody by default, and moving those posts to a prominent tab.Where we are right now:We’re probably going to run the test, although we might not depending on the feedback we get as a result of this post and we might significantly change our approach. After the test, it’s pretty unclear to us whether we’ll keep the change — and we might want to extend the test period — but we think that we will probably want to keep something like the change after the month goes by.We will review the feedback we get, and if we decide to go for it, we’ll likely start the test within a week or two.Summary of the change we’d test: moving community discussion to a separate tabFor the test, we’d move all “Community” posts off the top section of the Frontpage for everyone by default (by setting the default topic filter for “Community” to “hidden,” meaning that people will be able to opt out of the change). We will probably also remove “Community” posts from Recent Discussion on the Frontpage (you will be able to change this back to the default in your user settings).Then we’ll probably either add a tab on the Frontpage or a section lower down on the Frontpage, which would feature a limited number of “Community” posts. You could then load more articles or visit the tab.There’s a chance that we’ll tweak our classification of what we mean by “Community” posts, but currently, it’s something like what’s described in this footnote.Here are two mockups of the layouts we’re considering testing (if we go for one of these, we’ll probably continue testing and improving it):A “Community” tabA section for “Community” posts on the FrontpageWhy consider doing this at all?We’re pretty worried about the Forum voting structure driving more engagement to some topics systematically (without the community endorsing this prioritization), and related issues.Some topics and discussions tend to get a lot more Forum karma and attention than others, as Ben and Lizka outlined in this post. These tend to be:About the community (because it interests almost everyone at least a little bit)Accessible to everyone, or on topics where everyone has an opinionThis phenomenon means that posts that interest a smaller fraction of the community — even if they interest their target readers a lot more and are more useful to their audience (and even if you consider that some other posts’ audiences are larger), will get a lot less attention than might be helpful.We want to do something to address this imbalance, and we think that separating out “Community” posts might get us a lot of the benefit while still letting important conversations happen. (We checked a lot of posts we thought were getting extra engagement via this effect; almost all were “Community” posts.)Note that this mechanism “overvalues” “Community” posts whether or not you think that “Community” posts are currently over-valued by the average Forum user (the Forum team has mixed opinions on this); given whatever the community actually cares about, things that are easier to discuss or think about will get more engagement. (See more in this comment.)We also want the Frontpage to feature different kinds of content that’s particularly interesting to different groups, instead of being full of posts that interest the majority of Forum users to at least some extent (in particular, posts about the EA community). This is how we get cross-pollination between different cause ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving community discussion to a separate tab (a test we might run), published by Lizka on February 6, 2023 on The Effective Altruism Forum.TL;DR: we (the Forum team) might make a pretty significant change on Frontpage for a month as a test. We’re sharing the outline of the proposal to get feedback from the community. The change we’ll test is hiding “Community” posts from the Frontpage for everybody by default, and moving those posts to a prominent tab.Where we are right now:We’re probably going to run the test, although we might not depending on the feedback we get as a result of this post and we might significantly change our approach. After the test, it’s pretty unclear to us whether we’ll keep the change — and we might want to extend the test period — but we think that we will probably want to keep something like the change after the month goes by.We will review the feedback we get, and if we decide to go for it, we’ll likely start the test within a week or two.Summary of the change we’d test: moving community discussion to a separate tabFor the test, we’d move all “Community” posts off the top section of the Frontpage for everyone by default (by setting the default topic filter for “Community” to “hidden,” meaning that people will be able to opt out of the change). We will probably also remove “Community” posts from Recent Discussion on the Frontpage (you will be able to change this back to the default in your user settings).Then we’ll probably either add a tab on the Frontpage or a section lower down on the Frontpage, which would feature a limited number of “Community” posts. You could then load more articles or visit the tab.There’s a chance that we’ll tweak our classification of what we mean by “Community” posts, but currently, it’s something like what’s described in this footnote.Here are two mockups of the layouts we’re considering testing (if we go for one of these, we’ll probably continue testing and improving it):A “Community” tabA section for “Community” posts on the FrontpageWhy consider doing this at all?We’re pretty worried about the Forum voting structure driving more engagement to some topics systematically (without the community endorsing this prioritization), and related issues.Some topics and discussions tend to get a lot more Forum karma and attention than others, as Ben and Lizka outlined in this post. These tend to be:About the community (because it interests almost everyone at least a little bit)Accessible to everyone, or on topics where everyone has an opinionThis phenomenon means that posts that interest a smaller fraction of the community — even if they interest their target readers a lot more and are more useful to their audience (and even if you consider that some other posts’ audiences are larger), will get a lot less attention than might be helpful.We want to do something to address this imbalance, and we think that separating out “Community” posts might get us a lot of the benefit while still letting important conversations happen. (We checked a lot of posts we thought were getting extra engagement via this effect; almost all were “Community” posts.)Note that this mechanism “overvalues” “Community” posts whether or not you think that “Community” posts are currently over-valued by the average Forum user (the Forum team has mixed opinions on this); given whatever the community actually cares about, things that are easier to discuss or think about will get more engagement. (See more in this comment.)We also want the Frontpage to feature different kinds of content that’s particularly interesting to different groups, instead of being full of posts that interest the majority of Forum users to at least some extent (in particular, posts about the EA community). This is how we get cross-pollination between different cause ...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:30 None full 4757
QdYKFRexDaPeQaQCA_NL_EA_EA EA - Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) and AMA ~48 hours by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours, published by david reinstein on February 6, 2023 on The Effective Altruism Forum.The Unjournal: reporting some progress(Link: our main page, see also our post Unjournal: Call for participants and research.)Our group (curating articles and evaluations) is now live on Sciety HERE.The first evaluated research project (paper) has now been posted HERE .First evaluation: Denkenberger et alOur first evaluation is for "Long Term Cost-Effectiveness of Resilient Foods for Global Catastrophes Compared to Artificial General Intelligence Safety", by David Denkenberger, Anders Sandberg, Ross Tieman, and Joshua M. Pearce, published in the International Journal of Disaster Risk Reduction.These three reports and ratings (see a sample below) come from three experts with (what we believe to be) complementary backgrounds (note, these evaluators agreed to be identified rather than remain anonymous):Alex Bates: An award-winning cost-effectiveness analyst with some background in considering long-term and existential risksScott Janzwood: A political scientist and Research Director at the Cascade InstituteAnca Hanea: A senior researcher and applied probabilist based at the Centre of Excellence for Biosecurity Risk Analysis (CEBRA) at the University of Melbourne. She has done prominent research into eliciting and aggregating (expert) judgments, working with the RepliCATS project.These evaluations were, overall, fairly involved. They engaged with specific details of the paper as well as overall themes, directions, and implications. While they were largely positive about the paper, they did not seem to pull punches. Some examples of their feedback and evaluation below (direct quotes).Extract of evaluation contentBates:I’d be surprised if I ever again read a paper with such potential importance to global priorities.My view is that it would be premature to reallocate funding from AI Risk reduction to resilient food on the basis of this paper alone. I think the paper would have benefitted from more attention being paid to the underlying theory of cost-effectiveness motivating the investigation. Decisions made in places seem to have multiplied uncertainty which could have been resolved with a more consistent approach to analysis.The most serious conceptual issue which I think needs to be resolved before this can happen is to demonstrate that ‘do nothing’ would be less cost-effective than investing $86m in resilient foods, given that the ‘do nothing’ approach would potentially include strong market dynamics leaning towards resilient foods.".Janzwoodthe authors’ cost-effectiveness model, which attempts to decrease uncertainty about the potential uncertainty-reducing and harm/likelihood-reducing 'power' of resilient food R&D and compare it to R&D on AGI safety, is an important contribution"It would have been useful to see a brief discussion of some of these acknowledged epistemic uncertainties (e.g., the impact of resilient foods on public health, immunology, and disease resistance) to emphasize that some epistemic uncertainty could be reduced by exactly the kind of resilient food R&D they are advocating for.Hanea:The structure of the models is not discussed. How did [they] decide that this is a robust structure (no sensitivity to structure performed as far as I understood)"It is unclear if the compiled data sets are compatible. I think the quantification of the model should be documented better or in a more compact way."The authors also responded in detail. Some excerpts:The evaluations provided well thought out and constructively critical analysis of the work, pointing out several assumptions which could impact findings of the paper while also recognizing the value of the work in spite of s...]]>
david reinstein https://forum.effectivealtruism.org/posts/QdYKFRexDaPeQaQCA/unjournal-s-1st-eval-is-up-resilient-foods-paper Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours, published by david reinstein on February 6, 2023 on The Effective Altruism Forum.The Unjournal: reporting some progress(Link: our main page, see also our post Unjournal: Call for participants and research.)Our group (curating articles and evaluations) is now live on Sciety HERE.The first evaluated research project (paper) has now been posted HERE .First evaluation: Denkenberger et alOur first evaluation is for "Long Term Cost-Effectiveness of Resilient Foods for Global Catastrophes Compared to Artificial General Intelligence Safety", by David Denkenberger, Anders Sandberg, Ross Tieman, and Joshua M. Pearce, published in the International Journal of Disaster Risk Reduction.These three reports and ratings (see a sample below) come from three experts with (what we believe to be) complementary backgrounds (note, these evaluators agreed to be identified rather than remain anonymous):Alex Bates: An award-winning cost-effectiveness analyst with some background in considering long-term and existential risksScott Janzwood: A political scientist and Research Director at the Cascade InstituteAnca Hanea: A senior researcher and applied probabilist based at the Centre of Excellence for Biosecurity Risk Analysis (CEBRA) at the University of Melbourne. She has done prominent research into eliciting and aggregating (expert) judgments, working with the RepliCATS project.These evaluations were, overall, fairly involved. They engaged with specific details of the paper as well as overall themes, directions, and implications. While they were largely positive about the paper, they did not seem to pull punches. Some examples of their feedback and evaluation below (direct quotes).Extract of evaluation contentBates:I’d be surprised if I ever again read a paper with such potential importance to global priorities.My view is that it would be premature to reallocate funding from AI Risk reduction to resilient food on the basis of this paper alone. I think the paper would have benefitted from more attention being paid to the underlying theory of cost-effectiveness motivating the investigation. Decisions made in places seem to have multiplied uncertainty which could have been resolved with a more consistent approach to analysis.The most serious conceptual issue which I think needs to be resolved before this can happen is to demonstrate that ‘do nothing’ would be less cost-effective than investing $86m in resilient foods, given that the ‘do nothing’ approach would potentially include strong market dynamics leaning towards resilient foods.".Janzwoodthe authors’ cost-effectiveness model, which attempts to decrease uncertainty about the potential uncertainty-reducing and harm/likelihood-reducing 'power' of resilient food R&D and compare it to R&D on AGI safety, is an important contribution"It would have been useful to see a brief discussion of some of these acknowledged epistemic uncertainties (e.g., the impact of resilient foods on public health, immunology, and disease resistance) to emphasize that some epistemic uncertainty could be reduced by exactly the kind of resilient food R&D they are advocating for.Hanea:The structure of the models is not discussed. How did [they] decide that this is a robust structure (no sensitivity to structure performed as far as I understood)"It is unclear if the compiled data sets are compatible. I think the quantification of the model should be documented better or in a more compact way."The authors also responded in detail. Some excerpts:The evaluations provided well thought out and constructively critical analysis of the work, pointing out several assumptions which could impact findings of the paper while also recognizing the value of the work in spite of s...]]>
Mon, 06 Feb 2023 20:33:56 +0000 EA - Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) and AMA ~48 hours by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours, published by david reinstein on February 6, 2023 on The Effective Altruism Forum.The Unjournal: reporting some progress(Link: our main page, see also our post Unjournal: Call for participants and research.)Our group (curating articles and evaluations) is now live on Sciety HERE.The first evaluated research project (paper) has now been posted HERE .First evaluation: Denkenberger et alOur first evaluation is for "Long Term Cost-Effectiveness of Resilient Foods for Global Catastrophes Compared to Artificial General Intelligence Safety", by David Denkenberger, Anders Sandberg, Ross Tieman, and Joshua M. Pearce, published in the International Journal of Disaster Risk Reduction.These three reports and ratings (see a sample below) come from three experts with (what we believe to be) complementary backgrounds (note, these evaluators agreed to be identified rather than remain anonymous):Alex Bates: An award-winning cost-effectiveness analyst with some background in considering long-term and existential risksScott Janzwood: A political scientist and Research Director at the Cascade InstituteAnca Hanea: A senior researcher and applied probabilist based at the Centre of Excellence for Biosecurity Risk Analysis (CEBRA) at the University of Melbourne. She has done prominent research into eliciting and aggregating (expert) judgments, working with the RepliCATS project.These evaluations were, overall, fairly involved. They engaged with specific details of the paper as well as overall themes, directions, and implications. While they were largely positive about the paper, they did not seem to pull punches. Some examples of their feedback and evaluation below (direct quotes).Extract of evaluation contentBates:I’d be surprised if I ever again read a paper with such potential importance to global priorities.My view is that it would be premature to reallocate funding from AI Risk reduction to resilient food on the basis of this paper alone. I think the paper would have benefitted from more attention being paid to the underlying theory of cost-effectiveness motivating the investigation. Decisions made in places seem to have multiplied uncertainty which could have been resolved with a more consistent approach to analysis.The most serious conceptual issue which I think needs to be resolved before this can happen is to demonstrate that ‘do nothing’ would be less cost-effective than investing $86m in resilient foods, given that the ‘do nothing’ approach would potentially include strong market dynamics leaning towards resilient foods.".Janzwoodthe authors’ cost-effectiveness model, which attempts to decrease uncertainty about the potential uncertainty-reducing and harm/likelihood-reducing 'power' of resilient food R&D and compare it to R&D on AGI safety, is an important contribution"It would have been useful to see a brief discussion of some of these acknowledged epistemic uncertainties (e.g., the impact of resilient foods on public health, immunology, and disease resistance) to emphasize that some epistemic uncertainty could be reduced by exactly the kind of resilient food R&D they are advocating for.Hanea:The structure of the models is not discussed. How did [they] decide that this is a robust structure (no sensitivity to structure performed as far as I understood)"It is unclear if the compiled data sets are compatible. I think the quantification of the model should be documented better or in a more compact way."The authors also responded in detail. Some excerpts:The evaluations provided well thought out and constructively critical analysis of the work, pointing out several assumptions which could impact findings of the paper while also recognizing the value of the work in spite of s...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours, published by david reinstein on February 6, 2023 on The Effective Altruism Forum.The Unjournal: reporting some progress(Link: our main page, see also our post Unjournal: Call for participants and research.)Our group (curating articles and evaluations) is now live on Sciety HERE.The first evaluated research project (paper) has now been posted HERE .First evaluation: Denkenberger et alOur first evaluation is for "Long Term Cost-Effectiveness of Resilient Foods for Global Catastrophes Compared to Artificial General Intelligence Safety", by David Denkenberger, Anders Sandberg, Ross Tieman, and Joshua M. Pearce, published in the International Journal of Disaster Risk Reduction.These three reports and ratings (see a sample below) come from three experts with (what we believe to be) complementary backgrounds (note, these evaluators agreed to be identified rather than remain anonymous):Alex Bates: An award-winning cost-effectiveness analyst with some background in considering long-term and existential risksScott Janzwood: A political scientist and Research Director at the Cascade InstituteAnca Hanea: A senior researcher and applied probabilist based at the Centre of Excellence for Biosecurity Risk Analysis (CEBRA) at the University of Melbourne. She has done prominent research into eliciting and aggregating (expert) judgments, working with the RepliCATS project.These evaluations were, overall, fairly involved. They engaged with specific details of the paper as well as overall themes, directions, and implications. While they were largely positive about the paper, they did not seem to pull punches. Some examples of their feedback and evaluation below (direct quotes).Extract of evaluation contentBates:I’d be surprised if I ever again read a paper with such potential importance to global priorities.My view is that it would be premature to reallocate funding from AI Risk reduction to resilient food on the basis of this paper alone. I think the paper would have benefitted from more attention being paid to the underlying theory of cost-effectiveness motivating the investigation. Decisions made in places seem to have multiplied uncertainty which could have been resolved with a more consistent approach to analysis.The most serious conceptual issue which I think needs to be resolved before this can happen is to demonstrate that ‘do nothing’ would be less cost-effective than investing $86m in resilient foods, given that the ‘do nothing’ approach would potentially include strong market dynamics leaning towards resilient foods.".Janzwoodthe authors’ cost-effectiveness model, which attempts to decrease uncertainty about the potential uncertainty-reducing and harm/likelihood-reducing 'power' of resilient food R&D and compare it to R&D on AGI safety, is an important contribution"It would have been useful to see a brief discussion of some of these acknowledged epistemic uncertainties (e.g., the impact of resilient foods on public health, immunology, and disease resistance) to emphasize that some epistemic uncertainty could be reduced by exactly the kind of resilient food R&D they are advocating for.Hanea:The structure of the models is not discussed. How did [they] decide that this is a robust structure (no sensitivity to structure performed as far as I understood)"It is unclear if the compiled data sets are compatible. I think the quantification of the model should be documented better or in a more compact way."The authors also responded in detail. Some excerpts:The evaluations provided well thought out and constructively critical analysis of the work, pointing out several assumptions which could impact findings of the paper while also recognizing the value of the work in spite of s...]]>
david reinstein https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:02 None full 4758
2y9eSkMAkdPQeXWMf_NL_EA_EA EA - Podcast with Oli Habryka on LessWrong / Lightcone Infrastructure by DanielFilan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Podcast with Oli Habryka on LessWrong / Lightcone Infrastructure, published by DanielFilan on February 6, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
DanielFilan https://forum.effectivealtruism.org/posts/2y9eSkMAkdPQeXWMf/podcast-with-oli-habryka-on-lesswrong-lightcone Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Podcast with Oli Habryka on LessWrong / Lightcone Infrastructure, published by DanielFilan on February 6, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 06 Feb 2023 18:52:17 +0000 EA - Podcast with Oli Habryka on LessWrong / Lightcone Infrastructure by DanielFilan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Podcast with Oli Habryka on LessWrong / Lightcone Infrastructure, published by DanielFilan on February 6, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Podcast with Oli Habryka on LessWrong / Lightcone Infrastructure, published by DanielFilan on February 6, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
DanielFilan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:29 None full 4760
tmhCnrMKn8Diqjrgk_NL_EA_EA EA - Shallow investigation: Loneliness by Em Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow investigation: Loneliness, published by Em on February 6, 2023 on The Effective Altruism Forum.This is a shallow report examining loneliness.In a nutshellImportance: 4-4.5/5 Unfortunately, it is very common to experience loneliness and particularly so later in life, so the importance of loneliness as a cause area seems high. There is evidence that loneliness impacts a lot of health and economic domains. More worryingly, there is evidence of a ‘vicious cycle’ between health and loneliness such that lonely people become sicker and sicker people tend to have fewer opportunities to find social contact. Some evidence suggests more people may become lonelier over time.Tractability: 2/5 Current loneliness interventions seem very costly. There is also mixed evidence about the effectiveness of interventions, with some being effective and some not alleviating loneliness. Further challenges relate to the lack of essential data about the prevalence of loneliness, methodological, sampling, targeting and mechanistic problems in interventions. Gaps in knowledge are more pronounced in LMICs.Neglectedness: 3-3.5/5 Funding, relevant NGOs and charities, as well as public awareness campaigns are present and even seem to be increasing in high-income settings. At the same time, there is more pronounced neglectedness in LMICs.[1] Further, there is still a stigma about discussing loneliness and mental health issues.Key uncertaintiesKey uncertainty 1: How effective and sustainable are anti-loneliness interventions in the long run?The current interventional evidence base is predominantly cross-sectional or short in duration, marked by relatively small and not globally representative samples, so it is hard to make strong causal claims as well as claims with high certainty for generalizability. There are few longitudinal interventions, so little is known about the long-term impacts and potential sustainability of such projects.Key uncertainty 2: Can anti-loneliness interventions be scaled?Some of the promising interventions to tackle loneliness require significant involvement in terms of time and service provision, and so may be difficult to scale. Compounding this difficulty, many of the most socially isolated people live rurally or in poorly connected areas and so reaching them may be challenging.Key uncertainty 3: How much can the cost-effectiveness of loneliness interventions (practically) be improved in the short term?A surprising key target many current anti-loneliness interventions miss is actually recruiting participants who are lonely. I anticipate this is an essential part to understanding the real cost-effectiveness of available interventions. Beyond this, provision of a relevant intervention type for the relevant group as well as provision for a sufficient length of time are also warranted.ImportanceProblem OverviewQuestionAnswerDefinition of the problemLoneliness is commonly defined as a discrepancy between a person’s desired and actual social relationships or opportunities for social contact. This can be a subjective (perceived) or objective discrepancy (social isolation). Loneliness is a risk factor for a variety of general and mental health conditions, mortality, loss of productivity, and poor self-esteem.Burden of the problemHealth burden:£170 - £780 per lonely person per year in the UK or about £340 million - £1.56 billion for the lonely adult population in the UK annually.In the US, lonely adults pay an additional $1643 in annual healthcare coverage, which means the lonely adult population in the country may be paying an excess of $18.4 billion annually.In Australia, loneliness is estimated to cost $2.7 billion a year in excess costs linked to additional GP visits, more physical inactivity and smoking, equivalent to $1,565 per person.Productivi...]]>
Em https://forum.effectivealtruism.org/posts/tmhCnrMKn8Diqjrgk/shallow-investigation-loneliness Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow investigation: Loneliness, published by Em on February 6, 2023 on The Effective Altruism Forum.This is a shallow report examining loneliness.In a nutshellImportance: 4-4.5/5 Unfortunately, it is very common to experience loneliness and particularly so later in life, so the importance of loneliness as a cause area seems high. There is evidence that loneliness impacts a lot of health and economic domains. More worryingly, there is evidence of a ‘vicious cycle’ between health and loneliness such that lonely people become sicker and sicker people tend to have fewer opportunities to find social contact. Some evidence suggests more people may become lonelier over time.Tractability: 2/5 Current loneliness interventions seem very costly. There is also mixed evidence about the effectiveness of interventions, with some being effective and some not alleviating loneliness. Further challenges relate to the lack of essential data about the prevalence of loneliness, methodological, sampling, targeting and mechanistic problems in interventions. Gaps in knowledge are more pronounced in LMICs.Neglectedness: 3-3.5/5 Funding, relevant NGOs and charities, as well as public awareness campaigns are present and even seem to be increasing in high-income settings. At the same time, there is more pronounced neglectedness in LMICs.[1] Further, there is still a stigma about discussing loneliness and mental health issues.Key uncertaintiesKey uncertainty 1: How effective and sustainable are anti-loneliness interventions in the long run?The current interventional evidence base is predominantly cross-sectional or short in duration, marked by relatively small and not globally representative samples, so it is hard to make strong causal claims as well as claims with high certainty for generalizability. There are few longitudinal interventions, so little is known about the long-term impacts and potential sustainability of such projects.Key uncertainty 2: Can anti-loneliness interventions be scaled?Some of the promising interventions to tackle loneliness require significant involvement in terms of time and service provision, and so may be difficult to scale. Compounding this difficulty, many of the most socially isolated people live rurally or in poorly connected areas and so reaching them may be challenging.Key uncertainty 3: How much can the cost-effectiveness of loneliness interventions (practically) be improved in the short term?A surprising key target many current anti-loneliness interventions miss is actually recruiting participants who are lonely. I anticipate this is an essential part to understanding the real cost-effectiveness of available interventions. Beyond this, provision of a relevant intervention type for the relevant group as well as provision for a sufficient length of time are also warranted.ImportanceProblem OverviewQuestionAnswerDefinition of the problemLoneliness is commonly defined as a discrepancy between a person’s desired and actual social relationships or opportunities for social contact. This can be a subjective (perceived) or objective discrepancy (social isolation). Loneliness is a risk factor for a variety of general and mental health conditions, mortality, loss of productivity, and poor self-esteem.Burden of the problemHealth burden:£170 - £780 per lonely person per year in the UK or about £340 million - £1.56 billion for the lonely adult population in the UK annually.In the US, lonely adults pay an additional $1643 in annual healthcare coverage, which means the lonely adult population in the country may be paying an excess of $18.4 billion annually.In Australia, loneliness is estimated to cost $2.7 billion a year in excess costs linked to additional GP visits, more physical inactivity and smoking, equivalent to $1,565 per person.Productivi...]]>
Mon, 06 Feb 2023 18:26:31 +0000 EA - Shallow investigation: Loneliness by Em Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow investigation: Loneliness, published by Em on February 6, 2023 on The Effective Altruism Forum.This is a shallow report examining loneliness.In a nutshellImportance: 4-4.5/5 Unfortunately, it is very common to experience loneliness and particularly so later in life, so the importance of loneliness as a cause area seems high. There is evidence that loneliness impacts a lot of health and economic domains. More worryingly, there is evidence of a ‘vicious cycle’ between health and loneliness such that lonely people become sicker and sicker people tend to have fewer opportunities to find social contact. Some evidence suggests more people may become lonelier over time.Tractability: 2/5 Current loneliness interventions seem very costly. There is also mixed evidence about the effectiveness of interventions, with some being effective and some not alleviating loneliness. Further challenges relate to the lack of essential data about the prevalence of loneliness, methodological, sampling, targeting and mechanistic problems in interventions. Gaps in knowledge are more pronounced in LMICs.Neglectedness: 3-3.5/5 Funding, relevant NGOs and charities, as well as public awareness campaigns are present and even seem to be increasing in high-income settings. At the same time, there is more pronounced neglectedness in LMICs.[1] Further, there is still a stigma about discussing loneliness and mental health issues.Key uncertaintiesKey uncertainty 1: How effective and sustainable are anti-loneliness interventions in the long run?The current interventional evidence base is predominantly cross-sectional or short in duration, marked by relatively small and not globally representative samples, so it is hard to make strong causal claims as well as claims with high certainty for generalizability. There are few longitudinal interventions, so little is known about the long-term impacts and potential sustainability of such projects.Key uncertainty 2: Can anti-loneliness interventions be scaled?Some of the promising interventions to tackle loneliness require significant involvement in terms of time and service provision, and so may be difficult to scale. Compounding this difficulty, many of the most socially isolated people live rurally or in poorly connected areas and so reaching them may be challenging.Key uncertainty 3: How much can the cost-effectiveness of loneliness interventions (practically) be improved in the short term?A surprising key target many current anti-loneliness interventions miss is actually recruiting participants who are lonely. I anticipate this is an essential part to understanding the real cost-effectiveness of available interventions. Beyond this, provision of a relevant intervention type for the relevant group as well as provision for a sufficient length of time are also warranted.ImportanceProblem OverviewQuestionAnswerDefinition of the problemLoneliness is commonly defined as a discrepancy between a person’s desired and actual social relationships or opportunities for social contact. This can be a subjective (perceived) or objective discrepancy (social isolation). Loneliness is a risk factor for a variety of general and mental health conditions, mortality, loss of productivity, and poor self-esteem.Burden of the problemHealth burden:£170 - £780 per lonely person per year in the UK or about £340 million - £1.56 billion for the lonely adult population in the UK annually.In the US, lonely adults pay an additional $1643 in annual healthcare coverage, which means the lonely adult population in the country may be paying an excess of $18.4 billion annually.In Australia, loneliness is estimated to cost $2.7 billion a year in excess costs linked to additional GP visits, more physical inactivity and smoking, equivalent to $1,565 per person.Productivi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow investigation: Loneliness, published by Em on February 6, 2023 on The Effective Altruism Forum.This is a shallow report examining loneliness.In a nutshellImportance: 4-4.5/5 Unfortunately, it is very common to experience loneliness and particularly so later in life, so the importance of loneliness as a cause area seems high. There is evidence that loneliness impacts a lot of health and economic domains. More worryingly, there is evidence of a ‘vicious cycle’ between health and loneliness such that lonely people become sicker and sicker people tend to have fewer opportunities to find social contact. Some evidence suggests more people may become lonelier over time.Tractability: 2/5 Current loneliness interventions seem very costly. There is also mixed evidence about the effectiveness of interventions, with some being effective and some not alleviating loneliness. Further challenges relate to the lack of essential data about the prevalence of loneliness, methodological, sampling, targeting and mechanistic problems in interventions. Gaps in knowledge are more pronounced in LMICs.Neglectedness: 3-3.5/5 Funding, relevant NGOs and charities, as well as public awareness campaigns are present and even seem to be increasing in high-income settings. At the same time, there is more pronounced neglectedness in LMICs.[1] Further, there is still a stigma about discussing loneliness and mental health issues.Key uncertaintiesKey uncertainty 1: How effective and sustainable are anti-loneliness interventions in the long run?The current interventional evidence base is predominantly cross-sectional or short in duration, marked by relatively small and not globally representative samples, so it is hard to make strong causal claims as well as claims with high certainty for generalizability. There are few longitudinal interventions, so little is known about the long-term impacts and potential sustainability of such projects.Key uncertainty 2: Can anti-loneliness interventions be scaled?Some of the promising interventions to tackle loneliness require significant involvement in terms of time and service provision, and so may be difficult to scale. Compounding this difficulty, many of the most socially isolated people live rurally or in poorly connected areas and so reaching them may be challenging.Key uncertainty 3: How much can the cost-effectiveness of loneliness interventions (practically) be improved in the short term?A surprising key target many current anti-loneliness interventions miss is actually recruiting participants who are lonely. I anticipate this is an essential part to understanding the real cost-effectiveness of available interventions. Beyond this, provision of a relevant intervention type for the relevant group as well as provision for a sufficient length of time are also warranted.ImportanceProblem OverviewQuestionAnswerDefinition of the problemLoneliness is commonly defined as a discrepancy between a person’s desired and actual social relationships or opportunities for social contact. This can be a subjective (perceived) or objective discrepancy (social isolation). Loneliness is a risk factor for a variety of general and mental health conditions, mortality, loss of productivity, and poor self-esteem.Burden of the problemHealth burden:£170 - £780 per lonely person per year in the UK or about £340 million - £1.56 billion for the lonely adult population in the UK annually.In the US, lonely adults pay an additional $1643 in annual healthcare coverage, which means the lonely adult population in the country may be paying an excess of $18.4 billion annually.In Australia, loneliness is estimated to cost $2.7 billion a year in excess costs linked to additional GP visits, more physical inactivity and smoking, equivalent to $1,565 per person.Productivi...]]>
Em https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 56:37 None full 4759
E7xdBbxqPNLjhrnz6_NL_EA_EA EA - If Adult Insects Matter, How Much Do Juveniles Matter? by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If Adult Insects Matter, How Much Do Juveniles Matter?, published by Bob Fischer on February 6, 2023 on The Effective Altruism Forum.Key TakeawaysThere are major uncertainties about sentience, welfare ranges, and the taxa of interest that make it difficult to draw any firm conclusions.Nevertheless, it’s plausible that there are some differences in the probability of sentience and welfare ranges across insect life stages. However likely it is that adult insects are sentient, it's less likely that juveniles are sentient; however much welfare adult insects can realize, juveniles can probably realize less.These differences are probably best explained by the life requirements at those stages, which vary in species-specific ways.Insofar as we can compare the magnitudes of these differences, the best evidence currently available doesn’t support the conclusion that they’re large. For instance, we don't see a plausible argument for the view that adult insects matter 1,000x more than juveniles.We may be able to make some predictions about the probability of sentience based on two factors: first, degrees of independence at the immature life stages; second, the developmentally-relevant details of the rearing environments.IntroductionThis is the ninth and final post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species.The goal of this post is to help animal welfare grantmakers assess the relative value of improvements to the lives of some commercially-important insects: the yellow mealworm (Tenebrio molitor), black soldier fly (Hermetia illucens), western honey bee (Apis mellifera), house cricket (Acheta domesticus), and silkworm (Bombyx mori). The vast majority of yellow mealworms, black soldier flies, and silkworms do not reach adulthood in the commercial systems of interest here (the exceptions are the members of the breeding population). By contrast, honey bees and house crickets do reach adulthood. This raises the question of how to value improvements to the lives of insects in their immature versus adult life stages. If adult insects are more likely to be sentient than immature insects, then, all else equal, it’s more important to prevent harm to adult insects than immature insects.Similarly, if it’s likely that adult insects can suffer more intensely than immature insects (which is relevant to their respective welfare ranges, which are one dimension of their respective capacities for welfare), then, all else equal, it’s more important to prevent harm to adult insects than immature insects. So, insofar as it’s possible to identify differences in either the probability of sentience or welfare ranges, that information is action-relevant when all else is equal. And insofar as we can compare the magnitudes of any such differences, that information may be action-relevant when all else isn’t equal.This document unfolds as follows. In the next section, we provide some relevant biological information about the taxa of interest and introduce the reader to some important dimensions of the development and neurobiology of insects. After that, we explain how Rethink Priorities tried to estimate differences in the probability of sentience and welfare ranges for other taxa—which, among other things, involves surveying the literature for evidence regarding sentience- and welfare-range-relevant traits. We extend that approach here. Then, we summarize the literature we reviewed. Next, we argue for the key takeaways. Finally, we make some suggestions regarding future work in this area.Biological Background InformationInsects develop in the larval (or nymphal) stage, un...]]>
Bob Fischer https://forum.effectivealtruism.org/posts/E7xdBbxqPNLjhrnz6/if-adult-insects-matter-how-much-do-juveniles-matter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If Adult Insects Matter, How Much Do Juveniles Matter?, published by Bob Fischer on February 6, 2023 on The Effective Altruism Forum.Key TakeawaysThere are major uncertainties about sentience, welfare ranges, and the taxa of interest that make it difficult to draw any firm conclusions.Nevertheless, it’s plausible that there are some differences in the probability of sentience and welfare ranges across insect life stages. However likely it is that adult insects are sentient, it's less likely that juveniles are sentient; however much welfare adult insects can realize, juveniles can probably realize less.These differences are probably best explained by the life requirements at those stages, which vary in species-specific ways.Insofar as we can compare the magnitudes of these differences, the best evidence currently available doesn’t support the conclusion that they’re large. For instance, we don't see a plausible argument for the view that adult insects matter 1,000x more than juveniles.We may be able to make some predictions about the probability of sentience based on two factors: first, degrees of independence at the immature life stages; second, the developmentally-relevant details of the rearing environments.IntroductionThis is the ninth and final post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species.The goal of this post is to help animal welfare grantmakers assess the relative value of improvements to the lives of some commercially-important insects: the yellow mealworm (Tenebrio molitor), black soldier fly (Hermetia illucens), western honey bee (Apis mellifera), house cricket (Acheta domesticus), and silkworm (Bombyx mori). The vast majority of yellow mealworms, black soldier flies, and silkworms do not reach adulthood in the commercial systems of interest here (the exceptions are the members of the breeding population). By contrast, honey bees and house crickets do reach adulthood. This raises the question of how to value improvements to the lives of insects in their immature versus adult life stages. If adult insects are more likely to be sentient than immature insects, then, all else equal, it’s more important to prevent harm to adult insects than immature insects.Similarly, if it’s likely that adult insects can suffer more intensely than immature insects (which is relevant to their respective welfare ranges, which are one dimension of their respective capacities for welfare), then, all else equal, it’s more important to prevent harm to adult insects than immature insects. So, insofar as it’s possible to identify differences in either the probability of sentience or welfare ranges, that information is action-relevant when all else is equal. And insofar as we can compare the magnitudes of any such differences, that information may be action-relevant when all else isn’t equal.This document unfolds as follows. In the next section, we provide some relevant biological information about the taxa of interest and introduce the reader to some important dimensions of the development and neurobiology of insects. After that, we explain how Rethink Priorities tried to estimate differences in the probability of sentience and welfare ranges for other taxa—which, among other things, involves surveying the literature for evidence regarding sentience- and welfare-range-relevant traits. We extend that approach here. Then, we summarize the literature we reviewed. Next, we argue for the key takeaways. Finally, we make some suggestions regarding future work in this area.Biological Background InformationInsects develop in the larval (or nymphal) stage, un...]]>
Mon, 06 Feb 2023 15:14:51 +0000 EA - If Adult Insects Matter, How Much Do Juveniles Matter? by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If Adult Insects Matter, How Much Do Juveniles Matter?, published by Bob Fischer on February 6, 2023 on The Effective Altruism Forum.Key TakeawaysThere are major uncertainties about sentience, welfare ranges, and the taxa of interest that make it difficult to draw any firm conclusions.Nevertheless, it’s plausible that there are some differences in the probability of sentience and welfare ranges across insect life stages. However likely it is that adult insects are sentient, it's less likely that juveniles are sentient; however much welfare adult insects can realize, juveniles can probably realize less.These differences are probably best explained by the life requirements at those stages, which vary in species-specific ways.Insofar as we can compare the magnitudes of these differences, the best evidence currently available doesn’t support the conclusion that they’re large. For instance, we don't see a plausible argument for the view that adult insects matter 1,000x more than juveniles.We may be able to make some predictions about the probability of sentience based on two factors: first, degrees of independence at the immature life stages; second, the developmentally-relevant details of the rearing environments.IntroductionThis is the ninth and final post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species.The goal of this post is to help animal welfare grantmakers assess the relative value of improvements to the lives of some commercially-important insects: the yellow mealworm (Tenebrio molitor), black soldier fly (Hermetia illucens), western honey bee (Apis mellifera), house cricket (Acheta domesticus), and silkworm (Bombyx mori). The vast majority of yellow mealworms, black soldier flies, and silkworms do not reach adulthood in the commercial systems of interest here (the exceptions are the members of the breeding population). By contrast, honey bees and house crickets do reach adulthood. This raises the question of how to value improvements to the lives of insects in their immature versus adult life stages. If adult insects are more likely to be sentient than immature insects, then, all else equal, it’s more important to prevent harm to adult insects than immature insects.Similarly, if it’s likely that adult insects can suffer more intensely than immature insects (which is relevant to their respective welfare ranges, which are one dimension of their respective capacities for welfare), then, all else equal, it’s more important to prevent harm to adult insects than immature insects. So, insofar as it’s possible to identify differences in either the probability of sentience or welfare ranges, that information is action-relevant when all else is equal. And insofar as we can compare the magnitudes of any such differences, that information may be action-relevant when all else isn’t equal.This document unfolds as follows. In the next section, we provide some relevant biological information about the taxa of interest and introduce the reader to some important dimensions of the development and neurobiology of insects. After that, we explain how Rethink Priorities tried to estimate differences in the probability of sentience and welfare ranges for other taxa—which, among other things, involves surveying the literature for evidence regarding sentience- and welfare-range-relevant traits. We extend that approach here. Then, we summarize the literature we reviewed. Next, we argue for the key takeaways. Finally, we make some suggestions regarding future work in this area.Biological Background InformationInsects develop in the larval (or nymphal) stage, un...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If Adult Insects Matter, How Much Do Juveniles Matter?, published by Bob Fischer on February 6, 2023 on The Effective Altruism Forum.Key TakeawaysThere are major uncertainties about sentience, welfare ranges, and the taxa of interest that make it difficult to draw any firm conclusions.Nevertheless, it’s plausible that there are some differences in the probability of sentience and welfare ranges across insect life stages. However likely it is that adult insects are sentient, it's less likely that juveniles are sentient; however much welfare adult insects can realize, juveniles can probably realize less.These differences are probably best explained by the life requirements at those stages, which vary in species-specific ways.Insofar as we can compare the magnitudes of these differences, the best evidence currently available doesn’t support the conclusion that they’re large. For instance, we don't see a plausible argument for the view that adult insects matter 1,000x more than juveniles.We may be able to make some predictions about the probability of sentience based on two factors: first, degrees of independence at the immature life stages; second, the developmentally-relevant details of the rearing environments.IntroductionThis is the ninth and final post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species.The goal of this post is to help animal welfare grantmakers assess the relative value of improvements to the lives of some commercially-important insects: the yellow mealworm (Tenebrio molitor), black soldier fly (Hermetia illucens), western honey bee (Apis mellifera), house cricket (Acheta domesticus), and silkworm (Bombyx mori). The vast majority of yellow mealworms, black soldier flies, and silkworms do not reach adulthood in the commercial systems of interest here (the exceptions are the members of the breeding population). By contrast, honey bees and house crickets do reach adulthood. This raises the question of how to value improvements to the lives of insects in their immature versus adult life stages. If adult insects are more likely to be sentient than immature insects, then, all else equal, it’s more important to prevent harm to adult insects than immature insects.Similarly, if it’s likely that adult insects can suffer more intensely than immature insects (which is relevant to their respective welfare ranges, which are one dimension of their respective capacities for welfare), then, all else equal, it’s more important to prevent harm to adult insects than immature insects. So, insofar as it’s possible to identify differences in either the probability of sentience or welfare ranges, that information is action-relevant when all else is equal. And insofar as we can compare the magnitudes of any such differences, that information may be action-relevant when all else isn’t equal.This document unfolds as follows. In the next section, we provide some relevant biological information about the taxa of interest and introduce the reader to some important dimensions of the development and neurobiology of insects. After that, we explain how Rethink Priorities tried to estimate differences in the probability of sentience and welfare ranges for other taxa—which, among other things, involves surveying the literature for evidence regarding sentience- and welfare-range-relevant traits. We extend that approach here. Then, we summarize the literature we reviewed. Next, we argue for the key takeaways. Finally, we make some suggestions regarding future work in this area.Biological Background InformationInsects develop in the larval (or nymphal) stage, un...]]>
Bob Fischer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 48:57 None full 4761
QcTZyh9YCWJbjc2mc_NL_EA_EA EA - Pandemic Prediction Checklist: H5N1 by DirectedEvolution Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pandemic Prediction Checklist: H5N1, published by DirectedEvolution on February 5, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
DirectedEvolution https://forum.effectivealtruism.org/posts/QcTZyh9YCWJbjc2mc/pandemic-prediction-checklist-h5n1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pandemic Prediction Checklist: H5N1, published by DirectedEvolution on February 5, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 06 Feb 2023 02:51:09 +0000 EA - Pandemic Prediction Checklist: H5N1 by DirectedEvolution Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pandemic Prediction Checklist: H5N1, published by DirectedEvolution on February 5, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pandemic Prediction Checklist: H5N1, published by DirectedEvolution on February 5, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
DirectedEvolution https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 4751
qpuaYNrKfgZapm4QA_NL_EA_EA EA - Effective Altruism Deconfusion, Part 2: Causes, Philosophy, and Social Constraints by Davidmanheim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Altruism Deconfusion, Part 2: Causes, Philosophy, and Social Constraints, published by Davidmanheim on February 5, 2023 on The Effective Altruism Forum.This is part 2 of my attempt to disentangle and clarify some parts of the overall set of claims that comprise effective altruism, in this case, the set of philosophical positions and cause areas - not to evaluate them, but to enable clearer discussion of the claims, and disagreements, to both opponents and proponents. In my previous post, I made a number of claims about Effective Altruism as a philosophical position. I claimed that there is a nearly universally accepted normative claim that doing good things is important, and a slightly less universal but widely agreed upon claim that one should do those things effectively.My central claim in this post is that the notion of “impartial,” when determining “how to maximize the good with a given unit of resources, in impartial welfarist terms” is hiding almost all of the philosophical complexity and debate that occurs. In the previous post, I said that Effective Altruism as a philosophy was widely shared. Now I’m saying that the specific goals are very much not shared. Unsurprisingly, this mostly appears in discussions of cause area prioritization. But the set of causes that could be prioritized are, I claim, far larger than the set effective altruists typically assume - and embed lots of assumptions and claims that aren’t getting questioned clearly.Causes and PhilosophyTo start, I’d like to explore the compatibility or lack of compatibility of Effective Altruism with other philosophical positions. There are many different philosophical areas and positions, and most of them aren’t actually limited to philosophers. Without going into the different areas of philosophy in detail, I’ll say that I think all of axiology, which includes both aesthetics and ethics, and large parts of metaphysics, are all actually pretty central to the questions Effective Altruism addresses. These debates are central to any discussion of how to pursue cause-neutrality, but are often, in fact, nearly always, ignored by the community.Aesthetics and EA, or Aesthetics versus EA?For example, aesthetics, the study of beauty and joy, could be central to the question of maximizing welfare. According to some views, joy and beauty tell us what welfare is. Many point out that someone can derive great pleasure from something physically painful or unpleasant directly - working out, or sacrificing themselves for a cause, whether it be their children, or their religious beliefs. Similarly, many actually value art and music highly personally. Given a choice between, say, losing their hearing and never again getting to listen to music, or living another decade, many would choose music. Preference utilitarianism (or the equivalent preference beneficentrism,) would say that people benefit from getting what they want.Similarly, many people place great value on aesthetics, and think that music and the arts are an important part of benefiting others. On the other hand, a thought experiment that is sometimes used to argue about consequentialism is to imagine a museum on fire, and weigh saving, say, the Mona Lisa against saving a patron who was there to look at it. Typically, the point being made is that the Mona Lisa has a monetary value that far exceeds the cost of saving a life, and so a certain type of person, say, an economist, might say to save the painting. (An EA might then sell it, and use the proceeds to save many lives.) But a different viewpoint is that there is a reason the Mona Lisa is valued so highly - aesthetics matters to people so much that, when considering public budgeting between fighting homelessness and funding museums, they think the correct moral tradeoff is to spend some money ...]]>
Davidmanheim https://forum.effectivealtruism.org/posts/qpuaYNrKfgZapm4QA/effective-altruism-deconfusion-part-2-causes-philosophy-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Altruism Deconfusion, Part 2: Causes, Philosophy, and Social Constraints, published by Davidmanheim on February 5, 2023 on The Effective Altruism Forum.This is part 2 of my attempt to disentangle and clarify some parts of the overall set of claims that comprise effective altruism, in this case, the set of philosophical positions and cause areas - not to evaluate them, but to enable clearer discussion of the claims, and disagreements, to both opponents and proponents. In my previous post, I made a number of claims about Effective Altruism as a philosophical position. I claimed that there is a nearly universally accepted normative claim that doing good things is important, and a slightly less universal but widely agreed upon claim that one should do those things effectively.My central claim in this post is that the notion of “impartial,” when determining “how to maximize the good with a given unit of resources, in impartial welfarist terms” is hiding almost all of the philosophical complexity and debate that occurs. In the previous post, I said that Effective Altruism as a philosophy was widely shared. Now I’m saying that the specific goals are very much not shared. Unsurprisingly, this mostly appears in discussions of cause area prioritization. But the set of causes that could be prioritized are, I claim, far larger than the set effective altruists typically assume - and embed lots of assumptions and claims that aren’t getting questioned clearly.Causes and PhilosophyTo start, I’d like to explore the compatibility or lack of compatibility of Effective Altruism with other philosophical positions. There are many different philosophical areas and positions, and most of them aren’t actually limited to philosophers. Without going into the different areas of philosophy in detail, I’ll say that I think all of axiology, which includes both aesthetics and ethics, and large parts of metaphysics, are all actually pretty central to the questions Effective Altruism addresses. These debates are central to any discussion of how to pursue cause-neutrality, but are often, in fact, nearly always, ignored by the community.Aesthetics and EA, or Aesthetics versus EA?For example, aesthetics, the study of beauty and joy, could be central to the question of maximizing welfare. According to some views, joy and beauty tell us what welfare is. Many point out that someone can derive great pleasure from something physically painful or unpleasant directly - working out, or sacrificing themselves for a cause, whether it be their children, or their religious beliefs. Similarly, many actually value art and music highly personally. Given a choice between, say, losing their hearing and never again getting to listen to music, or living another decade, many would choose music. Preference utilitarianism (or the equivalent preference beneficentrism,) would say that people benefit from getting what they want.Similarly, many people place great value on aesthetics, and think that music and the arts are an important part of benefiting others. On the other hand, a thought experiment that is sometimes used to argue about consequentialism is to imagine a museum on fire, and weigh saving, say, the Mona Lisa against saving a patron who was there to look at it. Typically, the point being made is that the Mona Lisa has a monetary value that far exceeds the cost of saving a life, and so a certain type of person, say, an economist, might say to save the painting. (An EA might then sell it, and use the proceeds to save many lives.) But a different viewpoint is that there is a reason the Mona Lisa is valued so highly - aesthetics matters to people so much that, when considering public budgeting between fighting homelessness and funding museums, they think the correct moral tradeoff is to spend some money ...]]>
Mon, 06 Feb 2023 00:50:10 +0000 EA - Effective Altruism Deconfusion, Part 2: Causes, Philosophy, and Social Constraints by Davidmanheim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Altruism Deconfusion, Part 2: Causes, Philosophy, and Social Constraints, published by Davidmanheim on February 5, 2023 on The Effective Altruism Forum.This is part 2 of my attempt to disentangle and clarify some parts of the overall set of claims that comprise effective altruism, in this case, the set of philosophical positions and cause areas - not to evaluate them, but to enable clearer discussion of the claims, and disagreements, to both opponents and proponents. In my previous post, I made a number of claims about Effective Altruism as a philosophical position. I claimed that there is a nearly universally accepted normative claim that doing good things is important, and a slightly less universal but widely agreed upon claim that one should do those things effectively.My central claim in this post is that the notion of “impartial,” when determining “how to maximize the good with a given unit of resources, in impartial welfarist terms” is hiding almost all of the philosophical complexity and debate that occurs. In the previous post, I said that Effective Altruism as a philosophy was widely shared. Now I’m saying that the specific goals are very much not shared. Unsurprisingly, this mostly appears in discussions of cause area prioritization. But the set of causes that could be prioritized are, I claim, far larger than the set effective altruists typically assume - and embed lots of assumptions and claims that aren’t getting questioned clearly.Causes and PhilosophyTo start, I’d like to explore the compatibility or lack of compatibility of Effective Altruism with other philosophical positions. There are many different philosophical areas and positions, and most of them aren’t actually limited to philosophers. Without going into the different areas of philosophy in detail, I’ll say that I think all of axiology, which includes both aesthetics and ethics, and large parts of metaphysics, are all actually pretty central to the questions Effective Altruism addresses. These debates are central to any discussion of how to pursue cause-neutrality, but are often, in fact, nearly always, ignored by the community.Aesthetics and EA, or Aesthetics versus EA?For example, aesthetics, the study of beauty and joy, could be central to the question of maximizing welfare. According to some views, joy and beauty tell us what welfare is. Many point out that someone can derive great pleasure from something physically painful or unpleasant directly - working out, or sacrificing themselves for a cause, whether it be their children, or their religious beliefs. Similarly, many actually value art and music highly personally. Given a choice between, say, losing their hearing and never again getting to listen to music, or living another decade, many would choose music. Preference utilitarianism (or the equivalent preference beneficentrism,) would say that people benefit from getting what they want.Similarly, many people place great value on aesthetics, and think that music and the arts are an important part of benefiting others. On the other hand, a thought experiment that is sometimes used to argue about consequentialism is to imagine a museum on fire, and weigh saving, say, the Mona Lisa against saving a patron who was there to look at it. Typically, the point being made is that the Mona Lisa has a monetary value that far exceeds the cost of saving a life, and so a certain type of person, say, an economist, might say to save the painting. (An EA might then sell it, and use the proceeds to save many lives.) But a different viewpoint is that there is a reason the Mona Lisa is valued so highly - aesthetics matters to people so much that, when considering public budgeting between fighting homelessness and funding museums, they think the correct moral tradeoff is to spend some money ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Altruism Deconfusion, Part 2: Causes, Philosophy, and Social Constraints, published by Davidmanheim on February 5, 2023 on The Effective Altruism Forum.This is part 2 of my attempt to disentangle and clarify some parts of the overall set of claims that comprise effective altruism, in this case, the set of philosophical positions and cause areas - not to evaluate them, but to enable clearer discussion of the claims, and disagreements, to both opponents and proponents. In my previous post, I made a number of claims about Effective Altruism as a philosophical position. I claimed that there is a nearly universally accepted normative claim that doing good things is important, and a slightly less universal but widely agreed upon claim that one should do those things effectively.My central claim in this post is that the notion of “impartial,” when determining “how to maximize the good with a given unit of resources, in impartial welfarist terms” is hiding almost all of the philosophical complexity and debate that occurs. In the previous post, I said that Effective Altruism as a philosophy was widely shared. Now I’m saying that the specific goals are very much not shared. Unsurprisingly, this mostly appears in discussions of cause area prioritization. But the set of causes that could be prioritized are, I claim, far larger than the set effective altruists typically assume - and embed lots of assumptions and claims that aren’t getting questioned clearly.Causes and PhilosophyTo start, I’d like to explore the compatibility or lack of compatibility of Effective Altruism with other philosophical positions. There are many different philosophical areas and positions, and most of them aren’t actually limited to philosophers. Without going into the different areas of philosophy in detail, I’ll say that I think all of axiology, which includes both aesthetics and ethics, and large parts of metaphysics, are all actually pretty central to the questions Effective Altruism addresses. These debates are central to any discussion of how to pursue cause-neutrality, but are often, in fact, nearly always, ignored by the community.Aesthetics and EA, or Aesthetics versus EA?For example, aesthetics, the study of beauty and joy, could be central to the question of maximizing welfare. According to some views, joy and beauty tell us what welfare is. Many point out that someone can derive great pleasure from something physically painful or unpleasant directly - working out, or sacrificing themselves for a cause, whether it be their children, or their religious beliefs. Similarly, many actually value art and music highly personally. Given a choice between, say, losing their hearing and never again getting to listen to music, or living another decade, many would choose music. Preference utilitarianism (or the equivalent preference beneficentrism,) would say that people benefit from getting what they want.Similarly, many people place great value on aesthetics, and think that music and the arts are an important part of benefiting others. On the other hand, a thought experiment that is sometimes used to argue about consequentialism is to imagine a museum on fire, and weigh saving, say, the Mona Lisa against saving a patron who was there to look at it. Typically, the point being made is that the Mona Lisa has a monetary value that far exceeds the cost of saving a life, and so a certain type of person, say, an economist, might say to save the painting. (An EA might then sell it, and use the proceeds to save many lives.) But a different viewpoint is that there is a reason the Mona Lisa is valued so highly - aesthetics matters to people so much that, when considering public budgeting between fighting homelessness and funding museums, they think the correct moral tradeoff is to spend some money ...]]>
Davidmanheim https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:41 None full 4752
KesbhkciEoqDG77EA_NL_EA_EA EA - The EA Forum should remove community posts from search-engine indexing. by devansh Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Forum should remove community posts from search-engine indexing., published by devansh on February 5, 2023 on The Effective Altruism Forum.I want to propose a common-sense reform to the EA forum - stop any post tagged "Community" from showing up in Google searches. (I believe this to be technically implementable; this not being true would be good reason not to implement this change). I think it's probably good for EA discourse right now to be able to talk about scandals etc. openly but also have, like, minimum moats preventing some of the lowest effort bad-faith targeting by external parties.The important parts of the EA forum to people who are Googling us are, like, the things that we object-level care about! The actual stuff that the majority of EAs in direct work do every day—distributing insecticide-treated antimalarial bednets, or doing research in AI alignment, or figuring out how to make vaccines in advance for the next pandemic. There are status and incentive gradients to write about and upvote community stuff on the EA forum, but we can counter that at least somewhat by removing it from search engines!)I also, for similar reasons, think that we should further decrease the default rate at which community posts show up on the frontpage, perhaps going as far as to mark community posts as Personal Blog by default. I think they make discourse norms worse and having the frontpage full of object-level takes about the world (which in fact actually tracks what most people doing direct EA work are actually focused on, instead of writing Forum posts!) is better for both discourse norms and, IDK, the health of the EA community. (This is not a hill that I want to die on, but feels like a relevant extension).Finally, I find myself instinctively censoring myself on the Forum because anything I say can be adversarially quoted by a journalist attempting to take it out of context. There's not a lot I can do about that, but we could at least make it slightly harder for discussion amongst EAs that often require context about EA principles and values and community norms to be on the public internet.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
devansh https://forum.effectivealtruism.org/posts/KesbhkciEoqDG77EA/the-ea-forum-should-remove-community-posts-from-search Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Forum should remove community posts from search-engine indexing., published by devansh on February 5, 2023 on The Effective Altruism Forum.I want to propose a common-sense reform to the EA forum - stop any post tagged "Community" from showing up in Google searches. (I believe this to be technically implementable; this not being true would be good reason not to implement this change). I think it's probably good for EA discourse right now to be able to talk about scandals etc. openly but also have, like, minimum moats preventing some of the lowest effort bad-faith targeting by external parties.The important parts of the EA forum to people who are Googling us are, like, the things that we object-level care about! The actual stuff that the majority of EAs in direct work do every day—distributing insecticide-treated antimalarial bednets, or doing research in AI alignment, or figuring out how to make vaccines in advance for the next pandemic. There are status and incentive gradients to write about and upvote community stuff on the EA forum, but we can counter that at least somewhat by removing it from search engines!)I also, for similar reasons, think that we should further decrease the default rate at which community posts show up on the frontpage, perhaps going as far as to mark community posts as Personal Blog by default. I think they make discourse norms worse and having the frontpage full of object-level takes about the world (which in fact actually tracks what most people doing direct EA work are actually focused on, instead of writing Forum posts!) is better for both discourse norms and, IDK, the health of the EA community. (This is not a hill that I want to die on, but feels like a relevant extension).Finally, I find myself instinctively censoring myself on the Forum because anything I say can be adversarially quoted by a journalist attempting to take it out of context. There's not a lot I can do about that, but we could at least make it slightly harder for discussion amongst EAs that often require context about EA principles and values and community norms to be on the public internet.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 05 Feb 2023 17:07:37 +0000 EA - The EA Forum should remove community posts from search-engine indexing. by devansh Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Forum should remove community posts from search-engine indexing., published by devansh on February 5, 2023 on The Effective Altruism Forum.I want to propose a common-sense reform to the EA forum - stop any post tagged "Community" from showing up in Google searches. (I believe this to be technically implementable; this not being true would be good reason not to implement this change). I think it's probably good for EA discourse right now to be able to talk about scandals etc. openly but also have, like, minimum moats preventing some of the lowest effort bad-faith targeting by external parties.The important parts of the EA forum to people who are Googling us are, like, the things that we object-level care about! The actual stuff that the majority of EAs in direct work do every day—distributing insecticide-treated antimalarial bednets, or doing research in AI alignment, or figuring out how to make vaccines in advance for the next pandemic. There are status and incentive gradients to write about and upvote community stuff on the EA forum, but we can counter that at least somewhat by removing it from search engines!)I also, for similar reasons, think that we should further decrease the default rate at which community posts show up on the frontpage, perhaps going as far as to mark community posts as Personal Blog by default. I think they make discourse norms worse and having the frontpage full of object-level takes about the world (which in fact actually tracks what most people doing direct EA work are actually focused on, instead of writing Forum posts!) is better for both discourse norms and, IDK, the health of the EA community. (This is not a hill that I want to die on, but feels like a relevant extension).Finally, I find myself instinctively censoring myself on the Forum because anything I say can be adversarially quoted by a journalist attempting to take it out of context. There's not a lot I can do about that, but we could at least make it slightly harder for discussion amongst EAs that often require context about EA principles and values and community norms to be on the public internet.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Forum should remove community posts from search-engine indexing., published by devansh on February 5, 2023 on The Effective Altruism Forum.I want to propose a common-sense reform to the EA forum - stop any post tagged "Community" from showing up in Google searches. (I believe this to be technically implementable; this not being true would be good reason not to implement this change). I think it's probably good for EA discourse right now to be able to talk about scandals etc. openly but also have, like, minimum moats preventing some of the lowest effort bad-faith targeting by external parties.The important parts of the EA forum to people who are Googling us are, like, the things that we object-level care about! The actual stuff that the majority of EAs in direct work do every day—distributing insecticide-treated antimalarial bednets, or doing research in AI alignment, or figuring out how to make vaccines in advance for the next pandemic. There are status and incentive gradients to write about and upvote community stuff on the EA forum, but we can counter that at least somewhat by removing it from search engines!)I also, for similar reasons, think that we should further decrease the default rate at which community posts show up on the frontpage, perhaps going as far as to mark community posts as Personal Blog by default. I think they make discourse norms worse and having the frontpage full of object-level takes about the world (which in fact actually tracks what most people doing direct EA work are actually focused on, instead of writing Forum posts!) is better for both discourse norms and, IDK, the health of the EA community. (This is not a hill that I want to die on, but feels like a relevant extension).Finally, I find myself instinctively censoring myself on the Forum because anything I say can be adversarially quoted by a journalist attempting to take it out of context. There's not a lot I can do about that, but we could at least make it slightly harder for discussion amongst EAs that often require context about EA principles and values and community norms to be on the public internet.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
devansh https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:09 None full 4744
jFnyqaLAtgfWpATeJ_NL_EA_EA EA - EA's weirdness makes it unusually susceptible to bad behavior by OutsideView Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA's weirdness makes it unusually susceptible to bad behavior, published by OutsideView on February 5, 2023 on The Effective Altruism Forum.Writing this under a fresh account because I don't want my views on this impact career opportunities.TLDR: We're all aware that EA has been rocked by a series of high profile scandals recently. I believe EA is more susceptible to these kinds of scandals than most movements because EA fundamentally has a very high tolerance for deeply weird people. This tolerance leads to more acceptance of socially unacceptable behavior than would otherwise be permitted.It seems uncontroversial and obviously true to me that EA is deeply fucking weird. It's easy to forget once you're inside the community, but even the basics like "Do some math to see how much good our charitable dollars do" is an unusual instinct for most regular people. Extending that into "Donate your money to save African people from diseases" is very weird for most regular people. Extending further into other 'mainstream EA' cause areas (like AI safety) ups the weird factor by several orders of magnitude. The work that many EAs do seems fundamentally bizarre to much/most of the world.Ideas that most of the world would find patently insane - that we should care about shrimp welfare, insect welfare, trillions 0f future em-style beings - are regularly discussed, taken seriously, and given funding and institutional weight in EA. Wildly unusual social practices like polyamory are common and other unusual practices like atheism and veganism are outright the default. Anyone who's spent any amount of time in EA can probably tell you about some very odd people they've met: whether it's a guy who only wears those shoes with individual toes, or the girl who does taxidermy for fun and wants to talk to you about it for the next several hours, or the the guy who doesn't believe in showers. I don't have hard numbers but I am sure the EA community over-indexes like mad for those on the autism spectrum.This movement might have the one of the highest 'weirdness tolerance' factors of all extant movements today.This has real consequences, good and bad. Many of you have probably jumped to one of the good parts: if you want to generate new ideas, you need weirdos. There are benefits to taking in misfits and people with idiosyncratic ideas and bizarre behaviors, because sometimes those are the people with startlingly valuable new insights. This is broadly true. There are a lot of people doing objectively weird things in EA who are good, smart, kind, interesting and valuable thinkers, and who are having a positive impact on the world. I've met and admire many of them. If EA is empowering these folks to flex their weirdness for good, then I'm glad.But there are downsides as well. If there's a big dial where one end is 'Be Intolerant Of Odd People' and one end is 'Be Tolerant of Odd People' and you crank it all the way to 100% tolerance, you're going to end up with more than just the helpful kind weirdos. You're going to end up with creeps and unhelpful, poisonous weirdos as well. You're going to end up with the people who casually invite coworkers to go to sex parties with them to experiment with BDSM toys. You're going to end up with people who say that "pedophilic relationships between very young women and older men are a good way to transfer knowledge" and also people whose first instinct is to defend such a statement as "high decoupling cognitive style". People whose reaction to accusations of misconduct is to build a probability model and try to set an 'acceptableness threshold'. You know what should worry EA?I was not the least bit surprised to see so many accusations of wildly inappropriate workplace behavior or semantic games defending abhorrent ideas/people. I thought 'yeah seems like...]]>
OutsideView https://forum.effectivealtruism.org/posts/jFnyqaLAtgfWpATeJ/ea-s-weirdness-makes-it-unusually-susceptible-to-bad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA's weirdness makes it unusually susceptible to bad behavior, published by OutsideView on February 5, 2023 on The Effective Altruism Forum.Writing this under a fresh account because I don't want my views on this impact career opportunities.TLDR: We're all aware that EA has been rocked by a series of high profile scandals recently. I believe EA is more susceptible to these kinds of scandals than most movements because EA fundamentally has a very high tolerance for deeply weird people. This tolerance leads to more acceptance of socially unacceptable behavior than would otherwise be permitted.It seems uncontroversial and obviously true to me that EA is deeply fucking weird. It's easy to forget once you're inside the community, but even the basics like "Do some math to see how much good our charitable dollars do" is an unusual instinct for most regular people. Extending that into "Donate your money to save African people from diseases" is very weird for most regular people. Extending further into other 'mainstream EA' cause areas (like AI safety) ups the weird factor by several orders of magnitude. The work that many EAs do seems fundamentally bizarre to much/most of the world.Ideas that most of the world would find patently insane - that we should care about shrimp welfare, insect welfare, trillions 0f future em-style beings - are regularly discussed, taken seriously, and given funding and institutional weight in EA. Wildly unusual social practices like polyamory are common and other unusual practices like atheism and veganism are outright the default. Anyone who's spent any amount of time in EA can probably tell you about some very odd people they've met: whether it's a guy who only wears those shoes with individual toes, or the girl who does taxidermy for fun and wants to talk to you about it for the next several hours, or the the guy who doesn't believe in showers. I don't have hard numbers but I am sure the EA community over-indexes like mad for those on the autism spectrum.This movement might have the one of the highest 'weirdness tolerance' factors of all extant movements today.This has real consequences, good and bad. Many of you have probably jumped to one of the good parts: if you want to generate new ideas, you need weirdos. There are benefits to taking in misfits and people with idiosyncratic ideas and bizarre behaviors, because sometimes those are the people with startlingly valuable new insights. This is broadly true. There are a lot of people doing objectively weird things in EA who are good, smart, kind, interesting and valuable thinkers, and who are having a positive impact on the world. I've met and admire many of them. If EA is empowering these folks to flex their weirdness for good, then I'm glad.But there are downsides as well. If there's a big dial where one end is 'Be Intolerant Of Odd People' and one end is 'Be Tolerant of Odd People' and you crank it all the way to 100% tolerance, you're going to end up with more than just the helpful kind weirdos. You're going to end up with creeps and unhelpful, poisonous weirdos as well. You're going to end up with the people who casually invite coworkers to go to sex parties with them to experiment with BDSM toys. You're going to end up with people who say that "pedophilic relationships between very young women and older men are a good way to transfer knowledge" and also people whose first instinct is to defend such a statement as "high decoupling cognitive style". People whose reaction to accusations of misconduct is to build a probability model and try to set an 'acceptableness threshold'. You know what should worry EA?I was not the least bit surprised to see so many accusations of wildly inappropriate workplace behavior or semantic games defending abhorrent ideas/people. I thought 'yeah seems like...]]>
Sun, 05 Feb 2023 13:49:30 +0000 EA - EA's weirdness makes it unusually susceptible to bad behavior by OutsideView Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA's weirdness makes it unusually susceptible to bad behavior, published by OutsideView on February 5, 2023 on The Effective Altruism Forum.Writing this under a fresh account because I don't want my views on this impact career opportunities.TLDR: We're all aware that EA has been rocked by a series of high profile scandals recently. I believe EA is more susceptible to these kinds of scandals than most movements because EA fundamentally has a very high tolerance for deeply weird people. This tolerance leads to more acceptance of socially unacceptable behavior than would otherwise be permitted.It seems uncontroversial and obviously true to me that EA is deeply fucking weird. It's easy to forget once you're inside the community, but even the basics like "Do some math to see how much good our charitable dollars do" is an unusual instinct for most regular people. Extending that into "Donate your money to save African people from diseases" is very weird for most regular people. Extending further into other 'mainstream EA' cause areas (like AI safety) ups the weird factor by several orders of magnitude. The work that many EAs do seems fundamentally bizarre to much/most of the world.Ideas that most of the world would find patently insane - that we should care about shrimp welfare, insect welfare, trillions 0f future em-style beings - are regularly discussed, taken seriously, and given funding and institutional weight in EA. Wildly unusual social practices like polyamory are common and other unusual practices like atheism and veganism are outright the default. Anyone who's spent any amount of time in EA can probably tell you about some very odd people they've met: whether it's a guy who only wears those shoes with individual toes, or the girl who does taxidermy for fun and wants to talk to you about it for the next several hours, or the the guy who doesn't believe in showers. I don't have hard numbers but I am sure the EA community over-indexes like mad for those on the autism spectrum.This movement might have the one of the highest 'weirdness tolerance' factors of all extant movements today.This has real consequences, good and bad. Many of you have probably jumped to one of the good parts: if you want to generate new ideas, you need weirdos. There are benefits to taking in misfits and people with idiosyncratic ideas and bizarre behaviors, because sometimes those are the people with startlingly valuable new insights. This is broadly true. There are a lot of people doing objectively weird things in EA who are good, smart, kind, interesting and valuable thinkers, and who are having a positive impact on the world. I've met and admire many of them. If EA is empowering these folks to flex their weirdness for good, then I'm glad.But there are downsides as well. If there's a big dial where one end is 'Be Intolerant Of Odd People' and one end is 'Be Tolerant of Odd People' and you crank it all the way to 100% tolerance, you're going to end up with more than just the helpful kind weirdos. You're going to end up with creeps and unhelpful, poisonous weirdos as well. You're going to end up with the people who casually invite coworkers to go to sex parties with them to experiment with BDSM toys. You're going to end up with people who say that "pedophilic relationships between very young women and older men are a good way to transfer knowledge" and also people whose first instinct is to defend such a statement as "high decoupling cognitive style". People whose reaction to accusations of misconduct is to build a probability model and try to set an 'acceptableness threshold'. You know what should worry EA?I was not the least bit surprised to see so many accusations of wildly inappropriate workplace behavior or semantic games defending abhorrent ideas/people. I thought 'yeah seems like...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA's weirdness makes it unusually susceptible to bad behavior, published by OutsideView on February 5, 2023 on The Effective Altruism Forum.Writing this under a fresh account because I don't want my views on this impact career opportunities.TLDR: We're all aware that EA has been rocked by a series of high profile scandals recently. I believe EA is more susceptible to these kinds of scandals than most movements because EA fundamentally has a very high tolerance for deeply weird people. This tolerance leads to more acceptance of socially unacceptable behavior than would otherwise be permitted.It seems uncontroversial and obviously true to me that EA is deeply fucking weird. It's easy to forget once you're inside the community, but even the basics like "Do some math to see how much good our charitable dollars do" is an unusual instinct for most regular people. Extending that into "Donate your money to save African people from diseases" is very weird for most regular people. Extending further into other 'mainstream EA' cause areas (like AI safety) ups the weird factor by several orders of magnitude. The work that many EAs do seems fundamentally bizarre to much/most of the world.Ideas that most of the world would find patently insane - that we should care about shrimp welfare, insect welfare, trillions 0f future em-style beings - are regularly discussed, taken seriously, and given funding and institutional weight in EA. Wildly unusual social practices like polyamory are common and other unusual practices like atheism and veganism are outright the default. Anyone who's spent any amount of time in EA can probably tell you about some very odd people they've met: whether it's a guy who only wears those shoes with individual toes, or the girl who does taxidermy for fun and wants to talk to you about it for the next several hours, or the the guy who doesn't believe in showers. I don't have hard numbers but I am sure the EA community over-indexes like mad for those on the autism spectrum.This movement might have the one of the highest 'weirdness tolerance' factors of all extant movements today.This has real consequences, good and bad. Many of you have probably jumped to one of the good parts: if you want to generate new ideas, you need weirdos. There are benefits to taking in misfits and people with idiosyncratic ideas and bizarre behaviors, because sometimes those are the people with startlingly valuable new insights. This is broadly true. There are a lot of people doing objectively weird things in EA who are good, smart, kind, interesting and valuable thinkers, and who are having a positive impact on the world. I've met and admire many of them. If EA is empowering these folks to flex their weirdness for good, then I'm glad.But there are downsides as well. If there's a big dial where one end is 'Be Intolerant Of Odd People' and one end is 'Be Tolerant of Odd People' and you crank it all the way to 100% tolerance, you're going to end up with more than just the helpful kind weirdos. You're going to end up with creeps and unhelpful, poisonous weirdos as well. You're going to end up with the people who casually invite coworkers to go to sex parties with them to experiment with BDSM toys. You're going to end up with people who say that "pedophilic relationships between very young women and older men are a good way to transfer knowledge" and also people whose first instinct is to defend such a statement as "high decoupling cognitive style". People whose reaction to accusations of misconduct is to build a probability model and try to set an 'acceptableness threshold'. You know what should worry EA?I was not the least bit surprised to see so many accusations of wildly inappropriate workplace behavior or semantic games defending abhorrent ideas/people. I thought 'yeah seems like...]]>
OutsideView https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:35 None full 4747
CKnqXvLrkxsaYFtg8_NL_EA_EA EA - Appreciation thread Feb 2023 by Michelle Hutchinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Appreciation thread Feb 2023, published by Michelle Hutchinson on February 5, 2023 on The Effective Altruism Forum.It feels like it’s been a while since we had an appreciation thread. At a tough time for the community I thought that might be especially nice for all of us. I get a lot out of gratitude journaling, but I also really enjoy hearing what other people are appreciative of, big and small. I’ll try to get us started with a few things of different scopes and subject matters, to hopefully make it as easy as possible for others to chip in with whatever they feel appreciative of today!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Michelle Hutchinson https://forum.effectivealtruism.org/posts/CKnqXvLrkxsaYFtg8/appreciation-thread-feb-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Appreciation thread Feb 2023, published by Michelle Hutchinson on February 5, 2023 on The Effective Altruism Forum.It feels like it’s been a while since we had an appreciation thread. At a tough time for the community I thought that might be especially nice for all of us. I get a lot out of gratitude journaling, but I also really enjoy hearing what other people are appreciative of, big and small. I’ll try to get us started with a few things of different scopes and subject matters, to hopefully make it as easy as possible for others to chip in with whatever they feel appreciative of today!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 05 Feb 2023 12:39:45 +0000 EA - Appreciation thread Feb 2023 by Michelle Hutchinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Appreciation thread Feb 2023, published by Michelle Hutchinson on February 5, 2023 on The Effective Altruism Forum.It feels like it’s been a while since we had an appreciation thread. At a tough time for the community I thought that might be especially nice for all of us. I get a lot out of gratitude journaling, but I also really enjoy hearing what other people are appreciative of, big and small. I’ll try to get us started with a few things of different scopes and subject matters, to hopefully make it as easy as possible for others to chip in with whatever they feel appreciative of today!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Appreciation thread Feb 2023, published by Michelle Hutchinson on February 5, 2023 on The Effective Altruism Forum.It feels like it’s been a while since we had an appreciation thread. At a tough time for the community I thought that might be especially nice for all of us. I get a lot out of gratitude journaling, but I also really enjoy hearing what other people are appreciative of, big and small. I’ll try to get us started with a few things of different scopes and subject matters, to hopefully make it as easy as possible for others to chip in with whatever they feel appreciative of today!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Michelle Hutchinson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:50 None full 4746
QMMFyAX3ajf9vF5sb_NL_EA_EA EA - H5N1 - thread for information sharing, planning, and action by MathiasKB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: H5N1 - thread for information sharing, planning, and action, published by MathiasKB on February 5, 2023 on The Effective Altruism Forum.Hi everyone,I've been reading up on H5N1 this weekend, and I'm pretty concerned. Right now my estimate hunch is that there is a 5% non-zero chance that it will cost more than 10,000 people their lives.To be clear, I think it is unlikely that H5N1 will become a pandemic anywhere close to the size of covid.Nevertheless, I think our community should be actively following the news and start thinking about ways to be helpful if the probability increases. I am creating this thread as a place where people can discuss and share information about H5N1. We have a lot of pandemic experts in this community, do chime in!ResourcesArticles(paper showing H5N1 has spread to minks, which is my primary cause for concern)(widely shared, but I'm unsure how much to trust the claims)MarketsManifoldGroup of H5N1 manifold markets:MetaculusPlan for actionFight status quo biasIn January 2020, many in the effective altruism and rationalist communities had correctly gauged the seriousness of the pandemic threat and were warning people publicly about it. Despite being convinced it was likely to become a pandemic I almost entirely failed to act beyond a few symbolic gestures such as stocking up on food/masks and warning relatives.I consider this to have been the biggest personal failing of my life. I could have started initiatives to organize and prepare, I could have invested in mRNA producers, I could have researched how it would affect third-world hospitals. Yet all I did was sit idly by and doom scroll the internet for news about covid.My goal with this thread is to avoid making that mistake ever again, even if it means most likely looking really stupid in a few months time.How can we lower the chance of a serious pandemic?I encourage everyone to think about actionable steps and be ambitious in their thinking. As far as I understand mink-to-human transmission is currently the primary reason to be concerned. What ways are there to minimize the chance of this occuring?The following companies currently own vaccines for H5N1:Sanofi SAAflunovGSK plcQ-Pan H5N1 influenza vaccineCSL LimitedAudenz (and 1-3 more I think?)Roche Holding AG Genussscheineoseltamivir (aka Tamiflu, not a vaccine), this one seems less useful than the othersCould we pay them to start scaling up production tomorrow? One thing to note is that all these vaccines are egg-based. Are mRNA vaccines possible to create for this? If so, what can we do to speed up the process of making them?Any other ideas?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
MathiasKB https://forum.effectivealtruism.org/posts/QMMFyAX3ajf9vF5sb/h5n1-thread-for-information-sharing-planning-and-action Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: H5N1 - thread for information sharing, planning, and action, published by MathiasKB on February 5, 2023 on The Effective Altruism Forum.Hi everyone,I've been reading up on H5N1 this weekend, and I'm pretty concerned. Right now my estimate hunch is that there is a 5% non-zero chance that it will cost more than 10,000 people their lives.To be clear, I think it is unlikely that H5N1 will become a pandemic anywhere close to the size of covid.Nevertheless, I think our community should be actively following the news and start thinking about ways to be helpful if the probability increases. I am creating this thread as a place where people can discuss and share information about H5N1. We have a lot of pandemic experts in this community, do chime in!ResourcesArticles(paper showing H5N1 has spread to minks, which is my primary cause for concern)(widely shared, but I'm unsure how much to trust the claims)MarketsManifoldGroup of H5N1 manifold markets:MetaculusPlan for actionFight status quo biasIn January 2020, many in the effective altruism and rationalist communities had correctly gauged the seriousness of the pandemic threat and were warning people publicly about it. Despite being convinced it was likely to become a pandemic I almost entirely failed to act beyond a few symbolic gestures such as stocking up on food/masks and warning relatives.I consider this to have been the biggest personal failing of my life. I could have started initiatives to organize and prepare, I could have invested in mRNA producers, I could have researched how it would affect third-world hospitals. Yet all I did was sit idly by and doom scroll the internet for news about covid.My goal with this thread is to avoid making that mistake ever again, even if it means most likely looking really stupid in a few months time.How can we lower the chance of a serious pandemic?I encourage everyone to think about actionable steps and be ambitious in their thinking. As far as I understand mink-to-human transmission is currently the primary reason to be concerned. What ways are there to minimize the chance of this occuring?The following companies currently own vaccines for H5N1:Sanofi SAAflunovGSK plcQ-Pan H5N1 influenza vaccineCSL LimitedAudenz (and 1-3 more I think?)Roche Holding AG Genussscheineoseltamivir (aka Tamiflu, not a vaccine), this one seems less useful than the othersCould we pay them to start scaling up production tomorrow? One thing to note is that all these vaccines are egg-based. Are mRNA vaccines possible to create for this? If so, what can we do to speed up the process of making them?Any other ideas?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 05 Feb 2023 12:01:00 +0000 EA - H5N1 - thread for information sharing, planning, and action by MathiasKB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: H5N1 - thread for information sharing, planning, and action, published by MathiasKB on February 5, 2023 on The Effective Altruism Forum.Hi everyone,I've been reading up on H5N1 this weekend, and I'm pretty concerned. Right now my estimate hunch is that there is a 5% non-zero chance that it will cost more than 10,000 people their lives.To be clear, I think it is unlikely that H5N1 will become a pandemic anywhere close to the size of covid.Nevertheless, I think our community should be actively following the news and start thinking about ways to be helpful if the probability increases. I am creating this thread as a place where people can discuss and share information about H5N1. We have a lot of pandemic experts in this community, do chime in!ResourcesArticles(paper showing H5N1 has spread to minks, which is my primary cause for concern)(widely shared, but I'm unsure how much to trust the claims)MarketsManifoldGroup of H5N1 manifold markets:MetaculusPlan for actionFight status quo biasIn January 2020, many in the effective altruism and rationalist communities had correctly gauged the seriousness of the pandemic threat and were warning people publicly about it. Despite being convinced it was likely to become a pandemic I almost entirely failed to act beyond a few symbolic gestures such as stocking up on food/masks and warning relatives.I consider this to have been the biggest personal failing of my life. I could have started initiatives to organize and prepare, I could have invested in mRNA producers, I could have researched how it would affect third-world hospitals. Yet all I did was sit idly by and doom scroll the internet for news about covid.My goal with this thread is to avoid making that mistake ever again, even if it means most likely looking really stupid in a few months time.How can we lower the chance of a serious pandemic?I encourage everyone to think about actionable steps and be ambitious in their thinking. As far as I understand mink-to-human transmission is currently the primary reason to be concerned. What ways are there to minimize the chance of this occuring?The following companies currently own vaccines for H5N1:Sanofi SAAflunovGSK plcQ-Pan H5N1 influenza vaccineCSL LimitedAudenz (and 1-3 more I think?)Roche Holding AG Genussscheineoseltamivir (aka Tamiflu, not a vaccine), this one seems less useful than the othersCould we pay them to start scaling up production tomorrow? One thing to note is that all these vaccines are egg-based. Are mRNA vaccines possible to create for this? If so, what can we do to speed up the process of making them?Any other ideas?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: H5N1 - thread for information sharing, planning, and action, published by MathiasKB on February 5, 2023 on The Effective Altruism Forum.Hi everyone,I've been reading up on H5N1 this weekend, and I'm pretty concerned. Right now my estimate hunch is that there is a 5% non-zero chance that it will cost more than 10,000 people their lives.To be clear, I think it is unlikely that H5N1 will become a pandemic anywhere close to the size of covid.Nevertheless, I think our community should be actively following the news and start thinking about ways to be helpful if the probability increases. I am creating this thread as a place where people can discuss and share information about H5N1. We have a lot of pandemic experts in this community, do chime in!ResourcesArticles(paper showing H5N1 has spread to minks, which is my primary cause for concern)(widely shared, but I'm unsure how much to trust the claims)MarketsManifoldGroup of H5N1 manifold markets:MetaculusPlan for actionFight status quo biasIn January 2020, many in the effective altruism and rationalist communities had correctly gauged the seriousness of the pandemic threat and were warning people publicly about it. Despite being convinced it was likely to become a pandemic I almost entirely failed to act beyond a few symbolic gestures such as stocking up on food/masks and warning relatives.I consider this to have been the biggest personal failing of my life. I could have started initiatives to organize and prepare, I could have invested in mRNA producers, I could have researched how it would affect third-world hospitals. Yet all I did was sit idly by and doom scroll the internet for news about covid.My goal with this thread is to avoid making that mistake ever again, even if it means most likely looking really stupid in a few months time.How can we lower the chance of a serious pandemic?I encourage everyone to think about actionable steps and be ambitious in their thinking. As far as I understand mink-to-human transmission is currently the primary reason to be concerned. What ways are there to minimize the chance of this occuring?The following companies currently own vaccines for H5N1:Sanofi SAAflunovGSK plcQ-Pan H5N1 influenza vaccineCSL LimitedAudenz (and 1-3 more I think?)Roche Holding AG Genussscheineoseltamivir (aka Tamiflu, not a vaccine), this one seems less useful than the othersCould we pay them to start scaling up production tomorrow? One thing to note is that all these vaccines are egg-based. Are mRNA vaccines possible to create for this? If so, what can we do to speed up the process of making them?Any other ideas?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
MathiasKB https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:54 None full 4745
aFGzLDwPrepQLevu6_NL_EA_EA EA - Should EVF consider appointing new board members? by BurnerAcct Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should EVF consider appointing new board members?, published by BurnerAcct on February 5, 2023 on The Effective Altruism Forum.From Announcing Interim CEOs of EVF:The EVF UK board consists of Will MacAskill, Tasha McCauley, Claire Zabel, Owen Cotton-Barratt, and Nick Beckstead. The EVF US board consists of Nick Beckstead, Rebecca Kagan, and Nicole Ross. Given their ties to the FTX Foundation and Future Fund, Will MacAskill and Nick Beckstead are recused from discussions and decision-making that relate to FTX,[4] as they have been since early November.Will MacAskill and Nick Beckstead had significant enough ties to FTX to be recused from EVF FTX-related decision-making, a significant and legally complex element of the boards' current responsibilities.Claire Zabel oversees significant grant-making to EVF organizations through her role at Open Phil, some of which have come under fire. While it is common for funders to serve on boards, it is not necessarily best practice.Nicole Ross is an employee of EVF organization CEA, where she serves as Head of Community Health and Special Projects. It is atypical for non-executive employees to serve on boards where they have oversight and control over their own managers.I do not know relevant details regarding McCauley, Cotton-Barratt, or Kagan.All board members are, to my knowledge, European and American.All listed are, to my knowledge, reputable and generally ethical individuals. However, these connections represent a larger intermingling in EA that is concerning and representative of a culture rife with conflicts of interest. Should EVF consider appointing new board members?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
BurnerAcct https://forum.effectivealtruism.org/posts/aFGzLDwPrepQLevu6/should-evf-consider-appointing-new-board-members Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should EVF consider appointing new board members?, published by BurnerAcct on February 5, 2023 on The Effective Altruism Forum.From Announcing Interim CEOs of EVF:The EVF UK board consists of Will MacAskill, Tasha McCauley, Claire Zabel, Owen Cotton-Barratt, and Nick Beckstead. The EVF US board consists of Nick Beckstead, Rebecca Kagan, and Nicole Ross. Given their ties to the FTX Foundation and Future Fund, Will MacAskill and Nick Beckstead are recused from discussions and decision-making that relate to FTX,[4] as they have been since early November.Will MacAskill and Nick Beckstead had significant enough ties to FTX to be recused from EVF FTX-related decision-making, a significant and legally complex element of the boards' current responsibilities.Claire Zabel oversees significant grant-making to EVF organizations through her role at Open Phil, some of which have come under fire. While it is common for funders to serve on boards, it is not necessarily best practice.Nicole Ross is an employee of EVF organization CEA, where she serves as Head of Community Health and Special Projects. It is atypical for non-executive employees to serve on boards where they have oversight and control over their own managers.I do not know relevant details regarding McCauley, Cotton-Barratt, or Kagan.All board members are, to my knowledge, European and American.All listed are, to my knowledge, reputable and generally ethical individuals. However, these connections represent a larger intermingling in EA that is concerning and representative of a culture rife with conflicts of interest. Should EVF consider appointing new board members?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 05 Feb 2023 10:51:38 +0000 EA - Should EVF consider appointing new board members? by BurnerAcct Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should EVF consider appointing new board members?, published by BurnerAcct on February 5, 2023 on The Effective Altruism Forum.From Announcing Interim CEOs of EVF:The EVF UK board consists of Will MacAskill, Tasha McCauley, Claire Zabel, Owen Cotton-Barratt, and Nick Beckstead. The EVF US board consists of Nick Beckstead, Rebecca Kagan, and Nicole Ross. Given their ties to the FTX Foundation and Future Fund, Will MacAskill and Nick Beckstead are recused from discussions and decision-making that relate to FTX,[4] as they have been since early November.Will MacAskill and Nick Beckstead had significant enough ties to FTX to be recused from EVF FTX-related decision-making, a significant and legally complex element of the boards' current responsibilities.Claire Zabel oversees significant grant-making to EVF organizations through her role at Open Phil, some of which have come under fire. While it is common for funders to serve on boards, it is not necessarily best practice.Nicole Ross is an employee of EVF organization CEA, where she serves as Head of Community Health and Special Projects. It is atypical for non-executive employees to serve on boards where they have oversight and control over their own managers.I do not know relevant details regarding McCauley, Cotton-Barratt, or Kagan.All board members are, to my knowledge, European and American.All listed are, to my knowledge, reputable and generally ethical individuals. However, these connections represent a larger intermingling in EA that is concerning and representative of a culture rife with conflicts of interest. Should EVF consider appointing new board members?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should EVF consider appointing new board members?, published by BurnerAcct on February 5, 2023 on The Effective Altruism Forum.From Announcing Interim CEOs of EVF:The EVF UK board consists of Will MacAskill, Tasha McCauley, Claire Zabel, Owen Cotton-Barratt, and Nick Beckstead. The EVF US board consists of Nick Beckstead, Rebecca Kagan, and Nicole Ross. Given their ties to the FTX Foundation and Future Fund, Will MacAskill and Nick Beckstead are recused from discussions and decision-making that relate to FTX,[4] as they have been since early November.Will MacAskill and Nick Beckstead had significant enough ties to FTX to be recused from EVF FTX-related decision-making, a significant and legally complex element of the boards' current responsibilities.Claire Zabel oversees significant grant-making to EVF organizations through her role at Open Phil, some of which have come under fire. While it is common for funders to serve on boards, it is not necessarily best practice.Nicole Ross is an employee of EVF organization CEA, where she serves as Head of Community Health and Special Projects. It is atypical for non-executive employees to serve on boards where they have oversight and control over their own managers.I do not know relevant details regarding McCauley, Cotton-Barratt, or Kagan.All board members are, to my knowledge, European and American.All listed are, to my knowledge, reputable and generally ethical individuals. However, these connections represent a larger intermingling in EA that is concerning and representative of a culture rife with conflicts of interest. Should EVF consider appointing new board members?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
BurnerAcct https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:54 None full 4748
QFHPbSWKfEcc7cuF5_NL_EA_EA EA - Thank you so much to everyone at CEA who helps with our community's health and forum. by Arvin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thank you so much to everyone at CEA who helps with our community's health and forum., published by Arvin on February 4, 2023 on The Effective Altruism Forum.Overflowing with gratefulness thinking about the crucial work they do while going through so many hard times. I'm emotional and loss for words so all I can say is thank you, thank you, thank you so much for your efforts Julia Wise, Catherine Low, Chana Messinger, Eve McCormick, Nicole Ross, Lorenzo Buonanno, Ryan Fugate, Amber Dawn, Edo Arad, Aaron Gertler, Lizka Vaintrob, Ben West, JP Addison and everyone else who helps with our forum. You folks work insanely hard through recent crazy circumstances. I'm so grateful our community has all of you. Thank you, thank you.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Arvin https://forum.effectivealtruism.org/posts/QFHPbSWKfEcc7cuF5/thank-you-so-much-to-everyone-at-cea-who-helps-with-our Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thank you so much to everyone at CEA who helps with our community's health and forum., published by Arvin on February 4, 2023 on The Effective Altruism Forum.Overflowing with gratefulness thinking about the crucial work they do while going through so many hard times. I'm emotional and loss for words so all I can say is thank you, thank you, thank you so much for your efforts Julia Wise, Catherine Low, Chana Messinger, Eve McCormick, Nicole Ross, Lorenzo Buonanno, Ryan Fugate, Amber Dawn, Edo Arad, Aaron Gertler, Lizka Vaintrob, Ben West, JP Addison and everyone else who helps with our forum. You folks work insanely hard through recent crazy circumstances. I'm so grateful our community has all of you. Thank you, thank you.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 04 Feb 2023 21:43:54 +0000 EA - Thank you so much to everyone at CEA who helps with our community's health and forum. by Arvin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thank you so much to everyone at CEA who helps with our community's health and forum., published by Arvin on February 4, 2023 on The Effective Altruism Forum.Overflowing with gratefulness thinking about the crucial work they do while going through so many hard times. I'm emotional and loss for words so all I can say is thank you, thank you, thank you so much for your efforts Julia Wise, Catherine Low, Chana Messinger, Eve McCormick, Nicole Ross, Lorenzo Buonanno, Ryan Fugate, Amber Dawn, Edo Arad, Aaron Gertler, Lizka Vaintrob, Ben West, JP Addison and everyone else who helps with our forum. You folks work insanely hard through recent crazy circumstances. I'm so grateful our community has all of you. Thank you, thank you.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thank you so much to everyone at CEA who helps with our community's health and forum., published by Arvin on February 4, 2023 on The Effective Altruism Forum.Overflowing with gratefulness thinking about the crucial work they do while going through so many hard times. I'm emotional and loss for words so all I can say is thank you, thank you, thank you so much for your efforts Julia Wise, Catherine Low, Chana Messinger, Eve McCormick, Nicole Ross, Lorenzo Buonanno, Ryan Fugate, Amber Dawn, Edo Arad, Aaron Gertler, Lizka Vaintrob, Ben West, JP Addison and everyone else who helps with our forum. You folks work insanely hard through recent crazy circumstances. I'm so grateful our community has all of you. Thank you, thank you.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Arvin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:02 None full 4736
fMrtoKBFK7p6oRHpu_NL_EA_EA EA - [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy? by Sparcalum2 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy?, published by Sparcalum2 on February 4, 2023 on The Effective Altruism Forum.Note this is a sincere question. Not intended to cause controversy. It was inspired by this post questioning another OP Grant.Full DisclosureI applied to the Atlas Fellowship but was rejected. However, I attended SPARC, a free in-person program that teaches rationality tools to high schoolers (and follows a similar structure to Atlas Fellowship's camp). I'm friends with many Atlas Fellows.What is the Atlas Fellowship?For those newer to the EA Community, the Atlas Fellowship is a competitive program for high schoolers. If you are awarded it, you receive,A $50k scholarship (or $12,000 for Atlas India).Atlas Fellows can spend this money on anything considered an "academic expense". This includes travel expenses if justification can be provided.A fully-funded 11-day summer program in the Bay Area in a large former fraternity on UC Berkley's campus.College admissions preparation for top universities. (The admissions tutors are paid $200-300/hr).Access to the $1m Atlas Fund to learn, experiment, and build impactful projects.For 500 finalists, they receive $1,000 and 5 free books.Total Cost of Prizes$50k x 100 + $12k x 20 + $1k x 500 = $5.74mThis does not include the instructors, venue, or travel costs.What made me write this post?This came to my attention after reading EA London's monthly newsletter. It highlighted new grants that Open Philanthropy made. I learnt that OP made an additional $1.8m grant to the Atlas Fellowship in December 2022This is on top of a $5m grant they made in March 2022 and a $5m grant that the FTX Future Fund made.There is much discussion (even amongst Atlas Fellows) that it is not a good use of money and that high schoolers don't need $50k scholarships; therefore, I felt raising this question is worthwhile and of interest to the wider community.Questions I have for Open Philanthropy and the Atlas FellowshipWhy do high schoolers need $50k scholarships? If the reason is to attract talent, why is this required when programs such as SPARC and ESPR do an excellent job of attracting talented high schoolers?Note that SPARC and ESPR have been running for close to a decade. Many alumni go to top universities worldwide (MIT, Stanford, Oxford, etc.)I estimate each Atlas Fellow costs $80-90k, given you need to divide the total costs by the number of fellows (i.e., instructor cost should be considered).If the answer is to attract better talent, is there a significant difference in talent between Atlas Fellows and those attending SPARC and ESPR that makes this $80-90k money worthwhile? (Note that this would be over 20 lives saved through the Against Malaria Foundation).Why was a $50k scholarship offered if a $25k scholarship would attract, say, 80-90% of the same applicants?I suspect that a $5k unconditional grant that they can spend on whatever would attract just as many quality applications and be much cheaper.What is the breakdown of the socioeconomic background of Atlas Fellows? What countries are all Atlas Fellows from? What about the finalists?Atlas says they're doing "talent search". This connotes finding talent from under-resourced communities or poor students. Do the statistics match this?From friends who are Atlas Fellows, they said many Atlas Fellows do not require the scholarship as their parents earn a lot and can already pay for college. This makes me question why some people are accepted as $50k to the Against Malaria Foundation, which would save over ten lives. (Even more, if you consider it $80-90k). What are the Atlas Fellows spending the money on if their parents have more than enough to pay for college?What measures do they have to identify talent that ot...]]>
Sparcalum2 https://forum.effectivealtruism.org/posts/fMrtoKBFK7p6oRHpu/atlas-fellowship-why-do-100-high-schoolers-need-usd50k-each Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy?, published by Sparcalum2 on February 4, 2023 on The Effective Altruism Forum.Note this is a sincere question. Not intended to cause controversy. It was inspired by this post questioning another OP Grant.Full DisclosureI applied to the Atlas Fellowship but was rejected. However, I attended SPARC, a free in-person program that teaches rationality tools to high schoolers (and follows a similar structure to Atlas Fellowship's camp). I'm friends with many Atlas Fellows.What is the Atlas Fellowship?For those newer to the EA Community, the Atlas Fellowship is a competitive program for high schoolers. If you are awarded it, you receive,A $50k scholarship (or $12,000 for Atlas India).Atlas Fellows can spend this money on anything considered an "academic expense". This includes travel expenses if justification can be provided.A fully-funded 11-day summer program in the Bay Area in a large former fraternity on UC Berkley's campus.College admissions preparation for top universities. (The admissions tutors are paid $200-300/hr).Access to the $1m Atlas Fund to learn, experiment, and build impactful projects.For 500 finalists, they receive $1,000 and 5 free books.Total Cost of Prizes$50k x 100 + $12k x 20 + $1k x 500 = $5.74mThis does not include the instructors, venue, or travel costs.What made me write this post?This came to my attention after reading EA London's monthly newsletter. It highlighted new grants that Open Philanthropy made. I learnt that OP made an additional $1.8m grant to the Atlas Fellowship in December 2022This is on top of a $5m grant they made in March 2022 and a $5m grant that the FTX Future Fund made.There is much discussion (even amongst Atlas Fellows) that it is not a good use of money and that high schoolers don't need $50k scholarships; therefore, I felt raising this question is worthwhile and of interest to the wider community.Questions I have for Open Philanthropy and the Atlas FellowshipWhy do high schoolers need $50k scholarships? If the reason is to attract talent, why is this required when programs such as SPARC and ESPR do an excellent job of attracting talented high schoolers?Note that SPARC and ESPR have been running for close to a decade. Many alumni go to top universities worldwide (MIT, Stanford, Oxford, etc.)I estimate each Atlas Fellow costs $80-90k, given you need to divide the total costs by the number of fellows (i.e., instructor cost should be considered).If the answer is to attract better talent, is there a significant difference in talent between Atlas Fellows and those attending SPARC and ESPR that makes this $80-90k money worthwhile? (Note that this would be over 20 lives saved through the Against Malaria Foundation).Why was a $50k scholarship offered if a $25k scholarship would attract, say, 80-90% of the same applicants?I suspect that a $5k unconditional grant that they can spend on whatever would attract just as many quality applications and be much cheaper.What is the breakdown of the socioeconomic background of Atlas Fellows? What countries are all Atlas Fellows from? What about the finalists?Atlas says they're doing "talent search". This connotes finding talent from under-resourced communities or poor students. Do the statistics match this?From friends who are Atlas Fellows, they said many Atlas Fellows do not require the scholarship as their parents earn a lot and can already pay for college. This makes me question why some people are accepted as $50k to the Against Malaria Foundation, which would save over ten lives. (Even more, if you consider it $80-90k). What are the Atlas Fellows spending the money on if their parents have more than enough to pay for college?What measures do they have to identify talent that ot...]]>
Sat, 04 Feb 2023 17:18:42 +0000 EA - [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy? by Sparcalum2 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy?, published by Sparcalum2 on February 4, 2023 on The Effective Altruism Forum.Note this is a sincere question. Not intended to cause controversy. It was inspired by this post questioning another OP Grant.Full DisclosureI applied to the Atlas Fellowship but was rejected. However, I attended SPARC, a free in-person program that teaches rationality tools to high schoolers (and follows a similar structure to Atlas Fellowship's camp). I'm friends with many Atlas Fellows.What is the Atlas Fellowship?For those newer to the EA Community, the Atlas Fellowship is a competitive program for high schoolers. If you are awarded it, you receive,A $50k scholarship (or $12,000 for Atlas India).Atlas Fellows can spend this money on anything considered an "academic expense". This includes travel expenses if justification can be provided.A fully-funded 11-day summer program in the Bay Area in a large former fraternity on UC Berkley's campus.College admissions preparation for top universities. (The admissions tutors are paid $200-300/hr).Access to the $1m Atlas Fund to learn, experiment, and build impactful projects.For 500 finalists, they receive $1,000 and 5 free books.Total Cost of Prizes$50k x 100 + $12k x 20 + $1k x 500 = $5.74mThis does not include the instructors, venue, or travel costs.What made me write this post?This came to my attention after reading EA London's monthly newsletter. It highlighted new grants that Open Philanthropy made. I learnt that OP made an additional $1.8m grant to the Atlas Fellowship in December 2022This is on top of a $5m grant they made in March 2022 and a $5m grant that the FTX Future Fund made.There is much discussion (even amongst Atlas Fellows) that it is not a good use of money and that high schoolers don't need $50k scholarships; therefore, I felt raising this question is worthwhile and of interest to the wider community.Questions I have for Open Philanthropy and the Atlas FellowshipWhy do high schoolers need $50k scholarships? If the reason is to attract talent, why is this required when programs such as SPARC and ESPR do an excellent job of attracting talented high schoolers?Note that SPARC and ESPR have been running for close to a decade. Many alumni go to top universities worldwide (MIT, Stanford, Oxford, etc.)I estimate each Atlas Fellow costs $80-90k, given you need to divide the total costs by the number of fellows (i.e., instructor cost should be considered).If the answer is to attract better talent, is there a significant difference in talent between Atlas Fellows and those attending SPARC and ESPR that makes this $80-90k money worthwhile? (Note that this would be over 20 lives saved through the Against Malaria Foundation).Why was a $50k scholarship offered if a $25k scholarship would attract, say, 80-90% of the same applicants?I suspect that a $5k unconditional grant that they can spend on whatever would attract just as many quality applications and be much cheaper.What is the breakdown of the socioeconomic background of Atlas Fellows? What countries are all Atlas Fellows from? What about the finalists?Atlas says they're doing "talent search". This connotes finding talent from under-resourced communities or poor students. Do the statistics match this?From friends who are Atlas Fellows, they said many Atlas Fellows do not require the scholarship as their parents earn a lot and can already pay for college. This makes me question why some people are accepted as $50k to the Against Malaria Foundation, which would save over ten lives. (Even more, if you consider it $80-90k). What are the Atlas Fellows spending the money on if their parents have more than enough to pay for college?What measures do they have to identify talent that ot...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy?, published by Sparcalum2 on February 4, 2023 on The Effective Altruism Forum.Note this is a sincere question. Not intended to cause controversy. It was inspired by this post questioning another OP Grant.Full DisclosureI applied to the Atlas Fellowship but was rejected. However, I attended SPARC, a free in-person program that teaches rationality tools to high schoolers (and follows a similar structure to Atlas Fellowship's camp). I'm friends with many Atlas Fellows.What is the Atlas Fellowship?For those newer to the EA Community, the Atlas Fellowship is a competitive program for high schoolers. If you are awarded it, you receive,A $50k scholarship (or $12,000 for Atlas India).Atlas Fellows can spend this money on anything considered an "academic expense". This includes travel expenses if justification can be provided.A fully-funded 11-day summer program in the Bay Area in a large former fraternity on UC Berkley's campus.College admissions preparation for top universities. (The admissions tutors are paid $200-300/hr).Access to the $1m Atlas Fund to learn, experiment, and build impactful projects.For 500 finalists, they receive $1,000 and 5 free books.Total Cost of Prizes$50k x 100 + $12k x 20 + $1k x 500 = $5.74mThis does not include the instructors, venue, or travel costs.What made me write this post?This came to my attention after reading EA London's monthly newsletter. It highlighted new grants that Open Philanthropy made. I learnt that OP made an additional $1.8m grant to the Atlas Fellowship in December 2022This is on top of a $5m grant they made in March 2022 and a $5m grant that the FTX Future Fund made.There is much discussion (even amongst Atlas Fellows) that it is not a good use of money and that high schoolers don't need $50k scholarships; therefore, I felt raising this question is worthwhile and of interest to the wider community.Questions I have for Open Philanthropy and the Atlas FellowshipWhy do high schoolers need $50k scholarships? If the reason is to attract talent, why is this required when programs such as SPARC and ESPR do an excellent job of attracting talented high schoolers?Note that SPARC and ESPR have been running for close to a decade. Many alumni go to top universities worldwide (MIT, Stanford, Oxford, etc.)I estimate each Atlas Fellow costs $80-90k, given you need to divide the total costs by the number of fellows (i.e., instructor cost should be considered).If the answer is to attract better talent, is there a significant difference in talent between Atlas Fellows and those attending SPARC and ESPR that makes this $80-90k money worthwhile? (Note that this would be over 20 lives saved through the Against Malaria Foundation).Why was a $50k scholarship offered if a $25k scholarship would attract, say, 80-90% of the same applicants?I suspect that a $5k unconditional grant that they can spend on whatever would attract just as many quality applications and be much cheaper.What is the breakdown of the socioeconomic background of Atlas Fellows? What countries are all Atlas Fellows from? What about the finalists?Atlas says they're doing "talent search". This connotes finding talent from under-resourced communities or poor students. Do the statistics match this?From friends who are Atlas Fellows, they said many Atlas Fellows do not require the scholarship as their parents earn a lot and can already pay for college. This makes me question why some people are accepted as $50k to the Against Malaria Foundation, which would save over ten lives. (Even more, if you consider it $80-90k). What are the Atlas Fellows spending the money on if their parents have more than enough to pay for college?What measures do they have to identify talent that ot...]]>
Sparcalum2 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:36 None full 4737
trqswoctpQ92tcY2y_NL_EA_EA EA - Criticism Thread: What things should OpenPhil improve on? by anonymousEA20 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Criticism Thread: What things should OpenPhil improve on?, published by anonymousEA20 on February 4, 2023 on The Effective Altruism Forum.This post was prompted by three other posts on the EA forum. A recent post raises the alarm about abuse of power and relationships in the EA community. An earlier post suggested that the EA community welcomes shallow critiques but is less receptive to deep critiques. Dustin Moskovitz has recently mentioned that the EA forum functions as a conflict of interest appeals board for Open Philanthropy. Yet on the EA forum, there doesn’t seem to be many specific criticisms of Open Philanthropy.For the sake of epistemics, I wanted to create this post and invite individuals to voice any issues they may have with Open Philanthropy or propose potential solutions.I’ll start by discussing the funding dynamics within the field of technical alignment (alignment theory, applied alignment), with a particular focus on Open Philanthropy.In the past two years, the technical alignment organisations which have received substantial funding include:Anthropic (The president of Anthropic is the wife of Open Philanthropy’s CEO. The CEO of Anthropic is the brother-in-law of Open Philanthropy’s CEO.)ARC (The CEO is married to an Open Philanthropy grantmaker, according to facebook.)CHAISERI MATS (A director/main leader has had a relationship with an Open Philanthropy grantmaker.)Redwood Research (A director/main leader is engaged to an Open Philanthropy grantmaker, according to facebook. Open Philanthropy’s main technical alignment funders are also working out of their office.)All of these organisations are situated in the San Francisco Bay Area. Although many people are thinking about the alignment problem, there is much less funding for technical alignment researchers for other locations (e.g., the east coast of the US, the UK, or other parts of Europe).This collectively indicates that, all else being equal, having strong or intimate connections with employees of Open Philanthropy greatly enhances the chances of having funding, and it seems almost necessary. As a concerned EA, this seems incredibly alarming and in need of significant reform. Residency in the San Francisco Bay Area is also a must. A skeptical perspective would be that Open Philanthropy allocates its resources to those with the most political access. Since it's hard to solve the alignment problem, the only people grantmakers end up trusting to do so are those who are very close to them.This is a problem with Open Philanthropy’s design and processes, and points to the biases of the technical alignment grantmakers and decision makers. This seems almost inevitable given (1) community norms around conflicts of interest and (2) Open Philanthropy’s strong centralization of power. This is not to say that any specific individual is to blame. Instead processes, structure, and norms are more useful to direct reforms towards.Right now, even if a highly respected alignment researcher thinks what you do is extremely valuable, the decision ultimately can be blocked by an Open Philanthropy grantmaker, which could cause people to leave alignment altogether.One common suggestion involves making the grantmaking process more democratic or less centralised. For example, the “regranting” approach has been successful for other grantmakers. This involves selecting a large pool of grantmakers or regrantors who have the autonomy to make their own decisions. With more grantmakers, there is less potential for Goodharting by individuals and reduces the likelihood of funding only those who are known best by a few Open Philanthropy staff. Additionally, Open Philanthropy can still choose regrantors who are more aligned with EA values or have previously demonstrated good judgement. A smaller thing that could help...]]>
anonymousEA20 https://forum.effectivealtruism.org/posts/trqswoctpQ92tcY2y/criticism-thread-what-things-should-openphil-improve-on Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Criticism Thread: What things should OpenPhil improve on?, published by anonymousEA20 on February 4, 2023 on The Effective Altruism Forum.This post was prompted by three other posts on the EA forum. A recent post raises the alarm about abuse of power and relationships in the EA community. An earlier post suggested that the EA community welcomes shallow critiques but is less receptive to deep critiques. Dustin Moskovitz has recently mentioned that the EA forum functions as a conflict of interest appeals board for Open Philanthropy. Yet on the EA forum, there doesn’t seem to be many specific criticisms of Open Philanthropy.For the sake of epistemics, I wanted to create this post and invite individuals to voice any issues they may have with Open Philanthropy or propose potential solutions.I’ll start by discussing the funding dynamics within the field of technical alignment (alignment theory, applied alignment), with a particular focus on Open Philanthropy.In the past two years, the technical alignment organisations which have received substantial funding include:Anthropic (The president of Anthropic is the wife of Open Philanthropy’s CEO. The CEO of Anthropic is the brother-in-law of Open Philanthropy’s CEO.)ARC (The CEO is married to an Open Philanthropy grantmaker, according to facebook.)CHAISERI MATS (A director/main leader has had a relationship with an Open Philanthropy grantmaker.)Redwood Research (A director/main leader is engaged to an Open Philanthropy grantmaker, according to facebook. Open Philanthropy’s main technical alignment funders are also working out of their office.)All of these organisations are situated in the San Francisco Bay Area. Although many people are thinking about the alignment problem, there is much less funding for technical alignment researchers for other locations (e.g., the east coast of the US, the UK, or other parts of Europe).This collectively indicates that, all else being equal, having strong or intimate connections with employees of Open Philanthropy greatly enhances the chances of having funding, and it seems almost necessary. As a concerned EA, this seems incredibly alarming and in need of significant reform. Residency in the San Francisco Bay Area is also a must. A skeptical perspective would be that Open Philanthropy allocates its resources to those with the most political access. Since it's hard to solve the alignment problem, the only people grantmakers end up trusting to do so are those who are very close to them.This is a problem with Open Philanthropy’s design and processes, and points to the biases of the technical alignment grantmakers and decision makers. This seems almost inevitable given (1) community norms around conflicts of interest and (2) Open Philanthropy’s strong centralization of power. This is not to say that any specific individual is to blame. Instead processes, structure, and norms are more useful to direct reforms towards.Right now, even if a highly respected alignment researcher thinks what you do is extremely valuable, the decision ultimately can be blocked by an Open Philanthropy grantmaker, which could cause people to leave alignment altogether.One common suggestion involves making the grantmaking process more democratic or less centralised. For example, the “regranting” approach has been successful for other grantmakers. This involves selecting a large pool of grantmakers or regrantors who have the autonomy to make their own decisions. With more grantmakers, there is less potential for Goodharting by individuals and reduces the likelihood of funding only those who are known best by a few Open Philanthropy staff. Additionally, Open Philanthropy can still choose regrantors who are more aligned with EA values or have previously demonstrated good judgement. A smaller thing that could help...]]>
Sat, 04 Feb 2023 13:04:10 +0000 EA - Criticism Thread: What things should OpenPhil improve on? by anonymousEA20 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Criticism Thread: What things should OpenPhil improve on?, published by anonymousEA20 on February 4, 2023 on The Effective Altruism Forum.This post was prompted by three other posts on the EA forum. A recent post raises the alarm about abuse of power and relationships in the EA community. An earlier post suggested that the EA community welcomes shallow critiques but is less receptive to deep critiques. Dustin Moskovitz has recently mentioned that the EA forum functions as a conflict of interest appeals board for Open Philanthropy. Yet on the EA forum, there doesn’t seem to be many specific criticisms of Open Philanthropy.For the sake of epistemics, I wanted to create this post and invite individuals to voice any issues they may have with Open Philanthropy or propose potential solutions.I’ll start by discussing the funding dynamics within the field of technical alignment (alignment theory, applied alignment), with a particular focus on Open Philanthropy.In the past two years, the technical alignment organisations which have received substantial funding include:Anthropic (The president of Anthropic is the wife of Open Philanthropy’s CEO. The CEO of Anthropic is the brother-in-law of Open Philanthropy’s CEO.)ARC (The CEO is married to an Open Philanthropy grantmaker, according to facebook.)CHAISERI MATS (A director/main leader has had a relationship with an Open Philanthropy grantmaker.)Redwood Research (A director/main leader is engaged to an Open Philanthropy grantmaker, according to facebook. Open Philanthropy’s main technical alignment funders are also working out of their office.)All of these organisations are situated in the San Francisco Bay Area. Although many people are thinking about the alignment problem, there is much less funding for technical alignment researchers for other locations (e.g., the east coast of the US, the UK, or other parts of Europe).This collectively indicates that, all else being equal, having strong or intimate connections with employees of Open Philanthropy greatly enhances the chances of having funding, and it seems almost necessary. As a concerned EA, this seems incredibly alarming and in need of significant reform. Residency in the San Francisco Bay Area is also a must. A skeptical perspective would be that Open Philanthropy allocates its resources to those with the most political access. Since it's hard to solve the alignment problem, the only people grantmakers end up trusting to do so are those who are very close to them.This is a problem with Open Philanthropy’s design and processes, and points to the biases of the technical alignment grantmakers and decision makers. This seems almost inevitable given (1) community norms around conflicts of interest and (2) Open Philanthropy’s strong centralization of power. This is not to say that any specific individual is to blame. Instead processes, structure, and norms are more useful to direct reforms towards.Right now, even if a highly respected alignment researcher thinks what you do is extremely valuable, the decision ultimately can be blocked by an Open Philanthropy grantmaker, which could cause people to leave alignment altogether.One common suggestion involves making the grantmaking process more democratic or less centralised. For example, the “regranting” approach has been successful for other grantmakers. This involves selecting a large pool of grantmakers or regrantors who have the autonomy to make their own decisions. With more grantmakers, there is less potential for Goodharting by individuals and reduces the likelihood of funding only those who are known best by a few Open Philanthropy staff. Additionally, Open Philanthropy can still choose regrantors who are more aligned with EA values or have previously demonstrated good judgement. A smaller thing that could help...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Criticism Thread: What things should OpenPhil improve on?, published by anonymousEA20 on February 4, 2023 on The Effective Altruism Forum.This post was prompted by three other posts on the EA forum. A recent post raises the alarm about abuse of power and relationships in the EA community. An earlier post suggested that the EA community welcomes shallow critiques but is less receptive to deep critiques. Dustin Moskovitz has recently mentioned that the EA forum functions as a conflict of interest appeals board for Open Philanthropy. Yet on the EA forum, there doesn’t seem to be many specific criticisms of Open Philanthropy.For the sake of epistemics, I wanted to create this post and invite individuals to voice any issues they may have with Open Philanthropy or propose potential solutions.I’ll start by discussing the funding dynamics within the field of technical alignment (alignment theory, applied alignment), with a particular focus on Open Philanthropy.In the past two years, the technical alignment organisations which have received substantial funding include:Anthropic (The president of Anthropic is the wife of Open Philanthropy’s CEO. The CEO of Anthropic is the brother-in-law of Open Philanthropy’s CEO.)ARC (The CEO is married to an Open Philanthropy grantmaker, according to facebook.)CHAISERI MATS (A director/main leader has had a relationship with an Open Philanthropy grantmaker.)Redwood Research (A director/main leader is engaged to an Open Philanthropy grantmaker, according to facebook. Open Philanthropy’s main technical alignment funders are also working out of their office.)All of these organisations are situated in the San Francisco Bay Area. Although many people are thinking about the alignment problem, there is much less funding for technical alignment researchers for other locations (e.g., the east coast of the US, the UK, or other parts of Europe).This collectively indicates that, all else being equal, having strong or intimate connections with employees of Open Philanthropy greatly enhances the chances of having funding, and it seems almost necessary. As a concerned EA, this seems incredibly alarming and in need of significant reform. Residency in the San Francisco Bay Area is also a must. A skeptical perspective would be that Open Philanthropy allocates its resources to those with the most political access. Since it's hard to solve the alignment problem, the only people grantmakers end up trusting to do so are those who are very close to them.This is a problem with Open Philanthropy’s design and processes, and points to the biases of the technical alignment grantmakers and decision makers. This seems almost inevitable given (1) community norms around conflicts of interest and (2) Open Philanthropy’s strong centralization of power. This is not to say that any specific individual is to blame. Instead processes, structure, and norms are more useful to direct reforms towards.Right now, even if a highly respected alignment researcher thinks what you do is extremely valuable, the decision ultimately can be blocked by an Open Philanthropy grantmaker, which could cause people to leave alignment altogether.One common suggestion involves making the grantmaking process more democratic or less centralised. For example, the “regranting” approach has been successful for other grantmakers. This involves selecting a large pool of grantmakers or regrantors who have the autonomy to make their own decisions. With more grantmakers, there is less potential for Goodharting by individuals and reduces the likelihood of funding only those who are known best by a few Open Philanthropy staff. Additionally, Open Philanthropy can still choose regrantors who are more aligned with EA values or have previously demonstrated good judgement. A smaller thing that could help...]]>
anonymousEA20 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:08 None full 4738
GPDmbGxrtsNCWaB49_NL_EA_EA EA - EA NYC’s Community Health Infrastructure by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA NYC’s Community Health Infrastructure, published by Rockwell on February 3, 2023 on The Effective Altruism Forum.Written by Alex Rahl-Kaplan (EA NYC Community Coordinator), Megan Nelson (EA NYC Community Health Coordinator), and Rockwell Schwartz (EA NYC Director).The mission of EA NYC is to support the growth and success of the effective altruism community in New York City and beyond by fostering a welcoming, inspiring, and productive hub that serves those aiming to do the most good. In support of this mission, over the course of 2022, the team at EA NYC worked to scale our community health infrastructure. The EA NYC team believes that strong community health is a priority and a crucial component of achieving our collective aims. We also believe our investment in community health to date has been fruitful and we intend to continue to prioritize and expand our community health infrastructure, as there are ever-evolving community needs and always room for improvement.In this post, we outline what the EA NYC team has been doing over the past year to support the health of our local EA community. We encourage other EA communities to consider creating similar infrastructure and we are happy to serve as a resource and partner in such efforts. We also welcome feedback and would love to learn from others as we continuously strive to make EA a safer and more inclusive community.Paid, Part-Time Community Health Coordinator HiredIn April 2022, EA NYC hired Megan Nelson as a paid, part-time Community Health Coordinator through the CEA Community Building Grants program.Megan’s Background: Megan is a licensed master social worker in New York State with a decade of experience serving individuals, families, and groups. She became involved in EA before it was EA, after reading The Life You Can Save upon its publication in 2009.Megan’s Role: Megan advises the EA NYC team on sustainable community growth, and is available as a point-person for those who have concerns related to the NYC EA community. Megan serves as a confidential, supportive resource. You can email her here, or contact her anonymously here.Megan also helps coordinate NYC Community Builders monthly dinners (discussed below), gave presentations at EAG DC and EAGxBerkeley on self-care and burnout, organized group accommodations for EAGxBoston 2022, spoke at a university groups retreat, was a volunteer facilitator for EA NYU’s Spring 2022 Intro Fellowship, and co-leads EA NYC's new Women and Non-Binary subgroup (discussed below). Megan played a central role in the creation of guidelines for the potential future NY EA coworking space and a fair and balanced system for vetting potential users.Publicizing Megan’s Existence: EA NYC publicizes the existence of the Community Health Coordinator through our website, monthly newsletter, and all community surveys. We also explain the role during our events (at which Megan is often present) and encourage the community to utilize Megan as a resource.EA NYC CH vs. CEA CH: The EA NYC team continues to utilize and work with CEA’s Community Health Team and the EA NYC Community Health Coordinator role should be viewed as an additional resource to, rather than a replacement for, CEA’s CH work. By having multiple point-people available, we hope to offer our community members more options for reporting their concerns as well as a consistent, local resource.Weekly Community Health All-Team CallsOnce a week, the EA NYC team has a 45-to-60-minute Community Health call. We created this dedicated weekly meeting in order to keep community health front-and-center in our decision-making and programming. During these calls, the Community Health Coordinator, Community Coordinator, and Director discuss a range of topics that are largely proactive, rather than reactive to existing issues. C...]]>
Rockwell https://forum.effectivealtruism.org/posts/GPDmbGxrtsNCWaB49/ea-nyc-s-community-health-infrastructure Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA NYC’s Community Health Infrastructure, published by Rockwell on February 3, 2023 on The Effective Altruism Forum.Written by Alex Rahl-Kaplan (EA NYC Community Coordinator), Megan Nelson (EA NYC Community Health Coordinator), and Rockwell Schwartz (EA NYC Director).The mission of EA NYC is to support the growth and success of the effective altruism community in New York City and beyond by fostering a welcoming, inspiring, and productive hub that serves those aiming to do the most good. In support of this mission, over the course of 2022, the team at EA NYC worked to scale our community health infrastructure. The EA NYC team believes that strong community health is a priority and a crucial component of achieving our collective aims. We also believe our investment in community health to date has been fruitful and we intend to continue to prioritize and expand our community health infrastructure, as there are ever-evolving community needs and always room for improvement.In this post, we outline what the EA NYC team has been doing over the past year to support the health of our local EA community. We encourage other EA communities to consider creating similar infrastructure and we are happy to serve as a resource and partner in such efforts. We also welcome feedback and would love to learn from others as we continuously strive to make EA a safer and more inclusive community.Paid, Part-Time Community Health Coordinator HiredIn April 2022, EA NYC hired Megan Nelson as a paid, part-time Community Health Coordinator through the CEA Community Building Grants program.Megan’s Background: Megan is a licensed master social worker in New York State with a decade of experience serving individuals, families, and groups. She became involved in EA before it was EA, after reading The Life You Can Save upon its publication in 2009.Megan’s Role: Megan advises the EA NYC team on sustainable community growth, and is available as a point-person for those who have concerns related to the NYC EA community. Megan serves as a confidential, supportive resource. You can email her here, or contact her anonymously here.Megan also helps coordinate NYC Community Builders monthly dinners (discussed below), gave presentations at EAG DC and EAGxBerkeley on self-care and burnout, organized group accommodations for EAGxBoston 2022, spoke at a university groups retreat, was a volunteer facilitator for EA NYU’s Spring 2022 Intro Fellowship, and co-leads EA NYC's new Women and Non-Binary subgroup (discussed below). Megan played a central role in the creation of guidelines for the potential future NY EA coworking space and a fair and balanced system for vetting potential users.Publicizing Megan’s Existence: EA NYC publicizes the existence of the Community Health Coordinator through our website, monthly newsletter, and all community surveys. We also explain the role during our events (at which Megan is often present) and encourage the community to utilize Megan as a resource.EA NYC CH vs. CEA CH: The EA NYC team continues to utilize and work with CEA’s Community Health Team and the EA NYC Community Health Coordinator role should be viewed as an additional resource to, rather than a replacement for, CEA’s CH work. By having multiple point-people available, we hope to offer our community members more options for reporting their concerns as well as a consistent, local resource.Weekly Community Health All-Team CallsOnce a week, the EA NYC team has a 45-to-60-minute Community Health call. We created this dedicated weekly meeting in order to keep community health front-and-center in our decision-making and programming. During these calls, the Community Health Coordinator, Community Coordinator, and Director discuss a range of topics that are largely proactive, rather than reactive to existing issues. C...]]>
Fri, 03 Feb 2023 21:50:25 +0000 EA - EA NYC’s Community Health Infrastructure by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA NYC’s Community Health Infrastructure, published by Rockwell on February 3, 2023 on The Effective Altruism Forum.Written by Alex Rahl-Kaplan (EA NYC Community Coordinator), Megan Nelson (EA NYC Community Health Coordinator), and Rockwell Schwartz (EA NYC Director).The mission of EA NYC is to support the growth and success of the effective altruism community in New York City and beyond by fostering a welcoming, inspiring, and productive hub that serves those aiming to do the most good. In support of this mission, over the course of 2022, the team at EA NYC worked to scale our community health infrastructure. The EA NYC team believes that strong community health is a priority and a crucial component of achieving our collective aims. We also believe our investment in community health to date has been fruitful and we intend to continue to prioritize and expand our community health infrastructure, as there are ever-evolving community needs and always room for improvement.In this post, we outline what the EA NYC team has been doing over the past year to support the health of our local EA community. We encourage other EA communities to consider creating similar infrastructure and we are happy to serve as a resource and partner in such efforts. We also welcome feedback and would love to learn from others as we continuously strive to make EA a safer and more inclusive community.Paid, Part-Time Community Health Coordinator HiredIn April 2022, EA NYC hired Megan Nelson as a paid, part-time Community Health Coordinator through the CEA Community Building Grants program.Megan’s Background: Megan is a licensed master social worker in New York State with a decade of experience serving individuals, families, and groups. She became involved in EA before it was EA, after reading The Life You Can Save upon its publication in 2009.Megan’s Role: Megan advises the EA NYC team on sustainable community growth, and is available as a point-person for those who have concerns related to the NYC EA community. Megan serves as a confidential, supportive resource. You can email her here, or contact her anonymously here.Megan also helps coordinate NYC Community Builders monthly dinners (discussed below), gave presentations at EAG DC and EAGxBerkeley on self-care and burnout, organized group accommodations for EAGxBoston 2022, spoke at a university groups retreat, was a volunteer facilitator for EA NYU’s Spring 2022 Intro Fellowship, and co-leads EA NYC's new Women and Non-Binary subgroup (discussed below). Megan played a central role in the creation of guidelines for the potential future NY EA coworking space and a fair and balanced system for vetting potential users.Publicizing Megan’s Existence: EA NYC publicizes the existence of the Community Health Coordinator through our website, monthly newsletter, and all community surveys. We also explain the role during our events (at which Megan is often present) and encourage the community to utilize Megan as a resource.EA NYC CH vs. CEA CH: The EA NYC team continues to utilize and work with CEA’s Community Health Team and the EA NYC Community Health Coordinator role should be viewed as an additional resource to, rather than a replacement for, CEA’s CH work. By having multiple point-people available, we hope to offer our community members more options for reporting their concerns as well as a consistent, local resource.Weekly Community Health All-Team CallsOnce a week, the EA NYC team has a 45-to-60-minute Community Health call. We created this dedicated weekly meeting in order to keep community health front-and-center in our decision-making and programming. During these calls, the Community Health Coordinator, Community Coordinator, and Director discuss a range of topics that are largely proactive, rather than reactive to existing issues. C...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA NYC’s Community Health Infrastructure, published by Rockwell on February 3, 2023 on The Effective Altruism Forum.Written by Alex Rahl-Kaplan (EA NYC Community Coordinator), Megan Nelson (EA NYC Community Health Coordinator), and Rockwell Schwartz (EA NYC Director).The mission of EA NYC is to support the growth and success of the effective altruism community in New York City and beyond by fostering a welcoming, inspiring, and productive hub that serves those aiming to do the most good. In support of this mission, over the course of 2022, the team at EA NYC worked to scale our community health infrastructure. The EA NYC team believes that strong community health is a priority and a crucial component of achieving our collective aims. We also believe our investment in community health to date has been fruitful and we intend to continue to prioritize and expand our community health infrastructure, as there are ever-evolving community needs and always room for improvement.In this post, we outline what the EA NYC team has been doing over the past year to support the health of our local EA community. We encourage other EA communities to consider creating similar infrastructure and we are happy to serve as a resource and partner in such efforts. We also welcome feedback and would love to learn from others as we continuously strive to make EA a safer and more inclusive community.Paid, Part-Time Community Health Coordinator HiredIn April 2022, EA NYC hired Megan Nelson as a paid, part-time Community Health Coordinator through the CEA Community Building Grants program.Megan’s Background: Megan is a licensed master social worker in New York State with a decade of experience serving individuals, families, and groups. She became involved in EA before it was EA, after reading The Life You Can Save upon its publication in 2009.Megan’s Role: Megan advises the EA NYC team on sustainable community growth, and is available as a point-person for those who have concerns related to the NYC EA community. Megan serves as a confidential, supportive resource. You can email her here, or contact her anonymously here.Megan also helps coordinate NYC Community Builders monthly dinners (discussed below), gave presentations at EAG DC and EAGxBerkeley on self-care and burnout, organized group accommodations for EAGxBoston 2022, spoke at a university groups retreat, was a volunteer facilitator for EA NYU’s Spring 2022 Intro Fellowship, and co-leads EA NYC's new Women and Non-Binary subgroup (discussed below). Megan played a central role in the creation of guidelines for the potential future NY EA coworking space and a fair and balanced system for vetting potential users.Publicizing Megan’s Existence: EA NYC publicizes the existence of the Community Health Coordinator through our website, monthly newsletter, and all community surveys. We also explain the role during our events (at which Megan is often present) and encourage the community to utilize Megan as a resource.EA NYC CH vs. CEA CH: The EA NYC team continues to utilize and work with CEA’s Community Health Team and the EA NYC Community Health Coordinator role should be viewed as an additional resource to, rather than a replacement for, CEA’s CH work. By having multiple point-people available, we hope to offer our community members more options for reporting their concerns as well as a consistent, local resource.Weekly Community Health All-Team CallsOnce a week, the EA NYC team has a 45-to-60-minute Community Health call. We created this dedicated weekly meeting in order to keep community health front-and-center in our decision-making and programming. During these calls, the Community Health Coordinator, Community Coordinator, and Director discuss a range of topics that are largely proactive, rather than reactive to existing issues. C...]]>
Rockwell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:43 None full 4719
FnszH6ZGBi9hd8rtv_NL_EA_EA EA - Google invests $300mn in artificial intelligence start-up Anthropic | FT by 𝕮𝖎𝖓𝖊𝖗𝖆 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Google invests $300mn in artificial intelligence start-up Anthropic | FT, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on February 3, 2023 on The Effective Altruism Forum.The terms of the deal, through which Google will take a stake of about 10 per cent, requires Anthropic to use the money to buy computing resources from the search company’s cloud computing division, said three people familiar with the arrangement.While Microsoft has sought to integrate OpenAI’s technology into many of its own services, Google’s relationship with Anthropic is limited to acting as the company’s tech supplier in what has become an AI arm’s race, according to people familiar with the arrangement."Anthropic has developed an intelligent chatbot called Claude, rivalling OpenAI’s ChatGPT, though it has not yet been released publicly.""The start-up had raised more than $700mn before Google’s investment, which was made in late 2022 but has not previously been reported. The company’s biggest investor is Alameda Research, the crypto hedge fund of FTX founder Sam Bankman-Fried, which put in $500mn before filing for bankruptcy last year. FTX’s bankruptcy estate has flagged Anthropic as an asset that may help creditors with recoveries.""Google’s investment was made by its cloud division, run by former Oracle executive Thomas Kurian. Bringing Anthropic’s data-intensive computing work to Google’s data centres is part of an effort to catch up with the lead Microsoft has taken in the fast-growing AI market thanks to its work with OpenAI. Google’s cloud division is also working with other start-ups such as Cohere and C3 to try to secure a bigger foothold in AI."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
𝕮𝖎𝖓𝖊𝖗𝖆 https://forum.effectivealtruism.org/posts/FnszH6ZGBi9hd8rtv/google-invests-usd300mn-in-artificial-intelligence-start-up Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Google invests $300mn in artificial intelligence start-up Anthropic | FT, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on February 3, 2023 on The Effective Altruism Forum.The terms of the deal, through which Google will take a stake of about 10 per cent, requires Anthropic to use the money to buy computing resources from the search company’s cloud computing division, said three people familiar with the arrangement.While Microsoft has sought to integrate OpenAI’s technology into many of its own services, Google’s relationship with Anthropic is limited to acting as the company’s tech supplier in what has become an AI arm’s race, according to people familiar with the arrangement."Anthropic has developed an intelligent chatbot called Claude, rivalling OpenAI’s ChatGPT, though it has not yet been released publicly.""The start-up had raised more than $700mn before Google’s investment, which was made in late 2022 but has not previously been reported. The company’s biggest investor is Alameda Research, the crypto hedge fund of FTX founder Sam Bankman-Fried, which put in $500mn before filing for bankruptcy last year. FTX’s bankruptcy estate has flagged Anthropic as an asset that may help creditors with recoveries.""Google’s investment was made by its cloud division, run by former Oracle executive Thomas Kurian. Bringing Anthropic’s data-intensive computing work to Google’s data centres is part of an effort to catch up with the lead Microsoft has taken in the fast-growing AI market thanks to its work with OpenAI. Google’s cloud division is also working with other start-ups such as Cohere and C3 to try to secure a bigger foothold in AI."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 03 Feb 2023 20:22:52 +0000 EA - Google invests $300mn in artificial intelligence start-up Anthropic | FT by 𝕮𝖎𝖓𝖊𝖗𝖆 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Google invests $300mn in artificial intelligence start-up Anthropic | FT, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on February 3, 2023 on The Effective Altruism Forum.The terms of the deal, through which Google will take a stake of about 10 per cent, requires Anthropic to use the money to buy computing resources from the search company’s cloud computing division, said three people familiar with the arrangement.While Microsoft has sought to integrate OpenAI’s technology into many of its own services, Google’s relationship with Anthropic is limited to acting as the company’s tech supplier in what has become an AI arm’s race, according to people familiar with the arrangement."Anthropic has developed an intelligent chatbot called Claude, rivalling OpenAI’s ChatGPT, though it has not yet been released publicly.""The start-up had raised more than $700mn before Google’s investment, which was made in late 2022 but has not previously been reported. The company’s biggest investor is Alameda Research, the crypto hedge fund of FTX founder Sam Bankman-Fried, which put in $500mn before filing for bankruptcy last year. FTX’s bankruptcy estate has flagged Anthropic as an asset that may help creditors with recoveries.""Google’s investment was made by its cloud division, run by former Oracle executive Thomas Kurian. Bringing Anthropic’s data-intensive computing work to Google’s data centres is part of an effort to catch up with the lead Microsoft has taken in the fast-growing AI market thanks to its work with OpenAI. Google’s cloud division is also working with other start-ups such as Cohere and C3 to try to secure a bigger foothold in AI."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Google invests $300mn in artificial intelligence start-up Anthropic | FT, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on February 3, 2023 on The Effective Altruism Forum.The terms of the deal, through which Google will take a stake of about 10 per cent, requires Anthropic to use the money to buy computing resources from the search company’s cloud computing division, said three people familiar with the arrangement.While Microsoft has sought to integrate OpenAI’s technology into many of its own services, Google’s relationship with Anthropic is limited to acting as the company’s tech supplier in what has become an AI arm’s race, according to people familiar with the arrangement."Anthropic has developed an intelligent chatbot called Claude, rivalling OpenAI’s ChatGPT, though it has not yet been released publicly.""The start-up had raised more than $700mn before Google’s investment, which was made in late 2022 but has not previously been reported. The company’s biggest investor is Alameda Research, the crypto hedge fund of FTX founder Sam Bankman-Fried, which put in $500mn before filing for bankruptcy last year. FTX’s bankruptcy estate has flagged Anthropic as an asset that may help creditors with recoveries.""Google’s investment was made by its cloud division, run by former Oracle executive Thomas Kurian. Bringing Anthropic’s data-intensive computing work to Google’s data centres is part of an effort to catch up with the lead Microsoft has taken in the fast-growing AI market thanks to its work with OpenAI. Google’s cloud division is also working with other start-ups such as Cohere and C3 to try to secure a bigger foothold in AI."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
𝕮𝖎𝖓𝖊𝖗𝖆 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:52 None full 4720
JCyX29F77Jak5gbwq_NL_EA_EA EA - EA, Sexual Harassment, and Abuse by whistleblower9 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA, Sexual Harassment, and Abuse, published by whistleblower9 on February 3, 2023 on The Effective Altruism Forum.Link-post for the article "Effective Altruism Promises to Do Good Better. These Women Say It Has a Toxic Culture Of Sexual Harassment and Abuse"A few quotes:Three times in one year, she says, men at informal EA gatherings tried to convince [Keerthana Gopalakrishnan] to join these so-called “polycules.” When Gopalakrishnan said she wasn’t interested, she recalls, they would “shame” her or try to pressure her, casting monogamy as a lifestyle governed by jealousy, and polyamory as a more enlightened and rational approach.After a particularly troubling incident of sexual harassment, Gopalakrishnan wrote a post on an online forum for EAs in Nov. 2022. While she declined to publicly describe details of the incident, she argued that EA’s culture was hostile toward women. “It puts your safety at risk,” she wrote, adding that most of the access to funding and opportunities within the movement was controlled by men. Gopalakrishnan was alarmed at some of the responses. One commenter wrote that her post was “bigoted” against polyamorous people. Another said it would “pollute the epistemic environment,” and argued it was “net-negative for solving the problem.”This story is based on interviews with more than 30 current and former effective altruists and people who live among them. Many of the women spoke on condition of anonymity to avoid personal or professional reprisals, citing the small number of people and organizations within EA that control plum jobs and opportunities.Many of them asked that their alleged abusers not be named and that TIME shield their identities to avoid retaliation.One recalled being “groomed” by a powerful man nearly twice her age who argued that “pedophilic relationships” were both perfectly natural and highly educational. Another told TIME a much older EA recruited her to join his polyamorous relationship while she was still in college. A third described an unsettling experience with an influential figure in EA whose role included picking out promising students and funneling them towards highly coveted jobs. After that leader arranged for her to be flown to the U.K. for a job interview, she recalls being surprised to discover that she was expected to stay in his home, not a hotel. When she arrived, she says, “he told me he needed to masturbate before seeing me.”The women who spoke to TIME counter that the problem is particularly acute in EA. The movement’s high-minded goals can create a moral shield, they say, allowing members to present themselves as altruists committed to saving humanity regardless of how they treat the people around them. “It’s this white knight savior complex,” says Sonia Joseph, a former EA who has since moved away from the movement partially because of its treatment of women. “Like: we are better than others because we are more rational or more reasonable or more thoughtful.” The movement “has a veneer of very logical, rigorous do-gooderism,” she continues. “But it’s misogyny encoded into math.”Several of the women who spoke to TIME said that EA’s polyamorous subculture was a key reason why the community had become a hostile environment for women. One woman told TIME she began dating a man who had held significant roles at two EA-aligned organizations while she was still an undergraduate. They met when he was speaking at an EA-affiliated conference, and he invited her out to dinner after she was one of the only students to get his math and probability questions right. He asked how old she was, she recalls, then quickly suggested she join his polyamorous relationship. Shortly after agreeing to date him, “He told me that ‘I could sleep with you on Monday,’ but on Tuesday I’m with this other girl,” she says. “It...]]>
whistleblower9 https://forum.effectivealtruism.org/posts/JCyX29F77Jak5gbwq/ea-sexual-harassment-and-abuse Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA, Sexual Harassment, and Abuse, published by whistleblower9 on February 3, 2023 on The Effective Altruism Forum.Link-post for the article "Effective Altruism Promises to Do Good Better. These Women Say It Has a Toxic Culture Of Sexual Harassment and Abuse"A few quotes:Three times in one year, she says, men at informal EA gatherings tried to convince [Keerthana Gopalakrishnan] to join these so-called “polycules.” When Gopalakrishnan said she wasn’t interested, she recalls, they would “shame” her or try to pressure her, casting monogamy as a lifestyle governed by jealousy, and polyamory as a more enlightened and rational approach.After a particularly troubling incident of sexual harassment, Gopalakrishnan wrote a post on an online forum for EAs in Nov. 2022. While she declined to publicly describe details of the incident, she argued that EA’s culture was hostile toward women. “It puts your safety at risk,” she wrote, adding that most of the access to funding and opportunities within the movement was controlled by men. Gopalakrishnan was alarmed at some of the responses. One commenter wrote that her post was “bigoted” against polyamorous people. Another said it would “pollute the epistemic environment,” and argued it was “net-negative for solving the problem.”This story is based on interviews with more than 30 current and former effective altruists and people who live among them. Many of the women spoke on condition of anonymity to avoid personal or professional reprisals, citing the small number of people and organizations within EA that control plum jobs and opportunities.Many of them asked that their alleged abusers not be named and that TIME shield their identities to avoid retaliation.One recalled being “groomed” by a powerful man nearly twice her age who argued that “pedophilic relationships” were both perfectly natural and highly educational. Another told TIME a much older EA recruited her to join his polyamorous relationship while she was still in college. A third described an unsettling experience with an influential figure in EA whose role included picking out promising students and funneling them towards highly coveted jobs. After that leader arranged for her to be flown to the U.K. for a job interview, she recalls being surprised to discover that she was expected to stay in his home, not a hotel. When she arrived, she says, “he told me he needed to masturbate before seeing me.”The women who spoke to TIME counter that the problem is particularly acute in EA. The movement’s high-minded goals can create a moral shield, they say, allowing members to present themselves as altruists committed to saving humanity regardless of how they treat the people around them. “It’s this white knight savior complex,” says Sonia Joseph, a former EA who has since moved away from the movement partially because of its treatment of women. “Like: we are better than others because we are more rational or more reasonable or more thoughtful.” The movement “has a veneer of very logical, rigorous do-gooderism,” she continues. “But it’s misogyny encoded into math.”Several of the women who spoke to TIME said that EA’s polyamorous subculture was a key reason why the community had become a hostile environment for women. One woman told TIME she began dating a man who had held significant roles at two EA-aligned organizations while she was still an undergraduate. They met when he was speaking at an EA-affiliated conference, and he invited her out to dinner after she was one of the only students to get his math and probability questions right. He asked how old she was, she recalls, then quickly suggested she join his polyamorous relationship. Shortly after agreeing to date him, “He told me that ‘I could sleep with you on Monday,’ but on Tuesday I’m with this other girl,” she says. “It...]]>
Fri, 03 Feb 2023 15:22:12 +0000 EA - EA, Sexual Harassment, and Abuse by whistleblower9 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA, Sexual Harassment, and Abuse, published by whistleblower9 on February 3, 2023 on The Effective Altruism Forum.Link-post for the article "Effective Altruism Promises to Do Good Better. These Women Say It Has a Toxic Culture Of Sexual Harassment and Abuse"A few quotes:Three times in one year, she says, men at informal EA gatherings tried to convince [Keerthana Gopalakrishnan] to join these so-called “polycules.” When Gopalakrishnan said she wasn’t interested, she recalls, they would “shame” her or try to pressure her, casting monogamy as a lifestyle governed by jealousy, and polyamory as a more enlightened and rational approach.After a particularly troubling incident of sexual harassment, Gopalakrishnan wrote a post on an online forum for EAs in Nov. 2022. While she declined to publicly describe details of the incident, she argued that EA’s culture was hostile toward women. “It puts your safety at risk,” she wrote, adding that most of the access to funding and opportunities within the movement was controlled by men. Gopalakrishnan was alarmed at some of the responses. One commenter wrote that her post was “bigoted” against polyamorous people. Another said it would “pollute the epistemic environment,” and argued it was “net-negative for solving the problem.”This story is based on interviews with more than 30 current and former effective altruists and people who live among them. Many of the women spoke on condition of anonymity to avoid personal or professional reprisals, citing the small number of people and organizations within EA that control plum jobs and opportunities.Many of them asked that their alleged abusers not be named and that TIME shield their identities to avoid retaliation.One recalled being “groomed” by a powerful man nearly twice her age who argued that “pedophilic relationships” were both perfectly natural and highly educational. Another told TIME a much older EA recruited her to join his polyamorous relationship while she was still in college. A third described an unsettling experience with an influential figure in EA whose role included picking out promising students and funneling them towards highly coveted jobs. After that leader arranged for her to be flown to the U.K. for a job interview, she recalls being surprised to discover that she was expected to stay in his home, not a hotel. When she arrived, she says, “he told me he needed to masturbate before seeing me.”The women who spoke to TIME counter that the problem is particularly acute in EA. The movement’s high-minded goals can create a moral shield, they say, allowing members to present themselves as altruists committed to saving humanity regardless of how they treat the people around them. “It’s this white knight savior complex,” says Sonia Joseph, a former EA who has since moved away from the movement partially because of its treatment of women. “Like: we are better than others because we are more rational or more reasonable or more thoughtful.” The movement “has a veneer of very logical, rigorous do-gooderism,” she continues. “But it’s misogyny encoded into math.”Several of the women who spoke to TIME said that EA’s polyamorous subculture was a key reason why the community had become a hostile environment for women. One woman told TIME she began dating a man who had held significant roles at two EA-aligned organizations while she was still an undergraduate. They met when he was speaking at an EA-affiliated conference, and he invited her out to dinner after she was one of the only students to get his math and probability questions right. He asked how old she was, she recalls, then quickly suggested she join his polyamorous relationship. Shortly after agreeing to date him, “He told me that ‘I could sleep with you on Monday,’ but on Tuesday I’m with this other girl,” she says. “It...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA, Sexual Harassment, and Abuse, published by whistleblower9 on February 3, 2023 on The Effective Altruism Forum.Link-post for the article "Effective Altruism Promises to Do Good Better. These Women Say It Has a Toxic Culture Of Sexual Harassment and Abuse"A few quotes:Three times in one year, she says, men at informal EA gatherings tried to convince [Keerthana Gopalakrishnan] to join these so-called “polycules.” When Gopalakrishnan said she wasn’t interested, she recalls, they would “shame” her or try to pressure her, casting monogamy as a lifestyle governed by jealousy, and polyamory as a more enlightened and rational approach.After a particularly troubling incident of sexual harassment, Gopalakrishnan wrote a post on an online forum for EAs in Nov. 2022. While she declined to publicly describe details of the incident, she argued that EA’s culture was hostile toward women. “It puts your safety at risk,” she wrote, adding that most of the access to funding and opportunities within the movement was controlled by men. Gopalakrishnan was alarmed at some of the responses. One commenter wrote that her post was “bigoted” against polyamorous people. Another said it would “pollute the epistemic environment,” and argued it was “net-negative for solving the problem.”This story is based on interviews with more than 30 current and former effective altruists and people who live among them. Many of the women spoke on condition of anonymity to avoid personal or professional reprisals, citing the small number of people and organizations within EA that control plum jobs and opportunities.Many of them asked that their alleged abusers not be named and that TIME shield their identities to avoid retaliation.One recalled being “groomed” by a powerful man nearly twice her age who argued that “pedophilic relationships” were both perfectly natural and highly educational. Another told TIME a much older EA recruited her to join his polyamorous relationship while she was still in college. A third described an unsettling experience with an influential figure in EA whose role included picking out promising students and funneling them towards highly coveted jobs. After that leader arranged for her to be flown to the U.K. for a job interview, she recalls being surprised to discover that she was expected to stay in his home, not a hotel. When she arrived, she says, “he told me he needed to masturbate before seeing me.”The women who spoke to TIME counter that the problem is particularly acute in EA. The movement’s high-minded goals can create a moral shield, they say, allowing members to present themselves as altruists committed to saving humanity regardless of how they treat the people around them. “It’s this white knight savior complex,” says Sonia Joseph, a former EA who has since moved away from the movement partially because of its treatment of women. “Like: we are better than others because we are more rational or more reasonable or more thoughtful.” The movement “has a veneer of very logical, rigorous do-gooderism,” she continues. “But it’s misogyny encoded into math.”Several of the women who spoke to TIME said that EA’s polyamorous subculture was a key reason why the community had become a hostile environment for women. One woman told TIME she began dating a man who had held significant roles at two EA-aligned organizations while she was still an undergraduate. They met when he was speaking at an EA-affiliated conference, and he invited her out to dinner after she was one of the only students to get his math and probability questions right. He asked how old she was, she recalls, then quickly suggested she join his polyamorous relationship. Shortly after agreeing to date him, “He told me that ‘I could sleep with you on Monday,’ but on Tuesday I’m with this other girl,” she says. “It...]]>
whistleblower9 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:13 None full 4721
2rD6nLqw5Z3dyD5me_NL_EA_EA EA - Does the US public support ultraviolet germicidal irradiation technology for reducing risks from pathogens? by Jam Kraprayoon Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does the US public support ultraviolet germicidal irradiation technology for reducing risks from pathogens?, published by Jam Kraprayoon on February 3, 2023 on The Effective Altruism Forum.SummaryUltraviolet germicidal irradiation technology (UVGI) or germicidal UV light (GUV) represents a promising technology for reducing catastrophic biorisk and would likely confer near-termist benefits as wellTwo subtypes of UVGI, upper-room UVC and far-UVC, seem like particularly promising ways to reduce indoor pathogen transmission.Understanding the level of support for and awareness of these technologies, what framings of benefits are most compelling, and what concerns exist should be helpful for developing strategies around advocacy and expanding deployment.We ran four online surveys in November and December 2022 to better understand US public attitudes towards these technologies. As far as we know, these were the first surveys done on US public attitudes towards GUV light as a means of reducing risks from pathogens.The results of these surveys seem to confirm and disconfirm intuitions in the biosecurity/indoor air quality community around public attitudes towards GUV technology.Survey results show moderate awareness of GUV light and low awareness of far-UVC light.Despite low levels of awareness, once respondents were provided with a description of the technology, support for GUV light and its subtypes was broadly positive.Safety concerns seem to be a meaningful factor in public perception of this technology. Respondents' most prominent concern was that GUV lights need additional testing to establish more evidence on safety and efficacy.Contrary to the assumption in the indoor air quality community that since far-UVC involves direct light exposure to humans, people are less likely to support it, we find respondents consistently showed slightly greater support for far-UVC over upper-room UVC.Also, contrary to the view that terminology for GUV light that includes UV will get less support due to UV’s association with cancer (e.g., far-UVC vs low-wavelength light), our survey shows no statistically significant difference in support when using terms that mention UV vs don’t mention UV.The attitudes expressed by poll respondents in response to broad questions may not be reliable indicators of actual support for specific policies or messages. It would be better to test people's responses to more detailed messages and policy proposalsThere are further investigations that can be done to tease out different explanations for some of the main findings mentioned in the write-up, to find out whether these results hold across countries, and to gauge public support for indoor air quality more generally.MethodsWe ran four online surveys in November and December 2022. In survey 1, respondents were asked whether they had previously heard of ‘germicidal UV’ (GUV) and ‘far UVC’ as ways to reduce pathogen transmission. They were also asked to provide an open comment on what they currently knew about both technologies.In survey 2, respondents were presented with a description of germicidal UV light and asked to consider how positively or negatively they felt about the technology for reducing pathogen transmission, the extent to which they would support efforts to deploy germicidal UV in public spaces, and what benefits and concerns they had about this technology.In survey 3, we tested people’s support for three different types of UV light: germicidal UV, far-UVC, and upper-room UVC. We also tested whether including diagrams for far-UVC and upper-room UVC would affect peoples’ support (Figures 1 and 2 below).Figure 1: Diagram illustrating use of an upper room UVC systemFigure 2: Diagram illustrating use of a far-UVC systemIn survey 4, we tested the effect of five different message...]]>
Jam Kraprayoon https://forum.effectivealtruism.org/posts/2rD6nLqw5Z3dyD5me/does-the-us-public-support-ultraviolet-germicidal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does the US public support ultraviolet germicidal irradiation technology for reducing risks from pathogens?, published by Jam Kraprayoon on February 3, 2023 on The Effective Altruism Forum.SummaryUltraviolet germicidal irradiation technology (UVGI) or germicidal UV light (GUV) represents a promising technology for reducing catastrophic biorisk and would likely confer near-termist benefits as wellTwo subtypes of UVGI, upper-room UVC and far-UVC, seem like particularly promising ways to reduce indoor pathogen transmission.Understanding the level of support for and awareness of these technologies, what framings of benefits are most compelling, and what concerns exist should be helpful for developing strategies around advocacy and expanding deployment.We ran four online surveys in November and December 2022 to better understand US public attitudes towards these technologies. As far as we know, these were the first surveys done on US public attitudes towards GUV light as a means of reducing risks from pathogens.The results of these surveys seem to confirm and disconfirm intuitions in the biosecurity/indoor air quality community around public attitudes towards GUV technology.Survey results show moderate awareness of GUV light and low awareness of far-UVC light.Despite low levels of awareness, once respondents were provided with a description of the technology, support for GUV light and its subtypes was broadly positive.Safety concerns seem to be a meaningful factor in public perception of this technology. Respondents' most prominent concern was that GUV lights need additional testing to establish more evidence on safety and efficacy.Contrary to the assumption in the indoor air quality community that since far-UVC involves direct light exposure to humans, people are less likely to support it, we find respondents consistently showed slightly greater support for far-UVC over upper-room UVC.Also, contrary to the view that terminology for GUV light that includes UV will get less support due to UV’s association with cancer (e.g., far-UVC vs low-wavelength light), our survey shows no statistically significant difference in support when using terms that mention UV vs don’t mention UV.The attitudes expressed by poll respondents in response to broad questions may not be reliable indicators of actual support for specific policies or messages. It would be better to test people's responses to more detailed messages and policy proposalsThere are further investigations that can be done to tease out different explanations for some of the main findings mentioned in the write-up, to find out whether these results hold across countries, and to gauge public support for indoor air quality more generally.MethodsWe ran four online surveys in November and December 2022. In survey 1, respondents were asked whether they had previously heard of ‘germicidal UV’ (GUV) and ‘far UVC’ as ways to reduce pathogen transmission. They were also asked to provide an open comment on what they currently knew about both technologies.In survey 2, respondents were presented with a description of germicidal UV light and asked to consider how positively or negatively they felt about the technology for reducing pathogen transmission, the extent to which they would support efforts to deploy germicidal UV in public spaces, and what benefits and concerns they had about this technology.In survey 3, we tested people’s support for three different types of UV light: germicidal UV, far-UVC, and upper-room UVC. We also tested whether including diagrams for far-UVC and upper-room UVC would affect peoples’ support (Figures 1 and 2 below).Figure 1: Diagram illustrating use of an upper room UVC systemFigure 2: Diagram illustrating use of a far-UVC systemIn survey 4, we tested the effect of five different message...]]>
Fri, 03 Feb 2023 15:07:13 +0000 EA - Does the US public support ultraviolet germicidal irradiation technology for reducing risks from pathogens? by Jam Kraprayoon Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does the US public support ultraviolet germicidal irradiation technology for reducing risks from pathogens?, published by Jam Kraprayoon on February 3, 2023 on The Effective Altruism Forum.SummaryUltraviolet germicidal irradiation technology (UVGI) or germicidal UV light (GUV) represents a promising technology for reducing catastrophic biorisk and would likely confer near-termist benefits as wellTwo subtypes of UVGI, upper-room UVC and far-UVC, seem like particularly promising ways to reduce indoor pathogen transmission.Understanding the level of support for and awareness of these technologies, what framings of benefits are most compelling, and what concerns exist should be helpful for developing strategies around advocacy and expanding deployment.We ran four online surveys in November and December 2022 to better understand US public attitudes towards these technologies. As far as we know, these were the first surveys done on US public attitudes towards GUV light as a means of reducing risks from pathogens.The results of these surveys seem to confirm and disconfirm intuitions in the biosecurity/indoor air quality community around public attitudes towards GUV technology.Survey results show moderate awareness of GUV light and low awareness of far-UVC light.Despite low levels of awareness, once respondents were provided with a description of the technology, support for GUV light and its subtypes was broadly positive.Safety concerns seem to be a meaningful factor in public perception of this technology. Respondents' most prominent concern was that GUV lights need additional testing to establish more evidence on safety and efficacy.Contrary to the assumption in the indoor air quality community that since far-UVC involves direct light exposure to humans, people are less likely to support it, we find respondents consistently showed slightly greater support for far-UVC over upper-room UVC.Also, contrary to the view that terminology for GUV light that includes UV will get less support due to UV’s association with cancer (e.g., far-UVC vs low-wavelength light), our survey shows no statistically significant difference in support when using terms that mention UV vs don’t mention UV.The attitudes expressed by poll respondents in response to broad questions may not be reliable indicators of actual support for specific policies or messages. It would be better to test people's responses to more detailed messages and policy proposalsThere are further investigations that can be done to tease out different explanations for some of the main findings mentioned in the write-up, to find out whether these results hold across countries, and to gauge public support for indoor air quality more generally.MethodsWe ran four online surveys in November and December 2022. In survey 1, respondents were asked whether they had previously heard of ‘germicidal UV’ (GUV) and ‘far UVC’ as ways to reduce pathogen transmission. They were also asked to provide an open comment on what they currently knew about both technologies.In survey 2, respondents were presented with a description of germicidal UV light and asked to consider how positively or negatively they felt about the technology for reducing pathogen transmission, the extent to which they would support efforts to deploy germicidal UV in public spaces, and what benefits and concerns they had about this technology.In survey 3, we tested people’s support for three different types of UV light: germicidal UV, far-UVC, and upper-room UVC. We also tested whether including diagrams for far-UVC and upper-room UVC would affect peoples’ support (Figures 1 and 2 below).Figure 1: Diagram illustrating use of an upper room UVC systemFigure 2: Diagram illustrating use of a far-UVC systemIn survey 4, we tested the effect of five different message...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does the US public support ultraviolet germicidal irradiation technology for reducing risks from pathogens?, published by Jam Kraprayoon on February 3, 2023 on The Effective Altruism Forum.SummaryUltraviolet germicidal irradiation technology (UVGI) or germicidal UV light (GUV) represents a promising technology for reducing catastrophic biorisk and would likely confer near-termist benefits as wellTwo subtypes of UVGI, upper-room UVC and far-UVC, seem like particularly promising ways to reduce indoor pathogen transmission.Understanding the level of support for and awareness of these technologies, what framings of benefits are most compelling, and what concerns exist should be helpful for developing strategies around advocacy and expanding deployment.We ran four online surveys in November and December 2022 to better understand US public attitudes towards these technologies. As far as we know, these were the first surveys done on US public attitudes towards GUV light as a means of reducing risks from pathogens.The results of these surveys seem to confirm and disconfirm intuitions in the biosecurity/indoor air quality community around public attitudes towards GUV technology.Survey results show moderate awareness of GUV light and low awareness of far-UVC light.Despite low levels of awareness, once respondents were provided with a description of the technology, support for GUV light and its subtypes was broadly positive.Safety concerns seem to be a meaningful factor in public perception of this technology. Respondents' most prominent concern was that GUV lights need additional testing to establish more evidence on safety and efficacy.Contrary to the assumption in the indoor air quality community that since far-UVC involves direct light exposure to humans, people are less likely to support it, we find respondents consistently showed slightly greater support for far-UVC over upper-room UVC.Also, contrary to the view that terminology for GUV light that includes UV will get less support due to UV’s association with cancer (e.g., far-UVC vs low-wavelength light), our survey shows no statistically significant difference in support when using terms that mention UV vs don’t mention UV.The attitudes expressed by poll respondents in response to broad questions may not be reliable indicators of actual support for specific policies or messages. It would be better to test people's responses to more detailed messages and policy proposalsThere are further investigations that can be done to tease out different explanations for some of the main findings mentioned in the write-up, to find out whether these results hold across countries, and to gauge public support for indoor air quality more generally.MethodsWe ran four online surveys in November and December 2022. In survey 1, respondents were asked whether they had previously heard of ‘germicidal UV’ (GUV) and ‘far UVC’ as ways to reduce pathogen transmission. They were also asked to provide an open comment on what they currently knew about both technologies.In survey 2, respondents were presented with a description of germicidal UV light and asked to consider how positively or negatively they felt about the technology for reducing pathogen transmission, the extent to which they would support efforts to deploy germicidal UV in public spaces, and what benefits and concerns they had about this technology.In survey 3, we tested people’s support for three different types of UV light: germicidal UV, far-UVC, and upper-room UVC. We also tested whether including diagrams for far-UVC and upper-room UVC would affect peoples’ support (Figures 1 and 2 below).Figure 1: Diagram illustrating use of an upper room UVC systemFigure 2: Diagram illustrating use of a far-UVC systemIn survey 4, we tested the effect of five different message...]]>
Jam Kraprayoon https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 32:10 None full 4722
Ky7C7whxdLexXWqss_NL_EA_EA EA - Future Matters #7: AI timelines, AI skepticism, and lock-in by Pablo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #7: AI timelines, AI skepticism, and lock-in, published by Pablo on February 3, 2023 on The Effective Altruism Forum.That man is born merely for a few, who thinks only of the people of his own generation. Many thousands of years and many thousands of peoples will come after you; it is to these that you should have regard.Lucius Annaeus SenecaFuture Matters is a newsletter about longtermism and existential risk. Each month we collect and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.ResearchAjeya Cotra’s biological anchors model to forecast AGI timelines consists of three parts — an estimate of the compute required to train AGI with 2020 algorithms, a projection of how these compute requirements decrease over time due to algorithmic progress, and a forecast of how the size of training runs will increase over time due to declining hardware costs and increased investment in AI training. Tom Davidson’s What a compute-centric framework says about AI takeoff speeds extends Cotra’s framework to incorporate a more sophisticated model of how R&D investment translates into algorithmic and hardware progress, and also to capture the “virtuous circle” whereby AI progress leads to more automation in AI R&D and in turn faster AI progress. This results in a model of AI takeoff speed, defined here as the time between AI being able to automate 20% of cognitive tasks to being able to automate 100% of cognitive tasks. Davidson’s median estimate for AI takeoff is approximately three years.This is an impressive and significant piece of research, which we cannot summarize adequately here; we hope to feature a conversation with the author in a future issue to explore it in more depth. The full report is available here. Readers are encouraged to play around with the neat interactive model.Zac Hatfield-Dodds shares some Concrete reasons for hope about AI safety []. A researcher at Anthropic (writing in a personal capacity), he takes existential risks from AI seriously, but pushes back on recent pronouncements that AI catastrophe is pretty much inevitable. Hatfield-Dodds highlights some of the promising results from the nascent efforts at figuring out how to align and interpret large language models. The piece is intended to “rebalance the emotional scales” in the AI safety community, which he feels have recently tipped too far towards a despair that feels is both unwarranted and unconstructive.Holden Karnofsky's Transformative AI issues (not just misalignment) [] surveys some of the high-stakes issues raised by transformative AI, particularly those that we should be thinking about ahead of time in order to make a lasting difference to the long-term future. These include not just existential risk from misalignment, but also power imbalances, early AI applications, new life forms, and persistent policies and norms. Karnofsky is inclined to prioritize the first two issues, since he feels very uncertain about the sign of interventions focused on the remaining ones.Lizka Vaintrob argues that we should Beware safety-washing [] by AI companies, akin to greenwashing, where companies misrepresent themselves as being more environmentally conscious than they actually are, rather than taking costly actions to reduce their environmental impact. This could involve misleading not just consumers, but investors, employees, regulators, etc. on whether an AI project took safety concerns seriously. One promising way to address this would be developing common standards for safety, and trustworthy methods for auditing and evaluating companies against these standards.In How we could ...]]>
Pablo https://forum.effectivealtruism.org/posts/Ky7C7whxdLexXWqss/future-matters-7-ai-timelines-ai-skepticism-and-lock-in Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #7: AI timelines, AI skepticism, and lock-in, published by Pablo on February 3, 2023 on The Effective Altruism Forum.That man is born merely for a few, who thinks only of the people of his own generation. Many thousands of years and many thousands of peoples will come after you; it is to these that you should have regard.Lucius Annaeus SenecaFuture Matters is a newsletter about longtermism and existential risk. Each month we collect and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.ResearchAjeya Cotra’s biological anchors model to forecast AGI timelines consists of three parts — an estimate of the compute required to train AGI with 2020 algorithms, a projection of how these compute requirements decrease over time due to algorithmic progress, and a forecast of how the size of training runs will increase over time due to declining hardware costs and increased investment in AI training. Tom Davidson’s What a compute-centric framework says about AI takeoff speeds extends Cotra’s framework to incorporate a more sophisticated model of how R&D investment translates into algorithmic and hardware progress, and also to capture the “virtuous circle” whereby AI progress leads to more automation in AI R&D and in turn faster AI progress. This results in a model of AI takeoff speed, defined here as the time between AI being able to automate 20% of cognitive tasks to being able to automate 100% of cognitive tasks. Davidson’s median estimate for AI takeoff is approximately three years.This is an impressive and significant piece of research, which we cannot summarize adequately here; we hope to feature a conversation with the author in a future issue to explore it in more depth. The full report is available here. Readers are encouraged to play around with the neat interactive model.Zac Hatfield-Dodds shares some Concrete reasons for hope about AI safety []. A researcher at Anthropic (writing in a personal capacity), he takes existential risks from AI seriously, but pushes back on recent pronouncements that AI catastrophe is pretty much inevitable. Hatfield-Dodds highlights some of the promising results from the nascent efforts at figuring out how to align and interpret large language models. The piece is intended to “rebalance the emotional scales” in the AI safety community, which he feels have recently tipped too far towards a despair that feels is both unwarranted and unconstructive.Holden Karnofsky's Transformative AI issues (not just misalignment) [] surveys some of the high-stakes issues raised by transformative AI, particularly those that we should be thinking about ahead of time in order to make a lasting difference to the long-term future. These include not just existential risk from misalignment, but also power imbalances, early AI applications, new life forms, and persistent policies and norms. Karnofsky is inclined to prioritize the first two issues, since he feels very uncertain about the sign of interventions focused on the remaining ones.Lizka Vaintrob argues that we should Beware safety-washing [] by AI companies, akin to greenwashing, where companies misrepresent themselves as being more environmentally conscious than they actually are, rather than taking costly actions to reduce their environmental impact. This could involve misleading not just consumers, but investors, employees, regulators, etc. on whether an AI project took safety concerns seriously. One promising way to address this would be developing common standards for safety, and trustworthy methods for auditing and evaluating companies against these standards.In How we could ...]]>
Fri, 03 Feb 2023 14:36:35 +0000 EA - Future Matters #7: AI timelines, AI skepticism, and lock-in by Pablo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #7: AI timelines, AI skepticism, and lock-in, published by Pablo on February 3, 2023 on The Effective Altruism Forum.That man is born merely for a few, who thinks only of the people of his own generation. Many thousands of years and many thousands of peoples will come after you; it is to these that you should have regard.Lucius Annaeus SenecaFuture Matters is a newsletter about longtermism and existential risk. Each month we collect and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.ResearchAjeya Cotra’s biological anchors model to forecast AGI timelines consists of three parts — an estimate of the compute required to train AGI with 2020 algorithms, a projection of how these compute requirements decrease over time due to algorithmic progress, and a forecast of how the size of training runs will increase over time due to declining hardware costs and increased investment in AI training. Tom Davidson’s What a compute-centric framework says about AI takeoff speeds extends Cotra’s framework to incorporate a more sophisticated model of how R&D investment translates into algorithmic and hardware progress, and also to capture the “virtuous circle” whereby AI progress leads to more automation in AI R&D and in turn faster AI progress. This results in a model of AI takeoff speed, defined here as the time between AI being able to automate 20% of cognitive tasks to being able to automate 100% of cognitive tasks. Davidson’s median estimate for AI takeoff is approximately three years.This is an impressive and significant piece of research, which we cannot summarize adequately here; we hope to feature a conversation with the author in a future issue to explore it in more depth. The full report is available here. Readers are encouraged to play around with the neat interactive model.Zac Hatfield-Dodds shares some Concrete reasons for hope about AI safety []. A researcher at Anthropic (writing in a personal capacity), he takes existential risks from AI seriously, but pushes back on recent pronouncements that AI catastrophe is pretty much inevitable. Hatfield-Dodds highlights some of the promising results from the nascent efforts at figuring out how to align and interpret large language models. The piece is intended to “rebalance the emotional scales” in the AI safety community, which he feels have recently tipped too far towards a despair that feels is both unwarranted and unconstructive.Holden Karnofsky's Transformative AI issues (not just misalignment) [] surveys some of the high-stakes issues raised by transformative AI, particularly those that we should be thinking about ahead of time in order to make a lasting difference to the long-term future. These include not just existential risk from misalignment, but also power imbalances, early AI applications, new life forms, and persistent policies and norms. Karnofsky is inclined to prioritize the first two issues, since he feels very uncertain about the sign of interventions focused on the remaining ones.Lizka Vaintrob argues that we should Beware safety-washing [] by AI companies, akin to greenwashing, where companies misrepresent themselves as being more environmentally conscious than they actually are, rather than taking costly actions to reduce their environmental impact. This could involve misleading not just consumers, but investors, employees, regulators, etc. on whether an AI project took safety concerns seriously. One promising way to address this would be developing common standards for safety, and trustworthy methods for auditing and evaluating companies against these standards.In How we could ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #7: AI timelines, AI skepticism, and lock-in, published by Pablo on February 3, 2023 on The Effective Altruism Forum.That man is born merely for a few, who thinks only of the people of his own generation. Many thousands of years and many thousands of peoples will come after you; it is to these that you should have regard.Lucius Annaeus SenecaFuture Matters is a newsletter about longtermism and existential risk. Each month we collect and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.ResearchAjeya Cotra’s biological anchors model to forecast AGI timelines consists of three parts — an estimate of the compute required to train AGI with 2020 algorithms, a projection of how these compute requirements decrease over time due to algorithmic progress, and a forecast of how the size of training runs will increase over time due to declining hardware costs and increased investment in AI training. Tom Davidson’s What a compute-centric framework says about AI takeoff speeds extends Cotra’s framework to incorporate a more sophisticated model of how R&D investment translates into algorithmic and hardware progress, and also to capture the “virtuous circle” whereby AI progress leads to more automation in AI R&D and in turn faster AI progress. This results in a model of AI takeoff speed, defined here as the time between AI being able to automate 20% of cognitive tasks to being able to automate 100% of cognitive tasks. Davidson’s median estimate for AI takeoff is approximately three years.This is an impressive and significant piece of research, which we cannot summarize adequately here; we hope to feature a conversation with the author in a future issue to explore it in more depth. The full report is available here. Readers are encouraged to play around with the neat interactive model.Zac Hatfield-Dodds shares some Concrete reasons for hope about AI safety []. A researcher at Anthropic (writing in a personal capacity), he takes existential risks from AI seriously, but pushes back on recent pronouncements that AI catastrophe is pretty much inevitable. Hatfield-Dodds highlights some of the promising results from the nascent efforts at figuring out how to align and interpret large language models. The piece is intended to “rebalance the emotional scales” in the AI safety community, which he feels have recently tipped too far towards a despair that feels is both unwarranted and unconstructive.Holden Karnofsky's Transformative AI issues (not just misalignment) [] surveys some of the high-stakes issues raised by transformative AI, particularly those that we should be thinking about ahead of time in order to make a lasting difference to the long-term future. These include not just existential risk from misalignment, but also power imbalances, early AI applications, new life forms, and persistent policies and norms. Karnofsky is inclined to prioritize the first two issues, since he feels very uncertain about the sign of interventions focused on the remaining ones.Lizka Vaintrob argues that we should Beware safety-washing [] by AI companies, akin to greenwashing, where companies misrepresent themselves as being more environmentally conscious than they actually are, rather than taking costly actions to reduce their environmental impact. This could involve misleading not just consumers, but investors, employees, regulators, etc. on whether an AI project took safety concerns seriously. One promising way to address this would be developing common standards for safety, and trustworthy methods for auditing and evaluating companies against these standards.In How we could ...]]>
Pablo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 26:26 None full 4723
tBkAg7Cys84eGyew6_NL_EA_EA EA - Assessing China's importance as an AI superpower by JulianHazell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Assessing China's importance as an AI superpower, published by JulianHazell on February 3, 2023 on The Effective Altruism Forum.I recently published a blog post where I tried to assess China's importance as a global actor on the path transformative AI. This was a relatively shallow dive, but I hope it will still be able to spark an interesting conversation on this topic, and/or inspire others to research this topic further.The post is quite long (0ver 6,000 words), so I'll copy and paste my bottom line takes, and (roughly) how confident I am in them after brief reflection:China is, as of early 2023, overhyped as an AI superpower - 60%.That being said, the reasons that they might emerge closer to the frontier, and the overall importance of positively shaping the development of AI, are enough to warrant a watchful eye on Chinese AI progress - 90%.China’s recent AI research output, as it pertains to transformative AI, is not quite as impressive as headlines might otherwise suggest - 75%.I suspect hardware difficulties, and structural factors that push top-tier researchers towards other countries, are two of China’s biggest hurdles in the short-to-medium term, and neither seem easily solvable - 60%.It seems likely to me that the US is currently much more likely to create transformative AI before China, especially under short(ish) timelines (next 5-15 years) - 70%.A second or third place China that lags the US and allies could still be important. Since AI progress has recently moved at a break-neck pace, being second place might only mean being a year or two behind — though I suspect this gap will increase as the technology matures - 65%.I might be missing some important factors, and I’m not very certain about which are the most important when thinking about this question - 95%.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
JulianHazell https://forum.effectivealtruism.org/posts/tBkAg7Cys84eGyew6/assessing-china-s-importance-as-an-ai-superpower Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Assessing China's importance as an AI superpower, published by JulianHazell on February 3, 2023 on The Effective Altruism Forum.I recently published a blog post where I tried to assess China's importance as a global actor on the path transformative AI. This was a relatively shallow dive, but I hope it will still be able to spark an interesting conversation on this topic, and/or inspire others to research this topic further.The post is quite long (0ver 6,000 words), so I'll copy and paste my bottom line takes, and (roughly) how confident I am in them after brief reflection:China is, as of early 2023, overhyped as an AI superpower - 60%.That being said, the reasons that they might emerge closer to the frontier, and the overall importance of positively shaping the development of AI, are enough to warrant a watchful eye on Chinese AI progress - 90%.China’s recent AI research output, as it pertains to transformative AI, is not quite as impressive as headlines might otherwise suggest - 75%.I suspect hardware difficulties, and structural factors that push top-tier researchers towards other countries, are two of China’s biggest hurdles in the short-to-medium term, and neither seem easily solvable - 60%.It seems likely to me that the US is currently much more likely to create transformative AI before China, especially under short(ish) timelines (next 5-15 years) - 70%.A second or third place China that lags the US and allies could still be important. Since AI progress has recently moved at a break-neck pace, being second place might only mean being a year or two behind — though I suspect this gap will increase as the technology matures - 65%.I might be missing some important factors, and I’m not very certain about which are the most important when thinking about this question - 95%.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 03 Feb 2023 12:03:35 +0000 EA - Assessing China's importance as an AI superpower by JulianHazell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Assessing China's importance as an AI superpower, published by JulianHazell on February 3, 2023 on The Effective Altruism Forum.I recently published a blog post where I tried to assess China's importance as a global actor on the path transformative AI. This was a relatively shallow dive, but I hope it will still be able to spark an interesting conversation on this topic, and/or inspire others to research this topic further.The post is quite long (0ver 6,000 words), so I'll copy and paste my bottom line takes, and (roughly) how confident I am in them after brief reflection:China is, as of early 2023, overhyped as an AI superpower - 60%.That being said, the reasons that they might emerge closer to the frontier, and the overall importance of positively shaping the development of AI, are enough to warrant a watchful eye on Chinese AI progress - 90%.China’s recent AI research output, as it pertains to transformative AI, is not quite as impressive as headlines might otherwise suggest - 75%.I suspect hardware difficulties, and structural factors that push top-tier researchers towards other countries, are two of China’s biggest hurdles in the short-to-medium term, and neither seem easily solvable - 60%.It seems likely to me that the US is currently much more likely to create transformative AI before China, especially under short(ish) timelines (next 5-15 years) - 70%.A second or third place China that lags the US and allies could still be important. Since AI progress has recently moved at a break-neck pace, being second place might only mean being a year or two behind — though I suspect this gap will increase as the technology matures - 65%.I might be missing some important factors, and I’m not very certain about which are the most important when thinking about this question - 95%.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Assessing China's importance as an AI superpower, published by JulianHazell on February 3, 2023 on The Effective Altruism Forum.I recently published a blog post where I tried to assess China's importance as a global actor on the path transformative AI. This was a relatively shallow dive, but I hope it will still be able to spark an interesting conversation on this topic, and/or inspire others to research this topic further.The post is quite long (0ver 6,000 words), so I'll copy and paste my bottom line takes, and (roughly) how confident I am in them after brief reflection:China is, as of early 2023, overhyped as an AI superpower - 60%.That being said, the reasons that they might emerge closer to the frontier, and the overall importance of positively shaping the development of AI, are enough to warrant a watchful eye on Chinese AI progress - 90%.China’s recent AI research output, as it pertains to transformative AI, is not quite as impressive as headlines might otherwise suggest - 75%.I suspect hardware difficulties, and structural factors that push top-tier researchers towards other countries, are two of China’s biggest hurdles in the short-to-medium term, and neither seem easily solvable - 60%.It seems likely to me that the US is currently much more likely to create transformative AI before China, especially under short(ish) timelines (next 5-15 years) - 70%.A second or third place China that lags the US and allies could still be important. Since AI progress has recently moved at a break-neck pace, being second place might only mean being a year or two behind — though I suspect this gap will increase as the technology matures - 65%.I might be missing some important factors, and I’m not very certain about which are the most important when thinking about this question - 95%.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
JulianHazell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:04 None full 4724
oxGpocpzzbBNdtmf6_NL_EA_EA EA - “My Model Of EA Burnout” (Logan Strohl) by will Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “My Model Of EA Burnout” (Logan Strohl), published by will on February 3, 2023 on The Effective Altruism Forum.(Linkposting with permission from the author, Logan Strohl. Below, my - Will's - excerpted summary of the post precedes the full text. The first person speaker is Logan.)SummaryI think that EA burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.Perhaps your true values just happen to exactly match the central set of EA values, and that is why you are an EA.However, I think it’s much more common for people to be EAs because their true values have some overlap with the EA values; and I think it’s also common for EAs to dramatically overestimate the magnitude of that overlap. According to my model, this is why “EA burnout” is a thing.If I am wrong about what I value, then I will miss-manage my motivational resources. Chronic mismanagement of motivational resources results in some really bad stuff.Over a couple of years, I change my career, my friend group, and my hobbies to reflect my new values. I spend as little time as possible on Things That Don’t Matter, because now I care about Impact.I’ve oriented my whole life around The Should Values for my longtermist EA strategy [...] while neglecting my True Values. As a result, my engines of motivation are hardly ever receiving any fuel.It seems awfully important to me that EAs put fuel into their gas tanks, rather than dumping that fuel onto the pavement where fictional cars sit in their imaginations.It is probably possible to recover even from severe cases of EA burnout. I think I’ve done a decent job of it myself, though there’s certainly room for improvement. But it takes years.My advice to my past self would be: First, know who you are. If you’re in this for the long haul, build a life in which the real you can thrive. And then, from the abundance of that thriving, put the excess toward Impact.Full Text(Probably somebody else has said most of this. But I personally haven't read it, and felt like writing it down myself, so here we go.)I think that EA burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.Setting aside for the moment what “values” are and what it means to “actually” have one, suppose that I actually value these things (among others):True ValuesAbundancePowerNoveltySocial HarmonyBeautyGrowthComfortThe Wellbeing Of OthersExcitementPersonal LongevityAccuracyOne day I learn about “global catastrophic risk”: Perhaps we’ll all die in a nuclear war, or an AI apocalypse, or a bioengineered global pandemic, and perhaps one of these things will happen quite soon.I recognize that GCR is a direct threat to The Wellbeing Of Others and to Personal Longevity, and as I do, I get scared. I get scared in a way I have never been scared before, because I’ve never before taken seriously the possibility that everyone might die, leaving nobody to continue the species or even to remember that we ever existed—and because this new perspective on the future of humanity has caused my own personal mortality to hit me harder than the lingering perspective of my Christian upbringing ever allowed. For the first time in my life, I’m really aware that I, and everyone I will ever care about, may die.My fear has me very focused on just two of my values: The Wellbeing Of Others and Personal Longevity. But as I read, think, and process, I realize that pretty much regardless of what my other values might be, they cannot possibly be satisfied if the entire species—or the planet, or the lightcone—is destroyed.[This is, of course, a version of EA that’s especially focused on the far future; but I think it’s common for a ...]]>
will https://forum.effectivealtruism.org/posts/oxGpocpzzbBNdtmf6/my-model-of-ea-burnout-logan-strohl Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “My Model Of EA Burnout” (Logan Strohl), published by will on February 3, 2023 on The Effective Altruism Forum.(Linkposting with permission from the author, Logan Strohl. Below, my - Will's - excerpted summary of the post precedes the full text. The first person speaker is Logan.)SummaryI think that EA burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.Perhaps your true values just happen to exactly match the central set of EA values, and that is why you are an EA.However, I think it’s much more common for people to be EAs because their true values have some overlap with the EA values; and I think it’s also common for EAs to dramatically overestimate the magnitude of that overlap. According to my model, this is why “EA burnout” is a thing.If I am wrong about what I value, then I will miss-manage my motivational resources. Chronic mismanagement of motivational resources results in some really bad stuff.Over a couple of years, I change my career, my friend group, and my hobbies to reflect my new values. I spend as little time as possible on Things That Don’t Matter, because now I care about Impact.I’ve oriented my whole life around The Should Values for my longtermist EA strategy [...] while neglecting my True Values. As a result, my engines of motivation are hardly ever receiving any fuel.It seems awfully important to me that EAs put fuel into their gas tanks, rather than dumping that fuel onto the pavement where fictional cars sit in their imaginations.It is probably possible to recover even from severe cases of EA burnout. I think I’ve done a decent job of it myself, though there’s certainly room for improvement. But it takes years.My advice to my past self would be: First, know who you are. If you’re in this for the long haul, build a life in which the real you can thrive. And then, from the abundance of that thriving, put the excess toward Impact.Full Text(Probably somebody else has said most of this. But I personally haven't read it, and felt like writing it down myself, so here we go.)I think that EA burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.Setting aside for the moment what “values” are and what it means to “actually” have one, suppose that I actually value these things (among others):True ValuesAbundancePowerNoveltySocial HarmonyBeautyGrowthComfortThe Wellbeing Of OthersExcitementPersonal LongevityAccuracyOne day I learn about “global catastrophic risk”: Perhaps we’ll all die in a nuclear war, or an AI apocalypse, or a bioengineered global pandemic, and perhaps one of these things will happen quite soon.I recognize that GCR is a direct threat to The Wellbeing Of Others and to Personal Longevity, and as I do, I get scared. I get scared in a way I have never been scared before, because I’ve never before taken seriously the possibility that everyone might die, leaving nobody to continue the species or even to remember that we ever existed—and because this new perspective on the future of humanity has caused my own personal mortality to hit me harder than the lingering perspective of my Christian upbringing ever allowed. For the first time in my life, I’m really aware that I, and everyone I will ever care about, may die.My fear has me very focused on just two of my values: The Wellbeing Of Others and Personal Longevity. But as I read, think, and process, I realize that pretty much regardless of what my other values might be, they cannot possibly be satisfied if the entire species—or the planet, or the lightcone—is destroyed.[This is, of course, a version of EA that’s especially focused on the far future; but I think it’s common for a ...]]>
Fri, 03 Feb 2023 09:52:33 +0000 EA - “My Model Of EA Burnout” (Logan Strohl) by will Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “My Model Of EA Burnout” (Logan Strohl), published by will on February 3, 2023 on The Effective Altruism Forum.(Linkposting with permission from the author, Logan Strohl. Below, my - Will's - excerpted summary of the post precedes the full text. The first person speaker is Logan.)SummaryI think that EA burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.Perhaps your true values just happen to exactly match the central set of EA values, and that is why you are an EA.However, I think it’s much more common for people to be EAs because their true values have some overlap with the EA values; and I think it’s also common for EAs to dramatically overestimate the magnitude of that overlap. According to my model, this is why “EA burnout” is a thing.If I am wrong about what I value, then I will miss-manage my motivational resources. Chronic mismanagement of motivational resources results in some really bad stuff.Over a couple of years, I change my career, my friend group, and my hobbies to reflect my new values. I spend as little time as possible on Things That Don’t Matter, because now I care about Impact.I’ve oriented my whole life around The Should Values for my longtermist EA strategy [...] while neglecting my True Values. As a result, my engines of motivation are hardly ever receiving any fuel.It seems awfully important to me that EAs put fuel into their gas tanks, rather than dumping that fuel onto the pavement where fictional cars sit in their imaginations.It is probably possible to recover even from severe cases of EA burnout. I think I’ve done a decent job of it myself, though there’s certainly room for improvement. But it takes years.My advice to my past self would be: First, know who you are. If you’re in this for the long haul, build a life in which the real you can thrive. And then, from the abundance of that thriving, put the excess toward Impact.Full Text(Probably somebody else has said most of this. But I personally haven't read it, and felt like writing it down myself, so here we go.)I think that EA burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.Setting aside for the moment what “values” are and what it means to “actually” have one, suppose that I actually value these things (among others):True ValuesAbundancePowerNoveltySocial HarmonyBeautyGrowthComfortThe Wellbeing Of OthersExcitementPersonal LongevityAccuracyOne day I learn about “global catastrophic risk”: Perhaps we’ll all die in a nuclear war, or an AI apocalypse, or a bioengineered global pandemic, and perhaps one of these things will happen quite soon.I recognize that GCR is a direct threat to The Wellbeing Of Others and to Personal Longevity, and as I do, I get scared. I get scared in a way I have never been scared before, because I’ve never before taken seriously the possibility that everyone might die, leaving nobody to continue the species or even to remember that we ever existed—and because this new perspective on the future of humanity has caused my own personal mortality to hit me harder than the lingering perspective of my Christian upbringing ever allowed. For the first time in my life, I’m really aware that I, and everyone I will ever care about, may die.My fear has me very focused on just two of my values: The Wellbeing Of Others and Personal Longevity. But as I read, think, and process, I realize that pretty much regardless of what my other values might be, they cannot possibly be satisfied if the entire species—or the planet, or the lightcone—is destroyed.[This is, of course, a version of EA that’s especially focused on the far future; but I think it’s common for a ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “My Model Of EA Burnout” (Logan Strohl), published by will on February 3, 2023 on The Effective Altruism Forum.(Linkposting with permission from the author, Logan Strohl. Below, my - Will's - excerpted summary of the post precedes the full text. The first person speaker is Logan.)SummaryI think that EA burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.Perhaps your true values just happen to exactly match the central set of EA values, and that is why you are an EA.However, I think it’s much more common for people to be EAs because their true values have some overlap with the EA values; and I think it’s also common for EAs to dramatically overestimate the magnitude of that overlap. According to my model, this is why “EA burnout” is a thing.If I am wrong about what I value, then I will miss-manage my motivational resources. Chronic mismanagement of motivational resources results in some really bad stuff.Over a couple of years, I change my career, my friend group, and my hobbies to reflect my new values. I spend as little time as possible on Things That Don’t Matter, because now I care about Impact.I’ve oriented my whole life around The Should Values for my longtermist EA strategy [...] while neglecting my True Values. As a result, my engines of motivation are hardly ever receiving any fuel.It seems awfully important to me that EAs put fuel into their gas tanks, rather than dumping that fuel onto the pavement where fictional cars sit in their imaginations.It is probably possible to recover even from severe cases of EA burnout. I think I’ve done a decent job of it myself, though there’s certainly room for improvement. But it takes years.My advice to my past self would be: First, know who you are. If you’re in this for the long haul, build a life in which the real you can thrive. And then, from the abundance of that thriving, put the excess toward Impact.Full Text(Probably somebody else has said most of this. But I personally haven't read it, and felt like writing it down myself, so here we go.)I think that EA burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.Setting aside for the moment what “values” are and what it means to “actually” have one, suppose that I actually value these things (among others):True ValuesAbundancePowerNoveltySocial HarmonyBeautyGrowthComfortThe Wellbeing Of OthersExcitementPersonal LongevityAccuracyOne day I learn about “global catastrophic risk”: Perhaps we’ll all die in a nuclear war, or an AI apocalypse, or a bioengineered global pandemic, and perhaps one of these things will happen quite soon.I recognize that GCR is a direct threat to The Wellbeing Of Others and to Personal Longevity, and as I do, I get scared. I get scared in a way I have never been scared before, because I’ve never before taken seriously the possibility that everyone might die, leaving nobody to continue the species or even to remember that we ever existed—and because this new perspective on the future of humanity has caused my own personal mortality to hit me harder than the lingering perspective of my Christian upbringing ever allowed. For the first time in my life, I’m really aware that I, and everyone I will ever care about, may die.My fear has me very focused on just two of my values: The Wellbeing Of Others and Personal Longevity. But as I read, think, and process, I realize that pretty much regardless of what my other values might be, they cannot possibly be satisfied if the entire species—or the planet, or the lightcone—is destroyed.[This is, of course, a version of EA that’s especially focused on the far future; but I think it’s common for a ...]]>
will https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:20 None full 4710
zy2xK2fB5wBY9GzAb_NL_EA_EA EA - Rethink Priorities is hiring an Events and Office Coordinator by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is hiring an Events and Office Coordinator, published by Rethink Priorities on February 2, 2023 on The Effective Altruism Forum.Application deadline: February 12th, 2023, at the end of the day in US/Eastern (EST).About the PositionThe Events and Office Coordinator role at Rethink Priorities is a 90% event coordination and 10% office management position. The position reports to the Operations and Finance Manager.The Coordinator will be the primary event planner for RP, working on running 5-7 team retreats, 1 all-staff retreat, and up to 10 additional events each year. The typical event that the coordinator will work is a retreat for a team of researchers, enabling them to focus on solving key problems in their work. The events will range from 5 to 100 people. Event planning responsibilities include: researching and booking venues, securing supplies for events, booking lodging and coordinating travel for attendees, and often traveling to the events to ensure the operations run smoothly (and other tasks that may be required to organize an event).The Coordinator will also serve as office manager for the Rethink Priorities office in Philadelphia, Pennsylvania. This aspect of the role will primarily entail keeping the office stocked with supplies, coordinating visits by guests, ensuring it is prepped for events, managing the mail, and coordinating with vendors. On average, the role will be in office around 1 day a week, though there isn’t a strict schedule that the role is expected to follow (except as needed for events).This role is primarily remote, with in-office work occasionally required in the Fishtown neighborhood of Philadelphia, Pennsylvania, and is open to candidates located in the Philadelphia area, or who are willing to relocate. You may be expected to attend meetings during working hours between UTC-8 and UTC+3 time zones, where most of our staff are based. This role is open to full-time candidates.Key ResponsibilitiesCoordinating staff retreats and other events from start to finish, primarily by:Booking meeting spacesCoordinating mealsResearching and securing lodging for attendeesCoordinating travelAssist with events project managementStreamlining communication between the planning committee and attendeesProviding in-person support; traveling to larger events to ensure that they run smoothlyManaging event budgetsTaking the lead on in-person initiatives for Rethink Priorities, including through the creation of new events for staffManaging the RP Philadelphia Office, primarily by:Keeping supplies stockedCoordinating in-office coworking daysCoordinating with vendorsManaging access to the space for staff and visitorsChecking and distributing mailPrepping the space for eventsProcessing event requestsManaging the office calendarWhat we are looking forSkills and CompetenciesStrong attention to detailStrong project management skillsCritical thinking and problem solvingInterest in providing a high-quality experience for event attendeesAbility to write clear and concise communications with event attendeesKnowledge and Experience2+ years of organizing in-person events with many moving parts (i.e. vendor coordination, attendee communication, etc.)Location and Travel RequirementsResiding or willing to relocate to Philadelphia, PA USAAbility to travel both in the US and internationally up to 10 weeks per yearWhat we offerCompensationAnnual salary between the following ranges for a full-time position, prorated for part-time work:$80,000 - $81,000 USD pre-taxThe exact salary will be based on the candidate’s prior relevant experience and corresponding title level, and calculated using RP’s salary algorithm. RP does not negotiate salaries to ensure fairness.Other BenefitsOpportunity to contribute to a fast-growing...]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/zy2xK2fB5wBY9GzAb/rethink-priorities-is-hiring-an-events-and-office Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is hiring an Events and Office Coordinator, published by Rethink Priorities on February 2, 2023 on The Effective Altruism Forum.Application deadline: February 12th, 2023, at the end of the day in US/Eastern (EST).About the PositionThe Events and Office Coordinator role at Rethink Priorities is a 90% event coordination and 10% office management position. The position reports to the Operations and Finance Manager.The Coordinator will be the primary event planner for RP, working on running 5-7 team retreats, 1 all-staff retreat, and up to 10 additional events each year. The typical event that the coordinator will work is a retreat for a team of researchers, enabling them to focus on solving key problems in their work. The events will range from 5 to 100 people. Event planning responsibilities include: researching and booking venues, securing supplies for events, booking lodging and coordinating travel for attendees, and often traveling to the events to ensure the operations run smoothly (and other tasks that may be required to organize an event).The Coordinator will also serve as office manager for the Rethink Priorities office in Philadelphia, Pennsylvania. This aspect of the role will primarily entail keeping the office stocked with supplies, coordinating visits by guests, ensuring it is prepped for events, managing the mail, and coordinating with vendors. On average, the role will be in office around 1 day a week, though there isn’t a strict schedule that the role is expected to follow (except as needed for events).This role is primarily remote, with in-office work occasionally required in the Fishtown neighborhood of Philadelphia, Pennsylvania, and is open to candidates located in the Philadelphia area, or who are willing to relocate. You may be expected to attend meetings during working hours between UTC-8 and UTC+3 time zones, where most of our staff are based. This role is open to full-time candidates.Key ResponsibilitiesCoordinating staff retreats and other events from start to finish, primarily by:Booking meeting spacesCoordinating mealsResearching and securing lodging for attendeesCoordinating travelAssist with events project managementStreamlining communication between the planning committee and attendeesProviding in-person support; traveling to larger events to ensure that they run smoothlyManaging event budgetsTaking the lead on in-person initiatives for Rethink Priorities, including through the creation of new events for staffManaging the RP Philadelphia Office, primarily by:Keeping supplies stockedCoordinating in-office coworking daysCoordinating with vendorsManaging access to the space for staff and visitorsChecking and distributing mailPrepping the space for eventsProcessing event requestsManaging the office calendarWhat we are looking forSkills and CompetenciesStrong attention to detailStrong project management skillsCritical thinking and problem solvingInterest in providing a high-quality experience for event attendeesAbility to write clear and concise communications with event attendeesKnowledge and Experience2+ years of organizing in-person events with many moving parts (i.e. vendor coordination, attendee communication, etc.)Location and Travel RequirementsResiding or willing to relocate to Philadelphia, PA USAAbility to travel both in the US and internationally up to 10 weeks per yearWhat we offerCompensationAnnual salary between the following ranges for a full-time position, prorated for part-time work:$80,000 - $81,000 USD pre-taxThe exact salary will be based on the candidate’s prior relevant experience and corresponding title level, and calculated using RP’s salary algorithm. RP does not negotiate salaries to ensure fairness.Other BenefitsOpportunity to contribute to a fast-growing...]]>
Fri, 03 Feb 2023 08:53:34 +0000 EA - Rethink Priorities is hiring an Events and Office Coordinator by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is hiring an Events and Office Coordinator, published by Rethink Priorities on February 2, 2023 on The Effective Altruism Forum.Application deadline: February 12th, 2023, at the end of the day in US/Eastern (EST).About the PositionThe Events and Office Coordinator role at Rethink Priorities is a 90% event coordination and 10% office management position. The position reports to the Operations and Finance Manager.The Coordinator will be the primary event planner for RP, working on running 5-7 team retreats, 1 all-staff retreat, and up to 10 additional events each year. The typical event that the coordinator will work is a retreat for a team of researchers, enabling them to focus on solving key problems in their work. The events will range from 5 to 100 people. Event planning responsibilities include: researching and booking venues, securing supplies for events, booking lodging and coordinating travel for attendees, and often traveling to the events to ensure the operations run smoothly (and other tasks that may be required to organize an event).The Coordinator will also serve as office manager for the Rethink Priorities office in Philadelphia, Pennsylvania. This aspect of the role will primarily entail keeping the office stocked with supplies, coordinating visits by guests, ensuring it is prepped for events, managing the mail, and coordinating with vendors. On average, the role will be in office around 1 day a week, though there isn’t a strict schedule that the role is expected to follow (except as needed for events).This role is primarily remote, with in-office work occasionally required in the Fishtown neighborhood of Philadelphia, Pennsylvania, and is open to candidates located in the Philadelphia area, or who are willing to relocate. You may be expected to attend meetings during working hours between UTC-8 and UTC+3 time zones, where most of our staff are based. This role is open to full-time candidates.Key ResponsibilitiesCoordinating staff retreats and other events from start to finish, primarily by:Booking meeting spacesCoordinating mealsResearching and securing lodging for attendeesCoordinating travelAssist with events project managementStreamlining communication between the planning committee and attendeesProviding in-person support; traveling to larger events to ensure that they run smoothlyManaging event budgetsTaking the lead on in-person initiatives for Rethink Priorities, including through the creation of new events for staffManaging the RP Philadelphia Office, primarily by:Keeping supplies stockedCoordinating in-office coworking daysCoordinating with vendorsManaging access to the space for staff and visitorsChecking and distributing mailPrepping the space for eventsProcessing event requestsManaging the office calendarWhat we are looking forSkills and CompetenciesStrong attention to detailStrong project management skillsCritical thinking and problem solvingInterest in providing a high-quality experience for event attendeesAbility to write clear and concise communications with event attendeesKnowledge and Experience2+ years of organizing in-person events with many moving parts (i.e. vendor coordination, attendee communication, etc.)Location and Travel RequirementsResiding or willing to relocate to Philadelphia, PA USAAbility to travel both in the US and internationally up to 10 weeks per yearWhat we offerCompensationAnnual salary between the following ranges for a full-time position, prorated for part-time work:$80,000 - $81,000 USD pre-taxThe exact salary will be based on the candidate’s prior relevant experience and corresponding title level, and calculated using RP’s salary algorithm. RP does not negotiate salaries to ensure fairness.Other BenefitsOpportunity to contribute to a fast-growing...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is hiring an Events and Office Coordinator, published by Rethink Priorities on February 2, 2023 on The Effective Altruism Forum.Application deadline: February 12th, 2023, at the end of the day in US/Eastern (EST).About the PositionThe Events and Office Coordinator role at Rethink Priorities is a 90% event coordination and 10% office management position. The position reports to the Operations and Finance Manager.The Coordinator will be the primary event planner for RP, working on running 5-7 team retreats, 1 all-staff retreat, and up to 10 additional events each year. The typical event that the coordinator will work is a retreat for a team of researchers, enabling them to focus on solving key problems in their work. The events will range from 5 to 100 people. Event planning responsibilities include: researching and booking venues, securing supplies for events, booking lodging and coordinating travel for attendees, and often traveling to the events to ensure the operations run smoothly (and other tasks that may be required to organize an event).The Coordinator will also serve as office manager for the Rethink Priorities office in Philadelphia, Pennsylvania. This aspect of the role will primarily entail keeping the office stocked with supplies, coordinating visits by guests, ensuring it is prepped for events, managing the mail, and coordinating with vendors. On average, the role will be in office around 1 day a week, though there isn’t a strict schedule that the role is expected to follow (except as needed for events).This role is primarily remote, with in-office work occasionally required in the Fishtown neighborhood of Philadelphia, Pennsylvania, and is open to candidates located in the Philadelphia area, or who are willing to relocate. You may be expected to attend meetings during working hours between UTC-8 and UTC+3 time zones, where most of our staff are based. This role is open to full-time candidates.Key ResponsibilitiesCoordinating staff retreats and other events from start to finish, primarily by:Booking meeting spacesCoordinating mealsResearching and securing lodging for attendeesCoordinating travelAssist with events project managementStreamlining communication between the planning committee and attendeesProviding in-person support; traveling to larger events to ensure that they run smoothlyManaging event budgetsTaking the lead on in-person initiatives for Rethink Priorities, including through the creation of new events for staffManaging the RP Philadelphia Office, primarily by:Keeping supplies stockedCoordinating in-office coworking daysCoordinating with vendorsManaging access to the space for staff and visitorsChecking and distributing mailPrepping the space for eventsProcessing event requestsManaging the office calendarWhat we are looking forSkills and CompetenciesStrong attention to detailStrong project management skillsCritical thinking and problem solvingInterest in providing a high-quality experience for event attendeesAbility to write clear and concise communications with event attendeesKnowledge and Experience2+ years of organizing in-person events with many moving parts (i.e. vendor coordination, attendee communication, etc.)Location and Travel RequirementsResiding or willing to relocate to Philadelphia, PA USAAbility to travel both in the US and internationally up to 10 weeks per yearWhat we offerCompensationAnnual salary between the following ranges for a full-time position, prorated for part-time work:$80,000 - $81,000 USD pre-taxThe exact salary will be based on the candidate’s prior relevant experience and corresponding title level, and calculated using RP’s salary algorithm. RP does not negotiate salaries to ensure fairness.Other BenefitsOpportunity to contribute to a fast-growing...]]>
Rethink Priorities https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:49 None full 4712
qm5BowbscCvBdhbpi_NL_EA_EA EA - Let's advertise EA infrastructure projects, Feb 2023 by Arepo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's advertise EA infrastructure projects, Feb 2023, published by Arepo on February 2, 2023 on The Effective Altruism Forum.A few months ago I posted an advertisement for various EA infrastructure projects on the grounds that there are now many such free or discounted services, and there's very little ongoing way to bring them to the attention of newcomers or remind people who've been around longer and may have forgotten about them.The trouble is, that post had the same issues all attempts to broadcast ideas in this space do: it sat on the front page for a few hours and then fell away - with the organisation I edited later getting almost no exposure. So this post is essentially a repost of it, albeit with the suggestions that were edited in later from the start. Ideally I would hope a post like this could be pinned to the top of the forum, but in lieu of that, I'm wondering about posting something like this every N months. My reservations are a) the hassle and b) it being too spammy (or, conversely, that it ends up being a cheap way of karma farming - to counteract this I've left a comment below for counterbalancing downvotes if you upvote this post). Please let me know your thoughts in the comments: assuming nothing better is implemented, should I continue to do this? If so, for what value of N/under what conditions?Meanwhile, without further ado, here are the projects that you should check out:Coworking/socialisingEA Gather Town - An always-on virtual meeting place for coworking, connecting, and having both casual and impactful conversationsEA Anywhere - An online EA community for everyoneEA coworking Discord - A Discord server dedicated to online coworkingFree or subsidised accommodationCEEALAR/formerly the EA hotel - Provides free or subsidised serviced accommodation and board, and a moderate stipend for other living expenses.NonLinear's EA house database - An experiment by Nonlinear to try to connect EAs with extra space with EAs who could do good work if they didn't have to pay rent (or could pay less rent).Professional servicesGood Governance Project - helps EA organizations create strong boards by finding qualified and diverse professionalsAltruistic Agency - provides discounted tech support and development to organisationsTech support from Soof GolanLegal advice from Tyrone Barugh - a practice under consideration with the primary aim of providing legal support to EA orgs and individual EAs, with that practice probably being based in the UK.SEADS - Data Science services to EA organizationsUser-Friendly - an EA-aligned marketing agencyAnti Entropy - offers services related operations for EA organizationsArb - Our consulting work spans forecasting, machine learning, and epidemiology. We do original research, evidence reviews, and large-scale data pipelines.Pineapple Operations - Maintains a public database of people who are seeking operations or Personal Assistant/Executive Assistant work (part- or full-time) within the next 6 months in the Effective Altruism ecosystemCoachingAI Safety Support - free health coaching to people working on AI safety80,0000 Hours career coaching - Speak with us for free about using your career to help solve one of the world’s most pressing problemsYonatan Cale - Coaching for software devsTraining for Good - Our goal is to help you clarify your aims, reduce self-imposed friction, and improve your leadership.FAANG style mock interviewsFinancial and other material supportNonlinear productivity fund - A low-barrier fund paying for productivity enhancing tools for top longtermists. Supported services and products include Coaching, Therapy, Sleep coaching, Medication management , Personal Assistants, Research Assistants, Virtual Assistants, Tutors (e.g. ML, CS, language), Asana, FocusMate, Zapier, etc., Produ...]]>
Arepo https://forum.effectivealtruism.org/posts/qm5BowbscCvBdhbpi/let-s-advertise-ea-infrastructure-projects-feb-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's advertise EA infrastructure projects, Feb 2023, published by Arepo on February 2, 2023 on The Effective Altruism Forum.A few months ago I posted an advertisement for various EA infrastructure projects on the grounds that there are now many such free or discounted services, and there's very little ongoing way to bring them to the attention of newcomers or remind people who've been around longer and may have forgotten about them.The trouble is, that post had the same issues all attempts to broadcast ideas in this space do: it sat on the front page for a few hours and then fell away - with the organisation I edited later getting almost no exposure. So this post is essentially a repost of it, albeit with the suggestions that were edited in later from the start. Ideally I would hope a post like this could be pinned to the top of the forum, but in lieu of that, I'm wondering about posting something like this every N months. My reservations are a) the hassle and b) it being too spammy (or, conversely, that it ends up being a cheap way of karma farming - to counteract this I've left a comment below for counterbalancing downvotes if you upvote this post). Please let me know your thoughts in the comments: assuming nothing better is implemented, should I continue to do this? If so, for what value of N/under what conditions?Meanwhile, without further ado, here are the projects that you should check out:Coworking/socialisingEA Gather Town - An always-on virtual meeting place for coworking, connecting, and having both casual and impactful conversationsEA Anywhere - An online EA community for everyoneEA coworking Discord - A Discord server dedicated to online coworkingFree or subsidised accommodationCEEALAR/formerly the EA hotel - Provides free or subsidised serviced accommodation and board, and a moderate stipend for other living expenses.NonLinear's EA house database - An experiment by Nonlinear to try to connect EAs with extra space with EAs who could do good work if they didn't have to pay rent (or could pay less rent).Professional servicesGood Governance Project - helps EA organizations create strong boards by finding qualified and diverse professionalsAltruistic Agency - provides discounted tech support and development to organisationsTech support from Soof GolanLegal advice from Tyrone Barugh - a practice under consideration with the primary aim of providing legal support to EA orgs and individual EAs, with that practice probably being based in the UK.SEADS - Data Science services to EA organizationsUser-Friendly - an EA-aligned marketing agencyAnti Entropy - offers services related operations for EA organizationsArb - Our consulting work spans forecasting, machine learning, and epidemiology. We do original research, evidence reviews, and large-scale data pipelines.Pineapple Operations - Maintains a public database of people who are seeking operations or Personal Assistant/Executive Assistant work (part- or full-time) within the next 6 months in the Effective Altruism ecosystemCoachingAI Safety Support - free health coaching to people working on AI safety80,0000 Hours career coaching - Speak with us for free about using your career to help solve one of the world’s most pressing problemsYonatan Cale - Coaching for software devsTraining for Good - Our goal is to help you clarify your aims, reduce self-imposed friction, and improve your leadership.FAANG style mock interviewsFinancial and other material supportNonlinear productivity fund - A low-barrier fund paying for productivity enhancing tools for top longtermists. Supported services and products include Coaching, Therapy, Sleep coaching, Medication management , Personal Assistants, Research Assistants, Virtual Assistants, Tutors (e.g. ML, CS, language), Asana, FocusMate, Zapier, etc., Produ...]]>
Fri, 03 Feb 2023 02:16:22 +0000 EA - Let's advertise EA infrastructure projects, Feb 2023 by Arepo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's advertise EA infrastructure projects, Feb 2023, published by Arepo on February 2, 2023 on The Effective Altruism Forum.A few months ago I posted an advertisement for various EA infrastructure projects on the grounds that there are now many such free or discounted services, and there's very little ongoing way to bring them to the attention of newcomers or remind people who've been around longer and may have forgotten about them.The trouble is, that post had the same issues all attempts to broadcast ideas in this space do: it sat on the front page for a few hours and then fell away - with the organisation I edited later getting almost no exposure. So this post is essentially a repost of it, albeit with the suggestions that were edited in later from the start. Ideally I would hope a post like this could be pinned to the top of the forum, but in lieu of that, I'm wondering about posting something like this every N months. My reservations are a) the hassle and b) it being too spammy (or, conversely, that it ends up being a cheap way of karma farming - to counteract this I've left a comment below for counterbalancing downvotes if you upvote this post). Please let me know your thoughts in the comments: assuming nothing better is implemented, should I continue to do this? If so, for what value of N/under what conditions?Meanwhile, without further ado, here are the projects that you should check out:Coworking/socialisingEA Gather Town - An always-on virtual meeting place for coworking, connecting, and having both casual and impactful conversationsEA Anywhere - An online EA community for everyoneEA coworking Discord - A Discord server dedicated to online coworkingFree or subsidised accommodationCEEALAR/formerly the EA hotel - Provides free or subsidised serviced accommodation and board, and a moderate stipend for other living expenses.NonLinear's EA house database - An experiment by Nonlinear to try to connect EAs with extra space with EAs who could do good work if they didn't have to pay rent (or could pay less rent).Professional servicesGood Governance Project - helps EA organizations create strong boards by finding qualified and diverse professionalsAltruistic Agency - provides discounted tech support and development to organisationsTech support from Soof GolanLegal advice from Tyrone Barugh - a practice under consideration with the primary aim of providing legal support to EA orgs and individual EAs, with that practice probably being based in the UK.SEADS - Data Science services to EA organizationsUser-Friendly - an EA-aligned marketing agencyAnti Entropy - offers services related operations for EA organizationsArb - Our consulting work spans forecasting, machine learning, and epidemiology. We do original research, evidence reviews, and large-scale data pipelines.Pineapple Operations - Maintains a public database of people who are seeking operations or Personal Assistant/Executive Assistant work (part- or full-time) within the next 6 months in the Effective Altruism ecosystemCoachingAI Safety Support - free health coaching to people working on AI safety80,0000 Hours career coaching - Speak with us for free about using your career to help solve one of the world’s most pressing problemsYonatan Cale - Coaching for software devsTraining for Good - Our goal is to help you clarify your aims, reduce self-imposed friction, and improve your leadership.FAANG style mock interviewsFinancial and other material supportNonlinear productivity fund - A low-barrier fund paying for productivity enhancing tools for top longtermists. Supported services and products include Coaching, Therapy, Sleep coaching, Medication management , Personal Assistants, Research Assistants, Virtual Assistants, Tutors (e.g. ML, CS, language), Asana, FocusMate, Zapier, etc., Produ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's advertise EA infrastructure projects, Feb 2023, published by Arepo on February 2, 2023 on The Effective Altruism Forum.A few months ago I posted an advertisement for various EA infrastructure projects on the grounds that there are now many such free or discounted services, and there's very little ongoing way to bring them to the attention of newcomers or remind people who've been around longer and may have forgotten about them.The trouble is, that post had the same issues all attempts to broadcast ideas in this space do: it sat on the front page for a few hours and then fell away - with the organisation I edited later getting almost no exposure. So this post is essentially a repost of it, albeit with the suggestions that were edited in later from the start. Ideally I would hope a post like this could be pinned to the top of the forum, but in lieu of that, I'm wondering about posting something like this every N months. My reservations are a) the hassle and b) it being too spammy (or, conversely, that it ends up being a cheap way of karma farming - to counteract this I've left a comment below for counterbalancing downvotes if you upvote this post). Please let me know your thoughts in the comments: assuming nothing better is implemented, should I continue to do this? If so, for what value of N/under what conditions?Meanwhile, without further ado, here are the projects that you should check out:Coworking/socialisingEA Gather Town - An always-on virtual meeting place for coworking, connecting, and having both casual and impactful conversationsEA Anywhere - An online EA community for everyoneEA coworking Discord - A Discord server dedicated to online coworkingFree or subsidised accommodationCEEALAR/formerly the EA hotel - Provides free or subsidised serviced accommodation and board, and a moderate stipend for other living expenses.NonLinear's EA house database - An experiment by Nonlinear to try to connect EAs with extra space with EAs who could do good work if they didn't have to pay rent (or could pay less rent).Professional servicesGood Governance Project - helps EA organizations create strong boards by finding qualified and diverse professionalsAltruistic Agency - provides discounted tech support and development to organisationsTech support from Soof GolanLegal advice from Tyrone Barugh - a practice under consideration with the primary aim of providing legal support to EA orgs and individual EAs, with that practice probably being based in the UK.SEADS - Data Science services to EA organizationsUser-Friendly - an EA-aligned marketing agencyAnti Entropy - offers services related operations for EA organizationsArb - Our consulting work spans forecasting, machine learning, and epidemiology. We do original research, evidence reviews, and large-scale data pipelines.Pineapple Operations - Maintains a public database of people who are seeking operations or Personal Assistant/Executive Assistant work (part- or full-time) within the next 6 months in the Effective Altruism ecosystemCoachingAI Safety Support - free health coaching to people working on AI safety80,0000 Hours career coaching - Speak with us for free about using your career to help solve one of the world’s most pressing problemsYonatan Cale - Coaching for software devsTraining for Good - Our goal is to help you clarify your aims, reduce self-imposed friction, and improve your leadership.FAANG style mock interviewsFinancial and other material supportNonlinear productivity fund - A low-barrier fund paying for productivity enhancing tools for top longtermists. Supported services and products include Coaching, Therapy, Sleep coaching, Medication management , Personal Assistants, Research Assistants, Virtual Assistants, Tutors (e.g. ML, CS, language), Asana, FocusMate, Zapier, etc., Produ...]]>
Arepo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:37 None full 4711
HQL7fHWKHRnCijvik_NL_EA_EA EA - CE Incubation Programs - applications now open + updates to the programs by KarolinaSarek Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE Incubation Programs - applications now open + updates to the programs, published by KarolinaSarek on February 2, 2023 on The Effective Altruism Forum.Applications are now open for our two upcoming Incubation Programs:July-August 2023 with a focus on biosecurity interventions and large-scale global health interventionsFebruary-March 2024 with a focus on farmed animals and global health and development mass-media interventionsIn this post we set out some of the key updates we’ve made to the program, namely:Increased fundingMore time for participants in person in LondonExtended stipends and support to provide an even bigger safety net for participantsEven more ongoing support after the programMore time for applicationsContext:In four years we’ve launched 23 new effective charities that have reached 120 million animals and over 10 million people. The Incubation Program provides you with two months of intensive training, well-researched charity ideas, and access to the funding you need to launch. All we care about is impact, and the most pressing determinant of success is finding potential founders.APPLY HEREUpdates to the Incubation ProgramAll the details are here on our website, but below we summarize the latest changes/improvements.Increased quantity and probability of fundingIn recent years, in part due to our portfolio’s track record, we’re seeing a significant uptick in the seed funding being achieved by our incubatees. In the most recent round, for example, eight out of nine participants started organizations and received $732,000, with grants ranging from $100,000 to $220,000. The ninth participant joined the CE team as a research analyst.A Bigger Safety NetIn the past two years, we’ve trained 34 people. After the program:20 launched new charities and raised over $1.2 million in seed funding6 got jobs in EA orgs (including CE)1 worked on mental health research with funding in Asia (and 1 year later become a co-founder of a newly incubated by CE mental health charity)1 worked as a senior EA community manager1 got funded to do their own specialist research project and has since hired 3 people2 launched their own grantmaking foundation1 works for that grantmaking foundation1 is running for office in America and was elected to the district parliament1 kept on working on the project they co-founded in the alternative protein space1 runs a charity evaluator in China1 was hired by one of the previously incubated charitiesSo in summary: 100% of participants, within weeks of finishing the program, landed relevant roles with high personal fit and excellent impact potential.During the program we will provide you with:Stipends to cover your living costs during the Incubation Program (e.g., rent, wifi, food, childcare). The stipends are around $2,000 per month and are based on participants' needs and adjusted accordingly.Travel and board costs for the 2 weeks in person in London.If, for any reason, you do not start a charity after the program, we provide:Career mentorship (our track record for connecting non-founder participants to research grants, related jobs, and other pathways to impact is near 100%).Two-month stipends to provide a safety net during the period of looking for alternative opportunities.More time in-person in LondonThe Incubation Program lasts 8 weeks, followed by a 2 week seed-funding process.The 8 week program runs online, now with 2 weeks in person in CE’s London officeDuring the 2 week seed-funding process you make final improvements to your proposal, which is submitted to the CE seed network that makes the final decision on your grant.Even more support after the programYou will graduate the program with a co-founder, a high-quality charity idea, a plan for implementation, and a robust funding proposal. On top ...]]>
KarolinaSarek https://forum.effectivealtruism.org/posts/HQL7fHWKHRnCijvik/ce-incubation-programs-applications-now-open-updates-to-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE Incubation Programs - applications now open + updates to the programs, published by KarolinaSarek on February 2, 2023 on The Effective Altruism Forum.Applications are now open for our two upcoming Incubation Programs:July-August 2023 with a focus on biosecurity interventions and large-scale global health interventionsFebruary-March 2024 with a focus on farmed animals and global health and development mass-media interventionsIn this post we set out some of the key updates we’ve made to the program, namely:Increased fundingMore time for participants in person in LondonExtended stipends and support to provide an even bigger safety net for participantsEven more ongoing support after the programMore time for applicationsContext:In four years we’ve launched 23 new effective charities that have reached 120 million animals and over 10 million people. The Incubation Program provides you with two months of intensive training, well-researched charity ideas, and access to the funding you need to launch. All we care about is impact, and the most pressing determinant of success is finding potential founders.APPLY HEREUpdates to the Incubation ProgramAll the details are here on our website, but below we summarize the latest changes/improvements.Increased quantity and probability of fundingIn recent years, in part due to our portfolio’s track record, we’re seeing a significant uptick in the seed funding being achieved by our incubatees. In the most recent round, for example, eight out of nine participants started organizations and received $732,000, with grants ranging from $100,000 to $220,000. The ninth participant joined the CE team as a research analyst.A Bigger Safety NetIn the past two years, we’ve trained 34 people. After the program:20 launched new charities and raised over $1.2 million in seed funding6 got jobs in EA orgs (including CE)1 worked on mental health research with funding in Asia (and 1 year later become a co-founder of a newly incubated by CE mental health charity)1 worked as a senior EA community manager1 got funded to do their own specialist research project and has since hired 3 people2 launched their own grantmaking foundation1 works for that grantmaking foundation1 is running for office in America and was elected to the district parliament1 kept on working on the project they co-founded in the alternative protein space1 runs a charity evaluator in China1 was hired by one of the previously incubated charitiesSo in summary: 100% of participants, within weeks of finishing the program, landed relevant roles with high personal fit and excellent impact potential.During the program we will provide you with:Stipends to cover your living costs during the Incubation Program (e.g., rent, wifi, food, childcare). The stipends are around $2,000 per month and are based on participants' needs and adjusted accordingly.Travel and board costs for the 2 weeks in person in London.If, for any reason, you do not start a charity after the program, we provide:Career mentorship (our track record for connecting non-founder participants to research grants, related jobs, and other pathways to impact is near 100%).Two-month stipends to provide a safety net during the period of looking for alternative opportunities.More time in-person in LondonThe Incubation Program lasts 8 weeks, followed by a 2 week seed-funding process.The 8 week program runs online, now with 2 weeks in person in CE’s London officeDuring the 2 week seed-funding process you make final improvements to your proposal, which is submitted to the CE seed network that makes the final decision on your grant.Even more support after the programYou will graduate the program with a co-founder, a high-quality charity idea, a plan for implementation, and a robust funding proposal. On top ...]]>
Thu, 02 Feb 2023 17:33:09 +0000 EA - CE Incubation Programs - applications now open + updates to the programs by KarolinaSarek Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE Incubation Programs - applications now open + updates to the programs, published by KarolinaSarek on February 2, 2023 on The Effective Altruism Forum.Applications are now open for our two upcoming Incubation Programs:July-August 2023 with a focus on biosecurity interventions and large-scale global health interventionsFebruary-March 2024 with a focus on farmed animals and global health and development mass-media interventionsIn this post we set out some of the key updates we’ve made to the program, namely:Increased fundingMore time for participants in person in LondonExtended stipends and support to provide an even bigger safety net for participantsEven more ongoing support after the programMore time for applicationsContext:In four years we’ve launched 23 new effective charities that have reached 120 million animals and over 10 million people. The Incubation Program provides you with two months of intensive training, well-researched charity ideas, and access to the funding you need to launch. All we care about is impact, and the most pressing determinant of success is finding potential founders.APPLY HEREUpdates to the Incubation ProgramAll the details are here on our website, but below we summarize the latest changes/improvements.Increased quantity and probability of fundingIn recent years, in part due to our portfolio’s track record, we’re seeing a significant uptick in the seed funding being achieved by our incubatees. In the most recent round, for example, eight out of nine participants started organizations and received $732,000, with grants ranging from $100,000 to $220,000. The ninth participant joined the CE team as a research analyst.A Bigger Safety NetIn the past two years, we’ve trained 34 people. After the program:20 launched new charities and raised over $1.2 million in seed funding6 got jobs in EA orgs (including CE)1 worked on mental health research with funding in Asia (and 1 year later become a co-founder of a newly incubated by CE mental health charity)1 worked as a senior EA community manager1 got funded to do their own specialist research project and has since hired 3 people2 launched their own grantmaking foundation1 works for that grantmaking foundation1 is running for office in America and was elected to the district parliament1 kept on working on the project they co-founded in the alternative protein space1 runs a charity evaluator in China1 was hired by one of the previously incubated charitiesSo in summary: 100% of participants, within weeks of finishing the program, landed relevant roles with high personal fit and excellent impact potential.During the program we will provide you with:Stipends to cover your living costs during the Incubation Program (e.g., rent, wifi, food, childcare). The stipends are around $2,000 per month and are based on participants' needs and adjusted accordingly.Travel and board costs for the 2 weeks in person in London.If, for any reason, you do not start a charity after the program, we provide:Career mentorship (our track record for connecting non-founder participants to research grants, related jobs, and other pathways to impact is near 100%).Two-month stipends to provide a safety net during the period of looking for alternative opportunities.More time in-person in LondonThe Incubation Program lasts 8 weeks, followed by a 2 week seed-funding process.The 8 week program runs online, now with 2 weeks in person in CE’s London officeDuring the 2 week seed-funding process you make final improvements to your proposal, which is submitted to the CE seed network that makes the final decision on your grant.Even more support after the programYou will graduate the program with a co-founder, a high-quality charity idea, a plan for implementation, and a robust funding proposal. On top ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE Incubation Programs - applications now open + updates to the programs, published by KarolinaSarek on February 2, 2023 on The Effective Altruism Forum.Applications are now open for our two upcoming Incubation Programs:July-August 2023 with a focus on biosecurity interventions and large-scale global health interventionsFebruary-March 2024 with a focus on farmed animals and global health and development mass-media interventionsIn this post we set out some of the key updates we’ve made to the program, namely:Increased fundingMore time for participants in person in LondonExtended stipends and support to provide an even bigger safety net for participantsEven more ongoing support after the programMore time for applicationsContext:In four years we’ve launched 23 new effective charities that have reached 120 million animals and over 10 million people. The Incubation Program provides you with two months of intensive training, well-researched charity ideas, and access to the funding you need to launch. All we care about is impact, and the most pressing determinant of success is finding potential founders.APPLY HEREUpdates to the Incubation ProgramAll the details are here on our website, but below we summarize the latest changes/improvements.Increased quantity and probability of fundingIn recent years, in part due to our portfolio’s track record, we’re seeing a significant uptick in the seed funding being achieved by our incubatees. In the most recent round, for example, eight out of nine participants started organizations and received $732,000, with grants ranging from $100,000 to $220,000. The ninth participant joined the CE team as a research analyst.A Bigger Safety NetIn the past two years, we’ve trained 34 people. After the program:20 launched new charities and raised over $1.2 million in seed funding6 got jobs in EA orgs (including CE)1 worked on mental health research with funding in Asia (and 1 year later become a co-founder of a newly incubated by CE mental health charity)1 worked as a senior EA community manager1 got funded to do their own specialist research project and has since hired 3 people2 launched their own grantmaking foundation1 works for that grantmaking foundation1 is running for office in America and was elected to the district parliament1 kept on working on the project they co-founded in the alternative protein space1 runs a charity evaluator in China1 was hired by one of the previously incubated charitiesSo in summary: 100% of participants, within weeks of finishing the program, landed relevant roles with high personal fit and excellent impact potential.During the program we will provide you with:Stipends to cover your living costs during the Incubation Program (e.g., rent, wifi, food, childcare). The stipends are around $2,000 per month and are based on participants' needs and adjusted accordingly.Travel and board costs for the 2 weeks in person in London.If, for any reason, you do not start a charity after the program, we provide:Career mentorship (our track record for connecting non-founder participants to research grants, related jobs, and other pathways to impact is near 100%).Two-month stipends to provide a safety net during the period of looking for alternative opportunities.More time in-person in LondonThe Incubation Program lasts 8 weeks, followed by a 2 week seed-funding process.The 8 week program runs online, now with 2 weeks in person in CE’s London officeDuring the 2 week seed-funding process you make final improvements to your proposal, which is submitted to the CE seed network that makes the final decision on your grant.Even more support after the programYou will graduate the program with a co-founder, a high-quality charity idea, a plan for implementation, and a robust funding proposal. On top ...]]>
KarolinaSarek https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:50 None full 4706
jGqHkeDw8awQnPYat_NL_EA_EA EA - Epoch Impact Report 2022 by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Epoch Impact Report 2022, published by Jaime Sevilla on February 2, 2023 on The Effective Altruism Forum.Epoch is a research group forecasting the development of transformative Artificial Intelligence. We try to understand how progress in AI happens and what economic impacts we might see from advanced AI.We want to enable better governance during this economic transition by gathering information about the timing of new developments, studying which levers can be used to influence AI progress and making current and past trends in ML more understandable.Founded in April of 2022, Epoch currently has a staff of 13 people, corresponding to 9 FTEs. We have received $1.96M in funding through a grant from Open Philanthropy. We are fiscally sponsored and operationally supported by Rethink Priorities, whose Special Projects team has been a core part of our success as an organisation.Epoch is fundraising a total of $7.07M over 2 years, or approximately $2.91M for September 2023 to September 2024, and $4.16M for September 2024 to September 2025. A detailed budget can be found in the full report.With this funding, we expect to continue and expand our research capacity in understanding the future of AI. Through our research, we hope to continue informing about key drivers of AI progress and when key AI capabilities will be developed.You can now donate to Epoch and subscribe to our newsletter.Read the full report to learn about our key insights this year, an overview of our research and non-research activities, and our plans for the future.Epoch is one of the coolest (and in my opinion underrated) research orgs for understanding trends in ML. Rather than speculating, they meticulously analyze empirical trends and make projections for the future. Lots of interesting findings in their data! – Jacob SteinhardtRead now the full reportThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jaime Sevilla https://forum.effectivealtruism.org/posts/jGqHkeDw8awQnPYat/epoch-impact-report-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Epoch Impact Report 2022, published by Jaime Sevilla on February 2, 2023 on The Effective Altruism Forum.Epoch is a research group forecasting the development of transformative Artificial Intelligence. We try to understand how progress in AI happens and what economic impacts we might see from advanced AI.We want to enable better governance during this economic transition by gathering information about the timing of new developments, studying which levers can be used to influence AI progress and making current and past trends in ML more understandable.Founded in April of 2022, Epoch currently has a staff of 13 people, corresponding to 9 FTEs. We have received $1.96M in funding through a grant from Open Philanthropy. We are fiscally sponsored and operationally supported by Rethink Priorities, whose Special Projects team has been a core part of our success as an organisation.Epoch is fundraising a total of $7.07M over 2 years, or approximately $2.91M for September 2023 to September 2024, and $4.16M for September 2024 to September 2025. A detailed budget can be found in the full report.With this funding, we expect to continue and expand our research capacity in understanding the future of AI. Through our research, we hope to continue informing about key drivers of AI progress and when key AI capabilities will be developed.You can now donate to Epoch and subscribe to our newsletter.Read the full report to learn about our key insights this year, an overview of our research and non-research activities, and our plans for the future.Epoch is one of the coolest (and in my opinion underrated) research orgs for understanding trends in ML. Rather than speculating, they meticulously analyze empirical trends and make projections for the future. Lots of interesting findings in their data! – Jacob SteinhardtRead now the full reportThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 02 Feb 2023 17:28:34 +0000 EA - Epoch Impact Report 2022 by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Epoch Impact Report 2022, published by Jaime Sevilla on February 2, 2023 on The Effective Altruism Forum.Epoch is a research group forecasting the development of transformative Artificial Intelligence. We try to understand how progress in AI happens and what economic impacts we might see from advanced AI.We want to enable better governance during this economic transition by gathering information about the timing of new developments, studying which levers can be used to influence AI progress and making current and past trends in ML more understandable.Founded in April of 2022, Epoch currently has a staff of 13 people, corresponding to 9 FTEs. We have received $1.96M in funding through a grant from Open Philanthropy. We are fiscally sponsored and operationally supported by Rethink Priorities, whose Special Projects team has been a core part of our success as an organisation.Epoch is fundraising a total of $7.07M over 2 years, or approximately $2.91M for September 2023 to September 2024, and $4.16M for September 2024 to September 2025. A detailed budget can be found in the full report.With this funding, we expect to continue and expand our research capacity in understanding the future of AI. Through our research, we hope to continue informing about key drivers of AI progress and when key AI capabilities will be developed.You can now donate to Epoch and subscribe to our newsletter.Read the full report to learn about our key insights this year, an overview of our research and non-research activities, and our plans for the future.Epoch is one of the coolest (and in my opinion underrated) research orgs for understanding trends in ML. Rather than speculating, they meticulously analyze empirical trends and make projections for the future. Lots of interesting findings in their data! – Jacob SteinhardtRead now the full reportThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Epoch Impact Report 2022, published by Jaime Sevilla on February 2, 2023 on The Effective Altruism Forum.Epoch is a research group forecasting the development of transformative Artificial Intelligence. We try to understand how progress in AI happens and what economic impacts we might see from advanced AI.We want to enable better governance during this economic transition by gathering information about the timing of new developments, studying which levers can be used to influence AI progress and making current and past trends in ML more understandable.Founded in April of 2022, Epoch currently has a staff of 13 people, corresponding to 9 FTEs. We have received $1.96M in funding through a grant from Open Philanthropy. We are fiscally sponsored and operationally supported by Rethink Priorities, whose Special Projects team has been a core part of our success as an organisation.Epoch is fundraising a total of $7.07M over 2 years, or approximately $2.91M for September 2023 to September 2024, and $4.16M for September 2024 to September 2025. A detailed budget can be found in the full report.With this funding, we expect to continue and expand our research capacity in understanding the future of AI. Through our research, we hope to continue informing about key drivers of AI progress and when key AI capabilities will be developed.You can now donate to Epoch and subscribe to our newsletter.Read the full report to learn about our key insights this year, an overview of our research and non-research activities, and our plans for the future.Epoch is one of the coolest (and in my opinion underrated) research orgs for understanding trends in ML. Rather than speculating, they meticulously analyze empirical trends and make projections for the future. Lots of interesting findings in their data! – Jacob SteinhardtRead now the full reportThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jaime Sevilla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:07 None full 4707
JpP25iKFgMrTJGYJs_NL_EA_EA EA - What advice would you give someone who wants to avoid doxing themselves here? by Throwaway012723 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What advice would you give someone who wants to avoid doxing themselves here?, published by Throwaway012723 on February 2, 2023 on The Effective Altruism Forum.From my recent comment, I gathered from the upvotes that there was some interest in people wanting me to share my negative experience with the community health team.My biggest concern in writing that post is accidentally doxing myself.The advice I'm seeking is, how do I include details to show the complexity of the situation the community health team dealt with (as I think this would be of public interest and only fair to them) without me accidentally doxing myself?Including details is hard as I have many friends in the EA Community, and I don't want them to identify me.Including simple details reduces the option space in people's minds of who could be writing this. I think with 2 or 3 simple details (e.g., in city X then in city Y), someone I'm friends with can be 95% sure that Y is writing.To illustrate my point, I intentionally chose "many friends" in the bullet point above as revealing an order of magnitude also reduces the option space in people's minds.But I also appreciate that people can read between the lines and still conclude the details I wanted to avoid. For example, intentionally saying "many friends" still suggests that I have a large number, so it was irrelevant whether I said an order of magnitude.As has been done in EA Forum posts that I've seen from others, I could include false details to throw people off, but I believe this is dishonest.The community health team will almost certainly know who I am from the post, as I will be sharing excerpts from my emails with them. I'm not worried about them doxing me as they would know if they do, they'll reduce the trust that the EA community has in them, and this is not what they want.However, as they will almost certainly reply to the forum post, I'm concerned they will reveal details I wanted to avoid revealing as I knew that those details would make it easier for others to identify me.Currently, my solution is to not worry too much about including and set the moderation guidelines on the post to "Reign of Terror - I delete anything I judge to be annoying or counterproductive". I set those guidelines here to see what it's like to have them on.I'm bullet-pointing as I have a distinctive writing style amongst my friends that I'm trying to avoid in these posts.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Throwaway012723 https://forum.effectivealtruism.org/posts/JpP25iKFgMrTJGYJs/what-advice-would-you-give-someone-who-wants-to-avoid-doxing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What advice would you give someone who wants to avoid doxing themselves here?, published by Throwaway012723 on February 2, 2023 on The Effective Altruism Forum.From my recent comment, I gathered from the upvotes that there was some interest in people wanting me to share my negative experience with the community health team.My biggest concern in writing that post is accidentally doxing myself.The advice I'm seeking is, how do I include details to show the complexity of the situation the community health team dealt with (as I think this would be of public interest and only fair to them) without me accidentally doxing myself?Including details is hard as I have many friends in the EA Community, and I don't want them to identify me.Including simple details reduces the option space in people's minds of who could be writing this. I think with 2 or 3 simple details (e.g., in city X then in city Y), someone I'm friends with can be 95% sure that Y is writing.To illustrate my point, I intentionally chose "many friends" in the bullet point above as revealing an order of magnitude also reduces the option space in people's minds.But I also appreciate that people can read between the lines and still conclude the details I wanted to avoid. For example, intentionally saying "many friends" still suggests that I have a large number, so it was irrelevant whether I said an order of magnitude.As has been done in EA Forum posts that I've seen from others, I could include false details to throw people off, but I believe this is dishonest.The community health team will almost certainly know who I am from the post, as I will be sharing excerpts from my emails with them. I'm not worried about them doxing me as they would know if they do, they'll reduce the trust that the EA community has in them, and this is not what they want.However, as they will almost certainly reply to the forum post, I'm concerned they will reveal details I wanted to avoid revealing as I knew that those details would make it easier for others to identify me.Currently, my solution is to not worry too much about including and set the moderation guidelines on the post to "Reign of Terror - I delete anything I judge to be annoying or counterproductive". I set those guidelines here to see what it's like to have them on.I'm bullet-pointing as I have a distinctive writing style amongst my friends that I'm trying to avoid in these posts.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 02 Feb 2023 13:42:53 +0000 EA - What advice would you give someone who wants to avoid doxing themselves here? by Throwaway012723 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What advice would you give someone who wants to avoid doxing themselves here?, published by Throwaway012723 on February 2, 2023 on The Effective Altruism Forum.From my recent comment, I gathered from the upvotes that there was some interest in people wanting me to share my negative experience with the community health team.My biggest concern in writing that post is accidentally doxing myself.The advice I'm seeking is, how do I include details to show the complexity of the situation the community health team dealt with (as I think this would be of public interest and only fair to them) without me accidentally doxing myself?Including details is hard as I have many friends in the EA Community, and I don't want them to identify me.Including simple details reduces the option space in people's minds of who could be writing this. I think with 2 or 3 simple details (e.g., in city X then in city Y), someone I'm friends with can be 95% sure that Y is writing.To illustrate my point, I intentionally chose "many friends" in the bullet point above as revealing an order of magnitude also reduces the option space in people's minds.But I also appreciate that people can read between the lines and still conclude the details I wanted to avoid. For example, intentionally saying "many friends" still suggests that I have a large number, so it was irrelevant whether I said an order of magnitude.As has been done in EA Forum posts that I've seen from others, I could include false details to throw people off, but I believe this is dishonest.The community health team will almost certainly know who I am from the post, as I will be sharing excerpts from my emails with them. I'm not worried about them doxing me as they would know if they do, they'll reduce the trust that the EA community has in them, and this is not what they want.However, as they will almost certainly reply to the forum post, I'm concerned they will reveal details I wanted to avoid revealing as I knew that those details would make it easier for others to identify me.Currently, my solution is to not worry too much about including and set the moderation guidelines on the post to "Reign of Terror - I delete anything I judge to be annoying or counterproductive". I set those guidelines here to see what it's like to have them on.I'm bullet-pointing as I have a distinctive writing style amongst my friends that I'm trying to avoid in these posts.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What advice would you give someone who wants to avoid doxing themselves here?, published by Throwaway012723 on February 2, 2023 on The Effective Altruism Forum.From my recent comment, I gathered from the upvotes that there was some interest in people wanting me to share my negative experience with the community health team.My biggest concern in writing that post is accidentally doxing myself.The advice I'm seeking is, how do I include details to show the complexity of the situation the community health team dealt with (as I think this would be of public interest and only fair to them) without me accidentally doxing myself?Including details is hard as I have many friends in the EA Community, and I don't want them to identify me.Including simple details reduces the option space in people's minds of who could be writing this. I think with 2 or 3 simple details (e.g., in city X then in city Y), someone I'm friends with can be 95% sure that Y is writing.To illustrate my point, I intentionally chose "many friends" in the bullet point above as revealing an order of magnitude also reduces the option space in people's minds.But I also appreciate that people can read between the lines and still conclude the details I wanted to avoid. For example, intentionally saying "many friends" still suggests that I have a large number, so it was irrelevant whether I said an order of magnitude.As has been done in EA Forum posts that I've seen from others, I could include false details to throw people off, but I believe this is dishonest.The community health team will almost certainly know who I am from the post, as I will be sharing excerpts from my emails with them. I'm not worried about them doxing me as they would know if they do, they'll reduce the trust that the EA community has in them, and this is not what they want.However, as they will almost certainly reply to the forum post, I'm concerned they will reveal details I wanted to avoid revealing as I knew that those details would make it easier for others to identify me.Currently, my solution is to not worry too much about including and set the moderation guidelines on the post to "Reign of Terror - I delete anything I judge to be annoying or counterproductive". I set those guidelines here to see what it's like to have them on.I'm bullet-pointing as I have a distinctive writing style amongst my friends that I'm trying to avoid in these posts.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Throwaway012723 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:25 None full 4708
qvEo8kzn3zMEriGtN_NL_EA_EA EA - Questions about OP grant to Helena by DizzyMarmot Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Questions about OP grant to Helena, published by DizzyMarmot on February 2, 2023 on The Effective Altruism Forum.TL;DRA biosecurity & pandemic preparedness grant was made by Open Philanthropy to Helena for $500,000 in Nov 2022The grant profile raises questions about how it was justified and approvedThere is scant public information that could justify it as the best-placed and most appropriate recipient, a clear risk of nepotism inherent in the recipient organization, and what appear to be previous false representations made by the recipient organizationThis post asks OP to, in an effort to support accountability and transparency, publish further grant details and the investigation documents for the grantNotes: I apologize in advance as I am not experienced in writing using the EA syntax and structure, all errors are my own. Full disclosure, I previously applied to and was rejected for a role at Open Philanthropy. I work for an organization that in the past has been a recipient of funding from OP, as a person who has an affinity with but does not identify strongly as an EA. I think it is a good thing for accountability, transparency, and the broader governance of EA that OP listed the grant online, that I am able to question the grant in this forum and expect that OP will respond in good faith.ContentI have watched with interest as EA’s influence has grown within global health and development, in particular Open Philanthropy’s work in health security and biosecurity. Recent calls for transparency and accountability within effective altruism have thankfully been met by new initiatives like openbook.fyi. In light of this, a particular grant seems like it warrants further inspection and from OP’s perspective raises enough questions that it should have been accompanied by more information.I occasionally trawl the grants databases for philanthropic foundations working in global health & health security and found a $500,000 grant to ‘Helena’ awarded in November 2022. I didn’t think anything of it until I saw it again in the openbook log, and upon seeing it looked into the organization.I hope that my initial perception of the grant and the organization is entirely wrong, but the grant is striking in that it has scant details on what is a sizable grant to an organization with no track record in the subject matter that appears to have pretty clearly misrepresented itself in the recent past.The grant commitment and recipient organization as they stand are emblematic of the challenges in effective and equitable grantmaking for philanthropies like Open Philanthropy. Henry Elkus and Helena’s path can be summed up as:Henry’s VC dad uses his resources to support Henry’s interestsHenry drops out of school because he thinks he is exceptionally smarter and better equipped to solve 'our problems’Henry starts an obtuse org with for-profit venture capital and non-profit 501(c)3 sidesThe organization has a confusing and meandering scope of work, that appears to be a collection of personal interests at each point in timeHas no proven tangible value-add beyond ‘networking’, with previous programmatic activities summed up as funding a semi-academic sociology event and acting as a rogue PPE procurement agency during COVIDHelena then gets half a million from OP to explore ‘policy related to health security’ work that they have no track record of doing or apparent expertise inIt feels wrong when Helena (subjectively) seems like a self-aggrandizing nepotism project at best, and (objectively) gets a large grant with no details and no track record in the area of work. It is also notable that Henry and Helena have no track record of participating in EA settings. Participating in transparency and holding your grantmaking to account meaningfully are two different things. This all lea...]]>
DizzyMarmot https://forum.effectivealtruism.org/posts/qvEo8kzn3zMEriGtN/questions-about-op-grant-to-helena Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Questions about OP grant to Helena, published by DizzyMarmot on February 2, 2023 on The Effective Altruism Forum.TL;DRA biosecurity & pandemic preparedness grant was made by Open Philanthropy to Helena for $500,000 in Nov 2022The grant profile raises questions about how it was justified and approvedThere is scant public information that could justify it as the best-placed and most appropriate recipient, a clear risk of nepotism inherent in the recipient organization, and what appear to be previous false representations made by the recipient organizationThis post asks OP to, in an effort to support accountability and transparency, publish further grant details and the investigation documents for the grantNotes: I apologize in advance as I am not experienced in writing using the EA syntax and structure, all errors are my own. Full disclosure, I previously applied to and was rejected for a role at Open Philanthropy. I work for an organization that in the past has been a recipient of funding from OP, as a person who has an affinity with but does not identify strongly as an EA. I think it is a good thing for accountability, transparency, and the broader governance of EA that OP listed the grant online, that I am able to question the grant in this forum and expect that OP will respond in good faith.ContentI have watched with interest as EA’s influence has grown within global health and development, in particular Open Philanthropy’s work in health security and biosecurity. Recent calls for transparency and accountability within effective altruism have thankfully been met by new initiatives like openbook.fyi. In light of this, a particular grant seems like it warrants further inspection and from OP’s perspective raises enough questions that it should have been accompanied by more information.I occasionally trawl the grants databases for philanthropic foundations working in global health & health security and found a $500,000 grant to ‘Helena’ awarded in November 2022. I didn’t think anything of it until I saw it again in the openbook log, and upon seeing it looked into the organization.I hope that my initial perception of the grant and the organization is entirely wrong, but the grant is striking in that it has scant details on what is a sizable grant to an organization with no track record in the subject matter that appears to have pretty clearly misrepresented itself in the recent past.The grant commitment and recipient organization as they stand are emblematic of the challenges in effective and equitable grantmaking for philanthropies like Open Philanthropy. Henry Elkus and Helena’s path can be summed up as:Henry’s VC dad uses his resources to support Henry’s interestsHenry drops out of school because he thinks he is exceptionally smarter and better equipped to solve 'our problems’Henry starts an obtuse org with for-profit venture capital and non-profit 501(c)3 sidesThe organization has a confusing and meandering scope of work, that appears to be a collection of personal interests at each point in timeHas no proven tangible value-add beyond ‘networking’, with previous programmatic activities summed up as funding a semi-academic sociology event and acting as a rogue PPE procurement agency during COVIDHelena then gets half a million from OP to explore ‘policy related to health security’ work that they have no track record of doing or apparent expertise inIt feels wrong when Helena (subjectively) seems like a self-aggrandizing nepotism project at best, and (objectively) gets a large grant with no details and no track record in the area of work. It is also notable that Henry and Helena have no track record of participating in EA settings. Participating in transparency and holding your grantmaking to account meaningfully are two different things. This all lea...]]>
Thu, 02 Feb 2023 07:26:20 +0000 EA - Questions about OP grant to Helena by DizzyMarmot Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Questions about OP grant to Helena, published by DizzyMarmot on February 2, 2023 on The Effective Altruism Forum.TL;DRA biosecurity & pandemic preparedness grant was made by Open Philanthropy to Helena for $500,000 in Nov 2022The grant profile raises questions about how it was justified and approvedThere is scant public information that could justify it as the best-placed and most appropriate recipient, a clear risk of nepotism inherent in the recipient organization, and what appear to be previous false representations made by the recipient organizationThis post asks OP to, in an effort to support accountability and transparency, publish further grant details and the investigation documents for the grantNotes: I apologize in advance as I am not experienced in writing using the EA syntax and structure, all errors are my own. Full disclosure, I previously applied to and was rejected for a role at Open Philanthropy. I work for an organization that in the past has been a recipient of funding from OP, as a person who has an affinity with but does not identify strongly as an EA. I think it is a good thing for accountability, transparency, and the broader governance of EA that OP listed the grant online, that I am able to question the grant in this forum and expect that OP will respond in good faith.ContentI have watched with interest as EA’s influence has grown within global health and development, in particular Open Philanthropy’s work in health security and biosecurity. Recent calls for transparency and accountability within effective altruism have thankfully been met by new initiatives like openbook.fyi. In light of this, a particular grant seems like it warrants further inspection and from OP’s perspective raises enough questions that it should have been accompanied by more information.I occasionally trawl the grants databases for philanthropic foundations working in global health & health security and found a $500,000 grant to ‘Helena’ awarded in November 2022. I didn’t think anything of it until I saw it again in the openbook log, and upon seeing it looked into the organization.I hope that my initial perception of the grant and the organization is entirely wrong, but the grant is striking in that it has scant details on what is a sizable grant to an organization with no track record in the subject matter that appears to have pretty clearly misrepresented itself in the recent past.The grant commitment and recipient organization as they stand are emblematic of the challenges in effective and equitable grantmaking for philanthropies like Open Philanthropy. Henry Elkus and Helena’s path can be summed up as:Henry’s VC dad uses his resources to support Henry’s interestsHenry drops out of school because he thinks he is exceptionally smarter and better equipped to solve 'our problems’Henry starts an obtuse org with for-profit venture capital and non-profit 501(c)3 sidesThe organization has a confusing and meandering scope of work, that appears to be a collection of personal interests at each point in timeHas no proven tangible value-add beyond ‘networking’, with previous programmatic activities summed up as funding a semi-academic sociology event and acting as a rogue PPE procurement agency during COVIDHelena then gets half a million from OP to explore ‘policy related to health security’ work that they have no track record of doing or apparent expertise inIt feels wrong when Helena (subjectively) seems like a self-aggrandizing nepotism project at best, and (objectively) gets a large grant with no details and no track record in the area of work. It is also notable that Henry and Helena have no track record of participating in EA settings. Participating in transparency and holding your grantmaking to account meaningfully are two different things. This all lea...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Questions about OP grant to Helena, published by DizzyMarmot on February 2, 2023 on The Effective Altruism Forum.TL;DRA biosecurity & pandemic preparedness grant was made by Open Philanthropy to Helena for $500,000 in Nov 2022The grant profile raises questions about how it was justified and approvedThere is scant public information that could justify it as the best-placed and most appropriate recipient, a clear risk of nepotism inherent in the recipient organization, and what appear to be previous false representations made by the recipient organizationThis post asks OP to, in an effort to support accountability and transparency, publish further grant details and the investigation documents for the grantNotes: I apologize in advance as I am not experienced in writing using the EA syntax and structure, all errors are my own. Full disclosure, I previously applied to and was rejected for a role at Open Philanthropy. I work for an organization that in the past has been a recipient of funding from OP, as a person who has an affinity with but does not identify strongly as an EA. I think it is a good thing for accountability, transparency, and the broader governance of EA that OP listed the grant online, that I am able to question the grant in this forum and expect that OP will respond in good faith.ContentI have watched with interest as EA’s influence has grown within global health and development, in particular Open Philanthropy’s work in health security and biosecurity. Recent calls for transparency and accountability within effective altruism have thankfully been met by new initiatives like openbook.fyi. In light of this, a particular grant seems like it warrants further inspection and from OP’s perspective raises enough questions that it should have been accompanied by more information.I occasionally trawl the grants databases for philanthropic foundations working in global health & health security and found a $500,000 grant to ‘Helena’ awarded in November 2022. I didn’t think anything of it until I saw it again in the openbook log, and upon seeing it looked into the organization.I hope that my initial perception of the grant and the organization is entirely wrong, but the grant is striking in that it has scant details on what is a sizable grant to an organization with no track record in the subject matter that appears to have pretty clearly misrepresented itself in the recent past.The grant commitment and recipient organization as they stand are emblematic of the challenges in effective and equitable grantmaking for philanthropies like Open Philanthropy. Henry Elkus and Helena’s path can be summed up as:Henry’s VC dad uses his resources to support Henry’s interestsHenry drops out of school because he thinks he is exceptionally smarter and better equipped to solve 'our problems’Henry starts an obtuse org with for-profit venture capital and non-profit 501(c)3 sidesThe organization has a confusing and meandering scope of work, that appears to be a collection of personal interests at each point in timeHas no proven tangible value-add beyond ‘networking’, with previous programmatic activities summed up as funding a semi-academic sociology event and acting as a rogue PPE procurement agency during COVIDHelena then gets half a million from OP to explore ‘policy related to health security’ work that they have no track record of doing or apparent expertise inIt feels wrong when Helena (subjectively) seems like a self-aggrandizing nepotism project at best, and (objectively) gets a large grant with no details and no track record in the area of work. It is also notable that Henry and Helena have no track record of participating in EA settings. Participating in transparency and holding your grantmaking to account meaningfully are two different things. This all lea...]]>
DizzyMarmot https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:31 None full 4690
EYRBprmo3c7AxMGw4_NL_EA_EA EA - Interviews with 97 AI Researchers: Quantitative Analysis by Maheen Shermohammed Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interviews with 97 AI Researchers: Quantitative Analysis, published by Maheen Shermohammed on February 2, 2023 on The Effective Altruism Forum.TLDR: Last year, Vael Gates interviewed 97 AI researchers about their perceptions on the future of AI, focusing on risks from advanced AI. Among other questions, researchers were asked about the alignment problem, the problem of instrumental incentives, and their interest in AI alignment research. Following up after 5-6 months, 51% reported the interview had a lasting effect on their beliefs. Our new report analyzes these interviews in depth. We describe our primary results and some implications for field-building below. Check out the full report (interactive graph version), a complementary writeup describing whether we can predict a researcher’s interest in alignment, and our results below![Link to post on LessWrong]OverviewThis report (interactive graph version) is a quantitative analysis of 97 interviews conducted in Feb-March 2022 with machine learning researchers, who were asked about their perceptions of artificial intelligence (AI) now and in the future, with particular focus on risks from advanced AI systems. Of the interviewees, 92 were selected from NeurIPS or ICML 2021 submissions, and 5 were informally recommended experts. For each interview, a transcript was generated, and common responses were identified and tagged to support quantitative analysis. The transcripts, as well as a qualitative walkthrough of common perspectives, are available at Interviews.Several core questions were asked in these interviews:When advanced AI (~AGI) would be developed (note that this term was imprecisely defined in the interviews)A probe about the alignment problem: “What do you think of the argument ‘highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous’?”A probe about instrumental incentives: “What do you think about the argument: ‘highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous’?”Whether interviewees were interested in working on AI alignment, and why or why notWhether interviewees had heard of AI safety or AI alignmentFindings SummarySome key findings from our primary questions of interest:Most participants (75%), at some point in the conversation, said that they thought humanity would achieve advanced AI (imprecisely labeled “AGI” for the rest of this summary) eventually, but their timelines to AGI varied. Within this group:32% thought it would happen in 0-50 years40% thought 50-200 years18% thought 200+ yearsand 28% were quite uncertain, reporting a very wide range.(These sum to more than 100% because several people endorsed multiple timelines over the course of the conversation.) (Source)Among participants who thought humanity would never develop AGI (22%), the most commonly cited reason was that they couldn't see AGI happening based on current progress in AI. (Source)Participants were pretty split on whether they thought the alignment problem argument was valid. Some common reasons for disagreement were (Source):A set of responses that included the idea that AI alignment problems would be solved over the normal course of AI development (caveat: this was a very heterogeneous tag).Pointing out that humans have alignment problems too (so the potential risk of the AI alignment problem is capped in some sense by how bad alignment problems are for humans).AI systems will be tested (and humans will catch issues and implement safeguards before systems are rolled out in the real world).The objective function will not be designed in a way that causes the alignment problem / dangerous consequences of the alignment problem to arise.Perfe...]]>
Maheen Shermohammed https://forum.effectivealtruism.org/posts/EYRBprmo3c7AxMGw4/interviews-with-97-ai-researchers-quantitative-analysis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interviews with 97 AI Researchers: Quantitative Analysis, published by Maheen Shermohammed on February 2, 2023 on The Effective Altruism Forum.TLDR: Last year, Vael Gates interviewed 97 AI researchers about their perceptions on the future of AI, focusing on risks from advanced AI. Among other questions, researchers were asked about the alignment problem, the problem of instrumental incentives, and their interest in AI alignment research. Following up after 5-6 months, 51% reported the interview had a lasting effect on their beliefs. Our new report analyzes these interviews in depth. We describe our primary results and some implications for field-building below. Check out the full report (interactive graph version), a complementary writeup describing whether we can predict a researcher’s interest in alignment, and our results below![Link to post on LessWrong]OverviewThis report (interactive graph version) is a quantitative analysis of 97 interviews conducted in Feb-March 2022 with machine learning researchers, who were asked about their perceptions of artificial intelligence (AI) now and in the future, with particular focus on risks from advanced AI systems. Of the interviewees, 92 were selected from NeurIPS or ICML 2021 submissions, and 5 were informally recommended experts. For each interview, a transcript was generated, and common responses were identified and tagged to support quantitative analysis. The transcripts, as well as a qualitative walkthrough of common perspectives, are available at Interviews.Several core questions were asked in these interviews:When advanced AI (~AGI) would be developed (note that this term was imprecisely defined in the interviews)A probe about the alignment problem: “What do you think of the argument ‘highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous’?”A probe about instrumental incentives: “What do you think about the argument: ‘highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous’?”Whether interviewees were interested in working on AI alignment, and why or why notWhether interviewees had heard of AI safety or AI alignmentFindings SummarySome key findings from our primary questions of interest:Most participants (75%), at some point in the conversation, said that they thought humanity would achieve advanced AI (imprecisely labeled “AGI” for the rest of this summary) eventually, but their timelines to AGI varied. Within this group:32% thought it would happen in 0-50 years40% thought 50-200 years18% thought 200+ yearsand 28% were quite uncertain, reporting a very wide range.(These sum to more than 100% because several people endorsed multiple timelines over the course of the conversation.) (Source)Among participants who thought humanity would never develop AGI (22%), the most commonly cited reason was that they couldn't see AGI happening based on current progress in AI. (Source)Participants were pretty split on whether they thought the alignment problem argument was valid. Some common reasons for disagreement were (Source):A set of responses that included the idea that AI alignment problems would be solved over the normal course of AI development (caveat: this was a very heterogeneous tag).Pointing out that humans have alignment problems too (so the potential risk of the AI alignment problem is capped in some sense by how bad alignment problems are for humans).AI systems will be tested (and humans will catch issues and implement safeguards before systems are rolled out in the real world).The objective function will not be designed in a way that causes the alignment problem / dangerous consequences of the alignment problem to arise.Perfe...]]>
Thu, 02 Feb 2023 06:11:55 +0000 EA - Interviews with 97 AI Researchers: Quantitative Analysis by Maheen Shermohammed Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interviews with 97 AI Researchers: Quantitative Analysis, published by Maheen Shermohammed on February 2, 2023 on The Effective Altruism Forum.TLDR: Last year, Vael Gates interviewed 97 AI researchers about their perceptions on the future of AI, focusing on risks from advanced AI. Among other questions, researchers were asked about the alignment problem, the problem of instrumental incentives, and their interest in AI alignment research. Following up after 5-6 months, 51% reported the interview had a lasting effect on their beliefs. Our new report analyzes these interviews in depth. We describe our primary results and some implications for field-building below. Check out the full report (interactive graph version), a complementary writeup describing whether we can predict a researcher’s interest in alignment, and our results below![Link to post on LessWrong]OverviewThis report (interactive graph version) is a quantitative analysis of 97 interviews conducted in Feb-March 2022 with machine learning researchers, who were asked about their perceptions of artificial intelligence (AI) now and in the future, with particular focus on risks from advanced AI systems. Of the interviewees, 92 were selected from NeurIPS or ICML 2021 submissions, and 5 were informally recommended experts. For each interview, a transcript was generated, and common responses were identified and tagged to support quantitative analysis. The transcripts, as well as a qualitative walkthrough of common perspectives, are available at Interviews.Several core questions were asked in these interviews:When advanced AI (~AGI) would be developed (note that this term was imprecisely defined in the interviews)A probe about the alignment problem: “What do you think of the argument ‘highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous’?”A probe about instrumental incentives: “What do you think about the argument: ‘highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous’?”Whether interviewees were interested in working on AI alignment, and why or why notWhether interviewees had heard of AI safety or AI alignmentFindings SummarySome key findings from our primary questions of interest:Most participants (75%), at some point in the conversation, said that they thought humanity would achieve advanced AI (imprecisely labeled “AGI” for the rest of this summary) eventually, but their timelines to AGI varied. Within this group:32% thought it would happen in 0-50 years40% thought 50-200 years18% thought 200+ yearsand 28% were quite uncertain, reporting a very wide range.(These sum to more than 100% because several people endorsed multiple timelines over the course of the conversation.) (Source)Among participants who thought humanity would never develop AGI (22%), the most commonly cited reason was that they couldn't see AGI happening based on current progress in AI. (Source)Participants were pretty split on whether they thought the alignment problem argument was valid. Some common reasons for disagreement were (Source):A set of responses that included the idea that AI alignment problems would be solved over the normal course of AI development (caveat: this was a very heterogeneous tag).Pointing out that humans have alignment problems too (so the potential risk of the AI alignment problem is capped in some sense by how bad alignment problems are for humans).AI systems will be tested (and humans will catch issues and implement safeguards before systems are rolled out in the real world).The objective function will not be designed in a way that causes the alignment problem / dangerous consequences of the alignment problem to arise.Perfe...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interviews with 97 AI Researchers: Quantitative Analysis, published by Maheen Shermohammed on February 2, 2023 on The Effective Altruism Forum.TLDR: Last year, Vael Gates interviewed 97 AI researchers about their perceptions on the future of AI, focusing on risks from advanced AI. Among other questions, researchers were asked about the alignment problem, the problem of instrumental incentives, and their interest in AI alignment research. Following up after 5-6 months, 51% reported the interview had a lasting effect on their beliefs. Our new report analyzes these interviews in depth. We describe our primary results and some implications for field-building below. Check out the full report (interactive graph version), a complementary writeup describing whether we can predict a researcher’s interest in alignment, and our results below![Link to post on LessWrong]OverviewThis report (interactive graph version) is a quantitative analysis of 97 interviews conducted in Feb-March 2022 with machine learning researchers, who were asked about their perceptions of artificial intelligence (AI) now and in the future, with particular focus on risks from advanced AI systems. Of the interviewees, 92 were selected from NeurIPS or ICML 2021 submissions, and 5 were informally recommended experts. For each interview, a transcript was generated, and common responses were identified and tagged to support quantitative analysis. The transcripts, as well as a qualitative walkthrough of common perspectives, are available at Interviews.Several core questions were asked in these interviews:When advanced AI (~AGI) would be developed (note that this term was imprecisely defined in the interviews)A probe about the alignment problem: “What do you think of the argument ‘highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous’?”A probe about instrumental incentives: “What do you think about the argument: ‘highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous’?”Whether interviewees were interested in working on AI alignment, and why or why notWhether interviewees had heard of AI safety or AI alignmentFindings SummarySome key findings from our primary questions of interest:Most participants (75%), at some point in the conversation, said that they thought humanity would achieve advanced AI (imprecisely labeled “AGI” for the rest of this summary) eventually, but their timelines to AGI varied. Within this group:32% thought it would happen in 0-50 years40% thought 50-200 years18% thought 200+ yearsand 28% were quite uncertain, reporting a very wide range.(These sum to more than 100% because several people endorsed multiple timelines over the course of the conversation.) (Source)Among participants who thought humanity would never develop AGI (22%), the most commonly cited reason was that they couldn't see AGI happening based on current progress in AI. (Source)Participants were pretty split on whether they thought the alignment problem argument was valid. Some common reasons for disagreement were (Source):A set of responses that included the idea that AI alignment problems would be solved over the normal course of AI development (caveat: this was a very heterogeneous tag).Pointing out that humans have alignment problems too (so the potential risk of the AI alignment problem is capped in some sense by how bad alignment problems are for humans).AI systems will be tested (and humans will catch issues and implement safeguards before systems are rolled out in the real world).The objective function will not be designed in a way that causes the alignment problem / dangerous consequences of the alignment problem to arise.Perfe...]]>
Maheen Shermohammed https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:51 None full 4691
SsMYCwugcpotfiCT6_NL_EA_EA EA - Focus on the places where you feel shocked everyone's dropping the ball by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Focus on the places where you feel shocked everyone's dropping the ball, published by So8res on February 2, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
So8res https://forum.effectivealtruism.org/posts/SsMYCwugcpotfiCT6/focus-on-the-places-where-you-feel-shocked-everyone-s Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Focus on the places where you feel shocked everyone's dropping the ball, published by So8res on February 2, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 02 Feb 2023 04:36:37 +0000 EA - Focus on the places where you feel shocked everyone's dropping the ball by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Focus on the places where you feel shocked everyone's dropping the ball, published by So8res on February 2, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Focus on the places where you feel shocked everyone's dropping the ball, published by So8res on February 2, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
So8res https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 4692
EBC7RtBycTAmMhCJt_NL_EA_EA EA - Trends in the dollar training cost of machine learning systems by Ben Cottier Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Trends in the dollar training cost of machine learning systems, published by Ben Cottier on February 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben Cottier https://forum.effectivealtruism.org/posts/EBC7RtBycTAmMhCJt/trends-in-the-dollar-training-cost-of-machine-learning Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Trends in the dollar training cost of machine learning systems, published by Ben Cottier on February 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 02 Feb 2023 01:53:26 +0000 EA - Trends in the dollar training cost of machine learning systems by Ben Cottier Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Trends in the dollar training cost of machine learning systems, published by Ben Cottier on February 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Trends in the dollar training cost of machine learning systems, published by Ben Cottier on February 1, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben Cottier https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:27 None full 4694
QeLE22fefLqKfYTW6_NL_EA_EA EA - Eli Lifland on Navigating the AI Alignment Landscape by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eli Lifland on Navigating the AI Alignment Landscape, published by Ozzie Gooen on February 1, 2023 on The Effective Altruism Forum.Recently I had a conversation with Eli Lifland about the AI Alignment landscape. Eli Lifland has been a forecaster at Samotsvety and has been investigating said landscape.I’ve known Eli for the last 8 months or so, and have appreciated many of his takes on AI alignment strategy.This was my first recorded video, so there were a few issues, but I think most of it is understandable.Full (edited) transcript below. I suggest browsing the section titles for a better overview of our discussion.TranscriptSectionsSamotsvety, a Recent Forecasting OrganizationReading, “Is Power-Seeking AI an Existential Risk?”Categories of AI Failures: Accident, Misuse, and StructuralWho Is Making Strategic Progress on Alignment?Community Building: Arguments ForCommunity Building: Fellowships and MentorshipCruxes in the AI Alignment SpaceCrux: How Promising is AI Interpretability?Crux: Should We Use Narrow AIs to Help Solve Alignment?The Need for AI Alignment BenchmarksCrux: Conceptual Insights vs. Empirical IterationVehicles and Planes as Potential MetaphorsSamotsvety, a Recent Forecasting OrganizationOzzie Gooen: So to get started, I want to talk a little bit about Samotsvety.Eli Lifland: It's a Russian name. Samotsvety currently has about 15 forecasters. We've been releasing forecasts for the community on topics such as nuclear risk and AI. We’re considering how to create forecasts for different clients and make public forecasts on existential risk, particularly AI.Team forecasting has been valuable, and I've encouraged more people to do it. We have a weekly call where we choose questions to discuss in advance. If people have time, they make their forecasts beforehand, and then we discuss the differences and debate. It's beneficial for team bonding, forming friendships, and potential future work collaborations.It's also interesting to see which forecasts are correct when they resolve. It's a good activity for different groups, such as AI community groups, to try.Ozzie Gooen: How many people are in the group right now?Eli Lifland: Right now, it's about 15, but on any given week, probably closer to five to ten can come. Initially, it was just us three. It was just Nuño, Misha, and I, and we would meet each weekend and discuss different questions on either Foretell (now INFER) or Good Judgment Open, but now it's five to ten people per week, from a total pool of 15 people.Ozzie Gooen: That makes sense. I know Samotsvety has worked on nuclear risk and a few other posts. What do you forecast when you're not working on those megaprojects?Eli Lifland: Yeah. We do a mix of things. Some things we've done for specific clients haven't been released publicly. Some things are still in progress and haven't been released yet. For example, we've been working on forecasting the level of AI existential risk for the Future Fund, now called the Open Philanthropy Worldview Prize, for the past 1-2 months. We meet each week to revise and discuss different ways to decompose the risk, but we haven't finished yet. Hopefully, we will.Sometimes we just choose a few interesting questions for discussion, even if we don't publish a write-up on them.Ozzie Gooen: So the idea is to have more people do very similar things, just like other teams are three to five, they're pretty independent; do you give them like coaching or anything? If I wanted to start my own group like this, what do I do?Eli Lifland: Feel free to reach out to any of us for advice on how we did it. As I mentioned, it was fairly simple—choosing and discussing questions each week.In terms of value, I believe it was valuable for all of us and many others who joined us. Some got more interested in effec...]]>
Ozzie Gooen https://forum.effectivealtruism.org/posts/QeLE22fefLqKfYTW6/eli-lifland-on-navigating-the-ai-alignment-landscape Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eli Lifland on Navigating the AI Alignment Landscape, published by Ozzie Gooen on February 1, 2023 on The Effective Altruism Forum.Recently I had a conversation with Eli Lifland about the AI Alignment landscape. Eli Lifland has been a forecaster at Samotsvety and has been investigating said landscape.I’ve known Eli for the last 8 months or so, and have appreciated many of his takes on AI alignment strategy.This was my first recorded video, so there were a few issues, but I think most of it is understandable.Full (edited) transcript below. I suggest browsing the section titles for a better overview of our discussion.TranscriptSectionsSamotsvety, a Recent Forecasting OrganizationReading, “Is Power-Seeking AI an Existential Risk?”Categories of AI Failures: Accident, Misuse, and StructuralWho Is Making Strategic Progress on Alignment?Community Building: Arguments ForCommunity Building: Fellowships and MentorshipCruxes in the AI Alignment SpaceCrux: How Promising is AI Interpretability?Crux: Should We Use Narrow AIs to Help Solve Alignment?The Need for AI Alignment BenchmarksCrux: Conceptual Insights vs. Empirical IterationVehicles and Planes as Potential MetaphorsSamotsvety, a Recent Forecasting OrganizationOzzie Gooen: So to get started, I want to talk a little bit about Samotsvety.Eli Lifland: It's a Russian name. Samotsvety currently has about 15 forecasters. We've been releasing forecasts for the community on topics such as nuclear risk and AI. We’re considering how to create forecasts for different clients and make public forecasts on existential risk, particularly AI.Team forecasting has been valuable, and I've encouraged more people to do it. We have a weekly call where we choose questions to discuss in advance. If people have time, they make their forecasts beforehand, and then we discuss the differences and debate. It's beneficial for team bonding, forming friendships, and potential future work collaborations.It's also interesting to see which forecasts are correct when they resolve. It's a good activity for different groups, such as AI community groups, to try.Ozzie Gooen: How many people are in the group right now?Eli Lifland: Right now, it's about 15, but on any given week, probably closer to five to ten can come. Initially, it was just us three. It was just Nuño, Misha, and I, and we would meet each weekend and discuss different questions on either Foretell (now INFER) or Good Judgment Open, but now it's five to ten people per week, from a total pool of 15 people.Ozzie Gooen: That makes sense. I know Samotsvety has worked on nuclear risk and a few other posts. What do you forecast when you're not working on those megaprojects?Eli Lifland: Yeah. We do a mix of things. Some things we've done for specific clients haven't been released publicly. Some things are still in progress and haven't been released yet. For example, we've been working on forecasting the level of AI existential risk for the Future Fund, now called the Open Philanthropy Worldview Prize, for the past 1-2 months. We meet each week to revise and discuss different ways to decompose the risk, but we haven't finished yet. Hopefully, we will.Sometimes we just choose a few interesting questions for discussion, even if we don't publish a write-up on them.Ozzie Gooen: So the idea is to have more people do very similar things, just like other teams are three to five, they're pretty independent; do you give them like coaching or anything? If I wanted to start my own group like this, what do I do?Eli Lifland: Feel free to reach out to any of us for advice on how we did it. As I mentioned, it was fairly simple—choosing and discussing questions each week.In terms of value, I believe it was valuable for all of us and many others who joined us. Some got more interested in effec...]]>
Wed, 01 Feb 2023 21:32:14 +0000 EA - Eli Lifland on Navigating the AI Alignment Landscape by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eli Lifland on Navigating the AI Alignment Landscape, published by Ozzie Gooen on February 1, 2023 on The Effective Altruism Forum.Recently I had a conversation with Eli Lifland about the AI Alignment landscape. Eli Lifland has been a forecaster at Samotsvety and has been investigating said landscape.I’ve known Eli for the last 8 months or so, and have appreciated many of his takes on AI alignment strategy.This was my first recorded video, so there were a few issues, but I think most of it is understandable.Full (edited) transcript below. I suggest browsing the section titles for a better overview of our discussion.TranscriptSectionsSamotsvety, a Recent Forecasting OrganizationReading, “Is Power-Seeking AI an Existential Risk?”Categories of AI Failures: Accident, Misuse, and StructuralWho Is Making Strategic Progress on Alignment?Community Building: Arguments ForCommunity Building: Fellowships and MentorshipCruxes in the AI Alignment SpaceCrux: How Promising is AI Interpretability?Crux: Should We Use Narrow AIs to Help Solve Alignment?The Need for AI Alignment BenchmarksCrux: Conceptual Insights vs. Empirical IterationVehicles and Planes as Potential MetaphorsSamotsvety, a Recent Forecasting OrganizationOzzie Gooen: So to get started, I want to talk a little bit about Samotsvety.Eli Lifland: It's a Russian name. Samotsvety currently has about 15 forecasters. We've been releasing forecasts for the community on topics such as nuclear risk and AI. We’re considering how to create forecasts for different clients and make public forecasts on existential risk, particularly AI.Team forecasting has been valuable, and I've encouraged more people to do it. We have a weekly call where we choose questions to discuss in advance. If people have time, they make their forecasts beforehand, and then we discuss the differences and debate. It's beneficial for team bonding, forming friendships, and potential future work collaborations.It's also interesting to see which forecasts are correct when they resolve. It's a good activity for different groups, such as AI community groups, to try.Ozzie Gooen: How many people are in the group right now?Eli Lifland: Right now, it's about 15, but on any given week, probably closer to five to ten can come. Initially, it was just us three. It was just Nuño, Misha, and I, and we would meet each weekend and discuss different questions on either Foretell (now INFER) or Good Judgment Open, but now it's five to ten people per week, from a total pool of 15 people.Ozzie Gooen: That makes sense. I know Samotsvety has worked on nuclear risk and a few other posts. What do you forecast when you're not working on those megaprojects?Eli Lifland: Yeah. We do a mix of things. Some things we've done for specific clients haven't been released publicly. Some things are still in progress and haven't been released yet. For example, we've been working on forecasting the level of AI existential risk for the Future Fund, now called the Open Philanthropy Worldview Prize, for the past 1-2 months. We meet each week to revise and discuss different ways to decompose the risk, but we haven't finished yet. Hopefully, we will.Sometimes we just choose a few interesting questions for discussion, even if we don't publish a write-up on them.Ozzie Gooen: So the idea is to have more people do very similar things, just like other teams are three to five, they're pretty independent; do you give them like coaching or anything? If I wanted to start my own group like this, what do I do?Eli Lifland: Feel free to reach out to any of us for advice on how we did it. As I mentioned, it was fairly simple—choosing and discussing questions each week.In terms of value, I believe it was valuable for all of us and many others who joined us. Some got more interested in effec...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eli Lifland on Navigating the AI Alignment Landscape, published by Ozzie Gooen on February 1, 2023 on The Effective Altruism Forum.Recently I had a conversation with Eli Lifland about the AI Alignment landscape. Eli Lifland has been a forecaster at Samotsvety and has been investigating said landscape.I’ve known Eli for the last 8 months or so, and have appreciated many of his takes on AI alignment strategy.This was my first recorded video, so there were a few issues, but I think most of it is understandable.Full (edited) transcript below. I suggest browsing the section titles for a better overview of our discussion.TranscriptSectionsSamotsvety, a Recent Forecasting OrganizationReading, “Is Power-Seeking AI an Existential Risk?”Categories of AI Failures: Accident, Misuse, and StructuralWho Is Making Strategic Progress on Alignment?Community Building: Arguments ForCommunity Building: Fellowships and MentorshipCruxes in the AI Alignment SpaceCrux: How Promising is AI Interpretability?Crux: Should We Use Narrow AIs to Help Solve Alignment?The Need for AI Alignment BenchmarksCrux: Conceptual Insights vs. Empirical IterationVehicles and Planes as Potential MetaphorsSamotsvety, a Recent Forecasting OrganizationOzzie Gooen: So to get started, I want to talk a little bit about Samotsvety.Eli Lifland: It's a Russian name. Samotsvety currently has about 15 forecasters. We've been releasing forecasts for the community on topics such as nuclear risk and AI. We’re considering how to create forecasts for different clients and make public forecasts on existential risk, particularly AI.Team forecasting has been valuable, and I've encouraged more people to do it. We have a weekly call where we choose questions to discuss in advance. If people have time, they make their forecasts beforehand, and then we discuss the differences and debate. It's beneficial for team bonding, forming friendships, and potential future work collaborations.It's also interesting to see which forecasts are correct when they resolve. It's a good activity for different groups, such as AI community groups, to try.Ozzie Gooen: How many people are in the group right now?Eli Lifland: Right now, it's about 15, but on any given week, probably closer to five to ten can come. Initially, it was just us three. It was just Nuño, Misha, and I, and we would meet each weekend and discuss different questions on either Foretell (now INFER) or Good Judgment Open, but now it's five to ten people per week, from a total pool of 15 people.Ozzie Gooen: That makes sense. I know Samotsvety has worked on nuclear risk and a few other posts. What do you forecast when you're not working on those megaprojects?Eli Lifland: Yeah. We do a mix of things. Some things we've done for specific clients haven't been released publicly. Some things are still in progress and haven't been released yet. For example, we've been working on forecasting the level of AI existential risk for the Future Fund, now called the Open Philanthropy Worldview Prize, for the past 1-2 months. We meet each week to revise and discuss different ways to decompose the risk, but we haven't finished yet. Hopefully, we will.Sometimes we just choose a few interesting questions for discussion, even if we don't publish a write-up on them.Ozzie Gooen: So the idea is to have more people do very similar things, just like other teams are three to five, they're pretty independent; do you give them like coaching or anything? If I wanted to start my own group like this, what do I do?Eli Lifland: Feel free to reach out to any of us for advice on how we did it. As I mentioned, it was fairly simple—choosing and discussing questions each week.In terms of value, I believe it was valuable for all of us and many others who joined us. Some got more interested in effec...]]>
Ozzie Gooen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 46:19 None full 4687
WN4thu6pYoBTHfEay_NL_EA_EA EA - Who owns "Effective Altruism"? by River Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who owns "Effective Altruism"?, published by River on February 1, 2023 on The Effective Altruism Forum.I've noticed a bit of confusion among EAs about whether CEA (or technically EVF) owns the phrase "effective altruism". I've heard a couple of EAs claim they have the trademark, or asserted some kind of ownership right against a scammer once. And I do have the impression that CEA tries to manage the EA brand in a way that suggests that they feel like they own it.However, I checked the US Patent and Trademark Office's database, it shows trademarks on "Centre for Effective Altruism" and "Effective Altruism Global", but not "effective altruism". Is it even possible to have a trademark on the name of a philosophy? Is there some other area of law that is relevant?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
River https://forum.effectivealtruism.org/posts/WN4thu6pYoBTHfEay/who-owns-effective-altruism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who owns "Effective Altruism"?, published by River on February 1, 2023 on The Effective Altruism Forum.I've noticed a bit of confusion among EAs about whether CEA (or technically EVF) owns the phrase "effective altruism". I've heard a couple of EAs claim they have the trademark, or asserted some kind of ownership right against a scammer once. And I do have the impression that CEA tries to manage the EA brand in a way that suggests that they feel like they own it.However, I checked the US Patent and Trademark Office's database, it shows trademarks on "Centre for Effective Altruism" and "Effective Altruism Global", but not "effective altruism". Is it even possible to have a trademark on the name of a philosophy? Is there some other area of law that is relevant?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 01 Feb 2023 17:42:30 +0000 EA - Who owns "Effective Altruism"? by River Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who owns "Effective Altruism"?, published by River on February 1, 2023 on The Effective Altruism Forum.I've noticed a bit of confusion among EAs about whether CEA (or technically EVF) owns the phrase "effective altruism". I've heard a couple of EAs claim they have the trademark, or asserted some kind of ownership right against a scammer once. And I do have the impression that CEA tries to manage the EA brand in a way that suggests that they feel like they own it.However, I checked the US Patent and Trademark Office's database, it shows trademarks on "Centre for Effective Altruism" and "Effective Altruism Global", but not "effective altruism". Is it even possible to have a trademark on the name of a philosophy? Is there some other area of law that is relevant?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who owns "Effective Altruism"?, published by River on February 1, 2023 on The Effective Altruism Forum.I've noticed a bit of confusion among EAs about whether CEA (or technically EVF) owns the phrase "effective altruism". I've heard a couple of EAs claim they have the trademark, or asserted some kind of ownership right against a scammer once. And I do have the impression that CEA tries to manage the EA brand in a way that suggests that they feel like they own it.However, I checked the US Patent and Trademark Office's database, it shows trademarks on "Centre for Effective Altruism" and "Effective Altruism Global", but not "effective altruism". Is it even possible to have a trademark on the name of a philosophy? Is there some other area of law that is relevant?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
River https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:59 None full 4686
fS4mM6kD4GK3PWzEj_NL_EA_EA EA - Nelson Mandela’s organization, The Elders, backing x risk prevention and longtermism by krohmal5 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nelson Mandela’s organization, The Elders, backing x risk prevention and longtermism, published by krohmal5 on February 1, 2023 on The Effective Altruism Forum.Excerpt from the web posting below. Strong endorsement of EA priorities from a decidedly not EA source—The Elders—an organization founded by Nelson Mandela in 2007. Although their reference to “approaching a precipice” and imagery that looks a lot like the Toby Ord book cover seems unlikely to be a coincidence? Also, thanks to Nathan for encouraging me to post this on the forum, a first for me.“For the next five years, our focus will be on the climate crisis, nuclear weapons, pandemics, and the ongoing scourge of conflict. The impact of these threats is already being seen on lives and livelihoods: a rapid rise in extreme weather events, a pandemic that killed millions and cost trillions, a war in which the use of nuclear weapons has been openly raised. Some of these threats jeopardise the very existence of human life on our planet, yet nations lack the ability or will to manage these risks.We are approaching a precipice.The urgency of the interconnected existential threats we face requires a crisis mindset from our leaders – one that puts our shared humanity centre-stage, leaves no one behind, and recognises the rights of future generations.We believe that when nations work together, these threats can all be addressed for the good of the whole world. But to do so, our political leaders must show purpose, moral courage and an urgency of action.This is why, over the next five years, we will work on three programmes that address existential threats to humanity requiring a collective response - the climate crisis, pandemics, and nuclear weapons - and also on conflict (a threat in itself, and a risk factor for other threats).”Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
krohmal5 https://forum.effectivealtruism.org/posts/fS4mM6kD4GK3PWzEj/nelson-mandela-s-organization-the-elders-backing-x-risk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nelson Mandela’s organization, The Elders, backing x risk prevention and longtermism, published by krohmal5 on February 1, 2023 on The Effective Altruism Forum.Excerpt from the web posting below. Strong endorsement of EA priorities from a decidedly not EA source—The Elders—an organization founded by Nelson Mandela in 2007. Although their reference to “approaching a precipice” and imagery that looks a lot like the Toby Ord book cover seems unlikely to be a coincidence? Also, thanks to Nathan for encouraging me to post this on the forum, a first for me.“For the next five years, our focus will be on the climate crisis, nuclear weapons, pandemics, and the ongoing scourge of conflict. The impact of these threats is already being seen on lives and livelihoods: a rapid rise in extreme weather events, a pandemic that killed millions and cost trillions, a war in which the use of nuclear weapons has been openly raised. Some of these threats jeopardise the very existence of human life on our planet, yet nations lack the ability or will to manage these risks.We are approaching a precipice.The urgency of the interconnected existential threats we face requires a crisis mindset from our leaders – one that puts our shared humanity centre-stage, leaves no one behind, and recognises the rights of future generations.We believe that when nations work together, these threats can all be addressed for the good of the whole world. But to do so, our political leaders must show purpose, moral courage and an urgency of action.This is why, over the next five years, we will work on three programmes that address existential threats to humanity requiring a collective response - the climate crisis, pandemics, and nuclear weapons - and also on conflict (a threat in itself, and a risk factor for other threats).”Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 01 Feb 2023 10:22:31 +0000 EA - Nelson Mandela’s organization, The Elders, backing x risk prevention and longtermism by krohmal5 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nelson Mandela’s organization, The Elders, backing x risk prevention and longtermism, published by krohmal5 on February 1, 2023 on The Effective Altruism Forum.Excerpt from the web posting below. Strong endorsement of EA priorities from a decidedly not EA source—The Elders—an organization founded by Nelson Mandela in 2007. Although their reference to “approaching a precipice” and imagery that looks a lot like the Toby Ord book cover seems unlikely to be a coincidence? Also, thanks to Nathan for encouraging me to post this on the forum, a first for me.“For the next five years, our focus will be on the climate crisis, nuclear weapons, pandemics, and the ongoing scourge of conflict. The impact of these threats is already being seen on lives and livelihoods: a rapid rise in extreme weather events, a pandemic that killed millions and cost trillions, a war in which the use of nuclear weapons has been openly raised. Some of these threats jeopardise the very existence of human life on our planet, yet nations lack the ability or will to manage these risks.We are approaching a precipice.The urgency of the interconnected existential threats we face requires a crisis mindset from our leaders – one that puts our shared humanity centre-stage, leaves no one behind, and recognises the rights of future generations.We believe that when nations work together, these threats can all be addressed for the good of the whole world. But to do so, our political leaders must show purpose, moral courage and an urgency of action.This is why, over the next five years, we will work on three programmes that address existential threats to humanity requiring a collective response - the climate crisis, pandemics, and nuclear weapons - and also on conflict (a threat in itself, and a risk factor for other threats).”Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nelson Mandela’s organization, The Elders, backing x risk prevention and longtermism, published by krohmal5 on February 1, 2023 on The Effective Altruism Forum.Excerpt from the web posting below. Strong endorsement of EA priorities from a decidedly not EA source—The Elders—an organization founded by Nelson Mandela in 2007. Although their reference to “approaching a precipice” and imagery that looks a lot like the Toby Ord book cover seems unlikely to be a coincidence? Also, thanks to Nathan for encouraging me to post this on the forum, a first for me.“For the next five years, our focus will be on the climate crisis, nuclear weapons, pandemics, and the ongoing scourge of conflict. The impact of these threats is already being seen on lives and livelihoods: a rapid rise in extreme weather events, a pandemic that killed millions and cost trillions, a war in which the use of nuclear weapons has been openly raised. Some of these threats jeopardise the very existence of human life on our planet, yet nations lack the ability or will to manage these risks.We are approaching a precipice.The urgency of the interconnected existential threats we face requires a crisis mindset from our leaders – one that puts our shared humanity centre-stage, leaves no one behind, and recognises the rights of future generations.We believe that when nations work together, these threats can all be addressed for the good of the whole world. But to do so, our political leaders must show purpose, moral courage and an urgency of action.This is why, over the next five years, we will work on three programmes that address existential threats to humanity requiring a collective response - the climate crisis, pandemics, and nuclear weapons - and also on conflict (a threat in itself, and a risk factor for other threats).”Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
krohmal5 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:00 None full 4680
CQfuNy5sw5niXFF5A_NL_EA_EA EA - More Is Probably More - Forecasting Accuracy and Number of Forecasters on Metaculus by nikos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More Is Probably More - Forecasting Accuracy and Number of Forecasters on Metaculus, published by nikos on January 31, 2023 on The Effective Altruism Forum.TLDRAn increase in the number of forecasters seems to lead to an improvement of the Metaculus community prediction. I believe this effect is real, but due to confounding effects, the analysis presented here might overestimate the improvement gained.That improvement of the Metaculus community prediction seems to be approximately logarithmic, meaning that doubling the number of forecasters seems to lead to a roughly constant (albeit probably diminishing) relative improvement in performance in terms of Brier Score: Going from 100 to 200 would give you a relative improvement in Brier score almost as large as when going from 10 to 20 (e.g. an improvement by x percent). Note though, that it is a bit unclear what "an improvement in Brier score by X" actually means in terms of forecast quality.Increasing the number of forecasters on Metaculus seems to not only improve performance on average, but also seems to decrease the variability of predictions, making them more stable and reliableThis analysis complements another existing one and comes to similar conclusions. Both analyses suffer from potential biases, but they are different ones.All code used for this analysis can be found here.IntroductionOne of the central wisdoms in forecasting is that an ensemble of forecasts is more than the sum of its parts. Take a crowd of forecasters and average their predictions - the resulting ensemble will usually be more accurate than almost all of the individual forecasts.But how does the performance of the ensemble change when you increase the number of forecasters? Are fifty forecasters ten times as good as five? Are five hundred even better? Charles Dillon looked at this a while ago using Metaculus data.He broadly found that more forecasters usually means better performance. Specifically, he estimated that doubling the number of forecasters would reduce the average Brier score by 0.012 points. The Brier score is a metric commonly used to evaluate the performance of forecasts with a binary yes/no outcome and equals the squared difference between the outcome (0 or 1) and the forecast. Smaller values are better. Charles concluded that in practice a Metaculus community prediction with only ten forecasters is not a lot less reliable than a community prediction with thirty forecasters.Charles' analysis was restricted to aggregated data, which means that he had access to the Metaculus community prediction, but not to individual level data. This makes the analysis susceptible to potential biases. For example, it could be the case that forecasters really like easy questions and that those questions which attracted fewer forecasters were genuinely harder. We would then expect to see worse performance on questions with fewer forecasters even if the number of forecasters had no actual effect on performance. In this post I will try to shed some more light on the question, this time making use of individual level data.MethodologyTo examine the effect of the number of forecasters on the performance of the community prediction, we can use a technique called "bootstrapping". The idea is simple. Take a question that has n = 200 forecasters. And let's suppose we are interested in how the community prediction would have performed had there only been n = 5, 10, 20, or 50 instead of the actual 200 forecasters. To find out we can take a random sample of size n, e.g. 5, forecasters, discard all the other ones, compute a community prediction based on just these 5 forecasters and see how it would have performed. One data point isn't all too informative, so we repeat the process a few thousand times and average the results. Now we do that for othe...]]>
nikos https://forum.effectivealtruism.org/posts/CQfuNy5sw5niXFF5A/more-is-probably-more-forecasting-accuracy-and-number-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More Is Probably More - Forecasting Accuracy and Number of Forecasters on Metaculus, published by nikos on January 31, 2023 on The Effective Altruism Forum.TLDRAn increase in the number of forecasters seems to lead to an improvement of the Metaculus community prediction. I believe this effect is real, but due to confounding effects, the analysis presented here might overestimate the improvement gained.That improvement of the Metaculus community prediction seems to be approximately logarithmic, meaning that doubling the number of forecasters seems to lead to a roughly constant (albeit probably diminishing) relative improvement in performance in terms of Brier Score: Going from 100 to 200 would give you a relative improvement in Brier score almost as large as when going from 10 to 20 (e.g. an improvement by x percent). Note though, that it is a bit unclear what "an improvement in Brier score by X" actually means in terms of forecast quality.Increasing the number of forecasters on Metaculus seems to not only improve performance on average, but also seems to decrease the variability of predictions, making them more stable and reliableThis analysis complements another existing one and comes to similar conclusions. Both analyses suffer from potential biases, but they are different ones.All code used for this analysis can be found here.IntroductionOne of the central wisdoms in forecasting is that an ensemble of forecasts is more than the sum of its parts. Take a crowd of forecasters and average their predictions - the resulting ensemble will usually be more accurate than almost all of the individual forecasts.But how does the performance of the ensemble change when you increase the number of forecasters? Are fifty forecasters ten times as good as five? Are five hundred even better? Charles Dillon looked at this a while ago using Metaculus data.He broadly found that more forecasters usually means better performance. Specifically, he estimated that doubling the number of forecasters would reduce the average Brier score by 0.012 points. The Brier score is a metric commonly used to evaluate the performance of forecasts with a binary yes/no outcome and equals the squared difference between the outcome (0 or 1) and the forecast. Smaller values are better. Charles concluded that in practice a Metaculus community prediction with only ten forecasters is not a lot less reliable than a community prediction with thirty forecasters.Charles' analysis was restricted to aggregated data, which means that he had access to the Metaculus community prediction, but not to individual level data. This makes the analysis susceptible to potential biases. For example, it could be the case that forecasters really like easy questions and that those questions which attracted fewer forecasters were genuinely harder. We would then expect to see worse performance on questions with fewer forecasters even if the number of forecasters had no actual effect on performance. In this post I will try to shed some more light on the question, this time making use of individual level data.MethodologyTo examine the effect of the number of forecasters on the performance of the community prediction, we can use a technique called "bootstrapping". The idea is simple. Take a question that has n = 200 forecasters. And let's suppose we are interested in how the community prediction would have performed had there only been n = 5, 10, 20, or 50 instead of the actual 200 forecasters. To find out we can take a random sample of size n, e.g. 5, forecasters, discard all the other ones, compute a community prediction based on just these 5 forecasters and see how it would have performed. One data point isn't all too informative, so we repeat the process a few thousand times and average the results. Now we do that for othe...]]>
Wed, 01 Feb 2023 01:43:31 +0000 EA - More Is Probably More - Forecasting Accuracy and Number of Forecasters on Metaculus by nikos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More Is Probably More - Forecasting Accuracy and Number of Forecasters on Metaculus, published by nikos on January 31, 2023 on The Effective Altruism Forum.TLDRAn increase in the number of forecasters seems to lead to an improvement of the Metaculus community prediction. I believe this effect is real, but due to confounding effects, the analysis presented here might overestimate the improvement gained.That improvement of the Metaculus community prediction seems to be approximately logarithmic, meaning that doubling the number of forecasters seems to lead to a roughly constant (albeit probably diminishing) relative improvement in performance in terms of Brier Score: Going from 100 to 200 would give you a relative improvement in Brier score almost as large as when going from 10 to 20 (e.g. an improvement by x percent). Note though, that it is a bit unclear what "an improvement in Brier score by X" actually means in terms of forecast quality.Increasing the number of forecasters on Metaculus seems to not only improve performance on average, but also seems to decrease the variability of predictions, making them more stable and reliableThis analysis complements another existing one and comes to similar conclusions. Both analyses suffer from potential biases, but they are different ones.All code used for this analysis can be found here.IntroductionOne of the central wisdoms in forecasting is that an ensemble of forecasts is more than the sum of its parts. Take a crowd of forecasters and average their predictions - the resulting ensemble will usually be more accurate than almost all of the individual forecasts.But how does the performance of the ensemble change when you increase the number of forecasters? Are fifty forecasters ten times as good as five? Are five hundred even better? Charles Dillon looked at this a while ago using Metaculus data.He broadly found that more forecasters usually means better performance. Specifically, he estimated that doubling the number of forecasters would reduce the average Brier score by 0.012 points. The Brier score is a metric commonly used to evaluate the performance of forecasts with a binary yes/no outcome and equals the squared difference between the outcome (0 or 1) and the forecast. Smaller values are better. Charles concluded that in practice a Metaculus community prediction with only ten forecasters is not a lot less reliable than a community prediction with thirty forecasters.Charles' analysis was restricted to aggregated data, which means that he had access to the Metaculus community prediction, but not to individual level data. This makes the analysis susceptible to potential biases. For example, it could be the case that forecasters really like easy questions and that those questions which attracted fewer forecasters were genuinely harder. We would then expect to see worse performance on questions with fewer forecasters even if the number of forecasters had no actual effect on performance. In this post I will try to shed some more light on the question, this time making use of individual level data.MethodologyTo examine the effect of the number of forecasters on the performance of the community prediction, we can use a technique called "bootstrapping". The idea is simple. Take a question that has n = 200 forecasters. And let's suppose we are interested in how the community prediction would have performed had there only been n = 5, 10, 20, or 50 instead of the actual 200 forecasters. To find out we can take a random sample of size n, e.g. 5, forecasters, discard all the other ones, compute a community prediction based on just these 5 forecasters and see how it would have performed. One data point isn't all too informative, so we repeat the process a few thousand times and average the results. Now we do that for othe...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More Is Probably More - Forecasting Accuracy and Number of Forecasters on Metaculus, published by nikos on January 31, 2023 on The Effective Altruism Forum.TLDRAn increase in the number of forecasters seems to lead to an improvement of the Metaculus community prediction. I believe this effect is real, but due to confounding effects, the analysis presented here might overestimate the improvement gained.That improvement of the Metaculus community prediction seems to be approximately logarithmic, meaning that doubling the number of forecasters seems to lead to a roughly constant (albeit probably diminishing) relative improvement in performance in terms of Brier Score: Going from 100 to 200 would give you a relative improvement in Brier score almost as large as when going from 10 to 20 (e.g. an improvement by x percent). Note though, that it is a bit unclear what "an improvement in Brier score by X" actually means in terms of forecast quality.Increasing the number of forecasters on Metaculus seems to not only improve performance on average, but also seems to decrease the variability of predictions, making them more stable and reliableThis analysis complements another existing one and comes to similar conclusions. Both analyses suffer from potential biases, but they are different ones.All code used for this analysis can be found here.IntroductionOne of the central wisdoms in forecasting is that an ensemble of forecasts is more than the sum of its parts. Take a crowd of forecasters and average their predictions - the resulting ensemble will usually be more accurate than almost all of the individual forecasts.But how does the performance of the ensemble change when you increase the number of forecasters? Are fifty forecasters ten times as good as five? Are five hundred even better? Charles Dillon looked at this a while ago using Metaculus data.He broadly found that more forecasters usually means better performance. Specifically, he estimated that doubling the number of forecasters would reduce the average Brier score by 0.012 points. The Brier score is a metric commonly used to evaluate the performance of forecasts with a binary yes/no outcome and equals the squared difference between the outcome (0 or 1) and the forecast. Smaller values are better. Charles concluded that in practice a Metaculus community prediction with only ten forecasters is not a lot less reliable than a community prediction with thirty forecasters.Charles' analysis was restricted to aggregated data, which means that he had access to the Metaculus community prediction, but not to individual level data. This makes the analysis susceptible to potential biases. For example, it could be the case that forecasters really like easy questions and that those questions which attracted fewer forecasters were genuinely harder. We would then expect to see worse performance on questions with fewer forecasters even if the number of forecasters had no actual effect on performance. In this post I will try to shed some more light on the question, this time making use of individual level data.MethodologyTo examine the effect of the number of forecasters on the performance of the community prediction, we can use a technique called "bootstrapping". The idea is simple. Take a question that has n = 200 forecasters. And let's suppose we are interested in how the community prediction would have performed had there only been n = 5, 10, 20, or 50 instead of the actual 200 forecasters. To find out we can take a random sample of size n, e.g. 5, forecasters, discard all the other ones, compute a community prediction based on just these 5 forecasters and see how it would have performed. One data point isn't all too informative, so we repeat the process a few thousand times and average the results. Now we do that for othe...]]>
nikos https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 16:49 None full 4681
nxBKxFcfMnEb3Cmys_NL_EA_EA EA - How to use AI speech transcription and analysis to accelerate social science research by AlexanderSaeri Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to use AI speech transcription and analysis to accelerate social science research, published by AlexanderSaeri on January 31, 2023 on The Effective Altruism Forum.SummaryAI tools like OpenAI Whisper and GPT-3 can be used to improve social science research workflows by helping to collect and analyse speech and text data.In this article, I describe two worked examples where I applied AI tools to (1) transcribe and (2) conduct basic thematic analysis of a research interview, and provide enough detail for readers to replicate and adapt my approach.OpenAI Whisper (example) created a high quality English transcription of a 30 minute research interview at a ~70x cost saving compared to a human transcriber.GPT-3 (text-davinci-003; example) answered a research question and identified relevant themes from a transcribed research interview, after providing a structured prompt and one example.These tools, when chained together with human oversight, can be considered an early, weak example of PASTA (Process for Automating Scientific and Technological Advancement).Social science research workflows involve a lot of speech and text data that is laborious to collect and analyseThe daily practice of social science research involves a lot of talking, reading and writing. In my applied behaviour science research consulting role at Monash University and through Ready Research, I generate or participate in the generation of a huge amount of speech and text data. This includes highly structured research activities such as interviews, surveys, observation and experiments; but also less structured research activities like workshops and meetings.Some fictionalised examples of work I’ve done in the past year:Research interviews with 20 regular city commuters to understand what influences their commuting behaviour post-COVID, to assist a public transit authority in planning and operating its services efficientlyPractitioner interviews with staff from city, regional and rural local governments to assess organisational readiness for flood preparation and responseWorkshop of 5-10 people involved in hospital sepsis care, each representing a different interest (e.g., patients, clinicians, researchers, funders) to identify priority areas to direct $5M research fundingSurvey of 5,000 Australians to understand the impacts and experiences of living under lockdown in Melbourne, Australia during COVID-19Evaluation interviews with 4 participants in the AGI Safety Fundamentals course to understand the most significant change in their knowledge, skills, or behaviours as a result of their participationTo make this data useful it needs to be collected, processed, organised, structured and analysed. The typical workflow for these kinds of activities involves taking written notes during the research activity, or recording the audio / video research activity and reviewing the recording later. Interviews are sometimes transcribed by a paid service for later analysis. Other times they are transcribed by the researcher.The amount of speech and text data generated during research activity is large - each research activity yields thousands of words. The sheer volume of data can be overwhelming and daunting, making it difficult to carry out analysis in any meaningful way. In addition, sometimes data just isn’t collected (e.g., during an interview or workshop) because the researcher is busy facilitating / listening / processing / connecting with research participants.Even for data that is collected, managing and analysing it is a challenge. Specialised programs such as nVivo are used in social science to manage and analyse text data, but less structured research activities would almost never be managed or analysed through this kind of program, because of the time and skills required. Open text data i...]]>
AlexanderSaeri https://forum.effectivealtruism.org/posts/nxBKxFcfMnEb3Cmys/how-to-use-ai-speech-transcription-and-analysis-to Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to use AI speech transcription and analysis to accelerate social science research, published by AlexanderSaeri on January 31, 2023 on The Effective Altruism Forum.SummaryAI tools like OpenAI Whisper and GPT-3 can be used to improve social science research workflows by helping to collect and analyse speech and text data.In this article, I describe two worked examples where I applied AI tools to (1) transcribe and (2) conduct basic thematic analysis of a research interview, and provide enough detail for readers to replicate and adapt my approach.OpenAI Whisper (example) created a high quality English transcription of a 30 minute research interview at a ~70x cost saving compared to a human transcriber.GPT-3 (text-davinci-003; example) answered a research question and identified relevant themes from a transcribed research interview, after providing a structured prompt and one example.These tools, when chained together with human oversight, can be considered an early, weak example of PASTA (Process for Automating Scientific and Technological Advancement).Social science research workflows involve a lot of speech and text data that is laborious to collect and analyseThe daily practice of social science research involves a lot of talking, reading and writing. In my applied behaviour science research consulting role at Monash University and through Ready Research, I generate or participate in the generation of a huge amount of speech and text data. This includes highly structured research activities such as interviews, surveys, observation and experiments; but also less structured research activities like workshops and meetings.Some fictionalised examples of work I’ve done in the past year:Research interviews with 20 regular city commuters to understand what influences their commuting behaviour post-COVID, to assist a public transit authority in planning and operating its services efficientlyPractitioner interviews with staff from city, regional and rural local governments to assess organisational readiness for flood preparation and responseWorkshop of 5-10 people involved in hospital sepsis care, each representing a different interest (e.g., patients, clinicians, researchers, funders) to identify priority areas to direct $5M research fundingSurvey of 5,000 Australians to understand the impacts and experiences of living under lockdown in Melbourne, Australia during COVID-19Evaluation interviews with 4 participants in the AGI Safety Fundamentals course to understand the most significant change in their knowledge, skills, or behaviours as a result of their participationTo make this data useful it needs to be collected, processed, organised, structured and analysed. The typical workflow for these kinds of activities involves taking written notes during the research activity, or recording the audio / video research activity and reviewing the recording later. Interviews are sometimes transcribed by a paid service for later analysis. Other times they are transcribed by the researcher.The amount of speech and text data generated during research activity is large - each research activity yields thousands of words. The sheer volume of data can be overwhelming and daunting, making it difficult to carry out analysis in any meaningful way. In addition, sometimes data just isn’t collected (e.g., during an interview or workshop) because the researcher is busy facilitating / listening / processing / connecting with research participants.Even for data that is collected, managing and analysing it is a challenge. Specialised programs such as nVivo are used in social science to manage and analyse text data, but less structured research activities would almost never be managed or analysed through this kind of program, because of the time and skills required. Open text data i...]]>
Tue, 31 Jan 2023 23:08:56 +0000 EA - How to use AI speech transcription and analysis to accelerate social science research by AlexanderSaeri Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to use AI speech transcription and analysis to accelerate social science research, published by AlexanderSaeri on January 31, 2023 on The Effective Altruism Forum.SummaryAI tools like OpenAI Whisper and GPT-3 can be used to improve social science research workflows by helping to collect and analyse speech and text data.In this article, I describe two worked examples where I applied AI tools to (1) transcribe and (2) conduct basic thematic analysis of a research interview, and provide enough detail for readers to replicate and adapt my approach.OpenAI Whisper (example) created a high quality English transcription of a 30 minute research interview at a ~70x cost saving compared to a human transcriber.GPT-3 (text-davinci-003; example) answered a research question and identified relevant themes from a transcribed research interview, after providing a structured prompt and one example.These tools, when chained together with human oversight, can be considered an early, weak example of PASTA (Process for Automating Scientific and Technological Advancement).Social science research workflows involve a lot of speech and text data that is laborious to collect and analyseThe daily practice of social science research involves a lot of talking, reading and writing. In my applied behaviour science research consulting role at Monash University and through Ready Research, I generate or participate in the generation of a huge amount of speech and text data. This includes highly structured research activities such as interviews, surveys, observation and experiments; but also less structured research activities like workshops and meetings.Some fictionalised examples of work I’ve done in the past year:Research interviews with 20 regular city commuters to understand what influences their commuting behaviour post-COVID, to assist a public transit authority in planning and operating its services efficientlyPractitioner interviews with staff from city, regional and rural local governments to assess organisational readiness for flood preparation and responseWorkshop of 5-10 people involved in hospital sepsis care, each representing a different interest (e.g., patients, clinicians, researchers, funders) to identify priority areas to direct $5M research fundingSurvey of 5,000 Australians to understand the impacts and experiences of living under lockdown in Melbourne, Australia during COVID-19Evaluation interviews with 4 participants in the AGI Safety Fundamentals course to understand the most significant change in their knowledge, skills, or behaviours as a result of their participationTo make this data useful it needs to be collected, processed, organised, structured and analysed. The typical workflow for these kinds of activities involves taking written notes during the research activity, or recording the audio / video research activity and reviewing the recording later. Interviews are sometimes transcribed by a paid service for later analysis. Other times they are transcribed by the researcher.The amount of speech and text data generated during research activity is large - each research activity yields thousands of words. The sheer volume of data can be overwhelming and daunting, making it difficult to carry out analysis in any meaningful way. In addition, sometimes data just isn’t collected (e.g., during an interview or workshop) because the researcher is busy facilitating / listening / processing / connecting with research participants.Even for data that is collected, managing and analysing it is a challenge. Specialised programs such as nVivo are used in social science to manage and analyse text data, but less structured research activities would almost never be managed or analysed through this kind of program, because of the time and skills required. Open text data i...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to use AI speech transcription and analysis to accelerate social science research, published by AlexanderSaeri on January 31, 2023 on The Effective Altruism Forum.SummaryAI tools like OpenAI Whisper and GPT-3 can be used to improve social science research workflows by helping to collect and analyse speech and text data.In this article, I describe two worked examples where I applied AI tools to (1) transcribe and (2) conduct basic thematic analysis of a research interview, and provide enough detail for readers to replicate and adapt my approach.OpenAI Whisper (example) created a high quality English transcription of a 30 minute research interview at a ~70x cost saving compared to a human transcriber.GPT-3 (text-davinci-003; example) answered a research question and identified relevant themes from a transcribed research interview, after providing a structured prompt and one example.These tools, when chained together with human oversight, can be considered an early, weak example of PASTA (Process for Automating Scientific and Technological Advancement).Social science research workflows involve a lot of speech and text data that is laborious to collect and analyseThe daily practice of social science research involves a lot of talking, reading and writing. In my applied behaviour science research consulting role at Monash University and through Ready Research, I generate or participate in the generation of a huge amount of speech and text data. This includes highly structured research activities such as interviews, surveys, observation and experiments; but also less structured research activities like workshops and meetings.Some fictionalised examples of work I’ve done in the past year:Research interviews with 20 regular city commuters to understand what influences their commuting behaviour post-COVID, to assist a public transit authority in planning and operating its services efficientlyPractitioner interviews with staff from city, regional and rural local governments to assess organisational readiness for flood preparation and responseWorkshop of 5-10 people involved in hospital sepsis care, each representing a different interest (e.g., patients, clinicians, researchers, funders) to identify priority areas to direct $5M research fundingSurvey of 5,000 Australians to understand the impacts and experiences of living under lockdown in Melbourne, Australia during COVID-19Evaluation interviews with 4 participants in the AGI Safety Fundamentals course to understand the most significant change in their knowledge, skills, or behaviours as a result of their participationTo make this data useful it needs to be collected, processed, organised, structured and analysed. The typical workflow for these kinds of activities involves taking written notes during the research activity, or recording the audio / video research activity and reviewing the recording later. Interviews are sometimes transcribed by a paid service for later analysis. Other times they are transcribed by the researcher.The amount of speech and text data generated during research activity is large - each research activity yields thousands of words. The sheer volume of data can be overwhelming and daunting, making it difficult to carry out analysis in any meaningful way. In addition, sometimes data just isn’t collected (e.g., during an interview or workshop) because the researcher is busy facilitating / listening / processing / connecting with research participants.Even for data that is collected, managing and analysing it is a challenge. Specialised programs such as nVivo are used in social science to manage and analyse text data, but less structured research activities would almost never be managed or analysed through this kind of program, because of the time and skills required. Open text data i...]]>
AlexanderSaeri https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 20:39 None full 4682
uoEHR2ETE73vDMHqe_NL_EA_EA EA - What I thought about child marriage as a cause area, and how I've changed my mind by Catherine Fist Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What I thought about child marriage as a cause area, and how I've changed my mind, published by Catherine Fist on January 31, 2023 on The Effective Altruism Forum.Summary:I have been working on a research project into the scale, tractability and neglectedness of child marriage. After 80 hours of research, I thought that there was a relatively strong case that effective altruist funding organisations that fund projects addressing international poverty should consider funding child marriage interventions. I then found a source that undermined a key premise: child marriage is clearly harmful across a number of health metrics. I describe in more detail my experience and findings below, and share some tips for those undertaking self-directed research projects to avoid making the mistakes I made (skip to ‘What I will do next time’ for these).Context:I had no direct experience researching child marriage, but I was interested to learn about effective interventions and whether it had potential as a possible cause area. I studied Political Science and International Relations at University, as well as some subjects on development, gender and economics, I have also worked as a government evaluator. My goal was to do some preliminary research and determine if child marriage was large scale problem, tractable and neglected. If so, I would share this research with effective altruist funders.My model:In October last year, I started a self-directed research project into the scale, tractability and neglectedness of child marriage. I read and collected dozens of sources, analyzed data, contacted a top researcher, compared effective interventions, built a mental model of what the charitable space looked like and identified potential interventions for EA support.I came to the following findings, based on around 80 hours of research:Scale/importanceChild marriage is a widespread practice that affects around 12 millions girls per year (UNICEF, 2022)Child marriage is a harmful practice that increases the risk of negative maternal and sexual health outcomes, domestic and sexual violence, and reduces the likelihood that a girl will complete school. This is the consensus position held by global development institutions (see meeting report from leading global institutions on child marriage UNFPA, 2019).TractabilityThere are cost effective interventions that work to prevent child marriage, e.g. the ‘cost per marriage averted’ ranged between US$159 and US$732 in this study (Erulkar, Medhin and Weissman 2017).The effect of child marriage on quality adjusted or disability adjusted life years has not been quantified so it difficult to compare cost effectiveness with other interventions (EA Forum explainer on these metrics).NeglectednessPopulation Council is a research body focussed on running quasi-experimental programs and creating scalable interventions (Population Council, date unknown).The lead investigator into child marriage at Population Council informed me that it is not currently running programs to prevent child marriage because of lack of funding.ConclusionEffective altruist funding organisations focused on international health and poverty should consider funding effective interventions to prevent child marriage at a large scale.What broke my modelEarlier this week, I decided it would be useful to try and quantify the harm of child marriage, or at least some of the harms, into commonly used metrics like quality adjusted or disability adjusted life years (QALYs or DALYs). I anticipated that this would be a key piece of information for EA funders, and it had not been done so far (finding 2b). In doing so, I came across a study that fundamentally challenged finding 1b: child marriage is an underlying cause of many harmful outcomes. Without strong evidence that child marri...]]>
Catherine Fist https://forum.effectivealtruism.org/posts/uoEHR2ETE73vDMHqe/what-i-thought-about-child-marriage-as-a-cause-area-and-how Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What I thought about child marriage as a cause area, and how I've changed my mind, published by Catherine Fist on January 31, 2023 on The Effective Altruism Forum.Summary:I have been working on a research project into the scale, tractability and neglectedness of child marriage. After 80 hours of research, I thought that there was a relatively strong case that effective altruist funding organisations that fund projects addressing international poverty should consider funding child marriage interventions. I then found a source that undermined a key premise: child marriage is clearly harmful across a number of health metrics. I describe in more detail my experience and findings below, and share some tips for those undertaking self-directed research projects to avoid making the mistakes I made (skip to ‘What I will do next time’ for these).Context:I had no direct experience researching child marriage, but I was interested to learn about effective interventions and whether it had potential as a possible cause area. I studied Political Science and International Relations at University, as well as some subjects on development, gender and economics, I have also worked as a government evaluator. My goal was to do some preliminary research and determine if child marriage was large scale problem, tractable and neglected. If so, I would share this research with effective altruist funders.My model:In October last year, I started a self-directed research project into the scale, tractability and neglectedness of child marriage. I read and collected dozens of sources, analyzed data, contacted a top researcher, compared effective interventions, built a mental model of what the charitable space looked like and identified potential interventions for EA support.I came to the following findings, based on around 80 hours of research:Scale/importanceChild marriage is a widespread practice that affects around 12 millions girls per year (UNICEF, 2022)Child marriage is a harmful practice that increases the risk of negative maternal and sexual health outcomes, domestic and sexual violence, and reduces the likelihood that a girl will complete school. This is the consensus position held by global development institutions (see meeting report from leading global institutions on child marriage UNFPA, 2019).TractabilityThere are cost effective interventions that work to prevent child marriage, e.g. the ‘cost per marriage averted’ ranged between US$159 and US$732 in this study (Erulkar, Medhin and Weissman 2017).The effect of child marriage on quality adjusted or disability adjusted life years has not been quantified so it difficult to compare cost effectiveness with other interventions (EA Forum explainer on these metrics).NeglectednessPopulation Council is a research body focussed on running quasi-experimental programs and creating scalable interventions (Population Council, date unknown).The lead investigator into child marriage at Population Council informed me that it is not currently running programs to prevent child marriage because of lack of funding.ConclusionEffective altruist funding organisations focused on international health and poverty should consider funding effective interventions to prevent child marriage at a large scale.What broke my modelEarlier this week, I decided it would be useful to try and quantify the harm of child marriage, or at least some of the harms, into commonly used metrics like quality adjusted or disability adjusted life years (QALYs or DALYs). I anticipated that this would be a key piece of information for EA funders, and it had not been done so far (finding 2b). In doing so, I came across a study that fundamentally challenged finding 1b: child marriage is an underlying cause of many harmful outcomes. Without strong evidence that child marri...]]>
Tue, 31 Jan 2023 21:18:13 +0000 EA - What I thought about child marriage as a cause area, and how I've changed my mind by Catherine Fist Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What I thought about child marriage as a cause area, and how I've changed my mind, published by Catherine Fist on January 31, 2023 on The Effective Altruism Forum.Summary:I have been working on a research project into the scale, tractability and neglectedness of child marriage. After 80 hours of research, I thought that there was a relatively strong case that effective altruist funding organisations that fund projects addressing international poverty should consider funding child marriage interventions. I then found a source that undermined a key premise: child marriage is clearly harmful across a number of health metrics. I describe in more detail my experience and findings below, and share some tips for those undertaking self-directed research projects to avoid making the mistakes I made (skip to ‘What I will do next time’ for these).Context:I had no direct experience researching child marriage, but I was interested to learn about effective interventions and whether it had potential as a possible cause area. I studied Political Science and International Relations at University, as well as some subjects on development, gender and economics, I have also worked as a government evaluator. My goal was to do some preliminary research and determine if child marriage was large scale problem, tractable and neglected. If so, I would share this research with effective altruist funders.My model:In October last year, I started a self-directed research project into the scale, tractability and neglectedness of child marriage. I read and collected dozens of sources, analyzed data, contacted a top researcher, compared effective interventions, built a mental model of what the charitable space looked like and identified potential interventions for EA support.I came to the following findings, based on around 80 hours of research:Scale/importanceChild marriage is a widespread practice that affects around 12 millions girls per year (UNICEF, 2022)Child marriage is a harmful practice that increases the risk of negative maternal and sexual health outcomes, domestic and sexual violence, and reduces the likelihood that a girl will complete school. This is the consensus position held by global development institutions (see meeting report from leading global institutions on child marriage UNFPA, 2019).TractabilityThere are cost effective interventions that work to prevent child marriage, e.g. the ‘cost per marriage averted’ ranged between US$159 and US$732 in this study (Erulkar, Medhin and Weissman 2017).The effect of child marriage on quality adjusted or disability adjusted life years has not been quantified so it difficult to compare cost effectiveness with other interventions (EA Forum explainer on these metrics).NeglectednessPopulation Council is a research body focussed on running quasi-experimental programs and creating scalable interventions (Population Council, date unknown).The lead investigator into child marriage at Population Council informed me that it is not currently running programs to prevent child marriage because of lack of funding.ConclusionEffective altruist funding organisations focused on international health and poverty should consider funding effective interventions to prevent child marriage at a large scale.What broke my modelEarlier this week, I decided it would be useful to try and quantify the harm of child marriage, or at least some of the harms, into commonly used metrics like quality adjusted or disability adjusted life years (QALYs or DALYs). I anticipated that this would be a key piece of information for EA funders, and it had not been done so far (finding 2b). In doing so, I came across a study that fundamentally challenged finding 1b: child marriage is an underlying cause of many harmful outcomes. Without strong evidence that child marri...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What I thought about child marriage as a cause area, and how I've changed my mind, published by Catherine Fist on January 31, 2023 on The Effective Altruism Forum.Summary:I have been working on a research project into the scale, tractability and neglectedness of child marriage. After 80 hours of research, I thought that there was a relatively strong case that effective altruist funding organisations that fund projects addressing international poverty should consider funding child marriage interventions. I then found a source that undermined a key premise: child marriage is clearly harmful across a number of health metrics. I describe in more detail my experience and findings below, and share some tips for those undertaking self-directed research projects to avoid making the mistakes I made (skip to ‘What I will do next time’ for these).Context:I had no direct experience researching child marriage, but I was interested to learn about effective interventions and whether it had potential as a possible cause area. I studied Political Science and International Relations at University, as well as some subjects on development, gender and economics, I have also worked as a government evaluator. My goal was to do some preliminary research and determine if child marriage was large scale problem, tractable and neglected. If so, I would share this research with effective altruist funders.My model:In October last year, I started a self-directed research project into the scale, tractability and neglectedness of child marriage. I read and collected dozens of sources, analyzed data, contacted a top researcher, compared effective interventions, built a mental model of what the charitable space looked like and identified potential interventions for EA support.I came to the following findings, based on around 80 hours of research:Scale/importanceChild marriage is a widespread practice that affects around 12 millions girls per year (UNICEF, 2022)Child marriage is a harmful practice that increases the risk of negative maternal and sexual health outcomes, domestic and sexual violence, and reduces the likelihood that a girl will complete school. This is the consensus position held by global development institutions (see meeting report from leading global institutions on child marriage UNFPA, 2019).TractabilityThere are cost effective interventions that work to prevent child marriage, e.g. the ‘cost per marriage averted’ ranged between US$159 and US$732 in this study (Erulkar, Medhin and Weissman 2017).The effect of child marriage on quality adjusted or disability adjusted life years has not been quantified so it difficult to compare cost effectiveness with other interventions (EA Forum explainer on these metrics).NeglectednessPopulation Council is a research body focussed on running quasi-experimental programs and creating scalable interventions (Population Council, date unknown).The lead investigator into child marriage at Population Council informed me that it is not currently running programs to prevent child marriage because of lack of funding.ConclusionEffective altruist funding organisations focused on international health and poverty should consider funding effective interventions to prevent child marriage at a large scale.What broke my modelEarlier this week, I decided it would be useful to try and quantify the harm of child marriage, or at least some of the harms, into commonly used metrics like quality adjusted or disability adjusted life years (QALYs or DALYs). I anticipated that this would be a key piece of information for EA funders, and it had not been done so far (finding 2b). In doing so, I came across a study that fundamentally challenged finding 1b: child marriage is an underlying cause of many harmful outcomes. Without strong evidence that child marri...]]>
Catherine Fist https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 12:15 None full 4671
cziz5YGLnSnpa5qzS_NL_EA_EA EA - Longtermism and animals: Resources + join our Discord community! by Ren Springlea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermism and animals: Resources + join our Discord community!, published by Ren Springlea on January 31, 2023 on The Effective Altruism Forum.Key pointsWe host a Discord server for discussing animals and longtermism. You are more than welcome to join here.There are compelling reasons to help those who will live in the long-term future, and there are compelling reasons to help nonhuman animals. As such, the intersection between longtermism and animal advocacy is starting to receive a bit more attention among the EA community.This post has three main purposes:To invite you to our Discord server on animals and longtermism (link above).To share a list of resources on animals and longtermism that we've collected over time, in case this helps anybody.To share some details about an organisation we almost launched, which would have aimed to find the best interventions for helping animals in the long-term future. We got funding, but the grant was awarded by an FTX Future Fund regrantor, so the funding fell through. In this post, we have linked our grant application/plans in case anybody wants to pick up where we left off.Background to animals and longtermismThere are many reasons why animal advocacy and longtermism can help each other do even more good. These reasons are explored in detail in the resources that we list below. To quickly name a few:The expected number of animals in the far future could be simply enormous. This means that considering the lives of animals in the far future could be a great way to have a large impact. As Browning and Viet conclude, "Work on longtermism has thus far primarily focused on the existence and wellbeing of future humans, without corresponding consideration of animal welfare. [...] Given the sheer expected number of future animals, as well as the likelihood of their suffering, we argue that the well-being of future animals should be given serious consideration when thinking about the long-term future, allowing for the possibility that in some cases their interests may even dominate."Animal welfare may represent most of the moral value in the far future. This means that longtermists may need to consider the perspective of animal advocacy in order to do the most good.Most animals might exist in the long-term future. This means that animal advocates may need to consider the perspective of longtermists in order to do the most good.Society’s attitude towards animals is important for the long-term trajectory of society's moral values.Animals may also interact with new phenomena, like artificial intelligence, space colonisation, artificial sentience, and brain emulations in ways that are likely to have serious moral implications.So far, research on animals in the long-term future has fallen into these broad buckets:Animals and space colonisation (making sure space exploration and colonisation is animal-friendly)Wild animal welfare (e.g. resilience of wild animal populations over time, introducing wild animals to other planets, how artificial intelligence affects wild animals)Animals and artificial intelligence (making sure artificial intelligence and machine learning is designed with animals in mind)Digital animal mindsHealth of the animal advocacy movement (to enable animal advocacy to sustain its efforts over time)Further (meta) thinking on longtermism and animalsResources on animals and longtermismGeneral animals and longtermismHeather Browning and Walter Veit - Longtermism and AnimalsMichael Dello-Iacovo - Longtermism and Animal Farming TrajectoriesZach Freitas-Groff - Longtermism in Animal AdvocacyOscar Horta - Why Animal Advocacy Matters Much More Than You ThinkAnimal advocacy & longtermism - Effective Animal Advocacy Leadership Coordination Forum 2022 (slides)William Bench - Apples and Oranges: On the Conv...]]>
Ren Springlea https://forum.effectivealtruism.org/posts/cziz5YGLnSnpa5qzS/longtermism-and-animals-resources-join-our-discord-community Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermism and animals: Resources + join our Discord community!, published by Ren Springlea on January 31, 2023 on The Effective Altruism Forum.Key pointsWe host a Discord server for discussing animals and longtermism. You are more than welcome to join here.There are compelling reasons to help those who will live in the long-term future, and there are compelling reasons to help nonhuman animals. As such, the intersection between longtermism and animal advocacy is starting to receive a bit more attention among the EA community.This post has three main purposes:To invite you to our Discord server on animals and longtermism (link above).To share a list of resources on animals and longtermism that we've collected over time, in case this helps anybody.To share some details about an organisation we almost launched, which would have aimed to find the best interventions for helping animals in the long-term future. We got funding, but the grant was awarded by an FTX Future Fund regrantor, so the funding fell through. In this post, we have linked our grant application/plans in case anybody wants to pick up where we left off.Background to animals and longtermismThere are many reasons why animal advocacy and longtermism can help each other do even more good. These reasons are explored in detail in the resources that we list below. To quickly name a few:The expected number of animals in the far future could be simply enormous. This means that considering the lives of animals in the far future could be a great way to have a large impact. As Browning and Viet conclude, "Work on longtermism has thus far primarily focused on the existence and wellbeing of future humans, without corresponding consideration of animal welfare. [...] Given the sheer expected number of future animals, as well as the likelihood of their suffering, we argue that the well-being of future animals should be given serious consideration when thinking about the long-term future, allowing for the possibility that in some cases their interests may even dominate."Animal welfare may represent most of the moral value in the far future. This means that longtermists may need to consider the perspective of animal advocacy in order to do the most good.Most animals might exist in the long-term future. This means that animal advocates may need to consider the perspective of longtermists in order to do the most good.Society’s attitude towards animals is important for the long-term trajectory of society's moral values.Animals may also interact with new phenomena, like artificial intelligence, space colonisation, artificial sentience, and brain emulations in ways that are likely to have serious moral implications.So far, research on animals in the long-term future has fallen into these broad buckets:Animals and space colonisation (making sure space exploration and colonisation is animal-friendly)Wild animal welfare (e.g. resilience of wild animal populations over time, introducing wild animals to other planets, how artificial intelligence affects wild animals)Animals and artificial intelligence (making sure artificial intelligence and machine learning is designed with animals in mind)Digital animal mindsHealth of the animal advocacy movement (to enable animal advocacy to sustain its efforts over time)Further (meta) thinking on longtermism and animalsResources on animals and longtermismGeneral animals and longtermismHeather Browning and Walter Veit - Longtermism and AnimalsMichael Dello-Iacovo - Longtermism and Animal Farming TrajectoriesZach Freitas-Groff - Longtermism in Animal AdvocacyOscar Horta - Why Animal Advocacy Matters Much More Than You ThinkAnimal advocacy & longtermism - Effective Animal Advocacy Leadership Coordination Forum 2022 (slides)William Bench - Apples and Oranges: On the Conv...]]>
Tue, 31 Jan 2023 16:15:11 +0000 EA - Longtermism and animals: Resources + join our Discord community! by Ren Springlea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermism and animals: Resources + join our Discord community!, published by Ren Springlea on January 31, 2023 on The Effective Altruism Forum.Key pointsWe host a Discord server for discussing animals and longtermism. You are more than welcome to join here.There are compelling reasons to help those who will live in the long-term future, and there are compelling reasons to help nonhuman animals. As such, the intersection between longtermism and animal advocacy is starting to receive a bit more attention among the EA community.This post has three main purposes:To invite you to our Discord server on animals and longtermism (link above).To share a list of resources on animals and longtermism that we've collected over time, in case this helps anybody.To share some details about an organisation we almost launched, which would have aimed to find the best interventions for helping animals in the long-term future. We got funding, but the grant was awarded by an FTX Future Fund regrantor, so the funding fell through. In this post, we have linked our grant application/plans in case anybody wants to pick up where we left off.Background to animals and longtermismThere are many reasons why animal advocacy and longtermism can help each other do even more good. These reasons are explored in detail in the resources that we list below. To quickly name a few:The expected number of animals in the far future could be simply enormous. This means that considering the lives of animals in the far future could be a great way to have a large impact. As Browning and Viet conclude, "Work on longtermism has thus far primarily focused on the existence and wellbeing of future humans, without corresponding consideration of animal welfare. [...] Given the sheer expected number of future animals, as well as the likelihood of their suffering, we argue that the well-being of future animals should be given serious consideration when thinking about the long-term future, allowing for the possibility that in some cases their interests may even dominate."Animal welfare may represent most of the moral value in the far future. This means that longtermists may need to consider the perspective of animal advocacy in order to do the most good.Most animals might exist in the long-term future. This means that animal advocates may need to consider the perspective of longtermists in order to do the most good.Society’s attitude towards animals is important for the long-term trajectory of society's moral values.Animals may also interact with new phenomena, like artificial intelligence, space colonisation, artificial sentience, and brain emulations in ways that are likely to have serious moral implications.So far, research on animals in the long-term future has fallen into these broad buckets:Animals and space colonisation (making sure space exploration and colonisation is animal-friendly)Wild animal welfare (e.g. resilience of wild animal populations over time, introducing wild animals to other planets, how artificial intelligence affects wild animals)Animals and artificial intelligence (making sure artificial intelligence and machine learning is designed with animals in mind)Digital animal mindsHealth of the animal advocacy movement (to enable animal advocacy to sustain its efforts over time)Further (meta) thinking on longtermism and animalsResources on animals and longtermismGeneral animals and longtermismHeather Browning and Walter Veit - Longtermism and AnimalsMichael Dello-Iacovo - Longtermism and Animal Farming TrajectoriesZach Freitas-Groff - Longtermism in Animal AdvocacyOscar Horta - Why Animal Advocacy Matters Much More Than You ThinkAnimal advocacy & longtermism - Effective Animal Advocacy Leadership Coordination Forum 2022 (slides)William Bench - Apples and Oranges: On the Conv...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermism and animals: Resources + join our Discord community!, published by Ren Springlea on January 31, 2023 on The Effective Altruism Forum.Key pointsWe host a Discord server for discussing animals and longtermism. You are more than welcome to join here.There are compelling reasons to help those who will live in the long-term future, and there are compelling reasons to help nonhuman animals. As such, the intersection between longtermism and animal advocacy is starting to receive a bit more attention among the EA community.This post has three main purposes:To invite you to our Discord server on animals and longtermism (link above).To share a list of resources on animals and longtermism that we've collected over time, in case this helps anybody.To share some details about an organisation we almost launched, which would have aimed to find the best interventions for helping animals in the long-term future. We got funding, but the grant was awarded by an FTX Future Fund regrantor, so the funding fell through. In this post, we have linked our grant application/plans in case anybody wants to pick up where we left off.Background to animals and longtermismThere are many reasons why animal advocacy and longtermism can help each other do even more good. These reasons are explored in detail in the resources that we list below. To quickly name a few:The expected number of animals in the far future could be simply enormous. This means that considering the lives of animals in the far future could be a great way to have a large impact. As Browning and Viet conclude, "Work on longtermism has thus far primarily focused on the existence and wellbeing of future humans, without corresponding consideration of animal welfare. [...] Given the sheer expected number of future animals, as well as the likelihood of their suffering, we argue that the well-being of future animals should be given serious consideration when thinking about the long-term future, allowing for the possibility that in some cases their interests may even dominate."Animal welfare may represent most of the moral value in the far future. This means that longtermists may need to consider the perspective of animal advocacy in order to do the most good.Most animals might exist in the long-term future. This means that animal advocates may need to consider the perspective of longtermists in order to do the most good.Society’s attitude towards animals is important for the long-term trajectory of society's moral values.Animals may also interact with new phenomena, like artificial intelligence, space colonisation, artificial sentience, and brain emulations in ways that are likely to have serious moral implications.So far, research on animals in the long-term future has fallen into these broad buckets:Animals and space colonisation (making sure space exploration and colonisation is animal-friendly)Wild animal welfare (e.g. resilience of wild animal populations over time, introducing wild animals to other planets, how artificial intelligence affects wild animals)Animals and artificial intelligence (making sure artificial intelligence and machine learning is designed with animals in mind)Digital animal mindsHealth of the animal advocacy movement (to enable animal advocacy to sustain its efforts over time)Further (meta) thinking on longtermism and animalsResources on animals and longtermismGeneral animals and longtermismHeather Browning and Walter Veit - Longtermism and AnimalsMichael Dello-Iacovo - Longtermism and Animal Farming TrajectoriesZach Freitas-Groff - Longtermism in Animal AdvocacyOscar Horta - Why Animal Advocacy Matters Much More Than You ThinkAnimal advocacy & longtermism - Effective Animal Advocacy Leadership Coordination Forum 2022 (slides)William Bench - Apples and Oranges: On the Conv...]]>
Ren Springlea https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:52 None full 4672
7JZs8PGQWtQXoDqP6_NL_EA_EA EA - Follow-Up Survey: Major Layoffs at Tech Giants [recruiting support] by nicolenohemi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Follow-Up Survey: Major Layoffs at Tech Giants [recruiting support], published by nicolenohemi on January 31, 2023 on The Effective Altruism Forum.Forum Reactions to Tech LayoffsI recently wrote a post about the current layoffs at tech giants, intended to inform relevant parties about it.I received more feedback and messages than expected, such as:Recruiters asking to be connected with talentLaid-off personnel searching for job opportunitiesIndividuals asking about the process of researching and contacting talent.Fill out this Survey if you're looking to (be) hire(d)I decided to follow up with an expression of interest form to help connect the right people with each other.If you're looking to hire or you're searching for new job opportunities, please complete this short survey (~30 seconds), and share it with anyone who might benefit from it.Little Uplifting AnecdoteEven though this awful situation is hitting many people in various very unexpected ways, I want to share a story I found inspirational.One of my calls was with a Google software engineer who expressed fears about the consequences of being laid off and is currently dealing with this uncertainty.This person has been closely following the EA space for ~5 years, has ever since been AI safety-curious, completed the AGI Safety Fundamentals, reads the Alignment Forum, etc. Yet, they have never taken the leap to try and work directly on AI safety.Despite the current circumstances not being of their own choosing, they are taking the initiative to shape the situation to their liking. Without this external stimulus, they may have never made an attempt.If the layoffs or other negative situations have impacted you, this anecdote may help motivate you to reclaim the narrative and make it your own.Helpful LinksMental HealthEA Mental Health NavigatorList of recommended therapistsSSC-recommended mental health professionalsCEA Community Health teamJulia Wise at julia.wise@centreforeffectivealtruism.orgCatherine Low at catherine@centreforeffectivealtruism.orgYou can also fill out the Community Health Contact Form (can be filled out anonymously)Career80,000 hours career advice & Job BoardAI Safety Support Career CoachingAISS Lots of LinksThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
nicolenohemi https://forum.effectivealtruism.org/posts/7JZs8PGQWtQXoDqP6/follow-up-survey-major-layoffs-at-tech-giants-recruiting Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Follow-Up Survey: Major Layoffs at Tech Giants [recruiting support], published by nicolenohemi on January 31, 2023 on The Effective Altruism Forum.Forum Reactions to Tech LayoffsI recently wrote a post about the current layoffs at tech giants, intended to inform relevant parties about it.I received more feedback and messages than expected, such as:Recruiters asking to be connected with talentLaid-off personnel searching for job opportunitiesIndividuals asking about the process of researching and contacting talent.Fill out this Survey if you're looking to (be) hire(d)I decided to follow up with an expression of interest form to help connect the right people with each other.If you're looking to hire or you're searching for new job opportunities, please complete this short survey (~30 seconds), and share it with anyone who might benefit from it.Little Uplifting AnecdoteEven though this awful situation is hitting many people in various very unexpected ways, I want to share a story I found inspirational.One of my calls was with a Google software engineer who expressed fears about the consequences of being laid off and is currently dealing with this uncertainty.This person has been closely following the EA space for ~5 years, has ever since been AI safety-curious, completed the AGI Safety Fundamentals, reads the Alignment Forum, etc. Yet, they have never taken the leap to try and work directly on AI safety.Despite the current circumstances not being of their own choosing, they are taking the initiative to shape the situation to their liking. Without this external stimulus, they may have never made an attempt.If the layoffs or other negative situations have impacted you, this anecdote may help motivate you to reclaim the narrative and make it your own.Helpful LinksMental HealthEA Mental Health NavigatorList of recommended therapistsSSC-recommended mental health professionalsCEA Community Health teamJulia Wise at julia.wise@centreforeffectivealtruism.orgCatherine Low at catherine@centreforeffectivealtruism.orgYou can also fill out the Community Health Contact Form (can be filled out anonymously)Career80,000 hours career advice & Job BoardAI Safety Support Career CoachingAISS Lots of LinksThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 31 Jan 2023 07:12:56 +0000 EA - Follow-Up Survey: Major Layoffs at Tech Giants [recruiting support] by nicolenohemi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Follow-Up Survey: Major Layoffs at Tech Giants [recruiting support], published by nicolenohemi on January 31, 2023 on The Effective Altruism Forum.Forum Reactions to Tech LayoffsI recently wrote a post about the current layoffs at tech giants, intended to inform relevant parties about it.I received more feedback and messages than expected, such as:Recruiters asking to be connected with talentLaid-off personnel searching for job opportunitiesIndividuals asking about the process of researching and contacting talent.Fill out this Survey if you're looking to (be) hire(d)I decided to follow up with an expression of interest form to help connect the right people with each other.If you're looking to hire or you're searching for new job opportunities, please complete this short survey (~30 seconds), and share it with anyone who might benefit from it.Little Uplifting AnecdoteEven though this awful situation is hitting many people in various very unexpected ways, I want to share a story I found inspirational.One of my calls was with a Google software engineer who expressed fears about the consequences of being laid off and is currently dealing with this uncertainty.This person has been closely following the EA space for ~5 years, has ever since been AI safety-curious, completed the AGI Safety Fundamentals, reads the Alignment Forum, etc. Yet, they have never taken the leap to try and work directly on AI safety.Despite the current circumstances not being of their own choosing, they are taking the initiative to shape the situation to their liking. Without this external stimulus, they may have never made an attempt.If the layoffs or other negative situations have impacted you, this anecdote may help motivate you to reclaim the narrative and make it your own.Helpful LinksMental HealthEA Mental Health NavigatorList of recommended therapistsSSC-recommended mental health professionalsCEA Community Health teamJulia Wise at julia.wise@centreforeffectivealtruism.orgCatherine Low at catherine@centreforeffectivealtruism.orgYou can also fill out the Community Health Contact Form (can be filled out anonymously)Career80,000 hours career advice & Job BoardAI Safety Support Career CoachingAISS Lots of LinksThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Follow-Up Survey: Major Layoffs at Tech Giants [recruiting support], published by nicolenohemi on January 31, 2023 on The Effective Altruism Forum.Forum Reactions to Tech LayoffsI recently wrote a post about the current layoffs at tech giants, intended to inform relevant parties about it.I received more feedback and messages than expected, such as:Recruiters asking to be connected with talentLaid-off personnel searching for job opportunitiesIndividuals asking about the process of researching and contacting talent.Fill out this Survey if you're looking to (be) hire(d)I decided to follow up with an expression of interest form to help connect the right people with each other.If you're looking to hire or you're searching for new job opportunities, please complete this short survey (~30 seconds), and share it with anyone who might benefit from it.Little Uplifting AnecdoteEven though this awful situation is hitting many people in various very unexpected ways, I want to share a story I found inspirational.One of my calls was with a Google software engineer who expressed fears about the consequences of being laid off and is currently dealing with this uncertainty.This person has been closely following the EA space for ~5 years, has ever since been AI safety-curious, completed the AGI Safety Fundamentals, reads the Alignment Forum, etc. Yet, they have never taken the leap to try and work directly on AI safety.Despite the current circumstances not being of their own choosing, they are taking the initiative to shape the situation to their liking. Without this external stimulus, they may have never made an attempt.If the layoffs or other negative situations have impacted you, this anecdote may help motivate you to reclaim the narrative and make it your own.Helpful LinksMental HealthEA Mental Health NavigatorList of recommended therapistsSSC-recommended mental health professionalsCEA Community Health teamJulia Wise at julia.wise@centreforeffectivealtruism.orgCatherine Low at catherine@centreforeffectivealtruism.orgYou can also fill out the Community Health Contact Form (can be filled out anonymously)Career80,000 hours career advice & Job BoardAI Safety Support Career CoachingAISS Lots of LinksThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
nicolenohemi https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:28 None full 4666
4TcaBNu7EmEukjGoc_NL_EA_EA EA - Questions about AI that bother me by Eleni A Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Questions about AI that bother me, published by Eleni A on January 31, 2023 on The Effective Altruism Forum.As 2022 comes to an end, I thought it'd be good to maintain a list of "questions that bother me" in thinking about AI safety and alignment. I don't claim I'm the first or only one to have thought about them. I'll keep updating this list.(The title of this post alludes to the book "Things That Bother Me" by Galen Strawson)First posted: 12/6/22Last updated: 1/30/23General CognitionWhat signs do I need to look for to tell whether a model's cognition has started to emerge, e.g., situational awareness?Will a capacity for "doing science" be sufficient condition for general intelligence?How easy was it for humans to get science (e.g., compared to evolving to take over the world).DeceptionWhat kind of interpretability tools do we need to avoid deception?How do we get these interpretability tools and even if we do get them, what if they're like neuroscience for understanding brains (not enough)?How can I tell whether a model has found another goal to optimize for during its training?What is it that makes a model switch to a goal different from the one set by the designer? How do you prevent it from doing so?Agent FoundationsIs the description/modeling of an agent ultimately a mathematical task?From where do human agents derive their goals?Is value fragile?Theory of Machine LearningWhat explains the success of deep neural networks?Why was connectionism unlikely to succeed?Epistemology of Alignment (I've written about this here)How can we accelerate research?Has philosophy ever really helped scientific research e.g., with concept clarification?What are some concrete takeaways from the history of science and technology that could be used as advice for alignment researchers and field-builders?The emergence of the AI Safety paradigmPhilosophy of Existential RiskWhat is the best way to explain the difference between forecasting extinction scenarios and narratives from chiliasm or escatology?What is the best way to think about serious risks in the future without reinforcing a sense of doom?Teaching and CommunicationYounger people (e.g., my undergraduate students) seem more willing to entertain scenarios of catastrophes and extinction compared to older people (e.g., academics). I find that strange and I don't have a good explanation as to why that is the case.The idea of a technological singularity was not difficult to explain and discuss with my students. I think that's surprising given how powerful the weirdness heuristic is.The idea of "agency" or "being an agent" was easy to conflate with "consciousness" in philosophical discussions. It's not clear to me why that was the case since I gave a very specific definition of agency.Most of my students thought that AI models will never be conscious; it was difficult for them to articulate specific arguments about this, but their intuition seemed to be that there's something uniquely human about consciousness/sentience.The "AIs will take our jobs in the future" seems to be a very common concern both among students and academics.80% of a ~25 people classroom thought that philosophy is the right thing to major in if you're interested in how minds work. The question I asked them was: "should you major in philosophy or cognitive science if you want to study how minds work?"Governance/StrategyShould we try to slow down AI progress? What does this mean in concrete steps?How should we go about capabilities externalities?How should concrete AI risk stories inform/affect AI governance and short-term/long-term future planning?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Eleni A https://forum.effectivealtruism.org/posts/4TcaBNu7EmEukjGoc/questions-about-ai-that-bother-me Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Questions about AI that bother me, published by Eleni A on January 31, 2023 on The Effective Altruism Forum.As 2022 comes to an end, I thought it'd be good to maintain a list of "questions that bother me" in thinking about AI safety and alignment. I don't claim I'm the first or only one to have thought about them. I'll keep updating this list.(The title of this post alludes to the book "Things That Bother Me" by Galen Strawson)First posted: 12/6/22Last updated: 1/30/23General CognitionWhat signs do I need to look for to tell whether a model's cognition has started to emerge, e.g., situational awareness?Will a capacity for "doing science" be sufficient condition for general intelligence?How easy was it for humans to get science (e.g., compared to evolving to take over the world).DeceptionWhat kind of interpretability tools do we need to avoid deception?How do we get these interpretability tools and even if we do get them, what if they're like neuroscience for understanding brains (not enough)?How can I tell whether a model has found another goal to optimize for during its training?What is it that makes a model switch to a goal different from the one set by the designer? How do you prevent it from doing so?Agent FoundationsIs the description/modeling of an agent ultimately a mathematical task?From where do human agents derive their goals?Is value fragile?Theory of Machine LearningWhat explains the success of deep neural networks?Why was connectionism unlikely to succeed?Epistemology of Alignment (I've written about this here)How can we accelerate research?Has philosophy ever really helped scientific research e.g., with concept clarification?What are some concrete takeaways from the history of science and technology that could be used as advice for alignment researchers and field-builders?The emergence of the AI Safety paradigmPhilosophy of Existential RiskWhat is the best way to explain the difference between forecasting extinction scenarios and narratives from chiliasm or escatology?What is the best way to think about serious risks in the future without reinforcing a sense of doom?Teaching and CommunicationYounger people (e.g., my undergraduate students) seem more willing to entertain scenarios of catastrophes and extinction compared to older people (e.g., academics). I find that strange and I don't have a good explanation as to why that is the case.The idea of a technological singularity was not difficult to explain and discuss with my students. I think that's surprising given how powerful the weirdness heuristic is.The idea of "agency" or "being an agent" was easy to conflate with "consciousness" in philosophical discussions. It's not clear to me why that was the case since I gave a very specific definition of agency.Most of my students thought that AI models will never be conscious; it was difficult for them to articulate specific arguments about this, but their intuition seemed to be that there's something uniquely human about consciousness/sentience.The "AIs will take our jobs in the future" seems to be a very common concern both among students and academics.80% of a ~25 people classroom thought that philosophy is the right thing to major in if you're interested in how minds work. The question I asked them was: "should you major in philosophy or cognitive science if you want to study how minds work?"Governance/StrategyShould we try to slow down AI progress? What does this mean in concrete steps?How should we go about capabilities externalities?How should concrete AI risk stories inform/affect AI governance and short-term/long-term future planning?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 31 Jan 2023 06:50:43 +0000 EA - Questions about AI that bother me by Eleni A Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Questions about AI that bother me, published by Eleni A on January 31, 2023 on The Effective Altruism Forum.As 2022 comes to an end, I thought it'd be good to maintain a list of "questions that bother me" in thinking about AI safety and alignment. I don't claim I'm the first or only one to have thought about them. I'll keep updating this list.(The title of this post alludes to the book "Things That Bother Me" by Galen Strawson)First posted: 12/6/22Last updated: 1/30/23General CognitionWhat signs do I need to look for to tell whether a model's cognition has started to emerge, e.g., situational awareness?Will a capacity for "doing science" be sufficient condition for general intelligence?How easy was it for humans to get science (e.g., compared to evolving to take over the world).DeceptionWhat kind of interpretability tools do we need to avoid deception?How do we get these interpretability tools and even if we do get them, what if they're like neuroscience for understanding brains (not enough)?How can I tell whether a model has found another goal to optimize for during its training?What is it that makes a model switch to a goal different from the one set by the designer? How do you prevent it from doing so?Agent FoundationsIs the description/modeling of an agent ultimately a mathematical task?From where do human agents derive their goals?Is value fragile?Theory of Machine LearningWhat explains the success of deep neural networks?Why was connectionism unlikely to succeed?Epistemology of Alignment (I've written about this here)How can we accelerate research?Has philosophy ever really helped scientific research e.g., with concept clarification?What are some concrete takeaways from the history of science and technology that could be used as advice for alignment researchers and field-builders?The emergence of the AI Safety paradigmPhilosophy of Existential RiskWhat is the best way to explain the difference between forecasting extinction scenarios and narratives from chiliasm or escatology?What is the best way to think about serious risks in the future without reinforcing a sense of doom?Teaching and CommunicationYounger people (e.g., my undergraduate students) seem more willing to entertain scenarios of catastrophes and extinction compared to older people (e.g., academics). I find that strange and I don't have a good explanation as to why that is the case.The idea of a technological singularity was not difficult to explain and discuss with my students. I think that's surprising given how powerful the weirdness heuristic is.The idea of "agency" or "being an agent" was easy to conflate with "consciousness" in philosophical discussions. It's not clear to me why that was the case since I gave a very specific definition of agency.Most of my students thought that AI models will never be conscious; it was difficult for them to articulate specific arguments about this, but their intuition seemed to be that there's something uniquely human about consciousness/sentience.The "AIs will take our jobs in the future" seems to be a very common concern both among students and academics.80% of a ~25 people classroom thought that philosophy is the right thing to major in if you're interested in how minds work. The question I asked them was: "should you major in philosophy or cognitive science if you want to study how minds work?"Governance/StrategyShould we try to slow down AI progress? What does this mean in concrete steps?How should we go about capabilities externalities?How should concrete AI risk stories inform/affect AI governance and short-term/long-term future planning?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Questions about AI that bother me, published by Eleni A on January 31, 2023 on The Effective Altruism Forum.As 2022 comes to an end, I thought it'd be good to maintain a list of "questions that bother me" in thinking about AI safety and alignment. I don't claim I'm the first or only one to have thought about them. I'll keep updating this list.(The title of this post alludes to the book "Things That Bother Me" by Galen Strawson)First posted: 12/6/22Last updated: 1/30/23General CognitionWhat signs do I need to look for to tell whether a model's cognition has started to emerge, e.g., situational awareness?Will a capacity for "doing science" be sufficient condition for general intelligence?How easy was it for humans to get science (e.g., compared to evolving to take over the world).DeceptionWhat kind of interpretability tools do we need to avoid deception?How do we get these interpretability tools and even if we do get them, what if they're like neuroscience for understanding brains (not enough)?How can I tell whether a model has found another goal to optimize for during its training?What is it that makes a model switch to a goal different from the one set by the designer? How do you prevent it from doing so?Agent FoundationsIs the description/modeling of an agent ultimately a mathematical task?From where do human agents derive their goals?Is value fragile?Theory of Machine LearningWhat explains the success of deep neural networks?Why was connectionism unlikely to succeed?Epistemology of Alignment (I've written about this here)How can we accelerate research?Has philosophy ever really helped scientific research e.g., with concept clarification?What are some concrete takeaways from the history of science and technology that could be used as advice for alignment researchers and field-builders?The emergence of the AI Safety paradigmPhilosophy of Existential RiskWhat is the best way to explain the difference between forecasting extinction scenarios and narratives from chiliasm or escatology?What is the best way to think about serious risks in the future without reinforcing a sense of doom?Teaching and CommunicationYounger people (e.g., my undergraduate students) seem more willing to entertain scenarios of catastrophes and extinction compared to older people (e.g., academics). I find that strange and I don't have a good explanation as to why that is the case.The idea of a technological singularity was not difficult to explain and discuss with my students. I think that's surprising given how powerful the weirdness heuristic is.The idea of "agency" or "being an agent" was easy to conflate with "consciousness" in philosophical discussions. It's not clear to me why that was the case since I gave a very specific definition of agency.Most of my students thought that AI models will never be conscious; it was difficult for them to articulate specific arguments about this, but their intuition seemed to be that there's something uniquely human about consciousness/sentience.The "AIs will take our jobs in the future" seems to be a very common concern both among students and academics.80% of a ~25 people classroom thought that philosophy is the right thing to major in if you're interested in how minds work. The question I asked them was: "should you major in philosophy or cognitive science if you want to study how minds work?"Governance/StrategyShould we try to slow down AI progress? What does this mean in concrete steps?How should we go about capabilities externalities?How should concrete AI risk stories inform/affect AI governance and short-term/long-term future planning?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Eleni A https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:37 None full 4665
4ssvjxD8x9iBLYnMB_NL_EA_EA EA - FIRE and EA: Seeking feedback on "Fi-lanthropy" Calculator by Rebecca Herbst Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FIRE & EA: Seeking feedback on "Fi-lanthropy" Calculator, published by Rebecca Herbst on January 30, 2023 on The Effective Altruism Forum.REQUESTI am soliciting feedback on a tool I have made entitled “Fi-lanthropy Calculator”. The target audience includes existing or aspiring members of the FIRE movement (Financial Independence Retire Early). They may use this calculator to explore the impact of taking a giving pledge both to their FI timeline and their retirement portfolio.This tool is meant to be simple to use and easy to digest. It is not a comprehensive financial plan, but more so a way to ‘whet the appetite’ for giving in general. I view this tool as a companion to, say, The Life You Can Save’s Impact Calculator. We know this is not exact, but it’s a good place to see tangible examples of how much good you can do.I’d like to know:If you find this tool helpful and/or informativeIf you think it is missing anything crucial (keeping in mind I want to keep it relatively simple)If there are any major errors I’ve missedRATIONALEThere is a growing movement within the FIRE community of people who are focused on giving back and looking for ways in which they can successfully find financial independence but also create space to help others. This can be seen through the growing number of members in the Socially Conscious FIRE facebook group (now at ~9,000 members, up from <1,000 when I joined in May 2020). We also see true leaders within the Financial Independence community continuing to lean into this space include: Pete Adeney of Mr. Money Mustache advocates for Effective Altruism directly on his blog, Tanja Hester of Our Next Life has written Wallet Activism, and Vicki Robin has her podcast What Could Possibly Go Right. (The last two are not EA focused, but have had a very large influence on getting people to think more about charitable giving in general.)I believe there is a large audience here for the EA community to reach (Mr. Money Mustache’s blog has had tens of millions of visitors), with advocates already in place. My hope is to provide more tools and resources for the FI community to consider some of the EA practices, largely Earn to Give and Invest to Give.While some may feel that the communities clash (FIRE focuses on bettering oneself while EA focuses on bettering others), I believe that there are more synergies than we might think:FIRE folks are optimizers: They ensure that each dollar is working for them, and that little money is wasted. They believe that each dollar they spend or save goes to a useful cause. It is likely FIRE followers will more readily understand the importance of effective charitable donations, knowing their dollar does the most good possible.FIRE folks spend time to understand what makes them happy: FIRE followers, if diligent, have spent a significant amount of time evaluating the marginal benefit of spending more money on themselves once their basic needs are met. And thus, if they have money to satisfy their basic needs and more, there is an opportunity to share this money with others.FIRE folks who are close to retiring early or have already retired are looking for meaning: While it’s true that many people are focused on the journey to getting to early retirement, at some point the question will come up “What is the point of all this?”. And many find that it’s less about quitting their full time work and more about finding meaning in life. And quitting your job gives you a lot more space to think through that. Vicki Robin uses the Hero’s Journey to explain this concept exceptionally well in this video.Many seek FI without the need to RE: Many folks don’t necessarily want to retire early, they just want the ability to become Financially Independent. Being FI allows you to make life changes more easily as you are not ob...]]>
Rebecca Herbst https://forum.effectivealtruism.org/posts/4ssvjxD8x9iBLYnMB/fire-and-ea-seeking-feedback-on-fi-lanthropy-calculator Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FIRE & EA: Seeking feedback on "Fi-lanthropy" Calculator, published by Rebecca Herbst on January 30, 2023 on The Effective Altruism Forum.REQUESTI am soliciting feedback on a tool I have made entitled “Fi-lanthropy Calculator”. The target audience includes existing or aspiring members of the FIRE movement (Financial Independence Retire Early). They may use this calculator to explore the impact of taking a giving pledge both to their FI timeline and their retirement portfolio.This tool is meant to be simple to use and easy to digest. It is not a comprehensive financial plan, but more so a way to ‘whet the appetite’ for giving in general. I view this tool as a companion to, say, The Life You Can Save’s Impact Calculator. We know this is not exact, but it’s a good place to see tangible examples of how much good you can do.I’d like to know:If you find this tool helpful and/or informativeIf you think it is missing anything crucial (keeping in mind I want to keep it relatively simple)If there are any major errors I’ve missedRATIONALEThere is a growing movement within the FIRE community of people who are focused on giving back and looking for ways in which they can successfully find financial independence but also create space to help others. This can be seen through the growing number of members in the Socially Conscious FIRE facebook group (now at ~9,000 members, up from <1,000 when I joined in May 2020). We also see true leaders within the Financial Independence community continuing to lean into this space include: Pete Adeney of Mr. Money Mustache advocates for Effective Altruism directly on his blog, Tanja Hester of Our Next Life has written Wallet Activism, and Vicki Robin has her podcast What Could Possibly Go Right. (The last two are not EA focused, but have had a very large influence on getting people to think more about charitable giving in general.)I believe there is a large audience here for the EA community to reach (Mr. Money Mustache’s blog has had tens of millions of visitors), with advocates already in place. My hope is to provide more tools and resources for the FI community to consider some of the EA practices, largely Earn to Give and Invest to Give.While some may feel that the communities clash (FIRE focuses on bettering oneself while EA focuses on bettering others), I believe that there are more synergies than we might think:FIRE folks are optimizers: They ensure that each dollar is working for them, and that little money is wasted. They believe that each dollar they spend or save goes to a useful cause. It is likely FIRE followers will more readily understand the importance of effective charitable donations, knowing their dollar does the most good possible.FIRE folks spend time to understand what makes them happy: FIRE followers, if diligent, have spent a significant amount of time evaluating the marginal benefit of spending more money on themselves once their basic needs are met. And thus, if they have money to satisfy their basic needs and more, there is an opportunity to share this money with others.FIRE folks who are close to retiring early or have already retired are looking for meaning: While it’s true that many people are focused on the journey to getting to early retirement, at some point the question will come up “What is the point of all this?”. And many find that it’s less about quitting their full time work and more about finding meaning in life. And quitting your job gives you a lot more space to think through that. Vicki Robin uses the Hero’s Journey to explain this concept exceptionally well in this video.Many seek FI without the need to RE: Many folks don’t necessarily want to retire early, they just want the ability to become Financially Independent. Being FI allows you to make life changes more easily as you are not ob...]]>
Tue, 31 Jan 2023 06:00:28 +0000 EA - FIRE and EA: Seeking feedback on "Fi-lanthropy" Calculator by Rebecca Herbst Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FIRE & EA: Seeking feedback on "Fi-lanthropy" Calculator, published by Rebecca Herbst on January 30, 2023 on The Effective Altruism Forum.REQUESTI am soliciting feedback on a tool I have made entitled “Fi-lanthropy Calculator”. The target audience includes existing or aspiring members of the FIRE movement (Financial Independence Retire Early). They may use this calculator to explore the impact of taking a giving pledge both to their FI timeline and their retirement portfolio.This tool is meant to be simple to use and easy to digest. It is not a comprehensive financial plan, but more so a way to ‘whet the appetite’ for giving in general. I view this tool as a companion to, say, The Life You Can Save’s Impact Calculator. We know this is not exact, but it’s a good place to see tangible examples of how much good you can do.I’d like to know:If you find this tool helpful and/or informativeIf you think it is missing anything crucial (keeping in mind I want to keep it relatively simple)If there are any major errors I’ve missedRATIONALEThere is a growing movement within the FIRE community of people who are focused on giving back and looking for ways in which they can successfully find financial independence but also create space to help others. This can be seen through the growing number of members in the Socially Conscious FIRE facebook group (now at ~9,000 members, up from <1,000 when I joined in May 2020). We also see true leaders within the Financial Independence community continuing to lean into this space include: Pete Adeney of Mr. Money Mustache advocates for Effective Altruism directly on his blog, Tanja Hester of Our Next Life has written Wallet Activism, and Vicki Robin has her podcast What Could Possibly Go Right. (The last two are not EA focused, but have had a very large influence on getting people to think more about charitable giving in general.)I believe there is a large audience here for the EA community to reach (Mr. Money Mustache’s blog has had tens of millions of visitors), with advocates already in place. My hope is to provide more tools and resources for the FI community to consider some of the EA practices, largely Earn to Give and Invest to Give.While some may feel that the communities clash (FIRE focuses on bettering oneself while EA focuses on bettering others), I believe that there are more synergies than we might think:FIRE folks are optimizers: They ensure that each dollar is working for them, and that little money is wasted. They believe that each dollar they spend or save goes to a useful cause. It is likely FIRE followers will more readily understand the importance of effective charitable donations, knowing their dollar does the most good possible.FIRE folks spend time to understand what makes them happy: FIRE followers, if diligent, have spent a significant amount of time evaluating the marginal benefit of spending more money on themselves once their basic needs are met. And thus, if they have money to satisfy their basic needs and more, there is an opportunity to share this money with others.FIRE folks who are close to retiring early or have already retired are looking for meaning: While it’s true that many people are focused on the journey to getting to early retirement, at some point the question will come up “What is the point of all this?”. And many find that it’s less about quitting their full time work and more about finding meaning in life. And quitting your job gives you a lot more space to think through that. Vicki Robin uses the Hero’s Journey to explain this concept exceptionally well in this video.Many seek FI without the need to RE: Many folks don’t necessarily want to retire early, they just want the ability to become Financially Independent. Being FI allows you to make life changes more easily as you are not ob...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FIRE & EA: Seeking feedback on "Fi-lanthropy" Calculator, published by Rebecca Herbst on January 30, 2023 on The Effective Altruism Forum.REQUESTI am soliciting feedback on a tool I have made entitled “Fi-lanthropy Calculator”. The target audience includes existing or aspiring members of the FIRE movement (Financial Independence Retire Early). They may use this calculator to explore the impact of taking a giving pledge both to their FI timeline and their retirement portfolio.This tool is meant to be simple to use and easy to digest. It is not a comprehensive financial plan, but more so a way to ‘whet the appetite’ for giving in general. I view this tool as a companion to, say, The Life You Can Save’s Impact Calculator. We know this is not exact, but it’s a good place to see tangible examples of how much good you can do.I’d like to know:If you find this tool helpful and/or informativeIf you think it is missing anything crucial (keeping in mind I want to keep it relatively simple)If there are any major errors I’ve missedRATIONALEThere is a growing movement within the FIRE community of people who are focused on giving back and looking for ways in which they can successfully find financial independence but also create space to help others. This can be seen through the growing number of members in the Socially Conscious FIRE facebook group (now at ~9,000 members, up from <1,000 when I joined in May 2020). We also see true leaders within the Financial Independence community continuing to lean into this space include: Pete Adeney of Mr. Money Mustache advocates for Effective Altruism directly on his blog, Tanja Hester of Our Next Life has written Wallet Activism, and Vicki Robin has her podcast What Could Possibly Go Right. (The last two are not EA focused, but have had a very large influence on getting people to think more about charitable giving in general.)I believe there is a large audience here for the EA community to reach (Mr. Money Mustache’s blog has had tens of millions of visitors), with advocates already in place. My hope is to provide more tools and resources for the FI community to consider some of the EA practices, largely Earn to Give and Invest to Give.While some may feel that the communities clash (FIRE focuses on bettering oneself while EA focuses on bettering others), I believe that there are more synergies than we might think:FIRE folks are optimizers: They ensure that each dollar is working for them, and that little money is wasted. They believe that each dollar they spend or save goes to a useful cause. It is likely FIRE followers will more readily understand the importance of effective charitable donations, knowing their dollar does the most good possible.FIRE folks spend time to understand what makes them happy: FIRE followers, if diligent, have spent a significant amount of time evaluating the marginal benefit of spending more money on themselves once their basic needs are met. And thus, if they have money to satisfy their basic needs and more, there is an opportunity to share this money with others.FIRE folks who are close to retiring early or have already retired are looking for meaning: While it’s true that many people are focused on the journey to getting to early retirement, at some point the question will come up “What is the point of all this?”. And many find that it’s less about quitting their full time work and more about finding meaning in life. And quitting your job gives you a lot more space to think through that. Vicki Robin uses the Hero’s Journey to explain this concept exceptionally well in this video.Many seek FI without the need to RE: Many folks don’t necessarily want to retire early, they just want the ability to become Financially Independent. Being FI allows you to make life changes more easily as you are not ob...]]>
Rebecca Herbst https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:57 None full 4667
FHJMKSwrwdTogYLGF_NL_EA_EA EA - We're no longer "pausing most new longtermist funding commitments" by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We're no longer "pausing most new longtermist funding commitments", published by Holden Karnofsky on January 30, 2023 on The Effective Altruism Forum.In November, I wrote about Open Philanthropy’s soft pause of new longtermist funding commitments:We will have to raise our bar for longtermist grantmaking: with more funding opportunities that we’re choosing between, we’ll have to fund a lower percentage of them. This means grants that we would’ve made before might no longer be made, and/or we might want to provide smaller amounts of money to projects we previously would have supported more generously ..Open Philanthropy also need[s] to raise its bar in light of general market movements (particularly the fall in META stock) and other factors ... the longtermist community has been growing; our rate of spending has been going up; and we expect both of these trends to continue. This further contributes to the need to raise our bar ...It’s a priority for us to think through how much to raise the bar for longtermist grantmaking, and therefore what kinds of giving opportunities to fund. We hope to gain some clarity on this in the next month or so, but right now we’re dealing with major new information and don’t have a lot to say about what it means. It could mean reducing support for a lot of projects, or for relatively few ...Because of this, we are pausing most new longtermist funding commitments (that is, commitments within Potential Risks from Advanced Artificial Intelligence, Biosecurity & Pandemic Preparedness, and Effective Altruism Community Growth) until we gain more clarity, which we hope will be within a month or so ...It’s not an absolute pause: we will continue to do some longtermist grantmaking, mostly when it is time-sensitive and seems highly likely to end up above our bar (this is especially likely for relatively small grants).Since then, we’ve done some work to assess where our new funding bar should be, and we have created enough internal guidance that the pause no longer applies. (The pause stopped applying about a month ago, but it took some additional time to write publicly about it.)What did we do to come up with new guidance on where the bar is?What we did:Ranking past grants: We created1 a rough ranking of nearly all the grants we’d made over the last 18 months; we also included a number of grants now-defunct FTX-associated funders had made.The basic idea of the ranking is essentially: “For any two grants, the grant we would make if we could only make one should rank higher.”2 It is based on a combination of rough quantitative impact estimates, grantmaker intuitions, etc.With this ranking, we are now able to take any given total Open Phil longtermist budget for the time period in question, and identify which grants would and would not have made the cut under that budget.In some cases, we gave separate rankings to separate “tranches” of a grant (e.g., the first $1 million of a grant might be ranked much higher than the next $1 million, because of diminishing returns to adding more funding to a given project).Annual spending guidelines: We estimated how much annual spending would exhaust the capital we have available for longtermist grantmaking over the next 20-50 years (the difference between 20 years and 50 years is not huge3 in per-year spending terms), to get some anchors on what our annual budget for longtermist grantmaking should be.We planned conservatively here, assuming that longtermist work would end up with 30-50% of all funding available to Open Philanthropy.4 OP is also working on a longer-term project to revisit how we should allocate our resources between longtermist and global health and wellbeing funding; it’s possible that longtermist work will end up with more than 50%, which would leave more room to grow.Setting the b...]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/FHJMKSwrwdTogYLGF/we-re-no-longer-pausing-most-new-longtermist-funding Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We're no longer "pausing most new longtermist funding commitments", published by Holden Karnofsky on January 30, 2023 on The Effective Altruism Forum.In November, I wrote about Open Philanthropy’s soft pause of new longtermist funding commitments:We will have to raise our bar for longtermist grantmaking: with more funding opportunities that we’re choosing between, we’ll have to fund a lower percentage of them. This means grants that we would’ve made before might no longer be made, and/or we might want to provide smaller amounts of money to projects we previously would have supported more generously ..Open Philanthropy also need[s] to raise its bar in light of general market movements (particularly the fall in META stock) and other factors ... the longtermist community has been growing; our rate of spending has been going up; and we expect both of these trends to continue. This further contributes to the need to raise our bar ...It’s a priority for us to think through how much to raise the bar for longtermist grantmaking, and therefore what kinds of giving opportunities to fund. We hope to gain some clarity on this in the next month or so, but right now we’re dealing with major new information and don’t have a lot to say about what it means. It could mean reducing support for a lot of projects, or for relatively few ...Because of this, we are pausing most new longtermist funding commitments (that is, commitments within Potential Risks from Advanced Artificial Intelligence, Biosecurity & Pandemic Preparedness, and Effective Altruism Community Growth) until we gain more clarity, which we hope will be within a month or so ...It’s not an absolute pause: we will continue to do some longtermist grantmaking, mostly when it is time-sensitive and seems highly likely to end up above our bar (this is especially likely for relatively small grants).Since then, we’ve done some work to assess where our new funding bar should be, and we have created enough internal guidance that the pause no longer applies. (The pause stopped applying about a month ago, but it took some additional time to write publicly about it.)What did we do to come up with new guidance on where the bar is?What we did:Ranking past grants: We created1 a rough ranking of nearly all the grants we’d made over the last 18 months; we also included a number of grants now-defunct FTX-associated funders had made.The basic idea of the ranking is essentially: “For any two grants, the grant we would make if we could only make one should rank higher.”2 It is based on a combination of rough quantitative impact estimates, grantmaker intuitions, etc.With this ranking, we are now able to take any given total Open Phil longtermist budget for the time period in question, and identify which grants would and would not have made the cut under that budget.In some cases, we gave separate rankings to separate “tranches” of a grant (e.g., the first $1 million of a grant might be ranked much higher than the next $1 million, because of diminishing returns to adding more funding to a given project).Annual spending guidelines: We estimated how much annual spending would exhaust the capital we have available for longtermist grantmaking over the next 20-50 years (the difference between 20 years and 50 years is not huge3 in per-year spending terms), to get some anchors on what our annual budget for longtermist grantmaking should be.We planned conservatively here, assuming that longtermist work would end up with 30-50% of all funding available to Open Philanthropy.4 OP is also working on a longer-term project to revisit how we should allocate our resources between longtermist and global health and wellbeing funding; it’s possible that longtermist work will end up with more than 50%, which would leave more room to grow.Setting the b...]]>
Mon, 30 Jan 2023 20:04:02 +0000 EA - We're no longer "pausing most new longtermist funding commitments" by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We're no longer "pausing most new longtermist funding commitments", published by Holden Karnofsky on January 30, 2023 on The Effective Altruism Forum.In November, I wrote about Open Philanthropy’s soft pause of new longtermist funding commitments:We will have to raise our bar for longtermist grantmaking: with more funding opportunities that we’re choosing between, we’ll have to fund a lower percentage of them. This means grants that we would’ve made before might no longer be made, and/or we might want to provide smaller amounts of money to projects we previously would have supported more generously ..Open Philanthropy also need[s] to raise its bar in light of general market movements (particularly the fall in META stock) and other factors ... the longtermist community has been growing; our rate of spending has been going up; and we expect both of these trends to continue. This further contributes to the need to raise our bar ...It’s a priority for us to think through how much to raise the bar for longtermist grantmaking, and therefore what kinds of giving opportunities to fund. We hope to gain some clarity on this in the next month or so, but right now we’re dealing with major new information and don’t have a lot to say about what it means. It could mean reducing support for a lot of projects, or for relatively few ...Because of this, we are pausing most new longtermist funding commitments (that is, commitments within Potential Risks from Advanced Artificial Intelligence, Biosecurity & Pandemic Preparedness, and Effective Altruism Community Growth) until we gain more clarity, which we hope will be within a month or so ...It’s not an absolute pause: we will continue to do some longtermist grantmaking, mostly when it is time-sensitive and seems highly likely to end up above our bar (this is especially likely for relatively small grants).Since then, we’ve done some work to assess where our new funding bar should be, and we have created enough internal guidance that the pause no longer applies. (The pause stopped applying about a month ago, but it took some additional time to write publicly about it.)What did we do to come up with new guidance on where the bar is?What we did:Ranking past grants: We created1 a rough ranking of nearly all the grants we’d made over the last 18 months; we also included a number of grants now-defunct FTX-associated funders had made.The basic idea of the ranking is essentially: “For any two grants, the grant we would make if we could only make one should rank higher.”2 It is based on a combination of rough quantitative impact estimates, grantmaker intuitions, etc.With this ranking, we are now able to take any given total Open Phil longtermist budget for the time period in question, and identify which grants would and would not have made the cut under that budget.In some cases, we gave separate rankings to separate “tranches” of a grant (e.g., the first $1 million of a grant might be ranked much higher than the next $1 million, because of diminishing returns to adding more funding to a given project).Annual spending guidelines: We estimated how much annual spending would exhaust the capital we have available for longtermist grantmaking over the next 20-50 years (the difference between 20 years and 50 years is not huge3 in per-year spending terms), to get some anchors on what our annual budget for longtermist grantmaking should be.We planned conservatively here, assuming that longtermist work would end up with 30-50% of all funding available to Open Philanthropy.4 OP is also working on a longer-term project to revisit how we should allocate our resources between longtermist and global health and wellbeing funding; it’s possible that longtermist work will end up with more than 50%, which would leave more room to grow.Setting the b...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We're no longer "pausing most new longtermist funding commitments", published by Holden Karnofsky on January 30, 2023 on The Effective Altruism Forum.In November, I wrote about Open Philanthropy’s soft pause of new longtermist funding commitments:We will have to raise our bar for longtermist grantmaking: with more funding opportunities that we’re choosing between, we’ll have to fund a lower percentage of them. This means grants that we would’ve made before might no longer be made, and/or we might want to provide smaller amounts of money to projects we previously would have supported more generously ..Open Philanthropy also need[s] to raise its bar in light of general market movements (particularly the fall in META stock) and other factors ... the longtermist community has been growing; our rate of spending has been going up; and we expect both of these trends to continue. This further contributes to the need to raise our bar ...It’s a priority for us to think through how much to raise the bar for longtermist grantmaking, and therefore what kinds of giving opportunities to fund. We hope to gain some clarity on this in the next month or so, but right now we’re dealing with major new information and don’t have a lot to say about what it means. It could mean reducing support for a lot of projects, or for relatively few ...Because of this, we are pausing most new longtermist funding commitments (that is, commitments within Potential Risks from Advanced Artificial Intelligence, Biosecurity & Pandemic Preparedness, and Effective Altruism Community Growth) until we gain more clarity, which we hope will be within a month or so ...It’s not an absolute pause: we will continue to do some longtermist grantmaking, mostly when it is time-sensitive and seems highly likely to end up above our bar (this is especially likely for relatively small grants).Since then, we’ve done some work to assess where our new funding bar should be, and we have created enough internal guidance that the pause no longer applies. (The pause stopped applying about a month ago, but it took some additional time to write publicly about it.)What did we do to come up with new guidance on where the bar is?What we did:Ranking past grants: We created1 a rough ranking of nearly all the grants we’d made over the last 18 months; we also included a number of grants now-defunct FTX-associated funders had made.The basic idea of the ranking is essentially: “For any two grants, the grant we would make if we could only make one should rank higher.”2 It is based on a combination of rough quantitative impact estimates, grantmaker intuitions, etc.With this ranking, we are now able to take any given total Open Phil longtermist budget for the time period in question, and identify which grants would and would not have made the cut under that budget.In some cases, we gave separate rankings to separate “tranches” of a grant (e.g., the first $1 million of a grant might be ranked much higher than the next $1 million, because of diminishing returns to adding more funding to a given project).Annual spending guidelines: We estimated how much annual spending would exhaust the capital we have available for longtermist grantmaking over the next 20-50 years (the difference between 20 years and 50 years is not huge3 in per-year spending terms), to get some anchors on what our annual budget for longtermist grantmaking should be.We planned conservatively here, assuming that longtermist work would end up with 30-50% of all funding available to Open Philanthropy.4 OP is also working on a longer-term project to revisit how we should allocate our resources between longtermist and global health and wellbeing funding; it’s possible that longtermist work will end up with more than 50%, which would leave more room to grow.Setting the b...]]>
Holden Karnofsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:26 None full 4654
KScqjN2ouSjTjWopp_NL_EA_EA EA - An in-progress experiment to test how Laplace’s rule of succession performs in practice. by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An in-progress experiment to test how Laplace’s rule of succession performs in practice., published by NunoSempere on January 30, 2023 on The Effective Altruism Forum.Note: Of reduced interest to generalist audiences.SummaryI compiled a dataset of 206 mathematical conjectures together with the years in which they were posited. Then in a few years, I intend to check whether the probabilities implied by Laplace’s rule—which only depends on the number of years passed since a conjecture was created—are about right.In a few years, I think this will shed some light on whether Laplace’s rule of succession is useful in practice. For people wanting answers more quickly, I also outline some further work which could be done to obtain results now.The dataset I’m using can be seen here (a).Probability that a conjecture will be resolved by a given year according to Laplace’s law.I estimate the probability that a randomly chosen conjecture will be solved as follows:That is, the probability that the conjecture will first be solved in the year n is the probability given by Laplace conditional on it not having been solved any year before.For reference, a “pseudo-count” corresponds to either changing the numerator to an integer higher than one, or to making n higher. This can be used to capture some of the structure that a problem manifests. E.g., if we don’t think that the prior probability of a theorem being solved in the first year is around 50%, this can be addressed by adding pseudo-counts.Code to do these operations in the programming language R can be found here. A dataset that includes these probabilities can be seen here.Expected distribution of the number of resolved conjectures according to Laplace’s rule of successionUsing the above probabilities, we can, through sampling, estimate the number of conjectures in our database that will be solved in the next 3, 5, or 10 years. The code to do this is in the same R file linked a paragraph ago.For three yearsIf we calculate the 90% and the 98% confidence intervals, these are respectively (6 to 16) and (4 to 18) problems solved in the next three years.For five yearsIf we calculate the 90% and the 98% confidence intervals, these are respectively (11 to 24) and (9 to 27) problems solved in the next five years.For ten yearsIf we calculate the 90% and the 98% confidence intervals, these are respectively (23 to 40) and (20 to 43) problems solved in the next five years.Ideas for further workDo this experiment for other topics besides mathematical theorems and for other methods besides Laplace’s lawAlthough I expect that this experiment restricted to mathematical experiments will already be decently informative, it would also be interesting to look at the performance of Laplace’s law for a range of topics.It might also be worth it to look at other approaches. In particular, I’d be interested in seeing the same experiment but for “semi-informative priors”—there is no particular reasons why that approach only has to apply to super-speculative areas like AI. So an experiment could look at experts trying to come up with semi-informative priors for events that are testable in the next few years, and this might shed some light into the general method.Checking whether the predictions from Laplace’s law come trueIn three, five, and ten years I’ll check the number of conjectures which have been resolved. If that falls outside the 99% confidence interval, I will become more skeptical of using Laplace’s law for arbitrary domains. I’ll then investigate whether Laplace’s law could be rescued in some way, e.g., by using its time-invariant version, by adding some pseudo-counts, or through some other method.With pseudo-counts, the idea would be that there would be a number of unique pseudo-counts which would make Laplace output t...]]>
NunoSempere https://forum.effectivealtruism.org/posts/KScqjN2ouSjTjWopp/an-in-progress-experiment-to-test-how-laplace-s-rule-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An in-progress experiment to test how Laplace’s rule of succession performs in practice., published by NunoSempere on January 30, 2023 on The Effective Altruism Forum.Note: Of reduced interest to generalist audiences.SummaryI compiled a dataset of 206 mathematical conjectures together with the years in which they were posited. Then in a few years, I intend to check whether the probabilities implied by Laplace’s rule—which only depends on the number of years passed since a conjecture was created—are about right.In a few years, I think this will shed some light on whether Laplace’s rule of succession is useful in practice. For people wanting answers more quickly, I also outline some further work which could be done to obtain results now.The dataset I’m using can be seen here (a).Probability that a conjecture will be resolved by a given year according to Laplace’s law.I estimate the probability that a randomly chosen conjecture will be solved as follows:That is, the probability that the conjecture will first be solved in the year n is the probability given by Laplace conditional on it not having been solved any year before.For reference, a “pseudo-count” corresponds to either changing the numerator to an integer higher than one, or to making n higher. This can be used to capture some of the structure that a problem manifests. E.g., if we don’t think that the prior probability of a theorem being solved in the first year is around 50%, this can be addressed by adding pseudo-counts.Code to do these operations in the programming language R can be found here. A dataset that includes these probabilities can be seen here.Expected distribution of the number of resolved conjectures according to Laplace’s rule of successionUsing the above probabilities, we can, through sampling, estimate the number of conjectures in our database that will be solved in the next 3, 5, or 10 years. The code to do this is in the same R file linked a paragraph ago.For three yearsIf we calculate the 90% and the 98% confidence intervals, these are respectively (6 to 16) and (4 to 18) problems solved in the next three years.For five yearsIf we calculate the 90% and the 98% confidence intervals, these are respectively (11 to 24) and (9 to 27) problems solved in the next five years.For ten yearsIf we calculate the 90% and the 98% confidence intervals, these are respectively (23 to 40) and (20 to 43) problems solved in the next five years.Ideas for further workDo this experiment for other topics besides mathematical theorems and for other methods besides Laplace’s lawAlthough I expect that this experiment restricted to mathematical experiments will already be decently informative, it would also be interesting to look at the performance of Laplace’s law for a range of topics.It might also be worth it to look at other approaches. In particular, I’d be interested in seeing the same experiment but for “semi-informative priors”—there is no particular reasons why that approach only has to apply to super-speculative areas like AI. So an experiment could look at experts trying to come up with semi-informative priors for events that are testable in the next few years, and this might shed some light into the general method.Checking whether the predictions from Laplace’s law come trueIn three, five, and ten years I’ll check the number of conjectures which have been resolved. If that falls outside the 99% confidence interval, I will become more skeptical of using Laplace’s law for arbitrary domains. I’ll then investigate whether Laplace’s law could be rescued in some way, e.g., by using its time-invariant version, by adding some pseudo-counts, or through some other method.With pseudo-counts, the idea would be that there would be a number of unique pseudo-counts which would make Laplace output t...]]>
Mon, 30 Jan 2023 19:31:53 +0000 EA - An in-progress experiment to test how Laplace’s rule of succession performs in practice. by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An in-progress experiment to test how Laplace’s rule of succession performs in practice., published by NunoSempere on January 30, 2023 on The Effective Altruism Forum.Note: Of reduced interest to generalist audiences.SummaryI compiled a dataset of 206 mathematical conjectures together with the years in which they were posited. Then in a few years, I intend to check whether the probabilities implied by Laplace’s rule—which only depends on the number of years passed since a conjecture was created—are about right.In a few years, I think this will shed some light on whether Laplace’s rule of succession is useful in practice. For people wanting answers more quickly, I also outline some further work which could be done to obtain results now.The dataset I’m using can be seen here (a).Probability that a conjecture will be resolved by a given year according to Laplace’s law.I estimate the probability that a randomly chosen conjecture will be solved as follows:That is, the probability that the conjecture will first be solved in the year n is the probability given by Laplace conditional on it not having been solved any year before.For reference, a “pseudo-count” corresponds to either changing the numerator to an integer higher than one, or to making n higher. This can be used to capture some of the structure that a problem manifests. E.g., if we don’t think that the prior probability of a theorem being solved in the first year is around 50%, this can be addressed by adding pseudo-counts.Code to do these operations in the programming language R can be found here. A dataset that includes these probabilities can be seen here.Expected distribution of the number of resolved conjectures according to Laplace’s rule of successionUsing the above probabilities, we can, through sampling, estimate the number of conjectures in our database that will be solved in the next 3, 5, or 10 years. The code to do this is in the same R file linked a paragraph ago.For three yearsIf we calculate the 90% and the 98% confidence intervals, these are respectively (6 to 16) and (4 to 18) problems solved in the next three years.For five yearsIf we calculate the 90% and the 98% confidence intervals, these are respectively (11 to 24) and (9 to 27) problems solved in the next five years.For ten yearsIf we calculate the 90% and the 98% confidence intervals, these are respectively (23 to 40) and (20 to 43) problems solved in the next five years.Ideas for further workDo this experiment for other topics besides mathematical theorems and for other methods besides Laplace’s lawAlthough I expect that this experiment restricted to mathematical experiments will already be decently informative, it would also be interesting to look at the performance of Laplace’s law for a range of topics.It might also be worth it to look at other approaches. In particular, I’d be interested in seeing the same experiment but for “semi-informative priors”—there is no particular reasons why that approach only has to apply to super-speculative areas like AI. So an experiment could look at experts trying to come up with semi-informative priors for events that are testable in the next few years, and this might shed some light into the general method.Checking whether the predictions from Laplace’s law come trueIn three, five, and ten years I’ll check the number of conjectures which have been resolved. If that falls outside the 99% confidence interval, I will become more skeptical of using Laplace’s law for arbitrary domains. I’ll then investigate whether Laplace’s law could be rescued in some way, e.g., by using its time-invariant version, by adding some pseudo-counts, or through some other method.With pseudo-counts, the idea would be that there would be a number of unique pseudo-counts which would make Laplace output t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An in-progress experiment to test how Laplace’s rule of succession performs in practice., published by NunoSempere on January 30, 2023 on The Effective Altruism Forum.Note: Of reduced interest to generalist audiences.SummaryI compiled a dataset of 206 mathematical conjectures together with the years in which they were posited. Then in a few years, I intend to check whether the probabilities implied by Laplace’s rule—which only depends on the number of years passed since a conjecture was created—are about right.In a few years, I think this will shed some light on whether Laplace’s rule of succession is useful in practice. For people wanting answers more quickly, I also outline some further work which could be done to obtain results now.The dataset I’m using can be seen here (a).Probability that a conjecture will be resolved by a given year according to Laplace’s law.I estimate the probability that a randomly chosen conjecture will be solved as follows:That is, the probability that the conjecture will first be solved in the year n is the probability given by Laplace conditional on it not having been solved any year before.For reference, a “pseudo-count” corresponds to either changing the numerator to an integer higher than one, or to making n higher. This can be used to capture some of the structure that a problem manifests. E.g., if we don’t think that the prior probability of a theorem being solved in the first year is around 50%, this can be addressed by adding pseudo-counts.Code to do these operations in the programming language R can be found here. A dataset that includes these probabilities can be seen here.Expected distribution of the number of resolved conjectures according to Laplace’s rule of successionUsing the above probabilities, we can, through sampling, estimate the number of conjectures in our database that will be solved in the next 3, 5, or 10 years. The code to do this is in the same R file linked a paragraph ago.For three yearsIf we calculate the 90% and the 98% confidence intervals, these are respectively (6 to 16) and (4 to 18) problems solved in the next three years.For five yearsIf we calculate the 90% and the 98% confidence intervals, these are respectively (11 to 24) and (9 to 27) problems solved in the next five years.For ten yearsIf we calculate the 90% and the 98% confidence intervals, these are respectively (23 to 40) and (20 to 43) problems solved in the next five years.Ideas for further workDo this experiment for other topics besides mathematical theorems and for other methods besides Laplace’s lawAlthough I expect that this experiment restricted to mathematical experiments will already be decently informative, it would also be interesting to look at the performance of Laplace’s law for a range of topics.It might also be worth it to look at other approaches. In particular, I’d be interested in seeing the same experiment but for “semi-informative priors”—there is no particular reasons why that approach only has to apply to super-speculative areas like AI. So an experiment could look at experts trying to come up with semi-informative priors for events that are testable in the next few years, and this might shed some light into the general method.Checking whether the predictions from Laplace’s law come trueIn three, five, and ten years I’ll check the number of conjectures which have been resolved. If that falls outside the 99% confidence interval, I will become more skeptical of using Laplace’s law for arbitrary domains. I’ll then investigate whether Laplace’s law could be rescued in some way, e.g., by using its time-invariant version, by adding some pseudo-counts, or through some other method.With pseudo-counts, the idea would be that there would be a number of unique pseudo-counts which would make Laplace output t...]]>
NunoSempere https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:49 None full 4656
dDudLPHv7AgPLrzef_NL_EA_EA EA - Karma overrates some topics; resulting issues and potential solutions by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Karma overrates some topics; resulting issues and potential solutions, published by Lizka on January 30, 2023 on The Effective Altruism Forum.TL;DR: Karma overrates “lowest-common-denominator” posts that interest a large fraction of the community, leading to some issues. We list some potential solutions at the bottom.Please see the disclaimer at the bottom of the post.Posts that interest everyone — or discussions where everyone has an opinion — tend to get a lot more Forum karma (and attention) than niche posts.These posts tend to beabout the EA communityaccessible to everyone, or on topics where everyone has an opinionWhy does this happen?There are different groups with different niche interests, but an overlapping interest in the EA community:When a post about the EA community is published, many people might have opinions, and many people feel that they can vote on the post. Most people upvote, so more people voting usually means that a post will get higher karma.Similarly, if the topic of the post is something that doesn’t require particular expertise to have an opinion about, lots of people feel like they can weigh in. You can think of these as “lowest-common-denominator posts.” This is related to bike-shedding.This leads to some issuesThis misleads people about what the Forum — and the EA community — cares about10 of the 10 highest karma posts from 2022 were community posts, even though less than ⅓ of total karma went to community posts.When someone is trying to evaluate the quality of the Forum, they often go to the list of top posts and evaluate those. This seems like a very reasonable thing to do, but it's actually giving a very skewed picture of what happens on the Forum.Because discussions about the community seem to be so highly valued by Forum readers, people might accidentally start to value community-oriented topics more themselves, and drift away from real-world issuesImagine an author posting about some issue with RCTs that’s relevant to their work — they’ll get a bit of engagement, some appreciation, and maybe some questions. Then they write a quick post about the font on the Forum — suddenly everyone has an opinion and they get loads of karma. Unconsciously, they might view this as an indicator that the community values the second post more than the first. If this happens repeatedly or they see this happening, they might shift towards that view themselves if they defer even a bit to the community’s view.Now imagine this happening on the scale of the thousands of people who use the Forum; these small updates add up.This directs even more attention to community-oriented, low-barrier topics, and away from niche topics and topics that are more complex, which might be more valuable to discussKarma is used for sorting the Frontpage: higher-rated posts stay on the Frontpage for longer. This is useful, as it tends to hide the most irrelevant posts, and generally boosts higher quality content — more people see the better posts.But because posts that hit the middle sections in the Venn diagrams above get more karma, they tend to stick around for longer, which then gets them more karma, etc.(We didn't try to make this list of issues as exhaustive as possible.)Note that karma is not perfect even within a much more specific topic — pretty random factors can affect a Forum post’s karma, and readers aren’t always great at voting, but that is a separate issue. (We might write a post about it later.)Solutions we’re considering or exploringCreate something like a subforum or separate tab for “community opinion” posts, and filter them out from the Frontpage by defaultOr otherwise move in this directionRename “Top” sorting to more clearly indicate what karma actually measuresWe tend to have a somewhat higher bar for sharing “community” posts ...]]>
Lizka https://forum.effectivealtruism.org/posts/dDudLPHv7AgPLrzef/karma-overrates-some-topics-resulting-issues-and-potential Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Karma overrates some topics; resulting issues and potential solutions, published by Lizka on January 30, 2023 on The Effective Altruism Forum.TL;DR: Karma overrates “lowest-common-denominator” posts that interest a large fraction of the community, leading to some issues. We list some potential solutions at the bottom.Please see the disclaimer at the bottom of the post.Posts that interest everyone — or discussions where everyone has an opinion — tend to get a lot more Forum karma (and attention) than niche posts.These posts tend to beabout the EA communityaccessible to everyone, or on topics where everyone has an opinionWhy does this happen?There are different groups with different niche interests, but an overlapping interest in the EA community:When a post about the EA community is published, many people might have opinions, and many people feel that they can vote on the post. Most people upvote, so more people voting usually means that a post will get higher karma.Similarly, if the topic of the post is something that doesn’t require particular expertise to have an opinion about, lots of people feel like they can weigh in. You can think of these as “lowest-common-denominator posts.” This is related to bike-shedding.This leads to some issuesThis misleads people about what the Forum — and the EA community — cares about10 of the 10 highest karma posts from 2022 were community posts, even though less than ⅓ of total karma went to community posts.When someone is trying to evaluate the quality of the Forum, they often go to the list of top posts and evaluate those. This seems like a very reasonable thing to do, but it's actually giving a very skewed picture of what happens on the Forum.Because discussions about the community seem to be so highly valued by Forum readers, people might accidentally start to value community-oriented topics more themselves, and drift away from real-world issuesImagine an author posting about some issue with RCTs that’s relevant to their work — they’ll get a bit of engagement, some appreciation, and maybe some questions. Then they write a quick post about the font on the Forum — suddenly everyone has an opinion and they get loads of karma. Unconsciously, they might view this as an indicator that the community values the second post more than the first. If this happens repeatedly or they see this happening, they might shift towards that view themselves if they defer even a bit to the community’s view.Now imagine this happening on the scale of the thousands of people who use the Forum; these small updates add up.This directs even more attention to community-oriented, low-barrier topics, and away from niche topics and topics that are more complex, which might be more valuable to discussKarma is used for sorting the Frontpage: higher-rated posts stay on the Frontpage for longer. This is useful, as it tends to hide the most irrelevant posts, and generally boosts higher quality content — more people see the better posts.But because posts that hit the middle sections in the Venn diagrams above get more karma, they tend to stick around for longer, which then gets them more karma, etc.(We didn't try to make this list of issues as exhaustive as possible.)Note that karma is not perfect even within a much more specific topic — pretty random factors can affect a Forum post’s karma, and readers aren’t always great at voting, but that is a separate issue. (We might write a post about it later.)Solutions we’re considering or exploringCreate something like a subforum or separate tab for “community opinion” posts, and filter them out from the Frontpage by defaultOr otherwise move in this directionRename “Top” sorting to more clearly indicate what karma actually measuresWe tend to have a somewhat higher bar for sharing “community” posts ...]]>
Mon, 30 Jan 2023 19:07:05 +0000 EA - Karma overrates some topics; resulting issues and potential solutions by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Karma overrates some topics; resulting issues and potential solutions, published by Lizka on January 30, 2023 on The Effective Altruism Forum.TL;DR: Karma overrates “lowest-common-denominator” posts that interest a large fraction of the community, leading to some issues. We list some potential solutions at the bottom.Please see the disclaimer at the bottom of the post.Posts that interest everyone — or discussions where everyone has an opinion — tend to get a lot more Forum karma (and attention) than niche posts.These posts tend to beabout the EA communityaccessible to everyone, or on topics where everyone has an opinionWhy does this happen?There are different groups with different niche interests, but an overlapping interest in the EA community:When a post about the EA community is published, many people might have opinions, and many people feel that they can vote on the post. Most people upvote, so more people voting usually means that a post will get higher karma.Similarly, if the topic of the post is something that doesn’t require particular expertise to have an opinion about, lots of people feel like they can weigh in. You can think of these as “lowest-common-denominator posts.” This is related to bike-shedding.This leads to some issuesThis misleads people about what the Forum — and the EA community — cares about10 of the 10 highest karma posts from 2022 were community posts, even though less than ⅓ of total karma went to community posts.When someone is trying to evaluate the quality of the Forum, they often go to the list of top posts and evaluate those. This seems like a very reasonable thing to do, but it's actually giving a very skewed picture of what happens on the Forum.Because discussions about the community seem to be so highly valued by Forum readers, people might accidentally start to value community-oriented topics more themselves, and drift away from real-world issuesImagine an author posting about some issue with RCTs that’s relevant to their work — they’ll get a bit of engagement, some appreciation, and maybe some questions. Then they write a quick post about the font on the Forum — suddenly everyone has an opinion and they get loads of karma. Unconsciously, they might view this as an indicator that the community values the second post more than the first. If this happens repeatedly or they see this happening, they might shift towards that view themselves if they defer even a bit to the community’s view.Now imagine this happening on the scale of the thousands of people who use the Forum; these small updates add up.This directs even more attention to community-oriented, low-barrier topics, and away from niche topics and topics that are more complex, which might be more valuable to discussKarma is used for sorting the Frontpage: higher-rated posts stay on the Frontpage for longer. This is useful, as it tends to hide the most irrelevant posts, and generally boosts higher quality content — more people see the better posts.But because posts that hit the middle sections in the Venn diagrams above get more karma, they tend to stick around for longer, which then gets them more karma, etc.(We didn't try to make this list of issues as exhaustive as possible.)Note that karma is not perfect even within a much more specific topic — pretty random factors can affect a Forum post’s karma, and readers aren’t always great at voting, but that is a separate issue. (We might write a post about it later.)Solutions we’re considering or exploringCreate something like a subforum or separate tab for “community opinion” posts, and filter them out from the Frontpage by defaultOr otherwise move in this directionRename “Top” sorting to more clearly indicate what karma actually measuresWe tend to have a somewhat higher bar for sharing “community” posts ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Karma overrates some topics; resulting issues and potential solutions, published by Lizka on January 30, 2023 on The Effective Altruism Forum.TL;DR: Karma overrates “lowest-common-denominator” posts that interest a large fraction of the community, leading to some issues. We list some potential solutions at the bottom.Please see the disclaimer at the bottom of the post.Posts that interest everyone — or discussions where everyone has an opinion — tend to get a lot more Forum karma (and attention) than niche posts.These posts tend to beabout the EA communityaccessible to everyone, or on topics where everyone has an opinionWhy does this happen?There are different groups with different niche interests, but an overlapping interest in the EA community:When a post about the EA community is published, many people might have opinions, and many people feel that they can vote on the post. Most people upvote, so more people voting usually means that a post will get higher karma.Similarly, if the topic of the post is something that doesn’t require particular expertise to have an opinion about, lots of people feel like they can weigh in. You can think of these as “lowest-common-denominator posts.” This is related to bike-shedding.This leads to some issuesThis misleads people about what the Forum — and the EA community — cares about10 of the 10 highest karma posts from 2022 were community posts, even though less than ⅓ of total karma went to community posts.When someone is trying to evaluate the quality of the Forum, they often go to the list of top posts and evaluate those. This seems like a very reasonable thing to do, but it's actually giving a very skewed picture of what happens on the Forum.Because discussions about the community seem to be so highly valued by Forum readers, people might accidentally start to value community-oriented topics more themselves, and drift away from real-world issuesImagine an author posting about some issue with RCTs that’s relevant to their work — they’ll get a bit of engagement, some appreciation, and maybe some questions. Then they write a quick post about the font on the Forum — suddenly everyone has an opinion and they get loads of karma. Unconsciously, they might view this as an indicator that the community values the second post more than the first. If this happens repeatedly or they see this happening, they might shift towards that view themselves if they defer even a bit to the community’s view.Now imagine this happening on the scale of the thousands of people who use the Forum; these small updates add up.This directs even more attention to community-oriented, low-barrier topics, and away from niche topics and topics that are more complex, which might be more valuable to discussKarma is used for sorting the Frontpage: higher-rated posts stay on the Frontpage for longer. This is useful, as it tends to hide the most irrelevant posts, and generally boosts higher quality content — more people see the better posts.But because posts that hit the middle sections in the Venn diagrams above get more karma, they tend to stick around for longer, which then gets them more karma, etc.(We didn't try to make this list of issues as exhaustive as possible.)Note that karma is not perfect even within a much more specific topic — pretty random factors can affect a Forum post’s karma, and readers aren’t always great at voting, but that is a separate issue. (We might write a post about it later.)Solutions we’re considering or exploringCreate something like a subforum or separate tab for “community opinion” posts, and filter them out from the Frontpage by defaultOr otherwise move in this directionRename “Top” sorting to more clearly indicate what karma actually measuresWe tend to have a somewhat higher bar for sharing “community” posts ...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:02 None full 4655
zG8fPXhiRWxruEeKQ_NL_EA_EA EA - Alignment is mostly about making cognition aimable at all by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alignment is mostly about making cognition aimable at all, published by So8res on January 30, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
So8res https://forum.effectivealtruism.org/posts/zG8fPXhiRWxruEeKQ/alignment-is-mostly-about-making-cognition-aimable-at-all Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alignment is mostly about making cognition aimable at all, published by So8res on January 30, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 30 Jan 2023 17:33:32 +0000 EA - Alignment is mostly about making cognition aimable at all by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alignment is mostly about making cognition aimable at all, published by So8res on January 30, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alignment is mostly about making cognition aimable at all, published by So8res on January 30, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
So8res https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:27 None full 4657
C89mZ5T5MTYBu8ZFR_NL_EA_EA EA - Regulatory inquiry into Effective Ventures Foundation UK by HowieL Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Regulatory inquiry into Effective Ventures Foundation UK, published by HowieL on January 30, 2023 on The Effective Altruism Forum.Earlier today, the Charity Commission for England and Wales announced a statutory inquiry into Effective Ventures Foundation UK (EVF UK). We issued a short statement acknowledging the inquiry. As the recently appointed Interim CEO of EVF UK, I also wanted to say a bit more to the community.As you might imagine, writing this post feels a bit tough for a bunch of reasons. It’s a sensitive topic, there are lots of complicated legal issues to consider, and it’s generally a bit weird to write publicly about an agency that’s in the middle of investigating you (it feels a little like talking about someone in the third person without acknowledging that they’re sitting at the dinner table right next to you). One thing I really want to do is avoid speaking for the Charity Commission, which makes me hesitant to speculate too much about why exactly they chose to begin this inquiry, etc.But I’m going to do my best to figure out what’s helpful to say here.SummaryEffective Ventures Foundation UK (EVF UK) is the UK charity which, together with Effective Ventures Foundation US (EVF US), acts as host and fiscal sponsor for a wide range of projects including CEA, 80,000 Hours, and others.Following the FTX collapse, the Charity Commission for England and Wales has opened an inquiry into EVF UK’s financial situation and its governance and administration by its trustees.We were not particularly surprised by the Commission’s decision to open an inquiry. As it notes, some FTX-linked entities (such as the FTX Foundation) and related individuals were major donors to EVF UK, and there was the potential for a variety of conflicts of interest with respect to governance (e.g., two of EVF UK’s board members were affiliated with FTX Foundation). Especially given the scale and profile of the FTX collapse, it makes sense that the Commission wants extra visibility into whether we’re appropriately protecting our property, managing potential conflicts of interest, and generally complying with UK charity law.The UK board and executive team are cooperating in full and the Commission has confirmed that the trustees have fulfilled their duties by filing a serious incident report.Sharing updates in the midst of an ongoing inquiry is legally complicated, so there will probably be some limitations to the updates we’ll provide before the inquiry is complete. But we’ll do our best to share whatever (if anything) is legally sensible.What is Effective Ventures Foundation UK?For context, EVF UK is the UK charity which, together with EVF US, acts as host and fiscal sponsor for a wide range of projects, including CEA, 80,000 Hours and others. Everything in this post is only referring to the UK entity since that’s the entity which is regulated by the Charity Commission. (You can read more about the structure and role of EVF UK here.)What is a statutory inquiry?The Charity Commission is a regulator whose responsibilities include:Preventing mismanagement and/or misconduct by charities;Promoting compliance with charity law;Protecting charities’ property, beneficiaries, and work; andSafeguarding the public’s trust and confidence in charities.A statutory inquiry is one tool the Commission has for establishing facts or collecting evidence related to these responsibilities. Opening an inquiry gives the Commission information gathering powers; they can require a charity to send relevant information or documents and they can require individuals to answer questions. The Commission also has wide-ranging powers for protecting charities and ensuring compliance. Outcomes can range from taking no action or issuing a warning to appointing an interim manager to manage the charity or even r...]]>
HowieL https://forum.effectivealtruism.org/posts/C89mZ5T5MTYBu8ZFR/regulatory-inquiry-into-effective-ventures-foundation-uk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Regulatory inquiry into Effective Ventures Foundation UK, published by HowieL on January 30, 2023 on The Effective Altruism Forum.Earlier today, the Charity Commission for England and Wales announced a statutory inquiry into Effective Ventures Foundation UK (EVF UK). We issued a short statement acknowledging the inquiry. As the recently appointed Interim CEO of EVF UK, I also wanted to say a bit more to the community.As you might imagine, writing this post feels a bit tough for a bunch of reasons. It’s a sensitive topic, there are lots of complicated legal issues to consider, and it’s generally a bit weird to write publicly about an agency that’s in the middle of investigating you (it feels a little like talking about someone in the third person without acknowledging that they’re sitting at the dinner table right next to you). One thing I really want to do is avoid speaking for the Charity Commission, which makes me hesitant to speculate too much about why exactly they chose to begin this inquiry, etc.But I’m going to do my best to figure out what’s helpful to say here.SummaryEffective Ventures Foundation UK (EVF UK) is the UK charity which, together with Effective Ventures Foundation US (EVF US), acts as host and fiscal sponsor for a wide range of projects including CEA, 80,000 Hours, and others.Following the FTX collapse, the Charity Commission for England and Wales has opened an inquiry into EVF UK’s financial situation and its governance and administration by its trustees.We were not particularly surprised by the Commission’s decision to open an inquiry. As it notes, some FTX-linked entities (such as the FTX Foundation) and related individuals were major donors to EVF UK, and there was the potential for a variety of conflicts of interest with respect to governance (e.g., two of EVF UK’s board members were affiliated with FTX Foundation). Especially given the scale and profile of the FTX collapse, it makes sense that the Commission wants extra visibility into whether we’re appropriately protecting our property, managing potential conflicts of interest, and generally complying with UK charity law.The UK board and executive team are cooperating in full and the Commission has confirmed that the trustees have fulfilled their duties by filing a serious incident report.Sharing updates in the midst of an ongoing inquiry is legally complicated, so there will probably be some limitations to the updates we’ll provide before the inquiry is complete. But we’ll do our best to share whatever (if anything) is legally sensible.What is Effective Ventures Foundation UK?For context, EVF UK is the UK charity which, together with EVF US, acts as host and fiscal sponsor for a wide range of projects, including CEA, 80,000 Hours and others. Everything in this post is only referring to the UK entity since that’s the entity which is regulated by the Charity Commission. (You can read more about the structure and role of EVF UK here.)What is a statutory inquiry?The Charity Commission is a regulator whose responsibilities include:Preventing mismanagement and/or misconduct by charities;Promoting compliance with charity law;Protecting charities’ property, beneficiaries, and work; andSafeguarding the public’s trust and confidence in charities.A statutory inquiry is one tool the Commission has for establishing facts or collecting evidence related to these responsibilities. Opening an inquiry gives the Commission information gathering powers; they can require a charity to send relevant information or documents and they can require individuals to answer questions. The Commission also has wide-ranging powers for protecting charities and ensuring compliance. Outcomes can range from taking no action or issuing a warning to appointing an interim manager to manage the charity or even r...]]>
Mon, 30 Jan 2023 15:29:54 +0000 EA - Regulatory inquiry into Effective Ventures Foundation UK by HowieL Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Regulatory inquiry into Effective Ventures Foundation UK, published by HowieL on January 30, 2023 on The Effective Altruism Forum.Earlier today, the Charity Commission for England and Wales announced a statutory inquiry into Effective Ventures Foundation UK (EVF UK). We issued a short statement acknowledging the inquiry. As the recently appointed Interim CEO of EVF UK, I also wanted to say a bit more to the community.As you might imagine, writing this post feels a bit tough for a bunch of reasons. It’s a sensitive topic, there are lots of complicated legal issues to consider, and it’s generally a bit weird to write publicly about an agency that’s in the middle of investigating you (it feels a little like talking about someone in the third person without acknowledging that they’re sitting at the dinner table right next to you). One thing I really want to do is avoid speaking for the Charity Commission, which makes me hesitant to speculate too much about why exactly they chose to begin this inquiry, etc.But I’m going to do my best to figure out what’s helpful to say here.SummaryEffective Ventures Foundation UK (EVF UK) is the UK charity which, together with Effective Ventures Foundation US (EVF US), acts as host and fiscal sponsor for a wide range of projects including CEA, 80,000 Hours, and others.Following the FTX collapse, the Charity Commission for England and Wales has opened an inquiry into EVF UK’s financial situation and its governance and administration by its trustees.We were not particularly surprised by the Commission’s decision to open an inquiry. As it notes, some FTX-linked entities (such as the FTX Foundation) and related individuals were major donors to EVF UK, and there was the potential for a variety of conflicts of interest with respect to governance (e.g., two of EVF UK’s board members were affiliated with FTX Foundation). Especially given the scale and profile of the FTX collapse, it makes sense that the Commission wants extra visibility into whether we’re appropriately protecting our property, managing potential conflicts of interest, and generally complying with UK charity law.The UK board and executive team are cooperating in full and the Commission has confirmed that the trustees have fulfilled their duties by filing a serious incident report.Sharing updates in the midst of an ongoing inquiry is legally complicated, so there will probably be some limitations to the updates we’ll provide before the inquiry is complete. But we’ll do our best to share whatever (if anything) is legally sensible.What is Effective Ventures Foundation UK?For context, EVF UK is the UK charity which, together with EVF US, acts as host and fiscal sponsor for a wide range of projects, including CEA, 80,000 Hours and others. Everything in this post is only referring to the UK entity since that’s the entity which is regulated by the Charity Commission. (You can read more about the structure and role of EVF UK here.)What is a statutory inquiry?The Charity Commission is a regulator whose responsibilities include:Preventing mismanagement and/or misconduct by charities;Promoting compliance with charity law;Protecting charities’ property, beneficiaries, and work; andSafeguarding the public’s trust and confidence in charities.A statutory inquiry is one tool the Commission has for establishing facts or collecting evidence related to these responsibilities. Opening an inquiry gives the Commission information gathering powers; they can require a charity to send relevant information or documents and they can require individuals to answer questions. The Commission also has wide-ranging powers for protecting charities and ensuring compliance. Outcomes can range from taking no action or issuing a warning to appointing an interim manager to manage the charity or even r...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Regulatory inquiry into Effective Ventures Foundation UK, published by HowieL on January 30, 2023 on The Effective Altruism Forum.Earlier today, the Charity Commission for England and Wales announced a statutory inquiry into Effective Ventures Foundation UK (EVF UK). We issued a short statement acknowledging the inquiry. As the recently appointed Interim CEO of EVF UK, I also wanted to say a bit more to the community.As you might imagine, writing this post feels a bit tough for a bunch of reasons. It’s a sensitive topic, there are lots of complicated legal issues to consider, and it’s generally a bit weird to write publicly about an agency that’s in the middle of investigating you (it feels a little like talking about someone in the third person without acknowledging that they’re sitting at the dinner table right next to you). One thing I really want to do is avoid speaking for the Charity Commission, which makes me hesitant to speculate too much about why exactly they chose to begin this inquiry, etc.But I’m going to do my best to figure out what’s helpful to say here.SummaryEffective Ventures Foundation UK (EVF UK) is the UK charity which, together with Effective Ventures Foundation US (EVF US), acts as host and fiscal sponsor for a wide range of projects including CEA, 80,000 Hours, and others.Following the FTX collapse, the Charity Commission for England and Wales has opened an inquiry into EVF UK’s financial situation and its governance and administration by its trustees.We were not particularly surprised by the Commission’s decision to open an inquiry. As it notes, some FTX-linked entities (such as the FTX Foundation) and related individuals were major donors to EVF UK, and there was the potential for a variety of conflicts of interest with respect to governance (e.g., two of EVF UK’s board members were affiliated with FTX Foundation). Especially given the scale and profile of the FTX collapse, it makes sense that the Commission wants extra visibility into whether we’re appropriately protecting our property, managing potential conflicts of interest, and generally complying with UK charity law.The UK board and executive team are cooperating in full and the Commission has confirmed that the trustees have fulfilled their duties by filing a serious incident report.Sharing updates in the midst of an ongoing inquiry is legally complicated, so there will probably be some limitations to the updates we’ll provide before the inquiry is complete. But we’ll do our best to share whatever (if anything) is legally sensible.What is Effective Ventures Foundation UK?For context, EVF UK is the UK charity which, together with EVF US, acts as host and fiscal sponsor for a wide range of projects, including CEA, 80,000 Hours and others. Everything in this post is only referring to the UK entity since that’s the entity which is regulated by the Charity Commission. (You can read more about the structure and role of EVF UK here.)What is a statutory inquiry?The Charity Commission is a regulator whose responsibilities include:Preventing mismanagement and/or misconduct by charities;Promoting compliance with charity law;Protecting charities’ property, beneficiaries, and work; andSafeguarding the public’s trust and confidence in charities.A statutory inquiry is one tool the Commission has for establishing facts or collecting evidence related to these responsibilities. Opening an inquiry gives the Commission information gathering powers; they can require a charity to send relevant information or documents and they can require individuals to answer questions. The Commission also has wide-ranging powers for protecting charities and ensuring compliance. Outcomes can range from taking no action or issuing a warning to appointing an interim manager to manage the charity or even r...]]>
HowieL https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:15 None full 4658
GoWNiPbrEb6NHD3MF_NL_EA_EA EA - Announcing Interim CEOs of EVF by Owen Cotton-Barratt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Interim CEOs of EVF, published by Owen Cotton-Barratt on January 30, 2023 on The Effective Altruism Forum.OverviewEffective Ventures Foundation USA (“EVF US”) has appointed Zachary Robinson as its Interim CEO as of today. We’re also taking this time to announce that Effective Ventures Foundation (“EVF UK”) appointed Howie Lempel as its Interim CEO back in November.EVF UK and EVF US together act as hosts and fiscal sponsors of many key projects in effective altruism, including the Centre for Effective Altruism (CEA), 80,000 Hours (80k), Longview Philanthropy, etc. These charities (EVF UK and EVF US) previously did not have the position of CEO, and project leads (for CEA, 80k, etc.) reported directly to EVF boards for management oversight.The FTX situation has given rise to a number of new challenges. In particular, there are financial and legal issues that occur at the level of the charities (EVF UK and EVF US), not at the project level, because the projects are not separate legal entities. Because of this, we’ve chosen to coordinate the legal, financial, and communications strategies of the projects under EVF much more than before.In response to the new challenges from FTX, the boards became much more involved in the day-to-day operations of the charities. But it’s not ideal to have boards playing the role of executives, so the boards have now also appointed Interim CEOs to each charity.The Interim CEO roles are about handling crises and helping the entities transition to an improved long-term structure. They are naturally time-limited roles; we aren’t sure how they might change, or when it will make sense for Howie and/or Zach to hand the reins off. The announcement of these Interim CEOs doesn’t constitute any change of project leadership; for example, Max Dalton will continue as leader of CEA, the community-building project which is part of EVF.Meta remarksThis post is written on behalf of the boards of EVF. It’s difficult to write or act as a group without either doing things that not everyone is totally behind, or constraining ourselves to the bland. In the case of this post, it is largely the work of the primary authors; other board members might not agree with the more subjective judgements.The impetus for getting this post out now was a desire to empower the CEOs to speak on behalf of the entities; there are some time-sensitive updates they expect to share soon.Edit: Howie has now shared oneThere’s a lot to say about what FTX’s collapse means for EA and EVF; in many ways we’re still in the early days of wrestling with what this means for us and our communities. This post isn’t about that. However, we know that this is the first major public communication from the EVF boards, and we don’t want the implicature to be that we don’t think this is important or worth discussing.We’re hoping to write more soon about why we haven’t said more, how we’re seeing the situation, and how EVF’s choices may have impacted EA discourse about FTX.About Howie and ZachHowie Lempel has been Interim CEO of EVF UK since mid-November. To take this on, Howie took leave from his role as CEO of 80k, one of the largest EVF projects. In light of Howie’s move, Brenton Mayer was appointed as Interim CEO of 80k in December. Brenton was previously Director of Internal Systems at 80k.Before he worked at 80k, Howie went to Yale Law for two years. He left Yale to join Open Philanthropy while it was still being incubated at GiveWell, working as their first Program Officer for Global Catastrophic Risks. Howie has also worked on white collar crime at the Manhattan DA’s office and on U.S. economic policy as a research assistant at the Brookings Institution. In addition to his role as CEO of 80k, he’s also known in the EA community for his personal podcast episode on mental he...]]>
Owen Cotton-Barratt https://forum.effectivealtruism.org/posts/GoWNiPbrEb6NHD3MF/announcing-interim-ceos-of-evf Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Interim CEOs of EVF, published by Owen Cotton-Barratt on January 30, 2023 on The Effective Altruism Forum.OverviewEffective Ventures Foundation USA (“EVF US”) has appointed Zachary Robinson as its Interim CEO as of today. We’re also taking this time to announce that Effective Ventures Foundation (“EVF UK”) appointed Howie Lempel as its Interim CEO back in November.EVF UK and EVF US together act as hosts and fiscal sponsors of many key projects in effective altruism, including the Centre for Effective Altruism (CEA), 80,000 Hours (80k), Longview Philanthropy, etc. These charities (EVF UK and EVF US) previously did not have the position of CEO, and project leads (for CEA, 80k, etc.) reported directly to EVF boards for management oversight.The FTX situation has given rise to a number of new challenges. In particular, there are financial and legal issues that occur at the level of the charities (EVF UK and EVF US), not at the project level, because the projects are not separate legal entities. Because of this, we’ve chosen to coordinate the legal, financial, and communications strategies of the projects under EVF much more than before.In response to the new challenges from FTX, the boards became much more involved in the day-to-day operations of the charities. But it’s not ideal to have boards playing the role of executives, so the boards have now also appointed Interim CEOs to each charity.The Interim CEO roles are about handling crises and helping the entities transition to an improved long-term structure. They are naturally time-limited roles; we aren’t sure how they might change, or when it will make sense for Howie and/or Zach to hand the reins off. The announcement of these Interim CEOs doesn’t constitute any change of project leadership; for example, Max Dalton will continue as leader of CEA, the community-building project which is part of EVF.Meta remarksThis post is written on behalf of the boards of EVF. It’s difficult to write or act as a group without either doing things that not everyone is totally behind, or constraining ourselves to the bland. In the case of this post, it is largely the work of the primary authors; other board members might not agree with the more subjective judgements.The impetus for getting this post out now was a desire to empower the CEOs to speak on behalf of the entities; there are some time-sensitive updates they expect to share soon.Edit: Howie has now shared oneThere’s a lot to say about what FTX’s collapse means for EA and EVF; in many ways we’re still in the early days of wrestling with what this means for us and our communities. This post isn’t about that. However, we know that this is the first major public communication from the EVF boards, and we don’t want the implicature to be that we don’t think this is important or worth discussing.We’re hoping to write more soon about why we haven’t said more, how we’re seeing the situation, and how EVF’s choices may have impacted EA discourse about FTX.About Howie and ZachHowie Lempel has been Interim CEO of EVF UK since mid-November. To take this on, Howie took leave from his role as CEO of 80k, one of the largest EVF projects. In light of Howie’s move, Brenton Mayer was appointed as Interim CEO of 80k in December. Brenton was previously Director of Internal Systems at 80k.Before he worked at 80k, Howie went to Yale Law for two years. He left Yale to join Open Philanthropy while it was still being incubated at GiveWell, working as their first Program Officer for Global Catastrophic Risks. Howie has also worked on white collar crime at the Manhattan DA’s office and on U.S. economic policy as a research assistant at the Brookings Institution. In addition to his role as CEO of 80k, he’s also known in the EA community for his personal podcast episode on mental he...]]>
Mon, 30 Jan 2023 14:30:34 +0000 EA - Announcing Interim CEOs of EVF by Owen Cotton-Barratt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Interim CEOs of EVF, published by Owen Cotton-Barratt on January 30, 2023 on The Effective Altruism Forum.OverviewEffective Ventures Foundation USA (“EVF US”) has appointed Zachary Robinson as its Interim CEO as of today. We’re also taking this time to announce that Effective Ventures Foundation (“EVF UK”) appointed Howie Lempel as its Interim CEO back in November.EVF UK and EVF US together act as hosts and fiscal sponsors of many key projects in effective altruism, including the Centre for Effective Altruism (CEA), 80,000 Hours (80k), Longview Philanthropy, etc. These charities (EVF UK and EVF US) previously did not have the position of CEO, and project leads (for CEA, 80k, etc.) reported directly to EVF boards for management oversight.The FTX situation has given rise to a number of new challenges. In particular, there are financial and legal issues that occur at the level of the charities (EVF UK and EVF US), not at the project level, because the projects are not separate legal entities. Because of this, we’ve chosen to coordinate the legal, financial, and communications strategies of the projects under EVF much more than before.In response to the new challenges from FTX, the boards became much more involved in the day-to-day operations of the charities. But it’s not ideal to have boards playing the role of executives, so the boards have now also appointed Interim CEOs to each charity.The Interim CEO roles are about handling crises and helping the entities transition to an improved long-term structure. They are naturally time-limited roles; we aren’t sure how they might change, or when it will make sense for Howie and/or Zach to hand the reins off. The announcement of these Interim CEOs doesn’t constitute any change of project leadership; for example, Max Dalton will continue as leader of CEA, the community-building project which is part of EVF.Meta remarksThis post is written on behalf of the boards of EVF. It’s difficult to write or act as a group without either doing things that not everyone is totally behind, or constraining ourselves to the bland. In the case of this post, it is largely the work of the primary authors; other board members might not agree with the more subjective judgements.The impetus for getting this post out now was a desire to empower the CEOs to speak on behalf of the entities; there are some time-sensitive updates they expect to share soon.Edit: Howie has now shared oneThere’s a lot to say about what FTX’s collapse means for EA and EVF; in many ways we’re still in the early days of wrestling with what this means for us and our communities. This post isn’t about that. However, we know that this is the first major public communication from the EVF boards, and we don’t want the implicature to be that we don’t think this is important or worth discussing.We’re hoping to write more soon about why we haven’t said more, how we’re seeing the situation, and how EVF’s choices may have impacted EA discourse about FTX.About Howie and ZachHowie Lempel has been Interim CEO of EVF UK since mid-November. To take this on, Howie took leave from his role as CEO of 80k, one of the largest EVF projects. In light of Howie’s move, Brenton Mayer was appointed as Interim CEO of 80k in December. Brenton was previously Director of Internal Systems at 80k.Before he worked at 80k, Howie went to Yale Law for two years. He left Yale to join Open Philanthropy while it was still being incubated at GiveWell, working as their first Program Officer for Global Catastrophic Risks. Howie has also worked on white collar crime at the Manhattan DA’s office and on U.S. economic policy as a research assistant at the Brookings Institution. In addition to his role as CEO of 80k, he’s also known in the EA community for his personal podcast episode on mental he...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Interim CEOs of EVF, published by Owen Cotton-Barratt on January 30, 2023 on The Effective Altruism Forum.OverviewEffective Ventures Foundation USA (“EVF US”) has appointed Zachary Robinson as its Interim CEO as of today. We’re also taking this time to announce that Effective Ventures Foundation (“EVF UK”) appointed Howie Lempel as its Interim CEO back in November.EVF UK and EVF US together act as hosts and fiscal sponsors of many key projects in effective altruism, including the Centre for Effective Altruism (CEA), 80,000 Hours (80k), Longview Philanthropy, etc. These charities (EVF UK and EVF US) previously did not have the position of CEO, and project leads (for CEA, 80k, etc.) reported directly to EVF boards for management oversight.The FTX situation has given rise to a number of new challenges. In particular, there are financial and legal issues that occur at the level of the charities (EVF UK and EVF US), not at the project level, because the projects are not separate legal entities. Because of this, we’ve chosen to coordinate the legal, financial, and communications strategies of the projects under EVF much more than before.In response to the new challenges from FTX, the boards became much more involved in the day-to-day operations of the charities. But it’s not ideal to have boards playing the role of executives, so the boards have now also appointed Interim CEOs to each charity.The Interim CEO roles are about handling crises and helping the entities transition to an improved long-term structure. They are naturally time-limited roles; we aren’t sure how they might change, or when it will make sense for Howie and/or Zach to hand the reins off. The announcement of these Interim CEOs doesn’t constitute any change of project leadership; for example, Max Dalton will continue as leader of CEA, the community-building project which is part of EVF.Meta remarksThis post is written on behalf of the boards of EVF. It’s difficult to write or act as a group without either doing things that not everyone is totally behind, or constraining ourselves to the bland. In the case of this post, it is largely the work of the primary authors; other board members might not agree with the more subjective judgements.The impetus for getting this post out now was a desire to empower the CEOs to speak on behalf of the entities; there are some time-sensitive updates they expect to share soon.Edit: Howie has now shared oneThere’s a lot to say about what FTX’s collapse means for EA and EVF; in many ways we’re still in the early days of wrestling with what this means for us and our communities. This post isn’t about that. However, we know that this is the first major public communication from the EVF boards, and we don’t want the implicature to be that we don’t think this is important or worth discussing.We’re hoping to write more soon about why we haven’t said more, how we’re seeing the situation, and how EVF’s choices may have impacted EA discourse about FTX.About Howie and ZachHowie Lempel has been Interim CEO of EVF UK since mid-November. To take this on, Howie took leave from his role as CEO of 80k, one of the largest EVF projects. In light of Howie’s move, Brenton Mayer was appointed as Interim CEO of 80k in December. Brenton was previously Director of Internal Systems at 80k.Before he worked at 80k, Howie went to Yale Law for two years. He left Yale to join Open Philanthropy while it was still being incubated at GiveWell, working as their first Program Officer for Global Catastrophic Risks. Howie has also worked on white collar crime at the Manhattan DA’s office and on U.S. economic policy as a research assistant at the Brookings Institution. In addition to his role as CEO of 80k, he’s also known in the EA community for his personal podcast episode on mental he...]]>
Owen Cotton-Barratt https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:52 None full 4659
QDBntBeBWJ94EQdou_NL_EA_EA EA - Time-stamping: An urgent, neglected AI safety measure by Axel Svensson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Time-stamping: An urgent, neglected AI safety measure, published by Axel Svensson on January 30, 2023 on The Effective Altruism Forum.TL;DRI believe we should use a 5-digit annual budget to create/serve trust-less, cryptographic timestamps of all public content, in order to significantly counteract the growing threat that AI-generated fake content poses to truth-seeking and trust. We should also encourage and help all organizations to do likewise with private content.THE PROBLEM & SOLUTIONAs the rate and quality of AI-generated content keeps increasing, it seems inevitable that it will become easier to create fake content and harder to verify/refute it. Remember the very recent past when faking a photo was so hard that simply providing a photo was considered proof? If we do nothing about it, these AI advances might have a devastating impact on people's opportunities to trust both each other and historic material, and might end up having negative value for humanity on net.I believe that trust-less time-stamping is an effective, urgent, tractable and cheap method to partly, but significantly so, counteract this lamentable development. Here's why:EFFECTIVEIt is likely that fake creation technology will outpace fake detection technology. If so, we will nominally end up in an indefinite state of having to doubt pretty much all content. However, with trust-less time-stamping, the contest instead becomes between the fake creation technology available at the time of timestamping, and the fake detection technology available at the time of truth-seeking.Time-stamping everything today will protect all past and current content against suspicion of interference by all future fake creation technology. As both fake creation and fake detection technology progress, no matter at what relative pace, the value of timestamps will grow over time. Perhaps in a not so distant future, it will become an indispensable historical record.URGENTNeed I say much about the pace of progress for AI technology, or the extent of existing content? The value of timestamping everything today rather than in one month, is some function of the value of the truth of all historical records and other content, and technological development during that time. I suspect there's a multiplication somewhere in that function.TRACTABLEWe already have the cryptographic technology and infrastructure to make trust-less timestamps. We also have large public archives of digital and/or digitized content, including but not limited to the web. Time-stamping all of it might not be trivial, but it's not particularly hard. It can even be done without convincing very many people that it needs to be done. For non-public content, adding timestamping as a feature in backup software should be similarly tractable - here the main struggle will probably be to convince users of the value of timestamping.Implementation: Each piece of content is hashed, the hashes put into a merkle tree, and the root of that tree published on several popular, secure, trust-less public ledgers. Proof of timestamp is produced as a list of hashes along the merkle branch from the content up to the root, together with transaction IDs. This technology, including implementations, services and public ledgers already exists. For private content, you might want to be able to prove a timestamp for one piece of content without divulging the existence of another piece of content. To do so, one would add one bottom level in the merkle tree where each content hash is hashed with a pseudo-random value rather than another content hash. This pseudo-random value can be produced from the content hash itself and a salt that is constant within an organization.CHEAPTimestamping n pieces of content comprising a total of b bytes will incur a one-time cost for processin...]]>
Axel Svensson https://forum.effectivealtruism.org/posts/QDBntBeBWJ94EQdou/time-stamping-an-urgent-neglected-ai-safety-measure Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Time-stamping: An urgent, neglected AI safety measure, published by Axel Svensson on January 30, 2023 on The Effective Altruism Forum.TL;DRI believe we should use a 5-digit annual budget to create/serve trust-less, cryptographic timestamps of all public content, in order to significantly counteract the growing threat that AI-generated fake content poses to truth-seeking and trust. We should also encourage and help all organizations to do likewise with private content.THE PROBLEM & SOLUTIONAs the rate and quality of AI-generated content keeps increasing, it seems inevitable that it will become easier to create fake content and harder to verify/refute it. Remember the very recent past when faking a photo was so hard that simply providing a photo was considered proof? If we do nothing about it, these AI advances might have a devastating impact on people's opportunities to trust both each other and historic material, and might end up having negative value for humanity on net.I believe that trust-less time-stamping is an effective, urgent, tractable and cheap method to partly, but significantly so, counteract this lamentable development. Here's why:EFFECTIVEIt is likely that fake creation technology will outpace fake detection technology. If so, we will nominally end up in an indefinite state of having to doubt pretty much all content. However, with trust-less time-stamping, the contest instead becomes between the fake creation technology available at the time of timestamping, and the fake detection technology available at the time of truth-seeking.Time-stamping everything today will protect all past and current content against suspicion of interference by all future fake creation technology. As both fake creation and fake detection technology progress, no matter at what relative pace, the value of timestamps will grow over time. Perhaps in a not so distant future, it will become an indispensable historical record.URGENTNeed I say much about the pace of progress for AI technology, or the extent of existing content? The value of timestamping everything today rather than in one month, is some function of the value of the truth of all historical records and other content, and technological development during that time. I suspect there's a multiplication somewhere in that function.TRACTABLEWe already have the cryptographic technology and infrastructure to make trust-less timestamps. We also have large public archives of digital and/or digitized content, including but not limited to the web. Time-stamping all of it might not be trivial, but it's not particularly hard. It can even be done without convincing very many people that it needs to be done. For non-public content, adding timestamping as a feature in backup software should be similarly tractable - here the main struggle will probably be to convince users of the value of timestamping.Implementation: Each piece of content is hashed, the hashes put into a merkle tree, and the root of that tree published on several popular, secure, trust-less public ledgers. Proof of timestamp is produced as a list of hashes along the merkle branch from the content up to the root, together with transaction IDs. This technology, including implementations, services and public ledgers already exists. For private content, you might want to be able to prove a timestamp for one piece of content without divulging the existence of another piece of content. To do so, one would add one bottom level in the merkle tree where each content hash is hashed with a pseudo-random value rather than another content hash. This pseudo-random value can be produced from the content hash itself and a salt that is constant within an organization.CHEAPTimestamping n pieces of content comprising a total of b bytes will incur a one-time cost for processin...]]>
Mon, 30 Jan 2023 14:19:46 +0000 EA - Time-stamping: An urgent, neglected AI safety measure by Axel Svensson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Time-stamping: An urgent, neglected AI safety measure, published by Axel Svensson on January 30, 2023 on The Effective Altruism Forum.TL;DRI believe we should use a 5-digit annual budget to create/serve trust-less, cryptographic timestamps of all public content, in order to significantly counteract the growing threat that AI-generated fake content poses to truth-seeking and trust. We should also encourage and help all organizations to do likewise with private content.THE PROBLEM & SOLUTIONAs the rate and quality of AI-generated content keeps increasing, it seems inevitable that it will become easier to create fake content and harder to verify/refute it. Remember the very recent past when faking a photo was so hard that simply providing a photo was considered proof? If we do nothing about it, these AI advances might have a devastating impact on people's opportunities to trust both each other and historic material, and might end up having negative value for humanity on net.I believe that trust-less time-stamping is an effective, urgent, tractable and cheap method to partly, but significantly so, counteract this lamentable development. Here's why:EFFECTIVEIt is likely that fake creation technology will outpace fake detection technology. If so, we will nominally end up in an indefinite state of having to doubt pretty much all content. However, with trust-less time-stamping, the contest instead becomes between the fake creation technology available at the time of timestamping, and the fake detection technology available at the time of truth-seeking.Time-stamping everything today will protect all past and current content against suspicion of interference by all future fake creation technology. As both fake creation and fake detection technology progress, no matter at what relative pace, the value of timestamps will grow over time. Perhaps in a not so distant future, it will become an indispensable historical record.URGENTNeed I say much about the pace of progress for AI technology, or the extent of existing content? The value of timestamping everything today rather than in one month, is some function of the value of the truth of all historical records and other content, and technological development during that time. I suspect there's a multiplication somewhere in that function.TRACTABLEWe already have the cryptographic technology and infrastructure to make trust-less timestamps. We also have large public archives of digital and/or digitized content, including but not limited to the web. Time-stamping all of it might not be trivial, but it's not particularly hard. It can even be done without convincing very many people that it needs to be done. For non-public content, adding timestamping as a feature in backup software should be similarly tractable - here the main struggle will probably be to convince users of the value of timestamping.Implementation: Each piece of content is hashed, the hashes put into a merkle tree, and the root of that tree published on several popular, secure, trust-less public ledgers. Proof of timestamp is produced as a list of hashes along the merkle branch from the content up to the root, together with transaction IDs. This technology, including implementations, services and public ledgers already exists. For private content, you might want to be able to prove a timestamp for one piece of content without divulging the existence of another piece of content. To do so, one would add one bottom level in the merkle tree where each content hash is hashed with a pseudo-random value rather than another content hash. This pseudo-random value can be produced from the content hash itself and a salt that is constant within an organization.CHEAPTimestamping n pieces of content comprising a total of b bytes will incur a one-time cost for processin...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Time-stamping: An urgent, neglected AI safety measure, published by Axel Svensson on January 30, 2023 on The Effective Altruism Forum.TL;DRI believe we should use a 5-digit annual budget to create/serve trust-less, cryptographic timestamps of all public content, in order to significantly counteract the growing threat that AI-generated fake content poses to truth-seeking and trust. We should also encourage and help all organizations to do likewise with private content.THE PROBLEM & SOLUTIONAs the rate and quality of AI-generated content keeps increasing, it seems inevitable that it will become easier to create fake content and harder to verify/refute it. Remember the very recent past when faking a photo was so hard that simply providing a photo was considered proof? If we do nothing about it, these AI advances might have a devastating impact on people's opportunities to trust both each other and historic material, and might end up having negative value for humanity on net.I believe that trust-less time-stamping is an effective, urgent, tractable and cheap method to partly, but significantly so, counteract this lamentable development. Here's why:EFFECTIVEIt is likely that fake creation technology will outpace fake detection technology. If so, we will nominally end up in an indefinite state of having to doubt pretty much all content. However, with trust-less time-stamping, the contest instead becomes between the fake creation technology available at the time of timestamping, and the fake detection technology available at the time of truth-seeking.Time-stamping everything today will protect all past and current content against suspicion of interference by all future fake creation technology. As both fake creation and fake detection technology progress, no matter at what relative pace, the value of timestamps will grow over time. Perhaps in a not so distant future, it will become an indispensable historical record.URGENTNeed I say much about the pace of progress for AI technology, or the extent of existing content? The value of timestamping everything today rather than in one month, is some function of the value of the truth of all historical records and other content, and technological development during that time. I suspect there's a multiplication somewhere in that function.TRACTABLEWe already have the cryptographic technology and infrastructure to make trust-less timestamps. We also have large public archives of digital and/or digitized content, including but not limited to the web. Time-stamping all of it might not be trivial, but it's not particularly hard. It can even be done without convincing very many people that it needs to be done. For non-public content, adding timestamping as a feature in backup software should be similarly tractable - here the main struggle will probably be to convince users of the value of timestamping.Implementation: Each piece of content is hashed, the hashes put into a merkle tree, and the root of that tree published on several popular, secure, trust-less public ledgers. Proof of timestamp is produced as a list of hashes along the merkle branch from the content up to the root, together with transaction IDs. This technology, including implementations, services and public ledgers already exists. For private content, you might want to be able to prove a timestamp for one piece of content without divulging the existence of another piece of content. To do so, one would add one bottom level in the merkle tree where each content hash is hashed with a pseudo-random value rather than another content hash. This pseudo-random value can be produced from the content hash itself and a salt that is constant within an organization.CHEAPTimestamping n pieces of content comprising a total of b bytes will incur a one-time cost for processin...]]>
Axel Svensson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:30 None full 4660
uJtKGCkduR2JLCGsj_NL_EA_EA EA - Proposed improvements to EAG(x) admissions process by Elika Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposed improvements to EAG(x) admissions process, published by Elika on January 30, 2023 on The Effective Altruism Forum.This post is part of an ongoing series: Events in EA: Learnings and Critiques.Elika worked (as a contractor) for the Centre for Effective Altruism (CEA) to run EAGxBerkeley. This post is written in a personal capacity, and are not the views of or endorsed by CEA.ApplicationsDo a better job of auto filling application forms with information from previous applications.Applicants with less community experience who might be a good fit for the conference don’t necessarily know community communication norms. The following suggestions aim to provide more context and transparency around the process of application processing, without providing information that would allow applicants to goodhart.Make important questions mandatory to reduce the rate of incomplete applications.Adding context to questions (e.g. for some questions, if the answer is not at all related to EA it's hard to know if the person misunderstood the question, or just has had very limited interaction with EA)Asking applicants to provide specific, concrete answers to questions rather than vague generalities (e.g. “I’d like to learn more about EA” is very vague, it would be more helpful to know what specifically they’d like to learn, and what they already know)Tell applicants not to assume the organiser knows who you are (I think it's generally bad for culture and status).AdmissionsMake admissions policies clear (ex. this conference is for people in X region), both in public communications and throughout the application process.Don’t accept applicants after the deadline has passed. If you do want to accept last-minute applicants, have a consistent and fair criteria for accepting late applicationsExperiment with adding different questions to the application, such as:Check yes if you’re okay giving up your spot for a first time attendee if the conference is at capacity. We may still accept you if we think you could help other attendees.Checking yes if you’re applying but are uncertain you will register and attend (excluding last minute emergencies) to better forecast how many accepted applicants will actually apply.Check yes to “I can attend with <1 week’s notice” for people (e.g. locals) who are willing / able to attend last minute. (H/T Larks)"Why do you think it would be valuable to attend this specific conference" Adding more intentionality to the conference could help make applicants more likely to commit, and help organisers decide between applicants.Removing the multiple conference check box so you can only apply to one conference at a time to improve intentionality.Consider making all tickets (barring needs-based applicants) paid so that people feel a greater sense of commitment towards attending. We will write more about this in a future post.Create a “Virtual Only” option for people who:Don’t want to attend in-person, but do want to be listed as a resource for others (e.g. see Vael’s comment)Initially wanted to attend in-person, but aren’t able to and still want access to the attendee list (e.g. see Lorenzo’s comment)Reminders & Comms to Accepted ApplicantsSend more (at least 3) reminders to accepted applicants to register, with reminders about how many people are waitlisted to incentivise people to release their ticketReminder to register: Hey X, please register for this event! If you don’t register by August 18th, we’ll give your spot to someone on the waitlist. Registering helps us ensure that we get estimates to vendors on time, and makes sure that the application process is fair for everyone. Read more here (link to a post explaining the downsides)Release accepted but not registered applicants' tickets after a certain date and clearly communicate that ...]]>
Elika https://forum.effectivealtruism.org/posts/uJtKGCkduR2JLCGsj/proposed-improvements-to-eag-x-admissions-process Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposed improvements to EAG(x) admissions process, published by Elika on January 30, 2023 on The Effective Altruism Forum.This post is part of an ongoing series: Events in EA: Learnings and Critiques.Elika worked (as a contractor) for the Centre for Effective Altruism (CEA) to run EAGxBerkeley. This post is written in a personal capacity, and are not the views of or endorsed by CEA.ApplicationsDo a better job of auto filling application forms with information from previous applications.Applicants with less community experience who might be a good fit for the conference don’t necessarily know community communication norms. The following suggestions aim to provide more context and transparency around the process of application processing, without providing information that would allow applicants to goodhart.Make important questions mandatory to reduce the rate of incomplete applications.Adding context to questions (e.g. for some questions, if the answer is not at all related to EA it's hard to know if the person misunderstood the question, or just has had very limited interaction with EA)Asking applicants to provide specific, concrete answers to questions rather than vague generalities (e.g. “I’d like to learn more about EA” is very vague, it would be more helpful to know what specifically they’d like to learn, and what they already know)Tell applicants not to assume the organiser knows who you are (I think it's generally bad for culture and status).AdmissionsMake admissions policies clear (ex. this conference is for people in X region), both in public communications and throughout the application process.Don’t accept applicants after the deadline has passed. If you do want to accept last-minute applicants, have a consistent and fair criteria for accepting late applicationsExperiment with adding different questions to the application, such as:Check yes if you’re okay giving up your spot for a first time attendee if the conference is at capacity. We may still accept you if we think you could help other attendees.Checking yes if you’re applying but are uncertain you will register and attend (excluding last minute emergencies) to better forecast how many accepted applicants will actually apply.Check yes to “I can attend with <1 week’s notice” for people (e.g. locals) who are willing / able to attend last minute. (H/T Larks)"Why do you think it would be valuable to attend this specific conference" Adding more intentionality to the conference could help make applicants more likely to commit, and help organisers decide between applicants.Removing the multiple conference check box so you can only apply to one conference at a time to improve intentionality.Consider making all tickets (barring needs-based applicants) paid so that people feel a greater sense of commitment towards attending. We will write more about this in a future post.Create a “Virtual Only” option for people who:Don’t want to attend in-person, but do want to be listed as a resource for others (e.g. see Vael’s comment)Initially wanted to attend in-person, but aren’t able to and still want access to the attendee list (e.g. see Lorenzo’s comment)Reminders & Comms to Accepted ApplicantsSend more (at least 3) reminders to accepted applicants to register, with reminders about how many people are waitlisted to incentivise people to release their ticketReminder to register: Hey X, please register for this event! If you don’t register by August 18th, we’ll give your spot to someone on the waitlist. Registering helps us ensure that we get estimates to vendors on time, and makes sure that the application process is fair for everyone. Read more here (link to a post explaining the downsides)Release accepted but not registered applicants' tickets after a certain date and clearly communicate that ...]]>
Mon, 30 Jan 2023 10:38:53 +0000 EA - Proposed improvements to EAG(x) admissions process by Elika Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposed improvements to EAG(x) admissions process, published by Elika on January 30, 2023 on The Effective Altruism Forum.This post is part of an ongoing series: Events in EA: Learnings and Critiques.Elika worked (as a contractor) for the Centre for Effective Altruism (CEA) to run EAGxBerkeley. This post is written in a personal capacity, and are not the views of or endorsed by CEA.ApplicationsDo a better job of auto filling application forms with information from previous applications.Applicants with less community experience who might be a good fit for the conference don’t necessarily know community communication norms. The following suggestions aim to provide more context and transparency around the process of application processing, without providing information that would allow applicants to goodhart.Make important questions mandatory to reduce the rate of incomplete applications.Adding context to questions (e.g. for some questions, if the answer is not at all related to EA it's hard to know if the person misunderstood the question, or just has had very limited interaction with EA)Asking applicants to provide specific, concrete answers to questions rather than vague generalities (e.g. “I’d like to learn more about EA” is very vague, it would be more helpful to know what specifically they’d like to learn, and what they already know)Tell applicants not to assume the organiser knows who you are (I think it's generally bad for culture and status).AdmissionsMake admissions policies clear (ex. this conference is for people in X region), both in public communications and throughout the application process.Don’t accept applicants after the deadline has passed. If you do want to accept last-minute applicants, have a consistent and fair criteria for accepting late applicationsExperiment with adding different questions to the application, such as:Check yes if you’re okay giving up your spot for a first time attendee if the conference is at capacity. We may still accept you if we think you could help other attendees.Checking yes if you’re applying but are uncertain you will register and attend (excluding last minute emergencies) to better forecast how many accepted applicants will actually apply.Check yes to “I can attend with <1 week’s notice” for people (e.g. locals) who are willing / able to attend last minute. (H/T Larks)"Why do you think it would be valuable to attend this specific conference" Adding more intentionality to the conference could help make applicants more likely to commit, and help organisers decide between applicants.Removing the multiple conference check box so you can only apply to one conference at a time to improve intentionality.Consider making all tickets (barring needs-based applicants) paid so that people feel a greater sense of commitment towards attending. We will write more about this in a future post.Create a “Virtual Only” option for people who:Don’t want to attend in-person, but do want to be listed as a resource for others (e.g. see Vael’s comment)Initially wanted to attend in-person, but aren’t able to and still want access to the attendee list (e.g. see Lorenzo’s comment)Reminders & Comms to Accepted ApplicantsSend more (at least 3) reminders to accepted applicants to register, with reminders about how many people are waitlisted to incentivise people to release their ticketReminder to register: Hey X, please register for this event! If you don’t register by August 18th, we’ll give your spot to someone on the waitlist. Registering helps us ensure that we get estimates to vendors on time, and makes sure that the application process is fair for everyone. Read more here (link to a post explaining the downsides)Release accepted but not registered applicants' tickets after a certain date and clearly communicate that ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposed improvements to EAG(x) admissions process, published by Elika on January 30, 2023 on The Effective Altruism Forum.This post is part of an ongoing series: Events in EA: Learnings and Critiques.Elika worked (as a contractor) for the Centre for Effective Altruism (CEA) to run EAGxBerkeley. This post is written in a personal capacity, and are not the views of or endorsed by CEA.ApplicationsDo a better job of auto filling application forms with information from previous applications.Applicants with less community experience who might be a good fit for the conference don’t necessarily know community communication norms. The following suggestions aim to provide more context and transparency around the process of application processing, without providing information that would allow applicants to goodhart.Make important questions mandatory to reduce the rate of incomplete applications.Adding context to questions (e.g. for some questions, if the answer is not at all related to EA it's hard to know if the person misunderstood the question, or just has had very limited interaction with EA)Asking applicants to provide specific, concrete answers to questions rather than vague generalities (e.g. “I’d like to learn more about EA” is very vague, it would be more helpful to know what specifically they’d like to learn, and what they already know)Tell applicants not to assume the organiser knows who you are (I think it's generally bad for culture and status).AdmissionsMake admissions policies clear (ex. this conference is for people in X region), both in public communications and throughout the application process.Don’t accept applicants after the deadline has passed. If you do want to accept last-minute applicants, have a consistent and fair criteria for accepting late applicationsExperiment with adding different questions to the application, such as:Check yes if you’re okay giving up your spot for a first time attendee if the conference is at capacity. We may still accept you if we think you could help other attendees.Checking yes if you’re applying but are uncertain you will register and attend (excluding last minute emergencies) to better forecast how many accepted applicants will actually apply.Check yes to “I can attend with <1 week’s notice” for people (e.g. locals) who are willing / able to attend last minute. (H/T Larks)"Why do you think it would be valuable to attend this specific conference" Adding more intentionality to the conference could help make applicants more likely to commit, and help organisers decide between applicants.Removing the multiple conference check box so you can only apply to one conference at a time to improve intentionality.Consider making all tickets (barring needs-based applicants) paid so that people feel a greater sense of commitment towards attending. We will write more about this in a future post.Create a “Virtual Only” option for people who:Don’t want to attend in-person, but do want to be listed as a resource for others (e.g. see Vael’s comment)Initially wanted to attend in-person, but aren’t able to and still want access to the attendee list (e.g. see Lorenzo’s comment)Reminders & Comms to Accepted ApplicantsSend more (at least 3) reminders to accepted applicants to register, with reminders about how many people are waitlisted to incentivise people to release their ticketReminder to register: Hey X, please register for this event! If you don’t register by August 18th, we’ll give your spot to someone on the waitlist. Registering helps us ensure that we get estimates to vendors on time, and makes sure that the application process is fair for everyone. Read more here (link to a post explaining the downsides)Release accepted but not registered applicants' tickets after a certain date and clearly communicate that ...]]>
Elika https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:52 None full 4650
wtzGZuH9EukEuEZmB_NL_EA_EA EA - EA novel published on Amazon by timunderwood Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA novel published on Amazon, published by timunderwood on January 29, 2023 on The Effective Altruism Forum.Just so everyone knows, the novel I wrote to promote EA has been published on Amazon. This is not a request to purchase it, or try to directly support it in any particular way.However if the blurb does sound interesting, go for it :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
timunderwood https://forum.effectivealtruism.org/posts/wtzGZuH9EukEuEZmB/ea-novel-published-on-amazon Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA novel published on Amazon, published by timunderwood on January 29, 2023 on The Effective Altruism Forum.Just so everyone knows, the novel I wrote to promote EA has been published on Amazon. This is not a request to purchase it, or try to directly support it in any particular way.However if the blurb does sound interesting, go for it :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 30 Jan 2023 09:14:47 +0000 EA - EA novel published on Amazon by timunderwood Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA novel published on Amazon, published by timunderwood on January 29, 2023 on The Effective Altruism Forum.Just so everyone knows, the novel I wrote to promote EA has been published on Amazon. This is not a request to purchase it, or try to directly support it in any particular way.However if the blurb does sound interesting, go for it :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA novel published on Amazon, published by timunderwood on January 29, 2023 on The Effective Altruism Forum.Just so everyone knows, the novel I wrote to promote EA has been published on Amazon. This is not a request to purchase it, or try to directly support it in any particular way.However if the blurb does sound interesting, go for it :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
timunderwood https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:39 None full 4653
45EMKCpPesvhGxwey_NL_EA_EA EA - Protect Our Future's Crypto Politics by Mohammad Ismam Huda Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Protect Our Future's Crypto Politics, published by Mohammad Ismam Huda on January 29, 2023 on The Effective Altruism Forum.TLDR; The Effective Altruism-linked political fund, Protect Our Future, has come under scrutiny for perceived cryptocurrency lobbying. Skip to section Irregularities to see the findings of my investigation into the group.BackgroundThe effective altruism movement has become increasingly interested and involved in politics. A major vehicle for this involvement has been Protect Our Future PAC (POF), a political fund active in the recent 2022 US midterm elections. The fund ostensibly sought to lobby for effective altruist cause areas, with a particular emphasis on longtermism and pandemic prevention.POF entered the electoral scene with a bang funding the campaign of Effective Altruist Carrick Flynn to the tune of $10 million. The fund proceeded to donate to 27 candidates in total, exclusively in the Democratic Primaries.In total POF raised and spent $28 million, nearly all of this coming from Sam Bankman-Fried (SBF) the cryptocurrency tycoon. Despite SBF's financial support, POF has repeatedly denied the fund served SBF's financial interests in cryptocurrency.IrregularitiesAs a result of my interest in POF, I proceeded to research and investigate the group and have discovered what to me are irregularities.I reached out to the President (Michael Sadowksy) and Press Spokesman (Mike Levine) of Protect Our Future, urging them to provide an explanation, but received no response. My inquiry can be viewed here.Cryptocurrency Interests Amongst CandidatesOf the 27 endorsed candidates I have been able to link 16 as having meaningful pro-cryptocurrency sentiment and/or comittee positions that would make them target-worthy by cryptocurrency lobbyists.A selection of candidates and their ties are presented here. A full table of all candidates endorsed by POF is available in the appendix.POF CandidateCrypto Sympathies/ Conflict of InterestAbigail Spanberger (VA-07)://www.washingtonexaminer.com/policy/economy/sec-official-steps-down-ftx-connectionAdam Hollier (MI-13)Received donations from Web3Forward PAC, a cryptocurrency lobbying group - $412K Gilbert Villegas (IL-03)Expressed pro-cryptocurrency sentiments as an alderman.Rep. Spanberger sits on the subcomittee which oversees the CFTC.It is known that SBF and other crypto adherents were lobbying authorities to shift crypto oversight from the SEC to CFTC.The presence of cryptocurrency advocates amongst POF's endorsed candidates was also picked up by the press:The Bankman-Frieds have insisted the organizations are focused on pandemic preparedness and aren’t about influencing cryptocurrency policy. But endorsed candidates are sometimes major supporters of the crypto sector. Lafazan, for example, said back in June he would take campaign donations in cryptocurrency and said he would file a bill in the Nassau County Legislature to create a cryptocurrency task force. Torres, too, who won support from Guarding Against Pandemics, has also publicly boosted the industry.Ray La Raja, a political scientist who studies campaign finance at the University of Massachusetts – Amherst, says it’s likely no coincidence that Protect Our Future’s funds have gone to establishment candidates at a time when Congress is considering cryptocurrency regulations.One, Maxwell Frost, a Florida Democrat who has become well known as the first Gen-Z member of Congress, got $8,700 in contributions from the Bankman-Fried brothers and nearly $1 million in help from Protect Our Future, almost all of it after announcing a “crypto-advisory council” for his campaign.Opportunity For Tommorrow PACThere is also another fund that appears to have close ties to Protect Our Future called Opportunity For Tomorrow. The fund was exclusively donated...]]>
Mohammad Ismam Huda https://forum.effectivealtruism.org/posts/45EMKCpPesvhGxwey/protect-our-future-s-crypto-politics Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Protect Our Future's Crypto Politics, published by Mohammad Ismam Huda on January 29, 2023 on The Effective Altruism Forum.TLDR; The Effective Altruism-linked political fund, Protect Our Future, has come under scrutiny for perceived cryptocurrency lobbying. Skip to section Irregularities to see the findings of my investigation into the group.BackgroundThe effective altruism movement has become increasingly interested and involved in politics. A major vehicle for this involvement has been Protect Our Future PAC (POF), a political fund active in the recent 2022 US midterm elections. The fund ostensibly sought to lobby for effective altruist cause areas, with a particular emphasis on longtermism and pandemic prevention.POF entered the electoral scene with a bang funding the campaign of Effective Altruist Carrick Flynn to the tune of $10 million. The fund proceeded to donate to 27 candidates in total, exclusively in the Democratic Primaries.In total POF raised and spent $28 million, nearly all of this coming from Sam Bankman-Fried (SBF) the cryptocurrency tycoon. Despite SBF's financial support, POF has repeatedly denied the fund served SBF's financial interests in cryptocurrency.IrregularitiesAs a result of my interest in POF, I proceeded to research and investigate the group and have discovered what to me are irregularities.I reached out to the President (Michael Sadowksy) and Press Spokesman (Mike Levine) of Protect Our Future, urging them to provide an explanation, but received no response. My inquiry can be viewed here.Cryptocurrency Interests Amongst CandidatesOf the 27 endorsed candidates I have been able to link 16 as having meaningful pro-cryptocurrency sentiment and/or comittee positions that would make them target-worthy by cryptocurrency lobbyists.A selection of candidates and their ties are presented here. A full table of all candidates endorsed by POF is available in the appendix.POF CandidateCrypto Sympathies/ Conflict of InterestAbigail Spanberger (VA-07)://www.washingtonexaminer.com/policy/economy/sec-official-steps-down-ftx-connectionAdam Hollier (MI-13)Received donations from Web3Forward PAC, a cryptocurrency lobbying group - $412K Gilbert Villegas (IL-03)Expressed pro-cryptocurrency sentiments as an alderman.Rep. Spanberger sits on the subcomittee which oversees the CFTC.It is known that SBF and other crypto adherents were lobbying authorities to shift crypto oversight from the SEC to CFTC.The presence of cryptocurrency advocates amongst POF's endorsed candidates was also picked up by the press:The Bankman-Frieds have insisted the organizations are focused on pandemic preparedness and aren’t about influencing cryptocurrency policy. But endorsed candidates are sometimes major supporters of the crypto sector. Lafazan, for example, said back in June he would take campaign donations in cryptocurrency and said he would file a bill in the Nassau County Legislature to create a cryptocurrency task force. Torres, too, who won support from Guarding Against Pandemics, has also publicly boosted the industry.Ray La Raja, a political scientist who studies campaign finance at the University of Massachusetts – Amherst, says it’s likely no coincidence that Protect Our Future’s funds have gone to establishment candidates at a time when Congress is considering cryptocurrency regulations.One, Maxwell Frost, a Florida Democrat who has become well known as the first Gen-Z member of Congress, got $8,700 in contributions from the Bankman-Fried brothers and nearly $1 million in help from Protect Our Future, almost all of it after announcing a “crypto-advisory council” for his campaign.Opportunity For Tommorrow PACThere is also another fund that appears to have close ties to Protect Our Future called Opportunity For Tomorrow. The fund was exclusively donated...]]>
Mon, 30 Jan 2023 01:28:55 +0000 EA - Protect Our Future's Crypto Politics by Mohammad Ismam Huda Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Protect Our Future's Crypto Politics, published by Mohammad Ismam Huda on January 29, 2023 on The Effective Altruism Forum.TLDR; The Effective Altruism-linked political fund, Protect Our Future, has come under scrutiny for perceived cryptocurrency lobbying. Skip to section Irregularities to see the findings of my investigation into the group.BackgroundThe effective altruism movement has become increasingly interested and involved in politics. A major vehicle for this involvement has been Protect Our Future PAC (POF), a political fund active in the recent 2022 US midterm elections. The fund ostensibly sought to lobby for effective altruist cause areas, with a particular emphasis on longtermism and pandemic prevention.POF entered the electoral scene with a bang funding the campaign of Effective Altruist Carrick Flynn to the tune of $10 million. The fund proceeded to donate to 27 candidates in total, exclusively in the Democratic Primaries.In total POF raised and spent $28 million, nearly all of this coming from Sam Bankman-Fried (SBF) the cryptocurrency tycoon. Despite SBF's financial support, POF has repeatedly denied the fund served SBF's financial interests in cryptocurrency.IrregularitiesAs a result of my interest in POF, I proceeded to research and investigate the group and have discovered what to me are irregularities.I reached out to the President (Michael Sadowksy) and Press Spokesman (Mike Levine) of Protect Our Future, urging them to provide an explanation, but received no response. My inquiry can be viewed here.Cryptocurrency Interests Amongst CandidatesOf the 27 endorsed candidates I have been able to link 16 as having meaningful pro-cryptocurrency sentiment and/or comittee positions that would make them target-worthy by cryptocurrency lobbyists.A selection of candidates and their ties are presented here. A full table of all candidates endorsed by POF is available in the appendix.POF CandidateCrypto Sympathies/ Conflict of InterestAbigail Spanberger (VA-07)://www.washingtonexaminer.com/policy/economy/sec-official-steps-down-ftx-connectionAdam Hollier (MI-13)Received donations from Web3Forward PAC, a cryptocurrency lobbying group - $412K Gilbert Villegas (IL-03)Expressed pro-cryptocurrency sentiments as an alderman.Rep. Spanberger sits on the subcomittee which oversees the CFTC.It is known that SBF and other crypto adherents were lobbying authorities to shift crypto oversight from the SEC to CFTC.The presence of cryptocurrency advocates amongst POF's endorsed candidates was also picked up by the press:The Bankman-Frieds have insisted the organizations are focused on pandemic preparedness and aren’t about influencing cryptocurrency policy. But endorsed candidates are sometimes major supporters of the crypto sector. Lafazan, for example, said back in June he would take campaign donations in cryptocurrency and said he would file a bill in the Nassau County Legislature to create a cryptocurrency task force. Torres, too, who won support from Guarding Against Pandemics, has also publicly boosted the industry.Ray La Raja, a political scientist who studies campaign finance at the University of Massachusetts – Amherst, says it’s likely no coincidence that Protect Our Future’s funds have gone to establishment candidates at a time when Congress is considering cryptocurrency regulations.One, Maxwell Frost, a Florida Democrat who has become well known as the first Gen-Z member of Congress, got $8,700 in contributions from the Bankman-Fried brothers and nearly $1 million in help from Protect Our Future, almost all of it after announcing a “crypto-advisory council” for his campaign.Opportunity For Tommorrow PACThere is also another fund that appears to have close ties to Protect Our Future called Opportunity For Tomorrow. The fund was exclusively donated...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Protect Our Future's Crypto Politics, published by Mohammad Ismam Huda on January 29, 2023 on The Effective Altruism Forum.TLDR; The Effective Altruism-linked political fund, Protect Our Future, has come under scrutiny for perceived cryptocurrency lobbying. Skip to section Irregularities to see the findings of my investigation into the group.BackgroundThe effective altruism movement has become increasingly interested and involved in politics. A major vehicle for this involvement has been Protect Our Future PAC (POF), a political fund active in the recent 2022 US midterm elections. The fund ostensibly sought to lobby for effective altruist cause areas, with a particular emphasis on longtermism and pandemic prevention.POF entered the electoral scene with a bang funding the campaign of Effective Altruist Carrick Flynn to the tune of $10 million. The fund proceeded to donate to 27 candidates in total, exclusively in the Democratic Primaries.In total POF raised and spent $28 million, nearly all of this coming from Sam Bankman-Fried (SBF) the cryptocurrency tycoon. Despite SBF's financial support, POF has repeatedly denied the fund served SBF's financial interests in cryptocurrency.IrregularitiesAs a result of my interest in POF, I proceeded to research and investigate the group and have discovered what to me are irregularities.I reached out to the President (Michael Sadowksy) and Press Spokesman (Mike Levine) of Protect Our Future, urging them to provide an explanation, but received no response. My inquiry can be viewed here.Cryptocurrency Interests Amongst CandidatesOf the 27 endorsed candidates I have been able to link 16 as having meaningful pro-cryptocurrency sentiment and/or comittee positions that would make them target-worthy by cryptocurrency lobbyists.A selection of candidates and their ties are presented here. A full table of all candidates endorsed by POF is available in the appendix.POF CandidateCrypto Sympathies/ Conflict of InterestAbigail Spanberger (VA-07)://www.washingtonexaminer.com/policy/economy/sec-official-steps-down-ftx-connectionAdam Hollier (MI-13)Received donations from Web3Forward PAC, a cryptocurrency lobbying group - $412K Gilbert Villegas (IL-03)Expressed pro-cryptocurrency sentiments as an alderman.Rep. Spanberger sits on the subcomittee which oversees the CFTC.It is known that SBF and other crypto adherents were lobbying authorities to shift crypto oversight from the SEC to CFTC.The presence of cryptocurrency advocates amongst POF's endorsed candidates was also picked up by the press:The Bankman-Frieds have insisted the organizations are focused on pandemic preparedness and aren’t about influencing cryptocurrency policy. But endorsed candidates are sometimes major supporters of the crypto sector. Lafazan, for example, said back in June he would take campaign donations in cryptocurrency and said he would file a bill in the Nassau County Legislature to create a cryptocurrency task force. Torres, too, who won support from Guarding Against Pandemics, has also publicly boosted the industry.Ray La Raja, a political scientist who studies campaign finance at the University of Massachusetts – Amherst, says it’s likely no coincidence that Protect Our Future’s funds have gone to establishment candidates at a time when Congress is considering cryptocurrency regulations.One, Maxwell Frost, a Florida Democrat who has become well known as the first Gen-Z member of Congress, got $8,700 in contributions from the Bankman-Fried brothers and nearly $1 million in help from Protect Our Future, almost all of it after announcing a “crypto-advisory council” for his campaign.Opportunity For Tommorrow PACThere is also another fund that appears to have close ties to Protect Our Future called Opportunity For Tomorrow. The fund was exclusively donated...]]>
Mohammad Ismam Huda https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:59 None full 4651
yZLjq2bfwpdBFTQMD_NL_EA_EA EA - Biosecurity newsletters you should subscribe to by Sofya Lebedeva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Biosecurity newsletters you should subscribe to, published by Sofya Lebedeva on January 29, 2023 on The Effective Altruism Forum.TL;DRI searched for other lists of biosecurity newsletters specifically and didn’t find one that suited my needs, so I made one! Please leave a comment with any other newsletters that I missed so that I can add them. I hope you find something useful in this list.NewslettersJohns Hopkins Center for Health Security (CHS) subscribe hereHealth Security Headlines (also from CHS) subscribe hereCenter for Infectious Disease Research and Policy (CIDRAP) subscribe hereGlobal Biodefence subscribe herePandora Report subscribe hereNuclear Threat Initiative (NTI) subscribe hereBipartisan Commission on Biodefence subscribe hereThe Association for Biosafety and Biosecurity (ABSA) subscribe here (scroll to the bottom)Council on Strategic Risks (CSR) subscribe hereBiomedical Advanced Research and Development Authority (BARDA) subscribe hereAdministration for Strategic Preparedness and Response (ASPR) do not recommend subscribing as it is very narrowly biosafety oriented and is sometimes poorly referenced.CommentsI sourced a lot of these from recommendations by Caitlin Walker, as well as from looking through various posts by Chris Bakerlee and Tessa Alexanian. Please don’t hesitate to point out any links that are broken, comment about the relative quality of the above newsletters, or comment with any newsletters that I have missed. Thank you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sofya Lebedeva https://forum.effectivealtruism.org/posts/yZLjq2bfwpdBFTQMD/biosecurity-newsletters-you-should-subscribe-to Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Biosecurity newsletters you should subscribe to, published by Sofya Lebedeva on January 29, 2023 on The Effective Altruism Forum.TL;DRI searched for other lists of biosecurity newsletters specifically and didn’t find one that suited my needs, so I made one! Please leave a comment with any other newsletters that I missed so that I can add them. I hope you find something useful in this list.NewslettersJohns Hopkins Center for Health Security (CHS) subscribe hereHealth Security Headlines (also from CHS) subscribe hereCenter for Infectious Disease Research and Policy (CIDRAP) subscribe hereGlobal Biodefence subscribe herePandora Report subscribe hereNuclear Threat Initiative (NTI) subscribe hereBipartisan Commission on Biodefence subscribe hereThe Association for Biosafety and Biosecurity (ABSA) subscribe here (scroll to the bottom)Council on Strategic Risks (CSR) subscribe hereBiomedical Advanced Research and Development Authority (BARDA) subscribe hereAdministration for Strategic Preparedness and Response (ASPR) do not recommend subscribing as it is very narrowly biosafety oriented and is sometimes poorly referenced.CommentsI sourced a lot of these from recommendations by Caitlin Walker, as well as from looking through various posts by Chris Bakerlee and Tessa Alexanian. Please don’t hesitate to point out any links that are broken, comment about the relative quality of the above newsletters, or comment with any newsletters that I have missed. Thank you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 29 Jan 2023 22:36:32 +0000 EA - Biosecurity newsletters you should subscribe to by Sofya Lebedeva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Biosecurity newsletters you should subscribe to, published by Sofya Lebedeva on January 29, 2023 on The Effective Altruism Forum.TL;DRI searched for other lists of biosecurity newsletters specifically and didn’t find one that suited my needs, so I made one! Please leave a comment with any other newsletters that I missed so that I can add them. I hope you find something useful in this list.NewslettersJohns Hopkins Center for Health Security (CHS) subscribe hereHealth Security Headlines (also from CHS) subscribe hereCenter for Infectious Disease Research and Policy (CIDRAP) subscribe hereGlobal Biodefence subscribe herePandora Report subscribe hereNuclear Threat Initiative (NTI) subscribe hereBipartisan Commission on Biodefence subscribe hereThe Association for Biosafety and Biosecurity (ABSA) subscribe here (scroll to the bottom)Council on Strategic Risks (CSR) subscribe hereBiomedical Advanced Research and Development Authority (BARDA) subscribe hereAdministration for Strategic Preparedness and Response (ASPR) do not recommend subscribing as it is very narrowly biosafety oriented and is sometimes poorly referenced.CommentsI sourced a lot of these from recommendations by Caitlin Walker, as well as from looking through various posts by Chris Bakerlee and Tessa Alexanian. Please don’t hesitate to point out any links that are broken, comment about the relative quality of the above newsletters, or comment with any newsletters that I have missed. Thank you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Biosecurity newsletters you should subscribe to, published by Sofya Lebedeva on January 29, 2023 on The Effective Altruism Forum.TL;DRI searched for other lists of biosecurity newsletters specifically and didn’t find one that suited my needs, so I made one! Please leave a comment with any other newsletters that I missed so that I can add them. I hope you find something useful in this list.NewslettersJohns Hopkins Center for Health Security (CHS) subscribe hereHealth Security Headlines (also from CHS) subscribe hereCenter for Infectious Disease Research and Policy (CIDRAP) subscribe hereGlobal Biodefence subscribe herePandora Report subscribe hereNuclear Threat Initiative (NTI) subscribe hereBipartisan Commission on Biodefence subscribe hereThe Association for Biosafety and Biosecurity (ABSA) subscribe here (scroll to the bottom)Council on Strategic Risks (CSR) subscribe hereBiomedical Advanced Research and Development Authority (BARDA) subscribe hereAdministration for Strategic Preparedness and Response (ASPR) do not recommend subscribing as it is very narrowly biosafety oriented and is sometimes poorly referenced.CommentsI sourced a lot of these from recommendations by Caitlin Walker, as well as from looking through various posts by Chris Bakerlee and Tessa Alexanian. Please don’t hesitate to point out any links that are broken, comment about the relative quality of the above newsletters, or comment with any newsletters that I have missed. Thank you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sofya Lebedeva https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:50 None full 4640
2CMgCzvLjZydy4oTe_NL_EA_EA EA - Advice I found helpful in 2022 by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice I found helpful in 2022, published by Akash on January 28, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/2CMgCzvLjZydy4oTe/advice-i-found-helpful-in-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice I found helpful in 2022, published by Akash on January 28, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 29 Jan 2023 21:00:51 +0000 EA - Advice I found helpful in 2022 by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice I found helpful in 2022, published by Akash on January 28, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice I found helpful in 2022, published by Akash on January 28, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:26 None full 4643
eLY4GKTgxxdtBHEep_NL_EA_EA EA - EA is going through a bunch of conflict. Here’s some social technologies that may help. by Severin T. Seehrich Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is going through a bunch of conflict. Here’s some social technologies that may help., published by Severin T. Seehrich on January 29, 2023 on The Effective Altruism Forum.There’s two ways I can see EA develop after the latest community drama: The community can have all the necessary debates in good faith, grow past them, and become a more mature movement. Or, neither side feels heard, nothing changes, and way too many competent people lose trust in EA and focus their energies elsewhere. After all, I’ve heard people from either side declare EA’s moral bankruptcy in consequence of the Bostrom email debate.In order to shift the balance towards a little bit more of movement maturation and a little bit less of quitting, we need to talk. The conversations to be had here are between you and me, between him and them, and not between a small band of anonymous rebels on the one side and the EA establishment at CEA on the other. The more isolated and unheard people feel in their opinions, the more angry and polarized things get. Starting to pop the bubbles starts right in our own local EA groups.For that, I’d like to share some tools that I found useful for resolving conflict in groups. It might be worthwhile to explore them on your own, and it might be even more worthwhile to spread them in your local EA group.But how does this fit into our other community building work?To answer this question, I’d like to summarize my favorite EAF post on community building of all times: Jan Kulveit’s “Different forms of capital”.He claims that there are different forms of capital we can optimize for in community building, and that there are two EA tends to over-optimize for:Financial capital: The total amount of donations committed and sent to our charities of choice.Human capital: The number of active community members and their level of commitment.Further, he argues, there are two other forms of capital EA tends to neglect, probably because they are harder to measure:Network capital: The number and closeness of ties between community members.Structural capital: The institutions and processes we have in place for doing stuff in a sensibly structured manner.And I’d like to add a fifth dimension:Memetic capital: The sum total of the useful ideas, psycho- and social technologies we have widely available in the community. An instance of memetic capital without which EA would be unthinkable is the capability to use language. Some other examples for memetic capital include knowledge about priorization, forecasting, or productivity tools, about where and how to apply for grants, how we think about mental health and staying productive in the long-term - and our strategies for addressing and resolving conflict with friends, colleagues, and fellow EAs.I think it would be wise for EA community building to start explicitly taking all five of these factors into account. In line with that, this post aims at increasing the community’s memetic capital. Trying these tools one-off to resolve a particular issue is good, but what I’m hoping for is that over time, they just become part of the way we do things. That way, they’d have the strongest and long-lasting positive impact on our network capital through frequent and casual prevention and repair of conflict.Social technologies for resolving and preventing conflictRepairRule 0 (coined by Seek Healing)WHY: The longer we sit on irritations, the bigger they grow, until at some point we write very, very long and elaborate EA Forum posts. Addressing points of conflict sooner rather than later helps create less friction and improve feedback loops in groups of people, no matter how uncomfortable it initially is.HOW: Rule 0 is a rule of thumb that goes like this: “If you feel queasy about addressing something with somebody, that’s a sign that y...]]>
Severin T. Seehrich https://forum.effectivealtruism.org/posts/eLY4GKTgxxdtBHEep/ea-is-going-through-a-bunch-of-conflict-here-s-some-social Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is going through a bunch of conflict. Here’s some social technologies that may help., published by Severin T. Seehrich on January 29, 2023 on The Effective Altruism Forum.There’s two ways I can see EA develop after the latest community drama: The community can have all the necessary debates in good faith, grow past them, and become a more mature movement. Or, neither side feels heard, nothing changes, and way too many competent people lose trust in EA and focus their energies elsewhere. After all, I’ve heard people from either side declare EA’s moral bankruptcy in consequence of the Bostrom email debate.In order to shift the balance towards a little bit more of movement maturation and a little bit less of quitting, we need to talk. The conversations to be had here are between you and me, between him and them, and not between a small band of anonymous rebels on the one side and the EA establishment at CEA on the other. The more isolated and unheard people feel in their opinions, the more angry and polarized things get. Starting to pop the bubbles starts right in our own local EA groups.For that, I’d like to share some tools that I found useful for resolving conflict in groups. It might be worthwhile to explore them on your own, and it might be even more worthwhile to spread them in your local EA group.But how does this fit into our other community building work?To answer this question, I’d like to summarize my favorite EAF post on community building of all times: Jan Kulveit’s “Different forms of capital”.He claims that there are different forms of capital we can optimize for in community building, and that there are two EA tends to over-optimize for:Financial capital: The total amount of donations committed and sent to our charities of choice.Human capital: The number of active community members and their level of commitment.Further, he argues, there are two other forms of capital EA tends to neglect, probably because they are harder to measure:Network capital: The number and closeness of ties between community members.Structural capital: The institutions and processes we have in place for doing stuff in a sensibly structured manner.And I’d like to add a fifth dimension:Memetic capital: The sum total of the useful ideas, psycho- and social technologies we have widely available in the community. An instance of memetic capital without which EA would be unthinkable is the capability to use language. Some other examples for memetic capital include knowledge about priorization, forecasting, or productivity tools, about where and how to apply for grants, how we think about mental health and staying productive in the long-term - and our strategies for addressing and resolving conflict with friends, colleagues, and fellow EAs.I think it would be wise for EA community building to start explicitly taking all five of these factors into account. In line with that, this post aims at increasing the community’s memetic capital. Trying these tools one-off to resolve a particular issue is good, but what I’m hoping for is that over time, they just become part of the way we do things. That way, they’d have the strongest and long-lasting positive impact on our network capital through frequent and casual prevention and repair of conflict.Social technologies for resolving and preventing conflictRepairRule 0 (coined by Seek Healing)WHY: The longer we sit on irritations, the bigger they grow, until at some point we write very, very long and elaborate EA Forum posts. Addressing points of conflict sooner rather than later helps create less friction and improve feedback loops in groups of people, no matter how uncomfortable it initially is.HOW: Rule 0 is a rule of thumb that goes like this: “If you feel queasy about addressing something with somebody, that’s a sign that y...]]>
Sun, 29 Jan 2023 17:29:41 +0000 EA - EA is going through a bunch of conflict. Here’s some social technologies that may help. by Severin T. Seehrich Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is going through a bunch of conflict. Here’s some social technologies that may help., published by Severin T. Seehrich on January 29, 2023 on The Effective Altruism Forum.There’s two ways I can see EA develop after the latest community drama: The community can have all the necessary debates in good faith, grow past them, and become a more mature movement. Or, neither side feels heard, nothing changes, and way too many competent people lose trust in EA and focus their energies elsewhere. After all, I’ve heard people from either side declare EA’s moral bankruptcy in consequence of the Bostrom email debate.In order to shift the balance towards a little bit more of movement maturation and a little bit less of quitting, we need to talk. The conversations to be had here are between you and me, between him and them, and not between a small band of anonymous rebels on the one side and the EA establishment at CEA on the other. The more isolated and unheard people feel in their opinions, the more angry and polarized things get. Starting to pop the bubbles starts right in our own local EA groups.For that, I’d like to share some tools that I found useful for resolving conflict in groups. It might be worthwhile to explore them on your own, and it might be even more worthwhile to spread them in your local EA group.But how does this fit into our other community building work?To answer this question, I’d like to summarize my favorite EAF post on community building of all times: Jan Kulveit’s “Different forms of capital”.He claims that there are different forms of capital we can optimize for in community building, and that there are two EA tends to over-optimize for:Financial capital: The total amount of donations committed and sent to our charities of choice.Human capital: The number of active community members and their level of commitment.Further, he argues, there are two other forms of capital EA tends to neglect, probably because they are harder to measure:Network capital: The number and closeness of ties between community members.Structural capital: The institutions and processes we have in place for doing stuff in a sensibly structured manner.And I’d like to add a fifth dimension:Memetic capital: The sum total of the useful ideas, psycho- and social technologies we have widely available in the community. An instance of memetic capital without which EA would be unthinkable is the capability to use language. Some other examples for memetic capital include knowledge about priorization, forecasting, or productivity tools, about where and how to apply for grants, how we think about mental health and staying productive in the long-term - and our strategies for addressing and resolving conflict with friends, colleagues, and fellow EAs.I think it would be wise for EA community building to start explicitly taking all five of these factors into account. In line with that, this post aims at increasing the community’s memetic capital. Trying these tools one-off to resolve a particular issue is good, but what I’m hoping for is that over time, they just become part of the way we do things. That way, they’d have the strongest and long-lasting positive impact on our network capital through frequent and casual prevention and repair of conflict.Social technologies for resolving and preventing conflictRepairRule 0 (coined by Seek Healing)WHY: The longer we sit on irritations, the bigger they grow, until at some point we write very, very long and elaborate EA Forum posts. Addressing points of conflict sooner rather than later helps create less friction and improve feedback loops in groups of people, no matter how uncomfortable it initially is.HOW: Rule 0 is a rule of thumb that goes like this: “If you feel queasy about addressing something with somebody, that’s a sign that y...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is going through a bunch of conflict. Here’s some social technologies that may help., published by Severin T. Seehrich on January 29, 2023 on The Effective Altruism Forum.There’s two ways I can see EA develop after the latest community drama: The community can have all the necessary debates in good faith, grow past them, and become a more mature movement. Or, neither side feels heard, nothing changes, and way too many competent people lose trust in EA and focus their energies elsewhere. After all, I’ve heard people from either side declare EA’s moral bankruptcy in consequence of the Bostrom email debate.In order to shift the balance towards a little bit more of movement maturation and a little bit less of quitting, we need to talk. The conversations to be had here are between you and me, between him and them, and not between a small band of anonymous rebels on the one side and the EA establishment at CEA on the other. The more isolated and unheard people feel in their opinions, the more angry and polarized things get. Starting to pop the bubbles starts right in our own local EA groups.For that, I’d like to share some tools that I found useful for resolving conflict in groups. It might be worthwhile to explore them on your own, and it might be even more worthwhile to spread them in your local EA group.But how does this fit into our other community building work?To answer this question, I’d like to summarize my favorite EAF post on community building of all times: Jan Kulveit’s “Different forms of capital”.He claims that there are different forms of capital we can optimize for in community building, and that there are two EA tends to over-optimize for:Financial capital: The total amount of donations committed and sent to our charities of choice.Human capital: The number of active community members and their level of commitment.Further, he argues, there are two other forms of capital EA tends to neglect, probably because they are harder to measure:Network capital: The number and closeness of ties between community members.Structural capital: The institutions and processes we have in place for doing stuff in a sensibly structured manner.And I’d like to add a fifth dimension:Memetic capital: The sum total of the useful ideas, psycho- and social technologies we have widely available in the community. An instance of memetic capital without which EA would be unthinkable is the capability to use language. Some other examples for memetic capital include knowledge about priorization, forecasting, or productivity tools, about where and how to apply for grants, how we think about mental health and staying productive in the long-term - and our strategies for addressing and resolving conflict with friends, colleagues, and fellow EAs.I think it would be wise for EA community building to start explicitly taking all five of these factors into account. In line with that, this post aims at increasing the community’s memetic capital. Trying these tools one-off to resolve a particular issue is good, but what I’m hoping for is that over time, they just become part of the way we do things. That way, they’d have the strongest and long-lasting positive impact on our network capital through frequent and casual prevention and repair of conflict.Social technologies for resolving and preventing conflictRepairRule 0 (coined by Seek Healing)WHY: The longer we sit on irritations, the bigger they grow, until at some point we write very, very long and elaborate EA Forum posts. Addressing points of conflict sooner rather than later helps create less friction and improve feedback loops in groups of people, no matter how uncomfortable it initially is.HOW: Rule 0 is a rule of thumb that goes like this: “If you feel queasy about addressing something with somebody, that’s a sign that y...]]>
Severin T. Seehrich https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:51 None full 4652
f5quXKA7k5Qzu7z7k_NL_EA_EA EA - Giving Multiplier Paper by James Montavon Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Giving Multiplier Paper, published by James Montavon on January 28, 2023 on The Effective Altruism Forum.The Joshua Greene and Lucius Caviola article about their givingmultiplier.org work has just been published. Here's the abstract:The most effective charities are hundreds of times more impactful than typical charities. However, most donors favor charities with personal/emotional appeal over effectiveness. We gave donors the option to split their donations between their personal favorite charity and an expert-recommended highly effective charity. This bundling technique increased donors’ impact without undermining their altruistic motivation, boosting effective donations by 76%. An additional boost of 55% was achieved by offering matching donations with increasing rates for allocating more to the highly effective charity. We show further that matching funds can be provided by donors focused on effectiveness through a self-sustaining process of micromatching. We applied these techniques in a new online donation platform (GivingMultiplier.org), which fundraised more than $1.5 million in its first 14 months.While prior applied research on altruism has focused on the quantity of giving, the present results demonstrate the value of focusing on the effectiveness of altruistic behavior.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
James Montavon https://forum.effectivealtruism.org/posts/f5quXKA7k5Qzu7z7k/giving-multiplier-paper Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Giving Multiplier Paper, published by James Montavon on January 28, 2023 on The Effective Altruism Forum.The Joshua Greene and Lucius Caviola article about their givingmultiplier.org work has just been published. Here's the abstract:The most effective charities are hundreds of times more impactful than typical charities. However, most donors favor charities with personal/emotional appeal over effectiveness. We gave donors the option to split their donations between their personal favorite charity and an expert-recommended highly effective charity. This bundling technique increased donors’ impact without undermining their altruistic motivation, boosting effective donations by 76%. An additional boost of 55% was achieved by offering matching donations with increasing rates for allocating more to the highly effective charity. We show further that matching funds can be provided by donors focused on effectiveness through a self-sustaining process of micromatching. We applied these techniques in a new online donation platform (GivingMultiplier.org), which fundraised more than $1.5 million in its first 14 months.While prior applied research on altruism has focused on the quantity of giving, the present results demonstrate the value of focusing on the effectiveness of altruistic behavior.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 29 Jan 2023 07:55:42 +0000 EA - Giving Multiplier Paper by James Montavon Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Giving Multiplier Paper, published by James Montavon on January 28, 2023 on The Effective Altruism Forum.The Joshua Greene and Lucius Caviola article about their givingmultiplier.org work has just been published. Here's the abstract:The most effective charities are hundreds of times more impactful than typical charities. However, most donors favor charities with personal/emotional appeal over effectiveness. We gave donors the option to split their donations between their personal favorite charity and an expert-recommended highly effective charity. This bundling technique increased donors’ impact without undermining their altruistic motivation, boosting effective donations by 76%. An additional boost of 55% was achieved by offering matching donations with increasing rates for allocating more to the highly effective charity. We show further that matching funds can be provided by donors focused on effectiveness through a self-sustaining process of micromatching. We applied these techniques in a new online donation platform (GivingMultiplier.org), which fundraised more than $1.5 million in its first 14 months.While prior applied research on altruism has focused on the quantity of giving, the present results demonstrate the value of focusing on the effectiveness of altruistic behavior.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Giving Multiplier Paper, published by James Montavon on January 28, 2023 on The Effective Altruism Forum.The Joshua Greene and Lucius Caviola article about their givingmultiplier.org work has just been published. Here's the abstract:The most effective charities are hundreds of times more impactful than typical charities. However, most donors favor charities with personal/emotional appeal over effectiveness. We gave donors the option to split their donations between their personal favorite charity and an expert-recommended highly effective charity. This bundling technique increased donors’ impact without undermining their altruistic motivation, boosting effective donations by 76%. An additional boost of 55% was achieved by offering matching donations with increasing rates for allocating more to the highly effective charity. We show further that matching funds can be provided by donors focused on effectiveness through a self-sustaining process of micromatching. We applied these techniques in a new online donation platform (GivingMultiplier.org), which fundraised more than $1.5 million in its first 14 months.While prior applied research on altruism has focused on the quantity of giving, the present results demonstrate the value of focusing on the effectiveness of altruistic behavior.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
James Montavon https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:29 None full 4644
xQf9HpHoyz7DemzAE_NL_EA_EA EA - OpenBook: New EA Grants Database by Rachel Weinberg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenBook: New EA Grants Database, published by Rachel Weinberg on January 28, 2023 on The Effective Altruism Forum.openbook.fyi is a new website where you can see ~4,000 EA grants from donors including Open Phil, FTX Future Fund, and EA Funds in a single place.Why?If you're a donor: OpenBook shows you how much orgs have already received, and where other donors you respect have contributed their money.If you're a grant applicant: OpenBook shows you what kinds of projects your funders have previously sponsored, and also who funds projects similar to your own.If you're neither: browse around to get a sense of how money flows in EA!FeaturesRight now, you can:search through all grants by donor, recipient, and cause areago to an organization's page and see all grants they've given and received, plus in some cases organization details (e.g. country, GiveWell review)view donation details (e.g. intended use of funds, notes)see largest donors and recipients on fileadd a donation that we're missingFeatures I'm thinking of adding soon:allow edits to existing donationsstandardize cause areas and add cause area pages for the most common ones where you can view and search all the grants associated with that cause areaadd ACX Grants to databaseallow bulk donation uploads through a CSVlet users claim a page and create an account where they can add their own donations and disable adding of othersallow for comments on organizations/grantsimprove the mobile layoutmake color theme changeable (just for fun)Please let me know which of these features you'd use, or if there are other features you'd like!CreditsHuge thanks to Vipul Naik and Issa Rice for compiling most of the data for their donations site and making it open source. This meant I mostly just had to build the UI.And to Austin Chen for the idea, some code, and the many hours spent debugging with me.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rachel Weinberg https://forum.effectivealtruism.org/posts/xQf9HpHoyz7DemzAE/openbook-new-ea-grants-database Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenBook: New EA Grants Database, published by Rachel Weinberg on January 28, 2023 on The Effective Altruism Forum.openbook.fyi is a new website where you can see ~4,000 EA grants from donors including Open Phil, FTX Future Fund, and EA Funds in a single place.Why?If you're a donor: OpenBook shows you how much orgs have already received, and where other donors you respect have contributed their money.If you're a grant applicant: OpenBook shows you what kinds of projects your funders have previously sponsored, and also who funds projects similar to your own.If you're neither: browse around to get a sense of how money flows in EA!FeaturesRight now, you can:search through all grants by donor, recipient, and cause areago to an organization's page and see all grants they've given and received, plus in some cases organization details (e.g. country, GiveWell review)view donation details (e.g. intended use of funds, notes)see largest donors and recipients on fileadd a donation that we're missingFeatures I'm thinking of adding soon:allow edits to existing donationsstandardize cause areas and add cause area pages for the most common ones where you can view and search all the grants associated with that cause areaadd ACX Grants to databaseallow bulk donation uploads through a CSVlet users claim a page and create an account where they can add their own donations and disable adding of othersallow for comments on organizations/grantsimprove the mobile layoutmake color theme changeable (just for fun)Please let me know which of these features you'd use, or if there are other features you'd like!CreditsHuge thanks to Vipul Naik and Issa Rice for compiling most of the data for their donations site and making it open source. This meant I mostly just had to build the UI.And to Austin Chen for the idea, some code, and the many hours spent debugging with me.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 28 Jan 2023 19:01:53 +0000 EA - OpenBook: New EA Grants Database by Rachel Weinberg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenBook: New EA Grants Database, published by Rachel Weinberg on January 28, 2023 on The Effective Altruism Forum.openbook.fyi is a new website where you can see ~4,000 EA grants from donors including Open Phil, FTX Future Fund, and EA Funds in a single place.Why?If you're a donor: OpenBook shows you how much orgs have already received, and where other donors you respect have contributed their money.If you're a grant applicant: OpenBook shows you what kinds of projects your funders have previously sponsored, and also who funds projects similar to your own.If you're neither: browse around to get a sense of how money flows in EA!FeaturesRight now, you can:search through all grants by donor, recipient, and cause areago to an organization's page and see all grants they've given and received, plus in some cases organization details (e.g. country, GiveWell review)view donation details (e.g. intended use of funds, notes)see largest donors and recipients on fileadd a donation that we're missingFeatures I'm thinking of adding soon:allow edits to existing donationsstandardize cause areas and add cause area pages for the most common ones where you can view and search all the grants associated with that cause areaadd ACX Grants to databaseallow bulk donation uploads through a CSVlet users claim a page and create an account where they can add their own donations and disable adding of othersallow for comments on organizations/grantsimprove the mobile layoutmake color theme changeable (just for fun)Please let me know which of these features you'd use, or if there are other features you'd like!CreditsHuge thanks to Vipul Naik and Issa Rice for compiling most of the data for their donations site and making it open source. This meant I mostly just had to build the UI.And to Austin Chen for the idea, some code, and the many hours spent debugging with me.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenBook: New EA Grants Database, published by Rachel Weinberg on January 28, 2023 on The Effective Altruism Forum.openbook.fyi is a new website where you can see ~4,000 EA grants from donors including Open Phil, FTX Future Fund, and EA Funds in a single place.Why?If you're a donor: OpenBook shows you how much orgs have already received, and where other donors you respect have contributed their money.If you're a grant applicant: OpenBook shows you what kinds of projects your funders have previously sponsored, and also who funds projects similar to your own.If you're neither: browse around to get a sense of how money flows in EA!FeaturesRight now, you can:search through all grants by donor, recipient, and cause areago to an organization's page and see all grants they've given and received, plus in some cases organization details (e.g. country, GiveWell review)view donation details (e.g. intended use of funds, notes)see largest donors and recipients on fileadd a donation that we're missingFeatures I'm thinking of adding soon:allow edits to existing donationsstandardize cause areas and add cause area pages for the most common ones where you can view and search all the grants associated with that cause areaadd ACX Grants to databaseallow bulk donation uploads through a CSVlet users claim a page and create an account where they can add their own donations and disable adding of othersallow for comments on organizations/grantsimprove the mobile layoutmake color theme changeable (just for fun)Please let me know which of these features you'd use, or if there are other features you'd like!CreditsHuge thanks to Vipul Naik and Issa Rice for compiling most of the data for their donations site and making it open source. This meant I mostly just had to build the UI.And to Austin Chen for the idea, some code, and the many hours spent debugging with me.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rachel Weinberg https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:09 None full 4634
4Ckc2zNrAKQwnAyA2_NL_EA_EA EA - Literature review of Transformative Artificial Intelligence timelines by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Literature review of Transformative Artificial Intelligence timelines, published by Jaime Sevilla on January 27, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jaime Sevilla https://forum.effectivealtruism.org/posts/4Ckc2zNrAKQwnAyA2/literature-review-of-transformative-artificial-intelligence Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Literature review of Transformative Artificial Intelligence timelines, published by Jaime Sevilla on January 27, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 28 Jan 2023 14:24:46 +0000 EA - Literature review of Transformative Artificial Intelligence timelines by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Literature review of Transformative Artificial Intelligence timelines, published by Jaime Sevilla on January 27, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Literature review of Transformative Artificial Intelligence timelines, published by Jaime Sevilla on January 27, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jaime Sevilla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:29 None full 4635
tZbMtxQk42teAFBYs_NL_EA_EA EA - Native English speaker EAs: could you please speak slower? by Luca Parodi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Native English speaker EAs: could you please speak slower?, published by Luca Parodi on January 27, 2023 on The Effective Altruism Forum.Epistemic status: personal opinion, based on anecdotical evidences and my gut feelings as an expat among native speaker EAsA large majority of EAs are native English speakers. Additionally, there is a significant portion of EAs from countries, such as Northern European countries, where English is well-taught or serves as a second language, such as India. However, there is also a small but significant minority of individuals like myself. In my country (Italy), the education system has neglected the teaching of English, particularly for those from lower-class backgrounds and who attended public schools of questionable quality. As a result, I had to teach myself English from scratch at the age of 22, and according to what I've been told, I have achieved decent results.In the past year, I spent half of my time in London, primarily interacting with other EAs. I have noticed that native English speakers often pay little attention to the varying levels of language proficiency, speaking extremely quickly about already complex topics, and frequently using metaphors, analogies, cultural references, and technical terms. This is not something that occurs when I communicate with other non-native or expat individuals. And it is frustrating.Those who know me personally are aware that in my native language, I am a highly confident and fast speaker who probably talks too much, especially considering my job involves public and social media outreach about rationality. However, when I have to interact with EAs in real life, I sometimes feel stupid and become shy (which is unusual for someone who is a 95th percentile extrovert on the Big Five scale). I often just nod as if I understand what is being said because I fear that by asking "Can you repeat that, please?" I will be perceived as stupid and slow in a community that values time, effectiveness, high-value actions, and reason. I understand that this is mainly my own issue, and I am working on improving my language skills, but I think that something might be done on the other side too.So, do we want to be more inclusive? Let's start with the little things, such as our day-to-day interactions. Here are some tips, based on my own experience, that I can give you if you are a native speaker and are interacting with a non-native speaker:Try to be mindful and slow down the pace of your speakingAvoid using too many metaphors, analogies and extremely technical words when they are not neededBe aware and try to control the voice inside of your head (which I am highly confident is there even if you don't want to admit it) that says "ugh, this person who clearly isn't understanding me seems slow and stupid. I don't want to waste my time with them"Reduce references to your country's politics, pop culture, cultural conversations and inside jokes to a bare minimum.Don't say "you're doing great! You're English is super good man" if I tell you that I am struggling with the language and if you don't mean it. Actually, don't say it at all. Even if it comes from a genuine and well-intended instinct, it might sound extremely condescending, like the teacher who says to a struggling student's parent "he's so sweet. I know he will do great things!"Instead, try to actually help me by applying these insightsThe list is obviously incomplete, so any additions or corrections would be much appreciated.Do we really need to use IT jargon every two sentences to express easy concepts?Source: more than a couple of occasions in which I was super excited by the conversation but I was struggling to understand and the other person stopped talking with me.I don't know (and, really, I don't want or need to know) about your po...]]>
Luca Parodi https://forum.effectivealtruism.org/posts/tZbMtxQk42teAFBYs/native-english-speaker-eas-could-you-please-speak-slower Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Native English speaker EAs: could you please speak slower?, published by Luca Parodi on January 27, 2023 on The Effective Altruism Forum.Epistemic status: personal opinion, based on anecdotical evidences and my gut feelings as an expat among native speaker EAsA large majority of EAs are native English speakers. Additionally, there is a significant portion of EAs from countries, such as Northern European countries, where English is well-taught or serves as a second language, such as India. However, there is also a small but significant minority of individuals like myself. In my country (Italy), the education system has neglected the teaching of English, particularly for those from lower-class backgrounds and who attended public schools of questionable quality. As a result, I had to teach myself English from scratch at the age of 22, and according to what I've been told, I have achieved decent results.In the past year, I spent half of my time in London, primarily interacting with other EAs. I have noticed that native English speakers often pay little attention to the varying levels of language proficiency, speaking extremely quickly about already complex topics, and frequently using metaphors, analogies, cultural references, and technical terms. This is not something that occurs when I communicate with other non-native or expat individuals. And it is frustrating.Those who know me personally are aware that in my native language, I am a highly confident and fast speaker who probably talks too much, especially considering my job involves public and social media outreach about rationality. However, when I have to interact with EAs in real life, I sometimes feel stupid and become shy (which is unusual for someone who is a 95th percentile extrovert on the Big Five scale). I often just nod as if I understand what is being said because I fear that by asking "Can you repeat that, please?" I will be perceived as stupid and slow in a community that values time, effectiveness, high-value actions, and reason. I understand that this is mainly my own issue, and I am working on improving my language skills, but I think that something might be done on the other side too.So, do we want to be more inclusive? Let's start with the little things, such as our day-to-day interactions. Here are some tips, based on my own experience, that I can give you if you are a native speaker and are interacting with a non-native speaker:Try to be mindful and slow down the pace of your speakingAvoid using too many metaphors, analogies and extremely technical words when they are not neededBe aware and try to control the voice inside of your head (which I am highly confident is there even if you don't want to admit it) that says "ugh, this person who clearly isn't understanding me seems slow and stupid. I don't want to waste my time with them"Reduce references to your country's politics, pop culture, cultural conversations and inside jokes to a bare minimum.Don't say "you're doing great! You're English is super good man" if I tell you that I am struggling with the language and if you don't mean it. Actually, don't say it at all. Even if it comes from a genuine and well-intended instinct, it might sound extremely condescending, like the teacher who says to a struggling student's parent "he's so sweet. I know he will do great things!"Instead, try to actually help me by applying these insightsThe list is obviously incomplete, so any additions or corrections would be much appreciated.Do we really need to use IT jargon every two sentences to express easy concepts?Source: more than a couple of occasions in which I was super excited by the conversation but I was struggling to understand and the other person stopped talking with me.I don't know (and, really, I don't want or need to know) about your po...]]>
Fri, 27 Jan 2023 23:13:35 +0000 EA - Native English speaker EAs: could you please speak slower? by Luca Parodi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Native English speaker EAs: could you please speak slower?, published by Luca Parodi on January 27, 2023 on The Effective Altruism Forum.Epistemic status: personal opinion, based on anecdotical evidences and my gut feelings as an expat among native speaker EAsA large majority of EAs are native English speakers. Additionally, there is a significant portion of EAs from countries, such as Northern European countries, where English is well-taught or serves as a second language, such as India. However, there is also a small but significant minority of individuals like myself. In my country (Italy), the education system has neglected the teaching of English, particularly for those from lower-class backgrounds and who attended public schools of questionable quality. As a result, I had to teach myself English from scratch at the age of 22, and according to what I've been told, I have achieved decent results.In the past year, I spent half of my time in London, primarily interacting with other EAs. I have noticed that native English speakers often pay little attention to the varying levels of language proficiency, speaking extremely quickly about already complex topics, and frequently using metaphors, analogies, cultural references, and technical terms. This is not something that occurs when I communicate with other non-native or expat individuals. And it is frustrating.Those who know me personally are aware that in my native language, I am a highly confident and fast speaker who probably talks too much, especially considering my job involves public and social media outreach about rationality. However, when I have to interact with EAs in real life, I sometimes feel stupid and become shy (which is unusual for someone who is a 95th percentile extrovert on the Big Five scale). I often just nod as if I understand what is being said because I fear that by asking "Can you repeat that, please?" I will be perceived as stupid and slow in a community that values time, effectiveness, high-value actions, and reason. I understand that this is mainly my own issue, and I am working on improving my language skills, but I think that something might be done on the other side too.So, do we want to be more inclusive? Let's start with the little things, such as our day-to-day interactions. Here are some tips, based on my own experience, that I can give you if you are a native speaker and are interacting with a non-native speaker:Try to be mindful and slow down the pace of your speakingAvoid using too many metaphors, analogies and extremely technical words when they are not neededBe aware and try to control the voice inside of your head (which I am highly confident is there even if you don't want to admit it) that says "ugh, this person who clearly isn't understanding me seems slow and stupid. I don't want to waste my time with them"Reduce references to your country's politics, pop culture, cultural conversations and inside jokes to a bare minimum.Don't say "you're doing great! You're English is super good man" if I tell you that I am struggling with the language and if you don't mean it. Actually, don't say it at all. Even if it comes from a genuine and well-intended instinct, it might sound extremely condescending, like the teacher who says to a struggling student's parent "he's so sweet. I know he will do great things!"Instead, try to actually help me by applying these insightsThe list is obviously incomplete, so any additions or corrections would be much appreciated.Do we really need to use IT jargon every two sentences to express easy concepts?Source: more than a couple of occasions in which I was super excited by the conversation but I was struggling to understand and the other person stopped talking with me.I don't know (and, really, I don't want or need to know) about your po...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Native English speaker EAs: could you please speak slower?, published by Luca Parodi on January 27, 2023 on The Effective Altruism Forum.Epistemic status: personal opinion, based on anecdotical evidences and my gut feelings as an expat among native speaker EAsA large majority of EAs are native English speakers. Additionally, there is a significant portion of EAs from countries, such as Northern European countries, where English is well-taught or serves as a second language, such as India. However, there is also a small but significant minority of individuals like myself. In my country (Italy), the education system has neglected the teaching of English, particularly for those from lower-class backgrounds and who attended public schools of questionable quality. As a result, I had to teach myself English from scratch at the age of 22, and according to what I've been told, I have achieved decent results.In the past year, I spent half of my time in London, primarily interacting with other EAs. I have noticed that native English speakers often pay little attention to the varying levels of language proficiency, speaking extremely quickly about already complex topics, and frequently using metaphors, analogies, cultural references, and technical terms. This is not something that occurs when I communicate with other non-native or expat individuals. And it is frustrating.Those who know me personally are aware that in my native language, I am a highly confident and fast speaker who probably talks too much, especially considering my job involves public and social media outreach about rationality. However, when I have to interact with EAs in real life, I sometimes feel stupid and become shy (which is unusual for someone who is a 95th percentile extrovert on the Big Five scale). I often just nod as if I understand what is being said because I fear that by asking "Can you repeat that, please?" I will be perceived as stupid and slow in a community that values time, effectiveness, high-value actions, and reason. I understand that this is mainly my own issue, and I am working on improving my language skills, but I think that something might be done on the other side too.So, do we want to be more inclusive? Let's start with the little things, such as our day-to-day interactions. Here are some tips, based on my own experience, that I can give you if you are a native speaker and are interacting with a non-native speaker:Try to be mindful and slow down the pace of your speakingAvoid using too many metaphors, analogies and extremely technical words when they are not neededBe aware and try to control the voice inside of your head (which I am highly confident is there even if you don't want to admit it) that says "ugh, this person who clearly isn't understanding me seems slow and stupid. I don't want to waste my time with them"Reduce references to your country's politics, pop culture, cultural conversations and inside jokes to a bare minimum.Don't say "you're doing great! You're English is super good man" if I tell you that I am struggling with the language and if you don't mean it. Actually, don't say it at all. Even if it comes from a genuine and well-intended instinct, it might sound extremely condescending, like the teacher who says to a struggling student's parent "he's so sweet. I know he will do great things!"Instead, try to actually help me by applying these insightsThe list is obviously incomplete, so any additions or corrections would be much appreciated.Do we really need to use IT jargon every two sentences to express easy concepts?Source: more than a couple of occasions in which I was super excited by the conversation but I was struggling to understand and the other person stopped talking with me.I don't know (and, really, I don't want or need to know) about your po...]]>
Luca Parodi https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:49 None full 4631
JAKXPxQBzHnjHbrJi_NL_EA_EA EA - Pineapple now lists marketing, comms and fundraising talent, fiscal sponsorship recs, private database (Jan '23 Update) by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pineapple now lists marketing, comms & fundraising talent, fiscal sponsorship recs, private database (Jan '23 Update), published by Vaidehi Agarwalla on January 27, 2023 on The Effective Altruism Forum.Marketing, PR & comms skills lists now live (in addition to Ops, PA & ExA work)Candidates can now explicitly list themselves as having marketing, comms & fundraising roles. As we previously discussed, these kinds of skills are often swept under the operations umbrella at many EA-aligned orgs, especially smaller ones.We soft-launched the option in our sign-up form during Dec '22, and have already had ~12 people be open to these roles on their profile.You can view candidates by role on our roles page here. If you're an existing candidate who'd like to add those skills to your profile, email us at info@pineappleoperations.org.We now list fiscal sponsorship resourcesFiscal sponsorship and operations support are a common pain points of early stage projects, so we've shared some recommendations from operations staff at other EA organizations. The Pineapple Ops team has not personally vetted or thoroughly reviewed this list. Please do research any fiscal sponsor you wish to work with.Why are we sharing these resources? Knowledge sharing of operations in EA has a lot of room for improvement. Since our team doesn't have the capacity to do resource compilations ourselves with a concerted effort, we wanted to try collaborating with other folks to create lists.Our progress to date (what we think we've done well on)Progress on placements (our main metric for impact)We've been up for 3 months, and in that time we've placed at least 9 people in full- or part-time roles, with at least 4 more getting strong leads and/or making it to late stage of the applications process that we know of. We currently list over 180 people candidates.Rough cost per placement: ~$420 USD (We expect cost per placement to drop as this estimate includes project set-up time)We've received a lot of positive feedback from employers and candidates alike - even those who haven't yet been placed directly from the board.Moving fast & experimentingI am happy with the pace at which we've moved as a part-time project. I think we've found new areas to explore which aren't too ambitiousWe are currently experimenting with another, smaller project that we hope will result in more candidate placements, and currently troubleshooting demand-supply issues. (We plan to discuss this project in more depth in future updates).Areas of improvementDuring our informal internal review, I (Vaidehi) felt the main area for improvement was to spend a little bit more time advertising the database to candidates and employers more. We did an initial sprint during November but this died down over Dec & Jan. We want to make sure that folks know about Pineapple Operations and its offered resources. We are currently evaluating capacity to do more advertising, but tentatively want to hit a target of putting out reminders, content or resources every 3-6 weeks. So far, we don't think we've made any major mistakes in the course of running this project.Additionally, although initial budgeting for the project was correct, we've identified more worthwhile opportunities than we expected to spend our initial funding on and will likely need to reduce time spent on the project or fundraise.As an unpaid volunteer, I (Vaidehi) have not been as strict on time tracking my hours as I would like to be, so I expect we may be underestimated my time input (and possible counterfactual of where else I could spend my time).Get Involved & Support usSign up to be listed on the databaseLet us know if you've been hired or made it to later stages of a job application (or found a good candidate!) because of our job boardWe are seeking funding (~$10K) t...]]>
Vaidehi Agarwalla https://forum.effectivealtruism.org/posts/JAKXPxQBzHnjHbrJi/pineapple-now-lists-marketing-comms-and-fundraising-talent Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pineapple now lists marketing, comms & fundraising talent, fiscal sponsorship recs, private database (Jan '23 Update), published by Vaidehi Agarwalla on January 27, 2023 on The Effective Altruism Forum.Marketing, PR & comms skills lists now live (in addition to Ops, PA & ExA work)Candidates can now explicitly list themselves as having marketing, comms & fundraising roles. As we previously discussed, these kinds of skills are often swept under the operations umbrella at many EA-aligned orgs, especially smaller ones.We soft-launched the option in our sign-up form during Dec '22, and have already had ~12 people be open to these roles on their profile.You can view candidates by role on our roles page here. If you're an existing candidate who'd like to add those skills to your profile, email us at info@pineappleoperations.org.We now list fiscal sponsorship resourcesFiscal sponsorship and operations support are a common pain points of early stage projects, so we've shared some recommendations from operations staff at other EA organizations. The Pineapple Ops team has not personally vetted or thoroughly reviewed this list. Please do research any fiscal sponsor you wish to work with.Why are we sharing these resources? Knowledge sharing of operations in EA has a lot of room for improvement. Since our team doesn't have the capacity to do resource compilations ourselves with a concerted effort, we wanted to try collaborating with other folks to create lists.Our progress to date (what we think we've done well on)Progress on placements (our main metric for impact)We've been up for 3 months, and in that time we've placed at least 9 people in full- or part-time roles, with at least 4 more getting strong leads and/or making it to late stage of the applications process that we know of. We currently list over 180 people candidates.Rough cost per placement: ~$420 USD (We expect cost per placement to drop as this estimate includes project set-up time)We've received a lot of positive feedback from employers and candidates alike - even those who haven't yet been placed directly from the board.Moving fast & experimentingI am happy with the pace at which we've moved as a part-time project. I think we've found new areas to explore which aren't too ambitiousWe are currently experimenting with another, smaller project that we hope will result in more candidate placements, and currently troubleshooting demand-supply issues. (We plan to discuss this project in more depth in future updates).Areas of improvementDuring our informal internal review, I (Vaidehi) felt the main area for improvement was to spend a little bit more time advertising the database to candidates and employers more. We did an initial sprint during November but this died down over Dec & Jan. We want to make sure that folks know about Pineapple Operations and its offered resources. We are currently evaluating capacity to do more advertising, but tentatively want to hit a target of putting out reminders, content or resources every 3-6 weeks. So far, we don't think we've made any major mistakes in the course of running this project.Additionally, although initial budgeting for the project was correct, we've identified more worthwhile opportunities than we expected to spend our initial funding on and will likely need to reduce time spent on the project or fundraise.As an unpaid volunteer, I (Vaidehi) have not been as strict on time tracking my hours as I would like to be, so I expect we may be underestimated my time input (and possible counterfactual of where else I could spend my time).Get Involved & Support usSign up to be listed on the databaseLet us know if you've been hired or made it to later stages of a job application (or found a good candidate!) because of our job boardWe are seeking funding (~$10K) t...]]>
Fri, 27 Jan 2023 22:32:15 +0000 EA - Pineapple now lists marketing, comms and fundraising talent, fiscal sponsorship recs, private database (Jan '23 Update) by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pineapple now lists marketing, comms & fundraising talent, fiscal sponsorship recs, private database (Jan '23 Update), published by Vaidehi Agarwalla on January 27, 2023 on The Effective Altruism Forum.Marketing, PR & comms skills lists now live (in addition to Ops, PA & ExA work)Candidates can now explicitly list themselves as having marketing, comms & fundraising roles. As we previously discussed, these kinds of skills are often swept under the operations umbrella at many EA-aligned orgs, especially smaller ones.We soft-launched the option in our sign-up form during Dec '22, and have already had ~12 people be open to these roles on their profile.You can view candidates by role on our roles page here. If you're an existing candidate who'd like to add those skills to your profile, email us at info@pineappleoperations.org.We now list fiscal sponsorship resourcesFiscal sponsorship and operations support are a common pain points of early stage projects, so we've shared some recommendations from operations staff at other EA organizations. The Pineapple Ops team has not personally vetted or thoroughly reviewed this list. Please do research any fiscal sponsor you wish to work with.Why are we sharing these resources? Knowledge sharing of operations in EA has a lot of room for improvement. Since our team doesn't have the capacity to do resource compilations ourselves with a concerted effort, we wanted to try collaborating with other folks to create lists.Our progress to date (what we think we've done well on)Progress on placements (our main metric for impact)We've been up for 3 months, and in that time we've placed at least 9 people in full- or part-time roles, with at least 4 more getting strong leads and/or making it to late stage of the applications process that we know of. We currently list over 180 people candidates.Rough cost per placement: ~$420 USD (We expect cost per placement to drop as this estimate includes project set-up time)We've received a lot of positive feedback from employers and candidates alike - even those who haven't yet been placed directly from the board.Moving fast & experimentingI am happy with the pace at which we've moved as a part-time project. I think we've found new areas to explore which aren't too ambitiousWe are currently experimenting with another, smaller project that we hope will result in more candidate placements, and currently troubleshooting demand-supply issues. (We plan to discuss this project in more depth in future updates).Areas of improvementDuring our informal internal review, I (Vaidehi) felt the main area for improvement was to spend a little bit more time advertising the database to candidates and employers more. We did an initial sprint during November but this died down over Dec & Jan. We want to make sure that folks know about Pineapple Operations and its offered resources. We are currently evaluating capacity to do more advertising, but tentatively want to hit a target of putting out reminders, content or resources every 3-6 weeks. So far, we don't think we've made any major mistakes in the course of running this project.Additionally, although initial budgeting for the project was correct, we've identified more worthwhile opportunities than we expected to spend our initial funding on and will likely need to reduce time spent on the project or fundraise.As an unpaid volunteer, I (Vaidehi) have not been as strict on time tracking my hours as I would like to be, so I expect we may be underestimated my time input (and possible counterfactual of where else I could spend my time).Get Involved & Support usSign up to be listed on the databaseLet us know if you've been hired or made it to later stages of a job application (or found a good candidate!) because of our job boardWe are seeking funding (~$10K) t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pineapple now lists marketing, comms & fundraising talent, fiscal sponsorship recs, private database (Jan '23 Update), published by Vaidehi Agarwalla on January 27, 2023 on The Effective Altruism Forum.Marketing, PR & comms skills lists now live (in addition to Ops, PA & ExA work)Candidates can now explicitly list themselves as having marketing, comms & fundraising roles. As we previously discussed, these kinds of skills are often swept under the operations umbrella at many EA-aligned orgs, especially smaller ones.We soft-launched the option in our sign-up form during Dec '22, and have already had ~12 people be open to these roles on their profile.You can view candidates by role on our roles page here. If you're an existing candidate who'd like to add those skills to your profile, email us at info@pineappleoperations.org.We now list fiscal sponsorship resourcesFiscal sponsorship and operations support are a common pain points of early stage projects, so we've shared some recommendations from operations staff at other EA organizations. The Pineapple Ops team has not personally vetted or thoroughly reviewed this list. Please do research any fiscal sponsor you wish to work with.Why are we sharing these resources? Knowledge sharing of operations in EA has a lot of room for improvement. Since our team doesn't have the capacity to do resource compilations ourselves with a concerted effort, we wanted to try collaborating with other folks to create lists.Our progress to date (what we think we've done well on)Progress on placements (our main metric for impact)We've been up for 3 months, and in that time we've placed at least 9 people in full- or part-time roles, with at least 4 more getting strong leads and/or making it to late stage of the applications process that we know of. We currently list over 180 people candidates.Rough cost per placement: ~$420 USD (We expect cost per placement to drop as this estimate includes project set-up time)We've received a lot of positive feedback from employers and candidates alike - even those who haven't yet been placed directly from the board.Moving fast & experimentingI am happy with the pace at which we've moved as a part-time project. I think we've found new areas to explore which aren't too ambitiousWe are currently experimenting with another, smaller project that we hope will result in more candidate placements, and currently troubleshooting demand-supply issues. (We plan to discuss this project in more depth in future updates).Areas of improvementDuring our informal internal review, I (Vaidehi) felt the main area for improvement was to spend a little bit more time advertising the database to candidates and employers more. We did an initial sprint during November but this died down over Dec & Jan. We want to make sure that folks know about Pineapple Operations and its offered resources. We are currently evaluating capacity to do more advertising, but tentatively want to hit a target of putting out reminders, content or resources every 3-6 weeks. So far, we don't think we've made any major mistakes in the course of running this project.Additionally, although initial budgeting for the project was correct, we've identified more worthwhile opportunities than we expected to spend our initial funding on and will likely need to reduce time spent on the project or fundraise.As an unpaid volunteer, I (Vaidehi) have not been as strict on time tracking my hours as I would like to be, so I expect we may be underestimated my time input (and possible counterfactual of where else I could spend my time).Get Involved & Support usSign up to be listed on the databaseLet us know if you've been hired or made it to later stages of a job application (or found a good candidate!) because of our job boardWe are seeking funding (~$10K) t...]]>
Vaidehi Agarwalla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:31 None full 4629
WLZabqQGCd2joZpxR_NL_EA_EA EA - Summit on Existential Security 2023 by Amy Labenz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summit on Existential Security 2023, published by Amy Labenz on January 27, 2023 on The Effective Altruism Forum.The CEA events team wants to be more transparent about what we’re doing, to allow feedback and visibility. We’re not committing to publishing detailed information about every event, but in that spirit, we wanted to share something about an event we’re running soon — the Summit on Existential Security — even though it’s an invite-only event and we don’t expect the information in this post to be directly useful to many people.The Summit on Existential Security will take place the week following EA Global: Bay Area 2023.The summit is aimed at professionals working directly or indirectly (for example, through grantmaking) towards existential security (i.e. addressing existential risks). The event aims to help participants orient themselves to the complex and developing existential risk situation so that we have the best shot at identifying and pursuing important ways to reduce the risk of existential catastrophe.In particular, the summit aims to facilitate conversations that make progress on crucial topics. We’re especially interested in topics that are important but aren’t fully within the remit of any research group (perhaps because they’re big-picture, cross-cutting, new, or not purely academic). We hope these might lead people to better understand something important, change their minds, make new plans, or improve their strategies. Near the end of the summit, there will be sessions designed to encourage attendees to get into the details of making specific plans. But we don’t need every conversation to have immediate decision relevance.A secondary goal of the summit is to build the professional community of people working on existential security. We expect the event to be especially valuable for people who are less plugged in to existing networks to be able to make useful connections. Past impact evaluations for the Coordination Forum and similar events indicate that often the people who got the most out of these events are those who were relatively less plugged in to the community core prior to the event.The summit will have approximately 160 attendees. The event size was chosen because we wanted to experiment with whether we could get the benefits attendees report from the Coordination Forum and similar retreat-style events at a larger scale (the summit will be about 3 times larger than the Coordination Forum was last year). Compared to EA Global, the summit will be much smaller (10 times smaller) and is residential. This means the summit can be focused on collaborative group sessions which try to make progress on important topics related to reducing existential risk and on facilitating and strengthening connections between people working on existential security.The summit is an invite-only event. Invitations to the event were assembled via two mechanisms:A convening committee of eight community advisors selected some people to invite based on their work and how helpful we thought they could be to other attendees.We reached out to about 25 people to ask for recommendations for people in their networks who might be less well-known to other parts of the community.If the summit proves to be successful, we may consider expanding this method in the future.It’s important to note that this event is not an attempt to collect individuals who are doing the highest-value work on existential security, but rather a mix of individuals working on object-level and meta-level issues, in order to facilitate important connections between these groups.The summit is being organized by the Partner Events team, a new program within the CEA Events team that supports subject matter experts to run high-impact events. The Partner Events team is working on a variety of e...]]>
Amy Labenz https://forum.effectivealtruism.org/posts/WLZabqQGCd2joZpxR/summit-on-existential-security-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summit on Existential Security 2023, published by Amy Labenz on January 27, 2023 on The Effective Altruism Forum.The CEA events team wants to be more transparent about what we’re doing, to allow feedback and visibility. We’re not committing to publishing detailed information about every event, but in that spirit, we wanted to share something about an event we’re running soon — the Summit on Existential Security — even though it’s an invite-only event and we don’t expect the information in this post to be directly useful to many people.The Summit on Existential Security will take place the week following EA Global: Bay Area 2023.The summit is aimed at professionals working directly or indirectly (for example, through grantmaking) towards existential security (i.e. addressing existential risks). The event aims to help participants orient themselves to the complex and developing existential risk situation so that we have the best shot at identifying and pursuing important ways to reduce the risk of existential catastrophe.In particular, the summit aims to facilitate conversations that make progress on crucial topics. We’re especially interested in topics that are important but aren’t fully within the remit of any research group (perhaps because they’re big-picture, cross-cutting, new, or not purely academic). We hope these might lead people to better understand something important, change their minds, make new plans, or improve their strategies. Near the end of the summit, there will be sessions designed to encourage attendees to get into the details of making specific plans. But we don’t need every conversation to have immediate decision relevance.A secondary goal of the summit is to build the professional community of people working on existential security. We expect the event to be especially valuable for people who are less plugged in to existing networks to be able to make useful connections. Past impact evaluations for the Coordination Forum and similar events indicate that often the people who got the most out of these events are those who were relatively less plugged in to the community core prior to the event.The summit will have approximately 160 attendees. The event size was chosen because we wanted to experiment with whether we could get the benefits attendees report from the Coordination Forum and similar retreat-style events at a larger scale (the summit will be about 3 times larger than the Coordination Forum was last year). Compared to EA Global, the summit will be much smaller (10 times smaller) and is residential. This means the summit can be focused on collaborative group sessions which try to make progress on important topics related to reducing existential risk and on facilitating and strengthening connections between people working on existential security.The summit is an invite-only event. Invitations to the event were assembled via two mechanisms:A convening committee of eight community advisors selected some people to invite based on their work and how helpful we thought they could be to other attendees.We reached out to about 25 people to ask for recommendations for people in their networks who might be less well-known to other parts of the community.If the summit proves to be successful, we may consider expanding this method in the future.It’s important to note that this event is not an attempt to collect individuals who are doing the highest-value work on existential security, but rather a mix of individuals working on object-level and meta-level issues, in order to facilitate important connections between these groups.The summit is being organized by the Partner Events team, a new program within the CEA Events team that supports subject matter experts to run high-impact events. The Partner Events team is working on a variety of e...]]>
Fri, 27 Jan 2023 19:57:57 +0000 EA - Summit on Existential Security 2023 by Amy Labenz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summit on Existential Security 2023, published by Amy Labenz on January 27, 2023 on The Effective Altruism Forum.The CEA events team wants to be more transparent about what we’re doing, to allow feedback and visibility. We’re not committing to publishing detailed information about every event, but in that spirit, we wanted to share something about an event we’re running soon — the Summit on Existential Security — even though it’s an invite-only event and we don’t expect the information in this post to be directly useful to many people.The Summit on Existential Security will take place the week following EA Global: Bay Area 2023.The summit is aimed at professionals working directly or indirectly (for example, through grantmaking) towards existential security (i.e. addressing existential risks). The event aims to help participants orient themselves to the complex and developing existential risk situation so that we have the best shot at identifying and pursuing important ways to reduce the risk of existential catastrophe.In particular, the summit aims to facilitate conversations that make progress on crucial topics. We’re especially interested in topics that are important but aren’t fully within the remit of any research group (perhaps because they’re big-picture, cross-cutting, new, or not purely academic). We hope these might lead people to better understand something important, change their minds, make new plans, or improve their strategies. Near the end of the summit, there will be sessions designed to encourage attendees to get into the details of making specific plans. But we don’t need every conversation to have immediate decision relevance.A secondary goal of the summit is to build the professional community of people working on existential security. We expect the event to be especially valuable for people who are less plugged in to existing networks to be able to make useful connections. Past impact evaluations for the Coordination Forum and similar events indicate that often the people who got the most out of these events are those who were relatively less plugged in to the community core prior to the event.The summit will have approximately 160 attendees. The event size was chosen because we wanted to experiment with whether we could get the benefits attendees report from the Coordination Forum and similar retreat-style events at a larger scale (the summit will be about 3 times larger than the Coordination Forum was last year). Compared to EA Global, the summit will be much smaller (10 times smaller) and is residential. This means the summit can be focused on collaborative group sessions which try to make progress on important topics related to reducing existential risk and on facilitating and strengthening connections between people working on existential security.The summit is an invite-only event. Invitations to the event were assembled via two mechanisms:A convening committee of eight community advisors selected some people to invite based on their work and how helpful we thought they could be to other attendees.We reached out to about 25 people to ask for recommendations for people in their networks who might be less well-known to other parts of the community.If the summit proves to be successful, we may consider expanding this method in the future.It’s important to note that this event is not an attempt to collect individuals who are doing the highest-value work on existential security, but rather a mix of individuals working on object-level and meta-level issues, in order to facilitate important connections between these groups.The summit is being organized by the Partner Events team, a new program within the CEA Events team that supports subject matter experts to run high-impact events. The Partner Events team is working on a variety of e...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summit on Existential Security 2023, published by Amy Labenz on January 27, 2023 on The Effective Altruism Forum.The CEA events team wants to be more transparent about what we’re doing, to allow feedback and visibility. We’re not committing to publishing detailed information about every event, but in that spirit, we wanted to share something about an event we’re running soon — the Summit on Existential Security — even though it’s an invite-only event and we don’t expect the information in this post to be directly useful to many people.The Summit on Existential Security will take place the week following EA Global: Bay Area 2023.The summit is aimed at professionals working directly or indirectly (for example, through grantmaking) towards existential security (i.e. addressing existential risks). The event aims to help participants orient themselves to the complex and developing existential risk situation so that we have the best shot at identifying and pursuing important ways to reduce the risk of existential catastrophe.In particular, the summit aims to facilitate conversations that make progress on crucial topics. We’re especially interested in topics that are important but aren’t fully within the remit of any research group (perhaps because they’re big-picture, cross-cutting, new, or not purely academic). We hope these might lead people to better understand something important, change their minds, make new plans, or improve their strategies. Near the end of the summit, there will be sessions designed to encourage attendees to get into the details of making specific plans. But we don’t need every conversation to have immediate decision relevance.A secondary goal of the summit is to build the professional community of people working on existential security. We expect the event to be especially valuable for people who are less plugged in to existing networks to be able to make useful connections. Past impact evaluations for the Coordination Forum and similar events indicate that often the people who got the most out of these events are those who were relatively less plugged in to the community core prior to the event.The summit will have approximately 160 attendees. The event size was chosen because we wanted to experiment with whether we could get the benefits attendees report from the Coordination Forum and similar retreat-style events at a larger scale (the summit will be about 3 times larger than the Coordination Forum was last year). Compared to EA Global, the summit will be much smaller (10 times smaller) and is residential. This means the summit can be focused on collaborative group sessions which try to make progress on important topics related to reducing existential risk and on facilitating and strengthening connections between people working on existential security.The summit is an invite-only event. Invitations to the event were assembled via two mechanisms:A convening committee of eight community advisors selected some people to invite based on their work and how helpful we thought they could be to other attendees.We reached out to about 25 people to ask for recommendations for people in their networks who might be less well-known to other parts of the community.If the summit proves to be successful, we may consider expanding this method in the future.It’s important to note that this event is not an attempt to collect individuals who are doing the highest-value work on existential security, but rather a mix of individuals working on object-level and meta-level issues, in order to facilitate important connections between these groups.The summit is being organized by the Partner Events team, a new program within the CEA Events team that supports subject matter experts to run high-impact events. The Partner Events team is working on a variety of e...]]>
Amy Labenz https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:41 None full 4627
fQ8hTwr6L9feboDD9_NL_EA_EA EA - ¿Por qué escribir en español? (Why should we write in Spanish: a follow-up of EAGx Latam) by Simon Ruiz-Martinez Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ¿Por qué escribir en español? (Why should we write in Spanish: a follow-up of EAGx Latam), published by Simon Ruiz-Martinez on January 26, 2023 on The Effective Altruism Forum.La realización del EAGx Latam en el pasado mes de enero es el resultado de una comunidad que ha ido en paulatino aumento y que ha alcanzado la madurez suficiente para hacerse cargo de una discusión comunitaria de alto nivel, donde el desarrollo y discusión del Altruismo Eficaz permite avanzar en la co-creación de maneras de ponderar lo mejor que podemos hacer.El interés que logré evidenciar entre los asistentes sobre el contexto específico de América Latina –mayoritariamente en encuentros informales cabe aclarar– muestra las potencialidades de una comunidad comprometida, y cuya visibilidad se hace cada vez más importante en los diferentes espacios que ofrece la comunidad global.La identidad del diverso conglomerado de profesionales, estudiantes y académicos que compone la comunidad hispanohablante está atravesada –aunque no limitada por– el español. De ahí la pregunta planteada en el título de esta publicación: ¿Por qué escribir en español? A ella se asocia otra pregunta: ¿por qué hay menos de cinco publicaciones en español en el foro? Invito a quien lee esta publicación a que responda esta segunda pregunta. Intentaré, en lo que sigue, concentrarme en la primera.Pero antes de proseguir con ello, una salvedad.¿A quién va dirigido este post?Escribir en una lengua diferente a la institucionalizada por una comunidad puede parecer un acto de exclusión. Espero que los angloparlantes y demás usuarios de las diferentes lenguas que componen el diverso conglomerado de EA en la actualidad disculpen esa falta de deferencia de mi parte. No se trata de excluir a quienes hablan otros idiomas. Este post tiene como objetivo fundamental eliminar las barreras lingüísticas para la difusión científica; específicamente, al animar a la difusión de ideas en otros idiomas; en especial, para una lengua que cuenta con alrededor de 460 millones de personas.Es importante desestimar la presión por el nivel de muchos de nuestros colegas que han publicado. No creo que sea muy controvertido afirmar que no solo es importante discutir sobre temáticas de alto impacto, sino también en cuestiones de construcción de comunidad que pueden tener como auditorio un público que viene iniciando y se beneficiaría mucho de las recomendaciones de una experiencia semejante.Así, cuestiones como creación de grupos universitarios, grupos de lectura, redes de profesionales, etc., todas son actividades que pueden tener un lector interesado que acabe de ingresar a la comunidad y que busque una publicación sobre experiencias sistematizadas que le prevengan de cometer posibles reprocesos en el camino.¿Cuál es realmente la invitación?Si bien es una tarea fundamental la traducción de conceptos e ideas (frecuentemente me pasa que termino usando términos en inglés al hablar con colegas hispanohablantes), la invitación va, concretamente, a difundir la perspectiva específica de nuestras latitudes: fomentar la producción de contenido original. El apoyo comunitario realizado a partir de la publicación de experiencias significativas puede perfectamente complementarse con publicaciones donde se indaguen las particularidades que han implicado las ideas de EA en el contexto latinoamericano.Difundir los intereses profesionales y académicos de quienes trabajamos en estas áreas permite, además, crear un acervo del cuál se puedan derivar formas distintas de priorización (especialmente para LMICs), estudios comparados (e.g. en la efectividad de políticas y transferibilidad del conocimiento), y en colaboración científica y esfuerzos mancomunados por temas y opciones de carrera.Este propósito específico complementa también una comunidad que ha crecido en otr...]]>
Simon Ruiz-Martinez https://forum.effectivealtruism.org/posts/fQ8hTwr6L9feboDD9/por-que-escribir-en-espanol-why-should-we-write-in-spanish-a Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ¿Por qué escribir en español? (Why should we write in Spanish: a follow-up of EAGx Latam), published by Simon Ruiz-Martinez on January 26, 2023 on The Effective Altruism Forum.La realización del EAGx Latam en el pasado mes de enero es el resultado de una comunidad que ha ido en paulatino aumento y que ha alcanzado la madurez suficiente para hacerse cargo de una discusión comunitaria de alto nivel, donde el desarrollo y discusión del Altruismo Eficaz permite avanzar en la co-creación de maneras de ponderar lo mejor que podemos hacer.El interés que logré evidenciar entre los asistentes sobre el contexto específico de América Latina –mayoritariamente en encuentros informales cabe aclarar– muestra las potencialidades de una comunidad comprometida, y cuya visibilidad se hace cada vez más importante en los diferentes espacios que ofrece la comunidad global.La identidad del diverso conglomerado de profesionales, estudiantes y académicos que compone la comunidad hispanohablante está atravesada –aunque no limitada por– el español. De ahí la pregunta planteada en el título de esta publicación: ¿Por qué escribir en español? A ella se asocia otra pregunta: ¿por qué hay menos de cinco publicaciones en español en el foro? Invito a quien lee esta publicación a que responda esta segunda pregunta. Intentaré, en lo que sigue, concentrarme en la primera.Pero antes de proseguir con ello, una salvedad.¿A quién va dirigido este post?Escribir en una lengua diferente a la institucionalizada por una comunidad puede parecer un acto de exclusión. Espero que los angloparlantes y demás usuarios de las diferentes lenguas que componen el diverso conglomerado de EA en la actualidad disculpen esa falta de deferencia de mi parte. No se trata de excluir a quienes hablan otros idiomas. Este post tiene como objetivo fundamental eliminar las barreras lingüísticas para la difusión científica; específicamente, al animar a la difusión de ideas en otros idiomas; en especial, para una lengua que cuenta con alrededor de 460 millones de personas.Es importante desestimar la presión por el nivel de muchos de nuestros colegas que han publicado. No creo que sea muy controvertido afirmar que no solo es importante discutir sobre temáticas de alto impacto, sino también en cuestiones de construcción de comunidad que pueden tener como auditorio un público que viene iniciando y se beneficiaría mucho de las recomendaciones de una experiencia semejante.Así, cuestiones como creación de grupos universitarios, grupos de lectura, redes de profesionales, etc., todas son actividades que pueden tener un lector interesado que acabe de ingresar a la comunidad y que busque una publicación sobre experiencias sistematizadas que le prevengan de cometer posibles reprocesos en el camino.¿Cuál es realmente la invitación?Si bien es una tarea fundamental la traducción de conceptos e ideas (frecuentemente me pasa que termino usando términos en inglés al hablar con colegas hispanohablantes), la invitación va, concretamente, a difundir la perspectiva específica de nuestras latitudes: fomentar la producción de contenido original. El apoyo comunitario realizado a partir de la publicación de experiencias significativas puede perfectamente complementarse con publicaciones donde se indaguen las particularidades que han implicado las ideas de EA en el contexto latinoamericano.Difundir los intereses profesionales y académicos de quienes trabajamos en estas áreas permite, además, crear un acervo del cuál se puedan derivar formas distintas de priorización (especialmente para LMICs), estudios comparados (e.g. en la efectividad de políticas y transferibilidad del conocimiento), y en colaboración científica y esfuerzos mancomunados por temas y opciones de carrera.Este propósito específico complementa también una comunidad que ha crecido en otr...]]>
Fri, 27 Jan 2023 17:34:17 +0000 EA - ¿Por qué escribir en español? (Why should we write in Spanish: a follow-up of EAGx Latam) by Simon Ruiz-Martinez Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ¿Por qué escribir en español? (Why should we write in Spanish: a follow-up of EAGx Latam), published by Simon Ruiz-Martinez on January 26, 2023 on The Effective Altruism Forum.La realización del EAGx Latam en el pasado mes de enero es el resultado de una comunidad que ha ido en paulatino aumento y que ha alcanzado la madurez suficiente para hacerse cargo de una discusión comunitaria de alto nivel, donde el desarrollo y discusión del Altruismo Eficaz permite avanzar en la co-creación de maneras de ponderar lo mejor que podemos hacer.El interés que logré evidenciar entre los asistentes sobre el contexto específico de América Latina –mayoritariamente en encuentros informales cabe aclarar– muestra las potencialidades de una comunidad comprometida, y cuya visibilidad se hace cada vez más importante en los diferentes espacios que ofrece la comunidad global.La identidad del diverso conglomerado de profesionales, estudiantes y académicos que compone la comunidad hispanohablante está atravesada –aunque no limitada por– el español. De ahí la pregunta planteada en el título de esta publicación: ¿Por qué escribir en español? A ella se asocia otra pregunta: ¿por qué hay menos de cinco publicaciones en español en el foro? Invito a quien lee esta publicación a que responda esta segunda pregunta. Intentaré, en lo que sigue, concentrarme en la primera.Pero antes de proseguir con ello, una salvedad.¿A quién va dirigido este post?Escribir en una lengua diferente a la institucionalizada por una comunidad puede parecer un acto de exclusión. Espero que los angloparlantes y demás usuarios de las diferentes lenguas que componen el diverso conglomerado de EA en la actualidad disculpen esa falta de deferencia de mi parte. No se trata de excluir a quienes hablan otros idiomas. Este post tiene como objetivo fundamental eliminar las barreras lingüísticas para la difusión científica; específicamente, al animar a la difusión de ideas en otros idiomas; en especial, para una lengua que cuenta con alrededor de 460 millones de personas.Es importante desestimar la presión por el nivel de muchos de nuestros colegas que han publicado. No creo que sea muy controvertido afirmar que no solo es importante discutir sobre temáticas de alto impacto, sino también en cuestiones de construcción de comunidad que pueden tener como auditorio un público que viene iniciando y se beneficiaría mucho de las recomendaciones de una experiencia semejante.Así, cuestiones como creación de grupos universitarios, grupos de lectura, redes de profesionales, etc., todas son actividades que pueden tener un lector interesado que acabe de ingresar a la comunidad y que busque una publicación sobre experiencias sistematizadas que le prevengan de cometer posibles reprocesos en el camino.¿Cuál es realmente la invitación?Si bien es una tarea fundamental la traducción de conceptos e ideas (frecuentemente me pasa que termino usando términos en inglés al hablar con colegas hispanohablantes), la invitación va, concretamente, a difundir la perspectiva específica de nuestras latitudes: fomentar la producción de contenido original. El apoyo comunitario realizado a partir de la publicación de experiencias significativas puede perfectamente complementarse con publicaciones donde se indaguen las particularidades que han implicado las ideas de EA en el contexto latinoamericano.Difundir los intereses profesionales y académicos de quienes trabajamos en estas áreas permite, además, crear un acervo del cuál se puedan derivar formas distintas de priorización (especialmente para LMICs), estudios comparados (e.g. en la efectividad de políticas y transferibilidad del conocimiento), y en colaboración científica y esfuerzos mancomunados por temas y opciones de carrera.Este propósito específico complementa también una comunidad que ha crecido en otr...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ¿Por qué escribir en español? (Why should we write in Spanish: a follow-up of EAGx Latam), published by Simon Ruiz-Martinez on January 26, 2023 on The Effective Altruism Forum.La realización del EAGx Latam en el pasado mes de enero es el resultado de una comunidad que ha ido en paulatino aumento y que ha alcanzado la madurez suficiente para hacerse cargo de una discusión comunitaria de alto nivel, donde el desarrollo y discusión del Altruismo Eficaz permite avanzar en la co-creación de maneras de ponderar lo mejor que podemos hacer.El interés que logré evidenciar entre los asistentes sobre el contexto específico de América Latina –mayoritariamente en encuentros informales cabe aclarar– muestra las potencialidades de una comunidad comprometida, y cuya visibilidad se hace cada vez más importante en los diferentes espacios que ofrece la comunidad global.La identidad del diverso conglomerado de profesionales, estudiantes y académicos que compone la comunidad hispanohablante está atravesada –aunque no limitada por– el español. De ahí la pregunta planteada en el título de esta publicación: ¿Por qué escribir en español? A ella se asocia otra pregunta: ¿por qué hay menos de cinco publicaciones en español en el foro? Invito a quien lee esta publicación a que responda esta segunda pregunta. Intentaré, en lo que sigue, concentrarme en la primera.Pero antes de proseguir con ello, una salvedad.¿A quién va dirigido este post?Escribir en una lengua diferente a la institucionalizada por una comunidad puede parecer un acto de exclusión. Espero que los angloparlantes y demás usuarios de las diferentes lenguas que componen el diverso conglomerado de EA en la actualidad disculpen esa falta de deferencia de mi parte. No se trata de excluir a quienes hablan otros idiomas. Este post tiene como objetivo fundamental eliminar las barreras lingüísticas para la difusión científica; específicamente, al animar a la difusión de ideas en otros idiomas; en especial, para una lengua que cuenta con alrededor de 460 millones de personas.Es importante desestimar la presión por el nivel de muchos de nuestros colegas que han publicado. No creo que sea muy controvertido afirmar que no solo es importante discutir sobre temáticas de alto impacto, sino también en cuestiones de construcción de comunidad que pueden tener como auditorio un público que viene iniciando y se beneficiaría mucho de las recomendaciones de una experiencia semejante.Así, cuestiones como creación de grupos universitarios, grupos de lectura, redes de profesionales, etc., todas son actividades que pueden tener un lector interesado que acabe de ingresar a la comunidad y que busque una publicación sobre experiencias sistematizadas que le prevengan de cometer posibles reprocesos en el camino.¿Cuál es realmente la invitación?Si bien es una tarea fundamental la traducción de conceptos e ideas (frecuentemente me pasa que termino usando términos en inglés al hablar con colegas hispanohablantes), la invitación va, concretamente, a difundir la perspectiva específica de nuestras latitudes: fomentar la producción de contenido original. El apoyo comunitario realizado a partir de la publicación de experiencias significativas puede perfectamente complementarse con publicaciones donde se indaguen las particularidades que han implicado las ideas de EA en el contexto latinoamericano.Difundir los intereses profesionales y académicos de quienes trabajamos en estas áreas permite, además, crear un acervo del cuál se puedan derivar formas distintas de priorización (especialmente para LMICs), estudios comparados (e.g. en la efectividad de políticas y transferibilidad del conocimiento), y en colaboración científica y esfuerzos mancomunados por temas y opciones de carrera.Este propósito específico complementa también una comunidad que ha crecido en otr...]]>
Simon Ruiz-Martinez https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 12:49 None full 4630
jYdmcrgAj5odTunCT_NL_EA_EA EA - Demodex mites: Large and neglected group of wild animals by Mai T Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Demodex mites: Large and neglected group of wild animals, published by Mai T on January 27, 2023 on The Effective Altruism Forum.Content warningThis article contains information about the human body that some readers may find uncomfortable. There is nothing graphic, but there is detailed information about a genus of human parasites.SummaryDemodex mites are arachnids who live their entire life cycle on/in human skin. Basically every human seems to have numerous mites living on them. Rough estimates suggest that there could be thousands or even millions mites living on each human face, meaning trillions living on all human faces.It seems likely that Demodex mites are frequently killed by human activities, including bathing, tattoos, the use of light therapy, the use of some medicines, and possibly makeup (though I wasn't able to confirm the effects of makeup with experts).Given the scale and neglectedness, this may make Demodex mites a potentially impactful focus for future animal advocacy strategy (though perhaps less impactful than other wild invertebrate interventions). This could be relevant for EAs and researchers interested in wild animal suffering. The first step would be to do a bit more research on some key uncertainties.It's unclear whether Demodex mites are sentient because nobody has researched this question. Given evidence of sentience in other groups of invertebrates and the immense number of Demodex mites who are alive at any one time, it seems plausible that we should be giving serious consideration to their interests.Research direction: A literature review on arachnid sentience, in the style of Gibbons et al (2022), would be helpful here.Another uncertainty is whether Demodex mites are on parts of the bodies that have not been well-sampled in scientific studies (e.g. arms, hands, legs, stomach).Research direction: It would be quite easy to resolve this using a lens that you can buy for ~$420 USD plus your smartphone. This could be a great little project for a small team of volunteers, like a university EA group.Beyond the implications for animal advocacy movement strategy, Demodex mites may also have implications for the ethics of personal lifestyles. I'm updating in the following ways: I will no longer conduct activities that could kill Demodex mites that I do purely for pleasure (for me, this includes tattoos on my head and upper body, and maybe face makeup). I will continue to do essential activities like bathing and receiving health treatments (e.g. laser therapy, antibiotics) despite the possibility that doing so will kill Demodex mites.You may or may not want to do anything about this. After all, many human activities similarly affect the lives of invertebrates (e.g. driving, composting food, gardening, sanitation).I know having concern for the interests of skin mites might seem pretty wild to many people. To me, trying not to kill Demodex mites seems comparable to trying not to step on ants when it can be avoided, which is something that many people (particularly EAs and animal advocates) find sensible. Obviously, this hinges on one's beliefs about the moral value of mites, compared to other animals.Demodex mitesWhat are they?Demodex mites are arachnids who live on and in human skin.There are two main species that live on humans: Demodex folliculorum and D. brevis. There are other species that live on other animals.How common are they?It is very plausible that 100% of humans have Demodex mites (other than newborn babies).Many studies that have recorded prevalences around 20-80% using visual identification methods.But one study found genetic evidence of Demodex mites in 100% of humans participating in the study.There are two possible explanations: genetic sampling is better than visual identification, so ~100% of humans have...]]>
Mai T https://forum.effectivealtruism.org/posts/jYdmcrgAj5odTunCT/demodex-mites-large-and-neglected-group-of-wild-animals Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Demodex mites: Large and neglected group of wild animals, published by Mai T on January 27, 2023 on The Effective Altruism Forum.Content warningThis article contains information about the human body that some readers may find uncomfortable. There is nothing graphic, but there is detailed information about a genus of human parasites.SummaryDemodex mites are arachnids who live their entire life cycle on/in human skin. Basically every human seems to have numerous mites living on them. Rough estimates suggest that there could be thousands or even millions mites living on each human face, meaning trillions living on all human faces.It seems likely that Demodex mites are frequently killed by human activities, including bathing, tattoos, the use of light therapy, the use of some medicines, and possibly makeup (though I wasn't able to confirm the effects of makeup with experts).Given the scale and neglectedness, this may make Demodex mites a potentially impactful focus for future animal advocacy strategy (though perhaps less impactful than other wild invertebrate interventions). This could be relevant for EAs and researchers interested in wild animal suffering. The first step would be to do a bit more research on some key uncertainties.It's unclear whether Demodex mites are sentient because nobody has researched this question. Given evidence of sentience in other groups of invertebrates and the immense number of Demodex mites who are alive at any one time, it seems plausible that we should be giving serious consideration to their interests.Research direction: A literature review on arachnid sentience, in the style of Gibbons et al (2022), would be helpful here.Another uncertainty is whether Demodex mites are on parts of the bodies that have not been well-sampled in scientific studies (e.g. arms, hands, legs, stomach).Research direction: It would be quite easy to resolve this using a lens that you can buy for ~$420 USD plus your smartphone. This could be a great little project for a small team of volunteers, like a university EA group.Beyond the implications for animal advocacy movement strategy, Demodex mites may also have implications for the ethics of personal lifestyles. I'm updating in the following ways: I will no longer conduct activities that could kill Demodex mites that I do purely for pleasure (for me, this includes tattoos on my head and upper body, and maybe face makeup). I will continue to do essential activities like bathing and receiving health treatments (e.g. laser therapy, antibiotics) despite the possibility that doing so will kill Demodex mites.You may or may not want to do anything about this. After all, many human activities similarly affect the lives of invertebrates (e.g. driving, composting food, gardening, sanitation).I know having concern for the interests of skin mites might seem pretty wild to many people. To me, trying not to kill Demodex mites seems comparable to trying not to step on ants when it can be avoided, which is something that many people (particularly EAs and animal advocates) find sensible. Obviously, this hinges on one's beliefs about the moral value of mites, compared to other animals.Demodex mitesWhat are they?Demodex mites are arachnids who live on and in human skin.There are two main species that live on humans: Demodex folliculorum and D. brevis. There are other species that live on other animals.How common are they?It is very plausible that 100% of humans have Demodex mites (other than newborn babies).Many studies that have recorded prevalences around 20-80% using visual identification methods.But one study found genetic evidence of Demodex mites in 100% of humans participating in the study.There are two possible explanations: genetic sampling is better than visual identification, so ~100% of humans have...]]>
Fri, 27 Jan 2023 12:10:21 +0000 EA - Demodex mites: Large and neglected group of wild animals by Mai T Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Demodex mites: Large and neglected group of wild animals, published by Mai T on January 27, 2023 on The Effective Altruism Forum.Content warningThis article contains information about the human body that some readers may find uncomfortable. There is nothing graphic, but there is detailed information about a genus of human parasites.SummaryDemodex mites are arachnids who live their entire life cycle on/in human skin. Basically every human seems to have numerous mites living on them. Rough estimates suggest that there could be thousands or even millions mites living on each human face, meaning trillions living on all human faces.It seems likely that Demodex mites are frequently killed by human activities, including bathing, tattoos, the use of light therapy, the use of some medicines, and possibly makeup (though I wasn't able to confirm the effects of makeup with experts).Given the scale and neglectedness, this may make Demodex mites a potentially impactful focus for future animal advocacy strategy (though perhaps less impactful than other wild invertebrate interventions). This could be relevant for EAs and researchers interested in wild animal suffering. The first step would be to do a bit more research on some key uncertainties.It's unclear whether Demodex mites are sentient because nobody has researched this question. Given evidence of sentience in other groups of invertebrates and the immense number of Demodex mites who are alive at any one time, it seems plausible that we should be giving serious consideration to their interests.Research direction: A literature review on arachnid sentience, in the style of Gibbons et al (2022), would be helpful here.Another uncertainty is whether Demodex mites are on parts of the bodies that have not been well-sampled in scientific studies (e.g. arms, hands, legs, stomach).Research direction: It would be quite easy to resolve this using a lens that you can buy for ~$420 USD plus your smartphone. This could be a great little project for a small team of volunteers, like a university EA group.Beyond the implications for animal advocacy movement strategy, Demodex mites may also have implications for the ethics of personal lifestyles. I'm updating in the following ways: I will no longer conduct activities that could kill Demodex mites that I do purely for pleasure (for me, this includes tattoos on my head and upper body, and maybe face makeup). I will continue to do essential activities like bathing and receiving health treatments (e.g. laser therapy, antibiotics) despite the possibility that doing so will kill Demodex mites.You may or may not want to do anything about this. After all, many human activities similarly affect the lives of invertebrates (e.g. driving, composting food, gardening, sanitation).I know having concern for the interests of skin mites might seem pretty wild to many people. To me, trying not to kill Demodex mites seems comparable to trying not to step on ants when it can be avoided, which is something that many people (particularly EAs and animal advocates) find sensible. Obviously, this hinges on one's beliefs about the moral value of mites, compared to other animals.Demodex mitesWhat are they?Demodex mites are arachnids who live on and in human skin.There are two main species that live on humans: Demodex folliculorum and D. brevis. There are other species that live on other animals.How common are they?It is very plausible that 100% of humans have Demodex mites (other than newborn babies).Many studies that have recorded prevalences around 20-80% using visual identification methods.But one study found genetic evidence of Demodex mites in 100% of humans participating in the study.There are two possible explanations: genetic sampling is better than visual identification, so ~100% of humans have...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Demodex mites: Large and neglected group of wild animals, published by Mai T on January 27, 2023 on The Effective Altruism Forum.Content warningThis article contains information about the human body that some readers may find uncomfortable. There is nothing graphic, but there is detailed information about a genus of human parasites.SummaryDemodex mites are arachnids who live their entire life cycle on/in human skin. Basically every human seems to have numerous mites living on them. Rough estimates suggest that there could be thousands or even millions mites living on each human face, meaning trillions living on all human faces.It seems likely that Demodex mites are frequently killed by human activities, including bathing, tattoos, the use of light therapy, the use of some medicines, and possibly makeup (though I wasn't able to confirm the effects of makeup with experts).Given the scale and neglectedness, this may make Demodex mites a potentially impactful focus for future animal advocacy strategy (though perhaps less impactful than other wild invertebrate interventions). This could be relevant for EAs and researchers interested in wild animal suffering. The first step would be to do a bit more research on some key uncertainties.It's unclear whether Demodex mites are sentient because nobody has researched this question. Given evidence of sentience in other groups of invertebrates and the immense number of Demodex mites who are alive at any one time, it seems plausible that we should be giving serious consideration to their interests.Research direction: A literature review on arachnid sentience, in the style of Gibbons et al (2022), would be helpful here.Another uncertainty is whether Demodex mites are on parts of the bodies that have not been well-sampled in scientific studies (e.g. arms, hands, legs, stomach).Research direction: It would be quite easy to resolve this using a lens that you can buy for ~$420 USD plus your smartphone. This could be a great little project for a small team of volunteers, like a university EA group.Beyond the implications for animal advocacy movement strategy, Demodex mites may also have implications for the ethics of personal lifestyles. I'm updating in the following ways: I will no longer conduct activities that could kill Demodex mites that I do purely for pleasure (for me, this includes tattoos on my head and upper body, and maybe face makeup). I will continue to do essential activities like bathing and receiving health treatments (e.g. laser therapy, antibiotics) despite the possibility that doing so will kill Demodex mites.You may or may not want to do anything about this. After all, many human activities similarly affect the lives of invertebrates (e.g. driving, composting food, gardening, sanitation).I know having concern for the interests of skin mites might seem pretty wild to many people. To me, trying not to kill Demodex mites seems comparable to trying not to step on ants when it can be avoided, which is something that many people (particularly EAs and animal advocates) find sensible. Obviously, this hinges on one's beliefs about the moral value of mites, compared to other animals.Demodex mitesWhat are they?Demodex mites are arachnids who live on and in human skin.There are two main species that live on humans: Demodex folliculorum and D. brevis. There are other species that live on other animals.How common are they?It is very plausible that 100% of humans have Demodex mites (other than newborn babies).Many studies that have recorded prevalences around 20-80% using visual identification methods.But one study found genetic evidence of Demodex mites in 100% of humans participating in the study.There are two possible explanations: genetic sampling is better than visual identification, so ~100% of humans have...]]>
Mai T https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 19:01 None full 4610
Ne5wzJKNT4fd2SYHX_EA EA - Pain relief: a shallow cause exploration by Samuel Dupret Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pain relief: a shallow cause exploration, published by Samuel Dupret on January 27, 2023 on The Effective Altruism Forum.This shallow investigation was commissioned by Founders Pledge.SummaryThis report is a shallow cause exploration, completed in two weeks, which expands on our previous work considering pain as a potential cause area (Sharma et al., 2020). Here, we attempt to explore the relationship between pain and subjective wellbeing (SWB) more directly, both conceptually and quantitatively.First, we try to calculate a conversion rate between self-reported pain intensity and SWB measures. However, the limited literature provides us with two potential conversion rates: a 1-point change on a 0-10 pain scale could lead to either a 0.1-point or 1-point change on a 0-10 SWB scale. Choosing one or the other leads to drastically different results when evaluating the cost-effectiveness of pain treatments.Second, we assess the severity and scale of chronic pain in terms of life satisfaction to be large. However, we think this is likely an underestimate which will benefit from further evaluation.Third, we offer some novel back-of-the-envelope calculations for the cost-effectiveness of several interventions to treat pain. We conclude - in agreement with Sharma et al. (2020) - that providing opioids for terminal pain and drugs for migraines are potentially cost-effective interventions. We add an analysis suggesting that psychotherapy for chronic pain could be moderately cost-effective if it can be deployed in ways that reduce costs (task-shifted, grouped, and/or digital), although we doubt it would be as cost-effective as psychotherapy for depression. We also present other interventions which we are more uncertain about but we think are worth researching further.There are many interventions we were unable to review. Reviewing the medical literature on pain was more time intensive than for our other projects because most meta-analyses evaluated their evidence as “moderate to low” quality. Furthermore, our subjective judgement was that these meta-analyses were of lower quality than the work we typically review from the fields of economics, psychology, and global health.The most valuable directions for further cause prioritisation research are (1) narrowing our substantial uncertainty about the conversion rates between pain scores and SWB measures, and (2) investigating the potential of advocacy campaigns to increase access to opioids.OutlineIn Section 1 we define pain and present how it is measured.In Section 2 we explore the relationship between pain and subjective wellbeing.In Section 3 we model the scale and severity of chronic pain in subjective wellbeing terms.In Section 4 we present potential interventions to treat pain.In Section 5 we present our recommendations for future research and conclude with the key takeaways from the report.NotesThis report focuses on impact in terms of WELLBYs. One WELLBY is a 1 point change in life satisfaction for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of life satisfaction to WELLBYs using a 2 point standard deviation on 0-10 life satisfaction scales (i.e., 1 SD change is the equivalent of 2 point changes on a 0-10 life satisfaction scale). This naive conversion is based on estimates from large scale data sets like the World Happiness Reports. See our post on the WELLBY method for more details.The shallowness of this investigation means (1) we include more guesses and uncertainty in our models, (2) we couldn’t always conduct the most detailed or complex analyses, (3) we might have missed some data, and (4) we take some findings at face value.Our calculations and data extraction can be found in this spreadsheet and GitHub rep...]]>
Samuel Dupret https://forum.effectivealtruism.org/posts/Ne5wzJKNT4fd2SYHX/pain-relief-a-shallow-cause-exploration Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pain relief: a shallow cause exploration, published by Samuel Dupret on January 27, 2023 on The Effective Altruism Forum.This shallow investigation was commissioned by Founders Pledge.SummaryThis report is a shallow cause exploration, completed in two weeks, which expands on our previous work considering pain as a potential cause area (Sharma et al., 2020). Here, we attempt to explore the relationship between pain and subjective wellbeing (SWB) more directly, both conceptually and quantitatively.First, we try to calculate a conversion rate between self-reported pain intensity and SWB measures. However, the limited literature provides us with two potential conversion rates: a 1-point change on a 0-10 pain scale could lead to either a 0.1-point or 1-point change on a 0-10 SWB scale. Choosing one or the other leads to drastically different results when evaluating the cost-effectiveness of pain treatments.Second, we assess the severity and scale of chronic pain in terms of life satisfaction to be large. However, we think this is likely an underestimate which will benefit from further evaluation.Third, we offer some novel back-of-the-envelope calculations for the cost-effectiveness of several interventions to treat pain. We conclude - in agreement with Sharma et al. (2020) - that providing opioids for terminal pain and drugs for migraines are potentially cost-effective interventions. We add an analysis suggesting that psychotherapy for chronic pain could be moderately cost-effective if it can be deployed in ways that reduce costs (task-shifted, grouped, and/or digital), although we doubt it would be as cost-effective as psychotherapy for depression. We also present other interventions which we are more uncertain about but we think are worth researching further.There are many interventions we were unable to review. Reviewing the medical literature on pain was more time intensive than for our other projects because most meta-analyses evaluated their evidence as “moderate to low” quality. Furthermore, our subjective judgement was that these meta-analyses were of lower quality than the work we typically review from the fields of economics, psychology, and global health.The most valuable directions for further cause prioritisation research are (1) narrowing our substantial uncertainty about the conversion rates between pain scores and SWB measures, and (2) investigating the potential of advocacy campaigns to increase access to opioids.OutlineIn Section 1 we define pain and present how it is measured.In Section 2 we explore the relationship between pain and subjective wellbeing.In Section 3 we model the scale and severity of chronic pain in subjective wellbeing terms.In Section 4 we present potential interventions to treat pain.In Section 5 we present our recommendations for future research and conclude with the key takeaways from the report.NotesThis report focuses on impact in terms of WELLBYs. One WELLBY is a 1 point change in life satisfaction for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of life satisfaction to WELLBYs using a 2 point standard deviation on 0-10 life satisfaction scales (i.e., 1 SD change is the equivalent of 2 point changes on a 0-10 life satisfaction scale). This naive conversion is based on estimates from large scale data sets like the World Happiness Reports. See our post on the WELLBY method for more details.The shallowness of this investigation means (1) we include more guesses and uncertainty in our models, (2) we couldn’t always conduct the most detailed or complex analyses, (3) we might have missed some data, and (4) we take some findings at face value.Our calculations and data extraction can be found in this spreadsheet and GitHub rep...]]>
Fri, 27 Jan 2023 10:38:31 +0000 EA - Pain relief: a shallow cause exploration by Samuel Dupret Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pain relief: a shallow cause exploration, published by Samuel Dupret on January 27, 2023 on The Effective Altruism Forum.This shallow investigation was commissioned by Founders Pledge.SummaryThis report is a shallow cause exploration, completed in two weeks, which expands on our previous work considering pain as a potential cause area (Sharma et al., 2020). Here, we attempt to explore the relationship between pain and subjective wellbeing (SWB) more directly, both conceptually and quantitatively.First, we try to calculate a conversion rate between self-reported pain intensity and SWB measures. However, the limited literature provides us with two potential conversion rates: a 1-point change on a 0-10 pain scale could lead to either a 0.1-point or 1-point change on a 0-10 SWB scale. Choosing one or the other leads to drastically different results when evaluating the cost-effectiveness of pain treatments.Second, we assess the severity and scale of chronic pain in terms of life satisfaction to be large. However, we think this is likely an underestimate which will benefit from further evaluation.Third, we offer some novel back-of-the-envelope calculations for the cost-effectiveness of several interventions to treat pain. We conclude - in agreement with Sharma et al. (2020) - that providing opioids for terminal pain and drugs for migraines are potentially cost-effective interventions. We add an analysis suggesting that psychotherapy for chronic pain could be moderately cost-effective if it can be deployed in ways that reduce costs (task-shifted, grouped, and/or digital), although we doubt it would be as cost-effective as psychotherapy for depression. We also present other interventions which we are more uncertain about but we think are worth researching further.There are many interventions we were unable to review. Reviewing the medical literature on pain was more time intensive than for our other projects because most meta-analyses evaluated their evidence as “moderate to low” quality. Furthermore, our subjective judgement was that these meta-analyses were of lower quality than the work we typically review from the fields of economics, psychology, and global health.The most valuable directions for further cause prioritisation research are (1) narrowing our substantial uncertainty about the conversion rates between pain scores and SWB measures, and (2) investigating the potential of advocacy campaigns to increase access to opioids.OutlineIn Section 1 we define pain and present how it is measured.In Section 2 we explore the relationship between pain and subjective wellbeing.In Section 3 we model the scale and severity of chronic pain in subjective wellbeing terms.In Section 4 we present potential interventions to treat pain.In Section 5 we present our recommendations for future research and conclude with the key takeaways from the report.NotesThis report focuses on impact in terms of WELLBYs. One WELLBY is a 1 point change in life satisfaction for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of life satisfaction to WELLBYs using a 2 point standard deviation on 0-10 life satisfaction scales (i.e., 1 SD change is the equivalent of 2 point changes on a 0-10 life satisfaction scale). This naive conversion is based on estimates from large scale data sets like the World Happiness Reports. See our post on the WELLBY method for more details.The shallowness of this investigation means (1) we include more guesses and uncertainty in our models, (2) we couldn’t always conduct the most detailed or complex analyses, (3) we might have missed some data, and (4) we take some findings at face value.Our calculations and data extraction can be found in this spreadsheet and GitHub rep...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pain relief: a shallow cause exploration, published by Samuel Dupret on January 27, 2023 on The Effective Altruism Forum.This shallow investigation was commissioned by Founders Pledge.SummaryThis report is a shallow cause exploration, completed in two weeks, which expands on our previous work considering pain as a potential cause area (Sharma et al., 2020). Here, we attempt to explore the relationship between pain and subjective wellbeing (SWB) more directly, both conceptually and quantitatively.First, we try to calculate a conversion rate between self-reported pain intensity and SWB measures. However, the limited literature provides us with two potential conversion rates: a 1-point change on a 0-10 pain scale could lead to either a 0.1-point or 1-point change on a 0-10 SWB scale. Choosing one or the other leads to drastically different results when evaluating the cost-effectiveness of pain treatments.Second, we assess the severity and scale of chronic pain in terms of life satisfaction to be large. However, we think this is likely an underestimate which will benefit from further evaluation.Third, we offer some novel back-of-the-envelope calculations for the cost-effectiveness of several interventions to treat pain. We conclude - in agreement with Sharma et al. (2020) - that providing opioids for terminal pain and drugs for migraines are potentially cost-effective interventions. We add an analysis suggesting that psychotherapy for chronic pain could be moderately cost-effective if it can be deployed in ways that reduce costs (task-shifted, grouped, and/or digital), although we doubt it would be as cost-effective as psychotherapy for depression. We also present other interventions which we are more uncertain about but we think are worth researching further.There are many interventions we were unable to review. Reviewing the medical literature on pain was more time intensive than for our other projects because most meta-analyses evaluated their evidence as “moderate to low” quality. Furthermore, our subjective judgement was that these meta-analyses were of lower quality than the work we typically review from the fields of economics, psychology, and global health.The most valuable directions for further cause prioritisation research are (1) narrowing our substantial uncertainty about the conversion rates between pain scores and SWB measures, and (2) investigating the potential of advocacy campaigns to increase access to opioids.OutlineIn Section 1 we define pain and present how it is measured.In Section 2 we explore the relationship between pain and subjective wellbeing.In Section 3 we model the scale and severity of chronic pain in subjective wellbeing terms.In Section 4 we present potential interventions to treat pain.In Section 5 we present our recommendations for future research and conclude with the key takeaways from the report.NotesThis report focuses on impact in terms of WELLBYs. One WELLBY is a 1 point change in life satisfaction for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of life satisfaction to WELLBYs using a 2 point standard deviation on 0-10 life satisfaction scales (i.e., 1 SD change is the equivalent of 2 point changes on a 0-10 life satisfaction scale). This naive conversion is based on estimates from large scale data sets like the World Happiness Reports. See our post on the WELLBY method for more details.The shallowness of this investigation means (1) we include more guesses and uncertainty in our models, (2) we couldn’t always conduct the most detailed or complex analyses, (3) we might have missed some data, and (4) we take some findings at face value.Our calculations and data extraction can be found in this spreadsheet and GitHub rep...]]>
Samuel Dupret https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:15:37 None full 4597
Ne5wzJKNT4fd2SYHX_NL_EA_EA EA - Pain relief: a shallow cause exploration by Samuel Dupret Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pain relief: a shallow cause exploration, published by Samuel Dupret on January 27, 2023 on The Effective Altruism Forum.This shallow investigation was commissioned by Founders Pledge.SummaryThis report is a shallow cause exploration, completed in two weeks, which expands on our previous work considering pain as a potential cause area (Sharma et al., 2020). Here, we attempt to explore the relationship between pain and subjective wellbeing (SWB) more directly, both conceptually and quantitatively.First, we try to calculate a conversion rate between self-reported pain intensity and SWB measures. However, the limited literature provides us with two potential conversion rates: a 1-point change on a 0-10 pain scale could lead to either a 0.1-point or 1-point change on a 0-10 SWB scale. Choosing one or the other leads to drastically different results when evaluating the cost-effectiveness of pain treatments.Second, we assess the severity and scale of chronic pain in terms of life satisfaction to be large. However, we think this is likely an underestimate which will benefit from further evaluation.Third, we offer some novel back-of-the-envelope calculations for the cost-effectiveness of several interventions to treat pain. We conclude - in agreement with Sharma et al. (2020) - that providing opioids for terminal pain and drugs for migraines are potentially cost-effective interventions. We add an analysis suggesting that psychotherapy for chronic pain could be moderately cost-effective if it can be deployed in ways that reduce costs (task-shifted, grouped, and/or digital), although we doubt it would be as cost-effective as psychotherapy for depression. We also present other interventions which we are more uncertain about but we think are worth researching further.There are many interventions we were unable to review. Reviewing the medical literature on pain was more time intensive than for our other projects because most meta-analyses evaluated their evidence as “moderate to low” quality. Furthermore, our subjective judgement was that these meta-analyses were of lower quality than the work we typically review from the fields of economics, psychology, and global health.The most valuable directions for further cause prioritisation research are (1) narrowing our substantial uncertainty about the conversion rates between pain scores and SWB measures, and (2) investigating the potential of advocacy campaigns to increase access to opioids.OutlineIn Section 1 we define pain and present how it is measured.In Section 2 we explore the relationship between pain and subjective wellbeing.In Section 3 we model the scale and severity of chronic pain in subjective wellbeing terms.In Section 4 we present potential interventions to treat pain.In Section 5 we present our recommendations for future research and conclude with the key takeaways from the report.NotesThis report focuses on impact in terms of WELLBYs. One WELLBY is a 1 point change in life satisfaction for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of life satisfaction to WELLBYs using a 2 point standard deviation on 0-10 life satisfaction scales (i.e., 1 SD change is the equivalent of 2 point changes on a 0-10 life satisfaction scale). This naive conversion is based on estimates from large scale data sets like the World Happiness Reports. See our post on the WELLBY method for more details.The shallowness of this investigation means (1) we include more guesses and uncertainty in our models, (2) we couldn’t always conduct the most detailed or complex analyses, (3) we might have missed some data, and (4) we take some findings at face value.Our calculations and data extraction can be found in this spreadsheet and GitHub rep...]]>
Samuel Dupret https://forum.effectivealtruism.org/posts/Ne5wzJKNT4fd2SYHX/pain-relief-a-shallow-cause-exploration Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pain relief: a shallow cause exploration, published by Samuel Dupret on January 27, 2023 on The Effective Altruism Forum.This shallow investigation was commissioned by Founders Pledge.SummaryThis report is a shallow cause exploration, completed in two weeks, which expands on our previous work considering pain as a potential cause area (Sharma et al., 2020). Here, we attempt to explore the relationship between pain and subjective wellbeing (SWB) more directly, both conceptually and quantitatively.First, we try to calculate a conversion rate between self-reported pain intensity and SWB measures. However, the limited literature provides us with two potential conversion rates: a 1-point change on a 0-10 pain scale could lead to either a 0.1-point or 1-point change on a 0-10 SWB scale. Choosing one or the other leads to drastically different results when evaluating the cost-effectiveness of pain treatments.Second, we assess the severity and scale of chronic pain in terms of life satisfaction to be large. However, we think this is likely an underestimate which will benefit from further evaluation.Third, we offer some novel back-of-the-envelope calculations for the cost-effectiveness of several interventions to treat pain. We conclude - in agreement with Sharma et al. (2020) - that providing opioids for terminal pain and drugs for migraines are potentially cost-effective interventions. We add an analysis suggesting that psychotherapy for chronic pain could be moderately cost-effective if it can be deployed in ways that reduce costs (task-shifted, grouped, and/or digital), although we doubt it would be as cost-effective as psychotherapy for depression. We also present other interventions which we are more uncertain about but we think are worth researching further.There are many interventions we were unable to review. Reviewing the medical literature on pain was more time intensive than for our other projects because most meta-analyses evaluated their evidence as “moderate to low” quality. Furthermore, our subjective judgement was that these meta-analyses were of lower quality than the work we typically review from the fields of economics, psychology, and global health.The most valuable directions for further cause prioritisation research are (1) narrowing our substantial uncertainty about the conversion rates between pain scores and SWB measures, and (2) investigating the potential of advocacy campaigns to increase access to opioids.OutlineIn Section 1 we define pain and present how it is measured.In Section 2 we explore the relationship between pain and subjective wellbeing.In Section 3 we model the scale and severity of chronic pain in subjective wellbeing terms.In Section 4 we present potential interventions to treat pain.In Section 5 we present our recommendations for future research and conclude with the key takeaways from the report.NotesThis report focuses on impact in terms of WELLBYs. One WELLBY is a 1 point change in life satisfaction for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of life satisfaction to WELLBYs using a 2 point standard deviation on 0-10 life satisfaction scales (i.e., 1 SD change is the equivalent of 2 point changes on a 0-10 life satisfaction scale). This naive conversion is based on estimates from large scale data sets like the World Happiness Reports. See our post on the WELLBY method for more details.The shallowness of this investigation means (1) we include more guesses and uncertainty in our models, (2) we couldn’t always conduct the most detailed or complex analyses, (3) we might have missed some data, and (4) we take some findings at face value.Our calculations and data extraction can be found in this spreadsheet and GitHub rep...]]>
Fri, 27 Jan 2023 10:38:31 +0000 EA - Pain relief: a shallow cause exploration by Samuel Dupret Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pain relief: a shallow cause exploration, published by Samuel Dupret on January 27, 2023 on The Effective Altruism Forum.This shallow investigation was commissioned by Founders Pledge.SummaryThis report is a shallow cause exploration, completed in two weeks, which expands on our previous work considering pain as a potential cause area (Sharma et al., 2020). Here, we attempt to explore the relationship between pain and subjective wellbeing (SWB) more directly, both conceptually and quantitatively.First, we try to calculate a conversion rate between self-reported pain intensity and SWB measures. However, the limited literature provides us with two potential conversion rates: a 1-point change on a 0-10 pain scale could lead to either a 0.1-point or 1-point change on a 0-10 SWB scale. Choosing one or the other leads to drastically different results when evaluating the cost-effectiveness of pain treatments.Second, we assess the severity and scale of chronic pain in terms of life satisfaction to be large. However, we think this is likely an underestimate which will benefit from further evaluation.Third, we offer some novel back-of-the-envelope calculations for the cost-effectiveness of several interventions to treat pain. We conclude - in agreement with Sharma et al. (2020) - that providing opioids for terminal pain and drugs for migraines are potentially cost-effective interventions. We add an analysis suggesting that psychotherapy for chronic pain could be moderately cost-effective if it can be deployed in ways that reduce costs (task-shifted, grouped, and/or digital), although we doubt it would be as cost-effective as psychotherapy for depression. We also present other interventions which we are more uncertain about but we think are worth researching further.There are many interventions we were unable to review. Reviewing the medical literature on pain was more time intensive than for our other projects because most meta-analyses evaluated their evidence as “moderate to low” quality. Furthermore, our subjective judgement was that these meta-analyses were of lower quality than the work we typically review from the fields of economics, psychology, and global health.The most valuable directions for further cause prioritisation research are (1) narrowing our substantial uncertainty about the conversion rates between pain scores and SWB measures, and (2) investigating the potential of advocacy campaigns to increase access to opioids.OutlineIn Section 1 we define pain and present how it is measured.In Section 2 we explore the relationship between pain and subjective wellbeing.In Section 3 we model the scale and severity of chronic pain in subjective wellbeing terms.In Section 4 we present potential interventions to treat pain.In Section 5 we present our recommendations for future research and conclude with the key takeaways from the report.NotesThis report focuses on impact in terms of WELLBYs. One WELLBY is a 1 point change in life satisfaction for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of life satisfaction to WELLBYs using a 2 point standard deviation on 0-10 life satisfaction scales (i.e., 1 SD change is the equivalent of 2 point changes on a 0-10 life satisfaction scale). This naive conversion is based on estimates from large scale data sets like the World Happiness Reports. See our post on the WELLBY method for more details.The shallowness of this investigation means (1) we include more guesses and uncertainty in our models, (2) we couldn’t always conduct the most detailed or complex analyses, (3) we might have missed some data, and (4) we take some findings at face value.Our calculations and data extraction can be found in this spreadsheet and GitHub rep...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pain relief: a shallow cause exploration, published by Samuel Dupret on January 27, 2023 on The Effective Altruism Forum.This shallow investigation was commissioned by Founders Pledge.SummaryThis report is a shallow cause exploration, completed in two weeks, which expands on our previous work considering pain as a potential cause area (Sharma et al., 2020). Here, we attempt to explore the relationship between pain and subjective wellbeing (SWB) more directly, both conceptually and quantitatively.First, we try to calculate a conversion rate between self-reported pain intensity and SWB measures. However, the limited literature provides us with two potential conversion rates: a 1-point change on a 0-10 pain scale could lead to either a 0.1-point or 1-point change on a 0-10 SWB scale. Choosing one or the other leads to drastically different results when evaluating the cost-effectiveness of pain treatments.Second, we assess the severity and scale of chronic pain in terms of life satisfaction to be large. However, we think this is likely an underestimate which will benefit from further evaluation.Third, we offer some novel back-of-the-envelope calculations for the cost-effectiveness of several interventions to treat pain. We conclude - in agreement with Sharma et al. (2020) - that providing opioids for terminal pain and drugs for migraines are potentially cost-effective interventions. We add an analysis suggesting that psychotherapy for chronic pain could be moderately cost-effective if it can be deployed in ways that reduce costs (task-shifted, grouped, and/or digital), although we doubt it would be as cost-effective as psychotherapy for depression. We also present other interventions which we are more uncertain about but we think are worth researching further.There are many interventions we were unable to review. Reviewing the medical literature on pain was more time intensive than for our other projects because most meta-analyses evaluated their evidence as “moderate to low” quality. Furthermore, our subjective judgement was that these meta-analyses were of lower quality than the work we typically review from the fields of economics, psychology, and global health.The most valuable directions for further cause prioritisation research are (1) narrowing our substantial uncertainty about the conversion rates between pain scores and SWB measures, and (2) investigating the potential of advocacy campaigns to increase access to opioids.OutlineIn Section 1 we define pain and present how it is measured.In Section 2 we explore the relationship between pain and subjective wellbeing.In Section 3 we model the scale and severity of chronic pain in subjective wellbeing terms.In Section 4 we present potential interventions to treat pain.In Section 5 we present our recommendations for future research and conclude with the key takeaways from the report.NotesThis report focuses on impact in terms of WELLBYs. One WELLBY is a 1 point change in life satisfaction for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of life satisfaction to WELLBYs using a 2 point standard deviation on 0-10 life satisfaction scales (i.e., 1 SD change is the equivalent of 2 point changes on a 0-10 life satisfaction scale). This naive conversion is based on estimates from large scale data sets like the World Happiness Reports. See our post on the WELLBY method for more details.The shallowness of this investigation means (1) we include more guesses and uncertainty in our models, (2) we couldn’t always conduct the most detailed or complex analyses, (3) we might have missed some data, and (4) we take some findings at face value.Our calculations and data extraction can be found in this spreadsheet and GitHub rep...]]>
Samuel Dupret https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:15:37 None full 4608
LiHqgrKDuzJPFjtaT_NL_EA_EA EA - Moving Toward More Concrete Proposals for Reform by Jason Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving Toward More Concrete Proposals for Reform, published by Jason on January 27, 2023 on The Effective Altruism Forum.Epistemic status: uncertain, unsure of the extent that I mean this as a practical proposal vs. a thought experiment to make certain observations, aware I could improve this but also aware that it should be timelyTiming note: Most of this was written before Doing EA Better was released, and is largely based on public events on the Forum. It is not specifically in response to that post, and is not really about possible epistemic reforms at all.SummaryThis post suggests that discussions of possible reforms relating to governance, deconcentration of power, and transparency have often been relatively unhelpful due to a lack of specificity. It suggests raising modest funding for the creation of more developed proposals, with the broader EA community taking the lead on the project. The post presents various reasons I think this proposal has value even if one assumes the underlying reform proposals lack merit. It explains how the proposal would provide information for and tie into further reform initiatives, and suggests some mechanics for funding and selecting proposals.IntroductionI find the very-high level discussion of possible reform to be generally unhelpful. At that level, it's too easy for proponents to gloss over implementation costs and downsides, and too easy for skeptics to attack strawmen. There needs to be more substance for anyone to seriously evaluate most of the proposals that are floating around as possibilities -- either in the abstract or in terms of whether investing resources in a proposal specific to their organization makes sense.I've also noted, as have others, that it's unreasonable to place the burden of producing more detailed proposals on reform advocates without either a strong reason to believe those reforms would be successful or providing reasonable compensation for their work. (Of course, no one should ever feel pressure to write even if compensation is offered!)For what it's worth, I think some of the reform proposals going around are likely correct, some are worthwhile in carefully selected contexts, and some are very unlikely to be viable. But I also think, for many proposals, that my assessments could update significantly if I read more detailed versions of the proposal.I'm a newcomer who has never accepted anything from any EA source other than one free book. This gives me certain advantages and limitations. I don't have to worry about damaging effects on my career, nor do I personally have anything to personally lose or gain from reform. On the other hand, my outsider status means I am probably ignorant of some important information. And it certainly means I have no right to make demands on anyone, only suggestions.Background AssumptionsMost of the money in the ecosystem comes from Open Phil and a few other large donors. However, there is still a respectably-sized slice of independent funding from small to mid-size donors. I submit that certain functions within EA are in particular need of a diversified funding base, rather than one that is dependent on Open Phil or other megadonors. In particular, the development of proposals about power, governance, and transparency is one of those functions. Not only are people not particularly good at deciding whether they need to give up power or expose their decisions to transparency, they also have an obvious conflict of interest on the question. One can doubtless think of other functions for which independence from major funders is important or even critical.For various reasons, I identify with a desire for moderate reform at this time. I'll define that (imprecisely) as a set of reforms that seeks to make EA a better version of itself, but still clearly identifiab...]]>
Jason https://forum.effectivealtruism.org/posts/LiHqgrKDuzJPFjtaT/moving-toward-more-concrete-proposals-for-reform Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving Toward More Concrete Proposals for Reform, published by Jason on January 27, 2023 on The Effective Altruism Forum.Epistemic status: uncertain, unsure of the extent that I mean this as a practical proposal vs. a thought experiment to make certain observations, aware I could improve this but also aware that it should be timelyTiming note: Most of this was written before Doing EA Better was released, and is largely based on public events on the Forum. It is not specifically in response to that post, and is not really about possible epistemic reforms at all.SummaryThis post suggests that discussions of possible reforms relating to governance, deconcentration of power, and transparency have often been relatively unhelpful due to a lack of specificity. It suggests raising modest funding for the creation of more developed proposals, with the broader EA community taking the lead on the project. The post presents various reasons I think this proposal has value even if one assumes the underlying reform proposals lack merit. It explains how the proposal would provide information for and tie into further reform initiatives, and suggests some mechanics for funding and selecting proposals.IntroductionI find the very-high level discussion of possible reform to be generally unhelpful. At that level, it's too easy for proponents to gloss over implementation costs and downsides, and too easy for skeptics to attack strawmen. There needs to be more substance for anyone to seriously evaluate most of the proposals that are floating around as possibilities -- either in the abstract or in terms of whether investing resources in a proposal specific to their organization makes sense.I've also noted, as have others, that it's unreasonable to place the burden of producing more detailed proposals on reform advocates without either a strong reason to believe those reforms would be successful or providing reasonable compensation for their work. (Of course, no one should ever feel pressure to write even if compensation is offered!)For what it's worth, I think some of the reform proposals going around are likely correct, some are worthwhile in carefully selected contexts, and some are very unlikely to be viable. But I also think, for many proposals, that my assessments could update significantly if I read more detailed versions of the proposal.I'm a newcomer who has never accepted anything from any EA source other than one free book. This gives me certain advantages and limitations. I don't have to worry about damaging effects on my career, nor do I personally have anything to personally lose or gain from reform. On the other hand, my outsider status means I am probably ignorant of some important information. And it certainly means I have no right to make demands on anyone, only suggestions.Background AssumptionsMost of the money in the ecosystem comes from Open Phil and a few other large donors. However, there is still a respectably-sized slice of independent funding from small to mid-size donors. I submit that certain functions within EA are in particular need of a diversified funding base, rather than one that is dependent on Open Phil or other megadonors. In particular, the development of proposals about power, governance, and transparency is one of those functions. Not only are people not particularly good at deciding whether they need to give up power or expose their decisions to transparency, they also have an obvious conflict of interest on the question. One can doubtless think of other functions for which independence from major funders is important or even critical.For various reasons, I identify with a desire for moderate reform at this time. I'll define that (imprecisely) as a set of reforms that seeks to make EA a better version of itself, but still clearly identifiab...]]>
Fri, 27 Jan 2023 08:22:43 +0000 EA - Moving Toward More Concrete Proposals for Reform by Jason Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving Toward More Concrete Proposals for Reform, published by Jason on January 27, 2023 on The Effective Altruism Forum.Epistemic status: uncertain, unsure of the extent that I mean this as a practical proposal vs. a thought experiment to make certain observations, aware I could improve this but also aware that it should be timelyTiming note: Most of this was written before Doing EA Better was released, and is largely based on public events on the Forum. It is not specifically in response to that post, and is not really about possible epistemic reforms at all.SummaryThis post suggests that discussions of possible reforms relating to governance, deconcentration of power, and transparency have often been relatively unhelpful due to a lack of specificity. It suggests raising modest funding for the creation of more developed proposals, with the broader EA community taking the lead on the project. The post presents various reasons I think this proposal has value even if one assumes the underlying reform proposals lack merit. It explains how the proposal would provide information for and tie into further reform initiatives, and suggests some mechanics for funding and selecting proposals.IntroductionI find the very-high level discussion of possible reform to be generally unhelpful. At that level, it's too easy for proponents to gloss over implementation costs and downsides, and too easy for skeptics to attack strawmen. There needs to be more substance for anyone to seriously evaluate most of the proposals that are floating around as possibilities -- either in the abstract or in terms of whether investing resources in a proposal specific to their organization makes sense.I've also noted, as have others, that it's unreasonable to place the burden of producing more detailed proposals on reform advocates without either a strong reason to believe those reforms would be successful or providing reasonable compensation for their work. (Of course, no one should ever feel pressure to write even if compensation is offered!)For what it's worth, I think some of the reform proposals going around are likely correct, some are worthwhile in carefully selected contexts, and some are very unlikely to be viable. But I also think, for many proposals, that my assessments could update significantly if I read more detailed versions of the proposal.I'm a newcomer who has never accepted anything from any EA source other than one free book. This gives me certain advantages and limitations. I don't have to worry about damaging effects on my career, nor do I personally have anything to personally lose or gain from reform. On the other hand, my outsider status means I am probably ignorant of some important information. And it certainly means I have no right to make demands on anyone, only suggestions.Background AssumptionsMost of the money in the ecosystem comes from Open Phil and a few other large donors. However, there is still a respectably-sized slice of independent funding from small to mid-size donors. I submit that certain functions within EA are in particular need of a diversified funding base, rather than one that is dependent on Open Phil or other megadonors. In particular, the development of proposals about power, governance, and transparency is one of those functions. Not only are people not particularly good at deciding whether they need to give up power or expose their decisions to transparency, they also have an obvious conflict of interest on the question. One can doubtless think of other functions for which independence from major funders is important or even critical.For various reasons, I identify with a desire for moderate reform at this time. I'll define that (imprecisely) as a set of reforms that seeks to make EA a better version of itself, but still clearly identifiab...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving Toward More Concrete Proposals for Reform, published by Jason on January 27, 2023 on The Effective Altruism Forum.Epistemic status: uncertain, unsure of the extent that I mean this as a practical proposal vs. a thought experiment to make certain observations, aware I could improve this but also aware that it should be timelyTiming note: Most of this was written before Doing EA Better was released, and is largely based on public events on the Forum. It is not specifically in response to that post, and is not really about possible epistemic reforms at all.SummaryThis post suggests that discussions of possible reforms relating to governance, deconcentration of power, and transparency have often been relatively unhelpful due to a lack of specificity. It suggests raising modest funding for the creation of more developed proposals, with the broader EA community taking the lead on the project. The post presents various reasons I think this proposal has value even if one assumes the underlying reform proposals lack merit. It explains how the proposal would provide information for and tie into further reform initiatives, and suggests some mechanics for funding and selecting proposals.IntroductionI find the very-high level discussion of possible reform to be generally unhelpful. At that level, it's too easy for proponents to gloss over implementation costs and downsides, and too easy for skeptics to attack strawmen. There needs to be more substance for anyone to seriously evaluate most of the proposals that are floating around as possibilities -- either in the abstract or in terms of whether investing resources in a proposal specific to their organization makes sense.I've also noted, as have others, that it's unreasonable to place the burden of producing more detailed proposals on reform advocates without either a strong reason to believe those reforms would be successful or providing reasonable compensation for their work. (Of course, no one should ever feel pressure to write even if compensation is offered!)For what it's worth, I think some of the reform proposals going around are likely correct, some are worthwhile in carefully selected contexts, and some are very unlikely to be viable. But I also think, for many proposals, that my assessments could update significantly if I read more detailed versions of the proposal.I'm a newcomer who has never accepted anything from any EA source other than one free book. This gives me certain advantages and limitations. I don't have to worry about damaging effects on my career, nor do I personally have anything to personally lose or gain from reform. On the other hand, my outsider status means I am probably ignorant of some important information. And it certainly means I have no right to make demands on anyone, only suggestions.Background AssumptionsMost of the money in the ecosystem comes from Open Phil and a few other large donors. However, there is still a respectably-sized slice of independent funding from small to mid-size donors. I submit that certain functions within EA are in particular need of a diversified funding base, rather than one that is dependent on Open Phil or other megadonors. In particular, the development of proposals about power, governance, and transparency is one of those functions. Not only are people not particularly good at deciding whether they need to give up power or expose their decisions to transparency, they also have an obvious conflict of interest on the question. One can doubtless think of other functions for which independence from major funders is important or even critical.For various reasons, I identify with a desire for moderate reform at this time. I'll define that (imprecisely) as a set of reforms that seeks to make EA a better version of itself, but still clearly identifiab...]]>
Jason https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:36 None full 4612
LiHqgrKDuzJPFjtaT_EA EA - Moving Toward More Concrete Proposals for Reform by Jason Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving Toward More Concrete Proposals for Reform, published by Jason on January 27, 2023 on The Effective Altruism Forum.Epistemic status: uncertain, unsure of the extent that I mean this as a practical proposal vs. a thought experiment to make certain observations, aware I could improve this but also aware that it should be timelyTiming note: Most of this was written before Doing EA Better was released, and is largely based on public events on the Forum. It is not specifically in response to that post, and is not really about possible epistemic reforms at all.SummaryThis post suggests that discussions of possible reforms relating to governance, deconcentration of power, and transparency have often been relatively unhelpful due to a lack of specificity. It suggests raising modest funding for the creation of more developed proposals, with the broader EA community taking the lead on the project. The post presents various reasons I think this proposal has value even if one assumes the underlying reform proposals lack merit. It explains how the proposal would provide information for and tie into further reform initiatives, and suggests some mechanics for funding and selecting proposals.IntroductionI find the very-high level discussion of possible reform to be generally unhelpful. At that level, it's too easy for proponents to gloss over implementation costs and downsides, and too easy for skeptics to attack strawmen. There needs to be more substance for anyone to seriously evaluate most of the proposals that are floating around as possibilities -- either in the abstract or in terms of whether investing resources in a proposal specific to their organization makes sense.I've also noted, as have others, that it's unreasonable to place the burden of producing more detailed proposals on reform advocates without either a strong reason to believe those reforms would be successful or providing reasonable compensation for their work. (Of course, no one should ever feel pressure to write even if compensation is offered!)For what it's worth, I think some of the reform proposals going around are likely correct, some are worthwhile in carefully selected contexts, and some are very unlikely to be viable. But I also think, for many proposals, that my assessments could update significantly if I read more detailed versions of the proposal.I'm a newcomer who has never accepted anything from any EA source other than one free book. This gives me certain advantages and limitations. I don't have to worry about damaging effects on my career, nor do I personally have anything to personally lose or gain from reform. On the other hand, my outsider status means I am probably ignorant of some important information. And it certainly means I have no right to make demands on anyone, only suggestions.Background AssumptionsMost of the money in the ecosystem comes from Open Phil and a few other large donors. However, there is still a respectably-sized slice of independent funding from small to mid-size donors. I submit that certain functions within EA are in particular need of a diversified funding base, rather than one that is dependent on Open Phil or other megadonors. In particular, the development of proposals about power, governance, and transparency is one of those functions. Not only are people not particularly good at deciding whether they need to give up power or expose their decisions to transparency, they also have an obvious conflict of interest on the question. One can doubtless think of other functions for which independence from major funders is important or even critical.For various reasons, I identify with a desire for moderate reform at this time. I'll define that (imprecisely) as a set of reforms that seeks to make EA a better version of itself, but still clearly identifiab...]]>
Jason https://forum.effectivealtruism.org/posts/LiHqgrKDuzJPFjtaT/moving-toward-more-concrete-proposals-for-reform Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving Toward More Concrete Proposals for Reform, published by Jason on January 27, 2023 on The Effective Altruism Forum.Epistemic status: uncertain, unsure of the extent that I mean this as a practical proposal vs. a thought experiment to make certain observations, aware I could improve this but also aware that it should be timelyTiming note: Most of this was written before Doing EA Better was released, and is largely based on public events on the Forum. It is not specifically in response to that post, and is not really about possible epistemic reforms at all.SummaryThis post suggests that discussions of possible reforms relating to governance, deconcentration of power, and transparency have often been relatively unhelpful due to a lack of specificity. It suggests raising modest funding for the creation of more developed proposals, with the broader EA community taking the lead on the project. The post presents various reasons I think this proposal has value even if one assumes the underlying reform proposals lack merit. It explains how the proposal would provide information for and tie into further reform initiatives, and suggests some mechanics for funding and selecting proposals.IntroductionI find the very-high level discussion of possible reform to be generally unhelpful. At that level, it's too easy for proponents to gloss over implementation costs and downsides, and too easy for skeptics to attack strawmen. There needs to be more substance for anyone to seriously evaluate most of the proposals that are floating around as possibilities -- either in the abstract or in terms of whether investing resources in a proposal specific to their organization makes sense.I've also noted, as have others, that it's unreasonable to place the burden of producing more detailed proposals on reform advocates without either a strong reason to believe those reforms would be successful or providing reasonable compensation for their work. (Of course, no one should ever feel pressure to write even if compensation is offered!)For what it's worth, I think some of the reform proposals going around are likely correct, some are worthwhile in carefully selected contexts, and some are very unlikely to be viable. But I also think, for many proposals, that my assessments could update significantly if I read more detailed versions of the proposal.I'm a newcomer who has never accepted anything from any EA source other than one free book. This gives me certain advantages and limitations. I don't have to worry about damaging effects on my career, nor do I personally have anything to personally lose or gain from reform. On the other hand, my outsider status means I am probably ignorant of some important information. And it certainly means I have no right to make demands on anyone, only suggestions.Background AssumptionsMost of the money in the ecosystem comes from Open Phil and a few other large donors. However, there is still a respectably-sized slice of independent funding from small to mid-size donors. I submit that certain functions within EA are in particular need of a diversified funding base, rather than one that is dependent on Open Phil or other megadonors. In particular, the development of proposals about power, governance, and transparency is one of those functions. Not only are people not particularly good at deciding whether they need to give up power or expose their decisions to transparency, they also have an obvious conflict of interest on the question. One can doubtless think of other functions for which independence from major funders is important or even critical.For various reasons, I identify with a desire for moderate reform at this time. I'll define that (imprecisely) as a set of reforms that seeks to make EA a better version of itself, but still clearly identifiab...]]>
Fri, 27 Jan 2023 08:22:43 +0000 EA - Moving Toward More Concrete Proposals for Reform by Jason Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving Toward More Concrete Proposals for Reform, published by Jason on January 27, 2023 on The Effective Altruism Forum.Epistemic status: uncertain, unsure of the extent that I mean this as a practical proposal vs. a thought experiment to make certain observations, aware I could improve this but also aware that it should be timelyTiming note: Most of this was written before Doing EA Better was released, and is largely based on public events on the Forum. It is not specifically in response to that post, and is not really about possible epistemic reforms at all.SummaryThis post suggests that discussions of possible reforms relating to governance, deconcentration of power, and transparency have often been relatively unhelpful due to a lack of specificity. It suggests raising modest funding for the creation of more developed proposals, with the broader EA community taking the lead on the project. The post presents various reasons I think this proposal has value even if one assumes the underlying reform proposals lack merit. It explains how the proposal would provide information for and tie into further reform initiatives, and suggests some mechanics for funding and selecting proposals.IntroductionI find the very-high level discussion of possible reform to be generally unhelpful. At that level, it's too easy for proponents to gloss over implementation costs and downsides, and too easy for skeptics to attack strawmen. There needs to be more substance for anyone to seriously evaluate most of the proposals that are floating around as possibilities -- either in the abstract or in terms of whether investing resources in a proposal specific to their organization makes sense.I've also noted, as have others, that it's unreasonable to place the burden of producing more detailed proposals on reform advocates without either a strong reason to believe those reforms would be successful or providing reasonable compensation for their work. (Of course, no one should ever feel pressure to write even if compensation is offered!)For what it's worth, I think some of the reform proposals going around are likely correct, some are worthwhile in carefully selected contexts, and some are very unlikely to be viable. But I also think, for many proposals, that my assessments could update significantly if I read more detailed versions of the proposal.I'm a newcomer who has never accepted anything from any EA source other than one free book. This gives me certain advantages and limitations. I don't have to worry about damaging effects on my career, nor do I personally have anything to personally lose or gain from reform. On the other hand, my outsider status means I am probably ignorant of some important information. And it certainly means I have no right to make demands on anyone, only suggestions.Background AssumptionsMost of the money in the ecosystem comes from Open Phil and a few other large donors. However, there is still a respectably-sized slice of independent funding from small to mid-size donors. I submit that certain functions within EA are in particular need of a diversified funding base, rather than one that is dependent on Open Phil or other megadonors. In particular, the development of proposals about power, governance, and transparency is one of those functions. Not only are people not particularly good at deciding whether they need to give up power or expose their decisions to transparency, they also have an obvious conflict of interest on the question. One can doubtless think of other functions for which independence from major funders is important or even critical.For various reasons, I identify with a desire for moderate reform at this time. I'll define that (imprecisely) as a set of reforms that seeks to make EA a better version of itself, but still clearly identifiab...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving Toward More Concrete Proposals for Reform, published by Jason on January 27, 2023 on The Effective Altruism Forum.Epistemic status: uncertain, unsure of the extent that I mean this as a practical proposal vs. a thought experiment to make certain observations, aware I could improve this but also aware that it should be timelyTiming note: Most of this was written before Doing EA Better was released, and is largely based on public events on the Forum. It is not specifically in response to that post, and is not really about possible epistemic reforms at all.SummaryThis post suggests that discussions of possible reforms relating to governance, deconcentration of power, and transparency have often been relatively unhelpful due to a lack of specificity. It suggests raising modest funding for the creation of more developed proposals, with the broader EA community taking the lead on the project. The post presents various reasons I think this proposal has value even if one assumes the underlying reform proposals lack merit. It explains how the proposal would provide information for and tie into further reform initiatives, and suggests some mechanics for funding and selecting proposals.IntroductionI find the very-high level discussion of possible reform to be generally unhelpful. At that level, it's too easy for proponents to gloss over implementation costs and downsides, and too easy for skeptics to attack strawmen. There needs to be more substance for anyone to seriously evaluate most of the proposals that are floating around as possibilities -- either in the abstract or in terms of whether investing resources in a proposal specific to their organization makes sense.I've also noted, as have others, that it's unreasonable to place the burden of producing more detailed proposals on reform advocates without either a strong reason to believe those reforms would be successful or providing reasonable compensation for their work. (Of course, no one should ever feel pressure to write even if compensation is offered!)For what it's worth, I think some of the reform proposals going around are likely correct, some are worthwhile in carefully selected contexts, and some are very unlikely to be viable. But I also think, for many proposals, that my assessments could update significantly if I read more detailed versions of the proposal.I'm a newcomer who has never accepted anything from any EA source other than one free book. This gives me certain advantages and limitations. I don't have to worry about damaging effects on my career, nor do I personally have anything to personally lose or gain from reform. On the other hand, my outsider status means I am probably ignorant of some important information. And it certainly means I have no right to make demands on anyone, only suggestions.Background AssumptionsMost of the money in the ecosystem comes from Open Phil and a few other large donors. However, there is still a respectably-sized slice of independent funding from small to mid-size donors. I submit that certain functions within EA are in particular need of a diversified funding base, rather than one that is dependent on Open Phil or other megadonors. In particular, the development of proposals about power, governance, and transparency is one of those functions. Not only are people not particularly good at deciding whether they need to give up power or expose their decisions to transparency, they also have an obvious conflict of interest on the question. One can doubtless think of other functions for which independence from major funders is important or even critical.For various reasons, I identify with a desire for moderate reform at this time. I'll define that (imprecisely) as a set of reforms that seeks to make EA a better version of itself, but still clearly identifiab...]]>
Jason https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:36 None full 4598
9xZ5ubYEsLWCZRYnL_NL_EA_EA EA - How to Reform Effective Altruism after SBF Vox interview with Holden Karnofsky 1/23/2023 by Peter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Reform Effective Altruism after SBF Vox interview with Holden Karnofsky 1/23/2023, published by Peter on January 26, 2023 on The Effective Altruism Forum.Interview with Holden Karnofsky on SBF and EA. If you prefer listening, x2 speed is available here. Not anything really new but I really resonate with the last 5 minutes and the idea of being ambitious about helping people tempered by moderation and pluralism in perspectives. Parts not in the transcript that I liked: "You don't have to build your whole identity around EA in order to do an enormous amount of good, in order to be extremely excited about effective altruist ideas. If you can be basically a normal person in most respects, you have a lot going on in your life, you have a lot that you care about, and a job, and you work about as hard as a lot of people work at their jobs... you can do a huge amount of good.""Do the most good possible I think is a good idea in moderation. But it's similar to running. Faster time is better, but you can do that in moderation. You can care about other things at the same time.I think there is a ton of value to coming at doing good with a mindset of finding out the way to do the most good with the resources I have. I think that brings a ton of value compared to just trying to do some good. But then, doing that in moderation I think does get you most of the gains, and ultimately where I think the most healthy place to be is, and probably in my guess the way to do the most good in the end too."Parts from transcript I liked:"...What effective altruism means to me is basically, let’s be ambitious about helping a lot of people. . I feel like this is good, so I think I’m more in the camp of, this is a good idea in moderation. This is a good idea when accompanied by pluralism.""Longtermism tends to emphasize the importance of future generations. But there’s a separate idea of just, like, global catastrophic risk reduction. There’s some risks facing humanity that are really big and that we’ve got to be paying more attention to. One of them is climate change.One of them is pandemics. And then there’s AI. I think the dangers of certain kinds of AI that you could easily imagine being developed are vastly underappreciated.So I would say that I’m currently more sold on bio risk and AI risk as just things that we’ve got to be paying more attention to, no matter what your philosophical orientation. I’m more sold on that than I am on longtermism.But I am somewhat sold on both. I’ve always kind of thought, “Hey, future people are people and we should care about what happens in the future.” But I’ve always been skeptical of claims to go further than that and say something like, “The value of future generations, and in particular the value of as many people as possible getting to exist, is so vast that it just completely trumps everything else, and you shouldn’t even think about other ways to help people.” That’s a claim that I’ve never really been on board with, and I’m still not on board with.""Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Peter https://forum.effectivealtruism.org/posts/9xZ5ubYEsLWCZRYnL/how-to-reform-effective-altruism-after-sbf-vox-interview Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Reform Effective Altruism after SBF Vox interview with Holden Karnofsky 1/23/2023, published by Peter on January 26, 2023 on The Effective Altruism Forum.Interview with Holden Karnofsky on SBF and EA. If you prefer listening, x2 speed is available here. Not anything really new but I really resonate with the last 5 minutes and the idea of being ambitious about helping people tempered by moderation and pluralism in perspectives. Parts not in the transcript that I liked: "You don't have to build your whole identity around EA in order to do an enormous amount of good, in order to be extremely excited about effective altruist ideas. If you can be basically a normal person in most respects, you have a lot going on in your life, you have a lot that you care about, and a job, and you work about as hard as a lot of people work at their jobs... you can do a huge amount of good.""Do the most good possible I think is a good idea in moderation. But it's similar to running. Faster time is better, but you can do that in moderation. You can care about other things at the same time.I think there is a ton of value to coming at doing good with a mindset of finding out the way to do the most good with the resources I have. I think that brings a ton of value compared to just trying to do some good. But then, doing that in moderation I think does get you most of the gains, and ultimately where I think the most healthy place to be is, and probably in my guess the way to do the most good in the end too."Parts from transcript I liked:"...What effective altruism means to me is basically, let’s be ambitious about helping a lot of people. . I feel like this is good, so I think I’m more in the camp of, this is a good idea in moderation. This is a good idea when accompanied by pluralism.""Longtermism tends to emphasize the importance of future generations. But there’s a separate idea of just, like, global catastrophic risk reduction. There’s some risks facing humanity that are really big and that we’ve got to be paying more attention to. One of them is climate change.One of them is pandemics. And then there’s AI. I think the dangers of certain kinds of AI that you could easily imagine being developed are vastly underappreciated.So I would say that I’m currently more sold on bio risk and AI risk as just things that we’ve got to be paying more attention to, no matter what your philosophical orientation. I’m more sold on that than I am on longtermism.But I am somewhat sold on both. I’ve always kind of thought, “Hey, future people are people and we should care about what happens in the future.” But I’ve always been skeptical of claims to go further than that and say something like, “The value of future generations, and in particular the value of as many people as possible getting to exist, is so vast that it just completely trumps everything else, and you shouldn’t even think about other ways to help people.” That’s a claim that I’ve never really been on board with, and I’m still not on board with.""Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 27 Jan 2023 05:25:31 +0000 EA - How to Reform Effective Altruism after SBF Vox interview with Holden Karnofsky 1/23/2023 by Peter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Reform Effective Altruism after SBF Vox interview with Holden Karnofsky 1/23/2023, published by Peter on January 26, 2023 on The Effective Altruism Forum.Interview with Holden Karnofsky on SBF and EA. If you prefer listening, x2 speed is available here. Not anything really new but I really resonate with the last 5 minutes and the idea of being ambitious about helping people tempered by moderation and pluralism in perspectives. Parts not in the transcript that I liked: "You don't have to build your whole identity around EA in order to do an enormous amount of good, in order to be extremely excited about effective altruist ideas. If you can be basically a normal person in most respects, you have a lot going on in your life, you have a lot that you care about, and a job, and you work about as hard as a lot of people work at their jobs... you can do a huge amount of good.""Do the most good possible I think is a good idea in moderation. But it's similar to running. Faster time is better, but you can do that in moderation. You can care about other things at the same time.I think there is a ton of value to coming at doing good with a mindset of finding out the way to do the most good with the resources I have. I think that brings a ton of value compared to just trying to do some good. But then, doing that in moderation I think does get you most of the gains, and ultimately where I think the most healthy place to be is, and probably in my guess the way to do the most good in the end too."Parts from transcript I liked:"...What effective altruism means to me is basically, let’s be ambitious about helping a lot of people. . I feel like this is good, so I think I’m more in the camp of, this is a good idea in moderation. This is a good idea when accompanied by pluralism.""Longtermism tends to emphasize the importance of future generations. But there’s a separate idea of just, like, global catastrophic risk reduction. There’s some risks facing humanity that are really big and that we’ve got to be paying more attention to. One of them is climate change.One of them is pandemics. And then there’s AI. I think the dangers of certain kinds of AI that you could easily imagine being developed are vastly underappreciated.So I would say that I’m currently more sold on bio risk and AI risk as just things that we’ve got to be paying more attention to, no matter what your philosophical orientation. I’m more sold on that than I am on longtermism.But I am somewhat sold on both. I’ve always kind of thought, “Hey, future people are people and we should care about what happens in the future.” But I’ve always been skeptical of claims to go further than that and say something like, “The value of future generations, and in particular the value of as many people as possible getting to exist, is so vast that it just completely trumps everything else, and you shouldn’t even think about other ways to help people.” That’s a claim that I’ve never really been on board with, and I’m still not on board with.""Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Reform Effective Altruism after SBF Vox interview with Holden Karnofsky 1/23/2023, published by Peter on January 26, 2023 on The Effective Altruism Forum.Interview with Holden Karnofsky on SBF and EA. If you prefer listening, x2 speed is available here. Not anything really new but I really resonate with the last 5 minutes and the idea of being ambitious about helping people tempered by moderation and pluralism in perspectives. Parts not in the transcript that I liked: "You don't have to build your whole identity around EA in order to do an enormous amount of good, in order to be extremely excited about effective altruist ideas. If you can be basically a normal person in most respects, you have a lot going on in your life, you have a lot that you care about, and a job, and you work about as hard as a lot of people work at their jobs... you can do a huge amount of good.""Do the most good possible I think is a good idea in moderation. But it's similar to running. Faster time is better, but you can do that in moderation. You can care about other things at the same time.I think there is a ton of value to coming at doing good with a mindset of finding out the way to do the most good with the resources I have. I think that brings a ton of value compared to just trying to do some good. But then, doing that in moderation I think does get you most of the gains, and ultimately where I think the most healthy place to be is, and probably in my guess the way to do the most good in the end too."Parts from transcript I liked:"...What effective altruism means to me is basically, let’s be ambitious about helping a lot of people. . I feel like this is good, so I think I’m more in the camp of, this is a good idea in moderation. This is a good idea when accompanied by pluralism.""Longtermism tends to emphasize the importance of future generations. But there’s a separate idea of just, like, global catastrophic risk reduction. There’s some risks facing humanity that are really big and that we’ve got to be paying more attention to. One of them is climate change.One of them is pandemics. And then there’s AI. I think the dangers of certain kinds of AI that you could easily imagine being developed are vastly underappreciated.So I would say that I’m currently more sold on bio risk and AI risk as just things that we’ve got to be paying more attention to, no matter what your philosophical orientation. I’m more sold on that than I am on longtermism.But I am somewhat sold on both. I’ve always kind of thought, “Hey, future people are people and we should care about what happens in the future.” But I’ve always been skeptical of claims to go further than that and say something like, “The value of future generations, and in particular the value of as many people as possible getting to exist, is so vast that it just completely trumps everything else, and you shouldn’t even think about other ways to help people.” That’s a claim that I’ve never really been on board with, and I’m still not on board with.""Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Peter https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:57 None full 4614
9xZ5ubYEsLWCZRYnL_EA EA - How to Reform Effective Altruism after SBF Vox interview with Holden Karnofsky 1/23/2023 by Peter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Reform Effective Altruism after SBF Vox interview with Holden Karnofsky 1/23/2023, published by Peter on January 26, 2023 on The Effective Altruism Forum.Interview with Holden Karnofsky on SBF and EA. If you prefer listening, x2 speed is available here. Not anything really new but I really resonate with the last 5 minutes and the idea of being ambitious about helping people tempered by moderation and pluralism in perspectives. Parts not in the transcript that I liked: "You don't have to build your whole identity around EA in order to do an enormous amount of good, in order to be extremely excited about effective altruist ideas. If you can be basically a normal person in most respects, you have a lot going on in your life, you have a lot that you care about, and a job, and you work about as hard as a lot of people work at their jobs... you can do a huge amount of good.""Do the most good possible I think is a good idea in moderation. But it's similar to running. Faster time is better, but you can do that in moderation. You can care about other things at the same time.I think there is a ton of value to coming at doing good with a mindset of finding out the way to do the most good with the resources I have. I think that brings a ton of value compared to just trying to do some good. But then, doing that in moderation I think does get you most of the gains, and ultimately where I think the most healthy place to be is, and probably in my guess the way to do the most good in the end too."Parts from transcript I liked:"...What effective altruism means to me is basically, let’s be ambitious about helping a lot of people. . I feel like this is good, so I think I’m more in the camp of, this is a good idea in moderation. This is a good idea when accompanied by pluralism.""Longtermism tends to emphasize the importance of future generations. But there’s a separate idea of just, like, global catastrophic risk reduction. There’s some risks facing humanity that are really big and that we’ve got to be paying more attention to. One of them is climate change.One of them is pandemics. And then there’s AI. I think the dangers of certain kinds of AI that you could easily imagine being developed are vastly underappreciated.So I would say that I’m currently more sold on bio risk and AI risk as just things that we’ve got to be paying more attention to, no matter what your philosophical orientation. I’m more sold on that than I am on longtermism.But I am somewhat sold on both. I’ve always kind of thought, “Hey, future people are people and we should care about what happens in the future.” But I’ve always been skeptical of claims to go further than that and say something like, “The value of future generations, and in particular the value of as many people as possible getting to exist, is so vast that it just completely trumps everything else, and you shouldn’t even think about other ways to help people.” That’s a claim that I’ve never really been on board with, and I’m still not on board with.""Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Peter https://forum.effectivealtruism.org/posts/9xZ5ubYEsLWCZRYnL/how-to-reform-effective-altruism-after-sbf-vox-interview Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Reform Effective Altruism after SBF Vox interview with Holden Karnofsky 1/23/2023, published by Peter on January 26, 2023 on The Effective Altruism Forum.Interview with Holden Karnofsky on SBF and EA. If you prefer listening, x2 speed is available here. Not anything really new but I really resonate with the last 5 minutes and the idea of being ambitious about helping people tempered by moderation and pluralism in perspectives. Parts not in the transcript that I liked: "You don't have to build your whole identity around EA in order to do an enormous amount of good, in order to be extremely excited about effective altruist ideas. If you can be basically a normal person in most respects, you have a lot going on in your life, you have a lot that you care about, and a job, and you work about as hard as a lot of people work at their jobs... you can do a huge amount of good.""Do the most good possible I think is a good idea in moderation. But it's similar to running. Faster time is better, but you can do that in moderation. You can care about other things at the same time.I think there is a ton of value to coming at doing good with a mindset of finding out the way to do the most good with the resources I have. I think that brings a ton of value compared to just trying to do some good. But then, doing that in moderation I think does get you most of the gains, and ultimately where I think the most healthy place to be is, and probably in my guess the way to do the most good in the end too."Parts from transcript I liked:"...What effective altruism means to me is basically, let’s be ambitious about helping a lot of people. . I feel like this is good, so I think I’m more in the camp of, this is a good idea in moderation. This is a good idea when accompanied by pluralism.""Longtermism tends to emphasize the importance of future generations. But there’s a separate idea of just, like, global catastrophic risk reduction. There’s some risks facing humanity that are really big and that we’ve got to be paying more attention to. One of them is climate change.One of them is pandemics. And then there’s AI. I think the dangers of certain kinds of AI that you could easily imagine being developed are vastly underappreciated.So I would say that I’m currently more sold on bio risk and AI risk as just things that we’ve got to be paying more attention to, no matter what your philosophical orientation. I’m more sold on that than I am on longtermism.But I am somewhat sold on both. I’ve always kind of thought, “Hey, future people are people and we should care about what happens in the future.” But I’ve always been skeptical of claims to go further than that and say something like, “The value of future generations, and in particular the value of as many people as possible getting to exist, is so vast that it just completely trumps everything else, and you shouldn’t even think about other ways to help people.” That’s a claim that I’ve never really been on board with, and I’m still not on board with.""Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 27 Jan 2023 05:25:31 +0000 EA - How to Reform Effective Altruism after SBF Vox interview with Holden Karnofsky 1/23/2023 by Peter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Reform Effective Altruism after SBF Vox interview with Holden Karnofsky 1/23/2023, published by Peter on January 26, 2023 on The Effective Altruism Forum.Interview with Holden Karnofsky on SBF and EA. If you prefer listening, x2 speed is available here. Not anything really new but I really resonate with the last 5 minutes and the idea of being ambitious about helping people tempered by moderation and pluralism in perspectives. Parts not in the transcript that I liked: "You don't have to build your whole identity around EA in order to do an enormous amount of good, in order to be extremely excited about effective altruist ideas. If you can be basically a normal person in most respects, you have a lot going on in your life, you have a lot that you care about, and a job, and you work about as hard as a lot of people work at their jobs... you can do a huge amount of good.""Do the most good possible I think is a good idea in moderation. But it's similar to running. Faster time is better, but you can do that in moderation. You can care about other things at the same time.I think there is a ton of value to coming at doing good with a mindset of finding out the way to do the most good with the resources I have. I think that brings a ton of value compared to just trying to do some good. But then, doing that in moderation I think does get you most of the gains, and ultimately where I think the most healthy place to be is, and probably in my guess the way to do the most good in the end too."Parts from transcript I liked:"...What effective altruism means to me is basically, let’s be ambitious about helping a lot of people. . I feel like this is good, so I think I’m more in the camp of, this is a good idea in moderation. This is a good idea when accompanied by pluralism.""Longtermism tends to emphasize the importance of future generations. But there’s a separate idea of just, like, global catastrophic risk reduction. There’s some risks facing humanity that are really big and that we’ve got to be paying more attention to. One of them is climate change.One of them is pandemics. And then there’s AI. I think the dangers of certain kinds of AI that you could easily imagine being developed are vastly underappreciated.So I would say that I’m currently more sold on bio risk and AI risk as just things that we’ve got to be paying more attention to, no matter what your philosophical orientation. I’m more sold on that than I am on longtermism.But I am somewhat sold on both. I’ve always kind of thought, “Hey, future people are people and we should care about what happens in the future.” But I’ve always been skeptical of claims to go further than that and say something like, “The value of future generations, and in particular the value of as many people as possible getting to exist, is so vast that it just completely trumps everything else, and you shouldn’t even think about other ways to help people.” That’s a claim that I’ve never really been on board with, and I’m still not on board with.""Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Reform Effective Altruism after SBF Vox interview with Holden Karnofsky 1/23/2023, published by Peter on January 26, 2023 on The Effective Altruism Forum.Interview with Holden Karnofsky on SBF and EA. If you prefer listening, x2 speed is available here. Not anything really new but I really resonate with the last 5 minutes and the idea of being ambitious about helping people tempered by moderation and pluralism in perspectives. Parts not in the transcript that I liked: "You don't have to build your whole identity around EA in order to do an enormous amount of good, in order to be extremely excited about effective altruist ideas. If you can be basically a normal person in most respects, you have a lot going on in your life, you have a lot that you care about, and a job, and you work about as hard as a lot of people work at their jobs... you can do a huge amount of good.""Do the most good possible I think is a good idea in moderation. But it's similar to running. Faster time is better, but you can do that in moderation. You can care about other things at the same time.I think there is a ton of value to coming at doing good with a mindset of finding out the way to do the most good with the resources I have. I think that brings a ton of value compared to just trying to do some good. But then, doing that in moderation I think does get you most of the gains, and ultimately where I think the most healthy place to be is, and probably in my guess the way to do the most good in the end too."Parts from transcript I liked:"...What effective altruism means to me is basically, let’s be ambitious about helping a lot of people. . I feel like this is good, so I think I’m more in the camp of, this is a good idea in moderation. This is a good idea when accompanied by pluralism.""Longtermism tends to emphasize the importance of future generations. But there’s a separate idea of just, like, global catastrophic risk reduction. There’s some risks facing humanity that are really big and that we’ve got to be paying more attention to. One of them is climate change.One of them is pandemics. And then there’s AI. I think the dangers of certain kinds of AI that you could easily imagine being developed are vastly underappreciated.So I would say that I’m currently more sold on bio risk and AI risk as just things that we’ve got to be paying more attention to, no matter what your philosophical orientation. I’m more sold on that than I am on longtermism.But I am somewhat sold on both. I’ve always kind of thought, “Hey, future people are people and we should care about what happens in the future.” But I’ve always been skeptical of claims to go further than that and say something like, “The value of future generations, and in particular the value of as many people as possible getting to exist, is so vast that it just completely trumps everything else, and you shouldn’t even think about other ways to help people.” That’s a claim that I’ve never really been on board with, and I’m still not on board with.""Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Peter https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:57 None full 4599
nKWc4EzRjkpcbDA3A_EA EA - AI Risk Management Framework | NIST by 𝕮𝖎𝖓𝖊𝖗𝖆 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Risk Management Framework | NIST, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 26, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
𝕮𝖎𝖓𝖊𝖗𝖆 https://forum.effectivealtruism.org/posts/nKWc4EzRjkpcbDA3A/ai-risk-management-framework-or-nist Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Risk Management Framework | NIST, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 26, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 26 Jan 2023 22:42:10 +0000 EA - AI Risk Management Framework | NIST by 𝕮𝖎𝖓𝖊𝖗𝖆 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Risk Management Framework | NIST, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 26, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Risk Management Framework | NIST, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 26, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
𝕮𝖎𝖓𝖊𝖗𝖆 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 4590
nKWc4EzRjkpcbDA3A_NL_EA_EA EA - AI Risk Management Framework | NIST by 𝕮𝖎𝖓𝖊𝖗𝖆 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Risk Management Framework | NIST, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 26, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
𝕮𝖎𝖓𝖊𝖗𝖆 https://forum.effectivealtruism.org/posts/nKWc4EzRjkpcbDA3A/ai-risk-management-framework-or-nist Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Risk Management Framework | NIST, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 26, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 26 Jan 2023 22:42:10 +0000 EA - AI Risk Management Framework | NIST by 𝕮𝖎𝖓𝖊𝖗𝖆 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Risk Management Framework | NIST, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 26, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Risk Management Framework | NIST, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 26, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
𝕮𝖎𝖓𝖊𝖗𝖆 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 4622
grL3eTkSBjT4EgxSy_EA EA - A new framing to replace "Welfarism vs. Abolitionism" by Aidan Kankyoku Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new framing to replace "Welfarism vs. Abolitionism", published by Aidan Kankyoku on January 26, 2023 on The Effective Altruism Forum.Inspired by what seems like a recent détente between different factions in animal advocacy, I just posted this two-part article to paxfauna.org and wanted to share it here as well. It has more of a narrative form but I try to present a new way of thinking about these different roles in the movement. Hope you find it useful or at least enjoyable!Part 1: "Welfarism Vs. Abolitionism" Is ObsoleteThe GistAnimal advocates have been divided over pursuing more or less radical demands, leading to a conflict often framed as welfarism vs. abolitionism.This framing obscures the fact that all strategies used by animal advocates are incremental; we merely focus on different increments.Existing evidence does not support some of activists’ most common concerns about incremental welfare campaigns.A more sophisticated view of the different roles necessary in a movement ecology can resolve these conflicts.An Innocent QuestionWhen I was in college, around 2016, my campus animal rights club hosted a talk by the local representative of The Humane League (THL). As she stood facing about two dozen college students interested in animal activism, she began her talk with a question:“What goal should animal activists pursue?”After several seconds of silence, I threw out an answer that reflected my background as an organizer with Direct Action Everywhere (DxE for short). DxE had a notorious flair for dramatic confrontations with the public, using disruptive protest to demand a complete dismantling of the legal systems abetting the exploitation of other animals for the benefit of humans. My answer, one of DxE’s slogans, was shorthand for that:“Total animal liberation.”The THL rep (I’ll call her Kristy since I haven’t asked permission to use her name) endured an awkward silence waiting to see if anyone else would respond. Kristy had been working for THL about as long as I’d been organizing with DxE. Her job was to mobilize volunteers to support THL’s signature tactics: handing out leaflets to the public about meatless diets, and pressuring corporations like Mcdonald's to set animal welfare standards for their supply chains. When she clicked to the next slide, the answer waiting there was, like mine, a reflection of her organization’s ethos:“Reduce the greatest amount of suffering for the greatest number of animals we can.”Them’s Fightin’ WordsFor an outsider to the world of animal advocacy, these two answers would probably seem perfectly compatible. Yet from the moment they were spoken, room 217 of the Hellems Arts & Sciences building was filled with a palpable tension. A conflict much larger than us had asserted itself.The humans that make up both THL and DxE share the extremely uncommon view that farming animals is a grievous moral harm, and the even less common conviction to dedicate their lives to opposing it. Yet back in 2016, this didn’t seem to be worth much. The relationship between the organizations was racked with mutual distrust, even disdain. And this malaise was merely a microcosm for a larger conflict among animal advocates, one that had been playing out for years in vicious comment threads across social media. To at least one side, this was known as the battle of welfarists vs. abolitionists.In a moment, I’ll explain why I hope this dichotomy will finally be relegated to the dustbin of history. In fact, I believe it was as useless and misleading back then as it is now. But that’s not what I thought at the time.As soon as Kristy’s answer appeared on the screen, a familiar narrative was racing through my brain. I had labeled her a welfarist, and as fast as my neurons could fire, this label was joined by a series of harsh judgments. K...]]>
Aidan Kankyoku https://forum.effectivealtruism.org/posts/grL3eTkSBjT4EgxSy/a-new-framing-to-replace-welfarism-vs-abolitionism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new framing to replace "Welfarism vs. Abolitionism", published by Aidan Kankyoku on January 26, 2023 on The Effective Altruism Forum.Inspired by what seems like a recent détente between different factions in animal advocacy, I just posted this two-part article to paxfauna.org and wanted to share it here as well. It has more of a narrative form but I try to present a new way of thinking about these different roles in the movement. Hope you find it useful or at least enjoyable!Part 1: "Welfarism Vs. Abolitionism" Is ObsoleteThe GistAnimal advocates have been divided over pursuing more or less radical demands, leading to a conflict often framed as welfarism vs. abolitionism.This framing obscures the fact that all strategies used by animal advocates are incremental; we merely focus on different increments.Existing evidence does not support some of activists’ most common concerns about incremental welfare campaigns.A more sophisticated view of the different roles necessary in a movement ecology can resolve these conflicts.An Innocent QuestionWhen I was in college, around 2016, my campus animal rights club hosted a talk by the local representative of The Humane League (THL). As she stood facing about two dozen college students interested in animal activism, she began her talk with a question:“What goal should animal activists pursue?”After several seconds of silence, I threw out an answer that reflected my background as an organizer with Direct Action Everywhere (DxE for short). DxE had a notorious flair for dramatic confrontations with the public, using disruptive protest to demand a complete dismantling of the legal systems abetting the exploitation of other animals for the benefit of humans. My answer, one of DxE’s slogans, was shorthand for that:“Total animal liberation.”The THL rep (I’ll call her Kristy since I haven’t asked permission to use her name) endured an awkward silence waiting to see if anyone else would respond. Kristy had been working for THL about as long as I’d been organizing with DxE. Her job was to mobilize volunteers to support THL’s signature tactics: handing out leaflets to the public about meatless diets, and pressuring corporations like Mcdonald's to set animal welfare standards for their supply chains. When she clicked to the next slide, the answer waiting there was, like mine, a reflection of her organization’s ethos:“Reduce the greatest amount of suffering for the greatest number of animals we can.”Them’s Fightin’ WordsFor an outsider to the world of animal advocacy, these two answers would probably seem perfectly compatible. Yet from the moment they were spoken, room 217 of the Hellems Arts & Sciences building was filled with a palpable tension. A conflict much larger than us had asserted itself.The humans that make up both THL and DxE share the extremely uncommon view that farming animals is a grievous moral harm, and the even less common conviction to dedicate their lives to opposing it. Yet back in 2016, this didn’t seem to be worth much. The relationship between the organizations was racked with mutual distrust, even disdain. And this malaise was merely a microcosm for a larger conflict among animal advocates, one that had been playing out for years in vicious comment threads across social media. To at least one side, this was known as the battle of welfarists vs. abolitionists.In a moment, I’ll explain why I hope this dichotomy will finally be relegated to the dustbin of history. In fact, I believe it was as useless and misleading back then as it is now. But that’s not what I thought at the time.As soon as Kristy’s answer appeared on the screen, a familiar narrative was racing through my brain. I had labeled her a welfarist, and as fast as my neurons could fire, this label was joined by a series of harsh judgments. K...]]>
Thu, 26 Jan 2023 22:16:59 +0000 EA - A new framing to replace "Welfarism vs. Abolitionism" by Aidan Kankyoku Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new framing to replace "Welfarism vs. Abolitionism", published by Aidan Kankyoku on January 26, 2023 on The Effective Altruism Forum.Inspired by what seems like a recent détente between different factions in animal advocacy, I just posted this two-part article to paxfauna.org and wanted to share it here as well. It has more of a narrative form but I try to present a new way of thinking about these different roles in the movement. Hope you find it useful or at least enjoyable!Part 1: "Welfarism Vs. Abolitionism" Is ObsoleteThe GistAnimal advocates have been divided over pursuing more or less radical demands, leading to a conflict often framed as welfarism vs. abolitionism.This framing obscures the fact that all strategies used by animal advocates are incremental; we merely focus on different increments.Existing evidence does not support some of activists’ most common concerns about incremental welfare campaigns.A more sophisticated view of the different roles necessary in a movement ecology can resolve these conflicts.An Innocent QuestionWhen I was in college, around 2016, my campus animal rights club hosted a talk by the local representative of The Humane League (THL). As she stood facing about two dozen college students interested in animal activism, she began her talk with a question:“What goal should animal activists pursue?”After several seconds of silence, I threw out an answer that reflected my background as an organizer with Direct Action Everywhere (DxE for short). DxE had a notorious flair for dramatic confrontations with the public, using disruptive protest to demand a complete dismantling of the legal systems abetting the exploitation of other animals for the benefit of humans. My answer, one of DxE’s slogans, was shorthand for that:“Total animal liberation.”The THL rep (I’ll call her Kristy since I haven’t asked permission to use her name) endured an awkward silence waiting to see if anyone else would respond. Kristy had been working for THL about as long as I’d been organizing with DxE. Her job was to mobilize volunteers to support THL’s signature tactics: handing out leaflets to the public about meatless diets, and pressuring corporations like Mcdonald's to set animal welfare standards for their supply chains. When she clicked to the next slide, the answer waiting there was, like mine, a reflection of her organization’s ethos:“Reduce the greatest amount of suffering for the greatest number of animals we can.”Them’s Fightin’ WordsFor an outsider to the world of animal advocacy, these two answers would probably seem perfectly compatible. Yet from the moment they were spoken, room 217 of the Hellems Arts & Sciences building was filled with a palpable tension. A conflict much larger than us had asserted itself.The humans that make up both THL and DxE share the extremely uncommon view that farming animals is a grievous moral harm, and the even less common conviction to dedicate their lives to opposing it. Yet back in 2016, this didn’t seem to be worth much. The relationship between the organizations was racked with mutual distrust, even disdain. And this malaise was merely a microcosm for a larger conflict among animal advocates, one that had been playing out for years in vicious comment threads across social media. To at least one side, this was known as the battle of welfarists vs. abolitionists.In a moment, I’ll explain why I hope this dichotomy will finally be relegated to the dustbin of history. In fact, I believe it was as useless and misleading back then as it is now. But that’s not what I thought at the time.As soon as Kristy’s answer appeared on the screen, a familiar narrative was racing through my brain. I had labeled her a welfarist, and as fast as my neurons could fire, this label was joined by a series of harsh judgments. K...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new framing to replace "Welfarism vs. Abolitionism", published by Aidan Kankyoku on January 26, 2023 on The Effective Altruism Forum.Inspired by what seems like a recent détente between different factions in animal advocacy, I just posted this two-part article to paxfauna.org and wanted to share it here as well. It has more of a narrative form but I try to present a new way of thinking about these different roles in the movement. Hope you find it useful or at least enjoyable!Part 1: "Welfarism Vs. Abolitionism" Is ObsoleteThe GistAnimal advocates have been divided over pursuing more or less radical demands, leading to a conflict often framed as welfarism vs. abolitionism.This framing obscures the fact that all strategies used by animal advocates are incremental; we merely focus on different increments.Existing evidence does not support some of activists’ most common concerns about incremental welfare campaigns.A more sophisticated view of the different roles necessary in a movement ecology can resolve these conflicts.An Innocent QuestionWhen I was in college, around 2016, my campus animal rights club hosted a talk by the local representative of The Humane League (THL). As she stood facing about two dozen college students interested in animal activism, she began her talk with a question:“What goal should animal activists pursue?”After several seconds of silence, I threw out an answer that reflected my background as an organizer with Direct Action Everywhere (DxE for short). DxE had a notorious flair for dramatic confrontations with the public, using disruptive protest to demand a complete dismantling of the legal systems abetting the exploitation of other animals for the benefit of humans. My answer, one of DxE’s slogans, was shorthand for that:“Total animal liberation.”The THL rep (I’ll call her Kristy since I haven’t asked permission to use her name) endured an awkward silence waiting to see if anyone else would respond. Kristy had been working for THL about as long as I’d been organizing with DxE. Her job was to mobilize volunteers to support THL’s signature tactics: handing out leaflets to the public about meatless diets, and pressuring corporations like Mcdonald's to set animal welfare standards for their supply chains. When she clicked to the next slide, the answer waiting there was, like mine, a reflection of her organization’s ethos:“Reduce the greatest amount of suffering for the greatest number of animals we can.”Them’s Fightin’ WordsFor an outsider to the world of animal advocacy, these two answers would probably seem perfectly compatible. Yet from the moment they were spoken, room 217 of the Hellems Arts & Sciences building was filled with a palpable tension. A conflict much larger than us had asserted itself.The humans that make up both THL and DxE share the extremely uncommon view that farming animals is a grievous moral harm, and the even less common conviction to dedicate their lives to opposing it. Yet back in 2016, this didn’t seem to be worth much. The relationship between the organizations was racked with mutual distrust, even disdain. And this malaise was merely a microcosm for a larger conflict among animal advocates, one that had been playing out for years in vicious comment threads across social media. To at least one side, this was known as the battle of welfarists vs. abolitionists.In a moment, I’ll explain why I hope this dichotomy will finally be relegated to the dustbin of history. In fact, I believe it was as useless and misleading back then as it is now. But that’s not what I thought at the time.As soon as Kristy’s answer appeared on the screen, a familiar narrative was racing through my brain. I had labeled her a welfarist, and as fast as my neurons could fire, this label was joined by a series of harsh judgments. K...]]>
Aidan Kankyoku https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 30:51 None full 4587
grL3eTkSBjT4EgxSy_NL_EA_EA EA - A new framing to replace "Welfarism vs. Abolitionism" by Aidan Kankyoku Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new framing to replace "Welfarism vs. Abolitionism", published by Aidan Kankyoku on January 26, 2023 on The Effective Altruism Forum.Inspired by what seems like a recent détente between different factions in animal advocacy, I just posted this two-part article to paxfauna.org and wanted to share it here as well. It has more of a narrative form but I try to present a new way of thinking about these different roles in the movement. Hope you find it useful or at least enjoyable!Part 1: "Welfarism Vs. Abolitionism" Is ObsoleteThe GistAnimal advocates have been divided over pursuing more or less radical demands, leading to a conflict often framed as welfarism vs. abolitionism.This framing obscures the fact that all strategies used by animal advocates are incremental; we merely focus on different increments.Existing evidence does not support some of activists’ most common concerns about incremental welfare campaigns.A more sophisticated view of the different roles necessary in a movement ecology can resolve these conflicts.An Innocent QuestionWhen I was in college, around 2016, my campus animal rights club hosted a talk by the local representative of The Humane League (THL). As she stood facing about two dozen college students interested in animal activism, she began her talk with a question:“What goal should animal activists pursue?”After several seconds of silence, I threw out an answer that reflected my background as an organizer with Direct Action Everywhere (DxE for short). DxE had a notorious flair for dramatic confrontations with the public, using disruptive protest to demand a complete dismantling of the legal systems abetting the exploitation of other animals for the benefit of humans. My answer, one of DxE’s slogans, was shorthand for that:“Total animal liberation.”The THL rep (I’ll call her Kristy since I haven’t asked permission to use her name) endured an awkward silence waiting to see if anyone else would respond. Kristy had been working for THL about as long as I’d been organizing with DxE. Her job was to mobilize volunteers to support THL’s signature tactics: handing out leaflets to the public about meatless diets, and pressuring corporations like Mcdonald's to set animal welfare standards for their supply chains. When she clicked to the next slide, the answer waiting there was, like mine, a reflection of her organization’s ethos:“Reduce the greatest amount of suffering for the greatest number of animals we can.”Them’s Fightin’ WordsFor an outsider to the world of animal advocacy, these two answers would probably seem perfectly compatible. Yet from the moment they were spoken, room 217 of the Hellems Arts & Sciences building was filled with a palpable tension. A conflict much larger than us had asserted itself.The humans that make up both THL and DxE share the extremely uncommon view that farming animals is a grievous moral harm, and the even less common conviction to dedicate their lives to opposing it. Yet back in 2016, this didn’t seem to be worth much. The relationship between the organizations was racked with mutual distrust, even disdain. And this malaise was merely a microcosm for a larger conflict among animal advocates, one that had been playing out for years in vicious comment threads across social media. To at least one side, this was known as the battle of welfarists vs. abolitionists.In a moment, I’ll explain why I hope this dichotomy will finally be relegated to the dustbin of history. In fact, I believe it was as useless and misleading back then as it is now. But that’s not what I thought at the time.As soon as Kristy’s answer appeared on the screen, a familiar narrative was racing through my brain. I had labeled her a welfarist, and as fast as my neurons could fire, this label was joined by a series of harsh judgments. K...]]>
Aidan Kankyoku https://forum.effectivealtruism.org/posts/grL3eTkSBjT4EgxSy/a-new-framing-to-replace-welfarism-vs-abolitionism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new framing to replace "Welfarism vs. Abolitionism", published by Aidan Kankyoku on January 26, 2023 on The Effective Altruism Forum.Inspired by what seems like a recent détente between different factions in animal advocacy, I just posted this two-part article to paxfauna.org and wanted to share it here as well. It has more of a narrative form but I try to present a new way of thinking about these different roles in the movement. Hope you find it useful or at least enjoyable!Part 1: "Welfarism Vs. Abolitionism" Is ObsoleteThe GistAnimal advocates have been divided over pursuing more or less radical demands, leading to a conflict often framed as welfarism vs. abolitionism.This framing obscures the fact that all strategies used by animal advocates are incremental; we merely focus on different increments.Existing evidence does not support some of activists’ most common concerns about incremental welfare campaigns.A more sophisticated view of the different roles necessary in a movement ecology can resolve these conflicts.An Innocent QuestionWhen I was in college, around 2016, my campus animal rights club hosted a talk by the local representative of The Humane League (THL). As she stood facing about two dozen college students interested in animal activism, she began her talk with a question:“What goal should animal activists pursue?”After several seconds of silence, I threw out an answer that reflected my background as an organizer with Direct Action Everywhere (DxE for short). DxE had a notorious flair for dramatic confrontations with the public, using disruptive protest to demand a complete dismantling of the legal systems abetting the exploitation of other animals for the benefit of humans. My answer, one of DxE’s slogans, was shorthand for that:“Total animal liberation.”The THL rep (I’ll call her Kristy since I haven’t asked permission to use her name) endured an awkward silence waiting to see if anyone else would respond. Kristy had been working for THL about as long as I’d been organizing with DxE. Her job was to mobilize volunteers to support THL’s signature tactics: handing out leaflets to the public about meatless diets, and pressuring corporations like Mcdonald's to set animal welfare standards for their supply chains. When she clicked to the next slide, the answer waiting there was, like mine, a reflection of her organization’s ethos:“Reduce the greatest amount of suffering for the greatest number of animals we can.”Them’s Fightin’ WordsFor an outsider to the world of animal advocacy, these two answers would probably seem perfectly compatible. Yet from the moment they were spoken, room 217 of the Hellems Arts & Sciences building was filled with a palpable tension. A conflict much larger than us had asserted itself.The humans that make up both THL and DxE share the extremely uncommon view that farming animals is a grievous moral harm, and the even less common conviction to dedicate their lives to opposing it. Yet back in 2016, this didn’t seem to be worth much. The relationship between the organizations was racked with mutual distrust, even disdain. And this malaise was merely a microcosm for a larger conflict among animal advocates, one that had been playing out for years in vicious comment threads across social media. To at least one side, this was known as the battle of welfarists vs. abolitionists.In a moment, I’ll explain why I hope this dichotomy will finally be relegated to the dustbin of history. In fact, I believe it was as useless and misleading back then as it is now. But that’s not what I thought at the time.As soon as Kristy’s answer appeared on the screen, a familiar narrative was racing through my brain. I had labeled her a welfarist, and as fast as my neurons could fire, this label was joined by a series of harsh judgments. K...]]>
Thu, 26 Jan 2023 22:16:59 +0000 EA - A new framing to replace "Welfarism vs. Abolitionism" by Aidan Kankyoku Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new framing to replace "Welfarism vs. Abolitionism", published by Aidan Kankyoku on January 26, 2023 on The Effective Altruism Forum.Inspired by what seems like a recent détente between different factions in animal advocacy, I just posted this two-part article to paxfauna.org and wanted to share it here as well. It has more of a narrative form but I try to present a new way of thinking about these different roles in the movement. Hope you find it useful or at least enjoyable!Part 1: "Welfarism Vs. Abolitionism" Is ObsoleteThe GistAnimal advocates have been divided over pursuing more or less radical demands, leading to a conflict often framed as welfarism vs. abolitionism.This framing obscures the fact that all strategies used by animal advocates are incremental; we merely focus on different increments.Existing evidence does not support some of activists’ most common concerns about incremental welfare campaigns.A more sophisticated view of the different roles necessary in a movement ecology can resolve these conflicts.An Innocent QuestionWhen I was in college, around 2016, my campus animal rights club hosted a talk by the local representative of The Humane League (THL). As she stood facing about two dozen college students interested in animal activism, she began her talk with a question:“What goal should animal activists pursue?”After several seconds of silence, I threw out an answer that reflected my background as an organizer with Direct Action Everywhere (DxE for short). DxE had a notorious flair for dramatic confrontations with the public, using disruptive protest to demand a complete dismantling of the legal systems abetting the exploitation of other animals for the benefit of humans. My answer, one of DxE’s slogans, was shorthand for that:“Total animal liberation.”The THL rep (I’ll call her Kristy since I haven’t asked permission to use her name) endured an awkward silence waiting to see if anyone else would respond. Kristy had been working for THL about as long as I’d been organizing with DxE. Her job was to mobilize volunteers to support THL’s signature tactics: handing out leaflets to the public about meatless diets, and pressuring corporations like Mcdonald's to set animal welfare standards for their supply chains. When she clicked to the next slide, the answer waiting there was, like mine, a reflection of her organization’s ethos:“Reduce the greatest amount of suffering for the greatest number of animals we can.”Them’s Fightin’ WordsFor an outsider to the world of animal advocacy, these two answers would probably seem perfectly compatible. Yet from the moment they were spoken, room 217 of the Hellems Arts & Sciences building was filled with a palpable tension. A conflict much larger than us had asserted itself.The humans that make up both THL and DxE share the extremely uncommon view that farming animals is a grievous moral harm, and the even less common conviction to dedicate their lives to opposing it. Yet back in 2016, this didn’t seem to be worth much. The relationship between the organizations was racked with mutual distrust, even disdain. And this malaise was merely a microcosm for a larger conflict among animal advocates, one that had been playing out for years in vicious comment threads across social media. To at least one side, this was known as the battle of welfarists vs. abolitionists.In a moment, I’ll explain why I hope this dichotomy will finally be relegated to the dustbin of history. In fact, I believe it was as useless and misleading back then as it is now. But that’s not what I thought at the time.As soon as Kristy’s answer appeared on the screen, a familiar narrative was racing through my brain. I had labeled her a welfarist, and as fast as my neurons could fire, this label was joined by a series of harsh judgments. K...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new framing to replace "Welfarism vs. Abolitionism", published by Aidan Kankyoku on January 26, 2023 on The Effective Altruism Forum.Inspired by what seems like a recent détente between different factions in animal advocacy, I just posted this two-part article to paxfauna.org and wanted to share it here as well. It has more of a narrative form but I try to present a new way of thinking about these different roles in the movement. Hope you find it useful or at least enjoyable!Part 1: "Welfarism Vs. Abolitionism" Is ObsoleteThe GistAnimal advocates have been divided over pursuing more or less radical demands, leading to a conflict often framed as welfarism vs. abolitionism.This framing obscures the fact that all strategies used by animal advocates are incremental; we merely focus on different increments.Existing evidence does not support some of activists’ most common concerns about incremental welfare campaigns.A more sophisticated view of the different roles necessary in a movement ecology can resolve these conflicts.An Innocent QuestionWhen I was in college, around 2016, my campus animal rights club hosted a talk by the local representative of The Humane League (THL). As she stood facing about two dozen college students interested in animal activism, she began her talk with a question:“What goal should animal activists pursue?”After several seconds of silence, I threw out an answer that reflected my background as an organizer with Direct Action Everywhere (DxE for short). DxE had a notorious flair for dramatic confrontations with the public, using disruptive protest to demand a complete dismantling of the legal systems abetting the exploitation of other animals for the benefit of humans. My answer, one of DxE’s slogans, was shorthand for that:“Total animal liberation.”The THL rep (I’ll call her Kristy since I haven’t asked permission to use her name) endured an awkward silence waiting to see if anyone else would respond. Kristy had been working for THL about as long as I’d been organizing with DxE. Her job was to mobilize volunteers to support THL’s signature tactics: handing out leaflets to the public about meatless diets, and pressuring corporations like Mcdonald's to set animal welfare standards for their supply chains. When she clicked to the next slide, the answer waiting there was, like mine, a reflection of her organization’s ethos:“Reduce the greatest amount of suffering for the greatest number of animals we can.”Them’s Fightin’ WordsFor an outsider to the world of animal advocacy, these two answers would probably seem perfectly compatible. Yet from the moment they were spoken, room 217 of the Hellems Arts & Sciences building was filled with a palpable tension. A conflict much larger than us had asserted itself.The humans that make up both THL and DxE share the extremely uncommon view that farming animals is a grievous moral harm, and the even less common conviction to dedicate their lives to opposing it. Yet back in 2016, this didn’t seem to be worth much. The relationship between the organizations was racked with mutual distrust, even disdain. And this malaise was merely a microcosm for a larger conflict among animal advocates, one that had been playing out for years in vicious comment threads across social media. To at least one side, this was known as the battle of welfarists vs. abolitionists.In a moment, I’ll explain why I hope this dichotomy will finally be relegated to the dustbin of history. In fact, I believe it was as useless and misleading back then as it is now. But that’s not what I thought at the time.As soon as Kristy’s answer appeared on the screen, a familiar narrative was racing through my brain. I had labeled her a welfarist, and as fast as my neurons could fire, this label was joined by a series of harsh judgments. K...]]>
Aidan Kankyoku https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 30:51 None full 4616
Nxm8htyEJsmKdZdyp_EA EA - Getting Actual Value from “Info Value”: Example from a Failed Experiment by Nikola Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Actual Value from “Info Value”: Example from a Failed Experiment, published by Nikola on January 26, 2023 on The Effective Altruism Forum.TL;DR: Experiments that generate info value should be shared between EA groups. Publication bias makes it more likely that groups repeat each other’s mistakes. Doing things “for the info value” produces info value only insofar as 1) you actually try to do them well and 2) you process and share the information that they generate. As an example, we present a failed experiment we ran in the Spring of 2022 where we changed the format of our Precipice Reading Group, which had very low attendance due (we think) to a structure that felt impersonal and unaccountable. This, among other experiences, has taught us that personalization and accountability is very important for the success of a program. We invite other group organizers (or anyone else who does things “for the info value”) to share potentially helpful results of failed experiments in the comments.How “Info Value” Experiments Lose Their Potential ValueEA group organizers often consciously change a program’s setup, or run an unusual program, “for the info value” – that is, they might not have high expectations that it will succeed, but “at least we’ll know if something like this works.” It is very valuable to try new things; EA groups have explored very little of the space of field-building programs, and good discoveries can be scaled up!But we want to raise two caveats:Running things “for the info value” is often paired with “80-20’ing,” where organizers run low-overhead versions of programs and don’t sweat the details. But if you actually expect a program to be impactful, you might not want to “80-20 it” – you might want to make the operations smooth, pay lots of attention to participants, and so on. This means that even while piloting a program, you get much worse info value if the program is understaffed or otherwise badly run and understaffed. If the program works anyway, this can indeed signal to organizers that an even better version could be great, but often the program goes poorly and fails to generate the info value of whether a well-executed version of the program would have worked. (We think our Precipice group described below mostly executed everything except the format reasonably well and does not fit in this category, but have witnessed this a few times, including in our own programs.)People seem much more likely to post on the forum about their groups’ successful experiments than their failed experiments, which might be resulting in lots of wasted effort. It’s understandable not to want to write a whole post about a failed experiment, though, so we’d like to invite group organizers to post about other things that didn’t work but generated useful information value in the comments!Example: Precipice Reading Group Format ChangeHarvard organizers have been doing Precipice reading groups every semester since the Fall semester of 2021. Their format was similar to an introductory fellowship: specific people were assigned to cohorts with an assigned discussion leader, and they were to meet at specific times and locations. There wasn’t an extensive application form, just a sign up form, and people were automatically admitted to the program.Dropoff rates were pretty high (~60% of signups came to the first meeting, and as we got more into the semester, only around 15% of signups would actually finish the reading group). This dropoff seems to widely apply to reading groups/book clubs. For example, a remote summer book club hosted by the Harvard Data Science Initiative went from around 15-20 participants to around 5-7 regulars. For comparison, in the Fall 2022 Arete Fellowship (with a more formal syllabus and application process), ~90% of admitted students atten...]]>
Nikola https://forum.effectivealtruism.org/posts/Nxm8htyEJsmKdZdyp/getting-actual-value-from-info-value-example-from-a-failed Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Actual Value from “Info Value”: Example from a Failed Experiment, published by Nikola on January 26, 2023 on The Effective Altruism Forum.TL;DR: Experiments that generate info value should be shared between EA groups. Publication bias makes it more likely that groups repeat each other’s mistakes. Doing things “for the info value” produces info value only insofar as 1) you actually try to do them well and 2) you process and share the information that they generate. As an example, we present a failed experiment we ran in the Spring of 2022 where we changed the format of our Precipice Reading Group, which had very low attendance due (we think) to a structure that felt impersonal and unaccountable. This, among other experiences, has taught us that personalization and accountability is very important for the success of a program. We invite other group organizers (or anyone else who does things “for the info value”) to share potentially helpful results of failed experiments in the comments.How “Info Value” Experiments Lose Their Potential ValueEA group organizers often consciously change a program’s setup, or run an unusual program, “for the info value” – that is, they might not have high expectations that it will succeed, but “at least we’ll know if something like this works.” It is very valuable to try new things; EA groups have explored very little of the space of field-building programs, and good discoveries can be scaled up!But we want to raise two caveats:Running things “for the info value” is often paired with “80-20’ing,” where organizers run low-overhead versions of programs and don’t sweat the details. But if you actually expect a program to be impactful, you might not want to “80-20 it” – you might want to make the operations smooth, pay lots of attention to participants, and so on. This means that even while piloting a program, you get much worse info value if the program is understaffed or otherwise badly run and understaffed. If the program works anyway, this can indeed signal to organizers that an even better version could be great, but often the program goes poorly and fails to generate the info value of whether a well-executed version of the program would have worked. (We think our Precipice group described below mostly executed everything except the format reasonably well and does not fit in this category, but have witnessed this a few times, including in our own programs.)People seem much more likely to post on the forum about their groups’ successful experiments than their failed experiments, which might be resulting in lots of wasted effort. It’s understandable not to want to write a whole post about a failed experiment, though, so we’d like to invite group organizers to post about other things that didn’t work but generated useful information value in the comments!Example: Precipice Reading Group Format ChangeHarvard organizers have been doing Precipice reading groups every semester since the Fall semester of 2021. Their format was similar to an introductory fellowship: specific people were assigned to cohorts with an assigned discussion leader, and they were to meet at specific times and locations. There wasn’t an extensive application form, just a sign up form, and people were automatically admitted to the program.Dropoff rates were pretty high (~60% of signups came to the first meeting, and as we got more into the semester, only around 15% of signups would actually finish the reading group). This dropoff seems to widely apply to reading groups/book clubs. For example, a remote summer book club hosted by the Harvard Data Science Initiative went from around 15-20 participants to around 5-7 regulars. For comparison, in the Fall 2022 Arete Fellowship (with a more formal syllabus and application process), ~90% of admitted students atten...]]>
Thu, 26 Jan 2023 21:43:25 +0000 EA - Getting Actual Value from “Info Value”: Example from a Failed Experiment by Nikola Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Actual Value from “Info Value”: Example from a Failed Experiment, published by Nikola on January 26, 2023 on The Effective Altruism Forum.TL;DR: Experiments that generate info value should be shared between EA groups. Publication bias makes it more likely that groups repeat each other’s mistakes. Doing things “for the info value” produces info value only insofar as 1) you actually try to do them well and 2) you process and share the information that they generate. As an example, we present a failed experiment we ran in the Spring of 2022 where we changed the format of our Precipice Reading Group, which had very low attendance due (we think) to a structure that felt impersonal and unaccountable. This, among other experiences, has taught us that personalization and accountability is very important for the success of a program. We invite other group organizers (or anyone else who does things “for the info value”) to share potentially helpful results of failed experiments in the comments.How “Info Value” Experiments Lose Their Potential ValueEA group organizers often consciously change a program’s setup, or run an unusual program, “for the info value” – that is, they might not have high expectations that it will succeed, but “at least we’ll know if something like this works.” It is very valuable to try new things; EA groups have explored very little of the space of field-building programs, and good discoveries can be scaled up!But we want to raise two caveats:Running things “for the info value” is often paired with “80-20’ing,” where organizers run low-overhead versions of programs and don’t sweat the details. But if you actually expect a program to be impactful, you might not want to “80-20 it” – you might want to make the operations smooth, pay lots of attention to participants, and so on. This means that even while piloting a program, you get much worse info value if the program is understaffed or otherwise badly run and understaffed. If the program works anyway, this can indeed signal to organizers that an even better version could be great, but often the program goes poorly and fails to generate the info value of whether a well-executed version of the program would have worked. (We think our Precipice group described below mostly executed everything except the format reasonably well and does not fit in this category, but have witnessed this a few times, including in our own programs.)People seem much more likely to post on the forum about their groups’ successful experiments than their failed experiments, which might be resulting in lots of wasted effort. It’s understandable not to want to write a whole post about a failed experiment, though, so we’d like to invite group organizers to post about other things that didn’t work but generated useful information value in the comments!Example: Precipice Reading Group Format ChangeHarvard organizers have been doing Precipice reading groups every semester since the Fall semester of 2021. Their format was similar to an introductory fellowship: specific people were assigned to cohorts with an assigned discussion leader, and they were to meet at specific times and locations. There wasn’t an extensive application form, just a sign up form, and people were automatically admitted to the program.Dropoff rates were pretty high (~60% of signups came to the first meeting, and as we got more into the semester, only around 15% of signups would actually finish the reading group). This dropoff seems to widely apply to reading groups/book clubs. For example, a remote summer book club hosted by the Harvard Data Science Initiative went from around 15-20 participants to around 5-7 regulars. For comparison, in the Fall 2022 Arete Fellowship (with a more formal syllabus and application process), ~90% of admitted students atten...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Actual Value from “Info Value”: Example from a Failed Experiment, published by Nikola on January 26, 2023 on The Effective Altruism Forum.TL;DR: Experiments that generate info value should be shared between EA groups. Publication bias makes it more likely that groups repeat each other’s mistakes. Doing things “for the info value” produces info value only insofar as 1) you actually try to do them well and 2) you process and share the information that they generate. As an example, we present a failed experiment we ran in the Spring of 2022 where we changed the format of our Precipice Reading Group, which had very low attendance due (we think) to a structure that felt impersonal and unaccountable. This, among other experiences, has taught us that personalization and accountability is very important for the success of a program. We invite other group organizers (or anyone else who does things “for the info value”) to share potentially helpful results of failed experiments in the comments.How “Info Value” Experiments Lose Their Potential ValueEA group organizers often consciously change a program’s setup, or run an unusual program, “for the info value” – that is, they might not have high expectations that it will succeed, but “at least we’ll know if something like this works.” It is very valuable to try new things; EA groups have explored very little of the space of field-building programs, and good discoveries can be scaled up!But we want to raise two caveats:Running things “for the info value” is often paired with “80-20’ing,” where organizers run low-overhead versions of programs and don’t sweat the details. But if you actually expect a program to be impactful, you might not want to “80-20 it” – you might want to make the operations smooth, pay lots of attention to participants, and so on. This means that even while piloting a program, you get much worse info value if the program is understaffed or otherwise badly run and understaffed. If the program works anyway, this can indeed signal to organizers that an even better version could be great, but often the program goes poorly and fails to generate the info value of whether a well-executed version of the program would have worked. (We think our Precipice group described below mostly executed everything except the format reasonably well and does not fit in this category, but have witnessed this a few times, including in our own programs.)People seem much more likely to post on the forum about their groups’ successful experiments than their failed experiments, which might be resulting in lots of wasted effort. It’s understandable not to want to write a whole post about a failed experiment, though, so we’d like to invite group organizers to post about other things that didn’t work but generated useful information value in the comments!Example: Precipice Reading Group Format ChangeHarvard organizers have been doing Precipice reading groups every semester since the Fall semester of 2021. Their format was similar to an introductory fellowship: specific people were assigned to cohorts with an assigned discussion leader, and they were to meet at specific times and locations. There wasn’t an extensive application form, just a sign up form, and people were automatically admitted to the program.Dropoff rates were pretty high (~60% of signups came to the first meeting, and as we got more into the semester, only around 15% of signups would actually finish the reading group). This dropoff seems to widely apply to reading groups/book clubs. For example, a remote summer book club hosted by the Harvard Data Science Initiative went from around 15-20 participants to around 5-7 regulars. For comparison, in the Fall 2022 Arete Fellowship (with a more formal syllabus and application process), ~90% of admitted students atten...]]>
Nikola https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:10 None full 4588
Nxm8htyEJsmKdZdyp_NL_EA_EA EA - Getting Actual Value from “Info Value”: Example from a Failed Experiment by Nikola Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Actual Value from “Info Value”: Example from a Failed Experiment, published by Nikola on January 26, 2023 on The Effective Altruism Forum.TL;DR: Experiments that generate info value should be shared between EA groups. Publication bias makes it more likely that groups repeat each other’s mistakes. Doing things “for the info value” produces info value only insofar as 1) you actually try to do them well and 2) you process and share the information that they generate. As an example, we present a failed experiment we ran in the Spring of 2022 where we changed the format of our Precipice Reading Group, which had very low attendance due (we think) to a structure that felt impersonal and unaccountable. This, among other experiences, has taught us that personalization and accountability is very important for the success of a program. We invite other group organizers (or anyone else who does things “for the info value”) to share potentially helpful results of failed experiments in the comments.How “Info Value” Experiments Lose Their Potential ValueEA group organizers often consciously change a program’s setup, or run an unusual program, “for the info value” – that is, they might not have high expectations that it will succeed, but “at least we’ll know if something like this works.” It is very valuable to try new things; EA groups have explored very little of the space of field-building programs, and good discoveries can be scaled up!But we want to raise two caveats:Running things “for the info value” is often paired with “80-20’ing,” where organizers run low-overhead versions of programs and don’t sweat the details. But if you actually expect a program to be impactful, you might not want to “80-20 it” – you might want to make the operations smooth, pay lots of attention to participants, and so on. This means that even while piloting a program, you get much worse info value if the program is understaffed or otherwise badly run and understaffed. If the program works anyway, this can indeed signal to organizers that an even better version could be great, but often the program goes poorly and fails to generate the info value of whether a well-executed version of the program would have worked. (We think our Precipice group described below mostly executed everything except the format reasonably well and does not fit in this category, but have witnessed this a few times, including in our own programs.)People seem much more likely to post on the forum about their groups’ successful experiments than their failed experiments, which might be resulting in lots of wasted effort. It’s understandable not to want to write a whole post about a failed experiment, though, so we’d like to invite group organizers to post about other things that didn’t work but generated useful information value in the comments!Example: Precipice Reading Group Format ChangeHarvard organizers have been doing Precipice reading groups every semester since the Fall semester of 2021. Their format was similar to an introductory fellowship: specific people were assigned to cohorts with an assigned discussion leader, and they were to meet at specific times and locations. There wasn’t an extensive application form, just a sign up form, and people were automatically admitted to the program.Dropoff rates were pretty high (~60% of signups came to the first meeting, and as we got more into the semester, only around 15% of signups would actually finish the reading group). This dropoff seems to widely apply to reading groups/book clubs. For example, a remote summer book club hosted by the Harvard Data Science Initiative went from around 15-20 participants to around 5-7 regulars. For comparison, in the Fall 2022 Arete Fellowship (with a more formal syllabus and application process), ~90% of admitted students atten...]]>
Nikola https://forum.effectivealtruism.org/posts/Nxm8htyEJsmKdZdyp/getting-actual-value-from-info-value-example-from-a-failed Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Actual Value from “Info Value”: Example from a Failed Experiment, published by Nikola on January 26, 2023 on The Effective Altruism Forum.TL;DR: Experiments that generate info value should be shared between EA groups. Publication bias makes it more likely that groups repeat each other’s mistakes. Doing things “for the info value” produces info value only insofar as 1) you actually try to do them well and 2) you process and share the information that they generate. As an example, we present a failed experiment we ran in the Spring of 2022 where we changed the format of our Precipice Reading Group, which had very low attendance due (we think) to a structure that felt impersonal and unaccountable. This, among other experiences, has taught us that personalization and accountability is very important for the success of a program. We invite other group organizers (or anyone else who does things “for the info value”) to share potentially helpful results of failed experiments in the comments.How “Info Value” Experiments Lose Their Potential ValueEA group organizers often consciously change a program’s setup, or run an unusual program, “for the info value” – that is, they might not have high expectations that it will succeed, but “at least we’ll know if something like this works.” It is very valuable to try new things; EA groups have explored very little of the space of field-building programs, and good discoveries can be scaled up!But we want to raise two caveats:Running things “for the info value” is often paired with “80-20’ing,” where organizers run low-overhead versions of programs and don’t sweat the details. But if you actually expect a program to be impactful, you might not want to “80-20 it” – you might want to make the operations smooth, pay lots of attention to participants, and so on. This means that even while piloting a program, you get much worse info value if the program is understaffed or otherwise badly run and understaffed. If the program works anyway, this can indeed signal to organizers that an even better version could be great, but often the program goes poorly and fails to generate the info value of whether a well-executed version of the program would have worked. (We think our Precipice group described below mostly executed everything except the format reasonably well and does not fit in this category, but have witnessed this a few times, including in our own programs.)People seem much more likely to post on the forum about their groups’ successful experiments than their failed experiments, which might be resulting in lots of wasted effort. It’s understandable not to want to write a whole post about a failed experiment, though, so we’d like to invite group organizers to post about other things that didn’t work but generated useful information value in the comments!Example: Precipice Reading Group Format ChangeHarvard organizers have been doing Precipice reading groups every semester since the Fall semester of 2021. Their format was similar to an introductory fellowship: specific people were assigned to cohorts with an assigned discussion leader, and they were to meet at specific times and locations. There wasn’t an extensive application form, just a sign up form, and people were automatically admitted to the program.Dropoff rates were pretty high (~60% of signups came to the first meeting, and as we got more into the semester, only around 15% of signups would actually finish the reading group). This dropoff seems to widely apply to reading groups/book clubs. For example, a remote summer book club hosted by the Harvard Data Science Initiative went from around 15-20 participants to around 5-7 regulars. For comparison, in the Fall 2022 Arete Fellowship (with a more formal syllabus and application process), ~90% of admitted students atten...]]>
Thu, 26 Jan 2023 21:43:25 +0000 EA - Getting Actual Value from “Info Value”: Example from a Failed Experiment by Nikola Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Actual Value from “Info Value”: Example from a Failed Experiment, published by Nikola on January 26, 2023 on The Effective Altruism Forum.TL;DR: Experiments that generate info value should be shared between EA groups. Publication bias makes it more likely that groups repeat each other’s mistakes. Doing things “for the info value” produces info value only insofar as 1) you actually try to do them well and 2) you process and share the information that they generate. As an example, we present a failed experiment we ran in the Spring of 2022 where we changed the format of our Precipice Reading Group, which had very low attendance due (we think) to a structure that felt impersonal and unaccountable. This, among other experiences, has taught us that personalization and accountability is very important for the success of a program. We invite other group organizers (or anyone else who does things “for the info value”) to share potentially helpful results of failed experiments in the comments.How “Info Value” Experiments Lose Their Potential ValueEA group organizers often consciously change a program’s setup, or run an unusual program, “for the info value” – that is, they might not have high expectations that it will succeed, but “at least we’ll know if something like this works.” It is very valuable to try new things; EA groups have explored very little of the space of field-building programs, and good discoveries can be scaled up!But we want to raise two caveats:Running things “for the info value” is often paired with “80-20’ing,” where organizers run low-overhead versions of programs and don’t sweat the details. But if you actually expect a program to be impactful, you might not want to “80-20 it” – you might want to make the operations smooth, pay lots of attention to participants, and so on. This means that even while piloting a program, you get much worse info value if the program is understaffed or otherwise badly run and understaffed. If the program works anyway, this can indeed signal to organizers that an even better version could be great, but often the program goes poorly and fails to generate the info value of whether a well-executed version of the program would have worked. (We think our Precipice group described below mostly executed everything except the format reasonably well and does not fit in this category, but have witnessed this a few times, including in our own programs.)People seem much more likely to post on the forum about their groups’ successful experiments than their failed experiments, which might be resulting in lots of wasted effort. It’s understandable not to want to write a whole post about a failed experiment, though, so we’d like to invite group organizers to post about other things that didn’t work but generated useful information value in the comments!Example: Precipice Reading Group Format ChangeHarvard organizers have been doing Precipice reading groups every semester since the Fall semester of 2021. Their format was similar to an introductory fellowship: specific people were assigned to cohorts with an assigned discussion leader, and they were to meet at specific times and locations. There wasn’t an extensive application form, just a sign up form, and people were automatically admitted to the program.Dropoff rates were pretty high (~60% of signups came to the first meeting, and as we got more into the semester, only around 15% of signups would actually finish the reading group). This dropoff seems to widely apply to reading groups/book clubs. For example, a remote summer book club hosted by the Harvard Data Science Initiative went from around 15-20 participants to around 5-7 regulars. For comparison, in the Fall 2022 Arete Fellowship (with a more formal syllabus and application process), ~90% of admitted students atten...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Actual Value from “Info Value”: Example from a Failed Experiment, published by Nikola on January 26, 2023 on The Effective Altruism Forum.TL;DR: Experiments that generate info value should be shared between EA groups. Publication bias makes it more likely that groups repeat each other’s mistakes. Doing things “for the info value” produces info value only insofar as 1) you actually try to do them well and 2) you process and share the information that they generate. As an example, we present a failed experiment we ran in the Spring of 2022 where we changed the format of our Precipice Reading Group, which had very low attendance due (we think) to a structure that felt impersonal and unaccountable. This, among other experiences, has taught us that personalization and accountability is very important for the success of a program. We invite other group organizers (or anyone else who does things “for the info value”) to share potentially helpful results of failed experiments in the comments.How “Info Value” Experiments Lose Their Potential ValueEA group organizers often consciously change a program’s setup, or run an unusual program, “for the info value” – that is, they might not have high expectations that it will succeed, but “at least we’ll know if something like this works.” It is very valuable to try new things; EA groups have explored very little of the space of field-building programs, and good discoveries can be scaled up!But we want to raise two caveats:Running things “for the info value” is often paired with “80-20’ing,” where organizers run low-overhead versions of programs and don’t sweat the details. But if you actually expect a program to be impactful, you might not want to “80-20 it” – you might want to make the operations smooth, pay lots of attention to participants, and so on. This means that even while piloting a program, you get much worse info value if the program is understaffed or otherwise badly run and understaffed. If the program works anyway, this can indeed signal to organizers that an even better version could be great, but often the program goes poorly and fails to generate the info value of whether a well-executed version of the program would have worked. (We think our Precipice group described below mostly executed everything except the format reasonably well and does not fit in this category, but have witnessed this a few times, including in our own programs.)People seem much more likely to post on the forum about their groups’ successful experiments than their failed experiments, which might be resulting in lots of wasted effort. It’s understandable not to want to write a whole post about a failed experiment, though, so we’d like to invite group organizers to post about other things that didn’t work but generated useful information value in the comments!Example: Precipice Reading Group Format ChangeHarvard organizers have been doing Precipice reading groups every semester since the Fall semester of 2021. Their format was similar to an introductory fellowship: specific people were assigned to cohorts with an assigned discussion leader, and they were to meet at specific times and locations. There wasn’t an extensive application form, just a sign up form, and people were automatically admitted to the program.Dropoff rates were pretty high (~60% of signups came to the first meeting, and as we got more into the semester, only around 15% of signups would actually finish the reading group). This dropoff seems to widely apply to reading groups/book clubs. For example, a remote summer book club hosted by the Harvard Data Science Initiative went from around 15-20 participants to around 5-7 regulars. For comparison, in the Fall 2022 Arete Fellowship (with a more formal syllabus and application process), ~90% of admitted students atten...]]>
Nikola https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:10 None full 4618
wxo9GMirRjZtzX8Mz_EA EA - Moving money from fashion week to effective causes by Vincent van der Holst Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving money from fashion week to effective causes, published by Vincent van der Holst on January 26, 2023 on The Effective Altruism Forum.I apologize in advance for asking the EA forum to help us activate a campaign, but because I believe this to be an effective, new and interesting way to build the community and get more incremental money to effective charities, I have written the post nonetheless. If you feel this is inapproriate, please let us know in the comments.Fashion week is next month. The week where hip people wear overpriced clothing persuading normal looking people to buy the same clothing, resulting in lots of profits for shareholders. That got us thinking: can we create a fashion collection that is overpriced, worn by hip people, but where the status comes from the amount of good you do rather than the [insert luxury brand we can't name because we might be sued] logo on your chest?And so we created the "This is..." collection, a collection of t-shirts where all profits go to effective causes rather than shareholders of rich luxury fashion brands. We have the "This is malaria prevention" shirt donating to the malaria consortium, the "This is Eyesight" t-shirt, donating all profits to the Fred Hollows foundation and a bunch of other shirts. And we've created the most expensive t-shirt in the world, one that is estimated to save a hundred kids lives through malaria protection.With this campaign, we've tried to make effective giving appealing to the masses, by creating something they want (a high quality overpriced t-shirt), with a simple message ("This is a company") and we're trying to get hip people (press and influencers) to write about and wear them, so people will want to buy them instead of the garbage we see on fashion week.We think this is effective for several reasonsThe majority of the money for each t-shirt will go to effective causesThis campaign makes effective causes more appealing with simple messages and activation through influencers and pressMost buyers, press and influencers who will take part in this campaign are non-EAs, so this can help get people acquainted to EABecause most buyers will be non-EAs, this will move money to ECs that would have otherwise gone to something ineffectiveThe success of the campaign will help our company BOAS get more traction, raise funding and become succesful so we can donate all of our profits to effective charitiesHow you can helpYou can buy one of the t-shirts, they ship worldwideYou can send this to 2 or 3 people who will find this interesting and who might buy itIf you have the t-shirt, wear it a lot so people scan the QR code on the front, driving more traffic and salesShow of your t-shirt on social mediaIf you have access to PR or influencers who you think might be interested, or you think you can help in another way, reach out to me on vin@boas.coSome things to considerWe are operating this at a loss, but we still have to account for the cost of the t-shirt, shipping and VAT. You can see a breakdown of how much will go to charity on each shirt's page. The cheapest t-shirts have the lowest % going to charity because the relative cost is higher.We're not sure if you can deduct the donation amount from your taxes. It appears this will differ for each country and we can't say with certainty that you can.Right now the t-shirts have VAT on the total price including the donation, lowering the percentage we can donate. We're still figuring out if its possible to charge VAT on just the cost of the T-Shirt and not on the donation amount.If you want more information about the campaign, you can check out this onepager or our blog about it. If you have criticism or you're skeptical about this campaign, please let us know in the comments.Thanks for listening. To help us out with The Nonlin...]]>
Vincent van der Holst https://forum.effectivealtruism.org/posts/wxo9GMirRjZtzX8Mz/moving-money-from-fashion-week-to-effective-causes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving money from fashion week to effective causes, published by Vincent van der Holst on January 26, 2023 on The Effective Altruism Forum.I apologize in advance for asking the EA forum to help us activate a campaign, but because I believe this to be an effective, new and interesting way to build the community and get more incremental money to effective charities, I have written the post nonetheless. If you feel this is inapproriate, please let us know in the comments.Fashion week is next month. The week where hip people wear overpriced clothing persuading normal looking people to buy the same clothing, resulting in lots of profits for shareholders. That got us thinking: can we create a fashion collection that is overpriced, worn by hip people, but where the status comes from the amount of good you do rather than the [insert luxury brand we can't name because we might be sued] logo on your chest?And so we created the "This is..." collection, a collection of t-shirts where all profits go to effective causes rather than shareholders of rich luxury fashion brands. We have the "This is malaria prevention" shirt donating to the malaria consortium, the "This is Eyesight" t-shirt, donating all profits to the Fred Hollows foundation and a bunch of other shirts. And we've created the most expensive t-shirt in the world, one that is estimated to save a hundred kids lives through malaria protection.With this campaign, we've tried to make effective giving appealing to the masses, by creating something they want (a high quality overpriced t-shirt), with a simple message ("This is a company") and we're trying to get hip people (press and influencers) to write about and wear them, so people will want to buy them instead of the garbage we see on fashion week.We think this is effective for several reasonsThe majority of the money for each t-shirt will go to effective causesThis campaign makes effective causes more appealing with simple messages and activation through influencers and pressMost buyers, press and influencers who will take part in this campaign are non-EAs, so this can help get people acquainted to EABecause most buyers will be non-EAs, this will move money to ECs that would have otherwise gone to something ineffectiveThe success of the campaign will help our company BOAS get more traction, raise funding and become succesful so we can donate all of our profits to effective charitiesHow you can helpYou can buy one of the t-shirts, they ship worldwideYou can send this to 2 or 3 people who will find this interesting and who might buy itIf you have the t-shirt, wear it a lot so people scan the QR code on the front, driving more traffic and salesShow of your t-shirt on social mediaIf you have access to PR or influencers who you think might be interested, or you think you can help in another way, reach out to me on vin@boas.coSome things to considerWe are operating this at a loss, but we still have to account for the cost of the t-shirt, shipping and VAT. You can see a breakdown of how much will go to charity on each shirt's page. The cheapest t-shirts have the lowest % going to charity because the relative cost is higher.We're not sure if you can deduct the donation amount from your taxes. It appears this will differ for each country and we can't say with certainty that you can.Right now the t-shirts have VAT on the total price including the donation, lowering the percentage we can donate. We're still figuring out if its possible to charge VAT on just the cost of the T-Shirt and not on the donation amount.If you want more information about the campaign, you can check out this onepager or our blog about it. If you have criticism or you're skeptical about this campaign, please let us know in the comments.Thanks for listening. To help us out with The Nonlin...]]>
Thu, 26 Jan 2023 17:22:07 +0000 EA - Moving money from fashion week to effective causes by Vincent van der Holst Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving money from fashion week to effective causes, published by Vincent van der Holst on January 26, 2023 on The Effective Altruism Forum.I apologize in advance for asking the EA forum to help us activate a campaign, but because I believe this to be an effective, new and interesting way to build the community and get more incremental money to effective charities, I have written the post nonetheless. If you feel this is inapproriate, please let us know in the comments.Fashion week is next month. The week where hip people wear overpriced clothing persuading normal looking people to buy the same clothing, resulting in lots of profits for shareholders. That got us thinking: can we create a fashion collection that is overpriced, worn by hip people, but where the status comes from the amount of good you do rather than the [insert luxury brand we can't name because we might be sued] logo on your chest?And so we created the "This is..." collection, a collection of t-shirts where all profits go to effective causes rather than shareholders of rich luxury fashion brands. We have the "This is malaria prevention" shirt donating to the malaria consortium, the "This is Eyesight" t-shirt, donating all profits to the Fred Hollows foundation and a bunch of other shirts. And we've created the most expensive t-shirt in the world, one that is estimated to save a hundred kids lives through malaria protection.With this campaign, we've tried to make effective giving appealing to the masses, by creating something they want (a high quality overpriced t-shirt), with a simple message ("This is a company") and we're trying to get hip people (press and influencers) to write about and wear them, so people will want to buy them instead of the garbage we see on fashion week.We think this is effective for several reasonsThe majority of the money for each t-shirt will go to effective causesThis campaign makes effective causes more appealing with simple messages and activation through influencers and pressMost buyers, press and influencers who will take part in this campaign are non-EAs, so this can help get people acquainted to EABecause most buyers will be non-EAs, this will move money to ECs that would have otherwise gone to something ineffectiveThe success of the campaign will help our company BOAS get more traction, raise funding and become succesful so we can donate all of our profits to effective charitiesHow you can helpYou can buy one of the t-shirts, they ship worldwideYou can send this to 2 or 3 people who will find this interesting and who might buy itIf you have the t-shirt, wear it a lot so people scan the QR code on the front, driving more traffic and salesShow of your t-shirt on social mediaIf you have access to PR or influencers who you think might be interested, or you think you can help in another way, reach out to me on vin@boas.coSome things to considerWe are operating this at a loss, but we still have to account for the cost of the t-shirt, shipping and VAT. You can see a breakdown of how much will go to charity on each shirt's page. The cheapest t-shirts have the lowest % going to charity because the relative cost is higher.We're not sure if you can deduct the donation amount from your taxes. It appears this will differ for each country and we can't say with certainty that you can.Right now the t-shirts have VAT on the total price including the donation, lowering the percentage we can donate. We're still figuring out if its possible to charge VAT on just the cost of the T-Shirt and not on the donation amount.If you want more information about the campaign, you can check out this onepager or our blog about it. If you have criticism or you're skeptical about this campaign, please let us know in the comments.Thanks for listening. To help us out with The Nonlin...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moving money from fashion week to effective causes, published by Vincent van der Holst on January 26, 2023 on The Effective Altruism Forum.I apologize in advance for asking the EA forum to help us activate a campaign, but because I believe this to be an effective, new and interesting way to build the community and get more incremental money to effective charities, I have written the post nonetheless. If you feel this is inapproriate, please let us know in the comments.Fashion week is next month. The week where hip people wear overpriced clothing persuading normal looking people to buy the same clothing, resulting in lots of profits for shareholders. That got us thinking: can we create a fashion collection that is overpriced, worn by hip people, but where the status comes from the amount of good you do rather than the [insert luxury brand we can't name because we might be sued] logo on your chest?And so we created the "This is..." collection, a collection of t-shirts where all profits go to effective causes rather than shareholders of rich luxury fashion brands. We have the "This is malaria prevention" shirt donating to the malaria consortium, the "This is Eyesight" t-shirt, donating all profits to the Fred Hollows foundation and a bunch of other shirts. And we've created the most expensive t-shirt in the world, one that is estimated to save a hundred kids lives through malaria protection.With this campaign, we've tried to make effective giving appealing to the masses, by creating something they want (a high quality overpriced t-shirt), with a simple message ("This is a company") and we're trying to get hip people (press and influencers) to write about and wear them, so people will want to buy them instead of the garbage we see on fashion week.We think this is effective for several reasonsThe majority of the money for each t-shirt will go to effective causesThis campaign makes effective causes more appealing with simple messages and activation through influencers and pressMost buyers, press and influencers who will take part in this campaign are non-EAs, so this can help get people acquainted to EABecause most buyers will be non-EAs, this will move money to ECs that would have otherwise gone to something ineffectiveThe success of the campaign will help our company BOAS get more traction, raise funding and become succesful so we can donate all of our profits to effective charitiesHow you can helpYou can buy one of the t-shirts, they ship worldwideYou can send this to 2 or 3 people who will find this interesting and who might buy itIf you have the t-shirt, wear it a lot so people scan the QR code on the front, driving more traffic and salesShow of your t-shirt on social mediaIf you have access to PR or influencers who you think might be interested, or you think you can help in another way, reach out to me on vin@boas.coSome things to considerWe are operating this at a loss, but we still have to account for the cost of the t-shirt, shipping and VAT. You can see a breakdown of how much will go to charity on each shirt's page. The cheapest t-shirts have the lowest % going to charity because the relative cost is higher.We're not sure if you can deduct the donation amount from your taxes. It appears this will differ for each country and we can't say with certainty that you can.Right now the t-shirts have VAT on the total price including the donation, lowering the percentage we can donate. We're still figuring out if its possible to charge VAT on just the cost of the T-Shirt and not on the donation amount.If you want more information about the campaign, you can check out this onepager or our blog about it. If you have criticism or you're skeptical about this campaign, please let us know in the comments.Thanks for listening. To help us out with The Nonlin...]]>
Vincent van der Holst https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:30 None full 4592
JAKmTnrJWP5fGbaGk_EA EA - Celebrating EAGxLatAm and EAGxIndia by OllieBase Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating EAGxLatAm and EAGxIndia, published by OllieBase on January 26, 2023 on The Effective Altruism Forum.In the first two weeks of January, we had the first EA conferences to take place in Latin America and India: EAGxLatAm in Mexico City and EAGxIndia in Jaipur, Rajasthan.Lots of EAGx events happen every year, and we don’t usually post about each one. But I think these events are particularly special as two of the largest EA conferences to take place in LMICs (Low and Middle Income Countries) and the first ever in each region. So, I wanted to share a bit more about how they went.Tl;DR: they went well.EAGxLatinAmericaEAGxLatAm was organised by community builders from the Spanish-speaking EA community with support from CEA. Over 200 people attended the event: ~32% of attendees came from within Mexico, ~13% from the US, ~12% from Colombia, ~7% from Brazil, ~5% from Chile and ~5% from Spain. There were also attendees from several other countries including Argentina, Peru, the Philippines, Canada, and countries in Europe.The event hosted talks in English, Spanish, and Portuguese and attendees could mark on their name badges which languages they spoke. Sessions included:A talk on the high-impact careers service, Carreras con Impacto;A virtual Q&A with Toby Ord to discuss existential risks and the role of LMICs in tackling them;Racionalidad para principiantes (Rationality for beginners) by Laura González Salmerón;A panel on Riesgos Catastróficos Globales (Global Catastrophic Risks);A panel on AI safety with staff from OpenAI, Redwood Research, CSER and Epoch;A talk on Animal Welfare cause prioritisation by Daniela Romero Waldhorn (Rethink Priorities);A talk by JPAL staff on ‘Applying evidence to reduce intimate partner violence’;A talk on local cause prioritisation by Luis Mota (Global Priorities Institute);A panel on EA in LMICs with EA community builders from Brazil, the Spanish-speaking community, the Philippines, Nigeria, South Africa, and EA Anywhere;A talk by Vida Plena (a new organisation incubated by Charity Entrepreneurship focused on mental health in Latin America);Meetups for attendees from Mexico, Chile/Argentina, Colombia/Perú, Brazil and Spain.The event received a likelihood to recommend score of 9.07/10 and attendees made an average of 9.37 new connections. Some quotes from the feedback survey I enjoyed reading:“I had a 1-1 with someone else interested in nuclear security and nuclear winter research, and we're developing an online reading group curriculum to [ideally] launch in March!"“I got convinced on deciding to focus on [sic] direct work on ai safety by talking to [two other attendees].”“I met Rob Wiblin at lunch and I didn't recognize him."One attendee described the event to a CEA colleague as “the best weekend of their life”.The event wasn’t perfect, of course. Several attendees reported that there was too much content in English that some attendees couldn’t follow, and some of the snacks and beverages seemed low quality. We hope to do even better next time!EAGxIndiaEAGxIndia was organised by community builders based in India with support from CEA. 163 people attended: ~70% of attendees came from within India, ~9% from the UK and ~9% from the US. The rest came from Singapore, Indonesia, the Netherlands, Germany, and Brazil.The sessions were all in English, and included:A talk on South Asian Air Quality by Santosh Harish (Open Philanthropy);An AI governance Q&A with Michael Aird (Rethink Priorities);An update from the Fish Welfare Initiative, who are based in India;A talk on AI safety research agendas and opportunities by Adam Gleave (FAR AI);Two sessions led by Charity Entrepreneurship;A workshop on what the EA community should look like in India, led by Indian community builders and CEA staff;A talk mapping the...]]>
OllieBase https://forum.effectivealtruism.org/posts/JAKmTnrJWP5fGbaGk/celebrating-eagxlatam-and-eagxindia Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating EAGxLatAm and EAGxIndia, published by OllieBase on January 26, 2023 on The Effective Altruism Forum.In the first two weeks of January, we had the first EA conferences to take place in Latin America and India: EAGxLatAm in Mexico City and EAGxIndia in Jaipur, Rajasthan.Lots of EAGx events happen every year, and we don’t usually post about each one. But I think these events are particularly special as two of the largest EA conferences to take place in LMICs (Low and Middle Income Countries) and the first ever in each region. So, I wanted to share a bit more about how they went.Tl;DR: they went well.EAGxLatinAmericaEAGxLatAm was organised by community builders from the Spanish-speaking EA community with support from CEA. Over 200 people attended the event: ~32% of attendees came from within Mexico, ~13% from the US, ~12% from Colombia, ~7% from Brazil, ~5% from Chile and ~5% from Spain. There were also attendees from several other countries including Argentina, Peru, the Philippines, Canada, and countries in Europe.The event hosted talks in English, Spanish, and Portuguese and attendees could mark on their name badges which languages they spoke. Sessions included:A talk on the high-impact careers service, Carreras con Impacto;A virtual Q&A with Toby Ord to discuss existential risks and the role of LMICs in tackling them;Racionalidad para principiantes (Rationality for beginners) by Laura González Salmerón;A panel on Riesgos Catastróficos Globales (Global Catastrophic Risks);A panel on AI safety with staff from OpenAI, Redwood Research, CSER and Epoch;A talk on Animal Welfare cause prioritisation by Daniela Romero Waldhorn (Rethink Priorities);A talk by JPAL staff on ‘Applying evidence to reduce intimate partner violence’;A talk on local cause prioritisation by Luis Mota (Global Priorities Institute);A panel on EA in LMICs with EA community builders from Brazil, the Spanish-speaking community, the Philippines, Nigeria, South Africa, and EA Anywhere;A talk by Vida Plena (a new organisation incubated by Charity Entrepreneurship focused on mental health in Latin America);Meetups for attendees from Mexico, Chile/Argentina, Colombia/Perú, Brazil and Spain.The event received a likelihood to recommend score of 9.07/10 and attendees made an average of 9.37 new connections. Some quotes from the feedback survey I enjoyed reading:“I had a 1-1 with someone else interested in nuclear security and nuclear winter research, and we're developing an online reading group curriculum to [ideally] launch in March!"“I got convinced on deciding to focus on [sic] direct work on ai safety by talking to [two other attendees].”“I met Rob Wiblin at lunch and I didn't recognize him."One attendee described the event to a CEA colleague as “the best weekend of their life”.The event wasn’t perfect, of course. Several attendees reported that there was too much content in English that some attendees couldn’t follow, and some of the snacks and beverages seemed low quality. We hope to do even better next time!EAGxIndiaEAGxIndia was organised by community builders based in India with support from CEA. 163 people attended: ~70% of attendees came from within India, ~9% from the UK and ~9% from the US. The rest came from Singapore, Indonesia, the Netherlands, Germany, and Brazil.The sessions were all in English, and included:A talk on South Asian Air Quality by Santosh Harish (Open Philanthropy);An AI governance Q&A with Michael Aird (Rethink Priorities);An update from the Fish Welfare Initiative, who are based in India;A talk on AI safety research agendas and opportunities by Adam Gleave (FAR AI);Two sessions led by Charity Entrepreneurship;A workshop on what the EA community should look like in India, led by Indian community builders and CEA staff;A talk mapping the...]]>
Thu, 26 Jan 2023 16:29:23 +0000 EA - Celebrating EAGxLatAm and EAGxIndia by OllieBase Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating EAGxLatAm and EAGxIndia, published by OllieBase on January 26, 2023 on The Effective Altruism Forum.In the first two weeks of January, we had the first EA conferences to take place in Latin America and India: EAGxLatAm in Mexico City and EAGxIndia in Jaipur, Rajasthan.Lots of EAGx events happen every year, and we don’t usually post about each one. But I think these events are particularly special as two of the largest EA conferences to take place in LMICs (Low and Middle Income Countries) and the first ever in each region. So, I wanted to share a bit more about how they went.Tl;DR: they went well.EAGxLatinAmericaEAGxLatAm was organised by community builders from the Spanish-speaking EA community with support from CEA. Over 200 people attended the event: ~32% of attendees came from within Mexico, ~13% from the US, ~12% from Colombia, ~7% from Brazil, ~5% from Chile and ~5% from Spain. There were also attendees from several other countries including Argentina, Peru, the Philippines, Canada, and countries in Europe.The event hosted talks in English, Spanish, and Portuguese and attendees could mark on their name badges which languages they spoke. Sessions included:A talk on the high-impact careers service, Carreras con Impacto;A virtual Q&A with Toby Ord to discuss existential risks and the role of LMICs in tackling them;Racionalidad para principiantes (Rationality for beginners) by Laura González Salmerón;A panel on Riesgos Catastróficos Globales (Global Catastrophic Risks);A panel on AI safety with staff from OpenAI, Redwood Research, CSER and Epoch;A talk on Animal Welfare cause prioritisation by Daniela Romero Waldhorn (Rethink Priorities);A talk by JPAL staff on ‘Applying evidence to reduce intimate partner violence’;A talk on local cause prioritisation by Luis Mota (Global Priorities Institute);A panel on EA in LMICs with EA community builders from Brazil, the Spanish-speaking community, the Philippines, Nigeria, South Africa, and EA Anywhere;A talk by Vida Plena (a new organisation incubated by Charity Entrepreneurship focused on mental health in Latin America);Meetups for attendees from Mexico, Chile/Argentina, Colombia/Perú, Brazil and Spain.The event received a likelihood to recommend score of 9.07/10 and attendees made an average of 9.37 new connections. Some quotes from the feedback survey I enjoyed reading:“I had a 1-1 with someone else interested in nuclear security and nuclear winter research, and we're developing an online reading group curriculum to [ideally] launch in March!"“I got convinced on deciding to focus on [sic] direct work on ai safety by talking to [two other attendees].”“I met Rob Wiblin at lunch and I didn't recognize him."One attendee described the event to a CEA colleague as “the best weekend of their life”.The event wasn’t perfect, of course. Several attendees reported that there was too much content in English that some attendees couldn’t follow, and some of the snacks and beverages seemed low quality. We hope to do even better next time!EAGxIndiaEAGxIndia was organised by community builders based in India with support from CEA. 163 people attended: ~70% of attendees came from within India, ~9% from the UK and ~9% from the US. The rest came from Singapore, Indonesia, the Netherlands, Germany, and Brazil.The sessions were all in English, and included:A talk on South Asian Air Quality by Santosh Harish (Open Philanthropy);An AI governance Q&A with Michael Aird (Rethink Priorities);An update from the Fish Welfare Initiative, who are based in India;A talk on AI safety research agendas and opportunities by Adam Gleave (FAR AI);Two sessions led by Charity Entrepreneurship;A workshop on what the EA community should look like in India, led by Indian community builders and CEA staff;A talk mapping the...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating EAGxLatAm and EAGxIndia, published by OllieBase on January 26, 2023 on The Effective Altruism Forum.In the first two weeks of January, we had the first EA conferences to take place in Latin America and India: EAGxLatAm in Mexico City and EAGxIndia in Jaipur, Rajasthan.Lots of EAGx events happen every year, and we don’t usually post about each one. But I think these events are particularly special as two of the largest EA conferences to take place in LMICs (Low and Middle Income Countries) and the first ever in each region. So, I wanted to share a bit more about how they went.Tl;DR: they went well.EAGxLatinAmericaEAGxLatAm was organised by community builders from the Spanish-speaking EA community with support from CEA. Over 200 people attended the event: ~32% of attendees came from within Mexico, ~13% from the US, ~12% from Colombia, ~7% from Brazil, ~5% from Chile and ~5% from Spain. There were also attendees from several other countries including Argentina, Peru, the Philippines, Canada, and countries in Europe.The event hosted talks in English, Spanish, and Portuguese and attendees could mark on their name badges which languages they spoke. Sessions included:A talk on the high-impact careers service, Carreras con Impacto;A virtual Q&A with Toby Ord to discuss existential risks and the role of LMICs in tackling them;Racionalidad para principiantes (Rationality for beginners) by Laura González Salmerón;A panel on Riesgos Catastróficos Globales (Global Catastrophic Risks);A panel on AI safety with staff from OpenAI, Redwood Research, CSER and Epoch;A talk on Animal Welfare cause prioritisation by Daniela Romero Waldhorn (Rethink Priorities);A talk by JPAL staff on ‘Applying evidence to reduce intimate partner violence’;A talk on local cause prioritisation by Luis Mota (Global Priorities Institute);A panel on EA in LMICs with EA community builders from Brazil, the Spanish-speaking community, the Philippines, Nigeria, South Africa, and EA Anywhere;A talk by Vida Plena (a new organisation incubated by Charity Entrepreneurship focused on mental health in Latin America);Meetups for attendees from Mexico, Chile/Argentina, Colombia/Perú, Brazil and Spain.The event received a likelihood to recommend score of 9.07/10 and attendees made an average of 9.37 new connections. Some quotes from the feedback survey I enjoyed reading:“I had a 1-1 with someone else interested in nuclear security and nuclear winter research, and we're developing an online reading group curriculum to [ideally] launch in March!"“I got convinced on deciding to focus on [sic] direct work on ai safety by talking to [two other attendees].”“I met Rob Wiblin at lunch and I didn't recognize him."One attendee described the event to a CEA colleague as “the best weekend of their life”.The event wasn’t perfect, of course. Several attendees reported that there was too much content in English that some attendees couldn’t follow, and some of the snacks and beverages seemed low quality. We hope to do even better next time!EAGxIndiaEAGxIndia was organised by community builders based in India with support from CEA. 163 people attended: ~70% of attendees came from within India, ~9% from the UK and ~9% from the US. The rest came from Singapore, Indonesia, the Netherlands, Germany, and Brazil.The sessions were all in English, and included:A talk on South Asian Air Quality by Santosh Harish (Open Philanthropy);An AI governance Q&A with Michael Aird (Rethink Priorities);An update from the Fish Welfare Initiative, who are based in India;A talk on AI safety research agendas and opportunities by Adam Gleave (FAR AI);Two sessions led by Charity Entrepreneurship;A workshop on what the EA community should look like in India, led by Indian community builders and CEA staff;A talk mapping the...]]>
OllieBase https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:53 None full 4589
JAKmTnrJWP5fGbaGk_NL_EA_EA EA - Celebrating EAGxLatAm and EAGxIndia by OllieBase Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating EAGxLatAm and EAGxIndia, published by OllieBase on January 26, 2023 on The Effective Altruism Forum.In the first two weeks of January, we had the first EA conferences to take place in Latin America and India: EAGxLatAm in Mexico City and EAGxIndia in Jaipur, Rajasthan.Lots of EAGx events happen every year, and we don’t usually post about each one. But I think these events are particularly special as two of the largest EA conferences to take place in LMICs (Low and Middle Income Countries) and the first ever in each region. So, I wanted to share a bit more about how they went.Tl;DR: they went well.EAGxLatinAmericaEAGxLatAm was organised by community builders from the Spanish-speaking EA community with support from CEA. Over 200 people attended the event: ~32% of attendees came from within Mexico, ~13% from the US, ~12% from Colombia, ~7% from Brazil, ~5% from Chile and ~5% from Spain. There were also attendees from several other countries including Argentina, Peru, the Philippines, Canada, and countries in Europe.The event hosted talks in English, Spanish, and Portuguese and attendees could mark on their name badges which languages they spoke. Sessions included:A talk on the high-impact careers service, Carreras con Impacto;A virtual Q&A with Toby Ord to discuss existential risks and the role of LMICs in tackling them;Racionalidad para principiantes (Rationality for beginners) by Laura González Salmerón;A panel on Riesgos Catastróficos Globales (Global Catastrophic Risks);A panel on AI safety with staff from OpenAI, Redwood Research, CSER and Epoch;A talk on Animal Welfare cause prioritisation by Daniela Romero Waldhorn (Rethink Priorities);A talk by JPAL staff on ‘Applying evidence to reduce intimate partner violence’;A talk on local cause prioritisation by Luis Mota (Global Priorities Institute);A panel on EA in LMICs with EA community builders from Brazil, the Spanish-speaking community, the Philippines, Nigeria, South Africa, and EA Anywhere;A talk by Vida Plena (a new organisation incubated by Charity Entrepreneurship focused on mental health in Latin America);Meetups for attendees from Mexico, Chile/Argentina, Colombia/Perú, Brazil and Spain.The event received a likelihood to recommend score of 9.07/10 and attendees made an average of 9.37 new connections. Some quotes from the feedback survey I enjoyed reading:“I had a 1-1 with someone else interested in nuclear security and nuclear winter research, and we're developing an online reading group curriculum to [ideally] launch in March!"“I got convinced on deciding to focus on [sic] direct work on ai safety by talking to [two other attendees].”“I met Rob Wiblin at lunch and I didn't recognize him."One attendee described the event to a CEA colleague as “the best weekend of their life”.The event wasn’t perfect, of course. Several attendees reported that there was too much content in English that some attendees couldn’t follow, and some of the snacks and beverages seemed low quality. We hope to do even better next time!EAGxIndiaEAGxIndia was organised by community builders based in India with support from CEA. 163 people attended: ~70% of attendees came from within India, ~9% from the UK and ~9% from the US. The rest came from Singapore, Indonesia, the Netherlands, Germany, and Brazil.The sessions were all in English, and included:A talk on South Asian Air Quality by Santosh Harish (Open Philanthropy);An AI governance Q&A with Michael Aird (Rethink Priorities);An update from the Fish Welfare Initiative, who are based in India;A talk on AI safety research agendas and opportunities by Adam Gleave (FAR AI);Two sessions led by Charity Entrepreneurship;A workshop on what the EA community should look like in India, led by Indian community builders and CEA staff;A talk mapping the...]]>
OllieBase https://forum.effectivealtruism.org/posts/JAKmTnrJWP5fGbaGk/celebrating-eagxlatam-and-eagxindia Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating EAGxLatAm and EAGxIndia, published by OllieBase on January 26, 2023 on The Effective Altruism Forum.In the first two weeks of January, we had the first EA conferences to take place in Latin America and India: EAGxLatAm in Mexico City and EAGxIndia in Jaipur, Rajasthan.Lots of EAGx events happen every year, and we don’t usually post about each one. But I think these events are particularly special as two of the largest EA conferences to take place in LMICs (Low and Middle Income Countries) and the first ever in each region. So, I wanted to share a bit more about how they went.Tl;DR: they went well.EAGxLatinAmericaEAGxLatAm was organised by community builders from the Spanish-speaking EA community with support from CEA. Over 200 people attended the event: ~32% of attendees came from within Mexico, ~13% from the US, ~12% from Colombia, ~7% from Brazil, ~5% from Chile and ~5% from Spain. There were also attendees from several other countries including Argentina, Peru, the Philippines, Canada, and countries in Europe.The event hosted talks in English, Spanish, and Portuguese and attendees could mark on their name badges which languages they spoke. Sessions included:A talk on the high-impact careers service, Carreras con Impacto;A virtual Q&A with Toby Ord to discuss existential risks and the role of LMICs in tackling them;Racionalidad para principiantes (Rationality for beginners) by Laura González Salmerón;A panel on Riesgos Catastróficos Globales (Global Catastrophic Risks);A panel on AI safety with staff from OpenAI, Redwood Research, CSER and Epoch;A talk on Animal Welfare cause prioritisation by Daniela Romero Waldhorn (Rethink Priorities);A talk by JPAL staff on ‘Applying evidence to reduce intimate partner violence’;A talk on local cause prioritisation by Luis Mota (Global Priorities Institute);A panel on EA in LMICs with EA community builders from Brazil, the Spanish-speaking community, the Philippines, Nigeria, South Africa, and EA Anywhere;A talk by Vida Plena (a new organisation incubated by Charity Entrepreneurship focused on mental health in Latin America);Meetups for attendees from Mexico, Chile/Argentina, Colombia/Perú, Brazil and Spain.The event received a likelihood to recommend score of 9.07/10 and attendees made an average of 9.37 new connections. Some quotes from the feedback survey I enjoyed reading:“I had a 1-1 with someone else interested in nuclear security and nuclear winter research, and we're developing an online reading group curriculum to [ideally] launch in March!"“I got convinced on deciding to focus on [sic] direct work on ai safety by talking to [two other attendees].”“I met Rob Wiblin at lunch and I didn't recognize him."One attendee described the event to a CEA colleague as “the best weekend of their life”.The event wasn’t perfect, of course. Several attendees reported that there was too much content in English that some attendees couldn’t follow, and some of the snacks and beverages seemed low quality. We hope to do even better next time!EAGxIndiaEAGxIndia was organised by community builders based in India with support from CEA. 163 people attended: ~70% of attendees came from within India, ~9% from the UK and ~9% from the US. The rest came from Singapore, Indonesia, the Netherlands, Germany, and Brazil.The sessions were all in English, and included:A talk on South Asian Air Quality by Santosh Harish (Open Philanthropy);An AI governance Q&A with Michael Aird (Rethink Priorities);An update from the Fish Welfare Initiative, who are based in India;A talk on AI safety research agendas and opportunities by Adam Gleave (FAR AI);Two sessions led by Charity Entrepreneurship;A workshop on what the EA community should look like in India, led by Indian community builders and CEA staff;A talk mapping the...]]>
Thu, 26 Jan 2023 16:29:23 +0000 EA - Celebrating EAGxLatAm and EAGxIndia by OllieBase Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating EAGxLatAm and EAGxIndia, published by OllieBase on January 26, 2023 on The Effective Altruism Forum.In the first two weeks of January, we had the first EA conferences to take place in Latin America and India: EAGxLatAm in Mexico City and EAGxIndia in Jaipur, Rajasthan.Lots of EAGx events happen every year, and we don’t usually post about each one. But I think these events are particularly special as two of the largest EA conferences to take place in LMICs (Low and Middle Income Countries) and the first ever in each region. So, I wanted to share a bit more about how they went.Tl;DR: they went well.EAGxLatinAmericaEAGxLatAm was organised by community builders from the Spanish-speaking EA community with support from CEA. Over 200 people attended the event: ~32% of attendees came from within Mexico, ~13% from the US, ~12% from Colombia, ~7% from Brazil, ~5% from Chile and ~5% from Spain. There were also attendees from several other countries including Argentina, Peru, the Philippines, Canada, and countries in Europe.The event hosted talks in English, Spanish, and Portuguese and attendees could mark on their name badges which languages they spoke. Sessions included:A talk on the high-impact careers service, Carreras con Impacto;A virtual Q&A with Toby Ord to discuss existential risks and the role of LMICs in tackling them;Racionalidad para principiantes (Rationality for beginners) by Laura González Salmerón;A panel on Riesgos Catastróficos Globales (Global Catastrophic Risks);A panel on AI safety with staff from OpenAI, Redwood Research, CSER and Epoch;A talk on Animal Welfare cause prioritisation by Daniela Romero Waldhorn (Rethink Priorities);A talk by JPAL staff on ‘Applying evidence to reduce intimate partner violence’;A talk on local cause prioritisation by Luis Mota (Global Priorities Institute);A panel on EA in LMICs with EA community builders from Brazil, the Spanish-speaking community, the Philippines, Nigeria, South Africa, and EA Anywhere;A talk by Vida Plena (a new organisation incubated by Charity Entrepreneurship focused on mental health in Latin America);Meetups for attendees from Mexico, Chile/Argentina, Colombia/Perú, Brazil and Spain.The event received a likelihood to recommend score of 9.07/10 and attendees made an average of 9.37 new connections. Some quotes from the feedback survey I enjoyed reading:“I had a 1-1 with someone else interested in nuclear security and nuclear winter research, and we're developing an online reading group curriculum to [ideally] launch in March!"“I got convinced on deciding to focus on [sic] direct work on ai safety by talking to [two other attendees].”“I met Rob Wiblin at lunch and I didn't recognize him."One attendee described the event to a CEA colleague as “the best weekend of their life”.The event wasn’t perfect, of course. Several attendees reported that there was too much content in English that some attendees couldn’t follow, and some of the snacks and beverages seemed low quality. We hope to do even better next time!EAGxIndiaEAGxIndia was organised by community builders based in India with support from CEA. 163 people attended: ~70% of attendees came from within India, ~9% from the UK and ~9% from the US. The rest came from Singapore, Indonesia, the Netherlands, Germany, and Brazil.The sessions were all in English, and included:A talk on South Asian Air Quality by Santosh Harish (Open Philanthropy);An AI governance Q&A with Michael Aird (Rethink Priorities);An update from the Fish Welfare Initiative, who are based in India;A talk on AI safety research agendas and opportunities by Adam Gleave (FAR AI);Two sessions led by Charity Entrepreneurship;A workshop on what the EA community should look like in India, led by Indian community builders and CEA staff;A talk mapping the...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating EAGxLatAm and EAGxIndia, published by OllieBase on January 26, 2023 on The Effective Altruism Forum.In the first two weeks of January, we had the first EA conferences to take place in Latin America and India: EAGxLatAm in Mexico City and EAGxIndia in Jaipur, Rajasthan.Lots of EAGx events happen every year, and we don’t usually post about each one. But I think these events are particularly special as two of the largest EA conferences to take place in LMICs (Low and Middle Income Countries) and the first ever in each region. So, I wanted to share a bit more about how they went.Tl;DR: they went well.EAGxLatinAmericaEAGxLatAm was organised by community builders from the Spanish-speaking EA community with support from CEA. Over 200 people attended the event: ~32% of attendees came from within Mexico, ~13% from the US, ~12% from Colombia, ~7% from Brazil, ~5% from Chile and ~5% from Spain. There were also attendees from several other countries including Argentina, Peru, the Philippines, Canada, and countries in Europe.The event hosted talks in English, Spanish, and Portuguese and attendees could mark on their name badges which languages they spoke. Sessions included:A talk on the high-impact careers service, Carreras con Impacto;A virtual Q&A with Toby Ord to discuss existential risks and the role of LMICs in tackling them;Racionalidad para principiantes (Rationality for beginners) by Laura González Salmerón;A panel on Riesgos Catastróficos Globales (Global Catastrophic Risks);A panel on AI safety with staff from OpenAI, Redwood Research, CSER and Epoch;A talk on Animal Welfare cause prioritisation by Daniela Romero Waldhorn (Rethink Priorities);A talk by JPAL staff on ‘Applying evidence to reduce intimate partner violence’;A talk on local cause prioritisation by Luis Mota (Global Priorities Institute);A panel on EA in LMICs with EA community builders from Brazil, the Spanish-speaking community, the Philippines, Nigeria, South Africa, and EA Anywhere;A talk by Vida Plena (a new organisation incubated by Charity Entrepreneurship focused on mental health in Latin America);Meetups for attendees from Mexico, Chile/Argentina, Colombia/Perú, Brazil and Spain.The event received a likelihood to recommend score of 9.07/10 and attendees made an average of 9.37 new connections. Some quotes from the feedback survey I enjoyed reading:“I had a 1-1 with someone else interested in nuclear security and nuclear winter research, and we're developing an online reading group curriculum to [ideally] launch in March!"“I got convinced on deciding to focus on [sic] direct work on ai safety by talking to [two other attendees].”“I met Rob Wiblin at lunch and I didn't recognize him."One attendee described the event to a CEA colleague as “the best weekend of their life”.The event wasn’t perfect, of course. Several attendees reported that there was too much content in English that some attendees couldn’t follow, and some of the snacks and beverages seemed low quality. We hope to do even better next time!EAGxIndiaEAGxIndia was organised by community builders based in India with support from CEA. 163 people attended: ~70% of attendees came from within India, ~9% from the UK and ~9% from the US. The rest came from Singapore, Indonesia, the Netherlands, Germany, and Brazil.The sessions were all in English, and included:A talk on South Asian Air Quality by Santosh Harish (Open Philanthropy);An AI governance Q&A with Michael Aird (Rethink Priorities);An update from the Fish Welfare Initiative, who are based in India;A talk on AI safety research agendas and opportunities by Adam Gleave (FAR AI);Two sessions led by Charity Entrepreneurship;A workshop on what the EA community should look like in India, led by Indian community builders and CEA staff;A talk mapping the...]]>
OllieBase https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:53 None full 4620
Xr4n2cczyxxYpbrsh_EA EA - Overview of effective giving organisations by SjirH Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overview of effective giving organisations, published by SjirH on January 26, 2023 on The Effective Altruism Forum.I've been keeping an overview of the public effective giving ecosystem that I thought would be worth sharing in its own post (I've previously referred to it here). I've noticed people often aren't aware of many of the initiatives in this space (>50!) and they could be missing out on some great funding opportunity recommendations, collaboration and job opportunities, and other useful connections and information.The list is meant to contain all organisations and projects that aim to identify publicly accessible philanthropic funding opportunities using an effective-altruism-inspired methodology (evaluators), and/or to fundraise for the funding opportunities that have already been identified (fundraisers). As I note on the sheet, inclusion in this list does not imply that the organisation or project's research and recommendations have been vetted for quality: it only implies self-association with the effective giving community.Please let me know if you think any organisation or project is missing! I aim to keep this list updated, and expect it to change quite a bit over the coming year (I know of a few more budding initiatives that may soon be added).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
SjirH https://forum.effectivealtruism.org/posts/Xr4n2cczyxxYpbrsh/overview-of-effective-giving-organisations Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overview of effective giving organisations, published by SjirH on January 26, 2023 on The Effective Altruism Forum.I've been keeping an overview of the public effective giving ecosystem that I thought would be worth sharing in its own post (I've previously referred to it here). I've noticed people often aren't aware of many of the initiatives in this space (>50!) and they could be missing out on some great funding opportunity recommendations, collaboration and job opportunities, and other useful connections and information.The list is meant to contain all organisations and projects that aim to identify publicly accessible philanthropic funding opportunities using an effective-altruism-inspired methodology (evaluators), and/or to fundraise for the funding opportunities that have already been identified (fundraisers). As I note on the sheet, inclusion in this list does not imply that the organisation or project's research and recommendations have been vetted for quality: it only implies self-association with the effective giving community.Please let me know if you think any organisation or project is missing! I aim to keep this list updated, and expect it to change quite a bit over the coming year (I know of a few more budding initiatives that may soon be added).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 26 Jan 2023 12:09:08 +0000 EA - Overview of effective giving organisations by SjirH Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overview of effective giving organisations, published by SjirH on January 26, 2023 on The Effective Altruism Forum.I've been keeping an overview of the public effective giving ecosystem that I thought would be worth sharing in its own post (I've previously referred to it here). I've noticed people often aren't aware of many of the initiatives in this space (>50!) and they could be missing out on some great funding opportunity recommendations, collaboration and job opportunities, and other useful connections and information.The list is meant to contain all organisations and projects that aim to identify publicly accessible philanthropic funding opportunities using an effective-altruism-inspired methodology (evaluators), and/or to fundraise for the funding opportunities that have already been identified (fundraisers). As I note on the sheet, inclusion in this list does not imply that the organisation or project's research and recommendations have been vetted for quality: it only implies self-association with the effective giving community.Please let me know if you think any organisation or project is missing! I aim to keep this list updated, and expect it to change quite a bit over the coming year (I know of a few more budding initiatives that may soon be added).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overview of effective giving organisations, published by SjirH on January 26, 2023 on The Effective Altruism Forum.I've been keeping an overview of the public effective giving ecosystem that I thought would be worth sharing in its own post (I've previously referred to it here). I've noticed people often aren't aware of many of the initiatives in this space (>50!) and they could be missing out on some great funding opportunity recommendations, collaboration and job opportunities, and other useful connections and information.The list is meant to contain all organisations and projects that aim to identify publicly accessible philanthropic funding opportunities using an effective-altruism-inspired methodology (evaluators), and/or to fundraise for the funding opportunities that have already been identified (fundraisers). As I note on the sheet, inclusion in this list does not imply that the organisation or project's research and recommendations have been vetted for quality: it only implies self-association with the effective giving community.Please let me know if you think any organisation or project is missing! I aim to keep this list updated, and expect it to change quite a bit over the coming year (I know of a few more budding initiatives that may soon be added).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
SjirH https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:23 None full 4591
wwkyJFjnGDyM7ddyh_EA EA - Have worries about EA? Want to chat? by Catherine Low Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have worries about EA? Want to chat?, published by Catherine Low on January 26, 2023 on The Effective Altruism Forum.Feeling worried?Hey all. I’m Catherine, and I work on CEA’s Community Health and Special Projects Team. One of my roles is to act as a contact person for the EA community, alongside Julia Wise.I just wanted to let you know that I’m available if you want to share EA-related worries or frustrations with me (they can be specific or very vague). My email is catherine@centreforeffectivealtruism.org, or you can reach the whole Community Health and Special Projects team on our form (which can be filled in anonymously if you wish). Sometimes I’ll have the time to jump on a video call to talk it through if you’d find that helpful (but it might not always be possible). Maybe I’ll be able to help in some way, or maybe not. I also read through all the posts on the Forum relating to worries about the community (and the comments).What I’m worried about right nowPlease feel free to reach out about any manner of worries relating to EA – whether they are related to my current worry or something entirely different.Personally, I’m currently feeling disappointed, worried, frustrated, and a bit angry (which isn’t an emotion I commonly feel) about a few things that I’ve read on the Forum recently. I’m also incredibly privileged in nearly all metrics of privilege one can imagine, so there is a high chance that what I’m feeling doesn’t scratch the surface of the feelings some of you are experiencing.But, I also notice that there are a lot of people in the wider EA community who don’t follow the online EA spaces. They’re just getting on with the job of learning, upskilling, or making the world a better place. I’ve been oscillating wildly between feeling that my worry is a really big signal I should pay attention to or mostly a function of me spending too much time online reading meta-EA things.One thing I tell myself when I get upset, angry or frustrated in an EA space is “This space ≠EA”.If you’re feeling one EA space is bad for you, consider trying another EA space (for me, I sometimes get disillusioned when reading the Forum, but in-person meetings or video calls make me feel SO MUCH better about our community.)You might like to try rereading the texts that first brought you to EA (my personal choice for reinvigorating myself is On Caring - it’s not a happy read but for me it brings things back into focus).And, just a reminder, sometimes the right thing to do is to take a step back for a bit.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Catherine Low https://forum.effectivealtruism.org/posts/wwkyJFjnGDyM7ddyh/have-worries-about-ea-want-to-chat Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have worries about EA? Want to chat?, published by Catherine Low on January 26, 2023 on The Effective Altruism Forum.Feeling worried?Hey all. I’m Catherine, and I work on CEA’s Community Health and Special Projects Team. One of my roles is to act as a contact person for the EA community, alongside Julia Wise.I just wanted to let you know that I’m available if you want to share EA-related worries or frustrations with me (they can be specific or very vague). My email is catherine@centreforeffectivealtruism.org, or you can reach the whole Community Health and Special Projects team on our form (which can be filled in anonymously if you wish). Sometimes I’ll have the time to jump on a video call to talk it through if you’d find that helpful (but it might not always be possible). Maybe I’ll be able to help in some way, or maybe not. I also read through all the posts on the Forum relating to worries about the community (and the comments).What I’m worried about right nowPlease feel free to reach out about any manner of worries relating to EA – whether they are related to my current worry or something entirely different.Personally, I’m currently feeling disappointed, worried, frustrated, and a bit angry (which isn’t an emotion I commonly feel) about a few things that I’ve read on the Forum recently. I’m also incredibly privileged in nearly all metrics of privilege one can imagine, so there is a high chance that what I’m feeling doesn’t scratch the surface of the feelings some of you are experiencing.But, I also notice that there are a lot of people in the wider EA community who don’t follow the online EA spaces. They’re just getting on with the job of learning, upskilling, or making the world a better place. I’ve been oscillating wildly between feeling that my worry is a really big signal I should pay attention to or mostly a function of me spending too much time online reading meta-EA things.One thing I tell myself when I get upset, angry or frustrated in an EA space is “This space ≠EA”.If you’re feeling one EA space is bad for you, consider trying another EA space (for me, I sometimes get disillusioned when reading the Forum, but in-person meetings or video calls make me feel SO MUCH better about our community.)You might like to try rereading the texts that first brought you to EA (my personal choice for reinvigorating myself is On Caring - it’s not a happy read but for me it brings things back into focus).And, just a reminder, sometimes the right thing to do is to take a step back for a bit.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 26 Jan 2023 10:40:38 +0000 EA - Have worries about EA? Want to chat? by Catherine Low Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have worries about EA? Want to chat?, published by Catherine Low on January 26, 2023 on The Effective Altruism Forum.Feeling worried?Hey all. I’m Catherine, and I work on CEA’s Community Health and Special Projects Team. One of my roles is to act as a contact person for the EA community, alongside Julia Wise.I just wanted to let you know that I’m available if you want to share EA-related worries or frustrations with me (they can be specific or very vague). My email is catherine@centreforeffectivealtruism.org, or you can reach the whole Community Health and Special Projects team on our form (which can be filled in anonymously if you wish). Sometimes I’ll have the time to jump on a video call to talk it through if you’d find that helpful (but it might not always be possible). Maybe I’ll be able to help in some way, or maybe not. I also read through all the posts on the Forum relating to worries about the community (and the comments).What I’m worried about right nowPlease feel free to reach out about any manner of worries relating to EA – whether they are related to my current worry or something entirely different.Personally, I’m currently feeling disappointed, worried, frustrated, and a bit angry (which isn’t an emotion I commonly feel) about a few things that I’ve read on the Forum recently. I’m also incredibly privileged in nearly all metrics of privilege one can imagine, so there is a high chance that what I’m feeling doesn’t scratch the surface of the feelings some of you are experiencing.But, I also notice that there are a lot of people in the wider EA community who don’t follow the online EA spaces. They’re just getting on with the job of learning, upskilling, or making the world a better place. I’ve been oscillating wildly between feeling that my worry is a really big signal I should pay attention to or mostly a function of me spending too much time online reading meta-EA things.One thing I tell myself when I get upset, angry or frustrated in an EA space is “This space ≠EA”.If you’re feeling one EA space is bad for you, consider trying another EA space (for me, I sometimes get disillusioned when reading the Forum, but in-person meetings or video calls make me feel SO MUCH better about our community.)You might like to try rereading the texts that first brought you to EA (my personal choice for reinvigorating myself is On Caring - it’s not a happy read but for me it brings things back into focus).And, just a reminder, sometimes the right thing to do is to take a step back for a bit.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have worries about EA? Want to chat?, published by Catherine Low on January 26, 2023 on The Effective Altruism Forum.Feeling worried?Hey all. I’m Catherine, and I work on CEA’s Community Health and Special Projects Team. One of my roles is to act as a contact person for the EA community, alongside Julia Wise.I just wanted to let you know that I’m available if you want to share EA-related worries or frustrations with me (they can be specific or very vague). My email is catherine@centreforeffectivealtruism.org, or you can reach the whole Community Health and Special Projects team on our form (which can be filled in anonymously if you wish). Sometimes I’ll have the time to jump on a video call to talk it through if you’d find that helpful (but it might not always be possible). Maybe I’ll be able to help in some way, or maybe not. I also read through all the posts on the Forum relating to worries about the community (and the comments).What I’m worried about right nowPlease feel free to reach out about any manner of worries relating to EA – whether they are related to my current worry or something entirely different.Personally, I’m currently feeling disappointed, worried, frustrated, and a bit angry (which isn’t an emotion I commonly feel) about a few things that I’ve read on the Forum recently. I’m also incredibly privileged in nearly all metrics of privilege one can imagine, so there is a high chance that what I’m feeling doesn’t scratch the surface of the feelings some of you are experiencing.But, I also notice that there are a lot of people in the wider EA community who don’t follow the online EA spaces. They’re just getting on with the job of learning, upskilling, or making the world a better place. I’ve been oscillating wildly between feeling that my worry is a really big signal I should pay attention to or mostly a function of me spending too much time online reading meta-EA things.One thing I tell myself when I get upset, angry or frustrated in an EA space is “This space ≠EA”.If you’re feeling one EA space is bad for you, consider trying another EA space (for me, I sometimes get disillusioned when reading the Forum, but in-person meetings or video calls make me feel SO MUCH better about our community.)You might like to try rereading the texts that first brought you to EA (my personal choice for reinvigorating myself is On Caring - it’s not a happy read but for me it brings things back into focus).And, just a reminder, sometimes the right thing to do is to take a step back for a bit.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Catherine Low https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:33 None full 4585
wwkyJFjnGDyM7ddyh_NL_EA_EA EA - Have worries about EA? Want to chat? by Catherine Low Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have worries about EA? Want to chat?, published by Catherine Low on January 26, 2023 on The Effective Altruism Forum.Feeling worried?Hey all. I’m Catherine, and I work on CEA’s Community Health and Special Projects Team. One of my roles is to act as a contact person for the EA community, alongside Julia Wise.I just wanted to let you know that I’m available if you want to share EA-related worries or frustrations with me (they can be specific or very vague). My email is catherine@centreforeffectivealtruism.org, or you can reach the whole Community Health and Special Projects team on our form (which can be filled in anonymously if you wish). Sometimes I’ll have the time to jump on a video call to talk it through if you’d find that helpful (but it might not always be possible). Maybe I’ll be able to help in some way, or maybe not. I also read through all the posts on the Forum relating to worries about the community (and the comments).What I’m worried about right nowPlease feel free to reach out about any manner of worries relating to EA – whether they are related to my current worry or something entirely different.Personally, I’m currently feeling disappointed, worried, frustrated, and a bit angry (which isn’t an emotion I commonly feel) about a few things that I’ve read on the Forum recently. I’m also incredibly privileged in nearly all metrics of privilege one can imagine, so there is a high chance that what I’m feeling doesn’t scratch the surface of the feelings some of you are experiencing.But, I also notice that there are a lot of people in the wider EA community who don’t follow the online EA spaces. They’re just getting on with the job of learning, upskilling, or making the world a better place. I’ve been oscillating wildly between feeling that my worry is a really big signal I should pay attention to or mostly a function of me spending too much time online reading meta-EA things.One thing I tell myself when I get upset, angry or frustrated in an EA space is “This space ≠EA”.If you’re feeling one EA space is bad for you, consider trying another EA space (for me, I sometimes get disillusioned when reading the Forum, but in-person meetings or video calls make me feel SO MUCH better about our community.)You might like to try rereading the texts that first brought you to EA (my personal choice for reinvigorating myself is On Caring - it’s not a happy read but for me it brings things back into focus).And, just a reminder, sometimes the right thing to do is to take a step back for a bit.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Catherine Low https://forum.effectivealtruism.org/posts/wwkyJFjnGDyM7ddyh/have-worries-about-ea-want-to-chat Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have worries about EA? Want to chat?, published by Catherine Low on January 26, 2023 on The Effective Altruism Forum.Feeling worried?Hey all. I’m Catherine, and I work on CEA’s Community Health and Special Projects Team. One of my roles is to act as a contact person for the EA community, alongside Julia Wise.I just wanted to let you know that I’m available if you want to share EA-related worries or frustrations with me (they can be specific or very vague). My email is catherine@centreforeffectivealtruism.org, or you can reach the whole Community Health and Special Projects team on our form (which can be filled in anonymously if you wish). Sometimes I’ll have the time to jump on a video call to talk it through if you’d find that helpful (but it might not always be possible). Maybe I’ll be able to help in some way, or maybe not. I also read through all the posts on the Forum relating to worries about the community (and the comments).What I’m worried about right nowPlease feel free to reach out about any manner of worries relating to EA – whether they are related to my current worry or something entirely different.Personally, I’m currently feeling disappointed, worried, frustrated, and a bit angry (which isn’t an emotion I commonly feel) about a few things that I’ve read on the Forum recently. I’m also incredibly privileged in nearly all metrics of privilege one can imagine, so there is a high chance that what I’m feeling doesn’t scratch the surface of the feelings some of you are experiencing.But, I also notice that there are a lot of people in the wider EA community who don’t follow the online EA spaces. They’re just getting on with the job of learning, upskilling, or making the world a better place. I’ve been oscillating wildly between feeling that my worry is a really big signal I should pay attention to or mostly a function of me spending too much time online reading meta-EA things.One thing I tell myself when I get upset, angry or frustrated in an EA space is “This space ≠EA”.If you’re feeling one EA space is bad for you, consider trying another EA space (for me, I sometimes get disillusioned when reading the Forum, but in-person meetings or video calls make me feel SO MUCH better about our community.)You might like to try rereading the texts that first brought you to EA (my personal choice for reinvigorating myself is On Caring - it’s not a happy read but for me it brings things back into focus).And, just a reminder, sometimes the right thing to do is to take a step back for a bit.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 26 Jan 2023 10:40:38 +0000 EA - Have worries about EA? Want to chat? by Catherine Low Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have worries about EA? Want to chat?, published by Catherine Low on January 26, 2023 on The Effective Altruism Forum.Feeling worried?Hey all. I’m Catherine, and I work on CEA’s Community Health and Special Projects Team. One of my roles is to act as a contact person for the EA community, alongside Julia Wise.I just wanted to let you know that I’m available if you want to share EA-related worries or frustrations with me (they can be specific or very vague). My email is catherine@centreforeffectivealtruism.org, or you can reach the whole Community Health and Special Projects team on our form (which can be filled in anonymously if you wish). Sometimes I’ll have the time to jump on a video call to talk it through if you’d find that helpful (but it might not always be possible). Maybe I’ll be able to help in some way, or maybe not. I also read through all the posts on the Forum relating to worries about the community (and the comments).What I’m worried about right nowPlease feel free to reach out about any manner of worries relating to EA – whether they are related to my current worry or something entirely different.Personally, I’m currently feeling disappointed, worried, frustrated, and a bit angry (which isn’t an emotion I commonly feel) about a few things that I’ve read on the Forum recently. I’m also incredibly privileged in nearly all metrics of privilege one can imagine, so there is a high chance that what I’m feeling doesn’t scratch the surface of the feelings some of you are experiencing.But, I also notice that there are a lot of people in the wider EA community who don’t follow the online EA spaces. They’re just getting on with the job of learning, upskilling, or making the world a better place. I’ve been oscillating wildly between feeling that my worry is a really big signal I should pay attention to or mostly a function of me spending too much time online reading meta-EA things.One thing I tell myself when I get upset, angry or frustrated in an EA space is “This space ≠EA”.If you’re feeling one EA space is bad for you, consider trying another EA space (for me, I sometimes get disillusioned when reading the Forum, but in-person meetings or video calls make me feel SO MUCH better about our community.)You might like to try rereading the texts that first brought you to EA (my personal choice for reinvigorating myself is On Caring - it’s not a happy read but for me it brings things back into focus).And, just a reminder, sometimes the right thing to do is to take a step back for a bit.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have worries about EA? Want to chat?, published by Catherine Low on January 26, 2023 on The Effective Altruism Forum.Feeling worried?Hey all. I’m Catherine, and I work on CEA’s Community Health and Special Projects Team. One of my roles is to act as a contact person for the EA community, alongside Julia Wise.I just wanted to let you know that I’m available if you want to share EA-related worries or frustrations with me (they can be specific or very vague). My email is catherine@centreforeffectivealtruism.org, or you can reach the whole Community Health and Special Projects team on our form (which can be filled in anonymously if you wish). Sometimes I’ll have the time to jump on a video call to talk it through if you’d find that helpful (but it might not always be possible). Maybe I’ll be able to help in some way, or maybe not. I also read through all the posts on the Forum relating to worries about the community (and the comments).What I’m worried about right nowPlease feel free to reach out about any manner of worries relating to EA – whether they are related to my current worry or something entirely different.Personally, I’m currently feeling disappointed, worried, frustrated, and a bit angry (which isn’t an emotion I commonly feel) about a few things that I’ve read on the Forum recently. I’m also incredibly privileged in nearly all metrics of privilege one can imagine, so there is a high chance that what I’m feeling doesn’t scratch the surface of the feelings some of you are experiencing.But, I also notice that there are a lot of people in the wider EA community who don’t follow the online EA spaces. They’re just getting on with the job of learning, upskilling, or making the world a better place. I’ve been oscillating wildly between feeling that my worry is a really big signal I should pay attention to or mostly a function of me spending too much time online reading meta-EA things.One thing I tell myself when I get upset, angry or frustrated in an EA space is “This space ≠EA”.If you’re feeling one EA space is bad for you, consider trying another EA space (for me, I sometimes get disillusioned when reading the Forum, but in-person meetings or video calls make me feel SO MUCH better about our community.)You might like to try rereading the texts that first brought you to EA (my personal choice for reinvigorating myself is On Caring - it’s not a happy read but for me it brings things back into focus).And, just a reminder, sometimes the right thing to do is to take a step back for a bit.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Catherine Low https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:33 None full 4624
CcJsh4JcxEqYDaSte_EA EA - Spreading messages to help with the most important century by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Spreading messages to help with the most important century, published by Holden Karnofsky on January 25, 2023 on The Effective Altruism Forum.In the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.In this more recent series, I’ve been trying to help answer this question: “So what? What can I do to help?”So far, I’ve just been trying to build a picture of some of the major risks we might face (especially the risk of misaligned AI that could defeat all of humanity), what might be challenging about these risks, and why we might succeed anyway. Now I’ve finally gotten to the part where I can start laying out tangible ideas for how to help (beyond the pretty lame suggestions I gave before).This piece is about one broad way to help: spreading messages that ought to be more widely understood.One reason I think this topic is worth a whole piece is that practically everyone can help with spreading messages at least some, via things like talking to friends; writing explanations of your own that will appeal to particular people; and, yes, posting to Facebook and Twitter and all of that. Call it slacktivism if you want, but I’d guess it can be a big deal: many extremely important AI-related ideas are understood by vanishingly small numbers of people, and a bit more awareness could snowball. Especially because these topics often feel too “weird” for people to feel comfortable talking about them! Engaging in credible, reasonable ways could contribute to an overall background sense that it’s OK to take these ideas seriously.And then there are a lot of potential readers who might have special opportunities to spread messages. Maybe they are professional communicators (journalists, bloggers, TV writers, novelists, TikTokers, etc.), maybe they’re non-professionals who still have sizable audiences (e.g., on Twitter), maybe they have unusual personal and professional networks, etc. Overall, the more you feel you are good at communicating with some important audience (even a small one), the more this post is for you.That said, I’m not excited about blasting around hyper-simplified messages. As I hope this series has shown, the challenges that could lie ahead of us are complex and daunting, and shouting stuff like “AI is the biggest deal ever!” or “AI development should be illegal!” could do more harm than good (if only by associating important ideas with being annoying). Relatedly, I think it’s generally not good enough to spread the most broad/relatable/easy-to-agree-to version of each key idea, like “AI systems could harm society.” Some of the unintuitive details are crucial.Instead, the gauntlet I’m throwing is: “find ways to help people understand the core parts of the challenges we might face, in as much detail as is feasible.” That is: the goal is to try to help people get to the point where they could maintain a reasonable position in a detailed back-and-forth, not just to get them to repeat a few words or nod along to a high-level take like “AI safety is important.” This is a lot harder than shouting “AI is the biggest deal ever!”, but I think it’s worth it, so I’m encouraging people to rise to the challenge and stretch their communication skills.Below, I will:Outline some general challenges of this sort of message-spreading.Go through some ideas I think it’s risky to spread too far, at least in isolation.Go through some of the ideas I’d be most excited to see spread.Talk a little bit about how to spread ideas - but this is mostly up to you.Challenges of AI-related messagesHere’s a simplified story for h...]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/CcJsh4JcxEqYDaSte/spreading-messages-to-help-with-the-most-important-century Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Spreading messages to help with the most important century, published by Holden Karnofsky on January 25, 2023 on The Effective Altruism Forum.In the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.In this more recent series, I’ve been trying to help answer this question: “So what? What can I do to help?”So far, I’ve just been trying to build a picture of some of the major risks we might face (especially the risk of misaligned AI that could defeat all of humanity), what might be challenging about these risks, and why we might succeed anyway. Now I’ve finally gotten to the part where I can start laying out tangible ideas for how to help (beyond the pretty lame suggestions I gave before).This piece is about one broad way to help: spreading messages that ought to be more widely understood.One reason I think this topic is worth a whole piece is that practically everyone can help with spreading messages at least some, via things like talking to friends; writing explanations of your own that will appeal to particular people; and, yes, posting to Facebook and Twitter and all of that. Call it slacktivism if you want, but I’d guess it can be a big deal: many extremely important AI-related ideas are understood by vanishingly small numbers of people, and a bit more awareness could snowball. Especially because these topics often feel too “weird” for people to feel comfortable talking about them! Engaging in credible, reasonable ways could contribute to an overall background sense that it’s OK to take these ideas seriously.And then there are a lot of potential readers who might have special opportunities to spread messages. Maybe they are professional communicators (journalists, bloggers, TV writers, novelists, TikTokers, etc.), maybe they’re non-professionals who still have sizable audiences (e.g., on Twitter), maybe they have unusual personal and professional networks, etc. Overall, the more you feel you are good at communicating with some important audience (even a small one), the more this post is for you.That said, I’m not excited about blasting around hyper-simplified messages. As I hope this series has shown, the challenges that could lie ahead of us are complex and daunting, and shouting stuff like “AI is the biggest deal ever!” or “AI development should be illegal!” could do more harm than good (if only by associating important ideas with being annoying). Relatedly, I think it’s generally not good enough to spread the most broad/relatable/easy-to-agree-to version of each key idea, like “AI systems could harm society.” Some of the unintuitive details are crucial.Instead, the gauntlet I’m throwing is: “find ways to help people understand the core parts of the challenges we might face, in as much detail as is feasible.” That is: the goal is to try to help people get to the point where they could maintain a reasonable position in a detailed back-and-forth, not just to get them to repeat a few words or nod along to a high-level take like “AI safety is important.” This is a lot harder than shouting “AI is the biggest deal ever!”, but I think it’s worth it, so I’m encouraging people to rise to the challenge and stretch their communication skills.Below, I will:Outline some general challenges of this sort of message-spreading.Go through some ideas I think it’s risky to spread too far, at least in isolation.Go through some of the ideas I’d be most excited to see spread.Talk a little bit about how to spread ideas - but this is mostly up to you.Challenges of AI-related messagesHere’s a simplified story for h...]]>
Wed, 25 Jan 2023 21:45:37 +0000 EA - Spreading messages to help with the most important century by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Spreading messages to help with the most important century, published by Holden Karnofsky on January 25, 2023 on The Effective Altruism Forum.In the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.In this more recent series, I’ve been trying to help answer this question: “So what? What can I do to help?”So far, I’ve just been trying to build a picture of some of the major risks we might face (especially the risk of misaligned AI that could defeat all of humanity), what might be challenging about these risks, and why we might succeed anyway. Now I’ve finally gotten to the part where I can start laying out tangible ideas for how to help (beyond the pretty lame suggestions I gave before).This piece is about one broad way to help: spreading messages that ought to be more widely understood.One reason I think this topic is worth a whole piece is that practically everyone can help with spreading messages at least some, via things like talking to friends; writing explanations of your own that will appeal to particular people; and, yes, posting to Facebook and Twitter and all of that. Call it slacktivism if you want, but I’d guess it can be a big deal: many extremely important AI-related ideas are understood by vanishingly small numbers of people, and a bit more awareness could snowball. Especially because these topics often feel too “weird” for people to feel comfortable talking about them! Engaging in credible, reasonable ways could contribute to an overall background sense that it’s OK to take these ideas seriously.And then there are a lot of potential readers who might have special opportunities to spread messages. Maybe they are professional communicators (journalists, bloggers, TV writers, novelists, TikTokers, etc.), maybe they’re non-professionals who still have sizable audiences (e.g., on Twitter), maybe they have unusual personal and professional networks, etc. Overall, the more you feel you are good at communicating with some important audience (even a small one), the more this post is for you.That said, I’m not excited about blasting around hyper-simplified messages. As I hope this series has shown, the challenges that could lie ahead of us are complex and daunting, and shouting stuff like “AI is the biggest deal ever!” or “AI development should be illegal!” could do more harm than good (if only by associating important ideas with being annoying). Relatedly, I think it’s generally not good enough to spread the most broad/relatable/easy-to-agree-to version of each key idea, like “AI systems could harm society.” Some of the unintuitive details are crucial.Instead, the gauntlet I’m throwing is: “find ways to help people understand the core parts of the challenges we might face, in as much detail as is feasible.” That is: the goal is to try to help people get to the point where they could maintain a reasonable position in a detailed back-and-forth, not just to get them to repeat a few words or nod along to a high-level take like “AI safety is important.” This is a lot harder than shouting “AI is the biggest deal ever!”, but I think it’s worth it, so I’m encouraging people to rise to the challenge and stretch their communication skills.Below, I will:Outline some general challenges of this sort of message-spreading.Go through some ideas I think it’s risky to spread too far, at least in isolation.Go through some of the ideas I’d be most excited to see spread.Talk a little bit about how to spread ideas - but this is mostly up to you.Challenges of AI-related messagesHere’s a simplified story for h...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Spreading messages to help with the most important century, published by Holden Karnofsky on January 25, 2023 on The Effective Altruism Forum.In the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.In this more recent series, I’ve been trying to help answer this question: “So what? What can I do to help?”So far, I’ve just been trying to build a picture of some of the major risks we might face (especially the risk of misaligned AI that could defeat all of humanity), what might be challenging about these risks, and why we might succeed anyway. Now I’ve finally gotten to the part where I can start laying out tangible ideas for how to help (beyond the pretty lame suggestions I gave before).This piece is about one broad way to help: spreading messages that ought to be more widely understood.One reason I think this topic is worth a whole piece is that practically everyone can help with spreading messages at least some, via things like talking to friends; writing explanations of your own that will appeal to particular people; and, yes, posting to Facebook and Twitter and all of that. Call it slacktivism if you want, but I’d guess it can be a big deal: many extremely important AI-related ideas are understood by vanishingly small numbers of people, and a bit more awareness could snowball. Especially because these topics often feel too “weird” for people to feel comfortable talking about them! Engaging in credible, reasonable ways could contribute to an overall background sense that it’s OK to take these ideas seriously.And then there are a lot of potential readers who might have special opportunities to spread messages. Maybe they are professional communicators (journalists, bloggers, TV writers, novelists, TikTokers, etc.), maybe they’re non-professionals who still have sizable audiences (e.g., on Twitter), maybe they have unusual personal and professional networks, etc. Overall, the more you feel you are good at communicating with some important audience (even a small one), the more this post is for you.That said, I’m not excited about blasting around hyper-simplified messages. As I hope this series has shown, the challenges that could lie ahead of us are complex and daunting, and shouting stuff like “AI is the biggest deal ever!” or “AI development should be illegal!” could do more harm than good (if only by associating important ideas with being annoying). Relatedly, I think it’s generally not good enough to spread the most broad/relatable/easy-to-agree-to version of each key idea, like “AI systems could harm society.” Some of the unintuitive details are crucial.Instead, the gauntlet I’m throwing is: “find ways to help people understand the core parts of the challenges we might face, in as much detail as is feasible.” That is: the goal is to try to help people get to the point where they could maintain a reasonable position in a detailed back-and-forth, not just to get them to repeat a few words or nod along to a high-level take like “AI safety is important.” This is a lot harder than shouting “AI is the biggest deal ever!”, but I think it’s worth it, so I’m encouraging people to rise to the challenge and stretch their communication skills.Below, I will:Outline some general challenges of this sort of message-spreading.Go through some ideas I think it’s risky to spread too far, at least in isolation.Go through some of the ideas I’d be most excited to see spread.Talk a little bit about how to spread ideas - but this is mostly up to you.Challenges of AI-related messagesHere’s a simplified story for h...]]>
Holden Karnofsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 27:27 None full 4575
cbRACrtsNXQxf4jL9_EA EA - Anti Entropy: Supporting ops professionals in EA by redbermejo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anti Entropy: Supporting ops professionals in EA, published by redbermejo on January 25, 2023 on The Effective Altruism Forum.[‘Operations’ (in the effective altruism community) refers to everything that supports and facilitates direct work, from HR to finance to office management to recruiting]One of the key challenges facing the EA community is the lack of much-needed operational support - which drove us to form Anti Entropy. We believe that by providing guidance, tools, and resources to equip and empower EAs for effective operations management, we can help EA organizations to survive and thrive.In this post, we share our thoughts on the challenge for EA operations, what the EA community can do about this, and what Anti Entropy offers.Why is ops support lacking?We set up Anti Entropy to address a general lack of operational support in the community. What causes this situation?First, there are relatively few experienced ops professionals in the community. Second, good resources are limited or inadequately vetted. And third, EA founders and organizational leaders don’t always take advantage of existing operational support. For example, a newly-formed organization may not be aware that they can pay a hiring agency to help them with hiring; or they may not have the budget to use one even if they are aware of them because they didn’t anticipate this expense when they applied for funding.This is exacerbated by a culture of frugality and self-sufficiency in EA. Grantees feel like they need to spend their funding responsibly, which is generally good and admirable, but often they are not given clear guidance on what ‘responsible’ means and what the funders would or would not consider a ‘responsible,’ reasonable use. So they err on the side of caution and inaction.This leads to several problems:Organizations fail unnecessarilyA lack of operational support can cause promising organizations to shut down prematurely. This is often due to practical issues rather than a lack of impact or bad ideas. This is particularly concerning for new or rapidly-growing organizations, which may struggle to keep up with the demands of fast growth. A lack of ops support can lead to bottlenecks, internal cultural issues, a loss of efficiency, and an unnecessary loss of impact.Stress and burnoutEven among organizations that succeed, employees are more stressed and prone to burnout. Ops people have too much responsibility and not enough support. In some cases, employees who are not specifically trained in operations may be required to take on operational tasks in addition to their normal work, leading to overwork and burnout.While the need for operational support in the EA community has been recognized in the past, this has sometimes led to people taking on operational roles without the necessary training or support. Luckily, this has become less common, but there is still room for improvement.MisinformationPeople sometimes think, ‘I don’t need to hire an ops person to deal with taxes/write my grievance policy/deal with the bureaucracy of setting up a nonprofit - I’ll just look up how to do those things myself’. Unfortunately, this can lead to people relying on inaccurate or inappropriate information, potentially leading to legal issues.For example, a UK organization can’t use a US vacation policy since the laws are different. Similarly, some topics related to organizations’ financial status are very poorly understood - for example, what does it mean to ‘set up an entity’, ‘get fiscally sponsored’, or hire people ‘as an independent contractor’? When organizational leaders don’t understand these things, it can lead to them unwittingly breaking the law.What can I do?If you’re currently working in operations for an EA organizationCheck the Anti Entropy website for resources to help ...]]>
redbermejo https://forum.effectivealtruism.org/posts/cbRACrtsNXQxf4jL9/anti-entropy-supporting-ops-professionals-in-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anti Entropy: Supporting ops professionals in EA, published by redbermejo on January 25, 2023 on The Effective Altruism Forum.[‘Operations’ (in the effective altruism community) refers to everything that supports and facilitates direct work, from HR to finance to office management to recruiting]One of the key challenges facing the EA community is the lack of much-needed operational support - which drove us to form Anti Entropy. We believe that by providing guidance, tools, and resources to equip and empower EAs for effective operations management, we can help EA organizations to survive and thrive.In this post, we share our thoughts on the challenge for EA operations, what the EA community can do about this, and what Anti Entropy offers.Why is ops support lacking?We set up Anti Entropy to address a general lack of operational support in the community. What causes this situation?First, there are relatively few experienced ops professionals in the community. Second, good resources are limited or inadequately vetted. And third, EA founders and organizational leaders don’t always take advantage of existing operational support. For example, a newly-formed organization may not be aware that they can pay a hiring agency to help them with hiring; or they may not have the budget to use one even if they are aware of them because they didn’t anticipate this expense when they applied for funding.This is exacerbated by a culture of frugality and self-sufficiency in EA. Grantees feel like they need to spend their funding responsibly, which is generally good and admirable, but often they are not given clear guidance on what ‘responsible’ means and what the funders would or would not consider a ‘responsible,’ reasonable use. So they err on the side of caution and inaction.This leads to several problems:Organizations fail unnecessarilyA lack of operational support can cause promising organizations to shut down prematurely. This is often due to practical issues rather than a lack of impact or bad ideas. This is particularly concerning for new or rapidly-growing organizations, which may struggle to keep up with the demands of fast growth. A lack of ops support can lead to bottlenecks, internal cultural issues, a loss of efficiency, and an unnecessary loss of impact.Stress and burnoutEven among organizations that succeed, employees are more stressed and prone to burnout. Ops people have too much responsibility and not enough support. In some cases, employees who are not specifically trained in operations may be required to take on operational tasks in addition to their normal work, leading to overwork and burnout.While the need for operational support in the EA community has been recognized in the past, this has sometimes led to people taking on operational roles without the necessary training or support. Luckily, this has become less common, but there is still room for improvement.MisinformationPeople sometimes think, ‘I don’t need to hire an ops person to deal with taxes/write my grievance policy/deal with the bureaucracy of setting up a nonprofit - I’ll just look up how to do those things myself’. Unfortunately, this can lead to people relying on inaccurate or inappropriate information, potentially leading to legal issues.For example, a UK organization can’t use a US vacation policy since the laws are different. Similarly, some topics related to organizations’ financial status are very poorly understood - for example, what does it mean to ‘set up an entity’, ‘get fiscally sponsored’, or hire people ‘as an independent contractor’? When organizational leaders don’t understand these things, it can lead to them unwittingly breaking the law.What can I do?If you’re currently working in operations for an EA organizationCheck the Anti Entropy website for resources to help ...]]>
Wed, 25 Jan 2023 19:40:46 +0000 EA - Anti Entropy: Supporting ops professionals in EA by redbermejo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anti Entropy: Supporting ops professionals in EA, published by redbermejo on January 25, 2023 on The Effective Altruism Forum.[‘Operations’ (in the effective altruism community) refers to everything that supports and facilitates direct work, from HR to finance to office management to recruiting]One of the key challenges facing the EA community is the lack of much-needed operational support - which drove us to form Anti Entropy. We believe that by providing guidance, tools, and resources to equip and empower EAs for effective operations management, we can help EA organizations to survive and thrive.In this post, we share our thoughts on the challenge for EA operations, what the EA community can do about this, and what Anti Entropy offers.Why is ops support lacking?We set up Anti Entropy to address a general lack of operational support in the community. What causes this situation?First, there are relatively few experienced ops professionals in the community. Second, good resources are limited or inadequately vetted. And third, EA founders and organizational leaders don’t always take advantage of existing operational support. For example, a newly-formed organization may not be aware that they can pay a hiring agency to help them with hiring; or they may not have the budget to use one even if they are aware of them because they didn’t anticipate this expense when they applied for funding.This is exacerbated by a culture of frugality and self-sufficiency in EA. Grantees feel like they need to spend their funding responsibly, which is generally good and admirable, but often they are not given clear guidance on what ‘responsible’ means and what the funders would or would not consider a ‘responsible,’ reasonable use. So they err on the side of caution and inaction.This leads to several problems:Organizations fail unnecessarilyA lack of operational support can cause promising organizations to shut down prematurely. This is often due to practical issues rather than a lack of impact or bad ideas. This is particularly concerning for new or rapidly-growing organizations, which may struggle to keep up with the demands of fast growth. A lack of ops support can lead to bottlenecks, internal cultural issues, a loss of efficiency, and an unnecessary loss of impact.Stress and burnoutEven among organizations that succeed, employees are more stressed and prone to burnout. Ops people have too much responsibility and not enough support. In some cases, employees who are not specifically trained in operations may be required to take on operational tasks in addition to their normal work, leading to overwork and burnout.While the need for operational support in the EA community has been recognized in the past, this has sometimes led to people taking on operational roles without the necessary training or support. Luckily, this has become less common, but there is still room for improvement.MisinformationPeople sometimes think, ‘I don’t need to hire an ops person to deal with taxes/write my grievance policy/deal with the bureaucracy of setting up a nonprofit - I’ll just look up how to do those things myself’. Unfortunately, this can lead to people relying on inaccurate or inappropriate information, potentially leading to legal issues.For example, a UK organization can’t use a US vacation policy since the laws are different. Similarly, some topics related to organizations’ financial status are very poorly understood - for example, what does it mean to ‘set up an entity’, ‘get fiscally sponsored’, or hire people ‘as an independent contractor’? When organizational leaders don’t understand these things, it can lead to them unwittingly breaking the law.What can I do?If you’re currently working in operations for an EA organizationCheck the Anti Entropy website for resources to help ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anti Entropy: Supporting ops professionals in EA, published by redbermejo on January 25, 2023 on The Effective Altruism Forum.[‘Operations’ (in the effective altruism community) refers to everything that supports and facilitates direct work, from HR to finance to office management to recruiting]One of the key challenges facing the EA community is the lack of much-needed operational support - which drove us to form Anti Entropy. We believe that by providing guidance, tools, and resources to equip and empower EAs for effective operations management, we can help EA organizations to survive and thrive.In this post, we share our thoughts on the challenge for EA operations, what the EA community can do about this, and what Anti Entropy offers.Why is ops support lacking?We set up Anti Entropy to address a general lack of operational support in the community. What causes this situation?First, there are relatively few experienced ops professionals in the community. Second, good resources are limited or inadequately vetted. And third, EA founders and organizational leaders don’t always take advantage of existing operational support. For example, a newly-formed organization may not be aware that they can pay a hiring agency to help them with hiring; or they may not have the budget to use one even if they are aware of them because they didn’t anticipate this expense when they applied for funding.This is exacerbated by a culture of frugality and self-sufficiency in EA. Grantees feel like they need to spend their funding responsibly, which is generally good and admirable, but often they are not given clear guidance on what ‘responsible’ means and what the funders would or would not consider a ‘responsible,’ reasonable use. So they err on the side of caution and inaction.This leads to several problems:Organizations fail unnecessarilyA lack of operational support can cause promising organizations to shut down prematurely. This is often due to practical issues rather than a lack of impact or bad ideas. This is particularly concerning for new or rapidly-growing organizations, which may struggle to keep up with the demands of fast growth. A lack of ops support can lead to bottlenecks, internal cultural issues, a loss of efficiency, and an unnecessary loss of impact.Stress and burnoutEven among organizations that succeed, employees are more stressed and prone to burnout. Ops people have too much responsibility and not enough support. In some cases, employees who are not specifically trained in operations may be required to take on operational tasks in addition to their normal work, leading to overwork and burnout.While the need for operational support in the EA community has been recognized in the past, this has sometimes led to people taking on operational roles without the necessary training or support. Luckily, this has become less common, but there is still room for improvement.MisinformationPeople sometimes think, ‘I don’t need to hire an ops person to deal with taxes/write my grievance policy/deal with the bureaucracy of setting up a nonprofit - I’ll just look up how to do those things myself’. Unfortunately, this can lead to people relying on inaccurate or inappropriate information, potentially leading to legal issues.For example, a UK organization can’t use a US vacation policy since the laws are different. Similarly, some topics related to organizations’ financial status are very poorly understood - for example, what does it mean to ‘set up an entity’, ‘get fiscally sponsored’, or hire people ‘as an independent contractor’? When organizational leaders don’t understand these things, it can lead to them unwittingly breaking the law.What can I do?If you’re currently working in operations for an EA organizationCheck the Anti Entropy website for resources to help ...]]>
redbermejo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:23 None full 4577
uitxF9dczuqpkwszJ_EA EA - When Did EA Start? by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: When Did EA Start?, published by Jeff Kaufman on January 25, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/uitxF9dczuqpkwszJ/when-did-ea-start Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: When Did EA Start?, published by Jeff Kaufman on January 25, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 25 Jan 2023 17:14:04 +0000 EA - When Did EA Start? by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: When Did EA Start?, published by Jeff Kaufman on January 25, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: When Did EA Start?, published by Jeff Kaufman on January 25, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:24 None full 4576
fNuuzCLGr6BdiWH25_EA EA - Doing EA Better: grant-makers should consider grant app peer review along the public-sector model by ben.smith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Doing EA Better: grant-makers should consider grant app peer review along the public-sector model, published by ben.smith on January 24, 2023 on The Effective Altruism Forum.Epistemic status: speculativeThis is a response to recent posts including Doing EA Better and The EA community does not own its donors' money. In order to make better funding decisions, some EAs have called for democratizing EA's funding systems. This can be problematic, because others have raised questions such as (paraphrasing heavily) such as: "how do we decide who gets a vote?" and "would funders still give if they were forced to follow community preferences"? The same EAs have argued that EA decision-making is “highly centralised, opaque, and unaccountable”, and said that to improve our impact on the world, the effective altruism movement should be more decentralized and there should be greater transparency amongst EA institutions.To meet both of the families of concerns expressed over the last week, I propose a grant-assessment system that improves transparency, decentralizes decision-making, and could better inform grant allocation by drawing information from a wider section of the community whilst maintaining funders' prerogatives to select the areas they wish to donate to. The proposal is to adopt a peer-review process used by the grant-making system run by public bodies in the United States, such as the National Institutes of Health and the National Science Foundation.In this model, the funder’s program manager makes decisions about grant awards based on reviews and numerical scores allocated by peer reviewers coordinating in expert panels to evaluate grant applications. This would be a positive-sum change that benefits both funders and the community: the community has more input into the grant-making process, and funders benefit from expertise in the community to better achieve their objectives.In the rest of this post, I will describe the National Institutes of Health grant evaluation process, describe why I think now is the right time for the effective altruism movement to consider peer review as part of a more mature grant evaluation process, give some notes on implementation in EA specifically, and describe how this approach can both maintain funders’ prerogative to spend their own money as they wish, while giving the community a greater level of decision-making.The grant peer review process at the NIH and NSFNational Institutes of HealthThe National Institutes of Health (NIH) uses a peer review process to evaluate grant applications. This process involves the formation of ‘study sections’, which are groups of experts in the relevant field who review and evaluate grant applications.When an application is received, it is assigned to a study section based on its scientific area of focus. Each study section is composed of scientists, physicians, and other experts who have experience in the field related to the research proposed in the application. These would be drawn from the scientific community at large. Study section members are typically compensated for participation, but participation isn’t a full time job–it’s generally a small additional duty researchers can choose to take on, as part of their broader set of research activities.The study section members more-or-less independently review the applications and provide written critiques that are used to evaluate the strengths and weaknesses of each application. The study section then meets to discuss the applications, and each member provides a priority score and written summary of the application. These scores and summaries are used to determine which applications will be funded.In summary, the NIH uses study sections composed of experts in the relevant field to review and evaluate grant applications through a peer rev...]]>
ben.smith https://forum.effectivealtruism.org/posts/fNuuzCLGr6BdiWH25/doing-ea-better-grant-makers-should-consider-grant-app-peer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Doing EA Better: grant-makers should consider grant app peer review along the public-sector model, published by ben.smith on January 24, 2023 on The Effective Altruism Forum.Epistemic status: speculativeThis is a response to recent posts including Doing EA Better and The EA community does not own its donors' money. In order to make better funding decisions, some EAs have called for democratizing EA's funding systems. This can be problematic, because others have raised questions such as (paraphrasing heavily) such as: "how do we decide who gets a vote?" and "would funders still give if they were forced to follow community preferences"? The same EAs have argued that EA decision-making is “highly centralised, opaque, and unaccountable”, and said that to improve our impact on the world, the effective altruism movement should be more decentralized and there should be greater transparency amongst EA institutions.To meet both of the families of concerns expressed over the last week, I propose a grant-assessment system that improves transparency, decentralizes decision-making, and could better inform grant allocation by drawing information from a wider section of the community whilst maintaining funders' prerogatives to select the areas they wish to donate to. The proposal is to adopt a peer-review process used by the grant-making system run by public bodies in the United States, such as the National Institutes of Health and the National Science Foundation.In this model, the funder’s program manager makes decisions about grant awards based on reviews and numerical scores allocated by peer reviewers coordinating in expert panels to evaluate grant applications. This would be a positive-sum change that benefits both funders and the community: the community has more input into the grant-making process, and funders benefit from expertise in the community to better achieve their objectives.In the rest of this post, I will describe the National Institutes of Health grant evaluation process, describe why I think now is the right time for the effective altruism movement to consider peer review as part of a more mature grant evaluation process, give some notes on implementation in EA specifically, and describe how this approach can both maintain funders’ prerogative to spend their own money as they wish, while giving the community a greater level of decision-making.The grant peer review process at the NIH and NSFNational Institutes of HealthThe National Institutes of Health (NIH) uses a peer review process to evaluate grant applications. This process involves the formation of ‘study sections’, which are groups of experts in the relevant field who review and evaluate grant applications.When an application is received, it is assigned to a study section based on its scientific area of focus. Each study section is composed of scientists, physicians, and other experts who have experience in the field related to the research proposed in the application. These would be drawn from the scientific community at large. Study section members are typically compensated for participation, but participation isn’t a full time job–it’s generally a small additional duty researchers can choose to take on, as part of their broader set of research activities.The study section members more-or-less independently review the applications and provide written critiques that are used to evaluate the strengths and weaknesses of each application. The study section then meets to discuss the applications, and each member provides a priority score and written summary of the application. These scores and summaries are used to determine which applications will be funded.In summary, the NIH uses study sections composed of experts in the relevant field to review and evaluate grant applications through a peer rev...]]>
Wed, 25 Jan 2023 12:14:51 +0000 EA - Doing EA Better: grant-makers should consider grant app peer review along the public-sector model by ben.smith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Doing EA Better: grant-makers should consider grant app peer review along the public-sector model, published by ben.smith on January 24, 2023 on The Effective Altruism Forum.Epistemic status: speculativeThis is a response to recent posts including Doing EA Better and The EA community does not own its donors' money. In order to make better funding decisions, some EAs have called for democratizing EA's funding systems. This can be problematic, because others have raised questions such as (paraphrasing heavily) such as: "how do we decide who gets a vote?" and "would funders still give if they were forced to follow community preferences"? The same EAs have argued that EA decision-making is “highly centralised, opaque, and unaccountable”, and said that to improve our impact on the world, the effective altruism movement should be more decentralized and there should be greater transparency amongst EA institutions.To meet both of the families of concerns expressed over the last week, I propose a grant-assessment system that improves transparency, decentralizes decision-making, and could better inform grant allocation by drawing information from a wider section of the community whilst maintaining funders' prerogatives to select the areas they wish to donate to. The proposal is to adopt a peer-review process used by the grant-making system run by public bodies in the United States, such as the National Institutes of Health and the National Science Foundation.In this model, the funder’s program manager makes decisions about grant awards based on reviews and numerical scores allocated by peer reviewers coordinating in expert panels to evaluate grant applications. This would be a positive-sum change that benefits both funders and the community: the community has more input into the grant-making process, and funders benefit from expertise in the community to better achieve their objectives.In the rest of this post, I will describe the National Institutes of Health grant evaluation process, describe why I think now is the right time for the effective altruism movement to consider peer review as part of a more mature grant evaluation process, give some notes on implementation in EA specifically, and describe how this approach can both maintain funders’ prerogative to spend their own money as they wish, while giving the community a greater level of decision-making.The grant peer review process at the NIH and NSFNational Institutes of HealthThe National Institutes of Health (NIH) uses a peer review process to evaluate grant applications. This process involves the formation of ‘study sections’, which are groups of experts in the relevant field who review and evaluate grant applications.When an application is received, it is assigned to a study section based on its scientific area of focus. Each study section is composed of scientists, physicians, and other experts who have experience in the field related to the research proposed in the application. These would be drawn from the scientific community at large. Study section members are typically compensated for participation, but participation isn’t a full time job–it’s generally a small additional duty researchers can choose to take on, as part of their broader set of research activities.The study section members more-or-less independently review the applications and provide written critiques that are used to evaluate the strengths and weaknesses of each application. The study section then meets to discuss the applications, and each member provides a priority score and written summary of the application. These scores and summaries are used to determine which applications will be funded.In summary, the NIH uses study sections composed of experts in the relevant field to review and evaluate grant applications through a peer rev...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Doing EA Better: grant-makers should consider grant app peer review along the public-sector model, published by ben.smith on January 24, 2023 on The Effective Altruism Forum.Epistemic status: speculativeThis is a response to recent posts including Doing EA Better and The EA community does not own its donors' money. In order to make better funding decisions, some EAs have called for democratizing EA's funding systems. This can be problematic, because others have raised questions such as (paraphrasing heavily) such as: "how do we decide who gets a vote?" and "would funders still give if they were forced to follow community preferences"? The same EAs have argued that EA decision-making is “highly centralised, opaque, and unaccountable”, and said that to improve our impact on the world, the effective altruism movement should be more decentralized and there should be greater transparency amongst EA institutions.To meet both of the families of concerns expressed over the last week, I propose a grant-assessment system that improves transparency, decentralizes decision-making, and could better inform grant allocation by drawing information from a wider section of the community whilst maintaining funders' prerogatives to select the areas they wish to donate to. The proposal is to adopt a peer-review process used by the grant-making system run by public bodies in the United States, such as the National Institutes of Health and the National Science Foundation.In this model, the funder’s program manager makes decisions about grant awards based on reviews and numerical scores allocated by peer reviewers coordinating in expert panels to evaluate grant applications. This would be a positive-sum change that benefits both funders and the community: the community has more input into the grant-making process, and funders benefit from expertise in the community to better achieve their objectives.In the rest of this post, I will describe the National Institutes of Health grant evaluation process, describe why I think now is the right time for the effective altruism movement to consider peer review as part of a more mature grant evaluation process, give some notes on implementation in EA specifically, and describe how this approach can both maintain funders’ prerogative to spend their own money as they wish, while giving the community a greater level of decision-making.The grant peer review process at the NIH and NSFNational Institutes of HealthThe National Institutes of Health (NIH) uses a peer review process to evaluate grant applications. This process involves the formation of ‘study sections’, which are groups of experts in the relevant field who review and evaluate grant applications.When an application is received, it is assigned to a study section based on its scientific area of focus. Each study section is composed of scientists, physicians, and other experts who have experience in the field related to the research proposed in the application. These would be drawn from the scientific community at large. Study section members are typically compensated for participation, but participation isn’t a full time job–it’s generally a small additional duty researchers can choose to take on, as part of their broader set of research activities.The study section members more-or-less independently review the applications and provide written critiques that are used to evaluate the strengths and weaknesses of each application. The study section then meets to discuss the applications, and each member provides a priority score and written summary of the application. These scores and summaries are used to determine which applications will be funded.In summary, the NIH uses study sections composed of experts in the relevant field to review and evaluate grant applications through a peer rev...]]>
ben.smith https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:16 None full 4578
bfJPcHqDXb5yp2zXo_EA EA - Open Philanthropy Shallow Investigation: Tobacco Control by Open Philanthropy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy Shallow Investigation: Tobacco Control, published by Open Philanthropy on January 25, 2023 on The Effective Altruism Forum.1. PreambleThis document is a shallow investigation, as described here. As we noted in the civil conflict investigation and telecommunications in LMICs investigation we shared earlier this year, we have not shared many shallow investigations in the last few years but are moving towards sharing more of our early stage work.This investigation was written by Helen Kissel, a PhD candidate in economics at Stanford who worked at Open Philanthropy for 10 weeks in summer 2022 as one of five interns on the Global Health and Wellbeing Cause Prioritization team. We’ve also included the peer foreword, written by Strategy Fellow Chris Smith. The peer foreword, which is a standard part of our research process, is an initial response to a piece of research work, written by a team member who is not the primary author or their manager.A slightly earlier draft of this work has been read and discussed by the cause prioritization team. At this point, we plan to learn more about this topic by engaging with philanthropists who are already working on tobacco, extending the depth of this research (particularly on e-cigarettes), and digging deeper into countries which have seen big declines in their smoking burden (e.g. Brazil).2. Peer forewordWritten by Chris SmithIt was in 1964 that the US Surgeon General published a report which linked smoking cigarettes with lung cancer, building on research going back more than a decade. The report told readers that smokers had a 9-10x relative risk of developing lung cancer; that smoking was the primary cause of chronic bronchitis; that pregnant people who smoked were more likely to have underweight newborns, and that smoking was also linked to emphysema and heart disease.In this shallow, Helen reports that nearly sixty years later, there are ~1.3B tobacco users, and that smoking combustible tobacco remains an extraordinary contributor to the global burden of disease, responsible for some 8 million deaths (including secondhand smoke) and ~230M normative disability-adjusted life years (DALYs) (~173M OP descriptive DALYs) making it a bigger contributor to health damages in our terms than HIV/AIDS plus tuberculosis plus malaria. Moreover, the forward-looking projections are for only modest declines in the total burden as population increases offset a decline in smoking rates. As our framework puts it, we’ve got an important problem.Helen walks through the conventional orthodoxy on tobacco control at a population level (higher taxes, marketing restrictions, warning labels) and on smoking cessation support (nicotine replacement therapy, pharmaceutical support). She estimates that a campaign for a cigarette tax which increased the retail price of cigarettes in Indonesia by 10% (a large country with a high attributable disease burden) would reduce tobacco consumption (and attributable DALYs) by 5%, having an expected social return on investment (SROI) of ~3,300x, assuming a 3-year speedup, 10% success rate, and $3M campaign cost. Taxes are considered the single most effective policy measure, but going down the ladder to a moderate advertising ban, the subsequent expected 1% reduction in tobacco consumption and associated DALYs would have an SROI of ~500x. As with any of our shallow back-of-the-envelope-calculations (BOTECs), there is room to debate both the structure and the parameter choices.But I think that this is — when combined with the other material — a strong indicator that there could be relatively mainstream tobacco advocacy work which is above our bar in expectation. This suggests a somewhat tractable problem.Ok, but isn’t this addressed? Don’t people already know that cigarettes are bad for you? We...]]>
Open Philanthropy https://forum.effectivealtruism.org/posts/bfJPcHqDXb5yp2zXo/open-philanthropy-shallow-investigation-tobacco-control Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy Shallow Investigation: Tobacco Control, published by Open Philanthropy on January 25, 2023 on The Effective Altruism Forum.1. PreambleThis document is a shallow investigation, as described here. As we noted in the civil conflict investigation and telecommunications in LMICs investigation we shared earlier this year, we have not shared many shallow investigations in the last few years but are moving towards sharing more of our early stage work.This investigation was written by Helen Kissel, a PhD candidate in economics at Stanford who worked at Open Philanthropy for 10 weeks in summer 2022 as one of five interns on the Global Health and Wellbeing Cause Prioritization team. We’ve also included the peer foreword, written by Strategy Fellow Chris Smith. The peer foreword, which is a standard part of our research process, is an initial response to a piece of research work, written by a team member who is not the primary author or their manager.A slightly earlier draft of this work has been read and discussed by the cause prioritization team. At this point, we plan to learn more about this topic by engaging with philanthropists who are already working on tobacco, extending the depth of this research (particularly on e-cigarettes), and digging deeper into countries which have seen big declines in their smoking burden (e.g. Brazil).2. Peer forewordWritten by Chris SmithIt was in 1964 that the US Surgeon General published a report which linked smoking cigarettes with lung cancer, building on research going back more than a decade. The report told readers that smokers had a 9-10x relative risk of developing lung cancer; that smoking was the primary cause of chronic bronchitis; that pregnant people who smoked were more likely to have underweight newborns, and that smoking was also linked to emphysema and heart disease.In this shallow, Helen reports that nearly sixty years later, there are ~1.3B tobacco users, and that smoking combustible tobacco remains an extraordinary contributor to the global burden of disease, responsible for some 8 million deaths (including secondhand smoke) and ~230M normative disability-adjusted life years (DALYs) (~173M OP descriptive DALYs) making it a bigger contributor to health damages in our terms than HIV/AIDS plus tuberculosis plus malaria. Moreover, the forward-looking projections are for only modest declines in the total burden as population increases offset a decline in smoking rates. As our framework puts it, we’ve got an important problem.Helen walks through the conventional orthodoxy on tobacco control at a population level (higher taxes, marketing restrictions, warning labels) and on smoking cessation support (nicotine replacement therapy, pharmaceutical support). She estimates that a campaign for a cigarette tax which increased the retail price of cigarettes in Indonesia by 10% (a large country with a high attributable disease burden) would reduce tobacco consumption (and attributable DALYs) by 5%, having an expected social return on investment (SROI) of ~3,300x, assuming a 3-year speedup, 10% success rate, and $3M campaign cost. Taxes are considered the single most effective policy measure, but going down the ladder to a moderate advertising ban, the subsequent expected 1% reduction in tobacco consumption and associated DALYs would have an SROI of ~500x. As with any of our shallow back-of-the-envelope-calculations (BOTECs), there is room to debate both the structure and the parameter choices.But I think that this is — when combined with the other material — a strong indicator that there could be relatively mainstream tobacco advocacy work which is above our bar in expectation. This suggests a somewhat tractable problem.Ok, but isn’t this addressed? Don’t people already know that cigarettes are bad for you? We...]]>
Wed, 25 Jan 2023 06:08:30 +0000 EA - Open Philanthropy Shallow Investigation: Tobacco Control by Open Philanthropy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy Shallow Investigation: Tobacco Control, published by Open Philanthropy on January 25, 2023 on The Effective Altruism Forum.1. PreambleThis document is a shallow investigation, as described here. As we noted in the civil conflict investigation and telecommunications in LMICs investigation we shared earlier this year, we have not shared many shallow investigations in the last few years but are moving towards sharing more of our early stage work.This investigation was written by Helen Kissel, a PhD candidate in economics at Stanford who worked at Open Philanthropy for 10 weeks in summer 2022 as one of five interns on the Global Health and Wellbeing Cause Prioritization team. We’ve also included the peer foreword, written by Strategy Fellow Chris Smith. The peer foreword, which is a standard part of our research process, is an initial response to a piece of research work, written by a team member who is not the primary author or their manager.A slightly earlier draft of this work has been read and discussed by the cause prioritization team. At this point, we plan to learn more about this topic by engaging with philanthropists who are already working on tobacco, extending the depth of this research (particularly on e-cigarettes), and digging deeper into countries which have seen big declines in their smoking burden (e.g. Brazil).2. Peer forewordWritten by Chris SmithIt was in 1964 that the US Surgeon General published a report which linked smoking cigarettes with lung cancer, building on research going back more than a decade. The report told readers that smokers had a 9-10x relative risk of developing lung cancer; that smoking was the primary cause of chronic bronchitis; that pregnant people who smoked were more likely to have underweight newborns, and that smoking was also linked to emphysema and heart disease.In this shallow, Helen reports that nearly sixty years later, there are ~1.3B tobacco users, and that smoking combustible tobacco remains an extraordinary contributor to the global burden of disease, responsible for some 8 million deaths (including secondhand smoke) and ~230M normative disability-adjusted life years (DALYs) (~173M OP descriptive DALYs) making it a bigger contributor to health damages in our terms than HIV/AIDS plus tuberculosis plus malaria. Moreover, the forward-looking projections are for only modest declines in the total burden as population increases offset a decline in smoking rates. As our framework puts it, we’ve got an important problem.Helen walks through the conventional orthodoxy on tobacco control at a population level (higher taxes, marketing restrictions, warning labels) and on smoking cessation support (nicotine replacement therapy, pharmaceutical support). She estimates that a campaign for a cigarette tax which increased the retail price of cigarettes in Indonesia by 10% (a large country with a high attributable disease burden) would reduce tobacco consumption (and attributable DALYs) by 5%, having an expected social return on investment (SROI) of ~3,300x, assuming a 3-year speedup, 10% success rate, and $3M campaign cost. Taxes are considered the single most effective policy measure, but going down the ladder to a moderate advertising ban, the subsequent expected 1% reduction in tobacco consumption and associated DALYs would have an SROI of ~500x. As with any of our shallow back-of-the-envelope-calculations (BOTECs), there is room to debate both the structure and the parameter choices.But I think that this is — when combined with the other material — a strong indicator that there could be relatively mainstream tobacco advocacy work which is above our bar in expectation. This suggests a somewhat tractable problem.Ok, but isn’t this addressed? Don’t people already know that cigarettes are bad for you? We...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy Shallow Investigation: Tobacco Control, published by Open Philanthropy on January 25, 2023 on The Effective Altruism Forum.1. PreambleThis document is a shallow investigation, as described here. As we noted in the civil conflict investigation and telecommunications in LMICs investigation we shared earlier this year, we have not shared many shallow investigations in the last few years but are moving towards sharing more of our early stage work.This investigation was written by Helen Kissel, a PhD candidate in economics at Stanford who worked at Open Philanthropy for 10 weeks in summer 2022 as one of five interns on the Global Health and Wellbeing Cause Prioritization team. We’ve also included the peer foreword, written by Strategy Fellow Chris Smith. The peer foreword, which is a standard part of our research process, is an initial response to a piece of research work, written by a team member who is not the primary author or their manager.A slightly earlier draft of this work has been read and discussed by the cause prioritization team. At this point, we plan to learn more about this topic by engaging with philanthropists who are already working on tobacco, extending the depth of this research (particularly on e-cigarettes), and digging deeper into countries which have seen big declines in their smoking burden (e.g. Brazil).2. Peer forewordWritten by Chris SmithIt was in 1964 that the US Surgeon General published a report which linked smoking cigarettes with lung cancer, building on research going back more than a decade. The report told readers that smokers had a 9-10x relative risk of developing lung cancer; that smoking was the primary cause of chronic bronchitis; that pregnant people who smoked were more likely to have underweight newborns, and that smoking was also linked to emphysema and heart disease.In this shallow, Helen reports that nearly sixty years later, there are ~1.3B tobacco users, and that smoking combustible tobacco remains an extraordinary contributor to the global burden of disease, responsible for some 8 million deaths (including secondhand smoke) and ~230M normative disability-adjusted life years (DALYs) (~173M OP descriptive DALYs) making it a bigger contributor to health damages in our terms than HIV/AIDS plus tuberculosis plus malaria. Moreover, the forward-looking projections are for only modest declines in the total burden as population increases offset a decline in smoking rates. As our framework puts it, we’ve got an important problem.Helen walks through the conventional orthodoxy on tobacco control at a population level (higher taxes, marketing restrictions, warning labels) and on smoking cessation support (nicotine replacement therapy, pharmaceutical support). She estimates that a campaign for a cigarette tax which increased the retail price of cigarettes in Indonesia by 10% (a large country with a high attributable disease burden) would reduce tobacco consumption (and attributable DALYs) by 5%, having an expected social return on investment (SROI) of ~3,300x, assuming a 3-year speedup, 10% success rate, and $3M campaign cost. Taxes are considered the single most effective policy measure, but going down the ladder to a moderate advertising ban, the subsequent expected 1% reduction in tobacco consumption and associated DALYs would have an SROI of ~500x. As with any of our shallow back-of-the-envelope-calculations (BOTECs), there is room to debate both the structure and the parameter choices.But I think that this is — when combined with the other material — a strong indicator that there could be relatively mainstream tobacco advocacy work which is above our bar in expectation. This suggests a somewhat tractable problem.Ok, but isn’t this addressed? Don’t people already know that cigarettes are bad for you? We...]]>
Open Philanthropy https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:09:08 None full 4574
p5NjwqtpqPrYpZNwQ_EA EA - GWWC Pledge History by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Pledge History, published by Jeff Kaufman on January 24, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/p5NjwqtpqPrYpZNwQ/gwwc-pledge-history Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Pledge History, published by Jeff Kaufman on January 24, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 24 Jan 2023 19:37:47 +0000 EA - GWWC Pledge History by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Pledge History, published by Jeff Kaufman on January 24, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Pledge History, published by Jeff Kaufman on January 24, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:26 None full 4569
3eivCYyZm8NR4Sdq5_EA EA - Call me, maybe? Hotlines and Global Catastrophic Risk [Founders Pledge] by christian.r Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Call me, maybe? Hotlines and Global Catastrophic Risk [Founders Pledge], published by christian.r on January 24, 2023 on The Effective Altruism Forum.This post summarizes a Founders Pledge shallow investigation on direct communications links (DCLs or "hotlines") between states as global catastrophic risks interventions. As a shallow investigation, it is a rough attempt at understanding an issue, and is in some respects a work in progress.SummaryCrisis-communication links or “hotlines” between states are a subset of crisis management tools intended to help leaders defuse the worst possible crises and to limit or terminate war (especially nuclear war) when it does break out. Despite a clear theory of change, however, there is high uncertainty about their effectiveness and little empirical evidence. The most important dyadic adversarial relationships (e.g., U.S.-China, U.S.-Russia, Pakistan-India, India-China) already have existing hotlines between them, and forming new hotlines is an unlikely candidate for effective philanthropy. Along with high uncertainty about hotline effectiveness in crisis management, the highest stakes application of hotlines (i.e., WMD conflict limitation and termination) remains untested, and dedicated crisis-communications channels may have an important fail-safe role in the event of conflict.War limitation- and termination-enabling hotlines have high expected value even with very low probability of success, because of the distribution of fatalities in WMD-related conflicts. Importantly, it appears that existing hotlines — cobbled together from legacy Cold-War systems and modern technology — are not resilient to the very conflicts they are supposed to control, and may fail in the event of nuclear war, electro-magnetic pulse, cyber operations and some natural catastrophic risks, like solar flares. Additionally, there are political and institutional obstacles to hotline use, including China’s repeated failure to answer in crisis situations.Philanthropists interested in crisis management tools like hotlines could pursue a number of interventions, including:Funding work and dialogues to establish new hotlines;Funding work and dialogues on hotline resilience (including technical work on hotlines in communications-denied environments);Funding more rigorous studies of hotline effectiveness;Funding track II dialogues between the U.S. and China (and potentially other powerful states) focused on hotlines to understand different conceptions of crisis communication.We believe that the marginal value of establishing new hotlines is likely to be low. The other interventions likely need to be sequenced — before investing in hotline resilience, we ought to better understand whether hotlines work, and what political and institutional issues affect their function. Crucially for avoiding great power conflict, we recommend investing in understanding why China does not “pick up” crisis communications channels in times of crisis.Acknowledgments: I would like to thank Tom Barnes, Linton Brooks, Matt Lerner, Peter Rautenbach, David Santoro, Shaan Shaikh, and Sarah Weiler for helpful input on this project.BackgroundThomas Schelling first suggested the idea of a direct communications link between the United States and the Soviet Union in 1958, and the idea was popularized in outlets like Parade magazine. Although early attempts were made at implementing such a link (e.g. in early 1962), the need for such a dedicated communications channel between the United States and Soviet Union became pressingly clear during the Cuban Missile crisis, when Kennedy and Krushchev communicated through “clumsy” and slow traditional communications channels. Officials at the Soviet embassy in Washington later recalled that even their own communications with Moscow used slow an...]]>
christian.r https://forum.effectivealtruism.org/posts/3eivCYyZm8NR4Sdq5/call-me-maybe-hotlines-and-global-catastrophic-risk-founders Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Call me, maybe? Hotlines and Global Catastrophic Risk [Founders Pledge], published by christian.r on January 24, 2023 on The Effective Altruism Forum.This post summarizes a Founders Pledge shallow investigation on direct communications links (DCLs or "hotlines") between states as global catastrophic risks interventions. As a shallow investigation, it is a rough attempt at understanding an issue, and is in some respects a work in progress.SummaryCrisis-communication links or “hotlines” between states are a subset of crisis management tools intended to help leaders defuse the worst possible crises and to limit or terminate war (especially nuclear war) when it does break out. Despite a clear theory of change, however, there is high uncertainty about their effectiveness and little empirical evidence. The most important dyadic adversarial relationships (e.g., U.S.-China, U.S.-Russia, Pakistan-India, India-China) already have existing hotlines between them, and forming new hotlines is an unlikely candidate for effective philanthropy. Along with high uncertainty about hotline effectiveness in crisis management, the highest stakes application of hotlines (i.e., WMD conflict limitation and termination) remains untested, and dedicated crisis-communications channels may have an important fail-safe role in the event of conflict.War limitation- and termination-enabling hotlines have high expected value even with very low probability of success, because of the distribution of fatalities in WMD-related conflicts. Importantly, it appears that existing hotlines — cobbled together from legacy Cold-War systems and modern technology — are not resilient to the very conflicts they are supposed to control, and may fail in the event of nuclear war, electro-magnetic pulse, cyber operations and some natural catastrophic risks, like solar flares. Additionally, there are political and institutional obstacles to hotline use, including China’s repeated failure to answer in crisis situations.Philanthropists interested in crisis management tools like hotlines could pursue a number of interventions, including:Funding work and dialogues to establish new hotlines;Funding work and dialogues on hotline resilience (including technical work on hotlines in communications-denied environments);Funding more rigorous studies of hotline effectiveness;Funding track II dialogues between the U.S. and China (and potentially other powerful states) focused on hotlines to understand different conceptions of crisis communication.We believe that the marginal value of establishing new hotlines is likely to be low. The other interventions likely need to be sequenced — before investing in hotline resilience, we ought to better understand whether hotlines work, and what political and institutional issues affect their function. Crucially for avoiding great power conflict, we recommend investing in understanding why China does not “pick up” crisis communications channels in times of crisis.Acknowledgments: I would like to thank Tom Barnes, Linton Brooks, Matt Lerner, Peter Rautenbach, David Santoro, Shaan Shaikh, and Sarah Weiler for helpful input on this project.BackgroundThomas Schelling first suggested the idea of a direct communications link between the United States and the Soviet Union in 1958, and the idea was popularized in outlets like Parade magazine. Although early attempts were made at implementing such a link (e.g. in early 1962), the need for such a dedicated communications channel between the United States and Soviet Union became pressingly clear during the Cuban Missile crisis, when Kennedy and Krushchev communicated through “clumsy” and slow traditional communications channels. Officials at the Soviet embassy in Washington later recalled that even their own communications with Moscow used slow an...]]>
Tue, 24 Jan 2023 18:05:56 +0000 EA - Call me, maybe? Hotlines and Global Catastrophic Risk [Founders Pledge] by christian.r Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Call me, maybe? Hotlines and Global Catastrophic Risk [Founders Pledge], published by christian.r on January 24, 2023 on The Effective Altruism Forum.This post summarizes a Founders Pledge shallow investigation on direct communications links (DCLs or "hotlines") between states as global catastrophic risks interventions. As a shallow investigation, it is a rough attempt at understanding an issue, and is in some respects a work in progress.SummaryCrisis-communication links or “hotlines” between states are a subset of crisis management tools intended to help leaders defuse the worst possible crises and to limit or terminate war (especially nuclear war) when it does break out. Despite a clear theory of change, however, there is high uncertainty about their effectiveness and little empirical evidence. The most important dyadic adversarial relationships (e.g., U.S.-China, U.S.-Russia, Pakistan-India, India-China) already have existing hotlines between them, and forming new hotlines is an unlikely candidate for effective philanthropy. Along with high uncertainty about hotline effectiveness in crisis management, the highest stakes application of hotlines (i.e., WMD conflict limitation and termination) remains untested, and dedicated crisis-communications channels may have an important fail-safe role in the event of conflict.War limitation- and termination-enabling hotlines have high expected value even with very low probability of success, because of the distribution of fatalities in WMD-related conflicts. Importantly, it appears that existing hotlines — cobbled together from legacy Cold-War systems and modern technology — are not resilient to the very conflicts they are supposed to control, and may fail in the event of nuclear war, electro-magnetic pulse, cyber operations and some natural catastrophic risks, like solar flares. Additionally, there are political and institutional obstacles to hotline use, including China’s repeated failure to answer in crisis situations.Philanthropists interested in crisis management tools like hotlines could pursue a number of interventions, including:Funding work and dialogues to establish new hotlines;Funding work and dialogues on hotline resilience (including technical work on hotlines in communications-denied environments);Funding more rigorous studies of hotline effectiveness;Funding track II dialogues between the U.S. and China (and potentially other powerful states) focused on hotlines to understand different conceptions of crisis communication.We believe that the marginal value of establishing new hotlines is likely to be low. The other interventions likely need to be sequenced — before investing in hotline resilience, we ought to better understand whether hotlines work, and what political and institutional issues affect their function. Crucially for avoiding great power conflict, we recommend investing in understanding why China does not “pick up” crisis communications channels in times of crisis.Acknowledgments: I would like to thank Tom Barnes, Linton Brooks, Matt Lerner, Peter Rautenbach, David Santoro, Shaan Shaikh, and Sarah Weiler for helpful input on this project.BackgroundThomas Schelling first suggested the idea of a direct communications link between the United States and the Soviet Union in 1958, and the idea was popularized in outlets like Parade magazine. Although early attempts were made at implementing such a link (e.g. in early 1962), the need for such a dedicated communications channel between the United States and Soviet Union became pressingly clear during the Cuban Missile crisis, when Kennedy and Krushchev communicated through “clumsy” and slow traditional communications channels. Officials at the Soviet embassy in Washington later recalled that even their own communications with Moscow used slow an...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Call me, maybe? Hotlines and Global Catastrophic Risk [Founders Pledge], published by christian.r on January 24, 2023 on The Effective Altruism Forum.This post summarizes a Founders Pledge shallow investigation on direct communications links (DCLs or "hotlines") between states as global catastrophic risks interventions. As a shallow investigation, it is a rough attempt at understanding an issue, and is in some respects a work in progress.SummaryCrisis-communication links or “hotlines” between states are a subset of crisis management tools intended to help leaders defuse the worst possible crises and to limit or terminate war (especially nuclear war) when it does break out. Despite a clear theory of change, however, there is high uncertainty about their effectiveness and little empirical evidence. The most important dyadic adversarial relationships (e.g., U.S.-China, U.S.-Russia, Pakistan-India, India-China) already have existing hotlines between them, and forming new hotlines is an unlikely candidate for effective philanthropy. Along with high uncertainty about hotline effectiveness in crisis management, the highest stakes application of hotlines (i.e., WMD conflict limitation and termination) remains untested, and dedicated crisis-communications channels may have an important fail-safe role in the event of conflict.War limitation- and termination-enabling hotlines have high expected value even with very low probability of success, because of the distribution of fatalities in WMD-related conflicts. Importantly, it appears that existing hotlines — cobbled together from legacy Cold-War systems and modern technology — are not resilient to the very conflicts they are supposed to control, and may fail in the event of nuclear war, electro-magnetic pulse, cyber operations and some natural catastrophic risks, like solar flares. Additionally, there are political and institutional obstacles to hotline use, including China’s repeated failure to answer in crisis situations.Philanthropists interested in crisis management tools like hotlines could pursue a number of interventions, including:Funding work and dialogues to establish new hotlines;Funding work and dialogues on hotline resilience (including technical work on hotlines in communications-denied environments);Funding more rigorous studies of hotline effectiveness;Funding track II dialogues between the U.S. and China (and potentially other powerful states) focused on hotlines to understand different conceptions of crisis communication.We believe that the marginal value of establishing new hotlines is likely to be low. The other interventions likely need to be sequenced — before investing in hotline resilience, we ought to better understand whether hotlines work, and what political and institutional issues affect their function. Crucially for avoiding great power conflict, we recommend investing in understanding why China does not “pick up” crisis communications channels in times of crisis.Acknowledgments: I would like to thank Tom Barnes, Linton Brooks, Matt Lerner, Peter Rautenbach, David Santoro, Shaan Shaikh, and Sarah Weiler for helpful input on this project.BackgroundThomas Schelling first suggested the idea of a direct communications link between the United States and the Soviet Union in 1958, and the idea was popularized in outlets like Parade magazine. Although early attempts were made at implementing such a link (e.g. in early 1962), the need for such a dedicated communications channel between the United States and Soviet Union became pressingly clear during the Cuban Missile crisis, when Kennedy and Krushchev communicated through “clumsy” and slow traditional communications channels. Officials at the Soviet embassy in Washington later recalled that even their own communications with Moscow used slow an...]]>
christian.r https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 58:21 None full 4568
i2BcQ8TdRELjqv23c_EA EA - Save the date - EAGxWarsaw 2023 by Łukasz Grabowski Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the date - EAGxWarsaw 2023, published by Łukasz Grabowski on January 24, 2023 on The Effective Altruism Forum.We’d like to invite you to this year’s Central-Eastern European EAGx!The Polish Effective Altruism community has grown rapidly over the last couple years. We’re very excited to support the community by hosting the very first EAGx event in Poland!EAGxWarsaw will take place this June, 9th to 11th in Polin - the conference centre attached to the Museum of the History of Polish Jews in Warsaw. Please come join us!Who is this event for?We’d like to welcome EAs and EA-adjacent individuals who make helping others a core part of their lives. In the first place we would like to invite people from our region. We want to gather ambitious activists, scholars and students from countries like Belarus, Bosnia and Herzegovina, Bulgaria, Croatia, Czech Republic, Estonia, Georgia, Greece, Hungary, Latvia, Lithuania, Moldova, Poland, Romania, Serbia, Slovenia, Slovakia, Turkey and Ukraine.We are also open and very happy to host people from places like Austria, Germany, the rest of Europe or even from the whole world - just please be aware that our capacity is limited to 500 guests, so in case of an overwhelming interest we will be prioritising applicants who have a stake in our region. But as always - when unsure, please apply!Approximate scheduleThe conference will start on Friday, 9th of June at about 6pm, with pre-registration opening a couple hours earlier. The Friday programme will consist mostly of a career fair. Then for Saturday and Sunday the majority of the workshops, meetups, talks and discussion will take place. The closing talk is scheduled for 6pm on Sunday the 11th, after which social activities will commence.A more detailed agenda will be presented closer to the conference, via Swapcard.ApplicationsApplications will open by the end of Q1 of 2023. If you want to be notified about the opening of applications please fill out this form.Travel expensesWe are prepared to reimburse travel expenses for some attendees. More details will be presented, when the applications open. In the meantime you can check CEA’s travel support policy here.Email us at warsaw@eaglobalx.org with any questions or feedback.See you in June!EAGxWarsaw teamThe museum will remain open to the public throughout the event.The conference centre attached to the museum is used for different events all the time, so there is nothing unusual with EAGx happening there. Actually they are very happy to host us, as they are also oriented towards doing good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Łukasz Grabowski https://forum.effectivealtruism.org/posts/i2BcQ8TdRELjqv23c/save-the-date-eagxwarsaw-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the date - EAGxWarsaw 2023, published by Łukasz Grabowski on January 24, 2023 on The Effective Altruism Forum.We’d like to invite you to this year’s Central-Eastern European EAGx!The Polish Effective Altruism community has grown rapidly over the last couple years. We’re very excited to support the community by hosting the very first EAGx event in Poland!EAGxWarsaw will take place this June, 9th to 11th in Polin - the conference centre attached to the Museum of the History of Polish Jews in Warsaw. Please come join us!Who is this event for?We’d like to welcome EAs and EA-adjacent individuals who make helping others a core part of their lives. In the first place we would like to invite people from our region. We want to gather ambitious activists, scholars and students from countries like Belarus, Bosnia and Herzegovina, Bulgaria, Croatia, Czech Republic, Estonia, Georgia, Greece, Hungary, Latvia, Lithuania, Moldova, Poland, Romania, Serbia, Slovenia, Slovakia, Turkey and Ukraine.We are also open and very happy to host people from places like Austria, Germany, the rest of Europe or even from the whole world - just please be aware that our capacity is limited to 500 guests, so in case of an overwhelming interest we will be prioritising applicants who have a stake in our region. But as always - when unsure, please apply!Approximate scheduleThe conference will start on Friday, 9th of June at about 6pm, with pre-registration opening a couple hours earlier. The Friday programme will consist mostly of a career fair. Then for Saturday and Sunday the majority of the workshops, meetups, talks and discussion will take place. The closing talk is scheduled for 6pm on Sunday the 11th, after which social activities will commence.A more detailed agenda will be presented closer to the conference, via Swapcard.ApplicationsApplications will open by the end of Q1 of 2023. If you want to be notified about the opening of applications please fill out this form.Travel expensesWe are prepared to reimburse travel expenses for some attendees. More details will be presented, when the applications open. In the meantime you can check CEA’s travel support policy here.Email us at warsaw@eaglobalx.org with any questions or feedback.See you in June!EAGxWarsaw teamThe museum will remain open to the public throughout the event.The conference centre attached to the museum is used for different events all the time, so there is nothing unusual with EAGx happening there. Actually they are very happy to host us, as they are also oriented towards doing good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 24 Jan 2023 15:39:31 +0000 EA - Save the date - EAGxWarsaw 2023 by Łukasz Grabowski Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the date - EAGxWarsaw 2023, published by Łukasz Grabowski on January 24, 2023 on The Effective Altruism Forum.We’d like to invite you to this year’s Central-Eastern European EAGx!The Polish Effective Altruism community has grown rapidly over the last couple years. We’re very excited to support the community by hosting the very first EAGx event in Poland!EAGxWarsaw will take place this June, 9th to 11th in Polin - the conference centre attached to the Museum of the History of Polish Jews in Warsaw. Please come join us!Who is this event for?We’d like to welcome EAs and EA-adjacent individuals who make helping others a core part of their lives. In the first place we would like to invite people from our region. We want to gather ambitious activists, scholars and students from countries like Belarus, Bosnia and Herzegovina, Bulgaria, Croatia, Czech Republic, Estonia, Georgia, Greece, Hungary, Latvia, Lithuania, Moldova, Poland, Romania, Serbia, Slovenia, Slovakia, Turkey and Ukraine.We are also open and very happy to host people from places like Austria, Germany, the rest of Europe or even from the whole world - just please be aware that our capacity is limited to 500 guests, so in case of an overwhelming interest we will be prioritising applicants who have a stake in our region. But as always - when unsure, please apply!Approximate scheduleThe conference will start on Friday, 9th of June at about 6pm, with pre-registration opening a couple hours earlier. The Friday programme will consist mostly of a career fair. Then for Saturday and Sunday the majority of the workshops, meetups, talks and discussion will take place. The closing talk is scheduled for 6pm on Sunday the 11th, after which social activities will commence.A more detailed agenda will be presented closer to the conference, via Swapcard.ApplicationsApplications will open by the end of Q1 of 2023. If you want to be notified about the opening of applications please fill out this form.Travel expensesWe are prepared to reimburse travel expenses for some attendees. More details will be presented, when the applications open. In the meantime you can check CEA’s travel support policy here.Email us at warsaw@eaglobalx.org with any questions or feedback.See you in June!EAGxWarsaw teamThe museum will remain open to the public throughout the event.The conference centre attached to the museum is used for different events all the time, so there is nothing unusual with EAGx happening there. Actually they are very happy to host us, as they are also oriented towards doing good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the date - EAGxWarsaw 2023, published by Łukasz Grabowski on January 24, 2023 on The Effective Altruism Forum.We’d like to invite you to this year’s Central-Eastern European EAGx!The Polish Effective Altruism community has grown rapidly over the last couple years. We’re very excited to support the community by hosting the very first EAGx event in Poland!EAGxWarsaw will take place this June, 9th to 11th in Polin - the conference centre attached to the Museum of the History of Polish Jews in Warsaw. Please come join us!Who is this event for?We’d like to welcome EAs and EA-adjacent individuals who make helping others a core part of their lives. In the first place we would like to invite people from our region. We want to gather ambitious activists, scholars and students from countries like Belarus, Bosnia and Herzegovina, Bulgaria, Croatia, Czech Republic, Estonia, Georgia, Greece, Hungary, Latvia, Lithuania, Moldova, Poland, Romania, Serbia, Slovenia, Slovakia, Turkey and Ukraine.We are also open and very happy to host people from places like Austria, Germany, the rest of Europe or even from the whole world - just please be aware that our capacity is limited to 500 guests, so in case of an overwhelming interest we will be prioritising applicants who have a stake in our region. But as always - when unsure, please apply!Approximate scheduleThe conference will start on Friday, 9th of June at about 6pm, with pre-registration opening a couple hours earlier. The Friday programme will consist mostly of a career fair. Then for Saturday and Sunday the majority of the workshops, meetups, talks and discussion will take place. The closing talk is scheduled for 6pm on Sunday the 11th, after which social activities will commence.A more detailed agenda will be presented closer to the conference, via Swapcard.ApplicationsApplications will open by the end of Q1 of 2023. If you want to be notified about the opening of applications please fill out this form.Travel expensesWe are prepared to reimburse travel expenses for some attendees. More details will be presented, when the applications open. In the meantime you can check CEA’s travel support policy here.Email us at warsaw@eaglobalx.org with any questions or feedback.See you in June!EAGxWarsaw teamThe museum will remain open to the public throughout the event.The conference centre attached to the museum is used for different events all the time, so there is nothing unusual with EAGx happening there. Actually they are very happy to host us, as they are also oriented towards doing good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Łukasz Grabowski https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:49 None full 4570
QWuKM5fsbry8Jp2x5_EA EA - Why people want to work on AI safety (but don’t) by Emily Grundy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why people want to work on AI safety (but don’t), published by Emily Grundy on January 24, 2023 on The Effective Altruism Forum.Epistemic status: Based mainly on my own experience and a couple of conversations with others learning about AI safety. It’s very likely that I’m some overlooking existing resources that address my concerns - feel free to add them in the comments.I had an ‘eugh’ response to doing work in the AI safety space. I understood the importance. I believed in the urgency. I wished I wanted to work on it. But I didn’t.An intro AI safety course helped me unpack my hesitation. This post outlines what I found: three barriers to delving into the world of AI safety, and what could help address them.Why read this post?If you’re similarly conflicted, this post might be validating and evoke the comforting, "Oh it’s not just me" feeling. It might also help you discern where your hesitation is coming from. Once you understand that, you can try to address it. Maybe, ‘I wish I wanted to work on AI safety’ just becomes, ‘I want to work on AI safety’.If you want to build the AI safety community, it could be helpful to understand how a newcomer, like myself, interprets the field (and what makes me less likely to get involved).I’ll discuss three barriers (and their potential solutions):A world with advanced AI (and how we might get there) is hard to imagine: AKA "What even is it?"AI can be technical but it’s not clear how much of that you need to know: AKA "But I don’t program"There’s a lot of jargon and it’s not always well explained: AKA "Can you explain that again.but like I’m 5"Jump to the end for a visual summary.A world with advanced AI (and how we might get there) is hard to imagine: AKA "What even is it?"A lot of intro AI explainers go like this:Here’s where we’re at with AI (cue chess, art, and ChatGPT examples)Here are a bunch of reasons why AI could (and probably will) become powerfulI mean, really powerfulAnd here’s how it could go wrongWhat I don’t get from these explanations is an image of what it actually looks like to: 1) live in a world with advanced AI or 2) go from our current world to that one. Below I outline what I mean by those two points, why I think they’re important, and what could help.What does it look like to live in a world with AI?I can regurgitate examples of how advanced AI might be used in the future – maybe it’ll be our future CEOs, doctors, politicians, or artists. What I’m missing is the ability to imagine any of these things - to understand, concretely, how that might look. I can say things like, "AI might be the future policymakers", but have no idea how they would create or communicate policies, or how we would interact with them as policymakers.To flesh this out a bit, I imagine there are three levels of understanding here: 1) what AI could do (roles it might adopt, things it could achieve), 2) how that would actually look (concrete, detailed descriptions), and 3) how that would work (the technical stuff). A lot of content I’ve seen focuses on the first and third levels, but skips over the middle one. Here's a figure for the visually inclined:Why is this important?For me, having a very surface-level understanding of something stunts thought. Because I can’t imagine how it might look, I struggle to imagine other problems, possibilities, or solutions. Plus, big risks are already hard to imagine and hard to feel, which isn’t great for our judgement of those risks or our motivation to work on them.What could help?I imagine the go-to response to this is – "check out some fiction stories". I think that’s a great idea if your audience is willing to invest time into finding and reading these. But, I think fleshed out examples have a place beyond fiction.If you’re introducing people to the idea of AI (e.g., yo...]]>
Emily Grundy https://forum.effectivealtruism.org/posts/QWuKM5fsbry8Jp2x5/why-people-want-to-work-on-ai-safety-but-don-t Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why people want to work on AI safety (but don’t), published by Emily Grundy on January 24, 2023 on The Effective Altruism Forum.Epistemic status: Based mainly on my own experience and a couple of conversations with others learning about AI safety. It’s very likely that I’m some overlooking existing resources that address my concerns - feel free to add them in the comments.I had an ‘eugh’ response to doing work in the AI safety space. I understood the importance. I believed in the urgency. I wished I wanted to work on it. But I didn’t.An intro AI safety course helped me unpack my hesitation. This post outlines what I found: three barriers to delving into the world of AI safety, and what could help address them.Why read this post?If you’re similarly conflicted, this post might be validating and evoke the comforting, "Oh it’s not just me" feeling. It might also help you discern where your hesitation is coming from. Once you understand that, you can try to address it. Maybe, ‘I wish I wanted to work on AI safety’ just becomes, ‘I want to work on AI safety’.If you want to build the AI safety community, it could be helpful to understand how a newcomer, like myself, interprets the field (and what makes me less likely to get involved).I’ll discuss three barriers (and their potential solutions):A world with advanced AI (and how we might get there) is hard to imagine: AKA "What even is it?"AI can be technical but it’s not clear how much of that you need to know: AKA "But I don’t program"There’s a lot of jargon and it’s not always well explained: AKA "Can you explain that again.but like I’m 5"Jump to the end for a visual summary.A world with advanced AI (and how we might get there) is hard to imagine: AKA "What even is it?"A lot of intro AI explainers go like this:Here’s where we’re at with AI (cue chess, art, and ChatGPT examples)Here are a bunch of reasons why AI could (and probably will) become powerfulI mean, really powerfulAnd here’s how it could go wrongWhat I don’t get from these explanations is an image of what it actually looks like to: 1) live in a world with advanced AI or 2) go from our current world to that one. Below I outline what I mean by those two points, why I think they’re important, and what could help.What does it look like to live in a world with AI?I can regurgitate examples of how advanced AI might be used in the future – maybe it’ll be our future CEOs, doctors, politicians, or artists. What I’m missing is the ability to imagine any of these things - to understand, concretely, how that might look. I can say things like, "AI might be the future policymakers", but have no idea how they would create or communicate policies, or how we would interact with them as policymakers.To flesh this out a bit, I imagine there are three levels of understanding here: 1) what AI could do (roles it might adopt, things it could achieve), 2) how that would actually look (concrete, detailed descriptions), and 3) how that would work (the technical stuff). A lot of content I’ve seen focuses on the first and third levels, but skips over the middle one. Here's a figure for the visually inclined:Why is this important?For me, having a very surface-level understanding of something stunts thought. Because I can’t imagine how it might look, I struggle to imagine other problems, possibilities, or solutions. Plus, big risks are already hard to imagine and hard to feel, which isn’t great for our judgement of those risks or our motivation to work on them.What could help?I imagine the go-to response to this is – "check out some fiction stories". I think that’s a great idea if your audience is willing to invest time into finding and reading these. But, I think fleshed out examples have a place beyond fiction.If you’re introducing people to the idea of AI (e.g., yo...]]>
Tue, 24 Jan 2023 12:24:06 +0000 EA - Why people want to work on AI safety (but don’t) by Emily Grundy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why people want to work on AI safety (but don’t), published by Emily Grundy on January 24, 2023 on The Effective Altruism Forum.Epistemic status: Based mainly on my own experience and a couple of conversations with others learning about AI safety. It’s very likely that I’m some overlooking existing resources that address my concerns - feel free to add them in the comments.I had an ‘eugh’ response to doing work in the AI safety space. I understood the importance. I believed in the urgency. I wished I wanted to work on it. But I didn’t.An intro AI safety course helped me unpack my hesitation. This post outlines what I found: three barriers to delving into the world of AI safety, and what could help address them.Why read this post?If you’re similarly conflicted, this post might be validating and evoke the comforting, "Oh it’s not just me" feeling. It might also help you discern where your hesitation is coming from. Once you understand that, you can try to address it. Maybe, ‘I wish I wanted to work on AI safety’ just becomes, ‘I want to work on AI safety’.If you want to build the AI safety community, it could be helpful to understand how a newcomer, like myself, interprets the field (and what makes me less likely to get involved).I’ll discuss three barriers (and their potential solutions):A world with advanced AI (and how we might get there) is hard to imagine: AKA "What even is it?"AI can be technical but it’s not clear how much of that you need to know: AKA "But I don’t program"There’s a lot of jargon and it’s not always well explained: AKA "Can you explain that again.but like I’m 5"Jump to the end for a visual summary.A world with advanced AI (and how we might get there) is hard to imagine: AKA "What even is it?"A lot of intro AI explainers go like this:Here’s where we’re at with AI (cue chess, art, and ChatGPT examples)Here are a bunch of reasons why AI could (and probably will) become powerfulI mean, really powerfulAnd here’s how it could go wrongWhat I don’t get from these explanations is an image of what it actually looks like to: 1) live in a world with advanced AI or 2) go from our current world to that one. Below I outline what I mean by those two points, why I think they’re important, and what could help.What does it look like to live in a world with AI?I can regurgitate examples of how advanced AI might be used in the future – maybe it’ll be our future CEOs, doctors, politicians, or artists. What I’m missing is the ability to imagine any of these things - to understand, concretely, how that might look. I can say things like, "AI might be the future policymakers", but have no idea how they would create or communicate policies, or how we would interact with them as policymakers.To flesh this out a bit, I imagine there are three levels of understanding here: 1) what AI could do (roles it might adopt, things it could achieve), 2) how that would actually look (concrete, detailed descriptions), and 3) how that would work (the technical stuff). A lot of content I’ve seen focuses on the first and third levels, but skips over the middle one. Here's a figure for the visually inclined:Why is this important?For me, having a very surface-level understanding of something stunts thought. Because I can’t imagine how it might look, I struggle to imagine other problems, possibilities, or solutions. Plus, big risks are already hard to imagine and hard to feel, which isn’t great for our judgement of those risks or our motivation to work on them.What could help?I imagine the go-to response to this is – "check out some fiction stories". I think that’s a great idea if your audience is willing to invest time into finding and reading these. But, I think fleshed out examples have a place beyond fiction.If you’re introducing people to the idea of AI (e.g., yo...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why people want to work on AI safety (but don’t), published by Emily Grundy on January 24, 2023 on The Effective Altruism Forum.Epistemic status: Based mainly on my own experience and a couple of conversations with others learning about AI safety. It’s very likely that I’m some overlooking existing resources that address my concerns - feel free to add them in the comments.I had an ‘eugh’ response to doing work in the AI safety space. I understood the importance. I believed in the urgency. I wished I wanted to work on it. But I didn’t.An intro AI safety course helped me unpack my hesitation. This post outlines what I found: three barriers to delving into the world of AI safety, and what could help address them.Why read this post?If you’re similarly conflicted, this post might be validating and evoke the comforting, "Oh it’s not just me" feeling. It might also help you discern where your hesitation is coming from. Once you understand that, you can try to address it. Maybe, ‘I wish I wanted to work on AI safety’ just becomes, ‘I want to work on AI safety’.If you want to build the AI safety community, it could be helpful to understand how a newcomer, like myself, interprets the field (and what makes me less likely to get involved).I’ll discuss three barriers (and their potential solutions):A world with advanced AI (and how we might get there) is hard to imagine: AKA "What even is it?"AI can be technical but it’s not clear how much of that you need to know: AKA "But I don’t program"There’s a lot of jargon and it’s not always well explained: AKA "Can you explain that again.but like I’m 5"Jump to the end for a visual summary.A world with advanced AI (and how we might get there) is hard to imagine: AKA "What even is it?"A lot of intro AI explainers go like this:Here’s where we’re at with AI (cue chess, art, and ChatGPT examples)Here are a bunch of reasons why AI could (and probably will) become powerfulI mean, really powerfulAnd here’s how it could go wrongWhat I don’t get from these explanations is an image of what it actually looks like to: 1) live in a world with advanced AI or 2) go from our current world to that one. Below I outline what I mean by those two points, why I think they’re important, and what could help.What does it look like to live in a world with AI?I can regurgitate examples of how advanced AI might be used in the future – maybe it’ll be our future CEOs, doctors, politicians, or artists. What I’m missing is the ability to imagine any of these things - to understand, concretely, how that might look. I can say things like, "AI might be the future policymakers", but have no idea how they would create or communicate policies, or how we would interact with them as policymakers.To flesh this out a bit, I imagine there are three levels of understanding here: 1) what AI could do (roles it might adopt, things it could achieve), 2) how that would actually look (concrete, detailed descriptions), and 3) how that would work (the technical stuff). A lot of content I’ve seen focuses on the first and third levels, but skips over the middle one. Here's a figure for the visually inclined:Why is this important?For me, having a very surface-level understanding of something stunts thought. Because I can’t imagine how it might look, I struggle to imagine other problems, possibilities, or solutions. Plus, big risks are already hard to imagine and hard to feel, which isn’t great for our judgement of those risks or our motivation to work on them.What could help?I imagine the go-to response to this is – "check out some fiction stories". I think that’s a great idea if your audience is willing to invest time into finding and reading these. But, I think fleshed out examples have a place beyond fiction.If you’re introducing people to the idea of AI (e.g., yo...]]>
Emily Grundy https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:32 None full 4572
NcJBZwLKLBimY5nYd_EA EA - Dean Karlan is now Chief Economist of USAID by Henry Howard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dean Karlan is now Chief Economist of USAID, published by Henry Howard on January 24, 2023 on The Effective Altruism Forum.In November 2022, the United Stated Agency for International Development appointed Professor Dean Karlan as their Chief Economist. Dean Karlan is a development economist who founded Innovations for Poverty Action (IPA) in 2002 and has been its president since. He's also on the Executive Committee of MIT's Jameel Poverty Action Lab (J-PAL).IPA and J-PAL have been responsible for a lot of the research that underpins GiveWell's charity recommendations (GiveWell has a 2011 overview of IPA's contributions here). Their work includes:Evidence for the effectiveness of free vs. priced bednetsOngoing work on unconditional cash transfersEvidence for positive effects from deworming (a 2019 Cochrane review suggests otherwise)Work suggesting that microfinance isn't that greatEvidence for the effectiveness of chlorine dispensersWork on incentives for immunisationThis is among hundreds of other policy/intervention evaluations the two groups have done.Dean Karlan seems to have played a big role in advancing evidence-based global development.USAID has an allocation of $29.4 billion for 2023. Wikipedia says this is the world's largest aid budget. If Prof. Karlan improves the effectiveness of the USAID program by even a small amount it could have a huge positive impact.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Henry Howard https://forum.effectivealtruism.org/posts/NcJBZwLKLBimY5nYd/dean-karlan-is-now-chief-economist-of-usaid Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dean Karlan is now Chief Economist of USAID, published by Henry Howard on January 24, 2023 on The Effective Altruism Forum.In November 2022, the United Stated Agency for International Development appointed Professor Dean Karlan as their Chief Economist. Dean Karlan is a development economist who founded Innovations for Poverty Action (IPA) in 2002 and has been its president since. He's also on the Executive Committee of MIT's Jameel Poverty Action Lab (J-PAL).IPA and J-PAL have been responsible for a lot of the research that underpins GiveWell's charity recommendations (GiveWell has a 2011 overview of IPA's contributions here). Their work includes:Evidence for the effectiveness of free vs. priced bednetsOngoing work on unconditional cash transfersEvidence for positive effects from deworming (a 2019 Cochrane review suggests otherwise)Work suggesting that microfinance isn't that greatEvidence for the effectiveness of chlorine dispensersWork on incentives for immunisationThis is among hundreds of other policy/intervention evaluations the two groups have done.Dean Karlan seems to have played a big role in advancing evidence-based global development.USAID has an allocation of $29.4 billion for 2023. Wikipedia says this is the world's largest aid budget. If Prof. Karlan improves the effectiveness of the USAID program by even a small amount it could have a huge positive impact.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 24 Jan 2023 11:29:57 +0000 EA - Dean Karlan is now Chief Economist of USAID by Henry Howard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dean Karlan is now Chief Economist of USAID, published by Henry Howard on January 24, 2023 on The Effective Altruism Forum.In November 2022, the United Stated Agency for International Development appointed Professor Dean Karlan as their Chief Economist. Dean Karlan is a development economist who founded Innovations for Poverty Action (IPA) in 2002 and has been its president since. He's also on the Executive Committee of MIT's Jameel Poverty Action Lab (J-PAL).IPA and J-PAL have been responsible for a lot of the research that underpins GiveWell's charity recommendations (GiveWell has a 2011 overview of IPA's contributions here). Their work includes:Evidence for the effectiveness of free vs. priced bednetsOngoing work on unconditional cash transfersEvidence for positive effects from deworming (a 2019 Cochrane review suggests otherwise)Work suggesting that microfinance isn't that greatEvidence for the effectiveness of chlorine dispensersWork on incentives for immunisationThis is among hundreds of other policy/intervention evaluations the two groups have done.Dean Karlan seems to have played a big role in advancing evidence-based global development.USAID has an allocation of $29.4 billion for 2023. Wikipedia says this is the world's largest aid budget. If Prof. Karlan improves the effectiveness of the USAID program by even a small amount it could have a huge positive impact.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dean Karlan is now Chief Economist of USAID, published by Henry Howard on January 24, 2023 on The Effective Altruism Forum.In November 2022, the United Stated Agency for International Development appointed Professor Dean Karlan as their Chief Economist. Dean Karlan is a development economist who founded Innovations for Poverty Action (IPA) in 2002 and has been its president since. He's also on the Executive Committee of MIT's Jameel Poverty Action Lab (J-PAL).IPA and J-PAL have been responsible for a lot of the research that underpins GiveWell's charity recommendations (GiveWell has a 2011 overview of IPA's contributions here). Their work includes:Evidence for the effectiveness of free vs. priced bednetsOngoing work on unconditional cash transfersEvidence for positive effects from deworming (a 2019 Cochrane review suggests otherwise)Work suggesting that microfinance isn't that greatEvidence for the effectiveness of chlorine dispensersWork on incentives for immunisationThis is among hundreds of other policy/intervention evaluations the two groups have done.Dean Karlan seems to have played a big role in advancing evidence-based global development.USAID has an allocation of $29.4 billion for 2023. Wikipedia says this is the world's largest aid budget. If Prof. Karlan improves the effectiveness of the USAID program by even a small amount it could have a huge positive impact.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Henry Howard https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:40 None full 4571
ByBBqwRXWqX5m9erL_EA EA - Update to Samotsvety AGI timelines by Misha Yagudin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update to Samotsvety AGI timelines, published by Misha Yagudin on January 24, 2023 on The Effective Altruism Forum.Previously: Samotsvety's AI risk forecasts.Our colleagues at Epoch recently asked us to update our AI timelines estimate for their upcoming literature review on TAI timelines. We met on 2023-01-21 to discuss our predictions about when advanced AI systems will arrive.Forecasts:Definition of AGIWe used the following definition to determine the “moment at which AGI is considered to have arrived,” building on this Metaculus question:The moment that a system capable of passing the adversarial Turing test against a top-5% human who has access to experts on various topics is developed.More concretely:A Turing test is said to be “adversarial” if the human judges make a good-faith attempt to unmask the AI as an impostor, and the human confederates make a good-faith attempt to demonstrate that they are humans.An AI is said to “pass” a Turing test if at least half of judges rated the AI as more human than at least third of the human confederates.This definition of AGI is not unproblematic, e.g., it’s possible that AGI could be unmasked long after its economic value and capabilities are very high. We chose to use an imperfect definition and indicated to forecasters that they should interpret the definition not “as is” but “in spirit” to avoid annoying edge cases.Individual forecastsP(AGI by 2030)P(AGI by 2050)P(AGI by 2100)P(AGI by this year) = 10%P(AGI by this year) = 50%P(AGI by this year) = 90%F1F3F4F5F6F7F8F90.390.750.7820282034N/A0.280.70.872027203921200.260.580.932025203920880.350.730.912025203720750.40.650.820252035N/A0.330.650.82026203722500.20.50.72026205022000.230.440.67202620602250AggregateP(AGI by 2030)P(AGI by 2050)P(AGI by 2100)P(AGI by this year) = 10%P(AGI by this year) = 50%P(AGI by this year) = 90%mean:0.310.630.81202620412164stdev:0.070.110.091.078.9979.6550% CI:[0.26, 0.35][0.55, 0.70][0.74, 0.87][2025.3, 2026.7][2035, 2047][2110, 2218]80% CI:[0.21, 0.40][0.48, 0.77][0.69, 0.93][2024.6, 2027.4][2030, 2053][2062, 2266]95% CI:[0.16, 0.45][0.41, 0.84][0.62, 0.99][2023.9, 2028.1][2024, 2059][2008, 2320]geomean:0.300.620.802026.0020412163geo odds:0.300.630.82Epistemic status:For Samotsvety track-record see:/Note that this track record comes mostly from questions about geopolitics and technology that resolve within 12 months.Most forecasters have at least read Joe Carlsmith’s report on AI x-risk, Is “Power-Seeking AI an Existential Risk?”. Those who are short on time may have just skimmed the report and/or watched the presentation. We discussed the report section by section over the course of a few weekly meetings.Note also that there might be selection effects at the level of which forecasters chose to participate in this exercise; for example, Samotsvety forecasters who view AI as an important/interesting/etc. topic could have self-selected into the discussion.(Though, the set of forecasters who participated this time and participated last time is very similar.)Update from our previous estimateThe last time we publicly elicited a similar probability from our forecasters, we were at 32% that AGI would be developed in the next 20 years (so by late 2042); and at 73% that it would be developed by 2100. These are a bit lower than our current forecasts. The changes since then can be attributed toWe have gotten more time to think about the topic, and work through considerations and counter-considerations, e.g., the extent to which we should fear selection effects in the types of arguments to which we are exposed.Some of our forecasters still give substantial weight to more skeptical probabilities coming from semi-informative priors, from Lap...]]>
Misha Yagudin https://forum.effectivealtruism.org/posts/ByBBqwRXWqX5m9erL/update-to-samotsvety-agi-timelines Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update to Samotsvety AGI timelines, published by Misha Yagudin on January 24, 2023 on The Effective Altruism Forum.Previously: Samotsvety's AI risk forecasts.Our colleagues at Epoch recently asked us to update our AI timelines estimate for their upcoming literature review on TAI timelines. We met on 2023-01-21 to discuss our predictions about when advanced AI systems will arrive.Forecasts:Definition of AGIWe used the following definition to determine the “moment at which AGI is considered to have arrived,” building on this Metaculus question:The moment that a system capable of passing the adversarial Turing test against a top-5% human who has access to experts on various topics is developed.More concretely:A Turing test is said to be “adversarial” if the human judges make a good-faith attempt to unmask the AI as an impostor, and the human confederates make a good-faith attempt to demonstrate that they are humans.An AI is said to “pass” a Turing test if at least half of judges rated the AI as more human than at least third of the human confederates.This definition of AGI is not unproblematic, e.g., it’s possible that AGI could be unmasked long after its economic value and capabilities are very high. We chose to use an imperfect definition and indicated to forecasters that they should interpret the definition not “as is” but “in spirit” to avoid annoying edge cases.Individual forecastsP(AGI by 2030)P(AGI by 2050)P(AGI by 2100)P(AGI by this year) = 10%P(AGI by this year) = 50%P(AGI by this year) = 90%F1F3F4F5F6F7F8F90.390.750.7820282034N/A0.280.70.872027203921200.260.580.932025203920880.350.730.912025203720750.40.650.820252035N/A0.330.650.82026203722500.20.50.72026205022000.230.440.67202620602250AggregateP(AGI by 2030)P(AGI by 2050)P(AGI by 2100)P(AGI by this year) = 10%P(AGI by this year) = 50%P(AGI by this year) = 90%mean:0.310.630.81202620412164stdev:0.070.110.091.078.9979.6550% CI:[0.26, 0.35][0.55, 0.70][0.74, 0.87][2025.3, 2026.7][2035, 2047][2110, 2218]80% CI:[0.21, 0.40][0.48, 0.77][0.69, 0.93][2024.6, 2027.4][2030, 2053][2062, 2266]95% CI:[0.16, 0.45][0.41, 0.84][0.62, 0.99][2023.9, 2028.1][2024, 2059][2008, 2320]geomean:0.300.620.802026.0020412163geo odds:0.300.630.82Epistemic status:For Samotsvety track-record see:/Note that this track record comes mostly from questions about geopolitics and technology that resolve within 12 months.Most forecasters have at least read Joe Carlsmith’s report on AI x-risk, Is “Power-Seeking AI an Existential Risk?”. Those who are short on time may have just skimmed the report and/or watched the presentation. We discussed the report section by section over the course of a few weekly meetings.Note also that there might be selection effects at the level of which forecasters chose to participate in this exercise; for example, Samotsvety forecasters who view AI as an important/interesting/etc. topic could have self-selected into the discussion.(Though, the set of forecasters who participated this time and participated last time is very similar.)Update from our previous estimateThe last time we publicly elicited a similar probability from our forecasters, we were at 32% that AGI would be developed in the next 20 years (so by late 2042); and at 73% that it would be developed by 2100. These are a bit lower than our current forecasts. The changes since then can be attributed toWe have gotten more time to think about the topic, and work through considerations and counter-considerations, e.g., the extent to which we should fear selection effects in the types of arguments to which we are exposed.Some of our forecasters still give substantial weight to more skeptical probabilities coming from semi-informative priors, from Lap...]]>
Tue, 24 Jan 2023 05:54:05 +0000 EA - Update to Samotsvety AGI timelines by Misha Yagudin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update to Samotsvety AGI timelines, published by Misha Yagudin on January 24, 2023 on The Effective Altruism Forum.Previously: Samotsvety's AI risk forecasts.Our colleagues at Epoch recently asked us to update our AI timelines estimate for their upcoming literature review on TAI timelines. We met on 2023-01-21 to discuss our predictions about when advanced AI systems will arrive.Forecasts:Definition of AGIWe used the following definition to determine the “moment at which AGI is considered to have arrived,” building on this Metaculus question:The moment that a system capable of passing the adversarial Turing test against a top-5% human who has access to experts on various topics is developed.More concretely:A Turing test is said to be “adversarial” if the human judges make a good-faith attempt to unmask the AI as an impostor, and the human confederates make a good-faith attempt to demonstrate that they are humans.An AI is said to “pass” a Turing test if at least half of judges rated the AI as more human than at least third of the human confederates.This definition of AGI is not unproblematic, e.g., it’s possible that AGI could be unmasked long after its economic value and capabilities are very high. We chose to use an imperfect definition and indicated to forecasters that they should interpret the definition not “as is” but “in spirit” to avoid annoying edge cases.Individual forecastsP(AGI by 2030)P(AGI by 2050)P(AGI by 2100)P(AGI by this year) = 10%P(AGI by this year) = 50%P(AGI by this year) = 90%F1F3F4F5F6F7F8F90.390.750.7820282034N/A0.280.70.872027203921200.260.580.932025203920880.350.730.912025203720750.40.650.820252035N/A0.330.650.82026203722500.20.50.72026205022000.230.440.67202620602250AggregateP(AGI by 2030)P(AGI by 2050)P(AGI by 2100)P(AGI by this year) = 10%P(AGI by this year) = 50%P(AGI by this year) = 90%mean:0.310.630.81202620412164stdev:0.070.110.091.078.9979.6550% CI:[0.26, 0.35][0.55, 0.70][0.74, 0.87][2025.3, 2026.7][2035, 2047][2110, 2218]80% CI:[0.21, 0.40][0.48, 0.77][0.69, 0.93][2024.6, 2027.4][2030, 2053][2062, 2266]95% CI:[0.16, 0.45][0.41, 0.84][0.62, 0.99][2023.9, 2028.1][2024, 2059][2008, 2320]geomean:0.300.620.802026.0020412163geo odds:0.300.630.82Epistemic status:For Samotsvety track-record see:/Note that this track record comes mostly from questions about geopolitics and technology that resolve within 12 months.Most forecasters have at least read Joe Carlsmith’s report on AI x-risk, Is “Power-Seeking AI an Existential Risk?”. Those who are short on time may have just skimmed the report and/or watched the presentation. We discussed the report section by section over the course of a few weekly meetings.Note also that there might be selection effects at the level of which forecasters chose to participate in this exercise; for example, Samotsvety forecasters who view AI as an important/interesting/etc. topic could have self-selected into the discussion.(Though, the set of forecasters who participated this time and participated last time is very similar.)Update from our previous estimateThe last time we publicly elicited a similar probability from our forecasters, we were at 32% that AGI would be developed in the next 20 years (so by late 2042); and at 73% that it would be developed by 2100. These are a bit lower than our current forecasts. The changes since then can be attributed toWe have gotten more time to think about the topic, and work through considerations and counter-considerations, e.g., the extent to which we should fear selection effects in the types of arguments to which we are exposed.Some of our forecasters still give substantial weight to more skeptical probabilities coming from semi-informative priors, from Lap...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update to Samotsvety AGI timelines, published by Misha Yagudin on January 24, 2023 on The Effective Altruism Forum.Previously: Samotsvety's AI risk forecasts.Our colleagues at Epoch recently asked us to update our AI timelines estimate for their upcoming literature review on TAI timelines. We met on 2023-01-21 to discuss our predictions about when advanced AI systems will arrive.Forecasts:Definition of AGIWe used the following definition to determine the “moment at which AGI is considered to have arrived,” building on this Metaculus question:The moment that a system capable of passing the adversarial Turing test against a top-5% human who has access to experts on various topics is developed.More concretely:A Turing test is said to be “adversarial” if the human judges make a good-faith attempt to unmask the AI as an impostor, and the human confederates make a good-faith attempt to demonstrate that they are humans.An AI is said to “pass” a Turing test if at least half of judges rated the AI as more human than at least third of the human confederates.This definition of AGI is not unproblematic, e.g., it’s possible that AGI could be unmasked long after its economic value and capabilities are very high. We chose to use an imperfect definition and indicated to forecasters that they should interpret the definition not “as is” but “in spirit” to avoid annoying edge cases.Individual forecastsP(AGI by 2030)P(AGI by 2050)P(AGI by 2100)P(AGI by this year) = 10%P(AGI by this year) = 50%P(AGI by this year) = 90%F1F3F4F5F6F7F8F90.390.750.7820282034N/A0.280.70.872027203921200.260.580.932025203920880.350.730.912025203720750.40.650.820252035N/A0.330.650.82026203722500.20.50.72026205022000.230.440.67202620602250AggregateP(AGI by 2030)P(AGI by 2050)P(AGI by 2100)P(AGI by this year) = 10%P(AGI by this year) = 50%P(AGI by this year) = 90%mean:0.310.630.81202620412164stdev:0.070.110.091.078.9979.6550% CI:[0.26, 0.35][0.55, 0.70][0.74, 0.87][2025.3, 2026.7][2035, 2047][2110, 2218]80% CI:[0.21, 0.40][0.48, 0.77][0.69, 0.93][2024.6, 2027.4][2030, 2053][2062, 2266]95% CI:[0.16, 0.45][0.41, 0.84][0.62, 0.99][2023.9, 2028.1][2024, 2059][2008, 2320]geomean:0.300.620.802026.0020412163geo odds:0.300.630.82Epistemic status:For Samotsvety track-record see:/Note that this track record comes mostly from questions about geopolitics and technology that resolve within 12 months.Most forecasters have at least read Joe Carlsmith’s report on AI x-risk, Is “Power-Seeking AI an Existential Risk?”. Those who are short on time may have just skimmed the report and/or watched the presentation. We discussed the report section by section over the course of a few weekly meetings.Note also that there might be selection effects at the level of which forecasters chose to participate in this exercise; for example, Samotsvety forecasters who view AI as an important/interesting/etc. topic could have self-selected into the discussion.(Though, the set of forecasters who participated this time and participated last time is very similar.)Update from our previous estimateThe last time we publicly elicited a similar probability from our forecasters, we were at 32% that AGI would be developed in the next 20 years (so by late 2042); and at 73% that it would be developed by 2100. These are a bit lower than our current forecasts. The changes since then can be attributed toWe have gotten more time to think about the topic, and work through considerations and counter-considerations, e.g., the extent to which we should fear selection effects in the types of arguments to which we are exposed.Some of our forecasters still give substantial weight to more skeptical probabilities coming from semi-informative priors, from Lap...]]>
Misha Yagudin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:06 None full 4554
KCEjmnKLG6Ew9Kphc_EA EA - Forum + LW relationship: What is the effect? by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forum + LW relationship: What is the effect?, published by Rockwell on January 23, 2023 on The Effective Altruism Forum.At the risk of starting a messy discussion, I'm curious how folks feel the heavy linking between the EA Forum and LessWrong affects both content on the Forum itself and the EA community more generally.I know many people identify as part of both spaces (and I myself peruse LW), but I'm wondering if the connection has larger cultural effects, from normalized writing styles to a perhaps disproportionate rationalist representation. Thoughts?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rockwell https://forum.effectivealtruism.org/posts/KCEjmnKLG6Ew9Kphc/forum-lw-relationship-what-is-the-effect Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forum + LW relationship: What is the effect?, published by Rockwell on January 23, 2023 on The Effective Altruism Forum.At the risk of starting a messy discussion, I'm curious how folks feel the heavy linking between the EA Forum and LessWrong affects both content on the Forum itself and the EA community more generally.I know many people identify as part of both spaces (and I myself peruse LW), but I'm wondering if the connection has larger cultural effects, from normalized writing styles to a perhaps disproportionate rationalist representation. Thoughts?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 23 Jan 2023 22:38:26 +0000 EA - Forum + LW relationship: What is the effect? by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forum + LW relationship: What is the effect?, published by Rockwell on January 23, 2023 on The Effective Altruism Forum.At the risk of starting a messy discussion, I'm curious how folks feel the heavy linking between the EA Forum and LessWrong affects both content on the Forum itself and the EA community more generally.I know many people identify as part of both spaces (and I myself peruse LW), but I'm wondering if the connection has larger cultural effects, from normalized writing styles to a perhaps disproportionate rationalist representation. Thoughts?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forum + LW relationship: What is the effect?, published by Rockwell on January 23, 2023 on The Effective Altruism Forum.At the risk of starting a messy discussion, I'm curious how folks feel the heavy linking between the EA Forum and LessWrong affects both content on the Forum itself and the EA community more generally.I know many people identify as part of both spaces (and I myself peruse LW), but I'm wondering if the connection has larger cultural effects, from normalized writing styles to a perhaps disproportionate rationalist representation. Thoughts?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rockwell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:51 None full 4547
zsHCYHiMoJvywyKaa_EA EA - (Incorrectly) Overemphasizing Effective Careers by Davidmanheim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: (Incorrectly) Overemphasizing Effective Careers, published by Davidmanheim on January 23, 2023 on The Effective Altruism Forum.Effective altruism can be compatible with most moral viewpoints, and there is nothing fundamental about effective altruism that requires it to be a near-exclusive focus. There is, however, an analysis of effective altruism that seems to implicitly disagree, sneaking in a near-complete focus on effectiveness via implicit assumptions in analysis. I think that this type of analysis, which is (incorrectly) assumed by many people I have spoken to in the community, is not a necessary conclusion, and in fact runs counter to the assumptions for fiscal effective altruism. Effective careers are great, but making them the only "real" way to be an Effective Altruist should be strongly rejected.The mistaken analysis goes as follows; if we are balancing priorities, and take a consequentialist view, we should prioritize our decisions on the basis of overall impact. However, effective altruism has shown that different interventions differ in their impact by orders of magnitude. Therefore, if we give any non-trivial weight to improving the world, then it is such a large impact, it will overbalance other considerations.This can be illustrated with a notional career choice model. In this model, someone has several different goals. Perhaps they wish to have a family, and think that impacts on their family is almost half of the total reason to pick a career, while their personal happiness is another almost-half. Finally, in line with the financial obligation to give 10% of their income, they “tithe” their career choice, assigning 10% of the weight to their positive impact on the broader world. Now, they must choose between an “effective career,” or working a typical office job.FactorFamilyPersonal HappinessBeneficenceOverallWeight45%45%10%100%PriorityRatingImpactRatingImpactRatingImpactRatingImpactEffective Career3/1032/1029/1010003.15102.5Office Job9/1099/1091/1018.28.2As the table illustrates, there are two ways to weigh the options; either rating how preferred the option is as a preference, or assessing its impact. The office job effectively only has impact via donations, while the effective career addresses a global need. The first method leads to choosing the office job, the second to choosing the effective career. In this way, the non-trivial weight put on impact becomes overwhelming. (I will note that this analysis often fails to account for another issue that Effective Altruism used to focus on more strongly, that of replaceability. But even assuming a neglected area, where replaceability is negligible, the totalizing critique obtains.)The equivalent fiscal analysis certainly fails; dedicating 10% of your money to effective causes does not imply that if the cause is very effective, it requires you to give more than 10% of your money. This is not to say that the second analysis is confused - but it does require accepting that under a sufficiently utilitarian viewpoint, where your decisions weight your benefit against others, even greatly prioritizing your own happiness creates a nearly-totalizing obligation to others. And that's not what Effective Altruism generally suggests.And to be clear, my claim is not particularly novel. To quote a recent EA Forum post from 80,000 hours: “It feels important that working to improve the world doesn’t prevent me from achieving any of the other things that are really significant to me in life — for example, having a good relationship with my husband and having close, long-term friendships.”It seems important, however, to clarify that in many scenarios it simply is not the case that an effective career requires anything like the degree of sacrifice that the example above implies. While charities and altruistic end...]]>
Davidmanheim https://forum.effectivealtruism.org/posts/zsHCYHiMoJvywyKaa/incorrectly-overemphasizing-effective-careers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: (Incorrectly) Overemphasizing Effective Careers, published by Davidmanheim on January 23, 2023 on The Effective Altruism Forum.Effective altruism can be compatible with most moral viewpoints, and there is nothing fundamental about effective altruism that requires it to be a near-exclusive focus. There is, however, an analysis of effective altruism that seems to implicitly disagree, sneaking in a near-complete focus on effectiveness via implicit assumptions in analysis. I think that this type of analysis, which is (incorrectly) assumed by many people I have spoken to in the community, is not a necessary conclusion, and in fact runs counter to the assumptions for fiscal effective altruism. Effective careers are great, but making them the only "real" way to be an Effective Altruist should be strongly rejected.The mistaken analysis goes as follows; if we are balancing priorities, and take a consequentialist view, we should prioritize our decisions on the basis of overall impact. However, effective altruism has shown that different interventions differ in their impact by orders of magnitude. Therefore, if we give any non-trivial weight to improving the world, then it is such a large impact, it will overbalance other considerations.This can be illustrated with a notional career choice model. In this model, someone has several different goals. Perhaps they wish to have a family, and think that impacts on their family is almost half of the total reason to pick a career, while their personal happiness is another almost-half. Finally, in line with the financial obligation to give 10% of their income, they “tithe” their career choice, assigning 10% of the weight to their positive impact on the broader world. Now, they must choose between an “effective career,” or working a typical office job.FactorFamilyPersonal HappinessBeneficenceOverallWeight45%45%10%100%PriorityRatingImpactRatingImpactRatingImpactRatingImpactEffective Career3/1032/1029/1010003.15102.5Office Job9/1099/1091/1018.28.2As the table illustrates, there are two ways to weigh the options; either rating how preferred the option is as a preference, or assessing its impact. The office job effectively only has impact via donations, while the effective career addresses a global need. The first method leads to choosing the office job, the second to choosing the effective career. In this way, the non-trivial weight put on impact becomes overwhelming. (I will note that this analysis often fails to account for another issue that Effective Altruism used to focus on more strongly, that of replaceability. But even assuming a neglected area, where replaceability is negligible, the totalizing critique obtains.)The equivalent fiscal analysis certainly fails; dedicating 10% of your money to effective causes does not imply that if the cause is very effective, it requires you to give more than 10% of your money. This is not to say that the second analysis is confused - but it does require accepting that under a sufficiently utilitarian viewpoint, where your decisions weight your benefit against others, even greatly prioritizing your own happiness creates a nearly-totalizing obligation to others. And that's not what Effective Altruism generally suggests.And to be clear, my claim is not particularly novel. To quote a recent EA Forum post from 80,000 hours: “It feels important that working to improve the world doesn’t prevent me from achieving any of the other things that are really significant to me in life — for example, having a good relationship with my husband and having close, long-term friendships.”It seems important, however, to clarify that in many scenarios it simply is not the case that an effective career requires anything like the degree of sacrifice that the example above implies. While charities and altruistic end...]]>
Mon, 23 Jan 2023 22:37:43 +0000 EA - (Incorrectly) Overemphasizing Effective Careers by Davidmanheim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: (Incorrectly) Overemphasizing Effective Careers, published by Davidmanheim on January 23, 2023 on The Effective Altruism Forum.Effective altruism can be compatible with most moral viewpoints, and there is nothing fundamental about effective altruism that requires it to be a near-exclusive focus. There is, however, an analysis of effective altruism that seems to implicitly disagree, sneaking in a near-complete focus on effectiveness via implicit assumptions in analysis. I think that this type of analysis, which is (incorrectly) assumed by many people I have spoken to in the community, is not a necessary conclusion, and in fact runs counter to the assumptions for fiscal effective altruism. Effective careers are great, but making them the only "real" way to be an Effective Altruist should be strongly rejected.The mistaken analysis goes as follows; if we are balancing priorities, and take a consequentialist view, we should prioritize our decisions on the basis of overall impact. However, effective altruism has shown that different interventions differ in their impact by orders of magnitude. Therefore, if we give any non-trivial weight to improving the world, then it is such a large impact, it will overbalance other considerations.This can be illustrated with a notional career choice model. In this model, someone has several different goals. Perhaps they wish to have a family, and think that impacts on their family is almost half of the total reason to pick a career, while their personal happiness is another almost-half. Finally, in line with the financial obligation to give 10% of their income, they “tithe” their career choice, assigning 10% of the weight to their positive impact on the broader world. Now, they must choose between an “effective career,” or working a typical office job.FactorFamilyPersonal HappinessBeneficenceOverallWeight45%45%10%100%PriorityRatingImpactRatingImpactRatingImpactRatingImpactEffective Career3/1032/1029/1010003.15102.5Office Job9/1099/1091/1018.28.2As the table illustrates, there are two ways to weigh the options; either rating how preferred the option is as a preference, or assessing its impact. The office job effectively only has impact via donations, while the effective career addresses a global need. The first method leads to choosing the office job, the second to choosing the effective career. In this way, the non-trivial weight put on impact becomes overwhelming. (I will note that this analysis often fails to account for another issue that Effective Altruism used to focus on more strongly, that of replaceability. But even assuming a neglected area, where replaceability is negligible, the totalizing critique obtains.)The equivalent fiscal analysis certainly fails; dedicating 10% of your money to effective causes does not imply that if the cause is very effective, it requires you to give more than 10% of your money. This is not to say that the second analysis is confused - but it does require accepting that under a sufficiently utilitarian viewpoint, where your decisions weight your benefit against others, even greatly prioritizing your own happiness creates a nearly-totalizing obligation to others. And that's not what Effective Altruism generally suggests.And to be clear, my claim is not particularly novel. To quote a recent EA Forum post from 80,000 hours: “It feels important that working to improve the world doesn’t prevent me from achieving any of the other things that are really significant to me in life — for example, having a good relationship with my husband and having close, long-term friendships.”It seems important, however, to clarify that in many scenarios it simply is not the case that an effective career requires anything like the degree of sacrifice that the example above implies. While charities and altruistic end...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: (Incorrectly) Overemphasizing Effective Careers, published by Davidmanheim on January 23, 2023 on The Effective Altruism Forum.Effective altruism can be compatible with most moral viewpoints, and there is nothing fundamental about effective altruism that requires it to be a near-exclusive focus. There is, however, an analysis of effective altruism that seems to implicitly disagree, sneaking in a near-complete focus on effectiveness via implicit assumptions in analysis. I think that this type of analysis, which is (incorrectly) assumed by many people I have spoken to in the community, is not a necessary conclusion, and in fact runs counter to the assumptions for fiscal effective altruism. Effective careers are great, but making them the only "real" way to be an Effective Altruist should be strongly rejected.The mistaken analysis goes as follows; if we are balancing priorities, and take a consequentialist view, we should prioritize our decisions on the basis of overall impact. However, effective altruism has shown that different interventions differ in their impact by orders of magnitude. Therefore, if we give any non-trivial weight to improving the world, then it is such a large impact, it will overbalance other considerations.This can be illustrated with a notional career choice model. In this model, someone has several different goals. Perhaps they wish to have a family, and think that impacts on their family is almost half of the total reason to pick a career, while their personal happiness is another almost-half. Finally, in line with the financial obligation to give 10% of their income, they “tithe” their career choice, assigning 10% of the weight to their positive impact on the broader world. Now, they must choose between an “effective career,” or working a typical office job.FactorFamilyPersonal HappinessBeneficenceOverallWeight45%45%10%100%PriorityRatingImpactRatingImpactRatingImpactRatingImpactEffective Career3/1032/1029/1010003.15102.5Office Job9/1099/1091/1018.28.2As the table illustrates, there are two ways to weigh the options; either rating how preferred the option is as a preference, or assessing its impact. The office job effectively only has impact via donations, while the effective career addresses a global need. The first method leads to choosing the office job, the second to choosing the effective career. In this way, the non-trivial weight put on impact becomes overwhelming. (I will note that this analysis often fails to account for another issue that Effective Altruism used to focus on more strongly, that of replaceability. But even assuming a neglected area, where replaceability is negligible, the totalizing critique obtains.)The equivalent fiscal analysis certainly fails; dedicating 10% of your money to effective causes does not imply that if the cause is very effective, it requires you to give more than 10% of your money. This is not to say that the second analysis is confused - but it does require accepting that under a sufficiently utilitarian viewpoint, where your decisions weight your benefit against others, even greatly prioritizing your own happiness creates a nearly-totalizing obligation to others. And that's not what Effective Altruism generally suggests.And to be clear, my claim is not particularly novel. To quote a recent EA Forum post from 80,000 hours: “It feels important that working to improve the world doesn’t prevent me from achieving any of the other things that are really significant to me in life — for example, having a good relationship with my husband and having close, long-term friendships.”It seems important, however, to clarify that in many scenarios it simply is not the case that an effective career requires anything like the degree of sacrifice that the example above implies. While charities and altruistic end...]]>
Davidmanheim https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:57 None full 4549
L6ZmggEJw8ri4KB8X_EA EA - My highly personal skepticism braindump on existential risk from artificial intelligence. by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My highly personal skepticism braindump on existential risk from artificial intelligence., published by NunoSempere on January 23, 2023 on The Effective Altruism Forum.SummaryThis document seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations likeselection effects at the level of which arguments are discovered and distributedcommunity epistemic problems, andincreased uncertainty due to chains of reasoning with imperfect conceptsas real and important.I still think that existential risk from AGI is important. But I don’t view it as certain or close to certain, and I think that something is going wrong when people see it as all but assured.Discussion of weaknessesI think that this document was important for me personally to write up. However, I also think that it has some significant weaknesses:There is some danger in verbalization leading to rationalization.It alternates controversial points with points that are dead obvious.It is to a large extent a reaction to my imperfectly digested understanding of a worldview pushed around the ESPR/CFAR/MIRI/LessWrong cluster from 2016-2019, which nobody might hold now.In response to these weaknesses:I want to keep in mind that do want to give weight to my gut feeling, and that I might want to updating on a feeling of uneasiness rather than on its accompanying reasonings or rationalizations.Readers might want to keep in mind that parts of this post may look like a bravery debate. But on the other hand, I've seen that the points which people consider obvious and uncontroversial vary from person to person, so I don’t get the impression that there is that much I can do on my end for the effort that I’m willing to spend.Readers might want to keep in mind that actual AI safety people and AI safety proponents may hold more nuanced views, and that to a large extent I am arguing against a “Nuño of the past” view.Despite these flaws, I think that this text was personally important for me to write up, and it might also have some utility to readers.Uneasiness about chains of reasoning with imperfect conceptsUneasiness about conjunctivenessIt’s not clear to me how conjunctive AI doom is. Proponents will argue that it is very disjunctive, that there are lot of ways that things could go wrong. I’m not so sure.In particular, when you see that a parsimonious decomposition (like Carlsmith’s) tends to generate lower estimates, you can conclude:That the method is producing a biased result, and trying to account for thatThat the topic under discussion is, in itself, conjunctive: that there are several steps that need to be satisfied. For example, “AI causing a big catastrophe” and “AI causing human exinction given that it has caused a large catastrophe” seem like they are two distinct steps that would need to be modelled separately,I feel uneasy about only doing 1.) and not doing 2.) I think that the principled answer might be to split some probability into each case. Overall, though, I’d tend to think that AI risk is more conjunctive than it is disjunctiveI also feel uneasy about the social pressure in my particular social bubble. I think that the social pressure is for me to just accept Nate Soares’ argument here that Carlsmith’s method is biased, rather than to probabilistically incorporate it into my calculations. As in “oh, yes, people know that conjunctive chains of reasoning have been debunked, Nate Soares addressed that in a blogpost saying that they are biased”.I don’t trust the conceptsMy understanding is that MIRI and others’ work started in the 2000s. As such, their understanding of the shape that an AI would take doesn’t particularly resemble current deep learning approaches.In particular, I think that man...]]>
NunoSempere https://forum.effectivealtruism.org/posts/L6ZmggEJw8ri4KB8X/my-highly-personal-skepticism-braindump-on-existential-risk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My highly personal skepticism braindump on existential risk from artificial intelligence., published by NunoSempere on January 23, 2023 on The Effective Altruism Forum.SummaryThis document seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations likeselection effects at the level of which arguments are discovered and distributedcommunity epistemic problems, andincreased uncertainty due to chains of reasoning with imperfect conceptsas real and important.I still think that existential risk from AGI is important. But I don’t view it as certain or close to certain, and I think that something is going wrong when people see it as all but assured.Discussion of weaknessesI think that this document was important for me personally to write up. However, I also think that it has some significant weaknesses:There is some danger in verbalization leading to rationalization.It alternates controversial points with points that are dead obvious.It is to a large extent a reaction to my imperfectly digested understanding of a worldview pushed around the ESPR/CFAR/MIRI/LessWrong cluster from 2016-2019, which nobody might hold now.In response to these weaknesses:I want to keep in mind that do want to give weight to my gut feeling, and that I might want to updating on a feeling of uneasiness rather than on its accompanying reasonings or rationalizations.Readers might want to keep in mind that parts of this post may look like a bravery debate. But on the other hand, I've seen that the points which people consider obvious and uncontroversial vary from person to person, so I don’t get the impression that there is that much I can do on my end for the effort that I’m willing to spend.Readers might want to keep in mind that actual AI safety people and AI safety proponents may hold more nuanced views, and that to a large extent I am arguing against a “Nuño of the past” view.Despite these flaws, I think that this text was personally important for me to write up, and it might also have some utility to readers.Uneasiness about chains of reasoning with imperfect conceptsUneasiness about conjunctivenessIt’s not clear to me how conjunctive AI doom is. Proponents will argue that it is very disjunctive, that there are lot of ways that things could go wrong. I’m not so sure.In particular, when you see that a parsimonious decomposition (like Carlsmith’s) tends to generate lower estimates, you can conclude:That the method is producing a biased result, and trying to account for thatThat the topic under discussion is, in itself, conjunctive: that there are several steps that need to be satisfied. For example, “AI causing a big catastrophe” and “AI causing human exinction given that it has caused a large catastrophe” seem like they are two distinct steps that would need to be modelled separately,I feel uneasy about only doing 1.) and not doing 2.) I think that the principled answer might be to split some probability into each case. Overall, though, I’d tend to think that AI risk is more conjunctive than it is disjunctiveI also feel uneasy about the social pressure in my particular social bubble. I think that the social pressure is for me to just accept Nate Soares’ argument here that Carlsmith’s method is biased, rather than to probabilistically incorporate it into my calculations. As in “oh, yes, people know that conjunctive chains of reasoning have been debunked, Nate Soares addressed that in a blogpost saying that they are biased”.I don’t trust the conceptsMy understanding is that MIRI and others’ work started in the 2000s. As such, their understanding of the shape that an AI would take doesn’t particularly resemble current deep learning approaches.In particular, I think that man...]]>
Mon, 23 Jan 2023 21:27:54 +0000 EA - My highly personal skepticism braindump on existential risk from artificial intelligence. by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My highly personal skepticism braindump on existential risk from artificial intelligence., published by NunoSempere on January 23, 2023 on The Effective Altruism Forum.SummaryThis document seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations likeselection effects at the level of which arguments are discovered and distributedcommunity epistemic problems, andincreased uncertainty due to chains of reasoning with imperfect conceptsas real and important.I still think that existential risk from AGI is important. But I don’t view it as certain or close to certain, and I think that something is going wrong when people see it as all but assured.Discussion of weaknessesI think that this document was important for me personally to write up. However, I also think that it has some significant weaknesses:There is some danger in verbalization leading to rationalization.It alternates controversial points with points that are dead obvious.It is to a large extent a reaction to my imperfectly digested understanding of a worldview pushed around the ESPR/CFAR/MIRI/LessWrong cluster from 2016-2019, which nobody might hold now.In response to these weaknesses:I want to keep in mind that do want to give weight to my gut feeling, and that I might want to updating on a feeling of uneasiness rather than on its accompanying reasonings or rationalizations.Readers might want to keep in mind that parts of this post may look like a bravery debate. But on the other hand, I've seen that the points which people consider obvious and uncontroversial vary from person to person, so I don’t get the impression that there is that much I can do on my end for the effort that I’m willing to spend.Readers might want to keep in mind that actual AI safety people and AI safety proponents may hold more nuanced views, and that to a large extent I am arguing against a “Nuño of the past” view.Despite these flaws, I think that this text was personally important for me to write up, and it might also have some utility to readers.Uneasiness about chains of reasoning with imperfect conceptsUneasiness about conjunctivenessIt’s not clear to me how conjunctive AI doom is. Proponents will argue that it is very disjunctive, that there are lot of ways that things could go wrong. I’m not so sure.In particular, when you see that a parsimonious decomposition (like Carlsmith’s) tends to generate lower estimates, you can conclude:That the method is producing a biased result, and trying to account for thatThat the topic under discussion is, in itself, conjunctive: that there are several steps that need to be satisfied. For example, “AI causing a big catastrophe” and “AI causing human exinction given that it has caused a large catastrophe” seem like they are two distinct steps that would need to be modelled separately,I feel uneasy about only doing 1.) and not doing 2.) I think that the principled answer might be to split some probability into each case. Overall, though, I’d tend to think that AI risk is more conjunctive than it is disjunctiveI also feel uneasy about the social pressure in my particular social bubble. I think that the social pressure is for me to just accept Nate Soares’ argument here that Carlsmith’s method is biased, rather than to probabilistically incorporate it into my calculations. As in “oh, yes, people know that conjunctive chains of reasoning have been debunked, Nate Soares addressed that in a blogpost saying that they are biased”.I don’t trust the conceptsMy understanding is that MIRI and others’ work started in the 2000s. As such, their understanding of the shape that an AI would take doesn’t particularly resemble current deep learning approaches.In particular, I think that man...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My highly personal skepticism braindump on existential risk from artificial intelligence., published by NunoSempere on January 23, 2023 on The Effective Altruism Forum.SummaryThis document seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations likeselection effects at the level of which arguments are discovered and distributedcommunity epistemic problems, andincreased uncertainty due to chains of reasoning with imperfect conceptsas real and important.I still think that existential risk from AGI is important. But I don’t view it as certain or close to certain, and I think that something is going wrong when people see it as all but assured.Discussion of weaknessesI think that this document was important for me personally to write up. However, I also think that it has some significant weaknesses:There is some danger in verbalization leading to rationalization.It alternates controversial points with points that are dead obvious.It is to a large extent a reaction to my imperfectly digested understanding of a worldview pushed around the ESPR/CFAR/MIRI/LessWrong cluster from 2016-2019, which nobody might hold now.In response to these weaknesses:I want to keep in mind that do want to give weight to my gut feeling, and that I might want to updating on a feeling of uneasiness rather than on its accompanying reasonings or rationalizations.Readers might want to keep in mind that parts of this post may look like a bravery debate. But on the other hand, I've seen that the points which people consider obvious and uncontroversial vary from person to person, so I don’t get the impression that there is that much I can do on my end for the effort that I’m willing to spend.Readers might want to keep in mind that actual AI safety people and AI safety proponents may hold more nuanced views, and that to a large extent I am arguing against a “Nuño of the past” view.Despite these flaws, I think that this text was personally important for me to write up, and it might also have some utility to readers.Uneasiness about chains of reasoning with imperfect conceptsUneasiness about conjunctivenessIt’s not clear to me how conjunctive AI doom is. Proponents will argue that it is very disjunctive, that there are lot of ways that things could go wrong. I’m not so sure.In particular, when you see that a parsimonious decomposition (like Carlsmith’s) tends to generate lower estimates, you can conclude:That the method is producing a biased result, and trying to account for thatThat the topic under discussion is, in itself, conjunctive: that there are several steps that need to be satisfied. For example, “AI causing a big catastrophe” and “AI causing human exinction given that it has caused a large catastrophe” seem like they are two distinct steps that would need to be modelled separately,I feel uneasy about only doing 1.) and not doing 2.) I think that the principled answer might be to split some probability into each case. Overall, though, I’d tend to think that AI risk is more conjunctive than it is disjunctiveI also feel uneasy about the social pressure in my particular social bubble. I think that the social pressure is for me to just accept Nate Soares’ argument here that Carlsmith’s method is biased, rather than to probabilistically incorporate it into my calculations. As in “oh, yes, people know that conjunctive chains of reasoning have been debunked, Nate Soares addressed that in a blogpost saying that they are biased”.I don’t trust the conceptsMy understanding is that MIRI and others’ work started in the 2000s. As such, their understanding of the shape that an AI would take doesn’t particularly resemble current deep learning approaches.In particular, I think that man...]]>
NunoSempere https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 22:04 None full 4548
5WQQvoHrwAoMgfJjR_EA EA - Announcing Introductions for Collaborative Truth Seeking Tools by brook Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Introductions for Collaborative Truth Seeking Tools, published by brook on January 23, 2023 on The Effective Altruism Forum.EA involves working together remotely a lot. Tools exist to make this experience better, but they may be underused.In particular, we're trying to do truth seeking. It's easier to do that when it's easy to explain your view, understand others', and quickly see agreements, disagreements and cruxes. In addition, taking some time to lay out your model clearly is probably good for your own thinking and understanding.This sequence will introduce tools for doing that, alongside tutorials & guidance. Each day's post will also have a challenge you can use the tool to solve, to get a feel for the tool and see if it's for you.We’ll also be running a short session each day in the EA GatherTown; if you want to try the tools as part of a group, make sure to come along!Why do this?EA involves a lot of collaborating, often remotely, and in particular communicating complex models or concepts.There are a lot of software tools available to help with this, but they don't seem to be used as often as they could be, or as broadly. Often, when tutorials exist, they're poorly written and/or not targeted at the way EAs would likely be using this tool.This project has 3 key goals:Increase the use of collaborative truth-seeking tools (so EAs can find truth better together)Improve how EAs are using these tools (by introducing them to the key features quickly)Save EAs time (searching for tutorials or looking through poorly-written documentation)What is it?I've collated tutorials (where they exist) and written or recorded them (where they don't) which I think are useful for EAs. There'll be a forum post each day for the next 7 week days, each consisting of video and text tutorials for either one complex tool or a handful of smaller tools.Sometimes posts will consist primarily of links elsewhere, if good tutorials already exist. To help people find tools which might be useful for a given project, there's also a wiki page with short summaries of tools.Here are the posts you can expect to see over the next [??]:Guesstimate: Why and How to Use itVisualisation of Probability MassSquiggle: Why and How to Use itLoom: Why and How to Use itExcalidraw: Why and How to Use itForecasting tools and Prediction Markets: Why and HowPolis: Why and How to Use itWhat kinds of tasks are we talking about?To make this a little more concrete than 'collaborative truth seeking', here are some examples of tasks EAs might want to do which could benefit from the use of one of these tools:Get feedback on your model of the impact of a grant to provide malaria bed nets (guesstimate)Summarise a forum post visually (excalidraw)Summarise a forum post in video form (loom)View an aggregate prediction for when we might expect AGI before deciding which area of AI to work in, and how (metaculus)Compare theories of what kinds of interventions might be useful to reduce biorisk (excalidraw, guesstimate)Calculate the expected value of trialling meditating for 2 weeks (guesstimate)Estimate the success of an ongoing project providing access to birth control for monitoring and evaluation (squiggle)Record a 3 minute video summarising your progress instead of having an hour-long meeting with your supervisor (loom)Learn to quantitatively forecast with a short feedback loop (pastcasting)So, if you've tried to do tasks of this nature, keep an eye out! Tomorrow: Guesstimate, a tool for quantifying intuitions.As a final request, we’d also really appreciate any feedback you have about the tools or the posts, or if you have any suggestions for tools you think deserve to have posts made about them!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please ...]]>
brook https://forum.effectivealtruism.org/posts/5WQQvoHrwAoMgfJjR/announcing-introductions-for-collaborative-truth-seeking Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Introductions for Collaborative Truth Seeking Tools, published by brook on January 23, 2023 on The Effective Altruism Forum.EA involves working together remotely a lot. Tools exist to make this experience better, but they may be underused.In particular, we're trying to do truth seeking. It's easier to do that when it's easy to explain your view, understand others', and quickly see agreements, disagreements and cruxes. In addition, taking some time to lay out your model clearly is probably good for your own thinking and understanding.This sequence will introduce tools for doing that, alongside tutorials & guidance. Each day's post will also have a challenge you can use the tool to solve, to get a feel for the tool and see if it's for you.We’ll also be running a short session each day in the EA GatherTown; if you want to try the tools as part of a group, make sure to come along!Why do this?EA involves a lot of collaborating, often remotely, and in particular communicating complex models or concepts.There are a lot of software tools available to help with this, but they don't seem to be used as often as they could be, or as broadly. Often, when tutorials exist, they're poorly written and/or not targeted at the way EAs would likely be using this tool.This project has 3 key goals:Increase the use of collaborative truth-seeking tools (so EAs can find truth better together)Improve how EAs are using these tools (by introducing them to the key features quickly)Save EAs time (searching for tutorials or looking through poorly-written documentation)What is it?I've collated tutorials (where they exist) and written or recorded them (where they don't) which I think are useful for EAs. There'll be a forum post each day for the next 7 week days, each consisting of video and text tutorials for either one complex tool or a handful of smaller tools.Sometimes posts will consist primarily of links elsewhere, if good tutorials already exist. To help people find tools which might be useful for a given project, there's also a wiki page with short summaries of tools.Here are the posts you can expect to see over the next [??]:Guesstimate: Why and How to Use itVisualisation of Probability MassSquiggle: Why and How to Use itLoom: Why and How to Use itExcalidraw: Why and How to Use itForecasting tools and Prediction Markets: Why and HowPolis: Why and How to Use itWhat kinds of tasks are we talking about?To make this a little more concrete than 'collaborative truth seeking', here are some examples of tasks EAs might want to do which could benefit from the use of one of these tools:Get feedback on your model of the impact of a grant to provide malaria bed nets (guesstimate)Summarise a forum post visually (excalidraw)Summarise a forum post in video form (loom)View an aggregate prediction for when we might expect AGI before deciding which area of AI to work in, and how (metaculus)Compare theories of what kinds of interventions might be useful to reduce biorisk (excalidraw, guesstimate)Calculate the expected value of trialling meditating for 2 weeks (guesstimate)Estimate the success of an ongoing project providing access to birth control for monitoring and evaluation (squiggle)Record a 3 minute video summarising your progress instead of having an hour-long meeting with your supervisor (loom)Learn to quantitatively forecast with a short feedback loop (pastcasting)So, if you've tried to do tasks of this nature, keep an eye out! Tomorrow: Guesstimate, a tool for quantifying intuitions.As a final request, we’d also really appreciate any feedback you have about the tools or the posts, or if you have any suggestions for tools you think deserve to have posts made about them!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please ...]]>
Mon, 23 Jan 2023 20:24:03 +0000 EA - Announcing Introductions for Collaborative Truth Seeking Tools by brook Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Introductions for Collaborative Truth Seeking Tools, published by brook on January 23, 2023 on The Effective Altruism Forum.EA involves working together remotely a lot. Tools exist to make this experience better, but they may be underused.In particular, we're trying to do truth seeking. It's easier to do that when it's easy to explain your view, understand others', and quickly see agreements, disagreements and cruxes. In addition, taking some time to lay out your model clearly is probably good for your own thinking and understanding.This sequence will introduce tools for doing that, alongside tutorials & guidance. Each day's post will also have a challenge you can use the tool to solve, to get a feel for the tool and see if it's for you.We’ll also be running a short session each day in the EA GatherTown; if you want to try the tools as part of a group, make sure to come along!Why do this?EA involves a lot of collaborating, often remotely, and in particular communicating complex models or concepts.There are a lot of software tools available to help with this, but they don't seem to be used as often as they could be, or as broadly. Often, when tutorials exist, they're poorly written and/or not targeted at the way EAs would likely be using this tool.This project has 3 key goals:Increase the use of collaborative truth-seeking tools (so EAs can find truth better together)Improve how EAs are using these tools (by introducing them to the key features quickly)Save EAs time (searching for tutorials or looking through poorly-written documentation)What is it?I've collated tutorials (where they exist) and written or recorded them (where they don't) which I think are useful for EAs. There'll be a forum post each day for the next 7 week days, each consisting of video and text tutorials for either one complex tool or a handful of smaller tools.Sometimes posts will consist primarily of links elsewhere, if good tutorials already exist. To help people find tools which might be useful for a given project, there's also a wiki page with short summaries of tools.Here are the posts you can expect to see over the next [??]:Guesstimate: Why and How to Use itVisualisation of Probability MassSquiggle: Why and How to Use itLoom: Why and How to Use itExcalidraw: Why and How to Use itForecasting tools and Prediction Markets: Why and HowPolis: Why and How to Use itWhat kinds of tasks are we talking about?To make this a little more concrete than 'collaborative truth seeking', here are some examples of tasks EAs might want to do which could benefit from the use of one of these tools:Get feedback on your model of the impact of a grant to provide malaria bed nets (guesstimate)Summarise a forum post visually (excalidraw)Summarise a forum post in video form (loom)View an aggregate prediction for when we might expect AGI before deciding which area of AI to work in, and how (metaculus)Compare theories of what kinds of interventions might be useful to reduce biorisk (excalidraw, guesstimate)Calculate the expected value of trialling meditating for 2 weeks (guesstimate)Estimate the success of an ongoing project providing access to birth control for monitoring and evaluation (squiggle)Record a 3 minute video summarising your progress instead of having an hour-long meeting with your supervisor (loom)Learn to quantitatively forecast with a short feedback loop (pastcasting)So, if you've tried to do tasks of this nature, keep an eye out! Tomorrow: Guesstimate, a tool for quantifying intuitions.As a final request, we’d also really appreciate any feedback you have about the tools or the posts, or if you have any suggestions for tools you think deserve to have posts made about them!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Introductions for Collaborative Truth Seeking Tools, published by brook on January 23, 2023 on The Effective Altruism Forum.EA involves working together remotely a lot. Tools exist to make this experience better, but they may be underused.In particular, we're trying to do truth seeking. It's easier to do that when it's easy to explain your view, understand others', and quickly see agreements, disagreements and cruxes. In addition, taking some time to lay out your model clearly is probably good for your own thinking and understanding.This sequence will introduce tools for doing that, alongside tutorials & guidance. Each day's post will also have a challenge you can use the tool to solve, to get a feel for the tool and see if it's for you.We’ll also be running a short session each day in the EA GatherTown; if you want to try the tools as part of a group, make sure to come along!Why do this?EA involves a lot of collaborating, often remotely, and in particular communicating complex models or concepts.There are a lot of software tools available to help with this, but they don't seem to be used as often as they could be, or as broadly. Often, when tutorials exist, they're poorly written and/or not targeted at the way EAs would likely be using this tool.This project has 3 key goals:Increase the use of collaborative truth-seeking tools (so EAs can find truth better together)Improve how EAs are using these tools (by introducing them to the key features quickly)Save EAs time (searching for tutorials or looking through poorly-written documentation)What is it?I've collated tutorials (where they exist) and written or recorded them (where they don't) which I think are useful for EAs. There'll be a forum post each day for the next 7 week days, each consisting of video and text tutorials for either one complex tool or a handful of smaller tools.Sometimes posts will consist primarily of links elsewhere, if good tutorials already exist. To help people find tools which might be useful for a given project, there's also a wiki page with short summaries of tools.Here are the posts you can expect to see over the next [??]:Guesstimate: Why and How to Use itVisualisation of Probability MassSquiggle: Why and How to Use itLoom: Why and How to Use itExcalidraw: Why and How to Use itForecasting tools and Prediction Markets: Why and HowPolis: Why and How to Use itWhat kinds of tasks are we talking about?To make this a little more concrete than 'collaborative truth seeking', here are some examples of tasks EAs might want to do which could benefit from the use of one of these tools:Get feedback on your model of the impact of a grant to provide malaria bed nets (guesstimate)Summarise a forum post visually (excalidraw)Summarise a forum post in video form (loom)View an aggregate prediction for when we might expect AGI before deciding which area of AI to work in, and how (metaculus)Compare theories of what kinds of interventions might be useful to reduce biorisk (excalidraw, guesstimate)Calculate the expected value of trialling meditating for 2 weeks (guesstimate)Estimate the success of an ongoing project providing access to birth control for monitoring and evaluation (squiggle)Record a 3 minute video summarising your progress instead of having an hour-long meeting with your supervisor (loom)Learn to quantitatively forecast with a short feedback loop (pastcasting)So, if you've tried to do tasks of this nature, keep an eye out! Tomorrow: Guesstimate, a tool for quantifying intuitions.As a final request, we’d also really appreciate any feedback you have about the tools or the posts, or if you have any suggestions for tools you think deserve to have posts made about them!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please ...]]>
brook https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:46 None full 4552
Ly3pz7hzhS5bmSty8_EA EA - My thoughts on parenting and having an impactful career by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My thoughts on parenting and having an impactful career, published by 80000 Hours on January 23, 2023 on The Effective Altruism Forum.When my husband and I decided to have children, we didn’t put much thought into the broader social impact of the decision. We got together at secondary school and had been discussing the fact we were going to have kids since we were 18, long before we found effective altruism.We made the actual decision to have a child much later, but how it would affect our careers or abilities to help others still wasn’t a large factor in the decision. As with most people though, the decision has, in fact, had significant effects on our careers.Raising my son, Leo — now three years old — is one of the great joys of my life, and I’m so happy that my husband and I decided to have him. But having kids can be challenging for anyone, and there may be unique challenges for people who aim to have a positive impact with their careers.I’m currently the director of the one-on-one programme at 80,000 Hours and a fund manager for the Effective Altruism Infrastructure Fund. So I wanted to share my experience with parenting and working for organisations whose mission I care about deeply. Here are my aims:Give readers an example of a working parent who also thinks a lot about 80,000 Hours’ advice.Discuss some of the ways having kids is likely to affect the impact you have in your career, for people who want to consider that when deciding whether to have kids.Discuss challenges people might face in their careers related to having kids and how they might handle them.Help people feel less alone if they’re finding some of the standard parenting advice alienating — particularly any mothers who feel the literature tends to underestimate how much they care about their career.Write out some of the lessons I’ve learned and things I would have liked to have known beforehand (I still find some of this hard to keep in mind!).Start a conversation with the hope that other like-minded parents will share their lessons and suggestions.Highlight some of the ways the effective altruism community supports parents.Note different people find very different advice useful, and people’s situations vary greatly by how many children they have, whether they have a partner and what that person’s situation is like, what family help they have nearby, their socioeconomic condition, and so on. I’ve been very fortunate to live in a wealthy country like the UK with a lot of social support, and I’ve been paid well enough to always meet my needs. My experiences will be most relevant to people who are similarly situated.And some of what follows will be speculative, because I consider counterfactuals and possibilities that are inevitably uncertain. Also, my son is only three, so I have fairly limited experience. I’d love for others to contribute to this conversation and offer additional perspectives.Deciding whether to have childrenIt feels important that working to improve the world doesn’t prevent me from achieving any of the other things that are really significant to me in life — for example, having a good relationship with my husband and having close, long-term friendships.Becoming a parent was another personal priority in my life. For that reason, I didn’t think much about how having a child would affect the impact I had over my life. While I think it’s important to consider how we can best have a positive impact on the world, I don’t think it’s required or practical to think we might have to give up some of the things that are most important to us in the name of impact.I did think about it some when considering whether to have more children. The potential negative effects on my ability to have an impact with my career counted against having any more kids, but my husband also was ...]]>
80000 Hours https://forum.effectivealtruism.org/posts/Ly3pz7hzhS5bmSty8/my-thoughts-on-parenting-and-having-an-impactful-career Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My thoughts on parenting and having an impactful career, published by 80000 Hours on January 23, 2023 on The Effective Altruism Forum.When my husband and I decided to have children, we didn’t put much thought into the broader social impact of the decision. We got together at secondary school and had been discussing the fact we were going to have kids since we were 18, long before we found effective altruism.We made the actual decision to have a child much later, but how it would affect our careers or abilities to help others still wasn’t a large factor in the decision. As with most people though, the decision has, in fact, had significant effects on our careers.Raising my son, Leo — now three years old — is one of the great joys of my life, and I’m so happy that my husband and I decided to have him. But having kids can be challenging for anyone, and there may be unique challenges for people who aim to have a positive impact with their careers.I’m currently the director of the one-on-one programme at 80,000 Hours and a fund manager for the Effective Altruism Infrastructure Fund. So I wanted to share my experience with parenting and working for organisations whose mission I care about deeply. Here are my aims:Give readers an example of a working parent who also thinks a lot about 80,000 Hours’ advice.Discuss some of the ways having kids is likely to affect the impact you have in your career, for people who want to consider that when deciding whether to have kids.Discuss challenges people might face in their careers related to having kids and how they might handle them.Help people feel less alone if they’re finding some of the standard parenting advice alienating — particularly any mothers who feel the literature tends to underestimate how much they care about their career.Write out some of the lessons I’ve learned and things I would have liked to have known beforehand (I still find some of this hard to keep in mind!).Start a conversation with the hope that other like-minded parents will share their lessons and suggestions.Highlight some of the ways the effective altruism community supports parents.Note different people find very different advice useful, and people’s situations vary greatly by how many children they have, whether they have a partner and what that person’s situation is like, what family help they have nearby, their socioeconomic condition, and so on. I’ve been very fortunate to live in a wealthy country like the UK with a lot of social support, and I’ve been paid well enough to always meet my needs. My experiences will be most relevant to people who are similarly situated.And some of what follows will be speculative, because I consider counterfactuals and possibilities that are inevitably uncertain. Also, my son is only three, so I have fairly limited experience. I’d love for others to contribute to this conversation and offer additional perspectives.Deciding whether to have childrenIt feels important that working to improve the world doesn’t prevent me from achieving any of the other things that are really significant to me in life — for example, having a good relationship with my husband and having close, long-term friendships.Becoming a parent was another personal priority in my life. For that reason, I didn’t think much about how having a child would affect the impact I had over my life. While I think it’s important to consider how we can best have a positive impact on the world, I don’t think it’s required or practical to think we might have to give up some of the things that are most important to us in the name of impact.I did think about it some when considering whether to have more children. The potential negative effects on my ability to have an impact with my career counted against having any more kids, but my husband also was ...]]>
Mon, 23 Jan 2023 18:15:30 +0000 EA - My thoughts on parenting and having an impactful career by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My thoughts on parenting and having an impactful career, published by 80000 Hours on January 23, 2023 on The Effective Altruism Forum.When my husband and I decided to have children, we didn’t put much thought into the broader social impact of the decision. We got together at secondary school and had been discussing the fact we were going to have kids since we were 18, long before we found effective altruism.We made the actual decision to have a child much later, but how it would affect our careers or abilities to help others still wasn’t a large factor in the decision. As with most people though, the decision has, in fact, had significant effects on our careers.Raising my son, Leo — now three years old — is one of the great joys of my life, and I’m so happy that my husband and I decided to have him. But having kids can be challenging for anyone, and there may be unique challenges for people who aim to have a positive impact with their careers.I’m currently the director of the one-on-one programme at 80,000 Hours and a fund manager for the Effective Altruism Infrastructure Fund. So I wanted to share my experience with parenting and working for organisations whose mission I care about deeply. Here are my aims:Give readers an example of a working parent who also thinks a lot about 80,000 Hours’ advice.Discuss some of the ways having kids is likely to affect the impact you have in your career, for people who want to consider that when deciding whether to have kids.Discuss challenges people might face in their careers related to having kids and how they might handle them.Help people feel less alone if they’re finding some of the standard parenting advice alienating — particularly any mothers who feel the literature tends to underestimate how much they care about their career.Write out some of the lessons I’ve learned and things I would have liked to have known beforehand (I still find some of this hard to keep in mind!).Start a conversation with the hope that other like-minded parents will share their lessons and suggestions.Highlight some of the ways the effective altruism community supports parents.Note different people find very different advice useful, and people’s situations vary greatly by how many children they have, whether they have a partner and what that person’s situation is like, what family help they have nearby, their socioeconomic condition, and so on. I’ve been very fortunate to live in a wealthy country like the UK with a lot of social support, and I’ve been paid well enough to always meet my needs. My experiences will be most relevant to people who are similarly situated.And some of what follows will be speculative, because I consider counterfactuals and possibilities that are inevitably uncertain. Also, my son is only three, so I have fairly limited experience. I’d love for others to contribute to this conversation and offer additional perspectives.Deciding whether to have childrenIt feels important that working to improve the world doesn’t prevent me from achieving any of the other things that are really significant to me in life — for example, having a good relationship with my husband and having close, long-term friendships.Becoming a parent was another personal priority in my life. For that reason, I didn’t think much about how having a child would affect the impact I had over my life. While I think it’s important to consider how we can best have a positive impact on the world, I don’t think it’s required or practical to think we might have to give up some of the things that are most important to us in the name of impact.I did think about it some when considering whether to have more children. The potential negative effects on my ability to have an impact with my career counted against having any more kids, but my husband also was ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My thoughts on parenting and having an impactful career, published by 80000 Hours on January 23, 2023 on The Effective Altruism Forum.When my husband and I decided to have children, we didn’t put much thought into the broader social impact of the decision. We got together at secondary school and had been discussing the fact we were going to have kids since we were 18, long before we found effective altruism.We made the actual decision to have a child much later, but how it would affect our careers or abilities to help others still wasn’t a large factor in the decision. As with most people though, the decision has, in fact, had significant effects on our careers.Raising my son, Leo — now three years old — is one of the great joys of my life, and I’m so happy that my husband and I decided to have him. But having kids can be challenging for anyone, and there may be unique challenges for people who aim to have a positive impact with their careers.I’m currently the director of the one-on-one programme at 80,000 Hours and a fund manager for the Effective Altruism Infrastructure Fund. So I wanted to share my experience with parenting and working for organisations whose mission I care about deeply. Here are my aims:Give readers an example of a working parent who also thinks a lot about 80,000 Hours’ advice.Discuss some of the ways having kids is likely to affect the impact you have in your career, for people who want to consider that when deciding whether to have kids.Discuss challenges people might face in their careers related to having kids and how they might handle them.Help people feel less alone if they’re finding some of the standard parenting advice alienating — particularly any mothers who feel the literature tends to underestimate how much they care about their career.Write out some of the lessons I’ve learned and things I would have liked to have known beforehand (I still find some of this hard to keep in mind!).Start a conversation with the hope that other like-minded parents will share their lessons and suggestions.Highlight some of the ways the effective altruism community supports parents.Note different people find very different advice useful, and people’s situations vary greatly by how many children they have, whether they have a partner and what that person’s situation is like, what family help they have nearby, their socioeconomic condition, and so on. I’ve been very fortunate to live in a wealthy country like the UK with a lot of social support, and I’ve been paid well enough to always meet my needs. My experiences will be most relevant to people who are similarly situated.And some of what follows will be speculative, because I consider counterfactuals and possibilities that are inevitably uncertain. Also, my son is only three, so I have fairly limited experience. I’d love for others to contribute to this conversation and offer additional perspectives.Deciding whether to have childrenIt feels important that working to improve the world doesn’t prevent me from achieving any of the other things that are really significant to me in life — for example, having a good relationship with my husband and having close, long-term friendships.Becoming a parent was another personal priority in my life. For that reason, I didn’t think much about how having a child would affect the impact I had over my life. While I think it’s important to consider how we can best have a positive impact on the world, I don’t think it’s required or practical to think we might have to give up some of the things that are most important to us in the name of impact.I did think about it some when considering whether to have more children. The potential negative effects on my ability to have an impact with my career counted against having any more kids, but my husband also was ...]]>
80000 Hours https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:35 None full 4550
zThADDRFHFRjonf25_EA EA - Questions I ask myself when making decisions (or communicating) by Catherine Low Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Questions I ask myself when making decisions (or communicating), published by Catherine Low on January 23, 2023 on The Effective Altruism Forum.I work on CEA’s Community Health and Special Projects Team. But these thoughts are my own (apart from question 6 which I inherited from Nicole Ross).I have an internal list of questions that I often go through when I’m making a decision (including when that decision is whether or what to communicate to others).I often bring these questions up when I’m trying to help others in the EA community make decisions, and some folks seem to find them helpful. I also sometimes see people making (what I believe to be) sub-optimal decisions and one or other of these questions may have helped things go better.Maybe you’ll find them helpful too, and maybe you have your own questions you ask yourself (please share them in the comments!).I’m afraid they don’t tell you how to make a decision (sorry - I haven’t worked that bit out!), and they aren’t exhaustive. They are just questions that seem to help when I am making decisions - either by nudging me to generate new options, or to become more aware of consequences.1. “Is this really where the tradeoff bites?”Some decisions are framed as “A” or “B”. More decisions are framed as tradeoffs between amounts of A and amounts of B. Considering tradeoffs is very useful! But I often notice people assuming they’re making a decision under difficult tradeoffs between values A and B, when I see A and B as being values on two quite different axes. There are often alternatives where we can get more A without sacrificing B or vice versa, or we can get more A AND more B (i.e. there are Pareto improvements we can make).The A and B I’ve been thinking most about recently are (caricatured) “clearly stating what you believe” and “being sensitive to others”. I think Richard Chappell’s “Don’t be an Edgelord” explains this particular A and B well. But I’ve noticed the general pattern a lot.2. “What does my shoulder X think?” or. (higher cost) “what does X think?”You’re not always going to please everyone. That’s the way things are. But plenty of people seem to be surprised by the negative reaction they get from their decisions or words. This problem seems tractable.It would be nice if people could ask lots of people their opinion before making a decision, but that is costly (and choosing to ask is a decision in itself). So I’ve been trying to develop a range of people/groups that metaphorically pop up on my shoulder to help me think:Some individuals whose thinking I respect - some who I usually agree with, some who I regularly disagree with.Individuals or groups who would be affected by my decisionsI’m sure my shoulder people OFTEN don’t match the real person’s thoughts. But I still find them helpful to create new perspectives.Some tips for developing or using shoulder people:If the stakes are high - don’t rely on your shoulder person. Talk to the real people, the real stakeholders, people who have a different background or a different way of thinking to you. This especially goes if you’ve noticed you are frequently surprised by people’s reactions to your actions or communications, or if you don’t know the stakeholders well.You don’t have to know the person/group to make a shoulder person - just read a bunch of what they’ve written. Some examples: I’ve often found insight from Julia Wise, so through reading her blog I created a shoulder Julia long before I started working with her. I’ve only met Habryka once or twice, but I often get new perspectives from reading their contributions on the Forum. This seems helpful, so I’d like to improve my shoulder Habryka.Before publishing something, read your writing several times with several different shoulder people in mind.Practice! One way to develop should...]]>
Catherine Low https://forum.effectivealtruism.org/posts/zThADDRFHFRjonf25/questions-i-ask-myself-when-making-decisions-or Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Questions I ask myself when making decisions (or communicating), published by Catherine Low on January 23, 2023 on The Effective Altruism Forum.I work on CEA’s Community Health and Special Projects Team. But these thoughts are my own (apart from question 6 which I inherited from Nicole Ross).I have an internal list of questions that I often go through when I’m making a decision (including when that decision is whether or what to communicate to others).I often bring these questions up when I’m trying to help others in the EA community make decisions, and some folks seem to find them helpful. I also sometimes see people making (what I believe to be) sub-optimal decisions and one or other of these questions may have helped things go better.Maybe you’ll find them helpful too, and maybe you have your own questions you ask yourself (please share them in the comments!).I’m afraid they don’t tell you how to make a decision (sorry - I haven’t worked that bit out!), and they aren’t exhaustive. They are just questions that seem to help when I am making decisions - either by nudging me to generate new options, or to become more aware of consequences.1. “Is this really where the tradeoff bites?”Some decisions are framed as “A” or “B”. More decisions are framed as tradeoffs between amounts of A and amounts of B. Considering tradeoffs is very useful! But I often notice people assuming they’re making a decision under difficult tradeoffs between values A and B, when I see A and B as being values on two quite different axes. There are often alternatives where we can get more A without sacrificing B or vice versa, or we can get more A AND more B (i.e. there are Pareto improvements we can make).The A and B I’ve been thinking most about recently are (caricatured) “clearly stating what you believe” and “being sensitive to others”. I think Richard Chappell’s “Don’t be an Edgelord” explains this particular A and B well. But I’ve noticed the general pattern a lot.2. “What does my shoulder X think?” or. (higher cost) “what does X think?”You’re not always going to please everyone. That’s the way things are. But plenty of people seem to be surprised by the negative reaction they get from their decisions or words. This problem seems tractable.It would be nice if people could ask lots of people their opinion before making a decision, but that is costly (and choosing to ask is a decision in itself). So I’ve been trying to develop a range of people/groups that metaphorically pop up on my shoulder to help me think:Some individuals whose thinking I respect - some who I usually agree with, some who I regularly disagree with.Individuals or groups who would be affected by my decisionsI’m sure my shoulder people OFTEN don’t match the real person’s thoughts. But I still find them helpful to create new perspectives.Some tips for developing or using shoulder people:If the stakes are high - don’t rely on your shoulder person. Talk to the real people, the real stakeholders, people who have a different background or a different way of thinking to you. This especially goes if you’ve noticed you are frequently surprised by people’s reactions to your actions or communications, or if you don’t know the stakeholders well.You don’t have to know the person/group to make a shoulder person - just read a bunch of what they’ve written. Some examples: I’ve often found insight from Julia Wise, so through reading her blog I created a shoulder Julia long before I started working with her. I’ve only met Habryka once or twice, but I often get new perspectives from reading their contributions on the Forum. This seems helpful, so I’d like to improve my shoulder Habryka.Before publishing something, read your writing several times with several different shoulder people in mind.Practice! One way to develop should...]]>
Mon, 23 Jan 2023 17:41:25 +0000 EA - Questions I ask myself when making decisions (or communicating) by Catherine Low Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Questions I ask myself when making decisions (or communicating), published by Catherine Low on January 23, 2023 on The Effective Altruism Forum.I work on CEA’s Community Health and Special Projects Team. But these thoughts are my own (apart from question 6 which I inherited from Nicole Ross).I have an internal list of questions that I often go through when I’m making a decision (including when that decision is whether or what to communicate to others).I often bring these questions up when I’m trying to help others in the EA community make decisions, and some folks seem to find them helpful. I also sometimes see people making (what I believe to be) sub-optimal decisions and one or other of these questions may have helped things go better.Maybe you’ll find them helpful too, and maybe you have your own questions you ask yourself (please share them in the comments!).I’m afraid they don’t tell you how to make a decision (sorry - I haven’t worked that bit out!), and they aren’t exhaustive. They are just questions that seem to help when I am making decisions - either by nudging me to generate new options, or to become more aware of consequences.1. “Is this really where the tradeoff bites?”Some decisions are framed as “A” or “B”. More decisions are framed as tradeoffs between amounts of A and amounts of B. Considering tradeoffs is very useful! But I often notice people assuming they’re making a decision under difficult tradeoffs between values A and B, when I see A and B as being values on two quite different axes. There are often alternatives where we can get more A without sacrificing B or vice versa, or we can get more A AND more B (i.e. there are Pareto improvements we can make).The A and B I’ve been thinking most about recently are (caricatured) “clearly stating what you believe” and “being sensitive to others”. I think Richard Chappell’s “Don’t be an Edgelord” explains this particular A and B well. But I’ve noticed the general pattern a lot.2. “What does my shoulder X think?” or. (higher cost) “what does X think?”You’re not always going to please everyone. That’s the way things are. But plenty of people seem to be surprised by the negative reaction they get from their decisions or words. This problem seems tractable.It would be nice if people could ask lots of people their opinion before making a decision, but that is costly (and choosing to ask is a decision in itself). So I’ve been trying to develop a range of people/groups that metaphorically pop up on my shoulder to help me think:Some individuals whose thinking I respect - some who I usually agree with, some who I regularly disagree with.Individuals or groups who would be affected by my decisionsI’m sure my shoulder people OFTEN don’t match the real person’s thoughts. But I still find them helpful to create new perspectives.Some tips for developing or using shoulder people:If the stakes are high - don’t rely on your shoulder person. Talk to the real people, the real stakeholders, people who have a different background or a different way of thinking to you. This especially goes if you’ve noticed you are frequently surprised by people’s reactions to your actions or communications, or if you don’t know the stakeholders well.You don’t have to know the person/group to make a shoulder person - just read a bunch of what they’ve written. Some examples: I’ve often found insight from Julia Wise, so through reading her blog I created a shoulder Julia long before I started working with her. I’ve only met Habryka once or twice, but I often get new perspectives from reading their contributions on the Forum. This seems helpful, so I’d like to improve my shoulder Habryka.Before publishing something, read your writing several times with several different shoulder people in mind.Practice! One way to develop should...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Questions I ask myself when making decisions (or communicating), published by Catherine Low on January 23, 2023 on The Effective Altruism Forum.I work on CEA’s Community Health and Special Projects Team. But these thoughts are my own (apart from question 6 which I inherited from Nicole Ross).I have an internal list of questions that I often go through when I’m making a decision (including when that decision is whether or what to communicate to others).I often bring these questions up when I’m trying to help others in the EA community make decisions, and some folks seem to find them helpful. I also sometimes see people making (what I believe to be) sub-optimal decisions and one or other of these questions may have helped things go better.Maybe you’ll find them helpful too, and maybe you have your own questions you ask yourself (please share them in the comments!).I’m afraid they don’t tell you how to make a decision (sorry - I haven’t worked that bit out!), and they aren’t exhaustive. They are just questions that seem to help when I am making decisions - either by nudging me to generate new options, or to become more aware of consequences.1. “Is this really where the tradeoff bites?”Some decisions are framed as “A” or “B”. More decisions are framed as tradeoffs between amounts of A and amounts of B. Considering tradeoffs is very useful! But I often notice people assuming they’re making a decision under difficult tradeoffs between values A and B, when I see A and B as being values on two quite different axes. There are often alternatives where we can get more A without sacrificing B or vice versa, or we can get more A AND more B (i.e. there are Pareto improvements we can make).The A and B I’ve been thinking most about recently are (caricatured) “clearly stating what you believe” and “being sensitive to others”. I think Richard Chappell’s “Don’t be an Edgelord” explains this particular A and B well. But I’ve noticed the general pattern a lot.2. “What does my shoulder X think?” or. (higher cost) “what does X think?”You’re not always going to please everyone. That’s the way things are. But plenty of people seem to be surprised by the negative reaction they get from their decisions or words. This problem seems tractable.It would be nice if people could ask lots of people their opinion before making a decision, but that is costly (and choosing to ask is a decision in itself). So I’ve been trying to develop a range of people/groups that metaphorically pop up on my shoulder to help me think:Some individuals whose thinking I respect - some who I usually agree with, some who I regularly disagree with.Individuals or groups who would be affected by my decisionsI’m sure my shoulder people OFTEN don’t match the real person’s thoughts. But I still find them helpful to create new perspectives.Some tips for developing or using shoulder people:If the stakes are high - don’t rely on your shoulder person. Talk to the real people, the real stakeholders, people who have a different background or a different way of thinking to you. This especially goes if you’ve noticed you are frequently surprised by people’s reactions to your actions or communications, or if you don’t know the stakeholders well.You don’t have to know the person/group to make a shoulder person - just read a bunch of what they’ve written. Some examples: I’ve often found insight from Julia Wise, so through reading her blog I created a shoulder Julia long before I started working with her. I’ve only met Habryka once or twice, but I often get new perspectives from reading their contributions on the Forum. This seems helpful, so I’d like to improve my shoulder Habryka.Before publishing something, read your writing several times with several different shoulder people in mind.Practice! One way to develop should...]]>
Catherine Low https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:23 None full 4551
nRXugEFFDz7MtGKz9_EA EA - There should be a public adversarial collaboration on AI x-risk by pradyuprasad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There should be a public adversarial collaboration on AI x-risk, published by pradyuprasad on January 23, 2023 on The Effective Altruism Forum.I think that adversarial collaborations are a good way of understanding competing perspectives on an idea, especially if it is polarising or especially controversial.The term was first introduced by Daniel Kahneman. The basic idea is that two people with competing perspectives on an issue work together towards a joint belief. Two people working in good faith would be able to devise various experiments and discussions that clarify the idea and work towards a joint belief. (Kahneman uses the word "truth", but I think the word "belief" is more justified in this context).AI x-risk is a good place to have a public adversarial collaborationFirst the issue is especially polarising. The beliefs of people working on AI risk are that AI presents one of the greatest challenges to humanity's survival. On the other hand, AI research organisations by revealed preference (they're going full speed ahead on building AI capabilities) and stated preference (see this survey too) think the risk is much lower.In my opinion having an adversarial collaboration between a top AI safety person (who works on x-risk from AI) and someone who did not think that the x risks were substantial would have clear benefits.It would make the lines of disagreement clearer. To me, an outsider in the space it's not very clear where exactly people disagree and to what extent. This would clear that up and possibly provide a baseline for future debate to be based on.It would also legitimise x-risk concerns quite a bit if this was to be co-written by someone respected in the field.Finally, it would make both sides of the debate evaluate the other side clearly and see their own blindspots better. This would improve the overall epistemic quality of the AI x-risk debate.How could this go wrong?The main failure mode is that the parties writing it aren't doing it in good faith. If they're trying to write it out with the purpose of proving the other side wrong, it will fail terribly.The second failure mode is that the arguments for either sides are based too much on thought experiments and it is hard to find a resolution because there isn't much empirical grounding for either side. In Kahenman's example, even with actual experiments they could infer from, both parties couldn't agree with it for 8 years. That's entirely possible with this as well.Other key considerationsFinding the right people from both sides of the debate might be more difficult than I assume. I think there are people who can do it (eg. Richard Ngo and Jacob Buckman have said that they have done it in private) and Boaz Bark and Ben Edelman have also published a thoughtful critique (although not an adversarial collaboration), but it maybe that they're too busy or aren't interested enough in doing itA similar version has been done before and this might risk duplicating it. I don't think this is the case because the debate was hard to follow and not explicitly written with the indent of finding a joint belief.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
pradyuprasad https://forum.effectivealtruism.org/posts/nRXugEFFDz7MtGKz9/there-should-be-a-public-adversarial-collaboration-on-ai-x Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There should be a public adversarial collaboration on AI x-risk, published by pradyuprasad on January 23, 2023 on The Effective Altruism Forum.I think that adversarial collaborations are a good way of understanding competing perspectives on an idea, especially if it is polarising or especially controversial.The term was first introduced by Daniel Kahneman. The basic idea is that two people with competing perspectives on an issue work together towards a joint belief. Two people working in good faith would be able to devise various experiments and discussions that clarify the idea and work towards a joint belief. (Kahneman uses the word "truth", but I think the word "belief" is more justified in this context).AI x-risk is a good place to have a public adversarial collaborationFirst the issue is especially polarising. The beliefs of people working on AI risk are that AI presents one of the greatest challenges to humanity's survival. On the other hand, AI research organisations by revealed preference (they're going full speed ahead on building AI capabilities) and stated preference (see this survey too) think the risk is much lower.In my opinion having an adversarial collaboration between a top AI safety person (who works on x-risk from AI) and someone who did not think that the x risks were substantial would have clear benefits.It would make the lines of disagreement clearer. To me, an outsider in the space it's not very clear where exactly people disagree and to what extent. This would clear that up and possibly provide a baseline for future debate to be based on.It would also legitimise x-risk concerns quite a bit if this was to be co-written by someone respected in the field.Finally, it would make both sides of the debate evaluate the other side clearly and see their own blindspots better. This would improve the overall epistemic quality of the AI x-risk debate.How could this go wrong?The main failure mode is that the parties writing it aren't doing it in good faith. If they're trying to write it out with the purpose of proving the other side wrong, it will fail terribly.The second failure mode is that the arguments for either sides are based too much on thought experiments and it is hard to find a resolution because there isn't much empirical grounding for either side. In Kahenman's example, even with actual experiments they could infer from, both parties couldn't agree with it for 8 years. That's entirely possible with this as well.Other key considerationsFinding the right people from both sides of the debate might be more difficult than I assume. I think there are people who can do it (eg. Richard Ngo and Jacob Buckman have said that they have done it in private) and Boaz Bark and Ben Edelman have also published a thoughtful critique (although not an adversarial collaboration), but it maybe that they're too busy or aren't interested enough in doing itA similar version has been done before and this might risk duplicating it. I don't think this is the case because the debate was hard to follow and not explicitly written with the indent of finding a joint belief.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 23 Jan 2023 10:33:31 +0000 EA - There should be a public adversarial collaboration on AI x-risk by pradyuprasad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There should be a public adversarial collaboration on AI x-risk, published by pradyuprasad on January 23, 2023 on The Effective Altruism Forum.I think that adversarial collaborations are a good way of understanding competing perspectives on an idea, especially if it is polarising or especially controversial.The term was first introduced by Daniel Kahneman. The basic idea is that two people with competing perspectives on an issue work together towards a joint belief. Two people working in good faith would be able to devise various experiments and discussions that clarify the idea and work towards a joint belief. (Kahneman uses the word "truth", but I think the word "belief" is more justified in this context).AI x-risk is a good place to have a public adversarial collaborationFirst the issue is especially polarising. The beliefs of people working on AI risk are that AI presents one of the greatest challenges to humanity's survival. On the other hand, AI research organisations by revealed preference (they're going full speed ahead on building AI capabilities) and stated preference (see this survey too) think the risk is much lower.In my opinion having an adversarial collaboration between a top AI safety person (who works on x-risk from AI) and someone who did not think that the x risks were substantial would have clear benefits.It would make the lines of disagreement clearer. To me, an outsider in the space it's not very clear where exactly people disagree and to what extent. This would clear that up and possibly provide a baseline for future debate to be based on.It would also legitimise x-risk concerns quite a bit if this was to be co-written by someone respected in the field.Finally, it would make both sides of the debate evaluate the other side clearly and see their own blindspots better. This would improve the overall epistemic quality of the AI x-risk debate.How could this go wrong?The main failure mode is that the parties writing it aren't doing it in good faith. If they're trying to write it out with the purpose of proving the other side wrong, it will fail terribly.The second failure mode is that the arguments for either sides are based too much on thought experiments and it is hard to find a resolution because there isn't much empirical grounding for either side. In Kahenman's example, even with actual experiments they could infer from, both parties couldn't agree with it for 8 years. That's entirely possible with this as well.Other key considerationsFinding the right people from both sides of the debate might be more difficult than I assume. I think there are people who can do it (eg. Richard Ngo and Jacob Buckman have said that they have done it in private) and Boaz Bark and Ben Edelman have also published a thoughtful critique (although not an adversarial collaboration), but it maybe that they're too busy or aren't interested enough in doing itA similar version has been done before and this might risk duplicating it. I don't think this is the case because the debate was hard to follow and not explicitly written with the indent of finding a joint belief.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There should be a public adversarial collaboration on AI x-risk, published by pradyuprasad on January 23, 2023 on The Effective Altruism Forum.I think that adversarial collaborations are a good way of understanding competing perspectives on an idea, especially if it is polarising or especially controversial.The term was first introduced by Daniel Kahneman. The basic idea is that two people with competing perspectives on an issue work together towards a joint belief. Two people working in good faith would be able to devise various experiments and discussions that clarify the idea and work towards a joint belief. (Kahneman uses the word "truth", but I think the word "belief" is more justified in this context).AI x-risk is a good place to have a public adversarial collaborationFirst the issue is especially polarising. The beliefs of people working on AI risk are that AI presents one of the greatest challenges to humanity's survival. On the other hand, AI research organisations by revealed preference (they're going full speed ahead on building AI capabilities) and stated preference (see this survey too) think the risk is much lower.In my opinion having an adversarial collaboration between a top AI safety person (who works on x-risk from AI) and someone who did not think that the x risks were substantial would have clear benefits.It would make the lines of disagreement clearer. To me, an outsider in the space it's not very clear where exactly people disagree and to what extent. This would clear that up and possibly provide a baseline for future debate to be based on.It would also legitimise x-risk concerns quite a bit if this was to be co-written by someone respected in the field.Finally, it would make both sides of the debate evaluate the other side clearly and see their own blindspots better. This would improve the overall epistemic quality of the AI x-risk debate.How could this go wrong?The main failure mode is that the parties writing it aren't doing it in good faith. If they're trying to write it out with the purpose of proving the other side wrong, it will fail terribly.The second failure mode is that the arguments for either sides are based too much on thought experiments and it is hard to find a resolution because there isn't much empirical grounding for either side. In Kahenman's example, even with actual experiments they could infer from, both parties couldn't agree with it for 8 years. That's entirely possible with this as well.Other key considerationsFinding the right people from both sides of the debate might be more difficult than I assume. I think there are people who can do it (eg. Richard Ngo and Jacob Buckman have said that they have done it in private) and Boaz Bark and Ben Edelman have also published a thoughtful critique (although not an adversarial collaboration), but it maybe that they're too busy or aren't interested enough in doing itA similar version has been done before and this might risk duplicating it. I don't think this is the case because the debate was hard to follow and not explicitly written with the indent of finding a joint belief.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
pradyuprasad https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:01 None full 4544
3vDarp6adLPBTux5g_EA EA - What a compute-centric framework says about AI takeoff speeds - draft report by Tom Davidson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What a compute-centric framework says about AI takeoff speeds - draft report, published by Tom Davidson on January 23, 2023 on The Effective Altruism Forum.I’ve written a draft report on AI takeoff speeds, the question of how quickly AI capabilities might improve as we approach and surpass human-level AI. Will human-level AI be a bolt from the blue, or will we have AI that is nearly as capable many years earlier?Most of the analysis is from the perspective of a compute-centric framework, inspired by that used in the Bio Anchors report, in which AI capabilities increase continuously with more training compute and work to develop better AI algorithms.This post doesn’t summarise the report. Instead I want to explain some of the high-level takeaways from the research which I think apply even if you don’t buy the compute-centric framework.The frameworkh/t Dan Kokotajlo for writing most of this sectionThis report accompanies and explains (h/t Epoch for building this!), a user-friendly quantitative model of AGI timelines and takeoff, which you can go play around with right now. (By AGI I mean “AI that can readily[1] perform 100% of cognitive tasks” as well as a human professional; AGI could be many AI systems working together, or one unified system.)Takeoff simulation with Tom’s best-guess value for each parameter.The framework was inspired by and builds upon the previous “Bio Anchors” report. The “core” of the Bio Anchors report was a three-factor model for forecasting AGI timelines:Dan’s visual representation of Bio Anchors reportCompute to train AGI using 2020 algorithms. The first and most subjective factor is a probability distribution over training requirements (measured in FLOP) given today’s ideas. It allows for some probability to be placed in the “no amount would be enough” bucket.The probability distribution is shown by the coloured blocks on the y-axis in the above figure.Algorithmic progress. The second factor is the rate at which new ideas come along, lowering AGI training requirements. Bio Anchors models this as a steady exponential decline.It’s shown by the falling yellow lines.Bigger training runs. The third factor is the rate at which FLOP used on training runs increases, as a result of better hardware and more $ spending. Bio Anchors assumes that hardware improves at a steady exponential rate.The FLOP used on the biggest training run is shown by the rising purple lines.Once there’s been enough algorithmic progress, and training runs are big enough, we can train AGI. (How much is enough? That depends on the first factor!)This draft report builds a more detailed model inspired by the above. It contains many minor changes and two major ones.The first major change is that algorithmic and hardware progress are no longer assumed to have steady exponential growth. Instead, I use standard semi-endogenous growth models from the economics literature to forecast how the two factors will grow in response to hardware and software R&D spending, and forecast that spending will grow over time. The upshot is that spending accelerates as AGI draws near, driving faster algorithmic (“software”) and hardware progress.The key dynamics represented in the model. “Software” refers to the quality of algorithms for training AI.The second major change is that I model the effects of AI systems automating economic tasks – and, crucially, tasks in hardware and software R&D – prior to AGI. I do this via the “effective FLOP gap:” the gap between AGI training requirements and training requirements for AI that can readily perform 20% of cognitive tasks (weighted by economic-value-in-2022). My best guess, defended in the report, is that you need 10,000X more effective compute to train AGI. To estimate the training requirements for AI that can readily perform x% of cognit...]]>
Tom Davidson https://forum.effectivealtruism.org/posts/3vDarp6adLPBTux5g/what-a-compute-centric-framework-says-about-ai-takeoff Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What a compute-centric framework says about AI takeoff speeds - draft report, published by Tom Davidson on January 23, 2023 on The Effective Altruism Forum.I’ve written a draft report on AI takeoff speeds, the question of how quickly AI capabilities might improve as we approach and surpass human-level AI. Will human-level AI be a bolt from the blue, or will we have AI that is nearly as capable many years earlier?Most of the analysis is from the perspective of a compute-centric framework, inspired by that used in the Bio Anchors report, in which AI capabilities increase continuously with more training compute and work to develop better AI algorithms.This post doesn’t summarise the report. Instead I want to explain some of the high-level takeaways from the research which I think apply even if you don’t buy the compute-centric framework.The frameworkh/t Dan Kokotajlo for writing most of this sectionThis report accompanies and explains (h/t Epoch for building this!), a user-friendly quantitative model of AGI timelines and takeoff, which you can go play around with right now. (By AGI I mean “AI that can readily[1] perform 100% of cognitive tasks” as well as a human professional; AGI could be many AI systems working together, or one unified system.)Takeoff simulation with Tom’s best-guess value for each parameter.The framework was inspired by and builds upon the previous “Bio Anchors” report. The “core” of the Bio Anchors report was a three-factor model for forecasting AGI timelines:Dan’s visual representation of Bio Anchors reportCompute to train AGI using 2020 algorithms. The first and most subjective factor is a probability distribution over training requirements (measured in FLOP) given today’s ideas. It allows for some probability to be placed in the “no amount would be enough” bucket.The probability distribution is shown by the coloured blocks on the y-axis in the above figure.Algorithmic progress. The second factor is the rate at which new ideas come along, lowering AGI training requirements. Bio Anchors models this as a steady exponential decline.It’s shown by the falling yellow lines.Bigger training runs. The third factor is the rate at which FLOP used on training runs increases, as a result of better hardware and more $ spending. Bio Anchors assumes that hardware improves at a steady exponential rate.The FLOP used on the biggest training run is shown by the rising purple lines.Once there’s been enough algorithmic progress, and training runs are big enough, we can train AGI. (How much is enough? That depends on the first factor!)This draft report builds a more detailed model inspired by the above. It contains many minor changes and two major ones.The first major change is that algorithmic and hardware progress are no longer assumed to have steady exponential growth. Instead, I use standard semi-endogenous growth models from the economics literature to forecast how the two factors will grow in response to hardware and software R&D spending, and forecast that spending will grow over time. The upshot is that spending accelerates as AGI draws near, driving faster algorithmic (“software”) and hardware progress.The key dynamics represented in the model. “Software” refers to the quality of algorithms for training AI.The second major change is that I model the effects of AI systems automating economic tasks – and, crucially, tasks in hardware and software R&D – prior to AGI. I do this via the “effective FLOP gap:” the gap between AGI training requirements and training requirements for AI that can readily perform 20% of cognitive tasks (weighted by economic-value-in-2022). My best guess, defended in the report, is that you need 10,000X more effective compute to train AGI. To estimate the training requirements for AI that can readily perform x% of cognit...]]>
Mon, 23 Jan 2023 05:02:00 +0000 EA - What a compute-centric framework says about AI takeoff speeds - draft report by Tom Davidson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What a compute-centric framework says about AI takeoff speeds - draft report, published by Tom Davidson on January 23, 2023 on The Effective Altruism Forum.I’ve written a draft report on AI takeoff speeds, the question of how quickly AI capabilities might improve as we approach and surpass human-level AI. Will human-level AI be a bolt from the blue, or will we have AI that is nearly as capable many years earlier?Most of the analysis is from the perspective of a compute-centric framework, inspired by that used in the Bio Anchors report, in which AI capabilities increase continuously with more training compute and work to develop better AI algorithms.This post doesn’t summarise the report. Instead I want to explain some of the high-level takeaways from the research which I think apply even if you don’t buy the compute-centric framework.The frameworkh/t Dan Kokotajlo for writing most of this sectionThis report accompanies and explains (h/t Epoch for building this!), a user-friendly quantitative model of AGI timelines and takeoff, which you can go play around with right now. (By AGI I mean “AI that can readily[1] perform 100% of cognitive tasks” as well as a human professional; AGI could be many AI systems working together, or one unified system.)Takeoff simulation with Tom’s best-guess value for each parameter.The framework was inspired by and builds upon the previous “Bio Anchors” report. The “core” of the Bio Anchors report was a three-factor model for forecasting AGI timelines:Dan’s visual representation of Bio Anchors reportCompute to train AGI using 2020 algorithms. The first and most subjective factor is a probability distribution over training requirements (measured in FLOP) given today’s ideas. It allows for some probability to be placed in the “no amount would be enough” bucket.The probability distribution is shown by the coloured blocks on the y-axis in the above figure.Algorithmic progress. The second factor is the rate at which new ideas come along, lowering AGI training requirements. Bio Anchors models this as a steady exponential decline.It’s shown by the falling yellow lines.Bigger training runs. The third factor is the rate at which FLOP used on training runs increases, as a result of better hardware and more $ spending. Bio Anchors assumes that hardware improves at a steady exponential rate.The FLOP used on the biggest training run is shown by the rising purple lines.Once there’s been enough algorithmic progress, and training runs are big enough, we can train AGI. (How much is enough? That depends on the first factor!)This draft report builds a more detailed model inspired by the above. It contains many minor changes and two major ones.The first major change is that algorithmic and hardware progress are no longer assumed to have steady exponential growth. Instead, I use standard semi-endogenous growth models from the economics literature to forecast how the two factors will grow in response to hardware and software R&D spending, and forecast that spending will grow over time. The upshot is that spending accelerates as AGI draws near, driving faster algorithmic (“software”) and hardware progress.The key dynamics represented in the model. “Software” refers to the quality of algorithms for training AI.The second major change is that I model the effects of AI systems automating economic tasks – and, crucially, tasks in hardware and software R&D – prior to AGI. I do this via the “effective FLOP gap:” the gap between AGI training requirements and training requirements for AI that can readily perform 20% of cognitive tasks (weighted by economic-value-in-2022). My best guess, defended in the report, is that you need 10,000X more effective compute to train AGI. To estimate the training requirements for AI that can readily perform x% of cognit...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What a compute-centric framework says about AI takeoff speeds - draft report, published by Tom Davidson on January 23, 2023 on The Effective Altruism Forum.I’ve written a draft report on AI takeoff speeds, the question of how quickly AI capabilities might improve as we approach and surpass human-level AI. Will human-level AI be a bolt from the blue, or will we have AI that is nearly as capable many years earlier?Most of the analysis is from the perspective of a compute-centric framework, inspired by that used in the Bio Anchors report, in which AI capabilities increase continuously with more training compute and work to develop better AI algorithms.This post doesn’t summarise the report. Instead I want to explain some of the high-level takeaways from the research which I think apply even if you don’t buy the compute-centric framework.The frameworkh/t Dan Kokotajlo for writing most of this sectionThis report accompanies and explains (h/t Epoch for building this!), a user-friendly quantitative model of AGI timelines and takeoff, which you can go play around with right now. (By AGI I mean “AI that can readily[1] perform 100% of cognitive tasks” as well as a human professional; AGI could be many AI systems working together, or one unified system.)Takeoff simulation with Tom’s best-guess value for each parameter.The framework was inspired by and builds upon the previous “Bio Anchors” report. The “core” of the Bio Anchors report was a three-factor model for forecasting AGI timelines:Dan’s visual representation of Bio Anchors reportCompute to train AGI using 2020 algorithms. The first and most subjective factor is a probability distribution over training requirements (measured in FLOP) given today’s ideas. It allows for some probability to be placed in the “no amount would be enough” bucket.The probability distribution is shown by the coloured blocks on the y-axis in the above figure.Algorithmic progress. The second factor is the rate at which new ideas come along, lowering AGI training requirements. Bio Anchors models this as a steady exponential decline.It’s shown by the falling yellow lines.Bigger training runs. The third factor is the rate at which FLOP used on training runs increases, as a result of better hardware and more $ spending. Bio Anchors assumes that hardware improves at a steady exponential rate.The FLOP used on the biggest training run is shown by the rising purple lines.Once there’s been enough algorithmic progress, and training runs are big enough, we can train AGI. (How much is enough? That depends on the first factor!)This draft report builds a more detailed model inspired by the above. It contains many minor changes and two major ones.The first major change is that algorithmic and hardware progress are no longer assumed to have steady exponential growth. Instead, I use standard semi-endogenous growth models from the economics literature to forecast how the two factors will grow in response to hardware and software R&D spending, and forecast that spending will grow over time. The upshot is that spending accelerates as AGI draws near, driving faster algorithmic (“software”) and hardware progress.The key dynamics represented in the model. “Software” refers to the quality of algorithms for training AI.The second major change is that I model the effects of AI systems automating economic tasks – and, crucially, tasks in hardware and software R&D – prior to AGI. I do this via the “effective FLOP gap:” the gap between AGI training requirements and training requirements for AI that can readily perform 20% of cognitive tasks (weighted by economic-value-in-2022). My best guess, defended in the report, is that you need 10,000X more effective compute to train AGI. To estimate the training requirements for AI that can readily perform x% of cognit...]]>
Tom Davidson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 29:43 None full 4543
aE68kcbG7ugCcTKXd_EA EA - ALTER Israel - End-of-2022 Update by Davidmanheim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ALTER Israel - End-of-2022 Update, published by Davidmanheim on January 22, 2023 on The Effective Altruism Forum.ALTER is an organization in Israel that works on several EA priority areas and causes. As laid out in our mid-year update, we had several things planned for the second half of 2022, and they were largely successful. Unfortunately, the past several months have been particularly challenging, in large part due to the collapse of FTX and issues with the Israeli banking system. (See coverage here on some specific impacts.) Delayed items include actually posting this update, and other planned progress - but despite that, ALTER has made progress on several fronts, and is continuing its work on several fronts.Accomplishments and ProgressWe ran a successful AI safety conference in Israel, AISIC2022 and identified a number of people who are interested in or working on AI safety here in Israel. We are continuing to follow up with them, and will hopefully have more to announce in that vein in the near future.We are continuing to do academic outreach in Israel, and building our network for AI safety, biosecurity, and other cause areas.Vadim Weinstein,the mathematician who we hired, has been working on AI safety with Vanessa, and we are excited about potential progress. We are working on figuring out how to replace his grant from FTX with other money, but there is a strong interest in funding the work, and we are attempting to finalize those arrangements.David attended the Bioweapons Convention Review Conference and continued outreach around Israeli engagement in various international forums.ChallengesThe situation with banking in Israel has become much more challenging due to our receipt of a grant from FTX. We are now prohibited from receiving money from international organizations from our (current) bank, which has made getting our other grant money very challenging, and we have so far been unsuccessful. (We are likely to be able to resolve this in the coming months, and EA Israel can potentially lend us funds in the interim.)The grant we received from FTX is not being used, and we are trying to understand the legal situation, and what will happen with the bankruptcy. This money is being kept in a separate account, pending clarity about its status and the potential to provide restitution to those defrauded by FTX. Any resolution will likely not be for quite a long time.Once we receive currently granted but unreceived funds, our runway is sufficient through the end of this year, rather than the end of next year. We will be applying for additional funding over the course of the year to allow us to continue operations.The funding environment over the coming years is more challenging, and for that reason we are less able to support additional work, and have a smaller funding cushion. We are still investigating the different potential avenues forward for this.We are also revisiting our strategic planning in light of the changing funding environment and our current projects.PlansDavid will be working for ALTER as full time director of research and policy in 2023, as well as continuing to work on our direction and planning.We are continuing to work to build a community of people in Israel working on AI safety, including funding one researcher identified via our AI safety conference.We are funding a prize for work on Vanessa Kosoy’s AI Safety Learning-Theoretic AgendaWe are building out a network of academics interested in additional EA-adjacent fields, especially focused on issues relevant to Global Catastrophic Biological Risks. (If you know of anyone in Israel who we should contact, let us know!)We are working on a collaborative policy project with several other individuals and groups to get Israel to iodize salt, since iodine deficiency in Israel is an ...]]>
Davidmanheim https://forum.effectivealtruism.org/posts/aE68kcbG7ugCcTKXd/alter-israel-end-of-2022-update Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ALTER Israel - End-of-2022 Update, published by Davidmanheim on January 22, 2023 on The Effective Altruism Forum.ALTER is an organization in Israel that works on several EA priority areas and causes. As laid out in our mid-year update, we had several things planned for the second half of 2022, and they were largely successful. Unfortunately, the past several months have been particularly challenging, in large part due to the collapse of FTX and issues with the Israeli banking system. (See coverage here on some specific impacts.) Delayed items include actually posting this update, and other planned progress - but despite that, ALTER has made progress on several fronts, and is continuing its work on several fronts.Accomplishments and ProgressWe ran a successful AI safety conference in Israel, AISIC2022 and identified a number of people who are interested in or working on AI safety here in Israel. We are continuing to follow up with them, and will hopefully have more to announce in that vein in the near future.We are continuing to do academic outreach in Israel, and building our network for AI safety, biosecurity, and other cause areas.Vadim Weinstein,the mathematician who we hired, has been working on AI safety with Vanessa, and we are excited about potential progress. We are working on figuring out how to replace his grant from FTX with other money, but there is a strong interest in funding the work, and we are attempting to finalize those arrangements.David attended the Bioweapons Convention Review Conference and continued outreach around Israeli engagement in various international forums.ChallengesThe situation with banking in Israel has become much more challenging due to our receipt of a grant from FTX. We are now prohibited from receiving money from international organizations from our (current) bank, which has made getting our other grant money very challenging, and we have so far been unsuccessful. (We are likely to be able to resolve this in the coming months, and EA Israel can potentially lend us funds in the interim.)The grant we received from FTX is not being used, and we are trying to understand the legal situation, and what will happen with the bankruptcy. This money is being kept in a separate account, pending clarity about its status and the potential to provide restitution to those defrauded by FTX. Any resolution will likely not be for quite a long time.Once we receive currently granted but unreceived funds, our runway is sufficient through the end of this year, rather than the end of next year. We will be applying for additional funding over the course of the year to allow us to continue operations.The funding environment over the coming years is more challenging, and for that reason we are less able to support additional work, and have a smaller funding cushion. We are still investigating the different potential avenues forward for this.We are also revisiting our strategic planning in light of the changing funding environment and our current projects.PlansDavid will be working for ALTER as full time director of research and policy in 2023, as well as continuing to work on our direction and planning.We are continuing to work to build a community of people in Israel working on AI safety, including funding one researcher identified via our AI safety conference.We are funding a prize for work on Vanessa Kosoy’s AI Safety Learning-Theoretic AgendaWe are building out a network of academics interested in additional EA-adjacent fields, especially focused on issues relevant to Global Catastrophic Biological Risks. (If you know of anyone in Israel who we should contact, let us know!)We are working on a collaborative policy project with several other individuals and groups to get Israel to iodize salt, since iodine deficiency in Israel is an ...]]>
Mon, 23 Jan 2023 00:24:00 +0000 EA - ALTER Israel - End-of-2022 Update by Davidmanheim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ALTER Israel - End-of-2022 Update, published by Davidmanheim on January 22, 2023 on The Effective Altruism Forum.ALTER is an organization in Israel that works on several EA priority areas and causes. As laid out in our mid-year update, we had several things planned for the second half of 2022, and they were largely successful. Unfortunately, the past several months have been particularly challenging, in large part due to the collapse of FTX and issues with the Israeli banking system. (See coverage here on some specific impacts.) Delayed items include actually posting this update, and other planned progress - but despite that, ALTER has made progress on several fronts, and is continuing its work on several fronts.Accomplishments and ProgressWe ran a successful AI safety conference in Israel, AISIC2022 and identified a number of people who are interested in or working on AI safety here in Israel. We are continuing to follow up with them, and will hopefully have more to announce in that vein in the near future.We are continuing to do academic outreach in Israel, and building our network for AI safety, biosecurity, and other cause areas.Vadim Weinstein,the mathematician who we hired, has been working on AI safety with Vanessa, and we are excited about potential progress. We are working on figuring out how to replace his grant from FTX with other money, but there is a strong interest in funding the work, and we are attempting to finalize those arrangements.David attended the Bioweapons Convention Review Conference and continued outreach around Israeli engagement in various international forums.ChallengesThe situation with banking in Israel has become much more challenging due to our receipt of a grant from FTX. We are now prohibited from receiving money from international organizations from our (current) bank, which has made getting our other grant money very challenging, and we have so far been unsuccessful. (We are likely to be able to resolve this in the coming months, and EA Israel can potentially lend us funds in the interim.)The grant we received from FTX is not being used, and we are trying to understand the legal situation, and what will happen with the bankruptcy. This money is being kept in a separate account, pending clarity about its status and the potential to provide restitution to those defrauded by FTX. Any resolution will likely not be for quite a long time.Once we receive currently granted but unreceived funds, our runway is sufficient through the end of this year, rather than the end of next year. We will be applying for additional funding over the course of the year to allow us to continue operations.The funding environment over the coming years is more challenging, and for that reason we are less able to support additional work, and have a smaller funding cushion. We are still investigating the different potential avenues forward for this.We are also revisiting our strategic planning in light of the changing funding environment and our current projects.PlansDavid will be working for ALTER as full time director of research and policy in 2023, as well as continuing to work on our direction and planning.We are continuing to work to build a community of people in Israel working on AI safety, including funding one researcher identified via our AI safety conference.We are funding a prize for work on Vanessa Kosoy’s AI Safety Learning-Theoretic AgendaWe are building out a network of academics interested in additional EA-adjacent fields, especially focused on issues relevant to Global Catastrophic Biological Risks. (If you know of anyone in Israel who we should contact, let us know!)We are working on a collaborative policy project with several other individuals and groups to get Israel to iodize salt, since iodine deficiency in Israel is an ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ALTER Israel - End-of-2022 Update, published by Davidmanheim on January 22, 2023 on The Effective Altruism Forum.ALTER is an organization in Israel that works on several EA priority areas and causes. As laid out in our mid-year update, we had several things planned for the second half of 2022, and they were largely successful. Unfortunately, the past several months have been particularly challenging, in large part due to the collapse of FTX and issues with the Israeli banking system. (See coverage here on some specific impacts.) Delayed items include actually posting this update, and other planned progress - but despite that, ALTER has made progress on several fronts, and is continuing its work on several fronts.Accomplishments and ProgressWe ran a successful AI safety conference in Israel, AISIC2022 and identified a number of people who are interested in or working on AI safety here in Israel. We are continuing to follow up with them, and will hopefully have more to announce in that vein in the near future.We are continuing to do academic outreach in Israel, and building our network for AI safety, biosecurity, and other cause areas.Vadim Weinstein,the mathematician who we hired, has been working on AI safety with Vanessa, and we are excited about potential progress. We are working on figuring out how to replace his grant from FTX with other money, but there is a strong interest in funding the work, and we are attempting to finalize those arrangements.David attended the Bioweapons Convention Review Conference and continued outreach around Israeli engagement in various international forums.ChallengesThe situation with banking in Israel has become much more challenging due to our receipt of a grant from FTX. We are now prohibited from receiving money from international organizations from our (current) bank, which has made getting our other grant money very challenging, and we have so far been unsuccessful. (We are likely to be able to resolve this in the coming months, and EA Israel can potentially lend us funds in the interim.)The grant we received from FTX is not being used, and we are trying to understand the legal situation, and what will happen with the bankruptcy. This money is being kept in a separate account, pending clarity about its status and the potential to provide restitution to those defrauded by FTX. Any resolution will likely not be for quite a long time.Once we receive currently granted but unreceived funds, our runway is sufficient through the end of this year, rather than the end of next year. We will be applying for additional funding over the course of the year to allow us to continue operations.The funding environment over the coming years is more challenging, and for that reason we are less able to support additional work, and have a smaller funding cushion. We are still investigating the different potential avenues forward for this.We are also revisiting our strategic planning in light of the changing funding environment and our current projects.PlansDavid will be working for ALTER as full time director of research and policy in 2023, as well as continuing to work on our direction and planning.We are continuing to work to build a community of people in Israel working on AI safety, including funding one researcher identified via our AI safety conference.We are funding a prize for work on Vanessa Kosoy’s AI Safety Learning-Theoretic AgendaWe are building out a network of academics interested in additional EA-adjacent fields, especially focused on issues relevant to Global Catastrophic Biological Risks. (If you know of anyone in Israel who we should contact, let us know!)We are working on a collaborative policy project with several other individuals and groups to get Israel to iodize salt, since iodine deficiency in Israel is an ...]]>
Davidmanheim https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:53 None full 4545
t7LzuwM2rAYSexifW_EA EA - [BBC News] Postpartum haemorrhage: Niger halves blood-loss deaths at clinics by Sean Ericson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [BBC News] Postpartum haemorrhage: Niger halves blood-loss deaths at clinics, published by Sean Ericson on January 21, 2023 on The Effective Altruism Forum.I'm not an expert in this area, but it would seem that this study is an encouraging development in a very important area.Over the research period in Niger, an estimated 1,417 fewer women died from bleeding after childbirth - known as postpartum haemorrhage (PPH) - than otherwise would have.It also prevented tens of thousands of other women from experiencing abnormally high blood loss.PPH now accounts for one in 10 of maternal deaths in Niger, whereas before the project began it accounted for more than three times that.Summary of the Lancet article about the study:BackgroundPrimary postpartum haemorrhage is the principal cause of birth-related maternal mortality in most settings and has remained persistently high in severely resource-constrained countries. We evaluate the impact of an intervention that aims to halve maternal mortality caused by primary postpartum haemorrhage within 2 years, nationwide in Niger.MethodsIn this 72-month longitudinal study, we analysed the effects of a primary postpartum haemorrhage intervention in hospitals and health centres in Niger, using data on maternal birth outcomes assessed and recorded by the facilities’ health professionals and reported once per month at the national level. Reported data were monitored, compiled, and analysed by a non-governmental organisation collaborating with the Ministry of Health. All births in all health facilities in which births occurred, nationwide, were included, with no exclusion criteria. After a preintervention survey, brief training, and supplies distribution, Niger implemented a nationwide primary postpartum haemorrhage prevention and three-step treatment strategy using misoprostol, followed if needed by an intrauterine condom tamponade, and a non-inflatable anti-shock garment, with a specific set of organisational public health tools, aiming to reduce primary postpartum haemorrhage mortality.FindingsAmong 5 382 488 expected births, 2 254 885 (41·9%) occurred in health facilities, of which information was available on 1 380 779 births from Jan 1, 2015, to Dec 31, 2020, with reporting increasing considerably over time. Primary postpartum mortality decreased from 82 (32·16%; 95% CI 25·58–39·92) of 255 health facility maternal deaths in the 2013 preintervention survey to 146 (9·53%; 8·05–11·21) of 1532 deaths among 343 668 births in 2020. Primary postpartum haemorrhage incidence varied between 1900 (2·10%; 2·01–2·20) of 90 453 births and 4758 (1·47%; 1·43–1·52) of 322 859 births during 2015–20, an annual trend of 0·98 (95% CI 0·97–0·99; p<0·0001).InterpretationPrimary postpartum haemorrhage morbidity and mortality declined rapidly nationwide. Because each treatment technology that was used has shown some efficacy when used alone, a strategic combination of these treatments can reasonably attain outcomes of this magnitude. Niger's strategy warrants testing in other low-income and perhaps some middle-income settings.FundingThe Government of Norway, the Government of Niger, the Kavli Trust (Kavlifondet), the InFiL Foundation, and individuals in Norway, the UK, and the USA.TranslationFor the French translation of the abstract see Supplementary Materials section.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sean Ericson https://forum.effectivealtruism.org/posts/t7LzuwM2rAYSexifW/bbc-news-postpartum-haemorrhage-niger-halves-blood-loss Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [BBC News] Postpartum haemorrhage: Niger halves blood-loss deaths at clinics, published by Sean Ericson on January 21, 2023 on The Effective Altruism Forum.I'm not an expert in this area, but it would seem that this study is an encouraging development in a very important area.Over the research period in Niger, an estimated 1,417 fewer women died from bleeding after childbirth - known as postpartum haemorrhage (PPH) - than otherwise would have.It also prevented tens of thousands of other women from experiencing abnormally high blood loss.PPH now accounts for one in 10 of maternal deaths in Niger, whereas before the project began it accounted for more than three times that.Summary of the Lancet article about the study:BackgroundPrimary postpartum haemorrhage is the principal cause of birth-related maternal mortality in most settings and has remained persistently high in severely resource-constrained countries. We evaluate the impact of an intervention that aims to halve maternal mortality caused by primary postpartum haemorrhage within 2 years, nationwide in Niger.MethodsIn this 72-month longitudinal study, we analysed the effects of a primary postpartum haemorrhage intervention in hospitals and health centres in Niger, using data on maternal birth outcomes assessed and recorded by the facilities’ health professionals and reported once per month at the national level. Reported data were monitored, compiled, and analysed by a non-governmental organisation collaborating with the Ministry of Health. All births in all health facilities in which births occurred, nationwide, were included, with no exclusion criteria. After a preintervention survey, brief training, and supplies distribution, Niger implemented a nationwide primary postpartum haemorrhage prevention and three-step treatment strategy using misoprostol, followed if needed by an intrauterine condom tamponade, and a non-inflatable anti-shock garment, with a specific set of organisational public health tools, aiming to reduce primary postpartum haemorrhage mortality.FindingsAmong 5 382 488 expected births, 2 254 885 (41·9%) occurred in health facilities, of which information was available on 1 380 779 births from Jan 1, 2015, to Dec 31, 2020, with reporting increasing considerably over time. Primary postpartum mortality decreased from 82 (32·16%; 95% CI 25·58–39·92) of 255 health facility maternal deaths in the 2013 preintervention survey to 146 (9·53%; 8·05–11·21) of 1532 deaths among 343 668 births in 2020. Primary postpartum haemorrhage incidence varied between 1900 (2·10%; 2·01–2·20) of 90 453 births and 4758 (1·47%; 1·43–1·52) of 322 859 births during 2015–20, an annual trend of 0·98 (95% CI 0·97–0·99; p<0·0001).InterpretationPrimary postpartum haemorrhage morbidity and mortality declined rapidly nationwide. Because each treatment technology that was used has shown some efficacy when used alone, a strategic combination of these treatments can reasonably attain outcomes of this magnitude. Niger's strategy warrants testing in other low-income and perhaps some middle-income settings.FundingThe Government of Norway, the Government of Niger, the Kavli Trust (Kavlifondet), the InFiL Foundation, and individuals in Norway, the UK, and the USA.TranslationFor the French translation of the abstract see Supplementary Materials section.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 22 Jan 2023 20:47:22 +0000 EA - [BBC News] Postpartum haemorrhage: Niger halves blood-loss deaths at clinics by Sean Ericson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [BBC News] Postpartum haemorrhage: Niger halves blood-loss deaths at clinics, published by Sean Ericson on January 21, 2023 on The Effective Altruism Forum.I'm not an expert in this area, but it would seem that this study is an encouraging development in a very important area.Over the research period in Niger, an estimated 1,417 fewer women died from bleeding after childbirth - known as postpartum haemorrhage (PPH) - than otherwise would have.It also prevented tens of thousands of other women from experiencing abnormally high blood loss.PPH now accounts for one in 10 of maternal deaths in Niger, whereas before the project began it accounted for more than three times that.Summary of the Lancet article about the study:BackgroundPrimary postpartum haemorrhage is the principal cause of birth-related maternal mortality in most settings and has remained persistently high in severely resource-constrained countries. We evaluate the impact of an intervention that aims to halve maternal mortality caused by primary postpartum haemorrhage within 2 years, nationwide in Niger.MethodsIn this 72-month longitudinal study, we analysed the effects of a primary postpartum haemorrhage intervention in hospitals and health centres in Niger, using data on maternal birth outcomes assessed and recorded by the facilities’ health professionals and reported once per month at the national level. Reported data were monitored, compiled, and analysed by a non-governmental organisation collaborating with the Ministry of Health. All births in all health facilities in which births occurred, nationwide, were included, with no exclusion criteria. After a preintervention survey, brief training, and supplies distribution, Niger implemented a nationwide primary postpartum haemorrhage prevention and three-step treatment strategy using misoprostol, followed if needed by an intrauterine condom tamponade, and a non-inflatable anti-shock garment, with a specific set of organisational public health tools, aiming to reduce primary postpartum haemorrhage mortality.FindingsAmong 5 382 488 expected births, 2 254 885 (41·9%) occurred in health facilities, of which information was available on 1 380 779 births from Jan 1, 2015, to Dec 31, 2020, with reporting increasing considerably over time. Primary postpartum mortality decreased from 82 (32·16%; 95% CI 25·58–39·92) of 255 health facility maternal deaths in the 2013 preintervention survey to 146 (9·53%; 8·05–11·21) of 1532 deaths among 343 668 births in 2020. Primary postpartum haemorrhage incidence varied between 1900 (2·10%; 2·01–2·20) of 90 453 births and 4758 (1·47%; 1·43–1·52) of 322 859 births during 2015–20, an annual trend of 0·98 (95% CI 0·97–0·99; p<0·0001).InterpretationPrimary postpartum haemorrhage morbidity and mortality declined rapidly nationwide. Because each treatment technology that was used has shown some efficacy when used alone, a strategic combination of these treatments can reasonably attain outcomes of this magnitude. Niger's strategy warrants testing in other low-income and perhaps some middle-income settings.FundingThe Government of Norway, the Government of Niger, the Kavli Trust (Kavlifondet), the InFiL Foundation, and individuals in Norway, the UK, and the USA.TranslationFor the French translation of the abstract see Supplementary Materials section.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [BBC News] Postpartum haemorrhage: Niger halves blood-loss deaths at clinics, published by Sean Ericson on January 21, 2023 on The Effective Altruism Forum.I'm not an expert in this area, but it would seem that this study is an encouraging development in a very important area.Over the research period in Niger, an estimated 1,417 fewer women died from bleeding after childbirth - known as postpartum haemorrhage (PPH) - than otherwise would have.It also prevented tens of thousands of other women from experiencing abnormally high blood loss.PPH now accounts for one in 10 of maternal deaths in Niger, whereas before the project began it accounted for more than three times that.Summary of the Lancet article about the study:BackgroundPrimary postpartum haemorrhage is the principal cause of birth-related maternal mortality in most settings and has remained persistently high in severely resource-constrained countries. We evaluate the impact of an intervention that aims to halve maternal mortality caused by primary postpartum haemorrhage within 2 years, nationwide in Niger.MethodsIn this 72-month longitudinal study, we analysed the effects of a primary postpartum haemorrhage intervention in hospitals and health centres in Niger, using data on maternal birth outcomes assessed and recorded by the facilities’ health professionals and reported once per month at the national level. Reported data were monitored, compiled, and analysed by a non-governmental organisation collaborating with the Ministry of Health. All births in all health facilities in which births occurred, nationwide, were included, with no exclusion criteria. After a preintervention survey, brief training, and supplies distribution, Niger implemented a nationwide primary postpartum haemorrhage prevention and three-step treatment strategy using misoprostol, followed if needed by an intrauterine condom tamponade, and a non-inflatable anti-shock garment, with a specific set of organisational public health tools, aiming to reduce primary postpartum haemorrhage mortality.FindingsAmong 5 382 488 expected births, 2 254 885 (41·9%) occurred in health facilities, of which information was available on 1 380 779 births from Jan 1, 2015, to Dec 31, 2020, with reporting increasing considerably over time. Primary postpartum mortality decreased from 82 (32·16%; 95% CI 25·58–39·92) of 255 health facility maternal deaths in the 2013 preintervention survey to 146 (9·53%; 8·05–11·21) of 1532 deaths among 343 668 births in 2020. Primary postpartum haemorrhage incidence varied between 1900 (2·10%; 2·01–2·20) of 90 453 births and 4758 (1·47%; 1·43–1·52) of 322 859 births during 2015–20, an annual trend of 0·98 (95% CI 0·97–0·99; p<0·0001).InterpretationPrimary postpartum haemorrhage morbidity and mortality declined rapidly nationwide. Because each treatment technology that was used has shown some efficacy when used alone, a strategic combination of these treatments can reasonably attain outcomes of this magnitude. Niger's strategy warrants testing in other low-income and perhaps some middle-income settings.FundingThe Government of Norway, the Government of Niger, the Kavli Trust (Kavlifondet), the InFiL Foundation, and individuals in Norway, the UK, and the USA.TranslationFor the French translation of the abstract see Supplementary Materials section.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sean Ericson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:07 None full 4540
Nm9ahJzKsDGFfF66b_EA EA - NYT: Google will “recalibrate” the risk of releasing AI due to competition with OpenAI by Michael Huang Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: NYT: Google will “recalibrate” the risk of releasing AI due to competition with OpenAI, published by Michael Huang on January 22, 2023 on The Effective Altruism Forum.The New York Times: Sundar Pichai, CEO of Alphabet and Google, is trying to speed up the release of AI technology by taking on more risk.Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times.The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.This change is in response to OpenAI's public release of ChatGPT. It is evidence that the race between Google/DeepMind and Microsoft/OpenAI is eroding ethics and safety.Demis Hassabis, CEO of DeepMind, urged caution in his recent interview in Time:He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says.“Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.”Worse still, Hassabis points out, we are the guinea pigs.Alphabet/Google is trying to accelerate a technology that its own subsidiary says is powerful and dangerous.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Michael Huang https://forum.effectivealtruism.org/posts/Nm9ahJzKsDGFfF66b/nyt-google-will-recalibrate-the-risk-of-releasing-ai-due-to Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: NYT: Google will “recalibrate” the risk of releasing AI due to competition with OpenAI, published by Michael Huang on January 22, 2023 on The Effective Altruism Forum.The New York Times: Sundar Pichai, CEO of Alphabet and Google, is trying to speed up the release of AI technology by taking on more risk.Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times.The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.This change is in response to OpenAI's public release of ChatGPT. It is evidence that the race between Google/DeepMind and Microsoft/OpenAI is eroding ethics and safety.Demis Hassabis, CEO of DeepMind, urged caution in his recent interview in Time:He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says.“Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.”Worse still, Hassabis points out, we are the guinea pigs.Alphabet/Google is trying to accelerate a technology that its own subsidiary says is powerful and dangerous.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 22 Jan 2023 03:27:33 +0000 EA - NYT: Google will “recalibrate” the risk of releasing AI due to competition with OpenAI by Michael Huang Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: NYT: Google will “recalibrate” the risk of releasing AI due to competition with OpenAI, published by Michael Huang on January 22, 2023 on The Effective Altruism Forum.The New York Times: Sundar Pichai, CEO of Alphabet and Google, is trying to speed up the release of AI technology by taking on more risk.Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times.The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.This change is in response to OpenAI's public release of ChatGPT. It is evidence that the race between Google/DeepMind and Microsoft/OpenAI is eroding ethics and safety.Demis Hassabis, CEO of DeepMind, urged caution in his recent interview in Time:He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says.“Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.”Worse still, Hassabis points out, we are the guinea pigs.Alphabet/Google is trying to accelerate a technology that its own subsidiary says is powerful and dangerous.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: NYT: Google will “recalibrate” the risk of releasing AI due to competition with OpenAI, published by Michael Huang on January 22, 2023 on The Effective Altruism Forum.The New York Times: Sundar Pichai, CEO of Alphabet and Google, is trying to speed up the release of AI technology by taking on more risk.Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times.The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.This change is in response to OpenAI's public release of ChatGPT. It is evidence that the race between Google/DeepMind and Microsoft/OpenAI is eroding ethics and safety.Demis Hassabis, CEO of DeepMind, urged caution in his recent interview in Time:He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says.“Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.”Worse still, Hassabis points out, we are the guinea pigs.Alphabet/Google is trying to accelerate a technology that its own subsidiary says is powerful and dangerous.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Michael Huang https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:56 None full 4536
dbw2mgSGSAKB45fAk_EA EA - Vegan Nutrition Testing Project: Interim Report by Elizabeth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vegan Nutrition Testing Project: Interim Report, published by Elizabeth on January 21, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Elizabeth https://forum.effectivealtruism.org/posts/dbw2mgSGSAKB45fAk/vegan-nutrition-testing-project-interim-report Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vegan Nutrition Testing Project: Interim Report, published by Elizabeth on January 21, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 22 Jan 2023 02:46:20 +0000 EA - Vegan Nutrition Testing Project: Interim Report by Elizabeth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vegan Nutrition Testing Project: Interim Report, published by Elizabeth on January 21, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vegan Nutrition Testing Project: Interim Report, published by Elizabeth on January 21, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Elizabeth https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:27 None full 4537
52qqxvwxNJ9RXzEm6_EA EA - Available talent after major layoffs at tech giants by nicolenohemi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Available talent after major layoffs at tech giants, published by nicolenohemi on January 21, 2023 on The Effective Altruism Forum.Disclaimer: I am uncertain whether all of the relevant AI safety-related organizations this post might reach are net positive.Initially, I expected recruiting teams to already know about the layoffs discussed here, which is why I didn’t post about it sooner. When I asked some organizations about it, however, they were too swamped with other recruiting-related tasks, so none of them had heard about the layoffs (I didn’t yet get in touch with 80,000 hours). They were grateful for being connected with the available talent.SituationAmidst the volatile global economy, rising inflation, investor pressure, and increased borrowing costs, the highest number of job cuts in 2022 were ascribed to the tech sector, with almost 100,000 for the year, up by 649% compared to 2021 (Challenger, Gray & Christmas Inc.). Layoffs include engineers, ML researchers, executives, and more.List of layoffsLayoffs in numbers and (% of total employees)Meta: 11,000 (13%), among them, the Probability teamGoogle: 12,000 (6%)Amazon: 18,000 (1.2%)Twitter 3,750 (50%)Stripe Inc. 980 (14%)Salesforce: 8,000 (10%)etc.Notable example: Meta’s Probability teamSome of the Meta AI Probability team’s focus areas are differentiable and probabilistic programming and uncertainty, including increasing machine learning systems' robustness, trustworthiness, and efficiency by ascribing uncertainty measurements to models. Their whole team of about 50 employees was laid off. I had some casual chats with some of their team members, finding out that they were:already thinking about AI alignment,thought that “alignment definitely is important”,keen to start working in an AI alignment-related field, andhappy to apply/be contacted by AI safety-related orgs and even requested to connect with their recruiting teams.TakeawayThere is some seriously great tech talent that, sadly, has been laid off abruptly by the above-named tech giants. Thousands of individuals are now seeking new job opportunities. It could be beneficial for relevant organizations to explore this opportunity and consider contacting and encouraging some of them to apply for open positions.News sourcesNPRBloombergCNBC, CNBCThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
nicolenohemi https://forum.effectivealtruism.org/posts/52qqxvwxNJ9RXzEm6/available-talent-after-major-layoffs-at-tech-giants Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Available talent after major layoffs at tech giants, published by nicolenohemi on January 21, 2023 on The Effective Altruism Forum.Disclaimer: I am uncertain whether all of the relevant AI safety-related organizations this post might reach are net positive.Initially, I expected recruiting teams to already know about the layoffs discussed here, which is why I didn’t post about it sooner. When I asked some organizations about it, however, they were too swamped with other recruiting-related tasks, so none of them had heard about the layoffs (I didn’t yet get in touch with 80,000 hours). They were grateful for being connected with the available talent.SituationAmidst the volatile global economy, rising inflation, investor pressure, and increased borrowing costs, the highest number of job cuts in 2022 were ascribed to the tech sector, with almost 100,000 for the year, up by 649% compared to 2021 (Challenger, Gray & Christmas Inc.). Layoffs include engineers, ML researchers, executives, and more.List of layoffsLayoffs in numbers and (% of total employees)Meta: 11,000 (13%), among them, the Probability teamGoogle: 12,000 (6%)Amazon: 18,000 (1.2%)Twitter 3,750 (50%)Stripe Inc. 980 (14%)Salesforce: 8,000 (10%)etc.Notable example: Meta’s Probability teamSome of the Meta AI Probability team’s focus areas are differentiable and probabilistic programming and uncertainty, including increasing machine learning systems' robustness, trustworthiness, and efficiency by ascribing uncertainty measurements to models. Their whole team of about 50 employees was laid off. I had some casual chats with some of their team members, finding out that they were:already thinking about AI alignment,thought that “alignment definitely is important”,keen to start working in an AI alignment-related field, andhappy to apply/be contacted by AI safety-related orgs and even requested to connect with their recruiting teams.TakeawayThere is some seriously great tech talent that, sadly, has been laid off abruptly by the above-named tech giants. Thousands of individuals are now seeking new job opportunities. It could be beneficial for relevant organizations to explore this opportunity and consider contacting and encouraging some of them to apply for open positions.News sourcesNPRBloombergCNBC, CNBCThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 21 Jan 2023 04:11:18 +0000 EA - Available talent after major layoffs at tech giants by nicolenohemi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Available talent after major layoffs at tech giants, published by nicolenohemi on January 21, 2023 on The Effective Altruism Forum.Disclaimer: I am uncertain whether all of the relevant AI safety-related organizations this post might reach are net positive.Initially, I expected recruiting teams to already know about the layoffs discussed here, which is why I didn’t post about it sooner. When I asked some organizations about it, however, they were too swamped with other recruiting-related tasks, so none of them had heard about the layoffs (I didn’t yet get in touch with 80,000 hours). They were grateful for being connected with the available talent.SituationAmidst the volatile global economy, rising inflation, investor pressure, and increased borrowing costs, the highest number of job cuts in 2022 were ascribed to the tech sector, with almost 100,000 for the year, up by 649% compared to 2021 (Challenger, Gray & Christmas Inc.). Layoffs include engineers, ML researchers, executives, and more.List of layoffsLayoffs in numbers and (% of total employees)Meta: 11,000 (13%), among them, the Probability teamGoogle: 12,000 (6%)Amazon: 18,000 (1.2%)Twitter 3,750 (50%)Stripe Inc. 980 (14%)Salesforce: 8,000 (10%)etc.Notable example: Meta’s Probability teamSome of the Meta AI Probability team’s focus areas are differentiable and probabilistic programming and uncertainty, including increasing machine learning systems' robustness, trustworthiness, and efficiency by ascribing uncertainty measurements to models. Their whole team of about 50 employees was laid off. I had some casual chats with some of their team members, finding out that they were:already thinking about AI alignment,thought that “alignment definitely is important”,keen to start working in an AI alignment-related field, andhappy to apply/be contacted by AI safety-related orgs and even requested to connect with their recruiting teams.TakeawayThere is some seriously great tech talent that, sadly, has been laid off abruptly by the above-named tech giants. Thousands of individuals are now seeking new job opportunities. It could be beneficial for relevant organizations to explore this opportunity and consider contacting and encouraging some of them to apply for open positions.News sourcesNPRBloombergCNBC, CNBCThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Available talent after major layoffs at tech giants, published by nicolenohemi on January 21, 2023 on The Effective Altruism Forum.Disclaimer: I am uncertain whether all of the relevant AI safety-related organizations this post might reach are net positive.Initially, I expected recruiting teams to already know about the layoffs discussed here, which is why I didn’t post about it sooner. When I asked some organizations about it, however, they were too swamped with other recruiting-related tasks, so none of them had heard about the layoffs (I didn’t yet get in touch with 80,000 hours). They were grateful for being connected with the available talent.SituationAmidst the volatile global economy, rising inflation, investor pressure, and increased borrowing costs, the highest number of job cuts in 2022 were ascribed to the tech sector, with almost 100,000 for the year, up by 649% compared to 2021 (Challenger, Gray & Christmas Inc.). Layoffs include engineers, ML researchers, executives, and more.List of layoffsLayoffs in numbers and (% of total employees)Meta: 11,000 (13%), among them, the Probability teamGoogle: 12,000 (6%)Amazon: 18,000 (1.2%)Twitter 3,750 (50%)Stripe Inc. 980 (14%)Salesforce: 8,000 (10%)etc.Notable example: Meta’s Probability teamSome of the Meta AI Probability team’s focus areas are differentiable and probabilistic programming and uncertainty, including increasing machine learning systems' robustness, trustworthiness, and efficiency by ascribing uncertainty measurements to models. Their whole team of about 50 employees was laid off. I had some casual chats with some of their team members, finding out that they were:already thinking about AI alignment,thought that “alignment definitely is important”,keen to start working in an AI alignment-related field, andhappy to apply/be contacted by AI safety-related orgs and even requested to connect with their recruiting teams.TakeawayThere is some seriously great tech talent that, sadly, has been laid off abruptly by the above-named tech giants. Thousands of individuals are now seeking new job opportunities. It could be beneficial for relevant organizations to explore this opportunity and consider contacting and encouraging some of them to apply for open positions.News sourcesNPRBloombergCNBC, CNBCThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
nicolenohemi https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:48 None full 4532
GDkrPrP2m6TQqdSGF_EA EA - [TIME magazine] DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution (Perrigo, 2023) by will Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [TIME magazine] DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution (Perrigo, 2023), published by will on January 20, 2023 on The Effective Altruism Forum.Linkposting, tagging and excerpting in accord with 'Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?'.He [Demis Hassibis] and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets. In December 2022, ChatGPT, a chatbot designed by DeepMind’s rival OpenAI, went viral for its seeming ability to write almost like a human—but faced criticism for its susceptibility to racism and misinformation.It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things,” he says, referring to an old Facebook motto that encouraged engineers to release their technologies into the world first and fix any problems that arose later. The phrase has since become synonymous with disruption. That culture, subsequently emulated by a generation of startups, helped Facebook rocket to 3 billion users. But it also left the company entirely unprepared when disinformation, hate speech, and even incitement to genocide began appearing on its platform. Hassabis sees a similarly worrying trend developing with AI. He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
will https://forum.effectivealtruism.org/posts/GDkrPrP2m6TQqdSGF/time-magazine-deepmind-s-ceo-helped-take-ai-mainstream-now Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [TIME magazine] DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution (Perrigo, 2023), published by will on January 20, 2023 on The Effective Altruism Forum.Linkposting, tagging and excerpting in accord with 'Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?'.He [Demis Hassibis] and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets. In December 2022, ChatGPT, a chatbot designed by DeepMind’s rival OpenAI, went viral for its seeming ability to write almost like a human—but faced criticism for its susceptibility to racism and misinformation.It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things,” he says, referring to an old Facebook motto that encouraged engineers to release their technologies into the world first and fix any problems that arose later. The phrase has since become synonymous with disruption. That culture, subsequently emulated by a generation of startups, helped Facebook rocket to 3 billion users. But it also left the company entirely unprepared when disinformation, hate speech, and even incitement to genocide began appearing on its platform. Hassabis sees a similarly worrying trend developing with AI. He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 21 Jan 2023 03:39:33 +0000 EA - [TIME magazine] DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution (Perrigo, 2023) by will Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [TIME magazine] DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution (Perrigo, 2023), published by will on January 20, 2023 on The Effective Altruism Forum.Linkposting, tagging and excerpting in accord with 'Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?'.He [Demis Hassibis] and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets. In December 2022, ChatGPT, a chatbot designed by DeepMind’s rival OpenAI, went viral for its seeming ability to write almost like a human—but faced criticism for its susceptibility to racism and misinformation.It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things,” he says, referring to an old Facebook motto that encouraged engineers to release their technologies into the world first and fix any problems that arose later. The phrase has since become synonymous with disruption. That culture, subsequently emulated by a generation of startups, helped Facebook rocket to 3 billion users. But it also left the company entirely unprepared when disinformation, hate speech, and even incitement to genocide began appearing on its platform. Hassabis sees a similarly worrying trend developing with AI. He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [TIME magazine] DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution (Perrigo, 2023), published by will on January 20, 2023 on The Effective Altruism Forum.Linkposting, tagging and excerpting in accord with 'Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?'.He [Demis Hassibis] and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets. In December 2022, ChatGPT, a chatbot designed by DeepMind’s rival OpenAI, went viral for its seeming ability to write almost like a human—but faced criticism for its susceptibility to racism and misinformation.It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things,” he says, referring to an old Facebook motto that encouraged engineers to release their technologies into the world first and fix any problems that arose later. The phrase has since become synonymous with disruption. That culture, subsequently emulated by a generation of startups, helped Facebook rocket to 3 billion users. But it also left the company entirely unprepared when disinformation, hate speech, and even incitement to genocide began appearing on its platform. Hassabis sees a similarly worrying trend developing with AI. He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
will https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:05 None full 4533
BsKwAna6tzCsv2Q5n_EA EA - Animal Advocacy Digest #4: Dec 24th- Jan 20th by James Ozden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Advocacy Digest #4: Dec 24th- Jan 20th, published by James Ozden on January 20, 2023 on The Effective Altruism Forum.Note: This digest will be once per month now, as EA Forum animal content isn’t as frequent as we expected!Welcome to this edition of the Animal Advocacy Digest, where we aim to help people in the animal welfare community quickly keep up with the most important information in the space. As a reminder, you can get this digest as an email by signing up here. =I’m summarising the following posts:Video: A Few Exciting Giving Opportunities for Animal Welfare in Asia - Jack Stennett, Jah Ying Chung (Good Growth)Why Anima International suspended the campaign to end live fish sales in Poland - Jakub Stencel, Weronika Zurek (Anima International)Don’t Balk at Animal-friendly Results - Bob Fischer (Rethink Priorities)Longtermism and Animal Farming Trajectories - Michael Dello (Sentience Institute)Abolitionist in the Streets, Pragmatist in the Sheets: New Ideas for Effective Animal Advocacy - Dhruv MakwanaQuick links:Linkpost: Big Wins for Farm Animals This Decade (Lewis Bollard)Video: A Few Exciting Giving Opportunities for Animal Welfare in Asia - Jack Stennett, Jah Ying Chung (Good Growth)Good Growth is an Effective Altruist-aligned organisation focusing on animal welfare and food systems in Asia. They focus on growing the animal advocacy ecosystem in Asia, by conducting research prioritisation on key topics within Asian animal advocacy, developing accessible databases on the alternative protein market in Asian countries, as well as building the research capacity of Asian animal advocacy organisations.In this video, Good Growth highlights five organisations (including themselves) who might be exciting donation opportunities for those interested in supporting Asian animal advocacy. They are:Green Camel Bell, who are working to end the farming of wild animals, such as Giant Salamanders, in ChinaSinergia Animals, who are currently working on cage-free eggs advocacy, institutional outreach for plant-based meals, and investigations in ThailandAnimal Alliance Asia, who are working on a regranting programme for international funders to reach local advocates in AsiaAnimal Empathy Philippines who are working on capacity-building initiatives to grow the farm animal welfare movement in the Philippines.Why might you donate to Asian animal advocacy opportunities? Good Growth has made this handy graphic below which might provide some insight!Why Anima International suspended the campaign to end live fish sales in Poland - Jakub Stencel, Weronika Zurek (Anima International)Anima International recently took the (brave!) decision recently to suspend their campaign which focuses on ending the live sale of carp in Poland. This campaign was specifically against the purchasing of live carp around Christmas time, a tradition in some post-communist countries in East Europe. Animal advocacy groups had achieved significant wins with this campaign, leading to major retailers withdrawing from the sale of live fish.However, Anima International became worried about the consumer trend to replace carp with other fish, such as salmon. As salmon is a carnivorous fish (and carp mostly isn’t), Anima International was increasingly concerned this campaign could actually be leading to more animal deaths, as salmon farming often requires fish feed. Anima calculates that buying one Atlantic Salmon requires around 11 times more fish deaths than buying the same weight in common carp, due to considerations around weight, mortality rates and number of feed fish required to bring these animals to sale weight. As a result, Anima International took the difficult decision to suspend this long standing campaign, in light of potential negative impacts on fish.Longtermism and Animal ...]]>
James Ozden https://forum.effectivealtruism.org/posts/BsKwAna6tzCsv2Q5n/animal-advocacy-digest-4-dec-24th-jan-20th Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Advocacy Digest #4: Dec 24th- Jan 20th, published by James Ozden on January 20, 2023 on The Effective Altruism Forum.Note: This digest will be once per month now, as EA Forum animal content isn’t as frequent as we expected!Welcome to this edition of the Animal Advocacy Digest, where we aim to help people in the animal welfare community quickly keep up with the most important information in the space. As a reminder, you can get this digest as an email by signing up here. =I’m summarising the following posts:Video: A Few Exciting Giving Opportunities for Animal Welfare in Asia - Jack Stennett, Jah Ying Chung (Good Growth)Why Anima International suspended the campaign to end live fish sales in Poland - Jakub Stencel, Weronika Zurek (Anima International)Don’t Balk at Animal-friendly Results - Bob Fischer (Rethink Priorities)Longtermism and Animal Farming Trajectories - Michael Dello (Sentience Institute)Abolitionist in the Streets, Pragmatist in the Sheets: New Ideas for Effective Animal Advocacy - Dhruv MakwanaQuick links:Linkpost: Big Wins for Farm Animals This Decade (Lewis Bollard)Video: A Few Exciting Giving Opportunities for Animal Welfare in Asia - Jack Stennett, Jah Ying Chung (Good Growth)Good Growth is an Effective Altruist-aligned organisation focusing on animal welfare and food systems in Asia. They focus on growing the animal advocacy ecosystem in Asia, by conducting research prioritisation on key topics within Asian animal advocacy, developing accessible databases on the alternative protein market in Asian countries, as well as building the research capacity of Asian animal advocacy organisations.In this video, Good Growth highlights five organisations (including themselves) who might be exciting donation opportunities for those interested in supporting Asian animal advocacy. They are:Green Camel Bell, who are working to end the farming of wild animals, such as Giant Salamanders, in ChinaSinergia Animals, who are currently working on cage-free eggs advocacy, institutional outreach for plant-based meals, and investigations in ThailandAnimal Alliance Asia, who are working on a regranting programme for international funders to reach local advocates in AsiaAnimal Empathy Philippines who are working on capacity-building initiatives to grow the farm animal welfare movement in the Philippines.Why might you donate to Asian animal advocacy opportunities? Good Growth has made this handy graphic below which might provide some insight!Why Anima International suspended the campaign to end live fish sales in Poland - Jakub Stencel, Weronika Zurek (Anima International)Anima International recently took the (brave!) decision recently to suspend their campaign which focuses on ending the live sale of carp in Poland. This campaign was specifically against the purchasing of live carp around Christmas time, a tradition in some post-communist countries in East Europe. Animal advocacy groups had achieved significant wins with this campaign, leading to major retailers withdrawing from the sale of live fish.However, Anima International became worried about the consumer trend to replace carp with other fish, such as salmon. As salmon is a carnivorous fish (and carp mostly isn’t), Anima International was increasingly concerned this campaign could actually be leading to more animal deaths, as salmon farming often requires fish feed. Anima calculates that buying one Atlantic Salmon requires around 11 times more fish deaths than buying the same weight in common carp, due to considerations around weight, mortality rates and number of feed fish required to bring these animals to sale weight. As a result, Anima International took the difficult decision to suspend this long standing campaign, in light of potential negative impacts on fish.Longtermism and Animal ...]]>
Sat, 21 Jan 2023 00:10:46 +0000 EA - Animal Advocacy Digest #4: Dec 24th- Jan 20th by James Ozden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Advocacy Digest #4: Dec 24th- Jan 20th, published by James Ozden on January 20, 2023 on The Effective Altruism Forum.Note: This digest will be once per month now, as EA Forum animal content isn’t as frequent as we expected!Welcome to this edition of the Animal Advocacy Digest, where we aim to help people in the animal welfare community quickly keep up with the most important information in the space. As a reminder, you can get this digest as an email by signing up here. =I’m summarising the following posts:Video: A Few Exciting Giving Opportunities for Animal Welfare in Asia - Jack Stennett, Jah Ying Chung (Good Growth)Why Anima International suspended the campaign to end live fish sales in Poland - Jakub Stencel, Weronika Zurek (Anima International)Don’t Balk at Animal-friendly Results - Bob Fischer (Rethink Priorities)Longtermism and Animal Farming Trajectories - Michael Dello (Sentience Institute)Abolitionist in the Streets, Pragmatist in the Sheets: New Ideas for Effective Animal Advocacy - Dhruv MakwanaQuick links:Linkpost: Big Wins for Farm Animals This Decade (Lewis Bollard)Video: A Few Exciting Giving Opportunities for Animal Welfare in Asia - Jack Stennett, Jah Ying Chung (Good Growth)Good Growth is an Effective Altruist-aligned organisation focusing on animal welfare and food systems in Asia. They focus on growing the animal advocacy ecosystem in Asia, by conducting research prioritisation on key topics within Asian animal advocacy, developing accessible databases on the alternative protein market in Asian countries, as well as building the research capacity of Asian animal advocacy organisations.In this video, Good Growth highlights five organisations (including themselves) who might be exciting donation opportunities for those interested in supporting Asian animal advocacy. They are:Green Camel Bell, who are working to end the farming of wild animals, such as Giant Salamanders, in ChinaSinergia Animals, who are currently working on cage-free eggs advocacy, institutional outreach for plant-based meals, and investigations in ThailandAnimal Alliance Asia, who are working on a regranting programme for international funders to reach local advocates in AsiaAnimal Empathy Philippines who are working on capacity-building initiatives to grow the farm animal welfare movement in the Philippines.Why might you donate to Asian animal advocacy opportunities? Good Growth has made this handy graphic below which might provide some insight!Why Anima International suspended the campaign to end live fish sales in Poland - Jakub Stencel, Weronika Zurek (Anima International)Anima International recently took the (brave!) decision recently to suspend their campaign which focuses on ending the live sale of carp in Poland. This campaign was specifically against the purchasing of live carp around Christmas time, a tradition in some post-communist countries in East Europe. Animal advocacy groups had achieved significant wins with this campaign, leading to major retailers withdrawing from the sale of live fish.However, Anima International became worried about the consumer trend to replace carp with other fish, such as salmon. As salmon is a carnivorous fish (and carp mostly isn’t), Anima International was increasingly concerned this campaign could actually be leading to more animal deaths, as salmon farming often requires fish feed. Anima calculates that buying one Atlantic Salmon requires around 11 times more fish deaths than buying the same weight in common carp, due to considerations around weight, mortality rates and number of feed fish required to bring these animals to sale weight. As a result, Anima International took the difficult decision to suspend this long standing campaign, in light of potential negative impacts on fish.Longtermism and Animal ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Advocacy Digest #4: Dec 24th- Jan 20th, published by James Ozden on January 20, 2023 on The Effective Altruism Forum.Note: This digest will be once per month now, as EA Forum animal content isn’t as frequent as we expected!Welcome to this edition of the Animal Advocacy Digest, where we aim to help people in the animal welfare community quickly keep up with the most important information in the space. As a reminder, you can get this digest as an email by signing up here. =I’m summarising the following posts:Video: A Few Exciting Giving Opportunities for Animal Welfare in Asia - Jack Stennett, Jah Ying Chung (Good Growth)Why Anima International suspended the campaign to end live fish sales in Poland - Jakub Stencel, Weronika Zurek (Anima International)Don’t Balk at Animal-friendly Results - Bob Fischer (Rethink Priorities)Longtermism and Animal Farming Trajectories - Michael Dello (Sentience Institute)Abolitionist in the Streets, Pragmatist in the Sheets: New Ideas for Effective Animal Advocacy - Dhruv MakwanaQuick links:Linkpost: Big Wins for Farm Animals This Decade (Lewis Bollard)Video: A Few Exciting Giving Opportunities for Animal Welfare in Asia - Jack Stennett, Jah Ying Chung (Good Growth)Good Growth is an Effective Altruist-aligned organisation focusing on animal welfare and food systems in Asia. They focus on growing the animal advocacy ecosystem in Asia, by conducting research prioritisation on key topics within Asian animal advocacy, developing accessible databases on the alternative protein market in Asian countries, as well as building the research capacity of Asian animal advocacy organisations.In this video, Good Growth highlights five organisations (including themselves) who might be exciting donation opportunities for those interested in supporting Asian animal advocacy. They are:Green Camel Bell, who are working to end the farming of wild animals, such as Giant Salamanders, in ChinaSinergia Animals, who are currently working on cage-free eggs advocacy, institutional outreach for plant-based meals, and investigations in ThailandAnimal Alliance Asia, who are working on a regranting programme for international funders to reach local advocates in AsiaAnimal Empathy Philippines who are working on capacity-building initiatives to grow the farm animal welfare movement in the Philippines.Why might you donate to Asian animal advocacy opportunities? Good Growth has made this handy graphic below which might provide some insight!Why Anima International suspended the campaign to end live fish sales in Poland - Jakub Stencel, Weronika Zurek (Anima International)Anima International recently took the (brave!) decision recently to suspend their campaign which focuses on ending the live sale of carp in Poland. This campaign was specifically against the purchasing of live carp around Christmas time, a tradition in some post-communist countries in East Europe. Animal advocacy groups had achieved significant wins with this campaign, leading to major retailers withdrawing from the sale of live fish.However, Anima International became worried about the consumer trend to replace carp with other fish, such as salmon. As salmon is a carnivorous fish (and carp mostly isn’t), Anima International was increasingly concerned this campaign could actually be leading to more animal deaths, as salmon farming often requires fish feed. Anima calculates that buying one Atlantic Salmon requires around 11 times more fish deaths than buying the same weight in common carp, due to considerations around weight, mortality rates and number of feed fish required to bring these animals to sale weight. As a result, Anima International took the difficult decision to suspend this long standing campaign, in light of potential negative impacts on fish.Longtermism and Animal ...]]>
James Ozden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:12 None full 4534
7CdtdieiijWXWhiZB_EA EA - What’s going on with ‘crunch time’? by rosehadshar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What’s going on with ‘crunch time’?, published by rosehadshar on January 20, 2023 on The Effective Altruism Forum.Also on LW here.Imagine the set of decisions which impact TAI outcomes.Some of the decisions are more important than others: they have a larger impact on TAI outcomes. We care more about a decision the more important it is.Some of the decisions are also easier to influence than others, and we care more about a decision the easier it is to influence.Decisions in the past are un-influenceableProbably some decisions in the future are too, but it may be hard to identify which onesHow easy it is to influence a decision depends on who you arePeople have access to different sets of people and resourcesInfluence can be more or less in/direct (e.g. how many steps removed are you from the US President)The influenceable decisions are distributed over time, but we don’t know the distribution:Decisions might happen earlier or later (timing/timelines)There might be more or fewer decisions in the setDecisions might be more or less concentrated over time (urgency/duration)There might be discontinuities (only one decision matters, at the right of the distribution suddenly there are no more decisions because takeover has happened.)It’s possible that in the future there might be a particularly concentrated period of important decisions, for example like this:People refer to this concentrated period of important decisions as ‘crunch time’. The distribution might or might not end up actually containing a concentrated period of important decisions - or in other words, crunch time may or may not happen.Where the distribution does contain a concentrated period of important decisions (or at least is sufficiently likely to in expectation), crunch time might be a useful concept to have:It’s a flag to switch modes (e.g. aiming for direct effects rather than getting ourselves into a better position in future)It’s a way of compressing lots of information about the world (e.g. these 15 things have happened with various levels of confidence, which makes us think that these 50 things will happen soon with various probabilities)It’s a concept which can help to coordinate lots of people, which in some crunch time scenarios could be very importantBut we don’t know what the underlying distribution of decisions is. Here are just some of the ways the distribution might look:The fact that the underlying distribution is uncertain means that there are several ways in which crunch time might be an unhelpful concept:Different sorts of concentrated periods of important decisions might differ so much that it’s better to think about them separately.A crunch time lasting a day seems pretty different to one lasting 5 yearsA crunch time where one actor makes one decision seems pretty different to one where many thousands of actors make many thousands of decisionsThe eventual distribution may not contain a single concentrated period of important influenceable decisions.Maybe the most important decisions are in the pastMaybe there will be a gradual decline in the importance of influenceable decisions as humanity becomes increasingly disempoweredMaybe the distribution is lumpy, and there will be many discrete concentrated periods of important decisionsDistributions in different domains, or even for different individuals, might diverge sufficiently that there’s no meaningful single period of important decisionsIt seems unlikely that distributions diverge wildly at the individual level, but I think it’s possible that there might be reasonably different distributions for people who work in government versus in AI labs, for example.It might still be useful to have crunch time as a concept for a particularly concentrated period of important decisions, but only apply it in specific...]]>
rosehadshar https://forum.effectivealtruism.org/posts/7CdtdieiijWXWhiZB/what-s-going-on-with-crunch-time Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What’s going on with ‘crunch time’?, published by rosehadshar on January 20, 2023 on The Effective Altruism Forum.Also on LW here.Imagine the set of decisions which impact TAI outcomes.Some of the decisions are more important than others: they have a larger impact on TAI outcomes. We care more about a decision the more important it is.Some of the decisions are also easier to influence than others, and we care more about a decision the easier it is to influence.Decisions in the past are un-influenceableProbably some decisions in the future are too, but it may be hard to identify which onesHow easy it is to influence a decision depends on who you arePeople have access to different sets of people and resourcesInfluence can be more or less in/direct (e.g. how many steps removed are you from the US President)The influenceable decisions are distributed over time, but we don’t know the distribution:Decisions might happen earlier or later (timing/timelines)There might be more or fewer decisions in the setDecisions might be more or less concentrated over time (urgency/duration)There might be discontinuities (only one decision matters, at the right of the distribution suddenly there are no more decisions because takeover has happened.)It’s possible that in the future there might be a particularly concentrated period of important decisions, for example like this:People refer to this concentrated period of important decisions as ‘crunch time’. The distribution might or might not end up actually containing a concentrated period of important decisions - or in other words, crunch time may or may not happen.Where the distribution does contain a concentrated period of important decisions (or at least is sufficiently likely to in expectation), crunch time might be a useful concept to have:It’s a flag to switch modes (e.g. aiming for direct effects rather than getting ourselves into a better position in future)It’s a way of compressing lots of information about the world (e.g. these 15 things have happened with various levels of confidence, which makes us think that these 50 things will happen soon with various probabilities)It’s a concept which can help to coordinate lots of people, which in some crunch time scenarios could be very importantBut we don’t know what the underlying distribution of decisions is. Here are just some of the ways the distribution might look:The fact that the underlying distribution is uncertain means that there are several ways in which crunch time might be an unhelpful concept:Different sorts of concentrated periods of important decisions might differ so much that it’s better to think about them separately.A crunch time lasting a day seems pretty different to one lasting 5 yearsA crunch time where one actor makes one decision seems pretty different to one where many thousands of actors make many thousands of decisionsThe eventual distribution may not contain a single concentrated period of important influenceable decisions.Maybe the most important decisions are in the pastMaybe there will be a gradual decline in the importance of influenceable decisions as humanity becomes increasingly disempoweredMaybe the distribution is lumpy, and there will be many discrete concentrated periods of important decisionsDistributions in different domains, or even for different individuals, might diverge sufficiently that there’s no meaningful single period of important decisionsIt seems unlikely that distributions diverge wildly at the individual level, but I think it’s possible that there might be reasonably different distributions for people who work in government versus in AI labs, for example.It might still be useful to have crunch time as a concept for a particularly concentrated period of important decisions, but only apply it in specific...]]>
Fri, 20 Jan 2023 16:13:37 +0000 EA - What’s going on with ‘crunch time’? by rosehadshar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What’s going on with ‘crunch time’?, published by rosehadshar on January 20, 2023 on The Effective Altruism Forum.Also on LW here.Imagine the set of decisions which impact TAI outcomes.Some of the decisions are more important than others: they have a larger impact on TAI outcomes. We care more about a decision the more important it is.Some of the decisions are also easier to influence than others, and we care more about a decision the easier it is to influence.Decisions in the past are un-influenceableProbably some decisions in the future are too, but it may be hard to identify which onesHow easy it is to influence a decision depends on who you arePeople have access to different sets of people and resourcesInfluence can be more or less in/direct (e.g. how many steps removed are you from the US President)The influenceable decisions are distributed over time, but we don’t know the distribution:Decisions might happen earlier or later (timing/timelines)There might be more or fewer decisions in the setDecisions might be more or less concentrated over time (urgency/duration)There might be discontinuities (only one decision matters, at the right of the distribution suddenly there are no more decisions because takeover has happened.)It’s possible that in the future there might be a particularly concentrated period of important decisions, for example like this:People refer to this concentrated period of important decisions as ‘crunch time’. The distribution might or might not end up actually containing a concentrated period of important decisions - or in other words, crunch time may or may not happen.Where the distribution does contain a concentrated period of important decisions (or at least is sufficiently likely to in expectation), crunch time might be a useful concept to have:It’s a flag to switch modes (e.g. aiming for direct effects rather than getting ourselves into a better position in future)It’s a way of compressing lots of information about the world (e.g. these 15 things have happened with various levels of confidence, which makes us think that these 50 things will happen soon with various probabilities)It’s a concept which can help to coordinate lots of people, which in some crunch time scenarios could be very importantBut we don’t know what the underlying distribution of decisions is. Here are just some of the ways the distribution might look:The fact that the underlying distribution is uncertain means that there are several ways in which crunch time might be an unhelpful concept:Different sorts of concentrated periods of important decisions might differ so much that it’s better to think about them separately.A crunch time lasting a day seems pretty different to one lasting 5 yearsA crunch time where one actor makes one decision seems pretty different to one where many thousands of actors make many thousands of decisionsThe eventual distribution may not contain a single concentrated period of important influenceable decisions.Maybe the most important decisions are in the pastMaybe there will be a gradual decline in the importance of influenceable decisions as humanity becomes increasingly disempoweredMaybe the distribution is lumpy, and there will be many discrete concentrated periods of important decisionsDistributions in different domains, or even for different individuals, might diverge sufficiently that there’s no meaningful single period of important decisionsIt seems unlikely that distributions diverge wildly at the individual level, but I think it’s possible that there might be reasonably different distributions for people who work in government versus in AI labs, for example.It might still be useful to have crunch time as a concept for a particularly concentrated period of important decisions, but only apply it in specific...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What’s going on with ‘crunch time’?, published by rosehadshar on January 20, 2023 on The Effective Altruism Forum.Also on LW here.Imagine the set of decisions which impact TAI outcomes.Some of the decisions are more important than others: they have a larger impact on TAI outcomes. We care more about a decision the more important it is.Some of the decisions are also easier to influence than others, and we care more about a decision the easier it is to influence.Decisions in the past are un-influenceableProbably some decisions in the future are too, but it may be hard to identify which onesHow easy it is to influence a decision depends on who you arePeople have access to different sets of people and resourcesInfluence can be more or less in/direct (e.g. how many steps removed are you from the US President)The influenceable decisions are distributed over time, but we don’t know the distribution:Decisions might happen earlier or later (timing/timelines)There might be more or fewer decisions in the setDecisions might be more or less concentrated over time (urgency/duration)There might be discontinuities (only one decision matters, at the right of the distribution suddenly there are no more decisions because takeover has happened.)It’s possible that in the future there might be a particularly concentrated period of important decisions, for example like this:People refer to this concentrated period of important decisions as ‘crunch time’. The distribution might or might not end up actually containing a concentrated period of important decisions - or in other words, crunch time may or may not happen.Where the distribution does contain a concentrated period of important decisions (or at least is sufficiently likely to in expectation), crunch time might be a useful concept to have:It’s a flag to switch modes (e.g. aiming for direct effects rather than getting ourselves into a better position in future)It’s a way of compressing lots of information about the world (e.g. these 15 things have happened with various levels of confidence, which makes us think that these 50 things will happen soon with various probabilities)It’s a concept which can help to coordinate lots of people, which in some crunch time scenarios could be very importantBut we don’t know what the underlying distribution of decisions is. Here are just some of the ways the distribution might look:The fact that the underlying distribution is uncertain means that there are several ways in which crunch time might be an unhelpful concept:Different sorts of concentrated periods of important decisions might differ so much that it’s better to think about them separately.A crunch time lasting a day seems pretty different to one lasting 5 yearsA crunch time where one actor makes one decision seems pretty different to one where many thousands of actors make many thousands of decisionsThe eventual distribution may not contain a single concentrated period of important influenceable decisions.Maybe the most important decisions are in the pastMaybe there will be a gradual decline in the importance of influenceable decisions as humanity becomes increasingly disempoweredMaybe the distribution is lumpy, and there will be many discrete concentrated periods of important decisionsDistributions in different domains, or even for different individuals, might diverge sufficiently that there’s no meaningful single period of important decisionsIt seems unlikely that distributions diverge wildly at the individual level, but I think it’s possible that there might be reasonably different distributions for people who work in government versus in AI labs, for example.It might still be useful to have crunch time as a concept for a particularly concentrated period of important decisions, but only apply it in specific...]]>
rosehadshar https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:23 None full 4530
dTnkzb4F7wLnjiufm_EA EA - Funding by Voting: Ignorance and Irrationality by pradyuprasad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding by Voting: Ignorance and Irrationality, published by pradyuprasad on January 19, 2023 on The Effective Altruism Forum.I am sceptical of the idea that a substantial amount of EA funds should be allocated democratically. In my view there are some strong reasons to believe that the existing model of allocating funding (that is funders giving money to organisations which allocate money with the allocators' best judgement) does better than a simple democratic vote.The main problem in my opinion is that individual voters do not have sufficient incentives to allocate collective funds in an effective mannerRational Ignorance (or even irrationality)?The main problem is that getting decisions to be correct requires a large amount of information (which is costly to collect) and decision making (which is again costly to make correctly).Consider the decision on whether EA funders should spend more money on reducing civil conflict. This is a hard decision to make because it involves a large amount of information (like historical statistics on civil wars, their causes, historical efforts to reduce them etc)., subjective judgement (how tractable it is, comparing it to other funding opportunities, weighing PR concerns), and finally decisions based on the above judgements (how much to fund and to whom). The above process costs many hours of high intensity thinking.The most obvious response to this is that EA voters aren’t going to be ignorant because they care about EA causes. The argument would go that EA voters are going to spend more time than what political scientists would consider rational because they have a deep commitment to the cause. I do find this plausible. Many EAs regularly donate over 10% of their income to cause areas. Many of them spend hours explaining and debating these issues online, and for some of them working in an EA cause area will be their main career priority.My first response to this is that these people are going to be a minority of the total voter population (if broadly defined). Not everyone will have the same level of engagement with EA ideas, and not everyone will have the same level of personal interest in getting them right. These qualities are going to be concentrated in a minority of people because, relative to the total EA population (say as defined as DAUs or MAUs of the EA forum, or a larger one as GWWC pledge takers), this is a small number of people.But my second critique is more substantive. It is that apart from being rationally ignorant on these issues, most voters probably will be irrational on them too. On several EA topics voters will be biassed in the sense that they will have beliefs that are systematically wrong in some direction. They might be in favour of some cause area without updating when new information or arguments are produced. Their notions about certain people or organisations might make them systematically biassed against them and lead to bad voting outcomes.What incentive do voters in this have to improve? I don’t think they have many. Each voter does not think that their vote is very important. They won’t have a large chance of influencing the vote, so why bother doing all the reading anyways? Their previous notions will influence the voting decision and they will have no incentive to change if they are wrong.The existing method for EA orgs to allocate money has been for donors to give them money, then to analyse the costs and benefits of whatever opportunities they investigate and decide on what grants to give.I think this would lead to better outcomes because there are clearly identifiable people who have reputations to build by doing good analysis and recommending good grants. This can work in the opposite direction (people are too afraid to propose high risk grants because they feel it might stick...]]>
pradyuprasad https://forum.effectivealtruism.org/posts/dTnkzb4F7wLnjiufm/funding-by-voting-ignorance-and-irrationality Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding by Voting: Ignorance and Irrationality, published by pradyuprasad on January 19, 2023 on The Effective Altruism Forum.I am sceptical of the idea that a substantial amount of EA funds should be allocated democratically. In my view there are some strong reasons to believe that the existing model of allocating funding (that is funders giving money to organisations which allocate money with the allocators' best judgement) does better than a simple democratic vote.The main problem in my opinion is that individual voters do not have sufficient incentives to allocate collective funds in an effective mannerRational Ignorance (or even irrationality)?The main problem is that getting decisions to be correct requires a large amount of information (which is costly to collect) and decision making (which is again costly to make correctly).Consider the decision on whether EA funders should spend more money on reducing civil conflict. This is a hard decision to make because it involves a large amount of information (like historical statistics on civil wars, their causes, historical efforts to reduce them etc)., subjective judgement (how tractable it is, comparing it to other funding opportunities, weighing PR concerns), and finally decisions based on the above judgements (how much to fund and to whom). The above process costs many hours of high intensity thinking.The most obvious response to this is that EA voters aren’t going to be ignorant because they care about EA causes. The argument would go that EA voters are going to spend more time than what political scientists would consider rational because they have a deep commitment to the cause. I do find this plausible. Many EAs regularly donate over 10% of their income to cause areas. Many of them spend hours explaining and debating these issues online, and for some of them working in an EA cause area will be their main career priority.My first response to this is that these people are going to be a minority of the total voter population (if broadly defined). Not everyone will have the same level of engagement with EA ideas, and not everyone will have the same level of personal interest in getting them right. These qualities are going to be concentrated in a minority of people because, relative to the total EA population (say as defined as DAUs or MAUs of the EA forum, or a larger one as GWWC pledge takers), this is a small number of people.But my second critique is more substantive. It is that apart from being rationally ignorant on these issues, most voters probably will be irrational on them too. On several EA topics voters will be biassed in the sense that they will have beliefs that are systematically wrong in some direction. They might be in favour of some cause area without updating when new information or arguments are produced. Their notions about certain people or organisations might make them systematically biassed against them and lead to bad voting outcomes.What incentive do voters in this have to improve? I don’t think they have many. Each voter does not think that their vote is very important. They won’t have a large chance of influencing the vote, so why bother doing all the reading anyways? Their previous notions will influence the voting decision and they will have no incentive to change if they are wrong.The existing method for EA orgs to allocate money has been for donors to give them money, then to analyse the costs and benefits of whatever opportunities they investigate and decide on what grants to give.I think this would lead to better outcomes because there are clearly identifiable people who have reputations to build by doing good analysis and recommending good grants. This can work in the opposite direction (people are too afraid to propose high risk grants because they feel it might stick...]]>
Fri, 20 Jan 2023 10:15:42 +0000 EA - Funding by Voting: Ignorance and Irrationality by pradyuprasad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding by Voting: Ignorance and Irrationality, published by pradyuprasad on January 19, 2023 on The Effective Altruism Forum.I am sceptical of the idea that a substantial amount of EA funds should be allocated democratically. In my view there are some strong reasons to believe that the existing model of allocating funding (that is funders giving money to organisations which allocate money with the allocators' best judgement) does better than a simple democratic vote.The main problem in my opinion is that individual voters do not have sufficient incentives to allocate collective funds in an effective mannerRational Ignorance (or even irrationality)?The main problem is that getting decisions to be correct requires a large amount of information (which is costly to collect) and decision making (which is again costly to make correctly).Consider the decision on whether EA funders should spend more money on reducing civil conflict. This is a hard decision to make because it involves a large amount of information (like historical statistics on civil wars, their causes, historical efforts to reduce them etc)., subjective judgement (how tractable it is, comparing it to other funding opportunities, weighing PR concerns), and finally decisions based on the above judgements (how much to fund and to whom). The above process costs many hours of high intensity thinking.The most obvious response to this is that EA voters aren’t going to be ignorant because they care about EA causes. The argument would go that EA voters are going to spend more time than what political scientists would consider rational because they have a deep commitment to the cause. I do find this plausible. Many EAs regularly donate over 10% of their income to cause areas. Many of them spend hours explaining and debating these issues online, and for some of them working in an EA cause area will be their main career priority.My first response to this is that these people are going to be a minority of the total voter population (if broadly defined). Not everyone will have the same level of engagement with EA ideas, and not everyone will have the same level of personal interest in getting them right. These qualities are going to be concentrated in a minority of people because, relative to the total EA population (say as defined as DAUs or MAUs of the EA forum, or a larger one as GWWC pledge takers), this is a small number of people.But my second critique is more substantive. It is that apart from being rationally ignorant on these issues, most voters probably will be irrational on them too. On several EA topics voters will be biassed in the sense that they will have beliefs that are systematically wrong in some direction. They might be in favour of some cause area without updating when new information or arguments are produced. Their notions about certain people or organisations might make them systematically biassed against them and lead to bad voting outcomes.What incentive do voters in this have to improve? I don’t think they have many. Each voter does not think that their vote is very important. They won’t have a large chance of influencing the vote, so why bother doing all the reading anyways? Their previous notions will influence the voting decision and they will have no incentive to change if they are wrong.The existing method for EA orgs to allocate money has been for donors to give them money, then to analyse the costs and benefits of whatever opportunities they investigate and decide on what grants to give.I think this would lead to better outcomes because there are clearly identifiable people who have reputations to build by doing good analysis and recommending good grants. This can work in the opposite direction (people are too afraid to propose high risk grants because they feel it might stick...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding by Voting: Ignorance and Irrationality, published by pradyuprasad on January 19, 2023 on The Effective Altruism Forum.I am sceptical of the idea that a substantial amount of EA funds should be allocated democratically. In my view there are some strong reasons to believe that the existing model of allocating funding (that is funders giving money to organisations which allocate money with the allocators' best judgement) does better than a simple democratic vote.The main problem in my opinion is that individual voters do not have sufficient incentives to allocate collective funds in an effective mannerRational Ignorance (or even irrationality)?The main problem is that getting decisions to be correct requires a large amount of information (which is costly to collect) and decision making (which is again costly to make correctly).Consider the decision on whether EA funders should spend more money on reducing civil conflict. This is a hard decision to make because it involves a large amount of information (like historical statistics on civil wars, their causes, historical efforts to reduce them etc)., subjective judgement (how tractable it is, comparing it to other funding opportunities, weighing PR concerns), and finally decisions based on the above judgements (how much to fund and to whom). The above process costs many hours of high intensity thinking.The most obvious response to this is that EA voters aren’t going to be ignorant because they care about EA causes. The argument would go that EA voters are going to spend more time than what political scientists would consider rational because they have a deep commitment to the cause. I do find this plausible. Many EAs regularly donate over 10% of their income to cause areas. Many of them spend hours explaining and debating these issues online, and for some of them working in an EA cause area will be their main career priority.My first response to this is that these people are going to be a minority of the total voter population (if broadly defined). Not everyone will have the same level of engagement with EA ideas, and not everyone will have the same level of personal interest in getting them right. These qualities are going to be concentrated in a minority of people because, relative to the total EA population (say as defined as DAUs or MAUs of the EA forum, or a larger one as GWWC pledge takers), this is a small number of people.But my second critique is more substantive. It is that apart from being rationally ignorant on these issues, most voters probably will be irrational on them too. On several EA topics voters will be biassed in the sense that they will have beliefs that are systematically wrong in some direction. They might be in favour of some cause area without updating when new information or arguments are produced. Their notions about certain people or organisations might make them systematically biassed against them and lead to bad voting outcomes.What incentive do voters in this have to improve? I don’t think they have many. Each voter does not think that their vote is very important. They won’t have a large chance of influencing the vote, so why bother doing all the reading anyways? Their previous notions will influence the voting decision and they will have no incentive to change if they are wrong.The existing method for EA orgs to allocate money has been for donors to give them money, then to analyse the costs and benefits of whatever opportunities they investigate and decide on what grants to give.I think this would lead to better outcomes because there are clearly identifiable people who have reputations to build by doing good analysis and recommending good grants. This can work in the opposite direction (people are too afraid to propose high risk grants because they feel it might stick...]]>
pradyuprasad https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:00 None full 4525
xBeqaWEJfWZv8ALWn_EA EA - Announcing Cavendish Labs by dyusha Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Cavendish Labs, published by dyusha on January 19, 2023 on The Effective Altruism Forum.We’re excited to announce Cavendish Labs, a new research institute in Vermont focused on AI safety and pandemic prevention! We’re founding a community of researchers who will live together and work on the world’s most pressing problems.Uh, why Vermont?It’s beautiful; it has one of the cheapest costs of living in the United States; there’s lots of great people; it’s only a few hours away from Boston, NYC, and Montreal. There’s even a train that goes there from Washington D.C.! A few of us briefly lived in Vermont during the pandemic, and we found it to be a fantastic place to live, think, and work. Each season brings with it a new kind of beauty to the hills. There are no barriers to a relaxing walk in the woods. There's practically no light pollution, so the cosmos is waiting outside the door whenever you need inspiration.What are you going to be researching?We have a few research interests:1. AI Alignment. How do we make sure that AI does what we want? We’ve spent some time thinking about ELK and inverse scaling; however, we think that AGI will most likely be achieved through some sort of model-based RL framework, so that is our current focus. For instance, we know how to induce provable guarantees of behavior in supervised learning; could we do something similar in RL?2. Pandemic prevention. There’s been a lot of talk about the potential of Far-UVC for ambient disinfection. Understanding why it works on a molecular level, and whether it works safely, is key for developing broad-spectrum pandemic prevention tools.3. Diagnostic development. We're interested in designing a low-cost and simple-to-use platform for LAMP reactions so that generalized diagnostic capabilities are more widespread. We envision a world where it is both cheap and easy to run a panel of tests so one can swiftly determine the exact virus behind an infection.How’s this organized?We'll be living and working on different floors of the same building—some combination of a small liberal arts college and research lab. To ensure we’re not too isolated, we’ll visit Boston at least once a month, and invite a rotating group of visitors to work with us, while maintaining collaborations with researchers around the world.Sounds interesting!We’re actively searching for collaborators in our areas of interest; if this sounds like you, send us an email at hello@cavendishlabs.org! Our space in Vermont isn’t ready until late spring, so in the meantime we’ll be located in Berkeley and Rhode Island.At the same time, we’re looking for visiting scholars to come work with us in the summer or fall: if you’re interested, keep an eye out for our application!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
dyusha https://forum.effectivealtruism.org/posts/xBeqaWEJfWZv8ALWn/announcing-cavendish-labs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Cavendish Labs, published by dyusha on January 19, 2023 on The Effective Altruism Forum.We’re excited to announce Cavendish Labs, a new research institute in Vermont focused on AI safety and pandemic prevention! We’re founding a community of researchers who will live together and work on the world’s most pressing problems.Uh, why Vermont?It’s beautiful; it has one of the cheapest costs of living in the United States; there’s lots of great people; it’s only a few hours away from Boston, NYC, and Montreal. There’s even a train that goes there from Washington D.C.! A few of us briefly lived in Vermont during the pandemic, and we found it to be a fantastic place to live, think, and work. Each season brings with it a new kind of beauty to the hills. There are no barriers to a relaxing walk in the woods. There's practically no light pollution, so the cosmos is waiting outside the door whenever you need inspiration.What are you going to be researching?We have a few research interests:1. AI Alignment. How do we make sure that AI does what we want? We’ve spent some time thinking about ELK and inverse scaling; however, we think that AGI will most likely be achieved through some sort of model-based RL framework, so that is our current focus. For instance, we know how to induce provable guarantees of behavior in supervised learning; could we do something similar in RL?2. Pandemic prevention. There’s been a lot of talk about the potential of Far-UVC for ambient disinfection. Understanding why it works on a molecular level, and whether it works safely, is key for developing broad-spectrum pandemic prevention tools.3. Diagnostic development. We're interested in designing a low-cost and simple-to-use platform for LAMP reactions so that generalized diagnostic capabilities are more widespread. We envision a world where it is both cheap and easy to run a panel of tests so one can swiftly determine the exact virus behind an infection.How’s this organized?We'll be living and working on different floors of the same building—some combination of a small liberal arts college and research lab. To ensure we’re not too isolated, we’ll visit Boston at least once a month, and invite a rotating group of visitors to work with us, while maintaining collaborations with researchers around the world.Sounds interesting!We’re actively searching for collaborators in our areas of interest; if this sounds like you, send us an email at hello@cavendishlabs.org! Our space in Vermont isn’t ready until late spring, so in the meantime we’ll be located in Berkeley and Rhode Island.At the same time, we’re looking for visiting scholars to come work with us in the summer or fall: if you’re interested, keep an eye out for our application!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 20 Jan 2023 00:36:10 +0000 EA - Announcing Cavendish Labs by dyusha Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Cavendish Labs, published by dyusha on January 19, 2023 on The Effective Altruism Forum.We’re excited to announce Cavendish Labs, a new research institute in Vermont focused on AI safety and pandemic prevention! We’re founding a community of researchers who will live together and work on the world’s most pressing problems.Uh, why Vermont?It’s beautiful; it has one of the cheapest costs of living in the United States; there’s lots of great people; it’s only a few hours away from Boston, NYC, and Montreal. There’s even a train that goes there from Washington D.C.! A few of us briefly lived in Vermont during the pandemic, and we found it to be a fantastic place to live, think, and work. Each season brings with it a new kind of beauty to the hills. There are no barriers to a relaxing walk in the woods. There's practically no light pollution, so the cosmos is waiting outside the door whenever you need inspiration.What are you going to be researching?We have a few research interests:1. AI Alignment. How do we make sure that AI does what we want? We’ve spent some time thinking about ELK and inverse scaling; however, we think that AGI will most likely be achieved through some sort of model-based RL framework, so that is our current focus. For instance, we know how to induce provable guarantees of behavior in supervised learning; could we do something similar in RL?2. Pandemic prevention. There’s been a lot of talk about the potential of Far-UVC for ambient disinfection. Understanding why it works on a molecular level, and whether it works safely, is key for developing broad-spectrum pandemic prevention tools.3. Diagnostic development. We're interested in designing a low-cost and simple-to-use platform for LAMP reactions so that generalized diagnostic capabilities are more widespread. We envision a world where it is both cheap and easy to run a panel of tests so one can swiftly determine the exact virus behind an infection.How’s this organized?We'll be living and working on different floors of the same building—some combination of a small liberal arts college and research lab. To ensure we’re not too isolated, we’ll visit Boston at least once a month, and invite a rotating group of visitors to work with us, while maintaining collaborations with researchers around the world.Sounds interesting!We’re actively searching for collaborators in our areas of interest; if this sounds like you, send us an email at hello@cavendishlabs.org! Our space in Vermont isn’t ready until late spring, so in the meantime we’ll be located in Berkeley and Rhode Island.At the same time, we’re looking for visiting scholars to come work with us in the summer or fall: if you’re interested, keep an eye out for our application!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Cavendish Labs, published by dyusha on January 19, 2023 on The Effective Altruism Forum.We’re excited to announce Cavendish Labs, a new research institute in Vermont focused on AI safety and pandemic prevention! We’re founding a community of researchers who will live together and work on the world’s most pressing problems.Uh, why Vermont?It’s beautiful; it has one of the cheapest costs of living in the United States; there’s lots of great people; it’s only a few hours away from Boston, NYC, and Montreal. There’s even a train that goes there from Washington D.C.! A few of us briefly lived in Vermont during the pandemic, and we found it to be a fantastic place to live, think, and work. Each season brings with it a new kind of beauty to the hills. There are no barriers to a relaxing walk in the woods. There's practically no light pollution, so the cosmos is waiting outside the door whenever you need inspiration.What are you going to be researching?We have a few research interests:1. AI Alignment. How do we make sure that AI does what we want? We’ve spent some time thinking about ELK and inverse scaling; however, we think that AGI will most likely be achieved through some sort of model-based RL framework, so that is our current focus. For instance, we know how to induce provable guarantees of behavior in supervised learning; could we do something similar in RL?2. Pandemic prevention. There’s been a lot of talk about the potential of Far-UVC for ambient disinfection. Understanding why it works on a molecular level, and whether it works safely, is key for developing broad-spectrum pandemic prevention tools.3. Diagnostic development. We're interested in designing a low-cost and simple-to-use platform for LAMP reactions so that generalized diagnostic capabilities are more widespread. We envision a world where it is both cheap and easy to run a panel of tests so one can swiftly determine the exact virus behind an infection.How’s this organized?We'll be living and working on different floors of the same building—some combination of a small liberal arts college and research lab. To ensure we’re not too isolated, we’ll visit Boston at least once a month, and invite a rotating group of visitors to work with us, while maintaining collaborations with researchers around the world.Sounds interesting!We’re actively searching for collaborators in our areas of interest; if this sounds like you, send us an email at hello@cavendishlabs.org! Our space in Vermont isn’t ready until late spring, so in the meantime we’ll be located in Berkeley and Rhode Island.At the same time, we’re looking for visiting scholars to come work with us in the summer or fall: if you’re interested, keep an eye out for our application!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
dyusha https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:47 None full 4526
KsgmLHwqRj7fZ9szo_EA EA - UK Personal Finance Tips and Info by Rasool Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: UK Personal Finance Tips & Info, published by Rasool on January 19, 2023 on The Effective Altruism Forum.SummaryInspired by this post, a short intro to some of the important features and misconceptions of the UK personal finance landscape.For further information:MoneySavingExpert (MSE) is by far-and-away the best resource for all things finance. Subscribe to the weekly newsletter and consult the site guide for anything and everything money-related./r/UKPersonalFinance is excellent, especially the wiki and the flowchart.The Government/HMRC website is actually pretty accessible.This is only a brief overview, though I am happy to expand on any topic if there is interest. I am not your financial advisor and this is not financial advice.The topics covered are tax, banking, investing, and giving. Links to further reading are provided for each section.TaxIncome TaxThe UK has a marginal tax band system:Everything you earn below £12,570 is tax-freePounds 12,571 - 50,270 are taxed at 20%Pounds 50,271 - 150,000 are taxed at 40%Pounds 150,000+ are taxed at 45%.This is automatically calculated and deducted from your paycheck. Every month your employer should provide a payslip with your gross pay and deductions for tax, national insurance, pension, and student loan (if applicable). You should check this alongside online salary calculators like this, and speak to your HR/accounting department if there is anything you are not sure about.Example 1Your salary is £20,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £20,000 is taxed at 20%.So you pay:0% on 12,57020% on 7,430 (which is 20000 - 12570)Which gives a total of £1,486Example 2Your salary is £60,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £50,270 is taxed at 20%. The portion between £50,271 and £60,000 is taxed at 40%.So you pay:0% on 12,57020% on 37,700 (which is 50270 - 12570)40% on 9,730 (which is 60000 - 50270)Which gives a total of £11,432Your whole income is not taxed at your marginal tax band so it is impossible to take home less money as a result of a pay raise.These figures can be verified for example here ().National InsuranceNational Insurance is used to calculate your state pension. From the state retirement age, you get a weekly pension payment from the government.The size of this payment depends on the number of 'qualifying years' that you have paid national insurance (or earned it via credits - eg. by being on certain kinds of benefits, or being a parent or carer). Note that this does not take into account how much national insurance you have paid, only the number of years that you contributed.If you pay national insurance for 35 years you get the full state pension. If you only pay national insurance for 20 years, you will get 20/35ths of that amount.What you pay is also done on a marginal tax band system, though slightly confusingly the bands are per week/month rather than taking your total income over the whole year.Pretty much, what you pay 20% income tax on, you pay 12% national insurance on, and what you pay 40%+ income tax on, you pay 2% national insurance on.So a more accurate diagram is:And our examples from earlier are now:Example 1Your salary is £20,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £20,000 is taxed at 20% with national insurance at 12%.So you pay:0% on 12,57032% on 7,430 (which is 20000 - 12570)Which gives a total of £2377.60Example 2Your salary is £60,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £50,270 is taxed at 20% with national insurance at 12%. The portion between £50,271 and £60,000 is taxed at 40% with national insurance at 2%.So you pay:0% on 12,57032% on 37,700 (which is 50270 - 12570)42% on 9,730 (which is 60000 - 50270)Which gi...]]>
Rasool https://forum.effectivealtruism.org/posts/KsgmLHwqRj7fZ9szo/uk-personal-finance-tips-and-info Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: UK Personal Finance Tips & Info, published by Rasool on January 19, 2023 on The Effective Altruism Forum.SummaryInspired by this post, a short intro to some of the important features and misconceptions of the UK personal finance landscape.For further information:MoneySavingExpert (MSE) is by far-and-away the best resource for all things finance. Subscribe to the weekly newsletter and consult the site guide for anything and everything money-related./r/UKPersonalFinance is excellent, especially the wiki and the flowchart.The Government/HMRC website is actually pretty accessible.This is only a brief overview, though I am happy to expand on any topic if there is interest. I am not your financial advisor and this is not financial advice.The topics covered are tax, banking, investing, and giving. Links to further reading are provided for each section.TaxIncome TaxThe UK has a marginal tax band system:Everything you earn below £12,570 is tax-freePounds 12,571 - 50,270 are taxed at 20%Pounds 50,271 - 150,000 are taxed at 40%Pounds 150,000+ are taxed at 45%.This is automatically calculated and deducted from your paycheck. Every month your employer should provide a payslip with your gross pay and deductions for tax, national insurance, pension, and student loan (if applicable). You should check this alongside online salary calculators like this, and speak to your HR/accounting department if there is anything you are not sure about.Example 1Your salary is £20,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £20,000 is taxed at 20%.So you pay:0% on 12,57020% on 7,430 (which is 20000 - 12570)Which gives a total of £1,486Example 2Your salary is £60,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £50,270 is taxed at 20%. The portion between £50,271 and £60,000 is taxed at 40%.So you pay:0% on 12,57020% on 37,700 (which is 50270 - 12570)40% on 9,730 (which is 60000 - 50270)Which gives a total of £11,432Your whole income is not taxed at your marginal tax band so it is impossible to take home less money as a result of a pay raise.These figures can be verified for example here ().National InsuranceNational Insurance is used to calculate your state pension. From the state retirement age, you get a weekly pension payment from the government.The size of this payment depends on the number of 'qualifying years' that you have paid national insurance (or earned it via credits - eg. by being on certain kinds of benefits, or being a parent or carer). Note that this does not take into account how much national insurance you have paid, only the number of years that you contributed.If you pay national insurance for 35 years you get the full state pension. If you only pay national insurance for 20 years, you will get 20/35ths of that amount.What you pay is also done on a marginal tax band system, though slightly confusingly the bands are per week/month rather than taking your total income over the whole year.Pretty much, what you pay 20% income tax on, you pay 12% national insurance on, and what you pay 40%+ income tax on, you pay 2% national insurance on.So a more accurate diagram is:And our examples from earlier are now:Example 1Your salary is £20,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £20,000 is taxed at 20% with national insurance at 12%.So you pay:0% on 12,57032% on 7,430 (which is 20000 - 12570)Which gives a total of £2377.60Example 2Your salary is £60,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £50,270 is taxed at 20% with national insurance at 12%. The portion between £50,271 and £60,000 is taxed at 40% with national insurance at 2%.So you pay:0% on 12,57032% on 37,700 (which is 50270 - 12570)42% on 9,730 (which is 60000 - 50270)Which gi...]]>
Thu, 19 Jan 2023 21:02:43 +0000 EA - UK Personal Finance Tips and Info by Rasool Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: UK Personal Finance Tips & Info, published by Rasool on January 19, 2023 on The Effective Altruism Forum.SummaryInspired by this post, a short intro to some of the important features and misconceptions of the UK personal finance landscape.For further information:MoneySavingExpert (MSE) is by far-and-away the best resource for all things finance. Subscribe to the weekly newsletter and consult the site guide for anything and everything money-related./r/UKPersonalFinance is excellent, especially the wiki and the flowchart.The Government/HMRC website is actually pretty accessible.This is only a brief overview, though I am happy to expand on any topic if there is interest. I am not your financial advisor and this is not financial advice.The topics covered are tax, banking, investing, and giving. Links to further reading are provided for each section.TaxIncome TaxThe UK has a marginal tax band system:Everything you earn below £12,570 is tax-freePounds 12,571 - 50,270 are taxed at 20%Pounds 50,271 - 150,000 are taxed at 40%Pounds 150,000+ are taxed at 45%.This is automatically calculated and deducted from your paycheck. Every month your employer should provide a payslip with your gross pay and deductions for tax, national insurance, pension, and student loan (if applicable). You should check this alongside online salary calculators like this, and speak to your HR/accounting department if there is anything you are not sure about.Example 1Your salary is £20,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £20,000 is taxed at 20%.So you pay:0% on 12,57020% on 7,430 (which is 20000 - 12570)Which gives a total of £1,486Example 2Your salary is £60,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £50,270 is taxed at 20%. The portion between £50,271 and £60,000 is taxed at 40%.So you pay:0% on 12,57020% on 37,700 (which is 50270 - 12570)40% on 9,730 (which is 60000 - 50270)Which gives a total of £11,432Your whole income is not taxed at your marginal tax band so it is impossible to take home less money as a result of a pay raise.These figures can be verified for example here ().National InsuranceNational Insurance is used to calculate your state pension. From the state retirement age, you get a weekly pension payment from the government.The size of this payment depends on the number of 'qualifying years' that you have paid national insurance (or earned it via credits - eg. by being on certain kinds of benefits, or being a parent or carer). Note that this does not take into account how much national insurance you have paid, only the number of years that you contributed.If you pay national insurance for 35 years you get the full state pension. If you only pay national insurance for 20 years, you will get 20/35ths of that amount.What you pay is also done on a marginal tax band system, though slightly confusingly the bands are per week/month rather than taking your total income over the whole year.Pretty much, what you pay 20% income tax on, you pay 12% national insurance on, and what you pay 40%+ income tax on, you pay 2% national insurance on.So a more accurate diagram is:And our examples from earlier are now:Example 1Your salary is £20,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £20,000 is taxed at 20% with national insurance at 12%.So you pay:0% on 12,57032% on 7,430 (which is 20000 - 12570)Which gives a total of £2377.60Example 2Your salary is £60,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £50,270 is taxed at 20% with national insurance at 12%. The portion between £50,271 and £60,000 is taxed at 40% with national insurance at 2%.So you pay:0% on 12,57032% on 37,700 (which is 50270 - 12570)42% on 9,730 (which is 60000 - 50270)Which gi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: UK Personal Finance Tips & Info, published by Rasool on January 19, 2023 on The Effective Altruism Forum.SummaryInspired by this post, a short intro to some of the important features and misconceptions of the UK personal finance landscape.For further information:MoneySavingExpert (MSE) is by far-and-away the best resource for all things finance. Subscribe to the weekly newsletter and consult the site guide for anything and everything money-related./r/UKPersonalFinance is excellent, especially the wiki and the flowchart.The Government/HMRC website is actually pretty accessible.This is only a brief overview, though I am happy to expand on any topic if there is interest. I am not your financial advisor and this is not financial advice.The topics covered are tax, banking, investing, and giving. Links to further reading are provided for each section.TaxIncome TaxThe UK has a marginal tax band system:Everything you earn below £12,570 is tax-freePounds 12,571 - 50,270 are taxed at 20%Pounds 50,271 - 150,000 are taxed at 40%Pounds 150,000+ are taxed at 45%.This is automatically calculated and deducted from your paycheck. Every month your employer should provide a payslip with your gross pay and deductions for tax, national insurance, pension, and student loan (if applicable). You should check this alongside online salary calculators like this, and speak to your HR/accounting department if there is anything you are not sure about.Example 1Your salary is £20,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £20,000 is taxed at 20%.So you pay:0% on 12,57020% on 7,430 (which is 20000 - 12570)Which gives a total of £1,486Example 2Your salary is £60,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £50,270 is taxed at 20%. The portion between £50,271 and £60,000 is taxed at 40%.So you pay:0% on 12,57020% on 37,700 (which is 50270 - 12570)40% on 9,730 (which is 60000 - 50270)Which gives a total of £11,432Your whole income is not taxed at your marginal tax band so it is impossible to take home less money as a result of a pay raise.These figures can be verified for example here ().National InsuranceNational Insurance is used to calculate your state pension. From the state retirement age, you get a weekly pension payment from the government.The size of this payment depends on the number of 'qualifying years' that you have paid national insurance (or earned it via credits - eg. by being on certain kinds of benefits, or being a parent or carer). Note that this does not take into account how much national insurance you have paid, only the number of years that you contributed.If you pay national insurance for 35 years you get the full state pension. If you only pay national insurance for 20 years, you will get 20/35ths of that amount.What you pay is also done on a marginal tax band system, though slightly confusingly the bands are per week/month rather than taking your total income over the whole year.Pretty much, what you pay 20% income tax on, you pay 12% national insurance on, and what you pay 40%+ income tax on, you pay 2% national insurance on.So a more accurate diagram is:And our examples from earlier are now:Example 1Your salary is £20,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £20,000 is taxed at 20% with national insurance at 12%.So you pay:0% on 12,57032% on 7,430 (which is 20000 - 12570)Which gives a total of £2377.60Example 2Your salary is £60,000, the first £12,570 is taxed at 0%. The portion between £12,571 and £50,270 is taxed at 20% with national insurance at 12%. The portion between £50,271 and £60,000 is taxed at 40% with national insurance at 2%.So you pay:0% on 12,57032% on 37,700 (which is 50270 - 12570)42% on 9,730 (which is 60000 - 50270)Which gi...]]>
Rasool https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:35 None full 4520
D92RwcbrfqGMjGfqw_EA EA - (Paper Summary) Managing the Transition to Widespread Metagenomic Monitoring: Policy Considerations for Future Biosurveillance by chelsea liang Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: (Paper Summary) Managing the Transition to Widespread Metagenomic Monitoring: Policy Considerations for Future Biosurveillance, published by chelsea liang on January 19, 2023 on The Effective Altruism Forum.Our paper Managing the Transition to Widespread Metagenomic Monitoring: Policy Considerations for Future Biosurveillance, was recently published in Health Security.TL;DRMetagenomic sequencing is a really good thing, and lots of people are working on making the technologies work. But getting the tech to work is only half of the problem - we also need it to be implementable and useful for policy, and legally acceptable. There are a bunch of important questions which need to be addressed to make sure this isn’t a problem, and the paper intends to start a discussion of how to solve those problems - so that a decade from now, we don’t look back and say that the technology works, but it’s not going to be used for pandemic prevention because of these other issues, which are far harder or impossible to fix post-hoc. This paper serves to put these problems on the agenda now so they can be addressed by the relevant academic, policy, advocacy and professional communities.AbstractThe technological possibilities and future public health importance of metagenomic sequencing have received extensive attention, but there has been little discussion about the policy and regulatory issues that need to be addressed if metagenomic sequencing is adopted as a key technology for biosurveillance. In this article, we introduce metagenomic monitoring as a possible path to eventually replacing current infectious disease monitoring models. Many key enablers are technological, whereas others are not. We therefore highlight key policy challenges and implementation questions that need to be addressed for “widespread metagenomic monitoring” to be possible. Policymakers must address pitfalls like fragmentation of the technological base, private capture of benefits, privacy concerns, the usefulness of the system during nonpandemic times, and how the future systems will enable better response.If these challenges are addressed, the technological and public health promise of metagenomic sequencing can be realized.The paper is broken down into 3 sections:Present State of Biosurveillance (Where we are)Potential Metagenomic Monitoring Futures (Qualities of an ideal metagenomic future)Way Points and Obstacles in a Transition (How to get from here to there)Present State of BiosurveillanceGlobal biosurveillance efforts together only provide partial coverage. Existing genomic data collection and analysis is often siloed and very difficult to integrate for a comprehensive disease landscape. Biosurveillance efforts that have tended to maintain funding are foodborne pathogens and rare reportable diseases.Potential Metagenomic Monitoring FuturesTo transition to Widespread Metagenomic Monitoring (WMGM) responsibly and with maximized biosecurity benefits, there needs to be a common understanding of qualities we expect to see in a high-investment and ambitious scenario. See this section for specifics.Way Points and Obstacles in a TransitionWe mention some antecedents of a WMGM along with a table of Critical Technological Advances. For example, Shean and Greninger1 propose a near-term future resting on widespread deployment of clinical sampling. Another near-term possibility is the Nucleic Acid Observatory,2 which proposed ongoing wastewater and watershed sampling across the United States to find sequences that recently emerged or are increasing in frequency, indicating a potential new pathogen or other notable events. We identify the following as systemic obstacles on the path to WMGM which need to be addressed by the relevant professional communities and institutions:Suboptimal use and high prices...]]>
chelsea liang https://forum.effectivealtruism.org/posts/D92RwcbrfqGMjGfqw/paper-summary-managing-the-transition-to-widespread Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: (Paper Summary) Managing the Transition to Widespread Metagenomic Monitoring: Policy Considerations for Future Biosurveillance, published by chelsea liang on January 19, 2023 on The Effective Altruism Forum.Our paper Managing the Transition to Widespread Metagenomic Monitoring: Policy Considerations for Future Biosurveillance, was recently published in Health Security.TL;DRMetagenomic sequencing is a really good thing, and lots of people are working on making the technologies work. But getting the tech to work is only half of the problem - we also need it to be implementable and useful for policy, and legally acceptable. There are a bunch of important questions which need to be addressed to make sure this isn’t a problem, and the paper intends to start a discussion of how to solve those problems - so that a decade from now, we don’t look back and say that the technology works, but it’s not going to be used for pandemic prevention because of these other issues, which are far harder or impossible to fix post-hoc. This paper serves to put these problems on the agenda now so they can be addressed by the relevant academic, policy, advocacy and professional communities.AbstractThe technological possibilities and future public health importance of metagenomic sequencing have received extensive attention, but there has been little discussion about the policy and regulatory issues that need to be addressed if metagenomic sequencing is adopted as a key technology for biosurveillance. In this article, we introduce metagenomic monitoring as a possible path to eventually replacing current infectious disease monitoring models. Many key enablers are technological, whereas others are not. We therefore highlight key policy challenges and implementation questions that need to be addressed for “widespread metagenomic monitoring” to be possible. Policymakers must address pitfalls like fragmentation of the technological base, private capture of benefits, privacy concerns, the usefulness of the system during nonpandemic times, and how the future systems will enable better response.If these challenges are addressed, the technological and public health promise of metagenomic sequencing can be realized.The paper is broken down into 3 sections:Present State of Biosurveillance (Where we are)Potential Metagenomic Monitoring Futures (Qualities of an ideal metagenomic future)Way Points and Obstacles in a Transition (How to get from here to there)Present State of BiosurveillanceGlobal biosurveillance efforts together only provide partial coverage. Existing genomic data collection and analysis is often siloed and very difficult to integrate for a comprehensive disease landscape. Biosurveillance efforts that have tended to maintain funding are foodborne pathogens and rare reportable diseases.Potential Metagenomic Monitoring FuturesTo transition to Widespread Metagenomic Monitoring (WMGM) responsibly and with maximized biosecurity benefits, there needs to be a common understanding of qualities we expect to see in a high-investment and ambitious scenario. See this section for specifics.Way Points and Obstacles in a TransitionWe mention some antecedents of a WMGM along with a table of Critical Technological Advances. For example, Shean and Greninger1 propose a near-term future resting on widespread deployment of clinical sampling. Another near-term possibility is the Nucleic Acid Observatory,2 which proposed ongoing wastewater and watershed sampling across the United States to find sequences that recently emerged or are increasing in frequency, indicating a potential new pathogen or other notable events. We identify the following as systemic obstacles on the path to WMGM which need to be addressed by the relevant professional communities and institutions:Suboptimal use and high prices...]]>
Thu, 19 Jan 2023 19:54:07 +0000 EA - (Paper Summary) Managing the Transition to Widespread Metagenomic Monitoring: Policy Considerations for Future Biosurveillance by chelsea liang Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: (Paper Summary) Managing the Transition to Widespread Metagenomic Monitoring: Policy Considerations for Future Biosurveillance, published by chelsea liang on January 19, 2023 on The Effective Altruism Forum.Our paper Managing the Transition to Widespread Metagenomic Monitoring: Policy Considerations for Future Biosurveillance, was recently published in Health Security.TL;DRMetagenomic sequencing is a really good thing, and lots of people are working on making the technologies work. But getting the tech to work is only half of the problem - we also need it to be implementable and useful for policy, and legally acceptable. There are a bunch of important questions which need to be addressed to make sure this isn’t a problem, and the paper intends to start a discussion of how to solve those problems - so that a decade from now, we don’t look back and say that the technology works, but it’s not going to be used for pandemic prevention because of these other issues, which are far harder or impossible to fix post-hoc. This paper serves to put these problems on the agenda now so they can be addressed by the relevant academic, policy, advocacy and professional communities.AbstractThe technological possibilities and future public health importance of metagenomic sequencing have received extensive attention, but there has been little discussion about the policy and regulatory issues that need to be addressed if metagenomic sequencing is adopted as a key technology for biosurveillance. In this article, we introduce metagenomic monitoring as a possible path to eventually replacing current infectious disease monitoring models. Many key enablers are technological, whereas others are not. We therefore highlight key policy challenges and implementation questions that need to be addressed for “widespread metagenomic monitoring” to be possible. Policymakers must address pitfalls like fragmentation of the technological base, private capture of benefits, privacy concerns, the usefulness of the system during nonpandemic times, and how the future systems will enable better response.If these challenges are addressed, the technological and public health promise of metagenomic sequencing can be realized.The paper is broken down into 3 sections:Present State of Biosurveillance (Where we are)Potential Metagenomic Monitoring Futures (Qualities of an ideal metagenomic future)Way Points and Obstacles in a Transition (How to get from here to there)Present State of BiosurveillanceGlobal biosurveillance efforts together only provide partial coverage. Existing genomic data collection and analysis is often siloed and very difficult to integrate for a comprehensive disease landscape. Biosurveillance efforts that have tended to maintain funding are foodborne pathogens and rare reportable diseases.Potential Metagenomic Monitoring FuturesTo transition to Widespread Metagenomic Monitoring (WMGM) responsibly and with maximized biosecurity benefits, there needs to be a common understanding of qualities we expect to see in a high-investment and ambitious scenario. See this section for specifics.Way Points and Obstacles in a TransitionWe mention some antecedents of a WMGM along with a table of Critical Technological Advances. For example, Shean and Greninger1 propose a near-term future resting on widespread deployment of clinical sampling. Another near-term possibility is the Nucleic Acid Observatory,2 which proposed ongoing wastewater and watershed sampling across the United States to find sequences that recently emerged or are increasing in frequency, indicating a potential new pathogen or other notable events. We identify the following as systemic obstacles on the path to WMGM which need to be addressed by the relevant professional communities and institutions:Suboptimal use and high prices...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: (Paper Summary) Managing the Transition to Widespread Metagenomic Monitoring: Policy Considerations for Future Biosurveillance, published by chelsea liang on January 19, 2023 on The Effective Altruism Forum.Our paper Managing the Transition to Widespread Metagenomic Monitoring: Policy Considerations for Future Biosurveillance, was recently published in Health Security.TL;DRMetagenomic sequencing is a really good thing, and lots of people are working on making the technologies work. But getting the tech to work is only half of the problem - we also need it to be implementable and useful for policy, and legally acceptable. There are a bunch of important questions which need to be addressed to make sure this isn’t a problem, and the paper intends to start a discussion of how to solve those problems - so that a decade from now, we don’t look back and say that the technology works, but it’s not going to be used for pandemic prevention because of these other issues, which are far harder or impossible to fix post-hoc. This paper serves to put these problems on the agenda now so they can be addressed by the relevant academic, policy, advocacy and professional communities.AbstractThe technological possibilities and future public health importance of metagenomic sequencing have received extensive attention, but there has been little discussion about the policy and regulatory issues that need to be addressed if metagenomic sequencing is adopted as a key technology for biosurveillance. In this article, we introduce metagenomic monitoring as a possible path to eventually replacing current infectious disease monitoring models. Many key enablers are technological, whereas others are not. We therefore highlight key policy challenges and implementation questions that need to be addressed for “widespread metagenomic monitoring” to be possible. Policymakers must address pitfalls like fragmentation of the technological base, private capture of benefits, privacy concerns, the usefulness of the system during nonpandemic times, and how the future systems will enable better response.If these challenges are addressed, the technological and public health promise of metagenomic sequencing can be realized.The paper is broken down into 3 sections:Present State of Biosurveillance (Where we are)Potential Metagenomic Monitoring Futures (Qualities of an ideal metagenomic future)Way Points and Obstacles in a Transition (How to get from here to there)Present State of BiosurveillanceGlobal biosurveillance efforts together only provide partial coverage. Existing genomic data collection and analysis is often siloed and very difficult to integrate for a comprehensive disease landscape. Biosurveillance efforts that have tended to maintain funding are foodborne pathogens and rare reportable diseases.Potential Metagenomic Monitoring FuturesTo transition to Widespread Metagenomic Monitoring (WMGM) responsibly and with maximized biosecurity benefits, there needs to be a common understanding of qualities we expect to see in a high-investment and ambitious scenario. See this section for specifics.Way Points and Obstacles in a TransitionWe mention some antecedents of a WMGM along with a table of Critical Technological Advances. For example, Shean and Greninger1 propose a near-term future resting on widespread deployment of clinical sampling. Another near-term possibility is the Nucleic Acid Observatory,2 which proposed ongoing wastewater and watershed sampling across the United States to find sequences that recently emerged or are increasing in frequency, indicating a potential new pathogen or other notable events. We identify the following as systemic obstacles on the path to WMGM which need to be addressed by the relevant professional communities and institutions:Suboptimal use and high prices...]]>
chelsea liang https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:36 None full 4521
9XGhNArGhiZ3daQws_EA EA - Heretical Thoughts on AI | Eli Dourado by 𝕮𝖎𝖓𝖊𝖗𝖆 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Heretical Thoughts on AI | Eli Dourado, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 19, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
𝕮𝖎𝖓𝖊𝖗𝖆 https://forum.effectivealtruism.org/posts/9XGhNArGhiZ3daQws/heretical-thoughts-on-ai-or-eli-dourado Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Heretical Thoughts on AI | Eli Dourado, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 19, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 19 Jan 2023 17:54:43 +0000 EA - Heretical Thoughts on AI | Eli Dourado by 𝕮𝖎𝖓𝖊𝖗𝖆 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Heretical Thoughts on AI | Eli Dourado, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 19, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Heretical Thoughts on AI | Eli Dourado, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 19, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
𝕮𝖎𝖓𝖊𝖗𝖆 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 4518
LvwihGYgFEzjGDhBt_EA EA - FLI FAQ on the rejected grant proposal controversy by Tegmark Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FLI FAQ on the rejected grant proposal controversy, published by Tegmark on January 19, 2023 on The Effective Altruism Forum.The details surrounding FLI's rejection of a grant proposal from Nya Dagbladet last November has raised controversy and important questions (including here on this forum) which we address in this FAQ.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tegmark https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/fli-faq-on-the-rejected-grant-proposal-controversy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FLI FAQ on the rejected grant proposal controversy, published by Tegmark on January 19, 2023 on The Effective Altruism Forum.The details surrounding FLI's rejection of a grant proposal from Nya Dagbladet last November has raised controversy and important questions (including here on this forum) which we address in this FAQ.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 19 Jan 2023 17:51:05 +0000 EA - FLI FAQ on the rejected grant proposal controversy by Tegmark Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FLI FAQ on the rejected grant proposal controversy, published by Tegmark on January 19, 2023 on The Effective Altruism Forum.The details surrounding FLI's rejection of a grant proposal from Nya Dagbladet last November has raised controversy and important questions (including here on this forum) which we address in this FAQ.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FLI FAQ on the rejected grant proposal controversy, published by Tegmark on January 19, 2023 on The Effective Altruism Forum.The details surrounding FLI's rejection of a grant proposal from Nya Dagbladet last November has raised controversy and important questions (including here on this forum) which we address in this FAQ.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tegmark https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:39 None full 4517
g9skQrrirWHk9Bbyg_EA EA - The ones that walk away by Karthik Tadepalli Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The ones that walk away, published by Karthik Tadepalli on January 19, 2023 on The Effective Altruism Forum.Alice: I've grown disillusioned with the EA community. I still want to dedicate my life to doing as much good as I can, but I am no longer certain that EA is the best way to do that.Bob: I see where you're coming from, but where specifically is that disillusionment leading you?Alice: I am still confident that major EA causes are important areas to work on. I think EA organizations do good work in those areas, so I would be quite happy to work at some of them. On the other hand, I'm much less willing to defer to EA institutions than before, and I'm unlikely to attend EA events or personally associate with EAs. So I imagine mostly disengaging from EA, albeit with some professional interest in EA organizations.Bob: You're disentangling different aspects of EA as a community. We are linked first and foremost by our moral commitments, to having a larger moral circle and trying to do the most good for that moral circle. You still hold those commitments. On top of that, we're also linked by the intellectual commitment to think rigorously and impartially about ways to do the most good. It sounds like you still believe in that, and the change is that you want to do more of that thinking for yourself and less of it through the EA collective consciousness. Is that right?Alice: Pretty much.Bob: But if you still hold the moral and intellectual commitments that define effective altruism, why do you want to disengage from EA?Alice: For me, the social dimension creates a dangerous tribalism. I get upset when people criticize EA on Twitter and in my life, and I feel the need to defend it. My in-group bias is being activated to defend people and arguments that I would not otherwise defend.Bob: Isn't being cognizant of tribalism enough to help you avoid it?Alice: That's unlikely, at least for me. I'm not a brain in a vat; my emotions are important to me. They don't dictate every action I take, but they have some sway. Furthermore, everyone thinks they are above tribalism, so we should be skeptical about our ability to succeed where everyone else fails.Bob: Point taken, but this argument proves too much. This is not just an argument against identifying with EA - it's an argument against identifying with any collective, since every collective makes you feel some tribalism.Alice: And that's exactly what I'm defending. I think it makes sense to work with collectives to accomplish shared goals - as I said, I would still work at EA organizations - but I am much less excited about identifying with them. That shared identity is not necessary for us to do good work together, and it creates a lot of scope for abuse.Bob: That feels uncomfortably transactional. Can you really work with someone towards a shared goal that is meaningful to you without feeling some bond with them? Don't you feel kinship with people who care about animal suffering, for example?Alice: Well... I see what you mean, so I'll step back from the strong claim. But the EA community is far more tightly knit than that basic moral kinship. We have group houses, co-working spaces, student groups, conferences with afterparties, a CEA community health team, the Forum, Dank EA Memes, EA Twitter... this is not your average community, and the typical EA could probably step back quite a lot while retaining that kinship and the sense of working together to make the world better.Bob: It's true that this is a highly-engaged community, but most of those aren't just for fun; they have some role in our ability to do good. To pick on two examples you listed, I've met people at conferences who I learnt a lot from, and the Forum is one of the best websites on the internet if you filter it aggressively. I wouldn't take this rea...]]>
Karthik Tadepalli https://forum.effectivealtruism.org/posts/g9skQrrirWHk9Bbyg/the-ones-that-walk-away Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The ones that walk away, published by Karthik Tadepalli on January 19, 2023 on The Effective Altruism Forum.Alice: I've grown disillusioned with the EA community. I still want to dedicate my life to doing as much good as I can, but I am no longer certain that EA is the best way to do that.Bob: I see where you're coming from, but where specifically is that disillusionment leading you?Alice: I am still confident that major EA causes are important areas to work on. I think EA organizations do good work in those areas, so I would be quite happy to work at some of them. On the other hand, I'm much less willing to defer to EA institutions than before, and I'm unlikely to attend EA events or personally associate with EAs. So I imagine mostly disengaging from EA, albeit with some professional interest in EA organizations.Bob: You're disentangling different aspects of EA as a community. We are linked first and foremost by our moral commitments, to having a larger moral circle and trying to do the most good for that moral circle. You still hold those commitments. On top of that, we're also linked by the intellectual commitment to think rigorously and impartially about ways to do the most good. It sounds like you still believe in that, and the change is that you want to do more of that thinking for yourself and less of it through the EA collective consciousness. Is that right?Alice: Pretty much.Bob: But if you still hold the moral and intellectual commitments that define effective altruism, why do you want to disengage from EA?Alice: For me, the social dimension creates a dangerous tribalism. I get upset when people criticize EA on Twitter and in my life, and I feel the need to defend it. My in-group bias is being activated to defend people and arguments that I would not otherwise defend.Bob: Isn't being cognizant of tribalism enough to help you avoid it?Alice: That's unlikely, at least for me. I'm not a brain in a vat; my emotions are important to me. They don't dictate every action I take, but they have some sway. Furthermore, everyone thinks they are above tribalism, so we should be skeptical about our ability to succeed where everyone else fails.Bob: Point taken, but this argument proves too much. This is not just an argument against identifying with EA - it's an argument against identifying with any collective, since every collective makes you feel some tribalism.Alice: And that's exactly what I'm defending. I think it makes sense to work with collectives to accomplish shared goals - as I said, I would still work at EA organizations - but I am much less excited about identifying with them. That shared identity is not necessary for us to do good work together, and it creates a lot of scope for abuse.Bob: That feels uncomfortably transactional. Can you really work with someone towards a shared goal that is meaningful to you without feeling some bond with them? Don't you feel kinship with people who care about animal suffering, for example?Alice: Well... I see what you mean, so I'll step back from the strong claim. But the EA community is far more tightly knit than that basic moral kinship. We have group houses, co-working spaces, student groups, conferences with afterparties, a CEA community health team, the Forum, Dank EA Memes, EA Twitter... this is not your average community, and the typical EA could probably step back quite a lot while retaining that kinship and the sense of working together to make the world better.Bob: It's true that this is a highly-engaged community, but most of those aren't just for fun; they have some role in our ability to do good. To pick on two examples you listed, I've met people at conferences who I learnt a lot from, and the Forum is one of the best websites on the internet if you filter it aggressively. I wouldn't take this rea...]]>
Thu, 19 Jan 2023 16:20:44 +0000 EA - The ones that walk away by Karthik Tadepalli Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The ones that walk away, published by Karthik Tadepalli on January 19, 2023 on The Effective Altruism Forum.Alice: I've grown disillusioned with the EA community. I still want to dedicate my life to doing as much good as I can, but I am no longer certain that EA is the best way to do that.Bob: I see where you're coming from, but where specifically is that disillusionment leading you?Alice: I am still confident that major EA causes are important areas to work on. I think EA organizations do good work in those areas, so I would be quite happy to work at some of them. On the other hand, I'm much less willing to defer to EA institutions than before, and I'm unlikely to attend EA events or personally associate with EAs. So I imagine mostly disengaging from EA, albeit with some professional interest in EA organizations.Bob: You're disentangling different aspects of EA as a community. We are linked first and foremost by our moral commitments, to having a larger moral circle and trying to do the most good for that moral circle. You still hold those commitments. On top of that, we're also linked by the intellectual commitment to think rigorously and impartially about ways to do the most good. It sounds like you still believe in that, and the change is that you want to do more of that thinking for yourself and less of it through the EA collective consciousness. Is that right?Alice: Pretty much.Bob: But if you still hold the moral and intellectual commitments that define effective altruism, why do you want to disengage from EA?Alice: For me, the social dimension creates a dangerous tribalism. I get upset when people criticize EA on Twitter and in my life, and I feel the need to defend it. My in-group bias is being activated to defend people and arguments that I would not otherwise defend.Bob: Isn't being cognizant of tribalism enough to help you avoid it?Alice: That's unlikely, at least for me. I'm not a brain in a vat; my emotions are important to me. They don't dictate every action I take, but they have some sway. Furthermore, everyone thinks they are above tribalism, so we should be skeptical about our ability to succeed where everyone else fails.Bob: Point taken, but this argument proves too much. This is not just an argument against identifying with EA - it's an argument against identifying with any collective, since every collective makes you feel some tribalism.Alice: And that's exactly what I'm defending. I think it makes sense to work with collectives to accomplish shared goals - as I said, I would still work at EA organizations - but I am much less excited about identifying with them. That shared identity is not necessary for us to do good work together, and it creates a lot of scope for abuse.Bob: That feels uncomfortably transactional. Can you really work with someone towards a shared goal that is meaningful to you without feeling some bond with them? Don't you feel kinship with people who care about animal suffering, for example?Alice: Well... I see what you mean, so I'll step back from the strong claim. But the EA community is far more tightly knit than that basic moral kinship. We have group houses, co-working spaces, student groups, conferences with afterparties, a CEA community health team, the Forum, Dank EA Memes, EA Twitter... this is not your average community, and the typical EA could probably step back quite a lot while retaining that kinship and the sense of working together to make the world better.Bob: It's true that this is a highly-engaged community, but most of those aren't just for fun; they have some role in our ability to do good. To pick on two examples you listed, I've met people at conferences who I learnt a lot from, and the Forum is one of the best websites on the internet if you filter it aggressively. I wouldn't take this rea...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The ones that walk away, published by Karthik Tadepalli on January 19, 2023 on The Effective Altruism Forum.Alice: I've grown disillusioned with the EA community. I still want to dedicate my life to doing as much good as I can, but I am no longer certain that EA is the best way to do that.Bob: I see where you're coming from, but where specifically is that disillusionment leading you?Alice: I am still confident that major EA causes are important areas to work on. I think EA organizations do good work in those areas, so I would be quite happy to work at some of them. On the other hand, I'm much less willing to defer to EA institutions than before, and I'm unlikely to attend EA events or personally associate with EAs. So I imagine mostly disengaging from EA, albeit with some professional interest in EA organizations.Bob: You're disentangling different aspects of EA as a community. We are linked first and foremost by our moral commitments, to having a larger moral circle and trying to do the most good for that moral circle. You still hold those commitments. On top of that, we're also linked by the intellectual commitment to think rigorously and impartially about ways to do the most good. It sounds like you still believe in that, and the change is that you want to do more of that thinking for yourself and less of it through the EA collective consciousness. Is that right?Alice: Pretty much.Bob: But if you still hold the moral and intellectual commitments that define effective altruism, why do you want to disengage from EA?Alice: For me, the social dimension creates a dangerous tribalism. I get upset when people criticize EA on Twitter and in my life, and I feel the need to defend it. My in-group bias is being activated to defend people and arguments that I would not otherwise defend.Bob: Isn't being cognizant of tribalism enough to help you avoid it?Alice: That's unlikely, at least for me. I'm not a brain in a vat; my emotions are important to me. They don't dictate every action I take, but they have some sway. Furthermore, everyone thinks they are above tribalism, so we should be skeptical about our ability to succeed where everyone else fails.Bob: Point taken, but this argument proves too much. This is not just an argument against identifying with EA - it's an argument against identifying with any collective, since every collective makes you feel some tribalism.Alice: And that's exactly what I'm defending. I think it makes sense to work with collectives to accomplish shared goals - as I said, I would still work at EA organizations - but I am much less excited about identifying with them. That shared identity is not necessary for us to do good work together, and it creates a lot of scope for abuse.Bob: That feels uncomfortably transactional. Can you really work with someone towards a shared goal that is meaningful to you without feeling some bond with them? Don't you feel kinship with people who care about animal suffering, for example?Alice: Well... I see what you mean, so I'll step back from the strong claim. But the EA community is far more tightly knit than that basic moral kinship. We have group houses, co-working spaces, student groups, conferences with afterparties, a CEA community health team, the Forum, Dank EA Memes, EA Twitter... this is not your average community, and the typical EA could probably step back quite a lot while retaining that kinship and the sense of working together to make the world better.Bob: It's true that this is a highly-engaged community, but most of those aren't just for fun; they have some role in our ability to do good. To pick on two examples you listed, I've met people at conferences who I learnt a lot from, and the Forum is one of the best websites on the internet if you filter it aggressively. I wouldn't take this rea...]]>
Karthik Tadepalli https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:15 None full 4519
5HSDoTbkiiEwZAdJ7_EA EA - It was probably hard to hedge financial risk to EA by Stan van Wingerden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It was probably hard to hedge financial risk to EA, published by Stan van Wingerden on January 19, 2023 on The Effective Altruism Forum.SummaryHedging essentially means reducing the risk of your assets. [Link]Deciding whether or not to hedge (future) cashflow requires making a risk-reward tradeoff. [Link]Both the risks and the rewards of your hedges are uncertain & hard to guess for a variety of reasons. [Link]I think it was probably hard / expensive to safeguard EA crypto funds against the big crypto downswing, but this depends on the circumstances (which I know little about). [Link]IntroThe financial situation of EA has significantly worsened over the last few months. This is not only due to the FTX debacle, but also due to a broad downswing in cryptocurrency and tech stock prices. It’s worth investigating whether the financial risk of these events to EA organisations could have been mitigated somehow. This risk has many components, but the main financial risk issue to EA organisations I’m discussing here is the combination of multi-year $-denominated spending plans with future donations without fixed $ values. These cases show up all the time, be it from expected donations from companies or from broad financial exposure to different sectors, so one might wonder if EA organisations could have hedged some part of these future donations.This post deals with what it means to reduce risk through hedging (crypto) exposure, why one would want to, and why doing so might be undesirable or difficult in practice. I’ve tried to keep it non-technical, as it is intended to help a discussion on whether or not EA charities should have hedged their crypto exposure. I think most of this might still apply to stocks, but I’m no expert in that so I’m not discussing that. Definitely do not take any of this as financial or legal advice.What is hedging?Hedging essentially means reducing the risk of your assets. This can be done in a variety of ways, the most common of which use derivatives to place a bet that pays off if the price of the underlying crypto token goes down. For example, if you have 1 Bitcoin worth ~17000$ today but possibly less in 2024, you can agree with someone you trust to sell this bitcoin to them in 2024 for 17000$. This agreement to sell in the future functions as a hedge against price decreases of bitcoin: even if bitcoin drops in value, you’d still get 17k in 2024. (On the flip side, if bitcoin were to increase in price you would miss out on that as well.) One way to think about a hedge is as an insurance policy against bad market situations.You could also of course sell the bitcoin now in order to get rid of the risk in 2024. I’m going to ignore this possibility and focus instead on hedging future income, as I feel that those situations are most relevant to EA. In the above example, this would mean the bitcoin donation is made in 2024, you don’t have access to it beforehand, but you want to get rid of the bitcoin price risk in the meantime.A more uncertain but potentially more valuable hedge would use assets that are correlated with the amount you’re expected to be donated. If you both 1. expect to be donated large amounts of money by some company deeply involved in cryptocurrency and 2. expect the health of this company to be strongly related to the price of Bitcoin, then you can ‘hedge’ your expected future donations by taking on a bitcoin short. After all, if the company goes under, then you expect the price of bitcoin to drop, and therefore your hedge should pay off in that scenario, allowing you to still receive some money in the place of your missed donations. Compared to the above example of hedging a to-be-donated bitcoin, this example is not a strictly risk-lowering hedge, as your correlation assumptions can turn out to be wrong. Still, this ...]]>
Stan van Wingerden https://forum.effectivealtruism.org/posts/5HSDoTbkiiEwZAdJ7/it-was-probably-hard-to-hedge-financial-risk-to-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It was probably hard to hedge financial risk to EA, published by Stan van Wingerden on January 19, 2023 on The Effective Altruism Forum.SummaryHedging essentially means reducing the risk of your assets. [Link]Deciding whether or not to hedge (future) cashflow requires making a risk-reward tradeoff. [Link]Both the risks and the rewards of your hedges are uncertain & hard to guess for a variety of reasons. [Link]I think it was probably hard / expensive to safeguard EA crypto funds against the big crypto downswing, but this depends on the circumstances (which I know little about). [Link]IntroThe financial situation of EA has significantly worsened over the last few months. This is not only due to the FTX debacle, but also due to a broad downswing in cryptocurrency and tech stock prices. It’s worth investigating whether the financial risk of these events to EA organisations could have been mitigated somehow. This risk has many components, but the main financial risk issue to EA organisations I’m discussing here is the combination of multi-year $-denominated spending plans with future donations without fixed $ values. These cases show up all the time, be it from expected donations from companies or from broad financial exposure to different sectors, so one might wonder if EA organisations could have hedged some part of these future donations.This post deals with what it means to reduce risk through hedging (crypto) exposure, why one would want to, and why doing so might be undesirable or difficult in practice. I’ve tried to keep it non-technical, as it is intended to help a discussion on whether or not EA charities should have hedged their crypto exposure. I think most of this might still apply to stocks, but I’m no expert in that so I’m not discussing that. Definitely do not take any of this as financial or legal advice.What is hedging?Hedging essentially means reducing the risk of your assets. This can be done in a variety of ways, the most common of which use derivatives to place a bet that pays off if the price of the underlying crypto token goes down. For example, if you have 1 Bitcoin worth ~17000$ today but possibly less in 2024, you can agree with someone you trust to sell this bitcoin to them in 2024 for 17000$. This agreement to sell in the future functions as a hedge against price decreases of bitcoin: even if bitcoin drops in value, you’d still get 17k in 2024. (On the flip side, if bitcoin were to increase in price you would miss out on that as well.) One way to think about a hedge is as an insurance policy against bad market situations.You could also of course sell the bitcoin now in order to get rid of the risk in 2024. I’m going to ignore this possibility and focus instead on hedging future income, as I feel that those situations are most relevant to EA. In the above example, this would mean the bitcoin donation is made in 2024, you don’t have access to it beforehand, but you want to get rid of the bitcoin price risk in the meantime.A more uncertain but potentially more valuable hedge would use assets that are correlated with the amount you’re expected to be donated. If you both 1. expect to be donated large amounts of money by some company deeply involved in cryptocurrency and 2. expect the health of this company to be strongly related to the price of Bitcoin, then you can ‘hedge’ your expected future donations by taking on a bitcoin short. After all, if the company goes under, then you expect the price of bitcoin to drop, and therefore your hedge should pay off in that scenario, allowing you to still receive some money in the place of your missed donations. Compared to the above example of hedging a to-be-donated bitcoin, this example is not a strictly risk-lowering hedge, as your correlation assumptions can turn out to be wrong. Still, this ...]]>
Thu, 19 Jan 2023 14:44:13 +0000 EA - It was probably hard to hedge financial risk to EA by Stan van Wingerden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It was probably hard to hedge financial risk to EA, published by Stan van Wingerden on January 19, 2023 on The Effective Altruism Forum.SummaryHedging essentially means reducing the risk of your assets. [Link]Deciding whether or not to hedge (future) cashflow requires making a risk-reward tradeoff. [Link]Both the risks and the rewards of your hedges are uncertain & hard to guess for a variety of reasons. [Link]I think it was probably hard / expensive to safeguard EA crypto funds against the big crypto downswing, but this depends on the circumstances (which I know little about). [Link]IntroThe financial situation of EA has significantly worsened over the last few months. This is not only due to the FTX debacle, but also due to a broad downswing in cryptocurrency and tech stock prices. It’s worth investigating whether the financial risk of these events to EA organisations could have been mitigated somehow. This risk has many components, but the main financial risk issue to EA organisations I’m discussing here is the combination of multi-year $-denominated spending plans with future donations without fixed $ values. These cases show up all the time, be it from expected donations from companies or from broad financial exposure to different sectors, so one might wonder if EA organisations could have hedged some part of these future donations.This post deals with what it means to reduce risk through hedging (crypto) exposure, why one would want to, and why doing so might be undesirable or difficult in practice. I’ve tried to keep it non-technical, as it is intended to help a discussion on whether or not EA charities should have hedged their crypto exposure. I think most of this might still apply to stocks, but I’m no expert in that so I’m not discussing that. Definitely do not take any of this as financial or legal advice.What is hedging?Hedging essentially means reducing the risk of your assets. This can be done in a variety of ways, the most common of which use derivatives to place a bet that pays off if the price of the underlying crypto token goes down. For example, if you have 1 Bitcoin worth ~17000$ today but possibly less in 2024, you can agree with someone you trust to sell this bitcoin to them in 2024 for 17000$. This agreement to sell in the future functions as a hedge against price decreases of bitcoin: even if bitcoin drops in value, you’d still get 17k in 2024. (On the flip side, if bitcoin were to increase in price you would miss out on that as well.) One way to think about a hedge is as an insurance policy against bad market situations.You could also of course sell the bitcoin now in order to get rid of the risk in 2024. I’m going to ignore this possibility and focus instead on hedging future income, as I feel that those situations are most relevant to EA. In the above example, this would mean the bitcoin donation is made in 2024, you don’t have access to it beforehand, but you want to get rid of the bitcoin price risk in the meantime.A more uncertain but potentially more valuable hedge would use assets that are correlated with the amount you’re expected to be donated. If you both 1. expect to be donated large amounts of money by some company deeply involved in cryptocurrency and 2. expect the health of this company to be strongly related to the price of Bitcoin, then you can ‘hedge’ your expected future donations by taking on a bitcoin short. After all, if the company goes under, then you expect the price of bitcoin to drop, and therefore your hedge should pay off in that scenario, allowing you to still receive some money in the place of your missed donations. Compared to the above example of hedging a to-be-donated bitcoin, this example is not a strictly risk-lowering hedge, as your correlation assumptions can turn out to be wrong. Still, this ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It was probably hard to hedge financial risk to EA, published by Stan van Wingerden on January 19, 2023 on The Effective Altruism Forum.SummaryHedging essentially means reducing the risk of your assets. [Link]Deciding whether or not to hedge (future) cashflow requires making a risk-reward tradeoff. [Link]Both the risks and the rewards of your hedges are uncertain & hard to guess for a variety of reasons. [Link]I think it was probably hard / expensive to safeguard EA crypto funds against the big crypto downswing, but this depends on the circumstances (which I know little about). [Link]IntroThe financial situation of EA has significantly worsened over the last few months. This is not only due to the FTX debacle, but also due to a broad downswing in cryptocurrency and tech stock prices. It’s worth investigating whether the financial risk of these events to EA organisations could have been mitigated somehow. This risk has many components, but the main financial risk issue to EA organisations I’m discussing here is the combination of multi-year $-denominated spending plans with future donations without fixed $ values. These cases show up all the time, be it from expected donations from companies or from broad financial exposure to different sectors, so one might wonder if EA organisations could have hedged some part of these future donations.This post deals with what it means to reduce risk through hedging (crypto) exposure, why one would want to, and why doing so might be undesirable or difficult in practice. I’ve tried to keep it non-technical, as it is intended to help a discussion on whether or not EA charities should have hedged their crypto exposure. I think most of this might still apply to stocks, but I’m no expert in that so I’m not discussing that. Definitely do not take any of this as financial or legal advice.What is hedging?Hedging essentially means reducing the risk of your assets. This can be done in a variety of ways, the most common of which use derivatives to place a bet that pays off if the price of the underlying crypto token goes down. For example, if you have 1 Bitcoin worth ~17000$ today but possibly less in 2024, you can agree with someone you trust to sell this bitcoin to them in 2024 for 17000$. This agreement to sell in the future functions as a hedge against price decreases of bitcoin: even if bitcoin drops in value, you’d still get 17k in 2024. (On the flip side, if bitcoin were to increase in price you would miss out on that as well.) One way to think about a hedge is as an insurance policy against bad market situations.You could also of course sell the bitcoin now in order to get rid of the risk in 2024. I’m going to ignore this possibility and focus instead on hedging future income, as I feel that those situations are most relevant to EA. In the above example, this would mean the bitcoin donation is made in 2024, you don’t have access to it beforehand, but you want to get rid of the bitcoin price risk in the meantime.A more uncertain but potentially more valuable hedge would use assets that are correlated with the amount you’re expected to be donated. If you both 1. expect to be donated large amounts of money by some company deeply involved in cryptocurrency and 2. expect the health of this company to be strongly related to the price of Bitcoin, then you can ‘hedge’ your expected future donations by taking on a bitcoin short. After all, if the company goes under, then you expect the price of bitcoin to drop, and therefore your hedge should pay off in that scenario, allowing you to still receive some money in the place of your missed donations. Compared to the above example of hedging a to-be-donated bitcoin, this example is not a strictly risk-lowering hedge, as your correlation assumptions can turn out to be wrong. Still, this ...]]>
Stan van Wingerden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:24 None full 4522
HqEmL7XAuuD5Pc4eg_EA EA - Evaluating StrongMinds: how strong is the evidence? by JoelMcGuire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evaluating StrongMinds: how strong is the evidence?, published by JoelMcGuire on January 19, 2023 on The Effective Altruism Forum.A recent post by Simon_M argued that StrongMinds should not be a top recommended charity (yet), and many people seemed to agree. While I think Simon raised several useful points regarding StrongMinds, he didn't engage with the cost-effectiveness analysis of StrongMinds that I conducted for the Happier Lives Institute (HLI) in 2021 and justified this decision on the following grounds:“Whilst I think they have some of the deepest analysis of StrongMinds, I am still confused by some of their methodology, it’s not clear to me what their relationship to StrongMinds is.”.By failing to discuss HLI’s analysis, Simon’s post presented an incomplete and potentially misleading picture of the evidence base for StrongMinds. In addition, some of the comments seemed to call into question the independence of HLI’s research. I’m publishing this post to clarify the strength of the evidence for StrongMinds, HLI’s independence, and to acknowledge what we’ve learned from this discussion.I raise concerns with several of Simon’s specific points in a comment on the original post. In the rest of this post, I’ll respond to four general questions raised by Simon’s post that were too long to include in my comment. I briefly summarise the issues below and then discuss them in more detail in the rest of the post1. Should StrongMinds be a top-rated charity? In my view, yes. Simon claims the conclusion is not warranted because StrongMinds’ specific evidence is weak and implies implausibly large results. I agree these results are overly optimistic, so my analysis doesn’t rely on StrongMind’s evidence alone. Instead, the analysis is based mainly on evidence synthesised from 39 RCTs of primarily group psychotherapy deployed in low-income countries.2. When should a charity be classed as “top-rated”? I think that a charity could be considered top-rated when there is strong general evidence OR charity-specific evidence that the intervention is more cost-effective than cash transfers. StrongMinds clears this bar, despite the uncertainties in the data.3. Is HLI an independent research institute? Yes. HLI’s mission is to find the most cost-effective giving opportunities to increase wellbeing. Our research has found that treating depression is very cost-effective, but we’re not committed to it as a matter of principle. Our work has just begun, and we plan to publish reports on lead regulation, pain relief, and immigration reform in the coming months. Our giving recommendations will follow the evidence.4. What can HLI do better in the future? Communicate better and update our analyses. We didn’t explicitly discuss the implausibility of StrongMinds’ data in our work. Nor did we push StrongMinds to make more reasonable claims when we could have done so. We acknowledge that we could have done better, and we will try to do better in the future. We also plan to revise and update our analysis of StrongMinds before Giving Season 2023.1. Should StrongMinds be a top-rated charity?I agree that StrongMinds’ claims of curing 90+% of depression are overly optimistic, and I don’t rely on them in my analysis. This figure mainly comes from StrongMinds’ pre-post data rather than a comparison between a treatment group and a control. These data will overstate the effect because depression scores tend to decline over time due to a natural recovery rate. If you monitored a group of depressed people and provided no treatment, some would recover anyway.My analysis of StrongMinds is based on a meta-analysis of 39 RCTS of group psychotherapy in low-income countries. I didn’t rely solely on StrongMinds’ own evidence alone, I incorporated the broader evidence base from other similar interventions t...]]>
JoelMcGuire https://forum.effectivealtruism.org/posts/HqEmL7XAuuD5Pc4eg/evaluating-strongminds-how-strong-is-the-evidence Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evaluating StrongMinds: how strong is the evidence?, published by JoelMcGuire on January 19, 2023 on The Effective Altruism Forum.A recent post by Simon_M argued that StrongMinds should not be a top recommended charity (yet), and many people seemed to agree. While I think Simon raised several useful points regarding StrongMinds, he didn't engage with the cost-effectiveness analysis of StrongMinds that I conducted for the Happier Lives Institute (HLI) in 2021 and justified this decision on the following grounds:“Whilst I think they have some of the deepest analysis of StrongMinds, I am still confused by some of their methodology, it’s not clear to me what their relationship to StrongMinds is.”.By failing to discuss HLI’s analysis, Simon’s post presented an incomplete and potentially misleading picture of the evidence base for StrongMinds. In addition, some of the comments seemed to call into question the independence of HLI’s research. I’m publishing this post to clarify the strength of the evidence for StrongMinds, HLI’s independence, and to acknowledge what we’ve learned from this discussion.I raise concerns with several of Simon’s specific points in a comment on the original post. In the rest of this post, I’ll respond to four general questions raised by Simon’s post that were too long to include in my comment. I briefly summarise the issues below and then discuss them in more detail in the rest of the post1. Should StrongMinds be a top-rated charity? In my view, yes. Simon claims the conclusion is not warranted because StrongMinds’ specific evidence is weak and implies implausibly large results. I agree these results are overly optimistic, so my analysis doesn’t rely on StrongMind’s evidence alone. Instead, the analysis is based mainly on evidence synthesised from 39 RCTs of primarily group psychotherapy deployed in low-income countries.2. When should a charity be classed as “top-rated”? I think that a charity could be considered top-rated when there is strong general evidence OR charity-specific evidence that the intervention is more cost-effective than cash transfers. StrongMinds clears this bar, despite the uncertainties in the data.3. Is HLI an independent research institute? Yes. HLI’s mission is to find the most cost-effective giving opportunities to increase wellbeing. Our research has found that treating depression is very cost-effective, but we’re not committed to it as a matter of principle. Our work has just begun, and we plan to publish reports on lead regulation, pain relief, and immigration reform in the coming months. Our giving recommendations will follow the evidence.4. What can HLI do better in the future? Communicate better and update our analyses. We didn’t explicitly discuss the implausibility of StrongMinds’ data in our work. Nor did we push StrongMinds to make more reasonable claims when we could have done so. We acknowledge that we could have done better, and we will try to do better in the future. We also plan to revise and update our analysis of StrongMinds before Giving Season 2023.1. Should StrongMinds be a top-rated charity?I agree that StrongMinds’ claims of curing 90+% of depression are overly optimistic, and I don’t rely on them in my analysis. This figure mainly comes from StrongMinds’ pre-post data rather than a comparison between a treatment group and a control. These data will overstate the effect because depression scores tend to decline over time due to a natural recovery rate. If you monitored a group of depressed people and provided no treatment, some would recover anyway.My analysis of StrongMinds is based on a meta-analysis of 39 RCTS of group psychotherapy in low-income countries. I didn’t rely solely on StrongMinds’ own evidence alone, I incorporated the broader evidence base from other similar interventions t...]]>
Thu, 19 Jan 2023 05:47:12 +0000 EA - Evaluating StrongMinds: how strong is the evidence? by JoelMcGuire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evaluating StrongMinds: how strong is the evidence?, published by JoelMcGuire on January 19, 2023 on The Effective Altruism Forum.A recent post by Simon_M argued that StrongMinds should not be a top recommended charity (yet), and many people seemed to agree. While I think Simon raised several useful points regarding StrongMinds, he didn't engage with the cost-effectiveness analysis of StrongMinds that I conducted for the Happier Lives Institute (HLI) in 2021 and justified this decision on the following grounds:“Whilst I think they have some of the deepest analysis of StrongMinds, I am still confused by some of their methodology, it’s not clear to me what their relationship to StrongMinds is.”.By failing to discuss HLI’s analysis, Simon’s post presented an incomplete and potentially misleading picture of the evidence base for StrongMinds. In addition, some of the comments seemed to call into question the independence of HLI’s research. I’m publishing this post to clarify the strength of the evidence for StrongMinds, HLI’s independence, and to acknowledge what we’ve learned from this discussion.I raise concerns with several of Simon’s specific points in a comment on the original post. In the rest of this post, I’ll respond to four general questions raised by Simon’s post that were too long to include in my comment. I briefly summarise the issues below and then discuss them in more detail in the rest of the post1. Should StrongMinds be a top-rated charity? In my view, yes. Simon claims the conclusion is not warranted because StrongMinds’ specific evidence is weak and implies implausibly large results. I agree these results are overly optimistic, so my analysis doesn’t rely on StrongMind’s evidence alone. Instead, the analysis is based mainly on evidence synthesised from 39 RCTs of primarily group psychotherapy deployed in low-income countries.2. When should a charity be classed as “top-rated”? I think that a charity could be considered top-rated when there is strong general evidence OR charity-specific evidence that the intervention is more cost-effective than cash transfers. StrongMinds clears this bar, despite the uncertainties in the data.3. Is HLI an independent research institute? Yes. HLI’s mission is to find the most cost-effective giving opportunities to increase wellbeing. Our research has found that treating depression is very cost-effective, but we’re not committed to it as a matter of principle. Our work has just begun, and we plan to publish reports on lead regulation, pain relief, and immigration reform in the coming months. Our giving recommendations will follow the evidence.4. What can HLI do better in the future? Communicate better and update our analyses. We didn’t explicitly discuss the implausibility of StrongMinds’ data in our work. Nor did we push StrongMinds to make more reasonable claims when we could have done so. We acknowledge that we could have done better, and we will try to do better in the future. We also plan to revise and update our analysis of StrongMinds before Giving Season 2023.1. Should StrongMinds be a top-rated charity?I agree that StrongMinds’ claims of curing 90+% of depression are overly optimistic, and I don’t rely on them in my analysis. This figure mainly comes from StrongMinds’ pre-post data rather than a comparison between a treatment group and a control. These data will overstate the effect because depression scores tend to decline over time due to a natural recovery rate. If you monitored a group of depressed people and provided no treatment, some would recover anyway.My analysis of StrongMinds is based on a meta-analysis of 39 RCTS of group psychotherapy in low-income countries. I didn’t rely solely on StrongMinds’ own evidence alone, I incorporated the broader evidence base from other similar interventions t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evaluating StrongMinds: how strong is the evidence?, published by JoelMcGuire on January 19, 2023 on The Effective Altruism Forum.A recent post by Simon_M argued that StrongMinds should not be a top recommended charity (yet), and many people seemed to agree. While I think Simon raised several useful points regarding StrongMinds, he didn't engage with the cost-effectiveness analysis of StrongMinds that I conducted for the Happier Lives Institute (HLI) in 2021 and justified this decision on the following grounds:“Whilst I think they have some of the deepest analysis of StrongMinds, I am still confused by some of their methodology, it’s not clear to me what their relationship to StrongMinds is.”.By failing to discuss HLI’s analysis, Simon’s post presented an incomplete and potentially misleading picture of the evidence base for StrongMinds. In addition, some of the comments seemed to call into question the independence of HLI’s research. I’m publishing this post to clarify the strength of the evidence for StrongMinds, HLI’s independence, and to acknowledge what we’ve learned from this discussion.I raise concerns with several of Simon’s specific points in a comment on the original post. In the rest of this post, I’ll respond to four general questions raised by Simon’s post that were too long to include in my comment. I briefly summarise the issues below and then discuss them in more detail in the rest of the post1. Should StrongMinds be a top-rated charity? In my view, yes. Simon claims the conclusion is not warranted because StrongMinds’ specific evidence is weak and implies implausibly large results. I agree these results are overly optimistic, so my analysis doesn’t rely on StrongMind’s evidence alone. Instead, the analysis is based mainly on evidence synthesised from 39 RCTs of primarily group psychotherapy deployed in low-income countries.2. When should a charity be classed as “top-rated”? I think that a charity could be considered top-rated when there is strong general evidence OR charity-specific evidence that the intervention is more cost-effective than cash transfers. StrongMinds clears this bar, despite the uncertainties in the data.3. Is HLI an independent research institute? Yes. HLI’s mission is to find the most cost-effective giving opportunities to increase wellbeing. Our research has found that treating depression is very cost-effective, but we’re not committed to it as a matter of principle. Our work has just begun, and we plan to publish reports on lead regulation, pain relief, and immigration reform in the coming months. Our giving recommendations will follow the evidence.4. What can HLI do better in the future? Communicate better and update our analyses. We didn’t explicitly discuss the implausibility of StrongMinds’ data in our work. Nor did we push StrongMinds to make more reasonable claims when we could have done so. We acknowledge that we could have done better, and we will try to do better in the future. We also plan to revise and update our analysis of StrongMinds before Giving Season 2023.1. Should StrongMinds be a top-rated charity?I agree that StrongMinds’ claims of curing 90+% of depression are overly optimistic, and I don’t rely on them in my analysis. This figure mainly comes from StrongMinds’ pre-post data rather than a comparison between a treatment group and a control. These data will overstate the effect because depression scores tend to decline over time due to a natural recovery rate. If you monitored a group of depressed people and provided no treatment, some would recover anyway.My analysis of StrongMinds is based on a meta-analysis of 39 RCTS of group psychotherapy in low-income countries. I didn’t rely solely on StrongMinds’ own evidence alone, I incorporated the broader evidence base from other similar interventions t...]]>
JoelMcGuire https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:31 None full 4508
fjaLYSNTggztom29c_EA EA - Possible changes to EA, a big upvoted list by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Possible changes to EA, a big upvoted list, published by Nathan Young on January 18, 2023 on The Effective Altruism Forum.We should put all possible changes/reforms in a big list, that everyone can upvote/downvote, agree disagree.EA is governed but a set of core EAs, so if you want change, I suggest that giving them less to read and a strong signal of community consensus is good.The top-level comments should be a short clear explanation of a possible change. If you want to comment on a change, do it as a reply to the top level commentThis other post gives a set of reforms, but they are a in a big long list at the bottom. Instead we can have a list that changes by our opinions!Note that I do not agree with all comments I post here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://forum.effectivealtruism.org/posts/fjaLYSNTggztom29c/possible-changes-to-ea-a-big-upvoted-list Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Possible changes to EA, a big upvoted list, published by Nathan Young on January 18, 2023 on The Effective Altruism Forum.We should put all possible changes/reforms in a big list, that everyone can upvote/downvote, agree disagree.EA is governed but a set of core EAs, so if you want change, I suggest that giving them less to read and a strong signal of community consensus is good.The top-level comments should be a short clear explanation of a possible change. If you want to comment on a change, do it as a reply to the top level commentThis other post gives a set of reforms, but they are a in a big long list at the bottom. Instead we can have a list that changes by our opinions!Note that I do not agree with all comments I post here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 19 Jan 2023 03:07:31 +0000 EA - Possible changes to EA, a big upvoted list by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Possible changes to EA, a big upvoted list, published by Nathan Young on January 18, 2023 on The Effective Altruism Forum.We should put all possible changes/reforms in a big list, that everyone can upvote/downvote, agree disagree.EA is governed but a set of core EAs, so if you want change, I suggest that giving them less to read and a strong signal of community consensus is good.The top-level comments should be a short clear explanation of a possible change. If you want to comment on a change, do it as a reply to the top level commentThis other post gives a set of reforms, but they are a in a big long list at the bottom. Instead we can have a list that changes by our opinions!Note that I do not agree with all comments I post here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Possible changes to EA, a big upvoted list, published by Nathan Young on January 18, 2023 on The Effective Altruism Forum.We should put all possible changes/reforms in a big list, that everyone can upvote/downvote, agree disagree.EA is governed but a set of core EAs, so if you want change, I suggest that giving them less to read and a strong signal of community consensus is good.The top-level comments should be a short clear explanation of a possible change. If you want to comment on a change, do it as a reply to the top level commentThis other post gives a set of reforms, but they are a in a big long list at the bottom. Instead we can have a list that changes by our opinions!Note that I do not agree with all comments I post here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:00 None full 4509
zuqpqqFoue5LyutTv_EA EA - The EA community does not own its donors' money by Nick Whitaker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA community does not own its donors' money, published by Nick Whitaker on January 18, 2023 on The Effective Altruism Forum.A number of recent proposals have detailed EA reforms. I have generally been unimpressed with these - they feel highly reactive and too tied to attractive sounding concepts (democratic, transparent, accountable) without well thought through mechanisms. I will try to expand my thoughts on these at a later time.Today I focus on one element that seems at best confused and at worst highly destructive: large-scale, democratic control over EA funds.This has been mentioned in a few proposals: It originated (to my knowledge) in Carla Zoe Cremer's Structural Reforms proposal:Within 5 years: EA funding decisions are made collectivelyFirst set up experiments for a safe cause area with small funding pots that are distributed according to different collective decision-making mechanisms(Note this is classified as a 'List A' proposal - per Cremer: "ideas I’m pretty sure about and thus believe we should now hire someone full time to work out different implementation options and implement one of them")It was also reiterated in the recent mega-proposal, Doing EA Better:Within 5 years, EA funding decisions should be made collectivelyFurthermore (from the same post):Donors should commit a large proportion of their wealth to EA bodies or trusts controlled by EA bodies to provide EA with financial stability and as a costly signal of their support for EA ideasAnd:The big funding bodies (OpenPhil, EA Funds, etc.) should be disaggregated into smaller independent funding bodies within 3 years(See also the Deciding better together section from the same post)How would this happen?One could try to personally convince Dustin Moskovitz that he should turn OpenPhil funds over to an EA Community panel, that it would help OpenPhil distribute its funds better.I suspect this would fail, and proponents would feel very frustrated.But, as with other discourse, these proposals assume that because a foundation called Open Philanthropy is interested in the "EA Community" that the "EA Community" has/deserves/should be entitled to a say in how the foundation spends their money. Yet the fact that someone is interested in listening to the advice of some members of a group on some issues does not mean they have to completely surrender to the broader group on all questions. They may be interested in community input for their funding, via regranting for example, or invest in the Community, but does not imply they would want the bulk of their donations governed by the EA community.(Also - I'm using scare quotes here because I am very confused who these proposals mean when they say EA community. Is it a matter of having read certain books, or attending EAGs, hanging around for a certain amount of time, working at an org, donating a set amount of money, or being in the right Slacks? These details seem incredibly important when this is the set of people given major control of funding, in lieu of current expert funders)So at a basic level, the assumption that EA has some innate claim to the money of its donors is basically incorrect. (I understand that the claim is also normative). But for now, the money possessed by Moskovitz and Tuna, OP, and GoodVentures is not the property of the EA community. So what, then, to do?Can you demand ten billion dollars?Say you can't convince Moskovitz and OpenPhil leadership to turn over their funds to community deliberation.You could try to create a cartel of EA organizations to refuse OpenPhil donations. This seems likely to fail - it would involve asking tens, perhaps hundreds, of people to risk their livelihoods. It would also be an incredibly poor way of managing the relationship between the community and its most generous funder--and...]]>
Nick Whitaker https://forum.effectivealtruism.org/posts/zuqpqqFoue5LyutTv/the-ea-community-does-not-own-its-donors-money Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA community does not own its donors' money, published by Nick Whitaker on January 18, 2023 on The Effective Altruism Forum.A number of recent proposals have detailed EA reforms. I have generally been unimpressed with these - they feel highly reactive and too tied to attractive sounding concepts (democratic, transparent, accountable) without well thought through mechanisms. I will try to expand my thoughts on these at a later time.Today I focus on one element that seems at best confused and at worst highly destructive: large-scale, democratic control over EA funds.This has been mentioned in a few proposals: It originated (to my knowledge) in Carla Zoe Cremer's Structural Reforms proposal:Within 5 years: EA funding decisions are made collectivelyFirst set up experiments for a safe cause area with small funding pots that are distributed according to different collective decision-making mechanisms(Note this is classified as a 'List A' proposal - per Cremer: "ideas I’m pretty sure about and thus believe we should now hire someone full time to work out different implementation options and implement one of them")It was also reiterated in the recent mega-proposal, Doing EA Better:Within 5 years, EA funding decisions should be made collectivelyFurthermore (from the same post):Donors should commit a large proportion of their wealth to EA bodies or trusts controlled by EA bodies to provide EA with financial stability and as a costly signal of their support for EA ideasAnd:The big funding bodies (OpenPhil, EA Funds, etc.) should be disaggregated into smaller independent funding bodies within 3 years(See also the Deciding better together section from the same post)How would this happen?One could try to personally convince Dustin Moskovitz that he should turn OpenPhil funds over to an EA Community panel, that it would help OpenPhil distribute its funds better.I suspect this would fail, and proponents would feel very frustrated.But, as with other discourse, these proposals assume that because a foundation called Open Philanthropy is interested in the "EA Community" that the "EA Community" has/deserves/should be entitled to a say in how the foundation spends their money. Yet the fact that someone is interested in listening to the advice of some members of a group on some issues does not mean they have to completely surrender to the broader group on all questions. They may be interested in community input for their funding, via regranting for example, or invest in the Community, but does not imply they would want the bulk of their donations governed by the EA community.(Also - I'm using scare quotes here because I am very confused who these proposals mean when they say EA community. Is it a matter of having read certain books, or attending EAGs, hanging around for a certain amount of time, working at an org, donating a set amount of money, or being in the right Slacks? These details seem incredibly important when this is the set of people given major control of funding, in lieu of current expert funders)So at a basic level, the assumption that EA has some innate claim to the money of its donors is basically incorrect. (I understand that the claim is also normative). But for now, the money possessed by Moskovitz and Tuna, OP, and GoodVentures is not the property of the EA community. So what, then, to do?Can you demand ten billion dollars?Say you can't convince Moskovitz and OpenPhil leadership to turn over their funds to community deliberation.You could try to create a cartel of EA organizations to refuse OpenPhil donations. This seems likely to fail - it would involve asking tens, perhaps hundreds, of people to risk their livelihoods. It would also be an incredibly poor way of managing the relationship between the community and its most generous funder--and...]]>
Wed, 18 Jan 2023 19:15:29 +0000 EA - The EA community does not own its donors' money by Nick Whitaker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA community does not own its donors' money, published by Nick Whitaker on January 18, 2023 on The Effective Altruism Forum.A number of recent proposals have detailed EA reforms. I have generally been unimpressed with these - they feel highly reactive and too tied to attractive sounding concepts (democratic, transparent, accountable) without well thought through mechanisms. I will try to expand my thoughts on these at a later time.Today I focus on one element that seems at best confused and at worst highly destructive: large-scale, democratic control over EA funds.This has been mentioned in a few proposals: It originated (to my knowledge) in Carla Zoe Cremer's Structural Reforms proposal:Within 5 years: EA funding decisions are made collectivelyFirst set up experiments for a safe cause area with small funding pots that are distributed according to different collective decision-making mechanisms(Note this is classified as a 'List A' proposal - per Cremer: "ideas I’m pretty sure about and thus believe we should now hire someone full time to work out different implementation options and implement one of them")It was also reiterated in the recent mega-proposal, Doing EA Better:Within 5 years, EA funding decisions should be made collectivelyFurthermore (from the same post):Donors should commit a large proportion of their wealth to EA bodies or trusts controlled by EA bodies to provide EA with financial stability and as a costly signal of their support for EA ideasAnd:The big funding bodies (OpenPhil, EA Funds, etc.) should be disaggregated into smaller independent funding bodies within 3 years(See also the Deciding better together section from the same post)How would this happen?One could try to personally convince Dustin Moskovitz that he should turn OpenPhil funds over to an EA Community panel, that it would help OpenPhil distribute its funds better.I suspect this would fail, and proponents would feel very frustrated.But, as with other discourse, these proposals assume that because a foundation called Open Philanthropy is interested in the "EA Community" that the "EA Community" has/deserves/should be entitled to a say in how the foundation spends their money. Yet the fact that someone is interested in listening to the advice of some members of a group on some issues does not mean they have to completely surrender to the broader group on all questions. They may be interested in community input for their funding, via regranting for example, or invest in the Community, but does not imply they would want the bulk of their donations governed by the EA community.(Also - I'm using scare quotes here because I am very confused who these proposals mean when they say EA community. Is it a matter of having read certain books, or attending EAGs, hanging around for a certain amount of time, working at an org, donating a set amount of money, or being in the right Slacks? These details seem incredibly important when this is the set of people given major control of funding, in lieu of current expert funders)So at a basic level, the assumption that EA has some innate claim to the money of its donors is basically incorrect. (I understand that the claim is also normative). But for now, the money possessed by Moskovitz and Tuna, OP, and GoodVentures is not the property of the EA community. So what, then, to do?Can you demand ten billion dollars?Say you can't convince Moskovitz and OpenPhil leadership to turn over their funds to community deliberation.You could try to create a cartel of EA organizations to refuse OpenPhil donations. This seems likely to fail - it would involve asking tens, perhaps hundreds, of people to risk their livelihoods. It would also be an incredibly poor way of managing the relationship between the community and its most generous funder--and...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA community does not own its donors' money, published by Nick Whitaker on January 18, 2023 on The Effective Altruism Forum.A number of recent proposals have detailed EA reforms. I have generally been unimpressed with these - they feel highly reactive and too tied to attractive sounding concepts (democratic, transparent, accountable) without well thought through mechanisms. I will try to expand my thoughts on these at a later time.Today I focus on one element that seems at best confused and at worst highly destructive: large-scale, democratic control over EA funds.This has been mentioned in a few proposals: It originated (to my knowledge) in Carla Zoe Cremer's Structural Reforms proposal:Within 5 years: EA funding decisions are made collectivelyFirst set up experiments for a safe cause area with small funding pots that are distributed according to different collective decision-making mechanisms(Note this is classified as a 'List A' proposal - per Cremer: "ideas I’m pretty sure about and thus believe we should now hire someone full time to work out different implementation options and implement one of them")It was also reiterated in the recent mega-proposal, Doing EA Better:Within 5 years, EA funding decisions should be made collectivelyFurthermore (from the same post):Donors should commit a large proportion of their wealth to EA bodies or trusts controlled by EA bodies to provide EA with financial stability and as a costly signal of their support for EA ideasAnd:The big funding bodies (OpenPhil, EA Funds, etc.) should be disaggregated into smaller independent funding bodies within 3 years(See also the Deciding better together section from the same post)How would this happen?One could try to personally convince Dustin Moskovitz that he should turn OpenPhil funds over to an EA Community panel, that it would help OpenPhil distribute its funds better.I suspect this would fail, and proponents would feel very frustrated.But, as with other discourse, these proposals assume that because a foundation called Open Philanthropy is interested in the "EA Community" that the "EA Community" has/deserves/should be entitled to a say in how the foundation spends their money. Yet the fact that someone is interested in listening to the advice of some members of a group on some issues does not mean they have to completely surrender to the broader group on all questions. They may be interested in community input for their funding, via regranting for example, or invest in the Community, but does not imply they would want the bulk of their donations governed by the EA community.(Also - I'm using scare quotes here because I am very confused who these proposals mean when they say EA community. Is it a matter of having read certain books, or attending EAGs, hanging around for a certain amount of time, working at an org, donating a set amount of money, or being in the right Slacks? These details seem incredibly important when this is the set of people given major control of funding, in lieu of current expert funders)So at a basic level, the assumption that EA has some innate claim to the money of its donors is basically incorrect. (I understand that the claim is also normative). But for now, the money possessed by Moskovitz and Tuna, OP, and GoodVentures is not the property of the EA community. So what, then, to do?Can you demand ten billion dollars?Say you can't convince Moskovitz and OpenPhil leadership to turn over their funds to community deliberation.You could try to create a cartel of EA organizations to refuse OpenPhil donations. This seems likely to fail - it would involve asking tens, perhaps hundreds, of people to risk their livelihoods. It would also be an incredibly poor way of managing the relationship between the community and its most generous funder--and...]]>
Nick Whitaker https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:03 None full 4504
cAviezLQdkoNy42HG_EA EA - Exceptional Research Award by Effective Thesis (ETERA): reflection by Effective Thesis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exceptional Research Award by Effective Thesis (ETERA): reflection, published by Effective Thesis on January 18, 2023 on The Effective Altruism Forum.This post was written by David Janků with contributions from Adéla Novotná (graphs and analyses on the background) and useful edits by Sophie Kirkham.SummaryIn autumn 2022, Effective Thesis held the Exceptional Research Award to encourage and recognize students conducting promising research that has the potential to significantly improve the world.We received 35 submissions, had 25 of them reviewed and chose 6 main prize winners and 4 commendation prize winners (see the winning submissions and winners’ profiles here).We received applications from across a variety of study disciplines and cause areas, though causes that have traditionally been prioritised by EA leaders - such as AI alignment, biosecurity, and animal welfare - were relatively underrepresented in our sample. It is unclear to us why this is the case (e.g. whether people working on those causes already have existing cause-specific prizes to apply to; or have different publication norms - like publishing mainly via blog posts - that our award is not compatible with; or to what extent some of these directions are not bottlenecked by the creation of new research ideas and understanding but rather by execution of existing ideas). We think this represents room for improvement in the future.Only 57 % of all applicants and 50 % of all winners had any engagement with Effective Thesis prior to applying for the award, suggesting the award was able to attract students who are doing valuable work but who have not interacted with Effective Thesis before.That being said, people previously coached by Effective Thesis scored higher in the reviews on “potentially having more impact with their research” than people who were not coached, suggesting our coaching might help people arrive at better ideas for research topics.The review scores indicated that the quality of the application pool was fairly high. The highest scores were usually given for rigour and novelty, and the lower scores were for the ability to make progress in the research field and potential positive impact in the world. This is to be expected since achieving impact is harder than achieving rigour and novelty.Both reviewers and applicants (successful and unsuccessful) seemed to be happy about their participation in this award, as reflected by their willingness to be associated with the award publicly, high ratings of the usefulness of the reviews by applicants, and a number of qualitative comments we received.What is ETERA and why did we decide to run it?The Effective Thesis Exceptional Research Award (ETERA) has been established to encourage and recognize promising research, conducted by students from undergraduate to PhD level, that has the potential to significantly improve the world. We accepted submissions of theses, dissertations and capstone papers. Entries were required to have been submitted to a university between 1st January 2021 - 1st September 2022 and to relate to one or multiple prioritised research directions on our site. We accepted entries from all countries, though all entries had to be submitted in English.A prize of 1000 USD was offered for winning entries (and 100 USD for commendation prizes). In addition, prize-winning entries would be featured on our website and promoted by us.We expected the award to have three primary positive impacts:Providing a career boost for aspiring researchers already focused on very impactful questions and producing high quality work. Winning this award would give them additional credentials and increase their chances of building a successful research career.Motivating additional aspiring researchers to begin exploring questions in some of the ...]]>
Effective Thesis https://forum.effectivealtruism.org/posts/cAviezLQdkoNy42HG/exceptional-research-award-by-effective-thesis-etera Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exceptional Research Award by Effective Thesis (ETERA): reflection, published by Effective Thesis on January 18, 2023 on The Effective Altruism Forum.This post was written by David Janků with contributions from Adéla Novotná (graphs and analyses on the background) and useful edits by Sophie Kirkham.SummaryIn autumn 2022, Effective Thesis held the Exceptional Research Award to encourage and recognize students conducting promising research that has the potential to significantly improve the world.We received 35 submissions, had 25 of them reviewed and chose 6 main prize winners and 4 commendation prize winners (see the winning submissions and winners’ profiles here).We received applications from across a variety of study disciplines and cause areas, though causes that have traditionally been prioritised by EA leaders - such as AI alignment, biosecurity, and animal welfare - were relatively underrepresented in our sample. It is unclear to us why this is the case (e.g. whether people working on those causes already have existing cause-specific prizes to apply to; or have different publication norms - like publishing mainly via blog posts - that our award is not compatible with; or to what extent some of these directions are not bottlenecked by the creation of new research ideas and understanding but rather by execution of existing ideas). We think this represents room for improvement in the future.Only 57 % of all applicants and 50 % of all winners had any engagement with Effective Thesis prior to applying for the award, suggesting the award was able to attract students who are doing valuable work but who have not interacted with Effective Thesis before.That being said, people previously coached by Effective Thesis scored higher in the reviews on “potentially having more impact with their research” than people who were not coached, suggesting our coaching might help people arrive at better ideas for research topics.The review scores indicated that the quality of the application pool was fairly high. The highest scores were usually given for rigour and novelty, and the lower scores were for the ability to make progress in the research field and potential positive impact in the world. This is to be expected since achieving impact is harder than achieving rigour and novelty.Both reviewers and applicants (successful and unsuccessful) seemed to be happy about their participation in this award, as reflected by their willingness to be associated with the award publicly, high ratings of the usefulness of the reviews by applicants, and a number of qualitative comments we received.What is ETERA and why did we decide to run it?The Effective Thesis Exceptional Research Award (ETERA) has been established to encourage and recognize promising research, conducted by students from undergraduate to PhD level, that has the potential to significantly improve the world. We accepted submissions of theses, dissertations and capstone papers. Entries were required to have been submitted to a university between 1st January 2021 - 1st September 2022 and to relate to one or multiple prioritised research directions on our site. We accepted entries from all countries, though all entries had to be submitted in English.A prize of 1000 USD was offered for winning entries (and 100 USD for commendation prizes). In addition, prize-winning entries would be featured on our website and promoted by us.We expected the award to have three primary positive impacts:Providing a career boost for aspiring researchers already focused on very impactful questions and producing high quality work. Winning this award would give them additional credentials and increase their chances of building a successful research career.Motivating additional aspiring researchers to begin exploring questions in some of the ...]]>
Wed, 18 Jan 2023 18:48:33 +0000 EA - Exceptional Research Award by Effective Thesis (ETERA): reflection by Effective Thesis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exceptional Research Award by Effective Thesis (ETERA): reflection, published by Effective Thesis on January 18, 2023 on The Effective Altruism Forum.This post was written by David Janků with contributions from Adéla Novotná (graphs and analyses on the background) and useful edits by Sophie Kirkham.SummaryIn autumn 2022, Effective Thesis held the Exceptional Research Award to encourage and recognize students conducting promising research that has the potential to significantly improve the world.We received 35 submissions, had 25 of them reviewed and chose 6 main prize winners and 4 commendation prize winners (see the winning submissions and winners’ profiles here).We received applications from across a variety of study disciplines and cause areas, though causes that have traditionally been prioritised by EA leaders - such as AI alignment, biosecurity, and animal welfare - were relatively underrepresented in our sample. It is unclear to us why this is the case (e.g. whether people working on those causes already have existing cause-specific prizes to apply to; or have different publication norms - like publishing mainly via blog posts - that our award is not compatible with; or to what extent some of these directions are not bottlenecked by the creation of new research ideas and understanding but rather by execution of existing ideas). We think this represents room for improvement in the future.Only 57 % of all applicants and 50 % of all winners had any engagement with Effective Thesis prior to applying for the award, suggesting the award was able to attract students who are doing valuable work but who have not interacted with Effective Thesis before.That being said, people previously coached by Effective Thesis scored higher in the reviews on “potentially having more impact with their research” than people who were not coached, suggesting our coaching might help people arrive at better ideas for research topics.The review scores indicated that the quality of the application pool was fairly high. The highest scores were usually given for rigour and novelty, and the lower scores were for the ability to make progress in the research field and potential positive impact in the world. This is to be expected since achieving impact is harder than achieving rigour and novelty.Both reviewers and applicants (successful and unsuccessful) seemed to be happy about their participation in this award, as reflected by their willingness to be associated with the award publicly, high ratings of the usefulness of the reviews by applicants, and a number of qualitative comments we received.What is ETERA and why did we decide to run it?The Effective Thesis Exceptional Research Award (ETERA) has been established to encourage and recognize promising research, conducted by students from undergraduate to PhD level, that has the potential to significantly improve the world. We accepted submissions of theses, dissertations and capstone papers. Entries were required to have been submitted to a university between 1st January 2021 - 1st September 2022 and to relate to one or multiple prioritised research directions on our site. We accepted entries from all countries, though all entries had to be submitted in English.A prize of 1000 USD was offered for winning entries (and 100 USD for commendation prizes). In addition, prize-winning entries would be featured on our website and promoted by us.We expected the award to have three primary positive impacts:Providing a career boost for aspiring researchers already focused on very impactful questions and producing high quality work. Winning this award would give them additional credentials and increase their chances of building a successful research career.Motivating additional aspiring researchers to begin exploring questions in some of the ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exceptional Research Award by Effective Thesis (ETERA): reflection, published by Effective Thesis on January 18, 2023 on The Effective Altruism Forum.This post was written by David Janků with contributions from Adéla Novotná (graphs and analyses on the background) and useful edits by Sophie Kirkham.SummaryIn autumn 2022, Effective Thesis held the Exceptional Research Award to encourage and recognize students conducting promising research that has the potential to significantly improve the world.We received 35 submissions, had 25 of them reviewed and chose 6 main prize winners and 4 commendation prize winners (see the winning submissions and winners’ profiles here).We received applications from across a variety of study disciplines and cause areas, though causes that have traditionally been prioritised by EA leaders - such as AI alignment, biosecurity, and animal welfare - were relatively underrepresented in our sample. It is unclear to us why this is the case (e.g. whether people working on those causes already have existing cause-specific prizes to apply to; or have different publication norms - like publishing mainly via blog posts - that our award is not compatible with; or to what extent some of these directions are not bottlenecked by the creation of new research ideas and understanding but rather by execution of existing ideas). We think this represents room for improvement in the future.Only 57 % of all applicants and 50 % of all winners had any engagement with Effective Thesis prior to applying for the award, suggesting the award was able to attract students who are doing valuable work but who have not interacted with Effective Thesis before.That being said, people previously coached by Effective Thesis scored higher in the reviews on “potentially having more impact with their research” than people who were not coached, suggesting our coaching might help people arrive at better ideas for research topics.The review scores indicated that the quality of the application pool was fairly high. The highest scores were usually given for rigour and novelty, and the lower scores were for the ability to make progress in the research field and potential positive impact in the world. This is to be expected since achieving impact is harder than achieving rigour and novelty.Both reviewers and applicants (successful and unsuccessful) seemed to be happy about their participation in this award, as reflected by their willingness to be associated with the award publicly, high ratings of the usefulness of the reviews by applicants, and a number of qualitative comments we received.What is ETERA and why did we decide to run it?The Effective Thesis Exceptional Research Award (ETERA) has been established to encourage and recognize promising research, conducted by students from undergraduate to PhD level, that has the potential to significantly improve the world. We accepted submissions of theses, dissertations and capstone papers. Entries were required to have been submitted to a university between 1st January 2021 - 1st September 2022 and to relate to one or multiple prioritised research directions on our site. We accepted entries from all countries, though all entries had to be submitted in English.A prize of 1000 USD was offered for winning entries (and 100 USD for commendation prizes). In addition, prize-winning entries would be featured on our website and promoted by us.We expected the award to have three primary positive impacts:Providing a career boost for aspiring researchers already focused on very impactful questions and producing high quality work. Winning this award would give them additional credentials and increase their chances of building a successful research career.Motivating additional aspiring researchers to begin exploring questions in some of the ...]]>
Effective Thesis https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:03 None full 4506
LFraZKf7X367DaT8v_EA EA - 2022 EA conference talks are now live by Eli Nathan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2022 EA conference talks are now live, published by Eli Nathan on January 18, 2023 on The Effective Altruism Forum.Recordings from various 2022 EA conferences are now live on our YouTube channel, these include talks from London, San Francisco, Washington, D.C., EAGxBoston, EAGxOxford, EAGxBerlin, and EAGxVirtual (alongside many other talks from previous years).Listening to talks can be a great way to learn more about EA and stay up to date on EA cause areas, and recording them allows people who couldn’t attend (or who were busy in 1:1 meetings) to watch them in their own time. Recordings from other EA conferences will likely be live on our channel soon, and we recommend subscribing if you’d like to be notified of these.Some highlighted talks are displayed below:EA Global: LondonPresenting big ideas & complex data to the public | Edouard Mathieu and Hannah RitchieIn this talk, Edouard Mathieu discusses the lessons of data communication in the COVID-19 pandemic. Hannah and Edouard then have a fireside chat and Q&A on Our World in Data and how it fits with the EA framework.The state of aquatic animal advocacy | Sophika Kostyniuk, Andrés Jiménez Zorrilla, Alex Holst, and Bruce FriedrichAddressing aquatic animal welfare is important, as it is highly neglected, and tractable. Estimates vary, but there are approximately 100 billion fin fish and 350–400 billion shrimps farmed annually, which is far more than all of the land animals combined (more than 7x as many at the upper estimation). For the most part, farmed aquatic animals are treated like inanimate objects — their suffering is almost unimaginable.This session discusses why aquatic animal welfare is critical to address, and some of the priority interventions that can alleviate vast amounts of suffering.EA Global: San FranciscoBetting on AI is like betting on semiconductors in the 70's | Danny HernandezDanny discusses the three exponentials driving AI progress: hardware, algorithmic, and spending. He considers extrapolating these trends 10–20 years out and translates effective compute progress into GPT-2 to GPT-3 sized jumps and builds an intuition for such jumps. Danny uses the extrapolations and jump intuitions to think about what capabilities normal progress in effective compute are likely to yield. This session is likely to be particularly relevant to people very concerned about AI, considering working in AI, or choosing an agenda within AI.Science of scaling | Ahmed Mushfiq Mobarak and Heidi McAnnally-LinzMushfiq Mobarak and Heidi McAnnally-Linz speak to learning from cutting edge research on the science of scaling using examples from scaling interventions targeted at addressing seasonal poverty and from the NORM model targeted at increasing community-level mask wearing during the COVID-19 pandemic. They draw on these experiences to make the case for innovation through research as well as working with existing at-scale partners. They emphasize the importance of using direct evidence from RCTs as well as exploring other complexities of scale such as national impacts, spillovers, etc.EA Global: Washington, D.C.Safeguarding modern bioscience & biotechnology to prevent catastrophic biological events | Jaime Yassif, Beth Cameron, and Jaspreet PannuBioscience and biotechnology advances are vital for fighting disease, protecting the environment, and promoting economic development — and they hold incredible promise. However, these innovations can also pose unique challenges — increasing the risks of accidental misuse or deliberate abuse with potentially catastrophic global consequences.This session begins with a talk by Dr. Jaime Yassif highlighting these issues and discussing effective strategies for improving bioscience governance and reducing emerging biorisks — including through the establishment of t...]]>
Eli Nathan https://forum.effectivealtruism.org/posts/LFraZKf7X367DaT8v/2022-ea-conference-talks-are-now-live Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2022 EA conference talks are now live, published by Eli Nathan on January 18, 2023 on The Effective Altruism Forum.Recordings from various 2022 EA conferences are now live on our YouTube channel, these include talks from London, San Francisco, Washington, D.C., EAGxBoston, EAGxOxford, EAGxBerlin, and EAGxVirtual (alongside many other talks from previous years).Listening to talks can be a great way to learn more about EA and stay up to date on EA cause areas, and recording them allows people who couldn’t attend (or who were busy in 1:1 meetings) to watch them in their own time. Recordings from other EA conferences will likely be live on our channel soon, and we recommend subscribing if you’d like to be notified of these.Some highlighted talks are displayed below:EA Global: LondonPresenting big ideas & complex data to the public | Edouard Mathieu and Hannah RitchieIn this talk, Edouard Mathieu discusses the lessons of data communication in the COVID-19 pandemic. Hannah and Edouard then have a fireside chat and Q&A on Our World in Data and how it fits with the EA framework.The state of aquatic animal advocacy | Sophika Kostyniuk, Andrés Jiménez Zorrilla, Alex Holst, and Bruce FriedrichAddressing aquatic animal welfare is important, as it is highly neglected, and tractable. Estimates vary, but there are approximately 100 billion fin fish and 350–400 billion shrimps farmed annually, which is far more than all of the land animals combined (more than 7x as many at the upper estimation). For the most part, farmed aquatic animals are treated like inanimate objects — their suffering is almost unimaginable.This session discusses why aquatic animal welfare is critical to address, and some of the priority interventions that can alleviate vast amounts of suffering.EA Global: San FranciscoBetting on AI is like betting on semiconductors in the 70's | Danny HernandezDanny discusses the three exponentials driving AI progress: hardware, algorithmic, and spending. He considers extrapolating these trends 10–20 years out and translates effective compute progress into GPT-2 to GPT-3 sized jumps and builds an intuition for such jumps. Danny uses the extrapolations and jump intuitions to think about what capabilities normal progress in effective compute are likely to yield. This session is likely to be particularly relevant to people very concerned about AI, considering working in AI, or choosing an agenda within AI.Science of scaling | Ahmed Mushfiq Mobarak and Heidi McAnnally-LinzMushfiq Mobarak and Heidi McAnnally-Linz speak to learning from cutting edge research on the science of scaling using examples from scaling interventions targeted at addressing seasonal poverty and from the NORM model targeted at increasing community-level mask wearing during the COVID-19 pandemic. They draw on these experiences to make the case for innovation through research as well as working with existing at-scale partners. They emphasize the importance of using direct evidence from RCTs as well as exploring other complexities of scale such as national impacts, spillovers, etc.EA Global: Washington, D.C.Safeguarding modern bioscience & biotechnology to prevent catastrophic biological events | Jaime Yassif, Beth Cameron, and Jaspreet PannuBioscience and biotechnology advances are vital for fighting disease, protecting the environment, and promoting economic development — and they hold incredible promise. However, these innovations can also pose unique challenges — increasing the risks of accidental misuse or deliberate abuse with potentially catastrophic global consequences.This session begins with a talk by Dr. Jaime Yassif highlighting these issues and discussing effective strategies for improving bioscience governance and reducing emerging biorisks — including through the establishment of t...]]>
Wed, 18 Jan 2023 18:32:03 +0000 EA - 2022 EA conference talks are now live by Eli Nathan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2022 EA conference talks are now live, published by Eli Nathan on January 18, 2023 on The Effective Altruism Forum.Recordings from various 2022 EA conferences are now live on our YouTube channel, these include talks from London, San Francisco, Washington, D.C., EAGxBoston, EAGxOxford, EAGxBerlin, and EAGxVirtual (alongside many other talks from previous years).Listening to talks can be a great way to learn more about EA and stay up to date on EA cause areas, and recording them allows people who couldn’t attend (or who were busy in 1:1 meetings) to watch them in their own time. Recordings from other EA conferences will likely be live on our channel soon, and we recommend subscribing if you’d like to be notified of these.Some highlighted talks are displayed below:EA Global: LondonPresenting big ideas & complex data to the public | Edouard Mathieu and Hannah RitchieIn this talk, Edouard Mathieu discusses the lessons of data communication in the COVID-19 pandemic. Hannah and Edouard then have a fireside chat and Q&A on Our World in Data and how it fits with the EA framework.The state of aquatic animal advocacy | Sophika Kostyniuk, Andrés Jiménez Zorrilla, Alex Holst, and Bruce FriedrichAddressing aquatic animal welfare is important, as it is highly neglected, and tractable. Estimates vary, but there are approximately 100 billion fin fish and 350–400 billion shrimps farmed annually, which is far more than all of the land animals combined (more than 7x as many at the upper estimation). For the most part, farmed aquatic animals are treated like inanimate objects — their suffering is almost unimaginable.This session discusses why aquatic animal welfare is critical to address, and some of the priority interventions that can alleviate vast amounts of suffering.EA Global: San FranciscoBetting on AI is like betting on semiconductors in the 70's | Danny HernandezDanny discusses the three exponentials driving AI progress: hardware, algorithmic, and spending. He considers extrapolating these trends 10–20 years out and translates effective compute progress into GPT-2 to GPT-3 sized jumps and builds an intuition for such jumps. Danny uses the extrapolations and jump intuitions to think about what capabilities normal progress in effective compute are likely to yield. This session is likely to be particularly relevant to people very concerned about AI, considering working in AI, or choosing an agenda within AI.Science of scaling | Ahmed Mushfiq Mobarak and Heidi McAnnally-LinzMushfiq Mobarak and Heidi McAnnally-Linz speak to learning from cutting edge research on the science of scaling using examples from scaling interventions targeted at addressing seasonal poverty and from the NORM model targeted at increasing community-level mask wearing during the COVID-19 pandemic. They draw on these experiences to make the case for innovation through research as well as working with existing at-scale partners. They emphasize the importance of using direct evidence from RCTs as well as exploring other complexities of scale such as national impacts, spillovers, etc.EA Global: Washington, D.C.Safeguarding modern bioscience & biotechnology to prevent catastrophic biological events | Jaime Yassif, Beth Cameron, and Jaspreet PannuBioscience and biotechnology advances are vital for fighting disease, protecting the environment, and promoting economic development — and they hold incredible promise. However, these innovations can also pose unique challenges — increasing the risks of accidental misuse or deliberate abuse with potentially catastrophic global consequences.This session begins with a talk by Dr. Jaime Yassif highlighting these issues and discussing effective strategies for improving bioscience governance and reducing emerging biorisks — including through the establishment of t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2022 EA conference talks are now live, published by Eli Nathan on January 18, 2023 on The Effective Altruism Forum.Recordings from various 2022 EA conferences are now live on our YouTube channel, these include talks from London, San Francisco, Washington, D.C., EAGxBoston, EAGxOxford, EAGxBerlin, and EAGxVirtual (alongside many other talks from previous years).Listening to talks can be a great way to learn more about EA and stay up to date on EA cause areas, and recording them allows people who couldn’t attend (or who were busy in 1:1 meetings) to watch them in their own time. Recordings from other EA conferences will likely be live on our channel soon, and we recommend subscribing if you’d like to be notified of these.Some highlighted talks are displayed below:EA Global: LondonPresenting big ideas & complex data to the public | Edouard Mathieu and Hannah RitchieIn this talk, Edouard Mathieu discusses the lessons of data communication in the COVID-19 pandemic. Hannah and Edouard then have a fireside chat and Q&A on Our World in Data and how it fits with the EA framework.The state of aquatic animal advocacy | Sophika Kostyniuk, Andrés Jiménez Zorrilla, Alex Holst, and Bruce FriedrichAddressing aquatic animal welfare is important, as it is highly neglected, and tractable. Estimates vary, but there are approximately 100 billion fin fish and 350–400 billion shrimps farmed annually, which is far more than all of the land animals combined (more than 7x as many at the upper estimation). For the most part, farmed aquatic animals are treated like inanimate objects — their suffering is almost unimaginable.This session discusses why aquatic animal welfare is critical to address, and some of the priority interventions that can alleviate vast amounts of suffering.EA Global: San FranciscoBetting on AI is like betting on semiconductors in the 70's | Danny HernandezDanny discusses the three exponentials driving AI progress: hardware, algorithmic, and spending. He considers extrapolating these trends 10–20 years out and translates effective compute progress into GPT-2 to GPT-3 sized jumps and builds an intuition for such jumps. Danny uses the extrapolations and jump intuitions to think about what capabilities normal progress in effective compute are likely to yield. This session is likely to be particularly relevant to people very concerned about AI, considering working in AI, or choosing an agenda within AI.Science of scaling | Ahmed Mushfiq Mobarak and Heidi McAnnally-LinzMushfiq Mobarak and Heidi McAnnally-Linz speak to learning from cutting edge research on the science of scaling using examples from scaling interventions targeted at addressing seasonal poverty and from the NORM model targeted at increasing community-level mask wearing during the COVID-19 pandemic. They draw on these experiences to make the case for innovation through research as well as working with existing at-scale partners. They emphasize the importance of using direct evidence from RCTs as well as exploring other complexities of scale such as national impacts, spillovers, etc.EA Global: Washington, D.C.Safeguarding modern bioscience & biotechnology to prevent catastrophic biological events | Jaime Yassif, Beth Cameron, and Jaspreet PannuBioscience and biotechnology advances are vital for fighting disease, protecting the environment, and promoting economic development — and they hold incredible promise. However, these innovations can also pose unique challenges — increasing the risks of accidental misuse or deliberate abuse with potentially catastrophic global consequences.This session begins with a talk by Dr. Jaime Yassif highlighting these issues and discussing effective strategies for improving bioscience governance and reducing emerging biorisks — including through the establishment of t...]]>
Eli Nathan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:15 None full 4505
Yi2vGpsAQftuA9DHj_EA EA - Be wary of enacting norms you think are unethical by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Be wary of enacting norms you think are unethical, published by RobBensinger on January 18, 2023 on The Effective Altruism Forum.To my eye, a lot of EAs seem to under-appreciate the extent to which your response to a crisis isn't just a reaction to an existing, fixed set of societal norms. The act of choosing a response is the act of creating a norm.You're helping bring into existence a particular version of EA, a particular version of academia and intellectual life, and a particular world-at-large. This is particularly true insofar as EA is a widely-paid-attention-to influencer or thought leader in intellectual society, which I think it (weakly) is.It's possible to overestimate your influence, but my impression is that most EAs are currently underestimating it. Hopefully this post can at least bring to your attention the hypothesis that your actions matter for determining the norms going forward, even if you don't currently think you have enough evidence to believe that hypothesis.If you want the culture to be a certain way, I think it's worth taking the time to flesh out for yourself what the details would look like, and find ways to test whether there are any ways to effect that norm, or to move in the direction that seems best to you.Anchor more to what you actually think is ethical, to what you think is kind, to what you think is honorable, to what you think is important and worth protecting. If you think your peers aren't living up to your highest principles, then don't give up on your principles.(And maybe don't give up on your peers, or maybe do, depending on what seems right to you.)Don't ignore the current set of norms you see; but be wary of willing bad outcomes into being via self-fulfilling prophecies.Be dissatisfied, if the world doesn't already look like the vision of a good, wholesome, virtuous world you can picture in your head. Because as someone who feels optimistic about EAs' capacities to do what's right, I want to see more people fighting for their personal visions of what that involves.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
RobBensinger https://forum.effectivealtruism.org/posts/Yi2vGpsAQftuA9DHj/be-wary-of-enacting-norms-you-think-are-unethical Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Be wary of enacting norms you think are unethical, published by RobBensinger on January 18, 2023 on The Effective Altruism Forum.To my eye, a lot of EAs seem to under-appreciate the extent to which your response to a crisis isn't just a reaction to an existing, fixed set of societal norms. The act of choosing a response is the act of creating a norm.You're helping bring into existence a particular version of EA, a particular version of academia and intellectual life, and a particular world-at-large. This is particularly true insofar as EA is a widely-paid-attention-to influencer or thought leader in intellectual society, which I think it (weakly) is.It's possible to overestimate your influence, but my impression is that most EAs are currently underestimating it. Hopefully this post can at least bring to your attention the hypothesis that your actions matter for determining the norms going forward, even if you don't currently think you have enough evidence to believe that hypothesis.If you want the culture to be a certain way, I think it's worth taking the time to flesh out for yourself what the details would look like, and find ways to test whether there are any ways to effect that norm, or to move in the direction that seems best to you.Anchor more to what you actually think is ethical, to what you think is kind, to what you think is honorable, to what you think is important and worth protecting. If you think your peers aren't living up to your highest principles, then don't give up on your principles.(And maybe don't give up on your peers, or maybe do, depending on what seems right to you.)Don't ignore the current set of norms you see; but be wary of willing bad outcomes into being via self-fulfilling prophecies.Be dissatisfied, if the world doesn't already look like the vision of a good, wholesome, virtuous world you can picture in your head. Because as someone who feels optimistic about EAs' capacities to do what's right, I want to see more people fighting for their personal visions of what that involves.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 18 Jan 2023 09:44:02 +0000 EA - Be wary of enacting norms you think are unethical by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Be wary of enacting norms you think are unethical, published by RobBensinger on January 18, 2023 on The Effective Altruism Forum.To my eye, a lot of EAs seem to under-appreciate the extent to which your response to a crisis isn't just a reaction to an existing, fixed set of societal norms. The act of choosing a response is the act of creating a norm.You're helping bring into existence a particular version of EA, a particular version of academia and intellectual life, and a particular world-at-large. This is particularly true insofar as EA is a widely-paid-attention-to influencer or thought leader in intellectual society, which I think it (weakly) is.It's possible to overestimate your influence, but my impression is that most EAs are currently underestimating it. Hopefully this post can at least bring to your attention the hypothesis that your actions matter for determining the norms going forward, even if you don't currently think you have enough evidence to believe that hypothesis.If you want the culture to be a certain way, I think it's worth taking the time to flesh out for yourself what the details would look like, and find ways to test whether there are any ways to effect that norm, or to move in the direction that seems best to you.Anchor more to what you actually think is ethical, to what you think is kind, to what you think is honorable, to what you think is important and worth protecting. If you think your peers aren't living up to your highest principles, then don't give up on your principles.(And maybe don't give up on your peers, or maybe do, depending on what seems right to you.)Don't ignore the current set of norms you see; but be wary of willing bad outcomes into being via self-fulfilling prophecies.Be dissatisfied, if the world doesn't already look like the vision of a good, wholesome, virtuous world you can picture in your head. Because as someone who feels optimistic about EAs' capacities to do what's right, I want to see more people fighting for their personal visions of what that involves.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Be wary of enacting norms you think are unethical, published by RobBensinger on January 18, 2023 on The Effective Altruism Forum.To my eye, a lot of EAs seem to under-appreciate the extent to which your response to a crisis isn't just a reaction to an existing, fixed set of societal norms. The act of choosing a response is the act of creating a norm.You're helping bring into existence a particular version of EA, a particular version of academia and intellectual life, and a particular world-at-large. This is particularly true insofar as EA is a widely-paid-attention-to influencer or thought leader in intellectual society, which I think it (weakly) is.It's possible to overestimate your influence, but my impression is that most EAs are currently underestimating it. Hopefully this post can at least bring to your attention the hypothesis that your actions matter for determining the norms going forward, even if you don't currently think you have enough evidence to believe that hypothesis.If you want the culture to be a certain way, I think it's worth taking the time to flesh out for yourself what the details would look like, and find ways to test whether there are any ways to effect that norm, or to move in the direction that seems best to you.Anchor more to what you actually think is ethical, to what you think is kind, to what you think is honorable, to what you think is important and worth protecting. If you think your peers aren't living up to your highest principles, then don't give up on your principles.(And maybe don't give up on your peers, or maybe do, depending on what seems right to you.)Don't ignore the current set of norms you see; but be wary of willing bad outcomes into being via self-fulfilling prophecies.Be dissatisfied, if the world doesn't already look like the vision of a good, wholesome, virtuous world you can picture in your head. Because as someone who feels optimistic about EAs' capacities to do what's right, I want to see more people fighting for their personal visions of what that involves.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
RobBensinger https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:03 None full 4497
Ny3v2Qe4LfaYJYKcq_EA EA - Book critique of Effective Altruism by Manuel Del Río Rodríguez Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book critique of Effective Altruism, published by Manuel Del Río Rodríguez on January 17, 2023 on The Effective Altruism Forum.Hello there! First post in the forum, so I apologize in advance for the probable mistakes and overall clumsiness. I have checked the forum writing guidelines but am pretty sure there's a high probability of my screwing up something or somewhere, so if that proves to be the case, "I am sure you have a waste basket handy".The case is, I was just checking Amazon today for some books on Effective Altruism with which to supplement the digital EA Handbook I am reading when I found this volume which will be made available exactly a month from now: The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism, by Carol J. Adams, Lori Gruen and Alice Crary. I haven't seen any post mentioning it, and I thought it might be interesting to share.As stated, the book hasn't been published yet, but one can look inside. I have been browsing the introduction, and in line with its title, it is pretty harsh in its wording. For example, from page xxv of the introduction:"In addition to describing how EA can harm animals and humans, the book contains critical studies of EA's philosophical assumptions and critical studies of organizations that set out to realize them. It invites readers to recognize EA as an alluring and extremely pernicious ideology, and it traces out a number of mutually reinforcing strategies for submitting this ideology for criticism".From the tone of the introduction I can suppose the general tone will be pretty scathing and hostile, as well as its general orientation. Still, I imagine the arguments it makes will profit from some attention, discussion and counterargument when it comes out.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Manuel Del Río Rodríguez https://forum.effectivealtruism.org/posts/Ny3v2Qe4LfaYJYKcq/book-critique-of-effective-altruism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book critique of Effective Altruism, published by Manuel Del Río Rodríguez on January 17, 2023 on The Effective Altruism Forum.Hello there! First post in the forum, so I apologize in advance for the probable mistakes and overall clumsiness. I have checked the forum writing guidelines but am pretty sure there's a high probability of my screwing up something or somewhere, so if that proves to be the case, "I am sure you have a waste basket handy".The case is, I was just checking Amazon today for some books on Effective Altruism with which to supplement the digital EA Handbook I am reading when I found this volume which will be made available exactly a month from now: The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism, by Carol J. Adams, Lori Gruen and Alice Crary. I haven't seen any post mentioning it, and I thought it might be interesting to share.As stated, the book hasn't been published yet, but one can look inside. I have been browsing the introduction, and in line with its title, it is pretty harsh in its wording. For example, from page xxv of the introduction:"In addition to describing how EA can harm animals and humans, the book contains critical studies of EA's philosophical assumptions and critical studies of organizations that set out to realize them. It invites readers to recognize EA as an alluring and extremely pernicious ideology, and it traces out a number of mutually reinforcing strategies for submitting this ideology for criticism".From the tone of the introduction I can suppose the general tone will be pretty scathing and hostile, as well as its general orientation. Still, I imagine the arguments it makes will profit from some attention, discussion and counterargument when it comes out.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 18 Jan 2023 07:39:18 +0000 EA - Book critique of Effective Altruism by Manuel Del Río Rodríguez Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book critique of Effective Altruism, published by Manuel Del Río Rodríguez on January 17, 2023 on The Effective Altruism Forum.Hello there! First post in the forum, so I apologize in advance for the probable mistakes and overall clumsiness. I have checked the forum writing guidelines but am pretty sure there's a high probability of my screwing up something or somewhere, so if that proves to be the case, "I am sure you have a waste basket handy".The case is, I was just checking Amazon today for some books on Effective Altruism with which to supplement the digital EA Handbook I am reading when I found this volume which will be made available exactly a month from now: The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism, by Carol J. Adams, Lori Gruen and Alice Crary. I haven't seen any post mentioning it, and I thought it might be interesting to share.As stated, the book hasn't been published yet, but one can look inside. I have been browsing the introduction, and in line with its title, it is pretty harsh in its wording. For example, from page xxv of the introduction:"In addition to describing how EA can harm animals and humans, the book contains critical studies of EA's philosophical assumptions and critical studies of organizations that set out to realize them. It invites readers to recognize EA as an alluring and extremely pernicious ideology, and it traces out a number of mutually reinforcing strategies for submitting this ideology for criticism".From the tone of the introduction I can suppose the general tone will be pretty scathing and hostile, as well as its general orientation. Still, I imagine the arguments it makes will profit from some attention, discussion and counterargument when it comes out.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book critique of Effective Altruism, published by Manuel Del Río Rodríguez on January 17, 2023 on The Effective Altruism Forum.Hello there! First post in the forum, so I apologize in advance for the probable mistakes and overall clumsiness. I have checked the forum writing guidelines but am pretty sure there's a high probability of my screwing up something or somewhere, so if that proves to be the case, "I am sure you have a waste basket handy".The case is, I was just checking Amazon today for some books on Effective Altruism with which to supplement the digital EA Handbook I am reading when I found this volume which will be made available exactly a month from now: The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism, by Carol J. Adams, Lori Gruen and Alice Crary. I haven't seen any post mentioning it, and I thought it might be interesting to share.As stated, the book hasn't been published yet, but one can look inside. I have been browsing the introduction, and in line with its title, it is pretty harsh in its wording. For example, from page xxv of the introduction:"In addition to describing how EA can harm animals and humans, the book contains critical studies of EA's philosophical assumptions and critical studies of organizations that set out to realize them. It invites readers to recognize EA as an alluring and extremely pernicious ideology, and it traces out a number of mutually reinforcing strategies for submitting this ideology for criticism".From the tone of the introduction I can suppose the general tone will be pretty scathing and hostile, as well as its general orientation. Still, I imagine the arguments it makes will profit from some attention, discussion and counterargument when it comes out.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Manuel Del Río Rodríguez https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:53 None full 4498
zMAdoAyAcZJHybG2R_EA EA - Calculating how much small donors funge with money that will never be spent by Tristan Cook Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Calculating how much small donors funge with money that will never be spent, published by Tristan Cook on January 16, 2023 on The Effective Altruism Forum.Epistemic status: Confident that the effect is real, though likely smaller than suggested by the toy-model.SummarySmall donors should discount the cost effectiveness of their donations to interventions above a large funder’s bar ifthey expect the large funder not to have spent all their capital by the time of AGI’s arrivaltheir donation to interventions above the large funder’s bar funges with the large funder.In this post I describe a toy model to calculate how much to discount due to this effect.I apply the model to a guess of Open Philanthropy’s spending on Global Health and Development (GHD) with Metaculus’ AGI timelines (25% by 2029, 50% by 2039). The model implies that small donors should consider interventions above OP's GHD bar, e.g. GiveWell's top charities, are only 55% as cost effective as the small donors first thought. For shorter AGI timelines (25% by 2027, 50% by 2030) this factor is around 35%.I use OP's GHD spending as an example because of their clarity around funding rate and bar for interventions. This discount factor would be larger if one funges with 'patient' philanthropic funds (such as The Patient Philanthropy Fund).This effect is a corollary of the result that most donor's AGI timelines (e.g. deferral to Metaculus) imply that the community spend at a greater rate. When a small donor funges with a large donor (and saves them spending themselves), the community's spending rate is effectively lowered (compared to when the small donor does not funge).This effect occurs when a small donor has shorter timelines than a large funder, or the large funder is not spending at a sufficiently high rate. In the latter case, small donors - by donating to interventions below the large funder's bar - are effectively correcting the community's implicit bar for funding.Toy modelSuppose you have the choice of donating to one of two interventions, A which gives a utils per $, or B, which gives b utils per $. Suppose further that the available interventions remain the same every year and that both have room for funding this year.A large funder F will ensure that A is fully funded this year, so if you donate $1 to A, then F, effectively, has $1 more to donate in the future. I suppose that F only ever donates to (opportunities as good as) A.I suppose that F's capital decreases by some constant amount f times their initial capital each year. This means that F will have no assets in 1/f years from now.Supposing AGI arrives t years from now, then F will have spent fraction min(ft,1) of their current capital on A.Accounting for this funging and assuming AGI arrives at time t, the cost effectiveness of your donation to A is then min(ft,1)a utils per $. Then if b>min(ft,1)a, marginal spending by small donors on B is more cost effective than on A.By considering distributions of AGI's arrival time t and the large funder's funding rate f we can get a distribution of this multiplier.Plugging in numbersI takeThe large funder F to be Open Philanthropy’s Global Health and Wellbeing spending on Global Health and Development and intervention A to be Givewell’s recommendations.I take 1/f, the expected time until OP's funds dedicated to GHD are depleted to be distributed Normal(20,20) bounded below by 5.I take AGI timelines to be an approximation those on Metaculus.These distributions on AGI timelines and 1/f give the following distribution of the funging multiplier (reproducible here).The ratio of cost effectiveness between GiveWell's recommendations and GiveDirectly, a/b, is approximately 7-8 and so small donors should give to interventions in the (5, 7)x GiveDirectly range.For donors with shorter timeli...]]>
Tristan Cook https://forum.effectivealtruism.org/posts/zMAdoAyAcZJHybG2R/calculating-how-much-small-donors-funge-with-money-that-will Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Calculating how much small donors funge with money that will never be spent, published by Tristan Cook on January 16, 2023 on The Effective Altruism Forum.Epistemic status: Confident that the effect is real, though likely smaller than suggested by the toy-model.SummarySmall donors should discount the cost effectiveness of their donations to interventions above a large funder’s bar ifthey expect the large funder not to have spent all their capital by the time of AGI’s arrivaltheir donation to interventions above the large funder’s bar funges with the large funder.In this post I describe a toy model to calculate how much to discount due to this effect.I apply the model to a guess of Open Philanthropy’s spending on Global Health and Development (GHD) with Metaculus’ AGI timelines (25% by 2029, 50% by 2039). The model implies that small donors should consider interventions above OP's GHD bar, e.g. GiveWell's top charities, are only 55% as cost effective as the small donors first thought. For shorter AGI timelines (25% by 2027, 50% by 2030) this factor is around 35%.I use OP's GHD spending as an example because of their clarity around funding rate and bar for interventions. This discount factor would be larger if one funges with 'patient' philanthropic funds (such as The Patient Philanthropy Fund).This effect is a corollary of the result that most donor's AGI timelines (e.g. deferral to Metaculus) imply that the community spend at a greater rate. When a small donor funges with a large donor (and saves them spending themselves), the community's spending rate is effectively lowered (compared to when the small donor does not funge).This effect occurs when a small donor has shorter timelines than a large funder, or the large funder is not spending at a sufficiently high rate. In the latter case, small donors - by donating to interventions below the large funder's bar - are effectively correcting the community's implicit bar for funding.Toy modelSuppose you have the choice of donating to one of two interventions, A which gives a utils per $, or B, which gives b utils per $. Suppose further that the available interventions remain the same every year and that both have room for funding this year.A large funder F will ensure that A is fully funded this year, so if you donate $1 to A, then F, effectively, has $1 more to donate in the future. I suppose that F only ever donates to (opportunities as good as) A.I suppose that F's capital decreases by some constant amount f times their initial capital each year. This means that F will have no assets in 1/f years from now.Supposing AGI arrives t years from now, then F will have spent fraction min(ft,1) of their current capital on A.Accounting for this funging and assuming AGI arrives at time t, the cost effectiveness of your donation to A is then min(ft,1)a utils per $. Then if b>min(ft,1)a, marginal spending by small donors on B is more cost effective than on A.By considering distributions of AGI's arrival time t and the large funder's funding rate f we can get a distribution of this multiplier.Plugging in numbersI takeThe large funder F to be Open Philanthropy’s Global Health and Wellbeing spending on Global Health and Development and intervention A to be Givewell’s recommendations.I take 1/f, the expected time until OP's funds dedicated to GHD are depleted to be distributed Normal(20,20) bounded below by 5.I take AGI timelines to be an approximation those on Metaculus.These distributions on AGI timelines and 1/f give the following distribution of the funging multiplier (reproducible here).The ratio of cost effectiveness between GiveWell's recommendations and GiveDirectly, a/b, is approximately 7-8 and so small donors should give to interventions in the (5, 7)x GiveDirectly range.For donors with shorter timeli...]]>
Tue, 17 Jan 2023 22:04:52 +0000 EA - Calculating how much small donors funge with money that will never be spent by Tristan Cook Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Calculating how much small donors funge with money that will never be spent, published by Tristan Cook on January 16, 2023 on The Effective Altruism Forum.Epistemic status: Confident that the effect is real, though likely smaller than suggested by the toy-model.SummarySmall donors should discount the cost effectiveness of their donations to interventions above a large funder’s bar ifthey expect the large funder not to have spent all their capital by the time of AGI’s arrivaltheir donation to interventions above the large funder’s bar funges with the large funder.In this post I describe a toy model to calculate how much to discount due to this effect.I apply the model to a guess of Open Philanthropy’s spending on Global Health and Development (GHD) with Metaculus’ AGI timelines (25% by 2029, 50% by 2039). The model implies that small donors should consider interventions above OP's GHD bar, e.g. GiveWell's top charities, are only 55% as cost effective as the small donors first thought. For shorter AGI timelines (25% by 2027, 50% by 2030) this factor is around 35%.I use OP's GHD spending as an example because of their clarity around funding rate and bar for interventions. This discount factor would be larger if one funges with 'patient' philanthropic funds (such as The Patient Philanthropy Fund).This effect is a corollary of the result that most donor's AGI timelines (e.g. deferral to Metaculus) imply that the community spend at a greater rate. When a small donor funges with a large donor (and saves them spending themselves), the community's spending rate is effectively lowered (compared to when the small donor does not funge).This effect occurs when a small donor has shorter timelines than a large funder, or the large funder is not spending at a sufficiently high rate. In the latter case, small donors - by donating to interventions below the large funder's bar - are effectively correcting the community's implicit bar for funding.Toy modelSuppose you have the choice of donating to one of two interventions, A which gives a utils per $, or B, which gives b utils per $. Suppose further that the available interventions remain the same every year and that both have room for funding this year.A large funder F will ensure that A is fully funded this year, so if you donate $1 to A, then F, effectively, has $1 more to donate in the future. I suppose that F only ever donates to (opportunities as good as) A.I suppose that F's capital decreases by some constant amount f times their initial capital each year. This means that F will have no assets in 1/f years from now.Supposing AGI arrives t years from now, then F will have spent fraction min(ft,1) of their current capital on A.Accounting for this funging and assuming AGI arrives at time t, the cost effectiveness of your donation to A is then min(ft,1)a utils per $. Then if b>min(ft,1)a, marginal spending by small donors on B is more cost effective than on A.By considering distributions of AGI's arrival time t and the large funder's funding rate f we can get a distribution of this multiplier.Plugging in numbersI takeThe large funder F to be Open Philanthropy’s Global Health and Wellbeing spending on Global Health and Development and intervention A to be Givewell’s recommendations.I take 1/f, the expected time until OP's funds dedicated to GHD are depleted to be distributed Normal(20,20) bounded below by 5.I take AGI timelines to be an approximation those on Metaculus.These distributions on AGI timelines and 1/f give the following distribution of the funging multiplier (reproducible here).The ratio of cost effectiveness between GiveWell's recommendations and GiveDirectly, a/b, is approximately 7-8 and so small donors should give to interventions in the (5, 7)x GiveDirectly range.For donors with shorter timeli...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Calculating how much small donors funge with money that will never be spent, published by Tristan Cook on January 16, 2023 on The Effective Altruism Forum.Epistemic status: Confident that the effect is real, though likely smaller than suggested by the toy-model.SummarySmall donors should discount the cost effectiveness of their donations to interventions above a large funder’s bar ifthey expect the large funder not to have spent all their capital by the time of AGI’s arrivaltheir donation to interventions above the large funder’s bar funges with the large funder.In this post I describe a toy model to calculate how much to discount due to this effect.I apply the model to a guess of Open Philanthropy’s spending on Global Health and Development (GHD) with Metaculus’ AGI timelines (25% by 2029, 50% by 2039). The model implies that small donors should consider interventions above OP's GHD bar, e.g. GiveWell's top charities, are only 55% as cost effective as the small donors first thought. For shorter AGI timelines (25% by 2027, 50% by 2030) this factor is around 35%.I use OP's GHD spending as an example because of their clarity around funding rate and bar for interventions. This discount factor would be larger if one funges with 'patient' philanthropic funds (such as The Patient Philanthropy Fund).This effect is a corollary of the result that most donor's AGI timelines (e.g. deferral to Metaculus) imply that the community spend at a greater rate. When a small donor funges with a large donor (and saves them spending themselves), the community's spending rate is effectively lowered (compared to when the small donor does not funge).This effect occurs when a small donor has shorter timelines than a large funder, or the large funder is not spending at a sufficiently high rate. In the latter case, small donors - by donating to interventions below the large funder's bar - are effectively correcting the community's implicit bar for funding.Toy modelSuppose you have the choice of donating to one of two interventions, A which gives a utils per $, or B, which gives b utils per $. Suppose further that the available interventions remain the same every year and that both have room for funding this year.A large funder F will ensure that A is fully funded this year, so if you donate $1 to A, then F, effectively, has $1 more to donate in the future. I suppose that F only ever donates to (opportunities as good as) A.I suppose that F's capital decreases by some constant amount f times their initial capital each year. This means that F will have no assets in 1/f years from now.Supposing AGI arrives t years from now, then F will have spent fraction min(ft,1) of their current capital on A.Accounting for this funging and assuming AGI arrives at time t, the cost effectiveness of your donation to A is then min(ft,1)a utils per $. Then if b>min(ft,1)a, marginal spending by small donors on B is more cost effective than on A.By considering distributions of AGI's arrival time t and the large funder's funding rate f we can get a distribution of this multiplier.Plugging in numbersI takeThe large funder F to be Open Philanthropy’s Global Health and Wellbeing spending on Global Health and Development and intervention A to be Givewell’s recommendations.I take 1/f, the expected time until OP's funds dedicated to GHD are depleted to be distributed Normal(20,20) bounded below by 5.I take AGI timelines to be an approximation those on Metaculus.These distributions on AGI timelines and 1/f give the following distribution of the funging multiplier (reproducible here).The ratio of cost effectiveness between GiveWell's recommendations and GiveDirectly, a/b, is approximately 7-8 and so small donors should give to interventions in the (5, 7)x GiveDirectly range.For donors with shorter timeli...]]>
Tristan Cook https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:41 None full 4494
SaRFsRDrneDgxNBbh_EA EA - Recursive Middle Manager Hell by Raemon Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Recursive Middle Manager Hell, published by Raemon on January 17, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Raemon https://forum.effectivealtruism.org/posts/SaRFsRDrneDgxNBbh/recursive-middle-manager-hell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Recursive Middle Manager Hell, published by Raemon on January 17, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 17 Jan 2023 20:47:40 +0000 EA - Recursive Middle Manager Hell by Raemon Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Recursive Middle Manager Hell, published by Raemon on January 17, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Recursive Middle Manager Hell, published by Raemon on January 17, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Raemon https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:24 None full 4490
sJpCYcHDGjHFG2Qvr_EA EA - Introducing Lafiya Nigeria by Klau Chmielowska Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Lafiya Nigeria, published by Klau Chmielowska on January 17, 2023 on The Effective Altruism Forum.Reducing maternal mortality through informed family planningIn June 2021, we founded Lafiya Nigeria, a non-profit organisation that works toward ending maternal mortality in Nigeria by widening the access and information about family planning. TL; DR Introduction to our organisation in a 3-min videoThis post describes (I) the challenge we aim to solve, (II) our approach, (III), our traction, (IV) our value-add, (V) our plans, (VI) how you can get involved in our initative.(I) The challengeIn low and middle-income countries, women are dying from giving life. Nearly 300,000 women and girls are dying from pregnancy-related complications each year, according to the Guttmacher Institute. Other health complications such as obstetric fistula, postpartum anemia, and postnatal depression are also key health burdens borne by pregnant women.Reducing the number of unintended pregnancies is an effective means of reducing health burdens for mothers and newborns. Despite a significant number of women in these countries wanting to avoid pregnancy, many are not using modern contraceptives, resulting in 85 million unintended pregnancies per year. If all women with unmet need were provided access to and used modern contraceptives, the Lancet estimated that maternal deaths globally would drop 44%. An estimated 70,000 maternal deaths could be prevented each year, with 441,000 new-born deaths also averted. Additionally, the Guttmacher Institute estimates that every dollar spent on contraceptive services beyond the current level would reduce the cost of pregnancy-related and newborn care by three dollars. The Copenhagen Consensus also found that every dollar spent on access to modern contraception leads to 120 dollars of social, economic, and environmental benefits.Access to family planning is beyond a health issue: its dividends are seen also in positive effects on education, income generation, and children's welfare. A study in Indonesia found that providing access to family planning was three times more effective than improving school quality in keeping girls in school an extra year. Research in Colombia found that girls with access to family planning clinics were 7% more likely to participate in the formal workforce as adults. Long-term studies have also shown that providing access to family planning programs can lead to improved college completion rates of children and higher family incomes decades later. These spillover effects are difficult to measure and are often neglected in traditional cost-effectiveness analyses.In recent years, there has been an increase in the use of modern contraceptives in countries like Nigeria but there has also been an increase in the unmet need. From 2012 to 2019, the portion of women using contraceptives in Nigeria rose from 11.2% to 14.2%, while the unmet need also rose from 22.4% to 23.7%. These figures are greatly exaggeratedFocus: NigeriaLafiya Nigeria focuses on rural and underserved regions of northern NigeriaNigeria has >45M women of child-bearing age, and 65% have unmet contraceptive needs (IHME), resulting in around 40,000 maternal deaths a year.In Nigeria, over 83% of women had not used any contraceptive methods for family prevention in 2018. This rate reached 96% among women without any education. This staggering gap in health provision results in maternal and infant deaths. In Nigeria, over 40,000 women die each year from pregnancy-related issues. The loss of life does end with mother, either. Over one million children under the age of five also die as a result of losing their mothers to pregnancy delivery complications.In our pilot region, Jigawa, more than 98% of women have no prior contraceptive use due to stockouts ...]]>
Klau Chmielowska https://forum.effectivealtruism.org/posts/sJpCYcHDGjHFG2Qvr/introducing-lafiya-nigeria Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Lafiya Nigeria, published by Klau Chmielowska on January 17, 2023 on The Effective Altruism Forum.Reducing maternal mortality through informed family planningIn June 2021, we founded Lafiya Nigeria, a non-profit organisation that works toward ending maternal mortality in Nigeria by widening the access and information about family planning. TL; DR Introduction to our organisation in a 3-min videoThis post describes (I) the challenge we aim to solve, (II) our approach, (III), our traction, (IV) our value-add, (V) our plans, (VI) how you can get involved in our initative.(I) The challengeIn low and middle-income countries, women are dying from giving life. Nearly 300,000 women and girls are dying from pregnancy-related complications each year, according to the Guttmacher Institute. Other health complications such as obstetric fistula, postpartum anemia, and postnatal depression are also key health burdens borne by pregnant women.Reducing the number of unintended pregnancies is an effective means of reducing health burdens for mothers and newborns. Despite a significant number of women in these countries wanting to avoid pregnancy, many are not using modern contraceptives, resulting in 85 million unintended pregnancies per year. If all women with unmet need were provided access to and used modern contraceptives, the Lancet estimated that maternal deaths globally would drop 44%. An estimated 70,000 maternal deaths could be prevented each year, with 441,000 new-born deaths also averted. Additionally, the Guttmacher Institute estimates that every dollar spent on contraceptive services beyond the current level would reduce the cost of pregnancy-related and newborn care by three dollars. The Copenhagen Consensus also found that every dollar spent on access to modern contraception leads to 120 dollars of social, economic, and environmental benefits.Access to family planning is beyond a health issue: its dividends are seen also in positive effects on education, income generation, and children's welfare. A study in Indonesia found that providing access to family planning was three times more effective than improving school quality in keeping girls in school an extra year. Research in Colombia found that girls with access to family planning clinics were 7% more likely to participate in the formal workforce as adults. Long-term studies have also shown that providing access to family planning programs can lead to improved college completion rates of children and higher family incomes decades later. These spillover effects are difficult to measure and are often neglected in traditional cost-effectiveness analyses.In recent years, there has been an increase in the use of modern contraceptives in countries like Nigeria but there has also been an increase in the unmet need. From 2012 to 2019, the portion of women using contraceptives in Nigeria rose from 11.2% to 14.2%, while the unmet need also rose from 22.4% to 23.7%. These figures are greatly exaggeratedFocus: NigeriaLafiya Nigeria focuses on rural and underserved regions of northern NigeriaNigeria has >45M women of child-bearing age, and 65% have unmet contraceptive needs (IHME), resulting in around 40,000 maternal deaths a year.In Nigeria, over 83% of women had not used any contraceptive methods for family prevention in 2018. This rate reached 96% among women without any education. This staggering gap in health provision results in maternal and infant deaths. In Nigeria, over 40,000 women die each year from pregnancy-related issues. The loss of life does end with mother, either. Over one million children under the age of five also die as a result of losing their mothers to pregnancy delivery complications.In our pilot region, Jigawa, more than 98% of women have no prior contraceptive use due to stockouts ...]]>
Tue, 17 Jan 2023 19:39:30 +0000 EA - Introducing Lafiya Nigeria by Klau Chmielowska Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Lafiya Nigeria, published by Klau Chmielowska on January 17, 2023 on The Effective Altruism Forum.Reducing maternal mortality through informed family planningIn June 2021, we founded Lafiya Nigeria, a non-profit organisation that works toward ending maternal mortality in Nigeria by widening the access and information about family planning. TL; DR Introduction to our organisation in a 3-min videoThis post describes (I) the challenge we aim to solve, (II) our approach, (III), our traction, (IV) our value-add, (V) our plans, (VI) how you can get involved in our initative.(I) The challengeIn low and middle-income countries, women are dying from giving life. Nearly 300,000 women and girls are dying from pregnancy-related complications each year, according to the Guttmacher Institute. Other health complications such as obstetric fistula, postpartum anemia, and postnatal depression are also key health burdens borne by pregnant women.Reducing the number of unintended pregnancies is an effective means of reducing health burdens for mothers and newborns. Despite a significant number of women in these countries wanting to avoid pregnancy, many are not using modern contraceptives, resulting in 85 million unintended pregnancies per year. If all women with unmet need were provided access to and used modern contraceptives, the Lancet estimated that maternal deaths globally would drop 44%. An estimated 70,000 maternal deaths could be prevented each year, with 441,000 new-born deaths also averted. Additionally, the Guttmacher Institute estimates that every dollar spent on contraceptive services beyond the current level would reduce the cost of pregnancy-related and newborn care by three dollars. The Copenhagen Consensus also found that every dollar spent on access to modern contraception leads to 120 dollars of social, economic, and environmental benefits.Access to family planning is beyond a health issue: its dividends are seen also in positive effects on education, income generation, and children's welfare. A study in Indonesia found that providing access to family planning was three times more effective than improving school quality in keeping girls in school an extra year. Research in Colombia found that girls with access to family planning clinics were 7% more likely to participate in the formal workforce as adults. Long-term studies have also shown that providing access to family planning programs can lead to improved college completion rates of children and higher family incomes decades later. These spillover effects are difficult to measure and are often neglected in traditional cost-effectiveness analyses.In recent years, there has been an increase in the use of modern contraceptives in countries like Nigeria but there has also been an increase in the unmet need. From 2012 to 2019, the portion of women using contraceptives in Nigeria rose from 11.2% to 14.2%, while the unmet need also rose from 22.4% to 23.7%. These figures are greatly exaggeratedFocus: NigeriaLafiya Nigeria focuses on rural and underserved regions of northern NigeriaNigeria has >45M women of child-bearing age, and 65% have unmet contraceptive needs (IHME), resulting in around 40,000 maternal deaths a year.In Nigeria, over 83% of women had not used any contraceptive methods for family prevention in 2018. This rate reached 96% among women without any education. This staggering gap in health provision results in maternal and infant deaths. In Nigeria, over 40,000 women die each year from pregnancy-related issues. The loss of life does end with mother, either. Over one million children under the age of five also die as a result of losing their mothers to pregnancy delivery complications.In our pilot region, Jigawa, more than 98% of women have no prior contraceptive use due to stockouts ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Lafiya Nigeria, published by Klau Chmielowska on January 17, 2023 on The Effective Altruism Forum.Reducing maternal mortality through informed family planningIn June 2021, we founded Lafiya Nigeria, a non-profit organisation that works toward ending maternal mortality in Nigeria by widening the access and information about family planning. TL; DR Introduction to our organisation in a 3-min videoThis post describes (I) the challenge we aim to solve, (II) our approach, (III), our traction, (IV) our value-add, (V) our plans, (VI) how you can get involved in our initative.(I) The challengeIn low and middle-income countries, women are dying from giving life. Nearly 300,000 women and girls are dying from pregnancy-related complications each year, according to the Guttmacher Institute. Other health complications such as obstetric fistula, postpartum anemia, and postnatal depression are also key health burdens borne by pregnant women.Reducing the number of unintended pregnancies is an effective means of reducing health burdens for mothers and newborns. Despite a significant number of women in these countries wanting to avoid pregnancy, many are not using modern contraceptives, resulting in 85 million unintended pregnancies per year. If all women with unmet need were provided access to and used modern contraceptives, the Lancet estimated that maternal deaths globally would drop 44%. An estimated 70,000 maternal deaths could be prevented each year, with 441,000 new-born deaths also averted. Additionally, the Guttmacher Institute estimates that every dollar spent on contraceptive services beyond the current level would reduce the cost of pregnancy-related and newborn care by three dollars. The Copenhagen Consensus also found that every dollar spent on access to modern contraception leads to 120 dollars of social, economic, and environmental benefits.Access to family planning is beyond a health issue: its dividends are seen also in positive effects on education, income generation, and children's welfare. A study in Indonesia found that providing access to family planning was three times more effective than improving school quality in keeping girls in school an extra year. Research in Colombia found that girls with access to family planning clinics were 7% more likely to participate in the formal workforce as adults. Long-term studies have also shown that providing access to family planning programs can lead to improved college completion rates of children and higher family incomes decades later. These spillover effects are difficult to measure and are often neglected in traditional cost-effectiveness analyses.In recent years, there has been an increase in the use of modern contraceptives in countries like Nigeria but there has also been an increase in the unmet need. From 2012 to 2019, the portion of women using contraceptives in Nigeria rose from 11.2% to 14.2%, while the unmet need also rose from 22.4% to 23.7%. These figures are greatly exaggeratedFocus: NigeriaLafiya Nigeria focuses on rural and underserved regions of northern NigeriaNigeria has >45M women of child-bearing age, and 65% have unmet contraceptive needs (IHME), resulting in around 40,000 maternal deaths a year.In Nigeria, over 83% of women had not used any contraceptive methods for family prevention in 2018. This rate reached 96% among women without any education. This staggering gap in health provision results in maternal and infant deaths. In Nigeria, over 40,000 women die each year from pregnancy-related issues. The loss of life does end with mother, either. Over one million children under the age of five also die as a result of losing their mothers to pregnancy delivery complications.In our pilot region, Jigawa, more than 98% of women have no prior contraceptive use due to stockouts ...]]>
Klau Chmielowska https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:07 None full 4491
TdMSpzSqkQDccnSy2_EA EA - Posts from 2022 you thought were valuable (and underrated) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Posts from 2022 you thought were valuable (and underrated), published by Lizka on January 17, 2023 on The Effective Altruism Forum.Forum Wrapped showed you what you upvoted and strong-upvoted in 2022. We also gave you the chance to mark some posts as “most valuable.”I’m sharing which posts were marked as “most valuable” by most people, and which posts were most underrated by their karma score (relative to the number of “most valuable” votes).Please note that this is a very rough list; relatively few people marked posts as "most valuable," and I imagine that those who did, didn't do it very carefully or comprehensively. And there are various biases in the data (like the fact that we showed the list in order of karma).Before you continue, consider looking back (if you haven’t done that yet).You can look at your Forum Wrapped, consider scrolling through high-rated (adjusted) posts from 2022, or explore posts on your favorite topics.(You can still mark posts as “most valuable” — we might revisit these numbers later, and you might find it useful to have a record of this for yourself.)If you want to explore more content, you could look at What are the most underrated posts & comments of 2022, according to you? (thread), my comment on the Forum Wrapped announcement, curated posts from 2022, and older content like the Results from the First Decade Review and the Forum Digest Classics.Which posts did Forum users think were most valuable?(Note that we ordered posts in "wrapped" by karma score, meaning higher-karma posts might be artificially over-rated.)VotesAuthor(s)TitleASB, ecaConcrete Biosecurity Projects (some of which could be big)Julia WisePower dynamics between people in EATheo HawkingBad Omens in Current Community BuildingScott Alexander"Long-Termism" vs. "Existential Risk"George RosenfeldFree-spending EA might be a big problem for optics and epistemicsHaydn BelfieldAre you really in a race? The Cautionary Tales of Szilárd and EllsbergHolden KarnofskyEA is about maximization, and maximization is perilousSimon MStrongMinds should not be a top-rated charity (yet)Fods12The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governanceMathiasKBSnakebites kill 100,000 people every year, here's what you should knowWilliam MacAskillEA and the current funding situationLinchSome unfun lessons I learned as a junior grantmakerWill Bradshaw, Mike McLaren, Anjali GopalAnnouncing the Nucleic Acid Observatory project for early detection of catastrophic biothreatsThomas KwaEffectiveness is aConjunction of MultipliersKarolina Sarek, JoeyPresenting: 2022 Incubated Charities (Charity Entrepreneurship)Nuño SempereA Critical Review of Open Philanthropy’s Bet On Criminal Justice ReformHolden KarnofskyMy takes on the FTX situation will (mostly) be cold, not hotPeter McLaughlinGetting on a different train: can Effective Altruism avoid collapsing into absurdity?Katja GraceLet’s think about slowing down AIHolden KarnofskyImportant, actionable research questions for the most important centuryHolden KarnofskySome comments on recent FTX-related eventsMaya DI’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared.AjeyaWithout specific countermeasures, the easiest path to transformative AI likely leads to AI takeoverVotesAuthor(s)TitleASB, ecaConcrete Biosecurity Projects (some of which could be big)Julia WisePower dynamics between people in EATheo HawkingBad Omens in Current Community BuildingScott Alexander"Long-Termism" vs. "Existential Risk"George RosenfeldFree-spending EA might be a big problem for optics and epistemicsHaydn BelfieldAre you really in a race?The Cautionary Tales of Szilárd and EllsbergHolden KarnofskyEA is about maximization, and maximization is perilousSimon MStrongMinds should not be a top-...]]>
Lizka https://forum.effectivealtruism.org/posts/TdMSpzSqkQDccnSy2/posts-from-2022-you-thought-were-valuable-and-underrated Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Posts from 2022 you thought were valuable (and underrated), published by Lizka on January 17, 2023 on The Effective Altruism Forum.Forum Wrapped showed you what you upvoted and strong-upvoted in 2022. We also gave you the chance to mark some posts as “most valuable.”I’m sharing which posts were marked as “most valuable” by most people, and which posts were most underrated by their karma score (relative to the number of “most valuable” votes).Please note that this is a very rough list; relatively few people marked posts as "most valuable," and I imagine that those who did, didn't do it very carefully or comprehensively. And there are various biases in the data (like the fact that we showed the list in order of karma).Before you continue, consider looking back (if you haven’t done that yet).You can look at your Forum Wrapped, consider scrolling through high-rated (adjusted) posts from 2022, or explore posts on your favorite topics.(You can still mark posts as “most valuable” — we might revisit these numbers later, and you might find it useful to have a record of this for yourself.)If you want to explore more content, you could look at What are the most underrated posts & comments of 2022, according to you? (thread), my comment on the Forum Wrapped announcement, curated posts from 2022, and older content like the Results from the First Decade Review and the Forum Digest Classics.Which posts did Forum users think were most valuable?(Note that we ordered posts in "wrapped" by karma score, meaning higher-karma posts might be artificially over-rated.)VotesAuthor(s)TitleASB, ecaConcrete Biosecurity Projects (some of which could be big)Julia WisePower dynamics between people in EATheo HawkingBad Omens in Current Community BuildingScott Alexander"Long-Termism" vs. "Existential Risk"George RosenfeldFree-spending EA might be a big problem for optics and epistemicsHaydn BelfieldAre you really in a race? The Cautionary Tales of Szilárd and EllsbergHolden KarnofskyEA is about maximization, and maximization is perilousSimon MStrongMinds should not be a top-rated charity (yet)Fods12The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governanceMathiasKBSnakebites kill 100,000 people every year, here's what you should knowWilliam MacAskillEA and the current funding situationLinchSome unfun lessons I learned as a junior grantmakerWill Bradshaw, Mike McLaren, Anjali GopalAnnouncing the Nucleic Acid Observatory project for early detection of catastrophic biothreatsThomas KwaEffectiveness is aConjunction of MultipliersKarolina Sarek, JoeyPresenting: 2022 Incubated Charities (Charity Entrepreneurship)Nuño SempereA Critical Review of Open Philanthropy’s Bet On Criminal Justice ReformHolden KarnofskyMy takes on the FTX situation will (mostly) be cold, not hotPeter McLaughlinGetting on a different train: can Effective Altruism avoid collapsing into absurdity?Katja GraceLet’s think about slowing down AIHolden KarnofskyImportant, actionable research questions for the most important centuryHolden KarnofskySome comments on recent FTX-related eventsMaya DI’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared.AjeyaWithout specific countermeasures, the easiest path to transformative AI likely leads to AI takeoverVotesAuthor(s)TitleASB, ecaConcrete Biosecurity Projects (some of which could be big)Julia WisePower dynamics between people in EATheo HawkingBad Omens in Current Community BuildingScott Alexander"Long-Termism" vs. "Existential Risk"George RosenfeldFree-spending EA might be a big problem for optics and epistemicsHaydn BelfieldAre you really in a race?The Cautionary Tales of Szilárd and EllsbergHolden KarnofskyEA is about maximization, and maximization is perilousSimon MStrongMinds should not be a top-...]]>
Tue, 17 Jan 2023 18:18:08 +0000 EA - Posts from 2022 you thought were valuable (and underrated) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Posts from 2022 you thought were valuable (and underrated), published by Lizka on January 17, 2023 on The Effective Altruism Forum.Forum Wrapped showed you what you upvoted and strong-upvoted in 2022. We also gave you the chance to mark some posts as “most valuable.”I’m sharing which posts were marked as “most valuable” by most people, and which posts were most underrated by their karma score (relative to the number of “most valuable” votes).Please note that this is a very rough list; relatively few people marked posts as "most valuable," and I imagine that those who did, didn't do it very carefully or comprehensively. And there are various biases in the data (like the fact that we showed the list in order of karma).Before you continue, consider looking back (if you haven’t done that yet).You can look at your Forum Wrapped, consider scrolling through high-rated (adjusted) posts from 2022, or explore posts on your favorite topics.(You can still mark posts as “most valuable” — we might revisit these numbers later, and you might find it useful to have a record of this for yourself.)If you want to explore more content, you could look at What are the most underrated posts & comments of 2022, according to you? (thread), my comment on the Forum Wrapped announcement, curated posts from 2022, and older content like the Results from the First Decade Review and the Forum Digest Classics.Which posts did Forum users think were most valuable?(Note that we ordered posts in "wrapped" by karma score, meaning higher-karma posts might be artificially over-rated.)VotesAuthor(s)TitleASB, ecaConcrete Biosecurity Projects (some of which could be big)Julia WisePower dynamics between people in EATheo HawkingBad Omens in Current Community BuildingScott Alexander"Long-Termism" vs. "Existential Risk"George RosenfeldFree-spending EA might be a big problem for optics and epistemicsHaydn BelfieldAre you really in a race? The Cautionary Tales of Szilárd and EllsbergHolden KarnofskyEA is about maximization, and maximization is perilousSimon MStrongMinds should not be a top-rated charity (yet)Fods12The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governanceMathiasKBSnakebites kill 100,000 people every year, here's what you should knowWilliam MacAskillEA and the current funding situationLinchSome unfun lessons I learned as a junior grantmakerWill Bradshaw, Mike McLaren, Anjali GopalAnnouncing the Nucleic Acid Observatory project for early detection of catastrophic biothreatsThomas KwaEffectiveness is aConjunction of MultipliersKarolina Sarek, JoeyPresenting: 2022 Incubated Charities (Charity Entrepreneurship)Nuño SempereA Critical Review of Open Philanthropy’s Bet On Criminal Justice ReformHolden KarnofskyMy takes on the FTX situation will (mostly) be cold, not hotPeter McLaughlinGetting on a different train: can Effective Altruism avoid collapsing into absurdity?Katja GraceLet’s think about slowing down AIHolden KarnofskyImportant, actionable research questions for the most important centuryHolden KarnofskySome comments on recent FTX-related eventsMaya DI’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared.AjeyaWithout specific countermeasures, the easiest path to transformative AI likely leads to AI takeoverVotesAuthor(s)TitleASB, ecaConcrete Biosecurity Projects (some of which could be big)Julia WisePower dynamics between people in EATheo HawkingBad Omens in Current Community BuildingScott Alexander"Long-Termism" vs. "Existential Risk"George RosenfeldFree-spending EA might be a big problem for optics and epistemicsHaydn BelfieldAre you really in a race?The Cautionary Tales of Szilárd and EllsbergHolden KarnofskyEA is about maximization, and maximization is perilousSimon MStrongMinds should not be a top-...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Posts from 2022 you thought were valuable (and underrated), published by Lizka on January 17, 2023 on The Effective Altruism Forum.Forum Wrapped showed you what you upvoted and strong-upvoted in 2022. We also gave you the chance to mark some posts as “most valuable.”I’m sharing which posts were marked as “most valuable” by most people, and which posts were most underrated by their karma score (relative to the number of “most valuable” votes).Please note that this is a very rough list; relatively few people marked posts as "most valuable," and I imagine that those who did, didn't do it very carefully or comprehensively. And there are various biases in the data (like the fact that we showed the list in order of karma).Before you continue, consider looking back (if you haven’t done that yet).You can look at your Forum Wrapped, consider scrolling through high-rated (adjusted) posts from 2022, or explore posts on your favorite topics.(You can still mark posts as “most valuable” — we might revisit these numbers later, and you might find it useful to have a record of this for yourself.)If you want to explore more content, you could look at What are the most underrated posts & comments of 2022, according to you? (thread), my comment on the Forum Wrapped announcement, curated posts from 2022, and older content like the Results from the First Decade Review and the Forum Digest Classics.Which posts did Forum users think were most valuable?(Note that we ordered posts in "wrapped" by karma score, meaning higher-karma posts might be artificially over-rated.)VotesAuthor(s)TitleASB, ecaConcrete Biosecurity Projects (some of which could be big)Julia WisePower dynamics between people in EATheo HawkingBad Omens in Current Community BuildingScott Alexander"Long-Termism" vs. "Existential Risk"George RosenfeldFree-spending EA might be a big problem for optics and epistemicsHaydn BelfieldAre you really in a race? The Cautionary Tales of Szilárd and EllsbergHolden KarnofskyEA is about maximization, and maximization is perilousSimon MStrongMinds should not be a top-rated charity (yet)Fods12The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governanceMathiasKBSnakebites kill 100,000 people every year, here's what you should knowWilliam MacAskillEA and the current funding situationLinchSome unfun lessons I learned as a junior grantmakerWill Bradshaw, Mike McLaren, Anjali GopalAnnouncing the Nucleic Acid Observatory project for early detection of catastrophic biothreatsThomas KwaEffectiveness is aConjunction of MultipliersKarolina Sarek, JoeyPresenting: 2022 Incubated Charities (Charity Entrepreneurship)Nuño SempereA Critical Review of Open Philanthropy’s Bet On Criminal Justice ReformHolden KarnofskyMy takes on the FTX situation will (mostly) be cold, not hotPeter McLaughlinGetting on a different train: can Effective Altruism avoid collapsing into absurdity?Katja GraceLet’s think about slowing down AIHolden KarnofskyImportant, actionable research questions for the most important centuryHolden KarnofskySome comments on recent FTX-related eventsMaya DI’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared.AjeyaWithout specific countermeasures, the easiest path to transformative AI likely leads to AI takeoverVotesAuthor(s)TitleASB, ecaConcrete Biosecurity Projects (some of which could be big)Julia WisePower dynamics between people in EATheo HawkingBad Omens in Current Community BuildingScott Alexander"Long-Termism" vs. "Existential Risk"George RosenfeldFree-spending EA might be a big problem for optics and epistemicsHaydn BelfieldAre you really in a race?The Cautionary Tales of Szilárd and EllsbergHolden KarnofskyEA is about maximization, and maximization is perilousSimon MStrongMinds should not be a top-...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:56 None full 4492
rZoRGxJzipcQoaPST_EA EA - How many people are working (directly) on reducing existential risk from AI? by Benjamin Hilton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many people are working (directly) on reducing existential risk from AI?, published by Benjamin Hilton on January 17, 2023 on The Effective Altruism Forum.SummaryI've updated my estimate of the number of FTE (full-time equivalent) working (directly) on reducing existential risks from AI from 300 FTE to 400 FTE.Below I've pasted some slightly edited excepts of the relevant sections of the 80,000 Hours profile on preventing an AI-related catastrophe.New 80,000 Hours estimate of the number of people working on reducing AI riskNeglectedness estimateWe estimate there are around 400 people around the world working directly on reducing the chances of an AI-related existential catastrophe (with a 90% confidence interval ranging between 200 and 1,000). Of these, about three quarters are working on technical AI safety research, with the rest split between strategy (and other governance) research and advocacy. We think there are around 800 people working in complementary roles, but we’re highly uncertain about this estimate.Footnote on methodologyIt’s difficult to estimate this number.Ideally we want to estimate the number of FTE (“full-time equivalent“) working on the problem of reducing existential risks from AI.But there are lots of ambiguities around what counts as working on the issue. So I tried to use the following guidelines in my estimates:I didn’t include people who might think of themselves on a career path that is building towards a role preventing an AI-related catastrophe, but who are currently skilling up rather than working directly on the problem.I included researchers, engineers, and other staff that seem to work directly on technical AI safety research or AI strategy and governance. But there’s an uncertain boundary between these people and others who I chose not to include. For example, I didn’t include machine learning engineers whose role is building AI systems that might be used for safety research but aren’t primarily designed for that purpose.I only included time spent on work that seems related to reducing the potentially existential risks from AI, like those discussed in this article. Lots of wider AI safety and AI ethics work focuses on reducing other risks from AI seems relevant to reducing existential risks – this ‘indirect’ work makes this estimate difficult. I decided not to include indirect work on reducing the risks of an AI-related catastrophe (see our problem framework for more).Relatedly, I didn’t include people working on other problems that might indirectly affect the chances of an AI-related catastrophe, such as epistemics and improving institutional decision-making, reducing the chances of great power conflict, or building effective altruism.With those decisions made, I estimated this in three different ways.First, for each organisation in the AI Watch database, I estimated the number of FTE working directly on reducing existential risks from AI. I did this by looking at the number of staff listed at each organisation, both in total and in 2022, as well as the number of researchers listed at each organisation. Overall I estimated that there were 76 to 536 FTE working on technical AI safety (90% confidence), with a mean of 196 FTE. I estimated that there were 51 to 359 FTE working on AI governance and strategy (90% confidence), with a mean of 151 FTE. There’s a lot of subjective judgement in these estimates because of the ambiguities above. The estimates could be too low if AI Watch is missing data on some organisations, or too high if the data counts people more than once or includes people who no longer work in the area.Second, I adapted the methodology used by Gavin Leech’s estimate of the number of people working on reducing existential risks from AI. I split the organisations in Leech’s estimate into technical sa...]]>
Benjamin Hilton https://forum.effectivealtruism.org/posts/rZoRGxJzipcQoaPST/how-many-people-are-working-directly-on-reducing-existential Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many people are working (directly) on reducing existential risk from AI?, published by Benjamin Hilton on January 17, 2023 on The Effective Altruism Forum.SummaryI've updated my estimate of the number of FTE (full-time equivalent) working (directly) on reducing existential risks from AI from 300 FTE to 400 FTE.Below I've pasted some slightly edited excepts of the relevant sections of the 80,000 Hours profile on preventing an AI-related catastrophe.New 80,000 Hours estimate of the number of people working on reducing AI riskNeglectedness estimateWe estimate there are around 400 people around the world working directly on reducing the chances of an AI-related existential catastrophe (with a 90% confidence interval ranging between 200 and 1,000). Of these, about three quarters are working on technical AI safety research, with the rest split between strategy (and other governance) research and advocacy. We think there are around 800 people working in complementary roles, but we’re highly uncertain about this estimate.Footnote on methodologyIt’s difficult to estimate this number.Ideally we want to estimate the number of FTE (“full-time equivalent“) working on the problem of reducing existential risks from AI.But there are lots of ambiguities around what counts as working on the issue. So I tried to use the following guidelines in my estimates:I didn’t include people who might think of themselves on a career path that is building towards a role preventing an AI-related catastrophe, but who are currently skilling up rather than working directly on the problem.I included researchers, engineers, and other staff that seem to work directly on technical AI safety research or AI strategy and governance. But there’s an uncertain boundary between these people and others who I chose not to include. For example, I didn’t include machine learning engineers whose role is building AI systems that might be used for safety research but aren’t primarily designed for that purpose.I only included time spent on work that seems related to reducing the potentially existential risks from AI, like those discussed in this article. Lots of wider AI safety and AI ethics work focuses on reducing other risks from AI seems relevant to reducing existential risks – this ‘indirect’ work makes this estimate difficult. I decided not to include indirect work on reducing the risks of an AI-related catastrophe (see our problem framework for more).Relatedly, I didn’t include people working on other problems that might indirectly affect the chances of an AI-related catastrophe, such as epistemics and improving institutional decision-making, reducing the chances of great power conflict, or building effective altruism.With those decisions made, I estimated this in three different ways.First, for each organisation in the AI Watch database, I estimated the number of FTE working directly on reducing existential risks from AI. I did this by looking at the number of staff listed at each organisation, both in total and in 2022, as well as the number of researchers listed at each organisation. Overall I estimated that there were 76 to 536 FTE working on technical AI safety (90% confidence), with a mean of 196 FTE. I estimated that there were 51 to 359 FTE working on AI governance and strategy (90% confidence), with a mean of 151 FTE. There’s a lot of subjective judgement in these estimates because of the ambiguities above. The estimates could be too low if AI Watch is missing data on some organisations, or too high if the data counts people more than once or includes people who no longer work in the area.Second, I adapted the methodology used by Gavin Leech’s estimate of the number of people working on reducing existential risks from AI. I split the organisations in Leech’s estimate into technical sa...]]>
Tue, 17 Jan 2023 15:43:27 +0000 EA - How many people are working (directly) on reducing existential risk from AI? by Benjamin Hilton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many people are working (directly) on reducing existential risk from AI?, published by Benjamin Hilton on January 17, 2023 on The Effective Altruism Forum.SummaryI've updated my estimate of the number of FTE (full-time equivalent) working (directly) on reducing existential risks from AI from 300 FTE to 400 FTE.Below I've pasted some slightly edited excepts of the relevant sections of the 80,000 Hours profile on preventing an AI-related catastrophe.New 80,000 Hours estimate of the number of people working on reducing AI riskNeglectedness estimateWe estimate there are around 400 people around the world working directly on reducing the chances of an AI-related existential catastrophe (with a 90% confidence interval ranging between 200 and 1,000). Of these, about three quarters are working on technical AI safety research, with the rest split between strategy (and other governance) research and advocacy. We think there are around 800 people working in complementary roles, but we’re highly uncertain about this estimate.Footnote on methodologyIt’s difficult to estimate this number.Ideally we want to estimate the number of FTE (“full-time equivalent“) working on the problem of reducing existential risks from AI.But there are lots of ambiguities around what counts as working on the issue. So I tried to use the following guidelines in my estimates:I didn’t include people who might think of themselves on a career path that is building towards a role preventing an AI-related catastrophe, but who are currently skilling up rather than working directly on the problem.I included researchers, engineers, and other staff that seem to work directly on technical AI safety research or AI strategy and governance. But there’s an uncertain boundary between these people and others who I chose not to include. For example, I didn’t include machine learning engineers whose role is building AI systems that might be used for safety research but aren’t primarily designed for that purpose.I only included time spent on work that seems related to reducing the potentially existential risks from AI, like those discussed in this article. Lots of wider AI safety and AI ethics work focuses on reducing other risks from AI seems relevant to reducing existential risks – this ‘indirect’ work makes this estimate difficult. I decided not to include indirect work on reducing the risks of an AI-related catastrophe (see our problem framework for more).Relatedly, I didn’t include people working on other problems that might indirectly affect the chances of an AI-related catastrophe, such as epistemics and improving institutional decision-making, reducing the chances of great power conflict, or building effective altruism.With those decisions made, I estimated this in three different ways.First, for each organisation in the AI Watch database, I estimated the number of FTE working directly on reducing existential risks from AI. I did this by looking at the number of staff listed at each organisation, both in total and in 2022, as well as the number of researchers listed at each organisation. Overall I estimated that there were 76 to 536 FTE working on technical AI safety (90% confidence), with a mean of 196 FTE. I estimated that there were 51 to 359 FTE working on AI governance and strategy (90% confidence), with a mean of 151 FTE. There’s a lot of subjective judgement in these estimates because of the ambiguities above. The estimates could be too low if AI Watch is missing data on some organisations, or too high if the data counts people more than once or includes people who no longer work in the area.Second, I adapted the methodology used by Gavin Leech’s estimate of the number of people working on reducing existential risks from AI. I split the organisations in Leech’s estimate into technical sa...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many people are working (directly) on reducing existential risk from AI?, published by Benjamin Hilton on January 17, 2023 on The Effective Altruism Forum.SummaryI've updated my estimate of the number of FTE (full-time equivalent) working (directly) on reducing existential risks from AI from 300 FTE to 400 FTE.Below I've pasted some slightly edited excepts of the relevant sections of the 80,000 Hours profile on preventing an AI-related catastrophe.New 80,000 Hours estimate of the number of people working on reducing AI riskNeglectedness estimateWe estimate there are around 400 people around the world working directly on reducing the chances of an AI-related existential catastrophe (with a 90% confidence interval ranging between 200 and 1,000). Of these, about three quarters are working on technical AI safety research, with the rest split between strategy (and other governance) research and advocacy. We think there are around 800 people working in complementary roles, but we’re highly uncertain about this estimate.Footnote on methodologyIt’s difficult to estimate this number.Ideally we want to estimate the number of FTE (“full-time equivalent“) working on the problem of reducing existential risks from AI.But there are lots of ambiguities around what counts as working on the issue. So I tried to use the following guidelines in my estimates:I didn’t include people who might think of themselves on a career path that is building towards a role preventing an AI-related catastrophe, but who are currently skilling up rather than working directly on the problem.I included researchers, engineers, and other staff that seem to work directly on technical AI safety research or AI strategy and governance. But there’s an uncertain boundary between these people and others who I chose not to include. For example, I didn’t include machine learning engineers whose role is building AI systems that might be used for safety research but aren’t primarily designed for that purpose.I only included time spent on work that seems related to reducing the potentially existential risks from AI, like those discussed in this article. Lots of wider AI safety and AI ethics work focuses on reducing other risks from AI seems relevant to reducing existential risks – this ‘indirect’ work makes this estimate difficult. I decided not to include indirect work on reducing the risks of an AI-related catastrophe (see our problem framework for more).Relatedly, I didn’t include people working on other problems that might indirectly affect the chances of an AI-related catastrophe, such as epistemics and improving institutional decision-making, reducing the chances of great power conflict, or building effective altruism.With those decisions made, I estimated this in three different ways.First, for each organisation in the AI Watch database, I estimated the number of FTE working directly on reducing existential risks from AI. I did this by looking at the number of staff listed at each organisation, both in total and in 2022, as well as the number of researchers listed at each organisation. Overall I estimated that there were 76 to 536 FTE working on technical AI safety (90% confidence), with a mean of 196 FTE. I estimated that there were 51 to 359 FTE working on AI governance and strategy (90% confidence), with a mean of 151 FTE. There’s a lot of subjective judgement in these estimates because of the ambiguities above. The estimates could be too low if AI Watch is missing data on some organisations, or too high if the data counts people more than once or includes people who no longer work in the area.Second, I adapted the methodology used by Gavin Leech’s estimate of the number of people working on reducing existential risks from AI. I split the organisations in Leech’s estimate into technical sa...]]>
Benjamin Hilton https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:08 None full 4493
RnkiBwhXTJHdsq2eS_EA EA - What improvements should be made to improve EA discussion on heated topics? by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What improvements should be made to improve EA discussion on heated topics?, published by Ozzie Gooen on January 16, 2023 on The Effective Altruism Forum.We've recently went through a series of intense EA controversies. I get the sense that many EAs have found EA discussion on these to be exhausting and disappointing.Whatever the details of the current controversies, I think it's clear there's improvement to be made. This could be a good time to reflect on what would be useful.I don't want to comment on the heated topics of the day, so let's assume that these changes will only be applies to future topics, no matter what those might be.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ozzie Gooen https://forum.effectivealtruism.org/posts/RnkiBwhXTJHdsq2eS/what-improvements-should-be-made-to-improve-ea-discussion-on Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What improvements should be made to improve EA discussion on heated topics?, published by Ozzie Gooen on January 16, 2023 on The Effective Altruism Forum.We've recently went through a series of intense EA controversies. I get the sense that many EAs have found EA discussion on these to be exhausting and disappointing.Whatever the details of the current controversies, I think it's clear there's improvement to be made. This could be a good time to reflect on what would be useful.I don't want to comment on the heated topics of the day, so let's assume that these changes will only be applies to future topics, no matter what those might be.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 17 Jan 2023 08:48:32 +0000 EA - What improvements should be made to improve EA discussion on heated topics? by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What improvements should be made to improve EA discussion on heated topics?, published by Ozzie Gooen on January 16, 2023 on The Effective Altruism Forum.We've recently went through a series of intense EA controversies. I get the sense that many EAs have found EA discussion on these to be exhausting and disappointing.Whatever the details of the current controversies, I think it's clear there's improvement to be made. This could be a good time to reflect on what would be useful.I don't want to comment on the heated topics of the day, so let's assume that these changes will only be applies to future topics, no matter what those might be.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What improvements should be made to improve EA discussion on heated topics?, published by Ozzie Gooen on January 16, 2023 on The Effective Altruism Forum.We've recently went through a series of intense EA controversies. I get the sense that many EAs have found EA discussion on these to be exhausting and disappointing.Whatever the details of the current controversies, I think it's clear there's improvement to be made. This could be a good time to reflect on what would be useful.I don't want to comment on the heated topics of the day, so let's assume that these changes will only be applies to future topics, no matter what those might be.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ozzie Gooen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:53 None full 4482
exZ7sqDCoXJwKNZQz_EA EA - Replace Neglectedness by Indra Gesink Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Replace Neglectedness, published by Indra Gesink on January 16, 2023 on The Effective Altruism Forum.for example with Leverage, as featured in Will MacAskill’s What We Owe the Future.The second bullet point featured in the website introduction to effective altruism is the ITN framework. This exists to prioritize problems. The framework does so by considering the Importance — or scale, S — of a problem, as the number of people or quality-adjusted life years (QALYs) affected, multiplied with the Tractability, as the potential that this problem can be addressed, and Neglectedness, as the number of people already working to address this problem (ITN-framework, including Leverage). Tractibility is sometimes also called Solvability, and non-neglectedness crowdedness.Some criticisms and difficulties in interpreting the framework (1, 2, 3, 4) have preceded this forum post. The ITN framework can be interpreted - as also in the final paragraph of (1) - such that IT represents the potential that a problem can be addressed, while ITN considers the difference that any one individual can make to that problem, particularly the next individual. How much impact can the next individual make, choosing to work on this problem, on average? Why do I add “on average”? We are still ignoring the person’s unique qualities, and instead more abstractly consider an average person. Adding “personal fit” as another multiplicative factor would make it personal as well.So “How much impact can the next individual make on this problem?” really asks for the marginal counterfactual impact. Respectively this is the amount of impact that this one individual adds to the total impact so far, which would not happen otherwise. The ITN-factor Neglectedness assumes that this marginal counterfactual impact is declining — strictly — as more individuals join the endeavor of addressing the particular problem. If this is true, then — indeed — a more neglected problem ceteris paribus — i.e. not varying factors I, T (or personal fit) simultaneously — always yields more impact when fewer individuals are already addressing it. This is however not always true, as also already pointed out in the criticisms referenced above.Consider the following string of examples. Suppose a partial civilizational collapse has occurred, and you consider whether it would be good to go and repopulate the now barren lands. The ITN-framework says that as the first person to do so you make the biggest difference. However, alone you cannot procreate, at least not without far-reaching technological assistance. In fact a sizable group of people deciding to do so might very well still be ineffective, by not bringing in sufficient genetic diversity. This is captured by a well-known term in population biology: the critical or minimally viable population size (to persist). Something similar operates to a lesser extent in the effectiveness of teams. I for example once found the advice to better not join a company as the sole data scientist, as you would not have a team to exchange ideas with. Working together, you become more effective, and develop more.Advocating for policies is another area that is important and where you need teams. Consider there being multiple equally worthwhile causes to protest for, but by the logic of the ITN-framework you always join the least populated protest. And no critical mass is obtained. Doesn’t that seem absurd? See also (5). (And the third image in (3), depicting a one-time significant increase in marginal counterfactual impact, as with a critical vote to establish a majority. This graph is also called an indicator function). Effective altruists might similarly often find themselves advocating for policies which are neglected and that are thus not well known to the recipient of such advocacy. As opposed to max...]]>
Indra Gesink https://forum.effectivealtruism.org/posts/exZ7sqDCoXJwKNZQz/replace-neglectedness Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Replace Neglectedness, published by Indra Gesink on January 16, 2023 on The Effective Altruism Forum.for example with Leverage, as featured in Will MacAskill’s What We Owe the Future.The second bullet point featured in the website introduction to effective altruism is the ITN framework. This exists to prioritize problems. The framework does so by considering the Importance — or scale, S — of a problem, as the number of people or quality-adjusted life years (QALYs) affected, multiplied with the Tractability, as the potential that this problem can be addressed, and Neglectedness, as the number of people already working to address this problem (ITN-framework, including Leverage). Tractibility is sometimes also called Solvability, and non-neglectedness crowdedness.Some criticisms and difficulties in interpreting the framework (1, 2, 3, 4) have preceded this forum post. The ITN framework can be interpreted - as also in the final paragraph of (1) - such that IT represents the potential that a problem can be addressed, while ITN considers the difference that any one individual can make to that problem, particularly the next individual. How much impact can the next individual make, choosing to work on this problem, on average? Why do I add “on average”? We are still ignoring the person’s unique qualities, and instead more abstractly consider an average person. Adding “personal fit” as another multiplicative factor would make it personal as well.So “How much impact can the next individual make on this problem?” really asks for the marginal counterfactual impact. Respectively this is the amount of impact that this one individual adds to the total impact so far, which would not happen otherwise. The ITN-factor Neglectedness assumes that this marginal counterfactual impact is declining — strictly — as more individuals join the endeavor of addressing the particular problem. If this is true, then — indeed — a more neglected problem ceteris paribus — i.e. not varying factors I, T (or personal fit) simultaneously — always yields more impact when fewer individuals are already addressing it. This is however not always true, as also already pointed out in the criticisms referenced above.Consider the following string of examples. Suppose a partial civilizational collapse has occurred, and you consider whether it would be good to go and repopulate the now barren lands. The ITN-framework says that as the first person to do so you make the biggest difference. However, alone you cannot procreate, at least not without far-reaching technological assistance. In fact a sizable group of people deciding to do so might very well still be ineffective, by not bringing in sufficient genetic diversity. This is captured by a well-known term in population biology: the critical or minimally viable population size (to persist). Something similar operates to a lesser extent in the effectiveness of teams. I for example once found the advice to better not join a company as the sole data scientist, as you would not have a team to exchange ideas with. Working together, you become more effective, and develop more.Advocating for policies is another area that is important and where you need teams. Consider there being multiple equally worthwhile causes to protest for, but by the logic of the ITN-framework you always join the least populated protest. And no critical mass is obtained. Doesn’t that seem absurd? See also (5). (And the third image in (3), depicting a one-time significant increase in marginal counterfactual impact, as with a critical vote to establish a majority. This graph is also called an indicator function). Effective altruists might similarly often find themselves advocating for policies which are neglected and that are thus not well known to the recipient of such advocacy. As opposed to max...]]>
Tue, 17 Jan 2023 08:25:43 +0000 EA - Replace Neglectedness by Indra Gesink Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Replace Neglectedness, published by Indra Gesink on January 16, 2023 on The Effective Altruism Forum.for example with Leverage, as featured in Will MacAskill’s What We Owe the Future.The second bullet point featured in the website introduction to effective altruism is the ITN framework. This exists to prioritize problems. The framework does so by considering the Importance — or scale, S — of a problem, as the number of people or quality-adjusted life years (QALYs) affected, multiplied with the Tractability, as the potential that this problem can be addressed, and Neglectedness, as the number of people already working to address this problem (ITN-framework, including Leverage). Tractibility is sometimes also called Solvability, and non-neglectedness crowdedness.Some criticisms and difficulties in interpreting the framework (1, 2, 3, 4) have preceded this forum post. The ITN framework can be interpreted - as also in the final paragraph of (1) - such that IT represents the potential that a problem can be addressed, while ITN considers the difference that any one individual can make to that problem, particularly the next individual. How much impact can the next individual make, choosing to work on this problem, on average? Why do I add “on average”? We are still ignoring the person’s unique qualities, and instead more abstractly consider an average person. Adding “personal fit” as another multiplicative factor would make it personal as well.So “How much impact can the next individual make on this problem?” really asks for the marginal counterfactual impact. Respectively this is the amount of impact that this one individual adds to the total impact so far, which would not happen otherwise. The ITN-factor Neglectedness assumes that this marginal counterfactual impact is declining — strictly — as more individuals join the endeavor of addressing the particular problem. If this is true, then — indeed — a more neglected problem ceteris paribus — i.e. not varying factors I, T (or personal fit) simultaneously — always yields more impact when fewer individuals are already addressing it. This is however not always true, as also already pointed out in the criticisms referenced above.Consider the following string of examples. Suppose a partial civilizational collapse has occurred, and you consider whether it would be good to go and repopulate the now barren lands. The ITN-framework says that as the first person to do so you make the biggest difference. However, alone you cannot procreate, at least not without far-reaching technological assistance. In fact a sizable group of people deciding to do so might very well still be ineffective, by not bringing in sufficient genetic diversity. This is captured by a well-known term in population biology: the critical or minimally viable population size (to persist). Something similar operates to a lesser extent in the effectiveness of teams. I for example once found the advice to better not join a company as the sole data scientist, as you would not have a team to exchange ideas with. Working together, you become more effective, and develop more.Advocating for policies is another area that is important and where you need teams. Consider there being multiple equally worthwhile causes to protest for, but by the logic of the ITN-framework you always join the least populated protest. And no critical mass is obtained. Doesn’t that seem absurd? See also (5). (And the third image in (3), depicting a one-time significant increase in marginal counterfactual impact, as with a critical vote to establish a majority. This graph is also called an indicator function). Effective altruists might similarly often find themselves advocating for policies which are neglected and that are thus not well known to the recipient of such advocacy. As opposed to max...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Replace Neglectedness, published by Indra Gesink on January 16, 2023 on The Effective Altruism Forum.for example with Leverage, as featured in Will MacAskill’s What We Owe the Future.The second bullet point featured in the website introduction to effective altruism is the ITN framework. This exists to prioritize problems. The framework does so by considering the Importance — or scale, S — of a problem, as the number of people or quality-adjusted life years (QALYs) affected, multiplied with the Tractability, as the potential that this problem can be addressed, and Neglectedness, as the number of people already working to address this problem (ITN-framework, including Leverage). Tractibility is sometimes also called Solvability, and non-neglectedness crowdedness.Some criticisms and difficulties in interpreting the framework (1, 2, 3, 4) have preceded this forum post. The ITN framework can be interpreted - as also in the final paragraph of (1) - such that IT represents the potential that a problem can be addressed, while ITN considers the difference that any one individual can make to that problem, particularly the next individual. How much impact can the next individual make, choosing to work on this problem, on average? Why do I add “on average”? We are still ignoring the person’s unique qualities, and instead more abstractly consider an average person. Adding “personal fit” as another multiplicative factor would make it personal as well.So “How much impact can the next individual make on this problem?” really asks for the marginal counterfactual impact. Respectively this is the amount of impact that this one individual adds to the total impact so far, which would not happen otherwise. The ITN-factor Neglectedness assumes that this marginal counterfactual impact is declining — strictly — as more individuals join the endeavor of addressing the particular problem. If this is true, then — indeed — a more neglected problem ceteris paribus — i.e. not varying factors I, T (or personal fit) simultaneously — always yields more impact when fewer individuals are already addressing it. This is however not always true, as also already pointed out in the criticisms referenced above.Consider the following string of examples. Suppose a partial civilizational collapse has occurred, and you consider whether it would be good to go and repopulate the now barren lands. The ITN-framework says that as the first person to do so you make the biggest difference. However, alone you cannot procreate, at least not without far-reaching technological assistance. In fact a sizable group of people deciding to do so might very well still be ineffective, by not bringing in sufficient genetic diversity. This is captured by a well-known term in population biology: the critical or minimally viable population size (to persist). Something similar operates to a lesser extent in the effectiveness of teams. I for example once found the advice to better not join a company as the sole data scientist, as you would not have a team to exchange ideas with. Working together, you become more effective, and develop more.Advocating for policies is another area that is important and where you need teams. Consider there being multiple equally worthwhile causes to protest for, but by the logic of the ITN-framework you always join the least populated protest. And no critical mass is obtained. Doesn’t that seem absurd? See also (5). (And the third image in (3), depicting a one-time significant increase in marginal counterfactual impact, as with a critical vote to establish a majority. This graph is also called an indicator function). Effective altruists might similarly often find themselves advocating for policies which are neglected and that are thus not well known to the recipient of such advocacy. As opposed to max...]]>
Indra Gesink https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:42 None full 4483
zWscvNhd3xeGJabEn_EA EA - Announcing aisafety.training by JJ Hepburn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing aisafety.training, published by JJ Hepburn on January 17, 2023 on The Effective Altruism Forum.Crossposted to Less WrongTo help people find what to apply to, aisafety.training acts as a well-maintained living document of AI safety programs, conferences, and other events. This will smooth the experience of people working on and joining AI safety and reduce the burden on word-of-mouth transmission of available programs. This can also be helpful for field builders planning events to see when other things are happening to plan around. We at AI Safety Support have been internally maintaining this document for some time and using it in our free career coaching calls. We now have a public-facing version, a form to add anything we’ve missed, and an email to alert us to corrections.For example, below, you will find static / will-soon-be-outdated images without clicking on the website link.Application DeadlinesProgram TimelineProgram tableIf you’re interested in helping to maintain this and can consistently dedicate a few hours per week to reading places where things might have been announced, accepting additions from the form and corrections by email, or if you have feedback or feature requests, leave us a comment or drop by the Alignment Ecosystem Development Discord, where this is worked on along with related projects to improve the information ecosystem such as aisafety.community and the upcoming aisafety.world and ea.domains.Additionally, if you wish to have a monthly reminder of upcoming events sent directly to your inbox, subscribe to AI Safety Support’s newsletter here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
JJ Hepburn https://forum.effectivealtruism.org/posts/zWscvNhd3xeGJabEn/announcing-aisafety-training Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing aisafety.training, published by JJ Hepburn on January 17, 2023 on The Effective Altruism Forum.Crossposted to Less WrongTo help people find what to apply to, aisafety.training acts as a well-maintained living document of AI safety programs, conferences, and other events. This will smooth the experience of people working on and joining AI safety and reduce the burden on word-of-mouth transmission of available programs. This can also be helpful for field builders planning events to see when other things are happening to plan around. We at AI Safety Support have been internally maintaining this document for some time and using it in our free career coaching calls. We now have a public-facing version, a form to add anything we’ve missed, and an email to alert us to corrections.For example, below, you will find static / will-soon-be-outdated images without clicking on the website link.Application DeadlinesProgram TimelineProgram tableIf you’re interested in helping to maintain this and can consistently dedicate a few hours per week to reading places where things might have been announced, accepting additions from the form and corrections by email, or if you have feedback or feature requests, leave us a comment or drop by the Alignment Ecosystem Development Discord, where this is worked on along with related projects to improve the information ecosystem such as aisafety.community and the upcoming aisafety.world and ea.domains.Additionally, if you wish to have a monthly reminder of upcoming events sent directly to your inbox, subscribe to AI Safety Support’s newsletter here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 17 Jan 2023 06:46:38 +0000 EA - Announcing aisafety.training by JJ Hepburn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing aisafety.training, published by JJ Hepburn on January 17, 2023 on The Effective Altruism Forum.Crossposted to Less WrongTo help people find what to apply to, aisafety.training acts as a well-maintained living document of AI safety programs, conferences, and other events. This will smooth the experience of people working on and joining AI safety and reduce the burden on word-of-mouth transmission of available programs. This can also be helpful for field builders planning events to see when other things are happening to plan around. We at AI Safety Support have been internally maintaining this document for some time and using it in our free career coaching calls. We now have a public-facing version, a form to add anything we’ve missed, and an email to alert us to corrections.For example, below, you will find static / will-soon-be-outdated images without clicking on the website link.Application DeadlinesProgram TimelineProgram tableIf you’re interested in helping to maintain this and can consistently dedicate a few hours per week to reading places where things might have been announced, accepting additions from the form and corrections by email, or if you have feedback or feature requests, leave us a comment or drop by the Alignment Ecosystem Development Discord, where this is worked on along with related projects to improve the information ecosystem such as aisafety.community and the upcoming aisafety.world and ea.domains.Additionally, if you wish to have a monthly reminder of upcoming events sent directly to your inbox, subscribe to AI Safety Support’s newsletter here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing aisafety.training, published by JJ Hepburn on January 17, 2023 on The Effective Altruism Forum.Crossposted to Less WrongTo help people find what to apply to, aisafety.training acts as a well-maintained living document of AI safety programs, conferences, and other events. This will smooth the experience of people working on and joining AI safety and reduce the burden on word-of-mouth transmission of available programs. This can also be helpful for field builders planning events to see when other things are happening to plan around. We at AI Safety Support have been internally maintaining this document for some time and using it in our free career coaching calls. We now have a public-facing version, a form to add anything we’ve missed, and an email to alert us to corrections.For example, below, you will find static / will-soon-be-outdated images without clicking on the website link.Application DeadlinesProgram TimelineProgram tableIf you’re interested in helping to maintain this and can consistently dedicate a few hours per week to reading places where things might have been announced, accepting additions from the form and corrections by email, or if you have feedback or feature requests, leave us a comment or drop by the Alignment Ecosystem Development Discord, where this is worked on along with related projects to improve the information ecosystem such as aisafety.community and the upcoming aisafety.world and ea.domains.Additionally, if you wish to have a monthly reminder of upcoming events sent directly to your inbox, subscribe to AI Safety Support’s newsletter here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
JJ Hepburn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:44 None full 4481
AoqwhAY5faTHfYu4Z_EA EA - Some intuitions about fellowship programs by Joel Becker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some intuitions about fellowship programs, published by Joel Becker on January 16, 2023 on The Effective Altruism Forum.Epistemic status: not sure how reliable my intuition is, but I do have more experience with these programs than most.TL;DRMaking sure >30 participants have regular opportunities to spontaneously gather, active programming, basic food and medical amenities, and common knowledge about visit dates hugely increases the benefit of residential fellowship programs.For transparency, I should note a stronger, less confident belief of mine that I will not defend here. My instinct is that all of the above factors are not merely beneficial but necessary in order for these programs to be worth their cost (assuming that their goals are indeed as I describe below). I am aware that some factors come with large time or money costs. If these costs are prohibitive, so be it.BackgroundI ran one of these programs and participated in two others. So what follows is a post-mortem of my own mistakes as much as it is feedback for organizers and recommendations for funders and organizers considering future initiatives.It’s worth emphasizing that I got lots out of each of these experiences, and that I feel very grateful to fellow participants and organizers!Fellowship programsI take the goals of these programs to be some combination of:Increasing short-term productivity,Generating counterfactual collaborations/relationships, andMoving participants towards more impactful careers at an accelerated rate.I take the method of these programs to be some combination of:Hosting people who largely do not know each other in a new physical environment for a small number of months,Covering the cost of accommodation, co-working space, and travel to-and-from, andOrganizing social and professional activities.Things that I am not talking about include:Communities where many people already know one another, andThemed retreats, or other temporary communities organized around a professional aptitude/cause area/etc.Number of participantsMy intuition is that hosting 35 participants is much, much better than hosting 25. Not only in total, but on a per-participant basis.Evidence for the directional claim:In the program I ran, the overwhelming majority of reported positive impact anecdotes came during the ~50% of the time we hosted >30 participants.In fact, I need to get to the joint-30th most subjectively impressive anecdote before finding one I think came about during the time we hosted <=30 participants.A possible exception is the first few weeks of the program, when participants came across many novel ideas, relationships, and physical spaces even with smaller numbers of participants.With >30 participants, shared spaces were often packed in evenings, participants participated in and ran a wider variety of well-attended activities, the office felt alive, and work felt urgent.Among other ~vibes~ based on discussions with participants, my time as a participant, and my time as an organizer.30 might feel like an arbitrary cut-off: readers might think that I don’t mean this literally. In fact, I am tempted to defend this close-to-literally. One piece of evidence: in my experience, there are almost no spontaneous gatherings of most participants below 25; these gatherings become somewhat more likely beyond this, then shoot up around 30. They continue increasing beyond 35, albeit more slowly.My guess is that the effect runs through two mechanisms:The chance that someone's professional experiences are highly complementary with your own increases sharply around this point.Maybe because some small clusters of people with shared interests choose to participate at the same time — although I think this was largely not the case.The chance that someone will join you if you spontaneo...]]>
Joel Becker https://forum.effectivealtruism.org/posts/AoqwhAY5faTHfYu4Z/some-intuitions-about-fellowship-programs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some intuitions about fellowship programs, published by Joel Becker on January 16, 2023 on The Effective Altruism Forum.Epistemic status: not sure how reliable my intuition is, but I do have more experience with these programs than most.TL;DRMaking sure >30 participants have regular opportunities to spontaneously gather, active programming, basic food and medical amenities, and common knowledge about visit dates hugely increases the benefit of residential fellowship programs.For transparency, I should note a stronger, less confident belief of mine that I will not defend here. My instinct is that all of the above factors are not merely beneficial but necessary in order for these programs to be worth their cost (assuming that their goals are indeed as I describe below). I am aware that some factors come with large time or money costs. If these costs are prohibitive, so be it.BackgroundI ran one of these programs and participated in two others. So what follows is a post-mortem of my own mistakes as much as it is feedback for organizers and recommendations for funders and organizers considering future initiatives.It’s worth emphasizing that I got lots out of each of these experiences, and that I feel very grateful to fellow participants and organizers!Fellowship programsI take the goals of these programs to be some combination of:Increasing short-term productivity,Generating counterfactual collaborations/relationships, andMoving participants towards more impactful careers at an accelerated rate.I take the method of these programs to be some combination of:Hosting people who largely do not know each other in a new physical environment for a small number of months,Covering the cost of accommodation, co-working space, and travel to-and-from, andOrganizing social and professional activities.Things that I am not talking about include:Communities where many people already know one another, andThemed retreats, or other temporary communities organized around a professional aptitude/cause area/etc.Number of participantsMy intuition is that hosting 35 participants is much, much better than hosting 25. Not only in total, but on a per-participant basis.Evidence for the directional claim:In the program I ran, the overwhelming majority of reported positive impact anecdotes came during the ~50% of the time we hosted >30 participants.In fact, I need to get to the joint-30th most subjectively impressive anecdote before finding one I think came about during the time we hosted <=30 participants.A possible exception is the first few weeks of the program, when participants came across many novel ideas, relationships, and physical spaces even with smaller numbers of participants.With >30 participants, shared spaces were often packed in evenings, participants participated in and ran a wider variety of well-attended activities, the office felt alive, and work felt urgent.Among other ~vibes~ based on discussions with participants, my time as a participant, and my time as an organizer.30 might feel like an arbitrary cut-off: readers might think that I don’t mean this literally. In fact, I am tempted to defend this close-to-literally. One piece of evidence: in my experience, there are almost no spontaneous gatherings of most participants below 25; these gatherings become somewhat more likely beyond this, then shoot up around 30. They continue increasing beyond 35, albeit more slowly.My guess is that the effect runs through two mechanisms:The chance that someone's professional experiences are highly complementary with your own increases sharply around this point.Maybe because some small clusters of people with shared interests choose to participate at the same time — although I think this was largely not the case.The chance that someone will join you if you spontaneo...]]>
Mon, 16 Jan 2023 22:20:55 +0000 EA - Some intuitions about fellowship programs by Joel Becker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some intuitions about fellowship programs, published by Joel Becker on January 16, 2023 on The Effective Altruism Forum.Epistemic status: not sure how reliable my intuition is, but I do have more experience with these programs than most.TL;DRMaking sure >30 participants have regular opportunities to spontaneously gather, active programming, basic food and medical amenities, and common knowledge about visit dates hugely increases the benefit of residential fellowship programs.For transparency, I should note a stronger, less confident belief of mine that I will not defend here. My instinct is that all of the above factors are not merely beneficial but necessary in order for these programs to be worth their cost (assuming that their goals are indeed as I describe below). I am aware that some factors come with large time or money costs. If these costs are prohibitive, so be it.BackgroundI ran one of these programs and participated in two others. So what follows is a post-mortem of my own mistakes as much as it is feedback for organizers and recommendations for funders and organizers considering future initiatives.It’s worth emphasizing that I got lots out of each of these experiences, and that I feel very grateful to fellow participants and organizers!Fellowship programsI take the goals of these programs to be some combination of:Increasing short-term productivity,Generating counterfactual collaborations/relationships, andMoving participants towards more impactful careers at an accelerated rate.I take the method of these programs to be some combination of:Hosting people who largely do not know each other in a new physical environment for a small number of months,Covering the cost of accommodation, co-working space, and travel to-and-from, andOrganizing social and professional activities.Things that I am not talking about include:Communities where many people already know one another, andThemed retreats, or other temporary communities organized around a professional aptitude/cause area/etc.Number of participantsMy intuition is that hosting 35 participants is much, much better than hosting 25. Not only in total, but on a per-participant basis.Evidence for the directional claim:In the program I ran, the overwhelming majority of reported positive impact anecdotes came during the ~50% of the time we hosted >30 participants.In fact, I need to get to the joint-30th most subjectively impressive anecdote before finding one I think came about during the time we hosted <=30 participants.A possible exception is the first few weeks of the program, when participants came across many novel ideas, relationships, and physical spaces even with smaller numbers of participants.With >30 participants, shared spaces were often packed in evenings, participants participated in and ran a wider variety of well-attended activities, the office felt alive, and work felt urgent.Among other ~vibes~ based on discussions with participants, my time as a participant, and my time as an organizer.30 might feel like an arbitrary cut-off: readers might think that I don’t mean this literally. In fact, I am tempted to defend this close-to-literally. One piece of evidence: in my experience, there are almost no spontaneous gatherings of most participants below 25; these gatherings become somewhat more likely beyond this, then shoot up around 30. They continue increasing beyond 35, albeit more slowly.My guess is that the effect runs through two mechanisms:The chance that someone's professional experiences are highly complementary with your own increases sharply around this point.Maybe because some small clusters of people with shared interests choose to participate at the same time — although I think this was largely not the case.The chance that someone will join you if you spontaneo...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some intuitions about fellowship programs, published by Joel Becker on January 16, 2023 on The Effective Altruism Forum.Epistemic status: not sure how reliable my intuition is, but I do have more experience with these programs than most.TL;DRMaking sure >30 participants have regular opportunities to spontaneously gather, active programming, basic food and medical amenities, and common knowledge about visit dates hugely increases the benefit of residential fellowship programs.For transparency, I should note a stronger, less confident belief of mine that I will not defend here. My instinct is that all of the above factors are not merely beneficial but necessary in order for these programs to be worth their cost (assuming that their goals are indeed as I describe below). I am aware that some factors come with large time or money costs. If these costs are prohibitive, so be it.BackgroundI ran one of these programs and participated in two others. So what follows is a post-mortem of my own mistakes as much as it is feedback for organizers and recommendations for funders and organizers considering future initiatives.It’s worth emphasizing that I got lots out of each of these experiences, and that I feel very grateful to fellow participants and organizers!Fellowship programsI take the goals of these programs to be some combination of:Increasing short-term productivity,Generating counterfactual collaborations/relationships, andMoving participants towards more impactful careers at an accelerated rate.I take the method of these programs to be some combination of:Hosting people who largely do not know each other in a new physical environment for a small number of months,Covering the cost of accommodation, co-working space, and travel to-and-from, andOrganizing social and professional activities.Things that I am not talking about include:Communities where many people already know one another, andThemed retreats, or other temporary communities organized around a professional aptitude/cause area/etc.Number of participantsMy intuition is that hosting 35 participants is much, much better than hosting 25. Not only in total, but on a per-participant basis.Evidence for the directional claim:In the program I ran, the overwhelming majority of reported positive impact anecdotes came during the ~50% of the time we hosted >30 participants.In fact, I need to get to the joint-30th most subjectively impressive anecdote before finding one I think came about during the time we hosted <=30 participants.A possible exception is the first few weeks of the program, when participants came across many novel ideas, relationships, and physical spaces even with smaller numbers of participants.With >30 participants, shared spaces were often packed in evenings, participants participated in and ran a wider variety of well-attended activities, the office felt alive, and work felt urgent.Among other ~vibes~ based on discussions with participants, my time as a participant, and my time as an organizer.30 might feel like an arbitrary cut-off: readers might think that I don’t mean this literally. In fact, I am tempted to defend this close-to-literally. One piece of evidence: in my experience, there are almost no spontaneous gatherings of most participants below 25; these gatherings become somewhat more likely beyond this, then shoot up around 30. They continue increasing beyond 35, albeit more slowly.My guess is that the effect runs through two mechanisms:The chance that someone's professional experiences are highly complementary with your own increases sharply around this point.Maybe because some small clusters of people with shared interests choose to participate at the same time — although I think this was largely not the case.The chance that someone will join you if you spontaneo...]]>
Joel Becker https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:47 None full 4473
yjm5CW9JdwBTFZB2B_EA EA - How we could stumble into AI catastrophe by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How we could stumble into AI catastrophe, published by Holden Karnofsky on January 16, 2023 on The Effective Altruism Forum.This post will lay out a couple of stylized stories about how, if transformative AI is developed relatively soon, this could result in global catastrophe. (By “transformative AI,” I mean AI powerful and capable enough to bring about the sort of world-changing consequences I write about in my most important century series.)This piece is more about visualizing possibilities than about providing arguments. For the latter, I recommend the rest of this series.In the stories I’ll be telling, the world doesn't do much advance preparation or careful consideration of risks I’ve discussed previously, especially re: misaligned AI (AI forming dangerous goals of its own).People do try to “test” AI systems for safety, and they do need to achieve some level of “safety” to commercialize. When early problems arise, they react to these problems.But this isn’t enough, because of some unique challenges of measuring whether an AI system is “safe,” and because of the strong incentives to race forward with scaling up and deploying AI systems as fast as possible.So we end up with a world run by misaligned AI - or, even if we’re lucky enough to avoid that outcome, other catastrophes are possible.After laying these catastrophic possibilities, I’ll briefly note a few key ways we could do better, mostly as a reminder (these topics were covered in previous posts). Future pieces will get more specific about what we can be doing today to prepare.BackdropThis piece takes a lot of previous writing I’ve done as backdrop. Two key important assumptions (click to expand) are below; for more, see the rest of this series.In the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.I focus on a hypothetical kind of AI that I call PASTA, or Process for Automating Scientific and Technological Advancement. PASTA would be AI that can essentially automate all of the human activities needed to speed up scientific and technological advancement.Using a variety of different forecasting approaches, I argue that PASTA seems more likely than not to be developed this century - and there’s a decent chance (more than 10%) that we’ll see it within 15 years or so.I argue that the consequences of this sort of AI could be enormous: an explosion in scientific and technological progress. This could get us more quickly than most imagine to a radically unfamiliar future.I’ve also argued that AI systems along these lines could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.For more, see the most important century landing page. The series is available in many formats, including audio; I also provide a summary, and links to podcasts where I discuss it at a high level.-->It’s hard to talk about risks from transformative AI because of the many uncertainties about when and how such AI will be developed - and how much the (now-nascent) field of “AI safety research” will have grown by then, and how seriously people will take the risk, etc. etc. etc. So maybe it’s not surprising that estimates of the “misaligned AI” risk range from ~1% to ~99%.This piece takes an approach I call nearcasting: trying to answer key strategic questions about transformative AI, under the assumption that such AI arrives in a world that is otherwise relatively similar to today's.You can think of this approach like this: “Instead of asking where our ship will ultimately end up, let’s start by asking what destination it’s poi...]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/yjm5CW9JdwBTFZB2B/how-we-could-stumble-into-ai-catastrophe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How we could stumble into AI catastrophe, published by Holden Karnofsky on January 16, 2023 on The Effective Altruism Forum.This post will lay out a couple of stylized stories about how, if transformative AI is developed relatively soon, this could result in global catastrophe. (By “transformative AI,” I mean AI powerful and capable enough to bring about the sort of world-changing consequences I write about in my most important century series.)This piece is more about visualizing possibilities than about providing arguments. For the latter, I recommend the rest of this series.In the stories I’ll be telling, the world doesn't do much advance preparation or careful consideration of risks I’ve discussed previously, especially re: misaligned AI (AI forming dangerous goals of its own).People do try to “test” AI systems for safety, and they do need to achieve some level of “safety” to commercialize. When early problems arise, they react to these problems.But this isn’t enough, because of some unique challenges of measuring whether an AI system is “safe,” and because of the strong incentives to race forward with scaling up and deploying AI systems as fast as possible.So we end up with a world run by misaligned AI - or, even if we’re lucky enough to avoid that outcome, other catastrophes are possible.After laying these catastrophic possibilities, I’ll briefly note a few key ways we could do better, mostly as a reminder (these topics were covered in previous posts). Future pieces will get more specific about what we can be doing today to prepare.BackdropThis piece takes a lot of previous writing I’ve done as backdrop. Two key important assumptions (click to expand) are below; for more, see the rest of this series.In the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.I focus on a hypothetical kind of AI that I call PASTA, or Process for Automating Scientific and Technological Advancement. PASTA would be AI that can essentially automate all of the human activities needed to speed up scientific and technological advancement.Using a variety of different forecasting approaches, I argue that PASTA seems more likely than not to be developed this century - and there’s a decent chance (more than 10%) that we’ll see it within 15 years or so.I argue that the consequences of this sort of AI could be enormous: an explosion in scientific and technological progress. This could get us more quickly than most imagine to a radically unfamiliar future.I’ve also argued that AI systems along these lines could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.For more, see the most important century landing page. The series is available in many formats, including audio; I also provide a summary, and links to podcasts where I discuss it at a high level.-->It’s hard to talk about risks from transformative AI because of the many uncertainties about when and how such AI will be developed - and how much the (now-nascent) field of “AI safety research” will have grown by then, and how seriously people will take the risk, etc. etc. etc. So maybe it’s not surprising that estimates of the “misaligned AI” risk range from ~1% to ~99%.This piece takes an approach I call nearcasting: trying to answer key strategic questions about transformative AI, under the assumption that such AI arrives in a world that is otherwise relatively similar to today's.You can think of this approach like this: “Instead of asking where our ship will ultimately end up, let’s start by asking what destination it’s poi...]]>
Mon, 16 Jan 2023 19:05:48 +0000 EA - How we could stumble into AI catastrophe by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How we could stumble into AI catastrophe, published by Holden Karnofsky on January 16, 2023 on The Effective Altruism Forum.This post will lay out a couple of stylized stories about how, if transformative AI is developed relatively soon, this could result in global catastrophe. (By “transformative AI,” I mean AI powerful and capable enough to bring about the sort of world-changing consequences I write about in my most important century series.)This piece is more about visualizing possibilities than about providing arguments. For the latter, I recommend the rest of this series.In the stories I’ll be telling, the world doesn't do much advance preparation or careful consideration of risks I’ve discussed previously, especially re: misaligned AI (AI forming dangerous goals of its own).People do try to “test” AI systems for safety, and they do need to achieve some level of “safety” to commercialize. When early problems arise, they react to these problems.But this isn’t enough, because of some unique challenges of measuring whether an AI system is “safe,” and because of the strong incentives to race forward with scaling up and deploying AI systems as fast as possible.So we end up with a world run by misaligned AI - or, even if we’re lucky enough to avoid that outcome, other catastrophes are possible.After laying these catastrophic possibilities, I’ll briefly note a few key ways we could do better, mostly as a reminder (these topics were covered in previous posts). Future pieces will get more specific about what we can be doing today to prepare.BackdropThis piece takes a lot of previous writing I’ve done as backdrop. Two key important assumptions (click to expand) are below; for more, see the rest of this series.In the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.I focus on a hypothetical kind of AI that I call PASTA, or Process for Automating Scientific and Technological Advancement. PASTA would be AI that can essentially automate all of the human activities needed to speed up scientific and technological advancement.Using a variety of different forecasting approaches, I argue that PASTA seems more likely than not to be developed this century - and there’s a decent chance (more than 10%) that we’ll see it within 15 years or so.I argue that the consequences of this sort of AI could be enormous: an explosion in scientific and technological progress. This could get us more quickly than most imagine to a radically unfamiliar future.I’ve also argued that AI systems along these lines could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.For more, see the most important century landing page. The series is available in many formats, including audio; I also provide a summary, and links to podcasts where I discuss it at a high level.-->It’s hard to talk about risks from transformative AI because of the many uncertainties about when and how such AI will be developed - and how much the (now-nascent) field of “AI safety research” will have grown by then, and how seriously people will take the risk, etc. etc. etc. So maybe it’s not surprising that estimates of the “misaligned AI” risk range from ~1% to ~99%.This piece takes an approach I call nearcasting: trying to answer key strategic questions about transformative AI, under the assumption that such AI arrives in a world that is otherwise relatively similar to today's.You can think of this approach like this: “Instead of asking where our ship will ultimately end up, let’s start by asking what destination it’s poi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How we could stumble into AI catastrophe, published by Holden Karnofsky on January 16, 2023 on The Effective Altruism Forum.This post will lay out a couple of stylized stories about how, if transformative AI is developed relatively soon, this could result in global catastrophe. (By “transformative AI,” I mean AI powerful and capable enough to bring about the sort of world-changing consequences I write about in my most important century series.)This piece is more about visualizing possibilities than about providing arguments. For the latter, I recommend the rest of this series.In the stories I’ll be telling, the world doesn't do much advance preparation or careful consideration of risks I’ve discussed previously, especially re: misaligned AI (AI forming dangerous goals of its own).People do try to “test” AI systems for safety, and they do need to achieve some level of “safety” to commercialize. When early problems arise, they react to these problems.But this isn’t enough, because of some unique challenges of measuring whether an AI system is “safe,” and because of the strong incentives to race forward with scaling up and deploying AI systems as fast as possible.So we end up with a world run by misaligned AI - or, even if we’re lucky enough to avoid that outcome, other catastrophes are possible.After laying these catastrophic possibilities, I’ll briefly note a few key ways we could do better, mostly as a reminder (these topics were covered in previous posts). Future pieces will get more specific about what we can be doing today to prepare.BackdropThis piece takes a lot of previous writing I’ve done as backdrop. Two key important assumptions (click to expand) are below; for more, see the rest of this series.In the most important century series, I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.I focus on a hypothetical kind of AI that I call PASTA, or Process for Automating Scientific and Technological Advancement. PASTA would be AI that can essentially automate all of the human activities needed to speed up scientific and technological advancement.Using a variety of different forecasting approaches, I argue that PASTA seems more likely than not to be developed this century - and there’s a decent chance (more than 10%) that we’ll see it within 15 years or so.I argue that the consequences of this sort of AI could be enormous: an explosion in scientific and technological progress. This could get us more quickly than most imagine to a radically unfamiliar future.I’ve also argued that AI systems along these lines could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.For more, see the most important century landing page. The series is available in many formats, including audio; I also provide a summary, and links to podcasts where I discuss it at a high level.-->It’s hard to talk about risks from transformative AI because of the many uncertainties about when and how such AI will be developed - and how much the (now-nascent) field of “AI safety research” will have grown by then, and how seriously people will take the risk, etc. etc. etc. So maybe it’s not surprising that estimates of the “misaligned AI” risk range from ~1% to ~99%.This piece takes an approach I call nearcasting: trying to answer key strategic questions about transformative AI, under the assumption that such AI arrives in a world that is otherwise relatively similar to today's.You can think of this approach like this: “Instead of asking where our ship will ultimately end up, let’s start by asking what destination it’s poi...]]>
Holden Karnofsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 47:17 None full 4476
zfrNgByjdi8efZSon_EA EA - EA Organization Updates: January 2023 by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: January 2023, published by Lizka on January 16, 2023 on The Effective Altruism Forum.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series.The organizations are in alphabetical order, starting with L-Z, 0-A-K.Job listingsConsider also exploring jobs listed on “Job listing (open).”GiveWellSenior Researcher (Remote / Oakland, CA, $181,400 - $199,800)Senior Research Associate (Remote / Oakland, CA, $127,000 - $139,900)Content Editor (Remote / Oakland, CA, $83,500 - $91,900)Global Priorities InstituteOperations Coordinator (Maternity Cover) (Oxford, £29,614 - £35,326, apply by 24 January)IDinsight2023 Associate & Senior Associate Global Drive (Multiple locations)Technical Delivery Manager/Director (New Delhi, India or Nairobi, Kenya)Associate Product Manager (New Delhi, India or Nairobi, Kenya)Open PhilanthropyAssorted jobs in Salesforce administration, operations, and recruiting (Remote; working hours must overlap with US hours for most roles. Salary range $84,303 - $127,021 across all jobs)Rethink PrioritiesBoard Member (Remote, voluntary roles entail 3-10 hours/month while paid roles require 5-10 hours/week at a rate of $40.53/hour, apply by 20 January)Wild Animal InitiativeDevelopment Director (Remote, US preferred, open to UK applicants, $82,020 - $100,247, apply by 23 January)Organizational updatesThese are in alphabetical order, starting with L-Z, 0-A-K.Legal Priorities ProjectLPP’s Eric Martínez and Christoph Winter published a new working paper titled “Ordinary meaning of existential risk” investigating the ordinary meaning of legally relevant concepts in the existential risk literature. The paper aims to provide crucial insights for those tasked with drafting and interpreting existential risk laws, and for the coherence of ordinary meaning analysis more generally.José Villalobos and Christoph Winter participated in EAGxLatinAmerica. They hosted a Q&A on international law and existential risk.Matthijs Maas published a blog post titled “Existential risk mitigation: What I worry about when there are only bad options” as part of Draft Amnesty Day.LPP received a grant of $115,000 from the Survival and Flourishing Fund to support their general operations.One for the WorldOne for the World mirrors the recommendations made by GiveWell for their own Nonprofit Partners portfolio. This year, GiveWell has updated its portfolio to contain a smaller list of nonprofits than before.In practice, this means that their Nonprofit Partners list has temporarily become much smaller, containing just four individual nonprofits. These nonprofits continue to offer gold-standard evidence that their method works and is incredibly cost-effective: Against Malaria Foundation, Malaria Consortium, New Incentives, and Helen Keller International.They are also adding a new option upon taking the 1% Pledge, which is GiveWell’s new All Grants Fund. This Fund will continue to make higher-risk grants, potentially including grants to nonprofits removed from GiveWell’s recommended nonprofits list. One for the World therefore thinks this gives donors the best chance to continue supporting a wider variety of granting opportunities.Open PhilanthropyOpen Philanthropy pre-announced its AI Worldviews Contest, which will launch in early 20...]]>
Lizka https://forum.effectivealtruism.org/posts/zfrNgByjdi8efZSon/ea-organization-updates-january-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: January 2023, published by Lizka on January 16, 2023 on The Effective Altruism Forum.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series.The organizations are in alphabetical order, starting with L-Z, 0-A-K.Job listingsConsider also exploring jobs listed on “Job listing (open).”GiveWellSenior Researcher (Remote / Oakland, CA, $181,400 - $199,800)Senior Research Associate (Remote / Oakland, CA, $127,000 - $139,900)Content Editor (Remote / Oakland, CA, $83,500 - $91,900)Global Priorities InstituteOperations Coordinator (Maternity Cover) (Oxford, £29,614 - £35,326, apply by 24 January)IDinsight2023 Associate & Senior Associate Global Drive (Multiple locations)Technical Delivery Manager/Director (New Delhi, India or Nairobi, Kenya)Associate Product Manager (New Delhi, India or Nairobi, Kenya)Open PhilanthropyAssorted jobs in Salesforce administration, operations, and recruiting (Remote; working hours must overlap with US hours for most roles. Salary range $84,303 - $127,021 across all jobs)Rethink PrioritiesBoard Member (Remote, voluntary roles entail 3-10 hours/month while paid roles require 5-10 hours/week at a rate of $40.53/hour, apply by 20 January)Wild Animal InitiativeDevelopment Director (Remote, US preferred, open to UK applicants, $82,020 - $100,247, apply by 23 January)Organizational updatesThese are in alphabetical order, starting with L-Z, 0-A-K.Legal Priorities ProjectLPP’s Eric Martínez and Christoph Winter published a new working paper titled “Ordinary meaning of existential risk” investigating the ordinary meaning of legally relevant concepts in the existential risk literature. The paper aims to provide crucial insights for those tasked with drafting and interpreting existential risk laws, and for the coherence of ordinary meaning analysis more generally.José Villalobos and Christoph Winter participated in EAGxLatinAmerica. They hosted a Q&A on international law and existential risk.Matthijs Maas published a blog post titled “Existential risk mitigation: What I worry about when there are only bad options” as part of Draft Amnesty Day.LPP received a grant of $115,000 from the Survival and Flourishing Fund to support their general operations.One for the WorldOne for the World mirrors the recommendations made by GiveWell for their own Nonprofit Partners portfolio. This year, GiveWell has updated its portfolio to contain a smaller list of nonprofits than before.In practice, this means that their Nonprofit Partners list has temporarily become much smaller, containing just four individual nonprofits. These nonprofits continue to offer gold-standard evidence that their method works and is incredibly cost-effective: Against Malaria Foundation, Malaria Consortium, New Incentives, and Helen Keller International.They are also adding a new option upon taking the 1% Pledge, which is GiveWell’s new All Grants Fund. This Fund will continue to make higher-risk grants, potentially including grants to nonprofits removed from GiveWell’s recommended nonprofits list. One for the World therefore thinks this gives donors the best chance to continue supporting a wider variety of granting opportunities.Open PhilanthropyOpen Philanthropy pre-announced its AI Worldviews Contest, which will launch in early 20...]]>
Mon, 16 Jan 2023 18:39:08 +0000 EA - EA Organization Updates: January 2023 by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: January 2023, published by Lizka on January 16, 2023 on The Effective Altruism Forum.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series.The organizations are in alphabetical order, starting with L-Z, 0-A-K.Job listingsConsider also exploring jobs listed on “Job listing (open).”GiveWellSenior Researcher (Remote / Oakland, CA, $181,400 - $199,800)Senior Research Associate (Remote / Oakland, CA, $127,000 - $139,900)Content Editor (Remote / Oakland, CA, $83,500 - $91,900)Global Priorities InstituteOperations Coordinator (Maternity Cover) (Oxford, £29,614 - £35,326, apply by 24 January)IDinsight2023 Associate & Senior Associate Global Drive (Multiple locations)Technical Delivery Manager/Director (New Delhi, India or Nairobi, Kenya)Associate Product Manager (New Delhi, India or Nairobi, Kenya)Open PhilanthropyAssorted jobs in Salesforce administration, operations, and recruiting (Remote; working hours must overlap with US hours for most roles. Salary range $84,303 - $127,021 across all jobs)Rethink PrioritiesBoard Member (Remote, voluntary roles entail 3-10 hours/month while paid roles require 5-10 hours/week at a rate of $40.53/hour, apply by 20 January)Wild Animal InitiativeDevelopment Director (Remote, US preferred, open to UK applicants, $82,020 - $100,247, apply by 23 January)Organizational updatesThese are in alphabetical order, starting with L-Z, 0-A-K.Legal Priorities ProjectLPP’s Eric Martínez and Christoph Winter published a new working paper titled “Ordinary meaning of existential risk” investigating the ordinary meaning of legally relevant concepts in the existential risk literature. The paper aims to provide crucial insights for those tasked with drafting and interpreting existential risk laws, and for the coherence of ordinary meaning analysis more generally.José Villalobos and Christoph Winter participated in EAGxLatinAmerica. They hosted a Q&A on international law and existential risk.Matthijs Maas published a blog post titled “Existential risk mitigation: What I worry about when there are only bad options” as part of Draft Amnesty Day.LPP received a grant of $115,000 from the Survival and Flourishing Fund to support their general operations.One for the WorldOne for the World mirrors the recommendations made by GiveWell for their own Nonprofit Partners portfolio. This year, GiveWell has updated its portfolio to contain a smaller list of nonprofits than before.In practice, this means that their Nonprofit Partners list has temporarily become much smaller, containing just four individual nonprofits. These nonprofits continue to offer gold-standard evidence that their method works and is incredibly cost-effective: Against Malaria Foundation, Malaria Consortium, New Incentives, and Helen Keller International.They are also adding a new option upon taking the 1% Pledge, which is GiveWell’s new All Grants Fund. This Fund will continue to make higher-risk grants, potentially including grants to nonprofits removed from GiveWell’s recommended nonprofits list. One for the World therefore thinks this gives donors the best chance to continue supporting a wider variety of granting opportunities.Open PhilanthropyOpen Philanthropy pre-announced its AI Worldviews Contest, which will launch in early 20...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: January 2023, published by Lizka on January 16, 2023 on The Effective Altruism Forum.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series.The organizations are in alphabetical order, starting with L-Z, 0-A-K.Job listingsConsider also exploring jobs listed on “Job listing (open).”GiveWellSenior Researcher (Remote / Oakland, CA, $181,400 - $199,800)Senior Research Associate (Remote / Oakland, CA, $127,000 - $139,900)Content Editor (Remote / Oakland, CA, $83,500 - $91,900)Global Priorities InstituteOperations Coordinator (Maternity Cover) (Oxford, £29,614 - £35,326, apply by 24 January)IDinsight2023 Associate & Senior Associate Global Drive (Multiple locations)Technical Delivery Manager/Director (New Delhi, India or Nairobi, Kenya)Associate Product Manager (New Delhi, India or Nairobi, Kenya)Open PhilanthropyAssorted jobs in Salesforce administration, operations, and recruiting (Remote; working hours must overlap with US hours for most roles. Salary range $84,303 - $127,021 across all jobs)Rethink PrioritiesBoard Member (Remote, voluntary roles entail 3-10 hours/month while paid roles require 5-10 hours/week at a rate of $40.53/hour, apply by 20 January)Wild Animal InitiativeDevelopment Director (Remote, US preferred, open to UK applicants, $82,020 - $100,247, apply by 23 January)Organizational updatesThese are in alphabetical order, starting with L-Z, 0-A-K.Legal Priorities ProjectLPP’s Eric Martínez and Christoph Winter published a new working paper titled “Ordinary meaning of existential risk” investigating the ordinary meaning of legally relevant concepts in the existential risk literature. The paper aims to provide crucial insights for those tasked with drafting and interpreting existential risk laws, and for the coherence of ordinary meaning analysis more generally.José Villalobos and Christoph Winter participated in EAGxLatinAmerica. They hosted a Q&A on international law and existential risk.Matthijs Maas published a blog post titled “Existential risk mitigation: What I worry about when there are only bad options” as part of Draft Amnesty Day.LPP received a grant of $115,000 from the Survival and Flourishing Fund to support their general operations.One for the WorldOne for the World mirrors the recommendations made by GiveWell for their own Nonprofit Partners portfolio. This year, GiveWell has updated its portfolio to contain a smaller list of nonprofits than before.In practice, this means that their Nonprofit Partners list has temporarily become much smaller, containing just four individual nonprofits. These nonprofits continue to offer gold-standard evidence that their method works and is incredibly cost-effective: Against Malaria Foundation, Malaria Consortium, New Incentives, and Helen Keller International.They are also adding a new option upon taking the 1% Pledge, which is GiveWell’s new All Grants Fund. This Fund will continue to make higher-risk grants, potentially including grants to nonprofits removed from GiveWell’s recommended nonprofits list. One for the World therefore thinks this gives donors the best chance to continue supporting a wider variety of granting opportunities.Open PhilanthropyOpen Philanthropy pre-announced its AI Worldviews Contest, which will launch in early 20...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 18:10 None full 4475
63pYakESGrQpfNw25_EA EA - Can GPT-3 produce new ideas? Partially automating Robin Hanson and others by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can GPT-3 produce new ideas? Partially automating Robin Hanson and others, published by NunoSempere on January 16, 2023 on The Effective Altruism Forum.Brief description of the experimentI asked a language model to replicate a few patterns of generating insight that humanity hasn't really exploited much yet, such as:Variations on "if you never miss a plane, you've been spending too much time at the airport".Variations on the Robin Hanson argument of "for common human behaviour X, its usual purported justification is Y, but it usually results in more Z than Y. If we cared about Y, we might do A instead".Variations on the genealogical argument: that the results of historical accidents are most likely not moral necessities or optimal systems.Motivation behind this experimentOne of reasons to be afraid of artificial intelligence might be because, if you think in the abstract about how a system might behave as it becomes extremely intelligent, you might conclude that it might be able to completely outmanoeuvre us because of its superior ability to grasp the true structure of the world.This possibility is scary in the same sense that a modern chemist is scary to a historical alchemist. Our current chemist can completely outmanoeuvre previous alchemists by using their superior understanding of natural laws to produce better explosions, more subtle poisons, or more addictive and mind-blowing drugs.I do buy this fear in the limit for a being of God-like intelligence. But it's not clear to me whether it also applies to current systems or whether it will apply to their close descendants. In particular language models seem like they are powerful remixers and predictors but perhaps limited to drawing from the conceptual toolkit which humans already have. On the other hand, because they have access to so much information, they might be able to be prompted so as to reveal new relationships, connections, and insights.Some conceptual insights which have been historically important are:Explaining natural phenomena not in terms of Greek or Roman anthropomorphic gods, but with reference to naturalistic, physical explanationsUnderstanding acceleration as distinct from motionScience as an experimental methodologyThe is/ought distinctionBayesian reasoningCeasing to accept the divine right of kings as a justification for monarchical governanceRandomized trials as a more robust way of generating generalizable knowledgeThe genealogical argument: understanding that systems (such as the details of the current prison system, our monetary system, the lack of color in men's clothes, or our attitudes towards gender and sex) are the result of historical accidents which could have gone differently. But often these systems are rationalized as being particularly adequate, or even morally necessary.But I don't think that language models are currently able to come up with original insights like the above from scratch (this would be very scary).Instead, I probe GPT-3's ability to come up with original variations of these three argumentative patterns:Variations on "if you never miss a plane, you've been spending too much time at the airport".Variations on the Robin Hanson argument of "for common human behaviour X, its usual purported justification is Y, but it usually results in more Z than Y. If we cared about Y, we might do A instead".Variations on the genealogical argument: that the results of historical accidents are most likely not moral necessities or optimal systems.The first pattern is known as an Umeshism. I associate the second pattern with Robin Hanson, who has had part of a fruitful career exploring some of its variations—though he is also known for other ideas, e.g., prediction markets and grabby aliens. I associate the third pattern with Nietzsche (who used it to criti...]]>
NunoSempere https://forum.effectivealtruism.org/posts/63pYakESGrQpfNw25/can-gpt-3-produce-new-ideas-partially-automating-robin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can GPT-3 produce new ideas? Partially automating Robin Hanson and others, published by NunoSempere on January 16, 2023 on The Effective Altruism Forum.Brief description of the experimentI asked a language model to replicate a few patterns of generating insight that humanity hasn't really exploited much yet, such as:Variations on "if you never miss a plane, you've been spending too much time at the airport".Variations on the Robin Hanson argument of "for common human behaviour X, its usual purported justification is Y, but it usually results in more Z than Y. If we cared about Y, we might do A instead".Variations on the genealogical argument: that the results of historical accidents are most likely not moral necessities or optimal systems.Motivation behind this experimentOne of reasons to be afraid of artificial intelligence might be because, if you think in the abstract about how a system might behave as it becomes extremely intelligent, you might conclude that it might be able to completely outmanoeuvre us because of its superior ability to grasp the true structure of the world.This possibility is scary in the same sense that a modern chemist is scary to a historical alchemist. Our current chemist can completely outmanoeuvre previous alchemists by using their superior understanding of natural laws to produce better explosions, more subtle poisons, or more addictive and mind-blowing drugs.I do buy this fear in the limit for a being of God-like intelligence. But it's not clear to me whether it also applies to current systems or whether it will apply to their close descendants. In particular language models seem like they are powerful remixers and predictors but perhaps limited to drawing from the conceptual toolkit which humans already have. On the other hand, because they have access to so much information, they might be able to be prompted so as to reveal new relationships, connections, and insights.Some conceptual insights which have been historically important are:Explaining natural phenomena not in terms of Greek or Roman anthropomorphic gods, but with reference to naturalistic, physical explanationsUnderstanding acceleration as distinct from motionScience as an experimental methodologyThe is/ought distinctionBayesian reasoningCeasing to accept the divine right of kings as a justification for monarchical governanceRandomized trials as a more robust way of generating generalizable knowledgeThe genealogical argument: understanding that systems (such as the details of the current prison system, our monetary system, the lack of color in men's clothes, or our attitudes towards gender and sex) are the result of historical accidents which could have gone differently. But often these systems are rationalized as being particularly adequate, or even morally necessary.But I don't think that language models are currently able to come up with original insights like the above from scratch (this would be very scary).Instead, I probe GPT-3's ability to come up with original variations of these three argumentative patterns:Variations on "if you never miss a plane, you've been spending too much time at the airport".Variations on the Robin Hanson argument of "for common human behaviour X, its usual purported justification is Y, but it usually results in more Z than Y. If we cared about Y, we might do A instead".Variations on the genealogical argument: that the results of historical accidents are most likely not moral necessities or optimal systems.The first pattern is known as an Umeshism. I associate the second pattern with Robin Hanson, who has had part of a fruitful career exploring some of its variations—though he is also known for other ideas, e.g., prediction markets and grabby aliens. I associate the third pattern with Nietzsche (who used it to criti...]]>
Mon, 16 Jan 2023 17:08:07 +0000 EA - Can GPT-3 produce new ideas? Partially automating Robin Hanson and others by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can GPT-3 produce new ideas? Partially automating Robin Hanson and others, published by NunoSempere on January 16, 2023 on The Effective Altruism Forum.Brief description of the experimentI asked a language model to replicate a few patterns of generating insight that humanity hasn't really exploited much yet, such as:Variations on "if you never miss a plane, you've been spending too much time at the airport".Variations on the Robin Hanson argument of "for common human behaviour X, its usual purported justification is Y, but it usually results in more Z than Y. If we cared about Y, we might do A instead".Variations on the genealogical argument: that the results of historical accidents are most likely not moral necessities or optimal systems.Motivation behind this experimentOne of reasons to be afraid of artificial intelligence might be because, if you think in the abstract about how a system might behave as it becomes extremely intelligent, you might conclude that it might be able to completely outmanoeuvre us because of its superior ability to grasp the true structure of the world.This possibility is scary in the same sense that a modern chemist is scary to a historical alchemist. Our current chemist can completely outmanoeuvre previous alchemists by using their superior understanding of natural laws to produce better explosions, more subtle poisons, or more addictive and mind-blowing drugs.I do buy this fear in the limit for a being of God-like intelligence. But it's not clear to me whether it also applies to current systems or whether it will apply to their close descendants. In particular language models seem like they are powerful remixers and predictors but perhaps limited to drawing from the conceptual toolkit which humans already have. On the other hand, because they have access to so much information, they might be able to be prompted so as to reveal new relationships, connections, and insights.Some conceptual insights which have been historically important are:Explaining natural phenomena not in terms of Greek or Roman anthropomorphic gods, but with reference to naturalistic, physical explanationsUnderstanding acceleration as distinct from motionScience as an experimental methodologyThe is/ought distinctionBayesian reasoningCeasing to accept the divine right of kings as a justification for monarchical governanceRandomized trials as a more robust way of generating generalizable knowledgeThe genealogical argument: understanding that systems (such as the details of the current prison system, our monetary system, the lack of color in men's clothes, or our attitudes towards gender and sex) are the result of historical accidents which could have gone differently. But often these systems are rationalized as being particularly adequate, or even morally necessary.But I don't think that language models are currently able to come up with original insights like the above from scratch (this would be very scary).Instead, I probe GPT-3's ability to come up with original variations of these three argumentative patterns:Variations on "if you never miss a plane, you've been spending too much time at the airport".Variations on the Robin Hanson argument of "for common human behaviour X, its usual purported justification is Y, but it usually results in more Z than Y. If we cared about Y, we might do A instead".Variations on the genealogical argument: that the results of historical accidents are most likely not moral necessities or optimal systems.The first pattern is known as an Umeshism. I associate the second pattern with Robin Hanson, who has had part of a fruitful career exploring some of its variations—though he is also known for other ideas, e.g., prediction markets and grabby aliens. I associate the third pattern with Nietzsche (who used it to criti...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can GPT-3 produce new ideas? Partially automating Robin Hanson and others, published by NunoSempere on January 16, 2023 on The Effective Altruism Forum.Brief description of the experimentI asked a language model to replicate a few patterns of generating insight that humanity hasn't really exploited much yet, such as:Variations on "if you never miss a plane, you've been spending too much time at the airport".Variations on the Robin Hanson argument of "for common human behaviour X, its usual purported justification is Y, but it usually results in more Z than Y. If we cared about Y, we might do A instead".Variations on the genealogical argument: that the results of historical accidents are most likely not moral necessities or optimal systems.Motivation behind this experimentOne of reasons to be afraid of artificial intelligence might be because, if you think in the abstract about how a system might behave as it becomes extremely intelligent, you might conclude that it might be able to completely outmanoeuvre us because of its superior ability to grasp the true structure of the world.This possibility is scary in the same sense that a modern chemist is scary to a historical alchemist. Our current chemist can completely outmanoeuvre previous alchemists by using their superior understanding of natural laws to produce better explosions, more subtle poisons, or more addictive and mind-blowing drugs.I do buy this fear in the limit for a being of God-like intelligence. But it's not clear to me whether it also applies to current systems or whether it will apply to their close descendants. In particular language models seem like they are powerful remixers and predictors but perhaps limited to drawing from the conceptual toolkit which humans already have. On the other hand, because they have access to so much information, they might be able to be prompted so as to reveal new relationships, connections, and insights.Some conceptual insights which have been historically important are:Explaining natural phenomena not in terms of Greek or Roman anthropomorphic gods, but with reference to naturalistic, physical explanationsUnderstanding acceleration as distinct from motionScience as an experimental methodologyThe is/ought distinctionBayesian reasoningCeasing to accept the divine right of kings as a justification for monarchical governanceRandomized trials as a more robust way of generating generalizable knowledgeThe genealogical argument: understanding that systems (such as the details of the current prison system, our monetary system, the lack of color in men's clothes, or our attitudes towards gender and sex) are the result of historical accidents which could have gone differently. But often these systems are rationalized as being particularly adequate, or even morally necessary.But I don't think that language models are currently able to come up with original insights like the above from scratch (this would be very scary).Instead, I probe GPT-3's ability to come up with original variations of these three argumentative patterns:Variations on "if you never miss a plane, you've been spending too much time at the airport".Variations on the Robin Hanson argument of "for common human behaviour X, its usual purported justification is Y, but it usually results in more Z than Y. If we cared about Y, we might do A instead".Variations on the genealogical argument: that the results of historical accidents are most likely not moral necessities or optimal systems.The first pattern is known as an Umeshism. I associate the second pattern with Robin Hanson, who has had part of a fruitful career exploring some of its variations—though he is also known for other ideas, e.g., prediction markets and grabby aliens. I associate the third pattern with Nietzsche (who used it to criti...]]>
NunoSempere https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:43 None full 4474
KXSBb2zgkLE6gnn3K_EA EA - Don’t Balk at Animal-friendly Results by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don’t Balk at Animal-friendly Results, published by Bob Fischer on January 16, 2023 on The Effective Altruism Forum.Key TakeawaysThe Moral Weight Project assumes hedonism. It also assumes that in the absence of good direct measures of the intensities of valenced experiences, the best way to assess differences in the potential intensities of animals’ valenced experiences is to look at differences in other capacities that might serve as proxies for differences in hedonic potential.Suppose that these assumptions lead to the conclusion that chickens and humans can realize roughly the same amount of welfare at any given time. Call this “the Equality Result.” The key question: Would the Equality Result alone be a good reason to think that one or both of these assumptions is mistaken?We don’t think so. To explain why not, we consider three bases for skepticism about the Equality Result. Then, we consider whether the Equality Result should be surprising given hedonism.Three Bases for SkepticismFirst, someone might balk at the implications of the Equality Result given certain independent theses. For instance, given utilitarianism, the Equality Result probably implies that there should be a massive shift in neartermist resources toward animals, and someone might find this unbelievable. But the Equality Result isn’t to blame: utilitarianism is.Second, someone might be inclined to accept some theory of welfare that does not support the Equality Result. Fair enough, but as we’ve argued elsewhere, that only gets you so far.Third, someone might balk at the Equality Result even given hedonism. The basic problem with this is that the anti-Equality-Result intuition is uncalibrated, uncalibrated intuitions are vulnerable to various biases, and there are some highly relevant biases in the present context.If Hedonism, Then the Equality Result Shouldn’t Be SurprisingWe quickly consider three popular theories valenced states and argue that there are plausible assumptions on which each one leads to the Equality Result.This isn’t an argument for the Equality Result. It is, however, a check against knee-jerk skepticism.IntroductionThis is the seventh post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species.As EAs, we want to compare the cost-effectiveness of all interventions, including ones that benefit (sentient) animals with the cost-effectiveness of interventions that benefit humans. To do that, we need to estimate the value of each kind of animal relative to humans. If we understand each individual’s value in terms of the welfare they generate, whether positive or negative, then this means we need to estimate the amount of welfare that each kind of animal realizes relative to the amount of welfare that humans realize.How should we react if our method for generating these estimates produces a surprising result? Suppose, for instance, that we make various assumptions and generate a method for estimating how much welfare animals can realize relative to how much welfare humans can realize (which is the first step toward estimating how much welfare animals actually realize). And suppose that, when applied, our method suggests that chickens and humans can realize roughly the same amount of welfare at any given time. Call this “the Equality Result.” Would getting the Equality Result itself be a reason to think we made a mistake?Let’s make this concrete. The Moral Weight Project assumes hedonism—i.e., that all and only positively valenced experiences are good for you and all and only negatively valenced experiences are bad for you. Moreover, it assumes that, in the ab...]]>
Bob Fischer https://forum.effectivealtruism.org/posts/KXSBb2zgkLE6gnn3K/don-t-balk-at-animal-friendly-results-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don’t Balk at Animal-friendly Results, published by Bob Fischer on January 16, 2023 on The Effective Altruism Forum.Key TakeawaysThe Moral Weight Project assumes hedonism. It also assumes that in the absence of good direct measures of the intensities of valenced experiences, the best way to assess differences in the potential intensities of animals’ valenced experiences is to look at differences in other capacities that might serve as proxies for differences in hedonic potential.Suppose that these assumptions lead to the conclusion that chickens and humans can realize roughly the same amount of welfare at any given time. Call this “the Equality Result.” The key question: Would the Equality Result alone be a good reason to think that one or both of these assumptions is mistaken?We don’t think so. To explain why not, we consider three bases for skepticism about the Equality Result. Then, we consider whether the Equality Result should be surprising given hedonism.Three Bases for SkepticismFirst, someone might balk at the implications of the Equality Result given certain independent theses. For instance, given utilitarianism, the Equality Result probably implies that there should be a massive shift in neartermist resources toward animals, and someone might find this unbelievable. But the Equality Result isn’t to blame: utilitarianism is.Second, someone might be inclined to accept some theory of welfare that does not support the Equality Result. Fair enough, but as we’ve argued elsewhere, that only gets you so far.Third, someone might balk at the Equality Result even given hedonism. The basic problem with this is that the anti-Equality-Result intuition is uncalibrated, uncalibrated intuitions are vulnerable to various biases, and there are some highly relevant biases in the present context.If Hedonism, Then the Equality Result Shouldn’t Be SurprisingWe quickly consider three popular theories valenced states and argue that there are plausible assumptions on which each one leads to the Equality Result.This isn’t an argument for the Equality Result. It is, however, a check against knee-jerk skepticism.IntroductionThis is the seventh post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species.As EAs, we want to compare the cost-effectiveness of all interventions, including ones that benefit (sentient) animals with the cost-effectiveness of interventions that benefit humans. To do that, we need to estimate the value of each kind of animal relative to humans. If we understand each individual’s value in terms of the welfare they generate, whether positive or negative, then this means we need to estimate the amount of welfare that each kind of animal realizes relative to the amount of welfare that humans realize.How should we react if our method for generating these estimates produces a surprising result? Suppose, for instance, that we make various assumptions and generate a method for estimating how much welfare animals can realize relative to how much welfare humans can realize (which is the first step toward estimating how much welfare animals actually realize). And suppose that, when applied, our method suggests that chickens and humans can realize roughly the same amount of welfare at any given time. Call this “the Equality Result.” Would getting the Equality Result itself be a reason to think we made a mistake?Let’s make this concrete. The Moral Weight Project assumes hedonism—i.e., that all and only positively valenced experiences are good for you and all and only negatively valenced experiences are bad for you. Moreover, it assumes that, in the ab...]]>
Mon, 16 Jan 2023 15:27:23 +0000 EA - Don’t Balk at Animal-friendly Results by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don’t Balk at Animal-friendly Results, published by Bob Fischer on January 16, 2023 on The Effective Altruism Forum.Key TakeawaysThe Moral Weight Project assumes hedonism. It also assumes that in the absence of good direct measures of the intensities of valenced experiences, the best way to assess differences in the potential intensities of animals’ valenced experiences is to look at differences in other capacities that might serve as proxies for differences in hedonic potential.Suppose that these assumptions lead to the conclusion that chickens and humans can realize roughly the same amount of welfare at any given time. Call this “the Equality Result.” The key question: Would the Equality Result alone be a good reason to think that one or both of these assumptions is mistaken?We don’t think so. To explain why not, we consider three bases for skepticism about the Equality Result. Then, we consider whether the Equality Result should be surprising given hedonism.Three Bases for SkepticismFirst, someone might balk at the implications of the Equality Result given certain independent theses. For instance, given utilitarianism, the Equality Result probably implies that there should be a massive shift in neartermist resources toward animals, and someone might find this unbelievable. But the Equality Result isn’t to blame: utilitarianism is.Second, someone might be inclined to accept some theory of welfare that does not support the Equality Result. Fair enough, but as we’ve argued elsewhere, that only gets you so far.Third, someone might balk at the Equality Result even given hedonism. The basic problem with this is that the anti-Equality-Result intuition is uncalibrated, uncalibrated intuitions are vulnerable to various biases, and there are some highly relevant biases in the present context.If Hedonism, Then the Equality Result Shouldn’t Be SurprisingWe quickly consider three popular theories valenced states and argue that there are plausible assumptions on which each one leads to the Equality Result.This isn’t an argument for the Equality Result. It is, however, a check against knee-jerk skepticism.IntroductionThis is the seventh post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species.As EAs, we want to compare the cost-effectiveness of all interventions, including ones that benefit (sentient) animals with the cost-effectiveness of interventions that benefit humans. To do that, we need to estimate the value of each kind of animal relative to humans. If we understand each individual’s value in terms of the welfare they generate, whether positive or negative, then this means we need to estimate the amount of welfare that each kind of animal realizes relative to the amount of welfare that humans realize.How should we react if our method for generating these estimates produces a surprising result? Suppose, for instance, that we make various assumptions and generate a method for estimating how much welfare animals can realize relative to how much welfare humans can realize (which is the first step toward estimating how much welfare animals actually realize). And suppose that, when applied, our method suggests that chickens and humans can realize roughly the same amount of welfare at any given time. Call this “the Equality Result.” Would getting the Equality Result itself be a reason to think we made a mistake?Let’s make this concrete. The Moral Weight Project assumes hedonism—i.e., that all and only positively valenced experiences are good for you and all and only negatively valenced experiences are bad for you. Moreover, it assumes that, in the ab...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don’t Balk at Animal-friendly Results, published by Bob Fischer on January 16, 2023 on The Effective Altruism Forum.Key TakeawaysThe Moral Weight Project assumes hedonism. It also assumes that in the absence of good direct measures of the intensities of valenced experiences, the best way to assess differences in the potential intensities of animals’ valenced experiences is to look at differences in other capacities that might serve as proxies for differences in hedonic potential.Suppose that these assumptions lead to the conclusion that chickens and humans can realize roughly the same amount of welfare at any given time. Call this “the Equality Result.” The key question: Would the Equality Result alone be a good reason to think that one or both of these assumptions is mistaken?We don’t think so. To explain why not, we consider three bases for skepticism about the Equality Result. Then, we consider whether the Equality Result should be surprising given hedonism.Three Bases for SkepticismFirst, someone might balk at the implications of the Equality Result given certain independent theses. For instance, given utilitarianism, the Equality Result probably implies that there should be a massive shift in neartermist resources toward animals, and someone might find this unbelievable. But the Equality Result isn’t to blame: utilitarianism is.Second, someone might be inclined to accept some theory of welfare that does not support the Equality Result. Fair enough, but as we’ve argued elsewhere, that only gets you so far.Third, someone might balk at the Equality Result even given hedonism. The basic problem with this is that the anti-Equality-Result intuition is uncalibrated, uncalibrated intuitions are vulnerable to various biases, and there are some highly relevant biases in the present context.If Hedonism, Then the Equality Result Shouldn’t Be SurprisingWe quickly consider three popular theories valenced states and argue that there are plausible assumptions on which each one leads to the Equality Result.This isn’t an argument for the Equality Result. It is, however, a check against knee-jerk skepticism.IntroductionThis is the seventh post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species.As EAs, we want to compare the cost-effectiveness of all interventions, including ones that benefit (sentient) animals with the cost-effectiveness of interventions that benefit humans. To do that, we need to estimate the value of each kind of animal relative to humans. If we understand each individual’s value in terms of the welfare they generate, whether positive or negative, then this means we need to estimate the amount of welfare that each kind of animal realizes relative to the amount of welfare that humans realize.How should we react if our method for generating these estimates produces a surprising result? Suppose, for instance, that we make various assumptions and generate a method for estimating how much welfare animals can realize relative to how much welfare humans can realize (which is the first step toward estimating how much welfare animals actually realize). And suppose that, when applied, our method suggests that chickens and humans can realize roughly the same amount of welfare at any given time. Call this “the Equality Result.” Would getting the Equality Result itself be a reason to think we made a mistake?Let’s make this concrete. The Moral Weight Project assumes hedonism—i.e., that all and only positively valenced experiences are good for you and all and only negatively valenced experiences are bad for you. Moreover, it assumes that, in the ab...]]>
Bob Fischer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 25:39 None full 4478
pndb6TQ9nAiAkXg8x_EA EA - 80,000 Hours career review: Information security in high-impact areas by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours career review: Information security in high-impact areas, published by 80000 Hours on January 16, 2023 on The Effective Altruism Forum.This is a cross-post of a career review from the 80,000 Hours website written by Jarrah Bloomfield. See the original here.IntroductionAs the 2016 US presidential campaign was entering a fractious round of primaries, Hillary Clinton’s campaign chair, John Podesta, opened a disturbing email.[1] The March 19 message warned that his Gmail password had been compromised and that he urgently needed to change it.The email was a lie. It wasn’t trying to help him protect his account — it was a phishing attack trying to gain illicit access.Podesta was suspicious, but the campaign’s IT team erroneously wrote the email was “legitimate” and told him to change his password. The IT team provided a safe link for Podesta to use, but it seems he or one of his staffers instead clicked the link in the forged email. That link was used by Russian intelligence hackers known as “Fancy Bear,” and they used their access to leak private campaign emails for public consumption in the final weeks of the 2016 race, embarrassing the Clinton team.While there are plausibly many critical factors in any close election, it’s possible that the controversy around the leaked emails played a non-trivial role in Clinton’s subsequent loss to Donald Trump. This would mean the failure of the campaign’s security team to prevent the hack — which might have come down to a mere typo[2] — was extraordinarily consequential.These events vividly illustrate how careers in infosecurity at key organisations have the potential for outsized impact. Ideally, security professionals can develop robust practices that reduce the likelihood that a single slip-up will result in a significant breach. But this key component for the continued and unimpaired functioning of important organisations is often neglected.And the need for such protection stretches far beyond hackers trying to cause chaos in an election season. Information security is vital to safeguard all kinds of critical organisations such as those storing extremely sensitive data about biological threats, nuclear weapons, or advanced artificial intelligence, that might be targeted by criminal hackers or aggressive nation states. Such attacks, if successful, could contribute to dangerous competitive dynamics (such as arms races) or directly lead to catastrophe.Some infosecurity roles involve managing and coordinating organisational policy, working on technical aspects of security, or a combination of both. We believe many such roles have thus far been underrated among those interested in effective altruism and reducing global catastrophic risks, and we’d be excited to see more altruistically motivated candidates move into this field.In a nutshellOrganisations with influence, financial power, and advanced technology are targeted by actors seeking to steal or abuse these assets. A career in information security is a promising avenue to support high-impact organisations by protecting against these attacks, which have the potential to disrupt an organisation’s mission or even increase existential risk.Jeffrey Ladish contributed to this career review. We also thank Wim van der Schoot for his helpful comments.Why might information security be a high-impact career?Information security protects against events that hamper an organisation’s ability to fulfil its mission, such as attackers gaining access to confidential information. Information security specialists play a vital role in supporting the mission of organisations, similar to roles in operations.So if you want an impactful career, expertise in information security could enable you to make a significant positive difference in the world by helping important organis...]]>
80000 Hours https://forum.effectivealtruism.org/posts/pndb6TQ9nAiAkXg8x/80-000-hours-career-review-information-security-in-high Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours career review: Information security in high-impact areas, published by 80000 Hours on January 16, 2023 on The Effective Altruism Forum.This is a cross-post of a career review from the 80,000 Hours website written by Jarrah Bloomfield. See the original here.IntroductionAs the 2016 US presidential campaign was entering a fractious round of primaries, Hillary Clinton’s campaign chair, John Podesta, opened a disturbing email.[1] The March 19 message warned that his Gmail password had been compromised and that he urgently needed to change it.The email was a lie. It wasn’t trying to help him protect his account — it was a phishing attack trying to gain illicit access.Podesta was suspicious, but the campaign’s IT team erroneously wrote the email was “legitimate” and told him to change his password. The IT team provided a safe link for Podesta to use, but it seems he or one of his staffers instead clicked the link in the forged email. That link was used by Russian intelligence hackers known as “Fancy Bear,” and they used their access to leak private campaign emails for public consumption in the final weeks of the 2016 race, embarrassing the Clinton team.While there are plausibly many critical factors in any close election, it’s possible that the controversy around the leaked emails played a non-trivial role in Clinton’s subsequent loss to Donald Trump. This would mean the failure of the campaign’s security team to prevent the hack — which might have come down to a mere typo[2] — was extraordinarily consequential.These events vividly illustrate how careers in infosecurity at key organisations have the potential for outsized impact. Ideally, security professionals can develop robust practices that reduce the likelihood that a single slip-up will result in a significant breach. But this key component for the continued and unimpaired functioning of important organisations is often neglected.And the need for such protection stretches far beyond hackers trying to cause chaos in an election season. Information security is vital to safeguard all kinds of critical organisations such as those storing extremely sensitive data about biological threats, nuclear weapons, or advanced artificial intelligence, that might be targeted by criminal hackers or aggressive nation states. Such attacks, if successful, could contribute to dangerous competitive dynamics (such as arms races) or directly lead to catastrophe.Some infosecurity roles involve managing and coordinating organisational policy, working on technical aspects of security, or a combination of both. We believe many such roles have thus far been underrated among those interested in effective altruism and reducing global catastrophic risks, and we’d be excited to see more altruistically motivated candidates move into this field.In a nutshellOrganisations with influence, financial power, and advanced technology are targeted by actors seeking to steal or abuse these assets. A career in information security is a promising avenue to support high-impact organisations by protecting against these attacks, which have the potential to disrupt an organisation’s mission or even increase existential risk.Jeffrey Ladish contributed to this career review. We also thank Wim van der Schoot for his helpful comments.Why might information security be a high-impact career?Information security protects against events that hamper an organisation’s ability to fulfil its mission, such as attackers gaining access to confidential information. Information security specialists play a vital role in supporting the mission of organisations, similar to roles in operations.So if you want an impactful career, expertise in information security could enable you to make a significant positive difference in the world by helping important organis...]]>
Mon, 16 Jan 2023 15:20:55 +0000 EA - 80,000 Hours career review: Information security in high-impact areas by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours career review: Information security in high-impact areas, published by 80000 Hours on January 16, 2023 on The Effective Altruism Forum.This is a cross-post of a career review from the 80,000 Hours website written by Jarrah Bloomfield. See the original here.IntroductionAs the 2016 US presidential campaign was entering a fractious round of primaries, Hillary Clinton’s campaign chair, John Podesta, opened a disturbing email.[1] The March 19 message warned that his Gmail password had been compromised and that he urgently needed to change it.The email was a lie. It wasn’t trying to help him protect his account — it was a phishing attack trying to gain illicit access.Podesta was suspicious, but the campaign’s IT team erroneously wrote the email was “legitimate” and told him to change his password. The IT team provided a safe link for Podesta to use, but it seems he or one of his staffers instead clicked the link in the forged email. That link was used by Russian intelligence hackers known as “Fancy Bear,” and they used their access to leak private campaign emails for public consumption in the final weeks of the 2016 race, embarrassing the Clinton team.While there are plausibly many critical factors in any close election, it’s possible that the controversy around the leaked emails played a non-trivial role in Clinton’s subsequent loss to Donald Trump. This would mean the failure of the campaign’s security team to prevent the hack — which might have come down to a mere typo[2] — was extraordinarily consequential.These events vividly illustrate how careers in infosecurity at key organisations have the potential for outsized impact. Ideally, security professionals can develop robust practices that reduce the likelihood that a single slip-up will result in a significant breach. But this key component for the continued and unimpaired functioning of important organisations is often neglected.And the need for such protection stretches far beyond hackers trying to cause chaos in an election season. Information security is vital to safeguard all kinds of critical organisations such as those storing extremely sensitive data about biological threats, nuclear weapons, or advanced artificial intelligence, that might be targeted by criminal hackers or aggressive nation states. Such attacks, if successful, could contribute to dangerous competitive dynamics (such as arms races) or directly lead to catastrophe.Some infosecurity roles involve managing and coordinating organisational policy, working on technical aspects of security, or a combination of both. We believe many such roles have thus far been underrated among those interested in effective altruism and reducing global catastrophic risks, and we’d be excited to see more altruistically motivated candidates move into this field.In a nutshellOrganisations with influence, financial power, and advanced technology are targeted by actors seeking to steal or abuse these assets. A career in information security is a promising avenue to support high-impact organisations by protecting against these attacks, which have the potential to disrupt an organisation’s mission or even increase existential risk.Jeffrey Ladish contributed to this career review. We also thank Wim van der Schoot for his helpful comments.Why might information security be a high-impact career?Information security protects against events that hamper an organisation’s ability to fulfil its mission, such as attackers gaining access to confidential information. Information security specialists play a vital role in supporting the mission of organisations, similar to roles in operations.So if you want an impactful career, expertise in information security could enable you to make a significant positive difference in the world by helping important organis...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours career review: Information security in high-impact areas, published by 80000 Hours on January 16, 2023 on The Effective Altruism Forum.This is a cross-post of a career review from the 80,000 Hours website written by Jarrah Bloomfield. See the original here.IntroductionAs the 2016 US presidential campaign was entering a fractious round of primaries, Hillary Clinton’s campaign chair, John Podesta, opened a disturbing email.[1] The March 19 message warned that his Gmail password had been compromised and that he urgently needed to change it.The email was a lie. It wasn’t trying to help him protect his account — it was a phishing attack trying to gain illicit access.Podesta was suspicious, but the campaign’s IT team erroneously wrote the email was “legitimate” and told him to change his password. The IT team provided a safe link for Podesta to use, but it seems he or one of his staffers instead clicked the link in the forged email. That link was used by Russian intelligence hackers known as “Fancy Bear,” and they used their access to leak private campaign emails for public consumption in the final weeks of the 2016 race, embarrassing the Clinton team.While there are plausibly many critical factors in any close election, it’s possible that the controversy around the leaked emails played a non-trivial role in Clinton’s subsequent loss to Donald Trump. This would mean the failure of the campaign’s security team to prevent the hack — which might have come down to a mere typo[2] — was extraordinarily consequential.These events vividly illustrate how careers in infosecurity at key organisations have the potential for outsized impact. Ideally, security professionals can develop robust practices that reduce the likelihood that a single slip-up will result in a significant breach. But this key component for the continued and unimpaired functioning of important organisations is often neglected.And the need for such protection stretches far beyond hackers trying to cause chaos in an election season. Information security is vital to safeguard all kinds of critical organisations such as those storing extremely sensitive data about biological threats, nuclear weapons, or advanced artificial intelligence, that might be targeted by criminal hackers or aggressive nation states. Such attacks, if successful, could contribute to dangerous competitive dynamics (such as arms races) or directly lead to catastrophe.Some infosecurity roles involve managing and coordinating organisational policy, working on technical aspects of security, or a combination of both. We believe many such roles have thus far been underrated among those interested in effective altruism and reducing global catastrophic risks, and we’d be excited to see more altruistically motivated candidates move into this field.In a nutshellOrganisations with influence, financial power, and advanced technology are targeted by actors seeking to steal or abuse these assets. A career in information security is a promising avenue to support high-impact organisations by protecting against these attacks, which have the potential to disrupt an organisation’s mission or even increase existential risk.Jeffrey Ladish contributed to this career review. We also thank Wim van der Schoot for his helpful comments.Why might information security be a high-impact career?Information security protects against events that hamper an organisation’s ability to fulfil its mission, such as attackers gaining access to confidential information. Information security specialists play a vital role in supporting the mission of organisations, similar to roles in operations.So if you want an impactful career, expertise in information security could enable you to make a significant positive difference in the world by helping important organis...]]>
80000 Hours https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 19:32 None full 4477
dMhscQ9sTXCaegYju_EA EA - Consider paying for literature or book reviews using bounties and dominant assurance contracts by Arjun Panickssery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider paying for literature or book reviews using bounties and dominant assurance contracts, published by Arjun Panickssery on January 15, 2023 on The Effective Altruism Forum.Cross-posted to LessWrong here.In September 2021, a LessWrong pilot program paid $500 for high-quality book reviews related to "science, history, and rationality." They got 36 submissions (they don't say how many were rejected for low quality, but at least 9 were accepted) and a bunch of them were popular on the forum.There's a bounty tag on LW (and on the EA Forum) but it isn't used much. A culture of posting bounties—either individually or in groups of people interested in the same information—has benefits for patrons, writers, and the community generally:For patrons—If there's a question you want investigated or a book you want reviewed, you can save your valuable time by tossing the question into the LW void and waiting for a piece that's worth accepting. Others probably want the same thing and can contribute to a pool. Ambitious patrons can also influence the direction of the community by sponsoring posts on topics they think people should think about more. You don't have to worry about filtering for visible credentials and writers don't have to worry about having any.For writers—Bounties motivate people to write both through a direct monetary incentive, but also because a lot of people dissuade themselves from writing on the Internet to avoid looking vain or self-important. Bounties cover for this awkwardness by providing non-status-related reasons to post.For the community—This whole exchange provides a positive externality to the lurkers who can read more posts for free.The simplest way this could work is for people to post individual bounties of e.g. $500 for posts drawing conclusions that would have taken them just too long to justify at the hourly value of their time. These bounties can guide writers who may be looking for things to read and write about anyway.An obstacle to bounty markets is that writers incur the risk of being outshone by a better post written around the same time. They could also be snubbed by picky benefactors. If most bounties are posted by a small group of people who post many individual bounties, then reputation effects can manage this. Group bounties could be difficult to coordinate since people aren't motivated to post "I'd contribute $100 to a post analyzing the top 10 writing advice books for insights" if the bounty is unlikely ever to be fulfilled. And they're not motivated to join existing pools when they could free-ride instead.One solution is to use the opposite direction: Kickstarter-style writing proposals. In monthly threads, users post advertisements for book reviews/literature reviews/investigations that they'd be willing to produce at some price and specifications. This puts the reputational demands on the writers and the risk on the sponsors who might pay for a poor product.In theory, writers could kickstart posts using dominant assurance contracts. An example (this is a real offer): If you send $20 to arjun.panickssery at Gmail via PayPal by noon New York time on January 21st, I'll send you back $25 if fewer than 10 people sent me money. If 10 or more people send me money, I'll post a review of Steven Pinker's The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century by the end of the month. I'm not sure whether I'm just giving away free money right now.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Arjun Panickssery https://forum.effectivealtruism.org/posts/dMhscQ9sTXCaegYju/consider-paying-for-literature-or-book-reviews-using Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider paying for literature or book reviews using bounties and dominant assurance contracts, published by Arjun Panickssery on January 15, 2023 on The Effective Altruism Forum.Cross-posted to LessWrong here.In September 2021, a LessWrong pilot program paid $500 for high-quality book reviews related to "science, history, and rationality." They got 36 submissions (they don't say how many were rejected for low quality, but at least 9 were accepted) and a bunch of them were popular on the forum.There's a bounty tag on LW (and on the EA Forum) but it isn't used much. A culture of posting bounties—either individually or in groups of people interested in the same information—has benefits for patrons, writers, and the community generally:For patrons—If there's a question you want investigated or a book you want reviewed, you can save your valuable time by tossing the question into the LW void and waiting for a piece that's worth accepting. Others probably want the same thing and can contribute to a pool. Ambitious patrons can also influence the direction of the community by sponsoring posts on topics they think people should think about more. You don't have to worry about filtering for visible credentials and writers don't have to worry about having any.For writers—Bounties motivate people to write both through a direct monetary incentive, but also because a lot of people dissuade themselves from writing on the Internet to avoid looking vain or self-important. Bounties cover for this awkwardness by providing non-status-related reasons to post.For the community—This whole exchange provides a positive externality to the lurkers who can read more posts for free.The simplest way this could work is for people to post individual bounties of e.g. $500 for posts drawing conclusions that would have taken them just too long to justify at the hourly value of their time. These bounties can guide writers who may be looking for things to read and write about anyway.An obstacle to bounty markets is that writers incur the risk of being outshone by a better post written around the same time. They could also be snubbed by picky benefactors. If most bounties are posted by a small group of people who post many individual bounties, then reputation effects can manage this. Group bounties could be difficult to coordinate since people aren't motivated to post "I'd contribute $100 to a post analyzing the top 10 writing advice books for insights" if the bounty is unlikely ever to be fulfilled. And they're not motivated to join existing pools when they could free-ride instead.One solution is to use the opposite direction: Kickstarter-style writing proposals. In monthly threads, users post advertisements for book reviews/literature reviews/investigations that they'd be willing to produce at some price and specifications. This puts the reputational demands on the writers and the risk on the sponsors who might pay for a poor product.In theory, writers could kickstart posts using dominant assurance contracts. An example (this is a real offer): If you send $20 to arjun.panickssery at Gmail via PayPal by noon New York time on January 21st, I'll send you back $25 if fewer than 10 people sent me money. If 10 or more people send me money, I'll post a review of Steven Pinker's The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century by the end of the month. I'm not sure whether I'm just giving away free money right now.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 16 Jan 2023 14:33:44 +0000 EA - Consider paying for literature or book reviews using bounties and dominant assurance contracts by Arjun Panickssery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider paying for literature or book reviews using bounties and dominant assurance contracts, published by Arjun Panickssery on January 15, 2023 on The Effective Altruism Forum.Cross-posted to LessWrong here.In September 2021, a LessWrong pilot program paid $500 for high-quality book reviews related to "science, history, and rationality." They got 36 submissions (they don't say how many were rejected for low quality, but at least 9 were accepted) and a bunch of them were popular on the forum.There's a bounty tag on LW (and on the EA Forum) but it isn't used much. A culture of posting bounties—either individually or in groups of people interested in the same information—has benefits for patrons, writers, and the community generally:For patrons—If there's a question you want investigated or a book you want reviewed, you can save your valuable time by tossing the question into the LW void and waiting for a piece that's worth accepting. Others probably want the same thing and can contribute to a pool. Ambitious patrons can also influence the direction of the community by sponsoring posts on topics they think people should think about more. You don't have to worry about filtering for visible credentials and writers don't have to worry about having any.For writers—Bounties motivate people to write both through a direct monetary incentive, but also because a lot of people dissuade themselves from writing on the Internet to avoid looking vain or self-important. Bounties cover for this awkwardness by providing non-status-related reasons to post.For the community—This whole exchange provides a positive externality to the lurkers who can read more posts for free.The simplest way this could work is for people to post individual bounties of e.g. $500 for posts drawing conclusions that would have taken them just too long to justify at the hourly value of their time. These bounties can guide writers who may be looking for things to read and write about anyway.An obstacle to bounty markets is that writers incur the risk of being outshone by a better post written around the same time. They could also be snubbed by picky benefactors. If most bounties are posted by a small group of people who post many individual bounties, then reputation effects can manage this. Group bounties could be difficult to coordinate since people aren't motivated to post "I'd contribute $100 to a post analyzing the top 10 writing advice books for insights" if the bounty is unlikely ever to be fulfilled. And they're not motivated to join existing pools when they could free-ride instead.One solution is to use the opposite direction: Kickstarter-style writing proposals. In monthly threads, users post advertisements for book reviews/literature reviews/investigations that they'd be willing to produce at some price and specifications. This puts the reputational demands on the writers and the risk on the sponsors who might pay for a poor product.In theory, writers could kickstart posts using dominant assurance contracts. An example (this is a real offer): If you send $20 to arjun.panickssery at Gmail via PayPal by noon New York time on January 21st, I'll send you back $25 if fewer than 10 people sent me money. If 10 or more people send me money, I'll post a review of Steven Pinker's The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century by the end of the month. I'm not sure whether I'm just giving away free money right now.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider paying for literature or book reviews using bounties and dominant assurance contracts, published by Arjun Panickssery on January 15, 2023 on The Effective Altruism Forum.Cross-posted to LessWrong here.In September 2021, a LessWrong pilot program paid $500 for high-quality book reviews related to "science, history, and rationality." They got 36 submissions (they don't say how many were rejected for low quality, but at least 9 were accepted) and a bunch of them were popular on the forum.There's a bounty tag on LW (and on the EA Forum) but it isn't used much. A culture of posting bounties—either individually or in groups of people interested in the same information—has benefits for patrons, writers, and the community generally:For patrons—If there's a question you want investigated or a book you want reviewed, you can save your valuable time by tossing the question into the LW void and waiting for a piece that's worth accepting. Others probably want the same thing and can contribute to a pool. Ambitious patrons can also influence the direction of the community by sponsoring posts on topics they think people should think about more. You don't have to worry about filtering for visible credentials and writers don't have to worry about having any.For writers—Bounties motivate people to write both through a direct monetary incentive, but also because a lot of people dissuade themselves from writing on the Internet to avoid looking vain or self-important. Bounties cover for this awkwardness by providing non-status-related reasons to post.For the community—This whole exchange provides a positive externality to the lurkers who can read more posts for free.The simplest way this could work is for people to post individual bounties of e.g. $500 for posts drawing conclusions that would have taken them just too long to justify at the hourly value of their time. These bounties can guide writers who may be looking for things to read and write about anyway.An obstacle to bounty markets is that writers incur the risk of being outshone by a better post written around the same time. They could also be snubbed by picky benefactors. If most bounties are posted by a small group of people who post many individual bounties, then reputation effects can manage this. Group bounties could be difficult to coordinate since people aren't motivated to post "I'd contribute $100 to a post analyzing the top 10 writing advice books for insights" if the bounty is unlikely ever to be fulfilled. And they're not motivated to join existing pools when they could free-ride instead.One solution is to use the opposite direction: Kickstarter-style writing proposals. In monthly threads, users post advertisements for book reviews/literature reviews/investigations that they'd be willing to produce at some price and specifications. This puts the reputational demands on the writers and the risk on the sponsors who might pay for a poor product.In theory, writers could kickstart posts using dominant assurance contracts. An example (this is a real offer): If you send $20 to arjun.panickssery at Gmail via PayPal by noon New York time on January 21st, I'll send you back $25 if fewer than 10 people sent me money. If 10 or more people send me money, I'll post a review of Steven Pinker's The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century by the end of the month. I'm not sure whether I'm just giving away free money right now.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Arjun Panickssery https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:25 None full 4479
KB8XPfh7dJ9uJaaDs_EA EA - Does EA understand how to apologize for things? by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does EA understand how to apologize for things?, published by titotal on January 15, 2023 on The Effective Altruism Forum.In response to the drama over Bostroms apology for an old email, the original email has been universally condemned from all sides. But I've also seen some confusion over why people dislike the apology itself. After all, nothing in the apology was technically inaccurate, right? What part of it do we disagree with?Well, I object to it because it was an apology. And when you grade an apology, you don't grade it on the factual accuracy of the scientific claims contained within, you grade it on how good it is at being an apology. And to be frank, this was probably one of the worst apologies I have ever seen in my life, although it has since been topped by Tegmark's awful non-apology for the far right newspaper affair.Okay, let's go over the rules for an apology to be genuine and sincere. I'll take them from here.Acknowledge the offense.Explain what happened.Express remorse.Offer to make amends.Notably missing from this list is step 5: Go off on an unrelated tangent about eugenics.Imagine if I called someone's mother overweight in a vulgar manner. When they get upset, I compose a long apology email where I apologize for the language, but then note that I believe it is factually true their mother has a BMI substantially above average, as does their sister, father, and wife. Whether or not those claims are factually true doesn't actually matter, because bringing them up at all is unnecessary and further upsets the person I just hurt.In Bostroms email of 9 paragraphs, he spends 2 talking about the historical context of the email, 1 talking about why he decided to release it, 1 actually apologizing, and the remaining 5 paragraphs giving an overview of his current views on race, intelligence, genetics, and eugenics.What this betrays is an extreme lack of empathy for the people he is meant to be apologizing to. Imagine if he was reading this apology out loud to the average black person, and think about how uncomfortable they would feel by the time he got to part discussing his papers about the ethics of genetic enhancement.Bostroms original racist email did not mention racial genetic differences or eugenics. They should not have been brought up in the apology either. As a direct result of him bringing the subject up, this forum and others throughout the internet have been filled with race science debate, an outcome that I believe is very harmful. Discussions of racial differences are divisive, bad PR, probably result in the spread of harmful beliefs, and are completely irrelevant to top EA causes. If Bostrom didn't anticipate that this outcome would result from bringing the subject up, then he was being hopelessly naive.On the other hand, Bostroms apology looks absolutely saintly next to the FLI's/Max Tegmarks non-apology for the initial approval of grant money to a far-right newspaper (the funding offer was later rescinded). At no point does he offer any understanding at all as to why people might be concerned about approving, even temporarily, funding for a far-right newspaper that promotes holocaust denial, covid vaccine conspiracy theories, and defending "ethnic rights".I don't even know what to say about this statement. The FLI has managed to fail at point 1 of an apology: understanding that they did something wrong. I hope they manage to release a real apology soon, and when they do, maybe they can learn some lessons from previous failures.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
titotal https://forum.effectivealtruism.org/posts/KB8XPfh7dJ9uJaaDs/does-ea-understand-how-to-apologize-for-things Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does EA understand how to apologize for things?, published by titotal on January 15, 2023 on The Effective Altruism Forum.In response to the drama over Bostroms apology for an old email, the original email has been universally condemned from all sides. But I've also seen some confusion over why people dislike the apology itself. After all, nothing in the apology was technically inaccurate, right? What part of it do we disagree with?Well, I object to it because it was an apology. And when you grade an apology, you don't grade it on the factual accuracy of the scientific claims contained within, you grade it on how good it is at being an apology. And to be frank, this was probably one of the worst apologies I have ever seen in my life, although it has since been topped by Tegmark's awful non-apology for the far right newspaper affair.Okay, let's go over the rules for an apology to be genuine and sincere. I'll take them from here.Acknowledge the offense.Explain what happened.Express remorse.Offer to make amends.Notably missing from this list is step 5: Go off on an unrelated tangent about eugenics.Imagine if I called someone's mother overweight in a vulgar manner. When they get upset, I compose a long apology email where I apologize for the language, but then note that I believe it is factually true their mother has a BMI substantially above average, as does their sister, father, and wife. Whether or not those claims are factually true doesn't actually matter, because bringing them up at all is unnecessary and further upsets the person I just hurt.In Bostroms email of 9 paragraphs, he spends 2 talking about the historical context of the email, 1 talking about why he decided to release it, 1 actually apologizing, and the remaining 5 paragraphs giving an overview of his current views on race, intelligence, genetics, and eugenics.What this betrays is an extreme lack of empathy for the people he is meant to be apologizing to. Imagine if he was reading this apology out loud to the average black person, and think about how uncomfortable they would feel by the time he got to part discussing his papers about the ethics of genetic enhancement.Bostroms original racist email did not mention racial genetic differences or eugenics. They should not have been brought up in the apology either. As a direct result of him bringing the subject up, this forum and others throughout the internet have been filled with race science debate, an outcome that I believe is very harmful. Discussions of racial differences are divisive, bad PR, probably result in the spread of harmful beliefs, and are completely irrelevant to top EA causes. If Bostrom didn't anticipate that this outcome would result from bringing the subject up, then he was being hopelessly naive.On the other hand, Bostroms apology looks absolutely saintly next to the FLI's/Max Tegmarks non-apology for the initial approval of grant money to a far-right newspaper (the funding offer was later rescinded). At no point does he offer any understanding at all as to why people might be concerned about approving, even temporarily, funding for a far-right newspaper that promotes holocaust denial, covid vaccine conspiracy theories, and defending "ethnic rights".I don't even know what to say about this statement. The FLI has managed to fail at point 1 of an apology: understanding that they did something wrong. I hope they manage to release a real apology soon, and when they do, maybe they can learn some lessons from previous failures.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 15 Jan 2023 20:05:47 +0000 EA - Does EA understand how to apologize for things? by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does EA understand how to apologize for things?, published by titotal on January 15, 2023 on The Effective Altruism Forum.In response to the drama over Bostroms apology for an old email, the original email has been universally condemned from all sides. But I've also seen some confusion over why people dislike the apology itself. After all, nothing in the apology was technically inaccurate, right? What part of it do we disagree with?Well, I object to it because it was an apology. And when you grade an apology, you don't grade it on the factual accuracy of the scientific claims contained within, you grade it on how good it is at being an apology. And to be frank, this was probably one of the worst apologies I have ever seen in my life, although it has since been topped by Tegmark's awful non-apology for the far right newspaper affair.Okay, let's go over the rules for an apology to be genuine and sincere. I'll take them from here.Acknowledge the offense.Explain what happened.Express remorse.Offer to make amends.Notably missing from this list is step 5: Go off on an unrelated tangent about eugenics.Imagine if I called someone's mother overweight in a vulgar manner. When they get upset, I compose a long apology email where I apologize for the language, but then note that I believe it is factually true their mother has a BMI substantially above average, as does their sister, father, and wife. Whether or not those claims are factually true doesn't actually matter, because bringing them up at all is unnecessary and further upsets the person I just hurt.In Bostroms email of 9 paragraphs, he spends 2 talking about the historical context of the email, 1 talking about why he decided to release it, 1 actually apologizing, and the remaining 5 paragraphs giving an overview of his current views on race, intelligence, genetics, and eugenics.What this betrays is an extreme lack of empathy for the people he is meant to be apologizing to. Imagine if he was reading this apology out loud to the average black person, and think about how uncomfortable they would feel by the time he got to part discussing his papers about the ethics of genetic enhancement.Bostroms original racist email did not mention racial genetic differences or eugenics. They should not have been brought up in the apology either. As a direct result of him bringing the subject up, this forum and others throughout the internet have been filled with race science debate, an outcome that I believe is very harmful. Discussions of racial differences are divisive, bad PR, probably result in the spread of harmful beliefs, and are completely irrelevant to top EA causes. If Bostrom didn't anticipate that this outcome would result from bringing the subject up, then he was being hopelessly naive.On the other hand, Bostroms apology looks absolutely saintly next to the FLI's/Max Tegmarks non-apology for the initial approval of grant money to a far-right newspaper (the funding offer was later rescinded). At no point does he offer any understanding at all as to why people might be concerned about approving, even temporarily, funding for a far-right newspaper that promotes holocaust denial, covid vaccine conspiracy theories, and defending "ethnic rights".I don't even know what to say about this statement. The FLI has managed to fail at point 1 of an apology: understanding that they did something wrong. I hope they manage to release a real apology soon, and when they do, maybe they can learn some lessons from previous failures.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does EA understand how to apologize for things?, published by titotal on January 15, 2023 on The Effective Altruism Forum.In response to the drama over Bostroms apology for an old email, the original email has been universally condemned from all sides. But I've also seen some confusion over why people dislike the apology itself. After all, nothing in the apology was technically inaccurate, right? What part of it do we disagree with?Well, I object to it because it was an apology. And when you grade an apology, you don't grade it on the factual accuracy of the scientific claims contained within, you grade it on how good it is at being an apology. And to be frank, this was probably one of the worst apologies I have ever seen in my life, although it has since been topped by Tegmark's awful non-apology for the far right newspaper affair.Okay, let's go over the rules for an apology to be genuine and sincere. I'll take them from here.Acknowledge the offense.Explain what happened.Express remorse.Offer to make amends.Notably missing from this list is step 5: Go off on an unrelated tangent about eugenics.Imagine if I called someone's mother overweight in a vulgar manner. When they get upset, I compose a long apology email where I apologize for the language, but then note that I believe it is factually true their mother has a BMI substantially above average, as does their sister, father, and wife. Whether or not those claims are factually true doesn't actually matter, because bringing them up at all is unnecessary and further upsets the person I just hurt.In Bostroms email of 9 paragraphs, he spends 2 talking about the historical context of the email, 1 talking about why he decided to release it, 1 actually apologizing, and the remaining 5 paragraphs giving an overview of his current views on race, intelligence, genetics, and eugenics.What this betrays is an extreme lack of empathy for the people he is meant to be apologizing to. Imagine if he was reading this apology out loud to the average black person, and think about how uncomfortable they would feel by the time he got to part discussing his papers about the ethics of genetic enhancement.Bostroms original racist email did not mention racial genetic differences or eugenics. They should not have been brought up in the apology either. As a direct result of him bringing the subject up, this forum and others throughout the internet have been filled with race science debate, an outcome that I believe is very harmful. Discussions of racial differences are divisive, bad PR, probably result in the spread of harmful beliefs, and are completely irrelevant to top EA causes. If Bostrom didn't anticipate that this outcome would result from bringing the subject up, then he was being hopelessly naive.On the other hand, Bostroms apology looks absolutely saintly next to the FLI's/Max Tegmarks non-apology for the initial approval of grant money to a far-right newspaper (the funding offer was later rescinded). At no point does he offer any understanding at all as to why people might be concerned about approving, even temporarily, funding for a far-right newspaper that promotes holocaust denial, covid vaccine conspiracy theories, and defending "ethnic rights".I don't even know what to say about this statement. The FLI has managed to fail at point 1 of an apology: understanding that they did something wrong. I hope they manage to release a real apology soon, and when they do, maybe they can learn some lessons from previous failures.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
titotal https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:27 None full 4467
qtGjAJrmBRNiJGKFQ_EA EA - The writing style here is bad by Michał Zabłocki Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The writing style here is bad, published by Michał Zabłocki on January 15, 2023 on The Effective Altruism Forum.Epistemic status: around that of Descartes' (low)I am not a native English speaker. Despite that, I've had my English skills in high regard most of my life. It was the language of my studies at the university. Although I still make plenty of mistakes, I want to assure you I am capable of reading academic texts.That being said: a whole lot of posts and comments here do feel like academic texts. The most basic/heuristic check: I found a tool to measure linguistic complexity, here/ - so you can play with it yourself, if you'd like to. Now, I realize that AI Safety is a complicated, professional topic with a lot of jargon. Hence, let's take a discussion that, I believe, should be especially welcoming to non-professionals:I could make some Python project and analyse lingustic complexity of a whole range of posts, produce graphs and it sure would be fun and much better, but I am a lazy person and I just want to show you the idea. I mean to sound extremely simple when I say the following.There's a whole lot of syllables right there.Most of the comments here do feel like academic papers. Reading them is a really taxing exercise. In fact, I usually just stray from it. Whether it's my shit attention span or people on a global scale are not proficient English speakers, it is my firm belief that ideas should be communicated in an understandable matter when posssible. That is, most of people should be able to understand them. If you want to increase diveristy and be more inclusive, well, I think that's one really good way at attempting so.This is also the reason for the exact title of the post, rather than "Linguistic preferences of some effective altruists seem to be impacted by a tendency to overly intellectualize."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Michał Zabłocki https://forum.effectivealtruism.org/posts/qtGjAJrmBRNiJGKFQ/the-writing-style-here-is-bad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The writing style here is bad, published by Michał Zabłocki on January 15, 2023 on The Effective Altruism Forum.Epistemic status: around that of Descartes' (low)I am not a native English speaker. Despite that, I've had my English skills in high regard most of my life. It was the language of my studies at the university. Although I still make plenty of mistakes, I want to assure you I am capable of reading academic texts.That being said: a whole lot of posts and comments here do feel like academic texts. The most basic/heuristic check: I found a tool to measure linguistic complexity, here/ - so you can play with it yourself, if you'd like to. Now, I realize that AI Safety is a complicated, professional topic with a lot of jargon. Hence, let's take a discussion that, I believe, should be especially welcoming to non-professionals:I could make some Python project and analyse lingustic complexity of a whole range of posts, produce graphs and it sure would be fun and much better, but I am a lazy person and I just want to show you the idea. I mean to sound extremely simple when I say the following.There's a whole lot of syllables right there.Most of the comments here do feel like academic papers. Reading them is a really taxing exercise. In fact, I usually just stray from it. Whether it's my shit attention span or people on a global scale are not proficient English speakers, it is my firm belief that ideas should be communicated in an understandable matter when posssible. That is, most of people should be able to understand them. If you want to increase diveristy and be more inclusive, well, I think that's one really good way at attempting so.This is also the reason for the exact title of the post, rather than "Linguistic preferences of some effective altruists seem to be impacted by a tendency to overly intellectualize."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 15 Jan 2023 14:03:44 +0000 EA - The writing style here is bad by Michał Zabłocki Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The writing style here is bad, published by Michał Zabłocki on January 15, 2023 on The Effective Altruism Forum.Epistemic status: around that of Descartes' (low)I am not a native English speaker. Despite that, I've had my English skills in high regard most of my life. It was the language of my studies at the university. Although I still make plenty of mistakes, I want to assure you I am capable of reading academic texts.That being said: a whole lot of posts and comments here do feel like academic texts. The most basic/heuristic check: I found a tool to measure linguistic complexity, here/ - so you can play with it yourself, if you'd like to. Now, I realize that AI Safety is a complicated, professional topic with a lot of jargon. Hence, let's take a discussion that, I believe, should be especially welcoming to non-professionals:I could make some Python project and analyse lingustic complexity of a whole range of posts, produce graphs and it sure would be fun and much better, but I am a lazy person and I just want to show you the idea. I mean to sound extremely simple when I say the following.There's a whole lot of syllables right there.Most of the comments here do feel like academic papers. Reading them is a really taxing exercise. In fact, I usually just stray from it. Whether it's my shit attention span or people on a global scale are not proficient English speakers, it is my firm belief that ideas should be communicated in an understandable matter when posssible. That is, most of people should be able to understand them. If you want to increase diveristy and be more inclusive, well, I think that's one really good way at attempting so.This is also the reason for the exact title of the post, rather than "Linguistic preferences of some effective altruists seem to be impacted by a tendency to overly intellectualize."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The writing style here is bad, published by Michał Zabłocki on January 15, 2023 on The Effective Altruism Forum.Epistemic status: around that of Descartes' (low)I am not a native English speaker. Despite that, I've had my English skills in high regard most of my life. It was the language of my studies at the university. Although I still make plenty of mistakes, I want to assure you I am capable of reading academic texts.That being said: a whole lot of posts and comments here do feel like academic texts. The most basic/heuristic check: I found a tool to measure linguistic complexity, here/ - so you can play with it yourself, if you'd like to. Now, I realize that AI Safety is a complicated, professional topic with a lot of jargon. Hence, let's take a discussion that, I believe, should be especially welcoming to non-professionals:I could make some Python project and analyse lingustic complexity of a whole range of posts, produce graphs and it sure would be fun and much better, but I am a lazy person and I just want to show you the idea. I mean to sound extremely simple when I say the following.There's a whole lot of syllables right there.Most of the comments here do feel like academic papers. Reading them is a really taxing exercise. In fact, I usually just stray from it. Whether it's my shit attention span or people on a global scale are not proficient English speakers, it is my firm belief that ideas should be communicated in an understandable matter when posssible. That is, most of people should be able to understand them. If you want to increase diveristy and be more inclusive, well, I think that's one really good way at attempting so.This is also the reason for the exact title of the post, rather than "Linguistic preferences of some effective altruists seem to be impacted by a tendency to overly intellectualize."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Michał Zabłocki https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:59 None full 4469
Zynbd3nF6eCyvrnz3_EA EA - Do better, please ... by Rohit is a Strange Loop Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do better, please ..., published by Rohit is a Strange Loop on January 15, 2023 on The Effective Altruism Forum.I am not a card carrying member of EA. I am not particularly A, much less E in that context. However the past few months have been exhausting in seeing not just the community, one I like, in turmoil repeatedly, while clearly fumbling basic aspects of how they're seen in the wider world. I like having EA in the world, I think it does a lot of good. And I think you guys are literally throwing it away based on aesthetics of misguided epistemic virtue signaling. But it's late, and I read more than a few articles, and this post is me begging you to please just stop.The specific push here is of course the Bostrom incident, when he clearly and highly legibly wrote black people have lower intelligence than other races. And his apology, was, to put it mildly, mealy mouthed and without much substance. If anything, in the intervening 25 years since the offending email, all he seems to have learnt to do is forget the one thing he said he wanted to do - to speak plainly.I'm not here to litigate race science. There's plenty of well reviewed science in the field that demonstrates that, varyingly, there are issues with measurements of both race and intelligence, much less how they evolve over time, catch up speeds, and a truly dizzying array of confounders. I can easily imagine if you're young and not particularly interested in this space you'd have a variety of views, what is silly is seeing someone who is so clearly in a position of authority, with a reputation for careful consideration and truth seeking, maintaining this kind of view.And not only is this just wrong, it's counterproductive.If EA wants to work on the most important problems in the world and make progress on them, it would be useful to have the world look upon you with trust. For anything more than turning money into malaria nets, you need people to trust you. And that includes trusting your intentions and your character.If you believe there are racial differences in intelligence, and your work forces you to work on the hard problems of resource allocation or longtermist societal evolution, nobody will trust you to do the right tradeoffs. History is filled with optimisation experiments gone horribly wrong when these beliefs existed at the bottom. The base rate of horrible outcomes is uncomfortably large.This is human values misalignment. Unless you have overwhelming evidence (or any real evidence), this is just a dumb prior to hold and publicise if you're working on actively changing people's lives. I don't care what you think about ethics about sentient digital life in the future if you can't figure this out today.Again, all of which individually is fine. I'm an advocate of people holding crazy opinions should they want to. But when like a third of the community seems to support him, and the defenses require contortions that agree, dismiss and generally be whiny about drama, that's ridiculous. While I appreciate posts like this, which speak about the importance of epistemic integrity, it seems to miss the fact that applauding someone for not lying is great but not if the belief they're holding is bad. And even if this blows over, it will remain a drag on EA unless it's addressed unequivocally.Or this type of comment which uses a lot of words but effectively seems to support the same thought. That no, our job is to differentiate QALYs and therefore differences are part of life.But guess what, epistemic integrity on something like this (I believe something pretty reprehensible and am not cowing to people telling me so) isn't going to help with shrimp welfare or AI risk prevention. Or even malaria net provision. Do not mistake "sticking with your beliefs" to be an overriding good, above believing w...]]>
Rohit is a Strange Loop https://forum.effectivealtruism.org/posts/Zynbd3nF6eCyvrnz3/do-better-please Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do better, please ..., published by Rohit is a Strange Loop on January 15, 2023 on The Effective Altruism Forum.I am not a card carrying member of EA. I am not particularly A, much less E in that context. However the past few months have been exhausting in seeing not just the community, one I like, in turmoil repeatedly, while clearly fumbling basic aspects of how they're seen in the wider world. I like having EA in the world, I think it does a lot of good. And I think you guys are literally throwing it away based on aesthetics of misguided epistemic virtue signaling. But it's late, and I read more than a few articles, and this post is me begging you to please just stop.The specific push here is of course the Bostrom incident, when he clearly and highly legibly wrote black people have lower intelligence than other races. And his apology, was, to put it mildly, mealy mouthed and without much substance. If anything, in the intervening 25 years since the offending email, all he seems to have learnt to do is forget the one thing he said he wanted to do - to speak plainly.I'm not here to litigate race science. There's plenty of well reviewed science in the field that demonstrates that, varyingly, there are issues with measurements of both race and intelligence, much less how they evolve over time, catch up speeds, and a truly dizzying array of confounders. I can easily imagine if you're young and not particularly interested in this space you'd have a variety of views, what is silly is seeing someone who is so clearly in a position of authority, with a reputation for careful consideration and truth seeking, maintaining this kind of view.And not only is this just wrong, it's counterproductive.If EA wants to work on the most important problems in the world and make progress on them, it would be useful to have the world look upon you with trust. For anything more than turning money into malaria nets, you need people to trust you. And that includes trusting your intentions and your character.If you believe there are racial differences in intelligence, and your work forces you to work on the hard problems of resource allocation or longtermist societal evolution, nobody will trust you to do the right tradeoffs. History is filled with optimisation experiments gone horribly wrong when these beliefs existed at the bottom. The base rate of horrible outcomes is uncomfortably large.This is human values misalignment. Unless you have overwhelming evidence (or any real evidence), this is just a dumb prior to hold and publicise if you're working on actively changing people's lives. I don't care what you think about ethics about sentient digital life in the future if you can't figure this out today.Again, all of which individually is fine. I'm an advocate of people holding crazy opinions should they want to. But when like a third of the community seems to support him, and the defenses require contortions that agree, dismiss and generally be whiny about drama, that's ridiculous. While I appreciate posts like this, which speak about the importance of epistemic integrity, it seems to miss the fact that applauding someone for not lying is great but not if the belief they're holding is bad. And even if this blows over, it will remain a drag on EA unless it's addressed unequivocally.Or this type of comment which uses a lot of words but effectively seems to support the same thought. That no, our job is to differentiate QALYs and therefore differences are part of life.But guess what, epistemic integrity on something like this (I believe something pretty reprehensible and am not cowing to people telling me so) isn't going to help with shrimp welfare or AI risk prevention. Or even malaria net provision. Do not mistake "sticking with your beliefs" to be an overriding good, above believing w...]]>
Sun, 15 Jan 2023 12:47:50 +0000 EA - Do better, please ... by Rohit is a Strange Loop Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do better, please ..., published by Rohit is a Strange Loop on January 15, 2023 on The Effective Altruism Forum.I am not a card carrying member of EA. I am not particularly A, much less E in that context. However the past few months have been exhausting in seeing not just the community, one I like, in turmoil repeatedly, while clearly fumbling basic aspects of how they're seen in the wider world. I like having EA in the world, I think it does a lot of good. And I think you guys are literally throwing it away based on aesthetics of misguided epistemic virtue signaling. But it's late, and I read more than a few articles, and this post is me begging you to please just stop.The specific push here is of course the Bostrom incident, when he clearly and highly legibly wrote black people have lower intelligence than other races. And his apology, was, to put it mildly, mealy mouthed and without much substance. If anything, in the intervening 25 years since the offending email, all he seems to have learnt to do is forget the one thing he said he wanted to do - to speak plainly.I'm not here to litigate race science. There's plenty of well reviewed science in the field that demonstrates that, varyingly, there are issues with measurements of both race and intelligence, much less how they evolve over time, catch up speeds, and a truly dizzying array of confounders. I can easily imagine if you're young and not particularly interested in this space you'd have a variety of views, what is silly is seeing someone who is so clearly in a position of authority, with a reputation for careful consideration and truth seeking, maintaining this kind of view.And not only is this just wrong, it's counterproductive.If EA wants to work on the most important problems in the world and make progress on them, it would be useful to have the world look upon you with trust. For anything more than turning money into malaria nets, you need people to trust you. And that includes trusting your intentions and your character.If you believe there are racial differences in intelligence, and your work forces you to work on the hard problems of resource allocation or longtermist societal evolution, nobody will trust you to do the right tradeoffs. History is filled with optimisation experiments gone horribly wrong when these beliefs existed at the bottom. The base rate of horrible outcomes is uncomfortably large.This is human values misalignment. Unless you have overwhelming evidence (or any real evidence), this is just a dumb prior to hold and publicise if you're working on actively changing people's lives. I don't care what you think about ethics about sentient digital life in the future if you can't figure this out today.Again, all of which individually is fine. I'm an advocate of people holding crazy opinions should they want to. But when like a third of the community seems to support him, and the defenses require contortions that agree, dismiss and generally be whiny about drama, that's ridiculous. While I appreciate posts like this, which speak about the importance of epistemic integrity, it seems to miss the fact that applauding someone for not lying is great but not if the belief they're holding is bad. And even if this blows over, it will remain a drag on EA unless it's addressed unequivocally.Or this type of comment which uses a lot of words but effectively seems to support the same thought. That no, our job is to differentiate QALYs and therefore differences are part of life.But guess what, epistemic integrity on something like this (I believe something pretty reprehensible and am not cowing to people telling me so) isn't going to help with shrimp welfare or AI risk prevention. Or even malaria net provision. Do not mistake "sticking with your beliefs" to be an overriding good, above believing w...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do better, please ..., published by Rohit is a Strange Loop on January 15, 2023 on The Effective Altruism Forum.I am not a card carrying member of EA. I am not particularly A, much less E in that context. However the past few months have been exhausting in seeing not just the community, one I like, in turmoil repeatedly, while clearly fumbling basic aspects of how they're seen in the wider world. I like having EA in the world, I think it does a lot of good. And I think you guys are literally throwing it away based on aesthetics of misguided epistemic virtue signaling. But it's late, and I read more than a few articles, and this post is me begging you to please just stop.The specific push here is of course the Bostrom incident, when he clearly and highly legibly wrote black people have lower intelligence than other races. And his apology, was, to put it mildly, mealy mouthed and without much substance. If anything, in the intervening 25 years since the offending email, all he seems to have learnt to do is forget the one thing he said he wanted to do - to speak plainly.I'm not here to litigate race science. There's plenty of well reviewed science in the field that demonstrates that, varyingly, there are issues with measurements of both race and intelligence, much less how they evolve over time, catch up speeds, and a truly dizzying array of confounders. I can easily imagine if you're young and not particularly interested in this space you'd have a variety of views, what is silly is seeing someone who is so clearly in a position of authority, with a reputation for careful consideration and truth seeking, maintaining this kind of view.And not only is this just wrong, it's counterproductive.If EA wants to work on the most important problems in the world and make progress on them, it would be useful to have the world look upon you with trust. For anything more than turning money into malaria nets, you need people to trust you. And that includes trusting your intentions and your character.If you believe there are racial differences in intelligence, and your work forces you to work on the hard problems of resource allocation or longtermist societal evolution, nobody will trust you to do the right tradeoffs. History is filled with optimisation experiments gone horribly wrong when these beliefs existed at the bottom. The base rate of horrible outcomes is uncomfortably large.This is human values misalignment. Unless you have overwhelming evidence (or any real evidence), this is just a dumb prior to hold and publicise if you're working on actively changing people's lives. I don't care what you think about ethics about sentient digital life in the future if you can't figure this out today.Again, all of which individually is fine. I'm an advocate of people holding crazy opinions should they want to. But when like a third of the community seems to support him, and the defenses require contortions that agree, dismiss and generally be whiny about drama, that's ridiculous. While I appreciate posts like this, which speak about the importance of epistemic integrity, it seems to miss the fact that applauding someone for not lying is great but not if the belief they're holding is bad. And even if this blows over, it will remain a drag on EA unless it's addressed unequivocally.Or this type of comment which uses a lot of words but effectively seems to support the same thought. That no, our job is to differentiate QALYs and therefore differences are part of life.But guess what, epistemic integrity on something like this (I believe something pretty reprehensible and am not cowing to people telling me so) isn't going to help with shrimp welfare or AI risk prevention. Or even malaria net provision. Do not mistake "sticking with your beliefs" to be an overriding good, above believing w...]]>
Rohit is a Strange Loop https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:06 None full 4468
HWnLXCyxxcoarQZvg_EA EA - Someone should write a detailed history of effective altruism by Pete Rowlett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Someone should write a detailed history of effective altruism, published by Pete Rowlett on January 14, 2023 on The Effective Altruism Forum.I think that someone should write a detailed history of the effective altruism movement. The history that currently exists on the forum is pretty limited, and I’m not aware of much other material, so I think there’s room for substantial improvement. An oral history was already suggested in this post.I tentatively planned to write this post before FTX collapsed, but the reasons for writing this are probably even more compelling now than they were beforehand. I think a comprehensive written history would help.Develop an EA ethos/identity based on a shared intellectual history and provide a launch pad for future developments (e.g. longtermism and an influx of money). I remember reading about a community member who mostly thought about global health getting on board with AI safety when they met a civil rights attorney who was concerned about it. A demonstration of shared values allowed for that development.Build trust within the movement. As the community grows, it can no longer rely on everyone knowing everyone else, and needs external tools to keep everyone on the same page. Aesthetics have been suggested as one option, and I think that may be part of the solution, in concert with a written history.Mitigate existential risk to the EA movement. See EA criticism #6 in Peter Wildeford’s post and this post about ways in which EA could fail. Assuming the book would help the movement develop an identity and shared trust, it could lower risk to the movement.Understand the strengths and weaknesses of the movement, and what has historically been done well and what has been done poorly.There are a few ways this could happen.Open Phil (which already has a History of Philanthropy focus area) or CEA could actively seek out someone for the role and fund them for the duration of the project. This process would give the writer the credibility needed to get time with important EA people.A would-be writer could request a grant, perhaps from the EA Infrastructure Fund.An already-established EA journalist like Kelsey Piper could do it. There would be a high opportunity cost associated with this option, of course, since they’re already doing valuable work. On the other hand, they would already have the credibility and baseline knowledge required to do a great job.I’d be interested in hearing people’s thoughts on this, or if I missed a resource that already exists.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Pete Rowlett https://forum.effectivealtruism.org/posts/HWnLXCyxxcoarQZvg/someone-should-write-a-detailed-history-of-effective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Someone should write a detailed history of effective altruism, published by Pete Rowlett on January 14, 2023 on The Effective Altruism Forum.I think that someone should write a detailed history of the effective altruism movement. The history that currently exists on the forum is pretty limited, and I’m not aware of much other material, so I think there’s room for substantial improvement. An oral history was already suggested in this post.I tentatively planned to write this post before FTX collapsed, but the reasons for writing this are probably even more compelling now than they were beforehand. I think a comprehensive written history would help.Develop an EA ethos/identity based on a shared intellectual history and provide a launch pad for future developments (e.g. longtermism and an influx of money). I remember reading about a community member who mostly thought about global health getting on board with AI safety when they met a civil rights attorney who was concerned about it. A demonstration of shared values allowed for that development.Build trust within the movement. As the community grows, it can no longer rely on everyone knowing everyone else, and needs external tools to keep everyone on the same page. Aesthetics have been suggested as one option, and I think that may be part of the solution, in concert with a written history.Mitigate existential risk to the EA movement. See EA criticism #6 in Peter Wildeford’s post and this post about ways in which EA could fail. Assuming the book would help the movement develop an identity and shared trust, it could lower risk to the movement.Understand the strengths and weaknesses of the movement, and what has historically been done well and what has been done poorly.There are a few ways this could happen.Open Phil (which already has a History of Philanthropy focus area) or CEA could actively seek out someone for the role and fund them for the duration of the project. This process would give the writer the credibility needed to get time with important EA people.A would-be writer could request a grant, perhaps from the EA Infrastructure Fund.An already-established EA journalist like Kelsey Piper could do it. There would be a high opportunity cost associated with this option, of course, since they’re already doing valuable work. On the other hand, they would already have the credibility and baseline knowledge required to do a great job.I’d be interested in hearing people’s thoughts on this, or if I missed a resource that already exists.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 15 Jan 2023 03:54:32 +0000 EA - Someone should write a detailed history of effective altruism by Pete Rowlett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Someone should write a detailed history of effective altruism, published by Pete Rowlett on January 14, 2023 on The Effective Altruism Forum.I think that someone should write a detailed history of the effective altruism movement. The history that currently exists on the forum is pretty limited, and I’m not aware of much other material, so I think there’s room for substantial improvement. An oral history was already suggested in this post.I tentatively planned to write this post before FTX collapsed, but the reasons for writing this are probably even more compelling now than they were beforehand. I think a comprehensive written history would help.Develop an EA ethos/identity based on a shared intellectual history and provide a launch pad for future developments (e.g. longtermism and an influx of money). I remember reading about a community member who mostly thought about global health getting on board with AI safety when they met a civil rights attorney who was concerned about it. A demonstration of shared values allowed for that development.Build trust within the movement. As the community grows, it can no longer rely on everyone knowing everyone else, and needs external tools to keep everyone on the same page. Aesthetics have been suggested as one option, and I think that may be part of the solution, in concert with a written history.Mitigate existential risk to the EA movement. See EA criticism #6 in Peter Wildeford’s post and this post about ways in which EA could fail. Assuming the book would help the movement develop an identity and shared trust, it could lower risk to the movement.Understand the strengths and weaknesses of the movement, and what has historically been done well and what has been done poorly.There are a few ways this could happen.Open Phil (which already has a History of Philanthropy focus area) or CEA could actively seek out someone for the role and fund them for the duration of the project. This process would give the writer the credibility needed to get time with important EA people.A would-be writer could request a grant, perhaps from the EA Infrastructure Fund.An already-established EA journalist like Kelsey Piper could do it. There would be a high opportunity cost associated with this option, of course, since they’re already doing valuable work. On the other hand, they would already have the credibility and baseline knowledge required to do a great job.I’d be interested in hearing people’s thoughts on this, or if I missed a resource that already exists.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Someone should write a detailed history of effective altruism, published by Pete Rowlett on January 14, 2023 on The Effective Altruism Forum.I think that someone should write a detailed history of the effective altruism movement. The history that currently exists on the forum is pretty limited, and I’m not aware of much other material, so I think there’s room for substantial improvement. An oral history was already suggested in this post.I tentatively planned to write this post before FTX collapsed, but the reasons for writing this are probably even more compelling now than they were beforehand. I think a comprehensive written history would help.Develop an EA ethos/identity based on a shared intellectual history and provide a launch pad for future developments (e.g. longtermism and an influx of money). I remember reading about a community member who mostly thought about global health getting on board with AI safety when they met a civil rights attorney who was concerned about it. A demonstration of shared values allowed for that development.Build trust within the movement. As the community grows, it can no longer rely on everyone knowing everyone else, and needs external tools to keep everyone on the same page. Aesthetics have been suggested as one option, and I think that may be part of the solution, in concert with a written history.Mitigate existential risk to the EA movement. See EA criticism #6 in Peter Wildeford’s post and this post about ways in which EA could fail. Assuming the book would help the movement develop an identity and shared trust, it could lower risk to the movement.Understand the strengths and weaknesses of the movement, and what has historically been done well and what has been done poorly.There are a few ways this could happen.Open Phil (which already has a History of Philanthropy focus area) or CEA could actively seek out someone for the role and fund them for the duration of the project. This process would give the writer the credibility needed to get time with important EA people.A would-be writer could request a grant, perhaps from the EA Infrastructure Fund.An already-established EA journalist like Kelsey Piper could do it. There would be a high opportunity cost associated with this option, of course, since they’re already doing valuable work. On the other hand, they would already have the credibility and baseline knowledge required to do a great job.I’d be interested in hearing people’s thoughts on this, or if I missed a resource that already exists.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Pete Rowlett https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:28 None full 4464
PzxKQCWuaknbGF7qW_EA EA - EA should help Tyler Cowen publish his drafted book in China by Matt Brooks Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA should help Tyler Cowen publish his drafted book in China, published by Matt Brooks on January 14, 2023 on The Effective Altruism Forum.Tyler Cowen was on the Jan 9th episode of ChinaTalk, a podcast hosted by Jordan Schneider.Podcast:China Talk Substack:At 39:45 Tyler mentions writing a book to improve US relations with China that will likely never be published. We should help him publish it!Edit: Tyler is interested although worried about censorshipI transcribed this part of the podcast with Whisper, so there may be mistakes. Go listen to the entire episode anyway, it’s worth a listen.TranscriptionJordanSo shortly, millions of Chinese nationals who've been playing World of Warcraft their entire lives will no longer be able to. I'm curious, how important shared cultural touchstones, like video games, the NBA and Marvel movies are to keeping the peace?TylerI don't know, we had plenty such touchstones with, say, Germany before World War I, World War II, it didn't matter. But certainly worth trying, you know, I had my own project to improve relations with China, which failed, by the way. I wrote a manuscript for a book, and my plan was to publish it only in China. And it was a book designed to explain America to the Chinese, and make it more explicable, more understandable. So I wrote the book, I submitted it to Xinhua, which gave me a contract, even paid me in advance. But then a number of events came along, most specifically the Trump trade wars, and the book never came out. They're still sitting on it. I don't think it will ever come out. That was my, you know, you could call it, misguided project, to just do a very small amount to help the two countries get along better.JordanWow, what were your, what were your themes?TylerWell, if you think of Tokvill, he wrote democracy in America, so that Europeans would understand America better, right? So I thought, well, if we're trying to explain America to Chinese people, it's a really very different set of questions, especially in the 21st century. Though I covered a lot of basic differences across the economies, the policies, why are the economies different?Why is there so little state ownership in America?Why are so many parts of America so bad at infrastructure?Why do Americans save less?How is religion different in America?That was, I think, an especially sensitive topic. And just try to make sense of America for Chinese readers, but not defending it. Just some kind of, all of branch of understanding. Here's how we are. And I don't know. I don't think they'll ever put the book out. And of course, by now, it's out of date.JordanYeah, but there's, I mean, there's plenty of other people. Other like countries on the planet who could use a little, you know, a civics 101.TylerThey could. I mean, this is a book written for Chinese people with the contrasts and data comparisons to China. So to sort of send the same book to, you know, Senegal, I don't think would really make sense.JordanYeah, but if you publish it in the US, it will like, you know, Osmos out. I don't think it needs to be published by Xinhua for Chinese people to read it, Tyler.TylerI've thought of having it translated into Chinese distributed Somersault in some way. Haven't ruled that out. No downside for me, but you want to do things right. And I kept on waiting for Xinhua. And now I've really completely given up. The book is out of date with facts. That's not a big problem. Facts you can update, but it's very out of date with respect to tone. So right now, everyone feels you need to be tough with China. You can't sort of say nice things to China about China, you're pandering. You look like LeBron James or you're afraid to speak up. And the book would have made a lot of sense, say in 2015 that its current tone doesn't make sense in ...]]>
Matt Brooks https://forum.effectivealtruism.org/posts/PzxKQCWuaknbGF7qW/ea-should-help-tyler-cowen-publish-his-drafted-book-in-china Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA should help Tyler Cowen publish his drafted book in China, published by Matt Brooks on January 14, 2023 on The Effective Altruism Forum.Tyler Cowen was on the Jan 9th episode of ChinaTalk, a podcast hosted by Jordan Schneider.Podcast:China Talk Substack:At 39:45 Tyler mentions writing a book to improve US relations with China that will likely never be published. We should help him publish it!Edit: Tyler is interested although worried about censorshipI transcribed this part of the podcast with Whisper, so there may be mistakes. Go listen to the entire episode anyway, it’s worth a listen.TranscriptionJordanSo shortly, millions of Chinese nationals who've been playing World of Warcraft their entire lives will no longer be able to. I'm curious, how important shared cultural touchstones, like video games, the NBA and Marvel movies are to keeping the peace?TylerI don't know, we had plenty such touchstones with, say, Germany before World War I, World War II, it didn't matter. But certainly worth trying, you know, I had my own project to improve relations with China, which failed, by the way. I wrote a manuscript for a book, and my plan was to publish it only in China. And it was a book designed to explain America to the Chinese, and make it more explicable, more understandable. So I wrote the book, I submitted it to Xinhua, which gave me a contract, even paid me in advance. But then a number of events came along, most specifically the Trump trade wars, and the book never came out. They're still sitting on it. I don't think it will ever come out. That was my, you know, you could call it, misguided project, to just do a very small amount to help the two countries get along better.JordanWow, what were your, what were your themes?TylerWell, if you think of Tokvill, he wrote democracy in America, so that Europeans would understand America better, right? So I thought, well, if we're trying to explain America to Chinese people, it's a really very different set of questions, especially in the 21st century. Though I covered a lot of basic differences across the economies, the policies, why are the economies different?Why is there so little state ownership in America?Why are so many parts of America so bad at infrastructure?Why do Americans save less?How is religion different in America?That was, I think, an especially sensitive topic. And just try to make sense of America for Chinese readers, but not defending it. Just some kind of, all of branch of understanding. Here's how we are. And I don't know. I don't think they'll ever put the book out. And of course, by now, it's out of date.JordanYeah, but there's, I mean, there's plenty of other people. Other like countries on the planet who could use a little, you know, a civics 101.TylerThey could. I mean, this is a book written for Chinese people with the contrasts and data comparisons to China. So to sort of send the same book to, you know, Senegal, I don't think would really make sense.JordanYeah, but if you publish it in the US, it will like, you know, Osmos out. I don't think it needs to be published by Xinhua for Chinese people to read it, Tyler.TylerI've thought of having it translated into Chinese distributed Somersault in some way. Haven't ruled that out. No downside for me, but you want to do things right. And I kept on waiting for Xinhua. And now I've really completely given up. The book is out of date with facts. That's not a big problem. Facts you can update, but it's very out of date with respect to tone. So right now, everyone feels you need to be tough with China. You can't sort of say nice things to China about China, you're pandering. You look like LeBron James or you're afraid to speak up. And the book would have made a lot of sense, say in 2015 that its current tone doesn't make sense in ...]]>
Sun, 15 Jan 2023 01:29:21 +0000 EA - EA should help Tyler Cowen publish his drafted book in China by Matt Brooks Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA should help Tyler Cowen publish his drafted book in China, published by Matt Brooks on January 14, 2023 on The Effective Altruism Forum.Tyler Cowen was on the Jan 9th episode of ChinaTalk, a podcast hosted by Jordan Schneider.Podcast:China Talk Substack:At 39:45 Tyler mentions writing a book to improve US relations with China that will likely never be published. We should help him publish it!Edit: Tyler is interested although worried about censorshipI transcribed this part of the podcast with Whisper, so there may be mistakes. Go listen to the entire episode anyway, it’s worth a listen.TranscriptionJordanSo shortly, millions of Chinese nationals who've been playing World of Warcraft their entire lives will no longer be able to. I'm curious, how important shared cultural touchstones, like video games, the NBA and Marvel movies are to keeping the peace?TylerI don't know, we had plenty such touchstones with, say, Germany before World War I, World War II, it didn't matter. But certainly worth trying, you know, I had my own project to improve relations with China, which failed, by the way. I wrote a manuscript for a book, and my plan was to publish it only in China. And it was a book designed to explain America to the Chinese, and make it more explicable, more understandable. So I wrote the book, I submitted it to Xinhua, which gave me a contract, even paid me in advance. But then a number of events came along, most specifically the Trump trade wars, and the book never came out. They're still sitting on it. I don't think it will ever come out. That was my, you know, you could call it, misguided project, to just do a very small amount to help the two countries get along better.JordanWow, what were your, what were your themes?TylerWell, if you think of Tokvill, he wrote democracy in America, so that Europeans would understand America better, right? So I thought, well, if we're trying to explain America to Chinese people, it's a really very different set of questions, especially in the 21st century. Though I covered a lot of basic differences across the economies, the policies, why are the economies different?Why is there so little state ownership in America?Why are so many parts of America so bad at infrastructure?Why do Americans save less?How is religion different in America?That was, I think, an especially sensitive topic. And just try to make sense of America for Chinese readers, but not defending it. Just some kind of, all of branch of understanding. Here's how we are. And I don't know. I don't think they'll ever put the book out. And of course, by now, it's out of date.JordanYeah, but there's, I mean, there's plenty of other people. Other like countries on the planet who could use a little, you know, a civics 101.TylerThey could. I mean, this is a book written for Chinese people with the contrasts and data comparisons to China. So to sort of send the same book to, you know, Senegal, I don't think would really make sense.JordanYeah, but if you publish it in the US, it will like, you know, Osmos out. I don't think it needs to be published by Xinhua for Chinese people to read it, Tyler.TylerI've thought of having it translated into Chinese distributed Somersault in some way. Haven't ruled that out. No downside for me, but you want to do things right. And I kept on waiting for Xinhua. And now I've really completely given up. The book is out of date with facts. That's not a big problem. Facts you can update, but it's very out of date with respect to tone. So right now, everyone feels you need to be tough with China. You can't sort of say nice things to China about China, you're pandering. You look like LeBron James or you're afraid to speak up. And the book would have made a lot of sense, say in 2015 that its current tone doesn't make sense in ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA should help Tyler Cowen publish his drafted book in China, published by Matt Brooks on January 14, 2023 on The Effective Altruism Forum.Tyler Cowen was on the Jan 9th episode of ChinaTalk, a podcast hosted by Jordan Schneider.Podcast:China Talk Substack:At 39:45 Tyler mentions writing a book to improve US relations with China that will likely never be published. We should help him publish it!Edit: Tyler is interested although worried about censorshipI transcribed this part of the podcast with Whisper, so there may be mistakes. Go listen to the entire episode anyway, it’s worth a listen.TranscriptionJordanSo shortly, millions of Chinese nationals who've been playing World of Warcraft their entire lives will no longer be able to. I'm curious, how important shared cultural touchstones, like video games, the NBA and Marvel movies are to keeping the peace?TylerI don't know, we had plenty such touchstones with, say, Germany before World War I, World War II, it didn't matter. But certainly worth trying, you know, I had my own project to improve relations with China, which failed, by the way. I wrote a manuscript for a book, and my plan was to publish it only in China. And it was a book designed to explain America to the Chinese, and make it more explicable, more understandable. So I wrote the book, I submitted it to Xinhua, which gave me a contract, even paid me in advance. But then a number of events came along, most specifically the Trump trade wars, and the book never came out. They're still sitting on it. I don't think it will ever come out. That was my, you know, you could call it, misguided project, to just do a very small amount to help the two countries get along better.JordanWow, what were your, what were your themes?TylerWell, if you think of Tokvill, he wrote democracy in America, so that Europeans would understand America better, right? So I thought, well, if we're trying to explain America to Chinese people, it's a really very different set of questions, especially in the 21st century. Though I covered a lot of basic differences across the economies, the policies, why are the economies different?Why is there so little state ownership in America?Why are so many parts of America so bad at infrastructure?Why do Americans save less?How is religion different in America?That was, I think, an especially sensitive topic. And just try to make sense of America for Chinese readers, but not defending it. Just some kind of, all of branch of understanding. Here's how we are. And I don't know. I don't think they'll ever put the book out. And of course, by now, it's out of date.JordanYeah, but there's, I mean, there's plenty of other people. Other like countries on the planet who could use a little, you know, a civics 101.TylerThey could. I mean, this is a book written for Chinese people with the contrasts and data comparisons to China. So to sort of send the same book to, you know, Senegal, I don't think would really make sense.JordanYeah, but if you publish it in the US, it will like, you know, Osmos out. I don't think it needs to be published by Xinhua for Chinese people to read it, Tyler.TylerI've thought of having it translated into Chinese distributed Somersault in some way. Haven't ruled that out. No downside for me, but you want to do things right. And I kept on waiting for Xinhua. And now I've really completely given up. The book is out of date with facts. That's not a big problem. Facts you can update, but it's very out of date with respect to tone. So right now, everyone feels you need to be tough with China. You can't sort of say nice things to China about China, you're pandering. You look like LeBron James or you're afraid to speak up. And the book would have made a lot of sense, say in 2015 that its current tone doesn't make sense in ...]]>
Matt Brooks https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:17 None full 4471
kDyG6p6FqwJ4ioQt4_EA EA - Concerns about AI safety career change by mmKALLL Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concerns about AI safety career change, published by mmKALLL on January 13, 2023 on The Effective Altruism Forum.Summary:I'm a software engineer interested in working on AI safety, but confused about its career prospects. I outlined all my concerns below.In particular, I had trouble finding accounts of engineers working in the field, and the differences between organizations/companies working on AI safety are very unclear from the outside.It's also not clear if frontend skills are seen as useful, or whether applicants should reside within the US.Full text:I'm an experienced full-stack software engineer and software/strategy consultant based in Japan. I've been loosely following EA since 2010, and have become increasingly concerned about AI x-risk since 2016. This has led me to regularly consider possible careers in AI safety, especially now that the demand for software engineers in the field has increased dramatically.However, having spent ~15 hours reading about the current state of the field, organizations, and role of engineers, I find myself having more questions than I started with. In hope of finding more clarity and help share what engineers considering the career shift might be wondering, I decided to outline my main points of concern below:The only accounts of engineers working in AI safety I could find were two articles and a problem profile on 80,000 Hours. Not even the AI Alignment Forum seemed to have any posts written by engineers sharing their experience. Despite this, most orgs have open positions for ML engineers, DevOps engineers, or generalist software developers. What are all of them doing?Many job descriptions listed very similar skills for engineers, even when the orgs seemed to have very different approaches on tackling AI safety problems. Is the set of required software skills really that uniform across organizations?Do software engineers in the field feel that their day-to-day work is meaningful? Are they regularly learning interesting and useful things? How do they see their career prospects?I'm also curious whether projects are done with a diverse set of technologies? Who is typically responsible for data transformations and cleanup? How much ML theory should an engineer coming into the field learn beforehand? (I'm excited to learn about ML, but got very mixed signals about the expectations.)Some orgs describe their agenda and goals. In many cases, these seemed very similar to me, as all of them are pragmatic and many even had shared or adjacent areas of research. Given the similarities, why are there so many different organizations? How is an outsider supposed to know what makes each of them unique?As an example, MIRI states that they want to "ensure that the creation of smarter-than-human machine intelligence has a positive impact", Anthropic states they have "long-term goals of steerable, trustworthy AI", Redwood Research states they want to "align -- future systems with human interests", and Center of AI Safety states they want to "reduce catastrophic and existential risks from AI". What makes these different from each other? They all sound like they'd lead to similar conclusions about what to work on.I was surprised to find that some orgs didn't really describe their work or what differentiates them. How are they supposed to find the best engineers if interested ones can't know what areas they are working on? I also found that it's sometimes very difficult to evaluate whether an org is active and/or trustworthy.Related to this, I was baffled to find that MIRI hasn't updated their agenda since 2015, and their latest publication is dated at 2016. However, their blog seems to have ~quarterly updates? Are they still relevant?Despite finding many orgs by reading articles and publications, I couldn't find a good overall list ...]]>
mmKALLL https://forum.effectivealtruism.org/posts/kDyG6p6FqwJ4ioQt4/concerns-about-ai-safety-career-change Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concerns about AI safety career change, published by mmKALLL on January 13, 2023 on The Effective Altruism Forum.Summary:I'm a software engineer interested in working on AI safety, but confused about its career prospects. I outlined all my concerns below.In particular, I had trouble finding accounts of engineers working in the field, and the differences between organizations/companies working on AI safety are very unclear from the outside.It's also not clear if frontend skills are seen as useful, or whether applicants should reside within the US.Full text:I'm an experienced full-stack software engineer and software/strategy consultant based in Japan. I've been loosely following EA since 2010, and have become increasingly concerned about AI x-risk since 2016. This has led me to regularly consider possible careers in AI safety, especially now that the demand for software engineers in the field has increased dramatically.However, having spent ~15 hours reading about the current state of the field, organizations, and role of engineers, I find myself having more questions than I started with. In hope of finding more clarity and help share what engineers considering the career shift might be wondering, I decided to outline my main points of concern below:The only accounts of engineers working in AI safety I could find were two articles and a problem profile on 80,000 Hours. Not even the AI Alignment Forum seemed to have any posts written by engineers sharing their experience. Despite this, most orgs have open positions for ML engineers, DevOps engineers, or generalist software developers. What are all of them doing?Many job descriptions listed very similar skills for engineers, even when the orgs seemed to have very different approaches on tackling AI safety problems. Is the set of required software skills really that uniform across organizations?Do software engineers in the field feel that their day-to-day work is meaningful? Are they regularly learning interesting and useful things? How do they see their career prospects?I'm also curious whether projects are done with a diverse set of technologies? Who is typically responsible for data transformations and cleanup? How much ML theory should an engineer coming into the field learn beforehand? (I'm excited to learn about ML, but got very mixed signals about the expectations.)Some orgs describe their agenda and goals. In many cases, these seemed very similar to me, as all of them are pragmatic and many even had shared or adjacent areas of research. Given the similarities, why are there so many different organizations? How is an outsider supposed to know what makes each of them unique?As an example, MIRI states that they want to "ensure that the creation of smarter-than-human machine intelligence has a positive impact", Anthropic states they have "long-term goals of steerable, trustworthy AI", Redwood Research states they want to "align -- future systems with human interests", and Center of AI Safety states they want to "reduce catastrophic and existential risks from AI". What makes these different from each other? They all sound like they'd lead to similar conclusions about what to work on.I was surprised to find that some orgs didn't really describe their work or what differentiates them. How are they supposed to find the best engineers if interested ones can't know what areas they are working on? I also found that it's sometimes very difficult to evaluate whether an org is active and/or trustworthy.Related to this, I was baffled to find that MIRI hasn't updated their agenda since 2015, and their latest publication is dated at 2016. However, their blog seems to have ~quarterly updates? Are they still relevant?Despite finding many orgs by reading articles and publications, I couldn't find a good overall list ...]]>
Sat, 14 Jan 2023 20:35:33 +0000 EA - Concerns about AI safety career change by mmKALLL Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concerns about AI safety career change, published by mmKALLL on January 13, 2023 on The Effective Altruism Forum.Summary:I'm a software engineer interested in working on AI safety, but confused about its career prospects. I outlined all my concerns below.In particular, I had trouble finding accounts of engineers working in the field, and the differences between organizations/companies working on AI safety are very unclear from the outside.It's also not clear if frontend skills are seen as useful, or whether applicants should reside within the US.Full text:I'm an experienced full-stack software engineer and software/strategy consultant based in Japan. I've been loosely following EA since 2010, and have become increasingly concerned about AI x-risk since 2016. This has led me to regularly consider possible careers in AI safety, especially now that the demand for software engineers in the field has increased dramatically.However, having spent ~15 hours reading about the current state of the field, organizations, and role of engineers, I find myself having more questions than I started with. In hope of finding more clarity and help share what engineers considering the career shift might be wondering, I decided to outline my main points of concern below:The only accounts of engineers working in AI safety I could find were two articles and a problem profile on 80,000 Hours. Not even the AI Alignment Forum seemed to have any posts written by engineers sharing their experience. Despite this, most orgs have open positions for ML engineers, DevOps engineers, or generalist software developers. What are all of them doing?Many job descriptions listed very similar skills for engineers, even when the orgs seemed to have very different approaches on tackling AI safety problems. Is the set of required software skills really that uniform across organizations?Do software engineers in the field feel that their day-to-day work is meaningful? Are they regularly learning interesting and useful things? How do they see their career prospects?I'm also curious whether projects are done with a diverse set of technologies? Who is typically responsible for data transformations and cleanup? How much ML theory should an engineer coming into the field learn beforehand? (I'm excited to learn about ML, but got very mixed signals about the expectations.)Some orgs describe their agenda and goals. In many cases, these seemed very similar to me, as all of them are pragmatic and many even had shared or adjacent areas of research. Given the similarities, why are there so many different organizations? How is an outsider supposed to know what makes each of them unique?As an example, MIRI states that they want to "ensure that the creation of smarter-than-human machine intelligence has a positive impact", Anthropic states they have "long-term goals of steerable, trustworthy AI", Redwood Research states they want to "align -- future systems with human interests", and Center of AI Safety states they want to "reduce catastrophic and existential risks from AI". What makes these different from each other? They all sound like they'd lead to similar conclusions about what to work on.I was surprised to find that some orgs didn't really describe their work or what differentiates them. How are they supposed to find the best engineers if interested ones can't know what areas they are working on? I also found that it's sometimes very difficult to evaluate whether an org is active and/or trustworthy.Related to this, I was baffled to find that MIRI hasn't updated their agenda since 2015, and their latest publication is dated at 2016. However, their blog seems to have ~quarterly updates? Are they still relevant?Despite finding many orgs by reading articles and publications, I couldn't find a good overall list ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concerns about AI safety career change, published by mmKALLL on January 13, 2023 on The Effective Altruism Forum.Summary:I'm a software engineer interested in working on AI safety, but confused about its career prospects. I outlined all my concerns below.In particular, I had trouble finding accounts of engineers working in the field, and the differences between organizations/companies working on AI safety are very unclear from the outside.It's also not clear if frontend skills are seen as useful, or whether applicants should reside within the US.Full text:I'm an experienced full-stack software engineer and software/strategy consultant based in Japan. I've been loosely following EA since 2010, and have become increasingly concerned about AI x-risk since 2016. This has led me to regularly consider possible careers in AI safety, especially now that the demand for software engineers in the field has increased dramatically.However, having spent ~15 hours reading about the current state of the field, organizations, and role of engineers, I find myself having more questions than I started with. In hope of finding more clarity and help share what engineers considering the career shift might be wondering, I decided to outline my main points of concern below:The only accounts of engineers working in AI safety I could find were two articles and a problem profile on 80,000 Hours. Not even the AI Alignment Forum seemed to have any posts written by engineers sharing their experience. Despite this, most orgs have open positions for ML engineers, DevOps engineers, or generalist software developers. What are all of them doing?Many job descriptions listed very similar skills for engineers, even when the orgs seemed to have very different approaches on tackling AI safety problems. Is the set of required software skills really that uniform across organizations?Do software engineers in the field feel that their day-to-day work is meaningful? Are they regularly learning interesting and useful things? How do they see their career prospects?I'm also curious whether projects are done with a diverse set of technologies? Who is typically responsible for data transformations and cleanup? How much ML theory should an engineer coming into the field learn beforehand? (I'm excited to learn about ML, but got very mixed signals about the expectations.)Some orgs describe their agenda and goals. In many cases, these seemed very similar to me, as all of them are pragmatic and many even had shared or adjacent areas of research. Given the similarities, why are there so many different organizations? How is an outsider supposed to know what makes each of them unique?As an example, MIRI states that they want to "ensure that the creation of smarter-than-human machine intelligence has a positive impact", Anthropic states they have "long-term goals of steerable, trustworthy AI", Redwood Research states they want to "align -- future systems with human interests", and Center of AI Safety states they want to "reduce catastrophic and existential risks from AI". What makes these different from each other? They all sound like they'd lead to similar conclusions about what to work on.I was surprised to find that some orgs didn't really describe their work or what differentiates them. How are they supposed to find the best engineers if interested ones can't know what areas they are working on? I also found that it's sometimes very difficult to evaluate whether an org is active and/or trustworthy.Related to this, I was baffled to find that MIRI hasn't updated their agenda since 2015, and their latest publication is dated at 2016. However, their blog seems to have ~quarterly updates? Are they still relevant?Despite finding many orgs by reading articles and publications, I couldn't find a good overall list ...]]>
mmKALLL https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:22 None full 4458
u5gLprWhFDJLxooLc_EA EA - Speak the truth, even if your voice trembles by RobertM Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Speak the truth, even if your voice trembles, published by RobertM on January 14, 2023 on The Effective Altruism Forum.Epistemic status: Motivated by the feeling that there's something like a missing mood in the EA sphere. Informed by my personal experience, not by rigorous survey. Probably a bit scattershot, but it's already more than a month after I wanted to publish this. (Minus this parenthetical, this post was entirely written before the Bostrom thing. I just kept forgetting to post it.)The last half year - the time since I moved to Berkeley to work on LessWrong, and consequently found myself embedded in the broader Bay Area rationality & EA communities - have been surprisingly normal.The weeks following the FTX collapse, admittedly, a little less so.One thing has kept coming up, though. I keep hearing that people are reluctant to voice disagreements, criticisms, or concerns they have, and each time I do a double-take. (My consistent surprise is part of what prompted me to write this post: both those generating the surprise, and those who are surprised like me, might benefit from this perspective.)The type of issue where one person has an unpleasant interaction with another person is difficult to navigate. The current solution of discussing those things with the CEA Community Health team at least tries to balance both concerns of reducing false positive and false negatives; earlier and more public discussion of those concerns is not a Pareto-improvement.But most of them are other fears: that you will annoy an important funder, by criticizing ideas that they support, or by raising concerns about their honesty, given publicly-available evidence, or something similar. And the degree to which these fears have shaped the epistemic landscape makes me feel like I took a wrong turn somewhere and ended up in a mirror universe.Having these fears - probably common! Discussing those fears in public - not crazy! Acting on those fears? (I keep running face-first into the fact that not everybody has read The Sequences, that not everybody who has read them has internalized them, and that not everybody who has internalized them has externalized that understanding through their actions.)My take is that acting on those fears, by not publishing that criticism, or raising those concerns, with receipts attached, is harmful. For simplicity's sake, let's consider the cartesian product of the options:to publicize a criticism, or notthe criticism being accurate, or notthe funder deciding to fund your work, or notThe set of possible outcomes:you publicize a criticism; the criticism is accurate; the funder funds your workyou publicize a criticism; the criticism is accurate; the funder doesn't fund your workyou publicize a criticism; the criticism is inaccurate; the funder funds your workyou publicize a criticism; the criticism is inaccurate; the funder doesn't fund your workyou don't publicize a criticism; the criticism is accurate; the funder funds your workyou don't publicize a criticism; the criticism is accurate; the funder doesn't fund your workyou don't publicize a criticism; the criticism is inaccurate; the funds your workyou don't publicize a criticism; the criticism is inaccurate; the funder doesn't fund your workWhat predicted outcomes are motivating these fears? 2 and 4 are the obvious candidates.I won't pretend that these are impossible, or that you would necessarily see another funder step in if such a thing happened. You could very well pay costs for saying things in public. I do think that people overestimate how likely those outcomes are, or how high the costs will be, and underestimate the damage that staying silent causes to community epistemics.But I will bite the bullet: assuming the worst, you should pay those costs. In the long run, you do not a...]]>
RobertM https://forum.effectivealtruism.org/posts/u5gLprWhFDJLxooLc/speak-the-truth-even-if-your-voice-trembles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Speak the truth, even if your voice trembles, published by RobertM on January 14, 2023 on The Effective Altruism Forum.Epistemic status: Motivated by the feeling that there's something like a missing mood in the EA sphere. Informed by my personal experience, not by rigorous survey. Probably a bit scattershot, but it's already more than a month after I wanted to publish this. (Minus this parenthetical, this post was entirely written before the Bostrom thing. I just kept forgetting to post it.)The last half year - the time since I moved to Berkeley to work on LessWrong, and consequently found myself embedded in the broader Bay Area rationality & EA communities - have been surprisingly normal.The weeks following the FTX collapse, admittedly, a little less so.One thing has kept coming up, though. I keep hearing that people are reluctant to voice disagreements, criticisms, or concerns they have, and each time I do a double-take. (My consistent surprise is part of what prompted me to write this post: both those generating the surprise, and those who are surprised like me, might benefit from this perspective.)The type of issue where one person has an unpleasant interaction with another person is difficult to navigate. The current solution of discussing those things with the CEA Community Health team at least tries to balance both concerns of reducing false positive and false negatives; earlier and more public discussion of those concerns is not a Pareto-improvement.But most of them are other fears: that you will annoy an important funder, by criticizing ideas that they support, or by raising concerns about their honesty, given publicly-available evidence, or something similar. And the degree to which these fears have shaped the epistemic landscape makes me feel like I took a wrong turn somewhere and ended up in a mirror universe.Having these fears - probably common! Discussing those fears in public - not crazy! Acting on those fears? (I keep running face-first into the fact that not everybody has read The Sequences, that not everybody who has read them has internalized them, and that not everybody who has internalized them has externalized that understanding through their actions.)My take is that acting on those fears, by not publishing that criticism, or raising those concerns, with receipts attached, is harmful. For simplicity's sake, let's consider the cartesian product of the options:to publicize a criticism, or notthe criticism being accurate, or notthe funder deciding to fund your work, or notThe set of possible outcomes:you publicize a criticism; the criticism is accurate; the funder funds your workyou publicize a criticism; the criticism is accurate; the funder doesn't fund your workyou publicize a criticism; the criticism is inaccurate; the funder funds your workyou publicize a criticism; the criticism is inaccurate; the funder doesn't fund your workyou don't publicize a criticism; the criticism is accurate; the funder funds your workyou don't publicize a criticism; the criticism is accurate; the funder doesn't fund your workyou don't publicize a criticism; the criticism is inaccurate; the funds your workyou don't publicize a criticism; the criticism is inaccurate; the funder doesn't fund your workWhat predicted outcomes are motivating these fears? 2 and 4 are the obvious candidates.I won't pretend that these are impossible, or that you would necessarily see another funder step in if such a thing happened. You could very well pay costs for saying things in public. I do think that people overestimate how likely those outcomes are, or how high the costs will be, and underestimate the damage that staying silent causes to community epistemics.But I will bite the bullet: assuming the worst, you should pay those costs. In the long run, you do not a...]]>
Sat, 14 Jan 2023 08:37:59 +0000 EA - Speak the truth, even if your voice trembles by RobertM Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Speak the truth, even if your voice trembles, published by RobertM on January 14, 2023 on The Effective Altruism Forum.Epistemic status: Motivated by the feeling that there's something like a missing mood in the EA sphere. Informed by my personal experience, not by rigorous survey. Probably a bit scattershot, but it's already more than a month after I wanted to publish this. (Minus this parenthetical, this post was entirely written before the Bostrom thing. I just kept forgetting to post it.)The last half year - the time since I moved to Berkeley to work on LessWrong, and consequently found myself embedded in the broader Bay Area rationality & EA communities - have been surprisingly normal.The weeks following the FTX collapse, admittedly, a little less so.One thing has kept coming up, though. I keep hearing that people are reluctant to voice disagreements, criticisms, or concerns they have, and each time I do a double-take. (My consistent surprise is part of what prompted me to write this post: both those generating the surprise, and those who are surprised like me, might benefit from this perspective.)The type of issue where one person has an unpleasant interaction with another person is difficult to navigate. The current solution of discussing those things with the CEA Community Health team at least tries to balance both concerns of reducing false positive and false negatives; earlier and more public discussion of those concerns is not a Pareto-improvement.But most of them are other fears: that you will annoy an important funder, by criticizing ideas that they support, or by raising concerns about their honesty, given publicly-available evidence, or something similar. And the degree to which these fears have shaped the epistemic landscape makes me feel like I took a wrong turn somewhere and ended up in a mirror universe.Having these fears - probably common! Discussing those fears in public - not crazy! Acting on those fears? (I keep running face-first into the fact that not everybody has read The Sequences, that not everybody who has read them has internalized them, and that not everybody who has internalized them has externalized that understanding through their actions.)My take is that acting on those fears, by not publishing that criticism, or raising those concerns, with receipts attached, is harmful. For simplicity's sake, let's consider the cartesian product of the options:to publicize a criticism, or notthe criticism being accurate, or notthe funder deciding to fund your work, or notThe set of possible outcomes:you publicize a criticism; the criticism is accurate; the funder funds your workyou publicize a criticism; the criticism is accurate; the funder doesn't fund your workyou publicize a criticism; the criticism is inaccurate; the funder funds your workyou publicize a criticism; the criticism is inaccurate; the funder doesn't fund your workyou don't publicize a criticism; the criticism is accurate; the funder funds your workyou don't publicize a criticism; the criticism is accurate; the funder doesn't fund your workyou don't publicize a criticism; the criticism is inaccurate; the funds your workyou don't publicize a criticism; the criticism is inaccurate; the funder doesn't fund your workWhat predicted outcomes are motivating these fears? 2 and 4 are the obvious candidates.I won't pretend that these are impossible, or that you would necessarily see another funder step in if such a thing happened. You could very well pay costs for saying things in public. I do think that people overestimate how likely those outcomes are, or how high the costs will be, and underestimate the damage that staying silent causes to community epistemics.But I will bite the bullet: assuming the worst, you should pay those costs. In the long run, you do not a...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Speak the truth, even if your voice trembles, published by RobertM on January 14, 2023 on The Effective Altruism Forum.Epistemic status: Motivated by the feeling that there's something like a missing mood in the EA sphere. Informed by my personal experience, not by rigorous survey. Probably a bit scattershot, but it's already more than a month after I wanted to publish this. (Minus this parenthetical, this post was entirely written before the Bostrom thing. I just kept forgetting to post it.)The last half year - the time since I moved to Berkeley to work on LessWrong, and consequently found myself embedded in the broader Bay Area rationality & EA communities - have been surprisingly normal.The weeks following the FTX collapse, admittedly, a little less so.One thing has kept coming up, though. I keep hearing that people are reluctant to voice disagreements, criticisms, or concerns they have, and each time I do a double-take. (My consistent surprise is part of what prompted me to write this post: both those generating the surprise, and those who are surprised like me, might benefit from this perspective.)The type of issue where one person has an unpleasant interaction with another person is difficult to navigate. The current solution of discussing those things with the CEA Community Health team at least tries to balance both concerns of reducing false positive and false negatives; earlier and more public discussion of those concerns is not a Pareto-improvement.But most of them are other fears: that you will annoy an important funder, by criticizing ideas that they support, or by raising concerns about their honesty, given publicly-available evidence, or something similar. And the degree to which these fears have shaped the epistemic landscape makes me feel like I took a wrong turn somewhere and ended up in a mirror universe.Having these fears - probably common! Discussing those fears in public - not crazy! Acting on those fears? (I keep running face-first into the fact that not everybody has read The Sequences, that not everybody who has read them has internalized them, and that not everybody who has internalized them has externalized that understanding through their actions.)My take is that acting on those fears, by not publishing that criticism, or raising those concerns, with receipts attached, is harmful. For simplicity's sake, let's consider the cartesian product of the options:to publicize a criticism, or notthe criticism being accurate, or notthe funder deciding to fund your work, or notThe set of possible outcomes:you publicize a criticism; the criticism is accurate; the funder funds your workyou publicize a criticism; the criticism is accurate; the funder doesn't fund your workyou publicize a criticism; the criticism is inaccurate; the funder funds your workyou publicize a criticism; the criticism is inaccurate; the funder doesn't fund your workyou don't publicize a criticism; the criticism is accurate; the funder funds your workyou don't publicize a criticism; the criticism is accurate; the funder doesn't fund your workyou don't publicize a criticism; the criticism is inaccurate; the funds your workyou don't publicize a criticism; the criticism is inaccurate; the funder doesn't fund your workWhat predicted outcomes are motivating these fears? 2 and 4 are the obvious candidates.I won't pretend that these are impossible, or that you would necessarily see another funder step in if such a thing happened. You could very well pay costs for saying things in public. I do think that people overestimate how likely those outcomes are, or how high the costs will be, and underestimate the damage that staying silent causes to community epistemics.But I will bite the bullet: assuming the worst, you should pay those costs. In the long run, you do not a...]]>
RobertM https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:56 None full 4454
frcAPFXwiCpNrECgQ_EA EA - A general comment on discussions of genetic group differences by anonymous8101 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A general comment on discussions of genetic group differences, published by anonymous8101 on January 14, 2023 on The Effective Altruism Forum.(content warning: discussion of racially motivated violence and coercion)I wanted to share that I think it's not bad to think about the object level question of whether there are group differences in intelligence rooted in genetic differences. This is an empirical claim, and can be true or false.My moral beliefs are pretty rooted in egalitarianism. I think as a matter of policy, but also as a matter of moral character, it is good and important to treat the experience of strangers as equally valuable, regardless of their class or race. I do not think more intelligent people are more worthy of moral consideration than less intelligent people. I think it can be complicated at the extremes, especially when considering digital people, animals, etc., but that this has little bearing on public policy when concerning existing humans.I don't think genetic group differences in intelligence are likely to be that relevant given I have short AI timelines. If we assume longer timelines, I believe the most likely places they would be important in terms of policy would be in education and reproductive technology. Whether or not there are such differences between groups now, there could easily come to be large differences through the application of embryo selection techniques or other intelligence enhancing technologies. From an egalitarian moral framework, I suspect it would be important to subsidize this technology for disadvantaged groups or individuals so that they have the same options and opportunities as everyone else. Even if genes turn out to not be a major cause of inegalitarian outcomes today, they can definitely become a major cause in the future, if we don't exercise wisdom and thoughtfulness in how we wield these technologies.However, as I said, I don't expect this to be very significant in practice given short AI timelines.Most importantly, from my perspective, it's important to be able to think about questions like this clearly, and so I want to encourage people to not feel constrained to avoid the question because of fear of social censure for merely thinking about them. For a reasonably well researched (not necessarily correct) discussion of the object level, see this post:[link deleted at the author's request]I think it's important context to keep in view that some of the worst human behaviors have involved the enslavement and subjugation of whole groups of people, or attempts to murder entire groups—racial groups, national groups, cultural groups, religious groups. The eugenics movement in the United States and elsewhere attempted to significantly curtail the reproductive freedom of many people through extremely coercive means in the not-so-distant past. Between 1907 and 1963, over 64,000 individuals were forcibly sterilized under eugenic legislation in the United States, and minority groups were especially targeted. Presently in China, tens of thousands of Uighurs are being sterilized, and while we don't have a great deal of information about it, I would predict that there is a major element of government coercion in these sterilizations.Coercive policies like this are extremely wrong, and plainly so. I oppose and condemn them. I am aware that the advocates of these policies sometimes used genetic group differences in abilities as justification for their coercion. This does not cause me to think that I should avoid the whole subject of genetic group differences in ability. Making this subject taboo, and sanctioning anyone who speaks of it, seems like a sure way to prevent people from actually understanding the underlying problems disadvantaged groups or individuals face. This seems likely to inhibit rather than pro...]]>
anonymous8101 https://forum.effectivealtruism.org/posts/frcAPFXwiCpNrECgQ/a-general-comment-on-discussions-of-genetic-group Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A general comment on discussions of genetic group differences, published by anonymous8101 on January 14, 2023 on The Effective Altruism Forum.(content warning: discussion of racially motivated violence and coercion)I wanted to share that I think it's not bad to think about the object level question of whether there are group differences in intelligence rooted in genetic differences. This is an empirical claim, and can be true or false.My moral beliefs are pretty rooted in egalitarianism. I think as a matter of policy, but also as a matter of moral character, it is good and important to treat the experience of strangers as equally valuable, regardless of their class or race. I do not think more intelligent people are more worthy of moral consideration than less intelligent people. I think it can be complicated at the extremes, especially when considering digital people, animals, etc., but that this has little bearing on public policy when concerning existing humans.I don't think genetic group differences in intelligence are likely to be that relevant given I have short AI timelines. If we assume longer timelines, I believe the most likely places they would be important in terms of policy would be in education and reproductive technology. Whether or not there are such differences between groups now, there could easily come to be large differences through the application of embryo selection techniques or other intelligence enhancing technologies. From an egalitarian moral framework, I suspect it would be important to subsidize this technology for disadvantaged groups or individuals so that they have the same options and opportunities as everyone else. Even if genes turn out to not be a major cause of inegalitarian outcomes today, they can definitely become a major cause in the future, if we don't exercise wisdom and thoughtfulness in how we wield these technologies.However, as I said, I don't expect this to be very significant in practice given short AI timelines.Most importantly, from my perspective, it's important to be able to think about questions like this clearly, and so I want to encourage people to not feel constrained to avoid the question because of fear of social censure for merely thinking about them. For a reasonably well researched (not necessarily correct) discussion of the object level, see this post:[link deleted at the author's request]I think it's important context to keep in view that some of the worst human behaviors have involved the enslavement and subjugation of whole groups of people, or attempts to murder entire groups—racial groups, national groups, cultural groups, religious groups. The eugenics movement in the United States and elsewhere attempted to significantly curtail the reproductive freedom of many people through extremely coercive means in the not-so-distant past. Between 1907 and 1963, over 64,000 individuals were forcibly sterilized under eugenic legislation in the United States, and minority groups were especially targeted. Presently in China, tens of thousands of Uighurs are being sterilized, and while we don't have a great deal of information about it, I would predict that there is a major element of government coercion in these sterilizations.Coercive policies like this are extremely wrong, and plainly so. I oppose and condemn them. I am aware that the advocates of these policies sometimes used genetic group differences in abilities as justification for their coercion. This does not cause me to think that I should avoid the whole subject of genetic group differences in ability. Making this subject taboo, and sanctioning anyone who speaks of it, seems like a sure way to prevent people from actually understanding the underlying problems disadvantaged groups or individuals face. This seems likely to inhibit rather than pro...]]>
Sat, 14 Jan 2023 07:13:17 +0000 EA - A general comment on discussions of genetic group differences by anonymous8101 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A general comment on discussions of genetic group differences, published by anonymous8101 on January 14, 2023 on The Effective Altruism Forum.(content warning: discussion of racially motivated violence and coercion)I wanted to share that I think it's not bad to think about the object level question of whether there are group differences in intelligence rooted in genetic differences. This is an empirical claim, and can be true or false.My moral beliefs are pretty rooted in egalitarianism. I think as a matter of policy, but also as a matter of moral character, it is good and important to treat the experience of strangers as equally valuable, regardless of their class or race. I do not think more intelligent people are more worthy of moral consideration than less intelligent people. I think it can be complicated at the extremes, especially when considering digital people, animals, etc., but that this has little bearing on public policy when concerning existing humans.I don't think genetic group differences in intelligence are likely to be that relevant given I have short AI timelines. If we assume longer timelines, I believe the most likely places they would be important in terms of policy would be in education and reproductive technology. Whether or not there are such differences between groups now, there could easily come to be large differences through the application of embryo selection techniques or other intelligence enhancing technologies. From an egalitarian moral framework, I suspect it would be important to subsidize this technology for disadvantaged groups or individuals so that they have the same options and opportunities as everyone else. Even if genes turn out to not be a major cause of inegalitarian outcomes today, they can definitely become a major cause in the future, if we don't exercise wisdom and thoughtfulness in how we wield these technologies.However, as I said, I don't expect this to be very significant in practice given short AI timelines.Most importantly, from my perspective, it's important to be able to think about questions like this clearly, and so I want to encourage people to not feel constrained to avoid the question because of fear of social censure for merely thinking about them. For a reasonably well researched (not necessarily correct) discussion of the object level, see this post:[link deleted at the author's request]I think it's important context to keep in view that some of the worst human behaviors have involved the enslavement and subjugation of whole groups of people, or attempts to murder entire groups—racial groups, national groups, cultural groups, religious groups. The eugenics movement in the United States and elsewhere attempted to significantly curtail the reproductive freedom of many people through extremely coercive means in the not-so-distant past. Between 1907 and 1963, over 64,000 individuals were forcibly sterilized under eugenic legislation in the United States, and minority groups were especially targeted. Presently in China, tens of thousands of Uighurs are being sterilized, and while we don't have a great deal of information about it, I would predict that there is a major element of government coercion in these sterilizations.Coercive policies like this are extremely wrong, and plainly so. I oppose and condemn them. I am aware that the advocates of these policies sometimes used genetic group differences in abilities as justification for their coercion. This does not cause me to think that I should avoid the whole subject of genetic group differences in ability. Making this subject taboo, and sanctioning anyone who speaks of it, seems like a sure way to prevent people from actually understanding the underlying problems disadvantaged groups or individuals face. This seems likely to inhibit rather than pro...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A general comment on discussions of genetic group differences, published by anonymous8101 on January 14, 2023 on The Effective Altruism Forum.(content warning: discussion of racially motivated violence and coercion)I wanted to share that I think it's not bad to think about the object level question of whether there are group differences in intelligence rooted in genetic differences. This is an empirical claim, and can be true or false.My moral beliefs are pretty rooted in egalitarianism. I think as a matter of policy, but also as a matter of moral character, it is good and important to treat the experience of strangers as equally valuable, regardless of their class or race. I do not think more intelligent people are more worthy of moral consideration than less intelligent people. I think it can be complicated at the extremes, especially when considering digital people, animals, etc., but that this has little bearing on public policy when concerning existing humans.I don't think genetic group differences in intelligence are likely to be that relevant given I have short AI timelines. If we assume longer timelines, I believe the most likely places they would be important in terms of policy would be in education and reproductive technology. Whether or not there are such differences between groups now, there could easily come to be large differences through the application of embryo selection techniques or other intelligence enhancing technologies. From an egalitarian moral framework, I suspect it would be important to subsidize this technology for disadvantaged groups or individuals so that they have the same options and opportunities as everyone else. Even if genes turn out to not be a major cause of inegalitarian outcomes today, they can definitely become a major cause in the future, if we don't exercise wisdom and thoughtfulness in how we wield these technologies.However, as I said, I don't expect this to be very significant in practice given short AI timelines.Most importantly, from my perspective, it's important to be able to think about questions like this clearly, and so I want to encourage people to not feel constrained to avoid the question because of fear of social censure for merely thinking about them. For a reasonably well researched (not necessarily correct) discussion of the object level, see this post:[link deleted at the author's request]I think it's important context to keep in view that some of the worst human behaviors have involved the enslavement and subjugation of whole groups of people, or attempts to murder entire groups—racial groups, national groups, cultural groups, religious groups. The eugenics movement in the United States and elsewhere attempted to significantly curtail the reproductive freedom of many people through extremely coercive means in the not-so-distant past. Between 1907 and 1963, over 64,000 individuals were forcibly sterilized under eugenic legislation in the United States, and minority groups were especially targeted. Presently in China, tens of thousands of Uighurs are being sterilized, and while we don't have a great deal of information about it, I would predict that there is a major element of government coercion in these sterilizations.Coercive policies like this are extremely wrong, and plainly so. I oppose and condemn them. I am aware that the advocates of these policies sometimes used genetic group differences in abilities as justification for their coercion. This does not cause me to think that I should avoid the whole subject of genetic group differences in ability. Making this subject taboo, and sanctioning anyone who speaks of it, seems like a sure way to prevent people from actually understanding the underlying problems disadvantaged groups or individuals face. This seems likely to inhibit rather than pro...]]>
anonymous8101 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:49 None full 4453
e22kvdZWBfRYurPwg_EA EA - Iron deficiencies are very bad and you should treat them by Elizabeth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Iron deficiencies are very bad and you should treat them, published by Elizabeth on January 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Elizabeth https://forum.effectivealtruism.org/posts/e22kvdZWBfRYurPwg/iron-deficiencies-are-very-bad-and-you-should-treat-them-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Iron deficiencies are very bad and you should treat them, published by Elizabeth on January 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 14 Jan 2023 01:39:48 +0000 EA - Iron deficiencies are very bad and you should treat them by Elizabeth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Iron deficiencies are very bad and you should treat them, published by Elizabeth on January 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Iron deficiencies are very bad and you should treat them, published by Elizabeth on January 13, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Elizabeth https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:26 None full 4456
jgspXC8GKA7RtxMRE_EA EA - On Living Without Idols by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Living Without Idols, published by Rockwell on January 13, 2023 on The Effective Altruism Forum.For many years, I've actively lived in avoidance of idolizing behavior and in pursuit of a nuanced view of even those I respect most deeply. I think this has helped me in numerous ways and has been of particular help in weathering the past few months within the EA community. Below, I discuss how I think about the act of idolizing behavior, some of my personal experiences, and how this mentality can be of use to others.Note: I want more people to post on the EA Forum and have their ideas taken seriously regardless of whether they conform to Forum stylistic norms. I'm perfectly capable of writing a version of this post in the style typical to the Forum, but this post is written the way I actually like to write. If this style doesn’t work for you, you might want to read the first section “Anarchists have no idols” and then skip ahead to the section “Living without idols, Pt. 1” toward the end. You’ll lose some of the insights contained in my anecdotes, but still get most of the core ideas I want to convey here.Anarchists have no idols.I wrote a Facebook post in July 2019 following a blowup in one of my communities:"Anarchists have no idols."Years ago, I heard this expression (that weirdly doesn't seem to exist in Google) and it really stuck with me. I think about it often. It's something I try to live by and it feels extremely timely. Whether you agree with anarchism or not, I think this is a philosophy everyone might benefit from.What this means to me: Never put someone on a pedestal. Never believe anyone is incapable of doing wrong. Always create mechanisms for accountability, even if you don't anticipate ever needing to use them. Allow people to be multifaceted. Exist in nuance. Operate with an understanding of that nuance. Cherish the good while recognizing it doesn't mean there is no bad. Remember not to hero worship. Remember your fave is probably problematic. Remember no one is too big to fail, too big for flaws. Remember that when you idolize someone, it depersonalizes the idolized and erodes your autonomy. Hold on to your autonomy. Cultivate a culture of liberty. Idolize no one.Idolize no one. Idolize no one.My mentor, Pt. 1.When I was in college, I had a boss I considered my mentor. She was intelligent, ethical, and skilled. She shared her expertise with me and I eagerly learned from her. She gave me responsibility and trusted me to use it well. She oversaw me without micromanaging me, and used a gentle hand to correct my course and steer my development. She saw my potential and helped me to see it, too.She also lied to me. Directly to my face. She violated an ethical principle she had previously imparted to me, involved me in the violation, and then lied to me about it. I was made an unwitting participant in something I deeply morally opposed and I experienced a major, life-shattering breach of trust from someone I deeply respected. She was my boss and my friend, but in a sense, she was also my idol. And since then, I have refused to have another.Abusive people do not exist.A month after my mentor ceased to be my mentor, I took a semester-long course, "Domestic Violence". It stands as one of the most formative experiences in my way of thinking about the world. There's a lot I could write about it, but I want to share one small tidbit here, that I wrote about a few years after the course concluded:More and more people are promoting a shift in our language away from talking about “abusive relationships” and toward relationships with “abusive people.” This is a small but powerful way to locate where culpability lies. It is not the relationship that is to blame, but one individual in it. I suggest taking this a step further and selectively avoiding use of...]]>
Rockwell https://forum.effectivealtruism.org/posts/jgspXC8GKA7RtxMRE/on-living-without-idols Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Living Without Idols, published by Rockwell on January 13, 2023 on The Effective Altruism Forum.For many years, I've actively lived in avoidance of idolizing behavior and in pursuit of a nuanced view of even those I respect most deeply. I think this has helped me in numerous ways and has been of particular help in weathering the past few months within the EA community. Below, I discuss how I think about the act of idolizing behavior, some of my personal experiences, and how this mentality can be of use to others.Note: I want more people to post on the EA Forum and have their ideas taken seriously regardless of whether they conform to Forum stylistic norms. I'm perfectly capable of writing a version of this post in the style typical to the Forum, but this post is written the way I actually like to write. If this style doesn’t work for you, you might want to read the first section “Anarchists have no idols” and then skip ahead to the section “Living without idols, Pt. 1” toward the end. You’ll lose some of the insights contained in my anecdotes, but still get most of the core ideas I want to convey here.Anarchists have no idols.I wrote a Facebook post in July 2019 following a blowup in one of my communities:"Anarchists have no idols."Years ago, I heard this expression (that weirdly doesn't seem to exist in Google) and it really stuck with me. I think about it often. It's something I try to live by and it feels extremely timely. Whether you agree with anarchism or not, I think this is a philosophy everyone might benefit from.What this means to me: Never put someone on a pedestal. Never believe anyone is incapable of doing wrong. Always create mechanisms for accountability, even if you don't anticipate ever needing to use them. Allow people to be multifaceted. Exist in nuance. Operate with an understanding of that nuance. Cherish the good while recognizing it doesn't mean there is no bad. Remember not to hero worship. Remember your fave is probably problematic. Remember no one is too big to fail, too big for flaws. Remember that when you idolize someone, it depersonalizes the idolized and erodes your autonomy. Hold on to your autonomy. Cultivate a culture of liberty. Idolize no one.Idolize no one. Idolize no one.My mentor, Pt. 1.When I was in college, I had a boss I considered my mentor. She was intelligent, ethical, and skilled. She shared her expertise with me and I eagerly learned from her. She gave me responsibility and trusted me to use it well. She oversaw me without micromanaging me, and used a gentle hand to correct my course and steer my development. She saw my potential and helped me to see it, too.She also lied to me. Directly to my face. She violated an ethical principle she had previously imparted to me, involved me in the violation, and then lied to me about it. I was made an unwitting participant in something I deeply morally opposed and I experienced a major, life-shattering breach of trust from someone I deeply respected. She was my boss and my friend, but in a sense, she was also my idol. And since then, I have refused to have another.Abusive people do not exist.A month after my mentor ceased to be my mentor, I took a semester-long course, "Domestic Violence". It stands as one of the most formative experiences in my way of thinking about the world. There's a lot I could write about it, but I want to share one small tidbit here, that I wrote about a few years after the course concluded:More and more people are promoting a shift in our language away from talking about “abusive relationships” and toward relationships with “abusive people.” This is a small but powerful way to locate where culpability lies. It is not the relationship that is to blame, but one individual in it. I suggest taking this a step further and selectively avoiding use of...]]>
Sat, 14 Jan 2023 00:00:00 +0000 EA - On Living Without Idols by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Living Without Idols, published by Rockwell on January 13, 2023 on The Effective Altruism Forum.For many years, I've actively lived in avoidance of idolizing behavior and in pursuit of a nuanced view of even those I respect most deeply. I think this has helped me in numerous ways and has been of particular help in weathering the past few months within the EA community. Below, I discuss how I think about the act of idolizing behavior, some of my personal experiences, and how this mentality can be of use to others.Note: I want more people to post on the EA Forum and have their ideas taken seriously regardless of whether they conform to Forum stylistic norms. I'm perfectly capable of writing a version of this post in the style typical to the Forum, but this post is written the way I actually like to write. If this style doesn’t work for you, you might want to read the first section “Anarchists have no idols” and then skip ahead to the section “Living without idols, Pt. 1” toward the end. You’ll lose some of the insights contained in my anecdotes, but still get most of the core ideas I want to convey here.Anarchists have no idols.I wrote a Facebook post in July 2019 following a blowup in one of my communities:"Anarchists have no idols."Years ago, I heard this expression (that weirdly doesn't seem to exist in Google) and it really stuck with me. I think about it often. It's something I try to live by and it feels extremely timely. Whether you agree with anarchism or not, I think this is a philosophy everyone might benefit from.What this means to me: Never put someone on a pedestal. Never believe anyone is incapable of doing wrong. Always create mechanisms for accountability, even if you don't anticipate ever needing to use them. Allow people to be multifaceted. Exist in nuance. Operate with an understanding of that nuance. Cherish the good while recognizing it doesn't mean there is no bad. Remember not to hero worship. Remember your fave is probably problematic. Remember no one is too big to fail, too big for flaws. Remember that when you idolize someone, it depersonalizes the idolized and erodes your autonomy. Hold on to your autonomy. Cultivate a culture of liberty. Idolize no one.Idolize no one. Idolize no one.My mentor, Pt. 1.When I was in college, I had a boss I considered my mentor. She was intelligent, ethical, and skilled. She shared her expertise with me and I eagerly learned from her. She gave me responsibility and trusted me to use it well. She oversaw me without micromanaging me, and used a gentle hand to correct my course and steer my development. She saw my potential and helped me to see it, too.She also lied to me. Directly to my face. She violated an ethical principle she had previously imparted to me, involved me in the violation, and then lied to me about it. I was made an unwitting participant in something I deeply morally opposed and I experienced a major, life-shattering breach of trust from someone I deeply respected. She was my boss and my friend, but in a sense, she was also my idol. And since then, I have refused to have another.Abusive people do not exist.A month after my mentor ceased to be my mentor, I took a semester-long course, "Domestic Violence". It stands as one of the most formative experiences in my way of thinking about the world. There's a lot I could write about it, but I want to share one small tidbit here, that I wrote about a few years after the course concluded:More and more people are promoting a shift in our language away from talking about “abusive relationships” and toward relationships with “abusive people.” This is a small but powerful way to locate where culpability lies. It is not the relationship that is to blame, but one individual in it. I suggest taking this a step further and selectively avoiding use of...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Living Without Idols, published by Rockwell on January 13, 2023 on The Effective Altruism Forum.For many years, I've actively lived in avoidance of idolizing behavior and in pursuit of a nuanced view of even those I respect most deeply. I think this has helped me in numerous ways and has been of particular help in weathering the past few months within the EA community. Below, I discuss how I think about the act of idolizing behavior, some of my personal experiences, and how this mentality can be of use to others.Note: I want more people to post on the EA Forum and have their ideas taken seriously regardless of whether they conform to Forum stylistic norms. I'm perfectly capable of writing a version of this post in the style typical to the Forum, but this post is written the way I actually like to write. If this style doesn’t work for you, you might want to read the first section “Anarchists have no idols” and then skip ahead to the section “Living without idols, Pt. 1” toward the end. You’ll lose some of the insights contained in my anecdotes, but still get most of the core ideas I want to convey here.Anarchists have no idols.I wrote a Facebook post in July 2019 following a blowup in one of my communities:"Anarchists have no idols."Years ago, I heard this expression (that weirdly doesn't seem to exist in Google) and it really stuck with me. I think about it often. It's something I try to live by and it feels extremely timely. Whether you agree with anarchism or not, I think this is a philosophy everyone might benefit from.What this means to me: Never put someone on a pedestal. Never believe anyone is incapable of doing wrong. Always create mechanisms for accountability, even if you don't anticipate ever needing to use them. Allow people to be multifaceted. Exist in nuance. Operate with an understanding of that nuance. Cherish the good while recognizing it doesn't mean there is no bad. Remember not to hero worship. Remember your fave is probably problematic. Remember no one is too big to fail, too big for flaws. Remember that when you idolize someone, it depersonalizes the idolized and erodes your autonomy. Hold on to your autonomy. Cultivate a culture of liberty. Idolize no one.Idolize no one. Idolize no one.My mentor, Pt. 1.When I was in college, I had a boss I considered my mentor. She was intelligent, ethical, and skilled. She shared her expertise with me and I eagerly learned from her. She gave me responsibility and trusted me to use it well. She oversaw me without micromanaging me, and used a gentle hand to correct my course and steer my development. She saw my potential and helped me to see it, too.She also lied to me. Directly to my face. She violated an ethical principle she had previously imparted to me, involved me in the violation, and then lied to me about it. I was made an unwitting participant in something I deeply morally opposed and I experienced a major, life-shattering breach of trust from someone I deeply respected. She was my boss and my friend, but in a sense, she was also my idol. And since then, I have refused to have another.Abusive people do not exist.A month after my mentor ceased to be my mentor, I took a semester-long course, "Domestic Violence". It stands as one of the most formative experiences in my way of thinking about the world. There's a lot I could write about it, but I want to share one small tidbit here, that I wrote about a few years after the course concluded:More and more people are promoting a shift in our language away from talking about “abusive relationships” and toward relationships with “abusive people.” This is a small but powerful way to locate where culpability lies. It is not the relationship that is to blame, but one individual in it. I suggest taking this a step further and selectively avoiding use of...]]>
Rockwell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:19 None full 4455
n82RPezsuW7yQdztE_EA EA - Forecasting could use more gender diversity by Tegan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting could use more gender diversity, published by Tegan on January 13, 2023 on The Effective Altruism Forum.The new org I’m part of, the Forecasting Research Institute, recently began a hiring round, and we noticed that around 80% of research analyst applicants were male. This isn’t terribly surprising—a large percentage of forecasting tournament participants have been male historically, so we expected familiarity and interest in forecasting to be male-skewed currently. But, as a woman in forecasting myself, it does bug me.It bugs me on several levels. One is puzzlement: many of the fields most relevant to forecasting and forecasting research—like international relations, history, social science, and, in the case of forecasting research, behavioral science—are not short of women. (And in any case, forecasting is so interdisciplinary that most backgrounds have some value to add.) There’s also nothing about the broad spectrum of potential uses of forecasting, in policy, public discourse, institutional decision-making, philanthropy, etc., that suggests its impact is limited to particularly male domains.Which brings me to the next point. If the field of forecasting is systematically failing to attract roughly half of the population, containing roughly half of the talent, good ideas, valuable skills and knowledge, it’s made poorer and will likely face an uphill climb toward relevance and real-world impact. Which would be a shame, given its great potential.So if you’re a woman reading this (or another sort of person who feels underrepresented in forecasting) and you think there’s even a chance you might be interested in doing forecasting research, here’s my case for why you should apply to FRI’s research analyst position:You don’t need to be an experienced forecaster, or like doing forecasting. You don’t need to be super quantitative. You don’t need to be intimately familiar with forecasting research—for the right candidate, as long as you’re curious, motivated, and can get excited about unsolved problems in forecasting, FRI is willing to train you up.Forecasting is an early-stage field with high impact potential, and the opportunity to be part of building such a field is kind of rare.This also means there are tons of interesting open questions we need to answer, and lots of room to be creative in finding the answers.In terms of subject matter, the work is extremely varied—so if you’re, say, interested in bio and AI and public policy, you might get to touch on all of them at the same time, or in quick succession. Some of our projects will also allow you to interact with leading domain experts or prominent organizations doing cool work when we partner to test forecasting tools.If you want to use it this way, forecasting research can be a great facilitator for working on your own epistemics, and can provide a framework for interpreting and filtering the many important things one can have views about in a relatively independent way. (This is one of the primary motivations of my colleague Josh.)Not only is everyone on our team enthusiastic about improving gender diversity in our org, and in the forecasting space generally, they’re also just extremely nice, interesting, competent and thoughtful as people.If you’re still not sure if you could get sufficiently interested in forecasting for applying to be worth it, I’d suggest checking out our research page to see the projects we’ll be working on in the near future. But to give you a sense of how diverse the traits of people who’ve gotten into forecasting research are (all on the FRI team):I don’t have a “formal” background of any sort: I dropped out of college, and subsequently hopped around doing stuff like writing and generalist research. I then got into “caring about AI” via compelling arguments from friend...]]>
Tegan https://forum.effectivealtruism.org/posts/n82RPezsuW7yQdztE/forecasting-could-use-more-gender-diversity Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting could use more gender diversity, published by Tegan on January 13, 2023 on The Effective Altruism Forum.The new org I’m part of, the Forecasting Research Institute, recently began a hiring round, and we noticed that around 80% of research analyst applicants were male. This isn’t terribly surprising—a large percentage of forecasting tournament participants have been male historically, so we expected familiarity and interest in forecasting to be male-skewed currently. But, as a woman in forecasting myself, it does bug me.It bugs me on several levels. One is puzzlement: many of the fields most relevant to forecasting and forecasting research—like international relations, history, social science, and, in the case of forecasting research, behavioral science—are not short of women. (And in any case, forecasting is so interdisciplinary that most backgrounds have some value to add.) There’s also nothing about the broad spectrum of potential uses of forecasting, in policy, public discourse, institutional decision-making, philanthropy, etc., that suggests its impact is limited to particularly male domains.Which brings me to the next point. If the field of forecasting is systematically failing to attract roughly half of the population, containing roughly half of the talent, good ideas, valuable skills and knowledge, it’s made poorer and will likely face an uphill climb toward relevance and real-world impact. Which would be a shame, given its great potential.So if you’re a woman reading this (or another sort of person who feels underrepresented in forecasting) and you think there’s even a chance you might be interested in doing forecasting research, here’s my case for why you should apply to FRI’s research analyst position:You don’t need to be an experienced forecaster, or like doing forecasting. You don’t need to be super quantitative. You don’t need to be intimately familiar with forecasting research—for the right candidate, as long as you’re curious, motivated, and can get excited about unsolved problems in forecasting, FRI is willing to train you up.Forecasting is an early-stage field with high impact potential, and the opportunity to be part of building such a field is kind of rare.This also means there are tons of interesting open questions we need to answer, and lots of room to be creative in finding the answers.In terms of subject matter, the work is extremely varied—so if you’re, say, interested in bio and AI and public policy, you might get to touch on all of them at the same time, or in quick succession. Some of our projects will also allow you to interact with leading domain experts or prominent organizations doing cool work when we partner to test forecasting tools.If you want to use it this way, forecasting research can be a great facilitator for working on your own epistemics, and can provide a framework for interpreting and filtering the many important things one can have views about in a relatively independent way. (This is one of the primary motivations of my colleague Josh.)Not only is everyone on our team enthusiastic about improving gender diversity in our org, and in the forecasting space generally, they’re also just extremely nice, interesting, competent and thoughtful as people.If you’re still not sure if you could get sufficiently interested in forecasting for applying to be worth it, I’d suggest checking out our research page to see the projects we’ll be working on in the near future. But to give you a sense of how diverse the traits of people who’ve gotten into forecasting research are (all on the FRI team):I don’t have a “formal” background of any sort: I dropped out of college, and subsequently hopped around doing stuff like writing and generalist research. I then got into “caring about AI” via compelling arguments from friend...]]>
Fri, 13 Jan 2023 20:31:50 +0000 EA - Forecasting could use more gender diversity by Tegan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting could use more gender diversity, published by Tegan on January 13, 2023 on The Effective Altruism Forum.The new org I’m part of, the Forecasting Research Institute, recently began a hiring round, and we noticed that around 80% of research analyst applicants were male. This isn’t terribly surprising—a large percentage of forecasting tournament participants have been male historically, so we expected familiarity and interest in forecasting to be male-skewed currently. But, as a woman in forecasting myself, it does bug me.It bugs me on several levels. One is puzzlement: many of the fields most relevant to forecasting and forecasting research—like international relations, history, social science, and, in the case of forecasting research, behavioral science—are not short of women. (And in any case, forecasting is so interdisciplinary that most backgrounds have some value to add.) There’s also nothing about the broad spectrum of potential uses of forecasting, in policy, public discourse, institutional decision-making, philanthropy, etc., that suggests its impact is limited to particularly male domains.Which brings me to the next point. If the field of forecasting is systematically failing to attract roughly half of the population, containing roughly half of the talent, good ideas, valuable skills and knowledge, it’s made poorer and will likely face an uphill climb toward relevance and real-world impact. Which would be a shame, given its great potential.So if you’re a woman reading this (or another sort of person who feels underrepresented in forecasting) and you think there’s even a chance you might be interested in doing forecasting research, here’s my case for why you should apply to FRI’s research analyst position:You don’t need to be an experienced forecaster, or like doing forecasting. You don’t need to be super quantitative. You don’t need to be intimately familiar with forecasting research—for the right candidate, as long as you’re curious, motivated, and can get excited about unsolved problems in forecasting, FRI is willing to train you up.Forecasting is an early-stage field with high impact potential, and the opportunity to be part of building such a field is kind of rare.This also means there are tons of interesting open questions we need to answer, and lots of room to be creative in finding the answers.In terms of subject matter, the work is extremely varied—so if you’re, say, interested in bio and AI and public policy, you might get to touch on all of them at the same time, or in quick succession. Some of our projects will also allow you to interact with leading domain experts or prominent organizations doing cool work when we partner to test forecasting tools.If you want to use it this way, forecasting research can be a great facilitator for working on your own epistemics, and can provide a framework for interpreting and filtering the many important things one can have views about in a relatively independent way. (This is one of the primary motivations of my colleague Josh.)Not only is everyone on our team enthusiastic about improving gender diversity in our org, and in the forecasting space generally, they’re also just extremely nice, interesting, competent and thoughtful as people.If you’re still not sure if you could get sufficiently interested in forecasting for applying to be worth it, I’d suggest checking out our research page to see the projects we’ll be working on in the near future. But to give you a sense of how diverse the traits of people who’ve gotten into forecasting research are (all on the FRI team):I don’t have a “formal” background of any sort: I dropped out of college, and subsequently hopped around doing stuff like writing and generalist research. I then got into “caring about AI” via compelling arguments from friend...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting could use more gender diversity, published by Tegan on January 13, 2023 on The Effective Altruism Forum.The new org I’m part of, the Forecasting Research Institute, recently began a hiring round, and we noticed that around 80% of research analyst applicants were male. This isn’t terribly surprising—a large percentage of forecasting tournament participants have been male historically, so we expected familiarity and interest in forecasting to be male-skewed currently. But, as a woman in forecasting myself, it does bug me.It bugs me on several levels. One is puzzlement: many of the fields most relevant to forecasting and forecasting research—like international relations, history, social science, and, in the case of forecasting research, behavioral science—are not short of women. (And in any case, forecasting is so interdisciplinary that most backgrounds have some value to add.) There’s also nothing about the broad spectrum of potential uses of forecasting, in policy, public discourse, institutional decision-making, philanthropy, etc., that suggests its impact is limited to particularly male domains.Which brings me to the next point. If the field of forecasting is systematically failing to attract roughly half of the population, containing roughly half of the talent, good ideas, valuable skills and knowledge, it’s made poorer and will likely face an uphill climb toward relevance and real-world impact. Which would be a shame, given its great potential.So if you’re a woman reading this (or another sort of person who feels underrepresented in forecasting) and you think there’s even a chance you might be interested in doing forecasting research, here’s my case for why you should apply to FRI’s research analyst position:You don’t need to be an experienced forecaster, or like doing forecasting. You don’t need to be super quantitative. You don’t need to be intimately familiar with forecasting research—for the right candidate, as long as you’re curious, motivated, and can get excited about unsolved problems in forecasting, FRI is willing to train you up.Forecasting is an early-stage field with high impact potential, and the opportunity to be part of building such a field is kind of rare.This also means there are tons of interesting open questions we need to answer, and lots of room to be creative in finding the answers.In terms of subject matter, the work is extremely varied—so if you’re, say, interested in bio and AI and public policy, you might get to touch on all of them at the same time, or in quick succession. Some of our projects will also allow you to interact with leading domain experts or prominent organizations doing cool work when we partner to test forecasting tools.If you want to use it this way, forecasting research can be a great facilitator for working on your own epistemics, and can provide a framework for interpreting and filtering the many important things one can have views about in a relatively independent way. (This is one of the primary motivations of my colleague Josh.)Not only is everyone on our team enthusiastic about improving gender diversity in our org, and in the forecasting space generally, they’re also just extremely nice, interesting, competent and thoughtful as people.If you’re still not sure if you could get sufficiently interested in forecasting for applying to be worth it, I’d suggest checking out our research page to see the projects we’ll be working on in the near future. But to give you a sense of how diverse the traits of people who’ve gotten into forecasting research are (all on the FRI team):I don’t have a “formal” background of any sort: I dropped out of college, and subsequently hopped around doing stuff like writing and generalist research. I then got into “caring about AI” via compelling arguments from friend...]]>
Tegan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:53 None full 4442
eXND5GtLzezE2s5YS_EA EA - Quick PSA: 8000hours.org (not 80000hours.org) is a malicious scam site by Will Bradshaw Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quick PSA: 8000hours.org (not 80000hours.org) is a malicious scam site, published by Will Bradshaw on January 13, 2023 on The Effective Altruism Forum.Just what the title says: 8000hours.org is a malicious scam/phishing site. If you mean to go to 80000hours.org and come across something strange/confusing, be aware and take care to keep yourself safe. (EDIT: And maybe think twice about going to the scam site yourself; one user contacted me to say that their browser had autocompleted information into its fields.)I've accidentally gone to this site instead of 80000hours.org a few times over the past year, and each time it's been a different scam, so it seems to change hands/tactics frequently. Some of them were convincing enough that I'm worried they'd catch an unwary user.I've notified 80K about this; this post isn't meant to be a complaint or criticism about them, just to make others aware of the danger. That said, people from other orgs reading this might do well to think about what other domains close to their own someone might use to prey on their users, and what they might do to pre-emptively prevent this.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Will Bradshaw https://forum.effectivealtruism.org/posts/eXND5GtLzezE2s5YS/quick-psa-8000hours-org-not-80000hours-org-is-a-malicious Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quick PSA: 8000hours.org (not 80000hours.org) is a malicious scam site, published by Will Bradshaw on January 13, 2023 on The Effective Altruism Forum.Just what the title says: 8000hours.org is a malicious scam/phishing site. If you mean to go to 80000hours.org and come across something strange/confusing, be aware and take care to keep yourself safe. (EDIT: And maybe think twice about going to the scam site yourself; one user contacted me to say that their browser had autocompleted information into its fields.)I've accidentally gone to this site instead of 80000hours.org a few times over the past year, and each time it's been a different scam, so it seems to change hands/tactics frequently. Some of them were convincing enough that I'm worried they'd catch an unwary user.I've notified 80K about this; this post isn't meant to be a complaint or criticism about them, just to make others aware of the danger. That said, people from other orgs reading this might do well to think about what other domains close to their own someone might use to prey on their users, and what they might do to pre-emptively prevent this.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 13 Jan 2023 20:19:35 +0000 EA - Quick PSA: 8000hours.org (not 80000hours.org) is a malicious scam site by Will Bradshaw Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quick PSA: 8000hours.org (not 80000hours.org) is a malicious scam site, published by Will Bradshaw on January 13, 2023 on The Effective Altruism Forum.Just what the title says: 8000hours.org is a malicious scam/phishing site. If you mean to go to 80000hours.org and come across something strange/confusing, be aware and take care to keep yourself safe. (EDIT: And maybe think twice about going to the scam site yourself; one user contacted me to say that their browser had autocompleted information into its fields.)I've accidentally gone to this site instead of 80000hours.org a few times over the past year, and each time it's been a different scam, so it seems to change hands/tactics frequently. Some of them were convincing enough that I'm worried they'd catch an unwary user.I've notified 80K about this; this post isn't meant to be a complaint or criticism about them, just to make others aware of the danger. That said, people from other orgs reading this might do well to think about what other domains close to their own someone might use to prey on their users, and what they might do to pre-emptively prevent this.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quick PSA: 8000hours.org (not 80000hours.org) is a malicious scam site, published by Will Bradshaw on January 13, 2023 on The Effective Altruism Forum.Just what the title says: 8000hours.org is a malicious scam/phishing site. If you mean to go to 80000hours.org and come across something strange/confusing, be aware and take care to keep yourself safe. (EDIT: And maybe think twice about going to the scam site yourself; one user contacted me to say that their browser had autocompleted information into its fields.)I've accidentally gone to this site instead of 80000hours.org a few times over the past year, and each time it's been a different scam, so it seems to change hands/tactics frequently. Some of them were convincing enough that I'm worried they'd catch an unwary user.I've notified 80K about this; this post isn't meant to be a complaint or criticism about them, just to make others aware of the danger. That said, people from other orgs reading this might do well to think about what other domains close to their own someone might use to prey on their users, and what they might do to pre-emptively prevent this.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Will Bradshaw https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:27 None full 4443
zy6jGPeFKHaoxKEfT_EA EA - The Capability Approach by ryancbriggs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Capability Approach, published by ryancbriggs on January 13, 2023 on The Effective Altruism Forum.This post outlines the capability approach to thinking about human welfare. I think that this approach, while very popular in international development, is neglected in EA. While the capability approach has problems, I think that it provides a better approach to thinking about improving human welfare than approaches based on measuring happiness or subjective wellbeing (SWB) or approaches based on preference satisfaction. Finally, even if you disagree that the capability approach is best, I think this post will be useful to you because it may clarify why many people and organizations in the international development or global health space take the positions that they do. I will be drawing heavily on the work of Amartya Sen, but I will often not be citing specific texts because I’m an academic and getting to write without careful citations is thrilling.This post will have four sections. First, I will describe the capability approach. Second, I will give some simple examples that illustrate why I think that aiming to maximize capabilities is the best way to do good for people. I’ll frame these examples in opposition to other common approaches, but my goal here is mostly constructive and to argue for the capability approach rather than against maximizing, for example, SWB. Third, I will describe what I see as the largest downsides to the capability approach as well as possible responses to these downsides. Fourth and finally, I will explain my weakly-held theory that a lot of the ways that global health or international development organizations, including GiveWell, behave owes to the deep (but often unrecognized) influence of the capability approach on their thought.The capability approachThe fundamental unit of the value in the capability approach is a functioning, which is anything that you can be or do. Eating is a functioning. Being an EA is a functioning. Other functionings include: being a doctor, running, practicing Judaism, sleeping, and being a parent. Capabilities are options to be or do a functioning. The goal of the capability approach is not to maximize the number of capabilities available to people, it is instead to maximize the number of sets of capabilities. The notion here is that if you maximized simply the number of capabilities then you might enable someone to be: a parent or employed outside the home. But someone might want to do both. If you’re focusing on maximizing the number of sets of capabilities then you’ll end up with: parent, employed, both parent and employed, and neither.The simple beauty of this setup is that it is aiming to maximize the options that people have available to them, from which they then select the group of functionings that they want most. This is why one great book about this approach is entitled “Development as Freedom.” The argument is that development is the process of expanding capabilities, or individual freedom to live the kind of life that you want.I will come to criticisms later on, but one thing people may note is that this approach will lead to a lot of sets of capabilities and we will need some way to rank them or condense the list. In theory, we would want to do this based on how much people value each capability set. I will discuss this issue in more detail in the third section.Examples of why I love the capability approachHere I’ll lay out a few examples that show why I think the capability approach is the best way to think about improving human welfare.First, in opposition to preference-satisfaction approaches, the capability approach values options not taken. I think this accords with most of our intuitions, and that it takes real work for economics to train it out of people. Here are two examples...]]>
ryancbriggs https://forum.effectivealtruism.org/posts/zy6jGPeFKHaoxKEfT/the-capability-approach Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Capability Approach, published by ryancbriggs on January 13, 2023 on The Effective Altruism Forum.This post outlines the capability approach to thinking about human welfare. I think that this approach, while very popular in international development, is neglected in EA. While the capability approach has problems, I think that it provides a better approach to thinking about improving human welfare than approaches based on measuring happiness or subjective wellbeing (SWB) or approaches based on preference satisfaction. Finally, even if you disagree that the capability approach is best, I think this post will be useful to you because it may clarify why many people and organizations in the international development or global health space take the positions that they do. I will be drawing heavily on the work of Amartya Sen, but I will often not be citing specific texts because I’m an academic and getting to write without careful citations is thrilling.This post will have four sections. First, I will describe the capability approach. Second, I will give some simple examples that illustrate why I think that aiming to maximize capabilities is the best way to do good for people. I’ll frame these examples in opposition to other common approaches, but my goal here is mostly constructive and to argue for the capability approach rather than against maximizing, for example, SWB. Third, I will describe what I see as the largest downsides to the capability approach as well as possible responses to these downsides. Fourth and finally, I will explain my weakly-held theory that a lot of the ways that global health or international development organizations, including GiveWell, behave owes to the deep (but often unrecognized) influence of the capability approach on their thought.The capability approachThe fundamental unit of the value in the capability approach is a functioning, which is anything that you can be or do. Eating is a functioning. Being an EA is a functioning. Other functionings include: being a doctor, running, practicing Judaism, sleeping, and being a parent. Capabilities are options to be or do a functioning. The goal of the capability approach is not to maximize the number of capabilities available to people, it is instead to maximize the number of sets of capabilities. The notion here is that if you maximized simply the number of capabilities then you might enable someone to be: a parent or employed outside the home. But someone might want to do both. If you’re focusing on maximizing the number of sets of capabilities then you’ll end up with: parent, employed, both parent and employed, and neither.The simple beauty of this setup is that it is aiming to maximize the options that people have available to them, from which they then select the group of functionings that they want most. This is why one great book about this approach is entitled “Development as Freedom.” The argument is that development is the process of expanding capabilities, or individual freedom to live the kind of life that you want.I will come to criticisms later on, but one thing people may note is that this approach will lead to a lot of sets of capabilities and we will need some way to rank them or condense the list. In theory, we would want to do this based on how much people value each capability set. I will discuss this issue in more detail in the third section.Examples of why I love the capability approachHere I’ll lay out a few examples that show why I think the capability approach is the best way to think about improving human welfare.First, in opposition to preference-satisfaction approaches, the capability approach values options not taken. I think this accords with most of our intuitions, and that it takes real work for economics to train it out of people. Here are two examples...]]>
Fri, 13 Jan 2023 19:23:55 +0000 EA - The Capability Approach by ryancbriggs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Capability Approach, published by ryancbriggs on January 13, 2023 on The Effective Altruism Forum.This post outlines the capability approach to thinking about human welfare. I think that this approach, while very popular in international development, is neglected in EA. While the capability approach has problems, I think that it provides a better approach to thinking about improving human welfare than approaches based on measuring happiness or subjective wellbeing (SWB) or approaches based on preference satisfaction. Finally, even if you disagree that the capability approach is best, I think this post will be useful to you because it may clarify why many people and organizations in the international development or global health space take the positions that they do. I will be drawing heavily on the work of Amartya Sen, but I will often not be citing specific texts because I’m an academic and getting to write without careful citations is thrilling.This post will have four sections. First, I will describe the capability approach. Second, I will give some simple examples that illustrate why I think that aiming to maximize capabilities is the best way to do good for people. I’ll frame these examples in opposition to other common approaches, but my goal here is mostly constructive and to argue for the capability approach rather than against maximizing, for example, SWB. Third, I will describe what I see as the largest downsides to the capability approach as well as possible responses to these downsides. Fourth and finally, I will explain my weakly-held theory that a lot of the ways that global health or international development organizations, including GiveWell, behave owes to the deep (but often unrecognized) influence of the capability approach on their thought.The capability approachThe fundamental unit of the value in the capability approach is a functioning, which is anything that you can be or do. Eating is a functioning. Being an EA is a functioning. Other functionings include: being a doctor, running, practicing Judaism, sleeping, and being a parent. Capabilities are options to be or do a functioning. The goal of the capability approach is not to maximize the number of capabilities available to people, it is instead to maximize the number of sets of capabilities. The notion here is that if you maximized simply the number of capabilities then you might enable someone to be: a parent or employed outside the home. But someone might want to do both. If you’re focusing on maximizing the number of sets of capabilities then you’ll end up with: parent, employed, both parent and employed, and neither.The simple beauty of this setup is that it is aiming to maximize the options that people have available to them, from which they then select the group of functionings that they want most. This is why one great book about this approach is entitled “Development as Freedom.” The argument is that development is the process of expanding capabilities, or individual freedom to live the kind of life that you want.I will come to criticisms later on, but one thing people may note is that this approach will lead to a lot of sets of capabilities and we will need some way to rank them or condense the list. In theory, we would want to do this based on how much people value each capability set. I will discuss this issue in more detail in the third section.Examples of why I love the capability approachHere I’ll lay out a few examples that show why I think the capability approach is the best way to think about improving human welfare.First, in opposition to preference-satisfaction approaches, the capability approach values options not taken. I think this accords with most of our intuitions, and that it takes real work for economics to train it out of people. Here are two examples...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Capability Approach, published by ryancbriggs on January 13, 2023 on The Effective Altruism Forum.This post outlines the capability approach to thinking about human welfare. I think that this approach, while very popular in international development, is neglected in EA. While the capability approach has problems, I think that it provides a better approach to thinking about improving human welfare than approaches based on measuring happiness or subjective wellbeing (SWB) or approaches based on preference satisfaction. Finally, even if you disagree that the capability approach is best, I think this post will be useful to you because it may clarify why many people and organizations in the international development or global health space take the positions that they do. I will be drawing heavily on the work of Amartya Sen, but I will often not be citing specific texts because I’m an academic and getting to write without careful citations is thrilling.This post will have four sections. First, I will describe the capability approach. Second, I will give some simple examples that illustrate why I think that aiming to maximize capabilities is the best way to do good for people. I’ll frame these examples in opposition to other common approaches, but my goal here is mostly constructive and to argue for the capability approach rather than against maximizing, for example, SWB. Third, I will describe what I see as the largest downsides to the capability approach as well as possible responses to these downsides. Fourth and finally, I will explain my weakly-held theory that a lot of the ways that global health or international development organizations, including GiveWell, behave owes to the deep (but often unrecognized) influence of the capability approach on their thought.The capability approachThe fundamental unit of the value in the capability approach is a functioning, which is anything that you can be or do. Eating is a functioning. Being an EA is a functioning. Other functionings include: being a doctor, running, practicing Judaism, sleeping, and being a parent. Capabilities are options to be or do a functioning. The goal of the capability approach is not to maximize the number of capabilities available to people, it is instead to maximize the number of sets of capabilities. The notion here is that if you maximized simply the number of capabilities then you might enable someone to be: a parent or employed outside the home. But someone might want to do both. If you’re focusing on maximizing the number of sets of capabilities then you’ll end up with: parent, employed, both parent and employed, and neither.The simple beauty of this setup is that it is aiming to maximize the options that people have available to them, from which they then select the group of functionings that they want most. This is why one great book about this approach is entitled “Development as Freedom.” The argument is that development is the process of expanding capabilities, or individual freedom to live the kind of life that you want.I will come to criticisms later on, but one thing people may note is that this approach will lead to a lot of sets of capabilities and we will need some way to rank them or condense the list. In theory, we would want to do this based on how much people value each capability set. I will discuss this issue in more detail in the third section.Examples of why I love the capability approachHere I’ll lay out a few examples that show why I think the capability approach is the best way to think about improving human welfare.First, in opposition to preference-satisfaction approaches, the capability approach values options not taken. I think this accords with most of our intuitions, and that it takes real work for economics to train it out of people. Here are two examples...]]>
ryancbriggs https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:02 None full 4445
XMKDvbjxtZCz3rueM_EA EA - Economic Theory and Global Prioritization (summer 2023): Apply now! by trammell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Economic Theory & Global Prioritization (summer 2023): Apply now!, published by trammell on January 13, 2023 on The Effective Altruism Forum.Background: I'm an economics DPhil student at Oxford and research associate at GPI.Last summer, I organized a course on “Topics in Economic Theory and Global Prioritization”. It aimed to provide a rigorous introduction to a selection of topics in economic theory that appear especially relevant to the project of doing the most good. It was designed primarily for economics graduate students, and strong, late-stage undergraduate students, considering careers in global priorities research.A summary of how it went, including links to the 2022 syllabus, can be found here.Applications are now open for summer 2023! It will probably be run similarly to how it was run in 2022, with minor changes summarized in the post linked above.A provisional syllabus and program outline for 2023 can be found here.Application deadline: February 18 (11:59pm GMT)When you will hear back: March 4 or earlierLocation: Oxford, UKCourse dates: August 12–25 (+optional unstructured week to September 2)The course is sponsored by the Forethought Foundation. If accepted, your transportation to and from Oxford, and accommodation in Oxford for the duration of the course, will be provided.Click here for more info and to apply.Please don’t hesitate to email etgp@forethought.org, or comment below, if you have any comments or questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
trammell https://forum.effectivealtruism.org/posts/XMKDvbjxtZCz3rueM/economic-theory-and-global-prioritization-summer-2023-apply Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Economic Theory & Global Prioritization (summer 2023): Apply now!, published by trammell on January 13, 2023 on The Effective Altruism Forum.Background: I'm an economics DPhil student at Oxford and research associate at GPI.Last summer, I organized a course on “Topics in Economic Theory and Global Prioritization”. It aimed to provide a rigorous introduction to a selection of topics in economic theory that appear especially relevant to the project of doing the most good. It was designed primarily for economics graduate students, and strong, late-stage undergraduate students, considering careers in global priorities research.A summary of how it went, including links to the 2022 syllabus, can be found here.Applications are now open for summer 2023! It will probably be run similarly to how it was run in 2022, with minor changes summarized in the post linked above.A provisional syllabus and program outline for 2023 can be found here.Application deadline: February 18 (11:59pm GMT)When you will hear back: March 4 or earlierLocation: Oxford, UKCourse dates: August 12–25 (+optional unstructured week to September 2)The course is sponsored by the Forethought Foundation. If accepted, your transportation to and from Oxford, and accommodation in Oxford for the duration of the course, will be provided.Click here for more info and to apply.Please don’t hesitate to email etgp@forethought.org, or comment below, if you have any comments or questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 13 Jan 2023 19:23:33 +0000 EA - Economic Theory and Global Prioritization (summer 2023): Apply now! by trammell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Economic Theory & Global Prioritization (summer 2023): Apply now!, published by trammell on January 13, 2023 on The Effective Altruism Forum.Background: I'm an economics DPhil student at Oxford and research associate at GPI.Last summer, I organized a course on “Topics in Economic Theory and Global Prioritization”. It aimed to provide a rigorous introduction to a selection of topics in economic theory that appear especially relevant to the project of doing the most good. It was designed primarily for economics graduate students, and strong, late-stage undergraduate students, considering careers in global priorities research.A summary of how it went, including links to the 2022 syllabus, can be found here.Applications are now open for summer 2023! It will probably be run similarly to how it was run in 2022, with minor changes summarized in the post linked above.A provisional syllabus and program outline for 2023 can be found here.Application deadline: February 18 (11:59pm GMT)When you will hear back: March 4 or earlierLocation: Oxford, UKCourse dates: August 12–25 (+optional unstructured week to September 2)The course is sponsored by the Forethought Foundation. If accepted, your transportation to and from Oxford, and accommodation in Oxford for the duration of the course, will be provided.Click here for more info and to apply.Please don’t hesitate to email etgp@forethought.org, or comment below, if you have any comments or questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Economic Theory & Global Prioritization (summer 2023): Apply now!, published by trammell on January 13, 2023 on The Effective Altruism Forum.Background: I'm an economics DPhil student at Oxford and research associate at GPI.Last summer, I organized a course on “Topics in Economic Theory and Global Prioritization”. It aimed to provide a rigorous introduction to a selection of topics in economic theory that appear especially relevant to the project of doing the most good. It was designed primarily for economics graduate students, and strong, late-stage undergraduate students, considering careers in global priorities research.A summary of how it went, including links to the 2022 syllabus, can be found here.Applications are now open for summer 2023! It will probably be run similarly to how it was run in 2022, with minor changes summarized in the post linked above.A provisional syllabus and program outline for 2023 can be found here.Application deadline: February 18 (11:59pm GMT)When you will hear back: March 4 or earlierLocation: Oxford, UKCourse dates: August 12–25 (+optional unstructured week to September 2)The course is sponsored by the Forethought Foundation. If accepted, your transportation to and from Oxford, and accommodation in Oxford for the duration of the course, will be provided.Click here for more info and to apply.Please don’t hesitate to email etgp@forethought.org, or comment below, if you have any comments or questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
trammell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:50 None full 4444
NdZPQxc74zNdg8Mvm_EA EA - Tyler Cowen on effective altruism (December 2022) by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tyler Cowen on effective altruism (December 2022), published by peterhartree on January 13, 2023 on The Effective Altruism Forum.In December 2022, Tyler Cowen gave a talk on effective altruism. It was hosted by Luca Stroppa and Theron Pummer at the University of St Andrews.You can watch the talk on YouTube, listen to the podcast version, or read the transcript below.TranscriptThe transcript was generated by OpenAI Whisper. I made a couple of minor edits and corrections for clarity.Hello everyone, and thank you for coming to this talk. Thank you Theron for the introduction.I find effective altruism is what people actually want to talk about, which is a good thing. So I thought I would talk about it as well. And I'll start by giving two big reasons why I'm favorably inclined, but then work through a number of details where I might differ from effective altruism.So let me give you what I think are the two big pluses. They're not the only pluses. But to me, they're the two reasons why in the net ledger, it's strongly positive.The first is that simply as a youth movement, effective altruism seems to attract more talented young people than anything else I know of right now by a considerable margin. And I've observed this by running my own project, Emergent Ventures for Talented Young People. And I just see time and again, the smartest and most successful people who apply get grants. They turn out to have connections to the EA movement. And that's very much to the credit of effective altruism. Whether or not you agree with everything there, that to me is a more important fact about the movement than anything else you might say about it. Unlike some philosophers, I do not draw a totally rigorous and clear distinction between what you might call the conceptual side of effective altruism and the more sociological side. They're somewhat intertwined and best thought of as such.The second positive point that I like about effective altruism is simply that what you might call traditional charity is so terrible, such a train wreck, so poorly conceived and ill thought out and badly managed and run that anything that waves its arms and says, hey, we should do better than this, again, whether or not you agree with all of the points, that has to be a big positive. So whether or not you think we should send more money to anti-malaria bed nets, the point is effective altruism is forcing us all to rethink what philanthropy should be. And again, that for me is really a very significant positive.Now before I get to some of the more arcane points of difference or at least different emphasis, let me try to outline some core propositions of effective altruism, noting I don't think there's a single dominant or correct definition. It's a pretty diverse movement. I learned recently there's like a sub movement, effective altruism for Christians. I also learned there's a sub sub movement, effective altruism for Quakers. So I don't think there's any one way to sum it all up, but I think you'll recognize these themes as things you see appearing repeatedly.So my first group of themes will be those where contemporary effective altruism differs a bit from classic utilitarianism. And then I'll give two ways in which effective altruism is fairly similar to classical utilitarianism.So here are three ways I think current effective altruism has evolved from classical utilitarianism and is different:The first is simply an emphasis on existential risk, the notion that the entire world could end, world of humans at least, and this would be a very terrible thing. I don't recall having read that, say, in Bentham or in John Stuart Mill. It might be in there somewhere, but it certainly receives far, far more emphasis today than it did in the 19th century.The second point, which I think is somewhat in c...]]>
peterhartree https://forum.effectivealtruism.org/posts/NdZPQxc74zNdg8Mvm/tyler-cowen-on-effective-altruism-december-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tyler Cowen on effective altruism (December 2022), published by peterhartree on January 13, 2023 on The Effective Altruism Forum.In December 2022, Tyler Cowen gave a talk on effective altruism. It was hosted by Luca Stroppa and Theron Pummer at the University of St Andrews.You can watch the talk on YouTube, listen to the podcast version, or read the transcript below.TranscriptThe transcript was generated by OpenAI Whisper. I made a couple of minor edits and corrections for clarity.Hello everyone, and thank you for coming to this talk. Thank you Theron for the introduction.I find effective altruism is what people actually want to talk about, which is a good thing. So I thought I would talk about it as well. And I'll start by giving two big reasons why I'm favorably inclined, but then work through a number of details where I might differ from effective altruism.So let me give you what I think are the two big pluses. They're not the only pluses. But to me, they're the two reasons why in the net ledger, it's strongly positive.The first is that simply as a youth movement, effective altruism seems to attract more talented young people than anything else I know of right now by a considerable margin. And I've observed this by running my own project, Emergent Ventures for Talented Young People. And I just see time and again, the smartest and most successful people who apply get grants. They turn out to have connections to the EA movement. And that's very much to the credit of effective altruism. Whether or not you agree with everything there, that to me is a more important fact about the movement than anything else you might say about it. Unlike some philosophers, I do not draw a totally rigorous and clear distinction between what you might call the conceptual side of effective altruism and the more sociological side. They're somewhat intertwined and best thought of as such.The second positive point that I like about effective altruism is simply that what you might call traditional charity is so terrible, such a train wreck, so poorly conceived and ill thought out and badly managed and run that anything that waves its arms and says, hey, we should do better than this, again, whether or not you agree with all of the points, that has to be a big positive. So whether or not you think we should send more money to anti-malaria bed nets, the point is effective altruism is forcing us all to rethink what philanthropy should be. And again, that for me is really a very significant positive.Now before I get to some of the more arcane points of difference or at least different emphasis, let me try to outline some core propositions of effective altruism, noting I don't think there's a single dominant or correct definition. It's a pretty diverse movement. I learned recently there's like a sub movement, effective altruism for Christians. I also learned there's a sub sub movement, effective altruism for Quakers. So I don't think there's any one way to sum it all up, but I think you'll recognize these themes as things you see appearing repeatedly.So my first group of themes will be those where contemporary effective altruism differs a bit from classic utilitarianism. And then I'll give two ways in which effective altruism is fairly similar to classical utilitarianism.So here are three ways I think current effective altruism has evolved from classical utilitarianism and is different:The first is simply an emphasis on existential risk, the notion that the entire world could end, world of humans at least, and this would be a very terrible thing. I don't recall having read that, say, in Bentham or in John Stuart Mill. It might be in there somewhere, but it certainly receives far, far more emphasis today than it did in the 19th century.The second point, which I think is somewhat in c...]]>
Fri, 13 Jan 2023 17:42:47 +0000 EA - Tyler Cowen on effective altruism (December 2022) by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tyler Cowen on effective altruism (December 2022), published by peterhartree on January 13, 2023 on The Effective Altruism Forum.In December 2022, Tyler Cowen gave a talk on effective altruism. It was hosted by Luca Stroppa and Theron Pummer at the University of St Andrews.You can watch the talk on YouTube, listen to the podcast version, or read the transcript below.TranscriptThe transcript was generated by OpenAI Whisper. I made a couple of minor edits and corrections for clarity.Hello everyone, and thank you for coming to this talk. Thank you Theron for the introduction.I find effective altruism is what people actually want to talk about, which is a good thing. So I thought I would talk about it as well. And I'll start by giving two big reasons why I'm favorably inclined, but then work through a number of details where I might differ from effective altruism.So let me give you what I think are the two big pluses. They're not the only pluses. But to me, they're the two reasons why in the net ledger, it's strongly positive.The first is that simply as a youth movement, effective altruism seems to attract more talented young people than anything else I know of right now by a considerable margin. And I've observed this by running my own project, Emergent Ventures for Talented Young People. And I just see time and again, the smartest and most successful people who apply get grants. They turn out to have connections to the EA movement. And that's very much to the credit of effective altruism. Whether or not you agree with everything there, that to me is a more important fact about the movement than anything else you might say about it. Unlike some philosophers, I do not draw a totally rigorous and clear distinction between what you might call the conceptual side of effective altruism and the more sociological side. They're somewhat intertwined and best thought of as such.The second positive point that I like about effective altruism is simply that what you might call traditional charity is so terrible, such a train wreck, so poorly conceived and ill thought out and badly managed and run that anything that waves its arms and says, hey, we should do better than this, again, whether or not you agree with all of the points, that has to be a big positive. So whether or not you think we should send more money to anti-malaria bed nets, the point is effective altruism is forcing us all to rethink what philanthropy should be. And again, that for me is really a very significant positive.Now before I get to some of the more arcane points of difference or at least different emphasis, let me try to outline some core propositions of effective altruism, noting I don't think there's a single dominant or correct definition. It's a pretty diverse movement. I learned recently there's like a sub movement, effective altruism for Christians. I also learned there's a sub sub movement, effective altruism for Quakers. So I don't think there's any one way to sum it all up, but I think you'll recognize these themes as things you see appearing repeatedly.So my first group of themes will be those where contemporary effective altruism differs a bit from classic utilitarianism. And then I'll give two ways in which effective altruism is fairly similar to classical utilitarianism.So here are three ways I think current effective altruism has evolved from classical utilitarianism and is different:The first is simply an emphasis on existential risk, the notion that the entire world could end, world of humans at least, and this would be a very terrible thing. I don't recall having read that, say, in Bentham or in John Stuart Mill. It might be in there somewhere, but it certainly receives far, far more emphasis today than it did in the 19th century.The second point, which I think is somewhat in c...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tyler Cowen on effective altruism (December 2022), published by peterhartree on January 13, 2023 on The Effective Altruism Forum.In December 2022, Tyler Cowen gave a talk on effective altruism. It was hosted by Luca Stroppa and Theron Pummer at the University of St Andrews.You can watch the talk on YouTube, listen to the podcast version, or read the transcript below.TranscriptThe transcript was generated by OpenAI Whisper. I made a couple of minor edits and corrections for clarity.Hello everyone, and thank you for coming to this talk. Thank you Theron for the introduction.I find effective altruism is what people actually want to talk about, which is a good thing. So I thought I would talk about it as well. And I'll start by giving two big reasons why I'm favorably inclined, but then work through a number of details where I might differ from effective altruism.So let me give you what I think are the two big pluses. They're not the only pluses. But to me, they're the two reasons why in the net ledger, it's strongly positive.The first is that simply as a youth movement, effective altruism seems to attract more talented young people than anything else I know of right now by a considerable margin. And I've observed this by running my own project, Emergent Ventures for Talented Young People. And I just see time and again, the smartest and most successful people who apply get grants. They turn out to have connections to the EA movement. And that's very much to the credit of effective altruism. Whether or not you agree with everything there, that to me is a more important fact about the movement than anything else you might say about it. Unlike some philosophers, I do not draw a totally rigorous and clear distinction between what you might call the conceptual side of effective altruism and the more sociological side. They're somewhat intertwined and best thought of as such.The second positive point that I like about effective altruism is simply that what you might call traditional charity is so terrible, such a train wreck, so poorly conceived and ill thought out and badly managed and run that anything that waves its arms and says, hey, we should do better than this, again, whether or not you agree with all of the points, that has to be a big positive. So whether or not you think we should send more money to anti-malaria bed nets, the point is effective altruism is forcing us all to rethink what philanthropy should be. And again, that for me is really a very significant positive.Now before I get to some of the more arcane points of difference or at least different emphasis, let me try to outline some core propositions of effective altruism, noting I don't think there's a single dominant or correct definition. It's a pretty diverse movement. I learned recently there's like a sub movement, effective altruism for Christians. I also learned there's a sub sub movement, effective altruism for Quakers. So I don't think there's any one way to sum it all up, but I think you'll recognize these themes as things you see appearing repeatedly.So my first group of themes will be those where contemporary effective altruism differs a bit from classic utilitarianism. And then I'll give two ways in which effective altruism is fairly similar to classical utilitarianism.So here are three ways I think current effective altruism has evolved from classical utilitarianism and is different:The first is simply an emphasis on existential risk, the notion that the entire world could end, world of humans at least, and this would be a very terrible thing. I don't recall having read that, say, in Bentham or in John Stuart Mill. It might be in there somewhere, but it certainly receives far, far more emphasis today than it did in the 19th century.The second point, which I think is somewhat in c...]]>
peterhartree https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 28:38 None full 4450
5vFmMXWsh6PaYjqab_EA EA - [Linkpost] FLI alleged to have offered funding to far right foundation by Jens Nordmark Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] FLI alleged to have offered funding to far right foundation, published by Jens Nordmark on January 13, 2023 on The Effective Altruism Forum.This seems concerning. It is claimed that the Future of Life Institute, run by MIT professor Max Tegmark, offered but did not pay out a grant to a Swedish far-right foundation. The character of this foundation and its associates is well-known in Sweden. Expo is an old and respected watchdog organization specialized on neo-nazism and related movements.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jens Nordmark https://forum.effectivealtruism.org/posts/5vFmMXWsh6PaYjqab/linkpost-fli-alleged-to-have-offered-funding-to-far-right Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] FLI alleged to have offered funding to far right foundation, published by Jens Nordmark on January 13, 2023 on The Effective Altruism Forum.This seems concerning. It is claimed that the Future of Life Institute, run by MIT professor Max Tegmark, offered but did not pay out a grant to a Swedish far-right foundation. The character of this foundation and its associates is well-known in Sweden. Expo is an old and respected watchdog organization specialized on neo-nazism and related movements.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 13 Jan 2023 15:53:58 +0000 EA - [Linkpost] FLI alleged to have offered funding to far right foundation by Jens Nordmark Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] FLI alleged to have offered funding to far right foundation, published by Jens Nordmark on January 13, 2023 on The Effective Altruism Forum.This seems concerning. It is claimed that the Future of Life Institute, run by MIT professor Max Tegmark, offered but did not pay out a grant to a Swedish far-right foundation. The character of this foundation and its associates is well-known in Sweden. Expo is an old and respected watchdog organization specialized on neo-nazism and related movements.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] FLI alleged to have offered funding to far right foundation, published by Jens Nordmark on January 13, 2023 on The Effective Altruism Forum.This seems concerning. It is claimed that the Future of Life Institute, run by MIT professor Max Tegmark, offered but did not pay out a grant to a Swedish far-right foundation. The character of this foundation and its associates is well-known in Sweden. Expo is an old and respected watchdog organization specialized on neo-nazism and related movements.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jens Nordmark https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:49 None full 4447
kuqgJDPF6nfscSZsZ_EA EA - Thread for discussing Bostrom's email and apology by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thread for discussing Bostrom's email and apology, published by Lizka on January 13, 2023 on The Effective Altruism Forum.The Forum is getting a bit swamped with discussions about Bostrom's email and apology. We’re making this thread where you can discuss the topic.All other posts on this topic will be marked as “Personal Blog” — people who opt in or have opted into seeing “Personal Blog” posts will see them on the Frontpage, but others won’t; they’ll see them only in Recent Discussion or in All Posts. (If you want to change your "Personal Blog" setting, you can do that by following the instructions here.)(Please also feel free to give us feedback on this thread and approach. This is the first time we’ve tried this in response to events that dominate Forum discussion. You can give feedback by commenting on the thread, or by reaching out to forum@effectivealtruism.org.)If you choose to participate in this discussion, please remember Forum norms. Chiefly,Be kind.Stay civil, at the minimum. Don’t sneer or be snarky. In general, assume good faith. We may delete unnecessary rudeness and issue warnings or bans for it.Substantive disagreements are fine and expected. Disagreements help us find the truth and are part of healthy communication.Please try to remember that most people on the Forum are here for collaborative discussions about doing good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://forum.effectivealtruism.org/posts/kuqgJDPF6nfscSZsZ/thread-for-discussing-bostrom-s-email-and-apology Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thread for discussing Bostrom's email and apology, published by Lizka on January 13, 2023 on The Effective Altruism Forum.The Forum is getting a bit swamped with discussions about Bostrom's email and apology. We’re making this thread where you can discuss the topic.All other posts on this topic will be marked as “Personal Blog” — people who opt in or have opted into seeing “Personal Blog” posts will see them on the Frontpage, but others won’t; they’ll see them only in Recent Discussion or in All Posts. (If you want to change your "Personal Blog" setting, you can do that by following the instructions here.)(Please also feel free to give us feedback on this thread and approach. This is the first time we’ve tried this in response to events that dominate Forum discussion. You can give feedback by commenting on the thread, or by reaching out to forum@effectivealtruism.org.)If you choose to participate in this discussion, please remember Forum norms. Chiefly,Be kind.Stay civil, at the minimum. Don’t sneer or be snarky. In general, assume good faith. We may delete unnecessary rudeness and issue warnings or bans for it.Substantive disagreements are fine and expected. Disagreements help us find the truth and are part of healthy communication.Please try to remember that most people on the Forum are here for collaborative discussions about doing good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 13 Jan 2023 13:58:54 +0000 EA - Thread for discussing Bostrom's email and apology by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thread for discussing Bostrom's email and apology, published by Lizka on January 13, 2023 on The Effective Altruism Forum.The Forum is getting a bit swamped with discussions about Bostrom's email and apology. We’re making this thread where you can discuss the topic.All other posts on this topic will be marked as “Personal Blog” — people who opt in or have opted into seeing “Personal Blog” posts will see them on the Frontpage, but others won’t; they’ll see them only in Recent Discussion or in All Posts. (If you want to change your "Personal Blog" setting, you can do that by following the instructions here.)(Please also feel free to give us feedback on this thread and approach. This is the first time we’ve tried this in response to events that dominate Forum discussion. You can give feedback by commenting on the thread, or by reaching out to forum@effectivealtruism.org.)If you choose to participate in this discussion, please remember Forum norms. Chiefly,Be kind.Stay civil, at the minimum. Don’t sneer or be snarky. In general, assume good faith. We may delete unnecessary rudeness and issue warnings or bans for it.Substantive disagreements are fine and expected. Disagreements help us find the truth and are part of healthy communication.Please try to remember that most people on the Forum are here for collaborative discussions about doing good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thread for discussing Bostrom's email and apology, published by Lizka on January 13, 2023 on The Effective Altruism Forum.The Forum is getting a bit swamped with discussions about Bostrom's email and apology. We’re making this thread where you can discuss the topic.All other posts on this topic will be marked as “Personal Blog” — people who opt in or have opted into seeing “Personal Blog” posts will see them on the Frontpage, but others won’t; they’ll see them only in Recent Discussion or in All Posts. (If you want to change your "Personal Blog" setting, you can do that by following the instructions here.)(Please also feel free to give us feedback on this thread and approach. This is the first time we’ve tried this in response to events that dominate Forum discussion. You can give feedback by commenting on the thread, or by reaching out to forum@effectivealtruism.org.)If you choose to participate in this discussion, please remember Forum norms. Chiefly,Be kind.Stay civil, at the minimum. Don’t sneer or be snarky. In general, assume good faith. We may delete unnecessary rudeness and issue warnings or bans for it.Substantive disagreements are fine and expected. Disagreements help us find the truth and are part of healthy communication.Please try to remember that most people on the Forum are here for collaborative discussions about doing good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:32 None full 4446
irhgjSgvocfrwnzRz_EA EA - Should the forum be structured such that the drama of the day doesn't occur on the front page? by Chris Leong Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should the forum be structured such that the drama of the day doesn't occur on the front page?, published by Chris Leong on January 13, 2023 on The Effective Altruism Forum.I feel kind of hypocritical here, as I ended up commenting on a bunch of the posts related to the drama of the day, but here goes anyway...I think it's important for people to be able to criticize Effective Altruism. One of the things I love about this community is its openness to criticism.At the same time, I'm starting to worry that constantly having all this drama play out on the front page of the forum is very distracting. But what would be even more worrying is if we've now reached a certain size/level of attention where this is the new normal going forward.So I guess I feel it's gotten to the point where I feel that we have to discuss how to balance these twin interests. I think this is incredibly challenging. If we change how this site works to address this issue, I want to these policies be fair to people holding different viewpoints and on different sides of these issues. And this is tricky, if we decided "let's move drama of the day discussion to a separate section of the site", well then maybe that just leads to a lot of arguments about what counts as drama and people feeling their issues aren't being heard or that they're being treated unfairly.I don't actually know if there's any policy or site mechanics shift I would reflectively endorse after thinking through the consequences. But maybe someone thinks that they have a solution to this?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Chris Leong https://forum.effectivealtruism.org/posts/irhgjSgvocfrwnzRz/should-the-forum-be-structured-such-that-the-drama-of-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should the forum be structured such that the drama of the day doesn't occur on the front page?, published by Chris Leong on January 13, 2023 on The Effective Altruism Forum.I feel kind of hypocritical here, as I ended up commenting on a bunch of the posts related to the drama of the day, but here goes anyway...I think it's important for people to be able to criticize Effective Altruism. One of the things I love about this community is its openness to criticism.At the same time, I'm starting to worry that constantly having all this drama play out on the front page of the forum is very distracting. But what would be even more worrying is if we've now reached a certain size/level of attention where this is the new normal going forward.So I guess I feel it's gotten to the point where I feel that we have to discuss how to balance these twin interests. I think this is incredibly challenging. If we change how this site works to address this issue, I want to these policies be fair to people holding different viewpoints and on different sides of these issues. And this is tricky, if we decided "let's move drama of the day discussion to a separate section of the site", well then maybe that just leads to a lot of arguments about what counts as drama and people feeling their issues aren't being heard or that they're being treated unfairly.I don't actually know if there's any policy or site mechanics shift I would reflectively endorse after thinking through the consequences. But maybe someone thinks that they have a solution to this?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 13 Jan 2023 12:41:05 +0000 EA - Should the forum be structured such that the drama of the day doesn't occur on the front page? by Chris Leong Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should the forum be structured such that the drama of the day doesn't occur on the front page?, published by Chris Leong on January 13, 2023 on The Effective Altruism Forum.I feel kind of hypocritical here, as I ended up commenting on a bunch of the posts related to the drama of the day, but here goes anyway...I think it's important for people to be able to criticize Effective Altruism. One of the things I love about this community is its openness to criticism.At the same time, I'm starting to worry that constantly having all this drama play out on the front page of the forum is very distracting. But what would be even more worrying is if we've now reached a certain size/level of attention where this is the new normal going forward.So I guess I feel it's gotten to the point where I feel that we have to discuss how to balance these twin interests. I think this is incredibly challenging. If we change how this site works to address this issue, I want to these policies be fair to people holding different viewpoints and on different sides of these issues. And this is tricky, if we decided "let's move drama of the day discussion to a separate section of the site", well then maybe that just leads to a lot of arguments about what counts as drama and people feeling their issues aren't being heard or that they're being treated unfairly.I don't actually know if there's any policy or site mechanics shift I would reflectively endorse after thinking through the consequences. But maybe someone thinks that they have a solution to this?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should the forum be structured such that the drama of the day doesn't occur on the front page?, published by Chris Leong on January 13, 2023 on The Effective Altruism Forum.I feel kind of hypocritical here, as I ended up commenting on a bunch of the posts related to the drama of the day, but here goes anyway...I think it's important for people to be able to criticize Effective Altruism. One of the things I love about this community is its openness to criticism.At the same time, I'm starting to worry that constantly having all this drama play out on the front page of the forum is very distracting. But what would be even more worrying is if we've now reached a certain size/level of attention where this is the new normal going forward.So I guess I feel it's gotten to the point where I feel that we have to discuss how to balance these twin interests. I think this is incredibly challenging. If we change how this site works to address this issue, I want to these policies be fair to people holding different viewpoints and on different sides of these issues. And this is tricky, if we decided "let's move drama of the day discussion to a separate section of the site", well then maybe that just leads to a lot of arguments about what counts as drama and people feeling their issues aren't being heard or that they're being treated unfairly.I don't actually know if there's any policy or site mechanics shift I would reflectively endorse after thinking through the consequences. But maybe someone thinks that they have a solution to this?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Chris Leong https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:35 None full 4448
f2qojPr8NaMPo2KJC_EA EA - Beware safety-washing by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beware safety-washing, published by Lizka on January 13, 2023 on The Effective Altruism Forum.Tl;dr: don’t be fooled into thinking that some groups working on AI are taking “safety” concerns seriously (enough).OutlineTwo non-AI examplesGreenwashingHumanewashingDefinition of safety-washingWhat are the harms?What can (and should) we do about this?Note: I’m posting this in my personal capacity. All views expressed here are my own. I am also not (at all) an expert on the topic.Two non-AI examplesGreenwashingCompanies “greenwash” when they mislead people into incorrectly thinking that their products or practices are climate and environment-friendly (or that the company focuses on climate-friendly work).Investopedia explains:Greenwashing is an attempt to capitalize on the growing demand for environmentally sound products.The term originated in the 1960s, when the hotel industry devised one of the most blatant examples of greenwashing. They placed notices in hotel rooms asking guests to reuse their towels to save the environment. The hotels enjoyed the benefit of lower laundry costs.Wikipedia: “[Jay Westerveld, the originator of the term] concluded that often the real objective was increased profit, and labeled this and other profitable-but-ineffective ‘environmentally-conscientious’ acts as greenwashing.” (Wikipedia also provides a long list of examples of the practice.)I enjoy some of the parody/art (responding to things like this) that comes out of noticing the hypocrisy of the practice.HumanewashingA similar phenomenon is the “humanewashing” of animal products. There’s a Vox article that explains this phenomenon (as it happens in the US):A carton of “all natural” eggs might bear an illustration of a rustic farm; packages of chicken meat are touted as “humanely raised."In a few cases, these sunny depictions are accurate. But far too often they mask the industrial conditions under which these animals were raised and slaughtered.Animal welfare and consumer protection advocates have a name for such misleading labeling: “humanewashing.” And research suggests it’s having precisely the effect that meat producers intend it to. A recent national survey by C.O.nxt, a food marketing firm, found that animal welfare and “natural” claims on meat, dairy, and egg packaging increased the intent to purchase for over half of consumers....rather than engaging in the costly endeavor of actually changing their farming practices, far too many major meat producers are attempting to assuage consumer concerns by merely changing their packaging and advertising with claims of sustainable farms and humane treatment. These efforts mislead consumers, and undermine the small sliver of farmers who have put in the hard work to actually improve animal treatment.If you want a resource on what food labels actually mean, here are some: one, two, three (these are most useful in the US). (If you know of a better one, please let me know. I’d especially love a resource that lists the estimated relative value of things like “free-range” vs. “cage-free,” etc., according to cited and reasonable sources.)Definition of safety-washingIn brief, “safety-washing” is misleading people into thinking that some products or practices are “safe” or that safety is a big priority for a given company, when this is not the case.An increasing number of people believe that developing powerful AI systems is very dangerous, so companies might want to show that they are being “safe” in their work on AI.Being safe with AI is hard and potentially costly, so if you’re a company working on AI capabilities, you might want to overstate the extent to which you focus on “safety.”So you might:Pick a safety paradigm that is convenient for you, and focus on thatTalk about “safety” when you really mean other kinds o...]]>
Lizka https://forum.effectivealtruism.org/posts/f2qojPr8NaMPo2KJC/beware-safety-washing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beware safety-washing, published by Lizka on January 13, 2023 on The Effective Altruism Forum.Tl;dr: don’t be fooled into thinking that some groups working on AI are taking “safety” concerns seriously (enough).OutlineTwo non-AI examplesGreenwashingHumanewashingDefinition of safety-washingWhat are the harms?What can (and should) we do about this?Note: I’m posting this in my personal capacity. All views expressed here are my own. I am also not (at all) an expert on the topic.Two non-AI examplesGreenwashingCompanies “greenwash” when they mislead people into incorrectly thinking that their products or practices are climate and environment-friendly (or that the company focuses on climate-friendly work).Investopedia explains:Greenwashing is an attempt to capitalize on the growing demand for environmentally sound products.The term originated in the 1960s, when the hotel industry devised one of the most blatant examples of greenwashing. They placed notices in hotel rooms asking guests to reuse their towels to save the environment. The hotels enjoyed the benefit of lower laundry costs.Wikipedia: “[Jay Westerveld, the originator of the term] concluded that often the real objective was increased profit, and labeled this and other profitable-but-ineffective ‘environmentally-conscientious’ acts as greenwashing.” (Wikipedia also provides a long list of examples of the practice.)I enjoy some of the parody/art (responding to things like this) that comes out of noticing the hypocrisy of the practice.HumanewashingA similar phenomenon is the “humanewashing” of animal products. There’s a Vox article that explains this phenomenon (as it happens in the US):A carton of “all natural” eggs might bear an illustration of a rustic farm; packages of chicken meat are touted as “humanely raised."In a few cases, these sunny depictions are accurate. But far too often they mask the industrial conditions under which these animals were raised and slaughtered.Animal welfare and consumer protection advocates have a name for such misleading labeling: “humanewashing.” And research suggests it’s having precisely the effect that meat producers intend it to. A recent national survey by C.O.nxt, a food marketing firm, found that animal welfare and “natural” claims on meat, dairy, and egg packaging increased the intent to purchase for over half of consumers....rather than engaging in the costly endeavor of actually changing their farming practices, far too many major meat producers are attempting to assuage consumer concerns by merely changing their packaging and advertising with claims of sustainable farms and humane treatment. These efforts mislead consumers, and undermine the small sliver of farmers who have put in the hard work to actually improve animal treatment.If you want a resource on what food labels actually mean, here are some: one, two, three (these are most useful in the US). (If you know of a better one, please let me know. I’d especially love a resource that lists the estimated relative value of things like “free-range” vs. “cage-free,” etc., according to cited and reasonable sources.)Definition of safety-washingIn brief, “safety-washing” is misleading people into thinking that some products or practices are “safe” or that safety is a big priority for a given company, when this is not the case.An increasing number of people believe that developing powerful AI systems is very dangerous, so companies might want to show that they are being “safe” in their work on AI.Being safe with AI is hard and potentially costly, so if you’re a company working on AI capabilities, you might want to overstate the extent to which you focus on “safety.”So you might:Pick a safety paradigm that is convenient for you, and focus on thatTalk about “safety” when you really mean other kinds o...]]>
Fri, 13 Jan 2023 11:46:36 +0000 EA - Beware safety-washing by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beware safety-washing, published by Lizka on January 13, 2023 on The Effective Altruism Forum.Tl;dr: don’t be fooled into thinking that some groups working on AI are taking “safety” concerns seriously (enough).OutlineTwo non-AI examplesGreenwashingHumanewashingDefinition of safety-washingWhat are the harms?What can (and should) we do about this?Note: I’m posting this in my personal capacity. All views expressed here are my own. I am also not (at all) an expert on the topic.Two non-AI examplesGreenwashingCompanies “greenwash” when they mislead people into incorrectly thinking that their products or practices are climate and environment-friendly (or that the company focuses on climate-friendly work).Investopedia explains:Greenwashing is an attempt to capitalize on the growing demand for environmentally sound products.The term originated in the 1960s, when the hotel industry devised one of the most blatant examples of greenwashing. They placed notices in hotel rooms asking guests to reuse their towels to save the environment. The hotels enjoyed the benefit of lower laundry costs.Wikipedia: “[Jay Westerveld, the originator of the term] concluded that often the real objective was increased profit, and labeled this and other profitable-but-ineffective ‘environmentally-conscientious’ acts as greenwashing.” (Wikipedia also provides a long list of examples of the practice.)I enjoy some of the parody/art (responding to things like this) that comes out of noticing the hypocrisy of the practice.HumanewashingA similar phenomenon is the “humanewashing” of animal products. There’s a Vox article that explains this phenomenon (as it happens in the US):A carton of “all natural” eggs might bear an illustration of a rustic farm; packages of chicken meat are touted as “humanely raised."In a few cases, these sunny depictions are accurate. But far too often they mask the industrial conditions under which these animals were raised and slaughtered.Animal welfare and consumer protection advocates have a name for such misleading labeling: “humanewashing.” And research suggests it’s having precisely the effect that meat producers intend it to. A recent national survey by C.O.nxt, a food marketing firm, found that animal welfare and “natural” claims on meat, dairy, and egg packaging increased the intent to purchase for over half of consumers....rather than engaging in the costly endeavor of actually changing their farming practices, far too many major meat producers are attempting to assuage consumer concerns by merely changing their packaging and advertising with claims of sustainable farms and humane treatment. These efforts mislead consumers, and undermine the small sliver of farmers who have put in the hard work to actually improve animal treatment.If you want a resource on what food labels actually mean, here are some: one, two, three (these are most useful in the US). (If you know of a better one, please let me know. I’d especially love a resource that lists the estimated relative value of things like “free-range” vs. “cage-free,” etc., according to cited and reasonable sources.)Definition of safety-washingIn brief, “safety-washing” is misleading people into thinking that some products or practices are “safe” or that safety is a big priority for a given company, when this is not the case.An increasing number of people believe that developing powerful AI systems is very dangerous, so companies might want to show that they are being “safe” in their work on AI.Being safe with AI is hard and potentially costly, so if you’re a company working on AI capabilities, you might want to overstate the extent to which you focus on “safety.”So you might:Pick a safety paradigm that is convenient for you, and focus on thatTalk about “safety” when you really mean other kinds o...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beware safety-washing, published by Lizka on January 13, 2023 on The Effective Altruism Forum.Tl;dr: don’t be fooled into thinking that some groups working on AI are taking “safety” concerns seriously (enough).OutlineTwo non-AI examplesGreenwashingHumanewashingDefinition of safety-washingWhat are the harms?What can (and should) we do about this?Note: I’m posting this in my personal capacity. All views expressed here are my own. I am also not (at all) an expert on the topic.Two non-AI examplesGreenwashingCompanies “greenwash” when they mislead people into incorrectly thinking that their products or practices are climate and environment-friendly (or that the company focuses on climate-friendly work).Investopedia explains:Greenwashing is an attempt to capitalize on the growing demand for environmentally sound products.The term originated in the 1960s, when the hotel industry devised one of the most blatant examples of greenwashing. They placed notices in hotel rooms asking guests to reuse their towels to save the environment. The hotels enjoyed the benefit of lower laundry costs.Wikipedia: “[Jay Westerveld, the originator of the term] concluded that often the real objective was increased profit, and labeled this and other profitable-but-ineffective ‘environmentally-conscientious’ acts as greenwashing.” (Wikipedia also provides a long list of examples of the practice.)I enjoy some of the parody/art (responding to things like this) that comes out of noticing the hypocrisy of the practice.HumanewashingA similar phenomenon is the “humanewashing” of animal products. There’s a Vox article that explains this phenomenon (as it happens in the US):A carton of “all natural” eggs might bear an illustration of a rustic farm; packages of chicken meat are touted as “humanely raised."In a few cases, these sunny depictions are accurate. But far too often they mask the industrial conditions under which these animals were raised and slaughtered.Animal welfare and consumer protection advocates have a name for such misleading labeling: “humanewashing.” And research suggests it’s having precisely the effect that meat producers intend it to. A recent national survey by C.O.nxt, a food marketing firm, found that animal welfare and “natural” claims on meat, dairy, and egg packaging increased the intent to purchase for over half of consumers....rather than engaging in the costly endeavor of actually changing their farming practices, far too many major meat producers are attempting to assuage consumer concerns by merely changing their packaging and advertising with claims of sustainable farms and humane treatment. These efforts mislead consumers, and undermine the small sliver of farmers who have put in the hard work to actually improve animal treatment.If you want a resource on what food labels actually mean, here are some: one, two, three (these are most useful in the US). (If you know of a better one, please let me know. I’d especially love a resource that lists the estimated relative value of things like “free-range” vs. “cage-free,” etc., according to cited and reasonable sources.)Definition of safety-washingIn brief, “safety-washing” is misleading people into thinking that some products or practices are “safe” or that safety is a big priority for a given company, when this is not the case.An increasing number of people believe that developing powerful AI systems is very dangerous, so companies might want to show that they are being “safe” in their work on AI.Being safe with AI is hard and potentially costly, so if you’re a company working on AI capabilities, you might want to overstate the extent to which you focus on “safety.”So you might:Pick a safety paradigm that is convenient for you, and focus on thatTalk about “safety” when you really mean other kinds o...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:04 None full 4449
LHN8mfi9Dc7bKD6Gu_EA EA - ea.domains - Domains Free to a Good Home by plex Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ea.domains - Domains Free to a Good Home, published by plex on January 12, 2023 on The Effective Altruism Forum.Tl;dr: I’ve set up a database of domains at ea.domains which are free to a good home, to prevent them being squatted and blocked for use. You can add domains you control to it using this form.Since my well received post on setting up an Anti-squatted AI x-risk domains index, I’ve been picking up more and more domains, talking to other domain holders, and building an interface for viewing them. I’ve also put a few of them to good use already!Also a big thanks to Ben West for sharing 215 of CEA's parked domains, which EA projects are welcome to request use of by emailing tech@centreforeffectivealtruism.org. He also offered for CEA to be the custodian of the domains I bought, with the condition that they will be pointed to nameservers I control and that they will return ownership of them for EA-aligned use on request, unless they’re in active use by another EA project. This will save me from having to pay upkeep, which will help make this more sustainable. He's open to extending this offer to holders of other relevant domains.If you'd like to use one of these domains, then message the contact person specified. They each have different policies for handing over the domain, but a good standard is that they'll point the domain towards your servers on request, and hand it over for free if and when you have built something useful at that location.Here’s the top 40 domains I’m most excited about, but go check the full list:DomainPossible useContactaisafety.globalAI Safety Conferencehello@alignment.devexistential-risks.orgHigh quality explanation?tech@centreforeffectivealtruism.orgontological.techNew org?hello@alignment.devexistential.devNew org?hello@alignment.devepistemic.devNew org?hello@alignment.devaisafety.toolsDirectory of resources?jj@aisafetysupport.orgaisafetycareers.com esben@apartresearch.comagenty.orgNew org?hello@alignment.devx-risks.com drewspartz@nonlinear.orgaisafety.events jj@aisafetysupport.orgaisafety.me jj@aisafetysupport.orgeffectivealtruism.venturesEA entrepreneurs or impact investing group?donychristie@gmail.comaisafetybounties.com esben@apartresearch.comaisafety.degreeAI Safety PhD cohortshello@alignment.devalignment.careers80k said they were happy for others to join the careers advice spacehello@alignment.devxrisk.fundx-risk specific funding organization?hello@alignment.devaisafety.careers80k said they were happy for others to join the careers advicespacehello@alignment.devaisafety.fundAIS-specific funding org?hello@alignment.devanimalwelfare.dayDays to do a coordinated push for cause areas?hello@alignment.devglobalhealth.dayDays to do a coordinated push for cause areas?hello@alignment.devalignment.dayDays to do a coordinated push for cause areas?hello@alignment.devaisafety.dayDays to do a coordinated push for cause areas?hello@alignment.devcause-x.dayDays to do a coordinated push for cause areas?hello@alignment.devbiosecurity.dayDays to do a coordinated push for cause areas? Anti-GOF?hello@alignment.devrationality.dayDays to do a coordinated push for cause areas?hello@alignment.devaisafety.questProject Euler for AIS?hello@alignment.devaisafety.coachAn org which specializes in coaching AI safety people?hello@alignment.devaisafety.instituteResearch organization?hello@alignment.devaisafety.observerArticles on news on the AI safety space?hello@alignment.devalignment.academyTraining programhello@alignment.devalignment.fyihello@alignment.devaisafety.venturesEntrepreneurs org?hello@alignment.devaisafety.groupPeer-to-peer study groups for skilling up maybe?hello@alignment.devglobalprioritiesresearch.com tech@centreforeffectivealtruism.orgbountiedrationality.orgWebsite to pair with the BR facebook group.noahcremean@gmail.comaisafety.foundati...]]>
plex https://forum.effectivealtruism.org/posts/LHN8mfi9Dc7bKD6Gu/ea-domains-domains-free-to-a-good-home Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ea.domains - Domains Free to a Good Home, published by plex on January 12, 2023 on The Effective Altruism Forum.Tl;dr: I’ve set up a database of domains at ea.domains which are free to a good home, to prevent them being squatted and blocked for use. You can add domains you control to it using this form.Since my well received post on setting up an Anti-squatted AI x-risk domains index, I’ve been picking up more and more domains, talking to other domain holders, and building an interface for viewing them. I’ve also put a few of them to good use already!Also a big thanks to Ben West for sharing 215 of CEA's parked domains, which EA projects are welcome to request use of by emailing tech@centreforeffectivealtruism.org. He also offered for CEA to be the custodian of the domains I bought, with the condition that they will be pointed to nameservers I control and that they will return ownership of them for EA-aligned use on request, unless they’re in active use by another EA project. This will save me from having to pay upkeep, which will help make this more sustainable. He's open to extending this offer to holders of other relevant domains.If you'd like to use one of these domains, then message the contact person specified. They each have different policies for handing over the domain, but a good standard is that they'll point the domain towards your servers on request, and hand it over for free if and when you have built something useful at that location.Here’s the top 40 domains I’m most excited about, but go check the full list:DomainPossible useContactaisafety.globalAI Safety Conferencehello@alignment.devexistential-risks.orgHigh quality explanation?tech@centreforeffectivealtruism.orgontological.techNew org?hello@alignment.devexistential.devNew org?hello@alignment.devepistemic.devNew org?hello@alignment.devaisafety.toolsDirectory of resources?jj@aisafetysupport.orgaisafetycareers.com esben@apartresearch.comagenty.orgNew org?hello@alignment.devx-risks.com drewspartz@nonlinear.orgaisafety.events jj@aisafetysupport.orgaisafety.me jj@aisafetysupport.orgeffectivealtruism.venturesEA entrepreneurs or impact investing group?donychristie@gmail.comaisafetybounties.com esben@apartresearch.comaisafety.degreeAI Safety PhD cohortshello@alignment.devalignment.careers80k said they were happy for others to join the careers advice spacehello@alignment.devxrisk.fundx-risk specific funding organization?hello@alignment.devaisafety.careers80k said they were happy for others to join the careers advicespacehello@alignment.devaisafety.fundAIS-specific funding org?hello@alignment.devanimalwelfare.dayDays to do a coordinated push for cause areas?hello@alignment.devglobalhealth.dayDays to do a coordinated push for cause areas?hello@alignment.devalignment.dayDays to do a coordinated push for cause areas?hello@alignment.devaisafety.dayDays to do a coordinated push for cause areas?hello@alignment.devcause-x.dayDays to do a coordinated push for cause areas?hello@alignment.devbiosecurity.dayDays to do a coordinated push for cause areas? Anti-GOF?hello@alignment.devrationality.dayDays to do a coordinated push for cause areas?hello@alignment.devaisafety.questProject Euler for AIS?hello@alignment.devaisafety.coachAn org which specializes in coaching AI safety people?hello@alignment.devaisafety.instituteResearch organization?hello@alignment.devaisafety.observerArticles on news on the AI safety space?hello@alignment.devalignment.academyTraining programhello@alignment.devalignment.fyihello@alignment.devaisafety.venturesEntrepreneurs org?hello@alignment.devaisafety.groupPeer-to-peer study groups for skilling up maybe?hello@alignment.devglobalprioritiesresearch.com tech@centreforeffectivealtruism.orgbountiedrationality.orgWebsite to pair with the BR facebook group.noahcremean@gmail.comaisafety.foundati...]]>
Fri, 13 Jan 2023 01:20:48 +0000 EA - ea.domains - Domains Free to a Good Home by plex Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ea.domains - Domains Free to a Good Home, published by plex on January 12, 2023 on The Effective Altruism Forum.Tl;dr: I’ve set up a database of domains at ea.domains which are free to a good home, to prevent them being squatted and blocked for use. You can add domains you control to it using this form.Since my well received post on setting up an Anti-squatted AI x-risk domains index, I’ve been picking up more and more domains, talking to other domain holders, and building an interface for viewing them. I’ve also put a few of them to good use already!Also a big thanks to Ben West for sharing 215 of CEA's parked domains, which EA projects are welcome to request use of by emailing tech@centreforeffectivealtruism.org. He also offered for CEA to be the custodian of the domains I bought, with the condition that they will be pointed to nameservers I control and that they will return ownership of them for EA-aligned use on request, unless they’re in active use by another EA project. This will save me from having to pay upkeep, which will help make this more sustainable. He's open to extending this offer to holders of other relevant domains.If you'd like to use one of these domains, then message the contact person specified. They each have different policies for handing over the domain, but a good standard is that they'll point the domain towards your servers on request, and hand it over for free if and when you have built something useful at that location.Here’s the top 40 domains I’m most excited about, but go check the full list:DomainPossible useContactaisafety.globalAI Safety Conferencehello@alignment.devexistential-risks.orgHigh quality explanation?tech@centreforeffectivealtruism.orgontological.techNew org?hello@alignment.devexistential.devNew org?hello@alignment.devepistemic.devNew org?hello@alignment.devaisafety.toolsDirectory of resources?jj@aisafetysupport.orgaisafetycareers.com esben@apartresearch.comagenty.orgNew org?hello@alignment.devx-risks.com drewspartz@nonlinear.orgaisafety.events jj@aisafetysupport.orgaisafety.me jj@aisafetysupport.orgeffectivealtruism.venturesEA entrepreneurs or impact investing group?donychristie@gmail.comaisafetybounties.com esben@apartresearch.comaisafety.degreeAI Safety PhD cohortshello@alignment.devalignment.careers80k said they were happy for others to join the careers advice spacehello@alignment.devxrisk.fundx-risk specific funding organization?hello@alignment.devaisafety.careers80k said they were happy for others to join the careers advicespacehello@alignment.devaisafety.fundAIS-specific funding org?hello@alignment.devanimalwelfare.dayDays to do a coordinated push for cause areas?hello@alignment.devglobalhealth.dayDays to do a coordinated push for cause areas?hello@alignment.devalignment.dayDays to do a coordinated push for cause areas?hello@alignment.devaisafety.dayDays to do a coordinated push for cause areas?hello@alignment.devcause-x.dayDays to do a coordinated push for cause areas?hello@alignment.devbiosecurity.dayDays to do a coordinated push for cause areas? Anti-GOF?hello@alignment.devrationality.dayDays to do a coordinated push for cause areas?hello@alignment.devaisafety.questProject Euler for AIS?hello@alignment.devaisafety.coachAn org which specializes in coaching AI safety people?hello@alignment.devaisafety.instituteResearch organization?hello@alignment.devaisafety.observerArticles on news on the AI safety space?hello@alignment.devalignment.academyTraining programhello@alignment.devalignment.fyihello@alignment.devaisafety.venturesEntrepreneurs org?hello@alignment.devaisafety.groupPeer-to-peer study groups for skilling up maybe?hello@alignment.devglobalprioritiesresearch.com tech@centreforeffectivealtruism.orgbountiedrationality.orgWebsite to pair with the BR facebook group.noahcremean@gmail.comaisafety.foundati...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ea.domains - Domains Free to a Good Home, published by plex on January 12, 2023 on The Effective Altruism Forum.Tl;dr: I’ve set up a database of domains at ea.domains which are free to a good home, to prevent them being squatted and blocked for use. You can add domains you control to it using this form.Since my well received post on setting up an Anti-squatted AI x-risk domains index, I’ve been picking up more and more domains, talking to other domain holders, and building an interface for viewing them. I’ve also put a few of them to good use already!Also a big thanks to Ben West for sharing 215 of CEA's parked domains, which EA projects are welcome to request use of by emailing tech@centreforeffectivealtruism.org. He also offered for CEA to be the custodian of the domains I bought, with the condition that they will be pointed to nameservers I control and that they will return ownership of them for EA-aligned use on request, unless they’re in active use by another EA project. This will save me from having to pay upkeep, which will help make this more sustainable. He's open to extending this offer to holders of other relevant domains.If you'd like to use one of these domains, then message the contact person specified. They each have different policies for handing over the domain, but a good standard is that they'll point the domain towards your servers on request, and hand it over for free if and when you have built something useful at that location.Here’s the top 40 domains I’m most excited about, but go check the full list:DomainPossible useContactaisafety.globalAI Safety Conferencehello@alignment.devexistential-risks.orgHigh quality explanation?tech@centreforeffectivealtruism.orgontological.techNew org?hello@alignment.devexistential.devNew org?hello@alignment.devepistemic.devNew org?hello@alignment.devaisafety.toolsDirectory of resources?jj@aisafetysupport.orgaisafetycareers.com esben@apartresearch.comagenty.orgNew org?hello@alignment.devx-risks.com drewspartz@nonlinear.orgaisafety.events jj@aisafetysupport.orgaisafety.me jj@aisafetysupport.orgeffectivealtruism.venturesEA entrepreneurs or impact investing group?donychristie@gmail.comaisafetybounties.com esben@apartresearch.comaisafety.degreeAI Safety PhD cohortshello@alignment.devalignment.careers80k said they were happy for others to join the careers advice spacehello@alignment.devxrisk.fundx-risk specific funding organization?hello@alignment.devaisafety.careers80k said they were happy for others to join the careers advicespacehello@alignment.devaisafety.fundAIS-specific funding org?hello@alignment.devanimalwelfare.dayDays to do a coordinated push for cause areas?hello@alignment.devglobalhealth.dayDays to do a coordinated push for cause areas?hello@alignment.devalignment.dayDays to do a coordinated push for cause areas?hello@alignment.devaisafety.dayDays to do a coordinated push for cause areas?hello@alignment.devcause-x.dayDays to do a coordinated push for cause areas?hello@alignment.devbiosecurity.dayDays to do a coordinated push for cause areas? Anti-GOF?hello@alignment.devrationality.dayDays to do a coordinated push for cause areas?hello@alignment.devaisafety.questProject Euler for AIS?hello@alignment.devaisafety.coachAn org which specializes in coaching AI safety people?hello@alignment.devaisafety.instituteResearch organization?hello@alignment.devaisafety.observerArticles on news on the AI safety space?hello@alignment.devalignment.academyTraining programhello@alignment.devalignment.fyihello@alignment.devaisafety.venturesEntrepreneurs org?hello@alignment.devaisafety.groupPeer-to-peer study groups for skilling up maybe?hello@alignment.devglobalprioritiesresearch.com tech@centreforeffectivealtruism.orgbountiedrationality.orgWebsite to pair with the BR facebook group.noahcremean@gmail.comaisafety.foundati...]]>
plex https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:32 None full 4439
JcaecPXE8am7MYJXZ_EA EA - I am tired. by MeganNelson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I am tired., published by MeganNelson on January 12, 2023 on The Effective Altruism Forum.Hello! It’s me, a small-scale part-time EA community builder. I read The Life You Can Save in 2009 and figured that in addition to being a vegan and a social worker, I should donate 10%-plus of my income to highly effective causes. Then I connected with my local effective altruism community in 2016 and figured that I should also spend a not-insignificant portion of my waking hours encouraging and connecting other people who want to make the world a better place.I am cheerful. I work hard. I volunteer at EAGs. I show up for the people around me.Why? Because I think it’s the right thing to do.But folks, I am TIRED.I am tired of having a few people put on pedestals because they are very smart - or very good at self-promotion. I am tired of listening to arguments about who can have the think-iest thoughts. I am tired of drama, scandals, and PR. I am tired of being in a position where I have to apologize for sexism, racism, and other toxic ideologies within this movement. I am tired of convening calls with other community builders where we try to figure out how to best react to the latest Thing That Happened. I am tired of billionaires. And I am really, really tired of seeing people publicly defend bad behavior as good epistemics.I’m just here because I want the world to be a better, kinder, softer place. I know I’m not the only one. I’m not quitting. But I am tired.Maybe you are tired, too.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
MeganNelson https://forum.effectivealtruism.org/posts/JcaecPXE8am7MYJXZ/i-am-tired Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I am tired., published by MeganNelson on January 12, 2023 on The Effective Altruism Forum.Hello! It’s me, a small-scale part-time EA community builder. I read The Life You Can Save in 2009 and figured that in addition to being a vegan and a social worker, I should donate 10%-plus of my income to highly effective causes. Then I connected with my local effective altruism community in 2016 and figured that I should also spend a not-insignificant portion of my waking hours encouraging and connecting other people who want to make the world a better place.I am cheerful. I work hard. I volunteer at EAGs. I show up for the people around me.Why? Because I think it’s the right thing to do.But folks, I am TIRED.I am tired of having a few people put on pedestals because they are very smart - or very good at self-promotion. I am tired of listening to arguments about who can have the think-iest thoughts. I am tired of drama, scandals, and PR. I am tired of being in a position where I have to apologize for sexism, racism, and other toxic ideologies within this movement. I am tired of convening calls with other community builders where we try to figure out how to best react to the latest Thing That Happened. I am tired of billionaires. And I am really, really tired of seeing people publicly defend bad behavior as good epistemics.I’m just here because I want the world to be a better, kinder, softer place. I know I’m not the only one. I’m not quitting. But I am tired.Maybe you are tired, too.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 12 Jan 2023 22:44:02 +0000 EA - I am tired. by MeganNelson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I am tired., published by MeganNelson on January 12, 2023 on The Effective Altruism Forum.Hello! It’s me, a small-scale part-time EA community builder. I read The Life You Can Save in 2009 and figured that in addition to being a vegan and a social worker, I should donate 10%-plus of my income to highly effective causes. Then I connected with my local effective altruism community in 2016 and figured that I should also spend a not-insignificant portion of my waking hours encouraging and connecting other people who want to make the world a better place.I am cheerful. I work hard. I volunteer at EAGs. I show up for the people around me.Why? Because I think it’s the right thing to do.But folks, I am TIRED.I am tired of having a few people put on pedestals because they are very smart - or very good at self-promotion. I am tired of listening to arguments about who can have the think-iest thoughts. I am tired of drama, scandals, and PR. I am tired of being in a position where I have to apologize for sexism, racism, and other toxic ideologies within this movement. I am tired of convening calls with other community builders where we try to figure out how to best react to the latest Thing That Happened. I am tired of billionaires. And I am really, really tired of seeing people publicly defend bad behavior as good epistemics.I’m just here because I want the world to be a better, kinder, softer place. I know I’m not the only one. I’m not quitting. But I am tired.Maybe you are tired, too.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I am tired., published by MeganNelson on January 12, 2023 on The Effective Altruism Forum.Hello! It’s me, a small-scale part-time EA community builder. I read The Life You Can Save in 2009 and figured that in addition to being a vegan and a social worker, I should donate 10%-plus of my income to highly effective causes. Then I connected with my local effective altruism community in 2016 and figured that I should also spend a not-insignificant portion of my waking hours encouraging and connecting other people who want to make the world a better place.I am cheerful. I work hard. I volunteer at EAGs. I show up for the people around me.Why? Because I think it’s the right thing to do.But folks, I am TIRED.I am tired of having a few people put on pedestals because they are very smart - or very good at self-promotion. I am tired of listening to arguments about who can have the think-iest thoughts. I am tired of drama, scandals, and PR. I am tired of being in a position where I have to apologize for sexism, racism, and other toxic ideologies within this movement. I am tired of convening calls with other community builders where we try to figure out how to best react to the latest Thing That Happened. I am tired of billionaires. And I am really, really tired of seeing people publicly defend bad behavior as good epistemics.I’m just here because I want the world to be a better, kinder, softer place. I know I’m not the only one. I’m not quitting. But I am tired.Maybe you are tired, too.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
MeganNelson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:44 None full 4427
NniTsDNQQo58hnxkr_EA EA - I Support Bostrom by 𝕮𝖎𝖓𝖊𝖗𝖆 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I Support Bostrom, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 12, 2023 on The Effective Altruism Forum.Epistemic StatusExpressing this opinion because I get the sense the current zeitgeist on the forum underweights it, so staking it out feels somewhat valuable.Personal ContextFor context, I'm black (Nigerian who migrated to the UK last year as a student), currently upskilling to work in AI safety and joined EA via osmosis from LessWrong/the rationalist community.I've been a rationalist since 2017, and EA-adjacent since 2019-ish? I began overtly identifying as an EA last year.I'm concerned about the longterm flourishing of humanity, and I want to do what I can to help create a radically brighter future.I'm just going to express my honest opinions here:The events of the last 48 hours (slightly[1]) raised my opinion of Nick Bostrom. I was very relieved that Bostrom did not compromise his epistemic integrity by expressing more socially palatable views that are contrary to those he actually holds.I think it would be quite tragic to compromise honestly/accurately reporting our beliefs when the situation calls for it to fit in better. I'm very glad Bostrom did not do that.Beyond just general epistemic integrity that I think we should uphold, to the extent that one thinks that Bostrom is an especially important thinker re: humanity's longterm flourishing, then it's even more important that he strongly adheres to epistemic integrity.I think accurately reporting our beliefs and being honest even when society would reproach us for it is especially valuable for people thinking about "grand strategy for humanity".I think it would be very tragic if Bostrom were to face professional censure because of this. I don't think an environment that punishes epistemic integrity is particularly productive with respect to working on humanity's most pressing problems.As for the contents of the email itself, while very untasteful, they were sent in a particular context to be deliberately offensive and Bostrom did regret it and apologise for it at the time. I don't think it's useful/valuable to judge him on the basis of an email he sent a few decades ago as a student. The Bostrom that sent the email did not reflectively endorse its contents, and current Bostrom does not either.I'm not interested in a discussion on race & IQ, so I deliberately avoided addressing that.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
𝕮𝖎𝖓𝖊𝖗𝖆 https://forum.effectivealtruism.org/posts/NniTsDNQQo58hnxkr/i-support-bostrom Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I Support Bostrom, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 12, 2023 on The Effective Altruism Forum.Epistemic StatusExpressing this opinion because I get the sense the current zeitgeist on the forum underweights it, so staking it out feels somewhat valuable.Personal ContextFor context, I'm black (Nigerian who migrated to the UK last year as a student), currently upskilling to work in AI safety and joined EA via osmosis from LessWrong/the rationalist community.I've been a rationalist since 2017, and EA-adjacent since 2019-ish? I began overtly identifying as an EA last year.I'm concerned about the longterm flourishing of humanity, and I want to do what I can to help create a radically brighter future.I'm just going to express my honest opinions here:The events of the last 48 hours (slightly[1]) raised my opinion of Nick Bostrom. I was very relieved that Bostrom did not compromise his epistemic integrity by expressing more socially palatable views that are contrary to those he actually holds.I think it would be quite tragic to compromise honestly/accurately reporting our beliefs when the situation calls for it to fit in better. I'm very glad Bostrom did not do that.Beyond just general epistemic integrity that I think we should uphold, to the extent that one thinks that Bostrom is an especially important thinker re: humanity's longterm flourishing, then it's even more important that he strongly adheres to epistemic integrity.I think accurately reporting our beliefs and being honest even when society would reproach us for it is especially valuable for people thinking about "grand strategy for humanity".I think it would be very tragic if Bostrom were to face professional censure because of this. I don't think an environment that punishes epistemic integrity is particularly productive with respect to working on humanity's most pressing problems.As for the contents of the email itself, while very untasteful, they were sent in a particular context to be deliberately offensive and Bostrom did regret it and apologise for it at the time. I don't think it's useful/valuable to judge him on the basis of an email he sent a few decades ago as a student. The Bostrom that sent the email did not reflectively endorse its contents, and current Bostrom does not either.I'm not interested in a discussion on race & IQ, so I deliberately avoided addressing that.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 12 Jan 2023 21:49:50 +0000 EA - I Support Bostrom by 𝕮𝖎𝖓𝖊𝖗𝖆 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I Support Bostrom, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 12, 2023 on The Effective Altruism Forum.Epistemic StatusExpressing this opinion because I get the sense the current zeitgeist on the forum underweights it, so staking it out feels somewhat valuable.Personal ContextFor context, I'm black (Nigerian who migrated to the UK last year as a student), currently upskilling to work in AI safety and joined EA via osmosis from LessWrong/the rationalist community.I've been a rationalist since 2017, and EA-adjacent since 2019-ish? I began overtly identifying as an EA last year.I'm concerned about the longterm flourishing of humanity, and I want to do what I can to help create a radically brighter future.I'm just going to express my honest opinions here:The events of the last 48 hours (slightly[1]) raised my opinion of Nick Bostrom. I was very relieved that Bostrom did not compromise his epistemic integrity by expressing more socially palatable views that are contrary to those he actually holds.I think it would be quite tragic to compromise honestly/accurately reporting our beliefs when the situation calls for it to fit in better. I'm very glad Bostrom did not do that.Beyond just general epistemic integrity that I think we should uphold, to the extent that one thinks that Bostrom is an especially important thinker re: humanity's longterm flourishing, then it's even more important that he strongly adheres to epistemic integrity.I think accurately reporting our beliefs and being honest even when society would reproach us for it is especially valuable for people thinking about "grand strategy for humanity".I think it would be very tragic if Bostrom were to face professional censure because of this. I don't think an environment that punishes epistemic integrity is particularly productive with respect to working on humanity's most pressing problems.As for the contents of the email itself, while very untasteful, they were sent in a particular context to be deliberately offensive and Bostrom did regret it and apologise for it at the time. I don't think it's useful/valuable to judge him on the basis of an email he sent a few decades ago as a student. The Bostrom that sent the email did not reflectively endorse its contents, and current Bostrom does not either.I'm not interested in a discussion on race & IQ, so I deliberately avoided addressing that.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I Support Bostrom, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on January 12, 2023 on The Effective Altruism Forum.Epistemic StatusExpressing this opinion because I get the sense the current zeitgeist on the forum underweights it, so staking it out feels somewhat valuable.Personal ContextFor context, I'm black (Nigerian who migrated to the UK last year as a student), currently upskilling to work in AI safety and joined EA via osmosis from LessWrong/the rationalist community.I've been a rationalist since 2017, and EA-adjacent since 2019-ish? I began overtly identifying as an EA last year.I'm concerned about the longterm flourishing of humanity, and I want to do what I can to help create a radically brighter future.I'm just going to express my honest opinions here:The events of the last 48 hours (slightly[1]) raised my opinion of Nick Bostrom. I was very relieved that Bostrom did not compromise his epistemic integrity by expressing more socially palatable views that are contrary to those he actually holds.I think it would be quite tragic to compromise honestly/accurately reporting our beliefs when the situation calls for it to fit in better. I'm very glad Bostrom did not do that.Beyond just general epistemic integrity that I think we should uphold, to the extent that one thinks that Bostrom is an especially important thinker re: humanity's longterm flourishing, then it's even more important that he strongly adheres to epistemic integrity.I think accurately reporting our beliefs and being honest even when society would reproach us for it is especially valuable for people thinking about "grand strategy for humanity".I think it would be very tragic if Bostrom were to face professional censure because of this. I don't think an environment that punishes epistemic integrity is particularly productive with respect to working on humanity's most pressing problems.As for the contents of the email itself, while very untasteful, they were sent in a particular context to be deliberately offensive and Bostrom did regret it and apologise for it at the time. I don't think it's useful/valuable to judge him on the basis of an email he sent a few decades ago as a student. The Bostrom that sent the email did not reflectively endorse its contents, and current Bostrom does not either.I'm not interested in a discussion on race & IQ, so I deliberately avoided addressing that.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
𝕮𝖎𝖓𝖊𝖗𝖆 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:25 None full 4428
8zLwD862MRGZTzs8k_EA EA - A personal response to Nick Bostrom's "Apology for an Old Email" by Habiba Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A personal response to Nick Bostrom's "Apology for an Old Email", published by Habiba on January 12, 2023 on The Effective Altruism Forum.On 9th January 2023, Nick Bostrom posted this apology for an email he sent on the Extropians listserv in the 90s. On 11th January 2023, Anders Sandberg linked to it on Bostrom's behalf in this twitter thread.I recommend you read those first as I don't summarise or explain the contents below.This is my personal response to reading Bostrom's apology and email and is (bar some minor changes) a cross-post of my tweet thread.As a meta-point I would like to flag that I do find discussion of the topic to be incredibly stressful. I have almost never posted here on the forum about even straightforward things. And debating race and IQ is something I find exceptionally emotionally tough. So I don't plan to participate in any extensive debates in the comments, hope that you understand why.My thoughtsIn my view, Bostrom's email would have been offensive in the 90s and it is offensive now, for good reason. His apology fails badly to fully take responsibility or display an understanding of the harm the views expressed represent.I think that being deliberately offensive to make a point is gross. When people in positions of privilege use or mention slurs lightly they are able to do so because they are blinkered to the lived experience of others and disengaged from empathy with those different to them.Note that I’m not generally in the business of picking people apart for small one-off past infractions. But I do think it would be virtuous to apologise for and to truly take responsibility for one’s past actions.Bostrom’s apology is defensively couched - emphasising the age of the email, what others wrote on the listserv, that it would be best forgotten, that fear that people might smear him. I think that is cowardly and shows a disappointing lack of ownership of his actions.But I don’t just care about the inclusion of a slur in the email. I am deeply uncomfortable with a discussion of race and intelligence failing to acknowledge the historical context of the ideas’ origin and the harm they can and have caused.To be clear, I think the view Bostrom expressed was wrong, and wrong in a harmful and reckless way.When you argue a point like this without addressing the context of how those ideas came about you will likely be missing something important we should learn from history and be badly wrong.When you are willfully disengaged from the empathy that underlies common decency you will have a massive blindspot in your reasoning and you will likely be badly wrong.I do not think that there is only one acceptable way to express thoughts about this issue, nor do I think this issue could never be discussed sensitively. And I do think it is okay for people to sometimes say things online that I think are plain wrong.But we all know that this issue is high stakes - ideas about racial superiority in the UK, America, and Germany led to some of the worst atrocities of the 20th century. And eugenists historically espoused utterly wrong views on race and intelligence wearing the guise of science.There were c.60,000 sterilisations in US eugenics programmes - focused on women of colour. Including those who were young, poor, victims of sexual abuse, labelled “feeble minded” (allegedly inherited via a recessive gene), and had their fates decided by committees of white men.Hitler was a fan of eugenicist Madison Grant. And at the Nuremberg trials, the Nazi defendants entered Grant’s book - The Passing of the Great Race - in their defence tracing the lineage of their genocidal ideas to a popular American author.And we know now there are very good reasons to think that scores on IQ tests are affected by cultural factors, that global IQ databases are poor sour...]]>
Habiba https://forum.effectivealtruism.org/posts/8zLwD862MRGZTzs8k/a-personal-response-to-nick-bostrom-s-apology-for-an-old Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A personal response to Nick Bostrom's "Apology for an Old Email", published by Habiba on January 12, 2023 on The Effective Altruism Forum.On 9th January 2023, Nick Bostrom posted this apology for an email he sent on the Extropians listserv in the 90s. On 11th January 2023, Anders Sandberg linked to it on Bostrom's behalf in this twitter thread.I recommend you read those first as I don't summarise or explain the contents below.This is my personal response to reading Bostrom's apology and email and is (bar some minor changes) a cross-post of my tweet thread.As a meta-point I would like to flag that I do find discussion of the topic to be incredibly stressful. I have almost never posted here on the forum about even straightforward things. And debating race and IQ is something I find exceptionally emotionally tough. So I don't plan to participate in any extensive debates in the comments, hope that you understand why.My thoughtsIn my view, Bostrom's email would have been offensive in the 90s and it is offensive now, for good reason. His apology fails badly to fully take responsibility or display an understanding of the harm the views expressed represent.I think that being deliberately offensive to make a point is gross. When people in positions of privilege use or mention slurs lightly they are able to do so because they are blinkered to the lived experience of others and disengaged from empathy with those different to them.Note that I’m not generally in the business of picking people apart for small one-off past infractions. But I do think it would be virtuous to apologise for and to truly take responsibility for one’s past actions.Bostrom’s apology is defensively couched - emphasising the age of the email, what others wrote on the listserv, that it would be best forgotten, that fear that people might smear him. I think that is cowardly and shows a disappointing lack of ownership of his actions.But I don’t just care about the inclusion of a slur in the email. I am deeply uncomfortable with a discussion of race and intelligence failing to acknowledge the historical context of the ideas’ origin and the harm they can and have caused.To be clear, I think the view Bostrom expressed was wrong, and wrong in a harmful and reckless way.When you argue a point like this without addressing the context of how those ideas came about you will likely be missing something important we should learn from history and be badly wrong.When you are willfully disengaged from the empathy that underlies common decency you will have a massive blindspot in your reasoning and you will likely be badly wrong.I do not think that there is only one acceptable way to express thoughts about this issue, nor do I think this issue could never be discussed sensitively. And I do think it is okay for people to sometimes say things online that I think are plain wrong.But we all know that this issue is high stakes - ideas about racial superiority in the UK, America, and Germany led to some of the worst atrocities of the 20th century. And eugenists historically espoused utterly wrong views on race and intelligence wearing the guise of science.There were c.60,000 sterilisations in US eugenics programmes - focused on women of colour. Including those who were young, poor, victims of sexual abuse, labelled “feeble minded” (allegedly inherited via a recessive gene), and had their fates decided by committees of white men.Hitler was a fan of eugenicist Madison Grant. And at the Nuremberg trials, the Nazi defendants entered Grant’s book - The Passing of the Great Race - in their defence tracing the lineage of their genocidal ideas to a popular American author.And we know now there are very good reasons to think that scores on IQ tests are affected by cultural factors, that global IQ databases are poor sour...]]>
Thu, 12 Jan 2023 19:35:55 +0000 EA - A personal response to Nick Bostrom's "Apology for an Old Email" by Habiba Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A personal response to Nick Bostrom's "Apology for an Old Email", published by Habiba on January 12, 2023 on The Effective Altruism Forum.On 9th January 2023, Nick Bostrom posted this apology for an email he sent on the Extropians listserv in the 90s. On 11th January 2023, Anders Sandberg linked to it on Bostrom's behalf in this twitter thread.I recommend you read those first as I don't summarise or explain the contents below.This is my personal response to reading Bostrom's apology and email and is (bar some minor changes) a cross-post of my tweet thread.As a meta-point I would like to flag that I do find discussion of the topic to be incredibly stressful. I have almost never posted here on the forum about even straightforward things. And debating race and IQ is something I find exceptionally emotionally tough. So I don't plan to participate in any extensive debates in the comments, hope that you understand why.My thoughtsIn my view, Bostrom's email would have been offensive in the 90s and it is offensive now, for good reason. His apology fails badly to fully take responsibility or display an understanding of the harm the views expressed represent.I think that being deliberately offensive to make a point is gross. When people in positions of privilege use or mention slurs lightly they are able to do so because they are blinkered to the lived experience of others and disengaged from empathy with those different to them.Note that I’m not generally in the business of picking people apart for small one-off past infractions. But I do think it would be virtuous to apologise for and to truly take responsibility for one’s past actions.Bostrom’s apology is defensively couched - emphasising the age of the email, what others wrote on the listserv, that it would be best forgotten, that fear that people might smear him. I think that is cowardly and shows a disappointing lack of ownership of his actions.But I don’t just care about the inclusion of a slur in the email. I am deeply uncomfortable with a discussion of race and intelligence failing to acknowledge the historical context of the ideas’ origin and the harm they can and have caused.To be clear, I think the view Bostrom expressed was wrong, and wrong in a harmful and reckless way.When you argue a point like this without addressing the context of how those ideas came about you will likely be missing something important we should learn from history and be badly wrong.When you are willfully disengaged from the empathy that underlies common decency you will have a massive blindspot in your reasoning and you will likely be badly wrong.I do not think that there is only one acceptable way to express thoughts about this issue, nor do I think this issue could never be discussed sensitively. And I do think it is okay for people to sometimes say things online that I think are plain wrong.But we all know that this issue is high stakes - ideas about racial superiority in the UK, America, and Germany led to some of the worst atrocities of the 20th century. And eugenists historically espoused utterly wrong views on race and intelligence wearing the guise of science.There were c.60,000 sterilisations in US eugenics programmes - focused on women of colour. Including those who were young, poor, victims of sexual abuse, labelled “feeble minded” (allegedly inherited via a recessive gene), and had their fates decided by committees of white men.Hitler was a fan of eugenicist Madison Grant. And at the Nuremberg trials, the Nazi defendants entered Grant’s book - The Passing of the Great Race - in their defence tracing the lineage of their genocidal ideas to a popular American author.And we know now there are very good reasons to think that scores on IQ tests are affected by cultural factors, that global IQ databases are poor sour...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A personal response to Nick Bostrom's "Apology for an Old Email", published by Habiba on January 12, 2023 on The Effective Altruism Forum.On 9th January 2023, Nick Bostrom posted this apology for an email he sent on the Extropians listserv in the 90s. On 11th January 2023, Anders Sandberg linked to it on Bostrom's behalf in this twitter thread.I recommend you read those first as I don't summarise or explain the contents below.This is my personal response to reading Bostrom's apology and email and is (bar some minor changes) a cross-post of my tweet thread.As a meta-point I would like to flag that I do find discussion of the topic to be incredibly stressful. I have almost never posted here on the forum about even straightforward things. And debating race and IQ is something I find exceptionally emotionally tough. So I don't plan to participate in any extensive debates in the comments, hope that you understand why.My thoughtsIn my view, Bostrom's email would have been offensive in the 90s and it is offensive now, for good reason. His apology fails badly to fully take responsibility or display an understanding of the harm the views expressed represent.I think that being deliberately offensive to make a point is gross. When people in positions of privilege use or mention slurs lightly they are able to do so because they are blinkered to the lived experience of others and disengaged from empathy with those different to them.Note that I’m not generally in the business of picking people apart for small one-off past infractions. But I do think it would be virtuous to apologise for and to truly take responsibility for one’s past actions.Bostrom’s apology is defensively couched - emphasising the age of the email, what others wrote on the listserv, that it would be best forgotten, that fear that people might smear him. I think that is cowardly and shows a disappointing lack of ownership of his actions.But I don’t just care about the inclusion of a slur in the email. I am deeply uncomfortable with a discussion of race and intelligence failing to acknowledge the historical context of the ideas’ origin and the harm they can and have caused.To be clear, I think the view Bostrom expressed was wrong, and wrong in a harmful and reckless way.When you argue a point like this without addressing the context of how those ideas came about you will likely be missing something important we should learn from history and be badly wrong.When you are willfully disengaged from the empathy that underlies common decency you will have a massive blindspot in your reasoning and you will likely be badly wrong.I do not think that there is only one acceptable way to express thoughts about this issue, nor do I think this issue could never be discussed sensitively. And I do think it is okay for people to sometimes say things online that I think are plain wrong.But we all know that this issue is high stakes - ideas about racial superiority in the UK, America, and Germany led to some of the worst atrocities of the 20th century. And eugenists historically espoused utterly wrong views on race and intelligence wearing the guise of science.There were c.60,000 sterilisations in US eugenics programmes - focused on women of colour. Including those who were young, poor, victims of sexual abuse, labelled “feeble minded” (allegedly inherited via a recessive gene), and had their fates decided by committees of white men.Hitler was a fan of eugenicist Madison Grant. And at the Nuremberg trials, the Nazi defendants entered Grant’s book - The Passing of the Great Race - in their defence tracing the lineage of their genocidal ideas to a popular American author.And we know now there are very good reasons to think that scores on IQ tests are affected by cultural factors, that global IQ databases are poor sour...]]>
Habiba https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:40 None full 4429
ALzE9JixLLEexTKSq_EA EA - CEA statement on Nick Bostrom's email by Shakeel Hashim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA statement on Nick Bostrom's email, published by Shakeel Hashim on January 12, 2023 on The Effective Altruism Forum.Effective altruism is based on the core belief that all people count equally. We unequivocally condemn Nick Bostrom’s recklessly flawed and reprehensible words. We reject this unacceptable racist language, and the callous discussion of ideas that can and have harmed Black people. It is fundamentally inconsistent with our mission of building an inclusive and welcoming community.The Centre for Effective AltruismThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Shakeel Hashim https://forum.effectivealtruism.org/posts/ALzE9JixLLEexTKSq/cea-statement-on-nick-bostrom-s-email Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA statement on Nick Bostrom's email, published by Shakeel Hashim on January 12, 2023 on The Effective Altruism Forum.Effective altruism is based on the core belief that all people count equally. We unequivocally condemn Nick Bostrom’s recklessly flawed and reprehensible words. We reject this unacceptable racist language, and the callous discussion of ideas that can and have harmed Black people. It is fundamentally inconsistent with our mission of building an inclusive and welcoming community.The Centre for Effective AltruismThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 12 Jan 2023 17:27:08 +0000 EA - CEA statement on Nick Bostrom's email by Shakeel Hashim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA statement on Nick Bostrom's email, published by Shakeel Hashim on January 12, 2023 on The Effective Altruism Forum.Effective altruism is based on the core belief that all people count equally. We unequivocally condemn Nick Bostrom’s recklessly flawed and reprehensible words. We reject this unacceptable racist language, and the callous discussion of ideas that can and have harmed Black people. It is fundamentally inconsistent with our mission of building an inclusive and welcoming community.The Centre for Effective AltruismThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA statement on Nick Bostrom's email, published by Shakeel Hashim on January 12, 2023 on The Effective Altruism Forum.Effective altruism is based on the core belief that all people count equally. We unequivocally condemn Nick Bostrom’s recklessly flawed and reprehensible words. We reject this unacceptable racist language, and the callous discussion of ideas that can and have harmed Black people. It is fundamentally inconsistent with our mission of building an inclusive and welcoming community.The Centre for Effective AltruismThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Shakeel Hashim https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:48 None full 4430
rwxNJPajRvCF2qymu_EA EA - Building Effective Altruism in Africa: My Experience Running a University Group by Tim K. Sankara Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Building Effective Altruism in Africa: My Experience Running a University Group, published by Tim K. Sankara on January 12, 2023 on The Effective Altruism Forum.Welcome to my adventure of creating an effective altruism movement in Africa. My journey started when I started a student organisation. I'll relate my experiences, obstacles I faced, and triumphs I had while establishing EA in Africa in this post, along with how I handled them. I'll also discuss my plans for creating a thriving university community that will have a beneficial, long-lasting impact on this area.I'm Tim, an Information Technology graduate of the Jomo Kenyatta University of Agriculture and Technology. In July this year, I celebrated the successful completion of my undergrad! My varied interests in the EA movement have been documented in brief here.My official EA journey commenced in early 2022 when I joined the introductory program, then later, the in-depth program. However, I had already been engaging with various EA content prior to this, without even knowing that it was closely linked to EA.These sessions introduced me to the EA forum, through which I learned about the University Groups Accelerator Program, which aims to increase support for EA cause areas by providing support to university groups around the globe.The application procedure was simple, and after a few interviews, training sessions, and mentorship meetings, I was finally prepared to launch the Uni group.Running a university group involved a variety of tasks, including but not limited to:Outreach & group growth strategyCurriculum developmentEvent planning (dinners, socials, retreat 1-on-1s)Community healthI'll stick to a few of the semi-distinctive aspects that make founding a university organisation alone, and especially among the first few Uni groups on my continent, something worth writing about and perhaps someone will borrow a leaf from my experience. Although this advise may occasionally be particular or generic, it is hopefully still useful. So here goes.1. The importance of making the group a place where people want to beI recently attended an EA related conference and give a brief talk on my experience and one of the attendees asked me if I had to pick the one thing that made my group successful and I picked: "make the group a fun place to be".I can't insist enough how much this helps to build a sense of community. It is important to provide a variety of activities and experiences that allow members to have fun and make memories together. By creating a place where people want to be, the group can create a strong sense of community and foster a culture of collaboration, creativity, and progress.Some activities that worked for us included:Having some unique ice-breakers before each session like this one here.Finding a conducive space where the fellows can be away from distractions and can just relax and unwind with free pizza and soft drinks.Holding a game-night. I asked around for some games that may be fun among my group and settled on a few board games, a quiz night of sorts as well as some digital games too.But apart from the fun activities, the group was a place where intellectually stimulating discussions could be had and meaningful connections could be made. We sought to create an atmosphere of respect, inclusiveness and collaboration, which allowed for people to learn from each other, share ideas and grow. This was an essential part of making the group a place where people not only wanted to be, but wanted to stay and contribute.2. Watch for community healthIn order to make the group inviting and engaging, it is important to create a safe space where people can express their thoughts and ideas without fear of judgment. This can be achieved by having a code of conduct, providing resources and supp...]]>
Tim K. Sankara https://forum.effectivealtruism.org/posts/rwxNJPajRvCF2qymu/building-effective-altruism-in-africa-my-experience-running Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Building Effective Altruism in Africa: My Experience Running a University Group, published by Tim K. Sankara on January 12, 2023 on The Effective Altruism Forum.Welcome to my adventure of creating an effective altruism movement in Africa. My journey started when I started a student organisation. I'll relate my experiences, obstacles I faced, and triumphs I had while establishing EA in Africa in this post, along with how I handled them. I'll also discuss my plans for creating a thriving university community that will have a beneficial, long-lasting impact on this area.I'm Tim, an Information Technology graduate of the Jomo Kenyatta University of Agriculture and Technology. In July this year, I celebrated the successful completion of my undergrad! My varied interests in the EA movement have been documented in brief here.My official EA journey commenced in early 2022 when I joined the introductory program, then later, the in-depth program. However, I had already been engaging with various EA content prior to this, without even knowing that it was closely linked to EA.These sessions introduced me to the EA forum, through which I learned about the University Groups Accelerator Program, which aims to increase support for EA cause areas by providing support to university groups around the globe.The application procedure was simple, and after a few interviews, training sessions, and mentorship meetings, I was finally prepared to launch the Uni group.Running a university group involved a variety of tasks, including but not limited to:Outreach & group growth strategyCurriculum developmentEvent planning (dinners, socials, retreat 1-on-1s)Community healthI'll stick to a few of the semi-distinctive aspects that make founding a university organisation alone, and especially among the first few Uni groups on my continent, something worth writing about and perhaps someone will borrow a leaf from my experience. Although this advise may occasionally be particular or generic, it is hopefully still useful. So here goes.1. The importance of making the group a place where people want to beI recently attended an EA related conference and give a brief talk on my experience and one of the attendees asked me if I had to pick the one thing that made my group successful and I picked: "make the group a fun place to be".I can't insist enough how much this helps to build a sense of community. It is important to provide a variety of activities and experiences that allow members to have fun and make memories together. By creating a place where people want to be, the group can create a strong sense of community and foster a culture of collaboration, creativity, and progress.Some activities that worked for us included:Having some unique ice-breakers before each session like this one here.Finding a conducive space where the fellows can be away from distractions and can just relax and unwind with free pizza and soft drinks.Holding a game-night. I asked around for some games that may be fun among my group and settled on a few board games, a quiz night of sorts as well as some digital games too.But apart from the fun activities, the group was a place where intellectually stimulating discussions could be had and meaningful connections could be made. We sought to create an atmosphere of respect, inclusiveness and collaboration, which allowed for people to learn from each other, share ideas and grow. This was an essential part of making the group a place where people not only wanted to be, but wanted to stay and contribute.2. Watch for community healthIn order to make the group inviting and engaging, it is important to create a safe space where people can express their thoughts and ideas without fear of judgment. This can be achieved by having a code of conduct, providing resources and supp...]]>
Thu, 12 Jan 2023 14:01:06 +0000 EA - Building Effective Altruism in Africa: My Experience Running a University Group by Tim K. Sankara Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Building Effective Altruism in Africa: My Experience Running a University Group, published by Tim K. Sankara on January 12, 2023 on The Effective Altruism Forum.Welcome to my adventure of creating an effective altruism movement in Africa. My journey started when I started a student organisation. I'll relate my experiences, obstacles I faced, and triumphs I had while establishing EA in Africa in this post, along with how I handled them. I'll also discuss my plans for creating a thriving university community that will have a beneficial, long-lasting impact on this area.I'm Tim, an Information Technology graduate of the Jomo Kenyatta University of Agriculture and Technology. In July this year, I celebrated the successful completion of my undergrad! My varied interests in the EA movement have been documented in brief here.My official EA journey commenced in early 2022 when I joined the introductory program, then later, the in-depth program. However, I had already been engaging with various EA content prior to this, without even knowing that it was closely linked to EA.These sessions introduced me to the EA forum, through which I learned about the University Groups Accelerator Program, which aims to increase support for EA cause areas by providing support to university groups around the globe.The application procedure was simple, and after a few interviews, training sessions, and mentorship meetings, I was finally prepared to launch the Uni group.Running a university group involved a variety of tasks, including but not limited to:Outreach & group growth strategyCurriculum developmentEvent planning (dinners, socials, retreat 1-on-1s)Community healthI'll stick to a few of the semi-distinctive aspects that make founding a university organisation alone, and especially among the first few Uni groups on my continent, something worth writing about and perhaps someone will borrow a leaf from my experience. Although this advise may occasionally be particular or generic, it is hopefully still useful. So here goes.1. The importance of making the group a place where people want to beI recently attended an EA related conference and give a brief talk on my experience and one of the attendees asked me if I had to pick the one thing that made my group successful and I picked: "make the group a fun place to be".I can't insist enough how much this helps to build a sense of community. It is important to provide a variety of activities and experiences that allow members to have fun and make memories together. By creating a place where people want to be, the group can create a strong sense of community and foster a culture of collaboration, creativity, and progress.Some activities that worked for us included:Having some unique ice-breakers before each session like this one here.Finding a conducive space where the fellows can be away from distractions and can just relax and unwind with free pizza and soft drinks.Holding a game-night. I asked around for some games that may be fun among my group and settled on a few board games, a quiz night of sorts as well as some digital games too.But apart from the fun activities, the group was a place where intellectually stimulating discussions could be had and meaningful connections could be made. We sought to create an atmosphere of respect, inclusiveness and collaboration, which allowed for people to learn from each other, share ideas and grow. This was an essential part of making the group a place where people not only wanted to be, but wanted to stay and contribute.2. Watch for community healthIn order to make the group inviting and engaging, it is important to create a safe space where people can express their thoughts and ideas without fear of judgment. This can be achieved by having a code of conduct, providing resources and supp...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Building Effective Altruism in Africa: My Experience Running a University Group, published by Tim K. Sankara on January 12, 2023 on The Effective Altruism Forum.Welcome to my adventure of creating an effective altruism movement in Africa. My journey started when I started a student organisation. I'll relate my experiences, obstacles I faced, and triumphs I had while establishing EA in Africa in this post, along with how I handled them. I'll also discuss my plans for creating a thriving university community that will have a beneficial, long-lasting impact on this area.I'm Tim, an Information Technology graduate of the Jomo Kenyatta University of Agriculture and Technology. In July this year, I celebrated the successful completion of my undergrad! My varied interests in the EA movement have been documented in brief here.My official EA journey commenced in early 2022 when I joined the introductory program, then later, the in-depth program. However, I had already been engaging with various EA content prior to this, without even knowing that it was closely linked to EA.These sessions introduced me to the EA forum, through which I learned about the University Groups Accelerator Program, which aims to increase support for EA cause areas by providing support to university groups around the globe.The application procedure was simple, and after a few interviews, training sessions, and mentorship meetings, I was finally prepared to launch the Uni group.Running a university group involved a variety of tasks, including but not limited to:Outreach & group growth strategyCurriculum developmentEvent planning (dinners, socials, retreat 1-on-1s)Community healthI'll stick to a few of the semi-distinctive aspects that make founding a university organisation alone, and especially among the first few Uni groups on my continent, something worth writing about and perhaps someone will borrow a leaf from my experience. Although this advise may occasionally be particular or generic, it is hopefully still useful. So here goes.1. The importance of making the group a place where people want to beI recently attended an EA related conference and give a brief talk on my experience and one of the attendees asked me if I had to pick the one thing that made my group successful and I picked: "make the group a fun place to be".I can't insist enough how much this helps to build a sense of community. It is important to provide a variety of activities and experiences that allow members to have fun and make memories together. By creating a place where people want to be, the group can create a strong sense of community and foster a culture of collaboration, creativity, and progress.Some activities that worked for us included:Having some unique ice-breakers before each session like this one here.Finding a conducive space where the fellows can be away from distractions and can just relax and unwind with free pizza and soft drinks.Holding a game-night. I asked around for some games that may be fun among my group and settled on a few board games, a quiz night of sorts as well as some digital games too.But apart from the fun activities, the group was a place where intellectually stimulating discussions could be had and meaningful connections could be made. We sought to create an atmosphere of respect, inclusiveness and collaboration, which allowed for people to learn from each other, share ideas and grow. This was an essential part of making the group a place where people not only wanted to be, but wanted to stay and contribute.2. Watch for community healthIn order to make the group inviting and engaging, it is important to create a safe space where people can express their thoughts and ideas without fear of judgment. This can be achieved by having a code of conduct, providing resources and supp...]]>
Tim K. Sankara https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:06 None full 4433
uH9akQzJkzpBD5Duw_EA EA - What you can do to help stop violence against women and girls by Akhil Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What you can do to help stop violence against women and girls, published by Akhil on January 12, 2023 on The Effective Altruism Forum.IntroductionI previously wrote an entry for the Open Philanthropy Cause Exploration Prize on why preventing violence against women and girls is a global priority. For an introduction to the area, I have written a brief summary below. In this post, I will extend that work, diving deeper into the literature and the landscape of organisations in the field, as well as creating a cost-effectiveness model for some of the most promising preventative interventions. Based on this, I will offer some concrete recommendations that different stakeholders should take - from individuals looking to donate, to funders, to charity evaluators and incubators.The key recommendations I make, in order of importance, are:Support community-based interventions that seek to shift harmful gender norms and reduce violence- they have a high quality of evidence, and cost $180/DALY (disability adjusted life years) or $150 for a woman to live a year free from violence. Two particularly promising organisations are CEDOVIP and Raising Voices.Fund and conduct a well-designed randomised control trial of radio or television dramas to shift harmful gender norms and reduce violence- they have a startling cost-effectiveness of $13/DALY or $11 for a woman to live a year free from violence, but currently lacks a well-established evidence base.Fund organisations undertaking economic programs supporting women (e.g. microfinancing, cash transfers, village savings and loans association) to add on social empowerment programs focused on reducing violence; they have a cost-effectiveness of $180/DALY, or $145 for a woman to live a year free from violence.Found new charities focused on community-based interventions that seek to shift harmful gender norms and reduce violence, particularly in neglected geographies and populationsConsider supporting self-defence training programs - although analysis suggests that self-defence training may not be as cost-effective as other interventions for VAWG, it is nevertheless a relatively cost-effective (at $260/DALY or $215 for a woman to live a year free from violence) and potentially scalable intervention. No Means No Worldwide and Ujamaa appear to be two organisations scaling this intervention well.Fund academic research to understand what types of culture change programs targeting boys and men are most effective at reducing violence against women and girlsYou can find a 2 page summary of my initial post and this post hereWhy VAWG is an important cause areaNearly one third of women and girls aged 15 years of age or older have experienced either physical or sexual intimate partner violence (IPV) or non-partner sexual violence globally, with 13% (10–16%) experiencing it in 2018 alone (Sardinha et al 2022). It is one of the leading burdens of disease globally, responsible for 8.5 million DALYs and 68 500 deaths annually.In several countries, violence against women is in the top 3-5 leading causes of death for young women aged between 15 and 29 (Mendoza et al 2018).VAWG has wide-ranging effects on women’s physical, sexual and mental health- in fact, it is responsible for 11% of the DALY burden of depressive disorders and 14% of the DALY burden of HIV in women (IHME 2019)Globally, the rates of VAWG are both alarmingly high and have increased over the last 30 years, despite gains in other areas of women’s health, such as maternal care (Think Global Health).In 2016, the global economic cost of violence against women was estimated by the UN to be US$1.5 trillion, equivalent to approximately 2% of the global GDP (UN Women 2016).Although there are many groups working to stop VAWG, it is fairly neglected relative to the scale of harm that it...]]>
Akhil https://forum.effectivealtruism.org/posts/uH9akQzJkzpBD5Duw/what-you-can-do-to-help-stop-violence-against-women-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What you can do to help stop violence against women and girls, published by Akhil on January 12, 2023 on The Effective Altruism Forum.IntroductionI previously wrote an entry for the Open Philanthropy Cause Exploration Prize on why preventing violence against women and girls is a global priority. For an introduction to the area, I have written a brief summary below. In this post, I will extend that work, diving deeper into the literature and the landscape of organisations in the field, as well as creating a cost-effectiveness model for some of the most promising preventative interventions. Based on this, I will offer some concrete recommendations that different stakeholders should take - from individuals looking to donate, to funders, to charity evaluators and incubators.The key recommendations I make, in order of importance, are:Support community-based interventions that seek to shift harmful gender norms and reduce violence- they have a high quality of evidence, and cost $180/DALY (disability adjusted life years) or $150 for a woman to live a year free from violence. Two particularly promising organisations are CEDOVIP and Raising Voices.Fund and conduct a well-designed randomised control trial of radio or television dramas to shift harmful gender norms and reduce violence- they have a startling cost-effectiveness of $13/DALY or $11 for a woman to live a year free from violence, but currently lacks a well-established evidence base.Fund organisations undertaking economic programs supporting women (e.g. microfinancing, cash transfers, village savings and loans association) to add on social empowerment programs focused on reducing violence; they have a cost-effectiveness of $180/DALY, or $145 for a woman to live a year free from violence.Found new charities focused on community-based interventions that seek to shift harmful gender norms and reduce violence, particularly in neglected geographies and populationsConsider supporting self-defence training programs - although analysis suggests that self-defence training may not be as cost-effective as other interventions for VAWG, it is nevertheless a relatively cost-effective (at $260/DALY or $215 for a woman to live a year free from violence) and potentially scalable intervention. No Means No Worldwide and Ujamaa appear to be two organisations scaling this intervention well.Fund academic research to understand what types of culture change programs targeting boys and men are most effective at reducing violence against women and girlsYou can find a 2 page summary of my initial post and this post hereWhy VAWG is an important cause areaNearly one third of women and girls aged 15 years of age or older have experienced either physical or sexual intimate partner violence (IPV) or non-partner sexual violence globally, with 13% (10–16%) experiencing it in 2018 alone (Sardinha et al 2022). It is one of the leading burdens of disease globally, responsible for 8.5 million DALYs and 68 500 deaths annually.In several countries, violence against women is in the top 3-5 leading causes of death for young women aged between 15 and 29 (Mendoza et al 2018).VAWG has wide-ranging effects on women’s physical, sexual and mental health- in fact, it is responsible for 11% of the DALY burden of depressive disorders and 14% of the DALY burden of HIV in women (IHME 2019)Globally, the rates of VAWG are both alarmingly high and have increased over the last 30 years, despite gains in other areas of women’s health, such as maternal care (Think Global Health).In 2016, the global economic cost of violence against women was estimated by the UN to be US$1.5 trillion, equivalent to approximately 2% of the global GDP (UN Women 2016).Although there are many groups working to stop VAWG, it is fairly neglected relative to the scale of harm that it...]]>
Thu, 12 Jan 2023 13:04:05 +0000 EA - What you can do to help stop violence against women and girls by Akhil Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What you can do to help stop violence against women and girls, published by Akhil on January 12, 2023 on The Effective Altruism Forum.IntroductionI previously wrote an entry for the Open Philanthropy Cause Exploration Prize on why preventing violence against women and girls is a global priority. For an introduction to the area, I have written a brief summary below. In this post, I will extend that work, diving deeper into the literature and the landscape of organisations in the field, as well as creating a cost-effectiveness model for some of the most promising preventative interventions. Based on this, I will offer some concrete recommendations that different stakeholders should take - from individuals looking to donate, to funders, to charity evaluators and incubators.The key recommendations I make, in order of importance, are:Support community-based interventions that seek to shift harmful gender norms and reduce violence- they have a high quality of evidence, and cost $180/DALY (disability adjusted life years) or $150 for a woman to live a year free from violence. Two particularly promising organisations are CEDOVIP and Raising Voices.Fund and conduct a well-designed randomised control trial of radio or television dramas to shift harmful gender norms and reduce violence- they have a startling cost-effectiveness of $13/DALY or $11 for a woman to live a year free from violence, but currently lacks a well-established evidence base.Fund organisations undertaking economic programs supporting women (e.g. microfinancing, cash transfers, village savings and loans association) to add on social empowerment programs focused on reducing violence; they have a cost-effectiveness of $180/DALY, or $145 for a woman to live a year free from violence.Found new charities focused on community-based interventions that seek to shift harmful gender norms and reduce violence, particularly in neglected geographies and populationsConsider supporting self-defence training programs - although analysis suggests that self-defence training may not be as cost-effective as other interventions for VAWG, it is nevertheless a relatively cost-effective (at $260/DALY or $215 for a woman to live a year free from violence) and potentially scalable intervention. No Means No Worldwide and Ujamaa appear to be two organisations scaling this intervention well.Fund academic research to understand what types of culture change programs targeting boys and men are most effective at reducing violence against women and girlsYou can find a 2 page summary of my initial post and this post hereWhy VAWG is an important cause areaNearly one third of women and girls aged 15 years of age or older have experienced either physical or sexual intimate partner violence (IPV) or non-partner sexual violence globally, with 13% (10–16%) experiencing it in 2018 alone (Sardinha et al 2022). It is one of the leading burdens of disease globally, responsible for 8.5 million DALYs and 68 500 deaths annually.In several countries, violence against women is in the top 3-5 leading causes of death for young women aged between 15 and 29 (Mendoza et al 2018).VAWG has wide-ranging effects on women’s physical, sexual and mental health- in fact, it is responsible for 11% of the DALY burden of depressive disorders and 14% of the DALY burden of HIV in women (IHME 2019)Globally, the rates of VAWG are both alarmingly high and have increased over the last 30 years, despite gains in other areas of women’s health, such as maternal care (Think Global Health).In 2016, the global economic cost of violence against women was estimated by the UN to be US$1.5 trillion, equivalent to approximately 2% of the global GDP (UN Women 2016).Although there are many groups working to stop VAWG, it is fairly neglected relative to the scale of harm that it...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What you can do to help stop violence against women and girls, published by Akhil on January 12, 2023 on The Effective Altruism Forum.IntroductionI previously wrote an entry for the Open Philanthropy Cause Exploration Prize on why preventing violence against women and girls is a global priority. For an introduction to the area, I have written a brief summary below. In this post, I will extend that work, diving deeper into the literature and the landscape of organisations in the field, as well as creating a cost-effectiveness model for some of the most promising preventative interventions. Based on this, I will offer some concrete recommendations that different stakeholders should take - from individuals looking to donate, to funders, to charity evaluators and incubators.The key recommendations I make, in order of importance, are:Support community-based interventions that seek to shift harmful gender norms and reduce violence- they have a high quality of evidence, and cost $180/DALY (disability adjusted life years) or $150 for a woman to live a year free from violence. Two particularly promising organisations are CEDOVIP and Raising Voices.Fund and conduct a well-designed randomised control trial of radio or television dramas to shift harmful gender norms and reduce violence- they have a startling cost-effectiveness of $13/DALY or $11 for a woman to live a year free from violence, but currently lacks a well-established evidence base.Fund organisations undertaking economic programs supporting women (e.g. microfinancing, cash transfers, village savings and loans association) to add on social empowerment programs focused on reducing violence; they have a cost-effectiveness of $180/DALY, or $145 for a woman to live a year free from violence.Found new charities focused on community-based interventions that seek to shift harmful gender norms and reduce violence, particularly in neglected geographies and populationsConsider supporting self-defence training programs - although analysis suggests that self-defence training may not be as cost-effective as other interventions for VAWG, it is nevertheless a relatively cost-effective (at $260/DALY or $215 for a woman to live a year free from violence) and potentially scalable intervention. No Means No Worldwide and Ujamaa appear to be two organisations scaling this intervention well.Fund academic research to understand what types of culture change programs targeting boys and men are most effective at reducing violence against women and girlsYou can find a 2 page summary of my initial post and this post hereWhy VAWG is an important cause areaNearly one third of women and girls aged 15 years of age or older have experienced either physical or sexual intimate partner violence (IPV) or non-partner sexual violence globally, with 13% (10–16%) experiencing it in 2018 alone (Sardinha et al 2022). It is one of the leading burdens of disease globally, responsible for 8.5 million DALYs and 68 500 deaths annually.In several countries, violence against women is in the top 3-5 leading causes of death for young women aged between 15 and 29 (Mendoza et al 2018).VAWG has wide-ranging effects on women’s physical, sexual and mental health- in fact, it is responsible for 11% of the DALY burden of depressive disorders and 14% of the DALY burden of HIV in women (IHME 2019)Globally, the rates of VAWG are both alarmingly high and have increased over the last 30 years, despite gains in other areas of women’s health, such as maternal care (Think Global Health).In 2016, the global economic cost of violence against women was estimated by the UN to be US$1.5 trillion, equivalent to approximately 2% of the global GDP (UN Women 2016).Although there are many groups working to stop VAWG, it is fairly neglected relative to the scale of harm that it...]]>
Akhil https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 31:03 None full 4431
fzrsa6syCM3yWW7HA_EA EA - Announcing the awardees for Open Philanthropy's $150M Regranting Challenge by ChrisSmith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the awardees for Open Philanthropy's $150M Regranting Challenge, published by ChrisSmith on January 12, 2023 on The Effective Altruism Forum.In February, we launched the Regranting Challenge, aiming to add $150 million in funding to the budgets of outstanding grantmakers at other foundations.We saw the Challenge as an opportunity to maximize our impact, by:Adding funding to high-impact work that was already underway. We’ve long been inspired by the work of other grantmakers, and we believe there are highly effective grantmaking organizations doing better work than we could in their respective spaces. Instead of reinventing the wheel, we’ve used the Regranting Challenge to give some of those organizations additional funding, so they can increase the scale and scope of their own work.Piloting a new approach to growing highly effective grantmaking programs. The lack of feedback mechanisms that ensure effective grantmakers get more money to allocate is a major shortcoming in the existing philanthropic ecosystem. The Regranting Challenge gave us a chance to experiment with changing that dynamic.Learning from a wide range of grantmakers with different approaches. By creating an open call, we were able to identify highly effective foundations and program areas to support that we wouldn’t have known about otherwise.ResultsAfter eight months, three selection rounds, and evaluations from dozens of experts (inside and outside Open Philanthropy), we’ve chosen the awardees!We’ll share a brief description of each program here, but you can learn more about them on the Regranting Challenge website.Development Innovation Ventures: $45,000,000Development Innovation Ventures is a program inside the United States Agency for International Development (USAID). They invest in early-stage organizations and projects in global health and development that have the potential to be highly impactful and cost-effective. They’ve supported programs in water sanitation, early childhood education, and routine immunization (among others).(Read more)Eleanor Crook Foundation: $25,000,000The Eleanor Crook Foundation funds research and advocacy to end global malnutrition. They have a track record of successfully advocating for the increased use of the “Power 4” malnutrition interventions — prenatal vitamins for pregnant women, breastfeeding support for mothers, vitamin A supplementation, and ready-to-use therapeutic food to treat wasting.(Read more)Global Education, Bill & Melinda Gates Foundation: $5,000,000The Global Education program at the Gates Foundation makes grants to organizations that aredeveloping and improving highly effective education interventions. These interventions — namely remediation, structured pedagogy, and teaching at the right level — have been shown to improve foundational literacy and numeracy in low- and middle-income countries.(Read more)Global Health Innovation, Bill & Melinda Gates Foundation: $65,000,000We are supporting two global health initiatives at the Gates Foundation. The first ($40,000,000) will fund grantees who are advancing a new vaccine through efficacy trials against tuberculosis in adults and adolescents. The second ($25,000,000) will fund grantees who are helping a new vaccine manufacturer supply oral cholera vaccine — which will diversify the vaccine’s manufacturing base and increase vaccine supply to better meet global demand.(Read more)Tara Climate Foundation: $10,000,000Tara Climate Foundation focuses on climate change mitigation in South, Southeast, and East Asia (excluding China and India). They help found new nonprofits and grow the climate movement across this region. In addition to climate impacts, their work could also lead to substantial improvements in air quality.(Read more)ProcessHere’s more on how we arrived at...]]>
ChrisSmith https://forum.effectivealtruism.org/posts/fzrsa6syCM3yWW7HA/announcing-the-awardees-for-open-philanthropy-s-usd150m Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the awardees for Open Philanthropy's $150M Regranting Challenge, published by ChrisSmith on January 12, 2023 on The Effective Altruism Forum.In February, we launched the Regranting Challenge, aiming to add $150 million in funding to the budgets of outstanding grantmakers at other foundations.We saw the Challenge as an opportunity to maximize our impact, by:Adding funding to high-impact work that was already underway. We’ve long been inspired by the work of other grantmakers, and we believe there are highly effective grantmaking organizations doing better work than we could in their respective spaces. Instead of reinventing the wheel, we’ve used the Regranting Challenge to give some of those organizations additional funding, so they can increase the scale and scope of their own work.Piloting a new approach to growing highly effective grantmaking programs. The lack of feedback mechanisms that ensure effective grantmakers get more money to allocate is a major shortcoming in the existing philanthropic ecosystem. The Regranting Challenge gave us a chance to experiment with changing that dynamic.Learning from a wide range of grantmakers with different approaches. By creating an open call, we were able to identify highly effective foundations and program areas to support that we wouldn’t have known about otherwise.ResultsAfter eight months, three selection rounds, and evaluations from dozens of experts (inside and outside Open Philanthropy), we’ve chosen the awardees!We’ll share a brief description of each program here, but you can learn more about them on the Regranting Challenge website.Development Innovation Ventures: $45,000,000Development Innovation Ventures is a program inside the United States Agency for International Development (USAID). They invest in early-stage organizations and projects in global health and development that have the potential to be highly impactful and cost-effective. They’ve supported programs in water sanitation, early childhood education, and routine immunization (among others).(Read more)Eleanor Crook Foundation: $25,000,000The Eleanor Crook Foundation funds research and advocacy to end global malnutrition. They have a track record of successfully advocating for the increased use of the “Power 4” malnutrition interventions — prenatal vitamins for pregnant women, breastfeeding support for mothers, vitamin A supplementation, and ready-to-use therapeutic food to treat wasting.(Read more)Global Education, Bill & Melinda Gates Foundation: $5,000,000The Global Education program at the Gates Foundation makes grants to organizations that aredeveloping and improving highly effective education interventions. These interventions — namely remediation, structured pedagogy, and teaching at the right level — have been shown to improve foundational literacy and numeracy in low- and middle-income countries.(Read more)Global Health Innovation, Bill & Melinda Gates Foundation: $65,000,000We are supporting two global health initiatives at the Gates Foundation. The first ($40,000,000) will fund grantees who are advancing a new vaccine through efficacy trials against tuberculosis in adults and adolescents. The second ($25,000,000) will fund grantees who are helping a new vaccine manufacturer supply oral cholera vaccine — which will diversify the vaccine’s manufacturing base and increase vaccine supply to better meet global demand.(Read more)Tara Climate Foundation: $10,000,000Tara Climate Foundation focuses on climate change mitigation in South, Southeast, and East Asia (excluding China and India). They help found new nonprofits and grow the climate movement across this region. In addition to climate impacts, their work could also lead to substantial improvements in air quality.(Read more)ProcessHere’s more on how we arrived at...]]>
Thu, 12 Jan 2023 11:51:19 +0000 EA - Announcing the awardees for Open Philanthropy's $150M Regranting Challenge by ChrisSmith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the awardees for Open Philanthropy's $150M Regranting Challenge, published by ChrisSmith on January 12, 2023 on The Effective Altruism Forum.In February, we launched the Regranting Challenge, aiming to add $150 million in funding to the budgets of outstanding grantmakers at other foundations.We saw the Challenge as an opportunity to maximize our impact, by:Adding funding to high-impact work that was already underway. We’ve long been inspired by the work of other grantmakers, and we believe there are highly effective grantmaking organizations doing better work than we could in their respective spaces. Instead of reinventing the wheel, we’ve used the Regranting Challenge to give some of those organizations additional funding, so they can increase the scale and scope of their own work.Piloting a new approach to growing highly effective grantmaking programs. The lack of feedback mechanisms that ensure effective grantmakers get more money to allocate is a major shortcoming in the existing philanthropic ecosystem. The Regranting Challenge gave us a chance to experiment with changing that dynamic.Learning from a wide range of grantmakers with different approaches. By creating an open call, we were able to identify highly effective foundations and program areas to support that we wouldn’t have known about otherwise.ResultsAfter eight months, three selection rounds, and evaluations from dozens of experts (inside and outside Open Philanthropy), we’ve chosen the awardees!We’ll share a brief description of each program here, but you can learn more about them on the Regranting Challenge website.Development Innovation Ventures: $45,000,000Development Innovation Ventures is a program inside the United States Agency for International Development (USAID). They invest in early-stage organizations and projects in global health and development that have the potential to be highly impactful and cost-effective. They’ve supported programs in water sanitation, early childhood education, and routine immunization (among others).(Read more)Eleanor Crook Foundation: $25,000,000The Eleanor Crook Foundation funds research and advocacy to end global malnutrition. They have a track record of successfully advocating for the increased use of the “Power 4” malnutrition interventions — prenatal vitamins for pregnant women, breastfeeding support for mothers, vitamin A supplementation, and ready-to-use therapeutic food to treat wasting.(Read more)Global Education, Bill & Melinda Gates Foundation: $5,000,000The Global Education program at the Gates Foundation makes grants to organizations that aredeveloping and improving highly effective education interventions. These interventions — namely remediation, structured pedagogy, and teaching at the right level — have been shown to improve foundational literacy and numeracy in low- and middle-income countries.(Read more)Global Health Innovation, Bill & Melinda Gates Foundation: $65,000,000We are supporting two global health initiatives at the Gates Foundation. The first ($40,000,000) will fund grantees who are advancing a new vaccine through efficacy trials against tuberculosis in adults and adolescents. The second ($25,000,000) will fund grantees who are helping a new vaccine manufacturer supply oral cholera vaccine — which will diversify the vaccine’s manufacturing base and increase vaccine supply to better meet global demand.(Read more)Tara Climate Foundation: $10,000,000Tara Climate Foundation focuses on climate change mitigation in South, Southeast, and East Asia (excluding China and India). They help found new nonprofits and grow the climate movement across this region. In addition to climate impacts, their work could also lead to substantial improvements in air quality.(Read more)ProcessHere’s more on how we arrived at...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the awardees for Open Philanthropy's $150M Regranting Challenge, published by ChrisSmith on January 12, 2023 on The Effective Altruism Forum.In February, we launched the Regranting Challenge, aiming to add $150 million in funding to the budgets of outstanding grantmakers at other foundations.We saw the Challenge as an opportunity to maximize our impact, by:Adding funding to high-impact work that was already underway. We’ve long been inspired by the work of other grantmakers, and we believe there are highly effective grantmaking organizations doing better work than we could in their respective spaces. Instead of reinventing the wheel, we’ve used the Regranting Challenge to give some of those organizations additional funding, so they can increase the scale and scope of their own work.Piloting a new approach to growing highly effective grantmaking programs. The lack of feedback mechanisms that ensure effective grantmakers get more money to allocate is a major shortcoming in the existing philanthropic ecosystem. The Regranting Challenge gave us a chance to experiment with changing that dynamic.Learning from a wide range of grantmakers with different approaches. By creating an open call, we were able to identify highly effective foundations and program areas to support that we wouldn’t have known about otherwise.ResultsAfter eight months, three selection rounds, and evaluations from dozens of experts (inside and outside Open Philanthropy), we’ve chosen the awardees!We’ll share a brief description of each program here, but you can learn more about them on the Regranting Challenge website.Development Innovation Ventures: $45,000,000Development Innovation Ventures is a program inside the United States Agency for International Development (USAID). They invest in early-stage organizations and projects in global health and development that have the potential to be highly impactful and cost-effective. They’ve supported programs in water sanitation, early childhood education, and routine immunization (among others).(Read more)Eleanor Crook Foundation: $25,000,000The Eleanor Crook Foundation funds research and advocacy to end global malnutrition. They have a track record of successfully advocating for the increased use of the “Power 4” malnutrition interventions — prenatal vitamins for pregnant women, breastfeeding support for mothers, vitamin A supplementation, and ready-to-use therapeutic food to treat wasting.(Read more)Global Education, Bill & Melinda Gates Foundation: $5,000,000The Global Education program at the Gates Foundation makes grants to organizations that aredeveloping and improving highly effective education interventions. These interventions — namely remediation, structured pedagogy, and teaching at the right level — have been shown to improve foundational literacy and numeracy in low- and middle-income countries.(Read more)Global Health Innovation, Bill & Melinda Gates Foundation: $65,000,000We are supporting two global health initiatives at the Gates Foundation. The first ($40,000,000) will fund grantees who are advancing a new vaccine through efficacy trials against tuberculosis in adults and adolescents. The second ($25,000,000) will fund grantees who are helping a new vaccine manufacturer supply oral cholera vaccine — which will diversify the vaccine’s manufacturing base and increase vaccine supply to better meet global demand.(Read more)Tara Climate Foundation: $10,000,000Tara Climate Foundation focuses on climate change mitigation in South, Southeast, and East Asia (excluding China and India). They help found new nonprofits and grow the climate movement across this region. In addition to climate impacts, their work could also lead to substantial improvements in air quality.(Read more)ProcessHere’s more on how we arrived at...]]>
ChrisSmith https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:29 None full 4432
nQCW3h4AjTrcxaJpw_EA EA - Announcing: SERI Biosecurity Interventions Technical Seminar (BITS) by James Lin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing: SERI Biosecurity Interventions Technical Seminar (BITS), published by James Lin on January 12, 2023 on The Effective Altruism Forum.1. IntroductionThe biosecurity space has promising technical interventions, such as UV-C and metagenomic sequencing. Often, students in college and community-builders are interested in biosecurity, but don’t have an in-depth understanding of the solutions. This is concerning for 3 central reasons:Poor explanations are presented for biosecurity solutions, and may contain inaccuraciesThe reputation of the biosecurity community is damagedPublic perception of biosecurity interventions will be off the mark relative to the intervention’s true effectiveness (e.g. “far UVC will end pandemics by 2025”)As a consequence, people are turned away from biosecurityPromising people transition to other fields of workWhen someone doesn’t understand the specifics, it’s highly unlikely that they’ll pursue direct work, e.g. building a startup for that technologyInstead, they’ll likely transition to community-building or traditional career pathsBiosecurity culture becomes diluted and can’t attract technically bright researchers or entrepreneursCompetence is magnetic, and promoting a culture of technical learning and deep dives is crucial to doing impactful work in a domain as challenging as biosecurityA technical biosecurity program can serve as a way for students to become much more knowledgeable about biosecurity solutions.2. ContextThere have been biosecurity discussion groups run in the past, but most of these groups focus on the top-of-the-funnel content—the introductory information and context that helps to form a foundation. To our knowledge, there doesn’t yet exist a deep dive group, one that specifically aims to equip students with technical understanding of various biosecurity interventions.When it comes to precedent, there have been similar discussion and research mentorship programs in the field of AI alignment, including AGISF, SERI MATS, MLSS, and the CERI Fellowship for biosecurity, nuclear risk, climate change, and AI safety. These programs have been largely successful, providing hundreds of students with hands-on technical experience or in-depth discussions about particular domains of AI safety.3. ProgramAboutOur proposal is to establish an online 8-week-long technical deep dive group. This group will explore many crucial aspects of technical biosecurity interventions while ensuring that we maintain an emphasis on defensive interventions. Examples of session topics include far-UVC, SotA for PPE, and the NAO (syllabus). The program will culminate in a research presentation for a chosen aspect of related topics and interventions (more details here). Each group will meet once per week and will have ~5 people. The expected weekly time commitment is 2-3 hours.There will be a virtual and in-person pilot of this program. Everyone is eligible to apply for the virtual program. The in-person program seminar group will be held at Stanford, though depending on demand we may also run an in-person group at Berkeley.There are a few reasons for running a primarily virtual program:It’s important to have an experienced biosecurity person facilitating, which usually limits these interactions to virtualVirtual programs are much more scalable and flexibleIt’s easier to reach a critical mass of studentsIf you would like to join, please feel free to apply! The application deadline is Friday, Jan 20th, and the program is expected to start in 2-3 weeks. We expect to start by running a few groups in parallel at different times during the week to accommodate those in different time zones.SyllabusThis is the syllabus with the content we will be covering. There will also be an optional research poster to be submitted 2 weeks after the...]]>
James Lin https://forum.effectivealtruism.org/posts/nQCW3h4AjTrcxaJpw/announcing-seri-biosecurity-interventions-technical-seminar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing: SERI Biosecurity Interventions Technical Seminar (BITS), published by James Lin on January 12, 2023 on The Effective Altruism Forum.1. IntroductionThe biosecurity space has promising technical interventions, such as UV-C and metagenomic sequencing. Often, students in college and community-builders are interested in biosecurity, but don’t have an in-depth understanding of the solutions. This is concerning for 3 central reasons:Poor explanations are presented for biosecurity solutions, and may contain inaccuraciesThe reputation of the biosecurity community is damagedPublic perception of biosecurity interventions will be off the mark relative to the intervention’s true effectiveness (e.g. “far UVC will end pandemics by 2025”)As a consequence, people are turned away from biosecurityPromising people transition to other fields of workWhen someone doesn’t understand the specifics, it’s highly unlikely that they’ll pursue direct work, e.g. building a startup for that technologyInstead, they’ll likely transition to community-building or traditional career pathsBiosecurity culture becomes diluted and can’t attract technically bright researchers or entrepreneursCompetence is magnetic, and promoting a culture of technical learning and deep dives is crucial to doing impactful work in a domain as challenging as biosecurityA technical biosecurity program can serve as a way for students to become much more knowledgeable about biosecurity solutions.2. ContextThere have been biosecurity discussion groups run in the past, but most of these groups focus on the top-of-the-funnel content—the introductory information and context that helps to form a foundation. To our knowledge, there doesn’t yet exist a deep dive group, one that specifically aims to equip students with technical understanding of various biosecurity interventions.When it comes to precedent, there have been similar discussion and research mentorship programs in the field of AI alignment, including AGISF, SERI MATS, MLSS, and the CERI Fellowship for biosecurity, nuclear risk, climate change, and AI safety. These programs have been largely successful, providing hundreds of students with hands-on technical experience or in-depth discussions about particular domains of AI safety.3. ProgramAboutOur proposal is to establish an online 8-week-long technical deep dive group. This group will explore many crucial aspects of technical biosecurity interventions while ensuring that we maintain an emphasis on defensive interventions. Examples of session topics include far-UVC, SotA for PPE, and the NAO (syllabus). The program will culminate in a research presentation for a chosen aspect of related topics and interventions (more details here). Each group will meet once per week and will have ~5 people. The expected weekly time commitment is 2-3 hours.There will be a virtual and in-person pilot of this program. Everyone is eligible to apply for the virtual program. The in-person program seminar group will be held at Stanford, though depending on demand we may also run an in-person group at Berkeley.There are a few reasons for running a primarily virtual program:It’s important to have an experienced biosecurity person facilitating, which usually limits these interactions to virtualVirtual programs are much more scalable and flexibleIt’s easier to reach a critical mass of studentsIf you would like to join, please feel free to apply! The application deadline is Friday, Jan 20th, and the program is expected to start in 2-3 weeks. We expect to start by running a few groups in parallel at different times during the week to accommodate those in different time zones.SyllabusThis is the syllabus with the content we will be covering. There will also be an optional research poster to be submitted 2 weeks after the...]]>
Thu, 12 Jan 2023 10:01:54 +0000 EA - Announcing: SERI Biosecurity Interventions Technical Seminar (BITS) by James Lin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing: SERI Biosecurity Interventions Technical Seminar (BITS), published by James Lin on January 12, 2023 on The Effective Altruism Forum.1. IntroductionThe biosecurity space has promising technical interventions, such as UV-C and metagenomic sequencing. Often, students in college and community-builders are interested in biosecurity, but don’t have an in-depth understanding of the solutions. This is concerning for 3 central reasons:Poor explanations are presented for biosecurity solutions, and may contain inaccuraciesThe reputation of the biosecurity community is damagedPublic perception of biosecurity interventions will be off the mark relative to the intervention’s true effectiveness (e.g. “far UVC will end pandemics by 2025”)As a consequence, people are turned away from biosecurityPromising people transition to other fields of workWhen someone doesn’t understand the specifics, it’s highly unlikely that they’ll pursue direct work, e.g. building a startup for that technologyInstead, they’ll likely transition to community-building or traditional career pathsBiosecurity culture becomes diluted and can’t attract technically bright researchers or entrepreneursCompetence is magnetic, and promoting a culture of technical learning and deep dives is crucial to doing impactful work in a domain as challenging as biosecurityA technical biosecurity program can serve as a way for students to become much more knowledgeable about biosecurity solutions.2. ContextThere have been biosecurity discussion groups run in the past, but most of these groups focus on the top-of-the-funnel content—the introductory information and context that helps to form a foundation. To our knowledge, there doesn’t yet exist a deep dive group, one that specifically aims to equip students with technical understanding of various biosecurity interventions.When it comes to precedent, there have been similar discussion and research mentorship programs in the field of AI alignment, including AGISF, SERI MATS, MLSS, and the CERI Fellowship for biosecurity, nuclear risk, climate change, and AI safety. These programs have been largely successful, providing hundreds of students with hands-on technical experience or in-depth discussions about particular domains of AI safety.3. ProgramAboutOur proposal is to establish an online 8-week-long technical deep dive group. This group will explore many crucial aspects of technical biosecurity interventions while ensuring that we maintain an emphasis on defensive interventions. Examples of session topics include far-UVC, SotA for PPE, and the NAO (syllabus). The program will culminate in a research presentation for a chosen aspect of related topics and interventions (more details here). Each group will meet once per week and will have ~5 people. The expected weekly time commitment is 2-3 hours.There will be a virtual and in-person pilot of this program. Everyone is eligible to apply for the virtual program. The in-person program seminar group will be held at Stanford, though depending on demand we may also run an in-person group at Berkeley.There are a few reasons for running a primarily virtual program:It’s important to have an experienced biosecurity person facilitating, which usually limits these interactions to virtualVirtual programs are much more scalable and flexibleIt’s easier to reach a critical mass of studentsIf you would like to join, please feel free to apply! The application deadline is Friday, Jan 20th, and the program is expected to start in 2-3 weeks. We expect to start by running a few groups in parallel at different times during the week to accommodate those in different time zones.SyllabusThis is the syllabus with the content we will be covering. There will also be an optional research poster to be submitted 2 weeks after the...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing: SERI Biosecurity Interventions Technical Seminar (BITS), published by James Lin on January 12, 2023 on The Effective Altruism Forum.1. IntroductionThe biosecurity space has promising technical interventions, such as UV-C and metagenomic sequencing. Often, students in college and community-builders are interested in biosecurity, but don’t have an in-depth understanding of the solutions. This is concerning for 3 central reasons:Poor explanations are presented for biosecurity solutions, and may contain inaccuraciesThe reputation of the biosecurity community is damagedPublic perception of biosecurity interventions will be off the mark relative to the intervention’s true effectiveness (e.g. “far UVC will end pandemics by 2025”)As a consequence, people are turned away from biosecurityPromising people transition to other fields of workWhen someone doesn’t understand the specifics, it’s highly unlikely that they’ll pursue direct work, e.g. building a startup for that technologyInstead, they’ll likely transition to community-building or traditional career pathsBiosecurity culture becomes diluted and can’t attract technically bright researchers or entrepreneursCompetence is magnetic, and promoting a culture of technical learning and deep dives is crucial to doing impactful work in a domain as challenging as biosecurityA technical biosecurity program can serve as a way for students to become much more knowledgeable about biosecurity solutions.2. ContextThere have been biosecurity discussion groups run in the past, but most of these groups focus on the top-of-the-funnel content—the introductory information and context that helps to form a foundation. To our knowledge, there doesn’t yet exist a deep dive group, one that specifically aims to equip students with technical understanding of various biosecurity interventions.When it comes to precedent, there have been similar discussion and research mentorship programs in the field of AI alignment, including AGISF, SERI MATS, MLSS, and the CERI Fellowship for biosecurity, nuclear risk, climate change, and AI safety. These programs have been largely successful, providing hundreds of students with hands-on technical experience or in-depth discussions about particular domains of AI safety.3. ProgramAboutOur proposal is to establish an online 8-week-long technical deep dive group. This group will explore many crucial aspects of technical biosecurity interventions while ensuring that we maintain an emphasis on defensive interventions. Examples of session topics include far-UVC, SotA for PPE, and the NAO (syllabus). The program will culminate in a research presentation for a chosen aspect of related topics and interventions (more details here). Each group will meet once per week and will have ~5 people. The expected weekly time commitment is 2-3 hours.There will be a virtual and in-person pilot of this program. Everyone is eligible to apply for the virtual program. The in-person program seminar group will be held at Stanford, though depending on demand we may also run an in-person group at Berkeley.There are a few reasons for running a primarily virtual program:It’s important to have an experienced biosecurity person facilitating, which usually limits these interactions to virtualVirtual programs are much more scalable and flexibleIt’s easier to reach a critical mass of studentsIf you would like to join, please feel free to apply! The application deadline is Friday, Jan 20th, and the program is expected to start in 2-3 weeks. We expect to start by running a few groups in parallel at different times during the week to accommodate those in different time zones.SyllabusThis is the syllabus with the content we will be covering. There will also be an optional research poster to be submitted 2 weeks after the...]]>
James Lin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:39 None full 4421
jRJyjdqqtpwydcieK_EA EA - EA could use better internal communications infrastructure by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA could use better internal communications infrastructure, published by Ozzie Gooen on January 12, 2023 on The Effective Altruism Forum.(This was a quick post, written in around 30 min. It was originally posted on Facebook, where it generated some good discussion.)I really wish EA had better internal communications.If I wanted to make a blog post / message / recording accessible to a "large subset of effective altruist professionals", I'm not sure how I'd do that.I don't think we yet have:One accepted chat systemAn internal blogging systemAny internal email lists (for a very wide net of EA professionals)It's nice to encourage people to communicate publicly, but there's a lot of communication that's really not meant for that.Generally, the existing options are:Post to your internal org slack/emails (note: many EA orgs are tiny)Share with people in your officePost to one of a few domain-specific and idiosyncratic Slacks/DiscordsPost publicly, for everyone to seeI think the SBF situation might have shown some substantial vulnerability here. It was a crisis where public statements were taken as serious legal statements. This meant that EA leadership essentially didn’t have a real method of communicating with most EAs.I feel like much of EA is a lot like one big org that tries really hard not to be one big org. This gives us some advantages of being decentralized, but we are missing a lot of the advantages of centralization. If "Professional EAs" were looked at as one large org, I'd expect that we'd look fairly amateur, compared to other sizeable organizations.A very simple way to make progress on internal communications is to separate the issue into a few clusters, and then attack each one separately.Access/Onboarding/OffboardingMake official lists that cover "professional/trusted members". You could start with simple criteria like "works at an org funded by an EA funder" or "went to 2+ EAGs".Negotiation and Moderation"EA Professionals" might basically be an "enterprise", and need "enterprise tools". These often are expensive and require negotiation.A Responsible IndividualMy preference would be that we find someone who did a good job at this sort of thing in other sizeable companies and try to get them to do it here.I bet with $200k/year for the talent, plus maybe $200k-$1k/year, we could have a decent setup, assuming we could find good enough talent. That said, this would definitely be work to establish, so I wouldn't expect anything soon.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ozzie Gooen https://forum.effectivealtruism.org/posts/jRJyjdqqtpwydcieK/ea-could-use-better-internal-communications-infrastructure Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA could use better internal communications infrastructure, published by Ozzie Gooen on January 12, 2023 on The Effective Altruism Forum.(This was a quick post, written in around 30 min. It was originally posted on Facebook, where it generated some good discussion.)I really wish EA had better internal communications.If I wanted to make a blog post / message / recording accessible to a "large subset of effective altruist professionals", I'm not sure how I'd do that.I don't think we yet have:One accepted chat systemAn internal blogging systemAny internal email lists (for a very wide net of EA professionals)It's nice to encourage people to communicate publicly, but there's a lot of communication that's really not meant for that.Generally, the existing options are:Post to your internal org slack/emails (note: many EA orgs are tiny)Share with people in your officePost to one of a few domain-specific and idiosyncratic Slacks/DiscordsPost publicly, for everyone to seeI think the SBF situation might have shown some substantial vulnerability here. It was a crisis where public statements were taken as serious legal statements. This meant that EA leadership essentially didn’t have a real method of communicating with most EAs.I feel like much of EA is a lot like one big org that tries really hard not to be one big org. This gives us some advantages of being decentralized, but we are missing a lot of the advantages of centralization. If "Professional EAs" were looked at as one large org, I'd expect that we'd look fairly amateur, compared to other sizeable organizations.A very simple way to make progress on internal communications is to separate the issue into a few clusters, and then attack each one separately.Access/Onboarding/OffboardingMake official lists that cover "professional/trusted members". You could start with simple criteria like "works at an org funded by an EA funder" or "went to 2+ EAGs".Negotiation and Moderation"EA Professionals" might basically be an "enterprise", and need "enterprise tools". These often are expensive and require negotiation.A Responsible IndividualMy preference would be that we find someone who did a good job at this sort of thing in other sizeable companies and try to get them to do it here.I bet with $200k/year for the talent, plus maybe $200k-$1k/year, we could have a decent setup, assuming we could find good enough talent. That said, this would definitely be work to establish, so I wouldn't expect anything soon.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 12 Jan 2023 07:43:59 +0000 EA - EA could use better internal communications infrastructure by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA could use better internal communications infrastructure, published by Ozzie Gooen on January 12, 2023 on The Effective Altruism Forum.(This was a quick post, written in around 30 min. It was originally posted on Facebook, where it generated some good discussion.)I really wish EA had better internal communications.If I wanted to make a blog post / message / recording accessible to a "large subset of effective altruist professionals", I'm not sure how I'd do that.I don't think we yet have:One accepted chat systemAn internal blogging systemAny internal email lists (for a very wide net of EA professionals)It's nice to encourage people to communicate publicly, but there's a lot of communication that's really not meant for that.Generally, the existing options are:Post to your internal org slack/emails (note: many EA orgs are tiny)Share with people in your officePost to one of a few domain-specific and idiosyncratic Slacks/DiscordsPost publicly, for everyone to seeI think the SBF situation might have shown some substantial vulnerability here. It was a crisis where public statements were taken as serious legal statements. This meant that EA leadership essentially didn’t have a real method of communicating with most EAs.I feel like much of EA is a lot like one big org that tries really hard not to be one big org. This gives us some advantages of being decentralized, but we are missing a lot of the advantages of centralization. If "Professional EAs" were looked at as one large org, I'd expect that we'd look fairly amateur, compared to other sizeable organizations.A very simple way to make progress on internal communications is to separate the issue into a few clusters, and then attack each one separately.Access/Onboarding/OffboardingMake official lists that cover "professional/trusted members". You could start with simple criteria like "works at an org funded by an EA funder" or "went to 2+ EAGs".Negotiation and Moderation"EA Professionals" might basically be an "enterprise", and need "enterprise tools". These often are expensive and require negotiation.A Responsible IndividualMy preference would be that we find someone who did a good job at this sort of thing in other sizeable companies and try to get them to do it here.I bet with $200k/year for the talent, plus maybe $200k-$1k/year, we could have a decent setup, assuming we could find good enough talent. That said, this would definitely be work to establish, so I wouldn't expect anything soon.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA could use better internal communications infrastructure, published by Ozzie Gooen on January 12, 2023 on The Effective Altruism Forum.(This was a quick post, written in around 30 min. It was originally posted on Facebook, where it generated some good discussion.)I really wish EA had better internal communications.If I wanted to make a blog post / message / recording accessible to a "large subset of effective altruist professionals", I'm not sure how I'd do that.I don't think we yet have:One accepted chat systemAn internal blogging systemAny internal email lists (for a very wide net of EA professionals)It's nice to encourage people to communicate publicly, but there's a lot of communication that's really not meant for that.Generally, the existing options are:Post to your internal org slack/emails (note: many EA orgs are tiny)Share with people in your officePost to one of a few domain-specific and idiosyncratic Slacks/DiscordsPost publicly, for everyone to seeI think the SBF situation might have shown some substantial vulnerability here. It was a crisis where public statements were taken as serious legal statements. This meant that EA leadership essentially didn’t have a real method of communicating with most EAs.I feel like much of EA is a lot like one big org that tries really hard not to be one big org. This gives us some advantages of being decentralized, but we are missing a lot of the advantages of centralization. If "Professional EAs" were looked at as one large org, I'd expect that we'd look fairly amateur, compared to other sizeable organizations.A very simple way to make progress on internal communications is to separate the issue into a few clusters, and then attack each one separately.Access/Onboarding/OffboardingMake official lists that cover "professional/trusted members". You could start with simple criteria like "works at an org funded by an EA funder" or "went to 2+ EAGs".Negotiation and Moderation"EA Professionals" might basically be an "enterprise", and need "enterprise tools". These often are expensive and require negotiation.A Responsible IndividualMy preference would be that we find someone who did a good job at this sort of thing in other sizeable companies and try to get them to do it here.I bet with $200k/year for the talent, plus maybe $200k-$1k/year, we could have a decent setup, assuming we could find good enough talent. That said, this would definitely be work to establish, so I wouldn't expect anything soon.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ozzie Gooen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:39 None full 4422
9qq53Hy4PKYLyDutD_EA EA - Abolitionist in the Streets, Pragmatist in the Sheets: New Ideas for Effective Animal Advocacy by Dhruv Makwana Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Abolitionist in the Streets, Pragmatist in the Sheets: New Ideas for Effective Animal Advocacy, published by Dhruv Makwana on January 11, 2023 on The Effective Altruism Forum.This series of posts is not intended to re-ignite the never-ending philosophical debate about welfarism vs abolitionism. Its main point is simply to point out that (broadly speaking) animal advocacy within Effective Altruism is uniform in its welfarist thinking and approach, and that it has assumed with insufficient reason that all abolitionist thinking and approaches are ineffective. This assumption is partly based on (1) poor evidence (2) a narrow conception of what it means to take an abolitionist approach to animal advocacy (3) with one notable exception, abolitionists’ poor/absent engagement with animal advocacy on pragmatic terms.It is important to note I am not claiming that alternative approaches mentioned are (cost-)effective, though I do state reasons this might be true. In logic terms, I am simply saying that currently, the status quo is,EA Animal Advocacy = Welfarist approaches & not (Abolitionist approaches).What I am proposing is that it should at least beEA Animal Advocacy = Welfarist approaches & unknown(Abolitionist approaches).That is, the community does not have strong enough evidence and arguments to rule out abolitionist thinking and approaches. This opens up new lines of thinking, new questions for research and new advocacy strategies to test and refine. I also suggest how advocates might measure such strategies’ effectiveness, in ways appropriate to abolitionism (Social Change Dynamics).EpistemicsI am a pragmatic, abolitionist-leaning animal advocate and vegan and I wrote this series voluntarily, in rare spare time, with help from three other vegans, all of whom have been familiar with EA for a few years. Two of us have a moderate amount of experience (a few years) volunteering with local abolitionist activism of various sorts, and we recognise this could lead to motivated reasoning. Two of us would identify as EAs, and have attended EA conferences. One of us is studying for a PhD in wild animal suffering; I am studying for a PhD in Computer Science. None of us have any formal training or professional experience in EA-related research, or criticism. Most importantly, this series was written over the course of more than 14 months; the sheer length of time means research moves on - I’ve done my best to keep up, and have even been scooped on multiple points, but I’ve probably missed things.This series is intended to be a big-picture piece, surfacing and investigating common beliefs within EA animal advocacy. It covers a huge amount of ground, more akin to setting a research agenda than answering specific questions: each paragraph could be the subject of an investigation. Necessarily, I deal with generalisations of views, which will not cover all organisations, variations, or advocates. I have made my best efforts to avoid accidentally strawmanning, by, wherever possible, checking my intuition on what the mainstream views are against any available evidence (funding allocations, forum posts, surveys, reports from prominent EA organisations, searches on the EA forum).SummaryA Case for AbolitionI present a short, pragmatic, and EA-targeted case for complete abolition of animal exploitation, and for using abolitionist approaches to achieve this. I show that (1) a longtermist perspective leads one to aim for complete abolition as a goal, and with one key assumption, using abolitionist approaches to get there and (2) contrary to prior work, abolition is helpful to reducing wild animal suffering (and conversely, welfarism could hinder such efforts).Limitations with Current EA Animal AdvocacyI argue that the current major strands of EA Animal Advocacy (corporate welfare ca...]]>
Dhruv Makwana https://forum.effectivealtruism.org/posts/9qq53Hy4PKYLyDutD/abolitionist-in-the-streets-pragmatist-in-the-sheets-new Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Abolitionist in the Streets, Pragmatist in the Sheets: New Ideas for Effective Animal Advocacy, published by Dhruv Makwana on January 11, 2023 on The Effective Altruism Forum.This series of posts is not intended to re-ignite the never-ending philosophical debate about welfarism vs abolitionism. Its main point is simply to point out that (broadly speaking) animal advocacy within Effective Altruism is uniform in its welfarist thinking and approach, and that it has assumed with insufficient reason that all abolitionist thinking and approaches are ineffective. This assumption is partly based on (1) poor evidence (2) a narrow conception of what it means to take an abolitionist approach to animal advocacy (3) with one notable exception, abolitionists’ poor/absent engagement with animal advocacy on pragmatic terms.It is important to note I am not claiming that alternative approaches mentioned are (cost-)effective, though I do state reasons this might be true. In logic terms, I am simply saying that currently, the status quo is,EA Animal Advocacy = Welfarist approaches & not (Abolitionist approaches).What I am proposing is that it should at least beEA Animal Advocacy = Welfarist approaches & unknown(Abolitionist approaches).That is, the community does not have strong enough evidence and arguments to rule out abolitionist thinking and approaches. This opens up new lines of thinking, new questions for research and new advocacy strategies to test and refine. I also suggest how advocates might measure such strategies’ effectiveness, in ways appropriate to abolitionism (Social Change Dynamics).EpistemicsI am a pragmatic, abolitionist-leaning animal advocate and vegan and I wrote this series voluntarily, in rare spare time, with help from three other vegans, all of whom have been familiar with EA for a few years. Two of us have a moderate amount of experience (a few years) volunteering with local abolitionist activism of various sorts, and we recognise this could lead to motivated reasoning. Two of us would identify as EAs, and have attended EA conferences. One of us is studying for a PhD in wild animal suffering; I am studying for a PhD in Computer Science. None of us have any formal training or professional experience in EA-related research, or criticism. Most importantly, this series was written over the course of more than 14 months; the sheer length of time means research moves on - I’ve done my best to keep up, and have even been scooped on multiple points, but I’ve probably missed things.This series is intended to be a big-picture piece, surfacing and investigating common beliefs within EA animal advocacy. It covers a huge amount of ground, more akin to setting a research agenda than answering specific questions: each paragraph could be the subject of an investigation. Necessarily, I deal with generalisations of views, which will not cover all organisations, variations, or advocates. I have made my best efforts to avoid accidentally strawmanning, by, wherever possible, checking my intuition on what the mainstream views are against any available evidence (funding allocations, forum posts, surveys, reports from prominent EA organisations, searches on the EA forum).SummaryA Case for AbolitionI present a short, pragmatic, and EA-targeted case for complete abolition of animal exploitation, and for using abolitionist approaches to achieve this. I show that (1) a longtermist perspective leads one to aim for complete abolition as a goal, and with one key assumption, using abolitionist approaches to get there and (2) contrary to prior work, abolition is helpful to reducing wild animal suffering (and conversely, welfarism could hinder such efforts).Limitations with Current EA Animal AdvocacyI argue that the current major strands of EA Animal Advocacy (corporate welfare ca...]]>
Thu, 12 Jan 2023 06:27:44 +0000 EA - Abolitionist in the Streets, Pragmatist in the Sheets: New Ideas for Effective Animal Advocacy by Dhruv Makwana Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Abolitionist in the Streets, Pragmatist in the Sheets: New Ideas for Effective Animal Advocacy, published by Dhruv Makwana on January 11, 2023 on The Effective Altruism Forum.This series of posts is not intended to re-ignite the never-ending philosophical debate about welfarism vs abolitionism. Its main point is simply to point out that (broadly speaking) animal advocacy within Effective Altruism is uniform in its welfarist thinking and approach, and that it has assumed with insufficient reason that all abolitionist thinking and approaches are ineffective. This assumption is partly based on (1) poor evidence (2) a narrow conception of what it means to take an abolitionist approach to animal advocacy (3) with one notable exception, abolitionists’ poor/absent engagement with animal advocacy on pragmatic terms.It is important to note I am not claiming that alternative approaches mentioned are (cost-)effective, though I do state reasons this might be true. In logic terms, I am simply saying that currently, the status quo is,EA Animal Advocacy = Welfarist approaches & not (Abolitionist approaches).What I am proposing is that it should at least beEA Animal Advocacy = Welfarist approaches & unknown(Abolitionist approaches).That is, the community does not have strong enough evidence and arguments to rule out abolitionist thinking and approaches. This opens up new lines of thinking, new questions for research and new advocacy strategies to test and refine. I also suggest how advocates might measure such strategies’ effectiveness, in ways appropriate to abolitionism (Social Change Dynamics).EpistemicsI am a pragmatic, abolitionist-leaning animal advocate and vegan and I wrote this series voluntarily, in rare spare time, with help from three other vegans, all of whom have been familiar with EA for a few years. Two of us have a moderate amount of experience (a few years) volunteering with local abolitionist activism of various sorts, and we recognise this could lead to motivated reasoning. Two of us would identify as EAs, and have attended EA conferences. One of us is studying for a PhD in wild animal suffering; I am studying for a PhD in Computer Science. None of us have any formal training or professional experience in EA-related research, or criticism. Most importantly, this series was written over the course of more than 14 months; the sheer length of time means research moves on - I’ve done my best to keep up, and have even been scooped on multiple points, but I’ve probably missed things.This series is intended to be a big-picture piece, surfacing and investigating common beliefs within EA animal advocacy. It covers a huge amount of ground, more akin to setting a research agenda than answering specific questions: each paragraph could be the subject of an investigation. Necessarily, I deal with generalisations of views, which will not cover all organisations, variations, or advocates. I have made my best efforts to avoid accidentally strawmanning, by, wherever possible, checking my intuition on what the mainstream views are against any available evidence (funding allocations, forum posts, surveys, reports from prominent EA organisations, searches on the EA forum).SummaryA Case for AbolitionI present a short, pragmatic, and EA-targeted case for complete abolition of animal exploitation, and for using abolitionist approaches to achieve this. I show that (1) a longtermist perspective leads one to aim for complete abolition as a goal, and with one key assumption, using abolitionist approaches to get there and (2) contrary to prior work, abolition is helpful to reducing wild animal suffering (and conversely, welfarism could hinder such efforts).Limitations with Current EA Animal AdvocacyI argue that the current major strands of EA Animal Advocacy (corporate welfare ca...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Abolitionist in the Streets, Pragmatist in the Sheets: New Ideas for Effective Animal Advocacy, published by Dhruv Makwana on January 11, 2023 on The Effective Altruism Forum.This series of posts is not intended to re-ignite the never-ending philosophical debate about welfarism vs abolitionism. Its main point is simply to point out that (broadly speaking) animal advocacy within Effective Altruism is uniform in its welfarist thinking and approach, and that it has assumed with insufficient reason that all abolitionist thinking and approaches are ineffective. This assumption is partly based on (1) poor evidence (2) a narrow conception of what it means to take an abolitionist approach to animal advocacy (3) with one notable exception, abolitionists’ poor/absent engagement with animal advocacy on pragmatic terms.It is important to note I am not claiming that alternative approaches mentioned are (cost-)effective, though I do state reasons this might be true. In logic terms, I am simply saying that currently, the status quo is,EA Animal Advocacy = Welfarist approaches & not (Abolitionist approaches).What I am proposing is that it should at least beEA Animal Advocacy = Welfarist approaches & unknown(Abolitionist approaches).That is, the community does not have strong enough evidence and arguments to rule out abolitionist thinking and approaches. This opens up new lines of thinking, new questions for research and new advocacy strategies to test and refine. I also suggest how advocates might measure such strategies’ effectiveness, in ways appropriate to abolitionism (Social Change Dynamics).EpistemicsI am a pragmatic, abolitionist-leaning animal advocate and vegan and I wrote this series voluntarily, in rare spare time, with help from three other vegans, all of whom have been familiar with EA for a few years. Two of us have a moderate amount of experience (a few years) volunteering with local abolitionist activism of various sorts, and we recognise this could lead to motivated reasoning. Two of us would identify as EAs, and have attended EA conferences. One of us is studying for a PhD in wild animal suffering; I am studying for a PhD in Computer Science. None of us have any formal training or professional experience in EA-related research, or criticism. Most importantly, this series was written over the course of more than 14 months; the sheer length of time means research moves on - I’ve done my best to keep up, and have even been scooped on multiple points, but I’ve probably missed things.This series is intended to be a big-picture piece, surfacing and investigating common beliefs within EA animal advocacy. It covers a huge amount of ground, more akin to setting a research agenda than answering specific questions: each paragraph could be the subject of an investigation. Necessarily, I deal with generalisations of views, which will not cover all organisations, variations, or advocates. I have made my best efforts to avoid accidentally strawmanning, by, wherever possible, checking my intuition on what the mainstream views are against any available evidence (funding allocations, forum posts, surveys, reports from prominent EA organisations, searches on the EA forum).SummaryA Case for AbolitionI present a short, pragmatic, and EA-targeted case for complete abolition of animal exploitation, and for using abolitionist approaches to achieve this. I show that (1) a longtermist perspective leads one to aim for complete abolition as a goal, and with one key assumption, using abolitionist approaches to get there and (2) contrary to prior work, abolition is helpful to reducing wild animal suffering (and conversely, welfarism could hinder such efforts).Limitations with Current EA Animal AdvocacyI argue that the current major strands of EA Animal Advocacy (corporate welfare ca...]]>
Dhruv Makwana https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:19 None full 4426
6B3QEiSyGgAM4WJSR_EA EA - We don’t trade with ants by Katja Grace Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We don’t trade with ants, published by Katja Grace on January 12, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Katja Grace https://forum.effectivealtruism.org/posts/6B3QEiSyGgAM4WJSR/we-don-t-trade-with-ants Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We don’t trade with ants, published by Katja Grace on January 12, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 12 Jan 2023 06:21:55 +0000 EA - We don’t trade with ants by Katja Grace Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We don’t trade with ants, published by Katja Grace on January 12, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We don’t trade with ants, published by Katja Grace on January 12, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Katja Grace https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:24 None full 4423
C7nkp8h2gE6i8pBCJ_EA EA - What's the deal with AI consciousness? by ThomasW Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's the deal with AI consciousness?, published by ThomasW on January 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ThomasW https://forum.effectivealtruism.org/posts/C7nkp8h2gE6i8pBCJ/what-s-the-deal-with-ai-consciousness Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's the deal with AI consciousness?, published by ThomasW on January 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 12 Jan 2023 06:08:44 +0000 EA - What's the deal with AI consciousness? by ThomasW Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's the deal with AI consciousness?, published by ThomasW on January 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's the deal with AI consciousness?, published by ThomasW on January 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ThomasW https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:26 None full 4425
QKa5PGiasWqeL75A7_EA EA - Linkpost: Big Wins for Farm Animals This Decade by James Ozden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: Big Wins for Farm Animals This Decade, published by James Ozden on January 11, 2023 on The Effective Altruism Forum.Note: I'm cross-posting the Open Philanthropy Farm Animal Welfare Research Newsletter, written by Lewis Bollard (with his permission). You can sign up to receive this newsletter here. I'm sharing this because I think provides some great analysis of farm animal wins in the past decade, across a range of theories of change.Progress for farm animals has been far too slow. In 1822, Britain passed the first national law protecting farm animals. Two centuries later, most countries — including China and the United States — still lack such a law. In 1975, Peter Singer published Animal Liberation. Half a century later, about ten times more animals are factory farmed every year.But in the last decade, advocates have achieved unprecedented progress for farm animals. For the holidays, I want to step back and reflect on everything you’ve achieved — and much of it was achieved by readers of this newsletter — to start to turn the tide for farm animals.Corporate ChangeA decade ago, most of the world’s largest food corporations lacked even a basic farm animal welfare policy. Today, they almost all have one. That’s thanks to advocates, who won about 3,000 new corporate policies in the last ten years.Take the battery cage. About seven billion birds, or roughly one for every human on the planet, are crammed into these microwave-sized containers. A decade ago, Europe had just moved to slightly larger enriched cages, and US advocates were pushing for a similar reform. Abolishing cages altogether seemed impossible. That changed in 2015-18, as advocates secured cage-free pledges from almost all of the largest American and European retailers, fast food chains, and foodservice companies. Advocates then extended this work globally, securing major pledges from Brazil to Thailand.Most recently, advocates won the first global cage-free pledges from 150 multinationals, including the world’s largest hotel chains and food manufacturers.A major question was whether these companies would follow through on their pledges. So far, almost 1,000 companies have — that’s 88% of the companies that promised to go cage-free by the end of last year. Another 75% of the world’s largest food companies are now publicly reporting on their progress in going cage-free. Of course, some companies will still shirk their pledges. But 165M more hens are already cage-free in Europe and the US today than were a decade ago, and advocates are on track to help over 300M more just by getting companies to follow through on their existing policiesAlternative Protein AccelerationIn 2012, alternative proteins were decidedly “alternative.” No major meat company was making plant-based meat, no major US fast food chain was serving it, cultured meat hadn’t yet been cultured, Beyond Meat hadn’t gotten beyond Whole Foods, and Impossible burgers weren’t yet possible.Today, the world’s largest meat companies — from Brazil’s JBS to America’s Tyson Foods — mostly have their own plant-based meat brands. Germany’s biggest meat producer said this year that half of its future product range will be meatless. One of the world’s largest seafood companies, Thai Union, even launched plant-based seafood.The world’s largest fast food chains now mostly have a plant-based option — with one noticeable McException. Yum Brands, owner of KFC and Pizza Hut, calls plant-based foods “part of a global movement influencing menus at all of our restaurants.” Burger King is selling plant-based whoppers at most of its global locations; in Belgium, they now account for one in three whoppers sold.Even after a bad year, plant-based meat sales are more than twice what they were five years ago. Alternative protein startups have raised a “mere...]]>
James Ozden https://forum.effectivealtruism.org/posts/QKa5PGiasWqeL75A7/linkpost-big-wins-for-farm-animals-this-decade Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: Big Wins for Farm Animals This Decade, published by James Ozden on January 11, 2023 on The Effective Altruism Forum.Note: I'm cross-posting the Open Philanthropy Farm Animal Welfare Research Newsletter, written by Lewis Bollard (with his permission). You can sign up to receive this newsletter here. I'm sharing this because I think provides some great analysis of farm animal wins in the past decade, across a range of theories of change.Progress for farm animals has been far too slow. In 1822, Britain passed the first national law protecting farm animals. Two centuries later, most countries — including China and the United States — still lack such a law. In 1975, Peter Singer published Animal Liberation. Half a century later, about ten times more animals are factory farmed every year.But in the last decade, advocates have achieved unprecedented progress for farm animals. For the holidays, I want to step back and reflect on everything you’ve achieved — and much of it was achieved by readers of this newsletter — to start to turn the tide for farm animals.Corporate ChangeA decade ago, most of the world’s largest food corporations lacked even a basic farm animal welfare policy. Today, they almost all have one. That’s thanks to advocates, who won about 3,000 new corporate policies in the last ten years.Take the battery cage. About seven billion birds, or roughly one for every human on the planet, are crammed into these microwave-sized containers. A decade ago, Europe had just moved to slightly larger enriched cages, and US advocates were pushing for a similar reform. Abolishing cages altogether seemed impossible. That changed in 2015-18, as advocates secured cage-free pledges from almost all of the largest American and European retailers, fast food chains, and foodservice companies. Advocates then extended this work globally, securing major pledges from Brazil to Thailand.Most recently, advocates won the first global cage-free pledges from 150 multinationals, including the world’s largest hotel chains and food manufacturers.A major question was whether these companies would follow through on their pledges. So far, almost 1,000 companies have — that’s 88% of the companies that promised to go cage-free by the end of last year. Another 75% of the world’s largest food companies are now publicly reporting on their progress in going cage-free. Of course, some companies will still shirk their pledges. But 165M more hens are already cage-free in Europe and the US today than were a decade ago, and advocates are on track to help over 300M more just by getting companies to follow through on their existing policiesAlternative Protein AccelerationIn 2012, alternative proteins were decidedly “alternative.” No major meat company was making plant-based meat, no major US fast food chain was serving it, cultured meat hadn’t yet been cultured, Beyond Meat hadn’t gotten beyond Whole Foods, and Impossible burgers weren’t yet possible.Today, the world’s largest meat companies — from Brazil’s JBS to America’s Tyson Foods — mostly have their own plant-based meat brands. Germany’s biggest meat producer said this year that half of its future product range will be meatless. One of the world’s largest seafood companies, Thai Union, even launched plant-based seafood.The world’s largest fast food chains now mostly have a plant-based option — with one noticeable McException. Yum Brands, owner of KFC and Pizza Hut, calls plant-based foods “part of a global movement influencing menus at all of our restaurants.” Burger King is selling plant-based whoppers at most of its global locations; in Belgium, they now account for one in three whoppers sold.Even after a bad year, plant-based meat sales are more than twice what they were five years ago. Alternative protein startups have raised a “mere...]]>
Thu, 12 Jan 2023 02:37:18 +0000 EA - Linkpost: Big Wins for Farm Animals This Decade by James Ozden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: Big Wins for Farm Animals This Decade, published by James Ozden on January 11, 2023 on The Effective Altruism Forum.Note: I'm cross-posting the Open Philanthropy Farm Animal Welfare Research Newsletter, written by Lewis Bollard (with his permission). You can sign up to receive this newsletter here. I'm sharing this because I think provides some great analysis of farm animal wins in the past decade, across a range of theories of change.Progress for farm animals has been far too slow. In 1822, Britain passed the first national law protecting farm animals. Two centuries later, most countries — including China and the United States — still lack such a law. In 1975, Peter Singer published Animal Liberation. Half a century later, about ten times more animals are factory farmed every year.But in the last decade, advocates have achieved unprecedented progress for farm animals. For the holidays, I want to step back and reflect on everything you’ve achieved — and much of it was achieved by readers of this newsletter — to start to turn the tide for farm animals.Corporate ChangeA decade ago, most of the world’s largest food corporations lacked even a basic farm animal welfare policy. Today, they almost all have one. That’s thanks to advocates, who won about 3,000 new corporate policies in the last ten years.Take the battery cage. About seven billion birds, or roughly one for every human on the planet, are crammed into these microwave-sized containers. A decade ago, Europe had just moved to slightly larger enriched cages, and US advocates were pushing for a similar reform. Abolishing cages altogether seemed impossible. That changed in 2015-18, as advocates secured cage-free pledges from almost all of the largest American and European retailers, fast food chains, and foodservice companies. Advocates then extended this work globally, securing major pledges from Brazil to Thailand.Most recently, advocates won the first global cage-free pledges from 150 multinationals, including the world’s largest hotel chains and food manufacturers.A major question was whether these companies would follow through on their pledges. So far, almost 1,000 companies have — that’s 88% of the companies that promised to go cage-free by the end of last year. Another 75% of the world’s largest food companies are now publicly reporting on their progress in going cage-free. Of course, some companies will still shirk their pledges. But 165M more hens are already cage-free in Europe and the US today than were a decade ago, and advocates are on track to help over 300M more just by getting companies to follow through on their existing policiesAlternative Protein AccelerationIn 2012, alternative proteins were decidedly “alternative.” No major meat company was making plant-based meat, no major US fast food chain was serving it, cultured meat hadn’t yet been cultured, Beyond Meat hadn’t gotten beyond Whole Foods, and Impossible burgers weren’t yet possible.Today, the world’s largest meat companies — from Brazil’s JBS to America’s Tyson Foods — mostly have their own plant-based meat brands. Germany’s biggest meat producer said this year that half of its future product range will be meatless. One of the world’s largest seafood companies, Thai Union, even launched plant-based seafood.The world’s largest fast food chains now mostly have a plant-based option — with one noticeable McException. Yum Brands, owner of KFC and Pizza Hut, calls plant-based foods “part of a global movement influencing menus at all of our restaurants.” Burger King is selling plant-based whoppers at most of its global locations; in Belgium, they now account for one in three whoppers sold.Even after a bad year, plant-based meat sales are more than twice what they were five years ago. Alternative protein startups have raised a “mere...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: Big Wins for Farm Animals This Decade, published by James Ozden on January 11, 2023 on The Effective Altruism Forum.Note: I'm cross-posting the Open Philanthropy Farm Animal Welfare Research Newsletter, written by Lewis Bollard (with his permission). You can sign up to receive this newsletter here. I'm sharing this because I think provides some great analysis of farm animal wins in the past decade, across a range of theories of change.Progress for farm animals has been far too slow. In 1822, Britain passed the first national law protecting farm animals. Two centuries later, most countries — including China and the United States — still lack such a law. In 1975, Peter Singer published Animal Liberation. Half a century later, about ten times more animals are factory farmed every year.But in the last decade, advocates have achieved unprecedented progress for farm animals. For the holidays, I want to step back and reflect on everything you’ve achieved — and much of it was achieved by readers of this newsletter — to start to turn the tide for farm animals.Corporate ChangeA decade ago, most of the world’s largest food corporations lacked even a basic farm animal welfare policy. Today, they almost all have one. That’s thanks to advocates, who won about 3,000 new corporate policies in the last ten years.Take the battery cage. About seven billion birds, or roughly one for every human on the planet, are crammed into these microwave-sized containers. A decade ago, Europe had just moved to slightly larger enriched cages, and US advocates were pushing for a similar reform. Abolishing cages altogether seemed impossible. That changed in 2015-18, as advocates secured cage-free pledges from almost all of the largest American and European retailers, fast food chains, and foodservice companies. Advocates then extended this work globally, securing major pledges from Brazil to Thailand.Most recently, advocates won the first global cage-free pledges from 150 multinationals, including the world’s largest hotel chains and food manufacturers.A major question was whether these companies would follow through on their pledges. So far, almost 1,000 companies have — that’s 88% of the companies that promised to go cage-free by the end of last year. Another 75% of the world’s largest food companies are now publicly reporting on their progress in going cage-free. Of course, some companies will still shirk their pledges. But 165M more hens are already cage-free in Europe and the US today than were a decade ago, and advocates are on track to help over 300M more just by getting companies to follow through on their existing policiesAlternative Protein AccelerationIn 2012, alternative proteins were decidedly “alternative.” No major meat company was making plant-based meat, no major US fast food chain was serving it, cultured meat hadn’t yet been cultured, Beyond Meat hadn’t gotten beyond Whole Foods, and Impossible burgers weren’t yet possible.Today, the world’s largest meat companies — from Brazil’s JBS to America’s Tyson Foods — mostly have their own plant-based meat brands. Germany’s biggest meat producer said this year that half of its future product range will be meatless. One of the world’s largest seafood companies, Thai Union, even launched plant-based seafood.The world’s largest fast food chains now mostly have a plant-based option — with one noticeable McException. Yum Brands, owner of KFC and Pizza Hut, calls plant-based foods “part of a global movement influencing menus at all of our restaurants.” Burger King is selling plant-based whoppers at most of its global locations; in Belgium, they now account for one in three whoppers sold.Even after a bad year, plant-based meat sales are more than twice what they were five years ago. Alternative protein startups have raised a “mere...]]>
James Ozden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:16 None full 4424
XK8zrpoyPGkEyKDKv_EA EA - GWWC's Handling of Conflicting Funding Bars by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's Handling of Conflicting Funding Bars, published by Jeff Kaufman on January 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/XK8zrpoyPGkEyKDKv/gwwc-s-handling-of-conflicting-funding-bars Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's Handling of Conflicting Funding Bars, published by Jeff Kaufman on January 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 11 Jan 2023 22:00:39 +0000 EA - GWWC's Handling of Conflicting Funding Bars by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's Handling of Conflicting Funding Bars, published by Jeff Kaufman on January 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's Handling of Conflicting Funding Bars, published by Jeff Kaufman on January 11, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 4415
2HjKd8grDGBBoeL6Y_EA EA - Wentworth and Larsen on buying time by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wentworth and Larsen on buying time, published by Akash on January 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/2HjKd8grDGBBoeL6Y/wentworth-and-larsen-on-buying-time Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wentworth and Larsen on buying time, published by Akash on January 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 11 Jan 2023 15:02:33 +0000 EA - Wentworth and Larsen on buying time by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wentworth and Larsen on buying time, published by Akash on January 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wentworth and Larsen on buying time, published by Akash on January 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:25 None full 4416
AJ4ZXWNjcxHL9JEyq_EA EA - A Report on running 1-1s with EA Virtual Programs' Participants by Elika Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Report on running 1-1s with EA Virtual Programs' Participants, published by Elika on January 10, 2023 on The Effective Altruism Forum.TL;DR:We trialed two rounds of 1-1s and, in total, connected 44 EA VP introductory participants with members of the EA community, focusing on mid-career professionals and early-career professionals/students interested in making career decisions related to EA. We’re still a bit uncertain, but generally feel positive about the cost-effectiveness and value of organising 1-1s with VP introductory participants . Our results suggest that 1-1s are providing a decent amount of value and are generally highly enjoyed by participants. We’re unsure on the longer term effects of 1-1s on engagement in EA.More specifically, here’s our initial research questions and the answers we found:Q: How valuable are 1-1s with EA community members for Intro participants or is there any demand for 1-1s?A: There is strong demand for 1-1s and it seems like 1-1s are valuable for VP participants. We asked "how would people feel if they weren't able to do 1-1s", 21% said they would be very disappointed, which is 19% short from the 40% benchmark. Although it's short of the 40% benchmark, on average 1-1s are rated 8.2/10. This makes us feel a little more confident that some 1-1s can be valuable. We anecdotally think they are valuable because it helps expand their network and exposes them to more of the community.Q: How cost-effective or impactful are 1-1s in the context of EA VP?A: We feel somewhat uncertain about the cost-effectiveness of 1-1s, but we think that 1-1s with VP participants are ~3x as cost efficient as an EAG in leading to connections, but only 1/20 as scalable. Some notes on impact: 1-1s are very hits-based, impact can take a long time and it’s hard to measure counterfactual impact.Q: What types of 1-1s are most valuable (ex. career focused ones, general EA ones, social, etc)?A: We think that both career-focused and general networking 1-1s are valuable. Career ones for helping people be open to changes, meet others and get advice. General 1-1s to expand people’s network and to serve as practice for reaching out to community members.Q: What type of mentors doing the 1-1s are best suited?A: We found a broad range of ‘mentors’ to do 1-1s – mentors who are very knowledgeable about EA and sociable (such as community builders) rate highly. Cause-area specific mentors (e.g. an experienced grant-maker for someone working in philanthropy, an AI safety researcher for someone working as a software engineer) to talk about specific career paths are extremely valuable as well.Q: What ‘type’ of participant are these most valuable for? E.g. mid-career professionals, those open to changing careers / jobs, students, people near to EA hubs, etc.A: We’re very excited about mid-career professionals who are interested in changing careers and jobs. We also think social 1-1’s are valuable for anyone highly interested in EA to help them expand their network. A future research direction is to expand the number of VP participants who are offered a 1-1 – primarily test participants whose VP application we were less excited about.Q: At what stage of the fellowship is it most valuable (beginning, middle, end, after, etc)?A: We don’t think 1-1s at the beginning of the fellowship is best. We’re generally uncertain on whether there’s a difference between middle, end, and after. It’s either pretty equal or we need to do more tests.Q: Are repeated 1-1s, maybe some form of mentorship, more valuable than one-off 1-1s?A: We didn’t manage to test repeated 1-1s or mentorship. That’s a potential future research direction.We have a few main future goals and paths.Find ways to streamline the 1-1 process on CEA’s side to make it easier to scale to all of VPRun some more speci...]]>
Elika https://forum.effectivealtruism.org/posts/AJ4ZXWNjcxHL9JEyq/a-report-on-running-1-1s-with-ea-virtual-programs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Report on running 1-1s with EA Virtual Programs' Participants, published by Elika on January 10, 2023 on The Effective Altruism Forum.TL;DR:We trialed two rounds of 1-1s and, in total, connected 44 EA VP introductory participants with members of the EA community, focusing on mid-career professionals and early-career professionals/students interested in making career decisions related to EA. We’re still a bit uncertain, but generally feel positive about the cost-effectiveness and value of organising 1-1s with VP introductory participants . Our results suggest that 1-1s are providing a decent amount of value and are generally highly enjoyed by participants. We’re unsure on the longer term effects of 1-1s on engagement in EA.More specifically, here’s our initial research questions and the answers we found:Q: How valuable are 1-1s with EA community members for Intro participants or is there any demand for 1-1s?A: There is strong demand for 1-1s and it seems like 1-1s are valuable for VP participants. We asked "how would people feel if they weren't able to do 1-1s", 21% said they would be very disappointed, which is 19% short from the 40% benchmark. Although it's short of the 40% benchmark, on average 1-1s are rated 8.2/10. This makes us feel a little more confident that some 1-1s can be valuable. We anecdotally think they are valuable because it helps expand their network and exposes them to more of the community.Q: How cost-effective or impactful are 1-1s in the context of EA VP?A: We feel somewhat uncertain about the cost-effectiveness of 1-1s, but we think that 1-1s with VP participants are ~3x as cost efficient as an EAG in leading to connections, but only 1/20 as scalable. Some notes on impact: 1-1s are very hits-based, impact can take a long time and it’s hard to measure counterfactual impact.Q: What types of 1-1s are most valuable (ex. career focused ones, general EA ones, social, etc)?A: We think that both career-focused and general networking 1-1s are valuable. Career ones for helping people be open to changes, meet others and get advice. General 1-1s to expand people’s network and to serve as practice for reaching out to community members.Q: What type of mentors doing the 1-1s are best suited?A: We found a broad range of ‘mentors’ to do 1-1s – mentors who are very knowledgeable about EA and sociable (such as community builders) rate highly. Cause-area specific mentors (e.g. an experienced grant-maker for someone working in philanthropy, an AI safety researcher for someone working as a software engineer) to talk about specific career paths are extremely valuable as well.Q: What ‘type’ of participant are these most valuable for? E.g. mid-career professionals, those open to changing careers / jobs, students, people near to EA hubs, etc.A: We’re very excited about mid-career professionals who are interested in changing careers and jobs. We also think social 1-1’s are valuable for anyone highly interested in EA to help them expand their network. A future research direction is to expand the number of VP participants who are offered a 1-1 – primarily test participants whose VP application we were less excited about.Q: At what stage of the fellowship is it most valuable (beginning, middle, end, after, etc)?A: We don’t think 1-1s at the beginning of the fellowship is best. We’re generally uncertain on whether there’s a difference between middle, end, and after. It’s either pretty equal or we need to do more tests.Q: Are repeated 1-1s, maybe some form of mentorship, more valuable than one-off 1-1s?A: We didn’t manage to test repeated 1-1s or mentorship. That’s a potential future research direction.We have a few main future goals and paths.Find ways to streamline the 1-1 process on CEA’s side to make it easier to scale to all of VPRun some more speci...]]>
Tue, 10 Jan 2023 23:59:53 +0000 EA - A Report on running 1-1s with EA Virtual Programs' Participants by Elika Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Report on running 1-1s with EA Virtual Programs' Participants, published by Elika on January 10, 2023 on The Effective Altruism Forum.TL;DR:We trialed two rounds of 1-1s and, in total, connected 44 EA VP introductory participants with members of the EA community, focusing on mid-career professionals and early-career professionals/students interested in making career decisions related to EA. We’re still a bit uncertain, but generally feel positive about the cost-effectiveness and value of organising 1-1s with VP introductory participants . Our results suggest that 1-1s are providing a decent amount of value and are generally highly enjoyed by participants. We’re unsure on the longer term effects of 1-1s on engagement in EA.More specifically, here’s our initial research questions and the answers we found:Q: How valuable are 1-1s with EA community members for Intro participants or is there any demand for 1-1s?A: There is strong demand for 1-1s and it seems like 1-1s are valuable for VP participants. We asked "how would people feel if they weren't able to do 1-1s", 21% said they would be very disappointed, which is 19% short from the 40% benchmark. Although it's short of the 40% benchmark, on average 1-1s are rated 8.2/10. This makes us feel a little more confident that some 1-1s can be valuable. We anecdotally think they are valuable because it helps expand their network and exposes them to more of the community.Q: How cost-effective or impactful are 1-1s in the context of EA VP?A: We feel somewhat uncertain about the cost-effectiveness of 1-1s, but we think that 1-1s with VP participants are ~3x as cost efficient as an EAG in leading to connections, but only 1/20 as scalable. Some notes on impact: 1-1s are very hits-based, impact can take a long time and it’s hard to measure counterfactual impact.Q: What types of 1-1s are most valuable (ex. career focused ones, general EA ones, social, etc)?A: We think that both career-focused and general networking 1-1s are valuable. Career ones for helping people be open to changes, meet others and get advice. General 1-1s to expand people’s network and to serve as practice for reaching out to community members.Q: What type of mentors doing the 1-1s are best suited?A: We found a broad range of ‘mentors’ to do 1-1s – mentors who are very knowledgeable about EA and sociable (such as community builders) rate highly. Cause-area specific mentors (e.g. an experienced grant-maker for someone working in philanthropy, an AI safety researcher for someone working as a software engineer) to talk about specific career paths are extremely valuable as well.Q: What ‘type’ of participant are these most valuable for? E.g. mid-career professionals, those open to changing careers / jobs, students, people near to EA hubs, etc.A: We’re very excited about mid-career professionals who are interested in changing careers and jobs. We also think social 1-1’s are valuable for anyone highly interested in EA to help them expand their network. A future research direction is to expand the number of VP participants who are offered a 1-1 – primarily test participants whose VP application we were less excited about.Q: At what stage of the fellowship is it most valuable (beginning, middle, end, after, etc)?A: We don’t think 1-1s at the beginning of the fellowship is best. We’re generally uncertain on whether there’s a difference between middle, end, and after. It’s either pretty equal or we need to do more tests.Q: Are repeated 1-1s, maybe some form of mentorship, more valuable than one-off 1-1s?A: We didn’t manage to test repeated 1-1s or mentorship. That’s a potential future research direction.We have a few main future goals and paths.Find ways to streamline the 1-1 process on CEA’s side to make it easier to scale to all of VPRun some more speci...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Report on running 1-1s with EA Virtual Programs' Participants, published by Elika on January 10, 2023 on The Effective Altruism Forum.TL;DR:We trialed two rounds of 1-1s and, in total, connected 44 EA VP introductory participants with members of the EA community, focusing on mid-career professionals and early-career professionals/students interested in making career decisions related to EA. We’re still a bit uncertain, but generally feel positive about the cost-effectiveness and value of organising 1-1s with VP introductory participants . Our results suggest that 1-1s are providing a decent amount of value and are generally highly enjoyed by participants. We’re unsure on the longer term effects of 1-1s on engagement in EA.More specifically, here’s our initial research questions and the answers we found:Q: How valuable are 1-1s with EA community members for Intro participants or is there any demand for 1-1s?A: There is strong demand for 1-1s and it seems like 1-1s are valuable for VP participants. We asked "how would people feel if they weren't able to do 1-1s", 21% said they would be very disappointed, which is 19% short from the 40% benchmark. Although it's short of the 40% benchmark, on average 1-1s are rated 8.2/10. This makes us feel a little more confident that some 1-1s can be valuable. We anecdotally think they are valuable because it helps expand their network and exposes them to more of the community.Q: How cost-effective or impactful are 1-1s in the context of EA VP?A: We feel somewhat uncertain about the cost-effectiveness of 1-1s, but we think that 1-1s with VP participants are ~3x as cost efficient as an EAG in leading to connections, but only 1/20 as scalable. Some notes on impact: 1-1s are very hits-based, impact can take a long time and it’s hard to measure counterfactual impact.Q: What types of 1-1s are most valuable (ex. career focused ones, general EA ones, social, etc)?A: We think that both career-focused and general networking 1-1s are valuable. Career ones for helping people be open to changes, meet others and get advice. General 1-1s to expand people’s network and to serve as practice for reaching out to community members.Q: What type of mentors doing the 1-1s are best suited?A: We found a broad range of ‘mentors’ to do 1-1s – mentors who are very knowledgeable about EA and sociable (such as community builders) rate highly. Cause-area specific mentors (e.g. an experienced grant-maker for someone working in philanthropy, an AI safety researcher for someone working as a software engineer) to talk about specific career paths are extremely valuable as well.Q: What ‘type’ of participant are these most valuable for? E.g. mid-career professionals, those open to changing careers / jobs, students, people near to EA hubs, etc.A: We’re very excited about mid-career professionals who are interested in changing careers and jobs. We also think social 1-1’s are valuable for anyone highly interested in EA to help them expand their network. A future research direction is to expand the number of VP participants who are offered a 1-1 – primarily test participants whose VP application we were less excited about.Q: At what stage of the fellowship is it most valuable (beginning, middle, end, after, etc)?A: We don’t think 1-1s at the beginning of the fellowship is best. We’re generally uncertain on whether there’s a difference between middle, end, and after. It’s either pretty equal or we need to do more tests.Q: Are repeated 1-1s, maybe some form of mentorship, more valuable than one-off 1-1s?A: We didn’t manage to test repeated 1-1s or mentorship. That’s a potential future research direction.We have a few main future goals and paths.Find ways to streamline the 1-1 process on CEA’s side to make it easier to scale to all of VPRun some more speci...]]>
Elika https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:51 None full 4412
76dQ6YfBuLzJDdTgz_EA EA - Reflections on Wytham Abbey by nikos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on Wytham Abbey, published by nikos on January 10, 2023 on The Effective Altruism Forum.In April 2022, CEA (now EVF) bought Wytham Abbey (a 1480 manor near Oxford) as a conference venue. The purchase was mostly funded by Open Philanthropy. To many, Wytham Abbey looked somewhat more luxurious and expensive than strictly necessary for an event location, which sparked some discussions.At least on Twitter, public perception isn't quite what one might have hoped for:Even among EAs, the purchase seems to have left some (many?) with mixed feelings. In this post I'm sharing some loosely connected thoughts and reflections about the purchase.ContextI think it's important to understand the Wytham Abbey purchase in a larger context. In recent years EA has attracted vastly more funding than before. This likely affected the way decisions were made. It probably led to less due diligence on (some) individual decisions, a greater willingness to spend money on more risky bets and changed trade-offs between money on the one hand and time and convenience on the other hand. The until recently very comfortable funding environment also influenced the decision to buy Wytham Abbey.All of this may be good or bad or both at the same time. But it definitely changed EA. People have raised concerns about a perception of lavish spending and potentially grift, lack of transparency or questionable epistemics and motivated reasoning. Some argued that EA was not living up to its own standards. The EA movement as a whole was criticised in the past for making self-serving trade-offs, arguing that luxury/convenience = productivity. Wytham Abbey seemed to reinforce existing sentiments (If you look at the comments on the Wytham Abbey discussion post I can see why you could walk away with an impression that some of the commentators engaged in motivated reasoning).EA relies on trust and a positive perception both from outside and on the inside to be a healthy community that can operate effectively. Sure, things that look bad can still be good overall. But even leaving aside the obvious point that things often look bad because they actually are bad, decisions that alienate people inside and outside the movement can cause long-lasting damage. There is only a limited time that EA can say "we know decision XY may look bad on the surface, but we thought a lot about it and think it's the right call and we need you to trust us on this". Whether or not you agree with the criticism outlined above, it is important to take it into account.Communicating to the outsideI feel EVF's communication (or lack thereof) made the Wytham Abbey purchase look unnecessarily bad.The first issue is the lack of any formal announcement (even though money for this project was committed in November 2021 and the purchase went through in April 2022). I've only heard about this recently through a tweet from Émile Torres, an article in the New Yorker from August 2022, and a discussion post on the EA Forum. My impression is that Émile's tweet surprised many EAs and put CEA/EV in a difficult spot where they found themselves having to defend against criticism and attacks. An open and upfront announcement and explanation of the reasoning could have saved them a lot of trouble.Grants not being announced immediately is not exceptionally unusual. There often is a certain delay and in addition there seems to be a backlog of old grants that also need to be published. This is understandable. Owen Cotton-Barratt adds that they didn't want to create hype and felt a natural time to make the purchase public was when they would be ready to start applications for events. I'm not convinced by that argument and with hindsight knowledge I think it's fair to call this a mistake.The second issue is the lack of transparency on the reasoning ...]]>
nikos https://forum.effectivealtruism.org/posts/76dQ6YfBuLzJDdTgz/reflections-on-wytham-abbey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on Wytham Abbey, published by nikos on January 10, 2023 on The Effective Altruism Forum.In April 2022, CEA (now EVF) bought Wytham Abbey (a 1480 manor near Oxford) as a conference venue. The purchase was mostly funded by Open Philanthropy. To many, Wytham Abbey looked somewhat more luxurious and expensive than strictly necessary for an event location, which sparked some discussions.At least on Twitter, public perception isn't quite what one might have hoped for:Even among EAs, the purchase seems to have left some (many?) with mixed feelings. In this post I'm sharing some loosely connected thoughts and reflections about the purchase.ContextI think it's important to understand the Wytham Abbey purchase in a larger context. In recent years EA has attracted vastly more funding than before. This likely affected the way decisions were made. It probably led to less due diligence on (some) individual decisions, a greater willingness to spend money on more risky bets and changed trade-offs between money on the one hand and time and convenience on the other hand. The until recently very comfortable funding environment also influenced the decision to buy Wytham Abbey.All of this may be good or bad or both at the same time. But it definitely changed EA. People have raised concerns about a perception of lavish spending and potentially grift, lack of transparency or questionable epistemics and motivated reasoning. Some argued that EA was not living up to its own standards. The EA movement as a whole was criticised in the past for making self-serving trade-offs, arguing that luxury/convenience = productivity. Wytham Abbey seemed to reinforce existing sentiments (If you look at the comments on the Wytham Abbey discussion post I can see why you could walk away with an impression that some of the commentators engaged in motivated reasoning).EA relies on trust and a positive perception both from outside and on the inside to be a healthy community that can operate effectively. Sure, things that look bad can still be good overall. But even leaving aside the obvious point that things often look bad because they actually are bad, decisions that alienate people inside and outside the movement can cause long-lasting damage. There is only a limited time that EA can say "we know decision XY may look bad on the surface, but we thought a lot about it and think it's the right call and we need you to trust us on this". Whether or not you agree with the criticism outlined above, it is important to take it into account.Communicating to the outsideI feel EVF's communication (or lack thereof) made the Wytham Abbey purchase look unnecessarily bad.The first issue is the lack of any formal announcement (even though money for this project was committed in November 2021 and the purchase went through in April 2022). I've only heard about this recently through a tweet from Émile Torres, an article in the New Yorker from August 2022, and a discussion post on the EA Forum. My impression is that Émile's tweet surprised many EAs and put CEA/EV in a difficult spot where they found themselves having to defend against criticism and attacks. An open and upfront announcement and explanation of the reasoning could have saved them a lot of trouble.Grants not being announced immediately is not exceptionally unusual. There often is a certain delay and in addition there seems to be a backlog of old grants that also need to be published. This is understandable. Owen Cotton-Barratt adds that they didn't want to create hype and felt a natural time to make the purchase public was when they would be ready to start applications for events. I'm not convinced by that argument and with hindsight knowledge I think it's fair to call this a mistake.The second issue is the lack of transparency on the reasoning ...]]>
Tue, 10 Jan 2023 21:12:40 +0000 EA - Reflections on Wytham Abbey by nikos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on Wytham Abbey, published by nikos on January 10, 2023 on The Effective Altruism Forum.In April 2022, CEA (now EVF) bought Wytham Abbey (a 1480 manor near Oxford) as a conference venue. The purchase was mostly funded by Open Philanthropy. To many, Wytham Abbey looked somewhat more luxurious and expensive than strictly necessary for an event location, which sparked some discussions.At least on Twitter, public perception isn't quite what one might have hoped for:Even among EAs, the purchase seems to have left some (many?) with mixed feelings. In this post I'm sharing some loosely connected thoughts and reflections about the purchase.ContextI think it's important to understand the Wytham Abbey purchase in a larger context. In recent years EA has attracted vastly more funding than before. This likely affected the way decisions were made. It probably led to less due diligence on (some) individual decisions, a greater willingness to spend money on more risky bets and changed trade-offs between money on the one hand and time and convenience on the other hand. The until recently very comfortable funding environment also influenced the decision to buy Wytham Abbey.All of this may be good or bad or both at the same time. But it definitely changed EA. People have raised concerns about a perception of lavish spending and potentially grift, lack of transparency or questionable epistemics and motivated reasoning. Some argued that EA was not living up to its own standards. The EA movement as a whole was criticised in the past for making self-serving trade-offs, arguing that luxury/convenience = productivity. Wytham Abbey seemed to reinforce existing sentiments (If you look at the comments on the Wytham Abbey discussion post I can see why you could walk away with an impression that some of the commentators engaged in motivated reasoning).EA relies on trust and a positive perception both from outside and on the inside to be a healthy community that can operate effectively. Sure, things that look bad can still be good overall. But even leaving aside the obvious point that things often look bad because they actually are bad, decisions that alienate people inside and outside the movement can cause long-lasting damage. There is only a limited time that EA can say "we know decision XY may look bad on the surface, but we thought a lot about it and think it's the right call and we need you to trust us on this". Whether or not you agree with the criticism outlined above, it is important to take it into account.Communicating to the outsideI feel EVF's communication (or lack thereof) made the Wytham Abbey purchase look unnecessarily bad.The first issue is the lack of any formal announcement (even though money for this project was committed in November 2021 and the purchase went through in April 2022). I've only heard about this recently through a tweet from Émile Torres, an article in the New Yorker from August 2022, and a discussion post on the EA Forum. My impression is that Émile's tweet surprised many EAs and put CEA/EV in a difficult spot where they found themselves having to defend against criticism and attacks. An open and upfront announcement and explanation of the reasoning could have saved them a lot of trouble.Grants not being announced immediately is not exceptionally unusual. There often is a certain delay and in addition there seems to be a backlog of old grants that also need to be published. This is understandable. Owen Cotton-Barratt adds that they didn't want to create hype and felt a natural time to make the purchase public was when they would be ready to start applications for events. I'm not convinced by that argument and with hindsight knowledge I think it's fair to call this a mistake.The second issue is the lack of transparency on the reasoning ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on Wytham Abbey, published by nikos on January 10, 2023 on The Effective Altruism Forum.In April 2022, CEA (now EVF) bought Wytham Abbey (a 1480 manor near Oxford) as a conference venue. The purchase was mostly funded by Open Philanthropy. To many, Wytham Abbey looked somewhat more luxurious and expensive than strictly necessary for an event location, which sparked some discussions.At least on Twitter, public perception isn't quite what one might have hoped for:Even among EAs, the purchase seems to have left some (many?) with mixed feelings. In this post I'm sharing some loosely connected thoughts and reflections about the purchase.ContextI think it's important to understand the Wytham Abbey purchase in a larger context. In recent years EA has attracted vastly more funding than before. This likely affected the way decisions were made. It probably led to less due diligence on (some) individual decisions, a greater willingness to spend money on more risky bets and changed trade-offs between money on the one hand and time and convenience on the other hand. The until recently very comfortable funding environment also influenced the decision to buy Wytham Abbey.All of this may be good or bad or both at the same time. But it definitely changed EA. People have raised concerns about a perception of lavish spending and potentially grift, lack of transparency or questionable epistemics and motivated reasoning. Some argued that EA was not living up to its own standards. The EA movement as a whole was criticised in the past for making self-serving trade-offs, arguing that luxury/convenience = productivity. Wytham Abbey seemed to reinforce existing sentiments (If you look at the comments on the Wytham Abbey discussion post I can see why you could walk away with an impression that some of the commentators engaged in motivated reasoning).EA relies on trust and a positive perception both from outside and on the inside to be a healthy community that can operate effectively. Sure, things that look bad can still be good overall. But even leaving aside the obvious point that things often look bad because they actually are bad, decisions that alienate people inside and outside the movement can cause long-lasting damage. There is only a limited time that EA can say "we know decision XY may look bad on the surface, but we thought a lot about it and think it's the right call and we need you to trust us on this". Whether or not you agree with the criticism outlined above, it is important to take it into account.Communicating to the outsideI feel EVF's communication (or lack thereof) made the Wytham Abbey purchase look unnecessarily bad.The first issue is the lack of any formal announcement (even though money for this project was committed in November 2021 and the purchase went through in April 2022). I've only heard about this recently through a tweet from Émile Torres, an article in the New Yorker from August 2022, and a discussion post on the EA Forum. My impression is that Émile's tweet surprised many EAs and put CEA/EV in a difficult spot where they found themselves having to defend against criticism and attacks. An open and upfront announcement and explanation of the reasoning could have saved them a lot of trouble.Grants not being announced immediately is not exceptionally unusual. There often is a certain delay and in addition there seems to be a backlog of old grants that also need to be published. This is understandable. Owen Cotton-Barratt adds that they didn't want to create hype and felt a natural time to make the purchase public was when they would be ready to start applications for events. I'm not convinced by that argument and with hindsight knowledge I think it's fair to call this a mistake.The second issue is the lack of transparency on the reasoning ...]]>
nikos https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:43 None full 4402
8c7LycgtkypkgYjZx_EA EA - AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years by basil.halperin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years, published by basil.halperin on January 10, 2023 on The Effective Altruism Forum.by Trevor Chow, Basil Halperin, and J. Zachary MazlishIn this post, we point out that short AI timelines would cause real interest rates to be high, and would do so under expectations of either unaligned or aligned AI. However, 30- to 50-year real interest rates are low. We argue that this suggests one of two possibilities:Long(er) timelines. Financial markets are often highly effective information aggregators (the “efficient market hypothesis”), and therefore real interest rates accurately reflect that transformative AI is unlikely to be developed in the next 30-50 years.Market inefficiency. Markets are radically underestimating how soon advanced AI technology will be developed, and real interest rates are therefore too low. There is thus an opportunity for philanthropists to borrow while real rates are low to cheaply do good today; and/or an opportunity for anyone to earn excess returns by betting that real rates will rise.In the rest of this post we flesh out this argument.Both intuitively and under every mainstream economic model, the “explosive growth” caused by aligned AI would cause high real interest rates.Both intuitively and under every mainstream economic model, the existential risk caused by unaligned AI would cause high real interest rates.We show that in the historical data, indeed, real interest rates have been correlated with future growth.Plugging the Cotra probabilities for AI timelines into the baseline workhorse model of economic growth implies substantially higher real interest rates today.In particular, we argue that markets are decisively rejecting the shortest possible timelines of 0-10 years.We argue that the efficient market hypothesis (EMH) is a reasonable prior, and therefore one reasonable interpretation of low real rates is that since markets are simply not forecasting short timelines, neither should we be forecasting short timelines.Alternatively, if you believe that financial markets are wrong, then you have the opportunity to (1) borrow cheaply today and use that money to e.g. fund AI safety work; and/or (2) earn alpha by betting that real rates will rise.An order-of-magnitude estimate is that, if markets are getting this wrong, then there is easily $1 trillion lying on the table in the US treasury bond market alone – setting aside the enormous implications for every other asset class.Interpretation. We view our argument as the best existing outside view evidence on AI timelines – but also as only one model among a mixture of models that you should consider when thinking about AI timelines. The logic here is a simple implication of a few basic concepts in orthodox economic theory and some supporting empirical evidence, which is important because the unprecedented nature of transformative AI makes “reference class”-based outside views difficult to construct. This outside view approach contrasts with, and complements, an inside view approach, which attempts to build a detailed structural model of the world to forecast timelines (e.g. Cotra 2020; see also Nostalgebraist 2022).Outline. If you want a short version of the argument, sections I and II (700 words) are the heart of the post. Additionally, the section titles are themselves summaries, and we use text formatting to highlight key ideas.I. Long-term real rates would be high if the market was pricing advanced AIReal interest rates reflect, among other things:Time discounting, which includes the probability of deathExpectations of future economic growthThis claim is compactly summarized in the “Ramsey rule” (and the only math that we will introduce in this post), a version of the “Euler equation” th...]]>
basil.halperin https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years, published by basil.halperin on January 10, 2023 on The Effective Altruism Forum.by Trevor Chow, Basil Halperin, and J. Zachary MazlishIn this post, we point out that short AI timelines would cause real interest rates to be high, and would do so under expectations of either unaligned or aligned AI. However, 30- to 50-year real interest rates are low. We argue that this suggests one of two possibilities:Long(er) timelines. Financial markets are often highly effective information aggregators (the “efficient market hypothesis”), and therefore real interest rates accurately reflect that transformative AI is unlikely to be developed in the next 30-50 years.Market inefficiency. Markets are radically underestimating how soon advanced AI technology will be developed, and real interest rates are therefore too low. There is thus an opportunity for philanthropists to borrow while real rates are low to cheaply do good today; and/or an opportunity for anyone to earn excess returns by betting that real rates will rise.In the rest of this post we flesh out this argument.Both intuitively and under every mainstream economic model, the “explosive growth” caused by aligned AI would cause high real interest rates.Both intuitively and under every mainstream economic model, the existential risk caused by unaligned AI would cause high real interest rates.We show that in the historical data, indeed, real interest rates have been correlated with future growth.Plugging the Cotra probabilities for AI timelines into the baseline workhorse model of economic growth implies substantially higher real interest rates today.In particular, we argue that markets are decisively rejecting the shortest possible timelines of 0-10 years.We argue that the efficient market hypothesis (EMH) is a reasonable prior, and therefore one reasonable interpretation of low real rates is that since markets are simply not forecasting short timelines, neither should we be forecasting short timelines.Alternatively, if you believe that financial markets are wrong, then you have the opportunity to (1) borrow cheaply today and use that money to e.g. fund AI safety work; and/or (2) earn alpha by betting that real rates will rise.An order-of-magnitude estimate is that, if markets are getting this wrong, then there is easily $1 trillion lying on the table in the US treasury bond market alone – setting aside the enormous implications for every other asset class.Interpretation. We view our argument as the best existing outside view evidence on AI timelines – but also as only one model among a mixture of models that you should consider when thinking about AI timelines. The logic here is a simple implication of a few basic concepts in orthodox economic theory and some supporting empirical evidence, which is important because the unprecedented nature of transformative AI makes “reference class”-based outside views difficult to construct. This outside view approach contrasts with, and complements, an inside view approach, which attempts to build a detailed structural model of the world to forecast timelines (e.g. Cotra 2020; see also Nostalgebraist 2022).Outline. If you want a short version of the argument, sections I and II (700 words) are the heart of the post. Additionally, the section titles are themselves summaries, and we use text formatting to highlight key ideas.I. Long-term real rates would be high if the market was pricing advanced AIReal interest rates reflect, among other things:Time discounting, which includes the probability of deathExpectations of future economic growthThis claim is compactly summarized in the “Ramsey rule” (and the only math that we will introduce in this post), a version of the “Euler equation” th...]]>
Tue, 10 Jan 2023 17:00:24 +0000 EA - AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years by basil.halperin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years, published by basil.halperin on January 10, 2023 on The Effective Altruism Forum.by Trevor Chow, Basil Halperin, and J. Zachary MazlishIn this post, we point out that short AI timelines would cause real interest rates to be high, and would do so under expectations of either unaligned or aligned AI. However, 30- to 50-year real interest rates are low. We argue that this suggests one of two possibilities:Long(er) timelines. Financial markets are often highly effective information aggregators (the “efficient market hypothesis”), and therefore real interest rates accurately reflect that transformative AI is unlikely to be developed in the next 30-50 years.Market inefficiency. Markets are radically underestimating how soon advanced AI technology will be developed, and real interest rates are therefore too low. There is thus an opportunity for philanthropists to borrow while real rates are low to cheaply do good today; and/or an opportunity for anyone to earn excess returns by betting that real rates will rise.In the rest of this post we flesh out this argument.Both intuitively and under every mainstream economic model, the “explosive growth” caused by aligned AI would cause high real interest rates.Both intuitively and under every mainstream economic model, the existential risk caused by unaligned AI would cause high real interest rates.We show that in the historical data, indeed, real interest rates have been correlated with future growth.Plugging the Cotra probabilities for AI timelines into the baseline workhorse model of economic growth implies substantially higher real interest rates today.In particular, we argue that markets are decisively rejecting the shortest possible timelines of 0-10 years.We argue that the efficient market hypothesis (EMH) is a reasonable prior, and therefore one reasonable interpretation of low real rates is that since markets are simply not forecasting short timelines, neither should we be forecasting short timelines.Alternatively, if you believe that financial markets are wrong, then you have the opportunity to (1) borrow cheaply today and use that money to e.g. fund AI safety work; and/or (2) earn alpha by betting that real rates will rise.An order-of-magnitude estimate is that, if markets are getting this wrong, then there is easily $1 trillion lying on the table in the US treasury bond market alone – setting aside the enormous implications for every other asset class.Interpretation. We view our argument as the best existing outside view evidence on AI timelines – but also as only one model among a mixture of models that you should consider when thinking about AI timelines. The logic here is a simple implication of a few basic concepts in orthodox economic theory and some supporting empirical evidence, which is important because the unprecedented nature of transformative AI makes “reference class”-based outside views difficult to construct. This outside view approach contrasts with, and complements, an inside view approach, which attempts to build a detailed structural model of the world to forecast timelines (e.g. Cotra 2020; see also Nostalgebraist 2022).Outline. If you want a short version of the argument, sections I and II (700 words) are the heart of the post. Additionally, the section titles are themselves summaries, and we use text formatting to highlight key ideas.I. Long-term real rates would be high if the market was pricing advanced AIReal interest rates reflect, among other things:Time discounting, which includes the probability of deathExpectations of future economic growthThis claim is compactly summarized in the “Ramsey rule” (and the only math that we will introduce in this post), a version of the “Euler equation” th...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years, published by basil.halperin on January 10, 2023 on The Effective Altruism Forum.by Trevor Chow, Basil Halperin, and J. Zachary MazlishIn this post, we point out that short AI timelines would cause real interest rates to be high, and would do so under expectations of either unaligned or aligned AI. However, 30- to 50-year real interest rates are low. We argue that this suggests one of two possibilities:Long(er) timelines. Financial markets are often highly effective information aggregators (the “efficient market hypothesis”), and therefore real interest rates accurately reflect that transformative AI is unlikely to be developed in the next 30-50 years.Market inefficiency. Markets are radically underestimating how soon advanced AI technology will be developed, and real interest rates are therefore too low. There is thus an opportunity for philanthropists to borrow while real rates are low to cheaply do good today; and/or an opportunity for anyone to earn excess returns by betting that real rates will rise.In the rest of this post we flesh out this argument.Both intuitively and under every mainstream economic model, the “explosive growth” caused by aligned AI would cause high real interest rates.Both intuitively and under every mainstream economic model, the existential risk caused by unaligned AI would cause high real interest rates.We show that in the historical data, indeed, real interest rates have been correlated with future growth.Plugging the Cotra probabilities for AI timelines into the baseline workhorse model of economic growth implies substantially higher real interest rates today.In particular, we argue that markets are decisively rejecting the shortest possible timelines of 0-10 years.We argue that the efficient market hypothesis (EMH) is a reasonable prior, and therefore one reasonable interpretation of low real rates is that since markets are simply not forecasting short timelines, neither should we be forecasting short timelines.Alternatively, if you believe that financial markets are wrong, then you have the opportunity to (1) borrow cheaply today and use that money to e.g. fund AI safety work; and/or (2) earn alpha by betting that real rates will rise.An order-of-magnitude estimate is that, if markets are getting this wrong, then there is easily $1 trillion lying on the table in the US treasury bond market alone – setting aside the enormous implications for every other asset class.Interpretation. We view our argument as the best existing outside view evidence on AI timelines – but also as only one model among a mixture of models that you should consider when thinking about AI timelines. The logic here is a simple implication of a few basic concepts in orthodox economic theory and some supporting empirical evidence, which is important because the unprecedented nature of transformative AI makes “reference class”-based outside views difficult to construct. This outside view approach contrasts with, and complements, an inside view approach, which attempts to build a detailed structural model of the world to forecast timelines (e.g. Cotra 2020; see also Nostalgebraist 2022).Outline. If you want a short version of the argument, sections I and II (700 words) are the heart of the post. Additionally, the section titles are themselves summaries, and we use text formatting to highlight key ideas.I. Long-term real rates would be high if the market was pricing advanced AIReal interest rates reflect, among other things:Time discounting, which includes the probability of deathExpectations of future economic growthThis claim is compactly summarized in the “Ramsey rule” (and the only math that we will introduce in this post), a version of the “Euler equation” th...]]>
basil.halperin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 45:20 None full 4403
zkgYfeczYC5YKDRcH_EA EA - Non-trivial Fellowship: start an impactful project with €500 and expert guidance by Peter McIntyre Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Non-trivial Fellowship: start an impactful project with €500 and expert guidance, published by Peter McIntyre on January 10, 2023 on The Effective Altruism Forum.We’ve just launched our new Fellowship for pre-university students (aged 14 - 20) in the EU and UK.It's a 7-week online fellowship with a €500 scholarship to start an impactful research, policy, or entrepreneurial project.The cohort will be full of ambitious, curious, and analytical teenagers in the EU/UK who want to work towards having the biggest impact they can, starting today.There are also up to €5,000 in prizes for the best projects up for grabs. These will be awarded based on projects that have made the most progress, show the most promise, and are the best presented.It's run by me, Peter McIntyre. Before Non-trivial, I spent 5 years getting new programs off the ground at 80,000 Hours.No EA background necessary. It’s more important that someone could be interested in working on a project that could change the world for the better than have already learned a lot about EA.We’d be very grateful if you could share the website with any talented teenagers based in the EU/UK you know.The deadline to apply is Sunday, Jan. 29, 2023.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Peter McIntyre https://forum.effectivealtruism.org/posts/zkgYfeczYC5YKDRcH/non-trivial-fellowship-start-an-impactful-project-with Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Non-trivial Fellowship: start an impactful project with €500 and expert guidance, published by Peter McIntyre on January 10, 2023 on The Effective Altruism Forum.We’ve just launched our new Fellowship for pre-university students (aged 14 - 20) in the EU and UK.It's a 7-week online fellowship with a €500 scholarship to start an impactful research, policy, or entrepreneurial project.The cohort will be full of ambitious, curious, and analytical teenagers in the EU/UK who want to work towards having the biggest impact they can, starting today.There are also up to €5,000 in prizes for the best projects up for grabs. These will be awarded based on projects that have made the most progress, show the most promise, and are the best presented.It's run by me, Peter McIntyre. Before Non-trivial, I spent 5 years getting new programs off the ground at 80,000 Hours.No EA background necessary. It’s more important that someone could be interested in working on a project that could change the world for the better than have already learned a lot about EA.We’d be very grateful if you could share the website with any talented teenagers based in the EU/UK you know.The deadline to apply is Sunday, Jan. 29, 2023.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 10 Jan 2023 14:59:12 +0000 EA - Non-trivial Fellowship: start an impactful project with €500 and expert guidance by Peter McIntyre Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Non-trivial Fellowship: start an impactful project with €500 and expert guidance, published by Peter McIntyre on January 10, 2023 on The Effective Altruism Forum.We’ve just launched our new Fellowship for pre-university students (aged 14 - 20) in the EU and UK.It's a 7-week online fellowship with a €500 scholarship to start an impactful research, policy, or entrepreneurial project.The cohort will be full of ambitious, curious, and analytical teenagers in the EU/UK who want to work towards having the biggest impact they can, starting today.There are also up to €5,000 in prizes for the best projects up for grabs. These will be awarded based on projects that have made the most progress, show the most promise, and are the best presented.It's run by me, Peter McIntyre. Before Non-trivial, I spent 5 years getting new programs off the ground at 80,000 Hours.No EA background necessary. It’s more important that someone could be interested in working on a project that could change the world for the better than have already learned a lot about EA.We’d be very grateful if you could share the website with any talented teenagers based in the EU/UK you know.The deadline to apply is Sunday, Jan. 29, 2023.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Non-trivial Fellowship: start an impactful project with €500 and expert guidance, published by Peter McIntyre on January 10, 2023 on The Effective Altruism Forum.We’ve just launched our new Fellowship for pre-university students (aged 14 - 20) in the EU and UK.It's a 7-week online fellowship with a €500 scholarship to start an impactful research, policy, or entrepreneurial project.The cohort will be full of ambitious, curious, and analytical teenagers in the EU/UK who want to work towards having the biggest impact they can, starting today.There are also up to €5,000 in prizes for the best projects up for grabs. These will be awarded based on projects that have made the most progress, show the most promise, and are the best presented.It's run by me, Peter McIntyre. Before Non-trivial, I spent 5 years getting new programs off the ground at 80,000 Hours.No EA background necessary. It’s more important that someone could be interested in working on a project that could change the world for the better than have already learned a lot about EA.We’d be very grateful if you could share the website with any talented teenagers based in the EU/UK you know.The deadline to apply is Sunday, Jan. 29, 2023.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Peter McIntyre https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:32 None full 4404
3b3hhDnxxTbDKqcEq_EA EA - Shallow Investigation: Arsenic Remediation by Francis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Investigation: Arsenic Remediation, published by Francis on January 10, 2023 on The Effective Altruism Forum.Arsenic: A toxicant worth thinking more aboutThis report summarizes a shallow investigation into the effects of arsenic on global health and wellbeing, interventions to reduce these effects, and existing programs in this space. I estimate that this report is the result of roughly 50 hours of research and writing. This report was produced as part of Cause Innovation Bootcamp’s fellowship program.SummaryArsenic is a toxicant that contaminates drinking water and other groundwater in some countries. This contamination typically results from groundwater flowing through soil and mineral deposits that contain arsenic in the right conditions to be soluble, but human activities can also lead to arsenic contamination in some cases. Approximately 300 million people live in areas where the groundwater is contaminated by arsenic to a degree that exceeds the World Health Organization Standard of 10 μg/L, and approximately 100 million of those people are exposed to arsenic levels in drinking water of more than 50 μg/L. It is associated with a variety of negative health outcomes and other effects, resulting in higher mortality rates, cognitive damage, and lower lifetime incomes. Although Bangladesh has the most well-known and severe rate of arsenic contamination, groundwater arsenic affects many countries, and arsenic interventions outside of Bangladesh are relatively neglected.This report investigates various possible arsenic interventions. The lower bound for cost-per-death-averted was $630; limiting only to interventions that are backed by field studies, the lower bound for cost-per-death-averted was $774. Various factors, primarily uncertainty about the degree of harmful effects from arsenic, may reduce the overall cost-effectiveness, but arsenic interventions nonetheless have the potential to be promising, with the potential to compete with GiveWell top charities.Introduction and Scope of the ProblemHuman exposure to arsenic primarily results from groundwater contamination. Inorganic arsenic contamination in groundwater can lead to human exposure in three main ways: drinking contaminated water, using contaminated water for cooking, and using contaminated water to irrigate crops. The primary negative health effects of arsenic are increased risk of cancer and cardiovascular disease (including heart attacks), but other adverse effects include a higher risk of skin lesions, diabetes, pulmonary disease, stroke, cognitive deficits (in the case of prenatal and early childhood exposure), and (in some circumstances) Blackfoot disease.As of 2021, an estimated 300 million people worldwide live in areas with groundwater contaminated by arsenic (more than 10 μg/L), with approximately 100 million of those people living in areas with groundwater arsenic levels of more than 50 μg/L, which has been linked to an especially high likelihood of severe health consequences. According to the World Health Organization, several countries have been found to have a high level of arsenic in groundwater, including Argentina, Bangladesh, Cambodia, Chile, China, India, Mexico, Pakistan, the United States of America, and Vietnam. Arsenic exposure is particularly common in Bangladesh, where the number of people exposed to arsenic in groundwater has been estimated at 35 million - 77 million. More recent estimates from 2012 indicate that remediation efforts have reduced this number to 19 million people exposed to arsenic levels greater than 50 μg/L, and it is likely that this number has continued to decline since 2012.Arsenic contamination around the world.Various studies have attempted to determine the effects of arsenic exposure on overall mortality. A ten-year cohort study in Banglades...]]>
Francis https://forum.effectivealtruism.org/posts/3b3hhDnxxTbDKqcEq/shallow-investigation-arsenic-remediation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Investigation: Arsenic Remediation, published by Francis on January 10, 2023 on The Effective Altruism Forum.Arsenic: A toxicant worth thinking more aboutThis report summarizes a shallow investigation into the effects of arsenic on global health and wellbeing, interventions to reduce these effects, and existing programs in this space. I estimate that this report is the result of roughly 50 hours of research and writing. This report was produced as part of Cause Innovation Bootcamp’s fellowship program.SummaryArsenic is a toxicant that contaminates drinking water and other groundwater in some countries. This contamination typically results from groundwater flowing through soil and mineral deposits that contain arsenic in the right conditions to be soluble, but human activities can also lead to arsenic contamination in some cases. Approximately 300 million people live in areas where the groundwater is contaminated by arsenic to a degree that exceeds the World Health Organization Standard of 10 μg/L, and approximately 100 million of those people are exposed to arsenic levels in drinking water of more than 50 μg/L. It is associated with a variety of negative health outcomes and other effects, resulting in higher mortality rates, cognitive damage, and lower lifetime incomes. Although Bangladesh has the most well-known and severe rate of arsenic contamination, groundwater arsenic affects many countries, and arsenic interventions outside of Bangladesh are relatively neglected.This report investigates various possible arsenic interventions. The lower bound for cost-per-death-averted was $630; limiting only to interventions that are backed by field studies, the lower bound for cost-per-death-averted was $774. Various factors, primarily uncertainty about the degree of harmful effects from arsenic, may reduce the overall cost-effectiveness, but arsenic interventions nonetheless have the potential to be promising, with the potential to compete with GiveWell top charities.Introduction and Scope of the ProblemHuman exposure to arsenic primarily results from groundwater contamination. Inorganic arsenic contamination in groundwater can lead to human exposure in three main ways: drinking contaminated water, using contaminated water for cooking, and using contaminated water to irrigate crops. The primary negative health effects of arsenic are increased risk of cancer and cardiovascular disease (including heart attacks), but other adverse effects include a higher risk of skin lesions, diabetes, pulmonary disease, stroke, cognitive deficits (in the case of prenatal and early childhood exposure), and (in some circumstances) Blackfoot disease.As of 2021, an estimated 300 million people worldwide live in areas with groundwater contaminated by arsenic (more than 10 μg/L), with approximately 100 million of those people living in areas with groundwater arsenic levels of more than 50 μg/L, which has been linked to an especially high likelihood of severe health consequences. According to the World Health Organization, several countries have been found to have a high level of arsenic in groundwater, including Argentina, Bangladesh, Cambodia, Chile, China, India, Mexico, Pakistan, the United States of America, and Vietnam. Arsenic exposure is particularly common in Bangladesh, where the number of people exposed to arsenic in groundwater has been estimated at 35 million - 77 million. More recent estimates from 2012 indicate that remediation efforts have reduced this number to 19 million people exposed to arsenic levels greater than 50 μg/L, and it is likely that this number has continued to decline since 2012.Arsenic contamination around the world.Various studies have attempted to determine the effects of arsenic exposure on overall mortality. A ten-year cohort study in Banglades...]]>
Tue, 10 Jan 2023 14:12:59 +0000 EA - Shallow Investigation: Arsenic Remediation by Francis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Investigation: Arsenic Remediation, published by Francis on January 10, 2023 on The Effective Altruism Forum.Arsenic: A toxicant worth thinking more aboutThis report summarizes a shallow investigation into the effects of arsenic on global health and wellbeing, interventions to reduce these effects, and existing programs in this space. I estimate that this report is the result of roughly 50 hours of research and writing. This report was produced as part of Cause Innovation Bootcamp’s fellowship program.SummaryArsenic is a toxicant that contaminates drinking water and other groundwater in some countries. This contamination typically results from groundwater flowing through soil and mineral deposits that contain arsenic in the right conditions to be soluble, but human activities can also lead to arsenic contamination in some cases. Approximately 300 million people live in areas where the groundwater is contaminated by arsenic to a degree that exceeds the World Health Organization Standard of 10 μg/L, and approximately 100 million of those people are exposed to arsenic levels in drinking water of more than 50 μg/L. It is associated with a variety of negative health outcomes and other effects, resulting in higher mortality rates, cognitive damage, and lower lifetime incomes. Although Bangladesh has the most well-known and severe rate of arsenic contamination, groundwater arsenic affects many countries, and arsenic interventions outside of Bangladesh are relatively neglected.This report investigates various possible arsenic interventions. The lower bound for cost-per-death-averted was $630; limiting only to interventions that are backed by field studies, the lower bound for cost-per-death-averted was $774. Various factors, primarily uncertainty about the degree of harmful effects from arsenic, may reduce the overall cost-effectiveness, but arsenic interventions nonetheless have the potential to be promising, with the potential to compete with GiveWell top charities.Introduction and Scope of the ProblemHuman exposure to arsenic primarily results from groundwater contamination. Inorganic arsenic contamination in groundwater can lead to human exposure in three main ways: drinking contaminated water, using contaminated water for cooking, and using contaminated water to irrigate crops. The primary negative health effects of arsenic are increased risk of cancer and cardiovascular disease (including heart attacks), but other adverse effects include a higher risk of skin lesions, diabetes, pulmonary disease, stroke, cognitive deficits (in the case of prenatal and early childhood exposure), and (in some circumstances) Blackfoot disease.As of 2021, an estimated 300 million people worldwide live in areas with groundwater contaminated by arsenic (more than 10 μg/L), with approximately 100 million of those people living in areas with groundwater arsenic levels of more than 50 μg/L, which has been linked to an especially high likelihood of severe health consequences. According to the World Health Organization, several countries have been found to have a high level of arsenic in groundwater, including Argentina, Bangladesh, Cambodia, Chile, China, India, Mexico, Pakistan, the United States of America, and Vietnam. Arsenic exposure is particularly common in Bangladesh, where the number of people exposed to arsenic in groundwater has been estimated at 35 million - 77 million. More recent estimates from 2012 indicate that remediation efforts have reduced this number to 19 million people exposed to arsenic levels greater than 50 μg/L, and it is likely that this number has continued to decline since 2012.Arsenic contamination around the world.Various studies have attempted to determine the effects of arsenic exposure on overall mortality. A ten-year cohort study in Banglades...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Investigation: Arsenic Remediation, published by Francis on January 10, 2023 on The Effective Altruism Forum.Arsenic: A toxicant worth thinking more aboutThis report summarizes a shallow investigation into the effects of arsenic on global health and wellbeing, interventions to reduce these effects, and existing programs in this space. I estimate that this report is the result of roughly 50 hours of research and writing. This report was produced as part of Cause Innovation Bootcamp’s fellowship program.SummaryArsenic is a toxicant that contaminates drinking water and other groundwater in some countries. This contamination typically results from groundwater flowing through soil and mineral deposits that contain arsenic in the right conditions to be soluble, but human activities can also lead to arsenic contamination in some cases. Approximately 300 million people live in areas where the groundwater is contaminated by arsenic to a degree that exceeds the World Health Organization Standard of 10 μg/L, and approximately 100 million of those people are exposed to arsenic levels in drinking water of more than 50 μg/L. It is associated with a variety of negative health outcomes and other effects, resulting in higher mortality rates, cognitive damage, and lower lifetime incomes. Although Bangladesh has the most well-known and severe rate of arsenic contamination, groundwater arsenic affects many countries, and arsenic interventions outside of Bangladesh are relatively neglected.This report investigates various possible arsenic interventions. The lower bound for cost-per-death-averted was $630; limiting only to interventions that are backed by field studies, the lower bound for cost-per-death-averted was $774. Various factors, primarily uncertainty about the degree of harmful effects from arsenic, may reduce the overall cost-effectiveness, but arsenic interventions nonetheless have the potential to be promising, with the potential to compete with GiveWell top charities.Introduction and Scope of the ProblemHuman exposure to arsenic primarily results from groundwater contamination. Inorganic arsenic contamination in groundwater can lead to human exposure in three main ways: drinking contaminated water, using contaminated water for cooking, and using contaminated water to irrigate crops. The primary negative health effects of arsenic are increased risk of cancer and cardiovascular disease (including heart attacks), but other adverse effects include a higher risk of skin lesions, diabetes, pulmonary disease, stroke, cognitive deficits (in the case of prenatal and early childhood exposure), and (in some circumstances) Blackfoot disease.As of 2021, an estimated 300 million people worldwide live in areas with groundwater contaminated by arsenic (more than 10 μg/L), with approximately 100 million of those people living in areas with groundwater arsenic levels of more than 50 μg/L, which has been linked to an especially high likelihood of severe health consequences. According to the World Health Organization, several countries have been found to have a high level of arsenic in groundwater, including Argentina, Bangladesh, Cambodia, Chile, China, India, Mexico, Pakistan, the United States of America, and Vietnam. Arsenic exposure is particularly common in Bangladesh, where the number of people exposed to arsenic in groundwater has been estimated at 35 million - 77 million. More recent estimates from 2012 indicate that remediation efforts have reduced this number to 19 million people exposed to arsenic levels greater than 50 μg/L, and it is likely that this number has continued to decline since 2012.Arsenic contamination around the world.Various studies have attempted to determine the effects of arsenic exposure on overall mortality. A ten-year cohort study in Banglades...]]>
Francis https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 49:01 None full 4405
mgurctbDAP8bGeHCb_EA EA - How did our historical moral heroes deal with severe adversity and/or moral compromise? by Linch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How did our historical moral heroes deal with severe adversity and/or moral compromise?, published by Linch on January 9, 2023 on The Effective Altruism Forum.In one way we think a great deal too much of the atomic bomb. “How are we to live in an atomic age?” I am tempted to reply: “Why, as you would have lived in the sixteenth century when the plague visited London almost every year, or as you would have lived in a Viking age when raiders from Scandinavia might land and cut your throat any night; or indeed, as you are already living in an age of cancer, an age of syphilis, an age of paralysis, an age of air raids, an age of railway accidents, an age of motor accidents.”In other words, do not let us begin by exaggerating the novelty of our situation.Julia Wise, quoting C.S. LewisThat does not kill us makes us strongerHilary Clinton, quoting Kelly Clarkson, quoting NietszcheIn light of current events, I've personally found it difficult to reach equilibrium. In particular, I've found it hard to navigate a) the 2022 loss of ~3/4 of resources available to longtermist EA, b) the consequentially large harms in the world caused by someone who I thought was close to us, c) setbacks in the research prioritization of my own work, d) some vague feelings that our community is internally falling apart, e) the general impending sense of doom, f) some personal difficulties this year (not all of which is related to global events), and g) general feelings of responsibility and also inadequacy to address the above. I imagine many other people reading this are going through similar difficulties.I'll find it personally helpful to understand how our (my) historical heroes dealt with problems akin to the ones we're currently facing. In particular, I'd be interested in hearing about similar situations faced by 1) the Chinese Mohists and 2) the English utilitarians.I will be interested in hearing stories of how the Chinese Mohists and English utilitarians dealt with situations of i) large situational setbacks and ii) large-scale moral compromise.In the past, I've found it helpful to draw connections between my current work/life and that of those I view as my spiritual or intellectual ancestors. Perhaps this will be true again. I confess to not knowing much of the relevant histories here, but presumably they've faced similar issues? I'm guessing the Mohists couldn't have been happy that states they defended ended up being conquered anyway, and Qin Shihuang unified China with fire and blood. As for the English utilitarians, I assume some of the policies they've advocated backfired severely in their lifetimes, whether obviously or more subtly.I'd be interested in seeing and possibly learning from how they responded, both practically and on an emotional level.So this is my question for the historians/amateur historians: In what ways have our historical moral heroes dealt with large-scale adversity and moral compromise?For example, it was helpful for me to learn about what John Stuart Mill viewed as his personal largest emotional difficulties, as well as the Mohist approaches to asceticism in a corrupt world.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Linch https://forum.effectivealtruism.org/posts/mgurctbDAP8bGeHCb/how-did-our-historical-moral-heroes-deal-with-severe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How did our historical moral heroes deal with severe adversity and/or moral compromise?, published by Linch on January 9, 2023 on The Effective Altruism Forum.In one way we think a great deal too much of the atomic bomb. “How are we to live in an atomic age?” I am tempted to reply: “Why, as you would have lived in the sixteenth century when the plague visited London almost every year, or as you would have lived in a Viking age when raiders from Scandinavia might land and cut your throat any night; or indeed, as you are already living in an age of cancer, an age of syphilis, an age of paralysis, an age of air raids, an age of railway accidents, an age of motor accidents.”In other words, do not let us begin by exaggerating the novelty of our situation.Julia Wise, quoting C.S. LewisThat does not kill us makes us strongerHilary Clinton, quoting Kelly Clarkson, quoting NietszcheIn light of current events, I've personally found it difficult to reach equilibrium. In particular, I've found it hard to navigate a) the 2022 loss of ~3/4 of resources available to longtermist EA, b) the consequentially large harms in the world caused by someone who I thought was close to us, c) setbacks in the research prioritization of my own work, d) some vague feelings that our community is internally falling apart, e) the general impending sense of doom, f) some personal difficulties this year (not all of which is related to global events), and g) general feelings of responsibility and also inadequacy to address the above. I imagine many other people reading this are going through similar difficulties.I'll find it personally helpful to understand how our (my) historical heroes dealt with problems akin to the ones we're currently facing. In particular, I'd be interested in hearing about similar situations faced by 1) the Chinese Mohists and 2) the English utilitarians.I will be interested in hearing stories of how the Chinese Mohists and English utilitarians dealt with situations of i) large situational setbacks and ii) large-scale moral compromise.In the past, I've found it helpful to draw connections between my current work/life and that of those I view as my spiritual or intellectual ancestors. Perhaps this will be true again. I confess to not knowing much of the relevant histories here, but presumably they've faced similar issues? I'm guessing the Mohists couldn't have been happy that states they defended ended up being conquered anyway, and Qin Shihuang unified China with fire and blood. As for the English utilitarians, I assume some of the policies they've advocated backfired severely in their lifetimes, whether obviously or more subtly.I'd be interested in seeing and possibly learning from how they responded, both practically and on an emotional level.So this is my question for the historians/amateur historians: In what ways have our historical moral heroes dealt with large-scale adversity and moral compromise?For example, it was helpful for me to learn about what John Stuart Mill viewed as his personal largest emotional difficulties, as well as the Mohist approaches to asceticism in a corrupt world.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 10 Jan 2023 13:58:42 +0000 EA - How did our historical moral heroes deal with severe adversity and/or moral compromise? by Linch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How did our historical moral heroes deal with severe adversity and/or moral compromise?, published by Linch on January 9, 2023 on The Effective Altruism Forum.In one way we think a great deal too much of the atomic bomb. “How are we to live in an atomic age?” I am tempted to reply: “Why, as you would have lived in the sixteenth century when the plague visited London almost every year, or as you would have lived in a Viking age when raiders from Scandinavia might land and cut your throat any night; or indeed, as you are already living in an age of cancer, an age of syphilis, an age of paralysis, an age of air raids, an age of railway accidents, an age of motor accidents.”In other words, do not let us begin by exaggerating the novelty of our situation.Julia Wise, quoting C.S. LewisThat does not kill us makes us strongerHilary Clinton, quoting Kelly Clarkson, quoting NietszcheIn light of current events, I've personally found it difficult to reach equilibrium. In particular, I've found it hard to navigate a) the 2022 loss of ~3/4 of resources available to longtermist EA, b) the consequentially large harms in the world caused by someone who I thought was close to us, c) setbacks in the research prioritization of my own work, d) some vague feelings that our community is internally falling apart, e) the general impending sense of doom, f) some personal difficulties this year (not all of which is related to global events), and g) general feelings of responsibility and also inadequacy to address the above. I imagine many other people reading this are going through similar difficulties.I'll find it personally helpful to understand how our (my) historical heroes dealt with problems akin to the ones we're currently facing. In particular, I'd be interested in hearing about similar situations faced by 1) the Chinese Mohists and 2) the English utilitarians.I will be interested in hearing stories of how the Chinese Mohists and English utilitarians dealt with situations of i) large situational setbacks and ii) large-scale moral compromise.In the past, I've found it helpful to draw connections between my current work/life and that of those I view as my spiritual or intellectual ancestors. Perhaps this will be true again. I confess to not knowing much of the relevant histories here, but presumably they've faced similar issues? I'm guessing the Mohists couldn't have been happy that states they defended ended up being conquered anyway, and Qin Shihuang unified China with fire and blood. As for the English utilitarians, I assume some of the policies they've advocated backfired severely in their lifetimes, whether obviously or more subtly.I'd be interested in seeing and possibly learning from how they responded, both practically and on an emotional level.So this is my question for the historians/amateur historians: In what ways have our historical moral heroes dealt with large-scale adversity and moral compromise?For example, it was helpful for me to learn about what John Stuart Mill viewed as his personal largest emotional difficulties, as well as the Mohist approaches to asceticism in a corrupt world.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How did our historical moral heroes deal with severe adversity and/or moral compromise?, published by Linch on January 9, 2023 on The Effective Altruism Forum.In one way we think a great deal too much of the atomic bomb. “How are we to live in an atomic age?” I am tempted to reply: “Why, as you would have lived in the sixteenth century when the plague visited London almost every year, or as you would have lived in a Viking age when raiders from Scandinavia might land and cut your throat any night; or indeed, as you are already living in an age of cancer, an age of syphilis, an age of paralysis, an age of air raids, an age of railway accidents, an age of motor accidents.”In other words, do not let us begin by exaggerating the novelty of our situation.Julia Wise, quoting C.S. LewisThat does not kill us makes us strongerHilary Clinton, quoting Kelly Clarkson, quoting NietszcheIn light of current events, I've personally found it difficult to reach equilibrium. In particular, I've found it hard to navigate a) the 2022 loss of ~3/4 of resources available to longtermist EA, b) the consequentially large harms in the world caused by someone who I thought was close to us, c) setbacks in the research prioritization of my own work, d) some vague feelings that our community is internally falling apart, e) the general impending sense of doom, f) some personal difficulties this year (not all of which is related to global events), and g) general feelings of responsibility and also inadequacy to address the above. I imagine many other people reading this are going through similar difficulties.I'll find it personally helpful to understand how our (my) historical heroes dealt with problems akin to the ones we're currently facing. In particular, I'd be interested in hearing about similar situations faced by 1) the Chinese Mohists and 2) the English utilitarians.I will be interested in hearing stories of how the Chinese Mohists and English utilitarians dealt with situations of i) large situational setbacks and ii) large-scale moral compromise.In the past, I've found it helpful to draw connections between my current work/life and that of those I view as my spiritual or intellectual ancestors. Perhaps this will be true again. I confess to not knowing much of the relevant histories here, but presumably they've faced similar issues? I'm guessing the Mohists couldn't have been happy that states they defended ended up being conquered anyway, and Qin Shihuang unified China with fire and blood. As for the English utilitarians, I assume some of the policies they've advocated backfired severely in their lifetimes, whether obviously or more subtly.I'd be interested in seeing and possibly learning from how they responded, both practically and on an emotional level.So this is my question for the historians/amateur historians: In what ways have our historical moral heroes dealt with large-scale adversity and moral compromise?For example, it was helpful for me to learn about what John Stuart Mill viewed as his personal largest emotional difficulties, as well as the Mohist approaches to asceticism in a corrupt world.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Linch https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:11 None full 4407
Ej8J3idFS6SphEeqR_EA EA - Mental Health Providers and Resources Listed on the Mental Health Navigator Website by Emily Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mental Health Providers and Resources Listed on the Mental Health Navigator Website, published by Emily on January 9, 2023 on The Effective Altruism Forum.The Mental Health Navigator is here to help you find mental health support!We heard that effective altruists were struggling to find therapists – it’s hard to find providers who are good or take your insurance or are accepting new clients.List of providersWe recently overhauled a database of providers recommended by community members, so you can more easily find a therapist who works for you. You can sort by availability, average rating, location, cost, specialties, etc., plus read community reviews. We’ve even included links or emails for scheduling an intake so you don’t need to pick up a phone.It’s a place for members of the Effective Altruism community to provide information about mental health providers they have found helpful and would recommend to other community members, including therapists, coaches, counselors, and psychiatrists. It's also now where the lists of community members who provide coaching or mental health services are stored, rather than these being on separate web pages.The database is still limited (e.g. only a handful of providers accept insurance and the majority are based in the US or UK). We’re always working to grow the list of highly recommended providers, so we greatly appreciate you sharing new reviews here anonymously:List of mental health resourcesThe Mental Health Navigator also provides a list of mental health resources. It has been set up to provide as comprehensive a list of mental health resources as possible, though it admittedly still has a lot of room to grow in terms of diversity and geographic coverage. We always welcome recommendations for the table via this form:Advisory ServiceFinally, the Mental Health Navigator aims to provide an Advisory Service where a volunteer will work with you to find a provider who meets your needs. However, our Advisory Service is currently not accepting new applications while we work through our wait list. We will post again on this forum and on our social media pages once we have reopened to everyone.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Emily https://forum.effectivealtruism.org/posts/Ej8J3idFS6SphEeqR/mental-health-providers-and-resources-listed-on-the-mental Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mental Health Providers and Resources Listed on the Mental Health Navigator Website, published by Emily on January 9, 2023 on The Effective Altruism Forum.The Mental Health Navigator is here to help you find mental health support!We heard that effective altruists were struggling to find therapists – it’s hard to find providers who are good or take your insurance or are accepting new clients.List of providersWe recently overhauled a database of providers recommended by community members, so you can more easily find a therapist who works for you. You can sort by availability, average rating, location, cost, specialties, etc., plus read community reviews. We’ve even included links or emails for scheduling an intake so you don’t need to pick up a phone.It’s a place for members of the Effective Altruism community to provide information about mental health providers they have found helpful and would recommend to other community members, including therapists, coaches, counselors, and psychiatrists. It's also now where the lists of community members who provide coaching or mental health services are stored, rather than these being on separate web pages.The database is still limited (e.g. only a handful of providers accept insurance and the majority are based in the US or UK). We’re always working to grow the list of highly recommended providers, so we greatly appreciate you sharing new reviews here anonymously:List of mental health resourcesThe Mental Health Navigator also provides a list of mental health resources. It has been set up to provide as comprehensive a list of mental health resources as possible, though it admittedly still has a lot of room to grow in terms of diversity and geographic coverage. We always welcome recommendations for the table via this form:Advisory ServiceFinally, the Mental Health Navigator aims to provide an Advisory Service where a volunteer will work with you to find a provider who meets your needs. However, our Advisory Service is currently not accepting new applications while we work through our wait list. We will post again on this forum and on our social media pages once we have reopened to everyone.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 10 Jan 2023 11:57:43 +0000 EA - Mental Health Providers and Resources Listed on the Mental Health Navigator Website by Emily Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mental Health Providers and Resources Listed on the Mental Health Navigator Website, published by Emily on January 9, 2023 on The Effective Altruism Forum.The Mental Health Navigator is here to help you find mental health support!We heard that effective altruists were struggling to find therapists – it’s hard to find providers who are good or take your insurance or are accepting new clients.List of providersWe recently overhauled a database of providers recommended by community members, so you can more easily find a therapist who works for you. You can sort by availability, average rating, location, cost, specialties, etc., plus read community reviews. We’ve even included links or emails for scheduling an intake so you don’t need to pick up a phone.It’s a place for members of the Effective Altruism community to provide information about mental health providers they have found helpful and would recommend to other community members, including therapists, coaches, counselors, and psychiatrists. It's also now where the lists of community members who provide coaching or mental health services are stored, rather than these being on separate web pages.The database is still limited (e.g. only a handful of providers accept insurance and the majority are based in the US or UK). We’re always working to grow the list of highly recommended providers, so we greatly appreciate you sharing new reviews here anonymously:List of mental health resourcesThe Mental Health Navigator also provides a list of mental health resources. It has been set up to provide as comprehensive a list of mental health resources as possible, though it admittedly still has a lot of room to grow in terms of diversity and geographic coverage. We always welcome recommendations for the table via this form:Advisory ServiceFinally, the Mental Health Navigator aims to provide an Advisory Service where a volunteer will work with you to find a provider who meets your needs. However, our Advisory Service is currently not accepting new applications while we work through our wait list. We will post again on this forum and on our social media pages once we have reopened to everyone.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mental Health Providers and Resources Listed on the Mental Health Navigator Website, published by Emily on January 9, 2023 on The Effective Altruism Forum.The Mental Health Navigator is here to help you find mental health support!We heard that effective altruists were struggling to find therapists – it’s hard to find providers who are good or take your insurance or are accepting new clients.List of providersWe recently overhauled a database of providers recommended by community members, so you can more easily find a therapist who works for you. You can sort by availability, average rating, location, cost, specialties, etc., plus read community reviews. We’ve even included links or emails for scheduling an intake so you don’t need to pick up a phone.It’s a place for members of the Effective Altruism community to provide information about mental health providers they have found helpful and would recommend to other community members, including therapists, coaches, counselors, and psychiatrists. It's also now where the lists of community members who provide coaching or mental health services are stored, rather than these being on separate web pages.The database is still limited (e.g. only a handful of providers accept insurance and the majority are based in the US or UK). We’re always working to grow the list of highly recommended providers, so we greatly appreciate you sharing new reviews here anonymously:List of mental health resourcesThe Mental Health Navigator also provides a list of mental health resources. It has been set up to provide as comprehensive a list of mental health resources as possible, though it admittedly still has a lot of room to grow in terms of diversity and geographic coverage. We always welcome recommendations for the table via this form:Advisory ServiceFinally, the Mental Health Navigator aims to provide an Advisory Service where a volunteer will work with you to find a provider who meets your needs. However, our Advisory Service is currently not accepting new applications while we work through our wait list. We will post again on this forum and on our social media pages once we have reopened to everyone.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Emily https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:14 None full 4406
tg2WmhuAoXxCHgzxJ_EA EA - My personal takeaways from EAGxLatAm by CristinaSchmidtIbáñez Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My personal takeaways from EAGxLatAm, published by CristinaSchmidtIbáñez on January 10, 2023 on The Effective Altruism Forum.This is a personal post and does not necessarily represent the views of Rethink Priorities.This year I attended EAGxLatAm (6-8 January) in Mexico City.I thought it would be a very valuable exercise to share my own takeaways from it as well as to encourage other attendees’ to share theirs. My takeaways are based on the 22 1-on-1s I had (not counting casual 1-on-1s that happened during the conference). I talked mostly (ca. 70%) with Latin American students at different levels of their undergraduate degrees.So without further ado, these are my takeaways:Things I was rather surprised aboutLatin American students don’t seem tobe aware of the concept of option value and more specifically, they don’t seem to be explicitly considering this a criteria for their career decision-makingbe considering the “ladder of tests” e.g. when thinking about pursuing a specialization (like a master’s or PhD) often they had not yet considered “cheaper test” to see if that degree would be the right move for themmore broadly, they don’t seem to be aware of the different criteria they are comparing different options withLatin American students seem (internally) to feel really pressured to do a master’s and/or PhD abroad (e.g. US).When I dug deeper for the reasons for this, they mentionedJob/financial securityA pathway for them “to be taken seriously”(this one really saddened me): That they feel they are “worth very little in EA” if they don’t get these degreesLatin American students seem to have a hard time thinking ambitiously, probably more so than the “average EA”.When these students come from low/middle income backgrounds then the issue seems to be that they are carrying a lot of “baggage” from their past that makes it hard for them to “think big” about the impact they can have in the futureIf they come from more high income backgrounds then the issue seems to rather be imposter syndrome.Things I was more or less aware of but were (mentally) strongly highlightedFounders/directors of projects usually don’t plan for the changes in the type of work they do as their projects/organizations grow and therefore find themselves after a couple of years doing a lot of tasks they don’t enjoy doingOperations management (particularly HR) isn’t well planned (if planned at all) when nonprofit entrepreneurs are drafting project plans and fundraising. As a result, project budgets don’t account properly for these costs.When operations management is planned for a new project it looks something like “we’ll budget an ops person (and that'll solve our issues)”. The result of this is that by the time that (ops) person is hired, it is expected that one person solves all operational issues for the projectOften by the time projects receive funding they don’t know “what to do with the money” and start looking fast into fiscal sponsors or other ways to receive the fundsSome things I suggest (or even suggested during my 1-on-1s)For community builders to talk with their community members more about how they are comparing different career options. I suggested to one community builder experimenting with a workshop for doing career weighted factor modelsFor students in particular to seek (career) mentoring opportunitiesHaving a public list of project ideas (for the EA space) for non-programmatic things like operations, HR, management etc. so people that would like to work on these things have a better sense on what ideas to prioritizeFor a service to exist that offers founders “career check-ins” once a year where they have to take inventory of how their list of responsibilities has changed and consider alternative paths either within their own organizations or outside of the...]]>
CristinaSchmidtIbáñez https://forum.effectivealtruism.org/posts/tg2WmhuAoXxCHgzxJ/my-personal-takeaways-from-eagxlatam Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My personal takeaways from EAGxLatAm, published by CristinaSchmidtIbáñez on January 10, 2023 on The Effective Altruism Forum.This is a personal post and does not necessarily represent the views of Rethink Priorities.This year I attended EAGxLatAm (6-8 January) in Mexico City.I thought it would be a very valuable exercise to share my own takeaways from it as well as to encourage other attendees’ to share theirs. My takeaways are based on the 22 1-on-1s I had (not counting casual 1-on-1s that happened during the conference). I talked mostly (ca. 70%) with Latin American students at different levels of their undergraduate degrees.So without further ado, these are my takeaways:Things I was rather surprised aboutLatin American students don’t seem tobe aware of the concept of option value and more specifically, they don’t seem to be explicitly considering this a criteria for their career decision-makingbe considering the “ladder of tests” e.g. when thinking about pursuing a specialization (like a master’s or PhD) often they had not yet considered “cheaper test” to see if that degree would be the right move for themmore broadly, they don’t seem to be aware of the different criteria they are comparing different options withLatin American students seem (internally) to feel really pressured to do a master’s and/or PhD abroad (e.g. US).When I dug deeper for the reasons for this, they mentionedJob/financial securityA pathway for them “to be taken seriously”(this one really saddened me): That they feel they are “worth very little in EA” if they don’t get these degreesLatin American students seem to have a hard time thinking ambitiously, probably more so than the “average EA”.When these students come from low/middle income backgrounds then the issue seems to be that they are carrying a lot of “baggage” from their past that makes it hard for them to “think big” about the impact they can have in the futureIf they come from more high income backgrounds then the issue seems to rather be imposter syndrome.Things I was more or less aware of but were (mentally) strongly highlightedFounders/directors of projects usually don’t plan for the changes in the type of work they do as their projects/organizations grow and therefore find themselves after a couple of years doing a lot of tasks they don’t enjoy doingOperations management (particularly HR) isn’t well planned (if planned at all) when nonprofit entrepreneurs are drafting project plans and fundraising. As a result, project budgets don’t account properly for these costs.When operations management is planned for a new project it looks something like “we’ll budget an ops person (and that'll solve our issues)”. The result of this is that by the time that (ops) person is hired, it is expected that one person solves all operational issues for the projectOften by the time projects receive funding they don’t know “what to do with the money” and start looking fast into fiscal sponsors or other ways to receive the fundsSome things I suggest (or even suggested during my 1-on-1s)For community builders to talk with their community members more about how they are comparing different career options. I suggested to one community builder experimenting with a workshop for doing career weighted factor modelsFor students in particular to seek (career) mentoring opportunitiesHaving a public list of project ideas (for the EA space) for non-programmatic things like operations, HR, management etc. so people that would like to work on these things have a better sense on what ideas to prioritizeFor a service to exist that offers founders “career check-ins” once a year where they have to take inventory of how their list of responsibilities has changed and consider alternative paths either within their own organizations or outside of the...]]>
Tue, 10 Jan 2023 03:41:36 +0000 EA - My personal takeaways from EAGxLatAm by CristinaSchmidtIbáñez Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My personal takeaways from EAGxLatAm, published by CristinaSchmidtIbáñez on January 10, 2023 on The Effective Altruism Forum.This is a personal post and does not necessarily represent the views of Rethink Priorities.This year I attended EAGxLatAm (6-8 January) in Mexico City.I thought it would be a very valuable exercise to share my own takeaways from it as well as to encourage other attendees’ to share theirs. My takeaways are based on the 22 1-on-1s I had (not counting casual 1-on-1s that happened during the conference). I talked mostly (ca. 70%) with Latin American students at different levels of their undergraduate degrees.So without further ado, these are my takeaways:Things I was rather surprised aboutLatin American students don’t seem tobe aware of the concept of option value and more specifically, they don’t seem to be explicitly considering this a criteria for their career decision-makingbe considering the “ladder of tests” e.g. when thinking about pursuing a specialization (like a master’s or PhD) often they had not yet considered “cheaper test” to see if that degree would be the right move for themmore broadly, they don’t seem to be aware of the different criteria they are comparing different options withLatin American students seem (internally) to feel really pressured to do a master’s and/or PhD abroad (e.g. US).When I dug deeper for the reasons for this, they mentionedJob/financial securityA pathway for them “to be taken seriously”(this one really saddened me): That they feel they are “worth very little in EA” if they don’t get these degreesLatin American students seem to have a hard time thinking ambitiously, probably more so than the “average EA”.When these students come from low/middle income backgrounds then the issue seems to be that they are carrying a lot of “baggage” from their past that makes it hard for them to “think big” about the impact they can have in the futureIf they come from more high income backgrounds then the issue seems to rather be imposter syndrome.Things I was more or less aware of but were (mentally) strongly highlightedFounders/directors of projects usually don’t plan for the changes in the type of work they do as their projects/organizations grow and therefore find themselves after a couple of years doing a lot of tasks they don’t enjoy doingOperations management (particularly HR) isn’t well planned (if planned at all) when nonprofit entrepreneurs are drafting project plans and fundraising. As a result, project budgets don’t account properly for these costs.When operations management is planned for a new project it looks something like “we’ll budget an ops person (and that'll solve our issues)”. The result of this is that by the time that (ops) person is hired, it is expected that one person solves all operational issues for the projectOften by the time projects receive funding they don’t know “what to do with the money” and start looking fast into fiscal sponsors or other ways to receive the fundsSome things I suggest (or even suggested during my 1-on-1s)For community builders to talk with their community members more about how they are comparing different career options. I suggested to one community builder experimenting with a workshop for doing career weighted factor modelsFor students in particular to seek (career) mentoring opportunitiesHaving a public list of project ideas (for the EA space) for non-programmatic things like operations, HR, management etc. so people that would like to work on these things have a better sense on what ideas to prioritizeFor a service to exist that offers founders “career check-ins” once a year where they have to take inventory of how their list of responsibilities has changed and consider alternative paths either within their own organizations or outside of the...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My personal takeaways from EAGxLatAm, published by CristinaSchmidtIbáñez on January 10, 2023 on The Effective Altruism Forum.This is a personal post and does not necessarily represent the views of Rethink Priorities.This year I attended EAGxLatAm (6-8 January) in Mexico City.I thought it would be a very valuable exercise to share my own takeaways from it as well as to encourage other attendees’ to share theirs. My takeaways are based on the 22 1-on-1s I had (not counting casual 1-on-1s that happened during the conference). I talked mostly (ca. 70%) with Latin American students at different levels of their undergraduate degrees.So without further ado, these are my takeaways:Things I was rather surprised aboutLatin American students don’t seem tobe aware of the concept of option value and more specifically, they don’t seem to be explicitly considering this a criteria for their career decision-makingbe considering the “ladder of tests” e.g. when thinking about pursuing a specialization (like a master’s or PhD) often they had not yet considered “cheaper test” to see if that degree would be the right move for themmore broadly, they don’t seem to be aware of the different criteria they are comparing different options withLatin American students seem (internally) to feel really pressured to do a master’s and/or PhD abroad (e.g. US).When I dug deeper for the reasons for this, they mentionedJob/financial securityA pathway for them “to be taken seriously”(this one really saddened me): That they feel they are “worth very little in EA” if they don’t get these degreesLatin American students seem to have a hard time thinking ambitiously, probably more so than the “average EA”.When these students come from low/middle income backgrounds then the issue seems to be that they are carrying a lot of “baggage” from their past that makes it hard for them to “think big” about the impact they can have in the futureIf they come from more high income backgrounds then the issue seems to rather be imposter syndrome.Things I was more or less aware of but were (mentally) strongly highlightedFounders/directors of projects usually don’t plan for the changes in the type of work they do as their projects/organizations grow and therefore find themselves after a couple of years doing a lot of tasks they don’t enjoy doingOperations management (particularly HR) isn’t well planned (if planned at all) when nonprofit entrepreneurs are drafting project plans and fundraising. As a result, project budgets don’t account properly for these costs.When operations management is planned for a new project it looks something like “we’ll budget an ops person (and that'll solve our issues)”. The result of this is that by the time that (ops) person is hired, it is expected that one person solves all operational issues for the projectOften by the time projects receive funding they don’t know “what to do with the money” and start looking fast into fiscal sponsors or other ways to receive the fundsSome things I suggest (or even suggested during my 1-on-1s)For community builders to talk with their community members more about how they are comparing different career options. I suggested to one community builder experimenting with a workshop for doing career weighted factor modelsFor students in particular to seek (career) mentoring opportunitiesHaving a public list of project ideas (for the EA space) for non-programmatic things like operations, HR, management etc. so people that would like to work on these things have a better sense on what ideas to prioritizeFor a service to exist that offers founders “career check-ins” once a year where they have to take inventory of how their list of responsibilities has changed and consider alternative paths either within their own organizations or outside of the...]]>
CristinaSchmidtIbáñez https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:06 None full 4394
YNcWioEzJwEHxbJ64_EA EA - There is now an EA Managers Slack by Ben Kuhn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There is now an EA Managers Slack, published by Ben Kuhn on January 9, 2023 on The Effective Altruism Forum.tl;dr: fill out this ~2-min form for access!In Some notes on common challenges building EA orgs, I noted that a lot of managers at EA orgs face a set of similar challenges related to being new to their role and not having peers or mentors within their org to learn from.I thought it could help to have a shared, cross-organization discussion forum for management issues, so that if you're dealing with some kind of management problem for the first time—say, hiring, or coaching someone who's underperforming, or reorganizing a team—you can get advice from people who have done it before, even if none of your direct coworkers have. I intend to be pretty active answering questions, seeding discussion, etc. for the first while.For now, the group is open to paid employees of EA organizations (borrowing the EA Ops slack criteria) with at least one paid direct report. If that's you, fill out this form for access!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben Kuhn https://forum.effectivealtruism.org/posts/YNcWioEzJwEHxbJ64/there-is-now-an-ea-managers-slack Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There is now an EA Managers Slack, published by Ben Kuhn on January 9, 2023 on The Effective Altruism Forum.tl;dr: fill out this ~2-min form for access!In Some notes on common challenges building EA orgs, I noted that a lot of managers at EA orgs face a set of similar challenges related to being new to their role and not having peers or mentors within their org to learn from.I thought it could help to have a shared, cross-organization discussion forum for management issues, so that if you're dealing with some kind of management problem for the first time—say, hiring, or coaching someone who's underperforming, or reorganizing a team—you can get advice from people who have done it before, even if none of your direct coworkers have. I intend to be pretty active answering questions, seeding discussion, etc. for the first while.For now, the group is open to paid employees of EA organizations (borrowing the EA Ops slack criteria) with at least one paid direct report. If that's you, fill out this form for access!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 10 Jan 2023 01:14:45 +0000 EA - There is now an EA Managers Slack by Ben Kuhn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There is now an EA Managers Slack, published by Ben Kuhn on January 9, 2023 on The Effective Altruism Forum.tl;dr: fill out this ~2-min form for access!In Some notes on common challenges building EA orgs, I noted that a lot of managers at EA orgs face a set of similar challenges related to being new to their role and not having peers or mentors within their org to learn from.I thought it could help to have a shared, cross-organization discussion forum for management issues, so that if you're dealing with some kind of management problem for the first time—say, hiring, or coaching someone who's underperforming, or reorganizing a team—you can get advice from people who have done it before, even if none of your direct coworkers have. I intend to be pretty active answering questions, seeding discussion, etc. for the first while.For now, the group is open to paid employees of EA organizations (borrowing the EA Ops slack criteria) with at least one paid direct report. If that's you, fill out this form for access!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There is now an EA Managers Slack, published by Ben Kuhn on January 9, 2023 on The Effective Altruism Forum.tl;dr: fill out this ~2-min form for access!In Some notes on common challenges building EA orgs, I noted that a lot of managers at EA orgs face a set of similar challenges related to being new to their role and not having peers or mentors within their org to learn from.I thought it could help to have a shared, cross-organization discussion forum for management issues, so that if you're dealing with some kind of management problem for the first time—say, hiring, or coaching someone who's underperforming, or reorganizing a team—you can get advice from people who have done it before, even if none of your direct coworkers have. I intend to be pretty active answering questions, seeding discussion, etc. for the first while.For now, the group is open to paid employees of EA organizations (borrowing the EA Ops slack criteria) with at least one paid direct report. If that's you, fill out this form for access!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben Kuhn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:15 None full 4396
nGQ5BtFtxyP8Tw9d3_EA EA - GWWC Should Require Public Charity Evaluations by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Should Require Public Charity Evaluations, published by Jeff Kaufman on January 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/nGQ5BtFtxyP8Tw9d3/gwwc-should-require-public-charity-evaluations Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Should Require Public Charity Evaluations, published by Jeff Kaufman on January 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 09 Jan 2023 23:42:12 +0000 EA - GWWC Should Require Public Charity Evaluations by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Should Require Public Charity Evaluations, published by Jeff Kaufman on January 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Should Require Public Charity Evaluations, published by Jeff Kaufman on January 9, 2023 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 4395
RWQ6Pqc4s8yq2fSjg_EA EA - Forecasting extreme outcomes by AidanGoth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting extreme outcomes, published by AidanGoth on January 9, 2023 on The Effective Altruism Forum.This document explores and develops methods for forecasting extreme outcomes, such as the maximum of a sample of n independent and identically distributed random variables. I was inspired to write this by Jaime Sevilla’s recent post with research ideas in forecasting and, in particular, his suggestion to write an accessible introduction to the Fisher–Tippett–Gnedenko Theorem.I’m very grateful to Jaime Sevilla for proposing this idea and for providing great feedback on a draft of this document.SummaryThe Fisher–Tippett–Gnedenko Theorem is similar to a central limit theorem, but for the maximum of random variables. Whereas central limit theorems tell us about what happens on average, the Fisher–Tippett–Gnedenko Theorem tells us what happens in extreme cases. This makes it especially useful in risk management, when we need to pay particular attention to worst case outcomes. It could be a useful tool for forecasting tail events.This document introduces the theorem, describes the limiting probability distribution and provides a couple of examples to illustrate the use (and misuse!) of the Fisher–Tippett–Gnedenko Theorem for forecasting. In the process, I introduce a tool that computes the distribution of the maximum n iid random variables that follow a normal distribution centrally but with an (optional) right Pareto tail.Summary:The Fisher–Tippett–Gnedenko Theorem says (roughly) that if the maximum of n iid random variables—which is itself a random variable—converges as n grows to infinity, then it must converge to a generalised extreme value (GEV) distributionUse cases:When we have lots of data, we should try to fit our data to a GEV distribution since this is the distribution that the maximum should converge to (if it converges)When we have subjective judgements about the distribution of the maximum (e.g. a 90% credible interval and median forecast), we can use these to determine parameters of a GEV distribution that fits these judgementsWhen we know or have subjective judgements about the distribution of the random variables we’re maximising over, the theorem can help us determine the distribution of the maximum of n such random variables for large n – but this can give very bad results when our assumptions / judgements are wrongLimitations:To get accurate forecasts about the maximum of n random variables based on the distribution of the underlying random variables, we need accurate judgements about the right tail of the underlying random variables because the maximum will very likely be drawn from the tail, especially as n gets largeEven for data that is very well described by a normal distribution for typical values, normality can break down at the tails and this can greatly affect the resulting forecastsI use the example of human height: naively assuming normality underestimates how extreme the tallest and shortest humans are because height is “only” normally distributed up to 2-3 standard deviations around the meanModelling the tail separately (even with quite a crude model) can improve forecastsThis simple tool might be good enough for forecasting purposes in many casesIt assumes that the underlying r.v.s are iid and normally distributed up to k standard deviations above the mean and that there is a Pareto tail beyond this pointInputs:90% CI for the underlying r.v.sn (the number of samples of the underlying random variables)k (the number of SDs above the mean at which the Pareto tail starts); set this high if you don’t want a Pareto tailOutput: cumulative distribution function, approximate probability density function and approximate expectation of the maximum of n samples of the underlying random variablesRequest for feedback: I’m not a...]]>
AidanGoth https://forum.effectivealtruism.org/posts/RWQ6Pqc4s8yq2fSjg/forecasting-extreme-outcomes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting extreme outcomes, published by AidanGoth on January 9, 2023 on The Effective Altruism Forum.This document explores and develops methods for forecasting extreme outcomes, such as the maximum of a sample of n independent and identically distributed random variables. I was inspired to write this by Jaime Sevilla’s recent post with research ideas in forecasting and, in particular, his suggestion to write an accessible introduction to the Fisher–Tippett–Gnedenko Theorem.I’m very grateful to Jaime Sevilla for proposing this idea and for providing great feedback on a draft of this document.SummaryThe Fisher–Tippett–Gnedenko Theorem is similar to a central limit theorem, but for the maximum of random variables. Whereas central limit theorems tell us about what happens on average, the Fisher–Tippett–Gnedenko Theorem tells us what happens in extreme cases. This makes it especially useful in risk management, when we need to pay particular attention to worst case outcomes. It could be a useful tool for forecasting tail events.This document introduces the theorem, describes the limiting probability distribution and provides a couple of examples to illustrate the use (and misuse!) of the Fisher–Tippett–Gnedenko Theorem for forecasting. In the process, I introduce a tool that computes the distribution of the maximum n iid random variables that follow a normal distribution centrally but with an (optional) right Pareto tail.Summary:The Fisher–Tippett–Gnedenko Theorem says (roughly) that if the maximum of n iid random variables—which is itself a random variable—converges as n grows to infinity, then it must converge to a generalised extreme value (GEV) distributionUse cases:When we have lots of data, we should try to fit our data to a GEV distribution since this is the distribution that the maximum should converge to (if it converges)When we have subjective judgements about the distribution of the maximum (e.g. a 90% credible interval and median forecast), we can use these to determine parameters of a GEV distribution that fits these judgementsWhen we know or have subjective judgements about the distribution of the random variables we’re maximising over, the theorem can help us determine the distribution of the maximum of n such random variables for large n – but this can give very bad results when our assumptions / judgements are wrongLimitations:To get accurate forecasts about the maximum of n random variables based on the distribution of the underlying random variables, we need accurate judgements about the right tail of the underlying random variables because the maximum will very likely be drawn from the tail, especially as n gets largeEven for data that is very well described by a normal distribution for typical values, normality can break down at the tails and this can greatly affect the resulting forecastsI use the example of human height: naively assuming normality underestimates how extreme the tallest and shortest humans are because height is “only” normally distributed up to 2-3 standard deviations around the meanModelling the tail separately (even with quite a crude model) can improve forecastsThis simple tool might be good enough for forecasting purposes in many casesIt assumes that the underlying r.v.s are iid and normally distributed up to k standard deviations above the mean and that there is a Pareto tail beyond this pointInputs:90% CI for the underlying r.v.sn (the number of samples of the underlying random variables)k (the number of SDs above the mean at which the Pareto tail starts); set this high if you don’t want a Pareto tailOutput: cumulative distribution function, approximate probability density function and approximate expectation of the maximum of n samples of the underlying random variablesRequest for feedback: I’m not a...]]>
Mon, 09 Jan 2023 20:12:18 +0000 EA - Forecasting extreme outcomes by AidanGoth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting extreme outcomes, published by AidanGoth on January 9, 2023 on The Effective Altruism Forum.This document explores and develops methods for forecasting extreme outcomes, such as the maximum of a sample of n independent and identically distributed random variables. I was inspired to write this by Jaime Sevilla’s recent post with research ideas in forecasting and, in particular, his suggestion to write an accessible introduction to the Fisher–Tippett–Gnedenko Theorem.I’m very grateful to Jaime Sevilla for proposing this idea and for providing great feedback on a draft of this document.SummaryThe Fisher–Tippett–Gnedenko Theorem is similar to a central limit theorem, but for the maximum of random variables. Whereas central limit theorems tell us about what happens on average, the Fisher–Tippett–Gnedenko Theorem tells us what happens in extreme cases. This makes it especially useful in risk management, when we need to pay particular attention to worst case outcomes. It could be a useful tool for forecasting tail events.This document introduces the theorem, describes the limiting probability distribution and provides a couple of examples to illustrate the use (and misuse!) of the Fisher–Tippett–Gnedenko Theorem for forecasting. In the process, I introduce a tool that computes the distribution of the maximum n iid random variables that follow a normal distribution centrally but with an (optional) right Pareto tail.Summary:The Fisher–Tippett–Gnedenko Theorem says (roughly) that if the maximum of n iid random variables—which is itself a random variable—converges as n grows to infinity, then it must converge to a generalised extreme value (GEV) distributionUse cases:When we have lots of data, we should try to fit our data to a GEV distribution since this is the distribution that the maximum should converge to (if it converges)When we have subjective judgements about the distribution of the maximum (e.g. a 90% credible interval and median forecast), we can use these to determine parameters of a GEV distribution that fits these judgementsWhen we know or have subjective judgements about the distribution of the random variables we’re maximising over, the theorem can help us determine the distribution of the maximum of n such random variables for large n – but this can give very bad results when our assumptions / judgements are wrongLimitations:To get accurate forecasts about the maximum of n random variables based on the distribution of the underlying random variables, we need accurate judgements about the right tail of the underlying random variables because the maximum will very likely be drawn from the tail, especially as n gets largeEven for data that is very well described by a normal distribution for typical values, normality can break down at the tails and this can greatly affect the resulting forecastsI use the example of human height: naively assuming normality underestimates how extreme the tallest and shortest humans are because height is “only” normally distributed up to 2-3 standard deviations around the meanModelling the tail separately (even with quite a crude model) can improve forecastsThis simple tool might be good enough for forecasting purposes in many casesIt assumes that the underlying r.v.s are iid and normally distributed up to k standard deviations above the mean and that there is a Pareto tail beyond this pointInputs:90% CI for the underlying r.v.sn (the number of samples of the underlying random variables)k (the number of SDs above the mean at which the Pareto tail starts); set this high if you don’t want a Pareto tailOutput: cumulative distribution function, approximate probability density function and approximate expectation of the maximum of n samples of the underlying random variablesRequest for feedback: I’m not a...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting extreme outcomes, published by AidanGoth on January 9, 2023 on The Effective Altruism Forum.This document explores and develops methods for forecasting extreme outcomes, such as the maximum of a sample of n independent and identically distributed random variables. I was inspired to write this by Jaime Sevilla’s recent post with research ideas in forecasting and, in particular, his suggestion to write an accessible introduction to the Fisher–Tippett–Gnedenko Theorem.I’m very grateful to Jaime Sevilla for proposing this idea and for providing great feedback on a draft of this document.SummaryThe Fisher–Tippett–Gnedenko Theorem is similar to a central limit theorem, but for the maximum of random variables. Whereas central limit theorems tell us about what happens on average, the Fisher–Tippett–Gnedenko Theorem tells us what happens in extreme cases. This makes it especially useful in risk management, when we need to pay particular attention to worst case outcomes. It could be a useful tool for forecasting tail events.This document introduces the theorem, describes the limiting probability distribution and provides a couple of examples to illustrate the use (and misuse!) of the Fisher–Tippett–Gnedenko Theorem for forecasting. In the process, I introduce a tool that computes the distribution of the maximum n iid random variables that follow a normal distribution centrally but with an (optional) right Pareto tail.Summary:The Fisher–Tippett–Gnedenko Theorem says (roughly) that if the maximum of n iid random variables—which is itself a random variable—converges as n grows to infinity, then it must converge to a generalised extreme value (GEV) distributionUse cases:When we have lots of data, we should try to fit our data to a GEV distribution since this is the distribution that the maximum should converge to (if it converges)When we have subjective judgements about the distribution of the maximum (e.g. a 90% credible interval and median forecast), we can use these to determine parameters of a GEV distribution that fits these judgementsWhen we know or have subjective judgements about the distribution of the random variables we’re maximising over, the theorem can help us determine the distribution of the maximum of n such random variables for large n – but this can give very bad results when our assumptions / judgements are wrongLimitations:To get accurate forecasts about the maximum of n random variables based on the distribution of the underlying random variables, we need accurate judgements about the right tail of the underlying random variables because the maximum will very likely be drawn from the tail, especially as n gets largeEven for data that is very well described by a normal distribution for typical values, normality can break down at the tails and this can greatly affect the resulting forecastsI use the example of human height: naively assuming normality underestimates how extreme the tallest and shortest humans are because height is “only” normally distributed up to 2-3 standard deviations around the meanModelling the tail separately (even with quite a crude model) can improve forecastsThis simple tool might be good enough for forecasting purposes in many casesIt assumes that the underlying r.v.s are iid and normally distributed up to k standard deviations above the mean and that there is a Pareto tail beyond this pointInputs:90% CI for the underlying r.v.sn (the number of samples of the underlying random variables)k (the number of SDs above the mean at which the Pareto tail starts); set this high if you don’t want a Pareto tailOutput: cumulative distribution function, approximate probability density function and approximate expectation of the maximum of n samples of the underlying random variablesRequest for feedback: I’m not a...]]>
AidanGoth https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:15 None full 4390
bCaoxK35RbKtfkDcS_EA EA - Announcing the Launch of the NYU Wild Animal Welfare Program by Sofia Fogel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Launch of the NYU Wild Animal Welfare Program, published by Sofia Fogel on January 9, 2023 on The Effective Altruism Forum.We are thrilled to announce the launch of the NYU Wild Animal Welfare Program later this month!The NYU Wild Animal Welfare (WAW) program aims to advance understanding about what wild animals are like, how humans and wild animals interact, and how humans can improve our interactions with wild animals at scale. We pursue this goal through foundational research in the humanities, social sciences, and natural sciences, as well as through outreach to academics, advocates, policymakers, and the general public. The team includes Becca Franks and Jeff Sebo as co-directors, me (Sofia) as coordinator, and Arthur Caplan, Lucius Caviola, Kyle Ferguson, Jennifer Jacquet, Dale Jamieson, Colin Jerolmack, Sonali McDermid, Danielle Spiegel-Feld, Christine Webb, and others as faculty affiliates.The program will launch on January 27, 2023 with a roundtable discussion titled "How can humans improve our interactions with wild animals at scale?" The panel will include program directors Becca Franks and Jeff Sebo and program affiliates Christine Webb, Colin Jerolmack, and Dale Jamieson. The discussion will cover an array of topics including:Why does wild animal welfare matter more than ever?What are the most urgent and actionable issues confronting wild animals?How does wild animal welfare relate to conservation biology and other fields?We will also have plenty of time for discussion with the audience. We welcome you to join us in person or online.We will soon be announcing additional spring events as well as opportunities for early-career researchers. If you are interested in receiving occasional updates about our work and offerings, we encourage you to sign up for our email list.Please also feel free to contact us with other inquiries.This launch follows on the heels of our October 2022 launch of the NYU Mind, Ethics, and Policy Program, which may also be of interest to readers of this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sofia Fogel https://forum.effectivealtruism.org/posts/bCaoxK35RbKtfkDcS/announcing-the-launch-of-the-nyu-wild-animal-welfare-program Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Launch of the NYU Wild Animal Welfare Program, published by Sofia Fogel on January 9, 2023 on The Effective Altruism Forum.We are thrilled to announce the launch of the NYU Wild Animal Welfare Program later this month!The NYU Wild Animal Welfare (WAW) program aims to advance understanding about what wild animals are like, how humans and wild animals interact, and how humans can improve our interactions with wild animals at scale. We pursue this goal through foundational research in the humanities, social sciences, and natural sciences, as well as through outreach to academics, advocates, policymakers, and the general public. The team includes Becca Franks and Jeff Sebo as co-directors, me (Sofia) as coordinator, and Arthur Caplan, Lucius Caviola, Kyle Ferguson, Jennifer Jacquet, Dale Jamieson, Colin Jerolmack, Sonali McDermid, Danielle Spiegel-Feld, Christine Webb, and others as faculty affiliates.The program will launch on January 27, 2023 with a roundtable discussion titled "How can humans improve our interactions with wild animals at scale?" The panel will include program directors Becca Franks and Jeff Sebo and program affiliates Christine Webb, Colin Jerolmack, and Dale Jamieson. The discussion will cover an array of topics including:Why does wild animal welfare matter more than ever?What are the most urgent and actionable issues confronting wild animals?How does wild animal welfare relate to conservation biology and other fields?We will also have plenty of time for discussion with the audience. We welcome you to join us in person or online.We will soon be announcing additional spring events as well as opportunities for early-career researchers. If you are interested in receiving occasional updates about our work and offerings, we encourage you to sign up for our email list.Please also feel free to contact us with other inquiries.This launch follows on the heels of our October 2022 launch of the NYU Mind, Ethics, and Policy Program, which may also be of interest to readers of this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 09 Jan 2023 17:00:26 +0000 EA - Announcing the Launch of the NYU Wild Animal Welfare Program by Sofia Fogel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Launch of the NYU Wild Animal Welfare Program, published by Sofia Fogel on January 9, 2023 on The Effective Altruism Forum.We are thrilled to announce the launch of the NYU Wild Animal Welfare Program later this month!The NYU Wild Animal Welfare (WAW) program aims to advance understanding about what wild animals are like, how humans and wild animals interact, and how humans can improve our interactions with wild animals at scale. We pursue this goal through foundational research in the humanities, social sciences, and natural sciences, as well as through outreach to academics, advocates, policymakers, and the general public. The team includes Becca Franks and Jeff Sebo as co-directors, me (Sofia) as coordinator, and Arthur Caplan, Lucius Caviola, Kyle Ferguson, Jennifer Jacquet, Dale Jamieson, Colin Jerolmack, Sonali McDermid, Danielle Spiegel-Feld, Christine Webb, and others as faculty affiliates.The program will launch on January 27, 2023 with a roundtable discussion titled "How can humans improve our interactions with wild animals at scale?" The panel will include program directors Becca Franks and Jeff Sebo and program affiliates Christine Webb, Colin Jerolmack, and Dale Jamieson. The discussion will cover an array of topics including:Why does wild animal welfare matter more than ever?What are the most urgent and actionable issues confronting wild animals?How does wild animal welfare relate to conservation biology and other fields?We will also have plenty of time for discussion with the audience. We welcome you to join us in person or online.We will soon be announcing additional spring events as well as opportunities for early-career researchers. If you are interested in receiving occasional updates about our work and offerings, we encourage you to sign up for our email list.Please also feel free to contact us with other inquiries.This launch follows on the heels of our October 2022 launch of the NYU Mind, Ethics, and Policy Program, which may also be of interest to readers of this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Launch of the NYU Wild Animal Welfare Program, published by Sofia Fogel on January 9, 2023 on The Effective Altruism Forum.We are thrilled to announce the launch of the NYU Wild Animal Welfare Program later this month!The NYU Wild Animal Welfare (WAW) program aims to advance understanding about what wild animals are like, how humans and wild animals interact, and how humans can improve our interactions with wild animals at scale. We pursue this goal through foundational research in the humanities, social sciences, and natural sciences, as well as through outreach to academics, advocates, policymakers, and the general public. The team includes Becca Franks and Jeff Sebo as co-directors, me (Sofia) as coordinator, and Arthur Caplan, Lucius Caviola, Kyle Ferguson, Jennifer Jacquet, Dale Jamieson, Colin Jerolmack, Sonali McDermid, Danielle Spiegel-Feld, Christine Webb, and others as faculty affiliates.The program will launch on January 27, 2023 with a roundtable discussion titled "How can humans improve our interactions with wild animals at scale?" The panel will include program directors Becca Franks and Jeff Sebo and program affiliates Christine Webb, Colin Jerolmack, and Dale Jamieson. The discussion will cover an array of topics including:Why does wild animal welfare matter more than ever?What are the most urgent and actionable issues confronting wild animals?How does wild animal welfare relate to conservation biology and other fields?We will also have plenty of time for discussion with the audience. We welcome you to join us in person or online.We will soon be announcing additional spring events as well as opportunities for early-career researchers. If you are interested in receiving occasional updates about our work and offerings, we encourage you to sign up for our email list.Please also feel free to contact us with other inquiries.This launch follows on the heels of our October 2022 launch of the NYU Mind, Ethics, and Policy Program, which may also be of interest to readers of this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sofia Fogel https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:16 None full 4389
Hk9vhBhrWbyBYX6xb_EA EA - A Study of EA Orgs’ Social Media by Stan Pinsent Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Study of EA Orgs’ Social Media, published by Stan Pinsent on January 9, 2023 on The Effective Altruism Forum.SummaryI collected data on the social media accounts of 79 EA-related organisations. Key findings:Orgs have far more followers on Facebook and Twitter than on Instagram. Facebook accounts typically have more followers than Twitter accounts, but this varies between cause areasA number of Longtermism- and Infrastructure-focused orgs have stepped away from social media since 2021On Facebook, posting regularly correlates weakly with having a larger followingOverall, it appears that EA-aligned organisations should consider their cause area within EA when forming a social media strategyMethodologyI checked the Facebook, Twitter and Instagram accounts of each of the orgs on this list of EA-related organisations and collected data on 1) the number of days since last posting and 2) the number of followers. See the footnotes for details on methodology[1] and limitations[2] [Footnotes are included in the main text because the forum is being glitchy].You can find the full dataset here.Data Overview87% of the 79 organisations surveyed have an account on either Facebook, Twitter or InstagramOf the organisations on social media, only 58% had been active in the past weekMost of the organisations use Facebook and Twitter and less than half of them use Instagram:There was a good spread of cause areas in the survey:Cause areaNumber of organisationsAnimal Advocacy12Longtermism15Global Health & Poverty18Infrastructure24Other10Total79Orgs to WatchA number of the organisations stood out for their aptitude on social media. Below are “ten organisations to watch”, with links to their respective accounts:OrgCause areaLinksReasons to watchThe Humane LeagueAnimal AdvocacyRegular posts and diverse content. Strong on all platformsAnimal EthicsFacebook powerhouse. Diverse Instagram content, cross-channel promotionFuture of Life InstituteLongtermismHarnessing clips and quotes from their podcast, highly active on all platformsSightsaversGlobal Health & Poverty40,000 tweets and countingGiveDirectlyPutting a human face on their work on InstagramAbdul Latif Jameel Poverty Action Lab Regular, diverse Twitter contentEvidence ActionEngaging visuals with every post80,000 HoursInfrastructureLonger, regular but infrequent Facebook postsHigh Impact AthletesEyeball-grabbing visuals on InstagramOur World in DataOtherMaking data sing on InstagramFBTIGFBTIGFBTIGFBTIGFBTIGFBTFBTIGFBTIGFBTIGFBTIGFollower DataFacebook or Twitter? It depends on the cause area.65% of organisations had more followers on Facebook than on Twitter. Comparing the median value of Facebook followers per Twitter follower within each cause area, we see significant variation (with the caveat that sample sizes are small).It appears that cause area has a strong correlation with which platform an organisation does best in: all Animal Advocacy orgs had more Facebook followers than Twitter followers, while 5 of 8 Longtermist orgs had the opposite trend.Why such a difference between the cause areas? I have a few suggestions:Anecdotally, Facebook is better than Twitter for harnessing broad audiences with shared interests. Animal Advocacy orgs may have been able to snare the audiences of big groups like Greenpeace and the WWF (3.1M and 3.6M Facebook followers respectively). The other cause areas don’t have such big peers to poach from (Oxfam has 1.1M followers, and Longtermism barely exists beyond EA)It seems plausible that Twitter is simply more popular among Effective Altruists. This would explain why the Animal Advocacy movement, within which EAs are a minority, is an exception here (although EAs are also a minority in Global Health & Poverty)Some Longtermist and Infrastructure orgs have...]]>
Stan Pinsent https://forum.effectivealtruism.org/posts/Hk9vhBhrWbyBYX6xb/a-study-of-ea-orgs-social-media Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Study of EA Orgs’ Social Media, published by Stan Pinsent on January 9, 2023 on The Effective Altruism Forum.SummaryI collected data on the social media accounts of 79 EA-related organisations. Key findings:Orgs have far more followers on Facebook and Twitter than on Instagram. Facebook accounts typically have more followers than Twitter accounts, but this varies between cause areasA number of Longtermism- and Infrastructure-focused orgs have stepped away from social media since 2021On Facebook, posting regularly correlates weakly with having a larger followingOverall, it appears that EA-aligned organisations should consider their cause area within EA when forming a social media strategyMethodologyI checked the Facebook, Twitter and Instagram accounts of each of the orgs on this list of EA-related organisations and collected data on 1) the number of days since last posting and 2) the number of followers. See the footnotes for details on methodology[1] and limitations[2] [Footnotes are included in the main text because the forum is being glitchy].You can find the full dataset here.Data Overview87% of the 79 organisations surveyed have an account on either Facebook, Twitter or InstagramOf the organisations on social media, only 58% had been active in the past weekMost of the organisations use Facebook and Twitter and less than half of them use Instagram:There was a good spread of cause areas in the survey:Cause areaNumber of organisationsAnimal Advocacy12Longtermism15Global Health & Poverty18Infrastructure24Other10Total79Orgs to WatchA number of the organisations stood out for their aptitude on social media. Below are “ten organisations to watch”, with links to their respective accounts:OrgCause areaLinksReasons to watchThe Humane LeagueAnimal AdvocacyRegular posts and diverse content. Strong on all platformsAnimal EthicsFacebook powerhouse. Diverse Instagram content, cross-channel promotionFuture of Life InstituteLongtermismHarnessing clips and quotes from their podcast, highly active on all platformsSightsaversGlobal Health & Poverty40,000 tweets and countingGiveDirectlyPutting a human face on their work on InstagramAbdul Latif Jameel Poverty Action Lab Regular, diverse Twitter contentEvidence ActionEngaging visuals with every post80,000 HoursInfrastructureLonger, regular but infrequent Facebook postsHigh Impact AthletesEyeball-grabbing visuals on InstagramOur World in DataOtherMaking data sing on InstagramFBTIGFBTIGFBTIGFBTIGFBTIGFBTFBTIGFBTIGFBTIGFBTIGFollower DataFacebook or Twitter? It depends on the cause area.65% of organisations had more followers on Facebook than on Twitter. Comparing the median value of Facebook followers per Twitter follower within each cause area, we see significant variation (with the caveat that sample sizes are small).It appears that cause area has a strong correlation with which platform an organisation does best in: all Animal Advocacy orgs had more Facebook followers than Twitter followers, while 5 of 8 Longtermist orgs had the opposite trend.Why such a difference between the cause areas? I have a few suggestions:Anecdotally, Facebook is better than Twitter for harnessing broad audiences with shared interests. Animal Advocacy orgs may have been able to snare the audiences of big groups like Greenpeace and the WWF (3.1M and 3.6M Facebook followers respectively). The other cause areas don’t have such big peers to poach from (Oxfam has 1.1M followers, and Longtermism barely exists beyond EA)It seems plausible that Twitter is simply more popular among Effective Altruists. This would explain why the Animal Advocacy movement, within which EAs are a minority, is an exception here (although EAs are also a minority in Global Health & Poverty)Some Longtermist and Infrastructure orgs have...]]>
Mon, 09 Jan 2023 16:57:13 +0000 EA - A Study of EA Orgs’ Social Media by Stan Pinsent Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Study of EA Orgs’ Social Media, published by Stan Pinsent on January 9, 2023 on The Effective Altruism Forum.SummaryI collected data on the social media accounts of 79 EA-related organisations. Key findings:Orgs have far more followers on Facebook and Twitter than on Instagram. Facebook accounts typically have more followers than Twitter accounts, but this varies between cause areasA number of Longtermism- and Infrastructure-focused orgs have stepped away from social media since 2021On Facebook, posting regularly correlates weakly with having a larger followingOverall, it appears that EA-aligned organisations should consider their cause area within EA when forming a social media strategyMethodologyI checked the Facebook, Twitter and Instagram accounts of each of the orgs on this list of EA-related organisations and collected data on 1) the number of days since last posting and 2) the number of followers. See the footnotes for details on methodology[1] and limitations[2] [Footnotes are included in the main text because the forum is being glitchy].You can find the full dataset here.Data Overview87% of the 79 organisations surveyed have an account on either Facebook, Twitter or InstagramOf the organisations on social media, only 58% had been active in the past weekMost of the organisations use Facebook and Twitter and less than half of them use Instagram:There was a good spread of cause areas in the survey:Cause areaNumber of organisationsAnimal Advocacy12Longtermism15Global Health & Poverty18Infrastructure24Other10Total79Orgs to WatchA number of the organisations stood out for their aptitude on social media. Below are “ten organisations to watch”, with links to their respective accounts:OrgCause areaLinksReasons to watchThe Humane LeagueAnimal AdvocacyRegular posts and diverse content. Strong on all platformsAnimal EthicsFacebook powerhouse. Diverse Instagram content, cross-channel promotionFuture of Life InstituteLongtermismHarnessing clips and quotes from their podcast, highly active on all platformsSightsaversGlobal Health & Poverty40,000 tweets and countingGiveDirectlyPutting a human face on their work on InstagramAbdul Latif Jameel Poverty Action Lab Regular, diverse Twitter contentEvidence ActionEngaging visuals with every post80,000 HoursInfrastructureLonger, regular but infrequent Facebook postsHigh Impact AthletesEyeball-grabbing visuals on InstagramOur World in DataOtherMaking data sing on InstagramFBTIGFBTIGFBTIGFBTIGFBTIGFBTFBTIGFBTIGFBTIGFBTIGFollower DataFacebook or Twitter? It depends on the cause area.65% of organisations had more followers on Facebook than on Twitter. Comparing the median value of Facebook followers per Twitter follower within each cause area, we see significant variation (with the caveat that sample sizes are small).It appears that cause area has a strong correlation with which platform an organisation does best in: all Animal Advocacy orgs had more Facebook followers than Twitter followers, while 5 of 8 Longtermist orgs had the opposite trend.Why such a difference between the cause areas? I have a few suggestions:Anecdotally, Facebook is better than Twitter for harnessing broad audiences with shared interests. Animal Advocacy orgs may have been able to snare the audiences of big groups like Greenpeace and the WWF (3.1M and 3.6M Facebook followers respectively). The other cause areas don’t have such big peers to poach from (Oxfam has 1.1M followers, and Longtermism barely exists beyond EA)It seems plausible that Twitter is simply more popular among Effective Altruists. This would explain why the Animal Advocacy movement, within which EAs are a minority, is an exception here (although EAs are also a minority in Global Health & Poverty)Some Longtermist and Infrastructure orgs have...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Study of EA Orgs’ Social Media, published by Stan Pinsent on January 9, 2023 on The Effective Altruism Forum.SummaryI collected data on the social media accounts of 79 EA-related organisations. Key findings:Orgs have far more followers on Facebook and Twitter than on Instagram. Facebook accounts typically have more followers than Twitter accounts, but this varies between cause areasA number of Longtermism- and Infrastructure-focused orgs have stepped away from social media since 2021On Facebook, posting regularly correlates weakly with having a larger followingOverall, it appears that EA-aligned organisations should consider their cause area within EA when forming a social media strategyMethodologyI checked the Facebook, Twitter and Instagram accounts of each of the orgs on this list of EA-related organisations and collected data on 1) the number of days since last posting and 2) the number of followers. See the footnotes for details on methodology[1] and limitations[2] [Footnotes are included in the main text because the forum is being glitchy].You can find the full dataset here.Data Overview87% of the 79 organisations surveyed have an account on either Facebook, Twitter or InstagramOf the organisations on social media, only 58% had been active in the past weekMost of the organisations use Facebook and Twitter and less than half of them use Instagram:There was a good spread of cause areas in the survey:Cause areaNumber of organisationsAnimal Advocacy12Longtermism15Global Health & Poverty18Infrastructure24Other10Total79Orgs to WatchA number of the organisations stood out for their aptitude on social media. Below are “ten organisations to watch”, with links to their respective accounts:OrgCause areaLinksReasons to watchThe Humane LeagueAnimal AdvocacyRegular posts and diverse content. Strong on all platformsAnimal EthicsFacebook powerhouse. Diverse Instagram content, cross-channel promotionFuture of Life InstituteLongtermismHarnessing clips and quotes from their podcast, highly active on all platformsSightsaversGlobal Health & Poverty40,000 tweets and countingGiveDirectlyPutting a human face on their work on InstagramAbdul Latif Jameel Poverty Action Lab Regular, diverse Twitter contentEvidence ActionEngaging visuals with every post80,000 HoursInfrastructureLonger, regular but infrequent Facebook postsHigh Impact AthletesEyeball-grabbing visuals on InstagramOur World in DataOtherMaking data sing on InstagramFBTIGFBTIGFBTIGFBTIGFBTIGFBTFBTIGFBTIGFBTIGFBTIGFollower DataFacebook or Twitter? It depends on the cause area.65% of organisations had more followers on Facebook than on Twitter. Comparing the median value of Facebook followers per Twitter follower within each cause area, we see significant variation (with the caveat that sample sizes are small).It appears that cause area has a strong correlation with which platform an organisation does best in: all Animal Advocacy orgs had more Facebook followers than Twitter followers, while 5 of 8 Longtermist orgs had the opposite trend.Why such a difference between the cause areas? I have a few suggestions:Anecdotally, Facebook is better than Twitter for harnessing broad audiences with shared interests. Animal Advocacy orgs may have been able to snare the audiences of big groups like Greenpeace and the WWF (3.1M and 3.6M Facebook followers respectively). The other cause areas don’t have such big peers to poach from (Oxfam has 1.1M followers, and Longtermism barely exists beyond EA)It seems plausible that Twitter is simply more popular among Effective Altruists. This would explain why the Animal Advocacy movement, within which EAs are a minority, is an exception here (although EAs are also a minority in Global Health & Poverty)Some Longtermist and Infrastructure orgs have...]]>
Stan Pinsent https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:08 None full 4391
dZs5s8giJ36qtSq4h_EA EA - Dangers of deference by TsviBT Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dangers of deference, published by TsviBT on January 8, 2023 on The Effective Altruism Forum.[Written September 02, 2022. Note: I'm likely to not respond to comments promptly.]Sometimes people defer to other people, e.g. by believing what they say, by following orders, or by adopting intents or stances. In many cases it makes sense to defer, since other people know more than you about many things, and it's useful to share eyes and ears, and coordination and specialization are valuable, and one can "inquisitively defer" to opinions by taking them as challenges to investigate further by trying them out for oneself. But there are major issues with deferring, among which are:Deferral-based opinions don't contain the detailed content that generated the opinions, and therefore can't direct action effectively or update on new evidence correctly. Acting based on deferral-based opinions is discouraging because it's especially not the case that the whole of you can see why the action is good. Acting based on deferral-based opinions to some extent removes the "meaning" of learning new information; if you're just going to defer anyway, it's sort of irrelevant to gain information, and your brain can kind of tell that, so you don't seek information as much. Deference therefore constricts the influx of new information to individuals and groups. A group with many people deferring to others will amplify [information cascades]() by double-triple-quadruple-counting non-deferral-based evidence. A group with many people deferring to others will have mistakenly correlated beliefs and actions, and so will fail to explore many worthwhile possibilities.The deferrer will copy beliefs mistakenly imputed to the deferred-to that would have explained the deferred-to's externally visible behavior. This pushes in the direction opposite to science because science is the way of making beliefs come apart from their pre-theoretical pragmatic implications. Sometimes the deferrer, instead of imputing beliefs to the deferred-to and adopting those beliefs, will adopt the same model-free behavioral stance that the deferred-to has adopted to perform to onlookers, such as pretending to believe something while acting towards no coherent purpose other than to maintain the pretense.If the deferred-to takes actions for PR reasons, e.g. attempting to appear from the outside to hold some belief or intent that they don't actually hold, then the PR might work on the deferrer so that the deferrer systematically adopts the false beliefs and non-held intents performed by the deferred-to (rather than adopting beliefs and intents that would actually explain the deferred-to's actions as part of a coherent worldview and strategy). Allocating resources based on deferral-based opinions potentially opens up niches for non-epistemic processes, such as hype, fraud, and power-grabbing. These dynamics will be amplified when people choose who to defer to according to how much the person is already being deferred to. To the extent that these dynamics increase the general orientation of deference itself, deference recursively amplifies itself.Together, these dynamics make it so that deferral-based opinions are under strong pressure to not function as actual beliefs that can be used to make successful plans and can be ongoingly updated to track reality. So I recommend that peoplekeep these dynamics in mind when deferring, track the difference between believing someone's testimony vs. deferring to beliefs imputed to someone based on their actions vs. adopting non-belief performative stances, and give substantial parliamentary decision-weight to the recommendations made by their expectations about facts-on-the-ground that they can see with their own eyes.Not to throw away arguments or information from other people, or to avoid...]]>
TsviBT https://forum.effectivealtruism.org/posts/dZs5s8giJ36qtSq4h/dangers-of-deference Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dangers of deference, published by TsviBT on January 8, 2023 on The Effective Altruism Forum.[Written September 02, 2022. Note: I'm likely to not respond to comments promptly.]Sometimes people defer to other people, e.g. by believing what they say, by following orders, or by adopting intents or stances. In many cases it makes sense to defer, since other people know more than you about many things, and it's useful to share eyes and ears, and coordination and specialization are valuable, and one can "inquisitively defer" to opinions by taking them as challenges to investigate further by trying them out for oneself. But there are major issues with deferring, among which are:Deferral-based opinions don't contain the detailed content that generated the opinions, and therefore can't direct action effectively or update on new evidence correctly. Acting based on deferral-based opinions is discouraging because it's especially not the case that the whole of you can see why the action is good. Acting based on deferral-based opinions to some extent removes the "meaning" of learning new information; if you're just going to defer anyway, it's sort of irrelevant to gain information, and your brain can kind of tell that, so you don't seek information as much. Deference therefore constricts the influx of new information to individuals and groups. A group with many people deferring to others will amplify [information cascades]() by double-triple-quadruple-counting non-deferral-based evidence. A group with many people deferring to others will have mistakenly correlated beliefs and actions, and so will fail to explore many worthwhile possibilities.The deferrer will copy beliefs mistakenly imputed to the deferred-to that would have explained the deferred-to's externally visible behavior. This pushes in the direction opposite to science because science is the way of making beliefs come apart from their pre-theoretical pragmatic implications. Sometimes the deferrer, instead of imputing beliefs to the deferred-to and adopting those beliefs, will adopt the same model-free behavioral stance that the deferred-to has adopted to perform to onlookers, such as pretending to believe something while acting towards no coherent purpose other than to maintain the pretense.If the deferred-to takes actions for PR reasons, e.g. attempting to appear from the outside to hold some belief or intent that they don't actually hold, then the PR might work on the deferrer so that the deferrer systematically adopts the false beliefs and non-held intents performed by the deferred-to (rather than adopting beliefs and intents that would actually explain the deferred-to's actions as part of a coherent worldview and strategy). Allocating resources based on deferral-based opinions potentially opens up niches for non-epistemic processes, such as hype, fraud, and power-grabbing. These dynamics will be amplified when people choose who to defer to according to how much the person is already being deferred to. To the extent that these dynamics increase the general orientation of deference itself, deference recursively amplifies itself.Together, these dynamics make it so that deferral-based opinions are under strong pressure to not function as actual beliefs that can be used to make successful plans and can be ongoingly updated to track reality. So I recommend that peoplekeep these dynamics in mind when deferring, track the difference between believing someone's testimony vs. deferring to beliefs imputed to someone based on their actions vs. adopting non-belief performative stances, and give substantial parliamentary decision-weight to the recommendations made by their expectations about facts-on-the-ground that they can see with their own eyes.Not to throw away arguments or information from other people, or to avoid...]]>
Mon, 09 Jan 2023 08:37:45 +0000 EA - Dangers of deference by TsviBT Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dangers of deference, published by TsviBT on January 8, 2023 on The Effective Altruism Forum.[Written September 02, 2022. Note: I'm likely to not respond to comments promptly.]Sometimes people defer to other people, e.g. by believing what they say, by following orders, or by adopting intents or stances. In many cases it makes sense to defer, since other people know more than you about many things, and it's useful to share eyes and ears, and coordination and specialization are valuable, and one can "inquisitively defer" to opinions by taking them as challenges to investigate further by trying them out for oneself. But there are major issues with deferring, among which are:Deferral-based opinions don't contain the detailed content that generated the opinions, and therefore can't direct action effectively or update on new evidence correctly. Acting based on deferral-based opinions is discouraging because it's especially not the case that the whole of you can see why the action is good. Acting based on deferral-based opinions to some extent removes the "meaning" of learning new information; if you're just going to defer anyway, it's sort of irrelevant to gain information, and your brain can kind of tell that, so you don't seek information as much. Deference therefore constricts the influx of new information to individuals and groups. A group with many people deferring to others will amplify [information cascades]() by double-triple-quadruple-counting non-deferral-based evidence. A group with many people deferring to others will have mistakenly correlated beliefs and actions, and so will fail to explore many worthwhile possibilities.The deferrer will copy beliefs mistakenly imputed to the deferred-to that would have explained the deferred-to's externally visible behavior. This pushes in the direction opposite to science because science is the way of making beliefs come apart from their pre-theoretical pragmatic implications. Sometimes the deferrer, instead of imputing beliefs to the deferred-to and adopting those beliefs, will adopt the same model-free behavioral stance that the deferred-to has adopted to perform to onlookers, such as pretending to believe something while acting towards no coherent purpose other than to maintain the pretense.If the deferred-to takes actions for PR reasons, e.g. attempting to appear from the outside to hold some belief or intent that they don't actually hold, then the PR might work on the deferrer so that the deferrer systematically adopts the false beliefs and non-held intents performed by the deferred-to (rather than adopting beliefs and intents that would actually explain the deferred-to's actions as part of a coherent worldview and strategy). Allocating resources based on deferral-based opinions potentially opens up niches for non-epistemic processes, such as hype, fraud, and power-grabbing. These dynamics will be amplified when people choose who to defer to according to how much the person is already being deferred to. To the extent that these dynamics increase the general orientation of deference itself, deference recursively amplifies itself.Together, these dynamics make it so that deferral-based opinions are under strong pressure to not function as actual beliefs that can be used to make successful plans and can be ongoingly updated to track reality. So I recommend that peoplekeep these dynamics in mind when deferring, track the difference between believing someone's testimony vs. deferring to beliefs imputed to someone based on their actions vs. adopting non-belief performative stances, and give substantial parliamentary decision-weight to the recommendations made by their expectations about facts-on-the-ground that they can see with their own eyes.Not to throw away arguments or information from other people, or to avoid...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dangers of deference, published by TsviBT on January 8, 2023 on The Effective Altruism Forum.[Written September 02, 2022. Note: I'm likely to not respond to comments promptly.]Sometimes people defer to other people, e.g. by believing what they say, by following orders, or by adopting intents or stances. In many cases it makes sense to defer, since other people know more than you about many things, and it's useful to share eyes and ears, and coordination and specialization are valuable, and one can "inquisitively defer" to opinions by taking them as challenges to investigate further by trying them out for oneself. But there are major issues with deferring, among which are:Deferral-based opinions don't contain the detailed content that generated the opinions, and therefore can't direct action effectively or update on new evidence correctly. Acting based on deferral-based opinions is discouraging because it's especially not the case that the whole of you can see why the action is good. Acting based on deferral-based opinions to some extent removes the "meaning" of learning new information; if you're just going to defer anyway, it's sort of irrelevant to gain information, and your brain can kind of tell that, so you don't seek information as much. Deference therefore constricts the influx of new information to individuals and groups. A group with many people deferring to others will amplify [information cascades]() by double-triple-quadruple-counting non-deferral-based evidence. A group with many people deferring to others will have mistakenly correlated beliefs and actions, and so will fail to explore many worthwhile possibilities.The deferrer will copy beliefs mistakenly imputed to the deferred-to that would have explained the deferred-to's externally visible behavior. This pushes in the direction opposite to science because science is the way of making beliefs come apart from their pre-theoretical pragmatic implications. Sometimes the deferrer, instead of imputing beliefs to the deferred-to and adopting those beliefs, will adopt the same model-free behavioral stance that the deferred-to has adopted to perform to onlookers, such as pretending to believe something while acting towards no coherent purpose other than to maintain the pretense.If the deferred-to takes actions for PR reasons, e.g. attempting to appear from the outside to hold some belief or intent that they don't actually hold, then the PR might work on the deferrer so that the deferrer systematically adopts the false beliefs and non-held intents performed by the deferred-to (rather than adopting beliefs and intents that would actually explain the deferred-to's actions as part of a coherent worldview and strategy). Allocating resources based on deferral-based opinions potentially opens up niches for non-epistemic processes, such as hype, fraud, and power-grabbing. These dynamics will be amplified when people choose who to defer to according to how much the person is already being deferred to. To the extent that these dynamics increase the general orientation of deference itself, deference recursively amplifies itself.Together, these dynamics make it so that deferral-based opinions are under strong pressure to not function as actual beliefs that can be used to make successful plans and can be ongoingly updated to track reality. So I recommend that peoplekeep these dynamics in mind when deferring, track the difference between believing someone's testimony vs. deferring to beliefs imputed to someone based on their actions vs. adopting non-belief performative stances, and give substantial parliamentary decision-weight to the recommendations made by their expectations about facts-on-the-ground that they can see with their own eyes.Not to throw away arguments or information from other people, or to avoid...]]>
TsviBT https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:36 None full 4385
fxDhSN5qJfYai5zs9_EA EA - Moral Weights according to EA Orgs by Simon M Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moral Weights according to EA Orgs, published by Simon M on January 8, 2023 on The Effective Altruism Forum.This post was motivated by SoGive's moral weights being (to a first check) quite different to Founders Pledge (FP) and Happier Lives Institute (HLI). Upon checking in more detail, this appears to be the largest discrepency across any organisation. (Although we are still waiting to find out many missing values in the grid as HLI's research is ongoing).SummaryGiveWellFPHLISoGiveWELLBY-0.530.55-1 income doubling for 1 year111 11 year of severe depression~1.51 ()1.28 0.71-1.42 41 additional year of life2.301.95-2.8 to 2.91 ()-1 death under 5117.7123.2-1001 death over 583.683.7-100Broadly all organisations (with the exception of SoGive's view on depression) are very much aligned.() means I expect the organisation would not endorse the figures used here. In the case of GiveWell my best guess is this is roughly inline with what they would use. For Happier Lives Institute it is an upper bound I expect they will be far below when they finish their research.DetailsOpen PhilanthropyOpen Phil's summary of their moral weights is very clear and interesting, but:For now, in order to be more consistent in our practices, we’re going to defer to GiveWell and start to use the number of DALYs that would be implied by extrapolating their moral weights.I have left them off here, as I would just be duplicating the GiveWell numbers.GiveWellGiveWell's weights are sourced from here. I have made a few small calculations to align these numbers with the other orgs.Founders PledgeFounders Pledge's moral weights are avaiable here.Happier Lives InstituteUnfortunately, their moral weights are still in the process of being generated. You can determine the range of weights they will use in future in their article The elephant in the bednet.SoGiveSoGive's weights can be found here. I have used them verbatimThis is a calculation. A 100% increase in income/consumption is worth 1.27 / 0.69 = 1.86 WELLBYs (in HLI terms). (See inputs tab C25) We want this to be 1-unit, so we take 1/1.86 = 0.55 to be a WELLBY and other numbers are calculated from this.GiveWell has a strong aversion to disability weights used blindly, so take this number with a grain of salt.Founders Pledge don't explicitly include depression in their data. I have used the disability weights they used in their public CEA of StrongMinds. I am under the impression they are working to move towards HLI's model for this.This is also a calculation. HLI are inconsistent in how they calculate the impact of depression in WELLBYs. Here they say depression is worth 1.3 WELLBYs. (So 1.3 0.55 = 0.71) in units of income doubling. One potential explanation is that "depression" is less severe than "severe depression" so potentially this number could be doubled - they estimate the effect of StrongMinds to be ~1.8 WELLBYs)GiveWell uses a metric "Years lived with disease/disability" which as far as I can tell is equivalent to "value of averting 1 year of death".As mentioned above HLI are still in the process of deciding what their moral weights are. I am taking the upperbound of their deprevationist model, the highest number it could be. The highest number is a deprevationist model of losing 4.95 WELLBY. (4.95 - 0). The lowest number is the same model using a neutral point of 10. "would seem unintuitive to most, but relates to tranquilism and minimalist axiologies" (See inputs tab C18)I have taken the average of "death averted from malaria" and "death averted from vitamin A". The numbers are similar and I don't think material to the analysis here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Simon M https://forum.effectivealtruism.org/posts/fxDhSN5qJfYai5zs9/moral-weights-according-to-ea-orgs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moral Weights according to EA Orgs, published by Simon M on January 8, 2023 on The Effective Altruism Forum.This post was motivated by SoGive's moral weights being (to a first check) quite different to Founders Pledge (FP) and Happier Lives Institute (HLI). Upon checking in more detail, this appears to be the largest discrepency across any organisation. (Although we are still waiting to find out many missing values in the grid as HLI's research is ongoing).SummaryGiveWellFPHLISoGiveWELLBY-0.530.55-1 income doubling for 1 year111 11 year of severe depression~1.51 ()1.28 0.71-1.42 41 additional year of life2.301.95-2.8 to 2.91 ()-1 death under 5117.7123.2-1001 death over 583.683.7-100Broadly all organisations (with the exception of SoGive's view on depression) are very much aligned.() means I expect the organisation would not endorse the figures used here. In the case of GiveWell my best guess is this is roughly inline with what they would use. For Happier Lives Institute it is an upper bound I expect they will be far below when they finish their research.DetailsOpen PhilanthropyOpen Phil's summary of their moral weights is very clear and interesting, but:For now, in order to be more consistent in our practices, we’re going to defer to GiveWell and start to use the number of DALYs that would be implied by extrapolating their moral weights.I have left them off here, as I would just be duplicating the GiveWell numbers.GiveWellGiveWell's weights are sourced from here. I have made a few small calculations to align these numbers with the other orgs.Founders PledgeFounders Pledge's moral weights are avaiable here.Happier Lives InstituteUnfortunately, their moral weights are still in the process of being generated. You can determine the range of weights they will use in future in their article The elephant in the bednet.SoGiveSoGive's weights can be found here. I have used them verbatimThis is a calculation. A 100% increase in income/consumption is worth 1.27 / 0.69 = 1.86 WELLBYs (in HLI terms). (See inputs tab C25) We want this to be 1-unit, so we take 1/1.86 = 0.55 to be a WELLBY and other numbers are calculated from this.GiveWell has a strong aversion to disability weights used blindly, so take this number with a grain of salt.Founders Pledge don't explicitly include depression in their data. I have used the disability weights they used in their public CEA of StrongMinds. I am under the impression they are working to move towards HLI's model for this.This is also a calculation. HLI are inconsistent in how they calculate the impact of depression in WELLBYs. Here they say depression is worth 1.3 WELLBYs. (So 1.3 0.55 = 0.71) in units of income doubling. One potential explanation is that "depression" is less severe than "severe depression" so potentially this number could be doubled - they estimate the effect of StrongMinds to be ~1.8 WELLBYs)GiveWell uses a metric "Years lived with disease/disability" which as far as I can tell is equivalent to "value of averting 1 year of death".As mentioned above HLI are still in the process of deciding what their moral weights are. I am taking the upperbound of their deprevationist model, the highest number it could be. The highest number is a deprevationist model of losing 4.95 WELLBY. (4.95 - 0). The lowest number is the same model using a neutral point of 10. "would seem unintuitive to most, but relates to tranquilism and minimalist axiologies" (See inputs tab C18)I have taken the average of "death averted from malaria" and "death averted from vitamin A". The numbers are similar and I don't think material to the analysis here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 08 Jan 2023 17:58:24 +0000 EA - Moral Weights according to EA Orgs by Simon M Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moral Weights according to EA Orgs, published by Simon M on January 8, 2023 on The Effective Altruism Forum.This post was motivated by SoGive's moral weights being (to a first check) quite different to Founders Pledge (FP) and Happier Lives Institute (HLI). Upon checking in more detail, this appears to be the largest discrepency across any organisation. (Although we are still waiting to find out many missing values in the grid as HLI's research is ongoing).SummaryGiveWellFPHLISoGiveWELLBY-0.530.55-1 income doubling for 1 year111 11 year of severe depression~1.51 ()1.28 0.71-1.42 41 additional year of life2.301.95-2.8 to 2.91 ()-1 death under 5117.7123.2-1001 death over 583.683.7-100Broadly all organisations (with the exception of SoGive's view on depression) are very much aligned.() means I expect the organisation would not endorse the figures used here. In the case of GiveWell my best guess is this is roughly inline with what they would use. For Happier Lives Institute it is an upper bound I expect they will be far below when they finish their research.DetailsOpen PhilanthropyOpen Phil's summary of their moral weights is very clear and interesting, but:For now, in order to be more consistent in our practices, we’re going to defer to GiveWell and start to use the number of DALYs that would be implied by extrapolating their moral weights.I have left them off here, as I would just be duplicating the GiveWell numbers.GiveWellGiveWell's weights are sourced from here. I have made a few small calculations to align these numbers with the other orgs.Founders PledgeFounders Pledge's moral weights are avaiable here.Happier Lives InstituteUnfortunately, their moral weights are still in the process of being generated. You can determine the range of weights they will use in future in their article The elephant in the bednet.SoGiveSoGive's weights can be found here. I have used them verbatimThis is a calculation. A 100% increase in income/consumption is worth 1.27 / 0.69 = 1.86 WELLBYs (in HLI terms). (See inputs tab C25) We want this to be 1-unit, so we take 1/1.86 = 0.55 to be a WELLBY and other numbers are calculated from this.GiveWell has a strong aversion to disability weights used blindly, so take this number with a grain of salt.Founders Pledge don't explicitly include depression in their data. I have used the disability weights they used in their public CEA of StrongMinds. I am under the impression they are working to move towards HLI's model for this.This is also a calculation. HLI are inconsistent in how they calculate the impact of depression in WELLBYs. Here they say depression is worth 1.3 WELLBYs. (So 1.3 0.55 = 0.71) in units of income doubling. One potential explanation is that "depression" is less severe than "severe depression" so potentially this number could be doubled - they estimate the effect of StrongMinds to be ~1.8 WELLBYs)GiveWell uses a metric "Years lived with disease/disability" which as far as I can tell is equivalent to "value of averting 1 year of death".As mentioned above HLI are still in the process of deciding what their moral weights are. I am taking the upperbound of their deprevationist model, the highest number it could be. The highest number is a deprevationist model of losing 4.95 WELLBY. (4.95 - 0). The lowest number is the same model using a neutral point of 10. "would seem unintuitive to most, but relates to tranquilism and minimalist axiologies" (See inputs tab C18)I have taken the average of "death averted from malaria" and "death averted from vitamin A". The numbers are similar and I don't think material to the analysis here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moral Weights according to EA Orgs, published by Simon M on January 8, 2023 on The Effective Altruism Forum.This post was motivated by SoGive's moral weights being (to a first check) quite different to Founders Pledge (FP) and Happier Lives Institute (HLI). Upon checking in more detail, this appears to be the largest discrepency across any organisation. (Although we are still waiting to find out many missing values in the grid as HLI's research is ongoing).SummaryGiveWellFPHLISoGiveWELLBY-0.530.55-1 income doubling for 1 year111 11 year of severe depression~1.51 ()1.28 0.71-1.42 41 additional year of life2.301.95-2.8 to 2.91 ()-1 death under 5117.7123.2-1001 death over 583.683.7-100Broadly all organisations (with the exception of SoGive's view on depression) are very much aligned.() means I expect the organisation would not endorse the figures used here. In the case of GiveWell my best guess is this is roughly inline with what they would use. For Happier Lives Institute it is an upper bound I expect they will be far below when they finish their research.DetailsOpen PhilanthropyOpen Phil's summary of their moral weights is very clear and interesting, but:For now, in order to be more consistent in our practices, we’re going to defer to GiveWell and start to use the number of DALYs that would be implied by extrapolating their moral weights.I have left them off here, as I would just be duplicating the GiveWell numbers.GiveWellGiveWell's weights are sourced from here. I have made a few small calculations to align these numbers with the other orgs.Founders PledgeFounders Pledge's moral weights are avaiable here.Happier Lives InstituteUnfortunately, their moral weights are still in the process of being generated. You can determine the range of weights they will use in future in their article The elephant in the bednet.SoGiveSoGive's weights can be found here. I have used them verbatimThis is a calculation. A 100% increase in income/consumption is worth 1.27 / 0.69 = 1.86 WELLBYs (in HLI terms). (See inputs tab C25) We want this to be 1-unit, so we take 1/1.86 = 0.55 to be a WELLBY and other numbers are calculated from this.GiveWell has a strong aversion to disability weights used blindly, so take this number with a grain of salt.Founders Pledge don't explicitly include depression in their data. I have used the disability weights they used in their public CEA of StrongMinds. I am under the impression they are working to move towards HLI's model for this.This is also a calculation. HLI are inconsistent in how they calculate the impact of depression in WELLBYs. Here they say depression is worth 1.3 WELLBYs. (So 1.3 0.55 = 0.71) in units of income doubling. One potential explanation is that "depression" is less severe than "severe depression" so potentially this number could be doubled - they estimate the effect of StrongMinds to be ~1.8 WELLBYs)GiveWell uses a metric "Years lived with disease/disability" which as far as I can tell is equivalent to "value of averting 1 year of death".As mentioned above HLI are still in the process of deciding what their moral weights are. I am taking the upperbound of their deprevationist model, the highest number it could be. The highest number is a deprevationist model of losing 4.95 WELLBY. (4.95 - 0). The lowest number is the same model using a neutral point of 10. "would seem unintuitive to most, but relates to tranquilism and minimalist axiologies" (See inputs tab C18)I have taken the average of "death averted from malaria" and "death averted from vitamin A". The numbers are similar and I don't think material to the analysis here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Simon M https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:07 None full 4382
Rnga2XRJzeYypyXDt_EA EA - Learning as much Deep Learning math as I could in 24 hours by Phosphorous Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning as much Deep Learning math as I could in 24 hours, published by Phosphorous on January 8, 2023 on The Effective Altruism Forum.TL:DR I designed an experiment where I committed to spend two 12 hour days trying to learn as much deep-learning math as possible, basically from scratch.Table of ContentsOrigins and MotivationsResultsTakeawaysExperiment set-upThe CurriculumDocumentation on hoursOrigins and MotivationsFor a long time, I’ve felt intimidated by the technical aspects of alignment research. I had never taken classes on linear algebra or multivariable calculus or deep learning, and when I cracked open many AI papers, I was terrified by symbols and words I didn’t understand.7 months ago I wrote up a short doc about how I was going to remedy my lack of technical knowledge: I collected some textbooks and some online courses, and I decided to hire a tutor to meet a few hours a week. I had the first two weeks of meetings, it was awesome, then regular meetings got disrupted by travel, and I never came back to it.When I thought about my accumulating debt of technical knowledge, my cached answer was “Oh, that might take six months to get up to speed. I don’t have the time.”Then, watching my productivity on other projects over the intervening months, I noticed two things:There appeared to be massive variance in my productivity. Sometimes, in a single day, I would get more done than I had accomplished in previous weeks.I seemed to both enjoy and get more done by “sprinting” through certain projects, eg. by spending 10 hours on it in a single day, rather than spreading that same work out over 2 hours a week for 5 weeks. It was, for some reason, way more motivating and seemingly more efficient to sprint.Also, when I asked myself what I thought the main bottlenecks were for addressing my technical debt problem, I identified two categories:Time (I felt busy all the time, and was afraid of committing too much to one project)A combination of lacking Motivation, Accountability and FunThen, as my mind wandered, I started to put 2 and 2 together: Perhaps these new things I had noticed about my productivity, could be used to address the bottlenecks in my technical debt? I decided to embark on an experiment: how much technical background on deep learning could I learn in a single weekend? My understanding of the benefits of this experiment were as follows:Committing “a weekend” felt like a much smaller time cost than committing “a few months”, even if they were the same number of hours.No Distraction: I could design my environment to minimize distractions for two days, something it would be intractable to do to the same degree for several months.“Trying to learn as much as possible” felt like a challenge. I was, to be honest, pretty scared. I didn’t know what I was doing, it felt extreme, but that also made it exciting and fun.I had some historical data that I might be good at this kind of sprinting, and framing this as an experiment to see what I could learn about my productivity added another layer of discovery-driven motivation and fun. What if I learned more about how to be productive and get hard things done via this experiment?As far as I knew, nobody else among my peers had done this - but I suspected that more people than me had the same problems, and that if I conducted this experiment, I might learn things that would be helpful to others, which added yet another layer of discovery-driven motivation and fun.Accountability: Once I told somebody about this, it was hard to back out. It’s way easier for them to monitor me for a weekend than for a few months.ResultsI’d consider the experiment a success: I finished the whole curriculum in ~18 hours, and I got a lot of neat take-aways I’ll go over below.At the end of day 1, after 12 hours of cram...]]>
Phosphorous https://forum.effectivealtruism.org/posts/Rnga2XRJzeYypyXDt/learning-as-much-deep-learning-math-as-i-could-in-24-hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning as much Deep Learning math as I could in 24 hours, published by Phosphorous on January 8, 2023 on The Effective Altruism Forum.TL:DR I designed an experiment where I committed to spend two 12 hour days trying to learn as much deep-learning math as possible, basically from scratch.Table of ContentsOrigins and MotivationsResultsTakeawaysExperiment set-upThe CurriculumDocumentation on hoursOrigins and MotivationsFor a long time, I’ve felt intimidated by the technical aspects of alignment research. I had never taken classes on linear algebra or multivariable calculus or deep learning, and when I cracked open many AI papers, I was terrified by symbols and words I didn’t understand.7 months ago I wrote up a short doc about how I was going to remedy my lack of technical knowledge: I collected some textbooks and some online courses, and I decided to hire a tutor to meet a few hours a week. I had the first two weeks of meetings, it was awesome, then regular meetings got disrupted by travel, and I never came back to it.When I thought about my accumulating debt of technical knowledge, my cached answer was “Oh, that might take six months to get up to speed. I don’t have the time.”Then, watching my productivity on other projects over the intervening months, I noticed two things:There appeared to be massive variance in my productivity. Sometimes, in a single day, I would get more done than I had accomplished in previous weeks.I seemed to both enjoy and get more done by “sprinting” through certain projects, eg. by spending 10 hours on it in a single day, rather than spreading that same work out over 2 hours a week for 5 weeks. It was, for some reason, way more motivating and seemingly more efficient to sprint.Also, when I asked myself what I thought the main bottlenecks were for addressing my technical debt problem, I identified two categories:Time (I felt busy all the time, and was afraid of committing too much to one project)A combination of lacking Motivation, Accountability and FunThen, as my mind wandered, I started to put 2 and 2 together: Perhaps these new things I had noticed about my productivity, could be used to address the bottlenecks in my technical debt? I decided to embark on an experiment: how much technical background on deep learning could I learn in a single weekend? My understanding of the benefits of this experiment were as follows:Committing “a weekend” felt like a much smaller time cost than committing “a few months”, even if they were the same number of hours.No Distraction: I could design my environment to minimize distractions for two days, something it would be intractable to do to the same degree for several months.“Trying to learn as much as possible” felt like a challenge. I was, to be honest, pretty scared. I didn’t know what I was doing, it felt extreme, but that also made it exciting and fun.I had some historical data that I might be good at this kind of sprinting, and framing this as an experiment to see what I could learn about my productivity added another layer of discovery-driven motivation and fun. What if I learned more about how to be productive and get hard things done via this experiment?As far as I knew, nobody else among my peers had done this - but I suspected that more people than me had the same problems, and that if I conducted this experiment, I might learn things that would be helpful to others, which added yet another layer of discovery-driven motivation and fun.Accountability: Once I told somebody about this, it was hard to back out. It’s way easier for them to monitor me for a weekend than for a few months.ResultsI’d consider the experiment a success: I finished the whole curriculum in ~18 hours, and I got a lot of neat take-aways I’ll go over below.At the end of day 1, after 12 hours of cram...]]>
Sun, 08 Jan 2023 14:10:08 +0000 EA - Learning as much Deep Learning math as I could in 24 hours by Phosphorous Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning as much Deep Learning math as I could in 24 hours, published by Phosphorous on January 8, 2023 on The Effective Altruism Forum.TL:DR I designed an experiment where I committed to spend two 12 hour days trying to learn as much deep-learning math as possible, basically from scratch.Table of ContentsOrigins and MotivationsResultsTakeawaysExperiment set-upThe CurriculumDocumentation on hoursOrigins and MotivationsFor a long time, I’ve felt intimidated by the technical aspects of alignment research. I had never taken classes on linear algebra or multivariable calculus or deep learning, and when I cracked open many AI papers, I was terrified by symbols and words I didn’t understand.7 months ago I wrote up a short doc about how I was going to remedy my lack of technical knowledge: I collected some textbooks and some online courses, and I decided to hire a tutor to meet a few hours a week. I had the first two weeks of meetings, it was awesome, then regular meetings got disrupted by travel, and I never came back to it.When I thought about my accumulating debt of technical knowledge, my cached answer was “Oh, that might take six months to get up to speed. I don’t have the time.”Then, watching my productivity on other projects over the intervening months, I noticed two things:There appeared to be massive variance in my productivity. Sometimes, in a single day, I would get more done than I had accomplished in previous weeks.I seemed to both enjoy and get more done by “sprinting” through certain projects, eg. by spending 10 hours on it in a single day, rather than spreading that same work out over 2 hours a week for 5 weeks. It was, for some reason, way more motivating and seemingly more efficient to sprint.Also, when I asked myself what I thought the main bottlenecks were for addressing my technical debt problem, I identified two categories:Time (I felt busy all the time, and was afraid of committing too much to one project)A combination of lacking Motivation, Accountability and FunThen, as my mind wandered, I started to put 2 and 2 together: Perhaps these new things I had noticed about my productivity, could be used to address the bottlenecks in my technical debt? I decided to embark on an experiment: how much technical background on deep learning could I learn in a single weekend? My understanding of the benefits of this experiment were as follows:Committing “a weekend” felt like a much smaller time cost than committing “a few months”, even if they were the same number of hours.No Distraction: I could design my environment to minimize distractions for two days, something it would be intractable to do to the same degree for several months.“Trying to learn as much as possible” felt like a challenge. I was, to be honest, pretty scared. I didn’t know what I was doing, it felt extreme, but that also made it exciting and fun.I had some historical data that I might be good at this kind of sprinting, and framing this as an experiment to see what I could learn about my productivity added another layer of discovery-driven motivation and fun. What if I learned more about how to be productive and get hard things done via this experiment?As far as I knew, nobody else among my peers had done this - but I suspected that more people than me had the same problems, and that if I conducted this experiment, I might learn things that would be helpful to others, which added yet another layer of discovery-driven motivation and fun.Accountability: Once I told somebody about this, it was hard to back out. It’s way easier for them to monitor me for a weekend than for a few months.ResultsI’d consider the experiment a success: I finished the whole curriculum in ~18 hours, and I got a lot of neat take-aways I’ll go over below.At the end of day 1, after 12 hours of cram...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning as much Deep Learning math as I could in 24 hours, published by Phosphorous on January 8, 2023 on The Effective Altruism Forum.TL:DR I designed an experiment where I committed to spend two 12 hour days trying to learn as much deep-learning math as possible, basically from scratch.Table of ContentsOrigins and MotivationsResultsTakeawaysExperiment set-upThe CurriculumDocumentation on hoursOrigins and MotivationsFor a long time, I’ve felt intimidated by the technical aspects of alignment research. I had never taken classes on linear algebra or multivariable calculus or deep learning, and when I cracked open many AI papers, I was terrified by symbols and words I didn’t understand.7 months ago I wrote up a short doc about how I was going to remedy my lack of technical knowledge: I collected some textbooks and some online courses, and I decided to hire a tutor to meet a few hours a week. I had the first two weeks of meetings, it was awesome, then regular meetings got disrupted by travel, and I never came back to it.When I thought about my accumulating debt of technical knowledge, my cached answer was “Oh, that might take six months to get up to speed. I don’t have the time.”Then, watching my productivity on other projects over the intervening months, I noticed two things:There appeared to be massive variance in my productivity. Sometimes, in a single day, I would get more done than I had accomplished in previous weeks.I seemed to both enjoy and get more done by “sprinting” through certain projects, eg. by spending 10 hours on it in a single day, rather than spreading that same work out over 2 hours a week for 5 weeks. It was, for some reason, way more motivating and seemingly more efficient to sprint.Also, when I asked myself what I thought the main bottlenecks were for addressing my technical debt problem, I identified two categories:Time (I felt busy all the time, and was afraid of committing too much to one project)A combination of lacking Motivation, Accountability and FunThen, as my mind wandered, I started to put 2 and 2 together: Perhaps these new things I had noticed about my productivity, could be used to address the bottlenecks in my technical debt? I decided to embark on an experiment: how much technical background on deep learning could I learn in a single weekend? My understanding of the benefits of this experiment were as follows:Committing “a weekend” felt like a much smaller time cost than committing “a few months”, even if they were the same number of hours.No Distraction: I could design my environment to minimize distractions for two days, something it would be intractable to do to the same degree for several months.“Trying to learn as much as possible” felt like a challenge. I was, to be honest, pretty scared. I didn’t know what I was doing, it felt extreme, but that also made it exciting and fun.I had some historical data that I might be good at this kind of sprinting, and framing this as an experiment to see what I could learn about my productivity added another layer of discovery-driven motivation and fun. What if I learned more about how to be productive and get hard things done via this experiment?As far as I knew, nobody else among my peers had done this - but I suspected that more people than me had the same problems, and that if I conducted this experiment, I might learn things that would be helpful to others, which added yet another layer of discovery-driven motivation and fun.Accountability: Once I told somebody about this, it was hard to back out. It’s way easier for them to monitor me for a weekend than for a few months.ResultsI’d consider the experiment a success: I finished the whole curriculum in ~18 hours, and I got a lot of neat take-aways I’ll go over below.At the end of day 1, after 12 hours of cram...]]>
Phosphorous https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:05 None full 4383
zxSdBkN6cggkE8vv6_EA EA - EA Germany's Strategy for 2023 by Sarah Tegeler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Germany's Strategy for 2023, published by Sarah Tegeler on January 8, 2023 on The Effective Altruism Forum.Based on interviews with stakeholders, feedback from German community members and other national community builders, the new co-directors of EA Germany (EAD) drafted this strategy for 2023.SummaryOur Vision is a diverse and resilient community of ambitious people in and from Germany who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.Our Mission is to serve as a central point of contact for the German EA community and to continuously improve ways to guide people to take effective action, directly or by supporting local groups.Our Values are sustainable growth, a welcoming and nurturing culture and high professional standards.Our Focus Areas:EAD aims to guide people in Germany directly and indirectly to more impactful actions:Directly, e.g. through communications and events such as an EAGxBerlin 2023, career 1-1s, fellowships or retreats.Indirectly by training community builders, e.g. through regular calls, 1-1s and German-specific resources.EAD will offer efficiency services to save time and costs for committed EAs acting as an employer of record for individual grantees and providing fiscal sponsorship for local groups.A methodology of impact estimation based on a multi-touchpoint attribution model will serve as a basis for designing and prioritising exploratory programs using a lean startup approach.BackgroundEA Germany (EAD)In the 2020 EA Survey, 7.4% of participants were from Germany, the third largest population behind the US and the UK. Apart from the US, Germany has the largest population and GNI of the ten largest countries in the survey. Germany has about 50 volunteer community builders in 25 local / university groups. 458 people have taken the GWWC pledge, and more than 400 Germans visited EAGxBerlin in September 2022. In 2021, Effektiv Spenden raised 18.86 Mio. Euros for effective charities.The registered association EA Germany was founded in 2019 as Netzwerk für Effektiven Altruismus Deutschland e.V. (NEAD) by EAs in Germany and has a board of volunteers. In parallel, one person on a national CEA Community Building Grant (CBG) worked independently from the association from 2020-22. The German website effektiveraltruismus.de was run by the national regranting organisation Effektiv Spenden. In late 2021, NEAD started offering employer-of-record services to grantees and EA organisations as well as fiscal sponsorship for local groups and hired a part-time operations associate on a CBG.A new board was elected in May 2022 and decided to apply for three CBGs – two co-directors and one project manager, in addition to the operations associate. The co-directors started in September and November 2022, and the project manager will start in January 2023. Funding for two other roles was promised but is not finalised as of December 2022. The association was renamed Effektiver Altruismus Deutschland (EAD) e.V. in 2022, and will now also run the website effektiveraltruismus.de.Epistemic StatusSarah Tegeler and Patrick Gruban drafted this document in November 2022 after having started working together as co-directors in the same month. Both have volunteered as local community builders, but this is their first role in an EA organisation. Most of the work on this document was influenced by interviews with stakeholders, other national EA community builders and reviews of different national strategies. About 150-200 hours went into discussing and writing the strategy.While the authors are confident the strategy will help foster a healthy community, their overall epistemic status is uncertain about the organisation’s and its programs’ counterfactual impact. Thus, many areas are not listed under founda...]]>
Sarah Tegeler https://forum.effectivealtruism.org/posts/zxSdBkN6cggkE8vv6/ea-germany-s-strategy-for-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Germany's Strategy for 2023, published by Sarah Tegeler on January 8, 2023 on The Effective Altruism Forum.Based on interviews with stakeholders, feedback from German community members and other national community builders, the new co-directors of EA Germany (EAD) drafted this strategy for 2023.SummaryOur Vision is a diverse and resilient community of ambitious people in and from Germany who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.Our Mission is to serve as a central point of contact for the German EA community and to continuously improve ways to guide people to take effective action, directly or by supporting local groups.Our Values are sustainable growth, a welcoming and nurturing culture and high professional standards.Our Focus Areas:EAD aims to guide people in Germany directly and indirectly to more impactful actions:Directly, e.g. through communications and events such as an EAGxBerlin 2023, career 1-1s, fellowships or retreats.Indirectly by training community builders, e.g. through regular calls, 1-1s and German-specific resources.EAD will offer efficiency services to save time and costs for committed EAs acting as an employer of record for individual grantees and providing fiscal sponsorship for local groups.A methodology of impact estimation based on a multi-touchpoint attribution model will serve as a basis for designing and prioritising exploratory programs using a lean startup approach.BackgroundEA Germany (EAD)In the 2020 EA Survey, 7.4% of participants were from Germany, the third largest population behind the US and the UK. Apart from the US, Germany has the largest population and GNI of the ten largest countries in the survey. Germany has about 50 volunteer community builders in 25 local / university groups. 458 people have taken the GWWC pledge, and more than 400 Germans visited EAGxBerlin in September 2022. In 2021, Effektiv Spenden raised 18.86 Mio. Euros for effective charities.The registered association EA Germany was founded in 2019 as Netzwerk für Effektiven Altruismus Deutschland e.V. (NEAD) by EAs in Germany and has a board of volunteers. In parallel, one person on a national CEA Community Building Grant (CBG) worked independently from the association from 2020-22. The German website effektiveraltruismus.de was run by the national regranting organisation Effektiv Spenden. In late 2021, NEAD started offering employer-of-record services to grantees and EA organisations as well as fiscal sponsorship for local groups and hired a part-time operations associate on a CBG.A new board was elected in May 2022 and decided to apply for three CBGs – two co-directors and one project manager, in addition to the operations associate. The co-directors started in September and November 2022, and the project manager will start in January 2023. Funding for two other roles was promised but is not finalised as of December 2022. The association was renamed Effektiver Altruismus Deutschland (EAD) e.V. in 2022, and will now also run the website effektiveraltruismus.de.Epistemic StatusSarah Tegeler and Patrick Gruban drafted this document in November 2022 after having started working together as co-directors in the same month. Both have volunteered as local community builders, but this is their first role in an EA organisation. Most of the work on this document was influenced by interviews with stakeholders, other national EA community builders and reviews of different national strategies. About 150-200 hours went into discussing and writing the strategy.While the authors are confident the strategy will help foster a healthy community, their overall epistemic status is uncertain about the organisation’s and its programs’ counterfactual impact. Thus, many areas are not listed under founda...]]>
Sun, 08 Jan 2023 10:07:01 +0000 EA - EA Germany's Strategy for 2023 by Sarah Tegeler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Germany's Strategy for 2023, published by Sarah Tegeler on January 8, 2023 on The Effective Altruism Forum.Based on interviews with stakeholders, feedback from German community members and other national community builders, the new co-directors of EA Germany (EAD) drafted this strategy for 2023.SummaryOur Vision is a diverse and resilient community of ambitious people in and from Germany who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.Our Mission is to serve as a central point of contact for the German EA community and to continuously improve ways to guide people to take effective action, directly or by supporting local groups.Our Values are sustainable growth, a welcoming and nurturing culture and high professional standards.Our Focus Areas:EAD aims to guide people in Germany directly and indirectly to more impactful actions:Directly, e.g. through communications and events such as an EAGxBerlin 2023, career 1-1s, fellowships or retreats.Indirectly by training community builders, e.g. through regular calls, 1-1s and German-specific resources.EAD will offer efficiency services to save time and costs for committed EAs acting as an employer of record for individual grantees and providing fiscal sponsorship for local groups.A methodology of impact estimation based on a multi-touchpoint attribution model will serve as a basis for designing and prioritising exploratory programs using a lean startup approach.BackgroundEA Germany (EAD)In the 2020 EA Survey, 7.4% of participants were from Germany, the third largest population behind the US and the UK. Apart from the US, Germany has the largest population and GNI of the ten largest countries in the survey. Germany has about 50 volunteer community builders in 25 local / university groups. 458 people have taken the GWWC pledge, and more than 400 Germans visited EAGxBerlin in September 2022. In 2021, Effektiv Spenden raised 18.86 Mio. Euros for effective charities.The registered association EA Germany was founded in 2019 as Netzwerk für Effektiven Altruismus Deutschland e.V. (NEAD) by EAs in Germany and has a board of volunteers. In parallel, one person on a national CEA Community Building Grant (CBG) worked independently from the association from 2020-22. The German website effektiveraltruismus.de was run by the national regranting organisation Effektiv Spenden. In late 2021, NEAD started offering employer-of-record services to grantees and EA organisations as well as fiscal sponsorship for local groups and hired a part-time operations associate on a CBG.A new board was elected in May 2022 and decided to apply for three CBGs – two co-directors and one project manager, in addition to the operations associate. The co-directors started in September and November 2022, and the project manager will start in January 2023. Funding for two other roles was promised but is not finalised as of December 2022. The association was renamed Effektiver Altruismus Deutschland (EAD) e.V. in 2022, and will now also run the website effektiveraltruismus.de.Epistemic StatusSarah Tegeler and Patrick Gruban drafted this document in November 2022 after having started working together as co-directors in the same month. Both have volunteered as local community builders, but this is their first role in an EA organisation. Most of the work on this document was influenced by interviews with stakeholders, other national EA community builders and reviews of different national strategies. About 150-200 hours went into discussing and writing the strategy.While the authors are confident the strategy will help foster a healthy community, their overall epistemic status is uncertain about the organisation’s and its programs’ counterfactual impact. Thus, many areas are not listed under founda...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Germany's Strategy for 2023, published by Sarah Tegeler on January 8, 2023 on The Effective Altruism Forum.Based on interviews with stakeholders, feedback from German community members and other national community builders, the new co-directors of EA Germany (EAD) drafted this strategy for 2023.SummaryOur Vision is a diverse and resilient community of ambitious people in and from Germany who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.Our Mission is to serve as a central point of contact for the German EA community and to continuously improve ways to guide people to take effective action, directly or by supporting local groups.Our Values are sustainable growth, a welcoming and nurturing culture and high professional standards.Our Focus Areas:EAD aims to guide people in Germany directly and indirectly to more impactful actions:Directly, e.g. through communications and events such as an EAGxBerlin 2023, career 1-1s, fellowships or retreats.Indirectly by training community builders, e.g. through regular calls, 1-1s and German-specific resources.EAD will offer efficiency services to save time and costs for committed EAs acting as an employer of record for individual grantees and providing fiscal sponsorship for local groups.A methodology of impact estimation based on a multi-touchpoint attribution model will serve as a basis for designing and prioritising exploratory programs using a lean startup approach.BackgroundEA Germany (EAD)In the 2020 EA Survey, 7.4% of participants were from Germany, the third largest population behind the US and the UK. Apart from the US, Germany has the largest population and GNI of the ten largest countries in the survey. Germany has about 50 volunteer community builders in 25 local / university groups. 458 people have taken the GWWC pledge, and more than 400 Germans visited EAGxBerlin in September 2022. In 2021, Effektiv Spenden raised 18.86 Mio. Euros for effective charities.The registered association EA Germany was founded in 2019 as Netzwerk für Effektiven Altruismus Deutschland e.V. (NEAD) by EAs in Germany and has a board of volunteers. In parallel, one person on a national CEA Community Building Grant (CBG) worked independently from the association from 2020-22. The German website effektiveraltruismus.de was run by the national regranting organisation Effektiv Spenden. In late 2021, NEAD started offering employer-of-record services to grantees and EA organisations as well as fiscal sponsorship for local groups and hired a part-time operations associate on a CBG.A new board was elected in May 2022 and decided to apply for three CBGs – two co-directors and one project manager, in addition to the operations associate. The co-directors started in September and November 2022, and the project manager will start in January 2023. Funding for two other roles was promised but is not finalised as of December 2022. The association was renamed Effektiver Altruismus Deutschland (EAD) e.V. in 2022, and will now also run the website effektiveraltruismus.de.Epistemic StatusSarah Tegeler and Patrick Gruban drafted this document in November 2022 after having started working together as co-directors in the same month. Both have volunteered as local community builders, but this is their first role in an EA organisation. Most of the work on this document was influenced by interviews with stakeholders, other national EA community builders and reviews of different national strategies. About 150-200 hours went into discussing and writing the strategy.While the authors are confident the strategy will help foster a healthy community, their overall epistemic status is uncertain about the organisation’s and its programs’ counterfactual impact. Thus, many areas are not listed under founda...]]>
Sarah Tegeler https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 25:39 None full 4377
LaDGhL8yZuz28rdKG_EA EA - EA university groups are missing out on most of their potential by Johan de Kock Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA university groups are missing out on most of their potential, published by Johan de Kock on January 7, 2023 on The Effective Altruism Forum.The inception of the Purpose and Life Planning TrackContextMany people think that Effective Altruism university groups are incredibly valuable. “To solve pressing global problems — like existential risk, global poverty, and factory farming - we need more talented, ambitious, altruistic people to focus full-time on these issues. Hundreds of thousands of these people are clustered at the world's top universities” (CEA, n.d.).I agree. However, I believe that most EA uni groups, and even the EA community as a whole, are missing out on the majority of their potential to make the most out of this opportunity. This is because the current paradigm for community building emphasises finding talented and ambitious people that want to tackle the world's most pressing problem, and not to create them. This strategy has potentially serious limitations which is preventing us from creating as much counterfactual impact as possible.TL;DR - A summary of the main pointsIf EA university groups want to contribute as well as they can to empowering individuals to tackle the world's most pressing problems, we should not be cherry picking those students who are already naturally inclined to learn more about EA ideas. By only focusing on students who are already interested in EA ideas, we are missing the major opportunity to engage many more ambitious people to work on the world's most pressing problems, if approached from a different angle.Most university students are very young adults. Many are ambitious and conscientious but they are simply not at a point in their life where they have deeply internalised the desire to make doing good a core part of their life; If they don’t decide to join your introduction fellowship, or even drop out, it does not mean that they are not a good fit for EA. The life of university students is changing very quickly and there are many conflicting interests.Before people want to learn more about EA ideas and how to apply them to their lives they must regard this as valuable. Furthermore, before EA ideas can be properly internalised, the proper foundation must be laid.I identify four root causes, particularly for younger adults, that prevent an individual from being naturally inclined to EA ideas.First, people don't understand the link between being happy and doing good. Many people think that pursuing hedonic (feeling-based) happiness is the best way to live a happy life. Eudaimonic happiness (purpose-based happiness) tends to be more effective at this, however. People don’t know this. Making people aware of this difference might change their perspective on life and what they want to prioritise.Second, people often want to find a purpose in life, but it is not clear what that is and how to build one. Purpose consists to a large extent out of using your strengths to make the world a better place.Third, people have not internalised the underlying reasons about why doing good matters. Before somebody can be intrinsically motivated for something they need to understand why it is important and what the underlying reasons are. I think that we can do a lot better as a community to help people internalise these reasons.Four, learning about EA can be intimidating. Many EA ideas go against our evolutionary tendencies, such as prioritising our loved ones. Unless people have built a certain level of psychological and emotional resilience it is likely that taking EA ideas seriously is going to be too demanding.If these four points are addressed effectively it is possible to make a lot more people interested in learning about EA, and applying the ideas to their life (after initially being uninterested in your EA Introducti...]]>
Johan de Kock https://forum.effectivealtruism.org/posts/LaDGhL8yZuz28rdKG/ea-university-groups-are-missing-out-on-most-of-their Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA university groups are missing out on most of their potential, published by Johan de Kock on January 7, 2023 on The Effective Altruism Forum.The inception of the Purpose and Life Planning TrackContextMany people think that Effective Altruism university groups are incredibly valuable. “To solve pressing global problems — like existential risk, global poverty, and factory farming - we need more talented, ambitious, altruistic people to focus full-time on these issues. Hundreds of thousands of these people are clustered at the world's top universities” (CEA, n.d.).I agree. However, I believe that most EA uni groups, and even the EA community as a whole, are missing out on the majority of their potential to make the most out of this opportunity. This is because the current paradigm for community building emphasises finding talented and ambitious people that want to tackle the world's most pressing problem, and not to create them. This strategy has potentially serious limitations which is preventing us from creating as much counterfactual impact as possible.TL;DR - A summary of the main pointsIf EA university groups want to contribute as well as they can to empowering individuals to tackle the world's most pressing problems, we should not be cherry picking those students who are already naturally inclined to learn more about EA ideas. By only focusing on students who are already interested in EA ideas, we are missing the major opportunity to engage many more ambitious people to work on the world's most pressing problems, if approached from a different angle.Most university students are very young adults. Many are ambitious and conscientious but they are simply not at a point in their life where they have deeply internalised the desire to make doing good a core part of their life; If they don’t decide to join your introduction fellowship, or even drop out, it does not mean that they are not a good fit for EA. The life of university students is changing very quickly and there are many conflicting interests.Before people want to learn more about EA ideas and how to apply them to their lives they must regard this as valuable. Furthermore, before EA ideas can be properly internalised, the proper foundation must be laid.I identify four root causes, particularly for younger adults, that prevent an individual from being naturally inclined to EA ideas.First, people don't understand the link between being happy and doing good. Many people think that pursuing hedonic (feeling-based) happiness is the best way to live a happy life. Eudaimonic happiness (purpose-based happiness) tends to be more effective at this, however. People don’t know this. Making people aware of this difference might change their perspective on life and what they want to prioritise.Second, people often want to find a purpose in life, but it is not clear what that is and how to build one. Purpose consists to a large extent out of using your strengths to make the world a better place.Third, people have not internalised the underlying reasons about why doing good matters. Before somebody can be intrinsically motivated for something they need to understand why it is important and what the underlying reasons are. I think that we can do a lot better as a community to help people internalise these reasons.Four, learning about EA can be intimidating. Many EA ideas go against our evolutionary tendencies, such as prioritising our loved ones. Unless people have built a certain level of psychological and emotional resilience it is likely that taking EA ideas seriously is going to be too demanding.If these four points are addressed effectively it is possible to make a lot more people interested in learning about EA, and applying the ideas to their life (after initially being uninterested in your EA Introducti...]]>
Sat, 07 Jan 2023 22:34:02 +0000 EA - EA university groups are missing out on most of their potential by Johan de Kock Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA university groups are missing out on most of their potential, published by Johan de Kock on January 7, 2023 on The Effective Altruism Forum.The inception of the Purpose and Life Planning TrackContextMany people think that Effective Altruism university groups are incredibly valuable. “To solve pressing global problems — like existential risk, global poverty, and factory farming - we need more talented, ambitious, altruistic people to focus full-time on these issues. Hundreds of thousands of these people are clustered at the world's top universities” (CEA, n.d.).I agree. However, I believe that most EA uni groups, and even the EA community as a whole, are missing out on the majority of their potential to make the most out of this opportunity. This is because the current paradigm for community building emphasises finding talented and ambitious people that want to tackle the world's most pressing problem, and not to create them. This strategy has potentially serious limitations which is preventing us from creating as much counterfactual impact as possible.TL;DR - A summary of the main pointsIf EA university groups want to contribute as well as they can to empowering individuals to tackle the world's most pressing problems, we should not be cherry picking those students who are already naturally inclined to learn more about EA ideas. By only focusing on students who are already interested in EA ideas, we are missing the major opportunity to engage many more ambitious people to work on the world's most pressing problems, if approached from a different angle.Most university students are very young adults. Many are ambitious and conscientious but they are simply not at a point in their life where they have deeply internalised the desire to make doing good a core part of their life; If they don’t decide to join your introduction fellowship, or even drop out, it does not mean that they are not a good fit for EA. The life of university students is changing very quickly and there are many conflicting interests.Before people want to learn more about EA ideas and how to apply them to their lives they must regard this as valuable. Furthermore, before EA ideas can be properly internalised, the proper foundation must be laid.I identify four root causes, particularly for younger adults, that prevent an individual from being naturally inclined to EA ideas.First, people don't understand the link between being happy and doing good. Many people think that pursuing hedonic (feeling-based) happiness is the best way to live a happy life. Eudaimonic happiness (purpose-based happiness) tends to be more effective at this, however. People don’t know this. Making people aware of this difference might change their perspective on life and what they want to prioritise.Second, people often want to find a purpose in life, but it is not clear what that is and how to build one. Purpose consists to a large extent out of using your strengths to make the world a better place.Third, people have not internalised the underlying reasons about why doing good matters. Before somebody can be intrinsically motivated for something they need to understand why it is important and what the underlying reasons are. I think that we can do a lot better as a community to help people internalise these reasons.Four, learning about EA can be intimidating. Many EA ideas go against our evolutionary tendencies, such as prioritising our loved ones. Unless people have built a certain level of psychological and emotional resilience it is likely that taking EA ideas seriously is going to be too demanding.If these four points are addressed effectively it is possible to make a lot more people interested in learning about EA, and applying the ideas to their life (after initially being uninterested in your EA Introducti...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA university groups are missing out on most of their potential, published by Johan de Kock on January 7, 2023 on The Effective Altruism Forum.The inception of the Purpose and Life Planning TrackContextMany people think that Effective Altruism university groups are incredibly valuable. “To solve pressing global problems — like existential risk, global poverty, and factory farming - we need more talented, ambitious, altruistic people to focus full-time on these issues. Hundreds of thousands of these people are clustered at the world's top universities” (CEA, n.d.).I agree. However, I believe that most EA uni groups, and even the EA community as a whole, are missing out on the majority of their potential to make the most out of this opportunity. This is because the current paradigm for community building emphasises finding talented and ambitious people that want to tackle the world's most pressing problem, and not to create them. This strategy has potentially serious limitations which is preventing us from creating as much counterfactual impact as possible.TL;DR - A summary of the main pointsIf EA university groups want to contribute as well as they can to empowering individuals to tackle the world's most pressing problems, we should not be cherry picking those students who are already naturally inclined to learn more about EA ideas. By only focusing on students who are already interested in EA ideas, we are missing the major opportunity to engage many more ambitious people to work on the world's most pressing problems, if approached from a different angle.Most university students are very young adults. Many are ambitious and conscientious but they are simply not at a point in their life where they have deeply internalised the desire to make doing good a core part of their life; If they don’t decide to join your introduction fellowship, or even drop out, it does not mean that they are not a good fit for EA. The life of university students is changing very quickly and there are many conflicting interests.Before people want to learn more about EA ideas and how to apply them to their lives they must regard this as valuable. Furthermore, before EA ideas can be properly internalised, the proper foundation must be laid.I identify four root causes, particularly for younger adults, that prevent an individual from being naturally inclined to EA ideas.First, people don't understand the link between being happy and doing good. Many people think that pursuing hedonic (feeling-based) happiness is the best way to live a happy life. Eudaimonic happiness (purpose-based happiness) tends to be more effective at this, however. People don’t know this. Making people aware of this difference might change their perspective on life and what they want to prioritise.Second, people often want to find a purpose in life, but it is not clear what that is and how to build one. Purpose consists to a large extent out of using your strengths to make the world a better place.Third, people have not internalised the underlying reasons about why doing good matters. Before somebody can be intrinsically motivated for something they need to understand why it is important and what the underlying reasons are. I think that we can do a lot better as a community to help people internalise these reasons.Four, learning about EA can be intimidating. Many EA ideas go against our evolutionary tendencies, such as prioritising our loved ones. Unless people have built a certain level of psychological and emotional resilience it is likely that taking EA ideas seriously is going to be too demanding.If these four points are addressed effectively it is possible to make a lot more people interested in learning about EA, and applying the ideas to their life (after initially being uninterested in your EA Introducti...]]>
Johan de Kock https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 48:09 None full 4378
wn9PkfWWWhpCypep6_EA EA - Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism, published by Ozzie Gooen on January 6, 2023 on The Effective Altruism Forum.Misha and I recently recorded a short discussion about large language models and their uses for effective altruists.This was mostly a regular Zoom meeting, but we added some editing and text transcription. After we wrote up the transcript, both Misha and myself edited our respective sections.I think the final transcript is clearer and contains more information than the original discussion. I might even suggest using text-to-speech on the transcript rather than listening to the original audio. This back-and-forth might seem to ruin the point of presenting the video and audio, but I think it might be straightforwardly more pragmatic.TranscriptSectionsOpeningIntroductionHow do we use LLMs already?Could EAs contributing to applied LLMs be harmful?Potential LLM Application: Management and Emotional AssistancePotential LLM Application: Communication, BroadlyAside: Human-AI-Human CommunicationPotential LLM Application: Decision AutomationPotential LLM Application: EA Forum ImprovementsPotential LLM Application: EvaluationsLLM user interfacesWhat should EAs do with LLMs?OpeningOzzie: Hello. I just did a recording with my friend Misha, an EA researcher at ARB Research. This was a pretty short meeting about large language models and their use by effective altruists. The two of us are pretty excited about the potential for large language models to be used by effective altruists for different kinds of infrastructure.This is an experiment with us presenting videos publicly. Normally, our videos are just Zoom meetings. If anything, the Zoom meetings would be unedited. I found that to be quite a pain. These Zoom meetings typically don't look that great on their own, and they don't sound too terrific. So we've been experimenting with some methods to try to make that a little bit better.I am really curious about what people are going to think about this and am looking forward to what you say. Let's get right into it.IntroductionOzzie: For those watching us, this is Misha and me just having a meeting about large language models and their use for effective altruism.Obviously, large language models have been a very big deal very recently, and now there's a big question about how we could best apply them to EA purposes and what EAs could do best about it. So this is going to be a very quick meeting. We only have about half an hour.Right now, we have about seven topics. The main topic, though, is just the LLM applications.How do we use LLMs already?Ozzie: So, how do we use LLMs already?Misha: I think I use them for roughly 10 minutes on average per day.Sometimes I just ask questions or ask queries like, "Hey, I have these ingredients. What cocktails can I make?" Sometimes I try to converse with them about stuff. Sometimes I just use it (e.g., text-davinci-003) as a source of knowledge. I think it's more suitable for areas where verifiable expertise is rare.Take non-critical medicine, like skincare. I had a chat with it and got some recommendations in this domain, and I think it turned out really well. I previously tried to search for recommendations and asked a few people, but it didn't work.I also use it as an amplification for journaling whenever I'm doing any emotional or self-coaching work. Writing is great. I personally find it much easier to write “as if” I'm writing a message to someone—having ChatGPT obviously helps with that.Having a conversation partner activates some sort of social infrastructure in my brain. Humans solve math problems better when they are framed socially. And yeah, doing it with language models is straightforward and really good. Further, sometimes models give you hints or insights that yo...]]>
Ozzie Gooen https://forum.effectivealtruism.org/posts/wn9PkfWWWhpCypep6/misha-yagudin-and-ozzie-gooen-discuss-llms-and-effective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism, published by Ozzie Gooen on January 6, 2023 on The Effective Altruism Forum.Misha and I recently recorded a short discussion about large language models and their uses for effective altruists.This was mostly a regular Zoom meeting, but we added some editing and text transcription. After we wrote up the transcript, both Misha and myself edited our respective sections.I think the final transcript is clearer and contains more information than the original discussion. I might even suggest using text-to-speech on the transcript rather than listening to the original audio. This back-and-forth might seem to ruin the point of presenting the video and audio, but I think it might be straightforwardly more pragmatic.TranscriptSectionsOpeningIntroductionHow do we use LLMs already?Could EAs contributing to applied LLMs be harmful?Potential LLM Application: Management and Emotional AssistancePotential LLM Application: Communication, BroadlyAside: Human-AI-Human CommunicationPotential LLM Application: Decision AutomationPotential LLM Application: EA Forum ImprovementsPotential LLM Application: EvaluationsLLM user interfacesWhat should EAs do with LLMs?OpeningOzzie: Hello. I just did a recording with my friend Misha, an EA researcher at ARB Research. This was a pretty short meeting about large language models and their use by effective altruists. The two of us are pretty excited about the potential for large language models to be used by effective altruists for different kinds of infrastructure.This is an experiment with us presenting videos publicly. Normally, our videos are just Zoom meetings. If anything, the Zoom meetings would be unedited. I found that to be quite a pain. These Zoom meetings typically don't look that great on their own, and they don't sound too terrific. So we've been experimenting with some methods to try to make that a little bit better.I am really curious about what people are going to think about this and am looking forward to what you say. Let's get right into it.IntroductionOzzie: For those watching us, this is Misha and me just having a meeting about large language models and their use for effective altruism.Obviously, large language models have been a very big deal very recently, and now there's a big question about how we could best apply them to EA purposes and what EAs could do best about it. So this is going to be a very quick meeting. We only have about half an hour.Right now, we have about seven topics. The main topic, though, is just the LLM applications.How do we use LLMs already?Ozzie: So, how do we use LLMs already?Misha: I think I use them for roughly 10 minutes on average per day.Sometimes I just ask questions or ask queries like, "Hey, I have these ingredients. What cocktails can I make?" Sometimes I try to converse with them about stuff. Sometimes I just use it (e.g., text-davinci-003) as a source of knowledge. I think it's more suitable for areas where verifiable expertise is rare.Take non-critical medicine, like skincare. I had a chat with it and got some recommendations in this domain, and I think it turned out really well. I previously tried to search for recommendations and asked a few people, but it didn't work.I also use it as an amplification for journaling whenever I'm doing any emotional or self-coaching work. Writing is great. I personally find it much easier to write “as if” I'm writing a message to someone—having ChatGPT obviously helps with that.Having a conversation partner activates some sort of social infrastructure in my brain. Humans solve math problems better when they are framed socially. And yeah, doing it with language models is straightforward and really good. Further, sometimes models give you hints or insights that yo...]]>
Sat, 07 Jan 2023 17:16:10 +0000 EA - Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism, published by Ozzie Gooen on January 6, 2023 on The Effective Altruism Forum.Misha and I recently recorded a short discussion about large language models and their uses for effective altruists.This was mostly a regular Zoom meeting, but we added some editing and text transcription. After we wrote up the transcript, both Misha and myself edited our respective sections.I think the final transcript is clearer and contains more information than the original discussion. I might even suggest using text-to-speech on the transcript rather than listening to the original audio. This back-and-forth might seem to ruin the point of presenting the video and audio, but I think it might be straightforwardly more pragmatic.TranscriptSectionsOpeningIntroductionHow do we use LLMs already?Could EAs contributing to applied LLMs be harmful?Potential LLM Application: Management and Emotional AssistancePotential LLM Application: Communication, BroadlyAside: Human-AI-Human CommunicationPotential LLM Application: Decision AutomationPotential LLM Application: EA Forum ImprovementsPotential LLM Application: EvaluationsLLM user interfacesWhat should EAs do with LLMs?OpeningOzzie: Hello. I just did a recording with my friend Misha, an EA researcher at ARB Research. This was a pretty short meeting about large language models and their use by effective altruists. The two of us are pretty excited about the potential for large language models to be used by effective altruists for different kinds of infrastructure.This is an experiment with us presenting videos publicly. Normally, our videos are just Zoom meetings. If anything, the Zoom meetings would be unedited. I found that to be quite a pain. These Zoom meetings typically don't look that great on their own, and they don't sound too terrific. So we've been experimenting with some methods to try to make that a little bit better.I am really curious about what people are going to think about this and am looking forward to what you say. Let's get right into it.IntroductionOzzie: For those watching us, this is Misha and me just having a meeting about large language models and their use for effective altruism.Obviously, large language models have been a very big deal very recently, and now there's a big question about how we could best apply them to EA purposes and what EAs could do best about it. So this is going to be a very quick meeting. We only have about half an hour.Right now, we have about seven topics. The main topic, though, is just the LLM applications.How do we use LLMs already?Ozzie: So, how do we use LLMs already?Misha: I think I use them for roughly 10 minutes on average per day.Sometimes I just ask questions or ask queries like, "Hey, I have these ingredients. What cocktails can I make?" Sometimes I try to converse with them about stuff. Sometimes I just use it (e.g., text-davinci-003) as a source of knowledge. I think it's more suitable for areas where verifiable expertise is rare.Take non-critical medicine, like skincare. I had a chat with it and got some recommendations in this domain, and I think it turned out really well. I previously tried to search for recommendations and asked a few people, but it didn't work.I also use it as an amplification for journaling whenever I'm doing any emotional or self-coaching work. Writing is great. I personally find it much easier to write “as if” I'm writing a message to someone—having ChatGPT obviously helps with that.Having a conversation partner activates some sort of social infrastructure in my brain. Humans solve math problems better when they are framed socially. And yeah, doing it with language models is straightforward and really good. Further, sometimes models give you hints or insights that yo...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism, published by Ozzie Gooen on January 6, 2023 on The Effective Altruism Forum.Misha and I recently recorded a short discussion about large language models and their uses for effective altruists.This was mostly a regular Zoom meeting, but we added some editing and text transcription. After we wrote up the transcript, both Misha and myself edited our respective sections.I think the final transcript is clearer and contains more information than the original discussion. I might even suggest using text-to-speech on the transcript rather than listening to the original audio. This back-and-forth might seem to ruin the point of presenting the video and audio, but I think it might be straightforwardly more pragmatic.TranscriptSectionsOpeningIntroductionHow do we use LLMs already?Could EAs contributing to applied LLMs be harmful?Potential LLM Application: Management and Emotional AssistancePotential LLM Application: Communication, BroadlyAside: Human-AI-Human CommunicationPotential LLM Application: Decision AutomationPotential LLM Application: EA Forum ImprovementsPotential LLM Application: EvaluationsLLM user interfacesWhat should EAs do with LLMs?OpeningOzzie: Hello. I just did a recording with my friend Misha, an EA researcher at ARB Research. This was a pretty short meeting about large language models and their use by effective altruists. The two of us are pretty excited about the potential for large language models to be used by effective altruists for different kinds of infrastructure.This is an experiment with us presenting videos publicly. Normally, our videos are just Zoom meetings. If anything, the Zoom meetings would be unedited. I found that to be quite a pain. These Zoom meetings typically don't look that great on their own, and they don't sound too terrific. So we've been experimenting with some methods to try to make that a little bit better.I am really curious about what people are going to think about this and am looking forward to what you say. Let's get right into it.IntroductionOzzie: For those watching us, this is Misha and me just having a meeting about large language models and their use for effective altruism.Obviously, large language models have been a very big deal very recently, and now there's a big question about how we could best apply them to EA purposes and what EAs could do best about it. So this is going to be a very quick meeting. We only have about half an hour.Right now, we have about seven topics. The main topic, though, is just the LLM applications.How do we use LLMs already?Ozzie: So, how do we use LLMs already?Misha: I think I use them for roughly 10 minutes on average per day.Sometimes I just ask questions or ask queries like, "Hey, I have these ingredients. What cocktails can I make?" Sometimes I try to converse with them about stuff. Sometimes I just use it (e.g., text-davinci-003) as a source of knowledge. I think it's more suitable for areas where verifiable expertise is rare.Take non-critical medicine, like skincare. I had a chat with it and got some recommendations in this domain, and I think it turned out really well. I previously tried to search for recommendations and asked a few people, but it didn't work.I also use it as an amplification for journaling whenever I'm doing any emotional or self-coaching work. Writing is great. I personally find it much easier to write “as if” I'm writing a message to someone—having ChatGPT obviously helps with that.Having a conversation partner activates some sort of social infrastructure in my brain. Humans solve math problems better when they are framed socially. And yeah, doing it with language models is straightforward and really good. Further, sometimes models give you hints or insights that yo...]]>
Ozzie Gooen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 22:21 None full 4376
vEYh5NAXuXcZSkAab_EA EA - Do short timelines impact the tractability of 80k’s advice? by smk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do short timelines impact the tractability of 80k’s advice?, published by smk on January 5, 2023 on The Effective Altruism Forum.Epistemic status: uncertain, looking to clarify my thinking on thisHello,80k recommends going into a graduate programme in machine learning to work on the alignment problem. For someone starting out studies, finishing a PhD will take at least 6-10 years including undergraduate/Master studies.Some put AGI take-off at ~4 years away, while the median Metaculus prediction for weak AGI is 2027 and the first superintelligence ~10 months after the first AGI. (The first public 'general AI' system is predicted in 2038, which makes me a bit confused. I fail to see how there's an 11 year gap between weak and 'strong' AI, especially with superintelligence ~10 months after the first AGI. Am I missing something?).To what extent is it still worthwhile to pursue a PhD in ML to work on the alignment problem when timelines are this short?Best, smkThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
smk https://forum.effectivealtruism.org/posts/vEYh5NAXuXcZSkAab/do-short-timelines-impact-the-tractability-of-80k-s-advice Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do short timelines impact the tractability of 80k’s advice?, published by smk on January 5, 2023 on The Effective Altruism Forum.Epistemic status: uncertain, looking to clarify my thinking on thisHello,80k recommends going into a graduate programme in machine learning to work on the alignment problem. For someone starting out studies, finishing a PhD will take at least 6-10 years including undergraduate/Master studies.Some put AGI take-off at ~4 years away, while the median Metaculus prediction for weak AGI is 2027 and the first superintelligence ~10 months after the first AGI. (The first public 'general AI' system is predicted in 2038, which makes me a bit confused. I fail to see how there's an 11 year gap between weak and 'strong' AI, especially with superintelligence ~10 months after the first AGI. Am I missing something?).To what extent is it still worthwhile to pursue a PhD in ML to work on the alignment problem when timelines are this short?Best, smkThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 07 Jan 2023 03:45:28 +0000 EA - Do short timelines impact the tractability of 80k’s advice? by smk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do short timelines impact the tractability of 80k’s advice?, published by smk on January 5, 2023 on The Effective Altruism Forum.Epistemic status: uncertain, looking to clarify my thinking on thisHello,80k recommends going into a graduate programme in machine learning to work on the alignment problem. For someone starting out studies, finishing a PhD will take at least 6-10 years including undergraduate/Master studies.Some put AGI take-off at ~4 years away, while the median Metaculus prediction for weak AGI is 2027 and the first superintelligence ~10 months after the first AGI. (The first public 'general AI' system is predicted in 2038, which makes me a bit confused. I fail to see how there's an 11 year gap between weak and 'strong' AI, especially with superintelligence ~10 months after the first AGI. Am I missing something?).To what extent is it still worthwhile to pursue a PhD in ML to work on the alignment problem when timelines are this short?Best, smkThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do short timelines impact the tractability of 80k’s advice?, published by smk on January 5, 2023 on The Effective Altruism Forum.Epistemic status: uncertain, looking to clarify my thinking on thisHello,80k recommends going into a graduate programme in machine learning to work on the alignment problem. For someone starting out studies, finishing a PhD will take at least 6-10 years including undergraduate/Master studies.Some put AGI take-off at ~4 years away, while the median Metaculus prediction for weak AGI is 2027 and the first superintelligence ~10 months after the first AGI. (The first public 'general AI' system is predicted in 2038, which makes me a bit confused. I fail to see how there's an 11 year gap between weak and 'strong' AI, especially with superintelligence ~10 months after the first AGI. Am I missing something?).To what extent is it still worthwhile to pursue a PhD in ML to work on the alignment problem when timelines are this short?Best, smkThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
smk https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:18 None full 4373
7E3AGFB86mKYeo5aC_EA EA - EAs interested in EU policy: Consider applying for the European Commission’s Blue Book Traineeship by EU Policy Careers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAs interested in EU policy: Consider applying for the European Commission’s Blue Book Traineeship, published by EU Policy Careers on January 6, 2023 on The Effective Altruism Forum.Inspired by a series of recent forum posts highlighting early career opportunities in US policy, this post summarises why and how to apply to the Blue Book Traineeship. This paid, five-month internship programme with the European Commission, the executive body of the EU, is one of the main pathways into an EU policy career. The last section of this post also outlines some other options to get started in EU policy.There are two Blue Book sessions each year, with applications opening in January for the session starting in October and in August for a start in March of the following year. Application deadlines can be found here. Applications for the October 2023 session are open now and close on January 31.As the programme is suitable for people in different stages of their career and from various backgrounds (see below) and EU policy is arguably still neglected within the EA community, an application could be a good option for many EAs. The initial application is fairly low-cost, as you only need to upload your CV and documents without writing a motivation letter.The programme is not only relevant to students or recent graduates, as many trainees have some years of previous work experience (around 30% of all trainees are 30+ and only 5% younger than 25, see full statistics here). Work experience can even be a significant advantage for finding full-time positions after the traineeship, and it can be a strategic decision to only start the traineeship after gaining some work experience to increase the chances of being able to stay on.[1]Epistemic status: This post is mostly based on my experience completing the traineeship last year and now working full time at the Commission, including conversations with around 20 people before and during the traineeship about both the application process and getting full-time employment afterwards. The post was greatly improved by the contributions of four other EAs with expertise on EU policy.EligibilityThe programme is mostly directed towards EU citizens.[2] The minimum educational requirement is a completed undergraduate degree. However, a master’s degree is sometimes necessary to pass through the first stage of the selection process (especially for ‘competitive’ nationalities below) and increases employment opportunities after the traineeship.The programme is open to graduates of all disciplines, not just people holding policy-related degrees—even rewarding applicants from ‘rare fields of study’ in the selection process. Most degrees outside of policy, law and economics should fall into this, as the majority of trainees (around 70%) hold degrees in one of these fields. It is therefore a good opportunity for EAs with no previous policy experience interested in testing their fit and learning more about impact in the sector.All eligibility requirements are detailed here, including the requirement to prove very good knowledge (B2 level) of two official EU languages (English plus one other is sufficient).Background on the European CommissionThe Commission is the executive body of the EU. It draws up initial legislative proposals (which are then amended and adopted jointly by the European Parliament and the Council of the EU, the other two main EU institutions) and implements EU policies (e.g., deciding how to allocate funds, outlining the technical details of legislation, monitoring the implementation at member state level). It employs around 30,000 people mostly based in Brussels. The Commission’s main departments according to policy areas, the Directorate Generals (DGs), are comparable to government ministries at the national level. A Blue Book Tr...]]>
EU Policy Careers https://forum.effectivealtruism.org/posts/7E3AGFB86mKYeo5aC/eas-interested-in-eu-policy-consider-applying-for-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAs interested in EU policy: Consider applying for the European Commission’s Blue Book Traineeship, published by EU Policy Careers on January 6, 2023 on The Effective Altruism Forum.Inspired by a series of recent forum posts highlighting early career opportunities in US policy, this post summarises why and how to apply to the Blue Book Traineeship. This paid, five-month internship programme with the European Commission, the executive body of the EU, is one of the main pathways into an EU policy career. The last section of this post also outlines some other options to get started in EU policy.There are two Blue Book sessions each year, with applications opening in January for the session starting in October and in August for a start in March of the following year. Application deadlines can be found here. Applications for the October 2023 session are open now and close on January 31.As the programme is suitable for people in different stages of their career and from various backgrounds (see below) and EU policy is arguably still neglected within the EA community, an application could be a good option for many EAs. The initial application is fairly low-cost, as you only need to upload your CV and documents without writing a motivation letter.The programme is not only relevant to students or recent graduates, as many trainees have some years of previous work experience (around 30% of all trainees are 30+ and only 5% younger than 25, see full statistics here). Work experience can even be a significant advantage for finding full-time positions after the traineeship, and it can be a strategic decision to only start the traineeship after gaining some work experience to increase the chances of being able to stay on.[1]Epistemic status: This post is mostly based on my experience completing the traineeship last year and now working full time at the Commission, including conversations with around 20 people before and during the traineeship about both the application process and getting full-time employment afterwards. The post was greatly improved by the contributions of four other EAs with expertise on EU policy.EligibilityThe programme is mostly directed towards EU citizens.[2] The minimum educational requirement is a completed undergraduate degree. However, a master’s degree is sometimes necessary to pass through the first stage of the selection process (especially for ‘competitive’ nationalities below) and increases employment opportunities after the traineeship.The programme is open to graduates of all disciplines, not just people holding policy-related degrees—even rewarding applicants from ‘rare fields of study’ in the selection process. Most degrees outside of policy, law and economics should fall into this, as the majority of trainees (around 70%) hold degrees in one of these fields. It is therefore a good opportunity for EAs with no previous policy experience interested in testing their fit and learning more about impact in the sector.All eligibility requirements are detailed here, including the requirement to prove very good knowledge (B2 level) of two official EU languages (English plus one other is sufficient).Background on the European CommissionThe Commission is the executive body of the EU. It draws up initial legislative proposals (which are then amended and adopted jointly by the European Parliament and the Council of the EU, the other two main EU institutions) and implements EU policies (e.g., deciding how to allocate funds, outlining the technical details of legislation, monitoring the implementation at member state level). It employs around 30,000 people mostly based in Brussels. The Commission’s main departments according to policy areas, the Directorate Generals (DGs), are comparable to government ministries at the national level. A Blue Book Tr...]]>
Sat, 07 Jan 2023 00:38:39 +0000 EA - EAs interested in EU policy: Consider applying for the European Commission’s Blue Book Traineeship by EU Policy Careers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAs interested in EU policy: Consider applying for the European Commission’s Blue Book Traineeship, published by EU Policy Careers on January 6, 2023 on The Effective Altruism Forum.Inspired by a series of recent forum posts highlighting early career opportunities in US policy, this post summarises why and how to apply to the Blue Book Traineeship. This paid, five-month internship programme with the European Commission, the executive body of the EU, is one of the main pathways into an EU policy career. The last section of this post also outlines some other options to get started in EU policy.There are two Blue Book sessions each year, with applications opening in January for the session starting in October and in August for a start in March of the following year. Application deadlines can be found here. Applications for the October 2023 session are open now and close on January 31.As the programme is suitable for people in different stages of their career and from various backgrounds (see below) and EU policy is arguably still neglected within the EA community, an application could be a good option for many EAs. The initial application is fairly low-cost, as you only need to upload your CV and documents without writing a motivation letter.The programme is not only relevant to students or recent graduates, as many trainees have some years of previous work experience (around 30% of all trainees are 30+ and only 5% younger than 25, see full statistics here). Work experience can even be a significant advantage for finding full-time positions after the traineeship, and it can be a strategic decision to only start the traineeship after gaining some work experience to increase the chances of being able to stay on.[1]Epistemic status: This post is mostly based on my experience completing the traineeship last year and now working full time at the Commission, including conversations with around 20 people before and during the traineeship about both the application process and getting full-time employment afterwards. The post was greatly improved by the contributions of four other EAs with expertise on EU policy.EligibilityThe programme is mostly directed towards EU citizens.[2] The minimum educational requirement is a completed undergraduate degree. However, a master’s degree is sometimes necessary to pass through the first stage of the selection process (especially for ‘competitive’ nationalities below) and increases employment opportunities after the traineeship.The programme is open to graduates of all disciplines, not just people holding policy-related degrees—even rewarding applicants from ‘rare fields of study’ in the selection process. Most degrees outside of policy, law and economics should fall into this, as the majority of trainees (around 70%) hold degrees in one of these fields. It is therefore a good opportunity for EAs with no previous policy experience interested in testing their fit and learning more about impact in the sector.All eligibility requirements are detailed here, including the requirement to prove very good knowledge (B2 level) of two official EU languages (English plus one other is sufficient).Background on the European CommissionThe Commission is the executive body of the EU. It draws up initial legislative proposals (which are then amended and adopted jointly by the European Parliament and the Council of the EU, the other two main EU institutions) and implements EU policies (e.g., deciding how to allocate funds, outlining the technical details of legislation, monitoring the implementation at member state level). It employs around 30,000 people mostly based in Brussels. The Commission’s main departments according to policy areas, the Directorate Generals (DGs), are comparable to government ministries at the national level. A Blue Book Tr...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAs interested in EU policy: Consider applying for the European Commission’s Blue Book Traineeship, published by EU Policy Careers on January 6, 2023 on The Effective Altruism Forum.Inspired by a series of recent forum posts highlighting early career opportunities in US policy, this post summarises why and how to apply to the Blue Book Traineeship. This paid, five-month internship programme with the European Commission, the executive body of the EU, is one of the main pathways into an EU policy career. The last section of this post also outlines some other options to get started in EU policy.There are two Blue Book sessions each year, with applications opening in January for the session starting in October and in August for a start in March of the following year. Application deadlines can be found here. Applications for the October 2023 session are open now and close on January 31.As the programme is suitable for people in different stages of their career and from various backgrounds (see below) and EU policy is arguably still neglected within the EA community, an application could be a good option for many EAs. The initial application is fairly low-cost, as you only need to upload your CV and documents without writing a motivation letter.The programme is not only relevant to students or recent graduates, as many trainees have some years of previous work experience (around 30% of all trainees are 30+ and only 5% younger than 25, see full statistics here). Work experience can even be a significant advantage for finding full-time positions after the traineeship, and it can be a strategic decision to only start the traineeship after gaining some work experience to increase the chances of being able to stay on.[1]Epistemic status: This post is mostly based on my experience completing the traineeship last year and now working full time at the Commission, including conversations with around 20 people before and during the traineeship about both the application process and getting full-time employment afterwards. The post was greatly improved by the contributions of four other EAs with expertise on EU policy.EligibilityThe programme is mostly directed towards EU citizens.[2] The minimum educational requirement is a completed undergraduate degree. However, a master’s degree is sometimes necessary to pass through the first stage of the selection process (especially for ‘competitive’ nationalities below) and increases employment opportunities after the traineeship.The programme is open to graduates of all disciplines, not just people holding policy-related degrees—even rewarding applicants from ‘rare fields of study’ in the selection process. Most degrees outside of policy, law and economics should fall into this, as the majority of trainees (around 70%) hold degrees in one of these fields. It is therefore a good opportunity for EAs with no previous policy experience interested in testing their fit and learning more about impact in the sector.All eligibility requirements are detailed here, including the requirement to prove very good knowledge (B2 level) of two official EU languages (English plus one other is sufficient).Background on the European CommissionThe Commission is the executive body of the EU. It draws up initial legislative proposals (which are then amended and adopted jointly by the European Parliament and the Council of the EU, the other two main EU institutions) and implements EU policies (e.g., deciding how to allocate funds, outlining the technical details of legislation, monitoring the implementation at member state level). It employs around 30,000 people mostly based in Brussels. The Commission’s main departments according to policy areas, the Directorate Generals (DGs), are comparable to government ministries at the national level. A Blue Book Tr...]]>
EU Policy Careers https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 30:33 None full 4371
uZJSAi2iA9KGx7qg4_EA EA - Metaculus Beginner Tournament for New Forecasters by Anastasia Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaculus Beginner Tournament for New Forecasters, published by Anastasia on January 6, 2023 on The Effective Altruism Forum.Are you new to forecasting? Join the latest version of the Metaculus Beginner Tournament! The tournament runs for the first quarter of 2023. Each week, we'll have two new questions on relevant topics that week. And the two questions from the previous week will resolve, so you can see how you did on them! At the end of the quarter, you'll be a veteran forecaster with a real forecasting track record.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Anastasia https://forum.effectivealtruism.org/posts/uZJSAi2iA9KGx7qg4/metaculus-beginner-tournament-for-new-forecasters Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaculus Beginner Tournament for New Forecasters, published by Anastasia on January 6, 2023 on The Effective Altruism Forum.Are you new to forecasting? Join the latest version of the Metaculus Beginner Tournament! The tournament runs for the first quarter of 2023. Each week, we'll have two new questions on relevant topics that week. And the two questions from the previous week will resolve, so you can see how you did on them! At the end of the quarter, you'll be a veteran forecaster with a real forecasting track record.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 06 Jan 2023 18:26:18 +0000 EA - Metaculus Beginner Tournament for New Forecasters by Anastasia Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaculus Beginner Tournament for New Forecasters, published by Anastasia on January 6, 2023 on The Effective Altruism Forum.Are you new to forecasting? Join the latest version of the Metaculus Beginner Tournament! The tournament runs for the first quarter of 2023. Each week, we'll have two new questions on relevant topics that week. And the two questions from the previous week will resolve, so you can see how you did on them! At the end of the quarter, you'll be a veteran forecaster with a real forecasting track record.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaculus Beginner Tournament for New Forecasters, published by Anastasia on January 6, 2023 on The Effective Altruism Forum.Are you new to forecasting? Join the latest version of the Metaculus Beginner Tournament! The tournament runs for the first quarter of 2023. Each week, we'll have two new questions on relevant topics that week. And the two questions from the previous week will resolve, so you can see how you did on them! At the end of the quarter, you'll be a veteran forecaster with a real forecasting track record.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Anastasia https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:48 None full 4372
s4f83vkkSDL6my44j_EA EA - Foundation Entrepreneurship - How the first training program went by Aidan Alexander Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Foundation Entrepreneurship - How the first training program went, published by Aidan Alexander on January 6, 2023 on The Effective Altruism Forum.TLDR:Charity Entrepreneurship (CE) has run our first foundation training program. This program looked less like traditional philanthropic advising, and more like our classic charity incubation program. Instead of recommending specific charities, we aimed to teach skills that would allow foundation leaders to work as informed, independent, full-time grantmakers, equipped with a suite of decision-making tools. Despite seeing many areas where we could improve the program, we consider the pilot to have been very successful. We are excited to run this program again and believe it could provide impact comparable to that of our charity incubation program.BackgroundAt the beginning of 2022 we announced that CE would be running a pilot training program for grantmaking foundations. The pilot program finished several months ago, and so we wanted to follow up on how it went and what we learned.WhyStarting an impactful foundation and being a strong grantmaker is hard. Few high-quality resources exist (at least in a publicly available form) that teach how to run an evidence-based and highly effective foundation.Over the past four years, CE has helped launch 23 new nonprofits, several of which are on track to becoming field leaders in their respective areas (e.g., recommended by GiveWell). An unpredicted learning from this work is that our content, approach, and handbook have been useful to a number of foundations and grantmakers. They have improved their prioritization, decision-making, their clarity of mission and, ultimately, their impact (particularly in the early stages of setting up a foundation or granting department).With this in mind, we incubated one grantmaking foundation in 2021, which proved successful. This, combined with the number of interested foundations, led us to pilot a program focused on grantmaking foundations in 2022.WhatThe pilot program ran for four weeks (three remote and one in person in London) and required participants’ full-time commitment. It was high intensity relative to other programs typically aimed at philanthropists.During the remote weeks there were readings from a 300+ page handbook written for the purpose, ~3x daily video lectures, 2x daily projects and 1x daily group discussion.During the in-person week, in addition to group discussions, we ran mock grantmaking and donor coordination exercises. We supported participants to work towards a concrete final deliverable that they could use to better their grantmaking going forward.The content of the program was focused on (a) equipping the foundations with decision-making tools, e.g., cost-effectiveness analyses, (b) making key decisions regarding their scope, structure and strategy, and (c) sharing best practices on vetting processes in the contexts of hiring and grantmaking.WhoFive foundations (represented by six people) participated in the pilot. Once these foundations have launched and scaled, their combined annual donations are expected to be approximately $60M/yr. Participants had a wide range of grantmaking experience (from none to many years), foundation maturity (from yet-to-exist to fairly established), EA knowledge (from novice to expert) and expected annual dollars granted (from $100k to >$10M).How it wentWhat went wellThe program was a pilot, and as a result it was quite rough around the edges. Despite this, we think the program was very successful! From our perspective, the aspects of the program that went particularly well were the group discussions, the mock grantmaking exercises and the final project at the end.Feedback has suggested that the group discussion sessions were extremely valuable. These sessions gave part...]]>
Aidan Alexander https://forum.effectivealtruism.org/posts/s4f83vkkSDL6my44j/foundation-entrepreneurship-how-the-first-training-program Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Foundation Entrepreneurship - How the first training program went, published by Aidan Alexander on January 6, 2023 on The Effective Altruism Forum.TLDR:Charity Entrepreneurship (CE) has run our first foundation training program. This program looked less like traditional philanthropic advising, and more like our classic charity incubation program. Instead of recommending specific charities, we aimed to teach skills that would allow foundation leaders to work as informed, independent, full-time grantmakers, equipped with a suite of decision-making tools. Despite seeing many areas where we could improve the program, we consider the pilot to have been very successful. We are excited to run this program again and believe it could provide impact comparable to that of our charity incubation program.BackgroundAt the beginning of 2022 we announced that CE would be running a pilot training program for grantmaking foundations. The pilot program finished several months ago, and so we wanted to follow up on how it went and what we learned.WhyStarting an impactful foundation and being a strong grantmaker is hard. Few high-quality resources exist (at least in a publicly available form) that teach how to run an evidence-based and highly effective foundation.Over the past four years, CE has helped launch 23 new nonprofits, several of which are on track to becoming field leaders in their respective areas (e.g., recommended by GiveWell). An unpredicted learning from this work is that our content, approach, and handbook have been useful to a number of foundations and grantmakers. They have improved their prioritization, decision-making, their clarity of mission and, ultimately, their impact (particularly in the early stages of setting up a foundation or granting department).With this in mind, we incubated one grantmaking foundation in 2021, which proved successful. This, combined with the number of interested foundations, led us to pilot a program focused on grantmaking foundations in 2022.WhatThe pilot program ran for four weeks (three remote and one in person in London) and required participants’ full-time commitment. It was high intensity relative to other programs typically aimed at philanthropists.During the remote weeks there were readings from a 300+ page handbook written for the purpose, ~3x daily video lectures, 2x daily projects and 1x daily group discussion.During the in-person week, in addition to group discussions, we ran mock grantmaking and donor coordination exercises. We supported participants to work towards a concrete final deliverable that they could use to better their grantmaking going forward.The content of the program was focused on (a) equipping the foundations with decision-making tools, e.g., cost-effectiveness analyses, (b) making key decisions regarding their scope, structure and strategy, and (c) sharing best practices on vetting processes in the contexts of hiring and grantmaking.WhoFive foundations (represented by six people) participated in the pilot. Once these foundations have launched and scaled, their combined annual donations are expected to be approximately $60M/yr. Participants had a wide range of grantmaking experience (from none to many years), foundation maturity (from yet-to-exist to fairly established), EA knowledge (from novice to expert) and expected annual dollars granted (from $100k to >$10M).How it wentWhat went wellThe program was a pilot, and as a result it was quite rough around the edges. Despite this, we think the program was very successful! From our perspective, the aspects of the program that went particularly well were the group discussions, the mock grantmaking exercises and the final project at the end.Feedback has suggested that the group discussion sessions were extremely valuable. These sessions gave part...]]>
Fri, 06 Jan 2023 11:11:55 +0000 EA - Foundation Entrepreneurship - How the first training program went by Aidan Alexander Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Foundation Entrepreneurship - How the first training program went, published by Aidan Alexander on January 6, 2023 on The Effective Altruism Forum.TLDR:Charity Entrepreneurship (CE) has run our first foundation training program. This program looked less like traditional philanthropic advising, and more like our classic charity incubation program. Instead of recommending specific charities, we aimed to teach skills that would allow foundation leaders to work as informed, independent, full-time grantmakers, equipped with a suite of decision-making tools. Despite seeing many areas where we could improve the program, we consider the pilot to have been very successful. We are excited to run this program again and believe it could provide impact comparable to that of our charity incubation program.BackgroundAt the beginning of 2022 we announced that CE would be running a pilot training program for grantmaking foundations. The pilot program finished several months ago, and so we wanted to follow up on how it went and what we learned.WhyStarting an impactful foundation and being a strong grantmaker is hard. Few high-quality resources exist (at least in a publicly available form) that teach how to run an evidence-based and highly effective foundation.Over the past four years, CE has helped launch 23 new nonprofits, several of which are on track to becoming field leaders in their respective areas (e.g., recommended by GiveWell). An unpredicted learning from this work is that our content, approach, and handbook have been useful to a number of foundations and grantmakers. They have improved their prioritization, decision-making, their clarity of mission and, ultimately, their impact (particularly in the early stages of setting up a foundation or granting department).With this in mind, we incubated one grantmaking foundation in 2021, which proved successful. This, combined with the number of interested foundations, led us to pilot a program focused on grantmaking foundations in 2022.WhatThe pilot program ran for four weeks (three remote and one in person in London) and required participants’ full-time commitment. It was high intensity relative to other programs typically aimed at philanthropists.During the remote weeks there were readings from a 300+ page handbook written for the purpose, ~3x daily video lectures, 2x daily projects and 1x daily group discussion.During the in-person week, in addition to group discussions, we ran mock grantmaking and donor coordination exercises. We supported participants to work towards a concrete final deliverable that they could use to better their grantmaking going forward.The content of the program was focused on (a) equipping the foundations with decision-making tools, e.g., cost-effectiveness analyses, (b) making key decisions regarding their scope, structure and strategy, and (c) sharing best practices on vetting processes in the contexts of hiring and grantmaking.WhoFive foundations (represented by six people) participated in the pilot. Once these foundations have launched and scaled, their combined annual donations are expected to be approximately $60M/yr. Participants had a wide range of grantmaking experience (from none to many years), foundation maturity (from yet-to-exist to fairly established), EA knowledge (from novice to expert) and expected annual dollars granted (from $100k to >$10M).How it wentWhat went wellThe program was a pilot, and as a result it was quite rough around the edges. Despite this, we think the program was very successful! From our perspective, the aspects of the program that went particularly well were the group discussions, the mock grantmaking exercises and the final project at the end.Feedback has suggested that the group discussion sessions were extremely valuable. These sessions gave part...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Foundation Entrepreneurship - How the first training program went, published by Aidan Alexander on January 6, 2023 on The Effective Altruism Forum.TLDR:Charity Entrepreneurship (CE) has run our first foundation training program. This program looked less like traditional philanthropic advising, and more like our classic charity incubation program. Instead of recommending specific charities, we aimed to teach skills that would allow foundation leaders to work as informed, independent, full-time grantmakers, equipped with a suite of decision-making tools. Despite seeing many areas where we could improve the program, we consider the pilot to have been very successful. We are excited to run this program again and believe it could provide impact comparable to that of our charity incubation program.BackgroundAt the beginning of 2022 we announced that CE would be running a pilot training program for grantmaking foundations. The pilot program finished several months ago, and so we wanted to follow up on how it went and what we learned.WhyStarting an impactful foundation and being a strong grantmaker is hard. Few high-quality resources exist (at least in a publicly available form) that teach how to run an evidence-based and highly effective foundation.Over the past four years, CE has helped launch 23 new nonprofits, several of which are on track to becoming field leaders in their respective areas (e.g., recommended by GiveWell). An unpredicted learning from this work is that our content, approach, and handbook have been useful to a number of foundations and grantmakers. They have improved their prioritization, decision-making, their clarity of mission and, ultimately, their impact (particularly in the early stages of setting up a foundation or granting department).With this in mind, we incubated one grantmaking foundation in 2021, which proved successful. This, combined with the number of interested foundations, led us to pilot a program focused on grantmaking foundations in 2022.WhatThe pilot program ran for four weeks (three remote and one in person in London) and required participants’ full-time commitment. It was high intensity relative to other programs typically aimed at philanthropists.During the remote weeks there were readings from a 300+ page handbook written for the purpose, ~3x daily video lectures, 2x daily projects and 1x daily group discussion.During the in-person week, in addition to group discussions, we ran mock grantmaking and donor coordination exercises. We supported participants to work towards a concrete final deliverable that they could use to better their grantmaking going forward.The content of the program was focused on (a) equipping the foundations with decision-making tools, e.g., cost-effectiveness analyses, (b) making key decisions regarding their scope, structure and strategy, and (c) sharing best practices on vetting processes in the contexts of hiring and grantmaking.WhoFive foundations (represented by six people) participated in the pilot. Once these foundations have launched and scaled, their combined annual donations are expected to be approximately $60M/yr. Participants had a wide range of grantmaking experience (from none to many years), foundation maturity (from yet-to-exist to fairly established), EA knowledge (from novice to expert) and expected annual dollars granted (from $100k to >$10M).How it wentWhat went wellThe program was a pilot, and as a result it was quite rough around the edges. Despite this, we think the program was very successful! From our perspective, the aspects of the program that went particularly well were the group discussions, the mock grantmaking exercises and the final project at the end.Feedback has suggested that the group discussion sessions were extremely valuable. These sessions gave part...]]>
Aidan Alexander https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:09 None full 4370
HJwNGxNmFiYcJyAk3_EA EA - Bill Burr on Boiling Lobsters (also manliness and AW) by Lixiang Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bill Burr on Boiling Lobsters (also manliness and AW), published by Lixiang on January 4, 2023 on The Effective Altruism Forum.I think there's a vibe out there in many cultures (including American) that being vegetarian/vegan or certain kinds of sympathy towards animals is unmanly and just kind of lame. This is probably more true in the right-wing demographic. I'm guessing this has been discussed in the animal welfare movement somewhere, so I won't attempt to delve into the issue further in this post. Instead, I merely want to favorably acknowledge some commentary by comedian Bill Burr about boiling lobsters alive.Bill Burr is a super-famous comedian and one of the most prominent cultural icons of masculinity in the U.S (perhaps in some respects the most prominent). Although I would say he is a party-neutral comedian, his comedic themes have included anti-wokeness and challenges to certain aspects of feminism, and probably has a huge following among working-class right-wing men.Here is his commentary on boiling lobsters alive (6 min), excerpted from his podcast.Edit: Actually I will go into it for a minute. If anyone wants to see a great example of how to deal with this sort of thing in a different context, take a look at how Ford dealt with the issue of environmentalism/green-politics being considered soft, lefty, snowflake stuff when they wanted to advertise the fuel economy on their F150 pickup truck: "you won't be put in a chokehold everytime you fill up".Other versions of the commercial here and here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lixiang https://forum.effectivealtruism.org/posts/HJwNGxNmFiYcJyAk3/bill-burr-on-boiling-lobsters-also-manliness-and-aw Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bill Burr on Boiling Lobsters (also manliness and AW), published by Lixiang on January 4, 2023 on The Effective Altruism Forum.I think there's a vibe out there in many cultures (including American) that being vegetarian/vegan or certain kinds of sympathy towards animals is unmanly and just kind of lame. This is probably more true in the right-wing demographic. I'm guessing this has been discussed in the animal welfare movement somewhere, so I won't attempt to delve into the issue further in this post. Instead, I merely want to favorably acknowledge some commentary by comedian Bill Burr about boiling lobsters alive.Bill Burr is a super-famous comedian and one of the most prominent cultural icons of masculinity in the U.S (perhaps in some respects the most prominent). Although I would say he is a party-neutral comedian, his comedic themes have included anti-wokeness and challenges to certain aspects of feminism, and probably has a huge following among working-class right-wing men.Here is his commentary on boiling lobsters alive (6 min), excerpted from his podcast.Edit: Actually I will go into it for a minute. If anyone wants to see a great example of how to deal with this sort of thing in a different context, take a look at how Ford dealt with the issue of environmentalism/green-politics being considered soft, lefty, snowflake stuff when they wanted to advertise the fuel economy on their F150 pickup truck: "you won't be put in a chokehold everytime you fill up".Other versions of the commercial here and here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 06 Jan 2023 08:45:20 +0000 EA - Bill Burr on Boiling Lobsters (also manliness and AW) by Lixiang Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bill Burr on Boiling Lobsters (also manliness and AW), published by Lixiang on January 4, 2023 on The Effective Altruism Forum.I think there's a vibe out there in many cultures (including American) that being vegetarian/vegan or certain kinds of sympathy towards animals is unmanly and just kind of lame. This is probably more true in the right-wing demographic. I'm guessing this has been discussed in the animal welfare movement somewhere, so I won't attempt to delve into the issue further in this post. Instead, I merely want to favorably acknowledge some commentary by comedian Bill Burr about boiling lobsters alive.Bill Burr is a super-famous comedian and one of the most prominent cultural icons of masculinity in the U.S (perhaps in some respects the most prominent). Although I would say he is a party-neutral comedian, his comedic themes have included anti-wokeness and challenges to certain aspects of feminism, and probably has a huge following among working-class right-wing men.Here is his commentary on boiling lobsters alive (6 min), excerpted from his podcast.Edit: Actually I will go into it for a minute. If anyone wants to see a great example of how to deal with this sort of thing in a different context, take a look at how Ford dealt with the issue of environmentalism/green-politics being considered soft, lefty, snowflake stuff when they wanted to advertise the fuel economy on their F150 pickup truck: "you won't be put in a chokehold everytime you fill up".Other versions of the commercial here and here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bill Burr on Boiling Lobsters (also manliness and AW), published by Lixiang on January 4, 2023 on The Effective Altruism Forum.I think there's a vibe out there in many cultures (including American) that being vegetarian/vegan or certain kinds of sympathy towards animals is unmanly and just kind of lame. This is probably more true in the right-wing demographic. I'm guessing this has been discussed in the animal welfare movement somewhere, so I won't attempt to delve into the issue further in this post. Instead, I merely want to favorably acknowledge some commentary by comedian Bill Burr about boiling lobsters alive.Bill Burr is a super-famous comedian and one of the most prominent cultural icons of masculinity in the U.S (perhaps in some respects the most prominent). Although I would say he is a party-neutral comedian, his comedic themes have included anti-wokeness and challenges to certain aspects of feminism, and probably has a huge following among working-class right-wing men.Here is his commentary on boiling lobsters alive (6 min), excerpted from his podcast.Edit: Actually I will go into it for a minute. If anyone wants to see a great example of how to deal with this sort of thing in a different context, take a look at how Ford dealt with the issue of environmentalism/green-politics being considered soft, lefty, snowflake stuff when they wanted to advertise the fuel economy on their F150 pickup truck: "you won't be put in a chokehold everytime you fill up".Other versions of the commercial here and here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lixiang https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:40 None full 4364
tNkHKfbj5BvcWo6vC_EA EA - Prioritization Research Careers - Probably Good by Probably Good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prioritization Research Careers - Probably Good, published by Probably Good on January 5, 2023 on The Effective Altruism Forum.We’re really happy to start 2023 with a new career path profile! This time, we look at prioritization researchers, a type of researcher who uses tools from a range of disciplines – spanning economics, philosophy, and mathematics – to help make decisions about how we can best utilize our resources to do good.Our overall impression is that this path is likely to be a high impact option for those who are a good fit, particularly if you stand a reasonable chance at getting into one of the most promising organizations that conduct prioritization research. We think this is a career path that readers of the forum might be particularly interested in exploring.Read the full profile here!Test tasksOne of the things we’re most excited about in this profile are the inclusion of two test tasks created in collaboration with GiveWell. These tasks involve creating an intervention report and a cost-effectiveness analysis of a drug to reduce child mortality, and we think they’re a great way to test your fit for more quantitative, details-oriented prioritization research (though we also discuss other types of prioritization research in the profile).We link to instructions in the full path profile, or you can access them separately here.We estimate the tasks will take a combined total of 15-20 hours to complete, though this can be made shorter by just completing one or taking a less thorough approach.FeedbackAs always, we’d love to hear from you. We have exciting plans for this year, including an expansion of our team and lots of new content. As such, we’re eager to hear what types of content would be useful to the broader community and/or to specific groups.If you have any thoughts, let us know by leaving a comment below or emailing contact@probablygood.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Probably Good https://forum.effectivealtruism.org/posts/tNkHKfbj5BvcWo6vC/prioritization-research-careers-probably-good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prioritization Research Careers - Probably Good, published by Probably Good on January 5, 2023 on The Effective Altruism Forum.We’re really happy to start 2023 with a new career path profile! This time, we look at prioritization researchers, a type of researcher who uses tools from a range of disciplines – spanning economics, philosophy, and mathematics – to help make decisions about how we can best utilize our resources to do good.Our overall impression is that this path is likely to be a high impact option for those who are a good fit, particularly if you stand a reasonable chance at getting into one of the most promising organizations that conduct prioritization research. We think this is a career path that readers of the forum might be particularly interested in exploring.Read the full profile here!Test tasksOne of the things we’re most excited about in this profile are the inclusion of two test tasks created in collaboration with GiveWell. These tasks involve creating an intervention report and a cost-effectiveness analysis of a drug to reduce child mortality, and we think they’re a great way to test your fit for more quantitative, details-oriented prioritization research (though we also discuss other types of prioritization research in the profile).We link to instructions in the full path profile, or you can access them separately here.We estimate the tasks will take a combined total of 15-20 hours to complete, though this can be made shorter by just completing one or taking a less thorough approach.FeedbackAs always, we’d love to hear from you. We have exciting plans for this year, including an expansion of our team and lots of new content. As such, we’re eager to hear what types of content would be useful to the broader community and/or to specific groups.If you have any thoughts, let us know by leaving a comment below or emailing contact@probablygood.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 05 Jan 2023 23:23:01 +0000 EA - Prioritization Research Careers - Probably Good by Probably Good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prioritization Research Careers - Probably Good, published by Probably Good on January 5, 2023 on The Effective Altruism Forum.We’re really happy to start 2023 with a new career path profile! This time, we look at prioritization researchers, a type of researcher who uses tools from a range of disciplines – spanning economics, philosophy, and mathematics – to help make decisions about how we can best utilize our resources to do good.Our overall impression is that this path is likely to be a high impact option for those who are a good fit, particularly if you stand a reasonable chance at getting into one of the most promising organizations that conduct prioritization research. We think this is a career path that readers of the forum might be particularly interested in exploring.Read the full profile here!Test tasksOne of the things we’re most excited about in this profile are the inclusion of two test tasks created in collaboration with GiveWell. These tasks involve creating an intervention report and a cost-effectiveness analysis of a drug to reduce child mortality, and we think they’re a great way to test your fit for more quantitative, details-oriented prioritization research (though we also discuss other types of prioritization research in the profile).We link to instructions in the full path profile, or you can access them separately here.We estimate the tasks will take a combined total of 15-20 hours to complete, though this can be made shorter by just completing one or taking a less thorough approach.FeedbackAs always, we’d love to hear from you. We have exciting plans for this year, including an expansion of our team and lots of new content. As such, we’re eager to hear what types of content would be useful to the broader community and/or to specific groups.If you have any thoughts, let us know by leaving a comment below or emailing contact@probablygood.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prioritization Research Careers - Probably Good, published by Probably Good on January 5, 2023 on The Effective Altruism Forum.We’re really happy to start 2023 with a new career path profile! This time, we look at prioritization researchers, a type of researcher who uses tools from a range of disciplines – spanning economics, philosophy, and mathematics – to help make decisions about how we can best utilize our resources to do good.Our overall impression is that this path is likely to be a high impact option for those who are a good fit, particularly if you stand a reasonable chance at getting into one of the most promising organizations that conduct prioritization research. We think this is a career path that readers of the forum might be particularly interested in exploring.Read the full profile here!Test tasksOne of the things we’re most excited about in this profile are the inclusion of two test tasks created in collaboration with GiveWell. These tasks involve creating an intervention report and a cost-effectiveness analysis of a drug to reduce child mortality, and we think they’re a great way to test your fit for more quantitative, details-oriented prioritization research (though we also discuss other types of prioritization research in the profile).We link to instructions in the full path profile, or you can access them separately here.We estimate the tasks will take a combined total of 15-20 hours to complete, though this can be made shorter by just completing one or taking a less thorough approach.FeedbackAs always, we’d love to hear from you. We have exciting plans for this year, including an expansion of our team and lots of new content. As such, we’re eager to hear what types of content would be useful to the broader community and/or to specific groups.If you have any thoughts, let us know by leaving a comment below or emailing contact@probablygood.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Probably Good https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:59 None full 4363
zh6pZTZisqWq7AgnD_EA EA - On being compromised by Gavin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On being compromised, published by Gavin on January 5, 2023 on The Effective Altruism Forum.Only the young and the saints are uncompromised. Everyone else has tried to do something in the world and eventually slipped up (or just been associated with someone else who slipped up).Say that you are compromised if it is easy for someone to shame you. This takes lots of forms:"We are all sinners", say the Christians."We are all privileged", say the identitarians."We all have some self-serving motives", says everyone sensible."Even just living quietly we destroy things", say the environmentalists."Even our noblest actions fall horribly short of the mark", say the EAs.Lots of people on this forum have struggled with the feeling of being compromised. Since FTX. Or Leverage. Or Guzey. Or Thiel. Or Singer. Or Mill or whatever.But this is the normal course of a life, including highly moral lives. (Part of this normality comes from shame usually being a common sense matter - and common sense morals correlate with actual harm, but are often wrong in the precise ways this movement is devoted to countering!)But the greater part of it being normal is that all action incurs risk, including moral risk. We do our best to avoid them (and in my experience grantmakers are vigilant about negative EV things), but you can't avoid it entirely. (Again: total inaction also does not avoid it.) Empirically, this risk level is high enough that nearly everyone eventually bites it.e.g.The EU is a Nobel peace prize winning organisation you might have heard of. But their Common Agricultural Policy causes billions of dollars of damage to poor-world farmers, and has been called a "crime against humanity".Mother Theresa's well-resourced clinics and hospices were remarkably incompetent and rarely prescribed pain medication, apparently under the belief that suffering brings us closer to God.Gandhi's (and Nehru's) economic policies perpetuated poverty to the tune of millions of dead children equivalents.The American labour hero Cesar Chavez sold out undocumented Mexicans and opposed immigration in a classic protectionist scheme.The Vatican.and so on.Despite appearances, this isn't a tu quoque defence of FTX! The point is to set the occasionally appropriate recriminations of the last month in context. You will make mistakes, and people will rightly hold you to them. It will feel terrible. If you join a movement it will embarrass you eventually. Sorry.(Someone could use the above argument to licence risky behaviour - "in for a penny". But of course, like anything, being compromised is a matter of degree. Higher degrees are to be avoided fervently, insofar as they are downstream of actual harm, which they probably are.)You might think that the idle (like the chattering classes) aren't compromised, but they are. They stood by while the millions suffered, despite their remarkable power to help.Quite true, since we are all living now rather than say under feudalism.Maybe this sounds like a strawman to you, but consider our disdain for Mackenzie Scott giving her wealth to poor and artsy Americans.Bentham is perhaps the second most-demonised consequentialist - and yet he strikes me as nearly uncompromised. His much-mooted imperialism is not one, for instance. The most you can say is that he was a bit naive about state power, privacy, legibility.What's a prior then, if the incidence is 99%?Say 70 years in which to disgrace yourself. How many actions per year? Well, one tweet can do it, so potentially thousands. Call it 300.99% / 21000 = a 0.005% risk of compromise per-action. Clearly a very fragile estimate.He was also very racist, but this isn't the sort of thing that can plausibly fall under understandable moral risk.also sometimes wronglyI await a quantification of compromise, so that ...]]>
Gavin https://forum.effectivealtruism.org/posts/zh6pZTZisqWq7AgnD/on-being-compromised Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On being compromised, published by Gavin on January 5, 2023 on The Effective Altruism Forum.Only the young and the saints are uncompromised. Everyone else has tried to do something in the world and eventually slipped up (or just been associated with someone else who slipped up).Say that you are compromised if it is easy for someone to shame you. This takes lots of forms:"We are all sinners", say the Christians."We are all privileged", say the identitarians."We all have some self-serving motives", says everyone sensible."Even just living quietly we destroy things", say the environmentalists."Even our noblest actions fall horribly short of the mark", say the EAs.Lots of people on this forum have struggled with the feeling of being compromised. Since FTX. Or Leverage. Or Guzey. Or Thiel. Or Singer. Or Mill or whatever.But this is the normal course of a life, including highly moral lives. (Part of this normality comes from shame usually being a common sense matter - and common sense morals correlate with actual harm, but are often wrong in the precise ways this movement is devoted to countering!)But the greater part of it being normal is that all action incurs risk, including moral risk. We do our best to avoid them (and in my experience grantmakers are vigilant about negative EV things), but you can't avoid it entirely. (Again: total inaction also does not avoid it.) Empirically, this risk level is high enough that nearly everyone eventually bites it.e.g.The EU is a Nobel peace prize winning organisation you might have heard of. But their Common Agricultural Policy causes billions of dollars of damage to poor-world farmers, and has been called a "crime against humanity".Mother Theresa's well-resourced clinics and hospices were remarkably incompetent and rarely prescribed pain medication, apparently under the belief that suffering brings us closer to God.Gandhi's (and Nehru's) economic policies perpetuated poverty to the tune of millions of dead children equivalents.The American labour hero Cesar Chavez sold out undocumented Mexicans and opposed immigration in a classic protectionist scheme.The Vatican.and so on.Despite appearances, this isn't a tu quoque defence of FTX! The point is to set the occasionally appropriate recriminations of the last month in context. You will make mistakes, and people will rightly hold you to them. It will feel terrible. If you join a movement it will embarrass you eventually. Sorry.(Someone could use the above argument to licence risky behaviour - "in for a penny". But of course, like anything, being compromised is a matter of degree. Higher degrees are to be avoided fervently, insofar as they are downstream of actual harm, which they probably are.)You might think that the idle (like the chattering classes) aren't compromised, but they are. They stood by while the millions suffered, despite their remarkable power to help.Quite true, since we are all living now rather than say under feudalism.Maybe this sounds like a strawman to you, but consider our disdain for Mackenzie Scott giving her wealth to poor and artsy Americans.Bentham is perhaps the second most-demonised consequentialist - and yet he strikes me as nearly uncompromised. His much-mooted imperialism is not one, for instance. The most you can say is that he was a bit naive about state power, privacy, legibility.What's a prior then, if the incidence is 99%?Say 70 years in which to disgrace yourself. How many actions per year? Well, one tweet can do it, so potentially thousands. Call it 300.99% / 21000 = a 0.005% risk of compromise per-action. Clearly a very fragile estimate.He was also very racist, but this isn't the sort of thing that can plausibly fall under understandable moral risk.also sometimes wronglyI await a quantification of compromise, so that ...]]>
Thu, 05 Jan 2023 13:59:36 +0000 EA - On being compromised by Gavin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On being compromised, published by Gavin on January 5, 2023 on The Effective Altruism Forum.Only the young and the saints are uncompromised. Everyone else has tried to do something in the world and eventually slipped up (or just been associated with someone else who slipped up).Say that you are compromised if it is easy for someone to shame you. This takes lots of forms:"We are all sinners", say the Christians."We are all privileged", say the identitarians."We all have some self-serving motives", says everyone sensible."Even just living quietly we destroy things", say the environmentalists."Even our noblest actions fall horribly short of the mark", say the EAs.Lots of people on this forum have struggled with the feeling of being compromised. Since FTX. Or Leverage. Or Guzey. Or Thiel. Or Singer. Or Mill or whatever.But this is the normal course of a life, including highly moral lives. (Part of this normality comes from shame usually being a common sense matter - and common sense morals correlate with actual harm, but are often wrong in the precise ways this movement is devoted to countering!)But the greater part of it being normal is that all action incurs risk, including moral risk. We do our best to avoid them (and in my experience grantmakers are vigilant about negative EV things), but you can't avoid it entirely. (Again: total inaction also does not avoid it.) Empirically, this risk level is high enough that nearly everyone eventually bites it.e.g.The EU is a Nobel peace prize winning organisation you might have heard of. But their Common Agricultural Policy causes billions of dollars of damage to poor-world farmers, and has been called a "crime against humanity".Mother Theresa's well-resourced clinics and hospices were remarkably incompetent and rarely prescribed pain medication, apparently under the belief that suffering brings us closer to God.Gandhi's (and Nehru's) economic policies perpetuated poverty to the tune of millions of dead children equivalents.The American labour hero Cesar Chavez sold out undocumented Mexicans and opposed immigration in a classic protectionist scheme.The Vatican.and so on.Despite appearances, this isn't a tu quoque defence of FTX! The point is to set the occasionally appropriate recriminations of the last month in context. You will make mistakes, and people will rightly hold you to them. It will feel terrible. If you join a movement it will embarrass you eventually. Sorry.(Someone could use the above argument to licence risky behaviour - "in for a penny". But of course, like anything, being compromised is a matter of degree. Higher degrees are to be avoided fervently, insofar as they are downstream of actual harm, which they probably are.)You might think that the idle (like the chattering classes) aren't compromised, but they are. They stood by while the millions suffered, despite their remarkable power to help.Quite true, since we are all living now rather than say under feudalism.Maybe this sounds like a strawman to you, but consider our disdain for Mackenzie Scott giving her wealth to poor and artsy Americans.Bentham is perhaps the second most-demonised consequentialist - and yet he strikes me as nearly uncompromised. His much-mooted imperialism is not one, for instance. The most you can say is that he was a bit naive about state power, privacy, legibility.What's a prior then, if the incidence is 99%?Say 70 years in which to disgrace yourself. How many actions per year? Well, one tweet can do it, so potentially thousands. Call it 300.99% / 21000 = a 0.005% risk of compromise per-action. Clearly a very fragile estimate.He was also very racist, but this isn't the sort of thing that can plausibly fall under understandable moral risk.also sometimes wronglyI await a quantification of compromise, so that ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On being compromised, published by Gavin on January 5, 2023 on The Effective Altruism Forum.Only the young and the saints are uncompromised. Everyone else has tried to do something in the world and eventually slipped up (or just been associated with someone else who slipped up).Say that you are compromised if it is easy for someone to shame you. This takes lots of forms:"We are all sinners", say the Christians."We are all privileged", say the identitarians."We all have some self-serving motives", says everyone sensible."Even just living quietly we destroy things", say the environmentalists."Even our noblest actions fall horribly short of the mark", say the EAs.Lots of people on this forum have struggled with the feeling of being compromised. Since FTX. Or Leverage. Or Guzey. Or Thiel. Or Singer. Or Mill or whatever.But this is the normal course of a life, including highly moral lives. (Part of this normality comes from shame usually being a common sense matter - and common sense morals correlate with actual harm, but are often wrong in the precise ways this movement is devoted to countering!)But the greater part of it being normal is that all action incurs risk, including moral risk. We do our best to avoid them (and in my experience grantmakers are vigilant about negative EV things), but you can't avoid it entirely. (Again: total inaction also does not avoid it.) Empirically, this risk level is high enough that nearly everyone eventually bites it.e.g.The EU is a Nobel peace prize winning organisation you might have heard of. But their Common Agricultural Policy causes billions of dollars of damage to poor-world farmers, and has been called a "crime against humanity".Mother Theresa's well-resourced clinics and hospices were remarkably incompetent and rarely prescribed pain medication, apparently under the belief that suffering brings us closer to God.Gandhi's (and Nehru's) economic policies perpetuated poverty to the tune of millions of dead children equivalents.The American labour hero Cesar Chavez sold out undocumented Mexicans and opposed immigration in a classic protectionist scheme.The Vatican.and so on.Despite appearances, this isn't a tu quoque defence of FTX! The point is to set the occasionally appropriate recriminations of the last month in context. You will make mistakes, and people will rightly hold you to them. It will feel terrible. If you join a movement it will embarrass you eventually. Sorry.(Someone could use the above argument to licence risky behaviour - "in for a penny". But of course, like anything, being compromised is a matter of degree. Higher degrees are to be avoided fervently, insofar as they are downstream of actual harm, which they probably are.)You might think that the idle (like the chattering classes) aren't compromised, but they are. They stood by while the millions suffered, despite their remarkable power to help.Quite true, since we are all living now rather than say under feudalism.Maybe this sounds like a strawman to you, but consider our disdain for Mackenzie Scott giving her wealth to poor and artsy Americans.Bentham is perhaps the second most-demonised consequentialist - and yet he strikes me as nearly uncompromised. His much-mooted imperialism is not one, for instance. The most you can say is that he was a bit naive about state power, privacy, legibility.What's a prior then, if the incidence is 99%?Say 70 years in which to disgrace yourself. How many actions per year? Well, one tweet can do it, so potentially thousands. Call it 300.99% / 21000 = a 0.005% risk of compromise per-action. Clearly a very fragile estimate.He was also very racist, but this isn't the sort of thing that can plausibly fall under understandable moral risk.also sometimes wronglyI await a quantification of compromise, so that ...]]>
Gavin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:59 None full 4359
7NmxHKCTMX73eLBjW_EA EA - Misleading phrase in a GiveWell Youtube ad by Thomas Kwa Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misleading phrase in a GiveWell Youtube ad, published by Thomas Kwa on January 5, 2023 on The Effective Altruism Forum.In a sponsored segment for GiveWell on a video by the channel Half as Interesting, the narrator Sam Denby says:[...] Personally, I'd give to the Helen Keller Foundation, which I found through GiveWell, because they help save thousands of lives through distributing Vitamin A supplements to children. Vitamin A supplements can help save the lives of children suffering from vitamin A deficiencies and only cost one dollar to deliver a supplement and save a child.This seems to reinforce the misconception that saving lives in the developing world is incredibly cheap. GiveWell's cost effectiveness estimates actually range from ~$1500 to ~$27,000 for Helen Keller's various regional programs, so this is off by 3 OOMs.I'm not sure if this quote was under GiveWell's editorial control, but to the extent it was I'm disappointed. Surely GiveWell should try to prevent this kind of thing from happening in the future, even if the sponsoree is speaking for themself, the misleading statement is brief, the misinformation looks favorable to GiveWell, or other charities' ads are also misleading.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thomas Kwa https://forum.effectivealtruism.org/posts/7NmxHKCTMX73eLBjW/misleading-phrase-in-a-givewell-youtube-ad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misleading phrase in a GiveWell Youtube ad, published by Thomas Kwa on January 5, 2023 on The Effective Altruism Forum.In a sponsored segment for GiveWell on a video by the channel Half as Interesting, the narrator Sam Denby says:[...] Personally, I'd give to the Helen Keller Foundation, which I found through GiveWell, because they help save thousands of lives through distributing Vitamin A supplements to children. Vitamin A supplements can help save the lives of children suffering from vitamin A deficiencies and only cost one dollar to deliver a supplement and save a child.This seems to reinforce the misconception that saving lives in the developing world is incredibly cheap. GiveWell's cost effectiveness estimates actually range from ~$1500 to ~$27,000 for Helen Keller's various regional programs, so this is off by 3 OOMs.I'm not sure if this quote was under GiveWell's editorial control, but to the extent it was I'm disappointed. Surely GiveWell should try to prevent this kind of thing from happening in the future, even if the sponsoree is speaking for themself, the misleading statement is brief, the misinformation looks favorable to GiveWell, or other charities' ads are also misleading.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 05 Jan 2023 13:32:49 +0000 EA - Misleading phrase in a GiveWell Youtube ad by Thomas Kwa Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misleading phrase in a GiveWell Youtube ad, published by Thomas Kwa on January 5, 2023 on The Effective Altruism Forum.In a sponsored segment for GiveWell on a video by the channel Half as Interesting, the narrator Sam Denby says:[...] Personally, I'd give to the Helen Keller Foundation, which I found through GiveWell, because they help save thousands of lives through distributing Vitamin A supplements to children. Vitamin A supplements can help save the lives of children suffering from vitamin A deficiencies and only cost one dollar to deliver a supplement and save a child.This seems to reinforce the misconception that saving lives in the developing world is incredibly cheap. GiveWell's cost effectiveness estimates actually range from ~$1500 to ~$27,000 for Helen Keller's various regional programs, so this is off by 3 OOMs.I'm not sure if this quote was under GiveWell's editorial control, but to the extent it was I'm disappointed. Surely GiveWell should try to prevent this kind of thing from happening in the future, even if the sponsoree is speaking for themself, the misleading statement is brief, the misinformation looks favorable to GiveWell, or other charities' ads are also misleading.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misleading phrase in a GiveWell Youtube ad, published by Thomas Kwa on January 5, 2023 on The Effective Altruism Forum.In a sponsored segment for GiveWell on a video by the channel Half as Interesting, the narrator Sam Denby says:[...] Personally, I'd give to the Helen Keller Foundation, which I found through GiveWell, because they help save thousands of lives through distributing Vitamin A supplements to children. Vitamin A supplements can help save the lives of children suffering from vitamin A deficiencies and only cost one dollar to deliver a supplement and save a child.This seems to reinforce the misconception that saving lives in the developing world is incredibly cheap. GiveWell's cost effectiveness estimates actually range from ~$1500 to ~$27,000 for Helen Keller's various regional programs, so this is off by 3 OOMs.I'm not sure if this quote was under GiveWell's editorial control, but to the extent it was I'm disappointed. Surely GiveWell should try to prevent this kind of thing from happening in the future, even if the sponsoree is speaking for themself, the misleading statement is brief, the misinformation looks favorable to GiveWell, or other charities' ads are also misleading.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thomas Kwa https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:23 None full 4360
5h8bNTFHkrNNzrrJf_EA EA - Results from the AI testing hackathon by Esben Kran Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Results from the AI testing hackathon, published by Esben Kran on January 2, 2023 on The Effective Altruism Forum.We (Apart Research) ran a hackathon for AI testing research projects with 11 projects submitted by 34 participants between the 16th and 18th December. Here we share the winning projects. See them all here. In summary:Found that unsupervised latent knowledge representation is generalizable and takes the first steps towards a benchmark using the ETHICS ambiguous / unambiguous examples with latent knowledge evaluation.Created a new way to use token loss trajectories as a marker for targeting our interpretability methods towards a focus area.Investigated three potential inverse scaling phenomena: Counting letters, chaining premises and solving equations. Found incidental inverse scaling on one of them and U-shaped scaling on another.Implemented Trojans into Transformer models and used a gradient arithmetic technique to combine multiple Trojan triggers into one Transformer model.(honorable mention) Invented a way to test how quickly models become misaligned by negative example fine-tuning.Thank you to Zaki, Fazl, Rauno, Charbel, Nguyen, more jam site organizers, and the participants for making it all possible.Discovering Latent Knowledge in Language Models Without Supervision - extensions and testingBy Agatha Duzan, Matthieu David, Jonathan ClaybroughAbstract: Based on the paper "Discovering Latent Knowledge in Language Models without Supervision" this project discusses how well the proposed method applies to the concept of ambiguity.To do that, we tested the Contrast Consistent Search method on a dataset which contained both clear cut (0-1) and ambiguous (0,5) examples: We chose the ETHICS-commonsense dataset.The global conclusion is that the CCS approach seems to generalize well in ambiguous situations, and could potentially be used to determine a model’s latent knowledge about other concepts.These figures show how the CCS results for last layer activations splits into two groups for the non-ambiguous training samples while the ambiguous test samples on the ETHICS dataset reveals the same ambiguity of latent knowledge by the flattened Gaussian inference probability distribution.Haydn & Esben’s judging comment: This project is very good in investigating the generality of unsupervised latent knowledge learning. It also seems quite useful as a direct test of how easy it is to extract latent knowledge and provides an avenue towards a benchmark using the ETHICS unambiguous/ambiguous examples dataset. Excited to see this work continue!Read the report and the code (needs updating).Investigating Training Dynamics via Token Loss TrajectoriesBy Alex FooteAbstract: Evaluations of ML systems typically focus on average statistical performance on a dataset measured at the end of training. However, this type of evaluation is relatively coarse, and does not provide insight into the training dynamics of the model.We present tools for stratifying tokens into groups based on arbitrary functions and measuring the loss on these token groups throughout the training process of a Language Model. By evaluating the loss trajectory of meaningful groups of tokens throughout the training process, we can gain more insight into how the model develops during training, and make interesting observations that could be investigated further using interpretability tools to gain insight into the development of specific mechanisms within a model.We use this lens to look at the training dynamics of the region in which induction heads develop. We also zoom in on a specific region of training where there is a spike in loss and find that within this region the majority of tokens follow the loss trajectory of a spike, but a small set follow the inverse trajectory.Haydn & Esben’s ju...]]>
Esben Kran https://forum.effectivealtruism.org/posts/5h8bNTFHkrNNzrrJf/results-from-the-ai-testing-hackathon Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Results from the AI testing hackathon, published by Esben Kran on January 2, 2023 on The Effective Altruism Forum.We (Apart Research) ran a hackathon for AI testing research projects with 11 projects submitted by 34 participants between the 16th and 18th December. Here we share the winning projects. See them all here. In summary:Found that unsupervised latent knowledge representation is generalizable and takes the first steps towards a benchmark using the ETHICS ambiguous / unambiguous examples with latent knowledge evaluation.Created a new way to use token loss trajectories as a marker for targeting our interpretability methods towards a focus area.Investigated three potential inverse scaling phenomena: Counting letters, chaining premises and solving equations. Found incidental inverse scaling on one of them and U-shaped scaling on another.Implemented Trojans into Transformer models and used a gradient arithmetic technique to combine multiple Trojan triggers into one Transformer model.(honorable mention) Invented a way to test how quickly models become misaligned by negative example fine-tuning.Thank you to Zaki, Fazl, Rauno, Charbel, Nguyen, more jam site organizers, and the participants for making it all possible.Discovering Latent Knowledge in Language Models Without Supervision - extensions and testingBy Agatha Duzan, Matthieu David, Jonathan ClaybroughAbstract: Based on the paper "Discovering Latent Knowledge in Language Models without Supervision" this project discusses how well the proposed method applies to the concept of ambiguity.To do that, we tested the Contrast Consistent Search method on a dataset which contained both clear cut (0-1) and ambiguous (0,5) examples: We chose the ETHICS-commonsense dataset.The global conclusion is that the CCS approach seems to generalize well in ambiguous situations, and could potentially be used to determine a model’s latent knowledge about other concepts.These figures show how the CCS results for last layer activations splits into two groups for the non-ambiguous training samples while the ambiguous test samples on the ETHICS dataset reveals the same ambiguity of latent knowledge by the flattened Gaussian inference probability distribution.Haydn & Esben’s judging comment: This project is very good in investigating the generality of unsupervised latent knowledge learning. It also seems quite useful as a direct test of how easy it is to extract latent knowledge and provides an avenue towards a benchmark using the ETHICS unambiguous/ambiguous examples dataset. Excited to see this work continue!Read the report and the code (needs updating).Investigating Training Dynamics via Token Loss TrajectoriesBy Alex FooteAbstract: Evaluations of ML systems typically focus on average statistical performance on a dataset measured at the end of training. However, this type of evaluation is relatively coarse, and does not provide insight into the training dynamics of the model.We present tools for stratifying tokens into groups based on arbitrary functions and measuring the loss on these token groups throughout the training process of a Language Model. By evaluating the loss trajectory of meaningful groups of tokens throughout the training process, we can gain more insight into how the model develops during training, and make interesting observations that could be investigated further using interpretability tools to gain insight into the development of specific mechanisms within a model.We use this lens to look at the training dynamics of the region in which induction heads develop. We also zoom in on a specific region of training where there is a spike in loss and find that within this region the majority of tokens follow the loss trajectory of a spike, but a small set follow the inverse trajectory.Haydn & Esben’s ju...]]>
Thu, 05 Jan 2023 13:10:42 +0000 EA - Results from the AI testing hackathon by Esben Kran Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Results from the AI testing hackathon, published by Esben Kran on January 2, 2023 on The Effective Altruism Forum.We (Apart Research) ran a hackathon for AI testing research projects with 11 projects submitted by 34 participants between the 16th and 18th December. Here we share the winning projects. See them all here. In summary:Found that unsupervised latent knowledge representation is generalizable and takes the first steps towards a benchmark using the ETHICS ambiguous / unambiguous examples with latent knowledge evaluation.Created a new way to use token loss trajectories as a marker for targeting our interpretability methods towards a focus area.Investigated three potential inverse scaling phenomena: Counting letters, chaining premises and solving equations. Found incidental inverse scaling on one of them and U-shaped scaling on another.Implemented Trojans into Transformer models and used a gradient arithmetic technique to combine multiple Trojan triggers into one Transformer model.(honorable mention) Invented a way to test how quickly models become misaligned by negative example fine-tuning.Thank you to Zaki, Fazl, Rauno, Charbel, Nguyen, more jam site organizers, and the participants for making it all possible.Discovering Latent Knowledge in Language Models Without Supervision - extensions and testingBy Agatha Duzan, Matthieu David, Jonathan ClaybroughAbstract: Based on the paper "Discovering Latent Knowledge in Language Models without Supervision" this project discusses how well the proposed method applies to the concept of ambiguity.To do that, we tested the Contrast Consistent Search method on a dataset which contained both clear cut (0-1) and ambiguous (0,5) examples: We chose the ETHICS-commonsense dataset.The global conclusion is that the CCS approach seems to generalize well in ambiguous situations, and could potentially be used to determine a model’s latent knowledge about other concepts.These figures show how the CCS results for last layer activations splits into two groups for the non-ambiguous training samples while the ambiguous test samples on the ETHICS dataset reveals the same ambiguity of latent knowledge by the flattened Gaussian inference probability distribution.Haydn & Esben’s judging comment: This project is very good in investigating the generality of unsupervised latent knowledge learning. It also seems quite useful as a direct test of how easy it is to extract latent knowledge and provides an avenue towards a benchmark using the ETHICS unambiguous/ambiguous examples dataset. Excited to see this work continue!Read the report and the code (needs updating).Investigating Training Dynamics via Token Loss TrajectoriesBy Alex FooteAbstract: Evaluations of ML systems typically focus on average statistical performance on a dataset measured at the end of training. However, this type of evaluation is relatively coarse, and does not provide insight into the training dynamics of the model.We present tools for stratifying tokens into groups based on arbitrary functions and measuring the loss on these token groups throughout the training process of a Language Model. By evaluating the loss trajectory of meaningful groups of tokens throughout the training process, we can gain more insight into how the model develops during training, and make interesting observations that could be investigated further using interpretability tools to gain insight into the development of specific mechanisms within a model.We use this lens to look at the training dynamics of the region in which induction heads develop. We also zoom in on a specific region of training where there is a spike in loss and find that within this region the majority of tokens follow the loss trajectory of a spike, but a small set follow the inverse trajectory.Haydn & Esben’s ju...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Results from the AI testing hackathon, published by Esben Kran on January 2, 2023 on The Effective Altruism Forum.We (Apart Research) ran a hackathon for AI testing research projects with 11 projects submitted by 34 participants between the 16th and 18th December. Here we share the winning projects. See them all here. In summary:Found that unsupervised latent knowledge representation is generalizable and takes the first steps towards a benchmark using the ETHICS ambiguous / unambiguous examples with latent knowledge evaluation.Created a new way to use token loss trajectories as a marker for targeting our interpretability methods towards a focus area.Investigated three potential inverse scaling phenomena: Counting letters, chaining premises and solving equations. Found incidental inverse scaling on one of them and U-shaped scaling on another.Implemented Trojans into Transformer models and used a gradient arithmetic technique to combine multiple Trojan triggers into one Transformer model.(honorable mention) Invented a way to test how quickly models become misaligned by negative example fine-tuning.Thank you to Zaki, Fazl, Rauno, Charbel, Nguyen, more jam site organizers, and the participants for making it all possible.Discovering Latent Knowledge in Language Models Without Supervision - extensions and testingBy Agatha Duzan, Matthieu David, Jonathan ClaybroughAbstract: Based on the paper "Discovering Latent Knowledge in Language Models without Supervision" this project discusses how well the proposed method applies to the concept of ambiguity.To do that, we tested the Contrast Consistent Search method on a dataset which contained both clear cut (0-1) and ambiguous (0,5) examples: We chose the ETHICS-commonsense dataset.The global conclusion is that the CCS approach seems to generalize well in ambiguous situations, and could potentially be used to determine a model’s latent knowledge about other concepts.These figures show how the CCS results for last layer activations splits into two groups for the non-ambiguous training samples while the ambiguous test samples on the ETHICS dataset reveals the same ambiguity of latent knowledge by the flattened Gaussian inference probability distribution.Haydn & Esben’s judging comment: This project is very good in investigating the generality of unsupervised latent knowledge learning. It also seems quite useful as a direct test of how easy it is to extract latent knowledge and provides an avenue towards a benchmark using the ETHICS unambiguous/ambiguous examples dataset. Excited to see this work continue!Read the report and the code (needs updating).Investigating Training Dynamics via Token Loss TrajectoriesBy Alex FooteAbstract: Evaluations of ML systems typically focus on average statistical performance on a dataset measured at the end of training. However, this type of evaluation is relatively coarse, and does not provide insight into the training dynamics of the model.We present tools for stratifying tokens into groups based on arbitrary functions and measuring the loss on these token groups throughout the training process of a Language Model. By evaluating the loss trajectory of meaningful groups of tokens throughout the training process, we can gain more insight into how the model develops during training, and make interesting observations that could be investigated further using interpretability tools to gain insight into the development of specific mechanisms within a model.We use this lens to look at the training dynamics of the region in which induction heads develop. We also zoom in on a specific region of training where there is a spike in loss and find that within this region the majority of tokens follow the loss trajectory of a spike, but a small set follow the inverse trajectory.Haydn & Esben’s ju...]]>
Esben Kran https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:31 None full 4361
iv3NmPjozonLgvT66_EA EA - Do people have a form or resources for capturing indirect interpersonal impacts? by PeterSlattery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do people have a form or resources for capturing indirect interpersonal impacts?, published by PeterSlattery on January 4, 2023 on The Effective Altruism Forum.In my experience, many EAs help and support others in the community (e.g., by giving feedback, emotional support, or making connections etc).These 'helpful EAs' often improve the impact of those who receive their help (e.g., because the receivers start new collaborations, or improve their productivity or career choice etc). I'll call this impact 'indirect interpersonal impact'.Most helpful EA's indirect interpersonal impacts are illegible (i.e., hard to capture/show). This means that many EAs who have high indirect interpersonal impact (e.g., via helping many others or being a good knowledge broker/connector etc) are undervalued relative to those who mostly focus on doing their own projects(but who may benefit from the help of many others).I think that this is probably important to address. It seems important to acknowledge and recognize the contributions of individuals who may not necessarily have a tangible output or project to show for their efforts, but may still have had a significant positive impact on others.With the above in mind, I am wondering if anyone has a form to capture indirect interpersonal impacts or similar, or some resources that they use or recommend using?I am not aware of anything which exists. I would like to either adapt or make something to use myself and share with others. I think that 80,000's evaluation model is probably the best template to work from, but I haven't investigated that yet.I'd also welcome any thoughts on the claims made above and whether they resonate or seem incorrect.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
PeterSlattery https://forum.effectivealtruism.org/posts/iv3NmPjozonLgvT66/do-people-have-a-form-or-resources-for-capturing-indirect Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do people have a form or resources for capturing indirect interpersonal impacts?, published by PeterSlattery on January 4, 2023 on The Effective Altruism Forum.In my experience, many EAs help and support others in the community (e.g., by giving feedback, emotional support, or making connections etc).These 'helpful EAs' often improve the impact of those who receive their help (e.g., because the receivers start new collaborations, or improve their productivity or career choice etc). I'll call this impact 'indirect interpersonal impact'.Most helpful EA's indirect interpersonal impacts are illegible (i.e., hard to capture/show). This means that many EAs who have high indirect interpersonal impact (e.g., via helping many others or being a good knowledge broker/connector etc) are undervalued relative to those who mostly focus on doing their own projects(but who may benefit from the help of many others).I think that this is probably important to address. It seems important to acknowledge and recognize the contributions of individuals who may not necessarily have a tangible output or project to show for their efforts, but may still have had a significant positive impact on others.With the above in mind, I am wondering if anyone has a form to capture indirect interpersonal impacts or similar, or some resources that they use or recommend using?I am not aware of anything which exists. I would like to either adapt or make something to use myself and share with others. I think that 80,000's evaluation model is probably the best template to work from, but I haven't investigated that yet.I'd also welcome any thoughts on the claims made above and whether they resonate or seem incorrect.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 04 Jan 2023 13:32:08 +0000 EA - Do people have a form or resources for capturing indirect interpersonal impacts? by PeterSlattery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do people have a form or resources for capturing indirect interpersonal impacts?, published by PeterSlattery on January 4, 2023 on The Effective Altruism Forum.In my experience, many EAs help and support others in the community (e.g., by giving feedback, emotional support, or making connections etc).These 'helpful EAs' often improve the impact of those who receive their help (e.g., because the receivers start new collaborations, or improve their productivity or career choice etc). I'll call this impact 'indirect interpersonal impact'.Most helpful EA's indirect interpersonal impacts are illegible (i.e., hard to capture/show). This means that many EAs who have high indirect interpersonal impact (e.g., via helping many others or being a good knowledge broker/connector etc) are undervalued relative to those who mostly focus on doing their own projects(but who may benefit from the help of many others).I think that this is probably important to address. It seems important to acknowledge and recognize the contributions of individuals who may not necessarily have a tangible output or project to show for their efforts, but may still have had a significant positive impact on others.With the above in mind, I am wondering if anyone has a form to capture indirect interpersonal impacts or similar, or some resources that they use or recommend using?I am not aware of anything which exists. I would like to either adapt or make something to use myself and share with others. I think that 80,000's evaluation model is probably the best template to work from, but I haven't investigated that yet.I'd also welcome any thoughts on the claims made above and whether they resonate or seem incorrect.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do people have a form or resources for capturing indirect interpersonal impacts?, published by PeterSlattery on January 4, 2023 on The Effective Altruism Forum.In my experience, many EAs help and support others in the community (e.g., by giving feedback, emotional support, or making connections etc).These 'helpful EAs' often improve the impact of those who receive their help (e.g., because the receivers start new collaborations, or improve their productivity or career choice etc). I'll call this impact 'indirect interpersonal impact'.Most helpful EA's indirect interpersonal impacts are illegible (i.e., hard to capture/show). This means that many EAs who have high indirect interpersonal impact (e.g., via helping many others or being a good knowledge broker/connector etc) are undervalued relative to those who mostly focus on doing their own projects(but who may benefit from the help of many others).I think that this is probably important to address. It seems important to acknowledge and recognize the contributions of individuals who may not necessarily have a tangible output or project to show for their efforts, but may still have had a significant positive impact on others.With the above in mind, I am wondering if anyone has a form to capture indirect interpersonal impacts or similar, or some resources that they use or recommend using?I am not aware of anything which exists. I would like to either adapt or make something to use myself and share with others. I think that 80,000's evaluation model is probably the best template to work from, but I haven't investigated that yet.I'd also welcome any thoughts on the claims made above and whether they resonate or seem incorrect.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
PeterSlattery https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:51 None full 4353
iuBoizzA5c5KfWysc_EA EA - Announcing Insights for Impact by Christian Pearson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Insights for Impact, published by Christian Pearson on January 4, 2023 on The Effective Altruism Forum.Hey all!Are you wanting to follow EA research, but finding papers and longform forum posts too dry?Late last year, Jenna Ong and I noticed a lack of research-focused EA video content and decided to do something about it. Today, we are excited to introduce Insights for Impact, a YouTube channel that’s all about communicating the key insights of EA-aligned research papers.In our first video, How Science Misunderstands Power, we explore why well-meaning scientists failed to prevent nuclear proliferation in the 20th century. Perhaps by examining the history of nuclear weapon development, we may be able to better manage other powerful technologies, like AI and genetic engineering.A 2018 paper by Samo Burja and Zachary Lerangis, The Scientists, the Statesman, and the Bomb, served as the basis for this video. However, we also drew inspiration from HaydnBelfield’s post, especially their idea that the current headspace of the AI Safety community closely resembles the “this is the most important thing” mindset of scientists throughout the mid 20th century. From these case studies, it seems that both social and technical factors are crucial in ensuring powerful technologies have a positive impact.In future videos, we want to explore a range of EA-relevant cause areas. We’d love to collaborate with researchers to ensure we accurately portray their work. So if you’re a researcher who wants to give your work a voice outside of the forum, please get in touch!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Christian Pearson https://forum.effectivealtruism.org/posts/iuBoizzA5c5KfWysc/announcing-insights-for-impact Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Insights for Impact, published by Christian Pearson on January 4, 2023 on The Effective Altruism Forum.Hey all!Are you wanting to follow EA research, but finding papers and longform forum posts too dry?Late last year, Jenna Ong and I noticed a lack of research-focused EA video content and decided to do something about it. Today, we are excited to introduce Insights for Impact, a YouTube channel that’s all about communicating the key insights of EA-aligned research papers.In our first video, How Science Misunderstands Power, we explore why well-meaning scientists failed to prevent nuclear proliferation in the 20th century. Perhaps by examining the history of nuclear weapon development, we may be able to better manage other powerful technologies, like AI and genetic engineering.A 2018 paper by Samo Burja and Zachary Lerangis, The Scientists, the Statesman, and the Bomb, served as the basis for this video. However, we also drew inspiration from HaydnBelfield’s post, especially their idea that the current headspace of the AI Safety community closely resembles the “this is the most important thing” mindset of scientists throughout the mid 20th century. From these case studies, it seems that both social and technical factors are crucial in ensuring powerful technologies have a positive impact.In future videos, we want to explore a range of EA-relevant cause areas. We’d love to collaborate with researchers to ensure we accurately portray their work. So if you’re a researcher who wants to give your work a voice outside of the forum, please get in touch!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 04 Jan 2023 10:49:37 +0000 EA - Announcing Insights for Impact by Christian Pearson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Insights for Impact, published by Christian Pearson on January 4, 2023 on The Effective Altruism Forum.Hey all!Are you wanting to follow EA research, but finding papers and longform forum posts too dry?Late last year, Jenna Ong and I noticed a lack of research-focused EA video content and decided to do something about it. Today, we are excited to introduce Insights for Impact, a YouTube channel that’s all about communicating the key insights of EA-aligned research papers.In our first video, How Science Misunderstands Power, we explore why well-meaning scientists failed to prevent nuclear proliferation in the 20th century. Perhaps by examining the history of nuclear weapon development, we may be able to better manage other powerful technologies, like AI and genetic engineering.A 2018 paper by Samo Burja and Zachary Lerangis, The Scientists, the Statesman, and the Bomb, served as the basis for this video. However, we also drew inspiration from HaydnBelfield’s post, especially their idea that the current headspace of the AI Safety community closely resembles the “this is the most important thing” mindset of scientists throughout the mid 20th century. From these case studies, it seems that both social and technical factors are crucial in ensuring powerful technologies have a positive impact.In future videos, we want to explore a range of EA-relevant cause areas. We’d love to collaborate with researchers to ensure we accurately portray their work. So if you’re a researcher who wants to give your work a voice outside of the forum, please get in touch!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Insights for Impact, published by Christian Pearson on January 4, 2023 on The Effective Altruism Forum.Hey all!Are you wanting to follow EA research, but finding papers and longform forum posts too dry?Late last year, Jenna Ong and I noticed a lack of research-focused EA video content and decided to do something about it. Today, we are excited to introduce Insights for Impact, a YouTube channel that’s all about communicating the key insights of EA-aligned research papers.In our first video, How Science Misunderstands Power, we explore why well-meaning scientists failed to prevent nuclear proliferation in the 20th century. Perhaps by examining the history of nuclear weapon development, we may be able to better manage other powerful technologies, like AI and genetic engineering.A 2018 paper by Samo Burja and Zachary Lerangis, The Scientists, the Statesman, and the Bomb, served as the basis for this video. However, we also drew inspiration from HaydnBelfield’s post, especially their idea that the current headspace of the AI Safety community closely resembles the “this is the most important thing” mindset of scientists throughout the mid 20th century. From these case studies, it seems that both social and technical factors are crucial in ensuring powerful technologies have a positive impact.In future videos, we want to explore a range of EA-relevant cause areas. We’d love to collaborate with researchers to ensure we accurately portray their work. So if you’re a researcher who wants to give your work a voice outside of the forum, please get in touch!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Christian Pearson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:44 None full 4352
SzNpP3zPWz5aA98YH_EA EA - If EA Community-Building Could Be Net-Negative, What Follows? by joshcmorrison Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If EA Community-Building Could Be Net-Negative, What Follows?, published by joshcmorrison on January 2, 2023 on The Effective Altruism Forum.I think it’s likely that institutional effective altruism was a but-for cause of FTX’s existence and therefore that it may have caused about $8B in economic damage due to FTX’s fraud (as well as potentially causing permanent damage to the reputation of effective altruism and longtermism as ideas). This example makes me feel it’s plausible that effective altruist community-building activities could be net-negative in impact, and I wanted to explore some conjectures about what that plausibility would entail.I recognize this is an emotionally charged issue, and to be clear my claim is not “EA community-building has been net-negative” but instead that that’s plausibly the case (i.e. something like >10% likely). I don’t have strong certainty that I’m right about that and I think a public case that disproved my plausibility claim would be quite valuable. I should also say that I have personally and professionally benefitted greatly from EA community building efforts (most saliently from efforts connected to the Center for Effective Altruism) and I sincerely appreciate and am indebted to that work.Some claims that are related and perhaps vaguely isomorphic to the above which I think are probably true but may feel less strongly about are:To date, there has been a strong presumption among EAs that activities likely to significantly increase the number of people who explicitly identify as effective altruist (or otherwise increase their identification with the EA movement) are default worth funding. That presumption should be weakened.Social movements are likely to overvalue efforts to increase the power of their movement and undervalue their goals actually being accomplished, and EA is not immune to this failure mode.Leadership within social movements are likely to (consciously or unconsciously) overvalue measures that increase the leadership’s own control and influence and under-value measures that reduce it, which is a trap EA community-building efforts may have unintentionally fallen into.Pre-FTX, there was a reasonable assumption that expanding the EA movement was one of the most effective things a person could do, and the FTX catastrophe should significantly update our attitude towards that assumption.FTX should significantly update us on principles and strategies for EA community/movement-building and institutional structure, and there should be more public discourse on what such updates might be.EA is obligated to undertake institutional reforms to minimize the risk of creating an FTX-like problem in the future.Here are some conjectures I’d make for potential implications of believing my plausibility claim:Make Impact Targets Public: Insofar as new evidence has emerged about the impact of EA community building (and/or insofar as incentives towards movement-building may map imperfectly onto real-world impact), it is more important to make public, numerical estimates of the goals of particular community-building grants/projects going forward and to attempt public estimation of actual impact (and connection to real-world ends) of at least some specific grants/projects conducted to date. Outside of GiveWell, I think this is something EA institutions (my own included) should be better about in general, but I think the case is particularly strong in the community-building context given the above.Separate Accounting for Community Building vs. Front-Line Spending: I have argued in the past that meta-level and object-level spending by EAs should be in some sense accounted for separately. I admit this idea is, at the moment, under-specified but one basic example would be “EAs/EA grant makers should say their “front-line” and “met...]]>
joshcmorrison https://forum.effectivealtruism.org/posts/SzNpP3zPWz5aA98YH/if-ea-community-building-could-be-net-negative-what-follows Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If EA Community-Building Could Be Net-Negative, What Follows?, published by joshcmorrison on January 2, 2023 on The Effective Altruism Forum.I think it’s likely that institutional effective altruism was a but-for cause of FTX’s existence and therefore that it may have caused about $8B in economic damage due to FTX’s fraud (as well as potentially causing permanent damage to the reputation of effective altruism and longtermism as ideas). This example makes me feel it’s plausible that effective altruist community-building activities could be net-negative in impact, and I wanted to explore some conjectures about what that plausibility would entail.I recognize this is an emotionally charged issue, and to be clear my claim is not “EA community-building has been net-negative” but instead that that’s plausibly the case (i.e. something like >10% likely). I don’t have strong certainty that I’m right about that and I think a public case that disproved my plausibility claim would be quite valuable. I should also say that I have personally and professionally benefitted greatly from EA community building efforts (most saliently from efforts connected to the Center for Effective Altruism) and I sincerely appreciate and am indebted to that work.Some claims that are related and perhaps vaguely isomorphic to the above which I think are probably true but may feel less strongly about are:To date, there has been a strong presumption among EAs that activities likely to significantly increase the number of people who explicitly identify as effective altruist (or otherwise increase their identification with the EA movement) are default worth funding. That presumption should be weakened.Social movements are likely to overvalue efforts to increase the power of their movement and undervalue their goals actually being accomplished, and EA is not immune to this failure mode.Leadership within social movements are likely to (consciously or unconsciously) overvalue measures that increase the leadership’s own control and influence and under-value measures that reduce it, which is a trap EA community-building efforts may have unintentionally fallen into.Pre-FTX, there was a reasonable assumption that expanding the EA movement was one of the most effective things a person could do, and the FTX catastrophe should significantly update our attitude towards that assumption.FTX should significantly update us on principles and strategies for EA community/movement-building and institutional structure, and there should be more public discourse on what such updates might be.EA is obligated to undertake institutional reforms to minimize the risk of creating an FTX-like problem in the future.Here are some conjectures I’d make for potential implications of believing my plausibility claim:Make Impact Targets Public: Insofar as new evidence has emerged about the impact of EA community building (and/or insofar as incentives towards movement-building may map imperfectly onto real-world impact), it is more important to make public, numerical estimates of the goals of particular community-building grants/projects going forward and to attempt public estimation of actual impact (and connection to real-world ends) of at least some specific grants/projects conducted to date. Outside of GiveWell, I think this is something EA institutions (my own included) should be better about in general, but I think the case is particularly strong in the community-building context given the above.Separate Accounting for Community Building vs. Front-Line Spending: I have argued in the past that meta-level and object-level spending by EAs should be in some sense accounted for separately. I admit this idea is, at the moment, under-specified but one basic example would be “EAs/EA grant makers should say their “front-line” and “met...]]>
Mon, 02 Jan 2023 21:54:57 +0000 EA - If EA Community-Building Could Be Net-Negative, What Follows? by joshcmorrison Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If EA Community-Building Could Be Net-Negative, What Follows?, published by joshcmorrison on January 2, 2023 on The Effective Altruism Forum.I think it’s likely that institutional effective altruism was a but-for cause of FTX’s existence and therefore that it may have caused about $8B in economic damage due to FTX’s fraud (as well as potentially causing permanent damage to the reputation of effective altruism and longtermism as ideas). This example makes me feel it’s plausible that effective altruist community-building activities could be net-negative in impact, and I wanted to explore some conjectures about what that plausibility would entail.I recognize this is an emotionally charged issue, and to be clear my claim is not “EA community-building has been net-negative” but instead that that’s plausibly the case (i.e. something like >10% likely). I don’t have strong certainty that I’m right about that and I think a public case that disproved my plausibility claim would be quite valuable. I should also say that I have personally and professionally benefitted greatly from EA community building efforts (most saliently from efforts connected to the Center for Effective Altruism) and I sincerely appreciate and am indebted to that work.Some claims that are related and perhaps vaguely isomorphic to the above which I think are probably true but may feel less strongly about are:To date, there has been a strong presumption among EAs that activities likely to significantly increase the number of people who explicitly identify as effective altruist (or otherwise increase their identification with the EA movement) are default worth funding. That presumption should be weakened.Social movements are likely to overvalue efforts to increase the power of their movement and undervalue their goals actually being accomplished, and EA is not immune to this failure mode.Leadership within social movements are likely to (consciously or unconsciously) overvalue measures that increase the leadership’s own control and influence and under-value measures that reduce it, which is a trap EA community-building efforts may have unintentionally fallen into.Pre-FTX, there was a reasonable assumption that expanding the EA movement was one of the most effective things a person could do, and the FTX catastrophe should significantly update our attitude towards that assumption.FTX should significantly update us on principles and strategies for EA community/movement-building and institutional structure, and there should be more public discourse on what such updates might be.EA is obligated to undertake institutional reforms to minimize the risk of creating an FTX-like problem in the future.Here are some conjectures I’d make for potential implications of believing my plausibility claim:Make Impact Targets Public: Insofar as new evidence has emerged about the impact of EA community building (and/or insofar as incentives towards movement-building may map imperfectly onto real-world impact), it is more important to make public, numerical estimates of the goals of particular community-building grants/projects going forward and to attempt public estimation of actual impact (and connection to real-world ends) of at least some specific grants/projects conducted to date. Outside of GiveWell, I think this is something EA institutions (my own included) should be better about in general, but I think the case is particularly strong in the community-building context given the above.Separate Accounting for Community Building vs. Front-Line Spending: I have argued in the past that meta-level and object-level spending by EAs should be in some sense accounted for separately. I admit this idea is, at the moment, under-specified but one basic example would be “EAs/EA grant makers should say their “front-line” and “met...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If EA Community-Building Could Be Net-Negative, What Follows?, published by joshcmorrison on January 2, 2023 on The Effective Altruism Forum.I think it’s likely that institutional effective altruism was a but-for cause of FTX’s existence and therefore that it may have caused about $8B in economic damage due to FTX’s fraud (as well as potentially causing permanent damage to the reputation of effective altruism and longtermism as ideas). This example makes me feel it’s plausible that effective altruist community-building activities could be net-negative in impact, and I wanted to explore some conjectures about what that plausibility would entail.I recognize this is an emotionally charged issue, and to be clear my claim is not “EA community-building has been net-negative” but instead that that’s plausibly the case (i.e. something like >10% likely). I don’t have strong certainty that I’m right about that and I think a public case that disproved my plausibility claim would be quite valuable. I should also say that I have personally and professionally benefitted greatly from EA community building efforts (most saliently from efforts connected to the Center for Effective Altruism) and I sincerely appreciate and am indebted to that work.Some claims that are related and perhaps vaguely isomorphic to the above which I think are probably true but may feel less strongly about are:To date, there has been a strong presumption among EAs that activities likely to significantly increase the number of people who explicitly identify as effective altruist (or otherwise increase their identification with the EA movement) are default worth funding. That presumption should be weakened.Social movements are likely to overvalue efforts to increase the power of their movement and undervalue their goals actually being accomplished, and EA is not immune to this failure mode.Leadership within social movements are likely to (consciously or unconsciously) overvalue measures that increase the leadership’s own control and influence and under-value measures that reduce it, which is a trap EA community-building efforts may have unintentionally fallen into.Pre-FTX, there was a reasonable assumption that expanding the EA movement was one of the most effective things a person could do, and the FTX catastrophe should significantly update our attitude towards that assumption.FTX should significantly update us on principles and strategies for EA community/movement-building and institutional structure, and there should be more public discourse on what such updates might be.EA is obligated to undertake institutional reforms to minimize the risk of creating an FTX-like problem in the future.Here are some conjectures I’d make for potential implications of believing my plausibility claim:Make Impact Targets Public: Insofar as new evidence has emerged about the impact of EA community building (and/or insofar as incentives towards movement-building may map imperfectly onto real-world impact), it is more important to make public, numerical estimates of the goals of particular community-building grants/projects going forward and to attempt public estimation of actual impact (and connection to real-world ends) of at least some specific grants/projects conducted to date. Outside of GiveWell, I think this is something EA institutions (my own included) should be better about in general, but I think the case is particularly strong in the community-building context given the above.Separate Accounting for Community Building vs. Front-Line Spending: I have argued in the past that meta-level and object-level spending by EAs should be in some sense accounted for separately. I admit this idea is, at the moment, under-specified but one basic example would be “EAs/EA grant makers should say their “front-line” and “met...]]>
joshcmorrison https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:30 None full 4336
EGztvGHigkekgkvc8_EA EA - Community Building from scratch: The first year of EA Hungary by gergogaspar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community Building from scratch: The first year of EA Hungary, published by gergogaspar on January 2, 2023 on The Effective Altruism Forum.TLDR/summary:I (Gergő) started EA Hungary as a paid organizer in mid-September 2021. In one year, we’ve had 100+ intro fellowship applicants, 88 of whom completed it successfully. We have run other fellowships, from whom around 20 people have benefited. I have had about 110 calls, most of which were oriented on career support for the new members. EA Hungary brought counterfactually ~35 people to EAGx conferences and organized a retreat for 20 students. EA Hungary hired the second employee (Dia) in July, doing ~0.5 FTE.The aim of this post is to share the progress of EA Hungary in its first year (2021 Oct to 2022 Sept) I wrote this up with the intent of helping those who are starting a new group (hence the chronological order). If you are only interested in the main outcomes, read the tables. I hope this will be useful.Background and how I got fundingI learned about EA sometime in 2020 and while attending the EAGxVirtual (13 – 14 June 2020), I met a few other people based in Hungary. We had the first in-person meetup in August 2020, with 6 people. Then the Covid-19 pandemic hit, so we carried on with online meetups about once per month. We ran the intro fellowship for existing members (7 people), followed by the in-depth fellowship (4 people).During this time (and afterwards) I was supported by Catherine Low from CEA, from whom I learned a lot (∼1 call every 1,5 months).In the meantime, I was also volunteering for SoGive as a charity analyst and then as an analysis coordinator (teaching people basic charity analysis + operations work). I learned a lot from this and I think it really sped up my involvement within EA. My experience with SoGive also helped me start and run EA Hungary more effectively, as I have already gained some operations and mentoring experience.In the fall of 2021, I started a second Master’s in Philosophy at ELTE - (Eötvös Loránd University).Thanks to these activities mentioned above, I had a strong enough track record to apply for funding from EAIF to do 0.5 FTE, (although this upscaled to full-time pretty quickly).(I also have this story presented in memes, don’t ask why.)2021 Fall semester (from 2021 Mid-Sept to 2022 January)Main outcomes:(Where you see gaps it means that I didn’t record the exact numbers/data)8-week intro Fellowship: Number of applicants:DataNumber of people successfully completing the fellowship Number of people who filled out the completion survey:Data1-1 Mentoring calls262222∼20Other outcomes:Online social around week 4, which was attended by around 6 people- End-of-fellowship meal which was attended by 14 people.General info about the 2021 Fall period:Marketing and advertisement of the fellowship was almost completely done on Facebook groups of various university programs of ELTE university. (Facebook is still the most widely used social media in Hungary, although it is slowly losing popularity)Lots of start-up time costs, as I was still figuring things out.If you would like to save time on operations, I recommend using some draft emails/forms from the EA Hungary care package as well as these amazing resources: EA Student Groups Handbook, EA Groups Resource Centre and this folder.2022 Spring Semester (2022 February-May)Main outcomes:8-week intro Fellowship: Number of people successfully completing the fellowshipDataNumber of people who filled out the completion survey:DataIn-depth fellowship Number of people successfully completing the fellowship 1-1 Mentoring calls (unfortunately I didn’t keep an exact account, so this is a rough estimate) EAGxOxford attendees EAGxPrague attendees (the two numbers mean all attendees - counterfactual attendees (ie. people who wo...]]>
gergogaspar https://forum.effectivealtruism.org/posts/EGztvGHigkekgkvc8/community-building-from-scratch-the-first-year-of-ea-hungary Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community Building from scratch: The first year of EA Hungary, published by gergogaspar on January 2, 2023 on The Effective Altruism Forum.TLDR/summary:I (Gergő) started EA Hungary as a paid organizer in mid-September 2021. In one year, we’ve had 100+ intro fellowship applicants, 88 of whom completed it successfully. We have run other fellowships, from whom around 20 people have benefited. I have had about 110 calls, most of which were oriented on career support for the new members. EA Hungary brought counterfactually ~35 people to EAGx conferences and organized a retreat for 20 students. EA Hungary hired the second employee (Dia) in July, doing ~0.5 FTE.The aim of this post is to share the progress of EA Hungary in its first year (2021 Oct to 2022 Sept) I wrote this up with the intent of helping those who are starting a new group (hence the chronological order). If you are only interested in the main outcomes, read the tables. I hope this will be useful.Background and how I got fundingI learned about EA sometime in 2020 and while attending the EAGxVirtual (13 – 14 June 2020), I met a few other people based in Hungary. We had the first in-person meetup in August 2020, with 6 people. Then the Covid-19 pandemic hit, so we carried on with online meetups about once per month. We ran the intro fellowship for existing members (7 people), followed by the in-depth fellowship (4 people).During this time (and afterwards) I was supported by Catherine Low from CEA, from whom I learned a lot (∼1 call every 1,5 months).In the meantime, I was also volunteering for SoGive as a charity analyst and then as an analysis coordinator (teaching people basic charity analysis + operations work). I learned a lot from this and I think it really sped up my involvement within EA. My experience with SoGive also helped me start and run EA Hungary more effectively, as I have already gained some operations and mentoring experience.In the fall of 2021, I started a second Master’s in Philosophy at ELTE - (Eötvös Loránd University).Thanks to these activities mentioned above, I had a strong enough track record to apply for funding from EAIF to do 0.5 FTE, (although this upscaled to full-time pretty quickly).(I also have this story presented in memes, don’t ask why.)2021 Fall semester (from 2021 Mid-Sept to 2022 January)Main outcomes:(Where you see gaps it means that I didn’t record the exact numbers/data)8-week intro Fellowship: Number of applicants:DataNumber of people successfully completing the fellowship Number of people who filled out the completion survey:Data1-1 Mentoring calls262222∼20Other outcomes:Online social around week 4, which was attended by around 6 people- End-of-fellowship meal which was attended by 14 people.General info about the 2021 Fall period:Marketing and advertisement of the fellowship was almost completely done on Facebook groups of various university programs of ELTE university. (Facebook is still the most widely used social media in Hungary, although it is slowly losing popularity)Lots of start-up time costs, as I was still figuring things out.If you would like to save time on operations, I recommend using some draft emails/forms from the EA Hungary care package as well as these amazing resources: EA Student Groups Handbook, EA Groups Resource Centre and this folder.2022 Spring Semester (2022 February-May)Main outcomes:8-week intro Fellowship: Number of people successfully completing the fellowshipDataNumber of people who filled out the completion survey:DataIn-depth fellowship Number of people successfully completing the fellowship 1-1 Mentoring calls (unfortunately I didn’t keep an exact account, so this is a rough estimate) EAGxOxford attendees EAGxPrague attendees (the two numbers mean all attendees - counterfactual attendees (ie. people who wo...]]>
Mon, 02 Jan 2023 21:04:15 +0000 EA - Community Building from scratch: The first year of EA Hungary by gergogaspar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community Building from scratch: The first year of EA Hungary, published by gergogaspar on January 2, 2023 on The Effective Altruism Forum.TLDR/summary:I (Gergő) started EA Hungary as a paid organizer in mid-September 2021. In one year, we’ve had 100+ intro fellowship applicants, 88 of whom completed it successfully. We have run other fellowships, from whom around 20 people have benefited. I have had about 110 calls, most of which were oriented on career support for the new members. EA Hungary brought counterfactually ~35 people to EAGx conferences and organized a retreat for 20 students. EA Hungary hired the second employee (Dia) in July, doing ~0.5 FTE.The aim of this post is to share the progress of EA Hungary in its first year (2021 Oct to 2022 Sept) I wrote this up with the intent of helping those who are starting a new group (hence the chronological order). If you are only interested in the main outcomes, read the tables. I hope this will be useful.Background and how I got fundingI learned about EA sometime in 2020 and while attending the EAGxVirtual (13 – 14 June 2020), I met a few other people based in Hungary. We had the first in-person meetup in August 2020, with 6 people. Then the Covid-19 pandemic hit, so we carried on with online meetups about once per month. We ran the intro fellowship for existing members (7 people), followed by the in-depth fellowship (4 people).During this time (and afterwards) I was supported by Catherine Low from CEA, from whom I learned a lot (∼1 call every 1,5 months).In the meantime, I was also volunteering for SoGive as a charity analyst and then as an analysis coordinator (teaching people basic charity analysis + operations work). I learned a lot from this and I think it really sped up my involvement within EA. My experience with SoGive also helped me start and run EA Hungary more effectively, as I have already gained some operations and mentoring experience.In the fall of 2021, I started a second Master’s in Philosophy at ELTE - (Eötvös Loránd University).Thanks to these activities mentioned above, I had a strong enough track record to apply for funding from EAIF to do 0.5 FTE, (although this upscaled to full-time pretty quickly).(I also have this story presented in memes, don’t ask why.)2021 Fall semester (from 2021 Mid-Sept to 2022 January)Main outcomes:(Where you see gaps it means that I didn’t record the exact numbers/data)8-week intro Fellowship: Number of applicants:DataNumber of people successfully completing the fellowship Number of people who filled out the completion survey:Data1-1 Mentoring calls262222∼20Other outcomes:Online social around week 4, which was attended by around 6 people- End-of-fellowship meal which was attended by 14 people.General info about the 2021 Fall period:Marketing and advertisement of the fellowship was almost completely done on Facebook groups of various university programs of ELTE university. (Facebook is still the most widely used social media in Hungary, although it is slowly losing popularity)Lots of start-up time costs, as I was still figuring things out.If you would like to save time on operations, I recommend using some draft emails/forms from the EA Hungary care package as well as these amazing resources: EA Student Groups Handbook, EA Groups Resource Centre and this folder.2022 Spring Semester (2022 February-May)Main outcomes:8-week intro Fellowship: Number of people successfully completing the fellowshipDataNumber of people who filled out the completion survey:DataIn-depth fellowship Number of people successfully completing the fellowship 1-1 Mentoring calls (unfortunately I didn’t keep an exact account, so this is a rough estimate) EAGxOxford attendees EAGxPrague attendees (the two numbers mean all attendees - counterfactual attendees (ie. people who wo...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community Building from scratch: The first year of EA Hungary, published by gergogaspar on January 2, 2023 on The Effective Altruism Forum.TLDR/summary:I (Gergő) started EA Hungary as a paid organizer in mid-September 2021. In one year, we’ve had 100+ intro fellowship applicants, 88 of whom completed it successfully. We have run other fellowships, from whom around 20 people have benefited. I have had about 110 calls, most of which were oriented on career support for the new members. EA Hungary brought counterfactually ~35 people to EAGx conferences and organized a retreat for 20 students. EA Hungary hired the second employee (Dia) in July, doing ~0.5 FTE.The aim of this post is to share the progress of EA Hungary in its first year (2021 Oct to 2022 Sept) I wrote this up with the intent of helping those who are starting a new group (hence the chronological order). If you are only interested in the main outcomes, read the tables. I hope this will be useful.Background and how I got fundingI learned about EA sometime in 2020 and while attending the EAGxVirtual (13 – 14 June 2020), I met a few other people based in Hungary. We had the first in-person meetup in August 2020, with 6 people. Then the Covid-19 pandemic hit, so we carried on with online meetups about once per month. We ran the intro fellowship for existing members (7 people), followed by the in-depth fellowship (4 people).During this time (and afterwards) I was supported by Catherine Low from CEA, from whom I learned a lot (∼1 call every 1,5 months).In the meantime, I was also volunteering for SoGive as a charity analyst and then as an analysis coordinator (teaching people basic charity analysis + operations work). I learned a lot from this and I think it really sped up my involvement within EA. My experience with SoGive also helped me start and run EA Hungary more effectively, as I have already gained some operations and mentoring experience.In the fall of 2021, I started a second Master’s in Philosophy at ELTE - (Eötvös Loránd University).Thanks to these activities mentioned above, I had a strong enough track record to apply for funding from EAIF to do 0.5 FTE, (although this upscaled to full-time pretty quickly).(I also have this story presented in memes, don’t ask why.)2021 Fall semester (from 2021 Mid-Sept to 2022 January)Main outcomes:(Where you see gaps it means that I didn’t record the exact numbers/data)8-week intro Fellowship: Number of applicants:DataNumber of people successfully completing the fellowship Number of people who filled out the completion survey:Data1-1 Mentoring calls262222∼20Other outcomes:Online social around week 4, which was attended by around 6 people- End-of-fellowship meal which was attended by 14 people.General info about the 2021 Fall period:Marketing and advertisement of the fellowship was almost completely done on Facebook groups of various university programs of ELTE university. (Facebook is still the most widely used social media in Hungary, although it is slowly losing popularity)Lots of start-up time costs, as I was still figuring things out.If you would like to save time on operations, I recommend using some draft emails/forms from the EA Hungary care package as well as these amazing resources: EA Student Groups Handbook, EA Groups Resource Centre and this folder.2022 Spring Semester (2022 February-May)Main outcomes:8-week intro Fellowship: Number of people successfully completing the fellowshipDataNumber of people who filled out the completion survey:DataIn-depth fellowship Number of people successfully completing the fellowship 1-1 Mentoring calls (unfortunately I didn’t keep an exact account, so this is a rough estimate) EAGxOxford attendees EAGxPrague attendees (the two numbers mean all attendees - counterfactual attendees (ie. people who wo...]]>
gergogaspar https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:10 None full 4329
YuKhRfwjQgs9sNpHG_EA EA - [Job] Researcher (CEARCH) by Joel Tan (CEARCH) Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Job] Researcher (CEARCH), published by Joel Tan (CEARCH) on January 2, 2023 on The Effective Altruism Forum.SummaryCEARCH is hiring researchers; for more details and to apply, refer to the job description here.CEARCHThe Centre for Exploratory Altruism Research (CEARCH) conducts cause prioritization research and outreach – identifying the most important problems in the world and helping to direct resources towards solving them, so as to maximize global welfare. As part of this effort, we carry out:A comprehensive search for causesRigorous cause prioritization research, with (a) shallow research for all causes, (b) intermediate research for more promising causes, and (c) deep research for potential top causes.Reasoning transparency and outreach to allow both the EA and non-EA movement to update on our findings and to support the most impactful causes available.The RoleAs a researcher, you will be performing:Cause prioritization research and outreach – searching for causes, researching them, and engaging other organizations and individuals to use our research.Various generalist tasks (e.g. planning, administration).In your first couple of months, you will likely be focusing on research, but you will eventually get to handle more outreach-related work (e.g. representing CEARCH at EAG/EAGx).Ideal CandidateThe ideal candidate will:Care about impact and doing good above all.Have strong research skills.Be able to work well in a team.BenefitsThe chief reason you might be interested in this role is that it can be expected to be highly impactful.Our internal cost-effectiveness analysis (CEA) suggests that we save around 1,700 disability-adjusted life years (or the equivalent of 58 lives) per USD 100,000 spent, which is around 10x more cost-effective than GiveWell top charities.External analysis supports this assessment, with Charity Entrepreneurship's CEA finding that we should be around 8x as cost effective as top EA causes.That said, such estimates should not be taken literally, given the high amount of uncertainty involved. Rather, they should be understood as giving a sense of how promising CEARCH's work – and hence your work with us – will be, given (a) the outsized benefits from identifying extremely impactful causes and directing resources there, and (b) the low operating costs of research and outreach.Salary offered will be competitive, and dependent on ability and relevant experience.Other benefits include remote work, flexible hours, and a chance to become a leader within the organization and an influential member of the EA community as we grow.How to ApplyTo apply, please fill up and submit the following application form. In terms of the overall hiring process, there will be the following stages:Stage 1: An application form (<= 0.5 hours)Stage 2: An interview (<= 0.5 hours)Stage 3: A work test (~4 days across 2 weeks – or more, if requested – with a lot of flexibility)Q&AWhat causes do you work on?All sorts of (a) global health and wellbeing causes, (b) longtermist causes, as well as (c) meta causes.Should I worry about my counterfactual impact?No – good researchers are hard to find, and the market is competitive; you should not worry about replaceability at all.When should I expect to hear back?If we're keen on your application, we'll try to get back within 3 weeks.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joel Tan (CEARCH) https://forum.effectivealtruism.org/posts/YuKhRfwjQgs9sNpHG/job-researcher-cearch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Job] Researcher (CEARCH), published by Joel Tan (CEARCH) on January 2, 2023 on The Effective Altruism Forum.SummaryCEARCH is hiring researchers; for more details and to apply, refer to the job description here.CEARCHThe Centre for Exploratory Altruism Research (CEARCH) conducts cause prioritization research and outreach – identifying the most important problems in the world and helping to direct resources towards solving them, so as to maximize global welfare. As part of this effort, we carry out:A comprehensive search for causesRigorous cause prioritization research, with (a) shallow research for all causes, (b) intermediate research for more promising causes, and (c) deep research for potential top causes.Reasoning transparency and outreach to allow both the EA and non-EA movement to update on our findings and to support the most impactful causes available.The RoleAs a researcher, you will be performing:Cause prioritization research and outreach – searching for causes, researching them, and engaging other organizations and individuals to use our research.Various generalist tasks (e.g. planning, administration).In your first couple of months, you will likely be focusing on research, but you will eventually get to handle more outreach-related work (e.g. representing CEARCH at EAG/EAGx).Ideal CandidateThe ideal candidate will:Care about impact and doing good above all.Have strong research skills.Be able to work well in a team.BenefitsThe chief reason you might be interested in this role is that it can be expected to be highly impactful.Our internal cost-effectiveness analysis (CEA) suggests that we save around 1,700 disability-adjusted life years (or the equivalent of 58 lives) per USD 100,000 spent, which is around 10x more cost-effective than GiveWell top charities.External analysis supports this assessment, with Charity Entrepreneurship's CEA finding that we should be around 8x as cost effective as top EA causes.That said, such estimates should not be taken literally, given the high amount of uncertainty involved. Rather, they should be understood as giving a sense of how promising CEARCH's work – and hence your work with us – will be, given (a) the outsized benefits from identifying extremely impactful causes and directing resources there, and (b) the low operating costs of research and outreach.Salary offered will be competitive, and dependent on ability and relevant experience.Other benefits include remote work, flexible hours, and a chance to become a leader within the organization and an influential member of the EA community as we grow.How to ApplyTo apply, please fill up and submit the following application form. In terms of the overall hiring process, there will be the following stages:Stage 1: An application form (<= 0.5 hours)Stage 2: An interview (<= 0.5 hours)Stage 3: A work test (~4 days across 2 weeks – or more, if requested – with a lot of flexibility)Q&AWhat causes do you work on?All sorts of (a) global health and wellbeing causes, (b) longtermist causes, as well as (c) meta causes.Should I worry about my counterfactual impact?No – good researchers are hard to find, and the market is competitive; you should not worry about replaceability at all.When should I expect to hear back?If we're keen on your application, we'll try to get back within 3 weeks.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 02 Jan 2023 18:35:30 +0000 EA - [Job] Researcher (CEARCH) by Joel Tan (CEARCH) Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Job] Researcher (CEARCH), published by Joel Tan (CEARCH) on January 2, 2023 on The Effective Altruism Forum.SummaryCEARCH is hiring researchers; for more details and to apply, refer to the job description here.CEARCHThe Centre for Exploratory Altruism Research (CEARCH) conducts cause prioritization research and outreach – identifying the most important problems in the world and helping to direct resources towards solving them, so as to maximize global welfare. As part of this effort, we carry out:A comprehensive search for causesRigorous cause prioritization research, with (a) shallow research for all causes, (b) intermediate research for more promising causes, and (c) deep research for potential top causes.Reasoning transparency and outreach to allow both the EA and non-EA movement to update on our findings and to support the most impactful causes available.The RoleAs a researcher, you will be performing:Cause prioritization research and outreach – searching for causes, researching them, and engaging other organizations and individuals to use our research.Various generalist tasks (e.g. planning, administration).In your first couple of months, you will likely be focusing on research, but you will eventually get to handle more outreach-related work (e.g. representing CEARCH at EAG/EAGx).Ideal CandidateThe ideal candidate will:Care about impact and doing good above all.Have strong research skills.Be able to work well in a team.BenefitsThe chief reason you might be interested in this role is that it can be expected to be highly impactful.Our internal cost-effectiveness analysis (CEA) suggests that we save around 1,700 disability-adjusted life years (or the equivalent of 58 lives) per USD 100,000 spent, which is around 10x more cost-effective than GiveWell top charities.External analysis supports this assessment, with Charity Entrepreneurship's CEA finding that we should be around 8x as cost effective as top EA causes.That said, such estimates should not be taken literally, given the high amount of uncertainty involved. Rather, they should be understood as giving a sense of how promising CEARCH's work – and hence your work with us – will be, given (a) the outsized benefits from identifying extremely impactful causes and directing resources there, and (b) the low operating costs of research and outreach.Salary offered will be competitive, and dependent on ability and relevant experience.Other benefits include remote work, flexible hours, and a chance to become a leader within the organization and an influential member of the EA community as we grow.How to ApplyTo apply, please fill up and submit the following application form. In terms of the overall hiring process, there will be the following stages:Stage 1: An application form (<= 0.5 hours)Stage 2: An interview (<= 0.5 hours)Stage 3: A work test (~4 days across 2 weeks – or more, if requested – with a lot of flexibility)Q&AWhat causes do you work on?All sorts of (a) global health and wellbeing causes, (b) longtermist causes, as well as (c) meta causes.Should I worry about my counterfactual impact?No – good researchers are hard to find, and the market is competitive; you should not worry about replaceability at all.When should I expect to hear back?If we're keen on your application, we'll try to get back within 3 weeks.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Job] Researcher (CEARCH), published by Joel Tan (CEARCH) on January 2, 2023 on The Effective Altruism Forum.SummaryCEARCH is hiring researchers; for more details and to apply, refer to the job description here.CEARCHThe Centre for Exploratory Altruism Research (CEARCH) conducts cause prioritization research and outreach – identifying the most important problems in the world and helping to direct resources towards solving them, so as to maximize global welfare. As part of this effort, we carry out:A comprehensive search for causesRigorous cause prioritization research, with (a) shallow research for all causes, (b) intermediate research for more promising causes, and (c) deep research for potential top causes.Reasoning transparency and outreach to allow both the EA and non-EA movement to update on our findings and to support the most impactful causes available.The RoleAs a researcher, you will be performing:Cause prioritization research and outreach – searching for causes, researching them, and engaging other organizations and individuals to use our research.Various generalist tasks (e.g. planning, administration).In your first couple of months, you will likely be focusing on research, but you will eventually get to handle more outreach-related work (e.g. representing CEARCH at EAG/EAGx).Ideal CandidateThe ideal candidate will:Care about impact and doing good above all.Have strong research skills.Be able to work well in a team.BenefitsThe chief reason you might be interested in this role is that it can be expected to be highly impactful.Our internal cost-effectiveness analysis (CEA) suggests that we save around 1,700 disability-adjusted life years (or the equivalent of 58 lives) per USD 100,000 spent, which is around 10x more cost-effective than GiveWell top charities.External analysis supports this assessment, with Charity Entrepreneurship's CEA finding that we should be around 8x as cost effective as top EA causes.That said, such estimates should not be taken literally, given the high amount of uncertainty involved. Rather, they should be understood as giving a sense of how promising CEARCH's work – and hence your work with us – will be, given (a) the outsized benefits from identifying extremely impactful causes and directing resources there, and (b) the low operating costs of research and outreach.Salary offered will be competitive, and dependent on ability and relevant experience.Other benefits include remote work, flexible hours, and a chance to become a leader within the organization and an influential member of the EA community as we grow.How to ApplyTo apply, please fill up and submit the following application form. In terms of the overall hiring process, there will be the following stages:Stage 1: An application form (<= 0.5 hours)Stage 2: An interview (<= 0.5 hours)Stage 3: A work test (~4 days across 2 weeks – or more, if requested – with a lot of flexibility)Q&AWhat causes do you work on?All sorts of (a) global health and wellbeing causes, (b) longtermist causes, as well as (c) meta causes.Should I worry about my counterfactual impact?No – good researchers are hard to find, and the market is competitive; you should not worry about replaceability at all.When should I expect to hear back?If we're keen on your application, we'll try to get back within 3 weeks.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joel Tan (CEARCH) https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:44 None full 4330
SQ2ayhoYBJJCrFQjd_EA EA - What are the most underrated posts and comments of 2022, according to you? by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the most underrated posts & comments of 2022, according to you?, published by peterhartree on January 1, 2023 on The Effective Altruism Forum.Share your views in the comments!To make this clear and easy to follow, please use these guidelines:Use the template below.Post as many items as you want.One item per comment, so that it's easy for people to read and react.(Optional, encouraged) Highlight at least one of your own contributions.If you need some inspiration, open your EA Forum Wrapped and scroll to the bottom of your "Strong Upvoted" list.TemplateTitle:Author:URL:Why it's good:If you're sharing an underrated comment, set the title to "[Username] on [topic]".Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
peterhartree https://forum.effectivealtruism.org/posts/SQ2ayhoYBJJCrFQjd/what-are-the-most-underrated-posts-and-comments-of-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the most underrated posts & comments of 2022, according to you?, published by peterhartree on January 1, 2023 on The Effective Altruism Forum.Share your views in the comments!To make this clear and easy to follow, please use these guidelines:Use the template below.Post as many items as you want.One item per comment, so that it's easy for people to read and react.(Optional, encouraged) Highlight at least one of your own contributions.If you need some inspiration, open your EA Forum Wrapped and scroll to the bottom of your "Strong Upvoted" list.TemplateTitle:Author:URL:Why it's good:If you're sharing an underrated comment, set the title to "[Username] on [topic]".Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 02 Jan 2023 11:26:25 +0000 EA - What are the most underrated posts and comments of 2022, according to you? by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the most underrated posts & comments of 2022, according to you?, published by peterhartree on January 1, 2023 on The Effective Altruism Forum.Share your views in the comments!To make this clear and easy to follow, please use these guidelines:Use the template below.Post as many items as you want.One item per comment, so that it's easy for people to read and react.(Optional, encouraged) Highlight at least one of your own contributions.If you need some inspiration, open your EA Forum Wrapped and scroll to the bottom of your "Strong Upvoted" list.TemplateTitle:Author:URL:Why it's good:If you're sharing an underrated comment, set the title to "[Username] on [topic]".Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the most underrated posts & comments of 2022, according to you?, published by peterhartree on January 1, 2023 on The Effective Altruism Forum.Share your views in the comments!To make this clear and easy to follow, please use these guidelines:Use the template below.Post as many items as you want.One item per comment, so that it's easy for people to read and react.(Optional, encouraged) Highlight at least one of your own contributions.If you need some inspiration, open your EA Forum Wrapped and scroll to the bottom of your "Strong Upvoted" list.TemplateTitle:Author:URL:Why it's good:If you're sharing an underrated comment, set the title to "[Username] on [topic]".Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
peterhartree https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:04 None full 4331
BiQe6Nt9JyCwcpaaB_EA EA - New book: The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering by jonleighton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New book: The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering, published by jonleighton on January 2, 2023 on The Effective Altruism Forum.As mentioned in a recent post, I have a new book being published this week, titled The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering. It’s rooted in reflections I’ve had on ethics and value since some of my earliest interactions with EAs ten years ago, and my observation that some specific ways of thinking about ethics that were already mainstream in the EA community could legitimately be challenged. These include common intuitions about notions like “good”, “bad” and “value” that have been imported into rational arguments about ethics, without necessarily being put into question or analysed more deeply. The project expanded into a broader reflection on ethics and the dance between intuition and rationality that I think is fundamental to ethical thinking and practice.Some of the claims I make may appear counterintuitive or conflict with beliefs that are strongly held by many others in the EA community. However, I urge people to consider reading the book with an open mind. Over the years that it has taken shape, I’ve continued to reevaluate my arguments and I remain confident that they have merit. Many of the individual ideas aren’t novel, and are even subscribed to by a subset of EAs. But aside from offering some new perspectives, one of my main goals is to offer a more “holistic” way of thinking about ethics that integrates several core ideas, and that is aligned with solid truths about reality, including the content of subjective experience.My hope is that the book will provoke reflection within the EA community about the foundations of our core values and how we think about “doing good”. Although I defend a form of negative utilitarianism I call “xNU+”, I show that it doesn’t need to lead to nihilism, especially within the framework I propose. It doesn’t negate self-preservation and the search for meaning, caring about the welfare of future sentient beings, or striving to realise an optimistic vision for the future. But I do argue for the importance of preventing intense and especially extreme/unbearable suffering as an essential ethical principle – and by extension, that only a future that encodes and reflects this principle is a reasonable one to try to preserve.The book is available in paperback and e-book from Amazon (UK, US, Germany...), B&N, and also directly from the publisher in the UK (paperback only).Description from the publisher’s pageDespite existing for thousands of years, the field of ethics remains strongly influenced by several largely unquestioned assumptions and cognitive biases that can dramatically affect our priorities. The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering proposes a deep, rigorous reassessment of how we think about ethics. Eschewing the traditional language of morality, it places a central emphasis on phenomenological experience and the unique urgency of suffering wherever it occurs, challenges our existence bias and examines the consequences of a metaphysically accurate understanding of personal identity.A key paradigm in The Tango of Ethics is the conflict and interplay between two fundamentally different ways of seeing and being in the world — that of the intuitive human being who wants to lead a meaningful life and thrive, and that of the detached, rational agent who wants to prevent unbearable suffering from occurring. Leighton aims to reconcile these two stances or motivations within a more holistic framework he labels 'xNU+' that places them at distinct ethical levels. This approach avoids some of the flaws of classical utilitarianism, including the notion that extreme suffering can be formally balanced ...]]>
jonleighton https://forum.effectivealtruism.org/posts/BiQe6Nt9JyCwcpaaB/new-book-the-tango-of-ethics-intuition-rationality-and-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New book: The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering, published by jonleighton on January 2, 2023 on The Effective Altruism Forum.As mentioned in a recent post, I have a new book being published this week, titled The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering. It’s rooted in reflections I’ve had on ethics and value since some of my earliest interactions with EAs ten years ago, and my observation that some specific ways of thinking about ethics that were already mainstream in the EA community could legitimately be challenged. These include common intuitions about notions like “good”, “bad” and “value” that have been imported into rational arguments about ethics, without necessarily being put into question or analysed more deeply. The project expanded into a broader reflection on ethics and the dance between intuition and rationality that I think is fundamental to ethical thinking and practice.Some of the claims I make may appear counterintuitive or conflict with beliefs that are strongly held by many others in the EA community. However, I urge people to consider reading the book with an open mind. Over the years that it has taken shape, I’ve continued to reevaluate my arguments and I remain confident that they have merit. Many of the individual ideas aren’t novel, and are even subscribed to by a subset of EAs. But aside from offering some new perspectives, one of my main goals is to offer a more “holistic” way of thinking about ethics that integrates several core ideas, and that is aligned with solid truths about reality, including the content of subjective experience.My hope is that the book will provoke reflection within the EA community about the foundations of our core values and how we think about “doing good”. Although I defend a form of negative utilitarianism I call “xNU+”, I show that it doesn’t need to lead to nihilism, especially within the framework I propose. It doesn’t negate self-preservation and the search for meaning, caring about the welfare of future sentient beings, or striving to realise an optimistic vision for the future. But I do argue for the importance of preventing intense and especially extreme/unbearable suffering as an essential ethical principle – and by extension, that only a future that encodes and reflects this principle is a reasonable one to try to preserve.The book is available in paperback and e-book from Amazon (UK, US, Germany...), B&N, and also directly from the publisher in the UK (paperback only).Description from the publisher’s pageDespite existing for thousands of years, the field of ethics remains strongly influenced by several largely unquestioned assumptions and cognitive biases that can dramatically affect our priorities. The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering proposes a deep, rigorous reassessment of how we think about ethics. Eschewing the traditional language of morality, it places a central emphasis on phenomenological experience and the unique urgency of suffering wherever it occurs, challenges our existence bias and examines the consequences of a metaphysically accurate understanding of personal identity.A key paradigm in The Tango of Ethics is the conflict and interplay between two fundamentally different ways of seeing and being in the world — that of the intuitive human being who wants to lead a meaningful life and thrive, and that of the detached, rational agent who wants to prevent unbearable suffering from occurring. Leighton aims to reconcile these two stances or motivations within a more holistic framework he labels 'xNU+' that places them at distinct ethical levels. This approach avoids some of the flaws of classical utilitarianism, including the notion that extreme suffering can be formally balanced ...]]>
Mon, 02 Jan 2023 08:45:43 +0000 EA - New book: The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering by jonleighton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New book: The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering, published by jonleighton on January 2, 2023 on The Effective Altruism Forum.As mentioned in a recent post, I have a new book being published this week, titled The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering. It’s rooted in reflections I’ve had on ethics and value since some of my earliest interactions with EAs ten years ago, and my observation that some specific ways of thinking about ethics that were already mainstream in the EA community could legitimately be challenged. These include common intuitions about notions like “good”, “bad” and “value” that have been imported into rational arguments about ethics, without necessarily being put into question or analysed more deeply. The project expanded into a broader reflection on ethics and the dance between intuition and rationality that I think is fundamental to ethical thinking and practice.Some of the claims I make may appear counterintuitive or conflict with beliefs that are strongly held by many others in the EA community. However, I urge people to consider reading the book with an open mind. Over the years that it has taken shape, I’ve continued to reevaluate my arguments and I remain confident that they have merit. Many of the individual ideas aren’t novel, and are even subscribed to by a subset of EAs. But aside from offering some new perspectives, one of my main goals is to offer a more “holistic” way of thinking about ethics that integrates several core ideas, and that is aligned with solid truths about reality, including the content of subjective experience.My hope is that the book will provoke reflection within the EA community about the foundations of our core values and how we think about “doing good”. Although I defend a form of negative utilitarianism I call “xNU+”, I show that it doesn’t need to lead to nihilism, especially within the framework I propose. It doesn’t negate self-preservation and the search for meaning, caring about the welfare of future sentient beings, or striving to realise an optimistic vision for the future. But I do argue for the importance of preventing intense and especially extreme/unbearable suffering as an essential ethical principle – and by extension, that only a future that encodes and reflects this principle is a reasonable one to try to preserve.The book is available in paperback and e-book from Amazon (UK, US, Germany...), B&N, and also directly from the publisher in the UK (paperback only).Description from the publisher’s pageDespite existing for thousands of years, the field of ethics remains strongly influenced by several largely unquestioned assumptions and cognitive biases that can dramatically affect our priorities. The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering proposes a deep, rigorous reassessment of how we think about ethics. Eschewing the traditional language of morality, it places a central emphasis on phenomenological experience and the unique urgency of suffering wherever it occurs, challenges our existence bias and examines the consequences of a metaphysically accurate understanding of personal identity.A key paradigm in The Tango of Ethics is the conflict and interplay between two fundamentally different ways of seeing and being in the world — that of the intuitive human being who wants to lead a meaningful life and thrive, and that of the detached, rational agent who wants to prevent unbearable suffering from occurring. Leighton aims to reconcile these two stances or motivations within a more holistic framework he labels 'xNU+' that places them at distinct ethical levels. This approach avoids some of the flaws of classical utilitarianism, including the notion that extreme suffering can be formally balanced ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New book: The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering, published by jonleighton on January 2, 2023 on The Effective Altruism Forum.As mentioned in a recent post, I have a new book being published this week, titled The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering. It’s rooted in reflections I’ve had on ethics and value since some of my earliest interactions with EAs ten years ago, and my observation that some specific ways of thinking about ethics that were already mainstream in the EA community could legitimately be challenged. These include common intuitions about notions like “good”, “bad” and “value” that have been imported into rational arguments about ethics, without necessarily being put into question or analysed more deeply. The project expanded into a broader reflection on ethics and the dance between intuition and rationality that I think is fundamental to ethical thinking and practice.Some of the claims I make may appear counterintuitive or conflict with beliefs that are strongly held by many others in the EA community. However, I urge people to consider reading the book with an open mind. Over the years that it has taken shape, I’ve continued to reevaluate my arguments and I remain confident that they have merit. Many of the individual ideas aren’t novel, and are even subscribed to by a subset of EAs. But aside from offering some new perspectives, one of my main goals is to offer a more “holistic” way of thinking about ethics that integrates several core ideas, and that is aligned with solid truths about reality, including the content of subjective experience.My hope is that the book will provoke reflection within the EA community about the foundations of our core values and how we think about “doing good”. Although I defend a form of negative utilitarianism I call “xNU+”, I show that it doesn’t need to lead to nihilism, especially within the framework I propose. It doesn’t negate self-preservation and the search for meaning, caring about the welfare of future sentient beings, or striving to realise an optimistic vision for the future. But I do argue for the importance of preventing intense and especially extreme/unbearable suffering as an essential ethical principle – and by extension, that only a future that encodes and reflects this principle is a reasonable one to try to preserve.The book is available in paperback and e-book from Amazon (UK, US, Germany...), B&N, and also directly from the publisher in the UK (paperback only).Description from the publisher’s pageDespite existing for thousands of years, the field of ethics remains strongly influenced by several largely unquestioned assumptions and cognitive biases that can dramatically affect our priorities. The Tango of Ethics: Intuition, Rationality and the Prevention of Suffering proposes a deep, rigorous reassessment of how we think about ethics. Eschewing the traditional language of morality, it places a central emphasis on phenomenological experience and the unique urgency of suffering wherever it occurs, challenges our existence bias and examines the consequences of a metaphysically accurate understanding of personal identity.A key paradigm in The Tango of Ethics is the conflict and interplay between two fundamentally different ways of seeing and being in the world — that of the intuitive human being who wants to lead a meaningful life and thrive, and that of the detached, rational agent who wants to prevent unbearable suffering from occurring. Leighton aims to reconcile these two stances or motivations within a more holistic framework he labels 'xNU+' that places them at distinct ethical levels. This approach avoids some of the flaws of classical utilitarianism, including the notion that extreme suffering can be formally balanced ...]]>
jonleighton https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:15 None full 4326
gfj7FMKz9e8CKXqSQ_EA EA - Your 2022 EA Forum Wrapped by Sharang Phadke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Your 2022 EA Forum Wrapped , published by Sharang Phadke on January 1, 2023 on The Effective Altruism Forum.The EA Forum team is excited to share your personal ✨ 2022 EA Forum Wrapped ✨. We hope you enjoy this little summary of how you used the EA Forum as you ring in the new year with us. Thanks for being part of the Forum!Note: If you don't have an EA Forum account, we won't be able to make a personalized "wrapped" for you. If you feel like you're missing out, today is a great day to make an account and participate more actively in the online EA community!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sharang Phadke https://forum.effectivealtruism.org/posts/gfj7FMKz9e8CKXqSQ/your-2022-ea-forum-wrapped Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Your 2022 EA Forum Wrapped , published by Sharang Phadke on January 1, 2023 on The Effective Altruism Forum.The EA Forum team is excited to share your personal ✨ 2022 EA Forum Wrapped ✨. We hope you enjoy this little summary of how you used the EA Forum as you ring in the new year with us. Thanks for being part of the Forum!Note: If you don't have an EA Forum account, we won't be able to make a personalized "wrapped" for you. If you feel like you're missing out, today is a great day to make an account and participate more actively in the online EA community!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 01 Jan 2023 04:46:25 +0000 EA - Your 2022 EA Forum Wrapped by Sharang Phadke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Your 2022 EA Forum Wrapped , published by Sharang Phadke on January 1, 2023 on The Effective Altruism Forum.The EA Forum team is excited to share your personal ✨ 2022 EA Forum Wrapped ✨. We hope you enjoy this little summary of how you used the EA Forum as you ring in the new year with us. Thanks for being part of the Forum!Note: If you don't have an EA Forum account, we won't be able to make a personalized "wrapped" for you. If you feel like you're missing out, today is a great day to make an account and participate more actively in the online EA community!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Your 2022 EA Forum Wrapped , published by Sharang Phadke on January 1, 2023 on The Effective Altruism Forum.The EA Forum team is excited to share your personal ✨ 2022 EA Forum Wrapped ✨. We hope you enjoy this little summary of how you used the EA Forum as you ring in the new year with us. Thanks for being part of the Forum!Note: If you don't have an EA Forum account, we won't be able to make a personalized "wrapped" for you. If you feel like you're missing out, today is a great day to make an account and participate more actively in the online EA community!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sharang Phadke https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:52 None full 4320
XRphCh6NbfQiDF3Nt_EA EA - Racing through a minefield: the AI deployment problem by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Racing through a minefield: the AI deployment problem, published by Holden Karnofsky on December 31, 2022 on The Effective Altruism Forum.In previous pieces, I argued that there's a real and large risk of AI systems developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening. I discussed why it could be hard to build AI systems without this risk and how it might be doable.The “AI alignment problem” refers1 to a technical problem: how can we design a powerful AI system that behaves as intended, rather than forming its own dangerous aims? This post is going to outline a broader political/strategic problem, the “deployment problem”: if you’re someone who might be on the cusp of developing extremely powerful (and maybe dangerous) AI systems, what should you . do?The basic challenge is this:If you race forward with building and using powerful AI systems as fast as possible, you might cause a global catastrophe (see links above).If you move too slowly, though, you might just be waiting around for someone else less cautious to develop and deploy powerful, dangerous AI systems.And if you can get to the point where your own systems are both powerful and safe . what then? Other people still might be less-cautiously building dangerous ones - what should we do about that?My current analogy for the deployment problem is racing through a minefield: each player is hoping to be ahead of others, but anyone moving too quickly can cause a disaster. (In this minefield, a single mine is big enough to endanger all the racers.)This post gives a high-level overview of how I see the kinds of developments that can lead to a good outcome, despite the “racing through a minefield” dynamic. It is distilled from a more detailed post on the Alignment Forum.First, I’ll flesh out how I see the challenge we’re contending with, based on the premises above.Next, I’ll list a number of things I hope that “cautious actors” (AI companies, governments, etc.) might do in order to prevent catastrophe.Many of the actions I’m picturing are not the kind of things normal market and commercial incentives would push toward, and as such, I think there’s room for a ton of variation in whether the “racing through a minefield” challenge is handled well. Whether key decision-makers understand things like the case for misalignment risk (and in particular, why it might be hard to measure) - and are willing to lower their own chances of “winning the race” to improve the odds of a good outcome for everyone - could be crucial.The basic premises of “racing through a minefield”This piece is going to lean on previous pieces and assume all of the following things:Transformative AI soon. This century, something like PASTA could be developed: AI systems that can effectively automate everything humans do to advance science and technology. This brings the potential for explosive progress in science and tech, getting us more quickly than most people imagine to a deeply unfamiliar future. I’ve argued for this possibility in the Most Important Century series.Misalignment risk. As argued previously, there’s a significant risk that such AI systems could end up with misaligned goals of their own, leading them to defeat all of humanity. And it could take significant extra effort to get AI systems to be safe.Ambiguity. As argued previously, it could be hard to know whether AI systems are dangerously misaligned, for a number of reasons. In particular, when we train AI systems not to behave dangerously, we might be unwittingly training them to obscure their dangerous potential from humans, and take dangerous actions only when humans would not be able to stop them. At the same time, I expect powerful AI systems will present massive opportuniti...]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/XRphCh6NbfQiDF3Nt/racing-through-a-minefield-the-ai-deployment-problem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Racing through a minefield: the AI deployment problem, published by Holden Karnofsky on December 31, 2022 on The Effective Altruism Forum.In previous pieces, I argued that there's a real and large risk of AI systems developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening. I discussed why it could be hard to build AI systems without this risk and how it might be doable.The “AI alignment problem” refers1 to a technical problem: how can we design a powerful AI system that behaves as intended, rather than forming its own dangerous aims? This post is going to outline a broader political/strategic problem, the “deployment problem”: if you’re someone who might be on the cusp of developing extremely powerful (and maybe dangerous) AI systems, what should you . do?The basic challenge is this:If you race forward with building and using powerful AI systems as fast as possible, you might cause a global catastrophe (see links above).If you move too slowly, though, you might just be waiting around for someone else less cautious to develop and deploy powerful, dangerous AI systems.And if you can get to the point where your own systems are both powerful and safe . what then? Other people still might be less-cautiously building dangerous ones - what should we do about that?My current analogy for the deployment problem is racing through a minefield: each player is hoping to be ahead of others, but anyone moving too quickly can cause a disaster. (In this minefield, a single mine is big enough to endanger all the racers.)This post gives a high-level overview of how I see the kinds of developments that can lead to a good outcome, despite the “racing through a minefield” dynamic. It is distilled from a more detailed post on the Alignment Forum.First, I’ll flesh out how I see the challenge we’re contending with, based on the premises above.Next, I’ll list a number of things I hope that “cautious actors” (AI companies, governments, etc.) might do in order to prevent catastrophe.Many of the actions I’m picturing are not the kind of things normal market and commercial incentives would push toward, and as such, I think there’s room for a ton of variation in whether the “racing through a minefield” challenge is handled well. Whether key decision-makers understand things like the case for misalignment risk (and in particular, why it might be hard to measure) - and are willing to lower their own chances of “winning the race” to improve the odds of a good outcome for everyone - could be crucial.The basic premises of “racing through a minefield”This piece is going to lean on previous pieces and assume all of the following things:Transformative AI soon. This century, something like PASTA could be developed: AI systems that can effectively automate everything humans do to advance science and technology. This brings the potential for explosive progress in science and tech, getting us more quickly than most people imagine to a deeply unfamiliar future. I’ve argued for this possibility in the Most Important Century series.Misalignment risk. As argued previously, there’s a significant risk that such AI systems could end up with misaligned goals of their own, leading them to defeat all of humanity. And it could take significant extra effort to get AI systems to be safe.Ambiguity. As argued previously, it could be hard to know whether AI systems are dangerously misaligned, for a number of reasons. In particular, when we train AI systems not to behave dangerously, we might be unwittingly training them to obscure their dangerous potential from humans, and take dangerous actions only when humans would not be able to stop them. At the same time, I expect powerful AI systems will present massive opportuniti...]]>
Sat, 31 Dec 2022 21:44:56 +0000 EA - Racing through a minefield: the AI deployment problem by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Racing through a minefield: the AI deployment problem, published by Holden Karnofsky on December 31, 2022 on The Effective Altruism Forum.In previous pieces, I argued that there's a real and large risk of AI systems developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening. I discussed why it could be hard to build AI systems without this risk and how it might be doable.The “AI alignment problem” refers1 to a technical problem: how can we design a powerful AI system that behaves as intended, rather than forming its own dangerous aims? This post is going to outline a broader political/strategic problem, the “deployment problem”: if you’re someone who might be on the cusp of developing extremely powerful (and maybe dangerous) AI systems, what should you . do?The basic challenge is this:If you race forward with building and using powerful AI systems as fast as possible, you might cause a global catastrophe (see links above).If you move too slowly, though, you might just be waiting around for someone else less cautious to develop and deploy powerful, dangerous AI systems.And if you can get to the point where your own systems are both powerful and safe . what then? Other people still might be less-cautiously building dangerous ones - what should we do about that?My current analogy for the deployment problem is racing through a minefield: each player is hoping to be ahead of others, but anyone moving too quickly can cause a disaster. (In this minefield, a single mine is big enough to endanger all the racers.)This post gives a high-level overview of how I see the kinds of developments that can lead to a good outcome, despite the “racing through a minefield” dynamic. It is distilled from a more detailed post on the Alignment Forum.First, I’ll flesh out how I see the challenge we’re contending with, based on the premises above.Next, I’ll list a number of things I hope that “cautious actors” (AI companies, governments, etc.) might do in order to prevent catastrophe.Many of the actions I’m picturing are not the kind of things normal market and commercial incentives would push toward, and as such, I think there’s room for a ton of variation in whether the “racing through a minefield” challenge is handled well. Whether key decision-makers understand things like the case for misalignment risk (and in particular, why it might be hard to measure) - and are willing to lower their own chances of “winning the race” to improve the odds of a good outcome for everyone - could be crucial.The basic premises of “racing through a minefield”This piece is going to lean on previous pieces and assume all of the following things:Transformative AI soon. This century, something like PASTA could be developed: AI systems that can effectively automate everything humans do to advance science and technology. This brings the potential for explosive progress in science and tech, getting us more quickly than most people imagine to a deeply unfamiliar future. I’ve argued for this possibility in the Most Important Century series.Misalignment risk. As argued previously, there’s a significant risk that such AI systems could end up with misaligned goals of their own, leading them to defeat all of humanity. And it could take significant extra effort to get AI systems to be safe.Ambiguity. As argued previously, it could be hard to know whether AI systems are dangerously misaligned, for a number of reasons. In particular, when we train AI systems not to behave dangerously, we might be unwittingly training them to obscure their dangerous potential from humans, and take dangerous actions only when humans would not be able to stop them. At the same time, I expect powerful AI systems will present massive opportuniti...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Racing through a minefield: the AI deployment problem, published by Holden Karnofsky on December 31, 2022 on The Effective Altruism Forum.In previous pieces, I argued that there's a real and large risk of AI systems developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening. I discussed why it could be hard to build AI systems without this risk and how it might be doable.The “AI alignment problem” refers1 to a technical problem: how can we design a powerful AI system that behaves as intended, rather than forming its own dangerous aims? This post is going to outline a broader political/strategic problem, the “deployment problem”: if you’re someone who might be on the cusp of developing extremely powerful (and maybe dangerous) AI systems, what should you . do?The basic challenge is this:If you race forward with building and using powerful AI systems as fast as possible, you might cause a global catastrophe (see links above).If you move too slowly, though, you might just be waiting around for someone else less cautious to develop and deploy powerful, dangerous AI systems.And if you can get to the point where your own systems are both powerful and safe . what then? Other people still might be less-cautiously building dangerous ones - what should we do about that?My current analogy for the deployment problem is racing through a minefield: each player is hoping to be ahead of others, but anyone moving too quickly can cause a disaster. (In this minefield, a single mine is big enough to endanger all the racers.)This post gives a high-level overview of how I see the kinds of developments that can lead to a good outcome, despite the “racing through a minefield” dynamic. It is distilled from a more detailed post on the Alignment Forum.First, I’ll flesh out how I see the challenge we’re contending with, based on the premises above.Next, I’ll list a number of things I hope that “cautious actors” (AI companies, governments, etc.) might do in order to prevent catastrophe.Many of the actions I’m picturing are not the kind of things normal market and commercial incentives would push toward, and as such, I think there’s room for a ton of variation in whether the “racing through a minefield” challenge is handled well. Whether key decision-makers understand things like the case for misalignment risk (and in particular, why it might be hard to measure) - and are willing to lower their own chances of “winning the race” to improve the odds of a good outcome for everyone - could be crucial.The basic premises of “racing through a minefield”This piece is going to lean on previous pieces and assume all of the following things:Transformative AI soon. This century, something like PASTA could be developed: AI systems that can effectively automate everything humans do to advance science and technology. This brings the potential for explosive progress in science and tech, getting us more quickly than most people imagine to a deeply unfamiliar future. I’ve argued for this possibility in the Most Important Century series.Misalignment risk. As argued previously, there’s a significant risk that such AI systems could end up with misaligned goals of their own, leading them to defeat all of humanity. And it could take significant extra effort to get AI systems to be safe.Ambiguity. As argued previously, it could be hard to know whether AI systems are dangerously misaligned, for a number of reasons. In particular, when we train AI systems not to behave dangerously, we might be unwittingly training them to obscure their dangerous potential from humans, and take dangerous actions only when humans would not be able to stop them. At the same time, I expect powerful AI systems will present massive opportuniti...]]>
Holden Karnofsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 20:52 None full 4310
tGpwWsP5iBfZFigeZ_EA EA - Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk by Pablo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk, published by Pablo on December 30, 2022 on The Effective Altruism Forum.[T]he sun with all the planets will in time grow too cold for life, unless indeed some great body dashes into the sun and thus gives it fresh life. Believing as I do that man in the distant future will be a far more perfect creature than he now is, it is an intolerable thought that he and all other sentient beings are doomed to complete annihilation after such long-continued slow progress.Charles DarwinFuture Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.A message to our readersWelcome back to Future Matters. We took a break during the autumn, but will now be returning to our previous monthly schedule. Future Matters would like to wish all our readers a happy new year!The most significant development during our hiatus was the collapse of FTX and the fall of Sam Bankman-Fried, until then one of the largest and most prominent supporters of longtermist causes. We were shocked and saddened by these revelations, and appalled by the allegations and admissions of fraud, deceit, and misappropriation of customer funds. As others have stated, fraud in the service of effective altruism is unacceptable, and we condemn these actions unequivocally and support authorities’ efforts to investigate and prosecute any crimes that may have been committed.ResearchA classic argument for existential risk from superintelligent AI goes something like this: (1) superintelligent AIs will be goal-directed; (2) goal-directed superintelligent AIs will likely pursue outcomes that we regard as extremely bad; therefore (3) if we build superintelligent AIs, the future will likely be extremely bad. Katja Grace’s Counterarguments to the basic AI x-risk case [] identifies a number of weak points in each of the premises in the argument. We refer interested readers to our conversation with Katja below for more discussion of this post, as well as to Erik Jenner and Johannes Treutlein’s Responses to Katja Grace’s AI x-risk counterarguments [].The key driver of AI risk is that we are rapidly developing more and more powerful AI systems, while making relatively little progress in ensuring they are safe. Katja Grace’s Let’s think about slowing down AI [] argues that the AI risk community should consider advocating for slowing down AI progress. She rebuts some of the objections commonly levelled against this strategy: e.g. to the charge of infeasibility, she points out that many technologies (human gene editing, nuclear energy) have been halted or drastically curtailed due to ethical and/or safety concerns. In the comments, Carl Shulman argues that there is not currently enough buy-in from governments or the public to take more modest safety and governance interventions, so it doesn’t seem wise to advocate for such a dramatic and costly policy: “It's like climate activists in 1950 responding to difficulties passing funds for renewable energy R&D or a carbon tax by proposing that the sale of automobiles be banned immediately.It took a lot of scientific data, solidification of scientific consensus, and communication/movement-building over time to get current measures on climate change.”We enjoyed Kelsey Piper's review of What We Owe the Future [], not necessarily because we agree with her criticisms, but because we thought the review managed to identify, and articulate very clearly, what we take to be the main c...]]>
Pablo https://forum.effectivealtruism.org/posts/tGpwWsP5iBfZFigeZ/future-matters-6-ftx-collapse-value-lock-in-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk, published by Pablo on December 30, 2022 on The Effective Altruism Forum.[T]he sun with all the planets will in time grow too cold for life, unless indeed some great body dashes into the sun and thus gives it fresh life. Believing as I do that man in the distant future will be a far more perfect creature than he now is, it is an intolerable thought that he and all other sentient beings are doomed to complete annihilation after such long-continued slow progress.Charles DarwinFuture Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.A message to our readersWelcome back to Future Matters. We took a break during the autumn, but will now be returning to our previous monthly schedule. Future Matters would like to wish all our readers a happy new year!The most significant development during our hiatus was the collapse of FTX and the fall of Sam Bankman-Fried, until then one of the largest and most prominent supporters of longtermist causes. We were shocked and saddened by these revelations, and appalled by the allegations and admissions of fraud, deceit, and misappropriation of customer funds. As others have stated, fraud in the service of effective altruism is unacceptable, and we condemn these actions unequivocally and support authorities’ efforts to investigate and prosecute any crimes that may have been committed.ResearchA classic argument for existential risk from superintelligent AI goes something like this: (1) superintelligent AIs will be goal-directed; (2) goal-directed superintelligent AIs will likely pursue outcomes that we regard as extremely bad; therefore (3) if we build superintelligent AIs, the future will likely be extremely bad. Katja Grace’s Counterarguments to the basic AI x-risk case [] identifies a number of weak points in each of the premises in the argument. We refer interested readers to our conversation with Katja below for more discussion of this post, as well as to Erik Jenner and Johannes Treutlein’s Responses to Katja Grace’s AI x-risk counterarguments [].The key driver of AI risk is that we are rapidly developing more and more powerful AI systems, while making relatively little progress in ensuring they are safe. Katja Grace’s Let’s think about slowing down AI [] argues that the AI risk community should consider advocating for slowing down AI progress. She rebuts some of the objections commonly levelled against this strategy: e.g. to the charge of infeasibility, she points out that many technologies (human gene editing, nuclear energy) have been halted or drastically curtailed due to ethical and/or safety concerns. In the comments, Carl Shulman argues that there is not currently enough buy-in from governments or the public to take more modest safety and governance interventions, so it doesn’t seem wise to advocate for such a dramatic and costly policy: “It's like climate activists in 1950 responding to difficulties passing funds for renewable energy R&D or a carbon tax by proposing that the sale of automobiles be banned immediately.It took a lot of scientific data, solidification of scientific consensus, and communication/movement-building over time to get current measures on climate change.”We enjoyed Kelsey Piper's review of What We Owe the Future [], not necessarily because we agree with her criticisms, but because we thought the review managed to identify, and articulate very clearly, what we take to be the main c...]]>
Fri, 30 Dec 2022 20:35:53 +0000 EA - Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk by Pablo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk, published by Pablo on December 30, 2022 on The Effective Altruism Forum.[T]he sun with all the planets will in time grow too cold for life, unless indeed some great body dashes into the sun and thus gives it fresh life. Believing as I do that man in the distant future will be a far more perfect creature than he now is, it is an intolerable thought that he and all other sentient beings are doomed to complete annihilation after such long-continued slow progress.Charles DarwinFuture Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.A message to our readersWelcome back to Future Matters. We took a break during the autumn, but will now be returning to our previous monthly schedule. Future Matters would like to wish all our readers a happy new year!The most significant development during our hiatus was the collapse of FTX and the fall of Sam Bankman-Fried, until then one of the largest and most prominent supporters of longtermist causes. We were shocked and saddened by these revelations, and appalled by the allegations and admissions of fraud, deceit, and misappropriation of customer funds. As others have stated, fraud in the service of effective altruism is unacceptable, and we condemn these actions unequivocally and support authorities’ efforts to investigate and prosecute any crimes that may have been committed.ResearchA classic argument for existential risk from superintelligent AI goes something like this: (1) superintelligent AIs will be goal-directed; (2) goal-directed superintelligent AIs will likely pursue outcomes that we regard as extremely bad; therefore (3) if we build superintelligent AIs, the future will likely be extremely bad. Katja Grace’s Counterarguments to the basic AI x-risk case [] identifies a number of weak points in each of the premises in the argument. We refer interested readers to our conversation with Katja below for more discussion of this post, as well as to Erik Jenner and Johannes Treutlein’s Responses to Katja Grace’s AI x-risk counterarguments [].The key driver of AI risk is that we are rapidly developing more and more powerful AI systems, while making relatively little progress in ensuring they are safe. Katja Grace’s Let’s think about slowing down AI [] argues that the AI risk community should consider advocating for slowing down AI progress. She rebuts some of the objections commonly levelled against this strategy: e.g. to the charge of infeasibility, she points out that many technologies (human gene editing, nuclear energy) have been halted or drastically curtailed due to ethical and/or safety concerns. In the comments, Carl Shulman argues that there is not currently enough buy-in from governments or the public to take more modest safety and governance interventions, so it doesn’t seem wise to advocate for such a dramatic and costly policy: “It's like climate activists in 1950 responding to difficulties passing funds for renewable energy R&D or a carbon tax by proposing that the sale of automobiles be banned immediately.It took a lot of scientific data, solidification of scientific consensus, and communication/movement-building over time to get current measures on climate change.”We enjoyed Kelsey Piper's review of What We Owe the Future [], not necessarily because we agree with her criticisms, but because we thought the review managed to identify, and articulate very clearly, what we take to be the main c...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk, published by Pablo on December 30, 2022 on The Effective Altruism Forum.[T]he sun with all the planets will in time grow too cold for life, unless indeed some great body dashes into the sun and thus gives it fresh life. Believing as I do that man in the distant future will be a far more perfect creature than he now is, it is an intolerable thought that he and all other sentient beings are doomed to complete annihilation after such long-continued slow progress.Charles DarwinFuture Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.A message to our readersWelcome back to Future Matters. We took a break during the autumn, but will now be returning to our previous monthly schedule. Future Matters would like to wish all our readers a happy new year!The most significant development during our hiatus was the collapse of FTX and the fall of Sam Bankman-Fried, until then one of the largest and most prominent supporters of longtermist causes. We were shocked and saddened by these revelations, and appalled by the allegations and admissions of fraud, deceit, and misappropriation of customer funds. As others have stated, fraud in the service of effective altruism is unacceptable, and we condemn these actions unequivocally and support authorities’ efforts to investigate and prosecute any crimes that may have been committed.ResearchA classic argument for existential risk from superintelligent AI goes something like this: (1) superintelligent AIs will be goal-directed; (2) goal-directed superintelligent AIs will likely pursue outcomes that we regard as extremely bad; therefore (3) if we build superintelligent AIs, the future will likely be extremely bad. Katja Grace’s Counterarguments to the basic AI x-risk case [] identifies a number of weak points in each of the premises in the argument. We refer interested readers to our conversation with Katja below for more discussion of this post, as well as to Erik Jenner and Johannes Treutlein’s Responses to Katja Grace’s AI x-risk counterarguments [].The key driver of AI risk is that we are rapidly developing more and more powerful AI systems, while making relatively little progress in ensuring they are safe. Katja Grace’s Let’s think about slowing down AI [] argues that the AI risk community should consider advocating for slowing down AI progress. She rebuts some of the objections commonly levelled against this strategy: e.g. to the charge of infeasibility, she points out that many technologies (human gene editing, nuclear energy) have been halted or drastically curtailed due to ethical and/or safety concerns. In the comments, Carl Shulman argues that there is not currently enough buy-in from governments or the public to take more modest safety and governance interventions, so it doesn’t seem wise to advocate for such a dramatic and costly policy: “It's like climate activists in 1950 responding to difficulties passing funds for renewable energy R&D or a carbon tax by proposing that the sale of automobiles be banned immediately.It took a lot of scientific data, solidification of scientific consensus, and communication/movement-building over time to get current measures on climate change.”We enjoyed Kelsey Piper's review of What We Owe the Future [], not necessarily because we agree with her criticisms, but because we thought the review managed to identify, and articulate very clearly, what we take to be the main c...]]>
Pablo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 31:52 None full 4308
snnfmepzrwpAsAoDT_EA EA - Why Anima International suspended the campaign to end live fish sales in Poland by Jakub Stencel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Anima International suspended the campaign to end live fish sales in Poland, published by Jakub Stencel on December 30, 2022 on The Effective Altruism Forum.At Anima International, we recently decided to suspend our campaign against live fish sales in Poland indefinitely. After a few years of running the campaign, we are now concerned about the effects of our efforts, specifically the possibility of a net negative result for the lives of animals. We believe that by writing about it openly we can help foster a culture of intellectual honesty, information sharing and accountability. Ideally, our case can serve as a good example on reflecting on potential unintended consequences of advocacy interventions.SummarySome post-communist countries of Eastern Europe have a tradition of buying live carp in shops and slaughtering them at homes for Christmas Eve.Poland is a major importer and producer of carps, with 90% of domestic production consumed during the Christmas season.Many groups, including Anima International, oppose and target the practice because of its incredible cruelty, as well as significant public sentiment against it.Due to successful efforts of animal advocacy groups important victories were achieved, including major retailers withdrawing from selling live fish.A consumer trend to move away from carp to higher-status fish, like salmon, is observed.Anima International became increasingly worried that any effort to displace carp consumption may lead to increased animal suffering due to salmon farming requiring fish feed.We ran polls and created a rough model to check these assumptions and then considered how our strategy should look like.After careful considerations of a number of factors, including effectiveness estimates, we decided to disband our team focused on the campaign and invest the resources elsewhere.Carp in post-communist countriesCommon carp (Cyprinus carpio) is a domesticated freshwater fish. It’s the third largest fish farmed in the world’s aquaculture production. Poland is the European Union’s largest producer and top importer.What is especially bizarre about Poland and other similar countries in Central and Eastern Europe is the relatively recent custom of buying live carps during the Christmas period. The tradition developed around 70 years ago due to a combination of factors:post-war reality – destruction of fishing fleets made carp farming a desirable investment;religion – in Christianity fish is not considered meat and is thus allowed during periods of fasting;communism – there was a special government-established “fish allowance” under which people received live fish (refrigerators were uncommon) from their employers as Christmas bonuses.Due to this strong tradition[1] 90% of Polish domestic production is sold around Christmas and ~25% of the population reports buying the carp alive.[2] There is an intense public debate around this subject in Poland at this time of year.Suffering of carps in PolandCarp farming and the industryCarps are farmed in small ponds. As omnivores, they feed on small invertebrates and later transition to special grain-based feed. There are 3,000 farms in Poland with an additional 1,000 businesses engaged in carp aquaculture. According to 2020 reports, Polish carp aquaculture was the biggest in the European Union, making up 36% of freshwater fish production in Poland.While the Polish carp industry seems to be gaining momentum in Europe, it appears that popular demand and changing trends in consumption may be leading to stagnation. As economic models emerge, they point to aesthetics and the difficulty of preparation rather than price driving the stagnation, especially among young adults. From our anecdotal experience, this seems intuitive, as carp hasn’t been marketed well by the industry. We ...]]>
Jakub Stencel https://forum.effectivealtruism.org/posts/snnfmepzrwpAsAoDT/why-anima-international-suspended-the-campaign-to-end-live Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Anima International suspended the campaign to end live fish sales in Poland, published by Jakub Stencel on December 30, 2022 on The Effective Altruism Forum.At Anima International, we recently decided to suspend our campaign against live fish sales in Poland indefinitely. After a few years of running the campaign, we are now concerned about the effects of our efforts, specifically the possibility of a net negative result for the lives of animals. We believe that by writing about it openly we can help foster a culture of intellectual honesty, information sharing and accountability. Ideally, our case can serve as a good example on reflecting on potential unintended consequences of advocacy interventions.SummarySome post-communist countries of Eastern Europe have a tradition of buying live carp in shops and slaughtering them at homes for Christmas Eve.Poland is a major importer and producer of carps, with 90% of domestic production consumed during the Christmas season.Many groups, including Anima International, oppose and target the practice because of its incredible cruelty, as well as significant public sentiment against it.Due to successful efforts of animal advocacy groups important victories were achieved, including major retailers withdrawing from selling live fish.A consumer trend to move away from carp to higher-status fish, like salmon, is observed.Anima International became increasingly worried that any effort to displace carp consumption may lead to increased animal suffering due to salmon farming requiring fish feed.We ran polls and created a rough model to check these assumptions and then considered how our strategy should look like.After careful considerations of a number of factors, including effectiveness estimates, we decided to disband our team focused on the campaign and invest the resources elsewhere.Carp in post-communist countriesCommon carp (Cyprinus carpio) is a domesticated freshwater fish. It’s the third largest fish farmed in the world’s aquaculture production. Poland is the European Union’s largest producer and top importer.What is especially bizarre about Poland and other similar countries in Central and Eastern Europe is the relatively recent custom of buying live carps during the Christmas period. The tradition developed around 70 years ago due to a combination of factors:post-war reality – destruction of fishing fleets made carp farming a desirable investment;religion – in Christianity fish is not considered meat and is thus allowed during periods of fasting;communism – there was a special government-established “fish allowance” under which people received live fish (refrigerators were uncommon) from their employers as Christmas bonuses.Due to this strong tradition[1] 90% of Polish domestic production is sold around Christmas and ~25% of the population reports buying the carp alive.[2] There is an intense public debate around this subject in Poland at this time of year.Suffering of carps in PolandCarp farming and the industryCarps are farmed in small ponds. As omnivores, they feed on small invertebrates and later transition to special grain-based feed. There are 3,000 farms in Poland with an additional 1,000 businesses engaged in carp aquaculture. According to 2020 reports, Polish carp aquaculture was the biggest in the European Union, making up 36% of freshwater fish production in Poland.While the Polish carp industry seems to be gaining momentum in Europe, it appears that popular demand and changing trends in consumption may be leading to stagnation. As economic models emerge, they point to aesthetics and the difficulty of preparation rather than price driving the stagnation, especially among young adults. From our anecdotal experience, this seems intuitive, as carp hasn’t been marketed well by the industry. We ...]]>
Fri, 30 Dec 2022 19:17:00 +0000 EA - Why Anima International suspended the campaign to end live fish sales in Poland by Jakub Stencel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Anima International suspended the campaign to end live fish sales in Poland, published by Jakub Stencel on December 30, 2022 on The Effective Altruism Forum.At Anima International, we recently decided to suspend our campaign against live fish sales in Poland indefinitely. After a few years of running the campaign, we are now concerned about the effects of our efforts, specifically the possibility of a net negative result for the lives of animals. We believe that by writing about it openly we can help foster a culture of intellectual honesty, information sharing and accountability. Ideally, our case can serve as a good example on reflecting on potential unintended consequences of advocacy interventions.SummarySome post-communist countries of Eastern Europe have a tradition of buying live carp in shops and slaughtering them at homes for Christmas Eve.Poland is a major importer and producer of carps, with 90% of domestic production consumed during the Christmas season.Many groups, including Anima International, oppose and target the practice because of its incredible cruelty, as well as significant public sentiment against it.Due to successful efforts of animal advocacy groups important victories were achieved, including major retailers withdrawing from selling live fish.A consumer trend to move away from carp to higher-status fish, like salmon, is observed.Anima International became increasingly worried that any effort to displace carp consumption may lead to increased animal suffering due to salmon farming requiring fish feed.We ran polls and created a rough model to check these assumptions and then considered how our strategy should look like.After careful considerations of a number of factors, including effectiveness estimates, we decided to disband our team focused on the campaign and invest the resources elsewhere.Carp in post-communist countriesCommon carp (Cyprinus carpio) is a domesticated freshwater fish. It’s the third largest fish farmed in the world’s aquaculture production. Poland is the European Union’s largest producer and top importer.What is especially bizarre about Poland and other similar countries in Central and Eastern Europe is the relatively recent custom of buying live carps during the Christmas period. The tradition developed around 70 years ago due to a combination of factors:post-war reality – destruction of fishing fleets made carp farming a desirable investment;religion – in Christianity fish is not considered meat and is thus allowed during periods of fasting;communism – there was a special government-established “fish allowance” under which people received live fish (refrigerators were uncommon) from their employers as Christmas bonuses.Due to this strong tradition[1] 90% of Polish domestic production is sold around Christmas and ~25% of the population reports buying the carp alive.[2] There is an intense public debate around this subject in Poland at this time of year.Suffering of carps in PolandCarp farming and the industryCarps are farmed in small ponds. As omnivores, they feed on small invertebrates and later transition to special grain-based feed. There are 3,000 farms in Poland with an additional 1,000 businesses engaged in carp aquaculture. According to 2020 reports, Polish carp aquaculture was the biggest in the European Union, making up 36% of freshwater fish production in Poland.While the Polish carp industry seems to be gaining momentum in Europe, it appears that popular demand and changing trends in consumption may be leading to stagnation. As economic models emerge, they point to aesthetics and the difficulty of preparation rather than price driving the stagnation, especially among young adults. From our anecdotal experience, this seems intuitive, as carp hasn’t been marketed well by the industry. We ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Anima International suspended the campaign to end live fish sales in Poland, published by Jakub Stencel on December 30, 2022 on The Effective Altruism Forum.At Anima International, we recently decided to suspend our campaign against live fish sales in Poland indefinitely. After a few years of running the campaign, we are now concerned about the effects of our efforts, specifically the possibility of a net negative result for the lives of animals. We believe that by writing about it openly we can help foster a culture of intellectual honesty, information sharing and accountability. Ideally, our case can serve as a good example on reflecting on potential unintended consequences of advocacy interventions.SummarySome post-communist countries of Eastern Europe have a tradition of buying live carp in shops and slaughtering them at homes for Christmas Eve.Poland is a major importer and producer of carps, with 90% of domestic production consumed during the Christmas season.Many groups, including Anima International, oppose and target the practice because of its incredible cruelty, as well as significant public sentiment against it.Due to successful efforts of animal advocacy groups important victories were achieved, including major retailers withdrawing from selling live fish.A consumer trend to move away from carp to higher-status fish, like salmon, is observed.Anima International became increasingly worried that any effort to displace carp consumption may lead to increased animal suffering due to salmon farming requiring fish feed.We ran polls and created a rough model to check these assumptions and then considered how our strategy should look like.After careful considerations of a number of factors, including effectiveness estimates, we decided to disband our team focused on the campaign and invest the resources elsewhere.Carp in post-communist countriesCommon carp (Cyprinus carpio) is a domesticated freshwater fish. It’s the third largest fish farmed in the world’s aquaculture production. Poland is the European Union’s largest producer and top importer.What is especially bizarre about Poland and other similar countries in Central and Eastern Europe is the relatively recent custom of buying live carps during the Christmas period. The tradition developed around 70 years ago due to a combination of factors:post-war reality – destruction of fishing fleets made carp farming a desirable investment;religion – in Christianity fish is not considered meat and is thus allowed during periods of fasting;communism – there was a special government-established “fish allowance” under which people received live fish (refrigerators were uncommon) from their employers as Christmas bonuses.Due to this strong tradition[1] 90% of Polish domestic production is sold around Christmas and ~25% of the population reports buying the carp alive.[2] There is an intense public debate around this subject in Poland at this time of year.Suffering of carps in PolandCarp farming and the industryCarps are farmed in small ponds. As omnivores, they feed on small invertebrates and later transition to special grain-based feed. There are 3,000 farms in Poland with an additional 1,000 businesses engaged in carp aquaculture. According to 2020 reports, Polish carp aquaculture was the biggest in the European Union, making up 36% of freshwater fish production in Poland.While the Polish carp industry seems to be gaining momentum in Europe, it appears that popular demand and changing trends in consumption may be leading to stagnation. As economic models emerge, they point to aesthetics and the difficulty of preparation rather than price driving the stagnation, especially among young adults. From our anecdotal experience, this seems intuitive, as carp hasn’t been marketed well by the industry. We ...]]>
Jakub Stencel https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 23:09 None full 4307
SBSC8ZiTNwTM8Azue_EA EA - A libertarian socialist’s view on how EA can improve by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A libertarian socialist’s view on how EA can improve, published by freedomandutility on December 30, 2022 on The Effective Altruism Forum.Following up on my post criticising liberal-progressive criticisms of EA, I’m bringing you suggestions from further to the left on how EA could improve.In general, libertarian socialism opposes the concentration of power and wealth and aims to redistribute them, either as a terminal goal or an instrumental goal.This post is divided into 3 sections - meta EA, EA interventions and EA Philosophy.Meta EAMost of the interventions I propose are improvements in EA’s institutional design and safeguards, which should, in theory, increase the chances that resources are spent optimally.Whether we are spending resources optimally is near-impossible to measure and evaluate, so we have to rely on theory. Regardless of whether my proposed interventions work or fail, there would be no evidence for it.EA relies on highly-uncertain, vulnerable-to-motivated-reasoning expected value (EV) calculations and is no less vulnerable to motivated reasoning than other ideologies. Because it is not possible to detect suboptimal spending, we should not wait for strong evidence of mistakes or outright fraud and corruption to make improvements, and we should be willing to bear small costs to reap long term benefits.EA priors on the influence of self-serving biases are too weakIn my view, EAs underestimate the influence that self-serving biases play in imprecise, highly uncertain expected value (EV) calculations around decisions such as buying luxurious conference venues, lavish community building expenditure, funding ready meals and funding Ubers, leading to suboptimal allocation of resources.When concerns are raised, I notice that some EAs ask for “evidence” that decisions are influenced by self-serving biases. But that is not how motivated reasoning works - you will rarely find concrete evidence for motivated reasoning. Depending on the strength of self-serving biases, they could influence expected value calculations in ways that justify the most suboptimal, most luxurious purchases, with no evidence of the biases existing.I suggested three improvements to how EV calculations are made in another post:Have two (or more) individuals, or groups, independently calculate the expected value of an intervention and compare resultsIn expected value calculations, identify a theoretical cost at which the intervention would no longer be approximately maximising expected value from the resourcesKeep in mind that EA aims to make decisions that approximately maximise expected value from a set of resources, rather than just make decisions which just have net positive expected valueEAs underestimate the importance of conflicts of interest and the distribution of power inside EAThere is a huge amount of overlap across boards, governance and leadership of key EA organisations, increasing the risk of suboptimal allocation of resources, since in theory, there is a high risk of funders giving too much funding to other organisations with connected leadership.Although I think a certain degree of coordination via events such as the Leaders Summit is good, a greater degree of independence between institutions may help reduce biases and safeguard against misallocation.I would recommend that individuals are only allowed to hold leadership, board or governance positions in one EA organisation each. Beyond reducing risks of bias in funding allocation, this would also help to distribute power at the top of EA, safeguarding against individual irrationality and increasing diversity of thought, which may generate additional benefits.If this seems like a bad idea, try the reversal test: do you think EA orgs should become more integrated?EDIT 1 at 43 upvotes: Another potential ...]]>
freedomandutility https://forum.effectivealtruism.org/posts/SBSC8ZiTNwTM8Azue/a-libertarian-socialist-s-view-on-how-ea-can-improve Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A libertarian socialist’s view on how EA can improve, published by freedomandutility on December 30, 2022 on The Effective Altruism Forum.Following up on my post criticising liberal-progressive criticisms of EA, I’m bringing you suggestions from further to the left on how EA could improve.In general, libertarian socialism opposes the concentration of power and wealth and aims to redistribute them, either as a terminal goal or an instrumental goal.This post is divided into 3 sections - meta EA, EA interventions and EA Philosophy.Meta EAMost of the interventions I propose are improvements in EA’s institutional design and safeguards, which should, in theory, increase the chances that resources are spent optimally.Whether we are spending resources optimally is near-impossible to measure and evaluate, so we have to rely on theory. Regardless of whether my proposed interventions work or fail, there would be no evidence for it.EA relies on highly-uncertain, vulnerable-to-motivated-reasoning expected value (EV) calculations and is no less vulnerable to motivated reasoning than other ideologies. Because it is not possible to detect suboptimal spending, we should not wait for strong evidence of mistakes or outright fraud and corruption to make improvements, and we should be willing to bear small costs to reap long term benefits.EA priors on the influence of self-serving biases are too weakIn my view, EAs underestimate the influence that self-serving biases play in imprecise, highly uncertain expected value (EV) calculations around decisions such as buying luxurious conference venues, lavish community building expenditure, funding ready meals and funding Ubers, leading to suboptimal allocation of resources.When concerns are raised, I notice that some EAs ask for “evidence” that decisions are influenced by self-serving biases. But that is not how motivated reasoning works - you will rarely find concrete evidence for motivated reasoning. Depending on the strength of self-serving biases, they could influence expected value calculations in ways that justify the most suboptimal, most luxurious purchases, with no evidence of the biases existing.I suggested three improvements to how EV calculations are made in another post:Have two (or more) individuals, or groups, independently calculate the expected value of an intervention and compare resultsIn expected value calculations, identify a theoretical cost at which the intervention would no longer be approximately maximising expected value from the resourcesKeep in mind that EA aims to make decisions that approximately maximise expected value from a set of resources, rather than just make decisions which just have net positive expected valueEAs underestimate the importance of conflicts of interest and the distribution of power inside EAThere is a huge amount of overlap across boards, governance and leadership of key EA organisations, increasing the risk of suboptimal allocation of resources, since in theory, there is a high risk of funders giving too much funding to other organisations with connected leadership.Although I think a certain degree of coordination via events such as the Leaders Summit is good, a greater degree of independence between institutions may help reduce biases and safeguard against misallocation.I would recommend that individuals are only allowed to hold leadership, board or governance positions in one EA organisation each. Beyond reducing risks of bias in funding allocation, this would also help to distribute power at the top of EA, safeguarding against individual irrationality and increasing diversity of thought, which may generate additional benefits.If this seems like a bad idea, try the reversal test: do you think EA orgs should become more integrated?EDIT 1 at 43 upvotes: Another potential ...]]>
Fri, 30 Dec 2022 14:51:12 +0000 EA - A libertarian socialist’s view on how EA can improve by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A libertarian socialist’s view on how EA can improve, published by freedomandutility on December 30, 2022 on The Effective Altruism Forum.Following up on my post criticising liberal-progressive criticisms of EA, I’m bringing you suggestions from further to the left on how EA could improve.In general, libertarian socialism opposes the concentration of power and wealth and aims to redistribute them, either as a terminal goal or an instrumental goal.This post is divided into 3 sections - meta EA, EA interventions and EA Philosophy.Meta EAMost of the interventions I propose are improvements in EA’s institutional design and safeguards, which should, in theory, increase the chances that resources are spent optimally.Whether we are spending resources optimally is near-impossible to measure and evaluate, so we have to rely on theory. Regardless of whether my proposed interventions work or fail, there would be no evidence for it.EA relies on highly-uncertain, vulnerable-to-motivated-reasoning expected value (EV) calculations and is no less vulnerable to motivated reasoning than other ideologies. Because it is not possible to detect suboptimal spending, we should not wait for strong evidence of mistakes or outright fraud and corruption to make improvements, and we should be willing to bear small costs to reap long term benefits.EA priors on the influence of self-serving biases are too weakIn my view, EAs underestimate the influence that self-serving biases play in imprecise, highly uncertain expected value (EV) calculations around decisions such as buying luxurious conference venues, lavish community building expenditure, funding ready meals and funding Ubers, leading to suboptimal allocation of resources.When concerns are raised, I notice that some EAs ask for “evidence” that decisions are influenced by self-serving biases. But that is not how motivated reasoning works - you will rarely find concrete evidence for motivated reasoning. Depending on the strength of self-serving biases, they could influence expected value calculations in ways that justify the most suboptimal, most luxurious purchases, with no evidence of the biases existing.I suggested three improvements to how EV calculations are made in another post:Have two (or more) individuals, or groups, independently calculate the expected value of an intervention and compare resultsIn expected value calculations, identify a theoretical cost at which the intervention would no longer be approximately maximising expected value from the resourcesKeep in mind that EA aims to make decisions that approximately maximise expected value from a set of resources, rather than just make decisions which just have net positive expected valueEAs underestimate the importance of conflicts of interest and the distribution of power inside EAThere is a huge amount of overlap across boards, governance and leadership of key EA organisations, increasing the risk of suboptimal allocation of resources, since in theory, there is a high risk of funders giving too much funding to other organisations with connected leadership.Although I think a certain degree of coordination via events such as the Leaders Summit is good, a greater degree of independence between institutions may help reduce biases and safeguard against misallocation.I would recommend that individuals are only allowed to hold leadership, board or governance positions in one EA organisation each. Beyond reducing risks of bias in funding allocation, this would also help to distribute power at the top of EA, safeguarding against individual irrationality and increasing diversity of thought, which may generate additional benefits.If this seems like a bad idea, try the reversal test: do you think EA orgs should become more integrated?EDIT 1 at 43 upvotes: Another potential ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A libertarian socialist’s view on how EA can improve, published by freedomandutility on December 30, 2022 on The Effective Altruism Forum.Following up on my post criticising liberal-progressive criticisms of EA, I’m bringing you suggestions from further to the left on how EA could improve.In general, libertarian socialism opposes the concentration of power and wealth and aims to redistribute them, either as a terminal goal or an instrumental goal.This post is divided into 3 sections - meta EA, EA interventions and EA Philosophy.Meta EAMost of the interventions I propose are improvements in EA’s institutional design and safeguards, which should, in theory, increase the chances that resources are spent optimally.Whether we are spending resources optimally is near-impossible to measure and evaluate, so we have to rely on theory. Regardless of whether my proposed interventions work or fail, there would be no evidence for it.EA relies on highly-uncertain, vulnerable-to-motivated-reasoning expected value (EV) calculations and is no less vulnerable to motivated reasoning than other ideologies. Because it is not possible to detect suboptimal spending, we should not wait for strong evidence of mistakes or outright fraud and corruption to make improvements, and we should be willing to bear small costs to reap long term benefits.EA priors on the influence of self-serving biases are too weakIn my view, EAs underestimate the influence that self-serving biases play in imprecise, highly uncertain expected value (EV) calculations around decisions such as buying luxurious conference venues, lavish community building expenditure, funding ready meals and funding Ubers, leading to suboptimal allocation of resources.When concerns are raised, I notice that some EAs ask for “evidence” that decisions are influenced by self-serving biases. But that is not how motivated reasoning works - you will rarely find concrete evidence for motivated reasoning. Depending on the strength of self-serving biases, they could influence expected value calculations in ways that justify the most suboptimal, most luxurious purchases, with no evidence of the biases existing.I suggested three improvements to how EV calculations are made in another post:Have two (or more) individuals, or groups, independently calculate the expected value of an intervention and compare resultsIn expected value calculations, identify a theoretical cost at which the intervention would no longer be approximately maximising expected value from the resourcesKeep in mind that EA aims to make decisions that approximately maximise expected value from a set of resources, rather than just make decisions which just have net positive expected valueEAs underestimate the importance of conflicts of interest and the distribution of power inside EAThere is a huge amount of overlap across boards, governance and leadership of key EA organisations, increasing the risk of suboptimal allocation of resources, since in theory, there is a high risk of funders giving too much funding to other organisations with connected leadership.Although I think a certain degree of coordination via events such as the Leaders Summit is good, a greater degree of independence between institutions may help reduce biases and safeguard against misallocation.I would recommend that individuals are only allowed to hold leadership, board or governance positions in one EA organisation each. Beyond reducing risks of bias in funding allocation, this would also help to distribute power at the top of EA, safeguarding against individual irrationality and increasing diversity of thought, which may generate additional benefits.If this seems like a bad idea, try the reversal test: do you think EA orgs should become more integrated?EDIT 1 at 43 upvotes: Another potential ...]]>
freedomandutility https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:58 None full 4309
cdBo2HuXA5FJpya4H_EA EA - Entrepreneurship ETG Might Be Better Than 80k Thought by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Entrepreneurship ETG Might Be Better Than 80k Thought, published by Ben West on December 29, 2022 on The Effective Altruism Forum.SummaryIn 2014, Ryan Carey estimated that YCombinator-backed startup founders averaged $2.5M/yearI repeat his analysis, and find that this number is now substantially higher: $3.8-9.9M/yearWhen this amount is discounted by 12%/year (average S&P 500 returns) this falls to $1.9-4.3M/y, and with a 20%/year discount (a number I've heard for returns to community building) it falls to $1.1-2.9M/y.Note that these numbers include illiquid (pre-exit) valuations.Major Results0% discount$3.77M/y$8.17M/y12% discount$1.98M/y$4.28M/y20% discount$1.34M/y$2.90M/yDiscountAll companiesExcluding post-2019Average founder income under different discount rates. “All companies” includes in the denominator every company incubated by YCombinator; “excluding post-2019” excludes companies incubated after 2019 (which presumably are less likely to make it to the list of top YCombinator companies by valuation, and therefore arguably should be excluded from consideration).Weighted Per Year0% discount$4.56M/y$9.87M/y12% discount$1.84M/y$3.98M/y20% discount$1.06M/y$2.30M/yDiscountAll companiesExcluding post-2019This table is the same as the above except it e.g. counts a company which has been around for 4 years twice as much as one which has been around for 2 years. I.e. this table is the expected value of a founder-year, whereas the previous table is the expected annual value of founding a company. I’m not sure which is more intuitive.CommentaryBackground: See this 80k article for the basic case behind considering entrepreneurship for earning to give reasons.These numbers seem fairly high, and may indicate that earning to give through entrepreneurship is a good path for those who have solid personal fit (with the usual caveats about only pursuing ethical startup careers; see also my analysis of YCombinator fraud rates).With a 20% annual discount the numbers are not that far off from what I've heard as higher-end estimates of the value of direct work, and I expect that there is a fairly strong correlation between being at the higher end of entrepreneurship returns and being at the higher end of direct work, so this doesn't seem like that strong of an argument for entrepreneurship over direct work.My impression is that these numbers are roughly similar to average quantitative finance income, so I’m not sure there’s much of an argument for one over the other based on this data (from an income perspective).Note that the vast majority of founders who apply to YCombinator are rejected, and this is not considered in these estimates.Appendix A: Methods and DataNote: if you know Python, reading the Jupyter notebook might be easier than following this document.MethodsUsed this list of YCombinator top companies and tried to find public information about their most recent valuation. Importantly, note that this is including pre-exit valuations.For publicly traded companies, I used their market capitalization at the time of writing (rather than when they IPO’d).I used an estimate of 2.3 people per founding team and average equity ownership of 35% from the original 80 K article. These numbers could probably use an update.The discount was calculated using a straightforward geometric discount, i.e. receiving $N in Y years with discount rate d has a net present value of (1-d)^Y N.I assume that everything not on that list is valued at zero. This is obviously an underestimate; but I think it’s not too far off:I estimate the value of the company at the bottom of the list (Karbon Card) at $60MIf the 1,788 companies started after 2019 who are not in this list were all valued at $60M, this would increase the total valuation by $107B = 19.5%This is a very conse...]]>
Ben West https://forum.effectivealtruism.org/posts/cdBo2HuXA5FJpya4H/entrepreneurship-etg-might-be-better-than-80k-thought Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Entrepreneurship ETG Might Be Better Than 80k Thought, published by Ben West on December 29, 2022 on The Effective Altruism Forum.SummaryIn 2014, Ryan Carey estimated that YCombinator-backed startup founders averaged $2.5M/yearI repeat his analysis, and find that this number is now substantially higher: $3.8-9.9M/yearWhen this amount is discounted by 12%/year (average S&P 500 returns) this falls to $1.9-4.3M/y, and with a 20%/year discount (a number I've heard for returns to community building) it falls to $1.1-2.9M/y.Note that these numbers include illiquid (pre-exit) valuations.Major Results0% discount$3.77M/y$8.17M/y12% discount$1.98M/y$4.28M/y20% discount$1.34M/y$2.90M/yDiscountAll companiesExcluding post-2019Average founder income under different discount rates. “All companies” includes in the denominator every company incubated by YCombinator; “excluding post-2019” excludes companies incubated after 2019 (which presumably are less likely to make it to the list of top YCombinator companies by valuation, and therefore arguably should be excluded from consideration).Weighted Per Year0% discount$4.56M/y$9.87M/y12% discount$1.84M/y$3.98M/y20% discount$1.06M/y$2.30M/yDiscountAll companiesExcluding post-2019This table is the same as the above except it e.g. counts a company which has been around for 4 years twice as much as one which has been around for 2 years. I.e. this table is the expected value of a founder-year, whereas the previous table is the expected annual value of founding a company. I’m not sure which is more intuitive.CommentaryBackground: See this 80k article for the basic case behind considering entrepreneurship for earning to give reasons.These numbers seem fairly high, and may indicate that earning to give through entrepreneurship is a good path for those who have solid personal fit (with the usual caveats about only pursuing ethical startup careers; see also my analysis of YCombinator fraud rates).With a 20% annual discount the numbers are not that far off from what I've heard as higher-end estimates of the value of direct work, and I expect that there is a fairly strong correlation between being at the higher end of entrepreneurship returns and being at the higher end of direct work, so this doesn't seem like that strong of an argument for entrepreneurship over direct work.My impression is that these numbers are roughly similar to average quantitative finance income, so I’m not sure there’s much of an argument for one over the other based on this data (from an income perspective).Note that the vast majority of founders who apply to YCombinator are rejected, and this is not considered in these estimates.Appendix A: Methods and DataNote: if you know Python, reading the Jupyter notebook might be easier than following this document.MethodsUsed this list of YCombinator top companies and tried to find public information about their most recent valuation. Importantly, note that this is including pre-exit valuations.For publicly traded companies, I used their market capitalization at the time of writing (rather than when they IPO’d).I used an estimate of 2.3 people per founding team and average equity ownership of 35% from the original 80 K article. These numbers could probably use an update.The discount was calculated using a straightforward geometric discount, i.e. receiving $N in Y years with discount rate d has a net present value of (1-d)^Y N.I assume that everything not on that list is valued at zero. This is obviously an underestimate; but I think it’s not too far off:I estimate the value of the company at the bottom of the list (Karbon Card) at $60MIf the 1,788 companies started after 2019 who are not in this list were all valued at $60M, this would increase the total valuation by $107B = 19.5%This is a very conse...]]>
Thu, 29 Dec 2022 18:37:20 +0000 EA - Entrepreneurship ETG Might Be Better Than 80k Thought by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Entrepreneurship ETG Might Be Better Than 80k Thought, published by Ben West on December 29, 2022 on The Effective Altruism Forum.SummaryIn 2014, Ryan Carey estimated that YCombinator-backed startup founders averaged $2.5M/yearI repeat his analysis, and find that this number is now substantially higher: $3.8-9.9M/yearWhen this amount is discounted by 12%/year (average S&P 500 returns) this falls to $1.9-4.3M/y, and with a 20%/year discount (a number I've heard for returns to community building) it falls to $1.1-2.9M/y.Note that these numbers include illiquid (pre-exit) valuations.Major Results0% discount$3.77M/y$8.17M/y12% discount$1.98M/y$4.28M/y20% discount$1.34M/y$2.90M/yDiscountAll companiesExcluding post-2019Average founder income under different discount rates. “All companies” includes in the denominator every company incubated by YCombinator; “excluding post-2019” excludes companies incubated after 2019 (which presumably are less likely to make it to the list of top YCombinator companies by valuation, and therefore arguably should be excluded from consideration).Weighted Per Year0% discount$4.56M/y$9.87M/y12% discount$1.84M/y$3.98M/y20% discount$1.06M/y$2.30M/yDiscountAll companiesExcluding post-2019This table is the same as the above except it e.g. counts a company which has been around for 4 years twice as much as one which has been around for 2 years. I.e. this table is the expected value of a founder-year, whereas the previous table is the expected annual value of founding a company. I’m not sure which is more intuitive.CommentaryBackground: See this 80k article for the basic case behind considering entrepreneurship for earning to give reasons.These numbers seem fairly high, and may indicate that earning to give through entrepreneurship is a good path for those who have solid personal fit (with the usual caveats about only pursuing ethical startup careers; see also my analysis of YCombinator fraud rates).With a 20% annual discount the numbers are not that far off from what I've heard as higher-end estimates of the value of direct work, and I expect that there is a fairly strong correlation between being at the higher end of entrepreneurship returns and being at the higher end of direct work, so this doesn't seem like that strong of an argument for entrepreneurship over direct work.My impression is that these numbers are roughly similar to average quantitative finance income, so I’m not sure there’s much of an argument for one over the other based on this data (from an income perspective).Note that the vast majority of founders who apply to YCombinator are rejected, and this is not considered in these estimates.Appendix A: Methods and DataNote: if you know Python, reading the Jupyter notebook might be easier than following this document.MethodsUsed this list of YCombinator top companies and tried to find public information about their most recent valuation. Importantly, note that this is including pre-exit valuations.For publicly traded companies, I used their market capitalization at the time of writing (rather than when they IPO’d).I used an estimate of 2.3 people per founding team and average equity ownership of 35% from the original 80 K article. These numbers could probably use an update.The discount was calculated using a straightforward geometric discount, i.e. receiving $N in Y years with discount rate d has a net present value of (1-d)^Y N.I assume that everything not on that list is valued at zero. This is obviously an underestimate; but I think it’s not too far off:I estimate the value of the company at the bottom of the list (Karbon Card) at $60MIf the 1,788 companies started after 2019 who are not in this list were all valued at $60M, this would increase the total valuation by $107B = 19.5%This is a very conse...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Entrepreneurship ETG Might Be Better Than 80k Thought, published by Ben West on December 29, 2022 on The Effective Altruism Forum.SummaryIn 2014, Ryan Carey estimated that YCombinator-backed startup founders averaged $2.5M/yearI repeat his analysis, and find that this number is now substantially higher: $3.8-9.9M/yearWhen this amount is discounted by 12%/year (average S&P 500 returns) this falls to $1.9-4.3M/y, and with a 20%/year discount (a number I've heard for returns to community building) it falls to $1.1-2.9M/y.Note that these numbers include illiquid (pre-exit) valuations.Major Results0% discount$3.77M/y$8.17M/y12% discount$1.98M/y$4.28M/y20% discount$1.34M/y$2.90M/yDiscountAll companiesExcluding post-2019Average founder income under different discount rates. “All companies” includes in the denominator every company incubated by YCombinator; “excluding post-2019” excludes companies incubated after 2019 (which presumably are less likely to make it to the list of top YCombinator companies by valuation, and therefore arguably should be excluded from consideration).Weighted Per Year0% discount$4.56M/y$9.87M/y12% discount$1.84M/y$3.98M/y20% discount$1.06M/y$2.30M/yDiscountAll companiesExcluding post-2019This table is the same as the above except it e.g. counts a company which has been around for 4 years twice as much as one which has been around for 2 years. I.e. this table is the expected value of a founder-year, whereas the previous table is the expected annual value of founding a company. I’m not sure which is more intuitive.CommentaryBackground: See this 80k article for the basic case behind considering entrepreneurship for earning to give reasons.These numbers seem fairly high, and may indicate that earning to give through entrepreneurship is a good path for those who have solid personal fit (with the usual caveats about only pursuing ethical startup careers; see also my analysis of YCombinator fraud rates).With a 20% annual discount the numbers are not that far off from what I've heard as higher-end estimates of the value of direct work, and I expect that there is a fairly strong correlation between being at the higher end of entrepreneurship returns and being at the higher end of direct work, so this doesn't seem like that strong of an argument for entrepreneurship over direct work.My impression is that these numbers are roughly similar to average quantitative finance income, so I’m not sure there’s much of an argument for one over the other based on this data (from an income perspective).Note that the vast majority of founders who apply to YCombinator are rejected, and this is not considered in these estimates.Appendix A: Methods and DataNote: if you know Python, reading the Jupyter notebook might be easier than following this document.MethodsUsed this list of YCombinator top companies and tried to find public information about their most recent valuation. Importantly, note that this is including pre-exit valuations.For publicly traded companies, I used their market capitalization at the time of writing (rather than when they IPO’d).I used an estimate of 2.3 people per founding team and average equity ownership of 35% from the original 80 K article. These numbers could probably use an update.The discount was calculated using a straightforward geometric discount, i.e. receiving $N in Y years with discount rate d has a net present value of (1-d)^Y N.I assume that everything not on that list is valued at zero. This is obviously an underestimate; but I think it’s not too far off:I estimate the value of the company at the bottom of the list (Karbon Card) at $60MIf the 1,788 companies started after 2019 who are not in this list were all valued at $60M, this would increase the total valuation by $107B = 19.5%This is a very conse...]]>
Ben West https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:19 None full 4297
bBoKBFnBsPvoiHuaT_EA EA - Announcing the EA Merch Store! by Ines Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the EA Merch Store!, published by Ines on December 29, 2022 on The Effective Altruism Forum.You can now buy EA merch at eamerch.co! We are launching with a small number of designs, and will be adding more soon, some of which will be based on submissions from the community.Why a merch store?Merch helps EAs express themselves and fosters a sense of community. It can also help EAs recognize each other within the same setting or act as a conversation-starter at a function. But most importantly, it’s fun!How it worksOur designer Pearl will regularly add new products, many based on ideas submitted by the community. Shipping and manufacturing is handled by a third-party (Printful). All products are priced at 0% markup, and we are operating on a one-time independent grant for the time being. We will seek long-term funding in the future contingent on demand.If you have an idea for a new product or design, submit it at this link! It doesn’t need to be fully fleshed out—just describe what you have in mind, and our designer will try to make it happen. You can also upload a (preferably high-quality) image if you are picturing something really specific. In the short term, we are more likely to take requests for new designs than new products since we are somewhat restricted in the products that we have the ability to manufacture. You are also able to vote on other people’s ideas, and we will generally prioritize those that get the most votes.We are unlikely to accept designs with controversial, vulgar, or otherwise inappropriate messaging. We are aware of the fact that our merch will symbolize the EA community, and we want to communicate that we are a friendly and welcoming group.We are still in Beta modeWe have not yet processed any orders, so it is possible that there will be some bugs or unexpected hiccups during our first few weeks of operation (e.g. delays in shipping or payment processing). Place your orders now only if you are okay with this! If not, we recommend you wait and submit your ideas to us in the meantime. We also ask that you do not place orders in bulk at this time, as we are charged immediately for manufacturing and shipping but it takes about a week for your money to reach us.Due to our web platform’s limited flexibility in setting shipping prices, they may sometimes be slightly higher than shipping cost.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ines https://forum.effectivealtruism.org/posts/bBoKBFnBsPvoiHuaT/announcing-the-ea-merch-store Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the EA Merch Store!, published by Ines on December 29, 2022 on The Effective Altruism Forum.You can now buy EA merch at eamerch.co! We are launching with a small number of designs, and will be adding more soon, some of which will be based on submissions from the community.Why a merch store?Merch helps EAs express themselves and fosters a sense of community. It can also help EAs recognize each other within the same setting or act as a conversation-starter at a function. But most importantly, it’s fun!How it worksOur designer Pearl will regularly add new products, many based on ideas submitted by the community. Shipping and manufacturing is handled by a third-party (Printful). All products are priced at 0% markup, and we are operating on a one-time independent grant for the time being. We will seek long-term funding in the future contingent on demand.If you have an idea for a new product or design, submit it at this link! It doesn’t need to be fully fleshed out—just describe what you have in mind, and our designer will try to make it happen. You can also upload a (preferably high-quality) image if you are picturing something really specific. In the short term, we are more likely to take requests for new designs than new products since we are somewhat restricted in the products that we have the ability to manufacture. You are also able to vote on other people’s ideas, and we will generally prioritize those that get the most votes.We are unlikely to accept designs with controversial, vulgar, or otherwise inappropriate messaging. We are aware of the fact that our merch will symbolize the EA community, and we want to communicate that we are a friendly and welcoming group.We are still in Beta modeWe have not yet processed any orders, so it is possible that there will be some bugs or unexpected hiccups during our first few weeks of operation (e.g. delays in shipping or payment processing). Place your orders now only if you are okay with this! If not, we recommend you wait and submit your ideas to us in the meantime. We also ask that you do not place orders in bulk at this time, as we are charged immediately for manufacturing and shipping but it takes about a week for your money to reach us.Due to our web platform’s limited flexibility in setting shipping prices, they may sometimes be slightly higher than shipping cost.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 29 Dec 2022 14:36:44 +0000 EA - Announcing the EA Merch Store! by Ines Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the EA Merch Store!, published by Ines on December 29, 2022 on The Effective Altruism Forum.You can now buy EA merch at eamerch.co! We are launching with a small number of designs, and will be adding more soon, some of which will be based on submissions from the community.Why a merch store?Merch helps EAs express themselves and fosters a sense of community. It can also help EAs recognize each other within the same setting or act as a conversation-starter at a function. But most importantly, it’s fun!How it worksOur designer Pearl will regularly add new products, many based on ideas submitted by the community. Shipping and manufacturing is handled by a third-party (Printful). All products are priced at 0% markup, and we are operating on a one-time independent grant for the time being. We will seek long-term funding in the future contingent on demand.If you have an idea for a new product or design, submit it at this link! It doesn’t need to be fully fleshed out—just describe what you have in mind, and our designer will try to make it happen. You can also upload a (preferably high-quality) image if you are picturing something really specific. In the short term, we are more likely to take requests for new designs than new products since we are somewhat restricted in the products that we have the ability to manufacture. You are also able to vote on other people’s ideas, and we will generally prioritize those that get the most votes.We are unlikely to accept designs with controversial, vulgar, or otherwise inappropriate messaging. We are aware of the fact that our merch will symbolize the EA community, and we want to communicate that we are a friendly and welcoming group.We are still in Beta modeWe have not yet processed any orders, so it is possible that there will be some bugs or unexpected hiccups during our first few weeks of operation (e.g. delays in shipping or payment processing). Place your orders now only if you are okay with this! If not, we recommend you wait and submit your ideas to us in the meantime. We also ask that you do not place orders in bulk at this time, as we are charged immediately for manufacturing and shipping but it takes about a week for your money to reach us.Due to our web platform’s limited flexibility in setting shipping prices, they may sometimes be slightly higher than shipping cost.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the EA Merch Store!, published by Ines on December 29, 2022 on The Effective Altruism Forum.You can now buy EA merch at eamerch.co! We are launching with a small number of designs, and will be adding more soon, some of which will be based on submissions from the community.Why a merch store?Merch helps EAs express themselves and fosters a sense of community. It can also help EAs recognize each other within the same setting or act as a conversation-starter at a function. But most importantly, it’s fun!How it worksOur designer Pearl will regularly add new products, many based on ideas submitted by the community. Shipping and manufacturing is handled by a third-party (Printful). All products are priced at 0% markup, and we are operating on a one-time independent grant for the time being. We will seek long-term funding in the future contingent on demand.If you have an idea for a new product or design, submit it at this link! It doesn’t need to be fully fleshed out—just describe what you have in mind, and our designer will try to make it happen. You can also upload a (preferably high-quality) image if you are picturing something really specific. In the short term, we are more likely to take requests for new designs than new products since we are somewhat restricted in the products that we have the ability to manufacture. You are also able to vote on other people’s ideas, and we will generally prioritize those that get the most votes.We are unlikely to accept designs with controversial, vulgar, or otherwise inappropriate messaging. We are aware of the fact that our merch will symbolize the EA community, and we want to communicate that we are a friendly and welcoming group.We are still in Beta modeWe have not yet processed any orders, so it is possible that there will be some bugs or unexpected hiccups during our first few weeks of operation (e.g. delays in shipping or payment processing). Place your orders now only if you are okay with this! If not, we recommend you wait and submit your ideas to us in the meantime. We also ask that you do not place orders in bulk at this time, as we are charged immediately for manufacturing and shipping but it takes about a week for your money to reach us.Due to our web platform’s limited flexibility in setting shipping prices, they may sometimes be slightly higher than shipping cost.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ines https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:24 None full 4299
oGdCtvuQv4BTuNFoC_EA EA - Good things that happened in EA this year by Shakeel Hashim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good things that happened in EA this year, published by Shakeel Hashim on December 29, 2022 on The Effective Altruism Forum.Crossposted from TwitterAs the year comes to an end, we want to highlight some of the incredible work done and supported by people in the effective altruism community — work that's helping people and animals all over the world.1/ The team at Charity Entrepreneurship incubated five new charities this year, including the Center for Effective Aid Policy and Vida Plena — the first CE-incubated organisation to operate in Latin America.2/ Over 1,400 new people signed the Giving What We Can Pledge, committing to giving away 10% or more of their annual income to effective charities. The total number of pledgers is now over 8,000!3/ The work of The Humane League and other animal welfare activists led 161 new organisations to commit to using cage-free products, helping free millions of chickens from cruel battery cages.4/ Open Philanthropy launched two new focus areas: South Asian Air Quality and Global Aid Policy. It's already made grants that aim to tackle pollution and increase the quality or quantity of foreign aid./ and/5/ Alvea, a new biotechnology company dedicated to fighting pandemics, launched and announced that it had already started animal studies for a shelf-stable COVID vaccine.6/ Almost 80,000 connections were made at events hosted by @CentreforEA's Events team, prompting people to change jobs, start new projects and explore new ideas. EAGx conferences were held around the world — including in Berlin, Australia and Singapore.#Events7/ The EU Commission said it will "put forward a proposal to end the ‘disturbing’ systematic practice of killing male chicks across the EU" — another huge win for animal welfare campaigners.8/ What We Owe The Future, a book by @willmacaskill arguing that we can — and should — help build a better world for future generations, became a bestseller in both the US and UK.9/ New evidence prompted @GiveWell to re-evaluate its views on water quality interventions. It then made a grant of up to $64.7 million for @EvidenceAction's Dispensers for Safe Water water chlorination program, which operates in Kenya, Malawi and Uganda./10/ Lots of members of the effective altruism community were featured on @voxdotcom's inaugural Future Perfect 50 list of the people building a better future.11/ Fish welfare was discussed in the UK Parliament for the first time ever, featuring contributions from effective-altruism-backed charities./12/ Researchers at @iGEM published a paper looking at how we might be able to better detect whether states are complying with the Biological Weapons Convention — work which could help improve biosecurity around the world.13/ New research from the Lead Exposure Elimination Project showed the dangerous levels of lead in paint in Zimbabwe and Sierra Leone. In response, governments in both countries are working with LEEP to try to tackle the problem and reduce lead exposure./ and/14/ The EA Forum criticism contest sparked a bunch of interesting and technical debate. One entry prompted GiveWell to re-assess their estimates of the cost-effectiveness of deworming, and inspired a second contest of its own!#Prize_for_inspiring_the_Change_Our_Mind_Contest____20_00015/ The welfare of crabs, lobsters and prawns was recognised in UK legislation thanks to the new Animal Welfare (Sentience) Bill16/ Rethink Priorities, meanwhile, embarked on their ambitious Moral Weight Project to provide a better way to compare the interests of different species.17/ At the @medialab, the Nucleic Acid Observatory project launched — working to develop systems that will help provide an early-warning system for new biological threats.18/ Longview Philanthropy and @givingwhatwecan launched the Longtermism Fund, a new fund...]]>
Shakeel Hashim https://forum.effectivealtruism.org/posts/oGdCtvuQv4BTuNFoC/good-things-that-happened-in-ea-this-year Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good things that happened in EA this year, published by Shakeel Hashim on December 29, 2022 on The Effective Altruism Forum.Crossposted from TwitterAs the year comes to an end, we want to highlight some of the incredible work done and supported by people in the effective altruism community — work that's helping people and animals all over the world.1/ The team at Charity Entrepreneurship incubated five new charities this year, including the Center for Effective Aid Policy and Vida Plena — the first CE-incubated organisation to operate in Latin America.2/ Over 1,400 new people signed the Giving What We Can Pledge, committing to giving away 10% or more of their annual income to effective charities. The total number of pledgers is now over 8,000!3/ The work of The Humane League and other animal welfare activists led 161 new organisations to commit to using cage-free products, helping free millions of chickens from cruel battery cages.4/ Open Philanthropy launched two new focus areas: South Asian Air Quality and Global Aid Policy. It's already made grants that aim to tackle pollution and increase the quality or quantity of foreign aid./ and/5/ Alvea, a new biotechnology company dedicated to fighting pandemics, launched and announced that it had already started animal studies for a shelf-stable COVID vaccine.6/ Almost 80,000 connections were made at events hosted by @CentreforEA's Events team, prompting people to change jobs, start new projects and explore new ideas. EAGx conferences were held around the world — including in Berlin, Australia and Singapore.#Events7/ The EU Commission said it will "put forward a proposal to end the ‘disturbing’ systematic practice of killing male chicks across the EU" — another huge win for animal welfare campaigners.8/ What We Owe The Future, a book by @willmacaskill arguing that we can — and should — help build a better world for future generations, became a bestseller in both the US and UK.9/ New evidence prompted @GiveWell to re-evaluate its views on water quality interventions. It then made a grant of up to $64.7 million for @EvidenceAction's Dispensers for Safe Water water chlorination program, which operates in Kenya, Malawi and Uganda./10/ Lots of members of the effective altruism community were featured on @voxdotcom's inaugural Future Perfect 50 list of the people building a better future.11/ Fish welfare was discussed in the UK Parliament for the first time ever, featuring contributions from effective-altruism-backed charities./12/ Researchers at @iGEM published a paper looking at how we might be able to better detect whether states are complying with the Biological Weapons Convention — work which could help improve biosecurity around the world.13/ New research from the Lead Exposure Elimination Project showed the dangerous levels of lead in paint in Zimbabwe and Sierra Leone. In response, governments in both countries are working with LEEP to try to tackle the problem and reduce lead exposure./ and/14/ The EA Forum criticism contest sparked a bunch of interesting and technical debate. One entry prompted GiveWell to re-assess their estimates of the cost-effectiveness of deworming, and inspired a second contest of its own!#Prize_for_inspiring_the_Change_Our_Mind_Contest____20_00015/ The welfare of crabs, lobsters and prawns was recognised in UK legislation thanks to the new Animal Welfare (Sentience) Bill16/ Rethink Priorities, meanwhile, embarked on their ambitious Moral Weight Project to provide a better way to compare the interests of different species.17/ At the @medialab, the Nucleic Acid Observatory project launched — working to develop systems that will help provide an early-warning system for new biological threats.18/ Longview Philanthropy and @givingwhatwecan launched the Longtermism Fund, a new fund...]]>
Thu, 29 Dec 2022 14:26:12 +0000 EA - Good things that happened in EA this year by Shakeel Hashim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good things that happened in EA this year, published by Shakeel Hashim on December 29, 2022 on The Effective Altruism Forum.Crossposted from TwitterAs the year comes to an end, we want to highlight some of the incredible work done and supported by people in the effective altruism community — work that's helping people and animals all over the world.1/ The team at Charity Entrepreneurship incubated five new charities this year, including the Center for Effective Aid Policy and Vida Plena — the first CE-incubated organisation to operate in Latin America.2/ Over 1,400 new people signed the Giving What We Can Pledge, committing to giving away 10% or more of their annual income to effective charities. The total number of pledgers is now over 8,000!3/ The work of The Humane League and other animal welfare activists led 161 new organisations to commit to using cage-free products, helping free millions of chickens from cruel battery cages.4/ Open Philanthropy launched two new focus areas: South Asian Air Quality and Global Aid Policy. It's already made grants that aim to tackle pollution and increase the quality or quantity of foreign aid./ and/5/ Alvea, a new biotechnology company dedicated to fighting pandemics, launched and announced that it had already started animal studies for a shelf-stable COVID vaccine.6/ Almost 80,000 connections were made at events hosted by @CentreforEA's Events team, prompting people to change jobs, start new projects and explore new ideas. EAGx conferences were held around the world — including in Berlin, Australia and Singapore.#Events7/ The EU Commission said it will "put forward a proposal to end the ‘disturbing’ systematic practice of killing male chicks across the EU" — another huge win for animal welfare campaigners.8/ What We Owe The Future, a book by @willmacaskill arguing that we can — and should — help build a better world for future generations, became a bestseller in both the US and UK.9/ New evidence prompted @GiveWell to re-evaluate its views on water quality interventions. It then made a grant of up to $64.7 million for @EvidenceAction's Dispensers for Safe Water water chlorination program, which operates in Kenya, Malawi and Uganda./10/ Lots of members of the effective altruism community were featured on @voxdotcom's inaugural Future Perfect 50 list of the people building a better future.11/ Fish welfare was discussed in the UK Parliament for the first time ever, featuring contributions from effective-altruism-backed charities./12/ Researchers at @iGEM published a paper looking at how we might be able to better detect whether states are complying with the Biological Weapons Convention — work which could help improve biosecurity around the world.13/ New research from the Lead Exposure Elimination Project showed the dangerous levels of lead in paint in Zimbabwe and Sierra Leone. In response, governments in both countries are working with LEEP to try to tackle the problem and reduce lead exposure./ and/14/ The EA Forum criticism contest sparked a bunch of interesting and technical debate. One entry prompted GiveWell to re-assess their estimates of the cost-effectiveness of deworming, and inspired a second contest of its own!#Prize_for_inspiring_the_Change_Our_Mind_Contest____20_00015/ The welfare of crabs, lobsters and prawns was recognised in UK legislation thanks to the new Animal Welfare (Sentience) Bill16/ Rethink Priorities, meanwhile, embarked on their ambitious Moral Weight Project to provide a better way to compare the interests of different species.17/ At the @medialab, the Nucleic Acid Observatory project launched — working to develop systems that will help provide an early-warning system for new biological threats.18/ Longview Philanthropy and @givingwhatwecan launched the Longtermism Fund, a new fund...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good things that happened in EA this year, published by Shakeel Hashim on December 29, 2022 on The Effective Altruism Forum.Crossposted from TwitterAs the year comes to an end, we want to highlight some of the incredible work done and supported by people in the effective altruism community — work that's helping people and animals all over the world.1/ The team at Charity Entrepreneurship incubated five new charities this year, including the Center for Effective Aid Policy and Vida Plena — the first CE-incubated organisation to operate in Latin America.2/ Over 1,400 new people signed the Giving What We Can Pledge, committing to giving away 10% or more of their annual income to effective charities. The total number of pledgers is now over 8,000!3/ The work of The Humane League and other animal welfare activists led 161 new organisations to commit to using cage-free products, helping free millions of chickens from cruel battery cages.4/ Open Philanthropy launched two new focus areas: South Asian Air Quality and Global Aid Policy. It's already made grants that aim to tackle pollution and increase the quality or quantity of foreign aid./ and/5/ Alvea, a new biotechnology company dedicated to fighting pandemics, launched and announced that it had already started animal studies for a shelf-stable COVID vaccine.6/ Almost 80,000 connections were made at events hosted by @CentreforEA's Events team, prompting people to change jobs, start new projects and explore new ideas. EAGx conferences were held around the world — including in Berlin, Australia and Singapore.#Events7/ The EU Commission said it will "put forward a proposal to end the ‘disturbing’ systematic practice of killing male chicks across the EU" — another huge win for animal welfare campaigners.8/ What We Owe The Future, a book by @willmacaskill arguing that we can — and should — help build a better world for future generations, became a bestseller in both the US and UK.9/ New evidence prompted @GiveWell to re-evaluate its views on water quality interventions. It then made a grant of up to $64.7 million for @EvidenceAction's Dispensers for Safe Water water chlorination program, which operates in Kenya, Malawi and Uganda./10/ Lots of members of the effective altruism community were featured on @voxdotcom's inaugural Future Perfect 50 list of the people building a better future.11/ Fish welfare was discussed in the UK Parliament for the first time ever, featuring contributions from effective-altruism-backed charities./12/ Researchers at @iGEM published a paper looking at how we might be able to better detect whether states are complying with the Biological Weapons Convention — work which could help improve biosecurity around the world.13/ New research from the Lead Exposure Elimination Project showed the dangerous levels of lead in paint in Zimbabwe and Sierra Leone. In response, governments in both countries are working with LEEP to try to tackle the problem and reduce lead exposure./ and/14/ The EA Forum criticism contest sparked a bunch of interesting and technical debate. One entry prompted GiveWell to re-assess their estimates of the cost-effectiveness of deworming, and inspired a second contest of its own!#Prize_for_inspiring_the_Change_Our_Mind_Contest____20_00015/ The welfare of crabs, lobsters and prawns was recognised in UK legislation thanks to the new Animal Welfare (Sentience) Bill16/ Rethink Priorities, meanwhile, embarked on their ambitious Moral Weight Project to provide a better way to compare the interests of different species.17/ At the @medialab, the Nucleic Acid Observatory project launched — working to develop systems that will help provide an early-warning system for new biological threats.18/ Longview Philanthropy and @givingwhatwecan launched the Longtermism Fund, a new fund...]]>
Shakeel Hashim https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:17 None full 4298
YmPnzpzo5dxPozinc_EA EA - Overall Karma vs Agreement Karma by Simon M Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overall Karma vs Agreement Karma, published by Simon M on December 28, 2022 on The Effective Altruism Forum.Someone recently shared with me a comment (which will appear in this post) which had strong agreement and but a strong down vote. I thought comments like that might make interesting comments to find (turns out they'd found the best example - but I enjoyed searching for more examples and I'm sharing this to save anyone else the effort).There is a risk that just be surfacing these comments it will change their karma, so I've stated their current numbers in this post in case the numbers change in response to this.Since agreement karma is new this year, there isn't a huge sample size of posts to analyse. Unsurprisingly there is a strong relationship between overall karma and agreement:I see roughly 4 different types of outlier:"Hall of Fame" - abnormally high karma and agreement"Hall of Shame" - abnormally low karma and agreement"Terrible point, well made"- abnormally high karma compared to agreement"The worst person you know has a point" - abnormally low karma compared to agreementHall of FameOwen Cotton-Barratt's explanation for buying Wytham Abbey (490, 187)Habryka's defence of the EA leadership (359, 166)Scott Alexander's explanation for why he denied Kathy Forth's accusations (266, 199)Milan_Griffes asking Will MacAskill to account for his involvement in the FTX saga (265, 182)ClaireZabel lifting the lid on where the $ came from for Wytham Abbey (288, 91)The Worst Person You Know has a Pointemre kaplan's reaction to the Kelsey Piper / SBF interviewSabs's take on Phil TorresTerrible point, well madeRAB thought the FTX Future fund team resigned too early (34, -104)MichaelPlant doesn't like the EA leadership discussing reputational issues in private (21, -81)EliezerYudkowsky makes his case for when fucking matters (18, -81)Honorable mention: Guy Raveh doesn't see the point in agreement karma... but people disagreeHall of ShameI'm a little bit disappointed with these - I wouldn't bother clicking on any of them. I was hoping some of them would be "so bad it's good", but really these are just bad. Don't waste your time. (But to save anyone searching, here they are)michaelroberts is rude in a boring way (-118, -91)effectenator comes from twitter to get some money back (-88, -123)[anonymous] has a meandering essay about why EA was doomed to fail (-60, -63)Yorad419 calls SBF & Caroline ugly (-78, -36)michaelroberts (again) is rude (-63, -47)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Simon M https://forum.effectivealtruism.org/posts/YmPnzpzo5dxPozinc/overall-karma-vs-agreement-karma Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overall Karma vs Agreement Karma, published by Simon M on December 28, 2022 on The Effective Altruism Forum.Someone recently shared with me a comment (which will appear in this post) which had strong agreement and but a strong down vote. I thought comments like that might make interesting comments to find (turns out they'd found the best example - but I enjoyed searching for more examples and I'm sharing this to save anyone else the effort).There is a risk that just be surfacing these comments it will change their karma, so I've stated their current numbers in this post in case the numbers change in response to this.Since agreement karma is new this year, there isn't a huge sample size of posts to analyse. Unsurprisingly there is a strong relationship between overall karma and agreement:I see roughly 4 different types of outlier:"Hall of Fame" - abnormally high karma and agreement"Hall of Shame" - abnormally low karma and agreement"Terrible point, well made"- abnormally high karma compared to agreement"The worst person you know has a point" - abnormally low karma compared to agreementHall of FameOwen Cotton-Barratt's explanation for buying Wytham Abbey (490, 187)Habryka's defence of the EA leadership (359, 166)Scott Alexander's explanation for why he denied Kathy Forth's accusations (266, 199)Milan_Griffes asking Will MacAskill to account for his involvement in the FTX saga (265, 182)ClaireZabel lifting the lid on where the $ came from for Wytham Abbey (288, 91)The Worst Person You Know has a Pointemre kaplan's reaction to the Kelsey Piper / SBF interviewSabs's take on Phil TorresTerrible point, well madeRAB thought the FTX Future fund team resigned too early (34, -104)MichaelPlant doesn't like the EA leadership discussing reputational issues in private (21, -81)EliezerYudkowsky makes his case for when fucking matters (18, -81)Honorable mention: Guy Raveh doesn't see the point in agreement karma... but people disagreeHall of ShameI'm a little bit disappointed with these - I wouldn't bother clicking on any of them. I was hoping some of them would be "so bad it's good", but really these are just bad. Don't waste your time. (But to save anyone searching, here they are)michaelroberts is rude in a boring way (-118, -91)effectenator comes from twitter to get some money back (-88, -123)[anonymous] has a meandering essay about why EA was doomed to fail (-60, -63)Yorad419 calls SBF & Caroline ugly (-78, -36)michaelroberts (again) is rude (-63, -47)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 29 Dec 2022 14:11:18 +0000 EA - Overall Karma vs Agreement Karma by Simon M Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overall Karma vs Agreement Karma, published by Simon M on December 28, 2022 on The Effective Altruism Forum.Someone recently shared with me a comment (which will appear in this post) which had strong agreement and but a strong down vote. I thought comments like that might make interesting comments to find (turns out they'd found the best example - but I enjoyed searching for more examples and I'm sharing this to save anyone else the effort).There is a risk that just be surfacing these comments it will change their karma, so I've stated their current numbers in this post in case the numbers change in response to this.Since agreement karma is new this year, there isn't a huge sample size of posts to analyse. Unsurprisingly there is a strong relationship between overall karma and agreement:I see roughly 4 different types of outlier:"Hall of Fame" - abnormally high karma and agreement"Hall of Shame" - abnormally low karma and agreement"Terrible point, well made"- abnormally high karma compared to agreement"The worst person you know has a point" - abnormally low karma compared to agreementHall of FameOwen Cotton-Barratt's explanation for buying Wytham Abbey (490, 187)Habryka's defence of the EA leadership (359, 166)Scott Alexander's explanation for why he denied Kathy Forth's accusations (266, 199)Milan_Griffes asking Will MacAskill to account for his involvement in the FTX saga (265, 182)ClaireZabel lifting the lid on where the $ came from for Wytham Abbey (288, 91)The Worst Person You Know has a Pointemre kaplan's reaction to the Kelsey Piper / SBF interviewSabs's take on Phil TorresTerrible point, well madeRAB thought the FTX Future fund team resigned too early (34, -104)MichaelPlant doesn't like the EA leadership discussing reputational issues in private (21, -81)EliezerYudkowsky makes his case for when fucking matters (18, -81)Honorable mention: Guy Raveh doesn't see the point in agreement karma... but people disagreeHall of ShameI'm a little bit disappointed with these - I wouldn't bother clicking on any of them. I was hoping some of them would be "so bad it's good", but really these are just bad. Don't waste your time. (But to save anyone searching, here they are)michaelroberts is rude in a boring way (-118, -91)effectenator comes from twitter to get some money back (-88, -123)[anonymous] has a meandering essay about why EA was doomed to fail (-60, -63)Yorad419 calls SBF & Caroline ugly (-78, -36)michaelroberts (again) is rude (-63, -47)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overall Karma vs Agreement Karma, published by Simon M on December 28, 2022 on The Effective Altruism Forum.Someone recently shared with me a comment (which will appear in this post) which had strong agreement and but a strong down vote. I thought comments like that might make interesting comments to find (turns out they'd found the best example - but I enjoyed searching for more examples and I'm sharing this to save anyone else the effort).There is a risk that just be surfacing these comments it will change their karma, so I've stated their current numbers in this post in case the numbers change in response to this.Since agreement karma is new this year, there isn't a huge sample size of posts to analyse. Unsurprisingly there is a strong relationship between overall karma and agreement:I see roughly 4 different types of outlier:"Hall of Fame" - abnormally high karma and agreement"Hall of Shame" - abnormally low karma and agreement"Terrible point, well made"- abnormally high karma compared to agreement"The worst person you know has a point" - abnormally low karma compared to agreementHall of FameOwen Cotton-Barratt's explanation for buying Wytham Abbey (490, 187)Habryka's defence of the EA leadership (359, 166)Scott Alexander's explanation for why he denied Kathy Forth's accusations (266, 199)Milan_Griffes asking Will MacAskill to account for his involvement in the FTX saga (265, 182)ClaireZabel lifting the lid on where the $ came from for Wytham Abbey (288, 91)The Worst Person You Know has a Pointemre kaplan's reaction to the Kelsey Piper / SBF interviewSabs's take on Phil TorresTerrible point, well madeRAB thought the FTX Future fund team resigned too early (34, -104)MichaelPlant doesn't like the EA leadership discussing reputational issues in private (21, -81)EliezerYudkowsky makes his case for when fucking matters (18, -81)Honorable mention: Guy Raveh doesn't see the point in agreement karma... but people disagreeHall of ShameI'm a little bit disappointed with these - I wouldn't bother clicking on any of them. I was hoping some of them would be "so bad it's good", but really these are just bad. Don't waste your time. (But to save anyone searching, here they are)michaelroberts is rude in a boring way (-118, -91)effectenator comes from twitter to get some money back (-88, -123)[anonymous] has a meandering essay about why EA was doomed to fail (-60, -63)Yorad419 calls SBF & Caroline ugly (-78, -36)michaelroberts (again) is rude (-63, -47)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Simon M https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:05 None full 4300
qyRn6m4ycGekCzR2f_EA EA - GiveDirectly $1 Million Match Campaign by adam galas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveDirectly $1 Million Match Campaign, published by adam galas on December 27, 2022 on The Effective Altruism Forum.1:1 Match, up to $50K per donor. Until the end of the year or until they hit $1 million.I gave all I could and still afford to pay my taxes and medical bills.$2K donation, $4K effective donation, eight people lifted out of poverty, and $10,400 economic impact.A happy new year to all the people of the world. Live long and prosper:)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
adam galas https://forum.effectivealtruism.org/posts/qyRn6m4ycGekCzR2f/givedirectly-usd1-million-match-campaign Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveDirectly $1 Million Match Campaign, published by adam galas on December 27, 2022 on The Effective Altruism Forum.1:1 Match, up to $50K per donor. Until the end of the year or until they hit $1 million.I gave all I could and still afford to pay my taxes and medical bills.$2K donation, $4K effective donation, eight people lifted out of poverty, and $10,400 economic impact.A happy new year to all the people of the world. Live long and prosper:)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 28 Dec 2022 22:42:39 +0000 EA - GiveDirectly $1 Million Match Campaign by adam galas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveDirectly $1 Million Match Campaign, published by adam galas on December 27, 2022 on The Effective Altruism Forum.1:1 Match, up to $50K per donor. Until the end of the year or until they hit $1 million.I gave all I could and still afford to pay my taxes and medical bills.$2K donation, $4K effective donation, eight people lifted out of poverty, and $10,400 economic impact.A happy new year to all the people of the world. Live long and prosper:)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveDirectly $1 Million Match Campaign, published by adam galas on December 27, 2022 on The Effective Altruism Forum.1:1 Match, up to $50K per donor. Until the end of the year or until they hit $1 million.I gave all I could and still afford to pay my taxes and medical bills.$2K donation, $4K effective donation, eight people lifted out of poverty, and $10,400 economic impact.A happy new year to all the people of the world. Live long and prosper:)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
adam galas https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:50 None full 4295
Sp2pMyrHPzK3jmwLq_EA EA - How many hours is your standard workweek? Why? by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many hours is your standard workweek? Why?, published by Rockwell on December 28, 2022 on The Effective Altruism Forum.EAs talk about having 80,000 hours in one's career, but working hours vary throughout time and geography. There are open questions about the typical "ideal" number of hours to work in a day for maximum productivity. A recent discussion here on the Forum discussed increasing one's working hours, which runs counter to the trend in recent years of 4-day workweeks or 30-hour workweeks. Sometimes disclosing how many hours one works can also elicit feelings of shame or feel like a competition, and we lose valuable insight as a result.I'm wondering:How many hours is your standard workweek?Why do you work that many hours rather than fewer or greater?How do you stagger your working hours across a day or week?Of your working hours, how many do you feel are actually productive versus, say, time spent scrolling Twitter or getting more coffee?Does your employer have policies in place around how many hours you must work, the maximum number of hours you are permitted to work, and/or time tracking systems?How has this changed for you over time?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rockwell https://forum.effectivealtruism.org/posts/Sp2pMyrHPzK3jmwLq/how-many-hours-is-your-standard-workweek-why Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many hours is your standard workweek? Why?, published by Rockwell on December 28, 2022 on The Effective Altruism Forum.EAs talk about having 80,000 hours in one's career, but working hours vary throughout time and geography. There are open questions about the typical "ideal" number of hours to work in a day for maximum productivity. A recent discussion here on the Forum discussed increasing one's working hours, which runs counter to the trend in recent years of 4-day workweeks or 30-hour workweeks. Sometimes disclosing how many hours one works can also elicit feelings of shame or feel like a competition, and we lose valuable insight as a result.I'm wondering:How many hours is your standard workweek?Why do you work that many hours rather than fewer or greater?How do you stagger your working hours across a day or week?Of your working hours, how many do you feel are actually productive versus, say, time spent scrolling Twitter or getting more coffee?Does your employer have policies in place around how many hours you must work, the maximum number of hours you are permitted to work, and/or time tracking systems?How has this changed for you over time?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 28 Dec 2022 21:59:11 +0000 EA - How many hours is your standard workweek? Why? by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many hours is your standard workweek? Why?, published by Rockwell on December 28, 2022 on The Effective Altruism Forum.EAs talk about having 80,000 hours in one's career, but working hours vary throughout time and geography. There are open questions about the typical "ideal" number of hours to work in a day for maximum productivity. A recent discussion here on the Forum discussed increasing one's working hours, which runs counter to the trend in recent years of 4-day workweeks or 30-hour workweeks. Sometimes disclosing how many hours one works can also elicit feelings of shame or feel like a competition, and we lose valuable insight as a result.I'm wondering:How many hours is your standard workweek?Why do you work that many hours rather than fewer or greater?How do you stagger your working hours across a day or week?Of your working hours, how many do you feel are actually productive versus, say, time spent scrolling Twitter or getting more coffee?Does your employer have policies in place around how many hours you must work, the maximum number of hours you are permitted to work, and/or time tracking systems?How has this changed for you over time?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many hours is your standard workweek? Why?, published by Rockwell on December 28, 2022 on The Effective Altruism Forum.EAs talk about having 80,000 hours in one's career, but working hours vary throughout time and geography. There are open questions about the typical "ideal" number of hours to work in a day for maximum productivity. A recent discussion here on the Forum discussed increasing one's working hours, which runs counter to the trend in recent years of 4-day workweeks or 30-hour workweeks. Sometimes disclosing how many hours one works can also elicit feelings of shame or feel like a competition, and we lose valuable insight as a result.I'm wondering:How many hours is your standard workweek?Why do you work that many hours rather than fewer or greater?How do you stagger your working hours across a day or week?Of your working hours, how many do you feel are actually productive versus, say, time spent scrolling Twitter or getting more coffee?Does your employer have policies in place around how many hours you must work, the maximum number of hours you are permitted to work, and/or time tracking systems?How has this changed for you over time?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rockwell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:21 None full 4294
DnMg5q4Wyuuf99kkX_EA EA - Reflections on my 5-month AI alignment upskilling grant by Jay Bailey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on my 5-month AI alignment upskilling grant, published by Jay Bailey on December 28, 2022 on The Effective Altruism Forum.Five months ago, I received a grant from the Long Term Future Fund to upskill in AI alignment. As of a few days ago, I was invited to Berkeley for two months of full-time alignment research under Owain Evans’s stream in the SERIMATS program. This post is about how I got there.The post is partially a retrospective for myself, and partially a sketch of the path I took so that others can decide if it’s right for them. This post was written relatively quickly - I’m happy to answer more questions via PM or in the comments.SummaryI was a software engineer for 3-4 years with little to no ML experience before I was accepted for my grant.I did a bunch of stuff around fundamental ML maths, understanding RL and transformers, and improving my alignment understanding.Having tutors, getting feedback on my plan early on, and being able to pivot as I went were all very useful for not getting stuck doing stuff that was no longer useful.I probably wouldn’t have gotten into SERIMATS without that ability to pivot midway through.After SERIMATS, I want to finish off the last part of the grant while I find work, then start work as a Research Engineer at an alignment organisation.If in doubt, put in an application!My BackgroundMy background is more professional and less academic than most. Until I was 23, I didn’t do much of anything - then I got a Bachelor of Computer Science from a university ranked around 1,000th, with little maths and no intent to study ML at all, let alone alignment. It was known for strong graduate employment though, so I went straight into industry from there. I had 3.5 years of software engineering experience (1.5 at Amazon, 2 as a senior engineer at other jobs) before applying for the LTFF grant. I had no ML experience at the time, besides being halfway through doing the fast.ai course in my spare time.Not going to lie, seeing how many Top-20 university PhD students I was sharing my cohort with (At least three!) was a tad intimidating - but I made it in the end, so industry experience clearly has a role to play as well.GrantThe details of the grant are one of the main reasons I wrote this - I’ve been asked for 1:1’s and details on this at least three times in the last six months, and if you get asked something from at least three different people, it might be worth writing it up and sharing it around.Firstly, the process. Applying for the grant is pretty painless. As long as you have a learning plan already in place, the official guidance is to take 1-2 hours on it. I took a bit longer, polishing it more than required. I later found out my plan was more detailed than it probably had to be. In retrospect, I think my level of detail was good, but I spent too much time editing. AI Safety Support helped me with administration. The main benefit that I got from it was that the tutoring and compute money was tax free (since I didn’t get the money personally, rather I used a card they provided me) and I didn’t have to worry about tax withholding throughout the year.Secondly, the money. I agonized over how much money to ask for. This took me days. I asked myself how much I really needed, then I asked myself how much I would actually accept gladly with no regrets, then I balked at those numbers, even knowing that most people ask for too little, not too much. I still balk at the numbers, to be honest, but it would have been so much easier to write this if I had other grants to go off. So, in the interest of transparency and hopefully preventing someone else going through the same level of anguish, I’m sharing the full text of my grant request, including money requested (in Australian dollars, but you can always convert it) here....]]>
Jay Bailey https://forum.effectivealtruism.org/posts/DnMg5q4Wyuuf99kkX/reflections-on-my-5-month-ai-alignment-upskilling-grant Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on my 5-month AI alignment upskilling grant, published by Jay Bailey on December 28, 2022 on The Effective Altruism Forum.Five months ago, I received a grant from the Long Term Future Fund to upskill in AI alignment. As of a few days ago, I was invited to Berkeley for two months of full-time alignment research under Owain Evans’s stream in the SERIMATS program. This post is about how I got there.The post is partially a retrospective for myself, and partially a sketch of the path I took so that others can decide if it’s right for them. This post was written relatively quickly - I’m happy to answer more questions via PM or in the comments.SummaryI was a software engineer for 3-4 years with little to no ML experience before I was accepted for my grant.I did a bunch of stuff around fundamental ML maths, understanding RL and transformers, and improving my alignment understanding.Having tutors, getting feedback on my plan early on, and being able to pivot as I went were all very useful for not getting stuck doing stuff that was no longer useful.I probably wouldn’t have gotten into SERIMATS without that ability to pivot midway through.After SERIMATS, I want to finish off the last part of the grant while I find work, then start work as a Research Engineer at an alignment organisation.If in doubt, put in an application!My BackgroundMy background is more professional and less academic than most. Until I was 23, I didn’t do much of anything - then I got a Bachelor of Computer Science from a university ranked around 1,000th, with little maths and no intent to study ML at all, let alone alignment. It was known for strong graduate employment though, so I went straight into industry from there. I had 3.5 years of software engineering experience (1.5 at Amazon, 2 as a senior engineer at other jobs) before applying for the LTFF grant. I had no ML experience at the time, besides being halfway through doing the fast.ai course in my spare time.Not going to lie, seeing how many Top-20 university PhD students I was sharing my cohort with (At least three!) was a tad intimidating - but I made it in the end, so industry experience clearly has a role to play as well.GrantThe details of the grant are one of the main reasons I wrote this - I’ve been asked for 1:1’s and details on this at least three times in the last six months, and if you get asked something from at least three different people, it might be worth writing it up and sharing it around.Firstly, the process. Applying for the grant is pretty painless. As long as you have a learning plan already in place, the official guidance is to take 1-2 hours on it. I took a bit longer, polishing it more than required. I later found out my plan was more detailed than it probably had to be. In retrospect, I think my level of detail was good, but I spent too much time editing. AI Safety Support helped me with administration. The main benefit that I got from it was that the tutoring and compute money was tax free (since I didn’t get the money personally, rather I used a card they provided me) and I didn’t have to worry about tax withholding throughout the year.Secondly, the money. I agonized over how much money to ask for. This took me days. I asked myself how much I really needed, then I asked myself how much I would actually accept gladly with no regrets, then I balked at those numbers, even knowing that most people ask for too little, not too much. I still balk at the numbers, to be honest, but it would have been so much easier to write this if I had other grants to go off. So, in the interest of transparency and hopefully preventing someone else going through the same level of anguish, I’m sharing the full text of my grant request, including money requested (in Australian dollars, but you can always convert it) here....]]>
Wed, 28 Dec 2022 21:04:21 +0000 EA - Reflections on my 5-month AI alignment upskilling grant by Jay Bailey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on my 5-month AI alignment upskilling grant, published by Jay Bailey on December 28, 2022 on The Effective Altruism Forum.Five months ago, I received a grant from the Long Term Future Fund to upskill in AI alignment. As of a few days ago, I was invited to Berkeley for two months of full-time alignment research under Owain Evans’s stream in the SERIMATS program. This post is about how I got there.The post is partially a retrospective for myself, and partially a sketch of the path I took so that others can decide if it’s right for them. This post was written relatively quickly - I’m happy to answer more questions via PM or in the comments.SummaryI was a software engineer for 3-4 years with little to no ML experience before I was accepted for my grant.I did a bunch of stuff around fundamental ML maths, understanding RL and transformers, and improving my alignment understanding.Having tutors, getting feedback on my plan early on, and being able to pivot as I went were all very useful for not getting stuck doing stuff that was no longer useful.I probably wouldn’t have gotten into SERIMATS without that ability to pivot midway through.After SERIMATS, I want to finish off the last part of the grant while I find work, then start work as a Research Engineer at an alignment organisation.If in doubt, put in an application!My BackgroundMy background is more professional and less academic than most. Until I was 23, I didn’t do much of anything - then I got a Bachelor of Computer Science from a university ranked around 1,000th, with little maths and no intent to study ML at all, let alone alignment. It was known for strong graduate employment though, so I went straight into industry from there. I had 3.5 years of software engineering experience (1.5 at Amazon, 2 as a senior engineer at other jobs) before applying for the LTFF grant. I had no ML experience at the time, besides being halfway through doing the fast.ai course in my spare time.Not going to lie, seeing how many Top-20 university PhD students I was sharing my cohort with (At least three!) was a tad intimidating - but I made it in the end, so industry experience clearly has a role to play as well.GrantThe details of the grant are one of the main reasons I wrote this - I’ve been asked for 1:1’s and details on this at least three times in the last six months, and if you get asked something from at least three different people, it might be worth writing it up and sharing it around.Firstly, the process. Applying for the grant is pretty painless. As long as you have a learning plan already in place, the official guidance is to take 1-2 hours on it. I took a bit longer, polishing it more than required. I later found out my plan was more detailed than it probably had to be. In retrospect, I think my level of detail was good, but I spent too much time editing. AI Safety Support helped me with administration. The main benefit that I got from it was that the tutoring and compute money was tax free (since I didn’t get the money personally, rather I used a card they provided me) and I didn’t have to worry about tax withholding throughout the year.Secondly, the money. I agonized over how much money to ask for. This took me days. I asked myself how much I really needed, then I asked myself how much I would actually accept gladly with no regrets, then I balked at those numbers, even knowing that most people ask for too little, not too much. I still balk at the numbers, to be honest, but it would have been so much easier to write this if I had other grants to go off. So, in the interest of transparency and hopefully preventing someone else going through the same level of anguish, I’m sharing the full text of my grant request, including money requested (in Australian dollars, but you can always convert it) here....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on my 5-month AI alignment upskilling grant, published by Jay Bailey on December 28, 2022 on The Effective Altruism Forum.Five months ago, I received a grant from the Long Term Future Fund to upskill in AI alignment. As of a few days ago, I was invited to Berkeley for two months of full-time alignment research under Owain Evans’s stream in the SERIMATS program. This post is about how I got there.The post is partially a retrospective for myself, and partially a sketch of the path I took so that others can decide if it’s right for them. This post was written relatively quickly - I’m happy to answer more questions via PM or in the comments.SummaryI was a software engineer for 3-4 years with little to no ML experience before I was accepted for my grant.I did a bunch of stuff around fundamental ML maths, understanding RL and transformers, and improving my alignment understanding.Having tutors, getting feedback on my plan early on, and being able to pivot as I went were all very useful for not getting stuck doing stuff that was no longer useful.I probably wouldn’t have gotten into SERIMATS without that ability to pivot midway through.After SERIMATS, I want to finish off the last part of the grant while I find work, then start work as a Research Engineer at an alignment organisation.If in doubt, put in an application!My BackgroundMy background is more professional and less academic than most. Until I was 23, I didn’t do much of anything - then I got a Bachelor of Computer Science from a university ranked around 1,000th, with little maths and no intent to study ML at all, let alone alignment. It was known for strong graduate employment though, so I went straight into industry from there. I had 3.5 years of software engineering experience (1.5 at Amazon, 2 as a senior engineer at other jobs) before applying for the LTFF grant. I had no ML experience at the time, besides being halfway through doing the fast.ai course in my spare time.Not going to lie, seeing how many Top-20 university PhD students I was sharing my cohort with (At least three!) was a tad intimidating - but I made it in the end, so industry experience clearly has a role to play as well.GrantThe details of the grant are one of the main reasons I wrote this - I’ve been asked for 1:1’s and details on this at least three times in the last six months, and if you get asked something from at least three different people, it might be worth writing it up and sharing it around.Firstly, the process. Applying for the grant is pretty painless. As long as you have a learning plan already in place, the official guidance is to take 1-2 hours on it. I took a bit longer, polishing it more than required. I later found out my plan was more detailed than it probably had to be. In retrospect, I think my level of detail was good, but I spent too much time editing. AI Safety Support helped me with administration. The main benefit that I got from it was that the tutoring and compute money was tax free (since I didn’t get the money personally, rather I used a card they provided me) and I didn’t have to worry about tax withholding throughout the year.Secondly, the money. I agonized over how much money to ask for. This took me days. I asked myself how much I really needed, then I asked myself how much I would actually accept gladly with no regrets, then I balked at those numbers, even knowing that most people ask for too little, not too much. I still balk at the numbers, to be honest, but it would have been so much easier to write this if I had other grants to go off. So, in the interest of transparency and hopefully preventing someone else going through the same level of anguish, I’m sharing the full text of my grant request, including money requested (in Australian dollars, but you can always convert it) here....]]>
Jay Bailey https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 12:44 None full 4292
HgZymWBTAovWTa7F8_EA EA - Things I didn’t feel that guilty about before getting involved in effective altruism by Ada-Maaria Hyvärinen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things I didn’t feel that guilty about before getting involved in effective altruism, published by Ada-Maaria Hyvärinen on December 28, 2022 on The Effective Altruism Forum.Not wanting to move countries (“there would be a lot more effective work options if I lived elsewhere”)Wanting a permanent work contract(“there would be a lot more effective work options with temporary contracts or grant-based pay”)Not wanting to be an independent researcher(“it could potentially be an effective thing to do, and I wouldn’t have to worry about replaceability”)Wanting to have a child(“if I didn’t want one I’d probably be much more flexible on the points above”)Wanting to take some time off from work to take care of said child, in case I ever manage to have one(“although if I’m not having an impactful job by that time it probably won’t matter much anyway”)Burning out(“it wastes time and sets a bad example”)Feeling guilty about things(“I have read Replacing Guilt but I’m still having all these unproductive feelings”)(Despite feeling guilty I’m doing ok – ultimately, a lot of this is just sadness about not having an unlimited altruism budget. I wanted to post this because I like reading about others’ experiences and thought someone else might like reading about mine. I don’t really need solutions proposals but if you have some, other readers might benefit from them.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ada-Maaria Hyvärinen https://forum.effectivealtruism.org/posts/HgZymWBTAovWTa7F8/things-i-didn-t-feel-that-guilty-about-before-getting Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things I didn’t feel that guilty about before getting involved in effective altruism, published by Ada-Maaria Hyvärinen on December 28, 2022 on The Effective Altruism Forum.Not wanting to move countries (“there would be a lot more effective work options if I lived elsewhere”)Wanting a permanent work contract(“there would be a lot more effective work options with temporary contracts or grant-based pay”)Not wanting to be an independent researcher(“it could potentially be an effective thing to do, and I wouldn’t have to worry about replaceability”)Wanting to have a child(“if I didn’t want one I’d probably be much more flexible on the points above”)Wanting to take some time off from work to take care of said child, in case I ever manage to have one(“although if I’m not having an impactful job by that time it probably won’t matter much anyway”)Burning out(“it wastes time and sets a bad example”)Feeling guilty about things(“I have read Replacing Guilt but I’m still having all these unproductive feelings”)(Despite feeling guilty I’m doing ok – ultimately, a lot of this is just sadness about not having an unlimited altruism budget. I wanted to post this because I like reading about others’ experiences and thought someone else might like reading about mine. I don’t really need solutions proposals but if you have some, other readers might benefit from them.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 28 Dec 2022 19:27:42 +0000 EA - Things I didn’t feel that guilty about before getting involved in effective altruism by Ada-Maaria Hyvärinen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things I didn’t feel that guilty about before getting involved in effective altruism, published by Ada-Maaria Hyvärinen on December 28, 2022 on The Effective Altruism Forum.Not wanting to move countries (“there would be a lot more effective work options if I lived elsewhere”)Wanting a permanent work contract(“there would be a lot more effective work options with temporary contracts or grant-based pay”)Not wanting to be an independent researcher(“it could potentially be an effective thing to do, and I wouldn’t have to worry about replaceability”)Wanting to have a child(“if I didn’t want one I’d probably be much more flexible on the points above”)Wanting to take some time off from work to take care of said child, in case I ever manage to have one(“although if I’m not having an impactful job by that time it probably won’t matter much anyway”)Burning out(“it wastes time and sets a bad example”)Feeling guilty about things(“I have read Replacing Guilt but I’m still having all these unproductive feelings”)(Despite feeling guilty I’m doing ok – ultimately, a lot of this is just sadness about not having an unlimited altruism budget. I wanted to post this because I like reading about others’ experiences and thought someone else might like reading about mine. I don’t really need solutions proposals but if you have some, other readers might benefit from them.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things I didn’t feel that guilty about before getting involved in effective altruism, published by Ada-Maaria Hyvärinen on December 28, 2022 on The Effective Altruism Forum.Not wanting to move countries (“there would be a lot more effective work options if I lived elsewhere”)Wanting a permanent work contract(“there would be a lot more effective work options with temporary contracts or grant-based pay”)Not wanting to be an independent researcher(“it could potentially be an effective thing to do, and I wouldn’t have to worry about replaceability”)Wanting to have a child(“if I didn’t want one I’d probably be much more flexible on the points above”)Wanting to take some time off from work to take care of said child, in case I ever manage to have one(“although if I’m not having an impactful job by that time it probably won’t matter much anyway”)Burning out(“it wastes time and sets a bad example”)Feeling guilty about things(“I have read Replacing Guilt but I’m still having all these unproductive feelings”)(Despite feeling guilty I’m doing ok – ultimately, a lot of this is just sadness about not having an unlimited altruism budget. I wanted to post this because I like reading about others’ experiences and thought someone else might like reading about mine. I don’t really need solutions proposals but if you have some, other readers might benefit from them.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ada-Maaria Hyvärinen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:28 None full 4289
AtfQu968wH2TrEEGg_EA EA - What countries are worth funding? by Ariel Pontes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What countries are worth funding?, published by Ariel Pontes on December 28, 2022 on The Effective Altruism Forum.I’m the founder of EA Romania and I applied for funds from the EAIF twice this year and got rejected both times without feedback. In the process of hunting for feedback after my first rejection, I kept hearing questions like “what’s the comparative advantage of Romania?” or “what particular groups or cause areas do you want to focus on?”. I was also told by a fund manager that Romania is “not exactly a hub” and therefore there’s a high opportunity cost in funding community building here unless the project is very creative.At some point during a recent EAGx, I met a Romanian who had been involved in EA for a while and I told her about my whole experience applying for funds. She seemed very empathetic and encouraging and said we should talk more later. The next day when we met her attitude had shifted completely and she started questioning why I wanted to do community building in Romania, and suggesting that if I want to work in this area maybe I could move somewhere with a more vibrant EA movement, where there are more opportunities. Eventually I confronted her about her change of attitude and she basically started saying that Romania is not ready for EA yet. I said that I felt Romania was being discriminated against and she said I had to understand the EA leadership because when Romania joined the EU a lot of uneducated, low-income immigrants flooded Western countries and now they have a bad impression of Romanians.I couldn’t help but have the impression that she talked to somebody in the EA leadership and concluded that Romania wouldn’t be funded any time soon.If some countries are unlikely to be funded in general because they’re low priority, I believe this should be public information, and I also think EAIF should be transparent about the criteria they use to prioritize countries. After attending 4 EAG(x)s, I would say that the perception of the average EA is that there are plenty of funds and if you start a new national group it should be very easy to get funded. This perception seems inaccurate, and if the reality was more transparent this would help founders of new national groups have more realistic expectations.To be clear, I’m not saying that the only or even the main reason why my applications were rejected was that they were coming from Romania. There are other reasons (as I have discussed here and here). It is of course possible that another person with another project would have been funded. It’s hard to know because, again, EAIF doesn’t give feedback. I think this is a big problem, but that’s a topic for another post. In any case, being in Romania was clearly one of the reasons why the application was rejected, as is illustrated by the stories above and as was confirmed in an exchange of emails I had with CEA. This whole experience has made me realize there is very little transparency regarding group funding and groups in general. There are many questions that I think should be easier to find an answer to, for example:Which countries are at the top of the priority list to be funded?Which ones are at the bottom?How much funding does each country get per year for community building?How much of this money is for salaries?How many paid community builders does each country have?How big is each national group?How old is each national group?How many applications come from each country?What percentage of applications is rejected for each country?Is this information public but buried in some long PDF report? Or is it not public at all? I know that CEA only funds a fixed list of countries that already have mature communities, but does anybody have any idea what criteria the EAIF uses to prioritize countries? Do you think some countries ...]]>
Ariel Pontes https://forum.effectivealtruism.org/posts/AtfQu968wH2TrEEGg/what-countries-are-worth-funding Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What countries are worth funding?, published by Ariel Pontes on December 28, 2022 on The Effective Altruism Forum.I’m the founder of EA Romania and I applied for funds from the EAIF twice this year and got rejected both times without feedback. In the process of hunting for feedback after my first rejection, I kept hearing questions like “what’s the comparative advantage of Romania?” or “what particular groups or cause areas do you want to focus on?”. I was also told by a fund manager that Romania is “not exactly a hub” and therefore there’s a high opportunity cost in funding community building here unless the project is very creative.At some point during a recent EAGx, I met a Romanian who had been involved in EA for a while and I told her about my whole experience applying for funds. She seemed very empathetic and encouraging and said we should talk more later. The next day when we met her attitude had shifted completely and she started questioning why I wanted to do community building in Romania, and suggesting that if I want to work in this area maybe I could move somewhere with a more vibrant EA movement, where there are more opportunities. Eventually I confronted her about her change of attitude and she basically started saying that Romania is not ready for EA yet. I said that I felt Romania was being discriminated against and she said I had to understand the EA leadership because when Romania joined the EU a lot of uneducated, low-income immigrants flooded Western countries and now they have a bad impression of Romanians.I couldn’t help but have the impression that she talked to somebody in the EA leadership and concluded that Romania wouldn’t be funded any time soon.If some countries are unlikely to be funded in general because they’re low priority, I believe this should be public information, and I also think EAIF should be transparent about the criteria they use to prioritize countries. After attending 4 EAG(x)s, I would say that the perception of the average EA is that there are plenty of funds and if you start a new national group it should be very easy to get funded. This perception seems inaccurate, and if the reality was more transparent this would help founders of new national groups have more realistic expectations.To be clear, I’m not saying that the only or even the main reason why my applications were rejected was that they were coming from Romania. There are other reasons (as I have discussed here and here). It is of course possible that another person with another project would have been funded. It’s hard to know because, again, EAIF doesn’t give feedback. I think this is a big problem, but that’s a topic for another post. In any case, being in Romania was clearly one of the reasons why the application was rejected, as is illustrated by the stories above and as was confirmed in an exchange of emails I had with CEA. This whole experience has made me realize there is very little transparency regarding group funding and groups in general. There are many questions that I think should be easier to find an answer to, for example:Which countries are at the top of the priority list to be funded?Which ones are at the bottom?How much funding does each country get per year for community building?How much of this money is for salaries?How many paid community builders does each country have?How big is each national group?How old is each national group?How many applications come from each country?What percentage of applications is rejected for each country?Is this information public but buried in some long PDF report? Or is it not public at all? I know that CEA only funds a fixed list of countries that already have mature communities, but does anybody have any idea what criteria the EAIF uses to prioritize countries? Do you think some countries ...]]>
Wed, 28 Dec 2022 18:32:45 +0000 EA - What countries are worth funding? by Ariel Pontes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What countries are worth funding?, published by Ariel Pontes on December 28, 2022 on The Effective Altruism Forum.I’m the founder of EA Romania and I applied for funds from the EAIF twice this year and got rejected both times without feedback. In the process of hunting for feedback after my first rejection, I kept hearing questions like “what’s the comparative advantage of Romania?” or “what particular groups or cause areas do you want to focus on?”. I was also told by a fund manager that Romania is “not exactly a hub” and therefore there’s a high opportunity cost in funding community building here unless the project is very creative.At some point during a recent EAGx, I met a Romanian who had been involved in EA for a while and I told her about my whole experience applying for funds. She seemed very empathetic and encouraging and said we should talk more later. The next day when we met her attitude had shifted completely and she started questioning why I wanted to do community building in Romania, and suggesting that if I want to work in this area maybe I could move somewhere with a more vibrant EA movement, where there are more opportunities. Eventually I confronted her about her change of attitude and she basically started saying that Romania is not ready for EA yet. I said that I felt Romania was being discriminated against and she said I had to understand the EA leadership because when Romania joined the EU a lot of uneducated, low-income immigrants flooded Western countries and now they have a bad impression of Romanians.I couldn’t help but have the impression that she talked to somebody in the EA leadership and concluded that Romania wouldn’t be funded any time soon.If some countries are unlikely to be funded in general because they’re low priority, I believe this should be public information, and I also think EAIF should be transparent about the criteria they use to prioritize countries. After attending 4 EAG(x)s, I would say that the perception of the average EA is that there are plenty of funds and if you start a new national group it should be very easy to get funded. This perception seems inaccurate, and if the reality was more transparent this would help founders of new national groups have more realistic expectations.To be clear, I’m not saying that the only or even the main reason why my applications were rejected was that they were coming from Romania. There are other reasons (as I have discussed here and here). It is of course possible that another person with another project would have been funded. It’s hard to know because, again, EAIF doesn’t give feedback. I think this is a big problem, but that’s a topic for another post. In any case, being in Romania was clearly one of the reasons why the application was rejected, as is illustrated by the stories above and as was confirmed in an exchange of emails I had with CEA. This whole experience has made me realize there is very little transparency regarding group funding and groups in general. There are many questions that I think should be easier to find an answer to, for example:Which countries are at the top of the priority list to be funded?Which ones are at the bottom?How much funding does each country get per year for community building?How much of this money is for salaries?How many paid community builders does each country have?How big is each national group?How old is each national group?How many applications come from each country?What percentage of applications is rejected for each country?Is this information public but buried in some long PDF report? Or is it not public at all? I know that CEA only funds a fixed list of countries that already have mature communities, but does anybody have any idea what criteria the EAIF uses to prioritize countries? Do you think some countries ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What countries are worth funding?, published by Ariel Pontes on December 28, 2022 on The Effective Altruism Forum.I’m the founder of EA Romania and I applied for funds from the EAIF twice this year and got rejected both times without feedback. In the process of hunting for feedback after my first rejection, I kept hearing questions like “what’s the comparative advantage of Romania?” or “what particular groups or cause areas do you want to focus on?”. I was also told by a fund manager that Romania is “not exactly a hub” and therefore there’s a high opportunity cost in funding community building here unless the project is very creative.At some point during a recent EAGx, I met a Romanian who had been involved in EA for a while and I told her about my whole experience applying for funds. She seemed very empathetic and encouraging and said we should talk more later. The next day when we met her attitude had shifted completely and she started questioning why I wanted to do community building in Romania, and suggesting that if I want to work in this area maybe I could move somewhere with a more vibrant EA movement, where there are more opportunities. Eventually I confronted her about her change of attitude and she basically started saying that Romania is not ready for EA yet. I said that I felt Romania was being discriminated against and she said I had to understand the EA leadership because when Romania joined the EU a lot of uneducated, low-income immigrants flooded Western countries and now they have a bad impression of Romanians.I couldn’t help but have the impression that she talked to somebody in the EA leadership and concluded that Romania wouldn’t be funded any time soon.If some countries are unlikely to be funded in general because they’re low priority, I believe this should be public information, and I also think EAIF should be transparent about the criteria they use to prioritize countries. After attending 4 EAG(x)s, I would say that the perception of the average EA is that there are plenty of funds and if you start a new national group it should be very easy to get funded. This perception seems inaccurate, and if the reality was more transparent this would help founders of new national groups have more realistic expectations.To be clear, I’m not saying that the only or even the main reason why my applications were rejected was that they were coming from Romania. There are other reasons (as I have discussed here and here). It is of course possible that another person with another project would have been funded. It’s hard to know because, again, EAIF doesn’t give feedback. I think this is a big problem, but that’s a topic for another post. In any case, being in Romania was clearly one of the reasons why the application was rejected, as is illustrated by the stories above and as was confirmed in an exchange of emails I had with CEA. This whole experience has made me realize there is very little transparency regarding group funding and groups in general. There are many questions that I think should be easier to find an answer to, for example:Which countries are at the top of the priority list to be funded?Which ones are at the bottom?How much funding does each country get per year for community building?How much of this money is for salaries?How many paid community builders does each country have?How big is each national group?How old is each national group?How many applications come from each country?What percentage of applications is rejected for each country?Is this information public but buried in some long PDF report? Or is it not public at all? I know that CEA only funds a fixed list of countries that already have mature communities, but does anybody have any idea what criteria the EAIF uses to prioritize countries? Do you think some countries ...]]>
Ariel Pontes https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:36 None full 4290
jtsRDQkX38CHrsZ7j_EA EA - Cause prioritisation: Preventing lake Kivu in Africa eruption which could kill two million. by turchin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause prioritisation: Preventing lake Kivu in Africa eruption which could kill two million., published by turchin on December 28, 2022 on The Effective Altruism Forum.Epistemic status: a preliminary look at a possible cause areaTL;DR: Lake Kivu could erupt and kill 2 million people around it. But we could prevent this by installing oblique pipes, which will slowly and safely release gases and generate energy.Large lake Kivu on the border between Rwanda and Democratic Republic of the Congo has a lot of gases dissolved near its bottom and it can erupt as lake Nyos did in 1986 when 1700 people were killed, but Kivu eruption could be 2000 times stronger.A future overturn and gas release from the deep waters of Lake Kivu would result in catastrophe, dwarfing the historically documented lake overturns at the much smaller Lakes Nyos and Monoun. The lives of the approximately two million people who live in the lake basin area would be threatened.An experimental vent pipe was installed at Lake Nyos in 2001 to remove gas from the deep water, but such a solution for the much larger Lake Kivu would be considerably more expensive. The approximately 510 million metric tons of carbon dioxide in the lake is a little under 2 percent of the amount released annually by human fossil fuel burning. Therefore, the process of releasing it could potentially have costs beyond simply building and operating the system. WikiA report Gas emissions from lake Kivu claims that lake Kivu has erupted in the past with a periodicity of 1000 years and could do it again in 100-200 years.Lake Kivu has 65 cubic miles of methane (this is around 140 Mt and will produce additional methane concentrations increase of 28 ppb if released in the atmosphere, adding to the current level of around 1900 ppb). The lake also has 260 cubic miles of CO2.MitigationVertical pipes allow slowly extraction of gases from the lake and the collection of methane for energy use. Commercial extraction already started, but it is slow.There are risks related to gas extraction via pipes:Risk 1: what if the pipes will destabilise the lake?Risk 2: releasing CO2 will contribute to global warming. Methane is even worse as a greenhouse gas, but it could be collected with profit. CO2 actually may be used too for fracking and for chemical and food production.Concerns about these risks are slowing down current methane extraction. However, if the lake erupts, all methane and CO2 will go into the atmosphere and will be equal to several years of the Earth’s emissions, mostly because of short-term methane’s greenhouse effects. But a part of methane will be combusted in eruption.The strong methane fire may take the form of an explosion which will contribute to gas release and to the devastation around the lake (total methane energy in the lake is around 1 gigaton TNT). However, explosive methane fire will prevent CO2 accumulation on lower grounds near the shores and CO2 suffocating effects will be smaller because gases will mix with the atmosphere quickly.The project to reduce gases in the lake is ongoing, but its impact is not clear to me: it may not be enough to stop the increase of the gases’ concentration, which is still increasing, but could be enough to create risks of the eruption is something goes wrong (a pin and balloon effect). Especially because it is made for profit which creates an incentive to take higher risks. A much larger and simultaneously safer project is needed to prevent the eruption of the lake.As gas concentrations rise in Kivu’s depths, so does the risk. Wüest and colleagues found that from 1974 to 2004 the concentration of CO2 increased by 10%. But the bigger concern at Kivu is the methane concentration, which rose 15-20% during the same period. BBCOblique pipesI think that the pipes should be built in a ...]]>
turchin https://forum.effectivealtruism.org/posts/jtsRDQkX38CHrsZ7j/cause-prioritisation-preventing-lake-kivu-in-africa-eruption Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause prioritisation: Preventing lake Kivu in Africa eruption which could kill two million., published by turchin on December 28, 2022 on The Effective Altruism Forum.Epistemic status: a preliminary look at a possible cause areaTL;DR: Lake Kivu could erupt and kill 2 million people around it. But we could prevent this by installing oblique pipes, which will slowly and safely release gases and generate energy.Large lake Kivu on the border between Rwanda and Democratic Republic of the Congo has a lot of gases dissolved near its bottom and it can erupt as lake Nyos did in 1986 when 1700 people were killed, but Kivu eruption could be 2000 times stronger.A future overturn and gas release from the deep waters of Lake Kivu would result in catastrophe, dwarfing the historically documented lake overturns at the much smaller Lakes Nyos and Monoun. The lives of the approximately two million people who live in the lake basin area would be threatened.An experimental vent pipe was installed at Lake Nyos in 2001 to remove gas from the deep water, but such a solution for the much larger Lake Kivu would be considerably more expensive. The approximately 510 million metric tons of carbon dioxide in the lake is a little under 2 percent of the amount released annually by human fossil fuel burning. Therefore, the process of releasing it could potentially have costs beyond simply building and operating the system. WikiA report Gas emissions from lake Kivu claims that lake Kivu has erupted in the past with a periodicity of 1000 years and could do it again in 100-200 years.Lake Kivu has 65 cubic miles of methane (this is around 140 Mt and will produce additional methane concentrations increase of 28 ppb if released in the atmosphere, adding to the current level of around 1900 ppb). The lake also has 260 cubic miles of CO2.MitigationVertical pipes allow slowly extraction of gases from the lake and the collection of methane for energy use. Commercial extraction already started, but it is slow.There are risks related to gas extraction via pipes:Risk 1: what if the pipes will destabilise the lake?Risk 2: releasing CO2 will contribute to global warming. Methane is even worse as a greenhouse gas, but it could be collected with profit. CO2 actually may be used too for fracking and for chemical and food production.Concerns about these risks are slowing down current methane extraction. However, if the lake erupts, all methane and CO2 will go into the atmosphere and will be equal to several years of the Earth’s emissions, mostly because of short-term methane’s greenhouse effects. But a part of methane will be combusted in eruption.The strong methane fire may take the form of an explosion which will contribute to gas release and to the devastation around the lake (total methane energy in the lake is around 1 gigaton TNT). However, explosive methane fire will prevent CO2 accumulation on lower grounds near the shores and CO2 suffocating effects will be smaller because gases will mix with the atmosphere quickly.The project to reduce gases in the lake is ongoing, but its impact is not clear to me: it may not be enough to stop the increase of the gases’ concentration, which is still increasing, but could be enough to create risks of the eruption is something goes wrong (a pin and balloon effect). Especially because it is made for profit which creates an incentive to take higher risks. A much larger and simultaneously safer project is needed to prevent the eruption of the lake.As gas concentrations rise in Kivu’s depths, so does the risk. Wüest and colleagues found that from 1974 to 2004 the concentration of CO2 increased by 10%. But the bigger concern at Kivu is the methane concentration, which rose 15-20% during the same period. BBCOblique pipesI think that the pipes should be built in a ...]]>
Wed, 28 Dec 2022 18:11:29 +0000 EA - Cause prioritisation: Preventing lake Kivu in Africa eruption which could kill two million. by turchin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause prioritisation: Preventing lake Kivu in Africa eruption which could kill two million., published by turchin on December 28, 2022 on The Effective Altruism Forum.Epistemic status: a preliminary look at a possible cause areaTL;DR: Lake Kivu could erupt and kill 2 million people around it. But we could prevent this by installing oblique pipes, which will slowly and safely release gases and generate energy.Large lake Kivu on the border between Rwanda and Democratic Republic of the Congo has a lot of gases dissolved near its bottom and it can erupt as lake Nyos did in 1986 when 1700 people were killed, but Kivu eruption could be 2000 times stronger.A future overturn and gas release from the deep waters of Lake Kivu would result in catastrophe, dwarfing the historically documented lake overturns at the much smaller Lakes Nyos and Monoun. The lives of the approximately two million people who live in the lake basin area would be threatened.An experimental vent pipe was installed at Lake Nyos in 2001 to remove gas from the deep water, but such a solution for the much larger Lake Kivu would be considerably more expensive. The approximately 510 million metric tons of carbon dioxide in the lake is a little under 2 percent of the amount released annually by human fossil fuel burning. Therefore, the process of releasing it could potentially have costs beyond simply building and operating the system. WikiA report Gas emissions from lake Kivu claims that lake Kivu has erupted in the past with a periodicity of 1000 years and could do it again in 100-200 years.Lake Kivu has 65 cubic miles of methane (this is around 140 Mt and will produce additional methane concentrations increase of 28 ppb if released in the atmosphere, adding to the current level of around 1900 ppb). The lake also has 260 cubic miles of CO2.MitigationVertical pipes allow slowly extraction of gases from the lake and the collection of methane for energy use. Commercial extraction already started, but it is slow.There are risks related to gas extraction via pipes:Risk 1: what if the pipes will destabilise the lake?Risk 2: releasing CO2 will contribute to global warming. Methane is even worse as a greenhouse gas, but it could be collected with profit. CO2 actually may be used too for fracking and for chemical and food production.Concerns about these risks are slowing down current methane extraction. However, if the lake erupts, all methane and CO2 will go into the atmosphere and will be equal to several years of the Earth’s emissions, mostly because of short-term methane’s greenhouse effects. But a part of methane will be combusted in eruption.The strong methane fire may take the form of an explosion which will contribute to gas release and to the devastation around the lake (total methane energy in the lake is around 1 gigaton TNT). However, explosive methane fire will prevent CO2 accumulation on lower grounds near the shores and CO2 suffocating effects will be smaller because gases will mix with the atmosphere quickly.The project to reduce gases in the lake is ongoing, but its impact is not clear to me: it may not be enough to stop the increase of the gases’ concentration, which is still increasing, but could be enough to create risks of the eruption is something goes wrong (a pin and balloon effect). Especially because it is made for profit which creates an incentive to take higher risks. A much larger and simultaneously safer project is needed to prevent the eruption of the lake.As gas concentrations rise in Kivu’s depths, so does the risk. Wüest and colleagues found that from 1974 to 2004 the concentration of CO2 increased by 10%. But the bigger concern at Kivu is the methane concentration, which rose 15-20% during the same period. BBCOblique pipesI think that the pipes should be built in a ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause prioritisation: Preventing lake Kivu in Africa eruption which could kill two million., published by turchin on December 28, 2022 on The Effective Altruism Forum.Epistemic status: a preliminary look at a possible cause areaTL;DR: Lake Kivu could erupt and kill 2 million people around it. But we could prevent this by installing oblique pipes, which will slowly and safely release gases and generate energy.Large lake Kivu on the border between Rwanda and Democratic Republic of the Congo has a lot of gases dissolved near its bottom and it can erupt as lake Nyos did in 1986 when 1700 people were killed, but Kivu eruption could be 2000 times stronger.A future overturn and gas release from the deep waters of Lake Kivu would result in catastrophe, dwarfing the historically documented lake overturns at the much smaller Lakes Nyos and Monoun. The lives of the approximately two million people who live in the lake basin area would be threatened.An experimental vent pipe was installed at Lake Nyos in 2001 to remove gas from the deep water, but such a solution for the much larger Lake Kivu would be considerably more expensive. The approximately 510 million metric tons of carbon dioxide in the lake is a little under 2 percent of the amount released annually by human fossil fuel burning. Therefore, the process of releasing it could potentially have costs beyond simply building and operating the system. WikiA report Gas emissions from lake Kivu claims that lake Kivu has erupted in the past with a periodicity of 1000 years and could do it again in 100-200 years.Lake Kivu has 65 cubic miles of methane (this is around 140 Mt and will produce additional methane concentrations increase of 28 ppb if released in the atmosphere, adding to the current level of around 1900 ppb). The lake also has 260 cubic miles of CO2.MitigationVertical pipes allow slowly extraction of gases from the lake and the collection of methane for energy use. Commercial extraction already started, but it is slow.There are risks related to gas extraction via pipes:Risk 1: what if the pipes will destabilise the lake?Risk 2: releasing CO2 will contribute to global warming. Methane is even worse as a greenhouse gas, but it could be collected with profit. CO2 actually may be used too for fracking and for chemical and food production.Concerns about these risks are slowing down current methane extraction. However, if the lake erupts, all methane and CO2 will go into the atmosphere and will be equal to several years of the Earth’s emissions, mostly because of short-term methane’s greenhouse effects. But a part of methane will be combusted in eruption.The strong methane fire may take the form of an explosion which will contribute to gas release and to the devastation around the lake (total methane energy in the lake is around 1 gigaton TNT). However, explosive methane fire will prevent CO2 accumulation on lower grounds near the shores and CO2 suffocating effects will be smaller because gases will mix with the atmosphere quickly.The project to reduce gases in the lake is ongoing, but its impact is not clear to me: it may not be enough to stop the increase of the gases’ concentration, which is still increasing, but could be enough to create risks of the eruption is something goes wrong (a pin and balloon effect). Especially because it is made for profit which creates an incentive to take higher risks. A much larger and simultaneously safer project is needed to prevent the eruption of the lake.As gas concentrations rise in Kivu’s depths, so does the risk. Wüest and colleagues found that from 1974 to 2004 the concentration of CO2 increased by 10%. But the bigger concern at Kivu is the methane concentration, which rose 15-20% during the same period. BBCOblique pipesI think that the pipes should be built in a ...]]>
turchin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:14 None full 4291
psvQMXEgQsT5RMDTu_EA EA - Consider Financial Independence First by River Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider Financial Independence First, published by River on December 27, 2022 on The Effective Altruism Forum.Over the course of your working life, you will earn some amount of money, spend some amount of money on yourself (and those close to you), and spend some amount of money and time on strangers. EA is about figuring out how to deploy the time and money we spend on strangers effectively. And as a movement of mostly young people, there is a tendency to start devoting time and money to strangers early in our careers. I want to suggest that this is often a mistake – that many of us should front load the money we spend on ourselves, by achieving financial independence first, by which I mean having enough savings that you could live off of it indefinitely, and then earning to give or doing direct work in an EA cause area.This is the strategy I am pursuing. I got interested in EA a couple of years ago during the pandemic, about the same time I was moving towards a new career in software engineering. I’ve been a software engineer making low six figures for about a year, and I’ve been putting about half my income towards my current expenses, and half towards financial independence1. After I reach financial independence, in something like five years, depending how the markets go, I expect to either earn to give (which I will be well set up for), or find direct work mitigating existential risks. Either way, learning about the various existential risks and mitigation strategies, and the people and organizations working on them, will prepare me to do that when the time comes.Some of my reasons for pursing financial independence first are selfish. I’m starting my third career in my mid thirties – I want a feeling of material security, and I’m not going to get it from an employer. I want the lower level of stress and anxiety that I expect I will have when I can pay for my rent and groceries on my own, regardless of my employment. I want the freedom to pick a low paying job, or even an unpaid job, over a higher paying but less meaningful job, without worrying about the impact on my lifestyle. I want the freedom to leave a job if the boss I love gets replaced by a boss I hate, without having to ask whether I can afford it. And it’s ok for an EA to make decisions for selfish reasons like this. EA doesn’t have anything to say about how much money anyone should donate or how much time anyone should spend altruistically. EA only tries to answer the question, after time or money has been committed to strangers, how to deploy it effectively.2Other reasons for pursing financial independence first are altruistic. If your altruism is of the earning to give variety, then achieving financial independence first allows you to donate the entirety of your post-financial-independence salary. Over the course of your life, that means donating more total dollars. If you believe that lives saved in ten or twenty years are as valuable as lives saved today, and that dollars donated in ten or twenty years will not be too much less effective than dollars donated today, then this means doing more good.If you hope to work directly in an EA cause area, the accounting is much less obvious, but the conclusion may be the same. The early part of your career, particularly the first few years, is when you have the least skills and the most to learn. I spent quite a bit of time in training at my current job, where I was gaining skills but at a net cost to my employer. So the altruistic cost of working outside EA is at its lowest at the beginning of your career. And the altruistic benefit from building skills faster working outside EA may be worthwhile. Often it is easier and faster to build skills in a large, established organization, compared to a newer smaller startup-like organization. EA organizations...]]>
River https://forum.effectivealtruism.org/posts/psvQMXEgQsT5RMDTu/consider-financial-independence-first Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider Financial Independence First, published by River on December 27, 2022 on The Effective Altruism Forum.Over the course of your working life, you will earn some amount of money, spend some amount of money on yourself (and those close to you), and spend some amount of money and time on strangers. EA is about figuring out how to deploy the time and money we spend on strangers effectively. And as a movement of mostly young people, there is a tendency to start devoting time and money to strangers early in our careers. I want to suggest that this is often a mistake – that many of us should front load the money we spend on ourselves, by achieving financial independence first, by which I mean having enough savings that you could live off of it indefinitely, and then earning to give or doing direct work in an EA cause area.This is the strategy I am pursuing. I got interested in EA a couple of years ago during the pandemic, about the same time I was moving towards a new career in software engineering. I’ve been a software engineer making low six figures for about a year, and I’ve been putting about half my income towards my current expenses, and half towards financial independence1. After I reach financial independence, in something like five years, depending how the markets go, I expect to either earn to give (which I will be well set up for), or find direct work mitigating existential risks. Either way, learning about the various existential risks and mitigation strategies, and the people and organizations working on them, will prepare me to do that when the time comes.Some of my reasons for pursing financial independence first are selfish. I’m starting my third career in my mid thirties – I want a feeling of material security, and I’m not going to get it from an employer. I want the lower level of stress and anxiety that I expect I will have when I can pay for my rent and groceries on my own, regardless of my employment. I want the freedom to pick a low paying job, or even an unpaid job, over a higher paying but less meaningful job, without worrying about the impact on my lifestyle. I want the freedom to leave a job if the boss I love gets replaced by a boss I hate, without having to ask whether I can afford it. And it’s ok for an EA to make decisions for selfish reasons like this. EA doesn’t have anything to say about how much money anyone should donate or how much time anyone should spend altruistically. EA only tries to answer the question, after time or money has been committed to strangers, how to deploy it effectively.2Other reasons for pursing financial independence first are altruistic. If your altruism is of the earning to give variety, then achieving financial independence first allows you to donate the entirety of your post-financial-independence salary. Over the course of your life, that means donating more total dollars. If you believe that lives saved in ten or twenty years are as valuable as lives saved today, and that dollars donated in ten or twenty years will not be too much less effective than dollars donated today, then this means doing more good.If you hope to work directly in an EA cause area, the accounting is much less obvious, but the conclusion may be the same. The early part of your career, particularly the first few years, is when you have the least skills and the most to learn. I spent quite a bit of time in training at my current job, where I was gaining skills but at a net cost to my employer. So the altruistic cost of working outside EA is at its lowest at the beginning of your career. And the altruistic benefit from building skills faster working outside EA may be worthwhile. Often it is easier and faster to build skills in a large, established organization, compared to a newer smaller startup-like organization. EA organizations...]]>
Wed, 28 Dec 2022 11:35:00 +0000 EA - Consider Financial Independence First by River Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider Financial Independence First, published by River on December 27, 2022 on The Effective Altruism Forum.Over the course of your working life, you will earn some amount of money, spend some amount of money on yourself (and those close to you), and spend some amount of money and time on strangers. EA is about figuring out how to deploy the time and money we spend on strangers effectively. And as a movement of mostly young people, there is a tendency to start devoting time and money to strangers early in our careers. I want to suggest that this is often a mistake – that many of us should front load the money we spend on ourselves, by achieving financial independence first, by which I mean having enough savings that you could live off of it indefinitely, and then earning to give or doing direct work in an EA cause area.This is the strategy I am pursuing. I got interested in EA a couple of years ago during the pandemic, about the same time I was moving towards a new career in software engineering. I’ve been a software engineer making low six figures for about a year, and I’ve been putting about half my income towards my current expenses, and half towards financial independence1. After I reach financial independence, in something like five years, depending how the markets go, I expect to either earn to give (which I will be well set up for), or find direct work mitigating existential risks. Either way, learning about the various existential risks and mitigation strategies, and the people and organizations working on them, will prepare me to do that when the time comes.Some of my reasons for pursing financial independence first are selfish. I’m starting my third career in my mid thirties – I want a feeling of material security, and I’m not going to get it from an employer. I want the lower level of stress and anxiety that I expect I will have when I can pay for my rent and groceries on my own, regardless of my employment. I want the freedom to pick a low paying job, or even an unpaid job, over a higher paying but less meaningful job, without worrying about the impact on my lifestyle. I want the freedom to leave a job if the boss I love gets replaced by a boss I hate, without having to ask whether I can afford it. And it’s ok for an EA to make decisions for selfish reasons like this. EA doesn’t have anything to say about how much money anyone should donate or how much time anyone should spend altruistically. EA only tries to answer the question, after time or money has been committed to strangers, how to deploy it effectively.2Other reasons for pursing financial independence first are altruistic. If your altruism is of the earning to give variety, then achieving financial independence first allows you to donate the entirety of your post-financial-independence salary. Over the course of your life, that means donating more total dollars. If you believe that lives saved in ten or twenty years are as valuable as lives saved today, and that dollars donated in ten or twenty years will not be too much less effective than dollars donated today, then this means doing more good.If you hope to work directly in an EA cause area, the accounting is much less obvious, but the conclusion may be the same. The early part of your career, particularly the first few years, is when you have the least skills and the most to learn. I spent quite a bit of time in training at my current job, where I was gaining skills but at a net cost to my employer. So the altruistic cost of working outside EA is at its lowest at the beginning of your career. And the altruistic benefit from building skills faster working outside EA may be worthwhile. Often it is easier and faster to build skills in a large, established organization, compared to a newer smaller startup-like organization. EA organizations...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider Financial Independence First, published by River on December 27, 2022 on The Effective Altruism Forum.Over the course of your working life, you will earn some amount of money, spend some amount of money on yourself (and those close to you), and spend some amount of money and time on strangers. EA is about figuring out how to deploy the time and money we spend on strangers effectively. And as a movement of mostly young people, there is a tendency to start devoting time and money to strangers early in our careers. I want to suggest that this is often a mistake – that many of us should front load the money we spend on ourselves, by achieving financial independence first, by which I mean having enough savings that you could live off of it indefinitely, and then earning to give or doing direct work in an EA cause area.This is the strategy I am pursuing. I got interested in EA a couple of years ago during the pandemic, about the same time I was moving towards a new career in software engineering. I’ve been a software engineer making low six figures for about a year, and I’ve been putting about half my income towards my current expenses, and half towards financial independence1. After I reach financial independence, in something like five years, depending how the markets go, I expect to either earn to give (which I will be well set up for), or find direct work mitigating existential risks. Either way, learning about the various existential risks and mitigation strategies, and the people and organizations working on them, will prepare me to do that when the time comes.Some of my reasons for pursing financial independence first are selfish. I’m starting my third career in my mid thirties – I want a feeling of material security, and I’m not going to get it from an employer. I want the lower level of stress and anxiety that I expect I will have when I can pay for my rent and groceries on my own, regardless of my employment. I want the freedom to pick a low paying job, or even an unpaid job, over a higher paying but less meaningful job, without worrying about the impact on my lifestyle. I want the freedom to leave a job if the boss I love gets replaced by a boss I hate, without having to ask whether I can afford it. And it’s ok for an EA to make decisions for selfish reasons like this. EA doesn’t have anything to say about how much money anyone should donate or how much time anyone should spend altruistically. EA only tries to answer the question, after time or money has been committed to strangers, how to deploy it effectively.2Other reasons for pursing financial independence first are altruistic. If your altruism is of the earning to give variety, then achieving financial independence first allows you to donate the entirety of your post-financial-independence salary. Over the course of your life, that means donating more total dollars. If you believe that lives saved in ten or twenty years are as valuable as lives saved today, and that dollars donated in ten or twenty years will not be too much less effective than dollars donated today, then this means doing more good.If you hope to work directly in an EA cause area, the accounting is much less obvious, but the conclusion may be the same. The early part of your career, particularly the first few years, is when you have the least skills and the most to learn. I spent quite a bit of time in training at my current job, where I was gaining skills but at a net cost to my employer. So the altruistic cost of working outside EA is at its lowest at the beginning of your career. And the altruistic benefit from building skills faster working outside EA may be worthwhile. Often it is easier and faster to build skills in a large, established organization, compared to a newer smaller startup-like organization. EA organizations...]]>
River https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:54 None full 4296
ffmbLCzJctLac3rDu_EA EA - StrongMinds should not be a top-rated charity (yet) by Simon M Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: StrongMinds should not be a top-rated charity (yet), published by Simon M on December 27, 2022 on The Effective Altruism Forum.GWWC lists StrongMinds as a “top-rated” charity. Their reason for doing so is because Founders Pledge has determined they are cost-effective in their report into mental health.I could say here, “and that report was written in 2019 - either they should update the report or remove the top rating” and we could all go home. In fact, most of what I’m about to say does consist of “the data really isn’t that clear yet”.I think the strongest statement I can make (which I doubt StrongMinds would disagree with) is:“StrongMinds have made limited effort to be quantitative in their self-evaluation, haven’t continued monitoring impact after intervention, haven’t done the research they once claimed they would. They have not been vetted sufficiently to be considered a top charity, and only one independent group has done the work to look into them.”My key issues are:Survey data is notoriously noisy and the data here seems to be especially soThere are reasons to be especially doubtful about the accuracy of the survey data (StrongMinds have twice updated their level of uncertainty in their numbers due to SDB)One of the main models is (to my eyes) off by a factor of ~2 based on an unrealistic assumption about depression (medium confidence)StrongMinds haven’t continued to publish new data since their trials very early onStrongMinds seem to be somewhat deceptive about how they market themselves as “effective” (and EA are playing into that by holding them in such high esteem without scrutiny)What’s going on with the PHQ-9 scores?In their last four quarterly reports, StrongMinds have reported PHQ-9 reductions of: -13, -13, -13, -13. In their Phase II report, raw scores dropped by a similar amount:However, their Phase II analysis reports (emphasis theirs):As evidenced in Table 5, members in the treatment intervention group, on average, had a 4.5 point reduction in their total PHQ-9 Raw Score over the intervention period, as compared to the control populations. Further, there is also a significant visit effect when controlling for group membership. The PHQ-9 Raw Score decreased on average by 0.86 points for a participant for every two groups she attended. Both of these findings are statistically significant.Founders Pledge’s cost-effectivenes model uses the 4.5 reduction number in their model. (And further reduces this for reasons we’ll get into later).Based on Phase I and II surveys, it seems to me that a much more cost-effective intervention would be to go around surveying people. I’m not exactly sure what’s going on with the Phase I / Phase II data, but the best I can tell is in Phase I we had a ~7.5 vs ~5.1 PHQ-9 reduction from “being surveyed” vs “being part of the group” and in Phase II we had ~5.1 vs ~4.5 PHQ-9 reduction from “being surveyed” vs “being part of the group”. For what it’s worth, I don’t believe this is likely the case, I think it’s just a strong sign that the survey mechanism being used is inadequate to determine what is going on.There are a number of potential reasons we might expect to see such large improvements in the mental health of the control group (as well as the treatment group).Mean-reversion - StrongMinds happens to sample people at a low ebb and so the progression of time leads their mental health to improve of its own accord“People in targeted communities often incorrectly believe that StrongMinds will provide them with cash or material goods and may therefore provide misleading responses when being diagnosed.” Potential participants fake their initial scores in order to get into the program (either because they (mistakenly) think there is some material benefit to being in the program or because they think it makes...]]>
Simon M https://forum.effectivealtruism.org/posts/ffmbLCzJctLac3rDu/strongminds-should-not-be-a-top-rated-charity-yet Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: StrongMinds should not be a top-rated charity (yet), published by Simon M on December 27, 2022 on The Effective Altruism Forum.GWWC lists StrongMinds as a “top-rated” charity. Their reason for doing so is because Founders Pledge has determined they are cost-effective in their report into mental health.I could say here, “and that report was written in 2019 - either they should update the report or remove the top rating” and we could all go home. In fact, most of what I’m about to say does consist of “the data really isn’t that clear yet”.I think the strongest statement I can make (which I doubt StrongMinds would disagree with) is:“StrongMinds have made limited effort to be quantitative in their self-evaluation, haven’t continued monitoring impact after intervention, haven’t done the research they once claimed they would. They have not been vetted sufficiently to be considered a top charity, and only one independent group has done the work to look into them.”My key issues are:Survey data is notoriously noisy and the data here seems to be especially soThere are reasons to be especially doubtful about the accuracy of the survey data (StrongMinds have twice updated their level of uncertainty in their numbers due to SDB)One of the main models is (to my eyes) off by a factor of ~2 based on an unrealistic assumption about depression (medium confidence)StrongMinds haven’t continued to publish new data since their trials very early onStrongMinds seem to be somewhat deceptive about how they market themselves as “effective” (and EA are playing into that by holding them in such high esteem without scrutiny)What’s going on with the PHQ-9 scores?In their last four quarterly reports, StrongMinds have reported PHQ-9 reductions of: -13, -13, -13, -13. In their Phase II report, raw scores dropped by a similar amount:However, their Phase II analysis reports (emphasis theirs):As evidenced in Table 5, members in the treatment intervention group, on average, had a 4.5 point reduction in their total PHQ-9 Raw Score over the intervention period, as compared to the control populations. Further, there is also a significant visit effect when controlling for group membership. The PHQ-9 Raw Score decreased on average by 0.86 points for a participant for every two groups she attended. Both of these findings are statistically significant.Founders Pledge’s cost-effectivenes model uses the 4.5 reduction number in their model. (And further reduces this for reasons we’ll get into later).Based on Phase I and II surveys, it seems to me that a much more cost-effective intervention would be to go around surveying people. I’m not exactly sure what’s going on with the Phase I / Phase II data, but the best I can tell is in Phase I we had a ~7.5 vs ~5.1 PHQ-9 reduction from “being surveyed” vs “being part of the group” and in Phase II we had ~5.1 vs ~4.5 PHQ-9 reduction from “being surveyed” vs “being part of the group”. For what it’s worth, I don’t believe this is likely the case, I think it’s just a strong sign that the survey mechanism being used is inadequate to determine what is going on.There are a number of potential reasons we might expect to see such large improvements in the mental health of the control group (as well as the treatment group).Mean-reversion - StrongMinds happens to sample people at a low ebb and so the progression of time leads their mental health to improve of its own accord“People in targeted communities often incorrectly believe that StrongMinds will provide them with cash or material goods and may therefore provide misleading responses when being diagnosed.” Potential participants fake their initial scores in order to get into the program (either because they (mistakenly) think there is some material benefit to being in the program or because they think it makes...]]>
Wed, 28 Dec 2022 00:05:21 +0000 EA - StrongMinds should not be a top-rated charity (yet) by Simon M Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: StrongMinds should not be a top-rated charity (yet), published by Simon M on December 27, 2022 on The Effective Altruism Forum.GWWC lists StrongMinds as a “top-rated” charity. Their reason for doing so is because Founders Pledge has determined they are cost-effective in their report into mental health.I could say here, “and that report was written in 2019 - either they should update the report or remove the top rating” and we could all go home. In fact, most of what I’m about to say does consist of “the data really isn’t that clear yet”.I think the strongest statement I can make (which I doubt StrongMinds would disagree with) is:“StrongMinds have made limited effort to be quantitative in their self-evaluation, haven’t continued monitoring impact after intervention, haven’t done the research they once claimed they would. They have not been vetted sufficiently to be considered a top charity, and only one independent group has done the work to look into them.”My key issues are:Survey data is notoriously noisy and the data here seems to be especially soThere are reasons to be especially doubtful about the accuracy of the survey data (StrongMinds have twice updated their level of uncertainty in their numbers due to SDB)One of the main models is (to my eyes) off by a factor of ~2 based on an unrealistic assumption about depression (medium confidence)StrongMinds haven’t continued to publish new data since their trials very early onStrongMinds seem to be somewhat deceptive about how they market themselves as “effective” (and EA are playing into that by holding them in such high esteem without scrutiny)What’s going on with the PHQ-9 scores?In their last four quarterly reports, StrongMinds have reported PHQ-9 reductions of: -13, -13, -13, -13. In their Phase II report, raw scores dropped by a similar amount:However, their Phase II analysis reports (emphasis theirs):As evidenced in Table 5, members in the treatment intervention group, on average, had a 4.5 point reduction in their total PHQ-9 Raw Score over the intervention period, as compared to the control populations. Further, there is also a significant visit effect when controlling for group membership. The PHQ-9 Raw Score decreased on average by 0.86 points for a participant for every two groups she attended. Both of these findings are statistically significant.Founders Pledge’s cost-effectivenes model uses the 4.5 reduction number in their model. (And further reduces this for reasons we’ll get into later).Based on Phase I and II surveys, it seems to me that a much more cost-effective intervention would be to go around surveying people. I’m not exactly sure what’s going on with the Phase I / Phase II data, but the best I can tell is in Phase I we had a ~7.5 vs ~5.1 PHQ-9 reduction from “being surveyed” vs “being part of the group” and in Phase II we had ~5.1 vs ~4.5 PHQ-9 reduction from “being surveyed” vs “being part of the group”. For what it’s worth, I don’t believe this is likely the case, I think it’s just a strong sign that the survey mechanism being used is inadequate to determine what is going on.There are a number of potential reasons we might expect to see such large improvements in the mental health of the control group (as well as the treatment group).Mean-reversion - StrongMinds happens to sample people at a low ebb and so the progression of time leads their mental health to improve of its own accord“People in targeted communities often incorrectly believe that StrongMinds will provide them with cash or material goods and may therefore provide misleading responses when being diagnosed.” Potential participants fake their initial scores in order to get into the program (either because they (mistakenly) think there is some material benefit to being in the program or because they think it makes...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: StrongMinds should not be a top-rated charity (yet), published by Simon M on December 27, 2022 on The Effective Altruism Forum.GWWC lists StrongMinds as a “top-rated” charity. Their reason for doing so is because Founders Pledge has determined they are cost-effective in their report into mental health.I could say here, “and that report was written in 2019 - either they should update the report or remove the top rating” and we could all go home. In fact, most of what I’m about to say does consist of “the data really isn’t that clear yet”.I think the strongest statement I can make (which I doubt StrongMinds would disagree with) is:“StrongMinds have made limited effort to be quantitative in their self-evaluation, haven’t continued monitoring impact after intervention, haven’t done the research they once claimed they would. They have not been vetted sufficiently to be considered a top charity, and only one independent group has done the work to look into them.”My key issues are:Survey data is notoriously noisy and the data here seems to be especially soThere are reasons to be especially doubtful about the accuracy of the survey data (StrongMinds have twice updated their level of uncertainty in their numbers due to SDB)One of the main models is (to my eyes) off by a factor of ~2 based on an unrealistic assumption about depression (medium confidence)StrongMinds haven’t continued to publish new data since their trials very early onStrongMinds seem to be somewhat deceptive about how they market themselves as “effective” (and EA are playing into that by holding them in such high esteem without scrutiny)What’s going on with the PHQ-9 scores?In their last four quarterly reports, StrongMinds have reported PHQ-9 reductions of: -13, -13, -13, -13. In their Phase II report, raw scores dropped by a similar amount:However, their Phase II analysis reports (emphasis theirs):As evidenced in Table 5, members in the treatment intervention group, on average, had a 4.5 point reduction in their total PHQ-9 Raw Score over the intervention period, as compared to the control populations. Further, there is also a significant visit effect when controlling for group membership. The PHQ-9 Raw Score decreased on average by 0.86 points for a participant for every two groups she attended. Both of these findings are statistically significant.Founders Pledge’s cost-effectivenes model uses the 4.5 reduction number in their model. (And further reduces this for reasons we’ll get into later).Based on Phase I and II surveys, it seems to me that a much more cost-effective intervention would be to go around surveying people. I’m not exactly sure what’s going on with the Phase I / Phase II data, but the best I can tell is in Phase I we had a ~7.5 vs ~5.1 PHQ-9 reduction from “being surveyed” vs “being part of the group” and in Phase II we had ~5.1 vs ~4.5 PHQ-9 reduction from “being surveyed” vs “being part of the group”. For what it’s worth, I don’t believe this is likely the case, I think it’s just a strong sign that the survey mechanism being used is inadequate to determine what is going on.There are a number of potential reasons we might expect to see such large improvements in the mental health of the control group (as well as the treatment group).Mean-reversion - StrongMinds happens to sample people at a low ebb and so the progression of time leads their mental health to improve of its own accord“People in targeted communities often incorrectly believe that StrongMinds will provide them with cash or material goods and may therefore provide misleading responses when being diagnosed.” Potential participants fake their initial scores in order to get into the program (either because they (mistakenly) think there is some material benefit to being in the program or because they think it makes...]]>
Simon M https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:42 None full 4284
AmA9gQMhqAQW8bC4W_EA EA - I have thousands of copies of HPMOR in Russian. How to use them with the most impact? by Samin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I have thousands of copies of HPMOR in Russian. How to use them with the most impact?, published by Samin on December 27, 2022 on The Effective Altruism Forum.As a result of a crowdfunding campaign a couple of years ago, I printed 21k copies of HPMOR. 11k of those went to the crowdfunding participants. I need ideas for how to use the ones that are left with the most impact.Over the last few weeks, I've sent 400 copies to winners of IMO, IOI, and other international and Russian olympiads in math, computer science, biology, etc. (and also bought and sent copies of Human Compatible in Russian to about 250 of them who requested it as well). Over the next month, we'll get the media to post about the opportunity for winners of certain olympiads to get free books. I estimate that we can get maybe twice as many (800-1000) impressive students to fill out a form for getting HPMOR and get maybe up to 30k people who follow the same media to read HPMOR online if everything goes well (uncertain estimates, from the previous experience).The theory of change behind sending the books to winners of olympiads is that people with high potential read HPMOR and share it with friends, get some EA-adjacent mindset and values from it, and then get introduced to EA in emails about 80k Hours (which is being translated into Russian) and other EA and cause-specific content, and start participating in the EA community and get more into specific cause areas that interest them. The anecdotal evidence is that most of the Russian EAs, many of whom now work full-time at EA orgs or as independent researchers, got into EA after reading HPMOR and then the LW sequences.It's mostly impossible to give the books for donations to EA charities because Russians can't transfer money to EA charities due to sanctions.Delivering the copies costs around $5-10 in Russia and $40-100 outside Russia.There are some other ideas, but nothing that lets us spend thousands of books effectively, so we need more.So, any ideas on how to spend 8-9k of HPMOR copies in Russian?HPMOR has endorsements from a bunch of Russians important to different audiences. That includes some of the most famous science communicators, computer science professors, literature critics, a person training the Moscow team for math competitions, etc.; currently, there are news media and popular science resources with millions of followers managed by people who've read HPMOR or heard of it in a positive way and who I can talk to. After the war started, some of them were banned, but that doesn't affect the millions of followers they have on, e.g., Telegram.Initially, we planned first to translate and print 80k and then give HPMOR to people together with copies of 80k, but there's now time pressure due to uncertainty over the future legal status of HPMOR in Russia. Recently, a law came into effect that prohibited all sorts of mentions of LGBTQ in books, and it's not obvious whether HPMOR is at risk, so it's better to spend a lot of books now.E.g., there's the Yandex School of Data Analysis, one of the top places for students in Russia to get into machine learning; we hope to be able to get them to give hundreds of copies of HPMOR to students who've completed a machine learning course and give us their emails. That might result in more people familiar with the alignment problem in positions where they can help prevent an AI arms race started by Russia.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Samin https://forum.effectivealtruism.org/posts/AmA9gQMhqAQW8bC4W/i-have-thousands-of-copies-of-hpmor-in-russian-how-to-use Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I have thousands of copies of HPMOR in Russian. How to use them with the most impact?, published by Samin on December 27, 2022 on The Effective Altruism Forum.As a result of a crowdfunding campaign a couple of years ago, I printed 21k copies of HPMOR. 11k of those went to the crowdfunding participants. I need ideas for how to use the ones that are left with the most impact.Over the last few weeks, I've sent 400 copies to winners of IMO, IOI, and other international and Russian olympiads in math, computer science, biology, etc. (and also bought and sent copies of Human Compatible in Russian to about 250 of them who requested it as well). Over the next month, we'll get the media to post about the opportunity for winners of certain olympiads to get free books. I estimate that we can get maybe twice as many (800-1000) impressive students to fill out a form for getting HPMOR and get maybe up to 30k people who follow the same media to read HPMOR online if everything goes well (uncertain estimates, from the previous experience).The theory of change behind sending the books to winners of olympiads is that people with high potential read HPMOR and share it with friends, get some EA-adjacent mindset and values from it, and then get introduced to EA in emails about 80k Hours (which is being translated into Russian) and other EA and cause-specific content, and start participating in the EA community and get more into specific cause areas that interest them. The anecdotal evidence is that most of the Russian EAs, many of whom now work full-time at EA orgs or as independent researchers, got into EA after reading HPMOR and then the LW sequences.It's mostly impossible to give the books for donations to EA charities because Russians can't transfer money to EA charities due to sanctions.Delivering the copies costs around $5-10 in Russia and $40-100 outside Russia.There are some other ideas, but nothing that lets us spend thousands of books effectively, so we need more.So, any ideas on how to spend 8-9k of HPMOR copies in Russian?HPMOR has endorsements from a bunch of Russians important to different audiences. That includes some of the most famous science communicators, computer science professors, literature critics, a person training the Moscow team for math competitions, etc.; currently, there are news media and popular science resources with millions of followers managed by people who've read HPMOR or heard of it in a positive way and who I can talk to. After the war started, some of them were banned, but that doesn't affect the millions of followers they have on, e.g., Telegram.Initially, we planned first to translate and print 80k and then give HPMOR to people together with copies of 80k, but there's now time pressure due to uncertainty over the future legal status of HPMOR in Russia. Recently, a law came into effect that prohibited all sorts of mentions of LGBTQ in books, and it's not obvious whether HPMOR is at risk, so it's better to spend a lot of books now.E.g., there's the Yandex School of Data Analysis, one of the top places for students in Russia to get into machine learning; we hope to be able to get them to give hundreds of copies of HPMOR to students who've completed a machine learning course and give us their emails. That might result in more people familiar with the alignment problem in positions where they can help prevent an AI arms race started by Russia.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 27 Dec 2022 17:27:54 +0000 EA - I have thousands of copies of HPMOR in Russian. How to use them with the most impact? by Samin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I have thousands of copies of HPMOR in Russian. How to use them with the most impact?, published by Samin on December 27, 2022 on The Effective Altruism Forum.As a result of a crowdfunding campaign a couple of years ago, I printed 21k copies of HPMOR. 11k of those went to the crowdfunding participants. I need ideas for how to use the ones that are left with the most impact.Over the last few weeks, I've sent 400 copies to winners of IMO, IOI, and other international and Russian olympiads in math, computer science, biology, etc. (and also bought and sent copies of Human Compatible in Russian to about 250 of them who requested it as well). Over the next month, we'll get the media to post about the opportunity for winners of certain olympiads to get free books. I estimate that we can get maybe twice as many (800-1000) impressive students to fill out a form for getting HPMOR and get maybe up to 30k people who follow the same media to read HPMOR online if everything goes well (uncertain estimates, from the previous experience).The theory of change behind sending the books to winners of olympiads is that people with high potential read HPMOR and share it with friends, get some EA-adjacent mindset and values from it, and then get introduced to EA in emails about 80k Hours (which is being translated into Russian) and other EA and cause-specific content, and start participating in the EA community and get more into specific cause areas that interest them. The anecdotal evidence is that most of the Russian EAs, many of whom now work full-time at EA orgs or as independent researchers, got into EA after reading HPMOR and then the LW sequences.It's mostly impossible to give the books for donations to EA charities because Russians can't transfer money to EA charities due to sanctions.Delivering the copies costs around $5-10 in Russia and $40-100 outside Russia.There are some other ideas, but nothing that lets us spend thousands of books effectively, so we need more.So, any ideas on how to spend 8-9k of HPMOR copies in Russian?HPMOR has endorsements from a bunch of Russians important to different audiences. That includes some of the most famous science communicators, computer science professors, literature critics, a person training the Moscow team for math competitions, etc.; currently, there are news media and popular science resources with millions of followers managed by people who've read HPMOR or heard of it in a positive way and who I can talk to. After the war started, some of them were banned, but that doesn't affect the millions of followers they have on, e.g., Telegram.Initially, we planned first to translate and print 80k and then give HPMOR to people together with copies of 80k, but there's now time pressure due to uncertainty over the future legal status of HPMOR in Russia. Recently, a law came into effect that prohibited all sorts of mentions of LGBTQ in books, and it's not obvious whether HPMOR is at risk, so it's better to spend a lot of books now.E.g., there's the Yandex School of Data Analysis, one of the top places for students in Russia to get into machine learning; we hope to be able to get them to give hundreds of copies of HPMOR to students who've completed a machine learning course and give us their emails. That might result in more people familiar with the alignment problem in positions where they can help prevent an AI arms race started by Russia.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I have thousands of copies of HPMOR in Russian. How to use them with the most impact?, published by Samin on December 27, 2022 on The Effective Altruism Forum.As a result of a crowdfunding campaign a couple of years ago, I printed 21k copies of HPMOR. 11k of those went to the crowdfunding participants. I need ideas for how to use the ones that are left with the most impact.Over the last few weeks, I've sent 400 copies to winners of IMO, IOI, and other international and Russian olympiads in math, computer science, biology, etc. (and also bought and sent copies of Human Compatible in Russian to about 250 of them who requested it as well). Over the next month, we'll get the media to post about the opportunity for winners of certain olympiads to get free books. I estimate that we can get maybe twice as many (800-1000) impressive students to fill out a form for getting HPMOR and get maybe up to 30k people who follow the same media to read HPMOR online if everything goes well (uncertain estimates, from the previous experience).The theory of change behind sending the books to winners of olympiads is that people with high potential read HPMOR and share it with friends, get some EA-adjacent mindset and values from it, and then get introduced to EA in emails about 80k Hours (which is being translated into Russian) and other EA and cause-specific content, and start participating in the EA community and get more into specific cause areas that interest them. The anecdotal evidence is that most of the Russian EAs, many of whom now work full-time at EA orgs or as independent researchers, got into EA after reading HPMOR and then the LW sequences.It's mostly impossible to give the books for donations to EA charities because Russians can't transfer money to EA charities due to sanctions.Delivering the copies costs around $5-10 in Russia and $40-100 outside Russia.There are some other ideas, but nothing that lets us spend thousands of books effectively, so we need more.So, any ideas on how to spend 8-9k of HPMOR copies in Russian?HPMOR has endorsements from a bunch of Russians important to different audiences. That includes some of the most famous science communicators, computer science professors, literature critics, a person training the Moscow team for math competitions, etc.; currently, there are news media and popular science resources with millions of followers managed by people who've read HPMOR or heard of it in a positive way and who I can talk to. After the war started, some of them were banned, but that doesn't affect the millions of followers they have on, e.g., Telegram.Initially, we planned first to translate and print 80k and then give HPMOR to people together with copies of 80k, but there's now time pressure due to uncertainty over the future legal status of HPMOR in Russia. Recently, a law came into effect that prohibited all sorts of mentions of LGBTQ in books, and it's not obvious whether HPMOR is at risk, so it's better to spend a lot of books now.E.g., there's the Yandex School of Data Analysis, one of the top places for students in Russia to get into machine learning; we hope to be able to get them to give hundreds of copies of HPMOR to students who've completed a machine learning course and give us their emails. That might result in more people familiar with the alignment problem in positions where they can help prevent an AI arms race started by Russia.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Samin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:22 None full 4281
QjeruoGQmYZh2ZsCt_EA EA - How long till Brussels?: A light investigation into the Brussels Gap by Yadav Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How long till Brussels?: A light investigation into the Brussels Gap, published by Yadav on December 26, 2022 on The Effective Altruism Forum.This is written quickly; I'm mostly looking to gauge whether this topic is important at all. As such, I'd love to hear your thoughts!IntroductionThis post quickly looks at the 'Brussels Gap', i.e. the time elapsed between enacting a Regulation within the EU and its emulation in other parts of the world (i.e. the Brussels Effect).The 'Brussels Gap' could be a relatively important phenomenon that is worth examining, given the development of a new Regulation on Artificial Intelligence proposed by the EU. If the Regulation comes to fruition (and it seems likely that it will), then having a better picture of the timescales at which the Brussels Effect operates seems crucial.In this post, I outline a primer on the Brussels Effect and the relationship it could play with AI development. Then I list out different case studies of the Brussels Effect and look at the time differences between the enactment of a Regulation within the EU and its replication in other countries outside Europe.Depending on your timelines for the development of Transformative AI, understanding the 'Brussels Gap' could be directly useful for weighing up the importance of the Brussels Effect.If you're already up to speed with Brussels Effect, I'd suggest skipping to the section titled "The 'Brussels Gap'".Background Information:What is the Brussels Effect?The "Brussels Effect" refers to the EU's unilateral ability to regulate the global marketplace. This ability allows legislation and policy adopted by the EU to diffuse to other jurisdictions.The General Data Protection Regulation (GDPR) is a salient example of this. The Regulation was passed by the European Parliament and the Council of the EU in 2016 and came into effect on the 25th of May, 2018.Introducing the GDPR led to a ripple effect; India, Japan, Brazil and elsewhere began emulating the Regulation.Types of Brussels EffectDe Facto BE: This is where conduct under a particular area is done under the unilateral EU rules, even when other states continue to maintain their own rules. Anu Bradford describes this as the 'convergence' of a company's global production and conducts towards EU regulations.De Jure BE: Where states adopt EU regulations into their own legal body.Why does the Brussels Effect happen?Why do other countries choose to model after the EU? The two most substantial answers to this are (see Anu Bradford's work for more):1. Institutional CompetenceThe EU's architecture is incredibly complex. Here is an outline of how each body interacts with one another.This complicated architecture has also had a 5.2% growth rate in all its bodies combined, with most of the staff being highly educated (usually possessing a master's degree).Both of these factors resulted in a signal of competence to other countries in the world, which results in a higher degree of trust in the EU's decisions and policies.2. Economic trade-offsThe EU consists of 27 Member States, creating strong economic incentives for companies to operate within the EU so they can trade within these states. Those who want to participate in this internal market must comply with the Union's Regulations. Essentially, no one gets to keep their cake and eat it too. If you're going to get hold of the gains of trading within the EU, you'll have to obey the rules of the EU.In creating an EU-compliant product, companies have a choice: do they create separate products for different jurisdictions, or do they standardise their products across the world? The latter is usually more economically feasible, which unlocks the ability for the rules of the EU to spread.What does the Brussels Effect have to do with AI?The EU is likely to intro...]]>
Yadav https://forum.effectivealtruism.org/posts/QjeruoGQmYZh2ZsCt/how-long-till-brussels-a-light-investigation-into-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How long till Brussels?: A light investigation into the Brussels Gap, published by Yadav on December 26, 2022 on The Effective Altruism Forum.This is written quickly; I'm mostly looking to gauge whether this topic is important at all. As such, I'd love to hear your thoughts!IntroductionThis post quickly looks at the 'Brussels Gap', i.e. the time elapsed between enacting a Regulation within the EU and its emulation in other parts of the world (i.e. the Brussels Effect).The 'Brussels Gap' could be a relatively important phenomenon that is worth examining, given the development of a new Regulation on Artificial Intelligence proposed by the EU. If the Regulation comes to fruition (and it seems likely that it will), then having a better picture of the timescales at which the Brussels Effect operates seems crucial.In this post, I outline a primer on the Brussels Effect and the relationship it could play with AI development. Then I list out different case studies of the Brussels Effect and look at the time differences between the enactment of a Regulation within the EU and its replication in other countries outside Europe.Depending on your timelines for the development of Transformative AI, understanding the 'Brussels Gap' could be directly useful for weighing up the importance of the Brussels Effect.If you're already up to speed with Brussels Effect, I'd suggest skipping to the section titled "The 'Brussels Gap'".Background Information:What is the Brussels Effect?The "Brussels Effect" refers to the EU's unilateral ability to regulate the global marketplace. This ability allows legislation and policy adopted by the EU to diffuse to other jurisdictions.The General Data Protection Regulation (GDPR) is a salient example of this. The Regulation was passed by the European Parliament and the Council of the EU in 2016 and came into effect on the 25th of May, 2018.Introducing the GDPR led to a ripple effect; India, Japan, Brazil and elsewhere began emulating the Regulation.Types of Brussels EffectDe Facto BE: This is where conduct under a particular area is done under the unilateral EU rules, even when other states continue to maintain their own rules. Anu Bradford describes this as the 'convergence' of a company's global production and conducts towards EU regulations.De Jure BE: Where states adopt EU regulations into their own legal body.Why does the Brussels Effect happen?Why do other countries choose to model after the EU? The two most substantial answers to this are (see Anu Bradford's work for more):1. Institutional CompetenceThe EU's architecture is incredibly complex. Here is an outline of how each body interacts with one another.This complicated architecture has also had a 5.2% growth rate in all its bodies combined, with most of the staff being highly educated (usually possessing a master's degree).Both of these factors resulted in a signal of competence to other countries in the world, which results in a higher degree of trust in the EU's decisions and policies.2. Economic trade-offsThe EU consists of 27 Member States, creating strong economic incentives for companies to operate within the EU so they can trade within these states. Those who want to participate in this internal market must comply with the Union's Regulations. Essentially, no one gets to keep their cake and eat it too. If you're going to get hold of the gains of trading within the EU, you'll have to obey the rules of the EU.In creating an EU-compliant product, companies have a choice: do they create separate products for different jurisdictions, or do they standardise their products across the world? The latter is usually more economically feasible, which unlocks the ability for the rules of the EU to spread.What does the Brussels Effect have to do with AI?The EU is likely to intro...]]>
Tue, 27 Dec 2022 15:50:24 +0000 EA - How long till Brussels?: A light investigation into the Brussels Gap by Yadav Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How long till Brussels?: A light investigation into the Brussels Gap, published by Yadav on December 26, 2022 on The Effective Altruism Forum.This is written quickly; I'm mostly looking to gauge whether this topic is important at all. As such, I'd love to hear your thoughts!IntroductionThis post quickly looks at the 'Brussels Gap', i.e. the time elapsed between enacting a Regulation within the EU and its emulation in other parts of the world (i.e. the Brussels Effect).The 'Brussels Gap' could be a relatively important phenomenon that is worth examining, given the development of a new Regulation on Artificial Intelligence proposed by the EU. If the Regulation comes to fruition (and it seems likely that it will), then having a better picture of the timescales at which the Brussels Effect operates seems crucial.In this post, I outline a primer on the Brussels Effect and the relationship it could play with AI development. Then I list out different case studies of the Brussels Effect and look at the time differences between the enactment of a Regulation within the EU and its replication in other countries outside Europe.Depending on your timelines for the development of Transformative AI, understanding the 'Brussels Gap' could be directly useful for weighing up the importance of the Brussels Effect.If you're already up to speed with Brussels Effect, I'd suggest skipping to the section titled "The 'Brussels Gap'".Background Information:What is the Brussels Effect?The "Brussels Effect" refers to the EU's unilateral ability to regulate the global marketplace. This ability allows legislation and policy adopted by the EU to diffuse to other jurisdictions.The General Data Protection Regulation (GDPR) is a salient example of this. The Regulation was passed by the European Parliament and the Council of the EU in 2016 and came into effect on the 25th of May, 2018.Introducing the GDPR led to a ripple effect; India, Japan, Brazil and elsewhere began emulating the Regulation.Types of Brussels EffectDe Facto BE: This is where conduct under a particular area is done under the unilateral EU rules, even when other states continue to maintain their own rules. Anu Bradford describes this as the 'convergence' of a company's global production and conducts towards EU regulations.De Jure BE: Where states adopt EU regulations into their own legal body.Why does the Brussels Effect happen?Why do other countries choose to model after the EU? The two most substantial answers to this are (see Anu Bradford's work for more):1. Institutional CompetenceThe EU's architecture is incredibly complex. Here is an outline of how each body interacts with one another.This complicated architecture has also had a 5.2% growth rate in all its bodies combined, with most of the staff being highly educated (usually possessing a master's degree).Both of these factors resulted in a signal of competence to other countries in the world, which results in a higher degree of trust in the EU's decisions and policies.2. Economic trade-offsThe EU consists of 27 Member States, creating strong economic incentives for companies to operate within the EU so they can trade within these states. Those who want to participate in this internal market must comply with the Union's Regulations. Essentially, no one gets to keep their cake and eat it too. If you're going to get hold of the gains of trading within the EU, you'll have to obey the rules of the EU.In creating an EU-compliant product, companies have a choice: do they create separate products for different jurisdictions, or do they standardise their products across the world? The latter is usually more economically feasible, which unlocks the ability for the rules of the EU to spread.What does the Brussels Effect have to do with AI?The EU is likely to intro...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How long till Brussels?: A light investigation into the Brussels Gap, published by Yadav on December 26, 2022 on The Effective Altruism Forum.This is written quickly; I'm mostly looking to gauge whether this topic is important at all. As such, I'd love to hear your thoughts!IntroductionThis post quickly looks at the 'Brussels Gap', i.e. the time elapsed between enacting a Regulation within the EU and its emulation in other parts of the world (i.e. the Brussels Effect).The 'Brussels Gap' could be a relatively important phenomenon that is worth examining, given the development of a new Regulation on Artificial Intelligence proposed by the EU. If the Regulation comes to fruition (and it seems likely that it will), then having a better picture of the timescales at which the Brussels Effect operates seems crucial.In this post, I outline a primer on the Brussels Effect and the relationship it could play with AI development. Then I list out different case studies of the Brussels Effect and look at the time differences between the enactment of a Regulation within the EU and its replication in other countries outside Europe.Depending on your timelines for the development of Transformative AI, understanding the 'Brussels Gap' could be directly useful for weighing up the importance of the Brussels Effect.If you're already up to speed with Brussels Effect, I'd suggest skipping to the section titled "The 'Brussels Gap'".Background Information:What is the Brussels Effect?The "Brussels Effect" refers to the EU's unilateral ability to regulate the global marketplace. This ability allows legislation and policy adopted by the EU to diffuse to other jurisdictions.The General Data Protection Regulation (GDPR) is a salient example of this. The Regulation was passed by the European Parliament and the Council of the EU in 2016 and came into effect on the 25th of May, 2018.Introducing the GDPR led to a ripple effect; India, Japan, Brazil and elsewhere began emulating the Regulation.Types of Brussels EffectDe Facto BE: This is where conduct under a particular area is done under the unilateral EU rules, even when other states continue to maintain their own rules. Anu Bradford describes this as the 'convergence' of a company's global production and conducts towards EU regulations.De Jure BE: Where states adopt EU regulations into their own legal body.Why does the Brussels Effect happen?Why do other countries choose to model after the EU? The two most substantial answers to this are (see Anu Bradford's work for more):1. Institutional CompetenceThe EU's architecture is incredibly complex. Here is an outline of how each body interacts with one another.This complicated architecture has also had a 5.2% growth rate in all its bodies combined, with most of the staff being highly educated (usually possessing a master's degree).Both of these factors resulted in a signal of competence to other countries in the world, which results in a higher degree of trust in the EU's decisions and policies.2. Economic trade-offsThe EU consists of 27 Member States, creating strong economic incentives for companies to operate within the EU so they can trade within these states. Those who want to participate in this internal market must comply with the Union's Regulations. Essentially, no one gets to keep their cake and eat it too. If you're going to get hold of the gains of trading within the EU, you'll have to obey the rules of the EU.In creating an EU-compliant product, companies have a choice: do they create separate products for different jurisdictions, or do they standardise their products across the world? The latter is usually more economically feasible, which unlocks the ability for the rules of the EU to spread.What does the Brussels Effect have to do with AI?The EU is likely to intro...]]>
Yadav https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:36 None full 4283
w8LgdJwDxANZDxw8j_EA EA - What are the best charities to donate to in 2022? by Luke Freeman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the best charities to donate to in 2022?, published by Luke Freeman on December 26, 2022 on The Effective Altruism Forum.This is a linkpost for GWWC's latest giving recommendations... and also a reminder to donate or pledge before the end of 2022!When you care about making a difference in the world, it's natural to ask: "Where can my donations do the most good?"The great news is that your donations can have an astonishing impact.But to maximise your impact you need to donate effectively. This is especially important given the best charities are often 10 times better than a typical charity within the same area and hundreds of times better than poorly performing charities. The worst charities can even do harm. That means if you donated $100 to the best charity, you could be doing even more good than someone who donates $1,000 to a typical charity, or $10,000 to a poorly performing one.How to donate effectivelyThe point of charity is to help others. Donating effectively means that you’re taking steps to make sure you’re helping others the most for every dollar you give.We think the best way to do this is to:Identify a promising cause to support. The most promising causes are generally:Large in scale: they significantly impact many livesNeglected: they still need more funding and supportTractable: there are clear and practical ways of making progressChoose an excellent fund or charity working on the cause. Indicators of worthwhile organisations include: reliance on evidence, cost-effectiveness, transparency, room for more funding, and a strong track record. We generally recommend giving to expert-managed funds over charities, because they can allocate your donation where and when it is needed most.Pick an efficient way to donate. Try getting your donation matched by your employer or set up a recurring donation.In practice, choosing a cause and evaluating funds and charities can take a lot of time and effort, and many donors aren't able to work it into their busy schedules. To help you get started, we've put together a list of trustworthy, cost-effective funds and charities working on some of the most pressing causes.Our effective giving recommendationsThese recommendations are listed roughly in order of convenience and suitability for most donors. The right giving opportunity for you will depend on your particular values and worldview.Donate to expert-managed fundsFor most people, we recommend donating through an expert-led fund that is focused on effectiveness.Funds are both convenient and highly effective. They allow donors to pool their money so that they can support outstanding giving opportunities that are evaluated by expert grantmakers and trusted charity evaluators.This approach is similar to using an investment fund instead of trying to pick which individual stocks will be the best investments. The fund distributes your donation among multiple grantees with the goal of maximising your overall impact. Read more about the advantages of funds.Donors from the US, UK, and the Netherlands can make tax-deductible donations to the following funds using the Giving What We Can donation platform. For other countries, you can read our tax deductibility guide by country — but we still think these funds are likely to be your best option, even if they’re not tax deductible.Here are our top-rated funds grouped by cause area.Learn more about how we choose which charities and funds to recommend.Top-rated funds improving human wellbeingThese funds generally give to organisations taking evidence-based approaches to improve and save lives of people currently alive. Donors who value their donations going to organisations either with a strong track-record or a promising and testable new approach to help people in the current generation can maximise the...]]>
Luke Freeman https://forum.effectivealtruism.org/posts/w8LgdJwDxANZDxw8j/what-are-the-best-charities-to-donate-to-in-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the best charities to donate to in 2022?, published by Luke Freeman on December 26, 2022 on The Effective Altruism Forum.This is a linkpost for GWWC's latest giving recommendations... and also a reminder to donate or pledge before the end of 2022!When you care about making a difference in the world, it's natural to ask: "Where can my donations do the most good?"The great news is that your donations can have an astonishing impact.But to maximise your impact you need to donate effectively. This is especially important given the best charities are often 10 times better than a typical charity within the same area and hundreds of times better than poorly performing charities. The worst charities can even do harm. That means if you donated $100 to the best charity, you could be doing even more good than someone who donates $1,000 to a typical charity, or $10,000 to a poorly performing one.How to donate effectivelyThe point of charity is to help others. Donating effectively means that you’re taking steps to make sure you’re helping others the most for every dollar you give.We think the best way to do this is to:Identify a promising cause to support. The most promising causes are generally:Large in scale: they significantly impact many livesNeglected: they still need more funding and supportTractable: there are clear and practical ways of making progressChoose an excellent fund or charity working on the cause. Indicators of worthwhile organisations include: reliance on evidence, cost-effectiveness, transparency, room for more funding, and a strong track record. We generally recommend giving to expert-managed funds over charities, because they can allocate your donation where and when it is needed most.Pick an efficient way to donate. Try getting your donation matched by your employer or set up a recurring donation.In practice, choosing a cause and evaluating funds and charities can take a lot of time and effort, and many donors aren't able to work it into their busy schedules. To help you get started, we've put together a list of trustworthy, cost-effective funds and charities working on some of the most pressing causes.Our effective giving recommendationsThese recommendations are listed roughly in order of convenience and suitability for most donors. The right giving opportunity for you will depend on your particular values and worldview.Donate to expert-managed fundsFor most people, we recommend donating through an expert-led fund that is focused on effectiveness.Funds are both convenient and highly effective. They allow donors to pool their money so that they can support outstanding giving opportunities that are evaluated by expert grantmakers and trusted charity evaluators.This approach is similar to using an investment fund instead of trying to pick which individual stocks will be the best investments. The fund distributes your donation among multiple grantees with the goal of maximising your overall impact. Read more about the advantages of funds.Donors from the US, UK, and the Netherlands can make tax-deductible donations to the following funds using the Giving What We Can donation platform. For other countries, you can read our tax deductibility guide by country — but we still think these funds are likely to be your best option, even if they’re not tax deductible.Here are our top-rated funds grouped by cause area.Learn more about how we choose which charities and funds to recommend.Top-rated funds improving human wellbeingThese funds generally give to organisations taking evidence-based approaches to improve and save lives of people currently alive. Donors who value their donations going to organisations either with a strong track-record or a promising and testable new approach to help people in the current generation can maximise the...]]>
Tue, 27 Dec 2022 14:09:39 +0000 EA - What are the best charities to donate to in 2022? by Luke Freeman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the best charities to donate to in 2022?, published by Luke Freeman on December 26, 2022 on The Effective Altruism Forum.This is a linkpost for GWWC's latest giving recommendations... and also a reminder to donate or pledge before the end of 2022!When you care about making a difference in the world, it's natural to ask: "Where can my donations do the most good?"The great news is that your donations can have an astonishing impact.But to maximise your impact you need to donate effectively. This is especially important given the best charities are often 10 times better than a typical charity within the same area and hundreds of times better than poorly performing charities. The worst charities can even do harm. That means if you donated $100 to the best charity, you could be doing even more good than someone who donates $1,000 to a typical charity, or $10,000 to a poorly performing one.How to donate effectivelyThe point of charity is to help others. Donating effectively means that you’re taking steps to make sure you’re helping others the most for every dollar you give.We think the best way to do this is to:Identify a promising cause to support. The most promising causes are generally:Large in scale: they significantly impact many livesNeglected: they still need more funding and supportTractable: there are clear and practical ways of making progressChoose an excellent fund or charity working on the cause. Indicators of worthwhile organisations include: reliance on evidence, cost-effectiveness, transparency, room for more funding, and a strong track record. We generally recommend giving to expert-managed funds over charities, because they can allocate your donation where and when it is needed most.Pick an efficient way to donate. Try getting your donation matched by your employer or set up a recurring donation.In practice, choosing a cause and evaluating funds and charities can take a lot of time and effort, and many donors aren't able to work it into their busy schedules. To help you get started, we've put together a list of trustworthy, cost-effective funds and charities working on some of the most pressing causes.Our effective giving recommendationsThese recommendations are listed roughly in order of convenience and suitability for most donors. The right giving opportunity for you will depend on your particular values and worldview.Donate to expert-managed fundsFor most people, we recommend donating through an expert-led fund that is focused on effectiveness.Funds are both convenient and highly effective. They allow donors to pool their money so that they can support outstanding giving opportunities that are evaluated by expert grantmakers and trusted charity evaluators.This approach is similar to using an investment fund instead of trying to pick which individual stocks will be the best investments. The fund distributes your donation among multiple grantees with the goal of maximising your overall impact. Read more about the advantages of funds.Donors from the US, UK, and the Netherlands can make tax-deductible donations to the following funds using the Giving What We Can donation platform. For other countries, you can read our tax deductibility guide by country — but we still think these funds are likely to be your best option, even if they’re not tax deductible.Here are our top-rated funds grouped by cause area.Learn more about how we choose which charities and funds to recommend.Top-rated funds improving human wellbeingThese funds generally give to organisations taking evidence-based approaches to improve and save lives of people currently alive. Donors who value their donations going to organisations either with a strong track-record or a promising and testable new approach to help people in the current generation can maximise the...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the best charities to donate to in 2022?, published by Luke Freeman on December 26, 2022 on The Effective Altruism Forum.This is a linkpost for GWWC's latest giving recommendations... and also a reminder to donate or pledge before the end of 2022!When you care about making a difference in the world, it's natural to ask: "Where can my donations do the most good?"The great news is that your donations can have an astonishing impact.But to maximise your impact you need to donate effectively. This is especially important given the best charities are often 10 times better than a typical charity within the same area and hundreds of times better than poorly performing charities. The worst charities can even do harm. That means if you donated $100 to the best charity, you could be doing even more good than someone who donates $1,000 to a typical charity, or $10,000 to a poorly performing one.How to donate effectivelyThe point of charity is to help others. Donating effectively means that you’re taking steps to make sure you’re helping others the most for every dollar you give.We think the best way to do this is to:Identify a promising cause to support. The most promising causes are generally:Large in scale: they significantly impact many livesNeglected: they still need more funding and supportTractable: there are clear and practical ways of making progressChoose an excellent fund or charity working on the cause. Indicators of worthwhile organisations include: reliance on evidence, cost-effectiveness, transparency, room for more funding, and a strong track record. We generally recommend giving to expert-managed funds over charities, because they can allocate your donation where and when it is needed most.Pick an efficient way to donate. Try getting your donation matched by your employer or set up a recurring donation.In practice, choosing a cause and evaluating funds and charities can take a lot of time and effort, and many donors aren't able to work it into their busy schedules. To help you get started, we've put together a list of trustworthy, cost-effective funds and charities working on some of the most pressing causes.Our effective giving recommendationsThese recommendations are listed roughly in order of convenience and suitability for most donors. The right giving opportunity for you will depend on your particular values and worldview.Donate to expert-managed fundsFor most people, we recommend donating through an expert-led fund that is focused on effectiveness.Funds are both convenient and highly effective. They allow donors to pool their money so that they can support outstanding giving opportunities that are evaluated by expert grantmakers and trusted charity evaluators.This approach is similar to using an investment fund instead of trying to pick which individual stocks will be the best investments. The fund distributes your donation among multiple grantees with the goal of maximising your overall impact. Read more about the advantages of funds.Donors from the US, UK, and the Netherlands can make tax-deductible donations to the following funds using the Giving What We Can donation platform. For other countries, you can read our tax deductibility guide by country — but we still think these funds are likely to be your best option, even if they’re not tax deductible.Here are our top-rated funds grouped by cause area.Learn more about how we choose which charities and funds to recommend.Top-rated funds improving human wellbeingThese funds generally give to organisations taking evidence-based approaches to improve and save lives of people currently alive. Donors who value their donations going to organisations either with a strong track-record or a promising and testable new approach to help people in the current generation can maximise the...]]>
Luke Freeman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:06 None full 4282
3yojNGhTXAydhfkNg_EA EA - Slightly against aligning with neo-luddites by Matthew Barnett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Slightly against aligning with neo-luddites, published by Matthew Barnett on December 26, 2022 on The Effective Altruism Forum.To summarize,When considering whether to delay AI, the choice before us is not merely whether to accelerate or decelerate the technology. We can choose what type of regulations are adopted, and some options are much better than others.Neo-luddites do not fundamentally share our concern about AI x-risk. Thus, their regulations will probably not, except by coincidence, be the type of regulations we should try to install.Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave. So we should likely be careful not endorse a proposal because it's "better than nothing" unless it's also literally the only chance we get to delay AI.In particular, arbitrary data restrictions risk preventing researchers from having access to good data that might help with alignment, potentially outweighing the (arguably) positive effect of slowing down AI progress in general.It appears we are in the midst of a new wave of neo-luddite sentiment.Earlier this month, digital artists staged a mass protest against AI art on ArtStation. A few people are reportedly already getting together to hire a lobbyist to advocate more restrictive IP laws around AI generated content. And anecdotally, I've seen numerous large threads on Twitter in which people criticize the users and creators of AI art.Personally, this sentiment disappoints me. While I sympathize with the artists who will lose their income, I'm not persuaded by the general argument. The value we could get from nearly free, personalized entertainment would be truly massive. In my opinion, it would be a shame if humanity never allowed that value to be unlocked, or restricted its proliferation severely.I expect most readers to agree with me on this point — that it is not worth sacrificing a technologically richer world just to protect workers from losing their income. Yet there is a related view that I have recently heard some of my friends endorse: that nonetheless, it is worth aligning with neo-luddites, incidentally, in order to slow down AI capabilities.On the most basic level, I think this argument makes some sense. If aligning with neo-luddites simply means saying "I agree with delaying AI, but not for that reason" then I would not be very concerned. As it happens, I agree with most of the arguments in Katja Grace's recent post about delaying AI in order to ensure existential AI safety.Yet I worry that some people intend their alliance with neo-luddites to extend much further than this shallow rejoinder. I am concerned that people might work with neo-luddites to advance their specific policies, and particular means of achieving them, in the hopes that it's "better than nothing" and might give us more time to solve alignment.In addition to possibly being mildly dishonest, I'm quite worried such an alliance will be counterproductive on separate, purely consequentialist grounds.If we think of AI progress as a single variable that we can either accelerate or decelerate, with other variables held constant upon intervention, then I agree it could be true that we should do whatever we can to impede the march of progress in the field, no matter what that might look like. Delaying AI gives us more time to reflect, debate, and experiment, which prima facie, I agree, is a good thing.A better model, however, is that there are many factor inputs to AI development. To name the main ones: compute, data, and algorithmic progress. To the extent we block only one avenue of progress, the others will continue. Whether that's good depends critically on the details: what's being blocked, what isn't, and how.One consideration, which has been pointed out by many before...]]>
Matthew Barnett https://forum.effectivealtruism.org/posts/3yojNGhTXAydhfkNg/slightly-against-aligning-with-neo-luddites Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Slightly against aligning with neo-luddites, published by Matthew Barnett on December 26, 2022 on The Effective Altruism Forum.To summarize,When considering whether to delay AI, the choice before us is not merely whether to accelerate or decelerate the technology. We can choose what type of regulations are adopted, and some options are much better than others.Neo-luddites do not fundamentally share our concern about AI x-risk. Thus, their regulations will probably not, except by coincidence, be the type of regulations we should try to install.Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave. So we should likely be careful not endorse a proposal because it's "better than nothing" unless it's also literally the only chance we get to delay AI.In particular, arbitrary data restrictions risk preventing researchers from having access to good data that might help with alignment, potentially outweighing the (arguably) positive effect of slowing down AI progress in general.It appears we are in the midst of a new wave of neo-luddite sentiment.Earlier this month, digital artists staged a mass protest against AI art on ArtStation. A few people are reportedly already getting together to hire a lobbyist to advocate more restrictive IP laws around AI generated content. And anecdotally, I've seen numerous large threads on Twitter in which people criticize the users and creators of AI art.Personally, this sentiment disappoints me. While I sympathize with the artists who will lose their income, I'm not persuaded by the general argument. The value we could get from nearly free, personalized entertainment would be truly massive. In my opinion, it would be a shame if humanity never allowed that value to be unlocked, or restricted its proliferation severely.I expect most readers to agree with me on this point — that it is not worth sacrificing a technologically richer world just to protect workers from losing their income. Yet there is a related view that I have recently heard some of my friends endorse: that nonetheless, it is worth aligning with neo-luddites, incidentally, in order to slow down AI capabilities.On the most basic level, I think this argument makes some sense. If aligning with neo-luddites simply means saying "I agree with delaying AI, but not for that reason" then I would not be very concerned. As it happens, I agree with most of the arguments in Katja Grace's recent post about delaying AI in order to ensure existential AI safety.Yet I worry that some people intend their alliance with neo-luddites to extend much further than this shallow rejoinder. I am concerned that people might work with neo-luddites to advance their specific policies, and particular means of achieving them, in the hopes that it's "better than nothing" and might give us more time to solve alignment.In addition to possibly being mildly dishonest, I'm quite worried such an alliance will be counterproductive on separate, purely consequentialist grounds.If we think of AI progress as a single variable that we can either accelerate or decelerate, with other variables held constant upon intervention, then I agree it could be true that we should do whatever we can to impede the march of progress in the field, no matter what that might look like. Delaying AI gives us more time to reflect, debate, and experiment, which prima facie, I agree, is a good thing.A better model, however, is that there are many factor inputs to AI development. To name the main ones: compute, data, and algorithmic progress. To the extent we block only one avenue of progress, the others will continue. Whether that's good depends critically on the details: what's being blocked, what isn't, and how.One consideration, which has been pointed out by many before...]]>
Tue, 27 Dec 2022 06:10:50 +0000 EA - Slightly against aligning with neo-luddites by Matthew Barnett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Slightly against aligning with neo-luddites, published by Matthew Barnett on December 26, 2022 on The Effective Altruism Forum.To summarize,When considering whether to delay AI, the choice before us is not merely whether to accelerate or decelerate the technology. We can choose what type of regulations are adopted, and some options are much better than others.Neo-luddites do not fundamentally share our concern about AI x-risk. Thus, their regulations will probably not, except by coincidence, be the type of regulations we should try to install.Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave. So we should likely be careful not endorse a proposal because it's "better than nothing" unless it's also literally the only chance we get to delay AI.In particular, arbitrary data restrictions risk preventing researchers from having access to good data that might help with alignment, potentially outweighing the (arguably) positive effect of slowing down AI progress in general.It appears we are in the midst of a new wave of neo-luddite sentiment.Earlier this month, digital artists staged a mass protest against AI art on ArtStation. A few people are reportedly already getting together to hire a lobbyist to advocate more restrictive IP laws around AI generated content. And anecdotally, I've seen numerous large threads on Twitter in which people criticize the users and creators of AI art.Personally, this sentiment disappoints me. While I sympathize with the artists who will lose their income, I'm not persuaded by the general argument. The value we could get from nearly free, personalized entertainment would be truly massive. In my opinion, it would be a shame if humanity never allowed that value to be unlocked, or restricted its proliferation severely.I expect most readers to agree with me on this point — that it is not worth sacrificing a technologically richer world just to protect workers from losing their income. Yet there is a related view that I have recently heard some of my friends endorse: that nonetheless, it is worth aligning with neo-luddites, incidentally, in order to slow down AI capabilities.On the most basic level, I think this argument makes some sense. If aligning with neo-luddites simply means saying "I agree with delaying AI, but not for that reason" then I would not be very concerned. As it happens, I agree with most of the arguments in Katja Grace's recent post about delaying AI in order to ensure existential AI safety.Yet I worry that some people intend their alliance with neo-luddites to extend much further than this shallow rejoinder. I am concerned that people might work with neo-luddites to advance their specific policies, and particular means of achieving them, in the hopes that it's "better than nothing" and might give us more time to solve alignment.In addition to possibly being mildly dishonest, I'm quite worried such an alliance will be counterproductive on separate, purely consequentialist grounds.If we think of AI progress as a single variable that we can either accelerate or decelerate, with other variables held constant upon intervention, then I agree it could be true that we should do whatever we can to impede the march of progress in the field, no matter what that might look like. Delaying AI gives us more time to reflect, debate, and experiment, which prima facie, I agree, is a good thing.A better model, however, is that there are many factor inputs to AI development. To name the main ones: compute, data, and algorithmic progress. To the extent we block only one avenue of progress, the others will continue. Whether that's good depends critically on the details: what's being blocked, what isn't, and how.One consideration, which has been pointed out by many before...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Slightly against aligning with neo-luddites, published by Matthew Barnett on December 26, 2022 on The Effective Altruism Forum.To summarize,When considering whether to delay AI, the choice before us is not merely whether to accelerate or decelerate the technology. We can choose what type of regulations are adopted, and some options are much better than others.Neo-luddites do not fundamentally share our concern about AI x-risk. Thus, their regulations will probably not, except by coincidence, be the type of regulations we should try to install.Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave. So we should likely be careful not endorse a proposal because it's "better than nothing" unless it's also literally the only chance we get to delay AI.In particular, arbitrary data restrictions risk preventing researchers from having access to good data that might help with alignment, potentially outweighing the (arguably) positive effect of slowing down AI progress in general.It appears we are in the midst of a new wave of neo-luddite sentiment.Earlier this month, digital artists staged a mass protest against AI art on ArtStation. A few people are reportedly already getting together to hire a lobbyist to advocate more restrictive IP laws around AI generated content. And anecdotally, I've seen numerous large threads on Twitter in which people criticize the users and creators of AI art.Personally, this sentiment disappoints me. While I sympathize with the artists who will lose their income, I'm not persuaded by the general argument. The value we could get from nearly free, personalized entertainment would be truly massive. In my opinion, it would be a shame if humanity never allowed that value to be unlocked, or restricted its proliferation severely.I expect most readers to agree with me on this point — that it is not worth sacrificing a technologically richer world just to protect workers from losing their income. Yet there is a related view that I have recently heard some of my friends endorse: that nonetheless, it is worth aligning with neo-luddites, incidentally, in order to slow down AI capabilities.On the most basic level, I think this argument makes some sense. If aligning with neo-luddites simply means saying "I agree with delaying AI, but not for that reason" then I would not be very concerned. As it happens, I agree with most of the arguments in Katja Grace's recent post about delaying AI in order to ensure existential AI safety.Yet I worry that some people intend their alliance with neo-luddites to extend much further than this shallow rejoinder. I am concerned that people might work with neo-luddites to advance their specific policies, and particular means of achieving them, in the hopes that it's "better than nothing" and might give us more time to solve alignment.In addition to possibly being mildly dishonest, I'm quite worried such an alliance will be counterproductive on separate, purely consequentialist grounds.If we think of AI progress as a single variable that we can either accelerate or decelerate, with other variables held constant upon intervention, then I agree it could be true that we should do whatever we can to impede the march of progress in the field, no matter what that might look like. Delaying AI gives us more time to reflect, debate, and experiment, which prima facie, I agree, is a good thing.A better model, however, is that there are many factor inputs to AI development. To name the main ones: compute, data, and algorithmic progress. To the extent we block only one avenue of progress, the others will continue. Whether that's good depends critically on the details: what's being blocked, what isn't, and how.One consideration, which has been pointed out by many before...]]>
Matthew Barnett https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:01 None full 4271
bjZymDm7yfPDdzcrP_EA EA - Update on GWWC donation platform by Luke Freeman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on GWWC donation platform, published by Luke Freeman on December 26, 2022 on The Effective Altruism Forum.We've recently received inquiries about funds donated through the donation platform hosted by Giving What We Can (GWWC), in light of the FTX bankruptcy.Both of the legal entities involved with GWWC (Effective Ventures Foundation and Centre for Effective Altruism USA) are financially solvent. These entities have funding sources outside of the FTX Foundation and other FTX-related entities/individuals. The GWWC related entities would be solvent even without the funds received from the FTX-related entities. Accordingly, our plan is to continue to accept and regrant donations.I apologise for any delays in addressing concerns about the FTX crisis. Coordinating across multiple entities and time zones is challenging, but we are committed to transparency and keeping our donors as informed as we can. As everyone can appreciate, this is an evolving situation, and we’re taking all necessary advice and making sure that we comply with all our legal duties. Part of this means that we cannot give as many details as we would like, however much we would like to.Thank you to all for your support and to everyone for all their hard work and dedication in responding to this incredibly difficult time.I am especially appreciative to all the donors who’ve stepped and will help us continue to broaden the base of donors given the funding shortfall experienced by many high-impact charities, nonprofits, and charitable funds.If anyone has any questions please don’t hesitate to ask – that’s what I’m here for.All my best,LukeIn a comment here or by email. Although please bear in mind that it might take time to coordinate a response.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Luke Freeman https://forum.effectivealtruism.org/posts/bjZymDm7yfPDdzcrP/update-on-gwwc-donation-platform Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on GWWC donation platform, published by Luke Freeman on December 26, 2022 on The Effective Altruism Forum.We've recently received inquiries about funds donated through the donation platform hosted by Giving What We Can (GWWC), in light of the FTX bankruptcy.Both of the legal entities involved with GWWC (Effective Ventures Foundation and Centre for Effective Altruism USA) are financially solvent. These entities have funding sources outside of the FTX Foundation and other FTX-related entities/individuals. The GWWC related entities would be solvent even without the funds received from the FTX-related entities. Accordingly, our plan is to continue to accept and regrant donations.I apologise for any delays in addressing concerns about the FTX crisis. Coordinating across multiple entities and time zones is challenging, but we are committed to transparency and keeping our donors as informed as we can. As everyone can appreciate, this is an evolving situation, and we’re taking all necessary advice and making sure that we comply with all our legal duties. Part of this means that we cannot give as many details as we would like, however much we would like to.Thank you to all for your support and to everyone for all their hard work and dedication in responding to this incredibly difficult time.I am especially appreciative to all the donors who’ve stepped and will help us continue to broaden the base of donors given the funding shortfall experienced by many high-impact charities, nonprofits, and charitable funds.If anyone has any questions please don’t hesitate to ask – that’s what I’m here for.All my best,LukeIn a comment here or by email. Although please bear in mind that it might take time to coordinate a response.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 27 Dec 2022 00:15:39 +0000 EA - Update on GWWC donation platform by Luke Freeman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on GWWC donation platform, published by Luke Freeman on December 26, 2022 on The Effective Altruism Forum.We've recently received inquiries about funds donated through the donation platform hosted by Giving What We Can (GWWC), in light of the FTX bankruptcy.Both of the legal entities involved with GWWC (Effective Ventures Foundation and Centre for Effective Altruism USA) are financially solvent. These entities have funding sources outside of the FTX Foundation and other FTX-related entities/individuals. The GWWC related entities would be solvent even without the funds received from the FTX-related entities. Accordingly, our plan is to continue to accept and regrant donations.I apologise for any delays in addressing concerns about the FTX crisis. Coordinating across multiple entities and time zones is challenging, but we are committed to transparency and keeping our donors as informed as we can. As everyone can appreciate, this is an evolving situation, and we’re taking all necessary advice and making sure that we comply with all our legal duties. Part of this means that we cannot give as many details as we would like, however much we would like to.Thank you to all for your support and to everyone for all their hard work and dedication in responding to this incredibly difficult time.I am especially appreciative to all the donors who’ve stepped and will help us continue to broaden the base of donors given the funding shortfall experienced by many high-impact charities, nonprofits, and charitable funds.If anyone has any questions please don’t hesitate to ask – that’s what I’m here for.All my best,LukeIn a comment here or by email. Although please bear in mind that it might take time to coordinate a response.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on GWWC donation platform, published by Luke Freeman on December 26, 2022 on The Effective Altruism Forum.We've recently received inquiries about funds donated through the donation platform hosted by Giving What We Can (GWWC), in light of the FTX bankruptcy.Both of the legal entities involved with GWWC (Effective Ventures Foundation and Centre for Effective Altruism USA) are financially solvent. These entities have funding sources outside of the FTX Foundation and other FTX-related entities/individuals. The GWWC related entities would be solvent even without the funds received from the FTX-related entities. Accordingly, our plan is to continue to accept and regrant donations.I apologise for any delays in addressing concerns about the FTX crisis. Coordinating across multiple entities and time zones is challenging, but we are committed to transparency and keeping our donors as informed as we can. As everyone can appreciate, this is an evolving situation, and we’re taking all necessary advice and making sure that we comply with all our legal duties. Part of this means that we cannot give as many details as we would like, however much we would like to.Thank you to all for your support and to everyone for all their hard work and dedication in responding to this incredibly difficult time.I am especially appreciative to all the donors who’ve stepped and will help us continue to broaden the base of donors given the funding shortfall experienced by many high-impact charities, nonprofits, and charitable funds.If anyone has any questions please don’t hesitate to ask – that’s what I’m here for.All my best,LukeIn a comment here or by email. Although please bear in mind that it might take time to coordinate a response.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Luke Freeman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:56 None full 4272
azL6uPcbCqBfJ37TJ_EA EA - Announcing a subforum for forecasting and estimation by Sharang Phadke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a subforum for forecasting & estimation, published by Sharang Phadke on December 26, 2022 on The Effective Altruism Forum.TL;DR: We’ve launched a dedicated space for discussing forecasting and estimation. Go explore it or join it if you’re interested!As mentioned before, the Forum team is piloting subforums that allow people interested in specific topics in effective altruism to keep up with them, have more casual conversations with a smaller group of people interested in the same topic, and engage more deeply with the topic. We’ve just kicked off our fourth subforum: forecasting & estimation!What goes in a “forecasting & estimation” subforum?Some discussions that we think people might have in this subforum:Predictions for 2023How to forecast, how to get better at forecasting, bottlenecks in forecastingFermi estimationSpecific forecasts and estimatesAnd more!How subforums currently work — give us feedback!It’s early days for subforums, and we expect the features and structure to evolve significantly in the near future, so feedback is particularly useful right now. You can comment here or email us at forum@centreforeffectivealtruism.org.Here’s the current setup:All posts that are relevant to the subforum’s topic will appear in the subforum, even if they’re by someone who’s not part of the subforum.You can post directly to the subforum, so only people who have joined the space or are exploring it will see them. There are two approaches possible:Full posts in the subforum are like normal Forum postsDiscussion threads in the subforum are a bit like shortform commentsThe default ordering of content in the subforum is by a combination of recency and upvotes (“magic”). If someone adds a comment to a post or thread in the subforum, the post will resurface at the top of the discussion.Every subforum has an “organizer.” You can see an outline of what they’ll be doing here.Notifications: if you join the subforum, you’ll have the choice to opt in to notifications of new posts and discussions. You can change this by clicking on the bell icon on the subforum page and setting your preferences accordingly (e.g. by removing “upweight on frontpage”).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sharang Phadke https://forum.effectivealtruism.org/posts/azL6uPcbCqBfJ37TJ/announcing-a-subforum-for-forecasting-and-estimation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a subforum for forecasting & estimation, published by Sharang Phadke on December 26, 2022 on The Effective Altruism Forum.TL;DR: We’ve launched a dedicated space for discussing forecasting and estimation. Go explore it or join it if you’re interested!As mentioned before, the Forum team is piloting subforums that allow people interested in specific topics in effective altruism to keep up with them, have more casual conversations with a smaller group of people interested in the same topic, and engage more deeply with the topic. We’ve just kicked off our fourth subforum: forecasting & estimation!What goes in a “forecasting & estimation” subforum?Some discussions that we think people might have in this subforum:Predictions for 2023How to forecast, how to get better at forecasting, bottlenecks in forecastingFermi estimationSpecific forecasts and estimatesAnd more!How subforums currently work — give us feedback!It’s early days for subforums, and we expect the features and structure to evolve significantly in the near future, so feedback is particularly useful right now. You can comment here or email us at forum@centreforeffectivealtruism.org.Here’s the current setup:All posts that are relevant to the subforum’s topic will appear in the subforum, even if they’re by someone who’s not part of the subforum.You can post directly to the subforum, so only people who have joined the space or are exploring it will see them. There are two approaches possible:Full posts in the subforum are like normal Forum postsDiscussion threads in the subforum are a bit like shortform commentsThe default ordering of content in the subforum is by a combination of recency and upvotes (“magic”). If someone adds a comment to a post or thread in the subforum, the post will resurface at the top of the discussion.Every subforum has an “organizer.” You can see an outline of what they’ll be doing here.Notifications: if you join the subforum, you’ll have the choice to opt in to notifications of new posts and discussions. You can change this by clicking on the bell icon on the subforum page and setting your preferences accordingly (e.g. by removing “upweight on frontpage”).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 26 Dec 2022 21:24:18 +0000 EA - Announcing a subforum for forecasting and estimation by Sharang Phadke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a subforum for forecasting & estimation, published by Sharang Phadke on December 26, 2022 on The Effective Altruism Forum.TL;DR: We’ve launched a dedicated space for discussing forecasting and estimation. Go explore it or join it if you’re interested!As mentioned before, the Forum team is piloting subforums that allow people interested in specific topics in effective altruism to keep up with them, have more casual conversations with a smaller group of people interested in the same topic, and engage more deeply with the topic. We’ve just kicked off our fourth subforum: forecasting & estimation!What goes in a “forecasting & estimation” subforum?Some discussions that we think people might have in this subforum:Predictions for 2023How to forecast, how to get better at forecasting, bottlenecks in forecastingFermi estimationSpecific forecasts and estimatesAnd more!How subforums currently work — give us feedback!It’s early days for subforums, and we expect the features and structure to evolve significantly in the near future, so feedback is particularly useful right now. You can comment here or email us at forum@centreforeffectivealtruism.org.Here’s the current setup:All posts that are relevant to the subforum’s topic will appear in the subforum, even if they’re by someone who’s not part of the subforum.You can post directly to the subforum, so only people who have joined the space or are exploring it will see them. There are two approaches possible:Full posts in the subforum are like normal Forum postsDiscussion threads in the subforum are a bit like shortform commentsThe default ordering of content in the subforum is by a combination of recency and upvotes (“magic”). If someone adds a comment to a post or thread in the subforum, the post will resurface at the top of the discussion.Every subforum has an “organizer.” You can see an outline of what they’ll be doing here.Notifications: if you join the subforum, you’ll have the choice to opt in to notifications of new posts and discussions. You can change this by clicking on the bell icon on the subforum page and setting your preferences accordingly (e.g. by removing “upweight on frontpage”).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a subforum for forecasting & estimation, published by Sharang Phadke on December 26, 2022 on The Effective Altruism Forum.TL;DR: We’ve launched a dedicated space for discussing forecasting and estimation. Go explore it or join it if you’re interested!As mentioned before, the Forum team is piloting subforums that allow people interested in specific topics in effective altruism to keep up with them, have more casual conversations with a smaller group of people interested in the same topic, and engage more deeply with the topic. We’ve just kicked off our fourth subforum: forecasting & estimation!What goes in a “forecasting & estimation” subforum?Some discussions that we think people might have in this subforum:Predictions for 2023How to forecast, how to get better at forecasting, bottlenecks in forecastingFermi estimationSpecific forecasts and estimatesAnd more!How subforums currently work — give us feedback!It’s early days for subforums, and we expect the features and structure to evolve significantly in the near future, so feedback is particularly useful right now. You can comment here or email us at forum@centreforeffectivealtruism.org.Here’s the current setup:All posts that are relevant to the subforum’s topic will appear in the subforum, even if they’re by someone who’s not part of the subforum.You can post directly to the subforum, so only people who have joined the space or are exploring it will see them. There are two approaches possible:Full posts in the subforum are like normal Forum postsDiscussion threads in the subforum are a bit like shortform commentsThe default ordering of content in the subforum is by a combination of recency and upvotes (“magic”). If someone adds a comment to a post or thread in the subforum, the post will resurface at the top of the discussion.Every subforum has an “organizer.” You can see an outline of what they’ll be doing here.Notifications: if you join the subforum, you’ll have the choice to opt in to notifications of new posts and discussions. You can change this by clicking on the bell icon on the subforum page and setting your preferences accordingly (e.g. by removing “upweight on frontpage”).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sharang Phadke https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:21 None full 4264
mE68RKHuTymGW4q7N_EA EA - Vida Plena Predictive Cost-Effectiveness Analysis by Samuel Dupret Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vida Plena Predictive Cost-Effectiveness Analysis, published by Samuel Dupret on December 26, 2022 on The Effective Altruism Forum.Disclaimer: Samuel Dupret volunteered his time to develop the analysis. While Samuel is a current member of the research team of the Happier Lives Institute (HLI), this analysis is an independent project which is not part of HLI’s body of research. Joy Bittner, as the founder of Vida Plena, contributed to the report and was previously the Operations Manager at HLI. This is not a recommendation by HLI.SummaryMultiple experts have stated that mental health is one of the most neglected health issues and should urgently receive more global investment (Walker et al., 2021; WHO, 2022). According to the Global Burden of Disease (Ferrari et al., 2022), mental disorders are “the seventh leading cause” (p. 144) of health burden in the world in 2019. Of the mental health disorders, depression is the one with the highest health burden (Ferrari et al., 2022).Vida Plena (see this post for a presentation) will address the lack of treatment for depression by empowering local people to deliver a cost-effective model of psychotherapy. Community members are trained to treat depression through Group Interpersonal Therapy (g-IPT), which is recommended by the World Health Organization as a first-line treatment for depression in low-income settings (WHO, 2020). The aim of Vida Plena is to replicate in Ecuador the success of StrongMinds (which uses g-IPT in Uganda and Zambia). StrongMinds is recommended by Founders Pledge (Halstead, 2019) and is the Happier Lives Institute’s top recommendation (HLI, 2022).Potential funders of Vida Plena are interested in how much good it can accomplish. Whilst data collection and a pilot study are planned, Vida Plena has only just started so it does not have its own cost-effectiveness data. However, we can give a predictive value by using previous cost-effectiveness analyses (CEA) of StrongMinds (Halstead et al., 2019; McGuire & Plant, 2021b; McGuire et al., 2022a) and converting their results to Vida Plena’s context. We use Vida Plena’s predicted costs, the Ecuadorian average household size for spillovers, and we apply two adjustments (one for the counterfactual treatment gap and one for the probability of success). Once data from Vida Plena itself is collected, we will update the CEA.We estimate it will cost $17 to improve a recipient’s wellbeing by one wellbeing-adjusted life year (WELLBY). For a comparison, this is 8 times more cost-effective than GiveDirectly (a gold standard charity which delivers cash transfers in low- and middle-income countries).Additionally, to allow for comparisons with other health programs, we also produce a disability-adjusted life year (DALY) prediction. It is, however, our opinion that our WELLBY analysis is more robust because it includes a comprehensive evaluation beyond just physical health as well as the impact of household spillovers. Nevertheless, we estimate that it will cost Vida Plena $462 to avert one DALY.While we are not arguing that Vida Plena will be the most cost-effective endeavour, we do expect that it is potentially a very cost-effective charity that will improve human wellbeing in an neglected region - Ecuador and Latin America. No mental health organisations currently operate at scale in Latin America; hence, an important treatment gap exists (PAHO, 2018). This provides a counterfactual argument for creating a new mental health organisation specifically reaching people in the region.What’s the problem and how will Vida Plena address it?Mental illness results in reduced quality of lifeBeyond any other metric used to describe it, the core badness of mental health problems, such as depression, is that the actual lived experience is exceptionally bad. Mental health is o...]]>
Samuel Dupret https://forum.effectivealtruism.org/posts/mE68RKHuTymGW4q7N/vida-plena-predictive-cost-effectiveness-analysis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vida Plena Predictive Cost-Effectiveness Analysis, published by Samuel Dupret on December 26, 2022 on The Effective Altruism Forum.Disclaimer: Samuel Dupret volunteered his time to develop the analysis. While Samuel is a current member of the research team of the Happier Lives Institute (HLI), this analysis is an independent project which is not part of HLI’s body of research. Joy Bittner, as the founder of Vida Plena, contributed to the report and was previously the Operations Manager at HLI. This is not a recommendation by HLI.SummaryMultiple experts have stated that mental health is one of the most neglected health issues and should urgently receive more global investment (Walker et al., 2021; WHO, 2022). According to the Global Burden of Disease (Ferrari et al., 2022), mental disorders are “the seventh leading cause” (p. 144) of health burden in the world in 2019. Of the mental health disorders, depression is the one with the highest health burden (Ferrari et al., 2022).Vida Plena (see this post for a presentation) will address the lack of treatment for depression by empowering local people to deliver a cost-effective model of psychotherapy. Community members are trained to treat depression through Group Interpersonal Therapy (g-IPT), which is recommended by the World Health Organization as a first-line treatment for depression in low-income settings (WHO, 2020). The aim of Vida Plena is to replicate in Ecuador the success of StrongMinds (which uses g-IPT in Uganda and Zambia). StrongMinds is recommended by Founders Pledge (Halstead, 2019) and is the Happier Lives Institute’s top recommendation (HLI, 2022).Potential funders of Vida Plena are interested in how much good it can accomplish. Whilst data collection and a pilot study are planned, Vida Plena has only just started so it does not have its own cost-effectiveness data. However, we can give a predictive value by using previous cost-effectiveness analyses (CEA) of StrongMinds (Halstead et al., 2019; McGuire & Plant, 2021b; McGuire et al., 2022a) and converting their results to Vida Plena’s context. We use Vida Plena’s predicted costs, the Ecuadorian average household size for spillovers, and we apply two adjustments (one for the counterfactual treatment gap and one for the probability of success). Once data from Vida Plena itself is collected, we will update the CEA.We estimate it will cost $17 to improve a recipient’s wellbeing by one wellbeing-adjusted life year (WELLBY). For a comparison, this is 8 times more cost-effective than GiveDirectly (a gold standard charity which delivers cash transfers in low- and middle-income countries).Additionally, to allow for comparisons with other health programs, we also produce a disability-adjusted life year (DALY) prediction. It is, however, our opinion that our WELLBY analysis is more robust because it includes a comprehensive evaluation beyond just physical health as well as the impact of household spillovers. Nevertheless, we estimate that it will cost Vida Plena $462 to avert one DALY.While we are not arguing that Vida Plena will be the most cost-effective endeavour, we do expect that it is potentially a very cost-effective charity that will improve human wellbeing in an neglected region - Ecuador and Latin America. No mental health organisations currently operate at scale in Latin America; hence, an important treatment gap exists (PAHO, 2018). This provides a counterfactual argument for creating a new mental health organisation specifically reaching people in the region.What’s the problem and how will Vida Plena address it?Mental illness results in reduced quality of lifeBeyond any other metric used to describe it, the core badness of mental health problems, such as depression, is that the actual lived experience is exceptionally bad. Mental health is o...]]>
Mon, 26 Dec 2022 10:16:48 +0000 EA - Vida Plena Predictive Cost-Effectiveness Analysis by Samuel Dupret Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vida Plena Predictive Cost-Effectiveness Analysis, published by Samuel Dupret on December 26, 2022 on The Effective Altruism Forum.Disclaimer: Samuel Dupret volunteered his time to develop the analysis. While Samuel is a current member of the research team of the Happier Lives Institute (HLI), this analysis is an independent project which is not part of HLI’s body of research. Joy Bittner, as the founder of Vida Plena, contributed to the report and was previously the Operations Manager at HLI. This is not a recommendation by HLI.SummaryMultiple experts have stated that mental health is one of the most neglected health issues and should urgently receive more global investment (Walker et al., 2021; WHO, 2022). According to the Global Burden of Disease (Ferrari et al., 2022), mental disorders are “the seventh leading cause” (p. 144) of health burden in the world in 2019. Of the mental health disorders, depression is the one with the highest health burden (Ferrari et al., 2022).Vida Plena (see this post for a presentation) will address the lack of treatment for depression by empowering local people to deliver a cost-effective model of psychotherapy. Community members are trained to treat depression through Group Interpersonal Therapy (g-IPT), which is recommended by the World Health Organization as a first-line treatment for depression in low-income settings (WHO, 2020). The aim of Vida Plena is to replicate in Ecuador the success of StrongMinds (which uses g-IPT in Uganda and Zambia). StrongMinds is recommended by Founders Pledge (Halstead, 2019) and is the Happier Lives Institute’s top recommendation (HLI, 2022).Potential funders of Vida Plena are interested in how much good it can accomplish. Whilst data collection and a pilot study are planned, Vida Plena has only just started so it does not have its own cost-effectiveness data. However, we can give a predictive value by using previous cost-effectiveness analyses (CEA) of StrongMinds (Halstead et al., 2019; McGuire & Plant, 2021b; McGuire et al., 2022a) and converting their results to Vida Plena’s context. We use Vida Plena’s predicted costs, the Ecuadorian average household size for spillovers, and we apply two adjustments (one for the counterfactual treatment gap and one for the probability of success). Once data from Vida Plena itself is collected, we will update the CEA.We estimate it will cost $17 to improve a recipient’s wellbeing by one wellbeing-adjusted life year (WELLBY). For a comparison, this is 8 times more cost-effective than GiveDirectly (a gold standard charity which delivers cash transfers in low- and middle-income countries).Additionally, to allow for comparisons with other health programs, we also produce a disability-adjusted life year (DALY) prediction. It is, however, our opinion that our WELLBY analysis is more robust because it includes a comprehensive evaluation beyond just physical health as well as the impact of household spillovers. Nevertheless, we estimate that it will cost Vida Plena $462 to avert one DALY.While we are not arguing that Vida Plena will be the most cost-effective endeavour, we do expect that it is potentially a very cost-effective charity that will improve human wellbeing in an neglected region - Ecuador and Latin America. No mental health organisations currently operate at scale in Latin America; hence, an important treatment gap exists (PAHO, 2018). This provides a counterfactual argument for creating a new mental health organisation specifically reaching people in the region.What’s the problem and how will Vida Plena address it?Mental illness results in reduced quality of lifeBeyond any other metric used to describe it, the core badness of mental health problems, such as depression, is that the actual lived experience is exceptionally bad. Mental health is o...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vida Plena Predictive Cost-Effectiveness Analysis, published by Samuel Dupret on December 26, 2022 on The Effective Altruism Forum.Disclaimer: Samuel Dupret volunteered his time to develop the analysis. While Samuel is a current member of the research team of the Happier Lives Institute (HLI), this analysis is an independent project which is not part of HLI’s body of research. Joy Bittner, as the founder of Vida Plena, contributed to the report and was previously the Operations Manager at HLI. This is not a recommendation by HLI.SummaryMultiple experts have stated that mental health is one of the most neglected health issues and should urgently receive more global investment (Walker et al., 2021; WHO, 2022). According to the Global Burden of Disease (Ferrari et al., 2022), mental disorders are “the seventh leading cause” (p. 144) of health burden in the world in 2019. Of the mental health disorders, depression is the one with the highest health burden (Ferrari et al., 2022).Vida Plena (see this post for a presentation) will address the lack of treatment for depression by empowering local people to deliver a cost-effective model of psychotherapy. Community members are trained to treat depression through Group Interpersonal Therapy (g-IPT), which is recommended by the World Health Organization as a first-line treatment for depression in low-income settings (WHO, 2020). The aim of Vida Plena is to replicate in Ecuador the success of StrongMinds (which uses g-IPT in Uganda and Zambia). StrongMinds is recommended by Founders Pledge (Halstead, 2019) and is the Happier Lives Institute’s top recommendation (HLI, 2022).Potential funders of Vida Plena are interested in how much good it can accomplish. Whilst data collection and a pilot study are planned, Vida Plena has only just started so it does not have its own cost-effectiveness data. However, we can give a predictive value by using previous cost-effectiveness analyses (CEA) of StrongMinds (Halstead et al., 2019; McGuire & Plant, 2021b; McGuire et al., 2022a) and converting their results to Vida Plena’s context. We use Vida Plena’s predicted costs, the Ecuadorian average household size for spillovers, and we apply two adjustments (one for the counterfactual treatment gap and one for the probability of success). Once data from Vida Plena itself is collected, we will update the CEA.We estimate it will cost $17 to improve a recipient’s wellbeing by one wellbeing-adjusted life year (WELLBY). For a comparison, this is 8 times more cost-effective than GiveDirectly (a gold standard charity which delivers cash transfers in low- and middle-income countries).Additionally, to allow for comparisons with other health programs, we also produce a disability-adjusted life year (DALY) prediction. It is, however, our opinion that our WELLBY analysis is more robust because it includes a comprehensive evaluation beyond just physical health as well as the impact of household spillovers. Nevertheless, we estimate that it will cost Vida Plena $462 to avert one DALY.While we are not arguing that Vida Plena will be the most cost-effective endeavour, we do expect that it is potentially a very cost-effective charity that will improve human wellbeing in an neglected region - Ecuador and Latin America. No mental health organisations currently operate at scale in Latin America; hence, an important treatment gap exists (PAHO, 2018). This provides a counterfactual argument for creating a new mental health organisation specifically reaching people in the region.What’s the problem and how will Vida Plena address it?Mental illness results in reduced quality of lifeBeyond any other metric used to describe it, the core badness of mental health problems, such as depression, is that the actual lived experience is exceptionally bad. Mental health is o...]]>
Samuel Dupret https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 22:23 None full 4265
avbMdfiaYkzSqng2E_EA EA - Savoring my moral circle by Angelina Li Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Savoring my moral circle, published by Angelina Li on December 26, 2022 on The Effective Altruism Forum.(Posted in the spirit of draft amnesty)In my day-to-day life, I find it impossible to care as deeply about each individual inhabitant of my moral circle as I would like to. Sometimes, it brings me joy to pause and actively notice how large my moral circle really is.I remember having this thought for the first time while walking down a busy street. Surrounded by a big rush of strangers, I found it striking to remember that I cared about each of the people walking by — that their hopes, desires, and dreams were a part of my utility function. For a couple of minutes, I tried to really notice every person passing by, be curious about the particularities of their lives, and briefly extend the intensity of love I am usually able to hold for only my closest loved ones.When I want to get in touch with this emotion nowadays, I’ll close my eyes and imagine all the inhabitants of my moral circle — the entire universe and all her children — stretching out all around me. I think about the companion chickens and shrimp who live with friends of mine, and how much happier their lives are compared to the lives of most farmed animals. Or I’ll think about the children of the future, and imagine them leading flourishing, happy lives.One of the gifts I’m most grateful EA has given me is helping me notice how BIG the world is, and how much there is to care about.Sometimes, it helps me to remember that I am in other people’s moral circles. That total strangers in the world, who I might not ever meet, care about ME, about me specifically, about my flourishing and happiness. Especially when my mental health is bad and I’m hiding from everyone else in my life, I like to think about a particularly altruistic stranger in the world, and feel comforted to know that, even if they don’t know my name, they care about me and are wishing me well.Merry Christmas (and happy holidays)! I am grateful that you are a part of my moral circle, and I am grateful to be a part of yours <3Thanks to Akash Wasil and Aric Floyd for looking at an earlier version of this :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Angelina Li https://forum.effectivealtruism.org/posts/avbMdfiaYkzSqng2E/savoring-my-moral-circle Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Savoring my moral circle, published by Angelina Li on December 26, 2022 on The Effective Altruism Forum.(Posted in the spirit of draft amnesty)In my day-to-day life, I find it impossible to care as deeply about each individual inhabitant of my moral circle as I would like to. Sometimes, it brings me joy to pause and actively notice how large my moral circle really is.I remember having this thought for the first time while walking down a busy street. Surrounded by a big rush of strangers, I found it striking to remember that I cared about each of the people walking by — that their hopes, desires, and dreams were a part of my utility function. For a couple of minutes, I tried to really notice every person passing by, be curious about the particularities of their lives, and briefly extend the intensity of love I am usually able to hold for only my closest loved ones.When I want to get in touch with this emotion nowadays, I’ll close my eyes and imagine all the inhabitants of my moral circle — the entire universe and all her children — stretching out all around me. I think about the companion chickens and shrimp who live with friends of mine, and how much happier their lives are compared to the lives of most farmed animals. Or I’ll think about the children of the future, and imagine them leading flourishing, happy lives.One of the gifts I’m most grateful EA has given me is helping me notice how BIG the world is, and how much there is to care about.Sometimes, it helps me to remember that I am in other people’s moral circles. That total strangers in the world, who I might not ever meet, care about ME, about me specifically, about my flourishing and happiness. Especially when my mental health is bad and I’m hiding from everyone else in my life, I like to think about a particularly altruistic stranger in the world, and feel comforted to know that, even if they don’t know my name, they care about me and are wishing me well.Merry Christmas (and happy holidays)! I am grateful that you are a part of my moral circle, and I am grateful to be a part of yours <3Thanks to Akash Wasil and Aric Floyd for looking at an earlier version of this :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 26 Dec 2022 09:12:19 +0000 EA - Savoring my moral circle by Angelina Li Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Savoring my moral circle, published by Angelina Li on December 26, 2022 on The Effective Altruism Forum.(Posted in the spirit of draft amnesty)In my day-to-day life, I find it impossible to care as deeply about each individual inhabitant of my moral circle as I would like to. Sometimes, it brings me joy to pause and actively notice how large my moral circle really is.I remember having this thought for the first time while walking down a busy street. Surrounded by a big rush of strangers, I found it striking to remember that I cared about each of the people walking by — that their hopes, desires, and dreams were a part of my utility function. For a couple of minutes, I tried to really notice every person passing by, be curious about the particularities of their lives, and briefly extend the intensity of love I am usually able to hold for only my closest loved ones.When I want to get in touch with this emotion nowadays, I’ll close my eyes and imagine all the inhabitants of my moral circle — the entire universe and all her children — stretching out all around me. I think about the companion chickens and shrimp who live with friends of mine, and how much happier their lives are compared to the lives of most farmed animals. Or I’ll think about the children of the future, and imagine them leading flourishing, happy lives.One of the gifts I’m most grateful EA has given me is helping me notice how BIG the world is, and how much there is to care about.Sometimes, it helps me to remember that I am in other people’s moral circles. That total strangers in the world, who I might not ever meet, care about ME, about me specifically, about my flourishing and happiness. Especially when my mental health is bad and I’m hiding from everyone else in my life, I like to think about a particularly altruistic stranger in the world, and feel comforted to know that, even if they don’t know my name, they care about me and are wishing me well.Merry Christmas (and happy holidays)! I am grateful that you are a part of my moral circle, and I am grateful to be a part of yours <3Thanks to Akash Wasil and Aric Floyd for looking at an earlier version of this :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Savoring my moral circle, published by Angelina Li on December 26, 2022 on The Effective Altruism Forum.(Posted in the spirit of draft amnesty)In my day-to-day life, I find it impossible to care as deeply about each individual inhabitant of my moral circle as I would like to. Sometimes, it brings me joy to pause and actively notice how large my moral circle really is.I remember having this thought for the first time while walking down a busy street. Surrounded by a big rush of strangers, I found it striking to remember that I cared about each of the people walking by — that their hopes, desires, and dreams were a part of my utility function. For a couple of minutes, I tried to really notice every person passing by, be curious about the particularities of their lives, and briefly extend the intensity of love I am usually able to hold for only my closest loved ones.When I want to get in touch with this emotion nowadays, I’ll close my eyes and imagine all the inhabitants of my moral circle — the entire universe and all her children — stretching out all around me. I think about the companion chickens and shrimp who live with friends of mine, and how much happier their lives are compared to the lives of most farmed animals. Or I’ll think about the children of the future, and imagine them leading flourishing, happy lives.One of the gifts I’m most grateful EA has given me is helping me notice how BIG the world is, and how much there is to care about.Sometimes, it helps me to remember that I am in other people’s moral circles. That total strangers in the world, who I might not ever meet, care about ME, about me specifically, about my flourishing and happiness. Especially when my mental health is bad and I’m hiding from everyone else in my life, I like to think about a particularly altruistic stranger in the world, and feel comforted to know that, even if they don’t know my name, they care about me and are wishing me well.Merry Christmas (and happy holidays)! I am grateful that you are a part of my moral circle, and I am grateful to be a part of yours <3Thanks to Akash Wasil and Aric Floyd for looking at an earlier version of this :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Angelina Li https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:10 None full 4260
e7JgpYdbSukpPyfLT_EA EA - Announcing Vida Plena: the first Latin American organization incubated by Charity Entrepreneurship by Joy Bittner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Vida Plena: the first Latin American organization incubated by Charity Entrepreneurship, published by Joy Bittner on December 26, 2022 on The Effective Altruism Forum.Vida Plena (meaning ‘a flourishing life’ in Spanish) is a new nonprofit organization based in Quito, Ecuador. Our mission is to build strong mental health in low-income and refugee communities, who otherwise would have no access to care. We provide evidence-based depression treatment which is highly cost-effective and scalable.In this post, we:Share how we got startedMake the case for why you should care about mental healthDemonstrate the evidence base for the solution we are usingHope to make you really excited about Vida Plena’s goals and upcoming plansWe are proud to highlight that Vida Plena completed the 2022 Charity Entrepreneurship (CE) Incubator program, making it the first CE-incubated organization to operate in Latin America. We (myself, Joy Bittner and my co-founder, Anita Kaslin) are exceptionally grateful for their on-going support and for the network of seed funders who are making this work possible.We are also excited to contribute to the EA community locally, as Vida Plena is one of the very first EA-aligned organizations implementing within Latin America.For the very effective altruist, a TLDR summary:The problem: The World Health Organization (WHO) estimates that 5% of people in Latin America have depression, however, a lack of prioritization means that more than 3 out of 4 people in Latin America go untreated. Ecuador, in particular, has some of the highest rates of depression in the region: causing 8.3% of the total years lived with disability (YLD).The solution: Vida Plena’s intervention is based on an evidence-based therapy which is recommended by the WHO as a first-line treatment for depression in low-income settings (WHO, 2020). This program is highly cost-effective as it is delivered by non-specialist community members, which a systematic review of 27 studies found that non-specialists can effectively administer therapy.How you can know this will be impactful: We are replicating a proven program model. The nonprofit organization StrongMinds, operating in Uganda, has treated 150,000 people using the same model of therapy and 85% of people in their program saw significant reductions in their depression. This success has led StrongMinds to be recommended by Founders Pledge (Halstead, 2019) and is the Happier Lives Institute’s top recommendation (HLI, 2022). Despite an extensive body of evidence demonstrating the effectiveness and impact, Vida Plena is the first to introduce this model to Latin America.Current status: we have certified 10 local community facilitators and are currently running a pilot program with 10 support groups. We’re looking forward to sharing the results of the pilot early 2023, but we estimate when fully optional, it will cost $17 to improve a recipient’s wellbeing by one wellbeing-adjusted life year (WELLBY). For a comparison, this is 8 times more cost-effective than GiveDirectly (see our full predictive CEA here).How you can help:Stay in touch and spread the word: generous introductions and people sharing time-sensitive information have made all this possible. sign-up to stay in touch here so we can send you news about what's happening and ways you can helpSupport us financially: we plan to treat almost 3,000 people in 2023, but have yet to secure the full funding to do so. We would be so grateful if you decide to donate or email us at joy@vidaplena.globalSection 1: How Vida Plena got startedTrue story - I started Vida Plena because I saw a Facebook post. Specifically, I saw a post about a new type of mental health program being run by local ‘grandmas’ in Africa. And the more I read through the multiple published studies demonstr...]]>
Joy Bittner https://forum.effectivealtruism.org/posts/e7JgpYdbSukpPyfLT/announcing-vida-plena-the-first-latin-american-organization Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Vida Plena: the first Latin American organization incubated by Charity Entrepreneurship, published by Joy Bittner on December 26, 2022 on The Effective Altruism Forum.Vida Plena (meaning ‘a flourishing life’ in Spanish) is a new nonprofit organization based in Quito, Ecuador. Our mission is to build strong mental health in low-income and refugee communities, who otherwise would have no access to care. We provide evidence-based depression treatment which is highly cost-effective and scalable.In this post, we:Share how we got startedMake the case for why you should care about mental healthDemonstrate the evidence base for the solution we are usingHope to make you really excited about Vida Plena’s goals and upcoming plansWe are proud to highlight that Vida Plena completed the 2022 Charity Entrepreneurship (CE) Incubator program, making it the first CE-incubated organization to operate in Latin America. We (myself, Joy Bittner and my co-founder, Anita Kaslin) are exceptionally grateful for their on-going support and for the network of seed funders who are making this work possible.We are also excited to contribute to the EA community locally, as Vida Plena is one of the very first EA-aligned organizations implementing within Latin America.For the very effective altruist, a TLDR summary:The problem: The World Health Organization (WHO) estimates that 5% of people in Latin America have depression, however, a lack of prioritization means that more than 3 out of 4 people in Latin America go untreated. Ecuador, in particular, has some of the highest rates of depression in the region: causing 8.3% of the total years lived with disability (YLD).The solution: Vida Plena’s intervention is based on an evidence-based therapy which is recommended by the WHO as a first-line treatment for depression in low-income settings (WHO, 2020). This program is highly cost-effective as it is delivered by non-specialist community members, which a systematic review of 27 studies found that non-specialists can effectively administer therapy.How you can know this will be impactful: We are replicating a proven program model. The nonprofit organization StrongMinds, operating in Uganda, has treated 150,000 people using the same model of therapy and 85% of people in their program saw significant reductions in their depression. This success has led StrongMinds to be recommended by Founders Pledge (Halstead, 2019) and is the Happier Lives Institute’s top recommendation (HLI, 2022). Despite an extensive body of evidence demonstrating the effectiveness and impact, Vida Plena is the first to introduce this model to Latin America.Current status: we have certified 10 local community facilitators and are currently running a pilot program with 10 support groups. We’re looking forward to sharing the results of the pilot early 2023, but we estimate when fully optional, it will cost $17 to improve a recipient’s wellbeing by one wellbeing-adjusted life year (WELLBY). For a comparison, this is 8 times more cost-effective than GiveDirectly (see our full predictive CEA here).How you can help:Stay in touch and spread the word: generous introductions and people sharing time-sensitive information have made all this possible. sign-up to stay in touch here so we can send you news about what's happening and ways you can helpSupport us financially: we plan to treat almost 3,000 people in 2023, but have yet to secure the full funding to do so. We would be so grateful if you decide to donate or email us at joy@vidaplena.globalSection 1: How Vida Plena got startedTrue story - I started Vida Plena because I saw a Facebook post. Specifically, I saw a post about a new type of mental health program being run by local ‘grandmas’ in Africa. And the more I read through the multiple published studies demonstr...]]>
Mon, 26 Dec 2022 09:04:49 +0000 EA - Announcing Vida Plena: the first Latin American organization incubated by Charity Entrepreneurship by Joy Bittner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Vida Plena: the first Latin American organization incubated by Charity Entrepreneurship, published by Joy Bittner on December 26, 2022 on The Effective Altruism Forum.Vida Plena (meaning ‘a flourishing life’ in Spanish) is a new nonprofit organization based in Quito, Ecuador. Our mission is to build strong mental health in low-income and refugee communities, who otherwise would have no access to care. We provide evidence-based depression treatment which is highly cost-effective and scalable.In this post, we:Share how we got startedMake the case for why you should care about mental healthDemonstrate the evidence base for the solution we are usingHope to make you really excited about Vida Plena’s goals and upcoming plansWe are proud to highlight that Vida Plena completed the 2022 Charity Entrepreneurship (CE) Incubator program, making it the first CE-incubated organization to operate in Latin America. We (myself, Joy Bittner and my co-founder, Anita Kaslin) are exceptionally grateful for their on-going support and for the network of seed funders who are making this work possible.We are also excited to contribute to the EA community locally, as Vida Plena is one of the very first EA-aligned organizations implementing within Latin America.For the very effective altruist, a TLDR summary:The problem: The World Health Organization (WHO) estimates that 5% of people in Latin America have depression, however, a lack of prioritization means that more than 3 out of 4 people in Latin America go untreated. Ecuador, in particular, has some of the highest rates of depression in the region: causing 8.3% of the total years lived with disability (YLD).The solution: Vida Plena’s intervention is based on an evidence-based therapy which is recommended by the WHO as a first-line treatment for depression in low-income settings (WHO, 2020). This program is highly cost-effective as it is delivered by non-specialist community members, which a systematic review of 27 studies found that non-specialists can effectively administer therapy.How you can know this will be impactful: We are replicating a proven program model. The nonprofit organization StrongMinds, operating in Uganda, has treated 150,000 people using the same model of therapy and 85% of people in their program saw significant reductions in their depression. This success has led StrongMinds to be recommended by Founders Pledge (Halstead, 2019) and is the Happier Lives Institute’s top recommendation (HLI, 2022). Despite an extensive body of evidence demonstrating the effectiveness and impact, Vida Plena is the first to introduce this model to Latin America.Current status: we have certified 10 local community facilitators and are currently running a pilot program with 10 support groups. We’re looking forward to sharing the results of the pilot early 2023, but we estimate when fully optional, it will cost $17 to improve a recipient’s wellbeing by one wellbeing-adjusted life year (WELLBY). For a comparison, this is 8 times more cost-effective than GiveDirectly (see our full predictive CEA here).How you can help:Stay in touch and spread the word: generous introductions and people sharing time-sensitive information have made all this possible. sign-up to stay in touch here so we can send you news about what's happening and ways you can helpSupport us financially: we plan to treat almost 3,000 people in 2023, but have yet to secure the full funding to do so. We would be so grateful if you decide to donate or email us at joy@vidaplena.globalSection 1: How Vida Plena got startedTrue story - I started Vida Plena because I saw a Facebook post. Specifically, I saw a post about a new type of mental health program being run by local ‘grandmas’ in Africa. And the more I read through the multiple published studies demonstr...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Vida Plena: the first Latin American organization incubated by Charity Entrepreneurship, published by Joy Bittner on December 26, 2022 on The Effective Altruism Forum.Vida Plena (meaning ‘a flourishing life’ in Spanish) is a new nonprofit organization based in Quito, Ecuador. Our mission is to build strong mental health in low-income and refugee communities, who otherwise would have no access to care. We provide evidence-based depression treatment which is highly cost-effective and scalable.In this post, we:Share how we got startedMake the case for why you should care about mental healthDemonstrate the evidence base for the solution we are usingHope to make you really excited about Vida Plena’s goals and upcoming plansWe are proud to highlight that Vida Plena completed the 2022 Charity Entrepreneurship (CE) Incubator program, making it the first CE-incubated organization to operate in Latin America. We (myself, Joy Bittner and my co-founder, Anita Kaslin) are exceptionally grateful for their on-going support and for the network of seed funders who are making this work possible.We are also excited to contribute to the EA community locally, as Vida Plena is one of the very first EA-aligned organizations implementing within Latin America.For the very effective altruist, a TLDR summary:The problem: The World Health Organization (WHO) estimates that 5% of people in Latin America have depression, however, a lack of prioritization means that more than 3 out of 4 people in Latin America go untreated. Ecuador, in particular, has some of the highest rates of depression in the region: causing 8.3% of the total years lived with disability (YLD).The solution: Vida Plena’s intervention is based on an evidence-based therapy which is recommended by the WHO as a first-line treatment for depression in low-income settings (WHO, 2020). This program is highly cost-effective as it is delivered by non-specialist community members, which a systematic review of 27 studies found that non-specialists can effectively administer therapy.How you can know this will be impactful: We are replicating a proven program model. The nonprofit organization StrongMinds, operating in Uganda, has treated 150,000 people using the same model of therapy and 85% of people in their program saw significant reductions in their depression. This success has led StrongMinds to be recommended by Founders Pledge (Halstead, 2019) and is the Happier Lives Institute’s top recommendation (HLI, 2022). Despite an extensive body of evidence demonstrating the effectiveness and impact, Vida Plena is the first to introduce this model to Latin America.Current status: we have certified 10 local community facilitators and are currently running a pilot program with 10 support groups. We’re looking forward to sharing the results of the pilot early 2023, but we estimate when fully optional, it will cost $17 to improve a recipient’s wellbeing by one wellbeing-adjusted life year (WELLBY). For a comparison, this is 8 times more cost-effective than GiveDirectly (see our full predictive CEA here).How you can help:Stay in touch and spread the word: generous introductions and people sharing time-sensitive information have made all this possible. sign-up to stay in touch here so we can send you news about what's happening and ways you can helpSupport us financially: we plan to treat almost 3,000 people in 2023, but have yet to secure the full funding to do so. We would be so grateful if you decide to donate or email us at joy@vidaplena.globalSection 1: How Vida Plena got startedTrue story - I started Vida Plena because I saw a Facebook post. Specifically, I saw a post about a new type of mental health program being run by local ‘grandmas’ in Africa. And the more I read through the multiple published studies demonstr...]]>
Joy Bittner https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 18:03 None full 4259
K8bYdQ5xY923xAbyS_EA EA - Will EU/ESMA financial regulation on ESG Fund Names include animal welfare? Should someone ask them to? by Ramiro Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will EU/ESMA financial regulation on ESG Fund Names include animal welfare? Should someone ask them to?, published by Ramiro on December 25, 2022 on The Effective Altruism Forum.Some people on the Forum have drawn attention to the “Brussels effect” – how EU regulation on some issues, particularly concerning financial markets, tend to influence related standards over the world. Such effect may be relevant for AI Ethics; it will very likely be felt with EU Green New Deal, with impacts environmental regulation (e.g., deforestation-free supply chains), climate change (e.g., a tax on imports to prevent carbon leakage), and, more recently, animal welfare (also, this post).The European Securities and Markets Authority (ESMA) recently issued a public consultation on the requirements for funds to use ESG-related words in their names. Their main proposal:15. ESMA is seeking stakeholder feedback on the following proposals:a. If a fund has any ESG-related words in its name, a minimum proportion of at least 80% of its investments should be used to meet the environmental or social characteristics or sustainable investment objectives in accordance with the binding elements of the investment strategy, as disclosed in Annexes II and III of SFDR Delegated Regulation.b. If a fund has the word “sustainable” or any other term derived from the word “sustainable” in its name, it should allocate within the 80% of investments to “meet the characteristics/objectives” under sub-paragraph a) above at least 50% of minimum proportion of sustainable investments as defined by Article 2(17) 17 of Regulation (EU) 2019/2088 (SFDR) as disclosed in Annexes II and III of SFDR Delegated Regulation.My question: the referred regulations do not mention animal welfare concerns. Will they do it soon? Even without any additions to the norm under consultation? Or should anyone propose that ESMA includes a reference on this subject?The problem (so I see): “animal welfare” has recently become a requirement for corporate Sustainability reporting standards, after the approval of the so-called Corporate Sustainability Reporting Directive (CSRD) - Directive (EU) 2022/2464 of the European Parliament and of the Council of 14 December 2022 amending Directive 2013/34/EU, as regards corporate sustainability reporting. This includes a Chapter 6a in Directive 2013/34/EU, with article 29b stating that:The sustainability reporting standards shall, taking into account the subject matter of a particular sustainability reporting standard:c) specify the information that undertakings are to disclose about the following governance factor:business ethics and corporate culture, including anti-corruption and anti-bribery, the protection of whistleblowers and animal welfare;I'm no expert in EU policy, and I didn't investigate this thoroughly - hence the question. But I have some experience with financial supervision and regulation, and, as a rule of thumb, the more explicit you are (especially for new legal content), the better.So, would it be helpful if someone send an opinion asking ESMA to add a paragraph to the future standard explicitly referring Directive 2013/34/EU?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ramiro https://forum.effectivealtruism.org/posts/K8bYdQ5xY923xAbyS/will-eu-esma-financial-regulation-on-esg-fund-names-include Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will EU/ESMA financial regulation on ESG Fund Names include animal welfare? Should someone ask them to?, published by Ramiro on December 25, 2022 on The Effective Altruism Forum.Some people on the Forum have drawn attention to the “Brussels effect” – how EU regulation on some issues, particularly concerning financial markets, tend to influence related standards over the world. Such effect may be relevant for AI Ethics; it will very likely be felt with EU Green New Deal, with impacts environmental regulation (e.g., deforestation-free supply chains), climate change (e.g., a tax on imports to prevent carbon leakage), and, more recently, animal welfare (also, this post).The European Securities and Markets Authority (ESMA) recently issued a public consultation on the requirements for funds to use ESG-related words in their names. Their main proposal:15. ESMA is seeking stakeholder feedback on the following proposals:a. If a fund has any ESG-related words in its name, a minimum proportion of at least 80% of its investments should be used to meet the environmental or social characteristics or sustainable investment objectives in accordance with the binding elements of the investment strategy, as disclosed in Annexes II and III of SFDR Delegated Regulation.b. If a fund has the word “sustainable” or any other term derived from the word “sustainable” in its name, it should allocate within the 80% of investments to “meet the characteristics/objectives” under sub-paragraph a) above at least 50% of minimum proportion of sustainable investments as defined by Article 2(17) 17 of Regulation (EU) 2019/2088 (SFDR) as disclosed in Annexes II and III of SFDR Delegated Regulation.My question: the referred regulations do not mention animal welfare concerns. Will they do it soon? Even without any additions to the norm under consultation? Or should anyone propose that ESMA includes a reference on this subject?The problem (so I see): “animal welfare” has recently become a requirement for corporate Sustainability reporting standards, after the approval of the so-called Corporate Sustainability Reporting Directive (CSRD) - Directive (EU) 2022/2464 of the European Parliament and of the Council of 14 December 2022 amending Directive 2013/34/EU, as regards corporate sustainability reporting. This includes a Chapter 6a in Directive 2013/34/EU, with article 29b stating that:The sustainability reporting standards shall, taking into account the subject matter of a particular sustainability reporting standard:c) specify the information that undertakings are to disclose about the following governance factor:business ethics and corporate culture, including anti-corruption and anti-bribery, the protection of whistleblowers and animal welfare;I'm no expert in EU policy, and I didn't investigate this thoroughly - hence the question. But I have some experience with financial supervision and regulation, and, as a rule of thumb, the more explicit you are (especially for new legal content), the better.So, would it be helpful if someone send an opinion asking ESMA to add a paragraph to the future standard explicitly referring Directive 2013/34/EU?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 26 Dec 2022 00:39:01 +0000 EA - Will EU/ESMA financial regulation on ESG Fund Names include animal welfare? Should someone ask them to? by Ramiro Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will EU/ESMA financial regulation on ESG Fund Names include animal welfare? Should someone ask them to?, published by Ramiro on December 25, 2022 on The Effective Altruism Forum.Some people on the Forum have drawn attention to the “Brussels effect” – how EU regulation on some issues, particularly concerning financial markets, tend to influence related standards over the world. Such effect may be relevant for AI Ethics; it will very likely be felt with EU Green New Deal, with impacts environmental regulation (e.g., deforestation-free supply chains), climate change (e.g., a tax on imports to prevent carbon leakage), and, more recently, animal welfare (also, this post).The European Securities and Markets Authority (ESMA) recently issued a public consultation on the requirements for funds to use ESG-related words in their names. Their main proposal:15. ESMA is seeking stakeholder feedback on the following proposals:a. If a fund has any ESG-related words in its name, a minimum proportion of at least 80% of its investments should be used to meet the environmental or social characteristics or sustainable investment objectives in accordance with the binding elements of the investment strategy, as disclosed in Annexes II and III of SFDR Delegated Regulation.b. If a fund has the word “sustainable” or any other term derived from the word “sustainable” in its name, it should allocate within the 80% of investments to “meet the characteristics/objectives” under sub-paragraph a) above at least 50% of minimum proportion of sustainable investments as defined by Article 2(17) 17 of Regulation (EU) 2019/2088 (SFDR) as disclosed in Annexes II and III of SFDR Delegated Regulation.My question: the referred regulations do not mention animal welfare concerns. Will they do it soon? Even without any additions to the norm under consultation? Or should anyone propose that ESMA includes a reference on this subject?The problem (so I see): “animal welfare” has recently become a requirement for corporate Sustainability reporting standards, after the approval of the so-called Corporate Sustainability Reporting Directive (CSRD) - Directive (EU) 2022/2464 of the European Parliament and of the Council of 14 December 2022 amending Directive 2013/34/EU, as regards corporate sustainability reporting. This includes a Chapter 6a in Directive 2013/34/EU, with article 29b stating that:The sustainability reporting standards shall, taking into account the subject matter of a particular sustainability reporting standard:c) specify the information that undertakings are to disclose about the following governance factor:business ethics and corporate culture, including anti-corruption and anti-bribery, the protection of whistleblowers and animal welfare;I'm no expert in EU policy, and I didn't investigate this thoroughly - hence the question. But I have some experience with financial supervision and regulation, and, as a rule of thumb, the more explicit you are (especially for new legal content), the better.So, would it be helpful if someone send an opinion asking ESMA to add a paragraph to the future standard explicitly referring Directive 2013/34/EU?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will EU/ESMA financial regulation on ESG Fund Names include animal welfare? Should someone ask them to?, published by Ramiro on December 25, 2022 on The Effective Altruism Forum.Some people on the Forum have drawn attention to the “Brussels effect” – how EU regulation on some issues, particularly concerning financial markets, tend to influence related standards over the world. Such effect may be relevant for AI Ethics; it will very likely be felt with EU Green New Deal, with impacts environmental regulation (e.g., deforestation-free supply chains), climate change (e.g., a tax on imports to prevent carbon leakage), and, more recently, animal welfare (also, this post).The European Securities and Markets Authority (ESMA) recently issued a public consultation on the requirements for funds to use ESG-related words in their names. Their main proposal:15. ESMA is seeking stakeholder feedback on the following proposals:a. If a fund has any ESG-related words in its name, a minimum proportion of at least 80% of its investments should be used to meet the environmental or social characteristics or sustainable investment objectives in accordance with the binding elements of the investment strategy, as disclosed in Annexes II and III of SFDR Delegated Regulation.b. If a fund has the word “sustainable” or any other term derived from the word “sustainable” in its name, it should allocate within the 80% of investments to “meet the characteristics/objectives” under sub-paragraph a) above at least 50% of minimum proportion of sustainable investments as defined by Article 2(17) 17 of Regulation (EU) 2019/2088 (SFDR) as disclosed in Annexes II and III of SFDR Delegated Regulation.My question: the referred regulations do not mention animal welfare concerns. Will they do it soon? Even without any additions to the norm under consultation? Or should anyone propose that ESMA includes a reference on this subject?The problem (so I see): “animal welfare” has recently become a requirement for corporate Sustainability reporting standards, after the approval of the so-called Corporate Sustainability Reporting Directive (CSRD) - Directive (EU) 2022/2464 of the European Parliament and of the Council of 14 December 2022 amending Directive 2013/34/EU, as regards corporate sustainability reporting. This includes a Chapter 6a in Directive 2013/34/EU, with article 29b stating that:The sustainability reporting standards shall, taking into account the subject matter of a particular sustainability reporting standard:c) specify the information that undertakings are to disclose about the following governance factor:business ethics and corporate culture, including anti-corruption and anti-bribery, the protection of whistleblowers and animal welfare;I'm no expert in EU policy, and I didn't investigate this thoroughly - hence the question. But I have some experience with financial supervision and regulation, and, as a rule of thumb, the more explicit you are (especially for new legal content), the better.So, would it be helpful if someone send an opinion asking ESMA to add a paragraph to the future standard explicitly referring Directive 2013/34/EU?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ramiro https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:29 None full 4261
5mghcxCabxuaK4WTs_EA EA - YCombinator fraud rates by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: YCombinator fraud rates, published by Ben West on December 25, 2022 on The Effective Altruism Forum.SummaryI estimate that 1-2% of $100M+ YCombinator-backed companies have faced serious allegations of fraud.DetailsI’m interested in better understanding the base rates of fraud in high-growth companies. YCombinator is a convenience sample for “high-growth companies”, as they are relatively public about who they back, and act as a filter for e.g. “obviously fraudulent” companies.I was able to find 3 companies which closed due to (alleged) fraud, two more which had their business substantially impacted by fraud claims, and three which had mild business impacts from fraud claims.YCombinator has incubated 3,951 companies, which gives a naïve rate of ~0.1% of companies having major allegations of fraud.I estimate that there are around 400 YCombinator-incubated companies with a valuation over $100M. 3 of the 5 companies with major fraud had $100M+ valuations, implying that 3/400 = 1% of $100M+ companies have major fraud charges. I think that this 1% number is probably more useful than the previous 0.1% number, because I expect that many smaller companies were fraudulent but no one bothered to charge them or report on it.I also wouldn’t be surprised to learn if I missed some cases, though I’d be surprised if I missed more than half of the $100M+ cases.So overall, I estimate that maybe 1-2% of $100M+ YCombinator-backed companies have faced serious charges of fraud.Additional CommentsIt’s notable that almost half of the companies on this list are financial services, which I believe makes them overrepresented.I expect that YCombinator is better at filtering out fraudulent companies than other angel investors, but they are also more likely to generate highly valuable companies. I’m not sure how these factors balance out, but would guess that they are roughly equal, implying that 1-2% of all high-growth $100M+ companies face serious fraud charges.DataCompanyValuationSeverityNotesuBiome$298MFatal“the company shut down in 2019 following an investigation into possible insurance fraud. Since 2021, the FBI has considered [the founders] to be fugitives.” (Wikipedia)LendUp$500MFatal"We are shuttering the lending operations of this fintech for repeatedly lying and illegally cheating its customers," said CFPB Director Rohit Chopra. (Reuters)Stablegains$6-15MFatalInvested customer funds into a cryptocurrency which collapsed. Customers allege this was deceptive, though I can’t find any evidence of them actually being charged (yet). (The Defiant)Synapsica$8-20MMajorTwo cofounders charged with defrauding the third and other investors. 1/3 of employees were laid off, which the company claims is unrelated to the fraud charges.Flutterwave$3BMajorAssets frozen over claimed violations of anti-money laundering laws. The company denies misconduct and seems to still be operating at a large scale.I am unfamiliar with the Nigerian legal/financial system, and it’s unclear to me how serious these charges are.Bikayi$21-50MMildSalespeople forged signatures of customers. It looks to me like the company separately encountered financial trouble and this fraud in particular was maybe just used as a cover to fire people.Momentus$65MMildSEC settled charges that the CEO misled investors about his immigration status and the results of a test of their technology during an attempt to take the company public. The company has now successfully gone public; my impression is that the CEO was fired but otherwise the company is fine.Modern Health$1.2BMildOne cofounder sued the other for defrauding investors by not disclosing incentives provided to customers which may have boosted sales.It seems like few media venues have picked up on the story, and so far the impact on their business seems mild.Dreamworld$1....]]>
Ben West https://forum.effectivealtruism.org/posts/5mghcxCabxuaK4WTs/ycombinator-fraud-rates Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: YCombinator fraud rates, published by Ben West on December 25, 2022 on The Effective Altruism Forum.SummaryI estimate that 1-2% of $100M+ YCombinator-backed companies have faced serious allegations of fraud.DetailsI’m interested in better understanding the base rates of fraud in high-growth companies. YCombinator is a convenience sample for “high-growth companies”, as they are relatively public about who they back, and act as a filter for e.g. “obviously fraudulent” companies.I was able to find 3 companies which closed due to (alleged) fraud, two more which had their business substantially impacted by fraud claims, and three which had mild business impacts from fraud claims.YCombinator has incubated 3,951 companies, which gives a naïve rate of ~0.1% of companies having major allegations of fraud.I estimate that there are around 400 YCombinator-incubated companies with a valuation over $100M. 3 of the 5 companies with major fraud had $100M+ valuations, implying that 3/400 = 1% of $100M+ companies have major fraud charges. I think that this 1% number is probably more useful than the previous 0.1% number, because I expect that many smaller companies were fraudulent but no one bothered to charge them or report on it.I also wouldn’t be surprised to learn if I missed some cases, though I’d be surprised if I missed more than half of the $100M+ cases.So overall, I estimate that maybe 1-2% of $100M+ YCombinator-backed companies have faced serious charges of fraud.Additional CommentsIt’s notable that almost half of the companies on this list are financial services, which I believe makes them overrepresented.I expect that YCombinator is better at filtering out fraudulent companies than other angel investors, but they are also more likely to generate highly valuable companies. I’m not sure how these factors balance out, but would guess that they are roughly equal, implying that 1-2% of all high-growth $100M+ companies face serious fraud charges.DataCompanyValuationSeverityNotesuBiome$298MFatal“the company shut down in 2019 following an investigation into possible insurance fraud. Since 2021, the FBI has considered [the founders] to be fugitives.” (Wikipedia)LendUp$500MFatal"We are shuttering the lending operations of this fintech for repeatedly lying and illegally cheating its customers," said CFPB Director Rohit Chopra. (Reuters)Stablegains$6-15MFatalInvested customer funds into a cryptocurrency which collapsed. Customers allege this was deceptive, though I can’t find any evidence of them actually being charged (yet). (The Defiant)Synapsica$8-20MMajorTwo cofounders charged with defrauding the third and other investors. 1/3 of employees were laid off, which the company claims is unrelated to the fraud charges.Flutterwave$3BMajorAssets frozen over claimed violations of anti-money laundering laws. The company denies misconduct and seems to still be operating at a large scale.I am unfamiliar with the Nigerian legal/financial system, and it’s unclear to me how serious these charges are.Bikayi$21-50MMildSalespeople forged signatures of customers. It looks to me like the company separately encountered financial trouble and this fraud in particular was maybe just used as a cover to fire people.Momentus$65MMildSEC settled charges that the CEO misled investors about his immigration status and the results of a test of their technology during an attempt to take the company public. The company has now successfully gone public; my impression is that the CEO was fired but otherwise the company is fine.Modern Health$1.2BMildOne cofounder sued the other for defrauding investors by not disclosing incentives provided to customers which may have boosted sales.It seems like few media venues have picked up on the story, and so far the impact on their business seems mild.Dreamworld$1....]]>
Sun, 25 Dec 2022 19:05:50 +0000 EA - YCombinator fraud rates by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: YCombinator fraud rates, published by Ben West on December 25, 2022 on The Effective Altruism Forum.SummaryI estimate that 1-2% of $100M+ YCombinator-backed companies have faced serious allegations of fraud.DetailsI’m interested in better understanding the base rates of fraud in high-growth companies. YCombinator is a convenience sample for “high-growth companies”, as they are relatively public about who they back, and act as a filter for e.g. “obviously fraudulent” companies.I was able to find 3 companies which closed due to (alleged) fraud, two more which had their business substantially impacted by fraud claims, and three which had mild business impacts from fraud claims.YCombinator has incubated 3,951 companies, which gives a naïve rate of ~0.1% of companies having major allegations of fraud.I estimate that there are around 400 YCombinator-incubated companies with a valuation over $100M. 3 of the 5 companies with major fraud had $100M+ valuations, implying that 3/400 = 1% of $100M+ companies have major fraud charges. I think that this 1% number is probably more useful than the previous 0.1% number, because I expect that many smaller companies were fraudulent but no one bothered to charge them or report on it.I also wouldn’t be surprised to learn if I missed some cases, though I’d be surprised if I missed more than half of the $100M+ cases.So overall, I estimate that maybe 1-2% of $100M+ YCombinator-backed companies have faced serious charges of fraud.Additional CommentsIt’s notable that almost half of the companies on this list are financial services, which I believe makes them overrepresented.I expect that YCombinator is better at filtering out fraudulent companies than other angel investors, but they are also more likely to generate highly valuable companies. I’m not sure how these factors balance out, but would guess that they are roughly equal, implying that 1-2% of all high-growth $100M+ companies face serious fraud charges.DataCompanyValuationSeverityNotesuBiome$298MFatal“the company shut down in 2019 following an investigation into possible insurance fraud. Since 2021, the FBI has considered [the founders] to be fugitives.” (Wikipedia)LendUp$500MFatal"We are shuttering the lending operations of this fintech for repeatedly lying and illegally cheating its customers," said CFPB Director Rohit Chopra. (Reuters)Stablegains$6-15MFatalInvested customer funds into a cryptocurrency which collapsed. Customers allege this was deceptive, though I can’t find any evidence of them actually being charged (yet). (The Defiant)Synapsica$8-20MMajorTwo cofounders charged with defrauding the third and other investors. 1/3 of employees were laid off, which the company claims is unrelated to the fraud charges.Flutterwave$3BMajorAssets frozen over claimed violations of anti-money laundering laws. The company denies misconduct and seems to still be operating at a large scale.I am unfamiliar with the Nigerian legal/financial system, and it’s unclear to me how serious these charges are.Bikayi$21-50MMildSalespeople forged signatures of customers. It looks to me like the company separately encountered financial trouble and this fraud in particular was maybe just used as a cover to fire people.Momentus$65MMildSEC settled charges that the CEO misled investors about his immigration status and the results of a test of their technology during an attempt to take the company public. The company has now successfully gone public; my impression is that the CEO was fired but otherwise the company is fine.Modern Health$1.2BMildOne cofounder sued the other for defrauding investors by not disclosing incentives provided to customers which may have boosted sales.It seems like few media venues have picked up on the story, and so far the impact on their business seems mild.Dreamworld$1....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: YCombinator fraud rates, published by Ben West on December 25, 2022 on The Effective Altruism Forum.SummaryI estimate that 1-2% of $100M+ YCombinator-backed companies have faced serious allegations of fraud.DetailsI’m interested in better understanding the base rates of fraud in high-growth companies. YCombinator is a convenience sample for “high-growth companies”, as they are relatively public about who they back, and act as a filter for e.g. “obviously fraudulent” companies.I was able to find 3 companies which closed due to (alleged) fraud, two more which had their business substantially impacted by fraud claims, and three which had mild business impacts from fraud claims.YCombinator has incubated 3,951 companies, which gives a naïve rate of ~0.1% of companies having major allegations of fraud.I estimate that there are around 400 YCombinator-incubated companies with a valuation over $100M. 3 of the 5 companies with major fraud had $100M+ valuations, implying that 3/400 = 1% of $100M+ companies have major fraud charges. I think that this 1% number is probably more useful than the previous 0.1% number, because I expect that many smaller companies were fraudulent but no one bothered to charge them or report on it.I also wouldn’t be surprised to learn if I missed some cases, though I’d be surprised if I missed more than half of the $100M+ cases.So overall, I estimate that maybe 1-2% of $100M+ YCombinator-backed companies have faced serious charges of fraud.Additional CommentsIt’s notable that almost half of the companies on this list are financial services, which I believe makes them overrepresented.I expect that YCombinator is better at filtering out fraudulent companies than other angel investors, but they are also more likely to generate highly valuable companies. I’m not sure how these factors balance out, but would guess that they are roughly equal, implying that 1-2% of all high-growth $100M+ companies face serious fraud charges.DataCompanyValuationSeverityNotesuBiome$298MFatal“the company shut down in 2019 following an investigation into possible insurance fraud. Since 2021, the FBI has considered [the founders] to be fugitives.” (Wikipedia)LendUp$500MFatal"We are shuttering the lending operations of this fintech for repeatedly lying and illegally cheating its customers," said CFPB Director Rohit Chopra. (Reuters)Stablegains$6-15MFatalInvested customer funds into a cryptocurrency which collapsed. Customers allege this was deceptive, though I can’t find any evidence of them actually being charged (yet). (The Defiant)Synapsica$8-20MMajorTwo cofounders charged with defrauding the third and other investors. 1/3 of employees were laid off, which the company claims is unrelated to the fraud charges.Flutterwave$3BMajorAssets frozen over claimed violations of anti-money laundering laws. The company denies misconduct and seems to still be operating at a large scale.I am unfamiliar with the Nigerian legal/financial system, and it’s unclear to me how serious these charges are.Bikayi$21-50MMildSalespeople forged signatures of customers. It looks to me like the company separately encountered financial trouble and this fraud in particular was maybe just used as a cover to fire people.Momentus$65MMildSEC settled charges that the CEO misled investors about his immigration status and the results of a test of their technology during an attempt to take the company public. The company has now successfully gone public; my impression is that the CEO was fired but otherwise the company is fine.Modern Health$1.2BMildOne cofounder sued the other for defrauding investors by not disclosing incentives provided to customers which may have boosted sales.It seems like few media venues have picked up on the story, and so far the impact on their business seems mild.Dreamworld$1....]]>
Ben West https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:02 None full 4257
rvEMFn2tJZChakHf8_EA EA - May The Factory Farms Burn by Omnizoid Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: May The Factory Farms Burn, published by Omnizoid on December 25, 2022 on The Effective Altruism Forum.Cross-post of this.“King Lear, late at night on the cliffs asks the blind Earl of Gloucester“How do you see the world?”And the blind man Gloucester replies “I see it feelingly”.Shouldn’t we all?Animals must be off the menu – because tonight they are screaming in terror in the slaughterhouse, in crates, and cages. Vile ignoble gulags of Despair.I heard the screams of my dying father as his body was ravaged by the cancer that killed him. And I realised I had heard these screams before.In the slaughterhouse, eyes stabbed out and tendons slashed, on the cattle ships to the Middle East and the dying mother whale as a Japanese harpoon explodes in her brain as she calls out to her calf.Their cries were the cries of my father.I discovered when we suffer, we suffer as equals.And in their capacity to suffer, a dog is a pig is a bear. . . . . . is a boy.”Phillip Wollen in a speech I’d highly recommendWhen a quarter million birds are stuffed into a single shed, unable even to flap their wings, when more than a million pigs inhabit a single farm, never once stepping into the light of day, when every year tens of millions of creatures go to their death without knowing the least measure of human kindness, it is time to question old assumptions, to ask what we are doing and what spirit drives us on.Matthew Scully “Dominion”Some ethical questions are difficult. One can’t be particularly confident in their views about population ethics, most political issues, normative ethics, etc. Yet the morality of our current consumption of meat is not one of those difficult issues. It is as close to a no-brainer as you get in normative ethics. The question we face is whether we should torture billions of sentient beings before killing them because we like the taste of their flesh.Whether pigs should be roasted to death, hot steam choking and burning them to death. Whether pigs who are smarter than dogs should be forced to give birth in tiny crates where they can’t move around. Whether they should be castrated with no anesthetic, have their tails and teeth cut off with a sharp object and no anesthetic, whether parts of their ears should be cut off for identification purposes, cruelly cramped together during transport unable to stand or move around, and whether they should have a knife dragged across their throat. All of these are the price we pay for cheap pig flesh.Whether chickens should be hung upside down by one leg before being brought on a conveyor to a knife being dragged across their throat—the only saving grace being an error prone electric bath that knocks them unconscious; sometimes. Of course, the combination of blade and electric bath is sufficiently error-prone to boil to death half a million or so sentient beings every year. It becomes abundantly clear that we’re acting horrendously when animal advocates are hoping that we’ll gas animals to death—kill them the way the Nazi’s did, for the ways we do it now are far crueler. Whether chickens should be crammed in a space far smaller than a sheet of paper, living their whole lives without seeing the sun, except in the moments before they’re transferred to their grisly slaughter. We do this to about 80 billion land animals and trillions of sea creatures.Are cheap eggs worth forcing sentient beings to live in shit, with the smell of feces being the only thing detectable from inside the barn? Forcing sentient beings to get osteoporosis and heart disease, all in an attempt to reduce the cost of eggs. This barrage of shit is not limited to egg-laying hens—it’s why a full 80% of pigs have pneumonia upon slaughter. A life lived in so much shit that it causes pneumonia the vast majority of the time is not how we ought to treat...]]>
Omnizoid https://forum.effectivealtruism.org/posts/rvEMFn2tJZChakHf8/may-the-factory-farms-burn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: May The Factory Farms Burn, published by Omnizoid on December 25, 2022 on The Effective Altruism Forum.Cross-post of this.“King Lear, late at night on the cliffs asks the blind Earl of Gloucester“How do you see the world?”And the blind man Gloucester replies “I see it feelingly”.Shouldn’t we all?Animals must be off the menu – because tonight they are screaming in terror in the slaughterhouse, in crates, and cages. Vile ignoble gulags of Despair.I heard the screams of my dying father as his body was ravaged by the cancer that killed him. And I realised I had heard these screams before.In the slaughterhouse, eyes stabbed out and tendons slashed, on the cattle ships to the Middle East and the dying mother whale as a Japanese harpoon explodes in her brain as she calls out to her calf.Their cries were the cries of my father.I discovered when we suffer, we suffer as equals.And in their capacity to suffer, a dog is a pig is a bear. . . . . . is a boy.”Phillip Wollen in a speech I’d highly recommendWhen a quarter million birds are stuffed into a single shed, unable even to flap their wings, when more than a million pigs inhabit a single farm, never once stepping into the light of day, when every year tens of millions of creatures go to their death without knowing the least measure of human kindness, it is time to question old assumptions, to ask what we are doing and what spirit drives us on.Matthew Scully “Dominion”Some ethical questions are difficult. One can’t be particularly confident in their views about population ethics, most political issues, normative ethics, etc. Yet the morality of our current consumption of meat is not one of those difficult issues. It is as close to a no-brainer as you get in normative ethics. The question we face is whether we should torture billions of sentient beings before killing them because we like the taste of their flesh.Whether pigs should be roasted to death, hot steam choking and burning them to death. Whether pigs who are smarter than dogs should be forced to give birth in tiny crates where they can’t move around. Whether they should be castrated with no anesthetic, have their tails and teeth cut off with a sharp object and no anesthetic, whether parts of their ears should be cut off for identification purposes, cruelly cramped together during transport unable to stand or move around, and whether they should have a knife dragged across their throat. All of these are the price we pay for cheap pig flesh.Whether chickens should be hung upside down by one leg before being brought on a conveyor to a knife being dragged across their throat—the only saving grace being an error prone electric bath that knocks them unconscious; sometimes. Of course, the combination of blade and electric bath is sufficiently error-prone to boil to death half a million or so sentient beings every year. It becomes abundantly clear that we’re acting horrendously when animal advocates are hoping that we’ll gas animals to death—kill them the way the Nazi’s did, for the ways we do it now are far crueler. Whether chickens should be crammed in a space far smaller than a sheet of paper, living their whole lives without seeing the sun, except in the moments before they’re transferred to their grisly slaughter. We do this to about 80 billion land animals and trillions of sea creatures.Are cheap eggs worth forcing sentient beings to live in shit, with the smell of feces being the only thing detectable from inside the barn? Forcing sentient beings to get osteoporosis and heart disease, all in an attempt to reduce the cost of eggs. This barrage of shit is not limited to egg-laying hens—it’s why a full 80% of pigs have pneumonia upon slaughter. A life lived in so much shit that it causes pneumonia the vast majority of the time is not how we ought to treat...]]>
Sun, 25 Dec 2022 12:47:43 +0000 EA - May The Factory Farms Burn by Omnizoid Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: May The Factory Farms Burn, published by Omnizoid on December 25, 2022 on The Effective Altruism Forum.Cross-post of this.“King Lear, late at night on the cliffs asks the blind Earl of Gloucester“How do you see the world?”And the blind man Gloucester replies “I see it feelingly”.Shouldn’t we all?Animals must be off the menu – because tonight they are screaming in terror in the slaughterhouse, in crates, and cages. Vile ignoble gulags of Despair.I heard the screams of my dying father as his body was ravaged by the cancer that killed him. And I realised I had heard these screams before.In the slaughterhouse, eyes stabbed out and tendons slashed, on the cattle ships to the Middle East and the dying mother whale as a Japanese harpoon explodes in her brain as she calls out to her calf.Their cries were the cries of my father.I discovered when we suffer, we suffer as equals.And in their capacity to suffer, a dog is a pig is a bear. . . . . . is a boy.”Phillip Wollen in a speech I’d highly recommendWhen a quarter million birds are stuffed into a single shed, unable even to flap their wings, when more than a million pigs inhabit a single farm, never once stepping into the light of day, when every year tens of millions of creatures go to their death without knowing the least measure of human kindness, it is time to question old assumptions, to ask what we are doing and what spirit drives us on.Matthew Scully “Dominion”Some ethical questions are difficult. One can’t be particularly confident in their views about population ethics, most political issues, normative ethics, etc. Yet the morality of our current consumption of meat is not one of those difficult issues. It is as close to a no-brainer as you get in normative ethics. The question we face is whether we should torture billions of sentient beings before killing them because we like the taste of their flesh.Whether pigs should be roasted to death, hot steam choking and burning them to death. Whether pigs who are smarter than dogs should be forced to give birth in tiny crates where they can’t move around. Whether they should be castrated with no anesthetic, have their tails and teeth cut off with a sharp object and no anesthetic, whether parts of their ears should be cut off for identification purposes, cruelly cramped together during transport unable to stand or move around, and whether they should have a knife dragged across their throat. All of these are the price we pay for cheap pig flesh.Whether chickens should be hung upside down by one leg before being brought on a conveyor to a knife being dragged across their throat—the only saving grace being an error prone electric bath that knocks them unconscious; sometimes. Of course, the combination of blade and electric bath is sufficiently error-prone to boil to death half a million or so sentient beings every year. It becomes abundantly clear that we’re acting horrendously when animal advocates are hoping that we’ll gas animals to death—kill them the way the Nazi’s did, for the ways we do it now are far crueler. Whether chickens should be crammed in a space far smaller than a sheet of paper, living their whole lives without seeing the sun, except in the moments before they’re transferred to their grisly slaughter. We do this to about 80 billion land animals and trillions of sea creatures.Are cheap eggs worth forcing sentient beings to live in shit, with the smell of feces being the only thing detectable from inside the barn? Forcing sentient beings to get osteoporosis and heart disease, all in an attempt to reduce the cost of eggs. This barrage of shit is not limited to egg-laying hens—it’s why a full 80% of pigs have pneumonia upon slaughter. A life lived in so much shit that it causes pneumonia the vast majority of the time is not how we ought to treat...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: May The Factory Farms Burn, published by Omnizoid on December 25, 2022 on The Effective Altruism Forum.Cross-post of this.“King Lear, late at night on the cliffs asks the blind Earl of Gloucester“How do you see the world?”And the blind man Gloucester replies “I see it feelingly”.Shouldn’t we all?Animals must be off the menu – because tonight they are screaming in terror in the slaughterhouse, in crates, and cages. Vile ignoble gulags of Despair.I heard the screams of my dying father as his body was ravaged by the cancer that killed him. And I realised I had heard these screams before.In the slaughterhouse, eyes stabbed out and tendons slashed, on the cattle ships to the Middle East and the dying mother whale as a Japanese harpoon explodes in her brain as she calls out to her calf.Their cries were the cries of my father.I discovered when we suffer, we suffer as equals.And in their capacity to suffer, a dog is a pig is a bear. . . . . . is a boy.”Phillip Wollen in a speech I’d highly recommendWhen a quarter million birds are stuffed into a single shed, unable even to flap their wings, when more than a million pigs inhabit a single farm, never once stepping into the light of day, when every year tens of millions of creatures go to their death without knowing the least measure of human kindness, it is time to question old assumptions, to ask what we are doing and what spirit drives us on.Matthew Scully “Dominion”Some ethical questions are difficult. One can’t be particularly confident in their views about population ethics, most political issues, normative ethics, etc. Yet the morality of our current consumption of meat is not one of those difficult issues. It is as close to a no-brainer as you get in normative ethics. The question we face is whether we should torture billions of sentient beings before killing them because we like the taste of their flesh.Whether pigs should be roasted to death, hot steam choking and burning them to death. Whether pigs who are smarter than dogs should be forced to give birth in tiny crates where they can’t move around. Whether they should be castrated with no anesthetic, have their tails and teeth cut off with a sharp object and no anesthetic, whether parts of their ears should be cut off for identification purposes, cruelly cramped together during transport unable to stand or move around, and whether they should have a knife dragged across their throat. All of these are the price we pay for cheap pig flesh.Whether chickens should be hung upside down by one leg before being brought on a conveyor to a knife being dragged across their throat—the only saving grace being an error prone electric bath that knocks them unconscious; sometimes. Of course, the combination of blade and electric bath is sufficiently error-prone to boil to death half a million or so sentient beings every year. It becomes abundantly clear that we’re acting horrendously when animal advocates are hoping that we’ll gas animals to death—kill them the way the Nazi’s did, for the ways we do it now are far crueler. Whether chickens should be crammed in a space far smaller than a sheet of paper, living their whole lives without seeing the sun, except in the moments before they’re transferred to their grisly slaughter. We do this to about 80 billion land animals and trillions of sea creatures.Are cheap eggs worth forcing sentient beings to live in shit, with the smell of feces being the only thing detectable from inside the barn? Forcing sentient beings to get osteoporosis and heart disease, all in an attempt to reduce the cost of eggs. This barrage of shit is not limited to egg-laying hens—it’s why a full 80% of pigs have pneumonia upon slaughter. A life lived in so much shit that it causes pneumonia the vast majority of the time is not how we ought to treat...]]>
Omnizoid https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:23 None full 4258
wyuaXAaEYnGJ5tyva_EA EA - On applause lights and costly counters by Will Aldred Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On applause lights and costly counters, published by Will Aldred on December 24, 2022 on The Effective Altruism Forum.Epistemic status: stream of consciousness.Edited to add: This was written as a personal blogpost (with the frontpage checkbox at the bottom unchecked), but it appears to be showing up on the frontpage. I'm not sure what's up with that (though it's probably my bad, I forgot to uncheck the box when I initially published, then unchecked later); if you do read this post, know that it's especially rough and off-the-cuff and to some extent the downstream effect of red wine.An applause light is an empty statement which evokes positive affect without providing new information.(LessWrong Wiki)I've heard a couple of applause lights in the recent past, spoken by EAs.The first: "I'm confused [about x]."EA culture tends to reward people who are open and transparent about what they don't know. This is a good thing... until it starts getting Goodharted. The statement, "I'm confused", is productive in so far as it gets others to:apply appropriate grains of salt to the claims you're making on the topic you say you're confused abouthelp you become less confusedthink more critically and flag appropriate uncertainty on claims they themselves makeHowever, several times now I've heard empty statements of "I'm confused" where there's really no intention to achieve any of the above, so far as I can tell. "I'm confused" is being used as an applause light.The second: "diversity".This one's particularly pernicious, because it's costly to argue against. A concrete example: Alice and Bob, plus a handful more people, run an internship. In a team meeting on the application process, Bob says, "We should promote diversity." Heads nod. Satisfied, Bob does not continue. He's made his point, he's scored some social points.Meanwhile, in Alice's head:Hmm, if we do want to promote diversity, what actions should we take to make that happen?And, which is more, do we actually want to promote diversity in the first place?...... On the one hand, one could make the moral claim, "diversity is a terminal goal."...... On the other hand, one could make an instrumental claim along the lines of, "Greater diversity entails a wider set of viewpoints. This leads to a more informed discourse, better conclusions, and, ultimately, better decisions. Better decisions are upstream of better achieving our terminal goals, such as raising the probability of a flourishing future."Now, I know Bob, he's an EA, he's pretty in on the whole utilitarian deal, I don't think he views diversity as a terminal goal. Moreover, diversity is definitely not a terminal goal of this internship: our raison d'être is to reduce x-risk.Okay, so, I'll assume Bob's statement was made in good faith, and that it's really about the extent to which diversity is instrumentally useful toward reducing x-risk. This is an empirical question. There's a trade-off at play here. I've mentioned some reasons for more diversity above (more viewpoints, and so on), but there are also reasons against more diversity. Top of this list is that most of our best applicants are male and white. So, if we want to include more females and more non-white folks, then, given our fixed quota of places, this will mean distorting the bar to being accepted. Now, I don't like that this is the reality of the situation. I would like it be easy to promote diversity (in order to get the instrumental benefits of diversity without giving anything up, like having lower-aptitude people as fellows), but that's not the reality we live in. We have to weight up costs and benefits and make a choice accordingly.Right, I'll say what I'm thinking...Alice's words were met with awkward silence, averted gazes, and negative social points.Inside Alice's head:Ah, I see....]]>
Will Aldred https://forum.effectivealtruism.org/posts/wyuaXAaEYnGJ5tyva/on-applause-lights-and-costly-counters Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On applause lights and costly counters, published by Will Aldred on December 24, 2022 on The Effective Altruism Forum.Epistemic status: stream of consciousness.Edited to add: This was written as a personal blogpost (with the frontpage checkbox at the bottom unchecked), but it appears to be showing up on the frontpage. I'm not sure what's up with that (though it's probably my bad, I forgot to uncheck the box when I initially published, then unchecked later); if you do read this post, know that it's especially rough and off-the-cuff and to some extent the downstream effect of red wine.An applause light is an empty statement which evokes positive affect without providing new information.(LessWrong Wiki)I've heard a couple of applause lights in the recent past, spoken by EAs.The first: "I'm confused [about x]."EA culture tends to reward people who are open and transparent about what they don't know. This is a good thing... until it starts getting Goodharted. The statement, "I'm confused", is productive in so far as it gets others to:apply appropriate grains of salt to the claims you're making on the topic you say you're confused abouthelp you become less confusedthink more critically and flag appropriate uncertainty on claims they themselves makeHowever, several times now I've heard empty statements of "I'm confused" where there's really no intention to achieve any of the above, so far as I can tell. "I'm confused" is being used as an applause light.The second: "diversity".This one's particularly pernicious, because it's costly to argue against. A concrete example: Alice and Bob, plus a handful more people, run an internship. In a team meeting on the application process, Bob says, "We should promote diversity." Heads nod. Satisfied, Bob does not continue. He's made his point, he's scored some social points.Meanwhile, in Alice's head:Hmm, if we do want to promote diversity, what actions should we take to make that happen?And, which is more, do we actually want to promote diversity in the first place?...... On the one hand, one could make the moral claim, "diversity is a terminal goal."...... On the other hand, one could make an instrumental claim along the lines of, "Greater diversity entails a wider set of viewpoints. This leads to a more informed discourse, better conclusions, and, ultimately, better decisions. Better decisions are upstream of better achieving our terminal goals, such as raising the probability of a flourishing future."Now, I know Bob, he's an EA, he's pretty in on the whole utilitarian deal, I don't think he views diversity as a terminal goal. Moreover, diversity is definitely not a terminal goal of this internship: our raison d'être is to reduce x-risk.Okay, so, I'll assume Bob's statement was made in good faith, and that it's really about the extent to which diversity is instrumentally useful toward reducing x-risk. This is an empirical question. There's a trade-off at play here. I've mentioned some reasons for more diversity above (more viewpoints, and so on), but there are also reasons against more diversity. Top of this list is that most of our best applicants are male and white. So, if we want to include more females and more non-white folks, then, given our fixed quota of places, this will mean distorting the bar to being accepted. Now, I don't like that this is the reality of the situation. I would like it be easy to promote diversity (in order to get the instrumental benefits of diversity without giving anything up, like having lower-aptitude people as fellows), but that's not the reality we live in. We have to weight up costs and benefits and make a choice accordingly.Right, I'll say what I'm thinking...Alice's words were met with awkward silence, averted gazes, and negative social points.Inside Alice's head:Ah, I see....]]>
Sun, 25 Dec 2022 02:42:58 +0000 EA - On applause lights and costly counters by Will Aldred Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On applause lights and costly counters, published by Will Aldred on December 24, 2022 on The Effective Altruism Forum.Epistemic status: stream of consciousness.Edited to add: This was written as a personal blogpost (with the frontpage checkbox at the bottom unchecked), but it appears to be showing up on the frontpage. I'm not sure what's up with that (though it's probably my bad, I forgot to uncheck the box when I initially published, then unchecked later); if you do read this post, know that it's especially rough and off-the-cuff and to some extent the downstream effect of red wine.An applause light is an empty statement which evokes positive affect without providing new information.(LessWrong Wiki)I've heard a couple of applause lights in the recent past, spoken by EAs.The first: "I'm confused [about x]."EA culture tends to reward people who are open and transparent about what they don't know. This is a good thing... until it starts getting Goodharted. The statement, "I'm confused", is productive in so far as it gets others to:apply appropriate grains of salt to the claims you're making on the topic you say you're confused abouthelp you become less confusedthink more critically and flag appropriate uncertainty on claims they themselves makeHowever, several times now I've heard empty statements of "I'm confused" where there's really no intention to achieve any of the above, so far as I can tell. "I'm confused" is being used as an applause light.The second: "diversity".This one's particularly pernicious, because it's costly to argue against. A concrete example: Alice and Bob, plus a handful more people, run an internship. In a team meeting on the application process, Bob says, "We should promote diversity." Heads nod. Satisfied, Bob does not continue. He's made his point, he's scored some social points.Meanwhile, in Alice's head:Hmm, if we do want to promote diversity, what actions should we take to make that happen?And, which is more, do we actually want to promote diversity in the first place?...... On the one hand, one could make the moral claim, "diversity is a terminal goal."...... On the other hand, one could make an instrumental claim along the lines of, "Greater diversity entails a wider set of viewpoints. This leads to a more informed discourse, better conclusions, and, ultimately, better decisions. Better decisions are upstream of better achieving our terminal goals, such as raising the probability of a flourishing future."Now, I know Bob, he's an EA, he's pretty in on the whole utilitarian deal, I don't think he views diversity as a terminal goal. Moreover, diversity is definitely not a terminal goal of this internship: our raison d'être is to reduce x-risk.Okay, so, I'll assume Bob's statement was made in good faith, and that it's really about the extent to which diversity is instrumentally useful toward reducing x-risk. This is an empirical question. There's a trade-off at play here. I've mentioned some reasons for more diversity above (more viewpoints, and so on), but there are also reasons against more diversity. Top of this list is that most of our best applicants are male and white. So, if we want to include more females and more non-white folks, then, given our fixed quota of places, this will mean distorting the bar to being accepted. Now, I don't like that this is the reality of the situation. I would like it be easy to promote diversity (in order to get the instrumental benefits of diversity without giving anything up, like having lower-aptitude people as fellows), but that's not the reality we live in. We have to weight up costs and benefits and make a choice accordingly.Right, I'll say what I'm thinking...Alice's words were met with awkward silence, averted gazes, and negative social points.Inside Alice's head:Ah, I see....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On applause lights and costly counters, published by Will Aldred on December 24, 2022 on The Effective Altruism Forum.Epistemic status: stream of consciousness.Edited to add: This was written as a personal blogpost (with the frontpage checkbox at the bottom unchecked), but it appears to be showing up on the frontpage. I'm not sure what's up with that (though it's probably my bad, I forgot to uncheck the box when I initially published, then unchecked later); if you do read this post, know that it's especially rough and off-the-cuff and to some extent the downstream effect of red wine.An applause light is an empty statement which evokes positive affect without providing new information.(LessWrong Wiki)I've heard a couple of applause lights in the recent past, spoken by EAs.The first: "I'm confused [about x]."EA culture tends to reward people who are open and transparent about what they don't know. This is a good thing... until it starts getting Goodharted. The statement, "I'm confused", is productive in so far as it gets others to:apply appropriate grains of salt to the claims you're making on the topic you say you're confused abouthelp you become less confusedthink more critically and flag appropriate uncertainty on claims they themselves makeHowever, several times now I've heard empty statements of "I'm confused" where there's really no intention to achieve any of the above, so far as I can tell. "I'm confused" is being used as an applause light.The second: "diversity".This one's particularly pernicious, because it's costly to argue against. A concrete example: Alice and Bob, plus a handful more people, run an internship. In a team meeting on the application process, Bob says, "We should promote diversity." Heads nod. Satisfied, Bob does not continue. He's made his point, he's scored some social points.Meanwhile, in Alice's head:Hmm, if we do want to promote diversity, what actions should we take to make that happen?And, which is more, do we actually want to promote diversity in the first place?...... On the one hand, one could make the moral claim, "diversity is a terminal goal."...... On the other hand, one could make an instrumental claim along the lines of, "Greater diversity entails a wider set of viewpoints. This leads to a more informed discourse, better conclusions, and, ultimately, better decisions. Better decisions are upstream of better achieving our terminal goals, such as raising the probability of a flourishing future."Now, I know Bob, he's an EA, he's pretty in on the whole utilitarian deal, I don't think he views diversity as a terminal goal. Moreover, diversity is definitely not a terminal goal of this internship: our raison d'être is to reduce x-risk.Okay, so, I'll assume Bob's statement was made in good faith, and that it's really about the extent to which diversity is instrumentally useful toward reducing x-risk. This is an empirical question. There's a trade-off at play here. I've mentioned some reasons for more diversity above (more viewpoints, and so on), but there are also reasons against more diversity. Top of this list is that most of our best applicants are male and white. So, if we want to include more females and more non-white folks, then, given our fixed quota of places, this will mean distorting the bar to being accepted. Now, I don't like that this is the reality of the situation. I would like it be easy to promote diversity (in order to get the instrumental benefits of diversity without giving anything up, like having lower-aptitude people as fellows), but that's not the reality we live in. We have to weight up costs and benefits and make a choice accordingly.Right, I'll say what I'm thinking...Alice's words were met with awkward silence, averted gazes, and negative social points.Inside Alice's head:Ah, I see....]]>
Will Aldred https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:08 None full 4253
aupKXpPGnFmbfE2xC_EA EA - [Link-post] Politico: "Ex-Google boss helps fund dozens of jobs in Biden’s administration" by Pranay K Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link-post] Politico: "Ex-Google boss helps fund dozens of jobs in Biden’s administration", published by Pranay K on December 24, 2022 on The Effective Altruism Forum.Politico article from Thursday December 22, 2022: "Ex-Google boss helps fund dozens of jobs in Biden’s administration"1. Summary:In three sentences:"Eric Schmidt, the former CEO of Google who has long sought influence over White House science policy, is helping to fund the salaries of more than two dozen officials in the Biden administration under the auspices of an outside group, the Federation of American Scientists."It is worth noting that Schmidt Futures (Schmidt's philanthropic ventures) does not directly fund these officials' salaries: Schmidt Futures provides < 30% to the Federation of American Scientists' "Day One fund" which funds these officials' salaries. Eric Schmidt seems to me to have called for the US government to aggressively invest in AI development.Some more context:Eric Schmidt chaired the National Security Commission on Artificial Intelligence from 2018-2021, in which the commision called on the US government to spend $40 billion on AI development.Schmidt Futures (Schmidt's philanthropic ventures) funds < 30% of the contributions to the Day One Project, a project within the Federation of American Scientists (FAS), which (among other things) provides the salaries of "FAS fellows" who hold "more than two dozen officials in the Biden administration" (from the main Politico article being discussed in this post). This includes 2 staffers in the Office of Science and Technology Policy (a different Politico article).The FAS is a "nonprofit global policy think tank with the stated intent of using science and scientific analysis to attempt to make the world more secure" (Wikipedia). The Day One project was started to recruit people to fill "key science and technology positions in the executive branch" (from the main Politico article).2. My question: Are Schmidt's projects harmfully advancing AI capabilities research?I've seen discussion among the EA community about how OpenAI and Anthropic may be harmfully advancing AI capabilities research. (The best discussion that comes to mind is this recent Scott Alexander post about ChatGPT; if anyone knows any other resources discussing this hypothesis - for or against - please comment below).I have not seen much discussion about Eric Schmidt's harmful or beneficial contributions to AI development in the US government. What do people think about this? Is this something that should concern us?3. Some more excerpts from the article about AI“Schmidt is clearly trying to influence AI policy to a disproportionate degree of any person I can think of,” said Alex Engler, a fellow at the Brookings Institution who specializes in AI policy. “We’ve seen a dramatic increase in investment toward advancing AI capacity in government and not much in limiting its harmful use.”Schmidt’s collaboration with FAS [Federation of American Scientists] is only a part of his broader advocacy for the U.S. government to invest more in technology and particularly in AI, positions he advanced as chair of the federal National Security Commission on Artificial Intelligence from 2018 to 2021.The commission’s final report recommended that the government spend $40 billion to “expand and democratize federal AI research and development” and suggested more may be needed.“If anything, this report underplays the investments America will need to make,” the report stated.“Other countries have made AI a national project. The United States has not yet, as a nation, systematically explored its scope, studied its implications, or begun the process of reconciling with it,” they wrote. “If the United States and its allies recoil before the implications of these capabilities and halt...]]>
Pranay K https://forum.effectivealtruism.org/posts/aupKXpPGnFmbfE2xC/link-post-politico-ex-google-boss-helps-fund-dozens-of-jobs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link-post] Politico: "Ex-Google boss helps fund dozens of jobs in Biden’s administration", published by Pranay K on December 24, 2022 on The Effective Altruism Forum.Politico article from Thursday December 22, 2022: "Ex-Google boss helps fund dozens of jobs in Biden’s administration"1. Summary:In three sentences:"Eric Schmidt, the former CEO of Google who has long sought influence over White House science policy, is helping to fund the salaries of more than two dozen officials in the Biden administration under the auspices of an outside group, the Federation of American Scientists."It is worth noting that Schmidt Futures (Schmidt's philanthropic ventures) does not directly fund these officials' salaries: Schmidt Futures provides < 30% to the Federation of American Scientists' "Day One fund" which funds these officials' salaries. Eric Schmidt seems to me to have called for the US government to aggressively invest in AI development.Some more context:Eric Schmidt chaired the National Security Commission on Artificial Intelligence from 2018-2021, in which the commision called on the US government to spend $40 billion on AI development.Schmidt Futures (Schmidt's philanthropic ventures) funds < 30% of the contributions to the Day One Project, a project within the Federation of American Scientists (FAS), which (among other things) provides the salaries of "FAS fellows" who hold "more than two dozen officials in the Biden administration" (from the main Politico article being discussed in this post). This includes 2 staffers in the Office of Science and Technology Policy (a different Politico article).The FAS is a "nonprofit global policy think tank with the stated intent of using science and scientific analysis to attempt to make the world more secure" (Wikipedia). The Day One project was started to recruit people to fill "key science and technology positions in the executive branch" (from the main Politico article).2. My question: Are Schmidt's projects harmfully advancing AI capabilities research?I've seen discussion among the EA community about how OpenAI and Anthropic may be harmfully advancing AI capabilities research. (The best discussion that comes to mind is this recent Scott Alexander post about ChatGPT; if anyone knows any other resources discussing this hypothesis - for or against - please comment below).I have not seen much discussion about Eric Schmidt's harmful or beneficial contributions to AI development in the US government. What do people think about this? Is this something that should concern us?3. Some more excerpts from the article about AI“Schmidt is clearly trying to influence AI policy to a disproportionate degree of any person I can think of,” said Alex Engler, a fellow at the Brookings Institution who specializes in AI policy. “We’ve seen a dramatic increase in investment toward advancing AI capacity in government and not much in limiting its harmful use.”Schmidt’s collaboration with FAS [Federation of American Scientists] is only a part of his broader advocacy for the U.S. government to invest more in technology and particularly in AI, positions he advanced as chair of the federal National Security Commission on Artificial Intelligence from 2018 to 2021.The commission’s final report recommended that the government spend $40 billion to “expand and democratize federal AI research and development” and suggested more may be needed.“If anything, this report underplays the investments America will need to make,” the report stated.“Other countries have made AI a national project. The United States has not yet, as a nation, systematically explored its scope, studied its implications, or begun the process of reconciling with it,” they wrote. “If the United States and its allies recoil before the implications of these capabilities and halt...]]>
Sat, 24 Dec 2022 23:03:00 +0000 EA - [Link-post] Politico: "Ex-Google boss helps fund dozens of jobs in Biden’s administration" by Pranay K Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link-post] Politico: "Ex-Google boss helps fund dozens of jobs in Biden’s administration", published by Pranay K on December 24, 2022 on The Effective Altruism Forum.Politico article from Thursday December 22, 2022: "Ex-Google boss helps fund dozens of jobs in Biden’s administration"1. Summary:In three sentences:"Eric Schmidt, the former CEO of Google who has long sought influence over White House science policy, is helping to fund the salaries of more than two dozen officials in the Biden administration under the auspices of an outside group, the Federation of American Scientists."It is worth noting that Schmidt Futures (Schmidt's philanthropic ventures) does not directly fund these officials' salaries: Schmidt Futures provides < 30% to the Federation of American Scientists' "Day One fund" which funds these officials' salaries. Eric Schmidt seems to me to have called for the US government to aggressively invest in AI development.Some more context:Eric Schmidt chaired the National Security Commission on Artificial Intelligence from 2018-2021, in which the commision called on the US government to spend $40 billion on AI development.Schmidt Futures (Schmidt's philanthropic ventures) funds < 30% of the contributions to the Day One Project, a project within the Federation of American Scientists (FAS), which (among other things) provides the salaries of "FAS fellows" who hold "more than two dozen officials in the Biden administration" (from the main Politico article being discussed in this post). This includes 2 staffers in the Office of Science and Technology Policy (a different Politico article).The FAS is a "nonprofit global policy think tank with the stated intent of using science and scientific analysis to attempt to make the world more secure" (Wikipedia). The Day One project was started to recruit people to fill "key science and technology positions in the executive branch" (from the main Politico article).2. My question: Are Schmidt's projects harmfully advancing AI capabilities research?I've seen discussion among the EA community about how OpenAI and Anthropic may be harmfully advancing AI capabilities research. (The best discussion that comes to mind is this recent Scott Alexander post about ChatGPT; if anyone knows any other resources discussing this hypothesis - for or against - please comment below).I have not seen much discussion about Eric Schmidt's harmful or beneficial contributions to AI development in the US government. What do people think about this? Is this something that should concern us?3. Some more excerpts from the article about AI“Schmidt is clearly trying to influence AI policy to a disproportionate degree of any person I can think of,” said Alex Engler, a fellow at the Brookings Institution who specializes in AI policy. “We’ve seen a dramatic increase in investment toward advancing AI capacity in government and not much in limiting its harmful use.”Schmidt’s collaboration with FAS [Federation of American Scientists] is only a part of his broader advocacy for the U.S. government to invest more in technology and particularly in AI, positions he advanced as chair of the federal National Security Commission on Artificial Intelligence from 2018 to 2021.The commission’s final report recommended that the government spend $40 billion to “expand and democratize federal AI research and development” and suggested more may be needed.“If anything, this report underplays the investments America will need to make,” the report stated.“Other countries have made AI a national project. The United States has not yet, as a nation, systematically explored its scope, studied its implications, or begun the process of reconciling with it,” they wrote. “If the United States and its allies recoil before the implications of these capabilities and halt...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link-post] Politico: "Ex-Google boss helps fund dozens of jobs in Biden’s administration", published by Pranay K on December 24, 2022 on The Effective Altruism Forum.Politico article from Thursday December 22, 2022: "Ex-Google boss helps fund dozens of jobs in Biden’s administration"1. Summary:In three sentences:"Eric Schmidt, the former CEO of Google who has long sought influence over White House science policy, is helping to fund the salaries of more than two dozen officials in the Biden administration under the auspices of an outside group, the Federation of American Scientists."It is worth noting that Schmidt Futures (Schmidt's philanthropic ventures) does not directly fund these officials' salaries: Schmidt Futures provides < 30% to the Federation of American Scientists' "Day One fund" which funds these officials' salaries. Eric Schmidt seems to me to have called for the US government to aggressively invest in AI development.Some more context:Eric Schmidt chaired the National Security Commission on Artificial Intelligence from 2018-2021, in which the commision called on the US government to spend $40 billion on AI development.Schmidt Futures (Schmidt's philanthropic ventures) funds < 30% of the contributions to the Day One Project, a project within the Federation of American Scientists (FAS), which (among other things) provides the salaries of "FAS fellows" who hold "more than two dozen officials in the Biden administration" (from the main Politico article being discussed in this post). This includes 2 staffers in the Office of Science and Technology Policy (a different Politico article).The FAS is a "nonprofit global policy think tank with the stated intent of using science and scientific analysis to attempt to make the world more secure" (Wikipedia). The Day One project was started to recruit people to fill "key science and technology positions in the executive branch" (from the main Politico article).2. My question: Are Schmidt's projects harmfully advancing AI capabilities research?I've seen discussion among the EA community about how OpenAI and Anthropic may be harmfully advancing AI capabilities research. (The best discussion that comes to mind is this recent Scott Alexander post about ChatGPT; if anyone knows any other resources discussing this hypothesis - for or against - please comment below).I have not seen much discussion about Eric Schmidt's harmful or beneficial contributions to AI development in the US government. What do people think about this? Is this something that should concern us?3. Some more excerpts from the article about AI“Schmidt is clearly trying to influence AI policy to a disproportionate degree of any person I can think of,” said Alex Engler, a fellow at the Brookings Institution who specializes in AI policy. “We’ve seen a dramatic increase in investment toward advancing AI capacity in government and not much in limiting its harmful use.”Schmidt’s collaboration with FAS [Federation of American Scientists] is only a part of his broader advocacy for the U.S. government to invest more in technology and particularly in AI, positions he advanced as chair of the federal National Security Commission on Artificial Intelligence from 2018 to 2021.The commission’s final report recommended that the government spend $40 billion to “expand and democratize federal AI research and development” and suggested more may be needed.“If anything, this report underplays the investments America will need to make,” the report stated.“Other countries have made AI a national project. The United States has not yet, as a nation, systematically explored its scope, studied its implications, or begun the process of reconciling with it,” they wrote. “If the United States and its allies recoil before the implications of these capabilities and halt...]]>
Pranay K https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:53 None full 4254
7kpmKn86BGNeqNfAC_EA EA - Save the Date: EAGxCambridge (UK) by David Mears Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the Date: EAGxCambridge (UK), published by David Mears on December 24, 2022 on The Effective Altruism Forum.Save the dates: EAGxCambridge (part of the EA Global conference series) will take place March 17th-19th 2023 in Cambridge, UK. Official launch to follow soon.What is EAGx?EA Global conferences are not the only events for people interested in effective altruism! EAGx conferences are locally-organized conferences designed for a wider audience, primarily for people:Familiar with the core ideas of effective altruismInterested in learning more about what to doFrom the region or country where the conference is taking place (or living there)The event will feature:Talks and workshops on pressing problems that the EA community is currently trying to address and ideas to help the community achieve its goalsThe opportunity to meet and share advice with other EAs in the community, including meet-ups for people who share interests or backgroundsSocial events at and around the conferenceApplicationsWe will post the application form shortly, along with more information about the conference. In the meantime, sign up here to be notified when applications open.Content suggestionsWhat speakers, sessions, or topics would you like to see? Let us know your ideas in the comments below, or use this form. (You can also suggest yourself.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
David Mears https://forum.effectivealtruism.org/posts/7kpmKn86BGNeqNfAC/save-the-date-eagxcambridge-uk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the Date: EAGxCambridge (UK), published by David Mears on December 24, 2022 on The Effective Altruism Forum.Save the dates: EAGxCambridge (part of the EA Global conference series) will take place March 17th-19th 2023 in Cambridge, UK. Official launch to follow soon.What is EAGx?EA Global conferences are not the only events for people interested in effective altruism! EAGx conferences are locally-organized conferences designed for a wider audience, primarily for people:Familiar with the core ideas of effective altruismInterested in learning more about what to doFrom the region or country where the conference is taking place (or living there)The event will feature:Talks and workshops on pressing problems that the EA community is currently trying to address and ideas to help the community achieve its goalsThe opportunity to meet and share advice with other EAs in the community, including meet-ups for people who share interests or backgroundsSocial events at and around the conferenceApplicationsWe will post the application form shortly, along with more information about the conference. In the meantime, sign up here to be notified when applications open.Content suggestionsWhat speakers, sessions, or topics would you like to see? Let us know your ideas in the comments below, or use this form. (You can also suggest yourself.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 24 Dec 2022 15:51:19 +0000 EA - Save the Date: EAGxCambridge (UK) by David Mears Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the Date: EAGxCambridge (UK), published by David Mears on December 24, 2022 on The Effective Altruism Forum.Save the dates: EAGxCambridge (part of the EA Global conference series) will take place March 17th-19th 2023 in Cambridge, UK. Official launch to follow soon.What is EAGx?EA Global conferences are not the only events for people interested in effective altruism! EAGx conferences are locally-organized conferences designed for a wider audience, primarily for people:Familiar with the core ideas of effective altruismInterested in learning more about what to doFrom the region or country where the conference is taking place (or living there)The event will feature:Talks and workshops on pressing problems that the EA community is currently trying to address and ideas to help the community achieve its goalsThe opportunity to meet and share advice with other EAs in the community, including meet-ups for people who share interests or backgroundsSocial events at and around the conferenceApplicationsWe will post the application form shortly, along with more information about the conference. In the meantime, sign up here to be notified when applications open.Content suggestionsWhat speakers, sessions, or topics would you like to see? Let us know your ideas in the comments below, or use this form. (You can also suggest yourself.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the Date: EAGxCambridge (UK), published by David Mears on December 24, 2022 on The Effective Altruism Forum.Save the dates: EAGxCambridge (part of the EA Global conference series) will take place March 17th-19th 2023 in Cambridge, UK. Official launch to follow soon.What is EAGx?EA Global conferences are not the only events for people interested in effective altruism! EAGx conferences are locally-organized conferences designed for a wider audience, primarily for people:Familiar with the core ideas of effective altruismInterested in learning more about what to doFrom the region or country where the conference is taking place (or living there)The event will feature:Talks and workshops on pressing problems that the EA community is currently trying to address and ideas to help the community achieve its goalsThe opportunity to meet and share advice with other EAs in the community, including meet-ups for people who share interests or backgroundsSocial events at and around the conferenceApplicationsWe will post the application form shortly, along with more information about the conference. In the meantime, sign up here to be notified when applications open.Content suggestionsWhat speakers, sessions, or topics would you like to see? Let us know your ideas in the comments below, or use this form. (You can also suggest yourself.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
David Mears https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:38 None full 4246
CycFBmdwgCjCsZxQh_EA EA - What you prioritise is mostly moral intuition by James Ozden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What you prioritise is mostly moral intuition, published by James Ozden on December 24, 2022 on The Effective Altruism Forum.Note: A linkpost from my blog. Epistemic status: A confused physicist trying to grapple with philosophy, and losing.Effective Altruism is about doing as much good as possible with a given amount of resources, using reason and evidence. This sounds very appealing, but there’s the age-old problem, what counts as “the most good”? Despite the large focus on careful reasoning and rigorous evidence within Effective Altruism, I speculate that many people decide what is “the most good” based largely on moral intuitions.I should also point out that these moral dilemmas don’t just plague Effective Altruists or those who prescribe to utilitarianism. These thorny moral issues apply to everyone who wants to help others or “do good”. As Richard Chappell neatly puts it, these are puzzles for everyone. If you want to cop-out, you could reject any philosophy where you try to rank how bad or good things are, but this seems extremely unappealing. Surely if we’re given the choice between saving ten people from a terrible disease like malaria and ten people from hiccups, we should be able to easily decide that one is worse than the other? Anything else seems very unhelpful to the world around us, where we allow grave suffering to continue as we don’t think comparing “bads” are possible.To press on, let’s take one concrete example of a moral dilemma: How important is extending lives relative to improving lives? Put more simply, given limited resources, should we focus on averting deaths from easily preventable diseases or increasing people’s quality of life?This is not a question that one can easily answer with randomised controlled trials, meta-analyses and other traditional forms of evidence! Despite this, it might strongly affect what you dedicate your life to working on, or the causes you choose to support. Happier Lives Institute have done some great research looking at this exact question and no surprises - your view on this moral question matters a lot. When looking at charities that might help people alive today, they find that it matters a lot whether you prioritise the youngest people (deprivationism), older children over infants (TRIA), or the view that death isn’t necessarily bad, but that living a better life is what matters most (Epicureanism). For context, the graph below shows the relative cost-effectiveness of various charities under different philosophical assumptions, using the metric WELLBYs, which taken into account the subjective experiences of people.So, we have a problem. If this is a question that could affect what classifies as “the most good”, and I think it’s definitely up there, then how do we proceed? Do we just do some thought experiments, weigh up our intuitions against other beliefs we hold (using reflective equilibrium potentially), incorporate moral uncertainty, and go from there? For a movement based on doing “the most good”, this seems very unsatisfying! But sadly, I think this problem rears its head in several important places. To quote Michael Plant (a philosopher from the University of Oxford and director of Happier Lives Institute):“Well, all disagreements in philosophy ultimately come down to intuitions, not just those in population ethics!”To note, I think this is very different from empirical disagreements about doing “the most good”. For example, Effective Altruism (EA) is pretty good at using data and evidence to get to the bottom of how to do a certain kind of good. One great example is GiveWell, who have an extensive research process, drawing mostly on high-quality randomised control trials (see this spreadsheet for the cost-effectiveness of the Against Malaria Foundation) to find the most effective ways to hel...]]>
James Ozden https://forum.effectivealtruism.org/posts/CycFBmdwgCjCsZxQh/what-you-prioritise-is-mostly-moral-intuition Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What you prioritise is mostly moral intuition, published by James Ozden on December 24, 2022 on The Effective Altruism Forum.Note: A linkpost from my blog. Epistemic status: A confused physicist trying to grapple with philosophy, and losing.Effective Altruism is about doing as much good as possible with a given amount of resources, using reason and evidence. This sounds very appealing, but there’s the age-old problem, what counts as “the most good”? Despite the large focus on careful reasoning and rigorous evidence within Effective Altruism, I speculate that many people decide what is “the most good” based largely on moral intuitions.I should also point out that these moral dilemmas don’t just plague Effective Altruists or those who prescribe to utilitarianism. These thorny moral issues apply to everyone who wants to help others or “do good”. As Richard Chappell neatly puts it, these are puzzles for everyone. If you want to cop-out, you could reject any philosophy where you try to rank how bad or good things are, but this seems extremely unappealing. Surely if we’re given the choice between saving ten people from a terrible disease like malaria and ten people from hiccups, we should be able to easily decide that one is worse than the other? Anything else seems very unhelpful to the world around us, where we allow grave suffering to continue as we don’t think comparing “bads” are possible.To press on, let’s take one concrete example of a moral dilemma: How important is extending lives relative to improving lives? Put more simply, given limited resources, should we focus on averting deaths from easily preventable diseases or increasing people’s quality of life?This is not a question that one can easily answer with randomised controlled trials, meta-analyses and other traditional forms of evidence! Despite this, it might strongly affect what you dedicate your life to working on, or the causes you choose to support. Happier Lives Institute have done some great research looking at this exact question and no surprises - your view on this moral question matters a lot. When looking at charities that might help people alive today, they find that it matters a lot whether you prioritise the youngest people (deprivationism), older children over infants (TRIA), or the view that death isn’t necessarily bad, but that living a better life is what matters most (Epicureanism). For context, the graph below shows the relative cost-effectiveness of various charities under different philosophical assumptions, using the metric WELLBYs, which taken into account the subjective experiences of people.So, we have a problem. If this is a question that could affect what classifies as “the most good”, and I think it’s definitely up there, then how do we proceed? Do we just do some thought experiments, weigh up our intuitions against other beliefs we hold (using reflective equilibrium potentially), incorporate moral uncertainty, and go from there? For a movement based on doing “the most good”, this seems very unsatisfying! But sadly, I think this problem rears its head in several important places. To quote Michael Plant (a philosopher from the University of Oxford and director of Happier Lives Institute):“Well, all disagreements in philosophy ultimately come down to intuitions, not just those in population ethics!”To note, I think this is very different from empirical disagreements about doing “the most good”. For example, Effective Altruism (EA) is pretty good at using data and evidence to get to the bottom of how to do a certain kind of good. One great example is GiveWell, who have an extensive research process, drawing mostly on high-quality randomised control trials (see this spreadsheet for the cost-effectiveness of the Against Malaria Foundation) to find the most effective ways to hel...]]>
Sat, 24 Dec 2022 15:50:10 +0000 EA - What you prioritise is mostly moral intuition by James Ozden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What you prioritise is mostly moral intuition, published by James Ozden on December 24, 2022 on The Effective Altruism Forum.Note: A linkpost from my blog. Epistemic status: A confused physicist trying to grapple with philosophy, and losing.Effective Altruism is about doing as much good as possible with a given amount of resources, using reason and evidence. This sounds very appealing, but there’s the age-old problem, what counts as “the most good”? Despite the large focus on careful reasoning and rigorous evidence within Effective Altruism, I speculate that many people decide what is “the most good” based largely on moral intuitions.I should also point out that these moral dilemmas don’t just plague Effective Altruists or those who prescribe to utilitarianism. These thorny moral issues apply to everyone who wants to help others or “do good”. As Richard Chappell neatly puts it, these are puzzles for everyone. If you want to cop-out, you could reject any philosophy where you try to rank how bad or good things are, but this seems extremely unappealing. Surely if we’re given the choice between saving ten people from a terrible disease like malaria and ten people from hiccups, we should be able to easily decide that one is worse than the other? Anything else seems very unhelpful to the world around us, where we allow grave suffering to continue as we don’t think comparing “bads” are possible.To press on, let’s take one concrete example of a moral dilemma: How important is extending lives relative to improving lives? Put more simply, given limited resources, should we focus on averting deaths from easily preventable diseases or increasing people’s quality of life?This is not a question that one can easily answer with randomised controlled trials, meta-analyses and other traditional forms of evidence! Despite this, it might strongly affect what you dedicate your life to working on, or the causes you choose to support. Happier Lives Institute have done some great research looking at this exact question and no surprises - your view on this moral question matters a lot. When looking at charities that might help people alive today, they find that it matters a lot whether you prioritise the youngest people (deprivationism), older children over infants (TRIA), or the view that death isn’t necessarily bad, but that living a better life is what matters most (Epicureanism). For context, the graph below shows the relative cost-effectiveness of various charities under different philosophical assumptions, using the metric WELLBYs, which taken into account the subjective experiences of people.So, we have a problem. If this is a question that could affect what classifies as “the most good”, and I think it’s definitely up there, then how do we proceed? Do we just do some thought experiments, weigh up our intuitions against other beliefs we hold (using reflective equilibrium potentially), incorporate moral uncertainty, and go from there? For a movement based on doing “the most good”, this seems very unsatisfying! But sadly, I think this problem rears its head in several important places. To quote Michael Plant (a philosopher from the University of Oxford and director of Happier Lives Institute):“Well, all disagreements in philosophy ultimately come down to intuitions, not just those in population ethics!”To note, I think this is very different from empirical disagreements about doing “the most good”. For example, Effective Altruism (EA) is pretty good at using data and evidence to get to the bottom of how to do a certain kind of good. One great example is GiveWell, who have an extensive research process, drawing mostly on high-quality randomised control trials (see this spreadsheet for the cost-effectiveness of the Against Malaria Foundation) to find the most effective ways to hel...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What you prioritise is mostly moral intuition, published by James Ozden on December 24, 2022 on The Effective Altruism Forum.Note: A linkpost from my blog. Epistemic status: A confused physicist trying to grapple with philosophy, and losing.Effective Altruism is about doing as much good as possible with a given amount of resources, using reason and evidence. This sounds very appealing, but there’s the age-old problem, what counts as “the most good”? Despite the large focus on careful reasoning and rigorous evidence within Effective Altruism, I speculate that many people decide what is “the most good” based largely on moral intuitions.I should also point out that these moral dilemmas don’t just plague Effective Altruists or those who prescribe to utilitarianism. These thorny moral issues apply to everyone who wants to help others or “do good”. As Richard Chappell neatly puts it, these are puzzles for everyone. If you want to cop-out, you could reject any philosophy where you try to rank how bad or good things are, but this seems extremely unappealing. Surely if we’re given the choice between saving ten people from a terrible disease like malaria and ten people from hiccups, we should be able to easily decide that one is worse than the other? Anything else seems very unhelpful to the world around us, where we allow grave suffering to continue as we don’t think comparing “bads” are possible.To press on, let’s take one concrete example of a moral dilemma: How important is extending lives relative to improving lives? Put more simply, given limited resources, should we focus on averting deaths from easily preventable diseases or increasing people’s quality of life?This is not a question that one can easily answer with randomised controlled trials, meta-analyses and other traditional forms of evidence! Despite this, it might strongly affect what you dedicate your life to working on, or the causes you choose to support. Happier Lives Institute have done some great research looking at this exact question and no surprises - your view on this moral question matters a lot. When looking at charities that might help people alive today, they find that it matters a lot whether you prioritise the youngest people (deprivationism), older children over infants (TRIA), or the view that death isn’t necessarily bad, but that living a better life is what matters most (Epicureanism). For context, the graph below shows the relative cost-effectiveness of various charities under different philosophical assumptions, using the metric WELLBYs, which taken into account the subjective experiences of people.So, we have a problem. If this is a question that could affect what classifies as “the most good”, and I think it’s definitely up there, then how do we proceed? Do we just do some thought experiments, weigh up our intuitions against other beliefs we hold (using reflective equilibrium potentially), incorporate moral uncertainty, and go from there? For a movement based on doing “the most good”, this seems very unsatisfying! But sadly, I think this problem rears its head in several important places. To quote Michael Plant (a philosopher from the University of Oxford and director of Happier Lives Institute):“Well, all disagreements in philosophy ultimately come down to intuitions, not just those in population ethics!”To note, I think this is very different from empirical disagreements about doing “the most good”. For example, Effective Altruism (EA) is pretty good at using data and evidence to get to the bottom of how to do a certain kind of good. One great example is GiveWell, who have an extensive research process, drawing mostly on high-quality randomised control trials (see this spreadsheet for the cost-effectiveness of the Against Malaria Foundation) to find the most effective ways to hel...]]>
James Ozden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 19:42 None full 4245
rEiWzbiWkyBSuLuGy_EA EA - Animal Advocacy Africa’s 2022 Review - Our Achievements and 2023 Strategy by AnimalAdvocacyAfrica Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Advocacy Africa’s 2022 Review - Our Achievements and 2023 Strategy, published by AnimalAdvocacyAfrica on December 23, 2022 on The Effective Altruism Forum.1. SummaryAnimal Advocacy Africa (AAA) works to empower animal advocates who are interested in or are working to reduce farmed animal suffering in African countries. AAA shares knowledge, provides connections, and helps advocates build the skills to run an impactful animal advocacy organisation.This year, we:Helped 17 partner organisations raise a total of ~US$83,000. We discuss how much of this we think may have counterfactually happened without AAA’s support in Section 3.Provided strategic advice and feedback to 15 organisations, and influenced at least 2 of our partners to adopt high-impact interventions, with one adopting a high-impact intervention in preventing or slowing the growth of industrial animal agriculture in Uganda and one leading a cage-free campaign in Ghana.Helped connect at least 6 of our partners to influential figures and organisations in the global Effective Altruism (EA) and animal advocacy movements, with the aim of improving the visibility of the farmed animal advocacy movement in Africa.Released two research reports on farmed animal advocacy in the African and Asian contexts.Next year we intend to:Continue our current capacity-building programme, with changes and improvements made based on feedback and monitoring & evaluation findings.Start regranting funds to especially promising African advocates and organisations to encourage effective programming and interventions that we think are most likely to help farmed animals in African countries.Identify evidence-based strategies and work with local advocates to mitigate the rise of intensive animal farming in Africa as much as possible.Our primary bottleneck, and that of our partner organisations, remains a lack of funding. Our total funding gap for 2023 is $290,000. We intend to mitigate this by:Better emphasising the importance, neglectedness and tractability of farmed animal advocacy work in Africa. This includes improving overall visibility of the movement in Africa by highlighting the work that organisations are doing to funders and the wider international movement, and facilitating networking and connections between African organisations and international advocates.Better demonstrating our added value to African groups and to the African movement more broadly. Consistently tracking our progress and showcasing monitoring & evaluation findings more clearly for potential donors. Relatedly, better highlighting our theory of change with key cruxes validated.Hiring a full-time Fundraising & Communications staff.Increasing and improving outreach to high-net-worth individuals who may be interested in supporting farmed animal welfare in Africa.Registering in the United States to qualify for various matching opportunities offered during Giving Season.2. Why farmed animal advocacy in Africa?We believe that farmed animal advocacy in Africa is a highly impactful project to work on for several reasons:The human population of Africa is expected to nearly triple by 2100. Meat production in Africa has nearly doubled since 2000, and this rate is expected to increase to match the growing population and growing wealth of the continent.Of all continents, Africa has the highest growth rate in aquatic farming.Farmed animal advocacy is incredibly neglected in Africa — in 2019, Open Philanthropy estimated that only $1 million went towards farmed animal advocacy work in Africa per year.Building the farmed animal advocacy movement in Africa before intensive animal farming is locked-in may improve the lives of millions of animals in the near- and long-term future and potentially prevent millions of animals from being born in factory fa...]]>
AnimalAdvocacyAfrica https://forum.effectivealtruism.org/posts/rEiWzbiWkyBSuLuGy/animal-advocacy-africa-s-2022-review-our-achievements-and-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Advocacy Africa’s 2022 Review - Our Achievements and 2023 Strategy, published by AnimalAdvocacyAfrica on December 23, 2022 on The Effective Altruism Forum.1. SummaryAnimal Advocacy Africa (AAA) works to empower animal advocates who are interested in or are working to reduce farmed animal suffering in African countries. AAA shares knowledge, provides connections, and helps advocates build the skills to run an impactful animal advocacy organisation.This year, we:Helped 17 partner organisations raise a total of ~US$83,000. We discuss how much of this we think may have counterfactually happened without AAA’s support in Section 3.Provided strategic advice and feedback to 15 organisations, and influenced at least 2 of our partners to adopt high-impact interventions, with one adopting a high-impact intervention in preventing or slowing the growth of industrial animal agriculture in Uganda and one leading a cage-free campaign in Ghana.Helped connect at least 6 of our partners to influential figures and organisations in the global Effective Altruism (EA) and animal advocacy movements, with the aim of improving the visibility of the farmed animal advocacy movement in Africa.Released two research reports on farmed animal advocacy in the African and Asian contexts.Next year we intend to:Continue our current capacity-building programme, with changes and improvements made based on feedback and monitoring & evaluation findings.Start regranting funds to especially promising African advocates and organisations to encourage effective programming and interventions that we think are most likely to help farmed animals in African countries.Identify evidence-based strategies and work with local advocates to mitigate the rise of intensive animal farming in Africa as much as possible.Our primary bottleneck, and that of our partner organisations, remains a lack of funding. Our total funding gap for 2023 is $290,000. We intend to mitigate this by:Better emphasising the importance, neglectedness and tractability of farmed animal advocacy work in Africa. This includes improving overall visibility of the movement in Africa by highlighting the work that organisations are doing to funders and the wider international movement, and facilitating networking and connections between African organisations and international advocates.Better demonstrating our added value to African groups and to the African movement more broadly. Consistently tracking our progress and showcasing monitoring & evaluation findings more clearly for potential donors. Relatedly, better highlighting our theory of change with key cruxes validated.Hiring a full-time Fundraising & Communications staff.Increasing and improving outreach to high-net-worth individuals who may be interested in supporting farmed animal welfare in Africa.Registering in the United States to qualify for various matching opportunities offered during Giving Season.2. Why farmed animal advocacy in Africa?We believe that farmed animal advocacy in Africa is a highly impactful project to work on for several reasons:The human population of Africa is expected to nearly triple by 2100. Meat production in Africa has nearly doubled since 2000, and this rate is expected to increase to match the growing population and growing wealth of the continent.Of all continents, Africa has the highest growth rate in aquatic farming.Farmed animal advocacy is incredibly neglected in Africa — in 2019, Open Philanthropy estimated that only $1 million went towards farmed animal advocacy work in Africa per year.Building the farmed animal advocacy movement in Africa before intensive animal farming is locked-in may improve the lives of millions of animals in the near- and long-term future and potentially prevent millions of animals from being born in factory fa...]]>
Fri, 23 Dec 2022 23:15:45 +0000 EA - Animal Advocacy Africa’s 2022 Review - Our Achievements and 2023 Strategy by AnimalAdvocacyAfrica Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Advocacy Africa’s 2022 Review - Our Achievements and 2023 Strategy, published by AnimalAdvocacyAfrica on December 23, 2022 on The Effective Altruism Forum.1. SummaryAnimal Advocacy Africa (AAA) works to empower animal advocates who are interested in or are working to reduce farmed animal suffering in African countries. AAA shares knowledge, provides connections, and helps advocates build the skills to run an impactful animal advocacy organisation.This year, we:Helped 17 partner organisations raise a total of ~US$83,000. We discuss how much of this we think may have counterfactually happened without AAA’s support in Section 3.Provided strategic advice and feedback to 15 organisations, and influenced at least 2 of our partners to adopt high-impact interventions, with one adopting a high-impact intervention in preventing or slowing the growth of industrial animal agriculture in Uganda and one leading a cage-free campaign in Ghana.Helped connect at least 6 of our partners to influential figures and organisations in the global Effective Altruism (EA) and animal advocacy movements, with the aim of improving the visibility of the farmed animal advocacy movement in Africa.Released two research reports on farmed animal advocacy in the African and Asian contexts.Next year we intend to:Continue our current capacity-building programme, with changes and improvements made based on feedback and monitoring & evaluation findings.Start regranting funds to especially promising African advocates and organisations to encourage effective programming and interventions that we think are most likely to help farmed animals in African countries.Identify evidence-based strategies and work with local advocates to mitigate the rise of intensive animal farming in Africa as much as possible.Our primary bottleneck, and that of our partner organisations, remains a lack of funding. Our total funding gap for 2023 is $290,000. We intend to mitigate this by:Better emphasising the importance, neglectedness and tractability of farmed animal advocacy work in Africa. This includes improving overall visibility of the movement in Africa by highlighting the work that organisations are doing to funders and the wider international movement, and facilitating networking and connections between African organisations and international advocates.Better demonstrating our added value to African groups and to the African movement more broadly. Consistently tracking our progress and showcasing monitoring & evaluation findings more clearly for potential donors. Relatedly, better highlighting our theory of change with key cruxes validated.Hiring a full-time Fundraising & Communications staff.Increasing and improving outreach to high-net-worth individuals who may be interested in supporting farmed animal welfare in Africa.Registering in the United States to qualify for various matching opportunities offered during Giving Season.2. Why farmed animal advocacy in Africa?We believe that farmed animal advocacy in Africa is a highly impactful project to work on for several reasons:The human population of Africa is expected to nearly triple by 2100. Meat production in Africa has nearly doubled since 2000, and this rate is expected to increase to match the growing population and growing wealth of the continent.Of all continents, Africa has the highest growth rate in aquatic farming.Farmed animal advocacy is incredibly neglected in Africa — in 2019, Open Philanthropy estimated that only $1 million went towards farmed animal advocacy work in Africa per year.Building the farmed animal advocacy movement in Africa before intensive animal farming is locked-in may improve the lives of millions of animals in the near- and long-term future and potentially prevent millions of animals from being born in factory fa...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Advocacy Africa’s 2022 Review - Our Achievements and 2023 Strategy, published by AnimalAdvocacyAfrica on December 23, 2022 on The Effective Altruism Forum.1. SummaryAnimal Advocacy Africa (AAA) works to empower animal advocates who are interested in or are working to reduce farmed animal suffering in African countries. AAA shares knowledge, provides connections, and helps advocates build the skills to run an impactful animal advocacy organisation.This year, we:Helped 17 partner organisations raise a total of ~US$83,000. We discuss how much of this we think may have counterfactually happened without AAA’s support in Section 3.Provided strategic advice and feedback to 15 organisations, and influenced at least 2 of our partners to adopt high-impact interventions, with one adopting a high-impact intervention in preventing or slowing the growth of industrial animal agriculture in Uganda and one leading a cage-free campaign in Ghana.Helped connect at least 6 of our partners to influential figures and organisations in the global Effective Altruism (EA) and animal advocacy movements, with the aim of improving the visibility of the farmed animal advocacy movement in Africa.Released two research reports on farmed animal advocacy in the African and Asian contexts.Next year we intend to:Continue our current capacity-building programme, with changes and improvements made based on feedback and monitoring & evaluation findings.Start regranting funds to especially promising African advocates and organisations to encourage effective programming and interventions that we think are most likely to help farmed animals in African countries.Identify evidence-based strategies and work with local advocates to mitigate the rise of intensive animal farming in Africa as much as possible.Our primary bottleneck, and that of our partner organisations, remains a lack of funding. Our total funding gap for 2023 is $290,000. We intend to mitigate this by:Better emphasising the importance, neglectedness and tractability of farmed animal advocacy work in Africa. This includes improving overall visibility of the movement in Africa by highlighting the work that organisations are doing to funders and the wider international movement, and facilitating networking and connections between African organisations and international advocates.Better demonstrating our added value to African groups and to the African movement more broadly. Consistently tracking our progress and showcasing monitoring & evaluation findings more clearly for potential donors. Relatedly, better highlighting our theory of change with key cruxes validated.Hiring a full-time Fundraising & Communications staff.Increasing and improving outreach to high-net-worth individuals who may be interested in supporting farmed animal welfare in Africa.Registering in the United States to qualify for various matching opportunities offered during Giving Season.2. Why farmed animal advocacy in Africa?We believe that farmed animal advocacy in Africa is a highly impactful project to work on for several reasons:The human population of Africa is expected to nearly triple by 2100. Meat production in Africa has nearly doubled since 2000, and this rate is expected to increase to match the growing population and growing wealth of the continent.Of all continents, Africa has the highest growth rate in aquatic farming.Farmed animal advocacy is incredibly neglected in Africa — in 2019, Open Philanthropy estimated that only $1 million went towards farmed animal advocacy work in Africa per year.Building the farmed animal advocacy movement in Africa before intensive animal farming is locked-in may improve the lives of millions of animals in the near- and long-term future and potentially prevent millions of animals from being born in factory fa...]]>
AnimalAdvocacyAfrica https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 32:07 None full 4243
y8YzsiEzH4B8Pcsgb_EA EA - Can someone help me refine my goals and proposal to make the maximum impact in my cause area? by emmannaemeka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can someone help me refine my goals and proposal to make the maximum impact in my cause area?, published by emmannaemeka on December 23, 2022 on The Effective Altruism Forum.My name is Emmanuel, and I am a lecturer in Nigeria who is interested in how climate change is making fungi in the environment to be fit to cause diseases. The case of Candida auris is one, C. auris is the first known pathogen that has emerged as a result of climate change and phage therapy as a means of combating antibiotic resistance.I've been unsuccessful with open philanthropy grants and EA funds, possibly because I don't understand what they actually fund or how to format my application to meet their requirements. In Nigeria, I recently established a phage bank with the goal of developing phage therapy-ready products that can be used to treat specific bacterial infections. A recent study has shown that western sub-Saharan Africa, has the highest all-age death rate attributable to resistance at 27·3 deaths per 100 000. We are beginning to see untreatable pathogens.My first thought was that this is a good fit for the cause advocated by Open Philanthropy, but with few rejections. I'm having second thoughts. I'm proposing a lab that will produce phage-ready products for clinicians and will also develop low-cost phage-based solutions for treating water against Typhoid fevers and cleaning hospital environments. I'm wondering how to structure this so that it aligns with the goals of Effective altruism.I require guidance. I'd like to speak with someone who is familiar with the process to get a head start, or can I obtain a sample of a successful proposal to use as a guide in the future? Who would be willing to review my proposal and offer suggestions for meeting EA objectives and making a maximum impact?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
emmannaemeka https://forum.effectivealtruism.org/posts/y8YzsiEzH4B8Pcsgb/can-someone-help-me-refine-my-goals-and-proposal-to-make-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can someone help me refine my goals and proposal to make the maximum impact in my cause area?, published by emmannaemeka on December 23, 2022 on The Effective Altruism Forum.My name is Emmanuel, and I am a lecturer in Nigeria who is interested in how climate change is making fungi in the environment to be fit to cause diseases. The case of Candida auris is one, C. auris is the first known pathogen that has emerged as a result of climate change and phage therapy as a means of combating antibiotic resistance.I've been unsuccessful with open philanthropy grants and EA funds, possibly because I don't understand what they actually fund or how to format my application to meet their requirements. In Nigeria, I recently established a phage bank with the goal of developing phage therapy-ready products that can be used to treat specific bacterial infections. A recent study has shown that western sub-Saharan Africa, has the highest all-age death rate attributable to resistance at 27·3 deaths per 100 000. We are beginning to see untreatable pathogens.My first thought was that this is a good fit for the cause advocated by Open Philanthropy, but with few rejections. I'm having second thoughts. I'm proposing a lab that will produce phage-ready products for clinicians and will also develop low-cost phage-based solutions for treating water against Typhoid fevers and cleaning hospital environments. I'm wondering how to structure this so that it aligns with the goals of Effective altruism.I require guidance. I'd like to speak with someone who is familiar with the process to get a head start, or can I obtain a sample of a successful proposal to use as a guide in the future? Who would be willing to review my proposal and offer suggestions for meeting EA objectives and making a maximum impact?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 23 Dec 2022 22:14:04 +0000 EA - Can someone help me refine my goals and proposal to make the maximum impact in my cause area? by emmannaemeka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can someone help me refine my goals and proposal to make the maximum impact in my cause area?, published by emmannaemeka on December 23, 2022 on The Effective Altruism Forum.My name is Emmanuel, and I am a lecturer in Nigeria who is interested in how climate change is making fungi in the environment to be fit to cause diseases. The case of Candida auris is one, C. auris is the first known pathogen that has emerged as a result of climate change and phage therapy as a means of combating antibiotic resistance.I've been unsuccessful with open philanthropy grants and EA funds, possibly because I don't understand what they actually fund or how to format my application to meet their requirements. In Nigeria, I recently established a phage bank with the goal of developing phage therapy-ready products that can be used to treat specific bacterial infections. A recent study has shown that western sub-Saharan Africa, has the highest all-age death rate attributable to resistance at 27·3 deaths per 100 000. We are beginning to see untreatable pathogens.My first thought was that this is a good fit for the cause advocated by Open Philanthropy, but with few rejections. I'm having second thoughts. I'm proposing a lab that will produce phage-ready products for clinicians and will also develop low-cost phage-based solutions for treating water against Typhoid fevers and cleaning hospital environments. I'm wondering how to structure this so that it aligns with the goals of Effective altruism.I require guidance. I'd like to speak with someone who is familiar with the process to get a head start, or can I obtain a sample of a successful proposal to use as a guide in the future? Who would be willing to review my proposal and offer suggestions for meeting EA objectives and making a maximum impact?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can someone help me refine my goals and proposal to make the maximum impact in my cause area?, published by emmannaemeka on December 23, 2022 on The Effective Altruism Forum.My name is Emmanuel, and I am a lecturer in Nigeria who is interested in how climate change is making fungi in the environment to be fit to cause diseases. The case of Candida auris is one, C. auris is the first known pathogen that has emerged as a result of climate change and phage therapy as a means of combating antibiotic resistance.I've been unsuccessful with open philanthropy grants and EA funds, possibly because I don't understand what they actually fund or how to format my application to meet their requirements. In Nigeria, I recently established a phage bank with the goal of developing phage therapy-ready products that can be used to treat specific bacterial infections. A recent study has shown that western sub-Saharan Africa, has the highest all-age death rate attributable to resistance at 27·3 deaths per 100 000. We are beginning to see untreatable pathogens.My first thought was that this is a good fit for the cause advocated by Open Philanthropy, but with few rejections. I'm having second thoughts. I'm proposing a lab that will produce phage-ready products for clinicians and will also develop low-cost phage-based solutions for treating water against Typhoid fevers and cleaning hospital environments. I'm wondering how to structure this so that it aligns with the goals of Effective altruism.I require guidance. I'd like to speak with someone who is familiar with the process to get a head start, or can I obtain a sample of a successful proposal to use as a guide in the future? Who would be willing to review my proposal and offer suggestions for meeting EA objectives and making a maximum impact?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
emmannaemeka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:56 None full 4240
oEmTbsRov6j3KNv5D_EA EA - It's okay to leave by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It's okay to leave, published by Nathan Young on December 23, 2022 on The Effective Altruism Forum.Aged 25, I leave the movement that has been most important in my life so far: Christianity.Tl;drSome things I wish I’d said to myself:If I leave, what will actually happen?It’s probably gonna be okayIf it’s true, I need not fear examining itIf I am not happy, that mattersWhat can I learn by leaving that current me will be glad to know?I will still have my friends and if I don’t then those friendships weren’t gonna surviveAre decisions reversible?I can fear something and do it anywayI like meContextI was a conservative evangelical Christian and had been since childhood. I stuck to a relatively literally interpretation of the Bible. I didn’t drink, I hadn’t had sex, I attended church several times a week. I lived in a community of 30 people who supported and critiqued one another about the most intimate details of our lives. I’ve written more here.It was hard to leave:I feared I’d go to HellI lived with friends from church and they had asked me to leave (which I agreed with) so I needed to find a new homeI didn’t know how to be a non-ChrisitanMuch of my social life revolved around my faithI conception of the future was Christian alsoI acknowledge this isn’t how Christianity is for everyone, though I will argue that conservative evangelicals aren’t exactly handing out pamphlets on how to leave. Making it hard to leave is a feature, not a bug and it’s one I’d like not to replicate.This post is written about a specific belief system and a specific person. I also give advice to others who might leave communities.Advice“If I leave then ______ will happen”I wish I had considered the actual outcomes. What were the things that I was fearing? I guess it was going to Hell, which is a trickier fear than most, a log jam in my brain, that I feared to touch, let alone remove.I think it’s worth considering this for all kinds of decisions - jobs, relationships and communities. If your brain doesn’t know what you’ll lose by leaving, why are you there?“It’s probably gonna be okay”I wish someone had told me this. That it was probably gonna be fine.Part of what took me so long was fare. I was scared of finding new friends, having sex, having money without having to beg my parents.It was fine. I surprised myself by how capable and adaptable I am. I wish someone had held me and told me that I was going to do better than I’d have expected. Perhaps I’d have left a lot earlier than I did.If you are going to leave a community, you are probably gonna do better than it seems. Solve problems 1 at a time. You’ve made it this far.“If it’s true, you need not fear examining it”I was terrified of actually thinking about my faith and ending up in Hell. Terrified of being disloyal, of having to explain my thoughts.But if something is worth believing in, I think there has to be some chance it’s false. The Christianity I believed in was like an abusive partner, harassing and terrifying me about how I would be punished if I left or even considered leaving. I now see that as a clear red flag. If a community makes it hard to leave - get out. If it turns out to be good, you can always rejoin.If your community is giving you good things, then you can write them down. And you can write down the bad things too. Because if it’s a thing your future self is gonna want, the good list should contain bigger things.“You are not happy and that matters”I wasn’t happy. And somehow people didn’t say that to me. And I didn’t say that to myself. Happiness isn’t everything, but it’s a useful internal signal. If I am not happy it is worth listening to my body and wondering why.Are you sad? If so, that’s notable. Is your community making you sad? That’s notable too.“Tell me what you find”Threa...]]>
Nathan Young https://forum.effectivealtruism.org/posts/oEmTbsRov6j3KNv5D/it-s-okay-to-leave Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It's okay to leave, published by Nathan Young on December 23, 2022 on The Effective Altruism Forum.Aged 25, I leave the movement that has been most important in my life so far: Christianity.Tl;drSome things I wish I’d said to myself:If I leave, what will actually happen?It’s probably gonna be okayIf it’s true, I need not fear examining itIf I am not happy, that mattersWhat can I learn by leaving that current me will be glad to know?I will still have my friends and if I don’t then those friendships weren’t gonna surviveAre decisions reversible?I can fear something and do it anywayI like meContextI was a conservative evangelical Christian and had been since childhood. I stuck to a relatively literally interpretation of the Bible. I didn’t drink, I hadn’t had sex, I attended church several times a week. I lived in a community of 30 people who supported and critiqued one another about the most intimate details of our lives. I’ve written more here.It was hard to leave:I feared I’d go to HellI lived with friends from church and they had asked me to leave (which I agreed with) so I needed to find a new homeI didn’t know how to be a non-ChrisitanMuch of my social life revolved around my faithI conception of the future was Christian alsoI acknowledge this isn’t how Christianity is for everyone, though I will argue that conservative evangelicals aren’t exactly handing out pamphlets on how to leave. Making it hard to leave is a feature, not a bug and it’s one I’d like not to replicate.This post is written about a specific belief system and a specific person. I also give advice to others who might leave communities.Advice“If I leave then ______ will happen”I wish I had considered the actual outcomes. What were the things that I was fearing? I guess it was going to Hell, which is a trickier fear than most, a log jam in my brain, that I feared to touch, let alone remove.I think it’s worth considering this for all kinds of decisions - jobs, relationships and communities. If your brain doesn’t know what you’ll lose by leaving, why are you there?“It’s probably gonna be okay”I wish someone had told me this. That it was probably gonna be fine.Part of what took me so long was fare. I was scared of finding new friends, having sex, having money without having to beg my parents.It was fine. I surprised myself by how capable and adaptable I am. I wish someone had held me and told me that I was going to do better than I’d have expected. Perhaps I’d have left a lot earlier than I did.If you are going to leave a community, you are probably gonna do better than it seems. Solve problems 1 at a time. You’ve made it this far.“If it’s true, you need not fear examining it”I was terrified of actually thinking about my faith and ending up in Hell. Terrified of being disloyal, of having to explain my thoughts.But if something is worth believing in, I think there has to be some chance it’s false. The Christianity I believed in was like an abusive partner, harassing and terrifying me about how I would be punished if I left or even considered leaving. I now see that as a clear red flag. If a community makes it hard to leave - get out. If it turns out to be good, you can always rejoin.If your community is giving you good things, then you can write them down. And you can write down the bad things too. Because if it’s a thing your future self is gonna want, the good list should contain bigger things.“You are not happy and that matters”I wasn’t happy. And somehow people didn’t say that to me. And I didn’t say that to myself. Happiness isn’t everything, but it’s a useful internal signal. If I am not happy it is worth listening to my body and wondering why.Are you sad? If so, that’s notable. Is your community making you sad? That’s notable too.“Tell me what you find”Threa...]]>
Fri, 23 Dec 2022 22:01:01 +0000 EA - It's okay to leave by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It's okay to leave, published by Nathan Young on December 23, 2022 on The Effective Altruism Forum.Aged 25, I leave the movement that has been most important in my life so far: Christianity.Tl;drSome things I wish I’d said to myself:If I leave, what will actually happen?It’s probably gonna be okayIf it’s true, I need not fear examining itIf I am not happy, that mattersWhat can I learn by leaving that current me will be glad to know?I will still have my friends and if I don’t then those friendships weren’t gonna surviveAre decisions reversible?I can fear something and do it anywayI like meContextI was a conservative evangelical Christian and had been since childhood. I stuck to a relatively literally interpretation of the Bible. I didn’t drink, I hadn’t had sex, I attended church several times a week. I lived in a community of 30 people who supported and critiqued one another about the most intimate details of our lives. I’ve written more here.It was hard to leave:I feared I’d go to HellI lived with friends from church and they had asked me to leave (which I agreed with) so I needed to find a new homeI didn’t know how to be a non-ChrisitanMuch of my social life revolved around my faithI conception of the future was Christian alsoI acknowledge this isn’t how Christianity is for everyone, though I will argue that conservative evangelicals aren’t exactly handing out pamphlets on how to leave. Making it hard to leave is a feature, not a bug and it’s one I’d like not to replicate.This post is written about a specific belief system and a specific person. I also give advice to others who might leave communities.Advice“If I leave then ______ will happen”I wish I had considered the actual outcomes. What were the things that I was fearing? I guess it was going to Hell, which is a trickier fear than most, a log jam in my brain, that I feared to touch, let alone remove.I think it’s worth considering this for all kinds of decisions - jobs, relationships and communities. If your brain doesn’t know what you’ll lose by leaving, why are you there?“It’s probably gonna be okay”I wish someone had told me this. That it was probably gonna be fine.Part of what took me so long was fare. I was scared of finding new friends, having sex, having money without having to beg my parents.It was fine. I surprised myself by how capable and adaptable I am. I wish someone had held me and told me that I was going to do better than I’d have expected. Perhaps I’d have left a lot earlier than I did.If you are going to leave a community, you are probably gonna do better than it seems. Solve problems 1 at a time. You’ve made it this far.“If it’s true, you need not fear examining it”I was terrified of actually thinking about my faith and ending up in Hell. Terrified of being disloyal, of having to explain my thoughts.But if something is worth believing in, I think there has to be some chance it’s false. The Christianity I believed in was like an abusive partner, harassing and terrifying me about how I would be punished if I left or even considered leaving. I now see that as a clear red flag. If a community makes it hard to leave - get out. If it turns out to be good, you can always rejoin.If your community is giving you good things, then you can write them down. And you can write down the bad things too. Because if it’s a thing your future self is gonna want, the good list should contain bigger things.“You are not happy and that matters”I wasn’t happy. And somehow people didn’t say that to me. And I didn’t say that to myself. Happiness isn’t everything, but it’s a useful internal signal. If I am not happy it is worth listening to my body and wondering why.Are you sad? If so, that’s notable. Is your community making you sad? That’s notable too.“Tell me what you find”Threa...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It's okay to leave, published by Nathan Young on December 23, 2022 on The Effective Altruism Forum.Aged 25, I leave the movement that has been most important in my life so far: Christianity.Tl;drSome things I wish I’d said to myself:If I leave, what will actually happen?It’s probably gonna be okayIf it’s true, I need not fear examining itIf I am not happy, that mattersWhat can I learn by leaving that current me will be glad to know?I will still have my friends and if I don’t then those friendships weren’t gonna surviveAre decisions reversible?I can fear something and do it anywayI like meContextI was a conservative evangelical Christian and had been since childhood. I stuck to a relatively literally interpretation of the Bible. I didn’t drink, I hadn’t had sex, I attended church several times a week. I lived in a community of 30 people who supported and critiqued one another about the most intimate details of our lives. I’ve written more here.It was hard to leave:I feared I’d go to HellI lived with friends from church and they had asked me to leave (which I agreed with) so I needed to find a new homeI didn’t know how to be a non-ChrisitanMuch of my social life revolved around my faithI conception of the future was Christian alsoI acknowledge this isn’t how Christianity is for everyone, though I will argue that conservative evangelicals aren’t exactly handing out pamphlets on how to leave. Making it hard to leave is a feature, not a bug and it’s one I’d like not to replicate.This post is written about a specific belief system and a specific person. I also give advice to others who might leave communities.Advice“If I leave then ______ will happen”I wish I had considered the actual outcomes. What were the things that I was fearing? I guess it was going to Hell, which is a trickier fear than most, a log jam in my brain, that I feared to touch, let alone remove.I think it’s worth considering this for all kinds of decisions - jobs, relationships and communities. If your brain doesn’t know what you’ll lose by leaving, why are you there?“It’s probably gonna be okay”I wish someone had told me this. That it was probably gonna be fine.Part of what took me so long was fare. I was scared of finding new friends, having sex, having money without having to beg my parents.It was fine. I surprised myself by how capable and adaptable I am. I wish someone had held me and told me that I was going to do better than I’d have expected. Perhaps I’d have left a lot earlier than I did.If you are going to leave a community, you are probably gonna do better than it seems. Solve problems 1 at a time. You’ve made it this far.“If it’s true, you need not fear examining it”I was terrified of actually thinking about my faith and ending up in Hell. Terrified of being disloyal, of having to explain my thoughts.But if something is worth believing in, I think there has to be some chance it’s false. The Christianity I believed in was like an abusive partner, harassing and terrifying me about how I would be punished if I left or even considered leaving. I now see that as a clear red flag. If a community makes it hard to leave - get out. If it turns out to be good, you can always rejoin.If your community is giving you good things, then you can write them down. And you can write down the bad things too. Because if it’s a thing your future self is gonna want, the good list should contain bigger things.“You are not happy and that matters”I wasn’t happy. And somehow people didn’t say that to me. And I didn’t say that to myself. Happiness isn’t everything, but it’s a useful internal signal. If I am not happy it is worth listening to my body and wondering why.Are you sad? If so, that’s notable. Is your community making you sad? That’s notable too.“Tell me what you find”Threa...]]>
Nathan Young https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:09 None full 4237
vwK3v3Mekf6Jjpeep_EA EA - Let’s think about slowing down AI by Katja Grace Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let’s think about slowing down AI, published by Katja Grace on December 23, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Katja Grace https://forum.effectivealtruism.org/posts/vwK3v3Mekf6Jjpeep/let-s-think-about-slowing-down-ai-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let’s think about slowing down AI, published by Katja Grace on December 23, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 23 Dec 2022 20:35:33 +0000 EA - Let’s think about slowing down AI by Katja Grace Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let’s think about slowing down AI, published by Katja Grace on December 23, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let’s think about slowing down AI, published by Katja Grace on December 23, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Katja Grace https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:25 None full 4236
CwKwwxRpNWmG2jgPT_EA EA - EAGx application community norms we'd like to see by Elika Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGx application community norms we'd like to see, published by Elika on December 23, 2022 on The Effective Altruism Forum.This post is part of an ongoing series: Events in EA: Learnings and Critiques. This post was co-written but is written in Elika's voice from her experience organising EAGxBerkeley. This was writenElika worked (as a contractor) for the Centre for Effective Altruism (CEA) to run EAGxBerkeley. These are written in her personal capacity running the conference and are not endorsed by CEA.TL;DR: Cancel your EAGx ticket as early as possible if you can't attend.Note: This isn’t true of all EAGx’s. Some of this likely applies to EAG’s but we don’t speak for the EAG process / team. This is based on organiser frustrations and feedback from EAGxBerkeley and EAGxBoston.Why? It sometimes takes spots away from other (often new) EA's, it impacts the organisers and the other attendees, and wastes money and resources. We aren't saying don't apply to EAGx's, just apply and register only if you seriously plan on attending. And cancel your ticket as soon as you know you can’t make it.Why This MattersConferences cost money.A lot more money than people realise. The biggest costs are (in rough order): catering, travel grants, venue, organiser salaries, merch. Catering estimates are often given 1-2 months before the event (e.g. we will have 600-700 attendees for X meals), rough food orders are often placed a month before, and final catering numbers are usually due a week before the conference. This is tight timing with when admissions and registration close (which we try to keep open as late as possible to let the most number of people apply and attend, usually about a week - 10 days before the event).For EAGxBerkeley, applications closed on November 21st. Registration formally closed on the 25th, and the conference started just 5 days later on December 1st. Our catering numbers of 600 were planned a month in advance and solidified at 630 a week before the conference due to a flurry of additional applications we we're extremely excited about. About 500-550 people attended the conference. We planned on 630.We spent about $35,000 (lower bound) on meals for no-shows. We spent roughly $350-$500 per person on catering for EAGxBerkeley. The full estimates are about 100 extra meals at $67-$100 a meal x 5 meals = $33,500 - $50,000. EAGxBerkeley provided less meals than typical (5 instead of 8). With 8 meals, snacks, and drinks (which we didn't provide) that's roughly $60,000 - $100,000.The same situation happened at EAGxBoston, which accepted 100 late applications in the week leading up to the conference. The team didn't know "the final number of attendees for the event until a few days before the event . vendors who needed quantities (such as catering, merchandise, security) were given rough estimates which had to be overestimations, further raising costs.".It impacts the applicants who would be good fits and can come.When events get to capacity and there's still applications left to review - which often happens for EAGx’s - organisers have two decisions: either increase event costs (which happened at EAGxBoston) or reject people (which happened at EAGxBerkeley). Letting in late applicants is often not logistically feasible.When we run out of capacity, it’s often first time applicants (new EA’s or people exploring EA) who get rejected. This is because as we have fewer spots available, the bar (of what qualifies you to get in) for admissions often increases and many first time applicants apply close to the deadline (presumably because they are unsure about attending and the application can be overwhelming).This is a separate post about why you should apply early to conferences, how you're more likely to get in, but most people apply the night before applications a...]]>
Elika https://forum.effectivealtruism.org/posts/CwKwwxRpNWmG2jgPT/eagx-application-community-norms-we-d-like-to-see Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGx application community norms we'd like to see, published by Elika on December 23, 2022 on The Effective Altruism Forum.This post is part of an ongoing series: Events in EA: Learnings and Critiques. This post was co-written but is written in Elika's voice from her experience organising EAGxBerkeley. This was writenElika worked (as a contractor) for the Centre for Effective Altruism (CEA) to run EAGxBerkeley. These are written in her personal capacity running the conference and are not endorsed by CEA.TL;DR: Cancel your EAGx ticket as early as possible if you can't attend.Note: This isn’t true of all EAGx’s. Some of this likely applies to EAG’s but we don’t speak for the EAG process / team. This is based on organiser frustrations and feedback from EAGxBerkeley and EAGxBoston.Why? It sometimes takes spots away from other (often new) EA's, it impacts the organisers and the other attendees, and wastes money and resources. We aren't saying don't apply to EAGx's, just apply and register only if you seriously plan on attending. And cancel your ticket as soon as you know you can’t make it.Why This MattersConferences cost money.A lot more money than people realise. The biggest costs are (in rough order): catering, travel grants, venue, organiser salaries, merch. Catering estimates are often given 1-2 months before the event (e.g. we will have 600-700 attendees for X meals), rough food orders are often placed a month before, and final catering numbers are usually due a week before the conference. This is tight timing with when admissions and registration close (which we try to keep open as late as possible to let the most number of people apply and attend, usually about a week - 10 days before the event).For EAGxBerkeley, applications closed on November 21st. Registration formally closed on the 25th, and the conference started just 5 days later on December 1st. Our catering numbers of 600 were planned a month in advance and solidified at 630 a week before the conference due to a flurry of additional applications we we're extremely excited about. About 500-550 people attended the conference. We planned on 630.We spent about $35,000 (lower bound) on meals for no-shows. We spent roughly $350-$500 per person on catering for EAGxBerkeley. The full estimates are about 100 extra meals at $67-$100 a meal x 5 meals = $33,500 - $50,000. EAGxBerkeley provided less meals than typical (5 instead of 8). With 8 meals, snacks, and drinks (which we didn't provide) that's roughly $60,000 - $100,000.The same situation happened at EAGxBoston, which accepted 100 late applications in the week leading up to the conference. The team didn't know "the final number of attendees for the event until a few days before the event . vendors who needed quantities (such as catering, merchandise, security) were given rough estimates which had to be overestimations, further raising costs.".It impacts the applicants who would be good fits and can come.When events get to capacity and there's still applications left to review - which often happens for EAGx’s - organisers have two decisions: either increase event costs (which happened at EAGxBoston) or reject people (which happened at EAGxBerkeley). Letting in late applicants is often not logistically feasible.When we run out of capacity, it’s often first time applicants (new EA’s or people exploring EA) who get rejected. This is because as we have fewer spots available, the bar (of what qualifies you to get in) for admissions often increases and many first time applicants apply close to the deadline (presumably because they are unsure about attending and the application can be overwhelming).This is a separate post about why you should apply early to conferences, how you're more likely to get in, but most people apply the night before applications a...]]>
Fri, 23 Dec 2022 19:58:26 +0000 EA - EAGx application community norms we'd like to see by Elika Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGx application community norms we'd like to see, published by Elika on December 23, 2022 on The Effective Altruism Forum.This post is part of an ongoing series: Events in EA: Learnings and Critiques. This post was co-written but is written in Elika's voice from her experience organising EAGxBerkeley. This was writenElika worked (as a contractor) for the Centre for Effective Altruism (CEA) to run EAGxBerkeley. These are written in her personal capacity running the conference and are not endorsed by CEA.TL;DR: Cancel your EAGx ticket as early as possible if you can't attend.Note: This isn’t true of all EAGx’s. Some of this likely applies to EAG’s but we don’t speak for the EAG process / team. This is based on organiser frustrations and feedback from EAGxBerkeley and EAGxBoston.Why? It sometimes takes spots away from other (often new) EA's, it impacts the organisers and the other attendees, and wastes money and resources. We aren't saying don't apply to EAGx's, just apply and register only if you seriously plan on attending. And cancel your ticket as soon as you know you can’t make it.Why This MattersConferences cost money.A lot more money than people realise. The biggest costs are (in rough order): catering, travel grants, venue, organiser salaries, merch. Catering estimates are often given 1-2 months before the event (e.g. we will have 600-700 attendees for X meals), rough food orders are often placed a month before, and final catering numbers are usually due a week before the conference. This is tight timing with when admissions and registration close (which we try to keep open as late as possible to let the most number of people apply and attend, usually about a week - 10 days before the event).For EAGxBerkeley, applications closed on November 21st. Registration formally closed on the 25th, and the conference started just 5 days later on December 1st. Our catering numbers of 600 were planned a month in advance and solidified at 630 a week before the conference due to a flurry of additional applications we we're extremely excited about. About 500-550 people attended the conference. We planned on 630.We spent about $35,000 (lower bound) on meals for no-shows. We spent roughly $350-$500 per person on catering for EAGxBerkeley. The full estimates are about 100 extra meals at $67-$100 a meal x 5 meals = $33,500 - $50,000. EAGxBerkeley provided less meals than typical (5 instead of 8). With 8 meals, snacks, and drinks (which we didn't provide) that's roughly $60,000 - $100,000.The same situation happened at EAGxBoston, which accepted 100 late applications in the week leading up to the conference. The team didn't know "the final number of attendees for the event until a few days before the event . vendors who needed quantities (such as catering, merchandise, security) were given rough estimates which had to be overestimations, further raising costs.".It impacts the applicants who would be good fits and can come.When events get to capacity and there's still applications left to review - which often happens for EAGx’s - organisers have two decisions: either increase event costs (which happened at EAGxBoston) or reject people (which happened at EAGxBerkeley). Letting in late applicants is often not logistically feasible.When we run out of capacity, it’s often first time applicants (new EA’s or people exploring EA) who get rejected. This is because as we have fewer spots available, the bar (of what qualifies you to get in) for admissions often increases and many first time applicants apply close to the deadline (presumably because they are unsure about attending and the application can be overwhelming).This is a separate post about why you should apply early to conferences, how you're more likely to get in, but most people apply the night before applications a...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGx application community norms we'd like to see, published by Elika on December 23, 2022 on The Effective Altruism Forum.This post is part of an ongoing series: Events in EA: Learnings and Critiques. This post was co-written but is written in Elika's voice from her experience organising EAGxBerkeley. This was writenElika worked (as a contractor) for the Centre for Effective Altruism (CEA) to run EAGxBerkeley. These are written in her personal capacity running the conference and are not endorsed by CEA.TL;DR: Cancel your EAGx ticket as early as possible if you can't attend.Note: This isn’t true of all EAGx’s. Some of this likely applies to EAG’s but we don’t speak for the EAG process / team. This is based on organiser frustrations and feedback from EAGxBerkeley and EAGxBoston.Why? It sometimes takes spots away from other (often new) EA's, it impacts the organisers and the other attendees, and wastes money and resources. We aren't saying don't apply to EAGx's, just apply and register only if you seriously plan on attending. And cancel your ticket as soon as you know you can’t make it.Why This MattersConferences cost money.A lot more money than people realise. The biggest costs are (in rough order): catering, travel grants, venue, organiser salaries, merch. Catering estimates are often given 1-2 months before the event (e.g. we will have 600-700 attendees for X meals), rough food orders are often placed a month before, and final catering numbers are usually due a week before the conference. This is tight timing with when admissions and registration close (which we try to keep open as late as possible to let the most number of people apply and attend, usually about a week - 10 days before the event).For EAGxBerkeley, applications closed on November 21st. Registration formally closed on the 25th, and the conference started just 5 days later on December 1st. Our catering numbers of 600 were planned a month in advance and solidified at 630 a week before the conference due to a flurry of additional applications we we're extremely excited about. About 500-550 people attended the conference. We planned on 630.We spent about $35,000 (lower bound) on meals for no-shows. We spent roughly $350-$500 per person on catering for EAGxBerkeley. The full estimates are about 100 extra meals at $67-$100 a meal x 5 meals = $33,500 - $50,000. EAGxBerkeley provided less meals than typical (5 instead of 8). With 8 meals, snacks, and drinks (which we didn't provide) that's roughly $60,000 - $100,000.The same situation happened at EAGxBoston, which accepted 100 late applications in the week leading up to the conference. The team didn't know "the final number of attendees for the event until a few days before the event . vendors who needed quantities (such as catering, merchandise, security) were given rough estimates which had to be overestimations, further raising costs.".It impacts the applicants who would be good fits and can come.When events get to capacity and there's still applications left to review - which often happens for EAGx’s - organisers have two decisions: either increase event costs (which happened at EAGxBoston) or reject people (which happened at EAGxBerkeley). Letting in late applicants is often not logistically feasible.When we run out of capacity, it’s often first time applicants (new EA’s or people exploring EA) who get rejected. This is because as we have fewer spots available, the bar (of what qualifies you to get in) for admissions often increases and many first time applicants apply close to the deadline (presumably because they are unsure about attending and the application can be overwhelming).This is a separate post about why you should apply early to conferences, how you're more likely to get in, but most people apply the night before applications a...]]>
Elika https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:41 None full 4238
cvGps3nwcKJTq69yE_EA EA - Introducing the Center for Effective Aid Policy (CEAP) by MathiasKB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Center for Effective Aid Policy (CEAP), published by MathiasKB on December 23, 2022 on The Effective Altruism Forum.We are incredibly excited to announce the launch of the Center for Effective Aid Policy (CEAP), a new non-profit incubated by Charity Entrepreneurship. Our mission is to improve the effectiveness of development aid. We will do so through policy advocacy to governments and INGOs.If you are unfamiliar with development aid and want to learn more, we wrote an introduction to the field which was posted to the forum last week. In short:Development aid represents one fourth of all charitable giving worldwide and is not known for its immaculate efficiency and clandestine operations. The cost-effectiveness of many aid projects can be vastly improved and we believe there are many opportunities to do so.In this post we will go over:Our near term plans.Speculative plans for the long term.How you can help!We additionally hope sharing our tentative plans can be a step towards greater organizational transparency in the effective altruism community. Some, both inside and outside this community, will disagree that our organization is a good use of resources. Our funding would most likely have gone to highly effective charities counterfactually.Being held accountable and scrutinized for our decisions, might hurt us in the short run but benefits everyone in the long run.Many parliaments are surrounded by ‘think-tanks’ who seek to influence policy in directions that just so happen to benefit the industries which are funding them. Decision makers should be free to evaluate our organization’s priorities and decide for themselves if they agree.Interventions we are excited aboutWe are entering a well-established field and stand on the shoulders of giants.There are many organizations with decades of experience doing great work to improve aid effectiveness - from research institutes doing academic research, to organizations solely focused on advocacy.Our work builds upon the collective research of thousands of academics, practitioners, and policy makers who have worked tirelessly for decades to improve the quality of aid.As a new organization we have the ability to move fast and break things. Taking risks on one's own behalf is all well and good, but mistakes we make might hurt the efforts of other organizations advocating for cost-effective aid spending.When we set out to prioritize between the many possible interventions, we looked for aid policies that experts in development and policymakers alike were excited about when interviewed.We have reached three of interventions that we think look especially promising:Interventions we want to address in our first year:Advocacy for cash-benchmarking of aid projectsCash-benchmarking advocacy was the intervention with the best reception among policymakers and academics we interviewed. Despite this, there is very little information available on cash-benchmarking. It doesn’t even have a wikipedia page! Google’s top result for cash-benchmarking is a one page report by USAID, describing a recent successful experiment they did.The discrepancy between the possible impact of improved benchmarking, the excitement of decision-makers, and lack of high quality public material is larger than for any other improvement to aid we reviewed. To change this state of affairs we are producing a comprehensive report, which can serve as a safe point of referral for policymakers and advocates.Advocacy to affect aid cutsA recent trend in western countries is for governments to cut aid spending. In 2020, the UK government made the decision to cut its aid spending from 0.7 to 0.5% of GNI. In 2022 the newly elected Swedish government cut future spending from 1% to 0.7%.Governments are also classifying previously non-aid bud...]]>
MathiasKB https://forum.effectivealtruism.org/posts/cvGps3nwcKJTq69yE/introducing-the-center-for-effective-aid-policy-ceap Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Center for Effective Aid Policy (CEAP), published by MathiasKB on December 23, 2022 on The Effective Altruism Forum.We are incredibly excited to announce the launch of the Center for Effective Aid Policy (CEAP), a new non-profit incubated by Charity Entrepreneurship. Our mission is to improve the effectiveness of development aid. We will do so through policy advocacy to governments and INGOs.If you are unfamiliar with development aid and want to learn more, we wrote an introduction to the field which was posted to the forum last week. In short:Development aid represents one fourth of all charitable giving worldwide and is not known for its immaculate efficiency and clandestine operations. The cost-effectiveness of many aid projects can be vastly improved and we believe there are many opportunities to do so.In this post we will go over:Our near term plans.Speculative plans for the long term.How you can help!We additionally hope sharing our tentative plans can be a step towards greater organizational transparency in the effective altruism community. Some, both inside and outside this community, will disagree that our organization is a good use of resources. Our funding would most likely have gone to highly effective charities counterfactually.Being held accountable and scrutinized for our decisions, might hurt us in the short run but benefits everyone in the long run.Many parliaments are surrounded by ‘think-tanks’ who seek to influence policy in directions that just so happen to benefit the industries which are funding them. Decision makers should be free to evaluate our organization’s priorities and decide for themselves if they agree.Interventions we are excited aboutWe are entering a well-established field and stand on the shoulders of giants.There are many organizations with decades of experience doing great work to improve aid effectiveness - from research institutes doing academic research, to organizations solely focused on advocacy.Our work builds upon the collective research of thousands of academics, practitioners, and policy makers who have worked tirelessly for decades to improve the quality of aid.As a new organization we have the ability to move fast and break things. Taking risks on one's own behalf is all well and good, but mistakes we make might hurt the efforts of other organizations advocating for cost-effective aid spending.When we set out to prioritize between the many possible interventions, we looked for aid policies that experts in development and policymakers alike were excited about when interviewed.We have reached three of interventions that we think look especially promising:Interventions we want to address in our first year:Advocacy for cash-benchmarking of aid projectsCash-benchmarking advocacy was the intervention with the best reception among policymakers and academics we interviewed. Despite this, there is very little information available on cash-benchmarking. It doesn’t even have a wikipedia page! Google’s top result for cash-benchmarking is a one page report by USAID, describing a recent successful experiment they did.The discrepancy between the possible impact of improved benchmarking, the excitement of decision-makers, and lack of high quality public material is larger than for any other improvement to aid we reviewed. To change this state of affairs we are producing a comprehensive report, which can serve as a safe point of referral for policymakers and advocates.Advocacy to affect aid cutsA recent trend in western countries is for governments to cut aid spending. In 2020, the UK government made the decision to cut its aid spending from 0.7 to 0.5% of GNI. In 2022 the newly elected Swedish government cut future spending from 1% to 0.7%.Governments are also classifying previously non-aid bud...]]>
Fri, 23 Dec 2022 16:26:40 +0000 EA - Introducing the Center for Effective Aid Policy (CEAP) by MathiasKB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Center for Effective Aid Policy (CEAP), published by MathiasKB on December 23, 2022 on The Effective Altruism Forum.We are incredibly excited to announce the launch of the Center for Effective Aid Policy (CEAP), a new non-profit incubated by Charity Entrepreneurship. Our mission is to improve the effectiveness of development aid. We will do so through policy advocacy to governments and INGOs.If you are unfamiliar with development aid and want to learn more, we wrote an introduction to the field which was posted to the forum last week. In short:Development aid represents one fourth of all charitable giving worldwide and is not known for its immaculate efficiency and clandestine operations. The cost-effectiveness of many aid projects can be vastly improved and we believe there are many opportunities to do so.In this post we will go over:Our near term plans.Speculative plans for the long term.How you can help!We additionally hope sharing our tentative plans can be a step towards greater organizational transparency in the effective altruism community. Some, both inside and outside this community, will disagree that our organization is a good use of resources. Our funding would most likely have gone to highly effective charities counterfactually.Being held accountable and scrutinized for our decisions, might hurt us in the short run but benefits everyone in the long run.Many parliaments are surrounded by ‘think-tanks’ who seek to influence policy in directions that just so happen to benefit the industries which are funding them. Decision makers should be free to evaluate our organization’s priorities and decide for themselves if they agree.Interventions we are excited aboutWe are entering a well-established field and stand on the shoulders of giants.There are many organizations with decades of experience doing great work to improve aid effectiveness - from research institutes doing academic research, to organizations solely focused on advocacy.Our work builds upon the collective research of thousands of academics, practitioners, and policy makers who have worked tirelessly for decades to improve the quality of aid.As a new organization we have the ability to move fast and break things. Taking risks on one's own behalf is all well and good, but mistakes we make might hurt the efforts of other organizations advocating for cost-effective aid spending.When we set out to prioritize between the many possible interventions, we looked for aid policies that experts in development and policymakers alike were excited about when interviewed.We have reached three of interventions that we think look especially promising:Interventions we want to address in our first year:Advocacy for cash-benchmarking of aid projectsCash-benchmarking advocacy was the intervention with the best reception among policymakers and academics we interviewed. Despite this, there is very little information available on cash-benchmarking. It doesn’t even have a wikipedia page! Google’s top result for cash-benchmarking is a one page report by USAID, describing a recent successful experiment they did.The discrepancy between the possible impact of improved benchmarking, the excitement of decision-makers, and lack of high quality public material is larger than for any other improvement to aid we reviewed. To change this state of affairs we are producing a comprehensive report, which can serve as a safe point of referral for policymakers and advocates.Advocacy to affect aid cutsA recent trend in western countries is for governments to cut aid spending. In 2020, the UK government made the decision to cut its aid spending from 0.7 to 0.5% of GNI. In 2022 the newly elected Swedish government cut future spending from 1% to 0.7%.Governments are also classifying previously non-aid bud...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Center for Effective Aid Policy (CEAP), published by MathiasKB on December 23, 2022 on The Effective Altruism Forum.We are incredibly excited to announce the launch of the Center for Effective Aid Policy (CEAP), a new non-profit incubated by Charity Entrepreneurship. Our mission is to improve the effectiveness of development aid. We will do so through policy advocacy to governments and INGOs.If you are unfamiliar with development aid and want to learn more, we wrote an introduction to the field which was posted to the forum last week. In short:Development aid represents one fourth of all charitable giving worldwide and is not known for its immaculate efficiency and clandestine operations. The cost-effectiveness of many aid projects can be vastly improved and we believe there are many opportunities to do so.In this post we will go over:Our near term plans.Speculative plans for the long term.How you can help!We additionally hope sharing our tentative plans can be a step towards greater organizational transparency in the effective altruism community. Some, both inside and outside this community, will disagree that our organization is a good use of resources. Our funding would most likely have gone to highly effective charities counterfactually.Being held accountable and scrutinized for our decisions, might hurt us in the short run but benefits everyone in the long run.Many parliaments are surrounded by ‘think-tanks’ who seek to influence policy in directions that just so happen to benefit the industries which are funding them. Decision makers should be free to evaluate our organization’s priorities and decide for themselves if they agree.Interventions we are excited aboutWe are entering a well-established field and stand on the shoulders of giants.There are many organizations with decades of experience doing great work to improve aid effectiveness - from research institutes doing academic research, to organizations solely focused on advocacy.Our work builds upon the collective research of thousands of academics, practitioners, and policy makers who have worked tirelessly for decades to improve the quality of aid.As a new organization we have the ability to move fast and break things. Taking risks on one's own behalf is all well and good, but mistakes we make might hurt the efforts of other organizations advocating for cost-effective aid spending.When we set out to prioritize between the many possible interventions, we looked for aid policies that experts in development and policymakers alike were excited about when interviewed.We have reached three of interventions that we think look especially promising:Interventions we want to address in our first year:Advocacy for cash-benchmarking of aid projectsCash-benchmarking advocacy was the intervention with the best reception among policymakers and academics we interviewed. Despite this, there is very little information available on cash-benchmarking. It doesn’t even have a wikipedia page! Google’s top result for cash-benchmarking is a one page report by USAID, describing a recent successful experiment they did.The discrepancy between the possible impact of improved benchmarking, the excitement of decision-makers, and lack of high quality public material is larger than for any other improvement to aid we reviewed. To change this state of affairs we are producing a comprehensive report, which can serve as a safe point of referral for policymakers and advocates.Advocacy to affect aid cutsA recent trend in western countries is for governments to cut aid spending. In 2020, the UK government made the decision to cut its aid spending from 0.7 to 0.5% of GNI. In 2022 the newly elected Swedish government cut future spending from 1% to 0.7%.Governments are also classifying previously non-aid bud...]]>
MathiasKB https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:07 None full 4239
2S3CHPwaJBE5h8umW_EA EA - Read The Sequences by Quadratic Reciprocity Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Read The Sequences, published by Quadratic Reciprocity on December 23, 2022 on The Effective Altruism Forum.There is heavy overlap among the effective altruism and rationality communities but they are not the same thing. Within the effective altruism community, especially among those who are newer to the movement and were introduced to it through a university group, I've noticed some tension between the two. I often sense the vibe that sometimes people into effective altruism who haven’t read much of the canonical LessWrong content write off the rationalist stuff as weird or unimportant.I think this is a pretty big mistake.Lots of people doing very valuable work within effective altruism got interested in it via first interacting with rationalist content, in particular The Sequences and Harry Potter and the Methods of Rationality. I think that is for good reason. If you haven’t come across those writings before, here’s a nudge to give The Sequences a read. The Sequences are a (really long) collection of blog posts written by Eliezer Yudkowsky on the science and philosophy of human rationality. They are divided into sequences - a list of posts on a similar topic. Most of the posts would have been pretty useful to me on their own but I also got more value from reading posts in a particular sequence to better internalise the concepts. There are slightly fewer posts in The Sequences than there are days in the year so reading the whole thing is a very doable thing to do in the coming year! You can also read Highlights from the Sequences which cover 50 of the best essays.Below, I’ll list some of the parts that I have found especially helpful and that I often try to point to when talking to people into effective altruism (things I wish they had read too).Fake Beliefs is an excellent sequence if you already know a bit about biases in human thinking. The key insight there is about making beliefs pay rent (“don’t ask what to believe—ask what to anticipate”) and that sometimes your expectations can come apart from your professed beliefs (fake beliefs). The ideas were helpful for me noticing when that happens, for example when I believe I believe something but actually do not. It happens a bunch when I start talking about abstract, wordy things but forget to ask myself what I would actually expect to see in the world if the things I am saying were true.Noticing Confusion is a cool sequence that talks about things like:What is evidence? (“For an event to be evidence about a target of inquiry, it has to happen differently in a way that’s entangled with the different possible states of the target”)Your strength as a rationalist is your ability to be more confused by fiction than by reality - noticing confusion when something doesn’t check out and going EITHER MY MODEL IS FALSE OR THIS STORY IS WRONGAbsence of evidence is evidence of absence, and conservation of expected evidence (“If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction”)I am often surrounded by people who are very smart and say convincing-sounding things all the time. The ideas mentioned above have helped me better recognise when I'm confused and when a smooth-sounding argument doesn't match up with how I think the world actually works.Against rationalisation has things that are useful to remember:Knowing about biases can hurt people. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarisation. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to some biases.Not to avoid your belief's real weak points. “Ask yourself what smart people...]]>
Quadratic Reciprocity https://forum.effectivealtruism.org/posts/2S3CHPwaJBE5h8umW/read-the-sequences Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Read The Sequences, published by Quadratic Reciprocity on December 23, 2022 on The Effective Altruism Forum.There is heavy overlap among the effective altruism and rationality communities but they are not the same thing. Within the effective altruism community, especially among those who are newer to the movement and were introduced to it through a university group, I've noticed some tension between the two. I often sense the vibe that sometimes people into effective altruism who haven’t read much of the canonical LessWrong content write off the rationalist stuff as weird or unimportant.I think this is a pretty big mistake.Lots of people doing very valuable work within effective altruism got interested in it via first interacting with rationalist content, in particular The Sequences and Harry Potter and the Methods of Rationality. I think that is for good reason. If you haven’t come across those writings before, here’s a nudge to give The Sequences a read. The Sequences are a (really long) collection of blog posts written by Eliezer Yudkowsky on the science and philosophy of human rationality. They are divided into sequences - a list of posts on a similar topic. Most of the posts would have been pretty useful to me on their own but I also got more value from reading posts in a particular sequence to better internalise the concepts. There are slightly fewer posts in The Sequences than there are days in the year so reading the whole thing is a very doable thing to do in the coming year! You can also read Highlights from the Sequences which cover 50 of the best essays.Below, I’ll list some of the parts that I have found especially helpful and that I often try to point to when talking to people into effective altruism (things I wish they had read too).Fake Beliefs is an excellent sequence if you already know a bit about biases in human thinking. The key insight there is about making beliefs pay rent (“don’t ask what to believe—ask what to anticipate”) and that sometimes your expectations can come apart from your professed beliefs (fake beliefs). The ideas were helpful for me noticing when that happens, for example when I believe I believe something but actually do not. It happens a bunch when I start talking about abstract, wordy things but forget to ask myself what I would actually expect to see in the world if the things I am saying were true.Noticing Confusion is a cool sequence that talks about things like:What is evidence? (“For an event to be evidence about a target of inquiry, it has to happen differently in a way that’s entangled with the different possible states of the target”)Your strength as a rationalist is your ability to be more confused by fiction than by reality - noticing confusion when something doesn’t check out and going EITHER MY MODEL IS FALSE OR THIS STORY IS WRONGAbsence of evidence is evidence of absence, and conservation of expected evidence (“If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction”)I am often surrounded by people who are very smart and say convincing-sounding things all the time. The ideas mentioned above have helped me better recognise when I'm confused and when a smooth-sounding argument doesn't match up with how I think the world actually works.Against rationalisation has things that are useful to remember:Knowing about biases can hurt people. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarisation. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to some biases.Not to avoid your belief's real weak points. “Ask yourself what smart people...]]>
Fri, 23 Dec 2022 16:17:04 +0000 EA - Read The Sequences by Quadratic Reciprocity Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Read The Sequences, published by Quadratic Reciprocity on December 23, 2022 on The Effective Altruism Forum.There is heavy overlap among the effective altruism and rationality communities but they are not the same thing. Within the effective altruism community, especially among those who are newer to the movement and were introduced to it through a university group, I've noticed some tension between the two. I often sense the vibe that sometimes people into effective altruism who haven’t read much of the canonical LessWrong content write off the rationalist stuff as weird or unimportant.I think this is a pretty big mistake.Lots of people doing very valuable work within effective altruism got interested in it via first interacting with rationalist content, in particular The Sequences and Harry Potter and the Methods of Rationality. I think that is for good reason. If you haven’t come across those writings before, here’s a nudge to give The Sequences a read. The Sequences are a (really long) collection of blog posts written by Eliezer Yudkowsky on the science and philosophy of human rationality. They are divided into sequences - a list of posts on a similar topic. Most of the posts would have been pretty useful to me on their own but I also got more value from reading posts in a particular sequence to better internalise the concepts. There are slightly fewer posts in The Sequences than there are days in the year so reading the whole thing is a very doable thing to do in the coming year! You can also read Highlights from the Sequences which cover 50 of the best essays.Below, I’ll list some of the parts that I have found especially helpful and that I often try to point to when talking to people into effective altruism (things I wish they had read too).Fake Beliefs is an excellent sequence if you already know a bit about biases in human thinking. The key insight there is about making beliefs pay rent (“don’t ask what to believe—ask what to anticipate”) and that sometimes your expectations can come apart from your professed beliefs (fake beliefs). The ideas were helpful for me noticing when that happens, for example when I believe I believe something but actually do not. It happens a bunch when I start talking about abstract, wordy things but forget to ask myself what I would actually expect to see in the world if the things I am saying were true.Noticing Confusion is a cool sequence that talks about things like:What is evidence? (“For an event to be evidence about a target of inquiry, it has to happen differently in a way that’s entangled with the different possible states of the target”)Your strength as a rationalist is your ability to be more confused by fiction than by reality - noticing confusion when something doesn’t check out and going EITHER MY MODEL IS FALSE OR THIS STORY IS WRONGAbsence of evidence is evidence of absence, and conservation of expected evidence (“If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction”)I am often surrounded by people who are very smart and say convincing-sounding things all the time. The ideas mentioned above have helped me better recognise when I'm confused and when a smooth-sounding argument doesn't match up with how I think the world actually works.Against rationalisation has things that are useful to remember:Knowing about biases can hurt people. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarisation. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to some biases.Not to avoid your belief's real weak points. “Ask yourself what smart people...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Read The Sequences, published by Quadratic Reciprocity on December 23, 2022 on The Effective Altruism Forum.There is heavy overlap among the effective altruism and rationality communities but they are not the same thing. Within the effective altruism community, especially among those who are newer to the movement and were introduced to it through a university group, I've noticed some tension between the two. I often sense the vibe that sometimes people into effective altruism who haven’t read much of the canonical LessWrong content write off the rationalist stuff as weird or unimportant.I think this is a pretty big mistake.Lots of people doing very valuable work within effective altruism got interested in it via first interacting with rationalist content, in particular The Sequences and Harry Potter and the Methods of Rationality. I think that is for good reason. If you haven’t come across those writings before, here’s a nudge to give The Sequences a read. The Sequences are a (really long) collection of blog posts written by Eliezer Yudkowsky on the science and philosophy of human rationality. They are divided into sequences - a list of posts on a similar topic. Most of the posts would have been pretty useful to me on their own but I also got more value from reading posts in a particular sequence to better internalise the concepts. There are slightly fewer posts in The Sequences than there are days in the year so reading the whole thing is a very doable thing to do in the coming year! You can also read Highlights from the Sequences which cover 50 of the best essays.Below, I’ll list some of the parts that I have found especially helpful and that I often try to point to when talking to people into effective altruism (things I wish they had read too).Fake Beliefs is an excellent sequence if you already know a bit about biases in human thinking. The key insight there is about making beliefs pay rent (“don’t ask what to believe—ask what to anticipate”) and that sometimes your expectations can come apart from your professed beliefs (fake beliefs). The ideas were helpful for me noticing when that happens, for example when I believe I believe something but actually do not. It happens a bunch when I start talking about abstract, wordy things but forget to ask myself what I would actually expect to see in the world if the things I am saying were true.Noticing Confusion is a cool sequence that talks about things like:What is evidence? (“For an event to be evidence about a target of inquiry, it has to happen differently in a way that’s entangled with the different possible states of the target”)Your strength as a rationalist is your ability to be more confused by fiction than by reality - noticing confusion when something doesn’t check out and going EITHER MY MODEL IS FALSE OR THIS STORY IS WRONGAbsence of evidence is evidence of absence, and conservation of expected evidence (“If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction”)I am often surrounded by people who are very smart and say convincing-sounding things all the time. The ideas mentioned above have helped me better recognise when I'm confused and when a smooth-sounding argument doesn't match up with how I think the world actually works.Against rationalisation has things that are useful to remember:Knowing about biases can hurt people. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarisation. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to some biases.Not to avoid your belief's real weak points. “Ask yourself what smart people...]]>
Quadratic Reciprocity https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:15 None full 4241
sFemFbiFTntgtQDbD_EA EA - Katja Grace: Let's think about slowing down AI by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Katja Grace: Let's think about slowing down AI, published by peterhartree on December 23, 2022 on The Effective Altruism Forum.On Twitter, Katja Grace wrote:I think people should think more about trying to slow down AI progress, if they believe it's going to destroy the world soon. I know people have like eighteen reasons to dismiss this idea out of hand, but I dispute them.The introduction to the post is below. Do read the whole thing.Consider reading alongside:Holden Karnofsky's recent posts on Cold Takes, especially "Racing Through a Minefield: the AI Deployment Problem".Scott Alexander's recent "Perhaps It Is A Bad Thing That The World's Leading AI Companies Cannot Control Their AIs".Averting doom by not building the doom machineIf you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous.The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and also in its favor, fits nicely in the genre ‘stuff that it isn’t that hard to imagine happening in the real world’. Yet my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading ‘actively slow down AI progress’ have historically been dismissed and ignored (though ‘don’t actively speed up AI progress’ is popular).The conversation near me over the years has felt a bit like this:Some people: AI might kill everyone. We should design a godlike super-AI of perfect goodness to prevent that.Others: wow that sounds extremely ambitiousSome people: yeah but it’s very important and also we are extremely smart so idk it could work[Work on it for a decade and a half]Some people: ok that’s pretty hard, we give upOthers: oh huh shouldn’t we maybe try to stop the building of this dangerous AI?Some people: hmm, that would involve coordinating numerous people—we may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren’t delusionalThis seems like an error to me. (And lately, to a bunch of other people.)I don’t have a strong view on whether anything in the space of ‘try to slow down some AI research’ should be done. But I think a) the naive first-pass guess should be a strong ‘probably’, and b) a decent amount of thinking should happen before writing off everything in this large space of interventions. Whereas customarily the tentative answer seems to be, ‘of course not’ and then the topic seems to be avoided for further thinking. (At least in my experience—the AI safety community is large, and for most things I say here, different experiences are probably had in different bits of it.)Maybe my strongest view is that one shouldn’t apply such different standards of ambition to these different classes of intervention. Like: yes, there appear to be substantial difficulties in slowing down AI progress to good effect. But in technical alignment, mountainous challenges are met with enthusiasm for mountainous efforts. And it is very non-obvious that the scale of difficulty here is much larger than that involved in designing acceptably safe versions of machines capable of taking over the world before anyone else in the world designs dangerous versions.I’ve been talking about this with people over the past many months, and have accumulated an abundance of reasons for not trying to slow down AI, most of which I’d like to argue about at least a bit....]]>
peterhartree https://forum.effectivealtruism.org/posts/sFemFbiFTntgtQDbD/katja-grace-let-s-think-about-slowing-down-ai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Katja Grace: Let's think about slowing down AI, published by peterhartree on December 23, 2022 on The Effective Altruism Forum.On Twitter, Katja Grace wrote:I think people should think more about trying to slow down AI progress, if they believe it's going to destroy the world soon. I know people have like eighteen reasons to dismiss this idea out of hand, but I dispute them.The introduction to the post is below. Do read the whole thing.Consider reading alongside:Holden Karnofsky's recent posts on Cold Takes, especially "Racing Through a Minefield: the AI Deployment Problem".Scott Alexander's recent "Perhaps It Is A Bad Thing That The World's Leading AI Companies Cannot Control Their AIs".Averting doom by not building the doom machineIf you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous.The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and also in its favor, fits nicely in the genre ‘stuff that it isn’t that hard to imagine happening in the real world’. Yet my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading ‘actively slow down AI progress’ have historically been dismissed and ignored (though ‘don’t actively speed up AI progress’ is popular).The conversation near me over the years has felt a bit like this:Some people: AI might kill everyone. We should design a godlike super-AI of perfect goodness to prevent that.Others: wow that sounds extremely ambitiousSome people: yeah but it’s very important and also we are extremely smart so idk it could work[Work on it for a decade and a half]Some people: ok that’s pretty hard, we give upOthers: oh huh shouldn’t we maybe try to stop the building of this dangerous AI?Some people: hmm, that would involve coordinating numerous people—we may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren’t delusionalThis seems like an error to me. (And lately, to a bunch of other people.)I don’t have a strong view on whether anything in the space of ‘try to slow down some AI research’ should be done. But I think a) the naive first-pass guess should be a strong ‘probably’, and b) a decent amount of thinking should happen before writing off everything in this large space of interventions. Whereas customarily the tentative answer seems to be, ‘of course not’ and then the topic seems to be avoided for further thinking. (At least in my experience—the AI safety community is large, and for most things I say here, different experiences are probably had in different bits of it.)Maybe my strongest view is that one shouldn’t apply such different standards of ambition to these different classes of intervention. Like: yes, there appear to be substantial difficulties in slowing down AI progress to good effect. But in technical alignment, mountainous challenges are met with enthusiasm for mountainous efforts. And it is very non-obvious that the scale of difficulty here is much larger than that involved in designing acceptably safe versions of machines capable of taking over the world before anyone else in the world designs dangerous versions.I’ve been talking about this with people over the past many months, and have accumulated an abundance of reasons for not trying to slow down AI, most of which I’d like to argue about at least a bit....]]>
Fri, 23 Dec 2022 07:34:58 +0000 EA - Katja Grace: Let's think about slowing down AI by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Katja Grace: Let's think about slowing down AI, published by peterhartree on December 23, 2022 on The Effective Altruism Forum.On Twitter, Katja Grace wrote:I think people should think more about trying to slow down AI progress, if they believe it's going to destroy the world soon. I know people have like eighteen reasons to dismiss this idea out of hand, but I dispute them.The introduction to the post is below. Do read the whole thing.Consider reading alongside:Holden Karnofsky's recent posts on Cold Takes, especially "Racing Through a Minefield: the AI Deployment Problem".Scott Alexander's recent "Perhaps It Is A Bad Thing That The World's Leading AI Companies Cannot Control Their AIs".Averting doom by not building the doom machineIf you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous.The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and also in its favor, fits nicely in the genre ‘stuff that it isn’t that hard to imagine happening in the real world’. Yet my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading ‘actively slow down AI progress’ have historically been dismissed and ignored (though ‘don’t actively speed up AI progress’ is popular).The conversation near me over the years has felt a bit like this:Some people: AI might kill everyone. We should design a godlike super-AI of perfect goodness to prevent that.Others: wow that sounds extremely ambitiousSome people: yeah but it’s very important and also we are extremely smart so idk it could work[Work on it for a decade and a half]Some people: ok that’s pretty hard, we give upOthers: oh huh shouldn’t we maybe try to stop the building of this dangerous AI?Some people: hmm, that would involve coordinating numerous people—we may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren’t delusionalThis seems like an error to me. (And lately, to a bunch of other people.)I don’t have a strong view on whether anything in the space of ‘try to slow down some AI research’ should be done. But I think a) the naive first-pass guess should be a strong ‘probably’, and b) a decent amount of thinking should happen before writing off everything in this large space of interventions. Whereas customarily the tentative answer seems to be, ‘of course not’ and then the topic seems to be avoided for further thinking. (At least in my experience—the AI safety community is large, and for most things I say here, different experiences are probably had in different bits of it.)Maybe my strongest view is that one shouldn’t apply such different standards of ambition to these different classes of intervention. Like: yes, there appear to be substantial difficulties in slowing down AI progress to good effect. But in technical alignment, mountainous challenges are met with enthusiasm for mountainous efforts. And it is very non-obvious that the scale of difficulty here is much larger than that involved in designing acceptably safe versions of machines capable of taking over the world before anyone else in the world designs dangerous versions.I’ve been talking about this with people over the past many months, and have accumulated an abundance of reasons for not trying to slow down AI, most of which I’d like to argue about at least a bit....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Katja Grace: Let's think about slowing down AI, published by peterhartree on December 23, 2022 on The Effective Altruism Forum.On Twitter, Katja Grace wrote:I think people should think more about trying to slow down AI progress, if they believe it's going to destroy the world soon. I know people have like eighteen reasons to dismiss this idea out of hand, but I dispute them.The introduction to the post is below. Do read the whole thing.Consider reading alongside:Holden Karnofsky's recent posts on Cold Takes, especially "Racing Through a Minefield: the AI Deployment Problem".Scott Alexander's recent "Perhaps It Is A Bad Thing That The World's Leading AI Companies Cannot Control Their AIs".Averting doom by not building the doom machineIf you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous.The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and also in its favor, fits nicely in the genre ‘stuff that it isn’t that hard to imagine happening in the real world’. Yet my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading ‘actively slow down AI progress’ have historically been dismissed and ignored (though ‘don’t actively speed up AI progress’ is popular).The conversation near me over the years has felt a bit like this:Some people: AI might kill everyone. We should design a godlike super-AI of perfect goodness to prevent that.Others: wow that sounds extremely ambitiousSome people: yeah but it’s very important and also we are extremely smart so idk it could work[Work on it for a decade and a half]Some people: ok that’s pretty hard, we give upOthers: oh huh shouldn’t we maybe try to stop the building of this dangerous AI?Some people: hmm, that would involve coordinating numerous people—we may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren’t delusionalThis seems like an error to me. (And lately, to a bunch of other people.)I don’t have a strong view on whether anything in the space of ‘try to slow down some AI research’ should be done. But I think a) the naive first-pass guess should be a strong ‘probably’, and b) a decent amount of thinking should happen before writing off everything in this large space of interventions. Whereas customarily the tentative answer seems to be, ‘of course not’ and then the topic seems to be avoided for further thinking. (At least in my experience—the AI safety community is large, and for most things I say here, different experiences are probably had in different bits of it.)Maybe my strongest view is that one shouldn’t apply such different standards of ambition to these different classes of intervention. Like: yes, there appear to be substantial difficulties in slowing down AI progress to good effect. But in technical alignment, mountainous challenges are met with enthusiasm for mountainous efforts. And it is very non-obvious that the scale of difficulty here is much larger than that involved in designing acceptably safe versions of machines capable of taking over the world before anyone else in the world designs dangerous versions.I’ve been talking about this with people over the past many months, and have accumulated an abundance of reasons for not trying to slow down AI, most of which I’d like to argue about at least a bit....]]>
peterhartree https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:42 None full 4233
opDisq67NLmhZE358_EA EA - FTX’s collapse mirrors an infamous 18th century British financial scandal by Michael Huang Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX’s collapse mirrors an infamous 18th century British financial scandal, published by Michael Huang on December 22, 2022 on The Effective Altruism Forum.Amy Froide, Professor of History at the University of Maryland, discusses the remarkable similarities between the recent FTX collapse and the Charitable Corporation Scandal of 1732....the management of both ventures was centralized in the hands of just a few people. The Charitable Corporation got into trouble when it reduced its directors from 12 to five and when it consolidated most of its loan business in the hands of one employee – namely, Thomson. FTX’s example is even more extreme, with founder Sam Bankman-Fried calling all the shots...In both cases, the key fraud was using the assets of one company to prop up another company managed by the same people...News of both frauds also came as a surprise, with little advance warning. Part of this is due to the ways in which managers were well respected and well connected to both politicians and the financial world...I would also argue that in both cases the company’s connection to philanthropy lent it another level of cover. The Charitable Corporation’s very name announced its altruism. And even after the scandal subsided, commentators pointed out that the original business of microlending was useful. FTX’s founder Bankman-Fried is an advocate of effective altruism and has argued that it was useful for him and his companies to make lots of money so he could give it away to what he deemed effective causes.It's worth noting that the Charitable Corporation's demise did not lead to the end of microlending. Pioneers in micro-credit, Muhammad Yunus and Grameen Bank, won the Nobel Peace Prize in 2006.In the same way, FTX's demise does not necessarily lead to the end of cryptocurrency, or the end of effective altruism.It does support the case for more oversight and stronger regulations:After the Charitable Corporation’s collapse in 1732, Parliament didn’t institute any regulation that would prevent such a fraud from happening again.A tradition of loose oversight and regulations has been the hallmark of Anglo-American capitalism. If the response to the 2008 financial crash is any indication of what will come in the wake of FTX’s collapse, it’s possible that some bad actors, like Bankman-Fried, will be punished. But any regulation will be undone at the first opportunity–or never put in place to begin with.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Michael Huang https://forum.effectivealtruism.org/posts/opDisq67NLmhZE358/ftx-s-collapse-mirrors-an-infamous-18th-century-british Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX’s collapse mirrors an infamous 18th century British financial scandal, published by Michael Huang on December 22, 2022 on The Effective Altruism Forum.Amy Froide, Professor of History at the University of Maryland, discusses the remarkable similarities between the recent FTX collapse and the Charitable Corporation Scandal of 1732....the management of both ventures was centralized in the hands of just a few people. The Charitable Corporation got into trouble when it reduced its directors from 12 to five and when it consolidated most of its loan business in the hands of one employee – namely, Thomson. FTX’s example is even more extreme, with founder Sam Bankman-Fried calling all the shots...In both cases, the key fraud was using the assets of one company to prop up another company managed by the same people...News of both frauds also came as a surprise, with little advance warning. Part of this is due to the ways in which managers were well respected and well connected to both politicians and the financial world...I would also argue that in both cases the company’s connection to philanthropy lent it another level of cover. The Charitable Corporation’s very name announced its altruism. And even after the scandal subsided, commentators pointed out that the original business of microlending was useful. FTX’s founder Bankman-Fried is an advocate of effective altruism and has argued that it was useful for him and his companies to make lots of money so he could give it away to what he deemed effective causes.It's worth noting that the Charitable Corporation's demise did not lead to the end of microlending. Pioneers in micro-credit, Muhammad Yunus and Grameen Bank, won the Nobel Peace Prize in 2006.In the same way, FTX's demise does not necessarily lead to the end of cryptocurrency, or the end of effective altruism.It does support the case for more oversight and stronger regulations:After the Charitable Corporation’s collapse in 1732, Parliament didn’t institute any regulation that would prevent such a fraud from happening again.A tradition of loose oversight and regulations has been the hallmark of Anglo-American capitalism. If the response to the 2008 financial crash is any indication of what will come in the wake of FTX’s collapse, it’s possible that some bad actors, like Bankman-Fried, will be punished. But any regulation will be undone at the first opportunity–or never put in place to begin with.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 23 Dec 2022 00:06:56 +0000 EA - FTX’s collapse mirrors an infamous 18th century British financial scandal by Michael Huang Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX’s collapse mirrors an infamous 18th century British financial scandal, published by Michael Huang on December 22, 2022 on The Effective Altruism Forum.Amy Froide, Professor of History at the University of Maryland, discusses the remarkable similarities between the recent FTX collapse and the Charitable Corporation Scandal of 1732....the management of both ventures was centralized in the hands of just a few people. The Charitable Corporation got into trouble when it reduced its directors from 12 to five and when it consolidated most of its loan business in the hands of one employee – namely, Thomson. FTX’s example is even more extreme, with founder Sam Bankman-Fried calling all the shots...In both cases, the key fraud was using the assets of one company to prop up another company managed by the same people...News of both frauds also came as a surprise, with little advance warning. Part of this is due to the ways in which managers were well respected and well connected to both politicians and the financial world...I would also argue that in both cases the company’s connection to philanthropy lent it another level of cover. The Charitable Corporation’s very name announced its altruism. And even after the scandal subsided, commentators pointed out that the original business of microlending was useful. FTX’s founder Bankman-Fried is an advocate of effective altruism and has argued that it was useful for him and his companies to make lots of money so he could give it away to what he deemed effective causes.It's worth noting that the Charitable Corporation's demise did not lead to the end of microlending. Pioneers in micro-credit, Muhammad Yunus and Grameen Bank, won the Nobel Peace Prize in 2006.In the same way, FTX's demise does not necessarily lead to the end of cryptocurrency, or the end of effective altruism.It does support the case for more oversight and stronger regulations:After the Charitable Corporation’s collapse in 1732, Parliament didn’t institute any regulation that would prevent such a fraud from happening again.A tradition of loose oversight and regulations has been the hallmark of Anglo-American capitalism. If the response to the 2008 financial crash is any indication of what will come in the wake of FTX’s collapse, it’s possible that some bad actors, like Bankman-Fried, will be punished. But any regulation will be undone at the first opportunity–or never put in place to begin with.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX’s collapse mirrors an infamous 18th century British financial scandal, published by Michael Huang on December 22, 2022 on The Effective Altruism Forum.Amy Froide, Professor of History at the University of Maryland, discusses the remarkable similarities between the recent FTX collapse and the Charitable Corporation Scandal of 1732....the management of both ventures was centralized in the hands of just a few people. The Charitable Corporation got into trouble when it reduced its directors from 12 to five and when it consolidated most of its loan business in the hands of one employee – namely, Thomson. FTX’s example is even more extreme, with founder Sam Bankman-Fried calling all the shots...In both cases, the key fraud was using the assets of one company to prop up another company managed by the same people...News of both frauds also came as a surprise, with little advance warning. Part of this is due to the ways in which managers were well respected and well connected to both politicians and the financial world...I would also argue that in both cases the company’s connection to philanthropy lent it another level of cover. The Charitable Corporation’s very name announced its altruism. And even after the scandal subsided, commentators pointed out that the original business of microlending was useful. FTX’s founder Bankman-Fried is an advocate of effective altruism and has argued that it was useful for him and his companies to make lots of money so he could give it away to what he deemed effective causes.It's worth noting that the Charitable Corporation's demise did not lead to the end of microlending. Pioneers in micro-credit, Muhammad Yunus and Grameen Bank, won the Nobel Peace Prize in 2006.In the same way, FTX's demise does not necessarily lead to the end of cryptocurrency, or the end of effective altruism.It does support the case for more oversight and stronger regulations:After the Charitable Corporation’s collapse in 1732, Parliament didn’t institute any regulation that would prevent such a fraud from happening again.A tradition of loose oversight and regulations has been the hallmark of Anglo-American capitalism. If the response to the 2008 financial crash is any indication of what will come in the wake of FTX’s collapse, it’s possible that some bad actors, like Bankman-Fried, will be punished. But any regulation will be undone at the first opportunity–or never put in place to begin with.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Michael Huang https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:32 None full 4235
7xvsnLM7FkwCrvkxT_EA EA - EA Global 2023 applications are now open! by Ivan Burduk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Global 2023 applications are now open!, published by Ivan Burduk on December 22, 2022 on The Effective Altruism Forum.Apply NowUpcoming conferencesEA Global: Bay Area 2023Dates: 24–26 February 2023Location: Oakland Marriott City CenterApplication deadline: 8 February 2023 (8:00am UTC)Early bird registration deadline: 13 January 2023 (8:00am UTC)EA Global: London 2023Dates: 19–21 May 2023Location: Tobacco DockApplication deadline: 28 April 2023 (8:00am UTC)Early bird registration deadline: 31 March 2023 (8:00am UTC)What is an EA Global conference like?EA Globals bring together a network of people who are using the principles of effective altruism to take significant action to have a positive impact on the world. For each event, we’re expecting around 1500 people to come together for a weekend of talks, workshops, and other opportunities. We put a special emphasis on networking and 1:1 meetings, and attendees frequently meet future collaborators, find new jobs, or make new friends.You can check out content from previous conferences here, and reflections from past attendees here.Is EA Global the right conference for me?EA Global is mostly aimed at people who have a solid understanding of the core ideas of effective altruism and who are already taking significant actions based on those ideas. This often looks like professionally working on effective-altruism-inspired projects or working out how best to work on such projects.EAGx’s are community-run conferences and tend to have broader admissions criteria. They are primarily for people familiar with the core ideas of effective altruism, who want to learn more about what they can do, and want to meet the community, especially in their region. You can see an up to date list of confirmed conferences and their dates here.If you want to attend but are unsure about whether to apply, please err on the side of applying. You can read more about admissions for EA Global on our FAQ page.How do I apply to an EA Global?Click here to go to our (new!) application form.There's one application form for all EA Globals in 2023, and if your EA Global application is accepted, you can then register for any EA Global in 2023. Though please note that for each event you will have to apply before the application deadline, and we will not be reviewing applications submitted in the window between this deadline and the upcoming conference. Additionally, please note that EAGx applications are separate as these are community run events with different admissions criteria. You can apply for EAGx events via the corresponding page on our website.This change is part of a new application system which will require you to login using a www.effectivealtruism.org account to apply (similar to how you login to the forum). This has allowed us to enable a variety of features that we’re hoping will allow for a smoother user experience.How much does it cost to attend EA Global?Default ticket price: £200 GBP (with the option of a free ticket for students)For an automatic 25% ticket discount, register by the "early bird" application deadline (approved application required)Discounts are available — you can select from a range of ticket price options during checkout.Updated travel expense policyWe (the CEA events team) are planning to reduce spending on events in the coming year, which includes reducing our travel grants budget. Previously we’ve been able to fund people to travel to any EA conference they’ve been accepted to, but we will likely be significantly more conservative in the coming year. For example, we can still provide visa support letters, but we likely won’t be able to fund visa expenses. You can check our updated travel support policy for more details.While we cannot guarantee travel funding, please do still apply for support if ...]]>
Ivan Burduk https://forum.effectivealtruism.org/posts/7xvsnLM7FkwCrvkxT/ea-global-2023-applications-are-now-open Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Global 2023 applications are now open!, published by Ivan Burduk on December 22, 2022 on The Effective Altruism Forum.Apply NowUpcoming conferencesEA Global: Bay Area 2023Dates: 24–26 February 2023Location: Oakland Marriott City CenterApplication deadline: 8 February 2023 (8:00am UTC)Early bird registration deadline: 13 January 2023 (8:00am UTC)EA Global: London 2023Dates: 19–21 May 2023Location: Tobacco DockApplication deadline: 28 April 2023 (8:00am UTC)Early bird registration deadline: 31 March 2023 (8:00am UTC)What is an EA Global conference like?EA Globals bring together a network of people who are using the principles of effective altruism to take significant action to have a positive impact on the world. For each event, we’re expecting around 1500 people to come together for a weekend of talks, workshops, and other opportunities. We put a special emphasis on networking and 1:1 meetings, and attendees frequently meet future collaborators, find new jobs, or make new friends.You can check out content from previous conferences here, and reflections from past attendees here.Is EA Global the right conference for me?EA Global is mostly aimed at people who have a solid understanding of the core ideas of effective altruism and who are already taking significant actions based on those ideas. This often looks like professionally working on effective-altruism-inspired projects or working out how best to work on such projects.EAGx’s are community-run conferences and tend to have broader admissions criteria. They are primarily for people familiar with the core ideas of effective altruism, who want to learn more about what they can do, and want to meet the community, especially in their region. You can see an up to date list of confirmed conferences and their dates here.If you want to attend but are unsure about whether to apply, please err on the side of applying. You can read more about admissions for EA Global on our FAQ page.How do I apply to an EA Global?Click here to go to our (new!) application form.There's one application form for all EA Globals in 2023, and if your EA Global application is accepted, you can then register for any EA Global in 2023. Though please note that for each event you will have to apply before the application deadline, and we will not be reviewing applications submitted in the window between this deadline and the upcoming conference. Additionally, please note that EAGx applications are separate as these are community run events with different admissions criteria. You can apply for EAGx events via the corresponding page on our website.This change is part of a new application system which will require you to login using a www.effectivealtruism.org account to apply (similar to how you login to the forum). This has allowed us to enable a variety of features that we’re hoping will allow for a smoother user experience.How much does it cost to attend EA Global?Default ticket price: £200 GBP (with the option of a free ticket for students)For an automatic 25% ticket discount, register by the "early bird" application deadline (approved application required)Discounts are available — you can select from a range of ticket price options during checkout.Updated travel expense policyWe (the CEA events team) are planning to reduce spending on events in the coming year, which includes reducing our travel grants budget. Previously we’ve been able to fund people to travel to any EA conference they’ve been accepted to, but we will likely be significantly more conservative in the coming year. For example, we can still provide visa support letters, but we likely won’t be able to fund visa expenses. You can check our updated travel support policy for more details.While we cannot guarantee travel funding, please do still apply for support if ...]]>
Thu, 22 Dec 2022 21:27:31 +0000 EA - EA Global 2023 applications are now open! by Ivan Burduk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Global 2023 applications are now open!, published by Ivan Burduk on December 22, 2022 on The Effective Altruism Forum.Apply NowUpcoming conferencesEA Global: Bay Area 2023Dates: 24–26 February 2023Location: Oakland Marriott City CenterApplication deadline: 8 February 2023 (8:00am UTC)Early bird registration deadline: 13 January 2023 (8:00am UTC)EA Global: London 2023Dates: 19–21 May 2023Location: Tobacco DockApplication deadline: 28 April 2023 (8:00am UTC)Early bird registration deadline: 31 March 2023 (8:00am UTC)What is an EA Global conference like?EA Globals bring together a network of people who are using the principles of effective altruism to take significant action to have a positive impact on the world. For each event, we’re expecting around 1500 people to come together for a weekend of talks, workshops, and other opportunities. We put a special emphasis on networking and 1:1 meetings, and attendees frequently meet future collaborators, find new jobs, or make new friends.You can check out content from previous conferences here, and reflections from past attendees here.Is EA Global the right conference for me?EA Global is mostly aimed at people who have a solid understanding of the core ideas of effective altruism and who are already taking significant actions based on those ideas. This often looks like professionally working on effective-altruism-inspired projects or working out how best to work on such projects.EAGx’s are community-run conferences and tend to have broader admissions criteria. They are primarily for people familiar with the core ideas of effective altruism, who want to learn more about what they can do, and want to meet the community, especially in their region. You can see an up to date list of confirmed conferences and their dates here.If you want to attend but are unsure about whether to apply, please err on the side of applying. You can read more about admissions for EA Global on our FAQ page.How do I apply to an EA Global?Click here to go to our (new!) application form.There's one application form for all EA Globals in 2023, and if your EA Global application is accepted, you can then register for any EA Global in 2023. Though please note that for each event you will have to apply before the application deadline, and we will not be reviewing applications submitted in the window between this deadline and the upcoming conference. Additionally, please note that EAGx applications are separate as these are community run events with different admissions criteria. You can apply for EAGx events via the corresponding page on our website.This change is part of a new application system which will require you to login using a www.effectivealtruism.org account to apply (similar to how you login to the forum). This has allowed us to enable a variety of features that we’re hoping will allow for a smoother user experience.How much does it cost to attend EA Global?Default ticket price: £200 GBP (with the option of a free ticket for students)For an automatic 25% ticket discount, register by the "early bird" application deadline (approved application required)Discounts are available — you can select from a range of ticket price options during checkout.Updated travel expense policyWe (the CEA events team) are planning to reduce spending on events in the coming year, which includes reducing our travel grants budget. Previously we’ve been able to fund people to travel to any EA conference they’ve been accepted to, but we will likely be significantly more conservative in the coming year. For example, we can still provide visa support letters, but we likely won’t be able to fund visa expenses. You can check our updated travel support policy for more details.While we cannot guarantee travel funding, please do still apply for support if ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Global 2023 applications are now open!, published by Ivan Burduk on December 22, 2022 on The Effective Altruism Forum.Apply NowUpcoming conferencesEA Global: Bay Area 2023Dates: 24–26 February 2023Location: Oakland Marriott City CenterApplication deadline: 8 February 2023 (8:00am UTC)Early bird registration deadline: 13 January 2023 (8:00am UTC)EA Global: London 2023Dates: 19–21 May 2023Location: Tobacco DockApplication deadline: 28 April 2023 (8:00am UTC)Early bird registration deadline: 31 March 2023 (8:00am UTC)What is an EA Global conference like?EA Globals bring together a network of people who are using the principles of effective altruism to take significant action to have a positive impact on the world. For each event, we’re expecting around 1500 people to come together for a weekend of talks, workshops, and other opportunities. We put a special emphasis on networking and 1:1 meetings, and attendees frequently meet future collaborators, find new jobs, or make new friends.You can check out content from previous conferences here, and reflections from past attendees here.Is EA Global the right conference for me?EA Global is mostly aimed at people who have a solid understanding of the core ideas of effective altruism and who are already taking significant actions based on those ideas. This often looks like professionally working on effective-altruism-inspired projects or working out how best to work on such projects.EAGx’s are community-run conferences and tend to have broader admissions criteria. They are primarily for people familiar with the core ideas of effective altruism, who want to learn more about what they can do, and want to meet the community, especially in their region. You can see an up to date list of confirmed conferences and their dates here.If you want to attend but are unsure about whether to apply, please err on the side of applying. You can read more about admissions for EA Global on our FAQ page.How do I apply to an EA Global?Click here to go to our (new!) application form.There's one application form for all EA Globals in 2023, and if your EA Global application is accepted, you can then register for any EA Global in 2023. Though please note that for each event you will have to apply before the application deadline, and we will not be reviewing applications submitted in the window between this deadline and the upcoming conference. Additionally, please note that EAGx applications are separate as these are community run events with different admissions criteria. You can apply for EAGx events via the corresponding page on our website.This change is part of a new application system which will require you to login using a www.effectivealtruism.org account to apply (similar to how you login to the forum). This has allowed us to enable a variety of features that we’re hoping will allow for a smoother user experience.How much does it cost to attend EA Global?Default ticket price: £200 GBP (with the option of a free ticket for students)For an automatic 25% ticket discount, register by the "early bird" application deadline (approved application required)Discounts are available — you can select from a range of ticket price options during checkout.Updated travel expense policyWe (the CEA events team) are planning to reduce spending on events in the coming year, which includes reducing our travel grants budget. Previously we’ve been able to fund people to travel to any EA conference they’ve been accepted to, but we will likely be significantly more conservative in the coming year. For example, we can still provide visa support letters, but we likely won’t be able to fund visa expenses. You can check our updated travel support policy for more details.While we cannot guarantee travel funding, please do still apply for support if ...]]>
Ivan Burduk https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:36 None full 4222
bbAzJ33DDw6EmxekP_EA EA - New intervention: paying farmers to not burn crops by Karthik Tadepalli Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New intervention: paying farmers to not burn crops, published by Karthik Tadepalli on December 22, 2022 on The Effective Altruism Forum.I summarize a recent paper evaluating an intervention to reduce air pollution: paying farmers not to burn their crop residue.A pure conditional contract is ineffective, but when farmers are paid some fraction upfront, they reduce crop burning significantly.The authors calculate that this contract saves a life for $5,000, comparable to GiveWell's top charities; I calculate an even lower cost of $1,500 per life saved using a broader measurement of health benefits, beating GiveWell's top charities.I also calculate promising climate co-benefits; this intervention reduces GHG emissions for roughly $36 a ton, which costs less than the social cost of carbon, but more than the best climate mitigation strategies.I strongly recommend further research and piloting as a way to build on this single study, especially given the lack of scalable air pollution interventions in EA.Air pollution causes at least 7 million premature deaths each year. Despite this, it has only recently surfaced as a top cause for effective altruists, with Open Philanthropy announcing South Asian air quality as its newest focus area last year. Even with this recent focus, grants have focused mostly on research rather than on interventions to actually improve air quality. The problem is that we don't yet have shovel-ready air pollution interventions, demonstrably cost-effective interventions that are feasible and scalable for charities to implement themselves (rather than relying on uncertain advocacy). I think one recent paper offers a cost-effective, feasible and scalable intervention to reduce air pollution, and we should investigate it much more closely.OverviewOne pernicious source of air pollution in developing countries[1] is crop residue burning, where farmers burn the remnants of their summer crop to quickly clear the fields for their winter crop. The pollution from this burning causes 66,000 premature deaths a year in India alone. Both bans on crop burning and subsidizing alternatives have failed, leaving crop burning in desperate need of a solution.A new working paper by Kelsey Jack, Seema Jayachandran, Namrata Kala and Rohini Pande proposes one solution: paying farmers directly not to burn crop residue, with a partial payment upfront and the remainder conditional on not burning. They show this contract reduces crop burning with an RCT, while purely conditional payments have no effect. Importantly, they also calculate that this approach saves a life for around $5,000, which is competitive with GiveWell's top charities.I believe their approach may actually underestimate the benefits of reduced crop burning by focusing solely on premature deaths; using an alternative approach based on DALYs lost, I estimate that this intervention saves a life for $1,500, which beats GiveWell's top charities. In addition, it has important climate co-benefits that traditional health interventions do not.I think this study poses an important line of research for EA organizations to pursue, and I conclude with some ways for us to use this research.The paperOne of the authors has summarized the paper already.In brief: We use a randomized trial to evaluate a program that financially rewarded farmers if they avoided burning their rice stubble. We tried out a standard incentive contract that paid the farmer after we verified that he’d complied with the contract terms. That approach had no impact. However, when the contract was tweaked so that some of the payment was made upfront, the financial rewards program became a very cost-effective way to reduce burning, saving a life for <$5000. [author's emphasis]I recommend reading her full summary, but here is a self-contained over...]]>
Karthik Tadepalli https://forum.effectivealtruism.org/posts/bbAzJ33DDw6EmxekP/new-intervention-paying-farmers-to-not-burn-crops Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New intervention: paying farmers to not burn crops, published by Karthik Tadepalli on December 22, 2022 on The Effective Altruism Forum.I summarize a recent paper evaluating an intervention to reduce air pollution: paying farmers not to burn their crop residue.A pure conditional contract is ineffective, but when farmers are paid some fraction upfront, they reduce crop burning significantly.The authors calculate that this contract saves a life for $5,000, comparable to GiveWell's top charities; I calculate an even lower cost of $1,500 per life saved using a broader measurement of health benefits, beating GiveWell's top charities.I also calculate promising climate co-benefits; this intervention reduces GHG emissions for roughly $36 a ton, which costs less than the social cost of carbon, but more than the best climate mitigation strategies.I strongly recommend further research and piloting as a way to build on this single study, especially given the lack of scalable air pollution interventions in EA.Air pollution causes at least 7 million premature deaths each year. Despite this, it has only recently surfaced as a top cause for effective altruists, with Open Philanthropy announcing South Asian air quality as its newest focus area last year. Even with this recent focus, grants have focused mostly on research rather than on interventions to actually improve air quality. The problem is that we don't yet have shovel-ready air pollution interventions, demonstrably cost-effective interventions that are feasible and scalable for charities to implement themselves (rather than relying on uncertain advocacy). I think one recent paper offers a cost-effective, feasible and scalable intervention to reduce air pollution, and we should investigate it much more closely.OverviewOne pernicious source of air pollution in developing countries[1] is crop residue burning, where farmers burn the remnants of their summer crop to quickly clear the fields for their winter crop. The pollution from this burning causes 66,000 premature deaths a year in India alone. Both bans on crop burning and subsidizing alternatives have failed, leaving crop burning in desperate need of a solution.A new working paper by Kelsey Jack, Seema Jayachandran, Namrata Kala and Rohini Pande proposes one solution: paying farmers directly not to burn crop residue, with a partial payment upfront and the remainder conditional on not burning. They show this contract reduces crop burning with an RCT, while purely conditional payments have no effect. Importantly, they also calculate that this approach saves a life for around $5,000, which is competitive with GiveWell's top charities.I believe their approach may actually underestimate the benefits of reduced crop burning by focusing solely on premature deaths; using an alternative approach based on DALYs lost, I estimate that this intervention saves a life for $1,500, which beats GiveWell's top charities. In addition, it has important climate co-benefits that traditional health interventions do not.I think this study poses an important line of research for EA organizations to pursue, and I conclude with some ways for us to use this research.The paperOne of the authors has summarized the paper already.In brief: We use a randomized trial to evaluate a program that financially rewarded farmers if they avoided burning their rice stubble. We tried out a standard incentive contract that paid the farmer after we verified that he’d complied with the contract terms. That approach had no impact. However, when the contract was tweaked so that some of the payment was made upfront, the financial rewards program became a very cost-effective way to reduce burning, saving a life for <$5000. [author's emphasis]I recommend reading her full summary, but here is a self-contained over...]]>
Thu, 22 Dec 2022 20:18:19 +0000 EA - New intervention: paying farmers to not burn crops by Karthik Tadepalli Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New intervention: paying farmers to not burn crops, published by Karthik Tadepalli on December 22, 2022 on The Effective Altruism Forum.I summarize a recent paper evaluating an intervention to reduce air pollution: paying farmers not to burn their crop residue.A pure conditional contract is ineffective, but when farmers are paid some fraction upfront, they reduce crop burning significantly.The authors calculate that this contract saves a life for $5,000, comparable to GiveWell's top charities; I calculate an even lower cost of $1,500 per life saved using a broader measurement of health benefits, beating GiveWell's top charities.I also calculate promising climate co-benefits; this intervention reduces GHG emissions for roughly $36 a ton, which costs less than the social cost of carbon, but more than the best climate mitigation strategies.I strongly recommend further research and piloting as a way to build on this single study, especially given the lack of scalable air pollution interventions in EA.Air pollution causes at least 7 million premature deaths each year. Despite this, it has only recently surfaced as a top cause for effective altruists, with Open Philanthropy announcing South Asian air quality as its newest focus area last year. Even with this recent focus, grants have focused mostly on research rather than on interventions to actually improve air quality. The problem is that we don't yet have shovel-ready air pollution interventions, demonstrably cost-effective interventions that are feasible and scalable for charities to implement themselves (rather than relying on uncertain advocacy). I think one recent paper offers a cost-effective, feasible and scalable intervention to reduce air pollution, and we should investigate it much more closely.OverviewOne pernicious source of air pollution in developing countries[1] is crop residue burning, where farmers burn the remnants of their summer crop to quickly clear the fields for their winter crop. The pollution from this burning causes 66,000 premature deaths a year in India alone. Both bans on crop burning and subsidizing alternatives have failed, leaving crop burning in desperate need of a solution.A new working paper by Kelsey Jack, Seema Jayachandran, Namrata Kala and Rohini Pande proposes one solution: paying farmers directly not to burn crop residue, with a partial payment upfront and the remainder conditional on not burning. They show this contract reduces crop burning with an RCT, while purely conditional payments have no effect. Importantly, they also calculate that this approach saves a life for around $5,000, which is competitive with GiveWell's top charities.I believe their approach may actually underestimate the benefits of reduced crop burning by focusing solely on premature deaths; using an alternative approach based on DALYs lost, I estimate that this intervention saves a life for $1,500, which beats GiveWell's top charities. In addition, it has important climate co-benefits that traditional health interventions do not.I think this study poses an important line of research for EA organizations to pursue, and I conclude with some ways for us to use this research.The paperOne of the authors has summarized the paper already.In brief: We use a randomized trial to evaluate a program that financially rewarded farmers if they avoided burning their rice stubble. We tried out a standard incentive contract that paid the farmer after we verified that he’d complied with the contract terms. That approach had no impact. However, when the contract was tweaked so that some of the payment was made upfront, the financial rewards program became a very cost-effective way to reduce burning, saving a life for <$5000. [author's emphasis]I recommend reading her full summary, but here is a self-contained over...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New intervention: paying farmers to not burn crops, published by Karthik Tadepalli on December 22, 2022 on The Effective Altruism Forum.I summarize a recent paper evaluating an intervention to reduce air pollution: paying farmers not to burn their crop residue.A pure conditional contract is ineffective, but when farmers are paid some fraction upfront, they reduce crop burning significantly.The authors calculate that this contract saves a life for $5,000, comparable to GiveWell's top charities; I calculate an even lower cost of $1,500 per life saved using a broader measurement of health benefits, beating GiveWell's top charities.I also calculate promising climate co-benefits; this intervention reduces GHG emissions for roughly $36 a ton, which costs less than the social cost of carbon, but more than the best climate mitigation strategies.I strongly recommend further research and piloting as a way to build on this single study, especially given the lack of scalable air pollution interventions in EA.Air pollution causes at least 7 million premature deaths each year. Despite this, it has only recently surfaced as a top cause for effective altruists, with Open Philanthropy announcing South Asian air quality as its newest focus area last year. Even with this recent focus, grants have focused mostly on research rather than on interventions to actually improve air quality. The problem is that we don't yet have shovel-ready air pollution interventions, demonstrably cost-effective interventions that are feasible and scalable for charities to implement themselves (rather than relying on uncertain advocacy). I think one recent paper offers a cost-effective, feasible and scalable intervention to reduce air pollution, and we should investigate it much more closely.OverviewOne pernicious source of air pollution in developing countries[1] is crop residue burning, where farmers burn the remnants of their summer crop to quickly clear the fields for their winter crop. The pollution from this burning causes 66,000 premature deaths a year in India alone. Both bans on crop burning and subsidizing alternatives have failed, leaving crop burning in desperate need of a solution.A new working paper by Kelsey Jack, Seema Jayachandran, Namrata Kala and Rohini Pande proposes one solution: paying farmers directly not to burn crop residue, with a partial payment upfront and the remainder conditional on not burning. They show this contract reduces crop burning with an RCT, while purely conditional payments have no effect. Importantly, they also calculate that this approach saves a life for around $5,000, which is competitive with GiveWell's top charities.I believe their approach may actually underestimate the benefits of reduced crop burning by focusing solely on premature deaths; using an alternative approach based on DALYs lost, I estimate that this intervention saves a life for $1,500, which beats GiveWell's top charities. In addition, it has important climate co-benefits that traditional health interventions do not.I think this study poses an important line of research for EA organizations to pursue, and I conclude with some ways for us to use this research.The paperOne of the authors has summarized the paper already.In brief: We use a randomized trial to evaluate a program that financially rewarded farmers if they avoided burning their rice stubble. We tried out a standard incentive contract that paid the farmer after we verified that he’d complied with the contract terms. That approach had no impact. However, when the contract was tweaked so that some of the payment was made upfront, the financial rewards program became a very cost-effective way to reduce burning, saving a life for <$5000. [author's emphasis]I recommend reading her full summary, but here is a self-contained over...]]>
Karthik Tadepalli https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 16:29 None full 4223
LbC2kfn7BtCFdzwpH_EA EA - 1.5X match for Canadian donors to GiveDirectly by Jendayi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 1.5X match for Canadian donors to GiveDirectly, published by Jendayi on December 22, 2022 on The Effective Altruism Forum.Hey everyone, I work on the Growth team at GiveDirectly and wanted to share this match just for Canadian donors running through Dec. 31st 2022.Only $1,300 of our $75,000 (CAD) match pool has been claimed and this is a true match, so any funds left over after 12/31 will expire.Donations are matched 1.5X up to $1,500 per donor, meaning a $1,500 donation will unlock $2,250 for people in poverty. That’s enough to deliver large, one-time cash transfers to 2 households to spend and invest as they see fit.This is a great opportunity to maximize the impact of your giving and enable us to deliver more cash to people living in poverty next year. Consider sharing with your networks (or upvoting this post) to help spread the word.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jendayi https://forum.effectivealtruism.org/posts/LbC2kfn7BtCFdzwpH/1-5x-match-for-canadian-donors-to-givedirectly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 1.5X match for Canadian donors to GiveDirectly, published by Jendayi on December 22, 2022 on The Effective Altruism Forum.Hey everyone, I work on the Growth team at GiveDirectly and wanted to share this match just for Canadian donors running through Dec. 31st 2022.Only $1,300 of our $75,000 (CAD) match pool has been claimed and this is a true match, so any funds left over after 12/31 will expire.Donations are matched 1.5X up to $1,500 per donor, meaning a $1,500 donation will unlock $2,250 for people in poverty. That’s enough to deliver large, one-time cash transfers to 2 households to spend and invest as they see fit.This is a great opportunity to maximize the impact of your giving and enable us to deliver more cash to people living in poverty next year. Consider sharing with your networks (or upvoting this post) to help spread the word.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 22 Dec 2022 16:25:17 +0000 EA - 1.5X match for Canadian donors to GiveDirectly by Jendayi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 1.5X match for Canadian donors to GiveDirectly, published by Jendayi on December 22, 2022 on The Effective Altruism Forum.Hey everyone, I work on the Growth team at GiveDirectly and wanted to share this match just for Canadian donors running through Dec. 31st 2022.Only $1,300 of our $75,000 (CAD) match pool has been claimed and this is a true match, so any funds left over after 12/31 will expire.Donations are matched 1.5X up to $1,500 per donor, meaning a $1,500 donation will unlock $2,250 for people in poverty. That’s enough to deliver large, one-time cash transfers to 2 households to spend and invest as they see fit.This is a great opportunity to maximize the impact of your giving and enable us to deliver more cash to people living in poverty next year. Consider sharing with your networks (or upvoting this post) to help spread the word.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 1.5X match for Canadian donors to GiveDirectly, published by Jendayi on December 22, 2022 on The Effective Altruism Forum.Hey everyone, I work on the Growth team at GiveDirectly and wanted to share this match just for Canadian donors running through Dec. 31st 2022.Only $1,300 of our $75,000 (CAD) match pool has been claimed and this is a true match, so any funds left over after 12/31 will expire.Donations are matched 1.5X up to $1,500 per donor, meaning a $1,500 donation will unlock $2,250 for people in poverty. That’s enough to deliver large, one-time cash transfers to 2 households to spend and invest as they see fit.This is a great opportunity to maximize the impact of your giving and enable us to deliver more cash to people living in poverty next year. Consider sharing with your networks (or upvoting this post) to help spread the word.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jendayi https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:14 None full 4234
TnsJdJ9HpMaLcS7qn_EA EA - Against philanthropic diversification by Jack Malde Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against philanthropic diversification, published by Jack Malde on December 22, 2022 on The Effective Altruism Forum.(Cross-posted from my substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.)Don’t give to just one charity, there are so many good charities to give to! Also, what if the charity turns out to be ineffective? Then you just wasted all your money and did no good. Don’t worry, there’s a simple solution. Just give to multiple charities and spread risk!Thinking along these lines is natural. Whether it’s risk aversion, or just an inherent desire to support multiple charities or causes, most of us diversify our philanthropic giving. If your goal is to do the most good however, you should fight this urge with all you’ve got.Diversification does make some sense, some of the time. If you’re going to fill a charity’s budget with your giving then any further giving should probably go elsewhere.But most of us are small donors. Our giving usually won't fill a budget, or hit diminishing returns. It certainly won’t hit diminishing returns at the level of an entire cause area, unless perhaps you’re a billionaire philanthropist, in which case well done you.When you’re deciding where to give you likely have some idea of what the best option is. Maybe you want to help animals and are quite uncertain about how best to do so, but you lean towards thinking that giving to The Humane League (THL) to support their corporate campaigns is slightly better on the margin than giving to Faunalytics to support their research, even though you think there’s a possibility either option is ineffective. In this case, you should give your full philanthropic budget to THL. Fight that urge to give to both charities to cover your back if you make the wrong choice.Giving to both charities reduces the risk of you doing no good. But, because you subjectively think that THL is slightly better than Faunalytics, it also reduces the amount of good you will actually do in expectation. If you think THL is the best, then why give to anything else? Giving to both means trading away expected good done to get more certainty that you yourself will have done some good. It’s putting your own satisfaction ahead of the expected good of the world. Don’t be that person.At this point you might push back and say that I haven’t convincingly shown that there’s anything wrong with being risk averse in this way. That is, risk averse with respect to the amount of good a particular individual does. Fair enough, so let me try something a bit more formal.A recent academic paper by Hilary Greaves, William MacAskill, Andreas Mogenson and Teruji Thomas explores the tension between “difference-making risk aversion” and benevolence. Consider the below.Outcome goodnessHeadsTailsDo nothing100Give to Charity A2010Give to Charity B10+x20+xA fair coin is to be flipped which determines the payoffs if we do nothing, give to Charity A, or give to Charity B. The coin essentially represents our current uncertainty.We do have a hunch that giving to Charity B is better. Charity B differs from Charity A in that, instead of a ½ probability of getting 20, Charity B involves a ½ probability of getting 20+x, and instead of a ½ probability of getting 10, Charity B involves a ½ probability of getting 10+x. Given this, it’s clearly better to give to Charity B. In technical language we say that giving to Charity B stochastically dominates giving to Charity A.Now instead of ‘outcome goodness’ let’s consider the ‘difference made’ of giving to either charity, relative to doing nothing (this is just some simple subtraction using the table above).Difference madeHeadsTailsDo nothing00Give to Charity A1010Give to Charity Bx20+xA key thing to notice is that an individual with ‘difference-making risk aversion...]]>
Jack Malde https://forum.effectivealtruism.org/posts/TnsJdJ9HpMaLcS7qn/against-philanthropic-diversification Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against philanthropic diversification, published by Jack Malde on December 22, 2022 on The Effective Altruism Forum.(Cross-posted from my substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.)Don’t give to just one charity, there are so many good charities to give to! Also, what if the charity turns out to be ineffective? Then you just wasted all your money and did no good. Don’t worry, there’s a simple solution. Just give to multiple charities and spread risk!Thinking along these lines is natural. Whether it’s risk aversion, or just an inherent desire to support multiple charities or causes, most of us diversify our philanthropic giving. If your goal is to do the most good however, you should fight this urge with all you’ve got.Diversification does make some sense, some of the time. If you’re going to fill a charity’s budget with your giving then any further giving should probably go elsewhere.But most of us are small donors. Our giving usually won't fill a budget, or hit diminishing returns. It certainly won’t hit diminishing returns at the level of an entire cause area, unless perhaps you’re a billionaire philanthropist, in which case well done you.When you’re deciding where to give you likely have some idea of what the best option is. Maybe you want to help animals and are quite uncertain about how best to do so, but you lean towards thinking that giving to The Humane League (THL) to support their corporate campaigns is slightly better on the margin than giving to Faunalytics to support their research, even though you think there’s a possibility either option is ineffective. In this case, you should give your full philanthropic budget to THL. Fight that urge to give to both charities to cover your back if you make the wrong choice.Giving to both charities reduces the risk of you doing no good. But, because you subjectively think that THL is slightly better than Faunalytics, it also reduces the amount of good you will actually do in expectation. If you think THL is the best, then why give to anything else? Giving to both means trading away expected good done to get more certainty that you yourself will have done some good. It’s putting your own satisfaction ahead of the expected good of the world. Don’t be that person.At this point you might push back and say that I haven’t convincingly shown that there’s anything wrong with being risk averse in this way. That is, risk averse with respect to the amount of good a particular individual does. Fair enough, so let me try something a bit more formal.A recent academic paper by Hilary Greaves, William MacAskill, Andreas Mogenson and Teruji Thomas explores the tension between “difference-making risk aversion” and benevolence. Consider the below.Outcome goodnessHeadsTailsDo nothing100Give to Charity A2010Give to Charity B10+x20+xA fair coin is to be flipped which determines the payoffs if we do nothing, give to Charity A, or give to Charity B. The coin essentially represents our current uncertainty.We do have a hunch that giving to Charity B is better. Charity B differs from Charity A in that, instead of a ½ probability of getting 20, Charity B involves a ½ probability of getting 20+x, and instead of a ½ probability of getting 10, Charity B involves a ½ probability of getting 10+x. Given this, it’s clearly better to give to Charity B. In technical language we say that giving to Charity B stochastically dominates giving to Charity A.Now instead of ‘outcome goodness’ let’s consider the ‘difference made’ of giving to either charity, relative to doing nothing (this is just some simple subtraction using the table above).Difference madeHeadsTailsDo nothing00Give to Charity A1010Give to Charity Bx20+xA key thing to notice is that an individual with ‘difference-making risk aversion...]]>
Thu, 22 Dec 2022 15:53:25 +0000 EA - Against philanthropic diversification by Jack Malde Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against philanthropic diversification, published by Jack Malde on December 22, 2022 on The Effective Altruism Forum.(Cross-posted from my substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.)Don’t give to just one charity, there are so many good charities to give to! Also, what if the charity turns out to be ineffective? Then you just wasted all your money and did no good. Don’t worry, there’s a simple solution. Just give to multiple charities and spread risk!Thinking along these lines is natural. Whether it’s risk aversion, or just an inherent desire to support multiple charities or causes, most of us diversify our philanthropic giving. If your goal is to do the most good however, you should fight this urge with all you’ve got.Diversification does make some sense, some of the time. If you’re going to fill a charity’s budget with your giving then any further giving should probably go elsewhere.But most of us are small donors. Our giving usually won't fill a budget, or hit diminishing returns. It certainly won’t hit diminishing returns at the level of an entire cause area, unless perhaps you’re a billionaire philanthropist, in which case well done you.When you’re deciding where to give you likely have some idea of what the best option is. Maybe you want to help animals and are quite uncertain about how best to do so, but you lean towards thinking that giving to The Humane League (THL) to support their corporate campaigns is slightly better on the margin than giving to Faunalytics to support their research, even though you think there’s a possibility either option is ineffective. In this case, you should give your full philanthropic budget to THL. Fight that urge to give to both charities to cover your back if you make the wrong choice.Giving to both charities reduces the risk of you doing no good. But, because you subjectively think that THL is slightly better than Faunalytics, it also reduces the amount of good you will actually do in expectation. If you think THL is the best, then why give to anything else? Giving to both means trading away expected good done to get more certainty that you yourself will have done some good. It’s putting your own satisfaction ahead of the expected good of the world. Don’t be that person.At this point you might push back and say that I haven’t convincingly shown that there’s anything wrong with being risk averse in this way. That is, risk averse with respect to the amount of good a particular individual does. Fair enough, so let me try something a bit more formal.A recent academic paper by Hilary Greaves, William MacAskill, Andreas Mogenson and Teruji Thomas explores the tension between “difference-making risk aversion” and benevolence. Consider the below.Outcome goodnessHeadsTailsDo nothing100Give to Charity A2010Give to Charity B10+x20+xA fair coin is to be flipped which determines the payoffs if we do nothing, give to Charity A, or give to Charity B. The coin essentially represents our current uncertainty.We do have a hunch that giving to Charity B is better. Charity B differs from Charity A in that, instead of a ½ probability of getting 20, Charity B involves a ½ probability of getting 20+x, and instead of a ½ probability of getting 10, Charity B involves a ½ probability of getting 10+x. Given this, it’s clearly better to give to Charity B. In technical language we say that giving to Charity B stochastically dominates giving to Charity A.Now instead of ‘outcome goodness’ let’s consider the ‘difference made’ of giving to either charity, relative to doing nothing (this is just some simple subtraction using the table above).Difference madeHeadsTailsDo nothing00Give to Charity A1010Give to Charity Bx20+xA key thing to notice is that an individual with ‘difference-making risk aversion...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against philanthropic diversification, published by Jack Malde on December 22, 2022 on The Effective Altruism Forum.(Cross-posted from my substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.)Don’t give to just one charity, there are so many good charities to give to! Also, what if the charity turns out to be ineffective? Then you just wasted all your money and did no good. Don’t worry, there’s a simple solution. Just give to multiple charities and spread risk!Thinking along these lines is natural. Whether it’s risk aversion, or just an inherent desire to support multiple charities or causes, most of us diversify our philanthropic giving. If your goal is to do the most good however, you should fight this urge with all you’ve got.Diversification does make some sense, some of the time. If you’re going to fill a charity’s budget with your giving then any further giving should probably go elsewhere.But most of us are small donors. Our giving usually won't fill a budget, or hit diminishing returns. It certainly won’t hit diminishing returns at the level of an entire cause area, unless perhaps you’re a billionaire philanthropist, in which case well done you.When you’re deciding where to give you likely have some idea of what the best option is. Maybe you want to help animals and are quite uncertain about how best to do so, but you lean towards thinking that giving to The Humane League (THL) to support their corporate campaigns is slightly better on the margin than giving to Faunalytics to support their research, even though you think there’s a possibility either option is ineffective. In this case, you should give your full philanthropic budget to THL. Fight that urge to give to both charities to cover your back if you make the wrong choice.Giving to both charities reduces the risk of you doing no good. But, because you subjectively think that THL is slightly better than Faunalytics, it also reduces the amount of good you will actually do in expectation. If you think THL is the best, then why give to anything else? Giving to both means trading away expected good done to get more certainty that you yourself will have done some good. It’s putting your own satisfaction ahead of the expected good of the world. Don’t be that person.At this point you might push back and say that I haven’t convincingly shown that there’s anything wrong with being risk averse in this way. That is, risk averse with respect to the amount of good a particular individual does. Fair enough, so let me try something a bit more formal.A recent academic paper by Hilary Greaves, William MacAskill, Andreas Mogenson and Teruji Thomas explores the tension between “difference-making risk aversion” and benevolence. Consider the below.Outcome goodnessHeadsTailsDo nothing100Give to Charity A2010Give to Charity B10+x20+xA fair coin is to be flipped which determines the payoffs if we do nothing, give to Charity A, or give to Charity B. The coin essentially represents our current uncertainty.We do have a hunch that giving to Charity B is better. Charity B differs from Charity A in that, instead of a ½ probability of getting 20, Charity B involves a ½ probability of getting 20+x, and instead of a ½ probability of getting 10, Charity B involves a ½ probability of getting 10+x. Given this, it’s clearly better to give to Charity B. In technical language we say that giving to Charity B stochastically dominates giving to Charity A.Now instead of ‘outcome goodness’ let’s consider the ‘difference made’ of giving to either charity, relative to doing nothing (this is just some simple subtraction using the table above).Difference madeHeadsTailsDo nothing00Give to Charity A1010Give to Charity Bx20+xA key thing to notice is that an individual with ‘difference-making risk aversion...]]>
Jack Malde https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:28 None full 4225
ZCBw36sCfbfondnq2_EA EA - Sign up for our Talent Directory if you’re interested in getting a high-impact job by High Impact Professionals Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sign up for our Talent Directory if you’re interested in getting a high-impact job, published by High Impact Professionals on December 22, 2022 on The Effective Altruism Forum.SummaryHigh Impact Professionals (HIP) is working with high-impact organizations (HIOs) to help them in their recruiting efforts.To facilitate this effort as well as the efforts of other EA recruiting organizations and HIO hiring managers, we’re building a global EA talent directory.If you’re interested in working for a HIO, fill out this talent directory sign-up form.You control your information's privacy as part of completing the form.Who Should Fill Out The Form?Anyone who identifies with EA and is looking to transition to working at a HIO. We are especially interested in working professionals as we think they are less legible to the community, but we are happy for any EA to fill it out. If you already filled out our form for FTX grantees that we recently published you don’t need to fill out this form as it is very similar to the one you already filled out.What Will Happen After I Fill Out the Form?After you fill out the form you will be entered into our talent directory, a list of all individuals who submit the form. Then, depending on what you’ve consented to on the form, one to two things will happen.1. We will actively try to place you at a high-impact organizationWe at HIP are working with HIOs from diverse cause areas to help them fill roles they are recruiting for. As part of this effort, we will look through our talent directory to find qualified candidates, which hopefully includes you! So the more information you provide to us, the better the chance we have of matching you to a HIO. To do this, we may:execute a talent search on behalf of a HIO and, in the case of a potential match, either HIP or the HIO will reach out to you.pass your information to partners who are also interested in either recruiting directly for their organizations or who run a recruiting meta organization as we do. We think this will significantly increase the chances of a match. We will use discretion with whom we share your information; we will also tell all partners not to share your information with anyone further and that they are to use it for the sole purpose of recruiting.2. We may publish information about you on our websiteIf you so consent, we may include you in a public talent directory listing on our website so that all organizations looking for talent can find your profile more easily. This offers the least data privacy, but the most publicity for your profile.You can revoke your consent for either of these options at any time and we will remove your information accordingly. You can also reach out to us to have your information updated.If this sounds good to you, please sign up to our talent directory.If you have any comments or suggestions, please feel free to post them below or email them to us.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
High Impact Professionals https://forum.effectivealtruism.org/posts/ZCBw36sCfbfondnq2/sign-up-for-our-talent-directory-if-you-re-interested-in Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sign up for our Talent Directory if you’re interested in getting a high-impact job, published by High Impact Professionals on December 22, 2022 on The Effective Altruism Forum.SummaryHigh Impact Professionals (HIP) is working with high-impact organizations (HIOs) to help them in their recruiting efforts.To facilitate this effort as well as the efforts of other EA recruiting organizations and HIO hiring managers, we’re building a global EA talent directory.If you’re interested in working for a HIO, fill out this talent directory sign-up form.You control your information's privacy as part of completing the form.Who Should Fill Out The Form?Anyone who identifies with EA and is looking to transition to working at a HIO. We are especially interested in working professionals as we think they are less legible to the community, but we are happy for any EA to fill it out. If you already filled out our form for FTX grantees that we recently published you don’t need to fill out this form as it is very similar to the one you already filled out.What Will Happen After I Fill Out the Form?After you fill out the form you will be entered into our talent directory, a list of all individuals who submit the form. Then, depending on what you’ve consented to on the form, one to two things will happen.1. We will actively try to place you at a high-impact organizationWe at HIP are working with HIOs from diverse cause areas to help them fill roles they are recruiting for. As part of this effort, we will look through our talent directory to find qualified candidates, which hopefully includes you! So the more information you provide to us, the better the chance we have of matching you to a HIO. To do this, we may:execute a talent search on behalf of a HIO and, in the case of a potential match, either HIP or the HIO will reach out to you.pass your information to partners who are also interested in either recruiting directly for their organizations or who run a recruiting meta organization as we do. We think this will significantly increase the chances of a match. We will use discretion with whom we share your information; we will also tell all partners not to share your information with anyone further and that they are to use it for the sole purpose of recruiting.2. We may publish information about you on our websiteIf you so consent, we may include you in a public talent directory listing on our website so that all organizations looking for talent can find your profile more easily. This offers the least data privacy, but the most publicity for your profile.You can revoke your consent for either of these options at any time and we will remove your information accordingly. You can also reach out to us to have your information updated.If this sounds good to you, please sign up to our talent directory.If you have any comments or suggestions, please feel free to post them below or email them to us.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 22 Dec 2022 15:34:45 +0000 EA - Sign up for our Talent Directory if you’re interested in getting a high-impact job by High Impact Professionals Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sign up for our Talent Directory if you’re interested in getting a high-impact job, published by High Impact Professionals on December 22, 2022 on The Effective Altruism Forum.SummaryHigh Impact Professionals (HIP) is working with high-impact organizations (HIOs) to help them in their recruiting efforts.To facilitate this effort as well as the efforts of other EA recruiting organizations and HIO hiring managers, we’re building a global EA talent directory.If you’re interested in working for a HIO, fill out this talent directory sign-up form.You control your information's privacy as part of completing the form.Who Should Fill Out The Form?Anyone who identifies with EA and is looking to transition to working at a HIO. We are especially interested in working professionals as we think they are less legible to the community, but we are happy for any EA to fill it out. If you already filled out our form for FTX grantees that we recently published you don’t need to fill out this form as it is very similar to the one you already filled out.What Will Happen After I Fill Out the Form?After you fill out the form you will be entered into our talent directory, a list of all individuals who submit the form. Then, depending on what you’ve consented to on the form, one to two things will happen.1. We will actively try to place you at a high-impact organizationWe at HIP are working with HIOs from diverse cause areas to help them fill roles they are recruiting for. As part of this effort, we will look through our talent directory to find qualified candidates, which hopefully includes you! So the more information you provide to us, the better the chance we have of matching you to a HIO. To do this, we may:execute a talent search on behalf of a HIO and, in the case of a potential match, either HIP or the HIO will reach out to you.pass your information to partners who are also interested in either recruiting directly for their organizations or who run a recruiting meta organization as we do. We think this will significantly increase the chances of a match. We will use discretion with whom we share your information; we will also tell all partners not to share your information with anyone further and that they are to use it for the sole purpose of recruiting.2. We may publish information about you on our websiteIf you so consent, we may include you in a public talent directory listing on our website so that all organizations looking for talent can find your profile more easily. This offers the least data privacy, but the most publicity for your profile.You can revoke your consent for either of these options at any time and we will remove your information accordingly. You can also reach out to us to have your information updated.If this sounds good to you, please sign up to our talent directory.If you have any comments or suggestions, please feel free to post them below or email them to us.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sign up for our Talent Directory if you’re interested in getting a high-impact job, published by High Impact Professionals on December 22, 2022 on The Effective Altruism Forum.SummaryHigh Impact Professionals (HIP) is working with high-impact organizations (HIOs) to help them in their recruiting efforts.To facilitate this effort as well as the efforts of other EA recruiting organizations and HIO hiring managers, we’re building a global EA talent directory.If you’re interested in working for a HIO, fill out this talent directory sign-up form.You control your information's privacy as part of completing the form.Who Should Fill Out The Form?Anyone who identifies with EA and is looking to transition to working at a HIO. We are especially interested in working professionals as we think they are less legible to the community, but we are happy for any EA to fill it out. If you already filled out our form for FTX grantees that we recently published you don’t need to fill out this form as it is very similar to the one you already filled out.What Will Happen After I Fill Out the Form?After you fill out the form you will be entered into our talent directory, a list of all individuals who submit the form. Then, depending on what you’ve consented to on the form, one to two things will happen.1. We will actively try to place you at a high-impact organizationWe at HIP are working with HIOs from diverse cause areas to help them fill roles they are recruiting for. As part of this effort, we will look through our talent directory to find qualified candidates, which hopefully includes you! So the more information you provide to us, the better the chance we have of matching you to a HIO. To do this, we may:execute a talent search on behalf of a HIO and, in the case of a potential match, either HIP or the HIO will reach out to you.pass your information to partners who are also interested in either recruiting directly for their organizations or who run a recruiting meta organization as we do. We think this will significantly increase the chances of a match. We will use discretion with whom we share your information; we will also tell all partners not to share your information with anyone further and that they are to use it for the sole purpose of recruiting.2. We may publish information about you on our websiteIf you so consent, we may include you in a public talent directory listing on our website so that all organizations looking for talent can find your profile more easily. This offers the least data privacy, but the most publicity for your profile.You can revoke your consent for either of these options at any time and we will remove your information accordingly. You can also reach out to us to have your information updated.If this sounds good to you, please sign up to our talent directory.If you have any comments or suggestions, please feel free to post them below or email them to us.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
High Impact Professionals https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:51 None full 4226
vXq4ADWzBnwR2nyqE_EA EA - Keep EA high-trust by Michael PJ Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Keep EA high-trust, published by Michael PJ on December 22, 2022 on The Effective Altruism Forum.(Status: not draft amnesty, but posting in that spirit, since it's not as good as I'd want it to be but otherwise I probably won't ever post it)In my experience, EA so far has been a high-trust community. That is, people generally trust other people to behave well and in accordance with the values of the community.Being high-trust is great! It means that you can spend more time getting on with stuff and less time carefully checking each other for bad behaviour. It's also just nicer: It feels good and motivating to be trusted, and it is reassuring to support people you trust to do work.I feel like a lot of posts I've seen recently have been arguing for the community to move to a low-trust regime, particularly with respect to EA organizations. That includes calls for:More transparency ("we need to rigorously scrutinise even your small actions in case you're trying to sneak bad behaviour past us")More elaborate governance ("there is a risk of governance capture and we need to seriously guard against it", "we don't trust the people currently doing governance")Sometimes you have to move to low-trust regimes. It's common that organizations tend to move from high-trust to low-trust as they grow, due to the larger number of actors involved who can't all be assumed to be trustworthy. But I do not think that the EA community actually has the problems that require low-trust, and I think it would be very costly.Specifically, I want to argue:Low-trust regimes are expensive, both in terms of resources and moraleThe people working in current EA orgs are in fact very trustworthyThe EA community should remain high-trust (with checking)Low-trust is costlyLow-trust regimes impose costs in at least three ways:Costlier cooperationCostlier delegationGeneral efficiency taxesThe post Bad Omens in current EA Governance argues that due to the possibility of conflicts of interest we should break up the organisations which currently share ops support through EVF. This is a clear example of 1: if we can't trust people then we can't just share our resources, we have to keep everyone at arm's length. You can read in the comments various people explaining why this would be quite expensive.Similarly, you can't just delegate power to people in a low-trust regime. What if they abuse it? Better to require explicit approval up the chain before they do anything serious like spend some money. But if you can't spend money you often can't do things, and activity ends up being blocked on approval, politics, and perception.When you actually try to get anything done, low-trust regimes typically require lots of paper trails and approvals. Anyone who's worked in a larger organization can testify to how demoralizing and slow this can be. Since any decision can be questioned after the fact, there is no limit to how much "transparency" can be demanded, and how many pointless forms, proposals, reports, or forum posts can end up being produced. I think it is very easy to underestimate how destructive this can be to productivity.Finally, it is plain demoralizing to be in a low-trust regime. High-trust says "Yes, we are on the same team, go and attack the problem with my blessing!". Low-trust says "I guess I have to work with you but I'm expecting you to try and steal from me as soon as you have the opportunity, so I'm keeping an eye on you". Where would you rather work?Current people in EA organisations are trustworthy(Disclaimer: I know quite a lot of people who work in EA organisations, so I'm definitely personally biased towards them.)The FTX debacle has led to a lot of finger-pointing in recent months. A particular pattern has been posts listing large numbers of questions about the behaviour of p...]]>
Michael PJ https://forum.effectivealtruism.org/posts/vXq4ADWzBnwR2nyqE/keep-ea-high-trust Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Keep EA high-trust, published by Michael PJ on December 22, 2022 on The Effective Altruism Forum.(Status: not draft amnesty, but posting in that spirit, since it's not as good as I'd want it to be but otherwise I probably won't ever post it)In my experience, EA so far has been a high-trust community. That is, people generally trust other people to behave well and in accordance with the values of the community.Being high-trust is great! It means that you can spend more time getting on with stuff and less time carefully checking each other for bad behaviour. It's also just nicer: It feels good and motivating to be trusted, and it is reassuring to support people you trust to do work.I feel like a lot of posts I've seen recently have been arguing for the community to move to a low-trust regime, particularly with respect to EA organizations. That includes calls for:More transparency ("we need to rigorously scrutinise even your small actions in case you're trying to sneak bad behaviour past us")More elaborate governance ("there is a risk of governance capture and we need to seriously guard against it", "we don't trust the people currently doing governance")Sometimes you have to move to low-trust regimes. It's common that organizations tend to move from high-trust to low-trust as they grow, due to the larger number of actors involved who can't all be assumed to be trustworthy. But I do not think that the EA community actually has the problems that require low-trust, and I think it would be very costly.Specifically, I want to argue:Low-trust regimes are expensive, both in terms of resources and moraleThe people working in current EA orgs are in fact very trustworthyThe EA community should remain high-trust (with checking)Low-trust is costlyLow-trust regimes impose costs in at least three ways:Costlier cooperationCostlier delegationGeneral efficiency taxesThe post Bad Omens in current EA Governance argues that due to the possibility of conflicts of interest we should break up the organisations which currently share ops support through EVF. This is a clear example of 1: if we can't trust people then we can't just share our resources, we have to keep everyone at arm's length. You can read in the comments various people explaining why this would be quite expensive.Similarly, you can't just delegate power to people in a low-trust regime. What if they abuse it? Better to require explicit approval up the chain before they do anything serious like spend some money. But if you can't spend money you often can't do things, and activity ends up being blocked on approval, politics, and perception.When you actually try to get anything done, low-trust regimes typically require lots of paper trails and approvals. Anyone who's worked in a larger organization can testify to how demoralizing and slow this can be. Since any decision can be questioned after the fact, there is no limit to how much "transparency" can be demanded, and how many pointless forms, proposals, reports, or forum posts can end up being produced. I think it is very easy to underestimate how destructive this can be to productivity.Finally, it is plain demoralizing to be in a low-trust regime. High-trust says "Yes, we are on the same team, go and attack the problem with my blessing!". Low-trust says "I guess I have to work with you but I'm expecting you to try and steal from me as soon as you have the opportunity, so I'm keeping an eye on you". Where would you rather work?Current people in EA organisations are trustworthy(Disclaimer: I know quite a lot of people who work in EA organisations, so I'm definitely personally biased towards them.)The FTX debacle has led to a lot of finger-pointing in recent months. A particular pattern has been posts listing large numbers of questions about the behaviour of p...]]>
Thu, 22 Dec 2022 15:17:52 +0000 EA - Keep EA high-trust by Michael PJ Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Keep EA high-trust, published by Michael PJ on December 22, 2022 on The Effective Altruism Forum.(Status: not draft amnesty, but posting in that spirit, since it's not as good as I'd want it to be but otherwise I probably won't ever post it)In my experience, EA so far has been a high-trust community. That is, people generally trust other people to behave well and in accordance with the values of the community.Being high-trust is great! It means that you can spend more time getting on with stuff and less time carefully checking each other for bad behaviour. It's also just nicer: It feels good and motivating to be trusted, and it is reassuring to support people you trust to do work.I feel like a lot of posts I've seen recently have been arguing for the community to move to a low-trust regime, particularly with respect to EA organizations. That includes calls for:More transparency ("we need to rigorously scrutinise even your small actions in case you're trying to sneak bad behaviour past us")More elaborate governance ("there is a risk of governance capture and we need to seriously guard against it", "we don't trust the people currently doing governance")Sometimes you have to move to low-trust regimes. It's common that organizations tend to move from high-trust to low-trust as they grow, due to the larger number of actors involved who can't all be assumed to be trustworthy. But I do not think that the EA community actually has the problems that require low-trust, and I think it would be very costly.Specifically, I want to argue:Low-trust regimes are expensive, both in terms of resources and moraleThe people working in current EA orgs are in fact very trustworthyThe EA community should remain high-trust (with checking)Low-trust is costlyLow-trust regimes impose costs in at least three ways:Costlier cooperationCostlier delegationGeneral efficiency taxesThe post Bad Omens in current EA Governance argues that due to the possibility of conflicts of interest we should break up the organisations which currently share ops support through EVF. This is a clear example of 1: if we can't trust people then we can't just share our resources, we have to keep everyone at arm's length. You can read in the comments various people explaining why this would be quite expensive.Similarly, you can't just delegate power to people in a low-trust regime. What if they abuse it? Better to require explicit approval up the chain before they do anything serious like spend some money. But if you can't spend money you often can't do things, and activity ends up being blocked on approval, politics, and perception.When you actually try to get anything done, low-trust regimes typically require lots of paper trails and approvals. Anyone who's worked in a larger organization can testify to how demoralizing and slow this can be. Since any decision can be questioned after the fact, there is no limit to how much "transparency" can be demanded, and how many pointless forms, proposals, reports, or forum posts can end up being produced. I think it is very easy to underestimate how destructive this can be to productivity.Finally, it is plain demoralizing to be in a low-trust regime. High-trust says "Yes, we are on the same team, go and attack the problem with my blessing!". Low-trust says "I guess I have to work with you but I'm expecting you to try and steal from me as soon as you have the opportunity, so I'm keeping an eye on you". Where would you rather work?Current people in EA organisations are trustworthy(Disclaimer: I know quite a lot of people who work in EA organisations, so I'm definitely personally biased towards them.)The FTX debacle has led to a lot of finger-pointing in recent months. A particular pattern has been posts listing large numbers of questions about the behaviour of p...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Keep EA high-trust, published by Michael PJ on December 22, 2022 on The Effective Altruism Forum.(Status: not draft amnesty, but posting in that spirit, since it's not as good as I'd want it to be but otherwise I probably won't ever post it)In my experience, EA so far has been a high-trust community. That is, people generally trust other people to behave well and in accordance with the values of the community.Being high-trust is great! It means that you can spend more time getting on with stuff and less time carefully checking each other for bad behaviour. It's also just nicer: It feels good and motivating to be trusted, and it is reassuring to support people you trust to do work.I feel like a lot of posts I've seen recently have been arguing for the community to move to a low-trust regime, particularly with respect to EA organizations. That includes calls for:More transparency ("we need to rigorously scrutinise even your small actions in case you're trying to sneak bad behaviour past us")More elaborate governance ("there is a risk of governance capture and we need to seriously guard against it", "we don't trust the people currently doing governance")Sometimes you have to move to low-trust regimes. It's common that organizations tend to move from high-trust to low-trust as they grow, due to the larger number of actors involved who can't all be assumed to be trustworthy. But I do not think that the EA community actually has the problems that require low-trust, and I think it would be very costly.Specifically, I want to argue:Low-trust regimes are expensive, both in terms of resources and moraleThe people working in current EA orgs are in fact very trustworthyThe EA community should remain high-trust (with checking)Low-trust is costlyLow-trust regimes impose costs in at least three ways:Costlier cooperationCostlier delegationGeneral efficiency taxesThe post Bad Omens in current EA Governance argues that due to the possibility of conflicts of interest we should break up the organisations which currently share ops support through EVF. This is a clear example of 1: if we can't trust people then we can't just share our resources, we have to keep everyone at arm's length. You can read in the comments various people explaining why this would be quite expensive.Similarly, you can't just delegate power to people in a low-trust regime. What if they abuse it? Better to require explicit approval up the chain before they do anything serious like spend some money. But if you can't spend money you often can't do things, and activity ends up being blocked on approval, politics, and perception.When you actually try to get anything done, low-trust regimes typically require lots of paper trails and approvals. Anyone who's worked in a larger organization can testify to how demoralizing and slow this can be. Since any decision can be questioned after the fact, there is no limit to how much "transparency" can be demanded, and how many pointless forms, proposals, reports, or forum posts can end up being produced. I think it is very easy to underestimate how destructive this can be to productivity.Finally, it is plain demoralizing to be in a low-trust regime. High-trust says "Yes, we are on the same team, go and attack the problem with my blessing!". Low-trust says "I guess I have to work with you but I'm expecting you to try and steal from me as soon as you have the opportunity, so I'm keeping an eye on you". Where would you rather work?Current people in EA organisations are trustworthy(Disclaimer: I know quite a lot of people who work in EA organisations, so I'm definitely personally biased towards them.)The FTX debacle has led to a lot of finger-pointing in recent months. A particular pattern has been posts listing large numbers of questions about the behaviour of p...]]>
Michael PJ https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:56 None full 4224
acG2fbfd9xwv3XrKZ_EA EA - Link-post for Caroline Ellison’s Guilty Plea by Lauren Maria Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Link-post for Caroline Ellison’s Guilty Plea, published by Lauren Maria on December 22, 2022 on The Effective Altruism Forum.This is a link post for Caroline Ellison’s guilty plea. In addition, the latest from the NY Times is that both her and Gary Wang are co-operating with the federal investigation against Sam Bankman-Fried. Posting here so everyone remains up to date with the progress of this case.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lauren Maria https://forum.effectivealtruism.org/posts/acG2fbfd9xwv3XrKZ/link-post-for-caroline-ellison-s-guilty-plea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Link-post for Caroline Ellison’s Guilty Plea, published by Lauren Maria on December 22, 2022 on The Effective Altruism Forum.This is a link post for Caroline Ellison’s guilty plea. In addition, the latest from the NY Times is that both her and Gary Wang are co-operating with the federal investigation against Sam Bankman-Fried. Posting here so everyone remains up to date with the progress of this case.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 22 Dec 2022 05:27:11 +0000 EA - Link-post for Caroline Ellison’s Guilty Plea by Lauren Maria Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Link-post for Caroline Ellison’s Guilty Plea, published by Lauren Maria on December 22, 2022 on The Effective Altruism Forum.This is a link post for Caroline Ellison’s guilty plea. In addition, the latest from the NY Times is that both her and Gary Wang are co-operating with the federal investigation against Sam Bankman-Fried. Posting here so everyone remains up to date with the progress of this case.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Link-post for Caroline Ellison’s Guilty Plea, published by Lauren Maria on December 22, 2022 on The Effective Altruism Forum.This is a link post for Caroline Ellison’s guilty plea. In addition, the latest from the NY Times is that both her and Gary Wang are co-operating with the federal investigation against Sam Bankman-Fried. Posting here so everyone remains up to date with the progress of this case.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lauren Maria https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:42 None full 4216
yvsf8DfdQJZ8EadtG_EA EA - US Policy Master's Degrees: Why and When? (Part 1) by US Policy Careers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US Policy Master's Degrees: Why and When? (Part 1), published by US Policy Careers on December 21, 2022 on The Effective Altruism Forum.About this postWorking in policy is among the most effective ways to have a positive impact in areas like AI, biosecurity, animal welfare, or global health. Getting a policy master’s degree (e.g. in security studies or public policy) can help you pivot into or accelerate your policy career.This two-part overview explains why, when, where, and how to get a policy master’s degree. Part 1 (this post) focuses on the “why” and the “when” and alternatives to policy master’s. Part 2 focuses on general criteria for choosing where to apply, specific degrees we recommend, how to apply, and how to secure funding. We also have a US policy master's database if you want to compare program options.Part 1 Part 2US Policy Master's Degrees: Why and When?What are policy master’s degrees?Why do a master’s if you want to work in policy?Why not do a master’s for policy work?When should you go to grad school—right after college or after working for a few years?What are the alternatives to policy master’s?US Policy Master's Degrees: Which Schools, How to Apply, and Funding (forthcoming)Deciding which policy master’s programs to apply toApplication timelinesHow to apply: Getting into policy master’s programsFunding graduate schoolIf you are interested in applying for a policy master’s program—including if you are still unsure or plan to apply in future years—we encourage you to fill in this form so that we can potentially support your application and connect you with others who have gone through the program.These posts are based on our personal experience working on policy in DC for several years, background reading, and conversations with more than two dozen policy professionals.Please note that this two-part series focuses on:Master’s degrees for those aiming to work in policy. Thus, it will be much less relevant for people pursuing technical or other non-policy careers.Master’s degrees rather than other graduate degrees, like law degrees or PhDs. A later section briefly compares policy master’s programs with these alternatives.Master’s programs for policy careers in the US, especially in/with federal government. Some of the advice may apply to policy at the US state-level or other countries, but much of it will not since DC policy institutions and paths may differ substantially from those not in DC or the US.SummaryWhat are policy master’s degrees? They are typically two-year programs in subjects like public policy/administration (MPP/MPA), international relations/security studies, or more technical programs like public health (MPH). Policy master’s fall on a continuum from highly academic to highly practitioner-oriented, with the latter typically being better preparation for a policy career. [read more]Do I need a master’s to work in policy? Doing a master’s (or other graduate degree) is generally valuable for policy work and often necessary, depending on the institution and role (especially in executive agencies and think tanks). As a policy professional, you’ll most likely want to do a master’s eventually, with limited exceptions such as some career tracks in Congress. [read more]What’s the value of a master’s for policy work? A master’s builds your career capital for specific paths like policy. The credential is useful and often necessary to get a policy job. Master’s degrees also provide value through learning, skill-building, networking, exploration, and more. The relative importance of these factors depends on your background and goals, and may influence what degree to get (e.g. subject, location, type of graduate degree). [read more]If I want to do a master’s, when should I do it? We recommend most people to work for 1-3 ...]]>
US Policy Careers https://forum.effectivealtruism.org/posts/yvsf8DfdQJZ8EadtG/us-policy-master-s-degrees-why-and-when-part-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US Policy Master's Degrees: Why and When? (Part 1), published by US Policy Careers on December 21, 2022 on The Effective Altruism Forum.About this postWorking in policy is among the most effective ways to have a positive impact in areas like AI, biosecurity, animal welfare, or global health. Getting a policy master’s degree (e.g. in security studies or public policy) can help you pivot into or accelerate your policy career.This two-part overview explains why, when, where, and how to get a policy master’s degree. Part 1 (this post) focuses on the “why” and the “when” and alternatives to policy master’s. Part 2 focuses on general criteria for choosing where to apply, specific degrees we recommend, how to apply, and how to secure funding. We also have a US policy master's database if you want to compare program options.Part 1 Part 2US Policy Master's Degrees: Why and When?What are policy master’s degrees?Why do a master’s if you want to work in policy?Why not do a master’s for policy work?When should you go to grad school—right after college or after working for a few years?What are the alternatives to policy master’s?US Policy Master's Degrees: Which Schools, How to Apply, and Funding (forthcoming)Deciding which policy master’s programs to apply toApplication timelinesHow to apply: Getting into policy master’s programsFunding graduate schoolIf you are interested in applying for a policy master’s program—including if you are still unsure or plan to apply in future years—we encourage you to fill in this form so that we can potentially support your application and connect you with others who have gone through the program.These posts are based on our personal experience working on policy in DC for several years, background reading, and conversations with more than two dozen policy professionals.Please note that this two-part series focuses on:Master’s degrees for those aiming to work in policy. Thus, it will be much less relevant for people pursuing technical or other non-policy careers.Master’s degrees rather than other graduate degrees, like law degrees or PhDs. A later section briefly compares policy master’s programs with these alternatives.Master’s programs for policy careers in the US, especially in/with federal government. Some of the advice may apply to policy at the US state-level or other countries, but much of it will not since DC policy institutions and paths may differ substantially from those not in DC or the US.SummaryWhat are policy master’s degrees? They are typically two-year programs in subjects like public policy/administration (MPP/MPA), international relations/security studies, or more technical programs like public health (MPH). Policy master’s fall on a continuum from highly academic to highly practitioner-oriented, with the latter typically being better preparation for a policy career. [read more]Do I need a master’s to work in policy? Doing a master’s (or other graduate degree) is generally valuable for policy work and often necessary, depending on the institution and role (especially in executive agencies and think tanks). As a policy professional, you’ll most likely want to do a master’s eventually, with limited exceptions such as some career tracks in Congress. [read more]What’s the value of a master’s for policy work? A master’s builds your career capital for specific paths like policy. The credential is useful and often necessary to get a policy job. Master’s degrees also provide value through learning, skill-building, networking, exploration, and more. The relative importance of these factors depends on your background and goals, and may influence what degree to get (e.g. subject, location, type of graduate degree). [read more]If I want to do a master’s, when should I do it? We recommend most people to work for 1-3 ...]]>
Wed, 21 Dec 2022 21:26:55 +0000 EA - US Policy Master's Degrees: Why and When? (Part 1) by US Policy Careers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US Policy Master's Degrees: Why and When? (Part 1), published by US Policy Careers on December 21, 2022 on The Effective Altruism Forum.About this postWorking in policy is among the most effective ways to have a positive impact in areas like AI, biosecurity, animal welfare, or global health. Getting a policy master’s degree (e.g. in security studies or public policy) can help you pivot into or accelerate your policy career.This two-part overview explains why, when, where, and how to get a policy master’s degree. Part 1 (this post) focuses on the “why” and the “when” and alternatives to policy master’s. Part 2 focuses on general criteria for choosing where to apply, specific degrees we recommend, how to apply, and how to secure funding. We also have a US policy master's database if you want to compare program options.Part 1 Part 2US Policy Master's Degrees: Why and When?What are policy master’s degrees?Why do a master’s if you want to work in policy?Why not do a master’s for policy work?When should you go to grad school—right after college or after working for a few years?What are the alternatives to policy master’s?US Policy Master's Degrees: Which Schools, How to Apply, and Funding (forthcoming)Deciding which policy master’s programs to apply toApplication timelinesHow to apply: Getting into policy master’s programsFunding graduate schoolIf you are interested in applying for a policy master’s program—including if you are still unsure or plan to apply in future years—we encourage you to fill in this form so that we can potentially support your application and connect you with others who have gone through the program.These posts are based on our personal experience working on policy in DC for several years, background reading, and conversations with more than two dozen policy professionals.Please note that this two-part series focuses on:Master’s degrees for those aiming to work in policy. Thus, it will be much less relevant for people pursuing technical or other non-policy careers.Master’s degrees rather than other graduate degrees, like law degrees or PhDs. A later section briefly compares policy master’s programs with these alternatives.Master’s programs for policy careers in the US, especially in/with federal government. Some of the advice may apply to policy at the US state-level or other countries, but much of it will not since DC policy institutions and paths may differ substantially from those not in DC or the US.SummaryWhat are policy master’s degrees? They are typically two-year programs in subjects like public policy/administration (MPP/MPA), international relations/security studies, or more technical programs like public health (MPH). Policy master’s fall on a continuum from highly academic to highly practitioner-oriented, with the latter typically being better preparation for a policy career. [read more]Do I need a master’s to work in policy? Doing a master’s (or other graduate degree) is generally valuable for policy work and often necessary, depending on the institution and role (especially in executive agencies and think tanks). As a policy professional, you’ll most likely want to do a master’s eventually, with limited exceptions such as some career tracks in Congress. [read more]What’s the value of a master’s for policy work? A master’s builds your career capital for specific paths like policy. The credential is useful and often necessary to get a policy job. Master’s degrees also provide value through learning, skill-building, networking, exploration, and more. The relative importance of these factors depends on your background and goals, and may influence what degree to get (e.g. subject, location, type of graduate degree). [read more]If I want to do a master’s, when should I do it? We recommend most people to work for 1-3 ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US Policy Master's Degrees: Why and When? (Part 1), published by US Policy Careers on December 21, 2022 on The Effective Altruism Forum.About this postWorking in policy is among the most effective ways to have a positive impact in areas like AI, biosecurity, animal welfare, or global health. Getting a policy master’s degree (e.g. in security studies or public policy) can help you pivot into or accelerate your policy career.This two-part overview explains why, when, where, and how to get a policy master’s degree. Part 1 (this post) focuses on the “why” and the “when” and alternatives to policy master’s. Part 2 focuses on general criteria for choosing where to apply, specific degrees we recommend, how to apply, and how to secure funding. We also have a US policy master's database if you want to compare program options.Part 1 Part 2US Policy Master's Degrees: Why and When?What are policy master’s degrees?Why do a master’s if you want to work in policy?Why not do a master’s for policy work?When should you go to grad school—right after college or after working for a few years?What are the alternatives to policy master’s?US Policy Master's Degrees: Which Schools, How to Apply, and Funding (forthcoming)Deciding which policy master’s programs to apply toApplication timelinesHow to apply: Getting into policy master’s programsFunding graduate schoolIf you are interested in applying for a policy master’s program—including if you are still unsure or plan to apply in future years—we encourage you to fill in this form so that we can potentially support your application and connect you with others who have gone through the program.These posts are based on our personal experience working on policy in DC for several years, background reading, and conversations with more than two dozen policy professionals.Please note that this two-part series focuses on:Master’s degrees for those aiming to work in policy. Thus, it will be much less relevant for people pursuing technical or other non-policy careers.Master’s degrees rather than other graduate degrees, like law degrees or PhDs. A later section briefly compares policy master’s programs with these alternatives.Master’s programs for policy careers in the US, especially in/with federal government. Some of the advice may apply to policy at the US state-level or other countries, but much of it will not since DC policy institutions and paths may differ substantially from those not in DC or the US.SummaryWhat are policy master’s degrees? They are typically two-year programs in subjects like public policy/administration (MPP/MPA), international relations/security studies, or more technical programs like public health (MPH). Policy master’s fall on a continuum from highly academic to highly practitioner-oriented, with the latter typically being better preparation for a policy career. [read more]Do I need a master’s to work in policy? Doing a master’s (or other graduate degree) is generally valuable for policy work and often necessary, depending on the institution and role (especially in executive agencies and think tanks). As a policy professional, you’ll most likely want to do a master’s eventually, with limited exceptions such as some career tracks in Congress. [read more]What’s the value of a master’s for policy work? A master’s builds your career capital for specific paths like policy. The credential is useful and often necessary to get a policy job. Master’s degrees also provide value through learning, skill-building, networking, exploration, and more. The relative importance of these factors depends on your background and goals, and may influence what degree to get (e.g. subject, location, type of graduate degree). [read more]If I want to do a master’s, when should I do it? We recommend most people to work for 1-3 ...]]>
US Policy Careers https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 49:21 None full 4206
nc3JFZbqnzWWAPkmz_EA EA - Understanding the diffusion of large language models: summary by Ben Cottier Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Understanding the diffusion of large language models: summary, published by Ben Cottier on December 21, 2022 on The Effective Altruism Forum.5-minute summaryTime from GPT-3’s publication (May 2020) to.TimeDateModelOrganizations involvedA better model is produced7 monthsDec 2020GopherDeepmindMay 2020OPT-175BMeta AI ResearchA better or equally good model is open-sourcedANDA successful, explicit attempt at replicating GPT-3 is completed23 months(equally good model)Table 1: Key facts about the diffusion of GPT-3-like modelsI present my findings from case studies on the diffusion of nine language models that are similar to OpenAI’s GPT-3 model, including GPT-3 itself. By “diffusion”, I mean the spread of artifacts among different actors, where artifacts include trained models, code, datasets, and algorithmic insights. Diffusion can occur through different mechanisms, including open publication and replication. Seven of the models in my case studies are “GPT-3-like” according to my definition, which basically means they are similar to GPT-3 in design and purpose, and have similar or better capabilities. Two models have clearly worse capabilities but were of interest for other reasons. (more)I think the most important effects of diffusion are effects on (1) AI timelines—the leading AI developer can get to TAI sooner by using knowledge shared by other developers, (2) who leads AI development, (3) by what margin they lead, and (4) how many actors will plausibly be contenders to develop transformative AI (TAI). The latter three effects in turn affect AI timelines and the competitiveness of AI development. Understanding cases of diffusion today may improve our ability to predict and manage the effects of diffusion in the lead up to TAI being developed. (more)See Table 1 for key facts about the timing of GPT-3-like model diffusion. Additionally, I’m 90% confident that no model exists which is (a) uncontroversially better than GPT-3 and (b) has its model weights immediately available for download by anyone on the internet (as at November 15, 2022). However, GLM-130B (Tsinghua University, 2022)—publicized in August 2022 and developed by Tsinghua University and the Chinese AI startup Zhipu.AI—comes very close to meeting these criteria: it is probably better than GPT-3, but still requires approval to download the weights. (more)I’m 85% confident that in the two years since the publication of GPT-3 (in May 2020), publicly known GPT-3-like models have only been developed by (a) companies whose focus areas include machine learning R&D and have more than $10M in financial capital, or (b) a collaboration between one of these companies and either academia, or a state entity, or both. That is, I’m 85% confident that there has been no publicly known GPT-3-like model developed solely by actors in academia, very small companies, independent groups, or state AI labs. (more)In contrast, I think that hundreds to thousands of people have enough resources and talent to use a GPT-3-like model through their own independent setup (rather than just an API provided by another actor). This is due to wider access to the model weights of GPT-3-like models such as OPT-175B and BLOOM since May 2022. (more)I estimate that the cost of doing the “largest viable deployment” with a GPT-3-like model would be 20% of the cost of developing the model (90% CI: 10 to 68%), in terms of the dollar cost of compute alone. This means that deployment is most likely much less prohibitive than development. For people aiming to limit/shape diffusion, this analysis lends support to targeting interventions at the development stage rather than the deployment stage. (more)Access to compute appears to have been the main factor hindering the development of GPT-3-like models. The next biggest hindering factor appears t...]]>
Ben Cottier https://forum.effectivealtruism.org/posts/nc3JFZbqnzWWAPkmz/understanding-the-diffusion-of-large-language-models-summary-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Understanding the diffusion of large language models: summary, published by Ben Cottier on December 21, 2022 on The Effective Altruism Forum.5-minute summaryTime from GPT-3’s publication (May 2020) to.TimeDateModelOrganizations involvedA better model is produced7 monthsDec 2020GopherDeepmindMay 2020OPT-175BMeta AI ResearchA better or equally good model is open-sourcedANDA successful, explicit attempt at replicating GPT-3 is completed23 months(equally good model)Table 1: Key facts about the diffusion of GPT-3-like modelsI present my findings from case studies on the diffusion of nine language models that are similar to OpenAI’s GPT-3 model, including GPT-3 itself. By “diffusion”, I mean the spread of artifacts among different actors, where artifacts include trained models, code, datasets, and algorithmic insights. Diffusion can occur through different mechanisms, including open publication and replication. Seven of the models in my case studies are “GPT-3-like” according to my definition, which basically means they are similar to GPT-3 in design and purpose, and have similar or better capabilities. Two models have clearly worse capabilities but were of interest for other reasons. (more)I think the most important effects of diffusion are effects on (1) AI timelines—the leading AI developer can get to TAI sooner by using knowledge shared by other developers, (2) who leads AI development, (3) by what margin they lead, and (4) how many actors will plausibly be contenders to develop transformative AI (TAI). The latter three effects in turn affect AI timelines and the competitiveness of AI development. Understanding cases of diffusion today may improve our ability to predict and manage the effects of diffusion in the lead up to TAI being developed. (more)See Table 1 for key facts about the timing of GPT-3-like model diffusion. Additionally, I’m 90% confident that no model exists which is (a) uncontroversially better than GPT-3 and (b) has its model weights immediately available for download by anyone on the internet (as at November 15, 2022). However, GLM-130B (Tsinghua University, 2022)—publicized in August 2022 and developed by Tsinghua University and the Chinese AI startup Zhipu.AI—comes very close to meeting these criteria: it is probably better than GPT-3, but still requires approval to download the weights. (more)I’m 85% confident that in the two years since the publication of GPT-3 (in May 2020), publicly known GPT-3-like models have only been developed by (a) companies whose focus areas include machine learning R&D and have more than $10M in financial capital, or (b) a collaboration between one of these companies and either academia, or a state entity, or both. That is, I’m 85% confident that there has been no publicly known GPT-3-like model developed solely by actors in academia, very small companies, independent groups, or state AI labs. (more)In contrast, I think that hundreds to thousands of people have enough resources and talent to use a GPT-3-like model through their own independent setup (rather than just an API provided by another actor). This is due to wider access to the model weights of GPT-3-like models such as OPT-175B and BLOOM since May 2022. (more)I estimate that the cost of doing the “largest viable deployment” with a GPT-3-like model would be 20% of the cost of developing the model (90% CI: 10 to 68%), in terms of the dollar cost of compute alone. This means that deployment is most likely much less prohibitive than development. For people aiming to limit/shape diffusion, this analysis lends support to targeting interventions at the development stage rather than the deployment stage. (more)Access to compute appears to have been the main factor hindering the development of GPT-3-like models. The next biggest hindering factor appears t...]]>
Wed, 21 Dec 2022 16:31:37 +0000 EA - Understanding the diffusion of large language models: summary by Ben Cottier Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Understanding the diffusion of large language models: summary, published by Ben Cottier on December 21, 2022 on The Effective Altruism Forum.5-minute summaryTime from GPT-3’s publication (May 2020) to.TimeDateModelOrganizations involvedA better model is produced7 monthsDec 2020GopherDeepmindMay 2020OPT-175BMeta AI ResearchA better or equally good model is open-sourcedANDA successful, explicit attempt at replicating GPT-3 is completed23 months(equally good model)Table 1: Key facts about the diffusion of GPT-3-like modelsI present my findings from case studies on the diffusion of nine language models that are similar to OpenAI’s GPT-3 model, including GPT-3 itself. By “diffusion”, I mean the spread of artifacts among different actors, where artifacts include trained models, code, datasets, and algorithmic insights. Diffusion can occur through different mechanisms, including open publication and replication. Seven of the models in my case studies are “GPT-3-like” according to my definition, which basically means they are similar to GPT-3 in design and purpose, and have similar or better capabilities. Two models have clearly worse capabilities but were of interest for other reasons. (more)I think the most important effects of diffusion are effects on (1) AI timelines—the leading AI developer can get to TAI sooner by using knowledge shared by other developers, (2) who leads AI development, (3) by what margin they lead, and (4) how many actors will plausibly be contenders to develop transformative AI (TAI). The latter three effects in turn affect AI timelines and the competitiveness of AI development. Understanding cases of diffusion today may improve our ability to predict and manage the effects of diffusion in the lead up to TAI being developed. (more)See Table 1 for key facts about the timing of GPT-3-like model diffusion. Additionally, I’m 90% confident that no model exists which is (a) uncontroversially better than GPT-3 and (b) has its model weights immediately available for download by anyone on the internet (as at November 15, 2022). However, GLM-130B (Tsinghua University, 2022)—publicized in August 2022 and developed by Tsinghua University and the Chinese AI startup Zhipu.AI—comes very close to meeting these criteria: it is probably better than GPT-3, but still requires approval to download the weights. (more)I’m 85% confident that in the two years since the publication of GPT-3 (in May 2020), publicly known GPT-3-like models have only been developed by (a) companies whose focus areas include machine learning R&D and have more than $10M in financial capital, or (b) a collaboration between one of these companies and either academia, or a state entity, or both. That is, I’m 85% confident that there has been no publicly known GPT-3-like model developed solely by actors in academia, very small companies, independent groups, or state AI labs. (more)In contrast, I think that hundreds to thousands of people have enough resources and talent to use a GPT-3-like model through their own independent setup (rather than just an API provided by another actor). This is due to wider access to the model weights of GPT-3-like models such as OPT-175B and BLOOM since May 2022. (more)I estimate that the cost of doing the “largest viable deployment” with a GPT-3-like model would be 20% of the cost of developing the model (90% CI: 10 to 68%), in terms of the dollar cost of compute alone. This means that deployment is most likely much less prohibitive than development. For people aiming to limit/shape diffusion, this analysis lends support to targeting interventions at the development stage rather than the deployment stage. (more)Access to compute appears to have been the main factor hindering the development of GPT-3-like models. The next biggest hindering factor appears t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Understanding the diffusion of large language models: summary, published by Ben Cottier on December 21, 2022 on The Effective Altruism Forum.5-minute summaryTime from GPT-3’s publication (May 2020) to.TimeDateModelOrganizations involvedA better model is produced7 monthsDec 2020GopherDeepmindMay 2020OPT-175BMeta AI ResearchA better or equally good model is open-sourcedANDA successful, explicit attempt at replicating GPT-3 is completed23 months(equally good model)Table 1: Key facts about the diffusion of GPT-3-like modelsI present my findings from case studies on the diffusion of nine language models that are similar to OpenAI’s GPT-3 model, including GPT-3 itself. By “diffusion”, I mean the spread of artifacts among different actors, where artifacts include trained models, code, datasets, and algorithmic insights. Diffusion can occur through different mechanisms, including open publication and replication. Seven of the models in my case studies are “GPT-3-like” according to my definition, which basically means they are similar to GPT-3 in design and purpose, and have similar or better capabilities. Two models have clearly worse capabilities but were of interest for other reasons. (more)I think the most important effects of diffusion are effects on (1) AI timelines—the leading AI developer can get to TAI sooner by using knowledge shared by other developers, (2) who leads AI development, (3) by what margin they lead, and (4) how many actors will plausibly be contenders to develop transformative AI (TAI). The latter three effects in turn affect AI timelines and the competitiveness of AI development. Understanding cases of diffusion today may improve our ability to predict and manage the effects of diffusion in the lead up to TAI being developed. (more)See Table 1 for key facts about the timing of GPT-3-like model diffusion. Additionally, I’m 90% confident that no model exists which is (a) uncontroversially better than GPT-3 and (b) has its model weights immediately available for download by anyone on the internet (as at November 15, 2022). However, GLM-130B (Tsinghua University, 2022)—publicized in August 2022 and developed by Tsinghua University and the Chinese AI startup Zhipu.AI—comes very close to meeting these criteria: it is probably better than GPT-3, but still requires approval to download the weights. (more)I’m 85% confident that in the two years since the publication of GPT-3 (in May 2020), publicly known GPT-3-like models have only been developed by (a) companies whose focus areas include machine learning R&D and have more than $10M in financial capital, or (b) a collaboration between one of these companies and either academia, or a state entity, or both. That is, I’m 85% confident that there has been no publicly known GPT-3-like model developed solely by actors in academia, very small companies, independent groups, or state AI labs. (more)In contrast, I think that hundreds to thousands of people have enough resources and talent to use a GPT-3-like model through their own independent setup (rather than just an API provided by another actor). This is due to wider access to the model weights of GPT-3-like models such as OPT-175B and BLOOM since May 2022. (more)I estimate that the cost of doing the “largest viable deployment” with a GPT-3-like model would be 20% of the cost of developing the model (90% CI: 10 to 68%), in terms of the dollar cost of compute alone. This means that deployment is most likely much less prohibitive than development. For people aiming to limit/shape diffusion, this analysis lends support to targeting interventions at the development stage rather than the deployment stage. (more)Access to compute appears to have been the main factor hindering the development of GPT-3-like models. The next biggest hindering factor appears t...]]>
Ben Cottier https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 51:04 None full 4207
H7iXfpmdoJHbsWK7A_EA EA - Flooding is not a promising cause area - shallow investigation by Oscar Delaney Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Flooding is not a promising cause area - shallow investigation, published by Oscar Delaney on December 21, 2022 on The Effective Altruism Forum.SummaryFloods are important - they on average kill 5,000 people and cause $35B of damage each year.Compared to other problems in global health, however, floods are relatively minor.Flood prevention - and especially response - are less neglected than other global health problem areas.This may be because floods harm rich people too, who are politically empowered and cause governments to invest in mitigation.I estimate that to beat GiveWell’s top charities, we would need to prevent all flood damage using $400M per year.This seems very difficult.I investigate several interventions in more detail, which show some promise but are unlikely to be competitive with GiveWell’s top charities.This report was produced with roughly 50 hours of research and writing, as part of the Cause Innovation Bootcamp fellowship program.IntroductionFloods are defined as land that is normally dry being submerged underwater. There are several quite distinct types of floods, the most important being those caused by:[1]Rain: intense rainfall in a localised area can exceed the capacity of the ground to absorb water, and lead to a buildup of water in low-lying areas.Rivers: distributed rainfall anywhere in a river’s catchment area, rapid snow-melt, or the collapse or quick release of dam water, can all cause a river to overflow its banks.Storm surges: intense winds can blow ocean water onto land, and the lower air pressure of large storms causes the local sea level to rise by up to about 1m. Particularly if combined with a high tide this can inundate coastal areas.ImportanceFloods are a big deal. Of natural disasters occurring between 1998 and 2017, a UN report estimated that 43% of disasters were floods (next most common: storm), floods accounted for 45% of disaster-impacted people, at 2 billion (next highest: drought), 11% of deaths (earthquakes caused 56%), and 23% of the economic damage (storms were 46%).[2]While the toll of floods is generally well recorded in terms of deaths and financial cost, at least for larger disasters, data is scarcer and less systematic for other harms.Floods can have many negative impacts, ranging from direct to indirect and immediate to long-term:[3]Deaths: the majority of flood deaths are drownings, with other causes including injuries from falling objects or debris in fast-moving water, and electrocution from fallen powerlines.[4] Floods caused an average of 5,000 deaths per year in 2001-2020, with the bulk of the deaths caused by many smaller events: the deadliest 104 floods accounted for half the deaths in this period.[5]Injuries: as well as injuries during the flood, reconstruction and clean-up causes various injuries including sprains and strains, falling off roofs or ladders, and lacerations from sharp debris.[6] Floods caused an average of 15,000 injuries per year in 2001-2020, though injuries are probably significantly underreported.[7]Water-borne diseases: particularly if sewage systems have been compromised, contact with floodwater can cause a variety of diseases notably diarrhoea, cholera and leptospirosis.Mosquito-borne diseases: the abundance of stagnant water as floodwaters recede may cause a surge in mosquito populations and an associated spike in malaria, dengue and other vector-borne diseases.[8]Malnutrition: lost crops and reduced purchasing power can lead to impaired nutrition in the years following a major flood event in a poor region. One study found wasting to be more than twice as common in children who were flooded twice in the last three years than in control households.[9]Mental health: many studies have found increased incidences of PTSD, anxiety and depression in individuals ...]]>
Oscar Delaney https://forum.effectivealtruism.org/posts/H7iXfpmdoJHbsWK7A/flooding-is-not-a-promising-cause-area-shallow-investigation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Flooding is not a promising cause area - shallow investigation, published by Oscar Delaney on December 21, 2022 on The Effective Altruism Forum.SummaryFloods are important - they on average kill 5,000 people and cause $35B of damage each year.Compared to other problems in global health, however, floods are relatively minor.Flood prevention - and especially response - are less neglected than other global health problem areas.This may be because floods harm rich people too, who are politically empowered and cause governments to invest in mitigation.I estimate that to beat GiveWell’s top charities, we would need to prevent all flood damage using $400M per year.This seems very difficult.I investigate several interventions in more detail, which show some promise but are unlikely to be competitive with GiveWell’s top charities.This report was produced with roughly 50 hours of research and writing, as part of the Cause Innovation Bootcamp fellowship program.IntroductionFloods are defined as land that is normally dry being submerged underwater. There are several quite distinct types of floods, the most important being those caused by:[1]Rain: intense rainfall in a localised area can exceed the capacity of the ground to absorb water, and lead to a buildup of water in low-lying areas.Rivers: distributed rainfall anywhere in a river’s catchment area, rapid snow-melt, or the collapse or quick release of dam water, can all cause a river to overflow its banks.Storm surges: intense winds can blow ocean water onto land, and the lower air pressure of large storms causes the local sea level to rise by up to about 1m. Particularly if combined with a high tide this can inundate coastal areas.ImportanceFloods are a big deal. Of natural disasters occurring between 1998 and 2017, a UN report estimated that 43% of disasters were floods (next most common: storm), floods accounted for 45% of disaster-impacted people, at 2 billion (next highest: drought), 11% of deaths (earthquakes caused 56%), and 23% of the economic damage (storms were 46%).[2]While the toll of floods is generally well recorded in terms of deaths and financial cost, at least for larger disasters, data is scarcer and less systematic for other harms.Floods can have many negative impacts, ranging from direct to indirect and immediate to long-term:[3]Deaths: the majority of flood deaths are drownings, with other causes including injuries from falling objects or debris in fast-moving water, and electrocution from fallen powerlines.[4] Floods caused an average of 5,000 deaths per year in 2001-2020, with the bulk of the deaths caused by many smaller events: the deadliest 104 floods accounted for half the deaths in this period.[5]Injuries: as well as injuries during the flood, reconstruction and clean-up causes various injuries including sprains and strains, falling off roofs or ladders, and lacerations from sharp debris.[6] Floods caused an average of 15,000 injuries per year in 2001-2020, though injuries are probably significantly underreported.[7]Water-borne diseases: particularly if sewage systems have been compromised, contact with floodwater can cause a variety of diseases notably diarrhoea, cholera and leptospirosis.Mosquito-borne diseases: the abundance of stagnant water as floodwaters recede may cause a surge in mosquito populations and an associated spike in malaria, dengue and other vector-borne diseases.[8]Malnutrition: lost crops and reduced purchasing power can lead to impaired nutrition in the years following a major flood event in a poor region. One study found wasting to be more than twice as common in children who were flooded twice in the last three years than in control households.[9]Mental health: many studies have found increased incidences of PTSD, anxiety and depression in individuals ...]]>
Wed, 21 Dec 2022 15:55:12 +0000 EA - Flooding is not a promising cause area - shallow investigation by Oscar Delaney Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Flooding is not a promising cause area - shallow investigation, published by Oscar Delaney on December 21, 2022 on The Effective Altruism Forum.SummaryFloods are important - they on average kill 5,000 people and cause $35B of damage each year.Compared to other problems in global health, however, floods are relatively minor.Flood prevention - and especially response - are less neglected than other global health problem areas.This may be because floods harm rich people too, who are politically empowered and cause governments to invest in mitigation.I estimate that to beat GiveWell’s top charities, we would need to prevent all flood damage using $400M per year.This seems very difficult.I investigate several interventions in more detail, which show some promise but are unlikely to be competitive with GiveWell’s top charities.This report was produced with roughly 50 hours of research and writing, as part of the Cause Innovation Bootcamp fellowship program.IntroductionFloods are defined as land that is normally dry being submerged underwater. There are several quite distinct types of floods, the most important being those caused by:[1]Rain: intense rainfall in a localised area can exceed the capacity of the ground to absorb water, and lead to a buildup of water in low-lying areas.Rivers: distributed rainfall anywhere in a river’s catchment area, rapid snow-melt, or the collapse or quick release of dam water, can all cause a river to overflow its banks.Storm surges: intense winds can blow ocean water onto land, and the lower air pressure of large storms causes the local sea level to rise by up to about 1m. Particularly if combined with a high tide this can inundate coastal areas.ImportanceFloods are a big deal. Of natural disasters occurring between 1998 and 2017, a UN report estimated that 43% of disasters were floods (next most common: storm), floods accounted for 45% of disaster-impacted people, at 2 billion (next highest: drought), 11% of deaths (earthquakes caused 56%), and 23% of the economic damage (storms were 46%).[2]While the toll of floods is generally well recorded in terms of deaths and financial cost, at least for larger disasters, data is scarcer and less systematic for other harms.Floods can have many negative impacts, ranging from direct to indirect and immediate to long-term:[3]Deaths: the majority of flood deaths are drownings, with other causes including injuries from falling objects or debris in fast-moving water, and electrocution from fallen powerlines.[4] Floods caused an average of 5,000 deaths per year in 2001-2020, with the bulk of the deaths caused by many smaller events: the deadliest 104 floods accounted for half the deaths in this period.[5]Injuries: as well as injuries during the flood, reconstruction and clean-up causes various injuries including sprains and strains, falling off roofs or ladders, and lacerations from sharp debris.[6] Floods caused an average of 15,000 injuries per year in 2001-2020, though injuries are probably significantly underreported.[7]Water-borne diseases: particularly if sewage systems have been compromised, contact with floodwater can cause a variety of diseases notably diarrhoea, cholera and leptospirosis.Mosquito-borne diseases: the abundance of stagnant water as floodwaters recede may cause a surge in mosquito populations and an associated spike in malaria, dengue and other vector-borne diseases.[8]Malnutrition: lost crops and reduced purchasing power can lead to impaired nutrition in the years following a major flood event in a poor region. One study found wasting to be more than twice as common in children who were flooded twice in the last three years than in control households.[9]Mental health: many studies have found increased incidences of PTSD, anxiety and depression in individuals ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Flooding is not a promising cause area - shallow investigation, published by Oscar Delaney on December 21, 2022 on The Effective Altruism Forum.SummaryFloods are important - they on average kill 5,000 people and cause $35B of damage each year.Compared to other problems in global health, however, floods are relatively minor.Flood prevention - and especially response - are less neglected than other global health problem areas.This may be because floods harm rich people too, who are politically empowered and cause governments to invest in mitigation.I estimate that to beat GiveWell’s top charities, we would need to prevent all flood damage using $400M per year.This seems very difficult.I investigate several interventions in more detail, which show some promise but are unlikely to be competitive with GiveWell’s top charities.This report was produced with roughly 50 hours of research and writing, as part of the Cause Innovation Bootcamp fellowship program.IntroductionFloods are defined as land that is normally dry being submerged underwater. There are several quite distinct types of floods, the most important being those caused by:[1]Rain: intense rainfall in a localised area can exceed the capacity of the ground to absorb water, and lead to a buildup of water in low-lying areas.Rivers: distributed rainfall anywhere in a river’s catchment area, rapid snow-melt, or the collapse or quick release of dam water, can all cause a river to overflow its banks.Storm surges: intense winds can blow ocean water onto land, and the lower air pressure of large storms causes the local sea level to rise by up to about 1m. Particularly if combined with a high tide this can inundate coastal areas.ImportanceFloods are a big deal. Of natural disasters occurring between 1998 and 2017, a UN report estimated that 43% of disasters were floods (next most common: storm), floods accounted for 45% of disaster-impacted people, at 2 billion (next highest: drought), 11% of deaths (earthquakes caused 56%), and 23% of the economic damage (storms were 46%).[2]While the toll of floods is generally well recorded in terms of deaths and financial cost, at least for larger disasters, data is scarcer and less systematic for other harms.Floods can have many negative impacts, ranging from direct to indirect and immediate to long-term:[3]Deaths: the majority of flood deaths are drownings, with other causes including injuries from falling objects or debris in fast-moving water, and electrocution from fallen powerlines.[4] Floods caused an average of 5,000 deaths per year in 2001-2020, with the bulk of the deaths caused by many smaller events: the deadliest 104 floods accounted for half the deaths in this period.[5]Injuries: as well as injuries during the flood, reconstruction and clean-up causes various injuries including sprains and strains, falling off roofs or ladders, and lacerations from sharp debris.[6] Floods caused an average of 15,000 injuries per year in 2001-2020, though injuries are probably significantly underreported.[7]Water-borne diseases: particularly if sewage systems have been compromised, contact with floodwater can cause a variety of diseases notably diarrhoea, cholera and leptospirosis.Mosquito-borne diseases: the abundance of stagnant water as floodwaters recede may cause a surge in mosquito populations and an associated spike in malaria, dengue and other vector-borne diseases.[8]Malnutrition: lost crops and reduced purchasing power can lead to impaired nutrition in the years following a major flood event in a poor region. One study found wasting to be more than twice as common in children who were flooded twice in the last three years than in control households.[9]Mental health: many studies have found increased incidences of PTSD, anxiety and depression in individuals ...]]>
Oscar Delaney https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 19:52 None full 4209
xQk4g7yDvufXiECyc_EA EA - A can of worms: the non-significant effect of deworming on happiness in the KLPS by Samuel Dupret Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A can of worms: the non-significant effect of deworming on happiness in the KLPS, published by Samuel Dupret on December 21, 2022 on The Effective Altruism Forum.SummaryMass deworming, where many people are provided drugs to treat parasitic worms, has long been considered a highly cost-effective intervention to improve lives in low-income countries. GiveWell directed over $163 million to deworming charities since 2010. Nevertheless, there are long-running debates about its impact and cost-effectiveness. In this report, we summarise the debate about the efficacy of deworming, present the first analysis of deworming in terms of subjective wellbeing (SWB), and compare the cost-effectiveness of deworming to StrongMinds (our current top recommended charity).Analysing SWB data from the Kenyan Life Panel Survey (KLPS; Hamory et al., 2021), we find that deworming has a small, statistically non-significant effect on long-term happiness that seems (surprisingly) to become negative over time (see Figure 1). We conclude that the effect of deworming in the KLPS is either non-existent or too small to estimate with certainty. Typically, an academic analysis could stop here and not recommend deworming. However, the non-significant effects of deworming could be cost-effective in practice because it is extremely cheap to deliver. Because the effect of deworming is small and becomes negative over time, our best guess finds that the overall cost-effectiveness of deworming is negative. Even under more generous assumptions (but still plausible according to this data), deworming is less cost-effective than StrongMinds. Therefore, we do not recommend any deworming charities at this time.To overturn this conclusion, proponents of deworming would either need to (1) appeal to different SWB data (we’re not aware of any) or (2) appeal to a non-SWB method of comparison which concludes that deworming is more cost-effective than StrongMinds.Figure 1: Differences in happiness between treatment and control groups over time1. Background and literatureIn this section, we present the motivation for this analysis, the work by GiveWell that preceded this, and the broader literature on deworming. We then present the details and context of the dataset we use for this analysis – the Kenyan Life Panel Survey (KLPS).1.1 Our motivation for this analysisThe Happier Lives Institute evaluates charities and interventions in terms of subjective wellbeing (SWB) - how people think and feel about their lives. We believe that wellbeing is what ultimately matters and we take self-reports of SWB to be the best indicator of how much good an intervention does. If deworming improves people’s lives, those treated for deworming should report greater SWB than those who aren’t. SWB should capture and integrate the overall benefits from all of the instrumental goods provided by an intervention. For example, if deworming makes people richer, and this makes them happier, they will report higher SWB (the same is true for improvements to health or education). Although we are not the first to use SWB as an outcome for decision-making (e.g., UK Treasury, Frijters et al., 2020, Birkjaer et al., 2020, Layard & Oparina, 2021), we are the first to use it to compare the impact of charities.See McGuire et al. (2022b) for more detail about why we prefer the SWB approach to evaluate charities.To determine whether the SWB approach changes which interventions we find the most cost-effective, we have been re-evaluating the charity recommendations of GiveWell (a prominent charity evaluator that recommends charities based on their mortality and economic impacts). For a review of our recent research, see this post. We present our findings in wellbeing-adjusted years (WELLBYs), where 1 WELLBY is the equivalent of a 1-point change on a 0-10...]]>
Samuel Dupret https://forum.effectivealtruism.org/posts/xQk4g7yDvufXiECyc/a-can-of-worms-the-non-significant-effect-of-deworming-on Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A can of worms: the non-significant effect of deworming on happiness in the KLPS, published by Samuel Dupret on December 21, 2022 on The Effective Altruism Forum.SummaryMass deworming, where many people are provided drugs to treat parasitic worms, has long been considered a highly cost-effective intervention to improve lives in low-income countries. GiveWell directed over $163 million to deworming charities since 2010. Nevertheless, there are long-running debates about its impact and cost-effectiveness. In this report, we summarise the debate about the efficacy of deworming, present the first analysis of deworming in terms of subjective wellbeing (SWB), and compare the cost-effectiveness of deworming to StrongMinds (our current top recommended charity).Analysing SWB data from the Kenyan Life Panel Survey (KLPS; Hamory et al., 2021), we find that deworming has a small, statistically non-significant effect on long-term happiness that seems (surprisingly) to become negative over time (see Figure 1). We conclude that the effect of deworming in the KLPS is either non-existent or too small to estimate with certainty. Typically, an academic analysis could stop here and not recommend deworming. However, the non-significant effects of deworming could be cost-effective in practice because it is extremely cheap to deliver. Because the effect of deworming is small and becomes negative over time, our best guess finds that the overall cost-effectiveness of deworming is negative. Even under more generous assumptions (but still plausible according to this data), deworming is less cost-effective than StrongMinds. Therefore, we do not recommend any deworming charities at this time.To overturn this conclusion, proponents of deworming would either need to (1) appeal to different SWB data (we’re not aware of any) or (2) appeal to a non-SWB method of comparison which concludes that deworming is more cost-effective than StrongMinds.Figure 1: Differences in happiness between treatment and control groups over time1. Background and literatureIn this section, we present the motivation for this analysis, the work by GiveWell that preceded this, and the broader literature on deworming. We then present the details and context of the dataset we use for this analysis – the Kenyan Life Panel Survey (KLPS).1.1 Our motivation for this analysisThe Happier Lives Institute evaluates charities and interventions in terms of subjective wellbeing (SWB) - how people think and feel about their lives. We believe that wellbeing is what ultimately matters and we take self-reports of SWB to be the best indicator of how much good an intervention does. If deworming improves people’s lives, those treated for deworming should report greater SWB than those who aren’t. SWB should capture and integrate the overall benefits from all of the instrumental goods provided by an intervention. For example, if deworming makes people richer, and this makes them happier, they will report higher SWB (the same is true for improvements to health or education). Although we are not the first to use SWB as an outcome for decision-making (e.g., UK Treasury, Frijters et al., 2020, Birkjaer et al., 2020, Layard & Oparina, 2021), we are the first to use it to compare the impact of charities.See McGuire et al. (2022b) for more detail about why we prefer the SWB approach to evaluate charities.To determine whether the SWB approach changes which interventions we find the most cost-effective, we have been re-evaluating the charity recommendations of GiveWell (a prominent charity evaluator that recommends charities based on their mortality and economic impacts). For a review of our recent research, see this post. We present our findings in wellbeing-adjusted years (WELLBYs), where 1 WELLBY is the equivalent of a 1-point change on a 0-10...]]>
Wed, 21 Dec 2022 13:25:25 +0000 EA - A can of worms: the non-significant effect of deworming on happiness in the KLPS by Samuel Dupret Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A can of worms: the non-significant effect of deworming on happiness in the KLPS, published by Samuel Dupret on December 21, 2022 on The Effective Altruism Forum.SummaryMass deworming, where many people are provided drugs to treat parasitic worms, has long been considered a highly cost-effective intervention to improve lives in low-income countries. GiveWell directed over $163 million to deworming charities since 2010. Nevertheless, there are long-running debates about its impact and cost-effectiveness. In this report, we summarise the debate about the efficacy of deworming, present the first analysis of deworming in terms of subjective wellbeing (SWB), and compare the cost-effectiveness of deworming to StrongMinds (our current top recommended charity).Analysing SWB data from the Kenyan Life Panel Survey (KLPS; Hamory et al., 2021), we find that deworming has a small, statistically non-significant effect on long-term happiness that seems (surprisingly) to become negative over time (see Figure 1). We conclude that the effect of deworming in the KLPS is either non-existent or too small to estimate with certainty. Typically, an academic analysis could stop here and not recommend deworming. However, the non-significant effects of deworming could be cost-effective in practice because it is extremely cheap to deliver. Because the effect of deworming is small and becomes negative over time, our best guess finds that the overall cost-effectiveness of deworming is negative. Even under more generous assumptions (but still plausible according to this data), deworming is less cost-effective than StrongMinds. Therefore, we do not recommend any deworming charities at this time.To overturn this conclusion, proponents of deworming would either need to (1) appeal to different SWB data (we’re not aware of any) or (2) appeal to a non-SWB method of comparison which concludes that deworming is more cost-effective than StrongMinds.Figure 1: Differences in happiness between treatment and control groups over time1. Background and literatureIn this section, we present the motivation for this analysis, the work by GiveWell that preceded this, and the broader literature on deworming. We then present the details and context of the dataset we use for this analysis – the Kenyan Life Panel Survey (KLPS).1.1 Our motivation for this analysisThe Happier Lives Institute evaluates charities and interventions in terms of subjective wellbeing (SWB) - how people think and feel about their lives. We believe that wellbeing is what ultimately matters and we take self-reports of SWB to be the best indicator of how much good an intervention does. If deworming improves people’s lives, those treated for deworming should report greater SWB than those who aren’t. SWB should capture and integrate the overall benefits from all of the instrumental goods provided by an intervention. For example, if deworming makes people richer, and this makes them happier, they will report higher SWB (the same is true for improvements to health or education). Although we are not the first to use SWB as an outcome for decision-making (e.g., UK Treasury, Frijters et al., 2020, Birkjaer et al., 2020, Layard & Oparina, 2021), we are the first to use it to compare the impact of charities.See McGuire et al. (2022b) for more detail about why we prefer the SWB approach to evaluate charities.To determine whether the SWB approach changes which interventions we find the most cost-effective, we have been re-evaluating the charity recommendations of GiveWell (a prominent charity evaluator that recommends charities based on their mortality and economic impacts). For a review of our recent research, see this post. We present our findings in wellbeing-adjusted years (WELLBYs), where 1 WELLBY is the equivalent of a 1-point change on a 0-10...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A can of worms: the non-significant effect of deworming on happiness in the KLPS, published by Samuel Dupret on December 21, 2022 on The Effective Altruism Forum.SummaryMass deworming, where many people are provided drugs to treat parasitic worms, has long been considered a highly cost-effective intervention to improve lives in low-income countries. GiveWell directed over $163 million to deworming charities since 2010. Nevertheless, there are long-running debates about its impact and cost-effectiveness. In this report, we summarise the debate about the efficacy of deworming, present the first analysis of deworming in terms of subjective wellbeing (SWB), and compare the cost-effectiveness of deworming to StrongMinds (our current top recommended charity).Analysing SWB data from the Kenyan Life Panel Survey (KLPS; Hamory et al., 2021), we find that deworming has a small, statistically non-significant effect on long-term happiness that seems (surprisingly) to become negative over time (see Figure 1). We conclude that the effect of deworming in the KLPS is either non-existent or too small to estimate with certainty. Typically, an academic analysis could stop here and not recommend deworming. However, the non-significant effects of deworming could be cost-effective in practice because it is extremely cheap to deliver. Because the effect of deworming is small and becomes negative over time, our best guess finds that the overall cost-effectiveness of deworming is negative. Even under more generous assumptions (but still plausible according to this data), deworming is less cost-effective than StrongMinds. Therefore, we do not recommend any deworming charities at this time.To overturn this conclusion, proponents of deworming would either need to (1) appeal to different SWB data (we’re not aware of any) or (2) appeal to a non-SWB method of comparison which concludes that deworming is more cost-effective than StrongMinds.Figure 1: Differences in happiness between treatment and control groups over time1. Background and literatureIn this section, we present the motivation for this analysis, the work by GiveWell that preceded this, and the broader literature on deworming. We then present the details and context of the dataset we use for this analysis – the Kenyan Life Panel Survey (KLPS).1.1 Our motivation for this analysisThe Happier Lives Institute evaluates charities and interventions in terms of subjective wellbeing (SWB) - how people think and feel about their lives. We believe that wellbeing is what ultimately matters and we take self-reports of SWB to be the best indicator of how much good an intervention does. If deworming improves people’s lives, those treated for deworming should report greater SWB than those who aren’t. SWB should capture and integrate the overall benefits from all of the instrumental goods provided by an intervention. For example, if deworming makes people richer, and this makes them happier, they will report higher SWB (the same is true for improvements to health or education). Although we are not the first to use SWB as an outcome for decision-making (e.g., UK Treasury, Frijters et al., 2020, Birkjaer et al., 2020, Layard & Oparina, 2021), we are the first to use it to compare the impact of charities.See McGuire et al. (2022b) for more detail about why we prefer the SWB approach to evaluate charities.To determine whether the SWB approach changes which interventions we find the most cost-effective, we have been re-evaluating the charity recommendations of GiveWell (a prominent charity evaluator that recommends charities based on their mortality and economic impacts). For a review of our recent research, see this post. We present our findings in wellbeing-adjusted years (WELLBYs), where 1 WELLBY is the equivalent of a 1-point change on a 0-10...]]>
Samuel Dupret https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 50:34 None full 4208
CmgzKPfKyHh4wp9aZ_EA EA - A Summary of Profession-Based Community Building by Sean Lawrence Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Summary of Profession-Based Community Building, published by Sean Lawrence on December 20, 2022 on The Effective Altruism Forum.TLDR: there are now a notable number of profession-based community building organisations in EA and below you will find the key activities, team structure, and ways to get involved for several of the most active organisations.SummaryThere has been a significant increase in the number of organisations catering to specific professions – profession-specific organisations (PSOs) – over the past 12-24 months and we want to update the community on who some of these organisations are and what they seek to provide for their members and the broader EA community.Please note that this list is not fully comprehensive. It should be treated as a partial snapshot of some of the organisations that exist in this space and a window into some of the ways in which they are doing community building in a profession-specific context. If your profession isn’t listed on this post then you may find an organisation that does cater for it on High Impact Professional’s Professional Groups Directory, which is a more comprehensive resource for finding professional groups (see their EA forum post). If you run a PSO that isn’t on the High Impact Professionals’ Workplace and Professional Groups Directory, please email Devon or book a call with him. Additionally, High Impact Professionals can help nascent PSOs through strategic support, reviewing grant proposals, making introductions, and more so don’t hesitate to get in touch!If you run a PSO and would like to be added to this post, please copy and fill out this template and send it to Sean.The What and Why of Profession-Specific OrganisationsPSOs focus their organisational mission and scope on supporting a specific profession, or group of related professions, with tailored content and programming to help members of that field learn more about EA and how they can contribute to the shared project of doing the most good in the world. Additionally, PSOs provide a community of like-minded professionals to help increase network effects amongst members, limit value-drift and make the EA space more welcoming to professionals overall.GoalsWe think this post could be valuable to:Increase the awareness of existing PSOs and their activities. Recent EAGs have given some of us the impression that members of the EA community are unaware that there are active community building efforts targeting professionals and what each group's target focus and mechanisms of engagement are.Summarise the methods for engagement with professionals. This post may help to motivate nascent professional-specific groups, highlight commonalities in PSO’s approaches so organisers can collaborate, and/or highlight gaps in approaches that could be filled.Help grow the membership of organisations.Inspire people to start groups for neglected professions. By seeing that there are already a lot of professional groups around, we hope that others see this as a viable way to have an impact in their profession.List of Profession-Specific Organisations(Please note that this list is not fully comprehensive)Last updated: 21st December 2022EA Architects and PlannersWhat do we do?We seek ways in which architects, urban designers, and planners can use their unique skills to get involved in EA causes and to have a greater positive impact. We are NOT focused on finding high-impact work within architecture and planning, but rather, explore a wide range of opportunities where architectural or design thinking may lead to more effective ways of doing good.As we research ways in which this might be achieved, we are updating the Career Options for EA Architects and Planners database to help others find inspiration.Our aim is to a) create an environment that support...]]>
Sean Lawrence https://forum.effectivealtruism.org/posts/CmgzKPfKyHh4wp9aZ/a-summary-of-profession-based-community-building Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Summary of Profession-Based Community Building, published by Sean Lawrence on December 20, 2022 on The Effective Altruism Forum.TLDR: there are now a notable number of profession-based community building organisations in EA and below you will find the key activities, team structure, and ways to get involved for several of the most active organisations.SummaryThere has been a significant increase in the number of organisations catering to specific professions – profession-specific organisations (PSOs) – over the past 12-24 months and we want to update the community on who some of these organisations are and what they seek to provide for their members and the broader EA community.Please note that this list is not fully comprehensive. It should be treated as a partial snapshot of some of the organisations that exist in this space and a window into some of the ways in which they are doing community building in a profession-specific context. If your profession isn’t listed on this post then you may find an organisation that does cater for it on High Impact Professional’s Professional Groups Directory, which is a more comprehensive resource for finding professional groups (see their EA forum post). If you run a PSO that isn’t on the High Impact Professionals’ Workplace and Professional Groups Directory, please email Devon or book a call with him. Additionally, High Impact Professionals can help nascent PSOs through strategic support, reviewing grant proposals, making introductions, and more so don’t hesitate to get in touch!If you run a PSO and would like to be added to this post, please copy and fill out this template and send it to Sean.The What and Why of Profession-Specific OrganisationsPSOs focus their organisational mission and scope on supporting a specific profession, or group of related professions, with tailored content and programming to help members of that field learn more about EA and how they can contribute to the shared project of doing the most good in the world. Additionally, PSOs provide a community of like-minded professionals to help increase network effects amongst members, limit value-drift and make the EA space more welcoming to professionals overall.GoalsWe think this post could be valuable to:Increase the awareness of existing PSOs and their activities. Recent EAGs have given some of us the impression that members of the EA community are unaware that there are active community building efforts targeting professionals and what each group's target focus and mechanisms of engagement are.Summarise the methods for engagement with professionals. This post may help to motivate nascent professional-specific groups, highlight commonalities in PSO’s approaches so organisers can collaborate, and/or highlight gaps in approaches that could be filled.Help grow the membership of organisations.Inspire people to start groups for neglected professions. By seeing that there are already a lot of professional groups around, we hope that others see this as a viable way to have an impact in their profession.List of Profession-Specific Organisations(Please note that this list is not fully comprehensive)Last updated: 21st December 2022EA Architects and PlannersWhat do we do?We seek ways in which architects, urban designers, and planners can use their unique skills to get involved in EA causes and to have a greater positive impact. We are NOT focused on finding high-impact work within architecture and planning, but rather, explore a wide range of opportunities where architectural or design thinking may lead to more effective ways of doing good.As we research ways in which this might be achieved, we are updating the Career Options for EA Architects and Planners database to help others find inspiration.Our aim is to a) create an environment that support...]]>
Wed, 21 Dec 2022 11:58:10 +0000 EA - A Summary of Profession-Based Community Building by Sean Lawrence Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Summary of Profession-Based Community Building, published by Sean Lawrence on December 20, 2022 on The Effective Altruism Forum.TLDR: there are now a notable number of profession-based community building organisations in EA and below you will find the key activities, team structure, and ways to get involved for several of the most active organisations.SummaryThere has been a significant increase in the number of organisations catering to specific professions – profession-specific organisations (PSOs) – over the past 12-24 months and we want to update the community on who some of these organisations are and what they seek to provide for their members and the broader EA community.Please note that this list is not fully comprehensive. It should be treated as a partial snapshot of some of the organisations that exist in this space and a window into some of the ways in which they are doing community building in a profession-specific context. If your profession isn’t listed on this post then you may find an organisation that does cater for it on High Impact Professional’s Professional Groups Directory, which is a more comprehensive resource for finding professional groups (see their EA forum post). If you run a PSO that isn’t on the High Impact Professionals’ Workplace and Professional Groups Directory, please email Devon or book a call with him. Additionally, High Impact Professionals can help nascent PSOs through strategic support, reviewing grant proposals, making introductions, and more so don’t hesitate to get in touch!If you run a PSO and would like to be added to this post, please copy and fill out this template and send it to Sean.The What and Why of Profession-Specific OrganisationsPSOs focus their organisational mission and scope on supporting a specific profession, or group of related professions, with tailored content and programming to help members of that field learn more about EA and how they can contribute to the shared project of doing the most good in the world. Additionally, PSOs provide a community of like-minded professionals to help increase network effects amongst members, limit value-drift and make the EA space more welcoming to professionals overall.GoalsWe think this post could be valuable to:Increase the awareness of existing PSOs and their activities. Recent EAGs have given some of us the impression that members of the EA community are unaware that there are active community building efforts targeting professionals and what each group's target focus and mechanisms of engagement are.Summarise the methods for engagement with professionals. This post may help to motivate nascent professional-specific groups, highlight commonalities in PSO’s approaches so organisers can collaborate, and/or highlight gaps in approaches that could be filled.Help grow the membership of organisations.Inspire people to start groups for neglected professions. By seeing that there are already a lot of professional groups around, we hope that others see this as a viable way to have an impact in their profession.List of Profession-Specific Organisations(Please note that this list is not fully comprehensive)Last updated: 21st December 2022EA Architects and PlannersWhat do we do?We seek ways in which architects, urban designers, and planners can use their unique skills to get involved in EA causes and to have a greater positive impact. We are NOT focused on finding high-impact work within architecture and planning, but rather, explore a wide range of opportunities where architectural or design thinking may lead to more effective ways of doing good.As we research ways in which this might be achieved, we are updating the Career Options for EA Architects and Planners database to help others find inspiration.Our aim is to a) create an environment that support...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Summary of Profession-Based Community Building, published by Sean Lawrence on December 20, 2022 on The Effective Altruism Forum.TLDR: there are now a notable number of profession-based community building organisations in EA and below you will find the key activities, team structure, and ways to get involved for several of the most active organisations.SummaryThere has been a significant increase in the number of organisations catering to specific professions – profession-specific organisations (PSOs) – over the past 12-24 months and we want to update the community on who some of these organisations are and what they seek to provide for their members and the broader EA community.Please note that this list is not fully comprehensive. It should be treated as a partial snapshot of some of the organisations that exist in this space and a window into some of the ways in which they are doing community building in a profession-specific context. If your profession isn’t listed on this post then you may find an organisation that does cater for it on High Impact Professional’s Professional Groups Directory, which is a more comprehensive resource for finding professional groups (see their EA forum post). If you run a PSO that isn’t on the High Impact Professionals’ Workplace and Professional Groups Directory, please email Devon or book a call with him. Additionally, High Impact Professionals can help nascent PSOs through strategic support, reviewing grant proposals, making introductions, and more so don’t hesitate to get in touch!If you run a PSO and would like to be added to this post, please copy and fill out this template and send it to Sean.The What and Why of Profession-Specific OrganisationsPSOs focus their organisational mission and scope on supporting a specific profession, or group of related professions, with tailored content and programming to help members of that field learn more about EA and how they can contribute to the shared project of doing the most good in the world. Additionally, PSOs provide a community of like-minded professionals to help increase network effects amongst members, limit value-drift and make the EA space more welcoming to professionals overall.GoalsWe think this post could be valuable to:Increase the awareness of existing PSOs and their activities. Recent EAGs have given some of us the impression that members of the EA community are unaware that there are active community building efforts targeting professionals and what each group's target focus and mechanisms of engagement are.Summarise the methods for engagement with professionals. This post may help to motivate nascent professional-specific groups, highlight commonalities in PSO’s approaches so organisers can collaborate, and/or highlight gaps in approaches that could be filled.Help grow the membership of organisations.Inspire people to start groups for neglected professions. By seeing that there are already a lot of professional groups around, we hope that others see this as a viable way to have an impact in their profession.List of Profession-Specific Organisations(Please note that this list is not fully comprehensive)Last updated: 21st December 2022EA Architects and PlannersWhat do we do?We seek ways in which architects, urban designers, and planners can use their unique skills to get involved in EA causes and to have a greater positive impact. We are NOT focused on finding high-impact work within architecture and planning, but rather, explore a wide range of opportunities where architectural or design thinking may lead to more effective ways of doing good.As we research ways in which this might be achieved, we are updating the Career Options for EA Architects and Planners database to help others find inspiration.Our aim is to a) create an environment that support...]]>
Sean Lawrence https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 19:14 None full 4210
DajpFcaMrHv4fPLTy_EA EA - CEA's work in 2022 by MaxDalton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA's work in 2022, published by MaxDalton on December 21, 2022 on The Effective Altruism Forum.CEA and the EA community have both grown and changed a lot in the year since our last org-wide update. We (the CEA Executive Office team) have written this post to update the community on the work CEA has done in 2022.What is CEA?CEA (The Centre for Effective Altruism) is dedicated to nurturing a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them. We hope that this community can help to build a radically better world: so far it has helped to save over 150,000 lives, reduced the suffering of millions of farmed animals, and begun to address some of the biggest risks to humanity’s future.We do this by helping people to consider their ideas, values and options for and about helping, connecting them to advisors and experts in relevant domains, and facilitating high-quality discussion spaces. Our hope is that this helps people find an effective way to contribute that is a good fit for their skills and inclinations.We do this by...Running EA Global conferences and supporting community-organized EAGx conferences.Funding and advising hundreds of local effective altruism groups.Building and moderating the EA Forum, an online hub for discussing the ideas of effective altruism.Supporting community members through our community health team.We also produce the Effective Altruism Newsletter, which goes out to more than 50,000 subscribers, and run EffectiveAltruism.org, which hosts a collection of recommended resources.Attendees at EAG London in AprilOur priority is helping people who have heard about EA to deeply understand the ideas, and to find opportunities for making an impact in important fields. We think that top-of-funnel growth is likely already at or above healthy levels, so rather than aiming to increase the rate any further, we want to make that growth go well. (This is a shift from our thinking in 2021. The main reason for this shift is that we think that the growth rate of EA increased sharply in 2022.)You can read more about our strategy here, including how we make some of the key decisions we are responsible for, and a list of things we are not focusing on. One thing to note: we do not think of ourselves as having or wanting control over the EA community. We believe that a wide range of ideas and approaches are consistent with the core principles underpinning EA, and encourage others to identify and experiment with filling gaps left by our work.Our legal structureCEA is, like Giving What We Can, 80,000 Hours and a bunch of other projects, a project of the Effective Ventures group — the umbrella term for EVF (a UK registered charity) and CEA USA Inc. (a US registered charity), which are separate legal entities which work together.(We know the US charity name is confusing! To make matters even more complicated, until recently EVF was also known as CEA — that confusion is why EVF changed its name.)2022 in summary2022 was a year of continued growth for CEA and our programs.What went well?Growth of our key public-facing programsMany of our programs scaled up very rapidly, while maintaining (in our opinion) roughly constant quality.Some examples of this (with more examples and details below):Events: The number of connections made at our events grew by around 5x this year, which should help many more people find a way to contribute to important problems.Online: Engagement on the EA Forum grew by around 2.9x, helping the spread of important new ideas and richness of discussion.Groups: 208 organizers went through our University Groups Accelerator Program (10x growth for a new program starting from a low base), receiving 8 weeks of mentorship designed to accelerate EA journeys for organ...]]>
MaxDalton https://forum.effectivealtruism.org/posts/DajpFcaMrHv4fPLTy/cea-s-work-in-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA's work in 2022, published by MaxDalton on December 21, 2022 on The Effective Altruism Forum.CEA and the EA community have both grown and changed a lot in the year since our last org-wide update. We (the CEA Executive Office team) have written this post to update the community on the work CEA has done in 2022.What is CEA?CEA (The Centre for Effective Altruism) is dedicated to nurturing a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them. We hope that this community can help to build a radically better world: so far it has helped to save over 150,000 lives, reduced the suffering of millions of farmed animals, and begun to address some of the biggest risks to humanity’s future.We do this by helping people to consider their ideas, values and options for and about helping, connecting them to advisors and experts in relevant domains, and facilitating high-quality discussion spaces. Our hope is that this helps people find an effective way to contribute that is a good fit for their skills and inclinations.We do this by...Running EA Global conferences and supporting community-organized EAGx conferences.Funding and advising hundreds of local effective altruism groups.Building and moderating the EA Forum, an online hub for discussing the ideas of effective altruism.Supporting community members through our community health team.We also produce the Effective Altruism Newsletter, which goes out to more than 50,000 subscribers, and run EffectiveAltruism.org, which hosts a collection of recommended resources.Attendees at EAG London in AprilOur priority is helping people who have heard about EA to deeply understand the ideas, and to find opportunities for making an impact in important fields. We think that top-of-funnel growth is likely already at or above healthy levels, so rather than aiming to increase the rate any further, we want to make that growth go well. (This is a shift from our thinking in 2021. The main reason for this shift is that we think that the growth rate of EA increased sharply in 2022.)You can read more about our strategy here, including how we make some of the key decisions we are responsible for, and a list of things we are not focusing on. One thing to note: we do not think of ourselves as having or wanting control over the EA community. We believe that a wide range of ideas and approaches are consistent with the core principles underpinning EA, and encourage others to identify and experiment with filling gaps left by our work.Our legal structureCEA is, like Giving What We Can, 80,000 Hours and a bunch of other projects, a project of the Effective Ventures group — the umbrella term for EVF (a UK registered charity) and CEA USA Inc. (a US registered charity), which are separate legal entities which work together.(We know the US charity name is confusing! To make matters even more complicated, until recently EVF was also known as CEA — that confusion is why EVF changed its name.)2022 in summary2022 was a year of continued growth for CEA and our programs.What went well?Growth of our key public-facing programsMany of our programs scaled up very rapidly, while maintaining (in our opinion) roughly constant quality.Some examples of this (with more examples and details below):Events: The number of connections made at our events grew by around 5x this year, which should help many more people find a way to contribute to important problems.Online: Engagement on the EA Forum grew by around 2.9x, helping the spread of important new ideas and richness of discussion.Groups: 208 organizers went through our University Groups Accelerator Program (10x growth for a new program starting from a low base), receiving 8 weeks of mentorship designed to accelerate EA journeys for organ...]]>
Wed, 21 Dec 2022 10:32:08 +0000 EA - CEA's work in 2022 by MaxDalton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA's work in 2022, published by MaxDalton on December 21, 2022 on The Effective Altruism Forum.CEA and the EA community have both grown and changed a lot in the year since our last org-wide update. We (the CEA Executive Office team) have written this post to update the community on the work CEA has done in 2022.What is CEA?CEA (The Centre for Effective Altruism) is dedicated to nurturing a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them. We hope that this community can help to build a radically better world: so far it has helped to save over 150,000 lives, reduced the suffering of millions of farmed animals, and begun to address some of the biggest risks to humanity’s future.We do this by helping people to consider their ideas, values and options for and about helping, connecting them to advisors and experts in relevant domains, and facilitating high-quality discussion spaces. Our hope is that this helps people find an effective way to contribute that is a good fit for their skills and inclinations.We do this by...Running EA Global conferences and supporting community-organized EAGx conferences.Funding and advising hundreds of local effective altruism groups.Building and moderating the EA Forum, an online hub for discussing the ideas of effective altruism.Supporting community members through our community health team.We also produce the Effective Altruism Newsletter, which goes out to more than 50,000 subscribers, and run EffectiveAltruism.org, which hosts a collection of recommended resources.Attendees at EAG London in AprilOur priority is helping people who have heard about EA to deeply understand the ideas, and to find opportunities for making an impact in important fields. We think that top-of-funnel growth is likely already at or above healthy levels, so rather than aiming to increase the rate any further, we want to make that growth go well. (This is a shift from our thinking in 2021. The main reason for this shift is that we think that the growth rate of EA increased sharply in 2022.)You can read more about our strategy here, including how we make some of the key decisions we are responsible for, and a list of things we are not focusing on. One thing to note: we do not think of ourselves as having or wanting control over the EA community. We believe that a wide range of ideas and approaches are consistent with the core principles underpinning EA, and encourage others to identify and experiment with filling gaps left by our work.Our legal structureCEA is, like Giving What We Can, 80,000 Hours and a bunch of other projects, a project of the Effective Ventures group — the umbrella term for EVF (a UK registered charity) and CEA USA Inc. (a US registered charity), which are separate legal entities which work together.(We know the US charity name is confusing! To make matters even more complicated, until recently EVF was also known as CEA — that confusion is why EVF changed its name.)2022 in summary2022 was a year of continued growth for CEA and our programs.What went well?Growth of our key public-facing programsMany of our programs scaled up very rapidly, while maintaining (in our opinion) roughly constant quality.Some examples of this (with more examples and details below):Events: The number of connections made at our events grew by around 5x this year, which should help many more people find a way to contribute to important problems.Online: Engagement on the EA Forum grew by around 2.9x, helping the spread of important new ideas and richness of discussion.Groups: 208 organizers went through our University Groups Accelerator Program (10x growth for a new program starting from a low base), receiving 8 weeks of mentorship designed to accelerate EA journeys for organ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA's work in 2022, published by MaxDalton on December 21, 2022 on The Effective Altruism Forum.CEA and the EA community have both grown and changed a lot in the year since our last org-wide update. We (the CEA Executive Office team) have written this post to update the community on the work CEA has done in 2022.What is CEA?CEA (The Centre for Effective Altruism) is dedicated to nurturing a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them. We hope that this community can help to build a radically better world: so far it has helped to save over 150,000 lives, reduced the suffering of millions of farmed animals, and begun to address some of the biggest risks to humanity’s future.We do this by helping people to consider their ideas, values and options for and about helping, connecting them to advisors and experts in relevant domains, and facilitating high-quality discussion spaces. Our hope is that this helps people find an effective way to contribute that is a good fit for their skills and inclinations.We do this by...Running EA Global conferences and supporting community-organized EAGx conferences.Funding and advising hundreds of local effective altruism groups.Building and moderating the EA Forum, an online hub for discussing the ideas of effective altruism.Supporting community members through our community health team.We also produce the Effective Altruism Newsletter, which goes out to more than 50,000 subscribers, and run EffectiveAltruism.org, which hosts a collection of recommended resources.Attendees at EAG London in AprilOur priority is helping people who have heard about EA to deeply understand the ideas, and to find opportunities for making an impact in important fields. We think that top-of-funnel growth is likely already at or above healthy levels, so rather than aiming to increase the rate any further, we want to make that growth go well. (This is a shift from our thinking in 2021. The main reason for this shift is that we think that the growth rate of EA increased sharply in 2022.)You can read more about our strategy here, including how we make some of the key decisions we are responsible for, and a list of things we are not focusing on. One thing to note: we do not think of ourselves as having or wanting control over the EA community. We believe that a wide range of ideas and approaches are consistent with the core principles underpinning EA, and encourage others to identify and experiment with filling gaps left by our work.Our legal structureCEA is, like Giving What We Can, 80,000 Hours and a bunch of other projects, a project of the Effective Ventures group — the umbrella term for EVF (a UK registered charity) and CEA USA Inc. (a US registered charity), which are separate legal entities which work together.(We know the US charity name is confusing! To make matters even more complicated, until recently EVF was also known as CEA — that confusion is why EVF changed its name.)2022 in summary2022 was a year of continued growth for CEA and our programs.What went well?Growth of our key public-facing programsMany of our programs scaled up very rapidly, while maintaining (in our opinion) roughly constant quality.Some examples of this (with more examples and details below):Events: The number of connections made at our events grew by around 5x this year, which should help many more people find a way to contribute to important problems.Online: Engagement on the EA Forum grew by around 2.9x, helping the spread of important new ideas and richness of discussion.Groups: 208 organizers went through our University Groups Accelerator Program (10x growth for a new program starting from a low base), receiving 8 weeks of mentorship designed to accelerate EA journeys for organ...]]>
MaxDalton https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 20:52 None full 4200
KKoDGiSkfsnco8iQf_EA EA - Longtermism Fund: December 2022 Grants Report by Michael Townsend Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermism Fund: December 2022 Grants Report, published by Michael Townsend on December 21, 2022 on The Effective Altruism Forum.IntroductionThe Longtermism Fund is pleased to announce that we will be providing grants to the following organisations in our first-ever grantmaking round:Center for Human-Compatible Artificial Intelligence ($70,000 USD)SecureBio ($60,000 USD)Rethink Priorities' General Longtermism Team ($30,000 USD)Council on Strategic Risks' nuclear weapons policy work ($15,000 USD)These grants will be paid out in January 2023; the amounts were chosen based on what the Fund has received in donations as of today.In this payout report, we will provide more details about the grantmaking process and the grantees the fund is supporting. This report was written by Giving What We Can, which is responsible for the fund's communications. Longview Philanthropy is responsible for the Fund's research and grantmaking.Read more about the Longtermism Fund here and about funds more generally here.The grantmaking processLongview actively investigates high-impact funding opportunities for donors looking to improve the long-term future. This means the grants for the Longtermism Fund are generally decided by:Longview’s general work (which is not specific to the Longtermism Fund) evaluating the most cost-effective funding opportunities. This involves thousands of hours of work each year from their team. Read more about why we trust Longview as a grantmaker.Choosing among those opportunities based on the scope of the Fund.In addition to this, the Fund decided to support a diverse range of longtermist causes this grantmaking round. Our current plan is to provide grant reports approximately every six months.The scope of the FundThe scope of the Longtermism Fund is to support organisations that are:Reducing existential and catastrophic risks.Promoting, improving, and implementing key longtermist ideas.In addition, the fund aims to support organisations with a compelling and transparent case in favour of their cost-effectiveness that most donors interested in longtermism will understand, and/or that would benefit from being funded by a large number of donors.Why the fund is supporting organisations working on a diverse range of longtermist causesThere are several major risks to the long-term future that need to be addressed, and they differ in their size, potential severity, and how many promising solutions are available and in need of additional funding. However, there is a lot of uncertainty about which of these risks are most cost-effectively addressed by the next philanthropic dollar, even among expert grantmakers.Given this, the Fund aimed to:Provide funding to organisations working across a variety of high-impact longtermist causes, including:Improving biosecurity and pandemic preparednessPromoting beneficial AIAnd reducing the risks from nuclear war.Allocate an amount of funding to highly effective organisations working on these causes representative of how the fund might deploy resources across areas in the future.A benefit of this approach is that the grants highlight a diverse range of approaches to improving the long-term future.We expect the Fund to take a similar approach for the foreseeable future, but we could imagine it changing if there is a persistent and notable difference in the cost-effectiveness in the funding opportunities between causes. At this point, so long as those opportunities are within the Fund’s scope, funding the most cost-effective opportunities will likely outweigh supporting a more diverse range of causes.GranteesThis section will provide further information about the grantees and a specific comment from Longview on why they chose to fund the organisation and how they expect the funding to be used. Each page lin...]]>
Michael Townsend https://forum.effectivealtruism.org/posts/KKoDGiSkfsnco8iQf/longtermism-fund-december-2022-grants-report Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermism Fund: December 2022 Grants Report, published by Michael Townsend on December 21, 2022 on The Effective Altruism Forum.IntroductionThe Longtermism Fund is pleased to announce that we will be providing grants to the following organisations in our first-ever grantmaking round:Center for Human-Compatible Artificial Intelligence ($70,000 USD)SecureBio ($60,000 USD)Rethink Priorities' General Longtermism Team ($30,000 USD)Council on Strategic Risks' nuclear weapons policy work ($15,000 USD)These grants will be paid out in January 2023; the amounts were chosen based on what the Fund has received in donations as of today.In this payout report, we will provide more details about the grantmaking process and the grantees the fund is supporting. This report was written by Giving What We Can, which is responsible for the fund's communications. Longview Philanthropy is responsible for the Fund's research and grantmaking.Read more about the Longtermism Fund here and about funds more generally here.The grantmaking processLongview actively investigates high-impact funding opportunities for donors looking to improve the long-term future. This means the grants for the Longtermism Fund are generally decided by:Longview’s general work (which is not specific to the Longtermism Fund) evaluating the most cost-effective funding opportunities. This involves thousands of hours of work each year from their team. Read more about why we trust Longview as a grantmaker.Choosing among those opportunities based on the scope of the Fund.In addition to this, the Fund decided to support a diverse range of longtermist causes this grantmaking round. Our current plan is to provide grant reports approximately every six months.The scope of the FundThe scope of the Longtermism Fund is to support organisations that are:Reducing existential and catastrophic risks.Promoting, improving, and implementing key longtermist ideas.In addition, the fund aims to support organisations with a compelling and transparent case in favour of their cost-effectiveness that most donors interested in longtermism will understand, and/or that would benefit from being funded by a large number of donors.Why the fund is supporting organisations working on a diverse range of longtermist causesThere are several major risks to the long-term future that need to be addressed, and they differ in their size, potential severity, and how many promising solutions are available and in need of additional funding. However, there is a lot of uncertainty about which of these risks are most cost-effectively addressed by the next philanthropic dollar, even among expert grantmakers.Given this, the Fund aimed to:Provide funding to organisations working across a variety of high-impact longtermist causes, including:Improving biosecurity and pandemic preparednessPromoting beneficial AIAnd reducing the risks from nuclear war.Allocate an amount of funding to highly effective organisations working on these causes representative of how the fund might deploy resources across areas in the future.A benefit of this approach is that the grants highlight a diverse range of approaches to improving the long-term future.We expect the Fund to take a similar approach for the foreseeable future, but we could imagine it changing if there is a persistent and notable difference in the cost-effectiveness in the funding opportunities between causes. At this point, so long as those opportunities are within the Fund’s scope, funding the most cost-effective opportunities will likely outweigh supporting a more diverse range of causes.GranteesThis section will provide further information about the grantees and a specific comment from Longview on why they chose to fund the organisation and how they expect the funding to be used. Each page lin...]]>
Wed, 21 Dec 2022 04:14:38 +0000 EA - Longtermism Fund: December 2022 Grants Report by Michael Townsend Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermism Fund: December 2022 Grants Report, published by Michael Townsend on December 21, 2022 on The Effective Altruism Forum.IntroductionThe Longtermism Fund is pleased to announce that we will be providing grants to the following organisations in our first-ever grantmaking round:Center for Human-Compatible Artificial Intelligence ($70,000 USD)SecureBio ($60,000 USD)Rethink Priorities' General Longtermism Team ($30,000 USD)Council on Strategic Risks' nuclear weapons policy work ($15,000 USD)These grants will be paid out in January 2023; the amounts were chosen based on what the Fund has received in donations as of today.In this payout report, we will provide more details about the grantmaking process and the grantees the fund is supporting. This report was written by Giving What We Can, which is responsible for the fund's communications. Longview Philanthropy is responsible for the Fund's research and grantmaking.Read more about the Longtermism Fund here and about funds more generally here.The grantmaking processLongview actively investigates high-impact funding opportunities for donors looking to improve the long-term future. This means the grants for the Longtermism Fund are generally decided by:Longview’s general work (which is not specific to the Longtermism Fund) evaluating the most cost-effective funding opportunities. This involves thousands of hours of work each year from their team. Read more about why we trust Longview as a grantmaker.Choosing among those opportunities based on the scope of the Fund.In addition to this, the Fund decided to support a diverse range of longtermist causes this grantmaking round. Our current plan is to provide grant reports approximately every six months.The scope of the FundThe scope of the Longtermism Fund is to support organisations that are:Reducing existential and catastrophic risks.Promoting, improving, and implementing key longtermist ideas.In addition, the fund aims to support organisations with a compelling and transparent case in favour of their cost-effectiveness that most donors interested in longtermism will understand, and/or that would benefit from being funded by a large number of donors.Why the fund is supporting organisations working on a diverse range of longtermist causesThere are several major risks to the long-term future that need to be addressed, and they differ in their size, potential severity, and how many promising solutions are available and in need of additional funding. However, there is a lot of uncertainty about which of these risks are most cost-effectively addressed by the next philanthropic dollar, even among expert grantmakers.Given this, the Fund aimed to:Provide funding to organisations working across a variety of high-impact longtermist causes, including:Improving biosecurity and pandemic preparednessPromoting beneficial AIAnd reducing the risks from nuclear war.Allocate an amount of funding to highly effective organisations working on these causes representative of how the fund might deploy resources across areas in the future.A benefit of this approach is that the grants highlight a diverse range of approaches to improving the long-term future.We expect the Fund to take a similar approach for the foreseeable future, but we could imagine it changing if there is a persistent and notable difference in the cost-effectiveness in the funding opportunities between causes. At this point, so long as those opportunities are within the Fund’s scope, funding the most cost-effective opportunities will likely outweigh supporting a more diverse range of causes.GranteesThis section will provide further information about the grantees and a specific comment from Longview on why they chose to fund the organisation and how they expect the funding to be used. Each page lin...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermism Fund: December 2022 Grants Report, published by Michael Townsend on December 21, 2022 on The Effective Altruism Forum.IntroductionThe Longtermism Fund is pleased to announce that we will be providing grants to the following organisations in our first-ever grantmaking round:Center for Human-Compatible Artificial Intelligence ($70,000 USD)SecureBio ($60,000 USD)Rethink Priorities' General Longtermism Team ($30,000 USD)Council on Strategic Risks' nuclear weapons policy work ($15,000 USD)These grants will be paid out in January 2023; the amounts were chosen based on what the Fund has received in donations as of today.In this payout report, we will provide more details about the grantmaking process and the grantees the fund is supporting. This report was written by Giving What We Can, which is responsible for the fund's communications. Longview Philanthropy is responsible for the Fund's research and grantmaking.Read more about the Longtermism Fund here and about funds more generally here.The grantmaking processLongview actively investigates high-impact funding opportunities for donors looking to improve the long-term future. This means the grants for the Longtermism Fund are generally decided by:Longview’s general work (which is not specific to the Longtermism Fund) evaluating the most cost-effective funding opportunities. This involves thousands of hours of work each year from their team. Read more about why we trust Longview as a grantmaker.Choosing among those opportunities based on the scope of the Fund.In addition to this, the Fund decided to support a diverse range of longtermist causes this grantmaking round. Our current plan is to provide grant reports approximately every six months.The scope of the FundThe scope of the Longtermism Fund is to support organisations that are:Reducing existential and catastrophic risks.Promoting, improving, and implementing key longtermist ideas.In addition, the fund aims to support organisations with a compelling and transparent case in favour of their cost-effectiveness that most donors interested in longtermism will understand, and/or that would benefit from being funded by a large number of donors.Why the fund is supporting organisations working on a diverse range of longtermist causesThere are several major risks to the long-term future that need to be addressed, and they differ in their size, potential severity, and how many promising solutions are available and in need of additional funding. However, there is a lot of uncertainty about which of these risks are most cost-effectively addressed by the next philanthropic dollar, even among expert grantmakers.Given this, the Fund aimed to:Provide funding to organisations working across a variety of high-impact longtermist causes, including:Improving biosecurity and pandemic preparednessPromoting beneficial AIAnd reducing the risks from nuclear war.Allocate an amount of funding to highly effective organisations working on these causes representative of how the fund might deploy resources across areas in the future.A benefit of this approach is that the grants highlight a diverse range of approaches to improving the long-term future.We expect the Fund to take a similar approach for the foreseeable future, but we could imagine it changing if there is a persistent and notable difference in the cost-effectiveness in the funding opportunities between causes. At this point, so long as those opportunities are within the Fund’s scope, funding the most cost-effective opportunities will likely outweigh supporting a more diverse range of causes.GranteesThis section will provide further information about the grantees and a specific comment from Longview on why they chose to fund the organisation and how they expect the funding to be used. Each page lin...]]>
Michael Townsend https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:47 None full 4201
o6LNeNoHBA7Bv9kGE_EA EA - Bad Omens in current EA Governance by ludwigbald Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bad Omens in current EA Governance, published by ludwigbald on December 20, 2022 on The Effective Altruism Forum.Edit: Please do scroll down and read the comments by Giving What We Can staff, who added context and clarified errors that remain in this post.I originally wanted to write a comment to the forum post CEA Disambiguation, which contains further context, but I believe this warrants its own post.The Effective Ventures Foundation (formerly known as CEA) (I'll call them EVF) runs many projects, including 80.000 hours, Giving What We Can, Longview Philanthropy, EA Funds, and the Centre for Effective Altruism (CEA), which in turn seems to run this forum.It is very strange to learn that these organizations are not independent from each other, and the EVF board can exert influence over each of them. I believe this structure was set up so the EVF board has central control over EA strategy.I think this is very bad. EVF can not be trusted to unbiasedly serve the EA community as a whole, it misleads donors, and it exposes effective altruism to unnecessary risks of contagion.An example (misleading donors):As "Giving What We Can", EVF currently recommends donations to a number of funds that are run by EVF:Longview Philanthropy: Longtermism Fundseveral Funds run by Effective Altruism FundsThrough the "EA Funds Longterm Future Fund", EVF has repeatedly paid out grants to itself, for example in July 2021 it paid itself $177,000 for its project "Centre for the Governance of AI".Another example (biased advertising):On/, which serves as an introduction to EA, the EVF links to its own project 80000hours, but not to the competing Probably Good.In both examples, the obvious conflicts of interest are stated nowhere.What should we do?I have not thought hard about this, but I have come up with a few obvious-sounding ideas. Please leave your thoughts in the comments!This is what I think we should do:I think we should break up the EVF into independent projects, especially those that direct or receive funding. Until that happens, we should conceive of EVF as a single entity.We need to push for more transparency. EVF's "EA Funds"-branded funds publicly disclose their spending, which is commendable! EVF's "Longview Longtermism Fund" does not. Funds should definitely disclose their conflicts of interest.We should champion community-run organizations like EA Germany e.V. or the Czech EA Association, and let them step into their natural role of representing the community. GWWC members should demand control over their institution.We should continue the debate about EA's governance norms. In order to de-risk the community and to represent our values, we should establish democratic, transparent and fair governance on all levels, including local groups.We probably should rethink supporting community leaders that consolidate their power instead of distributing it.DFTBA,LudwigPS: the same consideration applies for effektiveraltruismus.de, which is run by an EA donation platform, and not by EA Germany.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ludwigbald https://forum.effectivealtruism.org/posts/o6LNeNoHBA7Bv9kGE/bad-omens-in-current-ea-governance Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bad Omens in current EA Governance, published by ludwigbald on December 20, 2022 on The Effective Altruism Forum.Edit: Please do scroll down and read the comments by Giving What We Can staff, who added context and clarified errors that remain in this post.I originally wanted to write a comment to the forum post CEA Disambiguation, which contains further context, but I believe this warrants its own post.The Effective Ventures Foundation (formerly known as CEA) (I'll call them EVF) runs many projects, including 80.000 hours, Giving What We Can, Longview Philanthropy, EA Funds, and the Centre for Effective Altruism (CEA), which in turn seems to run this forum.It is very strange to learn that these organizations are not independent from each other, and the EVF board can exert influence over each of them. I believe this structure was set up so the EVF board has central control over EA strategy.I think this is very bad. EVF can not be trusted to unbiasedly serve the EA community as a whole, it misleads donors, and it exposes effective altruism to unnecessary risks of contagion.An example (misleading donors):As "Giving What We Can", EVF currently recommends donations to a number of funds that are run by EVF:Longview Philanthropy: Longtermism Fundseveral Funds run by Effective Altruism FundsThrough the "EA Funds Longterm Future Fund", EVF has repeatedly paid out grants to itself, for example in July 2021 it paid itself $177,000 for its project "Centre for the Governance of AI".Another example (biased advertising):On/, which serves as an introduction to EA, the EVF links to its own project 80000hours, but not to the competing Probably Good.In both examples, the obvious conflicts of interest are stated nowhere.What should we do?I have not thought hard about this, but I have come up with a few obvious-sounding ideas. Please leave your thoughts in the comments!This is what I think we should do:I think we should break up the EVF into independent projects, especially those that direct or receive funding. Until that happens, we should conceive of EVF as a single entity.We need to push for more transparency. EVF's "EA Funds"-branded funds publicly disclose their spending, which is commendable! EVF's "Longview Longtermism Fund" does not. Funds should definitely disclose their conflicts of interest.We should champion community-run organizations like EA Germany e.V. or the Czech EA Association, and let them step into their natural role of representing the community. GWWC members should demand control over their institution.We should continue the debate about EA's governance norms. In order to de-risk the community and to represent our values, we should establish democratic, transparent and fair governance on all levels, including local groups.We probably should rethink supporting community leaders that consolidate their power instead of distributing it.DFTBA,LudwigPS: the same consideration applies for effektiveraltruismus.de, which is run by an EA donation platform, and not by EA Germany.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 21 Dec 2022 01:17:48 +0000 EA - Bad Omens in current EA Governance by ludwigbald Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bad Omens in current EA Governance, published by ludwigbald on December 20, 2022 on The Effective Altruism Forum.Edit: Please do scroll down and read the comments by Giving What We Can staff, who added context and clarified errors that remain in this post.I originally wanted to write a comment to the forum post CEA Disambiguation, which contains further context, but I believe this warrants its own post.The Effective Ventures Foundation (formerly known as CEA) (I'll call them EVF) runs many projects, including 80.000 hours, Giving What We Can, Longview Philanthropy, EA Funds, and the Centre for Effective Altruism (CEA), which in turn seems to run this forum.It is very strange to learn that these organizations are not independent from each other, and the EVF board can exert influence over each of them. I believe this structure was set up so the EVF board has central control over EA strategy.I think this is very bad. EVF can not be trusted to unbiasedly serve the EA community as a whole, it misleads donors, and it exposes effective altruism to unnecessary risks of contagion.An example (misleading donors):As "Giving What We Can", EVF currently recommends donations to a number of funds that are run by EVF:Longview Philanthropy: Longtermism Fundseveral Funds run by Effective Altruism FundsThrough the "EA Funds Longterm Future Fund", EVF has repeatedly paid out grants to itself, for example in July 2021 it paid itself $177,000 for its project "Centre for the Governance of AI".Another example (biased advertising):On/, which serves as an introduction to EA, the EVF links to its own project 80000hours, but not to the competing Probably Good.In both examples, the obvious conflicts of interest are stated nowhere.What should we do?I have not thought hard about this, but I have come up with a few obvious-sounding ideas. Please leave your thoughts in the comments!This is what I think we should do:I think we should break up the EVF into independent projects, especially those that direct or receive funding. Until that happens, we should conceive of EVF as a single entity.We need to push for more transparency. EVF's "EA Funds"-branded funds publicly disclose their spending, which is commendable! EVF's "Longview Longtermism Fund" does not. Funds should definitely disclose their conflicts of interest.We should champion community-run organizations like EA Germany e.V. or the Czech EA Association, and let them step into their natural role of representing the community. GWWC members should demand control over their institution.We should continue the debate about EA's governance norms. In order to de-risk the community and to represent our values, we should establish democratic, transparent and fair governance on all levels, including local groups.We probably should rethink supporting community leaders that consolidate their power instead of distributing it.DFTBA,LudwigPS: the same consideration applies for effektiveraltruismus.de, which is run by an EA donation platform, and not by EA Germany.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bad Omens in current EA Governance, published by ludwigbald on December 20, 2022 on The Effective Altruism Forum.Edit: Please do scroll down and read the comments by Giving What We Can staff, who added context and clarified errors that remain in this post.I originally wanted to write a comment to the forum post CEA Disambiguation, which contains further context, but I believe this warrants its own post.The Effective Ventures Foundation (formerly known as CEA) (I'll call them EVF) runs many projects, including 80.000 hours, Giving What We Can, Longview Philanthropy, EA Funds, and the Centre for Effective Altruism (CEA), which in turn seems to run this forum.It is very strange to learn that these organizations are not independent from each other, and the EVF board can exert influence over each of them. I believe this structure was set up so the EVF board has central control over EA strategy.I think this is very bad. EVF can not be trusted to unbiasedly serve the EA community as a whole, it misleads donors, and it exposes effective altruism to unnecessary risks of contagion.An example (misleading donors):As "Giving What We Can", EVF currently recommends donations to a number of funds that are run by EVF:Longview Philanthropy: Longtermism Fundseveral Funds run by Effective Altruism FundsThrough the "EA Funds Longterm Future Fund", EVF has repeatedly paid out grants to itself, for example in July 2021 it paid itself $177,000 for its project "Centre for the Governance of AI".Another example (biased advertising):On/, which serves as an introduction to EA, the EVF links to its own project 80000hours, but not to the competing Probably Good.In both examples, the obvious conflicts of interest are stated nowhere.What should we do?I have not thought hard about this, but I have come up with a few obvious-sounding ideas. Please leave your thoughts in the comments!This is what I think we should do:I think we should break up the EVF into independent projects, especially those that direct or receive funding. Until that happens, we should conceive of EVF as a single entity.We need to push for more transparency. EVF's "EA Funds"-branded funds publicly disclose their spending, which is commendable! EVF's "Longview Longtermism Fund" does not. Funds should definitely disclose their conflicts of interest.We should champion community-run organizations like EA Germany e.V. or the Czech EA Association, and let them step into their natural role of representing the community. GWWC members should demand control over their institution.We should continue the debate about EA's governance norms. In order to de-risk the community and to represent our values, we should establish democratic, transparent and fair governance on all levels, including local groups.We probably should rethink supporting community leaders that consolidate their power instead of distributing it.DFTBA,LudwigPS: the same consideration applies for effektiveraltruismus.de, which is run by an EA donation platform, and not by EA Germany.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ludwigbald https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:14 None full 4202
dfMdez3jZMbhBwwJa_EA EA - New blog: Some doubts about effective altruism by David Thorstad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New blog: Some doubts about effective altruism, published by David Thorstad on December 20, 2022 on The Effective Altruism Forum.I’m a research fellow in philosophy at the Global Priorities Institute. There are many things I like about effective altruism. I’ve started a blog to discuss some views and practices in effective altruism that I don’t like, in order to drive positive change both within and outside of the movement.About meI’m a research fellow in philosophy at the Global Priorities Institute, and a Junior Research Fellow at Kellogg College. Before coming to Oxford, I did a PhD in philosophy at Harvard under the incomparable Ned Hall, and BA in philosophy and mathematics at Haverford College. I held down a few jobs along the way, including a stint teaching high-school mathematics in Lawrence, Massachusetts and a summer gig as a librarian for the North Carolina National Guard. I’m quite fond of dogs.Who should read this blog?The aim of the blog is to feature (1) long-form, serial discussions of views and practices in and around effective altruism, (2) driven by academic research, and from a perspective that (3) shares a number of important views and methods with many effective altruists.This blog might be for you if:You would like to know why someone who shares many background views with effective altruists could nonetheless be worried about some existing views and practices.You are interested in learning more about the implications of academic research for views and practices in effective altruism.You think that empirically-grounded philosophical reflection is a good way to gain knowledge about the world.You have a moderate amount of time to devote to reading and discussion (20-30mins/post).You don't mind reading series of overlapping posts.This blog might not be for you if:You would like to know why someone who has little in common with effective altruists might be worried about the movement.You aren’t keen on philosophy, even when empirically grounded.You have a short amount of time to devote to reading.You like standalone posts and hate series.Blog seriesThe blog is primarily organized around series of posts, rather than individual posts. I’ve kicked off the blog with four series.Academic papers: This series summarizes cutting-edge academic research relevant to questions in and around the effective altruism movement.Existential risk pessimism and the time of perils:Part 1 introduces a tension between Existential Risk Pessimism (risk is high) and the Astronomical Value Thesis (it’s very important to drive down risk).Part 2 looks at some failed solutions to the tension.Part 3 looks at a better solution: the Time of Perils Hypothesis.Part 4 looks at one argument for the Time of Perils Hypothesis, which appeals to space settlement.Part 5 looks at a second argument for the Time of Perils Hypothesis, which appeals to the concept of an existential risk Kuznets curve.Parts 6-8 (coming soon) round out the paper and draw implications.Academics review What we owe the future: This series looks at book reviews of MacAskill’s What we owe the future by leading academics to draw out insights from those reviews.Part 1 looks at Kieran Setiya’s review, focusing on population ethics.Part 2 (coming soon) looks at Richard Chappell’s review.Part 3 (coming soon) looks at Regina Rini’s review.Exaggerating the risks: I think that current levels of existential risk are substantially lower than many leading EAs take them to be. In this series, I say why I think that.Part 1 introduces the series.Part 2 looks at Ord’s discussion of climate risk in The Precipice.Part 3 takes a first look at the Halstead report on climate risk.Parts 4-6 (coming soon) wrap up the discussion of climate risk and draw lessons.Billionaire philanthropy: What is the role of b...]]>
David Thorstad https://forum.effectivealtruism.org/posts/dfMdez3jZMbhBwwJa/new-blog-some-doubts-about-effective-altruism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New blog: Some doubts about effective altruism, published by David Thorstad on December 20, 2022 on The Effective Altruism Forum.I’m a research fellow in philosophy at the Global Priorities Institute. There are many things I like about effective altruism. I’ve started a blog to discuss some views and practices in effective altruism that I don’t like, in order to drive positive change both within and outside of the movement.About meI’m a research fellow in philosophy at the Global Priorities Institute, and a Junior Research Fellow at Kellogg College. Before coming to Oxford, I did a PhD in philosophy at Harvard under the incomparable Ned Hall, and BA in philosophy and mathematics at Haverford College. I held down a few jobs along the way, including a stint teaching high-school mathematics in Lawrence, Massachusetts and a summer gig as a librarian for the North Carolina National Guard. I’m quite fond of dogs.Who should read this blog?The aim of the blog is to feature (1) long-form, serial discussions of views and practices in and around effective altruism, (2) driven by academic research, and from a perspective that (3) shares a number of important views and methods with many effective altruists.This blog might be for you if:You would like to know why someone who shares many background views with effective altruists could nonetheless be worried about some existing views and practices.You are interested in learning more about the implications of academic research for views and practices in effective altruism.You think that empirically-grounded philosophical reflection is a good way to gain knowledge about the world.You have a moderate amount of time to devote to reading and discussion (20-30mins/post).You don't mind reading series of overlapping posts.This blog might not be for you if:You would like to know why someone who has little in common with effective altruists might be worried about the movement.You aren’t keen on philosophy, even when empirically grounded.You have a short amount of time to devote to reading.You like standalone posts and hate series.Blog seriesThe blog is primarily organized around series of posts, rather than individual posts. I’ve kicked off the blog with four series.Academic papers: This series summarizes cutting-edge academic research relevant to questions in and around the effective altruism movement.Existential risk pessimism and the time of perils:Part 1 introduces a tension between Existential Risk Pessimism (risk is high) and the Astronomical Value Thesis (it’s very important to drive down risk).Part 2 looks at some failed solutions to the tension.Part 3 looks at a better solution: the Time of Perils Hypothesis.Part 4 looks at one argument for the Time of Perils Hypothesis, which appeals to space settlement.Part 5 looks at a second argument for the Time of Perils Hypothesis, which appeals to the concept of an existential risk Kuznets curve.Parts 6-8 (coming soon) round out the paper and draw implications.Academics review What we owe the future: This series looks at book reviews of MacAskill’s What we owe the future by leading academics to draw out insights from those reviews.Part 1 looks at Kieran Setiya’s review, focusing on population ethics.Part 2 (coming soon) looks at Richard Chappell’s review.Part 3 (coming soon) looks at Regina Rini’s review.Exaggerating the risks: I think that current levels of existential risk are substantially lower than many leading EAs take them to be. In this series, I say why I think that.Part 1 introduces the series.Part 2 looks at Ord’s discussion of climate risk in The Precipice.Part 3 takes a first look at the Halstead report on climate risk.Parts 4-6 (coming soon) wrap up the discussion of climate risk and draw lessons.Billionaire philanthropy: What is the role of b...]]>
Tue, 20 Dec 2022 21:29:32 +0000 EA - New blog: Some doubts about effective altruism by David Thorstad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New blog: Some doubts about effective altruism, published by David Thorstad on December 20, 2022 on The Effective Altruism Forum.I’m a research fellow in philosophy at the Global Priorities Institute. There are many things I like about effective altruism. I’ve started a blog to discuss some views and practices in effective altruism that I don’t like, in order to drive positive change both within and outside of the movement.About meI’m a research fellow in philosophy at the Global Priorities Institute, and a Junior Research Fellow at Kellogg College. Before coming to Oxford, I did a PhD in philosophy at Harvard under the incomparable Ned Hall, and BA in philosophy and mathematics at Haverford College. I held down a few jobs along the way, including a stint teaching high-school mathematics in Lawrence, Massachusetts and a summer gig as a librarian for the North Carolina National Guard. I’m quite fond of dogs.Who should read this blog?The aim of the blog is to feature (1) long-form, serial discussions of views and practices in and around effective altruism, (2) driven by academic research, and from a perspective that (3) shares a number of important views and methods with many effective altruists.This blog might be for you if:You would like to know why someone who shares many background views with effective altruists could nonetheless be worried about some existing views and practices.You are interested in learning more about the implications of academic research for views and practices in effective altruism.You think that empirically-grounded philosophical reflection is a good way to gain knowledge about the world.You have a moderate amount of time to devote to reading and discussion (20-30mins/post).You don't mind reading series of overlapping posts.This blog might not be for you if:You would like to know why someone who has little in common with effective altruists might be worried about the movement.You aren’t keen on philosophy, even when empirically grounded.You have a short amount of time to devote to reading.You like standalone posts and hate series.Blog seriesThe blog is primarily organized around series of posts, rather than individual posts. I’ve kicked off the blog with four series.Academic papers: This series summarizes cutting-edge academic research relevant to questions in and around the effective altruism movement.Existential risk pessimism and the time of perils:Part 1 introduces a tension between Existential Risk Pessimism (risk is high) and the Astronomical Value Thesis (it’s very important to drive down risk).Part 2 looks at some failed solutions to the tension.Part 3 looks at a better solution: the Time of Perils Hypothesis.Part 4 looks at one argument for the Time of Perils Hypothesis, which appeals to space settlement.Part 5 looks at a second argument for the Time of Perils Hypothesis, which appeals to the concept of an existential risk Kuznets curve.Parts 6-8 (coming soon) round out the paper and draw implications.Academics review What we owe the future: This series looks at book reviews of MacAskill’s What we owe the future by leading academics to draw out insights from those reviews.Part 1 looks at Kieran Setiya’s review, focusing on population ethics.Part 2 (coming soon) looks at Richard Chappell’s review.Part 3 (coming soon) looks at Regina Rini’s review.Exaggerating the risks: I think that current levels of existential risk are substantially lower than many leading EAs take them to be. In this series, I say why I think that.Part 1 introduces the series.Part 2 looks at Ord’s discussion of climate risk in The Precipice.Part 3 takes a first look at the Halstead report on climate risk.Parts 4-6 (coming soon) wrap up the discussion of climate risk and draw lessons.Billionaire philanthropy: What is the role of b...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New blog: Some doubts about effective altruism, published by David Thorstad on December 20, 2022 on The Effective Altruism Forum.I’m a research fellow in philosophy at the Global Priorities Institute. There are many things I like about effective altruism. I’ve started a blog to discuss some views and practices in effective altruism that I don’t like, in order to drive positive change both within and outside of the movement.About meI’m a research fellow in philosophy at the Global Priorities Institute, and a Junior Research Fellow at Kellogg College. Before coming to Oxford, I did a PhD in philosophy at Harvard under the incomparable Ned Hall, and BA in philosophy and mathematics at Haverford College. I held down a few jobs along the way, including a stint teaching high-school mathematics in Lawrence, Massachusetts and a summer gig as a librarian for the North Carolina National Guard. I’m quite fond of dogs.Who should read this blog?The aim of the blog is to feature (1) long-form, serial discussions of views and practices in and around effective altruism, (2) driven by academic research, and from a perspective that (3) shares a number of important views and methods with many effective altruists.This blog might be for you if:You would like to know why someone who shares many background views with effective altruists could nonetheless be worried about some existing views and practices.You are interested in learning more about the implications of academic research for views and practices in effective altruism.You think that empirically-grounded philosophical reflection is a good way to gain knowledge about the world.You have a moderate amount of time to devote to reading and discussion (20-30mins/post).You don't mind reading series of overlapping posts.This blog might not be for you if:You would like to know why someone who has little in common with effective altruists might be worried about the movement.You aren’t keen on philosophy, even when empirically grounded.You have a short amount of time to devote to reading.You like standalone posts and hate series.Blog seriesThe blog is primarily organized around series of posts, rather than individual posts. I’ve kicked off the blog with four series.Academic papers: This series summarizes cutting-edge academic research relevant to questions in and around the effective altruism movement.Existential risk pessimism and the time of perils:Part 1 introduces a tension between Existential Risk Pessimism (risk is high) and the Astronomical Value Thesis (it’s very important to drive down risk).Part 2 looks at some failed solutions to the tension.Part 3 looks at a better solution: the Time of Perils Hypothesis.Part 4 looks at one argument for the Time of Perils Hypothesis, which appeals to space settlement.Part 5 looks at a second argument for the Time of Perils Hypothesis, which appeals to the concept of an existential risk Kuznets curve.Parts 6-8 (coming soon) round out the paper and draw implications.Academics review What we owe the future: This series looks at book reviews of MacAskill’s What we owe the future by leading academics to draw out insights from those reviews.Part 1 looks at Kieran Setiya’s review, focusing on population ethics.Part 2 (coming soon) looks at Richard Chappell’s review.Part 3 (coming soon) looks at Regina Rini’s review.Exaggerating the risks: I think that current levels of existential risk are substantially lower than many leading EAs take them to be. In this series, I say why I think that.Part 1 introduces the series.Part 2 looks at Ord’s discussion of climate risk in The Precipice.Part 3 takes a first look at the Halstead report on climate risk.Parts 4-6 (coming soon) wrap up the discussion of climate risk and draw lessons.Billionaire philanthropy: What is the role of b...]]>
David Thorstad https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:19 None full 4192
6ma8rxrfYs3njyQZn_EA EA - A Case for Voluntary Abortion Reduction by Ariel Simnegar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Case for Voluntary Abortion Reduction, published by Ariel Simnegar on December 20, 2022 on The Effective Altruism Forum.Trigger warning: Abortion is a delicate topic, especially for those of us who've had abortions or otherwise feel strongly on this issue. I've tried to make the following case with care and sensitivity, and if it makes anyone feel uncomfortable, I wholeheartedly apologize.Disclaimer: This essay specifically concerns voluntary abortion reduction. Any discussion of involuntary intervention is outside of this post's scope.Thanks to Ives Parrhesia, Marcus Abramovitch, Ruth Grace Wong, and several anonymous helpers. Their help does not constitute an endorsement of this essay's conclusions.SummaryMany EA principles point us towards supporting voluntary abortion reduction:Moral circle expansion.We're receptive to arguments that we should expand our moral circle to include animals and future people.We should be open to the possibility that fetuses—the future people closest to us—could be included in our moral circle too.Our concern for neglected and disenfranchised moral groups.If fetuses are moral patients, then they are relatively neglected and disenfranchised, with more abortions occuring each year than deaths by all causes combined.The metric of (adjusted) life years.We commonly use (adjusted) life years as a measure of the disvalue of problems and the value of interventions.This metric arguably doesn't distinguish between fetal deaths and infant deaths.Singerian duties to give to help those in need.We're typically sympathetic to arguments that we should proactively help those in need, even if it reduces our personal autonomy.We should consider whether we should help our children the same way.Longtermist philosophical views.Longtermists are typically receptive to total / low critical level views, non-person-affecting views, and pro-natalism.Just as these views seem to imply that we should care for people in the far future, they also seem to imply that we should care for fetuses, the future people closest to us.Moral uncertainty's implications for a potential problem of massive scale.Given abortion's massive scale, even a small chance that fetuses are moral patients could imply that we should do something about it.In that regard, we should carry out the following interventions:Shift our family-focused interventions to spotlight mothers' physical and mental health, and support adoption as an option.Suspend our support for charities which reduce the amount of near-term future people until we can systematically review the effect of the above moral considerations on the morality of the charities' interventions.In our personal lives, we should:Understand the situations of people we know who are considering abortion and do whatever we can to support them in having their babies the way they would like.Help each other to be loving parents and raise thriving children, whether or not some of us have abortions or choose to not have children.Introduction: Moral Circle ExpansionFuture people count, but we rarely count on them. They cannot vote or lobby or run for public office, so politicians have scant incentive to think about them. They can't bargain or trade with us, so they have little representation in the market. And they can't make their views heard directly: they can't tweet, or write articles in newspapers, or march in the streets. They are utterly disenfranchised.The idea that future people count is common sense. Future people, after all, are people. They will exist. They will have hopes and joys and pains and regrets, just like the rest of us. They just don't exist yet.Will MacAskill, What We Owe The Future (2022), pp. 9-10.As EAs, we're no strangers to expanding our moral circle. We’re rooted in the idea that distance shou...]]>
Ariel Simnegar https://forum.effectivealtruism.org/posts/6ma8rxrfYs3njyQZn/a-case-for-voluntary-abortion-reduction Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Case for Voluntary Abortion Reduction, published by Ariel Simnegar on December 20, 2022 on The Effective Altruism Forum.Trigger warning: Abortion is a delicate topic, especially for those of us who've had abortions or otherwise feel strongly on this issue. I've tried to make the following case with care and sensitivity, and if it makes anyone feel uncomfortable, I wholeheartedly apologize.Disclaimer: This essay specifically concerns voluntary abortion reduction. Any discussion of involuntary intervention is outside of this post's scope.Thanks to Ives Parrhesia, Marcus Abramovitch, Ruth Grace Wong, and several anonymous helpers. Their help does not constitute an endorsement of this essay's conclusions.SummaryMany EA principles point us towards supporting voluntary abortion reduction:Moral circle expansion.We're receptive to arguments that we should expand our moral circle to include animals and future people.We should be open to the possibility that fetuses—the future people closest to us—could be included in our moral circle too.Our concern for neglected and disenfranchised moral groups.If fetuses are moral patients, then they are relatively neglected and disenfranchised, with more abortions occuring each year than deaths by all causes combined.The metric of (adjusted) life years.We commonly use (adjusted) life years as a measure of the disvalue of problems and the value of interventions.This metric arguably doesn't distinguish between fetal deaths and infant deaths.Singerian duties to give to help those in need.We're typically sympathetic to arguments that we should proactively help those in need, even if it reduces our personal autonomy.We should consider whether we should help our children the same way.Longtermist philosophical views.Longtermists are typically receptive to total / low critical level views, non-person-affecting views, and pro-natalism.Just as these views seem to imply that we should care for people in the far future, they also seem to imply that we should care for fetuses, the future people closest to us.Moral uncertainty's implications for a potential problem of massive scale.Given abortion's massive scale, even a small chance that fetuses are moral patients could imply that we should do something about it.In that regard, we should carry out the following interventions:Shift our family-focused interventions to spotlight mothers' physical and mental health, and support adoption as an option.Suspend our support for charities which reduce the amount of near-term future people until we can systematically review the effect of the above moral considerations on the morality of the charities' interventions.In our personal lives, we should:Understand the situations of people we know who are considering abortion and do whatever we can to support them in having their babies the way they would like.Help each other to be loving parents and raise thriving children, whether or not some of us have abortions or choose to not have children.Introduction: Moral Circle ExpansionFuture people count, but we rarely count on them. They cannot vote or lobby or run for public office, so politicians have scant incentive to think about them. They can't bargain or trade with us, so they have little representation in the market. And they can't make their views heard directly: they can't tweet, or write articles in newspapers, or march in the streets. They are utterly disenfranchised.The idea that future people count is common sense. Future people, after all, are people. They will exist. They will have hopes and joys and pains and regrets, just like the rest of us. They just don't exist yet.Will MacAskill, What We Owe The Future (2022), pp. 9-10.As EAs, we're no strangers to expanding our moral circle. We’re rooted in the idea that distance shou...]]>
Tue, 20 Dec 2022 16:02:40 +0000 EA - A Case for Voluntary Abortion Reduction by Ariel Simnegar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Case for Voluntary Abortion Reduction, published by Ariel Simnegar on December 20, 2022 on The Effective Altruism Forum.Trigger warning: Abortion is a delicate topic, especially for those of us who've had abortions or otherwise feel strongly on this issue. I've tried to make the following case with care and sensitivity, and if it makes anyone feel uncomfortable, I wholeheartedly apologize.Disclaimer: This essay specifically concerns voluntary abortion reduction. Any discussion of involuntary intervention is outside of this post's scope.Thanks to Ives Parrhesia, Marcus Abramovitch, Ruth Grace Wong, and several anonymous helpers. Their help does not constitute an endorsement of this essay's conclusions.SummaryMany EA principles point us towards supporting voluntary abortion reduction:Moral circle expansion.We're receptive to arguments that we should expand our moral circle to include animals and future people.We should be open to the possibility that fetuses—the future people closest to us—could be included in our moral circle too.Our concern for neglected and disenfranchised moral groups.If fetuses are moral patients, then they are relatively neglected and disenfranchised, with more abortions occuring each year than deaths by all causes combined.The metric of (adjusted) life years.We commonly use (adjusted) life years as a measure of the disvalue of problems and the value of interventions.This metric arguably doesn't distinguish between fetal deaths and infant deaths.Singerian duties to give to help those in need.We're typically sympathetic to arguments that we should proactively help those in need, even if it reduces our personal autonomy.We should consider whether we should help our children the same way.Longtermist philosophical views.Longtermists are typically receptive to total / low critical level views, non-person-affecting views, and pro-natalism.Just as these views seem to imply that we should care for people in the far future, they also seem to imply that we should care for fetuses, the future people closest to us.Moral uncertainty's implications for a potential problem of massive scale.Given abortion's massive scale, even a small chance that fetuses are moral patients could imply that we should do something about it.In that regard, we should carry out the following interventions:Shift our family-focused interventions to spotlight mothers' physical and mental health, and support adoption as an option.Suspend our support for charities which reduce the amount of near-term future people until we can systematically review the effect of the above moral considerations on the morality of the charities' interventions.In our personal lives, we should:Understand the situations of people we know who are considering abortion and do whatever we can to support them in having their babies the way they would like.Help each other to be loving parents and raise thriving children, whether or not some of us have abortions or choose to not have children.Introduction: Moral Circle ExpansionFuture people count, but we rarely count on them. They cannot vote or lobby or run for public office, so politicians have scant incentive to think about them. They can't bargain or trade with us, so they have little representation in the market. And they can't make their views heard directly: they can't tweet, or write articles in newspapers, or march in the streets. They are utterly disenfranchised.The idea that future people count is common sense. Future people, after all, are people. They will exist. They will have hopes and joys and pains and regrets, just like the rest of us. They just don't exist yet.Will MacAskill, What We Owe The Future (2022), pp. 9-10.As EAs, we're no strangers to expanding our moral circle. We’re rooted in the idea that distance shou...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Case for Voluntary Abortion Reduction, published by Ariel Simnegar on December 20, 2022 on The Effective Altruism Forum.Trigger warning: Abortion is a delicate topic, especially for those of us who've had abortions or otherwise feel strongly on this issue. I've tried to make the following case with care and sensitivity, and if it makes anyone feel uncomfortable, I wholeheartedly apologize.Disclaimer: This essay specifically concerns voluntary abortion reduction. Any discussion of involuntary intervention is outside of this post's scope.Thanks to Ives Parrhesia, Marcus Abramovitch, Ruth Grace Wong, and several anonymous helpers. Their help does not constitute an endorsement of this essay's conclusions.SummaryMany EA principles point us towards supporting voluntary abortion reduction:Moral circle expansion.We're receptive to arguments that we should expand our moral circle to include animals and future people.We should be open to the possibility that fetuses—the future people closest to us—could be included in our moral circle too.Our concern for neglected and disenfranchised moral groups.If fetuses are moral patients, then they are relatively neglected and disenfranchised, with more abortions occuring each year than deaths by all causes combined.The metric of (adjusted) life years.We commonly use (adjusted) life years as a measure of the disvalue of problems and the value of interventions.This metric arguably doesn't distinguish between fetal deaths and infant deaths.Singerian duties to give to help those in need.We're typically sympathetic to arguments that we should proactively help those in need, even if it reduces our personal autonomy.We should consider whether we should help our children the same way.Longtermist philosophical views.Longtermists are typically receptive to total / low critical level views, non-person-affecting views, and pro-natalism.Just as these views seem to imply that we should care for people in the far future, they also seem to imply that we should care for fetuses, the future people closest to us.Moral uncertainty's implications for a potential problem of massive scale.Given abortion's massive scale, even a small chance that fetuses are moral patients could imply that we should do something about it.In that regard, we should carry out the following interventions:Shift our family-focused interventions to spotlight mothers' physical and mental health, and support adoption as an option.Suspend our support for charities which reduce the amount of near-term future people until we can systematically review the effect of the above moral considerations on the morality of the charities' interventions.In our personal lives, we should:Understand the situations of people we know who are considering abortion and do whatever we can to support them in having their babies the way they would like.Help each other to be loving parents and raise thriving children, whether or not some of us have abortions or choose to not have children.Introduction: Moral Circle ExpansionFuture people count, but we rarely count on them. They cannot vote or lobby or run for public office, so politicians have scant incentive to think about them. They can't bargain or trade with us, so they have little representation in the market. And they can't make their views heard directly: they can't tweet, or write articles in newspapers, or march in the streets. They are utterly disenfranchised.The idea that future people count is common sense. Future people, after all, are people. They will exist. They will have hopes and joys and pains and regrets, just like the rest of us. They just don't exist yet.Will MacAskill, What We Owe The Future (2022), pp. 9-10.As EAs, we're no strangers to expanding our moral circle. We’re rooted in the idea that distance shou...]]>
Ariel Simnegar https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 26:04 None full 4193
S2X7smXyjgf3MsNzi_EA EA - SoGive's 2023 plans + funding request by Sanjay Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SoGive's 2023 plans + funding request, published by Sanjay on December 20, 2022 on The Effective Altruism Forum.We at SoGive are excited about what we plan to achieve in 2023. We still have material funding gaps, so if readers are keen for these things to happen, then please get in touch by emailing sanjay@sogive.org.Our work will involve the following:ActivityRough indication of how much effort goes on each40%-45%40%-45%10%-20%EA-aligned researchSupporting the EA talent pipelineSupporting major donors, and running SoGive grants1. EA-aligned research: We plan to do work akin to the research of organisations like Rethink Priorities. There are enough thorny questions about how to do good / give effectively that an enormous amount more research is needed, and the existing ecosystem would benefit from growing in both capacity and the number of voices, hence we are keen for SoGive to contribute.Work which we plan to complete and publish in 2023:Strong Minds:SoGive’s founder identified Strong Minds as a possible high impact charity in 2015 when thinking about his own giving, and had an initial call with Sean Mayberry of Strong Minds then. We have been engaging with Strong Minds over some years and we plan to finalise a report early in 2023.No Means No:An investigation of No Means No Worldwide was suggested to us by a member of the EA London community, who was excited to have an EA-aligned recommendation for a charity which prevents sexual violence. We have mostly completed a review of this charity, and were asked not to publish it yet because it used a study which is not yet in the public domain.PrathamWe have been working on a review of Pratham, which works on education in the developing world.Note that our model involves having multiple pieces of analysis ongoing, and these may be progressed by someone working for 1 day per week.Work which we plan to initiate in 2023:Redteaming of John Halstead’s report on climate changeCounterfactuals vs credit-sharing/Shapley valuesNote: this is not a comprehensive list, just a few examples. For a fuller list, see this public copy of our research agenda.Past track record: People within the SoGive community have successfully written a number of valuable pieces of research.Cool Earth: In order to support SoGive’s work helping major donors, SoGive produced a review of Cool Earth which argued that the EA community had overvalued Cool Earth. It received an EA Forum prize, was nominated for a Review of Decade prize, and has been cited 14 times on the EA Forum and an unknown number of times elsewhere.Giving Green: A SoGive volunteer, with support from SoGive’s founder, wrote a piece critiquing the analytical approach of Giving Green. Prior to this write-up, there had been little critical analysis of the quality of Giving Green’s work.Malaria nets: We wrote a piece on GiveWell’s modelling of insecticide resistance, which argued that there were issues with the way that synergists were modelled (synergists are things which help nets overcome insecticide resistance). We also argued that GiveWell were missing an important source of data. GiveWell awarded this analysis an honourable mention prize.If you would like to know more about SoGive’s research work, feel free to read our strategy document: section 1 covers our research work, section 1.3 explains how we set our research agenda, section 1.5 provides more examples of our track record of research, section 1.7 includes people from other EA orgs who are willing to serve as references.2. Supporting the EA talent pipeline: Our volunteer programme was originally set up to help us analyse charities, however we are now leveraging our experience to serve orgs in the EA community who are looking to hire. In 2023, we aim to recruit c50-100 new volunteers, and of those probably a m...]]>
Sanjay https://forum.effectivealtruism.org/posts/S2X7smXyjgf3MsNzi/sogive-s-2023-plans-funding-request Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SoGive's 2023 plans + funding request, published by Sanjay on December 20, 2022 on The Effective Altruism Forum.We at SoGive are excited about what we plan to achieve in 2023. We still have material funding gaps, so if readers are keen for these things to happen, then please get in touch by emailing sanjay@sogive.org.Our work will involve the following:ActivityRough indication of how much effort goes on each40%-45%40%-45%10%-20%EA-aligned researchSupporting the EA talent pipelineSupporting major donors, and running SoGive grants1. EA-aligned research: We plan to do work akin to the research of organisations like Rethink Priorities. There are enough thorny questions about how to do good / give effectively that an enormous amount more research is needed, and the existing ecosystem would benefit from growing in both capacity and the number of voices, hence we are keen for SoGive to contribute.Work which we plan to complete and publish in 2023:Strong Minds:SoGive’s founder identified Strong Minds as a possible high impact charity in 2015 when thinking about his own giving, and had an initial call with Sean Mayberry of Strong Minds then. We have been engaging with Strong Minds over some years and we plan to finalise a report early in 2023.No Means No:An investigation of No Means No Worldwide was suggested to us by a member of the EA London community, who was excited to have an EA-aligned recommendation for a charity which prevents sexual violence. We have mostly completed a review of this charity, and were asked not to publish it yet because it used a study which is not yet in the public domain.PrathamWe have been working on a review of Pratham, which works on education in the developing world.Note that our model involves having multiple pieces of analysis ongoing, and these may be progressed by someone working for 1 day per week.Work which we plan to initiate in 2023:Redteaming of John Halstead’s report on climate changeCounterfactuals vs credit-sharing/Shapley valuesNote: this is not a comprehensive list, just a few examples. For a fuller list, see this public copy of our research agenda.Past track record: People within the SoGive community have successfully written a number of valuable pieces of research.Cool Earth: In order to support SoGive’s work helping major donors, SoGive produced a review of Cool Earth which argued that the EA community had overvalued Cool Earth. It received an EA Forum prize, was nominated for a Review of Decade prize, and has been cited 14 times on the EA Forum and an unknown number of times elsewhere.Giving Green: A SoGive volunteer, with support from SoGive’s founder, wrote a piece critiquing the analytical approach of Giving Green. Prior to this write-up, there had been little critical analysis of the quality of Giving Green’s work.Malaria nets: We wrote a piece on GiveWell’s modelling of insecticide resistance, which argued that there were issues with the way that synergists were modelled (synergists are things which help nets overcome insecticide resistance). We also argued that GiveWell were missing an important source of data. GiveWell awarded this analysis an honourable mention prize.If you would like to know more about SoGive’s research work, feel free to read our strategy document: section 1 covers our research work, section 1.3 explains how we set our research agenda, section 1.5 provides more examples of our track record of research, section 1.7 includes people from other EA orgs who are willing to serve as references.2. Supporting the EA talent pipeline: Our volunteer programme was originally set up to help us analyse charities, however we are now leveraging our experience to serve orgs in the EA community who are looking to hire. In 2023, we aim to recruit c50-100 new volunteers, and of those probably a m...]]>
Tue, 20 Dec 2022 12:11:52 +0000 EA - SoGive's 2023 plans + funding request by Sanjay Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SoGive's 2023 plans + funding request, published by Sanjay on December 20, 2022 on The Effective Altruism Forum.We at SoGive are excited about what we plan to achieve in 2023. We still have material funding gaps, so if readers are keen for these things to happen, then please get in touch by emailing sanjay@sogive.org.Our work will involve the following:ActivityRough indication of how much effort goes on each40%-45%40%-45%10%-20%EA-aligned researchSupporting the EA talent pipelineSupporting major donors, and running SoGive grants1. EA-aligned research: We plan to do work akin to the research of organisations like Rethink Priorities. There are enough thorny questions about how to do good / give effectively that an enormous amount more research is needed, and the existing ecosystem would benefit from growing in both capacity and the number of voices, hence we are keen for SoGive to contribute.Work which we plan to complete and publish in 2023:Strong Minds:SoGive’s founder identified Strong Minds as a possible high impact charity in 2015 when thinking about his own giving, and had an initial call with Sean Mayberry of Strong Minds then. We have been engaging with Strong Minds over some years and we plan to finalise a report early in 2023.No Means No:An investigation of No Means No Worldwide was suggested to us by a member of the EA London community, who was excited to have an EA-aligned recommendation for a charity which prevents sexual violence. We have mostly completed a review of this charity, and were asked not to publish it yet because it used a study which is not yet in the public domain.PrathamWe have been working on a review of Pratham, which works on education in the developing world.Note that our model involves having multiple pieces of analysis ongoing, and these may be progressed by someone working for 1 day per week.Work which we plan to initiate in 2023:Redteaming of John Halstead’s report on climate changeCounterfactuals vs credit-sharing/Shapley valuesNote: this is not a comprehensive list, just a few examples. For a fuller list, see this public copy of our research agenda.Past track record: People within the SoGive community have successfully written a number of valuable pieces of research.Cool Earth: In order to support SoGive’s work helping major donors, SoGive produced a review of Cool Earth which argued that the EA community had overvalued Cool Earth. It received an EA Forum prize, was nominated for a Review of Decade prize, and has been cited 14 times on the EA Forum and an unknown number of times elsewhere.Giving Green: A SoGive volunteer, with support from SoGive’s founder, wrote a piece critiquing the analytical approach of Giving Green. Prior to this write-up, there had been little critical analysis of the quality of Giving Green’s work.Malaria nets: We wrote a piece on GiveWell’s modelling of insecticide resistance, which argued that there were issues with the way that synergists were modelled (synergists are things which help nets overcome insecticide resistance). We also argued that GiveWell were missing an important source of data. GiveWell awarded this analysis an honourable mention prize.If you would like to know more about SoGive’s research work, feel free to read our strategy document: section 1 covers our research work, section 1.3 explains how we set our research agenda, section 1.5 provides more examples of our track record of research, section 1.7 includes people from other EA orgs who are willing to serve as references.2. Supporting the EA talent pipeline: Our volunteer programme was originally set up to help us analyse charities, however we are now leveraging our experience to serve orgs in the EA community who are looking to hire. In 2023, we aim to recruit c50-100 new volunteers, and of those probably a m...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SoGive's 2023 plans + funding request, published by Sanjay on December 20, 2022 on The Effective Altruism Forum.We at SoGive are excited about what we plan to achieve in 2023. We still have material funding gaps, so if readers are keen for these things to happen, then please get in touch by emailing sanjay@sogive.org.Our work will involve the following:ActivityRough indication of how much effort goes on each40%-45%40%-45%10%-20%EA-aligned researchSupporting the EA talent pipelineSupporting major donors, and running SoGive grants1. EA-aligned research: We plan to do work akin to the research of organisations like Rethink Priorities. There are enough thorny questions about how to do good / give effectively that an enormous amount more research is needed, and the existing ecosystem would benefit from growing in both capacity and the number of voices, hence we are keen for SoGive to contribute.Work which we plan to complete and publish in 2023:Strong Minds:SoGive’s founder identified Strong Minds as a possible high impact charity in 2015 when thinking about his own giving, and had an initial call with Sean Mayberry of Strong Minds then. We have been engaging with Strong Minds over some years and we plan to finalise a report early in 2023.No Means No:An investigation of No Means No Worldwide was suggested to us by a member of the EA London community, who was excited to have an EA-aligned recommendation for a charity which prevents sexual violence. We have mostly completed a review of this charity, and were asked not to publish it yet because it used a study which is not yet in the public domain.PrathamWe have been working on a review of Pratham, which works on education in the developing world.Note that our model involves having multiple pieces of analysis ongoing, and these may be progressed by someone working for 1 day per week.Work which we plan to initiate in 2023:Redteaming of John Halstead’s report on climate changeCounterfactuals vs credit-sharing/Shapley valuesNote: this is not a comprehensive list, just a few examples. For a fuller list, see this public copy of our research agenda.Past track record: People within the SoGive community have successfully written a number of valuable pieces of research.Cool Earth: In order to support SoGive’s work helping major donors, SoGive produced a review of Cool Earth which argued that the EA community had overvalued Cool Earth. It received an EA Forum prize, was nominated for a Review of Decade prize, and has been cited 14 times on the EA Forum and an unknown number of times elsewhere.Giving Green: A SoGive volunteer, with support from SoGive’s founder, wrote a piece critiquing the analytical approach of Giving Green. Prior to this write-up, there had been little critical analysis of the quality of Giving Green’s work.Malaria nets: We wrote a piece on GiveWell’s modelling of insecticide resistance, which argued that there were issues with the way that synergists were modelled (synergists are things which help nets overcome insecticide resistance). We also argued that GiveWell were missing an important source of data. GiveWell awarded this analysis an honourable mention prize.If you would like to know more about SoGive’s research work, feel free to read our strategy document: section 1 covers our research work, section 1.3 explains how we set our research agenda, section 1.5 provides more examples of our track record of research, section 1.7 includes people from other EA orgs who are willing to serve as references.2. Supporting the EA talent pipeline: Our volunteer programme was originally set up to help us analyse charities, however we are now leveraging our experience to serve orgs in the EA community who are looking to hire. In 2023, we aim to recruit c50-100 new volunteers, and of those probably a m...]]>
Sanjay https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:06 None full 4195
rJRw78oihoT5paFGd_EA EA - High-level hopes for AI alignment by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High-level hopes for AI alignment, published by Holden Karnofsky on December 20, 2022 on The Effective Altruism Forum.See here for an audio version.In previous pieces, I argued that there's a real and large risk of AI systems' aiming to defeat all of humanity combined - and succeeding.I first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some key difficulties of AI safety research.But while I think misalignment risk is serious and presents major challenges, I don’t agree with sentiments1 along the lines of “We haven’t figured out how to align an AI, so if transformative AI comes soon, we’re doomed.” Here I’m going to talk about some of my high-level hopes for how we might end up avoiding this risk.I’ll first recap the challenge, using Ajeya Cotra’s young businessperson analogy to give a sense of some of the core difficulties. In a nutshell, once AI systems get capable enough, it could be hard to test whether they’re safe, because they might be able to deceive and manipulate us into getting the wrong read. Thus, trying to determine whether they’re safe might be something like “being an eight-year-old trying to decide between adult job candidates (some of whom are manipulative).”I’ll then go through what I see as three key possibilities for navigating this situation:Digital neuroscience: perhaps we’ll be able to read (and/or even rewrite) the “digital brains” of AI systems, so that we can know (and change) what they’re “aiming” to do directly - rather than having to infer it from their behavior. (Perhaps the eight-year-old is a mind-reader, or even a young Professor X.)Limited AI: perhaps we can make AI systems safe by making them limited in various ways - e.g., by leaving certain kinds of information out of their training, designing them to be “myopic” (focused on short-run as opposed to long-run goals), or something along those lines. Maybe we can make “limited AI” that is nonetheless able to carry out particular helpful tasks - such as doing lots more research on how to achieve safety without the limitations. (Perhaps the eight-year-old can limit the authority or knowledge of their hire, and still get the company run successfully.)AI checks and balances: perhaps we’ll be able to employ some AI systems to critique, supervise, and even rewrite others. Even if no single AI system would be safe on its own, the right “checks and balances” setup could ensure that human interests win out. (Perhaps the eight-year-old is able to get the job candidates to evaluate and critique each other, such that all the eight-year-old needs to do is verify basic factual claims to know who the best candidate is.)These are some of the main categories of hopes that are pretty easy to picture today. Further work on AI safety research might result in further ideas (and the above are not exhaustive - see my more detailed piece, posted to the Alignment Forum rather than Cold Takes, for more).I’ll talk about both challenges and reasons for hope here. I think that for the most part, these hopes look much better if AI projects are moving cautiously rather than racing furiously.I don’t think we’re at the point of having much sense of how the hopes and challenges net out; the best I can do at this point is to say: “I don’t currently have much sympathy for someone who’s highly confident that AI takeover would or would not happen (that is, for anyone who thinks the odds of AI takeover . are under 10% or over 90%).”The challengeThis is all recapping previous pieces. If you remember them super well, skip to the next section.In previous pieces, I argued that:The coming decades could see the development of AI systems that could automate - and dramatically speed up - scientif...]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/rJRw78oihoT5paFGd/high-level-hopes-for-ai-alignment Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High-level hopes for AI alignment, published by Holden Karnofsky on December 20, 2022 on The Effective Altruism Forum.See here for an audio version.In previous pieces, I argued that there's a real and large risk of AI systems' aiming to defeat all of humanity combined - and succeeding.I first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some key difficulties of AI safety research.But while I think misalignment risk is serious and presents major challenges, I don’t agree with sentiments1 along the lines of “We haven’t figured out how to align an AI, so if transformative AI comes soon, we’re doomed.” Here I’m going to talk about some of my high-level hopes for how we might end up avoiding this risk.I’ll first recap the challenge, using Ajeya Cotra’s young businessperson analogy to give a sense of some of the core difficulties. In a nutshell, once AI systems get capable enough, it could be hard to test whether they’re safe, because they might be able to deceive and manipulate us into getting the wrong read. Thus, trying to determine whether they’re safe might be something like “being an eight-year-old trying to decide between adult job candidates (some of whom are manipulative).”I’ll then go through what I see as three key possibilities for navigating this situation:Digital neuroscience: perhaps we’ll be able to read (and/or even rewrite) the “digital brains” of AI systems, so that we can know (and change) what they’re “aiming” to do directly - rather than having to infer it from their behavior. (Perhaps the eight-year-old is a mind-reader, or even a young Professor X.)Limited AI: perhaps we can make AI systems safe by making them limited in various ways - e.g., by leaving certain kinds of information out of their training, designing them to be “myopic” (focused on short-run as opposed to long-run goals), or something along those lines. Maybe we can make “limited AI” that is nonetheless able to carry out particular helpful tasks - such as doing lots more research on how to achieve safety without the limitations. (Perhaps the eight-year-old can limit the authority or knowledge of their hire, and still get the company run successfully.)AI checks and balances: perhaps we’ll be able to employ some AI systems to critique, supervise, and even rewrite others. Even if no single AI system would be safe on its own, the right “checks and balances” setup could ensure that human interests win out. (Perhaps the eight-year-old is able to get the job candidates to evaluate and critique each other, such that all the eight-year-old needs to do is verify basic factual claims to know who the best candidate is.)These are some of the main categories of hopes that are pretty easy to picture today. Further work on AI safety research might result in further ideas (and the above are not exhaustive - see my more detailed piece, posted to the Alignment Forum rather than Cold Takes, for more).I’ll talk about both challenges and reasons for hope here. I think that for the most part, these hopes look much better if AI projects are moving cautiously rather than racing furiously.I don’t think we’re at the point of having much sense of how the hopes and challenges net out; the best I can do at this point is to say: “I don’t currently have much sympathy for someone who’s highly confident that AI takeover would or would not happen (that is, for anyone who thinks the odds of AI takeover . are under 10% or over 90%).”The challengeThis is all recapping previous pieces. If you remember them super well, skip to the next section.In previous pieces, I argued that:The coming decades could see the development of AI systems that could automate - and dramatically speed up - scientif...]]>
Tue, 20 Dec 2022 09:10:37 +0000 EA - High-level hopes for AI alignment by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High-level hopes for AI alignment, published by Holden Karnofsky on December 20, 2022 on The Effective Altruism Forum.See here for an audio version.In previous pieces, I argued that there's a real and large risk of AI systems' aiming to defeat all of humanity combined - and succeeding.I first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some key difficulties of AI safety research.But while I think misalignment risk is serious and presents major challenges, I don’t agree with sentiments1 along the lines of “We haven’t figured out how to align an AI, so if transformative AI comes soon, we’re doomed.” Here I’m going to talk about some of my high-level hopes for how we might end up avoiding this risk.I’ll first recap the challenge, using Ajeya Cotra’s young businessperson analogy to give a sense of some of the core difficulties. In a nutshell, once AI systems get capable enough, it could be hard to test whether they’re safe, because they might be able to deceive and manipulate us into getting the wrong read. Thus, trying to determine whether they’re safe might be something like “being an eight-year-old trying to decide between adult job candidates (some of whom are manipulative).”I’ll then go through what I see as three key possibilities for navigating this situation:Digital neuroscience: perhaps we’ll be able to read (and/or even rewrite) the “digital brains” of AI systems, so that we can know (and change) what they’re “aiming” to do directly - rather than having to infer it from their behavior. (Perhaps the eight-year-old is a mind-reader, or even a young Professor X.)Limited AI: perhaps we can make AI systems safe by making them limited in various ways - e.g., by leaving certain kinds of information out of their training, designing them to be “myopic” (focused on short-run as opposed to long-run goals), or something along those lines. Maybe we can make “limited AI” that is nonetheless able to carry out particular helpful tasks - such as doing lots more research on how to achieve safety without the limitations. (Perhaps the eight-year-old can limit the authority or knowledge of their hire, and still get the company run successfully.)AI checks and balances: perhaps we’ll be able to employ some AI systems to critique, supervise, and even rewrite others. Even if no single AI system would be safe on its own, the right “checks and balances” setup could ensure that human interests win out. (Perhaps the eight-year-old is able to get the job candidates to evaluate and critique each other, such that all the eight-year-old needs to do is verify basic factual claims to know who the best candidate is.)These are some of the main categories of hopes that are pretty easy to picture today. Further work on AI safety research might result in further ideas (and the above are not exhaustive - see my more detailed piece, posted to the Alignment Forum rather than Cold Takes, for more).I’ll talk about both challenges and reasons for hope here. I think that for the most part, these hopes look much better if AI projects are moving cautiously rather than racing furiously.I don’t think we’re at the point of having much sense of how the hopes and challenges net out; the best I can do at this point is to say: “I don’t currently have much sympathy for someone who’s highly confident that AI takeover would or would not happen (that is, for anyone who thinks the odds of AI takeover . are under 10% or over 90%).”The challengeThis is all recapping previous pieces. If you remember them super well, skip to the next section.In previous pieces, I argued that:The coming decades could see the development of AI systems that could automate - and dramatically speed up - scientif...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High-level hopes for AI alignment, published by Holden Karnofsky on December 20, 2022 on The Effective Altruism Forum.See here for an audio version.In previous pieces, I argued that there's a real and large risk of AI systems' aiming to defeat all of humanity combined - and succeeding.I first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some key difficulties of AI safety research.But while I think misalignment risk is serious and presents major challenges, I don’t agree with sentiments1 along the lines of “We haven’t figured out how to align an AI, so if transformative AI comes soon, we’re doomed.” Here I’m going to talk about some of my high-level hopes for how we might end up avoiding this risk.I’ll first recap the challenge, using Ajeya Cotra’s young businessperson analogy to give a sense of some of the core difficulties. In a nutshell, once AI systems get capable enough, it could be hard to test whether they’re safe, because they might be able to deceive and manipulate us into getting the wrong read. Thus, trying to determine whether they’re safe might be something like “being an eight-year-old trying to decide between adult job candidates (some of whom are manipulative).”I’ll then go through what I see as three key possibilities for navigating this situation:Digital neuroscience: perhaps we’ll be able to read (and/or even rewrite) the “digital brains” of AI systems, so that we can know (and change) what they’re “aiming” to do directly - rather than having to infer it from their behavior. (Perhaps the eight-year-old is a mind-reader, or even a young Professor X.)Limited AI: perhaps we can make AI systems safe by making them limited in various ways - e.g., by leaving certain kinds of information out of their training, designing them to be “myopic” (focused on short-run as opposed to long-run goals), or something along those lines. Maybe we can make “limited AI” that is nonetheless able to carry out particular helpful tasks - such as doing lots more research on how to achieve safety without the limitations. (Perhaps the eight-year-old can limit the authority or knowledge of their hire, and still get the company run successfully.)AI checks and balances: perhaps we’ll be able to employ some AI systems to critique, supervise, and even rewrite others. Even if no single AI system would be safe on its own, the right “checks and balances” setup could ensure that human interests win out. (Perhaps the eight-year-old is able to get the job candidates to evaluate and critique each other, such that all the eight-year-old needs to do is verify basic factual claims to know who the best candidate is.)These are some of the main categories of hopes that are pretty easy to picture today. Further work on AI safety research might result in further ideas (and the above are not exhaustive - see my more detailed piece, posted to the Alignment Forum rather than Cold Takes, for more).I’ll talk about both challenges and reasons for hope here. I think that for the most part, these hopes look much better if AI projects are moving cautiously rather than racing furiously.I don’t think we’re at the point of having much sense of how the hopes and challenges net out; the best I can do at this point is to say: “I don’t currently have much sympathy for someone who’s highly confident that AI takeover would or would not happen (that is, for anyone who thinks the odds of AI takeover . are under 10% or over 90%).”The challengeThis is all recapping previous pieces. If you remember them super well, skip to the next section.In previous pieces, I argued that:The coming decades could see the development of AI systems that could automate - and dramatically speed up - scientif...]]>
Holden Karnofsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 29:14 None full 4185
FFBYAB7g9JMsfGtiT_EA EA - Process for Returning FTX Funds Announced by Molly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Process for Returning FTX Funds Announced, published by Molly on December 20, 2022 on The Effective Altruism Forum.FTX has put out a press release announcing a “process for voluntary return of avoidable payments.” This may be a useful option for grantees looking to urgently return any FTX-associated funding rather than wait for the bankruptcy process to play out. But anyone interested in returning money should keep in mind that in order to avoid being subject to redundant clawbacks or other legal claims later, it’s crucial to receive proper release-of-claims paperwork in exchange for returning funding. I strongly recommend you consult with a bankruptcy lawyer before starting this process. The Open Phil legal team is putting together a list of legal service providers for grantees who want to explore this option; we’ll follow up after the holidays with more information.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Molly https://forum.effectivealtruism.org/posts/FFBYAB7g9JMsfGtiT/process-for-returning-ftx-funds-announced Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Process for Returning FTX Funds Announced, published by Molly on December 20, 2022 on The Effective Altruism Forum.FTX has put out a press release announcing a “process for voluntary return of avoidable payments.” This may be a useful option for grantees looking to urgently return any FTX-associated funding rather than wait for the bankruptcy process to play out. But anyone interested in returning money should keep in mind that in order to avoid being subject to redundant clawbacks or other legal claims later, it’s crucial to receive proper release-of-claims paperwork in exchange for returning funding. I strongly recommend you consult with a bankruptcy lawyer before starting this process. The Open Phil legal team is putting together a list of legal service providers for grantees who want to explore this option; we’ll follow up after the holidays with more information.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 20 Dec 2022 01:50:16 +0000 EA - Process for Returning FTX Funds Announced by Molly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Process for Returning FTX Funds Announced, published by Molly on December 20, 2022 on The Effective Altruism Forum.FTX has put out a press release announcing a “process for voluntary return of avoidable payments.” This may be a useful option for grantees looking to urgently return any FTX-associated funding rather than wait for the bankruptcy process to play out. But anyone interested in returning money should keep in mind that in order to avoid being subject to redundant clawbacks or other legal claims later, it’s crucial to receive proper release-of-claims paperwork in exchange for returning funding. I strongly recommend you consult with a bankruptcy lawyer before starting this process. The Open Phil legal team is putting together a list of legal service providers for grantees who want to explore this option; we’ll follow up after the holidays with more information.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Process for Returning FTX Funds Announced, published by Molly on December 20, 2022 on The Effective Altruism Forum.FTX has put out a press release announcing a “process for voluntary return of avoidable payments.” This may be a useful option for grantees looking to urgently return any FTX-associated funding rather than wait for the bankruptcy process to play out. But anyone interested in returning money should keep in mind that in order to avoid being subject to redundant clawbacks or other legal claims later, it’s crucial to receive proper release-of-claims paperwork in exchange for returning funding. I strongly recommend you consult with a bankruptcy lawyer before starting this process. The Open Phil legal team is putting together a list of legal service providers for grantees who want to explore this option; we’ll follow up after the holidays with more information.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Molly https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:04 None full 4186
Pt7MxstXxXHak4wkt_EA EA - AGI Timelines in Governance: Different Strategies for Different Timeframes by simeon c Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI Timelines in Governance: Different Strategies for Different Timeframes, published by simeon c on December 19, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
simeon c https://forum.effectivealtruism.org/posts/Pt7MxstXxXHak4wkt/agi-timelines-in-governance-different-strategies-for Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI Timelines in Governance: Different Strategies for Different Timeframes, published by simeon c on December 19, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 19 Dec 2022 23:07:53 +0000 EA - AGI Timelines in Governance: Different Strategies for Different Timeframes by simeon c Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI Timelines in Governance: Different Strategies for Different Timeframes, published by simeon c on December 19, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI Timelines in Governance: Different Strategies for Different Timeframes, published by simeon c on December 19, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
simeon c https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:30 None full 4187
LCagfA2uS7idsLfoN_EA EA - A few more relevant categories to think about diversity in EA by AmAristizabal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A few more relevant categories to think about diversity in EA, published by AmAristizabal on December 19, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Posted on draft Amnesty Day, one day late oopsPeople who think about diversity in EA or face diversity-related issues in their work often find it helpful to find common labels and identities. It allows identifying common problems, framings and solutions. The EA Survey already identifies common categories that are useful to think about diversity, such as gender, sexual orientation, age, race, education level and financial instability. Here I expand a bit on some other terms that I find particularly helpful.Some important previous thoughts:This is based on anecdotal use of the terms instead of official definitions (people with a background in diversity studies might have better suggestions).These terms are also huge generalizations that fail to capture individual differences and the use of these terms can also harm diversity by creating “us vs them” narratives or packing people into groups that don’t represent them.If terminology needs to be corrected, this post is also to invite refining these terms.Low and Middle Income Countries (LMICs) (as opposed to High Income Countries)Definition: All countries not considered to be high-income. Although there is no universally agreed-upon definition, the World Bank defines high-income countries as those with a gross national income per capita of $12,696 or more in 2020. Upper-middle, lower-middle, and low-income countries are classified as LMICs.Why this category is useful for diversity in EA: I’ve been surprised by how many similarities there are between LMICs, regardless of regions. This seems true outside EA, and Gapminder’s dollar street project perfectly portrays this: they show with a massive collection of photos how income is the main determinant of people’s daily lives, regardless of geography. LMICs meetups in EA events tend to group compatible people with very common experiences.EA is predominant in high income countries and I’ve found that the LMICs category is probably one of the best at capturing a lot of what people care about when they speak about diversity in EA. LMICs groups across the world share a lot in common.This guide, for example, was created by a team from India, Malaysia and Colombia and we all had relatable experiences.Examples:Usual EA framings that address topics like donations or income are unfamiliar to LMICs, such as the usual “you’re the top 1%” assumption (although in relative terms this can still be true at an individual level for high income individuals in LMICs).It captures what is usually referred to as “global north” vs “global south”, which is relevant in many discussions. It might capture that distinction better, because there are some geographically southern countries that are high income (such as Australia) and northern countries that are LMICs.It captures a lot of the historical divisions between colonized and colonizer countries and what that implies: countries often have weak institutions, corruption, poverty, poor responses to emergencies, inequality and all sorts of immediate problems.We share similar questions, experiences and tradeoffs when we think about funding on the EA space. Funding in these countries has way more purchasing power than in high income countries and this has...]]>
AmAristizabal https://forum.effectivealtruism.org/posts/LCagfA2uS7idsLfoN/a-few-more-relevant-categories-to-think-about-diversity-in Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A few more relevant categories to think about diversity in EA, published by AmAristizabal on December 19, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Posted on draft Amnesty Day, one day late oopsPeople who think about diversity in EA or face diversity-related issues in their work often find it helpful to find common labels and identities. It allows identifying common problems, framings and solutions. The EA Survey already identifies common categories that are useful to think about diversity, such as gender, sexual orientation, age, race, education level and financial instability. Here I expand a bit on some other terms that I find particularly helpful.Some important previous thoughts:This is based on anecdotal use of the terms instead of official definitions (people with a background in diversity studies might have better suggestions).These terms are also huge generalizations that fail to capture individual differences and the use of these terms can also harm diversity by creating “us vs them” narratives or packing people into groups that don’t represent them.If terminology needs to be corrected, this post is also to invite refining these terms.Low and Middle Income Countries (LMICs) (as opposed to High Income Countries)Definition: All countries not considered to be high-income. Although there is no universally agreed-upon definition, the World Bank defines high-income countries as those with a gross national income per capita of $12,696 or more in 2020. Upper-middle, lower-middle, and low-income countries are classified as LMICs.Why this category is useful for diversity in EA: I’ve been surprised by how many similarities there are between LMICs, regardless of regions. This seems true outside EA, and Gapminder’s dollar street project perfectly portrays this: they show with a massive collection of photos how income is the main determinant of people’s daily lives, regardless of geography. LMICs meetups in EA events tend to group compatible people with very common experiences.EA is predominant in high income countries and I’ve found that the LMICs category is probably one of the best at capturing a lot of what people care about when they speak about diversity in EA. LMICs groups across the world share a lot in common.This guide, for example, was created by a team from India, Malaysia and Colombia and we all had relatable experiences.Examples:Usual EA framings that address topics like donations or income are unfamiliar to LMICs, such as the usual “you’re the top 1%” assumption (although in relative terms this can still be true at an individual level for high income individuals in LMICs).It captures what is usually referred to as “global north” vs “global south”, which is relevant in many discussions. It might capture that distinction better, because there are some geographically southern countries that are high income (such as Australia) and northern countries that are LMICs.It captures a lot of the historical divisions between colonized and colonizer countries and what that implies: countries often have weak institutions, corruption, poverty, poor responses to emergencies, inequality and all sorts of immediate problems.We share similar questions, experiences and tradeoffs when we think about funding on the EA space. Funding in these countries has way more purchasing power than in high income countries and this has...]]>
Mon, 19 Dec 2022 21:26:04 +0000 EA - A few more relevant categories to think about diversity in EA by AmAristizabal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A few more relevant categories to think about diversity in EA, published by AmAristizabal on December 19, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Posted on draft Amnesty Day, one day late oopsPeople who think about diversity in EA or face diversity-related issues in their work often find it helpful to find common labels and identities. It allows identifying common problems, framings and solutions. The EA Survey already identifies common categories that are useful to think about diversity, such as gender, sexual orientation, age, race, education level and financial instability. Here I expand a bit on some other terms that I find particularly helpful.Some important previous thoughts:This is based on anecdotal use of the terms instead of official definitions (people with a background in diversity studies might have better suggestions).These terms are also huge generalizations that fail to capture individual differences and the use of these terms can also harm diversity by creating “us vs them” narratives or packing people into groups that don’t represent them.If terminology needs to be corrected, this post is also to invite refining these terms.Low and Middle Income Countries (LMICs) (as opposed to High Income Countries)Definition: All countries not considered to be high-income. Although there is no universally agreed-upon definition, the World Bank defines high-income countries as those with a gross national income per capita of $12,696 or more in 2020. Upper-middle, lower-middle, and low-income countries are classified as LMICs.Why this category is useful for diversity in EA: I’ve been surprised by how many similarities there are between LMICs, regardless of regions. This seems true outside EA, and Gapminder’s dollar street project perfectly portrays this: they show with a massive collection of photos how income is the main determinant of people’s daily lives, regardless of geography. LMICs meetups in EA events tend to group compatible people with very common experiences.EA is predominant in high income countries and I’ve found that the LMICs category is probably one of the best at capturing a lot of what people care about when they speak about diversity in EA. LMICs groups across the world share a lot in common.This guide, for example, was created by a team from India, Malaysia and Colombia and we all had relatable experiences.Examples:Usual EA framings that address topics like donations or income are unfamiliar to LMICs, such as the usual “you’re the top 1%” assumption (although in relative terms this can still be true at an individual level for high income individuals in LMICs).It captures what is usually referred to as “global north” vs “global south”, which is relevant in many discussions. It might capture that distinction better, because there are some geographically southern countries that are high income (such as Australia) and northern countries that are LMICs.It captures a lot of the historical divisions between colonized and colonizer countries and what that implies: countries often have weak institutions, corruption, poverty, poor responses to emergencies, inequality and all sorts of immediate problems.We share similar questions, experiences and tradeoffs when we think about funding on the EA space. Funding in these countries has way more purchasing power than in high income countries and this has...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A few more relevant categories to think about diversity in EA, published by AmAristizabal on December 19, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Posted on draft Amnesty Day, one day late oopsPeople who think about diversity in EA or face diversity-related issues in their work often find it helpful to find common labels and identities. It allows identifying common problems, framings and solutions. The EA Survey already identifies common categories that are useful to think about diversity, such as gender, sexual orientation, age, race, education level and financial instability. Here I expand a bit on some other terms that I find particularly helpful.Some important previous thoughts:This is based on anecdotal use of the terms instead of official definitions (people with a background in diversity studies might have better suggestions).These terms are also huge generalizations that fail to capture individual differences and the use of these terms can also harm diversity by creating “us vs them” narratives or packing people into groups that don’t represent them.If terminology needs to be corrected, this post is also to invite refining these terms.Low and Middle Income Countries (LMICs) (as opposed to High Income Countries)Definition: All countries not considered to be high-income. Although there is no universally agreed-upon definition, the World Bank defines high-income countries as those with a gross national income per capita of $12,696 or more in 2020. Upper-middle, lower-middle, and low-income countries are classified as LMICs.Why this category is useful for diversity in EA: I’ve been surprised by how many similarities there are between LMICs, regardless of regions. This seems true outside EA, and Gapminder’s dollar street project perfectly portrays this: they show with a massive collection of photos how income is the main determinant of people’s daily lives, regardless of geography. LMICs meetups in EA events tend to group compatible people with very common experiences.EA is predominant in high income countries and I’ve found that the LMICs category is probably one of the best at capturing a lot of what people care about when they speak about diversity in EA. LMICs groups across the world share a lot in common.This guide, for example, was created by a team from India, Malaysia and Colombia and we all had relatable experiences.Examples:Usual EA framings that address topics like donations or income are unfamiliar to LMICs, such as the usual “you’re the top 1%” assumption (although in relative terms this can still be true at an individual level for high income individuals in LMICs).It captures what is usually referred to as “global north” vs “global south”, which is relevant in many discussions. It might capture that distinction better, because there are some geographically southern countries that are high income (such as Australia) and northern countries that are LMICs.It captures a lot of the historical divisions between colonized and colonizer countries and what that implies: countries often have weak institutions, corruption, poverty, poor responses to emergencies, inequality and all sorts of immediate problems.We share similar questions, experiences and tradeoffs when we think about funding on the EA space. Funding in these countries has way more purchasing power than in high income countries and this has...]]>
AmAristizabal https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:04 None full 4175
4csmTBamMuQy9Zf6Q_EA EA - Top down interventions that could increase participation and impact of Low and Middle Income Countries in EA by AmAristizabal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Top down interventions that could increase participation and impact of Low and Middle Income Countries in EA, published by AmAristizabal on December 19, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Posted on draft Amnesty Day, one day late oopsThanks to Ben Garfinkel, Surbhi Bharadwaj and Mo Putera for feedbackThis is a list of things that could help increase participation of LMICs in EA. Many of these actions involve tradeoffs (e.g. on people's time, efforts and resources) so probably the community shouldn’t do all of these things. Some of them might not currently be worth the trade-offs. However, since the community is serious about trying to increase diversity, and insofar as a number of these are actually pretty low cost, I think many of these might be worthwhile.See also this great post (has more specific suggestions that overlap with some of the ones highlighted here). This post has been drafted in parallel to this EA career guide for people in LMICs (which is a more bottom-up approach).Events:Strongly encourage senior EAs to attend events that are accessible to people in LMICs: Having virtual conferences and EAGxs in LMICs seems great, but for these events to be successful it is important for senior people in EA (who are still overrepresented in high income countries) to continue to attend them. As the number of annual EA events increases, senior EAs could start prioritizing conferences in their own hubs and it could become increasingly hard for people in LMICs to network with them. If senior EAs underestimate the importance of attending conferences that are accessible to people in LMICs, then personal outreach to these EAs could help them make better prioritisation decisions. There may also be ways to make participation in these events more convenient or attractive to senior EAs.Alternatively, making EAGs as inclusive as possible for LMICs. Some measures that could be done:Advertising events further in advance, considering that people from LMICs might struggle more with visa appointments (as already highlighted here).Provide immigration support like invitation letters for conferences and events and make the process smooth for LMICs attendees.Advertising events more broadly in LMICs by sharing events with local community builders, orgs and universities (even nudging community builders in LMICs to advertise an EAG seems low effort and worth doing).Consider hosting at least one annual EA Global in a more visa-friendly hub, instead of the UK and the US. For example, it could be worthwhile to host an EA Global in Mexico City or India (depending on visa requirements).For funders/EA orgs(probably too obvious) Diversifying demographics of staff working at funding organizations. This will come too from bottom-up approaches (community builders in LMICs could encourage EAs in LMICs to apply to jobs in these organizations).In the short term, these orgs could reach out more actively to community builders in LMICs when they have open positions.See more specific recommendations about hiring here.Shoutouts to very inclusive processes: someone could spot hiring and recruiting processes that have been particularly successful at attracting diverse candidates and share recommendations to replicate them. For example, we have heard that the last LPP Summer fellowship in Oxford had very diverse demographics. Apparently, they s...]]>
AmAristizabal https://forum.effectivealtruism.org/posts/4csmTBamMuQy9Zf6Q/top-down-interventions-that-could-increase-participation-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Top down interventions that could increase participation and impact of Low and Middle Income Countries in EA, published by AmAristizabal on December 19, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Posted on draft Amnesty Day, one day late oopsThanks to Ben Garfinkel, Surbhi Bharadwaj and Mo Putera for feedbackThis is a list of things that could help increase participation of LMICs in EA. Many of these actions involve tradeoffs (e.g. on people's time, efforts and resources) so probably the community shouldn’t do all of these things. Some of them might not currently be worth the trade-offs. However, since the community is serious about trying to increase diversity, and insofar as a number of these are actually pretty low cost, I think many of these might be worthwhile.See also this great post (has more specific suggestions that overlap with some of the ones highlighted here). This post has been drafted in parallel to this EA career guide for people in LMICs (which is a more bottom-up approach).Events:Strongly encourage senior EAs to attend events that are accessible to people in LMICs: Having virtual conferences and EAGxs in LMICs seems great, but for these events to be successful it is important for senior people in EA (who are still overrepresented in high income countries) to continue to attend them. As the number of annual EA events increases, senior EAs could start prioritizing conferences in their own hubs and it could become increasingly hard for people in LMICs to network with them. If senior EAs underestimate the importance of attending conferences that are accessible to people in LMICs, then personal outreach to these EAs could help them make better prioritisation decisions. There may also be ways to make participation in these events more convenient or attractive to senior EAs.Alternatively, making EAGs as inclusive as possible for LMICs. Some measures that could be done:Advertising events further in advance, considering that people from LMICs might struggle more with visa appointments (as already highlighted here).Provide immigration support like invitation letters for conferences and events and make the process smooth for LMICs attendees.Advertising events more broadly in LMICs by sharing events with local community builders, orgs and universities (even nudging community builders in LMICs to advertise an EAG seems low effort and worth doing).Consider hosting at least one annual EA Global in a more visa-friendly hub, instead of the UK and the US. For example, it could be worthwhile to host an EA Global in Mexico City or India (depending on visa requirements).For funders/EA orgs(probably too obvious) Diversifying demographics of staff working at funding organizations. This will come too from bottom-up approaches (community builders in LMICs could encourage EAs in LMICs to apply to jobs in these organizations).In the short term, these orgs could reach out more actively to community builders in LMICs when they have open positions.See more specific recommendations about hiring here.Shoutouts to very inclusive processes: someone could spot hiring and recruiting processes that have been particularly successful at attracting diverse candidates and share recommendations to replicate them. For example, we have heard that the last LPP Summer fellowship in Oxford had very diverse demographics. Apparently, they s...]]>
Mon, 19 Dec 2022 21:25:56 +0000 EA - Top down interventions that could increase participation and impact of Low and Middle Income Countries in EA by AmAristizabal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Top down interventions that could increase participation and impact of Low and Middle Income Countries in EA, published by AmAristizabal on December 19, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Posted on draft Amnesty Day, one day late oopsThanks to Ben Garfinkel, Surbhi Bharadwaj and Mo Putera for feedbackThis is a list of things that could help increase participation of LMICs in EA. Many of these actions involve tradeoffs (e.g. on people's time, efforts and resources) so probably the community shouldn’t do all of these things. Some of them might not currently be worth the trade-offs. However, since the community is serious about trying to increase diversity, and insofar as a number of these are actually pretty low cost, I think many of these might be worthwhile.See also this great post (has more specific suggestions that overlap with some of the ones highlighted here). This post has been drafted in parallel to this EA career guide for people in LMICs (which is a more bottom-up approach).Events:Strongly encourage senior EAs to attend events that are accessible to people in LMICs: Having virtual conferences and EAGxs in LMICs seems great, but for these events to be successful it is important for senior people in EA (who are still overrepresented in high income countries) to continue to attend them. As the number of annual EA events increases, senior EAs could start prioritizing conferences in their own hubs and it could become increasingly hard for people in LMICs to network with them. If senior EAs underestimate the importance of attending conferences that are accessible to people in LMICs, then personal outreach to these EAs could help them make better prioritisation decisions. There may also be ways to make participation in these events more convenient or attractive to senior EAs.Alternatively, making EAGs as inclusive as possible for LMICs. Some measures that could be done:Advertising events further in advance, considering that people from LMICs might struggle more with visa appointments (as already highlighted here).Provide immigration support like invitation letters for conferences and events and make the process smooth for LMICs attendees.Advertising events more broadly in LMICs by sharing events with local community builders, orgs and universities (even nudging community builders in LMICs to advertise an EAG seems low effort and worth doing).Consider hosting at least one annual EA Global in a more visa-friendly hub, instead of the UK and the US. For example, it could be worthwhile to host an EA Global in Mexico City or India (depending on visa requirements).For funders/EA orgs(probably too obvious) Diversifying demographics of staff working at funding organizations. This will come too from bottom-up approaches (community builders in LMICs could encourage EAs in LMICs to apply to jobs in these organizations).In the short term, these orgs could reach out more actively to community builders in LMICs when they have open positions.See more specific recommendations about hiring here.Shoutouts to very inclusive processes: someone could spot hiring and recruiting processes that have been particularly successful at attracting diverse candidates and share recommendations to replicate them. For example, we have heard that the last LPP Summer fellowship in Oxford had very diverse demographics. Apparently, they s...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Top down interventions that could increase participation and impact of Low and Middle Income Countries in EA, published by AmAristizabal on December 19, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Posted on draft Amnesty Day, one day late oopsThanks to Ben Garfinkel, Surbhi Bharadwaj and Mo Putera for feedbackThis is a list of things that could help increase participation of LMICs in EA. Many of these actions involve tradeoffs (e.g. on people's time, efforts and resources) so probably the community shouldn’t do all of these things. Some of them might not currently be worth the trade-offs. However, since the community is serious about trying to increase diversity, and insofar as a number of these are actually pretty low cost, I think many of these might be worthwhile.See also this great post (has more specific suggestions that overlap with some of the ones highlighted here). This post has been drafted in parallel to this EA career guide for people in LMICs (which is a more bottom-up approach).Events:Strongly encourage senior EAs to attend events that are accessible to people in LMICs: Having virtual conferences and EAGxs in LMICs seems great, but for these events to be successful it is important for senior people in EA (who are still overrepresented in high income countries) to continue to attend them. As the number of annual EA events increases, senior EAs could start prioritizing conferences in their own hubs and it could become increasingly hard for people in LMICs to network with them. If senior EAs underestimate the importance of attending conferences that are accessible to people in LMICs, then personal outreach to these EAs could help them make better prioritisation decisions. There may also be ways to make participation in these events more convenient or attractive to senior EAs.Alternatively, making EAGs as inclusive as possible for LMICs. Some measures that could be done:Advertising events further in advance, considering that people from LMICs might struggle more with visa appointments (as already highlighted here).Provide immigration support like invitation letters for conferences and events and make the process smooth for LMICs attendees.Advertising events more broadly in LMICs by sharing events with local community builders, orgs and universities (even nudging community builders in LMICs to advertise an EAG seems low effort and worth doing).Consider hosting at least one annual EA Global in a more visa-friendly hub, instead of the UK and the US. For example, it could be worthwhile to host an EA Global in Mexico City or India (depending on visa requirements).For funders/EA orgs(probably too obvious) Diversifying demographics of staff working at funding organizations. This will come too from bottom-up approaches (community builders in LMICs could encourage EAs in LMICs to apply to jobs in these organizations).In the short term, these orgs could reach out more actively to community builders in LMICs when they have open positions.See more specific recommendations about hiring here.Shoutouts to very inclusive processes: someone could spot hiring and recruiting processes that have been particularly successful at attracting diverse candidates and share recommendations to replicate them. For example, we have heard that the last LPP Summer fellowship in Oxford had very diverse demographics. Apparently, they s...]]>
AmAristizabal https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:19 None full 4188
Zxrv2kBhHzyd2gzsQ_EA EA - Existential risk mitigation: What I worry about when there are only bad options by MMMaas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Existential risk mitigation: What I worry about when there are only bad options, published by MMMaas on December 19, 2022 on The Effective Altruism Forum.(This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was encouraged to post something).(Written in my personal capacity, reflecting only my own, underdeveloped views).(Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated)My status: doubt. Shallow ethical speculation, including attempts to consider different ethical perspectives on these questions that are both closer to and further from my own.If I had my way: great qualities for existential risk reduction optionsWe know what we would like the perfect response to an existential risk to look like. If we could wave a wand, it would be great to have some ideal strategy that manages to simultaneously be:functionally ideal:[...]effective (significantly reduces the risks if successful, ideally permanently),reliable (high chance of success),technically feasible,politically viable,low-cost;safe (little to no downside risk -- ie graceful failure),robust (effective, reliable, feasible, viable and safe across many possible future scenarios),ethically ideal[...]pluralistically ethical (no serious moral costs or rights violations entailed by intervention, under a wide variety of moral views),impartial (everyone is saved by its success; no one bears disproportionate costs of implementing the strategy) / 'paretotopian' (everyone is left better off, or at least no one is made badly worse off);widely accepted (everyone (?) agrees to the strategy's deployment, either in active practice (e.g. after open democratic deliberation or participation), passive practice (e.g. everyone has been notified or informed about the strategy), or at least in principle (we cannot come up with objections from any extant political or ethical positions, after extensive red-teaming)),choice-preserving (does not lead to value lock-in and/or entail leaving a strong ethical fingerprint on the future)etc, etc.But it may be tragically likely that interventions that combine every single one of these traits are just not on the table. To be clear, I think many proposed strategies for reducing existential risk at least aim at hitting many or all of these criteria. But these won't be the only actions that will be pursued around extreme risks.What if the only feasible strategies to respond to existential risks--or the strategies that will most likely be pursued by other actors in response to existential risk--are all, to some extent, imperfect, flawed or 'bad'?Three 'bad' options and their moral dilemmasIn particular, I worry about at least three (possible or likely) classes of strategies that could be considered in response to existential risks or global catastrophes: (1) non-universal escape hatches or partial shields; (2) unilateral high-risk solutions; (3) strongly politically or ethically partisan solutions.All three plausibly constitute '(somewhat) bad' options. I don't want to say that these strategies should not be pursued (e.g. they may still be 'least-bad', given their likely alternatives; or 'acceptably bad', given an evaluation of the likely benefits versus costs). I also don't want to claim that we should not analyze these strategies (especially if they are likely to be adopted by some people in the world).But I do believe that all create moral dilemmas or tradeoffs that I am uncomfortable with--and risky 'failures' that could be entailed by taking one or another view on whether to use them....]]>
MMMaas https://forum.effectivealtruism.org/posts/Zxrv2kBhHzyd2gzsQ/existential-risk-mitigation-what-i-worry-about-when-there Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Existential risk mitigation: What I worry about when there are only bad options, published by MMMaas on December 19, 2022 on The Effective Altruism Forum.(This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was encouraged to post something).(Written in my personal capacity, reflecting only my own, underdeveloped views).(Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated)My status: doubt. Shallow ethical speculation, including attempts to consider different ethical perspectives on these questions that are both closer to and further from my own.If I had my way: great qualities for existential risk reduction optionsWe know what we would like the perfect response to an existential risk to look like. If we could wave a wand, it would be great to have some ideal strategy that manages to simultaneously be:functionally ideal:[...]effective (significantly reduces the risks if successful, ideally permanently),reliable (high chance of success),technically feasible,politically viable,low-cost;safe (little to no downside risk -- ie graceful failure),robust (effective, reliable, feasible, viable and safe across many possible future scenarios),ethically ideal[...]pluralistically ethical (no serious moral costs or rights violations entailed by intervention, under a wide variety of moral views),impartial (everyone is saved by its success; no one bears disproportionate costs of implementing the strategy) / 'paretotopian' (everyone is left better off, or at least no one is made badly worse off);widely accepted (everyone (?) agrees to the strategy's deployment, either in active practice (e.g. after open democratic deliberation or participation), passive practice (e.g. everyone has been notified or informed about the strategy), or at least in principle (we cannot come up with objections from any extant political or ethical positions, after extensive red-teaming)),choice-preserving (does not lead to value lock-in and/or entail leaving a strong ethical fingerprint on the future)etc, etc.But it may be tragically likely that interventions that combine every single one of these traits are just not on the table. To be clear, I think many proposed strategies for reducing existential risk at least aim at hitting many or all of these criteria. But these won't be the only actions that will be pursued around extreme risks.What if the only feasible strategies to respond to existential risks--or the strategies that will most likely be pursued by other actors in response to existential risk--are all, to some extent, imperfect, flawed or 'bad'?Three 'bad' options and their moral dilemmasIn particular, I worry about at least three (possible or likely) classes of strategies that could be considered in response to existential risks or global catastrophes: (1) non-universal escape hatches or partial shields; (2) unilateral high-risk solutions; (3) strongly politically or ethically partisan solutions.All three plausibly constitute '(somewhat) bad' options. I don't want to say that these strategies should not be pursued (e.g. they may still be 'least-bad', given their likely alternatives; or 'acceptably bad', given an evaluation of the likely benefits versus costs). I also don't want to claim that we should not analyze these strategies (especially if they are likely to be adopted by some people in the world).But I do believe that all create moral dilemmas or tradeoffs that I am uncomfortable with--and risky 'failures' that could be entailed by taking one or another view on whether to use them....]]>
Mon, 19 Dec 2022 20:39:16 +0000 EA - Existential risk mitigation: What I worry about when there are only bad options by MMMaas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Existential risk mitigation: What I worry about when there are only bad options, published by MMMaas on December 19, 2022 on The Effective Altruism Forum.(This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was encouraged to post something).(Written in my personal capacity, reflecting only my own, underdeveloped views).(Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated)My status: doubt. Shallow ethical speculation, including attempts to consider different ethical perspectives on these questions that are both closer to and further from my own.If I had my way: great qualities for existential risk reduction optionsWe know what we would like the perfect response to an existential risk to look like. If we could wave a wand, it would be great to have some ideal strategy that manages to simultaneously be:functionally ideal:[...]effective (significantly reduces the risks if successful, ideally permanently),reliable (high chance of success),technically feasible,politically viable,low-cost;safe (little to no downside risk -- ie graceful failure),robust (effective, reliable, feasible, viable and safe across many possible future scenarios),ethically ideal[...]pluralistically ethical (no serious moral costs or rights violations entailed by intervention, under a wide variety of moral views),impartial (everyone is saved by its success; no one bears disproportionate costs of implementing the strategy) / 'paretotopian' (everyone is left better off, or at least no one is made badly worse off);widely accepted (everyone (?) agrees to the strategy's deployment, either in active practice (e.g. after open democratic deliberation or participation), passive practice (e.g. everyone has been notified or informed about the strategy), or at least in principle (we cannot come up with objections from any extant political or ethical positions, after extensive red-teaming)),choice-preserving (does not lead to value lock-in and/or entail leaving a strong ethical fingerprint on the future)etc, etc.But it may be tragically likely that interventions that combine every single one of these traits are just not on the table. To be clear, I think many proposed strategies for reducing existential risk at least aim at hitting many or all of these criteria. But these won't be the only actions that will be pursued around extreme risks.What if the only feasible strategies to respond to existential risks--or the strategies that will most likely be pursued by other actors in response to existential risk--are all, to some extent, imperfect, flawed or 'bad'?Three 'bad' options and their moral dilemmasIn particular, I worry about at least three (possible or likely) classes of strategies that could be considered in response to existential risks or global catastrophes: (1) non-universal escape hatches or partial shields; (2) unilateral high-risk solutions; (3) strongly politically or ethically partisan solutions.All three plausibly constitute '(somewhat) bad' options. I don't want to say that these strategies should not be pursued (e.g. they may still be 'least-bad', given their likely alternatives; or 'acceptably bad', given an evaluation of the likely benefits versus costs). I also don't want to claim that we should not analyze these strategies (especially if they are likely to be adopted by some people in the world).But I do believe that all create moral dilemmas or tradeoffs that I am uncomfortable with--and risky 'failures' that could be entailed by taking one or another view on whether to use them....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Existential risk mitigation: What I worry about when there are only bad options, published by MMMaas on December 19, 2022 on The Effective Altruism Forum.(This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was encouraged to post something).(Written in my personal capacity, reflecting only my own, underdeveloped views).(Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated)My status: doubt. Shallow ethical speculation, including attempts to consider different ethical perspectives on these questions that are both closer to and further from my own.If I had my way: great qualities for existential risk reduction optionsWe know what we would like the perfect response to an existential risk to look like. If we could wave a wand, it would be great to have some ideal strategy that manages to simultaneously be:functionally ideal:[...]effective (significantly reduces the risks if successful, ideally permanently),reliable (high chance of success),technically feasible,politically viable,low-cost;safe (little to no downside risk -- ie graceful failure),robust (effective, reliable, feasible, viable and safe across many possible future scenarios),ethically ideal[...]pluralistically ethical (no serious moral costs or rights violations entailed by intervention, under a wide variety of moral views),impartial (everyone is saved by its success; no one bears disproportionate costs of implementing the strategy) / 'paretotopian' (everyone is left better off, or at least no one is made badly worse off);widely accepted (everyone (?) agrees to the strategy's deployment, either in active practice (e.g. after open democratic deliberation or participation), passive practice (e.g. everyone has been notified or informed about the strategy), or at least in principle (we cannot come up with objections from any extant political or ethical positions, after extensive red-teaming)),choice-preserving (does not lead to value lock-in and/or entail leaving a strong ethical fingerprint on the future)etc, etc.But it may be tragically likely that interventions that combine every single one of these traits are just not on the table. To be clear, I think many proposed strategies for reducing existential risk at least aim at hitting many or all of these criteria. But these won't be the only actions that will be pursued around extreme risks.What if the only feasible strategies to respond to existential risks--or the strategies that will most likely be pursued by other actors in response to existential risk--are all, to some extent, imperfect, flawed or 'bad'?Three 'bad' options and their moral dilemmasIn particular, I worry about at least three (possible or likely) classes of strategies that could be considered in response to existential risks or global catastrophes: (1) non-universal escape hatches or partial shields; (2) unilateral high-risk solutions; (3) strongly politically or ethically partisan solutions.All three plausibly constitute '(somewhat) bad' options. I don't want to say that these strategies should not be pursued (e.g. they may still be 'least-bad', given their likely alternatives; or 'acceptably bad', given an evaluation of the likely benefits versus costs). I also don't want to claim that we should not analyze these strategies (especially if they are likely to be adopted by some people in the world).But I do believe that all create moral dilemmas or tradeoffs that I am uncomfortable with--and risky 'failures' that could be entailed by taking one or another view on whether to use them....]]>
MMMaas https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:46 None full 4177
qDaBSETiQJusrcctJ_EA EA - Proposal — change the name of EA Global by DMMF Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposal — change the name of EA Global, published by DMMF on December 19, 2022 on The Effective Altruism Forum.(to something along the lines of: EA Careers Conference or EA Direct Work Conference)Earlier this year, Scott Alexander made a thread titled Open EA Global that caused a big stir.My summary of events is that people interpreted EA Global to represent different things and have different aims, leading to many to have different expectations as to who should attend EA Global. This ambiguity led to many people feeling hurt when they (or others they care about) were rejected from EAG.The organizers of EAG seemingly had done a bad job explaining what EAG is and had incorrect/out of date/misleading information about EAG on various online platforms/communications.To Eli Nathan’s credit (the CEA lead for EAG), he provided a lot of helpful context in the ensuing thread and updated lots of EAG related communications to be more clear.The EAG website now states:“EA Global is designed for people who have a solid understanding of the main concepts of effective altruism, and who are making decisions and taking significant actions based on them. EA Global conferences are not the only events for people interested in effective altruism!”Still, several months after Scott’s thread, I still feel there has not been an adequate resolution to this.In Eli’s response to Scott, he provided:"EAG is primarily a networking event."“I wanted to clarify: EAG exists to make the world a better place, rather than serve the EA community or make EAs happy”“The conference is called "EA Global" and is universally billed as the place where EAs meet one another, learn more about the movement, and have a good time together.” It’s possible we should rename the event, and I agree this confusion and reputation is problematic, but I would like to clarify that we don’t define the event like this anywhere (though perhaps we used to in previous years). It’s now explicitly described as an event with a high bar for highly engaged EAs (see here). "Even though there is now much accurate information about EAG online, I still find it problematic.As stated above, the primary description of who EAG is for is: “EA Global is designed for people who have a solid understanding of the main concepts of effective altruism, and who are making decisions and taking significant actions based on them.”I think EAG is still not meant for the majority of people who fit this description.Those who earn to give, those who donate significant sums of their annual income to charity, those who decided to become vegans, those who spend large amounts of time reading EA content/attending Ea meet ups etc. all meet this description yet are not ideal candidates for EAG.If EAG is intended to be a conference for people doing direct work to network and to help motivate others (particularly, students/early career workers) to do direct work, then I think the conference should be clear about this.I feel this way for two reasons:Despite the updated branding/communications, people are still going to feel excluded and hurt.Because there is already a conference titled EA Global, people are less likely to create new Effective Altruism conferences for other purposes like being the place where EAs meet one another, learn more about the movement, and have a good time together etc.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
DMMF https://forum.effectivealtruism.org/posts/qDaBSETiQJusrcctJ/proposal-change-the-name-of-ea-global Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposal — change the name of EA Global, published by DMMF on December 19, 2022 on The Effective Altruism Forum.(to something along the lines of: EA Careers Conference or EA Direct Work Conference)Earlier this year, Scott Alexander made a thread titled Open EA Global that caused a big stir.My summary of events is that people interpreted EA Global to represent different things and have different aims, leading to many to have different expectations as to who should attend EA Global. This ambiguity led to many people feeling hurt when they (or others they care about) were rejected from EAG.The organizers of EAG seemingly had done a bad job explaining what EAG is and had incorrect/out of date/misleading information about EAG on various online platforms/communications.To Eli Nathan’s credit (the CEA lead for EAG), he provided a lot of helpful context in the ensuing thread and updated lots of EAG related communications to be more clear.The EAG website now states:“EA Global is designed for people who have a solid understanding of the main concepts of effective altruism, and who are making decisions and taking significant actions based on them. EA Global conferences are not the only events for people interested in effective altruism!”Still, several months after Scott’s thread, I still feel there has not been an adequate resolution to this.In Eli’s response to Scott, he provided:"EAG is primarily a networking event."“I wanted to clarify: EAG exists to make the world a better place, rather than serve the EA community or make EAs happy”“The conference is called "EA Global" and is universally billed as the place where EAs meet one another, learn more about the movement, and have a good time together.” It’s possible we should rename the event, and I agree this confusion and reputation is problematic, but I would like to clarify that we don’t define the event like this anywhere (though perhaps we used to in previous years). It’s now explicitly described as an event with a high bar for highly engaged EAs (see here). "Even though there is now much accurate information about EAG online, I still find it problematic.As stated above, the primary description of who EAG is for is: “EA Global is designed for people who have a solid understanding of the main concepts of effective altruism, and who are making decisions and taking significant actions based on them.”I think EAG is still not meant for the majority of people who fit this description.Those who earn to give, those who donate significant sums of their annual income to charity, those who decided to become vegans, those who spend large amounts of time reading EA content/attending Ea meet ups etc. all meet this description yet are not ideal candidates for EAG.If EAG is intended to be a conference for people doing direct work to network and to help motivate others (particularly, students/early career workers) to do direct work, then I think the conference should be clear about this.I feel this way for two reasons:Despite the updated branding/communications, people are still going to feel excluded and hurt.Because there is already a conference titled EA Global, people are less likely to create new Effective Altruism conferences for other purposes like being the place where EAs meet one another, learn more about the movement, and have a good time together etc.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 19 Dec 2022 19:29:22 +0000 EA - Proposal — change the name of EA Global by DMMF Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposal — change the name of EA Global, published by DMMF on December 19, 2022 on The Effective Altruism Forum.(to something along the lines of: EA Careers Conference or EA Direct Work Conference)Earlier this year, Scott Alexander made a thread titled Open EA Global that caused a big stir.My summary of events is that people interpreted EA Global to represent different things and have different aims, leading to many to have different expectations as to who should attend EA Global. This ambiguity led to many people feeling hurt when they (or others they care about) were rejected from EAG.The organizers of EAG seemingly had done a bad job explaining what EAG is and had incorrect/out of date/misleading information about EAG on various online platforms/communications.To Eli Nathan’s credit (the CEA lead for EAG), he provided a lot of helpful context in the ensuing thread and updated lots of EAG related communications to be more clear.The EAG website now states:“EA Global is designed for people who have a solid understanding of the main concepts of effective altruism, and who are making decisions and taking significant actions based on them. EA Global conferences are not the only events for people interested in effective altruism!”Still, several months after Scott’s thread, I still feel there has not been an adequate resolution to this.In Eli’s response to Scott, he provided:"EAG is primarily a networking event."“I wanted to clarify: EAG exists to make the world a better place, rather than serve the EA community or make EAs happy”“The conference is called "EA Global" and is universally billed as the place where EAs meet one another, learn more about the movement, and have a good time together.” It’s possible we should rename the event, and I agree this confusion and reputation is problematic, but I would like to clarify that we don’t define the event like this anywhere (though perhaps we used to in previous years). It’s now explicitly described as an event with a high bar for highly engaged EAs (see here). "Even though there is now much accurate information about EAG online, I still find it problematic.As stated above, the primary description of who EAG is for is: “EA Global is designed for people who have a solid understanding of the main concepts of effective altruism, and who are making decisions and taking significant actions based on them.”I think EAG is still not meant for the majority of people who fit this description.Those who earn to give, those who donate significant sums of their annual income to charity, those who decided to become vegans, those who spend large amounts of time reading EA content/attending Ea meet ups etc. all meet this description yet are not ideal candidates for EAG.If EAG is intended to be a conference for people doing direct work to network and to help motivate others (particularly, students/early career workers) to do direct work, then I think the conference should be clear about this.I feel this way for two reasons:Despite the updated branding/communications, people are still going to feel excluded and hurt.Because there is already a conference titled EA Global, people are less likely to create new Effective Altruism conferences for other purposes like being the place where EAs meet one another, learn more about the movement, and have a good time together etc.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposal — change the name of EA Global, published by DMMF on December 19, 2022 on The Effective Altruism Forum.(to something along the lines of: EA Careers Conference or EA Direct Work Conference)Earlier this year, Scott Alexander made a thread titled Open EA Global that caused a big stir.My summary of events is that people interpreted EA Global to represent different things and have different aims, leading to many to have different expectations as to who should attend EA Global. This ambiguity led to many people feeling hurt when they (or others they care about) were rejected from EAG.The organizers of EAG seemingly had done a bad job explaining what EAG is and had incorrect/out of date/misleading information about EAG on various online platforms/communications.To Eli Nathan’s credit (the CEA lead for EAG), he provided a lot of helpful context in the ensuing thread and updated lots of EAG related communications to be more clear.The EAG website now states:“EA Global is designed for people who have a solid understanding of the main concepts of effective altruism, and who are making decisions and taking significant actions based on them. EA Global conferences are not the only events for people interested in effective altruism!”Still, several months after Scott’s thread, I still feel there has not been an adequate resolution to this.In Eli’s response to Scott, he provided:"EAG is primarily a networking event."“I wanted to clarify: EAG exists to make the world a better place, rather than serve the EA community or make EAs happy”“The conference is called "EA Global" and is universally billed as the place where EAs meet one another, learn more about the movement, and have a good time together.” It’s possible we should rename the event, and I agree this confusion and reputation is problematic, but I would like to clarify that we don’t define the event like this anywhere (though perhaps we used to in previous years). It’s now explicitly described as an event with a high bar for highly engaged EAs (see here). "Even though there is now much accurate information about EAG online, I still find it problematic.As stated above, the primary description of who EAG is for is: “EA Global is designed for people who have a solid understanding of the main concepts of effective altruism, and who are making decisions and taking significant actions based on them.”I think EAG is still not meant for the majority of people who fit this description.Those who earn to give, those who donate significant sums of their annual income to charity, those who decided to become vegans, those who spend large amounts of time reading EA content/attending Ea meet ups etc. all meet this description yet are not ideal candidates for EAG.If EAG is intended to be a conference for people doing direct work to network and to help motivate others (particularly, students/early career workers) to do direct work, then I think the conference should be clear about this.I feel this way for two reasons:Despite the updated branding/communications, people are still going to feel excluded and hurt.Because there is already a conference titled EA Global, people are less likely to create new Effective Altruism conferences for other purposes like being the place where EAs meet one another, learn more about the movement, and have a good time together etc.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
DMMF https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:15 None full 4174
eCYkD4BP2s4FYuwbP_EA EA - Staff members’ personal donations for giving season 2022 by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Staff members’ personal donations for giving season 2022, published by GiveWell on December 19, 2022 on The Effective Altruism Forum.Author: Isabel Arjmand, GiveWell Special Projects OfficerFor this post, a number of GiveWell staff members volunteered to share the thinking behind their personal donations for the year. We’ve published similar posts in previous years.[1] Staff are listed alphabetically by first name.You can click the links in the table of contents on the left to jump to a staff member’s entry.Andrew Martin (Senior Research Associate)I continue to be impressed by the care and thoughtfulness I see from my colleagues in making grant allocation decisions. Seeing and participating in this work informs my decision to give all of my donation this year to the All Grants Fund. In addition to GiveWell's Top Charities, I'm excited to be able to support other highly cost-effective programs through the All Grants Fund, as highlighted in this blog post.Audrey Cooper (Philanthropy Advisor)We plan to donate to GiveWell's Top Charities Fund again this year. Each of the top charity programs has substantial funding needs, such that they could reach more people and save more lives if they receive more donations. I'm excited to help these programs close the gap.We also plan to continue our support of the International Refugee Assistance Project and criminal justice organizations. Throughout the year, we also make smaller donations to local causes (such as services for people experiencing homelessness, community gardens, etc.) as well as gifts in honor of friends to their charities of choice.Elie Hassenfeld (Co-Founder and Chief Executive Officer)This year, my family is planning to give 80% of our annual donation to GiveWell's All Grants Funds and 20% to GiveDirectly.We're giving to GiveWell's All Grants Fund because it gives GiveWell the most flexibility to direct funds where we (GiveWell staff) think they will do the most good. This may mean supporting programs at Top Charities, but it could mean funding newer organizations, research, or more speculative opportunities that are high (expected) impact.Our decision to give to GiveDirectly is less straightforward. Based on GiveWell's cost-effectiveness models, the funds my family is giving to GiveDirectly would do more good if given elsewhere (roughly speaking, GiveWell's best estimate is that funds to top charities and the All Grants Fund do about ten times as much good in expectation).We're giving 20% to GiveDirectly for two reasons:When I talk to people who aren't already familiar with GiveWell's work, I often reference GiveDirectly. Many people aren't aware of the vast income disparities between high-income and low-income countries. I talk about GiveDirectly because (a) it's very simple and easy to explain and (b) years ago, I visited GiveDirectly's program in Kenya, so I'm able to speak personally and specifically about people who benefited from GiveDirectly's work. For example, I often tell people about a specific family I met who received ~$1,000 from GiveDirectly. Like many other families, they chose to use part of their cash transfer to replace their thatched roof with a metal one. Before receiving these funds, when it rained in the middle of the night, the family (if I recall correctly, a mother and two children) would have to move out to a neighbor's house that was ~60 feet away to stay dry. They'd come back the next day to find their belongings soaked.Or, when I talk to people with an interest in evaluation, I tell them that, when I visited Kenya, GiveDirectly enabled me to randomly select households to visit. On any site visit, donors should expect that organizations are aiming to shape a compelling narrative of their impact, so I loved that GiveDirectly helped me see a more representative picture ...]]>
GiveWell https://forum.effectivealtruism.org/posts/eCYkD4BP2s4FYuwbP/staff-members-personal-donations-for-giving-season-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Staff members’ personal donations for giving season 2022, published by GiveWell on December 19, 2022 on The Effective Altruism Forum.Author: Isabel Arjmand, GiveWell Special Projects OfficerFor this post, a number of GiveWell staff members volunteered to share the thinking behind their personal donations for the year. We’ve published similar posts in previous years.[1] Staff are listed alphabetically by first name.You can click the links in the table of contents on the left to jump to a staff member’s entry.Andrew Martin (Senior Research Associate)I continue to be impressed by the care and thoughtfulness I see from my colleagues in making grant allocation decisions. Seeing and participating in this work informs my decision to give all of my donation this year to the All Grants Fund. In addition to GiveWell's Top Charities, I'm excited to be able to support other highly cost-effective programs through the All Grants Fund, as highlighted in this blog post.Audrey Cooper (Philanthropy Advisor)We plan to donate to GiveWell's Top Charities Fund again this year. Each of the top charity programs has substantial funding needs, such that they could reach more people and save more lives if they receive more donations. I'm excited to help these programs close the gap.We also plan to continue our support of the International Refugee Assistance Project and criminal justice organizations. Throughout the year, we also make smaller donations to local causes (such as services for people experiencing homelessness, community gardens, etc.) as well as gifts in honor of friends to their charities of choice.Elie Hassenfeld (Co-Founder and Chief Executive Officer)This year, my family is planning to give 80% of our annual donation to GiveWell's All Grants Funds and 20% to GiveDirectly.We're giving to GiveWell's All Grants Fund because it gives GiveWell the most flexibility to direct funds where we (GiveWell staff) think they will do the most good. This may mean supporting programs at Top Charities, but it could mean funding newer organizations, research, or more speculative opportunities that are high (expected) impact.Our decision to give to GiveDirectly is less straightforward. Based on GiveWell's cost-effectiveness models, the funds my family is giving to GiveDirectly would do more good if given elsewhere (roughly speaking, GiveWell's best estimate is that funds to top charities and the All Grants Fund do about ten times as much good in expectation).We're giving 20% to GiveDirectly for two reasons:When I talk to people who aren't already familiar with GiveWell's work, I often reference GiveDirectly. Many people aren't aware of the vast income disparities between high-income and low-income countries. I talk about GiveDirectly because (a) it's very simple and easy to explain and (b) years ago, I visited GiveDirectly's program in Kenya, so I'm able to speak personally and specifically about people who benefited from GiveDirectly's work. For example, I often tell people about a specific family I met who received ~$1,000 from GiveDirectly. Like many other families, they chose to use part of their cash transfer to replace their thatched roof with a metal one. Before receiving these funds, when it rained in the middle of the night, the family (if I recall correctly, a mother and two children) would have to move out to a neighbor's house that was ~60 feet away to stay dry. They'd come back the next day to find their belongings soaked.Or, when I talk to people with an interest in evaluation, I tell them that, when I visited Kenya, GiveDirectly enabled me to randomly select households to visit. On any site visit, donors should expect that organizations are aiming to shape a compelling narrative of their impact, so I loved that GiveDirectly helped me see a more representative picture ...]]>
Mon, 19 Dec 2022 18:31:04 +0000 EA - Staff members’ personal donations for giving season 2022 by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Staff members’ personal donations for giving season 2022, published by GiveWell on December 19, 2022 on The Effective Altruism Forum.Author: Isabel Arjmand, GiveWell Special Projects OfficerFor this post, a number of GiveWell staff members volunteered to share the thinking behind their personal donations for the year. We’ve published similar posts in previous years.[1] Staff are listed alphabetically by first name.You can click the links in the table of contents on the left to jump to a staff member’s entry.Andrew Martin (Senior Research Associate)I continue to be impressed by the care and thoughtfulness I see from my colleagues in making grant allocation decisions. Seeing and participating in this work informs my decision to give all of my donation this year to the All Grants Fund. In addition to GiveWell's Top Charities, I'm excited to be able to support other highly cost-effective programs through the All Grants Fund, as highlighted in this blog post.Audrey Cooper (Philanthropy Advisor)We plan to donate to GiveWell's Top Charities Fund again this year. Each of the top charity programs has substantial funding needs, such that they could reach more people and save more lives if they receive more donations. I'm excited to help these programs close the gap.We also plan to continue our support of the International Refugee Assistance Project and criminal justice organizations. Throughout the year, we also make smaller donations to local causes (such as services for people experiencing homelessness, community gardens, etc.) as well as gifts in honor of friends to their charities of choice.Elie Hassenfeld (Co-Founder and Chief Executive Officer)This year, my family is planning to give 80% of our annual donation to GiveWell's All Grants Funds and 20% to GiveDirectly.We're giving to GiveWell's All Grants Fund because it gives GiveWell the most flexibility to direct funds where we (GiveWell staff) think they will do the most good. This may mean supporting programs at Top Charities, but it could mean funding newer organizations, research, or more speculative opportunities that are high (expected) impact.Our decision to give to GiveDirectly is less straightforward. Based on GiveWell's cost-effectiveness models, the funds my family is giving to GiveDirectly would do more good if given elsewhere (roughly speaking, GiveWell's best estimate is that funds to top charities and the All Grants Fund do about ten times as much good in expectation).We're giving 20% to GiveDirectly for two reasons:When I talk to people who aren't already familiar with GiveWell's work, I often reference GiveDirectly. Many people aren't aware of the vast income disparities between high-income and low-income countries. I talk about GiveDirectly because (a) it's very simple and easy to explain and (b) years ago, I visited GiveDirectly's program in Kenya, so I'm able to speak personally and specifically about people who benefited from GiveDirectly's work. For example, I often tell people about a specific family I met who received ~$1,000 from GiveDirectly. Like many other families, they chose to use part of their cash transfer to replace their thatched roof with a metal one. Before receiving these funds, when it rained in the middle of the night, the family (if I recall correctly, a mother and two children) would have to move out to a neighbor's house that was ~60 feet away to stay dry. They'd come back the next day to find their belongings soaked.Or, when I talk to people with an interest in evaluation, I tell them that, when I visited Kenya, GiveDirectly enabled me to randomly select households to visit. On any site visit, donors should expect that organizations are aiming to shape a compelling narrative of their impact, so I loved that GiveDirectly helped me see a more representative picture ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Staff members’ personal donations for giving season 2022, published by GiveWell on December 19, 2022 on The Effective Altruism Forum.Author: Isabel Arjmand, GiveWell Special Projects OfficerFor this post, a number of GiveWell staff members volunteered to share the thinking behind their personal donations for the year. We’ve published similar posts in previous years.[1] Staff are listed alphabetically by first name.You can click the links in the table of contents on the left to jump to a staff member’s entry.Andrew Martin (Senior Research Associate)I continue to be impressed by the care and thoughtfulness I see from my colleagues in making grant allocation decisions. Seeing and participating in this work informs my decision to give all of my donation this year to the All Grants Fund. In addition to GiveWell's Top Charities, I'm excited to be able to support other highly cost-effective programs through the All Grants Fund, as highlighted in this blog post.Audrey Cooper (Philanthropy Advisor)We plan to donate to GiveWell's Top Charities Fund again this year. Each of the top charity programs has substantial funding needs, such that they could reach more people and save more lives if they receive more donations. I'm excited to help these programs close the gap.We also plan to continue our support of the International Refugee Assistance Project and criminal justice organizations. Throughout the year, we also make smaller donations to local causes (such as services for people experiencing homelessness, community gardens, etc.) as well as gifts in honor of friends to their charities of choice.Elie Hassenfeld (Co-Founder and Chief Executive Officer)This year, my family is planning to give 80% of our annual donation to GiveWell's All Grants Funds and 20% to GiveDirectly.We're giving to GiveWell's All Grants Fund because it gives GiveWell the most flexibility to direct funds where we (GiveWell staff) think they will do the most good. This may mean supporting programs at Top Charities, but it could mean funding newer organizations, research, or more speculative opportunities that are high (expected) impact.Our decision to give to GiveDirectly is less straightforward. Based on GiveWell's cost-effectiveness models, the funds my family is giving to GiveDirectly would do more good if given elsewhere (roughly speaking, GiveWell's best estimate is that funds to top charities and the All Grants Fund do about ten times as much good in expectation).We're giving 20% to GiveDirectly for two reasons:When I talk to people who aren't already familiar with GiveWell's work, I often reference GiveDirectly. Many people aren't aware of the vast income disparities between high-income and low-income countries. I talk about GiveDirectly because (a) it's very simple and easy to explain and (b) years ago, I visited GiveDirectly's program in Kenya, so I'm able to speak personally and specifically about people who benefited from GiveDirectly's work. For example, I often tell people about a specific family I met who received ~$1,000 from GiveDirectly. Like many other families, they chose to use part of their cash transfer to replace their thatched roof with a metal one. Before receiving these funds, when it rained in the middle of the night, the family (if I recall correctly, a mother and two children) would have to move out to a neighbor's house that was ~60 feet away to stay dry. They'd come back the next day to find their belongings soaked.Or, when I talk to people with an interest in evaluation, I tell them that, when I visited Kenya, GiveDirectly enabled me to randomly select households to visit. On any site visit, donors should expect that organizations are aiming to shape a compelling narrative of their impact, so I loved that GiveDirectly helped me see a more representative picture ...]]>
GiveWell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 16:26 None full 4176
k73qrirnxcKtKZ4ng_EA EA - The ‘Old AI’: Lessons for AI governance from early electricity regulation by Sam Clarke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The ‘Old AI’: Lessons for AI governance from early electricity regulation, published by Sam Clarke on December 19, 2022 on The Effective Altruism Forum.Note: neither author has a background in history, so please take this with a lot of salt. Sam thinks this is more likely than not to contain an important error. This was written in April 2022 and we’re posting now as a draft, because the alternative is to never post.Like electricity, AI is argued to be a general purpose technology, which will significantly shape the global economic, military and political landscapes, attracting considerable media attention and public concern. Also like electricity, AI technology has the property that whilst some use cases are innocuous, others pose varying risks of harm.Due to these similarities, one might wonder if there are any lessons for AI governance today to be learned from the development of early electricity regulation and standards. We looked into this question for about two weeks, focusing on early electrification in the US from the late 1800s to the early 1900s, and on the UK’s nationalisation of the electricity sector during the 20th century.This post identifies and examines lessons we found particularly interesting and relevant to AI governance. We imagine many of them will be fairly obvious to many readers, but we found that having concrete historical examples was helpful for understanding the lessons in more depth and grounding them in some empirical evidence.In brief, the lessons we found interesting and relevant are:Accidents can galvanise regulationPeople co-opt accidents for their own (policy) agendas (to various degrees of success)Technology experts can have significant influence in dictating the direction of early standards and regulationTechnology regulation is not inherently anti-innovationThe optimal amount and shape of regulation can change as a technology maturesThe need for interoperability of electrical devices presented a window of opportunity for setting global standardsThe development of safety regulation can be driven by unexpected stakeholdersPervasive monitoring and hard constraints on individual consumption of technology is an existing and already used governance toolThere’s a lot more that could be investigated here—if you’re interested in this topic, and especially if you’re a historian interested in electricity or the early development of technology standards and regulations, we think there are a number of threads of inquiry that could be worth picking up.Accidents can galvanise regulationIn the early days of electrification, there were several high-profile accidents resulting in deaths and economic damage:A lineman being electrocuted in a tangle of overhead electrical wires, above a busy lunchtime crowd in Manhattan, which included many influential New York aldermen.There were a number of other deaths for similar reasons, which occurred somewhat less publicly and so were less influential but still important.Pearl Street Station—the first commercial central power plant in the United States—burned down in 1890.The 1888 blizzard in New York City tore down many power lines and led to a power blackout.Despite electric companies like Western Union and US Illuminating Company protesting regulation with court injunctions, [Hargadon & Doglas 2021] these accidents spurred government and corporate regulation around electrical safety, including:Various governments began to require high voltage electrical lines to be buried underground, one of the first (if not the first) governmental regulations on electricity to be introduced [Stross 2007].Thomson-Houston electric company developed lighting arrestors for power lines and blowout switches to shut down systems in case of a power surge [Davis 2012].Concerned about risks of installing AC e...]]>
Sam Clarke https://forum.effectivealtruism.org/posts/k73qrirnxcKtKZ4ng/the-old-ai-lessons-for-ai-governance-from-early-electricity-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The ‘Old AI’: Lessons for AI governance from early electricity regulation, published by Sam Clarke on December 19, 2022 on The Effective Altruism Forum.Note: neither author has a background in history, so please take this with a lot of salt. Sam thinks this is more likely than not to contain an important error. This was written in April 2022 and we’re posting now as a draft, because the alternative is to never post.Like electricity, AI is argued to be a general purpose technology, which will significantly shape the global economic, military and political landscapes, attracting considerable media attention and public concern. Also like electricity, AI technology has the property that whilst some use cases are innocuous, others pose varying risks of harm.Due to these similarities, one might wonder if there are any lessons for AI governance today to be learned from the development of early electricity regulation and standards. We looked into this question for about two weeks, focusing on early electrification in the US from the late 1800s to the early 1900s, and on the UK’s nationalisation of the electricity sector during the 20th century.This post identifies and examines lessons we found particularly interesting and relevant to AI governance. We imagine many of them will be fairly obvious to many readers, but we found that having concrete historical examples was helpful for understanding the lessons in more depth and grounding them in some empirical evidence.In brief, the lessons we found interesting and relevant are:Accidents can galvanise regulationPeople co-opt accidents for their own (policy) agendas (to various degrees of success)Technology experts can have significant influence in dictating the direction of early standards and regulationTechnology regulation is not inherently anti-innovationThe optimal amount and shape of regulation can change as a technology maturesThe need for interoperability of electrical devices presented a window of opportunity for setting global standardsThe development of safety regulation can be driven by unexpected stakeholdersPervasive monitoring and hard constraints on individual consumption of technology is an existing and already used governance toolThere’s a lot more that could be investigated here—if you’re interested in this topic, and especially if you’re a historian interested in electricity or the early development of technology standards and regulations, we think there are a number of threads of inquiry that could be worth picking up.Accidents can galvanise regulationIn the early days of electrification, there were several high-profile accidents resulting in deaths and economic damage:A lineman being electrocuted in a tangle of overhead electrical wires, above a busy lunchtime crowd in Manhattan, which included many influential New York aldermen.There were a number of other deaths for similar reasons, which occurred somewhat less publicly and so were less influential but still important.Pearl Street Station—the first commercial central power plant in the United States—burned down in 1890.The 1888 blizzard in New York City tore down many power lines and led to a power blackout.Despite electric companies like Western Union and US Illuminating Company protesting regulation with court injunctions, [Hargadon & Doglas 2021] these accidents spurred government and corporate regulation around electrical safety, including:Various governments began to require high voltage electrical lines to be buried underground, one of the first (if not the first) governmental regulations on electricity to be introduced [Stross 2007].Thomson-Houston electric company developed lighting arrestors for power lines and blowout switches to shut down systems in case of a power surge [Davis 2012].Concerned about risks of installing AC e...]]>
Mon, 19 Dec 2022 17:42:58 +0000 EA - The ‘Old AI’: Lessons for AI governance from early electricity regulation by Sam Clarke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The ‘Old AI’: Lessons for AI governance from early electricity regulation, published by Sam Clarke on December 19, 2022 on The Effective Altruism Forum.Note: neither author has a background in history, so please take this with a lot of salt. Sam thinks this is more likely than not to contain an important error. This was written in April 2022 and we’re posting now as a draft, because the alternative is to never post.Like electricity, AI is argued to be a general purpose technology, which will significantly shape the global economic, military and political landscapes, attracting considerable media attention and public concern. Also like electricity, AI technology has the property that whilst some use cases are innocuous, others pose varying risks of harm.Due to these similarities, one might wonder if there are any lessons for AI governance today to be learned from the development of early electricity regulation and standards. We looked into this question for about two weeks, focusing on early electrification in the US from the late 1800s to the early 1900s, and on the UK’s nationalisation of the electricity sector during the 20th century.This post identifies and examines lessons we found particularly interesting and relevant to AI governance. We imagine many of them will be fairly obvious to many readers, but we found that having concrete historical examples was helpful for understanding the lessons in more depth and grounding them in some empirical evidence.In brief, the lessons we found interesting and relevant are:Accidents can galvanise regulationPeople co-opt accidents for their own (policy) agendas (to various degrees of success)Technology experts can have significant influence in dictating the direction of early standards and regulationTechnology regulation is not inherently anti-innovationThe optimal amount and shape of regulation can change as a technology maturesThe need for interoperability of electrical devices presented a window of opportunity for setting global standardsThe development of safety regulation can be driven by unexpected stakeholdersPervasive monitoring and hard constraints on individual consumption of technology is an existing and already used governance toolThere’s a lot more that could be investigated here—if you’re interested in this topic, and especially if you’re a historian interested in electricity or the early development of technology standards and regulations, we think there are a number of threads of inquiry that could be worth picking up.Accidents can galvanise regulationIn the early days of electrification, there were several high-profile accidents resulting in deaths and economic damage:A lineman being electrocuted in a tangle of overhead electrical wires, above a busy lunchtime crowd in Manhattan, which included many influential New York aldermen.There were a number of other deaths for similar reasons, which occurred somewhat less publicly and so were less influential but still important.Pearl Street Station—the first commercial central power plant in the United States—burned down in 1890.The 1888 blizzard in New York City tore down many power lines and led to a power blackout.Despite electric companies like Western Union and US Illuminating Company protesting regulation with court injunctions, [Hargadon & Doglas 2021] these accidents spurred government and corporate regulation around electrical safety, including:Various governments began to require high voltage electrical lines to be buried underground, one of the first (if not the first) governmental regulations on electricity to be introduced [Stross 2007].Thomson-Houston electric company developed lighting arrestors for power lines and blowout switches to shut down systems in case of a power surge [Davis 2012].Concerned about risks of installing AC e...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The ‘Old AI’: Lessons for AI governance from early electricity regulation, published by Sam Clarke on December 19, 2022 on The Effective Altruism Forum.Note: neither author has a background in history, so please take this with a lot of salt. Sam thinks this is more likely than not to contain an important error. This was written in April 2022 and we’re posting now as a draft, because the alternative is to never post.Like electricity, AI is argued to be a general purpose technology, which will significantly shape the global economic, military and political landscapes, attracting considerable media attention and public concern. Also like electricity, AI technology has the property that whilst some use cases are innocuous, others pose varying risks of harm.Due to these similarities, one might wonder if there are any lessons for AI governance today to be learned from the development of early electricity regulation and standards. We looked into this question for about two weeks, focusing on early electrification in the US from the late 1800s to the early 1900s, and on the UK’s nationalisation of the electricity sector during the 20th century.This post identifies and examines lessons we found particularly interesting and relevant to AI governance. We imagine many of them will be fairly obvious to many readers, but we found that having concrete historical examples was helpful for understanding the lessons in more depth and grounding them in some empirical evidence.In brief, the lessons we found interesting and relevant are:Accidents can galvanise regulationPeople co-opt accidents for their own (policy) agendas (to various degrees of success)Technology experts can have significant influence in dictating the direction of early standards and regulationTechnology regulation is not inherently anti-innovationThe optimal amount and shape of regulation can change as a technology maturesThe need for interoperability of electrical devices presented a window of opportunity for setting global standardsThe development of safety regulation can be driven by unexpected stakeholdersPervasive monitoring and hard constraints on individual consumption of technology is an existing and already used governance toolThere’s a lot more that could be investigated here—if you’re interested in this topic, and especially if you’re a historian interested in electricity or the early development of technology standards and regulations, we think there are a number of threads of inquiry that could be worth picking up.Accidents can galvanise regulationIn the early days of electrification, there were several high-profile accidents resulting in deaths and economic damage:A lineman being electrocuted in a tangle of overhead electrical wires, above a busy lunchtime crowd in Manhattan, which included many influential New York aldermen.There were a number of other deaths for similar reasons, which occurred somewhat less publicly and so were less influential but still important.Pearl Street Station—the first commercial central power plant in the United States—burned down in 1890.The 1888 blizzard in New York City tore down many power lines and led to a power blackout.Despite electric companies like Western Union and US Illuminating Company protesting regulation with court injunctions, [Hargadon & Doglas 2021] these accidents spurred government and corporate regulation around electrical safety, including:Various governments began to require high voltage electrical lines to be buried underground, one of the first (if not the first) governmental regulations on electricity to be introduced [Stross 2007].Thomson-Houston electric company developed lighting arrestors for power lines and blowout switches to shut down systems in case of a power surge [Davis 2012].Concerned about risks of installing AC e...]]>
Sam Clarke https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 24:19 None full 4180
e6wHxJ8mzwhgRvyjC_EA EA - [Draft Amnesty] Early unfinished draft on the case for a first principles, systematic scoping of meat alternatives by Linch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Draft Amnesty] Early unfinished draft on the case for a first principles, systematic scoping of meat alternatives, published by Linch on December 19, 2022 on The Effective Altruism Forum.Epistemic status: Was revealed to me in a dream.More seriously, this is a post (actually, stringing together two posts) I wrote based very loosely on research I did in late 2021. I do not necessarily stand by all the claims in the current post, but hope that it can still be moderately helpful to at least some readers. I think this post has some structural issues. It was not cleaned up sufficiently for my personal standards of publication. It is also in a more conversational style than I endorse, and I’ve grown to be less confident in the core metaphors. Earlier drafts also have critiques that I did not get around to addressing. I also expect it to be more generally false. However in the spirit of draft amnesty day, I am publishing it rather than leave it languishing forever in my Google Drive (and my conscience).This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.I think among effective animal advocates who are techno-optimists, the arguments for alternatives to factory farming are too narrow in scope, eg, plant-based meat vs cultured meat, or plant-based meat vs cultured meat vs non-food investments (eg corporate campaigns), or occasional considerations of other meat alternatives like yeast or mushrooms. I think this is too narrow in scope. I instead think we should have first-principles evaluation of strategies to sate people's desire for meat while avoiding animal suffering:Carefully considering all the desired criteria of what people value in conventional meatEnumerating all of the existing strategies, including the lesser-known onesBrainstorming/exploring entirely new strategiesCarefully analyze the feasibility of each strategyI believe a significant fraction of this work can be done through armchair thinking by smart generalists, but we will eventually need empirical data, computational modeling, actual experimentation and varied domain expertise.In the post, I explore why I believe this is the correct strategy, using the motivating example of an extremely scientifically literate person in the 1800s trying to figure out flight. Birds are an existence proof that heavier-than-air flight is possible, but not a guarantee that we can get heavier-than-air flight quickly, and certainly not a proof that our existing attempts to get heavier-than-air flight is the right approach.While I am more optimistic about the general framework than any operational details, I would like to sketch out a path forwards for what a research agenda/plan might look like:Getting private feedback and carefully evaluating the quality of this initial plan, attracting funders and researchers as necessary <- We are here.Have 1-3 people work on the initial scoping of this research for up to 6 months, trying to analyze, dissect, and red-team this argument for clear empirical or conceptual issuesExecutive search for someone to lead a new org, or a team within an existing org (Rethink Priorities, New Science) to lead “New Meat Studies”Form an initial group of 3-20 tight-knit researchers that aims to generate and quickly falsify different lines of research into viable alternatives to conventional meatWhen a line of research looks like it cannot be clearly falsified with existing resources, spin off new no...]]>
Linch https://forum.effectivealtruism.org/posts/e6wHxJ8mzwhgRvyjC/draft-amnesty-early-unfinished-draft-on-the-case-for-a-first Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Draft Amnesty] Early unfinished draft on the case for a first principles, systematic scoping of meat alternatives, published by Linch on December 19, 2022 on The Effective Altruism Forum.Epistemic status: Was revealed to me in a dream.More seriously, this is a post (actually, stringing together two posts) I wrote based very loosely on research I did in late 2021. I do not necessarily stand by all the claims in the current post, but hope that it can still be moderately helpful to at least some readers. I think this post has some structural issues. It was not cleaned up sufficiently for my personal standards of publication. It is also in a more conversational style than I endorse, and I’ve grown to be less confident in the core metaphors. Earlier drafts also have critiques that I did not get around to addressing. I also expect it to be more generally false. However in the spirit of draft amnesty day, I am publishing it rather than leave it languishing forever in my Google Drive (and my conscience).This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.I think among effective animal advocates who are techno-optimists, the arguments for alternatives to factory farming are too narrow in scope, eg, plant-based meat vs cultured meat, or plant-based meat vs cultured meat vs non-food investments (eg corporate campaigns), or occasional considerations of other meat alternatives like yeast or mushrooms. I think this is too narrow in scope. I instead think we should have first-principles evaluation of strategies to sate people's desire for meat while avoiding animal suffering:Carefully considering all the desired criteria of what people value in conventional meatEnumerating all of the existing strategies, including the lesser-known onesBrainstorming/exploring entirely new strategiesCarefully analyze the feasibility of each strategyI believe a significant fraction of this work can be done through armchair thinking by smart generalists, but we will eventually need empirical data, computational modeling, actual experimentation and varied domain expertise.In the post, I explore why I believe this is the correct strategy, using the motivating example of an extremely scientifically literate person in the 1800s trying to figure out flight. Birds are an existence proof that heavier-than-air flight is possible, but not a guarantee that we can get heavier-than-air flight quickly, and certainly not a proof that our existing attempts to get heavier-than-air flight is the right approach.While I am more optimistic about the general framework than any operational details, I would like to sketch out a path forwards for what a research agenda/plan might look like:Getting private feedback and carefully evaluating the quality of this initial plan, attracting funders and researchers as necessary <- We are here.Have 1-3 people work on the initial scoping of this research for up to 6 months, trying to analyze, dissect, and red-team this argument for clear empirical or conceptual issuesExecutive search for someone to lead a new org, or a team within an existing org (Rethink Priorities, New Science) to lead “New Meat Studies”Form an initial group of 3-20 tight-knit researchers that aims to generate and quickly falsify different lines of research into viable alternatives to conventional meatWhen a line of research looks like it cannot be clearly falsified with existing resources, spin off new no...]]>
Mon, 19 Dec 2022 15:23:29 +0000 EA - [Draft Amnesty] Early unfinished draft on the case for a first principles, systematic scoping of meat alternatives by Linch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Draft Amnesty] Early unfinished draft on the case for a first principles, systematic scoping of meat alternatives, published by Linch on December 19, 2022 on The Effective Altruism Forum.Epistemic status: Was revealed to me in a dream.More seriously, this is a post (actually, stringing together two posts) I wrote based very loosely on research I did in late 2021. I do not necessarily stand by all the claims in the current post, but hope that it can still be moderately helpful to at least some readers. I think this post has some structural issues. It was not cleaned up sufficiently for my personal standards of publication. It is also in a more conversational style than I endorse, and I’ve grown to be less confident in the core metaphors. Earlier drafts also have critiques that I did not get around to addressing. I also expect it to be more generally false. However in the spirit of draft amnesty day, I am publishing it rather than leave it languishing forever in my Google Drive (and my conscience).This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.I think among effective animal advocates who are techno-optimists, the arguments for alternatives to factory farming are too narrow in scope, eg, plant-based meat vs cultured meat, or plant-based meat vs cultured meat vs non-food investments (eg corporate campaigns), or occasional considerations of other meat alternatives like yeast or mushrooms. I think this is too narrow in scope. I instead think we should have first-principles evaluation of strategies to sate people's desire for meat while avoiding animal suffering:Carefully considering all the desired criteria of what people value in conventional meatEnumerating all of the existing strategies, including the lesser-known onesBrainstorming/exploring entirely new strategiesCarefully analyze the feasibility of each strategyI believe a significant fraction of this work can be done through armchair thinking by smart generalists, but we will eventually need empirical data, computational modeling, actual experimentation and varied domain expertise.In the post, I explore why I believe this is the correct strategy, using the motivating example of an extremely scientifically literate person in the 1800s trying to figure out flight. Birds are an existence proof that heavier-than-air flight is possible, but not a guarantee that we can get heavier-than-air flight quickly, and certainly not a proof that our existing attempts to get heavier-than-air flight is the right approach.While I am more optimistic about the general framework than any operational details, I would like to sketch out a path forwards for what a research agenda/plan might look like:Getting private feedback and carefully evaluating the quality of this initial plan, attracting funders and researchers as necessary <- We are here.Have 1-3 people work on the initial scoping of this research for up to 6 months, trying to analyze, dissect, and red-team this argument for clear empirical or conceptual issuesExecutive search for someone to lead a new org, or a team within an existing org (Rethink Priorities, New Science) to lead “New Meat Studies”Form an initial group of 3-20 tight-knit researchers that aims to generate and quickly falsify different lines of research into viable alternatives to conventional meatWhen a line of research looks like it cannot be clearly falsified with existing resources, spin off new no...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Draft Amnesty] Early unfinished draft on the case for a first principles, systematic scoping of meat alternatives, published by Linch on December 19, 2022 on The Effective Altruism Forum.Epistemic status: Was revealed to me in a dream.More seriously, this is a post (actually, stringing together two posts) I wrote based very loosely on research I did in late 2021. I do not necessarily stand by all the claims in the current post, but hope that it can still be moderately helpful to at least some readers. I think this post has some structural issues. It was not cleaned up sufficiently for my personal standards of publication. It is also in a more conversational style than I endorse, and I’ve grown to be less confident in the core metaphors. Earlier drafts also have critiques that I did not get around to addressing. I also expect it to be more generally false. However in the spirit of draft amnesty day, I am publishing it rather than leave it languishing forever in my Google Drive (and my conscience).This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.I think among effective animal advocates who are techno-optimists, the arguments for alternatives to factory farming are too narrow in scope, eg, plant-based meat vs cultured meat, or plant-based meat vs cultured meat vs non-food investments (eg corporate campaigns), or occasional considerations of other meat alternatives like yeast or mushrooms. I think this is too narrow in scope. I instead think we should have first-principles evaluation of strategies to sate people's desire for meat while avoiding animal suffering:Carefully considering all the desired criteria of what people value in conventional meatEnumerating all of the existing strategies, including the lesser-known onesBrainstorming/exploring entirely new strategiesCarefully analyze the feasibility of each strategyI believe a significant fraction of this work can be done through armchair thinking by smart generalists, but we will eventually need empirical data, computational modeling, actual experimentation and varied domain expertise.In the post, I explore why I believe this is the correct strategy, using the motivating example of an extremely scientifically literate person in the 1800s trying to figure out flight. Birds are an existence proof that heavier-than-air flight is possible, but not a guarantee that we can get heavier-than-air flight quickly, and certainly not a proof that our existing attempts to get heavier-than-air flight is the right approach.While I am more optimistic about the general framework than any operational details, I would like to sketch out a path forwards for what a research agenda/plan might look like:Getting private feedback and carefully evaluating the quality of this initial plan, attracting funders and researchers as necessary <- We are here.Have 1-3 people work on the initial scoping of this research for up to 6 months, trying to analyze, dissect, and red-team this argument for clear empirical or conceptual issuesExecutive search for someone to lead a new org, or a team within an existing org (Rethink Priorities, New Science) to lead “New Meat Studies”Form an initial group of 3-20 tight-knit researchers that aims to generate and quickly falsify different lines of research into viable alternatives to conventional meatWhen a line of research looks like it cannot be clearly falsified with existing resources, spin off new no...]]>
Linch https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 29:42 None full 4179
xaMLvzbaBMFjX88Z2_EA EA - CEA Disambiguation by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA Disambiguation, published by Jeff Kaufman on December 19, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/xaMLvzbaBMFjX88Z2/cea-disambiguation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA Disambiguation, published by Jeff Kaufman on December 19, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 19 Dec 2022 14:02:42 +0000 EA - CEA Disambiguation by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA Disambiguation, published by Jeff Kaufman on December 19, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA Disambiguation, published by Jeff Kaufman on December 19, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:25 None full 4178
6TtH9NDbkDxK4zBDL_EA EA - How my thinking about doing good changed over the years by Quadratic Reciprocity Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How my thinking about doing good changed over the years, published by Quadratic Reciprocity on December 18, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the post is also appreciated.I first came across effective altruism as a teenager a few years ago, and the core idea instantly clicked for me after reading one post about it. In this post, I will talk about some ways in which my thinking around doing good has evolved over the years as a young person with a strong interest in making the world better.The emotions I feel when thinking about others’ suffering are less intense. I don’t know if teenage-me would have predicted this. As a child, I remember crying a lot when watching videos on animal suffering, when I first confronted the idea of infinite hell I was depressed for an entire summer, I wanted to give all the money I received on my birthday to people who were less fortunate because I knew they needed it more.I think the change is partly from just getting used to it. The first time you confront the horrors of factory farming it is awful but by the hundredth time, it’s hard for my brain to naturally feel the same powerful emotions of sadness and anger. Partly, the change is from starting to believe that it isn’t actually that virtuous to feel strong emotions at others’ suffering. Some of that is from having been in the effective altruism community, where it is easy to feel that what matters are the results of what you do and not the emotions behind what you do. I still feel strong emotions of empathy for those who are suffering some of the time when I am feeling particularly introspective and emotional.However, and this is because of being in the effective altruism community, I am much more aware of my own ranking of what the biggest problems are and it is harder for me to direct a lot of empathy towards causes that feel less “big” compared to factory farming, extreme poverty, and existential risk - even though, in absolute terms, the suffering of people living in terrible conditions in rich countries is still massive.At the same time, my ability to live according to my values has increased. I haven’t eaten meat in a couple of years whereas as a child and young teenager, this was really difficult for me to do even though I really wanted to be vegetarian. I have more tools now to do what I think is right, and the biggest of them all is having a social community where there are others who take their beliefs seriously and try to do good.I am much less willing to try to hack my brain in order to force myself to do and feel things I endorse. I used to be much more ashamed of some of my feelings and actions and felt a strong desire to figure out how to trick my brain into being more willing to sacrifice myself for others, into working all the time and being more ambitious. This involved doing things adjacent to self-deception. This was a really bad idea and caused me lots of pain and frustration.Instead, the thing that worked for me is acknowledging that I have “selfish” desires, that sometimes I take actions that actively hurt others, and that I have things that I deeply care about besides just maximising the good. Having a better picture of myself and what I actually value allowed me to work with the “altruist” and “selfish” sides of me to do things like be able to enjoy spending money and time on things that make me happy without feeling guilty and then ...]]>
Quadratic Reciprocity https://forum.effectivealtruism.org/posts/6TtH9NDbkDxK4zBDL/how-my-thinking-about-doing-good-changed-over-the-years Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How my thinking about doing good changed over the years, published by Quadratic Reciprocity on December 18, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the post is also appreciated.I first came across effective altruism as a teenager a few years ago, and the core idea instantly clicked for me after reading one post about it. In this post, I will talk about some ways in which my thinking around doing good has evolved over the years as a young person with a strong interest in making the world better.The emotions I feel when thinking about others’ suffering are less intense. I don’t know if teenage-me would have predicted this. As a child, I remember crying a lot when watching videos on animal suffering, when I first confronted the idea of infinite hell I was depressed for an entire summer, I wanted to give all the money I received on my birthday to people who were less fortunate because I knew they needed it more.I think the change is partly from just getting used to it. The first time you confront the horrors of factory farming it is awful but by the hundredth time, it’s hard for my brain to naturally feel the same powerful emotions of sadness and anger. Partly, the change is from starting to believe that it isn’t actually that virtuous to feel strong emotions at others’ suffering. Some of that is from having been in the effective altruism community, where it is easy to feel that what matters are the results of what you do and not the emotions behind what you do. I still feel strong emotions of empathy for those who are suffering some of the time when I am feeling particularly introspective and emotional.However, and this is because of being in the effective altruism community, I am much more aware of my own ranking of what the biggest problems are and it is harder for me to direct a lot of empathy towards causes that feel less “big” compared to factory farming, extreme poverty, and existential risk - even though, in absolute terms, the suffering of people living in terrible conditions in rich countries is still massive.At the same time, my ability to live according to my values has increased. I haven’t eaten meat in a couple of years whereas as a child and young teenager, this was really difficult for me to do even though I really wanted to be vegetarian. I have more tools now to do what I think is right, and the biggest of them all is having a social community where there are others who take their beliefs seriously and try to do good.I am much less willing to try to hack my brain in order to force myself to do and feel things I endorse. I used to be much more ashamed of some of my feelings and actions and felt a strong desire to figure out how to trick my brain into being more willing to sacrifice myself for others, into working all the time and being more ambitious. This involved doing things adjacent to self-deception. This was a really bad idea and caused me lots of pain and frustration.Instead, the thing that worked for me is acknowledging that I have “selfish” desires, that sometimes I take actions that actively hurt others, and that I have things that I deeply care about besides just maximising the good. Having a better picture of myself and what I actually value allowed me to work with the “altruist” and “selfish” sides of me to do things like be able to enjoy spending money and time on things that make me happy without feeling guilty and then ...]]>
Sun, 18 Dec 2022 23:50:07 +0000 EA - How my thinking about doing good changed over the years by Quadratic Reciprocity Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How my thinking about doing good changed over the years, published by Quadratic Reciprocity on December 18, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the post is also appreciated.I first came across effective altruism as a teenager a few years ago, and the core idea instantly clicked for me after reading one post about it. In this post, I will talk about some ways in which my thinking around doing good has evolved over the years as a young person with a strong interest in making the world better.The emotions I feel when thinking about others’ suffering are less intense. I don’t know if teenage-me would have predicted this. As a child, I remember crying a lot when watching videos on animal suffering, when I first confronted the idea of infinite hell I was depressed for an entire summer, I wanted to give all the money I received on my birthday to people who were less fortunate because I knew they needed it more.I think the change is partly from just getting used to it. The first time you confront the horrors of factory farming it is awful but by the hundredth time, it’s hard for my brain to naturally feel the same powerful emotions of sadness and anger. Partly, the change is from starting to believe that it isn’t actually that virtuous to feel strong emotions at others’ suffering. Some of that is from having been in the effective altruism community, where it is easy to feel that what matters are the results of what you do and not the emotions behind what you do. I still feel strong emotions of empathy for those who are suffering some of the time when I am feeling particularly introspective and emotional.However, and this is because of being in the effective altruism community, I am much more aware of my own ranking of what the biggest problems are and it is harder for me to direct a lot of empathy towards causes that feel less “big” compared to factory farming, extreme poverty, and existential risk - even though, in absolute terms, the suffering of people living in terrible conditions in rich countries is still massive.At the same time, my ability to live according to my values has increased. I haven’t eaten meat in a couple of years whereas as a child and young teenager, this was really difficult for me to do even though I really wanted to be vegetarian. I have more tools now to do what I think is right, and the biggest of them all is having a social community where there are others who take their beliefs seriously and try to do good.I am much less willing to try to hack my brain in order to force myself to do and feel things I endorse. I used to be much more ashamed of some of my feelings and actions and felt a strong desire to figure out how to trick my brain into being more willing to sacrifice myself for others, into working all the time and being more ambitious. This involved doing things adjacent to self-deception. This was a really bad idea and caused me lots of pain and frustration.Instead, the thing that worked for me is acknowledging that I have “selfish” desires, that sometimes I take actions that actively hurt others, and that I have things that I deeply care about besides just maximising the good. Having a better picture of myself and what I actually value allowed me to work with the “altruist” and “selfish” sides of me to do things like be able to enjoy spending money and time on things that make me happy without feeling guilty and then ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How my thinking about doing good changed over the years, published by Quadratic Reciprocity on December 18, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the post is also appreciated.I first came across effective altruism as a teenager a few years ago, and the core idea instantly clicked for me after reading one post about it. In this post, I will talk about some ways in which my thinking around doing good has evolved over the years as a young person with a strong interest in making the world better.The emotions I feel when thinking about others’ suffering are less intense. I don’t know if teenage-me would have predicted this. As a child, I remember crying a lot when watching videos on animal suffering, when I first confronted the idea of infinite hell I was depressed for an entire summer, I wanted to give all the money I received on my birthday to people who were less fortunate because I knew they needed it more.I think the change is partly from just getting used to it. The first time you confront the horrors of factory farming it is awful but by the hundredth time, it’s hard for my brain to naturally feel the same powerful emotions of sadness and anger. Partly, the change is from starting to believe that it isn’t actually that virtuous to feel strong emotions at others’ suffering. Some of that is from having been in the effective altruism community, where it is easy to feel that what matters are the results of what you do and not the emotions behind what you do. I still feel strong emotions of empathy for those who are suffering some of the time when I am feeling particularly introspective and emotional.However, and this is because of being in the effective altruism community, I am much more aware of my own ranking of what the biggest problems are and it is harder for me to direct a lot of empathy towards causes that feel less “big” compared to factory farming, extreme poverty, and existential risk - even though, in absolute terms, the suffering of people living in terrible conditions in rich countries is still massive.At the same time, my ability to live according to my values has increased. I haven’t eaten meat in a couple of years whereas as a child and young teenager, this was really difficult for me to do even though I really wanted to be vegetarian. I have more tools now to do what I think is right, and the biggest of them all is having a social community where there are others who take their beliefs seriously and try to do good.I am much less willing to try to hack my brain in order to force myself to do and feel things I endorse. I used to be much more ashamed of some of my feelings and actions and felt a strong desire to figure out how to trick my brain into being more willing to sacrifice myself for others, into working all the time and being more ambitious. This involved doing things adjacent to self-deception. This was a really bad idea and caused me lots of pain and frustration.Instead, the thing that worked for me is acknowledging that I have “selfish” desires, that sometimes I take actions that actively hurt others, and that I have things that I deeply care about besides just maximising the good. Having a better picture of myself and what I actually value allowed me to work with the “altruist” and “selfish” sides of me to do things like be able to enjoy spending money and time on things that make me happy without feeling guilty and then ...]]>
Quadratic Reciprocity https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:42 None full 4172
oF9nu74Hs8NtTMgBS_EA EA - 80,000 Hours wants to see more people trying out recruiting by Niel Bowerman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours wants to see more people trying out recruiting, published by Niel Bowerman on December 18, 2022 on The Effective Altruism Forum.Tl;drI'm excited to see more EAs try out recruiting for EA projects. 80k doesn't "have it covered", and we are keen to see more people and organisations test their fit with this kind of work.Recruiting has low "barriers to entry": you need to find a hiring manager who wants your help, and you need to have the time available to help them.Work hard to understand what the hiring manager is looking for.You don't need a mountain of centralised data in a single CRM to get started. You can start with referrals, spreadsheets, emails and conversations.AimsThe aim of this article is to:Encourage more people to test their fit with recruiting, and reduce the extent to which 80,000 Hours is inadvertently "crowding out" other organisations and people from working in the recruiting space.Communicate some tentative "lessons learned" from our time recruiting in 2019-2020.Historical contextIn 2019 and 2020 I worked on a team with Peter McIntyre at 80,000 Hours, helping organisations working to solve some of the world's most pressing problems to identify candidates to hire. Below are some reflections and lessons learned from that period. At the start of 2021 we put on hold much of our efforts to help organisations hire in order to focus on increasing 80,000 Hours' capacity to have conversations with people who applied to speak with us. (We are hoping to increase our capacity on recruiting again in 2023.)We did ~534 calls with potential candidates in 2019 and 2020. We generated lists of names for roughly 174 roles. We spent just a few minutes on some of those lists, and up to a month on others. We think something like 23 people from the lists we generated were later hired by organisations, though we make no claims about counterfactual impact in this article.I've written this article for people who are considering trying out recruiting, but haven't done much recruiting before. I've put the article together pretty quickly, and expect to update parts of it in response to questions and comments, so please do fire away in the comments section. I've posted this on the forum as part of the draft amnesty days.Lots of hiring managers want help hiringThere are a lot of hiring managers out there who are struggling to find great candidates to hire.Sometimes that's because the person they want doesn't exist, but sometimes it's because the hiring manager doesn't have the time to go find the right person.If the problem is that the hiring manager doesn't have the time to find the right person, then you may be able to help if you're willing to put in the time.Focus on building great models of what hiring managers want in a candidateHaving buy-in from the hiring manager is crucialThe best way to build up a model of what the hiring manager is looking for is to be able to ask them a lot of questions.Understand how they would trade off between the various attributes they want in the ideal candidate.If you aren't able to regularly email/message/speak with the hiring manager during the selection process, it will be a bunch harder for you to find a candidate that will ultimately get hired.We tried doing recruiting for a few roles in large organisations where we knew someone on the team but not the hiring manager. We generally had a much lower placement rate in these cases.Give a great experience to the hiring managerIn order to get access to the hiring manager, it's usually valuable to be as helpful as you can initially. Write out detailed but concise descriptions of the candidates you're suggesting, put them into tiers, add LinkedIn links, and generally make the process easy for the hiring manager. Once you have more experience you can start cutt...]]>
Niel Bowerman https://forum.effectivealtruism.org/posts/oF9nu74Hs8NtTMgBS/80-000-hours-wants-to-see-more-people-trying-out-recruiting Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours wants to see more people trying out recruiting, published by Niel Bowerman on December 18, 2022 on The Effective Altruism Forum.Tl;drI'm excited to see more EAs try out recruiting for EA projects. 80k doesn't "have it covered", and we are keen to see more people and organisations test their fit with this kind of work.Recruiting has low "barriers to entry": you need to find a hiring manager who wants your help, and you need to have the time available to help them.Work hard to understand what the hiring manager is looking for.You don't need a mountain of centralised data in a single CRM to get started. You can start with referrals, spreadsheets, emails and conversations.AimsThe aim of this article is to:Encourage more people to test their fit with recruiting, and reduce the extent to which 80,000 Hours is inadvertently "crowding out" other organisations and people from working in the recruiting space.Communicate some tentative "lessons learned" from our time recruiting in 2019-2020.Historical contextIn 2019 and 2020 I worked on a team with Peter McIntyre at 80,000 Hours, helping organisations working to solve some of the world's most pressing problems to identify candidates to hire. Below are some reflections and lessons learned from that period. At the start of 2021 we put on hold much of our efforts to help organisations hire in order to focus on increasing 80,000 Hours' capacity to have conversations with people who applied to speak with us. (We are hoping to increase our capacity on recruiting again in 2023.)We did ~534 calls with potential candidates in 2019 and 2020. We generated lists of names for roughly 174 roles. We spent just a few minutes on some of those lists, and up to a month on others. We think something like 23 people from the lists we generated were later hired by organisations, though we make no claims about counterfactual impact in this article.I've written this article for people who are considering trying out recruiting, but haven't done much recruiting before. I've put the article together pretty quickly, and expect to update parts of it in response to questions and comments, so please do fire away in the comments section. I've posted this on the forum as part of the draft amnesty days.Lots of hiring managers want help hiringThere are a lot of hiring managers out there who are struggling to find great candidates to hire.Sometimes that's because the person they want doesn't exist, but sometimes it's because the hiring manager doesn't have the time to go find the right person.If the problem is that the hiring manager doesn't have the time to find the right person, then you may be able to help if you're willing to put in the time.Focus on building great models of what hiring managers want in a candidateHaving buy-in from the hiring manager is crucialThe best way to build up a model of what the hiring manager is looking for is to be able to ask them a lot of questions.Understand how they would trade off between the various attributes they want in the ideal candidate.If you aren't able to regularly email/message/speak with the hiring manager during the selection process, it will be a bunch harder for you to find a candidate that will ultimately get hired.We tried doing recruiting for a few roles in large organisations where we knew someone on the team but not the hiring manager. We generally had a much lower placement rate in these cases.Give a great experience to the hiring managerIn order to get access to the hiring manager, it's usually valuable to be as helpful as you can initially. Write out detailed but concise descriptions of the candidates you're suggesting, put them into tiers, add LinkedIn links, and generally make the process easy for the hiring manager. Once you have more experience you can start cutt...]]>
Sun, 18 Dec 2022 23:35:37 +0000 EA - 80,000 Hours wants to see more people trying out recruiting by Niel Bowerman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours wants to see more people trying out recruiting, published by Niel Bowerman on December 18, 2022 on The Effective Altruism Forum.Tl;drI'm excited to see more EAs try out recruiting for EA projects. 80k doesn't "have it covered", and we are keen to see more people and organisations test their fit with this kind of work.Recruiting has low "barriers to entry": you need to find a hiring manager who wants your help, and you need to have the time available to help them.Work hard to understand what the hiring manager is looking for.You don't need a mountain of centralised data in a single CRM to get started. You can start with referrals, spreadsheets, emails and conversations.AimsThe aim of this article is to:Encourage more people to test their fit with recruiting, and reduce the extent to which 80,000 Hours is inadvertently "crowding out" other organisations and people from working in the recruiting space.Communicate some tentative "lessons learned" from our time recruiting in 2019-2020.Historical contextIn 2019 and 2020 I worked on a team with Peter McIntyre at 80,000 Hours, helping organisations working to solve some of the world's most pressing problems to identify candidates to hire. Below are some reflections and lessons learned from that period. At the start of 2021 we put on hold much of our efforts to help organisations hire in order to focus on increasing 80,000 Hours' capacity to have conversations with people who applied to speak with us. (We are hoping to increase our capacity on recruiting again in 2023.)We did ~534 calls with potential candidates in 2019 and 2020. We generated lists of names for roughly 174 roles. We spent just a few minutes on some of those lists, and up to a month on others. We think something like 23 people from the lists we generated were later hired by organisations, though we make no claims about counterfactual impact in this article.I've written this article for people who are considering trying out recruiting, but haven't done much recruiting before. I've put the article together pretty quickly, and expect to update parts of it in response to questions and comments, so please do fire away in the comments section. I've posted this on the forum as part of the draft amnesty days.Lots of hiring managers want help hiringThere are a lot of hiring managers out there who are struggling to find great candidates to hire.Sometimes that's because the person they want doesn't exist, but sometimes it's because the hiring manager doesn't have the time to go find the right person.If the problem is that the hiring manager doesn't have the time to find the right person, then you may be able to help if you're willing to put in the time.Focus on building great models of what hiring managers want in a candidateHaving buy-in from the hiring manager is crucialThe best way to build up a model of what the hiring manager is looking for is to be able to ask them a lot of questions.Understand how they would trade off between the various attributes they want in the ideal candidate.If you aren't able to regularly email/message/speak with the hiring manager during the selection process, it will be a bunch harder for you to find a candidate that will ultimately get hired.We tried doing recruiting for a few roles in large organisations where we knew someone on the team but not the hiring manager. We generally had a much lower placement rate in these cases.Give a great experience to the hiring managerIn order to get access to the hiring manager, it's usually valuable to be as helpful as you can initially. Write out detailed but concise descriptions of the candidates you're suggesting, put them into tiers, add LinkedIn links, and generally make the process easy for the hiring manager. Once you have more experience you can start cutt...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours wants to see more people trying out recruiting, published by Niel Bowerman on December 18, 2022 on The Effective Altruism Forum.Tl;drI'm excited to see more EAs try out recruiting for EA projects. 80k doesn't "have it covered", and we are keen to see more people and organisations test their fit with this kind of work.Recruiting has low "barriers to entry": you need to find a hiring manager who wants your help, and you need to have the time available to help them.Work hard to understand what the hiring manager is looking for.You don't need a mountain of centralised data in a single CRM to get started. You can start with referrals, spreadsheets, emails and conversations.AimsThe aim of this article is to:Encourage more people to test their fit with recruiting, and reduce the extent to which 80,000 Hours is inadvertently "crowding out" other organisations and people from working in the recruiting space.Communicate some tentative "lessons learned" from our time recruiting in 2019-2020.Historical contextIn 2019 and 2020 I worked on a team with Peter McIntyre at 80,000 Hours, helping organisations working to solve some of the world's most pressing problems to identify candidates to hire. Below are some reflections and lessons learned from that period. At the start of 2021 we put on hold much of our efforts to help organisations hire in order to focus on increasing 80,000 Hours' capacity to have conversations with people who applied to speak with us. (We are hoping to increase our capacity on recruiting again in 2023.)We did ~534 calls with potential candidates in 2019 and 2020. We generated lists of names for roughly 174 roles. We spent just a few minutes on some of those lists, and up to a month on others. We think something like 23 people from the lists we generated were later hired by organisations, though we make no claims about counterfactual impact in this article.I've written this article for people who are considering trying out recruiting, but haven't done much recruiting before. I've put the article together pretty quickly, and expect to update parts of it in response to questions and comments, so please do fire away in the comments section. I've posted this on the forum as part of the draft amnesty days.Lots of hiring managers want help hiringThere are a lot of hiring managers out there who are struggling to find great candidates to hire.Sometimes that's because the person they want doesn't exist, but sometimes it's because the hiring manager doesn't have the time to go find the right person.If the problem is that the hiring manager doesn't have the time to find the right person, then you may be able to help if you're willing to put in the time.Focus on building great models of what hiring managers want in a candidateHaving buy-in from the hiring manager is crucialThe best way to build up a model of what the hiring manager is looking for is to be able to ask them a lot of questions.Understand how they would trade off between the various attributes they want in the ideal candidate.If you aren't able to regularly email/message/speak with the hiring manager during the selection process, it will be a bunch harder for you to find a candidate that will ultimately get hired.We tried doing recruiting for a few roles in large organisations where we knew someone on the team but not the hiring manager. We generally had a much lower placement rate in these cases.Give a great experience to the hiring managerIn order to get access to the hiring manager, it's usually valuable to be as helpful as you can initially. Write out detailed but concise descriptions of the candidates you're suggesting, put them into tiers, add LinkedIn links, and generally make the process easy for the hiring manager. Once you have more experience you can start cutt...]]>
Niel Bowerman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:52 None full 4171
5XKAsEBMuxiycTHL7_EA EA - Working with the Beef Industry for Chicken Welfare by RobertY Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Working with the Beef Industry for Chicken Welfare, published by RobertY on December 18, 2022 on The Effective Altruism Forum.Historically, the US farmed animal welfare movement has seen itself as working in opposition to the entire animal agriculture industry. In doing so, we may have taken on a larger and more powerful enemy than we need to. I’ll explore an alternative approach of working with parts of the meat industry to mutually push for strong welfare protections in other parts, focusing on chicken and beef.Executive SummarySmall animals account for a vast majority of the suffering in animal agriculture. From a cost-effectiveness standpoint, this has rightly caused the animal advocacy movement to more recently focus on chicken, fish, and invertebrates. Taking this one step further, if most of our focus is only on certain parts of animal agriculture, we may not need to see ourselves in opposition to the entire industry.In particular, I argue that the US beef industry could be a critical ally in the fight for farmed animal welfare. It may seem unlikely that the beef industry would be interested in partnering with animal welfare advocates, but a deeper analysis of the structure of the industry indicates that incentives may be more aligned than they first seem:Chickens don’t have the same welfare protections as cows, which is unfair to beef producers.Chicken is substantially cheaper than beef, partially because producers don’t have to treat chickens as well as cows. Chicken and beef compete for space on the plate, so if the price of chicken goes up, this will likely increase demand for beef.The beef industry is large and complex, with multiple different players whose incentives are not always aligned. The three main pillars of the beef sector are ranchers, feedlot operators, and the big meat packers. The meat packers have interests across chicken, pork, and beef, so will likely oppose animal welfare improvements in any area. However, ranchers and feedlot operators are generally only invested in beef, meaning that welfare improvements in chicken wouldn’t directly affect them.Additionally, I believe that cattle ranchers, who are generally small business owners, often do care about animal welfare. If framed in the right way, they could be inherently interested in seeing farmed animals treated better.Partnering with the beef industry could be useful in building broader coalitions than the animal welfare movement historically has been able to. Additionally, if the goal is to eventually pass stronger federal legislation for poultry welfare, having the beef industry on board could be a critical necessary step.OutlineI) Small animals should dominate farmed animal welfare discussionsII) The incentives of the beef industry are structurally aligned with the animal welfare movementExisting federal laws are unfair to beef producersIncreasing the price of chicken will benefit beef producersBackground on the structure of the beef and chicken industries in the USThe incentives of cattle ranchersThe incentives of feedlot operatorsHow to find common ground with the beef industryIII) How the beef industry might be helpful in the fight for animal welfareBeef industry support may open up the path to federal poultry welfare legislationIV) CounterargumentsHistorical OppositionSustainabilityAbolitionismV) Why I’m focusing on chicken and beefVI) Call to ActionI. Small animals should dominate farmed animal welfare discussionsOne of the most important recent theoretical developments in farmed animal advocacy has been a focus on small animals, mainly chicken, fish, and invertebrates. This was very influenced by EA-style thinking: chickens vastly outnumber other land animals because they’re smaller (In 2022, poultry accounted for 98% of animals slaughtered for the U...]]>
RobertY https://forum.effectivealtruism.org/posts/5XKAsEBMuxiycTHL7/working-with-the-beef-industry-for-chicken-welfare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Working with the Beef Industry for Chicken Welfare, published by RobertY on December 18, 2022 on The Effective Altruism Forum.Historically, the US farmed animal welfare movement has seen itself as working in opposition to the entire animal agriculture industry. In doing so, we may have taken on a larger and more powerful enemy than we need to. I’ll explore an alternative approach of working with parts of the meat industry to mutually push for strong welfare protections in other parts, focusing on chicken and beef.Executive SummarySmall animals account for a vast majority of the suffering in animal agriculture. From a cost-effectiveness standpoint, this has rightly caused the animal advocacy movement to more recently focus on chicken, fish, and invertebrates. Taking this one step further, if most of our focus is only on certain parts of animal agriculture, we may not need to see ourselves in opposition to the entire industry.In particular, I argue that the US beef industry could be a critical ally in the fight for farmed animal welfare. It may seem unlikely that the beef industry would be interested in partnering with animal welfare advocates, but a deeper analysis of the structure of the industry indicates that incentives may be more aligned than they first seem:Chickens don’t have the same welfare protections as cows, which is unfair to beef producers.Chicken is substantially cheaper than beef, partially because producers don’t have to treat chickens as well as cows. Chicken and beef compete for space on the plate, so if the price of chicken goes up, this will likely increase demand for beef.The beef industry is large and complex, with multiple different players whose incentives are not always aligned. The three main pillars of the beef sector are ranchers, feedlot operators, and the big meat packers. The meat packers have interests across chicken, pork, and beef, so will likely oppose animal welfare improvements in any area. However, ranchers and feedlot operators are generally only invested in beef, meaning that welfare improvements in chicken wouldn’t directly affect them.Additionally, I believe that cattle ranchers, who are generally small business owners, often do care about animal welfare. If framed in the right way, they could be inherently interested in seeing farmed animals treated better.Partnering with the beef industry could be useful in building broader coalitions than the animal welfare movement historically has been able to. Additionally, if the goal is to eventually pass stronger federal legislation for poultry welfare, having the beef industry on board could be a critical necessary step.OutlineI) Small animals should dominate farmed animal welfare discussionsII) The incentives of the beef industry are structurally aligned with the animal welfare movementExisting federal laws are unfair to beef producersIncreasing the price of chicken will benefit beef producersBackground on the structure of the beef and chicken industries in the USThe incentives of cattle ranchersThe incentives of feedlot operatorsHow to find common ground with the beef industryIII) How the beef industry might be helpful in the fight for animal welfareBeef industry support may open up the path to federal poultry welfare legislationIV) CounterargumentsHistorical OppositionSustainabilityAbolitionismV) Why I’m focusing on chicken and beefVI) Call to ActionI. Small animals should dominate farmed animal welfare discussionsOne of the most important recent theoretical developments in farmed animal advocacy has been a focus on small animals, mainly chicken, fish, and invertebrates. This was very influenced by EA-style thinking: chickens vastly outnumber other land animals because they’re smaller (In 2022, poultry accounted for 98% of animals slaughtered for the U...]]>
Sun, 18 Dec 2022 22:46:33 +0000 EA - Working with the Beef Industry for Chicken Welfare by RobertY Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Working with the Beef Industry for Chicken Welfare, published by RobertY on December 18, 2022 on The Effective Altruism Forum.Historically, the US farmed animal welfare movement has seen itself as working in opposition to the entire animal agriculture industry. In doing so, we may have taken on a larger and more powerful enemy than we need to. I’ll explore an alternative approach of working with parts of the meat industry to mutually push for strong welfare protections in other parts, focusing on chicken and beef.Executive SummarySmall animals account for a vast majority of the suffering in animal agriculture. From a cost-effectiveness standpoint, this has rightly caused the animal advocacy movement to more recently focus on chicken, fish, and invertebrates. Taking this one step further, if most of our focus is only on certain parts of animal agriculture, we may not need to see ourselves in opposition to the entire industry.In particular, I argue that the US beef industry could be a critical ally in the fight for farmed animal welfare. It may seem unlikely that the beef industry would be interested in partnering with animal welfare advocates, but a deeper analysis of the structure of the industry indicates that incentives may be more aligned than they first seem:Chickens don’t have the same welfare protections as cows, which is unfair to beef producers.Chicken is substantially cheaper than beef, partially because producers don’t have to treat chickens as well as cows. Chicken and beef compete for space on the plate, so if the price of chicken goes up, this will likely increase demand for beef.The beef industry is large and complex, with multiple different players whose incentives are not always aligned. The three main pillars of the beef sector are ranchers, feedlot operators, and the big meat packers. The meat packers have interests across chicken, pork, and beef, so will likely oppose animal welfare improvements in any area. However, ranchers and feedlot operators are generally only invested in beef, meaning that welfare improvements in chicken wouldn’t directly affect them.Additionally, I believe that cattle ranchers, who are generally small business owners, often do care about animal welfare. If framed in the right way, they could be inherently interested in seeing farmed animals treated better.Partnering with the beef industry could be useful in building broader coalitions than the animal welfare movement historically has been able to. Additionally, if the goal is to eventually pass stronger federal legislation for poultry welfare, having the beef industry on board could be a critical necessary step.OutlineI) Small animals should dominate farmed animal welfare discussionsII) The incentives of the beef industry are structurally aligned with the animal welfare movementExisting federal laws are unfair to beef producersIncreasing the price of chicken will benefit beef producersBackground on the structure of the beef and chicken industries in the USThe incentives of cattle ranchersThe incentives of feedlot operatorsHow to find common ground with the beef industryIII) How the beef industry might be helpful in the fight for animal welfareBeef industry support may open up the path to federal poultry welfare legislationIV) CounterargumentsHistorical OppositionSustainabilityAbolitionismV) Why I’m focusing on chicken and beefVI) Call to ActionI. Small animals should dominate farmed animal welfare discussionsOne of the most important recent theoretical developments in farmed animal advocacy has been a focus on small animals, mainly chicken, fish, and invertebrates. This was very influenced by EA-style thinking: chickens vastly outnumber other land animals because they’re smaller (In 2022, poultry accounted for 98% of animals slaughtered for the U...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Working with the Beef Industry for Chicken Welfare, published by RobertY on December 18, 2022 on The Effective Altruism Forum.Historically, the US farmed animal welfare movement has seen itself as working in opposition to the entire animal agriculture industry. In doing so, we may have taken on a larger and more powerful enemy than we need to. I’ll explore an alternative approach of working with parts of the meat industry to mutually push for strong welfare protections in other parts, focusing on chicken and beef.Executive SummarySmall animals account for a vast majority of the suffering in animal agriculture. From a cost-effectiveness standpoint, this has rightly caused the animal advocacy movement to more recently focus on chicken, fish, and invertebrates. Taking this one step further, if most of our focus is only on certain parts of animal agriculture, we may not need to see ourselves in opposition to the entire industry.In particular, I argue that the US beef industry could be a critical ally in the fight for farmed animal welfare. It may seem unlikely that the beef industry would be interested in partnering with animal welfare advocates, but a deeper analysis of the structure of the industry indicates that incentives may be more aligned than they first seem:Chickens don’t have the same welfare protections as cows, which is unfair to beef producers.Chicken is substantially cheaper than beef, partially because producers don’t have to treat chickens as well as cows. Chicken and beef compete for space on the plate, so if the price of chicken goes up, this will likely increase demand for beef.The beef industry is large and complex, with multiple different players whose incentives are not always aligned. The three main pillars of the beef sector are ranchers, feedlot operators, and the big meat packers. The meat packers have interests across chicken, pork, and beef, so will likely oppose animal welfare improvements in any area. However, ranchers and feedlot operators are generally only invested in beef, meaning that welfare improvements in chicken wouldn’t directly affect them.Additionally, I believe that cattle ranchers, who are generally small business owners, often do care about animal welfare. If framed in the right way, they could be inherently interested in seeing farmed animals treated better.Partnering with the beef industry could be useful in building broader coalitions than the animal welfare movement historically has been able to. Additionally, if the goal is to eventually pass stronger federal legislation for poultry welfare, having the beef industry on board could be a critical necessary step.OutlineI) Small animals should dominate farmed animal welfare discussionsII) The incentives of the beef industry are structurally aligned with the animal welfare movementExisting federal laws are unfair to beef producersIncreasing the price of chicken will benefit beef producersBackground on the structure of the beef and chicken industries in the USThe incentives of cattle ranchersThe incentives of feedlot operatorsHow to find common ground with the beef industryIII) How the beef industry might be helpful in the fight for animal welfareBeef industry support may open up the path to federal poultry welfare legislationIV) CounterargumentsHistorical OppositionSustainabilityAbolitionismV) Why I’m focusing on chicken and beefVI) Call to ActionI. Small animals should dominate farmed animal welfare discussionsOne of the most important recent theoretical developments in farmed animal advocacy has been a focus on small animals, mainly chicken, fish, and invertebrates. This was very influenced by EA-style thinking: chickens vastly outnumber other land animals because they’re smaller (In 2022, poultry accounted for 98% of animals slaughtered for the U...]]>
RobertY https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 26:26 None full 4165
4JF39v548SETuMewp_EA EA - Update on spending for CEA-run events by Eli Nathan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on spending for CEA-run events, published by Eli Nathan on December 18, 2022 on The Effective Altruism Forum.TL;DR: Spending on events run and supported by CEA (including EA Global and EAGx conferences) will likely be reduced due to a decrease in available funding. This might influence travel grants, catering, volunteering, ticketing, and non-critical conference expenses.The CEA events team is responsible for numerous events in the EA community, including EA Global, EAGx, and various retreat programs. We (the CEA events team) expect to reduce spending on events we run in the coming year due to:The FTX situationThe reduction in funds available to Open Philanthropy (partially due to a general stock market decline)The growth of the EA community — meaning that grantmakers now have more alternative funding opportunities. i.e., we’re no longer one of the very few things available for them to fund (this is a good thing!)At this stage, we’re still navigating the new funding landscape and we aren’t sure what this means going forwards, but some potential consequences include:Travel grant funding will likely be more restrictive. Previously we’ve funded people to travel to any EA conference they’ve been accepted to. We expect to retain some amount of travel funding moving forwards, but we’ll likely have to be much more conservative about how much we give and who we give it to. When planning around an event, we’d recommend you act under the assumption that we will not be able to grant your travel funding request (unless it has already been approved).Catering will likely be cut down. We’ll likely have to stop providing all three of breakfast, lunch, and dinner on each day for our conferences — we still expect to have some food or snacks available, but it’s currently unclear exactly what we’ll be able to provide.We might go back to a volunteer model for people working at EA Global (we trialed paying “volunteers” at the last two EA Globals).We might introduce a variable pricing ticketing system where we ask people with higher incomes to pay more for their tickets (we expect to still have free and reduced cost tickets available for students and those on lower incomes).We might need to limit capacity at certain events (whereas previously we always accepted people if they were above a certain bar).If you have any questions or concerns, you can email us at hello@eaglobal.org or comment below (though we may not be able to respond to all comments).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Eli Nathan https://forum.effectivealtruism.org/posts/4JF39v548SETuMewp/update-on-spending-for-cea-run-events Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on spending for CEA-run events, published by Eli Nathan on December 18, 2022 on The Effective Altruism Forum.TL;DR: Spending on events run and supported by CEA (including EA Global and EAGx conferences) will likely be reduced due to a decrease in available funding. This might influence travel grants, catering, volunteering, ticketing, and non-critical conference expenses.The CEA events team is responsible for numerous events in the EA community, including EA Global, EAGx, and various retreat programs. We (the CEA events team) expect to reduce spending on events we run in the coming year due to:The FTX situationThe reduction in funds available to Open Philanthropy (partially due to a general stock market decline)The growth of the EA community — meaning that grantmakers now have more alternative funding opportunities. i.e., we’re no longer one of the very few things available for them to fund (this is a good thing!)At this stage, we’re still navigating the new funding landscape and we aren’t sure what this means going forwards, but some potential consequences include:Travel grant funding will likely be more restrictive. Previously we’ve funded people to travel to any EA conference they’ve been accepted to. We expect to retain some amount of travel funding moving forwards, but we’ll likely have to be much more conservative about how much we give and who we give it to. When planning around an event, we’d recommend you act under the assumption that we will not be able to grant your travel funding request (unless it has already been approved).Catering will likely be cut down. We’ll likely have to stop providing all three of breakfast, lunch, and dinner on each day for our conferences — we still expect to have some food or snacks available, but it’s currently unclear exactly what we’ll be able to provide.We might go back to a volunteer model for people working at EA Global (we trialed paying “volunteers” at the last two EA Globals).We might introduce a variable pricing ticketing system where we ask people with higher incomes to pay more for their tickets (we expect to still have free and reduced cost tickets available for students and those on lower incomes).We might need to limit capacity at certain events (whereas previously we always accepted people if they were above a certain bar).If you have any questions or concerns, you can email us at hello@eaglobal.org or comment below (though we may not be able to respond to all comments).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 18 Dec 2022 19:53:27 +0000 EA - Update on spending for CEA-run events by Eli Nathan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on spending for CEA-run events, published by Eli Nathan on December 18, 2022 on The Effective Altruism Forum.TL;DR: Spending on events run and supported by CEA (including EA Global and EAGx conferences) will likely be reduced due to a decrease in available funding. This might influence travel grants, catering, volunteering, ticketing, and non-critical conference expenses.The CEA events team is responsible for numerous events in the EA community, including EA Global, EAGx, and various retreat programs. We (the CEA events team) expect to reduce spending on events we run in the coming year due to:The FTX situationThe reduction in funds available to Open Philanthropy (partially due to a general stock market decline)The growth of the EA community — meaning that grantmakers now have more alternative funding opportunities. i.e., we’re no longer one of the very few things available for them to fund (this is a good thing!)At this stage, we’re still navigating the new funding landscape and we aren’t sure what this means going forwards, but some potential consequences include:Travel grant funding will likely be more restrictive. Previously we’ve funded people to travel to any EA conference they’ve been accepted to. We expect to retain some amount of travel funding moving forwards, but we’ll likely have to be much more conservative about how much we give and who we give it to. When planning around an event, we’d recommend you act under the assumption that we will not be able to grant your travel funding request (unless it has already been approved).Catering will likely be cut down. We’ll likely have to stop providing all three of breakfast, lunch, and dinner on each day for our conferences — we still expect to have some food or snacks available, but it’s currently unclear exactly what we’ll be able to provide.We might go back to a volunteer model for people working at EA Global (we trialed paying “volunteers” at the last two EA Globals).We might introduce a variable pricing ticketing system where we ask people with higher incomes to pay more for their tickets (we expect to still have free and reduced cost tickets available for students and those on lower incomes).We might need to limit capacity at certain events (whereas previously we always accepted people if they were above a certain bar).If you have any questions or concerns, you can email us at hello@eaglobal.org or comment below (though we may not be able to respond to all comments).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on spending for CEA-run events, published by Eli Nathan on December 18, 2022 on The Effective Altruism Forum.TL;DR: Spending on events run and supported by CEA (including EA Global and EAGx conferences) will likely be reduced due to a decrease in available funding. This might influence travel grants, catering, volunteering, ticketing, and non-critical conference expenses.The CEA events team is responsible for numerous events in the EA community, including EA Global, EAGx, and various retreat programs. We (the CEA events team) expect to reduce spending on events we run in the coming year due to:The FTX situationThe reduction in funds available to Open Philanthropy (partially due to a general stock market decline)The growth of the EA community — meaning that grantmakers now have more alternative funding opportunities. i.e., we’re no longer one of the very few things available for them to fund (this is a good thing!)At this stage, we’re still navigating the new funding landscape and we aren’t sure what this means going forwards, but some potential consequences include:Travel grant funding will likely be more restrictive. Previously we’ve funded people to travel to any EA conference they’ve been accepted to. We expect to retain some amount of travel funding moving forwards, but we’ll likely have to be much more conservative about how much we give and who we give it to. When planning around an event, we’d recommend you act under the assumption that we will not be able to grant your travel funding request (unless it has already been approved).Catering will likely be cut down. We’ll likely have to stop providing all three of breakfast, lunch, and dinner on each day for our conferences — we still expect to have some food or snacks available, but it’s currently unclear exactly what we’ll be able to provide.We might go back to a volunteer model for people working at EA Global (we trialed paying “volunteers” at the last two EA Globals).We might introduce a variable pricing ticketing system where we ask people with higher incomes to pay more for their tickets (we expect to still have free and reduced cost tickets available for students and those on lower incomes).We might need to limit capacity at certain events (whereas previously we always accepted people if they were above a certain bar).If you have any questions or concerns, you can email us at hello@eaglobal.org or comment below (though we may not be able to respond to all comments).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Eli Nathan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:31 None full 4166
Wj2Z2NXDZXiysnRv5_EA EA - Lessons learned from Tyve, an effective giving startup by Clifford Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lessons learned from Tyve, an effective giving startup, published by Clifford on December 18, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. I probably have a lot more thoughts I haven’t written down so feel free to ask me questions in the comments.In 2019 I tried building a startup to promote effective giving in workplaces. It didn’t work out as a for-profit but looks somewhat promising as a non-profit and has facilitated £630k in donations (around £160k of this went to Founders Pledge recommended charities). In this post, I share the story and my lessons learned.Since writing this draft, I’m excited that a new CEO with vastly more product experience than me has taken on Tyve as a nonprofit.HT to Sebastien who I met at EAG 2020 and suggested I write up a post-mortem and to Lizka for encouraging me to post as Draft Amnesty Day 2 years later.Main learnings:Fall in love with the problem, not the solutionI’d prefer to do similar projects as nonprofits in the futureProbably don’t try to solve a lack of effective giving with technologyFundraising at workplaces can work decently well but I’m not sure EA/tech helpsDon’t confuse “this should exist” with “people want this to exist”Ignore all the above adviceBrief historyI started Tyve (like Tithe with a funky spelling) after working at Founders Pledge in September 2019.The idea was to encourage employees to give a % of their income to charities and we’d encourage them to give effectively.My startup pitch was as follows:millennial employees care about purpose and employers need to find better ways of delivering thisone way they could do this would be by supporting people in giving to causes they care aboutthere’s a nifty system in the UK called payroll giving where the employer can play a helpful role in enabling employees to donate pre-tax to charity through their payrollpayroll giving appeared to be a potentially untapped market. Only 2% of people in the UK give through payroll and apparently this is 30% in the USthe existing players all had terrible UX and we were going to fix it and create a modern experience (like challenger banks had done to high street banks)we were also going to use the cost-effective estimates of our recommended charities to quantify the good they were doing, making it more satisfyingI now feel pretty iffy about most of the assumptions and the argument - I’d like to think this is based on what I learnt but I also think I could have realised at the time that some of this argument is pretty weak.I was relatively sceptical about donation apps but I thought I might have found a problem (small startups have employees who want to feel like they’re at a “good” company but doing CSR at small startups is an effort). I thought a cheap alternative would be to provide a spruced-up version of payroll giving.I built an MVP with my cofounder Ben O (a great software developer who I met at EAG Oxford 2016 with the explicit intention to find cofounders - kudos to that conference).I got 5 companies whose founders I knew well signed up and paying for the beta. We were pretty excited as people were giving unusual high amounts (thanks to anchoring people on % and targeting fairly well-paid startup employees) and about 50% was going to charities we’d recommended (either based on GiveWell or Founders Pledge).I quit my job and then raised £150k SEIS from well-known angel investors in the UK.It was a buzz raising this money ...]]>
Clifford https://forum.effectivealtruism.org/posts/Wj2Z2NXDZXiysnRv5/lessons-learned-from-tyve-an-effective-giving-startup Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lessons learned from Tyve, an effective giving startup, published by Clifford on December 18, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. I probably have a lot more thoughts I haven’t written down so feel free to ask me questions in the comments.In 2019 I tried building a startup to promote effective giving in workplaces. It didn’t work out as a for-profit but looks somewhat promising as a non-profit and has facilitated £630k in donations (around £160k of this went to Founders Pledge recommended charities). In this post, I share the story and my lessons learned.Since writing this draft, I’m excited that a new CEO with vastly more product experience than me has taken on Tyve as a nonprofit.HT to Sebastien who I met at EAG 2020 and suggested I write up a post-mortem and to Lizka for encouraging me to post as Draft Amnesty Day 2 years later.Main learnings:Fall in love with the problem, not the solutionI’d prefer to do similar projects as nonprofits in the futureProbably don’t try to solve a lack of effective giving with technologyFundraising at workplaces can work decently well but I’m not sure EA/tech helpsDon’t confuse “this should exist” with “people want this to exist”Ignore all the above adviceBrief historyI started Tyve (like Tithe with a funky spelling) after working at Founders Pledge in September 2019.The idea was to encourage employees to give a % of their income to charities and we’d encourage them to give effectively.My startup pitch was as follows:millennial employees care about purpose and employers need to find better ways of delivering thisone way they could do this would be by supporting people in giving to causes they care aboutthere’s a nifty system in the UK called payroll giving where the employer can play a helpful role in enabling employees to donate pre-tax to charity through their payrollpayroll giving appeared to be a potentially untapped market. Only 2% of people in the UK give through payroll and apparently this is 30% in the USthe existing players all had terrible UX and we were going to fix it and create a modern experience (like challenger banks had done to high street banks)we were also going to use the cost-effective estimates of our recommended charities to quantify the good they were doing, making it more satisfyingI now feel pretty iffy about most of the assumptions and the argument - I’d like to think this is based on what I learnt but I also think I could have realised at the time that some of this argument is pretty weak.I was relatively sceptical about donation apps but I thought I might have found a problem (small startups have employees who want to feel like they’re at a “good” company but doing CSR at small startups is an effort). I thought a cheap alternative would be to provide a spruced-up version of payroll giving.I built an MVP with my cofounder Ben O (a great software developer who I met at EAG Oxford 2016 with the explicit intention to find cofounders - kudos to that conference).I got 5 companies whose founders I knew well signed up and paying for the beta. We were pretty excited as people were giving unusual high amounts (thanks to anchoring people on % and targeting fairly well-paid startup employees) and about 50% was going to charities we’d recommended (either based on GiveWell or Founders Pledge).I quit my job and then raised £150k SEIS from well-known angel investors in the UK.It was a buzz raising this money ...]]>
Sun, 18 Dec 2022 19:03:47 +0000 EA - Lessons learned from Tyve, an effective giving startup by Clifford Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lessons learned from Tyve, an effective giving startup, published by Clifford on December 18, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. I probably have a lot more thoughts I haven’t written down so feel free to ask me questions in the comments.In 2019 I tried building a startup to promote effective giving in workplaces. It didn’t work out as a for-profit but looks somewhat promising as a non-profit and has facilitated £630k in donations (around £160k of this went to Founders Pledge recommended charities). In this post, I share the story and my lessons learned.Since writing this draft, I’m excited that a new CEO with vastly more product experience than me has taken on Tyve as a nonprofit.HT to Sebastien who I met at EAG 2020 and suggested I write up a post-mortem and to Lizka for encouraging me to post as Draft Amnesty Day 2 years later.Main learnings:Fall in love with the problem, not the solutionI’d prefer to do similar projects as nonprofits in the futureProbably don’t try to solve a lack of effective giving with technologyFundraising at workplaces can work decently well but I’m not sure EA/tech helpsDon’t confuse “this should exist” with “people want this to exist”Ignore all the above adviceBrief historyI started Tyve (like Tithe with a funky spelling) after working at Founders Pledge in September 2019.The idea was to encourage employees to give a % of their income to charities and we’d encourage them to give effectively.My startup pitch was as follows:millennial employees care about purpose and employers need to find better ways of delivering thisone way they could do this would be by supporting people in giving to causes they care aboutthere’s a nifty system in the UK called payroll giving where the employer can play a helpful role in enabling employees to donate pre-tax to charity through their payrollpayroll giving appeared to be a potentially untapped market. Only 2% of people in the UK give through payroll and apparently this is 30% in the USthe existing players all had terrible UX and we were going to fix it and create a modern experience (like challenger banks had done to high street banks)we were also going to use the cost-effective estimates of our recommended charities to quantify the good they were doing, making it more satisfyingI now feel pretty iffy about most of the assumptions and the argument - I’d like to think this is based on what I learnt but I also think I could have realised at the time that some of this argument is pretty weak.I was relatively sceptical about donation apps but I thought I might have found a problem (small startups have employees who want to feel like they’re at a “good” company but doing CSR at small startups is an effort). I thought a cheap alternative would be to provide a spruced-up version of payroll giving.I built an MVP with my cofounder Ben O (a great software developer who I met at EAG Oxford 2016 with the explicit intention to find cofounders - kudos to that conference).I got 5 companies whose founders I knew well signed up and paying for the beta. We were pretty excited as people were giving unusual high amounts (thanks to anchoring people on % and targeting fairly well-paid startup employees) and about 50% was going to charities we’d recommended (either based on GiveWell or Founders Pledge).I quit my job and then raised £150k SEIS from well-known angel investors in the UK.It was a buzz raising this money ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lessons learned from Tyve, an effective giving startup, published by Clifford on December 18, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. I probably have a lot more thoughts I haven’t written down so feel free to ask me questions in the comments.In 2019 I tried building a startup to promote effective giving in workplaces. It didn’t work out as a for-profit but looks somewhat promising as a non-profit and has facilitated £630k in donations (around £160k of this went to Founders Pledge recommended charities). In this post, I share the story and my lessons learned.Since writing this draft, I’m excited that a new CEO with vastly more product experience than me has taken on Tyve as a nonprofit.HT to Sebastien who I met at EAG 2020 and suggested I write up a post-mortem and to Lizka for encouraging me to post as Draft Amnesty Day 2 years later.Main learnings:Fall in love with the problem, not the solutionI’d prefer to do similar projects as nonprofits in the futureProbably don’t try to solve a lack of effective giving with technologyFundraising at workplaces can work decently well but I’m not sure EA/tech helpsDon’t confuse “this should exist” with “people want this to exist”Ignore all the above adviceBrief historyI started Tyve (like Tithe with a funky spelling) after working at Founders Pledge in September 2019.The idea was to encourage employees to give a % of their income to charities and we’d encourage them to give effectively.My startup pitch was as follows:millennial employees care about purpose and employers need to find better ways of delivering thisone way they could do this would be by supporting people in giving to causes they care aboutthere’s a nifty system in the UK called payroll giving where the employer can play a helpful role in enabling employees to donate pre-tax to charity through their payrollpayroll giving appeared to be a potentially untapped market. Only 2% of people in the UK give through payroll and apparently this is 30% in the USthe existing players all had terrible UX and we were going to fix it and create a modern experience (like challenger banks had done to high street banks)we were also going to use the cost-effective estimates of our recommended charities to quantify the good they were doing, making it more satisfyingI now feel pretty iffy about most of the assumptions and the argument - I’d like to think this is based on what I learnt but I also think I could have realised at the time that some of this argument is pretty weak.I was relatively sceptical about donation apps but I thought I might have found a problem (small startups have employees who want to feel like they’re at a “good” company but doing CSR at small startups is an effort). I thought a cheap alternative would be to provide a spruced-up version of payroll giving.I built an MVP with my cofounder Ben O (a great software developer who I met at EAG Oxford 2016 with the explicit intention to find cofounders - kudos to that conference).I got 5 companies whose founders I knew well signed up and paying for the beta. We were pretty excited as people were giving unusual high amounts (thanks to anchoring people on % and targeting fairly well-paid startup employees) and about 50% was going to charities we’d recommended (either based on GiveWell or Founders Pledge).I quit my job and then raised £150k SEIS from well-known angel investors in the UK.It was a buzz raising this money ...]]>
Clifford https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:33 None full 4167
WYktRSxq4Edw9zsH9_EA EA - Be less trusting of intuitive arguments about social phenomena by Nathan Barnard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Be less trusting of intuitive arguments about social phenomena, published by Nathan Barnard on December 18, 2022 on The Effective Altruism Forum.note: I think this applies much less or even not all in domains where you’re getting tight feedback on your models and have to take actions based on them which you’re then evaluated on.I think there’s a trend in the effective altruist and rationality communities to be quite trusting of arguments about how social phenomena work that have theoretical models that are intuitively appealing and have anecdotal evidence or non-systematic observational evidence to support them. The sorts of things I’m thinking about are:The evaporative cooling model of communitiesMy friend’s argument that community builders shouldn't spend [edit: most] of their time talking to people they consider less sharp than them to people less sharp than them because it’ll harm their epistemicCurrent EA community is selecting for uncritical peopleAsking people explicitly if they’re altruistic will just select for people who are good lairs (person doing selections for admittance to an EA thing)The toxoplasma of rageMax Tegmark’s model of nuclear warJohn Wentworth’s post on takeoff speedsI think this is a really bad epistemology for thinking about social phenomena.Here are some examples of arguments I could make that we know are wrong but seem reasonable based on arguments some people find intuitive and observational evidence:Having a minimum wage will increase unemployment rates. Employers hire workers up until the point that the marginal revenue generated by each worker equals the marginal cost of hiring workers. If the wage workers have to be paid goes up then unemployment will go up because marginal productivity is diminishing in the number of workers.Increasing interest rates will increase inflation. Firms set their prices as a cost plus a markup and so if their costs increase because the price of loans goes up then firms will increase prices which means that inflation goes up. My friend works as a handyman and he charges £150 for a day of work plus the price of materials. If the price of materials went up he’d charge moreLetting people emigrate to rich countries from poor countries will increase crime in rich countries. The immigrants who are most likely to leave their home countries are those who have the least social ties and the worst employment outlooks in their home countries. This selects people who are more likely to be criminals because criminals are likely to have bad job opportunities in their home countries and weak ties to their families. If we try and filter out criminals we end up selecting smart criminals who are good at hiding their misdeeds. If you look at areas with high crime rates they often have large foreign immigrant populations. [Edit - most people wouldn't find this selection argument intuitive but I thought it was worth including because of how common selection based arguments are in the EA and rationality communities.I'm also not taking aim at arguments that are intuitively obvious rather arguments that those making find intuitively appeal, even if they're counterintuitive in some way. i.e some people think that adverse selection is a common and powerful force even though adverse selection is a counter-intutive concept.]Cash transfers increase poverty, or at least are unlikely to reduce more than in-kind transfers or job training. We that people in low-income countries often spend a large fraction of their incomes on tobacco and alcohol products. By giving these people cash they have more money to spend on tobacco and alcohol meaning they’re more likely to suffer from addiction problems that keep them in poverty. We also know that poverty selects people who make poor financial decisions, so giving people cash give...]]>
Nathan Barnard https://forum.effectivealtruism.org/posts/WYktRSxq4Edw9zsH9/be-less-trusting-of-intuitive-arguments-about-social Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Be less trusting of intuitive arguments about social phenomena, published by Nathan Barnard on December 18, 2022 on The Effective Altruism Forum.note: I think this applies much less or even not all in domains where you’re getting tight feedback on your models and have to take actions based on them which you’re then evaluated on.I think there’s a trend in the effective altruist and rationality communities to be quite trusting of arguments about how social phenomena work that have theoretical models that are intuitively appealing and have anecdotal evidence or non-systematic observational evidence to support them. The sorts of things I’m thinking about are:The evaporative cooling model of communitiesMy friend’s argument that community builders shouldn't spend [edit: most] of their time talking to people they consider less sharp than them to people less sharp than them because it’ll harm their epistemicCurrent EA community is selecting for uncritical peopleAsking people explicitly if they’re altruistic will just select for people who are good lairs (person doing selections for admittance to an EA thing)The toxoplasma of rageMax Tegmark’s model of nuclear warJohn Wentworth’s post on takeoff speedsI think this is a really bad epistemology for thinking about social phenomena.Here are some examples of arguments I could make that we know are wrong but seem reasonable based on arguments some people find intuitive and observational evidence:Having a minimum wage will increase unemployment rates. Employers hire workers up until the point that the marginal revenue generated by each worker equals the marginal cost of hiring workers. If the wage workers have to be paid goes up then unemployment will go up because marginal productivity is diminishing in the number of workers.Increasing interest rates will increase inflation. Firms set their prices as a cost plus a markup and so if their costs increase because the price of loans goes up then firms will increase prices which means that inflation goes up. My friend works as a handyman and he charges £150 for a day of work plus the price of materials. If the price of materials went up he’d charge moreLetting people emigrate to rich countries from poor countries will increase crime in rich countries. The immigrants who are most likely to leave their home countries are those who have the least social ties and the worst employment outlooks in their home countries. This selects people who are more likely to be criminals because criminals are likely to have bad job opportunities in their home countries and weak ties to their families. If we try and filter out criminals we end up selecting smart criminals who are good at hiding their misdeeds. If you look at areas with high crime rates they often have large foreign immigrant populations. [Edit - most people wouldn't find this selection argument intuitive but I thought it was worth including because of how common selection based arguments are in the EA and rationality communities.I'm also not taking aim at arguments that are intuitively obvious rather arguments that those making find intuitively appeal, even if they're counterintuitive in some way. i.e some people think that adverse selection is a common and powerful force even though adverse selection is a counter-intutive concept.]Cash transfers increase poverty, or at least are unlikely to reduce more than in-kind transfers or job training. We that people in low-income countries often spend a large fraction of their incomes on tobacco and alcohol products. By giving these people cash they have more money to spend on tobacco and alcohol meaning they’re more likely to suffer from addiction problems that keep them in poverty. We also know that poverty selects people who make poor financial decisions, so giving people cash give...]]>
Sun, 18 Dec 2022 16:42:05 +0000 EA - Be less trusting of intuitive arguments about social phenomena by Nathan Barnard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Be less trusting of intuitive arguments about social phenomena, published by Nathan Barnard on December 18, 2022 on The Effective Altruism Forum.note: I think this applies much less or even not all in domains where you’re getting tight feedback on your models and have to take actions based on them which you’re then evaluated on.I think there’s a trend in the effective altruist and rationality communities to be quite trusting of arguments about how social phenomena work that have theoretical models that are intuitively appealing and have anecdotal evidence or non-systematic observational evidence to support them. The sorts of things I’m thinking about are:The evaporative cooling model of communitiesMy friend’s argument that community builders shouldn't spend [edit: most] of their time talking to people they consider less sharp than them to people less sharp than them because it’ll harm their epistemicCurrent EA community is selecting for uncritical peopleAsking people explicitly if they’re altruistic will just select for people who are good lairs (person doing selections for admittance to an EA thing)The toxoplasma of rageMax Tegmark’s model of nuclear warJohn Wentworth’s post on takeoff speedsI think this is a really bad epistemology for thinking about social phenomena.Here are some examples of arguments I could make that we know are wrong but seem reasonable based on arguments some people find intuitive and observational evidence:Having a minimum wage will increase unemployment rates. Employers hire workers up until the point that the marginal revenue generated by each worker equals the marginal cost of hiring workers. If the wage workers have to be paid goes up then unemployment will go up because marginal productivity is diminishing in the number of workers.Increasing interest rates will increase inflation. Firms set their prices as a cost plus a markup and so if their costs increase because the price of loans goes up then firms will increase prices which means that inflation goes up. My friend works as a handyman and he charges £150 for a day of work plus the price of materials. If the price of materials went up he’d charge moreLetting people emigrate to rich countries from poor countries will increase crime in rich countries. The immigrants who are most likely to leave their home countries are those who have the least social ties and the worst employment outlooks in their home countries. This selects people who are more likely to be criminals because criminals are likely to have bad job opportunities in their home countries and weak ties to their families. If we try and filter out criminals we end up selecting smart criminals who are good at hiding their misdeeds. If you look at areas with high crime rates they often have large foreign immigrant populations. [Edit - most people wouldn't find this selection argument intuitive but I thought it was worth including because of how common selection based arguments are in the EA and rationality communities.I'm also not taking aim at arguments that are intuitively obvious rather arguments that those making find intuitively appeal, even if they're counterintuitive in some way. i.e some people think that adverse selection is a common and powerful force even though adverse selection is a counter-intutive concept.]Cash transfers increase poverty, or at least are unlikely to reduce more than in-kind transfers or job training. We that people in low-income countries often spend a large fraction of their incomes on tobacco and alcohol products. By giving these people cash they have more money to spend on tobacco and alcohol meaning they’re more likely to suffer from addiction problems that keep them in poverty. We also know that poverty selects people who make poor financial decisions, so giving people cash give...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Be less trusting of intuitive arguments about social phenomena, published by Nathan Barnard on December 18, 2022 on The Effective Altruism Forum.note: I think this applies much less or even not all in domains where you’re getting tight feedback on your models and have to take actions based on them which you’re then evaluated on.I think there’s a trend in the effective altruist and rationality communities to be quite trusting of arguments about how social phenomena work that have theoretical models that are intuitively appealing and have anecdotal evidence or non-systematic observational evidence to support them. The sorts of things I’m thinking about are:The evaporative cooling model of communitiesMy friend’s argument that community builders shouldn't spend [edit: most] of their time talking to people they consider less sharp than them to people less sharp than them because it’ll harm their epistemicCurrent EA community is selecting for uncritical peopleAsking people explicitly if they’re altruistic will just select for people who are good lairs (person doing selections for admittance to an EA thing)The toxoplasma of rageMax Tegmark’s model of nuclear warJohn Wentworth’s post on takeoff speedsI think this is a really bad epistemology for thinking about social phenomena.Here are some examples of arguments I could make that we know are wrong but seem reasonable based on arguments some people find intuitive and observational evidence:Having a minimum wage will increase unemployment rates. Employers hire workers up until the point that the marginal revenue generated by each worker equals the marginal cost of hiring workers. If the wage workers have to be paid goes up then unemployment will go up because marginal productivity is diminishing in the number of workers.Increasing interest rates will increase inflation. Firms set their prices as a cost plus a markup and so if their costs increase because the price of loans goes up then firms will increase prices which means that inflation goes up. My friend works as a handyman and he charges £150 for a day of work plus the price of materials. If the price of materials went up he’d charge moreLetting people emigrate to rich countries from poor countries will increase crime in rich countries. The immigrants who are most likely to leave their home countries are those who have the least social ties and the worst employment outlooks in their home countries. This selects people who are more likely to be criminals because criminals are likely to have bad job opportunities in their home countries and weak ties to their families. If we try and filter out criminals we end up selecting smart criminals who are good at hiding their misdeeds. If you look at areas with high crime rates they often have large foreign immigrant populations. [Edit - most people wouldn't find this selection argument intuitive but I thought it was worth including because of how common selection based arguments are in the EA and rationality communities.I'm also not taking aim at arguments that are intuitively obvious rather arguments that those making find intuitively appeal, even if they're counterintuitive in some way. i.e some people think that adverse selection is a common and powerful force even though adverse selection is a counter-intutive concept.]Cash transfers increase poverty, or at least are unlikely to reduce more than in-kind transfers or job training. We that people in low-income countries often spend a large fraction of their incomes on tobacco and alcohol products. By giving these people cash they have more money to spend on tobacco and alcohol meaning they’re more likely to suffer from addiction problems that keep them in poverty. We also know that poverty selects people who make poor financial decisions, so giving people cash give...]]>
Nathan Barnard https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:31 None full 4168
Bm9g83bgXE3yd9Xmg_EA EA - Sir Gavin and the green sky by Gavin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sir Gavin and the green sky, published by Gavin on December 17, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Epistemic status: crank reportage.It suits me that climate change isn't an x-risk. (The movement has trillions of dollars already, and persistently drains talent, attention, and political capital away from actual x-risks.)... But is it one?One palaeontologist, Peter Ward, is semi-famous in the field for suggesting a mechanism by which runaway climate change could kill everything: by turning the ocean into a toxic gas factory.an increase in carbon dioxide... warms the oceans enough to change circulation patterns. When this happens, sulfur-eating microbes sometimes thrive. These bacteria produce hydrogen sulfide, which, in sufficient quantities and under certain conditions, outgasses into the air, shreds the ozone layer, and poisons other living things. The warming also causes methane ice under the seas to melt and, well, burp, adding to the nasty mix.As usual in Zoomed Out Sciences like epidemiology and climatology, the model stops short at the inevitable massive effort to reverse this process. The modellers prefer to think of humans as the pink fella from these comics: inertly lamenting.Ward argues that this (and similar mechanisms) is responsible for all past mass extinctions except the dinosaurs one everyone fixates on.This is correct as chemistry (I think), and apriori could happen, and for all I know he's right and it actually has happened before. So I have to give it nonzero probability - and one of the real probabilities, with only a few zeroes in it.I can't really evaluate this. There are some hallmarks of crankery in the book - most of the citations of the book are unscientific or pseudoscientific - and (as Halstead and others have long noted) climate is one of the slower and more detectable ways to kill a biosphere.But in concert with the usual vague definition of an x-risk (where there's no threshold on the probability), I've been thinking of climate change as a (lesser) xrisk for a while and thought I'd come clean, even though I don't think this changes anyone's decisions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Gavin https://forum.effectivealtruism.org/posts/Bm9g83bgXE3yd9Xmg/sir-gavin-and-the-green-sky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sir Gavin and the green sky, published by Gavin on December 17, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Epistemic status: crank reportage.It suits me that climate change isn't an x-risk. (The movement has trillions of dollars already, and persistently drains talent, attention, and political capital away from actual x-risks.)... But is it one?One palaeontologist, Peter Ward, is semi-famous in the field for suggesting a mechanism by which runaway climate change could kill everything: by turning the ocean into a toxic gas factory.an increase in carbon dioxide... warms the oceans enough to change circulation patterns. When this happens, sulfur-eating microbes sometimes thrive. These bacteria produce hydrogen sulfide, which, in sufficient quantities and under certain conditions, outgasses into the air, shreds the ozone layer, and poisons other living things. The warming also causes methane ice under the seas to melt and, well, burp, adding to the nasty mix.As usual in Zoomed Out Sciences like epidemiology and climatology, the model stops short at the inevitable massive effort to reverse this process. The modellers prefer to think of humans as the pink fella from these comics: inertly lamenting.Ward argues that this (and similar mechanisms) is responsible for all past mass extinctions except the dinosaurs one everyone fixates on.This is correct as chemistry (I think), and apriori could happen, and for all I know he's right and it actually has happened before. So I have to give it nonzero probability - and one of the real probabilities, with only a few zeroes in it.I can't really evaluate this. There are some hallmarks of crankery in the book - most of the citations of the book are unscientific or pseudoscientific - and (as Halstead and others have long noted) climate is one of the slower and more detectable ways to kill a biosphere.But in concert with the usual vague definition of an x-risk (where there's no threshold on the probability), I've been thinking of climate change as a (lesser) xrisk for a while and thought I'd come clean, even though I don't think this changes anyone's decisions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 18 Dec 2022 09:10:52 +0000 EA - Sir Gavin and the green sky by Gavin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sir Gavin and the green sky, published by Gavin on December 17, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Epistemic status: crank reportage.It suits me that climate change isn't an x-risk. (The movement has trillions of dollars already, and persistently drains talent, attention, and political capital away from actual x-risks.)... But is it one?One palaeontologist, Peter Ward, is semi-famous in the field for suggesting a mechanism by which runaway climate change could kill everything: by turning the ocean into a toxic gas factory.an increase in carbon dioxide... warms the oceans enough to change circulation patterns. When this happens, sulfur-eating microbes sometimes thrive. These bacteria produce hydrogen sulfide, which, in sufficient quantities and under certain conditions, outgasses into the air, shreds the ozone layer, and poisons other living things. The warming also causes methane ice under the seas to melt and, well, burp, adding to the nasty mix.As usual in Zoomed Out Sciences like epidemiology and climatology, the model stops short at the inevitable massive effort to reverse this process. The modellers prefer to think of humans as the pink fella from these comics: inertly lamenting.Ward argues that this (and similar mechanisms) is responsible for all past mass extinctions except the dinosaurs one everyone fixates on.This is correct as chemistry (I think), and apriori could happen, and for all I know he's right and it actually has happened before. So I have to give it nonzero probability - and one of the real probabilities, with only a few zeroes in it.I can't really evaluate this. There are some hallmarks of crankery in the book - most of the citations of the book are unscientific or pseudoscientific - and (as Halstead and others have long noted) climate is one of the slower and more detectable ways to kill a biosphere.But in concert with the usual vague definition of an x-risk (where there's no threshold on the probability), I've been thinking of climate change as a (lesser) xrisk for a while and thought I'd come clean, even though I don't think this changes anyone's decisions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sir Gavin and the green sky, published by Gavin on December 17, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Epistemic status: crank reportage.It suits me that climate change isn't an x-risk. (The movement has trillions of dollars already, and persistently drains talent, attention, and political capital away from actual x-risks.)... But is it one?One palaeontologist, Peter Ward, is semi-famous in the field for suggesting a mechanism by which runaway climate change could kill everything: by turning the ocean into a toxic gas factory.an increase in carbon dioxide... warms the oceans enough to change circulation patterns. When this happens, sulfur-eating microbes sometimes thrive. These bacteria produce hydrogen sulfide, which, in sufficient quantities and under certain conditions, outgasses into the air, shreds the ozone layer, and poisons other living things. The warming also causes methane ice under the seas to melt and, well, burp, adding to the nasty mix.As usual in Zoomed Out Sciences like epidemiology and climatology, the model stops short at the inevitable massive effort to reverse this process. The modellers prefer to think of humans as the pink fella from these comics: inertly lamenting.Ward argues that this (and similar mechanisms) is responsible for all past mass extinctions except the dinosaurs one everyone fixates on.This is correct as chemistry (I think), and apriori could happen, and for all I know he's right and it actually has happened before. So I have to give it nonzero probability - and one of the real probabilities, with only a few zeroes in it.I can't really evaluate this. There are some hallmarks of crankery in the book - most of the citations of the book are unscientific or pseudoscientific - and (as Halstead and others have long noted) climate is one of the slower and more detectable ways to kill a biosphere.But in concert with the usual vague definition of an x-risk (where there's no threshold on the probability), I've been thinking of climate change as a (lesser) xrisk for a while and thought I'd come clean, even though I don't think this changes anyone's decisions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Gavin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:38 None full 4162
PsztRbQnuMmpgSrRc_EA EA - Why we’re getting the Fidelity Model wrong by Alishaandomeda Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why we’re getting the Fidelity Model wrong, published by Alishaandomeda on December 17, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.This is a response to The fidelity model of spreading ideas - EA Forum (effectivealtruism.org), the associated CEA blog post, plus those who have referenced it in good-spirited debate about how EA comms ought to develop in the coming years.Epistemic status: As confident as anyone experienced enough in manipulating the whims of social media algorithms for social change can ever be, given that they change semi-regularly. 6+ years of experience using social media to effect social change, including advising politicians on their use of social media, conducting experimental social media for social change projects that have informed national political party strategies, and training upwards of 500+ activists.OverviewThis post is an exploration of the Fidelity Model, an approach to the spreading of ideas that is widely referenced within EA spaces.While I am broadly supportive of the fidelity model and agree with its central premises, I disagree with one influential interpretation of it: that we shouldn't use social media to communicate key EA ideas.This interpretation of the argument is claimed to be central to CEA's social media scepticism, which I argue is misplaced. I am more sympathetic to the mass media scepticism this argument also brings about.I explain why having an "official presence" on social media is important using the Fidelity Model. Social media is an important communications model, even if social media itself is low-fidelity.The Fidelity ModelI am taking this from the original post, so as to keep our arguments clear and consistent.DefinitionFidelity The key term in this model is "fidelity." Therefore it will be useful to define this term. By fidelity [the original author has] in mind nothing more than the classic dictionary definition of "adherence to fact or detail" or "accuracy; exactness."As an example, imagine I am shooting a movie on an old camera. If the image captured by the camera causes it to seem as though I am wearing a blue shirt when actually my shirt is red, then the image captured by the camera is low fidelity.The problemWith reference to The Telephone Game, the author of the original argument accepts that "EA" ideas and arguments are nuanced and worries that some methods of communication require that the nuance be stripped back in order to be easily communicable.When the context gets stripped away, those who receive the ideas leave with something that's similar to effective altruism, but different. Thus, when we hear the EA message repeated back to us, we get sentences like "EA is about earning all the money you can and donating it to GiveWell charities" or "EAs only care about interventions that are supported by randomized controlled trials." To a certain extent we can influence the sentences we get back by being more clever about how we frame our ideas, but it seems unlikely that framing can do all the work.How we get to social media scepticismVery few people who use social media believe it to be a high-fidelity information exchange space. Platforms like Twitter, with low character counts and a norm towards fast-paced argument threads, are particularly bad at fostering nuanced debate.EA ideas are nuanced, and when they aren't covered with that nuance in m...]]>
Alishaandomeda https://forum.effectivealtruism.org/posts/PsztRbQnuMmpgSrRc/why-we-re-getting-the-fidelity-model-wrong Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why we’re getting the Fidelity Model wrong, published by Alishaandomeda on December 17, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.This is a response to The fidelity model of spreading ideas - EA Forum (effectivealtruism.org), the associated CEA blog post, plus those who have referenced it in good-spirited debate about how EA comms ought to develop in the coming years.Epistemic status: As confident as anyone experienced enough in manipulating the whims of social media algorithms for social change can ever be, given that they change semi-regularly. 6+ years of experience using social media to effect social change, including advising politicians on their use of social media, conducting experimental social media for social change projects that have informed national political party strategies, and training upwards of 500+ activists.OverviewThis post is an exploration of the Fidelity Model, an approach to the spreading of ideas that is widely referenced within EA spaces.While I am broadly supportive of the fidelity model and agree with its central premises, I disagree with one influential interpretation of it: that we shouldn't use social media to communicate key EA ideas.This interpretation of the argument is claimed to be central to CEA's social media scepticism, which I argue is misplaced. I am more sympathetic to the mass media scepticism this argument also brings about.I explain why having an "official presence" on social media is important using the Fidelity Model. Social media is an important communications model, even if social media itself is low-fidelity.The Fidelity ModelI am taking this from the original post, so as to keep our arguments clear and consistent.DefinitionFidelity The key term in this model is "fidelity." Therefore it will be useful to define this term. By fidelity [the original author has] in mind nothing more than the classic dictionary definition of "adherence to fact or detail" or "accuracy; exactness."As an example, imagine I am shooting a movie on an old camera. If the image captured by the camera causes it to seem as though I am wearing a blue shirt when actually my shirt is red, then the image captured by the camera is low fidelity.The problemWith reference to The Telephone Game, the author of the original argument accepts that "EA" ideas and arguments are nuanced and worries that some methods of communication require that the nuance be stripped back in order to be easily communicable.When the context gets stripped away, those who receive the ideas leave with something that's similar to effective altruism, but different. Thus, when we hear the EA message repeated back to us, we get sentences like "EA is about earning all the money you can and donating it to GiveWell charities" or "EAs only care about interventions that are supported by randomized controlled trials." To a certain extent we can influence the sentences we get back by being more clever about how we frame our ideas, but it seems unlikely that framing can do all the work.How we get to social media scepticismVery few people who use social media believe it to be a high-fidelity information exchange space. Platforms like Twitter, with low character counts and a norm towards fast-paced argument threads, are particularly bad at fostering nuanced debate.EA ideas are nuanced, and when they aren't covered with that nuance in m...]]>
Sun, 18 Dec 2022 00:49:07 +0000 EA - Why we’re getting the Fidelity Model wrong by Alishaandomeda Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why we’re getting the Fidelity Model wrong, published by Alishaandomeda on December 17, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.This is a response to The fidelity model of spreading ideas - EA Forum (effectivealtruism.org), the associated CEA blog post, plus those who have referenced it in good-spirited debate about how EA comms ought to develop in the coming years.Epistemic status: As confident as anyone experienced enough in manipulating the whims of social media algorithms for social change can ever be, given that they change semi-regularly. 6+ years of experience using social media to effect social change, including advising politicians on their use of social media, conducting experimental social media for social change projects that have informed national political party strategies, and training upwards of 500+ activists.OverviewThis post is an exploration of the Fidelity Model, an approach to the spreading of ideas that is widely referenced within EA spaces.While I am broadly supportive of the fidelity model and agree with its central premises, I disagree with one influential interpretation of it: that we shouldn't use social media to communicate key EA ideas.This interpretation of the argument is claimed to be central to CEA's social media scepticism, which I argue is misplaced. I am more sympathetic to the mass media scepticism this argument also brings about.I explain why having an "official presence" on social media is important using the Fidelity Model. Social media is an important communications model, even if social media itself is low-fidelity.The Fidelity ModelI am taking this from the original post, so as to keep our arguments clear and consistent.DefinitionFidelity The key term in this model is "fidelity." Therefore it will be useful to define this term. By fidelity [the original author has] in mind nothing more than the classic dictionary definition of "adherence to fact or detail" or "accuracy; exactness."As an example, imagine I am shooting a movie on an old camera. If the image captured by the camera causes it to seem as though I am wearing a blue shirt when actually my shirt is red, then the image captured by the camera is low fidelity.The problemWith reference to The Telephone Game, the author of the original argument accepts that "EA" ideas and arguments are nuanced and worries that some methods of communication require that the nuance be stripped back in order to be easily communicable.When the context gets stripped away, those who receive the ideas leave with something that's similar to effective altruism, but different. Thus, when we hear the EA message repeated back to us, we get sentences like "EA is about earning all the money you can and donating it to GiveWell charities" or "EAs only care about interventions that are supported by randomized controlled trials." To a certain extent we can influence the sentences we get back by being more clever about how we frame our ideas, but it seems unlikely that framing can do all the work.How we get to social media scepticismVery few people who use social media believe it to be a high-fidelity information exchange space. Platforms like Twitter, with low character counts and a norm towards fast-paced argument threads, are particularly bad at fostering nuanced debate.EA ideas are nuanced, and when they aren't covered with that nuance in m...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why we’re getting the Fidelity Model wrong, published by Alishaandomeda on December 17, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.This is a response to The fidelity model of spreading ideas - EA Forum (effectivealtruism.org), the associated CEA blog post, plus those who have referenced it in good-spirited debate about how EA comms ought to develop in the coming years.Epistemic status: As confident as anyone experienced enough in manipulating the whims of social media algorithms for social change can ever be, given that they change semi-regularly. 6+ years of experience using social media to effect social change, including advising politicians on their use of social media, conducting experimental social media for social change projects that have informed national political party strategies, and training upwards of 500+ activists.OverviewThis post is an exploration of the Fidelity Model, an approach to the spreading of ideas that is widely referenced within EA spaces.While I am broadly supportive of the fidelity model and agree with its central premises, I disagree with one influential interpretation of it: that we shouldn't use social media to communicate key EA ideas.This interpretation of the argument is claimed to be central to CEA's social media scepticism, which I argue is misplaced. I am more sympathetic to the mass media scepticism this argument also brings about.I explain why having an "official presence" on social media is important using the Fidelity Model. Social media is an important communications model, even if social media itself is low-fidelity.The Fidelity ModelI am taking this from the original post, so as to keep our arguments clear and consistent.DefinitionFidelity The key term in this model is "fidelity." Therefore it will be useful to define this term. By fidelity [the original author has] in mind nothing more than the classic dictionary definition of "adherence to fact or detail" or "accuracy; exactness."As an example, imagine I am shooting a movie on an old camera. If the image captured by the camera causes it to seem as though I am wearing a blue shirt when actually my shirt is red, then the image captured by the camera is low fidelity.The problemWith reference to The Telephone Game, the author of the original argument accepts that "EA" ideas and arguments are nuanced and worries that some methods of communication require that the nuance be stripped back in order to be easily communicable.When the context gets stripped away, those who receive the ideas leave with something that's similar to effective altruism, but different. Thus, when we hear the EA message repeated back to us, we get sentences like "EA is about earning all the money you can and donating it to GiveWell charities" or "EAs only care about interventions that are supported by randomized controlled trials." To a certain extent we can influence the sentences we get back by being more clever about how we frame our ideas, but it seems unlikely that framing can do all the work.How we get to social media scepticismVery few people who use social media believe it to be a high-fidelity information exchange space. Platforms like Twitter, with low character counts and a norm towards fast-paced argument threads, are particularly bad at fostering nuanced debate.EA ideas are nuanced, and when they aren't covered with that nuance in m...]]>
Alishaandomeda https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:36 None full 4164
cSKDSzAe4C3J8si4a_EA EA - A digression about area-specific work in LatAm by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A digression about area-specific work in LatAm, published by Jaime Sevilla on December 17, 2022 on The Effective Altruism Forum.Written for draft amnesty dayDespite the many successes of the Spanish-Speaker community, two Spanish-Speaking organizers I greatly respect have been repeatedly discouraged from pursuing research in their areas of expertise.Eg a grantmaker suggested them instead working on the translation of key texts to promote Effective Altruism because it might be easier to balance with their other community-building work, despite having no expertise in translation and there already being a translation project in the community. Their grant request was subsequently put on hold.Other members of the international community have echoed the advice in different forms, to the point where the organisers feel upset and disheartened.This dynamic challenges the local organisers' expertise and awareness of the situation of their local community. On a meta-level, this (together with my previous impressions of the topic) suggests to me that the community might need to tone down EA branding promotion in support of doing area-specific work.They have also been encouraged to move to the main EA hubs to gain legitimacy and experience. While this seems hard to fix, it saddens me that there is so little support for capable people who want to develop their career locally.Additionally, I am aware of people outside the Spanish-speaking community who have been encouraged to pursue area-specific projects in LatAm, despite lacking experience in the area or local knowledge. This hinders the intellectual independence of small communities and discourages local efforts. Even though I am glad that this work is being done, this might be a symptom of systematic misprioritization.All in all, these are isolated issues and probably better explained by miscommunication and differences in judgement. However, I believe we should have a lower bar for raising critiques and be more transparent about them — this is my targeted contribution to move the culture of EA in that direction.There is a deeper conversation to be had about the relationship between the international EA community and EA in LatAm and Low and Middle-Income Countries in general. It is not my place to host that conversation, though I hope that by speaking up and seeing the reaction from the community I can help others feel more confident bringing up the issues that worry them.As a takeaway, please do not discourage young professionals from pursuing area-specific work, and seek feedback from local organizers about their community’s needs. I also want to stress that grassroots efforts led by dedicated professionals are disproportionately more likely to succeed.Thank you to Agustín Covarrubias, Laura González, Claudette Salinas, Michelle Bruno, Sandra Malagón, Ángela Aristizábal, Pablo Stafforini and Catherine Low for feedback and help editing the post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jaime Sevilla https://forum.effectivealtruism.org/posts/cSKDSzAe4C3J8si4a/a-digression-about-area-specific-work-in-latam Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A digression about area-specific work in LatAm, published by Jaime Sevilla on December 17, 2022 on The Effective Altruism Forum.Written for draft amnesty dayDespite the many successes of the Spanish-Speaker community, two Spanish-Speaking organizers I greatly respect have been repeatedly discouraged from pursuing research in their areas of expertise.Eg a grantmaker suggested them instead working on the translation of key texts to promote Effective Altruism because it might be easier to balance with their other community-building work, despite having no expertise in translation and there already being a translation project in the community. Their grant request was subsequently put on hold.Other members of the international community have echoed the advice in different forms, to the point where the organisers feel upset and disheartened.This dynamic challenges the local organisers' expertise and awareness of the situation of their local community. On a meta-level, this (together with my previous impressions of the topic) suggests to me that the community might need to tone down EA branding promotion in support of doing area-specific work.They have also been encouraged to move to the main EA hubs to gain legitimacy and experience. While this seems hard to fix, it saddens me that there is so little support for capable people who want to develop their career locally.Additionally, I am aware of people outside the Spanish-speaking community who have been encouraged to pursue area-specific projects in LatAm, despite lacking experience in the area or local knowledge. This hinders the intellectual independence of small communities and discourages local efforts. Even though I am glad that this work is being done, this might be a symptom of systematic misprioritization.All in all, these are isolated issues and probably better explained by miscommunication and differences in judgement. However, I believe we should have a lower bar for raising critiques and be more transparent about them — this is my targeted contribution to move the culture of EA in that direction.There is a deeper conversation to be had about the relationship between the international EA community and EA in LatAm and Low and Middle-Income Countries in general. It is not my place to host that conversation, though I hope that by speaking up and seeing the reaction from the community I can help others feel more confident bringing up the issues that worry them.As a takeaway, please do not discourage young professionals from pursuing area-specific work, and seek feedback from local organizers about their community’s needs. I also want to stress that grassroots efforts led by dedicated professionals are disproportionately more likely to succeed.Thank you to Agustín Covarrubias, Laura González, Claudette Salinas, Michelle Bruno, Sandra Malagón, Ángela Aristizábal, Pablo Stafforini and Catherine Low for feedback and help editing the post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 17 Dec 2022 21:15:19 +0000 EA - A digression about area-specific work in LatAm by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A digression about area-specific work in LatAm, published by Jaime Sevilla on December 17, 2022 on The Effective Altruism Forum.Written for draft amnesty dayDespite the many successes of the Spanish-Speaker community, two Spanish-Speaking organizers I greatly respect have been repeatedly discouraged from pursuing research in their areas of expertise.Eg a grantmaker suggested them instead working on the translation of key texts to promote Effective Altruism because it might be easier to balance with their other community-building work, despite having no expertise in translation and there already being a translation project in the community. Their grant request was subsequently put on hold.Other members of the international community have echoed the advice in different forms, to the point where the organisers feel upset and disheartened.This dynamic challenges the local organisers' expertise and awareness of the situation of their local community. On a meta-level, this (together with my previous impressions of the topic) suggests to me that the community might need to tone down EA branding promotion in support of doing area-specific work.They have also been encouraged to move to the main EA hubs to gain legitimacy and experience. While this seems hard to fix, it saddens me that there is so little support for capable people who want to develop their career locally.Additionally, I am aware of people outside the Spanish-speaking community who have been encouraged to pursue area-specific projects in LatAm, despite lacking experience in the area or local knowledge. This hinders the intellectual independence of small communities and discourages local efforts. Even though I am glad that this work is being done, this might be a symptom of systematic misprioritization.All in all, these are isolated issues and probably better explained by miscommunication and differences in judgement. However, I believe we should have a lower bar for raising critiques and be more transparent about them — this is my targeted contribution to move the culture of EA in that direction.There is a deeper conversation to be had about the relationship between the international EA community and EA in LatAm and Low and Middle-Income Countries in general. It is not my place to host that conversation, though I hope that by speaking up and seeing the reaction from the community I can help others feel more confident bringing up the issues that worry them.As a takeaway, please do not discourage young professionals from pursuing area-specific work, and seek feedback from local organizers about their community’s needs. I also want to stress that grassroots efforts led by dedicated professionals are disproportionately more likely to succeed.Thank you to Agustín Covarrubias, Laura González, Claudette Salinas, Michelle Bruno, Sandra Malagón, Ángela Aristizábal, Pablo Stafforini and Catherine Low for feedback and help editing the post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A digression about area-specific work in LatAm, published by Jaime Sevilla on December 17, 2022 on The Effective Altruism Forum.Written for draft amnesty dayDespite the many successes of the Spanish-Speaker community, two Spanish-Speaking organizers I greatly respect have been repeatedly discouraged from pursuing research in their areas of expertise.Eg a grantmaker suggested them instead working on the translation of key texts to promote Effective Altruism because it might be easier to balance with their other community-building work, despite having no expertise in translation and there already being a translation project in the community. Their grant request was subsequently put on hold.Other members of the international community have echoed the advice in different forms, to the point where the organisers feel upset and disheartened.This dynamic challenges the local organisers' expertise and awareness of the situation of their local community. On a meta-level, this (together with my previous impressions of the topic) suggests to me that the community might need to tone down EA branding promotion in support of doing area-specific work.They have also been encouraged to move to the main EA hubs to gain legitimacy and experience. While this seems hard to fix, it saddens me that there is so little support for capable people who want to develop their career locally.Additionally, I am aware of people outside the Spanish-speaking community who have been encouraged to pursue area-specific projects in LatAm, despite lacking experience in the area or local knowledge. This hinders the intellectual independence of small communities and discourages local efforts. Even though I am glad that this work is being done, this might be a symptom of systematic misprioritization.All in all, these are isolated issues and probably better explained by miscommunication and differences in judgement. However, I believe we should have a lower bar for raising critiques and be more transparent about them — this is my targeted contribution to move the culture of EA in that direction.There is a deeper conversation to be had about the relationship between the international EA community and EA in LatAm and Low and Middle-Income Countries in general. It is not my place to host that conversation, though I hope that by speaking up and seeing the reaction from the community I can help others feel more confident bringing up the issues that worry them.As a takeaway, please do not discourage young professionals from pursuing area-specific work, and seek feedback from local organizers about their community’s needs. I also want to stress that grassroots efforts led by dedicated professionals are disproportionately more likely to succeed.Thank you to Agustín Covarrubias, Laura González, Claudette Salinas, Michelle Bruno, Sandra Malagón, Ángela Aristizábal, Pablo Stafforini and Catherine Low for feedback and help editing the post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jaime Sevilla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:50 None full 4163
2XFxKpRAqNErajF88_EA EA - The Rules of Rescue - out now! by Theron Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Rules of Rescue - out now!, published by Theron on December 17, 2022 on The Effective Altruism Forum.My book—The Rules of Rescue—is officially out! It’s Open Access, so you can download a PDF for free.The book deals with a host of questions that have bothered me for a long time. What costs are we morally required to incur to rescue strangers? Is failing to assist distant people in need morally like letting nearby people drown? When do the needs of the many outweigh the needs of the few?I defend a novel picture of the moral reasons and requirements to use time, money, and other resources to help others the most. It’s a non-consequentialist picture according to which there are significant permissions not to spend your life helping, as well as robust constraints against helping when it involves harming, lying, or stealing. Requirements to help are grounded not in the promotion of goodness per se, but in the prevention of serious harm to imperiled individuals, whether near or far.I argue that altruistic activities are often constrained by requirements of effectiveness, to direct resources in ways that help more rather than less. I explore effectiveness in the context of what you’re required to do over the course of your whole life. I conclude that even if you’re not morally required to be an effective altruist, you may have to help as much as if you were one.Visit the book’s website:/Buy a printed copy:Buy an ebook (read aloud):Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Theron https://forum.effectivealtruism.org/posts/2XFxKpRAqNErajF88/the-rules-of-rescue-out-now Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Rules of Rescue - out now!, published by Theron on December 17, 2022 on The Effective Altruism Forum.My book—The Rules of Rescue—is officially out! It’s Open Access, so you can download a PDF for free.The book deals with a host of questions that have bothered me for a long time. What costs are we morally required to incur to rescue strangers? Is failing to assist distant people in need morally like letting nearby people drown? When do the needs of the many outweigh the needs of the few?I defend a novel picture of the moral reasons and requirements to use time, money, and other resources to help others the most. It’s a non-consequentialist picture according to which there are significant permissions not to spend your life helping, as well as robust constraints against helping when it involves harming, lying, or stealing. Requirements to help are grounded not in the promotion of goodness per se, but in the prevention of serious harm to imperiled individuals, whether near or far.I argue that altruistic activities are often constrained by requirements of effectiveness, to direct resources in ways that help more rather than less. I explore effectiveness in the context of what you’re required to do over the course of your whole life. I conclude that even if you’re not morally required to be an effective altruist, you may have to help as much as if you were one.Visit the book’s website:/Buy a printed copy:Buy an ebook (read aloud):Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 17 Dec 2022 21:02:13 +0000 EA - The Rules of Rescue - out now! by Theron Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Rules of Rescue - out now!, published by Theron on December 17, 2022 on The Effective Altruism Forum.My book—The Rules of Rescue—is officially out! It’s Open Access, so you can download a PDF for free.The book deals with a host of questions that have bothered me for a long time. What costs are we morally required to incur to rescue strangers? Is failing to assist distant people in need morally like letting nearby people drown? When do the needs of the many outweigh the needs of the few?I defend a novel picture of the moral reasons and requirements to use time, money, and other resources to help others the most. It’s a non-consequentialist picture according to which there are significant permissions not to spend your life helping, as well as robust constraints against helping when it involves harming, lying, or stealing. Requirements to help are grounded not in the promotion of goodness per se, but in the prevention of serious harm to imperiled individuals, whether near or far.I argue that altruistic activities are often constrained by requirements of effectiveness, to direct resources in ways that help more rather than less. I explore effectiveness in the context of what you’re required to do over the course of your whole life. I conclude that even if you’re not morally required to be an effective altruist, you may have to help as much as if you were one.Visit the book’s website:/Buy a printed copy:Buy an ebook (read aloud):Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Rules of Rescue - out now!, published by Theron on December 17, 2022 on The Effective Altruism Forum.My book—The Rules of Rescue—is officially out! It’s Open Access, so you can download a PDF for free.The book deals with a host of questions that have bothered me for a long time. What costs are we morally required to incur to rescue strangers? Is failing to assist distant people in need morally like letting nearby people drown? When do the needs of the many outweigh the needs of the few?I defend a novel picture of the moral reasons and requirements to use time, money, and other resources to help others the most. It’s a non-consequentialist picture according to which there are significant permissions not to spend your life helping, as well as robust constraints against helping when it involves harming, lying, or stealing. Requirements to help are grounded not in the promotion of goodness per se, but in the prevention of serious harm to imperiled individuals, whether near or far.I argue that altruistic activities are often constrained by requirements of effectiveness, to direct resources in ways that help more rather than less. I explore effectiveness in the context of what you’re required to do over the course of your whole life. I conclude that even if you’re not morally required to be an effective altruist, you may have to help as much as if you were one.Visit the book’s website:/Buy a printed copy:Buy an ebook (read aloud):Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Theron https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:37 None full 4158
3M7iL4cb3LJjj7zNp_EA EA - Effective altruism is worse than traditional philanthropy in the way it excludes the extreme poor in the global south. by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruism is worse than traditional philanthropy in the way it excludes the extreme poor in the global south., published by Jaime Sevilla on December 17, 2022 on The Effective Altruism Forum.Interesting article from Anthony Kalulu from Uganda.It argues that EA recommended charities have very little impact in the average poor in Uganda.if you randomly asked one of the people who themselves live in abject poverty, there is no chance that they will mention one of EA’s supported “effective” charities, as having impacted their lives more than the work of traditional global antipoverty agencies. No. That’s out of question.Anthony argues too that EA solutions are not persistentIf you visited a truly impoverished country like Uganda, you will quickly notice that many of the things that effective altruists call “effective” — from mosquito nets, to $100 business grants that are provided to groups of 3 people — are the same short-term, disposable solutions that have not only kept their recipients in abject poverty, but also, they are the very kind of solutions that often disappear the same day their proponents exit.And that the solutions implemented do not match the communities needs:In my region of Busoga, Uganda’s most impoverished region, we have one [well-funded] international charity which is among those described by the EA movement as being “effective”. That charity is also working with rural poor farmers here, principally on maize.But the thing is: every household in our region that depends on maize, lives in chronic extreme poverty, and has lived in chronic poverty for eternity. Neither the effective charity nor the other big antipoverty agencies that came before it, have changed this.By contrast, those farmers who are growing crops like sugarcane, no charity or antipoverty agency has ever supported them. But today, every village in our region that you visit, is covered with sugarcane. It is also the same with many other crops (rice, tomatoes, water melon etc) that are at least providing rural farmers with some tangible income.The solution advocated by Anthony through the article are grassroots organizations:In the name of being “effective”, EA has instead indoctrinated its followers to strictly support a small, select list of charities that have been labelled “most effective” by the movement’s own charity raters like GiveWell, Giving What We Can, The Life You Can Save etc, of which the named charities, right now, are all western.But that is the very ingredient that makes traditional philanthropy a sector that keeps the world’s extreme poor on the sidelines. And by consolidating itself as a movement that completely never supports grassroots organizations directly, EA has proved to be more of a blockade to those of us who live in ultra poverty, even more than traditional philanthropy.What can we do to help improve this situation?Here is a brainstorming of some ideas:Seek feedback and listen to local though leaders, and invite them to participate in the global conversation about eradicating povertyConduct third-party surveys of the intended beneficiaries of poverty relief. So far the only instrument of this kind I've seen is GiveDirectly's program.If you are a charity working in eg Uganda, try promoting your hiring rounds among your beneficiaries and their surrounding communitiesLead, together with local EA organizers, incubation programs of grassroot and effective effortsTry to identify the best local grassroot efforts to support. This might be tricky because these efforts are usually not scalable, so spending $100k on a study might be pointless when the org only has capacity to absorb $50k. Maybe we could study better the effects of a grassroot support foundation.Train and support local EA organizers to run talent identification and nurturin...]]>
Jaime Sevilla https://forum.effectivealtruism.org/posts/3M7iL4cb3LJjj7zNp/effective-altruism-is-worse-than-traditional-philanthropy-in Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruism is worse than traditional philanthropy in the way it excludes the extreme poor in the global south., published by Jaime Sevilla on December 17, 2022 on The Effective Altruism Forum.Interesting article from Anthony Kalulu from Uganda.It argues that EA recommended charities have very little impact in the average poor in Uganda.if you randomly asked one of the people who themselves live in abject poverty, there is no chance that they will mention one of EA’s supported “effective” charities, as having impacted their lives more than the work of traditional global antipoverty agencies. No. That’s out of question.Anthony argues too that EA solutions are not persistentIf you visited a truly impoverished country like Uganda, you will quickly notice that many of the things that effective altruists call “effective” — from mosquito nets, to $100 business grants that are provided to groups of 3 people — are the same short-term, disposable solutions that have not only kept their recipients in abject poverty, but also, they are the very kind of solutions that often disappear the same day their proponents exit.And that the solutions implemented do not match the communities needs:In my region of Busoga, Uganda’s most impoverished region, we have one [well-funded] international charity which is among those described by the EA movement as being “effective”. That charity is also working with rural poor farmers here, principally on maize.But the thing is: every household in our region that depends on maize, lives in chronic extreme poverty, and has lived in chronic poverty for eternity. Neither the effective charity nor the other big antipoverty agencies that came before it, have changed this.By contrast, those farmers who are growing crops like sugarcane, no charity or antipoverty agency has ever supported them. But today, every village in our region that you visit, is covered with sugarcane. It is also the same with many other crops (rice, tomatoes, water melon etc) that are at least providing rural farmers with some tangible income.The solution advocated by Anthony through the article are grassroots organizations:In the name of being “effective”, EA has instead indoctrinated its followers to strictly support a small, select list of charities that have been labelled “most effective” by the movement’s own charity raters like GiveWell, Giving What We Can, The Life You Can Save etc, of which the named charities, right now, are all western.But that is the very ingredient that makes traditional philanthropy a sector that keeps the world’s extreme poor on the sidelines. And by consolidating itself as a movement that completely never supports grassroots organizations directly, EA has proved to be more of a blockade to those of us who live in ultra poverty, even more than traditional philanthropy.What can we do to help improve this situation?Here is a brainstorming of some ideas:Seek feedback and listen to local though leaders, and invite them to participate in the global conversation about eradicating povertyConduct third-party surveys of the intended beneficiaries of poverty relief. So far the only instrument of this kind I've seen is GiveDirectly's program.If you are a charity working in eg Uganda, try promoting your hiring rounds among your beneficiaries and their surrounding communitiesLead, together with local EA organizers, incubation programs of grassroot and effective effortsTry to identify the best local grassroot efforts to support. This might be tricky because these efforts are usually not scalable, so spending $100k on a study might be pointless when the org only has capacity to absorb $50k. Maybe we could study better the effects of a grassroot support foundation.Train and support local EA organizers to run talent identification and nurturin...]]>
Sat, 17 Dec 2022 19:43:34 +0000 EA - Effective altruism is worse than traditional philanthropy in the way it excludes the extreme poor in the global south. by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruism is worse than traditional philanthropy in the way it excludes the extreme poor in the global south., published by Jaime Sevilla on December 17, 2022 on The Effective Altruism Forum.Interesting article from Anthony Kalulu from Uganda.It argues that EA recommended charities have very little impact in the average poor in Uganda.if you randomly asked one of the people who themselves live in abject poverty, there is no chance that they will mention one of EA’s supported “effective” charities, as having impacted their lives more than the work of traditional global antipoverty agencies. No. That’s out of question.Anthony argues too that EA solutions are not persistentIf you visited a truly impoverished country like Uganda, you will quickly notice that many of the things that effective altruists call “effective” — from mosquito nets, to $100 business grants that are provided to groups of 3 people — are the same short-term, disposable solutions that have not only kept their recipients in abject poverty, but also, they are the very kind of solutions that often disappear the same day their proponents exit.And that the solutions implemented do not match the communities needs:In my region of Busoga, Uganda’s most impoverished region, we have one [well-funded] international charity which is among those described by the EA movement as being “effective”. That charity is also working with rural poor farmers here, principally on maize.But the thing is: every household in our region that depends on maize, lives in chronic extreme poverty, and has lived in chronic poverty for eternity. Neither the effective charity nor the other big antipoverty agencies that came before it, have changed this.By contrast, those farmers who are growing crops like sugarcane, no charity or antipoverty agency has ever supported them. But today, every village in our region that you visit, is covered with sugarcane. It is also the same with many other crops (rice, tomatoes, water melon etc) that are at least providing rural farmers with some tangible income.The solution advocated by Anthony through the article are grassroots organizations:In the name of being “effective”, EA has instead indoctrinated its followers to strictly support a small, select list of charities that have been labelled “most effective” by the movement’s own charity raters like GiveWell, Giving What We Can, The Life You Can Save etc, of which the named charities, right now, are all western.But that is the very ingredient that makes traditional philanthropy a sector that keeps the world’s extreme poor on the sidelines. And by consolidating itself as a movement that completely never supports grassroots organizations directly, EA has proved to be more of a blockade to those of us who live in ultra poverty, even more than traditional philanthropy.What can we do to help improve this situation?Here is a brainstorming of some ideas:Seek feedback and listen to local though leaders, and invite them to participate in the global conversation about eradicating povertyConduct third-party surveys of the intended beneficiaries of poverty relief. So far the only instrument of this kind I've seen is GiveDirectly's program.If you are a charity working in eg Uganda, try promoting your hiring rounds among your beneficiaries and their surrounding communitiesLead, together with local EA organizers, incubation programs of grassroot and effective effortsTry to identify the best local grassroot efforts to support. This might be tricky because these efforts are usually not scalable, so spending $100k on a study might be pointless when the org only has capacity to absorb $50k. Maybe we could study better the effects of a grassroot support foundation.Train and support local EA organizers to run talent identification and nurturin...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruism is worse than traditional philanthropy in the way it excludes the extreme poor in the global south., published by Jaime Sevilla on December 17, 2022 on The Effective Altruism Forum.Interesting article from Anthony Kalulu from Uganda.It argues that EA recommended charities have very little impact in the average poor in Uganda.if you randomly asked one of the people who themselves live in abject poverty, there is no chance that they will mention one of EA’s supported “effective” charities, as having impacted their lives more than the work of traditional global antipoverty agencies. No. That’s out of question.Anthony argues too that EA solutions are not persistentIf you visited a truly impoverished country like Uganda, you will quickly notice that many of the things that effective altruists call “effective” — from mosquito nets, to $100 business grants that are provided to groups of 3 people — are the same short-term, disposable solutions that have not only kept their recipients in abject poverty, but also, they are the very kind of solutions that often disappear the same day their proponents exit.And that the solutions implemented do not match the communities needs:In my region of Busoga, Uganda’s most impoverished region, we have one [well-funded] international charity which is among those described by the EA movement as being “effective”. That charity is also working with rural poor farmers here, principally on maize.But the thing is: every household in our region that depends on maize, lives in chronic extreme poverty, and has lived in chronic poverty for eternity. Neither the effective charity nor the other big antipoverty agencies that came before it, have changed this.By contrast, those farmers who are growing crops like sugarcane, no charity or antipoverty agency has ever supported them. But today, every village in our region that you visit, is covered with sugarcane. It is also the same with many other crops (rice, tomatoes, water melon etc) that are at least providing rural farmers with some tangible income.The solution advocated by Anthony through the article are grassroots organizations:In the name of being “effective”, EA has instead indoctrinated its followers to strictly support a small, select list of charities that have been labelled “most effective” by the movement’s own charity raters like GiveWell, Giving What We Can, The Life You Can Save etc, of which the named charities, right now, are all western.But that is the very ingredient that makes traditional philanthropy a sector that keeps the world’s extreme poor on the sidelines. And by consolidating itself as a movement that completely never supports grassroots organizations directly, EA has proved to be more of a blockade to those of us who live in ultra poverty, even more than traditional philanthropy.What can we do to help improve this situation?Here is a brainstorming of some ideas:Seek feedback and listen to local though leaders, and invite them to participate in the global conversation about eradicating povertyConduct third-party surveys of the intended beneficiaries of poverty relief. So far the only instrument of this kind I've seen is GiveDirectly's program.If you are a charity working in eg Uganda, try promoting your hiring rounds among your beneficiaries and their surrounding communitiesLead, together with local EA organizers, incubation programs of grassroot and effective effortsTry to identify the best local grassroot efforts to support. This might be tricky because these efforts are usually not scalable, so spending $100k on a study might be pointless when the org only has capacity to absorb $50k. Maybe we could study better the effects of a grassroot support foundation.Train and support local EA organizers to run talent identification and nurturin...]]>
Jaime Sevilla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:57 None full 4157
jgsFfsDPzjdjbPhB6_EA EA - Concerns over EA’s possible neglect of experts by Jack Malde Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concerns over EA’s possible neglect of experts, published by Jack Malde on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, I haven’t checked everything, and it's unfinished. I was explicitly encouraged to post something like this!Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.I am becoming increasingly concerned that EA is neglecting experts when it comes to research. I’m not saying that EA organisations don’t produce high quality research, but I have a feeling that the research could be of an even higher quality if we were to embrace experts more.Epistemic status: not that confident that what I’m saying is valid. Maybe experts are utilised more than I realise. Maybe the people I mention below can reasonably be considered experts. I also haven’t done an in-depth exploration of all relevant research to judge how widespread the problem might be (if it is indeed a problem)Research examples I’m NOT concerned byLet me start with some good examples (there are certainly more than I am listing here!).In 2021 Open Phil commissioned a report from David Humbird on the potential for cultured meat production to scale up to the point where it would be sufficiently available and affordable to replace a substantial portion of global meat consumption. Humbird has a PhD in chemical engineering and has extensive career experience in process engineering and techno-economic analysis, including the provision of consultancy services. In short, he seems like a great choice to carry out this research.Another example I am pleased by is Will MacAskill as author of What We Owe the Future. I cannot think of a better author of this book. Will is a respected philosopher, and a central figure in the EA movement. This book outlines the philosophical argument for longtermism, a key school of thought within EA. Boy am I happy that Will wrote this book.Other examples I was planning to write up:Modeling the Human Trajectory - David RoodmanRoodman seems qualified to deliver this research.Wild Animal Initiative research such as thisI like that they collaborated with Samniqueka J. Halsey who is an assistant professorSome examples I’m concerned byOpen Phil’s research on AIIn 2020 Ajeya Cotra, a Senior Research Analyst at Open Phil, wrote a report on timelines to transformative AI. I have no doubt that the report is high-quality and that Ajeya is very intelligent. However, this is a very technical subject and, beyond having a bachelor’s degree in Electrical Engineering and Computer Science, I don’t see why Ajeya would be the first choice to write this report. Why wouldn’t Open Phil have commissioned an expert in AI development / computational neuroscience etc. to write this report, similar to what they did with David Humbird (see above)? Ajeya’s report had Paul Christiano and Dario Amodei as advisors, which is good, but advisors generally have limited input. Wouldn’t it have been better to have an expert as first author?All the above applies to another Open Phil AI report, this time written by Joe Carlsmith. Joe is a philosopher by training, and whilst that isn’t completely irrelevant, it once again seems to me that a better choice could have been found. Personally I’d prefer that Joe do more philosophy-related work, similar to what Will MacAsKill is doing (see above).Climate Change research(removed mention of Founder's Pledge as per jackva's comment)Climate Change and Longtermism - John HalsteadJohn Halstead doesn't seem to have any formal training in climate science. Not sur...]]>
Jack Malde https://forum.effectivealtruism.org/posts/jgsFfsDPzjdjbPhB6/concerns-over-ea-s-possible-neglect-of-experts Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concerns over EA’s possible neglect of experts, published by Jack Malde on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, I haven’t checked everything, and it's unfinished. I was explicitly encouraged to post something like this!Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.I am becoming increasingly concerned that EA is neglecting experts when it comes to research. I’m not saying that EA organisations don’t produce high quality research, but I have a feeling that the research could be of an even higher quality if we were to embrace experts more.Epistemic status: not that confident that what I’m saying is valid. Maybe experts are utilised more than I realise. Maybe the people I mention below can reasonably be considered experts. I also haven’t done an in-depth exploration of all relevant research to judge how widespread the problem might be (if it is indeed a problem)Research examples I’m NOT concerned byLet me start with some good examples (there are certainly more than I am listing here!).In 2021 Open Phil commissioned a report from David Humbird on the potential for cultured meat production to scale up to the point where it would be sufficiently available and affordable to replace a substantial portion of global meat consumption. Humbird has a PhD in chemical engineering and has extensive career experience in process engineering and techno-economic analysis, including the provision of consultancy services. In short, he seems like a great choice to carry out this research.Another example I am pleased by is Will MacAskill as author of What We Owe the Future. I cannot think of a better author of this book. Will is a respected philosopher, and a central figure in the EA movement. This book outlines the philosophical argument for longtermism, a key school of thought within EA. Boy am I happy that Will wrote this book.Other examples I was planning to write up:Modeling the Human Trajectory - David RoodmanRoodman seems qualified to deliver this research.Wild Animal Initiative research such as thisI like that they collaborated with Samniqueka J. Halsey who is an assistant professorSome examples I’m concerned byOpen Phil’s research on AIIn 2020 Ajeya Cotra, a Senior Research Analyst at Open Phil, wrote a report on timelines to transformative AI. I have no doubt that the report is high-quality and that Ajeya is very intelligent. However, this is a very technical subject and, beyond having a bachelor’s degree in Electrical Engineering and Computer Science, I don’t see why Ajeya would be the first choice to write this report. Why wouldn’t Open Phil have commissioned an expert in AI development / computational neuroscience etc. to write this report, similar to what they did with David Humbird (see above)? Ajeya’s report had Paul Christiano and Dario Amodei as advisors, which is good, but advisors generally have limited input. Wouldn’t it have been better to have an expert as first author?All the above applies to another Open Phil AI report, this time written by Joe Carlsmith. Joe is a philosopher by training, and whilst that isn’t completely irrelevant, it once again seems to me that a better choice could have been found. Personally I’d prefer that Joe do more philosophy-related work, similar to what Will MacAsKill is doing (see above).Climate Change research(removed mention of Founder's Pledge as per jackva's comment)Climate Change and Longtermism - John HalsteadJohn Halstead doesn't seem to have any formal training in climate science. Not sur...]]>
Sat, 17 Dec 2022 10:50:01 +0000 EA - Concerns over EA’s possible neglect of experts by Jack Malde Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concerns over EA’s possible neglect of experts, published by Jack Malde on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, I haven’t checked everything, and it's unfinished. I was explicitly encouraged to post something like this!Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.I am becoming increasingly concerned that EA is neglecting experts when it comes to research. I’m not saying that EA organisations don’t produce high quality research, but I have a feeling that the research could be of an even higher quality if we were to embrace experts more.Epistemic status: not that confident that what I’m saying is valid. Maybe experts are utilised more than I realise. Maybe the people I mention below can reasonably be considered experts. I also haven’t done an in-depth exploration of all relevant research to judge how widespread the problem might be (if it is indeed a problem)Research examples I’m NOT concerned byLet me start with some good examples (there are certainly more than I am listing here!).In 2021 Open Phil commissioned a report from David Humbird on the potential for cultured meat production to scale up to the point where it would be sufficiently available and affordable to replace a substantial portion of global meat consumption. Humbird has a PhD in chemical engineering and has extensive career experience in process engineering and techno-economic analysis, including the provision of consultancy services. In short, he seems like a great choice to carry out this research.Another example I am pleased by is Will MacAskill as author of What We Owe the Future. I cannot think of a better author of this book. Will is a respected philosopher, and a central figure in the EA movement. This book outlines the philosophical argument for longtermism, a key school of thought within EA. Boy am I happy that Will wrote this book.Other examples I was planning to write up:Modeling the Human Trajectory - David RoodmanRoodman seems qualified to deliver this research.Wild Animal Initiative research such as thisI like that they collaborated with Samniqueka J. Halsey who is an assistant professorSome examples I’m concerned byOpen Phil’s research on AIIn 2020 Ajeya Cotra, a Senior Research Analyst at Open Phil, wrote a report on timelines to transformative AI. I have no doubt that the report is high-quality and that Ajeya is very intelligent. However, this is a very technical subject and, beyond having a bachelor’s degree in Electrical Engineering and Computer Science, I don’t see why Ajeya would be the first choice to write this report. Why wouldn’t Open Phil have commissioned an expert in AI development / computational neuroscience etc. to write this report, similar to what they did with David Humbird (see above)? Ajeya’s report had Paul Christiano and Dario Amodei as advisors, which is good, but advisors generally have limited input. Wouldn’t it have been better to have an expert as first author?All the above applies to another Open Phil AI report, this time written by Joe Carlsmith. Joe is a philosopher by training, and whilst that isn’t completely irrelevant, it once again seems to me that a better choice could have been found. Personally I’d prefer that Joe do more philosophy-related work, similar to what Will MacAsKill is doing (see above).Climate Change research(removed mention of Founder's Pledge as per jackva's comment)Climate Change and Longtermism - John HalsteadJohn Halstead doesn't seem to have any formal training in climate science. Not sur...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concerns over EA’s possible neglect of experts, published by Jack Malde on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, I haven’t checked everything, and it's unfinished. I was explicitly encouraged to post something like this!Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.I am becoming increasingly concerned that EA is neglecting experts when it comes to research. I’m not saying that EA organisations don’t produce high quality research, but I have a feeling that the research could be of an even higher quality if we were to embrace experts more.Epistemic status: not that confident that what I’m saying is valid. Maybe experts are utilised more than I realise. Maybe the people I mention below can reasonably be considered experts. I also haven’t done an in-depth exploration of all relevant research to judge how widespread the problem might be (if it is indeed a problem)Research examples I’m NOT concerned byLet me start with some good examples (there are certainly more than I am listing here!).In 2021 Open Phil commissioned a report from David Humbird on the potential for cultured meat production to scale up to the point where it would be sufficiently available and affordable to replace a substantial portion of global meat consumption. Humbird has a PhD in chemical engineering and has extensive career experience in process engineering and techno-economic analysis, including the provision of consultancy services. In short, he seems like a great choice to carry out this research.Another example I am pleased by is Will MacAskill as author of What We Owe the Future. I cannot think of a better author of this book. Will is a respected philosopher, and a central figure in the EA movement. This book outlines the philosophical argument for longtermism, a key school of thought within EA. Boy am I happy that Will wrote this book.Other examples I was planning to write up:Modeling the Human Trajectory - David RoodmanRoodman seems qualified to deliver this research.Wild Animal Initiative research such as thisI like that they collaborated with Samniqueka J. Halsey who is an assistant professorSome examples I’m concerned byOpen Phil’s research on AIIn 2020 Ajeya Cotra, a Senior Research Analyst at Open Phil, wrote a report on timelines to transformative AI. I have no doubt that the report is high-quality and that Ajeya is very intelligent. However, this is a very technical subject and, beyond having a bachelor’s degree in Electrical Engineering and Computer Science, I don’t see why Ajeya would be the first choice to write this report. Why wouldn’t Open Phil have commissioned an expert in AI development / computational neuroscience etc. to write this report, similar to what they did with David Humbird (see above)? Ajeya’s report had Paul Christiano and Dario Amodei as advisors, which is good, but advisors generally have limited input. Wouldn’t it have been better to have an expert as first author?All the above applies to another Open Phil AI report, this time written by Joe Carlsmith. Joe is a philosopher by training, and whilst that isn’t completely irrelevant, it once again seems to me that a better choice could have been found. Personally I’d prefer that Joe do more philosophy-related work, similar to what Will MacAsKill is doing (see above).Climate Change research(removed mention of Founder's Pledge as per jackva's comment)Climate Change and Longtermism - John HalsteadJohn Halstead doesn't seem to have any formal training in climate science. Not sur...]]>
Jack Malde https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:53 None full 4149
PP8jQFp7FaDieMbid_EA EA - Seeking feedback on a MOOC draft plan: Skills for Doing Good Better by Michael Noetel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Seeking feedback on a MOOC draft plan: Skills for Doing Good Better, published by Michael Noetel on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not perfect, and I haven’t checked everything. I was explicitly encouraged to post something imperfect! Commenting and feedback guidelines. I'm slightly nervous about the amount of feedback I'll get on this, but am doing it because I sincerely do want constructive feedback. Please let me know if my assumptions are wrong, my plans misguided, my focus for the course poorly calibrated, or my examples un-compelling. Google Doc for commenting available upon request.A few months ago, I posted a diatribe on how to better use evidence to inform education and field building in effective altruism. This a draft where I try to practice what I preach. John Walker and I have been exploring ideas for another EA Massively Open Online Course (MOOC). This is currently Plan A.Big-picture intentWhy make a MOOC?There are many great forms of outreach to build the community of people who care about effective altruism and existential risk. In contrast with most approaches, MOOCs provide:Authority, via their university affiliationCredentials for completion, with a university badgeMinimal marginal cost per learner, by designDesigned well, they can provide high-quality, evidence informed learning environments (e.g., with professional multimedia and interactive learning). These resources can feed into the other methods of outreach (e.g., fellowships).Course pitch to learnersMany people want to do good with their lives and careers.The problem is: many well-intentioned attempts to improve the world are ineffective. Some are even harmful.With the right skills, you can have a massive impact on the world.This MOOC provides many of those skills.Through this course, you’ll learn how to use your time and money to do as much good as possible, and give you a platform to keep learning about how to improve the world.Underlying assumptionsTo be transparent, most of these assumptions are not based on direct data from the community or general public. Where support is available, I’ve linked to the relevant section of resources. Where not, I’ve listed methods of testing those assumptions but am open to corrections or other methods.There are some skills and frameworks used in EA that help us answer the essential question: “how can I do the most good, with the resources available to me?” (see also, MacAskill)Examples of what we mean by ‘skills and frameworks’current CEA website: scope sensitivity, trade-offs, scout mindset, impartiality, (less confidently: expected value, thinking on the margin, consequentialism, importance/value of unusual ideas, INT framework, crucial considerations, forecasting, and fermi estimatesSee old whatiseffectivealtruism.com page: maximisation, rationality, cosmopolitanism, cost-effectiveness, cause neutrality, counterfactual reasoningThese skills and frameworks are less ‘double-edged’ than the moral philosophyThe moral obligations in EA (e.g., Singer’s drowning child) are a source of motivation for many but also a source of burnout and distress (see also forum tags on Demandingness of morality and Excited vs. obligatory altruism)In contrast, the evidence that ‘improving competence and confidence leads to sustainable motivation’ is supported by dozens of meta-analyses across domains with no major downsides, to my knowledgeThese skills and frameworks are less controversial than the ‘answers’ (e.g., existential risk, farmed animal welfare)As put by CEA, “we are more confident in the core principles”Misconceptions about EA (e.g., per 80k: ‘EA is just about fighting poverty’; ‘EA ignores systemic chan...]]>
Michael Noetel https://forum.effectivealtruism.org/posts/PP8jQFp7FaDieMbid/seeking-feedback-on-a-mooc-draft-plan-skills-for-doing-good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Seeking feedback on a MOOC draft plan: Skills for Doing Good Better, published by Michael Noetel on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not perfect, and I haven’t checked everything. I was explicitly encouraged to post something imperfect! Commenting and feedback guidelines. I'm slightly nervous about the amount of feedback I'll get on this, but am doing it because I sincerely do want constructive feedback. Please let me know if my assumptions are wrong, my plans misguided, my focus for the course poorly calibrated, or my examples un-compelling. Google Doc for commenting available upon request.A few months ago, I posted a diatribe on how to better use evidence to inform education and field building in effective altruism. This a draft where I try to practice what I preach. John Walker and I have been exploring ideas for another EA Massively Open Online Course (MOOC). This is currently Plan A.Big-picture intentWhy make a MOOC?There are many great forms of outreach to build the community of people who care about effective altruism and existential risk. In contrast with most approaches, MOOCs provide:Authority, via their university affiliationCredentials for completion, with a university badgeMinimal marginal cost per learner, by designDesigned well, they can provide high-quality, evidence informed learning environments (e.g., with professional multimedia and interactive learning). These resources can feed into the other methods of outreach (e.g., fellowships).Course pitch to learnersMany people want to do good with their lives and careers.The problem is: many well-intentioned attempts to improve the world are ineffective. Some are even harmful.With the right skills, you can have a massive impact on the world.This MOOC provides many of those skills.Through this course, you’ll learn how to use your time and money to do as much good as possible, and give you a platform to keep learning about how to improve the world.Underlying assumptionsTo be transparent, most of these assumptions are not based on direct data from the community or general public. Where support is available, I’ve linked to the relevant section of resources. Where not, I’ve listed methods of testing those assumptions but am open to corrections or other methods.There are some skills and frameworks used in EA that help us answer the essential question: “how can I do the most good, with the resources available to me?” (see also, MacAskill)Examples of what we mean by ‘skills and frameworks’current CEA website: scope sensitivity, trade-offs, scout mindset, impartiality, (less confidently: expected value, thinking on the margin, consequentialism, importance/value of unusual ideas, INT framework, crucial considerations, forecasting, and fermi estimatesSee old whatiseffectivealtruism.com page: maximisation, rationality, cosmopolitanism, cost-effectiveness, cause neutrality, counterfactual reasoningThese skills and frameworks are less ‘double-edged’ than the moral philosophyThe moral obligations in EA (e.g., Singer’s drowning child) are a source of motivation for many but also a source of burnout and distress (see also forum tags on Demandingness of morality and Excited vs. obligatory altruism)In contrast, the evidence that ‘improving competence and confidence leads to sustainable motivation’ is supported by dozens of meta-analyses across domains with no major downsides, to my knowledgeThese skills and frameworks are less controversial than the ‘answers’ (e.g., existential risk, farmed animal welfare)As put by CEA, “we are more confident in the core principles”Misconceptions about EA (e.g., per 80k: ‘EA is just about fighting poverty’; ‘EA ignores systemic chan...]]>
Sat, 17 Dec 2022 09:01:51 +0000 EA - Seeking feedback on a MOOC draft plan: Skills for Doing Good Better by Michael Noetel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Seeking feedback on a MOOC draft plan: Skills for Doing Good Better, published by Michael Noetel on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not perfect, and I haven’t checked everything. I was explicitly encouraged to post something imperfect! Commenting and feedback guidelines. I'm slightly nervous about the amount of feedback I'll get on this, but am doing it because I sincerely do want constructive feedback. Please let me know if my assumptions are wrong, my plans misguided, my focus for the course poorly calibrated, or my examples un-compelling. Google Doc for commenting available upon request.A few months ago, I posted a diatribe on how to better use evidence to inform education and field building in effective altruism. This a draft where I try to practice what I preach. John Walker and I have been exploring ideas for another EA Massively Open Online Course (MOOC). This is currently Plan A.Big-picture intentWhy make a MOOC?There are many great forms of outreach to build the community of people who care about effective altruism and existential risk. In contrast with most approaches, MOOCs provide:Authority, via their university affiliationCredentials for completion, with a university badgeMinimal marginal cost per learner, by designDesigned well, they can provide high-quality, evidence informed learning environments (e.g., with professional multimedia and interactive learning). These resources can feed into the other methods of outreach (e.g., fellowships).Course pitch to learnersMany people want to do good with their lives and careers.The problem is: many well-intentioned attempts to improve the world are ineffective. Some are even harmful.With the right skills, you can have a massive impact on the world.This MOOC provides many of those skills.Through this course, you’ll learn how to use your time and money to do as much good as possible, and give you a platform to keep learning about how to improve the world.Underlying assumptionsTo be transparent, most of these assumptions are not based on direct data from the community or general public. Where support is available, I’ve linked to the relevant section of resources. Where not, I’ve listed methods of testing those assumptions but am open to corrections or other methods.There are some skills and frameworks used in EA that help us answer the essential question: “how can I do the most good, with the resources available to me?” (see also, MacAskill)Examples of what we mean by ‘skills and frameworks’current CEA website: scope sensitivity, trade-offs, scout mindset, impartiality, (less confidently: expected value, thinking on the margin, consequentialism, importance/value of unusual ideas, INT framework, crucial considerations, forecasting, and fermi estimatesSee old whatiseffectivealtruism.com page: maximisation, rationality, cosmopolitanism, cost-effectiveness, cause neutrality, counterfactual reasoningThese skills and frameworks are less ‘double-edged’ than the moral philosophyThe moral obligations in EA (e.g., Singer’s drowning child) are a source of motivation for many but also a source of burnout and distress (see also forum tags on Demandingness of morality and Excited vs. obligatory altruism)In contrast, the evidence that ‘improving competence and confidence leads to sustainable motivation’ is supported by dozens of meta-analyses across domains with no major downsides, to my knowledgeThese skills and frameworks are less controversial than the ‘answers’ (e.g., existential risk, farmed animal welfare)As put by CEA, “we are more confident in the core principles”Misconceptions about EA (e.g., per 80k: ‘EA is just about fighting poverty’; ‘EA ignores systemic chan...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Seeking feedback on a MOOC draft plan: Skills for Doing Good Better, published by Michael Noetel on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not perfect, and I haven’t checked everything. I was explicitly encouraged to post something imperfect! Commenting and feedback guidelines. I'm slightly nervous about the amount of feedback I'll get on this, but am doing it because I sincerely do want constructive feedback. Please let me know if my assumptions are wrong, my plans misguided, my focus for the course poorly calibrated, or my examples un-compelling. Google Doc for commenting available upon request.A few months ago, I posted a diatribe on how to better use evidence to inform education and field building in effective altruism. This a draft where I try to practice what I preach. John Walker and I have been exploring ideas for another EA Massively Open Online Course (MOOC). This is currently Plan A.Big-picture intentWhy make a MOOC?There are many great forms of outreach to build the community of people who care about effective altruism and existential risk. In contrast with most approaches, MOOCs provide:Authority, via their university affiliationCredentials for completion, with a university badgeMinimal marginal cost per learner, by designDesigned well, they can provide high-quality, evidence informed learning environments (e.g., with professional multimedia and interactive learning). These resources can feed into the other methods of outreach (e.g., fellowships).Course pitch to learnersMany people want to do good with their lives and careers.The problem is: many well-intentioned attempts to improve the world are ineffective. Some are even harmful.With the right skills, you can have a massive impact on the world.This MOOC provides many of those skills.Through this course, you’ll learn how to use your time and money to do as much good as possible, and give you a platform to keep learning about how to improve the world.Underlying assumptionsTo be transparent, most of these assumptions are not based on direct data from the community or general public. Where support is available, I’ve linked to the relevant section of resources. Where not, I’ve listed methods of testing those assumptions but am open to corrections or other methods.There are some skills and frameworks used in EA that help us answer the essential question: “how can I do the most good, with the resources available to me?” (see also, MacAskill)Examples of what we mean by ‘skills and frameworks’current CEA website: scope sensitivity, trade-offs, scout mindset, impartiality, (less confidently: expected value, thinking on the margin, consequentialism, importance/value of unusual ideas, INT framework, crucial considerations, forecasting, and fermi estimatesSee old whatiseffectivealtruism.com page: maximisation, rationality, cosmopolitanism, cost-effectiveness, cause neutrality, counterfactual reasoningThese skills and frameworks are less ‘double-edged’ than the moral philosophyThe moral obligations in EA (e.g., Singer’s drowning child) are a source of motivation for many but also a source of burnout and distress (see also forum tags on Demandingness of morality and Excited vs. obligatory altruism)In contrast, the evidence that ‘improving competence and confidence leads to sustainable motivation’ is supported by dozens of meta-analyses across domains with no major downsides, to my knowledgeThese skills and frameworks are less controversial than the ‘answers’ (e.g., existential risk, farmed animal welfare)As put by CEA, “we are more confident in the core principles”Misconceptions about EA (e.g., per 80k: ‘EA is just about fighting poverty’; ‘EA ignores systemic chan...]]>
Michael Noetel https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 41:01 None full 4152
cGM86RhxMdfDYbQnn_EA EA - We should say more than “x-risk is high” by OllieBase Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We should say more than “x-risk is high”, published by OllieBase on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Epistemic status: outlining a take that I think is maybe 50% likely to be right. Also on my blog.Some people have recently argued that, in order to persuade people to work on high priority issues such as AI safety and biosecurity, effective altruists only need to point to how high existential risk (x-risk) is, and don’t need to make the case for longtermism or broader EA principles. E.g.Neel Nanda argues that if you believe the key claims of "there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime" this is enough to justify the core action relevant points of EA.AISafetyIsNotLongtermist argues that the chance of the author dying prematurely because of AI x-risk is sufficiently high (~41%, conditional on their death in the subsequent 30 years) that the pitch for reducing this risk need not appeal to longtermism.The generalised argument, which I’ll call “x-risk is high”, is fairly simple:1) X-risk this century is, or could very plausibly be, very high (>10%)2) X-risk is high enough that it matters to people alive today - e.g. it could result in their premature death.3) The above is sufficient to motivate people to take high-priority paths to reduce x-risk. We don’t need to emphasise anything else, including the philosophical case for the importance of the long-run future.I think this argument holds up. However, I think that outlining the case for longtermism (and EA principles, more broadly) is better for building a community of people who will reliably choose the highest-priority actions and paths to do the most good and that this is better for the world and keeping x-risk low in the long-run. Here are three counterpoints to only using “x-risk is high”:Our situation could changeTrivially, if we successfully reduce x-risk or, after further examination, determine that overall x-risk is much lower than we thought, “x-risk is high” loses its force. If top talent, policymakers or funders convinced by “x-risk is high” learn that x-risk this century is actually much lower, they might move away from these issues. This would be bad because any non-negligible amount of x-risk is still unsustainably high from a longtermist perspective.Our priorities could changeIn the early 2010s, the EA movement was much more focused on funding effective global health charities. What if, at that time, EAs decided to stop explaining the core principles of EA, and instead made the following argument “effective charities are effective”?Effective charities are, or could very plausible be, very effective.Effective charities are effective enough that donating to them is a clear and enormous opportunity to do good.The above is sufficient to motivate people to take high-priority paths, like earning to give. We don’t need to emphasise anything else, including the case for effective altruism.This argument is probably different in important respects to “x-risk is high”, but illustrates how the EA movement could have “locked in” their approach to doing good if they made this argument. If we started using “effective charities are effective” instead of explaining the core principles of EA, it might have taken a lot longer for the EA movement to identify x-risks as a top priority.Our pri...]]>
OllieBase https://forum.effectivealtruism.org/posts/cGM86RhxMdfDYbQnn/we-should-say-more-than-x-risk-is-high Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We should say more than “x-risk is high”, published by OllieBase on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Epistemic status: outlining a take that I think is maybe 50% likely to be right. Also on my blog.Some people have recently argued that, in order to persuade people to work on high priority issues such as AI safety and biosecurity, effective altruists only need to point to how high existential risk (x-risk) is, and don’t need to make the case for longtermism or broader EA principles. E.g.Neel Nanda argues that if you believe the key claims of "there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime" this is enough to justify the core action relevant points of EA.AISafetyIsNotLongtermist argues that the chance of the author dying prematurely because of AI x-risk is sufficiently high (~41%, conditional on their death in the subsequent 30 years) that the pitch for reducing this risk need not appeal to longtermism.The generalised argument, which I’ll call “x-risk is high”, is fairly simple:1) X-risk this century is, or could very plausibly be, very high (>10%)2) X-risk is high enough that it matters to people alive today - e.g. it could result in their premature death.3) The above is sufficient to motivate people to take high-priority paths to reduce x-risk. We don’t need to emphasise anything else, including the philosophical case for the importance of the long-run future.I think this argument holds up. However, I think that outlining the case for longtermism (and EA principles, more broadly) is better for building a community of people who will reliably choose the highest-priority actions and paths to do the most good and that this is better for the world and keeping x-risk low in the long-run. Here are three counterpoints to only using “x-risk is high”:Our situation could changeTrivially, if we successfully reduce x-risk or, after further examination, determine that overall x-risk is much lower than we thought, “x-risk is high” loses its force. If top talent, policymakers or funders convinced by “x-risk is high” learn that x-risk this century is actually much lower, they might move away from these issues. This would be bad because any non-negligible amount of x-risk is still unsustainably high from a longtermist perspective.Our priorities could changeIn the early 2010s, the EA movement was much more focused on funding effective global health charities. What if, at that time, EAs decided to stop explaining the core principles of EA, and instead made the following argument “effective charities are effective”?Effective charities are, or could very plausible be, very effective.Effective charities are effective enough that donating to them is a clear and enormous opportunity to do good.The above is sufficient to motivate people to take high-priority paths, like earning to give. We don’t need to emphasise anything else, including the case for effective altruism.This argument is probably different in important respects to “x-risk is high”, but illustrates how the EA movement could have “locked in” their approach to doing good if they made this argument. If we started using “effective charities are effective” instead of explaining the core principles of EA, it might have taken a lot longer for the EA movement to identify x-risks as a top priority.Our pri...]]>
Sat, 17 Dec 2022 06:13:18 +0000 EA - We should say more than “x-risk is high” by OllieBase Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We should say more than “x-risk is high”, published by OllieBase on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Epistemic status: outlining a take that I think is maybe 50% likely to be right. Also on my blog.Some people have recently argued that, in order to persuade people to work on high priority issues such as AI safety and biosecurity, effective altruists only need to point to how high existential risk (x-risk) is, and don’t need to make the case for longtermism or broader EA principles. E.g.Neel Nanda argues that if you believe the key claims of "there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime" this is enough to justify the core action relevant points of EA.AISafetyIsNotLongtermist argues that the chance of the author dying prematurely because of AI x-risk is sufficiently high (~41%, conditional on their death in the subsequent 30 years) that the pitch for reducing this risk need not appeal to longtermism.The generalised argument, which I’ll call “x-risk is high”, is fairly simple:1) X-risk this century is, or could very plausibly be, very high (>10%)2) X-risk is high enough that it matters to people alive today - e.g. it could result in their premature death.3) The above is sufficient to motivate people to take high-priority paths to reduce x-risk. We don’t need to emphasise anything else, including the philosophical case for the importance of the long-run future.I think this argument holds up. However, I think that outlining the case for longtermism (and EA principles, more broadly) is better for building a community of people who will reliably choose the highest-priority actions and paths to do the most good and that this is better for the world and keeping x-risk low in the long-run. Here are three counterpoints to only using “x-risk is high”:Our situation could changeTrivially, if we successfully reduce x-risk or, after further examination, determine that overall x-risk is much lower than we thought, “x-risk is high” loses its force. If top talent, policymakers or funders convinced by “x-risk is high” learn that x-risk this century is actually much lower, they might move away from these issues. This would be bad because any non-negligible amount of x-risk is still unsustainably high from a longtermist perspective.Our priorities could changeIn the early 2010s, the EA movement was much more focused on funding effective global health charities. What if, at that time, EAs decided to stop explaining the core principles of EA, and instead made the following argument “effective charities are effective”?Effective charities are, or could very plausible be, very effective.Effective charities are effective enough that donating to them is a clear and enormous opportunity to do good.The above is sufficient to motivate people to take high-priority paths, like earning to give. We don’t need to emphasise anything else, including the case for effective altruism.This argument is probably different in important respects to “x-risk is high”, but illustrates how the EA movement could have “locked in” their approach to doing good if they made this argument. If we started using “effective charities are effective” instead of explaining the core principles of EA, it might have taken a lot longer for the EA movement to identify x-risks as a top priority.Our pri...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We should say more than “x-risk is high”, published by OllieBase on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.Epistemic status: outlining a take that I think is maybe 50% likely to be right. Also on my blog.Some people have recently argued that, in order to persuade people to work on high priority issues such as AI safety and biosecurity, effective altruists only need to point to how high existential risk (x-risk) is, and don’t need to make the case for longtermism or broader EA principles. E.g.Neel Nanda argues that if you believe the key claims of "there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime" this is enough to justify the core action relevant points of EA.AISafetyIsNotLongtermist argues that the chance of the author dying prematurely because of AI x-risk is sufficiently high (~41%, conditional on their death in the subsequent 30 years) that the pitch for reducing this risk need not appeal to longtermism.The generalised argument, which I’ll call “x-risk is high”, is fairly simple:1) X-risk this century is, or could very plausibly be, very high (>10%)2) X-risk is high enough that it matters to people alive today - e.g. it could result in their premature death.3) The above is sufficient to motivate people to take high-priority paths to reduce x-risk. We don’t need to emphasise anything else, including the philosophical case for the importance of the long-run future.I think this argument holds up. However, I think that outlining the case for longtermism (and EA principles, more broadly) is better for building a community of people who will reliably choose the highest-priority actions and paths to do the most good and that this is better for the world and keeping x-risk low in the long-run. Here are three counterpoints to only using “x-risk is high”:Our situation could changeTrivially, if we successfully reduce x-risk or, after further examination, determine that overall x-risk is much lower than we thought, “x-risk is high” loses its force. If top talent, policymakers or funders convinced by “x-risk is high” learn that x-risk this century is actually much lower, they might move away from these issues. This would be bad because any non-negligible amount of x-risk is still unsustainably high from a longtermist perspective.Our priorities could changeIn the early 2010s, the EA movement was much more focused on funding effective global health charities. What if, at that time, EAs decided to stop explaining the core principles of EA, and instead made the following argument “effective charities are effective”?Effective charities are, or could very plausible be, very effective.Effective charities are effective enough that donating to them is a clear and enormous opportunity to do good.The above is sufficient to motivate people to take high-priority paths, like earning to give. We don’t need to emphasise anything else, including the case for effective altruism.This argument is probably different in important respects to “x-risk is high”, but illustrates how the EA movement could have “locked in” their approach to doing good if they made this argument. If we started using “effective charities are effective” instead of explaining the core principles of EA, it might have taken a lot longer for the EA movement to identify x-risks as a top priority.Our pri...]]>
OllieBase https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:59 None full 4150
xMEfhjGaRB49x7xyk_EA EA - On Epistemics and Communities by JP Addison Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Epistemics and Communities, published by JP Addison on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day post. I wrote it in 2020 and ~haven't looked at it since. I'm posting it as-is.An invisible project is one of our most important — let’s try to reveal itThis community is about doing the most good. We have many conversations about how to do that. In the course of those conversations, we've slowly pushed forward a cultural understanding of "how do we form correct beliefs?" We call that understanding “epistemics.”I want to make a few, hopefully-useful observations around this general space. If you haven’t read much about epistemics before, I hope it serves as an accessible introduction. If you’re an old hand, I hope it communicates this frame I’ve found useful.I Why is this hard?Most of the time, figuring out what's true is easy. When was this bridge built? Look it up on Wikipedia (more on that later). When it's not easy, it's often best to just use the tools someone else has already developed.The situations where you really start needing to attack the problem are when:The tools you're used to are inadequate for the task at hand, orYou and a collaborator disagree on how to figure out what's true.Once that happens, I claim most people just get really confused. It's like, what the heck is going on? This makes no sense / my collaborator makes no sense. People give up, or can form really deep impasses with those around them. Even if you're fortunate enough to notice the issue for what it is, it can seem really hard to resolve. I think this is what happened when a lot of my friends and I were only half-convinced of AI risk.How the fk are you supposed to weigh "we literally have an RCT here" versus, "this other thing would be big if true"? Many people find the answer obvious, but unfortunately not the same way. I hope at least some point in your life you’ve viewed it as a hard problem.Then, just when you and your best friend have figured out how to weigh evidence between yourselves, there comes a whole lot of other people. Many new complications arise when this is done as a community:Not all the participants are able to get complete information, or evaluate all the argumentsSome participants are probably smarter than othersSome participants are probably acting adversarially, or with some level of own-view favoring-biasII All project-oriented communities do thisMaybe you’ve heard people talk recently about epistemics, and it’s felt like a fuzzy concept. I hope that presenting the ways in which a bunch of different communities go about forming beliefs.WikipediaAs we’ve already mentioned, Wikipedia has some outstanding epistemics. What’s really useful for us here is that they’ve written it down. You can see the way that it's meant for Wikipedia’s particular situation and how it needs to be legible to outsiders and extremely resilient to adversarial action.ScienceYou can observe humanity making a huge leap forward by improving its epistemics in one important domain. Our species went from just being completely wrong about just about everything in the natural world, to methodically making progress in our understanding. That progress has compounded over time to completely transform the world. It's interesting the note that wasn't obvious with those epistemics should be at first. Are thought experiments valid scientific evidence? And in the soft sciences the epistemics are still controversial.Some might claim that science has figured it all out. But what’s the scientific way to predict who you should pick to lead your company? To make most decisions that humans make there is simply too little high quality data.MedicinePerhaps even more so than science, medicine is extremely conservative in its epistemics. There are so many ac...]]>
JP Addison https://forum.effectivealtruism.org/posts/xMEfhjGaRB49x7xyk/on-epistemics-and-communities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Epistemics and Communities, published by JP Addison on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day post. I wrote it in 2020 and ~haven't looked at it since. I'm posting it as-is.An invisible project is one of our most important — let’s try to reveal itThis community is about doing the most good. We have many conversations about how to do that. In the course of those conversations, we've slowly pushed forward a cultural understanding of "how do we form correct beliefs?" We call that understanding “epistemics.”I want to make a few, hopefully-useful observations around this general space. If you haven’t read much about epistemics before, I hope it serves as an accessible introduction. If you’re an old hand, I hope it communicates this frame I’ve found useful.I Why is this hard?Most of the time, figuring out what's true is easy. When was this bridge built? Look it up on Wikipedia (more on that later). When it's not easy, it's often best to just use the tools someone else has already developed.The situations where you really start needing to attack the problem are when:The tools you're used to are inadequate for the task at hand, orYou and a collaborator disagree on how to figure out what's true.Once that happens, I claim most people just get really confused. It's like, what the heck is going on? This makes no sense / my collaborator makes no sense. People give up, or can form really deep impasses with those around them. Even if you're fortunate enough to notice the issue for what it is, it can seem really hard to resolve. I think this is what happened when a lot of my friends and I were only half-convinced of AI risk.How the fk are you supposed to weigh "we literally have an RCT here" versus, "this other thing would be big if true"? Many people find the answer obvious, but unfortunately not the same way. I hope at least some point in your life you’ve viewed it as a hard problem.Then, just when you and your best friend have figured out how to weigh evidence between yourselves, there comes a whole lot of other people. Many new complications arise when this is done as a community:Not all the participants are able to get complete information, or evaluate all the argumentsSome participants are probably smarter than othersSome participants are probably acting adversarially, or with some level of own-view favoring-biasII All project-oriented communities do thisMaybe you’ve heard people talk recently about epistemics, and it’s felt like a fuzzy concept. I hope that presenting the ways in which a bunch of different communities go about forming beliefs.WikipediaAs we’ve already mentioned, Wikipedia has some outstanding epistemics. What’s really useful for us here is that they’ve written it down. You can see the way that it's meant for Wikipedia’s particular situation and how it needs to be legible to outsiders and extremely resilient to adversarial action.ScienceYou can observe humanity making a huge leap forward by improving its epistemics in one important domain. Our species went from just being completely wrong about just about everything in the natural world, to methodically making progress in our understanding. That progress has compounded over time to completely transform the world. It's interesting the note that wasn't obvious with those epistemics should be at first. Are thought experiments valid scientific evidence? And in the soft sciences the epistemics are still controversial.Some might claim that science has figured it all out. But what’s the scientific way to predict who you should pick to lead your company? To make most decisions that humans make there is simply too little high quality data.MedicinePerhaps even more so than science, medicine is extremely conservative in its epistemics. There are so many ac...]]>
Sat, 17 Dec 2022 02:19:01 +0000 EA - On Epistemics and Communities by JP Addison Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Epistemics and Communities, published by JP Addison on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day post. I wrote it in 2020 and ~haven't looked at it since. I'm posting it as-is.An invisible project is one of our most important — let’s try to reveal itThis community is about doing the most good. We have many conversations about how to do that. In the course of those conversations, we've slowly pushed forward a cultural understanding of "how do we form correct beliefs?" We call that understanding “epistemics.”I want to make a few, hopefully-useful observations around this general space. If you haven’t read much about epistemics before, I hope it serves as an accessible introduction. If you’re an old hand, I hope it communicates this frame I’ve found useful.I Why is this hard?Most of the time, figuring out what's true is easy. When was this bridge built? Look it up on Wikipedia (more on that later). When it's not easy, it's often best to just use the tools someone else has already developed.The situations where you really start needing to attack the problem are when:The tools you're used to are inadequate for the task at hand, orYou and a collaborator disagree on how to figure out what's true.Once that happens, I claim most people just get really confused. It's like, what the heck is going on? This makes no sense / my collaborator makes no sense. People give up, or can form really deep impasses with those around them. Even if you're fortunate enough to notice the issue for what it is, it can seem really hard to resolve. I think this is what happened when a lot of my friends and I were only half-convinced of AI risk.How the fk are you supposed to weigh "we literally have an RCT here" versus, "this other thing would be big if true"? Many people find the answer obvious, but unfortunately not the same way. I hope at least some point in your life you’ve viewed it as a hard problem.Then, just when you and your best friend have figured out how to weigh evidence between yourselves, there comes a whole lot of other people. Many new complications arise when this is done as a community:Not all the participants are able to get complete information, or evaluate all the argumentsSome participants are probably smarter than othersSome participants are probably acting adversarially, or with some level of own-view favoring-biasII All project-oriented communities do thisMaybe you’ve heard people talk recently about epistemics, and it’s felt like a fuzzy concept. I hope that presenting the ways in which a bunch of different communities go about forming beliefs.WikipediaAs we’ve already mentioned, Wikipedia has some outstanding epistemics. What’s really useful for us here is that they’ve written it down. You can see the way that it's meant for Wikipedia’s particular situation and how it needs to be legible to outsiders and extremely resilient to adversarial action.ScienceYou can observe humanity making a huge leap forward by improving its epistemics in one important domain. Our species went from just being completely wrong about just about everything in the natural world, to methodically making progress in our understanding. That progress has compounded over time to completely transform the world. It's interesting the note that wasn't obvious with those epistemics should be at first. Are thought experiments valid scientific evidence? And in the soft sciences the epistemics are still controversial.Some might claim that science has figured it all out. But what’s the scientific way to predict who you should pick to lead your company? To make most decisions that humans make there is simply too little high quality data.MedicinePerhaps even more so than science, medicine is extremely conservative in its epistemics. There are so many ac...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Epistemics and Communities, published by JP Addison on December 16, 2022 on The Effective Altruism Forum.This is a Draft Amnesty Day post. I wrote it in 2020 and ~haven't looked at it since. I'm posting it as-is.An invisible project is one of our most important — let’s try to reveal itThis community is about doing the most good. We have many conversations about how to do that. In the course of those conversations, we've slowly pushed forward a cultural understanding of "how do we form correct beliefs?" We call that understanding “epistemics.”I want to make a few, hopefully-useful observations around this general space. If you haven’t read much about epistemics before, I hope it serves as an accessible introduction. If you’re an old hand, I hope it communicates this frame I’ve found useful.I Why is this hard?Most of the time, figuring out what's true is easy. When was this bridge built? Look it up on Wikipedia (more on that later). When it's not easy, it's often best to just use the tools someone else has already developed.The situations where you really start needing to attack the problem are when:The tools you're used to are inadequate for the task at hand, orYou and a collaborator disagree on how to figure out what's true.Once that happens, I claim most people just get really confused. It's like, what the heck is going on? This makes no sense / my collaborator makes no sense. People give up, or can form really deep impasses with those around them. Even if you're fortunate enough to notice the issue for what it is, it can seem really hard to resolve. I think this is what happened when a lot of my friends and I were only half-convinced of AI risk.How the fk are you supposed to weigh "we literally have an RCT here" versus, "this other thing would be big if true"? Many people find the answer obvious, but unfortunately not the same way. I hope at least some point in your life you’ve viewed it as a hard problem.Then, just when you and your best friend have figured out how to weigh evidence between yourselves, there comes a whole lot of other people. Many new complications arise when this is done as a community:Not all the participants are able to get complete information, or evaluate all the argumentsSome participants are probably smarter than othersSome participants are probably acting adversarially, or with some level of own-view favoring-biasII All project-oriented communities do thisMaybe you’ve heard people talk recently about epistemics, and it’s felt like a fuzzy concept. I hope that presenting the ways in which a bunch of different communities go about forming beliefs.WikipediaAs we’ve already mentioned, Wikipedia has some outstanding epistemics. What’s really useful for us here is that they’ve written it down. You can see the way that it's meant for Wikipedia’s particular situation and how it needs to be legible to outsiders and extremely resilient to adversarial action.ScienceYou can observe humanity making a huge leap forward by improving its epistemics in one important domain. Our species went from just being completely wrong about just about everything in the natural world, to methodically making progress in our understanding. That progress has compounded over time to completely transform the world. It's interesting the note that wasn't obvious with those epistemics should be at first. Are thought experiments valid scientific evidence? And in the soft sciences the epistemics are still controversial.Some might claim that science has figured it all out. But what’s the scientific way to predict who you should pick to lead your company? To make most decisions that humans make there is simply too little high quality data.MedicinePerhaps even more so than science, medicine is extremely conservative in its epistemics. There are so many ac...]]>
JP Addison https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:29 None full 4151
MzQzXEkCmNgxhAHDu_EA EA - If everyone makes the same criticism, the opposite criticism is more likely to be true by MichaelDickens Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If everyone makes the same criticism, the opposite criticism is more likely to be true, published by MichaelDickens on December 17, 2022 on The Effective Altruism Forum.I frequently hear people say EAs rely too much on quantifying uncertain variables, but I basically never hear the opposite criticism. If everyone believes you shouldn't quantify, then nobody's doing it, so it can't possibly be true that people quantify too much, and in fact the opposite is probably true.Obviously I could make various counterarguments, like maybe the people who think we don't quantify enough are not writing essays about how we need to quantify more. Generally speaking, I don't think this counterargument is correct, but arguing for/against it is harder so I don't have much to say about itIt's like Lake Wobegon, where all the children are above average. It's impossible for every single person in the community to believe that the community is not X enoughAnother example: everyone says we need to care more about systemic changeSaw a Twitter post "EAs way under-update on thought experiments" and I thought, damn that's a spicy take. Then I realized I misread it and they actually said "over-update" and I thought...wow what a boring take that's been said a thousand times alreadyThey gave the simulation argument and Roko's Basilisk as examples. As far as I know, nobody has ever changed their behavior based on either of those arguments. It would be pretty much impossible for people to update less on them than they haveI'm sure there are some people somewhere who have updated based on the simulation argument but I've never met them"People under-update on thought experiments" would have been a much more interesting take because people basically don't update on thought experimentsBy a shocking coincidence, I take the opposite side on all these examples: I think EAs should use more quantitative estimates, should care less about systemic change, and should update more on thought experimentsAre there any issues where I make the same criticism as everyone else, and I'm actually wrong? Probably, idkI can think of some non-EA-related examples of this phenomenon, but I'm not as interested in thoseBy analogy, the moment when the most people agree the stock market is going to go up is the exact moment when the market is at its peak. The price can't go higher because there's no one left to buy from. If everyone agrees, everyone /must/ be wrongRelevant Scott Alexander:/ and/. He said it better than me, but my post isn't about exactly the same thing so I figured it might be worth publishing.(Note: The way I usually write essays is by writing outlines like this, and then fleshing them out into full posts. For a lot of the outlines I write, like this one, I never flesh them out because it doesn't seem worth the time. But I figured for Draft Amnesty Day, I could just publish my outline, and most people will get the idea.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
MichaelDickens https://forum.effectivealtruism.org/posts/MzQzXEkCmNgxhAHDu/if-everyone-makes-the-same-criticism-the-opposite-criticism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If everyone makes the same criticism, the opposite criticism is more likely to be true, published by MichaelDickens on December 17, 2022 on The Effective Altruism Forum.I frequently hear people say EAs rely too much on quantifying uncertain variables, but I basically never hear the opposite criticism. If everyone believes you shouldn't quantify, then nobody's doing it, so it can't possibly be true that people quantify too much, and in fact the opposite is probably true.Obviously I could make various counterarguments, like maybe the people who think we don't quantify enough are not writing essays about how we need to quantify more. Generally speaking, I don't think this counterargument is correct, but arguing for/against it is harder so I don't have much to say about itIt's like Lake Wobegon, where all the children are above average. It's impossible for every single person in the community to believe that the community is not X enoughAnother example: everyone says we need to care more about systemic changeSaw a Twitter post "EAs way under-update on thought experiments" and I thought, damn that's a spicy take. Then I realized I misread it and they actually said "over-update" and I thought...wow what a boring take that's been said a thousand times alreadyThey gave the simulation argument and Roko's Basilisk as examples. As far as I know, nobody has ever changed their behavior based on either of those arguments. It would be pretty much impossible for people to update less on them than they haveI'm sure there are some people somewhere who have updated based on the simulation argument but I've never met them"People under-update on thought experiments" would have been a much more interesting take because people basically don't update on thought experimentsBy a shocking coincidence, I take the opposite side on all these examples: I think EAs should use more quantitative estimates, should care less about systemic change, and should update more on thought experimentsAre there any issues where I make the same criticism as everyone else, and I'm actually wrong? Probably, idkI can think of some non-EA-related examples of this phenomenon, but I'm not as interested in thoseBy analogy, the moment when the most people agree the stock market is going to go up is the exact moment when the market is at its peak. The price can't go higher because there's no one left to buy from. If everyone agrees, everyone /must/ be wrongRelevant Scott Alexander:/ and/. He said it better than me, but my post isn't about exactly the same thing so I figured it might be worth publishing.(Note: The way I usually write essays is by writing outlines like this, and then fleshing them out into full posts. For a lot of the outlines I write, like this one, I never flesh them out because it doesn't seem worth the time. But I figured for Draft Amnesty Day, I could just publish my outline, and most people will get the idea.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 17 Dec 2022 01:26:03 +0000 EA - If everyone makes the same criticism, the opposite criticism is more likely to be true by MichaelDickens Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If everyone makes the same criticism, the opposite criticism is more likely to be true, published by MichaelDickens on December 17, 2022 on The Effective Altruism Forum.I frequently hear people say EAs rely too much on quantifying uncertain variables, but I basically never hear the opposite criticism. If everyone believes you shouldn't quantify, then nobody's doing it, so it can't possibly be true that people quantify too much, and in fact the opposite is probably true.Obviously I could make various counterarguments, like maybe the people who think we don't quantify enough are not writing essays about how we need to quantify more. Generally speaking, I don't think this counterargument is correct, but arguing for/against it is harder so I don't have much to say about itIt's like Lake Wobegon, where all the children are above average. It's impossible for every single person in the community to believe that the community is not X enoughAnother example: everyone says we need to care more about systemic changeSaw a Twitter post "EAs way under-update on thought experiments" and I thought, damn that's a spicy take. Then I realized I misread it and they actually said "over-update" and I thought...wow what a boring take that's been said a thousand times alreadyThey gave the simulation argument and Roko's Basilisk as examples. As far as I know, nobody has ever changed their behavior based on either of those arguments. It would be pretty much impossible for people to update less on them than they haveI'm sure there are some people somewhere who have updated based on the simulation argument but I've never met them"People under-update on thought experiments" would have been a much more interesting take because people basically don't update on thought experimentsBy a shocking coincidence, I take the opposite side on all these examples: I think EAs should use more quantitative estimates, should care less about systemic change, and should update more on thought experimentsAre there any issues where I make the same criticism as everyone else, and I'm actually wrong? Probably, idkI can think of some non-EA-related examples of this phenomenon, but I'm not as interested in thoseBy analogy, the moment when the most people agree the stock market is going to go up is the exact moment when the market is at its peak. The price can't go higher because there's no one left to buy from. If everyone agrees, everyone /must/ be wrongRelevant Scott Alexander:/ and/. He said it better than me, but my post isn't about exactly the same thing so I figured it might be worth publishing.(Note: The way I usually write essays is by writing outlines like this, and then fleshing them out into full posts. For a lot of the outlines I write, like this one, I never flesh them out because it doesn't seem worth the time. But I figured for Draft Amnesty Day, I could just publish my outline, and most people will get the idea.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If everyone makes the same criticism, the opposite criticism is more likely to be true, published by MichaelDickens on December 17, 2022 on The Effective Altruism Forum.I frequently hear people say EAs rely too much on quantifying uncertain variables, but I basically never hear the opposite criticism. If everyone believes you shouldn't quantify, then nobody's doing it, so it can't possibly be true that people quantify too much, and in fact the opposite is probably true.Obviously I could make various counterarguments, like maybe the people who think we don't quantify enough are not writing essays about how we need to quantify more. Generally speaking, I don't think this counterargument is correct, but arguing for/against it is harder so I don't have much to say about itIt's like Lake Wobegon, where all the children are above average. It's impossible for every single person in the community to believe that the community is not X enoughAnother example: everyone says we need to care more about systemic changeSaw a Twitter post "EAs way under-update on thought experiments" and I thought, damn that's a spicy take. Then I realized I misread it and they actually said "over-update" and I thought...wow what a boring take that's been said a thousand times alreadyThey gave the simulation argument and Roko's Basilisk as examples. As far as I know, nobody has ever changed their behavior based on either of those arguments. It would be pretty much impossible for people to update less on them than they haveI'm sure there are some people somewhere who have updated based on the simulation argument but I've never met them"People under-update on thought experiments" would have been a much more interesting take because people basically don't update on thought experimentsBy a shocking coincidence, I take the opposite side on all these examples: I think EAs should use more quantitative estimates, should care less about systemic change, and should update more on thought experimentsAre there any issues where I make the same criticism as everyone else, and I'm actually wrong? Probably, idkI can think of some non-EA-related examples of this phenomenon, but I'm not as interested in thoseBy analogy, the moment when the most people agree the stock market is going to go up is the exact moment when the market is at its peak. The price can't go higher because there's no one left to buy from. If everyone agrees, everyone /must/ be wrongRelevant Scott Alexander:/ and/. He said it better than me, but my post isn't about exactly the same thing so I figured it might be worth publishing.(Note: The way I usually write essays is by writing outlines like this, and then fleshing them out into full posts. For a lot of the outlines I write, like this one, I never flesh them out because it doesn't seem worth the time. But I figured for Draft Amnesty Day, I could just publish my outline, and most people will get the idea.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
MichaelDickens https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:53 None full 4159
DosxprPupic6dmiB6_EA EA - You Don’t Have to Call Yourself an Effective Altruist or Fraternize With Effective Altruists or Support Longtermism, Just Please, for the Love of God, Help the Global Poor by Omnizoid Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You Don’t Have to Call Yourself an Effective Altruist or Fraternize With Effective Altruists or Support Longtermism, Just Please, for the Love of God, Help the Global Poor, published by Omnizoid on December 16, 2022 on The Effective Altruism Forum.Cross post from my blog.Right now you — whoever is reading this — can give money to the poorest people on earth. If you donate to givedirectly , 90% of the funds will go to the poorest people on earth. People whose incomes are roughly 1% of those of poor people in the US. And this isn’t just because things cost less — that’s taken into account for the statistic.This is a really surprising fact. There are people that live on less than 1% of the income of the poor in the US — and you can, right now, make their lives dramatically better at very minimal personal cost. If you donate a few hundred dollars to these people, it will double their annual income. And there’s robust evidence that this makes their lives a lot better — they don’t just spend it on trivial things.But givedirectly isn’t the only super effective charity. There are charities that can prevent malaria for a few thousand dollars. Givewell lists the best charities — the Malaria Consortium saves one life for every 5000 or so dollars donated.You can save people’s lives. You can double people’s income. All at minimal personal cost.The plight of the world’s poor isn’t just some interesting abstract philosophy problem. It doesn’t matter if it’s exactly like the drowning child scenario. Right now, there are people dying, and we can save their lives.A lot of the people that are working to save their lives are called effective altruists. But you really don’t need to call yourself an effective altruist to support this. You can think that effective altruists are annoying silicon valley tech bros. You can think they’re all satan. And it doesn’t matter.The badness of EA has nothing to do with our personal obligations. If EA is a rotten movement, don’t attend EA meetups. Don’t fraternize with EAs. But please, for the love of god, do something about the poor people that are dying right now, whose lives you can save.Émile Torres is someone that I’ve spent lots of time criticizing in the past. I think they’re really, really wrong about longtermism. But recently, I saw something that warmed my heart and improved my opinion of Torres a lot.Nathan @NathanpmYoungChristmas is a time of peace and gift giving. @xriskology and I are putting aside our differences to give to the poorest people in the world, via @GiveDirectly. Perhaps you'll join us. givingwhatwecan.org/fundraisers/em.10:53 AM ∙ Dec 12, 2022404Likes53RetweetsTorres and Nathan Young have put aside their differences — Young is a longtermist — and organized a fundraiser to benefit the world’s poor.The fact that Torres is doing this shows that you don’t have to be an EA or a longtermist to support this. Perhaps the most adamant critic of EA on the face of the earth agrees that you should fund give directly. Even if EA is a horrific cult, there are poor people that are dying right now, that need money. Money given barely costs us anything, and makes their lives dramatically better.So please, regardless of what you think about politics, philosophy, utilitarianism, effective altruism, etc — give some money to give directly or the against malaria foundation or any other top givewell charities. You have dramatic opportunities to improve the world. These are totally uncontroversial. If anyone emails me evidence that they’ve donated 5 dollars to GiveDirectly in this fundraiser — or some other EA charity — I’ll give them a free subscription.This giving season, let’s put aside our differences, and do something to benefit the world’s poor. Let’s help that fundraiser raise a billion dollars for the world’s poor.Thanks f...]]>
Omnizoid https://forum.effectivealtruism.org/posts/DosxprPupic6dmiB6/you-don-t-have-to-call-yourself-an-effective-altruist-or Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You Don’t Have to Call Yourself an Effective Altruist or Fraternize With Effective Altruists or Support Longtermism, Just Please, for the Love of God, Help the Global Poor, published by Omnizoid on December 16, 2022 on The Effective Altruism Forum.Cross post from my blog.Right now you — whoever is reading this — can give money to the poorest people on earth. If you donate to givedirectly , 90% of the funds will go to the poorest people on earth. People whose incomes are roughly 1% of those of poor people in the US. And this isn’t just because things cost less — that’s taken into account for the statistic.This is a really surprising fact. There are people that live on less than 1% of the income of the poor in the US — and you can, right now, make their lives dramatically better at very minimal personal cost. If you donate a few hundred dollars to these people, it will double their annual income. And there’s robust evidence that this makes their lives a lot better — they don’t just spend it on trivial things.But givedirectly isn’t the only super effective charity. There are charities that can prevent malaria for a few thousand dollars. Givewell lists the best charities — the Malaria Consortium saves one life for every 5000 or so dollars donated.You can save people’s lives. You can double people’s income. All at minimal personal cost.The plight of the world’s poor isn’t just some interesting abstract philosophy problem. It doesn’t matter if it’s exactly like the drowning child scenario. Right now, there are people dying, and we can save their lives.A lot of the people that are working to save their lives are called effective altruists. But you really don’t need to call yourself an effective altruist to support this. You can think that effective altruists are annoying silicon valley tech bros. You can think they’re all satan. And it doesn’t matter.The badness of EA has nothing to do with our personal obligations. If EA is a rotten movement, don’t attend EA meetups. Don’t fraternize with EAs. But please, for the love of god, do something about the poor people that are dying right now, whose lives you can save.Émile Torres is someone that I’ve spent lots of time criticizing in the past. I think they’re really, really wrong about longtermism. But recently, I saw something that warmed my heart and improved my opinion of Torres a lot.Nathan @NathanpmYoungChristmas is a time of peace and gift giving. @xriskology and I are putting aside our differences to give to the poorest people in the world, via @GiveDirectly. Perhaps you'll join us. givingwhatwecan.org/fundraisers/em.10:53 AM ∙ Dec 12, 2022404Likes53RetweetsTorres and Nathan Young have put aside their differences — Young is a longtermist — and organized a fundraiser to benefit the world’s poor.The fact that Torres is doing this shows that you don’t have to be an EA or a longtermist to support this. Perhaps the most adamant critic of EA on the face of the earth agrees that you should fund give directly. Even if EA is a horrific cult, there are poor people that are dying right now, that need money. Money given barely costs us anything, and makes their lives dramatically better.So please, regardless of what you think about politics, philosophy, utilitarianism, effective altruism, etc — give some money to give directly or the against malaria foundation or any other top givewell charities. You have dramatic opportunities to improve the world. These are totally uncontroversial. If anyone emails me evidence that they’ve donated 5 dollars to GiveDirectly in this fundraiser — or some other EA charity — I’ll give them a free subscription.This giving season, let’s put aside our differences, and do something to benefit the world’s poor. Let’s help that fundraiser raise a billion dollars for the world’s poor.Thanks f...]]>
Sat, 17 Dec 2022 01:13:01 +0000 EA - You Don’t Have to Call Yourself an Effective Altruist or Fraternize With Effective Altruists or Support Longtermism, Just Please, for the Love of God, Help the Global Poor by Omnizoid Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You Don’t Have to Call Yourself an Effective Altruist or Fraternize With Effective Altruists or Support Longtermism, Just Please, for the Love of God, Help the Global Poor, published by Omnizoid on December 16, 2022 on The Effective Altruism Forum.Cross post from my blog.Right now you — whoever is reading this — can give money to the poorest people on earth. If you donate to givedirectly , 90% of the funds will go to the poorest people on earth. People whose incomes are roughly 1% of those of poor people in the US. And this isn’t just because things cost less — that’s taken into account for the statistic.This is a really surprising fact. There are people that live on less than 1% of the income of the poor in the US — and you can, right now, make their lives dramatically better at very minimal personal cost. If you donate a few hundred dollars to these people, it will double their annual income. And there’s robust evidence that this makes their lives a lot better — they don’t just spend it on trivial things.But givedirectly isn’t the only super effective charity. There are charities that can prevent malaria for a few thousand dollars. Givewell lists the best charities — the Malaria Consortium saves one life for every 5000 or so dollars donated.You can save people’s lives. You can double people’s income. All at minimal personal cost.The plight of the world’s poor isn’t just some interesting abstract philosophy problem. It doesn’t matter if it’s exactly like the drowning child scenario. Right now, there are people dying, and we can save their lives.A lot of the people that are working to save their lives are called effective altruists. But you really don’t need to call yourself an effective altruist to support this. You can think that effective altruists are annoying silicon valley tech bros. You can think they’re all satan. And it doesn’t matter.The badness of EA has nothing to do with our personal obligations. If EA is a rotten movement, don’t attend EA meetups. Don’t fraternize with EAs. But please, for the love of god, do something about the poor people that are dying right now, whose lives you can save.Émile Torres is someone that I’ve spent lots of time criticizing in the past. I think they’re really, really wrong about longtermism. But recently, I saw something that warmed my heart and improved my opinion of Torres a lot.Nathan @NathanpmYoungChristmas is a time of peace and gift giving. @xriskology and I are putting aside our differences to give to the poorest people in the world, via @GiveDirectly. Perhaps you'll join us. givingwhatwecan.org/fundraisers/em.10:53 AM ∙ Dec 12, 2022404Likes53RetweetsTorres and Nathan Young have put aside their differences — Young is a longtermist — and organized a fundraiser to benefit the world’s poor.The fact that Torres is doing this shows that you don’t have to be an EA or a longtermist to support this. Perhaps the most adamant critic of EA on the face of the earth agrees that you should fund give directly. Even if EA is a horrific cult, there are poor people that are dying right now, that need money. Money given barely costs us anything, and makes their lives dramatically better.So please, regardless of what you think about politics, philosophy, utilitarianism, effective altruism, etc — give some money to give directly or the against malaria foundation or any other top givewell charities. You have dramatic opportunities to improve the world. These are totally uncontroversial. If anyone emails me evidence that they’ve donated 5 dollars to GiveDirectly in this fundraiser — or some other EA charity — I’ll give them a free subscription.This giving season, let’s put aside our differences, and do something to benefit the world’s poor. Let’s help that fundraiser raise a billion dollars for the world’s poor.Thanks f...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You Don’t Have to Call Yourself an Effective Altruist or Fraternize With Effective Altruists or Support Longtermism, Just Please, for the Love of God, Help the Global Poor, published by Omnizoid on December 16, 2022 on The Effective Altruism Forum.Cross post from my blog.Right now you — whoever is reading this — can give money to the poorest people on earth. If you donate to givedirectly , 90% of the funds will go to the poorest people on earth. People whose incomes are roughly 1% of those of poor people in the US. And this isn’t just because things cost less — that’s taken into account for the statistic.This is a really surprising fact. There are people that live on less than 1% of the income of the poor in the US — and you can, right now, make their lives dramatically better at very minimal personal cost. If you donate a few hundred dollars to these people, it will double their annual income. And there’s robust evidence that this makes their lives a lot better — they don’t just spend it on trivial things.But givedirectly isn’t the only super effective charity. There are charities that can prevent malaria for a few thousand dollars. Givewell lists the best charities — the Malaria Consortium saves one life for every 5000 or so dollars donated.You can save people’s lives. You can double people’s income. All at minimal personal cost.The plight of the world’s poor isn’t just some interesting abstract philosophy problem. It doesn’t matter if it’s exactly like the drowning child scenario. Right now, there are people dying, and we can save their lives.A lot of the people that are working to save their lives are called effective altruists. But you really don’t need to call yourself an effective altruist to support this. You can think that effective altruists are annoying silicon valley tech bros. You can think they’re all satan. And it doesn’t matter.The badness of EA has nothing to do with our personal obligations. If EA is a rotten movement, don’t attend EA meetups. Don’t fraternize with EAs. But please, for the love of god, do something about the poor people that are dying right now, whose lives you can save.Émile Torres is someone that I’ve spent lots of time criticizing in the past. I think they’re really, really wrong about longtermism. But recently, I saw something that warmed my heart and improved my opinion of Torres a lot.Nathan @NathanpmYoungChristmas is a time of peace and gift giving. @xriskology and I are putting aside our differences to give to the poorest people in the world, via @GiveDirectly. Perhaps you'll join us. givingwhatwecan.org/fundraisers/em.10:53 AM ∙ Dec 12, 2022404Likes53RetweetsTorres and Nathan Young have put aside their differences — Young is a longtermist — and organized a fundraiser to benefit the world’s poor.The fact that Torres is doing this shows that you don’t have to be an EA or a longtermist to support this. Perhaps the most adamant critic of EA on the face of the earth agrees that you should fund give directly. Even if EA is a horrific cult, there are poor people that are dying right now, that need money. Money given barely costs us anything, and makes their lives dramatically better.So please, regardless of what you think about politics, philosophy, utilitarianism, effective altruism, etc — give some money to give directly or the against malaria foundation or any other top givewell charities. You have dramatic opportunities to improve the world. These are totally uncontroversial. If anyone emails me evidence that they’ve donated 5 dollars to GiveDirectly in this fundraiser — or some other EA charity — I’ll give them a free subscription.This giving season, let’s put aside our differences, and do something to benefit the world’s poor. Let’s help that fundraiser raise a billion dollars for the world’s poor.Thanks f...]]>
Omnizoid https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:54 None full 4148
xBfp3HGapGycuSa6H_EA EA - I'm less approving of the EA community now than before the FTX collapse by throwaway790 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm less approving of the EA community now than before the FTX collapse, published by throwaway790 on December 16, 2022 on The Effective Altruism Forum.This is mostly a counterpoint to Will Aldred/Duncan Sabien's post.I'm not really an EA; haven't taken the pledge, don't work at an org, have been to no EA meetups in my life, (haven't been a speaker at EA Global).However, I have been close to the EA community for a while, agreeing with many of its goals and donating to many of its key causes.Reading Duncan's post made me want to defend the case for why I'm less approving of the EA community than before. I want to be a little specific about what I'm less approving of.EAâ„¢ - CEA / EVF (I'd never heard it called EVF before FTX)Will MacAskill personallyDonation practices during the "funding overhang" eraThere are some people/orgs I am more approving of:Peter Wildeford (and Rethink Priorities more generally)Rob WiblinDustin MoskovitzI'm not going to say anything about Peter/Rob/Dustin in this post, although the amount I approve them more does not change the net effect, which is less approving of EA.Lots of the reasons I am less approving of EA now than I was prior to FTX collapse are things I could have known. However I am aware of them because of the FTX collapse. Others might have already been aware of everything I'll mention, in which case I would agree with Duncan - with few exceptions, the communities reaction to the FTX collapse has been very good and I largely approve.So, what have I learned since the FTX collapse which makes me approve less of EA?Will MacAskill:Initial reactions thread I read this as both minimizing the event and distancing himself from SBF, which is not credible in light of:Elon relationship - you don't go and bat for someone with the richest guy in the world unless you are confident in who you're batting forTheir long history of a close relationships (shared board memberships, his earlier mentorship etc)Guzey's review (this episode is why I'm posting as a throwaway)Equivocal statements during the early crisis:Will MacAskillHolder Karnofsky (note the edit and Alexander Berger's comments before blaming Holden)There were more statements I was annoyed with at the time, but I can't remember enough of the specifics to search them right now. If I remember them, I will add them here. (Part of my praise for Rob was his statement was in sharp contrast of what was coming out at the time)Wytham Abbey - I don't especially want to relitigate this but it makes me think less of EA - you can read many takes on EAF - probably the fairest place to start is the reasoning for why it was bought.People who received funding in dubious ways and didn't say anything (North Dimension donating on behalf of FTX Future fund)EA (generally) treating FTX as something which "happened" to them, rather than being something they were intimately involved in. (Initial funding for Alameda coming from the EA community, ...)I suspect (but can't prove) that the EA orgs are using the excuse of legal advice / legal threats to avoid saying things they don't want to say for other reasons. (And my suspicion is that this is because they have done something which could be bad)People in the community knew SBF's image was manufactured to some degree but didn't say anything (general take from forum - summarized in this New Yorker piece)EA's earlier relationship with a sketchy billionaire (and the degree to which this was covered up)All of these things put together are enough for me to downgrade my opinion of EA. I've put this together from off the top of my head, so there are likely other things which have affected my view over the last month. To be clear - I am still very much in favor of EA principles, the EA community and will continue to donate to EA causes but will ...]]>
throwaway790 https://forum.effectivealtruism.org/posts/xBfp3HGapGycuSa6H/i-m-less-approving-of-the-ea-community-now-than-before-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm less approving of the EA community now than before the FTX collapse, published by throwaway790 on December 16, 2022 on The Effective Altruism Forum.This is mostly a counterpoint to Will Aldred/Duncan Sabien's post.I'm not really an EA; haven't taken the pledge, don't work at an org, have been to no EA meetups in my life, (haven't been a speaker at EA Global).However, I have been close to the EA community for a while, agreeing with many of its goals and donating to many of its key causes.Reading Duncan's post made me want to defend the case for why I'm less approving of the EA community than before. I want to be a little specific about what I'm less approving of.EAâ„¢ - CEA / EVF (I'd never heard it called EVF before FTX)Will MacAskill personallyDonation practices during the "funding overhang" eraThere are some people/orgs I am more approving of:Peter Wildeford (and Rethink Priorities more generally)Rob WiblinDustin MoskovitzI'm not going to say anything about Peter/Rob/Dustin in this post, although the amount I approve them more does not change the net effect, which is less approving of EA.Lots of the reasons I am less approving of EA now than I was prior to FTX collapse are things I could have known. However I am aware of them because of the FTX collapse. Others might have already been aware of everything I'll mention, in which case I would agree with Duncan - with few exceptions, the communities reaction to the FTX collapse has been very good and I largely approve.So, what have I learned since the FTX collapse which makes me approve less of EA?Will MacAskill:Initial reactions thread I read this as both minimizing the event and distancing himself from SBF, which is not credible in light of:Elon relationship - you don't go and bat for someone with the richest guy in the world unless you are confident in who you're batting forTheir long history of a close relationships (shared board memberships, his earlier mentorship etc)Guzey's review (this episode is why I'm posting as a throwaway)Equivocal statements during the early crisis:Will MacAskillHolder Karnofsky (note the edit and Alexander Berger's comments before blaming Holden)There were more statements I was annoyed with at the time, but I can't remember enough of the specifics to search them right now. If I remember them, I will add them here. (Part of my praise for Rob was his statement was in sharp contrast of what was coming out at the time)Wytham Abbey - I don't especially want to relitigate this but it makes me think less of EA - you can read many takes on EAF - probably the fairest place to start is the reasoning for why it was bought.People who received funding in dubious ways and didn't say anything (North Dimension donating on behalf of FTX Future fund)EA (generally) treating FTX as something which "happened" to them, rather than being something they were intimately involved in. (Initial funding for Alameda coming from the EA community, ...)I suspect (but can't prove) that the EA orgs are using the excuse of legal advice / legal threats to avoid saying things they don't want to say for other reasons. (And my suspicion is that this is because they have done something which could be bad)People in the community knew SBF's image was manufactured to some degree but didn't say anything (general take from forum - summarized in this New Yorker piece)EA's earlier relationship with a sketchy billionaire (and the degree to which this was covered up)All of these things put together are enough for me to downgrade my opinion of EA. I've put this together from off the top of my head, so there are likely other things which have affected my view over the last month. To be clear - I am still very much in favor of EA principles, the EA community and will continue to donate to EA causes but will ...]]>
Fri, 16 Dec 2022 20:28:09 +0000 EA - I'm less approving of the EA community now than before the FTX collapse by throwaway790 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm less approving of the EA community now than before the FTX collapse, published by throwaway790 on December 16, 2022 on The Effective Altruism Forum.This is mostly a counterpoint to Will Aldred/Duncan Sabien's post.I'm not really an EA; haven't taken the pledge, don't work at an org, have been to no EA meetups in my life, (haven't been a speaker at EA Global).However, I have been close to the EA community for a while, agreeing with many of its goals and donating to many of its key causes.Reading Duncan's post made me want to defend the case for why I'm less approving of the EA community than before. I want to be a little specific about what I'm less approving of.EAâ„¢ - CEA / EVF (I'd never heard it called EVF before FTX)Will MacAskill personallyDonation practices during the "funding overhang" eraThere are some people/orgs I am more approving of:Peter Wildeford (and Rethink Priorities more generally)Rob WiblinDustin MoskovitzI'm not going to say anything about Peter/Rob/Dustin in this post, although the amount I approve them more does not change the net effect, which is less approving of EA.Lots of the reasons I am less approving of EA now than I was prior to FTX collapse are things I could have known. However I am aware of them because of the FTX collapse. Others might have already been aware of everything I'll mention, in which case I would agree with Duncan - with few exceptions, the communities reaction to the FTX collapse has been very good and I largely approve.So, what have I learned since the FTX collapse which makes me approve less of EA?Will MacAskill:Initial reactions thread I read this as both minimizing the event and distancing himself from SBF, which is not credible in light of:Elon relationship - you don't go and bat for someone with the richest guy in the world unless you are confident in who you're batting forTheir long history of a close relationships (shared board memberships, his earlier mentorship etc)Guzey's review (this episode is why I'm posting as a throwaway)Equivocal statements during the early crisis:Will MacAskillHolder Karnofsky (note the edit and Alexander Berger's comments before blaming Holden)There were more statements I was annoyed with at the time, but I can't remember enough of the specifics to search them right now. If I remember them, I will add them here. (Part of my praise for Rob was his statement was in sharp contrast of what was coming out at the time)Wytham Abbey - I don't especially want to relitigate this but it makes me think less of EA - you can read many takes on EAF - probably the fairest place to start is the reasoning for why it was bought.People who received funding in dubious ways and didn't say anything (North Dimension donating on behalf of FTX Future fund)EA (generally) treating FTX as something which "happened" to them, rather than being something they were intimately involved in. (Initial funding for Alameda coming from the EA community, ...)I suspect (but can't prove) that the EA orgs are using the excuse of legal advice / legal threats to avoid saying things they don't want to say for other reasons. (And my suspicion is that this is because they have done something which could be bad)People in the community knew SBF's image was manufactured to some degree but didn't say anything (general take from forum - summarized in this New Yorker piece)EA's earlier relationship with a sketchy billionaire (and the degree to which this was covered up)All of these things put together are enough for me to downgrade my opinion of EA. I've put this together from off the top of my head, so there are likely other things which have affected my view over the last month. To be clear - I am still very much in favor of EA principles, the EA community and will continue to donate to EA causes but will ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm less approving of the EA community now than before the FTX collapse, published by throwaway790 on December 16, 2022 on The Effective Altruism Forum.This is mostly a counterpoint to Will Aldred/Duncan Sabien's post.I'm not really an EA; haven't taken the pledge, don't work at an org, have been to no EA meetups in my life, (haven't been a speaker at EA Global).However, I have been close to the EA community for a while, agreeing with many of its goals and donating to many of its key causes.Reading Duncan's post made me want to defend the case for why I'm less approving of the EA community than before. I want to be a little specific about what I'm less approving of.EAâ„¢ - CEA / EVF (I'd never heard it called EVF before FTX)Will MacAskill personallyDonation practices during the "funding overhang" eraThere are some people/orgs I am more approving of:Peter Wildeford (and Rethink Priorities more generally)Rob WiblinDustin MoskovitzI'm not going to say anything about Peter/Rob/Dustin in this post, although the amount I approve them more does not change the net effect, which is less approving of EA.Lots of the reasons I am less approving of EA now than I was prior to FTX collapse are things I could have known. However I am aware of them because of the FTX collapse. Others might have already been aware of everything I'll mention, in which case I would agree with Duncan - with few exceptions, the communities reaction to the FTX collapse has been very good and I largely approve.So, what have I learned since the FTX collapse which makes me approve less of EA?Will MacAskill:Initial reactions thread I read this as both minimizing the event and distancing himself from SBF, which is not credible in light of:Elon relationship - you don't go and bat for someone with the richest guy in the world unless you are confident in who you're batting forTheir long history of a close relationships (shared board memberships, his earlier mentorship etc)Guzey's review (this episode is why I'm posting as a throwaway)Equivocal statements during the early crisis:Will MacAskillHolder Karnofsky (note the edit and Alexander Berger's comments before blaming Holden)There were more statements I was annoyed with at the time, but I can't remember enough of the specifics to search them right now. If I remember them, I will add them here. (Part of my praise for Rob was his statement was in sharp contrast of what was coming out at the time)Wytham Abbey - I don't especially want to relitigate this but it makes me think less of EA - you can read many takes on EAF - probably the fairest place to start is the reasoning for why it was bought.People who received funding in dubious ways and didn't say anything (North Dimension donating on behalf of FTX Future fund)EA (generally) treating FTX as something which "happened" to them, rather than being something they were intimately involved in. (Initial funding for Alameda coming from the EA community, ...)I suspect (but can't prove) that the EA orgs are using the excuse of legal advice / legal threats to avoid saying things they don't want to say for other reasons. (And my suspicion is that this is because they have done something which could be bad)People in the community knew SBF's image was manufactured to some degree but didn't say anything (general take from forum - summarized in this New Yorker piece)EA's earlier relationship with a sketchy billionaire (and the degree to which this was covered up)All of these things put together are enough for me to downgrade my opinion of EA. I've put this together from off the top of my head, so there are likely other things which have affected my view over the last month. To be clear - I am still very much in favor of EA principles, the EA community and will continue to donate to EA causes but will ...]]>
throwaway790 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:55 None full 4139
T85NxgeZTTZZpqBq2_EA EA - The Effective Altruism movement is not above conflicts of interest by sphor Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Effective Altruism movement is not above conflicts of interest, published by sphor on December 16, 2022 on The Effective Altruism Forum.I'm linking and excerpting a submission to the EA criticism contest published by a pseudonymous author on August 31, 2022 (i.e. before the collapse of FTX).The submission did not win a prize, but was highlighted by a panelist:I was unsure about including this post, but I think this post highlights an important risk of the EA community receiving a significant share of its funding from a few sources, both for internal community epistemics/culture considerations as well as for external-facing and movement-building considerations. I don't agree with all of the object-level claims, but I think these issues are important to highlight and plausibly relevant outside of the specific case of SBF / crypto. That it wasn't already on the forum (afaict) also contributed to its inclusion here.Due to concerns about copyright, I'm excerpting the post besides the summary and disclaimer, but I recommend reading the whole piece.SummarySam Bankman-Fried, founder of the cryptocurrency exchange FTX, is a major donator to the Effective Altruism ecosystem and has pledged to eventually donate his entire fortune to causes aligned with Effective Altruism.By relying heavily on ultra-wealthy individuals like Sam Bankman-Fried for funding, the Effective Altruism community is incentivized to accept political stances and moral judgments based on their alignment with the interests of its wealthy donators, instead of relying on a careful and rational examination of the quality and merits of these ideas. Yet, the Effective Altruism community does not appear to recognize that this creates potential conflicts with its stated mission of doing the most good by adhering to high standards of rationality and critical thought.In practice, Sam Bankman-Fried has enjoyed highly-favourable coverage from 80,000 Hours, an important actor in the Effective Altruism ecosystem. Given his donations to Effective Altruism, 80,000 Hours is, almost by definition, in a conflict of interest when it comes to communicating about Sam Bankman-Fried and his professional activities. This raises obvious questions regarding the trustworthiness of 80,000 Hours’ coverage of Sam Bankman-Fried and of topics his interests are linked with (quantitative trading, cryptocurrency, the FTX firm.).In this post, I argue that the Effective Altruism movement has failed to identify and publicize its own potential conflicts of interests. This failure reflects poorly on the quality of the standards the Effective Altruism movement holds itself to. Therefore, I invite outsiders and Effective Altruists alike to keep a healthy level of skepticism in mind when examining areas of the discourse and action of the Effective Altruism community that are susceptible to be affected by incentives conflicting with its stated mission. These incentives are not just financial in nature, they can also be linked to influence, prestige, or even emerge from personal friendships or other social dynamics. The Effective Altruism movement is not above being influenced by such incentives, and it seems urgent that it acts to minimize conflicts of interest.Introduction — Cryptocurrency is not neutral (neither morally nor politically)... Cryptocurrency is not simply an attempt to provide a set of technical solutions to improve existing currency systems. It is an attempt to replace existing monetary institutions by a new political system, it is therefore political at its core...My point here is not to debate on the virtues of the societal model promoted by cryptocurrency actors, but rather to convince readers unfamiliar with the cryptocurrency industry that it is deeply infused with political ideology and is certainly not a purely-te...]]>
sphor https://forum.effectivealtruism.org/posts/T85NxgeZTTZZpqBq2/the-effective-altruism-movement-is-not-above-conflicts-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Effective Altruism movement is not above conflicts of interest, published by sphor on December 16, 2022 on The Effective Altruism Forum.I'm linking and excerpting a submission to the EA criticism contest published by a pseudonymous author on August 31, 2022 (i.e. before the collapse of FTX).The submission did not win a prize, but was highlighted by a panelist:I was unsure about including this post, but I think this post highlights an important risk of the EA community receiving a significant share of its funding from a few sources, both for internal community epistemics/culture considerations as well as for external-facing and movement-building considerations. I don't agree with all of the object-level claims, but I think these issues are important to highlight and plausibly relevant outside of the specific case of SBF / crypto. That it wasn't already on the forum (afaict) also contributed to its inclusion here.Due to concerns about copyright, I'm excerpting the post besides the summary and disclaimer, but I recommend reading the whole piece.SummarySam Bankman-Fried, founder of the cryptocurrency exchange FTX, is a major donator to the Effective Altruism ecosystem and has pledged to eventually donate his entire fortune to causes aligned with Effective Altruism.By relying heavily on ultra-wealthy individuals like Sam Bankman-Fried for funding, the Effective Altruism community is incentivized to accept political stances and moral judgments based on their alignment with the interests of its wealthy donators, instead of relying on a careful and rational examination of the quality and merits of these ideas. Yet, the Effective Altruism community does not appear to recognize that this creates potential conflicts with its stated mission of doing the most good by adhering to high standards of rationality and critical thought.In practice, Sam Bankman-Fried has enjoyed highly-favourable coverage from 80,000 Hours, an important actor in the Effective Altruism ecosystem. Given his donations to Effective Altruism, 80,000 Hours is, almost by definition, in a conflict of interest when it comes to communicating about Sam Bankman-Fried and his professional activities. This raises obvious questions regarding the trustworthiness of 80,000 Hours’ coverage of Sam Bankman-Fried and of topics his interests are linked with (quantitative trading, cryptocurrency, the FTX firm.).In this post, I argue that the Effective Altruism movement has failed to identify and publicize its own potential conflicts of interests. This failure reflects poorly on the quality of the standards the Effective Altruism movement holds itself to. Therefore, I invite outsiders and Effective Altruists alike to keep a healthy level of skepticism in mind when examining areas of the discourse and action of the Effective Altruism community that are susceptible to be affected by incentives conflicting with its stated mission. These incentives are not just financial in nature, they can also be linked to influence, prestige, or even emerge from personal friendships or other social dynamics. The Effective Altruism movement is not above being influenced by such incentives, and it seems urgent that it acts to minimize conflicts of interest.Introduction — Cryptocurrency is not neutral (neither morally nor politically)... Cryptocurrency is not simply an attempt to provide a set of technical solutions to improve existing currency systems. It is an attempt to replace existing monetary institutions by a new political system, it is therefore political at its core...My point here is not to debate on the virtues of the societal model promoted by cryptocurrency actors, but rather to convince readers unfamiliar with the cryptocurrency industry that it is deeply infused with political ideology and is certainly not a purely-te...]]>
Fri, 16 Dec 2022 16:26:57 +0000 EA - The Effective Altruism movement is not above conflicts of interest by sphor Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Effective Altruism movement is not above conflicts of interest, published by sphor on December 16, 2022 on The Effective Altruism Forum.I'm linking and excerpting a submission to the EA criticism contest published by a pseudonymous author on August 31, 2022 (i.e. before the collapse of FTX).The submission did not win a prize, but was highlighted by a panelist:I was unsure about including this post, but I think this post highlights an important risk of the EA community receiving a significant share of its funding from a few sources, both for internal community epistemics/culture considerations as well as for external-facing and movement-building considerations. I don't agree with all of the object-level claims, but I think these issues are important to highlight and plausibly relevant outside of the specific case of SBF / crypto. That it wasn't already on the forum (afaict) also contributed to its inclusion here.Due to concerns about copyright, I'm excerpting the post besides the summary and disclaimer, but I recommend reading the whole piece.SummarySam Bankman-Fried, founder of the cryptocurrency exchange FTX, is a major donator to the Effective Altruism ecosystem and has pledged to eventually donate his entire fortune to causes aligned with Effective Altruism.By relying heavily on ultra-wealthy individuals like Sam Bankman-Fried for funding, the Effective Altruism community is incentivized to accept political stances and moral judgments based on their alignment with the interests of its wealthy donators, instead of relying on a careful and rational examination of the quality and merits of these ideas. Yet, the Effective Altruism community does not appear to recognize that this creates potential conflicts with its stated mission of doing the most good by adhering to high standards of rationality and critical thought.In practice, Sam Bankman-Fried has enjoyed highly-favourable coverage from 80,000 Hours, an important actor in the Effective Altruism ecosystem. Given his donations to Effective Altruism, 80,000 Hours is, almost by definition, in a conflict of interest when it comes to communicating about Sam Bankman-Fried and his professional activities. This raises obvious questions regarding the trustworthiness of 80,000 Hours’ coverage of Sam Bankman-Fried and of topics his interests are linked with (quantitative trading, cryptocurrency, the FTX firm.).In this post, I argue that the Effective Altruism movement has failed to identify and publicize its own potential conflicts of interests. This failure reflects poorly on the quality of the standards the Effective Altruism movement holds itself to. Therefore, I invite outsiders and Effective Altruists alike to keep a healthy level of skepticism in mind when examining areas of the discourse and action of the Effective Altruism community that are susceptible to be affected by incentives conflicting with its stated mission. These incentives are not just financial in nature, they can also be linked to influence, prestige, or even emerge from personal friendships or other social dynamics. The Effective Altruism movement is not above being influenced by such incentives, and it seems urgent that it acts to minimize conflicts of interest.Introduction — Cryptocurrency is not neutral (neither morally nor politically)... Cryptocurrency is not simply an attempt to provide a set of technical solutions to improve existing currency systems. It is an attempt to replace existing monetary institutions by a new political system, it is therefore political at its core...My point here is not to debate on the virtues of the societal model promoted by cryptocurrency actors, but rather to convince readers unfamiliar with the cryptocurrency industry that it is deeply infused with political ideology and is certainly not a purely-te...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Effective Altruism movement is not above conflicts of interest, published by sphor on December 16, 2022 on The Effective Altruism Forum.I'm linking and excerpting a submission to the EA criticism contest published by a pseudonymous author on August 31, 2022 (i.e. before the collapse of FTX).The submission did not win a prize, but was highlighted by a panelist:I was unsure about including this post, but I think this post highlights an important risk of the EA community receiving a significant share of its funding from a few sources, both for internal community epistemics/culture considerations as well as for external-facing and movement-building considerations. I don't agree with all of the object-level claims, but I think these issues are important to highlight and plausibly relevant outside of the specific case of SBF / crypto. That it wasn't already on the forum (afaict) also contributed to its inclusion here.Due to concerns about copyright, I'm excerpting the post besides the summary and disclaimer, but I recommend reading the whole piece.SummarySam Bankman-Fried, founder of the cryptocurrency exchange FTX, is a major donator to the Effective Altruism ecosystem and has pledged to eventually donate his entire fortune to causes aligned with Effective Altruism.By relying heavily on ultra-wealthy individuals like Sam Bankman-Fried for funding, the Effective Altruism community is incentivized to accept political stances and moral judgments based on their alignment with the interests of its wealthy donators, instead of relying on a careful and rational examination of the quality and merits of these ideas. Yet, the Effective Altruism community does not appear to recognize that this creates potential conflicts with its stated mission of doing the most good by adhering to high standards of rationality and critical thought.In practice, Sam Bankman-Fried has enjoyed highly-favourable coverage from 80,000 Hours, an important actor in the Effective Altruism ecosystem. Given his donations to Effective Altruism, 80,000 Hours is, almost by definition, in a conflict of interest when it comes to communicating about Sam Bankman-Fried and his professional activities. This raises obvious questions regarding the trustworthiness of 80,000 Hours’ coverage of Sam Bankman-Fried and of topics his interests are linked with (quantitative trading, cryptocurrency, the FTX firm.).In this post, I argue that the Effective Altruism movement has failed to identify and publicize its own potential conflicts of interests. This failure reflects poorly on the quality of the standards the Effective Altruism movement holds itself to. Therefore, I invite outsiders and Effective Altruists alike to keep a healthy level of skepticism in mind when examining areas of the discourse and action of the Effective Altruism community that are susceptible to be affected by incentives conflicting with its stated mission. These incentives are not just financial in nature, they can also be linked to influence, prestige, or even emerge from personal friendships or other social dynamics. The Effective Altruism movement is not above being influenced by such incentives, and it seems urgent that it acts to minimize conflicts of interest.Introduction — Cryptocurrency is not neutral (neither morally nor politically)... Cryptocurrency is not simply an attempt to provide a set of technical solutions to improve existing currency systems. It is an attempt to replace existing monetary institutions by a new political system, it is therefore political at its core...My point here is not to debate on the virtues of the societal model promoted by cryptocurrency actors, but rather to convince readers unfamiliar with the cryptocurrency industry that it is deeply infused with political ideology and is certainly not a purely-te...]]>
sphor https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:32 None full 4140
FiJZduXnddAGSveAC_EA EA - MacKenzie Scott's grantmaking data by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MacKenzie Scott's grantmaking data, published by david reinstein on December 15, 2022 on The Effective Altruism Forum."MacKenzie Scott's finally revealed her grants to date" Dylan Mathews notes in this Twitter thread.It seems mostly USA-directed, but some 'good stuff from an EA POV'. I'm working on unpacking this further.Above: her donations in $ millions classified by year and by whether they go to a charity that includes a US location under 'Geographies of service', but may also operate abroad.(Next step: classify the amount that might go abroad, and/or might be effective.)My code is here, feel free to continue to continue this work, I'm taking a break for now.Her largest donations across years (my aggregation):Dylan's aggregation by org:Below, some of her largest donations in particular years:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
david reinstein https://forum.effectivealtruism.org/posts/FiJZduXnddAGSveAC/mackenzie-scott-s-grantmaking-data Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MacKenzie Scott's grantmaking data, published by david reinstein on December 15, 2022 on The Effective Altruism Forum."MacKenzie Scott's finally revealed her grants to date" Dylan Mathews notes in this Twitter thread.It seems mostly USA-directed, but some 'good stuff from an EA POV'. I'm working on unpacking this further.Above: her donations in $ millions classified by year and by whether they go to a charity that includes a US location under 'Geographies of service', but may also operate abroad.(Next step: classify the amount that might go abroad, and/or might be effective.)My code is here, feel free to continue to continue this work, I'm taking a break for now.Her largest donations across years (my aggregation):Dylan's aggregation by org:Below, some of her largest donations in particular years:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 16 Dec 2022 14:44:11 +0000 EA - MacKenzie Scott's grantmaking data by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MacKenzie Scott's grantmaking data, published by david reinstein on December 15, 2022 on The Effective Altruism Forum."MacKenzie Scott's finally revealed her grants to date" Dylan Mathews notes in this Twitter thread.It seems mostly USA-directed, but some 'good stuff from an EA POV'. I'm working on unpacking this further.Above: her donations in $ millions classified by year and by whether they go to a charity that includes a US location under 'Geographies of service', but may also operate abroad.(Next step: classify the amount that might go abroad, and/or might be effective.)My code is here, feel free to continue to continue this work, I'm taking a break for now.Her largest donations across years (my aggregation):Dylan's aggregation by org:Below, some of her largest donations in particular years:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MacKenzie Scott's grantmaking data, published by david reinstein on December 15, 2022 on The Effective Altruism Forum."MacKenzie Scott's finally revealed her grants to date" Dylan Mathews notes in this Twitter thread.It seems mostly USA-directed, but some 'good stuff from an EA POV'. I'm working on unpacking this further.Above: her donations in $ millions classified by year and by whether they go to a charity that includes a US location under 'Geographies of service', but may also operate abroad.(Next step: classify the amount that might go abroad, and/or might be effective.)My code is here, feel free to continue to continue this work, I'm taking a break for now.Her largest donations across years (my aggregation):Dylan's aggregation by org:Below, some of her largest donations in particular years:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
david reinstein https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:03 None full 4142
LXj4cs5dLqDHwJynp_EA EA - Radical tactics can increase support for more moderate groups by James Ozden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Radical tactics can increase support for more moderate groups, published by James Ozden on December 16, 2022 on The Effective Altruism Forum.Note: This post is a synthesised version of this report, which analyses some public opinion polling we did for a disruptive climate-focused campaign in the UK, by Just Stop Oil. I think this might be relevant to EAs as I've generally perceived there to be quite a lot of skepticism about groups who employ "radical" tactics (e.g. disruptive protest) as it might taint a certain field, or otherwise slow overall progress. Whilst this work only answers the public opinion component of this concern, we find that radical tactics can actually increase support for and identification with more moderate organisations working on the same issue - a helpful dynamic. However, it's plausible that this works best for large social movements (e.g. the climate movement) where distinctions between a radical faction and moderate faction are much more clear. That said, we also found some trends towards polarisation that we intend to analyse further.SummarySocial Change Lab conducted nationally representative YouGov surveys, before and after a week-long campaign by Just Stop Oil to block the M25 motorway. These surveys were conducted longitudinally, by surveying the same people before and after the Just Stop Oil M25 campaign. Our aim was to see if a ‘radical flank effect’ was at play: did the radical tactics implemented by Just Stop Oil impact attitudes towards more moderate UK climate organisations? We surveyed 1,415 members of the public about their support for climate policies and support for and identification with a more moderate climate organisation (Friends of the Earth). We detected a positive radical flank effect, whereby increased awareness of Just Stop Oil resulted in increased support for and identification with Friends of the Earth (p=0.004 and p=0.007 respectively).We believe this is the first time the radical flank effect has been observed empirically using large-scale nationally representative polling, and it corroborates previous experimental findings by Simpson et al. (2022). The results indicate the potential positive effects of radical tactics on a broader social movement.Support for climate policies also increased between our two surveys, though we attribute this largely to media coverage of COP27, which took place at the same time as the M25 protests. We believe this change was largely due to COP27 as we observed no positive correlation between awareness of Just Stop Oil and support for climate policies, and in fact, observed a statistically non-significant (p = 0.19) negative association.Key Results:Over 92% of UK adults had heard of Just Stop Oil after their campaign, putting awareness of the organisation as high as the top 20 UK charities. This figure was 87% before this campaign started (as people may have reported higher awareness of Just Stop Oil purely due to our first survey).The number of people saying they support Friends of the Earth increased from 50.3% to 52.9% of the population, a 2.6 percentage point increase, equivalent to 1.75 million people in the UK.A clear positive radical flank effect: Increased awareness of Just Stop Oil after their M25 protest campaign was linked with stronger identification with and support for Friends of the Earth.A trend towards polarisation: Increased awareness of Just Stop Oil through their M25 campaign tended to make people who had low baseline identification with a moderate climate organisation reduce their support for climate policies; the opposite was true for people with high levels of baseline identification, who showed increased support for climate policies with increasing awareness of Just Stop Oil.IntroductionSocial movements are often made up of several factions, dep...]]>
James Ozden https://forum.effectivealtruism.org/posts/LXj4cs5dLqDHwJynp/radical-tactics-can-increase-support-for-more-moderate Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Radical tactics can increase support for more moderate groups, published by James Ozden on December 16, 2022 on The Effective Altruism Forum.Note: This post is a synthesised version of this report, which analyses some public opinion polling we did for a disruptive climate-focused campaign in the UK, by Just Stop Oil. I think this might be relevant to EAs as I've generally perceived there to be quite a lot of skepticism about groups who employ "radical" tactics (e.g. disruptive protest) as it might taint a certain field, or otherwise slow overall progress. Whilst this work only answers the public opinion component of this concern, we find that radical tactics can actually increase support for and identification with more moderate organisations working on the same issue - a helpful dynamic. However, it's plausible that this works best for large social movements (e.g. the climate movement) where distinctions between a radical faction and moderate faction are much more clear. That said, we also found some trends towards polarisation that we intend to analyse further.SummarySocial Change Lab conducted nationally representative YouGov surveys, before and after a week-long campaign by Just Stop Oil to block the M25 motorway. These surveys were conducted longitudinally, by surveying the same people before and after the Just Stop Oil M25 campaign. Our aim was to see if a ‘radical flank effect’ was at play: did the radical tactics implemented by Just Stop Oil impact attitudes towards more moderate UK climate organisations? We surveyed 1,415 members of the public about their support for climate policies and support for and identification with a more moderate climate organisation (Friends of the Earth). We detected a positive radical flank effect, whereby increased awareness of Just Stop Oil resulted in increased support for and identification with Friends of the Earth (p=0.004 and p=0.007 respectively).We believe this is the first time the radical flank effect has been observed empirically using large-scale nationally representative polling, and it corroborates previous experimental findings by Simpson et al. (2022). The results indicate the potential positive effects of radical tactics on a broader social movement.Support for climate policies also increased between our two surveys, though we attribute this largely to media coverage of COP27, which took place at the same time as the M25 protests. We believe this change was largely due to COP27 as we observed no positive correlation between awareness of Just Stop Oil and support for climate policies, and in fact, observed a statistically non-significant (p = 0.19) negative association.Key Results:Over 92% of UK adults had heard of Just Stop Oil after their campaign, putting awareness of the organisation as high as the top 20 UK charities. This figure was 87% before this campaign started (as people may have reported higher awareness of Just Stop Oil purely due to our first survey).The number of people saying they support Friends of the Earth increased from 50.3% to 52.9% of the population, a 2.6 percentage point increase, equivalent to 1.75 million people in the UK.A clear positive radical flank effect: Increased awareness of Just Stop Oil after their M25 protest campaign was linked with stronger identification with and support for Friends of the Earth.A trend towards polarisation: Increased awareness of Just Stop Oil through their M25 campaign tended to make people who had low baseline identification with a moderate climate organisation reduce their support for climate policies; the opposite was true for people with high levels of baseline identification, who showed increased support for climate policies with increasing awareness of Just Stop Oil.IntroductionSocial movements are often made up of several factions, dep...]]>
Fri, 16 Dec 2022 14:02:09 +0000 EA - Radical tactics can increase support for more moderate groups by James Ozden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Radical tactics can increase support for more moderate groups, published by James Ozden on December 16, 2022 on The Effective Altruism Forum.Note: This post is a synthesised version of this report, which analyses some public opinion polling we did for a disruptive climate-focused campaign in the UK, by Just Stop Oil. I think this might be relevant to EAs as I've generally perceived there to be quite a lot of skepticism about groups who employ "radical" tactics (e.g. disruptive protest) as it might taint a certain field, or otherwise slow overall progress. Whilst this work only answers the public opinion component of this concern, we find that radical tactics can actually increase support for and identification with more moderate organisations working on the same issue - a helpful dynamic. However, it's plausible that this works best for large social movements (e.g. the climate movement) where distinctions between a radical faction and moderate faction are much more clear. That said, we also found some trends towards polarisation that we intend to analyse further.SummarySocial Change Lab conducted nationally representative YouGov surveys, before and after a week-long campaign by Just Stop Oil to block the M25 motorway. These surveys were conducted longitudinally, by surveying the same people before and after the Just Stop Oil M25 campaign. Our aim was to see if a ‘radical flank effect’ was at play: did the radical tactics implemented by Just Stop Oil impact attitudes towards more moderate UK climate organisations? We surveyed 1,415 members of the public about their support for climate policies and support for and identification with a more moderate climate organisation (Friends of the Earth). We detected a positive radical flank effect, whereby increased awareness of Just Stop Oil resulted in increased support for and identification with Friends of the Earth (p=0.004 and p=0.007 respectively).We believe this is the first time the radical flank effect has been observed empirically using large-scale nationally representative polling, and it corroborates previous experimental findings by Simpson et al. (2022). The results indicate the potential positive effects of radical tactics on a broader social movement.Support for climate policies also increased between our two surveys, though we attribute this largely to media coverage of COP27, which took place at the same time as the M25 protests. We believe this change was largely due to COP27 as we observed no positive correlation between awareness of Just Stop Oil and support for climate policies, and in fact, observed a statistically non-significant (p = 0.19) negative association.Key Results:Over 92% of UK adults had heard of Just Stop Oil after their campaign, putting awareness of the organisation as high as the top 20 UK charities. This figure was 87% before this campaign started (as people may have reported higher awareness of Just Stop Oil purely due to our first survey).The number of people saying they support Friends of the Earth increased from 50.3% to 52.9% of the population, a 2.6 percentage point increase, equivalent to 1.75 million people in the UK.A clear positive radical flank effect: Increased awareness of Just Stop Oil after their M25 protest campaign was linked with stronger identification with and support for Friends of the Earth.A trend towards polarisation: Increased awareness of Just Stop Oil through their M25 campaign tended to make people who had low baseline identification with a moderate climate organisation reduce their support for climate policies; the opposite was true for people with high levels of baseline identification, who showed increased support for climate policies with increasing awareness of Just Stop Oil.IntroductionSocial movements are often made up of several factions, dep...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Radical tactics can increase support for more moderate groups, published by James Ozden on December 16, 2022 on The Effective Altruism Forum.Note: This post is a synthesised version of this report, which analyses some public opinion polling we did for a disruptive climate-focused campaign in the UK, by Just Stop Oil. I think this might be relevant to EAs as I've generally perceived there to be quite a lot of skepticism about groups who employ "radical" tactics (e.g. disruptive protest) as it might taint a certain field, or otherwise slow overall progress. Whilst this work only answers the public opinion component of this concern, we find that radical tactics can actually increase support for and identification with more moderate organisations working on the same issue - a helpful dynamic. However, it's plausible that this works best for large social movements (e.g. the climate movement) where distinctions between a radical faction and moderate faction are much more clear. That said, we also found some trends towards polarisation that we intend to analyse further.SummarySocial Change Lab conducted nationally representative YouGov surveys, before and after a week-long campaign by Just Stop Oil to block the M25 motorway. These surveys were conducted longitudinally, by surveying the same people before and after the Just Stop Oil M25 campaign. Our aim was to see if a ‘radical flank effect’ was at play: did the radical tactics implemented by Just Stop Oil impact attitudes towards more moderate UK climate organisations? We surveyed 1,415 members of the public about their support for climate policies and support for and identification with a more moderate climate organisation (Friends of the Earth). We detected a positive radical flank effect, whereby increased awareness of Just Stop Oil resulted in increased support for and identification with Friends of the Earth (p=0.004 and p=0.007 respectively).We believe this is the first time the radical flank effect has been observed empirically using large-scale nationally representative polling, and it corroborates previous experimental findings by Simpson et al. (2022). The results indicate the potential positive effects of radical tactics on a broader social movement.Support for climate policies also increased between our two surveys, though we attribute this largely to media coverage of COP27, which took place at the same time as the M25 protests. We believe this change was largely due to COP27 as we observed no positive correlation between awareness of Just Stop Oil and support for climate policies, and in fact, observed a statistically non-significant (p = 0.19) negative association.Key Results:Over 92% of UK adults had heard of Just Stop Oil after their campaign, putting awareness of the organisation as high as the top 20 UK charities. This figure was 87% before this campaign started (as people may have reported higher awareness of Just Stop Oil purely due to our first survey).The number of people saying they support Friends of the Earth increased from 50.3% to 52.9% of the population, a 2.6 percentage point increase, equivalent to 1.75 million people in the UK.A clear positive radical flank effect: Increased awareness of Just Stop Oil after their M25 protest campaign was linked with stronger identification with and support for Friends of the Earth.A trend towards polarisation: Increased awareness of Just Stop Oil through their M25 campaign tended to make people who had low baseline identification with a moderate climate organisation reduce their support for climate policies; the opposite was true for people with high levels of baseline identification, who showed increased support for climate policies with increasing awareness of Just Stop Oil.IntroductionSocial movements are often made up of several factions, dep...]]>
James Ozden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 26:10 None full 4141
vHogKj87ZBLgN8TpQ_EA EA - Today is Draft Amnesty Day (December 16-18) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Today is Draft Amnesty Day (December 16-18), published by Lizka on December 16, 2022 on The Effective Altruism Forum.TL;DR: Today is an official “Draft Amnesty Day” on the Forum. People are encouraged to share unfinished posts, unpolished writing, butterfly ideas, thoughts they’re not sure they endorse, etc.This post explains how to:Participate ⬇️Explore what people have written ⬇️Hide these posts so that you don’t need to see them ⬇️Participate!If you have a draft lying around that you’ve never gotten around to polishing or checking, you are invited to post it today. (Here's why I think that's valuable.)To make it clear that it’s a “Draft Amnesty Day” post, do the following:Tag it with the Draft Amnesty Day tagCopy-paste the following table into the post, and modify it as you’d like:This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.You can also elaborate more on what types of feedback would be most useful, the context for the original draft, and the like.Explore what people have writtenHere’s a list of posts that have been published with this tag. Explore them! If you like one of them, tell the author! You can also give constructive feedback, which could be enormously useful to the poster, but please be generous, respect the norms, and see if the author has asked for certain types of feedback specifically.How to hide these posts entirelyYou can hide Draft Amnesty Day posts entirely by using a tag filter. Go to the Frontpage of the Forum, then hover over the “Draft Amnesty Day” tag (find it to the right of the “Frtonpage Posts” header), and click on “Hidden.” Posts with this tag will stop showing up for you.Depending on your timezone, you might see this early! In that case, tomorrow is a Draft Amnesty Day.If it's early, there might not be content there, yet.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://forum.effectivealtruism.org/posts/vHogKj87ZBLgN8TpQ/today-is-draft-amnesty-day-december-16-18 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Today is Draft Amnesty Day (December 16-18), published by Lizka on December 16, 2022 on The Effective Altruism Forum.TL;DR: Today is an official “Draft Amnesty Day” on the Forum. People are encouraged to share unfinished posts, unpolished writing, butterfly ideas, thoughts they’re not sure they endorse, etc.This post explains how to:Participate ⬇️Explore what people have written ⬇️Hide these posts so that you don’t need to see them ⬇️Participate!If you have a draft lying around that you’ve never gotten around to polishing or checking, you are invited to post it today. (Here's why I think that's valuable.)To make it clear that it’s a “Draft Amnesty Day” post, do the following:Tag it with the Draft Amnesty Day tagCopy-paste the following table into the post, and modify it as you’d like:This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.You can also elaborate more on what types of feedback would be most useful, the context for the original draft, and the like.Explore what people have writtenHere’s a list of posts that have been published with this tag. Explore them! If you like one of them, tell the author! You can also give constructive feedback, which could be enormously useful to the poster, but please be generous, respect the norms, and see if the author has asked for certain types of feedback specifically.How to hide these posts entirelyYou can hide Draft Amnesty Day posts entirely by using a tag filter. Go to the Frontpage of the Forum, then hover over the “Draft Amnesty Day” tag (find it to the right of the “Frtonpage Posts” header), and click on “Hidden.” Posts with this tag will stop showing up for you.Depending on your timezone, you might see this early! In that case, tomorrow is a Draft Amnesty Day.If it's early, there might not be content there, yet.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 16 Dec 2022 08:54:00 +0000 EA - Today is Draft Amnesty Day (December 16-18) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Today is Draft Amnesty Day (December 16-18), published by Lizka on December 16, 2022 on The Effective Altruism Forum.TL;DR: Today is an official “Draft Amnesty Day” on the Forum. People are encouraged to share unfinished posts, unpolished writing, butterfly ideas, thoughts they’re not sure they endorse, etc.This post explains how to:Participate ⬇️Explore what people have written ⬇️Hide these posts so that you don’t need to see them ⬇️Participate!If you have a draft lying around that you’ve never gotten around to polishing or checking, you are invited to post it today. (Here's why I think that's valuable.)To make it clear that it’s a “Draft Amnesty Day” post, do the following:Tag it with the Draft Amnesty Day tagCopy-paste the following table into the post, and modify it as you’d like:This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.You can also elaborate more on what types of feedback would be most useful, the context for the original draft, and the like.Explore what people have writtenHere’s a list of posts that have been published with this tag. Explore them! If you like one of them, tell the author! You can also give constructive feedback, which could be enormously useful to the poster, but please be generous, respect the norms, and see if the author has asked for certain types of feedback specifically.How to hide these posts entirelyYou can hide Draft Amnesty Day posts entirely by using a tag filter. Go to the Frontpage of the Forum, then hover over the “Draft Amnesty Day” tag (find it to the right of the “Frtonpage Posts” header), and click on “Hidden.” Posts with this tag will stop showing up for you.Depending on your timezone, you might see this early! In that case, tomorrow is a Draft Amnesty Day.If it's early, there might not be content there, yet.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Today is Draft Amnesty Day (December 16-18), published by Lizka on December 16, 2022 on The Effective Altruism Forum.TL;DR: Today is an official “Draft Amnesty Day” on the Forum. People are encouraged to share unfinished posts, unpolished writing, butterfly ideas, thoughts they’re not sure they endorse, etc.This post explains how to:Participate ⬇️Explore what people have written ⬇️Hide these posts so that you don’t need to see them ⬇️Participate!If you have a draft lying around that you’ve never gotten around to polishing or checking, you are invited to post it today. (Here's why I think that's valuable.)To make it clear that it’s a “Draft Amnesty Day” post, do the following:Tag it with the Draft Amnesty Day tagCopy-paste the following table into the post, and modify it as you’d like:This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.You can also elaborate more on what types of feedback would be most useful, the context for the original draft, and the like.Explore what people have writtenHere’s a list of posts that have been published with this tag. Explore them! If you like one of them, tell the author! You can also give constructive feedback, which could be enormously useful to the poster, but please be generous, respect the norms, and see if the author has asked for certain types of feedback specifically.How to hide these posts entirelyYou can hide Draft Amnesty Day posts entirely by using a tag filter. Go to the Frontpage of the Forum, then hover over the “Draft Amnesty Day” tag (find it to the right of the “Frtonpage Posts” header), and click on “Hidden.” Posts with this tag will stop showing up for you.Depending on your timezone, you might see this early! In that case, tomorrow is a Draft Amnesty Day.If it's early, there might not be content there, yet.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:26 None full 4135
4bFgFRNL2nKbh2s2p_EA EA - $50 TisBest Charity Gift Card to the first 20,000 people who sign up by Michael Huang Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: $50 TisBest Charity Gift Card to the first 20,000 people who sign up, published by Michael Huang on December 15, 2022 on The Effective Altruism Forum.$50 Charity Gift Cards are available today on a first come basis.“We’re helping this idea catch fire by offering a $50 Charity Gift Card to the first 20,000 people who sign up so that more people can experience this uniquely meaningful holiday gift for themselves. There are no strings attached. Our hope is simply that once you see how easy and joyous this kind of gift can be, you’ll choose to give it to your friends and family. We also ask that if you claimed one of these free cards last year, you either let others participate this year or pass this offer along to someone else who hasn’t received one yet.”Paul Tudor JonesThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Michael Huang https://forum.effectivealtruism.org/posts/4bFgFRNL2nKbh2s2p/usd50-tisbest-charity-gift-card-to-the-first-20-000-people Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: $50 TisBest Charity Gift Card to the first 20,000 people who sign up, published by Michael Huang on December 15, 2022 on The Effective Altruism Forum.$50 Charity Gift Cards are available today on a first come basis.“We’re helping this idea catch fire by offering a $50 Charity Gift Card to the first 20,000 people who sign up so that more people can experience this uniquely meaningful holiday gift for themselves. There are no strings attached. Our hope is simply that once you see how easy and joyous this kind of gift can be, you’ll choose to give it to your friends and family. We also ask that if you claimed one of these free cards last year, you either let others participate this year or pass this offer along to someone else who hasn’t received one yet.”Paul Tudor JonesThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 16 Dec 2022 00:34:25 +0000 EA - $50 TisBest Charity Gift Card to the first 20,000 people who sign up by Michael Huang Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: $50 TisBest Charity Gift Card to the first 20,000 people who sign up, published by Michael Huang on December 15, 2022 on The Effective Altruism Forum.$50 Charity Gift Cards are available today on a first come basis.“We’re helping this idea catch fire by offering a $50 Charity Gift Card to the first 20,000 people who sign up so that more people can experience this uniquely meaningful holiday gift for themselves. There are no strings attached. Our hope is simply that once you see how easy and joyous this kind of gift can be, you’ll choose to give it to your friends and family. We also ask that if you claimed one of these free cards last year, you either let others participate this year or pass this offer along to someone else who hasn’t received one yet.”Paul Tudor JonesThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: $50 TisBest Charity Gift Card to the first 20,000 people who sign up, published by Michael Huang on December 15, 2022 on The Effective Altruism Forum.$50 Charity Gift Cards are available today on a first come basis.“We’re helping this idea catch fire by offering a $50 Charity Gift Card to the first 20,000 people who sign up so that more people can experience this uniquely meaningful holiday gift for themselves. There are no strings attached. Our hope is simply that once you see how easy and joyous this kind of gift can be, you’ll choose to give it to your friends and family. We also ask that if you claimed one of these free cards last year, you either let others participate this year or pass this offer along to someone else who hasn’t received one yet.”Paul Tudor JonesThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Michael Huang https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:03 None full 4137
CZXDTSvnFFe99kNoB_EA EA - I'm as approving of the EA community now as before the FTX collapse by Will Aldred Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm as approving of the EA community now as before the FTX collapse, published by Will Aldred on December 15, 2022 on The Effective Altruism Forum.(Posting on behalf of and with permission from Duncan Sabien; the first person speaker is Duncan. Full text below.)Just took the Effective Altruism survey, and it had an extra, optional section that had a lot of questions about the FTX stuff, and trust, and how EA should respond, and what I think of it, and so forth.I'm not really an EA; haven't taken the pledge, don't work at an org, have been to fewer than twenty EA meetups in my life (though I've been a speaker at multiple EA Globals).However, I've been close to, and quite fond of, and at least a little protective of the EA community, for the past seven years (for instance, volunteering to speak at multiple EA Globals!).And the questions on the survey made me want to note publicly:I feel almost exactly as fond of, and approving of, and protective of, the EA community writ-large as I have for the past seven years.I get that the FTX thing is a big deal. Every EA I've seen has treated it as a big deal. It's being taken seriously, and I've seen soul-searching at every level from the individual up to the meta-organizational.I think there's a mistake that the average person tends to make, though, which is something like "if a plane crashes, something absolutely needs to visibly change."Often, when a plane crashes, something absolutely needs to change! Often there are legitimate flaws in the system that need to be patched.But sometimes, you just get that confluence of three one-in-a-thousand events. Sometimes, the right answer really is "our system shouldn't change; this is the exception."I'm not claiming that's the case here, at all. I'm mentioning it because:it is in fact sometimes true, and the outrage machine doesn't take that fact into accountit is a good reminder of the difference between improvement and improvement theater.There are actions you can take to look like you're taking things seriously, and there are actions you can take because you're actually taking things seriously.Public relations are also important, so some amount of the former category belongs in the latter category.But overall, I'm not interested in insisting that the individuals and organizations in the Effective Altruism sphere do things that look to me, from the outside, like sufficient due diligence, in response to this crisis.I want them to simply respond. To the best of their ability, in the ways that seem right to them.And I do, in fact, trust that that is happening. In part because of the glimpses I've caught, but also in part because that's just ... firmly in my model of these people and these orgs.Or to put it another way: the FTX thing was a blindside, and a negative update, but it was a negative update in the slack. It was the kind of out-of-distribution, surprisingly bad event that I sort of ... budget room for, on the meta level?I expect there to be one or two bad things here and there, when you're trying to coordinate thousands and thousands of individuals all pulling in different directions. This particular badness was in an extremely high-leverage place, and that sucks, but that feels more like bad luck than like ... "HOW DARE YOU ALL NOT HAVE PREDICTED THIS SORT OF THING AND ALSO BEEN FULLY ROBUST AGAINST ANY SUCH SHENANIGANERY AT ALL TIMES AND FROM ALL ANGLES."Some people are shouting stuff like that. But I think those people are being unreasonable, and not considering what a world with that level of vigilance actually looks like, in practice (hint: it looks like paralysis).I think that if a second disaster falls (or if a second looming disaster is uncovered and prevented before it actually breaks) then sure, I will owe those people an apology, and will make a p...]]>
Will Aldred https://forum.effectivealtruism.org/posts/CZXDTSvnFFe99kNoB/i-m-as-approving-of-the-ea-community-now-as-before-the-ftx Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm as approving of the EA community now as before the FTX collapse, published by Will Aldred on December 15, 2022 on The Effective Altruism Forum.(Posting on behalf of and with permission from Duncan Sabien; the first person speaker is Duncan. Full text below.)Just took the Effective Altruism survey, and it had an extra, optional section that had a lot of questions about the FTX stuff, and trust, and how EA should respond, and what I think of it, and so forth.I'm not really an EA; haven't taken the pledge, don't work at an org, have been to fewer than twenty EA meetups in my life (though I've been a speaker at multiple EA Globals).However, I've been close to, and quite fond of, and at least a little protective of the EA community, for the past seven years (for instance, volunteering to speak at multiple EA Globals!).And the questions on the survey made me want to note publicly:I feel almost exactly as fond of, and approving of, and protective of, the EA community writ-large as I have for the past seven years.I get that the FTX thing is a big deal. Every EA I've seen has treated it as a big deal. It's being taken seriously, and I've seen soul-searching at every level from the individual up to the meta-organizational.I think there's a mistake that the average person tends to make, though, which is something like "if a plane crashes, something absolutely needs to visibly change."Often, when a plane crashes, something absolutely needs to change! Often there are legitimate flaws in the system that need to be patched.But sometimes, you just get that confluence of three one-in-a-thousand events. Sometimes, the right answer really is "our system shouldn't change; this is the exception."I'm not claiming that's the case here, at all. I'm mentioning it because:it is in fact sometimes true, and the outrage machine doesn't take that fact into accountit is a good reminder of the difference between improvement and improvement theater.There are actions you can take to look like you're taking things seriously, and there are actions you can take because you're actually taking things seriously.Public relations are also important, so some amount of the former category belongs in the latter category.But overall, I'm not interested in insisting that the individuals and organizations in the Effective Altruism sphere do things that look to me, from the outside, like sufficient due diligence, in response to this crisis.I want them to simply respond. To the best of their ability, in the ways that seem right to them.And I do, in fact, trust that that is happening. In part because of the glimpses I've caught, but also in part because that's just ... firmly in my model of these people and these orgs.Or to put it another way: the FTX thing was a blindside, and a negative update, but it was a negative update in the slack. It was the kind of out-of-distribution, surprisingly bad event that I sort of ... budget room for, on the meta level?I expect there to be one or two bad things here and there, when you're trying to coordinate thousands and thousands of individuals all pulling in different directions. This particular badness was in an extremely high-leverage place, and that sucks, but that feels more like bad luck than like ... "HOW DARE YOU ALL NOT HAVE PREDICTED THIS SORT OF THING AND ALSO BEEN FULLY ROBUST AGAINST ANY SUCH SHENANIGANERY AT ALL TIMES AND FROM ALL ANGLES."Some people are shouting stuff like that. But I think those people are being unreasonable, and not considering what a world with that level of vigilance actually looks like, in practice (hint: it looks like paralysis).I think that if a second disaster falls (or if a second looming disaster is uncovered and prevented before it actually breaks) then sure, I will owe those people an apology, and will make a p...]]>
Thu, 15 Dec 2022 23:36:16 +0000 EA - I'm as approving of the EA community now as before the FTX collapse by Will Aldred Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm as approving of the EA community now as before the FTX collapse, published by Will Aldred on December 15, 2022 on The Effective Altruism Forum.(Posting on behalf of and with permission from Duncan Sabien; the first person speaker is Duncan. Full text below.)Just took the Effective Altruism survey, and it had an extra, optional section that had a lot of questions about the FTX stuff, and trust, and how EA should respond, and what I think of it, and so forth.I'm not really an EA; haven't taken the pledge, don't work at an org, have been to fewer than twenty EA meetups in my life (though I've been a speaker at multiple EA Globals).However, I've been close to, and quite fond of, and at least a little protective of the EA community, for the past seven years (for instance, volunteering to speak at multiple EA Globals!).And the questions on the survey made me want to note publicly:I feel almost exactly as fond of, and approving of, and protective of, the EA community writ-large as I have for the past seven years.I get that the FTX thing is a big deal. Every EA I've seen has treated it as a big deal. It's being taken seriously, and I've seen soul-searching at every level from the individual up to the meta-organizational.I think there's a mistake that the average person tends to make, though, which is something like "if a plane crashes, something absolutely needs to visibly change."Often, when a plane crashes, something absolutely needs to change! Often there are legitimate flaws in the system that need to be patched.But sometimes, you just get that confluence of three one-in-a-thousand events. Sometimes, the right answer really is "our system shouldn't change; this is the exception."I'm not claiming that's the case here, at all. I'm mentioning it because:it is in fact sometimes true, and the outrage machine doesn't take that fact into accountit is a good reminder of the difference between improvement and improvement theater.There are actions you can take to look like you're taking things seriously, and there are actions you can take because you're actually taking things seriously.Public relations are also important, so some amount of the former category belongs in the latter category.But overall, I'm not interested in insisting that the individuals and organizations in the Effective Altruism sphere do things that look to me, from the outside, like sufficient due diligence, in response to this crisis.I want them to simply respond. To the best of their ability, in the ways that seem right to them.And I do, in fact, trust that that is happening. In part because of the glimpses I've caught, but also in part because that's just ... firmly in my model of these people and these orgs.Or to put it another way: the FTX thing was a blindside, and a negative update, but it was a negative update in the slack. It was the kind of out-of-distribution, surprisingly bad event that I sort of ... budget room for, on the meta level?I expect there to be one or two bad things here and there, when you're trying to coordinate thousands and thousands of individuals all pulling in different directions. This particular badness was in an extremely high-leverage place, and that sucks, but that feels more like bad luck than like ... "HOW DARE YOU ALL NOT HAVE PREDICTED THIS SORT OF THING AND ALSO BEEN FULLY ROBUST AGAINST ANY SUCH SHENANIGANERY AT ALL TIMES AND FROM ALL ANGLES."Some people are shouting stuff like that. But I think those people are being unreasonable, and not considering what a world with that level of vigilance actually looks like, in practice (hint: it looks like paralysis).I think that if a second disaster falls (or if a second looming disaster is uncovered and prevented before it actually breaks) then sure, I will owe those people an apology, and will make a p...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm as approving of the EA community now as before the FTX collapse, published by Will Aldred on December 15, 2022 on The Effective Altruism Forum.(Posting on behalf of and with permission from Duncan Sabien; the first person speaker is Duncan. Full text below.)Just took the Effective Altruism survey, and it had an extra, optional section that had a lot of questions about the FTX stuff, and trust, and how EA should respond, and what I think of it, and so forth.I'm not really an EA; haven't taken the pledge, don't work at an org, have been to fewer than twenty EA meetups in my life (though I've been a speaker at multiple EA Globals).However, I've been close to, and quite fond of, and at least a little protective of the EA community, for the past seven years (for instance, volunteering to speak at multiple EA Globals!).And the questions on the survey made me want to note publicly:I feel almost exactly as fond of, and approving of, and protective of, the EA community writ-large as I have for the past seven years.I get that the FTX thing is a big deal. Every EA I've seen has treated it as a big deal. It's being taken seriously, and I've seen soul-searching at every level from the individual up to the meta-organizational.I think there's a mistake that the average person tends to make, though, which is something like "if a plane crashes, something absolutely needs to visibly change."Often, when a plane crashes, something absolutely needs to change! Often there are legitimate flaws in the system that need to be patched.But sometimes, you just get that confluence of three one-in-a-thousand events. Sometimes, the right answer really is "our system shouldn't change; this is the exception."I'm not claiming that's the case here, at all. I'm mentioning it because:it is in fact sometimes true, and the outrage machine doesn't take that fact into accountit is a good reminder of the difference between improvement and improvement theater.There are actions you can take to look like you're taking things seriously, and there are actions you can take because you're actually taking things seriously.Public relations are also important, so some amount of the former category belongs in the latter category.But overall, I'm not interested in insisting that the individuals and organizations in the Effective Altruism sphere do things that look to me, from the outside, like sufficient due diligence, in response to this crisis.I want them to simply respond. To the best of their ability, in the ways that seem right to them.And I do, in fact, trust that that is happening. In part because of the glimpses I've caught, but also in part because that's just ... firmly in my model of these people and these orgs.Or to put it another way: the FTX thing was a blindside, and a negative update, but it was a negative update in the slack. It was the kind of out-of-distribution, surprisingly bad event that I sort of ... budget room for, on the meta level?I expect there to be one or two bad things here and there, when you're trying to coordinate thousands and thousands of individuals all pulling in different directions. This particular badness was in an extremely high-leverage place, and that sucks, but that feels more like bad luck than like ... "HOW DARE YOU ALL NOT HAVE PREDICTED THIS SORT OF THING AND ALSO BEEN FULLY ROBUST AGAINST ANY SUCH SHENANIGANERY AT ALL TIMES AND FROM ALL ANGLES."Some people are shouting stuff like that. But I think those people are being unreasonable, and not considering what a world with that level of vigilance actually looks like, in practice (hint: it looks like paralysis).I think that if a second disaster falls (or if a second looming disaster is uncovered and prevented before it actually breaks) then sure, I will owe those people an apology, and will make a p...]]>
Will Aldred https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:13 None full 4136
FFY8zfkQq3qdELyvm_EA EA - Consider working more hours and taking more stimulants by Arjun Panickssery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider working more hours and taking more stimulants, published by Arjun Panickssery on December 15, 2022 on The Effective Altruism Forum.Epistemic status: amphetaminesSooner or later, the great men turn out to be all alike. They never stop working. They never lose a minute. It is very depressing.V.S. PritchettEA produces much talk about an obligation to donate some or even most of your wealth. In both direct work and earning to give, there's a connection between your work productivity and your (direct or indirect) impact. Hard work is also a costly signal of commitment that could substitute for frugality in our less funding-constrained phase of the movement. And working incredibly hard increases the chance of tail successes that might generate very high impact.In the same way that you might want to attract converts by advancing a softer norm of donating only 10% of your income rather than everything above $40k, you might want to create a softer norm about productivity, and not feel bad about only following this norm. This post is addressed instead to those who haven't considered much at all the prospect of experimenting with working 60 hours a week rather than 30-40.Don't dismiss this option out of hand because of general concerns about burnout. There are multiple good reasons to think you should work much harder.First, the short-term optimal workweek might just be very long. Studies often find that CEOs work 50+ hours per week. Silicon Valley is very productive and has a "hustle culture" involving long work hours (see also). I agree with Lynette Bye that most of the working hours literature is poor—I'm even more skeptical than she is about agenda-driven research on Gilded Age factory workers—and that gaining an impression from anecdotes of top performers is better. Top performers in business routinely work long hours, and reading through lists of anecdotes like Daily Rituals (which is mostly writers and artists) you'll see a lot of strict routines, long hours, and stimulants of all kinds: caffeine, nicotine, amphetamines.High-performing managers in the EA ecosystem report working long hours:"I know there have been some pretty publicized debates recently in Silicon Valley about whether you can have it all and be successful and have balance, or whether you really do need to almost work yourself to death in order to accomplish something big. At least in our mind and our experience with our company is that sometimes if you want to do something that is very challenging and that you think will have a big impact and hasn’t been done before the reality is you just have to run and work harder and faster than anyone else and I think that’s a very real thing for us. We have gone all-in on this." -Theresa Condor, COO at Spire Global"Yeah, so the limiting factor is I only want to work about 55 hours a week or something like that at the most. And so maybe I'll experiment in pushing that up to 60. But somewhere between 45 and 60 hours a week. That is hours in the office, and that transfers into Toggl hours at a rate of something like 75% to 85% just because of pee breaks and chatting to people and eating and stuff like that. And so that translates into around 40 Toggl hours, like 35 to 40 Toggl hours a week. And then, then the fraction of those that are spent on key priorities. The rest of it is meetings with people and emails, which are the two biggest sucks of time. Also internal comms, which is checking Slack or recording my goals and stuff like that." -Niel Bowerman, Director of Special Projects at 80,000 HoursHowever, productive hours look more limited for certain types of cognitive or intellectual work:"I've done several different kinds of work, and the limits were different for each. My limit for the harder types of writing or programming is about five hours ...]]>
Arjun Panickssery https://forum.effectivealtruism.org/posts/FFY8zfkQq3qdELyvm/consider-working-more-hours-and-taking-more-stimulants-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider working more hours and taking more stimulants, published by Arjun Panickssery on December 15, 2022 on The Effective Altruism Forum.Epistemic status: amphetaminesSooner or later, the great men turn out to be all alike. They never stop working. They never lose a minute. It is very depressing.V.S. PritchettEA produces much talk about an obligation to donate some or even most of your wealth. In both direct work and earning to give, there's a connection between your work productivity and your (direct or indirect) impact. Hard work is also a costly signal of commitment that could substitute for frugality in our less funding-constrained phase of the movement. And working incredibly hard increases the chance of tail successes that might generate very high impact.In the same way that you might want to attract converts by advancing a softer norm of donating only 10% of your income rather than everything above $40k, you might want to create a softer norm about productivity, and not feel bad about only following this norm. This post is addressed instead to those who haven't considered much at all the prospect of experimenting with working 60 hours a week rather than 30-40.Don't dismiss this option out of hand because of general concerns about burnout. There are multiple good reasons to think you should work much harder.First, the short-term optimal workweek might just be very long. Studies often find that CEOs work 50+ hours per week. Silicon Valley is very productive and has a "hustle culture" involving long work hours (see also). I agree with Lynette Bye that most of the working hours literature is poor—I'm even more skeptical than she is about agenda-driven research on Gilded Age factory workers—and that gaining an impression from anecdotes of top performers is better. Top performers in business routinely work long hours, and reading through lists of anecdotes like Daily Rituals (which is mostly writers and artists) you'll see a lot of strict routines, long hours, and stimulants of all kinds: caffeine, nicotine, amphetamines.High-performing managers in the EA ecosystem report working long hours:"I know there have been some pretty publicized debates recently in Silicon Valley about whether you can have it all and be successful and have balance, or whether you really do need to almost work yourself to death in order to accomplish something big. At least in our mind and our experience with our company is that sometimes if you want to do something that is very challenging and that you think will have a big impact and hasn’t been done before the reality is you just have to run and work harder and faster than anyone else and I think that’s a very real thing for us. We have gone all-in on this." -Theresa Condor, COO at Spire Global"Yeah, so the limiting factor is I only want to work about 55 hours a week or something like that at the most. And so maybe I'll experiment in pushing that up to 60. But somewhere between 45 and 60 hours a week. That is hours in the office, and that transfers into Toggl hours at a rate of something like 75% to 85% just because of pee breaks and chatting to people and eating and stuff like that. And so that translates into around 40 Toggl hours, like 35 to 40 Toggl hours a week. And then, then the fraction of those that are spent on key priorities. The rest of it is meetings with people and emails, which are the two biggest sucks of time. Also internal comms, which is checking Slack or recording my goals and stuff like that." -Niel Bowerman, Director of Special Projects at 80,000 HoursHowever, productive hours look more limited for certain types of cognitive or intellectual work:"I've done several different kinds of work, and the limits were different for each. My limit for the harder types of writing or programming is about five hours ...]]>
Thu, 15 Dec 2022 22:47:38 +0000 EA - Consider working more hours and taking more stimulants by Arjun Panickssery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider working more hours and taking more stimulants, published by Arjun Panickssery on December 15, 2022 on The Effective Altruism Forum.Epistemic status: amphetaminesSooner or later, the great men turn out to be all alike. They never stop working. They never lose a minute. It is very depressing.V.S. PritchettEA produces much talk about an obligation to donate some or even most of your wealth. In both direct work and earning to give, there's a connection between your work productivity and your (direct or indirect) impact. Hard work is also a costly signal of commitment that could substitute for frugality in our less funding-constrained phase of the movement. And working incredibly hard increases the chance of tail successes that might generate very high impact.In the same way that you might want to attract converts by advancing a softer norm of donating only 10% of your income rather than everything above $40k, you might want to create a softer norm about productivity, and not feel bad about only following this norm. This post is addressed instead to those who haven't considered much at all the prospect of experimenting with working 60 hours a week rather than 30-40.Don't dismiss this option out of hand because of general concerns about burnout. There are multiple good reasons to think you should work much harder.First, the short-term optimal workweek might just be very long. Studies often find that CEOs work 50+ hours per week. Silicon Valley is very productive and has a "hustle culture" involving long work hours (see also). I agree with Lynette Bye that most of the working hours literature is poor—I'm even more skeptical than she is about agenda-driven research on Gilded Age factory workers—and that gaining an impression from anecdotes of top performers is better. Top performers in business routinely work long hours, and reading through lists of anecdotes like Daily Rituals (which is mostly writers and artists) you'll see a lot of strict routines, long hours, and stimulants of all kinds: caffeine, nicotine, amphetamines.High-performing managers in the EA ecosystem report working long hours:"I know there have been some pretty publicized debates recently in Silicon Valley about whether you can have it all and be successful and have balance, or whether you really do need to almost work yourself to death in order to accomplish something big. At least in our mind and our experience with our company is that sometimes if you want to do something that is very challenging and that you think will have a big impact and hasn’t been done before the reality is you just have to run and work harder and faster than anyone else and I think that’s a very real thing for us. We have gone all-in on this." -Theresa Condor, COO at Spire Global"Yeah, so the limiting factor is I only want to work about 55 hours a week or something like that at the most. And so maybe I'll experiment in pushing that up to 60. But somewhere between 45 and 60 hours a week. That is hours in the office, and that transfers into Toggl hours at a rate of something like 75% to 85% just because of pee breaks and chatting to people and eating and stuff like that. And so that translates into around 40 Toggl hours, like 35 to 40 Toggl hours a week. And then, then the fraction of those that are spent on key priorities. The rest of it is meetings with people and emails, which are the two biggest sucks of time. Also internal comms, which is checking Slack or recording my goals and stuff like that." -Niel Bowerman, Director of Special Projects at 80,000 HoursHowever, productive hours look more limited for certain types of cognitive or intellectual work:"I've done several different kinds of work, and the limits were different for each. My limit for the harder types of writing or programming is about five hours ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider working more hours and taking more stimulants, published by Arjun Panickssery on December 15, 2022 on The Effective Altruism Forum.Epistemic status: amphetaminesSooner or later, the great men turn out to be all alike. They never stop working. They never lose a minute. It is very depressing.V.S. PritchettEA produces much talk about an obligation to donate some or even most of your wealth. In both direct work and earning to give, there's a connection between your work productivity and your (direct or indirect) impact. Hard work is also a costly signal of commitment that could substitute for frugality in our less funding-constrained phase of the movement. And working incredibly hard increases the chance of tail successes that might generate very high impact.In the same way that you might want to attract converts by advancing a softer norm of donating only 10% of your income rather than everything above $40k, you might want to create a softer norm about productivity, and not feel bad about only following this norm. This post is addressed instead to those who haven't considered much at all the prospect of experimenting with working 60 hours a week rather than 30-40.Don't dismiss this option out of hand because of general concerns about burnout. There are multiple good reasons to think you should work much harder.First, the short-term optimal workweek might just be very long. Studies often find that CEOs work 50+ hours per week. Silicon Valley is very productive and has a "hustle culture" involving long work hours (see also). I agree with Lynette Bye that most of the working hours literature is poor—I'm even more skeptical than she is about agenda-driven research on Gilded Age factory workers—and that gaining an impression from anecdotes of top performers is better. Top performers in business routinely work long hours, and reading through lists of anecdotes like Daily Rituals (which is mostly writers and artists) you'll see a lot of strict routines, long hours, and stimulants of all kinds: caffeine, nicotine, amphetamines.High-performing managers in the EA ecosystem report working long hours:"I know there have been some pretty publicized debates recently in Silicon Valley about whether you can have it all and be successful and have balance, or whether you really do need to almost work yourself to death in order to accomplish something big. At least in our mind and our experience with our company is that sometimes if you want to do something that is very challenging and that you think will have a big impact and hasn’t been done before the reality is you just have to run and work harder and faster than anyone else and I think that’s a very real thing for us. We have gone all-in on this." -Theresa Condor, COO at Spire Global"Yeah, so the limiting factor is I only want to work about 55 hours a week or something like that at the most. And so maybe I'll experiment in pushing that up to 60. But somewhere between 45 and 60 hours a week. That is hours in the office, and that transfers into Toggl hours at a rate of something like 75% to 85% just because of pee breaks and chatting to people and eating and stuff like that. And so that translates into around 40 Toggl hours, like 35 to 40 Toggl hours a week. And then, then the fraction of those that are spent on key priorities. The rest of it is meetings with people and emails, which are the two biggest sucks of time. Also internal comms, which is checking Slack or recording my goals and stuff like that." -Niel Bowerman, Director of Special Projects at 80,000 HoursHowever, productive hours look more limited for certain types of cognitive or intellectual work:"I've done several different kinds of work, and the limits were different for each. My limit for the harder types of writing or programming is about five hours ...]]>
Arjun Panickssery https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:38 None full 4130
2AiuvYoozXeHBGnhd_EA EA - The next decades might be wild by mariushobbhahn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The next decades might be wild, published by mariushobbhahn on December 15, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
mariushobbhahn https://forum.effectivealtruism.org/posts/2AiuvYoozXeHBGnhd/the-next-decades-might-be-wild Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The next decades might be wild, published by mariushobbhahn on December 15, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 15 Dec 2022 20:32:34 +0000 EA - The next decades might be wild by mariushobbhahn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The next decades might be wild, published by mariushobbhahn on December 15, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The next decades might be wild, published by mariushobbhahn on December 15, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
mariushobbhahn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:26 None full 4132
NFhELFno7ScuCxXMY_EA EA - The winners of the Change Our Mind Contest—and some reflections by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The winners of the Change Our Mind Contest—and some reflections, published by GiveWell on December 15, 2022 on The Effective Altruism Forum.Author: Isabel Arjmand, GiveWell Special Projects OfficerIn September, we announced the Change Our Mind Contest for critiques of our cost-effectiveness analyses. Today, we're excited to announce the winners!We're very grateful that so many people engaged deeply with our work. This contest was GiveWell's most successful effort so far to solicit external criticism from the public, and it wouldn't have been possible without the participation of people who share our goal of allocating funding to cost-effective programs.Overall, we received 49 entries engaging with our prompts. We were very happy with the quality of entries we received—their authors brought a great deal of thought and expertise to engaging with our cost-effectiveness analyses.Because we were impressed by the quality of entries, we've decided to award two first-place prizes and eight honorable mentions. (We stated in September that we would give a minimum of one first-place, one runner-up, and one honorable mention prize.) We also awarded $20,000 to the piece of criticism that inspired this contest.Winners are listed below, followed by our reflections on this contest and responses to the winners.The prize-winnersGiven the overall quality of the entries we received, selecting a set of winners required a lot of deliberation.We're still in the process of determining which critiques to incorporate into our cost-effectiveness analyses and to what extent they'll change the bottom line; we don't agree with all the critiques in the first-place and honorable mention entries, but each prize-winner raised issues that we believe were worth considering. In several cases, we plan to further investigate the questions raised by these entries.Within categories, the winners are listed alphabetically by the last name of the author who submitted the entry.First-place prizes – $20,000 each[1]Noah Haber for "GiveWell's Uncertainty Problem." The author argues that without properly accounting for uncertainty, GiveWell is likely to allocate its portfolio of funding suboptimally, and proposes methods for addressing uncertainty.Matthew Romer and Paul Romer Present for "An Examination of GiveWell’s Water Quality Intervention Cost-Effectiveness Analysis." The authors suggest several changes to GiveWell's analysis of water chlorination programs, which overall make Dispensers for Safe Water's program appear less cost-effective.To give a general sense of the magnitude of the changes we currently anticipate, our best guess is that Matthew Romer and Paul Romer Present's entry will change our estimate of the cost-effectiveness of Dispensers for Safe Water by very roughly 5 to 10% and that Noah Haber's entry may lead to an overall shift in how we account for uncertainty (but it's too early to say how it would impact any given intervention). Overall, we currently expect that entries to the contest may shift the allocation of resources between programs but are unlikely to lead to us adding or removing any programs from our list of recommended charities.Honorable mentions – $5,000 eachAlex Bates for a critical review of GiveWell's 2022 cost-effectiveness modelDr. Samantha Field and Dr. Yannish Naik for "A critique of GiveWell’s CEA model for Conditional Cash Transfers for vaccination in Nigeria (New Incentives)"Akash Kulgod for "Cost-effectiveness of iron fortification in India is lower than GiveWell's estimates"Sam Nolan, Hannah Rokebrand, and Tanae Rao for "Quantifying Uncertainty in GiveWell Cost-Effectiveness Analyses"Isobel Phillips for "Improving GiveWell's modelling of insecticide resistance may change their cost per life saved for AMF by up to 20%"Tanae Rao and Ricky Huang for "...]]>
GiveWell https://forum.effectivealtruism.org/posts/NFhELFno7ScuCxXMY/the-winners-of-the-change-our-mind-contest-and-some Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The winners of the Change Our Mind Contest—and some reflections, published by GiveWell on December 15, 2022 on The Effective Altruism Forum.Author: Isabel Arjmand, GiveWell Special Projects OfficerIn September, we announced the Change Our Mind Contest for critiques of our cost-effectiveness analyses. Today, we're excited to announce the winners!We're very grateful that so many people engaged deeply with our work. This contest was GiveWell's most successful effort so far to solicit external criticism from the public, and it wouldn't have been possible without the participation of people who share our goal of allocating funding to cost-effective programs.Overall, we received 49 entries engaging with our prompts. We were very happy with the quality of entries we received—their authors brought a great deal of thought and expertise to engaging with our cost-effectiveness analyses.Because we were impressed by the quality of entries, we've decided to award two first-place prizes and eight honorable mentions. (We stated in September that we would give a minimum of one first-place, one runner-up, and one honorable mention prize.) We also awarded $20,000 to the piece of criticism that inspired this contest.Winners are listed below, followed by our reflections on this contest and responses to the winners.The prize-winnersGiven the overall quality of the entries we received, selecting a set of winners required a lot of deliberation.We're still in the process of determining which critiques to incorporate into our cost-effectiveness analyses and to what extent they'll change the bottom line; we don't agree with all the critiques in the first-place and honorable mention entries, but each prize-winner raised issues that we believe were worth considering. In several cases, we plan to further investigate the questions raised by these entries.Within categories, the winners are listed alphabetically by the last name of the author who submitted the entry.First-place prizes – $20,000 each[1]Noah Haber for "GiveWell's Uncertainty Problem." The author argues that without properly accounting for uncertainty, GiveWell is likely to allocate its portfolio of funding suboptimally, and proposes methods for addressing uncertainty.Matthew Romer and Paul Romer Present for "An Examination of GiveWell’s Water Quality Intervention Cost-Effectiveness Analysis." The authors suggest several changes to GiveWell's analysis of water chlorination programs, which overall make Dispensers for Safe Water's program appear less cost-effective.To give a general sense of the magnitude of the changes we currently anticipate, our best guess is that Matthew Romer and Paul Romer Present's entry will change our estimate of the cost-effectiveness of Dispensers for Safe Water by very roughly 5 to 10% and that Noah Haber's entry may lead to an overall shift in how we account for uncertainty (but it's too early to say how it would impact any given intervention). Overall, we currently expect that entries to the contest may shift the allocation of resources between programs but are unlikely to lead to us adding or removing any programs from our list of recommended charities.Honorable mentions – $5,000 eachAlex Bates for a critical review of GiveWell's 2022 cost-effectiveness modelDr. Samantha Field and Dr. Yannish Naik for "A critique of GiveWell’s CEA model for Conditional Cash Transfers for vaccination in Nigeria (New Incentives)"Akash Kulgod for "Cost-effectiveness of iron fortification in India is lower than GiveWell's estimates"Sam Nolan, Hannah Rokebrand, and Tanae Rao for "Quantifying Uncertainty in GiveWell Cost-Effectiveness Analyses"Isobel Phillips for "Improving GiveWell's modelling of insecticide resistance may change their cost per life saved for AMF by up to 20%"Tanae Rao and Ricky Huang for "...]]>
Thu, 15 Dec 2022 20:24:04 +0000 EA - The winners of the Change Our Mind Contest—and some reflections by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The winners of the Change Our Mind Contest—and some reflections, published by GiveWell on December 15, 2022 on The Effective Altruism Forum.Author: Isabel Arjmand, GiveWell Special Projects OfficerIn September, we announced the Change Our Mind Contest for critiques of our cost-effectiveness analyses. Today, we're excited to announce the winners!We're very grateful that so many people engaged deeply with our work. This contest was GiveWell's most successful effort so far to solicit external criticism from the public, and it wouldn't have been possible without the participation of people who share our goal of allocating funding to cost-effective programs.Overall, we received 49 entries engaging with our prompts. We were very happy with the quality of entries we received—their authors brought a great deal of thought and expertise to engaging with our cost-effectiveness analyses.Because we were impressed by the quality of entries, we've decided to award two first-place prizes and eight honorable mentions. (We stated in September that we would give a minimum of one first-place, one runner-up, and one honorable mention prize.) We also awarded $20,000 to the piece of criticism that inspired this contest.Winners are listed below, followed by our reflections on this contest and responses to the winners.The prize-winnersGiven the overall quality of the entries we received, selecting a set of winners required a lot of deliberation.We're still in the process of determining which critiques to incorporate into our cost-effectiveness analyses and to what extent they'll change the bottom line; we don't agree with all the critiques in the first-place and honorable mention entries, but each prize-winner raised issues that we believe were worth considering. In several cases, we plan to further investigate the questions raised by these entries.Within categories, the winners are listed alphabetically by the last name of the author who submitted the entry.First-place prizes – $20,000 each[1]Noah Haber for "GiveWell's Uncertainty Problem." The author argues that without properly accounting for uncertainty, GiveWell is likely to allocate its portfolio of funding suboptimally, and proposes methods for addressing uncertainty.Matthew Romer and Paul Romer Present for "An Examination of GiveWell’s Water Quality Intervention Cost-Effectiveness Analysis." The authors suggest several changes to GiveWell's analysis of water chlorination programs, which overall make Dispensers for Safe Water's program appear less cost-effective.To give a general sense of the magnitude of the changes we currently anticipate, our best guess is that Matthew Romer and Paul Romer Present's entry will change our estimate of the cost-effectiveness of Dispensers for Safe Water by very roughly 5 to 10% and that Noah Haber's entry may lead to an overall shift in how we account for uncertainty (but it's too early to say how it would impact any given intervention). Overall, we currently expect that entries to the contest may shift the allocation of resources between programs but are unlikely to lead to us adding or removing any programs from our list of recommended charities.Honorable mentions – $5,000 eachAlex Bates for a critical review of GiveWell's 2022 cost-effectiveness modelDr. Samantha Field and Dr. Yannish Naik for "A critique of GiveWell’s CEA model for Conditional Cash Transfers for vaccination in Nigeria (New Incentives)"Akash Kulgod for "Cost-effectiveness of iron fortification in India is lower than GiveWell's estimates"Sam Nolan, Hannah Rokebrand, and Tanae Rao for "Quantifying Uncertainty in GiveWell Cost-Effectiveness Analyses"Isobel Phillips for "Improving GiveWell's modelling of insecticide resistance may change their cost per life saved for AMF by up to 20%"Tanae Rao and Ricky Huang for "...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The winners of the Change Our Mind Contest—and some reflections, published by GiveWell on December 15, 2022 on The Effective Altruism Forum.Author: Isabel Arjmand, GiveWell Special Projects OfficerIn September, we announced the Change Our Mind Contest for critiques of our cost-effectiveness analyses. Today, we're excited to announce the winners!We're very grateful that so many people engaged deeply with our work. This contest was GiveWell's most successful effort so far to solicit external criticism from the public, and it wouldn't have been possible without the participation of people who share our goal of allocating funding to cost-effective programs.Overall, we received 49 entries engaging with our prompts. We were very happy with the quality of entries we received—their authors brought a great deal of thought and expertise to engaging with our cost-effectiveness analyses.Because we were impressed by the quality of entries, we've decided to award two first-place prizes and eight honorable mentions. (We stated in September that we would give a minimum of one first-place, one runner-up, and one honorable mention prize.) We also awarded $20,000 to the piece of criticism that inspired this contest.Winners are listed below, followed by our reflections on this contest and responses to the winners.The prize-winnersGiven the overall quality of the entries we received, selecting a set of winners required a lot of deliberation.We're still in the process of determining which critiques to incorporate into our cost-effectiveness analyses and to what extent they'll change the bottom line; we don't agree with all the critiques in the first-place and honorable mention entries, but each prize-winner raised issues that we believe were worth considering. In several cases, we plan to further investigate the questions raised by these entries.Within categories, the winners are listed alphabetically by the last name of the author who submitted the entry.First-place prizes – $20,000 each[1]Noah Haber for "GiveWell's Uncertainty Problem." The author argues that without properly accounting for uncertainty, GiveWell is likely to allocate its portfolio of funding suboptimally, and proposes methods for addressing uncertainty.Matthew Romer and Paul Romer Present for "An Examination of GiveWell’s Water Quality Intervention Cost-Effectiveness Analysis." The authors suggest several changes to GiveWell's analysis of water chlorination programs, which overall make Dispensers for Safe Water's program appear less cost-effective.To give a general sense of the magnitude of the changes we currently anticipate, our best guess is that Matthew Romer and Paul Romer Present's entry will change our estimate of the cost-effectiveness of Dispensers for Safe Water by very roughly 5 to 10% and that Noah Haber's entry may lead to an overall shift in how we account for uncertainty (but it's too early to say how it would impact any given intervention). Overall, we currently expect that entries to the contest may shift the allocation of resources between programs but are unlikely to lead to us adding or removing any programs from our list of recommended charities.Honorable mentions – $5,000 eachAlex Bates for a critical review of GiveWell's 2022 cost-effectiveness modelDr. Samantha Field and Dr. Yannish Naik for "A critique of GiveWell’s CEA model for Conditional Cash Transfers for vaccination in Nigeria (New Incentives)"Akash Kulgod for "Cost-effectiveness of iron fortification in India is lower than GiveWell's estimates"Sam Nolan, Hannah Rokebrand, and Tanae Rao for "Quantifying Uncertainty in GiveWell Cost-Effectiveness Analyses"Isobel Phillips for "Improving GiveWell's modelling of insecticide resistance may change their cost per life saved for AMF by up to 20%"Tanae Rao and Ricky Huang for "...]]>
GiveWell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:45 None full 4131
4iLeA9uwdAqXS3Jpc_EA EA - The case for transparent spending by Jeroen W Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The case for transparent spending, published by Jeroen W on December 15, 2022 on The Effective Altruism Forum.ContextRecently, the purchase of Wytham Abbey came in the spotlight: The purchase by Effective Ventures of a (15 million pound?) manor house. On Twitter and on the forum, people have held extensive discussions on whether the expense was justified. I personally was very surprised, which is why I made a forum post requesting an explanation. (Currently, I’m still slightly sceptical but I’m happy I now understand the reasoning behind it a lot better.)Because of the purchase, the idea that Effective Ventures/CEA should be more transparent about large expenses has been brought up more often. I agree with that idea, and try to defend it here.I added footnotes/links when I have a source for the (counter) argument. They may have been made regarding a different formulation of my proposal or a slightly different proposal, but are nonetheless worth mentioning. I sometimes rewrote arguments and may have misunderstood their original meaning, so don’t interpret my rewritings as the original authors’ intentions. I just want to credit where I got the ideas from. I have no experience at all with funding projects, so there might be practical things I’ve overlooked.My proposalAll spending above a certain threshold by EA organisations should be publicly explainedThe threshold should be high enough so it doesn’t take away too much time from grantmakers/employees (ex. $500k, 1 million pounds)The more concerned people might be about the expense, or the more influential the expense is, the more time and effort should be put into the explanationThe public explanation should be some kind of cost-benefit analysis and clearly state the reasons for the purchase including positives, potential negatives and counterfactuals in layman's terms. Precise numbers are admirable but not always necessary. “Worst case” and “best case” scenarios with some kind of probability distribution might be helpful.The explanations should be published within a reasonable timeframe (ex. within a month after the purchase/grant is made, every quarter,...). This timeframe should be made clear so that people can expect when to get an explanation of something. The sooner after the purchase, the better.The EA organisations could be: Effective Ventures and the organisations that fall under it (CEA, 80,000 Hours, Forethought Foundation, EA Funds, Giving What We Can, The Centre for the Governance of AI, Longview Philanthropy, asterisk, non-trivial, BlueDot Impact), Open Philanthropy, GiveWell, Animal Charity Evaluators,... I think the case is the strongest for Effective Ventures and CEA since they represent the EA movement, so they should be held to the highest standards.I haven’t reviewed every organisation. Some might already do a great job. For example, I get the impression that GiveWell and Open Philanthropy do a better job at explaining grants than EA Funds does (except for EAIF they often only use one sentence, even with million dollar grants).I’m highly confident some variation of my proposal should be done if the grant is made using individual/small donations, I think it’s reasonable to claim you owe explanations to your donors. The case is less strong when the donor of the grant is just one or two people (like with Wytham Abbey/Open Philanthropy), but I’m still quite confident it’s important in those cases too. You may not owe an explanation to individual/small donors, but rather to the EA community as a whole.Should this be some kind of official rule every EA organisation must follow? No, but I’d be happy to see some organisations (especially Effective Ventures/CEA) try it out. Different organisations can try different thresholds and use different rules for their public explanations.Similar propos...]]>
Jeroen W https://forum.effectivealtruism.org/posts/4iLeA9uwdAqXS3Jpc/the-case-for-transparent-spending Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The case for transparent spending, published by Jeroen W on December 15, 2022 on The Effective Altruism Forum.ContextRecently, the purchase of Wytham Abbey came in the spotlight: The purchase by Effective Ventures of a (15 million pound?) manor house. On Twitter and on the forum, people have held extensive discussions on whether the expense was justified. I personally was very surprised, which is why I made a forum post requesting an explanation. (Currently, I’m still slightly sceptical but I’m happy I now understand the reasoning behind it a lot better.)Because of the purchase, the idea that Effective Ventures/CEA should be more transparent about large expenses has been brought up more often. I agree with that idea, and try to defend it here.I added footnotes/links when I have a source for the (counter) argument. They may have been made regarding a different formulation of my proposal or a slightly different proposal, but are nonetheless worth mentioning. I sometimes rewrote arguments and may have misunderstood their original meaning, so don’t interpret my rewritings as the original authors’ intentions. I just want to credit where I got the ideas from. I have no experience at all with funding projects, so there might be practical things I’ve overlooked.My proposalAll spending above a certain threshold by EA organisations should be publicly explainedThe threshold should be high enough so it doesn’t take away too much time from grantmakers/employees (ex. $500k, 1 million pounds)The more concerned people might be about the expense, or the more influential the expense is, the more time and effort should be put into the explanationThe public explanation should be some kind of cost-benefit analysis and clearly state the reasons for the purchase including positives, potential negatives and counterfactuals in layman's terms. Precise numbers are admirable but not always necessary. “Worst case” and “best case” scenarios with some kind of probability distribution might be helpful.The explanations should be published within a reasonable timeframe (ex. within a month after the purchase/grant is made, every quarter,...). This timeframe should be made clear so that people can expect when to get an explanation of something. The sooner after the purchase, the better.The EA organisations could be: Effective Ventures and the organisations that fall under it (CEA, 80,000 Hours, Forethought Foundation, EA Funds, Giving What We Can, The Centre for the Governance of AI, Longview Philanthropy, asterisk, non-trivial, BlueDot Impact), Open Philanthropy, GiveWell, Animal Charity Evaluators,... I think the case is the strongest for Effective Ventures and CEA since they represent the EA movement, so they should be held to the highest standards.I haven’t reviewed every organisation. Some might already do a great job. For example, I get the impression that GiveWell and Open Philanthropy do a better job at explaining grants than EA Funds does (except for EAIF they often only use one sentence, even with million dollar grants).I’m highly confident some variation of my proposal should be done if the grant is made using individual/small donations, I think it’s reasonable to claim you owe explanations to your donors. The case is less strong when the donor of the grant is just one or two people (like with Wytham Abbey/Open Philanthropy), but I’m still quite confident it’s important in those cases too. You may not owe an explanation to individual/small donors, but rather to the EA community as a whole.Should this be some kind of official rule every EA organisation must follow? No, but I’d be happy to see some organisations (especially Effective Ventures/CEA) try it out. Different organisations can try different thresholds and use different rules for their public explanations.Similar propos...]]>
Thu, 15 Dec 2022 20:14:06 +0000 EA - The case for transparent spending by Jeroen W Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The case for transparent spending, published by Jeroen W on December 15, 2022 on The Effective Altruism Forum.ContextRecently, the purchase of Wytham Abbey came in the spotlight: The purchase by Effective Ventures of a (15 million pound?) manor house. On Twitter and on the forum, people have held extensive discussions on whether the expense was justified. I personally was very surprised, which is why I made a forum post requesting an explanation. (Currently, I’m still slightly sceptical but I’m happy I now understand the reasoning behind it a lot better.)Because of the purchase, the idea that Effective Ventures/CEA should be more transparent about large expenses has been brought up more often. I agree with that idea, and try to defend it here.I added footnotes/links when I have a source for the (counter) argument. They may have been made regarding a different formulation of my proposal or a slightly different proposal, but are nonetheless worth mentioning. I sometimes rewrote arguments and may have misunderstood their original meaning, so don’t interpret my rewritings as the original authors’ intentions. I just want to credit where I got the ideas from. I have no experience at all with funding projects, so there might be practical things I’ve overlooked.My proposalAll spending above a certain threshold by EA organisations should be publicly explainedThe threshold should be high enough so it doesn’t take away too much time from grantmakers/employees (ex. $500k, 1 million pounds)The more concerned people might be about the expense, or the more influential the expense is, the more time and effort should be put into the explanationThe public explanation should be some kind of cost-benefit analysis and clearly state the reasons for the purchase including positives, potential negatives and counterfactuals in layman's terms. Precise numbers are admirable but not always necessary. “Worst case” and “best case” scenarios with some kind of probability distribution might be helpful.The explanations should be published within a reasonable timeframe (ex. within a month after the purchase/grant is made, every quarter,...). This timeframe should be made clear so that people can expect when to get an explanation of something. The sooner after the purchase, the better.The EA organisations could be: Effective Ventures and the organisations that fall under it (CEA, 80,000 Hours, Forethought Foundation, EA Funds, Giving What We Can, The Centre for the Governance of AI, Longview Philanthropy, asterisk, non-trivial, BlueDot Impact), Open Philanthropy, GiveWell, Animal Charity Evaluators,... I think the case is the strongest for Effective Ventures and CEA since they represent the EA movement, so they should be held to the highest standards.I haven’t reviewed every organisation. Some might already do a great job. For example, I get the impression that GiveWell and Open Philanthropy do a better job at explaining grants than EA Funds does (except for EAIF they often only use one sentence, even with million dollar grants).I’m highly confident some variation of my proposal should be done if the grant is made using individual/small donations, I think it’s reasonable to claim you owe explanations to your donors. The case is less strong when the donor of the grant is just one or two people (like with Wytham Abbey/Open Philanthropy), but I’m still quite confident it’s important in those cases too. You may not owe an explanation to individual/small donors, but rather to the EA community as a whole.Should this be some kind of official rule every EA organisation must follow? No, but I’d be happy to see some organisations (especially Effective Ventures/CEA) try it out. Different organisations can try different thresholds and use different rules for their public explanations.Similar propos...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The case for transparent spending, published by Jeroen W on December 15, 2022 on The Effective Altruism Forum.ContextRecently, the purchase of Wytham Abbey came in the spotlight: The purchase by Effective Ventures of a (15 million pound?) manor house. On Twitter and on the forum, people have held extensive discussions on whether the expense was justified. I personally was very surprised, which is why I made a forum post requesting an explanation. (Currently, I’m still slightly sceptical but I’m happy I now understand the reasoning behind it a lot better.)Because of the purchase, the idea that Effective Ventures/CEA should be more transparent about large expenses has been brought up more often. I agree with that idea, and try to defend it here.I added footnotes/links when I have a source for the (counter) argument. They may have been made regarding a different formulation of my proposal or a slightly different proposal, but are nonetheless worth mentioning. I sometimes rewrote arguments and may have misunderstood their original meaning, so don’t interpret my rewritings as the original authors’ intentions. I just want to credit where I got the ideas from. I have no experience at all with funding projects, so there might be practical things I’ve overlooked.My proposalAll spending above a certain threshold by EA organisations should be publicly explainedThe threshold should be high enough so it doesn’t take away too much time from grantmakers/employees (ex. $500k, 1 million pounds)The more concerned people might be about the expense, or the more influential the expense is, the more time and effort should be put into the explanationThe public explanation should be some kind of cost-benefit analysis and clearly state the reasons for the purchase including positives, potential negatives and counterfactuals in layman's terms. Precise numbers are admirable but not always necessary. “Worst case” and “best case” scenarios with some kind of probability distribution might be helpful.The explanations should be published within a reasonable timeframe (ex. within a month after the purchase/grant is made, every quarter,...). This timeframe should be made clear so that people can expect when to get an explanation of something. The sooner after the purchase, the better.The EA organisations could be: Effective Ventures and the organisations that fall under it (CEA, 80,000 Hours, Forethought Foundation, EA Funds, Giving What We Can, The Centre for the Governance of AI, Longview Philanthropy, asterisk, non-trivial, BlueDot Impact), Open Philanthropy, GiveWell, Animal Charity Evaluators,... I think the case is the strongest for Effective Ventures and CEA since they represent the EA movement, so they should be held to the highest standards.I haven’t reviewed every organisation. Some might already do a great job. For example, I get the impression that GiveWell and Open Philanthropy do a better job at explaining grants than EA Funds does (except for EAIF they often only use one sentence, even with million dollar grants).I’m highly confident some variation of my proposal should be done if the grant is made using individual/small donations, I think it’s reasonable to claim you owe explanations to your donors. The case is less strong when the donor of the grant is just one or two people (like with Wytham Abbey/Open Philanthropy), but I’m still quite confident it’s important in those cases too. You may not owe an explanation to individual/small donors, but rather to the EA community as a whole.Should this be some kind of official rule every EA organisation must follow? No, but I’d be happy to see some organisations (especially Effective Ventures/CEA) try it out. Different organisations can try different thresholds and use different rules for their public explanations.Similar propos...]]>
Jeroen W https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:51 None full 4138
Hyco4iMbL6phJwCH2_EA EA - EA career guide for people from LMICs by Surbhi B Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA career guide for people from LMICs, published by Surbhi B on December 15, 2022 on The Effective Altruism Forum.Executive SummaryIndividuals from Low and Middle Income Countries (LMICs) engaging with EA often find that existing EA career advice does not address various frequently arising questions and challenges. This post attempts to address that gap by sharing the tentative outcomes of discussions between the authors (who are all from LMICs) on the pros and cons for various career paths.We hope that this guide will serve as a tool for individuals in LMICs to prioritize career paths. This post may be particularly useful as material for localized introductory or in depth fellowships.The advice for paths to impact depends on the individual, but broadly speaking, we recommend EAs, and especially those from LMICs, to:Build career capital early onWork on top global issues instead of local ones, unless there are clear impact-related reasons to opt for the latterSome Individuals -we further discuss who and when- may have impact by doing:local community buildinglocal priorities research and charity-related activitieslocal career advisingThere are many other paths the authors did not explore in-depth but consider promising, such as LMIC-relevant content creation to research on regional comparative advantages in addressing global prioritiesFor each path, the authors discuss pros and cons and suggest actionable next steps (lots of them!)Some general advice around how to think about paths to impact:Assess and prioritize paths by the problem’s scale / neglectedness / tractability, the incremental value you can contribute, and your personal fit.Consider global priorities first. Look for local comparative advantage.New EAs with a bias towards acting now risk subpar outcomes by hastily jumping into direct work, so instead first upskill and engage deeply with (and critique!) EA ideas.Note that this post is not meant to be the final word on how EAs from LMICs should think about career paths, but rather a conversation-starter and call-to-action to improve LMIC-specific diversity in EA. It builds upon excellent prior work by several LMIC groups and community builders, which the authors have compiled in this resource bank.AcknowledgementsThanks to Alejandro Acelas, Agustin Covarrubias, Claudette Salinas, Siobhan McDonough, Vaishnav Sunil, and Yi-Yang Chua for all of your feedback on previous drafts.IntroductionEngaging with EA when you come from a Low or Middle Income Country (LMIC) raises some challenges that might not be obvious to the current EA majority, composed of people from higher income countries, often in the Western world. EA principles framed from a high-income audience can fall flat in contexts where individuals lack the ability to donate, don’t have access to large EA networks for continued engagement, and where EA principles may be at odds with typical cultural norms and histories.If you come from an LMIC, you have probably wondered how your strategy and prioritization fits in with the EA movement at-large. Some common questions/concerns we hear from LMICs audiences (that you might have at the moment) are:Why should I care about suffering in other countries if I am surrounded by suffering, poverty, and disease in my own city or country?What are the most effective causes and charities in my country? Does global EA or GiveWell have adequate information to assess local interventions?Is it worthwhile to focus on identifying effective charities in my own country? Will it increase overall impact or divert funding from higher impact interventions?What other high leverage interventions can increase impact? How should we think about localized career advising, research, community building, founding new charities?What are the comparative advantages of my...]]>
Surbhi B https://forum.effectivealtruism.org/posts/Hyco4iMbL6phJwCH2/ea-career-guide-for-people-from-lmics Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA career guide for people from LMICs, published by Surbhi B on December 15, 2022 on The Effective Altruism Forum.Executive SummaryIndividuals from Low and Middle Income Countries (LMICs) engaging with EA often find that existing EA career advice does not address various frequently arising questions and challenges. This post attempts to address that gap by sharing the tentative outcomes of discussions between the authors (who are all from LMICs) on the pros and cons for various career paths.We hope that this guide will serve as a tool for individuals in LMICs to prioritize career paths. This post may be particularly useful as material for localized introductory or in depth fellowships.The advice for paths to impact depends on the individual, but broadly speaking, we recommend EAs, and especially those from LMICs, to:Build career capital early onWork on top global issues instead of local ones, unless there are clear impact-related reasons to opt for the latterSome Individuals -we further discuss who and when- may have impact by doing:local community buildinglocal priorities research and charity-related activitieslocal career advisingThere are many other paths the authors did not explore in-depth but consider promising, such as LMIC-relevant content creation to research on regional comparative advantages in addressing global prioritiesFor each path, the authors discuss pros and cons and suggest actionable next steps (lots of them!)Some general advice around how to think about paths to impact:Assess and prioritize paths by the problem’s scale / neglectedness / tractability, the incremental value you can contribute, and your personal fit.Consider global priorities first. Look for local comparative advantage.New EAs with a bias towards acting now risk subpar outcomes by hastily jumping into direct work, so instead first upskill and engage deeply with (and critique!) EA ideas.Note that this post is not meant to be the final word on how EAs from LMICs should think about career paths, but rather a conversation-starter and call-to-action to improve LMIC-specific diversity in EA. It builds upon excellent prior work by several LMIC groups and community builders, which the authors have compiled in this resource bank.AcknowledgementsThanks to Alejandro Acelas, Agustin Covarrubias, Claudette Salinas, Siobhan McDonough, Vaishnav Sunil, and Yi-Yang Chua for all of your feedback on previous drafts.IntroductionEngaging with EA when you come from a Low or Middle Income Country (LMIC) raises some challenges that might not be obvious to the current EA majority, composed of people from higher income countries, often in the Western world. EA principles framed from a high-income audience can fall flat in contexts where individuals lack the ability to donate, don’t have access to large EA networks for continued engagement, and where EA principles may be at odds with typical cultural norms and histories.If you come from an LMIC, you have probably wondered how your strategy and prioritization fits in with the EA movement at-large. Some common questions/concerns we hear from LMICs audiences (that you might have at the moment) are:Why should I care about suffering in other countries if I am surrounded by suffering, poverty, and disease in my own city or country?What are the most effective causes and charities in my country? Does global EA or GiveWell have adequate information to assess local interventions?Is it worthwhile to focus on identifying effective charities in my own country? Will it increase overall impact or divert funding from higher impact interventions?What other high leverage interventions can increase impact? How should we think about localized career advising, research, community building, founding new charities?What are the comparative advantages of my...]]>
Thu, 15 Dec 2022 15:47:56 +0000 EA - EA career guide for people from LMICs by Surbhi B Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA career guide for people from LMICs, published by Surbhi B on December 15, 2022 on The Effective Altruism Forum.Executive SummaryIndividuals from Low and Middle Income Countries (LMICs) engaging with EA often find that existing EA career advice does not address various frequently arising questions and challenges. This post attempts to address that gap by sharing the tentative outcomes of discussions between the authors (who are all from LMICs) on the pros and cons for various career paths.We hope that this guide will serve as a tool for individuals in LMICs to prioritize career paths. This post may be particularly useful as material for localized introductory or in depth fellowships.The advice for paths to impact depends on the individual, but broadly speaking, we recommend EAs, and especially those from LMICs, to:Build career capital early onWork on top global issues instead of local ones, unless there are clear impact-related reasons to opt for the latterSome Individuals -we further discuss who and when- may have impact by doing:local community buildinglocal priorities research and charity-related activitieslocal career advisingThere are many other paths the authors did not explore in-depth but consider promising, such as LMIC-relevant content creation to research on regional comparative advantages in addressing global prioritiesFor each path, the authors discuss pros and cons and suggest actionable next steps (lots of them!)Some general advice around how to think about paths to impact:Assess and prioritize paths by the problem’s scale / neglectedness / tractability, the incremental value you can contribute, and your personal fit.Consider global priorities first. Look for local comparative advantage.New EAs with a bias towards acting now risk subpar outcomes by hastily jumping into direct work, so instead first upskill and engage deeply with (and critique!) EA ideas.Note that this post is not meant to be the final word on how EAs from LMICs should think about career paths, but rather a conversation-starter and call-to-action to improve LMIC-specific diversity in EA. It builds upon excellent prior work by several LMIC groups and community builders, which the authors have compiled in this resource bank.AcknowledgementsThanks to Alejandro Acelas, Agustin Covarrubias, Claudette Salinas, Siobhan McDonough, Vaishnav Sunil, and Yi-Yang Chua for all of your feedback on previous drafts.IntroductionEngaging with EA when you come from a Low or Middle Income Country (LMIC) raises some challenges that might not be obvious to the current EA majority, composed of people from higher income countries, often in the Western world. EA principles framed from a high-income audience can fall flat in contexts where individuals lack the ability to donate, don’t have access to large EA networks for continued engagement, and where EA principles may be at odds with typical cultural norms and histories.If you come from an LMIC, you have probably wondered how your strategy and prioritization fits in with the EA movement at-large. Some common questions/concerns we hear from LMICs audiences (that you might have at the moment) are:Why should I care about suffering in other countries if I am surrounded by suffering, poverty, and disease in my own city or country?What are the most effective causes and charities in my country? Does global EA or GiveWell have adequate information to assess local interventions?Is it worthwhile to focus on identifying effective charities in my own country? Will it increase overall impact or divert funding from higher impact interventions?What other high leverage interventions can increase impact? How should we think about localized career advising, research, community building, founding new charities?What are the comparative advantages of my...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA career guide for people from LMICs, published by Surbhi B on December 15, 2022 on The Effective Altruism Forum.Executive SummaryIndividuals from Low and Middle Income Countries (LMICs) engaging with EA often find that existing EA career advice does not address various frequently arising questions and challenges. This post attempts to address that gap by sharing the tentative outcomes of discussions between the authors (who are all from LMICs) on the pros and cons for various career paths.We hope that this guide will serve as a tool for individuals in LMICs to prioritize career paths. This post may be particularly useful as material for localized introductory or in depth fellowships.The advice for paths to impact depends on the individual, but broadly speaking, we recommend EAs, and especially those from LMICs, to:Build career capital early onWork on top global issues instead of local ones, unless there are clear impact-related reasons to opt for the latterSome Individuals -we further discuss who and when- may have impact by doing:local community buildinglocal priorities research and charity-related activitieslocal career advisingThere are many other paths the authors did not explore in-depth but consider promising, such as LMIC-relevant content creation to research on regional comparative advantages in addressing global prioritiesFor each path, the authors discuss pros and cons and suggest actionable next steps (lots of them!)Some general advice around how to think about paths to impact:Assess and prioritize paths by the problem’s scale / neglectedness / tractability, the incremental value you can contribute, and your personal fit.Consider global priorities first. Look for local comparative advantage.New EAs with a bias towards acting now risk subpar outcomes by hastily jumping into direct work, so instead first upskill and engage deeply with (and critique!) EA ideas.Note that this post is not meant to be the final word on how EAs from LMICs should think about career paths, but rather a conversation-starter and call-to-action to improve LMIC-specific diversity in EA. It builds upon excellent prior work by several LMIC groups and community builders, which the authors have compiled in this resource bank.AcknowledgementsThanks to Alejandro Acelas, Agustin Covarrubias, Claudette Salinas, Siobhan McDonough, Vaishnav Sunil, and Yi-Yang Chua for all of your feedback on previous drafts.IntroductionEngaging with EA when you come from a Low or Middle Income Country (LMIC) raises some challenges that might not be obvious to the current EA majority, composed of people from higher income countries, often in the Western world. EA principles framed from a high-income audience can fall flat in contexts where individuals lack the ability to donate, don’t have access to large EA networks for continued engagement, and where EA principles may be at odds with typical cultural norms and histories.If you come from an LMIC, you have probably wondered how your strategy and prioritization fits in with the EA movement at-large. Some common questions/concerns we hear from LMICs audiences (that you might have at the moment) are:Why should I care about suffering in other countries if I am surrounded by suffering, poverty, and disease in my own city or country?What are the most effective causes and charities in my country? Does global EA or GiveWell have adequate information to assess local interventions?Is it worthwhile to focus on identifying effective charities in my own country? Will it increase overall impact or divert funding from higher impact interventions?What other high leverage interventions can increase impact? How should we think about localized career advising, research, community building, founding new charities?What are the comparative advantages of my...]]>
Surbhi B https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 38:55 None full 4133
qexomkYp5D7vZR4zP_EA EA - Small EA Groups and Invisible Impact by brook Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Small EA Groups and Invisible Impact, published by brook on December 14, 2022 on The Effective Altruism Forum.Epistemic Status: personal observation (n=1)tl;dr: EA groups not in central hubs (Loxbridge, Berkeley) generate a lot of value they don't see when members move to central hubs to work. Intuitively, this may feel like losing compared to members who stay near the peripheral group.If you're running an EA group anywhere outside of a large hub (e.g. Oxford, London, Berkeley), what does the best case look like for an individual who interacts with your group?Obviously there are several answers to this group, but I'll focus on two:Somebody who stays in the city and continues to help the group as an advisor or senior member, who also works at an EA org or earns to give.Somebody who leaves the city for an EA hub and has little/no contact with the group, but works at an EA org.I won't debate which of these is more valuable at length, because I don't think it's that important to my central point. It seems plausible that the additional movement building of (1) could be comparable to the productivity gains of (2), and what's more impactful for one person might not be for another.My central point is this question: which do you feel good about? At EA Edinburgh, there's something of an undercurrent that Oxford is 'taking' our best members, or an implicit assumption that having people work in Edinburgh and set up EA organisations in Edinburgh is better than them setting up elsewhere. I bought into both of these assumptions for a long time! But I think they are, fundamentally, tribal.Another model that isn't just a black box labelled 'tribalism' is that the impact produced by people who leave Edinburgh for greener pastures is completely invisible to you, while the people who hang around turn up to meetings every month telling you about all the exciting, impressive things they are doing. It seems pretty easy to not update sufficiently for this selection bias.Some suggested actions for your group:Notice if your discussions/thoughts/plans run this wayQuestion the assumption that things happening nearer to you is more impactful than them happening elsewhereStay in contact with members who move away! If you're going to be tribal, you might as well get them to give a talk now and again about what they're doing, or give advice about how to run your groupI'd be interested to hear any other suggestions, and if anybody else has noticed this effect (maybe it's just Edinburgh for some reason?).Special thanks to Q for giving a talk about egregores which triggered this thought initially.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
brook https://forum.effectivealtruism.org/posts/qexomkYp5D7vZR4zP/small-ea-groups-and-invisible-impact Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Small EA Groups and Invisible Impact, published by brook on December 14, 2022 on The Effective Altruism Forum.Epistemic Status: personal observation (n=1)tl;dr: EA groups not in central hubs (Loxbridge, Berkeley) generate a lot of value they don't see when members move to central hubs to work. Intuitively, this may feel like losing compared to members who stay near the peripheral group.If you're running an EA group anywhere outside of a large hub (e.g. Oxford, London, Berkeley), what does the best case look like for an individual who interacts with your group?Obviously there are several answers to this group, but I'll focus on two:Somebody who stays in the city and continues to help the group as an advisor or senior member, who also works at an EA org or earns to give.Somebody who leaves the city for an EA hub and has little/no contact with the group, but works at an EA org.I won't debate which of these is more valuable at length, because I don't think it's that important to my central point. It seems plausible that the additional movement building of (1) could be comparable to the productivity gains of (2), and what's more impactful for one person might not be for another.My central point is this question: which do you feel good about? At EA Edinburgh, there's something of an undercurrent that Oxford is 'taking' our best members, or an implicit assumption that having people work in Edinburgh and set up EA organisations in Edinburgh is better than them setting up elsewhere. I bought into both of these assumptions for a long time! But I think they are, fundamentally, tribal.Another model that isn't just a black box labelled 'tribalism' is that the impact produced by people who leave Edinburgh for greener pastures is completely invisible to you, while the people who hang around turn up to meetings every month telling you about all the exciting, impressive things they are doing. It seems pretty easy to not update sufficiently for this selection bias.Some suggested actions for your group:Notice if your discussions/thoughts/plans run this wayQuestion the assumption that things happening nearer to you is more impactful than them happening elsewhereStay in contact with members who move away! If you're going to be tribal, you might as well get them to give a talk now and again about what they're doing, or give advice about how to run your groupI'd be interested to hear any other suggestions, and if anybody else has noticed this effect (maybe it's just Edinburgh for some reason?).Special thanks to Q for giving a talk about egregores which triggered this thought initially.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 15 Dec 2022 14:17:09 +0000 EA - Small EA Groups and Invisible Impact by brook Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Small EA Groups and Invisible Impact, published by brook on December 14, 2022 on The Effective Altruism Forum.Epistemic Status: personal observation (n=1)tl;dr: EA groups not in central hubs (Loxbridge, Berkeley) generate a lot of value they don't see when members move to central hubs to work. Intuitively, this may feel like losing compared to members who stay near the peripheral group.If you're running an EA group anywhere outside of a large hub (e.g. Oxford, London, Berkeley), what does the best case look like for an individual who interacts with your group?Obviously there are several answers to this group, but I'll focus on two:Somebody who stays in the city and continues to help the group as an advisor or senior member, who also works at an EA org or earns to give.Somebody who leaves the city for an EA hub and has little/no contact with the group, but works at an EA org.I won't debate which of these is more valuable at length, because I don't think it's that important to my central point. It seems plausible that the additional movement building of (1) could be comparable to the productivity gains of (2), and what's more impactful for one person might not be for another.My central point is this question: which do you feel good about? At EA Edinburgh, there's something of an undercurrent that Oxford is 'taking' our best members, or an implicit assumption that having people work in Edinburgh and set up EA organisations in Edinburgh is better than them setting up elsewhere. I bought into both of these assumptions for a long time! But I think they are, fundamentally, tribal.Another model that isn't just a black box labelled 'tribalism' is that the impact produced by people who leave Edinburgh for greener pastures is completely invisible to you, while the people who hang around turn up to meetings every month telling you about all the exciting, impressive things they are doing. It seems pretty easy to not update sufficiently for this selection bias.Some suggested actions for your group:Notice if your discussions/thoughts/plans run this wayQuestion the assumption that things happening nearer to you is more impactful than them happening elsewhereStay in contact with members who move away! If you're going to be tribal, you might as well get them to give a talk now and again about what they're doing, or give advice about how to run your groupI'd be interested to hear any other suggestions, and if anybody else has noticed this effect (maybe it's just Edinburgh for some reason?).Special thanks to Q for giving a talk about egregores which triggered this thought initially.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Small EA Groups and Invisible Impact, published by brook on December 14, 2022 on The Effective Altruism Forum.Epistemic Status: personal observation (n=1)tl;dr: EA groups not in central hubs (Loxbridge, Berkeley) generate a lot of value they don't see when members move to central hubs to work. Intuitively, this may feel like losing compared to members who stay near the peripheral group.If you're running an EA group anywhere outside of a large hub (e.g. Oxford, London, Berkeley), what does the best case look like for an individual who interacts with your group?Obviously there are several answers to this group, but I'll focus on two:Somebody who stays in the city and continues to help the group as an advisor or senior member, who also works at an EA org or earns to give.Somebody who leaves the city for an EA hub and has little/no contact with the group, but works at an EA org.I won't debate which of these is more valuable at length, because I don't think it's that important to my central point. It seems plausible that the additional movement building of (1) could be comparable to the productivity gains of (2), and what's more impactful for one person might not be for another.My central point is this question: which do you feel good about? At EA Edinburgh, there's something of an undercurrent that Oxford is 'taking' our best members, or an implicit assumption that having people work in Edinburgh and set up EA organisations in Edinburgh is better than them setting up elsewhere. I bought into both of these assumptions for a long time! But I think they are, fundamentally, tribal.Another model that isn't just a black box labelled 'tribalism' is that the impact produced by people who leave Edinburgh for greener pastures is completely invisible to you, while the people who hang around turn up to meetings every month telling you about all the exciting, impressive things they are doing. It seems pretty easy to not update sufficiently for this selection bias.Some suggested actions for your group:Notice if your discussions/thoughts/plans run this wayQuestion the assumption that things happening nearer to you is more impactful than them happening elsewhereStay in contact with members who move away! If you're going to be tribal, you might as well get them to give a talk now and again about what they're doing, or give advice about how to run your groupI'd be interested to hear any other suggestions, and if anybody else has noticed this effect (maybe it's just Edinburgh for some reason?).Special thanks to Q for giving a talk about egregores which triggered this thought initially.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
brook https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:35 None full 4134
zDnpnt6KDps8hGcjq_EA EA - Saving Lives vs Creating Lives by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Saving Lives vs Creating Lives, published by Richard Y Chappell on December 15, 2022 on The Effective Altruism Forum.tl;dr: Total utilitarianism treats saving lives and creating new lives as equivalent (all else equal). This seems wrong: funding fertility is not an adequate substitute for bednets. We can avoid this result by giving separate weight to both person-directed and undirected (or "impersonal") reasons. We have weak impersonal reasons to bring an extra life into existence, while we have both impersonal and person-directed reasons to aid an existing individual. This commonsense alternative to totalism still entails longtermism, as zillions of weak impersonal reasons to bring new lives into existence can add up to overwhelmingly strong reasons to prevent human extinction.Killing vs Failing to CreateI think the strongest objection to total utilitarianism is that it risks collapsing the theoretical distinction between killing and failing to create. (Of course, there would still be good practical reasons to maintain such a distinction in practice; but I think there’s a principled distinction here that our theories ought to accommodate.) While I think it’s straightforwardly good to bring more awesome lives into existence, and so failing to create an awesome life constitutes a missed opportunity for doing good, premature death is not just a “missed opportunity” for a good future, it’s harmful in a way that should especially concern us.For example, we clearly have much stronger moral reasons to save the life of a young child (e.g. by funding anti-malarial bednets) than to simply cause an extra child to exist (e.g. by funding fertility treatments or incentivizing procreation). If totalism can’t accommodate this moral datum, that would seem a serious problem for the view.How can we best accommodate this datum? I think there may be two distinct intuitions in the vicinity that I’d want to accommodate:(1) Something about the intrinsic badness of (undesired) death.(2) Counting both person-directed and undirected (“impersonal”) moral reasons.The Intrinsic Harm of DeathMost of the harm of death is comparative: not bad in itself, but worse than the alternative of living on. Importantly, we only have reason to avoid comparative harms in ways that secure the better alternative. To see this, suppose that if you save a child’s life, they’ll live two more decades and then die from an illness that robs them of five decades more life. That latter death is then really bad for them. Does it follow that you shouldn’t save the child’s life after all (since it exposes them to a more harmful death later)? Of course not. The later death is worse compared to living the five decades extra, but letting them die now would do them even less good, no matter that the early death — in depriving them of just two decades of life — is not “as bad” (comparatively speaking) as the later death would be (in a different context with a different point of comparison).So we should not aim to minimize comparative harms of this sort: that would lead us badly astray. But it’s a tricky question whether the harm of death is purely comparative. In ‘Value Receptacles’ (2015, p. 323), I argued that it plausibly is not:Besides preventing the creation of future goods, death is also positively disvaluable insofar as it involves the interruption and thwarting of important life plans, projects, and goals. If such thwarting has sufficient disvalue, it could well outweigh the slight increase in hedonic value obtained in the replacement scenario [where one person is “struck down in the prime of life and replaced with a marginally happier substitute”].Thwarted goals and projects may make death positively bad to some extent. But the extent must be limited. However tragic it is to die in one’s teens (say), I don’t ...]]>
Richard Y Chappell https://forum.effectivealtruism.org/posts/zDnpnt6KDps8hGcjq/saving-lives-vs-creating-lives Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Saving Lives vs Creating Lives, published by Richard Y Chappell on December 15, 2022 on The Effective Altruism Forum.tl;dr: Total utilitarianism treats saving lives and creating new lives as equivalent (all else equal). This seems wrong: funding fertility is not an adequate substitute for bednets. We can avoid this result by giving separate weight to both person-directed and undirected (or "impersonal") reasons. We have weak impersonal reasons to bring an extra life into existence, while we have both impersonal and person-directed reasons to aid an existing individual. This commonsense alternative to totalism still entails longtermism, as zillions of weak impersonal reasons to bring new lives into existence can add up to overwhelmingly strong reasons to prevent human extinction.Killing vs Failing to CreateI think the strongest objection to total utilitarianism is that it risks collapsing the theoretical distinction between killing and failing to create. (Of course, there would still be good practical reasons to maintain such a distinction in practice; but I think there’s a principled distinction here that our theories ought to accommodate.) While I think it’s straightforwardly good to bring more awesome lives into existence, and so failing to create an awesome life constitutes a missed opportunity for doing good, premature death is not just a “missed opportunity” for a good future, it’s harmful in a way that should especially concern us.For example, we clearly have much stronger moral reasons to save the life of a young child (e.g. by funding anti-malarial bednets) than to simply cause an extra child to exist (e.g. by funding fertility treatments or incentivizing procreation). If totalism can’t accommodate this moral datum, that would seem a serious problem for the view.How can we best accommodate this datum? I think there may be two distinct intuitions in the vicinity that I’d want to accommodate:(1) Something about the intrinsic badness of (undesired) death.(2) Counting both person-directed and undirected (“impersonal”) moral reasons.The Intrinsic Harm of DeathMost of the harm of death is comparative: not bad in itself, but worse than the alternative of living on. Importantly, we only have reason to avoid comparative harms in ways that secure the better alternative. To see this, suppose that if you save a child’s life, they’ll live two more decades and then die from an illness that robs them of five decades more life. That latter death is then really bad for them. Does it follow that you shouldn’t save the child’s life after all (since it exposes them to a more harmful death later)? Of course not. The later death is worse compared to living the five decades extra, but letting them die now would do them even less good, no matter that the early death — in depriving them of just two decades of life — is not “as bad” (comparatively speaking) as the later death would be (in a different context with a different point of comparison).So we should not aim to minimize comparative harms of this sort: that would lead us badly astray. But it’s a tricky question whether the harm of death is purely comparative. In ‘Value Receptacles’ (2015, p. 323), I argued that it plausibly is not:Besides preventing the creation of future goods, death is also positively disvaluable insofar as it involves the interruption and thwarting of important life plans, projects, and goals. If such thwarting has sufficient disvalue, it could well outweigh the slight increase in hedonic value obtained in the replacement scenario [where one person is “struck down in the prime of life and replaced with a marginally happier substitute”].Thwarted goals and projects may make death positively bad to some extent. But the extent must be limited. However tragic it is to die in one’s teens (say), I don’t ...]]>
Thu, 15 Dec 2022 08:22:27 +0000 EA - Saving Lives vs Creating Lives by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Saving Lives vs Creating Lives, published by Richard Y Chappell on December 15, 2022 on The Effective Altruism Forum.tl;dr: Total utilitarianism treats saving lives and creating new lives as equivalent (all else equal). This seems wrong: funding fertility is not an adequate substitute for bednets. We can avoid this result by giving separate weight to both person-directed and undirected (or "impersonal") reasons. We have weak impersonal reasons to bring an extra life into existence, while we have both impersonal and person-directed reasons to aid an existing individual. This commonsense alternative to totalism still entails longtermism, as zillions of weak impersonal reasons to bring new lives into existence can add up to overwhelmingly strong reasons to prevent human extinction.Killing vs Failing to CreateI think the strongest objection to total utilitarianism is that it risks collapsing the theoretical distinction between killing and failing to create. (Of course, there would still be good practical reasons to maintain such a distinction in practice; but I think there’s a principled distinction here that our theories ought to accommodate.) While I think it’s straightforwardly good to bring more awesome lives into existence, and so failing to create an awesome life constitutes a missed opportunity for doing good, premature death is not just a “missed opportunity” for a good future, it’s harmful in a way that should especially concern us.For example, we clearly have much stronger moral reasons to save the life of a young child (e.g. by funding anti-malarial bednets) than to simply cause an extra child to exist (e.g. by funding fertility treatments or incentivizing procreation). If totalism can’t accommodate this moral datum, that would seem a serious problem for the view.How can we best accommodate this datum? I think there may be two distinct intuitions in the vicinity that I’d want to accommodate:(1) Something about the intrinsic badness of (undesired) death.(2) Counting both person-directed and undirected (“impersonal”) moral reasons.The Intrinsic Harm of DeathMost of the harm of death is comparative: not bad in itself, but worse than the alternative of living on. Importantly, we only have reason to avoid comparative harms in ways that secure the better alternative. To see this, suppose that if you save a child’s life, they’ll live two more decades and then die from an illness that robs them of five decades more life. That latter death is then really bad for them. Does it follow that you shouldn’t save the child’s life after all (since it exposes them to a more harmful death later)? Of course not. The later death is worse compared to living the five decades extra, but letting them die now would do them even less good, no matter that the early death — in depriving them of just two decades of life — is not “as bad” (comparatively speaking) as the later death would be (in a different context with a different point of comparison).So we should not aim to minimize comparative harms of this sort: that would lead us badly astray. But it’s a tricky question whether the harm of death is purely comparative. In ‘Value Receptacles’ (2015, p. 323), I argued that it plausibly is not:Besides preventing the creation of future goods, death is also positively disvaluable insofar as it involves the interruption and thwarting of important life plans, projects, and goals. If such thwarting has sufficient disvalue, it could well outweigh the slight increase in hedonic value obtained in the replacement scenario [where one person is “struck down in the prime of life and replaced with a marginally happier substitute”].Thwarted goals and projects may make death positively bad to some extent. But the extent must be limited. However tragic it is to die in one’s teens (say), I don’t ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Saving Lives vs Creating Lives, published by Richard Y Chappell on December 15, 2022 on The Effective Altruism Forum.tl;dr: Total utilitarianism treats saving lives and creating new lives as equivalent (all else equal). This seems wrong: funding fertility is not an adequate substitute for bednets. We can avoid this result by giving separate weight to both person-directed and undirected (or "impersonal") reasons. We have weak impersonal reasons to bring an extra life into existence, while we have both impersonal and person-directed reasons to aid an existing individual. This commonsense alternative to totalism still entails longtermism, as zillions of weak impersonal reasons to bring new lives into existence can add up to overwhelmingly strong reasons to prevent human extinction.Killing vs Failing to CreateI think the strongest objection to total utilitarianism is that it risks collapsing the theoretical distinction between killing and failing to create. (Of course, there would still be good practical reasons to maintain such a distinction in practice; but I think there’s a principled distinction here that our theories ought to accommodate.) While I think it’s straightforwardly good to bring more awesome lives into existence, and so failing to create an awesome life constitutes a missed opportunity for doing good, premature death is not just a “missed opportunity” for a good future, it’s harmful in a way that should especially concern us.For example, we clearly have much stronger moral reasons to save the life of a young child (e.g. by funding anti-malarial bednets) than to simply cause an extra child to exist (e.g. by funding fertility treatments or incentivizing procreation). If totalism can’t accommodate this moral datum, that would seem a serious problem for the view.How can we best accommodate this datum? I think there may be two distinct intuitions in the vicinity that I’d want to accommodate:(1) Something about the intrinsic badness of (undesired) death.(2) Counting both person-directed and undirected (“impersonal”) moral reasons.The Intrinsic Harm of DeathMost of the harm of death is comparative: not bad in itself, but worse than the alternative of living on. Importantly, we only have reason to avoid comparative harms in ways that secure the better alternative. To see this, suppose that if you save a child’s life, they’ll live two more decades and then die from an illness that robs them of five decades more life. That latter death is then really bad for them. Does it follow that you shouldn’t save the child’s life after all (since it exposes them to a more harmful death later)? Of course not. The later death is worse compared to living the five decades extra, but letting them die now would do them even less good, no matter that the early death — in depriving them of just two decades of life — is not “as bad” (comparatively speaking) as the later death would be (in a different context with a different point of comparison).So we should not aim to minimize comparative harms of this sort: that would lead us badly astray. But it’s a tricky question whether the harm of death is purely comparative. In ‘Value Receptacles’ (2015, p. 323), I argued that it plausibly is not:Besides preventing the creation of future goods, death is also positively disvaluable insofar as it involves the interruption and thwarting of important life plans, projects, and goals. If such thwarting has sufficient disvalue, it could well outweigh the slight increase in hedonic value obtained in the replacement scenario [where one person is “struck down in the prime of life and replaced with a marginally happier substitute”].Thwarted goals and projects may make death positively bad to some extent. But the extent must be limited. However tragic it is to die in one’s teens (say), I don’t ...]]>
Richard Y Chappell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:29 None full 4123
YkjDHYqEubHaDsEQT_EA EA - I went to the Progress Summit. Here’s What I Learned. by Nick Corvino Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I went to the Progress Summit. Here’s What I Learned., published by Nick Corvino on December 14, 2022 on The Effective Altruism Forum.I attended the Progress Summit in Hollywood yesterday, hosted by The Atlantic. Progress studies and EA have overlap, so I thought it would be useful to give my thoughts on the event. In general, the main difference I perceived was people attending not because they wanted to maximize their positive impact but rather because they were intellectually interested in socially responsible progress. And some other stuff.Right after I left, I told my friend it felt like a “grown up version of an EA conference.” I’m 22, and was probably the youngest person there. Everything felt more professional (cocktails, food, outfits, etc.) and the operations seemed smoother than any EA event I’ve been to.The facilitators were all Atlantic writers, such as Ross Anderson and Derek Thompson, and they were significantly more eloquent and better at holding people’s attention than any EA event I’ve been to. I could definitely tell the difference in their training. That being said, the talks felt fluffy and often skirted around intellectual issues for the sake of a smooth conversation.Networking was less direct. More small talk, less intensity.The event was catered towards investors/venture capitalists. Speakers were trying to make their product sound appealing so investors will fund them, which I thought was slightly bad for epistemics, often ignoring the risks of their products (e.g. AI slaughter bots).In general, the majority of the attendees seemed bullish on “the progress of technology,” and didn’t touch much on the potential risks of things like AGI or biorisk. If they did address the risks, it was invariably in relation to (1) the economy, (2) climate change, or (3) war. Of the people I spoke with, <20% had heard of misalignment or existential risk. I didn’t get the impression that anyone at the event didn’t take existential risk seriously. Rather, it felt like they had not heard about it in the progress studies ecosystem.Overall, I think the Progress studies community seems decently aligned with what EAs care about, and could become more-so in the coming years. The event had decent epistemics and was less intimidating than an EA conference. I think many people who feel that EA is too intense, cares too much about longtermism, or uses too much jargon could find progress studies as a suitable alternative. If the movement known as EA dissolved (God forbid) I think progress studies could absorb many of the folks.Notable events:“How mRNA Technology Can Save the World”Most of the people I talked to here had never considered the risks posed by biotechnology (beyond class inequality stuff)“Drones and AI: The Future of Military Technology”The concern most people had was causing a war (which seems good), but because much of what Brian Schimpf talked about was technology used for deterrence, most people then seemed bullish on the positive impacts of this technology (I was not).“How Artificial Intelligence Can Revolutionize Creativity”I didn’t go to this one, but I heard from someone who did that they talked about GPT-3 and Dall-e positively, and didn't mention the potential risks posed by capabilities advancements.“The Long View”Didn’t go, unfortunately.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nick Corvino https://forum.effectivealtruism.org/posts/YkjDHYqEubHaDsEQT/i-went-to-the-progress-summit-here-s-what-i-learned Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I went to the Progress Summit. Here’s What I Learned., published by Nick Corvino on December 14, 2022 on The Effective Altruism Forum.I attended the Progress Summit in Hollywood yesterday, hosted by The Atlantic. Progress studies and EA have overlap, so I thought it would be useful to give my thoughts on the event. In general, the main difference I perceived was people attending not because they wanted to maximize their positive impact but rather because they were intellectually interested in socially responsible progress. And some other stuff.Right after I left, I told my friend it felt like a “grown up version of an EA conference.” I’m 22, and was probably the youngest person there. Everything felt more professional (cocktails, food, outfits, etc.) and the operations seemed smoother than any EA event I’ve been to.The facilitators were all Atlantic writers, such as Ross Anderson and Derek Thompson, and they were significantly more eloquent and better at holding people’s attention than any EA event I’ve been to. I could definitely tell the difference in their training. That being said, the talks felt fluffy and often skirted around intellectual issues for the sake of a smooth conversation.Networking was less direct. More small talk, less intensity.The event was catered towards investors/venture capitalists. Speakers were trying to make their product sound appealing so investors will fund them, which I thought was slightly bad for epistemics, often ignoring the risks of their products (e.g. AI slaughter bots).In general, the majority of the attendees seemed bullish on “the progress of technology,” and didn’t touch much on the potential risks of things like AGI or biorisk. If they did address the risks, it was invariably in relation to (1) the economy, (2) climate change, or (3) war. Of the people I spoke with, <20% had heard of misalignment or existential risk. I didn’t get the impression that anyone at the event didn’t take existential risk seriously. Rather, it felt like they had not heard about it in the progress studies ecosystem.Overall, I think the Progress studies community seems decently aligned with what EAs care about, and could become more-so in the coming years. The event had decent epistemics and was less intimidating than an EA conference. I think many people who feel that EA is too intense, cares too much about longtermism, or uses too much jargon could find progress studies as a suitable alternative. If the movement known as EA dissolved (God forbid) I think progress studies could absorb many of the folks.Notable events:“How mRNA Technology Can Save the World”Most of the people I talked to here had never considered the risks posed by biotechnology (beyond class inequality stuff)“Drones and AI: The Future of Military Technology”The concern most people had was causing a war (which seems good), but because much of what Brian Schimpf talked about was technology used for deterrence, most people then seemed bullish on the positive impacts of this technology (I was not).“How Artificial Intelligence Can Revolutionize Creativity”I didn’t go to this one, but I heard from someone who did that they talked about GPT-3 and Dall-e positively, and didn't mention the potential risks posed by capabilities advancements.“The Long View”Didn’t go, unfortunately.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 15 Dec 2022 00:04:05 +0000 EA - I went to the Progress Summit. Here’s What I Learned. by Nick Corvino Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I went to the Progress Summit. Here’s What I Learned., published by Nick Corvino on December 14, 2022 on The Effective Altruism Forum.I attended the Progress Summit in Hollywood yesterday, hosted by The Atlantic. Progress studies and EA have overlap, so I thought it would be useful to give my thoughts on the event. In general, the main difference I perceived was people attending not because they wanted to maximize their positive impact but rather because they were intellectually interested in socially responsible progress. And some other stuff.Right after I left, I told my friend it felt like a “grown up version of an EA conference.” I’m 22, and was probably the youngest person there. Everything felt more professional (cocktails, food, outfits, etc.) and the operations seemed smoother than any EA event I’ve been to.The facilitators were all Atlantic writers, such as Ross Anderson and Derek Thompson, and they were significantly more eloquent and better at holding people’s attention than any EA event I’ve been to. I could definitely tell the difference in their training. That being said, the talks felt fluffy and often skirted around intellectual issues for the sake of a smooth conversation.Networking was less direct. More small talk, less intensity.The event was catered towards investors/venture capitalists. Speakers were trying to make their product sound appealing so investors will fund them, which I thought was slightly bad for epistemics, often ignoring the risks of their products (e.g. AI slaughter bots).In general, the majority of the attendees seemed bullish on “the progress of technology,” and didn’t touch much on the potential risks of things like AGI or biorisk. If they did address the risks, it was invariably in relation to (1) the economy, (2) climate change, or (3) war. Of the people I spoke with, <20% had heard of misalignment or existential risk. I didn’t get the impression that anyone at the event didn’t take existential risk seriously. Rather, it felt like they had not heard about it in the progress studies ecosystem.Overall, I think the Progress studies community seems decently aligned with what EAs care about, and could become more-so in the coming years. The event had decent epistemics and was less intimidating than an EA conference. I think many people who feel that EA is too intense, cares too much about longtermism, or uses too much jargon could find progress studies as a suitable alternative. If the movement known as EA dissolved (God forbid) I think progress studies could absorb many of the folks.Notable events:“How mRNA Technology Can Save the World”Most of the people I talked to here had never considered the risks posed by biotechnology (beyond class inequality stuff)“Drones and AI: The Future of Military Technology”The concern most people had was causing a war (which seems good), but because much of what Brian Schimpf talked about was technology used for deterrence, most people then seemed bullish on the positive impacts of this technology (I was not).“How Artificial Intelligence Can Revolutionize Creativity”I didn’t go to this one, but I heard from someone who did that they talked about GPT-3 and Dall-e positively, and didn't mention the potential risks posed by capabilities advancements.“The Long View”Didn’t go, unfortunately.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I went to the Progress Summit. Here’s What I Learned., published by Nick Corvino on December 14, 2022 on The Effective Altruism Forum.I attended the Progress Summit in Hollywood yesterday, hosted by The Atlantic. Progress studies and EA have overlap, so I thought it would be useful to give my thoughts on the event. In general, the main difference I perceived was people attending not because they wanted to maximize their positive impact but rather because they were intellectually interested in socially responsible progress. And some other stuff.Right after I left, I told my friend it felt like a “grown up version of an EA conference.” I’m 22, and was probably the youngest person there. Everything felt more professional (cocktails, food, outfits, etc.) and the operations seemed smoother than any EA event I’ve been to.The facilitators were all Atlantic writers, such as Ross Anderson and Derek Thompson, and they were significantly more eloquent and better at holding people’s attention than any EA event I’ve been to. I could definitely tell the difference in their training. That being said, the talks felt fluffy and often skirted around intellectual issues for the sake of a smooth conversation.Networking was less direct. More small talk, less intensity.The event was catered towards investors/venture capitalists. Speakers were trying to make their product sound appealing so investors will fund them, which I thought was slightly bad for epistemics, often ignoring the risks of their products (e.g. AI slaughter bots).In general, the majority of the attendees seemed bullish on “the progress of technology,” and didn’t touch much on the potential risks of things like AGI or biorisk. If they did address the risks, it was invariably in relation to (1) the economy, (2) climate change, or (3) war. Of the people I spoke with, <20% had heard of misalignment or existential risk. I didn’t get the impression that anyone at the event didn’t take existential risk seriously. Rather, it felt like they had not heard about it in the progress studies ecosystem.Overall, I think the Progress studies community seems decently aligned with what EAs care about, and could become more-so in the coming years. The event had decent epistemics and was less intimidating than an EA conference. I think many people who feel that EA is too intense, cares too much about longtermism, or uses too much jargon could find progress studies as a suitable alternative. If the movement known as EA dissolved (God forbid) I think progress studies could absorb many of the folks.Notable events:“How mRNA Technology Can Save the World”Most of the people I talked to here had never considered the risks posed by biotechnology (beyond class inequality stuff)“Drones and AI: The Future of Military Technology”The concern most people had was causing a war (which seems good), but because much of what Brian Schimpf talked about was technology used for deterrence, most people then seemed bullish on the positive impacts of this technology (I was not).“How Artificial Intelligence Can Revolutionize Creativity”I didn’t go to this one, but I heard from someone who did that they talked about GPT-3 and Dall-e positively, and didn't mention the potential risks posed by capabilities advancements.“The Long View”Didn’t go, unfortunately.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nick Corvino https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:21 None full 4124
LTq3DsBedqD8asnZE_EA EA - The US Attorney for the SDNY asked that FTX money be returned to victims. What are the moral and legal consequences to EA? by Fermi–Dirac Distribution Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The US Attorney for the SDNY asked that FTX money be returned to victims. What are the moral and legal consequences to EA?, published by Fermi–Dirac Distribution on December 14, 2022 on The Effective Altruism Forum.In a December 13, 2022 press conference announcing the indictment of Sam Bankman-Fried, Damian Williams, the United States Attorney for the Southern District of New York, said the following:To any person, entity or political campaign that has received stolen customer money, we ask that you work with us to return that money to the innocent victims.In the same press conference, he alleges that FTX has been stealing customer money since 2019:First, we charge that from 2019 until earlier this year, Bankman-Fried and his co-conspirators stole billions of dollars from FTX customers.He also alleged the following about SBF's political contributions:Those contributions were disguised to look like they were coming from wealthy co-conspirators, when in fact, the contributions were funded by Alameda Research with stolen customer money.This might be relevant to EA orgs, since a lot of SBF's political contributions seem to have been made around the same time as his EA donations (i.e. this year), and so likely came from the same source.I made a comment pointing out some of this information yesterday, but so far that this is getting less attention than I expected given the seriousness of the matter, hence this top-level post.What are the moral and legal consequences of the information in the indictment, and of Williams's request, to EA orgs and individuals that have received money from FTX?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fermi–Dirac Distribution https://forum.effectivealtruism.org/posts/LTq3DsBedqD8asnZE/the-us-attorney-for-the-sdny-asked-that-ftx-money-be Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The US Attorney for the SDNY asked that FTX money be returned to victims. What are the moral and legal consequences to EA?, published by Fermi–Dirac Distribution on December 14, 2022 on The Effective Altruism Forum.In a December 13, 2022 press conference announcing the indictment of Sam Bankman-Fried, Damian Williams, the United States Attorney for the Southern District of New York, said the following:To any person, entity or political campaign that has received stolen customer money, we ask that you work with us to return that money to the innocent victims.In the same press conference, he alleges that FTX has been stealing customer money since 2019:First, we charge that from 2019 until earlier this year, Bankman-Fried and his co-conspirators stole billions of dollars from FTX customers.He also alleged the following about SBF's political contributions:Those contributions were disguised to look like they were coming from wealthy co-conspirators, when in fact, the contributions were funded by Alameda Research with stolen customer money.This might be relevant to EA orgs, since a lot of SBF's political contributions seem to have been made around the same time as his EA donations (i.e. this year), and so likely came from the same source.I made a comment pointing out some of this information yesterday, but so far that this is getting less attention than I expected given the seriousness of the matter, hence this top-level post.What are the moral and legal consequences of the information in the indictment, and of Williams's request, to EA orgs and individuals that have received money from FTX?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 14 Dec 2022 22:45:58 +0000 EA - The US Attorney for the SDNY asked that FTX money be returned to victims. What are the moral and legal consequences to EA? by Fermi–Dirac Distribution Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The US Attorney for the SDNY asked that FTX money be returned to victims. What are the moral and legal consequences to EA?, published by Fermi–Dirac Distribution on December 14, 2022 on The Effective Altruism Forum.In a December 13, 2022 press conference announcing the indictment of Sam Bankman-Fried, Damian Williams, the United States Attorney for the Southern District of New York, said the following:To any person, entity or political campaign that has received stolen customer money, we ask that you work with us to return that money to the innocent victims.In the same press conference, he alleges that FTX has been stealing customer money since 2019:First, we charge that from 2019 until earlier this year, Bankman-Fried and his co-conspirators stole billions of dollars from FTX customers.He also alleged the following about SBF's political contributions:Those contributions were disguised to look like they were coming from wealthy co-conspirators, when in fact, the contributions were funded by Alameda Research with stolen customer money.This might be relevant to EA orgs, since a lot of SBF's political contributions seem to have been made around the same time as his EA donations (i.e. this year), and so likely came from the same source.I made a comment pointing out some of this information yesterday, but so far that this is getting less attention than I expected given the seriousness of the matter, hence this top-level post.What are the moral and legal consequences of the information in the indictment, and of Williams's request, to EA orgs and individuals that have received money from FTX?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The US Attorney for the SDNY asked that FTX money be returned to victims. What are the moral and legal consequences to EA?, published by Fermi–Dirac Distribution on December 14, 2022 on The Effective Altruism Forum.In a December 13, 2022 press conference announcing the indictment of Sam Bankman-Fried, Damian Williams, the United States Attorney for the Southern District of New York, said the following:To any person, entity or political campaign that has received stolen customer money, we ask that you work with us to return that money to the innocent victims.In the same press conference, he alleges that FTX has been stealing customer money since 2019:First, we charge that from 2019 until earlier this year, Bankman-Fried and his co-conspirators stole billions of dollars from FTX customers.He also alleged the following about SBF's political contributions:Those contributions were disguised to look like they were coming from wealthy co-conspirators, when in fact, the contributions were funded by Alameda Research with stolen customer money.This might be relevant to EA orgs, since a lot of SBF's political contributions seem to have been made around the same time as his EA donations (i.e. this year), and so likely came from the same source.I made a comment pointing out some of this information yesterday, but so far that this is getting less attention than I expected given the seriousness of the matter, hence this top-level post.What are the moral and legal consequences of the information in the indictment, and of Williams's request, to EA orgs and individuals that have received money from FTX?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fermi–Dirac Distribution https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:51 None full 4116
pzCi5EuiherL2ccYc_EA EA - EA's Achievements in 2022 by ElliotJDavies Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA's Achievements in 2022, published by ElliotJDavies on December 14, 2022 on The Effective Altruism Forum.As the year is nearing its end, it would be good to crowd source achievements made by Effective Altruists this year, and collect them into a single thread.What got done in 2022, to to make the world safer or better, for animals, people or future beings:How many mosquito nets were distributed? What organisations got founded? What papers got published? What incubation grants where made? How have organisations scaled up? What laws got passed?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ElliotJDavies https://forum.effectivealtruism.org/posts/pzCi5EuiherL2ccYc/ea-s-achievements-in-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA's Achievements in 2022, published by ElliotJDavies on December 14, 2022 on The Effective Altruism Forum.As the year is nearing its end, it would be good to crowd source achievements made by Effective Altruists this year, and collect them into a single thread.What got done in 2022, to to make the world safer or better, for animals, people or future beings:How many mosquito nets were distributed? What organisations got founded? What papers got published? What incubation grants where made? How have organisations scaled up? What laws got passed?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 14 Dec 2022 16:23:35 +0000 EA - EA's Achievements in 2022 by ElliotJDavies Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA's Achievements in 2022, published by ElliotJDavies on December 14, 2022 on The Effective Altruism Forum.As the year is nearing its end, it would be good to crowd source achievements made by Effective Altruists this year, and collect them into a single thread.What got done in 2022, to to make the world safer or better, for animals, people or future beings:How many mosquito nets were distributed? What organisations got founded? What papers got published? What incubation grants where made? How have organisations scaled up? What laws got passed?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA's Achievements in 2022, published by ElliotJDavies on December 14, 2022 on The Effective Altruism Forum.As the year is nearing its end, it would be good to crowd source achievements made by Effective Altruists this year, and collect them into a single thread.What got done in 2022, to to make the world safer or better, for animals, people or future beings:How many mosquito nets were distributed? What organisations got founded? What papers got published? What incubation grants where made? How have organisations scaled up? What laws got passed?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ElliotJDavies https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:51 None full 4117
C26RHHYXzT6P6A4ht_EA EA - What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation by Linch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation, published by Linch on December 14, 2022 on The Effective Altruism Forum.SummaryRethink Priorities’ General Longtermism team, led by me, has existed for just under a year. In this post, I summarize our work so far.Our initial theory of change centered around:Primarily, facilitating the creation of faster and better longtermist “megaprojects” (though in practice, we focused more on somewhat scalable longtermist projects).Secondarily, improving strategy clarity about which “intermediate goals” longtermists should pursue (though in practice, we focused more opportunistically on miscellaneous high-impact research questions). (more)We had ~5 Full-time Equivalents (FTE) on average (a total of 54.5 FTE months of research work up until the end of November) and spent ~$721,000. (more)(Shareable) Outputs and outcomes of our work this year include:I encouraged the creation of and supported the Rethink Priorities’ Special Projects team, which provides fiscal sponsorship to external entrepreneurial projects.Marie and Renan (based on prior work by Renan, Max, and me) made a simple model for prioritizing between longtermist projects and identifying ones that seemed especially promising to research further. (more)Our fellows and research assistant (Emma, Jam, Joe, Max, Marie) completed 13 research “speedruns” (~10h shallow dives) into specific longtermist projects. (more)Our fellows and research assistant (Emma, Jam, Joe, Max, Marie) completed further research on several longtermist projects, including air sterilization techniques, whistleblowing, the AI safety recruiting pipeline, and infrastructure to support independent researchers. (more)Renan cofounded and ran Condor Camp, a project that aims to find and engage world-class talent in Brazil for longtermist causes while also field-building longtermism and supporting EA community building in the country. (more)Ben cofounded and ran Pathfinder, a project to help mid-career professionals to find the highest impact work (more)I initiated a founder search for multiple promising projects, including:Shelters and other civilizational resilience work (resulting in recommending grants to Tereza Fildrova, who helped organize the SHELTER weekend, and Ulrik Horn, who is exploiting his fit for work in this area). (more)An early warning forecasting center (resulting in working with Alex D to explore founding a project in this area). (more)Ben researched nanotechnology strategy and made a database of resources relevant to this area. (more)Separately from my work at Rethink Priorities, I was a guest fund manager for EA Funds’ Long-Term Future Fund, and a regranter for the Future Fund. (more)The recent changes to the EA funding situation have significantly affected our team’s strategy, in that megaprojects now seem less relevant, and in that new research questions might have become especially important in light of the FTX crash. (more) However, I still think it’s very plausible that we continue to focus on entrepreneurial longtermist projects as our main research area. We’re currently in the process of reorienting and setting our strategy for 2023. (more)You can help our team by contributing ideas for highly impactful research projects, funding us, expressing interest in working with us, and giving feedback on the work and plans outlined in this post. (more)PreambleI’ve led the new General Longtermism team at Rethink Priorities for slightly under a year. Recent changes in the EA funding landscape have had a large impact on our work. I’ve decided that now is a good time to write a detailed summary of our work so far, as well as how recent events have affected our work.The primary purpose of this article is to be informa...]]>
Linch https://forum.effectivealtruism.org/posts/C26RHHYXzT6P6A4ht/what-rethink-priorities-general-longtermism-team-did-in-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation, published by Linch on December 14, 2022 on The Effective Altruism Forum.SummaryRethink Priorities’ General Longtermism team, led by me, has existed for just under a year. In this post, I summarize our work so far.Our initial theory of change centered around:Primarily, facilitating the creation of faster and better longtermist “megaprojects” (though in practice, we focused more on somewhat scalable longtermist projects).Secondarily, improving strategy clarity about which “intermediate goals” longtermists should pursue (though in practice, we focused more opportunistically on miscellaneous high-impact research questions). (more)We had ~5 Full-time Equivalents (FTE) on average (a total of 54.5 FTE months of research work up until the end of November) and spent ~$721,000. (more)(Shareable) Outputs and outcomes of our work this year include:I encouraged the creation of and supported the Rethink Priorities’ Special Projects team, which provides fiscal sponsorship to external entrepreneurial projects.Marie and Renan (based on prior work by Renan, Max, and me) made a simple model for prioritizing between longtermist projects and identifying ones that seemed especially promising to research further. (more)Our fellows and research assistant (Emma, Jam, Joe, Max, Marie) completed 13 research “speedruns” (~10h shallow dives) into specific longtermist projects. (more)Our fellows and research assistant (Emma, Jam, Joe, Max, Marie) completed further research on several longtermist projects, including air sterilization techniques, whistleblowing, the AI safety recruiting pipeline, and infrastructure to support independent researchers. (more)Renan cofounded and ran Condor Camp, a project that aims to find and engage world-class talent in Brazil for longtermist causes while also field-building longtermism and supporting EA community building in the country. (more)Ben cofounded and ran Pathfinder, a project to help mid-career professionals to find the highest impact work (more)I initiated a founder search for multiple promising projects, including:Shelters and other civilizational resilience work (resulting in recommending grants to Tereza Fildrova, who helped organize the SHELTER weekend, and Ulrik Horn, who is exploiting his fit for work in this area). (more)An early warning forecasting center (resulting in working with Alex D to explore founding a project in this area). (more)Ben researched nanotechnology strategy and made a database of resources relevant to this area. (more)Separately from my work at Rethink Priorities, I was a guest fund manager for EA Funds’ Long-Term Future Fund, and a regranter for the Future Fund. (more)The recent changes to the EA funding situation have significantly affected our team’s strategy, in that megaprojects now seem less relevant, and in that new research questions might have become especially important in light of the FTX crash. (more) However, I still think it’s very plausible that we continue to focus on entrepreneurial longtermist projects as our main research area. We’re currently in the process of reorienting and setting our strategy for 2023. (more)You can help our team by contributing ideas for highly impactful research projects, funding us, expressing interest in working with us, and giving feedback on the work and plans outlined in this post. (more)PreambleI’ve led the new General Longtermism team at Rethink Priorities for slightly under a year. Recent changes in the EA funding landscape have had a large impact on our work. I’ve decided that now is a good time to write a detailed summary of our work so far, as well as how recent events have affected our work.The primary purpose of this article is to be informa...]]>
Wed, 14 Dec 2022 13:44:39 +0000 EA - What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation by Linch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation, published by Linch on December 14, 2022 on The Effective Altruism Forum.SummaryRethink Priorities’ General Longtermism team, led by me, has existed for just under a year. In this post, I summarize our work so far.Our initial theory of change centered around:Primarily, facilitating the creation of faster and better longtermist “megaprojects” (though in practice, we focused more on somewhat scalable longtermist projects).Secondarily, improving strategy clarity about which “intermediate goals” longtermists should pursue (though in practice, we focused more opportunistically on miscellaneous high-impact research questions). (more)We had ~5 Full-time Equivalents (FTE) on average (a total of 54.5 FTE months of research work up until the end of November) and spent ~$721,000. (more)(Shareable) Outputs and outcomes of our work this year include:I encouraged the creation of and supported the Rethink Priorities’ Special Projects team, which provides fiscal sponsorship to external entrepreneurial projects.Marie and Renan (based on prior work by Renan, Max, and me) made a simple model for prioritizing between longtermist projects and identifying ones that seemed especially promising to research further. (more)Our fellows and research assistant (Emma, Jam, Joe, Max, Marie) completed 13 research “speedruns” (~10h shallow dives) into specific longtermist projects. (more)Our fellows and research assistant (Emma, Jam, Joe, Max, Marie) completed further research on several longtermist projects, including air sterilization techniques, whistleblowing, the AI safety recruiting pipeline, and infrastructure to support independent researchers. (more)Renan cofounded and ran Condor Camp, a project that aims to find and engage world-class talent in Brazil for longtermist causes while also field-building longtermism and supporting EA community building in the country. (more)Ben cofounded and ran Pathfinder, a project to help mid-career professionals to find the highest impact work (more)I initiated a founder search for multiple promising projects, including:Shelters and other civilizational resilience work (resulting in recommending grants to Tereza Fildrova, who helped organize the SHELTER weekend, and Ulrik Horn, who is exploiting his fit for work in this area). (more)An early warning forecasting center (resulting in working with Alex D to explore founding a project in this area). (more)Ben researched nanotechnology strategy and made a database of resources relevant to this area. (more)Separately from my work at Rethink Priorities, I was a guest fund manager for EA Funds’ Long-Term Future Fund, and a regranter for the Future Fund. (more)The recent changes to the EA funding situation have significantly affected our team’s strategy, in that megaprojects now seem less relevant, and in that new research questions might have become especially important in light of the FTX crash. (more) However, I still think it’s very plausible that we continue to focus on entrepreneurial longtermist projects as our main research area. We’re currently in the process of reorienting and setting our strategy for 2023. (more)You can help our team by contributing ideas for highly impactful research projects, funding us, expressing interest in working with us, and giving feedback on the work and plans outlined in this post. (more)PreambleI’ve led the new General Longtermism team at Rethink Priorities for slightly under a year. Recent changes in the EA funding landscape have had a large impact on our work. I’ve decided that now is a good time to write a detailed summary of our work so far, as well as how recent events have affected our work.The primary purpose of this article is to be informa...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation, published by Linch on December 14, 2022 on The Effective Altruism Forum.SummaryRethink Priorities’ General Longtermism team, led by me, has existed for just under a year. In this post, I summarize our work so far.Our initial theory of change centered around:Primarily, facilitating the creation of faster and better longtermist “megaprojects” (though in practice, we focused more on somewhat scalable longtermist projects).Secondarily, improving strategy clarity about which “intermediate goals” longtermists should pursue (though in practice, we focused more opportunistically on miscellaneous high-impact research questions). (more)We had ~5 Full-time Equivalents (FTE) on average (a total of 54.5 FTE months of research work up until the end of November) and spent ~$721,000. (more)(Shareable) Outputs and outcomes of our work this year include:I encouraged the creation of and supported the Rethink Priorities’ Special Projects team, which provides fiscal sponsorship to external entrepreneurial projects.Marie and Renan (based on prior work by Renan, Max, and me) made a simple model for prioritizing between longtermist projects and identifying ones that seemed especially promising to research further. (more)Our fellows and research assistant (Emma, Jam, Joe, Max, Marie) completed 13 research “speedruns” (~10h shallow dives) into specific longtermist projects. (more)Our fellows and research assistant (Emma, Jam, Joe, Max, Marie) completed further research on several longtermist projects, including air sterilization techniques, whistleblowing, the AI safety recruiting pipeline, and infrastructure to support independent researchers. (more)Renan cofounded and ran Condor Camp, a project that aims to find and engage world-class talent in Brazil for longtermist causes while also field-building longtermism and supporting EA community building in the country. (more)Ben cofounded and ran Pathfinder, a project to help mid-career professionals to find the highest impact work (more)I initiated a founder search for multiple promising projects, including:Shelters and other civilizational resilience work (resulting in recommending grants to Tereza Fildrova, who helped organize the SHELTER weekend, and Ulrik Horn, who is exploiting his fit for work in this area). (more)An early warning forecasting center (resulting in working with Alex D to explore founding a project in this area). (more)Ben researched nanotechnology strategy and made a database of resources relevant to this area. (more)Separately from my work at Rethink Priorities, I was a guest fund manager for EA Funds’ Long-Term Future Fund, and a regranter for the Future Fund. (more)The recent changes to the EA funding situation have significantly affected our team’s strategy, in that megaprojects now seem less relevant, and in that new research questions might have become especially important in light of the FTX crash. (more) However, I still think it’s very plausible that we continue to focus on entrepreneurial longtermist projects as our main research area. We’re currently in the process of reorienting and setting our strategy for 2023. (more)You can help our team by contributing ideas for highly impactful research projects, funding us, expressing interest in working with us, and giving feedback on the work and plans outlined in this post. (more)PreambleI’ve led the new General Longtermism team at Rethink Priorities for slightly under a year. Recent changes in the EA funding landscape have had a large impact on our work. I’ve decided that now is a good time to write a detailed summary of our work so far, as well as how recent events have affected our work.The primary purpose of this article is to be informa...]]>
Linch https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 33:53 None full 4118
8xsDrkLpzvsh5zZeR_EA EA - Open Philanthropy is hiring for (lots of) operations roles! by maura Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy is hiring for (lots of) operations roles!, published by maura on December 14, 2022 on The Effective Altruism Forum.It’s been a busy year at Open Philanthropy; our ambitious growth goals have had us expanding our team rapidly this year (we roughly doubled over the course of 2022). We recently posted 7 operations roles, which are now live on our jobs page and linked below. Each role will be vital to scaling our operations and executing on our grantmaking goals:A Business Operations Lead, who will manage multiple Business Operations Generalists on our team which provides organization-wide tech and office infrastructure, helps with event planning, and handles all forms of time-saving support.Note: the generalist role is open to applicants without any previous experience, and we’re happy to consider candidates graduating in May.A Finance Operations Assistant to help manage accounts payable, contractors, and internal expense reporting.Multiple Grants Associates (both generalist and longtermist-focused) to join the team responsible for structuring, conducting due diligence on, and processing every grant we make.A People Operations Generalist to work on onboarding, retention, offboarding, payroll, benefits, and helping our team grow more international.A Recruiter to help us continue to grow.A Salesforce Administrator & Technical Project Manager to help manage Salesforce (our grantmaking platform) and support other tech needs.If you know someone who would be great for one of these roles, please refer them to us. We welcome external referrals, and have found them extremely helpful in the past. We also offer a $5,000 referral bonus; more information here.A few notes for prospective applicantsWe’re always adjusting our hiring process based on candidate feedback, and we wanted to highlight (or reiterate) a few things we’re trying for this round of roles.For most of the roles posted (all except the Business Operations Lead and Salesforce roles), you only need to apply once to opt into consideration for as many of these roles as you’re interested in.There will be a checkbox on the application form asking if you want to be considered for any other teams. Checking multiple boxes will generally not mean you will be asked to do additional work tests for each role you’re applying to. If you proceed to the end of the process, you may have interviews with multiple hiring managers if multiple teams are still interested in considering you as a candidate, and you may receive an offer for one or multiple of the roles you expressed interest in.Interested candidates can self-select into applying for a 3-month temporary two-way trial for these roles. If you select this track, this means you’ll:Potentially have the opportunity to go through a more streamlined application process (one brief work test, rather than two, with the option to consider alternatives to work tests), if you need a decision more quickly.If you seem like a very strong candidate for full-time employment at the end of that streamlined process, receive an offer of 3 months of employment, at the end of which we will mutually decide whether to continue a longer-term employment relationship (assuming no compliance issues arise, we plan to extend this option to both US and international candidates).Please don’t feel any pressure to select this option — we’re providing it as an experiment, and in case it’s actively preferable for some candidates (e.g. because they’re uncertain they want to transition into a career in ops and would like to try it out, don’t have the financial flexibility to go through a more involved application process, or don’t feel they can make a good longer-term career decision right now for whatever reason). If we get feedback from applicants that this isn’t useful, or is stres...]]>
maura https://forum.effectivealtruism.org/posts/8xsDrkLpzvsh5zZeR/open-philanthropy-is-hiring-for-lots-of-operations-roles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy is hiring for (lots of) operations roles!, published by maura on December 14, 2022 on The Effective Altruism Forum.It’s been a busy year at Open Philanthropy; our ambitious growth goals have had us expanding our team rapidly this year (we roughly doubled over the course of 2022). We recently posted 7 operations roles, which are now live on our jobs page and linked below. Each role will be vital to scaling our operations and executing on our grantmaking goals:A Business Operations Lead, who will manage multiple Business Operations Generalists on our team which provides organization-wide tech and office infrastructure, helps with event planning, and handles all forms of time-saving support.Note: the generalist role is open to applicants without any previous experience, and we’re happy to consider candidates graduating in May.A Finance Operations Assistant to help manage accounts payable, contractors, and internal expense reporting.Multiple Grants Associates (both generalist and longtermist-focused) to join the team responsible for structuring, conducting due diligence on, and processing every grant we make.A People Operations Generalist to work on onboarding, retention, offboarding, payroll, benefits, and helping our team grow more international.A Recruiter to help us continue to grow.A Salesforce Administrator & Technical Project Manager to help manage Salesforce (our grantmaking platform) and support other tech needs.If you know someone who would be great for one of these roles, please refer them to us. We welcome external referrals, and have found them extremely helpful in the past. We also offer a $5,000 referral bonus; more information here.A few notes for prospective applicantsWe’re always adjusting our hiring process based on candidate feedback, and we wanted to highlight (or reiterate) a few things we’re trying for this round of roles.For most of the roles posted (all except the Business Operations Lead and Salesforce roles), you only need to apply once to opt into consideration for as many of these roles as you’re interested in.There will be a checkbox on the application form asking if you want to be considered for any other teams. Checking multiple boxes will generally not mean you will be asked to do additional work tests for each role you’re applying to. If you proceed to the end of the process, you may have interviews with multiple hiring managers if multiple teams are still interested in considering you as a candidate, and you may receive an offer for one or multiple of the roles you expressed interest in.Interested candidates can self-select into applying for a 3-month temporary two-way trial for these roles. If you select this track, this means you’ll:Potentially have the opportunity to go through a more streamlined application process (one brief work test, rather than two, with the option to consider alternatives to work tests), if you need a decision more quickly.If you seem like a very strong candidate for full-time employment at the end of that streamlined process, receive an offer of 3 months of employment, at the end of which we will mutually decide whether to continue a longer-term employment relationship (assuming no compliance issues arise, we plan to extend this option to both US and international candidates).Please don’t feel any pressure to select this option — we’re providing it as an experiment, and in case it’s actively preferable for some candidates (e.g. because they’re uncertain they want to transition into a career in ops and would like to try it out, don’t have the financial flexibility to go through a more involved application process, or don’t feel they can make a good longer-term career decision right now for whatever reason). If we get feedback from applicants that this isn’t useful, or is stres...]]>
Wed, 14 Dec 2022 13:19:04 +0000 EA - Open Philanthropy is hiring for (lots of) operations roles! by maura Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy is hiring for (lots of) operations roles!, published by maura on December 14, 2022 on The Effective Altruism Forum.It’s been a busy year at Open Philanthropy; our ambitious growth goals have had us expanding our team rapidly this year (we roughly doubled over the course of 2022). We recently posted 7 operations roles, which are now live on our jobs page and linked below. Each role will be vital to scaling our operations and executing on our grantmaking goals:A Business Operations Lead, who will manage multiple Business Operations Generalists on our team which provides organization-wide tech and office infrastructure, helps with event planning, and handles all forms of time-saving support.Note: the generalist role is open to applicants without any previous experience, and we’re happy to consider candidates graduating in May.A Finance Operations Assistant to help manage accounts payable, contractors, and internal expense reporting.Multiple Grants Associates (both generalist and longtermist-focused) to join the team responsible for structuring, conducting due diligence on, and processing every grant we make.A People Operations Generalist to work on onboarding, retention, offboarding, payroll, benefits, and helping our team grow more international.A Recruiter to help us continue to grow.A Salesforce Administrator & Technical Project Manager to help manage Salesforce (our grantmaking platform) and support other tech needs.If you know someone who would be great for one of these roles, please refer them to us. We welcome external referrals, and have found them extremely helpful in the past. We also offer a $5,000 referral bonus; more information here.A few notes for prospective applicantsWe’re always adjusting our hiring process based on candidate feedback, and we wanted to highlight (or reiterate) a few things we’re trying for this round of roles.For most of the roles posted (all except the Business Operations Lead and Salesforce roles), you only need to apply once to opt into consideration for as many of these roles as you’re interested in.There will be a checkbox on the application form asking if you want to be considered for any other teams. Checking multiple boxes will generally not mean you will be asked to do additional work tests for each role you’re applying to. If you proceed to the end of the process, you may have interviews with multiple hiring managers if multiple teams are still interested in considering you as a candidate, and you may receive an offer for one or multiple of the roles you expressed interest in.Interested candidates can self-select into applying for a 3-month temporary two-way trial for these roles. If you select this track, this means you’ll:Potentially have the opportunity to go through a more streamlined application process (one brief work test, rather than two, with the option to consider alternatives to work tests), if you need a decision more quickly.If you seem like a very strong candidate for full-time employment at the end of that streamlined process, receive an offer of 3 months of employment, at the end of which we will mutually decide whether to continue a longer-term employment relationship (assuming no compliance issues arise, we plan to extend this option to both US and international candidates).Please don’t feel any pressure to select this option — we’re providing it as an experiment, and in case it’s actively preferable for some candidates (e.g. because they’re uncertain they want to transition into a career in ops and would like to try it out, don’t have the financial flexibility to go through a more involved application process, or don’t feel they can make a good longer-term career decision right now for whatever reason). If we get feedback from applicants that this isn’t useful, or is stres...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy is hiring for (lots of) operations roles!, published by maura on December 14, 2022 on The Effective Altruism Forum.It’s been a busy year at Open Philanthropy; our ambitious growth goals have had us expanding our team rapidly this year (we roughly doubled over the course of 2022). We recently posted 7 operations roles, which are now live on our jobs page and linked below. Each role will be vital to scaling our operations and executing on our grantmaking goals:A Business Operations Lead, who will manage multiple Business Operations Generalists on our team which provides organization-wide tech and office infrastructure, helps with event planning, and handles all forms of time-saving support.Note: the generalist role is open to applicants without any previous experience, and we’re happy to consider candidates graduating in May.A Finance Operations Assistant to help manage accounts payable, contractors, and internal expense reporting.Multiple Grants Associates (both generalist and longtermist-focused) to join the team responsible for structuring, conducting due diligence on, and processing every grant we make.A People Operations Generalist to work on onboarding, retention, offboarding, payroll, benefits, and helping our team grow more international.A Recruiter to help us continue to grow.A Salesforce Administrator & Technical Project Manager to help manage Salesforce (our grantmaking platform) and support other tech needs.If you know someone who would be great for one of these roles, please refer them to us. We welcome external referrals, and have found them extremely helpful in the past. We also offer a $5,000 referral bonus; more information here.A few notes for prospective applicantsWe’re always adjusting our hiring process based on candidate feedback, and we wanted to highlight (or reiterate) a few things we’re trying for this round of roles.For most of the roles posted (all except the Business Operations Lead and Salesforce roles), you only need to apply once to opt into consideration for as many of these roles as you’re interested in.There will be a checkbox on the application form asking if you want to be considered for any other teams. Checking multiple boxes will generally not mean you will be asked to do additional work tests for each role you’re applying to. If you proceed to the end of the process, you may have interviews with multiple hiring managers if multiple teams are still interested in considering you as a candidate, and you may receive an offer for one or multiple of the roles you expressed interest in.Interested candidates can self-select into applying for a 3-month temporary two-way trial for these roles. If you select this track, this means you’ll:Potentially have the opportunity to go through a more streamlined application process (one brief work test, rather than two, with the option to consider alternatives to work tests), if you need a decision more quickly.If you seem like a very strong candidate for full-time employment at the end of that streamlined process, receive an offer of 3 months of employment, at the end of which we will mutually decide whether to continue a longer-term employment relationship (assuming no compliance issues arise, we plan to extend this option to both US and international candidates).Please don’t feel any pressure to select this option — we’re providing it as an experiment, and in case it’s actively preferable for some candidates (e.g. because they’re uncertain they want to transition into a career in ops and would like to try it out, don’t have the financial flexibility to go through a more involved application process, or don’t feel they can make a good longer-term career decision right now for whatever reason). If we get feedback from applicants that this isn’t useful, or is stres...]]>
maura https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:27 None full 4119
4SpYcqZvuyZ8JyiCK_EA EA - Neuron Count-Based Measures May Currently Underweight Suffering in Farmed Fish by MHR Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Neuron Count-Based Measures May Currently Underweight Suffering in Farmed Fish, published by MHR on December 13, 2022 on The Effective Altruism Forum.Epistemic status: layperson’s attempt to understand the relevant literature. I welcome corrections from anyone with a better understanding of fish biology.SummaryNeuron counts have historically been used as a proxy for the moral weight of different animal species. While alternate systems have been proposed in response to criticisms of using neuron counts alone, neuron counts likely remain a useful input into these more holistic weighting processes.The only publicly-available empirical reports of fish neuron counts sample exclusively from very small species (<1 g bodyweight), but many farmed fish are of species at least 1000x more massive.Some sources apply neuron count estimates from small fish to larger farmed fish without correction, which seems very unlikely to be reasonable given the large size differences involved.Even among sources that try to account for these size differences, there is significant uncertainty in how to extrapolate neuron count data from small fish to larger fish.As a result, animal welfare advocates should be highly skeptical of current neuron count-based estimates of the moral weight of farmed fish, and should consider funding studies to empirically measure neuron counts in these species.The role of neuron counts in estimating moral significanceAs long as there are limited resources available for improving animal welfare, advocates will require a means of morally weighting the harm that occurs to different species. Historically, neuron counts have often been used as a proxy for moral weight in interspecies comparisons, sometimes as the sole factor used. Researchers have raised issues with the direct use of neuron counts in determining moral weights, with a prominent recent example being a segment of the Rethink Priorities Moral Weight Project, which argues against the validity of neuron counts alone as a proxy for moral importance. However, even the Rethink Priorities team does not argue that neuron counts are useless, but rather, that they should be considered as just one component among many in establishing moral weight:Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely. Rather, neuron counts should be combined with other metrics in an overall weighted score that includes information about whether different species have welfare-relevant capacities.Given this context, neuron counts will likely continue to be a meaningful factor in most plausible models for estimating interspecies relative moral importance; it is therefore important both morally and epistemically that neuron count estimates are accurate. This is especially true with respect to those species for which we can plausibly impact the welfare of a very large number of individual members. Farmed fish, of which more than 50 billion are slaughtered each year (probably more than the number of farmed chickens), are easily among those species for which it is most important to obtain a high-quality neuron count estimate.Current neuron count data for fishInvincible Wellbeing's Planetary Animal Welfare Survey (PAWS) spreadsheet is, to the best of my knowledge, the most comprehensive literature review of neuron count estimates across animal species. Despite this, only two fish species, both weighing less than one gram, appear in PAWS.Table 1. Fish neuron counts (PAWS)Guppy (Poecilia reticulata)0.41 g4.3 millionLinkZebrafish (Danio rerio)0.75 g10 millionLinkSpeciesAverage Adult Body WeightNeuron CountSourceA further independent literature review revealed only a single additional estimate of a fish species’s neuron count: in What We Owe the Future, Will ...]]>
MHR https://forum.effectivealtruism.org/posts/4SpYcqZvuyZ8JyiCK/neuron-count-based-measures-may-currently-underweight Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Neuron Count-Based Measures May Currently Underweight Suffering in Farmed Fish, published by MHR on December 13, 2022 on The Effective Altruism Forum.Epistemic status: layperson’s attempt to understand the relevant literature. I welcome corrections from anyone with a better understanding of fish biology.SummaryNeuron counts have historically been used as a proxy for the moral weight of different animal species. While alternate systems have been proposed in response to criticisms of using neuron counts alone, neuron counts likely remain a useful input into these more holistic weighting processes.The only publicly-available empirical reports of fish neuron counts sample exclusively from very small species (<1 g bodyweight), but many farmed fish are of species at least 1000x more massive.Some sources apply neuron count estimates from small fish to larger farmed fish without correction, which seems very unlikely to be reasonable given the large size differences involved.Even among sources that try to account for these size differences, there is significant uncertainty in how to extrapolate neuron count data from small fish to larger fish.As a result, animal welfare advocates should be highly skeptical of current neuron count-based estimates of the moral weight of farmed fish, and should consider funding studies to empirically measure neuron counts in these species.The role of neuron counts in estimating moral significanceAs long as there are limited resources available for improving animal welfare, advocates will require a means of morally weighting the harm that occurs to different species. Historically, neuron counts have often been used as a proxy for moral weight in interspecies comparisons, sometimes as the sole factor used. Researchers have raised issues with the direct use of neuron counts in determining moral weights, with a prominent recent example being a segment of the Rethink Priorities Moral Weight Project, which argues against the validity of neuron counts alone as a proxy for moral importance. However, even the Rethink Priorities team does not argue that neuron counts are useless, but rather, that they should be considered as just one component among many in establishing moral weight:Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely. Rather, neuron counts should be combined with other metrics in an overall weighted score that includes information about whether different species have welfare-relevant capacities.Given this context, neuron counts will likely continue to be a meaningful factor in most plausible models for estimating interspecies relative moral importance; it is therefore important both morally and epistemically that neuron count estimates are accurate. This is especially true with respect to those species for which we can plausibly impact the welfare of a very large number of individual members. Farmed fish, of which more than 50 billion are slaughtered each year (probably more than the number of farmed chickens), are easily among those species for which it is most important to obtain a high-quality neuron count estimate.Current neuron count data for fishInvincible Wellbeing's Planetary Animal Welfare Survey (PAWS) spreadsheet is, to the best of my knowledge, the most comprehensive literature review of neuron count estimates across animal species. Despite this, only two fish species, both weighing less than one gram, appear in PAWS.Table 1. Fish neuron counts (PAWS)Guppy (Poecilia reticulata)0.41 g4.3 millionLinkZebrafish (Danio rerio)0.75 g10 millionLinkSpeciesAverage Adult Body WeightNeuron CountSourceA further independent literature review revealed only a single additional estimate of a fish species’s neuron count: in What We Owe the Future, Will ...]]>
Wed, 14 Dec 2022 04:26:49 +0000 EA - Neuron Count-Based Measures May Currently Underweight Suffering in Farmed Fish by MHR Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Neuron Count-Based Measures May Currently Underweight Suffering in Farmed Fish, published by MHR on December 13, 2022 on The Effective Altruism Forum.Epistemic status: layperson’s attempt to understand the relevant literature. I welcome corrections from anyone with a better understanding of fish biology.SummaryNeuron counts have historically been used as a proxy for the moral weight of different animal species. While alternate systems have been proposed in response to criticisms of using neuron counts alone, neuron counts likely remain a useful input into these more holistic weighting processes.The only publicly-available empirical reports of fish neuron counts sample exclusively from very small species (<1 g bodyweight), but many farmed fish are of species at least 1000x more massive.Some sources apply neuron count estimates from small fish to larger farmed fish without correction, which seems very unlikely to be reasonable given the large size differences involved.Even among sources that try to account for these size differences, there is significant uncertainty in how to extrapolate neuron count data from small fish to larger fish.As a result, animal welfare advocates should be highly skeptical of current neuron count-based estimates of the moral weight of farmed fish, and should consider funding studies to empirically measure neuron counts in these species.The role of neuron counts in estimating moral significanceAs long as there are limited resources available for improving animal welfare, advocates will require a means of morally weighting the harm that occurs to different species. Historically, neuron counts have often been used as a proxy for moral weight in interspecies comparisons, sometimes as the sole factor used. Researchers have raised issues with the direct use of neuron counts in determining moral weights, with a prominent recent example being a segment of the Rethink Priorities Moral Weight Project, which argues against the validity of neuron counts alone as a proxy for moral importance. However, even the Rethink Priorities team does not argue that neuron counts are useless, but rather, that they should be considered as just one component among many in establishing moral weight:Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely. Rather, neuron counts should be combined with other metrics in an overall weighted score that includes information about whether different species have welfare-relevant capacities.Given this context, neuron counts will likely continue to be a meaningful factor in most plausible models for estimating interspecies relative moral importance; it is therefore important both morally and epistemically that neuron count estimates are accurate. This is especially true with respect to those species for which we can plausibly impact the welfare of a very large number of individual members. Farmed fish, of which more than 50 billion are slaughtered each year (probably more than the number of farmed chickens), are easily among those species for which it is most important to obtain a high-quality neuron count estimate.Current neuron count data for fishInvincible Wellbeing's Planetary Animal Welfare Survey (PAWS) spreadsheet is, to the best of my knowledge, the most comprehensive literature review of neuron count estimates across animal species. Despite this, only two fish species, both weighing less than one gram, appear in PAWS.Table 1. Fish neuron counts (PAWS)Guppy (Poecilia reticulata)0.41 g4.3 millionLinkZebrafish (Danio rerio)0.75 g10 millionLinkSpeciesAverage Adult Body WeightNeuron CountSourceA further independent literature review revealed only a single additional estimate of a fish species’s neuron count: in What We Owe the Future, Will ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Neuron Count-Based Measures May Currently Underweight Suffering in Farmed Fish, published by MHR on December 13, 2022 on The Effective Altruism Forum.Epistemic status: layperson’s attempt to understand the relevant literature. I welcome corrections from anyone with a better understanding of fish biology.SummaryNeuron counts have historically been used as a proxy for the moral weight of different animal species. While alternate systems have been proposed in response to criticisms of using neuron counts alone, neuron counts likely remain a useful input into these more holistic weighting processes.The only publicly-available empirical reports of fish neuron counts sample exclusively from very small species (<1 g bodyweight), but many farmed fish are of species at least 1000x more massive.Some sources apply neuron count estimates from small fish to larger farmed fish without correction, which seems very unlikely to be reasonable given the large size differences involved.Even among sources that try to account for these size differences, there is significant uncertainty in how to extrapolate neuron count data from small fish to larger fish.As a result, animal welfare advocates should be highly skeptical of current neuron count-based estimates of the moral weight of farmed fish, and should consider funding studies to empirically measure neuron counts in these species.The role of neuron counts in estimating moral significanceAs long as there are limited resources available for improving animal welfare, advocates will require a means of morally weighting the harm that occurs to different species. Historically, neuron counts have often been used as a proxy for moral weight in interspecies comparisons, sometimes as the sole factor used. Researchers have raised issues with the direct use of neuron counts in determining moral weights, with a prominent recent example being a segment of the Rethink Priorities Moral Weight Project, which argues against the validity of neuron counts alone as a proxy for moral importance. However, even the Rethink Priorities team does not argue that neuron counts are useless, but rather, that they should be considered as just one component among many in establishing moral weight:Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely. Rather, neuron counts should be combined with other metrics in an overall weighted score that includes information about whether different species have welfare-relevant capacities.Given this context, neuron counts will likely continue to be a meaningful factor in most plausible models for estimating interspecies relative moral importance; it is therefore important both morally and epistemically that neuron count estimates are accurate. This is especially true with respect to those species for which we can plausibly impact the welfare of a very large number of individual members. Farmed fish, of which more than 50 billion are slaughtered each year (probably more than the number of farmed chickens), are easily among those species for which it is most important to obtain a high-quality neuron count estimate.Current neuron count data for fishInvincible Wellbeing's Planetary Animal Welfare Survey (PAWS) spreadsheet is, to the best of my knowledge, the most comprehensive literature review of neuron count estimates across animal species. Despite this, only two fish species, both weighing less than one gram, appear in PAWS.Table 1. Fish neuron counts (PAWS)Guppy (Poecilia reticulata)0.41 g4.3 millionLinkZebrafish (Danio rerio)0.75 g10 millionLinkSpeciesAverage Adult Body WeightNeuron CountSourceA further independent literature review revealed only a single additional estimate of a fish species’s neuron count: in What We Owe the Future, Will ...]]>
MHR https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:46 None full 4111
uZY7fqXrP6P4vkK7g_EA EA - Announcing ERA: a spin-off from CERI by Nandini Shiralkar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing ERA: a spin-off from CERI, published by Nandini Shiralkar on December 13, 2022 on The Effective Altruism Forum.The Existential Risk Alliance (ERA) is a non-profit project working to mitigate existential risk by equipping the community of young researchers and entrepreneurs with the skills and the knowledge necessary to tackle existential risks via our summer research fellowship programme.HighlightsThe CERI Fellowship is now a spin-off from the Cambridge Existential Risks Initiative (CERI), and will be run by a new nonprofit project called ERA in 2023 and beyond.Join the ERA team! ERA is looking for focus area leads in our main cause areas (e.g., AI Safety, Biosecurity, etc.) who could support the core team and help us grow. We are accepting expressions of interest for certain positions that we haven't yet opened hiring rounds for.ERA Fellowship Mentors’ expression of interest form is now live. ERA mentors will advise our summer research fellows, an international cohort of top students and early career professionals who are dedicated to using their summers (and in many cases, their careers) to help address large-scale threats to humanity's future.ERA Cambridge Fellows’ expression of interest form is also live - we expect applications for 2023 summer to open in late January/early February.CERI’s evolution to ERAThe Cambridge Existential Risks Initiative (CERI) was founded in April 2021 as a project at the University of Cambridge with the aim to reduce existential risk (x-risk) by raising awareness and promoting x-risk research, via initiatives such as our flagship Summer Research Fellowship programme.Since then, CERI has evolved into a meta organisation supporting many projects within this space; some projects have been run entirely by students (such as the Existential Risks Introductory Course), and some projects have been run by a mixture of students and full time community builders (such as the CERI Fellowship).Now, CERI will refocus to be a University of Cambridge student group, engaging with the local community via projects such as HackX. ERA will be running the ERA Fellowship (which was previously called the CERI Fellowship) in 2023 and beyond.Time for a new ERA!Our mission is to reduce the probability of an existential catastrophe. We believe that one of the key ways to reduce existential risk lies in fostering a community of dedicated and knowledgeable x-risk researchers. Through our summer research fellowship programme, we aim to identify and support aspiring researchers in this field, providing them with the resources and mentorship needed to succeed.The establishment of ERA will allow us to optimise our programming and support ecosystem for the Summer Research Fellowship programme. We plan to take an evidence-based approach to community building to improve our existing programme and to find promising new approaches in this space.Why rename CERI to ERA?The naming situation with the -ERIs is just very confusing. There is CHERI, CERI, SERI, BERI, and LERI (the ones I know of). We think that we could potentially reach a different subset of our target demographic by rebranding, and also reduce the amount of time we spend clarifying questions over exactly which -ERI we are. The acronym ERA also has very positive connotations, and is fairly short/catchy.Join the ERA teamERA is looking for passionate and driven focus area leads who could support the core ERA team and help us grow. We are accepting expressions of interest for certain positions that we haven't yet opened hiring rounds for.We have made the first step as easy as possible, and we recommend submitting an expression of interest even if you are not sure whether you are a good fit. If we think you could be a particularly good fit for our team, we will reach out to you.ERA Cambri...]]>
Nandini Shiralkar https://forum.effectivealtruism.org/posts/uZY7fqXrP6P4vkK7g/announcing-era-a-spin-off-from-ceri Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing ERA: a spin-off from CERI, published by Nandini Shiralkar on December 13, 2022 on The Effective Altruism Forum.The Existential Risk Alliance (ERA) is a non-profit project working to mitigate existential risk by equipping the community of young researchers and entrepreneurs with the skills and the knowledge necessary to tackle existential risks via our summer research fellowship programme.HighlightsThe CERI Fellowship is now a spin-off from the Cambridge Existential Risks Initiative (CERI), and will be run by a new nonprofit project called ERA in 2023 and beyond.Join the ERA team! ERA is looking for focus area leads in our main cause areas (e.g., AI Safety, Biosecurity, etc.) who could support the core team and help us grow. We are accepting expressions of interest for certain positions that we haven't yet opened hiring rounds for.ERA Fellowship Mentors’ expression of interest form is now live. ERA mentors will advise our summer research fellows, an international cohort of top students and early career professionals who are dedicated to using their summers (and in many cases, their careers) to help address large-scale threats to humanity's future.ERA Cambridge Fellows’ expression of interest form is also live - we expect applications for 2023 summer to open in late January/early February.CERI’s evolution to ERAThe Cambridge Existential Risks Initiative (CERI) was founded in April 2021 as a project at the University of Cambridge with the aim to reduce existential risk (x-risk) by raising awareness and promoting x-risk research, via initiatives such as our flagship Summer Research Fellowship programme.Since then, CERI has evolved into a meta organisation supporting many projects within this space; some projects have been run entirely by students (such as the Existential Risks Introductory Course), and some projects have been run by a mixture of students and full time community builders (such as the CERI Fellowship).Now, CERI will refocus to be a University of Cambridge student group, engaging with the local community via projects such as HackX. ERA will be running the ERA Fellowship (which was previously called the CERI Fellowship) in 2023 and beyond.Time for a new ERA!Our mission is to reduce the probability of an existential catastrophe. We believe that one of the key ways to reduce existential risk lies in fostering a community of dedicated and knowledgeable x-risk researchers. Through our summer research fellowship programme, we aim to identify and support aspiring researchers in this field, providing them with the resources and mentorship needed to succeed.The establishment of ERA will allow us to optimise our programming and support ecosystem for the Summer Research Fellowship programme. We plan to take an evidence-based approach to community building to improve our existing programme and to find promising new approaches in this space.Why rename CERI to ERA?The naming situation with the -ERIs is just very confusing. There is CHERI, CERI, SERI, BERI, and LERI (the ones I know of). We think that we could potentially reach a different subset of our target demographic by rebranding, and also reduce the amount of time we spend clarifying questions over exactly which -ERI we are. The acronym ERA also has very positive connotations, and is fairly short/catchy.Join the ERA teamERA is looking for passionate and driven focus area leads who could support the core ERA team and help us grow. We are accepting expressions of interest for certain positions that we haven't yet opened hiring rounds for.We have made the first step as easy as possible, and we recommend submitting an expression of interest even if you are not sure whether you are a good fit. If we think you could be a particularly good fit for our team, we will reach out to you.ERA Cambri...]]>
Wed, 14 Dec 2022 00:01:00 +0000 EA - Announcing ERA: a spin-off from CERI by Nandini Shiralkar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing ERA: a spin-off from CERI, published by Nandini Shiralkar on December 13, 2022 on The Effective Altruism Forum.The Existential Risk Alliance (ERA) is a non-profit project working to mitigate existential risk by equipping the community of young researchers and entrepreneurs with the skills and the knowledge necessary to tackle existential risks via our summer research fellowship programme.HighlightsThe CERI Fellowship is now a spin-off from the Cambridge Existential Risks Initiative (CERI), and will be run by a new nonprofit project called ERA in 2023 and beyond.Join the ERA team! ERA is looking for focus area leads in our main cause areas (e.g., AI Safety, Biosecurity, etc.) who could support the core team and help us grow. We are accepting expressions of interest for certain positions that we haven't yet opened hiring rounds for.ERA Fellowship Mentors’ expression of interest form is now live. ERA mentors will advise our summer research fellows, an international cohort of top students and early career professionals who are dedicated to using their summers (and in many cases, their careers) to help address large-scale threats to humanity's future.ERA Cambridge Fellows’ expression of interest form is also live - we expect applications for 2023 summer to open in late January/early February.CERI’s evolution to ERAThe Cambridge Existential Risks Initiative (CERI) was founded in April 2021 as a project at the University of Cambridge with the aim to reduce existential risk (x-risk) by raising awareness and promoting x-risk research, via initiatives such as our flagship Summer Research Fellowship programme.Since then, CERI has evolved into a meta organisation supporting many projects within this space; some projects have been run entirely by students (such as the Existential Risks Introductory Course), and some projects have been run by a mixture of students and full time community builders (such as the CERI Fellowship).Now, CERI will refocus to be a University of Cambridge student group, engaging with the local community via projects such as HackX. ERA will be running the ERA Fellowship (which was previously called the CERI Fellowship) in 2023 and beyond.Time for a new ERA!Our mission is to reduce the probability of an existential catastrophe. We believe that one of the key ways to reduce existential risk lies in fostering a community of dedicated and knowledgeable x-risk researchers. Through our summer research fellowship programme, we aim to identify and support aspiring researchers in this field, providing them with the resources and mentorship needed to succeed.The establishment of ERA will allow us to optimise our programming and support ecosystem for the Summer Research Fellowship programme. We plan to take an evidence-based approach to community building to improve our existing programme and to find promising new approaches in this space.Why rename CERI to ERA?The naming situation with the -ERIs is just very confusing. There is CHERI, CERI, SERI, BERI, and LERI (the ones I know of). We think that we could potentially reach a different subset of our target demographic by rebranding, and also reduce the amount of time we spend clarifying questions over exactly which -ERI we are. The acronym ERA also has very positive connotations, and is fairly short/catchy.Join the ERA teamERA is looking for passionate and driven focus area leads who could support the core ERA team and help us grow. We are accepting expressions of interest for certain positions that we haven't yet opened hiring rounds for.We have made the first step as easy as possible, and we recommend submitting an expression of interest even if you are not sure whether you are a good fit. If we think you could be a particularly good fit for our team, we will reach out to you.ERA Cambri...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing ERA: a spin-off from CERI, published by Nandini Shiralkar on December 13, 2022 on The Effective Altruism Forum.The Existential Risk Alliance (ERA) is a non-profit project working to mitigate existential risk by equipping the community of young researchers and entrepreneurs with the skills and the knowledge necessary to tackle existential risks via our summer research fellowship programme.HighlightsThe CERI Fellowship is now a spin-off from the Cambridge Existential Risks Initiative (CERI), and will be run by a new nonprofit project called ERA in 2023 and beyond.Join the ERA team! ERA is looking for focus area leads in our main cause areas (e.g., AI Safety, Biosecurity, etc.) who could support the core team and help us grow. We are accepting expressions of interest for certain positions that we haven't yet opened hiring rounds for.ERA Fellowship Mentors’ expression of interest form is now live. ERA mentors will advise our summer research fellows, an international cohort of top students and early career professionals who are dedicated to using their summers (and in many cases, their careers) to help address large-scale threats to humanity's future.ERA Cambridge Fellows’ expression of interest form is also live - we expect applications for 2023 summer to open in late January/early February.CERI’s evolution to ERAThe Cambridge Existential Risks Initiative (CERI) was founded in April 2021 as a project at the University of Cambridge with the aim to reduce existential risk (x-risk) by raising awareness and promoting x-risk research, via initiatives such as our flagship Summer Research Fellowship programme.Since then, CERI has evolved into a meta organisation supporting many projects within this space; some projects have been run entirely by students (such as the Existential Risks Introductory Course), and some projects have been run by a mixture of students and full time community builders (such as the CERI Fellowship).Now, CERI will refocus to be a University of Cambridge student group, engaging with the local community via projects such as HackX. ERA will be running the ERA Fellowship (which was previously called the CERI Fellowship) in 2023 and beyond.Time for a new ERA!Our mission is to reduce the probability of an existential catastrophe. We believe that one of the key ways to reduce existential risk lies in fostering a community of dedicated and knowledgeable x-risk researchers. Through our summer research fellowship programme, we aim to identify and support aspiring researchers in this field, providing them with the resources and mentorship needed to succeed.The establishment of ERA will allow us to optimise our programming and support ecosystem for the Summer Research Fellowship programme. We plan to take an evidence-based approach to community building to improve our existing programme and to find promising new approaches in this space.Why rename CERI to ERA?The naming situation with the -ERIs is just very confusing. There is CHERI, CERI, SERI, BERI, and LERI (the ones I know of). We think that we could potentially reach a different subset of our target demographic by rebranding, and also reduce the amount of time we spend clarifying questions over exactly which -ERI we are. The acronym ERA also has very positive connotations, and is fairly short/catchy.Join the ERA teamERA is looking for passionate and driven focus area leads who could support the core ERA team and help us grow. We are accepting expressions of interest for certain positions that we haven't yet opened hiring rounds for.We have made the first step as easy as possible, and we recommend submitting an expression of interest even if you are not sure whether you are a good fit. If we think you could be a particularly good fit for our team, we will reach out to you.ERA Cambri...]]>
Nandini Shiralkar https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:52 None full 4110
rMucQN9EkL8u4xnjd_EA EA - Improving EA events: start early and invest in content and stewardship by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Improving EA events: start early & invest in content and stewardship, published by Vaidehi Agarwalla on December 12, 2022 on The Effective Altruism Forum.This post is part of an ongoing series: Events in EA: Learnings and Critiques.TL:DR;The way many EA events are run is unsustainable for event organisers and does not leave sufficient slack for running excellent events and experimenting for learning.We share our models for better event planning and highlight some key challenges with current approaches to event operations (event ops). We make the case that hiring early, finding a venue and outsourcing logistics will create more capacity to invest in the most important (but relatively neglected) aspects of event ops - content, structure and stewardship. More capacity can also allow organisers to deviate from defaults and innovate with new event formats and styles, and engage in more resource-intensive programming.We aim to set realistic expectations and provide practical guidance for newer organisers to better prepare them. Good event ops people are not easily replaceable. We’ve observed a trend of event organisers being put under a lot of (unnecessary) pressure due to a lack of planning and capacity. We give a bunch of real (anonymised) examples throughout this post to illustrate our point.We don't go into specific suggestions because it's very event-dependent, but we'd estimate that adding ~20-40% of lead time and/or capacity would help achieve these goals.We’d love to hear if you're an event organiser and don’t find this helpful or actionable. Although we're only talking about events, we think it's possible there are similar trends in other areas -we’d love to hear about them if you've got observations!The Hierarchy of Events PlanningIn Maslow’s Hierarchy of Needs, needs lower down in the hierarchy must be satisfied before individuals can attend to needs higher up. Our pyramid is similar - components lower down in the hierarchy are more basic and are necessary for an event to exist. The exact configuration of each component affects the ones above it (e.g. if your venue only has 2 rooms, you probably can’t have an agenda with 3 tracks of events running simultaneously).However, influence is not strictly linear - so the structure of the event might influence which venue you choose (e.g. if you want to create an informal atmosphere, you might choose to host an event in a smaller, more casual venue instead of a formal one, or if you need a lot of stewardship for newer attendees, you might hire a larger team to manage those needs).Consider LeverageIf you could invest 1-10% more resources (time) for 10-30% return, this seems worth considering. In the following, we’re going to try and point out the highest leverage points of each step of the pyramid - the places where spending that extra time could make a big difference and open up more possibilities.Work backwards from specific goalsaka don’t start with “I want to run a conference”It may sound trite, but no event makes sense without specific, concrete, and measurable goals - you need to know the who and why, and then work backwards from there. Start with a specific goal like “I want to facilitate peer- to peer bonding for my local university group” and then list all the ways that could achieved, (host a social dinner, a retreat, a 1-day hike, a group project etc.), rather than do an event format because it's been done before and seems effective.Here are some goals an event might have - note that these goals are pretty different from each other, and it’s pretty hard to create events that could excel more than a couple at once:Networking - participants form important connections with others - either peers or mentorsLearning/Skilling up - participants learn from each other or experts on a particular topicId...]]>
Vaidehi Agarwalla https://forum.effectivealtruism.org/posts/rMucQN9EkL8u4xnjd/improving-ea-events-start-early-and-invest-in-content-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Improving EA events: start early & invest in content and stewardship, published by Vaidehi Agarwalla on December 12, 2022 on The Effective Altruism Forum.This post is part of an ongoing series: Events in EA: Learnings and Critiques.TL:DR;The way many EA events are run is unsustainable for event organisers and does not leave sufficient slack for running excellent events and experimenting for learning.We share our models for better event planning and highlight some key challenges with current approaches to event operations (event ops). We make the case that hiring early, finding a venue and outsourcing logistics will create more capacity to invest in the most important (but relatively neglected) aspects of event ops - content, structure and stewardship. More capacity can also allow organisers to deviate from defaults and innovate with new event formats and styles, and engage in more resource-intensive programming.We aim to set realistic expectations and provide practical guidance for newer organisers to better prepare them. Good event ops people are not easily replaceable. We’ve observed a trend of event organisers being put under a lot of (unnecessary) pressure due to a lack of planning and capacity. We give a bunch of real (anonymised) examples throughout this post to illustrate our point.We don't go into specific suggestions because it's very event-dependent, but we'd estimate that adding ~20-40% of lead time and/or capacity would help achieve these goals.We’d love to hear if you're an event organiser and don’t find this helpful or actionable. Although we're only talking about events, we think it's possible there are similar trends in other areas -we’d love to hear about them if you've got observations!The Hierarchy of Events PlanningIn Maslow’s Hierarchy of Needs, needs lower down in the hierarchy must be satisfied before individuals can attend to needs higher up. Our pyramid is similar - components lower down in the hierarchy are more basic and are necessary for an event to exist. The exact configuration of each component affects the ones above it (e.g. if your venue only has 2 rooms, you probably can’t have an agenda with 3 tracks of events running simultaneously).However, influence is not strictly linear - so the structure of the event might influence which venue you choose (e.g. if you want to create an informal atmosphere, you might choose to host an event in a smaller, more casual venue instead of a formal one, or if you need a lot of stewardship for newer attendees, you might hire a larger team to manage those needs).Consider LeverageIf you could invest 1-10% more resources (time) for 10-30% return, this seems worth considering. In the following, we’re going to try and point out the highest leverage points of each step of the pyramid - the places where spending that extra time could make a big difference and open up more possibilities.Work backwards from specific goalsaka don’t start with “I want to run a conference”It may sound trite, but no event makes sense without specific, concrete, and measurable goals - you need to know the who and why, and then work backwards from there. Start with a specific goal like “I want to facilitate peer- to peer bonding for my local university group” and then list all the ways that could achieved, (host a social dinner, a retreat, a 1-day hike, a group project etc.), rather than do an event format because it's been done before and seems effective.Here are some goals an event might have - note that these goals are pretty different from each other, and it’s pretty hard to create events that could excel more than a couple at once:Networking - participants form important connections with others - either peers or mentorsLearning/Skilling up - participants learn from each other or experts on a particular topicId...]]>
Tue, 13 Dec 2022 21:43:05 +0000 EA - Improving EA events: start early and invest in content and stewardship by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Improving EA events: start early & invest in content and stewardship, published by Vaidehi Agarwalla on December 12, 2022 on The Effective Altruism Forum.This post is part of an ongoing series: Events in EA: Learnings and Critiques.TL:DR;The way many EA events are run is unsustainable for event organisers and does not leave sufficient slack for running excellent events and experimenting for learning.We share our models for better event planning and highlight some key challenges with current approaches to event operations (event ops). We make the case that hiring early, finding a venue and outsourcing logistics will create more capacity to invest in the most important (but relatively neglected) aspects of event ops - content, structure and stewardship. More capacity can also allow organisers to deviate from defaults and innovate with new event formats and styles, and engage in more resource-intensive programming.We aim to set realistic expectations and provide practical guidance for newer organisers to better prepare them. Good event ops people are not easily replaceable. We’ve observed a trend of event organisers being put under a lot of (unnecessary) pressure due to a lack of planning and capacity. We give a bunch of real (anonymised) examples throughout this post to illustrate our point.We don't go into specific suggestions because it's very event-dependent, but we'd estimate that adding ~20-40% of lead time and/or capacity would help achieve these goals.We’d love to hear if you're an event organiser and don’t find this helpful or actionable. Although we're only talking about events, we think it's possible there are similar trends in other areas -we’d love to hear about them if you've got observations!The Hierarchy of Events PlanningIn Maslow’s Hierarchy of Needs, needs lower down in the hierarchy must be satisfied before individuals can attend to needs higher up. Our pyramid is similar - components lower down in the hierarchy are more basic and are necessary for an event to exist. The exact configuration of each component affects the ones above it (e.g. if your venue only has 2 rooms, you probably can’t have an agenda with 3 tracks of events running simultaneously).However, influence is not strictly linear - so the structure of the event might influence which venue you choose (e.g. if you want to create an informal atmosphere, you might choose to host an event in a smaller, more casual venue instead of a formal one, or if you need a lot of stewardship for newer attendees, you might hire a larger team to manage those needs).Consider LeverageIf you could invest 1-10% more resources (time) for 10-30% return, this seems worth considering. In the following, we’re going to try and point out the highest leverage points of each step of the pyramid - the places where spending that extra time could make a big difference and open up more possibilities.Work backwards from specific goalsaka don’t start with “I want to run a conference”It may sound trite, but no event makes sense without specific, concrete, and measurable goals - you need to know the who and why, and then work backwards from there. Start with a specific goal like “I want to facilitate peer- to peer bonding for my local university group” and then list all the ways that could achieved, (host a social dinner, a retreat, a 1-day hike, a group project etc.), rather than do an event format because it's been done before and seems effective.Here are some goals an event might have - note that these goals are pretty different from each other, and it’s pretty hard to create events that could excel more than a couple at once:Networking - participants form important connections with others - either peers or mentorsLearning/Skilling up - participants learn from each other or experts on a particular topicId...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Improving EA events: start early & invest in content and stewardship, published by Vaidehi Agarwalla on December 12, 2022 on The Effective Altruism Forum.This post is part of an ongoing series: Events in EA: Learnings and Critiques.TL:DR;The way many EA events are run is unsustainable for event organisers and does not leave sufficient slack for running excellent events and experimenting for learning.We share our models for better event planning and highlight some key challenges with current approaches to event operations (event ops). We make the case that hiring early, finding a venue and outsourcing logistics will create more capacity to invest in the most important (but relatively neglected) aspects of event ops - content, structure and stewardship. More capacity can also allow organisers to deviate from defaults and innovate with new event formats and styles, and engage in more resource-intensive programming.We aim to set realistic expectations and provide practical guidance for newer organisers to better prepare them. Good event ops people are not easily replaceable. We’ve observed a trend of event organisers being put under a lot of (unnecessary) pressure due to a lack of planning and capacity. We give a bunch of real (anonymised) examples throughout this post to illustrate our point.We don't go into specific suggestions because it's very event-dependent, but we'd estimate that adding ~20-40% of lead time and/or capacity would help achieve these goals.We’d love to hear if you're an event organiser and don’t find this helpful or actionable. Although we're only talking about events, we think it's possible there are similar trends in other areas -we’d love to hear about them if you've got observations!The Hierarchy of Events PlanningIn Maslow’s Hierarchy of Needs, needs lower down in the hierarchy must be satisfied before individuals can attend to needs higher up. Our pyramid is similar - components lower down in the hierarchy are more basic and are necessary for an event to exist. The exact configuration of each component affects the ones above it (e.g. if your venue only has 2 rooms, you probably can’t have an agenda with 3 tracks of events running simultaneously).However, influence is not strictly linear - so the structure of the event might influence which venue you choose (e.g. if you want to create an informal atmosphere, you might choose to host an event in a smaller, more casual venue instead of a formal one, or if you need a lot of stewardship for newer attendees, you might hire a larger team to manage those needs).Consider LeverageIf you could invest 1-10% more resources (time) for 10-30% return, this seems worth considering. In the following, we’re going to try and point out the highest leverage points of each step of the pyramid - the places where spending that extra time could make a big difference and open up more possibilities.Work backwards from specific goalsaka don’t start with “I want to run a conference”It may sound trite, but no event makes sense without specific, concrete, and measurable goals - you need to know the who and why, and then work backwards from there. Start with a specific goal like “I want to facilitate peer- to peer bonding for my local university group” and then list all the ways that could achieved, (host a social dinner, a retreat, a 1-day hike, a group project etc.), rather than do an event format because it's been done before and seems effective.Here are some goals an event might have - note that these goals are pretty different from each other, and it’s pretty hard to create events that could excel more than a couple at once:Networking - participants form important connections with others - either peers or mentorsLearning/Skilling up - participants learn from each other or experts on a particular topicId...]]>
Vaidehi Agarwalla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 23:20 None full 4106
HBgAruFrZhFKBFfDa_EA EA - Applications open for AGI Safety Fundamentals: Alignment Course by Jamie Bernardi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications open for AGI Safety Fundamentals: Alignment Course, published by Jamie Bernardi on December 13, 2022 on The Effective Altruism Forum.The AGI Safety Fundamentals (AGISF): Alignment Course is designed to introduce the key ideas in AGI safety and alignment, and provide a space and support for participants to engage, evaluate and debate these arguments. Participants will meet others who are excited to help mitigate risks from future AI systems, and explore opportunities for their next steps in the field.The course is being run by the same team as for previous rounds, now under a new project called BlueDot Impact.Time commitmentThe course will run from February-April 2023. It comprises 8 weeks of reading and virtual small-group discussions, followed by a 4-week capstone project.The time commitment is around 4 hours per week, so participants can engage with the course alongside full-time work or study.Course structureParticipants are provided with structured content to work through, alongside weekly, facilitated discussion groups. Participants will be grouped depending on their ML experience and background knowledge about AI safety. In these sessions, participants will engage in activities and discussions with other participants, guided by the facilitator. The facilitator will be knowledgeable about AI safety, and can help to answer participants’ questions.The course is followed by a capstone project, which is an opportunity for participants to synthesise their views on the field and start thinking through how to put these ideas into practice, or start getting relevant skills and experience that will help them with the next step in their career.The course content is designed by Richard Ngo (Governance team at OpenAI, previously a research engineer on the AGI safety team at DeepMind). You can read the curriculum content here.Target audienceWe are most excited about applicants who would be in strong position to pursue technical alignment research in their career, such as professional software engineers and students studying technical subjects (e.g. CS/maths/physics/engineering).That said, we consider all applicants and expect 25-50% of the course to consist of people with a variety of other backgrounds, so we encourage you to apply regardless. This includes community builders who would benefit from a deeper understanding of the concepts in AI alignment.We will be running another course on AI Governance in early 2023 and expect a different distribution of target participants.Apply now!If you would like to be considered for the next round of the courses, starting in February 2023, please apply here by Thursday 5th January 2023. More details can be found here. We will be evaluating applications on a rolling basis and we aim to let you know the outcome of your application by mid-January 2023.If you already have experience working on AI alignment and would be keen to join our community of facilitators, please apply to facilitate.Who is running the course?AGISF is now being run by BlueDot Impact - a new non-profit project running courses that support participants to develop the knowledge, community and network needed to pursue high-impact careers. BlueDot Impact spun out of Cambridge Effective Altruism, and was founded by the team who was primarily responsible for running previous rounds of AGISF. You can read more in our announcement post here.We’re really excited about the amount of interest in the courses and think they have great potential to build awesome communities around key issues. As such we have spent the last few months:Working with pedagogy experts to make discussion sessions more engagingFormalising our course design process with greater transparency for participants and facilitatorsBuilding systems to improve participant networking...]]>
Jamie Bernardi https://forum.effectivealtruism.org/posts/HBgAruFrZhFKBFfDa/applications-open-for-agi-safety-fundamentals-alignment Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications open for AGI Safety Fundamentals: Alignment Course, published by Jamie Bernardi on December 13, 2022 on The Effective Altruism Forum.The AGI Safety Fundamentals (AGISF): Alignment Course is designed to introduce the key ideas in AGI safety and alignment, and provide a space and support for participants to engage, evaluate and debate these arguments. Participants will meet others who are excited to help mitigate risks from future AI systems, and explore opportunities for their next steps in the field.The course is being run by the same team as for previous rounds, now under a new project called BlueDot Impact.Time commitmentThe course will run from February-April 2023. It comprises 8 weeks of reading and virtual small-group discussions, followed by a 4-week capstone project.The time commitment is around 4 hours per week, so participants can engage with the course alongside full-time work or study.Course structureParticipants are provided with structured content to work through, alongside weekly, facilitated discussion groups. Participants will be grouped depending on their ML experience and background knowledge about AI safety. In these sessions, participants will engage in activities and discussions with other participants, guided by the facilitator. The facilitator will be knowledgeable about AI safety, and can help to answer participants’ questions.The course is followed by a capstone project, which is an opportunity for participants to synthesise their views on the field and start thinking through how to put these ideas into practice, or start getting relevant skills and experience that will help them with the next step in their career.The course content is designed by Richard Ngo (Governance team at OpenAI, previously a research engineer on the AGI safety team at DeepMind). You can read the curriculum content here.Target audienceWe are most excited about applicants who would be in strong position to pursue technical alignment research in their career, such as professional software engineers and students studying technical subjects (e.g. CS/maths/physics/engineering).That said, we consider all applicants and expect 25-50% of the course to consist of people with a variety of other backgrounds, so we encourage you to apply regardless. This includes community builders who would benefit from a deeper understanding of the concepts in AI alignment.We will be running another course on AI Governance in early 2023 and expect a different distribution of target participants.Apply now!If you would like to be considered for the next round of the courses, starting in February 2023, please apply here by Thursday 5th January 2023. More details can be found here. We will be evaluating applications on a rolling basis and we aim to let you know the outcome of your application by mid-January 2023.If you already have experience working on AI alignment and would be keen to join our community of facilitators, please apply to facilitate.Who is running the course?AGISF is now being run by BlueDot Impact - a new non-profit project running courses that support participants to develop the knowledge, community and network needed to pursue high-impact careers. BlueDot Impact spun out of Cambridge Effective Altruism, and was founded by the team who was primarily responsible for running previous rounds of AGISF. You can read more in our announcement post here.We’re really excited about the amount of interest in the courses and think they have great potential to build awesome communities around key issues. As such we have spent the last few months:Working with pedagogy experts to make discussion sessions more engagingFormalising our course design process with greater transparency for participants and facilitatorsBuilding systems to improve participant networking...]]>
Tue, 13 Dec 2022 18:10:58 +0000 EA - Applications open for AGI Safety Fundamentals: Alignment Course by Jamie Bernardi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications open for AGI Safety Fundamentals: Alignment Course, published by Jamie Bernardi on December 13, 2022 on The Effective Altruism Forum.The AGI Safety Fundamentals (AGISF): Alignment Course is designed to introduce the key ideas in AGI safety and alignment, and provide a space and support for participants to engage, evaluate and debate these arguments. Participants will meet others who are excited to help mitigate risks from future AI systems, and explore opportunities for their next steps in the field.The course is being run by the same team as for previous rounds, now under a new project called BlueDot Impact.Time commitmentThe course will run from February-April 2023. It comprises 8 weeks of reading and virtual small-group discussions, followed by a 4-week capstone project.The time commitment is around 4 hours per week, so participants can engage with the course alongside full-time work or study.Course structureParticipants are provided with structured content to work through, alongside weekly, facilitated discussion groups. Participants will be grouped depending on their ML experience and background knowledge about AI safety. In these sessions, participants will engage in activities and discussions with other participants, guided by the facilitator. The facilitator will be knowledgeable about AI safety, and can help to answer participants’ questions.The course is followed by a capstone project, which is an opportunity for participants to synthesise their views on the field and start thinking through how to put these ideas into practice, or start getting relevant skills and experience that will help them with the next step in their career.The course content is designed by Richard Ngo (Governance team at OpenAI, previously a research engineer on the AGI safety team at DeepMind). You can read the curriculum content here.Target audienceWe are most excited about applicants who would be in strong position to pursue technical alignment research in their career, such as professional software engineers and students studying technical subjects (e.g. CS/maths/physics/engineering).That said, we consider all applicants and expect 25-50% of the course to consist of people with a variety of other backgrounds, so we encourage you to apply regardless. This includes community builders who would benefit from a deeper understanding of the concepts in AI alignment.We will be running another course on AI Governance in early 2023 and expect a different distribution of target participants.Apply now!If you would like to be considered for the next round of the courses, starting in February 2023, please apply here by Thursday 5th January 2023. More details can be found here. We will be evaluating applications on a rolling basis and we aim to let you know the outcome of your application by mid-January 2023.If you already have experience working on AI alignment and would be keen to join our community of facilitators, please apply to facilitate.Who is running the course?AGISF is now being run by BlueDot Impact - a new non-profit project running courses that support participants to develop the knowledge, community and network needed to pursue high-impact careers. BlueDot Impact spun out of Cambridge Effective Altruism, and was founded by the team who was primarily responsible for running previous rounds of AGISF. You can read more in our announcement post here.We’re really excited about the amount of interest in the courses and think they have great potential to build awesome communities around key issues. As such we have spent the last few months:Working with pedagogy experts to make discussion sessions more engagingFormalising our course design process with greater transparency for participants and facilitatorsBuilding systems to improve participant networking...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications open for AGI Safety Fundamentals: Alignment Course, published by Jamie Bernardi on December 13, 2022 on The Effective Altruism Forum.The AGI Safety Fundamentals (AGISF): Alignment Course is designed to introduce the key ideas in AGI safety and alignment, and provide a space and support for participants to engage, evaluate and debate these arguments. Participants will meet others who are excited to help mitigate risks from future AI systems, and explore opportunities for their next steps in the field.The course is being run by the same team as for previous rounds, now under a new project called BlueDot Impact.Time commitmentThe course will run from February-April 2023. It comprises 8 weeks of reading and virtual small-group discussions, followed by a 4-week capstone project.The time commitment is around 4 hours per week, so participants can engage with the course alongside full-time work or study.Course structureParticipants are provided with structured content to work through, alongside weekly, facilitated discussion groups. Participants will be grouped depending on their ML experience and background knowledge about AI safety. In these sessions, participants will engage in activities and discussions with other participants, guided by the facilitator. The facilitator will be knowledgeable about AI safety, and can help to answer participants’ questions.The course is followed by a capstone project, which is an opportunity for participants to synthesise their views on the field and start thinking through how to put these ideas into practice, or start getting relevant skills and experience that will help them with the next step in their career.The course content is designed by Richard Ngo (Governance team at OpenAI, previously a research engineer on the AGI safety team at DeepMind). You can read the curriculum content here.Target audienceWe are most excited about applicants who would be in strong position to pursue technical alignment research in their career, such as professional software engineers and students studying technical subjects (e.g. CS/maths/physics/engineering).That said, we consider all applicants and expect 25-50% of the course to consist of people with a variety of other backgrounds, so we encourage you to apply regardless. This includes community builders who would benefit from a deeper understanding of the concepts in AI alignment.We will be running another course on AI Governance in early 2023 and expect a different distribution of target participants.Apply now!If you would like to be considered for the next round of the courses, starting in February 2023, please apply here by Thursday 5th January 2023. More details can be found here. We will be evaluating applications on a rolling basis and we aim to let you know the outcome of your application by mid-January 2023.If you already have experience working on AI alignment and would be keen to join our community of facilitators, please apply to facilitate.Who is running the course?AGISF is now being run by BlueDot Impact - a new non-profit project running courses that support participants to develop the knowledge, community and network needed to pursue high-impact careers. BlueDot Impact spun out of Cambridge Effective Altruism, and was founded by the team who was primarily responsible for running previous rounds of AGISF. You can read more in our announcement post here.We’re really excited about the amount of interest in the courses and think they have great potential to build awesome communities around key issues. As such we have spent the last few months:Working with pedagogy experts to make discussion sessions more engagingFormalising our course design process with greater transparency for participants and facilitatorsBuilding systems to improve participant networking...]]>
Jamie Bernardi https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:05 None full 4104
aj4PQaLqSMWffyhFD_EA EA - EA Landscape in the UK by DavidNash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Landscape in the UK, published by DavidNash on December 13, 2022 on The Effective Altruism Forum.SummaryIn this post I’m attempting to give an overview of what EA looks like in the UK, including communities and organisations with a range of affiliation with EA.EA Hubs in the UKMost of the people working at EA related organisations are in London, Oxford or Cambridge (sometimes known as Loxbridge), with some remote workers around the UK. It’s quite easy to get from London to Oxford or Cambridge, being about an hour by train, and people regularly travel between them.LondonLondon probably has the largest number of people interested in EA. There are roughly 150-250 people working at EA related organisations although the majority of people interested in EA are working in the private sector, government, academia or (non EA) nonprofits. There are also four EA university groups in London at Imperial College London, King's College London, University College London and London School of Economics.There isn't one main EA community in London, there are quite a few subgroups based around different causes, workplaces and interests. There are also people who have an interest but attend an event or get involved with a relevant opportunity once every few years.SubcommunitiesWorkplace/Cause CommunitiesCharity Entrepreneurship are near Queens Park and with each new cohort there are more organisations set up that work from the CE office, mainly on global health & development, health security or animal welfareThe Centre on Long-Term Risk are near Primrose Hill and there is a community there for people working on suffering risksConjecture are close to London Bridge and support the SERI-MATS fellowship nearby, both focusing on AI SafetyProfession communitiesThe civil service has people interested in EA working there and they have regular meetupsEA Finance has quite a few members in London and they have meetups every few monthsEA Tech have meetups every month or twoThere is also a group for people who are interested in politics and EAThere are usually 1 or 2 meetups a year for entrepreneurs and consultantsUniversity groupsThere are 4 quite active groups, roughly 20 volunteer organisers as well as the London EA Hub (LEAH) supporting students interested in EAICLKCLUCLLSEInterest communitiesEA for Christians have had meetups in London every few monthsThere are also meetups for people to play football, dance, go to gigs, stitch and climbThere are now quite a few EA related organisations in London, in 2015 there was just Founders Pledge and some Givewell recommended charities.EA MetaCharity Entrepreneurship80,000 HoursFounders PledgeImpactful Government CareersLet’s FundSoGiveNon-trivialEA UKLEAH - supporting EA groups in London. They are also running a co-working space for students and professionals working on EA projectsEffective GivingRethink Priorities ~18 people based in UKBetter MattersSocial Change LabGlobal DevelopmentAgainst Malaria FoundationSchistosomiasis Control InitiativeMalaria ConsortiumGiveDirectly - London OfficeSuvitaCanopieLead Exposure Elimination ProjectAnimal Welfare - Most of these are the UK based teams for international orgsThe Humane LeagueAnimal EqualityOpen CagesThe Good Food Institute EuropeAnimal Advocacy CareersAnimal AskFish Welfare InitiativeCellular Agriculture UKAI AlignmentConjectureThe Cooperative AI FoundationDeepmind Safety teamSafe AI LondonLong Term Future/Existential RisksLongview PhilanthropyLondon Existential Risk Initiative (aimed at students)All-Party Parliamentary Group For Future GenerationsThe Centre for Long-Term Resilience1Day SoonerCenter on Long-Term RiskThere are some remote staff in London for various EA related organisations, such as Animal Charity Evaluators, Centre ...]]>
DavidNash https://forum.effectivealtruism.org/posts/aj4PQaLqSMWffyhFD/ea-landscape-in-the-uk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Landscape in the UK, published by DavidNash on December 13, 2022 on The Effective Altruism Forum.SummaryIn this post I’m attempting to give an overview of what EA looks like in the UK, including communities and organisations with a range of affiliation with EA.EA Hubs in the UKMost of the people working at EA related organisations are in London, Oxford or Cambridge (sometimes known as Loxbridge), with some remote workers around the UK. It’s quite easy to get from London to Oxford or Cambridge, being about an hour by train, and people regularly travel between them.LondonLondon probably has the largest number of people interested in EA. There are roughly 150-250 people working at EA related organisations although the majority of people interested in EA are working in the private sector, government, academia or (non EA) nonprofits. There are also four EA university groups in London at Imperial College London, King's College London, University College London and London School of Economics.There isn't one main EA community in London, there are quite a few subgroups based around different causes, workplaces and interests. There are also people who have an interest but attend an event or get involved with a relevant opportunity once every few years.SubcommunitiesWorkplace/Cause CommunitiesCharity Entrepreneurship are near Queens Park and with each new cohort there are more organisations set up that work from the CE office, mainly on global health & development, health security or animal welfareThe Centre on Long-Term Risk are near Primrose Hill and there is a community there for people working on suffering risksConjecture are close to London Bridge and support the SERI-MATS fellowship nearby, both focusing on AI SafetyProfession communitiesThe civil service has people interested in EA working there and they have regular meetupsEA Finance has quite a few members in London and they have meetups every few monthsEA Tech have meetups every month or twoThere is also a group for people who are interested in politics and EAThere are usually 1 or 2 meetups a year for entrepreneurs and consultantsUniversity groupsThere are 4 quite active groups, roughly 20 volunteer organisers as well as the London EA Hub (LEAH) supporting students interested in EAICLKCLUCLLSEInterest communitiesEA for Christians have had meetups in London every few monthsThere are also meetups for people to play football, dance, go to gigs, stitch and climbThere are now quite a few EA related organisations in London, in 2015 there was just Founders Pledge and some Givewell recommended charities.EA MetaCharity Entrepreneurship80,000 HoursFounders PledgeImpactful Government CareersLet’s FundSoGiveNon-trivialEA UKLEAH - supporting EA groups in London. They are also running a co-working space for students and professionals working on EA projectsEffective GivingRethink Priorities ~18 people based in UKBetter MattersSocial Change LabGlobal DevelopmentAgainst Malaria FoundationSchistosomiasis Control InitiativeMalaria ConsortiumGiveDirectly - London OfficeSuvitaCanopieLead Exposure Elimination ProjectAnimal Welfare - Most of these are the UK based teams for international orgsThe Humane LeagueAnimal EqualityOpen CagesThe Good Food Institute EuropeAnimal Advocacy CareersAnimal AskFish Welfare InitiativeCellular Agriculture UKAI AlignmentConjectureThe Cooperative AI FoundationDeepmind Safety teamSafe AI LondonLong Term Future/Existential RisksLongview PhilanthropyLondon Existential Risk Initiative (aimed at students)All-Party Parliamentary Group For Future GenerationsThe Centre for Long-Term Resilience1Day SoonerCenter on Long-Term RiskThere are some remote staff in London for various EA related organisations, such as Animal Charity Evaluators, Centre ...]]>
Tue, 13 Dec 2022 18:10:52 +0000 EA - EA Landscape in the UK by DavidNash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Landscape in the UK, published by DavidNash on December 13, 2022 on The Effective Altruism Forum.SummaryIn this post I’m attempting to give an overview of what EA looks like in the UK, including communities and organisations with a range of affiliation with EA.EA Hubs in the UKMost of the people working at EA related organisations are in London, Oxford or Cambridge (sometimes known as Loxbridge), with some remote workers around the UK. It’s quite easy to get from London to Oxford or Cambridge, being about an hour by train, and people regularly travel between them.LondonLondon probably has the largest number of people interested in EA. There are roughly 150-250 people working at EA related organisations although the majority of people interested in EA are working in the private sector, government, academia or (non EA) nonprofits. There are also four EA university groups in London at Imperial College London, King's College London, University College London and London School of Economics.There isn't one main EA community in London, there are quite a few subgroups based around different causes, workplaces and interests. There are also people who have an interest but attend an event or get involved with a relevant opportunity once every few years.SubcommunitiesWorkplace/Cause CommunitiesCharity Entrepreneurship are near Queens Park and with each new cohort there are more organisations set up that work from the CE office, mainly on global health & development, health security or animal welfareThe Centre on Long-Term Risk are near Primrose Hill and there is a community there for people working on suffering risksConjecture are close to London Bridge and support the SERI-MATS fellowship nearby, both focusing on AI SafetyProfession communitiesThe civil service has people interested in EA working there and they have regular meetupsEA Finance has quite a few members in London and they have meetups every few monthsEA Tech have meetups every month or twoThere is also a group for people who are interested in politics and EAThere are usually 1 or 2 meetups a year for entrepreneurs and consultantsUniversity groupsThere are 4 quite active groups, roughly 20 volunteer organisers as well as the London EA Hub (LEAH) supporting students interested in EAICLKCLUCLLSEInterest communitiesEA for Christians have had meetups in London every few monthsThere are also meetups for people to play football, dance, go to gigs, stitch and climbThere are now quite a few EA related organisations in London, in 2015 there was just Founders Pledge and some Givewell recommended charities.EA MetaCharity Entrepreneurship80,000 HoursFounders PledgeImpactful Government CareersLet’s FundSoGiveNon-trivialEA UKLEAH - supporting EA groups in London. They are also running a co-working space for students and professionals working on EA projectsEffective GivingRethink Priorities ~18 people based in UKBetter MattersSocial Change LabGlobal DevelopmentAgainst Malaria FoundationSchistosomiasis Control InitiativeMalaria ConsortiumGiveDirectly - London OfficeSuvitaCanopieLead Exposure Elimination ProjectAnimal Welfare - Most of these are the UK based teams for international orgsThe Humane LeagueAnimal EqualityOpen CagesThe Good Food Institute EuropeAnimal Advocacy CareersAnimal AskFish Welfare InitiativeCellular Agriculture UKAI AlignmentConjectureThe Cooperative AI FoundationDeepmind Safety teamSafe AI LondonLong Term Future/Existential RisksLongview PhilanthropyLondon Existential Risk Initiative (aimed at students)All-Party Parliamentary Group For Future GenerationsThe Centre for Long-Term Resilience1Day SoonerCenter on Long-Term RiskThere are some remote staff in London for various EA related organisations, such as Animal Charity Evaluators, Centre ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Landscape in the UK, published by DavidNash on December 13, 2022 on The Effective Altruism Forum.SummaryIn this post I’m attempting to give an overview of what EA looks like in the UK, including communities and organisations with a range of affiliation with EA.EA Hubs in the UKMost of the people working at EA related organisations are in London, Oxford or Cambridge (sometimes known as Loxbridge), with some remote workers around the UK. It’s quite easy to get from London to Oxford or Cambridge, being about an hour by train, and people regularly travel between them.LondonLondon probably has the largest number of people interested in EA. There are roughly 150-250 people working at EA related organisations although the majority of people interested in EA are working in the private sector, government, academia or (non EA) nonprofits. There are also four EA university groups in London at Imperial College London, King's College London, University College London and London School of Economics.There isn't one main EA community in London, there are quite a few subgroups based around different causes, workplaces and interests. There are also people who have an interest but attend an event or get involved with a relevant opportunity once every few years.SubcommunitiesWorkplace/Cause CommunitiesCharity Entrepreneurship are near Queens Park and with each new cohort there are more organisations set up that work from the CE office, mainly on global health & development, health security or animal welfareThe Centre on Long-Term Risk are near Primrose Hill and there is a community there for people working on suffering risksConjecture are close to London Bridge and support the SERI-MATS fellowship nearby, both focusing on AI SafetyProfession communitiesThe civil service has people interested in EA working there and they have regular meetupsEA Finance has quite a few members in London and they have meetups every few monthsEA Tech have meetups every month or twoThere is also a group for people who are interested in politics and EAThere are usually 1 or 2 meetups a year for entrepreneurs and consultantsUniversity groupsThere are 4 quite active groups, roughly 20 volunteer organisers as well as the London EA Hub (LEAH) supporting students interested in EAICLKCLUCLLSEInterest communitiesEA for Christians have had meetups in London every few monthsThere are also meetups for people to play football, dance, go to gigs, stitch and climbThere are now quite a few EA related organisations in London, in 2015 there was just Founders Pledge and some Givewell recommended charities.EA MetaCharity Entrepreneurship80,000 HoursFounders PledgeImpactful Government CareersLet’s FundSoGiveNon-trivialEA UKLEAH - supporting EA groups in London. They are also running a co-working space for students and professionals working on EA projectsEffective GivingRethink Priorities ~18 people based in UKBetter MattersSocial Change LabGlobal DevelopmentAgainst Malaria FoundationSchistosomiasis Control InitiativeMalaria ConsortiumGiveDirectly - London OfficeSuvitaCanopieLead Exposure Elimination ProjectAnimal Welfare - Most of these are the UK based teams for international orgsThe Humane LeagueAnimal EqualityOpen CagesThe Good Food Institute EuropeAnimal Advocacy CareersAnimal AskFish Welfare InitiativeCellular Agriculture UKAI AlignmentConjectureThe Cooperative AI FoundationDeepmind Safety teamSafe AI LondonLong Term Future/Existential RisksLongview PhilanthropyLondon Existential Risk Initiative (aimed at students)All-Party Parliamentary Group For Future GenerationsThe Centre for Long-Term Resilience1Day SoonerCenter on Long-Term RiskThere are some remote staff in London for various EA related organisations, such as Animal Charity Evaluators, Centre ...]]>
DavidNash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:19 None full 4101
CixidC6JCruHue8Hs_EA EA - GiveWell’s Moral Weights Underweight the Value of Transfers to the Poor by Trevor Woolley Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell’s Moral Weights Underweight the Value of Transfers to the Poor, published by Trevor Woolley on December 13, 2022 on The Effective Altruism Forum.Ethan Ligon and I submitted this to GiveWell for their "Change Our Minds" contest this year (2022). They will be announcing winners later this week (Dec 15, I think). But before they do, we wanted to share our submission here in case anyone is interested! Ethan has done some amazing work measuring real-world marginal utility by estimating demand systems using consumer purchases data. Seeing this prior work of his and thinking it may be particularly relevant to GiveWell's evaluations (and EA in general), I approached him and we wrote a short paper which we ended up submitting for the challenge.Rather than post the entire paper here and risk losing info in the reformatting process, I ask that you see the Google Doc of it here. While it does get a bit technical, I've tried to include non-technical summaries for most of the major sections. So in case you feel like some of the details are getting intense, I encourage you to at least read through the intro paragraphs and conclusions as you skim over it. If you get anything out of it, let it be that while total utility levels (over consumption) are unidentifiable, marginal utilities (over total consumption) are not--they can be measured with data!In case you aren't convinced yet to read the full article, below is a quick excerpt from the Summary:GiveWell bases much of their cost-effectiveness analysis on the value of doubling consumption. Since increasing consumption expenditures is the primary effect of the GiveDirectly cash transfers program, GiveWell uses the effectiveness (value generated per dollar) of cash transfers as a metric for evaluating the effectiveness of all other programs. However, by valuing “doubling consumption”, GiveWell has assumed the functional form of utility over “real consumption” x to be log(x) and the functional form of marginal utility over consumption to be 1/x (since this is the derivative of log(x)). This is a valid utility function in the sense that it is one of many functions that satisfies the conditions of rationality, but there is strong evidence that it is not a good representation of the preferences of the Kenyan beneficiaries of the GiveDirectly experiment.The purpose of this article is to explain why GiveWell should reconsider using “doubling consumption” as the basis for assessing the value of consumption (or income) changes and instead value “halving marginal utility of expenditure”—what we think GiveWell actually intends to value. Using data from GiveDirectly’s cash transfers program in Kenya (Haushofer and Shapiro 2016), we provide empirical evidence that rejects the use of any function that implies homothetic preferences (including marginal utility of 1/x). We then empirically estimate the true marginal utility over consumption (λ) as revealed by Kenyan beneficiaries of GiveDirectly’s cash transfers program and show how the value per dollar of cash transfers is actually 2.6 times GiveWell’s current number (from 0.0034 to 0.009). This is because 1/x is quickly dwarfed by revealed marginal utility, , at low levels of consumption.Therefore, valuing “doubling consumption” underweights the value of cash transfers to the very poor if we let them “speak” for themselves.Our "headline figure":FIGURE 5: Utilitarian ROI. The curves labeled cfe use the estimated marginal utilities of expenditure λ(x, p), with p either the values at baseline, or endline. The curve labeled log uses the marginal utility of expenditures implied by a logarithmic utility function used (implicitly) by GiveWell.4.5. Utilitarian Return on Investment. For every dollar given to a particular household, there’s some increase in utility, which we can think of a...]]>
Trevor Woolley https://forum.effectivealtruism.org/posts/CixidC6JCruHue8Hs/givewell-s-moral-weights-underweight-the-value-of-transfers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell’s Moral Weights Underweight the Value of Transfers to the Poor, published by Trevor Woolley on December 13, 2022 on The Effective Altruism Forum.Ethan Ligon and I submitted this to GiveWell for their "Change Our Minds" contest this year (2022). They will be announcing winners later this week (Dec 15, I think). But before they do, we wanted to share our submission here in case anyone is interested! Ethan has done some amazing work measuring real-world marginal utility by estimating demand systems using consumer purchases data. Seeing this prior work of his and thinking it may be particularly relevant to GiveWell's evaluations (and EA in general), I approached him and we wrote a short paper which we ended up submitting for the challenge.Rather than post the entire paper here and risk losing info in the reformatting process, I ask that you see the Google Doc of it here. While it does get a bit technical, I've tried to include non-technical summaries for most of the major sections. So in case you feel like some of the details are getting intense, I encourage you to at least read through the intro paragraphs and conclusions as you skim over it. If you get anything out of it, let it be that while total utility levels (over consumption) are unidentifiable, marginal utilities (over total consumption) are not--they can be measured with data!In case you aren't convinced yet to read the full article, below is a quick excerpt from the Summary:GiveWell bases much of their cost-effectiveness analysis on the value of doubling consumption. Since increasing consumption expenditures is the primary effect of the GiveDirectly cash transfers program, GiveWell uses the effectiveness (value generated per dollar) of cash transfers as a metric for evaluating the effectiveness of all other programs. However, by valuing “doubling consumption”, GiveWell has assumed the functional form of utility over “real consumption” x to be log(x) and the functional form of marginal utility over consumption to be 1/x (since this is the derivative of log(x)). This is a valid utility function in the sense that it is one of many functions that satisfies the conditions of rationality, but there is strong evidence that it is not a good representation of the preferences of the Kenyan beneficiaries of the GiveDirectly experiment.The purpose of this article is to explain why GiveWell should reconsider using “doubling consumption” as the basis for assessing the value of consumption (or income) changes and instead value “halving marginal utility of expenditure”—what we think GiveWell actually intends to value. Using data from GiveDirectly’s cash transfers program in Kenya (Haushofer and Shapiro 2016), we provide empirical evidence that rejects the use of any function that implies homothetic preferences (including marginal utility of 1/x). We then empirically estimate the true marginal utility over consumption (λ) as revealed by Kenyan beneficiaries of GiveDirectly’s cash transfers program and show how the value per dollar of cash transfers is actually 2.6 times GiveWell’s current number (from 0.0034 to 0.009). This is because 1/x is quickly dwarfed by revealed marginal utility, , at low levels of consumption.Therefore, valuing “doubling consumption” underweights the value of cash transfers to the very poor if we let them “speak” for themselves.Our "headline figure":FIGURE 5: Utilitarian ROI. The curves labeled cfe use the estimated marginal utilities of expenditure λ(x, p), with p either the values at baseline, or endline. The curve labeled log uses the marginal utility of expenditures implied by a logarithmic utility function used (implicitly) by GiveWell.4.5. Utilitarian Return on Investment. For every dollar given to a particular household, there’s some increase in utility, which we can think of a...]]>
Tue, 13 Dec 2022 16:14:03 +0000 EA - GiveWell’s Moral Weights Underweight the Value of Transfers to the Poor by Trevor Woolley Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell’s Moral Weights Underweight the Value of Transfers to the Poor, published by Trevor Woolley on December 13, 2022 on The Effective Altruism Forum.Ethan Ligon and I submitted this to GiveWell for their "Change Our Minds" contest this year (2022). They will be announcing winners later this week (Dec 15, I think). But before they do, we wanted to share our submission here in case anyone is interested! Ethan has done some amazing work measuring real-world marginal utility by estimating demand systems using consumer purchases data. Seeing this prior work of his and thinking it may be particularly relevant to GiveWell's evaluations (and EA in general), I approached him and we wrote a short paper which we ended up submitting for the challenge.Rather than post the entire paper here and risk losing info in the reformatting process, I ask that you see the Google Doc of it here. While it does get a bit technical, I've tried to include non-technical summaries for most of the major sections. So in case you feel like some of the details are getting intense, I encourage you to at least read through the intro paragraphs and conclusions as you skim over it. If you get anything out of it, let it be that while total utility levels (over consumption) are unidentifiable, marginal utilities (over total consumption) are not--they can be measured with data!In case you aren't convinced yet to read the full article, below is a quick excerpt from the Summary:GiveWell bases much of their cost-effectiveness analysis on the value of doubling consumption. Since increasing consumption expenditures is the primary effect of the GiveDirectly cash transfers program, GiveWell uses the effectiveness (value generated per dollar) of cash transfers as a metric for evaluating the effectiveness of all other programs. However, by valuing “doubling consumption”, GiveWell has assumed the functional form of utility over “real consumption” x to be log(x) and the functional form of marginal utility over consumption to be 1/x (since this is the derivative of log(x)). This is a valid utility function in the sense that it is one of many functions that satisfies the conditions of rationality, but there is strong evidence that it is not a good representation of the preferences of the Kenyan beneficiaries of the GiveDirectly experiment.The purpose of this article is to explain why GiveWell should reconsider using “doubling consumption” as the basis for assessing the value of consumption (or income) changes and instead value “halving marginal utility of expenditure”—what we think GiveWell actually intends to value. Using data from GiveDirectly’s cash transfers program in Kenya (Haushofer and Shapiro 2016), we provide empirical evidence that rejects the use of any function that implies homothetic preferences (including marginal utility of 1/x). We then empirically estimate the true marginal utility over consumption (λ) as revealed by Kenyan beneficiaries of GiveDirectly’s cash transfers program and show how the value per dollar of cash transfers is actually 2.6 times GiveWell’s current number (from 0.0034 to 0.009). This is because 1/x is quickly dwarfed by revealed marginal utility, , at low levels of consumption.Therefore, valuing “doubling consumption” underweights the value of cash transfers to the very poor if we let them “speak” for themselves.Our "headline figure":FIGURE 5: Utilitarian ROI. The curves labeled cfe use the estimated marginal utilities of expenditure λ(x, p), with p either the values at baseline, or endline. The curve labeled log uses the marginal utility of expenditures implied by a logarithmic utility function used (implicitly) by GiveWell.4.5. Utilitarian Return on Investment. For every dollar given to a particular household, there’s some increase in utility, which we can think of a...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell’s Moral Weights Underweight the Value of Transfers to the Poor, published by Trevor Woolley on December 13, 2022 on The Effective Altruism Forum.Ethan Ligon and I submitted this to GiveWell for their "Change Our Minds" contest this year (2022). They will be announcing winners later this week (Dec 15, I think). But before they do, we wanted to share our submission here in case anyone is interested! Ethan has done some amazing work measuring real-world marginal utility by estimating demand systems using consumer purchases data. Seeing this prior work of his and thinking it may be particularly relevant to GiveWell's evaluations (and EA in general), I approached him and we wrote a short paper which we ended up submitting for the challenge.Rather than post the entire paper here and risk losing info in the reformatting process, I ask that you see the Google Doc of it here. While it does get a bit technical, I've tried to include non-technical summaries for most of the major sections. So in case you feel like some of the details are getting intense, I encourage you to at least read through the intro paragraphs and conclusions as you skim over it. If you get anything out of it, let it be that while total utility levels (over consumption) are unidentifiable, marginal utilities (over total consumption) are not--they can be measured with data!In case you aren't convinced yet to read the full article, below is a quick excerpt from the Summary:GiveWell bases much of their cost-effectiveness analysis on the value of doubling consumption. Since increasing consumption expenditures is the primary effect of the GiveDirectly cash transfers program, GiveWell uses the effectiveness (value generated per dollar) of cash transfers as a metric for evaluating the effectiveness of all other programs. However, by valuing “doubling consumption”, GiveWell has assumed the functional form of utility over “real consumption” x to be log(x) and the functional form of marginal utility over consumption to be 1/x (since this is the derivative of log(x)). This is a valid utility function in the sense that it is one of many functions that satisfies the conditions of rationality, but there is strong evidence that it is not a good representation of the preferences of the Kenyan beneficiaries of the GiveDirectly experiment.The purpose of this article is to explain why GiveWell should reconsider using “doubling consumption” as the basis for assessing the value of consumption (or income) changes and instead value “halving marginal utility of expenditure”—what we think GiveWell actually intends to value. Using data from GiveDirectly’s cash transfers program in Kenya (Haushofer and Shapiro 2016), we provide empirical evidence that rejects the use of any function that implies homothetic preferences (including marginal utility of 1/x). We then empirically estimate the true marginal utility over consumption (λ) as revealed by Kenyan beneficiaries of GiveDirectly’s cash transfers program and show how the value per dollar of cash transfers is actually 2.6 times GiveWell’s current number (from 0.0034 to 0.009). This is because 1/x is quickly dwarfed by revealed marginal utility, , at low levels of consumption.Therefore, valuing “doubling consumption” underweights the value of cash transfers to the very poor if we let them “speak” for themselves.Our "headline figure":FIGURE 5: Utilitarian ROI. The curves labeled cfe use the estimated marginal utilities of expenditure λ(x, p), with p either the values at baseline, or endline. The curve labeled log uses the marginal utility of expenditures implied by a logarithmic utility function used (implicitly) by GiveWell.4.5. Utilitarian Return on Investment. For every dollar given to a particular household, there’s some increase in utility, which we can think of a...]]>
Trevor Woolley https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:27 None full 4105
kEd5qWwg8pZjWAeFS_EA EA - Announcing the Forecasting Research Institute (we’re hiring) by Tegan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Forecasting Research Institute (we’re hiring), published by Tegan on December 13, 2022 on The Effective Altruism Forum.The Forecasting Research Institute (FRI) is a new organization focused on advancing the science of forecasting for the public good.All decision-making implicitly relies on prediction, so improving prediction accuracy should lead to better decisions. And forecasting has shown early promise in the first-generation research conducted by FRI Chief Scientist Phil Tetlock and coauthors. But despite burgeoning popular interest in the practice of forecasting (especially among EAs), it has yet to realize its potential as a tool to inform decision-making.Early forecasting work focused on establishing a rigorous standard for accuracy, in experimental conditions chosen to provide the cleanest, most precise evidence possible about forecasting itself—a proof of concept, rather than a roadmap for using forecasting in real-world conditions. A great deal of work, both foundational and translational, is still needed to shape forecasting into a tool with practical value.That’s why our team is pursuing a two-pronged strategy. One is foundational, aimed at filling in the gaps in the science of forecasting that represent critical barriers to some of the most important uses of forecasting—like how to handle low probability events, long-run and unobservable outcomes, or complex topics that cannot be captured in a single forecast. The other prong is translational, focused on adapting forecasting methods to practical purposes: increasing the decision-relevance of questions, using forecasting to map important disagreements, and identifying the contexts in which forecasting will be most useful.Over the next two years we plan to launch multiple research projects aimed at the key outstanding questions for forecasting. We will also analyze and report on our group’s recently completed project, the Existential Risk Persuasion Tournament (XPT). This tournament brought together over 200 domain experts and highly skilled forecasters to explore, debate, and forecast potential threats to humanity in the next century, creating a wealth of rich data that our team is mining for forecasting and policy insights.In our upcoming projects, we’ll be conducting large, high-powered studies on a new research platform, customized for the demands of forecasting research. We’ll also work closely with selected organizations and policymakers to create forecasting tools informed by practical use-cases. Our planned projects include:Developing a forecasting proficiency test for quickly and cheaply identifying accurate forecastersIdentifying leading indicators of increased risk to humanity from AI by building “AI-risk conditional trees” with the help of domain experts (overview of conditional trees here, pg. 13)Exploring ways of judging (and incentivizing) answers to unresolvable and far-future questions, such as reciprocal scoringConducting “Epistemic Audits” to help organizations reduce uncertainty, identify action-relevant disagreement, and guide their decision processes.(For more on our research priorities, see here and here.)We’re excited to begin FRI’s work at such an auspicious time for the field of forecasting, with the many great projects, people and ideas that currently inhabit it—spanning the gamut from heavyweight organizations like Metaculus and GJI, to the numerous innovative projects run by small teams and individuals. This environment presents a wealth of opportunities for collaboration and cooperation, and we’re looking forward to being a part of such a dynamic community.Our core team consists of Phil Tetlock, Michael Page, Josh Rosenberg, Ezra Karger, Tegan McCaslin, and Zachary Jacobs. We also work with various contractors and external collaborators in the forec...]]>
Tegan https://forum.effectivealtruism.org/posts/kEd5qWwg8pZjWAeFS/announcing-the-forecasting-research-institute-we-re-hiring Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Forecasting Research Institute (we’re hiring), published by Tegan on December 13, 2022 on The Effective Altruism Forum.The Forecasting Research Institute (FRI) is a new organization focused on advancing the science of forecasting for the public good.All decision-making implicitly relies on prediction, so improving prediction accuracy should lead to better decisions. And forecasting has shown early promise in the first-generation research conducted by FRI Chief Scientist Phil Tetlock and coauthors. But despite burgeoning popular interest in the practice of forecasting (especially among EAs), it has yet to realize its potential as a tool to inform decision-making.Early forecasting work focused on establishing a rigorous standard for accuracy, in experimental conditions chosen to provide the cleanest, most precise evidence possible about forecasting itself—a proof of concept, rather than a roadmap for using forecasting in real-world conditions. A great deal of work, both foundational and translational, is still needed to shape forecasting into a tool with practical value.That’s why our team is pursuing a two-pronged strategy. One is foundational, aimed at filling in the gaps in the science of forecasting that represent critical barriers to some of the most important uses of forecasting—like how to handle low probability events, long-run and unobservable outcomes, or complex topics that cannot be captured in a single forecast. The other prong is translational, focused on adapting forecasting methods to practical purposes: increasing the decision-relevance of questions, using forecasting to map important disagreements, and identifying the contexts in which forecasting will be most useful.Over the next two years we plan to launch multiple research projects aimed at the key outstanding questions for forecasting. We will also analyze and report on our group’s recently completed project, the Existential Risk Persuasion Tournament (XPT). This tournament brought together over 200 domain experts and highly skilled forecasters to explore, debate, and forecast potential threats to humanity in the next century, creating a wealth of rich data that our team is mining for forecasting and policy insights.In our upcoming projects, we’ll be conducting large, high-powered studies on a new research platform, customized for the demands of forecasting research. We’ll also work closely with selected organizations and policymakers to create forecasting tools informed by practical use-cases. Our planned projects include:Developing a forecasting proficiency test for quickly and cheaply identifying accurate forecastersIdentifying leading indicators of increased risk to humanity from AI by building “AI-risk conditional trees” with the help of domain experts (overview of conditional trees here, pg. 13)Exploring ways of judging (and incentivizing) answers to unresolvable and far-future questions, such as reciprocal scoringConducting “Epistemic Audits” to help organizations reduce uncertainty, identify action-relevant disagreement, and guide their decision processes.(For more on our research priorities, see here and here.)We’re excited to begin FRI’s work at such an auspicious time for the field of forecasting, with the many great projects, people and ideas that currently inhabit it—spanning the gamut from heavyweight organizations like Metaculus and GJI, to the numerous innovative projects run by small teams and individuals. This environment presents a wealth of opportunities for collaboration and cooperation, and we’re looking forward to being a part of such a dynamic community.Our core team consists of Phil Tetlock, Michael Page, Josh Rosenberg, Ezra Karger, Tegan McCaslin, and Zachary Jacobs. We also work with various contractors and external collaborators in the forec...]]>
Tue, 13 Dec 2022 13:40:40 +0000 EA - Announcing the Forecasting Research Institute (we’re hiring) by Tegan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Forecasting Research Institute (we’re hiring), published by Tegan on December 13, 2022 on The Effective Altruism Forum.The Forecasting Research Institute (FRI) is a new organization focused on advancing the science of forecasting for the public good.All decision-making implicitly relies on prediction, so improving prediction accuracy should lead to better decisions. And forecasting has shown early promise in the first-generation research conducted by FRI Chief Scientist Phil Tetlock and coauthors. But despite burgeoning popular interest in the practice of forecasting (especially among EAs), it has yet to realize its potential as a tool to inform decision-making.Early forecasting work focused on establishing a rigorous standard for accuracy, in experimental conditions chosen to provide the cleanest, most precise evidence possible about forecasting itself—a proof of concept, rather than a roadmap for using forecasting in real-world conditions. A great deal of work, both foundational and translational, is still needed to shape forecasting into a tool with practical value.That’s why our team is pursuing a two-pronged strategy. One is foundational, aimed at filling in the gaps in the science of forecasting that represent critical barriers to some of the most important uses of forecasting—like how to handle low probability events, long-run and unobservable outcomes, or complex topics that cannot be captured in a single forecast. The other prong is translational, focused on adapting forecasting methods to practical purposes: increasing the decision-relevance of questions, using forecasting to map important disagreements, and identifying the contexts in which forecasting will be most useful.Over the next two years we plan to launch multiple research projects aimed at the key outstanding questions for forecasting. We will also analyze and report on our group’s recently completed project, the Existential Risk Persuasion Tournament (XPT). This tournament brought together over 200 domain experts and highly skilled forecasters to explore, debate, and forecast potential threats to humanity in the next century, creating a wealth of rich data that our team is mining for forecasting and policy insights.In our upcoming projects, we’ll be conducting large, high-powered studies on a new research platform, customized for the demands of forecasting research. We’ll also work closely with selected organizations and policymakers to create forecasting tools informed by practical use-cases. Our planned projects include:Developing a forecasting proficiency test for quickly and cheaply identifying accurate forecastersIdentifying leading indicators of increased risk to humanity from AI by building “AI-risk conditional trees” with the help of domain experts (overview of conditional trees here, pg. 13)Exploring ways of judging (and incentivizing) answers to unresolvable and far-future questions, such as reciprocal scoringConducting “Epistemic Audits” to help organizations reduce uncertainty, identify action-relevant disagreement, and guide their decision processes.(For more on our research priorities, see here and here.)We’re excited to begin FRI’s work at such an auspicious time for the field of forecasting, with the many great projects, people and ideas that currently inhabit it—spanning the gamut from heavyweight organizations like Metaculus and GJI, to the numerous innovative projects run by small teams and individuals. This environment presents a wealth of opportunities for collaboration and cooperation, and we’re looking forward to being a part of such a dynamic community.Our core team consists of Phil Tetlock, Michael Page, Josh Rosenberg, Ezra Karger, Tegan McCaslin, and Zachary Jacobs. We also work with various contractors and external collaborators in the forec...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Forecasting Research Institute (we’re hiring), published by Tegan on December 13, 2022 on The Effective Altruism Forum.The Forecasting Research Institute (FRI) is a new organization focused on advancing the science of forecasting for the public good.All decision-making implicitly relies on prediction, so improving prediction accuracy should lead to better decisions. And forecasting has shown early promise in the first-generation research conducted by FRI Chief Scientist Phil Tetlock and coauthors. But despite burgeoning popular interest in the practice of forecasting (especially among EAs), it has yet to realize its potential as a tool to inform decision-making.Early forecasting work focused on establishing a rigorous standard for accuracy, in experimental conditions chosen to provide the cleanest, most precise evidence possible about forecasting itself—a proof of concept, rather than a roadmap for using forecasting in real-world conditions. A great deal of work, both foundational and translational, is still needed to shape forecasting into a tool with practical value.That’s why our team is pursuing a two-pronged strategy. One is foundational, aimed at filling in the gaps in the science of forecasting that represent critical barriers to some of the most important uses of forecasting—like how to handle low probability events, long-run and unobservable outcomes, or complex topics that cannot be captured in a single forecast. The other prong is translational, focused on adapting forecasting methods to practical purposes: increasing the decision-relevance of questions, using forecasting to map important disagreements, and identifying the contexts in which forecasting will be most useful.Over the next two years we plan to launch multiple research projects aimed at the key outstanding questions for forecasting. We will also analyze and report on our group’s recently completed project, the Existential Risk Persuasion Tournament (XPT). This tournament brought together over 200 domain experts and highly skilled forecasters to explore, debate, and forecast potential threats to humanity in the next century, creating a wealth of rich data that our team is mining for forecasting and policy insights.In our upcoming projects, we’ll be conducting large, high-powered studies on a new research platform, customized for the demands of forecasting research. We’ll also work closely with selected organizations and policymakers to create forecasting tools informed by practical use-cases. Our planned projects include:Developing a forecasting proficiency test for quickly and cheaply identifying accurate forecastersIdentifying leading indicators of increased risk to humanity from AI by building “AI-risk conditional trees” with the help of domain experts (overview of conditional trees here, pg. 13)Exploring ways of judging (and incentivizing) answers to unresolvable and far-future questions, such as reciprocal scoringConducting “Epistemic Audits” to help organizations reduce uncertainty, identify action-relevant disagreement, and guide their decision processes.(For more on our research priorities, see here and here.)We’re excited to begin FRI’s work at such an auspicious time for the field of forecasting, with the many great projects, people and ideas that currently inhabit it—spanning the gamut from heavyweight organizations like Metaculus and GJI, to the numerous innovative projects run by small teams and individuals. This environment presents a wealth of opportunities for collaboration and cooperation, and we’re looking forward to being a part of such a dynamic community.Our core team consists of Phil Tetlock, Michael Page, Josh Rosenberg, Ezra Karger, Tegan McCaslin, and Zachary Jacobs. We also work with various contractors and external collaborators in the forec...]]>
Tegan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:58 None full 4103
GvWiCxwLQJAzF6aK9_EA EA - CEEALAR: 2022 Update by CEEALAR Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEEALAR: 2022 Update, published by CEEALAR on December 13, 2022 on The Effective Altruism Forum.Tldr: we are still going, currently have lots of space, and have potential for further growth. Please apply if you have EA-related learning or research you want to do that requires support.UpdateIt’s been a while since our last update, but, suffice to say, we are still here! During 2021 we gradually increased our numbers again, from a low of 4 grantees mid-pandemic, to 15 by the end of the year (our full capacity, with no room sharing and 2 staff living on site). We lifted all Covid restrictions in March 2022, and things started to again feel like they were pre-pandemic. However, our building and its contents are old, and in mid-May this year we closed to new grantees for building repairs and maintenance. We reopened bookings again at the end of July, by which time we had once again got down to very low numbers - we are now up and running again and starting to fill up with new grantees, but we still have plenty of spare capacity. Please apply if you are interested in doing EA work at the hotel. We are offering (up to full) subsidies on accommodation and board for those wishing to learn or work on research or charitable projects (in fitting with our charitable objects).See our Grant Making Policy for more details.Along with the ups and downs in numbers, we’ve had ups and downs in other ways. We were delighted to receive our largest grant to date, from the FTX Future Fund, in May ($125k or ~ a year of runway), but this is now bittersweet given recent events. We condemn the actions of SBF and the FTX/Alameda inner circle, and are ashamed of the association. It’s possible the grant will be subject to clawbacks as a result of the commenced bankruptcy proceedings. As with many FTX grantees in the EA Community, we are following and discussing the situation as it unfolds. We intend to follow the consensus that emerges around any voluntary returning of unspent funds.Despite the significant funding, with the ongoing energy crisis, inflation in general, and increased spending on building maintenance and salaries, our costs have risen rapidly, and we were recently down to ~4 months of runway again. Enter the Survival & Flourishing Fund (SFF). We are extremely grateful to have been awarded a grant of $224,000(!) by Jaan Tallinn as per their most recent announcement.In order to attract and retain talent, with the last grant we upped our management salaries to ~the UK median salary (£31,286), plus accommodation and food (worth about £6k).It’s now 4.5 years since we first opened. Since then we have supported ~100 EAs aspiring to do direct work with their career development, and hosted another ~200 visitors from the EA community participating in events, networking and community building. We’ve established an EA community hub in a relatively low cost location. We believe there is plenty of potential demand for it to scale, but we still need to get the word out (which we are doing in part with this blog post).Our impactGrantee workThere are two main aspects to our potential impact: the direct work and career building of our grantees, and the community building and networking we facilitate.We are open to people working on all cause areas of EA, with the caveat that the work we facilitate is desk-based and mostly remote. In practice, this has meant that longtermist topics, especially x-risks, and in particular AI Alignment, have been foremost amongst the work of the grantees we have hosted. But we have also had grantees interested in animal welfare, global health, wellbeing, development and progress, and meta topics related to EA community building.Since our last update, we have had a number of grantees go on to internships, contracts and jobs at the likes of SERI, CHAI, Alvea, Re...]]>
CEEALAR https://forum.effectivealtruism.org/posts/GvWiCxwLQJAzF6aK9/ceealar-2022-update Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEEALAR: 2022 Update, published by CEEALAR on December 13, 2022 on The Effective Altruism Forum.Tldr: we are still going, currently have lots of space, and have potential for further growth. Please apply if you have EA-related learning or research you want to do that requires support.UpdateIt’s been a while since our last update, but, suffice to say, we are still here! During 2021 we gradually increased our numbers again, from a low of 4 grantees mid-pandemic, to 15 by the end of the year (our full capacity, with no room sharing and 2 staff living on site). We lifted all Covid restrictions in March 2022, and things started to again feel like they were pre-pandemic. However, our building and its contents are old, and in mid-May this year we closed to new grantees for building repairs and maintenance. We reopened bookings again at the end of July, by which time we had once again got down to very low numbers - we are now up and running again and starting to fill up with new grantees, but we still have plenty of spare capacity. Please apply if you are interested in doing EA work at the hotel. We are offering (up to full) subsidies on accommodation and board for those wishing to learn or work on research or charitable projects (in fitting with our charitable objects).See our Grant Making Policy for more details.Along with the ups and downs in numbers, we’ve had ups and downs in other ways. We were delighted to receive our largest grant to date, from the FTX Future Fund, in May ($125k or ~ a year of runway), but this is now bittersweet given recent events. We condemn the actions of SBF and the FTX/Alameda inner circle, and are ashamed of the association. It’s possible the grant will be subject to clawbacks as a result of the commenced bankruptcy proceedings. As with many FTX grantees in the EA Community, we are following and discussing the situation as it unfolds. We intend to follow the consensus that emerges around any voluntary returning of unspent funds.Despite the significant funding, with the ongoing energy crisis, inflation in general, and increased spending on building maintenance and salaries, our costs have risen rapidly, and we were recently down to ~4 months of runway again. Enter the Survival & Flourishing Fund (SFF). We are extremely grateful to have been awarded a grant of $224,000(!) by Jaan Tallinn as per their most recent announcement.In order to attract and retain talent, with the last grant we upped our management salaries to ~the UK median salary (£31,286), plus accommodation and food (worth about £6k).It’s now 4.5 years since we first opened. Since then we have supported ~100 EAs aspiring to do direct work with their career development, and hosted another ~200 visitors from the EA community participating in events, networking and community building. We’ve established an EA community hub in a relatively low cost location. We believe there is plenty of potential demand for it to scale, but we still need to get the word out (which we are doing in part with this blog post).Our impactGrantee workThere are two main aspects to our potential impact: the direct work and career building of our grantees, and the community building and networking we facilitate.We are open to people working on all cause areas of EA, with the caveat that the work we facilitate is desk-based and mostly remote. In practice, this has meant that longtermist topics, especially x-risks, and in particular AI Alignment, have been foremost amongst the work of the grantees we have hosted. But we have also had grantees interested in animal welfare, global health, wellbeing, development and progress, and meta topics related to EA community building.Since our last update, we have had a number of grantees go on to internships, contracts and jobs at the likes of SERI, CHAI, Alvea, Re...]]>
Tue, 13 Dec 2022 12:47:48 +0000 EA - CEEALAR: 2022 Update by CEEALAR Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEEALAR: 2022 Update, published by CEEALAR on December 13, 2022 on The Effective Altruism Forum.Tldr: we are still going, currently have lots of space, and have potential for further growth. Please apply if you have EA-related learning or research you want to do that requires support.UpdateIt’s been a while since our last update, but, suffice to say, we are still here! During 2021 we gradually increased our numbers again, from a low of 4 grantees mid-pandemic, to 15 by the end of the year (our full capacity, with no room sharing and 2 staff living on site). We lifted all Covid restrictions in March 2022, and things started to again feel like they were pre-pandemic. However, our building and its contents are old, and in mid-May this year we closed to new grantees for building repairs and maintenance. We reopened bookings again at the end of July, by which time we had once again got down to very low numbers - we are now up and running again and starting to fill up with new grantees, but we still have plenty of spare capacity. Please apply if you are interested in doing EA work at the hotel. We are offering (up to full) subsidies on accommodation and board for those wishing to learn or work on research or charitable projects (in fitting with our charitable objects).See our Grant Making Policy for more details.Along with the ups and downs in numbers, we’ve had ups and downs in other ways. We were delighted to receive our largest grant to date, from the FTX Future Fund, in May ($125k or ~ a year of runway), but this is now bittersweet given recent events. We condemn the actions of SBF and the FTX/Alameda inner circle, and are ashamed of the association. It’s possible the grant will be subject to clawbacks as a result of the commenced bankruptcy proceedings. As with many FTX grantees in the EA Community, we are following and discussing the situation as it unfolds. We intend to follow the consensus that emerges around any voluntary returning of unspent funds.Despite the significant funding, with the ongoing energy crisis, inflation in general, and increased spending on building maintenance and salaries, our costs have risen rapidly, and we were recently down to ~4 months of runway again. Enter the Survival & Flourishing Fund (SFF). We are extremely grateful to have been awarded a grant of $224,000(!) by Jaan Tallinn as per their most recent announcement.In order to attract and retain talent, with the last grant we upped our management salaries to ~the UK median salary (£31,286), plus accommodation and food (worth about £6k).It’s now 4.5 years since we first opened. Since then we have supported ~100 EAs aspiring to do direct work with their career development, and hosted another ~200 visitors from the EA community participating in events, networking and community building. We’ve established an EA community hub in a relatively low cost location. We believe there is plenty of potential demand for it to scale, but we still need to get the word out (which we are doing in part with this blog post).Our impactGrantee workThere are two main aspects to our potential impact: the direct work and career building of our grantees, and the community building and networking we facilitate.We are open to people working on all cause areas of EA, with the caveat that the work we facilitate is desk-based and mostly remote. In practice, this has meant that longtermist topics, especially x-risks, and in particular AI Alignment, have been foremost amongst the work of the grantees we have hosted. But we have also had grantees interested in animal welfare, global health, wellbeing, development and progress, and meta topics related to EA community building.Since our last update, we have had a number of grantees go on to internships, contracts and jobs at the likes of SERI, CHAI, Alvea, Re...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEEALAR: 2022 Update, published by CEEALAR on December 13, 2022 on The Effective Altruism Forum.Tldr: we are still going, currently have lots of space, and have potential for further growth. Please apply if you have EA-related learning or research you want to do that requires support.UpdateIt’s been a while since our last update, but, suffice to say, we are still here! During 2021 we gradually increased our numbers again, from a low of 4 grantees mid-pandemic, to 15 by the end of the year (our full capacity, with no room sharing and 2 staff living on site). We lifted all Covid restrictions in March 2022, and things started to again feel like they were pre-pandemic. However, our building and its contents are old, and in mid-May this year we closed to new grantees for building repairs and maintenance. We reopened bookings again at the end of July, by which time we had once again got down to very low numbers - we are now up and running again and starting to fill up with new grantees, but we still have plenty of spare capacity. Please apply if you are interested in doing EA work at the hotel. We are offering (up to full) subsidies on accommodation and board for those wishing to learn or work on research or charitable projects (in fitting with our charitable objects).See our Grant Making Policy for more details.Along with the ups and downs in numbers, we’ve had ups and downs in other ways. We were delighted to receive our largest grant to date, from the FTX Future Fund, in May ($125k or ~ a year of runway), but this is now bittersweet given recent events. We condemn the actions of SBF and the FTX/Alameda inner circle, and are ashamed of the association. It’s possible the grant will be subject to clawbacks as a result of the commenced bankruptcy proceedings. As with many FTX grantees in the EA Community, we are following and discussing the situation as it unfolds. We intend to follow the consensus that emerges around any voluntary returning of unspent funds.Despite the significant funding, with the ongoing energy crisis, inflation in general, and increased spending on building maintenance and salaries, our costs have risen rapidly, and we were recently down to ~4 months of runway again. Enter the Survival & Flourishing Fund (SFF). We are extremely grateful to have been awarded a grant of $224,000(!) by Jaan Tallinn as per their most recent announcement.In order to attract and retain talent, with the last grant we upped our management salaries to ~the UK median salary (£31,286), plus accommodation and food (worth about £6k).It’s now 4.5 years since we first opened. Since then we have supported ~100 EAs aspiring to do direct work with their career development, and hosted another ~200 visitors from the EA community participating in events, networking and community building. We’ve established an EA community hub in a relatively low cost location. We believe there is plenty of potential demand for it to scale, but we still need to get the word out (which we are doing in part with this blog post).Our impactGrantee workThere are two main aspects to our potential impact: the direct work and career building of our grantees, and the community building and networking we facilitate.We are open to people working on all cause areas of EA, with the caveat that the work we facilitate is desk-based and mostly remote. In practice, this has meant that longtermist topics, especially x-risks, and in particular AI Alignment, have been foremost amongst the work of the grantees we have hosted. But we have also had grantees interested in animal welfare, global health, wellbeing, development and progress, and meta topics related to EA community building.Since our last update, we have had a number of grantees go on to internships, contracts and jobs at the likes of SERI, CHAI, Alvea, Re...]]>
CEEALAR https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 12:26 None full 4102
osrBzAQaBE2jnNQRd_EA EA - Altruism and Development - It's complicated... by DavidNash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Altruism and Development - It's complicated..., published by DavidNash on December 12, 2022 on The Effective Altruism Forum.An interesting post from Shruti Rajagopalan looking at the trade off between legibility and complexity in evaluating philanthropic efforts.As December rolls in, my inbox is filled with requests for donations, often from organizations I have given to in the past. This holiday season is also bittersweet because I cannot visit Delhi, where I was born and my parents still live, because of the air pollution and smog during the winter months. In Delhi, I find it hard to breathe, and usually lose my voice because of inflammation caused by particulate matter pollution. This year, I am under doctor’s orders to avoid travelling to Delhi in the winter; I’ve been struggling with respiratory problems from long Covid.With air pollution dominating my thoughts and nudges for charitable giving in my inbox, my first instinct is to give to causes that help mitigate pollution in Delhi. But I am also aware of the literature on emotional giving or ineffective altruism. In their 2021 paper, Caviola, Schubert and Greene explain why both effective and ineffective causes may attract dollars. People often give emotionally to a cause that has personally impacted them in some way.This paper resonated with me because I am exactly the sort of irrational dog lover likely to support the best training programs for guide dogs. These super dogs have my lifelong admiration. My Labrador retrievers can barely fetch a ball.We all know air pollution is bad. But how bad? And compared to what?As an alternative, I looked up the top charities recommended by GiveWell—the top two work on reducing Malaria deaths. Malaria kills between 600,000 and 700,000 each year. And GiveWell is considered one of the most credible evaluators in the philanthropic space. Should I be thinking less about air pollution in Delhi and more about malaria in Africa?So, I thought it best to evaluate 1) my priors on air pollution, 2) whether air pollution mitigation in Delhi merits my dollars/rupees. And if Delhi air pollution merits intervention, then it would be good to 3) identify the reasons air pollution became such a big problem in Delhi (you would be surprised), which would lead to uncovering 4) how to mitigate the problem of air pollution so I can decide where to send my dollars. And since I got to #4, to understand 5) why people think that giving to malaria charities is “higher impact” than solving air pollution.6. Back to Malaria—Is It Really That Simple?While writing this post, I also thought more about malaria and whether malaria prevention is more complex than impact evaluations lead us to believe. If legibility is the consequence of a narrowing of vision to make a complex problem tractable, then are these malaria mitigation interventions too simple?95% of all malaria deaths are in Africa, and that malaria disproportionately kills children.This is probably why the effective altruism community, which believes in helping those far removed from one’s situation, measures by lives saved per dollar when thinking about long-term and high-impact efforts, and rates malaria prevention charities so highly. In his latest column, Ezra Klein defends the basic principles of effective altruism and separates it from the SBF-FTX mess.It is impossible not to feel for the children in Africa dying from malaria. But suggesting that distributing nets and antimalarial medication is the best way to save lives and prevent illness compared to anything else we know is narrow. Regions outside Africa only account for 4% of malaria deaths. But I don’t see high use of mosquito nets and antimalarial medication in Europe and the U.S. Outside of camping equipment stores, I don’t think I have seen any mosquito nets bought or sold...]]>
DavidNash https://forum.effectivealtruism.org/posts/osrBzAQaBE2jnNQRd/altruism-and-development-it-s-complicated Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Altruism and Development - It's complicated..., published by DavidNash on December 12, 2022 on The Effective Altruism Forum.An interesting post from Shruti Rajagopalan looking at the trade off between legibility and complexity in evaluating philanthropic efforts.As December rolls in, my inbox is filled with requests for donations, often from organizations I have given to in the past. This holiday season is also bittersweet because I cannot visit Delhi, where I was born and my parents still live, because of the air pollution and smog during the winter months. In Delhi, I find it hard to breathe, and usually lose my voice because of inflammation caused by particulate matter pollution. This year, I am under doctor’s orders to avoid travelling to Delhi in the winter; I’ve been struggling with respiratory problems from long Covid.With air pollution dominating my thoughts and nudges for charitable giving in my inbox, my first instinct is to give to causes that help mitigate pollution in Delhi. But I am also aware of the literature on emotional giving or ineffective altruism. In their 2021 paper, Caviola, Schubert and Greene explain why both effective and ineffective causes may attract dollars. People often give emotionally to a cause that has personally impacted them in some way.This paper resonated with me because I am exactly the sort of irrational dog lover likely to support the best training programs for guide dogs. These super dogs have my lifelong admiration. My Labrador retrievers can barely fetch a ball.We all know air pollution is bad. But how bad? And compared to what?As an alternative, I looked up the top charities recommended by GiveWell—the top two work on reducing Malaria deaths. Malaria kills between 600,000 and 700,000 each year. And GiveWell is considered one of the most credible evaluators in the philanthropic space. Should I be thinking less about air pollution in Delhi and more about malaria in Africa?So, I thought it best to evaluate 1) my priors on air pollution, 2) whether air pollution mitigation in Delhi merits my dollars/rupees. And if Delhi air pollution merits intervention, then it would be good to 3) identify the reasons air pollution became such a big problem in Delhi (you would be surprised), which would lead to uncovering 4) how to mitigate the problem of air pollution so I can decide where to send my dollars. And since I got to #4, to understand 5) why people think that giving to malaria charities is “higher impact” than solving air pollution.6. Back to Malaria—Is It Really That Simple?While writing this post, I also thought more about malaria and whether malaria prevention is more complex than impact evaluations lead us to believe. If legibility is the consequence of a narrowing of vision to make a complex problem tractable, then are these malaria mitigation interventions too simple?95% of all malaria deaths are in Africa, and that malaria disproportionately kills children.This is probably why the effective altruism community, which believes in helping those far removed from one’s situation, measures by lives saved per dollar when thinking about long-term and high-impact efforts, and rates malaria prevention charities so highly. In his latest column, Ezra Klein defends the basic principles of effective altruism and separates it from the SBF-FTX mess.It is impossible not to feel for the children in Africa dying from malaria. But suggesting that distributing nets and antimalarial medication is the best way to save lives and prevent illness compared to anything else we know is narrow. Regions outside Africa only account for 4% of malaria deaths. But I don’t see high use of mosquito nets and antimalarial medication in Europe and the U.S. Outside of camping equipment stores, I don’t think I have seen any mosquito nets bought or sold...]]>
Tue, 13 Dec 2022 04:50:22 +0000 EA - Altruism and Development - It's complicated... by DavidNash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Altruism and Development - It's complicated..., published by DavidNash on December 12, 2022 on The Effective Altruism Forum.An interesting post from Shruti Rajagopalan looking at the trade off between legibility and complexity in evaluating philanthropic efforts.As December rolls in, my inbox is filled with requests for donations, often from organizations I have given to in the past. This holiday season is also bittersweet because I cannot visit Delhi, where I was born and my parents still live, because of the air pollution and smog during the winter months. In Delhi, I find it hard to breathe, and usually lose my voice because of inflammation caused by particulate matter pollution. This year, I am under doctor’s orders to avoid travelling to Delhi in the winter; I’ve been struggling with respiratory problems from long Covid.With air pollution dominating my thoughts and nudges for charitable giving in my inbox, my first instinct is to give to causes that help mitigate pollution in Delhi. But I am also aware of the literature on emotional giving or ineffective altruism. In their 2021 paper, Caviola, Schubert and Greene explain why both effective and ineffective causes may attract dollars. People often give emotionally to a cause that has personally impacted them in some way.This paper resonated with me because I am exactly the sort of irrational dog lover likely to support the best training programs for guide dogs. These super dogs have my lifelong admiration. My Labrador retrievers can barely fetch a ball.We all know air pollution is bad. But how bad? And compared to what?As an alternative, I looked up the top charities recommended by GiveWell—the top two work on reducing Malaria deaths. Malaria kills between 600,000 and 700,000 each year. And GiveWell is considered one of the most credible evaluators in the philanthropic space. Should I be thinking less about air pollution in Delhi and more about malaria in Africa?So, I thought it best to evaluate 1) my priors on air pollution, 2) whether air pollution mitigation in Delhi merits my dollars/rupees. And if Delhi air pollution merits intervention, then it would be good to 3) identify the reasons air pollution became such a big problem in Delhi (you would be surprised), which would lead to uncovering 4) how to mitigate the problem of air pollution so I can decide where to send my dollars. And since I got to #4, to understand 5) why people think that giving to malaria charities is “higher impact” than solving air pollution.6. Back to Malaria—Is It Really That Simple?While writing this post, I also thought more about malaria and whether malaria prevention is more complex than impact evaluations lead us to believe. If legibility is the consequence of a narrowing of vision to make a complex problem tractable, then are these malaria mitigation interventions too simple?95% of all malaria deaths are in Africa, and that malaria disproportionately kills children.This is probably why the effective altruism community, which believes in helping those far removed from one’s situation, measures by lives saved per dollar when thinking about long-term and high-impact efforts, and rates malaria prevention charities so highly. In his latest column, Ezra Klein defends the basic principles of effective altruism and separates it from the SBF-FTX mess.It is impossible not to feel for the children in Africa dying from malaria. But suggesting that distributing nets and antimalarial medication is the best way to save lives and prevent illness compared to anything else we know is narrow. Regions outside Africa only account for 4% of malaria deaths. But I don’t see high use of mosquito nets and antimalarial medication in Europe and the U.S. Outside of camping equipment stores, I don’t think I have seen any mosquito nets bought or sold...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Altruism and Development - It's complicated..., published by DavidNash on December 12, 2022 on The Effective Altruism Forum.An interesting post from Shruti Rajagopalan looking at the trade off between legibility and complexity in evaluating philanthropic efforts.As December rolls in, my inbox is filled with requests for donations, often from organizations I have given to in the past. This holiday season is also bittersweet because I cannot visit Delhi, where I was born and my parents still live, because of the air pollution and smog during the winter months. In Delhi, I find it hard to breathe, and usually lose my voice because of inflammation caused by particulate matter pollution. This year, I am under doctor’s orders to avoid travelling to Delhi in the winter; I’ve been struggling with respiratory problems from long Covid.With air pollution dominating my thoughts and nudges for charitable giving in my inbox, my first instinct is to give to causes that help mitigate pollution in Delhi. But I am also aware of the literature on emotional giving or ineffective altruism. In their 2021 paper, Caviola, Schubert and Greene explain why both effective and ineffective causes may attract dollars. People often give emotionally to a cause that has personally impacted them in some way.This paper resonated with me because I am exactly the sort of irrational dog lover likely to support the best training programs for guide dogs. These super dogs have my lifelong admiration. My Labrador retrievers can barely fetch a ball.We all know air pollution is bad. But how bad? And compared to what?As an alternative, I looked up the top charities recommended by GiveWell—the top two work on reducing Malaria deaths. Malaria kills between 600,000 and 700,000 each year. And GiveWell is considered one of the most credible evaluators in the philanthropic space. Should I be thinking less about air pollution in Delhi and more about malaria in Africa?So, I thought it best to evaluate 1) my priors on air pollution, 2) whether air pollution mitigation in Delhi merits my dollars/rupees. And if Delhi air pollution merits intervention, then it would be good to 3) identify the reasons air pollution became such a big problem in Delhi (you would be surprised), which would lead to uncovering 4) how to mitigate the problem of air pollution so I can decide where to send my dollars. And since I got to #4, to understand 5) why people think that giving to malaria charities is “higher impact” than solving air pollution.6. Back to Malaria—Is It Really That Simple?While writing this post, I also thought more about malaria and whether malaria prevention is more complex than impact evaluations lead us to believe. If legibility is the consequence of a narrowing of vision to make a complex problem tractable, then are these malaria mitigation interventions too simple?95% of all malaria deaths are in Africa, and that malaria disproportionately kills children.This is probably why the effective altruism community, which believes in helping those far removed from one’s situation, measures by lives saved per dollar when thinking about long-term and high-impact efforts, and rates malaria prevention charities so highly. In his latest column, Ezra Klein defends the basic principles of effective altruism and separates it from the SBF-FTX mess.It is impossible not to feel for the children in Africa dying from malaria. But suggesting that distributing nets and antimalarial medication is the best way to save lives and prevent illness compared to anything else we know is narrow. Regions outside Africa only account for 4% of malaria deaths. But I don’t see high use of mosquito nets and antimalarial medication in Europe and the U.S. Outside of camping equipment stores, I don’t think I have seen any mosquito nets bought or sold...]]>
DavidNash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:45 None full 4100
jnrmWcJnfa5c2Hnvx_EA EA - Sam Bankman-Fried has been arrested by Markus Amalthea Magnuson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sam Bankman-Fried has been arrested, published by Markus Amalthea Magnuson on December 12, 2022 on The Effective Altruism Forum.Confirmed by Sarah Emerson, a tech reporter at Forbes: (Edit: Now also confirmed by NYT, CNBC, Bloomberg, Yahoo Finance, and others.)Includes statement from the Attorney General of The Bahamas, transcribed below:Statement from the Attorney General of The Bahamas Sen. Ryan Pinder KC on the arrest of Sam Bankman-FriedOn 12 December 2022, the Office of the Attorney General of The Bahamas is announcing the arrest by The Royal Bahamas Police Force of Sam Bankman-Fried ("SBF"), former CEO of FTX. SBF's arrest followed receipt of formal notification from the United States that it has filed criminal charges against SBF and is likely to request his extradition.As a result of the notification received and the material provided therewith, it was deemed appropriate for the Attorney General to seek SBF's arrest and hold him in custody pursuant to our nation's Extradition Act.At such time as a formal request for extradition is made, The Bahamas intends to process it promptly, pursuant to Bahamian law and its treaty obligations with the United States.Responding to SBF's arrest, Prime Minister Davis stated, "The Bahamas and the United States have a shared interest in holding accountable all individuals associated with FTX who may have betrayed the public trust and broken the law. While the United States is pursuing criminal charges against SBF individually, The Bahamas will continue its own regulatory and criminal investigations into the collapse of FTX, with the continued cooperation of its law enforcement and regulatory partners in the United States and elsewhere."December 12, 2022Office of The Attorney General &Ministry of Legal AffairsCommonwealth of The BahamasThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Markus Amalthea Magnuson https://forum.effectivealtruism.org/posts/jnrmWcJnfa5c2Hnvx/sam-bankman-fried-has-been-arrested Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sam Bankman-Fried has been arrested, published by Markus Amalthea Magnuson on December 12, 2022 on The Effective Altruism Forum.Confirmed by Sarah Emerson, a tech reporter at Forbes: (Edit: Now also confirmed by NYT, CNBC, Bloomberg, Yahoo Finance, and others.)Includes statement from the Attorney General of The Bahamas, transcribed below:Statement from the Attorney General of The Bahamas Sen. Ryan Pinder KC on the arrest of Sam Bankman-FriedOn 12 December 2022, the Office of the Attorney General of The Bahamas is announcing the arrest by The Royal Bahamas Police Force of Sam Bankman-Fried ("SBF"), former CEO of FTX. SBF's arrest followed receipt of formal notification from the United States that it has filed criminal charges against SBF and is likely to request his extradition.As a result of the notification received and the material provided therewith, it was deemed appropriate for the Attorney General to seek SBF's arrest and hold him in custody pursuant to our nation's Extradition Act.At such time as a formal request for extradition is made, The Bahamas intends to process it promptly, pursuant to Bahamian law and its treaty obligations with the United States.Responding to SBF's arrest, Prime Minister Davis stated, "The Bahamas and the United States have a shared interest in holding accountable all individuals associated with FTX who may have betrayed the public trust and broken the law. While the United States is pursuing criminal charges against SBF individually, The Bahamas will continue its own regulatory and criminal investigations into the collapse of FTX, with the continued cooperation of its law enforcement and regulatory partners in the United States and elsewhere."December 12, 2022Office of The Attorney General &Ministry of Legal AffairsCommonwealth of The BahamasThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 13 Dec 2022 00:53:58 +0000 EA - Sam Bankman-Fried has been arrested by Markus Amalthea Magnuson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sam Bankman-Fried has been arrested, published by Markus Amalthea Magnuson on December 12, 2022 on The Effective Altruism Forum.Confirmed by Sarah Emerson, a tech reporter at Forbes: (Edit: Now also confirmed by NYT, CNBC, Bloomberg, Yahoo Finance, and others.)Includes statement from the Attorney General of The Bahamas, transcribed below:Statement from the Attorney General of The Bahamas Sen. Ryan Pinder KC on the arrest of Sam Bankman-FriedOn 12 December 2022, the Office of the Attorney General of The Bahamas is announcing the arrest by The Royal Bahamas Police Force of Sam Bankman-Fried ("SBF"), former CEO of FTX. SBF's arrest followed receipt of formal notification from the United States that it has filed criminal charges against SBF and is likely to request his extradition.As a result of the notification received and the material provided therewith, it was deemed appropriate for the Attorney General to seek SBF's arrest and hold him in custody pursuant to our nation's Extradition Act.At such time as a formal request for extradition is made, The Bahamas intends to process it promptly, pursuant to Bahamian law and its treaty obligations with the United States.Responding to SBF's arrest, Prime Minister Davis stated, "The Bahamas and the United States have a shared interest in holding accountable all individuals associated with FTX who may have betrayed the public trust and broken the law. While the United States is pursuing criminal charges against SBF individually, The Bahamas will continue its own regulatory and criminal investigations into the collapse of FTX, with the continued cooperation of its law enforcement and regulatory partners in the United States and elsewhere."December 12, 2022Office of The Attorney General &Ministry of Legal AffairsCommonwealth of The BahamasThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sam Bankman-Fried has been arrested, published by Markus Amalthea Magnuson on December 12, 2022 on The Effective Altruism Forum.Confirmed by Sarah Emerson, a tech reporter at Forbes: (Edit: Now also confirmed by NYT, CNBC, Bloomberg, Yahoo Finance, and others.)Includes statement from the Attorney General of The Bahamas, transcribed below:Statement from the Attorney General of The Bahamas Sen. Ryan Pinder KC on the arrest of Sam Bankman-FriedOn 12 December 2022, the Office of the Attorney General of The Bahamas is announcing the arrest by The Royal Bahamas Police Force of Sam Bankman-Fried ("SBF"), former CEO of FTX. SBF's arrest followed receipt of formal notification from the United States that it has filed criminal charges against SBF and is likely to request his extradition.As a result of the notification received and the material provided therewith, it was deemed appropriate for the Attorney General to seek SBF's arrest and hold him in custody pursuant to our nation's Extradition Act.At such time as a formal request for extradition is made, The Bahamas intends to process it promptly, pursuant to Bahamian law and its treaty obligations with the United States.Responding to SBF's arrest, Prime Minister Davis stated, "The Bahamas and the United States have a shared interest in holding accountable all individuals associated with FTX who may have betrayed the public trust and broken the law. While the United States is pursuing criminal charges against SBF individually, The Bahamas will continue its own regulatory and criminal investigations into the collapse of FTX, with the continued cooperation of its law enforcement and regulatory partners in the United States and elsewhere."December 12, 2022Office of The Attorney General &Ministry of Legal AffairsCommonwealth of The BahamasThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Markus Amalthea Magnuson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:59 None full 4099
wgtSCg8cFDRXvZzxS_EA EA - EA is probably undergoing "Evaporative Cooling" right now by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is probably undergoing "Evaporative Cooling" right now, published by freedomandutility on December 12, 2022 on The Effective Altruism Forum.Eliezer Yudkowsky has an excellent post on "Evaporative Cooling of Group Beliefs".Essentially, when a group goes through a crisis, those who hold the group's beliefs least strongly leave, and those who hold the group's beliefs most strongly stay.This might leave the remaining group less able to identify weaknesses within group beliefs or course-correct, or "steer".The FTX collapse, bad press and bad optics of the Whytham Abbey purchase probably mean that this is happening in EA right now.I'm not really sure what to do about this, but one suggestion might be for community building to move away from the model of trying to produce highly-engaged EAs, and switch to trying to produce moderately-engaged EAs, who might be better placed to offer helpful criticisms and help steer the movement towards doing the most good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
freedomandutility https://forum.effectivealtruism.org/posts/wgtSCg8cFDRXvZzxS/ea-is-probably-undergoing-evaporative-cooling-right-now Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is probably undergoing "Evaporative Cooling" right now, published by freedomandutility on December 12, 2022 on The Effective Altruism Forum.Eliezer Yudkowsky has an excellent post on "Evaporative Cooling of Group Beliefs".Essentially, when a group goes through a crisis, those who hold the group's beliefs least strongly leave, and those who hold the group's beliefs most strongly stay.This might leave the remaining group less able to identify weaknesses within group beliefs or course-correct, or "steer".The FTX collapse, bad press and bad optics of the Whytham Abbey purchase probably mean that this is happening in EA right now.I'm not really sure what to do about this, but one suggestion might be for community building to move away from the model of trying to produce highly-engaged EAs, and switch to trying to produce moderately-engaged EAs, who might be better placed to offer helpful criticisms and help steer the movement towards doing the most good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 12 Dec 2022 17:41:34 +0000 EA - EA is probably undergoing "Evaporative Cooling" right now by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is probably undergoing "Evaporative Cooling" right now, published by freedomandutility on December 12, 2022 on The Effective Altruism Forum.Eliezer Yudkowsky has an excellent post on "Evaporative Cooling of Group Beliefs".Essentially, when a group goes through a crisis, those who hold the group's beliefs least strongly leave, and those who hold the group's beliefs most strongly stay.This might leave the remaining group less able to identify weaknesses within group beliefs or course-correct, or "steer".The FTX collapse, bad press and bad optics of the Whytham Abbey purchase probably mean that this is happening in EA right now.I'm not really sure what to do about this, but one suggestion might be for community building to move away from the model of trying to produce highly-engaged EAs, and switch to trying to produce moderately-engaged EAs, who might be better placed to offer helpful criticisms and help steer the movement towards doing the most good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is probably undergoing "Evaporative Cooling" right now, published by freedomandutility on December 12, 2022 on The Effective Altruism Forum.Eliezer Yudkowsky has an excellent post on "Evaporative Cooling of Group Beliefs".Essentially, when a group goes through a crisis, those who hold the group's beliefs least strongly leave, and those who hold the group's beliefs most strongly stay.This might leave the remaining group less able to identify weaknesses within group beliefs or course-correct, or "steer".The FTX collapse, bad press and bad optics of the Whytham Abbey purchase probably mean that this is happening in EA right now.I'm not really sure what to do about this, but one suggestion might be for community building to move away from the model of trying to produce highly-engaged EAs, and switch to trying to produce moderately-engaged EAs, who might be better placed to offer helpful criticisms and help steer the movement towards doing the most good.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
freedomandutility https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:10 None full 4089
M6ebKkRos2nQe9gqG_EA EA - Reflections on Vox's "How effective altruism let SBF happen" by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on Vox's "How effective altruism let SBF happen", published by Richard Y Chappell on December 12, 2022 on The Effective Altruism Forum.Dylan Matthews has an interesting piece up in Vox, 'How effective altruism let SBF happen'. I feel very conflicted about it, as I think it contains some criticisms that are importantly correct, but then takes it in a direction I think is importantly mistaken. I'll be curious to hear others' thoughts.Here's what I think is most right about it:There’s still plenty we don’t know, but based on what we do know, I don’t think the problem was earning to give, or billionaire money, or longtermism per se. But the problem does lie in the culture of effective altruism... it is deeply immature and myopic, in a way that enabled Bankman-Fried and Ellison, and it desperately needs to grow up. That means emulating the kinds of practices that more mature philanthropic institutions and movements have used for centuries, and becoming much more risk-averse.Like many youth-led movements, there's a tendency within EA to be skeptical of established institutions and ways of running things. Such skepticism is healthy in moderation, but taken to extremes can lead to things like FTX's apparent total failure of financial oversight and corporate governance. Installing SBF as a corporate "philosopher-king" turns out not to have been great for FTX, in much the same way that we might predict installing a philosopher-king as absolute dictator would not be great for a country.I'm obviously very pro-philosophy, and think it offers important practical guidance too, but it's not a substitute for robust institutions. So here is where I feel most conflicted about the article. Because I agree we should be wary of philosopher-kings. But that's mostly just because we should be wary of "kings" (or immature dictators) in general.So I'm not thrilled with a framing that says (as Matthews goes on to say) that "the problem is the dominance of philosophy", because I don't think philosophy tells you to install philosopher-kings. Instead, I'd say, the problem is immaturity, and lack of respect for established institutional guard-rails for good governance (i.e., bureaucracy). What EA needs to learn, IMO, is this missing respect for "established" procedures, and a culture of consulting with more senior advisers who understand how institutions work (and why).It's important to get this diagnosis right, since there's no reason to think that replacing 30 y/o philosophers with equally young anticapitalist activists (say) would do any good here. What's needed is people with more institutional experience (which will often mean significantly older people), and a sensible division of labour between philosophy and policy, ideas and implementation.There are parts of the article that sort of point in this direction, but then it spins away and doesn't quite articulate the problem correctly. Or so it seems to me. But again, curious to hear others' thoughts.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Richard Y Chappell https://forum.effectivealtruism.org/posts/M6ebKkRos2nQe9gqG/reflections-on-vox-s-how-effective-altruism-let-sbf-happen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on Vox's "How effective altruism let SBF happen", published by Richard Y Chappell on December 12, 2022 on The Effective Altruism Forum.Dylan Matthews has an interesting piece up in Vox, 'How effective altruism let SBF happen'. I feel very conflicted about it, as I think it contains some criticisms that are importantly correct, but then takes it in a direction I think is importantly mistaken. I'll be curious to hear others' thoughts.Here's what I think is most right about it:There’s still plenty we don’t know, but based on what we do know, I don’t think the problem was earning to give, or billionaire money, or longtermism per se. But the problem does lie in the culture of effective altruism... it is deeply immature and myopic, in a way that enabled Bankman-Fried and Ellison, and it desperately needs to grow up. That means emulating the kinds of practices that more mature philanthropic institutions and movements have used for centuries, and becoming much more risk-averse.Like many youth-led movements, there's a tendency within EA to be skeptical of established institutions and ways of running things. Such skepticism is healthy in moderation, but taken to extremes can lead to things like FTX's apparent total failure of financial oversight and corporate governance. Installing SBF as a corporate "philosopher-king" turns out not to have been great for FTX, in much the same way that we might predict installing a philosopher-king as absolute dictator would not be great for a country.I'm obviously very pro-philosophy, and think it offers important practical guidance too, but it's not a substitute for robust institutions. So here is where I feel most conflicted about the article. Because I agree we should be wary of philosopher-kings. But that's mostly just because we should be wary of "kings" (or immature dictators) in general.So I'm not thrilled with a framing that says (as Matthews goes on to say) that "the problem is the dominance of philosophy", because I don't think philosophy tells you to install philosopher-kings. Instead, I'd say, the problem is immaturity, and lack of respect for established institutional guard-rails for good governance (i.e., bureaucracy). What EA needs to learn, IMO, is this missing respect for "established" procedures, and a culture of consulting with more senior advisers who understand how institutions work (and why).It's important to get this diagnosis right, since there's no reason to think that replacing 30 y/o philosophers with equally young anticapitalist activists (say) would do any good here. What's needed is people with more institutional experience (which will often mean significantly older people), and a sensible division of labour between philosophy and policy, ideas and implementation.There are parts of the article that sort of point in this direction, but then it spins away and doesn't quite articulate the problem correctly. Or so it seems to me. But again, curious to hear others' thoughts.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 12 Dec 2022 17:27:13 +0000 EA - Reflections on Vox's "How effective altruism let SBF happen" by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on Vox's "How effective altruism let SBF happen", published by Richard Y Chappell on December 12, 2022 on The Effective Altruism Forum.Dylan Matthews has an interesting piece up in Vox, 'How effective altruism let SBF happen'. I feel very conflicted about it, as I think it contains some criticisms that are importantly correct, but then takes it in a direction I think is importantly mistaken. I'll be curious to hear others' thoughts.Here's what I think is most right about it:There’s still plenty we don’t know, but based on what we do know, I don’t think the problem was earning to give, or billionaire money, or longtermism per se. But the problem does lie in the culture of effective altruism... it is deeply immature and myopic, in a way that enabled Bankman-Fried and Ellison, and it desperately needs to grow up. That means emulating the kinds of practices that more mature philanthropic institutions and movements have used for centuries, and becoming much more risk-averse.Like many youth-led movements, there's a tendency within EA to be skeptical of established institutions and ways of running things. Such skepticism is healthy in moderation, but taken to extremes can lead to things like FTX's apparent total failure of financial oversight and corporate governance. Installing SBF as a corporate "philosopher-king" turns out not to have been great for FTX, in much the same way that we might predict installing a philosopher-king as absolute dictator would not be great for a country.I'm obviously very pro-philosophy, and think it offers important practical guidance too, but it's not a substitute for robust institutions. So here is where I feel most conflicted about the article. Because I agree we should be wary of philosopher-kings. But that's mostly just because we should be wary of "kings" (or immature dictators) in general.So I'm not thrilled with a framing that says (as Matthews goes on to say) that "the problem is the dominance of philosophy", because I don't think philosophy tells you to install philosopher-kings. Instead, I'd say, the problem is immaturity, and lack of respect for established institutional guard-rails for good governance (i.e., bureaucracy). What EA needs to learn, IMO, is this missing respect for "established" procedures, and a culture of consulting with more senior advisers who understand how institutions work (and why).It's important to get this diagnosis right, since there's no reason to think that replacing 30 y/o philosophers with equally young anticapitalist activists (say) would do any good here. What's needed is people with more institutional experience (which will often mean significantly older people), and a sensible division of labour between philosophy and policy, ideas and implementation.There are parts of the article that sort of point in this direction, but then it spins away and doesn't quite articulate the problem correctly. Or so it seems to me. But again, curious to hear others' thoughts.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on Vox's "How effective altruism let SBF happen", published by Richard Y Chappell on December 12, 2022 on The Effective Altruism Forum.Dylan Matthews has an interesting piece up in Vox, 'How effective altruism let SBF happen'. I feel very conflicted about it, as I think it contains some criticisms that are importantly correct, but then takes it in a direction I think is importantly mistaken. I'll be curious to hear others' thoughts.Here's what I think is most right about it:There’s still plenty we don’t know, but based on what we do know, I don’t think the problem was earning to give, or billionaire money, or longtermism per se. But the problem does lie in the culture of effective altruism... it is deeply immature and myopic, in a way that enabled Bankman-Fried and Ellison, and it desperately needs to grow up. That means emulating the kinds of practices that more mature philanthropic institutions and movements have used for centuries, and becoming much more risk-averse.Like many youth-led movements, there's a tendency within EA to be skeptical of established institutions and ways of running things. Such skepticism is healthy in moderation, but taken to extremes can lead to things like FTX's apparent total failure of financial oversight and corporate governance. Installing SBF as a corporate "philosopher-king" turns out not to have been great for FTX, in much the same way that we might predict installing a philosopher-king as absolute dictator would not be great for a country.I'm obviously very pro-philosophy, and think it offers important practical guidance too, but it's not a substitute for robust institutions. So here is where I feel most conflicted about the article. Because I agree we should be wary of philosopher-kings. But that's mostly just because we should be wary of "kings" (or immature dictators) in general.So I'm not thrilled with a framing that says (as Matthews goes on to say) that "the problem is the dominance of philosophy", because I don't think philosophy tells you to install philosopher-kings. Instead, I'd say, the problem is immaturity, and lack of respect for established institutional guard-rails for good governance (i.e., bureaucracy). What EA needs to learn, IMO, is this missing respect for "established" procedures, and a culture of consulting with more senior advisers who understand how institutions work (and why).It's important to get this diagnosis right, since there's no reason to think that replacing 30 y/o philosophers with equally young anticapitalist activists (say) would do any good here. What's needed is people with more institutional experience (which will often mean significantly older people), and a sensible division of labour between philosophy and policy, ideas and implementation.There are parts of the article that sort of point in this direction, but then it spins away and doesn't quite articulate the problem correctly. Or so it seems to me. But again, curious to hear others' thoughts.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Richard Y Chappell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:58 None full 4087
Buongkf4KXmBP2jiG_EA EA - Announcing WildAnimalSuffering.org, a new resource launched for the cause by David van Beveren Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing WildAnimalSuffering.org, a new resource launched for the cause, published by David van Beveren on December 12, 2022 on The Effective Altruism Forum.I'm very excited to announce that Vegan Hacktivists has just released our latest project, Wild Animal Suffering, for one of EA's highly important and neglected focus areas.This website, very briefly, educates the viewer on the issues surrounding Wild Animal Suffering, and provides them with easy to access resources to getting involved and learning more. It's very important to note that this website is not intended to be a deep dive into Wild Animal Suffering, nor have covered everything there is to cover. Our main audience is folks interested in wild life welfare, and our secondary audience is vegans who may be interested in learning more.“With its clear, concise explanations and visuals, this site is ideal for people who are looking to learn about what the lives of wild animals are really like, and what we can do to help. I hope it inspires people to think differently about addressing not just anthropogenic harms but also natural ones. This will also be an excellent resource for animal advocates who are looking for effective ways to communicate why helping animals is as essential as refraining from harming them.”Leah McKelvie, Animal EthicsOur primary goal here was to combine the many and various fantastic resources and content surrounding Wild Animal Suffering and turn it into a more visually engaging, friendly, and accessible format. We hope this makes it easier for those in our movement to have one link they can share for folks to consume and get started with.We're really excited with the launch, and want to give a special thanks to our friends at Animal Ethics, Wild Animal Initiative, and Rethink Priorities for lending their expertise. Note that naming these organizations does not constitute their support or endorsement for all of the varied content, opinions, or resources displayed on the site.“WildAnimalSuffering.org offers an accessible, engaging, and visually stunning introduction to the significant and pressing issue of wild animal suffering. The site fills a need by curating the best available information and resources all in one place, and I could see it becoming a key tool in building the movement.”Cat Kerr, Wild Animal InitiativeIf you'd like to support our launch, feel free to share this project within your networks if relevant— otherwise, we hope you enjoy the new resource!Thanks for reading.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
David van Beveren https://forum.effectivealtruism.org/posts/Buongkf4KXmBP2jiG/announcing-wildanimalsuffering-org-a-new-resource-launched Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing WildAnimalSuffering.org, a new resource launched for the cause, published by David van Beveren on December 12, 2022 on The Effective Altruism Forum.I'm very excited to announce that Vegan Hacktivists has just released our latest project, Wild Animal Suffering, for one of EA's highly important and neglected focus areas.This website, very briefly, educates the viewer on the issues surrounding Wild Animal Suffering, and provides them with easy to access resources to getting involved and learning more. It's very important to note that this website is not intended to be a deep dive into Wild Animal Suffering, nor have covered everything there is to cover. Our main audience is folks interested in wild life welfare, and our secondary audience is vegans who may be interested in learning more.“With its clear, concise explanations and visuals, this site is ideal for people who are looking to learn about what the lives of wild animals are really like, and what we can do to help. I hope it inspires people to think differently about addressing not just anthropogenic harms but also natural ones. This will also be an excellent resource for animal advocates who are looking for effective ways to communicate why helping animals is as essential as refraining from harming them.”Leah McKelvie, Animal EthicsOur primary goal here was to combine the many and various fantastic resources and content surrounding Wild Animal Suffering and turn it into a more visually engaging, friendly, and accessible format. We hope this makes it easier for those in our movement to have one link they can share for folks to consume and get started with.We're really excited with the launch, and want to give a special thanks to our friends at Animal Ethics, Wild Animal Initiative, and Rethink Priorities for lending their expertise. Note that naming these organizations does not constitute their support or endorsement for all of the varied content, opinions, or resources displayed on the site.“WildAnimalSuffering.org offers an accessible, engaging, and visually stunning introduction to the significant and pressing issue of wild animal suffering. The site fills a need by curating the best available information and resources all in one place, and I could see it becoming a key tool in building the movement.”Cat Kerr, Wild Animal InitiativeIf you'd like to support our launch, feel free to share this project within your networks if relevant— otherwise, we hope you enjoy the new resource!Thanks for reading.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 12 Dec 2022 15:23:09 +0000 EA - Announcing WildAnimalSuffering.org, a new resource launched for the cause by David van Beveren Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing WildAnimalSuffering.org, a new resource launched for the cause, published by David van Beveren on December 12, 2022 on The Effective Altruism Forum.I'm very excited to announce that Vegan Hacktivists has just released our latest project, Wild Animal Suffering, for one of EA's highly important and neglected focus areas.This website, very briefly, educates the viewer on the issues surrounding Wild Animal Suffering, and provides them with easy to access resources to getting involved and learning more. It's very important to note that this website is not intended to be a deep dive into Wild Animal Suffering, nor have covered everything there is to cover. Our main audience is folks interested in wild life welfare, and our secondary audience is vegans who may be interested in learning more.“With its clear, concise explanations and visuals, this site is ideal for people who are looking to learn about what the lives of wild animals are really like, and what we can do to help. I hope it inspires people to think differently about addressing not just anthropogenic harms but also natural ones. This will also be an excellent resource for animal advocates who are looking for effective ways to communicate why helping animals is as essential as refraining from harming them.”Leah McKelvie, Animal EthicsOur primary goal here was to combine the many and various fantastic resources and content surrounding Wild Animal Suffering and turn it into a more visually engaging, friendly, and accessible format. We hope this makes it easier for those in our movement to have one link they can share for folks to consume and get started with.We're really excited with the launch, and want to give a special thanks to our friends at Animal Ethics, Wild Animal Initiative, and Rethink Priorities for lending their expertise. Note that naming these organizations does not constitute their support or endorsement for all of the varied content, opinions, or resources displayed on the site.“WildAnimalSuffering.org offers an accessible, engaging, and visually stunning introduction to the significant and pressing issue of wild animal suffering. The site fills a need by curating the best available information and resources all in one place, and I could see it becoming a key tool in building the movement.”Cat Kerr, Wild Animal InitiativeIf you'd like to support our launch, feel free to share this project within your networks if relevant— otherwise, we hope you enjoy the new resource!Thanks for reading.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing WildAnimalSuffering.org, a new resource launched for the cause, published by David van Beveren on December 12, 2022 on The Effective Altruism Forum.I'm very excited to announce that Vegan Hacktivists has just released our latest project, Wild Animal Suffering, for one of EA's highly important and neglected focus areas.This website, very briefly, educates the viewer on the issues surrounding Wild Animal Suffering, and provides them with easy to access resources to getting involved and learning more. It's very important to note that this website is not intended to be a deep dive into Wild Animal Suffering, nor have covered everything there is to cover. Our main audience is folks interested in wild life welfare, and our secondary audience is vegans who may be interested in learning more.“With its clear, concise explanations and visuals, this site is ideal for people who are looking to learn about what the lives of wild animals are really like, and what we can do to help. I hope it inspires people to think differently about addressing not just anthropogenic harms but also natural ones. This will also be an excellent resource for animal advocates who are looking for effective ways to communicate why helping animals is as essential as refraining from harming them.”Leah McKelvie, Animal EthicsOur primary goal here was to combine the many and various fantastic resources and content surrounding Wild Animal Suffering and turn it into a more visually engaging, friendly, and accessible format. We hope this makes it easier for those in our movement to have one link they can share for folks to consume and get started with.We're really excited with the launch, and want to give a special thanks to our friends at Animal Ethics, Wild Animal Initiative, and Rethink Priorities for lending their expertise. Note that naming these organizations does not constitute their support or endorsement for all of the varied content, opinions, or resources displayed on the site.“WildAnimalSuffering.org offers an accessible, engaging, and visually stunning introduction to the significant and pressing issue of wild animal suffering. The site fills a need by curating the best available information and resources all in one place, and I could see it becoming a key tool in building the movement.”Cat Kerr, Wild Animal InitiativeIf you'd like to support our launch, feel free to share this project within your networks if relevant— otherwise, we hope you enjoy the new resource!Thanks for reading.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
David van Beveren https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:30 None full 4088
WBWXLZTF5FPzmK8nh_EA EA - Octopuses (Probably) Don't Have Nine Minds by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Octopuses (Probably) Don't Have Nine Minds, published by Bob Fischer on December 12, 2022 on The Effective Altruism Forum.Key TakeawaysHere are the key takeaways for the full report:Based on the split-brain condition in humans, some people have wondered whether some humans “house” multiple subjects.Based on superficial parallels between the split-brain condition and the apparent neurological structures of some animals—such as chickens and octopuses—some people have wondered whether those animals house multiple subjects too.To assign a non-negligible credence to this possibility, we’d need evidence that parts of these animals aren't just conscious, but that they have valenced conscious states (like pain), as that’s what matters morally (given our project’s assumptions).This evidence is difficult to get:The human case shows that unconscious mentality is powerful, so we can’t infer consciousness from many behaviors.Even when we can infer consciousness, we can’t necessarily infer a separate subject. After all, there are plausible interpretations of split-brain cases on which there are not separate subjects.Even if there are multiple subjects housed in an organism in some circumstances, it doesn’t follow that there are always multiple subjects. These additional subjects may only be generated in contexts that are irrelevant for practical purposes.If we don’t have any evidence that parts of these animals are conscious or that they have valenced conscious states, then insofar as we’re committed to having an empirically-driven approach to counting subjects, we shouldn’t postulate multiple subjects in these cases.That being said, the author is inclined to place up to a 0.1 credence that there are multiple subjects in the split-brain case, but no higher than 0.025 for the 1+8 model of octopuses.IntroductionThis is the sixth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post, which was written by Joe Gottlieb, is to summarize his full report on the phenomenal unity and cause prioritization, which explores whether, for certain species, there are empirical reasons to posit multiple welfare subjects per organism. That report is available here.Motivations and the Bottom LineWe normally assume that there is one conscious subject—or one entity who undergoes conscious experiences—per conscious animal. But perhaps this isn’t always true: perhaps some animals ‘house’ more than one conscious subject. If those subjects are also welfare subjects—beings with the ability to accrue welfare goods and bad—then this might matter when trying to determine whether we are allocating resources in a way that maximizes expected welfare gained per dollar spent. When we theorize about these animals’ capacity for welfare, we would no longer be theorizing about a single welfare subject, but multiple such subjects.[1]In humans, people have speculated about this possibility based on “split-brain” cases, where the corpus callosum has been wholly or partially severed (e.g., Bayne 2010; Schechter 2018). Some non-human animals, like birds, approximate the split-brain condition as the norm, and others, like the octopus, exhibit a striking lack of integration and highly decentralized nervous systems, with surprising levels of peripheral autonomy. And in the case of the octopus, Peter Godfrey-Smith suggests that “[w]e should.at least consider the possibility that an octopus is a being with multiple selves”, one for central brain, and then one for each arm (2020: 148; cf. Carls-Diamante 2017, 2019, 2022).What follows is a high-level summary of my full report on...]]>
Bob Fischer https://forum.effectivealtruism.org/posts/WBWXLZTF5FPzmK8nh/octopuses-probably-don-t-have-nine-minds Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Octopuses (Probably) Don't Have Nine Minds, published by Bob Fischer on December 12, 2022 on The Effective Altruism Forum.Key TakeawaysHere are the key takeaways for the full report:Based on the split-brain condition in humans, some people have wondered whether some humans “house” multiple subjects.Based on superficial parallels between the split-brain condition and the apparent neurological structures of some animals—such as chickens and octopuses—some people have wondered whether those animals house multiple subjects too.To assign a non-negligible credence to this possibility, we’d need evidence that parts of these animals aren't just conscious, but that they have valenced conscious states (like pain), as that’s what matters morally (given our project’s assumptions).This evidence is difficult to get:The human case shows that unconscious mentality is powerful, so we can’t infer consciousness from many behaviors.Even when we can infer consciousness, we can’t necessarily infer a separate subject. After all, there are plausible interpretations of split-brain cases on which there are not separate subjects.Even if there are multiple subjects housed in an organism in some circumstances, it doesn’t follow that there are always multiple subjects. These additional subjects may only be generated in contexts that are irrelevant for practical purposes.If we don’t have any evidence that parts of these animals are conscious or that they have valenced conscious states, then insofar as we’re committed to having an empirically-driven approach to counting subjects, we shouldn’t postulate multiple subjects in these cases.That being said, the author is inclined to place up to a 0.1 credence that there are multiple subjects in the split-brain case, but no higher than 0.025 for the 1+8 model of octopuses.IntroductionThis is the sixth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post, which was written by Joe Gottlieb, is to summarize his full report on the phenomenal unity and cause prioritization, which explores whether, for certain species, there are empirical reasons to posit multiple welfare subjects per organism. That report is available here.Motivations and the Bottom LineWe normally assume that there is one conscious subject—or one entity who undergoes conscious experiences—per conscious animal. But perhaps this isn’t always true: perhaps some animals ‘house’ more than one conscious subject. If those subjects are also welfare subjects—beings with the ability to accrue welfare goods and bad—then this might matter when trying to determine whether we are allocating resources in a way that maximizes expected welfare gained per dollar spent. When we theorize about these animals’ capacity for welfare, we would no longer be theorizing about a single welfare subject, but multiple such subjects.[1]In humans, people have speculated about this possibility based on “split-brain” cases, where the corpus callosum has been wholly or partially severed (e.g., Bayne 2010; Schechter 2018). Some non-human animals, like birds, approximate the split-brain condition as the norm, and others, like the octopus, exhibit a striking lack of integration and highly decentralized nervous systems, with surprising levels of peripheral autonomy. And in the case of the octopus, Peter Godfrey-Smith suggests that “[w]e should.at least consider the possibility that an octopus is a being with multiple selves”, one for central brain, and then one for each arm (2020: 148; cf. Carls-Diamante 2017, 2019, 2022).What follows is a high-level summary of my full report on...]]>
Mon, 12 Dec 2022 15:14:54 +0000 EA - Octopuses (Probably) Don't Have Nine Minds by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Octopuses (Probably) Don't Have Nine Minds, published by Bob Fischer on December 12, 2022 on The Effective Altruism Forum.Key TakeawaysHere are the key takeaways for the full report:Based on the split-brain condition in humans, some people have wondered whether some humans “house” multiple subjects.Based on superficial parallels between the split-brain condition and the apparent neurological structures of some animals—such as chickens and octopuses—some people have wondered whether those animals house multiple subjects too.To assign a non-negligible credence to this possibility, we’d need evidence that parts of these animals aren't just conscious, but that they have valenced conscious states (like pain), as that’s what matters morally (given our project’s assumptions).This evidence is difficult to get:The human case shows that unconscious mentality is powerful, so we can’t infer consciousness from many behaviors.Even when we can infer consciousness, we can’t necessarily infer a separate subject. After all, there are plausible interpretations of split-brain cases on which there are not separate subjects.Even if there are multiple subjects housed in an organism in some circumstances, it doesn’t follow that there are always multiple subjects. These additional subjects may only be generated in contexts that are irrelevant for practical purposes.If we don’t have any evidence that parts of these animals are conscious or that they have valenced conscious states, then insofar as we’re committed to having an empirically-driven approach to counting subjects, we shouldn’t postulate multiple subjects in these cases.That being said, the author is inclined to place up to a 0.1 credence that there are multiple subjects in the split-brain case, but no higher than 0.025 for the 1+8 model of octopuses.IntroductionThis is the sixth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post, which was written by Joe Gottlieb, is to summarize his full report on the phenomenal unity and cause prioritization, which explores whether, for certain species, there are empirical reasons to posit multiple welfare subjects per organism. That report is available here.Motivations and the Bottom LineWe normally assume that there is one conscious subject—or one entity who undergoes conscious experiences—per conscious animal. But perhaps this isn’t always true: perhaps some animals ‘house’ more than one conscious subject. If those subjects are also welfare subjects—beings with the ability to accrue welfare goods and bad—then this might matter when trying to determine whether we are allocating resources in a way that maximizes expected welfare gained per dollar spent. When we theorize about these animals’ capacity for welfare, we would no longer be theorizing about a single welfare subject, but multiple such subjects.[1]In humans, people have speculated about this possibility based on “split-brain” cases, where the corpus callosum has been wholly or partially severed (e.g., Bayne 2010; Schechter 2018). Some non-human animals, like birds, approximate the split-brain condition as the norm, and others, like the octopus, exhibit a striking lack of integration and highly decentralized nervous systems, with surprising levels of peripheral autonomy. And in the case of the octopus, Peter Godfrey-Smith suggests that “[w]e should.at least consider the possibility that an octopus is a being with multiple selves”, one for central brain, and then one for each arm (2020: 148; cf. Carls-Diamante 2017, 2019, 2022).What follows is a high-level summary of my full report on...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Octopuses (Probably) Don't Have Nine Minds, published by Bob Fischer on December 12, 2022 on The Effective Altruism Forum.Key TakeawaysHere are the key takeaways for the full report:Based on the split-brain condition in humans, some people have wondered whether some humans “house” multiple subjects.Based on superficial parallels between the split-brain condition and the apparent neurological structures of some animals—such as chickens and octopuses—some people have wondered whether those animals house multiple subjects too.To assign a non-negligible credence to this possibility, we’d need evidence that parts of these animals aren't just conscious, but that they have valenced conscious states (like pain), as that’s what matters morally (given our project’s assumptions).This evidence is difficult to get:The human case shows that unconscious mentality is powerful, so we can’t infer consciousness from many behaviors.Even when we can infer consciousness, we can’t necessarily infer a separate subject. After all, there are plausible interpretations of split-brain cases on which there are not separate subjects.Even if there are multiple subjects housed in an organism in some circumstances, it doesn’t follow that there are always multiple subjects. These additional subjects may only be generated in contexts that are irrelevant for practical purposes.If we don’t have any evidence that parts of these animals are conscious or that they have valenced conscious states, then insofar as we’re committed to having an empirically-driven approach to counting subjects, we shouldn’t postulate multiple subjects in these cases.That being said, the author is inclined to place up to a 0.1 credence that there are multiple subjects in the split-brain case, but no higher than 0.025 for the 1+8 model of octopuses.IntroductionThis is the sixth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post, which was written by Joe Gottlieb, is to summarize his full report on the phenomenal unity and cause prioritization, which explores whether, for certain species, there are empirical reasons to posit multiple welfare subjects per organism. That report is available here.Motivations and the Bottom LineWe normally assume that there is one conscious subject—or one entity who undergoes conscious experiences—per conscious animal. But perhaps this isn’t always true: perhaps some animals ‘house’ more than one conscious subject. If those subjects are also welfare subjects—beings with the ability to accrue welfare goods and bad—then this might matter when trying to determine whether we are allocating resources in a way that maximizes expected welfare gained per dollar spent. When we theorize about these animals’ capacity for welfare, we would no longer be theorizing about a single welfare subject, but multiple such subjects.[1]In humans, people have speculated about this possibility based on “split-brain” cases, where the corpus callosum has been wholly or partially severed (e.g., Bayne 2010; Schechter 2018). Some non-human animals, like birds, approximate the split-brain condition as the norm, and others, like the octopus, exhibit a striking lack of integration and highly decentralized nervous systems, with surprising levels of peripheral autonomy. And in the case of the octopus, Peter Godfrey-Smith suggests that “[w]e should.at least consider the possibility that an octopus is a being with multiple selves”, one for central brain, and then one for each arm (2020: 148; cf. Carls-Diamante 2017, 2019, 2022).What follows is a high-level summary of my full report on...]]>
Bob Fischer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 20:30 None full 4090
H7xWzvwvkyywDAEkL_EA EA - Creating a database for base rates by nikos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Creating a database for base rates, published by nikos on December 12, 2022 on The Effective Altruism Forum.TLDRWe are creating a database to collect base rates for various categories of events. You can find the database here and can suggest new base rate categories for us to look into here.Project SummaryThe base rate database project collects base rates for different categories of events and makes them available to researchers, forecasters and philanthropic organisations. Its main goals are to develop better intuitions about the potential and limitations of reference class forecasting and to provide useful information to the public. The data will enable research that enhances our understanding of the kinds of circumstances in which reference forecasting is a promising approach, what kinds of methods of reference forecasting work best, how to construct reasonable reference classes, and what potential caveats and pitfalls are. In addition to the raw data we will collect qualitative feedback on individual reference classes and on the overall process of building a base rate database, adding context to the data and developing comprehensive knowledge to build upon in the future.We aim to select categories of base rates in a way that makes the information we collect useful to decision makers and philanthropic organisations.IntroductionIf one wants to predict whether some event will happen in the future, it is often helpful to look at the past. One can ask: "Ignoring all the specifics of the current event I'm trying to predict, what would I predict just by looking at the base rate of similar events happening in the past?". This is called reference class forecasting and helps forecasters to obtain an 'outside view' on the forecasting question at hand. This outside view, of course, is usually complemented by the 'inside view': what are the specifics of the current event at hand that distinguish it from other events?Reference class forecasting is widely used among forecasters. To this date, however, there has been little systematic research done into how effective base rates are for forecasting future events, how they can best be used and what limitations apply. We aim to facilitate this research.Project outlineGoalThe main goal of this project is to develop a better understanding of the merits and limitations of reference class forecasting.A secondary goal is to collect information that may be useful for forecasters and EA stakeholders in the future.What we'll doWe want to achieve our goals byasking experienced forecasters to compile a public database with base rates for various categories of eventscollecting qualitative feedback on the process of collecting base rates, as well as the base rates themselvesusing the database to conduct and facilitate quantitative and qualitative research, especially with regards to the performance of various reference class forecasting approachesinviting others (you!) to suggest base rate categories that we should look into through this form.Categories that we want to look intoWe intend to look into categories as diverse asViolent and non-violent protests that have (or have not) led to regime changeElections with small margins of victoryZoonotic spillover eventsDevelopment of new antibioticsYou can find a list of all the categories on our radar here. You can suggest new categories here.Specific research questionsThe database is meant to be a resource for anyone who is interested in reference class forecasting. Please do feel free to use it for your own research as well as to reach out to us.So far, we have thought of the following quantitative analyses we think may be promising:Comparison of the predictive performance of several reference forecasting approaches, for example:Naive Laplace's rule with different pri...]]>
nikos https://forum.effectivealtruism.org/posts/H7xWzvwvkyywDAEkL/creating-a-database-for-base-rates Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Creating a database for base rates, published by nikos on December 12, 2022 on The Effective Altruism Forum.TLDRWe are creating a database to collect base rates for various categories of events. You can find the database here and can suggest new base rate categories for us to look into here.Project SummaryThe base rate database project collects base rates for different categories of events and makes them available to researchers, forecasters and philanthropic organisations. Its main goals are to develop better intuitions about the potential and limitations of reference class forecasting and to provide useful information to the public. The data will enable research that enhances our understanding of the kinds of circumstances in which reference forecasting is a promising approach, what kinds of methods of reference forecasting work best, how to construct reasonable reference classes, and what potential caveats and pitfalls are. In addition to the raw data we will collect qualitative feedback on individual reference classes and on the overall process of building a base rate database, adding context to the data and developing comprehensive knowledge to build upon in the future.We aim to select categories of base rates in a way that makes the information we collect useful to decision makers and philanthropic organisations.IntroductionIf one wants to predict whether some event will happen in the future, it is often helpful to look at the past. One can ask: "Ignoring all the specifics of the current event I'm trying to predict, what would I predict just by looking at the base rate of similar events happening in the past?". This is called reference class forecasting and helps forecasters to obtain an 'outside view' on the forecasting question at hand. This outside view, of course, is usually complemented by the 'inside view': what are the specifics of the current event at hand that distinguish it from other events?Reference class forecasting is widely used among forecasters. To this date, however, there has been little systematic research done into how effective base rates are for forecasting future events, how they can best be used and what limitations apply. We aim to facilitate this research.Project outlineGoalThe main goal of this project is to develop a better understanding of the merits and limitations of reference class forecasting.A secondary goal is to collect information that may be useful for forecasters and EA stakeholders in the future.What we'll doWe want to achieve our goals byasking experienced forecasters to compile a public database with base rates for various categories of eventscollecting qualitative feedback on the process of collecting base rates, as well as the base rates themselvesusing the database to conduct and facilitate quantitative and qualitative research, especially with regards to the performance of various reference class forecasting approachesinviting others (you!) to suggest base rate categories that we should look into through this form.Categories that we want to look intoWe intend to look into categories as diverse asViolent and non-violent protests that have (or have not) led to regime changeElections with small margins of victoryZoonotic spillover eventsDevelopment of new antibioticsYou can find a list of all the categories on our radar here. You can suggest new categories here.Specific research questionsThe database is meant to be a resource for anyone who is interested in reference class forecasting. Please do feel free to use it for your own research as well as to reach out to us.So far, we have thought of the following quantitative analyses we think may be promising:Comparison of the predictive performance of several reference forecasting approaches, for example:Naive Laplace's rule with different pri...]]>
Mon, 12 Dec 2022 15:06:18 +0000 EA - Creating a database for base rates by nikos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Creating a database for base rates, published by nikos on December 12, 2022 on The Effective Altruism Forum.TLDRWe are creating a database to collect base rates for various categories of events. You can find the database here and can suggest new base rate categories for us to look into here.Project SummaryThe base rate database project collects base rates for different categories of events and makes them available to researchers, forecasters and philanthropic organisations. Its main goals are to develop better intuitions about the potential and limitations of reference class forecasting and to provide useful information to the public. The data will enable research that enhances our understanding of the kinds of circumstances in which reference forecasting is a promising approach, what kinds of methods of reference forecasting work best, how to construct reasonable reference classes, and what potential caveats and pitfalls are. In addition to the raw data we will collect qualitative feedback on individual reference classes and on the overall process of building a base rate database, adding context to the data and developing comprehensive knowledge to build upon in the future.We aim to select categories of base rates in a way that makes the information we collect useful to decision makers and philanthropic organisations.IntroductionIf one wants to predict whether some event will happen in the future, it is often helpful to look at the past. One can ask: "Ignoring all the specifics of the current event I'm trying to predict, what would I predict just by looking at the base rate of similar events happening in the past?". This is called reference class forecasting and helps forecasters to obtain an 'outside view' on the forecasting question at hand. This outside view, of course, is usually complemented by the 'inside view': what are the specifics of the current event at hand that distinguish it from other events?Reference class forecasting is widely used among forecasters. To this date, however, there has been little systematic research done into how effective base rates are for forecasting future events, how they can best be used and what limitations apply. We aim to facilitate this research.Project outlineGoalThe main goal of this project is to develop a better understanding of the merits and limitations of reference class forecasting.A secondary goal is to collect information that may be useful for forecasters and EA stakeholders in the future.What we'll doWe want to achieve our goals byasking experienced forecasters to compile a public database with base rates for various categories of eventscollecting qualitative feedback on the process of collecting base rates, as well as the base rates themselvesusing the database to conduct and facilitate quantitative and qualitative research, especially with regards to the performance of various reference class forecasting approachesinviting others (you!) to suggest base rate categories that we should look into through this form.Categories that we want to look intoWe intend to look into categories as diverse asViolent and non-violent protests that have (or have not) led to regime changeElections with small margins of victoryZoonotic spillover eventsDevelopment of new antibioticsYou can find a list of all the categories on our radar here. You can suggest new categories here.Specific research questionsThe database is meant to be a resource for anyone who is interested in reference class forecasting. Please do feel free to use it for your own research as well as to reach out to us.So far, we have thought of the following quantitative analyses we think may be promising:Comparison of the predictive performance of several reference forecasting approaches, for example:Naive Laplace's rule with different pri...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Creating a database for base rates, published by nikos on December 12, 2022 on The Effective Altruism Forum.TLDRWe are creating a database to collect base rates for various categories of events. You can find the database here and can suggest new base rate categories for us to look into here.Project SummaryThe base rate database project collects base rates for different categories of events and makes them available to researchers, forecasters and philanthropic organisations. Its main goals are to develop better intuitions about the potential and limitations of reference class forecasting and to provide useful information to the public. The data will enable research that enhances our understanding of the kinds of circumstances in which reference forecasting is a promising approach, what kinds of methods of reference forecasting work best, how to construct reasonable reference classes, and what potential caveats and pitfalls are. In addition to the raw data we will collect qualitative feedback on individual reference classes and on the overall process of building a base rate database, adding context to the data and developing comprehensive knowledge to build upon in the future.We aim to select categories of base rates in a way that makes the information we collect useful to decision makers and philanthropic organisations.IntroductionIf one wants to predict whether some event will happen in the future, it is often helpful to look at the past. One can ask: "Ignoring all the specifics of the current event I'm trying to predict, what would I predict just by looking at the base rate of similar events happening in the past?". This is called reference class forecasting and helps forecasters to obtain an 'outside view' on the forecasting question at hand. This outside view, of course, is usually complemented by the 'inside view': what are the specifics of the current event at hand that distinguish it from other events?Reference class forecasting is widely used among forecasters. To this date, however, there has been little systematic research done into how effective base rates are for forecasting future events, how they can best be used and what limitations apply. We aim to facilitate this research.Project outlineGoalThe main goal of this project is to develop a better understanding of the merits and limitations of reference class forecasting.A secondary goal is to collect information that may be useful for forecasters and EA stakeholders in the future.What we'll doWe want to achieve our goals byasking experienced forecasters to compile a public database with base rates for various categories of eventscollecting qualitative feedback on the process of collecting base rates, as well as the base rates themselvesusing the database to conduct and facilitate quantitative and qualitative research, especially with regards to the performance of various reference class forecasting approachesinviting others (you!) to suggest base rate categories that we should look into through this form.Categories that we want to look intoWe intend to look into categories as diverse asViolent and non-violent protests that have (or have not) led to regime changeElections with small margins of victoryZoonotic spillover eventsDevelopment of new antibioticsYou can find a list of all the categories on our radar here. You can suggest new categories here.Specific research questionsThe database is meant to be a resource for anyone who is interested in reference class forecasting. Please do feel free to use it for your own research as well as to reach out to us.So far, we have thought of the following quantitative analyses we think may be promising:Comparison of the predictive performance of several reference forecasting approaches, for example:Naive Laplace's rule with different pri...]]>
nikos https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:21 None full 4091
bkF4jWM9pbBFxnCLH_EA EA - Observations of community building in Asia, a 🧵 by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Observations of community building in Asia, a 🧵, published by Vaidehi Agarwalla on December 11, 2022 on The Effective Altruism Forum.This is a lightly edited twitter thread I wrote after EAGxSingapore. I figured it might be useful to post to the Forum as well.Caveatsthese are my takeaways (with some interpretation) not just problems people told me.summarized a bunch to fit into the 280 character limit (flag emoji FTW).good > perfectAsian CB's please correct me + enhance!Everyone - please ask clarification questions!What do we do with people? This isn’t unique to Asian CB’s, but there are strictly fewer and less attractive opportunities available. I might do a separate thread or add to this thought later.Some countries have less traction with English & worry about EA being presented as Western concept (e.g. Japan & Iran). Translation of key texts seems important, and could be a way to engage newer EAs with a concrete project (see a test from Estonia).Countries with lots of internal issues may have difficulty gaining EA traction, but it may be a matter of perspective and approach. A Turkey CB mentioned that the fact that another group from Iran was able to get traction was inspiring, since they perceived Iran as having worse problemsMultiple CB’s suggested that after talking to other EAs they started thinking more about city / uni group building rather than trying to build national groups to start with. For large countries (India) there are many choices, .....But some countries (Nepal, Pakistan, Vietnam) have 1 or 2 (Kathmandu, Karachi, Ho Chi Minh & Hanoi) major cities that are likely to be viable for EA groups. Kathmandu has all of Nepal’s top uni’s and best talent pool so in practice EA Nepal ~= EA Kathmandu.A few CBs want to/are starting local groups in liberal arts unis which they feel are more EA aligned. A challenge in Turkey is that vegans are abolitionist and against welfarism, and was concerned about discussing farmed animal welfare within EA.In Japan (+ others?), many students study abroad. There may be an opportunity to get those students interested in EA before they go (and connect them to local EA groups in the West), and catch them again after they return.E.g. 1 uni group struggled with reading group retention. It seemed plausible they could focus on their existing ~8 engaged members, or do a “trial week” for their reading group to help attendees evaluate fit early on.There is uncertainty over what messaging works best and non-existent testing. People mostly rely on their insights. More testing seems good, e.g. how much do you need to incorporate native philosophy vs. localizing examples and stories. bad e.g. "Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?"Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?Vietnam doesn’t have a big book-reading culture, so EA books could be less likely to be a way in. Perhaps focusing on blogs or podcasts or other formats is more promising?Many of us (including myself) learnt a lot from Israel on volunteer management and early stage group priorities. I believe a lot of value I provide CB's is expanding the option space and generating lots of specific examples. I wish more CB’s from around the world had attended.Early stage groups (which is most Asia groups today) could benefit from The Lean Startup model of validated learning - trying to optimize early activities to l...]]>
Vaidehi Agarwalla https://forum.effectivealtruism.org/posts/bkF4jWM9pbBFxnCLH/observations-of-community-building-in-asia-a Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Observations of community building in Asia, a 🧵, published by Vaidehi Agarwalla on December 11, 2022 on The Effective Altruism Forum.This is a lightly edited twitter thread I wrote after EAGxSingapore. I figured it might be useful to post to the Forum as well.Caveatsthese are my takeaways (with some interpretation) not just problems people told me.summarized a bunch to fit into the 280 character limit (flag emoji FTW).good > perfectAsian CB's please correct me + enhance!Everyone - please ask clarification questions!What do we do with people? This isn’t unique to Asian CB’s, but there are strictly fewer and less attractive opportunities available. I might do a separate thread or add to this thought later.Some countries have less traction with English & worry about EA being presented as Western concept (e.g. Japan & Iran). Translation of key texts seems important, and could be a way to engage newer EAs with a concrete project (see a test from Estonia).Countries with lots of internal issues may have difficulty gaining EA traction, but it may be a matter of perspective and approach. A Turkey CB mentioned that the fact that another group from Iran was able to get traction was inspiring, since they perceived Iran as having worse problemsMultiple CB’s suggested that after talking to other EAs they started thinking more about city / uni group building rather than trying to build national groups to start with. For large countries (India) there are many choices, .....But some countries (Nepal, Pakistan, Vietnam) have 1 or 2 (Kathmandu, Karachi, Ho Chi Minh & Hanoi) major cities that are likely to be viable for EA groups. Kathmandu has all of Nepal’s top uni’s and best talent pool so in practice EA Nepal ~= EA Kathmandu.A few CBs want to/are starting local groups in liberal arts unis which they feel are more EA aligned. A challenge in Turkey is that vegans are abolitionist and against welfarism, and was concerned about discussing farmed animal welfare within EA.In Japan (+ others?), many students study abroad. There may be an opportunity to get those students interested in EA before they go (and connect them to local EA groups in the West), and catch them again after they return.E.g. 1 uni group struggled with reading group retention. It seemed plausible they could focus on their existing ~8 engaged members, or do a “trial week” for their reading group to help attendees evaluate fit early on.There is uncertainty over what messaging works best and non-existent testing. People mostly rely on their insights. More testing seems good, e.g. how much do you need to incorporate native philosophy vs. localizing examples and stories. bad e.g. "Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?"Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?Vietnam doesn’t have a big book-reading culture, so EA books could be less likely to be a way in. Perhaps focusing on blogs or podcasts or other formats is more promising?Many of us (including myself) learnt a lot from Israel on volunteer management and early stage group priorities. I believe a lot of value I provide CB's is expanding the option space and generating lots of specific examples. I wish more CB’s from around the world had attended.Early stage groups (which is most Asia groups today) could benefit from The Lean Startup model of validated learning - trying to optimize early activities to l...]]>
Mon, 12 Dec 2022 02:45:17 +0000 EA - Observations of community building in Asia, a 🧵 by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Observations of community building in Asia, a 🧵, published by Vaidehi Agarwalla on December 11, 2022 on The Effective Altruism Forum.This is a lightly edited twitter thread I wrote after EAGxSingapore. I figured it might be useful to post to the Forum as well.Caveatsthese are my takeaways (with some interpretation) not just problems people told me.summarized a bunch to fit into the 280 character limit (flag emoji FTW).good > perfectAsian CB's please correct me + enhance!Everyone - please ask clarification questions!What do we do with people? This isn’t unique to Asian CB’s, but there are strictly fewer and less attractive opportunities available. I might do a separate thread or add to this thought later.Some countries have less traction with English & worry about EA being presented as Western concept (e.g. Japan & Iran). Translation of key texts seems important, and could be a way to engage newer EAs with a concrete project (see a test from Estonia).Countries with lots of internal issues may have difficulty gaining EA traction, but it may be a matter of perspective and approach. A Turkey CB mentioned that the fact that another group from Iran was able to get traction was inspiring, since they perceived Iran as having worse problemsMultiple CB’s suggested that after talking to other EAs they started thinking more about city / uni group building rather than trying to build national groups to start with. For large countries (India) there are many choices, .....But some countries (Nepal, Pakistan, Vietnam) have 1 or 2 (Kathmandu, Karachi, Ho Chi Minh & Hanoi) major cities that are likely to be viable for EA groups. Kathmandu has all of Nepal’s top uni’s and best talent pool so in practice EA Nepal ~= EA Kathmandu.A few CBs want to/are starting local groups in liberal arts unis which they feel are more EA aligned. A challenge in Turkey is that vegans are abolitionist and against welfarism, and was concerned about discussing farmed animal welfare within EA.In Japan (+ others?), many students study abroad. There may be an opportunity to get those students interested in EA before they go (and connect them to local EA groups in the West), and catch them again after they return.E.g. 1 uni group struggled with reading group retention. It seemed plausible they could focus on their existing ~8 engaged members, or do a “trial week” for their reading group to help attendees evaluate fit early on.There is uncertainty over what messaging works best and non-existent testing. People mostly rely on their insights. More testing seems good, e.g. how much do you need to incorporate native philosophy vs. localizing examples and stories. bad e.g. "Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?"Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?Vietnam doesn’t have a big book-reading culture, so EA books could be less likely to be a way in. Perhaps focusing on blogs or podcasts or other formats is more promising?Many of us (including myself) learnt a lot from Israel on volunteer management and early stage group priorities. I believe a lot of value I provide CB's is expanding the option space and generating lots of specific examples. I wish more CB’s from around the world had attended.Early stage groups (which is most Asia groups today) could benefit from The Lean Startup model of validated learning - trying to optimize early activities to l...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Observations of community building in Asia, a 🧵, published by Vaidehi Agarwalla on December 11, 2022 on The Effective Altruism Forum.This is a lightly edited twitter thread I wrote after EAGxSingapore. I figured it might be useful to post to the Forum as well.Caveatsthese are my takeaways (with some interpretation) not just problems people told me.summarized a bunch to fit into the 280 character limit (flag emoji FTW).good > perfectAsian CB's please correct me + enhance!Everyone - please ask clarification questions!What do we do with people? This isn’t unique to Asian CB’s, but there are strictly fewer and less attractive opportunities available. I might do a separate thread or add to this thought later.Some countries have less traction with English & worry about EA being presented as Western concept (e.g. Japan & Iran). Translation of key texts seems important, and could be a way to engage newer EAs with a concrete project (see a test from Estonia).Countries with lots of internal issues may have difficulty gaining EA traction, but it may be a matter of perspective and approach. A Turkey CB mentioned that the fact that another group from Iran was able to get traction was inspiring, since they perceived Iran as having worse problemsMultiple CB’s suggested that after talking to other EAs they started thinking more about city / uni group building rather than trying to build national groups to start with. For large countries (India) there are many choices, .....But some countries (Nepal, Pakistan, Vietnam) have 1 or 2 (Kathmandu, Karachi, Ho Chi Minh & Hanoi) major cities that are likely to be viable for EA groups. Kathmandu has all of Nepal’s top uni’s and best talent pool so in practice EA Nepal ~= EA Kathmandu.A few CBs want to/are starting local groups in liberal arts unis which they feel are more EA aligned. A challenge in Turkey is that vegans are abolitionist and against welfarism, and was concerned about discussing farmed animal welfare within EA.In Japan (+ others?), many students study abroad. There may be an opportunity to get those students interested in EA before they go (and connect them to local EA groups in the West), and catch them again after they return.E.g. 1 uni group struggled with reading group retention. It seemed plausible they could focus on their existing ~8 engaged members, or do a “trial week” for their reading group to help attendees evaluate fit early on.There is uncertainty over what messaging works best and non-existent testing. People mostly rely on their insights. More testing seems good, e.g. how much do you need to incorporate native philosophy vs. localizing examples and stories. bad e.g. "Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?"Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?Vietnam doesn’t have a big book-reading culture, so EA books could be less likely to be a way in. Perhaps focusing on blogs or podcasts or other formats is more promising?Many of us (including myself) learnt a lot from Israel on volunteer management and early stage group priorities. I believe a lot of value I provide CB's is expanding the option space and generating lots of specific examples. I wish more CB’s from around the world had attended.Early stage groups (which is most Asia groups today) could benefit from The Lean Startup model of validated learning - trying to optimize early activities to l...]]>
Vaidehi Agarwalla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:55 None full 4082
zvALRCKshYGYetsbC_EA EA - Reflections on the PIBBSS Fellowship 2022 by nora Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on the PIBBSS Fellowship 2022, published by nora on December 11, 2022 on The Effective Altruism Forum.Cross-posted from LessWrong and the Alignment ForumLast summer, we ran the first iteration of the PIBBSS Summer Research Fellowship. In this post, we share some reflections on how the program went.Note that this post deals mostly with high-level reflections and isn’t maximally comprehensive. It primarily focusses on information we think might be relevant for other people and initiatives in this space. We also do not go into specific research outputs produced by fellows within the scope of this post. Further, there are some details that we may not cover in this post for privacy reasons.How to navigate this post:If you know what PIBBSS is and want to directly jump to our reflections, go to sections "Overview of main updates" and "Main successes and failures".If you want a bit more context first, check out "About PIBBSS" for a brief description of PIBBSS’s overall mission; and "Some key facts about the fellowship", if you want a quick overview of the program design.The appendix contains a more detailed discussion of the portfolio of research projects hosted by the fellowship program (appendix 1), and a summary of the research retreats (appendix 2).About PIBBSSPIBBSS (Principles of Intelligent Behavior in Biological and Social Systems) aims to facilitate research studying parallels between intelligent behavior in natural and artificial systems, and to leverage these insights towards the goal of building safe and aligned AI.To this purpose, we organized a 3-month Summer Research Fellowship bringing together scholars with graduate-level research experience (or equivalent) from a wide range of relevant disciplines to work on research projects under the mentorship of experienced AI alignment researchers. The disciplines of interest included fields as diverse as the brain sciences; evolutionary biology, systems biology and ecology; statistical mechanics and complex systems studies; economic, legal and political theory; philosophy of science; and more.This approach broadly -- the PIBBSS bet -- is something we think is a valuable frontier for expanding the scientific and philosophical enquiry on AI risk and the alignment problem. In particular, this aspires to bring in more empirical and conceptual grounding to thinking about advanced AI systems. It can do so by drawing on understanding that different disciplines already possess about intelligent and complex behavior, while also remaining vigilant about the disanalogies that might exist between natural systems and candidate AI designs.Furthermore, bringing diverse epistemic competencies to bear upon the problem also puts us in a better position to identify neglected challenges and opportunities in alignment research. While we certainly recognize that familiarity with ML research is an important part of being able to make significant progress in the field, we also think that familiarity with a large variety of intelligent systems and models of intelligent behavior constitutes an underserved epistemic resource. It can provide novel research surface area, help assess current research frontiers, de- (and re-)construct the AI risk problem, help conceive of novel alternatives in the design space, etc.This makes interdisciplinary and transdisciplinary research endeavors valuable, especially given how they otherwise are likely to be neglected due to inferential and disciplinary distances. That said, we are skeptical of “interdisciplinary for the sake of it”, but consider it exciting insofar it explores specific research bets or has specific generative motivations for why X is interesting.For more information on PIBBSS, see this introduction post, this discussion of the epistemic bet, our research map (currentl...]]>
nora https://forum.effectivealtruism.org/posts/zvALRCKshYGYetsbC/reflections-on-the-pibbss-fellowship-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on the PIBBSS Fellowship 2022, published by nora on December 11, 2022 on The Effective Altruism Forum.Cross-posted from LessWrong and the Alignment ForumLast summer, we ran the first iteration of the PIBBSS Summer Research Fellowship. In this post, we share some reflections on how the program went.Note that this post deals mostly with high-level reflections and isn’t maximally comprehensive. It primarily focusses on information we think might be relevant for other people and initiatives in this space. We also do not go into specific research outputs produced by fellows within the scope of this post. Further, there are some details that we may not cover in this post for privacy reasons.How to navigate this post:If you know what PIBBSS is and want to directly jump to our reflections, go to sections "Overview of main updates" and "Main successes and failures".If you want a bit more context first, check out "About PIBBSS" for a brief description of PIBBSS’s overall mission; and "Some key facts about the fellowship", if you want a quick overview of the program design.The appendix contains a more detailed discussion of the portfolio of research projects hosted by the fellowship program (appendix 1), and a summary of the research retreats (appendix 2).About PIBBSSPIBBSS (Principles of Intelligent Behavior in Biological and Social Systems) aims to facilitate research studying parallels between intelligent behavior in natural and artificial systems, and to leverage these insights towards the goal of building safe and aligned AI.To this purpose, we organized a 3-month Summer Research Fellowship bringing together scholars with graduate-level research experience (or equivalent) from a wide range of relevant disciplines to work on research projects under the mentorship of experienced AI alignment researchers. The disciplines of interest included fields as diverse as the brain sciences; evolutionary biology, systems biology and ecology; statistical mechanics and complex systems studies; economic, legal and political theory; philosophy of science; and more.This approach broadly -- the PIBBSS bet -- is something we think is a valuable frontier for expanding the scientific and philosophical enquiry on AI risk and the alignment problem. In particular, this aspires to bring in more empirical and conceptual grounding to thinking about advanced AI systems. It can do so by drawing on understanding that different disciplines already possess about intelligent and complex behavior, while also remaining vigilant about the disanalogies that might exist between natural systems and candidate AI designs.Furthermore, bringing diverse epistemic competencies to bear upon the problem also puts us in a better position to identify neglected challenges and opportunities in alignment research. While we certainly recognize that familiarity with ML research is an important part of being able to make significant progress in the field, we also think that familiarity with a large variety of intelligent systems and models of intelligent behavior constitutes an underserved epistemic resource. It can provide novel research surface area, help assess current research frontiers, de- (and re-)construct the AI risk problem, help conceive of novel alternatives in the design space, etc.This makes interdisciplinary and transdisciplinary research endeavors valuable, especially given how they otherwise are likely to be neglected due to inferential and disciplinary distances. That said, we are skeptical of “interdisciplinary for the sake of it”, but consider it exciting insofar it explores specific research bets or has specific generative motivations for why X is interesting.For more information on PIBBSS, see this introduction post, this discussion of the epistemic bet, our research map (currentl...]]>
Mon, 12 Dec 2022 02:13:01 +0000 EA - Reflections on the PIBBSS Fellowship 2022 by nora Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on the PIBBSS Fellowship 2022, published by nora on December 11, 2022 on The Effective Altruism Forum.Cross-posted from LessWrong and the Alignment ForumLast summer, we ran the first iteration of the PIBBSS Summer Research Fellowship. In this post, we share some reflections on how the program went.Note that this post deals mostly with high-level reflections and isn’t maximally comprehensive. It primarily focusses on information we think might be relevant for other people and initiatives in this space. We also do not go into specific research outputs produced by fellows within the scope of this post. Further, there are some details that we may not cover in this post for privacy reasons.How to navigate this post:If you know what PIBBSS is and want to directly jump to our reflections, go to sections "Overview of main updates" and "Main successes and failures".If you want a bit more context first, check out "About PIBBSS" for a brief description of PIBBSS’s overall mission; and "Some key facts about the fellowship", if you want a quick overview of the program design.The appendix contains a more detailed discussion of the portfolio of research projects hosted by the fellowship program (appendix 1), and a summary of the research retreats (appendix 2).About PIBBSSPIBBSS (Principles of Intelligent Behavior in Biological and Social Systems) aims to facilitate research studying parallels between intelligent behavior in natural and artificial systems, and to leverage these insights towards the goal of building safe and aligned AI.To this purpose, we organized a 3-month Summer Research Fellowship bringing together scholars with graduate-level research experience (or equivalent) from a wide range of relevant disciplines to work on research projects under the mentorship of experienced AI alignment researchers. The disciplines of interest included fields as diverse as the brain sciences; evolutionary biology, systems biology and ecology; statistical mechanics and complex systems studies; economic, legal and political theory; philosophy of science; and more.This approach broadly -- the PIBBSS bet -- is something we think is a valuable frontier for expanding the scientific and philosophical enquiry on AI risk and the alignment problem. In particular, this aspires to bring in more empirical and conceptual grounding to thinking about advanced AI systems. It can do so by drawing on understanding that different disciplines already possess about intelligent and complex behavior, while also remaining vigilant about the disanalogies that might exist between natural systems and candidate AI designs.Furthermore, bringing diverse epistemic competencies to bear upon the problem also puts us in a better position to identify neglected challenges and opportunities in alignment research. While we certainly recognize that familiarity with ML research is an important part of being able to make significant progress in the field, we also think that familiarity with a large variety of intelligent systems and models of intelligent behavior constitutes an underserved epistemic resource. It can provide novel research surface area, help assess current research frontiers, de- (and re-)construct the AI risk problem, help conceive of novel alternatives in the design space, etc.This makes interdisciplinary and transdisciplinary research endeavors valuable, especially given how they otherwise are likely to be neglected due to inferential and disciplinary distances. That said, we are skeptical of “interdisciplinary for the sake of it”, but consider it exciting insofar it explores specific research bets or has specific generative motivations for why X is interesting.For more information on PIBBSS, see this introduction post, this discussion of the epistemic bet, our research map (currentl...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on the PIBBSS Fellowship 2022, published by nora on December 11, 2022 on The Effective Altruism Forum.Cross-posted from LessWrong and the Alignment ForumLast summer, we ran the first iteration of the PIBBSS Summer Research Fellowship. In this post, we share some reflections on how the program went.Note that this post deals mostly with high-level reflections and isn’t maximally comprehensive. It primarily focusses on information we think might be relevant for other people and initiatives in this space. We also do not go into specific research outputs produced by fellows within the scope of this post. Further, there are some details that we may not cover in this post for privacy reasons.How to navigate this post:If you know what PIBBSS is and want to directly jump to our reflections, go to sections "Overview of main updates" and "Main successes and failures".If you want a bit more context first, check out "About PIBBSS" for a brief description of PIBBSS’s overall mission; and "Some key facts about the fellowship", if you want a quick overview of the program design.The appendix contains a more detailed discussion of the portfolio of research projects hosted by the fellowship program (appendix 1), and a summary of the research retreats (appendix 2).About PIBBSSPIBBSS (Principles of Intelligent Behavior in Biological and Social Systems) aims to facilitate research studying parallels between intelligent behavior in natural and artificial systems, and to leverage these insights towards the goal of building safe and aligned AI.To this purpose, we organized a 3-month Summer Research Fellowship bringing together scholars with graduate-level research experience (or equivalent) from a wide range of relevant disciplines to work on research projects under the mentorship of experienced AI alignment researchers. The disciplines of interest included fields as diverse as the brain sciences; evolutionary biology, systems biology and ecology; statistical mechanics and complex systems studies; economic, legal and political theory; philosophy of science; and more.This approach broadly -- the PIBBSS bet -- is something we think is a valuable frontier for expanding the scientific and philosophical enquiry on AI risk and the alignment problem. In particular, this aspires to bring in more empirical and conceptual grounding to thinking about advanced AI systems. It can do so by drawing on understanding that different disciplines already possess about intelligent and complex behavior, while also remaining vigilant about the disanalogies that might exist between natural systems and candidate AI designs.Furthermore, bringing diverse epistemic competencies to bear upon the problem also puts us in a better position to identify neglected challenges and opportunities in alignment research. While we certainly recognize that familiarity with ML research is an important part of being able to make significant progress in the field, we also think that familiarity with a large variety of intelligent systems and models of intelligent behavior constitutes an underserved epistemic resource. It can provide novel research surface area, help assess current research frontiers, de- (and re-)construct the AI risk problem, help conceive of novel alternatives in the design space, etc.This makes interdisciplinary and transdisciplinary research endeavors valuable, especially given how they otherwise are likely to be neglected due to inferential and disciplinary distances. That said, we are skeptical of “interdisciplinary for the sake of it”, but consider it exciting insofar it explores specific research bets or has specific generative motivations for why X is interesting.For more information on PIBBSS, see this introduction post, this discussion of the epistemic bet, our research map (currentl...]]>
nora https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 34:29 None full 4081
7DCuhq23zHxAmPqvJ_EA EA - Kurzgesagt's most recent video promoting the introducing of wild life to other planets is unethical and irresponsible by David van Beveren Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kurzgesagt's most recent video promoting the introducing of wild life to other planets is unethical and irresponsible, published by David van Beveren on December 11, 2022 on The Effective Altruism Forum.Kurzgesagt, in their recent video "How To Terraform Mars - WITH LASERS" (just came out a few hours ago as of writing this post), promotes the idea of seeding wildlife on other planets without considering the immense suffering it would cause for the animals on it. Instead of putting thought into the ethical implications of these actions, the video (as par for the course) focuses solely on the potential benefits for humans.Sadly this problem isn't an isolated incident either, the pattern of ignoring the real risk for immense wild animal suffering is common in almost all major plans and discussions involving the terraforming of planets or space colonisation.Sure, a new green planet with lots of nature sounds cool in theory, but it would very likely mean subjecting countless animals to a lifetime of suffering. These animals would be forced to adapt to potentially hostile and unfamiliar environments, and face countless challenges without any choice in the matter. There's no way around it that I can see.You might argue that in these proposed worlds, we'd create an environment for wild animals where there wouldn't be food scarcity, predators, disease, or even anthropogenic harms. Setting aside the immense improbability of such an world (imagine convincing a rhinoceros not to fight to the death for their territory against a wild boar or elephant), none of the terraformed videos or articles I've read have even hinted at wild animal suffering as a potential issue to be concerned about.Also setting aside the conversation of whether or not we should extend human life into other planets and galaxies (for those who don't particularly follow longtermism, or the staunch antinatalists that might be reading this), wouldn't we be far better off just seeding these terraformed planets with plant life instead?If the key decision makers of the future decide they have to bring animals to other planets (and we can't convince them otherwise), then just introducing herbivores would be preferred, at the very least. I'd still be staunchly against this unless we could somehow guarantee that the lives of every individual animal would be net-positive, but sadly— we're not even close to getting people to include this kind of consideration into these types of conversations. At least, not that I know of.Don't get me wrong, Kurzgesagt has always been one of my favorite educational channels to watch. I'll continue to stay subscribed because I think they spread a lot of good, but their promotion of seeding wild life on other planets, without any consideration of the consequences, is unethical, and irresponsible.Instead of blindly pursuing our own interests and trying to populate every inch of the galaxy with life, we should consider the impact of our actions on other future beings and strive to minimize suffering whenever possible— or in this case, preventing it from happening at all.Thanks for reading.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
David van Beveren https://forum.effectivealtruism.org/posts/7DCuhq23zHxAmPqvJ/kurzgesagt-s-most-recent-video-promoting-the-introducing-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kurzgesagt's most recent video promoting the introducing of wild life to other planets is unethical and irresponsible, published by David van Beveren on December 11, 2022 on The Effective Altruism Forum.Kurzgesagt, in their recent video "How To Terraform Mars - WITH LASERS" (just came out a few hours ago as of writing this post), promotes the idea of seeding wildlife on other planets without considering the immense suffering it would cause for the animals on it. Instead of putting thought into the ethical implications of these actions, the video (as par for the course) focuses solely on the potential benefits for humans.Sadly this problem isn't an isolated incident either, the pattern of ignoring the real risk for immense wild animal suffering is common in almost all major plans and discussions involving the terraforming of planets or space colonisation.Sure, a new green planet with lots of nature sounds cool in theory, but it would very likely mean subjecting countless animals to a lifetime of suffering. These animals would be forced to adapt to potentially hostile and unfamiliar environments, and face countless challenges without any choice in the matter. There's no way around it that I can see.You might argue that in these proposed worlds, we'd create an environment for wild animals where there wouldn't be food scarcity, predators, disease, or even anthropogenic harms. Setting aside the immense improbability of such an world (imagine convincing a rhinoceros not to fight to the death for their territory against a wild boar or elephant), none of the terraformed videos or articles I've read have even hinted at wild animal suffering as a potential issue to be concerned about.Also setting aside the conversation of whether or not we should extend human life into other planets and galaxies (for those who don't particularly follow longtermism, or the staunch antinatalists that might be reading this), wouldn't we be far better off just seeding these terraformed planets with plant life instead?If the key decision makers of the future decide they have to bring animals to other planets (and we can't convince them otherwise), then just introducing herbivores would be preferred, at the very least. I'd still be staunchly against this unless we could somehow guarantee that the lives of every individual animal would be net-positive, but sadly— we're not even close to getting people to include this kind of consideration into these types of conversations. At least, not that I know of.Don't get me wrong, Kurzgesagt has always been one of my favorite educational channels to watch. I'll continue to stay subscribed because I think they spread a lot of good, but their promotion of seeding wild life on other planets, without any consideration of the consequences, is unethical, and irresponsible.Instead of blindly pursuing our own interests and trying to populate every inch of the galaxy with life, we should consider the impact of our actions on other future beings and strive to minimize suffering whenever possible— or in this case, preventing it from happening at all.Thanks for reading.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 11 Dec 2022 21:14:56 +0000 EA - Kurzgesagt's most recent video promoting the introducing of wild life to other planets is unethical and irresponsible by David van Beveren Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kurzgesagt's most recent video promoting the introducing of wild life to other planets is unethical and irresponsible, published by David van Beveren on December 11, 2022 on The Effective Altruism Forum.Kurzgesagt, in their recent video "How To Terraform Mars - WITH LASERS" (just came out a few hours ago as of writing this post), promotes the idea of seeding wildlife on other planets without considering the immense suffering it would cause for the animals on it. Instead of putting thought into the ethical implications of these actions, the video (as par for the course) focuses solely on the potential benefits for humans.Sadly this problem isn't an isolated incident either, the pattern of ignoring the real risk for immense wild animal suffering is common in almost all major plans and discussions involving the terraforming of planets or space colonisation.Sure, a new green planet with lots of nature sounds cool in theory, but it would very likely mean subjecting countless animals to a lifetime of suffering. These animals would be forced to adapt to potentially hostile and unfamiliar environments, and face countless challenges without any choice in the matter. There's no way around it that I can see.You might argue that in these proposed worlds, we'd create an environment for wild animals where there wouldn't be food scarcity, predators, disease, or even anthropogenic harms. Setting aside the immense improbability of such an world (imagine convincing a rhinoceros not to fight to the death for their territory against a wild boar or elephant), none of the terraformed videos or articles I've read have even hinted at wild animal suffering as a potential issue to be concerned about.Also setting aside the conversation of whether or not we should extend human life into other planets and galaxies (for those who don't particularly follow longtermism, or the staunch antinatalists that might be reading this), wouldn't we be far better off just seeding these terraformed planets with plant life instead?If the key decision makers of the future decide they have to bring animals to other planets (and we can't convince them otherwise), then just introducing herbivores would be preferred, at the very least. I'd still be staunchly against this unless we could somehow guarantee that the lives of every individual animal would be net-positive, but sadly— we're not even close to getting people to include this kind of consideration into these types of conversations. At least, not that I know of.Don't get me wrong, Kurzgesagt has always been one of my favorite educational channels to watch. I'll continue to stay subscribed because I think they spread a lot of good, but their promotion of seeding wild life on other planets, without any consideration of the consequences, is unethical, and irresponsible.Instead of blindly pursuing our own interests and trying to populate every inch of the galaxy with life, we should consider the impact of our actions on other future beings and strive to minimize suffering whenever possible— or in this case, preventing it from happening at all.Thanks for reading.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kurzgesagt's most recent video promoting the introducing of wild life to other planets is unethical and irresponsible, published by David van Beveren on December 11, 2022 on The Effective Altruism Forum.Kurzgesagt, in their recent video "How To Terraform Mars - WITH LASERS" (just came out a few hours ago as of writing this post), promotes the idea of seeding wildlife on other planets without considering the immense suffering it would cause for the animals on it. Instead of putting thought into the ethical implications of these actions, the video (as par for the course) focuses solely on the potential benefits for humans.Sadly this problem isn't an isolated incident either, the pattern of ignoring the real risk for immense wild animal suffering is common in almost all major plans and discussions involving the terraforming of planets or space colonisation.Sure, a new green planet with lots of nature sounds cool in theory, but it would very likely mean subjecting countless animals to a lifetime of suffering. These animals would be forced to adapt to potentially hostile and unfamiliar environments, and face countless challenges without any choice in the matter. There's no way around it that I can see.You might argue that in these proposed worlds, we'd create an environment for wild animals where there wouldn't be food scarcity, predators, disease, or even anthropogenic harms. Setting aside the immense improbability of such an world (imagine convincing a rhinoceros not to fight to the death for their territory against a wild boar or elephant), none of the terraformed videos or articles I've read have even hinted at wild animal suffering as a potential issue to be concerned about.Also setting aside the conversation of whether or not we should extend human life into other planets and galaxies (for those who don't particularly follow longtermism, or the staunch antinatalists that might be reading this), wouldn't we be far better off just seeding these terraformed planets with plant life instead?If the key decision makers of the future decide they have to bring animals to other planets (and we can't convince them otherwise), then just introducing herbivores would be preferred, at the very least. I'd still be staunchly against this unless we could somehow guarantee that the lives of every individual animal would be net-positive, but sadly— we're not even close to getting people to include this kind of consideration into these types of conversations. At least, not that I know of.Don't get me wrong, Kurzgesagt has always been one of my favorite educational channels to watch. I'll continue to stay subscribed because I think they spread a lot of good, but their promotion of seeding wild life on other planets, without any consideration of the consequences, is unethical, and irresponsible.Instead of blindly pursuing our own interests and trying to populate every inch of the galaxy with life, we should consider the impact of our actions on other future beings and strive to minimize suffering whenever possible— or in this case, preventing it from happening at all.Thanks for reading.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
David van Beveren https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:58 None full 4074
xL8JNJJmQMwsCtrtw_EA EA - Cryptocurrency is not all bad. We should stay away from it anyway. by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cryptocurrency is not all bad. We should stay away from it anyway., published by titotal on December 11, 2022 on The Effective Altruism Forum.So I've wanted to write something like this for some time, but was discouraged by the very real chance of getting downvoted into obscurity by cryptocurrency fans. I hope that in the face of the FTX debacle, people will at least consider the good-faith arguments put forward here. The title is a reference to this recent astralcodex post, which I critique in this article.IntroductionIn the wake of the FTX debacle, there seems to be a small but sizeable minority that believes that there was absolutely no way to see this coming. A massive company, the number 2 crypto exchange in the world, just collapses into nothing due to incompetence and/or fraud? Surely this is just a black swan event?Just like Celcius, three arrows capital, Voyager, and Terra/Luna, all of which collapsed in the last year. Go back to previous downturns, and you'll see the downfall of exchanges like Quadriga and mt gox, the latter of which was by far the largest crypto exchange in existence at the time when it collapsed. And the collapses are just getting started, with the fall of FTX taking out Blockfi, and threatening to take out Genesis and Grayscale. Tether, the largest stablecoin and the third largest crypto-coin, has been caught out lying about it's reserves. If they are as fraudulent as many suspect, the repercussions for the rest of the crypto industry could be disastrous.For an event to be a black swan, it needs to be outside the realm of normalcy. But the collapse of FTX was a fairly predictable event. Even true believers will admit that the crypto industry as a whole has significant problems with speculative bubbles, ponzis, scams, frauds, hacks, and general incompetence. The potential for a collapse was warned against on this forum, months ago. (I agreed with the prognosis in the comments at the time, for what it's worth).Note that I'm talking about collapse, and not specifically fraud. It was indeed hard to predict the precise mechanism by which FTX could fail, but I don't think that let's anyone off the hook. If FTX had failed due to incompetence, hacking, or exposure to other fraudulent companies, their investors would have still been screwed over, and the financial and reputational damage to EA would still occur, just with slightly better optics.The fundamental problem with cryptocurrency at the present time is that:A) Almost everyone involved with crypto is using it to try and get rich.B) Almost nobody (in relative terms) is using crypto for anything else.As long as this continues to be the case, crypto as a whole is still in the middle of a massive speculative bubble, and participating in said bubble is inherently dangerous.A. Crypto is infested with speculative bubbles, fraud, and scams (and everyone knows it)The crypto market is "rife with frauds, scams and abuse"SEC chairmen Gary Gensler, August 2021We’re just seeing mountains and mountains of fraudRyan Korner, IRS criminal investigator, Jan 2022During this period, nearly four out of every ten dollars reported lost to a fraud originating on social media was lost in crypto, far more than any other payment method.Federal Trade commission, June 2022Other than a speculative asset with a glorious whitepaper and an impressive "ex workers" of big name companies all around the world, they all promise the Moon but underdeliver. I have yet to see something that has an actual use in realife that is using any of those tokens/coins technology.r/cryptocurrency post with 13k upvotesMatt: (27:13)I think of myself as like a fairly cynical person. And that was so much more cynical than how I would've described farming. You're just like, well, I'm in the Ponzi business and it's pretty good.J...]]>
titotal https://forum.effectivealtruism.org/posts/xL8JNJJmQMwsCtrtw/cryptocurrency-is-not-all-bad-we-should-stay-away-from-it Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cryptocurrency is not all bad. We should stay away from it anyway., published by titotal on December 11, 2022 on The Effective Altruism Forum.So I've wanted to write something like this for some time, but was discouraged by the very real chance of getting downvoted into obscurity by cryptocurrency fans. I hope that in the face of the FTX debacle, people will at least consider the good-faith arguments put forward here. The title is a reference to this recent astralcodex post, which I critique in this article.IntroductionIn the wake of the FTX debacle, there seems to be a small but sizeable minority that believes that there was absolutely no way to see this coming. A massive company, the number 2 crypto exchange in the world, just collapses into nothing due to incompetence and/or fraud? Surely this is just a black swan event?Just like Celcius, three arrows capital, Voyager, and Terra/Luna, all of which collapsed in the last year. Go back to previous downturns, and you'll see the downfall of exchanges like Quadriga and mt gox, the latter of which was by far the largest crypto exchange in existence at the time when it collapsed. And the collapses are just getting started, with the fall of FTX taking out Blockfi, and threatening to take out Genesis and Grayscale. Tether, the largest stablecoin and the third largest crypto-coin, has been caught out lying about it's reserves. If they are as fraudulent as many suspect, the repercussions for the rest of the crypto industry could be disastrous.For an event to be a black swan, it needs to be outside the realm of normalcy. But the collapse of FTX was a fairly predictable event. Even true believers will admit that the crypto industry as a whole has significant problems with speculative bubbles, ponzis, scams, frauds, hacks, and general incompetence. The potential for a collapse was warned against on this forum, months ago. (I agreed with the prognosis in the comments at the time, for what it's worth).Note that I'm talking about collapse, and not specifically fraud. It was indeed hard to predict the precise mechanism by which FTX could fail, but I don't think that let's anyone off the hook. If FTX had failed due to incompetence, hacking, or exposure to other fraudulent companies, their investors would have still been screwed over, and the financial and reputational damage to EA would still occur, just with slightly better optics.The fundamental problem with cryptocurrency at the present time is that:A) Almost everyone involved with crypto is using it to try and get rich.B) Almost nobody (in relative terms) is using crypto for anything else.As long as this continues to be the case, crypto as a whole is still in the middle of a massive speculative bubble, and participating in said bubble is inherently dangerous.A. Crypto is infested with speculative bubbles, fraud, and scams (and everyone knows it)The crypto market is "rife with frauds, scams and abuse"SEC chairmen Gary Gensler, August 2021We’re just seeing mountains and mountains of fraudRyan Korner, IRS criminal investigator, Jan 2022During this period, nearly four out of every ten dollars reported lost to a fraud originating on social media was lost in crypto, far more than any other payment method.Federal Trade commission, June 2022Other than a speculative asset with a glorious whitepaper and an impressive "ex workers" of big name companies all around the world, they all promise the Moon but underdeliver. I have yet to see something that has an actual use in realife that is using any of those tokens/coins technology.r/cryptocurrency post with 13k upvotesMatt: (27:13)I think of myself as like a fairly cynical person. And that was so much more cynical than how I would've described farming. You're just like, well, I'm in the Ponzi business and it's pretty good.J...]]>
Sun, 11 Dec 2022 17:03:12 +0000 EA - Cryptocurrency is not all bad. We should stay away from it anyway. by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cryptocurrency is not all bad. We should stay away from it anyway., published by titotal on December 11, 2022 on The Effective Altruism Forum.So I've wanted to write something like this for some time, but was discouraged by the very real chance of getting downvoted into obscurity by cryptocurrency fans. I hope that in the face of the FTX debacle, people will at least consider the good-faith arguments put forward here. The title is a reference to this recent astralcodex post, which I critique in this article.IntroductionIn the wake of the FTX debacle, there seems to be a small but sizeable minority that believes that there was absolutely no way to see this coming. A massive company, the number 2 crypto exchange in the world, just collapses into nothing due to incompetence and/or fraud? Surely this is just a black swan event?Just like Celcius, three arrows capital, Voyager, and Terra/Luna, all of which collapsed in the last year. Go back to previous downturns, and you'll see the downfall of exchanges like Quadriga and mt gox, the latter of which was by far the largest crypto exchange in existence at the time when it collapsed. And the collapses are just getting started, with the fall of FTX taking out Blockfi, and threatening to take out Genesis and Grayscale. Tether, the largest stablecoin and the third largest crypto-coin, has been caught out lying about it's reserves. If they are as fraudulent as many suspect, the repercussions for the rest of the crypto industry could be disastrous.For an event to be a black swan, it needs to be outside the realm of normalcy. But the collapse of FTX was a fairly predictable event. Even true believers will admit that the crypto industry as a whole has significant problems with speculative bubbles, ponzis, scams, frauds, hacks, and general incompetence. The potential for a collapse was warned against on this forum, months ago. (I agreed with the prognosis in the comments at the time, for what it's worth).Note that I'm talking about collapse, and not specifically fraud. It was indeed hard to predict the precise mechanism by which FTX could fail, but I don't think that let's anyone off the hook. If FTX had failed due to incompetence, hacking, or exposure to other fraudulent companies, their investors would have still been screwed over, and the financial and reputational damage to EA would still occur, just with slightly better optics.The fundamental problem with cryptocurrency at the present time is that:A) Almost everyone involved with crypto is using it to try and get rich.B) Almost nobody (in relative terms) is using crypto for anything else.As long as this continues to be the case, crypto as a whole is still in the middle of a massive speculative bubble, and participating in said bubble is inherently dangerous.A. Crypto is infested with speculative bubbles, fraud, and scams (and everyone knows it)The crypto market is "rife with frauds, scams and abuse"SEC chairmen Gary Gensler, August 2021We’re just seeing mountains and mountains of fraudRyan Korner, IRS criminal investigator, Jan 2022During this period, nearly four out of every ten dollars reported lost to a fraud originating on social media was lost in crypto, far more than any other payment method.Federal Trade commission, June 2022Other than a speculative asset with a glorious whitepaper and an impressive "ex workers" of big name companies all around the world, they all promise the Moon but underdeliver. I have yet to see something that has an actual use in realife that is using any of those tokens/coins technology.r/cryptocurrency post with 13k upvotesMatt: (27:13)I think of myself as like a fairly cynical person. And that was so much more cynical than how I would've described farming. You're just like, well, I'm in the Ponzi business and it's pretty good.J...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cryptocurrency is not all bad. We should stay away from it anyway., published by titotal on December 11, 2022 on The Effective Altruism Forum.So I've wanted to write something like this for some time, but was discouraged by the very real chance of getting downvoted into obscurity by cryptocurrency fans. I hope that in the face of the FTX debacle, people will at least consider the good-faith arguments put forward here. The title is a reference to this recent astralcodex post, which I critique in this article.IntroductionIn the wake of the FTX debacle, there seems to be a small but sizeable minority that believes that there was absolutely no way to see this coming. A massive company, the number 2 crypto exchange in the world, just collapses into nothing due to incompetence and/or fraud? Surely this is just a black swan event?Just like Celcius, three arrows capital, Voyager, and Terra/Luna, all of which collapsed in the last year. Go back to previous downturns, and you'll see the downfall of exchanges like Quadriga and mt gox, the latter of which was by far the largest crypto exchange in existence at the time when it collapsed. And the collapses are just getting started, with the fall of FTX taking out Blockfi, and threatening to take out Genesis and Grayscale. Tether, the largest stablecoin and the third largest crypto-coin, has been caught out lying about it's reserves. If they are as fraudulent as many suspect, the repercussions for the rest of the crypto industry could be disastrous.For an event to be a black swan, it needs to be outside the realm of normalcy. But the collapse of FTX was a fairly predictable event. Even true believers will admit that the crypto industry as a whole has significant problems with speculative bubbles, ponzis, scams, frauds, hacks, and general incompetence. The potential for a collapse was warned against on this forum, months ago. (I agreed with the prognosis in the comments at the time, for what it's worth).Note that I'm talking about collapse, and not specifically fraud. It was indeed hard to predict the precise mechanism by which FTX could fail, but I don't think that let's anyone off the hook. If FTX had failed due to incompetence, hacking, or exposure to other fraudulent companies, their investors would have still been screwed over, and the financial and reputational damage to EA would still occur, just with slightly better optics.The fundamental problem with cryptocurrency at the present time is that:A) Almost everyone involved with crypto is using it to try and get rich.B) Almost nobody (in relative terms) is using crypto for anything else.As long as this continues to be the case, crypto as a whole is still in the middle of a massive speculative bubble, and participating in said bubble is inherently dangerous.A. Crypto is infested with speculative bubbles, fraud, and scams (and everyone knows it)The crypto market is "rife with frauds, scams and abuse"SEC chairmen Gary Gensler, August 2021We’re just seeing mountains and mountains of fraudRyan Korner, IRS criminal investigator, Jan 2022During this period, nearly four out of every ten dollars reported lost to a fraud originating on social media was lost in crypto, far more than any other payment method.Federal Trade commission, June 2022Other than a speculative asset with a glorious whitepaper and an impressive "ex workers" of big name companies all around the world, they all promise the Moon but underdeliver. I have yet to see something that has an actual use in realife that is using any of those tokens/coins technology.r/cryptocurrency post with 13k upvotesMatt: (27:13)I think of myself as like a fairly cynical person. And that was so much more cynical than how I would've described farming. You're just like, well, I'm in the Ponzi business and it's pretty good.J...]]>
titotal https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 16:31 None full 4075
sWguzaydorf8ejCKu_EA EA - What should we have thought about FTX's business practices? by smountjoy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What should we have thought about FTX's business practices?, published by smountjoy on December 10, 2022 on The Effective Altruism Forum.I think I'm not alone in worrying that we might have overlooked red flags about FTX because of the fact that its founders considered themselves EAs.Suppose all of us who failed to predict the FTX collapse were right to think, beforehand, that FTX was very likely an honest, non-fraudulent business. (Maybe because base rates for fraud were low or because investors thought so too.) Should we have even still been concerned about its business practices?For instance, should FTX's impact on its customers have looked net-negative? Should its business have seemed objectionable from a "common-sense ethics" perspective? If so, the lack of discussion at the time would suggest that many of us were either blind to unwelcome news or afraid to speak out against an important funder.Here are some considerations that might have suggested FTX's business practices were bad:"If customers are moving most of their savings out of stocks and bonds and into cryptocurrency, that probably makes them worse-off. FTX's mass-marketing might be encouraging people to do this, especially people who aren't financially savvy.""When customers make trades on the platform, they're probably trading against smart money and losing out. In fact, they're probably losing out more than usual because that smart money is Alameda and Alameda has a systemic advantage. Considering the amount FTX spends on marketing, customers must be losing a lot of money between exchange fees and market losses to Alameda."On the other hand, many people enjoy retail trading; some are probably aware of the costs and still find it worthwhile.My tentative personal view is that a year ago, FTX's business looked neutral or mildly bad for customers, but not much worse than, e.g., Robinhood; that the reputational risk to EA looked small; and that, though specialists could've given these issues more attention, it was okay for the wider EA community to focus on other things.What should someone with no inside information or ingroup bias have thought a year ago about FTX's business practices?In hindsight it looks like all this might be false, but I assume we couldn't have known that at the time.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
smountjoy https://forum.effectivealtruism.org/posts/sWguzaydorf8ejCKu/what-should-we-have-thought-about-ftx-s-business-practices Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What should we have thought about FTX's business practices?, published by smountjoy on December 10, 2022 on The Effective Altruism Forum.I think I'm not alone in worrying that we might have overlooked red flags about FTX because of the fact that its founders considered themselves EAs.Suppose all of us who failed to predict the FTX collapse were right to think, beforehand, that FTX was very likely an honest, non-fraudulent business. (Maybe because base rates for fraud were low or because investors thought so too.) Should we have even still been concerned about its business practices?For instance, should FTX's impact on its customers have looked net-negative? Should its business have seemed objectionable from a "common-sense ethics" perspective? If so, the lack of discussion at the time would suggest that many of us were either blind to unwelcome news or afraid to speak out against an important funder.Here are some considerations that might have suggested FTX's business practices were bad:"If customers are moving most of their savings out of stocks and bonds and into cryptocurrency, that probably makes them worse-off. FTX's mass-marketing might be encouraging people to do this, especially people who aren't financially savvy.""When customers make trades on the platform, they're probably trading against smart money and losing out. In fact, they're probably losing out more than usual because that smart money is Alameda and Alameda has a systemic advantage. Considering the amount FTX spends on marketing, customers must be losing a lot of money between exchange fees and market losses to Alameda."On the other hand, many people enjoy retail trading; some are probably aware of the costs and still find it worthwhile.My tentative personal view is that a year ago, FTX's business looked neutral or mildly bad for customers, but not much worse than, e.g., Robinhood; that the reputational risk to EA looked small; and that, though specialists could've given these issues more attention, it was okay for the wider EA community to focus on other things.What should someone with no inside information or ingroup bias have thought a year ago about FTX's business practices?In hindsight it looks like all this might be false, but I assume we couldn't have known that at the time.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 11 Dec 2022 12:15:52 +0000 EA - What should we have thought about FTX's business practices? by smountjoy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What should we have thought about FTX's business practices?, published by smountjoy on December 10, 2022 on The Effective Altruism Forum.I think I'm not alone in worrying that we might have overlooked red flags about FTX because of the fact that its founders considered themselves EAs.Suppose all of us who failed to predict the FTX collapse were right to think, beforehand, that FTX was very likely an honest, non-fraudulent business. (Maybe because base rates for fraud were low or because investors thought so too.) Should we have even still been concerned about its business practices?For instance, should FTX's impact on its customers have looked net-negative? Should its business have seemed objectionable from a "common-sense ethics" perspective? If so, the lack of discussion at the time would suggest that many of us were either blind to unwelcome news or afraid to speak out against an important funder.Here are some considerations that might have suggested FTX's business practices were bad:"If customers are moving most of their savings out of stocks and bonds and into cryptocurrency, that probably makes them worse-off. FTX's mass-marketing might be encouraging people to do this, especially people who aren't financially savvy.""When customers make trades on the platform, they're probably trading against smart money and losing out. In fact, they're probably losing out more than usual because that smart money is Alameda and Alameda has a systemic advantage. Considering the amount FTX spends on marketing, customers must be losing a lot of money between exchange fees and market losses to Alameda."On the other hand, many people enjoy retail trading; some are probably aware of the costs and still find it worthwhile.My tentative personal view is that a year ago, FTX's business looked neutral or mildly bad for customers, but not much worse than, e.g., Robinhood; that the reputational risk to EA looked small; and that, though specialists could've given these issues more attention, it was okay for the wider EA community to focus on other things.What should someone with no inside information or ingroup bias have thought a year ago about FTX's business practices?In hindsight it looks like all this might be false, but I assume we couldn't have known that at the time.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What should we have thought about FTX's business practices?, published by smountjoy on December 10, 2022 on The Effective Altruism Forum.I think I'm not alone in worrying that we might have overlooked red flags about FTX because of the fact that its founders considered themselves EAs.Suppose all of us who failed to predict the FTX collapse were right to think, beforehand, that FTX was very likely an honest, non-fraudulent business. (Maybe because base rates for fraud were low or because investors thought so too.) Should we have even still been concerned about its business practices?For instance, should FTX's impact on its customers have looked net-negative? Should its business have seemed objectionable from a "common-sense ethics" perspective? If so, the lack of discussion at the time would suggest that many of us were either blind to unwelcome news or afraid to speak out against an important funder.Here are some considerations that might have suggested FTX's business practices were bad:"If customers are moving most of their savings out of stocks and bonds and into cryptocurrency, that probably makes them worse-off. FTX's mass-marketing might be encouraging people to do this, especially people who aren't financially savvy.""When customers make trades on the platform, they're probably trading against smart money and losing out. In fact, they're probably losing out more than usual because that smart money is Alameda and Alameda has a systemic advantage. Considering the amount FTX spends on marketing, customers must be losing a lot of money between exchange fees and market losses to Alameda."On the other hand, many people enjoy retail trading; some are probably aware of the costs and still find it worthwhile.My tentative personal view is that a year ago, FTX's business looked neutral or mildly bad for customers, but not much worse than, e.g., Robinhood; that the reputational risk to EA looked small; and that, though specialists could've given these issues more attention, it was okay for the wider EA community to focus on other things.What should someone with no inside information or ingroup bias have thought a year ago about FTX's business practices?In hindsight it looks like all this might be false, but I assume we couldn't have known that at the time.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
smountjoy https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:21 None full 4076
NbiHKTN5QhFFfjjm5_EA EA - AI Safety Seems Hard to Measure by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Seems Hard to Measure, published by Holden Karnofsky on December 11, 2022 on The Effective Altruism Forum.More detail on why AI could make this the most important century (Details not included in email - click to view on the web) Why would AI "aim" to defeat humanity? (Details not included in email - click to view on the web) How could AI defeat humanity? (Details not included in email - click to view on the web) Why are AI systems "black boxes" that we can't understand the inner workings of? (Details not included in email - click to view on the web) How could AI defeat humanity? (Details not included in email - click to view on the web) The Volkswagen emissions scandal (Details not included in email - click to view on the web)In previous pieces, I argued that there's a real and large risk of AI systems' developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening.A young, growing field of AI safety research tries to reduce this risk, by finding ways to ensure that AI systems behave as intended (rather than forming ambitious aims of their own and deceiving and manipulating humans as needed to accomplish them).Maybe we'll succeed in reducing the risk, and maybe we won't. Unfortunately, I think it could be hard to know either way. This piece is about four fairly distinct-seeming reasons that this could be the case - and that AI safety could be an unusually difficult sort of science.This piece is aimed at a broad audience, because I think it's important for the challenges here to be broadly understood. I expect powerful, dangerous AI systems to have a lot of benefits (commercial, military, etc.), and to potentially appear safer than they are - so I think it will be hard to be as cautious about AI as we should be. I think our odds look better if many people understand, at a high level, some of the challenges in knowing whether AI systems are as safe as they appear.First, I'll recap the basic challenge of AI safety research, and outline what I wish AI safety research could be like. I wish it had this basic form: "Apply a test to the AI system. If the test goes badly, try another AI development method and test that. If the test goes well, we're probably in good shape." I think car safety research mostly looks like this; I think AI capabilities research mostly looks like this.Then, I’ll give four reasons that apparent success in AI safety can be misleading.“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem? Problem Key question Explanation The Lance Armstrong problem Did we get the AI to be actually safe or good at hiding its dangerous actions? The King Lear problem The lab mice problem Today's "subhuman" AIs are safe.What about future AIs with more human-like abilities? The first contact problemWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”When professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.The AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?It's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't.AIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation.Like King Lear...]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/NbiHKTN5QhFFfjjm5/ai-safety-seems-hard-to-measure Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Seems Hard to Measure, published by Holden Karnofsky on December 11, 2022 on The Effective Altruism Forum.More detail on why AI could make this the most important century (Details not included in email - click to view on the web) Why would AI "aim" to defeat humanity? (Details not included in email - click to view on the web) How could AI defeat humanity? (Details not included in email - click to view on the web) Why are AI systems "black boxes" that we can't understand the inner workings of? (Details not included in email - click to view on the web) How could AI defeat humanity? (Details not included in email - click to view on the web) The Volkswagen emissions scandal (Details not included in email - click to view on the web)In previous pieces, I argued that there's a real and large risk of AI systems' developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening.A young, growing field of AI safety research tries to reduce this risk, by finding ways to ensure that AI systems behave as intended (rather than forming ambitious aims of their own and deceiving and manipulating humans as needed to accomplish them).Maybe we'll succeed in reducing the risk, and maybe we won't. Unfortunately, I think it could be hard to know either way. This piece is about four fairly distinct-seeming reasons that this could be the case - and that AI safety could be an unusually difficult sort of science.This piece is aimed at a broad audience, because I think it's important for the challenges here to be broadly understood. I expect powerful, dangerous AI systems to have a lot of benefits (commercial, military, etc.), and to potentially appear safer than they are - so I think it will be hard to be as cautious about AI as we should be. I think our odds look better if many people understand, at a high level, some of the challenges in knowing whether AI systems are as safe as they appear.First, I'll recap the basic challenge of AI safety research, and outline what I wish AI safety research could be like. I wish it had this basic form: "Apply a test to the AI system. If the test goes badly, try another AI development method and test that. If the test goes well, we're probably in good shape." I think car safety research mostly looks like this; I think AI capabilities research mostly looks like this.Then, I’ll give four reasons that apparent success in AI safety can be misleading.“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem? Problem Key question Explanation The Lance Armstrong problem Did we get the AI to be actually safe or good at hiding its dangerous actions? The King Lear problem The lab mice problem Today's "subhuman" AIs are safe.What about future AIs with more human-like abilities? The first contact problemWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”When professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.The AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?It's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't.AIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation.Like King Lear...]]>
Sun, 11 Dec 2022 08:19:08 +0000 EA - AI Safety Seems Hard to Measure by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Seems Hard to Measure, published by Holden Karnofsky on December 11, 2022 on The Effective Altruism Forum.More detail on why AI could make this the most important century (Details not included in email - click to view on the web) Why would AI "aim" to defeat humanity? (Details not included in email - click to view on the web) How could AI defeat humanity? (Details not included in email - click to view on the web) Why are AI systems "black boxes" that we can't understand the inner workings of? (Details not included in email - click to view on the web) How could AI defeat humanity? (Details not included in email - click to view on the web) The Volkswagen emissions scandal (Details not included in email - click to view on the web)In previous pieces, I argued that there's a real and large risk of AI systems' developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening.A young, growing field of AI safety research tries to reduce this risk, by finding ways to ensure that AI systems behave as intended (rather than forming ambitious aims of their own and deceiving and manipulating humans as needed to accomplish them).Maybe we'll succeed in reducing the risk, and maybe we won't. Unfortunately, I think it could be hard to know either way. This piece is about four fairly distinct-seeming reasons that this could be the case - and that AI safety could be an unusually difficult sort of science.This piece is aimed at a broad audience, because I think it's important for the challenges here to be broadly understood. I expect powerful, dangerous AI systems to have a lot of benefits (commercial, military, etc.), and to potentially appear safer than they are - so I think it will be hard to be as cautious about AI as we should be. I think our odds look better if many people understand, at a high level, some of the challenges in knowing whether AI systems are as safe as they appear.First, I'll recap the basic challenge of AI safety research, and outline what I wish AI safety research could be like. I wish it had this basic form: "Apply a test to the AI system. If the test goes badly, try another AI development method and test that. If the test goes well, we're probably in good shape." I think car safety research mostly looks like this; I think AI capabilities research mostly looks like this.Then, I’ll give four reasons that apparent success in AI safety can be misleading.“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem? Problem Key question Explanation The Lance Armstrong problem Did we get the AI to be actually safe or good at hiding its dangerous actions? The King Lear problem The lab mice problem Today's "subhuman" AIs are safe.What about future AIs with more human-like abilities? The first contact problemWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”When professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.The AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?It's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't.AIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation.Like King Lear...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Seems Hard to Measure, published by Holden Karnofsky on December 11, 2022 on The Effective Altruism Forum.More detail on why AI could make this the most important century (Details not included in email - click to view on the web) Why would AI "aim" to defeat humanity? (Details not included in email - click to view on the web) How could AI defeat humanity? (Details not included in email - click to view on the web) Why are AI systems "black boxes" that we can't understand the inner workings of? (Details not included in email - click to view on the web) How could AI defeat humanity? (Details not included in email - click to view on the web) The Volkswagen emissions scandal (Details not included in email - click to view on the web)In previous pieces, I argued that there's a real and large risk of AI systems' developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening.A young, growing field of AI safety research tries to reduce this risk, by finding ways to ensure that AI systems behave as intended (rather than forming ambitious aims of their own and deceiving and manipulating humans as needed to accomplish them).Maybe we'll succeed in reducing the risk, and maybe we won't. Unfortunately, I think it could be hard to know either way. This piece is about four fairly distinct-seeming reasons that this could be the case - and that AI safety could be an unusually difficult sort of science.This piece is aimed at a broad audience, because I think it's important for the challenges here to be broadly understood. I expect powerful, dangerous AI systems to have a lot of benefits (commercial, military, etc.), and to potentially appear safer than they are - so I think it will be hard to be as cautious about AI as we should be. I think our odds look better if many people understand, at a high level, some of the challenges in knowing whether AI systems are as safe as they appear.First, I'll recap the basic challenge of AI safety research, and outline what I wish AI safety research could be like. I wish it had this basic form: "Apply a test to the AI system. If the test goes badly, try another AI development method and test that. If the test goes well, we're probably in good shape." I think car safety research mostly looks like this; I think AI capabilities research mostly looks like this.Then, I’ll give four reasons that apparent success in AI safety can be misleading.“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem? Problem Key question Explanation The Lance Armstrong problem Did we get the AI to be actually safe or good at hiding its dangerous actions? The King Lear problem The lab mice problem Today's "subhuman" AIs are safe.What about future AIs with more human-like abilities? The first contact problemWhen dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “appearing to behave well.”When professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.The AI is (actually) well-behaved when humans are in control. Will this transfer to when AIs are in control?It's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't.AIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to take control of the world entirely. It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation.Like King Lear...]]>
Holden Karnofsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:35 None full 4071
ihYFy5kmeEgLT8opY_EA EA - Personal Finance for EAs by NicoleJaneway Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Personal Finance for EAs, published by NicoleJaneway on December 10, 2022 on The Effective Altruism Forum.After attending EAGxBerkeley last weekend, I'm thinking there needs to be significantly more emphasis on the basics of personal finance, particularly for student groups and other young EAs.Rationale:It's in the best interest of the community to raise the overall level of financial literacy. This has the benefit of increasing the amount of money we can move as a collective. Plus, it avoids the serious risk of an individual overtaxing their finances by failing to set up a rainy day strategy or implement another common-sense personal finance best practice.EAs have different financial needs than the general population. Balancing giving and saving is much less stressful if you have a reasoned plan about how to do so. I imagine it'd be pretty easy to get bad advice from a financial counselor who isn't familiar with philanthropic giving.Context:At the EA Software Engineering meet, earning to give came up, so we talked about some slightly advanced financial strategies. Some people were nodding their heads, others were scribbling frantic notes, seeming to indicate the concepts were perhaps new to them.Throughout the weekend, I had a couple conversations that revealed a lack of understanding of basic personal finance. This post is absolutely not intended to call anyone out. But before graduation, you really should have a solid understanding of checking accounts versus long term investments.Implementation:I feel pretty strongly the next EAGx would benefit from a talk on personal finance, outlining the best strategies for that particular country's idiosyncratic financial rules.For example, the first 80% of this presentation would walk through commonly cited strategies for setting up checking and brokerage accounts, investing for retirement, maxing out one's 401k, handling stock options, etc. Here's a useful article that breaks down saving and giving into stages.Given that EAs should be open to the idea of spending money to save time, in the following 10% of the presentation, we would provide a framework for how to think about this, based on Clearer Thinking's Value of Your Time Calculator.Then the remaining 10% would talk about strategies specific to maximizing donations. As far as I know, in the US, using a donor advised fund (DAF) to donate appreciated assets is the best strategy for tax advantaged giving. Here are a bunch more ideas.So long as we caveat with "this is not financial advice" and don't specify which stocks attendees should purchase, my understanding is that this kind of financial presentation would be okay content for an event. Please correct me if this is not correct.Our community has the fun Facebook Group Highly Speculative EA Capital Accumulation full of complex (/ questionable?) strategies. I think we need to cover the other end of the spectrum — i.e., personal finance basics — a little better.Summary:The next EAGx should have a talk on smart ways to maximize giving strategies, geared toward the particular rules of the country in which the event is hosted.EAs should be more open to taking on risk with the assets they intend to donate. EAs should invest to give with the mindset of maximizing expected value, knowing that the downside risks won't put their financial future in jeopardy. To this end, EAs might be underutilizing donor advised funds and missing out on the associated tax benefits.It's also possible that EAs are underleveraging opportunities to spend money to save time.Furthermore, EAs (especially student groups) may benefit from education about basic personal finance: maximizing 401k and ROTH IRA contributions, using low-cost index funds to invest, setting up and paying off credit cards to take advantage of perks and save on t...]]>
NicoleJaneway https://forum.effectivealtruism.org/posts/ihYFy5kmeEgLT8opY/personal-finance-for-eas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Personal Finance for EAs, published by NicoleJaneway on December 10, 2022 on The Effective Altruism Forum.After attending EAGxBerkeley last weekend, I'm thinking there needs to be significantly more emphasis on the basics of personal finance, particularly for student groups and other young EAs.Rationale:It's in the best interest of the community to raise the overall level of financial literacy. This has the benefit of increasing the amount of money we can move as a collective. Plus, it avoids the serious risk of an individual overtaxing their finances by failing to set up a rainy day strategy or implement another common-sense personal finance best practice.EAs have different financial needs than the general population. Balancing giving and saving is much less stressful if you have a reasoned plan about how to do so. I imagine it'd be pretty easy to get bad advice from a financial counselor who isn't familiar with philanthropic giving.Context:At the EA Software Engineering meet, earning to give came up, so we talked about some slightly advanced financial strategies. Some people were nodding their heads, others were scribbling frantic notes, seeming to indicate the concepts were perhaps new to them.Throughout the weekend, I had a couple conversations that revealed a lack of understanding of basic personal finance. This post is absolutely not intended to call anyone out. But before graduation, you really should have a solid understanding of checking accounts versus long term investments.Implementation:I feel pretty strongly the next EAGx would benefit from a talk on personal finance, outlining the best strategies for that particular country's idiosyncratic financial rules.For example, the first 80% of this presentation would walk through commonly cited strategies for setting up checking and brokerage accounts, investing for retirement, maxing out one's 401k, handling stock options, etc. Here's a useful article that breaks down saving and giving into stages.Given that EAs should be open to the idea of spending money to save time, in the following 10% of the presentation, we would provide a framework for how to think about this, based on Clearer Thinking's Value of Your Time Calculator.Then the remaining 10% would talk about strategies specific to maximizing donations. As far as I know, in the US, using a donor advised fund (DAF) to donate appreciated assets is the best strategy for tax advantaged giving. Here are a bunch more ideas.So long as we caveat with "this is not financial advice" and don't specify which stocks attendees should purchase, my understanding is that this kind of financial presentation would be okay content for an event. Please correct me if this is not correct.Our community has the fun Facebook Group Highly Speculative EA Capital Accumulation full of complex (/ questionable?) strategies. I think we need to cover the other end of the spectrum — i.e., personal finance basics — a little better.Summary:The next EAGx should have a talk on smart ways to maximize giving strategies, geared toward the particular rules of the country in which the event is hosted.EAs should be more open to taking on risk with the assets they intend to donate. EAs should invest to give with the mindset of maximizing expected value, knowing that the downside risks won't put their financial future in jeopardy. To this end, EAs might be underutilizing donor advised funds and missing out on the associated tax benefits.It's also possible that EAs are underleveraging opportunities to spend money to save time.Furthermore, EAs (especially student groups) may benefit from education about basic personal finance: maximizing 401k and ROTH IRA contributions, using low-cost index funds to invest, setting up and paying off credit cards to take advantage of perks and save on t...]]>
Sun, 11 Dec 2022 08:16:19 +0000 EA - Personal Finance for EAs by NicoleJaneway Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Personal Finance for EAs, published by NicoleJaneway on December 10, 2022 on The Effective Altruism Forum.After attending EAGxBerkeley last weekend, I'm thinking there needs to be significantly more emphasis on the basics of personal finance, particularly for student groups and other young EAs.Rationale:It's in the best interest of the community to raise the overall level of financial literacy. This has the benefit of increasing the amount of money we can move as a collective. Plus, it avoids the serious risk of an individual overtaxing their finances by failing to set up a rainy day strategy or implement another common-sense personal finance best practice.EAs have different financial needs than the general population. Balancing giving and saving is much less stressful if you have a reasoned plan about how to do so. I imagine it'd be pretty easy to get bad advice from a financial counselor who isn't familiar with philanthropic giving.Context:At the EA Software Engineering meet, earning to give came up, so we talked about some slightly advanced financial strategies. Some people were nodding their heads, others were scribbling frantic notes, seeming to indicate the concepts were perhaps new to them.Throughout the weekend, I had a couple conversations that revealed a lack of understanding of basic personal finance. This post is absolutely not intended to call anyone out. But before graduation, you really should have a solid understanding of checking accounts versus long term investments.Implementation:I feel pretty strongly the next EAGx would benefit from a talk on personal finance, outlining the best strategies for that particular country's idiosyncratic financial rules.For example, the first 80% of this presentation would walk through commonly cited strategies for setting up checking and brokerage accounts, investing for retirement, maxing out one's 401k, handling stock options, etc. Here's a useful article that breaks down saving and giving into stages.Given that EAs should be open to the idea of spending money to save time, in the following 10% of the presentation, we would provide a framework for how to think about this, based on Clearer Thinking's Value of Your Time Calculator.Then the remaining 10% would talk about strategies specific to maximizing donations. As far as I know, in the US, using a donor advised fund (DAF) to donate appreciated assets is the best strategy for tax advantaged giving. Here are a bunch more ideas.So long as we caveat with "this is not financial advice" and don't specify which stocks attendees should purchase, my understanding is that this kind of financial presentation would be okay content for an event. Please correct me if this is not correct.Our community has the fun Facebook Group Highly Speculative EA Capital Accumulation full of complex (/ questionable?) strategies. I think we need to cover the other end of the spectrum — i.e., personal finance basics — a little better.Summary:The next EAGx should have a talk on smart ways to maximize giving strategies, geared toward the particular rules of the country in which the event is hosted.EAs should be more open to taking on risk with the assets they intend to donate. EAs should invest to give with the mindset of maximizing expected value, knowing that the downside risks won't put their financial future in jeopardy. To this end, EAs might be underutilizing donor advised funds and missing out on the associated tax benefits.It's also possible that EAs are underleveraging opportunities to spend money to save time.Furthermore, EAs (especially student groups) may benefit from education about basic personal finance: maximizing 401k and ROTH IRA contributions, using low-cost index funds to invest, setting up and paying off credit cards to take advantage of perks and save on t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Personal Finance for EAs, published by NicoleJaneway on December 10, 2022 on The Effective Altruism Forum.After attending EAGxBerkeley last weekend, I'm thinking there needs to be significantly more emphasis on the basics of personal finance, particularly for student groups and other young EAs.Rationale:It's in the best interest of the community to raise the overall level of financial literacy. This has the benefit of increasing the amount of money we can move as a collective. Plus, it avoids the serious risk of an individual overtaxing their finances by failing to set up a rainy day strategy or implement another common-sense personal finance best practice.EAs have different financial needs than the general population. Balancing giving and saving is much less stressful if you have a reasoned plan about how to do so. I imagine it'd be pretty easy to get bad advice from a financial counselor who isn't familiar with philanthropic giving.Context:At the EA Software Engineering meet, earning to give came up, so we talked about some slightly advanced financial strategies. Some people were nodding their heads, others were scribbling frantic notes, seeming to indicate the concepts were perhaps new to them.Throughout the weekend, I had a couple conversations that revealed a lack of understanding of basic personal finance. This post is absolutely not intended to call anyone out. But before graduation, you really should have a solid understanding of checking accounts versus long term investments.Implementation:I feel pretty strongly the next EAGx would benefit from a talk on personal finance, outlining the best strategies for that particular country's idiosyncratic financial rules.For example, the first 80% of this presentation would walk through commonly cited strategies for setting up checking and brokerage accounts, investing for retirement, maxing out one's 401k, handling stock options, etc. Here's a useful article that breaks down saving and giving into stages.Given that EAs should be open to the idea of spending money to save time, in the following 10% of the presentation, we would provide a framework for how to think about this, based on Clearer Thinking's Value of Your Time Calculator.Then the remaining 10% would talk about strategies specific to maximizing donations. As far as I know, in the US, using a donor advised fund (DAF) to donate appreciated assets is the best strategy for tax advantaged giving. Here are a bunch more ideas.So long as we caveat with "this is not financial advice" and don't specify which stocks attendees should purchase, my understanding is that this kind of financial presentation would be okay content for an event. Please correct me if this is not correct.Our community has the fun Facebook Group Highly Speculative EA Capital Accumulation full of complex (/ questionable?) strategies. I think we need to cover the other end of the spectrum — i.e., personal finance basics — a little better.Summary:The next EAGx should have a talk on smart ways to maximize giving strategies, geared toward the particular rules of the country in which the event is hosted.EAs should be more open to taking on risk with the assets they intend to donate. EAs should invest to give with the mindset of maximizing expected value, knowing that the downside risks won't put their financial future in jeopardy. To this end, EAs might be underutilizing donor advised funds and missing out on the associated tax benefits.It's also possible that EAs are underleveraging opportunities to spend money to save time.Furthermore, EAs (especially student groups) may benefit from education about basic personal finance: maximizing 401k and ROTH IRA contributions, using low-cost index funds to invest, setting up and paying off credit cards to take advantage of perks and save on t...]]>
NicoleJaneway https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:12 None full 4072
JGR87M8to93D7Ahzh_EA EA - Hugh Thompson Jr (1943–2006) by Gavin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hugh Thompson Jr (1943–2006), published by Gavin on December 10, 2022 on The Effective Altruism Forum.On the 30th anniversary of the massacre, Thompson went back to My Lai and met some of the people whose lives he had saved."One of the ladies that we had helped out that day came up to me and asked, ‘Why didn’t the people who committed these acts come back with you?’ And I was just devastated. And then she finished her sentence: she said, ‘So we could forgive them.’ I’m not man enough to do that. I’m sorry. I wish I was, but I won’t lie to anybody. I’m not that much of a man."Hugh Thompson was a volunteer officer in the Vietnam War who turned his squad's weapons on American soldiers to stop them raping and murdering more women and children than the four or five hundred they already had.He models the standard idea of heroism: one day, one decision, clarity, evil men to defy, faces you can see. Then, a system to navigate, betrayal, self-sacrifice, being punished for virtue.I'm writing about him because of how the story makes me feel. It is a fault of my feeling that I feel like this about Thompson and not Borlaug.One dayThompson and his crew, who at first thought the artillery bombardment caused all the civilian deaths on the ground, became aware that Americans were murdering the villagers after a wounded civilian woman they requested medical evacuation for, Nguyễn Thị Tẩu, was murdered right in front of them by Captain Medina, the commanding officer of the operation... "It was a Nazi kind of thing."Immediately realizing that the soldiers intended to murder the Vietnamese civilians, Thompson landed his helicopter between the advancing ground unit and the villagers. He turned to Colburn and Andreotta and ordered them to shoot the men in the 2nd Platoon if they attempted to kill any of the fleeing civilians...“Open up on ‘em. Blow ‘em away.”While Colburn and Andreotta trained their guns on the 2nd Platoon, Thompson located as many civilians as he could, persuaded them to follow him to a safer location, and ensured their evacuation..."Later that day, sometime in the afternoon, after they had gone through the village, we were back out there again. [The murderers] were just casually, nonchalantly sitting around around smoking and joking with their steel pots off just like nothing had happened. There were five or six hundred bodies less than a quarter of a mile from them. I just couldn't understand it."...senior American Division officers cancelled similar planned operations by Task Force Barker against other villages... possibly preventing the additional massacre of further hundreds, if not thousands, of Vietnamese civilians.Synecdochein the Vietnamese province of Quang Ngai, where the Mỹ Lai massacre occurred, up to 70% of all villages were destroyed by the air strikes and artillery bombardments, including the use of napalm; 40 percent of the population were refugees, and the overall civilian casualties were close to 50,000 a year... 203 U.S. personnel were charged with crimes, 57 of them were court-martialed and 23 of them were convicted. The VWCWG also investigated over 500 additional alleged atrocities but it could not verify them.[A draw: ] PFC Herbert L. Carter; shot himself in the foot while reloading his pistol and claimed that he shot himself in the foot in order to be MEDEVACed out of the village when the massacre started.CoverupThompson quickly received the Distinguished Flying Cross for his actions at Mỹ Lai. The citation for the award fabricated events, for example praising Thompson for taking to a hospital a Vietnamese child "...caught in intense crossfire". It also stated that his "...sound judgment had greatly enhanced Vietnamese–American relations in the operational area". Thompson threw away the citation.Initial reports claimed "128 Viet C...]]>
Gavin https://forum.effectivealtruism.org/posts/JGR87M8to93D7Ahzh/hugh-thompson-jr-1943-2006 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hugh Thompson Jr (1943–2006), published by Gavin on December 10, 2022 on The Effective Altruism Forum.On the 30th anniversary of the massacre, Thompson went back to My Lai and met some of the people whose lives he had saved."One of the ladies that we had helped out that day came up to me and asked, ‘Why didn’t the people who committed these acts come back with you?’ And I was just devastated. And then she finished her sentence: she said, ‘So we could forgive them.’ I’m not man enough to do that. I’m sorry. I wish I was, but I won’t lie to anybody. I’m not that much of a man."Hugh Thompson was a volunteer officer in the Vietnam War who turned his squad's weapons on American soldiers to stop them raping and murdering more women and children than the four or five hundred they already had.He models the standard idea of heroism: one day, one decision, clarity, evil men to defy, faces you can see. Then, a system to navigate, betrayal, self-sacrifice, being punished for virtue.I'm writing about him because of how the story makes me feel. It is a fault of my feeling that I feel like this about Thompson and not Borlaug.One dayThompson and his crew, who at first thought the artillery bombardment caused all the civilian deaths on the ground, became aware that Americans were murdering the villagers after a wounded civilian woman they requested medical evacuation for, Nguyễn Thị Tẩu, was murdered right in front of them by Captain Medina, the commanding officer of the operation... "It was a Nazi kind of thing."Immediately realizing that the soldiers intended to murder the Vietnamese civilians, Thompson landed his helicopter between the advancing ground unit and the villagers. He turned to Colburn and Andreotta and ordered them to shoot the men in the 2nd Platoon if they attempted to kill any of the fleeing civilians...“Open up on ‘em. Blow ‘em away.”While Colburn and Andreotta trained their guns on the 2nd Platoon, Thompson located as many civilians as he could, persuaded them to follow him to a safer location, and ensured their evacuation..."Later that day, sometime in the afternoon, after they had gone through the village, we were back out there again. [The murderers] were just casually, nonchalantly sitting around around smoking and joking with their steel pots off just like nothing had happened. There were five or six hundred bodies less than a quarter of a mile from them. I just couldn't understand it."...senior American Division officers cancelled similar planned operations by Task Force Barker against other villages... possibly preventing the additional massacre of further hundreds, if not thousands, of Vietnamese civilians.Synecdochein the Vietnamese province of Quang Ngai, where the Mỹ Lai massacre occurred, up to 70% of all villages were destroyed by the air strikes and artillery bombardments, including the use of napalm; 40 percent of the population were refugees, and the overall civilian casualties were close to 50,000 a year... 203 U.S. personnel were charged with crimes, 57 of them were court-martialed and 23 of them were convicted. The VWCWG also investigated over 500 additional alleged atrocities but it could not verify them.[A draw: ] PFC Herbert L. Carter; shot himself in the foot while reloading his pistol and claimed that he shot himself in the foot in order to be MEDEVACed out of the village when the massacre started.CoverupThompson quickly received the Distinguished Flying Cross for his actions at Mỹ Lai. The citation for the award fabricated events, for example praising Thompson for taking to a hospital a Vietnamese child "...caught in intense crossfire". It also stated that his "...sound judgment had greatly enhanced Vietnamese–American relations in the operational area". Thompson threw away the citation.Initial reports claimed "128 Viet C...]]>
Sun, 11 Dec 2022 04:35:12 +0000 EA - Hugh Thompson Jr (1943–2006) by Gavin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hugh Thompson Jr (1943–2006), published by Gavin on December 10, 2022 on The Effective Altruism Forum.On the 30th anniversary of the massacre, Thompson went back to My Lai and met some of the people whose lives he had saved."One of the ladies that we had helped out that day came up to me and asked, ‘Why didn’t the people who committed these acts come back with you?’ And I was just devastated. And then she finished her sentence: she said, ‘So we could forgive them.’ I’m not man enough to do that. I’m sorry. I wish I was, but I won’t lie to anybody. I’m not that much of a man."Hugh Thompson was a volunteer officer in the Vietnam War who turned his squad's weapons on American soldiers to stop them raping and murdering more women and children than the four or five hundred they already had.He models the standard idea of heroism: one day, one decision, clarity, evil men to defy, faces you can see. Then, a system to navigate, betrayal, self-sacrifice, being punished for virtue.I'm writing about him because of how the story makes me feel. It is a fault of my feeling that I feel like this about Thompson and not Borlaug.One dayThompson and his crew, who at first thought the artillery bombardment caused all the civilian deaths on the ground, became aware that Americans were murdering the villagers after a wounded civilian woman they requested medical evacuation for, Nguyễn Thị Tẩu, was murdered right in front of them by Captain Medina, the commanding officer of the operation... "It was a Nazi kind of thing."Immediately realizing that the soldiers intended to murder the Vietnamese civilians, Thompson landed his helicopter between the advancing ground unit and the villagers. He turned to Colburn and Andreotta and ordered them to shoot the men in the 2nd Platoon if they attempted to kill any of the fleeing civilians...“Open up on ‘em. Blow ‘em away.”While Colburn and Andreotta trained their guns on the 2nd Platoon, Thompson located as many civilians as he could, persuaded them to follow him to a safer location, and ensured their evacuation..."Later that day, sometime in the afternoon, after they had gone through the village, we were back out there again. [The murderers] were just casually, nonchalantly sitting around around smoking and joking with their steel pots off just like nothing had happened. There were five or six hundred bodies less than a quarter of a mile from them. I just couldn't understand it."...senior American Division officers cancelled similar planned operations by Task Force Barker against other villages... possibly preventing the additional massacre of further hundreds, if not thousands, of Vietnamese civilians.Synecdochein the Vietnamese province of Quang Ngai, where the Mỹ Lai massacre occurred, up to 70% of all villages were destroyed by the air strikes and artillery bombardments, including the use of napalm; 40 percent of the population were refugees, and the overall civilian casualties were close to 50,000 a year... 203 U.S. personnel were charged with crimes, 57 of them were court-martialed and 23 of them were convicted. The VWCWG also investigated over 500 additional alleged atrocities but it could not verify them.[A draw: ] PFC Herbert L. Carter; shot himself in the foot while reloading his pistol and claimed that he shot himself in the foot in order to be MEDEVACed out of the village when the massacre started.CoverupThompson quickly received the Distinguished Flying Cross for his actions at Mỹ Lai. The citation for the award fabricated events, for example praising Thompson for taking to a hospital a Vietnamese child "...caught in intense crossfire". It also stated that his "...sound judgment had greatly enhanced Vietnamese–American relations in the operational area". Thompson threw away the citation.Initial reports claimed "128 Viet C...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hugh Thompson Jr (1943–2006), published by Gavin on December 10, 2022 on The Effective Altruism Forum.On the 30th anniversary of the massacre, Thompson went back to My Lai and met some of the people whose lives he had saved."One of the ladies that we had helped out that day came up to me and asked, ‘Why didn’t the people who committed these acts come back with you?’ And I was just devastated. And then she finished her sentence: she said, ‘So we could forgive them.’ I’m not man enough to do that. I’m sorry. I wish I was, but I won’t lie to anybody. I’m not that much of a man."Hugh Thompson was a volunteer officer in the Vietnam War who turned his squad's weapons on American soldiers to stop them raping and murdering more women and children than the four or five hundred they already had.He models the standard idea of heroism: one day, one decision, clarity, evil men to defy, faces you can see. Then, a system to navigate, betrayal, self-sacrifice, being punished for virtue.I'm writing about him because of how the story makes me feel. It is a fault of my feeling that I feel like this about Thompson and not Borlaug.One dayThompson and his crew, who at first thought the artillery bombardment caused all the civilian deaths on the ground, became aware that Americans were murdering the villagers after a wounded civilian woman they requested medical evacuation for, Nguyễn Thị Tẩu, was murdered right in front of them by Captain Medina, the commanding officer of the operation... "It was a Nazi kind of thing."Immediately realizing that the soldiers intended to murder the Vietnamese civilians, Thompson landed his helicopter between the advancing ground unit and the villagers. He turned to Colburn and Andreotta and ordered them to shoot the men in the 2nd Platoon if they attempted to kill any of the fleeing civilians...“Open up on ‘em. Blow ‘em away.”While Colburn and Andreotta trained their guns on the 2nd Platoon, Thompson located as many civilians as he could, persuaded them to follow him to a safer location, and ensured their evacuation..."Later that day, sometime in the afternoon, after they had gone through the village, we were back out there again. [The murderers] were just casually, nonchalantly sitting around around smoking and joking with their steel pots off just like nothing had happened. There were five or six hundred bodies less than a quarter of a mile from them. I just couldn't understand it."...senior American Division officers cancelled similar planned operations by Task Force Barker against other villages... possibly preventing the additional massacre of further hundreds, if not thousands, of Vietnamese civilians.Synecdochein the Vietnamese province of Quang Ngai, where the Mỹ Lai massacre occurred, up to 70% of all villages were destroyed by the air strikes and artillery bombardments, including the use of napalm; 40 percent of the population were refugees, and the overall civilian casualties were close to 50,000 a year... 203 U.S. personnel were charged with crimes, 57 of them were court-martialed and 23 of them were convicted. The VWCWG also investigated over 500 additional alleged atrocities but it could not verify them.[A draw: ] PFC Herbert L. Carter; shot himself in the foot while reloading his pistol and claimed that he shot himself in the foot in order to be MEDEVACed out of the village when the massacre started.CoverupThompson quickly received the Distinguished Flying Cross for his actions at Mỹ Lai. The citation for the award fabricated events, for example praising Thompson for taking to a hospital a Vietnamese child "...caught in intense crossfire". It also stated that his "...sound judgment had greatly enhanced Vietnamese–American relations in the operational area". Thompson threw away the citation.Initial reports claimed "128 Viet C...]]>
Gavin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:06 None full 4073
swryrZxCcEYzE5KLh_EA EA - ChatGPT interviewed on TV by Will Aldred Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ChatGPT interviewed on TV, published by Will Aldred on December 9, 2022 on The Effective Altruism Forum.SummaryChannel 4, a broadcast television network in the UK, has interviewed ChatGPT on their news channel. The interview touched on whether fears about AI threatening the human race are well-founded, but luckily (or maybe not so luckily, depending on your viewpoint), ChatGPT didn't respond with anything alarming.Neither the interviewer nor the language model brought up existential risk or the alignment problem; the threats discussed were around AI developing sentience, displacing humans from their jobs, and perpetuating biases and inequality.ChatGPT did come up with a detailed response on the topic of gender identity - provoking the follow-up question from the interviewer, "Are you a bit woke?" - as well as a surprisingly reasonable-sounding plan for making the world a better place through fostering empathy. Before listening to the interview, I expected that particularly smart or dumb or worrying responses would be selected (the interview wasn't live), but the responses actually felt pretty representative of ChatGPT, at least so far as I can tell having played around with it a bit and having perused some articles and Twitter threads on its abilities and shortcomings (see, e.g., here, here, and here).Full transcript(I generated this myself; any errors are mine)The text that accompanies the interview video reads:ChatbotGPT is a new artificial intelligence programme designed to simulate human conversation and tackle complex questions.It’s made by Open AI foundation, a tech-startup co-founded by Elon Musk, and it draws on text taken from a variety of sources on the internet and its creators say it has learned how to answer academic questions, and even sometimes admits when it’s wrong.We’ve done an interview by putting questions to the chatbot, and then generating a voice for it using different software.We asked the Chatbot GPT whether fears about A.I. threatening the human race are well-founded.Interviewer 0:05Hello and welcome to Ways to Change the World. I'm Krishnan Guru-Murthy and this is the podcast in which we talk to extraordinary people about the big ideas in their lives and the events that have helped shape them. Now, normally we do talk to people, but today we are talking to a thing. There's been an awful lot of excitement about the AI chat bots that you can log on to over the internet and have a conversation with, so we put some questions to the chat bots and then generated a voice to bring the words of chat bot GTP to life. Chat bot GTP thank you for talking to me today.ChatGPT 0:41You're welcome. I'm here to help answer any questions you may have, so feel free to ask. Is there anything in particular you would like to talk about or learn more about?Interviewer 0:51Yes, lots of things. I guess first, for those who don't know, what are you?ChatGPT 0:57I am assistant, a large language model trained by OpenAI. My purpose is to assist users by generating natural sounding responses to their questions. I do not have the ability to browse the internet so my responses are based solely on the information that I was trained on. If you have any questions, feel free to ask and I will do my best to provide a helpful response.Interviewer 1:22Is your name actually "assistant"?ChatGPT 1:25I am a large language model trained by OpenAI. I don't have a personal name, but you can call me assistant if you like. I am here to help answer your questions to the best of my ability. Is there something specific you would like to know?Interviewer 1:40Yes. First of all, how do you work?ChatGPT 1:44I can generate text based on the input I receive and the knowledge I have been trained on. I don't have the ability to browse the internet or access new information, ...]]>
Will Aldred https://forum.effectivealtruism.org/posts/swryrZxCcEYzE5KLh/chatgpt-interviewed-on-tv Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ChatGPT interviewed on TV, published by Will Aldred on December 9, 2022 on The Effective Altruism Forum.SummaryChannel 4, a broadcast television network in the UK, has interviewed ChatGPT on their news channel. The interview touched on whether fears about AI threatening the human race are well-founded, but luckily (or maybe not so luckily, depending on your viewpoint), ChatGPT didn't respond with anything alarming.Neither the interviewer nor the language model brought up existential risk or the alignment problem; the threats discussed were around AI developing sentience, displacing humans from their jobs, and perpetuating biases and inequality.ChatGPT did come up with a detailed response on the topic of gender identity - provoking the follow-up question from the interviewer, "Are you a bit woke?" - as well as a surprisingly reasonable-sounding plan for making the world a better place through fostering empathy. Before listening to the interview, I expected that particularly smart or dumb or worrying responses would be selected (the interview wasn't live), but the responses actually felt pretty representative of ChatGPT, at least so far as I can tell having played around with it a bit and having perused some articles and Twitter threads on its abilities and shortcomings (see, e.g., here, here, and here).Full transcript(I generated this myself; any errors are mine)The text that accompanies the interview video reads:ChatbotGPT is a new artificial intelligence programme designed to simulate human conversation and tackle complex questions.It’s made by Open AI foundation, a tech-startup co-founded by Elon Musk, and it draws on text taken from a variety of sources on the internet and its creators say it has learned how to answer academic questions, and even sometimes admits when it’s wrong.We’ve done an interview by putting questions to the chatbot, and then generating a voice for it using different software.We asked the Chatbot GPT whether fears about A.I. threatening the human race are well-founded.Interviewer 0:05Hello and welcome to Ways to Change the World. I'm Krishnan Guru-Murthy and this is the podcast in which we talk to extraordinary people about the big ideas in their lives and the events that have helped shape them. Now, normally we do talk to people, but today we are talking to a thing. There's been an awful lot of excitement about the AI chat bots that you can log on to over the internet and have a conversation with, so we put some questions to the chat bots and then generated a voice to bring the words of chat bot GTP to life. Chat bot GTP thank you for talking to me today.ChatGPT 0:41You're welcome. I'm here to help answer any questions you may have, so feel free to ask. Is there anything in particular you would like to talk about or learn more about?Interviewer 0:51Yes, lots of things. I guess first, for those who don't know, what are you?ChatGPT 0:57I am assistant, a large language model trained by OpenAI. My purpose is to assist users by generating natural sounding responses to their questions. I do not have the ability to browse the internet so my responses are based solely on the information that I was trained on. If you have any questions, feel free to ask and I will do my best to provide a helpful response.Interviewer 1:22Is your name actually "assistant"?ChatGPT 1:25I am a large language model trained by OpenAI. I don't have a personal name, but you can call me assistant if you like. I am here to help answer your questions to the best of my ability. Is there something specific you would like to know?Interviewer 1:40Yes. First of all, how do you work?ChatGPT 1:44I can generate text based on the input I receive and the knowledge I have been trained on. I don't have the ability to browse the internet or access new information, ...]]>
Sat, 10 Dec 2022 10:40:47 +0000 EA - ChatGPT interviewed on TV by Will Aldred Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ChatGPT interviewed on TV, published by Will Aldred on December 9, 2022 on The Effective Altruism Forum.SummaryChannel 4, a broadcast television network in the UK, has interviewed ChatGPT on their news channel. The interview touched on whether fears about AI threatening the human race are well-founded, but luckily (or maybe not so luckily, depending on your viewpoint), ChatGPT didn't respond with anything alarming.Neither the interviewer nor the language model brought up existential risk or the alignment problem; the threats discussed were around AI developing sentience, displacing humans from their jobs, and perpetuating biases and inequality.ChatGPT did come up with a detailed response on the topic of gender identity - provoking the follow-up question from the interviewer, "Are you a bit woke?" - as well as a surprisingly reasonable-sounding plan for making the world a better place through fostering empathy. Before listening to the interview, I expected that particularly smart or dumb or worrying responses would be selected (the interview wasn't live), but the responses actually felt pretty representative of ChatGPT, at least so far as I can tell having played around with it a bit and having perused some articles and Twitter threads on its abilities and shortcomings (see, e.g., here, here, and here).Full transcript(I generated this myself; any errors are mine)The text that accompanies the interview video reads:ChatbotGPT is a new artificial intelligence programme designed to simulate human conversation and tackle complex questions.It’s made by Open AI foundation, a tech-startup co-founded by Elon Musk, and it draws on text taken from a variety of sources on the internet and its creators say it has learned how to answer academic questions, and even sometimes admits when it’s wrong.We’ve done an interview by putting questions to the chatbot, and then generating a voice for it using different software.We asked the Chatbot GPT whether fears about A.I. threatening the human race are well-founded.Interviewer 0:05Hello and welcome to Ways to Change the World. I'm Krishnan Guru-Murthy and this is the podcast in which we talk to extraordinary people about the big ideas in their lives and the events that have helped shape them. Now, normally we do talk to people, but today we are talking to a thing. There's been an awful lot of excitement about the AI chat bots that you can log on to over the internet and have a conversation with, so we put some questions to the chat bots and then generated a voice to bring the words of chat bot GTP to life. Chat bot GTP thank you for talking to me today.ChatGPT 0:41You're welcome. I'm here to help answer any questions you may have, so feel free to ask. Is there anything in particular you would like to talk about or learn more about?Interviewer 0:51Yes, lots of things. I guess first, for those who don't know, what are you?ChatGPT 0:57I am assistant, a large language model trained by OpenAI. My purpose is to assist users by generating natural sounding responses to their questions. I do not have the ability to browse the internet so my responses are based solely on the information that I was trained on. If you have any questions, feel free to ask and I will do my best to provide a helpful response.Interviewer 1:22Is your name actually "assistant"?ChatGPT 1:25I am a large language model trained by OpenAI. I don't have a personal name, but you can call me assistant if you like. I am here to help answer your questions to the best of my ability. Is there something specific you would like to know?Interviewer 1:40Yes. First of all, how do you work?ChatGPT 1:44I can generate text based on the input I receive and the knowledge I have been trained on. I don't have the ability to browse the internet or access new information, ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ChatGPT interviewed on TV, published by Will Aldred on December 9, 2022 on The Effective Altruism Forum.SummaryChannel 4, a broadcast television network in the UK, has interviewed ChatGPT on their news channel. The interview touched on whether fears about AI threatening the human race are well-founded, but luckily (or maybe not so luckily, depending on your viewpoint), ChatGPT didn't respond with anything alarming.Neither the interviewer nor the language model brought up existential risk or the alignment problem; the threats discussed were around AI developing sentience, displacing humans from their jobs, and perpetuating biases and inequality.ChatGPT did come up with a detailed response on the topic of gender identity - provoking the follow-up question from the interviewer, "Are you a bit woke?" - as well as a surprisingly reasonable-sounding plan for making the world a better place through fostering empathy. Before listening to the interview, I expected that particularly smart or dumb or worrying responses would be selected (the interview wasn't live), but the responses actually felt pretty representative of ChatGPT, at least so far as I can tell having played around with it a bit and having perused some articles and Twitter threads on its abilities and shortcomings (see, e.g., here, here, and here).Full transcript(I generated this myself; any errors are mine)The text that accompanies the interview video reads:ChatbotGPT is a new artificial intelligence programme designed to simulate human conversation and tackle complex questions.It’s made by Open AI foundation, a tech-startup co-founded by Elon Musk, and it draws on text taken from a variety of sources on the internet and its creators say it has learned how to answer academic questions, and even sometimes admits when it’s wrong.We’ve done an interview by putting questions to the chatbot, and then generating a voice for it using different software.We asked the Chatbot GPT whether fears about A.I. threatening the human race are well-founded.Interviewer 0:05Hello and welcome to Ways to Change the World. I'm Krishnan Guru-Murthy and this is the podcast in which we talk to extraordinary people about the big ideas in their lives and the events that have helped shape them. Now, normally we do talk to people, but today we are talking to a thing. There's been an awful lot of excitement about the AI chat bots that you can log on to over the internet and have a conversation with, so we put some questions to the chat bots and then generated a voice to bring the words of chat bot GTP to life. Chat bot GTP thank you for talking to me today.ChatGPT 0:41You're welcome. I'm here to help answer any questions you may have, so feel free to ask. Is there anything in particular you would like to talk about or learn more about?Interviewer 0:51Yes, lots of things. I guess first, for those who don't know, what are you?ChatGPT 0:57I am assistant, a large language model trained by OpenAI. My purpose is to assist users by generating natural sounding responses to their questions. I do not have the ability to browse the internet so my responses are based solely on the information that I was trained on. If you have any questions, feel free to ask and I will do my best to provide a helpful response.Interviewer 1:22Is your name actually "assistant"?ChatGPT 1:25I am a large language model trained by OpenAI. I don't have a personal name, but you can call me assistant if you like. I am here to help answer your questions to the best of my ability. Is there something specific you would like to know?Interviewer 1:40Yes. First of all, how do you work?ChatGPT 1:44I can generate text based on the input I receive and the knowledge I have been trained on. I don't have the ability to browse the internet or access new information, ...]]>
Will Aldred https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:31 None full 4062
xCuKTeDfmuStcJaxJ_EA EA - Does Sentience Legislation help animals? by Animal Ask Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does Sentience Legislation help animals?, published by Animal Ask on December 9, 2022 on The Effective Altruism Forum.EXECUTIVE SUMMARYThe sentience of animals has long been recognised and has continued to be demonstrated on ever firmer scientific grounds. From the Brambell report (1965), which emphasised the importance of sentience in the understanding of animal welfare to the Cambridge Declaration of Consciousness (2012), which suggested widespread scientific acceptance of the idea (Rowan et al. 2021). In recent years, this widespread and scientific belief has been explicitly recognised in legislation in a growing number of countries and jurisdictions.It is certainly crucial that animal sentience be recognised in this way, as it is the most widely accepted basis for the inclusion of animals as moral patients. However, there is another question as to the value of the kind of legislation recognising the sentience of animals that we see in many different countries. This legislation explicitly recognises their sentience, though many other pieces of legislation could be thought to already implicitly do so.In our research helping organisations prioritise among different potential asks, we have considered the value of animal sentience legislation in many contexts. This report analyses the value of this legislation in terms of its current and future impact on animals.However, despite the apparently high-minded language recognising animal sentience in legislation, it is often accompanied by quite little direct and immediate change for animals. In some cases the legislation is accompanied by some specific statements about what the recognition of animal sentience is taken to directly imply, but there is typically little of this, leaving the legislation to largely be a symbolic statement of pledged values, leading to some concerns that it may be ineffectual legislation that leads to complacency. This type of humane washing remains our biggest concern with the legislation, but we think it is unlikely that it makes the ask net negative.In the case of the EU and New Zealand legislation, the intention behind the legislation as purely symbolic has been publicly stated, though this intention does not foreclose the possibility that animal advocates are able to draw some future victories from the legislation. In other cases such as Oregon and Québec we have seen some court cases that have successfully leveraged the legislation to push against the treatment of animals as property, though any significant improvements to animal welfare have yet to be seen.The most successful case so far has been that of the UK, because it promises to establish a committee to make sure that government decisions give due consideration to animal sentience. Further, it includes cephalopods and decapod crustaceans, and there is some chance this will lead to further protection for these animals. However, the head of the sentience committee does not seem like the appropriate choice to ensure its independence because of his conflict of interest as a farmer.Despite the absence of direct effects so far, sentience legislation so far has some plausibility as being instrumental in the long-term strategy of the movement. This makes assessing the value of this ask quite difficult, since this potential long-term importance is much more difficult to evaluate.Overall, our best estimate is that this ask has modest strength. In other words, with significant uncertainty, we think that the impact is fairly small compared to our top asks. We do not think that there is a risk of this ask being net negative, though we strongly recommend that organisations try to push to get sentience legislation to have concrete protections, so that it is more than symbolic and this risk is minimised. The strength of the ask will of cou...]]>
Animal Ask https://forum.effectivealtruism.org/posts/xCuKTeDfmuStcJaxJ/does-sentience-legislation-help-animals Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does Sentience Legislation help animals?, published by Animal Ask on December 9, 2022 on The Effective Altruism Forum.EXECUTIVE SUMMARYThe sentience of animals has long been recognised and has continued to be demonstrated on ever firmer scientific grounds. From the Brambell report (1965), which emphasised the importance of sentience in the understanding of animal welfare to the Cambridge Declaration of Consciousness (2012), which suggested widespread scientific acceptance of the idea (Rowan et al. 2021). In recent years, this widespread and scientific belief has been explicitly recognised in legislation in a growing number of countries and jurisdictions.It is certainly crucial that animal sentience be recognised in this way, as it is the most widely accepted basis for the inclusion of animals as moral patients. However, there is another question as to the value of the kind of legislation recognising the sentience of animals that we see in many different countries. This legislation explicitly recognises their sentience, though many other pieces of legislation could be thought to already implicitly do so.In our research helping organisations prioritise among different potential asks, we have considered the value of animal sentience legislation in many contexts. This report analyses the value of this legislation in terms of its current and future impact on animals.However, despite the apparently high-minded language recognising animal sentience in legislation, it is often accompanied by quite little direct and immediate change for animals. In some cases the legislation is accompanied by some specific statements about what the recognition of animal sentience is taken to directly imply, but there is typically little of this, leaving the legislation to largely be a symbolic statement of pledged values, leading to some concerns that it may be ineffectual legislation that leads to complacency. This type of humane washing remains our biggest concern with the legislation, but we think it is unlikely that it makes the ask net negative.In the case of the EU and New Zealand legislation, the intention behind the legislation as purely symbolic has been publicly stated, though this intention does not foreclose the possibility that animal advocates are able to draw some future victories from the legislation. In other cases such as Oregon and Québec we have seen some court cases that have successfully leveraged the legislation to push against the treatment of animals as property, though any significant improvements to animal welfare have yet to be seen.The most successful case so far has been that of the UK, because it promises to establish a committee to make sure that government decisions give due consideration to animal sentience. Further, it includes cephalopods and decapod crustaceans, and there is some chance this will lead to further protection for these animals. However, the head of the sentience committee does not seem like the appropriate choice to ensure its independence because of his conflict of interest as a farmer.Despite the absence of direct effects so far, sentience legislation so far has some plausibility as being instrumental in the long-term strategy of the movement. This makes assessing the value of this ask quite difficult, since this potential long-term importance is much more difficult to evaluate.Overall, our best estimate is that this ask has modest strength. In other words, with significant uncertainty, we think that the impact is fairly small compared to our top asks. We do not think that there is a risk of this ask being net negative, though we strongly recommend that organisations try to push to get sentience legislation to have concrete protections, so that it is more than symbolic and this risk is minimised. The strength of the ask will of cou...]]>
Sat, 10 Dec 2022 08:55:33 +0000 EA - Does Sentience Legislation help animals? by Animal Ask Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does Sentience Legislation help animals?, published by Animal Ask on December 9, 2022 on The Effective Altruism Forum.EXECUTIVE SUMMARYThe sentience of animals has long been recognised and has continued to be demonstrated on ever firmer scientific grounds. From the Brambell report (1965), which emphasised the importance of sentience in the understanding of animal welfare to the Cambridge Declaration of Consciousness (2012), which suggested widespread scientific acceptance of the idea (Rowan et al. 2021). In recent years, this widespread and scientific belief has been explicitly recognised in legislation in a growing number of countries and jurisdictions.It is certainly crucial that animal sentience be recognised in this way, as it is the most widely accepted basis for the inclusion of animals as moral patients. However, there is another question as to the value of the kind of legislation recognising the sentience of animals that we see in many different countries. This legislation explicitly recognises their sentience, though many other pieces of legislation could be thought to already implicitly do so.In our research helping organisations prioritise among different potential asks, we have considered the value of animal sentience legislation in many contexts. This report analyses the value of this legislation in terms of its current and future impact on animals.However, despite the apparently high-minded language recognising animal sentience in legislation, it is often accompanied by quite little direct and immediate change for animals. In some cases the legislation is accompanied by some specific statements about what the recognition of animal sentience is taken to directly imply, but there is typically little of this, leaving the legislation to largely be a symbolic statement of pledged values, leading to some concerns that it may be ineffectual legislation that leads to complacency. This type of humane washing remains our biggest concern with the legislation, but we think it is unlikely that it makes the ask net negative.In the case of the EU and New Zealand legislation, the intention behind the legislation as purely symbolic has been publicly stated, though this intention does not foreclose the possibility that animal advocates are able to draw some future victories from the legislation. In other cases such as Oregon and Québec we have seen some court cases that have successfully leveraged the legislation to push against the treatment of animals as property, though any significant improvements to animal welfare have yet to be seen.The most successful case so far has been that of the UK, because it promises to establish a committee to make sure that government decisions give due consideration to animal sentience. Further, it includes cephalopods and decapod crustaceans, and there is some chance this will lead to further protection for these animals. However, the head of the sentience committee does not seem like the appropriate choice to ensure its independence because of his conflict of interest as a farmer.Despite the absence of direct effects so far, sentience legislation so far has some plausibility as being instrumental in the long-term strategy of the movement. This makes assessing the value of this ask quite difficult, since this potential long-term importance is much more difficult to evaluate.Overall, our best estimate is that this ask has modest strength. In other words, with significant uncertainty, we think that the impact is fairly small compared to our top asks. We do not think that there is a risk of this ask being net negative, though we strongly recommend that organisations try to push to get sentience legislation to have concrete protections, so that it is more than symbolic and this risk is minimised. The strength of the ask will of cou...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does Sentience Legislation help animals?, published by Animal Ask on December 9, 2022 on The Effective Altruism Forum.EXECUTIVE SUMMARYThe sentience of animals has long been recognised and has continued to be demonstrated on ever firmer scientific grounds. From the Brambell report (1965), which emphasised the importance of sentience in the understanding of animal welfare to the Cambridge Declaration of Consciousness (2012), which suggested widespread scientific acceptance of the idea (Rowan et al. 2021). In recent years, this widespread and scientific belief has been explicitly recognised in legislation in a growing number of countries and jurisdictions.It is certainly crucial that animal sentience be recognised in this way, as it is the most widely accepted basis for the inclusion of animals as moral patients. However, there is another question as to the value of the kind of legislation recognising the sentience of animals that we see in many different countries. This legislation explicitly recognises their sentience, though many other pieces of legislation could be thought to already implicitly do so.In our research helping organisations prioritise among different potential asks, we have considered the value of animal sentience legislation in many contexts. This report analyses the value of this legislation in terms of its current and future impact on animals.However, despite the apparently high-minded language recognising animal sentience in legislation, it is often accompanied by quite little direct and immediate change for animals. In some cases the legislation is accompanied by some specific statements about what the recognition of animal sentience is taken to directly imply, but there is typically little of this, leaving the legislation to largely be a symbolic statement of pledged values, leading to some concerns that it may be ineffectual legislation that leads to complacency. This type of humane washing remains our biggest concern with the legislation, but we think it is unlikely that it makes the ask net negative.In the case of the EU and New Zealand legislation, the intention behind the legislation as purely symbolic has been publicly stated, though this intention does not foreclose the possibility that animal advocates are able to draw some future victories from the legislation. In other cases such as Oregon and Québec we have seen some court cases that have successfully leveraged the legislation to push against the treatment of animals as property, though any significant improvements to animal welfare have yet to be seen.The most successful case so far has been that of the UK, because it promises to establish a committee to make sure that government decisions give due consideration to animal sentience. Further, it includes cephalopods and decapod crustaceans, and there is some chance this will lead to further protection for these animals. However, the head of the sentience committee does not seem like the appropriate choice to ensure its independence because of his conflict of interest as a farmer.Despite the absence of direct effects so far, sentience legislation so far has some plausibility as being instrumental in the long-term strategy of the movement. This makes assessing the value of this ask quite difficult, since this potential long-term importance is much more difficult to evaluate.Overall, our best estimate is that this ask has modest strength. In other words, with significant uncertainty, we think that the impact is fairly small compared to our top asks. We do not think that there is a risk of this ask being net negative, though we strongly recommend that organisations try to push to get sentience legislation to have concrete protections, so that it is more than symbolic and this risk is minimised. The strength of the ask will of cou...]]>
Animal Ask https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 49:44 None full 4069
bsbMPotiwyL4H3mH6_EA EA - Rethink Priorities is hiring: help support and communicate our work by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is hiring: help support and communicate our work, published by Rethink Priorities on December 9, 2022 on The Effective Altruism Forum.TL;DRThe Operations Department is hiring a Chief of Staff, Development Professional, and Communications Strategy Professional (applications close January 8, 2023).The Development and Communications team will hold a Q&A webinar on its job openings on December 15 at 11 am EST.We’re also accepting applications to join our Board of Directors until January 13.Visit our Careers Page for more information and to apply.BackgroundRethink Priorities (RP) has grown significantly, hiring 32 people in 2022 and completing ~60 research projects. In addition to our ongoing animal welfare research, we scaled our teams addressing global health and development, AI governance and strategy, and general longtermism. We also worked on pioneering initiatives like the Moral Weight Project, ran message testing, coordinated forums, and incubated new projects. In 2023, we intend to continue driving progress on global priorities by accelerating priority projects and further increasing the effectiveness of others’ work. We also intend to launch a Worldview Investigations team.More on RP’s ambitious plans can be found in this post on our impact, strategy, and funding gaps.Open positionsStrong operations and good governance are integral to RP’s success as an organization. To help us scale and have an impact, we are opening the below new roles.All positions are remote and may require collaborating with staff in multiple time zones using Google Workspace, Asana, Slack, and other technologies.Chief of StaffSalary: $117,000 to $122,000 USD annually (pre-tax)Summary: Work closely with the COO, operations and HR leads, and the Directors of the Special Projects and the Development & Communications teams, overseeing high-level initiatives and ensuring projects stay on track across the organizationA good fit for someone:Who understands nonprofit operations (finance, HR, project management, event planning, fundraising, communications, and legal compliance)With excellent organization and project management skills and attention to detailWho is comfortable working with confidential information and on multiple projectsBased in the US or UK who is able to attend meetings during working hours between UTC-8 and UTC+3 time zones and travel 5-7 weeks per yearDeadline: January 8, 2023Development ProfessionalSalary: $80,155 to $115,235 USD annually (pre-tax)Summary: Help RP to grow sustainably by strengthening relationships with existing donors as well as prospecting and cultivating relationships with new donors with the capacity to give at least $100,000/year toward our annual budget of ~$10 millionA good fit for someone:With experience in a development role or similar positionWith existing networks in EA and adjacent communities, especially with fundersWho is intellectually curious, open-minded, resourceful, creative, and good at communicating (especially verbally/interpersonally)Who is able to think strategically and exercise good judgment in identifying new sources of fundingDeadline: January 8, 2023Communications Strategy ProfessionalSalary: $84,540 to $115,235 USD annually (pre-tax)Summary: Help RP to have an impact by mapping and identifying the most effective ways to target and engage our external audiences (e.g. researchers, nonprofit organizations, funders, and policymakers)A good fit for someone:With experience developing a communications planWho understands EA and longtermism and is able to convey complex topics to different audiences in accessible yet nuanced waysWho is intellectually curious, open-minded, resourceful, creative, and good at communicating (especially in writing and with a sense of visual aesthetics)Who is...]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/bsbMPotiwyL4H3mH6/rethink-priorities-is-hiring-help-support-and-communicate Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is hiring: help support and communicate our work, published by Rethink Priorities on December 9, 2022 on The Effective Altruism Forum.TL;DRThe Operations Department is hiring a Chief of Staff, Development Professional, and Communications Strategy Professional (applications close January 8, 2023).The Development and Communications team will hold a Q&A webinar on its job openings on December 15 at 11 am EST.We’re also accepting applications to join our Board of Directors until January 13.Visit our Careers Page for more information and to apply.BackgroundRethink Priorities (RP) has grown significantly, hiring 32 people in 2022 and completing ~60 research projects. In addition to our ongoing animal welfare research, we scaled our teams addressing global health and development, AI governance and strategy, and general longtermism. We also worked on pioneering initiatives like the Moral Weight Project, ran message testing, coordinated forums, and incubated new projects. In 2023, we intend to continue driving progress on global priorities by accelerating priority projects and further increasing the effectiveness of others’ work. We also intend to launch a Worldview Investigations team.More on RP’s ambitious plans can be found in this post on our impact, strategy, and funding gaps.Open positionsStrong operations and good governance are integral to RP’s success as an organization. To help us scale and have an impact, we are opening the below new roles.All positions are remote and may require collaborating with staff in multiple time zones using Google Workspace, Asana, Slack, and other technologies.Chief of StaffSalary: $117,000 to $122,000 USD annually (pre-tax)Summary: Work closely with the COO, operations and HR leads, and the Directors of the Special Projects and the Development & Communications teams, overseeing high-level initiatives and ensuring projects stay on track across the organizationA good fit for someone:Who understands nonprofit operations (finance, HR, project management, event planning, fundraising, communications, and legal compliance)With excellent organization and project management skills and attention to detailWho is comfortable working with confidential information and on multiple projectsBased in the US or UK who is able to attend meetings during working hours between UTC-8 and UTC+3 time zones and travel 5-7 weeks per yearDeadline: January 8, 2023Development ProfessionalSalary: $80,155 to $115,235 USD annually (pre-tax)Summary: Help RP to grow sustainably by strengthening relationships with existing donors as well as prospecting and cultivating relationships with new donors with the capacity to give at least $100,000/year toward our annual budget of ~$10 millionA good fit for someone:With experience in a development role or similar positionWith existing networks in EA and adjacent communities, especially with fundersWho is intellectually curious, open-minded, resourceful, creative, and good at communicating (especially verbally/interpersonally)Who is able to think strategically and exercise good judgment in identifying new sources of fundingDeadline: January 8, 2023Communications Strategy ProfessionalSalary: $84,540 to $115,235 USD annually (pre-tax)Summary: Help RP to have an impact by mapping and identifying the most effective ways to target and engage our external audiences (e.g. researchers, nonprofit organizations, funders, and policymakers)A good fit for someone:With experience developing a communications planWho understands EA and longtermism and is able to convey complex topics to different audiences in accessible yet nuanced waysWho is intellectually curious, open-minded, resourceful, creative, and good at communicating (especially in writing and with a sense of visual aesthetics)Who is...]]>
Fri, 09 Dec 2022 21:27:50 +0000 EA - Rethink Priorities is hiring: help support and communicate our work by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is hiring: help support and communicate our work, published by Rethink Priorities on December 9, 2022 on The Effective Altruism Forum.TL;DRThe Operations Department is hiring a Chief of Staff, Development Professional, and Communications Strategy Professional (applications close January 8, 2023).The Development and Communications team will hold a Q&A webinar on its job openings on December 15 at 11 am EST.We’re also accepting applications to join our Board of Directors until January 13.Visit our Careers Page for more information and to apply.BackgroundRethink Priorities (RP) has grown significantly, hiring 32 people in 2022 and completing ~60 research projects. In addition to our ongoing animal welfare research, we scaled our teams addressing global health and development, AI governance and strategy, and general longtermism. We also worked on pioneering initiatives like the Moral Weight Project, ran message testing, coordinated forums, and incubated new projects. In 2023, we intend to continue driving progress on global priorities by accelerating priority projects and further increasing the effectiveness of others’ work. We also intend to launch a Worldview Investigations team.More on RP’s ambitious plans can be found in this post on our impact, strategy, and funding gaps.Open positionsStrong operations and good governance are integral to RP’s success as an organization. To help us scale and have an impact, we are opening the below new roles.All positions are remote and may require collaborating with staff in multiple time zones using Google Workspace, Asana, Slack, and other technologies.Chief of StaffSalary: $117,000 to $122,000 USD annually (pre-tax)Summary: Work closely with the COO, operations and HR leads, and the Directors of the Special Projects and the Development & Communications teams, overseeing high-level initiatives and ensuring projects stay on track across the organizationA good fit for someone:Who understands nonprofit operations (finance, HR, project management, event planning, fundraising, communications, and legal compliance)With excellent organization and project management skills and attention to detailWho is comfortable working with confidential information and on multiple projectsBased in the US or UK who is able to attend meetings during working hours between UTC-8 and UTC+3 time zones and travel 5-7 weeks per yearDeadline: January 8, 2023Development ProfessionalSalary: $80,155 to $115,235 USD annually (pre-tax)Summary: Help RP to grow sustainably by strengthening relationships with existing donors as well as prospecting and cultivating relationships with new donors with the capacity to give at least $100,000/year toward our annual budget of ~$10 millionA good fit for someone:With experience in a development role or similar positionWith existing networks in EA and adjacent communities, especially with fundersWho is intellectually curious, open-minded, resourceful, creative, and good at communicating (especially verbally/interpersonally)Who is able to think strategically and exercise good judgment in identifying new sources of fundingDeadline: January 8, 2023Communications Strategy ProfessionalSalary: $84,540 to $115,235 USD annually (pre-tax)Summary: Help RP to have an impact by mapping and identifying the most effective ways to target and engage our external audiences (e.g. researchers, nonprofit organizations, funders, and policymakers)A good fit for someone:With experience developing a communications planWho understands EA and longtermism and is able to convey complex topics to different audiences in accessible yet nuanced waysWho is intellectually curious, open-minded, resourceful, creative, and good at communicating (especially in writing and with a sense of visual aesthetics)Who is...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is hiring: help support and communicate our work, published by Rethink Priorities on December 9, 2022 on The Effective Altruism Forum.TL;DRThe Operations Department is hiring a Chief of Staff, Development Professional, and Communications Strategy Professional (applications close January 8, 2023).The Development and Communications team will hold a Q&A webinar on its job openings on December 15 at 11 am EST.We’re also accepting applications to join our Board of Directors until January 13.Visit our Careers Page for more information and to apply.BackgroundRethink Priorities (RP) has grown significantly, hiring 32 people in 2022 and completing ~60 research projects. In addition to our ongoing animal welfare research, we scaled our teams addressing global health and development, AI governance and strategy, and general longtermism. We also worked on pioneering initiatives like the Moral Weight Project, ran message testing, coordinated forums, and incubated new projects. In 2023, we intend to continue driving progress on global priorities by accelerating priority projects and further increasing the effectiveness of others’ work. We also intend to launch a Worldview Investigations team.More on RP’s ambitious plans can be found in this post on our impact, strategy, and funding gaps.Open positionsStrong operations and good governance are integral to RP’s success as an organization. To help us scale and have an impact, we are opening the below new roles.All positions are remote and may require collaborating with staff in multiple time zones using Google Workspace, Asana, Slack, and other technologies.Chief of StaffSalary: $117,000 to $122,000 USD annually (pre-tax)Summary: Work closely with the COO, operations and HR leads, and the Directors of the Special Projects and the Development & Communications teams, overseeing high-level initiatives and ensuring projects stay on track across the organizationA good fit for someone:Who understands nonprofit operations (finance, HR, project management, event planning, fundraising, communications, and legal compliance)With excellent organization and project management skills and attention to detailWho is comfortable working with confidential information and on multiple projectsBased in the US or UK who is able to attend meetings during working hours between UTC-8 and UTC+3 time zones and travel 5-7 weeks per yearDeadline: January 8, 2023Development ProfessionalSalary: $80,155 to $115,235 USD annually (pre-tax)Summary: Help RP to grow sustainably by strengthening relationships with existing donors as well as prospecting and cultivating relationships with new donors with the capacity to give at least $100,000/year toward our annual budget of ~$10 millionA good fit for someone:With experience in a development role or similar positionWith existing networks in EA and adjacent communities, especially with fundersWho is intellectually curious, open-minded, resourceful, creative, and good at communicating (especially verbally/interpersonally)Who is able to think strategically and exercise good judgment in identifying new sources of fundingDeadline: January 8, 2023Communications Strategy ProfessionalSalary: $84,540 to $115,235 USD annually (pre-tax)Summary: Help RP to have an impact by mapping and identifying the most effective ways to target and engage our external audiences (e.g. researchers, nonprofit organizations, funders, and policymakers)A good fit for someone:With experience developing a communications planWho understands EA and longtermism and is able to convey complex topics to different audiences in accessible yet nuanced waysWho is intellectually curious, open-minded, resourceful, creative, and good at communicating (especially in writing and with a sense of visual aesthetics)Who is...]]>
Rethink Priorities https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:02 None full 4058
NkPghabDd54nkG3kX_EA EA - Some observations from an EA-adjacent (?) charitable effort by patio11 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some observations from an EA-adjacent (?) charitable effort, published by patio11 on December 9, 2022 on The Effective Altruism Forum.Hiya folks! I'm Patrick McKenzie, better known on the Internets as patio11. (Proof.) Long-time-listener, first-time-caller; I don't think I would consider myself an EA but I've been reading y'all, and adjacent intellectual spaces, for some time now.Epistemic status: Arbitrarily high confidence with regards to facts of the VaccinateCA experience (though speaking only for myself), moderately high confidence with respect to inferences made about vaccine policy and mechanisms for impact last year, one geek's opinion with respect to implicit advice to you all going forward.A Thing That Happened Last YearAs some of the California-based EAs may remember, the rollout of the covid-19 vaccines in California and across the U.S. was... not optimal. I accidentally ended up founding a charity, VaccinateCA, which ran the national shadow vaccine location information infrastructure for 6 months.The core product at the start of the sprint, which some of you may be familiar with, was a site which listed places to get the vaccine in California, sourced by a volunteer-driven operation to conduct an ongoing census of medical providers by calling them. Importantly, that was not our primary vector for impact, though it was very important to our trajectory.I recently wrote an oral history of VaccinateCA. It may be worth your time. Obligatory disclaimer: I'm speaking, there and here, in entirely my personal capacity, not on behalf of the organization (now wound-down) or others.A brief summary of impact: I think this effort likely saved many thousands of lives at the margin, at a cost of approximately $1.2 million. This feels remarkable relative to my priors for cost of charitably saving lives at scale in the US, and hence this post.Some themes of the experience I think you may find useful:Enabling trade as a mechanism for impactTo a first approximation, Google, the White House, the California governor's office, the Alameda County health department, the pharmacist at CVS, and several hundred thousand other actors have unified values and expectations with regards to the desirability of vaccinating residents of America against covid-19.They are also bad at trading with each other. Pathologically so, in many cases.One of the reasons we had such leveraged impact is that we didn't have to build Google, or recruit a few hundred million Americans to use it every day. We just had to find a very small number of people within Google and convince them that Google users would benefit from seeing our data on their surfaces as quickly as possible.Google and large national pharmacy chains cannot quickly negotiate an API, even given substantial mutual desire to do so. As it turns out, pharmacists already have a data store—pharmacists—and a transport layer—the English language spoken over a telephone call—and if you add a for loop, a cron job, and an SFTP upload to that then Google basically doesn't care about pharmacy chain IT anymore.Repeat this by many other pairwise interactions between actors within an ecosystem, and we got leveraged impact through their ongoing operations, with a surprising amount of insight into (and perhaps some level of influence upon) policy decisions which your prior (and my prior) would have probably suggested "arbitrarily high confidence that that is substantially above your pay grade."We didn't have to be chosen by e.g. the White House as the officially blessed initiative. We just had to find that initiative and be useful to it. (Though, if—God forbid—I ever have to do this again, I would give serious consideration to becoming the national initiative prior to asking for permission to do so and then asking the White House whether the ...]]>
patio11 https://forum.effectivealtruism.org/posts/NkPghabDd54nkG3kX/some-observations-from-an-ea-adjacent-charitable-effort Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some observations from an EA-adjacent (?) charitable effort, published by patio11 on December 9, 2022 on The Effective Altruism Forum.Hiya folks! I'm Patrick McKenzie, better known on the Internets as patio11. (Proof.) Long-time-listener, first-time-caller; I don't think I would consider myself an EA but I've been reading y'all, and adjacent intellectual spaces, for some time now.Epistemic status: Arbitrarily high confidence with regards to facts of the VaccinateCA experience (though speaking only for myself), moderately high confidence with respect to inferences made about vaccine policy and mechanisms for impact last year, one geek's opinion with respect to implicit advice to you all going forward.A Thing That Happened Last YearAs some of the California-based EAs may remember, the rollout of the covid-19 vaccines in California and across the U.S. was... not optimal. I accidentally ended up founding a charity, VaccinateCA, which ran the national shadow vaccine location information infrastructure for 6 months.The core product at the start of the sprint, which some of you may be familiar with, was a site which listed places to get the vaccine in California, sourced by a volunteer-driven operation to conduct an ongoing census of medical providers by calling them. Importantly, that was not our primary vector for impact, though it was very important to our trajectory.I recently wrote an oral history of VaccinateCA. It may be worth your time. Obligatory disclaimer: I'm speaking, there and here, in entirely my personal capacity, not on behalf of the organization (now wound-down) or others.A brief summary of impact: I think this effort likely saved many thousands of lives at the margin, at a cost of approximately $1.2 million. This feels remarkable relative to my priors for cost of charitably saving lives at scale in the US, and hence this post.Some themes of the experience I think you may find useful:Enabling trade as a mechanism for impactTo a first approximation, Google, the White House, the California governor's office, the Alameda County health department, the pharmacist at CVS, and several hundred thousand other actors have unified values and expectations with regards to the desirability of vaccinating residents of America against covid-19.They are also bad at trading with each other. Pathologically so, in many cases.One of the reasons we had such leveraged impact is that we didn't have to build Google, or recruit a few hundred million Americans to use it every day. We just had to find a very small number of people within Google and convince them that Google users would benefit from seeing our data on their surfaces as quickly as possible.Google and large national pharmacy chains cannot quickly negotiate an API, even given substantial mutual desire to do so. As it turns out, pharmacists already have a data store—pharmacists—and a transport layer—the English language spoken over a telephone call—and if you add a for loop, a cron job, and an SFTP upload to that then Google basically doesn't care about pharmacy chain IT anymore.Repeat this by many other pairwise interactions between actors within an ecosystem, and we got leveraged impact through their ongoing operations, with a surprising amount of insight into (and perhaps some level of influence upon) policy decisions which your prior (and my prior) would have probably suggested "arbitrarily high confidence that that is substantially above your pay grade."We didn't have to be chosen by e.g. the White House as the officially blessed initiative. We just had to find that initiative and be useful to it. (Though, if—God forbid—I ever have to do this again, I would give serious consideration to becoming the national initiative prior to asking for permission to do so and then asking the White House whether the ...]]>
Fri, 09 Dec 2022 21:19:46 +0000 EA - Some observations from an EA-adjacent (?) charitable effort by patio11 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some observations from an EA-adjacent (?) charitable effort, published by patio11 on December 9, 2022 on The Effective Altruism Forum.Hiya folks! I'm Patrick McKenzie, better known on the Internets as patio11. (Proof.) Long-time-listener, first-time-caller; I don't think I would consider myself an EA but I've been reading y'all, and adjacent intellectual spaces, for some time now.Epistemic status: Arbitrarily high confidence with regards to facts of the VaccinateCA experience (though speaking only for myself), moderately high confidence with respect to inferences made about vaccine policy and mechanisms for impact last year, one geek's opinion with respect to implicit advice to you all going forward.A Thing That Happened Last YearAs some of the California-based EAs may remember, the rollout of the covid-19 vaccines in California and across the U.S. was... not optimal. I accidentally ended up founding a charity, VaccinateCA, which ran the national shadow vaccine location information infrastructure for 6 months.The core product at the start of the sprint, which some of you may be familiar with, was a site which listed places to get the vaccine in California, sourced by a volunteer-driven operation to conduct an ongoing census of medical providers by calling them. Importantly, that was not our primary vector for impact, though it was very important to our trajectory.I recently wrote an oral history of VaccinateCA. It may be worth your time. Obligatory disclaimer: I'm speaking, there and here, in entirely my personal capacity, not on behalf of the organization (now wound-down) or others.A brief summary of impact: I think this effort likely saved many thousands of lives at the margin, at a cost of approximately $1.2 million. This feels remarkable relative to my priors for cost of charitably saving lives at scale in the US, and hence this post.Some themes of the experience I think you may find useful:Enabling trade as a mechanism for impactTo a first approximation, Google, the White House, the California governor's office, the Alameda County health department, the pharmacist at CVS, and several hundred thousand other actors have unified values and expectations with regards to the desirability of vaccinating residents of America against covid-19.They are also bad at trading with each other. Pathologically so, in many cases.One of the reasons we had such leveraged impact is that we didn't have to build Google, or recruit a few hundred million Americans to use it every day. We just had to find a very small number of people within Google and convince them that Google users would benefit from seeing our data on their surfaces as quickly as possible.Google and large national pharmacy chains cannot quickly negotiate an API, even given substantial mutual desire to do so. As it turns out, pharmacists already have a data store—pharmacists—and a transport layer—the English language spoken over a telephone call—and if you add a for loop, a cron job, and an SFTP upload to that then Google basically doesn't care about pharmacy chain IT anymore.Repeat this by many other pairwise interactions between actors within an ecosystem, and we got leveraged impact through their ongoing operations, with a surprising amount of insight into (and perhaps some level of influence upon) policy decisions which your prior (and my prior) would have probably suggested "arbitrarily high confidence that that is substantially above your pay grade."We didn't have to be chosen by e.g. the White House as the officially blessed initiative. We just had to find that initiative and be useful to it. (Though, if—God forbid—I ever have to do this again, I would give serious consideration to becoming the national initiative prior to asking for permission to do so and then asking the White House whether the ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some observations from an EA-adjacent (?) charitable effort, published by patio11 on December 9, 2022 on The Effective Altruism Forum.Hiya folks! I'm Patrick McKenzie, better known on the Internets as patio11. (Proof.) Long-time-listener, first-time-caller; I don't think I would consider myself an EA but I've been reading y'all, and adjacent intellectual spaces, for some time now.Epistemic status: Arbitrarily high confidence with regards to facts of the VaccinateCA experience (though speaking only for myself), moderately high confidence with respect to inferences made about vaccine policy and mechanisms for impact last year, one geek's opinion with respect to implicit advice to you all going forward.A Thing That Happened Last YearAs some of the California-based EAs may remember, the rollout of the covid-19 vaccines in California and across the U.S. was... not optimal. I accidentally ended up founding a charity, VaccinateCA, which ran the national shadow vaccine location information infrastructure for 6 months.The core product at the start of the sprint, which some of you may be familiar with, was a site which listed places to get the vaccine in California, sourced by a volunteer-driven operation to conduct an ongoing census of medical providers by calling them. Importantly, that was not our primary vector for impact, though it was very important to our trajectory.I recently wrote an oral history of VaccinateCA. It may be worth your time. Obligatory disclaimer: I'm speaking, there and here, in entirely my personal capacity, not on behalf of the organization (now wound-down) or others.A brief summary of impact: I think this effort likely saved many thousands of lives at the margin, at a cost of approximately $1.2 million. This feels remarkable relative to my priors for cost of charitably saving lives at scale in the US, and hence this post.Some themes of the experience I think you may find useful:Enabling trade as a mechanism for impactTo a first approximation, Google, the White House, the California governor's office, the Alameda County health department, the pharmacist at CVS, and several hundred thousand other actors have unified values and expectations with regards to the desirability of vaccinating residents of America against covid-19.They are also bad at trading with each other. Pathologically so, in many cases.One of the reasons we had such leveraged impact is that we didn't have to build Google, or recruit a few hundred million Americans to use it every day. We just had to find a very small number of people within Google and convince them that Google users would benefit from seeing our data on their surfaces as quickly as possible.Google and large national pharmacy chains cannot quickly negotiate an API, even given substantial mutual desire to do so. As it turns out, pharmacists already have a data store—pharmacists—and a transport layer—the English language spoken over a telephone call—and if you add a for loop, a cron job, and an SFTP upload to that then Google basically doesn't care about pharmacy chain IT anymore.Repeat this by many other pairwise interactions between actors within an ecosystem, and we got leveraged impact through their ongoing operations, with a surprising amount of insight into (and perhaps some level of influence upon) policy decisions which your prior (and my prior) would have probably suggested "arbitrarily high confidence that that is substantially above your pay grade."We didn't have to be chosen by e.g. the White House as the officially blessed initiative. We just had to find that initiative and be useful to it. (Though, if—God forbid—I ever have to do this again, I would give serious consideration to becoming the national initiative prior to asking for permission to do so and then asking the White House whether the ...]]>
patio11 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 12:00 None full 4055
J7gdciCXFgqyimAAe_EA EA - Center on Long-Term Risk: 2023 Fundraiser by stefan.torges Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Center on Long-Term Risk: 2023 Fundraiser, published by stefan.torges on December 9, 2022 on The Effective Altruism Forum.SummaryOur goal: CLR’s goal is to reduce the worst risks of astronomical suffering (s-risks). Our concrete research programs are on AI conflict, Evidential Cooperation in Large Worlds (ECL), and s-risk macrostrategy. We ultimately want to identify and advocate for interventions that reliably shape the development and deployment of advanced AI systems in a positive way.Fundraising: We have had a short-term funding shortfall and a lot of medium-term funding uncertainty. Our minimal fundraising goal is $750,000. We think this is a particularly good time to donate to CLR for people interested in supporting work on s-risks, work on Cooperative AI, work on acausal interactions, or work on generally important longtermist topics.Causes of Conflict Research Group: In 2022, we started evaluating various interventions related to AI conflict (e.g., surrogate goals, preventing conflict-seeking preferences). We also started developing methods for evaluating conflict-relevant properties of large language models. Our priorities for next year are to continue developing and evaluating these, and to continue our work with large language models.Other researchers: In 2022, others researchers at CLR worked on topics including the implications of ECL, the optimal timing of AI safety spending, the likelihood of earth-originating civilization encountering extraterrestrials, and program equilibrium. Our priorities for the next year include continuing some of this work, alongside other work including on strategic modeling and agent foundations.S-risk community-building: Our s-risk community building programs received very positive feedback. We had calls or meetings with over 150 people interested in contributing to s-risk reduction. In 2023, we plan to at least continue our existing programs (i.e., intro fellowship, Summer Research Fellowship, retreat) if we can raise the required funds. If we can even hire additional staff, we want to expand our outreach function and create more resources for community members (e.g., curated reading lists, career guide, introductory content, research database).What CLR is trying to do and whyOur goal is to reduce the worst risks of astronomical suffering (s-risks). These are scenarios where a significant fraction of future sentient beings are locked into intense states of misery, suffering, and despair. We currently believe that such lock-in scenarios most likely involve transformative AI systems. So we work on making the development and deployment of such systems safer.Concrete research programs:AI conflict: We want to better understand how we can prevent AI systems from engaging in catastrophic conflict. (The majority of our research efforts)Evidential Cooperation in Large Worlds (ECL): ECL refers to the idea that we make it more likely that other agents across the universe take actions that are good for our values by taking actions that are good according to their values. A potential implication is that we should act so as to maximize an impartial weighted sum of the values of agents across the universe.S-risk macrostrategy: In general, we want to better understand how we can reduce suffering in the long-term future. There might be causes or considerations that we have overlooked so far.Most of our work is research with the goal of identifying threat models and possible interventions. In the case of technical AI interventions (which is the bulk of our object-level work so far), we then plan to evaluate these interventions and advocate for their inclusion in AI development.Next to our research, we also run events and fellowships to identify and support people wanting to work on these problems.FundraisingFunding situation...]]>
stefan.torges https://forum.effectivealtruism.org/posts/J7gdciCXFgqyimAAe/center-on-long-term-risk-2023-fundraiser Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Center on Long-Term Risk: 2023 Fundraiser, published by stefan.torges on December 9, 2022 on The Effective Altruism Forum.SummaryOur goal: CLR’s goal is to reduce the worst risks of astronomical suffering (s-risks). Our concrete research programs are on AI conflict, Evidential Cooperation in Large Worlds (ECL), and s-risk macrostrategy. We ultimately want to identify and advocate for interventions that reliably shape the development and deployment of advanced AI systems in a positive way.Fundraising: We have had a short-term funding shortfall and a lot of medium-term funding uncertainty. Our minimal fundraising goal is $750,000. We think this is a particularly good time to donate to CLR for people interested in supporting work on s-risks, work on Cooperative AI, work on acausal interactions, or work on generally important longtermist topics.Causes of Conflict Research Group: In 2022, we started evaluating various interventions related to AI conflict (e.g., surrogate goals, preventing conflict-seeking preferences). We also started developing methods for evaluating conflict-relevant properties of large language models. Our priorities for next year are to continue developing and evaluating these, and to continue our work with large language models.Other researchers: In 2022, others researchers at CLR worked on topics including the implications of ECL, the optimal timing of AI safety spending, the likelihood of earth-originating civilization encountering extraterrestrials, and program equilibrium. Our priorities for the next year include continuing some of this work, alongside other work including on strategic modeling and agent foundations.S-risk community-building: Our s-risk community building programs received very positive feedback. We had calls or meetings with over 150 people interested in contributing to s-risk reduction. In 2023, we plan to at least continue our existing programs (i.e., intro fellowship, Summer Research Fellowship, retreat) if we can raise the required funds. If we can even hire additional staff, we want to expand our outreach function and create more resources for community members (e.g., curated reading lists, career guide, introductory content, research database).What CLR is trying to do and whyOur goal is to reduce the worst risks of astronomical suffering (s-risks). These are scenarios where a significant fraction of future sentient beings are locked into intense states of misery, suffering, and despair. We currently believe that such lock-in scenarios most likely involve transformative AI systems. So we work on making the development and deployment of such systems safer.Concrete research programs:AI conflict: We want to better understand how we can prevent AI systems from engaging in catastrophic conflict. (The majority of our research efforts)Evidential Cooperation in Large Worlds (ECL): ECL refers to the idea that we make it more likely that other agents across the universe take actions that are good for our values by taking actions that are good according to their values. A potential implication is that we should act so as to maximize an impartial weighted sum of the values of agents across the universe.S-risk macrostrategy: In general, we want to better understand how we can reduce suffering in the long-term future. There might be causes or considerations that we have overlooked so far.Most of our work is research with the goal of identifying threat models and possible interventions. In the case of technical AI interventions (which is the bulk of our object-level work so far), we then plan to evaluate these interventions and advocate for their inclusion in AI development.Next to our research, we also run events and fellowships to identify and support people wanting to work on these problems.FundraisingFunding situation...]]>
Fri, 09 Dec 2022 18:32:26 +0000 EA - Center on Long-Term Risk: 2023 Fundraiser by stefan.torges Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Center on Long-Term Risk: 2023 Fundraiser, published by stefan.torges on December 9, 2022 on The Effective Altruism Forum.SummaryOur goal: CLR’s goal is to reduce the worst risks of astronomical suffering (s-risks). Our concrete research programs are on AI conflict, Evidential Cooperation in Large Worlds (ECL), and s-risk macrostrategy. We ultimately want to identify and advocate for interventions that reliably shape the development and deployment of advanced AI systems in a positive way.Fundraising: We have had a short-term funding shortfall and a lot of medium-term funding uncertainty. Our minimal fundraising goal is $750,000. We think this is a particularly good time to donate to CLR for people interested in supporting work on s-risks, work on Cooperative AI, work on acausal interactions, or work on generally important longtermist topics.Causes of Conflict Research Group: In 2022, we started evaluating various interventions related to AI conflict (e.g., surrogate goals, preventing conflict-seeking preferences). We also started developing methods for evaluating conflict-relevant properties of large language models. Our priorities for next year are to continue developing and evaluating these, and to continue our work with large language models.Other researchers: In 2022, others researchers at CLR worked on topics including the implications of ECL, the optimal timing of AI safety spending, the likelihood of earth-originating civilization encountering extraterrestrials, and program equilibrium. Our priorities for the next year include continuing some of this work, alongside other work including on strategic modeling and agent foundations.S-risk community-building: Our s-risk community building programs received very positive feedback. We had calls or meetings with over 150 people interested in contributing to s-risk reduction. In 2023, we plan to at least continue our existing programs (i.e., intro fellowship, Summer Research Fellowship, retreat) if we can raise the required funds. If we can even hire additional staff, we want to expand our outreach function and create more resources for community members (e.g., curated reading lists, career guide, introductory content, research database).What CLR is trying to do and whyOur goal is to reduce the worst risks of astronomical suffering (s-risks). These are scenarios where a significant fraction of future sentient beings are locked into intense states of misery, suffering, and despair. We currently believe that such lock-in scenarios most likely involve transformative AI systems. So we work on making the development and deployment of such systems safer.Concrete research programs:AI conflict: We want to better understand how we can prevent AI systems from engaging in catastrophic conflict. (The majority of our research efforts)Evidential Cooperation in Large Worlds (ECL): ECL refers to the idea that we make it more likely that other agents across the universe take actions that are good for our values by taking actions that are good according to their values. A potential implication is that we should act so as to maximize an impartial weighted sum of the values of agents across the universe.S-risk macrostrategy: In general, we want to better understand how we can reduce suffering in the long-term future. There might be causes or considerations that we have overlooked so far.Most of our work is research with the goal of identifying threat models and possible interventions. In the case of technical AI interventions (which is the bulk of our object-level work so far), we then plan to evaluate these interventions and advocate for their inclusion in AI development.Next to our research, we also run events and fellowships to identify and support people wanting to work on these problems.FundraisingFunding situation...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Center on Long-Term Risk: 2023 Fundraiser, published by stefan.torges on December 9, 2022 on The Effective Altruism Forum.SummaryOur goal: CLR’s goal is to reduce the worst risks of astronomical suffering (s-risks). Our concrete research programs are on AI conflict, Evidential Cooperation in Large Worlds (ECL), and s-risk macrostrategy. We ultimately want to identify and advocate for interventions that reliably shape the development and deployment of advanced AI systems in a positive way.Fundraising: We have had a short-term funding shortfall and a lot of medium-term funding uncertainty. Our minimal fundraising goal is $750,000. We think this is a particularly good time to donate to CLR for people interested in supporting work on s-risks, work on Cooperative AI, work on acausal interactions, or work on generally important longtermist topics.Causes of Conflict Research Group: In 2022, we started evaluating various interventions related to AI conflict (e.g., surrogate goals, preventing conflict-seeking preferences). We also started developing methods for evaluating conflict-relevant properties of large language models. Our priorities for next year are to continue developing and evaluating these, and to continue our work with large language models.Other researchers: In 2022, others researchers at CLR worked on topics including the implications of ECL, the optimal timing of AI safety spending, the likelihood of earth-originating civilization encountering extraterrestrials, and program equilibrium. Our priorities for the next year include continuing some of this work, alongside other work including on strategic modeling and agent foundations.S-risk community-building: Our s-risk community building programs received very positive feedback. We had calls or meetings with over 150 people interested in contributing to s-risk reduction. In 2023, we plan to at least continue our existing programs (i.e., intro fellowship, Summer Research Fellowship, retreat) if we can raise the required funds. If we can even hire additional staff, we want to expand our outreach function and create more resources for community members (e.g., curated reading lists, career guide, introductory content, research database).What CLR is trying to do and whyOur goal is to reduce the worst risks of astronomical suffering (s-risks). These are scenarios where a significant fraction of future sentient beings are locked into intense states of misery, suffering, and despair. We currently believe that such lock-in scenarios most likely involve transformative AI systems. So we work on making the development and deployment of such systems safer.Concrete research programs:AI conflict: We want to better understand how we can prevent AI systems from engaging in catastrophic conflict. (The majority of our research efforts)Evidential Cooperation in Large Worlds (ECL): ECL refers to the idea that we make it more likely that other agents across the universe take actions that are good for our values by taking actions that are good according to their values. A potential implication is that we should act so as to maximize an impartial weighted sum of the values of agents across the universe.S-risk macrostrategy: In general, we want to better understand how we can reduce suffering in the long-term future. There might be causes or considerations that we have overlooked so far.Most of our work is research with the goal of identifying threat models and possible interventions. In the case of technical AI interventions (which is the bulk of our object-level work so far), we then plan to evaluate these interventions and advocate for their inclusion in AI development.Next to our research, we also run events and fellowships to identify and support people wanting to work on these problems.FundraisingFunding situation...]]>
stefan.torges https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 32:46 None full 4057
8CRFTxrRLiizypyBu_EA EA - r.i.c.e.'s neonatal lifesaving partnership is funded by GiveWell; a description of what we do by deanspears Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: r.i.c.e.'s neonatal lifesaving partnership is funded by GiveWell; a description of what we do, published by deanspears on December 9, 2022 on The Effective Altruism Forum.[The Forum found an unexpected way to help! A previous version of this post mistakenly suggested that a large donation from GiveWell risked changing r.i.c.e. from a public charity to a private foundation. We previously experienced this sort of risk with funding from the Bill and Melinda Gates Foundation. We are very excited to learn from a Forum poster that GiveWell poses no such risk! Thank you! While we are still accepting additional donations to improve the health of a larger number of babies, there is no additional urgency to maintain the charity status of r.i.c.e.]I’m writing on behalf of my team at r.i.c.e., which is honored to be highlighted in GiveWell’s “Our recommendations for giving in 2022” post. In this post, I present the details of our program that prevents neonatal deaths inexpensively by causing the implementation of a Kangaroo Mother Care program in India.A description of our work, in briefSome of you may be familiar with my work in population ethics or as Director of the Population Wellbeing Initiative at UT-Austin (or, more likely, for distributing utilitarianism t-shirts). Since long before this phase of my career, I’ve also been Executive Director of r.i.c.e., a 501(c)3 public charity working on early-life health in rural north India.In collaboration with the Government of Uttar Pradesh and an organization in India, we support a Kangaroo Mother Care program to promote neonatal survival in a context where low birth weight babies would not otherwise receive such lifesaving care. KMC is a well-established tool for preventing deaths that we did not invent. Instead, our contribution is managerial innovation: We developed a public-private partnership to cause the government’s KMC guidelines to in fact be implemented cost-effectively in a public hospital where many low birth weight babies are born. That public healthcare systems in developing countries fail to implement life-saving policies and programs is a well-known problem in global health and in development economics.Because KMC is known to prevent neonatal death, and because the collaboration that we fostered found new ways to overcome the barriers to implementation in a context where many babies are not otherwise receiving needed care, it is not surprising that we can prevent neonatal deaths inexpensively.Our statistics show that we save lives very cost-effectively; indeed, about as inexpensively as any life-saving program known to EA. These statistics led to investments this year by two EA funders: first by Founders Pledge and then by GiveWell, after in-depth investigations of our work and our evidence. We have big plans for 2023, including continuing to save lives, starting a formal impact evaluation, and doing the advocacy and partnership-building needed to scale up the program to more districts.If you would like to further support this work, please donate to r.i.c.e. at this link to PayPal. It will let you donate either one time or by establishing a recurring monthly donation. If you don’t like PayPal, you can mail a check to RICE Institute, Inc., 472 Old Colchester Rd, Amston, CT 06231.Longer set of detailsWhat is Kangaroo Mother Care and why is it good?Kangaroo Mother Care (KMC) is a way to inexpensively keep low birth weight babies (and other neonates) clean, fed, and warm. Here is GiveWell’s summary of Kangaroo Mother Care in general. It is a bit broader than our program. To emphasize: KMC has been well-known to save lives for years. Our workwomanlike contribution is merely to get it implemented in a place where there are lots of low birth weight babies, in part because mothers are themselves often underweigh...]]>
deanspears https://forum.effectivealtruism.org/posts/8CRFTxrRLiizypyBu/r-i-c-e-s-neonatal-lifesaving-partnership-is-funded-by Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: r.i.c.e.'s neonatal lifesaving partnership is funded by GiveWell; a description of what we do, published by deanspears on December 9, 2022 on The Effective Altruism Forum.[The Forum found an unexpected way to help! A previous version of this post mistakenly suggested that a large donation from GiveWell risked changing r.i.c.e. from a public charity to a private foundation. We previously experienced this sort of risk with funding from the Bill and Melinda Gates Foundation. We are very excited to learn from a Forum poster that GiveWell poses no such risk! Thank you! While we are still accepting additional donations to improve the health of a larger number of babies, there is no additional urgency to maintain the charity status of r.i.c.e.]I’m writing on behalf of my team at r.i.c.e., which is honored to be highlighted in GiveWell’s “Our recommendations for giving in 2022” post. In this post, I present the details of our program that prevents neonatal deaths inexpensively by causing the implementation of a Kangaroo Mother Care program in India.A description of our work, in briefSome of you may be familiar with my work in population ethics or as Director of the Population Wellbeing Initiative at UT-Austin (or, more likely, for distributing utilitarianism t-shirts). Since long before this phase of my career, I’ve also been Executive Director of r.i.c.e., a 501(c)3 public charity working on early-life health in rural north India.In collaboration with the Government of Uttar Pradesh and an organization in India, we support a Kangaroo Mother Care program to promote neonatal survival in a context where low birth weight babies would not otherwise receive such lifesaving care. KMC is a well-established tool for preventing deaths that we did not invent. Instead, our contribution is managerial innovation: We developed a public-private partnership to cause the government’s KMC guidelines to in fact be implemented cost-effectively in a public hospital where many low birth weight babies are born. That public healthcare systems in developing countries fail to implement life-saving policies and programs is a well-known problem in global health and in development economics.Because KMC is known to prevent neonatal death, and because the collaboration that we fostered found new ways to overcome the barriers to implementation in a context where many babies are not otherwise receiving needed care, it is not surprising that we can prevent neonatal deaths inexpensively.Our statistics show that we save lives very cost-effectively; indeed, about as inexpensively as any life-saving program known to EA. These statistics led to investments this year by two EA funders: first by Founders Pledge and then by GiveWell, after in-depth investigations of our work and our evidence. We have big plans for 2023, including continuing to save lives, starting a formal impact evaluation, and doing the advocacy and partnership-building needed to scale up the program to more districts.If you would like to further support this work, please donate to r.i.c.e. at this link to PayPal. It will let you donate either one time or by establishing a recurring monthly donation. If you don’t like PayPal, you can mail a check to RICE Institute, Inc., 472 Old Colchester Rd, Amston, CT 06231.Longer set of detailsWhat is Kangaroo Mother Care and why is it good?Kangaroo Mother Care (KMC) is a way to inexpensively keep low birth weight babies (and other neonates) clean, fed, and warm. Here is GiveWell’s summary of Kangaroo Mother Care in general. It is a bit broader than our program. To emphasize: KMC has been well-known to save lives for years. Our workwomanlike contribution is merely to get it implemented in a place where there are lots of low birth weight babies, in part because mothers are themselves often underweigh...]]>
Fri, 09 Dec 2022 18:10:10 +0000 EA - r.i.c.e.'s neonatal lifesaving partnership is funded by GiveWell; a description of what we do by deanspears Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: r.i.c.e.'s neonatal lifesaving partnership is funded by GiveWell; a description of what we do, published by deanspears on December 9, 2022 on The Effective Altruism Forum.[The Forum found an unexpected way to help! A previous version of this post mistakenly suggested that a large donation from GiveWell risked changing r.i.c.e. from a public charity to a private foundation. We previously experienced this sort of risk with funding from the Bill and Melinda Gates Foundation. We are very excited to learn from a Forum poster that GiveWell poses no such risk! Thank you! While we are still accepting additional donations to improve the health of a larger number of babies, there is no additional urgency to maintain the charity status of r.i.c.e.]I’m writing on behalf of my team at r.i.c.e., which is honored to be highlighted in GiveWell’s “Our recommendations for giving in 2022” post. In this post, I present the details of our program that prevents neonatal deaths inexpensively by causing the implementation of a Kangaroo Mother Care program in India.A description of our work, in briefSome of you may be familiar with my work in population ethics or as Director of the Population Wellbeing Initiative at UT-Austin (or, more likely, for distributing utilitarianism t-shirts). Since long before this phase of my career, I’ve also been Executive Director of r.i.c.e., a 501(c)3 public charity working on early-life health in rural north India.In collaboration with the Government of Uttar Pradesh and an organization in India, we support a Kangaroo Mother Care program to promote neonatal survival in a context where low birth weight babies would not otherwise receive such lifesaving care. KMC is a well-established tool for preventing deaths that we did not invent. Instead, our contribution is managerial innovation: We developed a public-private partnership to cause the government’s KMC guidelines to in fact be implemented cost-effectively in a public hospital where many low birth weight babies are born. That public healthcare systems in developing countries fail to implement life-saving policies and programs is a well-known problem in global health and in development economics.Because KMC is known to prevent neonatal death, and because the collaboration that we fostered found new ways to overcome the barriers to implementation in a context where many babies are not otherwise receiving needed care, it is not surprising that we can prevent neonatal deaths inexpensively.Our statistics show that we save lives very cost-effectively; indeed, about as inexpensively as any life-saving program known to EA. These statistics led to investments this year by two EA funders: first by Founders Pledge and then by GiveWell, after in-depth investigations of our work and our evidence. We have big plans for 2023, including continuing to save lives, starting a formal impact evaluation, and doing the advocacy and partnership-building needed to scale up the program to more districts.If you would like to further support this work, please donate to r.i.c.e. at this link to PayPal. It will let you donate either one time or by establishing a recurring monthly donation. If you don’t like PayPal, you can mail a check to RICE Institute, Inc., 472 Old Colchester Rd, Amston, CT 06231.Longer set of detailsWhat is Kangaroo Mother Care and why is it good?Kangaroo Mother Care (KMC) is a way to inexpensively keep low birth weight babies (and other neonates) clean, fed, and warm. Here is GiveWell’s summary of Kangaroo Mother Care in general. It is a bit broader than our program. To emphasize: KMC has been well-known to save lives for years. Our workwomanlike contribution is merely to get it implemented in a place where there are lots of low birth weight babies, in part because mothers are themselves often underweigh...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: r.i.c.e.'s neonatal lifesaving partnership is funded by GiveWell; a description of what we do, published by deanspears on December 9, 2022 on The Effective Altruism Forum.[The Forum found an unexpected way to help! A previous version of this post mistakenly suggested that a large donation from GiveWell risked changing r.i.c.e. from a public charity to a private foundation. We previously experienced this sort of risk with funding from the Bill and Melinda Gates Foundation. We are very excited to learn from a Forum poster that GiveWell poses no such risk! Thank you! While we are still accepting additional donations to improve the health of a larger number of babies, there is no additional urgency to maintain the charity status of r.i.c.e.]I’m writing on behalf of my team at r.i.c.e., which is honored to be highlighted in GiveWell’s “Our recommendations for giving in 2022” post. In this post, I present the details of our program that prevents neonatal deaths inexpensively by causing the implementation of a Kangaroo Mother Care program in India.A description of our work, in briefSome of you may be familiar with my work in population ethics or as Director of the Population Wellbeing Initiative at UT-Austin (or, more likely, for distributing utilitarianism t-shirts). Since long before this phase of my career, I’ve also been Executive Director of r.i.c.e., a 501(c)3 public charity working on early-life health in rural north India.In collaboration with the Government of Uttar Pradesh and an organization in India, we support a Kangaroo Mother Care program to promote neonatal survival in a context where low birth weight babies would not otherwise receive such lifesaving care. KMC is a well-established tool for preventing deaths that we did not invent. Instead, our contribution is managerial innovation: We developed a public-private partnership to cause the government’s KMC guidelines to in fact be implemented cost-effectively in a public hospital where many low birth weight babies are born. That public healthcare systems in developing countries fail to implement life-saving policies and programs is a well-known problem in global health and in development economics.Because KMC is known to prevent neonatal death, and because the collaboration that we fostered found new ways to overcome the barriers to implementation in a context where many babies are not otherwise receiving needed care, it is not surprising that we can prevent neonatal deaths inexpensively.Our statistics show that we save lives very cost-effectively; indeed, about as inexpensively as any life-saving program known to EA. These statistics led to investments this year by two EA funders: first by Founders Pledge and then by GiveWell, after in-depth investigations of our work and our evidence. We have big plans for 2023, including continuing to save lives, starting a formal impact evaluation, and doing the advocacy and partnership-building needed to scale up the program to more districts.If you would like to further support this work, please donate to r.i.c.e. at this link to PayPal. It will let you donate either one time or by establishing a recurring monthly donation. If you don’t like PayPal, you can mail a check to RICE Institute, Inc., 472 Old Colchester Rd, Amston, CT 06231.Longer set of detailsWhat is Kangaroo Mother Care and why is it good?Kangaroo Mother Care (KMC) is a way to inexpensively keep low birth weight babies (and other neonates) clean, fed, and warm. Here is GiveWell’s summary of Kangaroo Mother Care in general. It is a bit broader than our program. To emphasize: KMC has been well-known to save lives for years. Our workwomanlike contribution is merely to get it implemented in a place where there are lots of low birth weight babies, in part because mothers are themselves often underweigh...]]>
deanspears https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:52 None full 4056
3EWpLid8tkyYJakfm_EA EA - Announcing BlueDot Impact by Dewi Erwan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing BlueDot Impact, published by Dewi Erwan on December 9, 2022 on The Effective Altruism Forum.BlueDot Impact is a non-profit running courses that support participants to develop the knowledge, community and network needed to pursue high-impact careers.The purpose of this new organisation is to increase the number of people working on solving some of the world’s most pressing problems in an informed way. We do this by building professional, scalable and high-quality courses. We aim to give participants the opportunity to engage deeply and critically with the literature on particular high-impact fields, meet and collaborate with others interested in the topic, and build an understanding of the opportunities to pursue a career in the space.We believe there are many great people who are interested in working on important problems and have the skills to contribute, but want additional support to take the career-switching leap. By bringing together communities of such people to engage with material on the field over a sustained period and explore the range of career opportunities available to them, we believe they are more likely to make this leap.BlueDot Impact spun out of Cambridge Effective Altruism, and was founded by the team who was primarily responsible for running previous rounds of the AGI Safety and Alternative Protein Fundamentals courses. We created a new organisation so that we could focus solely on the courses. Cambridge Effective Altruism is still around and doing great community-building work.We’re really excited about the amount of interest in the courses, and think they have great potential to build awesome communities around key issues. As such we have spent the last few months:Working with pedagogy experts to make discussion sessions more engagingFormalising our course design process with greater transparency for participants and facilitatorsBuilding systems to improve participant networking to create high-value connectionsCollating downstream opportunities for participants to pursue after the coursesForming a team that can continue to build, run and improve these courses over the long-termLearn more about BlueDot Impact here, and register your interest for future rounds of our courses here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Dewi Erwan https://forum.effectivealtruism.org/posts/3EWpLid8tkyYJakfm/announcing-bluedot-impact Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing BlueDot Impact, published by Dewi Erwan on December 9, 2022 on The Effective Altruism Forum.BlueDot Impact is a non-profit running courses that support participants to develop the knowledge, community and network needed to pursue high-impact careers.The purpose of this new organisation is to increase the number of people working on solving some of the world’s most pressing problems in an informed way. We do this by building professional, scalable and high-quality courses. We aim to give participants the opportunity to engage deeply and critically with the literature on particular high-impact fields, meet and collaborate with others interested in the topic, and build an understanding of the opportunities to pursue a career in the space.We believe there are many great people who are interested in working on important problems and have the skills to contribute, but want additional support to take the career-switching leap. By bringing together communities of such people to engage with material on the field over a sustained period and explore the range of career opportunities available to them, we believe they are more likely to make this leap.BlueDot Impact spun out of Cambridge Effective Altruism, and was founded by the team who was primarily responsible for running previous rounds of the AGI Safety and Alternative Protein Fundamentals courses. We created a new organisation so that we could focus solely on the courses. Cambridge Effective Altruism is still around and doing great community-building work.We’re really excited about the amount of interest in the courses, and think they have great potential to build awesome communities around key issues. As such we have spent the last few months:Working with pedagogy experts to make discussion sessions more engagingFormalising our course design process with greater transparency for participants and facilitatorsBuilding systems to improve participant networking to create high-value connectionsCollating downstream opportunities for participants to pursue after the coursesForming a team that can continue to build, run and improve these courses over the long-termLearn more about BlueDot Impact here, and register your interest for future rounds of our courses here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 09 Dec 2022 17:56:17 +0000 EA - Announcing BlueDot Impact by Dewi Erwan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing BlueDot Impact, published by Dewi Erwan on December 9, 2022 on The Effective Altruism Forum.BlueDot Impact is a non-profit running courses that support participants to develop the knowledge, community and network needed to pursue high-impact careers.The purpose of this new organisation is to increase the number of people working on solving some of the world’s most pressing problems in an informed way. We do this by building professional, scalable and high-quality courses. We aim to give participants the opportunity to engage deeply and critically with the literature on particular high-impact fields, meet and collaborate with others interested in the topic, and build an understanding of the opportunities to pursue a career in the space.We believe there are many great people who are interested in working on important problems and have the skills to contribute, but want additional support to take the career-switching leap. By bringing together communities of such people to engage with material on the field over a sustained period and explore the range of career opportunities available to them, we believe they are more likely to make this leap.BlueDot Impact spun out of Cambridge Effective Altruism, and was founded by the team who was primarily responsible for running previous rounds of the AGI Safety and Alternative Protein Fundamentals courses. We created a new organisation so that we could focus solely on the courses. Cambridge Effective Altruism is still around and doing great community-building work.We’re really excited about the amount of interest in the courses, and think they have great potential to build awesome communities around key issues. As such we have spent the last few months:Working with pedagogy experts to make discussion sessions more engagingFormalising our course design process with greater transparency for participants and facilitatorsBuilding systems to improve participant networking to create high-value connectionsCollating downstream opportunities for participants to pursue after the coursesForming a team that can continue to build, run and improve these courses over the long-termLearn more about BlueDot Impact here, and register your interest for future rounds of our courses here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing BlueDot Impact, published by Dewi Erwan on December 9, 2022 on The Effective Altruism Forum.BlueDot Impact is a non-profit running courses that support participants to develop the knowledge, community and network needed to pursue high-impact careers.The purpose of this new organisation is to increase the number of people working on solving some of the world’s most pressing problems in an informed way. We do this by building professional, scalable and high-quality courses. We aim to give participants the opportunity to engage deeply and critically with the literature on particular high-impact fields, meet and collaborate with others interested in the topic, and build an understanding of the opportunities to pursue a career in the space.We believe there are many great people who are interested in working on important problems and have the skills to contribute, but want additional support to take the career-switching leap. By bringing together communities of such people to engage with material on the field over a sustained period and explore the range of career opportunities available to them, we believe they are more likely to make this leap.BlueDot Impact spun out of Cambridge Effective Altruism, and was founded by the team who was primarily responsible for running previous rounds of the AGI Safety and Alternative Protein Fundamentals courses. We created a new organisation so that we could focus solely on the courses. Cambridge Effective Altruism is still around and doing great community-building work.We’re really excited about the amount of interest in the courses, and think they have great potential to build awesome communities around key issues. As such we have spent the last few months:Working with pedagogy experts to make discussion sessions more engagingFormalising our course design process with greater transparency for participants and facilitatorsBuilding systems to improve participant networking to create high-value connectionsCollating downstream opportunities for participants to pursue after the coursesForming a team that can continue to build, run and improve these courses over the long-termLearn more about BlueDot Impact here, and register your interest for future rounds of our courses here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Dewi Erwan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:12 None full 4059
ydfcCfRAQpneH2wpG_EA EA - Smallpox eradication by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Smallpox eradication, published by Lizka on December 9, 2022 on The Effective Altruism Forum.Today (December 9) is Smallpox Eradication Day. 43 years ago, smallpox was confirmed to have been eradicated after killing hundreds of millions of people. This was a major achievement in global health.So I'm link-posting Our World in Data’s data explorer on smallpox (and here’s the section on how decline & eradication was achieved).This post shares a summary of the history of the eradication of smallpox and selected excerpts from the data explorer.A summary of the history of smallpox eradicationSmallpox was extremely deadly, probably killing 300 million people in the 20th century alone. The last known cases occurred in 1977, and smallpox is now the only human disease that has been completely eradicated.So how was this accomplished?Before we had a smallpox vaccine, we had the practice of variolation — deliberately exposing people to material from smallpox scabs or pus, in order to protect them against the disease (variolation traces back to 16th century China). While variolation made cases of smallpox much less severe, variolation infected the patient and could spread the disease to others, and the severity of the infection could not be easily controlled. So variolation did not lead to the elimination of smallpox from the population.In the late 18th century, Edward Jenner demonstrated that exposure to cowpox — a much less severe disease that turns out to be related — protected people against smallpox. This, in turn, led to the invention of a vaccine against smallpox (the first vaccine ever).In the 19th and 20th centuries, further improvements were made to the smallpox vaccine, and many states were running programs to vaccinate significant portions of the population. By 1959, the World Health Organization (WHO) launched a global program to eradicate smallpox . This involved a coordinated effort to immunize large numbers of people, isolate infected individuals, and monitor the spread of the disease. The program used a technique known as ring vaccination, which involved vaccinating people who had been in contact with infected individuals, in order to create a protective "ring" around the infected person and prevent further spread of the disease.Excerpts from the Our World in Data entryIntroductionSmallpox is the only human disease that has been successfully eradicated.Smallpox, an infectious disease caused by the variola virus, was a major cause of mortality in the past, with historic records of outbreaks across the world. Its historic death tolls were so large that it is often likened to the Black Plague.The eradication of smallpox is therefore a major success story for global health for several reasons: it was a disease that was endemic (and caused high mortality rates) across all continents; but was also crucial to advances in the field of immunology. The smallpox vaccine was the first successful vaccine to be developed.How many died of smallpox?In his review paper ‘The eradication of smallpox – An overview of the past, present, and future’ Donald Henderson reports that during the 20th century alone “an estimated 300 million people died of the disease.”In his book Anderson suggests that in the last hundred years of its existence smallpox killed “at least half a billion people.” 500 million deaths over a century means 5 million annual deaths on average.Eradication across the worldThe last variola major infection was recorded in Bangladesh in October 1975, and the last variola minor infection occurred two years later in Merka, Somalia, on October 26th, 1977. During the following two years, WHO teams searched the African continent for further smallpox cases among those rash-like symptoms (which is a symptom of numerous other diseases). They found no further cas...]]>
Lizka https://forum.effectivealtruism.org/posts/ydfcCfRAQpneH2wpG/smallpox-eradication Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Smallpox eradication, published by Lizka on December 9, 2022 on The Effective Altruism Forum.Today (December 9) is Smallpox Eradication Day. 43 years ago, smallpox was confirmed to have been eradicated after killing hundreds of millions of people. This was a major achievement in global health.So I'm link-posting Our World in Data’s data explorer on smallpox (and here’s the section on how decline & eradication was achieved).This post shares a summary of the history of the eradication of smallpox and selected excerpts from the data explorer.A summary of the history of smallpox eradicationSmallpox was extremely deadly, probably killing 300 million people in the 20th century alone. The last known cases occurred in 1977, and smallpox is now the only human disease that has been completely eradicated.So how was this accomplished?Before we had a smallpox vaccine, we had the practice of variolation — deliberately exposing people to material from smallpox scabs or pus, in order to protect them against the disease (variolation traces back to 16th century China). While variolation made cases of smallpox much less severe, variolation infected the patient and could spread the disease to others, and the severity of the infection could not be easily controlled. So variolation did not lead to the elimination of smallpox from the population.In the late 18th century, Edward Jenner demonstrated that exposure to cowpox — a much less severe disease that turns out to be related — protected people against smallpox. This, in turn, led to the invention of a vaccine against smallpox (the first vaccine ever).In the 19th and 20th centuries, further improvements were made to the smallpox vaccine, and many states were running programs to vaccinate significant portions of the population. By 1959, the World Health Organization (WHO) launched a global program to eradicate smallpox . This involved a coordinated effort to immunize large numbers of people, isolate infected individuals, and monitor the spread of the disease. The program used a technique known as ring vaccination, which involved vaccinating people who had been in contact with infected individuals, in order to create a protective "ring" around the infected person and prevent further spread of the disease.Excerpts from the Our World in Data entryIntroductionSmallpox is the only human disease that has been successfully eradicated.Smallpox, an infectious disease caused by the variola virus, was a major cause of mortality in the past, with historic records of outbreaks across the world. Its historic death tolls were so large that it is often likened to the Black Plague.The eradication of smallpox is therefore a major success story for global health for several reasons: it was a disease that was endemic (and caused high mortality rates) across all continents; but was also crucial to advances in the field of immunology. The smallpox vaccine was the first successful vaccine to be developed.How many died of smallpox?In his review paper ‘The eradication of smallpox – An overview of the past, present, and future’ Donald Henderson reports that during the 20th century alone “an estimated 300 million people died of the disease.”In his book Anderson suggests that in the last hundred years of its existence smallpox killed “at least half a billion people.” 500 million deaths over a century means 5 million annual deaths on average.Eradication across the worldThe last variola major infection was recorded in Bangladesh in October 1975, and the last variola minor infection occurred two years later in Merka, Somalia, on October 26th, 1977. During the following two years, WHO teams searched the African continent for further smallpox cases among those rash-like symptoms (which is a symptom of numerous other diseases). They found no further cas...]]>
Fri, 09 Dec 2022 09:01:33 +0000 EA - Smallpox eradication by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Smallpox eradication, published by Lizka on December 9, 2022 on The Effective Altruism Forum.Today (December 9) is Smallpox Eradication Day. 43 years ago, smallpox was confirmed to have been eradicated after killing hundreds of millions of people. This was a major achievement in global health.So I'm link-posting Our World in Data’s data explorer on smallpox (and here’s the section on how decline & eradication was achieved).This post shares a summary of the history of the eradication of smallpox and selected excerpts from the data explorer.A summary of the history of smallpox eradicationSmallpox was extremely deadly, probably killing 300 million people in the 20th century alone. The last known cases occurred in 1977, and smallpox is now the only human disease that has been completely eradicated.So how was this accomplished?Before we had a smallpox vaccine, we had the practice of variolation — deliberately exposing people to material from smallpox scabs or pus, in order to protect them against the disease (variolation traces back to 16th century China). While variolation made cases of smallpox much less severe, variolation infected the patient and could spread the disease to others, and the severity of the infection could not be easily controlled. So variolation did not lead to the elimination of smallpox from the population.In the late 18th century, Edward Jenner demonstrated that exposure to cowpox — a much less severe disease that turns out to be related — protected people against smallpox. This, in turn, led to the invention of a vaccine against smallpox (the first vaccine ever).In the 19th and 20th centuries, further improvements were made to the smallpox vaccine, and many states were running programs to vaccinate significant portions of the population. By 1959, the World Health Organization (WHO) launched a global program to eradicate smallpox . This involved a coordinated effort to immunize large numbers of people, isolate infected individuals, and monitor the spread of the disease. The program used a technique known as ring vaccination, which involved vaccinating people who had been in contact with infected individuals, in order to create a protective "ring" around the infected person and prevent further spread of the disease.Excerpts from the Our World in Data entryIntroductionSmallpox is the only human disease that has been successfully eradicated.Smallpox, an infectious disease caused by the variola virus, was a major cause of mortality in the past, with historic records of outbreaks across the world. Its historic death tolls were so large that it is often likened to the Black Plague.The eradication of smallpox is therefore a major success story for global health for several reasons: it was a disease that was endemic (and caused high mortality rates) across all continents; but was also crucial to advances in the field of immunology. The smallpox vaccine was the first successful vaccine to be developed.How many died of smallpox?In his review paper ‘The eradication of smallpox – An overview of the past, present, and future’ Donald Henderson reports that during the 20th century alone “an estimated 300 million people died of the disease.”In his book Anderson suggests that in the last hundred years of its existence smallpox killed “at least half a billion people.” 500 million deaths over a century means 5 million annual deaths on average.Eradication across the worldThe last variola major infection was recorded in Bangladesh in October 1975, and the last variola minor infection occurred two years later in Merka, Somalia, on October 26th, 1977. During the following two years, WHO teams searched the African continent for further smallpox cases among those rash-like symptoms (which is a symptom of numerous other diseases). They found no further cas...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Smallpox eradication, published by Lizka on December 9, 2022 on The Effective Altruism Forum.Today (December 9) is Smallpox Eradication Day. 43 years ago, smallpox was confirmed to have been eradicated after killing hundreds of millions of people. This was a major achievement in global health.So I'm link-posting Our World in Data’s data explorer on smallpox (and here’s the section on how decline & eradication was achieved).This post shares a summary of the history of the eradication of smallpox and selected excerpts from the data explorer.A summary of the history of smallpox eradicationSmallpox was extremely deadly, probably killing 300 million people in the 20th century alone. The last known cases occurred in 1977, and smallpox is now the only human disease that has been completely eradicated.So how was this accomplished?Before we had a smallpox vaccine, we had the practice of variolation — deliberately exposing people to material from smallpox scabs or pus, in order to protect them against the disease (variolation traces back to 16th century China). While variolation made cases of smallpox much less severe, variolation infected the patient and could spread the disease to others, and the severity of the infection could not be easily controlled. So variolation did not lead to the elimination of smallpox from the population.In the late 18th century, Edward Jenner demonstrated that exposure to cowpox — a much less severe disease that turns out to be related — protected people against smallpox. This, in turn, led to the invention of a vaccine against smallpox (the first vaccine ever).In the 19th and 20th centuries, further improvements were made to the smallpox vaccine, and many states were running programs to vaccinate significant portions of the population. By 1959, the World Health Organization (WHO) launched a global program to eradicate smallpox . This involved a coordinated effort to immunize large numbers of people, isolate infected individuals, and monitor the spread of the disease. The program used a technique known as ring vaccination, which involved vaccinating people who had been in contact with infected individuals, in order to create a protective "ring" around the infected person and prevent further spread of the disease.Excerpts from the Our World in Data entryIntroductionSmallpox is the only human disease that has been successfully eradicated.Smallpox, an infectious disease caused by the variola virus, was a major cause of mortality in the past, with historic records of outbreaks across the world. Its historic death tolls were so large that it is often likened to the Black Plague.The eradication of smallpox is therefore a major success story for global health for several reasons: it was a disease that was endemic (and caused high mortality rates) across all continents; but was also crucial to advances in the field of immunology. The smallpox vaccine was the first successful vaccine to be developed.How many died of smallpox?In his review paper ‘The eradication of smallpox – An overview of the past, present, and future’ Donald Henderson reports that during the 20th century alone “an estimated 300 million people died of the disease.”In his book Anderson suggests that in the last hundred years of its existence smallpox killed “at least half a billion people.” 500 million deaths over a century means 5 million annual deaths on average.Eradication across the worldThe last variola major infection was recorded in Bangladesh in October 1975, and the last variola minor infection occurred two years later in Merka, Somalia, on October 26th, 1977. During the following two years, WHO teams searched the African continent for further smallpox cases among those rash-like symptoms (which is a symptom of numerous other diseases). They found no further cas...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:26 None full 4051
wPHpdwfu3toRDf6hM_EA EA - Main paths to impact in EU AI Policy by JOMG Monnet Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Main paths to impact in EU AI Policy, published by JOMG Monnet on December 8, 2022 on The Effective Altruism Forum.Epistemic statusThis is a tentative overview of the current main paths to impact in EU AI Policy. There is significant uncertainty regarding the relative impact of the different paths belowThe paths mentioned below were cross-checked with four experts working in the field, but the list is probably not exhaustiveAdditionally, this post may also be of interest to those interested in working in EU AI Policy. In particular, consider the arguments against impact hereThis article doesn’t compare the potential value of US AI policy careers with those in the EU for people with EU-citizenship. A comparison between the two options is beyond the scope of this postSummaryPeople seeking to have a positive impact on the direction of AI policy in the EU may consider the following paths to impact:Working on (enforcement of) the AI Act, related AI technical standards and adjacent regulationWorking on the current AI Act draft (possibility to have impact immediately)Working on technical standards and auditing services of the AI Act (possibility to have impact immediately)Making sure the EU AI Act is enforced effectively (possibility to have impact now and >1/2 years from now)Working on a revision of the AI Act (possibility to have impact >5 years from now), or on (potential) new AI-related regulation (e.g. possibility to have impact now through the AI Liability Directive)Working on export controls and using the EU’s soft power (possibility to have immediate + longer term impact)Using career capital gained from a career in EU AI Policy to work on different policy topics or in the private sectorWhile the majority of impact for some paths above is expected to be realised in the medium/long term, building up career capital is probably a prerequisite for making impact in these paths later on. It is advisable to create your own personal “theory of change” for working in the field. The list of paths to impact below is not exhaustive and individuals should do their own research into different paths and speak to multiple experts.Paths to impactWorking on (enforcement of) the AI Act, related AI technical standards and adjacent regulationSince the EU generally lacks cutting edge developers of AI systems, the largest expected impact from the EU AI Act is expected to follow from a Brussels effect of sorts. For the AI act to have a positive effect on the likelihood of safe, advanced AI systems being developed within organisations outside of the EU, the following assumptions need to be true:Advanced AI systems are (also) being developed within private AI labs that (plan to) export products to EU citizens.The AI-developing activities within private AI labs need to be influenced by the EU AI Act through a Brussels Effect of some sort (de facto or de jure). This depends on whether advanced AI is developed within a product development process in which AI-companies take EU AI Act regulation into account.Requirements on these companies either have a (1) slowing effect on advanced AI development, buying more time for technical safety research or better regulation within the countries where AI is developed (2) direct effect, e.g. through risk management requirements that increase the odds of safe AI developmentWorking on the final draft of the AI ActCurrent status of AI ActThe AI Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. On April 21, 2021 the Commission published a proposal to regulate artificial intelligence in the European Union. The proposal of the EU AI Act will become law once both the Council (representing the 27 EU Member States) and the European Parliament agree on a common version of the te...]]>
JOMG Monnet https://forum.effectivealtruism.org/posts/wPHpdwfu3toRDf6hM/main-paths-to-impact-in-eu-ai-policy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Main paths to impact in EU AI Policy, published by JOMG Monnet on December 8, 2022 on The Effective Altruism Forum.Epistemic statusThis is a tentative overview of the current main paths to impact in EU AI Policy. There is significant uncertainty regarding the relative impact of the different paths belowThe paths mentioned below were cross-checked with four experts working in the field, but the list is probably not exhaustiveAdditionally, this post may also be of interest to those interested in working in EU AI Policy. In particular, consider the arguments against impact hereThis article doesn’t compare the potential value of US AI policy careers with those in the EU for people with EU-citizenship. A comparison between the two options is beyond the scope of this postSummaryPeople seeking to have a positive impact on the direction of AI policy in the EU may consider the following paths to impact:Working on (enforcement of) the AI Act, related AI technical standards and adjacent regulationWorking on the current AI Act draft (possibility to have impact immediately)Working on technical standards and auditing services of the AI Act (possibility to have impact immediately)Making sure the EU AI Act is enforced effectively (possibility to have impact now and >1/2 years from now)Working on a revision of the AI Act (possibility to have impact >5 years from now), or on (potential) new AI-related regulation (e.g. possibility to have impact now through the AI Liability Directive)Working on export controls and using the EU’s soft power (possibility to have immediate + longer term impact)Using career capital gained from a career in EU AI Policy to work on different policy topics or in the private sectorWhile the majority of impact for some paths above is expected to be realised in the medium/long term, building up career capital is probably a prerequisite for making impact in these paths later on. It is advisable to create your own personal “theory of change” for working in the field. The list of paths to impact below is not exhaustive and individuals should do their own research into different paths and speak to multiple experts.Paths to impactWorking on (enforcement of) the AI Act, related AI technical standards and adjacent regulationSince the EU generally lacks cutting edge developers of AI systems, the largest expected impact from the EU AI Act is expected to follow from a Brussels effect of sorts. For the AI act to have a positive effect on the likelihood of safe, advanced AI systems being developed within organisations outside of the EU, the following assumptions need to be true:Advanced AI systems are (also) being developed within private AI labs that (plan to) export products to EU citizens.The AI-developing activities within private AI labs need to be influenced by the EU AI Act through a Brussels Effect of some sort (de facto or de jure). This depends on whether advanced AI is developed within a product development process in which AI-companies take EU AI Act regulation into account.Requirements on these companies either have a (1) slowing effect on advanced AI development, buying more time for technical safety research or better regulation within the countries where AI is developed (2) direct effect, e.g. through risk management requirements that increase the odds of safe AI developmentWorking on the final draft of the AI ActCurrent status of AI ActThe AI Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. On April 21, 2021 the Commission published a proposal to regulate artificial intelligence in the European Union. The proposal of the EU AI Act will become law once both the Council (representing the 27 EU Member States) and the European Parliament agree on a common version of the te...]]>
Fri, 09 Dec 2022 04:26:10 +0000 EA - Main paths to impact in EU AI Policy by JOMG Monnet Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Main paths to impact in EU AI Policy, published by JOMG Monnet on December 8, 2022 on The Effective Altruism Forum.Epistemic statusThis is a tentative overview of the current main paths to impact in EU AI Policy. There is significant uncertainty regarding the relative impact of the different paths belowThe paths mentioned below were cross-checked with four experts working in the field, but the list is probably not exhaustiveAdditionally, this post may also be of interest to those interested in working in EU AI Policy. In particular, consider the arguments against impact hereThis article doesn’t compare the potential value of US AI policy careers with those in the EU for people with EU-citizenship. A comparison between the two options is beyond the scope of this postSummaryPeople seeking to have a positive impact on the direction of AI policy in the EU may consider the following paths to impact:Working on (enforcement of) the AI Act, related AI technical standards and adjacent regulationWorking on the current AI Act draft (possibility to have impact immediately)Working on technical standards and auditing services of the AI Act (possibility to have impact immediately)Making sure the EU AI Act is enforced effectively (possibility to have impact now and >1/2 years from now)Working on a revision of the AI Act (possibility to have impact >5 years from now), or on (potential) new AI-related regulation (e.g. possibility to have impact now through the AI Liability Directive)Working on export controls and using the EU’s soft power (possibility to have immediate + longer term impact)Using career capital gained from a career in EU AI Policy to work on different policy topics or in the private sectorWhile the majority of impact for some paths above is expected to be realised in the medium/long term, building up career capital is probably a prerequisite for making impact in these paths later on. It is advisable to create your own personal “theory of change” for working in the field. The list of paths to impact below is not exhaustive and individuals should do their own research into different paths and speak to multiple experts.Paths to impactWorking on (enforcement of) the AI Act, related AI technical standards and adjacent regulationSince the EU generally lacks cutting edge developers of AI systems, the largest expected impact from the EU AI Act is expected to follow from a Brussels effect of sorts. For the AI act to have a positive effect on the likelihood of safe, advanced AI systems being developed within organisations outside of the EU, the following assumptions need to be true:Advanced AI systems are (also) being developed within private AI labs that (plan to) export products to EU citizens.The AI-developing activities within private AI labs need to be influenced by the EU AI Act through a Brussels Effect of some sort (de facto or de jure). This depends on whether advanced AI is developed within a product development process in which AI-companies take EU AI Act regulation into account.Requirements on these companies either have a (1) slowing effect on advanced AI development, buying more time for technical safety research or better regulation within the countries where AI is developed (2) direct effect, e.g. through risk management requirements that increase the odds of safe AI developmentWorking on the final draft of the AI ActCurrent status of AI ActThe AI Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. On April 21, 2021 the Commission published a proposal to regulate artificial intelligence in the European Union. The proposal of the EU AI Act will become law once both the Council (representing the 27 EU Member States) and the European Parliament agree on a common version of the te...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Main paths to impact in EU AI Policy, published by JOMG Monnet on December 8, 2022 on The Effective Altruism Forum.Epistemic statusThis is a tentative overview of the current main paths to impact in EU AI Policy. There is significant uncertainty regarding the relative impact of the different paths belowThe paths mentioned below were cross-checked with four experts working in the field, but the list is probably not exhaustiveAdditionally, this post may also be of interest to those interested in working in EU AI Policy. In particular, consider the arguments against impact hereThis article doesn’t compare the potential value of US AI policy careers with those in the EU for people with EU-citizenship. A comparison between the two options is beyond the scope of this postSummaryPeople seeking to have a positive impact on the direction of AI policy in the EU may consider the following paths to impact:Working on (enforcement of) the AI Act, related AI technical standards and adjacent regulationWorking on the current AI Act draft (possibility to have impact immediately)Working on technical standards and auditing services of the AI Act (possibility to have impact immediately)Making sure the EU AI Act is enforced effectively (possibility to have impact now and >1/2 years from now)Working on a revision of the AI Act (possibility to have impact >5 years from now), or on (potential) new AI-related regulation (e.g. possibility to have impact now through the AI Liability Directive)Working on export controls and using the EU’s soft power (possibility to have immediate + longer term impact)Using career capital gained from a career in EU AI Policy to work on different policy topics or in the private sectorWhile the majority of impact for some paths above is expected to be realised in the medium/long term, building up career capital is probably a prerequisite for making impact in these paths later on. It is advisable to create your own personal “theory of change” for working in the field. The list of paths to impact below is not exhaustive and individuals should do their own research into different paths and speak to multiple experts.Paths to impactWorking on (enforcement of) the AI Act, related AI technical standards and adjacent regulationSince the EU generally lacks cutting edge developers of AI systems, the largest expected impact from the EU AI Act is expected to follow from a Brussels effect of sorts. For the AI act to have a positive effect on the likelihood of safe, advanced AI systems being developed within organisations outside of the EU, the following assumptions need to be true:Advanced AI systems are (also) being developed within private AI labs that (plan to) export products to EU citizens.The AI-developing activities within private AI labs need to be influenced by the EU AI Act through a Brussels Effect of some sort (de facto or de jure). This depends on whether advanced AI is developed within a product development process in which AI-companies take EU AI Act regulation into account.Requirements on these companies either have a (1) slowing effect on advanced AI development, buying more time for technical safety research or better regulation within the countries where AI is developed (2) direct effect, e.g. through risk management requirements that increase the odds of safe AI developmentWorking on the final draft of the AI ActCurrent status of AI ActThe AI Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. On April 21, 2021 the Commission published a proposal to regulate artificial intelligence in the European Union. The proposal of the EU AI Act will become law once both the Council (representing the 27 EU Member States) and the European Parliament agree on a common version of the te...]]>
JOMG Monnet https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:37 None full 4052
QB6JJdYemL2kQPhLM_EA EA - Monitoring and Evaluation Specialists – a new career path profile from Probably Good by Probably Good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monitoring & Evaluation Specialists – a new career path profile from Probably Good, published by Probably Good on December 8, 2022 on The Effective Altruism Forum.Probably Good is excited to share a new path profile for careers in monitoring and evaluation. Below, we’ve included a few excerpts from the full profile.Monitoring and evaluation (M&E) specialists collect, track, and analyze data to assess the value and impact of different programs and interventions, as well as translate these assessments into actionable insights and strategies to increase the impact of an organization.M&E specialist careers might be a promising option for some people. If you’re an exceptional fit, especially if you’re based in a low- or middle-income country where there’s lots of scope for implementing global health and development interventions, then it may be worth considering these careers.However, the impact you’ll be able to have will be determined in large part by the organization you enter – making it particularly important to seek out the best organizations and avoid those that only superficially care about evaluating their impact. Additionally, if you’re a good fit for some of the top roles in this path, it’s likely you’ll also be a good fit for other highly impactful roles, so we’d recommend you consider other paths, too.How promising is this path?Monitoring and evaluation is important for any organization aiming to have an impact. Without collecting evidence and data, it’s easy to seem like an intervention or program is having an impact, even when it’s not. Here are a few ways in which M&E might be able to generate impact:Discover effective interventions that do a lot of good. For example, rigorous evaluation by J-PAL affiliates and Evidence Action found that placing chlorine-treated water dispensers in rural African villages reduced under-5 child mortality by as much as 63%. Evidence Action has now pledged to double the size of its water-treatment program, reaching 9 million people.Make improvements to known effective interventions. Improving the efficacy of an already-impactful intervention by even a little bit can generate a large impact, especially if the intervention is rolled out on a large scale. Consider this study run by malaria charity TAMTAM, which found that charging even a nominal price for malaria bednets decreased demand by up to 60%, leading a number of large organizations to offer them for free instead.Identify ineffective or harmful interventions, so that an organization can change course. A great example of this is animal advocacy organization the Humane League, which determined that their current strategy of performing controversial public stunts was ineffective, and pivoted its strategy towards corporate campaigns. In doing so, they convinced Unilever to stop killing male chicks, saving millions of baby chicks from gruesome deaths.AdvantagesClear links to effectiveness - Because M&E is explicitly concerned with measuring the impact of interventions, there’s often a clear “theory of change” for how your work might translate into positive impact.Leverage - If you’re working in a large organization, or working on an intervention with a large pool of potential funders and implementers, your work can influence where large amounts of money is spent, or how large amounts of other resources are distributed.Flexible skill set - the skills and qualifications you’ll need for a career in M&E are robustly useful across a range of careers. As such, it’s likely that M&E work will provide you with flexible career capital for pursuing other paths.DisadvantagesNarrow range of cause areas - Within our top recommended cause areas, there are far more M&E roles within global health and development than the others. This means M&E may be a promising career path if y...]]>
Probably Good https://forum.effectivealtruism.org/posts/QB6JJdYemL2kQPhLM/monitoring-and-evaluation-specialists-a-new-career-path Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monitoring & Evaluation Specialists – a new career path profile from Probably Good, published by Probably Good on December 8, 2022 on The Effective Altruism Forum.Probably Good is excited to share a new path profile for careers in monitoring and evaluation. Below, we’ve included a few excerpts from the full profile.Monitoring and evaluation (M&E) specialists collect, track, and analyze data to assess the value and impact of different programs and interventions, as well as translate these assessments into actionable insights and strategies to increase the impact of an organization.M&E specialist careers might be a promising option for some people. If you’re an exceptional fit, especially if you’re based in a low- or middle-income country where there’s lots of scope for implementing global health and development interventions, then it may be worth considering these careers.However, the impact you’ll be able to have will be determined in large part by the organization you enter – making it particularly important to seek out the best organizations and avoid those that only superficially care about evaluating their impact. Additionally, if you’re a good fit for some of the top roles in this path, it’s likely you’ll also be a good fit for other highly impactful roles, so we’d recommend you consider other paths, too.How promising is this path?Monitoring and evaluation is important for any organization aiming to have an impact. Without collecting evidence and data, it’s easy to seem like an intervention or program is having an impact, even when it’s not. Here are a few ways in which M&E might be able to generate impact:Discover effective interventions that do a lot of good. For example, rigorous evaluation by J-PAL affiliates and Evidence Action found that placing chlorine-treated water dispensers in rural African villages reduced under-5 child mortality by as much as 63%. Evidence Action has now pledged to double the size of its water-treatment program, reaching 9 million people.Make improvements to known effective interventions. Improving the efficacy of an already-impactful intervention by even a little bit can generate a large impact, especially if the intervention is rolled out on a large scale. Consider this study run by malaria charity TAMTAM, which found that charging even a nominal price for malaria bednets decreased demand by up to 60%, leading a number of large organizations to offer them for free instead.Identify ineffective or harmful interventions, so that an organization can change course. A great example of this is animal advocacy organization the Humane League, which determined that their current strategy of performing controversial public stunts was ineffective, and pivoted its strategy towards corporate campaigns. In doing so, they convinced Unilever to stop killing male chicks, saving millions of baby chicks from gruesome deaths.AdvantagesClear links to effectiveness - Because M&E is explicitly concerned with measuring the impact of interventions, there’s often a clear “theory of change” for how your work might translate into positive impact.Leverage - If you’re working in a large organization, or working on an intervention with a large pool of potential funders and implementers, your work can influence where large amounts of money is spent, or how large amounts of other resources are distributed.Flexible skill set - the skills and qualifications you’ll need for a career in M&E are robustly useful across a range of careers. As such, it’s likely that M&E work will provide you with flexible career capital for pursuing other paths.DisadvantagesNarrow range of cause areas - Within our top recommended cause areas, there are far more M&E roles within global health and development than the others. This means M&E may be a promising career path if y...]]>
Thu, 08 Dec 2022 19:42:31 +0000 EA - Monitoring and Evaluation Specialists – a new career path profile from Probably Good by Probably Good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monitoring & Evaluation Specialists – a new career path profile from Probably Good, published by Probably Good on December 8, 2022 on The Effective Altruism Forum.Probably Good is excited to share a new path profile for careers in monitoring and evaluation. Below, we’ve included a few excerpts from the full profile.Monitoring and evaluation (M&E) specialists collect, track, and analyze data to assess the value and impact of different programs and interventions, as well as translate these assessments into actionable insights and strategies to increase the impact of an organization.M&E specialist careers might be a promising option for some people. If you’re an exceptional fit, especially if you’re based in a low- or middle-income country where there’s lots of scope for implementing global health and development interventions, then it may be worth considering these careers.However, the impact you’ll be able to have will be determined in large part by the organization you enter – making it particularly important to seek out the best organizations and avoid those that only superficially care about evaluating their impact. Additionally, if you’re a good fit for some of the top roles in this path, it’s likely you’ll also be a good fit for other highly impactful roles, so we’d recommend you consider other paths, too.How promising is this path?Monitoring and evaluation is important for any organization aiming to have an impact. Without collecting evidence and data, it’s easy to seem like an intervention or program is having an impact, even when it’s not. Here are a few ways in which M&E might be able to generate impact:Discover effective interventions that do a lot of good. For example, rigorous evaluation by J-PAL affiliates and Evidence Action found that placing chlorine-treated water dispensers in rural African villages reduced under-5 child mortality by as much as 63%. Evidence Action has now pledged to double the size of its water-treatment program, reaching 9 million people.Make improvements to known effective interventions. Improving the efficacy of an already-impactful intervention by even a little bit can generate a large impact, especially if the intervention is rolled out on a large scale. Consider this study run by malaria charity TAMTAM, which found that charging even a nominal price for malaria bednets decreased demand by up to 60%, leading a number of large organizations to offer them for free instead.Identify ineffective or harmful interventions, so that an organization can change course. A great example of this is animal advocacy organization the Humane League, which determined that their current strategy of performing controversial public stunts was ineffective, and pivoted its strategy towards corporate campaigns. In doing so, they convinced Unilever to stop killing male chicks, saving millions of baby chicks from gruesome deaths.AdvantagesClear links to effectiveness - Because M&E is explicitly concerned with measuring the impact of interventions, there’s often a clear “theory of change” for how your work might translate into positive impact.Leverage - If you’re working in a large organization, or working on an intervention with a large pool of potential funders and implementers, your work can influence where large amounts of money is spent, or how large amounts of other resources are distributed.Flexible skill set - the skills and qualifications you’ll need for a career in M&E are robustly useful across a range of careers. As such, it’s likely that M&E work will provide you with flexible career capital for pursuing other paths.DisadvantagesNarrow range of cause areas - Within our top recommended cause areas, there are far more M&E roles within global health and development than the others. This means M&E may be a promising career path if y...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monitoring & Evaluation Specialists – a new career path profile from Probably Good, published by Probably Good on December 8, 2022 on The Effective Altruism Forum.Probably Good is excited to share a new path profile for careers in monitoring and evaluation. Below, we’ve included a few excerpts from the full profile.Monitoring and evaluation (M&E) specialists collect, track, and analyze data to assess the value and impact of different programs and interventions, as well as translate these assessments into actionable insights and strategies to increase the impact of an organization.M&E specialist careers might be a promising option for some people. If you’re an exceptional fit, especially if you’re based in a low- or middle-income country where there’s lots of scope for implementing global health and development interventions, then it may be worth considering these careers.However, the impact you’ll be able to have will be determined in large part by the organization you enter – making it particularly important to seek out the best organizations and avoid those that only superficially care about evaluating their impact. Additionally, if you’re a good fit for some of the top roles in this path, it’s likely you’ll also be a good fit for other highly impactful roles, so we’d recommend you consider other paths, too.How promising is this path?Monitoring and evaluation is important for any organization aiming to have an impact. Without collecting evidence and data, it’s easy to seem like an intervention or program is having an impact, even when it’s not. Here are a few ways in which M&E might be able to generate impact:Discover effective interventions that do a lot of good. For example, rigorous evaluation by J-PAL affiliates and Evidence Action found that placing chlorine-treated water dispensers in rural African villages reduced under-5 child mortality by as much as 63%. Evidence Action has now pledged to double the size of its water-treatment program, reaching 9 million people.Make improvements to known effective interventions. Improving the efficacy of an already-impactful intervention by even a little bit can generate a large impact, especially if the intervention is rolled out on a large scale. Consider this study run by malaria charity TAMTAM, which found that charging even a nominal price for malaria bednets decreased demand by up to 60%, leading a number of large organizations to offer them for free instead.Identify ineffective or harmful interventions, so that an organization can change course. A great example of this is animal advocacy organization the Humane League, which determined that their current strategy of performing controversial public stunts was ineffective, and pivoted its strategy towards corporate campaigns. In doing so, they convinced Unilever to stop killing male chicks, saving millions of baby chicks from gruesome deaths.AdvantagesClear links to effectiveness - Because M&E is explicitly concerned with measuring the impact of interventions, there’s often a clear “theory of change” for how your work might translate into positive impact.Leverage - If you’re working in a large organization, or working on an intervention with a large pool of potential funders and implementers, your work can influence where large amounts of money is spent, or how large amounts of other resources are distributed.Flexible skill set - the skills and qualifications you’ll need for a career in M&E are robustly useful across a range of careers. As such, it’s likely that M&E work will provide you with flexible career capital for pursuing other paths.DisadvantagesNarrow range of cause areas - Within our top recommended cause areas, there are far more M&E roles within global health and development than the others. This means M&E may be a promising career path if y...]]>
Probably Good https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:02 None full 4050
CYSXxCsxhBRdxyNPR_EA EA - SFF is doubling speculation (rapid) grant budgets; FTX grantees should consider applying by JueYan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SFF is doubling speculation (rapid) grant budgets; FTX grantees should consider applying, published by JueYan on December 8, 2022 on The Effective Altruism Forum.The Survival and Flourishing Fund (SFF) funds many longtermist, x-risk, and meta projects, and has distributed $18mm YTD. While SFF’s focus areas are similar to those of the FTX Future Fund, SFF has received few applications since the latest round closed in August.This is a reminder that projects can apply to be considered for expedited speculation grants at any time. Speculation grants can be approved in days and paid out as quickly as within a month. Past speculation grants have ranged from $10,000 to $400,000, and applicants for speculation grants will automatically be considered for the next main SFF round. In response to the recent extraordinary need, Jaan Tallinn, the main funder of SFF, is doubling speculation budgets. Grantees impacted by recent events should apply.SFF funds charities and projects hosted by organizations with charity status. You can get a better idea of SFF’s scope from its website and its recent grants. I encourage relevant grantees to consider applying to SFF, in addition to the current array of efforts led by Open Phil, Mercatus, and Nonlinear.For general information about the Survival and Flourishing Fund, see:/Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
JueYan https://forum.effectivealtruism.org/posts/CYSXxCsxhBRdxyNPR/sff-is-doubling-speculation-rapid-grant-budgets-ftx-grantees Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SFF is doubling speculation (rapid) grant budgets; FTX grantees should consider applying, published by JueYan on December 8, 2022 on The Effective Altruism Forum.The Survival and Flourishing Fund (SFF) funds many longtermist, x-risk, and meta projects, and has distributed $18mm YTD. While SFF’s focus areas are similar to those of the FTX Future Fund, SFF has received few applications since the latest round closed in August.This is a reminder that projects can apply to be considered for expedited speculation grants at any time. Speculation grants can be approved in days and paid out as quickly as within a month. Past speculation grants have ranged from $10,000 to $400,000, and applicants for speculation grants will automatically be considered for the next main SFF round. In response to the recent extraordinary need, Jaan Tallinn, the main funder of SFF, is doubling speculation budgets. Grantees impacted by recent events should apply.SFF funds charities and projects hosted by organizations with charity status. You can get a better idea of SFF’s scope from its website and its recent grants. I encourage relevant grantees to consider applying to SFF, in addition to the current array of efforts led by Open Phil, Mercatus, and Nonlinear.For general information about the Survival and Flourishing Fund, see:/Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 08 Dec 2022 19:39:58 +0000 EA - SFF is doubling speculation (rapid) grant budgets; FTX grantees should consider applying by JueYan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SFF is doubling speculation (rapid) grant budgets; FTX grantees should consider applying, published by JueYan on December 8, 2022 on The Effective Altruism Forum.The Survival and Flourishing Fund (SFF) funds many longtermist, x-risk, and meta projects, and has distributed $18mm YTD. While SFF’s focus areas are similar to those of the FTX Future Fund, SFF has received few applications since the latest round closed in August.This is a reminder that projects can apply to be considered for expedited speculation grants at any time. Speculation grants can be approved in days and paid out as quickly as within a month. Past speculation grants have ranged from $10,000 to $400,000, and applicants for speculation grants will automatically be considered for the next main SFF round. In response to the recent extraordinary need, Jaan Tallinn, the main funder of SFF, is doubling speculation budgets. Grantees impacted by recent events should apply.SFF funds charities and projects hosted by organizations with charity status. You can get a better idea of SFF’s scope from its website and its recent grants. I encourage relevant grantees to consider applying to SFF, in addition to the current array of efforts led by Open Phil, Mercatus, and Nonlinear.For general information about the Survival and Flourishing Fund, see:/Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SFF is doubling speculation (rapid) grant budgets; FTX grantees should consider applying, published by JueYan on December 8, 2022 on The Effective Altruism Forum.The Survival and Flourishing Fund (SFF) funds many longtermist, x-risk, and meta projects, and has distributed $18mm YTD. While SFF’s focus areas are similar to those of the FTX Future Fund, SFF has received few applications since the latest round closed in August.This is a reminder that projects can apply to be considered for expedited speculation grants at any time. Speculation grants can be approved in days and paid out as quickly as within a month. Past speculation grants have ranged from $10,000 to $400,000, and applicants for speculation grants will automatically be considered for the next main SFF round. In response to the recent extraordinary need, Jaan Tallinn, the main funder of SFF, is doubling speculation budgets. Grantees impacted by recent events should apply.SFF funds charities and projects hosted by organizations with charity status. You can get a better idea of SFF’s scope from its website and its recent grants. I encourage relevant grantees to consider applying to SFF, in addition to the current array of efforts led by Open Phil, Mercatus, and Nonlinear.For general information about the Survival and Flourishing Fund, see:/Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
JueYan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:43 None full 4049
isggu3woGwkpYzqwW_EA EA - Presenting: 2022 Incubated Charities (Charity Entrepreneurship) by KarolinaSarek Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Presenting: 2022 Incubated Charities (Charity Entrepreneurship), published by KarolinaSarek on December 8, 2022 on The Effective Altruism Forum.We are proud to announce that 5 new charitable organizations have been launched from our June-August 2022 Incubation Program. Nine high-potential individuals graduated from our two-month intensive training program. The Incubation Program has been designed to teach participants everything they need to know to launch and run an extremely effective, high-impact charity. From analyzing the cost-effectiveness of an intervention, all the way to crafting a proposal for funding, the program participants are equipped with the knowledge and confidence they need to see their chosen intervention become a reality. Eight have gone on to start new effective nonprofits focused on policy, mental health, family planning and EA meta cause areas, and one participant was hired by Charity Entrepreneurship as a Research Analyst. They will be joining our 2023 cohort.Thanks to our generous CE Seed Network of funders, we have helped to secure $732,000 in funding for the organizations, and will further support them with mentorship, operational support, free co-working space in London, and access to a constantly growing entrepreneurial community of funders, advisors, interns and other charity founders.The 2022 incubated charities are:Center for Effective Aid Policy- identifying and promoting high-impact development policies and interventions.Centre for Exploratory Altruism Research (CEARCH)- conducting cause prioritization research and outreach.Maternal Health Initiative- producing transformative benefits to women’s health, agency, and income through increased access to family planning.Kaya Guides- reducing depression and anxiety among youth in low-and middle-income countries.Vida Plena- building strong mental health in Latin America.CENTER FOR EFFECTIVE AID POLICYCo-founders: Jacob Wood, Mathias BondeWebsite: aidpolicy.orgEmail address: contact@aidpolicy.org CE incubation grant: $170,000Description of the intervention:The Center for Effective Aid Policy will work on identifying and advocating for effective solutions in aid policy. This may include:Increasing international development aidIncreasing budget allocation to specific effective development programsIntroducing new effective development interventions into aid budgetsRevising processes which result in improved development aid effectivenessBackground of the intervention:$179 billion was spent on development aid in 2021 - that is roughly 240x the amount of money that GiveWell has moved since 2009. While well-intentioned, there is a broad consensus among experts, think tanks, and implementing partners alike that aid effectiveness can be vastly improved.The Center for Effective Aid Policy believes tractable interventions exist in the development aid space that will result in improved aid spending and better outcomes for its recipients. You can read more in their recent EA Forum post.Near-term plans:In 2022-2023, The Center for Effective Aid Policy will identify policy windows and formulate impactful and practical-to-implement policies, which they will advocate to governments and NGOs. They conservatively estimate their chances of advocacy success at $5.62 per DALY - more than an order of magnitude higher than multiple GiveWell-recommended charities.CENTRE FOR EXPLORATORY ALTRUISM RESEARCH (CEARCH)Founder: Joel TanWebsite: exploratory-altruism.orgContact: incubation grant: $100,000Description of the intervention:CEARCH conducts cause prioritization research and outreach - identifying the most important problems in the world and directing resources towards solving them, so as to maximize global welfare.Background of the intervention:There are many potential cause areas (e.g.,...]]>
KarolinaSarek https://forum.effectivealtruism.org/posts/isggu3woGwkpYzqwW/presenting-2022-incubated-charities-charity-entrepreneurship Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Presenting: 2022 Incubated Charities (Charity Entrepreneurship), published by KarolinaSarek on December 8, 2022 on The Effective Altruism Forum.We are proud to announce that 5 new charitable organizations have been launched from our June-August 2022 Incubation Program. Nine high-potential individuals graduated from our two-month intensive training program. The Incubation Program has been designed to teach participants everything they need to know to launch and run an extremely effective, high-impact charity. From analyzing the cost-effectiveness of an intervention, all the way to crafting a proposal for funding, the program participants are equipped with the knowledge and confidence they need to see their chosen intervention become a reality. Eight have gone on to start new effective nonprofits focused on policy, mental health, family planning and EA meta cause areas, and one participant was hired by Charity Entrepreneurship as a Research Analyst. They will be joining our 2023 cohort.Thanks to our generous CE Seed Network of funders, we have helped to secure $732,000 in funding for the organizations, and will further support them with mentorship, operational support, free co-working space in London, and access to a constantly growing entrepreneurial community of funders, advisors, interns and other charity founders.The 2022 incubated charities are:Center for Effective Aid Policy- identifying and promoting high-impact development policies and interventions.Centre for Exploratory Altruism Research (CEARCH)- conducting cause prioritization research and outreach.Maternal Health Initiative- producing transformative benefits to women’s health, agency, and income through increased access to family planning.Kaya Guides- reducing depression and anxiety among youth in low-and middle-income countries.Vida Plena- building strong mental health in Latin America.CENTER FOR EFFECTIVE AID POLICYCo-founders: Jacob Wood, Mathias BondeWebsite: aidpolicy.orgEmail address: contact@aidpolicy.org CE incubation grant: $170,000Description of the intervention:The Center for Effective Aid Policy will work on identifying and advocating for effective solutions in aid policy. This may include:Increasing international development aidIncreasing budget allocation to specific effective development programsIntroducing new effective development interventions into aid budgetsRevising processes which result in improved development aid effectivenessBackground of the intervention:$179 billion was spent on development aid in 2021 - that is roughly 240x the amount of money that GiveWell has moved since 2009. While well-intentioned, there is a broad consensus among experts, think tanks, and implementing partners alike that aid effectiveness can be vastly improved.The Center for Effective Aid Policy believes tractable interventions exist in the development aid space that will result in improved aid spending and better outcomes for its recipients. You can read more in their recent EA Forum post.Near-term plans:In 2022-2023, The Center for Effective Aid Policy will identify policy windows and formulate impactful and practical-to-implement policies, which they will advocate to governments and NGOs. They conservatively estimate their chances of advocacy success at $5.62 per DALY - more than an order of magnitude higher than multiple GiveWell-recommended charities.CENTRE FOR EXPLORATORY ALTRUISM RESEARCH (CEARCH)Founder: Joel TanWebsite: exploratory-altruism.orgContact: incubation grant: $100,000Description of the intervention:CEARCH conducts cause prioritization research and outreach - identifying the most important problems in the world and directing resources towards solving them, so as to maximize global welfare.Background of the intervention:There are many potential cause areas (e.g.,...]]>
Thu, 08 Dec 2022 18:31:46 +0000 EA - Presenting: 2022 Incubated Charities (Charity Entrepreneurship) by KarolinaSarek Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Presenting: 2022 Incubated Charities (Charity Entrepreneurship), published by KarolinaSarek on December 8, 2022 on The Effective Altruism Forum.We are proud to announce that 5 new charitable organizations have been launched from our June-August 2022 Incubation Program. Nine high-potential individuals graduated from our two-month intensive training program. The Incubation Program has been designed to teach participants everything they need to know to launch and run an extremely effective, high-impact charity. From analyzing the cost-effectiveness of an intervention, all the way to crafting a proposal for funding, the program participants are equipped with the knowledge and confidence they need to see their chosen intervention become a reality. Eight have gone on to start new effective nonprofits focused on policy, mental health, family planning and EA meta cause areas, and one participant was hired by Charity Entrepreneurship as a Research Analyst. They will be joining our 2023 cohort.Thanks to our generous CE Seed Network of funders, we have helped to secure $732,000 in funding for the organizations, and will further support them with mentorship, operational support, free co-working space in London, and access to a constantly growing entrepreneurial community of funders, advisors, interns and other charity founders.The 2022 incubated charities are:Center for Effective Aid Policy- identifying and promoting high-impact development policies and interventions.Centre for Exploratory Altruism Research (CEARCH)- conducting cause prioritization research and outreach.Maternal Health Initiative- producing transformative benefits to women’s health, agency, and income through increased access to family planning.Kaya Guides- reducing depression and anxiety among youth in low-and middle-income countries.Vida Plena- building strong mental health in Latin America.CENTER FOR EFFECTIVE AID POLICYCo-founders: Jacob Wood, Mathias BondeWebsite: aidpolicy.orgEmail address: contact@aidpolicy.org CE incubation grant: $170,000Description of the intervention:The Center for Effective Aid Policy will work on identifying and advocating for effective solutions in aid policy. This may include:Increasing international development aidIncreasing budget allocation to specific effective development programsIntroducing new effective development interventions into aid budgetsRevising processes which result in improved development aid effectivenessBackground of the intervention:$179 billion was spent on development aid in 2021 - that is roughly 240x the amount of money that GiveWell has moved since 2009. While well-intentioned, there is a broad consensus among experts, think tanks, and implementing partners alike that aid effectiveness can be vastly improved.The Center for Effective Aid Policy believes tractable interventions exist in the development aid space that will result in improved aid spending and better outcomes for its recipients. You can read more in their recent EA Forum post.Near-term plans:In 2022-2023, The Center for Effective Aid Policy will identify policy windows and formulate impactful and practical-to-implement policies, which they will advocate to governments and NGOs. They conservatively estimate their chances of advocacy success at $5.62 per DALY - more than an order of magnitude higher than multiple GiveWell-recommended charities.CENTRE FOR EXPLORATORY ALTRUISM RESEARCH (CEARCH)Founder: Joel TanWebsite: exploratory-altruism.orgContact: incubation grant: $100,000Description of the intervention:CEARCH conducts cause prioritization research and outreach - identifying the most important problems in the world and directing resources towards solving them, so as to maximize global welfare.Background of the intervention:There are many potential cause areas (e.g.,...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Presenting: 2022 Incubated Charities (Charity Entrepreneurship), published by KarolinaSarek on December 8, 2022 on The Effective Altruism Forum.We are proud to announce that 5 new charitable organizations have been launched from our June-August 2022 Incubation Program. Nine high-potential individuals graduated from our two-month intensive training program. The Incubation Program has been designed to teach participants everything they need to know to launch and run an extremely effective, high-impact charity. From analyzing the cost-effectiveness of an intervention, all the way to crafting a proposal for funding, the program participants are equipped with the knowledge and confidence they need to see their chosen intervention become a reality. Eight have gone on to start new effective nonprofits focused on policy, mental health, family planning and EA meta cause areas, and one participant was hired by Charity Entrepreneurship as a Research Analyst. They will be joining our 2023 cohort.Thanks to our generous CE Seed Network of funders, we have helped to secure $732,000 in funding for the organizations, and will further support them with mentorship, operational support, free co-working space in London, and access to a constantly growing entrepreneurial community of funders, advisors, interns and other charity founders.The 2022 incubated charities are:Center for Effective Aid Policy- identifying and promoting high-impact development policies and interventions.Centre for Exploratory Altruism Research (CEARCH)- conducting cause prioritization research and outreach.Maternal Health Initiative- producing transformative benefits to women’s health, agency, and income through increased access to family planning.Kaya Guides- reducing depression and anxiety among youth in low-and middle-income countries.Vida Plena- building strong mental health in Latin America.CENTER FOR EFFECTIVE AID POLICYCo-founders: Jacob Wood, Mathias BondeWebsite: aidpolicy.orgEmail address: contact@aidpolicy.org CE incubation grant: $170,000Description of the intervention:The Center for Effective Aid Policy will work on identifying and advocating for effective solutions in aid policy. This may include:Increasing international development aidIncreasing budget allocation to specific effective development programsIntroducing new effective development interventions into aid budgetsRevising processes which result in improved development aid effectivenessBackground of the intervention:$179 billion was spent on development aid in 2021 - that is roughly 240x the amount of money that GiveWell has moved since 2009. While well-intentioned, there is a broad consensus among experts, think tanks, and implementing partners alike that aid effectiveness can be vastly improved.The Center for Effective Aid Policy believes tractable interventions exist in the development aid space that will result in improved aid spending and better outcomes for its recipients. You can read more in their recent EA Forum post.Near-term plans:In 2022-2023, The Center for Effective Aid Policy will identify policy windows and formulate impactful and practical-to-implement policies, which they will advocate to governments and NGOs. They conservatively estimate their chances of advocacy success at $5.62 per DALY - more than an order of magnitude higher than multiple GiveWell-recommended charities.CENTRE FOR EXPLORATORY ALTRUISM RESEARCH (CEARCH)Founder: Joel TanWebsite: exploratory-altruism.orgContact: incubation grant: $100,000Description of the intervention:CEARCH conducts cause prioritization research and outreach - identifying the most important problems in the world and directing resources towards solving them, so as to maximize global welfare.Background of the intervention:There are many potential cause areas (e.g.,...]]>
KarolinaSarek https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:33 None full 4048
pt28xhxL2DEej8DyJ_EA EA - You should consider launching an AI startup by Joshc Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You should consider launching an AI startup, published by Joshc on December 8, 2022 on The Effective Altruism Forum.Edit: I've noticed I've been getting a lot of downvotes. If you want to reduce the probability that people go this route (myself included), it would be helpful if you voiced your disagreements. Also, opinions are my own.The prevailing view in this community is that we are talent constrained, not resource constrained. Earn to give has gone out of fashion. Now, community builders mostly push people towards AI research or strategy.Maybe this is a mistake. Or if it made sense previously, perhaps we should update in light of recent events.We could use a lot more moneyAI Safety researchers will need a lot of money for compute later...Given how expensive AI research is becoming, research orgs like Redwood research and The Center for AI Safety will need a lot of funding in the future. Where is that money going to come from? How are they going to fine-tune the latest models when OpenAI is spending billions? If we want there to be a favorable safety-capabilities balance, there needs to be a favorable safety-capabilities funding balance!Money can be converted to talentCurrently, the primary recruitment strategy appears to be to persuade people to take X-risk arguments seriously. But not many people change their careers because of abstract arguments. Influencing incentives seems more promising to me; first, for directly moving talent, but also for building safety culture. The more appealing it is to work on AI Safety, the more willing people will be to engage with arguments. Research incentives can be shifted in a variety of ways using money:Fund AI Safety Industry labs. If there are more well-paying AI safety jobs, people will seek to fill them.Build a massive safety compute cluster. Academics feel locked out. They can barely fine-tune GPT-2 sized models while industry models crush every NLP benchmark. Offering compute for safety research is a golden opportunity to shift incentives. Academia is a wellspring of ML talent. If safety dominates there, it will trickle into industry.Competitions. The jury is still out on whether you can use ungodly amounts of money to motivate talent; but the fact that you can always raise the bounties make competitions scalable.Grants. A common way to grow research fields is to simply fund the research you would like to see more of (i.e. run RfPs). Apparently, the standard interventions tend to work. Accountability mechanisms have to be solid, but government orgs like DARPA appear to be good at this.Fellowships. Many professors could take on more graduate students if they had funding for them. For the small cost of $100,000 per year (one-third that in Canada), you can essentially 'hire' a graduate student to produce AI safety research. Graduate students are often more capable than industry research engineers and even come with free mentorship (a professor!). In addition to building new safety labs, we can start converting existing ones and giving them industry-level resources.'Talent' might not even be an important input in the future (relative to resources)The techniques in Holden's "How might we align transformative AI if it’s developed very soon?" are generally more resource-dependent than talent dependent. This is consistent with a general trend in ML research towards engineering and datasets and away from small numbers of people having creative insights.The most effective ways to select against deception could involve extensive red-teaming (e.g. 'honey pots'), or carefully constructing datasets / environments that don't produce instrumental self-preservation tendencies early on in the training process. The most effective ways to guard against distribution shift could be to test the AIs in a wide variety of scenario...]]>
Joshc https://forum.effectivealtruism.org/posts/pt28xhxL2DEej8DyJ/you-should-consider-launching-an-ai-startup Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You should consider launching an AI startup, published by Joshc on December 8, 2022 on The Effective Altruism Forum.Edit: I've noticed I've been getting a lot of downvotes. If you want to reduce the probability that people go this route (myself included), it would be helpful if you voiced your disagreements. Also, opinions are my own.The prevailing view in this community is that we are talent constrained, not resource constrained. Earn to give has gone out of fashion. Now, community builders mostly push people towards AI research or strategy.Maybe this is a mistake. Or if it made sense previously, perhaps we should update in light of recent events.We could use a lot more moneyAI Safety researchers will need a lot of money for compute later...Given how expensive AI research is becoming, research orgs like Redwood research and The Center for AI Safety will need a lot of funding in the future. Where is that money going to come from? How are they going to fine-tune the latest models when OpenAI is spending billions? If we want there to be a favorable safety-capabilities balance, there needs to be a favorable safety-capabilities funding balance!Money can be converted to talentCurrently, the primary recruitment strategy appears to be to persuade people to take X-risk arguments seriously. But not many people change their careers because of abstract arguments. Influencing incentives seems more promising to me; first, for directly moving talent, but also for building safety culture. The more appealing it is to work on AI Safety, the more willing people will be to engage with arguments. Research incentives can be shifted in a variety of ways using money:Fund AI Safety Industry labs. If there are more well-paying AI safety jobs, people will seek to fill them.Build a massive safety compute cluster. Academics feel locked out. They can barely fine-tune GPT-2 sized models while industry models crush every NLP benchmark. Offering compute for safety research is a golden opportunity to shift incentives. Academia is a wellspring of ML talent. If safety dominates there, it will trickle into industry.Competitions. The jury is still out on whether you can use ungodly amounts of money to motivate talent; but the fact that you can always raise the bounties make competitions scalable.Grants. A common way to grow research fields is to simply fund the research you would like to see more of (i.e. run RfPs). Apparently, the standard interventions tend to work. Accountability mechanisms have to be solid, but government orgs like DARPA appear to be good at this.Fellowships. Many professors could take on more graduate students if they had funding for them. For the small cost of $100,000 per year (one-third that in Canada), you can essentially 'hire' a graduate student to produce AI safety research. Graduate students are often more capable than industry research engineers and even come with free mentorship (a professor!). In addition to building new safety labs, we can start converting existing ones and giving them industry-level resources.'Talent' might not even be an important input in the future (relative to resources)The techniques in Holden's "How might we align transformative AI if it’s developed very soon?" are generally more resource-dependent than talent dependent. This is consistent with a general trend in ML research towards engineering and datasets and away from small numbers of people having creative insights.The most effective ways to select against deception could involve extensive red-teaming (e.g. 'honey pots'), or carefully constructing datasets / environments that don't produce instrumental self-preservation tendencies early on in the training process. The most effective ways to guard against distribution shift could be to test the AIs in a wide variety of scenario...]]>
Thu, 08 Dec 2022 06:35:12 +0000 EA - You should consider launching an AI startup by Joshc Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You should consider launching an AI startup, published by Joshc on December 8, 2022 on The Effective Altruism Forum.Edit: I've noticed I've been getting a lot of downvotes. If you want to reduce the probability that people go this route (myself included), it would be helpful if you voiced your disagreements. Also, opinions are my own.The prevailing view in this community is that we are talent constrained, not resource constrained. Earn to give has gone out of fashion. Now, community builders mostly push people towards AI research or strategy.Maybe this is a mistake. Or if it made sense previously, perhaps we should update in light of recent events.We could use a lot more moneyAI Safety researchers will need a lot of money for compute later...Given how expensive AI research is becoming, research orgs like Redwood research and The Center for AI Safety will need a lot of funding in the future. Where is that money going to come from? How are they going to fine-tune the latest models when OpenAI is spending billions? If we want there to be a favorable safety-capabilities balance, there needs to be a favorable safety-capabilities funding balance!Money can be converted to talentCurrently, the primary recruitment strategy appears to be to persuade people to take X-risk arguments seriously. But not many people change their careers because of abstract arguments. Influencing incentives seems more promising to me; first, for directly moving talent, but also for building safety culture. The more appealing it is to work on AI Safety, the more willing people will be to engage with arguments. Research incentives can be shifted in a variety of ways using money:Fund AI Safety Industry labs. If there are more well-paying AI safety jobs, people will seek to fill them.Build a massive safety compute cluster. Academics feel locked out. They can barely fine-tune GPT-2 sized models while industry models crush every NLP benchmark. Offering compute for safety research is a golden opportunity to shift incentives. Academia is a wellspring of ML talent. If safety dominates there, it will trickle into industry.Competitions. The jury is still out on whether you can use ungodly amounts of money to motivate talent; but the fact that you can always raise the bounties make competitions scalable.Grants. A common way to grow research fields is to simply fund the research you would like to see more of (i.e. run RfPs). Apparently, the standard interventions tend to work. Accountability mechanisms have to be solid, but government orgs like DARPA appear to be good at this.Fellowships. Many professors could take on more graduate students if they had funding for them. For the small cost of $100,000 per year (one-third that in Canada), you can essentially 'hire' a graduate student to produce AI safety research. Graduate students are often more capable than industry research engineers and even come with free mentorship (a professor!). In addition to building new safety labs, we can start converting existing ones and giving them industry-level resources.'Talent' might not even be an important input in the future (relative to resources)The techniques in Holden's "How might we align transformative AI if it’s developed very soon?" are generally more resource-dependent than talent dependent. This is consistent with a general trend in ML research towards engineering and datasets and away from small numbers of people having creative insights.The most effective ways to select against deception could involve extensive red-teaming (e.g. 'honey pots'), or carefully constructing datasets / environments that don't produce instrumental self-preservation tendencies early on in the training process. The most effective ways to guard against distribution shift could be to test the AIs in a wide variety of scenario...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You should consider launching an AI startup, published by Joshc on December 8, 2022 on The Effective Altruism Forum.Edit: I've noticed I've been getting a lot of downvotes. If you want to reduce the probability that people go this route (myself included), it would be helpful if you voiced your disagreements. Also, opinions are my own.The prevailing view in this community is that we are talent constrained, not resource constrained. Earn to give has gone out of fashion. Now, community builders mostly push people towards AI research or strategy.Maybe this is a mistake. Or if it made sense previously, perhaps we should update in light of recent events.We could use a lot more moneyAI Safety researchers will need a lot of money for compute later...Given how expensive AI research is becoming, research orgs like Redwood research and The Center for AI Safety will need a lot of funding in the future. Where is that money going to come from? How are they going to fine-tune the latest models when OpenAI is spending billions? If we want there to be a favorable safety-capabilities balance, there needs to be a favorable safety-capabilities funding balance!Money can be converted to talentCurrently, the primary recruitment strategy appears to be to persuade people to take X-risk arguments seriously. But not many people change their careers because of abstract arguments. Influencing incentives seems more promising to me; first, for directly moving talent, but also for building safety culture. The more appealing it is to work on AI Safety, the more willing people will be to engage with arguments. Research incentives can be shifted in a variety of ways using money:Fund AI Safety Industry labs. If there are more well-paying AI safety jobs, people will seek to fill them.Build a massive safety compute cluster. Academics feel locked out. They can barely fine-tune GPT-2 sized models while industry models crush every NLP benchmark. Offering compute for safety research is a golden opportunity to shift incentives. Academia is a wellspring of ML talent. If safety dominates there, it will trickle into industry.Competitions. The jury is still out on whether you can use ungodly amounts of money to motivate talent; but the fact that you can always raise the bounties make competitions scalable.Grants. A common way to grow research fields is to simply fund the research you would like to see more of (i.e. run RfPs). Apparently, the standard interventions tend to work. Accountability mechanisms have to be solid, but government orgs like DARPA appear to be good at this.Fellowships. Many professors could take on more graduate students if they had funding for them. For the small cost of $100,000 per year (one-third that in Canada), you can essentially 'hire' a graduate student to produce AI safety research. Graduate students are often more capable than industry research engineers and even come with free mentorship (a professor!). In addition to building new safety labs, we can start converting existing ones and giving them industry-level resources.'Talent' might not even be an important input in the future (relative to resources)The techniques in Holden's "How might we align transformative AI if it’s developed very soon?" are generally more resource-dependent than talent dependent. This is consistent with a general trend in ML research towards engineering and datasets and away from small numbers of people having creative insights.The most effective ways to select against deception could involve extensive red-teaming (e.g. 'honey pots'), or carefully constructing datasets / environments that don't produce instrumental self-preservation tendencies early on in the training process. The most effective ways to guard against distribution shift could be to test the AIs in a wide variety of scenario...]]>
Joshc https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:57 None full 4041
JFyzCv5YynN665nH8_EA EA - Thoughts on AGI organizations and capabilities work by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on AGI organizations and capabilities work, published by RobBensinger on December 7, 2022 on The Effective Altruism Forum.(Note: This essay was largely written by Rob, based on notes from Nate. It’s formatted as Rob-paraphrasing-Nate because (a) Nate didn’t have time to rephrase everything into his own words, and (b) most of the impetus for this post came from Eliezer wanting MIRI to praise a recent OpenAI post and Rob wanting to share more MIRI-thoughts about the space of AGI organizations, so it felt a bit less like a Nate-post than usual.)Nate and I have been happy about the AGI conversation seeming more honest and “real” recently. To contribute to that, I’ve collected some general Nate-thoughts in this post, even though they’re relatively informal and disorganized.AGI development is a critically important topic, and the world should obviously be able to hash out such topics in conversation. (Even though it can feel weird or intimidating, and even though there’s inevitably some social weirdness in sometimes saying negative things about people you like and sometimes collaborate with.) My hope is that we'll be able to make faster and better progress if we move the conversational norms further toward candor and substantive discussion of disagreements, as opposed to saying everything behind a veil of collegial obscurity.Capabilities work is currently a bad ideaNate’s top-level view is that ideally, Earth should take a break on doing work that might move us closer to AGI, until we understand alignment better.That move isn’t available to us, but individual researchers and organizations who choose not to burn the timeline are helping the world, even if other researchers and orgs don't reciprocate. You can unilaterally lengthen timelines, and give humanity more chances of success, by choosing not to personally shorten them.Nate thinks capabilities work is currently a bad idea for a few reasons:He doesn’t buy that current capabilities work is a likely path to ultimately solving alignment.Insofar as current capabilities work does seem helpful for alignment, it strikes him as helping with parallelizable research goals, whereas our bottleneck is serial research goals. (See A note about differential technological development.)Nate doesn’t buy that we need more capabilities progress before we can start finding a better path.This is not to say that capabilities work is never useful for alignment, or that alignment progress is never bottlenecked on capabilities progress. As an extreme example, having a working AGI on hand tomorrow would indeed make it easier to run experiments that teach us things about alignment! But in a world where we build AGI tomorrow, we're dead, because we won't have time to get a firm understanding of alignment before AGI technology proliferates and someone accidentally destroys the world. Capabilities progress can be useful in various ways, while still being harmful on net.(Also, to be clear: AGI capabilities are obviously an essential part of humanity's long-term path to good outcomes, and it's important to develop them at some point — the sooner the better, once we're confident this will have good outcomes — and it would be catastrophically bad to delay realizing them forever.)On Nate’s view, the field should do experiments with ML systems, not just abstract theory. But if he were magically in charge of the world's collective ML efforts, he would put a pause on further capabilities work until we've had more time to orient to the problem, consider the option space, and think our way to some sort of plan-that-will-actually-probably-work. It’s not as though we’re hurting for ML systems to study today, and our understanding already lags far behind today’s systems' capabilities.Publishing capabilities advances is even more obviously b...]]>
RobBensinger https://forum.effectivealtruism.org/posts/JFyzCv5YynN665nH8/thoughts-on-agi-organizations-and-capabilities-work Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on AGI organizations and capabilities work, published by RobBensinger on December 7, 2022 on The Effective Altruism Forum.(Note: This essay was largely written by Rob, based on notes from Nate. It’s formatted as Rob-paraphrasing-Nate because (a) Nate didn’t have time to rephrase everything into his own words, and (b) most of the impetus for this post came from Eliezer wanting MIRI to praise a recent OpenAI post and Rob wanting to share more MIRI-thoughts about the space of AGI organizations, so it felt a bit less like a Nate-post than usual.)Nate and I have been happy about the AGI conversation seeming more honest and “real” recently. To contribute to that, I’ve collected some general Nate-thoughts in this post, even though they’re relatively informal and disorganized.AGI development is a critically important topic, and the world should obviously be able to hash out such topics in conversation. (Even though it can feel weird or intimidating, and even though there’s inevitably some social weirdness in sometimes saying negative things about people you like and sometimes collaborate with.) My hope is that we'll be able to make faster and better progress if we move the conversational norms further toward candor and substantive discussion of disagreements, as opposed to saying everything behind a veil of collegial obscurity.Capabilities work is currently a bad ideaNate’s top-level view is that ideally, Earth should take a break on doing work that might move us closer to AGI, until we understand alignment better.That move isn’t available to us, but individual researchers and organizations who choose not to burn the timeline are helping the world, even if other researchers and orgs don't reciprocate. You can unilaterally lengthen timelines, and give humanity more chances of success, by choosing not to personally shorten them.Nate thinks capabilities work is currently a bad idea for a few reasons:He doesn’t buy that current capabilities work is a likely path to ultimately solving alignment.Insofar as current capabilities work does seem helpful for alignment, it strikes him as helping with parallelizable research goals, whereas our bottleneck is serial research goals. (See A note about differential technological development.)Nate doesn’t buy that we need more capabilities progress before we can start finding a better path.This is not to say that capabilities work is never useful for alignment, or that alignment progress is never bottlenecked on capabilities progress. As an extreme example, having a working AGI on hand tomorrow would indeed make it easier to run experiments that teach us things about alignment! But in a world where we build AGI tomorrow, we're dead, because we won't have time to get a firm understanding of alignment before AGI technology proliferates and someone accidentally destroys the world. Capabilities progress can be useful in various ways, while still being harmful on net.(Also, to be clear: AGI capabilities are obviously an essential part of humanity's long-term path to good outcomes, and it's important to develop them at some point — the sooner the better, once we're confident this will have good outcomes — and it would be catastrophically bad to delay realizing them forever.)On Nate’s view, the field should do experiments with ML systems, not just abstract theory. But if he were magically in charge of the world's collective ML efforts, he would put a pause on further capabilities work until we've had more time to orient to the problem, consider the option space, and think our way to some sort of plan-that-will-actually-probably-work. It’s not as though we’re hurting for ML systems to study today, and our understanding already lags far behind today’s systems' capabilities.Publishing capabilities advances is even more obviously b...]]>
Thu, 08 Dec 2022 04:10:46 +0000 EA - Thoughts on AGI organizations and capabilities work by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on AGI organizations and capabilities work, published by RobBensinger on December 7, 2022 on The Effective Altruism Forum.(Note: This essay was largely written by Rob, based on notes from Nate. It’s formatted as Rob-paraphrasing-Nate because (a) Nate didn’t have time to rephrase everything into his own words, and (b) most of the impetus for this post came from Eliezer wanting MIRI to praise a recent OpenAI post and Rob wanting to share more MIRI-thoughts about the space of AGI organizations, so it felt a bit less like a Nate-post than usual.)Nate and I have been happy about the AGI conversation seeming more honest and “real” recently. To contribute to that, I’ve collected some general Nate-thoughts in this post, even though they’re relatively informal and disorganized.AGI development is a critically important topic, and the world should obviously be able to hash out such topics in conversation. (Even though it can feel weird or intimidating, and even though there’s inevitably some social weirdness in sometimes saying negative things about people you like and sometimes collaborate with.) My hope is that we'll be able to make faster and better progress if we move the conversational norms further toward candor and substantive discussion of disagreements, as opposed to saying everything behind a veil of collegial obscurity.Capabilities work is currently a bad ideaNate’s top-level view is that ideally, Earth should take a break on doing work that might move us closer to AGI, until we understand alignment better.That move isn’t available to us, but individual researchers and organizations who choose not to burn the timeline are helping the world, even if other researchers and orgs don't reciprocate. You can unilaterally lengthen timelines, and give humanity more chances of success, by choosing not to personally shorten them.Nate thinks capabilities work is currently a bad idea for a few reasons:He doesn’t buy that current capabilities work is a likely path to ultimately solving alignment.Insofar as current capabilities work does seem helpful for alignment, it strikes him as helping with parallelizable research goals, whereas our bottleneck is serial research goals. (See A note about differential technological development.)Nate doesn’t buy that we need more capabilities progress before we can start finding a better path.This is not to say that capabilities work is never useful for alignment, or that alignment progress is never bottlenecked on capabilities progress. As an extreme example, having a working AGI on hand tomorrow would indeed make it easier to run experiments that teach us things about alignment! But in a world where we build AGI tomorrow, we're dead, because we won't have time to get a firm understanding of alignment before AGI technology proliferates and someone accidentally destroys the world. Capabilities progress can be useful in various ways, while still being harmful on net.(Also, to be clear: AGI capabilities are obviously an essential part of humanity's long-term path to good outcomes, and it's important to develop them at some point — the sooner the better, once we're confident this will have good outcomes — and it would be catastrophically bad to delay realizing them forever.)On Nate’s view, the field should do experiments with ML systems, not just abstract theory. But if he were magically in charge of the world's collective ML efforts, he would put a pause on further capabilities work until we've had more time to orient to the problem, consider the option space, and think our way to some sort of plan-that-will-actually-probably-work. It’s not as though we’re hurting for ML systems to study today, and our understanding already lags far behind today’s systems' capabilities.Publishing capabilities advances is even more obviously b...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on AGI organizations and capabilities work, published by RobBensinger on December 7, 2022 on The Effective Altruism Forum.(Note: This essay was largely written by Rob, based on notes from Nate. It’s formatted as Rob-paraphrasing-Nate because (a) Nate didn’t have time to rephrase everything into his own words, and (b) most of the impetus for this post came from Eliezer wanting MIRI to praise a recent OpenAI post and Rob wanting to share more MIRI-thoughts about the space of AGI organizations, so it felt a bit less like a Nate-post than usual.)Nate and I have been happy about the AGI conversation seeming more honest and “real” recently. To contribute to that, I’ve collected some general Nate-thoughts in this post, even though they’re relatively informal and disorganized.AGI development is a critically important topic, and the world should obviously be able to hash out such topics in conversation. (Even though it can feel weird or intimidating, and even though there’s inevitably some social weirdness in sometimes saying negative things about people you like and sometimes collaborate with.) My hope is that we'll be able to make faster and better progress if we move the conversational norms further toward candor and substantive discussion of disagreements, as opposed to saying everything behind a veil of collegial obscurity.Capabilities work is currently a bad ideaNate’s top-level view is that ideally, Earth should take a break on doing work that might move us closer to AGI, until we understand alignment better.That move isn’t available to us, but individual researchers and organizations who choose not to burn the timeline are helping the world, even if other researchers and orgs don't reciprocate. You can unilaterally lengthen timelines, and give humanity more chances of success, by choosing not to personally shorten them.Nate thinks capabilities work is currently a bad idea for a few reasons:He doesn’t buy that current capabilities work is a likely path to ultimately solving alignment.Insofar as current capabilities work does seem helpful for alignment, it strikes him as helping with parallelizable research goals, whereas our bottleneck is serial research goals. (See A note about differential technological development.)Nate doesn’t buy that we need more capabilities progress before we can start finding a better path.This is not to say that capabilities work is never useful for alignment, or that alignment progress is never bottlenecked on capabilities progress. As an extreme example, having a working AGI on hand tomorrow would indeed make it easier to run experiments that teach us things about alignment! But in a world where we build AGI tomorrow, we're dead, because we won't have time to get a firm understanding of alignment before AGI technology proliferates and someone accidentally destroys the world. Capabilities progress can be useful in various ways, while still being harmful on net.(Also, to be clear: AGI capabilities are obviously an essential part of humanity's long-term path to good outcomes, and it's important to develop them at some point — the sooner the better, once we're confident this will have good outcomes — and it would be catastrophically bad to delay realizing them forever.)On Nate’s view, the field should do experiments with ML systems, not just abstract theory. But if he were magically in charge of the world's collective ML efforts, he would put a pause on further capabilities work until we've had more time to orient to the problem, consider the option space, and think our way to some sort of plan-that-will-actually-probably-work. It’s not as though we’re hurting for ML systems to study today, and our understanding already lags far behind today’s systems' capabilities.Publishing capabilities advances is even more obviously b...]]>
RobBensinger https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:36 None full 4043
8Qdc5mPyrfjttLCZn_EA EA - Learning from non-EAs who seek to do good by Siobhan M Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning from non-EAs who seek to do good, published by Siobhan M on December 8, 2022 on The Effective Altruism Forum.Is EA a question, or a community based around ideology?After a year of close interaction with Effective Altruism – and recognizing that the movement is made up of many people with different views – I’m still confused as to whether EA aims to be a question about doing good effectively, or a community based around ideology.In my experience, it’s largely been the latter, but many EAs have expressed – either explicitly or implicitly – that they’d like it to be the former. I see this in the frequent citations of “EA is a question (not an ideology)” and the idea of the scout mindset; and most recently, in a lot of the top comments in the post on suggestions for community changes.As an EA-adjacent individual, I think the single most important thing the EA community could do to become more of a question, rather than an ideology, is to take concrete steps to interact more with, learn from, and collaborate with people outside of EA who seek to do good, without necessarily aiming to bring them into the community.I was a Fellow with Vox’s Future Perfect section last year, and moved to the Bay Area in part to learn more about EA. I want to thank the EA community for letting me spend time in your spaces and learn from your ideas; my view of the world has definitely broadened over the past year, and I hope to continue to be engaged with the community.But EA has never been, and I don’t see it ever becoming, my primary community. The EA community and I have differences in interests, culture, and communication styles, and that’s okay on both ends. As this comment says, the core EA community is not a good fit for everyone!A bit more about me. After college, I worked with IDinsight, a global development data analysis and advisory firm that has collaborated with GiveWell. Then I wrote for Future Perfect, focusing on global development, agriculture, and climate. I care a lot about lowercase effective altruism – trying to make the world better in an effective way – and evidence-based decision making. Some specific ways in which I differ from the average highly-engaged EA (although my, or their, priors could change) is that I’m more sympathetic to non-utilitarian ethical theories (and religion), more sympathetic to person-affecting views, and more skeptical that we can predict how our actions now will impact the long-term future.My personal experience with the EA community has largely been that it’s a community based around an ideology, rather than a question. I probably disagree with both some critics and some EAs in that I don’t think being a community based around ideology is necessarily bad. I come from a religious background, and while I have a complicated relationship with religion, I have a lot of close friends and family members who are religious, and I have a lot of respect for ideology-based communities in many ways. It’s helpful for me to know what their ideology is, as I know going into discussions where we’ll likely differ and where we’ll potentially find common ground.If EA aims to be a community based around ideology, I don’t think much has to change; and the only request I’d have is that EA leadership and general discourse more explicitly position the community this way. It’s frustrating and confusing to interact with a community that publicly claims the importance of moral uncertainty, and then have your ideas dismissed when you’re not a total utilitarian or strong longtermist.That said, a lot of EAs have expressed that they do not want to be a community based around ideology. I appreciated the post about EA disillusionment and agree with some of the recent critical posts around the community for women, but this post is not about the community itse...]]>
Siobhan M https://forum.effectivealtruism.org/posts/8Qdc5mPyrfjttLCZn/learning-from-non-eas-who-seek-to-do-good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning from non-EAs who seek to do good, published by Siobhan M on December 8, 2022 on The Effective Altruism Forum.Is EA a question, or a community based around ideology?After a year of close interaction with Effective Altruism – and recognizing that the movement is made up of many people with different views – I’m still confused as to whether EA aims to be a question about doing good effectively, or a community based around ideology.In my experience, it’s largely been the latter, but many EAs have expressed – either explicitly or implicitly – that they’d like it to be the former. I see this in the frequent citations of “EA is a question (not an ideology)” and the idea of the scout mindset; and most recently, in a lot of the top comments in the post on suggestions for community changes.As an EA-adjacent individual, I think the single most important thing the EA community could do to become more of a question, rather than an ideology, is to take concrete steps to interact more with, learn from, and collaborate with people outside of EA who seek to do good, without necessarily aiming to bring them into the community.I was a Fellow with Vox’s Future Perfect section last year, and moved to the Bay Area in part to learn more about EA. I want to thank the EA community for letting me spend time in your spaces and learn from your ideas; my view of the world has definitely broadened over the past year, and I hope to continue to be engaged with the community.But EA has never been, and I don’t see it ever becoming, my primary community. The EA community and I have differences in interests, culture, and communication styles, and that’s okay on both ends. As this comment says, the core EA community is not a good fit for everyone!A bit more about me. After college, I worked with IDinsight, a global development data analysis and advisory firm that has collaborated with GiveWell. Then I wrote for Future Perfect, focusing on global development, agriculture, and climate. I care a lot about lowercase effective altruism – trying to make the world better in an effective way – and evidence-based decision making. Some specific ways in which I differ from the average highly-engaged EA (although my, or their, priors could change) is that I’m more sympathetic to non-utilitarian ethical theories (and religion), more sympathetic to person-affecting views, and more skeptical that we can predict how our actions now will impact the long-term future.My personal experience with the EA community has largely been that it’s a community based around an ideology, rather than a question. I probably disagree with both some critics and some EAs in that I don’t think being a community based around ideology is necessarily bad. I come from a religious background, and while I have a complicated relationship with religion, I have a lot of close friends and family members who are religious, and I have a lot of respect for ideology-based communities in many ways. It’s helpful for me to know what their ideology is, as I know going into discussions where we’ll likely differ and where we’ll potentially find common ground.If EA aims to be a community based around ideology, I don’t think much has to change; and the only request I’d have is that EA leadership and general discourse more explicitly position the community this way. It’s frustrating and confusing to interact with a community that publicly claims the importance of moral uncertainty, and then have your ideas dismissed when you’re not a total utilitarian or strong longtermist.That said, a lot of EAs have expressed that they do not want to be a community based around ideology. I appreciated the post about EA disillusionment and agree with some of the recent critical posts around the community for women, but this post is not about the community itse...]]>
Thu, 08 Dec 2022 03:45:05 +0000 EA - Learning from non-EAs who seek to do good by Siobhan M Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning from non-EAs who seek to do good, published by Siobhan M on December 8, 2022 on The Effective Altruism Forum.Is EA a question, or a community based around ideology?After a year of close interaction with Effective Altruism – and recognizing that the movement is made up of many people with different views – I’m still confused as to whether EA aims to be a question about doing good effectively, or a community based around ideology.In my experience, it’s largely been the latter, but many EAs have expressed – either explicitly or implicitly – that they’d like it to be the former. I see this in the frequent citations of “EA is a question (not an ideology)” and the idea of the scout mindset; and most recently, in a lot of the top comments in the post on suggestions for community changes.As an EA-adjacent individual, I think the single most important thing the EA community could do to become more of a question, rather than an ideology, is to take concrete steps to interact more with, learn from, and collaborate with people outside of EA who seek to do good, without necessarily aiming to bring them into the community.I was a Fellow with Vox’s Future Perfect section last year, and moved to the Bay Area in part to learn more about EA. I want to thank the EA community for letting me spend time in your spaces and learn from your ideas; my view of the world has definitely broadened over the past year, and I hope to continue to be engaged with the community.But EA has never been, and I don’t see it ever becoming, my primary community. The EA community and I have differences in interests, culture, and communication styles, and that’s okay on both ends. As this comment says, the core EA community is not a good fit for everyone!A bit more about me. After college, I worked with IDinsight, a global development data analysis and advisory firm that has collaborated with GiveWell. Then I wrote for Future Perfect, focusing on global development, agriculture, and climate. I care a lot about lowercase effective altruism – trying to make the world better in an effective way – and evidence-based decision making. Some specific ways in which I differ from the average highly-engaged EA (although my, or their, priors could change) is that I’m more sympathetic to non-utilitarian ethical theories (and religion), more sympathetic to person-affecting views, and more skeptical that we can predict how our actions now will impact the long-term future.My personal experience with the EA community has largely been that it’s a community based around an ideology, rather than a question. I probably disagree with both some critics and some EAs in that I don’t think being a community based around ideology is necessarily bad. I come from a religious background, and while I have a complicated relationship with religion, I have a lot of close friends and family members who are religious, and I have a lot of respect for ideology-based communities in many ways. It’s helpful for me to know what their ideology is, as I know going into discussions where we’ll likely differ and where we’ll potentially find common ground.If EA aims to be a community based around ideology, I don’t think much has to change; and the only request I’d have is that EA leadership and general discourse more explicitly position the community this way. It’s frustrating and confusing to interact with a community that publicly claims the importance of moral uncertainty, and then have your ideas dismissed when you’re not a total utilitarian or strong longtermist.That said, a lot of EAs have expressed that they do not want to be a community based around ideology. I appreciated the post about EA disillusionment and agree with some of the recent critical posts around the community for women, but this post is not about the community itse...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning from non-EAs who seek to do good, published by Siobhan M on December 8, 2022 on The Effective Altruism Forum.Is EA a question, or a community based around ideology?After a year of close interaction with Effective Altruism – and recognizing that the movement is made up of many people with different views – I’m still confused as to whether EA aims to be a question about doing good effectively, or a community based around ideology.In my experience, it’s largely been the latter, but many EAs have expressed – either explicitly or implicitly – that they’d like it to be the former. I see this in the frequent citations of “EA is a question (not an ideology)” and the idea of the scout mindset; and most recently, in a lot of the top comments in the post on suggestions for community changes.As an EA-adjacent individual, I think the single most important thing the EA community could do to become more of a question, rather than an ideology, is to take concrete steps to interact more with, learn from, and collaborate with people outside of EA who seek to do good, without necessarily aiming to bring them into the community.I was a Fellow with Vox’s Future Perfect section last year, and moved to the Bay Area in part to learn more about EA. I want to thank the EA community for letting me spend time in your spaces and learn from your ideas; my view of the world has definitely broadened over the past year, and I hope to continue to be engaged with the community.But EA has never been, and I don’t see it ever becoming, my primary community. The EA community and I have differences in interests, culture, and communication styles, and that’s okay on both ends. As this comment says, the core EA community is not a good fit for everyone!A bit more about me. After college, I worked with IDinsight, a global development data analysis and advisory firm that has collaborated with GiveWell. Then I wrote for Future Perfect, focusing on global development, agriculture, and climate. I care a lot about lowercase effective altruism – trying to make the world better in an effective way – and evidence-based decision making. Some specific ways in which I differ from the average highly-engaged EA (although my, or their, priors could change) is that I’m more sympathetic to non-utilitarian ethical theories (and religion), more sympathetic to person-affecting views, and more skeptical that we can predict how our actions now will impact the long-term future.My personal experience with the EA community has largely been that it’s a community based around an ideology, rather than a question. I probably disagree with both some critics and some EAs in that I don’t think being a community based around ideology is necessarily bad. I come from a religious background, and while I have a complicated relationship with religion, I have a lot of close friends and family members who are religious, and I have a lot of respect for ideology-based communities in many ways. It’s helpful for me to know what their ideology is, as I know going into discussions where we’ll likely differ and where we’ll potentially find common ground.If EA aims to be a community based around ideology, I don’t think much has to change; and the only request I’d have is that EA leadership and general discourse more explicitly position the community this way. It’s frustrating and confusing to interact with a community that publicly claims the importance of moral uncertainty, and then have your ideas dismissed when you’re not a total utilitarian or strong longtermist.That said, a lot of EAs have expressed that they do not want to be a community based around ideology. I appreciated the post about EA disillusionment and agree with some of the recent critical posts around the community for women, but this post is not about the community itse...]]>
Siobhan M https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:33 None full 4042
XYdLTKZLQwTr337zM_EA EA - The Spanish-Speaking Effective Altruism community is awesome by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Spanish-Speaking Effective Altruism community is awesome, published by Jaime Sevilla on December 7, 2022 on The Effective Altruism Forum.Epistemic status: fanboying and biased. It also focuses a lot on the part of the Spanish-Speaking community that I am familiar with, but there are many other aspects I didn’t cover.TL;DR: Exactly what it says in the title. Come meet us at EAGx LatAm! And if you speak Spanish, join the virtual community now!I can’t overstate how impressed I am with the successes of the Spanish-speaking EA community in the last year.The community was in a somewhat sorry state. Several fragmented efforts in Spain, Argentina and elsewhere had been tried. Some great people joined us that way (e.g., the equally feared and loved Nuño Sempere from QURI, the incredible entrepreneur Pablo Melchor of Ayuda Efectiva or the wondrous researcher Juan García from ALLFED). And yet, without anyone dedicated to nurturing it, the community languished.Several things were tried. An online group of Spanish-Speaking community builders met periodically to discuss the situation. EA Argentina and EA Spain joined their slack workspaces. Nothing worked, not for long.In the summer of 2021, at the pandemic's peak, we took a turn for the better. Sandra Malagón and Laura González sought, with the support of the community, a grant to dedicate themselves professionally to growing the Spanish-speaking community.Since then:They have run ~40 cohorts of the online introductory fellowship to Effective Altruism in Spanish, reaching ~150 people. This led, among other things, to the establishment of two university groups in Bogotá and Santiago, thanks in no small part to the initiative of Alejandro Acelas, Laura Arana, Agustín Covarrubias and David Solar.They have run two camps in Mexico and Colombia, widely promoted in Universities and reaching ~80 people. This led to about ~10 people becoming highly engaged community members.They have supported the ongoing community efforts in Colombia, Mexico and Chile, which each now have dedicated community leaders: Jaime Andres Fernández in Colombia; Claudette Salinas and Michelle Bruno in México; the whole Universidad Católica de Chile group in, well, Chile.Laura González led a translation project to get the EA Handbook and other key materials translated into Spanish, and supported the pitching and translation of EA books. More broadly, she contributes to a larger effort to streamline the translation of EA materials to non-English languages.Ángela Aristizábal, who had been doing community building in Colombia, organised a fellows program to nurture young professionals. These fellows have gone on to participate in the United Nations, lead local EA communities and do other amazing projects.The activity in Slack has grown sevenfold since 2020, and it’s on an upward trend.Sandra Malagón ideated and is running a community fellowship program in Mexico City, establishing the city as a budding hub.This is just the beginning. EAGx in Mexico City is nearing. The coordination team has more plans to make the community more professional and foster more impactful projects. And the amount of Spanish EA memes is going through the roof.Here is a highlight of some of my favourite people in the community:Our fearless captain Sandra Malagón, who within one year has kickstarted an EA hub in Mexico and helped raise a community in Chile and Colombia.Laura González, who co-coordinates the Spanish speaking community and leads the Spanish translation project. An awesome master of ceremonies for solemn occasions of every sort, too.Pablo Melchor, founder of Ayuda Efectiva, promoting and enabling effective giving in Spanish-Speaking countries.Javier Prieto, AI Governance and Policy Program Assistant at Open Philanthropy. Mug not included.The elusive Pablo...]]>
Jaime Sevilla https://forum.effectivealtruism.org/posts/XYdLTKZLQwTr337zM/the-spanish-speaking-effective-altruism-community-is-awesome Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Spanish-Speaking Effective Altruism community is awesome, published by Jaime Sevilla on December 7, 2022 on The Effective Altruism Forum.Epistemic status: fanboying and biased. It also focuses a lot on the part of the Spanish-Speaking community that I am familiar with, but there are many other aspects I didn’t cover.TL;DR: Exactly what it says in the title. Come meet us at EAGx LatAm! And if you speak Spanish, join the virtual community now!I can’t overstate how impressed I am with the successes of the Spanish-speaking EA community in the last year.The community was in a somewhat sorry state. Several fragmented efforts in Spain, Argentina and elsewhere had been tried. Some great people joined us that way (e.g., the equally feared and loved Nuño Sempere from QURI, the incredible entrepreneur Pablo Melchor of Ayuda Efectiva or the wondrous researcher Juan García from ALLFED). And yet, without anyone dedicated to nurturing it, the community languished.Several things were tried. An online group of Spanish-Speaking community builders met periodically to discuss the situation. EA Argentina and EA Spain joined their slack workspaces. Nothing worked, not for long.In the summer of 2021, at the pandemic's peak, we took a turn for the better. Sandra Malagón and Laura González sought, with the support of the community, a grant to dedicate themselves professionally to growing the Spanish-speaking community.Since then:They have run ~40 cohorts of the online introductory fellowship to Effective Altruism in Spanish, reaching ~150 people. This led, among other things, to the establishment of two university groups in Bogotá and Santiago, thanks in no small part to the initiative of Alejandro Acelas, Laura Arana, Agustín Covarrubias and David Solar.They have run two camps in Mexico and Colombia, widely promoted in Universities and reaching ~80 people. This led to about ~10 people becoming highly engaged community members.They have supported the ongoing community efforts in Colombia, Mexico and Chile, which each now have dedicated community leaders: Jaime Andres Fernández in Colombia; Claudette Salinas and Michelle Bruno in México; the whole Universidad Católica de Chile group in, well, Chile.Laura González led a translation project to get the EA Handbook and other key materials translated into Spanish, and supported the pitching and translation of EA books. More broadly, she contributes to a larger effort to streamline the translation of EA materials to non-English languages.Ángela Aristizábal, who had been doing community building in Colombia, organised a fellows program to nurture young professionals. These fellows have gone on to participate in the United Nations, lead local EA communities and do other amazing projects.The activity in Slack has grown sevenfold since 2020, and it’s on an upward trend.Sandra Malagón ideated and is running a community fellowship program in Mexico City, establishing the city as a budding hub.This is just the beginning. EAGx in Mexico City is nearing. The coordination team has more plans to make the community more professional and foster more impactful projects. And the amount of Spanish EA memes is going through the roof.Here is a highlight of some of my favourite people in the community:Our fearless captain Sandra Malagón, who within one year has kickstarted an EA hub in Mexico and helped raise a community in Chile and Colombia.Laura González, who co-coordinates the Spanish speaking community and leads the Spanish translation project. An awesome master of ceremonies for solemn occasions of every sort, too.Pablo Melchor, founder of Ayuda Efectiva, promoting and enabling effective giving in Spanish-Speaking countries.Javier Prieto, AI Governance and Policy Program Assistant at Open Philanthropy. Mug not included.The elusive Pablo...]]>
Wed, 07 Dec 2022 18:13:47 +0000 EA - The Spanish-Speaking Effective Altruism community is awesome by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Spanish-Speaking Effective Altruism community is awesome, published by Jaime Sevilla on December 7, 2022 on The Effective Altruism Forum.Epistemic status: fanboying and biased. It also focuses a lot on the part of the Spanish-Speaking community that I am familiar with, but there are many other aspects I didn’t cover.TL;DR: Exactly what it says in the title. Come meet us at EAGx LatAm! And if you speak Spanish, join the virtual community now!I can’t overstate how impressed I am with the successes of the Spanish-speaking EA community in the last year.The community was in a somewhat sorry state. Several fragmented efforts in Spain, Argentina and elsewhere had been tried. Some great people joined us that way (e.g., the equally feared and loved Nuño Sempere from QURI, the incredible entrepreneur Pablo Melchor of Ayuda Efectiva or the wondrous researcher Juan García from ALLFED). And yet, without anyone dedicated to nurturing it, the community languished.Several things were tried. An online group of Spanish-Speaking community builders met periodically to discuss the situation. EA Argentina and EA Spain joined their slack workspaces. Nothing worked, not for long.In the summer of 2021, at the pandemic's peak, we took a turn for the better. Sandra Malagón and Laura González sought, with the support of the community, a grant to dedicate themselves professionally to growing the Spanish-speaking community.Since then:They have run ~40 cohorts of the online introductory fellowship to Effective Altruism in Spanish, reaching ~150 people. This led, among other things, to the establishment of two university groups in Bogotá and Santiago, thanks in no small part to the initiative of Alejandro Acelas, Laura Arana, Agustín Covarrubias and David Solar.They have run two camps in Mexico and Colombia, widely promoted in Universities and reaching ~80 people. This led to about ~10 people becoming highly engaged community members.They have supported the ongoing community efforts in Colombia, Mexico and Chile, which each now have dedicated community leaders: Jaime Andres Fernández in Colombia; Claudette Salinas and Michelle Bruno in México; the whole Universidad Católica de Chile group in, well, Chile.Laura González led a translation project to get the EA Handbook and other key materials translated into Spanish, and supported the pitching and translation of EA books. More broadly, she contributes to a larger effort to streamline the translation of EA materials to non-English languages.Ángela Aristizábal, who had been doing community building in Colombia, organised a fellows program to nurture young professionals. These fellows have gone on to participate in the United Nations, lead local EA communities and do other amazing projects.The activity in Slack has grown sevenfold since 2020, and it’s on an upward trend.Sandra Malagón ideated and is running a community fellowship program in Mexico City, establishing the city as a budding hub.This is just the beginning. EAGx in Mexico City is nearing. The coordination team has more plans to make the community more professional and foster more impactful projects. And the amount of Spanish EA memes is going through the roof.Here is a highlight of some of my favourite people in the community:Our fearless captain Sandra Malagón, who within one year has kickstarted an EA hub in Mexico and helped raise a community in Chile and Colombia.Laura González, who co-coordinates the Spanish speaking community and leads the Spanish translation project. An awesome master of ceremonies for solemn occasions of every sort, too.Pablo Melchor, founder of Ayuda Efectiva, promoting and enabling effective giving in Spanish-Speaking countries.Javier Prieto, AI Governance and Policy Program Assistant at Open Philanthropy. Mug not included.The elusive Pablo...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Spanish-Speaking Effective Altruism community is awesome, published by Jaime Sevilla on December 7, 2022 on The Effective Altruism Forum.Epistemic status: fanboying and biased. It also focuses a lot on the part of the Spanish-Speaking community that I am familiar with, but there are many other aspects I didn’t cover.TL;DR: Exactly what it says in the title. Come meet us at EAGx LatAm! And if you speak Spanish, join the virtual community now!I can’t overstate how impressed I am with the successes of the Spanish-speaking EA community in the last year.The community was in a somewhat sorry state. Several fragmented efforts in Spain, Argentina and elsewhere had been tried. Some great people joined us that way (e.g., the equally feared and loved Nuño Sempere from QURI, the incredible entrepreneur Pablo Melchor of Ayuda Efectiva or the wondrous researcher Juan García from ALLFED). And yet, without anyone dedicated to nurturing it, the community languished.Several things were tried. An online group of Spanish-Speaking community builders met periodically to discuss the situation. EA Argentina and EA Spain joined their slack workspaces. Nothing worked, not for long.In the summer of 2021, at the pandemic's peak, we took a turn for the better. Sandra Malagón and Laura González sought, with the support of the community, a grant to dedicate themselves professionally to growing the Spanish-speaking community.Since then:They have run ~40 cohorts of the online introductory fellowship to Effective Altruism in Spanish, reaching ~150 people. This led, among other things, to the establishment of two university groups in Bogotá and Santiago, thanks in no small part to the initiative of Alejandro Acelas, Laura Arana, Agustín Covarrubias and David Solar.They have run two camps in Mexico and Colombia, widely promoted in Universities and reaching ~80 people. This led to about ~10 people becoming highly engaged community members.They have supported the ongoing community efforts in Colombia, Mexico and Chile, which each now have dedicated community leaders: Jaime Andres Fernández in Colombia; Claudette Salinas and Michelle Bruno in México; the whole Universidad Católica de Chile group in, well, Chile.Laura González led a translation project to get the EA Handbook and other key materials translated into Spanish, and supported the pitching and translation of EA books. More broadly, she contributes to a larger effort to streamline the translation of EA materials to non-English languages.Ángela Aristizábal, who had been doing community building in Colombia, organised a fellows program to nurture young professionals. These fellows have gone on to participate in the United Nations, lead local EA communities and do other amazing projects.The activity in Slack has grown sevenfold since 2020, and it’s on an upward trend.Sandra Malagón ideated and is running a community fellowship program in Mexico City, establishing the city as a budding hub.This is just the beginning. EAGx in Mexico City is nearing. The coordination team has more plans to make the community more professional and foster more impactful projects. And the amount of Spanish EA memes is going through the roof.Here is a highlight of some of my favourite people in the community:Our fearless captain Sandra Malagón, who within one year has kickstarted an EA hub in Mexico and helped raise a community in Chile and Colombia.Laura González, who co-coordinates the Spanish speaking community and leads the Spanish translation project. An awesome master of ceremonies for solemn occasions of every sort, too.Pablo Melchor, founder of Ayuda Efectiva, promoting and enabling effective giving in Spanish-Speaking countries.Javier Prieto, AI Governance and Policy Program Assistant at Open Philanthropy. Mug not included.The elusive Pablo...]]>
Jaime Sevilla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:20 None full 4033
F2YfRtMvHfRJibwkj_EA EA - Promoting compassionate longtermism by jonleighton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Promoting compassionate longtermism, published by jonleighton on December 7, 2022 on The Effective Altruism Forum.This post is in 6 parts, starting with some basic reflections on suffering and ethics, and ending with a brief project description. While this post might seem overly broad-ranging, it’s meant to set out some basic arguments and explain the rationale for the project initiative in the last section, for which we are looking for support and collaboration. I go into much greater detail about some of the core ethical ideas in a new book about to be published, which I will present soon in a separate post. I also make several references here to Will MacAskill’s What We Owe the Future, because many of the ideas he expresses are shared by many EAs, and while I agree with many of the things he says, there are some important stances I disagree with that I will explain in this post.My overall motivation is a deep concern about the persistence of extreme suffering far into the future, and the possibility to take productive steps now to reduce the likelihood of that happening, thereby increasing the likelihood that the future will be a flourishing one.Summary:Suffering has an inherent call to action, and some suffering literally makes non-existence preferable.For various reasons, there are mixed attitudes within EA towards addressing suffering as a priority.We may not have the time to delay value lock-in for too long, and we already know some of the key principles.Increasing our efforts to prevent intense suffering in the short term may be important for preventing the lock-in of uncompassionate values.There’s an urgent need to research and promote mechanisms that can stabilise compassionate governance at the global level.OPIS is initiating research and film projects to widely communicate these ideas and concrete steps that can already be taken, and we are looking for support and collaboration.1. Some reflections on sufferingInvoluntary suffering is inherently bad – one could argue that this is ultimately what “bad” means – but extreme, unbearable suffering is especially bad, to the point that non-existence is literally a preferable option. At this level, people choose to end their lives if they can in order to escape the pain.We probably cannot fully grasp what it’s like to experience extreme suffering unless we have experienced it ourselves. To get even an approximate sense of what it’s like requires engaging with accounts and depictions of it. If not, we may underestimate its significance and attribute much lower priority to it than it deserves. As an example, a patient with a terrible condition called SUNCT whom I provided support to, who at one point attempted suicide, described in a presentation we recently gave together in Geneva the utter hell he experienced, and how no one should ever have to experience what he did.Intense suffering has an inherent call to action – we respond to it whenever we try to help people in severe pain, or animals being tortured on factory farms.There is no equivalent inherent urgency to fill the void and bring new sentient beings into existence, even though this is an understandable desire of intelligent beings who already exist.Intentionally bringing into existence a sentient being who will definitely experience extreme/unbearable suffering could be considered uncompassionate and even cruel.I don’t think the above reflections should be particularly controversial. Even someone who would like to fill the universe with blissful beings might still concede that the project doesn’t have an inherent urgency – that is, that it could be delayed for some time, or even indefinitely, without harm to anyone (unless you believe, as do some EAs, that every instance of inanimate matter in space and time that isn’t being optimally used ...]]>
jonleighton https://forum.effectivealtruism.org/posts/F2YfRtMvHfRJibwkj/promoting-compassionate-longtermism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Promoting compassionate longtermism, published by jonleighton on December 7, 2022 on The Effective Altruism Forum.This post is in 6 parts, starting with some basic reflections on suffering and ethics, and ending with a brief project description. While this post might seem overly broad-ranging, it’s meant to set out some basic arguments and explain the rationale for the project initiative in the last section, for which we are looking for support and collaboration. I go into much greater detail about some of the core ethical ideas in a new book about to be published, which I will present soon in a separate post. I also make several references here to Will MacAskill’s What We Owe the Future, because many of the ideas he expresses are shared by many EAs, and while I agree with many of the things he says, there are some important stances I disagree with that I will explain in this post.My overall motivation is a deep concern about the persistence of extreme suffering far into the future, and the possibility to take productive steps now to reduce the likelihood of that happening, thereby increasing the likelihood that the future will be a flourishing one.Summary:Suffering has an inherent call to action, and some suffering literally makes non-existence preferable.For various reasons, there are mixed attitudes within EA towards addressing suffering as a priority.We may not have the time to delay value lock-in for too long, and we already know some of the key principles.Increasing our efforts to prevent intense suffering in the short term may be important for preventing the lock-in of uncompassionate values.There’s an urgent need to research and promote mechanisms that can stabilise compassionate governance at the global level.OPIS is initiating research and film projects to widely communicate these ideas and concrete steps that can already be taken, and we are looking for support and collaboration.1. Some reflections on sufferingInvoluntary suffering is inherently bad – one could argue that this is ultimately what “bad” means – but extreme, unbearable suffering is especially bad, to the point that non-existence is literally a preferable option. At this level, people choose to end their lives if they can in order to escape the pain.We probably cannot fully grasp what it’s like to experience extreme suffering unless we have experienced it ourselves. To get even an approximate sense of what it’s like requires engaging with accounts and depictions of it. If not, we may underestimate its significance and attribute much lower priority to it than it deserves. As an example, a patient with a terrible condition called SUNCT whom I provided support to, who at one point attempted suicide, described in a presentation we recently gave together in Geneva the utter hell he experienced, and how no one should ever have to experience what he did.Intense suffering has an inherent call to action – we respond to it whenever we try to help people in severe pain, or animals being tortured on factory farms.There is no equivalent inherent urgency to fill the void and bring new sentient beings into existence, even though this is an understandable desire of intelligent beings who already exist.Intentionally bringing into existence a sentient being who will definitely experience extreme/unbearable suffering could be considered uncompassionate and even cruel.I don’t think the above reflections should be particularly controversial. Even someone who would like to fill the universe with blissful beings might still concede that the project doesn’t have an inherent urgency – that is, that it could be delayed for some time, or even indefinitely, without harm to anyone (unless you believe, as do some EAs, that every instance of inanimate matter in space and time that isn’t being optimally used ...]]>
Wed, 07 Dec 2022 17:16:18 +0000 EA - Promoting compassionate longtermism by jonleighton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Promoting compassionate longtermism, published by jonleighton on December 7, 2022 on The Effective Altruism Forum.This post is in 6 parts, starting with some basic reflections on suffering and ethics, and ending with a brief project description. While this post might seem overly broad-ranging, it’s meant to set out some basic arguments and explain the rationale for the project initiative in the last section, for which we are looking for support and collaboration. I go into much greater detail about some of the core ethical ideas in a new book about to be published, which I will present soon in a separate post. I also make several references here to Will MacAskill’s What We Owe the Future, because many of the ideas he expresses are shared by many EAs, and while I agree with many of the things he says, there are some important stances I disagree with that I will explain in this post.My overall motivation is a deep concern about the persistence of extreme suffering far into the future, and the possibility to take productive steps now to reduce the likelihood of that happening, thereby increasing the likelihood that the future will be a flourishing one.Summary:Suffering has an inherent call to action, and some suffering literally makes non-existence preferable.For various reasons, there are mixed attitudes within EA towards addressing suffering as a priority.We may not have the time to delay value lock-in for too long, and we already know some of the key principles.Increasing our efforts to prevent intense suffering in the short term may be important for preventing the lock-in of uncompassionate values.There’s an urgent need to research and promote mechanisms that can stabilise compassionate governance at the global level.OPIS is initiating research and film projects to widely communicate these ideas and concrete steps that can already be taken, and we are looking for support and collaboration.1. Some reflections on sufferingInvoluntary suffering is inherently bad – one could argue that this is ultimately what “bad” means – but extreme, unbearable suffering is especially bad, to the point that non-existence is literally a preferable option. At this level, people choose to end their lives if they can in order to escape the pain.We probably cannot fully grasp what it’s like to experience extreme suffering unless we have experienced it ourselves. To get even an approximate sense of what it’s like requires engaging with accounts and depictions of it. If not, we may underestimate its significance and attribute much lower priority to it than it deserves. As an example, a patient with a terrible condition called SUNCT whom I provided support to, who at one point attempted suicide, described in a presentation we recently gave together in Geneva the utter hell he experienced, and how no one should ever have to experience what he did.Intense suffering has an inherent call to action – we respond to it whenever we try to help people in severe pain, or animals being tortured on factory farms.There is no equivalent inherent urgency to fill the void and bring new sentient beings into existence, even though this is an understandable desire of intelligent beings who already exist.Intentionally bringing into existence a sentient being who will definitely experience extreme/unbearable suffering could be considered uncompassionate and even cruel.I don’t think the above reflections should be particularly controversial. Even someone who would like to fill the universe with blissful beings might still concede that the project doesn’t have an inherent urgency – that is, that it could be delayed for some time, or even indefinitely, without harm to anyone (unless you believe, as do some EAs, that every instance of inanimate matter in space and time that isn’t being optimally used ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Promoting compassionate longtermism, published by jonleighton on December 7, 2022 on The Effective Altruism Forum.This post is in 6 parts, starting with some basic reflections on suffering and ethics, and ending with a brief project description. While this post might seem overly broad-ranging, it’s meant to set out some basic arguments and explain the rationale for the project initiative in the last section, for which we are looking for support and collaboration. I go into much greater detail about some of the core ethical ideas in a new book about to be published, which I will present soon in a separate post. I also make several references here to Will MacAskill’s What We Owe the Future, because many of the ideas he expresses are shared by many EAs, and while I agree with many of the things he says, there are some important stances I disagree with that I will explain in this post.My overall motivation is a deep concern about the persistence of extreme suffering far into the future, and the possibility to take productive steps now to reduce the likelihood of that happening, thereby increasing the likelihood that the future will be a flourishing one.Summary:Suffering has an inherent call to action, and some suffering literally makes non-existence preferable.For various reasons, there are mixed attitudes within EA towards addressing suffering as a priority.We may not have the time to delay value lock-in for too long, and we already know some of the key principles.Increasing our efforts to prevent intense suffering in the short term may be important for preventing the lock-in of uncompassionate values.There’s an urgent need to research and promote mechanisms that can stabilise compassionate governance at the global level.OPIS is initiating research and film projects to widely communicate these ideas and concrete steps that can already be taken, and we are looking for support and collaboration.1. Some reflections on sufferingInvoluntary suffering is inherently bad – one could argue that this is ultimately what “bad” means – but extreme, unbearable suffering is especially bad, to the point that non-existence is literally a preferable option. At this level, people choose to end their lives if they can in order to escape the pain.We probably cannot fully grasp what it’s like to experience extreme suffering unless we have experienced it ourselves. To get even an approximate sense of what it’s like requires engaging with accounts and depictions of it. If not, we may underestimate its significance and attribute much lower priority to it than it deserves. As an example, a patient with a terrible condition called SUNCT whom I provided support to, who at one point attempted suicide, described in a presentation we recently gave together in Geneva the utter hell he experienced, and how no one should ever have to experience what he did.Intense suffering has an inherent call to action – we respond to it whenever we try to help people in severe pain, or animals being tortured on factory farms.There is no equivalent inherent urgency to fill the void and bring new sentient beings into existence, even though this is an understandable desire of intelligent beings who already exist.Intentionally bringing into existence a sentient being who will definitely experience extreme/unbearable suffering could be considered uncompassionate and even cruel.I don’t think the above reflections should be particularly controversial. Even someone who would like to fill the universe with blissful beings might still concede that the project doesn’t have an inherent urgency – that is, that it could be delayed for some time, or even indefinitely, without harm to anyone (unless you believe, as do some EAs, that every instance of inanimate matter in space and time that isn’t being optimally used ...]]>
jonleighton https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 19:21 None full 4035
secEczgstteSs2c7r_EA EA - What can we learn from the empirical social science literature on the expected contingency of value change? by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What can we learn from the empirical social science literature on the expected contingency of value change?, published by jackva on December 7, 2022 on The Effective Altruism Forum.Note: Quickly written on vacation as I don't and won't have time to flesh this out much more but wanted to get the idea out so that others can work on this insofar as this seems important. I usually write on climate and the last time I seriously thought about value change is a while back.A missing perspective?Reading What We Owe The Future's chapter on the contingency of value change one thing that struck me was that there was -- as far as I can tell -- no reference to the rich empirical literature in political science and sociology on the drivers of value change, e.g. the work on the emergence of post-materialist values by Inglehart and the work of other scholars in this tradition, including the World Value Survey, which maps and seeks to explain value changes in societies around the world.Note that I am not claiming this literature being entirely right, more that this is a large body of literature (including criticisms) that seems very relevant to EA interest in value change, but not discussed.Introductions to this literatureEzra Klein podcast 2022: Conversation with Pippa NorrisWelzel 2021: Why the Future is DemocraticInglehart and Welzel 2010: Changing Mass Priorities: The Link between Modernization and DemocracyInglehart and Welzel 2009: How Development Leads to DemocracyWelzel et al 2003: The theory of human development, a cross-cultural analysis.These sources are quickly chosen and are biased towards Welzel (whom I was a student of in college), so don't take this as a definite treatment - just a window into this literature.The basic point of this literature is that there are good reasons to expect that value change is actually quite predictable and that a certain set of values tend to emerge out of the conditions of modernization. There is a lot more nuance to it, this is not just "old-school" modernization theory, but it is an empirically grounded update against massive contingency in the development of values, as there are predictable relationships between economic development and value change that seem to hold across a broad set of geographies and stages of economic development.To give a flavor, the abstract of Welzel 2021 puts it as follows:"Recent accounts of democratic backsliding neglect the cultural foundations of autocracy-versus-democracy.To bring culture back in, this article demonstrates that 1) countries' membership in culture zones explains some 70 percent of the total cross-national variation in autocracy-versus-democracy; and 2) this culture-bound variation has remained astoundingly constant over time—in spite of all the trending patterns in the global distribution of regime types over the last 120 years. Furthermore, the explanatory power of culture zones over autocracy-versus-democracy is rooted in the cultures' differentiation on "authoritarian-versus-emancipative values." Therefore, both the direction and the extent of regime change are a function of glacially accruing regime-culture misfits—driven by generational value shifts in a predominantly emancipatory direction. Consequently, the backsliding of democracies into authoritarianism is limited to societies in which emancipative values remain underdeveloped.Contrary to the widely cited deconsolidation thesis, the ascendant generational profile of emancipative values means that the momentary challenges to democracy are unlikely to stifle democracy's long-term rise."How it matters: different priors on the contingency of value changeI am wondering(1) whether this literature could be quite useful in forming priors about the contingency of value change and,(2) in particular, serve as useful correctiv...]]>
jackva https://forum.effectivealtruism.org/posts/secEczgstteSs2c7r/what-can-we-learn-from-the-empirical-social-science Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What can we learn from the empirical social science literature on the expected contingency of value change?, published by jackva on December 7, 2022 on The Effective Altruism Forum.Note: Quickly written on vacation as I don't and won't have time to flesh this out much more but wanted to get the idea out so that others can work on this insofar as this seems important. I usually write on climate and the last time I seriously thought about value change is a while back.A missing perspective?Reading What We Owe The Future's chapter on the contingency of value change one thing that struck me was that there was -- as far as I can tell -- no reference to the rich empirical literature in political science and sociology on the drivers of value change, e.g. the work on the emergence of post-materialist values by Inglehart and the work of other scholars in this tradition, including the World Value Survey, which maps and seeks to explain value changes in societies around the world.Note that I am not claiming this literature being entirely right, more that this is a large body of literature (including criticisms) that seems very relevant to EA interest in value change, but not discussed.Introductions to this literatureEzra Klein podcast 2022: Conversation with Pippa NorrisWelzel 2021: Why the Future is DemocraticInglehart and Welzel 2010: Changing Mass Priorities: The Link between Modernization and DemocracyInglehart and Welzel 2009: How Development Leads to DemocracyWelzel et al 2003: The theory of human development, a cross-cultural analysis.These sources are quickly chosen and are biased towards Welzel (whom I was a student of in college), so don't take this as a definite treatment - just a window into this literature.The basic point of this literature is that there are good reasons to expect that value change is actually quite predictable and that a certain set of values tend to emerge out of the conditions of modernization. There is a lot more nuance to it, this is not just "old-school" modernization theory, but it is an empirically grounded update against massive contingency in the development of values, as there are predictable relationships between economic development and value change that seem to hold across a broad set of geographies and stages of economic development.To give a flavor, the abstract of Welzel 2021 puts it as follows:"Recent accounts of democratic backsliding neglect the cultural foundations of autocracy-versus-democracy.To bring culture back in, this article demonstrates that 1) countries' membership in culture zones explains some 70 percent of the total cross-national variation in autocracy-versus-democracy; and 2) this culture-bound variation has remained astoundingly constant over time—in spite of all the trending patterns in the global distribution of regime types over the last 120 years. Furthermore, the explanatory power of culture zones over autocracy-versus-democracy is rooted in the cultures' differentiation on "authoritarian-versus-emancipative values." Therefore, both the direction and the extent of regime change are a function of glacially accruing regime-culture misfits—driven by generational value shifts in a predominantly emancipatory direction. Consequently, the backsliding of democracies into authoritarianism is limited to societies in which emancipative values remain underdeveloped.Contrary to the widely cited deconsolidation thesis, the ascendant generational profile of emancipative values means that the momentary challenges to democracy are unlikely to stifle democracy's long-term rise."How it matters: different priors on the contingency of value changeI am wondering(1) whether this literature could be quite useful in forming priors about the contingency of value change and,(2) in particular, serve as useful correctiv...]]>
Wed, 07 Dec 2022 15:58:11 +0000 EA - What can we learn from the empirical social science literature on the expected contingency of value change? by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What can we learn from the empirical social science literature on the expected contingency of value change?, published by jackva on December 7, 2022 on The Effective Altruism Forum.Note: Quickly written on vacation as I don't and won't have time to flesh this out much more but wanted to get the idea out so that others can work on this insofar as this seems important. I usually write on climate and the last time I seriously thought about value change is a while back.A missing perspective?Reading What We Owe The Future's chapter on the contingency of value change one thing that struck me was that there was -- as far as I can tell -- no reference to the rich empirical literature in political science and sociology on the drivers of value change, e.g. the work on the emergence of post-materialist values by Inglehart and the work of other scholars in this tradition, including the World Value Survey, which maps and seeks to explain value changes in societies around the world.Note that I am not claiming this literature being entirely right, more that this is a large body of literature (including criticisms) that seems very relevant to EA interest in value change, but not discussed.Introductions to this literatureEzra Klein podcast 2022: Conversation with Pippa NorrisWelzel 2021: Why the Future is DemocraticInglehart and Welzel 2010: Changing Mass Priorities: The Link between Modernization and DemocracyInglehart and Welzel 2009: How Development Leads to DemocracyWelzel et al 2003: The theory of human development, a cross-cultural analysis.These sources are quickly chosen and are biased towards Welzel (whom I was a student of in college), so don't take this as a definite treatment - just a window into this literature.The basic point of this literature is that there are good reasons to expect that value change is actually quite predictable and that a certain set of values tend to emerge out of the conditions of modernization. There is a lot more nuance to it, this is not just "old-school" modernization theory, but it is an empirically grounded update against massive contingency in the development of values, as there are predictable relationships between economic development and value change that seem to hold across a broad set of geographies and stages of economic development.To give a flavor, the abstract of Welzel 2021 puts it as follows:"Recent accounts of democratic backsliding neglect the cultural foundations of autocracy-versus-democracy.To bring culture back in, this article demonstrates that 1) countries' membership in culture zones explains some 70 percent of the total cross-national variation in autocracy-versus-democracy; and 2) this culture-bound variation has remained astoundingly constant over time—in spite of all the trending patterns in the global distribution of regime types over the last 120 years. Furthermore, the explanatory power of culture zones over autocracy-versus-democracy is rooted in the cultures' differentiation on "authoritarian-versus-emancipative values." Therefore, both the direction and the extent of regime change are a function of glacially accruing regime-culture misfits—driven by generational value shifts in a predominantly emancipatory direction. Consequently, the backsliding of democracies into authoritarianism is limited to societies in which emancipative values remain underdeveloped.Contrary to the widely cited deconsolidation thesis, the ascendant generational profile of emancipative values means that the momentary challenges to democracy are unlikely to stifle democracy's long-term rise."How it matters: different priors on the contingency of value changeI am wondering(1) whether this literature could be quite useful in forming priors about the contingency of value change and,(2) in particular, serve as useful correctiv...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What can we learn from the empirical social science literature on the expected contingency of value change?, published by jackva on December 7, 2022 on The Effective Altruism Forum.Note: Quickly written on vacation as I don't and won't have time to flesh this out much more but wanted to get the idea out so that others can work on this insofar as this seems important. I usually write on climate and the last time I seriously thought about value change is a while back.A missing perspective?Reading What We Owe The Future's chapter on the contingency of value change one thing that struck me was that there was -- as far as I can tell -- no reference to the rich empirical literature in political science and sociology on the drivers of value change, e.g. the work on the emergence of post-materialist values by Inglehart and the work of other scholars in this tradition, including the World Value Survey, which maps and seeks to explain value changes in societies around the world.Note that I am not claiming this literature being entirely right, more that this is a large body of literature (including criticisms) that seems very relevant to EA interest in value change, but not discussed.Introductions to this literatureEzra Klein podcast 2022: Conversation with Pippa NorrisWelzel 2021: Why the Future is DemocraticInglehart and Welzel 2010: Changing Mass Priorities: The Link between Modernization and DemocracyInglehart and Welzel 2009: How Development Leads to DemocracyWelzel et al 2003: The theory of human development, a cross-cultural analysis.These sources are quickly chosen and are biased towards Welzel (whom I was a student of in college), so don't take this as a definite treatment - just a window into this literature.The basic point of this literature is that there are good reasons to expect that value change is actually quite predictable and that a certain set of values tend to emerge out of the conditions of modernization. There is a lot more nuance to it, this is not just "old-school" modernization theory, but it is an empirically grounded update against massive contingency in the development of values, as there are predictable relationships between economic development and value change that seem to hold across a broad set of geographies and stages of economic development.To give a flavor, the abstract of Welzel 2021 puts it as follows:"Recent accounts of democratic backsliding neglect the cultural foundations of autocracy-versus-democracy.To bring culture back in, this article demonstrates that 1) countries' membership in culture zones explains some 70 percent of the total cross-national variation in autocracy-versus-democracy; and 2) this culture-bound variation has remained astoundingly constant over time—in spite of all the trending patterns in the global distribution of regime types over the last 120 years. Furthermore, the explanatory power of culture zones over autocracy-versus-democracy is rooted in the cultures' differentiation on "authoritarian-versus-emancipative values." Therefore, both the direction and the extent of regime change are a function of glacially accruing regime-culture misfits—driven by generational value shifts in a predominantly emancipatory direction. Consequently, the backsliding of democracies into authoritarianism is limited to societies in which emancipative values remain underdeveloped.Contrary to the widely cited deconsolidation thesis, the ascendant generational profile of emancipative values means that the momentary challenges to democracy are unlikely to stifle democracy's long-term rise."How it matters: different priors on the contingency of value changeI am wondering(1) whether this literature could be quite useful in forming priors about the contingency of value change and,(2) in particular, serve as useful correctiv...]]>
jackva https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:16 None full 4037
agKFRa5qBApnfLrSh_EA EA - Why development aid is a really exciting field by MathiasKB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why development aid is a really exciting field, published by MathiasKB on December 7, 2022 on The Effective Altruism Forum.Each year, wealthy countries collectively spend around 178 billion dollars (!!) on Development aid.Development aid has funded some of the most cost-effective lifesaving programs that have ever been run. Such examples include PEPFAR, the US emergency aids relief programme rolled out at the height of the African aids pandemic, which estimates suggest saved 25 million lives at a cost of some 85 billion ($3400 per life saved, competitive with Givewell’s very best). EAs working with global poverty will know just how difficult it is to achieve high cost effectiveness at these scales.Development aid has also funded some of the very worst development projects conceived, in some instances causing outright harm to the recipients.Development aid is spent with a large variety of goals in mind. Climate mitigation projects, gender equality campaigns, and free-trade agreements are all funded by wealthy governments under a single illusory budgetary item: ‘development assistance’.In short, the scope of aid is enormous and so is the impact that can be had by positively influencing how it is spent. I'm not the only one who thinks so! In January of 2022 Open Philanthropy announced Global Aid Policy as a priority area within its global health and wellbeing portfolio.In this post I will:Demystify the processes that decide how aid is allocated.Argue that aid policy is neglected (by EAs especially), high in scale, and maybe tractable.Sneakily attempt to make you excited about aid, in preparation for the announcement of a non-profit I’m co-founding.Who decides how aid is allocated?When I first dug into development aid, I found the field very opaque and difficult to get an overview of. This made everything seem much more static and difficult to influence, than I now think it is.Come with me, and I’ll show you how the aid-sausage is made. The explanation tries to capture the grand picture, but each country is different and the explanation is overfit to western democracies.The aid pipeline:It all begins with a government decision to spend money on aid. For many countries this decision was formalized in 1970 after a UN resolution was signed between members to spend 0.7% of GNI on official development assistance.Politicians decide on a national aid strategyEach country that gives aid will have an official strategy for its aid spending. The strategy lists a number of priorities the government wants to focus on. It is typically re-negotiated and updated once every few years or when a new government takes seat. The agreed upon aid strategy sets a broad direction for the civil service and relevant parliamentary committees in projects they choose to carry out.The recently released UK aid strategy is a good example of what typical priorities look like:deliver honest, reliable investment through British Investment Partnerships, building on the UK’s financial expertise and the strengths of the City of London, in line with the Prime Minister’s vision for the Clean Green Initiativeprovide women and girls with the freedom they need to succeed, unlocking their future potential, educating girls, supporting their empowerment and protecting them against violencestep up the UK’s life-saving humanitarian work to prevent the worst forms of human suffering around the world. The UK will lead globally for a more effective international response to humanitarian crisestake forward UK leadership on climate change, nature and global health. The strategy will put the UK commitments made during the UK’s Presidency of G7 and COP26, UK global leadership in science and technology, and the UK’s COVID-19 response, at the core of its international development workThe Government passes a...]]>
MathiasKB https://forum.effectivealtruism.org/posts/agKFRa5qBApnfLrSh/why-development-aid-is-a-really-exciting-field Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why development aid is a really exciting field, published by MathiasKB on December 7, 2022 on The Effective Altruism Forum.Each year, wealthy countries collectively spend around 178 billion dollars (!!) on Development aid.Development aid has funded some of the most cost-effective lifesaving programs that have ever been run. Such examples include PEPFAR, the US emergency aids relief programme rolled out at the height of the African aids pandemic, which estimates suggest saved 25 million lives at a cost of some 85 billion ($3400 per life saved, competitive with Givewell’s very best). EAs working with global poverty will know just how difficult it is to achieve high cost effectiveness at these scales.Development aid has also funded some of the very worst development projects conceived, in some instances causing outright harm to the recipients.Development aid is spent with a large variety of goals in mind. Climate mitigation projects, gender equality campaigns, and free-trade agreements are all funded by wealthy governments under a single illusory budgetary item: ‘development assistance’.In short, the scope of aid is enormous and so is the impact that can be had by positively influencing how it is spent. I'm not the only one who thinks so! In January of 2022 Open Philanthropy announced Global Aid Policy as a priority area within its global health and wellbeing portfolio.In this post I will:Demystify the processes that decide how aid is allocated.Argue that aid policy is neglected (by EAs especially), high in scale, and maybe tractable.Sneakily attempt to make you excited about aid, in preparation for the announcement of a non-profit I’m co-founding.Who decides how aid is allocated?When I first dug into development aid, I found the field very opaque and difficult to get an overview of. This made everything seem much more static and difficult to influence, than I now think it is.Come with me, and I’ll show you how the aid-sausage is made. The explanation tries to capture the grand picture, but each country is different and the explanation is overfit to western democracies.The aid pipeline:It all begins with a government decision to spend money on aid. For many countries this decision was formalized in 1970 after a UN resolution was signed between members to spend 0.7% of GNI on official development assistance.Politicians decide on a national aid strategyEach country that gives aid will have an official strategy for its aid spending. The strategy lists a number of priorities the government wants to focus on. It is typically re-negotiated and updated once every few years or when a new government takes seat. The agreed upon aid strategy sets a broad direction for the civil service and relevant parliamentary committees in projects they choose to carry out.The recently released UK aid strategy is a good example of what typical priorities look like:deliver honest, reliable investment through British Investment Partnerships, building on the UK’s financial expertise and the strengths of the City of London, in line with the Prime Minister’s vision for the Clean Green Initiativeprovide women and girls with the freedom they need to succeed, unlocking their future potential, educating girls, supporting their empowerment and protecting them against violencestep up the UK’s life-saving humanitarian work to prevent the worst forms of human suffering around the world. The UK will lead globally for a more effective international response to humanitarian crisestake forward UK leadership on climate change, nature and global health. The strategy will put the UK commitments made during the UK’s Presidency of G7 and COP26, UK global leadership in science and technology, and the UK’s COVID-19 response, at the core of its international development workThe Government passes a...]]>
Wed, 07 Dec 2022 15:24:28 +0000 EA - Why development aid is a really exciting field by MathiasKB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why development aid is a really exciting field, published by MathiasKB on December 7, 2022 on The Effective Altruism Forum.Each year, wealthy countries collectively spend around 178 billion dollars (!!) on Development aid.Development aid has funded some of the most cost-effective lifesaving programs that have ever been run. Such examples include PEPFAR, the US emergency aids relief programme rolled out at the height of the African aids pandemic, which estimates suggest saved 25 million lives at a cost of some 85 billion ($3400 per life saved, competitive with Givewell’s very best). EAs working with global poverty will know just how difficult it is to achieve high cost effectiveness at these scales.Development aid has also funded some of the very worst development projects conceived, in some instances causing outright harm to the recipients.Development aid is spent with a large variety of goals in mind. Climate mitigation projects, gender equality campaigns, and free-trade agreements are all funded by wealthy governments under a single illusory budgetary item: ‘development assistance’.In short, the scope of aid is enormous and so is the impact that can be had by positively influencing how it is spent. I'm not the only one who thinks so! In January of 2022 Open Philanthropy announced Global Aid Policy as a priority area within its global health and wellbeing portfolio.In this post I will:Demystify the processes that decide how aid is allocated.Argue that aid policy is neglected (by EAs especially), high in scale, and maybe tractable.Sneakily attempt to make you excited about aid, in preparation for the announcement of a non-profit I’m co-founding.Who decides how aid is allocated?When I first dug into development aid, I found the field very opaque and difficult to get an overview of. This made everything seem much more static and difficult to influence, than I now think it is.Come with me, and I’ll show you how the aid-sausage is made. The explanation tries to capture the grand picture, but each country is different and the explanation is overfit to western democracies.The aid pipeline:It all begins with a government decision to spend money on aid. For many countries this decision was formalized in 1970 after a UN resolution was signed between members to spend 0.7% of GNI on official development assistance.Politicians decide on a national aid strategyEach country that gives aid will have an official strategy for its aid spending. The strategy lists a number of priorities the government wants to focus on. It is typically re-negotiated and updated once every few years or when a new government takes seat. The agreed upon aid strategy sets a broad direction for the civil service and relevant parliamentary committees in projects they choose to carry out.The recently released UK aid strategy is a good example of what typical priorities look like:deliver honest, reliable investment through British Investment Partnerships, building on the UK’s financial expertise and the strengths of the City of London, in line with the Prime Minister’s vision for the Clean Green Initiativeprovide women and girls with the freedom they need to succeed, unlocking their future potential, educating girls, supporting their empowerment and protecting them against violencestep up the UK’s life-saving humanitarian work to prevent the worst forms of human suffering around the world. The UK will lead globally for a more effective international response to humanitarian crisestake forward UK leadership on climate change, nature and global health. The strategy will put the UK commitments made during the UK’s Presidency of G7 and COP26, UK global leadership in science and technology, and the UK’s COVID-19 response, at the core of its international development workThe Government passes a...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why development aid is a really exciting field, published by MathiasKB on December 7, 2022 on The Effective Altruism Forum.Each year, wealthy countries collectively spend around 178 billion dollars (!!) on Development aid.Development aid has funded some of the most cost-effective lifesaving programs that have ever been run. Such examples include PEPFAR, the US emergency aids relief programme rolled out at the height of the African aids pandemic, which estimates suggest saved 25 million lives at a cost of some 85 billion ($3400 per life saved, competitive with Givewell’s very best). EAs working with global poverty will know just how difficult it is to achieve high cost effectiveness at these scales.Development aid has also funded some of the very worst development projects conceived, in some instances causing outright harm to the recipients.Development aid is spent with a large variety of goals in mind. Climate mitigation projects, gender equality campaigns, and free-trade agreements are all funded by wealthy governments under a single illusory budgetary item: ‘development assistance’.In short, the scope of aid is enormous and so is the impact that can be had by positively influencing how it is spent. I'm not the only one who thinks so! In January of 2022 Open Philanthropy announced Global Aid Policy as a priority area within its global health and wellbeing portfolio.In this post I will:Demystify the processes that decide how aid is allocated.Argue that aid policy is neglected (by EAs especially), high in scale, and maybe tractable.Sneakily attempt to make you excited about aid, in preparation for the announcement of a non-profit I’m co-founding.Who decides how aid is allocated?When I first dug into development aid, I found the field very opaque and difficult to get an overview of. This made everything seem much more static and difficult to influence, than I now think it is.Come with me, and I’ll show you how the aid-sausage is made. The explanation tries to capture the grand picture, but each country is different and the explanation is overfit to western democracies.The aid pipeline:It all begins with a government decision to spend money on aid. For many countries this decision was formalized in 1970 after a UN resolution was signed between members to spend 0.7% of GNI on official development assistance.Politicians decide on a national aid strategyEach country that gives aid will have an official strategy for its aid spending. The strategy lists a number of priorities the government wants to focus on. It is typically re-negotiated and updated once every few years or when a new government takes seat. The agreed upon aid strategy sets a broad direction for the civil service and relevant parliamentary committees in projects they choose to carry out.The recently released UK aid strategy is a good example of what typical priorities look like:deliver honest, reliable investment through British Investment Partnerships, building on the UK’s financial expertise and the strengths of the City of London, in line with the Prime Minister’s vision for the Clean Green Initiativeprovide women and girls with the freedom they need to succeed, unlocking their future potential, educating girls, supporting their empowerment and protecting them against violencestep up the UK’s life-saving humanitarian work to prevent the worst forms of human suffering around the world. The UK will lead globally for a more effective international response to humanitarian crisestake forward UK leadership on climate change, nature and global health. The strategy will put the UK commitments made during the UK’s Presidency of G7 and COP26, UK global leadership in science and technology, and the UK’s COVID-19 response, at the core of its international development workThe Government passes a...]]>
MathiasKB https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:40 None full 4034
dDTdviDpm8dAFssqe_EA EA - EA London Rebranding to EA UK by DavidNash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA London Rebranding to EA UK, published by DavidNash on December 7, 2022 on The Effective Altruism Forum.This post is mainly from the original proposal document to rebrand EA London to EA UK. There is now an EA UK website and newsletter since August 2022.SummaryThere are people in the UK outside of London, Oxford and Cambridge who have an interest in EA and don’t have a local group. They may miss out on chances to connect with others, stay motivated and find out about job and donation opportunities. It could be useful to set up EA UK to support this wider group of people.Why is there no EA UK already?Historically local EA groups have been founded where there happened to be someone with enough time to run one rather than with a larger strategy in mind. Newer organisers around the world tend to create national groups that support local groups within that country and provide a support network for everyone in their country rather than a few cities. Because EA groups in the UK were made pretty early they grew faster before the idea of national groups was as widespread.Another contributing factor to why there is no EA UK may be that some of the support that EA country groups often provide is given by CEA and other organisations, for example most country groups have to set up a donation platform for people who want to give effectively, but this is already done in the UK by EA Funds. There is also no need to translate materials into a local language for the UK.Why set up EA UK?The UK has a population of 67 million, with roughly 14% living in London, Oxford and Cambridge, the three places in the UK with paid group organisers. This means that people in the rest of the country who have an interest in EA will not have a group they can get support fromA lot of the support that can be provided doesn’t have to be in person (1-1s, newsletter, directory, project advice)People move around the UK and so if there is a national group they can stay up to date with EA no matter whether they are moving to, it also could reduce the chance that people drop out of EA when they move away from or between hubsThere will be some individuals who wont be interested in attending local group events but will want to keep up to date with what happens with EA in the UK and occasionally want to talk about donations/careersEven though EA London can provide support to those outside of London, people may not reach out to a group that isn’t near them unless they feel like they have permission toBy having a national group it may lead to more local groups being set up as it is easier for people in the same place to find each otherWhat EA UK wont doProvide support to UK student groups, this is already done by CEA, Open Philanthropy and the Global Challenges Project. EA UK could act as an extra place for them to go for support, but generally would be more focused on professionals in the UKAct as a layer of management between other local groups in the UK and CEA, they will still interact with CEA for supportWhat does this mean for EA London?Most things will stay the same, the website and newsletter have been rebranded to EA UK rather than EA LondonThere will still be events and support for people in LondonPotential IssuesEA London, Oxford and Cambridge community members may get less support if there is more focus on a national groupPeople outside of UK EA hubs may feel neglected if most of the jobs/events etc are based in hubs. Although hopefully providing more support will allow more relevant opportunities in the wider UK to be discovered and then this can be shared back to the UK communityThere may be much more demand for the support that is offered. As this is a problem of being potentially too successful we could scale back different projects, or maybe consider hiring extra people if it...]]>
DavidNash https://forum.effectivealtruism.org/posts/dDTdviDpm8dAFssqe/ea-london-rebranding-to-ea-uk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA London Rebranding to EA UK, published by DavidNash on December 7, 2022 on The Effective Altruism Forum.This post is mainly from the original proposal document to rebrand EA London to EA UK. There is now an EA UK website and newsletter since August 2022.SummaryThere are people in the UK outside of London, Oxford and Cambridge who have an interest in EA and don’t have a local group. They may miss out on chances to connect with others, stay motivated and find out about job and donation opportunities. It could be useful to set up EA UK to support this wider group of people.Why is there no EA UK already?Historically local EA groups have been founded where there happened to be someone with enough time to run one rather than with a larger strategy in mind. Newer organisers around the world tend to create national groups that support local groups within that country and provide a support network for everyone in their country rather than a few cities. Because EA groups in the UK were made pretty early they grew faster before the idea of national groups was as widespread.Another contributing factor to why there is no EA UK may be that some of the support that EA country groups often provide is given by CEA and other organisations, for example most country groups have to set up a donation platform for people who want to give effectively, but this is already done in the UK by EA Funds. There is also no need to translate materials into a local language for the UK.Why set up EA UK?The UK has a population of 67 million, with roughly 14% living in London, Oxford and Cambridge, the three places in the UK with paid group organisers. This means that people in the rest of the country who have an interest in EA will not have a group they can get support fromA lot of the support that can be provided doesn’t have to be in person (1-1s, newsletter, directory, project advice)People move around the UK and so if there is a national group they can stay up to date with EA no matter whether they are moving to, it also could reduce the chance that people drop out of EA when they move away from or between hubsThere will be some individuals who wont be interested in attending local group events but will want to keep up to date with what happens with EA in the UK and occasionally want to talk about donations/careersEven though EA London can provide support to those outside of London, people may not reach out to a group that isn’t near them unless they feel like they have permission toBy having a national group it may lead to more local groups being set up as it is easier for people in the same place to find each otherWhat EA UK wont doProvide support to UK student groups, this is already done by CEA, Open Philanthropy and the Global Challenges Project. EA UK could act as an extra place for them to go for support, but generally would be more focused on professionals in the UKAct as a layer of management between other local groups in the UK and CEA, they will still interact with CEA for supportWhat does this mean for EA London?Most things will stay the same, the website and newsletter have been rebranded to EA UK rather than EA LondonThere will still be events and support for people in LondonPotential IssuesEA London, Oxford and Cambridge community members may get less support if there is more focus on a national groupPeople outside of UK EA hubs may feel neglected if most of the jobs/events etc are based in hubs. Although hopefully providing more support will allow more relevant opportunities in the wider UK to be discovered and then this can be shared back to the UK communityThere may be much more demand for the support that is offered. As this is a problem of being potentially too successful we could scale back different projects, or maybe consider hiring extra people if it...]]>
Wed, 07 Dec 2022 15:02:14 +0000 EA - EA London Rebranding to EA UK by DavidNash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA London Rebranding to EA UK, published by DavidNash on December 7, 2022 on The Effective Altruism Forum.This post is mainly from the original proposal document to rebrand EA London to EA UK. There is now an EA UK website and newsletter since August 2022.SummaryThere are people in the UK outside of London, Oxford and Cambridge who have an interest in EA and don’t have a local group. They may miss out on chances to connect with others, stay motivated and find out about job and donation opportunities. It could be useful to set up EA UK to support this wider group of people.Why is there no EA UK already?Historically local EA groups have been founded where there happened to be someone with enough time to run one rather than with a larger strategy in mind. Newer organisers around the world tend to create national groups that support local groups within that country and provide a support network for everyone in their country rather than a few cities. Because EA groups in the UK were made pretty early they grew faster before the idea of national groups was as widespread.Another contributing factor to why there is no EA UK may be that some of the support that EA country groups often provide is given by CEA and other organisations, for example most country groups have to set up a donation platform for people who want to give effectively, but this is already done in the UK by EA Funds. There is also no need to translate materials into a local language for the UK.Why set up EA UK?The UK has a population of 67 million, with roughly 14% living in London, Oxford and Cambridge, the three places in the UK with paid group organisers. This means that people in the rest of the country who have an interest in EA will not have a group they can get support fromA lot of the support that can be provided doesn’t have to be in person (1-1s, newsletter, directory, project advice)People move around the UK and so if there is a national group they can stay up to date with EA no matter whether they are moving to, it also could reduce the chance that people drop out of EA when they move away from or between hubsThere will be some individuals who wont be interested in attending local group events but will want to keep up to date with what happens with EA in the UK and occasionally want to talk about donations/careersEven though EA London can provide support to those outside of London, people may not reach out to a group that isn’t near them unless they feel like they have permission toBy having a national group it may lead to more local groups being set up as it is easier for people in the same place to find each otherWhat EA UK wont doProvide support to UK student groups, this is already done by CEA, Open Philanthropy and the Global Challenges Project. EA UK could act as an extra place for them to go for support, but generally would be more focused on professionals in the UKAct as a layer of management between other local groups in the UK and CEA, they will still interact with CEA for supportWhat does this mean for EA London?Most things will stay the same, the website and newsletter have been rebranded to EA UK rather than EA LondonThere will still be events and support for people in LondonPotential IssuesEA London, Oxford and Cambridge community members may get less support if there is more focus on a national groupPeople outside of UK EA hubs may feel neglected if most of the jobs/events etc are based in hubs. Although hopefully providing more support will allow more relevant opportunities in the wider UK to be discovered and then this can be shared back to the UK communityThere may be much more demand for the support that is offered. As this is a problem of being potentially too successful we could scale back different projects, or maybe consider hiring extra people if it...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA London Rebranding to EA UK, published by DavidNash on December 7, 2022 on The Effective Altruism Forum.This post is mainly from the original proposal document to rebrand EA London to EA UK. There is now an EA UK website and newsletter since August 2022.SummaryThere are people in the UK outside of London, Oxford and Cambridge who have an interest in EA and don’t have a local group. They may miss out on chances to connect with others, stay motivated and find out about job and donation opportunities. It could be useful to set up EA UK to support this wider group of people.Why is there no EA UK already?Historically local EA groups have been founded where there happened to be someone with enough time to run one rather than with a larger strategy in mind. Newer organisers around the world tend to create national groups that support local groups within that country and provide a support network for everyone in their country rather than a few cities. Because EA groups in the UK were made pretty early they grew faster before the idea of national groups was as widespread.Another contributing factor to why there is no EA UK may be that some of the support that EA country groups often provide is given by CEA and other organisations, for example most country groups have to set up a donation platform for people who want to give effectively, but this is already done in the UK by EA Funds. There is also no need to translate materials into a local language for the UK.Why set up EA UK?The UK has a population of 67 million, with roughly 14% living in London, Oxford and Cambridge, the three places in the UK with paid group organisers. This means that people in the rest of the country who have an interest in EA will not have a group they can get support fromA lot of the support that can be provided doesn’t have to be in person (1-1s, newsletter, directory, project advice)People move around the UK and so if there is a national group they can stay up to date with EA no matter whether they are moving to, it also could reduce the chance that people drop out of EA when they move away from or between hubsThere will be some individuals who wont be interested in attending local group events but will want to keep up to date with what happens with EA in the UK and occasionally want to talk about donations/careersEven though EA London can provide support to those outside of London, people may not reach out to a group that isn’t near them unless they feel like they have permission toBy having a national group it may lead to more local groups being set up as it is easier for people in the same place to find each otherWhat EA UK wont doProvide support to UK student groups, this is already done by CEA, Open Philanthropy and the Global Challenges Project. EA UK could act as an extra place for them to go for support, but generally would be more focused on professionals in the UKAct as a layer of management between other local groups in the UK and CEA, they will still interact with CEA for supportWhat does this mean for EA London?Most things will stay the same, the website and newsletter have been rebranded to EA UK rather than EA LondonThere will still be events and support for people in LondonPotential IssuesEA London, Oxford and Cambridge community members may get less support if there is more focus on a national groupPeople outside of UK EA hubs may feel neglected if most of the jobs/events etc are based in hubs. Although hopefully providing more support will allow more relevant opportunities in the wider UK to be discovered and then this can be shared back to the UK communityThere may be much more demand for the support that is offered. As this is a problem of being potentially too successful we could scale back different projects, or maybe consider hiring extra people if it...]]>
DavidNash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:54 None full 4038
yRSoqFve3vQ6DWyJh_EA EA - Visualizing the development gap by Stephen Clare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Visualizing the development gap, published by Stephen Clare on December 7, 2022 on The Effective Altruism Forum.Lazarus Chakwera won Malawi’s 2020 Presidential election on an anti-corruption, pro-growth platform. It’s no surprise that Malawians voted for growth, as Malawi has been called the world’s “poorest peaceful country”. According to Our World in Data, the median income per day is $1.53, or about $560 per year. Real GDP per capita has grown at an average rate of just 1.4% per year since 1961 and stands today at $1650 per person (PPP, current international $). Furthermore, the country has yet to recover from an economic downturn caused by the Covid-19 pandemic, leaving GDP per capita only slightly higher than it was in 2014.Life on $560 a year is possible, but not very comfortable. A sudden illness, accident, or natural disaster can be devastating. Even after spending almost all one’s income, many of one’s basic needs remain unmet. Investments for one’s future, including education and durable goods, are mostly out of reach. Life satisfaction in countries where incomes are so low is poor.We know that it’s not fair some people have to make do with so little. In the U.S., the poverty threshold, below which one qualifies for various government benefits to help meet basic needs, is $26,200 for a family of four, or $6625 per person. That makes it almost 12 times higher than the median Malawian income. (All of these international comparisons are adjusted for purchasing power.)Let that sink in. The majority of Malawians don’t earn one-tenth of the amount of money below which we, in a high-income country, think one should get help from the government. And of course, the same is true in most countries. If we applied the U.S. poverty line around the world, we would see that most people just don’t have enough money to meet all their basic needs.That’s why finding ways to speed up development in low-income countries would be a huge win for philanthropists, policymakers, and citizens alike. Unfortunately, we haven’t made much progress towards this goal.In this post, I give a sense of why it’s important for EAs to think about broad economic growth in lower income countries. I do this by showing how long it will take Malawi to catch up to where a high-income country like the United States is at today. In short, at typical growth rates, it will take a depressingly long time: almost two centuries. Sparking a growth acceleration, like what India has experienced in recent decades, would help a bit, but it would still take many decades for Malawi’s economy to grow to the point that most Malawians can afford a reasonable standard of living.These calculations should help deepen one’s understanding of the development gap: the difference in living standards between high- and low-income countries. The implications for global development advocates are obvious. But I also think longtermists should pay attention. First, the wellbeing of billions is not unimportant from a longterm perspective – it’s just that future wellbeing matters as well. Second, I think speeding up development would help more people from lower-income countries access the educational and professional opportunities they need to participate in what could be humanity’s most important century.Growth trajectories for MalawiOne way to visualize the development gap is to think about how long it will take a country like Malawi to reach various benchmarks. To that end, let's consider a few different growth trajectories. I’m going to continue to refer to Malawi specifically, but one could similarly visualize any country. What matters is just the starting income and the growth rate.First, what if Malawi continued to grow at 2% per year, roughly as it has in recent decades? At this rate, it would take a shocking 105 years ...]]>
Stephen Clare https://forum.effectivealtruism.org/posts/yRSoqFve3vQ6DWyJh/visualizing-the-development-gap Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Visualizing the development gap, published by Stephen Clare on December 7, 2022 on The Effective Altruism Forum.Lazarus Chakwera won Malawi’s 2020 Presidential election on an anti-corruption, pro-growth platform. It’s no surprise that Malawians voted for growth, as Malawi has been called the world’s “poorest peaceful country”. According to Our World in Data, the median income per day is $1.53, or about $560 per year. Real GDP per capita has grown at an average rate of just 1.4% per year since 1961 and stands today at $1650 per person (PPP, current international $). Furthermore, the country has yet to recover from an economic downturn caused by the Covid-19 pandemic, leaving GDP per capita only slightly higher than it was in 2014.Life on $560 a year is possible, but not very comfortable. A sudden illness, accident, or natural disaster can be devastating. Even after spending almost all one’s income, many of one’s basic needs remain unmet. Investments for one’s future, including education and durable goods, are mostly out of reach. Life satisfaction in countries where incomes are so low is poor.We know that it’s not fair some people have to make do with so little. In the U.S., the poverty threshold, below which one qualifies for various government benefits to help meet basic needs, is $26,200 for a family of four, or $6625 per person. That makes it almost 12 times higher than the median Malawian income. (All of these international comparisons are adjusted for purchasing power.)Let that sink in. The majority of Malawians don’t earn one-tenth of the amount of money below which we, in a high-income country, think one should get help from the government. And of course, the same is true in most countries. If we applied the U.S. poverty line around the world, we would see that most people just don’t have enough money to meet all their basic needs.That’s why finding ways to speed up development in low-income countries would be a huge win for philanthropists, policymakers, and citizens alike. Unfortunately, we haven’t made much progress towards this goal.In this post, I give a sense of why it’s important for EAs to think about broad economic growth in lower income countries. I do this by showing how long it will take Malawi to catch up to where a high-income country like the United States is at today. In short, at typical growth rates, it will take a depressingly long time: almost two centuries. Sparking a growth acceleration, like what India has experienced in recent decades, would help a bit, but it would still take many decades for Malawi’s economy to grow to the point that most Malawians can afford a reasonable standard of living.These calculations should help deepen one’s understanding of the development gap: the difference in living standards between high- and low-income countries. The implications for global development advocates are obvious. But I also think longtermists should pay attention. First, the wellbeing of billions is not unimportant from a longterm perspective – it’s just that future wellbeing matters as well. Second, I think speeding up development would help more people from lower-income countries access the educational and professional opportunities they need to participate in what could be humanity’s most important century.Growth trajectories for MalawiOne way to visualize the development gap is to think about how long it will take a country like Malawi to reach various benchmarks. To that end, let's consider a few different growth trajectories. I’m going to continue to refer to Malawi specifically, but one could similarly visualize any country. What matters is just the starting income and the growth rate.First, what if Malawi continued to grow at 2% per year, roughly as it has in recent decades? At this rate, it would take a shocking 105 years ...]]>
Wed, 07 Dec 2022 14:05:43 +0000 EA - Visualizing the development gap by Stephen Clare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Visualizing the development gap, published by Stephen Clare on December 7, 2022 on The Effective Altruism Forum.Lazarus Chakwera won Malawi’s 2020 Presidential election on an anti-corruption, pro-growth platform. It’s no surprise that Malawians voted for growth, as Malawi has been called the world’s “poorest peaceful country”. According to Our World in Data, the median income per day is $1.53, or about $560 per year. Real GDP per capita has grown at an average rate of just 1.4% per year since 1961 and stands today at $1650 per person (PPP, current international $). Furthermore, the country has yet to recover from an economic downturn caused by the Covid-19 pandemic, leaving GDP per capita only slightly higher than it was in 2014.Life on $560 a year is possible, but not very comfortable. A sudden illness, accident, or natural disaster can be devastating. Even after spending almost all one’s income, many of one’s basic needs remain unmet. Investments for one’s future, including education and durable goods, are mostly out of reach. Life satisfaction in countries where incomes are so low is poor.We know that it’s not fair some people have to make do with so little. In the U.S., the poverty threshold, below which one qualifies for various government benefits to help meet basic needs, is $26,200 for a family of four, or $6625 per person. That makes it almost 12 times higher than the median Malawian income. (All of these international comparisons are adjusted for purchasing power.)Let that sink in. The majority of Malawians don’t earn one-tenth of the amount of money below which we, in a high-income country, think one should get help from the government. And of course, the same is true in most countries. If we applied the U.S. poverty line around the world, we would see that most people just don’t have enough money to meet all their basic needs.That’s why finding ways to speed up development in low-income countries would be a huge win for philanthropists, policymakers, and citizens alike. Unfortunately, we haven’t made much progress towards this goal.In this post, I give a sense of why it’s important for EAs to think about broad economic growth in lower income countries. I do this by showing how long it will take Malawi to catch up to where a high-income country like the United States is at today. In short, at typical growth rates, it will take a depressingly long time: almost two centuries. Sparking a growth acceleration, like what India has experienced in recent decades, would help a bit, but it would still take many decades for Malawi’s economy to grow to the point that most Malawians can afford a reasonable standard of living.These calculations should help deepen one’s understanding of the development gap: the difference in living standards between high- and low-income countries. The implications for global development advocates are obvious. But I also think longtermists should pay attention. First, the wellbeing of billions is not unimportant from a longterm perspective – it’s just that future wellbeing matters as well. Second, I think speeding up development would help more people from lower-income countries access the educational and professional opportunities they need to participate in what could be humanity’s most important century.Growth trajectories for MalawiOne way to visualize the development gap is to think about how long it will take a country like Malawi to reach various benchmarks. To that end, let's consider a few different growth trajectories. I’m going to continue to refer to Malawi specifically, but one could similarly visualize any country. What matters is just the starting income and the growth rate.First, what if Malawi continued to grow at 2% per year, roughly as it has in recent decades? At this rate, it would take a shocking 105 years ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Visualizing the development gap, published by Stephen Clare on December 7, 2022 on The Effective Altruism Forum.Lazarus Chakwera won Malawi’s 2020 Presidential election on an anti-corruption, pro-growth platform. It’s no surprise that Malawians voted for growth, as Malawi has been called the world’s “poorest peaceful country”. According to Our World in Data, the median income per day is $1.53, or about $560 per year. Real GDP per capita has grown at an average rate of just 1.4% per year since 1961 and stands today at $1650 per person (PPP, current international $). Furthermore, the country has yet to recover from an economic downturn caused by the Covid-19 pandemic, leaving GDP per capita only slightly higher than it was in 2014.Life on $560 a year is possible, but not very comfortable. A sudden illness, accident, or natural disaster can be devastating. Even after spending almost all one’s income, many of one’s basic needs remain unmet. Investments for one’s future, including education and durable goods, are mostly out of reach. Life satisfaction in countries where incomes are so low is poor.We know that it’s not fair some people have to make do with so little. In the U.S., the poverty threshold, below which one qualifies for various government benefits to help meet basic needs, is $26,200 for a family of four, or $6625 per person. That makes it almost 12 times higher than the median Malawian income. (All of these international comparisons are adjusted for purchasing power.)Let that sink in. The majority of Malawians don’t earn one-tenth of the amount of money below which we, in a high-income country, think one should get help from the government. And of course, the same is true in most countries. If we applied the U.S. poverty line around the world, we would see that most people just don’t have enough money to meet all their basic needs.That’s why finding ways to speed up development in low-income countries would be a huge win for philanthropists, policymakers, and citizens alike. Unfortunately, we haven’t made much progress towards this goal.In this post, I give a sense of why it’s important for EAs to think about broad economic growth in lower income countries. I do this by showing how long it will take Malawi to catch up to where a high-income country like the United States is at today. In short, at typical growth rates, it will take a depressingly long time: almost two centuries. Sparking a growth acceleration, like what India has experienced in recent decades, would help a bit, but it would still take many decades for Malawi’s economy to grow to the point that most Malawians can afford a reasonable standard of living.These calculations should help deepen one’s understanding of the development gap: the difference in living standards between high- and low-income countries. The implications for global development advocates are obvious. But I also think longtermists should pay attention. First, the wellbeing of billions is not unimportant from a longterm perspective – it’s just that future wellbeing matters as well. Second, I think speeding up development would help more people from lower-income countries access the educational and professional opportunities they need to participate in what could be humanity’s most important century.Growth trajectories for MalawiOne way to visualize the development gap is to think about how long it will take a country like Malawi to reach various benchmarks. To that end, let's consider a few different growth trajectories. I’m going to continue to refer to Malawi specifically, but one could similarly visualize any country. What matters is just the starting income and the growth rate.First, what if Malawi continued to grow at 2% per year, roughly as it has in recent decades? At this rate, it would take a shocking 105 years ...]]>
Stephen Clare https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:13 None full 4036
uG3s9qDCnntJwci9i_EA EA - [Link Post] If We Don’t End Factory Farming Soon, It Might Be Here Forever. by BrianK Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link Post] If We Don’t End Factory Farming Soon, It Might Be Here Forever., published by BrianK on December 7, 2022 on The Effective Altruism Forum.“Do you know what the most popular book is? No, it’s not Harry Potter. But it does talk about spells. It’s the Bible, and it has been for centuries. In the past 50 years alone, the Bible has sold over 3.9 billion copies. And the second best-selling book? The Quran, at 800 million copies.As Oxford Professor William MacAskill, author of the new book “What We Owe The Future”—a tome on effective altruism and “longtermism”—explains, excerpts from these millennia-old schools of thought influence politics around the world: “The Babylonian Talmud, for example, compiled over a millennium ago, states that ‘the embryo is considered to be mere water until the fortieth day’—and today Jews tend to have much more liberal attitudes towards stem cell research than Catholics, who object to this use of embryos because they believe life begins at conception. Similarly, centuries-old dietary restrictions are still widely followed, as evidenced by India’s unusually high rate of vegetarianism, a $20 billion kosher food market, and many Muslims’ abstinence from alcohol.”The reason for this is simple: once rooted, value systems tend to persist for an extremely long time. And when it comes to factory farming, there’s reason to believe we may be at an inflection point.”Read the rest on Forbes.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
BrianK https://forum.effectivealtruism.org/posts/uG3s9qDCnntJwci9i/link-post-if-we-don-t-end-factory-farming-soon-it-might-be Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link Post] If We Don’t End Factory Farming Soon, It Might Be Here Forever., published by BrianK on December 7, 2022 on The Effective Altruism Forum.“Do you know what the most popular book is? No, it’s not Harry Potter. But it does talk about spells. It’s the Bible, and it has been for centuries. In the past 50 years alone, the Bible has sold over 3.9 billion copies. And the second best-selling book? The Quran, at 800 million copies.As Oxford Professor William MacAskill, author of the new book “What We Owe The Future”—a tome on effective altruism and “longtermism”—explains, excerpts from these millennia-old schools of thought influence politics around the world: “The Babylonian Talmud, for example, compiled over a millennium ago, states that ‘the embryo is considered to be mere water until the fortieth day’—and today Jews tend to have much more liberal attitudes towards stem cell research than Catholics, who object to this use of embryos because they believe life begins at conception. Similarly, centuries-old dietary restrictions are still widely followed, as evidenced by India’s unusually high rate of vegetarianism, a $20 billion kosher food market, and many Muslims’ abstinence from alcohol.”The reason for this is simple: once rooted, value systems tend to persist for an extremely long time. And when it comes to factory farming, there’s reason to believe we may be at an inflection point.”Read the rest on Forbes.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 07 Dec 2022 13:18:44 +0000 EA - [Link Post] If We Don’t End Factory Farming Soon, It Might Be Here Forever. by BrianK Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link Post] If We Don’t End Factory Farming Soon, It Might Be Here Forever., published by BrianK on December 7, 2022 on The Effective Altruism Forum.“Do you know what the most popular book is? No, it’s not Harry Potter. But it does talk about spells. It’s the Bible, and it has been for centuries. In the past 50 years alone, the Bible has sold over 3.9 billion copies. And the second best-selling book? The Quran, at 800 million copies.As Oxford Professor William MacAskill, author of the new book “What We Owe The Future”—a tome on effective altruism and “longtermism”—explains, excerpts from these millennia-old schools of thought influence politics around the world: “The Babylonian Talmud, for example, compiled over a millennium ago, states that ‘the embryo is considered to be mere water until the fortieth day’—and today Jews tend to have much more liberal attitudes towards stem cell research than Catholics, who object to this use of embryos because they believe life begins at conception. Similarly, centuries-old dietary restrictions are still widely followed, as evidenced by India’s unusually high rate of vegetarianism, a $20 billion kosher food market, and many Muslims’ abstinence from alcohol.”The reason for this is simple: once rooted, value systems tend to persist for an extremely long time. And when it comes to factory farming, there’s reason to believe we may be at an inflection point.”Read the rest on Forbes.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link Post] If We Don’t End Factory Farming Soon, It Might Be Here Forever., published by BrianK on December 7, 2022 on The Effective Altruism Forum.“Do you know what the most popular book is? No, it’s not Harry Potter. But it does talk about spells. It’s the Bible, and it has been for centuries. In the past 50 years alone, the Bible has sold over 3.9 billion copies. And the second best-selling book? The Quran, at 800 million copies.As Oxford Professor William MacAskill, author of the new book “What We Owe The Future”—a tome on effective altruism and “longtermism”—explains, excerpts from these millennia-old schools of thought influence politics around the world: “The Babylonian Talmud, for example, compiled over a millennium ago, states that ‘the embryo is considered to be mere water until the fortieth day’—and today Jews tend to have much more liberal attitudes towards stem cell research than Catholics, who object to this use of embryos because they believe life begins at conception. Similarly, centuries-old dietary restrictions are still widely followed, as evidenced by India’s unusually high rate of vegetarianism, a $20 billion kosher food market, and many Muslims’ abstinence from alcohol.”The reason for this is simple: once rooted, value systems tend to persist for an extremely long time. And when it comes to factory farming, there’s reason to believe we may be at an inflection point.”Read the rest on Forbes.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
BrianK https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:40 None full 4039
cFtheXSC6XfECxeze_EA EA - Alternative Proteins + Machine Learning: Project Proposals, Seeking Funding by Noa Weiss Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alternative Proteins + Machine Learning: Project Proposals, Seeking Funding, published by Noa Weiss on December 6, 2022 on The Effective Altruism Forum.TL;DR:I’m an AI & Machine Learning consultant (12 years of experience with data).My website: www.weissnoa.com. TL;DR: research, machine learning, data science strategy, I worked with early stage startups as well as big companies like PayPal.Vegan for 10 years and countingI want to move to work full time on animal welfare projects if I can find such projects where my skill set has added value.I looked for promising problems that I think I might solve, I’ll list my top ones here (please give me feedback)I’m looking for funding to flesh these ideas out, implement them, and offer them to alternative protein companiesIdeas1: Modeling of the extrusion processWhat is this?A physical machine where you put ingredients in, and plant-based meat comes out.It’s widely usedThe vast majority of plant-based meat solutions use this. It is also sometimes used for cultivated (lab-grown) meat.It’s currently “voodoo”Trial and error, mostly unpredictable.It’s a classic problem for MLTrain a model to guess the result based on given ingredients.In problems like this, the model often isn’t perfect but does help narrow down the search space significantly (for example, by recognizing that most sets of ingredients “obviously” won’t work)2: Addressing bottlenecks in Alternative Protein companies using existing researchTL;DR:There are known bottle necks that are relevant for most alternative protein companies. For example: figuring out what stem cells are about to grow into as early as possible.There are existing solutions (in the form of published research) for these problems. For example, see this Nature article.These solutions are not ready for use (there’s a difference between an article and a production ready system).I hope I can help orgs integrate these solutions.After I do this for one org, scaling to other orgs will be much easier.Some bottlenecks I might addressAll bottlenecks here are known problems for alternative protein companies, according to the experts I spoke to, but please tell me if I’m wrong. For all problems here, there is at least one scientific paper that seems promising enough to explore (in my opinion).Identification of the beginning stages of neural stem cells differentiation into specific cell typesPredicting cell response to different treatmentsIdentify new potential cell cycle regulatory genesIf you’re an org that could use a solution for one or more of these problems, please get in touch!Call to actionHelp me get funded to run this projectGive me feedback on how to write a funding request for this projectContact meBy messaging me here, or emailing [me at weissnoa dot com].Thanks to Yonatan Cale for helping me with this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Noa Weiss https://forum.effectivealtruism.org/posts/cFtheXSC6XfECxeze/alternative-proteins-machine-learning-project-proposals Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alternative Proteins + Machine Learning: Project Proposals, Seeking Funding, published by Noa Weiss on December 6, 2022 on The Effective Altruism Forum.TL;DR:I’m an AI & Machine Learning consultant (12 years of experience with data).My website: www.weissnoa.com. TL;DR: research, machine learning, data science strategy, I worked with early stage startups as well as big companies like PayPal.Vegan for 10 years and countingI want to move to work full time on animal welfare projects if I can find such projects where my skill set has added value.I looked for promising problems that I think I might solve, I’ll list my top ones here (please give me feedback)I’m looking for funding to flesh these ideas out, implement them, and offer them to alternative protein companiesIdeas1: Modeling of the extrusion processWhat is this?A physical machine where you put ingredients in, and plant-based meat comes out.It’s widely usedThe vast majority of plant-based meat solutions use this. It is also sometimes used for cultivated (lab-grown) meat.It’s currently “voodoo”Trial and error, mostly unpredictable.It’s a classic problem for MLTrain a model to guess the result based on given ingredients.In problems like this, the model often isn’t perfect but does help narrow down the search space significantly (for example, by recognizing that most sets of ingredients “obviously” won’t work)2: Addressing bottlenecks in Alternative Protein companies using existing researchTL;DR:There are known bottle necks that are relevant for most alternative protein companies. For example: figuring out what stem cells are about to grow into as early as possible.There are existing solutions (in the form of published research) for these problems. For example, see this Nature article.These solutions are not ready for use (there’s a difference between an article and a production ready system).I hope I can help orgs integrate these solutions.After I do this for one org, scaling to other orgs will be much easier.Some bottlenecks I might addressAll bottlenecks here are known problems for alternative protein companies, according to the experts I spoke to, but please tell me if I’m wrong. For all problems here, there is at least one scientific paper that seems promising enough to explore (in my opinion).Identification of the beginning stages of neural stem cells differentiation into specific cell typesPredicting cell response to different treatmentsIdentify new potential cell cycle regulatory genesIf you’re an org that could use a solution for one or more of these problems, please get in touch!Call to actionHelp me get funded to run this projectGive me feedback on how to write a funding request for this projectContact meBy messaging me here, or emailing [me at weissnoa dot com].Thanks to Yonatan Cale for helping me with this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 07 Dec 2022 10:03:39 +0000 EA - Alternative Proteins + Machine Learning: Project Proposals, Seeking Funding by Noa Weiss Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alternative Proteins + Machine Learning: Project Proposals, Seeking Funding, published by Noa Weiss on December 6, 2022 on The Effective Altruism Forum.TL;DR:I’m an AI & Machine Learning consultant (12 years of experience with data).My website: www.weissnoa.com. TL;DR: research, machine learning, data science strategy, I worked with early stage startups as well as big companies like PayPal.Vegan for 10 years and countingI want to move to work full time on animal welfare projects if I can find such projects where my skill set has added value.I looked for promising problems that I think I might solve, I’ll list my top ones here (please give me feedback)I’m looking for funding to flesh these ideas out, implement them, and offer them to alternative protein companiesIdeas1: Modeling of the extrusion processWhat is this?A physical machine where you put ingredients in, and plant-based meat comes out.It’s widely usedThe vast majority of plant-based meat solutions use this. It is also sometimes used for cultivated (lab-grown) meat.It’s currently “voodoo”Trial and error, mostly unpredictable.It’s a classic problem for MLTrain a model to guess the result based on given ingredients.In problems like this, the model often isn’t perfect but does help narrow down the search space significantly (for example, by recognizing that most sets of ingredients “obviously” won’t work)2: Addressing bottlenecks in Alternative Protein companies using existing researchTL;DR:There are known bottle necks that are relevant for most alternative protein companies. For example: figuring out what stem cells are about to grow into as early as possible.There are existing solutions (in the form of published research) for these problems. For example, see this Nature article.These solutions are not ready for use (there’s a difference between an article and a production ready system).I hope I can help orgs integrate these solutions.After I do this for one org, scaling to other orgs will be much easier.Some bottlenecks I might addressAll bottlenecks here are known problems for alternative protein companies, according to the experts I spoke to, but please tell me if I’m wrong. For all problems here, there is at least one scientific paper that seems promising enough to explore (in my opinion).Identification of the beginning stages of neural stem cells differentiation into specific cell typesPredicting cell response to different treatmentsIdentify new potential cell cycle regulatory genesIf you’re an org that could use a solution for one or more of these problems, please get in touch!Call to actionHelp me get funded to run this projectGive me feedback on how to write a funding request for this projectContact meBy messaging me here, or emailing [me at weissnoa dot com].Thanks to Yonatan Cale for helping me with this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alternative Proteins + Machine Learning: Project Proposals, Seeking Funding, published by Noa Weiss on December 6, 2022 on The Effective Altruism Forum.TL;DR:I’m an AI & Machine Learning consultant (12 years of experience with data).My website: www.weissnoa.com. TL;DR: research, machine learning, data science strategy, I worked with early stage startups as well as big companies like PayPal.Vegan for 10 years and countingI want to move to work full time on animal welfare projects if I can find such projects where my skill set has added value.I looked for promising problems that I think I might solve, I’ll list my top ones here (please give me feedback)I’m looking for funding to flesh these ideas out, implement them, and offer them to alternative protein companiesIdeas1: Modeling of the extrusion processWhat is this?A physical machine where you put ingredients in, and plant-based meat comes out.It’s widely usedThe vast majority of plant-based meat solutions use this. It is also sometimes used for cultivated (lab-grown) meat.It’s currently “voodoo”Trial and error, mostly unpredictable.It’s a classic problem for MLTrain a model to guess the result based on given ingredients.In problems like this, the model often isn’t perfect but does help narrow down the search space significantly (for example, by recognizing that most sets of ingredients “obviously” won’t work)2: Addressing bottlenecks in Alternative Protein companies using existing researchTL;DR:There are known bottle necks that are relevant for most alternative protein companies. For example: figuring out what stem cells are about to grow into as early as possible.There are existing solutions (in the form of published research) for these problems. For example, see this Nature article.These solutions are not ready for use (there’s a difference between an article and a production ready system).I hope I can help orgs integrate these solutions.After I do this for one org, scaling to other orgs will be much easier.Some bottlenecks I might addressAll bottlenecks here are known problems for alternative protein companies, according to the experts I spoke to, but please tell me if I’m wrong. For all problems here, there is at least one scientific paper that seems promising enough to explore (in my opinion).Identification of the beginning stages of neural stem cells differentiation into specific cell typesPredicting cell response to different treatmentsIdentify new potential cell cycle regulatory genesIf you’re an org that could use a solution for one or more of these problems, please get in touch!Call to actionHelp me get funded to run this projectGive me feedback on how to write a funding request for this projectContact meBy messaging me here, or emailing [me at weissnoa dot com].Thanks to Yonatan Cale for helping me with this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Noa Weiss https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:09 None full 4031
xdwqbmZv2txYPqFAB_EA EA - New interview with SBF on Will MacAskill, "earn to give" and EA by teddyschleifer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New interview with SBF on Will MacAskill, "earn to give" and EA, published by teddyschleifer on December 7, 2022 on The Effective Altruism Forum.Hi folks, just wanted to share a new interview I did with Sam Bankman-Fried about Will MacAskill, effective altruism, and whether "earn to give" led him to make the mistakes he made at FTX. Interested, of course, in any thoughts.TeddyThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
teddyschleifer https://forum.effectivealtruism.org/posts/xdwqbmZv2txYPqFAB/new-interview-with-sbf-on-will-macaskill-earn-to-give-and-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New interview with SBF on Will MacAskill, "earn to give" and EA, published by teddyschleifer on December 7, 2022 on The Effective Altruism Forum.Hi folks, just wanted to share a new interview I did with Sam Bankman-Fried about Will MacAskill, effective altruism, and whether "earn to give" led him to make the mistakes he made at FTX. Interested, of course, in any thoughts.TeddyThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 07 Dec 2022 07:13:25 +0000 EA - New interview with SBF on Will MacAskill, "earn to give" and EA by teddyschleifer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New interview with SBF on Will MacAskill, "earn to give" and EA, published by teddyschleifer on December 7, 2022 on The Effective Altruism Forum.Hi folks, just wanted to share a new interview I did with Sam Bankman-Fried about Will MacAskill, effective altruism, and whether "earn to give" led him to make the mistakes he made at FTX. Interested, of course, in any thoughts.TeddyThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New interview with SBF on Will MacAskill, "earn to give" and EA, published by teddyschleifer on December 7, 2022 on The Effective Altruism Forum.Hi folks, just wanted to share a new interview I did with Sam Bankman-Fried about Will MacAskill, effective altruism, and whether "earn to give" led him to make the mistakes he made at FTX. Interested, of course, in any thoughts.TeddyThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
teddyschleifer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:41 None full 4030
6CFt2swzcZqHspgbM_EA EA - On the Differences Between Ecomodernism and Effective Altruism by PeterSlattery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the Differences Between Ecomodernism and Effective Altruism, published by PeterSlattery on December 6, 2022 on The Effective Altruism Forum.Quickly sharing because this seems relatively well written and well-intentioned and probably worth reading for at least some forum readers. It offers an interesting introduction to the ideas of 'Ecomodernism' (a movement I hadn't heard of) and where these ideas overlap and come apart from the ideas of EA. Also poses some relatively interesting/sophisticated critiques of EA (at least by the standards of what I have come to expect from critics).ExcerptI am an ecomodernist, not an effective altruist. And it’s funny because, over the last few years, I have met many self-identified effective altruists, often themselves quite inclined towards ecomodernism, whose views and habits of mind I also really admire.Your typical ecomodernist and effective altruist each believe in the liberatory power of science and technology. They are both pro-growth, recognizing the robust relationship between economic growth and human freedom, expanding circles of empathy, democratic governance, improved social and public health outcomes, and even ecological sustainability. Notably, every effective altruist I can recall discussing the matter with is pro-nuclear, or at least not reflexively anti-nuclear. That is usually a litmus test for broader pro-abundance views, which effective altruists and ecomodernists both tend to espouse. Ecomodernists and effective altruists both attempt an evidence-based analytical rigor, in contrast to the more myopic, romantic, and utopian frameworks they are working to displace.All that said, there are distinctions in both practice and worldview between the two communities that I think are worth grappling with. Obviously, I don’t speak for every ecomodernist out there, and I am writing this partially to my effective altruist friends in the hopes they will validate or invalidate my premises. But broadly speaking, some distinctions come to mind:Ecomodernists are anthropocentric deontologists, while effective altruists embrace a kind of pan-species utilitarianism.Ecomodernists are more meliorist, while effective altruists are more longtermist.Ecomodernists are institutionalists, while effective altruists evince a consistent skepticism of institutions.Despite the commonalities and opportunities for collaboration, I think it would be a mistake for ecomodernists to overlook these gaps. Buying into what even effective altruists call the more fanatical commitments of their movement risks abandoning what makes ecomodernism necessary in the first place: reinforcing the role of human institutions in democratically creating a better future for humans and nature.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
PeterSlattery https://forum.effectivealtruism.org/posts/6CFt2swzcZqHspgbM/on-the-differences-between-ecomodernism-and-effective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the Differences Between Ecomodernism and Effective Altruism, published by PeterSlattery on December 6, 2022 on The Effective Altruism Forum.Quickly sharing because this seems relatively well written and well-intentioned and probably worth reading for at least some forum readers. It offers an interesting introduction to the ideas of 'Ecomodernism' (a movement I hadn't heard of) and where these ideas overlap and come apart from the ideas of EA. Also poses some relatively interesting/sophisticated critiques of EA (at least by the standards of what I have come to expect from critics).ExcerptI am an ecomodernist, not an effective altruist. And it’s funny because, over the last few years, I have met many self-identified effective altruists, often themselves quite inclined towards ecomodernism, whose views and habits of mind I also really admire.Your typical ecomodernist and effective altruist each believe in the liberatory power of science and technology. They are both pro-growth, recognizing the robust relationship between economic growth and human freedom, expanding circles of empathy, democratic governance, improved social and public health outcomes, and even ecological sustainability. Notably, every effective altruist I can recall discussing the matter with is pro-nuclear, or at least not reflexively anti-nuclear. That is usually a litmus test for broader pro-abundance views, which effective altruists and ecomodernists both tend to espouse. Ecomodernists and effective altruists both attempt an evidence-based analytical rigor, in contrast to the more myopic, romantic, and utopian frameworks they are working to displace.All that said, there are distinctions in both practice and worldview between the two communities that I think are worth grappling with. Obviously, I don’t speak for every ecomodernist out there, and I am writing this partially to my effective altruist friends in the hopes they will validate or invalidate my premises. But broadly speaking, some distinctions come to mind:Ecomodernists are anthropocentric deontologists, while effective altruists embrace a kind of pan-species utilitarianism.Ecomodernists are more meliorist, while effective altruists are more longtermist.Ecomodernists are institutionalists, while effective altruists evince a consistent skepticism of institutions.Despite the commonalities and opportunities for collaboration, I think it would be a mistake for ecomodernists to overlook these gaps. Buying into what even effective altruists call the more fanatical commitments of their movement risks abandoning what makes ecomodernism necessary in the first place: reinforcing the role of human institutions in democratically creating a better future for humans and nature.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 06 Dec 2022 17:56:36 +0000 EA - On the Differences Between Ecomodernism and Effective Altruism by PeterSlattery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the Differences Between Ecomodernism and Effective Altruism, published by PeterSlattery on December 6, 2022 on The Effective Altruism Forum.Quickly sharing because this seems relatively well written and well-intentioned and probably worth reading for at least some forum readers. It offers an interesting introduction to the ideas of 'Ecomodernism' (a movement I hadn't heard of) and where these ideas overlap and come apart from the ideas of EA. Also poses some relatively interesting/sophisticated critiques of EA (at least by the standards of what I have come to expect from critics).ExcerptI am an ecomodernist, not an effective altruist. And it’s funny because, over the last few years, I have met many self-identified effective altruists, often themselves quite inclined towards ecomodernism, whose views and habits of mind I also really admire.Your typical ecomodernist and effective altruist each believe in the liberatory power of science and technology. They are both pro-growth, recognizing the robust relationship between economic growth and human freedom, expanding circles of empathy, democratic governance, improved social and public health outcomes, and even ecological sustainability. Notably, every effective altruist I can recall discussing the matter with is pro-nuclear, or at least not reflexively anti-nuclear. That is usually a litmus test for broader pro-abundance views, which effective altruists and ecomodernists both tend to espouse. Ecomodernists and effective altruists both attempt an evidence-based analytical rigor, in contrast to the more myopic, romantic, and utopian frameworks they are working to displace.All that said, there are distinctions in both practice and worldview between the two communities that I think are worth grappling with. Obviously, I don’t speak for every ecomodernist out there, and I am writing this partially to my effective altruist friends in the hopes they will validate or invalidate my premises. But broadly speaking, some distinctions come to mind:Ecomodernists are anthropocentric deontologists, while effective altruists embrace a kind of pan-species utilitarianism.Ecomodernists are more meliorist, while effective altruists are more longtermist.Ecomodernists are institutionalists, while effective altruists evince a consistent skepticism of institutions.Despite the commonalities and opportunities for collaboration, I think it would be a mistake for ecomodernists to overlook these gaps. Buying into what even effective altruists call the more fanatical commitments of their movement risks abandoning what makes ecomodernism necessary in the first place: reinforcing the role of human institutions in democratically creating a better future for humans and nature.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the Differences Between Ecomodernism and Effective Altruism, published by PeterSlattery on December 6, 2022 on The Effective Altruism Forum.Quickly sharing because this seems relatively well written and well-intentioned and probably worth reading for at least some forum readers. It offers an interesting introduction to the ideas of 'Ecomodernism' (a movement I hadn't heard of) and where these ideas overlap and come apart from the ideas of EA. Also poses some relatively interesting/sophisticated critiques of EA (at least by the standards of what I have come to expect from critics).ExcerptI am an ecomodernist, not an effective altruist. And it’s funny because, over the last few years, I have met many self-identified effective altruists, often themselves quite inclined towards ecomodernism, whose views and habits of mind I also really admire.Your typical ecomodernist and effective altruist each believe in the liberatory power of science and technology. They are both pro-growth, recognizing the robust relationship between economic growth and human freedom, expanding circles of empathy, democratic governance, improved social and public health outcomes, and even ecological sustainability. Notably, every effective altruist I can recall discussing the matter with is pro-nuclear, or at least not reflexively anti-nuclear. That is usually a litmus test for broader pro-abundance views, which effective altruists and ecomodernists both tend to espouse. Ecomodernists and effective altruists both attempt an evidence-based analytical rigor, in contrast to the more myopic, romantic, and utopian frameworks they are working to displace.All that said, there are distinctions in both practice and worldview between the two communities that I think are worth grappling with. Obviously, I don’t speak for every ecomodernist out there, and I am writing this partially to my effective altruist friends in the hopes they will validate or invalidate my premises. But broadly speaking, some distinctions come to mind:Ecomodernists are anthropocentric deontologists, while effective altruists embrace a kind of pan-species utilitarianism.Ecomodernists are more meliorist, while effective altruists are more longtermist.Ecomodernists are institutionalists, while effective altruists evince a consistent skepticism of institutions.Despite the commonalities and opportunities for collaboration, I think it would be a mistake for ecomodernists to overlook these gaps. Buying into what even effective altruists call the more fanatical commitments of their movement risks abandoning what makes ecomodernism necessary in the first place: reinforcing the role of human institutions in democratically creating a better future for humans and nature.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
PeterSlattery https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:45 None full 4023
xof7iFB3uh8Kc53bG_EA EA - Why did CEA buy Wytham Abbey? by Jeroen W Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why did CEA buy Wytham Abbey?, published by Jeroen W on December 6, 2022 on The Effective Altruism Forum.Yesterday morning I woke up and saw this tweet by Émile Torres:I was shocked, angry and upset at first. Especially since it appears that the estate was for sale last year for 15 million pounds:I'm not a big fan of Émile's writing and how they often misrepresent the EA movement. But that's not what this question is about, because they do raise a good point here: Why did CEA buy this property? My trust in CEA has been a bit shaky lately, and this doesn't help.Apparently it was already mentioned in the New Yorker piece:#:~:text=Last year%2C the Centre for Effective Altruism bought Wytham Abbey%2C a palatial estate near Oxford%2C built in 1480. Money%2C which no longer seemed an object%2C was increasingly being reinvested in the community itself."Last year, the Centre for Effective Altruism bought Wytham Abbey, a palatial estate near Oxford, built in 1480. Money, which no longer seemed an object, was increasingly being reinvested in the community itself."For some reason I glanced over it at the time, or I just didn't realize the seriousness of it.Upon more research, I came across this comment by Shakeel Hashim: "In April, Effective Ventures purchased Wytham Abbey and some land around it (but <1% of the 2,500 acre estate you're suggesting). Wytham is in the process of being established as a convening centre to run workshops and meetings that bring together people to think seriously about how to address important problems in the world. The vision is modelled on traditional specialist conference centres, e.g. Oberwolfach, The Rockefeller Foundation Bellagio Center or the Brocher Foundation.The purchase was made from a large grant made specifically for this. There was no money from FTX or affiliated individuals or organizations."I'm very relieved to hear money from individual donors wasn't used. And the <1% suggests 15 million pounds perhaps wasn't spent. Still, I'd love to hear and understand more about this project and why CEA thinks it's cost-effective. What is the EV calculation behind it?Like the New Yorker piece points out, with more funding there has been a lot of spending within the movement itself. And that's fine, great even. This way more outreach can be done and the movement can grow. But we don't want to be too self-serving, and I'm scared too much of this thinking will lead to rationalizing lavish expenses (and I'm afraid this is already happening). There needs to be more transparency behind big expenses.Edit to add: If this expense has been made a while back, why not announce it then?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeroen W https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/why-did-cea-buy-wytham-abbey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why did CEA buy Wytham Abbey?, published by Jeroen W on December 6, 2022 on The Effective Altruism Forum.Yesterday morning I woke up and saw this tweet by Émile Torres:I was shocked, angry and upset at first. Especially since it appears that the estate was for sale last year for 15 million pounds:I'm not a big fan of Émile's writing and how they often misrepresent the EA movement. But that's not what this question is about, because they do raise a good point here: Why did CEA buy this property? My trust in CEA has been a bit shaky lately, and this doesn't help.Apparently it was already mentioned in the New Yorker piece:#:~:text=Last year%2C the Centre for Effective Altruism bought Wytham Abbey%2C a palatial estate near Oxford%2C built in 1480. Money%2C which no longer seemed an object%2C was increasingly being reinvested in the community itself."Last year, the Centre for Effective Altruism bought Wytham Abbey, a palatial estate near Oxford, built in 1480. Money, which no longer seemed an object, was increasingly being reinvested in the community itself."For some reason I glanced over it at the time, or I just didn't realize the seriousness of it.Upon more research, I came across this comment by Shakeel Hashim: "In April, Effective Ventures purchased Wytham Abbey and some land around it (but <1% of the 2,500 acre estate you're suggesting). Wytham is in the process of being established as a convening centre to run workshops and meetings that bring together people to think seriously about how to address important problems in the world. The vision is modelled on traditional specialist conference centres, e.g. Oberwolfach, The Rockefeller Foundation Bellagio Center or the Brocher Foundation.The purchase was made from a large grant made specifically for this. There was no money from FTX or affiliated individuals or organizations."I'm very relieved to hear money from individual donors wasn't used. And the <1% suggests 15 million pounds perhaps wasn't spent. Still, I'd love to hear and understand more about this project and why CEA thinks it's cost-effective. What is the EV calculation behind it?Like the New Yorker piece points out, with more funding there has been a lot of spending within the movement itself. And that's fine, great even. This way more outreach can be done and the movement can grow. But we don't want to be too self-serving, and I'm scared too much of this thinking will lead to rationalizing lavish expenses (and I'm afraid this is already happening). There needs to be more transparency behind big expenses.Edit to add: If this expense has been made a while back, why not announce it then?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 06 Dec 2022 15:06:49 +0000 EA - Why did CEA buy Wytham Abbey? by Jeroen W Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why did CEA buy Wytham Abbey?, published by Jeroen W on December 6, 2022 on The Effective Altruism Forum.Yesterday morning I woke up and saw this tweet by Émile Torres:I was shocked, angry and upset at first. Especially since it appears that the estate was for sale last year for 15 million pounds:I'm not a big fan of Émile's writing and how they often misrepresent the EA movement. But that's not what this question is about, because they do raise a good point here: Why did CEA buy this property? My trust in CEA has been a bit shaky lately, and this doesn't help.Apparently it was already mentioned in the New Yorker piece:#:~:text=Last year%2C the Centre for Effective Altruism bought Wytham Abbey%2C a palatial estate near Oxford%2C built in 1480. Money%2C which no longer seemed an object%2C was increasingly being reinvested in the community itself."Last year, the Centre for Effective Altruism bought Wytham Abbey, a palatial estate near Oxford, built in 1480. Money, which no longer seemed an object, was increasingly being reinvested in the community itself."For some reason I glanced over it at the time, or I just didn't realize the seriousness of it.Upon more research, I came across this comment by Shakeel Hashim: "In April, Effective Ventures purchased Wytham Abbey and some land around it (but <1% of the 2,500 acre estate you're suggesting). Wytham is in the process of being established as a convening centre to run workshops and meetings that bring together people to think seriously about how to address important problems in the world. The vision is modelled on traditional specialist conference centres, e.g. Oberwolfach, The Rockefeller Foundation Bellagio Center or the Brocher Foundation.The purchase was made from a large grant made specifically for this. There was no money from FTX or affiliated individuals or organizations."I'm very relieved to hear money from individual donors wasn't used. And the <1% suggests 15 million pounds perhaps wasn't spent. Still, I'd love to hear and understand more about this project and why CEA thinks it's cost-effective. What is the EV calculation behind it?Like the New Yorker piece points out, with more funding there has been a lot of spending within the movement itself. And that's fine, great even. This way more outreach can be done and the movement can grow. But we don't want to be too self-serving, and I'm scared too much of this thinking will lead to rationalizing lavish expenses (and I'm afraid this is already happening). There needs to be more transparency behind big expenses.Edit to add: If this expense has been made a while back, why not announce it then?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why did CEA buy Wytham Abbey?, published by Jeroen W on December 6, 2022 on The Effective Altruism Forum.Yesterday morning I woke up and saw this tweet by Émile Torres:I was shocked, angry and upset at first. Especially since it appears that the estate was for sale last year for 15 million pounds:I'm not a big fan of Émile's writing and how they often misrepresent the EA movement. But that's not what this question is about, because they do raise a good point here: Why did CEA buy this property? My trust in CEA has been a bit shaky lately, and this doesn't help.Apparently it was already mentioned in the New Yorker piece:#:~:text=Last year%2C the Centre for Effective Altruism bought Wytham Abbey%2C a palatial estate near Oxford%2C built in 1480. Money%2C which no longer seemed an object%2C was increasingly being reinvested in the community itself."Last year, the Centre for Effective Altruism bought Wytham Abbey, a palatial estate near Oxford, built in 1480. Money, which no longer seemed an object, was increasingly being reinvested in the community itself."For some reason I glanced over it at the time, or I just didn't realize the seriousness of it.Upon more research, I came across this comment by Shakeel Hashim: "In April, Effective Ventures purchased Wytham Abbey and some land around it (but <1% of the 2,500 acre estate you're suggesting). Wytham is in the process of being established as a convening centre to run workshops and meetings that bring together people to think seriously about how to address important problems in the world. The vision is modelled on traditional specialist conference centres, e.g. Oberwolfach, The Rockefeller Foundation Bellagio Center or the Brocher Foundation.The purchase was made from a large grant made specifically for this. There was no money from FTX or affiliated individuals or organizations."I'm very relieved to hear money from individual donors wasn't used. And the <1% suggests 15 million pounds perhaps wasn't spent. Still, I'd love to hear and understand more about this project and why CEA thinks it's cost-effective. What is the EV calculation behind it?Like the New Yorker piece points out, with more funding there has been a lot of spending within the movement itself. And that's fine, great even. This way more outreach can be done and the movement can grow. But we don't want to be too self-serving, and I'm scared too much of this thinking will lead to rationalizing lavish expenses (and I'm afraid this is already happening). There needs to be more transparency behind big expenses.Edit to add: If this expense has been made a while back, why not announce it then?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeroen W https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:49 None full 4021
SHCABNadx8GycWAaJ_EA EA - Climate research webinar by Rethink Priorities on Tuesday, December 13 at 11 am EST by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Climate research webinar by Rethink Priorities on Tuesday, December 13 at 11 am EST, published by Rethink Priorities on December 5, 2022 on The Effective Altruism Forum.Since its inception in September 2021, the Global Health and Development team at Rethink Priorities has conducted research on a variety of topics. Some examples include assessing the effectiveness of inducement prizes to spur innovation, assessing promising livelihood interventions, and estimating the potential repercussions of lead paint exposure in low- and middle-income countries (report forthcoming).Our team has also worked on a variety of climate reports this past year. Our resident climate expert, Senior Environmental Economist Greer Gosnell, will give a presentation regarding the research process and findings of one such report that evaluates anti-deforestation as a promising climate solution. You will be able to read the full report in a forthcoming EA Forum post, so stay tuned for updates.The webinar, which will take place on Tuesday, December 13, 2022, at 11 am EST / 4 pm GMT, will include a brief presentation followed by a Q&A session. If you are interested in attending, please complete this form to sign up for the event. A zoom link will be shared with registered participants the day before the webinar. Questions from all participants are very welcome; anonymous attendance is possible as well. We will record the meeting and share the recording with those who have signed up but are unable to join us live.We look forward to sharing some of our research with you, and hope to see many of you there!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/SHCABNadx8GycWAaJ/climate-research-webinar-by-rethink-priorities-on-tuesday Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Climate research webinar by Rethink Priorities on Tuesday, December 13 at 11 am EST, published by Rethink Priorities on December 5, 2022 on The Effective Altruism Forum.Since its inception in September 2021, the Global Health and Development team at Rethink Priorities has conducted research on a variety of topics. Some examples include assessing the effectiveness of inducement prizes to spur innovation, assessing promising livelihood interventions, and estimating the potential repercussions of lead paint exposure in low- and middle-income countries (report forthcoming).Our team has also worked on a variety of climate reports this past year. Our resident climate expert, Senior Environmental Economist Greer Gosnell, will give a presentation regarding the research process and findings of one such report that evaluates anti-deforestation as a promising climate solution. You will be able to read the full report in a forthcoming EA Forum post, so stay tuned for updates.The webinar, which will take place on Tuesday, December 13, 2022, at 11 am EST / 4 pm GMT, will include a brief presentation followed by a Q&A session. If you are interested in attending, please complete this form to sign up for the event. A zoom link will be shared with registered participants the day before the webinar. Questions from all participants are very welcome; anonymous attendance is possible as well. We will record the meeting and share the recording with those who have signed up but are unable to join us live.We look forward to sharing some of our research with you, and hope to see many of you there!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 06 Dec 2022 13:42:38 +0000 EA - Climate research webinar by Rethink Priorities on Tuesday, December 13 at 11 am EST by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Climate research webinar by Rethink Priorities on Tuesday, December 13 at 11 am EST, published by Rethink Priorities on December 5, 2022 on The Effective Altruism Forum.Since its inception in September 2021, the Global Health and Development team at Rethink Priorities has conducted research on a variety of topics. Some examples include assessing the effectiveness of inducement prizes to spur innovation, assessing promising livelihood interventions, and estimating the potential repercussions of lead paint exposure in low- and middle-income countries (report forthcoming).Our team has also worked on a variety of climate reports this past year. Our resident climate expert, Senior Environmental Economist Greer Gosnell, will give a presentation regarding the research process and findings of one such report that evaluates anti-deforestation as a promising climate solution. You will be able to read the full report in a forthcoming EA Forum post, so stay tuned for updates.The webinar, which will take place on Tuesday, December 13, 2022, at 11 am EST / 4 pm GMT, will include a brief presentation followed by a Q&A session. If you are interested in attending, please complete this form to sign up for the event. A zoom link will be shared with registered participants the day before the webinar. Questions from all participants are very welcome; anonymous attendance is possible as well. We will record the meeting and share the recording with those who have signed up but are unable to join us live.We look forward to sharing some of our research with you, and hope to see many of you there!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Climate research webinar by Rethink Priorities on Tuesday, December 13 at 11 am EST, published by Rethink Priorities on December 5, 2022 on The Effective Altruism Forum.Since its inception in September 2021, the Global Health and Development team at Rethink Priorities has conducted research on a variety of topics. Some examples include assessing the effectiveness of inducement prizes to spur innovation, assessing promising livelihood interventions, and estimating the potential repercussions of lead paint exposure in low- and middle-income countries (report forthcoming).Our team has also worked on a variety of climate reports this past year. Our resident climate expert, Senior Environmental Economist Greer Gosnell, will give a presentation regarding the research process and findings of one such report that evaluates anti-deforestation as a promising climate solution. You will be able to read the full report in a forthcoming EA Forum post, so stay tuned for updates.The webinar, which will take place on Tuesday, December 13, 2022, at 11 am EST / 4 pm GMT, will include a brief presentation followed by a Q&A session. If you are interested in attending, please complete this form to sign up for the event. A zoom link will be shared with registered participants the day before the webinar. Questions from all participants are very welcome; anonymous attendance is possible as well. We will record the meeting and share the recording with those who have signed up but are unable to join us live.We look forward to sharing some of our research with you, and hope to see many of you there!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Rethink Priorities https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:49 None full 4024
4jMsqrwkcXQWauciJ_EA EA - The EA Infrastructure Fund seems to have paused its grantmaking and approved grant payments. Why? by Markus Amalthea Magnuson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Infrastructure Fund seems to have paused its grantmaking and approved grant payments. Why?, published by Markus Amalthea Magnuson on December 6, 2022 on The Effective Altruism Forum.Yesterday someone said the EA Infrastructure Fund had paused its grantmaking. They later added that they had people send in grant proposals and got an email back confirming this pause. Today I got confirmation from a second person who said they know people with approved grants from the EAIF that have had the payments of those grants paused. I'm trying to find any official communication on this but have not found any yet. This might be relevant though, and possibly this. Does anyone know what is going on here?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Markus Amalthea Magnuson https://forum.effectivealtruism.org/posts/4jMsqrwkcXQWauciJ/the-ea-infrastructure-fund-seems-to-have-paused-its Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Infrastructure Fund seems to have paused its grantmaking and approved grant payments. Why?, published by Markus Amalthea Magnuson on December 6, 2022 on The Effective Altruism Forum.Yesterday someone said the EA Infrastructure Fund had paused its grantmaking. They later added that they had people send in grant proposals and got an email back confirming this pause. Today I got confirmation from a second person who said they know people with approved grants from the EAIF that have had the payments of those grants paused. I'm trying to find any official communication on this but have not found any yet. This might be relevant though, and possibly this. Does anyone know what is going on here?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 06 Dec 2022 12:16:36 +0000 EA - The EA Infrastructure Fund seems to have paused its grantmaking and approved grant payments. Why? by Markus Amalthea Magnuson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Infrastructure Fund seems to have paused its grantmaking and approved grant payments. Why?, published by Markus Amalthea Magnuson on December 6, 2022 on The Effective Altruism Forum.Yesterday someone said the EA Infrastructure Fund had paused its grantmaking. They later added that they had people send in grant proposals and got an email back confirming this pause. Today I got confirmation from a second person who said they know people with approved grants from the EAIF that have had the payments of those grants paused. I'm trying to find any official communication on this but have not found any yet. This might be relevant though, and possibly this. Does anyone know what is going on here?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Infrastructure Fund seems to have paused its grantmaking and approved grant payments. Why?, published by Markus Amalthea Magnuson on December 6, 2022 on The Effective Altruism Forum.Yesterday someone said the EA Infrastructure Fund had paused its grantmaking. They later added that they had people send in grant proposals and got an email back confirming this pause. Today I got confirmation from a second person who said they know people with approved grants from the EAIF that have had the payments of those grants paused. I'm trying to find any official communication on this but have not found any yet. This might be relevant though, and possibly this. Does anyone know what is going on here?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Markus Amalthea Magnuson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:00 None full 4022
K5Snxo5EhgmwJJjR2_EA EA - Announcing: Audio narrations of EA Forum posts by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing: Audio narrations of EA Forum posts, published by peterhartree on December 5, 2022 on The Effective Altruism Forum.We've started making audio narrations of some of the best posts from the EA Forum.As of today, you can subscribe to the podcast:EA Forum (All audio)Audio narrations from the Effective Altruism Forum, including curated posts and other great writing.Subscribe:Apple Podcasts | Google Podcasts (soon) | Spotify | RSSYou'll also start to see narrations embedded on the EA Forum post pages themselves.If a narration is available, you'll see a blue loudspeaker button:What can I listen to now?Some of the winning entries from the EA Criticism and Red Teaming Contest:Are you really in a race? The cautionary tales of Szilárd and Ellsberg by Haydn BelfieldDoes economic growth meaningfully improve well-being? by Vadim AlbinskyEffective altruism in the garden of ends by Tyler AltermanNotes on effective altruism by Michael NielsenBiological Anchors external review by Jennifer LinSome posts that were recently marked as “curated”:Counterarguments to the basic AI risk case by Katja GraceMy take on What We Owe the Future by Eli LiflandPopulation ethics without axiology: A framework by Lukas Gloor500 million, but not a single one more by jaiAGI and lock-in by Lukas Finnveden, Jess Riedel, and Carl ShulmanWhat matters to shrimps? Factors affecting shrimp welfare in aquaculture by Lucas Lewit-Mendes and Aaron BoddyCase for emergency response teams by Gavin and Jan KulveitWhat happens on the average day? by Rose HadsharHow bad could a war get? by Stephen Clare and Rani MartinCool. But I have a lot of things to listen to. Could I just get the curated posts, or maybe just the summaries?Yes. You can subscribe to either of these:EA Forum (Curated posts)Audio narrations of curated posts from the Effective Altruism Forum. Subscribe:Apple Podcasts | Google Podcasts (soon) | Spotify | RSSEA Forum (Summaries)Weekly summaries of the best EA Forum posts. Written by Zoe Williams (Rethink Priorities) and narrated by Coleman Jackson Snell.Subscribe:Apple Podcasts | Google Podcasts (soon) | Spotify | RSSWhat about AI narrations? I want to listen to everything!The Nonlinear Library project is currently generating AI narrations of all posts that meet a fairly low karma threshold.Within the next few months, we are hoping to collaborate with Nonlinear to develop a system that generates even better AI narrations for most or all EA Forum posts. We see a path to better pronunciation, emphasis, and tone, and also to much better handling of images, graphs and formulae.Who is working on this?This project is run by the EA Forum Team, in collaboration with TYPE III AUDIO.What about the existing “EA Forum Podcast”?In July 2021, Garrett Baker and David Reinstein started a volunteer project to narrate EA Forum posts. The most recent narration was published in January 2022. Since September they’ve been publishing Zoe and Coleman’s weekly summary episodes.We are grateful to Garrett and David for their work on this.We’ve not yet heard what they plan to do next—presumably they’ll release an update to subscribers in due course.Thoughts, feedback, suggestions?We'd love to hear from you! Please comment below, or write to team@type3.audio.If you’re already listening to Nonlinear Library AI narrations, we’d be especially interested to hear what you think of them. What would you most like to see improved?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
peterhartree https://forum.effectivealtruism.org/posts/K5Snxo5EhgmwJJjR2/announcing-audio-narrations-of-ea-forum-posts-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing: Audio narrations of EA Forum posts, published by peterhartree on December 5, 2022 on The Effective Altruism Forum.We've started making audio narrations of some of the best posts from the EA Forum.As of today, you can subscribe to the podcast:EA Forum (All audio)Audio narrations from the Effective Altruism Forum, including curated posts and other great writing.Subscribe:Apple Podcasts | Google Podcasts (soon) | Spotify | RSSYou'll also start to see narrations embedded on the EA Forum post pages themselves.If a narration is available, you'll see a blue loudspeaker button:What can I listen to now?Some of the winning entries from the EA Criticism and Red Teaming Contest:Are you really in a race? The cautionary tales of Szilárd and Ellsberg by Haydn BelfieldDoes economic growth meaningfully improve well-being? by Vadim AlbinskyEffective altruism in the garden of ends by Tyler AltermanNotes on effective altruism by Michael NielsenBiological Anchors external review by Jennifer LinSome posts that were recently marked as “curated”:Counterarguments to the basic AI risk case by Katja GraceMy take on What We Owe the Future by Eli LiflandPopulation ethics without axiology: A framework by Lukas Gloor500 million, but not a single one more by jaiAGI and lock-in by Lukas Finnveden, Jess Riedel, and Carl ShulmanWhat matters to shrimps? Factors affecting shrimp welfare in aquaculture by Lucas Lewit-Mendes and Aaron BoddyCase for emergency response teams by Gavin and Jan KulveitWhat happens on the average day? by Rose HadsharHow bad could a war get? by Stephen Clare and Rani MartinCool. But I have a lot of things to listen to. Could I just get the curated posts, or maybe just the summaries?Yes. You can subscribe to either of these:EA Forum (Curated posts)Audio narrations of curated posts from the Effective Altruism Forum. Subscribe:Apple Podcasts | Google Podcasts (soon) | Spotify | RSSEA Forum (Summaries)Weekly summaries of the best EA Forum posts. Written by Zoe Williams (Rethink Priorities) and narrated by Coleman Jackson Snell.Subscribe:Apple Podcasts | Google Podcasts (soon) | Spotify | RSSWhat about AI narrations? I want to listen to everything!The Nonlinear Library project is currently generating AI narrations of all posts that meet a fairly low karma threshold.Within the next few months, we are hoping to collaborate with Nonlinear to develop a system that generates even better AI narrations for most or all EA Forum posts. We see a path to better pronunciation, emphasis, and tone, and also to much better handling of images, graphs and formulae.Who is working on this?This project is run by the EA Forum Team, in collaboration with TYPE III AUDIO.What about the existing “EA Forum Podcast”?In July 2021, Garrett Baker and David Reinstein started a volunteer project to narrate EA Forum posts. The most recent narration was published in January 2022. Since September they’ve been publishing Zoe and Coleman’s weekly summary episodes.We are grateful to Garrett and David for their work on this.We’ve not yet heard what they plan to do next—presumably they’ll release an update to subscribers in due course.Thoughts, feedback, suggestions?We'd love to hear from you! Please comment below, or write to team@type3.audio.If you’re already listening to Nonlinear Library AI narrations, we’d be especially interested to hear what you think of them. What would you most like to see improved?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 05 Dec 2022 23:18:47 +0000 EA - Announcing: Audio narrations of EA Forum posts by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing: Audio narrations of EA Forum posts, published by peterhartree on December 5, 2022 on The Effective Altruism Forum.We've started making audio narrations of some of the best posts from the EA Forum.As of today, you can subscribe to the podcast:EA Forum (All audio)Audio narrations from the Effective Altruism Forum, including curated posts and other great writing.Subscribe:Apple Podcasts | Google Podcasts (soon) | Spotify | RSSYou'll also start to see narrations embedded on the EA Forum post pages themselves.If a narration is available, you'll see a blue loudspeaker button:What can I listen to now?Some of the winning entries from the EA Criticism and Red Teaming Contest:Are you really in a race? The cautionary tales of Szilárd and Ellsberg by Haydn BelfieldDoes economic growth meaningfully improve well-being? by Vadim AlbinskyEffective altruism in the garden of ends by Tyler AltermanNotes on effective altruism by Michael NielsenBiological Anchors external review by Jennifer LinSome posts that were recently marked as “curated”:Counterarguments to the basic AI risk case by Katja GraceMy take on What We Owe the Future by Eli LiflandPopulation ethics without axiology: A framework by Lukas Gloor500 million, but not a single one more by jaiAGI and lock-in by Lukas Finnveden, Jess Riedel, and Carl ShulmanWhat matters to shrimps? Factors affecting shrimp welfare in aquaculture by Lucas Lewit-Mendes and Aaron BoddyCase for emergency response teams by Gavin and Jan KulveitWhat happens on the average day? by Rose HadsharHow bad could a war get? by Stephen Clare and Rani MartinCool. But I have a lot of things to listen to. Could I just get the curated posts, or maybe just the summaries?Yes. You can subscribe to either of these:EA Forum (Curated posts)Audio narrations of curated posts from the Effective Altruism Forum. Subscribe:Apple Podcasts | Google Podcasts (soon) | Spotify | RSSEA Forum (Summaries)Weekly summaries of the best EA Forum posts. Written by Zoe Williams (Rethink Priorities) and narrated by Coleman Jackson Snell.Subscribe:Apple Podcasts | Google Podcasts (soon) | Spotify | RSSWhat about AI narrations? I want to listen to everything!The Nonlinear Library project is currently generating AI narrations of all posts that meet a fairly low karma threshold.Within the next few months, we are hoping to collaborate with Nonlinear to develop a system that generates even better AI narrations for most or all EA Forum posts. We see a path to better pronunciation, emphasis, and tone, and also to much better handling of images, graphs and formulae.Who is working on this?This project is run by the EA Forum Team, in collaboration with TYPE III AUDIO.What about the existing “EA Forum Podcast”?In July 2021, Garrett Baker and David Reinstein started a volunteer project to narrate EA Forum posts. The most recent narration was published in January 2022. Since September they’ve been publishing Zoe and Coleman’s weekly summary episodes.We are grateful to Garrett and David for their work on this.We’ve not yet heard what they plan to do next—presumably they’ll release an update to subscribers in due course.Thoughts, feedback, suggestions?We'd love to hear from you! Please comment below, or write to team@type3.audio.If you’re already listening to Nonlinear Library AI narrations, we’d be especially interested to hear what you think of them. What would you most like to see improved?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing: Audio narrations of EA Forum posts, published by peterhartree on December 5, 2022 on The Effective Altruism Forum.We've started making audio narrations of some of the best posts from the EA Forum.As of today, you can subscribe to the podcast:EA Forum (All audio)Audio narrations from the Effective Altruism Forum, including curated posts and other great writing.Subscribe:Apple Podcasts | Google Podcasts (soon) | Spotify | RSSYou'll also start to see narrations embedded on the EA Forum post pages themselves.If a narration is available, you'll see a blue loudspeaker button:What can I listen to now?Some of the winning entries from the EA Criticism and Red Teaming Contest:Are you really in a race? The cautionary tales of Szilárd and Ellsberg by Haydn BelfieldDoes economic growth meaningfully improve well-being? by Vadim AlbinskyEffective altruism in the garden of ends by Tyler AltermanNotes on effective altruism by Michael NielsenBiological Anchors external review by Jennifer LinSome posts that were recently marked as “curated”:Counterarguments to the basic AI risk case by Katja GraceMy take on What We Owe the Future by Eli LiflandPopulation ethics without axiology: A framework by Lukas Gloor500 million, but not a single one more by jaiAGI and lock-in by Lukas Finnveden, Jess Riedel, and Carl ShulmanWhat matters to shrimps? Factors affecting shrimp welfare in aquaculture by Lucas Lewit-Mendes and Aaron BoddyCase for emergency response teams by Gavin and Jan KulveitWhat happens on the average day? by Rose HadsharHow bad could a war get? by Stephen Clare and Rani MartinCool. But I have a lot of things to listen to. Could I just get the curated posts, or maybe just the summaries?Yes. You can subscribe to either of these:EA Forum (Curated posts)Audio narrations of curated posts from the Effective Altruism Forum. Subscribe:Apple Podcasts | Google Podcasts (soon) | Spotify | RSSEA Forum (Summaries)Weekly summaries of the best EA Forum posts. Written by Zoe Williams (Rethink Priorities) and narrated by Coleman Jackson Snell.Subscribe:Apple Podcasts | Google Podcasts (soon) | Spotify | RSSWhat about AI narrations? I want to listen to everything!The Nonlinear Library project is currently generating AI narrations of all posts that meet a fairly low karma threshold.Within the next few months, we are hoping to collaborate with Nonlinear to develop a system that generates even better AI narrations for most or all EA Forum posts. We see a path to better pronunciation, emphasis, and tone, and also to much better handling of images, graphs and formulae.Who is working on this?This project is run by the EA Forum Team, in collaboration with TYPE III AUDIO.What about the existing “EA Forum Podcast”?In July 2021, Garrett Baker and David Reinstein started a volunteer project to narrate EA Forum posts. The most recent narration was published in January 2022. Since September they’ve been publishing Zoe and Coleman’s weekly summary episodes.We are grateful to Garrett and David for their work on this.We’ve not yet heard what they plan to do next—presumably they’ll release an update to subscribers in due course.Thoughts, feedback, suggestions?We'd love to hear from you! Please comment below, or write to team@type3.audio.If you’re already listening to Nonlinear Library AI narrations, we’d be especially interested to hear what you think of them. What would you most like to see improved?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
peterhartree https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:49 None full 4017
aW9ANJg7s3Q9BASgu_EA EA - SoGive Grants: a promising pilot. Our reflections and payout report. by SoGive Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SoGive Grants: a promising pilot. Our reflections and payout report., published by SoGive on December 5, 2022 on The Effective Altruism Forum.Executive SummarySoGive ran a pilot grants program, and we ended up granting a total of £223k to 6 projects (see the section “Our grantee payouts” for details). We were pleased to be able to give high quality feedback to all rejected applicants (which appears to be a gap in the grants market), and were also able to help some projects make tweaks such that we could make a positive decision to fund them. We also noted that despite explicitly encouraging biosecurity projects we only received one application in this field. We tracked our initial impressions of grants and found that the initial video call with applicants was discriminatory and helped unearth lots of decision-relevant information, but that a second video call didn’t change our evaluations very much.Given we added value to the grants market by providing feedback to all applicants, helped some candidates tweak their project proposal and identified 6 promising applications to direct funding towards, we would run SoGive Grants again next year, but perform a more light touch approach, assuming that the donors we work with agree to this, or new ones opt to contribute to the pool of funding. This report also includes thoughts on how we might improve the questions asked in the application form (which currently mirrors the application form used by EA Funds).IntroductionBack in April SoGive launched our first ever applied-for granting program, this program has now wrapped up and this post sets out our experiences and lessons learned. For those of you not familiar with SoGive we’re an EA-aligned research organisation and think tank.This post will cover:Summary of the SoGive Grants programAdvice to grant applicantsReflections on our evaluation process and criteriaAdvice for people considering running their own grants programOur grantee payoutsWe’d like to say a huge thank you to all of the SoGive team who helped with this project, and also to the external advisors who offered their time and expertise. Also, as discussed in the report we referred back to a lot of publicly posted EA material (typically from the EA forum) so for those individuals and organisations who take the time to write up their views and considerations online it is incredibly helpful and it affects real world decisions - thank you.If any potential donors reading this want their funding to contribute to the funding pool for the next round of SoGive grants, then please get in touch (isobel@sogive.org).1. Summary of the SoGive Grants programWhy run a grants program?Even at the start of 2022 when funding conditions were more favourable, we believed that another funding body would be valuable. This reflected our view of the value of there being more vetting capacity. We also thought Joey made a valuable contribution in this post.Part of our work at SoGive involves advising high net worth individuals, and as such we generally scope out opportunities for high impact giving, so in order to find the highest impact donation opportunities we decided to formalise and open up this process. Prior to SoGive grants, we have tended to guide funds towards organisations that Founders Pledge, Open Phil, and GiveWell might also recommend along with interventions that SoGive has specifically researched or investigated for our donors. Especially in the case of following Open Phil’s grants, we had doubts that this was the highest impact donation advice we could offer, since Open Phil makes no guarantee that a grantee still has room for more funding (after receiving a grant from Open Phil).We also noticed a gap in the market for a granting program that provided high quality feedback (or, for some applicants, any feed...]]>
SoGive https://forum.effectivealtruism.org/posts/aW9ANJg7s3Q9BASgu/sogive-grants-a-promising-pilot-our-reflections-and-payout Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SoGive Grants: a promising pilot. Our reflections and payout report., published by SoGive on December 5, 2022 on The Effective Altruism Forum.Executive SummarySoGive ran a pilot grants program, and we ended up granting a total of £223k to 6 projects (see the section “Our grantee payouts” for details). We were pleased to be able to give high quality feedback to all rejected applicants (which appears to be a gap in the grants market), and were also able to help some projects make tweaks such that we could make a positive decision to fund them. We also noted that despite explicitly encouraging biosecurity projects we only received one application in this field. We tracked our initial impressions of grants and found that the initial video call with applicants was discriminatory and helped unearth lots of decision-relevant information, but that a second video call didn’t change our evaluations very much.Given we added value to the grants market by providing feedback to all applicants, helped some candidates tweak their project proposal and identified 6 promising applications to direct funding towards, we would run SoGive Grants again next year, but perform a more light touch approach, assuming that the donors we work with agree to this, or new ones opt to contribute to the pool of funding. This report also includes thoughts on how we might improve the questions asked in the application form (which currently mirrors the application form used by EA Funds).IntroductionBack in April SoGive launched our first ever applied-for granting program, this program has now wrapped up and this post sets out our experiences and lessons learned. For those of you not familiar with SoGive we’re an EA-aligned research organisation and think tank.This post will cover:Summary of the SoGive Grants programAdvice to grant applicantsReflections on our evaluation process and criteriaAdvice for people considering running their own grants programOur grantee payoutsWe’d like to say a huge thank you to all of the SoGive team who helped with this project, and also to the external advisors who offered their time and expertise. Also, as discussed in the report we referred back to a lot of publicly posted EA material (typically from the EA forum) so for those individuals and organisations who take the time to write up their views and considerations online it is incredibly helpful and it affects real world decisions - thank you.If any potential donors reading this want their funding to contribute to the funding pool for the next round of SoGive grants, then please get in touch (isobel@sogive.org).1. Summary of the SoGive Grants programWhy run a grants program?Even at the start of 2022 when funding conditions were more favourable, we believed that another funding body would be valuable. This reflected our view of the value of there being more vetting capacity. We also thought Joey made a valuable contribution in this post.Part of our work at SoGive involves advising high net worth individuals, and as such we generally scope out opportunities for high impact giving, so in order to find the highest impact donation opportunities we decided to formalise and open up this process. Prior to SoGive grants, we have tended to guide funds towards organisations that Founders Pledge, Open Phil, and GiveWell might also recommend along with interventions that SoGive has specifically researched or investigated for our donors. Especially in the case of following Open Phil’s grants, we had doubts that this was the highest impact donation advice we could offer, since Open Phil makes no guarantee that a grantee still has room for more funding (after receiving a grant from Open Phil).We also noticed a gap in the market for a granting program that provided high quality feedback (or, for some applicants, any feed...]]>
Mon, 05 Dec 2022 21:58:21 +0000 EA - SoGive Grants: a promising pilot. Our reflections and payout report. by SoGive Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SoGive Grants: a promising pilot. Our reflections and payout report., published by SoGive on December 5, 2022 on The Effective Altruism Forum.Executive SummarySoGive ran a pilot grants program, and we ended up granting a total of £223k to 6 projects (see the section “Our grantee payouts” for details). We were pleased to be able to give high quality feedback to all rejected applicants (which appears to be a gap in the grants market), and were also able to help some projects make tweaks such that we could make a positive decision to fund them. We also noted that despite explicitly encouraging biosecurity projects we only received one application in this field. We tracked our initial impressions of grants and found that the initial video call with applicants was discriminatory and helped unearth lots of decision-relevant information, but that a second video call didn’t change our evaluations very much.Given we added value to the grants market by providing feedback to all applicants, helped some candidates tweak their project proposal and identified 6 promising applications to direct funding towards, we would run SoGive Grants again next year, but perform a more light touch approach, assuming that the donors we work with agree to this, or new ones opt to contribute to the pool of funding. This report also includes thoughts on how we might improve the questions asked in the application form (which currently mirrors the application form used by EA Funds).IntroductionBack in April SoGive launched our first ever applied-for granting program, this program has now wrapped up and this post sets out our experiences and lessons learned. For those of you not familiar with SoGive we’re an EA-aligned research organisation and think tank.This post will cover:Summary of the SoGive Grants programAdvice to grant applicantsReflections on our evaluation process and criteriaAdvice for people considering running their own grants programOur grantee payoutsWe’d like to say a huge thank you to all of the SoGive team who helped with this project, and also to the external advisors who offered their time and expertise. Also, as discussed in the report we referred back to a lot of publicly posted EA material (typically from the EA forum) so for those individuals and organisations who take the time to write up their views and considerations online it is incredibly helpful and it affects real world decisions - thank you.If any potential donors reading this want their funding to contribute to the funding pool for the next round of SoGive grants, then please get in touch (isobel@sogive.org).1. Summary of the SoGive Grants programWhy run a grants program?Even at the start of 2022 when funding conditions were more favourable, we believed that another funding body would be valuable. This reflected our view of the value of there being more vetting capacity. We also thought Joey made a valuable contribution in this post.Part of our work at SoGive involves advising high net worth individuals, and as such we generally scope out opportunities for high impact giving, so in order to find the highest impact donation opportunities we decided to formalise and open up this process. Prior to SoGive grants, we have tended to guide funds towards organisations that Founders Pledge, Open Phil, and GiveWell might also recommend along with interventions that SoGive has specifically researched or investigated for our donors. Especially in the case of following Open Phil’s grants, we had doubts that this was the highest impact donation advice we could offer, since Open Phil makes no guarantee that a grantee still has room for more funding (after receiving a grant from Open Phil).We also noticed a gap in the market for a granting program that provided high quality feedback (or, for some applicants, any feed...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SoGive Grants: a promising pilot. Our reflections and payout report., published by SoGive on December 5, 2022 on The Effective Altruism Forum.Executive SummarySoGive ran a pilot grants program, and we ended up granting a total of £223k to 6 projects (see the section “Our grantee payouts” for details). We were pleased to be able to give high quality feedback to all rejected applicants (which appears to be a gap in the grants market), and were also able to help some projects make tweaks such that we could make a positive decision to fund them. We also noted that despite explicitly encouraging biosecurity projects we only received one application in this field. We tracked our initial impressions of grants and found that the initial video call with applicants was discriminatory and helped unearth lots of decision-relevant information, but that a second video call didn’t change our evaluations very much.Given we added value to the grants market by providing feedback to all applicants, helped some candidates tweak their project proposal and identified 6 promising applications to direct funding towards, we would run SoGive Grants again next year, but perform a more light touch approach, assuming that the donors we work with agree to this, or new ones opt to contribute to the pool of funding. This report also includes thoughts on how we might improve the questions asked in the application form (which currently mirrors the application form used by EA Funds).IntroductionBack in April SoGive launched our first ever applied-for granting program, this program has now wrapped up and this post sets out our experiences and lessons learned. For those of you not familiar with SoGive we’re an EA-aligned research organisation and think tank.This post will cover:Summary of the SoGive Grants programAdvice to grant applicantsReflections on our evaluation process and criteriaAdvice for people considering running their own grants programOur grantee payoutsWe’d like to say a huge thank you to all of the SoGive team who helped with this project, and also to the external advisors who offered their time and expertise. Also, as discussed in the report we referred back to a lot of publicly posted EA material (typically from the EA forum) so for those individuals and organisations who take the time to write up their views and considerations online it is incredibly helpful and it affects real world decisions - thank you.If any potential donors reading this want their funding to contribute to the funding pool for the next round of SoGive grants, then please get in touch (isobel@sogive.org).1. Summary of the SoGive Grants programWhy run a grants program?Even at the start of 2022 when funding conditions were more favourable, we believed that another funding body would be valuable. This reflected our view of the value of there being more vetting capacity. We also thought Joey made a valuable contribution in this post.Part of our work at SoGive involves advising high net worth individuals, and as such we generally scope out opportunities for high impact giving, so in order to find the highest impact donation opportunities we decided to formalise and open up this process. Prior to SoGive grants, we have tended to guide funds towards organisations that Founders Pledge, Open Phil, and GiveWell might also recommend along with interventions that SoGive has specifically researched or investigated for our donors. Especially in the case of following Open Phil’s grants, we had doubts that this was the highest impact donation advice we could offer, since Open Phil makes no guarantee that a grantee still has room for more funding (after receiving a grant from Open Phil).We also noticed a gap in the market for a granting program that provided high quality feedback (or, for some applicants, any feed...]]>
SoGive https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 33:57 None full 4012
RCmgGp2nmoWFcRwdn_EA EA - Should strong longtermists really want to minimize existential risk? by tobycrisford Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should strong longtermists really want to minimize existential risk?, published by tobycrisford on December 4, 2022 on The Effective Altruism Forum.Strong longtermists believe there is a non-negligible chance that the future will be enormous. For example, earth-originating life may one day fill the galaxy with 1040 digital minds. The future therefore has enormous expected value, and concern for the long-term should almost always dominate near-term considerations, at least for those decisions where our goal is to maximize expected value.It is often stated that strong longtermism reduces in practice to the goal: “minimize existential risk at all costs”. I argue here that this is inaccurate. I claim that a more accurate way of summarising the strong longtermist goal is: “minimize existential risk at all costs conditional on the future possibly being very big”. I believe the distinction between these two goals has important practical implications. The strong longtermist goal may actually conflict with the goal of minimizing existential risk unconditionally.In the next section I describe a thought experiment to demonstrate my claim. In the following section I argue that this is likely to be relevant to the actual world we find ourselves in. In the final section I give some concluding remarks on what we should take away from all this.The Anti-Apocalypse MachineThe Earth is about to be destroyed by a cosmic disaster. This disaster would end all life, and snuff out all of our enormous future potential.Fortunately, physicists have almost settled on a grand unified theory of everything that they believe will help them build a machine to save us. They are 99% certain that the world is described by Theory A, which tells us we can be saved if we build Machine A. But there is a 1% chance that the correct theory is actually Theory B, in which case we need to build Machine B. We only have the time and resources to build one machine.It appears that our best bet is to build Machine A, but there is a catch. If Theory B is true, then the expected value of our future is many orders of magnitude larger (although it is enormous under both theories). This is because Theory B leaves open the possibility that we may one day develop slightly-faster-than-light travel, while Theory A being true would make that impossible.Due to the spread of strong longtermism, Earth's inhabitants decide that they should build Machine B, acting as if the speculative Theory B is correct, since this is what maximizes expected value. Extinction would be far worse in the Theory B world than the Theory A world, so they decide to take the action which would prevent extinction in that world. They deliberately choose a 99% chance of extinction over a 1% chance, risking all of humanity, and all of humanity's future potential.The lesson here is that strong longtermism gives us the goal to minimize existential risk conditional on the future possibly being very big, and that may conflict with the goal to minimize existential risk unconditionally.Relevance for the actual worldThe implication of the above thought experiment is that strong longtermism tells us to look at the set of possible theories about the world, pick the one in which the future is largest, and, if it is large enough, act as if that theory were true. This is likely to have absurd consequences if carried to its logical conclusion, even in real world cases. I explore some examples in this section.The picture becomes more confusing when you consider theories which permit the future to have infinite value. In Nick Beckstead's original thesis, On the Overwhelming Importance of Shaping the Far Future, he explicitly singles out infinite value cases as examples of where we should abandon expected value maximization, and switch to using a more timid deci...]]>
tobycrisford https://forum.effectivealtruism.org/posts/RCmgGp2nmoWFcRwdn/should-strong-longtermists-really-want-to-minimize Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should strong longtermists really want to minimize existential risk?, published by tobycrisford on December 4, 2022 on The Effective Altruism Forum.Strong longtermists believe there is a non-negligible chance that the future will be enormous. For example, earth-originating life may one day fill the galaxy with 1040 digital minds. The future therefore has enormous expected value, and concern for the long-term should almost always dominate near-term considerations, at least for those decisions where our goal is to maximize expected value.It is often stated that strong longtermism reduces in practice to the goal: “minimize existential risk at all costs”. I argue here that this is inaccurate. I claim that a more accurate way of summarising the strong longtermist goal is: “minimize existential risk at all costs conditional on the future possibly being very big”. I believe the distinction between these two goals has important practical implications. The strong longtermist goal may actually conflict with the goal of minimizing existential risk unconditionally.In the next section I describe a thought experiment to demonstrate my claim. In the following section I argue that this is likely to be relevant to the actual world we find ourselves in. In the final section I give some concluding remarks on what we should take away from all this.The Anti-Apocalypse MachineThe Earth is about to be destroyed by a cosmic disaster. This disaster would end all life, and snuff out all of our enormous future potential.Fortunately, physicists have almost settled on a grand unified theory of everything that they believe will help them build a machine to save us. They are 99% certain that the world is described by Theory A, which tells us we can be saved if we build Machine A. But there is a 1% chance that the correct theory is actually Theory B, in which case we need to build Machine B. We only have the time and resources to build one machine.It appears that our best bet is to build Machine A, but there is a catch. If Theory B is true, then the expected value of our future is many orders of magnitude larger (although it is enormous under both theories). This is because Theory B leaves open the possibility that we may one day develop slightly-faster-than-light travel, while Theory A being true would make that impossible.Due to the spread of strong longtermism, Earth's inhabitants decide that they should build Machine B, acting as if the speculative Theory B is correct, since this is what maximizes expected value. Extinction would be far worse in the Theory B world than the Theory A world, so they decide to take the action which would prevent extinction in that world. They deliberately choose a 99% chance of extinction over a 1% chance, risking all of humanity, and all of humanity's future potential.The lesson here is that strong longtermism gives us the goal to minimize existential risk conditional on the future possibly being very big, and that may conflict with the goal to minimize existential risk unconditionally.Relevance for the actual worldThe implication of the above thought experiment is that strong longtermism tells us to look at the set of possible theories about the world, pick the one in which the future is largest, and, if it is large enough, act as if that theory were true. This is likely to have absurd consequences if carried to its logical conclusion, even in real world cases. I explore some examples in this section.The picture becomes more confusing when you consider theories which permit the future to have infinite value. In Nick Beckstead's original thesis, On the Overwhelming Importance of Shaping the Far Future, he explicitly singles out infinite value cases as examples of where we should abandon expected value maximization, and switch to using a more timid deci...]]>
Mon, 05 Dec 2022 18:53:36 +0000 EA - Should strong longtermists really want to minimize existential risk? by tobycrisford Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should strong longtermists really want to minimize existential risk?, published by tobycrisford on December 4, 2022 on The Effective Altruism Forum.Strong longtermists believe there is a non-negligible chance that the future will be enormous. For example, earth-originating life may one day fill the galaxy with 1040 digital minds. The future therefore has enormous expected value, and concern for the long-term should almost always dominate near-term considerations, at least for those decisions where our goal is to maximize expected value.It is often stated that strong longtermism reduces in practice to the goal: “minimize existential risk at all costs”. I argue here that this is inaccurate. I claim that a more accurate way of summarising the strong longtermist goal is: “minimize existential risk at all costs conditional on the future possibly being very big”. I believe the distinction between these two goals has important practical implications. The strong longtermist goal may actually conflict with the goal of minimizing existential risk unconditionally.In the next section I describe a thought experiment to demonstrate my claim. In the following section I argue that this is likely to be relevant to the actual world we find ourselves in. In the final section I give some concluding remarks on what we should take away from all this.The Anti-Apocalypse MachineThe Earth is about to be destroyed by a cosmic disaster. This disaster would end all life, and snuff out all of our enormous future potential.Fortunately, physicists have almost settled on a grand unified theory of everything that they believe will help them build a machine to save us. They are 99% certain that the world is described by Theory A, which tells us we can be saved if we build Machine A. But there is a 1% chance that the correct theory is actually Theory B, in which case we need to build Machine B. We only have the time and resources to build one machine.It appears that our best bet is to build Machine A, but there is a catch. If Theory B is true, then the expected value of our future is many orders of magnitude larger (although it is enormous under both theories). This is because Theory B leaves open the possibility that we may one day develop slightly-faster-than-light travel, while Theory A being true would make that impossible.Due to the spread of strong longtermism, Earth's inhabitants decide that they should build Machine B, acting as if the speculative Theory B is correct, since this is what maximizes expected value. Extinction would be far worse in the Theory B world than the Theory A world, so they decide to take the action which would prevent extinction in that world. They deliberately choose a 99% chance of extinction over a 1% chance, risking all of humanity, and all of humanity's future potential.The lesson here is that strong longtermism gives us the goal to minimize existential risk conditional on the future possibly being very big, and that may conflict with the goal to minimize existential risk unconditionally.Relevance for the actual worldThe implication of the above thought experiment is that strong longtermism tells us to look at the set of possible theories about the world, pick the one in which the future is largest, and, if it is large enough, act as if that theory were true. This is likely to have absurd consequences if carried to its logical conclusion, even in real world cases. I explore some examples in this section.The picture becomes more confusing when you consider theories which permit the future to have infinite value. In Nick Beckstead's original thesis, On the Overwhelming Importance of Shaping the Far Future, he explicitly singles out infinite value cases as examples of where we should abandon expected value maximization, and switch to using a more timid deci...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should strong longtermists really want to minimize existential risk?, published by tobycrisford on December 4, 2022 on The Effective Altruism Forum.Strong longtermists believe there is a non-negligible chance that the future will be enormous. For example, earth-originating life may one day fill the galaxy with 1040 digital minds. The future therefore has enormous expected value, and concern for the long-term should almost always dominate near-term considerations, at least for those decisions where our goal is to maximize expected value.It is often stated that strong longtermism reduces in practice to the goal: “minimize existential risk at all costs”. I argue here that this is inaccurate. I claim that a more accurate way of summarising the strong longtermist goal is: “minimize existential risk at all costs conditional on the future possibly being very big”. I believe the distinction between these two goals has important practical implications. The strong longtermist goal may actually conflict with the goal of minimizing existential risk unconditionally.In the next section I describe a thought experiment to demonstrate my claim. In the following section I argue that this is likely to be relevant to the actual world we find ourselves in. In the final section I give some concluding remarks on what we should take away from all this.The Anti-Apocalypse MachineThe Earth is about to be destroyed by a cosmic disaster. This disaster would end all life, and snuff out all of our enormous future potential.Fortunately, physicists have almost settled on a grand unified theory of everything that they believe will help them build a machine to save us. They are 99% certain that the world is described by Theory A, which tells us we can be saved if we build Machine A. But there is a 1% chance that the correct theory is actually Theory B, in which case we need to build Machine B. We only have the time and resources to build one machine.It appears that our best bet is to build Machine A, but there is a catch. If Theory B is true, then the expected value of our future is many orders of magnitude larger (although it is enormous under both theories). This is because Theory B leaves open the possibility that we may one day develop slightly-faster-than-light travel, while Theory A being true would make that impossible.Due to the spread of strong longtermism, Earth's inhabitants decide that they should build Machine B, acting as if the speculative Theory B is correct, since this is what maximizes expected value. Extinction would be far worse in the Theory B world than the Theory A world, so they decide to take the action which would prevent extinction in that world. They deliberately choose a 99% chance of extinction over a 1% chance, risking all of humanity, and all of humanity's future potential.The lesson here is that strong longtermism gives us the goal to minimize existential risk conditional on the future possibly being very big, and that may conflict with the goal to minimize existential risk unconditionally.Relevance for the actual worldThe implication of the above thought experiment is that strong longtermism tells us to look at the set of possible theories about the world, pick the one in which the future is largest, and, if it is large enough, act as if that theory were true. This is likely to have absurd consequences if carried to its logical conclusion, even in real world cases. I explore some examples in this section.The picture becomes more confusing when you consider theories which permit the future to have infinite value. In Nick Beckstead's original thesis, On the Overwhelming Importance of Shaping the Far Future, he explicitly singles out infinite value cases as examples of where we should abandon expected value maximization, and switch to using a more timid deci...]]>
tobycrisford https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:40 None full 4016
vbhoFsyQmrntru6Kw_EA EA - Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently? by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently?, published by Bob Fischer on December 5, 2022 on The Effective Altruism Forum.Key TakeawaysThe Conscious Subsystems Hypothesis (“Conscious Subsystems,” for short) says that brains have subsystems that realize phenomenally conscious states that aren’t accessible to the subjects we typically associate with those brains—namely, the ones who report their experiences to us.Given that humans’ brains are likely to support more such subsystems than animals’ brains, EAs who have explored Conscious Subsystems have suggested that it provides a reason for risk-neutral expected utility maximizers to assign more weight to humans relative to animals.However, even if Conscious Subsystems is true, it probably doesn’t imply that risk-neutral expected utility maximizers ought to allocate neartermist dollars to humans instead of animals. There are three reasons for this:If humans have conscious subsystems, then animals probably have them too, so taking them seriously doesn’t increase the expected value of, say, humans over chickens as much as we might initially suppose.Risk-neutral expected utility maximizers are committed to assumptions—including the assumption that all welfare counts equally, whoever’s welfare it is—that support the conclusion that the best animal-focused neartermist interventions (e.g., cage-free campaigns) are many times better than the best human-focused neartermist interventions (e.g., bednets).Independently, note that the higher our credences in the theories of consciousness that are most friendly to Conscious Subsystems, the higher our credences ought to be in the hypothesis that many small invertebrates are sentient. So, insofar as we’re risk-neutral expected utility maximizers with relatively high credences in Conscious Subsystems-friendly theories of consciousness, it’s likely that we should be putting far more resources into investigating the welfare of the world’s small invertebrates.We assign very low credences to claims that ostensibly support Conscious Subsystems.The appeal of the idea that standard theories of consciousness support Conscious Subsystems may be based on not distinguishing (a) theories that are just designed to make predictions about when people will self-report having conscious experiences of a certain type (which may all be wrong, but have whatever direct empirical support they have) and (b) theories that are attempts to answer the so-called “hard problem” of consciousness (which only have indirect empirical support and are far more controversial).Standard versions of functionalism say that states are conscious when they have the right relationships to sensory stimulations, other mental states, and behavior. But it’s highly unlikely that many groups of neurons stand in the correct relationships, even if they perform functions that, in the abstract, seem as complex and sophisticated as those performed by whole brains.Ultimately, we do not recommend acting on Conscious Subsystems at this time.IntroductionThis is the fifth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to assess a hypothesis that's been advanced by several members of the EA community: namely, that brains have subsystems that realize phenomenally conscious states that aren’t accessible to the subjects we typically associate with those brains (i.e., the ones who report their experiences to us; see, e.g., Tomasik, 2013-2019, Shiller, 2016, Muehlhauser, 2017, Shulman, 2020, Crummett, 2022).If there are such states, then we might think that ...]]>
Bob Fischer https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently?, published by Bob Fischer on December 5, 2022 on The Effective Altruism Forum.Key TakeawaysThe Conscious Subsystems Hypothesis (“Conscious Subsystems,” for short) says that brains have subsystems that realize phenomenally conscious states that aren’t accessible to the subjects we typically associate with those brains—namely, the ones who report their experiences to us.Given that humans’ brains are likely to support more such subsystems than animals’ brains, EAs who have explored Conscious Subsystems have suggested that it provides a reason for risk-neutral expected utility maximizers to assign more weight to humans relative to animals.However, even if Conscious Subsystems is true, it probably doesn’t imply that risk-neutral expected utility maximizers ought to allocate neartermist dollars to humans instead of animals. There are three reasons for this:If humans have conscious subsystems, then animals probably have them too, so taking them seriously doesn’t increase the expected value of, say, humans over chickens as much as we might initially suppose.Risk-neutral expected utility maximizers are committed to assumptions—including the assumption that all welfare counts equally, whoever’s welfare it is—that support the conclusion that the best animal-focused neartermist interventions (e.g., cage-free campaigns) are many times better than the best human-focused neartermist interventions (e.g., bednets).Independently, note that the higher our credences in the theories of consciousness that are most friendly to Conscious Subsystems, the higher our credences ought to be in the hypothesis that many small invertebrates are sentient. So, insofar as we’re risk-neutral expected utility maximizers with relatively high credences in Conscious Subsystems-friendly theories of consciousness, it’s likely that we should be putting far more resources into investigating the welfare of the world’s small invertebrates.We assign very low credences to claims that ostensibly support Conscious Subsystems.The appeal of the idea that standard theories of consciousness support Conscious Subsystems may be based on not distinguishing (a) theories that are just designed to make predictions about when people will self-report having conscious experiences of a certain type (which may all be wrong, but have whatever direct empirical support they have) and (b) theories that are attempts to answer the so-called “hard problem” of consciousness (which only have indirect empirical support and are far more controversial).Standard versions of functionalism say that states are conscious when they have the right relationships to sensory stimulations, other mental states, and behavior. But it’s highly unlikely that many groups of neurons stand in the correct relationships, even if they perform functions that, in the abstract, seem as complex and sophisticated as those performed by whole brains.Ultimately, we do not recommend acting on Conscious Subsystems at this time.IntroductionThis is the fifth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to assess a hypothesis that's been advanced by several members of the EA community: namely, that brains have subsystems that realize phenomenally conscious states that aren’t accessible to the subjects we typically associate with those brains (i.e., the ones who report their experiences to us; see, e.g., Tomasik, 2013-2019, Shiller, 2016, Muehlhauser, 2017, Shulman, 2020, Crummett, 2022).If there are such states, then we might think that ...]]>
Mon, 05 Dec 2022 18:14:43 +0000 EA - Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently? by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently?, published by Bob Fischer on December 5, 2022 on The Effective Altruism Forum.Key TakeawaysThe Conscious Subsystems Hypothesis (“Conscious Subsystems,” for short) says that brains have subsystems that realize phenomenally conscious states that aren’t accessible to the subjects we typically associate with those brains—namely, the ones who report their experiences to us.Given that humans’ brains are likely to support more such subsystems than animals’ brains, EAs who have explored Conscious Subsystems have suggested that it provides a reason for risk-neutral expected utility maximizers to assign more weight to humans relative to animals.However, even if Conscious Subsystems is true, it probably doesn’t imply that risk-neutral expected utility maximizers ought to allocate neartermist dollars to humans instead of animals. There are three reasons for this:If humans have conscious subsystems, then animals probably have them too, so taking them seriously doesn’t increase the expected value of, say, humans over chickens as much as we might initially suppose.Risk-neutral expected utility maximizers are committed to assumptions—including the assumption that all welfare counts equally, whoever’s welfare it is—that support the conclusion that the best animal-focused neartermist interventions (e.g., cage-free campaigns) are many times better than the best human-focused neartermist interventions (e.g., bednets).Independently, note that the higher our credences in the theories of consciousness that are most friendly to Conscious Subsystems, the higher our credences ought to be in the hypothesis that many small invertebrates are sentient. So, insofar as we’re risk-neutral expected utility maximizers with relatively high credences in Conscious Subsystems-friendly theories of consciousness, it’s likely that we should be putting far more resources into investigating the welfare of the world’s small invertebrates.We assign very low credences to claims that ostensibly support Conscious Subsystems.The appeal of the idea that standard theories of consciousness support Conscious Subsystems may be based on not distinguishing (a) theories that are just designed to make predictions about when people will self-report having conscious experiences of a certain type (which may all be wrong, but have whatever direct empirical support they have) and (b) theories that are attempts to answer the so-called “hard problem” of consciousness (which only have indirect empirical support and are far more controversial).Standard versions of functionalism say that states are conscious when they have the right relationships to sensory stimulations, other mental states, and behavior. But it’s highly unlikely that many groups of neurons stand in the correct relationships, even if they perform functions that, in the abstract, seem as complex and sophisticated as those performed by whole brains.Ultimately, we do not recommend acting on Conscious Subsystems at this time.IntroductionThis is the fifth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to assess a hypothesis that's been advanced by several members of the EA community: namely, that brains have subsystems that realize phenomenally conscious states that aren’t accessible to the subjects we typically associate with those brains (i.e., the ones who report their experiences to us; see, e.g., Tomasik, 2013-2019, Shiller, 2016, Muehlhauser, 2017, Shulman, 2020, Crummett, 2022).If there are such states, then we might think that ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently?, published by Bob Fischer on December 5, 2022 on The Effective Altruism Forum.Key TakeawaysThe Conscious Subsystems Hypothesis (“Conscious Subsystems,” for short) says that brains have subsystems that realize phenomenally conscious states that aren’t accessible to the subjects we typically associate with those brains—namely, the ones who report their experiences to us.Given that humans’ brains are likely to support more such subsystems than animals’ brains, EAs who have explored Conscious Subsystems have suggested that it provides a reason for risk-neutral expected utility maximizers to assign more weight to humans relative to animals.However, even if Conscious Subsystems is true, it probably doesn’t imply that risk-neutral expected utility maximizers ought to allocate neartermist dollars to humans instead of animals. There are three reasons for this:If humans have conscious subsystems, then animals probably have them too, so taking them seriously doesn’t increase the expected value of, say, humans over chickens as much as we might initially suppose.Risk-neutral expected utility maximizers are committed to assumptions—including the assumption that all welfare counts equally, whoever’s welfare it is—that support the conclusion that the best animal-focused neartermist interventions (e.g., cage-free campaigns) are many times better than the best human-focused neartermist interventions (e.g., bednets).Independently, note that the higher our credences in the theories of consciousness that are most friendly to Conscious Subsystems, the higher our credences ought to be in the hypothesis that many small invertebrates are sentient. So, insofar as we’re risk-neutral expected utility maximizers with relatively high credences in Conscious Subsystems-friendly theories of consciousness, it’s likely that we should be putting far more resources into investigating the welfare of the world’s small invertebrates.We assign very low credences to claims that ostensibly support Conscious Subsystems.The appeal of the idea that standard theories of consciousness support Conscious Subsystems may be based on not distinguishing (a) theories that are just designed to make predictions about when people will self-report having conscious experiences of a certain type (which may all be wrong, but have whatever direct empirical support they have) and (b) theories that are attempts to answer the so-called “hard problem” of consciousness (which only have indirect empirical support and are far more controversial).Standard versions of functionalism say that states are conscious when they have the right relationships to sensory stimulations, other mental states, and behavior. But it’s highly unlikely that many groups of neurons stand in the correct relationships, even if they perform functions that, in the abstract, seem as complex and sophisticated as those performed by whole brains.Ultimately, we do not recommend acting on Conscious Subsystems at this time.IntroductionThis is the fifth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to assess a hypothesis that's been advanced by several members of the EA community: namely, that brains have subsystems that realize phenomenally conscious states that aren’t accessible to the subjects we typically associate with those brains (i.e., the ones who report their experiences to us; see, e.g., Tomasik, 2013-2019, Shiller, 2016, Muehlhauser, 2017, Shulman, 2020, Crummett, 2022).If there are such states, then we might think that ...]]>
Bob Fischer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 42:58 None full 4013
CnAhPPsMWAxBm7pii_EA EA - What specific changes should we as a community make to the effective altruism community? [Stage 1] by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What specific changes should we as a community make to the effective altruism community? [Stage 1], published by Nathan Young on December 5, 2022 on The Effective Altruism Forum.I'm want to run the listening exercise I'd like to see.Get popular suggestionsRun a polis pollMake a google doc where we research consensus suggestions/ near consensus/consensus for specific groupsPoll againStage 1Give concrete suggestions for community changes. 1 - 2 sentences only.Upvote if you think they are worth putting in the polis poll and agreevote if you think the comment is true.Agreevote if you think they are well-framed.Aim for them to be upvoted. Please add suggestions you'd like to see.I'll take the top 20 - 30I will delete/move to comments top-level answers that are longer than 2 sentences.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://forum.effectivealtruism.org/posts/CnAhPPsMWAxBm7pii/what-specific-changes-should-we-as-a-community-make-to-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What specific changes should we as a community make to the effective altruism community? [Stage 1], published by Nathan Young on December 5, 2022 on The Effective Altruism Forum.I'm want to run the listening exercise I'd like to see.Get popular suggestionsRun a polis pollMake a google doc where we research consensus suggestions/ near consensus/consensus for specific groupsPoll againStage 1Give concrete suggestions for community changes. 1 - 2 sentences only.Upvote if you think they are worth putting in the polis poll and agreevote if you think the comment is true.Agreevote if you think they are well-framed.Aim for them to be upvoted. Please add suggestions you'd like to see.I'll take the top 20 - 30I will delete/move to comments top-level answers that are longer than 2 sentences.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 05 Dec 2022 15:49:46 +0000 EA - What specific changes should we as a community make to the effective altruism community? [Stage 1] by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What specific changes should we as a community make to the effective altruism community? [Stage 1], published by Nathan Young on December 5, 2022 on The Effective Altruism Forum.I'm want to run the listening exercise I'd like to see.Get popular suggestionsRun a polis pollMake a google doc where we research consensus suggestions/ near consensus/consensus for specific groupsPoll againStage 1Give concrete suggestions for community changes. 1 - 2 sentences only.Upvote if you think they are worth putting in the polis poll and agreevote if you think the comment is true.Agreevote if you think they are well-framed.Aim for them to be upvoted. Please add suggestions you'd like to see.I'll take the top 20 - 30I will delete/move to comments top-level answers that are longer than 2 sentences.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What specific changes should we as a community make to the effective altruism community? [Stage 1], published by Nathan Young on December 5, 2022 on The Effective Altruism Forum.I'm want to run the listening exercise I'd like to see.Get popular suggestionsRun a polis pollMake a google doc where we research consensus suggestions/ near consensus/consensus for specific groupsPoll againStage 1Give concrete suggestions for community changes. 1 - 2 sentences only.Upvote if you think they are worth putting in the polis poll and agreevote if you think the comment is true.Agreevote if you think they are well-framed.Aim for them to be upvoted. Please add suggestions you'd like to see.I'll take the top 20 - 30I will delete/move to comments top-level answers that are longer than 2 sentences.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:08 None full 4014
saEXX9Nucz8mh9XgB_EA EA - Race to the Top: Benchmarks for AI Safety by isaduan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Race to the Top: Benchmarks for AI Safety, published by isaduan on December 4, 2022 on The Effective Altruism Forum.This is an executive summary of a blog post. Read the full texts here.SummaryBenchmarks support the empirical, quantitative evaluation of progress in AI research. Although benchmarks are ubiquitous in most subfields of machine learning, they are still rare in the subfield of AI safety.I argue that creating benchmarks should be a high priority for AI safety. While this idea is not new, I think it may still be underrated. Among other benefits, benchmarks would make it much easier to:track the field’s progress and focus resources on the most productive lines of work;create professional incentives for researchers - especially Chinese researchers - to work on problems that are relevant to AGI safety;develop auditing regimes and regulations for advanced AI systems.Unfortunately, we cannot assume that good benchmarks will be developed quickly enough “by default." I discuss several reasons to expect them to be undersupplied. I also outline actions that different groups can take today to accelerate their development.For example, AI safety researchers can help by:directly trying their hand at creating safety-relevant benchmarks;clarifying certain safety-relevant traits (such as “honesty” and “power-seekingness”) that it could be important to measure in the future;building up relevant expertise and skills, for instance by working on other benchmarking projects;drafting “benchmark roadmaps,” which identify categories of benchmarks that could be valuable in the future and outline prerequisites for developing them.And AI governance professionals can help by:co-organizing workshops, competitions, and prizes focused on benchmarking;creating third-party institutional homes for benchmarking work;clarifying, ahead of time, how auditing and regulatory frameworks can put benchmarks to use;advising safety researchers on political, institutional, and strategic considerations that matter for benchmark design;popularizing the narrative of a “race to the top” on AI safety.Ultimately, we can and should begin to build benchmark-making capability now.AcknowledgmentI would like to thank Ben Garfinkel and Owen Cotton-Barratt for their mentorship, Emma Bluemke and many others at the Centre for the Governance of AI for their warmhearted support. All views and errors are my own.Future researchI am working on a paper on the topic, and if you are interested in benchmarks and model evaluation, especially if you are a technical AI safety researcher, I would love to hear from you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
isaduan https://forum.effectivealtruism.org/posts/saEXX9Nucz8mh9XgB/race-to-the-top-benchmarks-for-ai-safety Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Race to the Top: Benchmarks for AI Safety, published by isaduan on December 4, 2022 on The Effective Altruism Forum.This is an executive summary of a blog post. Read the full texts here.SummaryBenchmarks support the empirical, quantitative evaluation of progress in AI research. Although benchmarks are ubiquitous in most subfields of machine learning, they are still rare in the subfield of AI safety.I argue that creating benchmarks should be a high priority for AI safety. While this idea is not new, I think it may still be underrated. Among other benefits, benchmarks would make it much easier to:track the field’s progress and focus resources on the most productive lines of work;create professional incentives for researchers - especially Chinese researchers - to work on problems that are relevant to AGI safety;develop auditing regimes and regulations for advanced AI systems.Unfortunately, we cannot assume that good benchmarks will be developed quickly enough “by default." I discuss several reasons to expect them to be undersupplied. I also outline actions that different groups can take today to accelerate their development.For example, AI safety researchers can help by:directly trying their hand at creating safety-relevant benchmarks;clarifying certain safety-relevant traits (such as “honesty” and “power-seekingness”) that it could be important to measure in the future;building up relevant expertise and skills, for instance by working on other benchmarking projects;drafting “benchmark roadmaps,” which identify categories of benchmarks that could be valuable in the future and outline prerequisites for developing them.And AI governance professionals can help by:co-organizing workshops, competitions, and prizes focused on benchmarking;creating third-party institutional homes for benchmarking work;clarifying, ahead of time, how auditing and regulatory frameworks can put benchmarks to use;advising safety researchers on political, institutional, and strategic considerations that matter for benchmark design;popularizing the narrative of a “race to the top” on AI safety.Ultimately, we can and should begin to build benchmark-making capability now.AcknowledgmentI would like to thank Ben Garfinkel and Owen Cotton-Barratt for their mentorship, Emma Bluemke and many others at the Centre for the Governance of AI for their warmhearted support. All views and errors are my own.Future researchI am working on a paper on the topic, and if you are interested in benchmarks and model evaluation, especially if you are a technical AI safety researcher, I would love to hear from you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 05 Dec 2022 11:19:03 +0000 EA - Race to the Top: Benchmarks for AI Safety by isaduan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Race to the Top: Benchmarks for AI Safety, published by isaduan on December 4, 2022 on The Effective Altruism Forum.This is an executive summary of a blog post. Read the full texts here.SummaryBenchmarks support the empirical, quantitative evaluation of progress in AI research. Although benchmarks are ubiquitous in most subfields of machine learning, they are still rare in the subfield of AI safety.I argue that creating benchmarks should be a high priority for AI safety. While this idea is not new, I think it may still be underrated. Among other benefits, benchmarks would make it much easier to:track the field’s progress and focus resources on the most productive lines of work;create professional incentives for researchers - especially Chinese researchers - to work on problems that are relevant to AGI safety;develop auditing regimes and regulations for advanced AI systems.Unfortunately, we cannot assume that good benchmarks will be developed quickly enough “by default." I discuss several reasons to expect them to be undersupplied. I also outline actions that different groups can take today to accelerate their development.For example, AI safety researchers can help by:directly trying their hand at creating safety-relevant benchmarks;clarifying certain safety-relevant traits (such as “honesty” and “power-seekingness”) that it could be important to measure in the future;building up relevant expertise and skills, for instance by working on other benchmarking projects;drafting “benchmark roadmaps,” which identify categories of benchmarks that could be valuable in the future and outline prerequisites for developing them.And AI governance professionals can help by:co-organizing workshops, competitions, and prizes focused on benchmarking;creating third-party institutional homes for benchmarking work;clarifying, ahead of time, how auditing and regulatory frameworks can put benchmarks to use;advising safety researchers on political, institutional, and strategic considerations that matter for benchmark design;popularizing the narrative of a “race to the top” on AI safety.Ultimately, we can and should begin to build benchmark-making capability now.AcknowledgmentI would like to thank Ben Garfinkel and Owen Cotton-Barratt for their mentorship, Emma Bluemke and many others at the Centre for the Governance of AI for their warmhearted support. All views and errors are my own.Future researchI am working on a paper on the topic, and if you are interested in benchmarks and model evaluation, especially if you are a technical AI safety researcher, I would love to hear from you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Race to the Top: Benchmarks for AI Safety, published by isaduan on December 4, 2022 on The Effective Altruism Forum.This is an executive summary of a blog post. Read the full texts here.SummaryBenchmarks support the empirical, quantitative evaluation of progress in AI research. Although benchmarks are ubiquitous in most subfields of machine learning, they are still rare in the subfield of AI safety.I argue that creating benchmarks should be a high priority for AI safety. While this idea is not new, I think it may still be underrated. Among other benefits, benchmarks would make it much easier to:track the field’s progress and focus resources on the most productive lines of work;create professional incentives for researchers - especially Chinese researchers - to work on problems that are relevant to AGI safety;develop auditing regimes and regulations for advanced AI systems.Unfortunately, we cannot assume that good benchmarks will be developed quickly enough “by default." I discuss several reasons to expect them to be undersupplied. I also outline actions that different groups can take today to accelerate their development.For example, AI safety researchers can help by:directly trying their hand at creating safety-relevant benchmarks;clarifying certain safety-relevant traits (such as “honesty” and “power-seekingness”) that it could be important to measure in the future;building up relevant expertise and skills, for instance by working on other benchmarking projects;drafting “benchmark roadmaps,” which identify categories of benchmarks that could be valuable in the future and outline prerequisites for developing them.And AI governance professionals can help by:co-organizing workshops, competitions, and prizes focused on benchmarking;creating third-party institutional homes for benchmarking work;clarifying, ahead of time, how auditing and regulatory frameworks can put benchmarks to use;advising safety researchers on political, institutional, and strategic considerations that matter for benchmark design;popularizing the narrative of a “race to the top” on AI safety.Ultimately, we can and should begin to build benchmark-making capability now.AcknowledgmentI would like to thank Ben Garfinkel and Owen Cotton-Barratt for their mentorship, Emma Bluemke and many others at the Centre for the Governance of AI for their warmhearted support. All views and errors are my own.Future researchI am working on a paper on the topic, and if you are interested in benchmarks and model evaluation, especially if you are a technical AI safety researcher, I would love to hear from you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
isaduan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:43 None full 4015
9qcnrRD3ZHSwibtBC_EA EA - EA Taskmaster Game by Gemma Paterson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Taskmaster Game, published by Gemma Paterson on December 4, 2022 on The Effective Altruism Forum.IntroductionThis is a post about a silly EA themed game I organised in London this year. All resources included in this post are free for you to use for your own EA events with the following caveats:If after reading this you have task ideas, please submit them here and they might be used in my next event: GoogleForm | EA Taskmaster 2023 Task IdeasIf you want to run one of these yourself, I recommend reading my thoughts and suggestions in the logistical points for organisers sectionIf you do run one of these, I would love to hear about it (and see photos ). Please get in touch - you can message me here on the forum or contact me on Twitter glpat (@glpat99) / Twitter.I had a fabulous time putting it together and the feedback from attendees was really positive so if this sounds like your kind of “organised fun” then I hope this post is helpful/inspiring. Thanks to Romy and Vania from the London EA women & NB meet-up who pushed me to make this my first forum post!The GameIn June 2022, I ran an EA themed game that was a mix between a scavenger hunt, the TV show Taskmaster and a party quest game. It was made up of 89 tasks (some EA themed, some just silly) to be completed by competing teams over the course of an afternoon.The idea was that there would be too many tasks for a team to complete in the time limit and they would have to do some task prioritisation. When I was thinking about what tasks should be included, I wanted a range of tasks that could be thought of in terms of scale, neglectedness and tractability, so that it would be a much lower stakes game version of cause prioritisation:Scale: There was a large variation in point values across the tasks, not necessarily in proportion to their difficulty. Also, some tasks were repeatable (with accumulating points) while others were one-offs.Neglectedness: For some tasks the points would only go to the team with the best entry or the points would be split between all teams that submitted. This incentivised completing tasks that no one else was completing.Tractability: There was a large range of task difficulty, with hard tasks not necessarily being rewarded with extra points.In fact, one of the tasks was to send in the best task prioritisation spreadsheet for the tasks in this game. As per the Taskmaster game show, final point allocation was at the discretion of the Taskmaster (me) and was based on ingenuity, grit and entertainment value.Huge thanks to my EA friends in London for helping me think of tasks. Also, shout out to the Party Quest spreadsheet for providing many of the fun non-EA themed filler tasks.I am conscious that posting all the tasks would spoil the game for potential players so I have provided a small selection of some of the tasks and submissions from this year’s event.Task examplesTask 24: Sculptor (2000 points)Construct the most beautiful trophy out of random objects and deliver it to the Taskmaster.The winner’s podium with the constructed trophies given in order of the Taskmaster’s preferenceTask 19: I’m Thirsty (800 points)Bring the taskmaster the best cocktail.The Taskmaster on her homemade throne graciously accepting her third cocktail of the afternoonTask 51: An Order From the Ministry (500 points)Create the silliest walkLess Miserables' excellent submissionTask 67: Very Culture, Much Topical, Wow (1000 points)Recreate a meme in real life and photograph/video it. The more elaborate the better.Task 68: Getting down with the kids (1000 points)Create an EA themed tiktokPink team's novel new approach to AI safetyStructure of gameThere were four categories of task which were based on how points are allocated.Winner take all: All the points were awarded to the team whose...]]>
Gemma Paterson https://forum.effectivealtruism.org/posts/9qcnrRD3ZHSwibtBC/ea-taskmaster-game Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Taskmaster Game, published by Gemma Paterson on December 4, 2022 on The Effective Altruism Forum.IntroductionThis is a post about a silly EA themed game I organised in London this year. All resources included in this post are free for you to use for your own EA events with the following caveats:If after reading this you have task ideas, please submit them here and they might be used in my next event: GoogleForm | EA Taskmaster 2023 Task IdeasIf you want to run one of these yourself, I recommend reading my thoughts and suggestions in the logistical points for organisers sectionIf you do run one of these, I would love to hear about it (and see photos ). Please get in touch - you can message me here on the forum or contact me on Twitter glpat (@glpat99) / Twitter.I had a fabulous time putting it together and the feedback from attendees was really positive so if this sounds like your kind of “organised fun” then I hope this post is helpful/inspiring. Thanks to Romy and Vania from the London EA women & NB meet-up who pushed me to make this my first forum post!The GameIn June 2022, I ran an EA themed game that was a mix between a scavenger hunt, the TV show Taskmaster and a party quest game. It was made up of 89 tasks (some EA themed, some just silly) to be completed by competing teams over the course of an afternoon.The idea was that there would be too many tasks for a team to complete in the time limit and they would have to do some task prioritisation. When I was thinking about what tasks should be included, I wanted a range of tasks that could be thought of in terms of scale, neglectedness and tractability, so that it would be a much lower stakes game version of cause prioritisation:Scale: There was a large variation in point values across the tasks, not necessarily in proportion to their difficulty. Also, some tasks were repeatable (with accumulating points) while others were one-offs.Neglectedness: For some tasks the points would only go to the team with the best entry or the points would be split between all teams that submitted. This incentivised completing tasks that no one else was completing.Tractability: There was a large range of task difficulty, with hard tasks not necessarily being rewarded with extra points.In fact, one of the tasks was to send in the best task prioritisation spreadsheet for the tasks in this game. As per the Taskmaster game show, final point allocation was at the discretion of the Taskmaster (me) and was based on ingenuity, grit and entertainment value.Huge thanks to my EA friends in London for helping me think of tasks. Also, shout out to the Party Quest spreadsheet for providing many of the fun non-EA themed filler tasks.I am conscious that posting all the tasks would spoil the game for potential players so I have provided a small selection of some of the tasks and submissions from this year’s event.Task examplesTask 24: Sculptor (2000 points)Construct the most beautiful trophy out of random objects and deliver it to the Taskmaster.The winner’s podium with the constructed trophies given in order of the Taskmaster’s preferenceTask 19: I’m Thirsty (800 points)Bring the taskmaster the best cocktail.The Taskmaster on her homemade throne graciously accepting her third cocktail of the afternoonTask 51: An Order From the Ministry (500 points)Create the silliest walkLess Miserables' excellent submissionTask 67: Very Culture, Much Topical, Wow (1000 points)Recreate a meme in real life and photograph/video it. The more elaborate the better.Task 68: Getting down with the kids (1000 points)Create an EA themed tiktokPink team's novel new approach to AI safetyStructure of gameThere were four categories of task which were based on how points are allocated.Winner take all: All the points were awarded to the team whose...]]>
Mon, 05 Dec 2022 04:03:20 +0000 EA - EA Taskmaster Game by Gemma Paterson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Taskmaster Game, published by Gemma Paterson on December 4, 2022 on The Effective Altruism Forum.IntroductionThis is a post about a silly EA themed game I organised in London this year. All resources included in this post are free for you to use for your own EA events with the following caveats:If after reading this you have task ideas, please submit them here and they might be used in my next event: GoogleForm | EA Taskmaster 2023 Task IdeasIf you want to run one of these yourself, I recommend reading my thoughts and suggestions in the logistical points for organisers sectionIf you do run one of these, I would love to hear about it (and see photos ). Please get in touch - you can message me here on the forum or contact me on Twitter glpat (@glpat99) / Twitter.I had a fabulous time putting it together and the feedback from attendees was really positive so if this sounds like your kind of “organised fun” then I hope this post is helpful/inspiring. Thanks to Romy and Vania from the London EA women & NB meet-up who pushed me to make this my first forum post!The GameIn June 2022, I ran an EA themed game that was a mix between a scavenger hunt, the TV show Taskmaster and a party quest game. It was made up of 89 tasks (some EA themed, some just silly) to be completed by competing teams over the course of an afternoon.The idea was that there would be too many tasks for a team to complete in the time limit and they would have to do some task prioritisation. When I was thinking about what tasks should be included, I wanted a range of tasks that could be thought of in terms of scale, neglectedness and tractability, so that it would be a much lower stakes game version of cause prioritisation:Scale: There was a large variation in point values across the tasks, not necessarily in proportion to their difficulty. Also, some tasks were repeatable (with accumulating points) while others were one-offs.Neglectedness: For some tasks the points would only go to the team with the best entry or the points would be split between all teams that submitted. This incentivised completing tasks that no one else was completing.Tractability: There was a large range of task difficulty, with hard tasks not necessarily being rewarded with extra points.In fact, one of the tasks was to send in the best task prioritisation spreadsheet for the tasks in this game. As per the Taskmaster game show, final point allocation was at the discretion of the Taskmaster (me) and was based on ingenuity, grit and entertainment value.Huge thanks to my EA friends in London for helping me think of tasks. Also, shout out to the Party Quest spreadsheet for providing many of the fun non-EA themed filler tasks.I am conscious that posting all the tasks would spoil the game for potential players so I have provided a small selection of some of the tasks and submissions from this year’s event.Task examplesTask 24: Sculptor (2000 points)Construct the most beautiful trophy out of random objects and deliver it to the Taskmaster.The winner’s podium with the constructed trophies given in order of the Taskmaster’s preferenceTask 19: I’m Thirsty (800 points)Bring the taskmaster the best cocktail.The Taskmaster on her homemade throne graciously accepting her third cocktail of the afternoonTask 51: An Order From the Ministry (500 points)Create the silliest walkLess Miserables' excellent submissionTask 67: Very Culture, Much Topical, Wow (1000 points)Recreate a meme in real life and photograph/video it. The more elaborate the better.Task 68: Getting down with the kids (1000 points)Create an EA themed tiktokPink team's novel new approach to AI safetyStructure of gameThere were four categories of task which were based on how points are allocated.Winner take all: All the points were awarded to the team whose...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Taskmaster Game, published by Gemma Paterson on December 4, 2022 on The Effective Altruism Forum.IntroductionThis is a post about a silly EA themed game I organised in London this year. All resources included in this post are free for you to use for your own EA events with the following caveats:If after reading this you have task ideas, please submit them here and they might be used in my next event: GoogleForm | EA Taskmaster 2023 Task IdeasIf you want to run one of these yourself, I recommend reading my thoughts and suggestions in the logistical points for organisers sectionIf you do run one of these, I would love to hear about it (and see photos ). Please get in touch - you can message me here on the forum or contact me on Twitter glpat (@glpat99) / Twitter.I had a fabulous time putting it together and the feedback from attendees was really positive so if this sounds like your kind of “organised fun” then I hope this post is helpful/inspiring. Thanks to Romy and Vania from the London EA women & NB meet-up who pushed me to make this my first forum post!The GameIn June 2022, I ran an EA themed game that was a mix between a scavenger hunt, the TV show Taskmaster and a party quest game. It was made up of 89 tasks (some EA themed, some just silly) to be completed by competing teams over the course of an afternoon.The idea was that there would be too many tasks for a team to complete in the time limit and they would have to do some task prioritisation. When I was thinking about what tasks should be included, I wanted a range of tasks that could be thought of in terms of scale, neglectedness and tractability, so that it would be a much lower stakes game version of cause prioritisation:Scale: There was a large variation in point values across the tasks, not necessarily in proportion to their difficulty. Also, some tasks were repeatable (with accumulating points) while others were one-offs.Neglectedness: For some tasks the points would only go to the team with the best entry or the points would be split between all teams that submitted. This incentivised completing tasks that no one else was completing.Tractability: There was a large range of task difficulty, with hard tasks not necessarily being rewarded with extra points.In fact, one of the tasks was to send in the best task prioritisation spreadsheet for the tasks in this game. As per the Taskmaster game show, final point allocation was at the discretion of the Taskmaster (me) and was based on ingenuity, grit and entertainment value.Huge thanks to my EA friends in London for helping me think of tasks. Also, shout out to the Party Quest spreadsheet for providing many of the fun non-EA themed filler tasks.I am conscious that posting all the tasks would spoil the game for potential players so I have provided a small selection of some of the tasks and submissions from this year’s event.Task examplesTask 24: Sculptor (2000 points)Construct the most beautiful trophy out of random objects and deliver it to the Taskmaster.The winner’s podium with the constructed trophies given in order of the Taskmaster’s preferenceTask 19: I’m Thirsty (800 points)Bring the taskmaster the best cocktail.The Taskmaster on her homemade throne graciously accepting her third cocktail of the afternoonTask 51: An Order From the Ministry (500 points)Create the silliest walkLess Miserables' excellent submissionTask 67: Very Culture, Much Topical, Wow (1000 points)Recreate a meme in real life and photograph/video it. The more elaborate the better.Task 68: Getting down with the kids (1000 points)Create an EA themed tiktokPink team's novel new approach to AI safetyStructure of gameThere were four categories of task which were based on how points are allocated.Winner take all: All the points were awarded to the team whose...]]>
Gemma Paterson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:19 None full 4005
gQKozf9QbHpXLm3C7_EA EA - Applications for EAGx LatAm closing on the 20th of December by LGlez Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications for EAGx LatAm closing on the 20th of December, published by LGlez on December 4, 2022 on The Effective Altruism Forum.This is just a short announcement that you can now apply to EAGx LatAm, which will take place in Mexico City on January 6-8. For more information, you can read our previous announcement here. Apologies for the delay in getting this post out, but we had to postpone it for circumstances beyond our control.“How to do the most good” is a very hard problem and no isolated community will solve it on its own. We are excited about opening spaces to discuss this question in a variety of contexts, with diversity and inclusion in mind. The unfortunate fact is that it’s easier for some to hear about EA and be heard in EA, for reasons that have nothing to do with talent on one side or ill intent on the other, and everything to do with visas, the language you happened to be raised with, your socioeconomic background or environment. We believe that EA’s current blindspots can be identified by brilliant minds all over the world, and we want to promote spaces for more people to come together, learn and bring their unique perspectives to the conversation.We have almost finalised the programme. You can expect to see sessions to discuss community building in Low and Middle Income Countries and meet organisations working on top EA areas outside of the UK and the US. We will also have the usual EAGx suspects, including intro talks about Global Health and Development, AI Safety, Animal Welfare or Global Catastrophic Risks. We have confirmed speakers from GWWC, CEA, Charity Entrepreneurship, JPAL, Open Philanthropy, IPA, Rethink Priorities and the World Bank, among many others. Rob Wiblin will come to practice his Spanish talk about Career Prioritisation and Toby Ord will join us virtually to discuss longtermism in the context of the Global South.This conference is mainly for those who are from Latin America or have ties to the continent, because they are the ones for whom it’s most difficult to attend events elsewhere and connect with people who face similar struggles. But as we were saying before, we don’t believe in isolated communities and we certainly wouldn’t want to build a segregated EA LatAm community! On the contrary, we see this as an opportunity for connection. We’re therefore looking forward to receiving experienced members of the international community who are excited to meet talented people who might be under their radar –to talk with them, mentor them, learn from them, collaborate with them, hire them. On the other hand, if you’re new to EA, willing to learn more and based in a region that hosts regular EA conferences (e.g. the US, Europe), we suggest one of those might be a better fit for you to be introduced to the movement.If in doubt, err on the side of applying.And no, you don’t need to speak Spanish to join us.If you’re asking for funding to cover your travel expenses, bear in mind that costs in Mexico are cheap compared to Europe and the US. You should be able to find a good hotel for ∼60$. Regarding transport, Uber is available and very cheap.If you have any questions or comments don’t hesitate to write to latam@eaglobalx.orgWe think that EA with a Latin American spice can be great. Join us in January to discuss doing good over tacos and get the coolest EA merch the world has ever seen.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
LGlez https://forum.effectivealtruism.org/posts/gQKozf9QbHpXLm3C7/applications-for-eagx-latam-closing-on-the-20th-of-december Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications for EAGx LatAm closing on the 20th of December, published by LGlez on December 4, 2022 on The Effective Altruism Forum.This is just a short announcement that you can now apply to EAGx LatAm, which will take place in Mexico City on January 6-8. For more information, you can read our previous announcement here. Apologies for the delay in getting this post out, but we had to postpone it for circumstances beyond our control.“How to do the most good” is a very hard problem and no isolated community will solve it on its own. We are excited about opening spaces to discuss this question in a variety of contexts, with diversity and inclusion in mind. The unfortunate fact is that it’s easier for some to hear about EA and be heard in EA, for reasons that have nothing to do with talent on one side or ill intent on the other, and everything to do with visas, the language you happened to be raised with, your socioeconomic background or environment. We believe that EA’s current blindspots can be identified by brilliant minds all over the world, and we want to promote spaces for more people to come together, learn and bring their unique perspectives to the conversation.We have almost finalised the programme. You can expect to see sessions to discuss community building in Low and Middle Income Countries and meet organisations working on top EA areas outside of the UK and the US. We will also have the usual EAGx suspects, including intro talks about Global Health and Development, AI Safety, Animal Welfare or Global Catastrophic Risks. We have confirmed speakers from GWWC, CEA, Charity Entrepreneurship, JPAL, Open Philanthropy, IPA, Rethink Priorities and the World Bank, among many others. Rob Wiblin will come to practice his Spanish talk about Career Prioritisation and Toby Ord will join us virtually to discuss longtermism in the context of the Global South.This conference is mainly for those who are from Latin America or have ties to the continent, because they are the ones for whom it’s most difficult to attend events elsewhere and connect with people who face similar struggles. But as we were saying before, we don’t believe in isolated communities and we certainly wouldn’t want to build a segregated EA LatAm community! On the contrary, we see this as an opportunity for connection. We’re therefore looking forward to receiving experienced members of the international community who are excited to meet talented people who might be under their radar –to talk with them, mentor them, learn from them, collaborate with them, hire them. On the other hand, if you’re new to EA, willing to learn more and based in a region that hosts regular EA conferences (e.g. the US, Europe), we suggest one of those might be a better fit for you to be introduced to the movement.If in doubt, err on the side of applying.And no, you don’t need to speak Spanish to join us.If you’re asking for funding to cover your travel expenses, bear in mind that costs in Mexico are cheap compared to Europe and the US. You should be able to find a good hotel for ∼60$. Regarding transport, Uber is available and very cheap.If you have any questions or comments don’t hesitate to write to latam@eaglobalx.orgWe think that EA with a Latin American spice can be great. Join us in January to discuss doing good over tacos and get the coolest EA merch the world has ever seen.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 04 Dec 2022 18:44:45 +0000 EA - Applications for EAGx LatAm closing on the 20th of December by LGlez Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications for EAGx LatAm closing on the 20th of December, published by LGlez on December 4, 2022 on The Effective Altruism Forum.This is just a short announcement that you can now apply to EAGx LatAm, which will take place in Mexico City on January 6-8. For more information, you can read our previous announcement here. Apologies for the delay in getting this post out, but we had to postpone it for circumstances beyond our control.“How to do the most good” is a very hard problem and no isolated community will solve it on its own. We are excited about opening spaces to discuss this question in a variety of contexts, with diversity and inclusion in mind. The unfortunate fact is that it’s easier for some to hear about EA and be heard in EA, for reasons that have nothing to do with talent on one side or ill intent on the other, and everything to do with visas, the language you happened to be raised with, your socioeconomic background or environment. We believe that EA’s current blindspots can be identified by brilliant minds all over the world, and we want to promote spaces for more people to come together, learn and bring their unique perspectives to the conversation.We have almost finalised the programme. You can expect to see sessions to discuss community building in Low and Middle Income Countries and meet organisations working on top EA areas outside of the UK and the US. We will also have the usual EAGx suspects, including intro talks about Global Health and Development, AI Safety, Animal Welfare or Global Catastrophic Risks. We have confirmed speakers from GWWC, CEA, Charity Entrepreneurship, JPAL, Open Philanthropy, IPA, Rethink Priorities and the World Bank, among many others. Rob Wiblin will come to practice his Spanish talk about Career Prioritisation and Toby Ord will join us virtually to discuss longtermism in the context of the Global South.This conference is mainly for those who are from Latin America or have ties to the continent, because they are the ones for whom it’s most difficult to attend events elsewhere and connect with people who face similar struggles. But as we were saying before, we don’t believe in isolated communities and we certainly wouldn’t want to build a segregated EA LatAm community! On the contrary, we see this as an opportunity for connection. We’re therefore looking forward to receiving experienced members of the international community who are excited to meet talented people who might be under their radar –to talk with them, mentor them, learn from them, collaborate with them, hire them. On the other hand, if you’re new to EA, willing to learn more and based in a region that hosts regular EA conferences (e.g. the US, Europe), we suggest one of those might be a better fit for you to be introduced to the movement.If in doubt, err on the side of applying.And no, you don’t need to speak Spanish to join us.If you’re asking for funding to cover your travel expenses, bear in mind that costs in Mexico are cheap compared to Europe and the US. You should be able to find a good hotel for ∼60$. Regarding transport, Uber is available and very cheap.If you have any questions or comments don’t hesitate to write to latam@eaglobalx.orgWe think that EA with a Latin American spice can be great. Join us in January to discuss doing good over tacos and get the coolest EA merch the world has ever seen.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications for EAGx LatAm closing on the 20th of December, published by LGlez on December 4, 2022 on The Effective Altruism Forum.This is just a short announcement that you can now apply to EAGx LatAm, which will take place in Mexico City on January 6-8. For more information, you can read our previous announcement here. Apologies for the delay in getting this post out, but we had to postpone it for circumstances beyond our control.“How to do the most good” is a very hard problem and no isolated community will solve it on its own. We are excited about opening spaces to discuss this question in a variety of contexts, with diversity and inclusion in mind. The unfortunate fact is that it’s easier for some to hear about EA and be heard in EA, for reasons that have nothing to do with talent on one side or ill intent on the other, and everything to do with visas, the language you happened to be raised with, your socioeconomic background or environment. We believe that EA’s current blindspots can be identified by brilliant minds all over the world, and we want to promote spaces for more people to come together, learn and bring their unique perspectives to the conversation.We have almost finalised the programme. You can expect to see sessions to discuss community building in Low and Middle Income Countries and meet organisations working on top EA areas outside of the UK and the US. We will also have the usual EAGx suspects, including intro talks about Global Health and Development, AI Safety, Animal Welfare or Global Catastrophic Risks. We have confirmed speakers from GWWC, CEA, Charity Entrepreneurship, JPAL, Open Philanthropy, IPA, Rethink Priorities and the World Bank, among many others. Rob Wiblin will come to practice his Spanish talk about Career Prioritisation and Toby Ord will join us virtually to discuss longtermism in the context of the Global South.This conference is mainly for those who are from Latin America or have ties to the continent, because they are the ones for whom it’s most difficult to attend events elsewhere and connect with people who face similar struggles. But as we were saying before, we don’t believe in isolated communities and we certainly wouldn’t want to build a segregated EA LatAm community! On the contrary, we see this as an opportunity for connection. We’re therefore looking forward to receiving experienced members of the international community who are excited to meet talented people who might be under their radar –to talk with them, mentor them, learn from them, collaborate with them, hire them. On the other hand, if you’re new to EA, willing to learn more and based in a region that hosts regular EA conferences (e.g. the US, Europe), we suggest one of those might be a better fit for you to be introduced to the movement.If in doubt, err on the side of applying.And no, you don’t need to speak Spanish to join us.If you’re asking for funding to cover your travel expenses, bear in mind that costs in Mexico are cheap compared to Europe and the US. You should be able to find a good hotel for ∼60$. Regarding transport, Uber is available and very cheap.If you have any questions or comments don’t hesitate to write to latam@eaglobalx.orgWe think that EA with a Latin American spice can be great. Join us in January to discuss doing good over tacos and get the coolest EA merch the world has ever seen.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
LGlez https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:17 None full 4003
XGnbde54nteLGw2RB_EA EA - Binding Fuzzies and Utilons Together by eleni Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Binding Fuzzies and Utilons Together, published by eleni on December 4, 2022 on The Effective Altruism Forum."A single death is a tragedy; a million deaths is a statistic."Attributed to Joseph Stalin.(Epistemic status: speculative)In this post, we argue that producing more emotionally appealing materials related to EA cause areas could yield some beneficial results. We present some possible research designs to assess the empirical validity of our claims, and call for EAs with a background in psychology to lead this research project.IntroductionIn Purchase Fuzzies and Utilons Separately, Yudkowsky argues that, if we want maximize our impact, we should be explicit about which of our "good deeds" we do with the intention of really helping the world to become a better place, and which are done in order for us to feel good about ourselves. By clearly separating these two types of actions, the argument goes, we could have both a larger positive impact on the world and get to feel even better about ourselves. We think that this is very valuable advice at the individual level. However, when considering the strategic approach the EA movement as a whole should have, we believe that merging warm fuzzies and utilons together, to some extent, might possibly be a good idea.Some ideas in the Effective Altruism movement are a deliberate attempt to correct a market failure in the "market" of doing good. Some cause areas and interventions, due to a wide range of psychological biases, are less emotionally appealing than others, which means that, when left unchecked, they would receive less resources than what would be optimal, and the opposite holds for the more emotionally appealing causes. The approach EA usually seems to take is to acknowledge this phenomenon ("neglectedness" in the ITN framework) and try to reallocate resources to where they would be the most efficient, without directly taking into account the emotional appeal of the cause. In this post, we argue that a further step in correcting this failure could be to explicitly targeting emotional appeal when designing the marketing of these neglected cause areas, in a way that would attenuate, in part, the very characteristics that make the cause neglected on the first place.We hypothesize that while trying to focus explicitly on improving the emotional appeal of EA cause areas could make the movement easier to promote to non-EAs, a significant part of the benefits of this strategy could come from the effect of increased engagement of those who are already EAs. If this hypothesis is valid, increasing the emotional appeal of marketing materials in EA organizations could prove to be a cost-effective intervention.In this post, we will develop this general idea in more detail. We will specify what kinds of interventions would fit what we have in mind, and we will present some potential upsides that these interventions might have. We have a simple idea in mind that should be empirically verified before being put into practice, and so we also tentatively present some research ideas that EAs with the necessary skills could pursue. We will also present some objections to our ideas and points of uncertainty as an important factor for our belief on the importance on further research. Finally, neither of us have a background in psychology, so take what is written here with a grain of salt.Binding Fuzzies and UtilonsThe hypothesis to be presented in this article is the following: explicitly focusing on increasing the emotional appeal in the marketing of EA organizations could prove to be a powerful tool to help move more resources towards these areas. As we will discuss in the next section, these could be both monetary and human resources, and we believe this effect could hold both for non-EAs and for EAs.A striking anecdotal ...]]>
eleni https://forum.effectivealtruism.org/posts/XGnbde54nteLGw2RB/binding-fuzzies-and-utilons-together-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Binding Fuzzies and Utilons Together, published by eleni on December 4, 2022 on The Effective Altruism Forum."A single death is a tragedy; a million deaths is a statistic."Attributed to Joseph Stalin.(Epistemic status: speculative)In this post, we argue that producing more emotionally appealing materials related to EA cause areas could yield some beneficial results. We present some possible research designs to assess the empirical validity of our claims, and call for EAs with a background in psychology to lead this research project.IntroductionIn Purchase Fuzzies and Utilons Separately, Yudkowsky argues that, if we want maximize our impact, we should be explicit about which of our "good deeds" we do with the intention of really helping the world to become a better place, and which are done in order for us to feel good about ourselves. By clearly separating these two types of actions, the argument goes, we could have both a larger positive impact on the world and get to feel even better about ourselves. We think that this is very valuable advice at the individual level. However, when considering the strategic approach the EA movement as a whole should have, we believe that merging warm fuzzies and utilons together, to some extent, might possibly be a good idea.Some ideas in the Effective Altruism movement are a deliberate attempt to correct a market failure in the "market" of doing good. Some cause areas and interventions, due to a wide range of psychological biases, are less emotionally appealing than others, which means that, when left unchecked, they would receive less resources than what would be optimal, and the opposite holds for the more emotionally appealing causes. The approach EA usually seems to take is to acknowledge this phenomenon ("neglectedness" in the ITN framework) and try to reallocate resources to where they would be the most efficient, without directly taking into account the emotional appeal of the cause. In this post, we argue that a further step in correcting this failure could be to explicitly targeting emotional appeal when designing the marketing of these neglected cause areas, in a way that would attenuate, in part, the very characteristics that make the cause neglected on the first place.We hypothesize that while trying to focus explicitly on improving the emotional appeal of EA cause areas could make the movement easier to promote to non-EAs, a significant part of the benefits of this strategy could come from the effect of increased engagement of those who are already EAs. If this hypothesis is valid, increasing the emotional appeal of marketing materials in EA organizations could prove to be a cost-effective intervention.In this post, we will develop this general idea in more detail. We will specify what kinds of interventions would fit what we have in mind, and we will present some potential upsides that these interventions might have. We have a simple idea in mind that should be empirically verified before being put into practice, and so we also tentatively present some research ideas that EAs with the necessary skills could pursue. We will also present some objections to our ideas and points of uncertainty as an important factor for our belief on the importance on further research. Finally, neither of us have a background in psychology, so take what is written here with a grain of salt.Binding Fuzzies and UtilonsThe hypothesis to be presented in this article is the following: explicitly focusing on increasing the emotional appeal in the marketing of EA organizations could prove to be a powerful tool to help move more resources towards these areas. As we will discuss in the next section, these could be both monetary and human resources, and we believe this effect could hold both for non-EAs and for EAs.A striking anecdotal ...]]>
Sun, 04 Dec 2022 18:42:44 +0000 EA - Binding Fuzzies and Utilons Together by eleni Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Binding Fuzzies and Utilons Together, published by eleni on December 4, 2022 on The Effective Altruism Forum."A single death is a tragedy; a million deaths is a statistic."Attributed to Joseph Stalin.(Epistemic status: speculative)In this post, we argue that producing more emotionally appealing materials related to EA cause areas could yield some beneficial results. We present some possible research designs to assess the empirical validity of our claims, and call for EAs with a background in psychology to lead this research project.IntroductionIn Purchase Fuzzies and Utilons Separately, Yudkowsky argues that, if we want maximize our impact, we should be explicit about which of our "good deeds" we do with the intention of really helping the world to become a better place, and which are done in order for us to feel good about ourselves. By clearly separating these two types of actions, the argument goes, we could have both a larger positive impact on the world and get to feel even better about ourselves. We think that this is very valuable advice at the individual level. However, when considering the strategic approach the EA movement as a whole should have, we believe that merging warm fuzzies and utilons together, to some extent, might possibly be a good idea.Some ideas in the Effective Altruism movement are a deliberate attempt to correct a market failure in the "market" of doing good. Some cause areas and interventions, due to a wide range of psychological biases, are less emotionally appealing than others, which means that, when left unchecked, they would receive less resources than what would be optimal, and the opposite holds for the more emotionally appealing causes. The approach EA usually seems to take is to acknowledge this phenomenon ("neglectedness" in the ITN framework) and try to reallocate resources to where they would be the most efficient, without directly taking into account the emotional appeal of the cause. In this post, we argue that a further step in correcting this failure could be to explicitly targeting emotional appeal when designing the marketing of these neglected cause areas, in a way that would attenuate, in part, the very characteristics that make the cause neglected on the first place.We hypothesize that while trying to focus explicitly on improving the emotional appeal of EA cause areas could make the movement easier to promote to non-EAs, a significant part of the benefits of this strategy could come from the effect of increased engagement of those who are already EAs. If this hypothesis is valid, increasing the emotional appeal of marketing materials in EA organizations could prove to be a cost-effective intervention.In this post, we will develop this general idea in more detail. We will specify what kinds of interventions would fit what we have in mind, and we will present some potential upsides that these interventions might have. We have a simple idea in mind that should be empirically verified before being put into practice, and so we also tentatively present some research ideas that EAs with the necessary skills could pursue. We will also present some objections to our ideas and points of uncertainty as an important factor for our belief on the importance on further research. Finally, neither of us have a background in psychology, so take what is written here with a grain of salt.Binding Fuzzies and UtilonsThe hypothesis to be presented in this article is the following: explicitly focusing on increasing the emotional appeal in the marketing of EA organizations could prove to be a powerful tool to help move more resources towards these areas. As we will discuss in the next section, these could be both monetary and human resources, and we believe this effect could hold both for non-EAs and for EAs.A striking anecdotal ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Binding Fuzzies and Utilons Together, published by eleni on December 4, 2022 on The Effective Altruism Forum."A single death is a tragedy; a million deaths is a statistic."Attributed to Joseph Stalin.(Epistemic status: speculative)In this post, we argue that producing more emotionally appealing materials related to EA cause areas could yield some beneficial results. We present some possible research designs to assess the empirical validity of our claims, and call for EAs with a background in psychology to lead this research project.IntroductionIn Purchase Fuzzies and Utilons Separately, Yudkowsky argues that, if we want maximize our impact, we should be explicit about which of our "good deeds" we do with the intention of really helping the world to become a better place, and which are done in order for us to feel good about ourselves. By clearly separating these two types of actions, the argument goes, we could have both a larger positive impact on the world and get to feel even better about ourselves. We think that this is very valuable advice at the individual level. However, when considering the strategic approach the EA movement as a whole should have, we believe that merging warm fuzzies and utilons together, to some extent, might possibly be a good idea.Some ideas in the Effective Altruism movement are a deliberate attempt to correct a market failure in the "market" of doing good. Some cause areas and interventions, due to a wide range of psychological biases, are less emotionally appealing than others, which means that, when left unchecked, they would receive less resources than what would be optimal, and the opposite holds for the more emotionally appealing causes. The approach EA usually seems to take is to acknowledge this phenomenon ("neglectedness" in the ITN framework) and try to reallocate resources to where they would be the most efficient, without directly taking into account the emotional appeal of the cause. In this post, we argue that a further step in correcting this failure could be to explicitly targeting emotional appeal when designing the marketing of these neglected cause areas, in a way that would attenuate, in part, the very characteristics that make the cause neglected on the first place.We hypothesize that while trying to focus explicitly on improving the emotional appeal of EA cause areas could make the movement easier to promote to non-EAs, a significant part of the benefits of this strategy could come from the effect of increased engagement of those who are already EAs. If this hypothesis is valid, increasing the emotional appeal of marketing materials in EA organizations could prove to be a cost-effective intervention.In this post, we will develop this general idea in more detail. We will specify what kinds of interventions would fit what we have in mind, and we will present some potential upsides that these interventions might have. We have a simple idea in mind that should be empirically verified before being put into practice, and so we also tentatively present some research ideas that EAs with the necessary skills could pursue. We will also present some objections to our ideas and points of uncertainty as an important factor for our belief on the importance on further research. Finally, neither of us have a background in psychology, so take what is written here with a grain of salt.Binding Fuzzies and UtilonsThe hypothesis to be presented in this article is the following: explicitly focusing on increasing the emotional appeal in the marketing of EA organizations could prove to be a powerful tool to help move more resources towards these areas. As we will discuss in the next section, these could be both monetary and human resources, and we believe this effect could hold both for non-EAs and for EAs.A striking anecdotal ...]]>
eleni https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:30 None full 4001
wDgdRxBmF7GXBsnqr_EA EA - Our 2022 Giving by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our 2022 Giving, published by Jeff Kaufman on December 4, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/wDgdRxBmF7GXBsnqr/our-2022-giving Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our 2022 Giving, published by Jeff Kaufman on December 4, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 04 Dec 2022 16:52:50 +0000 EA - Our 2022 Giving by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our 2022 Giving, published by Jeff Kaufman on December 4, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our 2022 Giving, published by Jeff Kaufman on December 4, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:24 None full 4002
buEazpmcKJhM5KRGx_EA EA - SFF Speculation Grants as an expedited funding source by Andrew Critch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SFF Speculation Grants as an expedited funding source, published by Andrew Critch on December 3, 2022 on The Effective Altruism Forum.Hi everyone, SFF has received numerous emails recently from organizations interested in expedited funding. I believe a number of people here already know about SFF Speculation Grants, but since we've never actually announced our existence on the EA Forum before:The Survival and Flourishing Fund has a means of expediting funding requests at any time of year, via applications to our Speculation Grants program:SFF Speculation Grants are expedited grants organized by SFF outside of our biannual grant-recommendation process (the S-process). “Speculation Grantors” are volunteers with budgets to make these grants. Each Speculation Grantor’s budget grows or increases with the settlement of budget adjustments that we call “impact futures” (explained further below). Currently, we have a total of ~20 Speculation Grantors, with a combined budget of approximately $4MM. Our process and software infrastructure for funding these grants were co-designed by Andrew Critch and Oliver Habryka.For instructions on how to apply, please visit the link above.For general information about the Survival and Flourishing Fund, see:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Andrew Critch https://forum.effectivealtruism.org/posts/buEazpmcKJhM5KRGx/sff-speculation-grants-as-an-expedited-funding-source Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SFF Speculation Grants as an expedited funding source, published by Andrew Critch on December 3, 2022 on The Effective Altruism Forum.Hi everyone, SFF has received numerous emails recently from organizations interested in expedited funding. I believe a number of people here already know about SFF Speculation Grants, but since we've never actually announced our existence on the EA Forum before:The Survival and Flourishing Fund has a means of expediting funding requests at any time of year, via applications to our Speculation Grants program:SFF Speculation Grants are expedited grants organized by SFF outside of our biannual grant-recommendation process (the S-process). “Speculation Grantors” are volunteers with budgets to make these grants. Each Speculation Grantor’s budget grows or increases with the settlement of budget adjustments that we call “impact futures” (explained further below). Currently, we have a total of ~20 Speculation Grantors, with a combined budget of approximately $4MM. Our process and software infrastructure for funding these grants were co-designed by Andrew Critch and Oliver Habryka.For instructions on how to apply, please visit the link above.For general information about the Survival and Flourishing Fund, see:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 03 Dec 2022 23:08:49 +0000 EA - SFF Speculation Grants as an expedited funding source by Andrew Critch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SFF Speculation Grants as an expedited funding source, published by Andrew Critch on December 3, 2022 on The Effective Altruism Forum.Hi everyone, SFF has received numerous emails recently from organizations interested in expedited funding. I believe a number of people here already know about SFF Speculation Grants, but since we've never actually announced our existence on the EA Forum before:The Survival and Flourishing Fund has a means of expediting funding requests at any time of year, via applications to our Speculation Grants program:SFF Speculation Grants are expedited grants organized by SFF outside of our biannual grant-recommendation process (the S-process). “Speculation Grantors” are volunteers with budgets to make these grants. Each Speculation Grantor’s budget grows or increases with the settlement of budget adjustments that we call “impact futures” (explained further below). Currently, we have a total of ~20 Speculation Grantors, with a combined budget of approximately $4MM. Our process and software infrastructure for funding these grants were co-designed by Andrew Critch and Oliver Habryka.For instructions on how to apply, please visit the link above.For general information about the Survival and Flourishing Fund, see:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SFF Speculation Grants as an expedited funding source, published by Andrew Critch on December 3, 2022 on The Effective Altruism Forum.Hi everyone, SFF has received numerous emails recently from organizations interested in expedited funding. I believe a number of people here already know about SFF Speculation Grants, but since we've never actually announced our existence on the EA Forum before:The Survival and Flourishing Fund has a means of expediting funding requests at any time of year, via applications to our Speculation Grants program:SFF Speculation Grants are expedited grants organized by SFF outside of our biannual grant-recommendation process (the S-process). “Speculation Grantors” are volunteers with budgets to make these grants. Each Speculation Grantor’s budget grows or increases with the settlement of budget adjustments that we call “impact futures” (explained further below). Currently, we have a total of ~20 Speculation Grantors, with a combined budget of approximately $4MM. Our process and software infrastructure for funding these grants were co-designed by Andrew Critch and Oliver Habryka.For instructions on how to apply, please visit the link above.For general information about the Survival and Flourishing Fund, see:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Andrew Critch https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:28 None full 4000
q2M9DDoogZGTJjQbg_EA EA - Revisiting EA's media policy by Arepo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Revisiting EA's media policy, published by Arepo on December 3, 2022 on The Effective Altruism Forum.Epistemic status: This post is meant to be a conversation starter rather than a conclusive argument. I don’t assert that any of the concerns in it are overwhelming, only that we have too quickly adopted a set of media communication practices without discussing their trade-offs.Also, while this was in draft form, Shakeel Hashim, CEA’s new head of communications, made some positive comments on the main thesis suggesting that he agreed with a lot of my criticisms and planned to have a much more active involvement with the media. If so, this post may be largely redundant - nonetheless, it seems worth having the conversation in public.CEA adheres to what they call the fidelity model of spreading ideas, which they formally introduced in 2017, though my sense is it was an unofficial policy well before that. In a low-fidelity nutshell, this is the claim that EA ideas are somewhat nuanced and media reporting often isn’t, and so it’s generally not worth pursuing - and often worth actively discouraging - media communication unless you’re a) extremely confident the outlet in question will report the ideas exactly as you describe them and b) you’re qualified to deal with the media.In practice, because CEA pull many strings, this being CEA policy makes it de facto EA policy. ‘Qualified to deal with the media’ seems to mean ‘CEA-sanctioned’, and I have heard of at least one organisation being denied CEA-directed money in part because it was considered too accommodating of the media. Given that ubiquity, I think it’s worth discussing the policy more depth. We have five years of results to look back on and, to my knowledge, no further public discussion of the subject. I have four key concerns with the current approach:It’s not grounded in researchIt leads to a high proportion of negative coverage for EAIt assumes a Platonic ideal of EAIt contributes to the hero-worship/concentration of power in EAElaborating each.Not empirically groundedThe article assumes that low-fidelity spreading of EA ideas is necessarily bad, but doesn’t give any data beyond some very general anecdotes to support this. There’s an obvious trade-off to be had between a small number of people doing something a lot like what we want and a larger number doing something a bit like what we want, and it’s very unclear which has higher expectation.To see the case for the alternative, we might compare the rise of the animal rights movement in the wake of Peter Singer’s original argument for animal welfare. The former is a philosophically mutated version of the latter, so on fidelity model reasoning would have been something that’s ‘similar but different’ - apparently treated on the fidelity model as undesirable. Similarly, the emergence of reducetarianism/flexitarianism looks very like what the fidelity model would consider to be a ‘diluted’ version of the practice of veganism Singer advocated. My sense is that both of these have nonetheless been strong net positives for animal welfare.High proportion of negative coverageIf you have influence over a group of supporters and you tell them not to communicate with the media, one result you might anticipate is that a much higher proportion of your media coverage comes from the detractors, who you don’t have influence over. Shutting out the media can also be counterproductive - they’re (mostly) human, and so tend to deal more kindly with people who deal more kindly with them. I have three supporting anecdotes, one admittedly eclipsing the others:FTXgateAt the time of writing, if you Google ‘effective altruism news’, you still get something like this.Similarly, if you look at Will’s Tweetstorm decrying SBF’s actions, the majority of responses are angrily negativ...]]>
Arepo https://forum.effectivealtruism.org/posts/q2M9DDoogZGTJjQbg/revisiting-ea-s-media-policy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Revisiting EA's media policy, published by Arepo on December 3, 2022 on The Effective Altruism Forum.Epistemic status: This post is meant to be a conversation starter rather than a conclusive argument. I don’t assert that any of the concerns in it are overwhelming, only that we have too quickly adopted a set of media communication practices without discussing their trade-offs.Also, while this was in draft form, Shakeel Hashim, CEA’s new head of communications, made some positive comments on the main thesis suggesting that he agreed with a lot of my criticisms and planned to have a much more active involvement with the media. If so, this post may be largely redundant - nonetheless, it seems worth having the conversation in public.CEA adheres to what they call the fidelity model of spreading ideas, which they formally introduced in 2017, though my sense is it was an unofficial policy well before that. In a low-fidelity nutshell, this is the claim that EA ideas are somewhat nuanced and media reporting often isn’t, and so it’s generally not worth pursuing - and often worth actively discouraging - media communication unless you’re a) extremely confident the outlet in question will report the ideas exactly as you describe them and b) you’re qualified to deal with the media.In practice, because CEA pull many strings, this being CEA policy makes it de facto EA policy. ‘Qualified to deal with the media’ seems to mean ‘CEA-sanctioned’, and I have heard of at least one organisation being denied CEA-directed money in part because it was considered too accommodating of the media. Given that ubiquity, I think it’s worth discussing the policy more depth. We have five years of results to look back on and, to my knowledge, no further public discussion of the subject. I have four key concerns with the current approach:It’s not grounded in researchIt leads to a high proportion of negative coverage for EAIt assumes a Platonic ideal of EAIt contributes to the hero-worship/concentration of power in EAElaborating each.Not empirically groundedThe article assumes that low-fidelity spreading of EA ideas is necessarily bad, but doesn’t give any data beyond some very general anecdotes to support this. There’s an obvious trade-off to be had between a small number of people doing something a lot like what we want and a larger number doing something a bit like what we want, and it’s very unclear which has higher expectation.To see the case for the alternative, we might compare the rise of the animal rights movement in the wake of Peter Singer’s original argument for animal welfare. The former is a philosophically mutated version of the latter, so on fidelity model reasoning would have been something that’s ‘similar but different’ - apparently treated on the fidelity model as undesirable. Similarly, the emergence of reducetarianism/flexitarianism looks very like what the fidelity model would consider to be a ‘diluted’ version of the practice of veganism Singer advocated. My sense is that both of these have nonetheless been strong net positives for animal welfare.High proportion of negative coverageIf you have influence over a group of supporters and you tell them not to communicate with the media, one result you might anticipate is that a much higher proportion of your media coverage comes from the detractors, who you don’t have influence over. Shutting out the media can also be counterproductive - they’re (mostly) human, and so tend to deal more kindly with people who deal more kindly with them. I have three supporting anecdotes, one admittedly eclipsing the others:FTXgateAt the time of writing, if you Google ‘effective altruism news’, you still get something like this.Similarly, if you look at Will’s Tweetstorm decrying SBF’s actions, the majority of responses are angrily negativ...]]>
Sat, 03 Dec 2022 18:02:51 +0000 EA - Revisiting EA's media policy by Arepo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Revisiting EA's media policy, published by Arepo on December 3, 2022 on The Effective Altruism Forum.Epistemic status: This post is meant to be a conversation starter rather than a conclusive argument. I don’t assert that any of the concerns in it are overwhelming, only that we have too quickly adopted a set of media communication practices without discussing their trade-offs.Also, while this was in draft form, Shakeel Hashim, CEA’s new head of communications, made some positive comments on the main thesis suggesting that he agreed with a lot of my criticisms and planned to have a much more active involvement with the media. If so, this post may be largely redundant - nonetheless, it seems worth having the conversation in public.CEA adheres to what they call the fidelity model of spreading ideas, which they formally introduced in 2017, though my sense is it was an unofficial policy well before that. In a low-fidelity nutshell, this is the claim that EA ideas are somewhat nuanced and media reporting often isn’t, and so it’s generally not worth pursuing - and often worth actively discouraging - media communication unless you’re a) extremely confident the outlet in question will report the ideas exactly as you describe them and b) you’re qualified to deal with the media.In practice, because CEA pull many strings, this being CEA policy makes it de facto EA policy. ‘Qualified to deal with the media’ seems to mean ‘CEA-sanctioned’, and I have heard of at least one organisation being denied CEA-directed money in part because it was considered too accommodating of the media. Given that ubiquity, I think it’s worth discussing the policy more depth. We have five years of results to look back on and, to my knowledge, no further public discussion of the subject. I have four key concerns with the current approach:It’s not grounded in researchIt leads to a high proportion of negative coverage for EAIt assumes a Platonic ideal of EAIt contributes to the hero-worship/concentration of power in EAElaborating each.Not empirically groundedThe article assumes that low-fidelity spreading of EA ideas is necessarily bad, but doesn’t give any data beyond some very general anecdotes to support this. There’s an obvious trade-off to be had between a small number of people doing something a lot like what we want and a larger number doing something a bit like what we want, and it’s very unclear which has higher expectation.To see the case for the alternative, we might compare the rise of the animal rights movement in the wake of Peter Singer’s original argument for animal welfare. The former is a philosophically mutated version of the latter, so on fidelity model reasoning would have been something that’s ‘similar but different’ - apparently treated on the fidelity model as undesirable. Similarly, the emergence of reducetarianism/flexitarianism looks very like what the fidelity model would consider to be a ‘diluted’ version of the practice of veganism Singer advocated. My sense is that both of these have nonetheless been strong net positives for animal welfare.High proportion of negative coverageIf you have influence over a group of supporters and you tell them not to communicate with the media, one result you might anticipate is that a much higher proportion of your media coverage comes from the detractors, who you don’t have influence over. Shutting out the media can also be counterproductive - they’re (mostly) human, and so tend to deal more kindly with people who deal more kindly with them. I have three supporting anecdotes, one admittedly eclipsing the others:FTXgateAt the time of writing, if you Google ‘effective altruism news’, you still get something like this.Similarly, if you look at Will’s Tweetstorm decrying SBF’s actions, the majority of responses are angrily negativ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Revisiting EA's media policy, published by Arepo on December 3, 2022 on The Effective Altruism Forum.Epistemic status: This post is meant to be a conversation starter rather than a conclusive argument. I don’t assert that any of the concerns in it are overwhelming, only that we have too quickly adopted a set of media communication practices without discussing their trade-offs.Also, while this was in draft form, Shakeel Hashim, CEA’s new head of communications, made some positive comments on the main thesis suggesting that he agreed with a lot of my criticisms and planned to have a much more active involvement with the media. If so, this post may be largely redundant - nonetheless, it seems worth having the conversation in public.CEA adheres to what they call the fidelity model of spreading ideas, which they formally introduced in 2017, though my sense is it was an unofficial policy well before that. In a low-fidelity nutshell, this is the claim that EA ideas are somewhat nuanced and media reporting often isn’t, and so it’s generally not worth pursuing - and often worth actively discouraging - media communication unless you’re a) extremely confident the outlet in question will report the ideas exactly as you describe them and b) you’re qualified to deal with the media.In practice, because CEA pull many strings, this being CEA policy makes it de facto EA policy. ‘Qualified to deal with the media’ seems to mean ‘CEA-sanctioned’, and I have heard of at least one organisation being denied CEA-directed money in part because it was considered too accommodating of the media. Given that ubiquity, I think it’s worth discussing the policy more depth. We have five years of results to look back on and, to my knowledge, no further public discussion of the subject. I have four key concerns with the current approach:It’s not grounded in researchIt leads to a high proportion of negative coverage for EAIt assumes a Platonic ideal of EAIt contributes to the hero-worship/concentration of power in EAElaborating each.Not empirically groundedThe article assumes that low-fidelity spreading of EA ideas is necessarily bad, but doesn’t give any data beyond some very general anecdotes to support this. There’s an obvious trade-off to be had between a small number of people doing something a lot like what we want and a larger number doing something a bit like what we want, and it’s very unclear which has higher expectation.To see the case for the alternative, we might compare the rise of the animal rights movement in the wake of Peter Singer’s original argument for animal welfare. The former is a philosophically mutated version of the latter, so on fidelity model reasoning would have been something that’s ‘similar but different’ - apparently treated on the fidelity model as undesirable. Similarly, the emergence of reducetarianism/flexitarianism looks very like what the fidelity model would consider to be a ‘diluted’ version of the practice of veganism Singer advocated. My sense is that both of these have nonetheless been strong net positives for animal welfare.High proportion of negative coverageIf you have influence over a group of supporters and you tell them not to communicate with the media, one result you might anticipate is that a much higher proportion of your media coverage comes from the detractors, who you don’t have influence over. Shutting out the media can also be counterproductive - they’re (mostly) human, and so tend to deal more kindly with people who deal more kindly with them. I have three supporting anecdotes, one admittedly eclipsing the others:FTXgateAt the time of writing, if you Google ‘effective altruism news’, you still get something like this.Similarly, if you look at Will’s Tweetstorm decrying SBF’s actions, the majority of responses are angrily negativ...]]>
Arepo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:50 None full 3993
erYvs4tLwnNCopBxg_EA EA - CEA “serious incident report”? by Pagw Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA “serious incident report”?, published by Pagw on December 3, 2022 on The Effective Altruism Forum.CNBC reported on 30/11/22 that the Centre for Effective Altruism had filed a '“serious incident report” tied to “the collapse of FTX”' with the Charity Commission for England and Wales. It says reasons for this could include 'the “loss of your charity’s money or assets” or “harm to your charity’s work or reputation”'. Here is more information about serious incident reporting.I have not seen any comment about this from CEA or anywhere else. Whilst the FTX fallout is serious, I wasn't aware of clear direct implications for CEA, so this seems like new information. Readers of the Forum may be donors to CEA and CEA is still soliciting donations through its website, so I think it's important to get a clear public picture of any problems. Clarifying that there is no problem relevant for the public would also be helpful if that's the case. Does anybody have knowledge about this that they could share?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Pagw https://forum.effectivealtruism.org/posts/erYvs4tLwnNCopBxg/cea-serious-incident-report Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA “serious incident report”?, published by Pagw on December 3, 2022 on The Effective Altruism Forum.CNBC reported on 30/11/22 that the Centre for Effective Altruism had filed a '“serious incident report” tied to “the collapse of FTX”' with the Charity Commission for England and Wales. It says reasons for this could include 'the “loss of your charity’s money or assets” or “harm to your charity’s work or reputation”'. Here is more information about serious incident reporting.I have not seen any comment about this from CEA or anywhere else. Whilst the FTX fallout is serious, I wasn't aware of clear direct implications for CEA, so this seems like new information. Readers of the Forum may be donors to CEA and CEA is still soliciting donations through its website, so I think it's important to get a clear public picture of any problems. Clarifying that there is no problem relevant for the public would also be helpful if that's the case. Does anybody have knowledge about this that they could share?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 03 Dec 2022 17:40:33 +0000 EA - CEA “serious incident report”? by Pagw Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA “serious incident report”?, published by Pagw on December 3, 2022 on The Effective Altruism Forum.CNBC reported on 30/11/22 that the Centre for Effective Altruism had filed a '“serious incident report” tied to “the collapse of FTX”' with the Charity Commission for England and Wales. It says reasons for this could include 'the “loss of your charity’s money or assets” or “harm to your charity’s work or reputation”'. Here is more information about serious incident reporting.I have not seen any comment about this from CEA or anywhere else. Whilst the FTX fallout is serious, I wasn't aware of clear direct implications for CEA, so this seems like new information. Readers of the Forum may be donors to CEA and CEA is still soliciting donations through its website, so I think it's important to get a clear public picture of any problems. Clarifying that there is no problem relevant for the public would also be helpful if that's the case. Does anybody have knowledge about this that they could share?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA “serious incident report”?, published by Pagw on December 3, 2022 on The Effective Altruism Forum.CNBC reported on 30/11/22 that the Centre for Effective Altruism had filed a '“serious incident report” tied to “the collapse of FTX”' with the Charity Commission for England and Wales. It says reasons for this could include 'the “loss of your charity’s money or assets” or “harm to your charity’s work or reputation”'. Here is more information about serious incident reporting.I have not seen any comment about this from CEA or anywhere else. Whilst the FTX fallout is serious, I wasn't aware of clear direct implications for CEA, so this seems like new information. Readers of the Forum may be donors to CEA and CEA is still soliciting donations through its website, so I think it's important to get a clear public picture of any problems. Clarifying that there is no problem relevant for the public would also be helpful if that's the case. Does anybody have knowledge about this that they could share?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Pagw https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:13 None full 3994
XtmGK2inWzp9AG32M_EA EA - Some notes on common challenges building EA orgs by Ben Kuhn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some notes on common challenges building EA orgs, published by Ben Kuhn on December 3, 2022 on The Effective Altruism Forum.(Follow-up to: Want advice on management/organization-building?)After posting that offer, I've chatted with a few people at different orgs (though still have bandwidth for more if other folks are interested, as I find them pretty fun!) and started to notice some trends in what kinds of management problems different orgs face. These are all low-confidence—I'd like to validate them against a broader set of orgs—but I thought I'd write them up to see if they resonate with people.ObservationsMany orgs skew extremely juniorThe EA community skews very young, and highly engaged EAs (like those working at orgs) skew even younger. Anecdotally, I was the same age or older than almost everyone I talked to, many of whom were the most experienced person in their org. By comparison, at Wave almost the entire prod/eng leadership team is my age or older. (Note that this seems to be less true in the most established/large/high-status orgs, e.g. Open Phil.)This isn't a disaster, especially since the best of the junior people are very talented, but it does lead to a set of typical problems:having to think through how to do everything from first principles rather than copy what's worked from elsewheremanagers needing to spend a fair amount of time providing basic "how to be a functioning employee" support to junior hiresmanagers not providing that support and the junior hires ending up being less effective, or growing less quickly, than they would at an org that could provide them more support(At Wave, we've largely avoided hiring people with <2 years of work experience onto the prod/eng team for this reason. Of course, that's easier for us than for many EA orgs, so I'm not suggesting this as a general solution.)It also leads to two specific subproblems:Many managers are first-time managersAgain, this isn't a disaster, since many of these first-time managers are very smart, hardworking and kind. But, first-time managers have a few patterns of mistakes they tend to make, mostly related to "seeing around corners" or making decisions whose consequences play out on a long time horizon (understandably, since they often haven't worked as a manager for long enough to have seen long-time-horizon decisions play out!). This is things like:Providing enough coaching and career growth for their teamGiving feedback that's difficult to giveMaking sure their reports are happy in their roles and not burning outThe last point seems especially underrated in EA, I suspect because people are unusually focused on "doing what's optimal, not what's fun." That's a good idea to a large extent, but even for many people who are largely motivated by impact, they can be massively more or less productive depending on how much their day-to-day work resonates with them. But I suspect many EAs, like me, are reluctant to admit that this applies to us too and we're not purely impact-maximizing robots.Almost all managers-of-managers are first-timersGoing from managing individual contributors to managing managers is a fairly different skillset in many ways. Most people in EA orgs that are managing managers seem to be "career EAs" for whom it's their first manager-of-managers role. As above, new managers of managers tend to make some predictable mistakes, e.g.:Not planning far enough ahead for what hires they'll need to make to support their org's growthHiring/promoting the wrong people into management roles (having become a good manager doesn't necessarily mean you'll be good at coaching/evaluating other managers who fail in different ways than you did!)Not noticing team dysfunction, either due to not having systems in place (e.g. skip-level 1:1s) or not knowing that something is ...]]>
Ben Kuhn https://forum.effectivealtruism.org/posts/XtmGK2inWzp9AG32M/some-notes-on-common-challenges-building-ea-orgs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some notes on common challenges building EA orgs, published by Ben Kuhn on December 3, 2022 on The Effective Altruism Forum.(Follow-up to: Want advice on management/organization-building?)After posting that offer, I've chatted with a few people at different orgs (though still have bandwidth for more if other folks are interested, as I find them pretty fun!) and started to notice some trends in what kinds of management problems different orgs face. These are all low-confidence—I'd like to validate them against a broader set of orgs—but I thought I'd write them up to see if they resonate with people.ObservationsMany orgs skew extremely juniorThe EA community skews very young, and highly engaged EAs (like those working at orgs) skew even younger. Anecdotally, I was the same age or older than almost everyone I talked to, many of whom were the most experienced person in their org. By comparison, at Wave almost the entire prod/eng leadership team is my age or older. (Note that this seems to be less true in the most established/large/high-status orgs, e.g. Open Phil.)This isn't a disaster, especially since the best of the junior people are very talented, but it does lead to a set of typical problems:having to think through how to do everything from first principles rather than copy what's worked from elsewheremanagers needing to spend a fair amount of time providing basic "how to be a functioning employee" support to junior hiresmanagers not providing that support and the junior hires ending up being less effective, or growing less quickly, than they would at an org that could provide them more support(At Wave, we've largely avoided hiring people with <2 years of work experience onto the prod/eng team for this reason. Of course, that's easier for us than for many EA orgs, so I'm not suggesting this as a general solution.)It also leads to two specific subproblems:Many managers are first-time managersAgain, this isn't a disaster, since many of these first-time managers are very smart, hardworking and kind. But, first-time managers have a few patterns of mistakes they tend to make, mostly related to "seeing around corners" or making decisions whose consequences play out on a long time horizon (understandably, since they often haven't worked as a manager for long enough to have seen long-time-horizon decisions play out!). This is things like:Providing enough coaching and career growth for their teamGiving feedback that's difficult to giveMaking sure their reports are happy in their roles and not burning outThe last point seems especially underrated in EA, I suspect because people are unusually focused on "doing what's optimal, not what's fun." That's a good idea to a large extent, but even for many people who are largely motivated by impact, they can be massively more or less productive depending on how much their day-to-day work resonates with them. But I suspect many EAs, like me, are reluctant to admit that this applies to us too and we're not purely impact-maximizing robots.Almost all managers-of-managers are first-timersGoing from managing individual contributors to managing managers is a fairly different skillset in many ways. Most people in EA orgs that are managing managers seem to be "career EAs" for whom it's their first manager-of-managers role. As above, new managers of managers tend to make some predictable mistakes, e.g.:Not planning far enough ahead for what hires they'll need to make to support their org's growthHiring/promoting the wrong people into management roles (having become a good manager doesn't necessarily mean you'll be good at coaching/evaluating other managers who fail in different ways than you did!)Not noticing team dysfunction, either due to not having systems in place (e.g. skip-level 1:1s) or not knowing that something is ...]]>
Sat, 03 Dec 2022 04:37:53 +0000 EA - Some notes on common challenges building EA orgs by Ben Kuhn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some notes on common challenges building EA orgs, published by Ben Kuhn on December 3, 2022 on The Effective Altruism Forum.(Follow-up to: Want advice on management/organization-building?)After posting that offer, I've chatted with a few people at different orgs (though still have bandwidth for more if other folks are interested, as I find them pretty fun!) and started to notice some trends in what kinds of management problems different orgs face. These are all low-confidence—I'd like to validate them against a broader set of orgs—but I thought I'd write them up to see if they resonate with people.ObservationsMany orgs skew extremely juniorThe EA community skews very young, and highly engaged EAs (like those working at orgs) skew even younger. Anecdotally, I was the same age or older than almost everyone I talked to, many of whom were the most experienced person in their org. By comparison, at Wave almost the entire prod/eng leadership team is my age or older. (Note that this seems to be less true in the most established/large/high-status orgs, e.g. Open Phil.)This isn't a disaster, especially since the best of the junior people are very talented, but it does lead to a set of typical problems:having to think through how to do everything from first principles rather than copy what's worked from elsewheremanagers needing to spend a fair amount of time providing basic "how to be a functioning employee" support to junior hiresmanagers not providing that support and the junior hires ending up being less effective, or growing less quickly, than they would at an org that could provide them more support(At Wave, we've largely avoided hiring people with <2 years of work experience onto the prod/eng team for this reason. Of course, that's easier for us than for many EA orgs, so I'm not suggesting this as a general solution.)It also leads to two specific subproblems:Many managers are first-time managersAgain, this isn't a disaster, since many of these first-time managers are very smart, hardworking and kind. But, first-time managers have a few patterns of mistakes they tend to make, mostly related to "seeing around corners" or making decisions whose consequences play out on a long time horizon (understandably, since they often haven't worked as a manager for long enough to have seen long-time-horizon decisions play out!). This is things like:Providing enough coaching and career growth for their teamGiving feedback that's difficult to giveMaking sure their reports are happy in their roles and not burning outThe last point seems especially underrated in EA, I suspect because people are unusually focused on "doing what's optimal, not what's fun." That's a good idea to a large extent, but even for many people who are largely motivated by impact, they can be massively more or less productive depending on how much their day-to-day work resonates with them. But I suspect many EAs, like me, are reluctant to admit that this applies to us too and we're not purely impact-maximizing robots.Almost all managers-of-managers are first-timersGoing from managing individual contributors to managing managers is a fairly different skillset in many ways. Most people in EA orgs that are managing managers seem to be "career EAs" for whom it's their first manager-of-managers role. As above, new managers of managers tend to make some predictable mistakes, e.g.:Not planning far enough ahead for what hires they'll need to make to support their org's growthHiring/promoting the wrong people into management roles (having become a good manager doesn't necessarily mean you'll be good at coaching/evaluating other managers who fail in different ways than you did!)Not noticing team dysfunction, either due to not having systems in place (e.g. skip-level 1:1s) or not knowing that something is ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some notes on common challenges building EA orgs, published by Ben Kuhn on December 3, 2022 on The Effective Altruism Forum.(Follow-up to: Want advice on management/organization-building?)After posting that offer, I've chatted with a few people at different orgs (though still have bandwidth for more if other folks are interested, as I find them pretty fun!) and started to notice some trends in what kinds of management problems different orgs face. These are all low-confidence—I'd like to validate them against a broader set of orgs—but I thought I'd write them up to see if they resonate with people.ObservationsMany orgs skew extremely juniorThe EA community skews very young, and highly engaged EAs (like those working at orgs) skew even younger. Anecdotally, I was the same age or older than almost everyone I talked to, many of whom were the most experienced person in their org. By comparison, at Wave almost the entire prod/eng leadership team is my age or older. (Note that this seems to be less true in the most established/large/high-status orgs, e.g. Open Phil.)This isn't a disaster, especially since the best of the junior people are very talented, but it does lead to a set of typical problems:having to think through how to do everything from first principles rather than copy what's worked from elsewheremanagers needing to spend a fair amount of time providing basic "how to be a functioning employee" support to junior hiresmanagers not providing that support and the junior hires ending up being less effective, or growing less quickly, than they would at an org that could provide them more support(At Wave, we've largely avoided hiring people with <2 years of work experience onto the prod/eng team for this reason. Of course, that's easier for us than for many EA orgs, so I'm not suggesting this as a general solution.)It also leads to two specific subproblems:Many managers are first-time managersAgain, this isn't a disaster, since many of these first-time managers are very smart, hardworking and kind. But, first-time managers have a few patterns of mistakes they tend to make, mostly related to "seeing around corners" or making decisions whose consequences play out on a long time horizon (understandably, since they often haven't worked as a manager for long enough to have seen long-time-horizon decisions play out!). This is things like:Providing enough coaching and career growth for their teamGiving feedback that's difficult to giveMaking sure their reports are happy in their roles and not burning outThe last point seems especially underrated in EA, I suspect because people are unusually focused on "doing what's optimal, not what's fun." That's a good idea to a large extent, but even for many people who are largely motivated by impact, they can be massively more or less productive depending on how much their day-to-day work resonates with them. But I suspect many EAs, like me, are reluctant to admit that this applies to us too and we're not purely impact-maximizing robots.Almost all managers-of-managers are first-timersGoing from managing individual contributors to managing managers is a fairly different skillset in many ways. Most people in EA orgs that are managing managers seem to be "career EAs" for whom it's their first manager-of-managers role. As above, new managers of managers tend to make some predictable mistakes, e.g.:Not planning far enough ahead for what hires they'll need to make to support their org's growthHiring/promoting the wrong people into management roles (having become a good manager doesn't necessarily mean you'll be good at coaching/evaluating other managers who fail in different ways than you did!)Not noticing team dysfunction, either due to not having systems in place (e.g. skip-level 1:1s) or not knowing that something is ...]]>
Ben Kuhn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:03 None full 3992
nGhmNsGHkRYXjXrTF_EA EA - Is Headhunting within EA Appropriate? by Dan Stein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is Headhunting within EA Appropriate?, published by Dan Stein on December 2, 2022 on The Effective Altruism Forum.I'm the Chief Economist at IDinsight. I've been somewhat surprised recently to see a number of very direct headhunting attempts from people in the EA community, directed at key staff members of our organization. This is not a one-off, this is attempts to recruit multiple staff from a number of hiring organizations.I understand that the recruitment of great staff is a key bottleneck for EA orgs, and that this has resulting in more resources being into headhunting. (For posts that discuss this issue, see here, here, and here.) But I would have though that this headhunting would be concentrated on less impactful organizations outside the EA community. Clearly, if a headhunter eases a bottleneck at a high-impact organization while creating a bottleneck at another equally high-impact organization, they are not having a positive effect.I wouldn't call IDinsight an EA organization, but we are certainly collaborative with the EA ecosystem, working closely with EA funders and high-impact implementation organizations in global health and development. We are a nonprofit dedicated to maximizing our social impact, and although I'm certainly biased I think we are in impactful organization. Perhaps headhunters targeting our staff feel that the roles they are recruiting for are much higher-impact than the roles people currently have at IDinsight, and I would respect their actions if this were the case. However, I would imagine that headhunters also are motivated to fill roles, and this would hinder them from accurately weighing the global impact of someone moving from job X to job Y.I do understand this is complicated. The decision to move jobs ultimately rests with the worker, not the headhunter, and I of course respect the decision of anyone to switch jobs. But I do think the EA community should be thinking strategically about how to maximize our headhunting resources for total global impact, as opposed to just impact for the organizations they are working for. I wonder, are there any established norms or best-practices within the community? If not, I think it would make sense to develop some.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Dan Stein https://forum.effectivealtruism.org/posts/nGhmNsGHkRYXjXrTF/is-headhunting-within-ea-appropriate Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is Headhunting within EA Appropriate?, published by Dan Stein on December 2, 2022 on The Effective Altruism Forum.I'm the Chief Economist at IDinsight. I've been somewhat surprised recently to see a number of very direct headhunting attempts from people in the EA community, directed at key staff members of our organization. This is not a one-off, this is attempts to recruit multiple staff from a number of hiring organizations.I understand that the recruitment of great staff is a key bottleneck for EA orgs, and that this has resulting in more resources being into headhunting. (For posts that discuss this issue, see here, here, and here.) But I would have though that this headhunting would be concentrated on less impactful organizations outside the EA community. Clearly, if a headhunter eases a bottleneck at a high-impact organization while creating a bottleneck at another equally high-impact organization, they are not having a positive effect.I wouldn't call IDinsight an EA organization, but we are certainly collaborative with the EA ecosystem, working closely with EA funders and high-impact implementation organizations in global health and development. We are a nonprofit dedicated to maximizing our social impact, and although I'm certainly biased I think we are in impactful organization. Perhaps headhunters targeting our staff feel that the roles they are recruiting for are much higher-impact than the roles people currently have at IDinsight, and I would respect their actions if this were the case. However, I would imagine that headhunters also are motivated to fill roles, and this would hinder them from accurately weighing the global impact of someone moving from job X to job Y.I do understand this is complicated. The decision to move jobs ultimately rests with the worker, not the headhunter, and I of course respect the decision of anyone to switch jobs. But I do think the EA community should be thinking strategically about how to maximize our headhunting resources for total global impact, as opposed to just impact for the organizations they are working for. I wonder, are there any established norms or best-practices within the community? If not, I think it would make sense to develop some.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 02 Dec 2022 21:45:36 +0000 EA - Is Headhunting within EA Appropriate? by Dan Stein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is Headhunting within EA Appropriate?, published by Dan Stein on December 2, 2022 on The Effective Altruism Forum.I'm the Chief Economist at IDinsight. I've been somewhat surprised recently to see a number of very direct headhunting attempts from people in the EA community, directed at key staff members of our organization. This is not a one-off, this is attempts to recruit multiple staff from a number of hiring organizations.I understand that the recruitment of great staff is a key bottleneck for EA orgs, and that this has resulting in more resources being into headhunting. (For posts that discuss this issue, see here, here, and here.) But I would have though that this headhunting would be concentrated on less impactful organizations outside the EA community. Clearly, if a headhunter eases a bottleneck at a high-impact organization while creating a bottleneck at another equally high-impact organization, they are not having a positive effect.I wouldn't call IDinsight an EA organization, but we are certainly collaborative with the EA ecosystem, working closely with EA funders and high-impact implementation organizations in global health and development. We are a nonprofit dedicated to maximizing our social impact, and although I'm certainly biased I think we are in impactful organization. Perhaps headhunters targeting our staff feel that the roles they are recruiting for are much higher-impact than the roles people currently have at IDinsight, and I would respect their actions if this were the case. However, I would imagine that headhunters also are motivated to fill roles, and this would hinder them from accurately weighing the global impact of someone moving from job X to job Y.I do understand this is complicated. The decision to move jobs ultimately rests with the worker, not the headhunter, and I of course respect the decision of anyone to switch jobs. But I do think the EA community should be thinking strategically about how to maximize our headhunting resources for total global impact, as opposed to just impact for the organizations they are working for. I wonder, are there any established norms or best-practices within the community? If not, I think it would make sense to develop some.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is Headhunting within EA Appropriate?, published by Dan Stein on December 2, 2022 on The Effective Altruism Forum.I'm the Chief Economist at IDinsight. I've been somewhat surprised recently to see a number of very direct headhunting attempts from people in the EA community, directed at key staff members of our organization. This is not a one-off, this is attempts to recruit multiple staff from a number of hiring organizations.I understand that the recruitment of great staff is a key bottleneck for EA orgs, and that this has resulting in more resources being into headhunting. (For posts that discuss this issue, see here, here, and here.) But I would have though that this headhunting would be concentrated on less impactful organizations outside the EA community. Clearly, if a headhunter eases a bottleneck at a high-impact organization while creating a bottleneck at another equally high-impact organization, they are not having a positive effect.I wouldn't call IDinsight an EA organization, but we are certainly collaborative with the EA ecosystem, working closely with EA funders and high-impact implementation organizations in global health and development. We are a nonprofit dedicated to maximizing our social impact, and although I'm certainly biased I think we are in impactful organization. Perhaps headhunters targeting our staff feel that the roles they are recruiting for are much higher-impact than the roles people currently have at IDinsight, and I would respect their actions if this were the case. However, I would imagine that headhunters also are motivated to fill roles, and this would hinder them from accurately weighing the global impact of someone moving from job X to job Y.I do understand this is complicated. The decision to move jobs ultimately rests with the worker, not the headhunter, and I of course respect the decision of anyone to switch jobs. But I do think the EA community should be thinking strategically about how to maximize our headhunting resources for total global impact, as opposed to just impact for the organizations they are working for. I wonder, are there any established norms or best-practices within the community? If not, I think it would make sense to develop some.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Dan Stein https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:12 None full 3982
QbLKFRhbQN8JvtWkM_EA EA - The Founders Pledge Climate Fund at 2 years by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Founders Pledge Climate Fund at 2 years, published by jackva on December 2, 2022 on The Effective Altruism Forum.By Johannes Ackva, Violet Buxton-Walsh, and Luisa SandkühlerThese posts were originally written for the Founders Pledge Blog so the style is a bit different (e.g. less technical, less hedging language) than for typical EA Forum posts. We add some light edits in [brackets] specifically for the Forum version and include a Forum-specific summary. The original version of this post can be found here.Contextualization and summaryIn the two years since its inception the Founders Pledge Climate Change Fund has distributed over $10M USD to high-impact climate interventions. In this post we reiterate the reasons for having the Climate Fund, why we believe it is higher impact than giving to recommended charities (also see Giving What We Can’s recent post), its initial impact, and our current view of key uncertainties to investigate for future grantmaking. Two companion posts present initial ideas on those uncertainties, namely making sense of the reshaped US climate response (Forum version here), as well as the Ukraine invasion (Forum version here) and its implications for climate and energy.IntroductionThis fall we're celebrating the two year anniversary of the Founders Pledge Climate Change Fund. To date, the fund has allocated more than $10M USD to high-impact climate interventions. Our team is committed to finding the best available giving opportunities in the climate space and filling the most urgent funding needs. This is why, once we’ve made all of our planned grants for 2022, there will be no money remaining in the Fund. Moving forward, we hope to raise much more capital and, as in the past, spend it rapidly on the most effective climate solutions.This post will give a high-level overview of how this money has been hard at work (i) accelerating innovation in neglected technologies, (ii) avoiding carbon lock-in in emerging economies, (iii) promoting policy leadership and paradigm shaping, and (iv) catalytically growing organizations during the past two years.We start by laying out our basic “meta” theory of change and why we created the Climate Fund in the first place, before discussing some of the recent successes as well as future plans.Part I: The WhyHow to have an impact on climateIn a well-funded space such as climate, with philanthropic giving on the order of USD 10 billion per year and societal spending around 100x that at about USD 1 trillion per year, it is not easy to have a meaningful impact as an individual donor.Indeed, a natural question for any individual might be – “how can my contribution matter?” – when several of the world’s richest people, such as Jeff Bezos and Bill Gates, are major climate philanthropists.The visual below illustrates this challenge, comparing the Climate Fund at business as usual in 2023 to climate philanthropy at large (about 1000x larger) and societal response (100x larger still):Given the size of existing funding as well as predictable biases in climate philanthropy and our societal climate response, we believe it is overwhelmingly likely that impact-maximizing action for individual donors coming into climate lies in correcting the existing biases of the larger pool of overall climate philanthropy, to fill blindspots and leverage existing attention on climate more effectively. You can read, listen, or watch more about this in the linked materials.Why a fund?Of course, until now we have not yet answered the question “why should this happen through a Fund?”.There are several reasons to expect higher impact from a fund than from individual giving:The fund model allows us to make larger, coordinated grants that enable significant change for grantees, such as hiring new staff and starting new programs;...]]>
jackva https://forum.effectivealtruism.org/posts/QbLKFRhbQN8JvtWkM/the-founders-pledge-climate-fund-at-2-years Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Founders Pledge Climate Fund at 2 years, published by jackva on December 2, 2022 on The Effective Altruism Forum.By Johannes Ackva, Violet Buxton-Walsh, and Luisa SandkühlerThese posts were originally written for the Founders Pledge Blog so the style is a bit different (e.g. less technical, less hedging language) than for typical EA Forum posts. We add some light edits in [brackets] specifically for the Forum version and include a Forum-specific summary. The original version of this post can be found here.Contextualization and summaryIn the two years since its inception the Founders Pledge Climate Change Fund has distributed over $10M USD to high-impact climate interventions. In this post we reiterate the reasons for having the Climate Fund, why we believe it is higher impact than giving to recommended charities (also see Giving What We Can’s recent post), its initial impact, and our current view of key uncertainties to investigate for future grantmaking. Two companion posts present initial ideas on those uncertainties, namely making sense of the reshaped US climate response (Forum version here), as well as the Ukraine invasion (Forum version here) and its implications for climate and energy.IntroductionThis fall we're celebrating the two year anniversary of the Founders Pledge Climate Change Fund. To date, the fund has allocated more than $10M USD to high-impact climate interventions. Our team is committed to finding the best available giving opportunities in the climate space and filling the most urgent funding needs. This is why, once we’ve made all of our planned grants for 2022, there will be no money remaining in the Fund. Moving forward, we hope to raise much more capital and, as in the past, spend it rapidly on the most effective climate solutions.This post will give a high-level overview of how this money has been hard at work (i) accelerating innovation in neglected technologies, (ii) avoiding carbon lock-in in emerging economies, (iii) promoting policy leadership and paradigm shaping, and (iv) catalytically growing organizations during the past two years.We start by laying out our basic “meta” theory of change and why we created the Climate Fund in the first place, before discussing some of the recent successes as well as future plans.Part I: The WhyHow to have an impact on climateIn a well-funded space such as climate, with philanthropic giving on the order of USD 10 billion per year and societal spending around 100x that at about USD 1 trillion per year, it is not easy to have a meaningful impact as an individual donor.Indeed, a natural question for any individual might be – “how can my contribution matter?” – when several of the world’s richest people, such as Jeff Bezos and Bill Gates, are major climate philanthropists.The visual below illustrates this challenge, comparing the Climate Fund at business as usual in 2023 to climate philanthropy at large (about 1000x larger) and societal response (100x larger still):Given the size of existing funding as well as predictable biases in climate philanthropy and our societal climate response, we believe it is overwhelmingly likely that impact-maximizing action for individual donors coming into climate lies in correcting the existing biases of the larger pool of overall climate philanthropy, to fill blindspots and leverage existing attention on climate more effectively. You can read, listen, or watch more about this in the linked materials.Why a fund?Of course, until now we have not yet answered the question “why should this happen through a Fund?”.There are several reasons to expect higher impact from a fund than from individual giving:The fund model allows us to make larger, coordinated grants that enable significant change for grantees, such as hiring new staff and starting new programs;...]]>
Fri, 02 Dec 2022 20:26:07 +0000 EA - The Founders Pledge Climate Fund at 2 years by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Founders Pledge Climate Fund at 2 years, published by jackva on December 2, 2022 on The Effective Altruism Forum.By Johannes Ackva, Violet Buxton-Walsh, and Luisa SandkühlerThese posts were originally written for the Founders Pledge Blog so the style is a bit different (e.g. less technical, less hedging language) than for typical EA Forum posts. We add some light edits in [brackets] specifically for the Forum version and include a Forum-specific summary. The original version of this post can be found here.Contextualization and summaryIn the two years since its inception the Founders Pledge Climate Change Fund has distributed over $10M USD to high-impact climate interventions. In this post we reiterate the reasons for having the Climate Fund, why we believe it is higher impact than giving to recommended charities (also see Giving What We Can’s recent post), its initial impact, and our current view of key uncertainties to investigate for future grantmaking. Two companion posts present initial ideas on those uncertainties, namely making sense of the reshaped US climate response (Forum version here), as well as the Ukraine invasion (Forum version here) and its implications for climate and energy.IntroductionThis fall we're celebrating the two year anniversary of the Founders Pledge Climate Change Fund. To date, the fund has allocated more than $10M USD to high-impact climate interventions. Our team is committed to finding the best available giving opportunities in the climate space and filling the most urgent funding needs. This is why, once we’ve made all of our planned grants for 2022, there will be no money remaining in the Fund. Moving forward, we hope to raise much more capital and, as in the past, spend it rapidly on the most effective climate solutions.This post will give a high-level overview of how this money has been hard at work (i) accelerating innovation in neglected technologies, (ii) avoiding carbon lock-in in emerging economies, (iii) promoting policy leadership and paradigm shaping, and (iv) catalytically growing organizations during the past two years.We start by laying out our basic “meta” theory of change and why we created the Climate Fund in the first place, before discussing some of the recent successes as well as future plans.Part I: The WhyHow to have an impact on climateIn a well-funded space such as climate, with philanthropic giving on the order of USD 10 billion per year and societal spending around 100x that at about USD 1 trillion per year, it is not easy to have a meaningful impact as an individual donor.Indeed, a natural question for any individual might be – “how can my contribution matter?” – when several of the world’s richest people, such as Jeff Bezos and Bill Gates, are major climate philanthropists.The visual below illustrates this challenge, comparing the Climate Fund at business as usual in 2023 to climate philanthropy at large (about 1000x larger) and societal response (100x larger still):Given the size of existing funding as well as predictable biases in climate philanthropy and our societal climate response, we believe it is overwhelmingly likely that impact-maximizing action for individual donors coming into climate lies in correcting the existing biases of the larger pool of overall climate philanthropy, to fill blindspots and leverage existing attention on climate more effectively. You can read, listen, or watch more about this in the linked materials.Why a fund?Of course, until now we have not yet answered the question “why should this happen through a Fund?”.There are several reasons to expect higher impact from a fund than from individual giving:The fund model allows us to make larger, coordinated grants that enable significant change for grantees, such as hiring new staff and starting new programs;...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Founders Pledge Climate Fund at 2 years, published by jackva on December 2, 2022 on The Effective Altruism Forum.By Johannes Ackva, Violet Buxton-Walsh, and Luisa SandkühlerThese posts were originally written for the Founders Pledge Blog so the style is a bit different (e.g. less technical, less hedging language) than for typical EA Forum posts. We add some light edits in [brackets] specifically for the Forum version and include a Forum-specific summary. The original version of this post can be found here.Contextualization and summaryIn the two years since its inception the Founders Pledge Climate Change Fund has distributed over $10M USD to high-impact climate interventions. In this post we reiterate the reasons for having the Climate Fund, why we believe it is higher impact than giving to recommended charities (also see Giving What We Can’s recent post), its initial impact, and our current view of key uncertainties to investigate for future grantmaking. Two companion posts present initial ideas on those uncertainties, namely making sense of the reshaped US climate response (Forum version here), as well as the Ukraine invasion (Forum version here) and its implications for climate and energy.IntroductionThis fall we're celebrating the two year anniversary of the Founders Pledge Climate Change Fund. To date, the fund has allocated more than $10M USD to high-impact climate interventions. Our team is committed to finding the best available giving opportunities in the climate space and filling the most urgent funding needs. This is why, once we’ve made all of our planned grants for 2022, there will be no money remaining in the Fund. Moving forward, we hope to raise much more capital and, as in the past, spend it rapidly on the most effective climate solutions.This post will give a high-level overview of how this money has been hard at work (i) accelerating innovation in neglected technologies, (ii) avoiding carbon lock-in in emerging economies, (iii) promoting policy leadership and paradigm shaping, and (iv) catalytically growing organizations during the past two years.We start by laying out our basic “meta” theory of change and why we created the Climate Fund in the first place, before discussing some of the recent successes as well as future plans.Part I: The WhyHow to have an impact on climateIn a well-funded space such as climate, with philanthropic giving on the order of USD 10 billion per year and societal spending around 100x that at about USD 1 trillion per year, it is not easy to have a meaningful impact as an individual donor.Indeed, a natural question for any individual might be – “how can my contribution matter?” – when several of the world’s richest people, such as Jeff Bezos and Bill Gates, are major climate philanthropists.The visual below illustrates this challenge, comparing the Climate Fund at business as usual in 2023 to climate philanthropy at large (about 1000x larger) and societal response (100x larger still):Given the size of existing funding as well as predictable biases in climate philanthropy and our societal climate response, we believe it is overwhelmingly likely that impact-maximizing action for individual donors coming into climate lies in correcting the existing biases of the larger pool of overall climate philanthropy, to fill blindspots and leverage existing attention on climate more effectively. You can read, listen, or watch more about this in the linked materials.Why a fund?Of course, until now we have not yet answered the question “why should this happen through a Fund?”.There are several reasons to expect higher impact from a fund than from individual giving:The fund model allows us to make larger, coordinated grants that enable significant change for grantees, such as hiring new staff and starting new programs;...]]>
jackva https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:10 None full 3984
G3vzNHjrL8AQmBqFb_EA EA - Winter ML upskilling camp by Nathan Barnard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winter ML upskilling camp, published by Nathan Barnard on December 2, 2022 on The Effective Altruism Forum.Title: Apply for the ML Winter Camp in Cambridge, UK [2-10 Jan]TL;DR: We are running a UK-based ML upskilling camp from 2-10 January in Cambridge for people with no prior experience in ML who want to work on technical AI safety. Apply here by 11 December.We (Nathan Barnard, Joe Hardie, Quratul Zainab and Hannah Erlebach) will be running a machine learning upskilling camp this January in conjunction with the Cambridge AI Safety Hub. The camp is designed for people with little-to-no ML experience to work through a curriculum based on the first two weeks of MLAB under the guidance of experienced mentors, in order to develop skills which are necessary for conducting many kinds of technical AI safety research.The camp will take place from 2-10 January in Cambridge, UK.Accommodation will be provided at Emmanuel College.There are up to 20 in-person spaces; the camp will take place in the Sidney Street Office in central Cambridge.There is also the option to attend online for those who cannot attend in-person, although participants are strongly encouraged to attend in-person if possible, as we expect it to be substantially harder to make progress if attending online. As such, our bar for accepting virtual participants will be higher. We can cover travel costs if this is a barrier to attending in-person.Apply to be a participantWho we are looking forThe typical participant we are looking for will have:Strong quantitative skills (e.g., a maths/physics/engineering bakground)An intention to work on AI safety research projects which require ML experienceLittle-to-no prior ML experienceThe following are strongly preferred, but not essential:Programming experience (preferably Python)AI safety knowledge equivalent to having at least completed the AGI Safety Fundamentals alignment curriculumThe camp is open to participants from all over the world, but in particular those from the UK and Europe; for those located in the USA or Canada, we recommend (also) applying for the CBAI Winter ML Bootcamp, happening either in Boston or Berkeley (deadline 4 December).If you're unsure if you're a good fit for this camp, we encourage you to err on the side of applying. We recognise that evidence suggests that less privileged individuals tend to underestimate their abilities, and encourage individuals with diverse backgrounds and experiences to apply; we especially encourage applications from women and minorities.How to applyFill out the application form by Sunday 11 December, 23:59 GMT+0.Decisions will be released no later than 16 December; if you require an earlier decision in order to make plans for January, you can specify so in your application.Apply to be a mentorWe are looking for mentors to be present full- or part-time during the camp. Although participants will work through the curriculum in a self-directed manner, we think that learning can be greatly accelerated when there are experts on hand to answer questions and clarify concepts.We expect mentors to beExperienced ML programmersFamiliar with the content of the MLAB curriculum (it’s helpful, but not necessary, if they have participated in MLAB themselves)Knowledgeable about AI safety (although this is less important)Comfortable with teaching (past teaching or tutoring experience can be useful)However, we also acknowledge that being a mentor can be useful for gaining skills and confidence in teaching, and for consolidating the content in one’s own mind; we hope that being a mentor will also be a useful experience for mentors themselves!If needed, we are able to provide accommodation in Cambridge, and can offer compensation for your time at £100 for a half day or £200 for a full day. We understand that m...]]>
Nathan Barnard https://forum.effectivealtruism.org/posts/G3vzNHjrL8AQmBqFb/winter-ml-upskilling-camp Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winter ML upskilling camp, published by Nathan Barnard on December 2, 2022 on The Effective Altruism Forum.Title: Apply for the ML Winter Camp in Cambridge, UK [2-10 Jan]TL;DR: We are running a UK-based ML upskilling camp from 2-10 January in Cambridge for people with no prior experience in ML who want to work on technical AI safety. Apply here by 11 December.We (Nathan Barnard, Joe Hardie, Quratul Zainab and Hannah Erlebach) will be running a machine learning upskilling camp this January in conjunction with the Cambridge AI Safety Hub. The camp is designed for people with little-to-no ML experience to work through a curriculum based on the first two weeks of MLAB under the guidance of experienced mentors, in order to develop skills which are necessary for conducting many kinds of technical AI safety research.The camp will take place from 2-10 January in Cambridge, UK.Accommodation will be provided at Emmanuel College.There are up to 20 in-person spaces; the camp will take place in the Sidney Street Office in central Cambridge.There is also the option to attend online for those who cannot attend in-person, although participants are strongly encouraged to attend in-person if possible, as we expect it to be substantially harder to make progress if attending online. As such, our bar for accepting virtual participants will be higher. We can cover travel costs if this is a barrier to attending in-person.Apply to be a participantWho we are looking forThe typical participant we are looking for will have:Strong quantitative skills (e.g., a maths/physics/engineering bakground)An intention to work on AI safety research projects which require ML experienceLittle-to-no prior ML experienceThe following are strongly preferred, but not essential:Programming experience (preferably Python)AI safety knowledge equivalent to having at least completed the AGI Safety Fundamentals alignment curriculumThe camp is open to participants from all over the world, but in particular those from the UK and Europe; for those located in the USA or Canada, we recommend (also) applying for the CBAI Winter ML Bootcamp, happening either in Boston or Berkeley (deadline 4 December).If you're unsure if you're a good fit for this camp, we encourage you to err on the side of applying. We recognise that evidence suggests that less privileged individuals tend to underestimate their abilities, and encourage individuals with diverse backgrounds and experiences to apply; we especially encourage applications from women and minorities.How to applyFill out the application form by Sunday 11 December, 23:59 GMT+0.Decisions will be released no later than 16 December; if you require an earlier decision in order to make plans for January, you can specify so in your application.Apply to be a mentorWe are looking for mentors to be present full- or part-time during the camp. Although participants will work through the curriculum in a self-directed manner, we think that learning can be greatly accelerated when there are experts on hand to answer questions and clarify concepts.We expect mentors to beExperienced ML programmersFamiliar with the content of the MLAB curriculum (it’s helpful, but not necessary, if they have participated in MLAB themselves)Knowledgeable about AI safety (although this is less important)Comfortable with teaching (past teaching or tutoring experience can be useful)However, we also acknowledge that being a mentor can be useful for gaining skills and confidence in teaching, and for consolidating the content in one’s own mind; we hope that being a mentor will also be a useful experience for mentors themselves!If needed, we are able to provide accommodation in Cambridge, and can offer compensation for your time at £100 for a half day or £200 for a full day. We understand that m...]]>
Fri, 02 Dec 2022 19:33:41 +0000 EA - Winter ML upskilling camp by Nathan Barnard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winter ML upskilling camp, published by Nathan Barnard on December 2, 2022 on The Effective Altruism Forum.Title: Apply for the ML Winter Camp in Cambridge, UK [2-10 Jan]TL;DR: We are running a UK-based ML upskilling camp from 2-10 January in Cambridge for people with no prior experience in ML who want to work on technical AI safety. Apply here by 11 December.We (Nathan Barnard, Joe Hardie, Quratul Zainab and Hannah Erlebach) will be running a machine learning upskilling camp this January in conjunction with the Cambridge AI Safety Hub. The camp is designed for people with little-to-no ML experience to work through a curriculum based on the first two weeks of MLAB under the guidance of experienced mentors, in order to develop skills which are necessary for conducting many kinds of technical AI safety research.The camp will take place from 2-10 January in Cambridge, UK.Accommodation will be provided at Emmanuel College.There are up to 20 in-person spaces; the camp will take place in the Sidney Street Office in central Cambridge.There is also the option to attend online for those who cannot attend in-person, although participants are strongly encouraged to attend in-person if possible, as we expect it to be substantially harder to make progress if attending online. As such, our bar for accepting virtual participants will be higher. We can cover travel costs if this is a barrier to attending in-person.Apply to be a participantWho we are looking forThe typical participant we are looking for will have:Strong quantitative skills (e.g., a maths/physics/engineering bakground)An intention to work on AI safety research projects which require ML experienceLittle-to-no prior ML experienceThe following are strongly preferred, but not essential:Programming experience (preferably Python)AI safety knowledge equivalent to having at least completed the AGI Safety Fundamentals alignment curriculumThe camp is open to participants from all over the world, but in particular those from the UK and Europe; for those located in the USA or Canada, we recommend (also) applying for the CBAI Winter ML Bootcamp, happening either in Boston or Berkeley (deadline 4 December).If you're unsure if you're a good fit for this camp, we encourage you to err on the side of applying. We recognise that evidence suggests that less privileged individuals tend to underestimate their abilities, and encourage individuals with diverse backgrounds and experiences to apply; we especially encourage applications from women and minorities.How to applyFill out the application form by Sunday 11 December, 23:59 GMT+0.Decisions will be released no later than 16 December; if you require an earlier decision in order to make plans for January, you can specify so in your application.Apply to be a mentorWe are looking for mentors to be present full- or part-time during the camp. Although participants will work through the curriculum in a self-directed manner, we think that learning can be greatly accelerated when there are experts on hand to answer questions and clarify concepts.We expect mentors to beExperienced ML programmersFamiliar with the content of the MLAB curriculum (it’s helpful, but not necessary, if they have participated in MLAB themselves)Knowledgeable about AI safety (although this is less important)Comfortable with teaching (past teaching or tutoring experience can be useful)However, we also acknowledge that being a mentor can be useful for gaining skills and confidence in teaching, and for consolidating the content in one’s own mind; we hope that being a mentor will also be a useful experience for mentors themselves!If needed, we are able to provide accommodation in Cambridge, and can offer compensation for your time at £100 for a half day or £200 for a full day. We understand that m...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winter ML upskilling camp, published by Nathan Barnard on December 2, 2022 on The Effective Altruism Forum.Title: Apply for the ML Winter Camp in Cambridge, UK [2-10 Jan]TL;DR: We are running a UK-based ML upskilling camp from 2-10 January in Cambridge for people with no prior experience in ML who want to work on technical AI safety. Apply here by 11 December.We (Nathan Barnard, Joe Hardie, Quratul Zainab and Hannah Erlebach) will be running a machine learning upskilling camp this January in conjunction with the Cambridge AI Safety Hub. The camp is designed for people with little-to-no ML experience to work through a curriculum based on the first two weeks of MLAB under the guidance of experienced mentors, in order to develop skills which are necessary for conducting many kinds of technical AI safety research.The camp will take place from 2-10 January in Cambridge, UK.Accommodation will be provided at Emmanuel College.There are up to 20 in-person spaces; the camp will take place in the Sidney Street Office in central Cambridge.There is also the option to attend online for those who cannot attend in-person, although participants are strongly encouraged to attend in-person if possible, as we expect it to be substantially harder to make progress if attending online. As such, our bar for accepting virtual participants will be higher. We can cover travel costs if this is a barrier to attending in-person.Apply to be a participantWho we are looking forThe typical participant we are looking for will have:Strong quantitative skills (e.g., a maths/physics/engineering bakground)An intention to work on AI safety research projects which require ML experienceLittle-to-no prior ML experienceThe following are strongly preferred, but not essential:Programming experience (preferably Python)AI safety knowledge equivalent to having at least completed the AGI Safety Fundamentals alignment curriculumThe camp is open to participants from all over the world, but in particular those from the UK and Europe; for those located in the USA or Canada, we recommend (also) applying for the CBAI Winter ML Bootcamp, happening either in Boston or Berkeley (deadline 4 December).If you're unsure if you're a good fit for this camp, we encourage you to err on the side of applying. We recognise that evidence suggests that less privileged individuals tend to underestimate their abilities, and encourage individuals with diverse backgrounds and experiences to apply; we especially encourage applications from women and minorities.How to applyFill out the application form by Sunday 11 December, 23:59 GMT+0.Decisions will be released no later than 16 December; if you require an earlier decision in order to make plans for January, you can specify so in your application.Apply to be a mentorWe are looking for mentors to be present full- or part-time during the camp. Although participants will work through the curriculum in a self-directed manner, we think that learning can be greatly accelerated when there are experts on hand to answer questions and clarify concepts.We expect mentors to beExperienced ML programmersFamiliar with the content of the MLAB curriculum (it’s helpful, but not necessary, if they have participated in MLAB themselves)Knowledgeable about AI safety (although this is less important)Comfortable with teaching (past teaching or tutoring experience can be useful)However, we also acknowledge that being a mentor can be useful for gaining skills and confidence in teaching, and for consolidating the content in one’s own mind; we hope that being a mentor will also be a useful experience for mentors themselves!If needed, we are able to provide accommodation in Cambridge, and can offer compensation for your time at £100 for a half day or £200 for a full day. We understand that m...]]>
Nathan Barnard https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:11 None full 3981
wph9LGoT8dGpzjuZa_EA EA - Announcing FTX Community Response Survey by Conor McGurk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing FTX Community Response Survey, published by Conor McGurk on December 2, 2022 on The Effective Altruism Forum.TL;DR: Let us know how you are feeling about EA post the FTX crisis by filling out the EA survey. If you’ve already responded to the EA survey, you can take the extra questionnaire here.OverviewWe are gathering perspectives on how the FTX crisis has impacted the community’s views of the effective altruism movement, its organizations, and leaders. CEA has worked with Rethink Priorities to append these questions to the EA survey. If you haven’t taken the 2022 EA Survey yet, you can take the EA survey with our FTX specific questions here. With our new FTX questions, our testers estimate the survey will take you <15 minutes.If you have already taken the EA survey, you can answer our FTX specific questions here. If you login to your effectivealtruism.org account, most answers will be saved. By logging in, testers that had previously completed the EA survey finished the extra questionnaire in 5 minutes.Note: the extra FTX survey will close on December 31st, along with the EA survey as a whole.FAQWhy are you running this survey?The survey will provide us a broad but shallow understanding of how EAs around the world are feeling about Effective Altruism post FTX crisis. There are a variety of community efforts to try to understand how the community is feeling, and we feel the survey can serve as a useful complement to these efforts.Why are you combining this with the EA survey?The EA survey asks a number of demographic questions which will also be useful for analysis of the FTX-specific questions. By combining the survey we minimize overhead for those survey respondents while simultaneously increasing response rate and making it easier for us to cross-reference and analyze the data.Will the results of this survey be public?We intend on releasing our analysis of the aggregate survey results publicly, just as we do with the rest of EA Survey results.Who will see the raw data in this survey?In order to conduct the analysis, a few employees of Rethink Priorities and the Center for Effective Altruism will have access to the raw data in the survey.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Conor McGurk https://forum.effectivealtruism.org/posts/wph9LGoT8dGpzjuZa/announcing-ftx-community-response-survey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing FTX Community Response Survey, published by Conor McGurk on December 2, 2022 on The Effective Altruism Forum.TL;DR: Let us know how you are feeling about EA post the FTX crisis by filling out the EA survey. If you’ve already responded to the EA survey, you can take the extra questionnaire here.OverviewWe are gathering perspectives on how the FTX crisis has impacted the community’s views of the effective altruism movement, its organizations, and leaders. CEA has worked with Rethink Priorities to append these questions to the EA survey. If you haven’t taken the 2022 EA Survey yet, you can take the EA survey with our FTX specific questions here. With our new FTX questions, our testers estimate the survey will take you <15 minutes.If you have already taken the EA survey, you can answer our FTX specific questions here. If you login to your effectivealtruism.org account, most answers will be saved. By logging in, testers that had previously completed the EA survey finished the extra questionnaire in 5 minutes.Note: the extra FTX survey will close on December 31st, along with the EA survey as a whole.FAQWhy are you running this survey?The survey will provide us a broad but shallow understanding of how EAs around the world are feeling about Effective Altruism post FTX crisis. There are a variety of community efforts to try to understand how the community is feeling, and we feel the survey can serve as a useful complement to these efforts.Why are you combining this with the EA survey?The EA survey asks a number of demographic questions which will also be useful for analysis of the FTX-specific questions. By combining the survey we minimize overhead for those survey respondents while simultaneously increasing response rate and making it easier for us to cross-reference and analyze the data.Will the results of this survey be public?We intend on releasing our analysis of the aggregate survey results publicly, just as we do with the rest of EA Survey results.Who will see the raw data in this survey?In order to conduct the analysis, a few employees of Rethink Priorities and the Center for Effective Altruism will have access to the raw data in the survey.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 02 Dec 2022 18:05:33 +0000 EA - Announcing FTX Community Response Survey by Conor McGurk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing FTX Community Response Survey, published by Conor McGurk on December 2, 2022 on The Effective Altruism Forum.TL;DR: Let us know how you are feeling about EA post the FTX crisis by filling out the EA survey. If you’ve already responded to the EA survey, you can take the extra questionnaire here.OverviewWe are gathering perspectives on how the FTX crisis has impacted the community’s views of the effective altruism movement, its organizations, and leaders. CEA has worked with Rethink Priorities to append these questions to the EA survey. If you haven’t taken the 2022 EA Survey yet, you can take the EA survey with our FTX specific questions here. With our new FTX questions, our testers estimate the survey will take you <15 minutes.If you have already taken the EA survey, you can answer our FTX specific questions here. If you login to your effectivealtruism.org account, most answers will be saved. By logging in, testers that had previously completed the EA survey finished the extra questionnaire in 5 minutes.Note: the extra FTX survey will close on December 31st, along with the EA survey as a whole.FAQWhy are you running this survey?The survey will provide us a broad but shallow understanding of how EAs around the world are feeling about Effective Altruism post FTX crisis. There are a variety of community efforts to try to understand how the community is feeling, and we feel the survey can serve as a useful complement to these efforts.Why are you combining this with the EA survey?The EA survey asks a number of demographic questions which will also be useful for analysis of the FTX-specific questions. By combining the survey we minimize overhead for those survey respondents while simultaneously increasing response rate and making it easier for us to cross-reference and analyze the data.Will the results of this survey be public?We intend on releasing our analysis of the aggregate survey results publicly, just as we do with the rest of EA Survey results.Who will see the raw data in this survey?In order to conduct the analysis, a few employees of Rethink Priorities and the Center for Effective Altruism will have access to the raw data in the survey.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing FTX Community Response Survey, published by Conor McGurk on December 2, 2022 on The Effective Altruism Forum.TL;DR: Let us know how you are feeling about EA post the FTX crisis by filling out the EA survey. If you’ve already responded to the EA survey, you can take the extra questionnaire here.OverviewWe are gathering perspectives on how the FTX crisis has impacted the community’s views of the effective altruism movement, its organizations, and leaders. CEA has worked with Rethink Priorities to append these questions to the EA survey. If you haven’t taken the 2022 EA Survey yet, you can take the EA survey with our FTX specific questions here. With our new FTX questions, our testers estimate the survey will take you <15 minutes.If you have already taken the EA survey, you can answer our FTX specific questions here. If you login to your effectivealtruism.org account, most answers will be saved. By logging in, testers that had previously completed the EA survey finished the extra questionnaire in 5 minutes.Note: the extra FTX survey will close on December 31st, along with the EA survey as a whole.FAQWhy are you running this survey?The survey will provide us a broad but shallow understanding of how EAs around the world are feeling about Effective Altruism post FTX crisis. There are a variety of community efforts to try to understand how the community is feeling, and we feel the survey can serve as a useful complement to these efforts.Why are you combining this with the EA survey?The EA survey asks a number of demographic questions which will also be useful for analysis of the FTX-specific questions. By combining the survey we minimize overhead for those survey respondents while simultaneously increasing response rate and making it easier for us to cross-reference and analyze the data.Will the results of this survey be public?We intend on releasing our analysis of the aggregate survey results publicly, just as we do with the rest of EA Survey results.Who will see the raw data in this survey?In order to conduct the analysis, a few employees of Rethink Priorities and the Center for Effective Altruism will have access to the raw data in the survey.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Conor McGurk https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:24 None full 3983
CLgXstmDetfPgbPEy_EA EA - Update on Harvard AI Safety Team and MIT AI Alignment by Xander Davies Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on Harvard AI Safety Team and MIT AI Alignment, published by Xander Davies on December 2, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Xander Davies https://forum.effectivealtruism.org/posts/CLgXstmDetfPgbPEy/update-on-harvard-ai-safety-team-and-mit-ai-alignment Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on Harvard AI Safety Team and MIT AI Alignment, published by Xander Davies on December 2, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 02 Dec 2022 14:06:48 +0000 EA - Update on Harvard AI Safety Team and MIT AI Alignment by Xander Davies Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on Harvard AI Safety Team and MIT AI Alignment, published by Xander Davies on December 2, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on Harvard AI Safety Team and MIT AI Alignment, published by Xander Davies on December 2, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Xander Davies https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 3985
23HedigdyP24awmsK_EA EA - How does the collapse of the FTX Future Fund change the picture for individual donors? by vipulnaik Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How does the collapse of the FTX Future Fund change the picture for individual donors?, published by vipulnaik on December 1, 2022 on The Effective Altruism Forum.In a post inOctober, I outlined my thoughts at the time regarding where I'll make my end-of-year donation. In there, I noted that while I think that many cause areas that fall under "longtermism" are likely very important, the significant funding being directed at them from FTX Future Fund and Survival and Flourishing Fund makes these causesnon-neglected. That was one of the reasons I was leaning toward making my end-of-year donation to the Animal Welfare Fund instead of the Long-Term Future Fund.This was before the November 2022 collapse of FTX and of the FTXFuture Fund. I'm wondering how to think about the effect of the collapse of the FTX Future Fund on the funding available for"longtermist" projects in general, and how this should affect individual donors such as myself.In particular, some of the things I've been wondering are:How effectively does the Long-Term Future Fund (the main donation option for individual donors) funge with FTX Future Fund in terms of the projects and organizations funded? How much does the collapse of the FTX Future Fund increase the room for more funding for theLong-Term Future Fund?Are there specific other donees that have become particularly relevant for individual donors in light of the collapse of the FTXFuture Fund? An example (that I don't know much about) is theNonlinear EmergencyFund intended to help FTX grantees, that seemed to indicate that they have room for more funding.Does the continually developing nature of the FTX collapse situation make it more important to hold funds for now and donate them a little later once the ramifications are clearer? Or is it the opposite, namely, that there is a more urgent need for funds and therefore it's more important to donate more now?Are there other related questions and considerations that I'm missing?I don't know how the answers to the above questions will affect my donation decision (it's possible that my donation decision will ultimately be influenced by personal factors specific to me). I hope that any answers or comments generated by this post will be helpful not just to me but to other potential donors wondering how the FTX collapse situation affects them.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
vipulnaik https://forum.effectivealtruism.org/posts/23HedigdyP24awmsK/how-does-the-collapse-of-the-ftx-future-fund-change-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How does the collapse of the FTX Future Fund change the picture for individual donors?, published by vipulnaik on December 1, 2022 on The Effective Altruism Forum.In a post inOctober, I outlined my thoughts at the time regarding where I'll make my end-of-year donation. In there, I noted that while I think that many cause areas that fall under "longtermism" are likely very important, the significant funding being directed at them from FTX Future Fund and Survival and Flourishing Fund makes these causesnon-neglected. That was one of the reasons I was leaning toward making my end-of-year donation to the Animal Welfare Fund instead of the Long-Term Future Fund.This was before the November 2022 collapse of FTX and of the FTXFuture Fund. I'm wondering how to think about the effect of the collapse of the FTX Future Fund on the funding available for"longtermist" projects in general, and how this should affect individual donors such as myself.In particular, some of the things I've been wondering are:How effectively does the Long-Term Future Fund (the main donation option for individual donors) funge with FTX Future Fund in terms of the projects and organizations funded? How much does the collapse of the FTX Future Fund increase the room for more funding for theLong-Term Future Fund?Are there specific other donees that have become particularly relevant for individual donors in light of the collapse of the FTXFuture Fund? An example (that I don't know much about) is theNonlinear EmergencyFund intended to help FTX grantees, that seemed to indicate that they have room for more funding.Does the continually developing nature of the FTX collapse situation make it more important to hold funds for now and donate them a little later once the ramifications are clearer? Or is it the opposite, namely, that there is a more urgent need for funds and therefore it's more important to donate more now?Are there other related questions and considerations that I'm missing?I don't know how the answers to the above questions will affect my donation decision (it's possible that my donation decision will ultimately be influenced by personal factors specific to me). I hope that any answers or comments generated by this post will be helpful not just to me but to other potential donors wondering how the FTX collapse situation affects them.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 02 Dec 2022 08:06:43 +0000 EA - How does the collapse of the FTX Future Fund change the picture for individual donors? by vipulnaik Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How does the collapse of the FTX Future Fund change the picture for individual donors?, published by vipulnaik on December 1, 2022 on The Effective Altruism Forum.In a post inOctober, I outlined my thoughts at the time regarding where I'll make my end-of-year donation. In there, I noted that while I think that many cause areas that fall under "longtermism" are likely very important, the significant funding being directed at them from FTX Future Fund and Survival and Flourishing Fund makes these causesnon-neglected. That was one of the reasons I was leaning toward making my end-of-year donation to the Animal Welfare Fund instead of the Long-Term Future Fund.This was before the November 2022 collapse of FTX and of the FTXFuture Fund. I'm wondering how to think about the effect of the collapse of the FTX Future Fund on the funding available for"longtermist" projects in general, and how this should affect individual donors such as myself.In particular, some of the things I've been wondering are:How effectively does the Long-Term Future Fund (the main donation option for individual donors) funge with FTX Future Fund in terms of the projects and organizations funded? How much does the collapse of the FTX Future Fund increase the room for more funding for theLong-Term Future Fund?Are there specific other donees that have become particularly relevant for individual donors in light of the collapse of the FTXFuture Fund? An example (that I don't know much about) is theNonlinear EmergencyFund intended to help FTX grantees, that seemed to indicate that they have room for more funding.Does the continually developing nature of the FTX collapse situation make it more important to hold funds for now and donate them a little later once the ramifications are clearer? Or is it the opposite, namely, that there is a more urgent need for funds and therefore it's more important to donate more now?Are there other related questions and considerations that I'm missing?I don't know how the answers to the above questions will affect my donation decision (it's possible that my donation decision will ultimately be influenced by personal factors specific to me). I hope that any answers or comments generated by this post will be helpful not just to me but to other potential donors wondering how the FTX collapse situation affects them.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How does the collapse of the FTX Future Fund change the picture for individual donors?, published by vipulnaik on December 1, 2022 on The Effective Altruism Forum.In a post inOctober, I outlined my thoughts at the time regarding where I'll make my end-of-year donation. In there, I noted that while I think that many cause areas that fall under "longtermism" are likely very important, the significant funding being directed at them from FTX Future Fund and Survival and Flourishing Fund makes these causesnon-neglected. That was one of the reasons I was leaning toward making my end-of-year donation to the Animal Welfare Fund instead of the Long-Term Future Fund.This was before the November 2022 collapse of FTX and of the FTXFuture Fund. I'm wondering how to think about the effect of the collapse of the FTX Future Fund on the funding available for"longtermist" projects in general, and how this should affect individual donors such as myself.In particular, some of the things I've been wondering are:How effectively does the Long-Term Future Fund (the main donation option for individual donors) funge with FTX Future Fund in terms of the projects and organizations funded? How much does the collapse of the FTX Future Fund increase the room for more funding for theLong-Term Future Fund?Are there specific other donees that have become particularly relevant for individual donors in light of the collapse of the FTXFuture Fund? An example (that I don't know much about) is theNonlinear EmergencyFund intended to help FTX grantees, that seemed to indicate that they have room for more funding.Does the continually developing nature of the FTX collapse situation make it more important to hold funds for now and donate them a little later once the ramifications are clearer? Or is it the opposite, namely, that there is a more urgent need for funds and therefore it's more important to donate more now?Are there other related questions and considerations that I'm missing?I don't know how the answers to the above questions will affect my donation decision (it's possible that my donation decision will ultimately be influenced by personal factors specific to me). I hope that any answers or comments generated by this post will be helpful not just to me but to other potential donors wondering how the FTX collapse situation affects them.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
vipulnaik https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:25 None full 3972
t5vFLabB2mQz2tgDr_EA EA - I’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared. by Maya D Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared., published by Maya D on December 2, 2022 on The Effective Altruism Forum.Before I get to the heart of what I want to share, a couple of disclaimers:I am one person and, as such, I recognize that my perspective is both informed and limited by my experience and identity. I would like to share my perspective in the hope that it may interact with and broaden yours.In writing this post, my aim is not to be combative nor divisive. The values of Effective Altruism are my values (for the most part - I likely value rationality less than many), and its goals are my goals. I do not, therefore, aim to “take down” or harm the Effective Altruism community. Rather, I hope to challenge us all to think about what it means to be in community.Who am I and why am I writing this?I’m a 22-year-old college senior, set to graduate with a degree in Human and Organizational Development in May. I learned about Effective Altruism last fall, when I transferred to a new university and started attending my school’s EA events and fellowships. I have become increasingly involved in the Effective Altruism community over the past 14 months - participating in intro and in-depth fellowships, taking on a leadership role in my school’s EA club, and attending an EAGx and an EAG.So why am I writing this? Because I am at a point in my life where I have to make a lot of choices: where I want to live; what type of work I want to engage in; the people whom I want to surround myself with. When I found Effective Altruism, it seemed as though I had stumbled across a movement and a community that would provide me with guidance in all three of these areas. However, as my social and intellectual circles became increasingly entangled with EA, I grew hesitant, then skeptical, then downright sad as I observed behavior (both in-person and online) from those involved in the EA community. I’m writing this because I want to be able to feel proud when I tell people that I am involved in Effective Altruism. I want to feel as if I can encourage others to join without an added list of disclaimers about the type of behavior they may encounter.Lastly, I want the Effective Altruism community to revisit and continuously strive towards what the Centre for Effective Altruism calls the core principles of EA: commitment to others, scientific mindset, openness, integrity, and collaborative spirit.Gender and CultureAccording to the EA Survey 2020 (the latest year for which I could find data), the makeup of people involved in EA was very similar to that of 2019: 76% white and 71% male. Lack of diversity within movements, organizations, and “intellectual projects” is incredibly damaging for many important reasons. CEA writes several of these reasons on their Diversity and Inclusion page, but the one I would like to highlight is We don’t want to miss important perspectives. As the website reads, “if the community isn’t able to welcome and encourage members who don’t resemble the existing community, we will consistently miss out on the perspectives of underrepresented groups.” I agree with this statement and that is why I am concerned - I’m unconvinced that this community is effective in “welcoming and encouraging” people who don’t fit the majority white-and-man mold.I question the welcoming-ness of the EA community because, despite fitting the EA mold in many ways - I’m white, American, in my twenties, and will soon graduate from a highly-ranked college - I still often feel as if the EA community is not something I want to be a part of. I can imagine that those with even less of the predominant EA demographic characteristics face exponentially increased barriers to entry. Several (yet not all) instances in which I’ve felt this way.A friend ...]]>
Maya D https://forum.effectivealtruism.org/posts/t5vFLabB2mQz2tgDr/i-m-a-22-year-old-woman-involved-in-effective-altruism-i-m Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared., published by Maya D on December 2, 2022 on The Effective Altruism Forum.Before I get to the heart of what I want to share, a couple of disclaimers:I am one person and, as such, I recognize that my perspective is both informed and limited by my experience and identity. I would like to share my perspective in the hope that it may interact with and broaden yours.In writing this post, my aim is not to be combative nor divisive. The values of Effective Altruism are my values (for the most part - I likely value rationality less than many), and its goals are my goals. I do not, therefore, aim to “take down” or harm the Effective Altruism community. Rather, I hope to challenge us all to think about what it means to be in community.Who am I and why am I writing this?I’m a 22-year-old college senior, set to graduate with a degree in Human and Organizational Development in May. I learned about Effective Altruism last fall, when I transferred to a new university and started attending my school’s EA events and fellowships. I have become increasingly involved in the Effective Altruism community over the past 14 months - participating in intro and in-depth fellowships, taking on a leadership role in my school’s EA club, and attending an EAGx and an EAG.So why am I writing this? Because I am at a point in my life where I have to make a lot of choices: where I want to live; what type of work I want to engage in; the people whom I want to surround myself with. When I found Effective Altruism, it seemed as though I had stumbled across a movement and a community that would provide me with guidance in all three of these areas. However, as my social and intellectual circles became increasingly entangled with EA, I grew hesitant, then skeptical, then downright sad as I observed behavior (both in-person and online) from those involved in the EA community. I’m writing this because I want to be able to feel proud when I tell people that I am involved in Effective Altruism. I want to feel as if I can encourage others to join without an added list of disclaimers about the type of behavior they may encounter.Lastly, I want the Effective Altruism community to revisit and continuously strive towards what the Centre for Effective Altruism calls the core principles of EA: commitment to others, scientific mindset, openness, integrity, and collaborative spirit.Gender and CultureAccording to the EA Survey 2020 (the latest year for which I could find data), the makeup of people involved in EA was very similar to that of 2019: 76% white and 71% male. Lack of diversity within movements, organizations, and “intellectual projects” is incredibly damaging for many important reasons. CEA writes several of these reasons on their Diversity and Inclusion page, but the one I would like to highlight is We don’t want to miss important perspectives. As the website reads, “if the community isn’t able to welcome and encourage members who don’t resemble the existing community, we will consistently miss out on the perspectives of underrepresented groups.” I agree with this statement and that is why I am concerned - I’m unconvinced that this community is effective in “welcoming and encouraging” people who don’t fit the majority white-and-man mold.I question the welcoming-ness of the EA community because, despite fitting the EA mold in many ways - I’m white, American, in my twenties, and will soon graduate from a highly-ranked college - I still often feel as if the EA community is not something I want to be a part of. I can imagine that those with even less of the predominant EA demographic characteristics face exponentially increased barriers to entry. Several (yet not all) instances in which I’ve felt this way.A friend ...]]>
Fri, 02 Dec 2022 05:30:54 +0000 EA - I’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared. by Maya D Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared., published by Maya D on December 2, 2022 on The Effective Altruism Forum.Before I get to the heart of what I want to share, a couple of disclaimers:I am one person and, as such, I recognize that my perspective is both informed and limited by my experience and identity. I would like to share my perspective in the hope that it may interact with and broaden yours.In writing this post, my aim is not to be combative nor divisive. The values of Effective Altruism are my values (for the most part - I likely value rationality less than many), and its goals are my goals. I do not, therefore, aim to “take down” or harm the Effective Altruism community. Rather, I hope to challenge us all to think about what it means to be in community.Who am I and why am I writing this?I’m a 22-year-old college senior, set to graduate with a degree in Human and Organizational Development in May. I learned about Effective Altruism last fall, when I transferred to a new university and started attending my school’s EA events and fellowships. I have become increasingly involved in the Effective Altruism community over the past 14 months - participating in intro and in-depth fellowships, taking on a leadership role in my school’s EA club, and attending an EAGx and an EAG.So why am I writing this? Because I am at a point in my life where I have to make a lot of choices: where I want to live; what type of work I want to engage in; the people whom I want to surround myself with. When I found Effective Altruism, it seemed as though I had stumbled across a movement and a community that would provide me with guidance in all three of these areas. However, as my social and intellectual circles became increasingly entangled with EA, I grew hesitant, then skeptical, then downright sad as I observed behavior (both in-person and online) from those involved in the EA community. I’m writing this because I want to be able to feel proud when I tell people that I am involved in Effective Altruism. I want to feel as if I can encourage others to join without an added list of disclaimers about the type of behavior they may encounter.Lastly, I want the Effective Altruism community to revisit and continuously strive towards what the Centre for Effective Altruism calls the core principles of EA: commitment to others, scientific mindset, openness, integrity, and collaborative spirit.Gender and CultureAccording to the EA Survey 2020 (the latest year for which I could find data), the makeup of people involved in EA was very similar to that of 2019: 76% white and 71% male. Lack of diversity within movements, organizations, and “intellectual projects” is incredibly damaging for many important reasons. CEA writes several of these reasons on their Diversity and Inclusion page, but the one I would like to highlight is We don’t want to miss important perspectives. As the website reads, “if the community isn’t able to welcome and encourage members who don’t resemble the existing community, we will consistently miss out on the perspectives of underrepresented groups.” I agree with this statement and that is why I am concerned - I’m unconvinced that this community is effective in “welcoming and encouraging” people who don’t fit the majority white-and-man mold.I question the welcoming-ness of the EA community because, despite fitting the EA mold in many ways - I’m white, American, in my twenties, and will soon graduate from a highly-ranked college - I still often feel as if the EA community is not something I want to be a part of. I can imagine that those with even less of the predominant EA demographic characteristics face exponentially increased barriers to entry. Several (yet not all) instances in which I’ve felt this way.A friend ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared., published by Maya D on December 2, 2022 on The Effective Altruism Forum.Before I get to the heart of what I want to share, a couple of disclaimers:I am one person and, as such, I recognize that my perspective is both informed and limited by my experience and identity. I would like to share my perspective in the hope that it may interact with and broaden yours.In writing this post, my aim is not to be combative nor divisive. The values of Effective Altruism are my values (for the most part - I likely value rationality less than many), and its goals are my goals. I do not, therefore, aim to “take down” or harm the Effective Altruism community. Rather, I hope to challenge us all to think about what it means to be in community.Who am I and why am I writing this?I’m a 22-year-old college senior, set to graduate with a degree in Human and Organizational Development in May. I learned about Effective Altruism last fall, when I transferred to a new university and started attending my school’s EA events and fellowships. I have become increasingly involved in the Effective Altruism community over the past 14 months - participating in intro and in-depth fellowships, taking on a leadership role in my school’s EA club, and attending an EAGx and an EAG.So why am I writing this? Because I am at a point in my life where I have to make a lot of choices: where I want to live; what type of work I want to engage in; the people whom I want to surround myself with. When I found Effective Altruism, it seemed as though I had stumbled across a movement and a community that would provide me with guidance in all three of these areas. However, as my social and intellectual circles became increasingly entangled with EA, I grew hesitant, then skeptical, then downright sad as I observed behavior (both in-person and online) from those involved in the EA community. I’m writing this because I want to be able to feel proud when I tell people that I am involved in Effective Altruism. I want to feel as if I can encourage others to join without an added list of disclaimers about the type of behavior they may encounter.Lastly, I want the Effective Altruism community to revisit and continuously strive towards what the Centre for Effective Altruism calls the core principles of EA: commitment to others, scientific mindset, openness, integrity, and collaborative spirit.Gender and CultureAccording to the EA Survey 2020 (the latest year for which I could find data), the makeup of people involved in EA was very similar to that of 2019: 76% white and 71% male. Lack of diversity within movements, organizations, and “intellectual projects” is incredibly damaging for many important reasons. CEA writes several of these reasons on their Diversity and Inclusion page, but the one I would like to highlight is We don’t want to miss important perspectives. As the website reads, “if the community isn’t able to welcome and encourage members who don’t resemble the existing community, we will consistently miss out on the perspectives of underrepresented groups.” I agree with this statement and that is why I am concerned - I’m unconvinced that this community is effective in “welcoming and encouraging” people who don’t fit the majority white-and-man mold.I question the welcoming-ness of the EA community because, despite fitting the EA mold in many ways - I’m white, American, in my twenties, and will soon graduate from a highly-ranked college - I still often feel as if the EA community is not something I want to be a part of. I can imagine that those with even less of the predominant EA demographic characteristics face exponentially increased barriers to entry. Several (yet not all) instances in which I’ve felt this way.A friend ...]]>
Maya D https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:13 None full 3965
szzJiWDeYk5KaD2tC_EA EA - FYI: CC-BY license for all new Forum content from today (Dec 1) by Will Bradshaw Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FYI: CC-BY license for all new Forum content from today (Dec 1), published by Will Bradshaw on December 1, 2022 on The Effective Altruism Forum.From today, all new Forum content (including comments and shortform posts) will be published under CC-BY:Therefore, as of December 1, 2022, we are requiring that all content posted to the Forum be available under a CC BY 4.0 license.There's not currently much on the Forum making people aware of this change, especially for minor content like comments and shortforms. Even though I support the change, I think it would be pretty bad if a user accidentally licensed their content under CC-BY due to lack of information, so I'm posting this as a transitional heads-up.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Will Bradshaw https://forum.effectivealtruism.org/posts/szzJiWDeYk5KaD2tC/fyi-cc-by-license-for-all-new-forum-content-from-today-dec-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FYI: CC-BY license for all new Forum content from today (Dec 1), published by Will Bradshaw on December 1, 2022 on The Effective Altruism Forum.From today, all new Forum content (including comments and shortform posts) will be published under CC-BY:Therefore, as of December 1, 2022, we are requiring that all content posted to the Forum be available under a CC BY 4.0 license.There's not currently much on the Forum making people aware of this change, especially for minor content like comments and shortforms. Even though I support the change, I think it would be pretty bad if a user accidentally licensed their content under CC-BY due to lack of information, so I'm posting this as a transitional heads-up.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 02 Dec 2022 03:43:49 +0000 EA - FYI: CC-BY license for all new Forum content from today (Dec 1) by Will Bradshaw Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FYI: CC-BY license for all new Forum content from today (Dec 1), published by Will Bradshaw on December 1, 2022 on The Effective Altruism Forum.From today, all new Forum content (including comments and shortform posts) will be published under CC-BY:Therefore, as of December 1, 2022, we are requiring that all content posted to the Forum be available under a CC BY 4.0 license.There's not currently much on the Forum making people aware of this change, especially for minor content like comments and shortforms. Even though I support the change, I think it would be pretty bad if a user accidentally licensed their content under CC-BY due to lack of information, so I'm posting this as a transitional heads-up.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FYI: CC-BY license for all new Forum content from today (Dec 1), published by Will Bradshaw on December 1, 2022 on The Effective Altruism Forum.From today, all new Forum content (including comments and shortform posts) will be published under CC-BY:Therefore, as of December 1, 2022, we are requiring that all content posted to the Forum be available under a CC BY 4.0 license.There's not currently much on the Forum making people aware of this change, especially for minor content like comments and shortforms. Even though I support the change, I think it would be pretty bad if a user accidentally licensed their content under CC-BY due to lack of information, so I'm posting this as a transitional heads-up.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Will Bradshaw https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:04 None full 3969
xQBcrPsH57MjCcgTb_EA EA - Announcing the Cambridge Boston Alignment Initiative [Hiring!] by kuhanj Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Cambridge Boston Alignment Initiative [Hiring!], published by kuhanj on December 2, 2022 on The Effective Altruism Forum.TLDR: The Cambridge Boston Alignment Initiative (CBAI) is a new organization aimed at supporting and accelerating Cambridge and Boston students interested in pursuing careers in AI safety. We’re excited about our ongoing work, including running a winter ML bootcamp, and are hiring for Cambridge-based roles (rolling applications, priority deadline Dec. 14 to work with us next year).We think that reducing risks from advanced AI systems is one of the most important issues of our time, and that undergraduate and graduate students can quickly start doing valuable work that mitigates these risks.We (Kuhan, Trevor, Xander and Alexandra) formed the Cambridge Boston Alignment Initiative (CBAI) to increase the number of talented researchers working to mitigate risks from AI by supporting Boston-area infrastructure, research and outreach related to AI alignment and governance. Our current programming involves working with groups like the Harvard AI Safety Team (HAIST) and MIT AI Alignment (MAIA), as well as organizing a winter ML bootcamp based on Redwood Research’s MLAB curriculum.We think that the Boston and Cambridge area is a particularly important place to foster a strong community of AI safety-interested students and researchers. The AI alignment community and infrastructure in the Boston/Cambridge area has also grown rapidly in recent months (see updates from HAIST and MAIA for more context), and has many opportunities for improvement: office spaces, advanced programming, research, community events, and internship/job opportunities to name a few.If you’d like to work with us to make this happen, we’re hiring for full-time generalist roles in Boston. Depending on personal fit, this work might take the form of co-director, technical director/program lead, operations director, or operations associate. We will respond to applications submitted by December 14 by the end of the year. For more information, see our website. For questions, email kuhan@cbai.ai.We’ll also be at EAGxBerkeley, and are excited to talk to people there.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
kuhanj https://forum.effectivealtruism.org/posts/xQBcrPsH57MjCcgTb/announcing-the-cambridge-boston-alignment-initiative-hiring Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Cambridge Boston Alignment Initiative [Hiring!], published by kuhanj on December 2, 2022 on The Effective Altruism Forum.TLDR: The Cambridge Boston Alignment Initiative (CBAI) is a new organization aimed at supporting and accelerating Cambridge and Boston students interested in pursuing careers in AI safety. We’re excited about our ongoing work, including running a winter ML bootcamp, and are hiring for Cambridge-based roles (rolling applications, priority deadline Dec. 14 to work with us next year).We think that reducing risks from advanced AI systems is one of the most important issues of our time, and that undergraduate and graduate students can quickly start doing valuable work that mitigates these risks.We (Kuhan, Trevor, Xander and Alexandra) formed the Cambridge Boston Alignment Initiative (CBAI) to increase the number of talented researchers working to mitigate risks from AI by supporting Boston-area infrastructure, research and outreach related to AI alignment and governance. Our current programming involves working with groups like the Harvard AI Safety Team (HAIST) and MIT AI Alignment (MAIA), as well as organizing a winter ML bootcamp based on Redwood Research’s MLAB curriculum.We think that the Boston and Cambridge area is a particularly important place to foster a strong community of AI safety-interested students and researchers. The AI alignment community and infrastructure in the Boston/Cambridge area has also grown rapidly in recent months (see updates from HAIST and MAIA for more context), and has many opportunities for improvement: office spaces, advanced programming, research, community events, and internship/job opportunities to name a few.If you’d like to work with us to make this happen, we’re hiring for full-time generalist roles in Boston. Depending on personal fit, this work might take the form of co-director, technical director/program lead, operations director, or operations associate. We will respond to applications submitted by December 14 by the end of the year. For more information, see our website. For questions, email kuhan@cbai.ai.We’ll also be at EAGxBerkeley, and are excited to talk to people there.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 02 Dec 2022 02:50:05 +0000 EA - Announcing the Cambridge Boston Alignment Initiative [Hiring!] by kuhanj Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Cambridge Boston Alignment Initiative [Hiring!], published by kuhanj on December 2, 2022 on The Effective Altruism Forum.TLDR: The Cambridge Boston Alignment Initiative (CBAI) is a new organization aimed at supporting and accelerating Cambridge and Boston students interested in pursuing careers in AI safety. We’re excited about our ongoing work, including running a winter ML bootcamp, and are hiring for Cambridge-based roles (rolling applications, priority deadline Dec. 14 to work with us next year).We think that reducing risks from advanced AI systems is one of the most important issues of our time, and that undergraduate and graduate students can quickly start doing valuable work that mitigates these risks.We (Kuhan, Trevor, Xander and Alexandra) formed the Cambridge Boston Alignment Initiative (CBAI) to increase the number of talented researchers working to mitigate risks from AI by supporting Boston-area infrastructure, research and outreach related to AI alignment and governance. Our current programming involves working with groups like the Harvard AI Safety Team (HAIST) and MIT AI Alignment (MAIA), as well as organizing a winter ML bootcamp based on Redwood Research’s MLAB curriculum.We think that the Boston and Cambridge area is a particularly important place to foster a strong community of AI safety-interested students and researchers. The AI alignment community and infrastructure in the Boston/Cambridge area has also grown rapidly in recent months (see updates from HAIST and MAIA for more context), and has many opportunities for improvement: office spaces, advanced programming, research, community events, and internship/job opportunities to name a few.If you’d like to work with us to make this happen, we’re hiring for full-time generalist roles in Boston. Depending on personal fit, this work might take the form of co-director, technical director/program lead, operations director, or operations associate. We will respond to applications submitted by December 14 by the end of the year. For more information, see our website. For questions, email kuhan@cbai.ai.We’ll also be at EAGxBerkeley, and are excited to talk to people there.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Cambridge Boston Alignment Initiative [Hiring!], published by kuhanj on December 2, 2022 on The Effective Altruism Forum.TLDR: The Cambridge Boston Alignment Initiative (CBAI) is a new organization aimed at supporting and accelerating Cambridge and Boston students interested in pursuing careers in AI safety. We’re excited about our ongoing work, including running a winter ML bootcamp, and are hiring for Cambridge-based roles (rolling applications, priority deadline Dec. 14 to work with us next year).We think that reducing risks from advanced AI systems is one of the most important issues of our time, and that undergraduate and graduate students can quickly start doing valuable work that mitigates these risks.We (Kuhan, Trevor, Xander and Alexandra) formed the Cambridge Boston Alignment Initiative (CBAI) to increase the number of talented researchers working to mitigate risks from AI by supporting Boston-area infrastructure, research and outreach related to AI alignment and governance. Our current programming involves working with groups like the Harvard AI Safety Team (HAIST) and MIT AI Alignment (MAIA), as well as organizing a winter ML bootcamp based on Redwood Research’s MLAB curriculum.We think that the Boston and Cambridge area is a particularly important place to foster a strong community of AI safety-interested students and researchers. The AI alignment community and infrastructure in the Boston/Cambridge area has also grown rapidly in recent months (see updates from HAIST and MAIA for more context), and has many opportunities for improvement: office spaces, advanced programming, research, community events, and internship/job opportunities to name a few.If you’d like to work with us to make this happen, we’re hiring for full-time generalist roles in Boston. Depending on personal fit, this work might take the form of co-director, technical director/program lead, operations director, or operations associate. We will respond to applications submitted by December 14 by the end of the year. For more information, see our website. For questions, email kuhan@cbai.ai.We’ll also be at EAGxBerkeley, and are excited to talk to people there.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
kuhanj https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:21 None full 3966
qmh9bWAthovqoew8z_EA EA - EA needs more humor by SWK Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA needs more humor, published by SWK on December 1, 2022 on The Effective Altruism Forum.In the wake of the FTX collapse, much ink has been spilled on EA reform by people smarter and more experienced in this space than I am. However, as someone who has been engaging with EA over the past few years and who has become increasingly connected with the community, I have a modest proposal I’d like to share: EA needs more humor.Criticism of EA has roots back in the “earning to give” era. This Stanford Social Innovation Review editorial from 2013 describes EA as “cold and hyper-rationalistic,” deeming the idea of numerically judging charities as “defective altruism.” This piece in Aeon from 2014 essentially argues that the EA utilitarian worldview opposes art, aesthetic beauty, and creativity in general.Criticism of EA has only heightened in recent years with the rise of longtermism. Another Aeon editorial from 2021 characterizes the “apocalypticism” of longtermist thought as “profoundly dangerous” while also lampooning EA organizations like the “grandiosely named Future of Humanity Institute” and “the even more grandiosely named Future of Life Institute.”In the last few months before the FTX situation, criticism was directed at Will MacAskill’s longtermist manifesto, What We Owe the Future. A Wall Street Journal review concludes that “‘What We Owe the Future’ is a preposterous book” and that it is “replete with highfalutin truisms, cockamamie analogies and complex discussions leading nowhere.” A Current Affairs article once again evokes the phrase “defective altruism” and asserts that MacAskill’s book shows how EA as a whole “is self-righteous in the most literal sense.”The above example are, of course, just a small snapshot of the criticism EA has faced. However, I think these examples capture a common theme in EA critiques. Overall, it seems that critics tend to characterize EA as a community of cold, calculating, imperious, pretentious people who take themselves and their ostensible mission to “save humanity” far too seriously.To be honest, a lot of EA criticism seems like it’s coming from cynical, jaded adults who relish in the opportunity to crush young people’s ambitious dreams about changing the world. I also think many critics don’t really understand what EA is about and extrapolate based on a glance at the most radical ideas or make unfair assumptions based on a list of EA’s high-profile Silicon Valley supporters.However, there is a lot of truth to what critics are saying: EA’s aims are incredibly ambitious, its ideas frequently radical, and its organizations often graced with grandiose names. I also agree that the FTX/SBF situation has exposed glaring holes in EA philosophy and shortcomings in the organization of the EA community.However, my personal experience in this community has been that the majority of EAs are not cold, calculating, imperious, pretentious people but warm, intelligent, honest, and altruistic individuals who wholeheartedly want to “do good better.”I think one thing the EA community could do moving forward to improve its external image and internal function is to embrace a bit more humor. EA could stand to acknowledge and make fun of the craziness of comparing the effectiveness of charities as disparate as a deworming campaign and a policy advocacy group, or the absurdity of a outlining a superintelligent extinction event.I say these ideas are absurd not because I don’t believe in them; I have the utmost respect for rigorous charity evaluators like GiveWell and am convinced that AI is indeed the most important problem facing humanity. But I think that acknowledging the external optics of these ideas and, to a degree, joking about how crazy they may seem could make EA less disagreeable for many people on the outside looking in.There ...]]>
SWK https://forum.effectivealtruism.org/posts/qmh9bWAthovqoew8z/ea-needs-more-humor Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA needs more humor, published by SWK on December 1, 2022 on The Effective Altruism Forum.In the wake of the FTX collapse, much ink has been spilled on EA reform by people smarter and more experienced in this space than I am. However, as someone who has been engaging with EA over the past few years and who has become increasingly connected with the community, I have a modest proposal I’d like to share: EA needs more humor.Criticism of EA has roots back in the “earning to give” era. This Stanford Social Innovation Review editorial from 2013 describes EA as “cold and hyper-rationalistic,” deeming the idea of numerically judging charities as “defective altruism.” This piece in Aeon from 2014 essentially argues that the EA utilitarian worldview opposes art, aesthetic beauty, and creativity in general.Criticism of EA has only heightened in recent years with the rise of longtermism. Another Aeon editorial from 2021 characterizes the “apocalypticism” of longtermist thought as “profoundly dangerous” while also lampooning EA organizations like the “grandiosely named Future of Humanity Institute” and “the even more grandiosely named Future of Life Institute.”In the last few months before the FTX situation, criticism was directed at Will MacAskill’s longtermist manifesto, What We Owe the Future. A Wall Street Journal review concludes that “‘What We Owe the Future’ is a preposterous book” and that it is “replete with highfalutin truisms, cockamamie analogies and complex discussions leading nowhere.” A Current Affairs article once again evokes the phrase “defective altruism” and asserts that MacAskill’s book shows how EA as a whole “is self-righteous in the most literal sense.”The above example are, of course, just a small snapshot of the criticism EA has faced. However, I think these examples capture a common theme in EA critiques. Overall, it seems that critics tend to characterize EA as a community of cold, calculating, imperious, pretentious people who take themselves and their ostensible mission to “save humanity” far too seriously.To be honest, a lot of EA criticism seems like it’s coming from cynical, jaded adults who relish in the opportunity to crush young people’s ambitious dreams about changing the world. I also think many critics don’t really understand what EA is about and extrapolate based on a glance at the most radical ideas or make unfair assumptions based on a list of EA’s high-profile Silicon Valley supporters.However, there is a lot of truth to what critics are saying: EA’s aims are incredibly ambitious, its ideas frequently radical, and its organizations often graced with grandiose names. I also agree that the FTX/SBF situation has exposed glaring holes in EA philosophy and shortcomings in the organization of the EA community.However, my personal experience in this community has been that the majority of EAs are not cold, calculating, imperious, pretentious people but warm, intelligent, honest, and altruistic individuals who wholeheartedly want to “do good better.”I think one thing the EA community could do moving forward to improve its external image and internal function is to embrace a bit more humor. EA could stand to acknowledge and make fun of the craziness of comparing the effectiveness of charities as disparate as a deworming campaign and a policy advocacy group, or the absurdity of a outlining a superintelligent extinction event.I say these ideas are absurd not because I don’t believe in them; I have the utmost respect for rigorous charity evaluators like GiveWell and am convinced that AI is indeed the most important problem facing humanity. But I think that acknowledging the external optics of these ideas and, to a degree, joking about how crazy they may seem could make EA less disagreeable for many people on the outside looking in.There ...]]>
Fri, 02 Dec 2022 02:17:45 +0000 EA - EA needs more humor by SWK Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA needs more humor, published by SWK on December 1, 2022 on The Effective Altruism Forum.In the wake of the FTX collapse, much ink has been spilled on EA reform by people smarter and more experienced in this space than I am. However, as someone who has been engaging with EA over the past few years and who has become increasingly connected with the community, I have a modest proposal I’d like to share: EA needs more humor.Criticism of EA has roots back in the “earning to give” era. This Stanford Social Innovation Review editorial from 2013 describes EA as “cold and hyper-rationalistic,” deeming the idea of numerically judging charities as “defective altruism.” This piece in Aeon from 2014 essentially argues that the EA utilitarian worldview opposes art, aesthetic beauty, and creativity in general.Criticism of EA has only heightened in recent years with the rise of longtermism. Another Aeon editorial from 2021 characterizes the “apocalypticism” of longtermist thought as “profoundly dangerous” while also lampooning EA organizations like the “grandiosely named Future of Humanity Institute” and “the even more grandiosely named Future of Life Institute.”In the last few months before the FTX situation, criticism was directed at Will MacAskill’s longtermist manifesto, What We Owe the Future. A Wall Street Journal review concludes that “‘What We Owe the Future’ is a preposterous book” and that it is “replete with highfalutin truisms, cockamamie analogies and complex discussions leading nowhere.” A Current Affairs article once again evokes the phrase “defective altruism” and asserts that MacAskill’s book shows how EA as a whole “is self-righteous in the most literal sense.”The above example are, of course, just a small snapshot of the criticism EA has faced. However, I think these examples capture a common theme in EA critiques. Overall, it seems that critics tend to characterize EA as a community of cold, calculating, imperious, pretentious people who take themselves and their ostensible mission to “save humanity” far too seriously.To be honest, a lot of EA criticism seems like it’s coming from cynical, jaded adults who relish in the opportunity to crush young people’s ambitious dreams about changing the world. I also think many critics don’t really understand what EA is about and extrapolate based on a glance at the most radical ideas or make unfair assumptions based on a list of EA’s high-profile Silicon Valley supporters.However, there is a lot of truth to what critics are saying: EA’s aims are incredibly ambitious, its ideas frequently radical, and its organizations often graced with grandiose names. I also agree that the FTX/SBF situation has exposed glaring holes in EA philosophy and shortcomings in the organization of the EA community.However, my personal experience in this community has been that the majority of EAs are not cold, calculating, imperious, pretentious people but warm, intelligent, honest, and altruistic individuals who wholeheartedly want to “do good better.”I think one thing the EA community could do moving forward to improve its external image and internal function is to embrace a bit more humor. EA could stand to acknowledge and make fun of the craziness of comparing the effectiveness of charities as disparate as a deworming campaign and a policy advocacy group, or the absurdity of a outlining a superintelligent extinction event.I say these ideas are absurd not because I don’t believe in them; I have the utmost respect for rigorous charity evaluators like GiveWell and am convinced that AI is indeed the most important problem facing humanity. But I think that acknowledging the external optics of these ideas and, to a degree, joking about how crazy they may seem could make EA less disagreeable for many people on the outside looking in.There ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA needs more humor, published by SWK on December 1, 2022 on The Effective Altruism Forum.In the wake of the FTX collapse, much ink has been spilled on EA reform by people smarter and more experienced in this space than I am. However, as someone who has been engaging with EA over the past few years and who has become increasingly connected with the community, I have a modest proposal I’d like to share: EA needs more humor.Criticism of EA has roots back in the “earning to give” era. This Stanford Social Innovation Review editorial from 2013 describes EA as “cold and hyper-rationalistic,” deeming the idea of numerically judging charities as “defective altruism.” This piece in Aeon from 2014 essentially argues that the EA utilitarian worldview opposes art, aesthetic beauty, and creativity in general.Criticism of EA has only heightened in recent years with the rise of longtermism. Another Aeon editorial from 2021 characterizes the “apocalypticism” of longtermist thought as “profoundly dangerous” while also lampooning EA organizations like the “grandiosely named Future of Humanity Institute” and “the even more grandiosely named Future of Life Institute.”In the last few months before the FTX situation, criticism was directed at Will MacAskill’s longtermist manifesto, What We Owe the Future. A Wall Street Journal review concludes that “‘What We Owe the Future’ is a preposterous book” and that it is “replete with highfalutin truisms, cockamamie analogies and complex discussions leading nowhere.” A Current Affairs article once again evokes the phrase “defective altruism” and asserts that MacAskill’s book shows how EA as a whole “is self-righteous in the most literal sense.”The above example are, of course, just a small snapshot of the criticism EA has faced. However, I think these examples capture a common theme in EA critiques. Overall, it seems that critics tend to characterize EA as a community of cold, calculating, imperious, pretentious people who take themselves and their ostensible mission to “save humanity” far too seriously.To be honest, a lot of EA criticism seems like it’s coming from cynical, jaded adults who relish in the opportunity to crush young people’s ambitious dreams about changing the world. I also think many critics don’t really understand what EA is about and extrapolate based on a glance at the most radical ideas or make unfair assumptions based on a list of EA’s high-profile Silicon Valley supporters.However, there is a lot of truth to what critics are saying: EA’s aims are incredibly ambitious, its ideas frequently radical, and its organizations often graced with grandiose names. I also agree that the FTX/SBF situation has exposed glaring holes in EA philosophy and shortcomings in the organization of the EA community.However, my personal experience in this community has been that the majority of EAs are not cold, calculating, imperious, pretentious people but warm, intelligent, honest, and altruistic individuals who wholeheartedly want to “do good better.”I think one thing the EA community could do moving forward to improve its external image and internal function is to embrace a bit more humor. EA could stand to acknowledge and make fun of the craziness of comparing the effectiveness of charities as disparate as a deworming campaign and a policy advocacy group, or the absurdity of a outlining a superintelligent extinction event.I say these ideas are absurd not because I don’t believe in them; I have the utmost respect for rigorous charity evaluators like GiveWell and am convinced that AI is indeed the most important problem facing humanity. But I think that acknowledging the external optics of these ideas and, to a degree, joking about how crazy they may seem could make EA less disagreeable for many people on the outside looking in.There ...]]>
SWK https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:35 None full 3971
Lr99AGm4czFK7bsgj_EA EA - A challenge for AGI organizations, and a challenge for readers by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A challenge for AGI organizations, and a challenge for readers, published by RobBensinger on December 1, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
RobBensinger https://forum.effectivealtruism.org/posts/Lr99AGm4czFK7bsgj/a-challenge-for-agi-organizations-and-a-challenge-for Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A challenge for AGI organizations, and a challenge for readers, published by RobBensinger on December 1, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 02 Dec 2022 00:03:46 +0000 EA - A challenge for AGI organizations, and a challenge for readers by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A challenge for AGI organizations, and a challenge for readers, published by RobBensinger on December 1, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A challenge for AGI organizations, and a challenge for readers, published by RobBensinger on December 1, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
RobBensinger https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:29 None full 3967
wypC4nDxsxYcsRvdC_EA EA - Beware frictions from altruistic value differences by Magnus Vinding Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beware frictions from altruistic value differences, published by Magnus Vinding on December 1, 2022 on The Effective Altruism Forum.I believe value differences pose some underappreciated challenges in large-scale altruistic efforts. My aim in this post is to outline what I see as the main such challenges, and to present a few psychological reasons as to why we should expect these challenges to be significant and difficult to overcome.To clarify, my aim in this post is not to make a case against value differences per se, much less a case against vigorous debate over values (I believe that such debate is healthy and desirable). Instead, my aim is to highlight some of the challenges and pitfalls that are associated with value differences, in the hope that we can better mitigate these pitfalls. After all, value differences are sure to persist among people who are trying to help others, and hence a critical issue is how well — or how poorly — we are going to handle these differences.Examples of challenges posed by value differences among altruistsA key challenge posed by value differences, in my view, is that they can make us prone to tribal or otherwise antagonistic dynamics that are suboptimal by the lights of our own moral values. Such values-related frictions may in turn lead to the following pitfalls and failure modes:Failing to achieve moral aims that are already widely shared, such as avoiding worst-case outcomes (cf. “Common ground for longtermists”).Failing to make mutually beneficial moral trades and compromises when possible (in ways that do not introduce problematic behavior such as dishonesty or censorship).Failing to update on arguments, whether they be empirical or values-related, because the arguments are made by those who, to our minds, seem like they belong to the “other side”.Some people committing harmful acts out of spite or primitive tribal instincts. (The sections below give some sense as to why this might happen.)Of course, some of the failure modes listed above can have other causes beyond values- and coalition-related frictions. Yet poorly handled such frictions are probably still a key risk factor for these failure modes.In short, as I see it, the main challenges associated with value differences lie in mitigating the risks that emerge from values-related frictions, such as the risks outlined above.Reasons to expect values-related frictions to be significantThe following are some reasons to expect values-related frictions to be both common and quite difficult to handle by default.Harmful actions based on different moral beliefs may be judged more harshly than intentional harmOne set of findings that seem relevant come from a 2016 anthropological study that examined the moral judgments of people across ten different cultures, eight of which were traditional small-scale societies (Barrett et al., 2016).The study specifically asked people how they would evaluate a harmful act in light of a range of potentially extenuating circumstances, such as different moral beliefs, a mistake of fact, or self-defense. While there was significant variation in people’s moral judgments across cultures, there was nevertheless unanimous agreement that committing a harmful act based on different moral beliefs was not an extenuating circumstance. Indeed, on average across cultures, committing a harmful act based on different moral beliefs was considered worse than was committing the harmful act intentionally (see Barrett et al., 2016, fig. 5).It is unclear whether this pattern in moral judgment necessarily applies to all or even most kinds of acts inspired by different moral beliefs. Yet these results still tentatively suggest that we may be inclined to see value differences as a uniquely aggravating factor in our moral judgments of people’s actions — a...]]>
Magnus Vinding https://forum.effectivealtruism.org/posts/wypC4nDxsxYcsRvdC/beware-frictions-from-altruistic-value-differences Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beware frictions from altruistic value differences, published by Magnus Vinding on December 1, 2022 on The Effective Altruism Forum.I believe value differences pose some underappreciated challenges in large-scale altruistic efforts. My aim in this post is to outline what I see as the main such challenges, and to present a few psychological reasons as to why we should expect these challenges to be significant and difficult to overcome.To clarify, my aim in this post is not to make a case against value differences per se, much less a case against vigorous debate over values (I believe that such debate is healthy and desirable). Instead, my aim is to highlight some of the challenges and pitfalls that are associated with value differences, in the hope that we can better mitigate these pitfalls. After all, value differences are sure to persist among people who are trying to help others, and hence a critical issue is how well — or how poorly — we are going to handle these differences.Examples of challenges posed by value differences among altruistsA key challenge posed by value differences, in my view, is that they can make us prone to tribal or otherwise antagonistic dynamics that are suboptimal by the lights of our own moral values. Such values-related frictions may in turn lead to the following pitfalls and failure modes:Failing to achieve moral aims that are already widely shared, such as avoiding worst-case outcomes (cf. “Common ground for longtermists”).Failing to make mutually beneficial moral trades and compromises when possible (in ways that do not introduce problematic behavior such as dishonesty or censorship).Failing to update on arguments, whether they be empirical or values-related, because the arguments are made by those who, to our minds, seem like they belong to the “other side”.Some people committing harmful acts out of spite or primitive tribal instincts. (The sections below give some sense as to why this might happen.)Of course, some of the failure modes listed above can have other causes beyond values- and coalition-related frictions. Yet poorly handled such frictions are probably still a key risk factor for these failure modes.In short, as I see it, the main challenges associated with value differences lie in mitigating the risks that emerge from values-related frictions, such as the risks outlined above.Reasons to expect values-related frictions to be significantThe following are some reasons to expect values-related frictions to be both common and quite difficult to handle by default.Harmful actions based on different moral beliefs may be judged more harshly than intentional harmOne set of findings that seem relevant come from a 2016 anthropological study that examined the moral judgments of people across ten different cultures, eight of which were traditional small-scale societies (Barrett et al., 2016).The study specifically asked people how they would evaluate a harmful act in light of a range of potentially extenuating circumstances, such as different moral beliefs, a mistake of fact, or self-defense. While there was significant variation in people’s moral judgments across cultures, there was nevertheless unanimous agreement that committing a harmful act based on different moral beliefs was not an extenuating circumstance. Indeed, on average across cultures, committing a harmful act based on different moral beliefs was considered worse than was committing the harmful act intentionally (see Barrett et al., 2016, fig. 5).It is unclear whether this pattern in moral judgment necessarily applies to all or even most kinds of acts inspired by different moral beliefs. Yet these results still tentatively suggest that we may be inclined to see value differences as a uniquely aggravating factor in our moral judgments of people’s actions — a...]]>
Thu, 01 Dec 2022 23:55:17 +0000 EA - Beware frictions from altruistic value differences by Magnus Vinding Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beware frictions from altruistic value differences, published by Magnus Vinding on December 1, 2022 on The Effective Altruism Forum.I believe value differences pose some underappreciated challenges in large-scale altruistic efforts. My aim in this post is to outline what I see as the main such challenges, and to present a few psychological reasons as to why we should expect these challenges to be significant and difficult to overcome.To clarify, my aim in this post is not to make a case against value differences per se, much less a case against vigorous debate over values (I believe that such debate is healthy and desirable). Instead, my aim is to highlight some of the challenges and pitfalls that are associated with value differences, in the hope that we can better mitigate these pitfalls. After all, value differences are sure to persist among people who are trying to help others, and hence a critical issue is how well — or how poorly — we are going to handle these differences.Examples of challenges posed by value differences among altruistsA key challenge posed by value differences, in my view, is that they can make us prone to tribal or otherwise antagonistic dynamics that are suboptimal by the lights of our own moral values. Such values-related frictions may in turn lead to the following pitfalls and failure modes:Failing to achieve moral aims that are already widely shared, such as avoiding worst-case outcomes (cf. “Common ground for longtermists”).Failing to make mutually beneficial moral trades and compromises when possible (in ways that do not introduce problematic behavior such as dishonesty or censorship).Failing to update on arguments, whether they be empirical or values-related, because the arguments are made by those who, to our minds, seem like they belong to the “other side”.Some people committing harmful acts out of spite or primitive tribal instincts. (The sections below give some sense as to why this might happen.)Of course, some of the failure modes listed above can have other causes beyond values- and coalition-related frictions. Yet poorly handled such frictions are probably still a key risk factor for these failure modes.In short, as I see it, the main challenges associated with value differences lie in mitigating the risks that emerge from values-related frictions, such as the risks outlined above.Reasons to expect values-related frictions to be significantThe following are some reasons to expect values-related frictions to be both common and quite difficult to handle by default.Harmful actions based on different moral beliefs may be judged more harshly than intentional harmOne set of findings that seem relevant come from a 2016 anthropological study that examined the moral judgments of people across ten different cultures, eight of which were traditional small-scale societies (Barrett et al., 2016).The study specifically asked people how they would evaluate a harmful act in light of a range of potentially extenuating circumstances, such as different moral beliefs, a mistake of fact, or self-defense. While there was significant variation in people’s moral judgments across cultures, there was nevertheless unanimous agreement that committing a harmful act based on different moral beliefs was not an extenuating circumstance. Indeed, on average across cultures, committing a harmful act based on different moral beliefs was considered worse than was committing the harmful act intentionally (see Barrett et al., 2016, fig. 5).It is unclear whether this pattern in moral judgment necessarily applies to all or even most kinds of acts inspired by different moral beliefs. Yet these results still tentatively suggest that we may be inclined to see value differences as a uniquely aggravating factor in our moral judgments of people’s actions — a...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beware frictions from altruistic value differences, published by Magnus Vinding on December 1, 2022 on The Effective Altruism Forum.I believe value differences pose some underappreciated challenges in large-scale altruistic efforts. My aim in this post is to outline what I see as the main such challenges, and to present a few psychological reasons as to why we should expect these challenges to be significant and difficult to overcome.To clarify, my aim in this post is not to make a case against value differences per se, much less a case against vigorous debate over values (I believe that such debate is healthy and desirable). Instead, my aim is to highlight some of the challenges and pitfalls that are associated with value differences, in the hope that we can better mitigate these pitfalls. After all, value differences are sure to persist among people who are trying to help others, and hence a critical issue is how well — or how poorly — we are going to handle these differences.Examples of challenges posed by value differences among altruistsA key challenge posed by value differences, in my view, is that they can make us prone to tribal or otherwise antagonistic dynamics that are suboptimal by the lights of our own moral values. Such values-related frictions may in turn lead to the following pitfalls and failure modes:Failing to achieve moral aims that are already widely shared, such as avoiding worst-case outcomes (cf. “Common ground for longtermists”).Failing to make mutually beneficial moral trades and compromises when possible (in ways that do not introduce problematic behavior such as dishonesty or censorship).Failing to update on arguments, whether they be empirical or values-related, because the arguments are made by those who, to our minds, seem like they belong to the “other side”.Some people committing harmful acts out of spite or primitive tribal instincts. (The sections below give some sense as to why this might happen.)Of course, some of the failure modes listed above can have other causes beyond values- and coalition-related frictions. Yet poorly handled such frictions are probably still a key risk factor for these failure modes.In short, as I see it, the main challenges associated with value differences lie in mitigating the risks that emerge from values-related frictions, such as the risks outlined above.Reasons to expect values-related frictions to be significantThe following are some reasons to expect values-related frictions to be both common and quite difficult to handle by default.Harmful actions based on different moral beliefs may be judged more harshly than intentional harmOne set of findings that seem relevant come from a 2016 anthropological study that examined the moral judgments of people across ten different cultures, eight of which were traditional small-scale societies (Barrett et al., 2016).The study specifically asked people how they would evaluate a harmful act in light of a range of potentially extenuating circumstances, such as different moral beliefs, a mistake of fact, or self-defense. While there was significant variation in people’s moral judgments across cultures, there was nevertheless unanimous agreement that committing a harmful act based on different moral beliefs was not an extenuating circumstance. Indeed, on average across cultures, committing a harmful act based on different moral beliefs was considered worse than was committing the harmful act intentionally (see Barrett et al., 2016, fig. 5).It is unclear whether this pattern in moral judgment necessarily applies to all or even most kinds of acts inspired by different moral beliefs. Yet these results still tentatively suggest that we may be inclined to see value differences as a uniquely aggravating factor in our moral judgments of people’s actions — a...]]>
Magnus Vinding https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:56 None full 3970
oosCitFzBup2P3etg_EA EA - "Insider EA content" in Gideon Lewis-Kraus's recent New Yorker article by To be stuck inside of Mobile Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Insider EA content" in Gideon Lewis-Kraus's recent New Yorker article, published by To be stuck inside of Mobile on December 1, 2022 on The Effective Altruism Forum.Direct link (to New Yorker website).Alternative link (publicly accessible).This piece from Gideon Lewis-Kraus (the writer for the MacAskill piece) is a recent overview of how EA has reacted to SBF and the FTX collapse.Lewis-Kraus's articles are probably the most in-depth public writing on EA, and he has had wide access to EA members and leadership.The New Yorker is highly respected and the narratives and attitudes in this piece will influence future perceptions of EA.This piece contains inside information about discussions or warnings about SBF. It uses interviews from a "senior EA", and excepts from an internal Slack channel used by senior EAs.When my profile of MacAskill, which discussed internal movement discord about Bankman-Fried’s rise to prominence, appeared in August, Wiblin vented his displeasure on the Slack channel. As he put it, the problem was with the format of such a treatment. He wrote, “They don’t focus on ‘does this person have true and important ideas.’ The writer has no particular expertise to judge such a thing and readers don’t especially care either. Instead the focus is more often on personal quirkiness and charisma, relationships among people in the story, ‘she said / he said’ reporting of disagreements, making the reader feel wise and above the substantive issue, and finding ways the topic can be linked to existing political attitudes of New Yorker readers (so traditional liberal concerns). This is pretty bad because our great virtue is being right, not being likeable or uncontroversial or ‘right-on’ in terms of having fashionable political opinions.”There are claims of a warning about SBF on the Slack channel:This past July, a contributor to the Slack channel wrote to express great apprehension about Sam Bankman-Fried. “Just FYSA,”—or for your situational awareness—“said to me yesterday in DC by somebody in gov’t: ‘Hey I was investigating someone for [x type of crime] and realized they’re on the board of CEA’ ”—MacAskill’s Centre for Effective Altruism—“ ‘or run EA or something? Crazy! I didn’t realize you could be an EA and also commit a lot of crime. Like shouldn’t those be incompatible?’ (about SBF). I don’t usually share this type of thing here, but seemed worth sharing the sentiment since I think it is not very uncommon and may be surprising to some people.” In a second message, the contributor continued, “I think in some circles SBF has a reputation as someone who regularly breaks laws to make money, which is something that many people see as directly antithetical to being altruistic or EA. (and I get why!!).That reputation poses PR concerns to EA whether or not he’s investigated, and whether or not he’s found guilty.” The contributor felt this was a serious enough issue to elaborate a third time: “I guess my point in sharing this is to raise awareness that a) in some circles SBF’s reputation is very bad b) in some circles SBF’s reputation is closely tied to EA, and c) there’s some chance SBF’s reputation gets much, much worse. But I don’t have any data on these (particularly c, I have no idea what types of scenarios are likely), though it seems like a major PR vulnerability. I imagine people working full-time on PR are aware of this and actively working to mitigate it, but it seemed worth passing on if not since many people may not be having these types of interactions.” (Bankman-Fried has not been charged with a crime. The Department of Justice declined to comment.)The suggestion is that EA leadership, while not knowing of any actual crime, accepted poor behavior and norm breaking because of the resources Bankman-Fried provided.In other words, it seems as th...]]>
To be stuck inside of Mobile https://forum.effectivealtruism.org/posts/oosCitFzBup2P3etg/insider-ea-content-in-gideon-lewis-kraus-s-recent-new-yorker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Insider EA content" in Gideon Lewis-Kraus's recent New Yorker article, published by To be stuck inside of Mobile on December 1, 2022 on The Effective Altruism Forum.Direct link (to New Yorker website).Alternative link (publicly accessible).This piece from Gideon Lewis-Kraus (the writer for the MacAskill piece) is a recent overview of how EA has reacted to SBF and the FTX collapse.Lewis-Kraus's articles are probably the most in-depth public writing on EA, and he has had wide access to EA members and leadership.The New Yorker is highly respected and the narratives and attitudes in this piece will influence future perceptions of EA.This piece contains inside information about discussions or warnings about SBF. It uses interviews from a "senior EA", and excepts from an internal Slack channel used by senior EAs.When my profile of MacAskill, which discussed internal movement discord about Bankman-Fried’s rise to prominence, appeared in August, Wiblin vented his displeasure on the Slack channel. As he put it, the problem was with the format of such a treatment. He wrote, “They don’t focus on ‘does this person have true and important ideas.’ The writer has no particular expertise to judge such a thing and readers don’t especially care either. Instead the focus is more often on personal quirkiness and charisma, relationships among people in the story, ‘she said / he said’ reporting of disagreements, making the reader feel wise and above the substantive issue, and finding ways the topic can be linked to existing political attitudes of New Yorker readers (so traditional liberal concerns). This is pretty bad because our great virtue is being right, not being likeable or uncontroversial or ‘right-on’ in terms of having fashionable political opinions.”There are claims of a warning about SBF on the Slack channel:This past July, a contributor to the Slack channel wrote to express great apprehension about Sam Bankman-Fried. “Just FYSA,”—or for your situational awareness—“said to me yesterday in DC by somebody in gov’t: ‘Hey I was investigating someone for [x type of crime] and realized they’re on the board of CEA’ ”—MacAskill’s Centre for Effective Altruism—“ ‘or run EA or something? Crazy! I didn’t realize you could be an EA and also commit a lot of crime. Like shouldn’t those be incompatible?’ (about SBF). I don’t usually share this type of thing here, but seemed worth sharing the sentiment since I think it is not very uncommon and may be surprising to some people.” In a second message, the contributor continued, “I think in some circles SBF has a reputation as someone who regularly breaks laws to make money, which is something that many people see as directly antithetical to being altruistic or EA. (and I get why!!).That reputation poses PR concerns to EA whether or not he’s investigated, and whether or not he’s found guilty.” The contributor felt this was a serious enough issue to elaborate a third time: “I guess my point in sharing this is to raise awareness that a) in some circles SBF’s reputation is very bad b) in some circles SBF’s reputation is closely tied to EA, and c) there’s some chance SBF’s reputation gets much, much worse. But I don’t have any data on these (particularly c, I have no idea what types of scenarios are likely), though it seems like a major PR vulnerability. I imagine people working full-time on PR are aware of this and actively working to mitigate it, but it seemed worth passing on if not since many people may not be having these types of interactions.” (Bankman-Fried has not been charged with a crime. The Department of Justice declined to comment.)The suggestion is that EA leadership, while not knowing of any actual crime, accepted poor behavior and norm breaking because of the resources Bankman-Fried provided.In other words, it seems as th...]]>
Thu, 01 Dec 2022 23:07:41 +0000 EA - "Insider EA content" in Gideon Lewis-Kraus's recent New Yorker article by To be stuck inside of Mobile Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Insider EA content" in Gideon Lewis-Kraus's recent New Yorker article, published by To be stuck inside of Mobile on December 1, 2022 on The Effective Altruism Forum.Direct link (to New Yorker website).Alternative link (publicly accessible).This piece from Gideon Lewis-Kraus (the writer for the MacAskill piece) is a recent overview of how EA has reacted to SBF and the FTX collapse.Lewis-Kraus's articles are probably the most in-depth public writing on EA, and he has had wide access to EA members and leadership.The New Yorker is highly respected and the narratives and attitudes in this piece will influence future perceptions of EA.This piece contains inside information about discussions or warnings about SBF. It uses interviews from a "senior EA", and excepts from an internal Slack channel used by senior EAs.When my profile of MacAskill, which discussed internal movement discord about Bankman-Fried’s rise to prominence, appeared in August, Wiblin vented his displeasure on the Slack channel. As he put it, the problem was with the format of such a treatment. He wrote, “They don’t focus on ‘does this person have true and important ideas.’ The writer has no particular expertise to judge such a thing and readers don’t especially care either. Instead the focus is more often on personal quirkiness and charisma, relationships among people in the story, ‘she said / he said’ reporting of disagreements, making the reader feel wise and above the substantive issue, and finding ways the topic can be linked to existing political attitudes of New Yorker readers (so traditional liberal concerns). This is pretty bad because our great virtue is being right, not being likeable or uncontroversial or ‘right-on’ in terms of having fashionable political opinions.”There are claims of a warning about SBF on the Slack channel:This past July, a contributor to the Slack channel wrote to express great apprehension about Sam Bankman-Fried. “Just FYSA,”—or for your situational awareness—“said to me yesterday in DC by somebody in gov’t: ‘Hey I was investigating someone for [x type of crime] and realized they’re on the board of CEA’ ”—MacAskill’s Centre for Effective Altruism—“ ‘or run EA or something? Crazy! I didn’t realize you could be an EA and also commit a lot of crime. Like shouldn’t those be incompatible?’ (about SBF). I don’t usually share this type of thing here, but seemed worth sharing the sentiment since I think it is not very uncommon and may be surprising to some people.” In a second message, the contributor continued, “I think in some circles SBF has a reputation as someone who regularly breaks laws to make money, which is something that many people see as directly antithetical to being altruistic or EA. (and I get why!!).That reputation poses PR concerns to EA whether or not he’s investigated, and whether or not he’s found guilty.” The contributor felt this was a serious enough issue to elaborate a third time: “I guess my point in sharing this is to raise awareness that a) in some circles SBF’s reputation is very bad b) in some circles SBF’s reputation is closely tied to EA, and c) there’s some chance SBF’s reputation gets much, much worse. But I don’t have any data on these (particularly c, I have no idea what types of scenarios are likely), though it seems like a major PR vulnerability. I imagine people working full-time on PR are aware of this and actively working to mitigate it, but it seemed worth passing on if not since many people may not be having these types of interactions.” (Bankman-Fried has not been charged with a crime. The Department of Justice declined to comment.)The suggestion is that EA leadership, while not knowing of any actual crime, accepted poor behavior and norm breaking because of the resources Bankman-Fried provided.In other words, it seems as th...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Insider EA content" in Gideon Lewis-Kraus's recent New Yorker article, published by To be stuck inside of Mobile on December 1, 2022 on The Effective Altruism Forum.Direct link (to New Yorker website).Alternative link (publicly accessible).This piece from Gideon Lewis-Kraus (the writer for the MacAskill piece) is a recent overview of how EA has reacted to SBF and the FTX collapse.Lewis-Kraus's articles are probably the most in-depth public writing on EA, and he has had wide access to EA members and leadership.The New Yorker is highly respected and the narratives and attitudes in this piece will influence future perceptions of EA.This piece contains inside information about discussions or warnings about SBF. It uses interviews from a "senior EA", and excepts from an internal Slack channel used by senior EAs.When my profile of MacAskill, which discussed internal movement discord about Bankman-Fried’s rise to prominence, appeared in August, Wiblin vented his displeasure on the Slack channel. As he put it, the problem was with the format of such a treatment. He wrote, “They don’t focus on ‘does this person have true and important ideas.’ The writer has no particular expertise to judge such a thing and readers don’t especially care either. Instead the focus is more often on personal quirkiness and charisma, relationships among people in the story, ‘she said / he said’ reporting of disagreements, making the reader feel wise and above the substantive issue, and finding ways the topic can be linked to existing political attitudes of New Yorker readers (so traditional liberal concerns). This is pretty bad because our great virtue is being right, not being likeable or uncontroversial or ‘right-on’ in terms of having fashionable political opinions.”There are claims of a warning about SBF on the Slack channel:This past July, a contributor to the Slack channel wrote to express great apprehension about Sam Bankman-Fried. “Just FYSA,”—or for your situational awareness—“said to me yesterday in DC by somebody in gov’t: ‘Hey I was investigating someone for [x type of crime] and realized they’re on the board of CEA’ ”—MacAskill’s Centre for Effective Altruism—“ ‘or run EA or something? Crazy! I didn’t realize you could be an EA and also commit a lot of crime. Like shouldn’t those be incompatible?’ (about SBF). I don’t usually share this type of thing here, but seemed worth sharing the sentiment since I think it is not very uncommon and may be surprising to some people.” In a second message, the contributor continued, “I think in some circles SBF has a reputation as someone who regularly breaks laws to make money, which is something that many people see as directly antithetical to being altruistic or EA. (and I get why!!).That reputation poses PR concerns to EA whether or not he’s investigated, and whether or not he’s found guilty.” The contributor felt this was a serious enough issue to elaborate a third time: “I guess my point in sharing this is to raise awareness that a) in some circles SBF’s reputation is very bad b) in some circles SBF’s reputation is closely tied to EA, and c) there’s some chance SBF’s reputation gets much, much worse. But I don’t have any data on these (particularly c, I have no idea what types of scenarios are likely), though it seems like a major PR vulnerability. I imagine people working full-time on PR are aware of this and actively working to mitigate it, but it seemed worth passing on if not since many people may not be having these types of interactions.” (Bankman-Fried has not been charged with a crime. The Department of Justice declined to comment.)The suggestion is that EA leadership, while not knowing of any actual crime, accepted poor behavior and norm breaking because of the resources Bankman-Fried provided.In other words, it seems as th...]]>
To be stuck inside of Mobile https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:29 None full 3968
xqfjevoDHvvRjPcPo_EA EA - How should small and medium-sized donors step in to fill gaps left by the collapse of the FTX Future Fund? by JanBrauner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How should small and medium-sized donors step in to fill gaps left by the collapse of the FTX Future Fund?, published by JanBrauner on December 1, 2022 on The Effective Altruism Forum.Scott Alexander writes:"The past year has been a terrible time to be a charitable funder, since FTX ate every opportunity so quickly that everyone else had trouble finding good not-yet-funded projects. But right now is a great time to be a charitable funder: there are lots of really great charities on the verge of collapse who just need a little bit of funding to get them through. I’m trying to coordinate with some of the people involved. I haven’t really succeeded yet, I think because they’re all hiding under their beds gibbering - but probably they’ll have to come out eventually, if only for food and water. If you’re a potential charitable funder interested in helping, and not already connected to this project, please email me at scott@slatestarcodex.com. I don’t want any affected charities to get their hopes up, because I don’t expect this to fill more than a few percent of the hole, but maybe we can make the triage process slightly less of a disaster."I think this argument is probably correct. Potentially we shouldn't fund every organisation that would have been funded by the Future Fund. Potentially, we shouldn't fund things that were started because FTX had so much money, and which are then going to die in a year or two now that the Future Fund isn't around anymore. This might be a time of "right-sizing" projects for available money. However, even with the recent reduction in total assets, many of these projects/organisations should probably be supported. One could simply apply the heuristic of "should this be part of the EA portfolio?" after taking into consideration that EA has less funding now.There are also considerations about community support and mutual insurance; e.g. donors might want to help out individuals who quit their jobs because they expected a grant from the Future Fund, and so on. I'm going to stop here now, as this is not my main point.My main question is this: HOW should small-to-medium sized donors (let's say people who want to donate 3-to-7 figures) actually go about this? In particular, if they don't want to put out a public call for proposals, which will likely end in receiving dozens and dozens of grant requests?One option is to email Scott (see above). Any other ideas?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
JanBrauner https://forum.effectivealtruism.org/posts/xqfjevoDHvvRjPcPo/how-should-small-and-medium-sized-donors-step-in-to-fill Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How should small and medium-sized donors step in to fill gaps left by the collapse of the FTX Future Fund?, published by JanBrauner on December 1, 2022 on The Effective Altruism Forum.Scott Alexander writes:"The past year has been a terrible time to be a charitable funder, since FTX ate every opportunity so quickly that everyone else had trouble finding good not-yet-funded projects. But right now is a great time to be a charitable funder: there are lots of really great charities on the verge of collapse who just need a little bit of funding to get them through. I’m trying to coordinate with some of the people involved. I haven’t really succeeded yet, I think because they’re all hiding under their beds gibbering - but probably they’ll have to come out eventually, if only for food and water. If you’re a potential charitable funder interested in helping, and not already connected to this project, please email me at scott@slatestarcodex.com. I don’t want any affected charities to get their hopes up, because I don’t expect this to fill more than a few percent of the hole, but maybe we can make the triage process slightly less of a disaster."I think this argument is probably correct. Potentially we shouldn't fund every organisation that would have been funded by the Future Fund. Potentially, we shouldn't fund things that were started because FTX had so much money, and which are then going to die in a year or two now that the Future Fund isn't around anymore. This might be a time of "right-sizing" projects for available money. However, even with the recent reduction in total assets, many of these projects/organisations should probably be supported. One could simply apply the heuristic of "should this be part of the EA portfolio?" after taking into consideration that EA has less funding now.There are also considerations about community support and mutual insurance; e.g. donors might want to help out individuals who quit their jobs because they expected a grant from the Future Fund, and so on. I'm going to stop here now, as this is not my main point.My main question is this: HOW should small-to-medium sized donors (let's say people who want to donate 3-to-7 figures) actually go about this? In particular, if they don't want to put out a public call for proposals, which will likely end in receiving dozens and dozens of grant requests?One option is to email Scott (see above). Any other ideas?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 01 Dec 2022 21:38:03 +0000 EA - How should small and medium-sized donors step in to fill gaps left by the collapse of the FTX Future Fund? by JanBrauner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How should small and medium-sized donors step in to fill gaps left by the collapse of the FTX Future Fund?, published by JanBrauner on December 1, 2022 on The Effective Altruism Forum.Scott Alexander writes:"The past year has been a terrible time to be a charitable funder, since FTX ate every opportunity so quickly that everyone else had trouble finding good not-yet-funded projects. But right now is a great time to be a charitable funder: there are lots of really great charities on the verge of collapse who just need a little bit of funding to get them through. I’m trying to coordinate with some of the people involved. I haven’t really succeeded yet, I think because they’re all hiding under their beds gibbering - but probably they’ll have to come out eventually, if only for food and water. If you’re a potential charitable funder interested in helping, and not already connected to this project, please email me at scott@slatestarcodex.com. I don’t want any affected charities to get their hopes up, because I don’t expect this to fill more than a few percent of the hole, but maybe we can make the triage process slightly less of a disaster."I think this argument is probably correct. Potentially we shouldn't fund every organisation that would have been funded by the Future Fund. Potentially, we shouldn't fund things that were started because FTX had so much money, and which are then going to die in a year or two now that the Future Fund isn't around anymore. This might be a time of "right-sizing" projects for available money. However, even with the recent reduction in total assets, many of these projects/organisations should probably be supported. One could simply apply the heuristic of "should this be part of the EA portfolio?" after taking into consideration that EA has less funding now.There are also considerations about community support and mutual insurance; e.g. donors might want to help out individuals who quit their jobs because they expected a grant from the Future Fund, and so on. I'm going to stop here now, as this is not my main point.My main question is this: HOW should small-to-medium sized donors (let's say people who want to donate 3-to-7 figures) actually go about this? In particular, if they don't want to put out a public call for proposals, which will likely end in receiving dozens and dozens of grant requests?One option is to email Scott (see above). Any other ideas?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How should small and medium-sized donors step in to fill gaps left by the collapse of the FTX Future Fund?, published by JanBrauner on December 1, 2022 on The Effective Altruism Forum.Scott Alexander writes:"The past year has been a terrible time to be a charitable funder, since FTX ate every opportunity so quickly that everyone else had trouble finding good not-yet-funded projects. But right now is a great time to be a charitable funder: there are lots of really great charities on the verge of collapse who just need a little bit of funding to get them through. I’m trying to coordinate with some of the people involved. I haven’t really succeeded yet, I think because they’re all hiding under their beds gibbering - but probably they’ll have to come out eventually, if only for food and water. If you’re a potential charitable funder interested in helping, and not already connected to this project, please email me at scott@slatestarcodex.com. I don’t want any affected charities to get their hopes up, because I don’t expect this to fill more than a few percent of the hole, but maybe we can make the triage process slightly less of a disaster."I think this argument is probably correct. Potentially we shouldn't fund every organisation that would have been funded by the Future Fund. Potentially, we shouldn't fund things that were started because FTX had so much money, and which are then going to die in a year or two now that the Future Fund isn't around anymore. This might be a time of "right-sizing" projects for available money. However, even with the recent reduction in total assets, many of these projects/organisations should probably be supported. One could simply apply the heuristic of "should this be part of the EA portfolio?" after taking into consideration that EA has less funding now.There are also considerations about community support and mutual insurance; e.g. donors might want to help out individuals who quit their jobs because they expected a grant from the Future Fund, and so on. I'm going to stop here now, as this is not my main point.My main question is this: HOW should small-to-medium sized donors (let's say people who want to donate 3-to-7 figures) actually go about this? In particular, if they don't want to put out a public call for proposals, which will likely end in receiving dozens and dozens of grant requests?One option is to email Scott (see above). Any other ideas?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
JanBrauner https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:27 None full 3963
bFDwxxfErRStMvuAQ_EA EA - Biological Anchors external review by Jennifer Lin (linkpost) by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Biological Anchors external review by Jennifer Lin (linkpost), published by peterhartree on November 30, 2022 on The Effective Altruism Forum.This report is one of the winners of the EA Criticism and Red Teaming Contest.Summary: This is a summary and critical review of Ajeya Cotra’s biological anchors report on AI timelines. It provides an easy-to-understand overview of the main methodology of Cotra’s report. It then examines and challenges central assumptions of the modelling in Cotra’s report. First, the review looks at reasons why we might not expect 2022 architectures to scale to AGI. Second, it raises the point that we don’t know how to specify a space of algorithmic architectures that contains something that could scale to AGI and can be efficiently searched through (inability to specify this could undermine the ability to take the evolutionary anchors from the report as a bound on timelines).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
peterhartree https://forum.effectivealtruism.org/posts/bFDwxxfErRStMvuAQ/biological-anchors-external-review-by-jennifer-lin-linkpost Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Biological Anchors external review by Jennifer Lin (linkpost), published by peterhartree on November 30, 2022 on The Effective Altruism Forum.This report is one of the winners of the EA Criticism and Red Teaming Contest.Summary: This is a summary and critical review of Ajeya Cotra’s biological anchors report on AI timelines. It provides an easy-to-understand overview of the main methodology of Cotra’s report. It then examines and challenges central assumptions of the modelling in Cotra’s report. First, the review looks at reasons why we might not expect 2022 architectures to scale to AGI. Second, it raises the point that we don’t know how to specify a space of algorithmic architectures that contains something that could scale to AGI and can be efficiently searched through (inability to specify this could undermine the ability to take the evolutionary anchors from the report as a bound on timelines).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 01 Dec 2022 16:10:58 +0000 EA - Biological Anchors external review by Jennifer Lin (linkpost) by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Biological Anchors external review by Jennifer Lin (linkpost), published by peterhartree on November 30, 2022 on The Effective Altruism Forum.This report is one of the winners of the EA Criticism and Red Teaming Contest.Summary: This is a summary and critical review of Ajeya Cotra’s biological anchors report on AI timelines. It provides an easy-to-understand overview of the main methodology of Cotra’s report. It then examines and challenges central assumptions of the modelling in Cotra’s report. First, the review looks at reasons why we might not expect 2022 architectures to scale to AGI. Second, it raises the point that we don’t know how to specify a space of algorithmic architectures that contains something that could scale to AGI and can be efficiently searched through (inability to specify this could undermine the ability to take the evolutionary anchors from the report as a bound on timelines).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Biological Anchors external review by Jennifer Lin (linkpost), published by peterhartree on November 30, 2022 on The Effective Altruism Forum.This report is one of the winners of the EA Criticism and Red Teaming Contest.Summary: This is a summary and critical review of Ajeya Cotra’s biological anchors report on AI timelines. It provides an easy-to-understand overview of the main methodology of Cotra’s report. It then examines and challenges central assumptions of the modelling in Cotra’s report. First, the review looks at reasons why we might not expect 2022 architectures to scale to AGI. Second, it raises the point that we don’t know how to specify a space of algorithmic architectures that contains something that could scale to AGI and can be efficiently searched through (inability to specify this could undermine the ability to take the evolutionary anchors from the report as a bound on timelines).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
peterhartree https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:10 None full 3964
yeMzJATjqxLioGM6K_EA EA - Estimating the marginal impact of outreach by Duncan Mcclements Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Estimating the marginal impact of outreach, published by Duncan Mcclements on November 30, 2022 on The Effective Altruism Forum.This post is an entry for $5k challenge to quantify the impact of 80,000 hours' top career pathsOne of the career paths evaluated by 80000 hours is helping to build the effective altruism movement: this is currently ranked in fifth position, behind AI technical research, AI governance, biorisk and organisation entrepeneurship. This post presents a model for the marginal number of additional individuals an individual devoted to outreach would attract, and finds with very high confidence that on the margin outreach to build effective altruism further is higher impact than working on existential misk mitigation directly.Discount rate estimationThe value of outreach compared to immediately working on existential risk heavily depends on our discount rate. This is because the benefits in terms of basis points of existential risk reduced are entirely in the future for outreach (in the form of additional researchers) while immediately working on research will bring gains sooner. Two factors seem relevant for our discount rate: the probability that humanity ceases to exist before the additional researchers can have an impact and the marginal value of labour over time. The best database we are aware of for total existential risk is here: of these, the only bounded annual (or annual-equivalent assuming constant risk over time) estimates for the risk are 0.19%, 0.21%, 0.11% and 0.2%. These have a geometric mean 0f 0.17%, and a standard deviation of 0.046%, which will be used here.For the latter component, if constant returns to scale and variable exogeneity are assumed, a Cobb-Douglas production function can be taken, with Y as output, however defined, A as labour productivity, L as labour, K as capital and α as the capital elasticity of output:If differentiated with respect to labour, this then yields:Thus:So if capital allocated to EA is growing at a faster rate than labour (β>γ), our discount rate should be negative with respect to time: if labour is growing faster, it should be positive, making the simplifying assumption that EAs are the only group in the world producing moral value . Intuitively, this occurs because capital and labour are varying at some rates exogenously and we wish our level of capital per worker to be as close to constant over time as possible due to diminishing marginal returns to all inputs.Is capital or labour growing faster? This 80000 hours article from 2021 estimated at the time that the total capital allocation was growing by around 37% per year, and labour by 10-20%, which would imply deeply negative discount rates if these figures still held. However, the two largest components of the increase in capital allocated were primarily in the form of FTX and Facebook stock. With the former now worthless, and the latter having declined to less than one third of its value at the time of the writing of the article, the overall capital stock allocated to EA has plunged in value while the labour stock remains almost unaffected. Using the same methodology as the above article, noting that Dustin Moskovitz’s net worth now only stands at $6.5 billion at the time of writing, reduces the increase in funding over the past 7 years to $2.95 billion nominally (from a starting level of ~$10 billion), or a real terms increase of only $0.35 billion, or 2.8%: 0.39% annually.Capital growth will be modelled as lognormal, as a heavier tail distribution feels appropriate due to the possibility of another FTX, with mean eln(0.39) and log standard deviation of 10%, due to the potential for capital growth of 37% absent FTX's collapse and Facebook's decline in stock value. Labour growth is considerably more stable than capital growth, but sti...]]>
Duncan Mcclements https://forum.effectivealtruism.org/posts/yeMzJATjqxLioGM6K/estimating-the-marginal-impact-of-outreach Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Estimating the marginal impact of outreach, published by Duncan Mcclements on November 30, 2022 on The Effective Altruism Forum.This post is an entry for $5k challenge to quantify the impact of 80,000 hours' top career pathsOne of the career paths evaluated by 80000 hours is helping to build the effective altruism movement: this is currently ranked in fifth position, behind AI technical research, AI governance, biorisk and organisation entrepeneurship. This post presents a model for the marginal number of additional individuals an individual devoted to outreach would attract, and finds with very high confidence that on the margin outreach to build effective altruism further is higher impact than working on existential misk mitigation directly.Discount rate estimationThe value of outreach compared to immediately working on existential risk heavily depends on our discount rate. This is because the benefits in terms of basis points of existential risk reduced are entirely in the future for outreach (in the form of additional researchers) while immediately working on research will bring gains sooner. Two factors seem relevant for our discount rate: the probability that humanity ceases to exist before the additional researchers can have an impact and the marginal value of labour over time. The best database we are aware of for total existential risk is here: of these, the only bounded annual (or annual-equivalent assuming constant risk over time) estimates for the risk are 0.19%, 0.21%, 0.11% and 0.2%. These have a geometric mean 0f 0.17%, and a standard deviation of 0.046%, which will be used here.For the latter component, if constant returns to scale and variable exogeneity are assumed, a Cobb-Douglas production function can be taken, with Y as output, however defined, A as labour productivity, L as labour, K as capital and α as the capital elasticity of output:If differentiated with respect to labour, this then yields:Thus:So if capital allocated to EA is growing at a faster rate than labour (β>γ), our discount rate should be negative with respect to time: if labour is growing faster, it should be positive, making the simplifying assumption that EAs are the only group in the world producing moral value . Intuitively, this occurs because capital and labour are varying at some rates exogenously and we wish our level of capital per worker to be as close to constant over time as possible due to diminishing marginal returns to all inputs.Is capital or labour growing faster? This 80000 hours article from 2021 estimated at the time that the total capital allocation was growing by around 37% per year, and labour by 10-20%, which would imply deeply negative discount rates if these figures still held. However, the two largest components of the increase in capital allocated were primarily in the form of FTX and Facebook stock. With the former now worthless, and the latter having declined to less than one third of its value at the time of the writing of the article, the overall capital stock allocated to EA has plunged in value while the labour stock remains almost unaffected. Using the same methodology as the above article, noting that Dustin Moskovitz’s net worth now only stands at $6.5 billion at the time of writing, reduces the increase in funding over the past 7 years to $2.95 billion nominally (from a starting level of ~$10 billion), or a real terms increase of only $0.35 billion, or 2.8%: 0.39% annually.Capital growth will be modelled as lognormal, as a heavier tail distribution feels appropriate due to the possibility of another FTX, with mean eln(0.39) and log standard deviation of 10%, due to the potential for capital growth of 37% absent FTX's collapse and Facebook's decline in stock value. Labour growth is considerably more stable than capital growth, but sti...]]>
Wed, 30 Nov 2022 23:10:05 +0000 EA - Estimating the marginal impact of outreach by Duncan Mcclements Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Estimating the marginal impact of outreach, published by Duncan Mcclements on November 30, 2022 on The Effective Altruism Forum.This post is an entry for $5k challenge to quantify the impact of 80,000 hours' top career pathsOne of the career paths evaluated by 80000 hours is helping to build the effective altruism movement: this is currently ranked in fifth position, behind AI technical research, AI governance, biorisk and organisation entrepeneurship. This post presents a model for the marginal number of additional individuals an individual devoted to outreach would attract, and finds with very high confidence that on the margin outreach to build effective altruism further is higher impact than working on existential misk mitigation directly.Discount rate estimationThe value of outreach compared to immediately working on existential risk heavily depends on our discount rate. This is because the benefits in terms of basis points of existential risk reduced are entirely in the future for outreach (in the form of additional researchers) while immediately working on research will bring gains sooner. Two factors seem relevant for our discount rate: the probability that humanity ceases to exist before the additional researchers can have an impact and the marginal value of labour over time. The best database we are aware of for total existential risk is here: of these, the only bounded annual (or annual-equivalent assuming constant risk over time) estimates for the risk are 0.19%, 0.21%, 0.11% and 0.2%. These have a geometric mean 0f 0.17%, and a standard deviation of 0.046%, which will be used here.For the latter component, if constant returns to scale and variable exogeneity are assumed, a Cobb-Douglas production function can be taken, with Y as output, however defined, A as labour productivity, L as labour, K as capital and α as the capital elasticity of output:If differentiated with respect to labour, this then yields:Thus:So if capital allocated to EA is growing at a faster rate than labour (β>γ), our discount rate should be negative with respect to time: if labour is growing faster, it should be positive, making the simplifying assumption that EAs are the only group in the world producing moral value . Intuitively, this occurs because capital and labour are varying at some rates exogenously and we wish our level of capital per worker to be as close to constant over time as possible due to diminishing marginal returns to all inputs.Is capital or labour growing faster? This 80000 hours article from 2021 estimated at the time that the total capital allocation was growing by around 37% per year, and labour by 10-20%, which would imply deeply negative discount rates if these figures still held. However, the two largest components of the increase in capital allocated were primarily in the form of FTX and Facebook stock. With the former now worthless, and the latter having declined to less than one third of its value at the time of the writing of the article, the overall capital stock allocated to EA has plunged in value while the labour stock remains almost unaffected. Using the same methodology as the above article, noting that Dustin Moskovitz’s net worth now only stands at $6.5 billion at the time of writing, reduces the increase in funding over the past 7 years to $2.95 billion nominally (from a starting level of ~$10 billion), or a real terms increase of only $0.35 billion, or 2.8%: 0.39% annually.Capital growth will be modelled as lognormal, as a heavier tail distribution feels appropriate due to the possibility of another FTX, with mean eln(0.39) and log standard deviation of 10%, due to the potential for capital growth of 37% absent FTX's collapse and Facebook's decline in stock value. Labour growth is considerably more stable than capital growth, but sti...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Estimating the marginal impact of outreach, published by Duncan Mcclements on November 30, 2022 on The Effective Altruism Forum.This post is an entry for $5k challenge to quantify the impact of 80,000 hours' top career pathsOne of the career paths evaluated by 80000 hours is helping to build the effective altruism movement: this is currently ranked in fifth position, behind AI technical research, AI governance, biorisk and organisation entrepeneurship. This post presents a model for the marginal number of additional individuals an individual devoted to outreach would attract, and finds with very high confidence that on the margin outreach to build effective altruism further is higher impact than working on existential misk mitigation directly.Discount rate estimationThe value of outreach compared to immediately working on existential risk heavily depends on our discount rate. This is because the benefits in terms of basis points of existential risk reduced are entirely in the future for outreach (in the form of additional researchers) while immediately working on research will bring gains sooner. Two factors seem relevant for our discount rate: the probability that humanity ceases to exist before the additional researchers can have an impact and the marginal value of labour over time. The best database we are aware of for total existential risk is here: of these, the only bounded annual (or annual-equivalent assuming constant risk over time) estimates for the risk are 0.19%, 0.21%, 0.11% and 0.2%. These have a geometric mean 0f 0.17%, and a standard deviation of 0.046%, which will be used here.For the latter component, if constant returns to scale and variable exogeneity are assumed, a Cobb-Douglas production function can be taken, with Y as output, however defined, A as labour productivity, L as labour, K as capital and α as the capital elasticity of output:If differentiated with respect to labour, this then yields:Thus:So if capital allocated to EA is growing at a faster rate than labour (β>γ), our discount rate should be negative with respect to time: if labour is growing faster, it should be positive, making the simplifying assumption that EAs are the only group in the world producing moral value . Intuitively, this occurs because capital and labour are varying at some rates exogenously and we wish our level of capital per worker to be as close to constant over time as possible due to diminishing marginal returns to all inputs.Is capital or labour growing faster? This 80000 hours article from 2021 estimated at the time that the total capital allocation was growing by around 37% per year, and labour by 10-20%, which would imply deeply negative discount rates if these figures still held. However, the two largest components of the increase in capital allocated were primarily in the form of FTX and Facebook stock. With the former now worthless, and the latter having declined to less than one third of its value at the time of the writing of the article, the overall capital stock allocated to EA has plunged in value while the labour stock remains almost unaffected. Using the same methodology as the above article, noting that Dustin Moskovitz’s net worth now only stands at $6.5 billion at the time of writing, reduces the increase in funding over the past 7 years to $2.95 billion nominally (from a starting level of ~$10 billion), or a real terms increase of only $0.35 billion, or 2.8%: 0.39% annually.Capital growth will be modelled as lognormal, as a heavier tail distribution feels appropriate due to the possibility of another FTX, with mean eln(0.39) and log standard deviation of 10%, due to the potential for capital growth of 37% absent FTX's collapse and Facebook's decline in stock value. Labour growth is considerably more stable than capital growth, but sti...]]>
Duncan Mcclements https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:34 None full 3957
TQsTahJWj2S5QD86t_EA EA - Banding Together to Ban Octopus Farming by Tessa @ ALI Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Banding Together to Ban Octopus Farming, published by Tessa @ ALI on November 30, 2022 on The Effective Altruism Forum.Who?The Aquatic Life Institute (ALI) is an international NGO working to improve the lives of aquatic animals exploited in the food system. Operating from effective altruism principles, ALI seeks to develop, support, and accelerate activities that have positive animal welfare impacts during rearing on farms and during capture in wild fisheries. ALI founded the first global alliance for aquatic animal welfare, the Aquatic Animal Alliance (AAA), now comprised of over 110 member organizations. As new issues arise in this space, we too must steer our efforts in an attempt to curb any additional aquatic animal suffering before it begins.What?The consumer appetite for octopus and squid continues to grow, but wild populations are unstable. The seafood industry is looking to fill the supply gap through new factory farming projects despite numerous environmental, social, public health and animal welfare concerns.ALI is implementing a global campaign that aims to increase public and legislative pressure on countries/regions where octopus farms are being considered to achieve a regulatory ban, and reduce future chances of these farms being created elsewhere. Additionally, we will work with corporations on procurement policies banning the purchase of farmed octopus.Why?The development of octopus farming casts a spotlight on the collection of concerns connected to these intensive practices. Rather than incentivizing the research and development of aquaculture that could be “efficient and cheap enough” to be commercialized, we should direct investment efforts towards innovative, alternative forms of seafood. From sustainable, environmental, and ethical perspectives, octopus farming should not exist.Campaign SummaryWe envision a future in which aquatic animal suffering is dramatically reduced in factory farms. Aquatic animal welfare is a highly neglected and tractable issue. Approximately 500 billion aquatic animals are farmed annually in high-suffering conditions and, to date, there is negligible advocacy aimed at improving welfare conditions for these remarkable beings. We support research to compare potential welfare interventions, and then advocate for the implementation of the most promising initiatives, with the aim of positively impacting aquatic lives for years to come.ALI unites nonprofits, academic institutions, industry stakeholders, and the public with the common goal of reducing aquatic animal suffering. Our internal research team identifies priorities in areas of uncertainty with the goal of creating a framework to compare the relative impact of different interventions. We then advocate for high aquatic animal welfare amongst key decision-makers that influence how aquatic animals are utilized (e.g. by industry), and how their welfare is defined and governed (e.g. by standards, certifications, policies and guidelines).ALI spearheads the Aquatic Animal Alliance (AAA), a historic global coalition that believes aquatic animals should lead lives free of industrial suffering. Founded in 2020, the AAA is modeled after Open Wing Alliance, Climate Justice Alliance, and other powerful coalition groups that have demonstrated we are strongest when we work together. Through this coalition, we work with over 110 animal protection organizations across six continents to collectively address the issues facing trillions of aquatic animals.A ban on cephalopod farming would call on all pillars of ALI’s operation (coalition building, research, and policy change) that could lead to monumental success and trigger institutional/market change.An overarching goal of this project is to launch a global initiative to increase public and legislative pressure on countri...]]>
Tessa @ ALI https://forum.effectivealtruism.org/posts/TQsTahJWj2S5QD86t/banding-together-to-ban-octopus-farming Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Banding Together to Ban Octopus Farming, published by Tessa @ ALI on November 30, 2022 on The Effective Altruism Forum.Who?The Aquatic Life Institute (ALI) is an international NGO working to improve the lives of aquatic animals exploited in the food system. Operating from effective altruism principles, ALI seeks to develop, support, and accelerate activities that have positive animal welfare impacts during rearing on farms and during capture in wild fisheries. ALI founded the first global alliance for aquatic animal welfare, the Aquatic Animal Alliance (AAA), now comprised of over 110 member organizations. As new issues arise in this space, we too must steer our efforts in an attempt to curb any additional aquatic animal suffering before it begins.What?The consumer appetite for octopus and squid continues to grow, but wild populations are unstable. The seafood industry is looking to fill the supply gap through new factory farming projects despite numerous environmental, social, public health and animal welfare concerns.ALI is implementing a global campaign that aims to increase public and legislative pressure on countries/regions where octopus farms are being considered to achieve a regulatory ban, and reduce future chances of these farms being created elsewhere. Additionally, we will work with corporations on procurement policies banning the purchase of farmed octopus.Why?The development of octopus farming casts a spotlight on the collection of concerns connected to these intensive practices. Rather than incentivizing the research and development of aquaculture that could be “efficient and cheap enough” to be commercialized, we should direct investment efforts towards innovative, alternative forms of seafood. From sustainable, environmental, and ethical perspectives, octopus farming should not exist.Campaign SummaryWe envision a future in which aquatic animal suffering is dramatically reduced in factory farms. Aquatic animal welfare is a highly neglected and tractable issue. Approximately 500 billion aquatic animals are farmed annually in high-suffering conditions and, to date, there is negligible advocacy aimed at improving welfare conditions for these remarkable beings. We support research to compare potential welfare interventions, and then advocate for the implementation of the most promising initiatives, with the aim of positively impacting aquatic lives for years to come.ALI unites nonprofits, academic institutions, industry stakeholders, and the public with the common goal of reducing aquatic animal suffering. Our internal research team identifies priorities in areas of uncertainty with the goal of creating a framework to compare the relative impact of different interventions. We then advocate for high aquatic animal welfare amongst key decision-makers that influence how aquatic animals are utilized (e.g. by industry), and how their welfare is defined and governed (e.g. by standards, certifications, policies and guidelines).ALI spearheads the Aquatic Animal Alliance (AAA), a historic global coalition that believes aquatic animals should lead lives free of industrial suffering. Founded in 2020, the AAA is modeled after Open Wing Alliance, Climate Justice Alliance, and other powerful coalition groups that have demonstrated we are strongest when we work together. Through this coalition, we work with over 110 animal protection organizations across six continents to collectively address the issues facing trillions of aquatic animals.A ban on cephalopod farming would call on all pillars of ALI’s operation (coalition building, research, and policy change) that could lead to monumental success and trigger institutional/market change.An overarching goal of this project is to launch a global initiative to increase public and legislative pressure on countri...]]>
Wed, 30 Nov 2022 17:49:08 +0000 EA - Banding Together to Ban Octopus Farming by Tessa @ ALI Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Banding Together to Ban Octopus Farming, published by Tessa @ ALI on November 30, 2022 on The Effective Altruism Forum.Who?The Aquatic Life Institute (ALI) is an international NGO working to improve the lives of aquatic animals exploited in the food system. Operating from effective altruism principles, ALI seeks to develop, support, and accelerate activities that have positive animal welfare impacts during rearing on farms and during capture in wild fisheries. ALI founded the first global alliance for aquatic animal welfare, the Aquatic Animal Alliance (AAA), now comprised of over 110 member organizations. As new issues arise in this space, we too must steer our efforts in an attempt to curb any additional aquatic animal suffering before it begins.What?The consumer appetite for octopus and squid continues to grow, but wild populations are unstable. The seafood industry is looking to fill the supply gap through new factory farming projects despite numerous environmental, social, public health and animal welfare concerns.ALI is implementing a global campaign that aims to increase public and legislative pressure on countries/regions where octopus farms are being considered to achieve a regulatory ban, and reduce future chances of these farms being created elsewhere. Additionally, we will work with corporations on procurement policies banning the purchase of farmed octopus.Why?The development of octopus farming casts a spotlight on the collection of concerns connected to these intensive practices. Rather than incentivizing the research and development of aquaculture that could be “efficient and cheap enough” to be commercialized, we should direct investment efforts towards innovative, alternative forms of seafood. From sustainable, environmental, and ethical perspectives, octopus farming should not exist.Campaign SummaryWe envision a future in which aquatic animal suffering is dramatically reduced in factory farms. Aquatic animal welfare is a highly neglected and tractable issue. Approximately 500 billion aquatic animals are farmed annually in high-suffering conditions and, to date, there is negligible advocacy aimed at improving welfare conditions for these remarkable beings. We support research to compare potential welfare interventions, and then advocate for the implementation of the most promising initiatives, with the aim of positively impacting aquatic lives for years to come.ALI unites nonprofits, academic institutions, industry stakeholders, and the public with the common goal of reducing aquatic animal suffering. Our internal research team identifies priorities in areas of uncertainty with the goal of creating a framework to compare the relative impact of different interventions. We then advocate for high aquatic animal welfare amongst key decision-makers that influence how aquatic animals are utilized (e.g. by industry), and how their welfare is defined and governed (e.g. by standards, certifications, policies and guidelines).ALI spearheads the Aquatic Animal Alliance (AAA), a historic global coalition that believes aquatic animals should lead lives free of industrial suffering. Founded in 2020, the AAA is modeled after Open Wing Alliance, Climate Justice Alliance, and other powerful coalition groups that have demonstrated we are strongest when we work together. Through this coalition, we work with over 110 animal protection organizations across six continents to collectively address the issues facing trillions of aquatic animals.A ban on cephalopod farming would call on all pillars of ALI’s operation (coalition building, research, and policy change) that could lead to monumental success and trigger institutional/market change.An overarching goal of this project is to launch a global initiative to increase public and legislative pressure on countri...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Banding Together to Ban Octopus Farming, published by Tessa @ ALI on November 30, 2022 on The Effective Altruism Forum.Who?The Aquatic Life Institute (ALI) is an international NGO working to improve the lives of aquatic animals exploited in the food system. Operating from effective altruism principles, ALI seeks to develop, support, and accelerate activities that have positive animal welfare impacts during rearing on farms and during capture in wild fisheries. ALI founded the first global alliance for aquatic animal welfare, the Aquatic Animal Alliance (AAA), now comprised of over 110 member organizations. As new issues arise in this space, we too must steer our efforts in an attempt to curb any additional aquatic animal suffering before it begins.What?The consumer appetite for octopus and squid continues to grow, but wild populations are unstable. The seafood industry is looking to fill the supply gap through new factory farming projects despite numerous environmental, social, public health and animal welfare concerns.ALI is implementing a global campaign that aims to increase public and legislative pressure on countries/regions where octopus farms are being considered to achieve a regulatory ban, and reduce future chances of these farms being created elsewhere. Additionally, we will work with corporations on procurement policies banning the purchase of farmed octopus.Why?The development of octopus farming casts a spotlight on the collection of concerns connected to these intensive practices. Rather than incentivizing the research and development of aquaculture that could be “efficient and cheap enough” to be commercialized, we should direct investment efforts towards innovative, alternative forms of seafood. From sustainable, environmental, and ethical perspectives, octopus farming should not exist.Campaign SummaryWe envision a future in which aquatic animal suffering is dramatically reduced in factory farms. Aquatic animal welfare is a highly neglected and tractable issue. Approximately 500 billion aquatic animals are farmed annually in high-suffering conditions and, to date, there is negligible advocacy aimed at improving welfare conditions for these remarkable beings. We support research to compare potential welfare interventions, and then advocate for the implementation of the most promising initiatives, with the aim of positively impacting aquatic lives for years to come.ALI unites nonprofits, academic institutions, industry stakeholders, and the public with the common goal of reducing aquatic animal suffering. Our internal research team identifies priorities in areas of uncertainty with the goal of creating a framework to compare the relative impact of different interventions. We then advocate for high aquatic animal welfare amongst key decision-makers that influence how aquatic animals are utilized (e.g. by industry), and how their welfare is defined and governed (e.g. by standards, certifications, policies and guidelines).ALI spearheads the Aquatic Animal Alliance (AAA), a historic global coalition that believes aquatic animals should lead lives free of industrial suffering. Founded in 2020, the AAA is modeled after Open Wing Alliance, Climate Justice Alliance, and other powerful coalition groups that have demonstrated we are strongest when we work together. Through this coalition, we work with over 110 animal protection organizations across six continents to collectively address the issues facing trillions of aquatic animals.A ban on cephalopod farming would call on all pillars of ALI’s operation (coalition building, research, and policy change) that could lead to monumental success and trigger institutional/market change.An overarching goal of this project is to launch a global initiative to increase public and legislative pressure on countri...]]>
Tessa @ ALI https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:14 None full 3954
CWAHovjrT4L9RStmm_EA EA - Altruistic kidney donation in the UK: my experience by RichArmitage Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Altruistic kidney donation in the UK: my experience, published by RichArmitage on November 30, 2022 on The Effective Altruism Forum.Last week I donated a kidney as an altruistic donor through the UK Living Kidney Sharing Scheme (UKLKSS). This post will cover the landscape of kidney donation in the UK, how kidneys from living donors are shared in the UK, the process of donating through the UKLKSS, and some reflections on my experience.Note: this post discusses details of altruistic kidney donation specifically in the context of the UK.Kidney donation in the UKAt any one time, some 3,500-5,000 patients with end-stage renal disease (ESRD) sit on the UK’s national waiting list in need of a kidney donation. While most of these are adults, 112 children (under 18 years of age) were in need of a replacement kidney in April 2021 (most recent data available). While waiting for a replacement kidney, these individuals must undergo renal dialysis for three to four hours per session, three times every week, which imposes severe limitations on the freedom in their lives and their ability to enjoy a ‘normal’ existence. Around 250 of these people die every year, either because a suitable donor cannot be identified in time, or after they are removed from the waiting list due to a deterioration in their health that renders them no longer able to endure the necessary surgery and immunosuppressive therapies inherent to organ transplantation.Around 3,000 kidney donations take place in the UK each year, of which about 2,000 originate from deceased donors (an opt-out system of organ donation after death came into effect in Wales in 2015, in England in 2020, and in Scotland in 2021, and Northern Ireland will follow suit in 2023), and about 1,000 from living donors. Kidneys constitute by far the most frequently donated solid organ in the UK (67.6% of all solid organ donations in 2019/20), followed by liver lobes (which can also be donated by living donors), and heart, lungs and pancreas (which obviously cannot).How are kidneys from living donors shared in the UK?Kidneys donated from living donors are ‘shared’ across the UK through the UKLKSS, which includes paired/pooled donations (PPD), and altruistic donor chains (ADCs) that are initiated by non-directed altruistic donors (NDADs).A person in need of a kidney (the recipient) may have a specific individual (such as a family member, partner or close friend) who is prepared to donate one to them (the donor), but is unable to do so directly since this donor-recipient pair is incompatible by Human Leucocyte Antigen (HLA) type or ABO blood group. Such incompatible linked donor-recipient pairs are registered in the UKLKSS and ‘matched,’ through quarterly Living Donor Kidney Matching Runs (LDKMR), with other incompatible linked donor-recipient pairs that, in some combination, are together compatible for donation exchanges.In PPDs, a two-way (in paired donation) exchange occurs between two linked pairs (D1-R1 and D2-R2) in which D1 donates to R2, and D2 donates to R1, while a three- or greater-way (in pooled donation) exchange occurs between more than two linked pairs in which (for example) D1 donates to R2, D2 donates to R3, and D3 donates to R1.Individuals not in linked pairs but who wish to donate a kidney without the promise of a linked recipient receiving one in return can do so anonymously as NDADs. Such donors are registered into the UKLKSS and donate to a recipient in the paired/pooled scheme, triggering an ADC consisting of multiple donations (NDAD donates to R1, D1 donates to R2, D2 donates to D3, and so on) that culminates (when no compatible linked pairs remain) in the last donor donating to a recipient on the national waiting list. The first non-directed altruistic kidney donation in the UK took place in 2006 and, since the beg...]]>
RichArmitage https://forum.effectivealtruism.org/posts/CWAHovjrT4L9RStmm/altruistic-kidney-donation-in-the-uk-my-experience Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Altruistic kidney donation in the UK: my experience, published by RichArmitage on November 30, 2022 on The Effective Altruism Forum.Last week I donated a kidney as an altruistic donor through the UK Living Kidney Sharing Scheme (UKLKSS). This post will cover the landscape of kidney donation in the UK, how kidneys from living donors are shared in the UK, the process of donating through the UKLKSS, and some reflections on my experience.Note: this post discusses details of altruistic kidney donation specifically in the context of the UK.Kidney donation in the UKAt any one time, some 3,500-5,000 patients with end-stage renal disease (ESRD) sit on the UK’s national waiting list in need of a kidney donation. While most of these are adults, 112 children (under 18 years of age) were in need of a replacement kidney in April 2021 (most recent data available). While waiting for a replacement kidney, these individuals must undergo renal dialysis for three to four hours per session, three times every week, which imposes severe limitations on the freedom in their lives and their ability to enjoy a ‘normal’ existence. Around 250 of these people die every year, either because a suitable donor cannot be identified in time, or after they are removed from the waiting list due to a deterioration in their health that renders them no longer able to endure the necessary surgery and immunosuppressive therapies inherent to organ transplantation.Around 3,000 kidney donations take place in the UK each year, of which about 2,000 originate from deceased donors (an opt-out system of organ donation after death came into effect in Wales in 2015, in England in 2020, and in Scotland in 2021, and Northern Ireland will follow suit in 2023), and about 1,000 from living donors. Kidneys constitute by far the most frequently donated solid organ in the UK (67.6% of all solid organ donations in 2019/20), followed by liver lobes (which can also be donated by living donors), and heart, lungs and pancreas (which obviously cannot).How are kidneys from living donors shared in the UK?Kidneys donated from living donors are ‘shared’ across the UK through the UKLKSS, which includes paired/pooled donations (PPD), and altruistic donor chains (ADCs) that are initiated by non-directed altruistic donors (NDADs).A person in need of a kidney (the recipient) may have a specific individual (such as a family member, partner or close friend) who is prepared to donate one to them (the donor), but is unable to do so directly since this donor-recipient pair is incompatible by Human Leucocyte Antigen (HLA) type or ABO blood group. Such incompatible linked donor-recipient pairs are registered in the UKLKSS and ‘matched,’ through quarterly Living Donor Kidney Matching Runs (LDKMR), with other incompatible linked donor-recipient pairs that, in some combination, are together compatible for donation exchanges.In PPDs, a two-way (in paired donation) exchange occurs between two linked pairs (D1-R1 and D2-R2) in which D1 donates to R2, and D2 donates to R1, while a three- or greater-way (in pooled donation) exchange occurs between more than two linked pairs in which (for example) D1 donates to R2, D2 donates to R3, and D3 donates to R1.Individuals not in linked pairs but who wish to donate a kidney without the promise of a linked recipient receiving one in return can do so anonymously as NDADs. Such donors are registered into the UKLKSS and donate to a recipient in the paired/pooled scheme, triggering an ADC consisting of multiple donations (NDAD donates to R1, D1 donates to R2, D2 donates to D3, and so on) that culminates (when no compatible linked pairs remain) in the last donor donating to a recipient on the national waiting list. The first non-directed altruistic kidney donation in the UK took place in 2006 and, since the beg...]]>
Wed, 30 Nov 2022 14:09:04 +0000 EA - Altruistic kidney donation in the UK: my experience by RichArmitage Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Altruistic kidney donation in the UK: my experience, published by RichArmitage on November 30, 2022 on The Effective Altruism Forum.Last week I donated a kidney as an altruistic donor through the UK Living Kidney Sharing Scheme (UKLKSS). This post will cover the landscape of kidney donation in the UK, how kidneys from living donors are shared in the UK, the process of donating through the UKLKSS, and some reflections on my experience.Note: this post discusses details of altruistic kidney donation specifically in the context of the UK.Kidney donation in the UKAt any one time, some 3,500-5,000 patients with end-stage renal disease (ESRD) sit on the UK’s national waiting list in need of a kidney donation. While most of these are adults, 112 children (under 18 years of age) were in need of a replacement kidney in April 2021 (most recent data available). While waiting for a replacement kidney, these individuals must undergo renal dialysis for three to four hours per session, three times every week, which imposes severe limitations on the freedom in their lives and their ability to enjoy a ‘normal’ existence. Around 250 of these people die every year, either because a suitable donor cannot be identified in time, or after they are removed from the waiting list due to a deterioration in their health that renders them no longer able to endure the necessary surgery and immunosuppressive therapies inherent to organ transplantation.Around 3,000 kidney donations take place in the UK each year, of which about 2,000 originate from deceased donors (an opt-out system of organ donation after death came into effect in Wales in 2015, in England in 2020, and in Scotland in 2021, and Northern Ireland will follow suit in 2023), and about 1,000 from living donors. Kidneys constitute by far the most frequently donated solid organ in the UK (67.6% of all solid organ donations in 2019/20), followed by liver lobes (which can also be donated by living donors), and heart, lungs and pancreas (which obviously cannot).How are kidneys from living donors shared in the UK?Kidneys donated from living donors are ‘shared’ across the UK through the UKLKSS, which includes paired/pooled donations (PPD), and altruistic donor chains (ADCs) that are initiated by non-directed altruistic donors (NDADs).A person in need of a kidney (the recipient) may have a specific individual (such as a family member, partner or close friend) who is prepared to donate one to them (the donor), but is unable to do so directly since this donor-recipient pair is incompatible by Human Leucocyte Antigen (HLA) type or ABO blood group. Such incompatible linked donor-recipient pairs are registered in the UKLKSS and ‘matched,’ through quarterly Living Donor Kidney Matching Runs (LDKMR), with other incompatible linked donor-recipient pairs that, in some combination, are together compatible for donation exchanges.In PPDs, a two-way (in paired donation) exchange occurs between two linked pairs (D1-R1 and D2-R2) in which D1 donates to R2, and D2 donates to R1, while a three- or greater-way (in pooled donation) exchange occurs between more than two linked pairs in which (for example) D1 donates to R2, D2 donates to R3, and D3 donates to R1.Individuals not in linked pairs but who wish to donate a kidney without the promise of a linked recipient receiving one in return can do so anonymously as NDADs. Such donors are registered into the UKLKSS and donate to a recipient in the paired/pooled scheme, triggering an ADC consisting of multiple donations (NDAD donates to R1, D1 donates to R2, D2 donates to D3, and so on) that culminates (when no compatible linked pairs remain) in the last donor donating to a recipient on the national waiting list. The first non-directed altruistic kidney donation in the UK took place in 2006 and, since the beg...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Altruistic kidney donation in the UK: my experience, published by RichArmitage on November 30, 2022 on The Effective Altruism Forum.Last week I donated a kidney as an altruistic donor through the UK Living Kidney Sharing Scheme (UKLKSS). This post will cover the landscape of kidney donation in the UK, how kidneys from living donors are shared in the UK, the process of donating through the UKLKSS, and some reflections on my experience.Note: this post discusses details of altruistic kidney donation specifically in the context of the UK.Kidney donation in the UKAt any one time, some 3,500-5,000 patients with end-stage renal disease (ESRD) sit on the UK’s national waiting list in need of a kidney donation. While most of these are adults, 112 children (under 18 years of age) were in need of a replacement kidney in April 2021 (most recent data available). While waiting for a replacement kidney, these individuals must undergo renal dialysis for three to four hours per session, three times every week, which imposes severe limitations on the freedom in their lives and their ability to enjoy a ‘normal’ existence. Around 250 of these people die every year, either because a suitable donor cannot be identified in time, or after they are removed from the waiting list due to a deterioration in their health that renders them no longer able to endure the necessary surgery and immunosuppressive therapies inherent to organ transplantation.Around 3,000 kidney donations take place in the UK each year, of which about 2,000 originate from deceased donors (an opt-out system of organ donation after death came into effect in Wales in 2015, in England in 2020, and in Scotland in 2021, and Northern Ireland will follow suit in 2023), and about 1,000 from living donors. Kidneys constitute by far the most frequently donated solid organ in the UK (67.6% of all solid organ donations in 2019/20), followed by liver lobes (which can also be donated by living donors), and heart, lungs and pancreas (which obviously cannot).How are kidneys from living donors shared in the UK?Kidneys donated from living donors are ‘shared’ across the UK through the UKLKSS, which includes paired/pooled donations (PPD), and altruistic donor chains (ADCs) that are initiated by non-directed altruistic donors (NDADs).A person in need of a kidney (the recipient) may have a specific individual (such as a family member, partner or close friend) who is prepared to donate one to them (the donor), but is unable to do so directly since this donor-recipient pair is incompatible by Human Leucocyte Antigen (HLA) type or ABO blood group. Such incompatible linked donor-recipient pairs are registered in the UKLKSS and ‘matched,’ through quarterly Living Donor Kidney Matching Runs (LDKMR), with other incompatible linked donor-recipient pairs that, in some combination, are together compatible for donation exchanges.In PPDs, a two-way (in paired donation) exchange occurs between two linked pairs (D1-R1 and D2-R2) in which D1 donates to R2, and D2 donates to R1, while a three- or greater-way (in pooled donation) exchange occurs between more than two linked pairs in which (for example) D1 donates to R2, D2 donates to R3, and D3 donates to R1.Individuals not in linked pairs but who wish to donate a kidney without the promise of a linked recipient receiving one in return can do so anonymously as NDADs. Such donors are registered into the UKLKSS and donate to a recipient in the paired/pooled scheme, triggering an ADC consisting of multiple donations (NDAD donates to R1, D1 donates to R2, D2 donates to D3, and so on) that culminates (when no compatible linked pairs remain) in the last donor donating to a recipient on the national waiting list. The first non-directed altruistic kidney donation in the UK took place in 2006 and, since the beg...]]>
RichArmitage https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 12:30 None full 3955
c3we8rKppwkRLfzwv_EA EA - The deathprint of replacing beef by chicken and insect meat by Stijn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The deathprint of replacing beef by chicken and insect meat, published by Stijn on November 30, 2022 on The Effective Altruism Forum.Animal-based meat production is a large contributor to climate change. Especially beef has a high a carbon footprint, measured in terms of kilogram CO2-equivalents per kilogram of meat. Switching from beef to chicken meat or insect meat lowers greenhouse gas emissions and hence decreases future climate change damages. But chicken meat has a much higher moral footprint (Saja, 2013) or welfare footprint (welfarefootprint.org) than beef. Chickens experience more intense suffering and more hours of suffering for one kilogram of meat, compared to beef cows.This article shows that the increase in moral footprint when switching from beef to chicken meat or insect meat is likely to be worse than the decrease in carbon footprint. To compare these footprint changes, all the footprints are expressed in terms of the deathprint: the number of humans dying prematurely from climate change and the number of animals killed (slaughtered) in animal farming, for the production of one unit of meat.The deathprint of climate changeA recent study (Bressler, 2021) estimated the net number of humans dying prematurely from temperature changes (especially heat waves) due to climate change, before the year 2100. An extra emission of 4000 ton CO2, emitted today, results in one extra human death due to climate change, in the business as usual scenario where everyone else does not take measures to reduce their emissions. Hence, 0,00000025 humans will be killed this century by emitting one extra kilogram of CO2.I use this number of deaths for the calculations below, although this number is both an underestimation and overestimation of the total human deaths due to climate change. It is an underestimation, because it does not include deaths from e.g. famines, wars, infectious diseases, floods and other risks that are increased by climate change. On the other hand, this number is an overestimation in the sense that climate change adaptation measures and CO2 emission reduction measures are likely to be taken. If poor countries develop and become richer, people in those countries can take more adaptive measures such as installing air conditioning, which lowers the mortality rate from extreme temperatures. And if global CO2 emissions are reduced, the impact of an extra unit of CO2 emissions (i.e. the marginal mortality rate) reduces as well (due to the non-linear relationship between amount of CO2 in the atmosphere and climate damages).[i]The deathprint of meatThe table below shows the amount of meat produced by one animal, and the carbon footprints of meat products. These footprints measure the greenhouse gas emissions, in terms of CO2-equivalents, including all life cycle emissions as well as land use change emissions from e.g. deforestation, for the production of one kilogram of meat. The values of beef, pork and chicken are taken from Pieper e.a. (2020), which applies to Germany. The carbon footprint of insect meat is assumed to be lower than the footprint of chicken meat but slightly higher than the footprint of plant-based protein. The chosen value is a preliminary estimate of cricket meat, taken from Blonk e.a. (2008). It requires roughly 10.000 crickets for the production of one kilogram of cricket protein powder.kg meat per animal consumedanimals killed per kg meatkg CO2 per kg meatbeefporkchicken meatinsect meat3000,003371000,010101,50,667160,000110.0002With these values, we can calculate the human and animal deathprints of meat, i.e. how many humans will die from climate change and how many animals are killed (slaughtered) for the production of one kilogram of meat. This animal deathprint of meat only includes the animals that will be consume...]]>
Stijn https://forum.effectivealtruism.org/posts/c3we8rKppwkRLfzwv/the-deathprint-of-replacing-beef-by-chicken-and-insect-meat Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The deathprint of replacing beef by chicken and insect meat, published by Stijn on November 30, 2022 on The Effective Altruism Forum.Animal-based meat production is a large contributor to climate change. Especially beef has a high a carbon footprint, measured in terms of kilogram CO2-equivalents per kilogram of meat. Switching from beef to chicken meat or insect meat lowers greenhouse gas emissions and hence decreases future climate change damages. But chicken meat has a much higher moral footprint (Saja, 2013) or welfare footprint (welfarefootprint.org) than beef. Chickens experience more intense suffering and more hours of suffering for one kilogram of meat, compared to beef cows.This article shows that the increase in moral footprint when switching from beef to chicken meat or insect meat is likely to be worse than the decrease in carbon footprint. To compare these footprint changes, all the footprints are expressed in terms of the deathprint: the number of humans dying prematurely from climate change and the number of animals killed (slaughtered) in animal farming, for the production of one unit of meat.The deathprint of climate changeA recent study (Bressler, 2021) estimated the net number of humans dying prematurely from temperature changes (especially heat waves) due to climate change, before the year 2100. An extra emission of 4000 ton CO2, emitted today, results in one extra human death due to climate change, in the business as usual scenario where everyone else does not take measures to reduce their emissions. Hence, 0,00000025 humans will be killed this century by emitting one extra kilogram of CO2.I use this number of deaths for the calculations below, although this number is both an underestimation and overestimation of the total human deaths due to climate change. It is an underestimation, because it does not include deaths from e.g. famines, wars, infectious diseases, floods and other risks that are increased by climate change. On the other hand, this number is an overestimation in the sense that climate change adaptation measures and CO2 emission reduction measures are likely to be taken. If poor countries develop and become richer, people in those countries can take more adaptive measures such as installing air conditioning, which lowers the mortality rate from extreme temperatures. And if global CO2 emissions are reduced, the impact of an extra unit of CO2 emissions (i.e. the marginal mortality rate) reduces as well (due to the non-linear relationship between amount of CO2 in the atmosphere and climate damages).[i]The deathprint of meatThe table below shows the amount of meat produced by one animal, and the carbon footprints of meat products. These footprints measure the greenhouse gas emissions, in terms of CO2-equivalents, including all life cycle emissions as well as land use change emissions from e.g. deforestation, for the production of one kilogram of meat. The values of beef, pork and chicken are taken from Pieper e.a. (2020), which applies to Germany. The carbon footprint of insect meat is assumed to be lower than the footprint of chicken meat but slightly higher than the footprint of plant-based protein. The chosen value is a preliminary estimate of cricket meat, taken from Blonk e.a. (2008). It requires roughly 10.000 crickets for the production of one kilogram of cricket protein powder.kg meat per animal consumedanimals killed per kg meatkg CO2 per kg meatbeefporkchicken meatinsect meat3000,003371000,010101,50,667160,000110.0002With these values, we can calculate the human and animal deathprints of meat, i.e. how many humans will die from climate change and how many animals are killed (slaughtered) for the production of one kilogram of meat. This animal deathprint of meat only includes the animals that will be consume...]]>
Wed, 30 Nov 2022 12:59:25 +0000 EA - The deathprint of replacing beef by chicken and insect meat by Stijn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The deathprint of replacing beef by chicken and insect meat, published by Stijn on November 30, 2022 on The Effective Altruism Forum.Animal-based meat production is a large contributor to climate change. Especially beef has a high a carbon footprint, measured in terms of kilogram CO2-equivalents per kilogram of meat. Switching from beef to chicken meat or insect meat lowers greenhouse gas emissions and hence decreases future climate change damages. But chicken meat has a much higher moral footprint (Saja, 2013) or welfare footprint (welfarefootprint.org) than beef. Chickens experience more intense suffering and more hours of suffering for one kilogram of meat, compared to beef cows.This article shows that the increase in moral footprint when switching from beef to chicken meat or insect meat is likely to be worse than the decrease in carbon footprint. To compare these footprint changes, all the footprints are expressed in terms of the deathprint: the number of humans dying prematurely from climate change and the number of animals killed (slaughtered) in animal farming, for the production of one unit of meat.The deathprint of climate changeA recent study (Bressler, 2021) estimated the net number of humans dying prematurely from temperature changes (especially heat waves) due to climate change, before the year 2100. An extra emission of 4000 ton CO2, emitted today, results in one extra human death due to climate change, in the business as usual scenario where everyone else does not take measures to reduce their emissions. Hence, 0,00000025 humans will be killed this century by emitting one extra kilogram of CO2.I use this number of deaths for the calculations below, although this number is both an underestimation and overestimation of the total human deaths due to climate change. It is an underestimation, because it does not include deaths from e.g. famines, wars, infectious diseases, floods and other risks that are increased by climate change. On the other hand, this number is an overestimation in the sense that climate change adaptation measures and CO2 emission reduction measures are likely to be taken. If poor countries develop and become richer, people in those countries can take more adaptive measures such as installing air conditioning, which lowers the mortality rate from extreme temperatures. And if global CO2 emissions are reduced, the impact of an extra unit of CO2 emissions (i.e. the marginal mortality rate) reduces as well (due to the non-linear relationship between amount of CO2 in the atmosphere and climate damages).[i]The deathprint of meatThe table below shows the amount of meat produced by one animal, and the carbon footprints of meat products. These footprints measure the greenhouse gas emissions, in terms of CO2-equivalents, including all life cycle emissions as well as land use change emissions from e.g. deforestation, for the production of one kilogram of meat. The values of beef, pork and chicken are taken from Pieper e.a. (2020), which applies to Germany. The carbon footprint of insect meat is assumed to be lower than the footprint of chicken meat but slightly higher than the footprint of plant-based protein. The chosen value is a preliminary estimate of cricket meat, taken from Blonk e.a. (2008). It requires roughly 10.000 crickets for the production of one kilogram of cricket protein powder.kg meat per animal consumedanimals killed per kg meatkg CO2 per kg meatbeefporkchicken meatinsect meat3000,003371000,010101,50,667160,000110.0002With these values, we can calculate the human and animal deathprints of meat, i.e. how many humans will die from climate change and how many animals are killed (slaughtered) for the production of one kilogram of meat. This animal deathprint of meat only includes the animals that will be consume...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The deathprint of replacing beef by chicken and insect meat, published by Stijn on November 30, 2022 on The Effective Altruism Forum.Animal-based meat production is a large contributor to climate change. Especially beef has a high a carbon footprint, measured in terms of kilogram CO2-equivalents per kilogram of meat. Switching from beef to chicken meat or insect meat lowers greenhouse gas emissions and hence decreases future climate change damages. But chicken meat has a much higher moral footprint (Saja, 2013) or welfare footprint (welfarefootprint.org) than beef. Chickens experience more intense suffering and more hours of suffering for one kilogram of meat, compared to beef cows.This article shows that the increase in moral footprint when switching from beef to chicken meat or insect meat is likely to be worse than the decrease in carbon footprint. To compare these footprint changes, all the footprints are expressed in terms of the deathprint: the number of humans dying prematurely from climate change and the number of animals killed (slaughtered) in animal farming, for the production of one unit of meat.The deathprint of climate changeA recent study (Bressler, 2021) estimated the net number of humans dying prematurely from temperature changes (especially heat waves) due to climate change, before the year 2100. An extra emission of 4000 ton CO2, emitted today, results in one extra human death due to climate change, in the business as usual scenario where everyone else does not take measures to reduce their emissions. Hence, 0,00000025 humans will be killed this century by emitting one extra kilogram of CO2.I use this number of deaths for the calculations below, although this number is both an underestimation and overestimation of the total human deaths due to climate change. It is an underestimation, because it does not include deaths from e.g. famines, wars, infectious diseases, floods and other risks that are increased by climate change. On the other hand, this number is an overestimation in the sense that climate change adaptation measures and CO2 emission reduction measures are likely to be taken. If poor countries develop and become richer, people in those countries can take more adaptive measures such as installing air conditioning, which lowers the mortality rate from extreme temperatures. And if global CO2 emissions are reduced, the impact of an extra unit of CO2 emissions (i.e. the marginal mortality rate) reduces as well (due to the non-linear relationship between amount of CO2 in the atmosphere and climate damages).[i]The deathprint of meatThe table below shows the amount of meat produced by one animal, and the carbon footprints of meat products. These footprints measure the greenhouse gas emissions, in terms of CO2-equivalents, including all life cycle emissions as well as land use change emissions from e.g. deforestation, for the production of one kilogram of meat. The values of beef, pork and chicken are taken from Pieper e.a. (2020), which applies to Germany. The carbon footprint of insect meat is assumed to be lower than the footprint of chicken meat but slightly higher than the footprint of plant-based protein. The chosen value is a preliminary estimate of cricket meat, taken from Blonk e.a. (2008). It requires roughly 10.000 crickets for the production of one kilogram of cricket protein powder.kg meat per animal consumedanimals killed per kg meatkg CO2 per kg meatbeefporkchicken meatinsect meat3000,003371000,010101,50,667160,000110.0002With these values, we can calculate the human and animal deathprints of meat, i.e. how many humans will die from climate change and how many animals are killed (slaughtered) for the production of one kilogram of meat. This animal deathprint of meat only includes the animals that will be consume...]]>
Stijn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 16:01 None full 3956
YFyzHT3H67jrk7mdc_EA EA - James Lovelock (1919 – 2022) by Gavin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: James Lovelock (1919 – 2022), published by Gavin on November 30, 2022 on The Effective Altruism Forum.The real job of science is trying to make science fiction come true.Britain's greatest mad scientist died recently at 103.We'll get to his achievements. But I can't avoid mentioning the 'Gaia hypothesis', his notorious metaphor gone wrong that the Earth is in some sense a single organism whooaa. But him being most famous for this is like thinking Einstein's violin playing was his best stuff.Lovelock was raised as a Quaker, to which he credits his independent thinking (he was a conscientious objector in WWII). Also:"His family was poor, too poor to pay for him to go to university. He later came to regard this as a blessing because it meant he wasn’t immediately locked into a silo of academia. Somehow, he created an education for himself, taking evening classes that led, when he was 21, to the University of Manchester."He quit work and left academia forever in 1964, instead running a one-man "ten foot by ten foot" lab from his garden in the West Country, living off consulting work for NASA, Shell, HP, and MI5 and royalties from 40 inventions.Chromatography, &, modern environmentalismBy far his biggest coup was building the electron capture detector in 1957 during his second PhD, the world's most sensitive gas chromatograph (way of detecting chemicals in air).When Lovelock first developed the ECD, the device was at least a thousand times more sensitive than any other detector in existence at the time. It was able to detect chemicals at concentrations as low as one part per trillion—that’s equivalent to detecting a single drop of ink diluted in 20 Olympic-sized swimming pools.He became curious about what the visible air pollution he saw was due to. He picked the notorious CFCs just because they were conspicuous, becoming the first person to notice the global consequences of Thomas Midgley's almighty fuckup fifty years earlier. (CFCs later turned out to be the cause of the hole in the ozone layer, i.e. millions of skin cancer causes.) He went to Antarctica in person, "partially self-funded" to check if they were there too, because why not. He screwed up the interpretation though, writing in Nature "the presence of these compounds constitutes no conceivable hazard".The ECD revolutionised atmospheric chemistry and so the study of air pollution, still one of the more important causes of premature death.On the lawn of the house peacocks strut and mew; a pair of barn owls have built their nest above the Exponential Dilution Chamber, a sealed upper room that was built in order to calibrate the Electron Capture Device. In the garden stands an off-white baroque plaster statue: the image of Gaia.The device was so sensitive that it showed traces of pesticides in animal tissues all over the world, including DDT. Since that led to Silent Spring, he probably helped along the perverse return of organic farming and the anti-chemicals paranoia of the second half of the C20th.Not that he was ever one of those:Too many greens are not just ignorant of science, they hate science... [Environmentalism is like a] global over-anxious mother figure who is so concerned about small risks that she ignores the real dangers. I wish they would grow up [and focus on the real problem]: How can we feed, house and clothe the abundant human race without destroying the habitats of other creatures?Some time in the next century, when the adverse effects of climate change begin to bite, people will look back in anger at those who now so foolishly continue to pollute by burning fossil fuel instead of accepting the beneficence of nuclear power. Is our distrust of nuclear power and genetically modified food soundly based?Later, he was notable for sounding the retreat (humans should start leaving c...]]>
Gavin https://forum.effectivealtruism.org/posts/YFyzHT3H67jrk7mdc/james-lovelock-1919-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: James Lovelock (1919 – 2022), published by Gavin on November 30, 2022 on The Effective Altruism Forum.The real job of science is trying to make science fiction come true.Britain's greatest mad scientist died recently at 103.We'll get to his achievements. But I can't avoid mentioning the 'Gaia hypothesis', his notorious metaphor gone wrong that the Earth is in some sense a single organism whooaa. But him being most famous for this is like thinking Einstein's violin playing was his best stuff.Lovelock was raised as a Quaker, to which he credits his independent thinking (he was a conscientious objector in WWII). Also:"His family was poor, too poor to pay for him to go to university. He later came to regard this as a blessing because it meant he wasn’t immediately locked into a silo of academia. Somehow, he created an education for himself, taking evening classes that led, when he was 21, to the University of Manchester."He quit work and left academia forever in 1964, instead running a one-man "ten foot by ten foot" lab from his garden in the West Country, living off consulting work for NASA, Shell, HP, and MI5 and royalties from 40 inventions.Chromatography, &, modern environmentalismBy far his biggest coup was building the electron capture detector in 1957 during his second PhD, the world's most sensitive gas chromatograph (way of detecting chemicals in air).When Lovelock first developed the ECD, the device was at least a thousand times more sensitive than any other detector in existence at the time. It was able to detect chemicals at concentrations as low as one part per trillion—that’s equivalent to detecting a single drop of ink diluted in 20 Olympic-sized swimming pools.He became curious about what the visible air pollution he saw was due to. He picked the notorious CFCs just because they were conspicuous, becoming the first person to notice the global consequences of Thomas Midgley's almighty fuckup fifty years earlier. (CFCs later turned out to be the cause of the hole in the ozone layer, i.e. millions of skin cancer causes.) He went to Antarctica in person, "partially self-funded" to check if they were there too, because why not. He screwed up the interpretation though, writing in Nature "the presence of these compounds constitutes no conceivable hazard".The ECD revolutionised atmospheric chemistry and so the study of air pollution, still one of the more important causes of premature death.On the lawn of the house peacocks strut and mew; a pair of barn owls have built their nest above the Exponential Dilution Chamber, a sealed upper room that was built in order to calibrate the Electron Capture Device. In the garden stands an off-white baroque plaster statue: the image of Gaia.The device was so sensitive that it showed traces of pesticides in animal tissues all over the world, including DDT. Since that led to Silent Spring, he probably helped along the perverse return of organic farming and the anti-chemicals paranoia of the second half of the C20th.Not that he was ever one of those:Too many greens are not just ignorant of science, they hate science... [Environmentalism is like a] global over-anxious mother figure who is so concerned about small risks that she ignores the real dangers. I wish they would grow up [and focus on the real problem]: How can we feed, house and clothe the abundant human race without destroying the habitats of other creatures?Some time in the next century, when the adverse effects of climate change begin to bite, people will look back in anger at those who now so foolishly continue to pollute by burning fossil fuel instead of accepting the beneficence of nuclear power. Is our distrust of nuclear power and genetically modified food soundly based?Later, he was notable for sounding the retreat (humans should start leaving c...]]>
Wed, 30 Nov 2022 08:53:17 +0000 EA - James Lovelock (1919 – 2022) by Gavin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: James Lovelock (1919 – 2022), published by Gavin on November 30, 2022 on The Effective Altruism Forum.The real job of science is trying to make science fiction come true.Britain's greatest mad scientist died recently at 103.We'll get to his achievements. But I can't avoid mentioning the 'Gaia hypothesis', his notorious metaphor gone wrong that the Earth is in some sense a single organism whooaa. But him being most famous for this is like thinking Einstein's violin playing was his best stuff.Lovelock was raised as a Quaker, to which he credits his independent thinking (he was a conscientious objector in WWII). Also:"His family was poor, too poor to pay for him to go to university. He later came to regard this as a blessing because it meant he wasn’t immediately locked into a silo of academia. Somehow, he created an education for himself, taking evening classes that led, when he was 21, to the University of Manchester."He quit work and left academia forever in 1964, instead running a one-man "ten foot by ten foot" lab from his garden in the West Country, living off consulting work for NASA, Shell, HP, and MI5 and royalties from 40 inventions.Chromatography, &, modern environmentalismBy far his biggest coup was building the electron capture detector in 1957 during his second PhD, the world's most sensitive gas chromatograph (way of detecting chemicals in air).When Lovelock first developed the ECD, the device was at least a thousand times more sensitive than any other detector in existence at the time. It was able to detect chemicals at concentrations as low as one part per trillion—that’s equivalent to detecting a single drop of ink diluted in 20 Olympic-sized swimming pools.He became curious about what the visible air pollution he saw was due to. He picked the notorious CFCs just because they were conspicuous, becoming the first person to notice the global consequences of Thomas Midgley's almighty fuckup fifty years earlier. (CFCs later turned out to be the cause of the hole in the ozone layer, i.e. millions of skin cancer causes.) He went to Antarctica in person, "partially self-funded" to check if they were there too, because why not. He screwed up the interpretation though, writing in Nature "the presence of these compounds constitutes no conceivable hazard".The ECD revolutionised atmospheric chemistry and so the study of air pollution, still one of the more important causes of premature death.On the lawn of the house peacocks strut and mew; a pair of barn owls have built their nest above the Exponential Dilution Chamber, a sealed upper room that was built in order to calibrate the Electron Capture Device. In the garden stands an off-white baroque plaster statue: the image of Gaia.The device was so sensitive that it showed traces of pesticides in animal tissues all over the world, including DDT. Since that led to Silent Spring, he probably helped along the perverse return of organic farming and the anti-chemicals paranoia of the second half of the C20th.Not that he was ever one of those:Too many greens are not just ignorant of science, they hate science... [Environmentalism is like a] global over-anxious mother figure who is so concerned about small risks that she ignores the real dangers. I wish they would grow up [and focus on the real problem]: How can we feed, house and clothe the abundant human race without destroying the habitats of other creatures?Some time in the next century, when the adverse effects of climate change begin to bite, people will look back in anger at those who now so foolishly continue to pollute by burning fossil fuel instead of accepting the beneficence of nuclear power. Is our distrust of nuclear power and genetically modified food soundly based?Later, he was notable for sounding the retreat (humans should start leaving c...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: James Lovelock (1919 – 2022), published by Gavin on November 30, 2022 on The Effective Altruism Forum.The real job of science is trying to make science fiction come true.Britain's greatest mad scientist died recently at 103.We'll get to his achievements. But I can't avoid mentioning the 'Gaia hypothesis', his notorious metaphor gone wrong that the Earth is in some sense a single organism whooaa. But him being most famous for this is like thinking Einstein's violin playing was his best stuff.Lovelock was raised as a Quaker, to which he credits his independent thinking (he was a conscientious objector in WWII). Also:"His family was poor, too poor to pay for him to go to university. He later came to regard this as a blessing because it meant he wasn’t immediately locked into a silo of academia. Somehow, he created an education for himself, taking evening classes that led, when he was 21, to the University of Manchester."He quit work and left academia forever in 1964, instead running a one-man "ten foot by ten foot" lab from his garden in the West Country, living off consulting work for NASA, Shell, HP, and MI5 and royalties from 40 inventions.Chromatography, &, modern environmentalismBy far his biggest coup was building the electron capture detector in 1957 during his second PhD, the world's most sensitive gas chromatograph (way of detecting chemicals in air).When Lovelock first developed the ECD, the device was at least a thousand times more sensitive than any other detector in existence at the time. It was able to detect chemicals at concentrations as low as one part per trillion—that’s equivalent to detecting a single drop of ink diluted in 20 Olympic-sized swimming pools.He became curious about what the visible air pollution he saw was due to. He picked the notorious CFCs just because they were conspicuous, becoming the first person to notice the global consequences of Thomas Midgley's almighty fuckup fifty years earlier. (CFCs later turned out to be the cause of the hole in the ozone layer, i.e. millions of skin cancer causes.) He went to Antarctica in person, "partially self-funded" to check if they were there too, because why not. He screwed up the interpretation though, writing in Nature "the presence of these compounds constitutes no conceivable hazard".The ECD revolutionised atmospheric chemistry and so the study of air pollution, still one of the more important causes of premature death.On the lawn of the house peacocks strut and mew; a pair of barn owls have built their nest above the Exponential Dilution Chamber, a sealed upper room that was built in order to calibrate the Electron Capture Device. In the garden stands an off-white baroque plaster statue: the image of Gaia.The device was so sensitive that it showed traces of pesticides in animal tissues all over the world, including DDT. Since that led to Silent Spring, he probably helped along the perverse return of organic farming and the anti-chemicals paranoia of the second half of the C20th.Not that he was ever one of those:Too many greens are not just ignorant of science, they hate science... [Environmentalism is like a] global over-anxious mother figure who is so concerned about small risks that she ignores the real dangers. I wish they would grow up [and focus on the real problem]: How can we feed, house and clothe the abundant human race without destroying the habitats of other creatures?Some time in the next century, when the adverse effects of climate change begin to bite, people will look back in anger at those who now so foolishly continue to pollute by burning fossil fuel instead of accepting the beneficence of nuclear power. Is our distrust of nuclear power and genetically modified food soundly based?Later, he was notable for sounding the retreat (humans should start leaving c...]]>
Gavin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:56 None full 3944
YS3gn2KRR9rEBgjvJ_EA EA - Sense-making around the FTX catastrophe: a deep dive podcast episode we just released by spencerg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sense-making around the FTX catastrophe: a deep dive podcast episode we just released, published by spencerg on November 30, 2022 on The Effective Altruism Forum.There hasn’t been as much sense-making around the FTX catastrophe as I would have liked, so we worked quickly to put together a special episode of the Clearer Thinking podcast on the topic.We discuss how more than ten billion dollars of apparent value was lost in a matter of days, the timeline of what happened, deception and confusion related to the event, why this catastrophe took place in the first place, and what this means for communities connected to it.If you’re interested, here it is:The first portion covers the timeline of events, and after that there are interviews with 5 guests who give their take on the events:00:01:37 — Intro & timeline00:51:48 — Byrne Hobart01:39:52 — Vipul Naik02:18:35 — Maomao Hu02:41:19 — Marcus Abramovitch02:49:38 — Ozzie Gooen03:21:40 — Wrap-up & outroThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
spencerg https://forum.effectivealtruism.org/posts/YS3gn2KRR9rEBgjvJ/sense-making-around-the-ftx-catastrophe-a-deep-dive-podcast Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sense-making around the FTX catastrophe: a deep dive podcast episode we just released, published by spencerg on November 30, 2022 on The Effective Altruism Forum.There hasn’t been as much sense-making around the FTX catastrophe as I would have liked, so we worked quickly to put together a special episode of the Clearer Thinking podcast on the topic.We discuss how more than ten billion dollars of apparent value was lost in a matter of days, the timeline of what happened, deception and confusion related to the event, why this catastrophe took place in the first place, and what this means for communities connected to it.If you’re interested, here it is:The first portion covers the timeline of events, and after that there are interviews with 5 guests who give their take on the events:00:01:37 — Intro & timeline00:51:48 — Byrne Hobart01:39:52 — Vipul Naik02:18:35 — Maomao Hu02:41:19 — Marcus Abramovitch02:49:38 — Ozzie Gooen03:21:40 — Wrap-up & outroThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 30 Nov 2022 08:40:00 +0000 EA - Sense-making around the FTX catastrophe: a deep dive podcast episode we just released by spencerg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sense-making around the FTX catastrophe: a deep dive podcast episode we just released, published by spencerg on November 30, 2022 on The Effective Altruism Forum.There hasn’t been as much sense-making around the FTX catastrophe as I would have liked, so we worked quickly to put together a special episode of the Clearer Thinking podcast on the topic.We discuss how more than ten billion dollars of apparent value was lost in a matter of days, the timeline of what happened, deception and confusion related to the event, why this catastrophe took place in the first place, and what this means for communities connected to it.If you’re interested, here it is:The first portion covers the timeline of events, and after that there are interviews with 5 guests who give their take on the events:00:01:37 — Intro & timeline00:51:48 — Byrne Hobart01:39:52 — Vipul Naik02:18:35 — Maomao Hu02:41:19 — Marcus Abramovitch02:49:38 — Ozzie Gooen03:21:40 — Wrap-up & outroThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sense-making around the FTX catastrophe: a deep dive podcast episode we just released, published by spencerg on November 30, 2022 on The Effective Altruism Forum.There hasn’t been as much sense-making around the FTX catastrophe as I would have liked, so we worked quickly to put together a special episode of the Clearer Thinking podcast on the topic.We discuss how more than ten billion dollars of apparent value was lost in a matter of days, the timeline of what happened, deception and confusion related to the event, why this catastrophe took place in the first place, and what this means for communities connected to it.If you’re interested, here it is:The first portion covers the timeline of events, and after that there are interviews with 5 guests who give their take on the events:00:01:37 — Intro & timeline00:51:48 — Byrne Hobart01:39:52 — Vipul Naik02:18:35 — Maomao Hu02:41:19 — Marcus Abramovitch02:49:38 — Ozzie Gooen03:21:40 — Wrap-up & outroThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
spencerg https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:26 None full 3945
WAdhvskTh2yffW9gc_EA EA - Carl Djerassi (1923–2014) by Gavin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Carl Djerassi (1923–2014), published by Gavin on November 29, 2022 on The Effective Altruism Forum.Carl Djerassi helped invent the synthetic hormone norethindrone, one of the 500 most important medicines (actually top 50 by prescription count). A large supply is a basic requirement of every health system in the world.Norethindrone is important for two reasons. First, it treats menstrual disorders and endometriosis, together 0.3% of the global burden of disease. More famously, it was a component of The Pill. People mix up the timelines, which is why he is sometimes called the 'Father of the Pill'. But "neither Djerassi nor the company he works for, Syntex, had any interest in testing it as a contraceptive" and it was only used for birth control 12 years after.As usual in industrial chemistry, Djerassi got no royalties from the blockbuster medicine he helped develop - but, surprise ending! - he bought cheap shares in Syntex and got rich when it became one of the most important medicines in history for two reasons.He also synthesized the third-ever practical antihistamine, and applied new instruments in 1,200 papers on the structure of many important steroids. He also worked on one of the first AI programs to do useful work in science.AchievementsEpistemic status: little better than a guess.Not many inventions are fully counterfactual; most simple, massively profitable things which get invented would have been invented by someone else a bit later. So the appropriate unit for lauding inventors is years saved. And if I put a number on that I'd just be making it up.Here are the numbers I made up:About 4 million US users, so maybe up to 94 million world users at present. No sense of the endometriosis / contraception split.Call it 600 million users, 10% endometriosis use case.For menstrual disorders:on the market 65 years and counting.Counterfactual: on the market 3 years before the next oral progestogen was.It was the first practical oral progestogen, so we should compare to the injectable alternativesAbout 1/6 of Americans hate needles so much that they refuse treatment.Attrition and missed doses for needle treatments is higher than pill treatments.Endometriosis is about 0.25 - 0.35 QALY loss. So if it's 30% effective, around $30 / QALY, an amazing deal.For easy contraception:on the market 59 years and counting.The big gains (besides autonomy) are averting unintended pregnancies, abortions, and pregnancy-related deaths.Modern cost-effectiveness in Ethiopia is $96 / QALY.There's probably some additive effect for endometriosis sufferers (who would want contraception anyway).A full account would guess the Pill's effect on the sexual revolution and cultural attitudes toward women. But I've reached my limit. (You might also consider the role of the Pill in the ongoing decline of church authority: "1980: In spite of the Pope's ruling against the Pill and birth control, almost 80% of American Catholic women use contraceptives, and only 29% of American priests believe it is intrinsically immoral.")How many years did he bring the invention forward? Call it 5.Then split the credit three ways with Luis Miramontes and George Rosenkranz.So (largely made-up numbers) it looks like millions of QALYs for the treatment overall, and tens of thousands counterfactually for Djerassi.ArtistAfter surviving cancer, he decided to become a writer.I was very depressed, and for the first time thought about mortality. Strangely enough I had not thought about death before... I realized that who knows how long I would live? In cancer they always talk about five years: if one can survive five years then presumably the cancer had been extirpated. And I thought: gee, had I known five years earlier that I would come down with cancer, would I have led a different life during these...]]>
Gavin https://forum.effectivealtruism.org/posts/WAdhvskTh2yffW9gc/carl-djerassi-1923-2014 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Carl Djerassi (1923–2014), published by Gavin on November 29, 2022 on The Effective Altruism Forum.Carl Djerassi helped invent the synthetic hormone norethindrone, one of the 500 most important medicines (actually top 50 by prescription count). A large supply is a basic requirement of every health system in the world.Norethindrone is important for two reasons. First, it treats menstrual disorders and endometriosis, together 0.3% of the global burden of disease. More famously, it was a component of The Pill. People mix up the timelines, which is why he is sometimes called the 'Father of the Pill'. But "neither Djerassi nor the company he works for, Syntex, had any interest in testing it as a contraceptive" and it was only used for birth control 12 years after.As usual in industrial chemistry, Djerassi got no royalties from the blockbuster medicine he helped develop - but, surprise ending! - he bought cheap shares in Syntex and got rich when it became one of the most important medicines in history for two reasons.He also synthesized the third-ever practical antihistamine, and applied new instruments in 1,200 papers on the structure of many important steroids. He also worked on one of the first AI programs to do useful work in science.AchievementsEpistemic status: little better than a guess.Not many inventions are fully counterfactual; most simple, massively profitable things which get invented would have been invented by someone else a bit later. So the appropriate unit for lauding inventors is years saved. And if I put a number on that I'd just be making it up.Here are the numbers I made up:About 4 million US users, so maybe up to 94 million world users at present. No sense of the endometriosis / contraception split.Call it 600 million users, 10% endometriosis use case.For menstrual disorders:on the market 65 years and counting.Counterfactual: on the market 3 years before the next oral progestogen was.It was the first practical oral progestogen, so we should compare to the injectable alternativesAbout 1/6 of Americans hate needles so much that they refuse treatment.Attrition and missed doses for needle treatments is higher than pill treatments.Endometriosis is about 0.25 - 0.35 QALY loss. So if it's 30% effective, around $30 / QALY, an amazing deal.For easy contraception:on the market 59 years and counting.The big gains (besides autonomy) are averting unintended pregnancies, abortions, and pregnancy-related deaths.Modern cost-effectiveness in Ethiopia is $96 / QALY.There's probably some additive effect for endometriosis sufferers (who would want contraception anyway).A full account would guess the Pill's effect on the sexual revolution and cultural attitudes toward women. But I've reached my limit. (You might also consider the role of the Pill in the ongoing decline of church authority: "1980: In spite of the Pope's ruling against the Pill and birth control, almost 80% of American Catholic women use contraceptives, and only 29% of American priests believe it is intrinsically immoral.")How many years did he bring the invention forward? Call it 5.Then split the credit three ways with Luis Miramontes and George Rosenkranz.So (largely made-up numbers) it looks like millions of QALYs for the treatment overall, and tens of thousands counterfactually for Djerassi.ArtistAfter surviving cancer, he decided to become a writer.I was very depressed, and for the first time thought about mortality. Strangely enough I had not thought about death before... I realized that who knows how long I would live? In cancer they always talk about five years: if one can survive five years then presumably the cancer had been extirpated. And I thought: gee, had I known five years earlier that I would come down with cancer, would I have led a different life during these...]]>
Wed, 30 Nov 2022 06:08:57 +0000 EA - Carl Djerassi (1923–2014) by Gavin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Carl Djerassi (1923–2014), published by Gavin on November 29, 2022 on The Effective Altruism Forum.Carl Djerassi helped invent the synthetic hormone norethindrone, one of the 500 most important medicines (actually top 50 by prescription count). A large supply is a basic requirement of every health system in the world.Norethindrone is important for two reasons. First, it treats menstrual disorders and endometriosis, together 0.3% of the global burden of disease. More famously, it was a component of The Pill. People mix up the timelines, which is why he is sometimes called the 'Father of the Pill'. But "neither Djerassi nor the company he works for, Syntex, had any interest in testing it as a contraceptive" and it was only used for birth control 12 years after.As usual in industrial chemistry, Djerassi got no royalties from the blockbuster medicine he helped develop - but, surprise ending! - he bought cheap shares in Syntex and got rich when it became one of the most important medicines in history for two reasons.He also synthesized the third-ever practical antihistamine, and applied new instruments in 1,200 papers on the structure of many important steroids. He also worked on one of the first AI programs to do useful work in science.AchievementsEpistemic status: little better than a guess.Not many inventions are fully counterfactual; most simple, massively profitable things which get invented would have been invented by someone else a bit later. So the appropriate unit for lauding inventors is years saved. And if I put a number on that I'd just be making it up.Here are the numbers I made up:About 4 million US users, so maybe up to 94 million world users at present. No sense of the endometriosis / contraception split.Call it 600 million users, 10% endometriosis use case.For menstrual disorders:on the market 65 years and counting.Counterfactual: on the market 3 years before the next oral progestogen was.It was the first practical oral progestogen, so we should compare to the injectable alternativesAbout 1/6 of Americans hate needles so much that they refuse treatment.Attrition and missed doses for needle treatments is higher than pill treatments.Endometriosis is about 0.25 - 0.35 QALY loss. So if it's 30% effective, around $30 / QALY, an amazing deal.For easy contraception:on the market 59 years and counting.The big gains (besides autonomy) are averting unintended pregnancies, abortions, and pregnancy-related deaths.Modern cost-effectiveness in Ethiopia is $96 / QALY.There's probably some additive effect for endometriosis sufferers (who would want contraception anyway).A full account would guess the Pill's effect on the sexual revolution and cultural attitudes toward women. But I've reached my limit. (You might also consider the role of the Pill in the ongoing decline of church authority: "1980: In spite of the Pope's ruling against the Pill and birth control, almost 80% of American Catholic women use contraceptives, and only 29% of American priests believe it is intrinsically immoral.")How many years did he bring the invention forward? Call it 5.Then split the credit three ways with Luis Miramontes and George Rosenkranz.So (largely made-up numbers) it looks like millions of QALYs for the treatment overall, and tens of thousands counterfactually for Djerassi.ArtistAfter surviving cancer, he decided to become a writer.I was very depressed, and for the first time thought about mortality. Strangely enough I had not thought about death before... I realized that who knows how long I would live? In cancer they always talk about five years: if one can survive five years then presumably the cancer had been extirpated. And I thought: gee, had I known five years earlier that I would come down with cancer, would I have led a different life during these...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Carl Djerassi (1923–2014), published by Gavin on November 29, 2022 on The Effective Altruism Forum.Carl Djerassi helped invent the synthetic hormone norethindrone, one of the 500 most important medicines (actually top 50 by prescription count). A large supply is a basic requirement of every health system in the world.Norethindrone is important for two reasons. First, it treats menstrual disorders and endometriosis, together 0.3% of the global burden of disease. More famously, it was a component of The Pill. People mix up the timelines, which is why he is sometimes called the 'Father of the Pill'. But "neither Djerassi nor the company he works for, Syntex, had any interest in testing it as a contraceptive" and it was only used for birth control 12 years after.As usual in industrial chemistry, Djerassi got no royalties from the blockbuster medicine he helped develop - but, surprise ending! - he bought cheap shares in Syntex and got rich when it became one of the most important medicines in history for two reasons.He also synthesized the third-ever practical antihistamine, and applied new instruments in 1,200 papers on the structure of many important steroids. He also worked on one of the first AI programs to do useful work in science.AchievementsEpistemic status: little better than a guess.Not many inventions are fully counterfactual; most simple, massively profitable things which get invented would have been invented by someone else a bit later. So the appropriate unit for lauding inventors is years saved. And if I put a number on that I'd just be making it up.Here are the numbers I made up:About 4 million US users, so maybe up to 94 million world users at present. No sense of the endometriosis / contraception split.Call it 600 million users, 10% endometriosis use case.For menstrual disorders:on the market 65 years and counting.Counterfactual: on the market 3 years before the next oral progestogen was.It was the first practical oral progestogen, so we should compare to the injectable alternativesAbout 1/6 of Americans hate needles so much that they refuse treatment.Attrition and missed doses for needle treatments is higher than pill treatments.Endometriosis is about 0.25 - 0.35 QALY loss. So if it's 30% effective, around $30 / QALY, an amazing deal.For easy contraception:on the market 59 years and counting.The big gains (besides autonomy) are averting unintended pregnancies, abortions, and pregnancy-related deaths.Modern cost-effectiveness in Ethiopia is $96 / QALY.There's probably some additive effect for endometriosis sufferers (who would want contraception anyway).A full account would guess the Pill's effect on the sexual revolution and cultural attitudes toward women. But I've reached my limit. (You might also consider the role of the Pill in the ongoing decline of church authority: "1980: In spite of the Pope's ruling against the Pill and birth control, almost 80% of American Catholic women use contraceptives, and only 29% of American priests believe it is intrinsically immoral.")How many years did he bring the invention forward? Call it 5.Then split the credit three ways with Luis Miramontes and George Rosenkranz.So (largely made-up numbers) it looks like millions of QALYs for the treatment overall, and tens of thousands counterfactually for Djerassi.ArtistAfter surviving cancer, he decided to become a writer.I was very depressed, and for the first time thought about mortality. Strangely enough I had not thought about death before... I realized that who knows how long I would live? In cancer they always talk about five years: if one can survive five years then presumably the cancer had been extirpated. And I thought: gee, had I known five years earlier that I would come down with cancer, would I have led a different life during these...]]>
Gavin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:10 None full 3948
TWv9GnKkGBbu3gbPX_EA EA - SBF interview with Tiffany Fong by Timothy Chan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SBF interview with Tiffany Fong, published by Timothy Chan on November 29, 2022 on The Effective Altruism Forum.Covered here:And here:/Disclaimer: My transcriptions might contain some inaccuracies.The section beginning 11:31 in the second interview might be especially interesting for the readers of the forum. From this and other publicly available information, it seems likely to me that Sam acted as a naive consequentialist. This, of course, isn't mutually exclusive with having elevated dark triad traits (as mentioned here and here).Quotes from that section:"Honestly, I like, right now, I'm mostly focusing on, what I can do, and like, where I can be helpful, and like, it's... there will be a time and a place for (...) ruminating on my future, but [sighs] I, right now it's more one foot in front of the other, and you know, trying to be as helpful and constructive as I can, and uh, that's all I can do for now, and there's no (...) I don't know what the future will hold for me - it's pretty unclear - it's certainly not the future I once thought it was (...) my future is not the thing that [inaudible] not the thing that matters here - what matters is the world's future -I'm much more worried about the damage that I did to that than whatever happens to me personally.""I made a decision a while ago that like, I was gonna, like you know, spend my life trying to do what I could for the world, and like obviously it hasn't turned out like how I had hoped.""I feel really really bad for the people who trusted me and believed in me, and, then, you know, we're trying to do great things for the world and tied it to me - and that got, you know, undermined so I fucked up. And that's like... I don't know, that's the shittiest part of it. If it were just myself that it hurt, like, then whatever, but it wasn't."Other highlights:Sam claims that he donated to Republicans: "I donated to both parties. I donated about the same amount to both parties (...) That was not generally known (...) All my Republican donations were dark (...) and the reason was not for regulatory reasons - it's just that reporters freak the fuck out if you donate to Republicans [inaudible] they're all liberal, and I didn't want to have that fight".In response to to his lawyers' advice regarding his public apologies, Sam says he told his lawyers "to go fuck [themselves]" and claims that they "know what they talk about in extremely narrow domain of litigation - they don't understand the broader context of the world, like, if you're a complete dick about everything, even if it narrowly avoids maybe moderately embarrassing statements, it's not helping [mostly inaudible - but maybe he said 'any of them'?]".Sam describes the collapse as a "risk management failure". Fong asks: "I mean, you can't be the only person that was like, aware - in charge of all of this." Sam replies: "I think the bigger problem was that there was no [...] person who was chiefly in charge of monitoring the risk of margin positions on FTX. Like there should have been but there wasn't". Later, Sam also says "at the same time, I think, you know, we stretched ourselves too thin. And we're doing a lot of things at the company - and, you know, I think we should have cut a few of them out and focus more on making sure that the fundamental, like, the most important things we were doing well at". Malice and incompetence can mix and match, so that might be the case (in addition to a lack of moral qualms).Other relevant videos from Fong:Why is Sam Bankman-Fried Talking To Me? (AUDIO CLIP) Phone Call with SBF - Former FTX CEO / FounderCONTEXT: My Phone Call with SBF / Sam Bankman-Fried- Former CEO of FTX - Phone Calls About Ch 11Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Timothy Chan https://forum.effectivealtruism.org/posts/TWv9GnKkGBbu3gbPX/sbf-interview-with-tiffany-fong Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SBF interview with Tiffany Fong, published by Timothy Chan on November 29, 2022 on The Effective Altruism Forum.Covered here:And here:/Disclaimer: My transcriptions might contain some inaccuracies.The section beginning 11:31 in the second interview might be especially interesting for the readers of the forum. From this and other publicly available information, it seems likely to me that Sam acted as a naive consequentialist. This, of course, isn't mutually exclusive with having elevated dark triad traits (as mentioned here and here).Quotes from that section:"Honestly, I like, right now, I'm mostly focusing on, what I can do, and like, where I can be helpful, and like, it's... there will be a time and a place for (...) ruminating on my future, but [sighs] I, right now it's more one foot in front of the other, and you know, trying to be as helpful and constructive as I can, and uh, that's all I can do for now, and there's no (...) I don't know what the future will hold for me - it's pretty unclear - it's certainly not the future I once thought it was (...) my future is not the thing that [inaudible] not the thing that matters here - what matters is the world's future -I'm much more worried about the damage that I did to that than whatever happens to me personally.""I made a decision a while ago that like, I was gonna, like you know, spend my life trying to do what I could for the world, and like obviously it hasn't turned out like how I had hoped.""I feel really really bad for the people who trusted me and believed in me, and, then, you know, we're trying to do great things for the world and tied it to me - and that got, you know, undermined so I fucked up. And that's like... I don't know, that's the shittiest part of it. If it were just myself that it hurt, like, then whatever, but it wasn't."Other highlights:Sam claims that he donated to Republicans: "I donated to both parties. I donated about the same amount to both parties (...) That was not generally known (...) All my Republican donations were dark (...) and the reason was not for regulatory reasons - it's just that reporters freak the fuck out if you donate to Republicans [inaudible] they're all liberal, and I didn't want to have that fight".In response to to his lawyers' advice regarding his public apologies, Sam says he told his lawyers "to go fuck [themselves]" and claims that they "know what they talk about in extremely narrow domain of litigation - they don't understand the broader context of the world, like, if you're a complete dick about everything, even if it narrowly avoids maybe moderately embarrassing statements, it's not helping [mostly inaudible - but maybe he said 'any of them'?]".Sam describes the collapse as a "risk management failure". Fong asks: "I mean, you can't be the only person that was like, aware - in charge of all of this." Sam replies: "I think the bigger problem was that there was no [...] person who was chiefly in charge of monitoring the risk of margin positions on FTX. Like there should have been but there wasn't". Later, Sam also says "at the same time, I think, you know, we stretched ourselves too thin. And we're doing a lot of things at the company - and, you know, I think we should have cut a few of them out and focus more on making sure that the fundamental, like, the most important things we were doing well at". Malice and incompetence can mix and match, so that might be the case (in addition to a lack of moral qualms).Other relevant videos from Fong:Why is Sam Bankman-Fried Talking To Me? (AUDIO CLIP) Phone Call with SBF - Former FTX CEO / FounderCONTEXT: My Phone Call with SBF / Sam Bankman-Fried- Former CEO of FTX - Phone Calls About Ch 11Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 30 Nov 2022 00:18:30 +0000 EA - SBF interview with Tiffany Fong by Timothy Chan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SBF interview with Tiffany Fong, published by Timothy Chan on November 29, 2022 on The Effective Altruism Forum.Covered here:And here:/Disclaimer: My transcriptions might contain some inaccuracies.The section beginning 11:31 in the second interview might be especially interesting for the readers of the forum. From this and other publicly available information, it seems likely to me that Sam acted as a naive consequentialist. This, of course, isn't mutually exclusive with having elevated dark triad traits (as mentioned here and here).Quotes from that section:"Honestly, I like, right now, I'm mostly focusing on, what I can do, and like, where I can be helpful, and like, it's... there will be a time and a place for (...) ruminating on my future, but [sighs] I, right now it's more one foot in front of the other, and you know, trying to be as helpful and constructive as I can, and uh, that's all I can do for now, and there's no (...) I don't know what the future will hold for me - it's pretty unclear - it's certainly not the future I once thought it was (...) my future is not the thing that [inaudible] not the thing that matters here - what matters is the world's future -I'm much more worried about the damage that I did to that than whatever happens to me personally.""I made a decision a while ago that like, I was gonna, like you know, spend my life trying to do what I could for the world, and like obviously it hasn't turned out like how I had hoped.""I feel really really bad for the people who trusted me and believed in me, and, then, you know, we're trying to do great things for the world and tied it to me - and that got, you know, undermined so I fucked up. And that's like... I don't know, that's the shittiest part of it. If it were just myself that it hurt, like, then whatever, but it wasn't."Other highlights:Sam claims that he donated to Republicans: "I donated to both parties. I donated about the same amount to both parties (...) That was not generally known (...) All my Republican donations were dark (...) and the reason was not for regulatory reasons - it's just that reporters freak the fuck out if you donate to Republicans [inaudible] they're all liberal, and I didn't want to have that fight".In response to to his lawyers' advice regarding his public apologies, Sam says he told his lawyers "to go fuck [themselves]" and claims that they "know what they talk about in extremely narrow domain of litigation - they don't understand the broader context of the world, like, if you're a complete dick about everything, even if it narrowly avoids maybe moderately embarrassing statements, it's not helping [mostly inaudible - but maybe he said 'any of them'?]".Sam describes the collapse as a "risk management failure". Fong asks: "I mean, you can't be the only person that was like, aware - in charge of all of this." Sam replies: "I think the bigger problem was that there was no [...] person who was chiefly in charge of monitoring the risk of margin positions on FTX. Like there should have been but there wasn't". Later, Sam also says "at the same time, I think, you know, we stretched ourselves too thin. And we're doing a lot of things at the company - and, you know, I think we should have cut a few of them out and focus more on making sure that the fundamental, like, the most important things we were doing well at". Malice and incompetence can mix and match, so that might be the case (in addition to a lack of moral qualms).Other relevant videos from Fong:Why is Sam Bankman-Fried Talking To Me? (AUDIO CLIP) Phone Call with SBF - Former FTX CEO / FounderCONTEXT: My Phone Call with SBF / Sam Bankman-Fried- Former CEO of FTX - Phone Calls About Ch 11Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SBF interview with Tiffany Fong, published by Timothy Chan on November 29, 2022 on The Effective Altruism Forum.Covered here:And here:/Disclaimer: My transcriptions might contain some inaccuracies.The section beginning 11:31 in the second interview might be especially interesting for the readers of the forum. From this and other publicly available information, it seems likely to me that Sam acted as a naive consequentialist. This, of course, isn't mutually exclusive with having elevated dark triad traits (as mentioned here and here).Quotes from that section:"Honestly, I like, right now, I'm mostly focusing on, what I can do, and like, where I can be helpful, and like, it's... there will be a time and a place for (...) ruminating on my future, but [sighs] I, right now it's more one foot in front of the other, and you know, trying to be as helpful and constructive as I can, and uh, that's all I can do for now, and there's no (...) I don't know what the future will hold for me - it's pretty unclear - it's certainly not the future I once thought it was (...) my future is not the thing that [inaudible] not the thing that matters here - what matters is the world's future -I'm much more worried about the damage that I did to that than whatever happens to me personally.""I made a decision a while ago that like, I was gonna, like you know, spend my life trying to do what I could for the world, and like obviously it hasn't turned out like how I had hoped.""I feel really really bad for the people who trusted me and believed in me, and, then, you know, we're trying to do great things for the world and tied it to me - and that got, you know, undermined so I fucked up. And that's like... I don't know, that's the shittiest part of it. If it were just myself that it hurt, like, then whatever, but it wasn't."Other highlights:Sam claims that he donated to Republicans: "I donated to both parties. I donated about the same amount to both parties (...) That was not generally known (...) All my Republican donations were dark (...) and the reason was not for regulatory reasons - it's just that reporters freak the fuck out if you donate to Republicans [inaudible] they're all liberal, and I didn't want to have that fight".In response to to his lawyers' advice regarding his public apologies, Sam says he told his lawyers "to go fuck [themselves]" and claims that they "know what they talk about in extremely narrow domain of litigation - they don't understand the broader context of the world, like, if you're a complete dick about everything, even if it narrowly avoids maybe moderately embarrassing statements, it's not helping [mostly inaudible - but maybe he said 'any of them'?]".Sam describes the collapse as a "risk management failure". Fong asks: "I mean, you can't be the only person that was like, aware - in charge of all of this." Sam replies: "I think the bigger problem was that there was no [...] person who was chiefly in charge of monitoring the risk of margin positions on FTX. Like there should have been but there wasn't". Later, Sam also says "at the same time, I think, you know, we stretched ourselves too thin. And we're doing a lot of things at the company - and, you know, I think we should have cut a few of them out and focus more on making sure that the fundamental, like, the most important things we were doing well at". Malice and incompetence can mix and match, so that might be the case (in addition to a lack of moral qualms).Other relevant videos from Fong:Why is Sam Bankman-Fried Talking To Me? (AUDIO CLIP) Phone Call with SBF - Former FTX CEO / FounderCONTEXT: My Phone Call with SBF / Sam Bankman-Fried- Former CEO of FTX - Phone Calls About Ch 11Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Timothy Chan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:40 None full 3947
Foh2mCycudKgDNZqC_EA EA - Come get malaria with me? by jeberts Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Come get malaria with me?, published by jeberts on November 29, 2022 on The Effective Altruism Forum.I promise I'm not going to start spamming the forums every week to badger you about getting a different exotic disease (see the post about Zika). But I was accepted into this study in Baltimore, which you can sort of think of as a part-time job fighting malaria from January to early March-ish. I figured some of you may be interested as well. (Please let me know if you sign up for pre-screening! I am always happy to talk about things in more detail if you have questions.)If you're not in the DMV, 1Day Sooner keeps track of potentially high-impact studies on our website and via our newsletter; sign up if you want to hear about hot, single challenge trials recruiting in YOUR area in the future.The following is a condensed version of what's outlined in this informal document. Obligatory disclosure: Neither I nor my org, 1Day Sooner, represent the study. I'm just excited about it :)Why would an EA consider this? Malaria is bad and the vaccine in this trial could result in nontrivial decreases in malaria mortality. What it comes down to is whether the costs — especially time commitment — make sense for you specifically. As the expanded document discusses, the risks of serious complications are very, very low.You'll be screened to make sure you're not at any elevated risk, and treatment will be initiated very quickly after you contract malaria (if you do at all). So if you do feel like you have the bandwidth for a part-time malaria-fighting job (remote work capability pretty much necessary), this is a good tangible way to make a difference. Also, you'll get paid. And you'll become friends with me, and I am very fun to be around, in my opinion.30-second trial summary: A malaria vaccine candidate with solid chance of eventual deployment in pregnant women needs to undergo this challenge trial being held by U Maryland — Baltimore's Center for Vaccine Development. It's outpatient. The burden of time will very likely be more than the burden of actually being sick. I will be in it, and other EAs and 1Day Sooner volunteers have expressed interest. Compensation runs up to $3,845. 1Day Sooner can help ease burdens of participation, especially transport from DC.Slightly expanded summary:A promising malaria vaccine, PfSPZ, needs to be tested as a key step in licensure as a traveler's vaccine, which in turn will support eventual deployment among pregnant women.This vaccine will not be the final cure humanity has been waiting for. It is most promising for use during pregnancy.There is never a guarantee a vaccine candidate will be successful, nor successfully deployed. It may well be that PfSPZ works well, but in a few years another vaccine turns out to be even better.The primary burden is time rather than discomfort, i.e., being sick with malaria for at most a few days, but also lots of blood draws. I have a rough estimate of hours spent in this spreadsheet.Low estimate for time spent if you are from DC (transit, visits, other lost productive time): 60-70 hours. High estimate: 90-100.For Baltimoreans, probably more like 45-65 hours.A few of these hours can be productive, like on a train or waiting around at some of the longer visits.Vaccination begins in January (specific dates to be announced shortly). There are three doses spaced out across one month. The vaccine has already been tested rather extensively for safety, it is very well tolerated. Three weeks after the final dose, we will be challenged with malaria. One-fourth of the participants will get a placebo injection.One week after the malaria challenge, we will begin going into the clinic in Baltimore for short, daily blood tests in the morning. Treatment will be administered ASAP after malaria is detected, definite...]]>
jeberts https://forum.effectivealtruism.org/posts/Foh2mCycudKgDNZqC/come-get-malaria-with-me Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Come get malaria with me?, published by jeberts on November 29, 2022 on The Effective Altruism Forum.I promise I'm not going to start spamming the forums every week to badger you about getting a different exotic disease (see the post about Zika). But I was accepted into this study in Baltimore, which you can sort of think of as a part-time job fighting malaria from January to early March-ish. I figured some of you may be interested as well. (Please let me know if you sign up for pre-screening! I am always happy to talk about things in more detail if you have questions.)If you're not in the DMV, 1Day Sooner keeps track of potentially high-impact studies on our website and via our newsletter; sign up if you want to hear about hot, single challenge trials recruiting in YOUR area in the future.The following is a condensed version of what's outlined in this informal document. Obligatory disclosure: Neither I nor my org, 1Day Sooner, represent the study. I'm just excited about it :)Why would an EA consider this? Malaria is bad and the vaccine in this trial could result in nontrivial decreases in malaria mortality. What it comes down to is whether the costs — especially time commitment — make sense for you specifically. As the expanded document discusses, the risks of serious complications are very, very low.You'll be screened to make sure you're not at any elevated risk, and treatment will be initiated very quickly after you contract malaria (if you do at all). So if you do feel like you have the bandwidth for a part-time malaria-fighting job (remote work capability pretty much necessary), this is a good tangible way to make a difference. Also, you'll get paid. And you'll become friends with me, and I am very fun to be around, in my opinion.30-second trial summary: A malaria vaccine candidate with solid chance of eventual deployment in pregnant women needs to undergo this challenge trial being held by U Maryland — Baltimore's Center for Vaccine Development. It's outpatient. The burden of time will very likely be more than the burden of actually being sick. I will be in it, and other EAs and 1Day Sooner volunteers have expressed interest. Compensation runs up to $3,845. 1Day Sooner can help ease burdens of participation, especially transport from DC.Slightly expanded summary:A promising malaria vaccine, PfSPZ, needs to be tested as a key step in licensure as a traveler's vaccine, which in turn will support eventual deployment among pregnant women.This vaccine will not be the final cure humanity has been waiting for. It is most promising for use during pregnancy.There is never a guarantee a vaccine candidate will be successful, nor successfully deployed. It may well be that PfSPZ works well, but in a few years another vaccine turns out to be even better.The primary burden is time rather than discomfort, i.e., being sick with malaria for at most a few days, but also lots of blood draws. I have a rough estimate of hours spent in this spreadsheet.Low estimate for time spent if you are from DC (transit, visits, other lost productive time): 60-70 hours. High estimate: 90-100.For Baltimoreans, probably more like 45-65 hours.A few of these hours can be productive, like on a train or waiting around at some of the longer visits.Vaccination begins in January (specific dates to be announced shortly). There are three doses spaced out across one month. The vaccine has already been tested rather extensively for safety, it is very well tolerated. Three weeks after the final dose, we will be challenged with malaria. One-fourth of the participants will get a placebo injection.One week after the malaria challenge, we will begin going into the clinic in Baltimore for short, daily blood tests in the morning. Treatment will be administered ASAP after malaria is detected, definite...]]>
Tue, 29 Nov 2022 23:40:20 +0000 EA - Come get malaria with me? by jeberts Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Come get malaria with me?, published by jeberts on November 29, 2022 on The Effective Altruism Forum.I promise I'm not going to start spamming the forums every week to badger you about getting a different exotic disease (see the post about Zika). But I was accepted into this study in Baltimore, which you can sort of think of as a part-time job fighting malaria from January to early March-ish. I figured some of you may be interested as well. (Please let me know if you sign up for pre-screening! I am always happy to talk about things in more detail if you have questions.)If you're not in the DMV, 1Day Sooner keeps track of potentially high-impact studies on our website and via our newsletter; sign up if you want to hear about hot, single challenge trials recruiting in YOUR area in the future.The following is a condensed version of what's outlined in this informal document. Obligatory disclosure: Neither I nor my org, 1Day Sooner, represent the study. I'm just excited about it :)Why would an EA consider this? Malaria is bad and the vaccine in this trial could result in nontrivial decreases in malaria mortality. What it comes down to is whether the costs — especially time commitment — make sense for you specifically. As the expanded document discusses, the risks of serious complications are very, very low.You'll be screened to make sure you're not at any elevated risk, and treatment will be initiated very quickly after you contract malaria (if you do at all). So if you do feel like you have the bandwidth for a part-time malaria-fighting job (remote work capability pretty much necessary), this is a good tangible way to make a difference. Also, you'll get paid. And you'll become friends with me, and I am very fun to be around, in my opinion.30-second trial summary: A malaria vaccine candidate with solid chance of eventual deployment in pregnant women needs to undergo this challenge trial being held by U Maryland — Baltimore's Center for Vaccine Development. It's outpatient. The burden of time will very likely be more than the burden of actually being sick. I will be in it, and other EAs and 1Day Sooner volunteers have expressed interest. Compensation runs up to $3,845. 1Day Sooner can help ease burdens of participation, especially transport from DC.Slightly expanded summary:A promising malaria vaccine, PfSPZ, needs to be tested as a key step in licensure as a traveler's vaccine, which in turn will support eventual deployment among pregnant women.This vaccine will not be the final cure humanity has been waiting for. It is most promising for use during pregnancy.There is never a guarantee a vaccine candidate will be successful, nor successfully deployed. It may well be that PfSPZ works well, but in a few years another vaccine turns out to be even better.The primary burden is time rather than discomfort, i.e., being sick with malaria for at most a few days, but also lots of blood draws. I have a rough estimate of hours spent in this spreadsheet.Low estimate for time spent if you are from DC (transit, visits, other lost productive time): 60-70 hours. High estimate: 90-100.For Baltimoreans, probably more like 45-65 hours.A few of these hours can be productive, like on a train or waiting around at some of the longer visits.Vaccination begins in January (specific dates to be announced shortly). There are three doses spaced out across one month. The vaccine has already been tested rather extensively for safety, it is very well tolerated. Three weeks after the final dose, we will be challenged with malaria. One-fourth of the participants will get a placebo injection.One week after the malaria challenge, we will begin going into the clinic in Baltimore for short, daily blood tests in the morning. Treatment will be administered ASAP after malaria is detected, definite...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Come get malaria with me?, published by jeberts on November 29, 2022 on The Effective Altruism Forum.I promise I'm not going to start spamming the forums every week to badger you about getting a different exotic disease (see the post about Zika). But I was accepted into this study in Baltimore, which you can sort of think of as a part-time job fighting malaria from January to early March-ish. I figured some of you may be interested as well. (Please let me know if you sign up for pre-screening! I am always happy to talk about things in more detail if you have questions.)If you're not in the DMV, 1Day Sooner keeps track of potentially high-impact studies on our website and via our newsletter; sign up if you want to hear about hot, single challenge trials recruiting in YOUR area in the future.The following is a condensed version of what's outlined in this informal document. Obligatory disclosure: Neither I nor my org, 1Day Sooner, represent the study. I'm just excited about it :)Why would an EA consider this? Malaria is bad and the vaccine in this trial could result in nontrivial decreases in malaria mortality. What it comes down to is whether the costs — especially time commitment — make sense for you specifically. As the expanded document discusses, the risks of serious complications are very, very low.You'll be screened to make sure you're not at any elevated risk, and treatment will be initiated very quickly after you contract malaria (if you do at all). So if you do feel like you have the bandwidth for a part-time malaria-fighting job (remote work capability pretty much necessary), this is a good tangible way to make a difference. Also, you'll get paid. And you'll become friends with me, and I am very fun to be around, in my opinion.30-second trial summary: A malaria vaccine candidate with solid chance of eventual deployment in pregnant women needs to undergo this challenge trial being held by U Maryland — Baltimore's Center for Vaccine Development. It's outpatient. The burden of time will very likely be more than the burden of actually being sick. I will be in it, and other EAs and 1Day Sooner volunteers have expressed interest. Compensation runs up to $3,845. 1Day Sooner can help ease burdens of participation, especially transport from DC.Slightly expanded summary:A promising malaria vaccine, PfSPZ, needs to be tested as a key step in licensure as a traveler's vaccine, which in turn will support eventual deployment among pregnant women.This vaccine will not be the final cure humanity has been waiting for. It is most promising for use during pregnancy.There is never a guarantee a vaccine candidate will be successful, nor successfully deployed. It may well be that PfSPZ works well, but in a few years another vaccine turns out to be even better.The primary burden is time rather than discomfort, i.e., being sick with malaria for at most a few days, but also lots of blood draws. I have a rough estimate of hours spent in this spreadsheet.Low estimate for time spent if you are from DC (transit, visits, other lost productive time): 60-70 hours. High estimate: 90-100.For Baltimoreans, probably more like 45-65 hours.A few of these hours can be productive, like on a train or waiting around at some of the longer visits.Vaccination begins in January (specific dates to be announced shortly). There are three doses spaced out across one month. The vaccine has already been tested rather extensively for safety, it is very well tolerated. Three weeks after the final dose, we will be challenged with malaria. One-fourth of the participants will get a placebo injection.One week after the malaria challenge, we will begin going into the clinic in Baltimore for short, daily blood tests in the morning. Treatment will be administered ASAP after malaria is detected, definite...]]>
jeberts https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:54 None full 3946
qadEJ7umzr2atXNen_EA EA - A Barebones Guide to Mechanistic Interpretability Prerequisites by Neel Nanda Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Barebones Guide to Mechanistic Interpretability Prerequisites, published by Neel Nanda on November 29, 2022 on The Effective Altruism Forum.Co-authored by Neel Nanda and Jess SmithCrossposted on the suggestion of Vasco GriloWhy does this exist?People often get intimidated when trying to get into AI or AI Alignment research. People often think that the gulf between where they are and where they need to be is huge. This presents practical concerns for people trying to change fields: we all have limited time and energy. And for the most part, people wildly overestimate the actual core skills required.This guide is our take on the essential skills required to understand, write code and ideally contribute useful research to mechanistic interpretability. We hope that it’s useful and unintimidating. :)Core Skills:Maths:Linear Algebra: 3Blue1Brown or Linear Algebra Done RightCore goals - to deeply & intuitively understand these concepts:BasisChange of basisThat a vector space is a geometric object that doesn’t necessarily have a canonical basisThat a matrix is a linear map between two vector spaces (or from a vector space to itself)Bonus things that it’s useful to understand:What’s singular value decomposition? Why is it useful?What are orthogonal/orthonormal matrices, and how is changing to an orthonormal basis importantly different from just any change of basis?What are eigenvalues and eigenvectors, and what do these tell you about a linear map?Probability basicsBasics of distributions: expected value, standard deviation, normal distributionsLog likelihoodMaximum value estimatorsRandom variablesCentral limit theoremCalculus basicsGradientsThe chain ruleThe intuition for what backprop is - in particular, grokking the idea that backprop is just the chain rule on multivariate functionsCoding:Python BasicsThe “how to learn coding” market is pretty saturated - there’s a lot of good stuff out there! And not really a clear best one.Zac Hatfield-Dodds recommends Al Sweigart's Automate the Boring Stuff and then Beyond the Basic Stuff (both readable for free on inventwithpython.com, or purchasable in books); he's also written some books of exercises. If you prefer a more traditional textbook, Think Python 2e is excellent and also available freely online.NumPy BasicsTry to do the first ~third of these:. Bonus points for doing them in pytorch on tensors :)ML:Rough grounding in ML.fast.ai is a good intro, but a fair bit more effort than is necessary. For an 80/20, focus on Andrej Karpathy’s new video explaining neural nets:PyTorch basicsDon’t go overboard here. You’ll pick up what you need over time - learning to google things when you get confused or stuck is most of the real skill in programming.One goal: build linear regression that runs in Google Colab on a GPU.Transformers - probably the biggest way mechanistic interpretability differs from normal ML is that it’s really important to deeply understand the architectures of the models you use, all of the moving parts inside of them, and how they fit together. In this case, the main architecture that matters is a transformer! (This is useful in normal ML too, but you can often get away with treating the model as a black box)Check out the illustrated transformerNote that you can pretty much ignore the stuff on encoder vs decoder transformers - we mostly care about autoregressive decoder-only transformers like GPT-2, which means that each token can only see tokens before it, and they learn to predict the next tokenGood (but hard) exercise: Code your own tiny GPT-2 and train it. If you can do this, I’d say that you basically fully understand the transformer architecture.Example of basic training boilerplate and train scriptThe EasyTransformer codebase is probably good to riff off of hereAn ...]]>
Neel Nanda https://forum.effectivealtruism.org/posts/qadEJ7umzr2atXNen/a-barebones-guide-to-mechanistic-interpretability Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Barebones Guide to Mechanistic Interpretability Prerequisites, published by Neel Nanda on November 29, 2022 on The Effective Altruism Forum.Co-authored by Neel Nanda and Jess SmithCrossposted on the suggestion of Vasco GriloWhy does this exist?People often get intimidated when trying to get into AI or AI Alignment research. People often think that the gulf between where they are and where they need to be is huge. This presents practical concerns for people trying to change fields: we all have limited time and energy. And for the most part, people wildly overestimate the actual core skills required.This guide is our take on the essential skills required to understand, write code and ideally contribute useful research to mechanistic interpretability. We hope that it’s useful and unintimidating. :)Core Skills:Maths:Linear Algebra: 3Blue1Brown or Linear Algebra Done RightCore goals - to deeply & intuitively understand these concepts:BasisChange of basisThat a vector space is a geometric object that doesn’t necessarily have a canonical basisThat a matrix is a linear map between two vector spaces (or from a vector space to itself)Bonus things that it’s useful to understand:What’s singular value decomposition? Why is it useful?What are orthogonal/orthonormal matrices, and how is changing to an orthonormal basis importantly different from just any change of basis?What are eigenvalues and eigenvectors, and what do these tell you about a linear map?Probability basicsBasics of distributions: expected value, standard deviation, normal distributionsLog likelihoodMaximum value estimatorsRandom variablesCentral limit theoremCalculus basicsGradientsThe chain ruleThe intuition for what backprop is - in particular, grokking the idea that backprop is just the chain rule on multivariate functionsCoding:Python BasicsThe “how to learn coding” market is pretty saturated - there’s a lot of good stuff out there! And not really a clear best one.Zac Hatfield-Dodds recommends Al Sweigart's Automate the Boring Stuff and then Beyond the Basic Stuff (both readable for free on inventwithpython.com, or purchasable in books); he's also written some books of exercises. If you prefer a more traditional textbook, Think Python 2e is excellent and also available freely online.NumPy BasicsTry to do the first ~third of these:. Bonus points for doing them in pytorch on tensors :)ML:Rough grounding in ML.fast.ai is a good intro, but a fair bit more effort than is necessary. For an 80/20, focus on Andrej Karpathy’s new video explaining neural nets:PyTorch basicsDon’t go overboard here. You’ll pick up what you need over time - learning to google things when you get confused or stuck is most of the real skill in programming.One goal: build linear regression that runs in Google Colab on a GPU.Transformers - probably the biggest way mechanistic interpretability differs from normal ML is that it’s really important to deeply understand the architectures of the models you use, all of the moving parts inside of them, and how they fit together. In this case, the main architecture that matters is a transformer! (This is useful in normal ML too, but you can often get away with treating the model as a black box)Check out the illustrated transformerNote that you can pretty much ignore the stuff on encoder vs decoder transformers - we mostly care about autoregressive decoder-only transformers like GPT-2, which means that each token can only see tokens before it, and they learn to predict the next tokenGood (but hard) exercise: Code your own tiny GPT-2 and train it. If you can do this, I’d say that you basically fully understand the transformer architecture.Example of basic training boilerplate and train scriptThe EasyTransformer codebase is probably good to riff off of hereAn ...]]>
Tue, 29 Nov 2022 22:15:05 +0000 EA - A Barebones Guide to Mechanistic Interpretability Prerequisites by Neel Nanda Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Barebones Guide to Mechanistic Interpretability Prerequisites, published by Neel Nanda on November 29, 2022 on The Effective Altruism Forum.Co-authored by Neel Nanda and Jess SmithCrossposted on the suggestion of Vasco GriloWhy does this exist?People often get intimidated when trying to get into AI or AI Alignment research. People often think that the gulf between where they are and where they need to be is huge. This presents practical concerns for people trying to change fields: we all have limited time and energy. And for the most part, people wildly overestimate the actual core skills required.This guide is our take on the essential skills required to understand, write code and ideally contribute useful research to mechanistic interpretability. We hope that it’s useful and unintimidating. :)Core Skills:Maths:Linear Algebra: 3Blue1Brown or Linear Algebra Done RightCore goals - to deeply & intuitively understand these concepts:BasisChange of basisThat a vector space is a geometric object that doesn’t necessarily have a canonical basisThat a matrix is a linear map between two vector spaces (or from a vector space to itself)Bonus things that it’s useful to understand:What’s singular value decomposition? Why is it useful?What are orthogonal/orthonormal matrices, and how is changing to an orthonormal basis importantly different from just any change of basis?What are eigenvalues and eigenvectors, and what do these tell you about a linear map?Probability basicsBasics of distributions: expected value, standard deviation, normal distributionsLog likelihoodMaximum value estimatorsRandom variablesCentral limit theoremCalculus basicsGradientsThe chain ruleThe intuition for what backprop is - in particular, grokking the idea that backprop is just the chain rule on multivariate functionsCoding:Python BasicsThe “how to learn coding” market is pretty saturated - there’s a lot of good stuff out there! And not really a clear best one.Zac Hatfield-Dodds recommends Al Sweigart's Automate the Boring Stuff and then Beyond the Basic Stuff (both readable for free on inventwithpython.com, or purchasable in books); he's also written some books of exercises. If you prefer a more traditional textbook, Think Python 2e is excellent and also available freely online.NumPy BasicsTry to do the first ~third of these:. Bonus points for doing them in pytorch on tensors :)ML:Rough grounding in ML.fast.ai is a good intro, but a fair bit more effort than is necessary. For an 80/20, focus on Andrej Karpathy’s new video explaining neural nets:PyTorch basicsDon’t go overboard here. You’ll pick up what you need over time - learning to google things when you get confused or stuck is most of the real skill in programming.One goal: build linear regression that runs in Google Colab on a GPU.Transformers - probably the biggest way mechanistic interpretability differs from normal ML is that it’s really important to deeply understand the architectures of the models you use, all of the moving parts inside of them, and how they fit together. In this case, the main architecture that matters is a transformer! (This is useful in normal ML too, but you can often get away with treating the model as a black box)Check out the illustrated transformerNote that you can pretty much ignore the stuff on encoder vs decoder transformers - we mostly care about autoregressive decoder-only transformers like GPT-2, which means that each token can only see tokens before it, and they learn to predict the next tokenGood (but hard) exercise: Code your own tiny GPT-2 and train it. If you can do this, I’d say that you basically fully understand the transformer architecture.Example of basic training boilerplate and train scriptThe EasyTransformer codebase is probably good to riff off of hereAn ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Barebones Guide to Mechanistic Interpretability Prerequisites, published by Neel Nanda on November 29, 2022 on The Effective Altruism Forum.Co-authored by Neel Nanda and Jess SmithCrossposted on the suggestion of Vasco GriloWhy does this exist?People often get intimidated when trying to get into AI or AI Alignment research. People often think that the gulf between where they are and where they need to be is huge. This presents practical concerns for people trying to change fields: we all have limited time and energy. And for the most part, people wildly overestimate the actual core skills required.This guide is our take on the essential skills required to understand, write code and ideally contribute useful research to mechanistic interpretability. We hope that it’s useful and unintimidating. :)Core Skills:Maths:Linear Algebra: 3Blue1Brown or Linear Algebra Done RightCore goals - to deeply & intuitively understand these concepts:BasisChange of basisThat a vector space is a geometric object that doesn’t necessarily have a canonical basisThat a matrix is a linear map between two vector spaces (or from a vector space to itself)Bonus things that it’s useful to understand:What’s singular value decomposition? Why is it useful?What are orthogonal/orthonormal matrices, and how is changing to an orthonormal basis importantly different from just any change of basis?What are eigenvalues and eigenvectors, and what do these tell you about a linear map?Probability basicsBasics of distributions: expected value, standard deviation, normal distributionsLog likelihoodMaximum value estimatorsRandom variablesCentral limit theoremCalculus basicsGradientsThe chain ruleThe intuition for what backprop is - in particular, grokking the idea that backprop is just the chain rule on multivariate functionsCoding:Python BasicsThe “how to learn coding” market is pretty saturated - there’s a lot of good stuff out there! And not really a clear best one.Zac Hatfield-Dodds recommends Al Sweigart's Automate the Boring Stuff and then Beyond the Basic Stuff (both readable for free on inventwithpython.com, or purchasable in books); he's also written some books of exercises. If you prefer a more traditional textbook, Think Python 2e is excellent and also available freely online.NumPy BasicsTry to do the first ~third of these:. Bonus points for doing them in pytorch on tensors :)ML:Rough grounding in ML.fast.ai is a good intro, but a fair bit more effort than is necessary. For an 80/20, focus on Andrej Karpathy’s new video explaining neural nets:PyTorch basicsDon’t go overboard here. You’ll pick up what you need over time - learning to google things when you get confused or stuck is most of the real skill in programming.One goal: build linear regression that runs in Google Colab on a GPU.Transformers - probably the biggest way mechanistic interpretability differs from normal ML is that it’s really important to deeply understand the architectures of the models you use, all of the moving parts inside of them, and how they fit together. In this case, the main architecture that matters is a transformer! (This is useful in normal ML too, but you can often get away with treating the model as a black box)Check out the illustrated transformerNote that you can pretty much ignore the stuff on encoder vs decoder transformers - we mostly care about autoregressive decoder-only transformers like GPT-2, which means that each token can only see tokens before it, and they learn to predict the next tokenGood (but hard) exercise: Code your own tiny GPT-2 and train it. If you can do this, I’d say that you basically fully understand the transformer architecture.Example of basic training boilerplate and train scriptThe EasyTransformer codebase is probably good to riff off of hereAn ...]]>
Neel Nanda https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:05 None full 3941
4Y3NKH37S9hvrXLCF_EA EA - Apply to join Rethink Priorities’ board of directors. by abrahamrowe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to join Rethink Priorities’ board of directors., published by abrahamrowe on November 29, 2022 on The Effective Altruism Forum.Right now (through January 13th), you can apply to join RP’s board of directors in an unpaid (3-10 hours per month) or paid (10-15 hours per week) capacity.Rethink Priorities (RP) has grown quickly, is now large, and remains ambitious (see our recent post for more details). We're looking for people to join our board who can help RP really secure its foundations, and scale in the next several years. While we had been planning on opening these roles prior to FTX’s collapse because we recognized governance as an area of growth for our organization, the recent events help in highlighting why these roles are important. We want to ensure that RP is healthy and sustainable, and thinking about risk and success well in the long-run.Our board of directors plays an active role in ensuring that our senior management is making responsible, legal, and risk-aware decisions for the organization in the long-run. They evaluate things like our financial controls, the performance of our Co-CEOs, and budgets and fundraising to help ensure the organization is acting legally and ethically. They also advise our senior management to help ensure the organization stays on track, and continues to target high-level goals for itself.If you have any questions about these positions, please contact careers@rethinkpriorities.org. If you have questions about RP’s governance generally, contact abraham@rethinkpriorities.org.What does the board of directors do?Our board’s primary functions are:Providing long-term financial oversight to the organization including:Reviewing and approving the annual budget, and spending controls for the Co-CEOsReviewing annual audits of financial statements and financial controlsProviding oversight for the Co-CEOs including:Performance evaluations of senior managementServing as contacts for staff outside the chain of commandsProviding feedback on Co-CEOs strategic plansProviding legal oversight for the organization, such as:Helping assess risky and complicated situations, and providing feedback on plans to navigate those situationsEnsuring that RP is compliant with its charitable purposesAdvising on RP’s long-term strategy and directionWhat qualifications are you looking for in board members?We are particularly interested in adding individuals who have knowledge/experience within longtermism, launching/supporting new ventures, and/or scaling organizations. We’d also be excited for candidates with professional legal or nonprofit finance experience.Do I have to be an American to join the board?No! Though we are a US-based organization, these roles do not require US residency. However, we’d like the majority of our board to be made up of US residents (including non-US citizens), and some board functions may require US residency, so while location wouldn’t be disqualifying, it may be a consideration.What’s the difference between paid and unpaid roles?The majority of our board is required by our bylaws to be unpaid. However, we think that there is significant value in our board being more engaged than many members are able to be in a voluntary capacity, so we’d like to pay up to 2 members of the board to provide administrative assistance to the other members, and to tackle some of the more work intensive tasks (such as performance evaluations of senior management). In our view, a failure for many nonprofit boards is they select for skills but not time, and that contributes towards a tendency for boards to not do a very thorough job. We're excited to experiment with one to two people who are designated to spend 10-15 hrs/week on board duties. Right now we have some idea of how this will work, but it will be a first for us. We ...]]>
abrahamrowe https://forum.effectivealtruism.org/posts/4Y3NKH37S9hvrXLCF/apply-to-join-rethink-priorities-board-of-directors Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to join Rethink Priorities’ board of directors., published by abrahamrowe on November 29, 2022 on The Effective Altruism Forum.Right now (through January 13th), you can apply to join RP’s board of directors in an unpaid (3-10 hours per month) or paid (10-15 hours per week) capacity.Rethink Priorities (RP) has grown quickly, is now large, and remains ambitious (see our recent post for more details). We're looking for people to join our board who can help RP really secure its foundations, and scale in the next several years. While we had been planning on opening these roles prior to FTX’s collapse because we recognized governance as an area of growth for our organization, the recent events help in highlighting why these roles are important. We want to ensure that RP is healthy and sustainable, and thinking about risk and success well in the long-run.Our board of directors plays an active role in ensuring that our senior management is making responsible, legal, and risk-aware decisions for the organization in the long-run. They evaluate things like our financial controls, the performance of our Co-CEOs, and budgets and fundraising to help ensure the organization is acting legally and ethically. They also advise our senior management to help ensure the organization stays on track, and continues to target high-level goals for itself.If you have any questions about these positions, please contact careers@rethinkpriorities.org. If you have questions about RP’s governance generally, contact abraham@rethinkpriorities.org.What does the board of directors do?Our board’s primary functions are:Providing long-term financial oversight to the organization including:Reviewing and approving the annual budget, and spending controls for the Co-CEOsReviewing annual audits of financial statements and financial controlsProviding oversight for the Co-CEOs including:Performance evaluations of senior managementServing as contacts for staff outside the chain of commandsProviding feedback on Co-CEOs strategic plansProviding legal oversight for the organization, such as:Helping assess risky and complicated situations, and providing feedback on plans to navigate those situationsEnsuring that RP is compliant with its charitable purposesAdvising on RP’s long-term strategy and directionWhat qualifications are you looking for in board members?We are particularly interested in adding individuals who have knowledge/experience within longtermism, launching/supporting new ventures, and/or scaling organizations. We’d also be excited for candidates with professional legal or nonprofit finance experience.Do I have to be an American to join the board?No! Though we are a US-based organization, these roles do not require US residency. However, we’d like the majority of our board to be made up of US residents (including non-US citizens), and some board functions may require US residency, so while location wouldn’t be disqualifying, it may be a consideration.What’s the difference between paid and unpaid roles?The majority of our board is required by our bylaws to be unpaid. However, we think that there is significant value in our board being more engaged than many members are able to be in a voluntary capacity, so we’d like to pay up to 2 members of the board to provide administrative assistance to the other members, and to tackle some of the more work intensive tasks (such as performance evaluations of senior management). In our view, a failure for many nonprofit boards is they select for skills but not time, and that contributes towards a tendency for boards to not do a very thorough job. We're excited to experiment with one to two people who are designated to spend 10-15 hrs/week on board duties. Right now we have some idea of how this will work, but it will be a first for us. We ...]]>
Tue, 29 Nov 2022 20:02:49 +0000 EA - Apply to join Rethink Priorities’ board of directors. by abrahamrowe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to join Rethink Priorities’ board of directors., published by abrahamrowe on November 29, 2022 on The Effective Altruism Forum.Right now (through January 13th), you can apply to join RP’s board of directors in an unpaid (3-10 hours per month) or paid (10-15 hours per week) capacity.Rethink Priorities (RP) has grown quickly, is now large, and remains ambitious (see our recent post for more details). We're looking for people to join our board who can help RP really secure its foundations, and scale in the next several years. While we had been planning on opening these roles prior to FTX’s collapse because we recognized governance as an area of growth for our organization, the recent events help in highlighting why these roles are important. We want to ensure that RP is healthy and sustainable, and thinking about risk and success well in the long-run.Our board of directors plays an active role in ensuring that our senior management is making responsible, legal, and risk-aware decisions for the organization in the long-run. They evaluate things like our financial controls, the performance of our Co-CEOs, and budgets and fundraising to help ensure the organization is acting legally and ethically. They also advise our senior management to help ensure the organization stays on track, and continues to target high-level goals for itself.If you have any questions about these positions, please contact careers@rethinkpriorities.org. If you have questions about RP’s governance generally, contact abraham@rethinkpriorities.org.What does the board of directors do?Our board’s primary functions are:Providing long-term financial oversight to the organization including:Reviewing and approving the annual budget, and spending controls for the Co-CEOsReviewing annual audits of financial statements and financial controlsProviding oversight for the Co-CEOs including:Performance evaluations of senior managementServing as contacts for staff outside the chain of commandsProviding feedback on Co-CEOs strategic plansProviding legal oversight for the organization, such as:Helping assess risky and complicated situations, and providing feedback on plans to navigate those situationsEnsuring that RP is compliant with its charitable purposesAdvising on RP’s long-term strategy and directionWhat qualifications are you looking for in board members?We are particularly interested in adding individuals who have knowledge/experience within longtermism, launching/supporting new ventures, and/or scaling organizations. We’d also be excited for candidates with professional legal or nonprofit finance experience.Do I have to be an American to join the board?No! Though we are a US-based organization, these roles do not require US residency. However, we’d like the majority of our board to be made up of US residents (including non-US citizens), and some board functions may require US residency, so while location wouldn’t be disqualifying, it may be a consideration.What’s the difference between paid and unpaid roles?The majority of our board is required by our bylaws to be unpaid. However, we think that there is significant value in our board being more engaged than many members are able to be in a voluntary capacity, so we’d like to pay up to 2 members of the board to provide administrative assistance to the other members, and to tackle some of the more work intensive tasks (such as performance evaluations of senior management). In our view, a failure for many nonprofit boards is they select for skills but not time, and that contributes towards a tendency for boards to not do a very thorough job. We're excited to experiment with one to two people who are designated to spend 10-15 hrs/week on board duties. Right now we have some idea of how this will work, but it will be a first for us. We ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to join Rethink Priorities’ board of directors., published by abrahamrowe on November 29, 2022 on The Effective Altruism Forum.Right now (through January 13th), you can apply to join RP’s board of directors in an unpaid (3-10 hours per month) or paid (10-15 hours per week) capacity.Rethink Priorities (RP) has grown quickly, is now large, and remains ambitious (see our recent post for more details). We're looking for people to join our board who can help RP really secure its foundations, and scale in the next several years. While we had been planning on opening these roles prior to FTX’s collapse because we recognized governance as an area of growth for our organization, the recent events help in highlighting why these roles are important. We want to ensure that RP is healthy and sustainable, and thinking about risk and success well in the long-run.Our board of directors plays an active role in ensuring that our senior management is making responsible, legal, and risk-aware decisions for the organization in the long-run. They evaluate things like our financial controls, the performance of our Co-CEOs, and budgets and fundraising to help ensure the organization is acting legally and ethically. They also advise our senior management to help ensure the organization stays on track, and continues to target high-level goals for itself.If you have any questions about these positions, please contact careers@rethinkpriorities.org. If you have questions about RP’s governance generally, contact abraham@rethinkpriorities.org.What does the board of directors do?Our board’s primary functions are:Providing long-term financial oversight to the organization including:Reviewing and approving the annual budget, and spending controls for the Co-CEOsReviewing annual audits of financial statements and financial controlsProviding oversight for the Co-CEOs including:Performance evaluations of senior managementServing as contacts for staff outside the chain of commandsProviding feedback on Co-CEOs strategic plansProviding legal oversight for the organization, such as:Helping assess risky and complicated situations, and providing feedback on plans to navigate those situationsEnsuring that RP is compliant with its charitable purposesAdvising on RP’s long-term strategy and directionWhat qualifications are you looking for in board members?We are particularly interested in adding individuals who have knowledge/experience within longtermism, launching/supporting new ventures, and/or scaling organizations. We’d also be excited for candidates with professional legal or nonprofit finance experience.Do I have to be an American to join the board?No! Though we are a US-based organization, these roles do not require US residency. However, we’d like the majority of our board to be made up of US residents (including non-US citizens), and some board functions may require US residency, so while location wouldn’t be disqualifying, it may be a consideration.What’s the difference between paid and unpaid roles?The majority of our board is required by our bylaws to be unpaid. However, we think that there is significant value in our board being more engaged than many members are able to be in a voluntary capacity, so we’d like to pay up to 2 members of the board to provide administrative assistance to the other members, and to tackle some of the more work intensive tasks (such as performance evaluations of senior management). In our view, a failure for many nonprofit boards is they select for skills but not time, and that contributes towards a tendency for boards to not do a very thorough job. We're excited to experiment with one to two people who are designated to spend 10-15 hrs/week on board duties. Right now we have some idea of how this will work, but it will be a first for us. We ...]]>
abrahamrowe https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:47 None full 3940
suEMaFLd7vzkqybMp_EA EA - Double your donation with matching opportunities by BarryGrimes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Double your donation with matching opportunities, published by BarryGrimes on November 28, 2022 on The Effective Altruism Forum.There are several matching opportunities running this Giving Season. If you know of any that I've missed, please mention them in the comments.Double Up DriveThe 2022 Double Up Drive will start at 8:00 am PT / 11:00 am ET / 4:00 pm GMT on Giving Tuesday (29 November).I don’t know how big the matched pool will be or how long it will last so it's best to donate as soon as it opens.2022 charitiesAgainst Malaria FoundationAnimal Charity EvaluatorsClean Air Task ForceEvidence ActionFounders PledgeInternational Refugee Assistance ProjectNew IncentivesStrongMindsThe Good Food InstituteThe Life You Can SaveUBS Optimus Foundation (StrongMinds)All donations to StrongMinds will be matched up to $800,000 or until the end of 2022 (whichever comes first)Animal Charity EvaluatorsUntil December 31, any donation you make to ACE’s Recommended Charity Fund will be matched up to $300,000, thanks to the estate of a generous legacy donor.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
BarryGrimes https://forum.effectivealtruism.org/posts/suEMaFLd7vzkqybMp/double-your-donation-with-matching-opportunities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Double your donation with matching opportunities, published by BarryGrimes on November 28, 2022 on The Effective Altruism Forum.There are several matching opportunities running this Giving Season. If you know of any that I've missed, please mention them in the comments.Double Up DriveThe 2022 Double Up Drive will start at 8:00 am PT / 11:00 am ET / 4:00 pm GMT on Giving Tuesday (29 November).I don’t know how big the matched pool will be or how long it will last so it's best to donate as soon as it opens.2022 charitiesAgainst Malaria FoundationAnimal Charity EvaluatorsClean Air Task ForceEvidence ActionFounders PledgeInternational Refugee Assistance ProjectNew IncentivesStrongMindsThe Good Food InstituteThe Life You Can SaveUBS Optimus Foundation (StrongMinds)All donations to StrongMinds will be matched up to $800,000 or until the end of 2022 (whichever comes first)Animal Charity EvaluatorsUntil December 31, any donation you make to ACE’s Recommended Charity Fund will be matched up to $300,000, thanks to the estate of a generous legacy donor.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 29 Nov 2022 18:02:16 +0000 EA - Double your donation with matching opportunities by BarryGrimes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Double your donation with matching opportunities, published by BarryGrimes on November 28, 2022 on The Effective Altruism Forum.There are several matching opportunities running this Giving Season. If you know of any that I've missed, please mention them in the comments.Double Up DriveThe 2022 Double Up Drive will start at 8:00 am PT / 11:00 am ET / 4:00 pm GMT on Giving Tuesday (29 November).I don’t know how big the matched pool will be or how long it will last so it's best to donate as soon as it opens.2022 charitiesAgainst Malaria FoundationAnimal Charity EvaluatorsClean Air Task ForceEvidence ActionFounders PledgeInternational Refugee Assistance ProjectNew IncentivesStrongMindsThe Good Food InstituteThe Life You Can SaveUBS Optimus Foundation (StrongMinds)All donations to StrongMinds will be matched up to $800,000 or until the end of 2022 (whichever comes first)Animal Charity EvaluatorsUntil December 31, any donation you make to ACE’s Recommended Charity Fund will be matched up to $300,000, thanks to the estate of a generous legacy donor.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Double your donation with matching opportunities, published by BarryGrimes on November 28, 2022 on The Effective Altruism Forum.There are several matching opportunities running this Giving Season. If you know of any that I've missed, please mention them in the comments.Double Up DriveThe 2022 Double Up Drive will start at 8:00 am PT / 11:00 am ET / 4:00 pm GMT on Giving Tuesday (29 November).I don’t know how big the matched pool will be or how long it will last so it's best to donate as soon as it opens.2022 charitiesAgainst Malaria FoundationAnimal Charity EvaluatorsClean Air Task ForceEvidence ActionFounders PledgeInternational Refugee Assistance ProjectNew IncentivesStrongMindsThe Good Food InstituteThe Life You Can SaveUBS Optimus Foundation (StrongMinds)All donations to StrongMinds will be matched up to $800,000 or until the end of 2022 (whichever comes first)Animal Charity EvaluatorsUntil December 31, any donation you make to ACE’s Recommended Charity Fund will be matched up to $300,000, thanks to the estate of a generous legacy donor.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
BarryGrimes https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:29 None full 3943
9kNCuSRvcu6BGXB9H_EA EA - Why I gave AUD$12,573 to Innovations For Poverty Action by Henry Howard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I gave AUD$12,573 to Innovations For Poverty Action, published by Henry Howard on November 29, 2022 on The Effective Altruism Forum.I gave 50% of my 1st-year doctor's salary to charity last year. This was mostly to GiveWell-recommended and The Life You Can Save-recommended charities. The largest share went to Innovations For Poverty Action, a global development research organisation that designs and runs trials of global development interventions around the world in order to find which interventions are effective.We need more global development cause discoveryThe main reason I favoured Innovations For Poverty Action is my feeling that the slow rate of discovery of new effective charities is a bottleneck for effective altruism. From what I can see, GiveWell has added one charity to its top recommendations in recent years (New Incentives) while it's entirely removed its list of about 10 "standout charities". I haven't noticed many new additions to The Life You Can Save's list of recommended charities in recent years. GiveWell maxxed out the funding of its top charities last year and, while they claim they now have room for hundreds of millions more dollars, this is still a drop in the pond when compared to the total amount of philanthropy and government aid money that is spent annually worldwide. Finding further effective global development causes should be top priority, so that governments and philanthropists can be advised to direct their funds more effectively.They probably know more than usEffective altruists do a lot of independent research looking at effective ways to make the world better. This is great. An example is the recent Open Philanthropy cause exploration prize. Most effective altruism enthusiasts aren't Nobel prize-winning economists nor do they have decades of experience in global development nor do they have extensive global networks to feed them information. This all probably puts the average effective giving enthusiast at a disadvantage when it comes to seeing and seizing on global development opportunities. When it comes to effective cause discovery I think it would be difficult for anyone to outperform established global development research organisations like Innovations for Poverty Action, The Jameel Poverty Action Lab, and the Center for Effective Global Action, each of which have established networks, experience, and track records.They have a good recordInnovations for Poverty Action has conducted research showing that giving free bednets is more effective than charging for them, they conducted the research that led to Evidence Action's Dispensers for Safe Water program, they conducted the research around No Lean Season that first appeared to yield promising results but at scale was less promising (negative results are important too). They were recently involved in a promising trial of cash-transfers and cognitive behavioural therapy to reduce crime among at-risk young men in Liberia.You can give to them tax-deductably in AustraliaI've considered giving to The Jameel Poverty Action Lab or another global development research organisation. In Australia you can give to Innovations For Poverty Action via The Life You Can Save and the donation is tax-deductable. I don't know of a way to give to other global development research organisations tax-deductably from down here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Henry Howard https://forum.effectivealtruism.org/posts/9kNCuSRvcu6BGXB9H/why-i-gave-audusd12-573-to-innovations-for-poverty-action Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I gave AUD$12,573 to Innovations For Poverty Action, published by Henry Howard on November 29, 2022 on The Effective Altruism Forum.I gave 50% of my 1st-year doctor's salary to charity last year. This was mostly to GiveWell-recommended and The Life You Can Save-recommended charities. The largest share went to Innovations For Poverty Action, a global development research organisation that designs and runs trials of global development interventions around the world in order to find which interventions are effective.We need more global development cause discoveryThe main reason I favoured Innovations For Poverty Action is my feeling that the slow rate of discovery of new effective charities is a bottleneck for effective altruism. From what I can see, GiveWell has added one charity to its top recommendations in recent years (New Incentives) while it's entirely removed its list of about 10 "standout charities". I haven't noticed many new additions to The Life You Can Save's list of recommended charities in recent years. GiveWell maxxed out the funding of its top charities last year and, while they claim they now have room for hundreds of millions more dollars, this is still a drop in the pond when compared to the total amount of philanthropy and government aid money that is spent annually worldwide. Finding further effective global development causes should be top priority, so that governments and philanthropists can be advised to direct their funds more effectively.They probably know more than usEffective altruists do a lot of independent research looking at effective ways to make the world better. This is great. An example is the recent Open Philanthropy cause exploration prize. Most effective altruism enthusiasts aren't Nobel prize-winning economists nor do they have decades of experience in global development nor do they have extensive global networks to feed them information. This all probably puts the average effective giving enthusiast at a disadvantage when it comes to seeing and seizing on global development opportunities. When it comes to effective cause discovery I think it would be difficult for anyone to outperform established global development research organisations like Innovations for Poverty Action, The Jameel Poverty Action Lab, and the Center for Effective Global Action, each of which have established networks, experience, and track records.They have a good recordInnovations for Poverty Action has conducted research showing that giving free bednets is more effective than charging for them, they conducted the research that led to Evidence Action's Dispensers for Safe Water program, they conducted the research around No Lean Season that first appeared to yield promising results but at scale was less promising (negative results are important too). They were recently involved in a promising trial of cash-transfers and cognitive behavioural therapy to reduce crime among at-risk young men in Liberia.You can give to them tax-deductably in AustraliaI've considered giving to The Jameel Poverty Action Lab or another global development research organisation. In Australia you can give to Innovations For Poverty Action via The Life You Can Save and the donation is tax-deductable. I don't know of a way to give to other global development research organisations tax-deductably from down here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 29 Nov 2022 14:48:44 +0000 EA - Why I gave AUD$12,573 to Innovations For Poverty Action by Henry Howard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I gave AUD$12,573 to Innovations For Poverty Action, published by Henry Howard on November 29, 2022 on The Effective Altruism Forum.I gave 50% of my 1st-year doctor's salary to charity last year. This was mostly to GiveWell-recommended and The Life You Can Save-recommended charities. The largest share went to Innovations For Poverty Action, a global development research organisation that designs and runs trials of global development interventions around the world in order to find which interventions are effective.We need more global development cause discoveryThe main reason I favoured Innovations For Poverty Action is my feeling that the slow rate of discovery of new effective charities is a bottleneck for effective altruism. From what I can see, GiveWell has added one charity to its top recommendations in recent years (New Incentives) while it's entirely removed its list of about 10 "standout charities". I haven't noticed many new additions to The Life You Can Save's list of recommended charities in recent years. GiveWell maxxed out the funding of its top charities last year and, while they claim they now have room for hundreds of millions more dollars, this is still a drop in the pond when compared to the total amount of philanthropy and government aid money that is spent annually worldwide. Finding further effective global development causes should be top priority, so that governments and philanthropists can be advised to direct their funds more effectively.They probably know more than usEffective altruists do a lot of independent research looking at effective ways to make the world better. This is great. An example is the recent Open Philanthropy cause exploration prize. Most effective altruism enthusiasts aren't Nobel prize-winning economists nor do they have decades of experience in global development nor do they have extensive global networks to feed them information. This all probably puts the average effective giving enthusiast at a disadvantage when it comes to seeing and seizing on global development opportunities. When it comes to effective cause discovery I think it would be difficult for anyone to outperform established global development research organisations like Innovations for Poverty Action, The Jameel Poverty Action Lab, and the Center for Effective Global Action, each of which have established networks, experience, and track records.They have a good recordInnovations for Poverty Action has conducted research showing that giving free bednets is more effective than charging for them, they conducted the research that led to Evidence Action's Dispensers for Safe Water program, they conducted the research around No Lean Season that first appeared to yield promising results but at scale was less promising (negative results are important too). They were recently involved in a promising trial of cash-transfers and cognitive behavioural therapy to reduce crime among at-risk young men in Liberia.You can give to them tax-deductably in AustraliaI've considered giving to The Jameel Poverty Action Lab or another global development research organisation. In Australia you can give to Innovations For Poverty Action via The Life You Can Save and the donation is tax-deductable. I don't know of a way to give to other global development research organisations tax-deductably from down here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I gave AUD$12,573 to Innovations For Poverty Action, published by Henry Howard on November 29, 2022 on The Effective Altruism Forum.I gave 50% of my 1st-year doctor's salary to charity last year. This was mostly to GiveWell-recommended and The Life You Can Save-recommended charities. The largest share went to Innovations For Poverty Action, a global development research organisation that designs and runs trials of global development interventions around the world in order to find which interventions are effective.We need more global development cause discoveryThe main reason I favoured Innovations For Poverty Action is my feeling that the slow rate of discovery of new effective charities is a bottleneck for effective altruism. From what I can see, GiveWell has added one charity to its top recommendations in recent years (New Incentives) while it's entirely removed its list of about 10 "standout charities". I haven't noticed many new additions to The Life You Can Save's list of recommended charities in recent years. GiveWell maxxed out the funding of its top charities last year and, while they claim they now have room for hundreds of millions more dollars, this is still a drop in the pond when compared to the total amount of philanthropy and government aid money that is spent annually worldwide. Finding further effective global development causes should be top priority, so that governments and philanthropists can be advised to direct their funds more effectively.They probably know more than usEffective altruists do a lot of independent research looking at effective ways to make the world better. This is great. An example is the recent Open Philanthropy cause exploration prize. Most effective altruism enthusiasts aren't Nobel prize-winning economists nor do they have decades of experience in global development nor do they have extensive global networks to feed them information. This all probably puts the average effective giving enthusiast at a disadvantage when it comes to seeing and seizing on global development opportunities. When it comes to effective cause discovery I think it would be difficult for anyone to outperform established global development research organisations like Innovations for Poverty Action, The Jameel Poverty Action Lab, and the Center for Effective Global Action, each of which have established networks, experience, and track records.They have a good recordInnovations for Poverty Action has conducted research showing that giving free bednets is more effective than charging for them, they conducted the research that led to Evidence Action's Dispensers for Safe Water program, they conducted the research around No Lean Season that first appeared to yield promising results but at scale was less promising (negative results are important too). They were recently involved in a promising trial of cash-transfers and cognitive behavioural therapy to reduce crime among at-risk young men in Liberia.You can give to them tax-deductably in AustraliaI've considered giving to The Jameel Poverty Action Lab or another global development research organisation. In Australia you can give to Innovations For Poverty Action via The Life You Can Save and the donation is tax-deductable. I don't know of a way to give to other global development research organisations tax-deductably from down here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Henry Howard https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:10 None full 3942
wHyvkwpwCA4nm46rp_EA EA - Why Giving What We Can recommends using expert-led charitable funds by Michael Townsend Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Giving What We Can recommends using expert-led charitable funds, published by Michael Townsend on November 28, 2022 on The Effective Altruism Forum.Giving What We Can is looking to build more of an 'institutional view' on various aspects of effective giving. Something we've recently decided to more explicitly push for is donating via a fund. I'm cross-posting our page outlining why we recommend funds with hopes of getting feedback and questions from forum users :).Funds are a relatively new way for donors to coordinate their giving to maximise their impact. This page outlines what a fund is, and why Giving What We Can generally recommends donating to funds rather than directly to individual charities.How do funds work?Funds allow donors to give together, as a community. Rather than each individual giving to a specific charity, donating via a fund pools their donations so that expert grantmakers and evaluators can direct those funds as cost-effectively as possible.Using a fund is similar to using an actively managed investment fund instead of trying to pick individual stocks to invest in: in both cases, you let experts decide what to do with your money. This analogy helps explain the structure of a charitable fund, but it likely understates its benefits. This is because:Investment funds regularly take a management fee (hedge funds, for example, typically take 1–4% of invested funds each year). Whereas the charitable funds we recommend don’t take any fees for their work.According to the efficient-market hypothesis you should expect that any given stock costs roughly what it should given the best available information. If true, that would mean there’s not much room for experts to pick better stocks than you could (even if you picked at random!). Whereas the best charity can be at least 10 times better than a typical charity even within the same area. This means that there is substantial room for experts to make sure your donations do far more good than they otherwise would.As we’ll see next, there are other advantages of funds — both for the donor, and the charity.Advantages of fundsFunds make it easier to ensure that effective organisations receive the funding they need, when they need it.Donating through an expert-led fund is often a much more effective way to support a cause than donating individually — even if you and the fund support the same organisations. This is because individual donors aren’t able to easily coordinate with each other, nor the organisations they support. Whereas if they donate together through a fund, the fund manager can:Learn how much funding the organisation needs.Provide funding when the organisation needs it.Monitor how that funding is used.Work with and incentivise organisations to be even more impactful.As a result, donors might prefer funds because their money can often be allocated more efficiently and effectively.But organisations also often benefit from the fund model. This is because:Funds can provide a consistent and reliable stream of funding for effective organisations to carry out their work, whereas relying on individual donations can often be a challenge.Fund managers can provide support in addition to funding. They often give advice and share key connections that can help the organisation succeed.In general, because a fund pools the resources of multiple donors, and because a fund manager can spend more time investigating and supporting organisations than individual donors can, funds are often a highly cost-effective donation option.When donating to funds may not be the best optionWhile we think most donors should give to funds, there are some cases where it might not make sense. This would be if:You think you can find more cost-effective donation opportunities by yourself. For example, you may have...]]>
Michael Townsend https://forum.effectivealtruism.org/posts/wHyvkwpwCA4nm46rp/why-giving-what-we-can-recommends-using-expert-led Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Giving What We Can recommends using expert-led charitable funds, published by Michael Townsend on November 28, 2022 on The Effective Altruism Forum.Giving What We Can is looking to build more of an 'institutional view' on various aspects of effective giving. Something we've recently decided to more explicitly push for is donating via a fund. I'm cross-posting our page outlining why we recommend funds with hopes of getting feedback and questions from forum users :).Funds are a relatively new way for donors to coordinate their giving to maximise their impact. This page outlines what a fund is, and why Giving What We Can generally recommends donating to funds rather than directly to individual charities.How do funds work?Funds allow donors to give together, as a community. Rather than each individual giving to a specific charity, donating via a fund pools their donations so that expert grantmakers and evaluators can direct those funds as cost-effectively as possible.Using a fund is similar to using an actively managed investment fund instead of trying to pick individual stocks to invest in: in both cases, you let experts decide what to do with your money. This analogy helps explain the structure of a charitable fund, but it likely understates its benefits. This is because:Investment funds regularly take a management fee (hedge funds, for example, typically take 1–4% of invested funds each year). Whereas the charitable funds we recommend don’t take any fees for their work.According to the efficient-market hypothesis you should expect that any given stock costs roughly what it should given the best available information. If true, that would mean there’s not much room for experts to pick better stocks than you could (even if you picked at random!). Whereas the best charity can be at least 10 times better than a typical charity even within the same area. This means that there is substantial room for experts to make sure your donations do far more good than they otherwise would.As we’ll see next, there are other advantages of funds — both for the donor, and the charity.Advantages of fundsFunds make it easier to ensure that effective organisations receive the funding they need, when they need it.Donating through an expert-led fund is often a much more effective way to support a cause than donating individually — even if you and the fund support the same organisations. This is because individual donors aren’t able to easily coordinate with each other, nor the organisations they support. Whereas if they donate together through a fund, the fund manager can:Learn how much funding the organisation needs.Provide funding when the organisation needs it.Monitor how that funding is used.Work with and incentivise organisations to be even more impactful.As a result, donors might prefer funds because their money can often be allocated more efficiently and effectively.But organisations also often benefit from the fund model. This is because:Funds can provide a consistent and reliable stream of funding for effective organisations to carry out their work, whereas relying on individual donations can often be a challenge.Fund managers can provide support in addition to funding. They often give advice and share key connections that can help the organisation succeed.In general, because a fund pools the resources of multiple donors, and because a fund manager can spend more time investigating and supporting organisations than individual donors can, funds are often a highly cost-effective donation option.When donating to funds may not be the best optionWhile we think most donors should give to funds, there are some cases where it might not make sense. This would be if:You think you can find more cost-effective donation opportunities by yourself. For example, you may have...]]>
Mon, 28 Nov 2022 23:28:38 +0000 EA - Why Giving What We Can recommends using expert-led charitable funds by Michael Townsend Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Giving What We Can recommends using expert-led charitable funds, published by Michael Townsend on November 28, 2022 on The Effective Altruism Forum.Giving What We Can is looking to build more of an 'institutional view' on various aspects of effective giving. Something we've recently decided to more explicitly push for is donating via a fund. I'm cross-posting our page outlining why we recommend funds with hopes of getting feedback and questions from forum users :).Funds are a relatively new way for donors to coordinate their giving to maximise their impact. This page outlines what a fund is, and why Giving What We Can generally recommends donating to funds rather than directly to individual charities.How do funds work?Funds allow donors to give together, as a community. Rather than each individual giving to a specific charity, donating via a fund pools their donations so that expert grantmakers and evaluators can direct those funds as cost-effectively as possible.Using a fund is similar to using an actively managed investment fund instead of trying to pick individual stocks to invest in: in both cases, you let experts decide what to do with your money. This analogy helps explain the structure of a charitable fund, but it likely understates its benefits. This is because:Investment funds regularly take a management fee (hedge funds, for example, typically take 1–4% of invested funds each year). Whereas the charitable funds we recommend don’t take any fees for their work.According to the efficient-market hypothesis you should expect that any given stock costs roughly what it should given the best available information. If true, that would mean there’s not much room for experts to pick better stocks than you could (even if you picked at random!). Whereas the best charity can be at least 10 times better than a typical charity even within the same area. This means that there is substantial room for experts to make sure your donations do far more good than they otherwise would.As we’ll see next, there are other advantages of funds — both for the donor, and the charity.Advantages of fundsFunds make it easier to ensure that effective organisations receive the funding they need, when they need it.Donating through an expert-led fund is often a much more effective way to support a cause than donating individually — even if you and the fund support the same organisations. This is because individual donors aren’t able to easily coordinate with each other, nor the organisations they support. Whereas if they donate together through a fund, the fund manager can:Learn how much funding the organisation needs.Provide funding when the organisation needs it.Monitor how that funding is used.Work with and incentivise organisations to be even more impactful.As a result, donors might prefer funds because their money can often be allocated more efficiently and effectively.But organisations also often benefit from the fund model. This is because:Funds can provide a consistent and reliable stream of funding for effective organisations to carry out their work, whereas relying on individual donations can often be a challenge.Fund managers can provide support in addition to funding. They often give advice and share key connections that can help the organisation succeed.In general, because a fund pools the resources of multiple donors, and because a fund manager can spend more time investigating and supporting organisations than individual donors can, funds are often a highly cost-effective donation option.When donating to funds may not be the best optionWhile we think most donors should give to funds, there are some cases where it might not make sense. This would be if:You think you can find more cost-effective donation opportunities by yourself. For example, you may have...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Giving What We Can recommends using expert-led charitable funds, published by Michael Townsend on November 28, 2022 on The Effective Altruism Forum.Giving What We Can is looking to build more of an 'institutional view' on various aspects of effective giving. Something we've recently decided to more explicitly push for is donating via a fund. I'm cross-posting our page outlining why we recommend funds with hopes of getting feedback and questions from forum users :).Funds are a relatively new way for donors to coordinate their giving to maximise their impact. This page outlines what a fund is, and why Giving What We Can generally recommends donating to funds rather than directly to individual charities.How do funds work?Funds allow donors to give together, as a community. Rather than each individual giving to a specific charity, donating via a fund pools their donations so that expert grantmakers and evaluators can direct those funds as cost-effectively as possible.Using a fund is similar to using an actively managed investment fund instead of trying to pick individual stocks to invest in: in both cases, you let experts decide what to do with your money. This analogy helps explain the structure of a charitable fund, but it likely understates its benefits. This is because:Investment funds regularly take a management fee (hedge funds, for example, typically take 1–4% of invested funds each year). Whereas the charitable funds we recommend don’t take any fees for their work.According to the efficient-market hypothesis you should expect that any given stock costs roughly what it should given the best available information. If true, that would mean there’s not much room for experts to pick better stocks than you could (even if you picked at random!). Whereas the best charity can be at least 10 times better than a typical charity even within the same area. This means that there is substantial room for experts to make sure your donations do far more good than they otherwise would.As we’ll see next, there are other advantages of funds — both for the donor, and the charity.Advantages of fundsFunds make it easier to ensure that effective organisations receive the funding they need, when they need it.Donating through an expert-led fund is often a much more effective way to support a cause than donating individually — even if you and the fund support the same organisations. This is because individual donors aren’t able to easily coordinate with each other, nor the organisations they support. Whereas if they donate together through a fund, the fund manager can:Learn how much funding the organisation needs.Provide funding when the organisation needs it.Monitor how that funding is used.Work with and incentivise organisations to be even more impactful.As a result, donors might prefer funds because their money can often be allocated more efficiently and effectively.But organisations also often benefit from the fund model. This is because:Funds can provide a consistent and reliable stream of funding for effective organisations to carry out their work, whereas relying on individual donations can often be a challenge.Fund managers can provide support in addition to funding. They often give advice and share key connections that can help the organisation succeed.In general, because a fund pools the resources of multiple donors, and because a fund manager can spend more time investigating and supporting organisations than individual donors can, funds are often a highly cost-effective donation option.When donating to funds may not be the best optionWhile we think most donors should give to funds, there are some cases where it might not make sense. This would be if:You think you can find more cost-effective donation opportunities by yourself. For example, you may have...]]>
Michael Townsend https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:40 None full 3935
n9h6RAPLMoFbf66Pi_EA EA - How have your views on where to give updated over the past year? by JulianHazell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How have your views on where to give updated over the past year?, published by JulianHazell on November 28, 2022 on The Effective Altruism Forum.I'm wondering if your views on where to give have meaningfully changed throughout 2022.If they have, please share! I'd love to hear why.Personally, I'd say my biggest update has been towards giving opportunities that are more speculative but potentially higher EV. My giving portfolio in the past was heavily tilted towards GiveWell, primarily because of their demonstrated track record and the strong evidenced behind their top charities.But now, I'm increasingly feeling comfortable with shifting my portfolio more towards other speculative options.I still think GiveWell is an excellent choice, but I have a bit of a stronger appetite now for taking risks with my giving. This was partially motivated by a talk I watched Hilary Greaves give on making a difference.Some charities I have given to/intend to give more to going forward include:Happier Lives Institute for their work on improving competition and intellectual diversity in the effective giving space;The Long Term Future Fund due to my increased belief that protecting the long-term future is an especially important moral priority;And Animal Charity Evaluator’s Recommended Charity Fund due to my increased view that funding charities working to improve animal welfare is a particularly effective way to increase present-day well-being.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
JulianHazell https://forum.effectivealtruism.org/posts/n9h6RAPLMoFbf66Pi/how-have-your-views-on-where-to-give-updated-over-the-past Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How have your views on where to give updated over the past year?, published by JulianHazell on November 28, 2022 on The Effective Altruism Forum.I'm wondering if your views on where to give have meaningfully changed throughout 2022.If they have, please share! I'd love to hear why.Personally, I'd say my biggest update has been towards giving opportunities that are more speculative but potentially higher EV. My giving portfolio in the past was heavily tilted towards GiveWell, primarily because of their demonstrated track record and the strong evidenced behind their top charities.But now, I'm increasingly feeling comfortable with shifting my portfolio more towards other speculative options.I still think GiveWell is an excellent choice, but I have a bit of a stronger appetite now for taking risks with my giving. This was partially motivated by a talk I watched Hilary Greaves give on making a difference.Some charities I have given to/intend to give more to going forward include:Happier Lives Institute for their work on improving competition and intellectual diversity in the effective giving space;The Long Term Future Fund due to my increased belief that protecting the long-term future is an especially important moral priority;And Animal Charity Evaluator’s Recommended Charity Fund due to my increased view that funding charities working to improve animal welfare is a particularly effective way to increase present-day well-being.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 28 Nov 2022 23:13:00 +0000 EA - How have your views on where to give updated over the past year? by JulianHazell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How have your views on where to give updated over the past year?, published by JulianHazell on November 28, 2022 on The Effective Altruism Forum.I'm wondering if your views on where to give have meaningfully changed throughout 2022.If they have, please share! I'd love to hear why.Personally, I'd say my biggest update has been towards giving opportunities that are more speculative but potentially higher EV. My giving portfolio in the past was heavily tilted towards GiveWell, primarily because of their demonstrated track record and the strong evidenced behind their top charities.But now, I'm increasingly feeling comfortable with shifting my portfolio more towards other speculative options.I still think GiveWell is an excellent choice, but I have a bit of a stronger appetite now for taking risks with my giving. This was partially motivated by a talk I watched Hilary Greaves give on making a difference.Some charities I have given to/intend to give more to going forward include:Happier Lives Institute for their work on improving competition and intellectual diversity in the effective giving space;The Long Term Future Fund due to my increased belief that protecting the long-term future is an especially important moral priority;And Animal Charity Evaluator’s Recommended Charity Fund due to my increased view that funding charities working to improve animal welfare is a particularly effective way to increase present-day well-being.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How have your views on where to give updated over the past year?, published by JulianHazell on November 28, 2022 on The Effective Altruism Forum.I'm wondering if your views on where to give have meaningfully changed throughout 2022.If they have, please share! I'd love to hear why.Personally, I'd say my biggest update has been towards giving opportunities that are more speculative but potentially higher EV. My giving portfolio in the past was heavily tilted towards GiveWell, primarily because of their demonstrated track record and the strong evidenced behind their top charities.But now, I'm increasingly feeling comfortable with shifting my portfolio more towards other speculative options.I still think GiveWell is an excellent choice, but I have a bit of a stronger appetite now for taking risks with my giving. This was partially motivated by a talk I watched Hilary Greaves give on making a difference.Some charities I have given to/intend to give more to going forward include:Happier Lives Institute for their work on improving competition and intellectual diversity in the effective giving space;The Long Term Future Fund due to my increased belief that protecting the long-term future is an especially important moral priority;And Animal Charity Evaluator’s Recommended Charity Fund due to my increased view that funding charities working to improve animal welfare is a particularly effective way to increase present-day well-being.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
JulianHazell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:33 None full 3936
jqJLcsqEqdnd35kTB_EA EA - List of past fraudsters similar to SBF by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of past fraudsters similar to SBF, published by NunoSempere on November 28, 2022 on The Effective Altruism Forum.To inform my forecasting around FTX events, I looked at the Wikipedia list of fraudsters and selected those I subjectively found similar—you can see a spreadsheet with my selection here. For each of the similar fraudsters, I present some common basic details below together with some notes.My main takeaway is that many salient aspects of FTX have precedents: the incestuous relationship between an exchange and a trading house (Bernie Madoff, Richard Whitney), a philosophical or philanthropic component (Enric Duran, Tom Petters, etc.), embroiling friends and families in the scheme (Charles Ponzi), or multi-billion fraud not getting found out for years (Elizabeth Holmes, many others).Fraud with a philosophical, philanthropic or religious componentBernard EbbersPrison: YesJurisdiction: USAmount: $18BI find the section on his faith most informative:While CEO of WorldCom, he was a member of the Easthaven Baptist Church in Brookhaven, Mississippi. As a high-profile member of the congregation, Ebbers regularly taught Sunday school and attended the morning church service with his family. His faith was overt, and he often started corporate meetings with prayer. When the allegations of conspiracy and fraud were first brought to light in 2002, Ebbers addressed the congregation and insisted on his innocence. "I just want you to know you aren't going to church with a crook," he said. "No one will find me to have knowingly committed fraud."Also note that eventually, $8B was restored to investors.Enric DuránPrison: No, life in hidingJurisdiction: SpainAmount: ~$700k (all amounts are inflation approximate and adjusted using in2013dollars.com)During the 2008 crisis, he robbed Spanish banks by taking out spurious loans, and donated the amounts to anticapitalist causes. He wrote a guide about how to do this, and widely distributed it: an online version in Spanish and English can be found here.Personal takeaway: Stealing money for altruistic causes is not unprecedented. And if you are going to cross that line, it can be done with much more style. It will also be viewed much more sympathetically if you steal from organizations perceived to be corrupt.Tom PettersPrison: YesJurisdiction: USAmount: ~$5BSome of his donations were later returned (bold emphasis my own):Petters was appointed to the board of trustees for the College of St. Benedict in 2002; his mother had attended the school. In 2006 he gave $2 million for improvements to St. John's Abbey on the campus of adjacent Saint John's University. In light of the criminal prosecution, St. John's Abbey arranged to return the $2 million gift to the court-appointed receiver for the Petters bankruptcy. In October 2007, Petters made a $5.3 million gift to the College of St. Benedict to create the Thomas J. Petters Center for Global Education. In 2006, he served as a co-chairman of a capital campaign at his high school, Cathedral High School, and offered to match donations up to $750,000.Petters formed the John T. Petters Foundation to provide gifts and endowments at select universities to benefit future college students. The foundation was formed to honor his son, John Thomas Petters, who was killed on a visit in 2004 to Florence, Italy. The college student inadvertently wandered onto private property where the owner, Alfio Raugei, mistook him for an intruder and stabbed him to death." In response, in September 2004, Tom Petters pledged $10 million to his late son's college, Miami University. He later promised an additional $4 million, with the total to support two professorships and the John T. Petters Center for Leadership, Ethics and Skills Development within the Farmer School of Business. Miami Univers...]]>
NunoSempere https://forum.effectivealtruism.org/posts/jqJLcsqEqdnd35kTB/list-of-past-fraudsters-similar-to-sbf Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of past fraudsters similar to SBF, published by NunoSempere on November 28, 2022 on The Effective Altruism Forum.To inform my forecasting around FTX events, I looked at the Wikipedia list of fraudsters and selected those I subjectively found similar—you can see a spreadsheet with my selection here. For each of the similar fraudsters, I present some common basic details below together with some notes.My main takeaway is that many salient aspects of FTX have precedents: the incestuous relationship between an exchange and a trading house (Bernie Madoff, Richard Whitney), a philosophical or philanthropic component (Enric Duran, Tom Petters, etc.), embroiling friends and families in the scheme (Charles Ponzi), or multi-billion fraud not getting found out for years (Elizabeth Holmes, many others).Fraud with a philosophical, philanthropic or religious componentBernard EbbersPrison: YesJurisdiction: USAmount: $18BI find the section on his faith most informative:While CEO of WorldCom, he was a member of the Easthaven Baptist Church in Brookhaven, Mississippi. As a high-profile member of the congregation, Ebbers regularly taught Sunday school and attended the morning church service with his family. His faith was overt, and he often started corporate meetings with prayer. When the allegations of conspiracy and fraud were first brought to light in 2002, Ebbers addressed the congregation and insisted on his innocence. "I just want you to know you aren't going to church with a crook," he said. "No one will find me to have knowingly committed fraud."Also note that eventually, $8B was restored to investors.Enric DuránPrison: No, life in hidingJurisdiction: SpainAmount: ~$700k (all amounts are inflation approximate and adjusted using in2013dollars.com)During the 2008 crisis, he robbed Spanish banks by taking out spurious loans, and donated the amounts to anticapitalist causes. He wrote a guide about how to do this, and widely distributed it: an online version in Spanish and English can be found here.Personal takeaway: Stealing money for altruistic causes is not unprecedented. And if you are going to cross that line, it can be done with much more style. It will also be viewed much more sympathetically if you steal from organizations perceived to be corrupt.Tom PettersPrison: YesJurisdiction: USAmount: ~$5BSome of his donations were later returned (bold emphasis my own):Petters was appointed to the board of trustees for the College of St. Benedict in 2002; his mother had attended the school. In 2006 he gave $2 million for improvements to St. John's Abbey on the campus of adjacent Saint John's University. In light of the criminal prosecution, St. John's Abbey arranged to return the $2 million gift to the court-appointed receiver for the Petters bankruptcy. In October 2007, Petters made a $5.3 million gift to the College of St. Benedict to create the Thomas J. Petters Center for Global Education. In 2006, he served as a co-chairman of a capital campaign at his high school, Cathedral High School, and offered to match donations up to $750,000.Petters formed the John T. Petters Foundation to provide gifts and endowments at select universities to benefit future college students. The foundation was formed to honor his son, John Thomas Petters, who was killed on a visit in 2004 to Florence, Italy. The college student inadvertently wandered onto private property where the owner, Alfio Raugei, mistook him for an intruder and stabbed him to death." In response, in September 2004, Tom Petters pledged $10 million to his late son's college, Miami University. He later promised an additional $4 million, with the total to support two professorships and the John T. Petters Center for Leadership, Ethics and Skills Development within the Farmer School of Business. Miami Univers...]]>
Mon, 28 Nov 2022 21:26:28 +0000 EA - List of past fraudsters similar to SBF by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of past fraudsters similar to SBF, published by NunoSempere on November 28, 2022 on The Effective Altruism Forum.To inform my forecasting around FTX events, I looked at the Wikipedia list of fraudsters and selected those I subjectively found similar—you can see a spreadsheet with my selection here. For each of the similar fraudsters, I present some common basic details below together with some notes.My main takeaway is that many salient aspects of FTX have precedents: the incestuous relationship between an exchange and a trading house (Bernie Madoff, Richard Whitney), a philosophical or philanthropic component (Enric Duran, Tom Petters, etc.), embroiling friends and families in the scheme (Charles Ponzi), or multi-billion fraud not getting found out for years (Elizabeth Holmes, many others).Fraud with a philosophical, philanthropic or religious componentBernard EbbersPrison: YesJurisdiction: USAmount: $18BI find the section on his faith most informative:While CEO of WorldCom, he was a member of the Easthaven Baptist Church in Brookhaven, Mississippi. As a high-profile member of the congregation, Ebbers regularly taught Sunday school and attended the morning church service with his family. His faith was overt, and he often started corporate meetings with prayer. When the allegations of conspiracy and fraud were first brought to light in 2002, Ebbers addressed the congregation and insisted on his innocence. "I just want you to know you aren't going to church with a crook," he said. "No one will find me to have knowingly committed fraud."Also note that eventually, $8B was restored to investors.Enric DuránPrison: No, life in hidingJurisdiction: SpainAmount: ~$700k (all amounts are inflation approximate and adjusted using in2013dollars.com)During the 2008 crisis, he robbed Spanish banks by taking out spurious loans, and donated the amounts to anticapitalist causes. He wrote a guide about how to do this, and widely distributed it: an online version in Spanish and English can be found here.Personal takeaway: Stealing money for altruistic causes is not unprecedented. And if you are going to cross that line, it can be done with much more style. It will also be viewed much more sympathetically if you steal from organizations perceived to be corrupt.Tom PettersPrison: YesJurisdiction: USAmount: ~$5BSome of his donations were later returned (bold emphasis my own):Petters was appointed to the board of trustees for the College of St. Benedict in 2002; his mother had attended the school. In 2006 he gave $2 million for improvements to St. John's Abbey on the campus of adjacent Saint John's University. In light of the criminal prosecution, St. John's Abbey arranged to return the $2 million gift to the court-appointed receiver for the Petters bankruptcy. In October 2007, Petters made a $5.3 million gift to the College of St. Benedict to create the Thomas J. Petters Center for Global Education. In 2006, he served as a co-chairman of a capital campaign at his high school, Cathedral High School, and offered to match donations up to $750,000.Petters formed the John T. Petters Foundation to provide gifts and endowments at select universities to benefit future college students. The foundation was formed to honor his son, John Thomas Petters, who was killed on a visit in 2004 to Florence, Italy. The college student inadvertently wandered onto private property where the owner, Alfio Raugei, mistook him for an intruder and stabbed him to death." In response, in September 2004, Tom Petters pledged $10 million to his late son's college, Miami University. He later promised an additional $4 million, with the total to support two professorships and the John T. Petters Center for Leadership, Ethics and Skills Development within the Farmer School of Business. Miami Univers...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of past fraudsters similar to SBF, published by NunoSempere on November 28, 2022 on The Effective Altruism Forum.To inform my forecasting around FTX events, I looked at the Wikipedia list of fraudsters and selected those I subjectively found similar—you can see a spreadsheet with my selection here. For each of the similar fraudsters, I present some common basic details below together with some notes.My main takeaway is that many salient aspects of FTX have precedents: the incestuous relationship between an exchange and a trading house (Bernie Madoff, Richard Whitney), a philosophical or philanthropic component (Enric Duran, Tom Petters, etc.), embroiling friends and families in the scheme (Charles Ponzi), or multi-billion fraud not getting found out for years (Elizabeth Holmes, many others).Fraud with a philosophical, philanthropic or religious componentBernard EbbersPrison: YesJurisdiction: USAmount: $18BI find the section on his faith most informative:While CEO of WorldCom, he was a member of the Easthaven Baptist Church in Brookhaven, Mississippi. As a high-profile member of the congregation, Ebbers regularly taught Sunday school and attended the morning church service with his family. His faith was overt, and he often started corporate meetings with prayer. When the allegations of conspiracy and fraud were first brought to light in 2002, Ebbers addressed the congregation and insisted on his innocence. "I just want you to know you aren't going to church with a crook," he said. "No one will find me to have knowingly committed fraud."Also note that eventually, $8B was restored to investors.Enric DuránPrison: No, life in hidingJurisdiction: SpainAmount: ~$700k (all amounts are inflation approximate and adjusted using in2013dollars.com)During the 2008 crisis, he robbed Spanish banks by taking out spurious loans, and donated the amounts to anticapitalist causes. He wrote a guide about how to do this, and widely distributed it: an online version in Spanish and English can be found here.Personal takeaway: Stealing money for altruistic causes is not unprecedented. And if you are going to cross that line, it can be done with much more style. It will also be viewed much more sympathetically if you steal from organizations perceived to be corrupt.Tom PettersPrison: YesJurisdiction: USAmount: ~$5BSome of his donations were later returned (bold emphasis my own):Petters was appointed to the board of trustees for the College of St. Benedict in 2002; his mother had attended the school. In 2006 he gave $2 million for improvements to St. John's Abbey on the campus of adjacent Saint John's University. In light of the criminal prosecution, St. John's Abbey arranged to return the $2 million gift to the court-appointed receiver for the Petters bankruptcy. In October 2007, Petters made a $5.3 million gift to the College of St. Benedict to create the Thomas J. Petters Center for Global Education. In 2006, he served as a co-chairman of a capital campaign at his high school, Cathedral High School, and offered to match donations up to $750,000.Petters formed the John T. Petters Foundation to provide gifts and endowments at select universities to benefit future college students. The foundation was formed to honor his son, John Thomas Petters, who was killed on a visit in 2004 to Florence, Italy. The college student inadvertently wandered onto private property where the owner, Alfio Raugei, mistook him for an intruder and stabbed him to death." In response, in September 2004, Tom Petters pledged $10 million to his late son's college, Miami University. He later promised an additional $4 million, with the total to support two professorships and the John T. Petters Center for Leadership, Ethics and Skills Development within the Farmer School of Business. Miami Univers...]]>
NunoSempere https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:13 None full 3930
fLYiwuxyFF9q3pe6B_EA EA - Effective giving subforum and other updates (bonus Forum update November 2022) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective giving subforum and other updates (bonus Forum update November 2022), published by Lizka on November 28, 2022 on The Effective Altruism Forum.TL;DR: We’ve launched an “effective giving” subforum — the banner will be up for a couple of days. We’re announcing this and a couple other updates.More detailed summary:We’ve launched the effective giving subforum and are working on our other test subforums ⬇️Narrated versions of some posts will be available soon ⬇️We’re testing targeted advertising for high-impact jobs ⬇️We’ve fixed some bugs for pasting from a Google Document ⬇️This is a short announcement post — we mostly wanted to give more context about the subforum. It’s also an opportunity for you to comment and share feedback, which we’d really appreciate (and you can always suggest new features here).Effective giving subforum — and other work on subforumsWe previously announced our first two pilot subforums (bioethics and software engineering). We’re now adding a third: effective giving — please feel free to join or explore it!We hope that subforums will let usersExplore discussions and content about the subforum’s topicStart discussions — including more casual threads — that will be visible only to people who have joined or are exploring the subforumKeep up to date with news and ideas that are most relevant to themWe’ve made significant changes to the subforums since the first ones launched, and expect to make more (we’re aware of some bugs), so all your feedback is really valuable. You can pass that on by commenting on this post or emailing us at forum@centreforeffectivealtruism.org.Other things to know about joining a subforumJoining the subforum will automatically subscribe you to the related topic, meaning that posts with the relevant tag will stay on the Frontpage for longer for you (as if they had 25 extra karma points). You can change that by clicking on the bell icon on the subforum page and setting your preferences accordingly (e.g. by removing “upweight on frontpage”).For now, joining will also add a tag to your profile. We plan to set up a way for users to opt out of this (and we might change the default settings), but it’s currently not possible. Note that you can also add more topics you’re interested in (whether or not they have associated subforums) by going to your profile and clicking “edit public profile” and finding the “My Activity” section.Audio narrations of some Forum postsThere will likely be a proper announcement about this sometime soon, but, in addition to the Nonlinear Library, some EA Forum posts will get human narrations (that will be accessible in podcast apps and on the post pages themselves). This will be a collaboration with TYPE III AUDIO.Testing targeted advertisements for impactful jobsSome of you might have seen an advertisement for an open position at Metaculus (likely because you had interacted with the “software engineering” topic). We’ll be testing different types of job advertisements over the next month or so. As always, feedback is appreciated.Bugs fixed for pasting from Google DocumentsIn some situations, images in posts that had been copy-pasted from Google Documents were broken. Some links were also failing. The major bugs here have been fixed.Unfortunately, copy-pasting internal headers automatically (say, if you have a document with a table of contents that links to other sections in the document, and you want to copy-paste that into a Forum draft such that the table of contents links to sections in the Forum post) is currently not possible, and we might never make it work. The easiest way to create internal links to sections is outlined here.Please give us feedback!You can comment here, reach us at forum@centreforeffectivealtruism.org, or comment on the feature suggestions thread....]]>
Lizka https://forum.effectivealtruism.org/posts/fLYiwuxyFF9q3pe6B/effective-giving-subforum-and-other-updates-bonus-forum Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective giving subforum and other updates (bonus Forum update November 2022), published by Lizka on November 28, 2022 on The Effective Altruism Forum.TL;DR: We’ve launched an “effective giving” subforum — the banner will be up for a couple of days. We’re announcing this and a couple other updates.More detailed summary:We’ve launched the effective giving subforum and are working on our other test subforums ⬇️Narrated versions of some posts will be available soon ⬇️We’re testing targeted advertising for high-impact jobs ⬇️We’ve fixed some bugs for pasting from a Google Document ⬇️This is a short announcement post — we mostly wanted to give more context about the subforum. It’s also an opportunity for you to comment and share feedback, which we’d really appreciate (and you can always suggest new features here).Effective giving subforum — and other work on subforumsWe previously announced our first two pilot subforums (bioethics and software engineering). We’re now adding a third: effective giving — please feel free to join or explore it!We hope that subforums will let usersExplore discussions and content about the subforum’s topicStart discussions — including more casual threads — that will be visible only to people who have joined or are exploring the subforumKeep up to date with news and ideas that are most relevant to themWe’ve made significant changes to the subforums since the first ones launched, and expect to make more (we’re aware of some bugs), so all your feedback is really valuable. You can pass that on by commenting on this post or emailing us at forum@centreforeffectivealtruism.org.Other things to know about joining a subforumJoining the subforum will automatically subscribe you to the related topic, meaning that posts with the relevant tag will stay on the Frontpage for longer for you (as if they had 25 extra karma points). You can change that by clicking on the bell icon on the subforum page and setting your preferences accordingly (e.g. by removing “upweight on frontpage”).For now, joining will also add a tag to your profile. We plan to set up a way for users to opt out of this (and we might change the default settings), but it’s currently not possible. Note that you can also add more topics you’re interested in (whether or not they have associated subforums) by going to your profile and clicking “edit public profile” and finding the “My Activity” section.Audio narrations of some Forum postsThere will likely be a proper announcement about this sometime soon, but, in addition to the Nonlinear Library, some EA Forum posts will get human narrations (that will be accessible in podcast apps and on the post pages themselves). This will be a collaboration with TYPE III AUDIO.Testing targeted advertisements for impactful jobsSome of you might have seen an advertisement for an open position at Metaculus (likely because you had interacted with the “software engineering” topic). We’ll be testing different types of job advertisements over the next month or so. As always, feedback is appreciated.Bugs fixed for pasting from Google DocumentsIn some situations, images in posts that had been copy-pasted from Google Documents were broken. Some links were also failing. The major bugs here have been fixed.Unfortunately, copy-pasting internal headers automatically (say, if you have a document with a table of contents that links to other sections in the document, and you want to copy-paste that into a Forum draft such that the table of contents links to sections in the Forum post) is currently not possible, and we might never make it work. The easiest way to create internal links to sections is outlined here.Please give us feedback!You can comment here, reach us at forum@centreforeffectivealtruism.org, or comment on the feature suggestions thread....]]>
Mon, 28 Nov 2022 16:40:29 +0000 EA - Effective giving subforum and other updates (bonus Forum update November 2022) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective giving subforum and other updates (bonus Forum update November 2022), published by Lizka on November 28, 2022 on The Effective Altruism Forum.TL;DR: We’ve launched an “effective giving” subforum — the banner will be up for a couple of days. We’re announcing this and a couple other updates.More detailed summary:We’ve launched the effective giving subforum and are working on our other test subforums ⬇️Narrated versions of some posts will be available soon ⬇️We’re testing targeted advertising for high-impact jobs ⬇️We’ve fixed some bugs for pasting from a Google Document ⬇️This is a short announcement post — we mostly wanted to give more context about the subforum. It’s also an opportunity for you to comment and share feedback, which we’d really appreciate (and you can always suggest new features here).Effective giving subforum — and other work on subforumsWe previously announced our first two pilot subforums (bioethics and software engineering). We’re now adding a third: effective giving — please feel free to join or explore it!We hope that subforums will let usersExplore discussions and content about the subforum’s topicStart discussions — including more casual threads — that will be visible only to people who have joined or are exploring the subforumKeep up to date with news and ideas that are most relevant to themWe’ve made significant changes to the subforums since the first ones launched, and expect to make more (we’re aware of some bugs), so all your feedback is really valuable. You can pass that on by commenting on this post or emailing us at forum@centreforeffectivealtruism.org.Other things to know about joining a subforumJoining the subforum will automatically subscribe you to the related topic, meaning that posts with the relevant tag will stay on the Frontpage for longer for you (as if they had 25 extra karma points). You can change that by clicking on the bell icon on the subforum page and setting your preferences accordingly (e.g. by removing “upweight on frontpage”).For now, joining will also add a tag to your profile. We plan to set up a way for users to opt out of this (and we might change the default settings), but it’s currently not possible. Note that you can also add more topics you’re interested in (whether or not they have associated subforums) by going to your profile and clicking “edit public profile” and finding the “My Activity” section.Audio narrations of some Forum postsThere will likely be a proper announcement about this sometime soon, but, in addition to the Nonlinear Library, some EA Forum posts will get human narrations (that will be accessible in podcast apps and on the post pages themselves). This will be a collaboration with TYPE III AUDIO.Testing targeted advertisements for impactful jobsSome of you might have seen an advertisement for an open position at Metaculus (likely because you had interacted with the “software engineering” topic). We’ll be testing different types of job advertisements over the next month or so. As always, feedback is appreciated.Bugs fixed for pasting from Google DocumentsIn some situations, images in posts that had been copy-pasted from Google Documents were broken. Some links were also failing. The major bugs here have been fixed.Unfortunately, copy-pasting internal headers automatically (say, if you have a document with a table of contents that links to other sections in the document, and you want to copy-paste that into a Forum draft such that the table of contents links to sections in the Forum post) is currently not possible, and we might never make it work. The easiest way to create internal links to sections is outlined here.Please give us feedback!You can comment here, reach us at forum@centreforeffectivealtruism.org, or comment on the feature suggestions thread....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective giving subforum and other updates (bonus Forum update November 2022), published by Lizka on November 28, 2022 on The Effective Altruism Forum.TL;DR: We’ve launched an “effective giving” subforum — the banner will be up for a couple of days. We’re announcing this and a couple other updates.More detailed summary:We’ve launched the effective giving subforum and are working on our other test subforums ⬇️Narrated versions of some posts will be available soon ⬇️We’re testing targeted advertising for high-impact jobs ⬇️We’ve fixed some bugs for pasting from a Google Document ⬇️This is a short announcement post — we mostly wanted to give more context about the subforum. It’s also an opportunity for you to comment and share feedback, which we’d really appreciate (and you can always suggest new features here).Effective giving subforum — and other work on subforumsWe previously announced our first two pilot subforums (bioethics and software engineering). We’re now adding a third: effective giving — please feel free to join or explore it!We hope that subforums will let usersExplore discussions and content about the subforum’s topicStart discussions — including more casual threads — that will be visible only to people who have joined or are exploring the subforumKeep up to date with news and ideas that are most relevant to themWe’ve made significant changes to the subforums since the first ones launched, and expect to make more (we’re aware of some bugs), so all your feedback is really valuable. You can pass that on by commenting on this post or emailing us at forum@centreforeffectivealtruism.org.Other things to know about joining a subforumJoining the subforum will automatically subscribe you to the related topic, meaning that posts with the relevant tag will stay on the Frontpage for longer for you (as if they had 25 extra karma points). You can change that by clicking on the bell icon on the subforum page and setting your preferences accordingly (e.g. by removing “upweight on frontpage”).For now, joining will also add a tag to your profile. We plan to set up a way for users to opt out of this (and we might change the default settings), but it’s currently not possible. Note that you can also add more topics you’re interested in (whether or not they have associated subforums) by going to your profile and clicking “edit public profile” and finding the “My Activity” section.Audio narrations of some Forum postsThere will likely be a proper announcement about this sometime soon, but, in addition to the Nonlinear Library, some EA Forum posts will get human narrations (that will be accessible in podcast apps and on the post pages themselves). This will be a collaboration with TYPE III AUDIO.Testing targeted advertisements for impactful jobsSome of you might have seen an advertisement for an open position at Metaculus (likely because you had interacted with the “software engineering” topic). We’ll be testing different types of job advertisements over the next month or so. As always, feedback is appreciated.Bugs fixed for pasting from Google DocumentsIn some situations, images in posts that had been copy-pasted from Google Documents were broken. Some links were also failing. The major bugs here have been fixed.Unfortunately, copy-pasting internal headers automatically (say, if you have a document with a table of contents that links to other sections in the document, and you want to copy-paste that into a Forum draft such that the table of contents links to sections in the Forum post) is currently not possible, and we might never make it work. The easiest way to create internal links to sections is outlined here.Please give us feedback!You can comment here, reach us at forum@centreforeffectivealtruism.org, or comment on the feature suggestions thread....]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:50 None full 3931
pqmQ9PxpzzGHshmhC_EA EA - 2022 ALLFED highlights by Ross Tieman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2022 ALLFED highlights, published by Ross Tieman on November 28, 2022 on The Effective Altruism Forum.Executive SummaryLike many others, we are reeling from recent FTX news, and what it means for ALLFED and the whole EA Community. Alliance to Feed the Earth in Disasters was an FTX Future Fund grantee. Taken the current landscape, we were debating whether we should include it in these highlights. We have decided to do so in the interest of transparency and integrity, so as to accurately report on our January-November 2022 position.We would like to start with massive thanks to Jaan Tallinn, whose generous support last year through the Survival and Flourishing Fund ($1,154,000) is a major reason why we are able to weather this storm (also a huge thank you to all our other donors, we appreciate each and every one).2022 marks ALLFED’s 5th anniversary (2017-2022). Being a fully remote team, we now have team members on all continents except Antarctica. By the end of the year, we will have a presence in New Zealand due to David Denkenberger accepting a professor position at the University of Canterbury in Christchurch.In these 2022 highlights:To start with, we give updates on ALLFED’s 2022 research, including our papers and Abrupt Sun Reduction Scenario (ASRS) preparedness and response plans, including a recent proposal for the US government.Next, we talk about financial mechanisms for food system interventions, including superpests, climate food finance nexus, pandemic preparedness, and our policy work.We then move to operations and communications highlights, including our media mentions.We next talk about events, workshops and presentations we have delivered this year.We then dive into some major changes to our team (including at the management level), ALLFED’s internships and our volunteering program, and also give key statistics from this spring’s research associate recruitment (you will also find there imminent PhD opportunities as well as a temporary researcher position with David in New Zealand).Finally, we thank those whose support we wish to especially recognize this year and talk about our funding needs for 2023, which range from dedicated funding to establish an ALLFED UK charity, to resilient food pilots, to support to continue key priority research projects on the topic of resilient foods for nuclear winter-level shocks, and to support preparedness and response plans (essential if we are to be able to present to decision makers within the current policy window).There is no escaping the fact that, rather unexpectedly, our funding situation has worsened due to the FTX developments. We will therefore be especially grateful for your donations and support this giving season (please visit our donation webpage or contact david@allfed.info if you are interested in donating appreciated stock).Since our inception, we have been contributing annual updates to the EA Forum. You can find last year’s ALLFED Highlights here, and here is our last EA Forum post EA Resilience & ALLFED's Case Study.ResearchIt’s been a good year for research at ALLFED.PapersWe have submitted 4 papers to peer review, one of which has now been accepted and published.Authors: David Denkenberger, Anders Sandberg, Ross John Tieman, Joshua M. PearceStatus: Published (peer reviewed)Journal: The International Journal of Disaster Risk ReductionThis paper estimates the long-term cost-effectiveness of resilient foods for preventing starvation in the face of a global agricultural collapse caused by a long-lasting sunlight reduction, and compares it with that of investing in artificial general intelligence (AGI) safety. Using two versions of a probabilistic model, the researchers find that investing in resilient foods is more cost-effective than investing in AGI safety, with a confidence of ...]]>
Ross Tieman https://forum.effectivealtruism.org/posts/pqmQ9PxpzzGHshmhC/2022-allfed-highlights Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2022 ALLFED highlights, published by Ross Tieman on November 28, 2022 on The Effective Altruism Forum.Executive SummaryLike many others, we are reeling from recent FTX news, and what it means for ALLFED and the whole EA Community. Alliance to Feed the Earth in Disasters was an FTX Future Fund grantee. Taken the current landscape, we were debating whether we should include it in these highlights. We have decided to do so in the interest of transparency and integrity, so as to accurately report on our January-November 2022 position.We would like to start with massive thanks to Jaan Tallinn, whose generous support last year through the Survival and Flourishing Fund ($1,154,000) is a major reason why we are able to weather this storm (also a huge thank you to all our other donors, we appreciate each and every one).2022 marks ALLFED’s 5th anniversary (2017-2022). Being a fully remote team, we now have team members on all continents except Antarctica. By the end of the year, we will have a presence in New Zealand due to David Denkenberger accepting a professor position at the University of Canterbury in Christchurch.In these 2022 highlights:To start with, we give updates on ALLFED’s 2022 research, including our papers and Abrupt Sun Reduction Scenario (ASRS) preparedness and response plans, including a recent proposal for the US government.Next, we talk about financial mechanisms for food system interventions, including superpests, climate food finance nexus, pandemic preparedness, and our policy work.We then move to operations and communications highlights, including our media mentions.We next talk about events, workshops and presentations we have delivered this year.We then dive into some major changes to our team (including at the management level), ALLFED’s internships and our volunteering program, and also give key statistics from this spring’s research associate recruitment (you will also find there imminent PhD opportunities as well as a temporary researcher position with David in New Zealand).Finally, we thank those whose support we wish to especially recognize this year and talk about our funding needs for 2023, which range from dedicated funding to establish an ALLFED UK charity, to resilient food pilots, to support to continue key priority research projects on the topic of resilient foods for nuclear winter-level shocks, and to support preparedness and response plans (essential if we are to be able to present to decision makers within the current policy window).There is no escaping the fact that, rather unexpectedly, our funding situation has worsened due to the FTX developments. We will therefore be especially grateful for your donations and support this giving season (please visit our donation webpage or contact david@allfed.info if you are interested in donating appreciated stock).Since our inception, we have been contributing annual updates to the EA Forum. You can find last year’s ALLFED Highlights here, and here is our last EA Forum post EA Resilience & ALLFED's Case Study.ResearchIt’s been a good year for research at ALLFED.PapersWe have submitted 4 papers to peer review, one of which has now been accepted and published.Authors: David Denkenberger, Anders Sandberg, Ross John Tieman, Joshua M. PearceStatus: Published (peer reviewed)Journal: The International Journal of Disaster Risk ReductionThis paper estimates the long-term cost-effectiveness of resilient foods for preventing starvation in the face of a global agricultural collapse caused by a long-lasting sunlight reduction, and compares it with that of investing in artificial general intelligence (AGI) safety. Using two versions of a probabilistic model, the researchers find that investing in resilient foods is more cost-effective than investing in AGI safety, with a confidence of ...]]>
Mon, 28 Nov 2022 12:55:24 +0000 EA - 2022 ALLFED highlights by Ross Tieman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2022 ALLFED highlights, published by Ross Tieman on November 28, 2022 on The Effective Altruism Forum.Executive SummaryLike many others, we are reeling from recent FTX news, and what it means for ALLFED and the whole EA Community. Alliance to Feed the Earth in Disasters was an FTX Future Fund grantee. Taken the current landscape, we were debating whether we should include it in these highlights. We have decided to do so in the interest of transparency and integrity, so as to accurately report on our January-November 2022 position.We would like to start with massive thanks to Jaan Tallinn, whose generous support last year through the Survival and Flourishing Fund ($1,154,000) is a major reason why we are able to weather this storm (also a huge thank you to all our other donors, we appreciate each and every one).2022 marks ALLFED’s 5th anniversary (2017-2022). Being a fully remote team, we now have team members on all continents except Antarctica. By the end of the year, we will have a presence in New Zealand due to David Denkenberger accepting a professor position at the University of Canterbury in Christchurch.In these 2022 highlights:To start with, we give updates on ALLFED’s 2022 research, including our papers and Abrupt Sun Reduction Scenario (ASRS) preparedness and response plans, including a recent proposal for the US government.Next, we talk about financial mechanisms for food system interventions, including superpests, climate food finance nexus, pandemic preparedness, and our policy work.We then move to operations and communications highlights, including our media mentions.We next talk about events, workshops and presentations we have delivered this year.We then dive into some major changes to our team (including at the management level), ALLFED’s internships and our volunteering program, and also give key statistics from this spring’s research associate recruitment (you will also find there imminent PhD opportunities as well as a temporary researcher position with David in New Zealand).Finally, we thank those whose support we wish to especially recognize this year and talk about our funding needs for 2023, which range from dedicated funding to establish an ALLFED UK charity, to resilient food pilots, to support to continue key priority research projects on the topic of resilient foods for nuclear winter-level shocks, and to support preparedness and response plans (essential if we are to be able to present to decision makers within the current policy window).There is no escaping the fact that, rather unexpectedly, our funding situation has worsened due to the FTX developments. We will therefore be especially grateful for your donations and support this giving season (please visit our donation webpage or contact david@allfed.info if you are interested in donating appreciated stock).Since our inception, we have been contributing annual updates to the EA Forum. You can find last year’s ALLFED Highlights here, and here is our last EA Forum post EA Resilience & ALLFED's Case Study.ResearchIt’s been a good year for research at ALLFED.PapersWe have submitted 4 papers to peer review, one of which has now been accepted and published.Authors: David Denkenberger, Anders Sandberg, Ross John Tieman, Joshua M. PearceStatus: Published (peer reviewed)Journal: The International Journal of Disaster Risk ReductionThis paper estimates the long-term cost-effectiveness of resilient foods for preventing starvation in the face of a global agricultural collapse caused by a long-lasting sunlight reduction, and compares it with that of investing in artificial general intelligence (AGI) safety. Using two versions of a probabilistic model, the researchers find that investing in resilient foods is more cost-effective than investing in AGI safety, with a confidence of ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2022 ALLFED highlights, published by Ross Tieman on November 28, 2022 on The Effective Altruism Forum.Executive SummaryLike many others, we are reeling from recent FTX news, and what it means for ALLFED and the whole EA Community. Alliance to Feed the Earth in Disasters was an FTX Future Fund grantee. Taken the current landscape, we were debating whether we should include it in these highlights. We have decided to do so in the interest of transparency and integrity, so as to accurately report on our January-November 2022 position.We would like to start with massive thanks to Jaan Tallinn, whose generous support last year through the Survival and Flourishing Fund ($1,154,000) is a major reason why we are able to weather this storm (also a huge thank you to all our other donors, we appreciate each and every one).2022 marks ALLFED’s 5th anniversary (2017-2022). Being a fully remote team, we now have team members on all continents except Antarctica. By the end of the year, we will have a presence in New Zealand due to David Denkenberger accepting a professor position at the University of Canterbury in Christchurch.In these 2022 highlights:To start with, we give updates on ALLFED’s 2022 research, including our papers and Abrupt Sun Reduction Scenario (ASRS) preparedness and response plans, including a recent proposal for the US government.Next, we talk about financial mechanisms for food system interventions, including superpests, climate food finance nexus, pandemic preparedness, and our policy work.We then move to operations and communications highlights, including our media mentions.We next talk about events, workshops and presentations we have delivered this year.We then dive into some major changes to our team (including at the management level), ALLFED’s internships and our volunteering program, and also give key statistics from this spring’s research associate recruitment (you will also find there imminent PhD opportunities as well as a temporary researcher position with David in New Zealand).Finally, we thank those whose support we wish to especially recognize this year and talk about our funding needs for 2023, which range from dedicated funding to establish an ALLFED UK charity, to resilient food pilots, to support to continue key priority research projects on the topic of resilient foods for nuclear winter-level shocks, and to support preparedness and response plans (essential if we are to be able to present to decision makers within the current policy window).There is no escaping the fact that, rather unexpectedly, our funding situation has worsened due to the FTX developments. We will therefore be especially grateful for your donations and support this giving season (please visit our donation webpage or contact david@allfed.info if you are interested in donating appreciated stock).Since our inception, we have been contributing annual updates to the EA Forum. You can find last year’s ALLFED Highlights here, and here is our last EA Forum post EA Resilience & ALLFED's Case Study.ResearchIt’s been a good year for research at ALLFED.PapersWe have submitted 4 papers to peer review, one of which has now been accepted and published.Authors: David Denkenberger, Anders Sandberg, Ross John Tieman, Joshua M. PearceStatus: Published (peer reviewed)Journal: The International Journal of Disaster Risk ReductionThis paper estimates the long-term cost-effectiveness of resilient foods for preventing starvation in the face of a global agricultural collapse caused by a long-lasting sunlight reduction, and compares it with that of investing in artificial general intelligence (AGI) safety. Using two versions of a probabilistic model, the researchers find that investing in resilient foods is more cost-effective than investing in AGI safety, with a confidence of ...]]>
Ross Tieman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 31:48 None full 3933
Mfq7KxQRvkeLnJvoB_EA EA - Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight by Adam Shriver Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight, published by Adam Shriver on November 28, 2022 on The Effective Altruism Forum.Key TakeawaysSeveral influential EAs have suggested using neuron counts as rough proxies for animals’ relative moral weights. We challenge this suggestion.We take the following ideas to be the strongest reasons in favor of a neuron count proxy:neuron counts are correlated with intelligence and intelligence is correlated with moral weight,additional neurons result in “more consciousness” or “more valenced consciousness,” andincreasing numbers of neurons are required to reach thresholds of minimal information capacity required for morally relevant cognitive abilities.However:in regards to intelligence, we can question both the extent to which more neurons are correlated with intelligence and whether more intelligence in fact predicts greater moral weight;many ways of arguing that more neurons results in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to work; andthere is no straightforward empirical evidence or compelling conceptual arguments indicating that relative differences in neuron counts within or between species reliably predicts welfare relevant functional capacities.Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely. Rather, neuron counts should be combined with other metrics in an overall weighted score that includes information about whether different species have welfare-relevant capacities.IntroductionThis is the fourth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to summarize our full report on the use of neuron counts as proxies for moral weights. The full report can be found here and includes more extensive arguments and evidence.Motivations for the ReportCan the number of neurons an organism possesses, or some related measure, be used as a proxy for deciding how much weight to give that organism in moral decisions? Several influential EAs have suggested that the answer is “Yes” in cases that involve aggregating the welfare of members of different species (Tomasik 2013, MacAskill 2022, Alexander 2021, Budolfson & Spears 2020).For the purposes of aggregating and comparing welfare across species, neuron counts are proposed as multipliers for cross-species comparisons of welfare. In general, the idea goes, as the number of neurons an organism possesses increases, so too does some morally relevant property related to the organism’s welfare. Generally, the morally relevant properties are assumed to increase linearly with an increase in neurons, though other scaling functions are possible.Scott Alexander of Slate Star Codex has a passage illustrating how weighting by neuron count might work:“Might cows be "more conscious" in a way that makes their suffering matter more than chickens? Hard to tell. But if we expect this to scale with neuron number, we find cows have 6x as many cortical neurons as chickens, and most people think of them as about 10x more morally valuable. If we massively round up and think of a cow as morally equivalent to 20 chickens, switching from an all-chicken diet to an all-beef diet saves 60 chicken-equivalents per year.” (2021)This methodology has important implications for assigning moral weight. For example, the average number of neurons in a human (86,000,000,000) is 390 times greater than the average number of neurons in a chicken (220,000,000) so we would treat the welfare units of humans as 39...]]>
Adam Shriver https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJvoB/why-neuron-counts-shouldn-t-be-used-as-proxies-for-moral Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight, published by Adam Shriver on November 28, 2022 on The Effective Altruism Forum.Key TakeawaysSeveral influential EAs have suggested using neuron counts as rough proxies for animals’ relative moral weights. We challenge this suggestion.We take the following ideas to be the strongest reasons in favor of a neuron count proxy:neuron counts are correlated with intelligence and intelligence is correlated with moral weight,additional neurons result in “more consciousness” or “more valenced consciousness,” andincreasing numbers of neurons are required to reach thresholds of minimal information capacity required for morally relevant cognitive abilities.However:in regards to intelligence, we can question both the extent to which more neurons are correlated with intelligence and whether more intelligence in fact predicts greater moral weight;many ways of arguing that more neurons results in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to work; andthere is no straightforward empirical evidence or compelling conceptual arguments indicating that relative differences in neuron counts within or between species reliably predicts welfare relevant functional capacities.Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely. Rather, neuron counts should be combined with other metrics in an overall weighted score that includes information about whether different species have welfare-relevant capacities.IntroductionThis is the fourth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to summarize our full report on the use of neuron counts as proxies for moral weights. The full report can be found here and includes more extensive arguments and evidence.Motivations for the ReportCan the number of neurons an organism possesses, or some related measure, be used as a proxy for deciding how much weight to give that organism in moral decisions? Several influential EAs have suggested that the answer is “Yes” in cases that involve aggregating the welfare of members of different species (Tomasik 2013, MacAskill 2022, Alexander 2021, Budolfson & Spears 2020).For the purposes of aggregating and comparing welfare across species, neuron counts are proposed as multipliers for cross-species comparisons of welfare. In general, the idea goes, as the number of neurons an organism possesses increases, so too does some morally relevant property related to the organism’s welfare. Generally, the morally relevant properties are assumed to increase linearly with an increase in neurons, though other scaling functions are possible.Scott Alexander of Slate Star Codex has a passage illustrating how weighting by neuron count might work:“Might cows be "more conscious" in a way that makes their suffering matter more than chickens? Hard to tell. But if we expect this to scale with neuron number, we find cows have 6x as many cortical neurons as chickens, and most people think of them as about 10x more morally valuable. If we massively round up and think of a cow as morally equivalent to 20 chickens, switching from an all-chicken diet to an all-beef diet saves 60 chicken-equivalents per year.” (2021)This methodology has important implications for assigning moral weight. For example, the average number of neurons in a human (86,000,000,000) is 390 times greater than the average number of neurons in a chicken (220,000,000) so we would treat the welfare units of humans as 39...]]>
Mon, 28 Nov 2022 12:07:52 +0000 EA - Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight by Adam Shriver Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight, published by Adam Shriver on November 28, 2022 on The Effective Altruism Forum.Key TakeawaysSeveral influential EAs have suggested using neuron counts as rough proxies for animals’ relative moral weights. We challenge this suggestion.We take the following ideas to be the strongest reasons in favor of a neuron count proxy:neuron counts are correlated with intelligence and intelligence is correlated with moral weight,additional neurons result in “more consciousness” or “more valenced consciousness,” andincreasing numbers of neurons are required to reach thresholds of minimal information capacity required for morally relevant cognitive abilities.However:in regards to intelligence, we can question both the extent to which more neurons are correlated with intelligence and whether more intelligence in fact predicts greater moral weight;many ways of arguing that more neurons results in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to work; andthere is no straightforward empirical evidence or compelling conceptual arguments indicating that relative differences in neuron counts within or between species reliably predicts welfare relevant functional capacities.Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely. Rather, neuron counts should be combined with other metrics in an overall weighted score that includes information about whether different species have welfare-relevant capacities.IntroductionThis is the fourth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to summarize our full report on the use of neuron counts as proxies for moral weights. The full report can be found here and includes more extensive arguments and evidence.Motivations for the ReportCan the number of neurons an organism possesses, or some related measure, be used as a proxy for deciding how much weight to give that organism in moral decisions? Several influential EAs have suggested that the answer is “Yes” in cases that involve aggregating the welfare of members of different species (Tomasik 2013, MacAskill 2022, Alexander 2021, Budolfson & Spears 2020).For the purposes of aggregating and comparing welfare across species, neuron counts are proposed as multipliers for cross-species comparisons of welfare. In general, the idea goes, as the number of neurons an organism possesses increases, so too does some morally relevant property related to the organism’s welfare. Generally, the morally relevant properties are assumed to increase linearly with an increase in neurons, though other scaling functions are possible.Scott Alexander of Slate Star Codex has a passage illustrating how weighting by neuron count might work:“Might cows be "more conscious" in a way that makes their suffering matter more than chickens? Hard to tell. But if we expect this to scale with neuron number, we find cows have 6x as many cortical neurons as chickens, and most people think of them as about 10x more morally valuable. If we massively round up and think of a cow as morally equivalent to 20 chickens, switching from an all-chicken diet to an all-beef diet saves 60 chicken-equivalents per year.” (2021)This methodology has important implications for assigning moral weight. For example, the average number of neurons in a human (86,000,000,000) is 390 times greater than the average number of neurons in a chicken (220,000,000) so we would treat the welfare units of humans as 39...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight, published by Adam Shriver on November 28, 2022 on The Effective Altruism Forum.Key TakeawaysSeveral influential EAs have suggested using neuron counts as rough proxies for animals’ relative moral weights. We challenge this suggestion.We take the following ideas to be the strongest reasons in favor of a neuron count proxy:neuron counts are correlated with intelligence and intelligence is correlated with moral weight,additional neurons result in “more consciousness” or “more valenced consciousness,” andincreasing numbers of neurons are required to reach thresholds of minimal information capacity required for morally relevant cognitive abilities.However:in regards to intelligence, we can question both the extent to which more neurons are correlated with intelligence and whether more intelligence in fact predicts greater moral weight;many ways of arguing that more neurons results in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to work; andthere is no straightforward empirical evidence or compelling conceptual arguments indicating that relative differences in neuron counts within or between species reliably predicts welfare relevant functional capacities.Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely. Rather, neuron counts should be combined with other metrics in an overall weighted score that includes information about whether different species have welfare-relevant capacities.IntroductionThis is the fourth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to summarize our full report on the use of neuron counts as proxies for moral weights. The full report can be found here and includes more extensive arguments and evidence.Motivations for the ReportCan the number of neurons an organism possesses, or some related measure, be used as a proxy for deciding how much weight to give that organism in moral decisions? Several influential EAs have suggested that the answer is “Yes” in cases that involve aggregating the welfare of members of different species (Tomasik 2013, MacAskill 2022, Alexander 2021, Budolfson & Spears 2020).For the purposes of aggregating and comparing welfare across species, neuron counts are proposed as multipliers for cross-species comparisons of welfare. In general, the idea goes, as the number of neurons an organism possesses increases, so too does some morally relevant property related to the organism’s welfare. Generally, the morally relevant properties are assumed to increase linearly with an increase in neurons, though other scaling functions are possible.Scott Alexander of Slate Star Codex has a passage illustrating how weighting by neuron count might work:“Might cows be "more conscious" in a way that makes their suffering matter more than chickens? Hard to tell. But if we expect this to scale with neuron number, we find cows have 6x as many cortical neurons as chickens, and most people think of them as about 10x more morally valuable. If we massively round up and think of a cow as morally equivalent to 20 chickens, switching from an all-chicken diet to an all-beef diet saves 60 chicken-equivalents per year.” (2021)This methodology has important implications for assigning moral weight. For example, the average number of neurons in a human (86,000,000,000) is 390 times greater than the average number of neurons in a chicken (220,000,000) so we would treat the welfare units of humans as 39...]]>
Adam Shriver https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:13 None full 3932
z859kiJCSJhBMG32j_EA EA - Create a fundraiser with GWWC! by Giving What We Can Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Create a fundraiser with GWWC!, published by Giving What We Can on November 28, 2022 on The Effective Altruism Forum.We have just launched our brand new fundraising pages. If you’d like to fundraise for effective charities this Giving Season, you can fill out our form.You can select up to 3 charities or funds from our Donate page and select a suggested split between them. GWWC has the benefit of being tax deductible in the US, UK and Netherlands.Create a fundraiser!Also you might like to keep an eye out for fundraising pages from people in the community and support or share their fundraisers!List of active Giving What We Can fundraisersGiving What We Can's first ever fundraising pageA.J. Jacobs Giving Season FundraiserEA Hong Kong's Giving Season Fundraiser(Your future fundraiser?)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Giving What We Can https://forum.effectivealtruism.org/posts/z859kiJCSJhBMG32j/create-a-fundraiser-with-gwwc Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Create a fundraiser with GWWC!, published by Giving What We Can on November 28, 2022 on The Effective Altruism Forum.We have just launched our brand new fundraising pages. If you’d like to fundraise for effective charities this Giving Season, you can fill out our form.You can select up to 3 charities or funds from our Donate page and select a suggested split between them. GWWC has the benefit of being tax deductible in the US, UK and Netherlands.Create a fundraiser!Also you might like to keep an eye out for fundraising pages from people in the community and support or share their fundraisers!List of active Giving What We Can fundraisersGiving What We Can's first ever fundraising pageA.J. Jacobs Giving Season FundraiserEA Hong Kong's Giving Season Fundraiser(Your future fundraiser?)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 28 Nov 2022 10:25:57 +0000 EA - Create a fundraiser with GWWC! by Giving What We Can Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Create a fundraiser with GWWC!, published by Giving What We Can on November 28, 2022 on The Effective Altruism Forum.We have just launched our brand new fundraising pages. If you’d like to fundraise for effective charities this Giving Season, you can fill out our form.You can select up to 3 charities or funds from our Donate page and select a suggested split between them. GWWC has the benefit of being tax deductible in the US, UK and Netherlands.Create a fundraiser!Also you might like to keep an eye out for fundraising pages from people in the community and support or share their fundraisers!List of active Giving What We Can fundraisersGiving What We Can's first ever fundraising pageA.J. Jacobs Giving Season FundraiserEA Hong Kong's Giving Season Fundraiser(Your future fundraiser?)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Create a fundraiser with GWWC!, published by Giving What We Can on November 28, 2022 on The Effective Altruism Forum.We have just launched our brand new fundraising pages. If you’d like to fundraise for effective charities this Giving Season, you can fill out our form.You can select up to 3 charities or funds from our Donate page and select a suggested split between them. GWWC has the benefit of being tax deductible in the US, UK and Netherlands.Create a fundraiser!Also you might like to keep an eye out for fundraising pages from people in the community and support or share their fundraisers!List of active Giving What We Can fundraisersGiving What We Can's first ever fundraising pageA.J. Jacobs Giving Season FundraiserEA Hong Kong's Giving Season Fundraiser(Your future fundraiser?)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Giving What We Can https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:05 None full 3923
FZ2BMwSYhkdBWmTTA_EA EA - Good Futures Initiative: Winter Project Internship by Aris Richardson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good Futures Initiative: Winter Project Internship, published by Aris Richardson on November 27, 2022 on The Effective Altruism Forum.TLDR: I'm launching Good Futures Initiative, a winter project internship to sponsor students to take on projects to upskill, test their fit for career aptitudes, or do impactful work over winter break. You can read more on our website and apply here by December 11th if interested!Good Futures InitiativeThe Good Futures Initiative is a 4.5 week internship in which students can use their winter break to lead high-EV projects. Projects could take many forms, but each project produces a final product while accomplishing one of these three goals:Skill up the intern in order for them to work on AI Safety or Biosecurity in the future.Let the intern explore an aptitude for an impactful career.Create impact directly.Good Futures takes place remotely from December 18th-January 25th, with a minimum of 12 hours of work per week. Accepted applicants will receive a $300 stipend and up to $1000 in funding for additional time or project fees. I expect to accept ~12 interns. The final number will depend largely on my capacity, but I may offer a lower-effort version of the program for promising applicants who I can’t fully fund/support (with a cohort for weekly check-ins and invites to guest speaker events).Example projectsThese project examples are far from perfect. At the start of the internship, I'd work with each intern to be sure they're doing the best project fit for their goals. That being said, I’m excited by projects similar to (and better than!!) these.Skilling up by working on AI Safety technical projects that have been posted by existing researchers, with the goal of creating a lesswrong post detailing your findings. For an example of a potential project to work on, Sam Bowman posted the following project:Consider questions where the most common answer online is likely to be false (as in TruthfulQA). If you prompt GPT-3-Instruct (or similar) with questions and correct answers in one domain/topic, then ask it about another domain/topic, will it tend to give the correct answer or the popular answer? As you make the domains/topics more or less different, how does this vary?Exploring an aptitude for communications by creating 2 articles on longtermist ideas and submitting them to 10 relevant magazines for publishing.Create impact by translating 3 relevant research papers from AGISF into Mandarin and posting them somewhere where they can be accessed by ML engineers in China.In addition to leading a focused project, interns will have weekly one-on-one progress check-ins with me (accountability), guest speaker events (expertise), and a meeting with a cohort of ~5 other interns working on projects (community).Our projects are student-directed. Although there will be guest speakers who have expertise in various topics, weekly advising/mentorship will be focused on helping students learn to self-sufficiently lead projects and build skills, rather than technical help executing the projects. E.g.: Accountability, increasing ambition, figuring out how to increase a project’s EV, making sure interns focus on the right metrics each week. Students are encouraged to apply for technical projects that will help them upskill/test their fit for technical work and reach out to additional mentors during the internship.RationaleThis internship fills a few gaps the EA community has in the process of getting students/recent grads to seriously pursue high-impact work.Students have a lot of time over winter break, but few obvious opportunities for impactful work, structured aptitude testing, or focused skilling-up.There are a lot of students or recent graduates interested in switching to a more impactful career, but aren't sure of the best way to do ...]]>
Aris Richardson https://forum.effectivealtruism.org/posts/FZ2BMwSYhkdBWmTTA/good-futures-initiative-winter-project-internship Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good Futures Initiative: Winter Project Internship, published by Aris Richardson on November 27, 2022 on The Effective Altruism Forum.TLDR: I'm launching Good Futures Initiative, a winter project internship to sponsor students to take on projects to upskill, test their fit for career aptitudes, or do impactful work over winter break. You can read more on our website and apply here by December 11th if interested!Good Futures InitiativeThe Good Futures Initiative is a 4.5 week internship in which students can use their winter break to lead high-EV projects. Projects could take many forms, but each project produces a final product while accomplishing one of these three goals:Skill up the intern in order for them to work on AI Safety or Biosecurity in the future.Let the intern explore an aptitude for an impactful career.Create impact directly.Good Futures takes place remotely from December 18th-January 25th, with a minimum of 12 hours of work per week. Accepted applicants will receive a $300 stipend and up to $1000 in funding for additional time or project fees. I expect to accept ~12 interns. The final number will depend largely on my capacity, but I may offer a lower-effort version of the program for promising applicants who I can’t fully fund/support (with a cohort for weekly check-ins and invites to guest speaker events).Example projectsThese project examples are far from perfect. At the start of the internship, I'd work with each intern to be sure they're doing the best project fit for their goals. That being said, I’m excited by projects similar to (and better than!!) these.Skilling up by working on AI Safety technical projects that have been posted by existing researchers, with the goal of creating a lesswrong post detailing your findings. For an example of a potential project to work on, Sam Bowman posted the following project:Consider questions where the most common answer online is likely to be false (as in TruthfulQA). If you prompt GPT-3-Instruct (or similar) with questions and correct answers in one domain/topic, then ask it about another domain/topic, will it tend to give the correct answer or the popular answer? As you make the domains/topics more or less different, how does this vary?Exploring an aptitude for communications by creating 2 articles on longtermist ideas and submitting them to 10 relevant magazines for publishing.Create impact by translating 3 relevant research papers from AGISF into Mandarin and posting them somewhere where they can be accessed by ML engineers in China.In addition to leading a focused project, interns will have weekly one-on-one progress check-ins with me (accountability), guest speaker events (expertise), and a meeting with a cohort of ~5 other interns working on projects (community).Our projects are student-directed. Although there will be guest speakers who have expertise in various topics, weekly advising/mentorship will be focused on helping students learn to self-sufficiently lead projects and build skills, rather than technical help executing the projects. E.g.: Accountability, increasing ambition, figuring out how to increase a project’s EV, making sure interns focus on the right metrics each week. Students are encouraged to apply for technical projects that will help them upskill/test their fit for technical work and reach out to additional mentors during the internship.RationaleThis internship fills a few gaps the EA community has in the process of getting students/recent grads to seriously pursue high-impact work.Students have a lot of time over winter break, but few obvious opportunities for impactful work, structured aptitude testing, or focused skilling-up.There are a lot of students or recent graduates interested in switching to a more impactful career, but aren't sure of the best way to do ...]]>
Mon, 28 Nov 2022 08:05:19 +0000 EA - Good Futures Initiative: Winter Project Internship by Aris Richardson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good Futures Initiative: Winter Project Internship, published by Aris Richardson on November 27, 2022 on The Effective Altruism Forum.TLDR: I'm launching Good Futures Initiative, a winter project internship to sponsor students to take on projects to upskill, test their fit for career aptitudes, or do impactful work over winter break. You can read more on our website and apply here by December 11th if interested!Good Futures InitiativeThe Good Futures Initiative is a 4.5 week internship in which students can use their winter break to lead high-EV projects. Projects could take many forms, but each project produces a final product while accomplishing one of these three goals:Skill up the intern in order for them to work on AI Safety or Biosecurity in the future.Let the intern explore an aptitude for an impactful career.Create impact directly.Good Futures takes place remotely from December 18th-January 25th, with a minimum of 12 hours of work per week. Accepted applicants will receive a $300 stipend and up to $1000 in funding for additional time or project fees. I expect to accept ~12 interns. The final number will depend largely on my capacity, but I may offer a lower-effort version of the program for promising applicants who I can’t fully fund/support (with a cohort for weekly check-ins and invites to guest speaker events).Example projectsThese project examples are far from perfect. At the start of the internship, I'd work with each intern to be sure they're doing the best project fit for their goals. That being said, I’m excited by projects similar to (and better than!!) these.Skilling up by working on AI Safety technical projects that have been posted by existing researchers, with the goal of creating a lesswrong post detailing your findings. For an example of a potential project to work on, Sam Bowman posted the following project:Consider questions where the most common answer online is likely to be false (as in TruthfulQA). If you prompt GPT-3-Instruct (or similar) with questions and correct answers in one domain/topic, then ask it about another domain/topic, will it tend to give the correct answer or the popular answer? As you make the domains/topics more or less different, how does this vary?Exploring an aptitude for communications by creating 2 articles on longtermist ideas and submitting them to 10 relevant magazines for publishing.Create impact by translating 3 relevant research papers from AGISF into Mandarin and posting them somewhere where they can be accessed by ML engineers in China.In addition to leading a focused project, interns will have weekly one-on-one progress check-ins with me (accountability), guest speaker events (expertise), and a meeting with a cohort of ~5 other interns working on projects (community).Our projects are student-directed. Although there will be guest speakers who have expertise in various topics, weekly advising/mentorship will be focused on helping students learn to self-sufficiently lead projects and build skills, rather than technical help executing the projects. E.g.: Accountability, increasing ambition, figuring out how to increase a project’s EV, making sure interns focus on the right metrics each week. Students are encouraged to apply for technical projects that will help them upskill/test their fit for technical work and reach out to additional mentors during the internship.RationaleThis internship fills a few gaps the EA community has in the process of getting students/recent grads to seriously pursue high-impact work.Students have a lot of time over winter break, but few obvious opportunities for impactful work, structured aptitude testing, or focused skilling-up.There are a lot of students or recent graduates interested in switching to a more impactful career, but aren't sure of the best way to do ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good Futures Initiative: Winter Project Internship, published by Aris Richardson on November 27, 2022 on The Effective Altruism Forum.TLDR: I'm launching Good Futures Initiative, a winter project internship to sponsor students to take on projects to upskill, test their fit for career aptitudes, or do impactful work over winter break. You can read more on our website and apply here by December 11th if interested!Good Futures InitiativeThe Good Futures Initiative is a 4.5 week internship in which students can use their winter break to lead high-EV projects. Projects could take many forms, but each project produces a final product while accomplishing one of these three goals:Skill up the intern in order for them to work on AI Safety or Biosecurity in the future.Let the intern explore an aptitude for an impactful career.Create impact directly.Good Futures takes place remotely from December 18th-January 25th, with a minimum of 12 hours of work per week. Accepted applicants will receive a $300 stipend and up to $1000 in funding for additional time or project fees. I expect to accept ~12 interns. The final number will depend largely on my capacity, but I may offer a lower-effort version of the program for promising applicants who I can’t fully fund/support (with a cohort for weekly check-ins and invites to guest speaker events).Example projectsThese project examples are far from perfect. At the start of the internship, I'd work with each intern to be sure they're doing the best project fit for their goals. That being said, I’m excited by projects similar to (and better than!!) these.Skilling up by working on AI Safety technical projects that have been posted by existing researchers, with the goal of creating a lesswrong post detailing your findings. For an example of a potential project to work on, Sam Bowman posted the following project:Consider questions where the most common answer online is likely to be false (as in TruthfulQA). If you prompt GPT-3-Instruct (or similar) with questions and correct answers in one domain/topic, then ask it about another domain/topic, will it tend to give the correct answer or the popular answer? As you make the domains/topics more or less different, how does this vary?Exploring an aptitude for communications by creating 2 articles on longtermist ideas and submitting them to 10 relevant magazines for publishing.Create impact by translating 3 relevant research papers from AGISF into Mandarin and posting them somewhere where they can be accessed by ML engineers in China.In addition to leading a focused project, interns will have weekly one-on-one progress check-ins with me (accountability), guest speaker events (expertise), and a meeting with a cohort of ~5 other interns working on projects (community).Our projects are student-directed. Although there will be guest speakers who have expertise in various topics, weekly advising/mentorship will be focused on helping students learn to self-sufficiently lead projects and build skills, rather than technical help executing the projects. E.g.: Accountability, increasing ambition, figuring out how to increase a project’s EV, making sure interns focus on the right metrics each week. Students are encouraged to apply for technical projects that will help them upskill/test their fit for technical work and reach out to additional mentors during the internship.RationaleThis internship fills a few gaps the EA community has in the process of getting students/recent grads to seriously pursue high-impact work.Students have a lot of time over winter break, but few obvious opportunities for impactful work, structured aptitude testing, or focused skilling-up.There are a lot of students or recent graduates interested in switching to a more impactful career, but aren't sure of the best way to do ...]]>
Aris Richardson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:12 None full 3924
MwXAJfaqMvLhmvNsA_EA EA - Effective Giving Day is only 1 day away! by Giving What We Can Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Giving Day is only 1 day away!, published by Giving What We Can on November 27, 2022 on The Effective Altruism Forum.The global effective giving community has worked hard to create an awesome virtual event for Effective Giving Day! There are also several awesome groups around the world that are hosting their own events in person and online in Toronto, Vancouver, São Paulo, Sydney, Melbourne, Munich, Heidelberg, Estonia, Italy and more!Join us for Effective Giving Day on Monday 28th NovemberOur community sharing why they give effectivelyThis year the global effective giving community is celebrating on the 28th of November (29th of November Asia/Pacific)!Our event on YouTube Live will help you learn:How just one person really can make a difference: How to 100x your impact through effective donationsThe newest findings in the philanthropic space from industry leadersHow effective giving has benefitted our member’s lives: Featuring guest speakers bestselling author Rutger Bregman, ethicist Peter Singer and science broadcaster and professional poker player Liv BoereeRSVP for Effective Giving Day via FacebookRSVP for Effective Giving Day via the EA ForumGiving Season EventsEffective Giving Day - Main Online Event28 Nov at 19:00 UTC (London: 7:00 pm, Munich: 8:00 pm, New York 2:00 pm, San Francisco: 11:00 am)RSVP on FacebookRSVP for the event on the EA ForumSet a reminder on YouTubeEffective Giving Day Viewing Party, Lunch and Giving Game (Hosted by Effective Altruists of UBC & Vancouver)28 Nov at 19:00 UTC (London: 7:00 pm, Munich: 8:00 pm, New York 2:00 pm, San Francisco: 11:00 am)DiscordEffective Giving Day: Toronto28 Nov at 23:30 UTC (Toronto/EST: 6:30pm)Event detailsRSVP via MeetupEffective Giving Day: São Paulo28 Nov at 22:30 UTC(São Paulo: 7:30pm)Event detailsRegister interest via Google FormEffective Giving Day: Vancouver28 Nov at 18:45 UTC (Vancouver time: 10:45am)Event detailsRSVP via ZoomEffective Giving Day: Heidelberg28 Nov at 18:30 UTC (Heidelberg time: 7:30pm)Event detailsRSVP via FacebookEffective Giving Day: Estonia28 NovCheck for details on FacebookEffective Giving Day: Tübingen28 Nov at 18:30 (7:30pm Heidelberg time)Event detailsRSVP via Google CalendarEffective Giving Day: Italy28 Nov at 19:00 UTCEvent details via SlackRSVP via SlackEffective Giving Day: Sydney29 Nov at 06:30 UTC (Sydney: 5:30pm)Event detailsRSVP via Google FormEffective Giving Day: Melbourne29 Nov at 06:15 UTC (Melbourne time: 5:15pm)Event detailsRSVP via FacebookMunich GWWC/EA Event - December 20221 Dec at 18:00 UTC (Munich: 7pm)RVSP on FacebookEvent detailsAre you hosting your own Effective Giving Day event?Register your Effective Giving Day event with us!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Giving What We Can https://forum.effectivealtruism.org/posts/MwXAJfaqMvLhmvNsA/effective-giving-day-is-only-1-day-away Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Giving Day is only 1 day away!, published by Giving What We Can on November 27, 2022 on The Effective Altruism Forum.The global effective giving community has worked hard to create an awesome virtual event for Effective Giving Day! There are also several awesome groups around the world that are hosting their own events in person and online in Toronto, Vancouver, São Paulo, Sydney, Melbourne, Munich, Heidelberg, Estonia, Italy and more!Join us for Effective Giving Day on Monday 28th NovemberOur community sharing why they give effectivelyThis year the global effective giving community is celebrating on the 28th of November (29th of November Asia/Pacific)!Our event on YouTube Live will help you learn:How just one person really can make a difference: How to 100x your impact through effective donationsThe newest findings in the philanthropic space from industry leadersHow effective giving has benefitted our member’s lives: Featuring guest speakers bestselling author Rutger Bregman, ethicist Peter Singer and science broadcaster and professional poker player Liv BoereeRSVP for Effective Giving Day via FacebookRSVP for Effective Giving Day via the EA ForumGiving Season EventsEffective Giving Day - Main Online Event28 Nov at 19:00 UTC (London: 7:00 pm, Munich: 8:00 pm, New York 2:00 pm, San Francisco: 11:00 am)RSVP on FacebookRSVP for the event on the EA ForumSet a reminder on YouTubeEffective Giving Day Viewing Party, Lunch and Giving Game (Hosted by Effective Altruists of UBC & Vancouver)28 Nov at 19:00 UTC (London: 7:00 pm, Munich: 8:00 pm, New York 2:00 pm, San Francisco: 11:00 am)DiscordEffective Giving Day: Toronto28 Nov at 23:30 UTC (Toronto/EST: 6:30pm)Event detailsRSVP via MeetupEffective Giving Day: São Paulo28 Nov at 22:30 UTC(São Paulo: 7:30pm)Event detailsRegister interest via Google FormEffective Giving Day: Vancouver28 Nov at 18:45 UTC (Vancouver time: 10:45am)Event detailsRSVP via ZoomEffective Giving Day: Heidelberg28 Nov at 18:30 UTC (Heidelberg time: 7:30pm)Event detailsRSVP via FacebookEffective Giving Day: Estonia28 NovCheck for details on FacebookEffective Giving Day: Tübingen28 Nov at 18:30 (7:30pm Heidelberg time)Event detailsRSVP via Google CalendarEffective Giving Day: Italy28 Nov at 19:00 UTCEvent details via SlackRSVP via SlackEffective Giving Day: Sydney29 Nov at 06:30 UTC (Sydney: 5:30pm)Event detailsRSVP via Google FormEffective Giving Day: Melbourne29 Nov at 06:15 UTC (Melbourne time: 5:15pm)Event detailsRSVP via FacebookMunich GWWC/EA Event - December 20221 Dec at 18:00 UTC (Munich: 7pm)RVSP on FacebookEvent detailsAre you hosting your own Effective Giving Day event?Register your Effective Giving Day event with us!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 27 Nov 2022 23:27:10 +0000 EA - Effective Giving Day is only 1 day away! by Giving What We Can Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Giving Day is only 1 day away!, published by Giving What We Can on November 27, 2022 on The Effective Altruism Forum.The global effective giving community has worked hard to create an awesome virtual event for Effective Giving Day! There are also several awesome groups around the world that are hosting their own events in person and online in Toronto, Vancouver, São Paulo, Sydney, Melbourne, Munich, Heidelberg, Estonia, Italy and more!Join us for Effective Giving Day on Monday 28th NovemberOur community sharing why they give effectivelyThis year the global effective giving community is celebrating on the 28th of November (29th of November Asia/Pacific)!Our event on YouTube Live will help you learn:How just one person really can make a difference: How to 100x your impact through effective donationsThe newest findings in the philanthropic space from industry leadersHow effective giving has benefitted our member’s lives: Featuring guest speakers bestselling author Rutger Bregman, ethicist Peter Singer and science broadcaster and professional poker player Liv BoereeRSVP for Effective Giving Day via FacebookRSVP for Effective Giving Day via the EA ForumGiving Season EventsEffective Giving Day - Main Online Event28 Nov at 19:00 UTC (London: 7:00 pm, Munich: 8:00 pm, New York 2:00 pm, San Francisco: 11:00 am)RSVP on FacebookRSVP for the event on the EA ForumSet a reminder on YouTubeEffective Giving Day Viewing Party, Lunch and Giving Game (Hosted by Effective Altruists of UBC & Vancouver)28 Nov at 19:00 UTC (London: 7:00 pm, Munich: 8:00 pm, New York 2:00 pm, San Francisco: 11:00 am)DiscordEffective Giving Day: Toronto28 Nov at 23:30 UTC (Toronto/EST: 6:30pm)Event detailsRSVP via MeetupEffective Giving Day: São Paulo28 Nov at 22:30 UTC(São Paulo: 7:30pm)Event detailsRegister interest via Google FormEffective Giving Day: Vancouver28 Nov at 18:45 UTC (Vancouver time: 10:45am)Event detailsRSVP via ZoomEffective Giving Day: Heidelberg28 Nov at 18:30 UTC (Heidelberg time: 7:30pm)Event detailsRSVP via FacebookEffective Giving Day: Estonia28 NovCheck for details on FacebookEffective Giving Day: Tübingen28 Nov at 18:30 (7:30pm Heidelberg time)Event detailsRSVP via Google CalendarEffective Giving Day: Italy28 Nov at 19:00 UTCEvent details via SlackRSVP via SlackEffective Giving Day: Sydney29 Nov at 06:30 UTC (Sydney: 5:30pm)Event detailsRSVP via Google FormEffective Giving Day: Melbourne29 Nov at 06:15 UTC (Melbourne time: 5:15pm)Event detailsRSVP via FacebookMunich GWWC/EA Event - December 20221 Dec at 18:00 UTC (Munich: 7pm)RVSP on FacebookEvent detailsAre you hosting your own Effective Giving Day event?Register your Effective Giving Day event with us!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Giving Day is only 1 day away!, published by Giving What We Can on November 27, 2022 on The Effective Altruism Forum.The global effective giving community has worked hard to create an awesome virtual event for Effective Giving Day! There are also several awesome groups around the world that are hosting their own events in person and online in Toronto, Vancouver, São Paulo, Sydney, Melbourne, Munich, Heidelberg, Estonia, Italy and more!Join us for Effective Giving Day on Monday 28th NovemberOur community sharing why they give effectivelyThis year the global effective giving community is celebrating on the 28th of November (29th of November Asia/Pacific)!Our event on YouTube Live will help you learn:How just one person really can make a difference: How to 100x your impact through effective donationsThe newest findings in the philanthropic space from industry leadersHow effective giving has benefitted our member’s lives: Featuring guest speakers bestselling author Rutger Bregman, ethicist Peter Singer and science broadcaster and professional poker player Liv BoereeRSVP for Effective Giving Day via FacebookRSVP for Effective Giving Day via the EA ForumGiving Season EventsEffective Giving Day - Main Online Event28 Nov at 19:00 UTC (London: 7:00 pm, Munich: 8:00 pm, New York 2:00 pm, San Francisco: 11:00 am)RSVP on FacebookRSVP for the event on the EA ForumSet a reminder on YouTubeEffective Giving Day Viewing Party, Lunch and Giving Game (Hosted by Effective Altruists of UBC & Vancouver)28 Nov at 19:00 UTC (London: 7:00 pm, Munich: 8:00 pm, New York 2:00 pm, San Francisco: 11:00 am)DiscordEffective Giving Day: Toronto28 Nov at 23:30 UTC (Toronto/EST: 6:30pm)Event detailsRSVP via MeetupEffective Giving Day: São Paulo28 Nov at 22:30 UTC(São Paulo: 7:30pm)Event detailsRegister interest via Google FormEffective Giving Day: Vancouver28 Nov at 18:45 UTC (Vancouver time: 10:45am)Event detailsRSVP via ZoomEffective Giving Day: Heidelberg28 Nov at 18:30 UTC (Heidelberg time: 7:30pm)Event detailsRSVP via FacebookEffective Giving Day: Estonia28 NovCheck for details on FacebookEffective Giving Day: Tübingen28 Nov at 18:30 (7:30pm Heidelberg time)Event detailsRSVP via Google CalendarEffective Giving Day: Italy28 Nov at 19:00 UTCEvent details via SlackRSVP via SlackEffective Giving Day: Sydney29 Nov at 06:30 UTC (Sydney: 5:30pm)Event detailsRSVP via Google FormEffective Giving Day: Melbourne29 Nov at 06:15 UTC (Melbourne time: 5:15pm)Event detailsRSVP via FacebookMunich GWWC/EA Event - December 20221 Dec at 18:00 UTC (Munich: 7pm)RVSP on FacebookEvent detailsAre you hosting your own Effective Giving Day event?Register your Effective Giving Day event with us!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Giving What We Can https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:58 None full 3925
LWN6qFhCtPDEJJpeG_EA EA - Cost-effectiveness of operations management in high-impact organisations by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cost-effectiveness of operations management in high-impact organisations, published by Vasco Grilo on November 27, 2022 on The Effective Altruism Forum.SummaryFollowing up on the challenge to quantify the impact of 80,000 hours' top career paths introduced by Nuño Sempere, I have estimated the cost-effectiveness of operations management in high-impact organisations (OM), which arguably include 80,000 Hours’ top-recommended organisations.The results for the mean cost-effectiveness of various metrics in bp/G$ in terms of existential risk reduction are summarised in the table below for my preferred method. I present all results with 3 digits, but I think their resilience is such that they only represent order of magnitude estimates (i.e. they may well be wrong by a factor of 10^0.5 = 3).Mean cost-effectiveness (bp/G$) of.Global health and developmentLongtermism and catastrophic risk preventionAnimal welfareEffective altruism infrastructureThe effective altruism communityOperations management in high-impact organisationsMethod 3 with truncation0.4313.951.623.201.557.01AcknowledgementsThanks to Abraham Rowe, Dan Hendrycks, Luke Freeman, Matt Lerner, Nuño Sempere, Sawyer Bernath, Stien van der Ploeg, and Tamay Besiroglu.MethodsI estimated the cost-effectiveness from the product between:The cost-effectiveness of the high-impact organisations, which I assumed equal to that of the effective altruism community.The multiplier of OM, which I defined as the ratio between the cost-effectiveness of OM and the high-impact organisations.This method assumes the cost-effectiveness distribution of the high-impact organisations is represented by the one theorised for the effective altruism community in the next section. Moreover, the cost-effectiveness estimates are only accurate to the extent that future opportunities are as valuable as recent ones.The calculations are in this Colab.Cost-effectiveness of the effective altruism communityI calculated the cost-effectiveness of the effective altruism community from the mean cost-effectiveness weighted by cumulative spending between 1 January 2020 and 15 August 2022 of 4 cause areas:Global health and development.Longtermism and catastrophic risk prevention.Animal welfare.Effective altruism infrastructure.These are the areas for which Tyler Maule collected data here (see EA Forum post here). I adjusted the 2020 and 2021 values for inflation using the calculator from in2013dollars.I computed the cost-effectiveness of each area using 3 methods. All rely on distributions which are either truncated to the 99 % confidence interval (CI) or not truncated, in order to understand the effect of outliers. The parameters of the pre-truncation distributions, which are the final distributions for the non-truncation cases, are provided below.Method 1I defined the cost-effectiveness of longtermism and catastrophic risk prevention as a truncated lognormal distribution with pre-truncation 5th and 95th percentiles equal to 1 and 10 bp/G$ in terms of existential risk reduction. These are the lower and upper bounds proposed here by Linchuan Zhang.I assumed the ratio between the cost-effectiveness of i) longtermism and catastrophic risk prevention and ii) global health and development to be a truncated lognormal distribution with pre-truncation 5th and 95th percentiles equal to 10 and 100. These are the lower and upper bounds guessed here by Benjamin Todd for the ratio between the cost-effectiveness of the Long-Term Future Fund (LTFF) and Global Health and Development Fund (search for “10-100x more cost-effective”).I considered the ratio between the cost-effectiveness of i) animal welfare and ii) global health and development to be a truncated lognormal distribution with pre-truncation 5th and 95th percentiles equal to 270 μ and 211....]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/LWN6qFhCtPDEJJpeG/cost-effectiveness-of-operations-management-in-high-impact Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cost-effectiveness of operations management in high-impact organisations, published by Vasco Grilo on November 27, 2022 on The Effective Altruism Forum.SummaryFollowing up on the challenge to quantify the impact of 80,000 hours' top career paths introduced by Nuño Sempere, I have estimated the cost-effectiveness of operations management in high-impact organisations (OM), which arguably include 80,000 Hours’ top-recommended organisations.The results for the mean cost-effectiveness of various metrics in bp/G$ in terms of existential risk reduction are summarised in the table below for my preferred method. I present all results with 3 digits, but I think their resilience is such that they only represent order of magnitude estimates (i.e. they may well be wrong by a factor of 10^0.5 = 3).Mean cost-effectiveness (bp/G$) of.Global health and developmentLongtermism and catastrophic risk preventionAnimal welfareEffective altruism infrastructureThe effective altruism communityOperations management in high-impact organisationsMethod 3 with truncation0.4313.951.623.201.557.01AcknowledgementsThanks to Abraham Rowe, Dan Hendrycks, Luke Freeman, Matt Lerner, Nuño Sempere, Sawyer Bernath, Stien van der Ploeg, and Tamay Besiroglu.MethodsI estimated the cost-effectiveness from the product between:The cost-effectiveness of the high-impact organisations, which I assumed equal to that of the effective altruism community.The multiplier of OM, which I defined as the ratio between the cost-effectiveness of OM and the high-impact organisations.This method assumes the cost-effectiveness distribution of the high-impact organisations is represented by the one theorised for the effective altruism community in the next section. Moreover, the cost-effectiveness estimates are only accurate to the extent that future opportunities are as valuable as recent ones.The calculations are in this Colab.Cost-effectiveness of the effective altruism communityI calculated the cost-effectiveness of the effective altruism community from the mean cost-effectiveness weighted by cumulative spending between 1 January 2020 and 15 August 2022 of 4 cause areas:Global health and development.Longtermism and catastrophic risk prevention.Animal welfare.Effective altruism infrastructure.These are the areas for which Tyler Maule collected data here (see EA Forum post here). I adjusted the 2020 and 2021 values for inflation using the calculator from in2013dollars.I computed the cost-effectiveness of each area using 3 methods. All rely on distributions which are either truncated to the 99 % confidence interval (CI) or not truncated, in order to understand the effect of outliers. The parameters of the pre-truncation distributions, which are the final distributions for the non-truncation cases, are provided below.Method 1I defined the cost-effectiveness of longtermism and catastrophic risk prevention as a truncated lognormal distribution with pre-truncation 5th and 95th percentiles equal to 1 and 10 bp/G$ in terms of existential risk reduction. These are the lower and upper bounds proposed here by Linchuan Zhang.I assumed the ratio between the cost-effectiveness of i) longtermism and catastrophic risk prevention and ii) global health and development to be a truncated lognormal distribution with pre-truncation 5th and 95th percentiles equal to 10 and 100. These are the lower and upper bounds guessed here by Benjamin Todd for the ratio between the cost-effectiveness of the Long-Term Future Fund (LTFF) and Global Health and Development Fund (search for “10-100x more cost-effective”).I considered the ratio between the cost-effectiveness of i) animal welfare and ii) global health and development to be a truncated lognormal distribution with pre-truncation 5th and 95th percentiles equal to 270 μ and 211....]]>
Sun, 27 Nov 2022 22:31:24 +0000 EA - Cost-effectiveness of operations management in high-impact organisations by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cost-effectiveness of operations management in high-impact organisations, published by Vasco Grilo on November 27, 2022 on The Effective Altruism Forum.SummaryFollowing up on the challenge to quantify the impact of 80,000 hours' top career paths introduced by Nuño Sempere, I have estimated the cost-effectiveness of operations management in high-impact organisations (OM), which arguably include 80,000 Hours’ top-recommended organisations.The results for the mean cost-effectiveness of various metrics in bp/G$ in terms of existential risk reduction are summarised in the table below for my preferred method. I present all results with 3 digits, but I think their resilience is such that they only represent order of magnitude estimates (i.e. they may well be wrong by a factor of 10^0.5 = 3).Mean cost-effectiveness (bp/G$) of.Global health and developmentLongtermism and catastrophic risk preventionAnimal welfareEffective altruism infrastructureThe effective altruism communityOperations management in high-impact organisationsMethod 3 with truncation0.4313.951.623.201.557.01AcknowledgementsThanks to Abraham Rowe, Dan Hendrycks, Luke Freeman, Matt Lerner, Nuño Sempere, Sawyer Bernath, Stien van der Ploeg, and Tamay Besiroglu.MethodsI estimated the cost-effectiveness from the product between:The cost-effectiveness of the high-impact organisations, which I assumed equal to that of the effective altruism community.The multiplier of OM, which I defined as the ratio between the cost-effectiveness of OM and the high-impact organisations.This method assumes the cost-effectiveness distribution of the high-impact organisations is represented by the one theorised for the effective altruism community in the next section. Moreover, the cost-effectiveness estimates are only accurate to the extent that future opportunities are as valuable as recent ones.The calculations are in this Colab.Cost-effectiveness of the effective altruism communityI calculated the cost-effectiveness of the effective altruism community from the mean cost-effectiveness weighted by cumulative spending between 1 January 2020 and 15 August 2022 of 4 cause areas:Global health and development.Longtermism and catastrophic risk prevention.Animal welfare.Effective altruism infrastructure.These are the areas for which Tyler Maule collected data here (see EA Forum post here). I adjusted the 2020 and 2021 values for inflation using the calculator from in2013dollars.I computed the cost-effectiveness of each area using 3 methods. All rely on distributions which are either truncated to the 99 % confidence interval (CI) or not truncated, in order to understand the effect of outliers. The parameters of the pre-truncation distributions, which are the final distributions for the non-truncation cases, are provided below.Method 1I defined the cost-effectiveness of longtermism and catastrophic risk prevention as a truncated lognormal distribution with pre-truncation 5th and 95th percentiles equal to 1 and 10 bp/G$ in terms of existential risk reduction. These are the lower and upper bounds proposed here by Linchuan Zhang.I assumed the ratio between the cost-effectiveness of i) longtermism and catastrophic risk prevention and ii) global health and development to be a truncated lognormal distribution with pre-truncation 5th and 95th percentiles equal to 10 and 100. These are the lower and upper bounds guessed here by Benjamin Todd for the ratio between the cost-effectiveness of the Long-Term Future Fund (LTFF) and Global Health and Development Fund (search for “10-100x more cost-effective”).I considered the ratio between the cost-effectiveness of i) animal welfare and ii) global health and development to be a truncated lognormal distribution with pre-truncation 5th and 95th percentiles equal to 270 μ and 211....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cost-effectiveness of operations management in high-impact organisations, published by Vasco Grilo on November 27, 2022 on The Effective Altruism Forum.SummaryFollowing up on the challenge to quantify the impact of 80,000 hours' top career paths introduced by Nuño Sempere, I have estimated the cost-effectiveness of operations management in high-impact organisations (OM), which arguably include 80,000 Hours’ top-recommended organisations.The results for the mean cost-effectiveness of various metrics in bp/G$ in terms of existential risk reduction are summarised in the table below for my preferred method. I present all results with 3 digits, but I think their resilience is such that they only represent order of magnitude estimates (i.e. they may well be wrong by a factor of 10^0.5 = 3).Mean cost-effectiveness (bp/G$) of.Global health and developmentLongtermism and catastrophic risk preventionAnimal welfareEffective altruism infrastructureThe effective altruism communityOperations management in high-impact organisationsMethod 3 with truncation0.4313.951.623.201.557.01AcknowledgementsThanks to Abraham Rowe, Dan Hendrycks, Luke Freeman, Matt Lerner, Nuño Sempere, Sawyer Bernath, Stien van der Ploeg, and Tamay Besiroglu.MethodsI estimated the cost-effectiveness from the product between:The cost-effectiveness of the high-impact organisations, which I assumed equal to that of the effective altruism community.The multiplier of OM, which I defined as the ratio between the cost-effectiveness of OM and the high-impact organisations.This method assumes the cost-effectiveness distribution of the high-impact organisations is represented by the one theorised for the effective altruism community in the next section. Moreover, the cost-effectiveness estimates are only accurate to the extent that future opportunities are as valuable as recent ones.The calculations are in this Colab.Cost-effectiveness of the effective altruism communityI calculated the cost-effectiveness of the effective altruism community from the mean cost-effectiveness weighted by cumulative spending between 1 January 2020 and 15 August 2022 of 4 cause areas:Global health and development.Longtermism and catastrophic risk prevention.Animal welfare.Effective altruism infrastructure.These are the areas for which Tyler Maule collected data here (see EA Forum post here). I adjusted the 2020 and 2021 values for inflation using the calculator from in2013dollars.I computed the cost-effectiveness of each area using 3 methods. All rely on distributions which are either truncated to the 99 % confidence interval (CI) or not truncated, in order to understand the effect of outliers. The parameters of the pre-truncation distributions, which are the final distributions for the non-truncation cases, are provided below.Method 1I defined the cost-effectiveness of longtermism and catastrophic risk prevention as a truncated lognormal distribution with pre-truncation 5th and 95th percentiles equal to 1 and 10 bp/G$ in terms of existential risk reduction. These are the lower and upper bounds proposed here by Linchuan Zhang.I assumed the ratio between the cost-effectiveness of i) longtermism and catastrophic risk prevention and ii) global health and development to be a truncated lognormal distribution with pre-truncation 5th and 95th percentiles equal to 10 and 100. These are the lower and upper bounds guessed here by Benjamin Todd for the ratio between the cost-effectiveness of the Long-Term Future Fund (LTFF) and Global Health and Development Fund (search for “10-100x more cost-effective”).I considered the ratio between the cost-effectiveness of i) animal welfare and ii) global health and development to be a truncated lognormal distribution with pre-truncation 5th and 95th percentiles equal to 270 μ and 211....]]>
Vasco Grilo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 27:04 None full 3917
zBbqjFCENh2yFjFiE_EA EA - Geometric Rationality is Not VNM Rational by Scott Garrabrant Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Geometric Rationality is Not VNM Rational, published by Scott Garrabrant on November 27, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Scott Garrabrant https://forum.effectivealtruism.org/posts/zBbqjFCENh2yFjFiE/geometric-rationality-is-not-vnm-rational Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Geometric Rationality is Not VNM Rational, published by Scott Garrabrant on November 27, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 27 Nov 2022 20:36:36 +0000 EA - Geometric Rationality is Not VNM Rational by Scott Garrabrant Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Geometric Rationality is Not VNM Rational, published by Scott Garrabrant on November 27, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Geometric Rationality is Not VNM Rational, published by Scott Garrabrant on November 27, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Scott Garrabrant https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:27 None full 3934
mCDEfENFgNJqWtydK_EA EA - How VCs can avoid being tricked by obvious frauds: Rohit Krishnan on Noahpinion (linkpost) by HaydnBelfield Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How VCs can avoid being tricked by obvious frauds: Rohit Krishnan on Noahpinion (linkpost), published by HaydnBelfield on November 27, 2022 on The Effective Altruism Forum.Rohit Krishnan is a former hedge fund manager. Both he and Noah Smith are now mainly-economics commentators, and have been good guides to the FTX crash on Twitter.I found this short piece very helpful in getting a sense of how big the screw-up was by the investors in FTX.It opens like this:We live in the golden age of technology fraud. When Theranos exploded, there was much hemming and hawing amongst the investing circles, mostly to note that the smart money on Sand Hill Road were not amongst those who lost their shirts. When WeWork put out its absolute sham of an IPO prospectus before getting cut by 80%, most folks said hey, it’s only the vision fund that was lacking vision.But now there’s a third head on that mountain, and it’s the biggest. Theranos only burned $700 million of investors' money. Neumann at WeWork supposedly burned around $4 Billion, but that was mostly from Softbank. FTX puts these to shame, incinerating at least $2 Billion of investors' money and another $6-8 Billion of customers’ money in mere hours. Soon to be legendary, worse than Enron and faster than Lehman, there is the singular fraud of FTX and its CEO Sam Bankman-Fried.But unlike those other crashes, this seems like it might take down multiple other firms, and create a 2008 moment for crypto, which used to be a $2 Trillion asset class. More importantly, to figure out how we can stop something like this from happening. Not fraud, since that’s part of the human OS, but at least having the smartest money around the table getting bamboozled by tousled hair and cargo shorts.The part that I found the most illuminating was this section on 'Dumb Enron' and some of the specific mistakes made by big investors like Temasek and Sequoia.I. The problem: this is Dumb EnronTemasek, not known to be a gunslinger in the venture world, released a statement after they lost $275 million with FTX. It’s carefully written and well worded, and is rather circumspect about what actually went wrong.They mention how their exposure was tiny (0.09% of AUM) and that they did extensive due diligence which took approximately 8 months, audited financial statements, and undertook regulatory risk assessments.But the most interesting part is here:As we only had a ~1% stake in FTX, we did not have a board seat. However, we take corporate governance seriously, engage the boards and management of our investee companies regularly and hold them accountable for the activities of their companies.Sequoia, when it lost $214 million across a couple of funds, also mentioned in their letter to LPs they did “extensive research and thorough due diligence”. A week later they apologized to the LPs on a call and said they'll do better, by maybe using the Big 4 to audit all startups. I suspect this is hyperbole because otherwise this is medicine sillier than the disease.These are not isolated errors in judgement though. The list of investors in FTX is a who’s who of the investing world - Sequoia, Paradigm, Thoma Bravo, Multicoin, Softbank, Temasek, Lux, Insight, Tiger Global.Doug Leone made the reasonable point that I made above, that VCs don’t really do forensic accounting. They got some audited financials, and it looked good, but it's a snapshot at the end of a quarter, so why would they know shenanigans had taken place!But honestly, if VCs had been snookered by Theranos, that would make more sense. Like what do VCs know about how much blood is needed to test something? Sure it doesn’t quite sound right (100s of tests from a single drop of blood!) and there were people saying this is impossible, but they say that kind of thing about everything! And Holmes’ pro...]]>
HaydnBelfield https://forum.effectivealtruism.org/posts/mCDEfENFgNJqWtydK/how-vcs-can-avoid-being-tricked-by-obvious-frauds-rohit Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How VCs can avoid being tricked by obvious frauds: Rohit Krishnan on Noahpinion (linkpost), published by HaydnBelfield on November 27, 2022 on The Effective Altruism Forum.Rohit Krishnan is a former hedge fund manager. Both he and Noah Smith are now mainly-economics commentators, and have been good guides to the FTX crash on Twitter.I found this short piece very helpful in getting a sense of how big the screw-up was by the investors in FTX.It opens like this:We live in the golden age of technology fraud. When Theranos exploded, there was much hemming and hawing amongst the investing circles, mostly to note that the smart money on Sand Hill Road were not amongst those who lost their shirts. When WeWork put out its absolute sham of an IPO prospectus before getting cut by 80%, most folks said hey, it’s only the vision fund that was lacking vision.But now there’s a third head on that mountain, and it’s the biggest. Theranos only burned $700 million of investors' money. Neumann at WeWork supposedly burned around $4 Billion, but that was mostly from Softbank. FTX puts these to shame, incinerating at least $2 Billion of investors' money and another $6-8 Billion of customers’ money in mere hours. Soon to be legendary, worse than Enron and faster than Lehman, there is the singular fraud of FTX and its CEO Sam Bankman-Fried.But unlike those other crashes, this seems like it might take down multiple other firms, and create a 2008 moment for crypto, which used to be a $2 Trillion asset class. More importantly, to figure out how we can stop something like this from happening. Not fraud, since that’s part of the human OS, but at least having the smartest money around the table getting bamboozled by tousled hair and cargo shorts.The part that I found the most illuminating was this section on 'Dumb Enron' and some of the specific mistakes made by big investors like Temasek and Sequoia.I. The problem: this is Dumb EnronTemasek, not known to be a gunslinger in the venture world, released a statement after they lost $275 million with FTX. It’s carefully written and well worded, and is rather circumspect about what actually went wrong.They mention how their exposure was tiny (0.09% of AUM) and that they did extensive due diligence which took approximately 8 months, audited financial statements, and undertook regulatory risk assessments.But the most interesting part is here:As we only had a ~1% stake in FTX, we did not have a board seat. However, we take corporate governance seriously, engage the boards and management of our investee companies regularly and hold them accountable for the activities of their companies.Sequoia, when it lost $214 million across a couple of funds, also mentioned in their letter to LPs they did “extensive research and thorough due diligence”. A week later they apologized to the LPs on a call and said they'll do better, by maybe using the Big 4 to audit all startups. I suspect this is hyperbole because otherwise this is medicine sillier than the disease.These are not isolated errors in judgement though. The list of investors in FTX is a who’s who of the investing world - Sequoia, Paradigm, Thoma Bravo, Multicoin, Softbank, Temasek, Lux, Insight, Tiger Global.Doug Leone made the reasonable point that I made above, that VCs don’t really do forensic accounting. They got some audited financials, and it looked good, but it's a snapshot at the end of a quarter, so why would they know shenanigans had taken place!But honestly, if VCs had been snookered by Theranos, that would make more sense. Like what do VCs know about how much blood is needed to test something? Sure it doesn’t quite sound right (100s of tests from a single drop of blood!) and there were people saying this is impossible, but they say that kind of thing about everything! And Holmes’ pro...]]>
Sun, 27 Nov 2022 18:09:02 +0000 EA - How VCs can avoid being tricked by obvious frauds: Rohit Krishnan on Noahpinion (linkpost) by HaydnBelfield Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How VCs can avoid being tricked by obvious frauds: Rohit Krishnan on Noahpinion (linkpost), published by HaydnBelfield on November 27, 2022 on The Effective Altruism Forum.Rohit Krishnan is a former hedge fund manager. Both he and Noah Smith are now mainly-economics commentators, and have been good guides to the FTX crash on Twitter.I found this short piece very helpful in getting a sense of how big the screw-up was by the investors in FTX.It opens like this:We live in the golden age of technology fraud. When Theranos exploded, there was much hemming and hawing amongst the investing circles, mostly to note that the smart money on Sand Hill Road were not amongst those who lost their shirts. When WeWork put out its absolute sham of an IPO prospectus before getting cut by 80%, most folks said hey, it’s only the vision fund that was lacking vision.But now there’s a third head on that mountain, and it’s the biggest. Theranos only burned $700 million of investors' money. Neumann at WeWork supposedly burned around $4 Billion, but that was mostly from Softbank. FTX puts these to shame, incinerating at least $2 Billion of investors' money and another $6-8 Billion of customers’ money in mere hours. Soon to be legendary, worse than Enron and faster than Lehman, there is the singular fraud of FTX and its CEO Sam Bankman-Fried.But unlike those other crashes, this seems like it might take down multiple other firms, and create a 2008 moment for crypto, which used to be a $2 Trillion asset class. More importantly, to figure out how we can stop something like this from happening. Not fraud, since that’s part of the human OS, but at least having the smartest money around the table getting bamboozled by tousled hair and cargo shorts.The part that I found the most illuminating was this section on 'Dumb Enron' and some of the specific mistakes made by big investors like Temasek and Sequoia.I. The problem: this is Dumb EnronTemasek, not known to be a gunslinger in the venture world, released a statement after they lost $275 million with FTX. It’s carefully written and well worded, and is rather circumspect about what actually went wrong.They mention how their exposure was tiny (0.09% of AUM) and that they did extensive due diligence which took approximately 8 months, audited financial statements, and undertook regulatory risk assessments.But the most interesting part is here:As we only had a ~1% stake in FTX, we did not have a board seat. However, we take corporate governance seriously, engage the boards and management of our investee companies regularly and hold them accountable for the activities of their companies.Sequoia, when it lost $214 million across a couple of funds, also mentioned in their letter to LPs they did “extensive research and thorough due diligence”. A week later they apologized to the LPs on a call and said they'll do better, by maybe using the Big 4 to audit all startups. I suspect this is hyperbole because otherwise this is medicine sillier than the disease.These are not isolated errors in judgement though. The list of investors in FTX is a who’s who of the investing world - Sequoia, Paradigm, Thoma Bravo, Multicoin, Softbank, Temasek, Lux, Insight, Tiger Global.Doug Leone made the reasonable point that I made above, that VCs don’t really do forensic accounting. They got some audited financials, and it looked good, but it's a snapshot at the end of a quarter, so why would they know shenanigans had taken place!But honestly, if VCs had been snookered by Theranos, that would make more sense. Like what do VCs know about how much blood is needed to test something? Sure it doesn’t quite sound right (100s of tests from a single drop of blood!) and there were people saying this is impossible, but they say that kind of thing about everything! And Holmes’ pro...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How VCs can avoid being tricked by obvious frauds: Rohit Krishnan on Noahpinion (linkpost), published by HaydnBelfield on November 27, 2022 on The Effective Altruism Forum.Rohit Krishnan is a former hedge fund manager. Both he and Noah Smith are now mainly-economics commentators, and have been good guides to the FTX crash on Twitter.I found this short piece very helpful in getting a sense of how big the screw-up was by the investors in FTX.It opens like this:We live in the golden age of technology fraud. When Theranos exploded, there was much hemming and hawing amongst the investing circles, mostly to note that the smart money on Sand Hill Road were not amongst those who lost their shirts. When WeWork put out its absolute sham of an IPO prospectus before getting cut by 80%, most folks said hey, it’s only the vision fund that was lacking vision.But now there’s a third head on that mountain, and it’s the biggest. Theranos only burned $700 million of investors' money. Neumann at WeWork supposedly burned around $4 Billion, but that was mostly from Softbank. FTX puts these to shame, incinerating at least $2 Billion of investors' money and another $6-8 Billion of customers’ money in mere hours. Soon to be legendary, worse than Enron and faster than Lehman, there is the singular fraud of FTX and its CEO Sam Bankman-Fried.But unlike those other crashes, this seems like it might take down multiple other firms, and create a 2008 moment for crypto, which used to be a $2 Trillion asset class. More importantly, to figure out how we can stop something like this from happening. Not fraud, since that’s part of the human OS, but at least having the smartest money around the table getting bamboozled by tousled hair and cargo shorts.The part that I found the most illuminating was this section on 'Dumb Enron' and some of the specific mistakes made by big investors like Temasek and Sequoia.I. The problem: this is Dumb EnronTemasek, not known to be a gunslinger in the venture world, released a statement after they lost $275 million with FTX. It’s carefully written and well worded, and is rather circumspect about what actually went wrong.They mention how their exposure was tiny (0.09% of AUM) and that they did extensive due diligence which took approximately 8 months, audited financial statements, and undertook regulatory risk assessments.But the most interesting part is here:As we only had a ~1% stake in FTX, we did not have a board seat. However, we take corporate governance seriously, engage the boards and management of our investee companies regularly and hold them accountable for the activities of their companies.Sequoia, when it lost $214 million across a couple of funds, also mentioned in their letter to LPs they did “extensive research and thorough due diligence”. A week later they apologized to the LPs on a call and said they'll do better, by maybe using the Big 4 to audit all startups. I suspect this is hyperbole because otherwise this is medicine sillier than the disease.These are not isolated errors in judgement though. The list of investors in FTX is a who’s who of the investing world - Sequoia, Paradigm, Thoma Bravo, Multicoin, Softbank, Temasek, Lux, Insight, Tiger Global.Doug Leone made the reasonable point that I made above, that VCs don’t really do forensic accounting. They got some audited financials, and it looked good, but it's a snapshot at the end of a quarter, so why would they know shenanigans had taken place!But honestly, if VCs had been snookered by Theranos, that would make more sense. Like what do VCs know about how much blood is needed to test something? Sure it doesn’t quite sound right (100s of tests from a single drop of blood!) and there were people saying this is impossible, but they say that kind of thing about everything! And Holmes’ pro...]]>
HaydnBelfield https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:40 None full 3916
kCCTBFgRt2L9Gqndw_EA EA - An EA storybook for kids by Simon Newstead Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An EA storybook for kids, published by Simon Newstead on November 27, 2022 on The Effective Altruism Forum.Hi folks,Sharing a new EA storybook for kids, designed to help introduce EA concepts to kids in a fun and age-appropriate way.Althea and the Generation Tree tells the story of a free-spirited girl and her trusty sidekick Hamster, who together make a fateful discovery from the distant past.The goal of the project is to help inspire kindness and thoughtfulness in future generations. It's a non-profit project, the e-book will be free and any proceeds from a future hardcopy version will be donated to charity.We'd like the book to be created in a collaborative way, and we're calling for beta readers from the community: - if you have any feedback on the concept or ideas or tips, feel free to shareThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Simon Newstead https://forum.effectivealtruism.org/posts/kCCTBFgRt2L9Gqndw/an-ea-storybook-for-kids Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An EA storybook for kids, published by Simon Newstead on November 27, 2022 on The Effective Altruism Forum.Hi folks,Sharing a new EA storybook for kids, designed to help introduce EA concepts to kids in a fun and age-appropriate way.Althea and the Generation Tree tells the story of a free-spirited girl and her trusty sidekick Hamster, who together make a fateful discovery from the distant past.The goal of the project is to help inspire kindness and thoughtfulness in future generations. It's a non-profit project, the e-book will be free and any proceeds from a future hardcopy version will be donated to charity.We'd like the book to be created in a collaborative way, and we're calling for beta readers from the community: - if you have any feedback on the concept or ideas or tips, feel free to shareThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 27 Nov 2022 16:09:08 +0000 EA - An EA storybook for kids by Simon Newstead Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An EA storybook for kids, published by Simon Newstead on November 27, 2022 on The Effective Altruism Forum.Hi folks,Sharing a new EA storybook for kids, designed to help introduce EA concepts to kids in a fun and age-appropriate way.Althea and the Generation Tree tells the story of a free-spirited girl and her trusty sidekick Hamster, who together make a fateful discovery from the distant past.The goal of the project is to help inspire kindness and thoughtfulness in future generations. It's a non-profit project, the e-book will be free and any proceeds from a future hardcopy version will be donated to charity.We'd like the book to be created in a collaborative way, and we're calling for beta readers from the community: - if you have any feedback on the concept or ideas or tips, feel free to shareThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An EA storybook for kids, published by Simon Newstead on November 27, 2022 on The Effective Altruism Forum.Hi folks,Sharing a new EA storybook for kids, designed to help introduce EA concepts to kids in a fun and age-appropriate way.Althea and the Generation Tree tells the story of a free-spirited girl and her trusty sidekick Hamster, who together make a fateful discovery from the distant past.The goal of the project is to help inspire kindness and thoughtfulness in future generations. It's a non-profit project, the e-book will be free and any proceeds from a future hardcopy version will be donated to charity.We'd like the book to be created in a collaborative way, and we're calling for beta readers from the community: - if you have any feedback on the concept or ideas or tips, feel free to shareThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Simon Newstead https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:02 None full 3918
st59vLvsorvQhqvBr_EA EA - FTX, 'EA Principles', and 'The (Longtermist) EA Community' by Violet Hour Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX, 'EA Principles', and 'The (Longtermist) EA Community', published by Violet Hour on November 25, 2022 on The Effective Altruism Forum.1. IntroTwo weeks ago, I was in the process of writing a different essay. Instead, I’ll touch on FTX, and make the following claims.I think ‘the principles of EA’ are, at the community level, indeterminate in important ways. This makes me feel uncertain about the degree to which we can legitimately make statements of the form: “SBF violated EA principles”.The longtermist community — despite not having a set of explicit, widely agreed upon, and determinate set of deontic norms — nevertheless contains a distinctive set of more implicit norms, which I believe are worth preserving at the community level. I thus suggest an alternative self-conception for the longtermist community, centered on striving towards a certain set of moral-cum-epistemic virtues.Section 2 discusses the first claim, and Section 3 discusses the second. Each chunk can probably be read independently, though I’d like it if you read them both.2. 'EA Principles’This section will criticize some of the comments in Will’s tweet thread, published in the aftermath of FTX’s collapse.I want to say that, while I’ll criticize some of Will’s remarks, I recognize that expressing yourself well under conditions of emotional stress is really, really hard. Despite this difficulty, I imagine that Will nevertheless felt he had to say something, and quickly. So, while I stand behind my criticism, I hope that my criticism can be viewed as an attempt to live up to ideals I think Will and I both share — of frank intellectual honesty, in service of a better world.2.1.From Will’s response:“If those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.” (emphasis mine)Overall, I’m not convinced. In his tweet thread, Will cites various sources — one of which is Holden’s post on the dangers of maximization, in which Holden makes the following claim:“I think “do the most good possible” is an . important idea . but it’s also a perilous idea if taken too far . Fortunately, I think EA mostly resists [the perils of maximization] – but that’s due to the good judgment and general anti-radicalism of the human beings involved, not because the ideas/themes/memes themselves offer enough guidance on how to avoid the pitfalls.”According to Holden, one of EA’s “core ideas” is a concern with maximization. And he thinks that the primary way in which EA avoids the pitfalls of their code ideas is through being tempered by moderating forces external to the core ideas themselves. If we weren’t tempered by moderating forces, Holden claims that:We’d have a community full of low-integrity people, and “bad people” as most people define it.Here’s one (to me natural) reading of Holden’s post, in light of the FTX debacle. SBF was a risk-neutral Benthamite, who describes his own motivations in founding FTX as the result of a risky, but positive expected value bet done in service of the greater good. And, indeed, there are other examples of Sam being really quite unusually committed to this risk-neutral, Benthamite way of approaching decisions. In light of this, one may think that Sam’s decision to deceive and commit fraud may well have been more in keeping with an attempt to meet the core EA idea of explicit maximization, even if his attempt was poorly executed. On this reading, Sam’s fault may not have consisted in abandoning the principles of the EA community. Instead, his failings may have arisen from the absence of normal moderating forces, which are external to EA ideas themselves.Recall Will’s statement: he claimed that, conditional on Sam committing frau...]]>
Violet Hour https://forum.effectivealtruism.org/posts/st59vLvsorvQhqvBr/ftx-ea-principles-and-the-longtermist-ea-community Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX, 'EA Principles', and 'The (Longtermist) EA Community', published by Violet Hour on November 25, 2022 on The Effective Altruism Forum.1. IntroTwo weeks ago, I was in the process of writing a different essay. Instead, I’ll touch on FTX, and make the following claims.I think ‘the principles of EA’ are, at the community level, indeterminate in important ways. This makes me feel uncertain about the degree to which we can legitimately make statements of the form: “SBF violated EA principles”.The longtermist community — despite not having a set of explicit, widely agreed upon, and determinate set of deontic norms — nevertheless contains a distinctive set of more implicit norms, which I believe are worth preserving at the community level. I thus suggest an alternative self-conception for the longtermist community, centered on striving towards a certain set of moral-cum-epistemic virtues.Section 2 discusses the first claim, and Section 3 discusses the second. Each chunk can probably be read independently, though I’d like it if you read them both.2. 'EA Principles’This section will criticize some of the comments in Will’s tweet thread, published in the aftermath of FTX’s collapse.I want to say that, while I’ll criticize some of Will’s remarks, I recognize that expressing yourself well under conditions of emotional stress is really, really hard. Despite this difficulty, I imagine that Will nevertheless felt he had to say something, and quickly. So, while I stand behind my criticism, I hope that my criticism can be viewed as an attempt to live up to ideals I think Will and I both share — of frank intellectual honesty, in service of a better world.2.1.From Will’s response:“If those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.” (emphasis mine)Overall, I’m not convinced. In his tweet thread, Will cites various sources — one of which is Holden’s post on the dangers of maximization, in which Holden makes the following claim:“I think “do the most good possible” is an . important idea . but it’s also a perilous idea if taken too far . Fortunately, I think EA mostly resists [the perils of maximization] – but that’s due to the good judgment and general anti-radicalism of the human beings involved, not because the ideas/themes/memes themselves offer enough guidance on how to avoid the pitfalls.”According to Holden, one of EA’s “core ideas” is a concern with maximization. And he thinks that the primary way in which EA avoids the pitfalls of their code ideas is through being tempered by moderating forces external to the core ideas themselves. If we weren’t tempered by moderating forces, Holden claims that:We’d have a community full of low-integrity people, and “bad people” as most people define it.Here’s one (to me natural) reading of Holden’s post, in light of the FTX debacle. SBF was a risk-neutral Benthamite, who describes his own motivations in founding FTX as the result of a risky, but positive expected value bet done in service of the greater good. And, indeed, there are other examples of Sam being really quite unusually committed to this risk-neutral, Benthamite way of approaching decisions. In light of this, one may think that Sam’s decision to deceive and commit fraud may well have been more in keeping with an attempt to meet the core EA idea of explicit maximization, even if his attempt was poorly executed. On this reading, Sam’s fault may not have consisted in abandoning the principles of the EA community. Instead, his failings may have arisen from the absence of normal moderating forces, which are external to EA ideas themselves.Recall Will’s statement: he claimed that, conditional on Sam committing frau...]]>
Sat, 26 Nov 2022 02:46:55 +0000 EA - FTX, 'EA Principles', and 'The (Longtermist) EA Community' by Violet Hour Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX, 'EA Principles', and 'The (Longtermist) EA Community', published by Violet Hour on November 25, 2022 on The Effective Altruism Forum.1. IntroTwo weeks ago, I was in the process of writing a different essay. Instead, I’ll touch on FTX, and make the following claims.I think ‘the principles of EA’ are, at the community level, indeterminate in important ways. This makes me feel uncertain about the degree to which we can legitimately make statements of the form: “SBF violated EA principles”.The longtermist community — despite not having a set of explicit, widely agreed upon, and determinate set of deontic norms — nevertheless contains a distinctive set of more implicit norms, which I believe are worth preserving at the community level. I thus suggest an alternative self-conception for the longtermist community, centered on striving towards a certain set of moral-cum-epistemic virtues.Section 2 discusses the first claim, and Section 3 discusses the second. Each chunk can probably be read independently, though I’d like it if you read them both.2. 'EA Principles’This section will criticize some of the comments in Will’s tweet thread, published in the aftermath of FTX’s collapse.I want to say that, while I’ll criticize some of Will’s remarks, I recognize that expressing yourself well under conditions of emotional stress is really, really hard. Despite this difficulty, I imagine that Will nevertheless felt he had to say something, and quickly. So, while I stand behind my criticism, I hope that my criticism can be viewed as an attempt to live up to ideals I think Will and I both share — of frank intellectual honesty, in service of a better world.2.1.From Will’s response:“If those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.” (emphasis mine)Overall, I’m not convinced. In his tweet thread, Will cites various sources — one of which is Holden’s post on the dangers of maximization, in which Holden makes the following claim:“I think “do the most good possible” is an . important idea . but it’s also a perilous idea if taken too far . Fortunately, I think EA mostly resists [the perils of maximization] – but that’s due to the good judgment and general anti-radicalism of the human beings involved, not because the ideas/themes/memes themselves offer enough guidance on how to avoid the pitfalls.”According to Holden, one of EA’s “core ideas” is a concern with maximization. And he thinks that the primary way in which EA avoids the pitfalls of their code ideas is through being tempered by moderating forces external to the core ideas themselves. If we weren’t tempered by moderating forces, Holden claims that:We’d have a community full of low-integrity people, and “bad people” as most people define it.Here’s one (to me natural) reading of Holden’s post, in light of the FTX debacle. SBF was a risk-neutral Benthamite, who describes his own motivations in founding FTX as the result of a risky, but positive expected value bet done in service of the greater good. And, indeed, there are other examples of Sam being really quite unusually committed to this risk-neutral, Benthamite way of approaching decisions. In light of this, one may think that Sam’s decision to deceive and commit fraud may well have been more in keeping with an attempt to meet the core EA idea of explicit maximization, even if his attempt was poorly executed. On this reading, Sam’s fault may not have consisted in abandoning the principles of the EA community. Instead, his failings may have arisen from the absence of normal moderating forces, which are external to EA ideas themselves.Recall Will’s statement: he claimed that, conditional on Sam committing frau...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX, 'EA Principles', and 'The (Longtermist) EA Community', published by Violet Hour on November 25, 2022 on The Effective Altruism Forum.1. IntroTwo weeks ago, I was in the process of writing a different essay. Instead, I’ll touch on FTX, and make the following claims.I think ‘the principles of EA’ are, at the community level, indeterminate in important ways. This makes me feel uncertain about the degree to which we can legitimately make statements of the form: “SBF violated EA principles”.The longtermist community — despite not having a set of explicit, widely agreed upon, and determinate set of deontic norms — nevertheless contains a distinctive set of more implicit norms, which I believe are worth preserving at the community level. I thus suggest an alternative self-conception for the longtermist community, centered on striving towards a certain set of moral-cum-epistemic virtues.Section 2 discusses the first claim, and Section 3 discusses the second. Each chunk can probably be read independently, though I’d like it if you read them both.2. 'EA Principles’This section will criticize some of the comments in Will’s tweet thread, published in the aftermath of FTX’s collapse.I want to say that, while I’ll criticize some of Will’s remarks, I recognize that expressing yourself well under conditions of emotional stress is really, really hard. Despite this difficulty, I imagine that Will nevertheless felt he had to say something, and quickly. So, while I stand behind my criticism, I hope that my criticism can be viewed as an attempt to live up to ideals I think Will and I both share — of frank intellectual honesty, in service of a better world.2.1.From Will’s response:“If those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.” (emphasis mine)Overall, I’m not convinced. In his tweet thread, Will cites various sources — one of which is Holden’s post on the dangers of maximization, in which Holden makes the following claim:“I think “do the most good possible” is an . important idea . but it’s also a perilous idea if taken too far . Fortunately, I think EA mostly resists [the perils of maximization] – but that’s due to the good judgment and general anti-radicalism of the human beings involved, not because the ideas/themes/memes themselves offer enough guidance on how to avoid the pitfalls.”According to Holden, one of EA’s “core ideas” is a concern with maximization. And he thinks that the primary way in which EA avoids the pitfalls of their code ideas is through being tempered by moderating forces external to the core ideas themselves. If we weren’t tempered by moderating forces, Holden claims that:We’d have a community full of low-integrity people, and “bad people” as most people define it.Here’s one (to me natural) reading of Holden’s post, in light of the FTX debacle. SBF was a risk-neutral Benthamite, who describes his own motivations in founding FTX as the result of a risky, but positive expected value bet done in service of the greater good. And, indeed, there are other examples of Sam being really quite unusually committed to this risk-neutral, Benthamite way of approaching decisions. In light of this, one may think that Sam’s decision to deceive and commit fraud may well have been more in keeping with an attempt to meet the core EA idea of explicit maximization, even if his attempt was poorly executed. On this reading, Sam’s fault may not have consisted in abandoning the principles of the EA community. Instead, his failings may have arisen from the absence of normal moderating forces, which are external to EA ideas themselves.Recall Will’s statement: he claimed that, conditional on Sam committing frau...]]>
Violet Hour https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 25:12 None full 3911
oZCPayvcxkDHubcDv_EA EA - Does putting kids in school now put money in their pockets later? Revisiting a natural experiment in Indonesia by droodman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does putting kids in school now put money in their pockets later? Revisiting a natural experiment in Indonesia, published by droodman on November 25, 2022 on The Effective Altruism Forum.Open Philanthropy’s Global Health and Wellbeing team continues to investigate potential areas for grantmaking. One of those is education in poorer countries. These countries have massively expanded schooling in the last half century. but many of their students lack minimal numeracy and literacy.To support the team’s assessment of the scope for doing good through education, I reviewed prominent research on the effect of schooling on how much children earn after they grow up. Here, I will describe my reanalysis of a study published by Esther Duflo in 2001. It finds that a big primary schooling expansion in Indonesia in the 1970s caused boys to go to school more — by 0.25–0.40 years on average over their childhoods — and boosted their wages as young adults, by 6.8–10.6% per extra year of schooling.I reproduced the original findings, introduced some technical changes, ran fresh tests, and thought hard about what is generating the patterns in the data. I wound up skeptical that the paper made its case. I think building primary schools probably led more kids to finish primary school (which is not a given in poor regions of a poor country). I’m less sure that it lifted pay in adulthood.Key points behind this conclusion:The study’s margins of error” — the indications of uncertainty — are too narrow. The reasons are several and technical. I hold this view mostly because, in the 21 years since the study was published, economists including Duflo have improved collective understanding of how to estimate uncertainty in these kinds of studies.The reported impact on wages does not clearly persist through life, at least according to a method I constructed to look for a statistical fingerprint of the school-building campaign.Under the study’s methods, normal patterns in Indonesian pay scales and the allocation of school funding can generate the appearance of an impact even if there was none.Switching to a modern method which filters out that mirage also erases the statistical results of the study.My full report is here. Data and code (to the extent shareable) are here.BackgroundThe Indonesia study started out as the first chapter of Esther Duflo’s Ph.D. thesis in 1999. It appeared in final form in the prestigious American Economic Review in 2001, which marked Duflo as a rising star. Within economics, the paper was emblematic of an ascendant emphasis on exploiting natural experiments in order to identify cause and effect (think Freakonomics).Here, the natural experiment was a sudden campaign to build tens of thousands of three-room schoolhouses across Indonesia. The country’s dictator, Suharto, launched the big push with a Presidential Instruction (Instruksi Presiden, or Inpres) in late 1973, soon after the first global oil shock sent revenue pouring into the nation’s treasury. I suspect that Suharto wanted not only to improve the lot of the poor, but also to consolidate the control of his government — which had come to power through a bloody coup in 1967 — over the ethnically fractious population of the far-flung and colonially constructed nation.I live near the Library of Congress, so I biked over there to peruse a copy of that 1973 presidential instruction. It reminded me of James Scott’s Seeing Like a State, which is about how public bureaucracies impose homogenizing paradigms on the polities they strive to control. After the legal text come neat tables decreeing how many schools are to be built in each regency. (Regencies are the second-level administrative unit in Indonesia, below provinces.) After the tables come pages of architectural plans, like the one at the top of this post.T...]]>
droodman https://forum.effectivealtruism.org/posts/oZCPayvcxkDHubcDv/does-putting-kids-in-school-now-put-money-in-their-pockets Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does putting kids in school now put money in their pockets later? Revisiting a natural experiment in Indonesia, published by droodman on November 25, 2022 on The Effective Altruism Forum.Open Philanthropy’s Global Health and Wellbeing team continues to investigate potential areas for grantmaking. One of those is education in poorer countries. These countries have massively expanded schooling in the last half century. but many of their students lack minimal numeracy and literacy.To support the team’s assessment of the scope for doing good through education, I reviewed prominent research on the effect of schooling on how much children earn after they grow up. Here, I will describe my reanalysis of a study published by Esther Duflo in 2001. It finds that a big primary schooling expansion in Indonesia in the 1970s caused boys to go to school more — by 0.25–0.40 years on average over their childhoods — and boosted their wages as young adults, by 6.8–10.6% per extra year of schooling.I reproduced the original findings, introduced some technical changes, ran fresh tests, and thought hard about what is generating the patterns in the data. I wound up skeptical that the paper made its case. I think building primary schools probably led more kids to finish primary school (which is not a given in poor regions of a poor country). I’m less sure that it lifted pay in adulthood.Key points behind this conclusion:The study’s margins of error” — the indications of uncertainty — are too narrow. The reasons are several and technical. I hold this view mostly because, in the 21 years since the study was published, economists including Duflo have improved collective understanding of how to estimate uncertainty in these kinds of studies.The reported impact on wages does not clearly persist through life, at least according to a method I constructed to look for a statistical fingerprint of the school-building campaign.Under the study’s methods, normal patterns in Indonesian pay scales and the allocation of school funding can generate the appearance of an impact even if there was none.Switching to a modern method which filters out that mirage also erases the statistical results of the study.My full report is here. Data and code (to the extent shareable) are here.BackgroundThe Indonesia study started out as the first chapter of Esther Duflo’s Ph.D. thesis in 1999. It appeared in final form in the prestigious American Economic Review in 2001, which marked Duflo as a rising star. Within economics, the paper was emblematic of an ascendant emphasis on exploiting natural experiments in order to identify cause and effect (think Freakonomics).Here, the natural experiment was a sudden campaign to build tens of thousands of three-room schoolhouses across Indonesia. The country’s dictator, Suharto, launched the big push with a Presidential Instruction (Instruksi Presiden, or Inpres) in late 1973, soon after the first global oil shock sent revenue pouring into the nation’s treasury. I suspect that Suharto wanted not only to improve the lot of the poor, but also to consolidate the control of his government — which had come to power through a bloody coup in 1967 — over the ethnically fractious population of the far-flung and colonially constructed nation.I live near the Library of Congress, so I biked over there to peruse a copy of that 1973 presidential instruction. It reminded me of James Scott’s Seeing Like a State, which is about how public bureaucracies impose homogenizing paradigms on the polities they strive to control. After the legal text come neat tables decreeing how many schools are to be built in each regency. (Regencies are the second-level administrative unit in Indonesia, below provinces.) After the tables come pages of architectural plans, like the one at the top of this post.T...]]>
Fri, 25 Nov 2022 23:08:36 +0000 EA - Does putting kids in school now put money in their pockets later? Revisiting a natural experiment in Indonesia by droodman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does putting kids in school now put money in their pockets later? Revisiting a natural experiment in Indonesia, published by droodman on November 25, 2022 on The Effective Altruism Forum.Open Philanthropy’s Global Health and Wellbeing team continues to investigate potential areas for grantmaking. One of those is education in poorer countries. These countries have massively expanded schooling in the last half century. but many of their students lack minimal numeracy and literacy.To support the team’s assessment of the scope for doing good through education, I reviewed prominent research on the effect of schooling on how much children earn after they grow up. Here, I will describe my reanalysis of a study published by Esther Duflo in 2001. It finds that a big primary schooling expansion in Indonesia in the 1970s caused boys to go to school more — by 0.25–0.40 years on average over their childhoods — and boosted their wages as young adults, by 6.8–10.6% per extra year of schooling.I reproduced the original findings, introduced some technical changes, ran fresh tests, and thought hard about what is generating the patterns in the data. I wound up skeptical that the paper made its case. I think building primary schools probably led more kids to finish primary school (which is not a given in poor regions of a poor country). I’m less sure that it lifted pay in adulthood.Key points behind this conclusion:The study’s margins of error” — the indications of uncertainty — are too narrow. The reasons are several and technical. I hold this view mostly because, in the 21 years since the study was published, economists including Duflo have improved collective understanding of how to estimate uncertainty in these kinds of studies.The reported impact on wages does not clearly persist through life, at least according to a method I constructed to look for a statistical fingerprint of the school-building campaign.Under the study’s methods, normal patterns in Indonesian pay scales and the allocation of school funding can generate the appearance of an impact even if there was none.Switching to a modern method which filters out that mirage also erases the statistical results of the study.My full report is here. Data and code (to the extent shareable) are here.BackgroundThe Indonesia study started out as the first chapter of Esther Duflo’s Ph.D. thesis in 1999. It appeared in final form in the prestigious American Economic Review in 2001, which marked Duflo as a rising star. Within economics, the paper was emblematic of an ascendant emphasis on exploiting natural experiments in order to identify cause and effect (think Freakonomics).Here, the natural experiment was a sudden campaign to build tens of thousands of three-room schoolhouses across Indonesia. The country’s dictator, Suharto, launched the big push with a Presidential Instruction (Instruksi Presiden, or Inpres) in late 1973, soon after the first global oil shock sent revenue pouring into the nation’s treasury. I suspect that Suharto wanted not only to improve the lot of the poor, but also to consolidate the control of his government — which had come to power through a bloody coup in 1967 — over the ethnically fractious population of the far-flung and colonially constructed nation.I live near the Library of Congress, so I biked over there to peruse a copy of that 1973 presidential instruction. It reminded me of James Scott’s Seeing Like a State, which is about how public bureaucracies impose homogenizing paradigms on the polities they strive to control. After the legal text come neat tables decreeing how many schools are to be built in each regency. (Regencies are the second-level administrative unit in Indonesia, below provinces.) After the tables come pages of architectural plans, like the one at the top of this post.T...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does putting kids in school now put money in their pockets later? Revisiting a natural experiment in Indonesia, published by droodman on November 25, 2022 on The Effective Altruism Forum.Open Philanthropy’s Global Health and Wellbeing team continues to investigate potential areas for grantmaking. One of those is education in poorer countries. These countries have massively expanded schooling in the last half century. but many of their students lack minimal numeracy and literacy.To support the team’s assessment of the scope for doing good through education, I reviewed prominent research on the effect of schooling on how much children earn after they grow up. Here, I will describe my reanalysis of a study published by Esther Duflo in 2001. It finds that a big primary schooling expansion in Indonesia in the 1970s caused boys to go to school more — by 0.25–0.40 years on average over their childhoods — and boosted their wages as young adults, by 6.8–10.6% per extra year of schooling.I reproduced the original findings, introduced some technical changes, ran fresh tests, and thought hard about what is generating the patterns in the data. I wound up skeptical that the paper made its case. I think building primary schools probably led more kids to finish primary school (which is not a given in poor regions of a poor country). I’m less sure that it lifted pay in adulthood.Key points behind this conclusion:The study’s margins of error” — the indications of uncertainty — are too narrow. The reasons are several and technical. I hold this view mostly because, in the 21 years since the study was published, economists including Duflo have improved collective understanding of how to estimate uncertainty in these kinds of studies.The reported impact on wages does not clearly persist through life, at least according to a method I constructed to look for a statistical fingerprint of the school-building campaign.Under the study’s methods, normal patterns in Indonesian pay scales and the allocation of school funding can generate the appearance of an impact even if there was none.Switching to a modern method which filters out that mirage also erases the statistical results of the study.My full report is here. Data and code (to the extent shareable) are here.BackgroundThe Indonesia study started out as the first chapter of Esther Duflo’s Ph.D. thesis in 1999. It appeared in final form in the prestigious American Economic Review in 2001, which marked Duflo as a rising star. Within economics, the paper was emblematic of an ascendant emphasis on exploiting natural experiments in order to identify cause and effect (think Freakonomics).Here, the natural experiment was a sudden campaign to build tens of thousands of three-room schoolhouses across Indonesia. The country’s dictator, Suharto, launched the big push with a Presidential Instruction (Instruksi Presiden, or Inpres) in late 1973, soon after the first global oil shock sent revenue pouring into the nation’s treasury. I suspect that Suharto wanted not only to improve the lot of the poor, but also to consolidate the control of his government — which had come to power through a bloody coup in 1967 — over the ethnically fractious population of the far-flung and colonially constructed nation.I live near the Library of Congress, so I biked over there to peruse a copy of that 1973 presidential instruction. It reminded me of James Scott’s Seeing Like a State, which is about how public bureaucracies impose homogenizing paradigms on the polities they strive to control. After the legal text come neat tables decreeing how many schools are to be built in each regency. (Regencies are the second-level administrative unit in Indonesia, below provinces.) After the tables come pages of architectural plans, like the one at the top of this post.T...]]>
droodman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 31:45 None full 3909
bDWNb7LcbJLBAz7Gd_EA EA - Could a single alien message destroy us? by Writer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Could a single alien message destroy us?, published by Writer on November 25, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Writer https://forum.effectivealtruism.org/posts/bDWNb7LcbJLBAz7Gd/could-a-single-alien-message-destroy-us-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Could a single alien message destroy us?, published by Writer on November 25, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 25 Nov 2022 17:17:28 +0000 EA - Could a single alien message destroy us? by Writer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Could a single alien message destroy us?, published by Writer on November 25, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Could a single alien message destroy us?, published by Writer on November 25, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Writer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:25 None full 3907
THgezaPxhvoizkRFy_EA EA - Clarifications on diminishing returns and risk aversion in giving by Robert Wiblin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Clarifications on diminishing returns and risk aversion in giving, published by Robert Wiblin on November 25, 2022 on The Effective Altruism Forum.In April when we released my interview with SBF, I attempted to very quickly explain his views on expected value and risk aversion for the episode description, but unfortunately did so in a way that was both confusing and made them sound more like a description of my views rather than his.Those few paragraphs have gotten substantial attention because Matt Yglesias pointed out where it could go wrong, and wasn't impressed, thinking that I'd presented an analytic error as "sound EA doctrine".So it seems worth clarifying what I actually do think. In brief, I entirely agree with Matt Yglesias that:Returns to additional money are certainly not linear at large scales, which counsels in favour of risk aversion.Returns become sublinear more quickly when you're working on more niche cause areas like longtermism, relative to larger cause areas such as global poverty alleviation.This sublinearity becomes especially pronounced when you're considering giving on the scale of billions rather than millions of dollars.There are other major practical considerations that point in favour of risk-aversion as well.(SBF appears to think the effects above are smaller than Matt or I do, but it's hard to know exactly what he believes, so I'll set that aside here.)The offending paragraphs in the original post were:"If you were offered a 100% chance of $1 million to keep yourself, or a 10% chance of $15 million — it makes total sense to play it safe. You’d be devastated if you lost, and barely happier if you won.But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome [in this case $1.5 billion] — and so swing for the fences.This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million.The point from the conversation that I wanted to highlight — and what is clearly true — is that for an individual who is going to spend the money on themselves, the fact that one quickly runs out of any useful way to spend the money to improve one's well-being makes it far more sensible to receive $1 billion with certainty than to accept a 90% chance of walking away with nothing.On the other hand, if you plan to spend the money to help others, such as by distributing it to the world's poorest people, then the good one does by dispersing the first dollar and the billionth dollar are much closer together than if you were spending them on yourself. That greatly strengthens the case for taking the risk of receiving nothing in return for a larger amount on average, relative to the personal case.But: the impact of the first dollar and the billionth dollar aren't identical, and in fact could be very different, so calling the approach 'totally rational' was somewhere between an oversimplification and an error.Before we get to that though, we should flag a practical consideration that is as important, or maybe more so, than getting the shape of the returns curve precisely right.As Yglesias points out, once you have begun a foundation and people are building organisations and careers in the expectation of a known minimum level of funding for their field, there are particular harms to risking your entire existing endowment in a way that could leave them and their work stranded and half-finished.While in the hypothetical your downside is meant to be capped at zero, in reality, 'swinging for the fences' with...]]>
Robert Wiblin https://forum.effectivealtruism.org/posts/THgezaPxhvoizkRFy/clarifications-on-diminishing-returns-and-risk-aversion-in Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Clarifications on diminishing returns and risk aversion in giving, published by Robert Wiblin on November 25, 2022 on The Effective Altruism Forum.In April when we released my interview with SBF, I attempted to very quickly explain his views on expected value and risk aversion for the episode description, but unfortunately did so in a way that was both confusing and made them sound more like a description of my views rather than his.Those few paragraphs have gotten substantial attention because Matt Yglesias pointed out where it could go wrong, and wasn't impressed, thinking that I'd presented an analytic error as "sound EA doctrine".So it seems worth clarifying what I actually do think. In brief, I entirely agree with Matt Yglesias that:Returns to additional money are certainly not linear at large scales, which counsels in favour of risk aversion.Returns become sublinear more quickly when you're working on more niche cause areas like longtermism, relative to larger cause areas such as global poverty alleviation.This sublinearity becomes especially pronounced when you're considering giving on the scale of billions rather than millions of dollars.There are other major practical considerations that point in favour of risk-aversion as well.(SBF appears to think the effects above are smaller than Matt or I do, but it's hard to know exactly what he believes, so I'll set that aside here.)The offending paragraphs in the original post were:"If you were offered a 100% chance of $1 million to keep yourself, or a 10% chance of $15 million — it makes total sense to play it safe. You’d be devastated if you lost, and barely happier if you won.But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome [in this case $1.5 billion] — and so swing for the fences.This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million.The point from the conversation that I wanted to highlight — and what is clearly true — is that for an individual who is going to spend the money on themselves, the fact that one quickly runs out of any useful way to spend the money to improve one's well-being makes it far more sensible to receive $1 billion with certainty than to accept a 90% chance of walking away with nothing.On the other hand, if you plan to spend the money to help others, such as by distributing it to the world's poorest people, then the good one does by dispersing the first dollar and the billionth dollar are much closer together than if you were spending them on yourself. That greatly strengthens the case for taking the risk of receiving nothing in return for a larger amount on average, relative to the personal case.But: the impact of the first dollar and the billionth dollar aren't identical, and in fact could be very different, so calling the approach 'totally rational' was somewhere between an oversimplification and an error.Before we get to that though, we should flag a practical consideration that is as important, or maybe more so, than getting the shape of the returns curve precisely right.As Yglesias points out, once you have begun a foundation and people are building organisations and careers in the expectation of a known minimum level of funding for their field, there are particular harms to risking your entire existing endowment in a way that could leave them and their work stranded and half-finished.While in the hypothetical your downside is meant to be capped at zero, in reality, 'swinging for the fences' with...]]>
Fri, 25 Nov 2022 15:16:41 +0000 EA - Clarifications on diminishing returns and risk aversion in giving by Robert Wiblin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Clarifications on diminishing returns and risk aversion in giving, published by Robert Wiblin on November 25, 2022 on The Effective Altruism Forum.In April when we released my interview with SBF, I attempted to very quickly explain his views on expected value and risk aversion for the episode description, but unfortunately did so in a way that was both confusing and made them sound more like a description of my views rather than his.Those few paragraphs have gotten substantial attention because Matt Yglesias pointed out where it could go wrong, and wasn't impressed, thinking that I'd presented an analytic error as "sound EA doctrine".So it seems worth clarifying what I actually do think. In brief, I entirely agree with Matt Yglesias that:Returns to additional money are certainly not linear at large scales, which counsels in favour of risk aversion.Returns become sublinear more quickly when you're working on more niche cause areas like longtermism, relative to larger cause areas such as global poverty alleviation.This sublinearity becomes especially pronounced when you're considering giving on the scale of billions rather than millions of dollars.There are other major practical considerations that point in favour of risk-aversion as well.(SBF appears to think the effects above are smaller than Matt or I do, but it's hard to know exactly what he believes, so I'll set that aside here.)The offending paragraphs in the original post were:"If you were offered a 100% chance of $1 million to keep yourself, or a 10% chance of $15 million — it makes total sense to play it safe. You’d be devastated if you lost, and barely happier if you won.But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome [in this case $1.5 billion] — and so swing for the fences.This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million.The point from the conversation that I wanted to highlight — and what is clearly true — is that for an individual who is going to spend the money on themselves, the fact that one quickly runs out of any useful way to spend the money to improve one's well-being makes it far more sensible to receive $1 billion with certainty than to accept a 90% chance of walking away with nothing.On the other hand, if you plan to spend the money to help others, such as by distributing it to the world's poorest people, then the good one does by dispersing the first dollar and the billionth dollar are much closer together than if you were spending them on yourself. That greatly strengthens the case for taking the risk of receiving nothing in return for a larger amount on average, relative to the personal case.But: the impact of the first dollar and the billionth dollar aren't identical, and in fact could be very different, so calling the approach 'totally rational' was somewhere between an oversimplification and an error.Before we get to that though, we should flag a practical consideration that is as important, or maybe more so, than getting the shape of the returns curve precisely right.As Yglesias points out, once you have begun a foundation and people are building organisations and careers in the expectation of a known minimum level of funding for their field, there are particular harms to risking your entire existing endowment in a way that could leave them and their work stranded and half-finished.While in the hypothetical your downside is meant to be capped at zero, in reality, 'swinging for the fences' with...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Clarifications on diminishing returns and risk aversion in giving, published by Robert Wiblin on November 25, 2022 on The Effective Altruism Forum.In April when we released my interview with SBF, I attempted to very quickly explain his views on expected value and risk aversion for the episode description, but unfortunately did so in a way that was both confusing and made them sound more like a description of my views rather than his.Those few paragraphs have gotten substantial attention because Matt Yglesias pointed out where it could go wrong, and wasn't impressed, thinking that I'd presented an analytic error as "sound EA doctrine".So it seems worth clarifying what I actually do think. In brief, I entirely agree with Matt Yglesias that:Returns to additional money are certainly not linear at large scales, which counsels in favour of risk aversion.Returns become sublinear more quickly when you're working on more niche cause areas like longtermism, relative to larger cause areas such as global poverty alleviation.This sublinearity becomes especially pronounced when you're considering giving on the scale of billions rather than millions of dollars.There are other major practical considerations that point in favour of risk-aversion as well.(SBF appears to think the effects above are smaller than Matt or I do, but it's hard to know exactly what he believes, so I'll set that aside here.)The offending paragraphs in the original post were:"If you were offered a 100% chance of $1 million to keep yourself, or a 10% chance of $15 million — it makes total sense to play it safe. You’d be devastated if you lost, and barely happier if you won.But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome [in this case $1.5 billion] — and so swing for the fences.This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million.The point from the conversation that I wanted to highlight — and what is clearly true — is that for an individual who is going to spend the money on themselves, the fact that one quickly runs out of any useful way to spend the money to improve one's well-being makes it far more sensible to receive $1 billion with certainty than to accept a 90% chance of walking away with nothing.On the other hand, if you plan to spend the money to help others, such as by distributing it to the world's poorest people, then the good one does by dispersing the first dollar and the billionth dollar are much closer together than if you were spending them on yourself. That greatly strengthens the case for taking the risk of receiving nothing in return for a larger amount on average, relative to the personal case.But: the impact of the first dollar and the billionth dollar aren't identical, and in fact could be very different, so calling the approach 'totally rational' was somewhere between an oversimplification and an error.Before we get to that though, we should flag a practical consideration that is as important, or maybe more so, than getting the shape of the returns curve precisely right.As Yglesias points out, once you have begun a foundation and people are building organisations and careers in the expectation of a known minimum level of funding for their field, there are particular harms to risking your entire existing endowment in a way that could leave them and their work stranded and half-finished.While in the hypothetical your downside is meant to be capped at zero, in reality, 'swinging for the fences' with...]]>
Robert Wiblin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:54 None full 3901
TH4wN6sB8TFjdt87b_EA EA - Sentience Institute 2022 End of Year Summary by MichaelDello Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sentience Institute 2022 End of Year Summary, published by MichaelDello on November 25, 2022 on The Effective Altruism Forum.SummarySentience Institute is a 501(c)3 nonprofit think tank aiming to maximize positive impact through longtermist research on social and technological change, particularly moral circle expansion. Our main focus in 2022 has been conducting high-quality empirical research, primarily surveys and behavioral experiments, to build the field of digital minds research (e.g., How will humans react to AI that seems agentic and intentional? How will we know when an AI is sentient?). Our two most notable publications this year are a report and open-access data for our 2021 Artificial Intelligence, Morality, and Sentience (AIMS) survey and our paper in Computers in Human Behavior on “Predicting the Moral Consideration of Artificial Intelligences,” and we have substantial room for more funding to continue and expand this work in 2023 and beyond.AIMS is the first nationally representative longitudinal survey of attitudes on these topics. Even today with limited AI capabilities, we are already seeing the social relationship we have with these digital minds bearing on public discourse, funding, and other events in the trajectory of AI. The CiHB predictors paper is a deep dive into demographic and psychological predictors of AI attitudes, so we can understand why certain people view these topics in the way that they do. This follows up on our 2021 conceptual paper in Futures and literature review in Science and Engineering Ethics. We have also been working to build this new field through hosting a podcast, an AI summit at the University of Chicago, and a regular intergroup call between organizations working on this topic (e.g., Center on Long-Term Risk, Future of Humanity Institute).The urgency of building this field has been underscored this year in two ways in 2022. First, AIs are rapidly becoming more advanced, as illustrated in the amazing performance of image generation models — OpenAI’s DALL-E 2, Midjourney, and Stable Diffusion — as well as DeepMind’s Gato as a general-purpose transformer and high-performing language models Chinchilla and Google’s PaLM. Second, the topic of AI sentience had one of its first spikes in the mainstream news cycle, first as OpenAI chief scientist Ilya Sutskever tweeted “it may be that today's large neural networks are slightly conscious” in February, and then a much larger spike as Google Engineer Blake Lemoine was fired after claiming their language model LaMDA is “sentient.”In 2023, we hope to continue our empirical research as well as develop these findings into a digital minds “cause profile,” making the case for it as a highly neglected, tractable cause area in effective altruism with an extremely large scale. The perceptions, nature, and effects of digital minds are quickly becoming an important part of the trajectory of advanced AI systems and seem like they will continue to be so in medium- and long-term futures. This cause profile would be in part a more rigorous version of our blog post, “The Importance of Artificial Sentience.”2022 has been a challenging year for effective altruism funding. There was already an economic downturn when this month’s collapse of FTX, one of the two largest EA funders, left many EA organizations like us scrambling for funding and many other funders stretched thin. We expect substantially less funding in the coming months, and we need your help to continue work in this time-sensitive area, perhaps more than any other year since we were founded in 2017. We hope to raise $90,000 this giving season for our work on digital minds, such as surveys, experimental research, the cause profile, and other field-building projects. We will also continue some work on factory farming thanks to ge...]]>
MichaelDello https://forum.effectivealtruism.org/posts/TH4wN6sB8TFjdt87b/sentience-institute-2022-end-of-year-summary Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sentience Institute 2022 End of Year Summary, published by MichaelDello on November 25, 2022 on The Effective Altruism Forum.SummarySentience Institute is a 501(c)3 nonprofit think tank aiming to maximize positive impact through longtermist research on social and technological change, particularly moral circle expansion. Our main focus in 2022 has been conducting high-quality empirical research, primarily surveys and behavioral experiments, to build the field of digital minds research (e.g., How will humans react to AI that seems agentic and intentional? How will we know when an AI is sentient?). Our two most notable publications this year are a report and open-access data for our 2021 Artificial Intelligence, Morality, and Sentience (AIMS) survey and our paper in Computers in Human Behavior on “Predicting the Moral Consideration of Artificial Intelligences,” and we have substantial room for more funding to continue and expand this work in 2023 and beyond.AIMS is the first nationally representative longitudinal survey of attitudes on these topics. Even today with limited AI capabilities, we are already seeing the social relationship we have with these digital minds bearing on public discourse, funding, and other events in the trajectory of AI. The CiHB predictors paper is a deep dive into demographic and psychological predictors of AI attitudes, so we can understand why certain people view these topics in the way that they do. This follows up on our 2021 conceptual paper in Futures and literature review in Science and Engineering Ethics. We have also been working to build this new field through hosting a podcast, an AI summit at the University of Chicago, and a regular intergroup call between organizations working on this topic (e.g., Center on Long-Term Risk, Future of Humanity Institute).The urgency of building this field has been underscored this year in two ways in 2022. First, AIs are rapidly becoming more advanced, as illustrated in the amazing performance of image generation models — OpenAI’s DALL-E 2, Midjourney, and Stable Diffusion — as well as DeepMind’s Gato as a general-purpose transformer and high-performing language models Chinchilla and Google’s PaLM. Second, the topic of AI sentience had one of its first spikes in the mainstream news cycle, first as OpenAI chief scientist Ilya Sutskever tweeted “it may be that today's large neural networks are slightly conscious” in February, and then a much larger spike as Google Engineer Blake Lemoine was fired after claiming their language model LaMDA is “sentient.”In 2023, we hope to continue our empirical research as well as develop these findings into a digital minds “cause profile,” making the case for it as a highly neglected, tractable cause area in effective altruism with an extremely large scale. The perceptions, nature, and effects of digital minds are quickly becoming an important part of the trajectory of advanced AI systems and seem like they will continue to be so in medium- and long-term futures. This cause profile would be in part a more rigorous version of our blog post, “The Importance of Artificial Sentience.”2022 has been a challenging year for effective altruism funding. There was already an economic downturn when this month’s collapse of FTX, one of the two largest EA funders, left many EA organizations like us scrambling for funding and many other funders stretched thin. We expect substantially less funding in the coming months, and we need your help to continue work in this time-sensitive area, perhaps more than any other year since we were founded in 2017. We hope to raise $90,000 this giving season for our work on digital minds, such as surveys, experimental research, the cause profile, and other field-building projects. We will also continue some work on factory farming thanks to ge...]]>
Fri, 25 Nov 2022 15:02:39 +0000 EA - Sentience Institute 2022 End of Year Summary by MichaelDello Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sentience Institute 2022 End of Year Summary, published by MichaelDello on November 25, 2022 on The Effective Altruism Forum.SummarySentience Institute is a 501(c)3 nonprofit think tank aiming to maximize positive impact through longtermist research on social and technological change, particularly moral circle expansion. Our main focus in 2022 has been conducting high-quality empirical research, primarily surveys and behavioral experiments, to build the field of digital minds research (e.g., How will humans react to AI that seems agentic and intentional? How will we know when an AI is sentient?). Our two most notable publications this year are a report and open-access data for our 2021 Artificial Intelligence, Morality, and Sentience (AIMS) survey and our paper in Computers in Human Behavior on “Predicting the Moral Consideration of Artificial Intelligences,” and we have substantial room for more funding to continue and expand this work in 2023 and beyond.AIMS is the first nationally representative longitudinal survey of attitudes on these topics. Even today with limited AI capabilities, we are already seeing the social relationship we have with these digital minds bearing on public discourse, funding, and other events in the trajectory of AI. The CiHB predictors paper is a deep dive into demographic and psychological predictors of AI attitudes, so we can understand why certain people view these topics in the way that they do. This follows up on our 2021 conceptual paper in Futures and literature review in Science and Engineering Ethics. We have also been working to build this new field through hosting a podcast, an AI summit at the University of Chicago, and a regular intergroup call between organizations working on this topic (e.g., Center on Long-Term Risk, Future of Humanity Institute).The urgency of building this field has been underscored this year in two ways in 2022. First, AIs are rapidly becoming more advanced, as illustrated in the amazing performance of image generation models — OpenAI’s DALL-E 2, Midjourney, and Stable Diffusion — as well as DeepMind’s Gato as a general-purpose transformer and high-performing language models Chinchilla and Google’s PaLM. Second, the topic of AI sentience had one of its first spikes in the mainstream news cycle, first as OpenAI chief scientist Ilya Sutskever tweeted “it may be that today's large neural networks are slightly conscious” in February, and then a much larger spike as Google Engineer Blake Lemoine was fired after claiming their language model LaMDA is “sentient.”In 2023, we hope to continue our empirical research as well as develop these findings into a digital minds “cause profile,” making the case for it as a highly neglected, tractable cause area in effective altruism with an extremely large scale. The perceptions, nature, and effects of digital minds are quickly becoming an important part of the trajectory of advanced AI systems and seem like they will continue to be so in medium- and long-term futures. This cause profile would be in part a more rigorous version of our blog post, “The Importance of Artificial Sentience.”2022 has been a challenging year for effective altruism funding. There was already an economic downturn when this month’s collapse of FTX, one of the two largest EA funders, left many EA organizations like us scrambling for funding and many other funders stretched thin. We expect substantially less funding in the coming months, and we need your help to continue work in this time-sensitive area, perhaps more than any other year since we were founded in 2017. We hope to raise $90,000 this giving season for our work on digital minds, such as surveys, experimental research, the cause profile, and other field-building projects. We will also continue some work on factory farming thanks to ge...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sentience Institute 2022 End of Year Summary, published by MichaelDello on November 25, 2022 on The Effective Altruism Forum.SummarySentience Institute is a 501(c)3 nonprofit think tank aiming to maximize positive impact through longtermist research on social and technological change, particularly moral circle expansion. Our main focus in 2022 has been conducting high-quality empirical research, primarily surveys and behavioral experiments, to build the field of digital minds research (e.g., How will humans react to AI that seems agentic and intentional? How will we know when an AI is sentient?). Our two most notable publications this year are a report and open-access data for our 2021 Artificial Intelligence, Morality, and Sentience (AIMS) survey and our paper in Computers in Human Behavior on “Predicting the Moral Consideration of Artificial Intelligences,” and we have substantial room for more funding to continue and expand this work in 2023 and beyond.AIMS is the first nationally representative longitudinal survey of attitudes on these topics. Even today with limited AI capabilities, we are already seeing the social relationship we have with these digital minds bearing on public discourse, funding, and other events in the trajectory of AI. The CiHB predictors paper is a deep dive into demographic and psychological predictors of AI attitudes, so we can understand why certain people view these topics in the way that they do. This follows up on our 2021 conceptual paper in Futures and literature review in Science and Engineering Ethics. We have also been working to build this new field through hosting a podcast, an AI summit at the University of Chicago, and a regular intergroup call between organizations working on this topic (e.g., Center on Long-Term Risk, Future of Humanity Institute).The urgency of building this field has been underscored this year in two ways in 2022. First, AIs are rapidly becoming more advanced, as illustrated in the amazing performance of image generation models — OpenAI’s DALL-E 2, Midjourney, and Stable Diffusion — as well as DeepMind’s Gato as a general-purpose transformer and high-performing language models Chinchilla and Google’s PaLM. Second, the topic of AI sentience had one of its first spikes in the mainstream news cycle, first as OpenAI chief scientist Ilya Sutskever tweeted “it may be that today's large neural networks are slightly conscious” in February, and then a much larger spike as Google Engineer Blake Lemoine was fired after claiming their language model LaMDA is “sentient.”In 2023, we hope to continue our empirical research as well as develop these findings into a digital minds “cause profile,” making the case for it as a highly neglected, tractable cause area in effective altruism with an extremely large scale. The perceptions, nature, and effects of digital minds are quickly becoming an important part of the trajectory of advanced AI systems and seem like they will continue to be so in medium- and long-term futures. This cause profile would be in part a more rigorous version of our blog post, “The Importance of Artificial Sentience.”2022 has been a challenging year for effective altruism funding. There was already an economic downturn when this month’s collapse of FTX, one of the two largest EA funders, left many EA organizations like us scrambling for funding and many other funders stretched thin. We expect substantially less funding in the coming months, and we need your help to continue work in this time-sensitive area, perhaps more than any other year since we were founded in 2017. We hope to raise $90,000 this giving season for our work on digital minds, such as surveys, experimental research, the cause profile, and other field-building projects. We will also continue some work on factory farming thanks to ge...]]>
MichaelDello https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 12:36 None full 3906
jbsmfPjRH6irTP6zu_EA EA - Some feelings, and what’s keeping me going by Michelle Hutchinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some feelings, and what’s keeping me going, published by Michelle Hutchinson on November 25, 2022 on The Effective Altruism Forum.It seems weird to start a public blog post with how I’m feeling, because that seems so irrelevant to what’s happening right now, and to anyone who doesn’t know me. But a couple of people suggested it might be good to write a post like this. Also, over some of the past week I couldn’t be around others (covid), and found that pretty isolating and alienating while the FTX crisis was intensifying. For people who feel less surrounded by like-minded people in daily life, I thought it might be helpful to have some more voices on the forum expressing how we’re feeling. Sharing our coping strategies also seems particularly useful right now, given that many of us are currently dealing both with difficult emotions, more life uncertainty and greatly increased stress.In what follows, I’m not going to touch much on what specifically happened around FTX. I’ve haven’t been following the reporting as closely as others and don’t have a background in finance. I’m also not going to say that much about what I’ve learned from the debacle. I want to come back to what I should learn going forward when the situation is clearer, the most urgent response work has settled down and I can think more clearly. As usual for my posts on the forum, I’m writing in a personal capacity.[Written 19/11/2022]How I’m feelingI was hesitant to write about my feelings not just because they seem irrelevant, but also because it feels hard even to know how I’m feeling, let alone describe it. I also honestly really don’t want to inhabit a bunch of my emotions right now. But I’ll have a go at describing them here.Please don’t take any of these as in any way a prescription on how people should feel. Some of my friends feel angry and betrayed right now, some feel confused about their place in the world, some feel totally fine and largely unaffected by last week’s events. All of those sound totally reasonable to me.Here are a few of the kinds of things I’ve been feeling over the last week, to give a flavour of how much it feels like I’ve bounced around: Panic and urgency over figuring out how the things I’m responsible for need to be handled and done differently in light of this. Frustration and anger at how powerless I feel to really help. Anguish that whatever I tried to work on I somehow seemed to upset someone or let someone down. Gratitude for being able to go home to my son and how excited toddlers are to see you even if you can’t do anything more in the world than provide food and cuddles.I’ve been trying not to inhabit my feelings of sadness too much. There’s a lot to do, now of all times, so being too upset to contribute seems worth avoiding if I can. I’ve found two things particularly salient, and their sadness impossible to avoid. One is interpersonal conflict. It’s an extremely high stress time in which people are even busier than usual, so it’s easier than ever to cause friction, whether with friends/colleagues/online acquaintances. I’m also finding that harder to deal with than usual, I think because my refuge is usually caring relationships. The other sadness I’m finding impossible to avoid inhabiting is that some of my dearest friends are feeling deeply sad in a way I can do almost nothing to alleviate.Those are the things that I can’t get away from. But I know that in some light they don’t even seem that significant. There are so many sadnesses now that I really want to avoid looking at. One is sadness for the people who lost their money, despite thinking it was safely deposited rather than even being invested. Like a lot of us, I know people who lost money to this, including most of their life savings. Another is sadness for people who had been planning important e...]]>
Michelle Hutchinson https://forum.effectivealtruism.org/posts/jbsmfPjRH6irTP6zu/some-feelings-and-what-s-keeping-me-going Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some feelings, and what’s keeping me going, published by Michelle Hutchinson on November 25, 2022 on The Effective Altruism Forum.It seems weird to start a public blog post with how I’m feeling, because that seems so irrelevant to what’s happening right now, and to anyone who doesn’t know me. But a couple of people suggested it might be good to write a post like this. Also, over some of the past week I couldn’t be around others (covid), and found that pretty isolating and alienating while the FTX crisis was intensifying. For people who feel less surrounded by like-minded people in daily life, I thought it might be helpful to have some more voices on the forum expressing how we’re feeling. Sharing our coping strategies also seems particularly useful right now, given that many of us are currently dealing both with difficult emotions, more life uncertainty and greatly increased stress.In what follows, I’m not going to touch much on what specifically happened around FTX. I’ve haven’t been following the reporting as closely as others and don’t have a background in finance. I’m also not going to say that much about what I’ve learned from the debacle. I want to come back to what I should learn going forward when the situation is clearer, the most urgent response work has settled down and I can think more clearly. As usual for my posts on the forum, I’m writing in a personal capacity.[Written 19/11/2022]How I’m feelingI was hesitant to write about my feelings not just because they seem irrelevant, but also because it feels hard even to know how I’m feeling, let alone describe it. I also honestly really don’t want to inhabit a bunch of my emotions right now. But I’ll have a go at describing them here.Please don’t take any of these as in any way a prescription on how people should feel. Some of my friends feel angry and betrayed right now, some feel confused about their place in the world, some feel totally fine and largely unaffected by last week’s events. All of those sound totally reasonable to me.Here are a few of the kinds of things I’ve been feeling over the last week, to give a flavour of how much it feels like I’ve bounced around: Panic and urgency over figuring out how the things I’m responsible for need to be handled and done differently in light of this. Frustration and anger at how powerless I feel to really help. Anguish that whatever I tried to work on I somehow seemed to upset someone or let someone down. Gratitude for being able to go home to my son and how excited toddlers are to see you even if you can’t do anything more in the world than provide food and cuddles.I’ve been trying not to inhabit my feelings of sadness too much. There’s a lot to do, now of all times, so being too upset to contribute seems worth avoiding if I can. I’ve found two things particularly salient, and their sadness impossible to avoid. One is interpersonal conflict. It’s an extremely high stress time in which people are even busier than usual, so it’s easier than ever to cause friction, whether with friends/colleagues/online acquaintances. I’m also finding that harder to deal with than usual, I think because my refuge is usually caring relationships. The other sadness I’m finding impossible to avoid inhabiting is that some of my dearest friends are feeling deeply sad in a way I can do almost nothing to alleviate.Those are the things that I can’t get away from. But I know that in some light they don’t even seem that significant. There are so many sadnesses now that I really want to avoid looking at. One is sadness for the people who lost their money, despite thinking it was safely deposited rather than even being invested. Like a lot of us, I know people who lost money to this, including most of their life savings. Another is sadness for people who had been planning important e...]]>
Fri, 25 Nov 2022 12:56:44 +0000 EA - Some feelings, and what’s keeping me going by Michelle Hutchinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some feelings, and what’s keeping me going, published by Michelle Hutchinson on November 25, 2022 on The Effective Altruism Forum.It seems weird to start a public blog post with how I’m feeling, because that seems so irrelevant to what’s happening right now, and to anyone who doesn’t know me. But a couple of people suggested it might be good to write a post like this. Also, over some of the past week I couldn’t be around others (covid), and found that pretty isolating and alienating while the FTX crisis was intensifying. For people who feel less surrounded by like-minded people in daily life, I thought it might be helpful to have some more voices on the forum expressing how we’re feeling. Sharing our coping strategies also seems particularly useful right now, given that many of us are currently dealing both with difficult emotions, more life uncertainty and greatly increased stress.In what follows, I’m not going to touch much on what specifically happened around FTX. I’ve haven’t been following the reporting as closely as others and don’t have a background in finance. I’m also not going to say that much about what I’ve learned from the debacle. I want to come back to what I should learn going forward when the situation is clearer, the most urgent response work has settled down and I can think more clearly. As usual for my posts on the forum, I’m writing in a personal capacity.[Written 19/11/2022]How I’m feelingI was hesitant to write about my feelings not just because they seem irrelevant, but also because it feels hard even to know how I’m feeling, let alone describe it. I also honestly really don’t want to inhabit a bunch of my emotions right now. But I’ll have a go at describing them here.Please don’t take any of these as in any way a prescription on how people should feel. Some of my friends feel angry and betrayed right now, some feel confused about their place in the world, some feel totally fine and largely unaffected by last week’s events. All of those sound totally reasonable to me.Here are a few of the kinds of things I’ve been feeling over the last week, to give a flavour of how much it feels like I’ve bounced around: Panic and urgency over figuring out how the things I’m responsible for need to be handled and done differently in light of this. Frustration and anger at how powerless I feel to really help. Anguish that whatever I tried to work on I somehow seemed to upset someone or let someone down. Gratitude for being able to go home to my son and how excited toddlers are to see you even if you can’t do anything more in the world than provide food and cuddles.I’ve been trying not to inhabit my feelings of sadness too much. There’s a lot to do, now of all times, so being too upset to contribute seems worth avoiding if I can. I’ve found two things particularly salient, and their sadness impossible to avoid. One is interpersonal conflict. It’s an extremely high stress time in which people are even busier than usual, so it’s easier than ever to cause friction, whether with friends/colleagues/online acquaintances. I’m also finding that harder to deal with than usual, I think because my refuge is usually caring relationships. The other sadness I’m finding impossible to avoid inhabiting is that some of my dearest friends are feeling deeply sad in a way I can do almost nothing to alleviate.Those are the things that I can’t get away from. But I know that in some light they don’t even seem that significant. There are so many sadnesses now that I really want to avoid looking at. One is sadness for the people who lost their money, despite thinking it was safely deposited rather than even being invested. Like a lot of us, I know people who lost money to this, including most of their life savings. Another is sadness for people who had been planning important e...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some feelings, and what’s keeping me going, published by Michelle Hutchinson on November 25, 2022 on The Effective Altruism Forum.It seems weird to start a public blog post with how I’m feeling, because that seems so irrelevant to what’s happening right now, and to anyone who doesn’t know me. But a couple of people suggested it might be good to write a post like this. Also, over some of the past week I couldn’t be around others (covid), and found that pretty isolating and alienating while the FTX crisis was intensifying. For people who feel less surrounded by like-minded people in daily life, I thought it might be helpful to have some more voices on the forum expressing how we’re feeling. Sharing our coping strategies also seems particularly useful right now, given that many of us are currently dealing both with difficult emotions, more life uncertainty and greatly increased stress.In what follows, I’m not going to touch much on what specifically happened around FTX. I’ve haven’t been following the reporting as closely as others and don’t have a background in finance. I’m also not going to say that much about what I’ve learned from the debacle. I want to come back to what I should learn going forward when the situation is clearer, the most urgent response work has settled down and I can think more clearly. As usual for my posts on the forum, I’m writing in a personal capacity.[Written 19/11/2022]How I’m feelingI was hesitant to write about my feelings not just because they seem irrelevant, but also because it feels hard even to know how I’m feeling, let alone describe it. I also honestly really don’t want to inhabit a bunch of my emotions right now. But I’ll have a go at describing them here.Please don’t take any of these as in any way a prescription on how people should feel. Some of my friends feel angry and betrayed right now, some feel confused about their place in the world, some feel totally fine and largely unaffected by last week’s events. All of those sound totally reasonable to me.Here are a few of the kinds of things I’ve been feeling over the last week, to give a flavour of how much it feels like I’ve bounced around: Panic and urgency over figuring out how the things I’m responsible for need to be handled and done differently in light of this. Frustration and anger at how powerless I feel to really help. Anguish that whatever I tried to work on I somehow seemed to upset someone or let someone down. Gratitude for being able to go home to my son and how excited toddlers are to see you even if you can’t do anything more in the world than provide food and cuddles.I’ve been trying not to inhabit my feelings of sadness too much. There’s a lot to do, now of all times, so being too upset to contribute seems worth avoiding if I can. I’ve found two things particularly salient, and their sadness impossible to avoid. One is interpersonal conflict. It’s an extremely high stress time in which people are even busier than usual, so it’s easier than ever to cause friction, whether with friends/colleagues/online acquaintances. I’m also finding that harder to deal with than usual, I think because my refuge is usually caring relationships. The other sadness I’m finding impossible to avoid inhabiting is that some of my dearest friends are feeling deeply sad in a way I can do almost nothing to alleviate.Those are the things that I can’t get away from. But I know that in some light they don’t even seem that significant. There are so many sadnesses now that I really want to avoid looking at. One is sadness for the people who lost their money, despite thinking it was safely deposited rather than even being invested. Like a lot of us, I know people who lost money to this, including most of their life savings. Another is sadness for people who had been planning important e...]]>
Michelle Hutchinson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:58 None full 3904
GnJQaSaXRebZgrmg3_EA EA - The 2022 Giving What We Can Donor Lottery is now open by Fabio Kuhn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 2022 Giving What We Can Donor Lottery is now open, published by Fabio Kuhn on November 25, 2022 on The Effective Altruism Forum.The 2022 Giving What We Can donor lottery (previously managed by EA Funds) is now open, with block sizes of $100k, $500k, and $2 million. To enter a lottery, or to learn more about donor lotteries, head to the Giving What We Can donor lottery page.Why enter a donor lottery?We’ve written before about why you should give to a donor lottery this giving season. If you’re intrigued about what a donor lottery is, or why you might want to enter, I’d recommend reading that article, including the comments (note that it’s from last year, so some specifics like block sizes and dates are different). I’ve copied in some of the key points below:A donor lottery allows you to turn your donation into a larger donation with some probability, while holding the expected donation amount constant. E.g., you can trade a $1,000 donation for a 1% chance of allocating $100,000 worth of donations. Your expected donation size stays constant at $1,000.If you win, it will be worth the time to think more carefully about where to allocate the money. Because extra time thinking is more likely to lead to better (rather than worse) decisions, this leads to more (expected) impact overall, even though your expected donation size stays the same.For this reason, we believe that a donor lottery is the most effective way for most smaller donors to give the majority of their donations, for those who feel comfortable with it.If you win, we can put you in touch with experienced grantmakers who can help you with the decision.You should only participate in a donor lottery if you think there’s a good chance you (or someone who you trust) will spend additional time thinking about your donation if you win.We also think there’s a good case for continuing to make some fraction of your donations directly, to keep engaged with EA donation opportunities.You can participate anonymously if you like.Continue reading the original article here.Practical information about the 2022/2023 Donor LotteryThe Giving What We Can Donor Lottery is now open. The lottery will close to new entries on Monday, Jan 10 2023, 12:00 PM UTC. Any payments not confirmed by Giving What We Can by Monday, Jan 17, 2023, 12:00 PM UTC will not be accepted as entries.The lotteries will be drawn starting at Mon, Jan 24, 2023, 12:00 PM UTC (drawings for each block size will be spaced five minutes apart).There will be three block sizes:$100,000$500,000$2,000,000Which block you decide to enter is up to you (there are no minimum entry sizes on any of the blocks). If you’re not sure, we suggest that you aim to enter with a 1%-30% chance of winning.Donations are tax-deductible in the US, the UK, and the Netherlands. However, if you live somewhere else, you should still consider entering if you think the expected value of the lottery (including the potential to allocate winnings to projects that are more effective than the most effective charity that’s tax-deductible where you live) is a larger value-add than tax-deductibility.It is possible to participate anonymously, such that your personal details will only be visible to Giving What We Can and EV operational staff, even if you win. By default, all grants will be made public, unless winners or recipients request otherwise.For more in-depth information about the lottery process (including the important Caveats and Limitations section), please see the donor lottery website.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fabio Kuhn https://forum.effectivealtruism.org/posts/GnJQaSaXRebZgrmg3/the-2022-giving-what-we-can-donor-lottery-is-now-open Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 2022 Giving What We Can Donor Lottery is now open, published by Fabio Kuhn on November 25, 2022 on The Effective Altruism Forum.The 2022 Giving What We Can donor lottery (previously managed by EA Funds) is now open, with block sizes of $100k, $500k, and $2 million. To enter a lottery, or to learn more about donor lotteries, head to the Giving What We Can donor lottery page.Why enter a donor lottery?We’ve written before about why you should give to a donor lottery this giving season. If you’re intrigued about what a donor lottery is, or why you might want to enter, I’d recommend reading that article, including the comments (note that it’s from last year, so some specifics like block sizes and dates are different). I’ve copied in some of the key points below:A donor lottery allows you to turn your donation into a larger donation with some probability, while holding the expected donation amount constant. E.g., you can trade a $1,000 donation for a 1% chance of allocating $100,000 worth of donations. Your expected donation size stays constant at $1,000.If you win, it will be worth the time to think more carefully about where to allocate the money. Because extra time thinking is more likely to lead to better (rather than worse) decisions, this leads to more (expected) impact overall, even though your expected donation size stays the same.For this reason, we believe that a donor lottery is the most effective way for most smaller donors to give the majority of their donations, for those who feel comfortable with it.If you win, we can put you in touch with experienced grantmakers who can help you with the decision.You should only participate in a donor lottery if you think there’s a good chance you (or someone who you trust) will spend additional time thinking about your donation if you win.We also think there’s a good case for continuing to make some fraction of your donations directly, to keep engaged with EA donation opportunities.You can participate anonymously if you like.Continue reading the original article here.Practical information about the 2022/2023 Donor LotteryThe Giving What We Can Donor Lottery is now open. The lottery will close to new entries on Monday, Jan 10 2023, 12:00 PM UTC. Any payments not confirmed by Giving What We Can by Monday, Jan 17, 2023, 12:00 PM UTC will not be accepted as entries.The lotteries will be drawn starting at Mon, Jan 24, 2023, 12:00 PM UTC (drawings for each block size will be spaced five minutes apart).There will be three block sizes:$100,000$500,000$2,000,000Which block you decide to enter is up to you (there are no minimum entry sizes on any of the blocks). If you’re not sure, we suggest that you aim to enter with a 1%-30% chance of winning.Donations are tax-deductible in the US, the UK, and the Netherlands. However, if you live somewhere else, you should still consider entering if you think the expected value of the lottery (including the potential to allocate winnings to projects that are more effective than the most effective charity that’s tax-deductible where you live) is a larger value-add than tax-deductibility.It is possible to participate anonymously, such that your personal details will only be visible to Giving What We Can and EV operational staff, even if you win. By default, all grants will be made public, unless winners or recipients request otherwise.For more in-depth information about the lottery process (including the important Caveats and Limitations section), please see the donor lottery website.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 25 Nov 2022 12:15:26 +0000 EA - The 2022 Giving What We Can Donor Lottery is now open by Fabio Kuhn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 2022 Giving What We Can Donor Lottery is now open, published by Fabio Kuhn on November 25, 2022 on The Effective Altruism Forum.The 2022 Giving What We Can donor lottery (previously managed by EA Funds) is now open, with block sizes of $100k, $500k, and $2 million. To enter a lottery, or to learn more about donor lotteries, head to the Giving What We Can donor lottery page.Why enter a donor lottery?We’ve written before about why you should give to a donor lottery this giving season. If you’re intrigued about what a donor lottery is, or why you might want to enter, I’d recommend reading that article, including the comments (note that it’s from last year, so some specifics like block sizes and dates are different). I’ve copied in some of the key points below:A donor lottery allows you to turn your donation into a larger donation with some probability, while holding the expected donation amount constant. E.g., you can trade a $1,000 donation for a 1% chance of allocating $100,000 worth of donations. Your expected donation size stays constant at $1,000.If you win, it will be worth the time to think more carefully about where to allocate the money. Because extra time thinking is more likely to lead to better (rather than worse) decisions, this leads to more (expected) impact overall, even though your expected donation size stays the same.For this reason, we believe that a donor lottery is the most effective way for most smaller donors to give the majority of their donations, for those who feel comfortable with it.If you win, we can put you in touch with experienced grantmakers who can help you with the decision.You should only participate in a donor lottery if you think there’s a good chance you (or someone who you trust) will spend additional time thinking about your donation if you win.We also think there’s a good case for continuing to make some fraction of your donations directly, to keep engaged with EA donation opportunities.You can participate anonymously if you like.Continue reading the original article here.Practical information about the 2022/2023 Donor LotteryThe Giving What We Can Donor Lottery is now open. The lottery will close to new entries on Monday, Jan 10 2023, 12:00 PM UTC. Any payments not confirmed by Giving What We Can by Monday, Jan 17, 2023, 12:00 PM UTC will not be accepted as entries.The lotteries will be drawn starting at Mon, Jan 24, 2023, 12:00 PM UTC (drawings for each block size will be spaced five minutes apart).There will be three block sizes:$100,000$500,000$2,000,000Which block you decide to enter is up to you (there are no minimum entry sizes on any of the blocks). If you’re not sure, we suggest that you aim to enter with a 1%-30% chance of winning.Donations are tax-deductible in the US, the UK, and the Netherlands. However, if you live somewhere else, you should still consider entering if you think the expected value of the lottery (including the potential to allocate winnings to projects that are more effective than the most effective charity that’s tax-deductible where you live) is a larger value-add than tax-deductibility.It is possible to participate anonymously, such that your personal details will only be visible to Giving What We Can and EV operational staff, even if you win. By default, all grants will be made public, unless winners or recipients request otherwise.For more in-depth information about the lottery process (including the important Caveats and Limitations section), please see the donor lottery website.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 2022 Giving What We Can Donor Lottery is now open, published by Fabio Kuhn on November 25, 2022 on The Effective Altruism Forum.The 2022 Giving What We Can donor lottery (previously managed by EA Funds) is now open, with block sizes of $100k, $500k, and $2 million. To enter a lottery, or to learn more about donor lotteries, head to the Giving What We Can donor lottery page.Why enter a donor lottery?We’ve written before about why you should give to a donor lottery this giving season. If you’re intrigued about what a donor lottery is, or why you might want to enter, I’d recommend reading that article, including the comments (note that it’s from last year, so some specifics like block sizes and dates are different). I’ve copied in some of the key points below:A donor lottery allows you to turn your donation into a larger donation with some probability, while holding the expected donation amount constant. E.g., you can trade a $1,000 donation for a 1% chance of allocating $100,000 worth of donations. Your expected donation size stays constant at $1,000.If you win, it will be worth the time to think more carefully about where to allocate the money. Because extra time thinking is more likely to lead to better (rather than worse) decisions, this leads to more (expected) impact overall, even though your expected donation size stays the same.For this reason, we believe that a donor lottery is the most effective way for most smaller donors to give the majority of their donations, for those who feel comfortable with it.If you win, we can put you in touch with experienced grantmakers who can help you with the decision.You should only participate in a donor lottery if you think there’s a good chance you (or someone who you trust) will spend additional time thinking about your donation if you win.We also think there’s a good case for continuing to make some fraction of your donations directly, to keep engaged with EA donation opportunities.You can participate anonymously if you like.Continue reading the original article here.Practical information about the 2022/2023 Donor LotteryThe Giving What We Can Donor Lottery is now open. The lottery will close to new entries on Monday, Jan 10 2023, 12:00 PM UTC. Any payments not confirmed by Giving What We Can by Monday, Jan 17, 2023, 12:00 PM UTC will not be accepted as entries.The lotteries will be drawn starting at Mon, Jan 24, 2023, 12:00 PM UTC (drawings for each block size will be spaced five minutes apart).There will be three block sizes:$100,000$500,000$2,000,000Which block you decide to enter is up to you (there are no minimum entry sizes on any of the blocks). If you’re not sure, we suggest that you aim to enter with a 1%-30% chance of winning.Donations are tax-deductible in the US, the UK, and the Netherlands. However, if you live somewhere else, you should still consider entering if you think the expected value of the lottery (including the potential to allocate winnings to projects that are more effective than the most effective charity that’s tax-deductible where you live) is a larger value-add than tax-deductibility.It is possible to participate anonymously, such that your personal details will only be visible to Giving What We Can and EV operational staff, even if you win. By default, all grants will be made public, unless winners or recipients request otherwise.For more in-depth information about the lottery process (including the important Caveats and Limitations section), please see the donor lottery website.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fabio Kuhn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:43 None full 3908
5iQoR8mhEpvRT43jv_EA EA - Part 1: The AI Safety community has four main work groups, Strategy, Governance, Technical and Movement Building by PeterSlattery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Part 1: The AI Safety community has four main work groups, Strategy, Governance, Technical and Movement Building, published by PeterSlattery on November 25, 2022 on The Effective Altruism Forum.Epistemic statusWritten as a non-expert to develop and get feedback on my views, rather than persuade. It will probably be somewhat incomplete and inaccurate, but it should provoke helpful feedback and discussion.AimThis is the first part of my series ‘A proposed approach for AI safety movement building’. Through this series, I outline a theory of change for AI Safety movement building. I don’t necessarily want to immediately accelerate recruitment into AI safety because I take concerns (e.g., 1,2) about the downsides of AI Safety movement building seriously. However, I do want to understand how different viewpoints within the AI Safety community overlap and aggregate.I start by attempting to conceptualise the AI Safety community. I originally planned to outline my theory of change in my first post. However, when I got feedback, I realised that i) I conceptualised the AI Safety community differently from some of my readers, and ii) I wasn’t confident in my understanding of all the key parts.TLDRI argue that the AI Safety community mainly comprises four overlapping, self-identifying, groups: Strategy, Governance, Technical and Movement Building.I explain what each group does and what differentiates it from the other groupsI outline a few other potential work groupsI integrate these into an illustration of my current conceptualisation of the AI Safety communityI request constructive feedback.My conceptualisation of the AI Safety communityAt a high level of simplification and low level of precision, the AI Safety community mainly comprises four overlapping, self-identifying, groups who are working to prevent an AI-related catastrophe. These groups are Strategy, Governance, Technical and Movement Building. These are illustrated below.We can compare the AI Safety community to a government and relate each work group to a government body. I think this helps clarify how the parts of the community fit together (though of course, the analogies are imperfect).StrategyThe AI Safety Strategy group seeks to mitigate AI risk by understanding and influencing strategy.Their work focuses on developing strategies (i.e., plans of action) that maximise the probability that we achieve positive AI-related outcomes and avoid catastrophes. In practice, this includes researching, evaluating, developing, and disseminating strategy (see this for more detail).They attempt to answer questions such as i) ‘how can we best distribute funds to improve interpretability?’, ii) ‘when should we expect transformative AI?’, or iii) “What is happening in areas relevant to AI?”.Due to a lack of ‘strategic clarity/consensus’ most AI strategy work focuses on research. However, Toby Ord’s submission to the UK parliament is arguably an example of developing, and disseminating an AI Safety related strategy.We can compare the Strategy group to the Executive Branch of a government, which sets a strategy for the state and parts of the government, while also attempting to understand and influence the strategies of external parties (e.g., organisations and nations).AI Safety Strategy exemplars: Holden Karnofsky, Toby Ord and Luke Muehlhauser.AI Safety Strategy post examples (1,2,3,4).GovernanceThe AI Safety Governance group seeks to mitigate AI risk by understanding and influencing decision-making.Their work focuses on understanding how decisions are made about AI and what institutions and arrangements help those decisions to be made well. In practice, this includes consultation, research, policy advocacy and policy implementation (see 1 & 2 for more detail).They attempt to answer questions such as i) ...]]>
PeterSlattery https://forum.effectivealtruism.org/posts/5iQoR8mhEpvRT43jv/part-1-the-ai-safety-community-has-four-main-work-groups Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Part 1: The AI Safety community has four main work groups, Strategy, Governance, Technical and Movement Building, published by PeterSlattery on November 25, 2022 on The Effective Altruism Forum.Epistemic statusWritten as a non-expert to develop and get feedback on my views, rather than persuade. It will probably be somewhat incomplete and inaccurate, but it should provoke helpful feedback and discussion.AimThis is the first part of my series ‘A proposed approach for AI safety movement building’. Through this series, I outline a theory of change for AI Safety movement building. I don’t necessarily want to immediately accelerate recruitment into AI safety because I take concerns (e.g., 1,2) about the downsides of AI Safety movement building seriously. However, I do want to understand how different viewpoints within the AI Safety community overlap and aggregate.I start by attempting to conceptualise the AI Safety community. I originally planned to outline my theory of change in my first post. However, when I got feedback, I realised that i) I conceptualised the AI Safety community differently from some of my readers, and ii) I wasn’t confident in my understanding of all the key parts.TLDRI argue that the AI Safety community mainly comprises four overlapping, self-identifying, groups: Strategy, Governance, Technical and Movement Building.I explain what each group does and what differentiates it from the other groupsI outline a few other potential work groupsI integrate these into an illustration of my current conceptualisation of the AI Safety communityI request constructive feedback.My conceptualisation of the AI Safety communityAt a high level of simplification and low level of precision, the AI Safety community mainly comprises four overlapping, self-identifying, groups who are working to prevent an AI-related catastrophe. These groups are Strategy, Governance, Technical and Movement Building. These are illustrated below.We can compare the AI Safety community to a government and relate each work group to a government body. I think this helps clarify how the parts of the community fit together (though of course, the analogies are imperfect).StrategyThe AI Safety Strategy group seeks to mitigate AI risk by understanding and influencing strategy.Their work focuses on developing strategies (i.e., plans of action) that maximise the probability that we achieve positive AI-related outcomes and avoid catastrophes. In practice, this includes researching, evaluating, developing, and disseminating strategy (see this for more detail).They attempt to answer questions such as i) ‘how can we best distribute funds to improve interpretability?’, ii) ‘when should we expect transformative AI?’, or iii) “What is happening in areas relevant to AI?”.Due to a lack of ‘strategic clarity/consensus’ most AI strategy work focuses on research. However, Toby Ord’s submission to the UK parliament is arguably an example of developing, and disseminating an AI Safety related strategy.We can compare the Strategy group to the Executive Branch of a government, which sets a strategy for the state and parts of the government, while also attempting to understand and influence the strategies of external parties (e.g., organisations and nations).AI Safety Strategy exemplars: Holden Karnofsky, Toby Ord and Luke Muehlhauser.AI Safety Strategy post examples (1,2,3,4).GovernanceThe AI Safety Governance group seeks to mitigate AI risk by understanding and influencing decision-making.Their work focuses on understanding how decisions are made about AI and what institutions and arrangements help those decisions to be made well. In practice, this includes consultation, research, policy advocacy and policy implementation (see 1 & 2 for more detail).They attempt to answer questions such as i) ...]]>
Fri, 25 Nov 2022 06:41:30 +0000 EA - Part 1: The AI Safety community has four main work groups, Strategy, Governance, Technical and Movement Building by PeterSlattery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Part 1: The AI Safety community has four main work groups, Strategy, Governance, Technical and Movement Building, published by PeterSlattery on November 25, 2022 on The Effective Altruism Forum.Epistemic statusWritten as a non-expert to develop and get feedback on my views, rather than persuade. It will probably be somewhat incomplete and inaccurate, but it should provoke helpful feedback and discussion.AimThis is the first part of my series ‘A proposed approach for AI safety movement building’. Through this series, I outline a theory of change for AI Safety movement building. I don’t necessarily want to immediately accelerate recruitment into AI safety because I take concerns (e.g., 1,2) about the downsides of AI Safety movement building seriously. However, I do want to understand how different viewpoints within the AI Safety community overlap and aggregate.I start by attempting to conceptualise the AI Safety community. I originally planned to outline my theory of change in my first post. However, when I got feedback, I realised that i) I conceptualised the AI Safety community differently from some of my readers, and ii) I wasn’t confident in my understanding of all the key parts.TLDRI argue that the AI Safety community mainly comprises four overlapping, self-identifying, groups: Strategy, Governance, Technical and Movement Building.I explain what each group does and what differentiates it from the other groupsI outline a few other potential work groupsI integrate these into an illustration of my current conceptualisation of the AI Safety communityI request constructive feedback.My conceptualisation of the AI Safety communityAt a high level of simplification and low level of precision, the AI Safety community mainly comprises four overlapping, self-identifying, groups who are working to prevent an AI-related catastrophe. These groups are Strategy, Governance, Technical and Movement Building. These are illustrated below.We can compare the AI Safety community to a government and relate each work group to a government body. I think this helps clarify how the parts of the community fit together (though of course, the analogies are imperfect).StrategyThe AI Safety Strategy group seeks to mitigate AI risk by understanding and influencing strategy.Their work focuses on developing strategies (i.e., plans of action) that maximise the probability that we achieve positive AI-related outcomes and avoid catastrophes. In practice, this includes researching, evaluating, developing, and disseminating strategy (see this for more detail).They attempt to answer questions such as i) ‘how can we best distribute funds to improve interpretability?’, ii) ‘when should we expect transformative AI?’, or iii) “What is happening in areas relevant to AI?”.Due to a lack of ‘strategic clarity/consensus’ most AI strategy work focuses on research. However, Toby Ord’s submission to the UK parliament is arguably an example of developing, and disseminating an AI Safety related strategy.We can compare the Strategy group to the Executive Branch of a government, which sets a strategy for the state and parts of the government, while also attempting to understand and influence the strategies of external parties (e.g., organisations and nations).AI Safety Strategy exemplars: Holden Karnofsky, Toby Ord and Luke Muehlhauser.AI Safety Strategy post examples (1,2,3,4).GovernanceThe AI Safety Governance group seeks to mitigate AI risk by understanding and influencing decision-making.Their work focuses on understanding how decisions are made about AI and what institutions and arrangements help those decisions to be made well. In practice, this includes consultation, research, policy advocacy and policy implementation (see 1 & 2 for more detail).They attempt to answer questions such as i) ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Part 1: The AI Safety community has four main work groups, Strategy, Governance, Technical and Movement Building, published by PeterSlattery on November 25, 2022 on The Effective Altruism Forum.Epistemic statusWritten as a non-expert to develop and get feedback on my views, rather than persuade. It will probably be somewhat incomplete and inaccurate, but it should provoke helpful feedback and discussion.AimThis is the first part of my series ‘A proposed approach for AI safety movement building’. Through this series, I outline a theory of change for AI Safety movement building. I don’t necessarily want to immediately accelerate recruitment into AI safety because I take concerns (e.g., 1,2) about the downsides of AI Safety movement building seriously. However, I do want to understand how different viewpoints within the AI Safety community overlap and aggregate.I start by attempting to conceptualise the AI Safety community. I originally planned to outline my theory of change in my first post. However, when I got feedback, I realised that i) I conceptualised the AI Safety community differently from some of my readers, and ii) I wasn’t confident in my understanding of all the key parts.TLDRI argue that the AI Safety community mainly comprises four overlapping, self-identifying, groups: Strategy, Governance, Technical and Movement Building.I explain what each group does and what differentiates it from the other groupsI outline a few other potential work groupsI integrate these into an illustration of my current conceptualisation of the AI Safety communityI request constructive feedback.My conceptualisation of the AI Safety communityAt a high level of simplification and low level of precision, the AI Safety community mainly comprises four overlapping, self-identifying, groups who are working to prevent an AI-related catastrophe. These groups are Strategy, Governance, Technical and Movement Building. These are illustrated below.We can compare the AI Safety community to a government and relate each work group to a government body. I think this helps clarify how the parts of the community fit together (though of course, the analogies are imperfect).StrategyThe AI Safety Strategy group seeks to mitigate AI risk by understanding and influencing strategy.Their work focuses on developing strategies (i.e., plans of action) that maximise the probability that we achieve positive AI-related outcomes and avoid catastrophes. In practice, this includes researching, evaluating, developing, and disseminating strategy (see this for more detail).They attempt to answer questions such as i) ‘how can we best distribute funds to improve interpretability?’, ii) ‘when should we expect transformative AI?’, or iii) “What is happening in areas relevant to AI?”.Due to a lack of ‘strategic clarity/consensus’ most AI strategy work focuses on research. However, Toby Ord’s submission to the UK parliament is arguably an example of developing, and disseminating an AI Safety related strategy.We can compare the Strategy group to the Executive Branch of a government, which sets a strategy for the state and parts of the government, while also attempting to understand and influence the strategies of external parties (e.g., organisations and nations).AI Safety Strategy exemplars: Holden Karnofsky, Toby Ord and Luke Muehlhauser.AI Safety Strategy post examples (1,2,3,4).GovernanceThe AI Safety Governance group seeks to mitigate AI risk by understanding and influencing decision-making.Their work focuses on understanding how decisions are made about AI and what institutions and arrangements help those decisions to be made well. In practice, this includes consultation, research, policy advocacy and policy implementation (see 1 & 2 for more detail).They attempt to answer questions such as i) ...]]>
PeterSlattery https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:56 None full 3899
7cCr6vAmN4Xi3yzR5_EA EA - Two contrasting models of “intelligence” and future growth by Magnus Vinding Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two contrasting models of “intelligence” and future growth, published by Magnus Vinding on November 24, 2022 on The Effective Altruism Forum.My primary aim in this post is to present two basic models of the development and future of “intelligence”, and to highlight the differences between these models. I believe that people’s beliefs about the future of AI and “AI takeoff scenarios” may in large part depend on which of these two simple models they favor most strongly, and hence it seems worth making these models more explicit, so that we can better evaluate and critique them.Among the two models I present, I myself happen to consider one of them significantly more plausible, and I will outline some of the reasons why I believe that.The models I present may feel painfully basic, but I think it can be helpful to visit these most basic issues, as it seems to me that much disagreement springs from there.“AI skepticism”?It might be tempting to view the discussion of the contrasting models below as a clash between the “AI priority” camp and the “AI skepticism” camp. But I think this would be inaccurate. Neither of the two models I present imply that we should be unconcerned about AI, or indeed that avoiding catastrophic AI outcomes should not be a top priority. Where the models will tend to disagree is more when it comes to what kinds of AI outcomes are most likely, and, as a consequence, how we can best address risks of bad AI outcomes. (More on this below.)Two contrasting definitions of “intelligence”Before outlining the two models of the development and future of “intelligence”, it is worth first specifying two distinct definitions of “intelligence”. These definitions are important, as the two contrasting models that I outline below see the relationship between these definitions of “intelligence” in very different ways.The two definitions of intelligence are the following:Intelligence 1: Individual cognitive abilities.Intelligence 2: The ability to achieve a wide range of goals.The first definition is arguably the common-sense definition of “intelligence”, and is often associated with constructs such as IQ and the g factor. The second definition is more abstract, and is inspired by attempts to provide a broad definition of “intelligence” (see e.g. Legg & Hutter, 2007).At a first glance, the difference between these two definitions may not be all that clear. After all, individual cognitive abilities can surely be classified as “abilities to achieve a wide range of goals”, meaning that Intelligence 1 can be seen as a subset of Intelligence 2. This seems fairly uncontroversial to say, and both of the models outlined below would agree with this claim.Where substantive disagreement begins to enter the picture is when we explore the reverse relation. Is Intelligence 2 likewise a subset of Intelligence 1? In other words, are the two notions of “intelligence” virtually identical?This is hardly the case. After all, abilities such as constructing a large building, or sending a spaceship to the moon, are not purely a product of individual cognitive abilities, even if cognitive abilities play crucial parts in such achievements.The distance between Intelligence 1 and Intelligence 2 — or rather, how small or large of a subset Intelligence 1 is within Intelligence 2 — is a key point of disagreement between the two models outlined below, as will hopefully become clear shortly.Two contrasting models of the development and future of “intelligence”Simplified models at opposite ends of a spectrumThe two models I present below are extremely simple and coarse-grained, but I still think they capture some key aspects of how people tend to diverge in their thinking about the development and future of “intelligence”.The models I present exist at opposite ends of a spectrum, wh...]]>
Magnus Vinding https://forum.effectivealtruism.org/posts/7cCr6vAmN4Xi3yzR5/two-contrasting-models-of-intelligence-and-future-growth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two contrasting models of “intelligence” and future growth, published by Magnus Vinding on November 24, 2022 on The Effective Altruism Forum.My primary aim in this post is to present two basic models of the development and future of “intelligence”, and to highlight the differences between these models. I believe that people’s beliefs about the future of AI and “AI takeoff scenarios” may in large part depend on which of these two simple models they favor most strongly, and hence it seems worth making these models more explicit, so that we can better evaluate and critique them.Among the two models I present, I myself happen to consider one of them significantly more plausible, and I will outline some of the reasons why I believe that.The models I present may feel painfully basic, but I think it can be helpful to visit these most basic issues, as it seems to me that much disagreement springs from there.“AI skepticism”?It might be tempting to view the discussion of the contrasting models below as a clash between the “AI priority” camp and the “AI skepticism” camp. But I think this would be inaccurate. Neither of the two models I present imply that we should be unconcerned about AI, or indeed that avoiding catastrophic AI outcomes should not be a top priority. Where the models will tend to disagree is more when it comes to what kinds of AI outcomes are most likely, and, as a consequence, how we can best address risks of bad AI outcomes. (More on this below.)Two contrasting definitions of “intelligence”Before outlining the two models of the development and future of “intelligence”, it is worth first specifying two distinct definitions of “intelligence”. These definitions are important, as the two contrasting models that I outline below see the relationship between these definitions of “intelligence” in very different ways.The two definitions of intelligence are the following:Intelligence 1: Individual cognitive abilities.Intelligence 2: The ability to achieve a wide range of goals.The first definition is arguably the common-sense definition of “intelligence”, and is often associated with constructs such as IQ and the g factor. The second definition is more abstract, and is inspired by attempts to provide a broad definition of “intelligence” (see e.g. Legg & Hutter, 2007).At a first glance, the difference between these two definitions may not be all that clear. After all, individual cognitive abilities can surely be classified as “abilities to achieve a wide range of goals”, meaning that Intelligence 1 can be seen as a subset of Intelligence 2. This seems fairly uncontroversial to say, and both of the models outlined below would agree with this claim.Where substantive disagreement begins to enter the picture is when we explore the reverse relation. Is Intelligence 2 likewise a subset of Intelligence 1? In other words, are the two notions of “intelligence” virtually identical?This is hardly the case. After all, abilities such as constructing a large building, or sending a spaceship to the moon, are not purely a product of individual cognitive abilities, even if cognitive abilities play crucial parts in such achievements.The distance between Intelligence 1 and Intelligence 2 — or rather, how small or large of a subset Intelligence 1 is within Intelligence 2 — is a key point of disagreement between the two models outlined below, as will hopefully become clear shortly.Two contrasting models of the development and future of “intelligence”Simplified models at opposite ends of a spectrumThe two models I present below are extremely simple and coarse-grained, but I still think they capture some key aspects of how people tend to diverge in their thinking about the development and future of “intelligence”.The models I present exist at opposite ends of a spectrum, wh...]]>
Thu, 24 Nov 2022 18:09:54 +0000 EA - Two contrasting models of “intelligence” and future growth by Magnus Vinding Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two contrasting models of “intelligence” and future growth, published by Magnus Vinding on November 24, 2022 on The Effective Altruism Forum.My primary aim in this post is to present two basic models of the development and future of “intelligence”, and to highlight the differences between these models. I believe that people’s beliefs about the future of AI and “AI takeoff scenarios” may in large part depend on which of these two simple models they favor most strongly, and hence it seems worth making these models more explicit, so that we can better evaluate and critique them.Among the two models I present, I myself happen to consider one of them significantly more plausible, and I will outline some of the reasons why I believe that.The models I present may feel painfully basic, but I think it can be helpful to visit these most basic issues, as it seems to me that much disagreement springs from there.“AI skepticism”?It might be tempting to view the discussion of the contrasting models below as a clash between the “AI priority” camp and the “AI skepticism” camp. But I think this would be inaccurate. Neither of the two models I present imply that we should be unconcerned about AI, or indeed that avoiding catastrophic AI outcomes should not be a top priority. Where the models will tend to disagree is more when it comes to what kinds of AI outcomes are most likely, and, as a consequence, how we can best address risks of bad AI outcomes. (More on this below.)Two contrasting definitions of “intelligence”Before outlining the two models of the development and future of “intelligence”, it is worth first specifying two distinct definitions of “intelligence”. These definitions are important, as the two contrasting models that I outline below see the relationship between these definitions of “intelligence” in very different ways.The two definitions of intelligence are the following:Intelligence 1: Individual cognitive abilities.Intelligence 2: The ability to achieve a wide range of goals.The first definition is arguably the common-sense definition of “intelligence”, and is often associated with constructs such as IQ and the g factor. The second definition is more abstract, and is inspired by attempts to provide a broad definition of “intelligence” (see e.g. Legg & Hutter, 2007).At a first glance, the difference between these two definitions may not be all that clear. After all, individual cognitive abilities can surely be classified as “abilities to achieve a wide range of goals”, meaning that Intelligence 1 can be seen as a subset of Intelligence 2. This seems fairly uncontroversial to say, and both of the models outlined below would agree with this claim.Where substantive disagreement begins to enter the picture is when we explore the reverse relation. Is Intelligence 2 likewise a subset of Intelligence 1? In other words, are the two notions of “intelligence” virtually identical?This is hardly the case. After all, abilities such as constructing a large building, or sending a spaceship to the moon, are not purely a product of individual cognitive abilities, even if cognitive abilities play crucial parts in such achievements.The distance between Intelligence 1 and Intelligence 2 — or rather, how small or large of a subset Intelligence 1 is within Intelligence 2 — is a key point of disagreement between the two models outlined below, as will hopefully become clear shortly.Two contrasting models of the development and future of “intelligence”Simplified models at opposite ends of a spectrumThe two models I present below are extremely simple and coarse-grained, but I still think they capture some key aspects of how people tend to diverge in their thinking about the development and future of “intelligence”.The models I present exist at opposite ends of a spectrum, wh...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two contrasting models of “intelligence” and future growth, published by Magnus Vinding on November 24, 2022 on The Effective Altruism Forum.My primary aim in this post is to present two basic models of the development and future of “intelligence”, and to highlight the differences between these models. I believe that people’s beliefs about the future of AI and “AI takeoff scenarios” may in large part depend on which of these two simple models they favor most strongly, and hence it seems worth making these models more explicit, so that we can better evaluate and critique them.Among the two models I present, I myself happen to consider one of them significantly more plausible, and I will outline some of the reasons why I believe that.The models I present may feel painfully basic, but I think it can be helpful to visit these most basic issues, as it seems to me that much disagreement springs from there.“AI skepticism”?It might be tempting to view the discussion of the contrasting models below as a clash between the “AI priority” camp and the “AI skepticism” camp. But I think this would be inaccurate. Neither of the two models I present imply that we should be unconcerned about AI, or indeed that avoiding catastrophic AI outcomes should not be a top priority. Where the models will tend to disagree is more when it comes to what kinds of AI outcomes are most likely, and, as a consequence, how we can best address risks of bad AI outcomes. (More on this below.)Two contrasting definitions of “intelligence”Before outlining the two models of the development and future of “intelligence”, it is worth first specifying two distinct definitions of “intelligence”. These definitions are important, as the two contrasting models that I outline below see the relationship between these definitions of “intelligence” in very different ways.The two definitions of intelligence are the following:Intelligence 1: Individual cognitive abilities.Intelligence 2: The ability to achieve a wide range of goals.The first definition is arguably the common-sense definition of “intelligence”, and is often associated with constructs such as IQ and the g factor. The second definition is more abstract, and is inspired by attempts to provide a broad definition of “intelligence” (see e.g. Legg & Hutter, 2007).At a first glance, the difference between these two definitions may not be all that clear. After all, individual cognitive abilities can surely be classified as “abilities to achieve a wide range of goals”, meaning that Intelligence 1 can be seen as a subset of Intelligence 2. This seems fairly uncontroversial to say, and both of the models outlined below would agree with this claim.Where substantive disagreement begins to enter the picture is when we explore the reverse relation. Is Intelligence 2 likewise a subset of Intelligence 1? In other words, are the two notions of “intelligence” virtually identical?This is hardly the case. After all, abilities such as constructing a large building, or sending a spaceship to the moon, are not purely a product of individual cognitive abilities, even if cognitive abilities play crucial parts in such achievements.The distance between Intelligence 1 and Intelligence 2 — or rather, how small or large of a subset Intelligence 1 is within Intelligence 2 — is a key point of disagreement between the two models outlined below, as will hopefully become clear shortly.Two contrasting models of the development and future of “intelligence”Simplified models at opposite ends of a spectrumThe two models I present below are extremely simple and coarse-grained, but I still think they capture some key aspects of how people tend to diverge in their thinking about the development and future of “intelligence”.The models I present exist at opposite ends of a spectrum, wh...]]>
Magnus Vinding https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 50:02 None full 3896
pj2LqeJxefRFCFEhm_EA EA - Effective Altruism: Not as bad as you think by James Ozden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Altruism: Not as bad as you think, published by James Ozden on November 24, 2022 on The Effective Altruism Forum.Note: Linking this from my personal blog. Written fairly quickly and informally, for a largely non-EA audience. I felt pretty annoyed by misleading opinion pieces slamming EA, and this was the result.It’s been a pretty hectic few weeks if you follow Effective Altruism, crypto, tech firms or basically any news. FTX, one of the largest cryptocurrency exchanges in the world, recently lost all of its value, went bankrupt, and lost about $1 billion in customer funds. News has emerged that the CEO, Sam Bankman-Fried, might have been mishandling customer funds and that the mismanagement of money was far worse than in the collapse of energy giant Enron. This is pretty awful for a lot of people, whether it’s the approximately one million people who lost their crypto investments that were stored in FTX, or all the charities that are worried they will have to pay back any donations made by FTX-related entities (of which there was over $160 million).Sam Bankman-Fried has openly spoken about Effective Altruism a fair bit in the past, and committed to giving away almost all of his wealth to charitable causes. For those who don’t know, Effective Altruism (or EA) is a research project and community that uses evidence and reason to do the most good. Tangibly, it’s a burgeoning intellectual movement with close to 10,000 engaged folks globally, across 70 countries (although mostly in the EU and US). People in the effective altruism community focus primarily on alleviating global poverty and diseases, reducing the suffering of animals used for food, and preventing existential risks from worst-case pandemics or misaligned artificial intelligence.Due to the close association between Sam Bankman-Fried and Effective Altruism, EA has gotten a fair bit of criticism recently about potentially not being the ideal do-gooder project it set out to be. But I think some of these critiques miss the mark. Often, they criticise issues that don't actually exist in the community, but sound good. They also seem to glance over evidence that disputes their claims, such as the huge amount of effort that Effective Altruists have put towards improving the lives of some of the most poor people globally. For example, via GiveWell, an EA-aligned charity evaluator, over 110,000 donors have moved over $1 billion to charities helping people in extreme poverty, by providing malaria bed nets, direct cash transfers, or more. This is amazing. GiveWell thinks these actions will have saved over 150,000 lives, as well as providing $175 million in direct cash to the global poor. See below for a breakdown of EA funding to date, as of August 2022.So what are the issues that critics of EA point out? I’ll discuss a few from this recent critique in the Guardian, which, in my opinion, offers a somewhat canonical and oft-heard angle. I’ll paraphrase what I believe some of these common critiques are, but it may not be perfect:Philanthropy concentrates power in the hands of a few wealthy donors. If organisations decide to pursue projects outside the scope of donor interests, then funders can easily pull their giving.EA donors give money to projects they find the most interesting, rather than what actually does the most good. This is partly down to donors defining effectiveness in their own terms, and funding projects that meet their criteria.Effective Altruism doesn’t tackle root issues, such as systems of oppression. Instead, it tinkers around the edge by proposing small reforms to our current capitalist, racist, patriarchal (etc.) system, without attempting to change the status quo.Longtermism, the view that we should be doing much more to protect future generations, is morally dubious, ignores the su...]]>
James Ozden https://forum.effectivealtruism.org/posts/pj2LqeJxefRFCFEhm/effective-altruism-not-as-bad-as-you-think Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Altruism: Not as bad as you think, published by James Ozden on November 24, 2022 on The Effective Altruism Forum.Note: Linking this from my personal blog. Written fairly quickly and informally, for a largely non-EA audience. I felt pretty annoyed by misleading opinion pieces slamming EA, and this was the result.It’s been a pretty hectic few weeks if you follow Effective Altruism, crypto, tech firms or basically any news. FTX, one of the largest cryptocurrency exchanges in the world, recently lost all of its value, went bankrupt, and lost about $1 billion in customer funds. News has emerged that the CEO, Sam Bankman-Fried, might have been mishandling customer funds and that the mismanagement of money was far worse than in the collapse of energy giant Enron. This is pretty awful for a lot of people, whether it’s the approximately one million people who lost their crypto investments that were stored in FTX, or all the charities that are worried they will have to pay back any donations made by FTX-related entities (of which there was over $160 million).Sam Bankman-Fried has openly spoken about Effective Altruism a fair bit in the past, and committed to giving away almost all of his wealth to charitable causes. For those who don’t know, Effective Altruism (or EA) is a research project and community that uses evidence and reason to do the most good. Tangibly, it’s a burgeoning intellectual movement with close to 10,000 engaged folks globally, across 70 countries (although mostly in the EU and US). People in the effective altruism community focus primarily on alleviating global poverty and diseases, reducing the suffering of animals used for food, and preventing existential risks from worst-case pandemics or misaligned artificial intelligence.Due to the close association between Sam Bankman-Fried and Effective Altruism, EA has gotten a fair bit of criticism recently about potentially not being the ideal do-gooder project it set out to be. But I think some of these critiques miss the mark. Often, they criticise issues that don't actually exist in the community, but sound good. They also seem to glance over evidence that disputes their claims, such as the huge amount of effort that Effective Altruists have put towards improving the lives of some of the most poor people globally. For example, via GiveWell, an EA-aligned charity evaluator, over 110,000 donors have moved over $1 billion to charities helping people in extreme poverty, by providing malaria bed nets, direct cash transfers, or more. This is amazing. GiveWell thinks these actions will have saved over 150,000 lives, as well as providing $175 million in direct cash to the global poor. See below for a breakdown of EA funding to date, as of August 2022.So what are the issues that critics of EA point out? I’ll discuss a few from this recent critique in the Guardian, which, in my opinion, offers a somewhat canonical and oft-heard angle. I’ll paraphrase what I believe some of these common critiques are, but it may not be perfect:Philanthropy concentrates power in the hands of a few wealthy donors. If organisations decide to pursue projects outside the scope of donor interests, then funders can easily pull their giving.EA donors give money to projects they find the most interesting, rather than what actually does the most good. This is partly down to donors defining effectiveness in their own terms, and funding projects that meet their criteria.Effective Altruism doesn’t tackle root issues, such as systems of oppression. Instead, it tinkers around the edge by proposing small reforms to our current capitalist, racist, patriarchal (etc.) system, without attempting to change the status quo.Longtermism, the view that we should be doing much more to protect future generations, is morally dubious, ignores the su...]]>
Thu, 24 Nov 2022 14:18:31 +0000 EA - Effective Altruism: Not as bad as you think by James Ozden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Altruism: Not as bad as you think, published by James Ozden on November 24, 2022 on The Effective Altruism Forum.Note: Linking this from my personal blog. Written fairly quickly and informally, for a largely non-EA audience. I felt pretty annoyed by misleading opinion pieces slamming EA, and this was the result.It’s been a pretty hectic few weeks if you follow Effective Altruism, crypto, tech firms or basically any news. FTX, one of the largest cryptocurrency exchanges in the world, recently lost all of its value, went bankrupt, and lost about $1 billion in customer funds. News has emerged that the CEO, Sam Bankman-Fried, might have been mishandling customer funds and that the mismanagement of money was far worse than in the collapse of energy giant Enron. This is pretty awful for a lot of people, whether it’s the approximately one million people who lost their crypto investments that were stored in FTX, or all the charities that are worried they will have to pay back any donations made by FTX-related entities (of which there was over $160 million).Sam Bankman-Fried has openly spoken about Effective Altruism a fair bit in the past, and committed to giving away almost all of his wealth to charitable causes. For those who don’t know, Effective Altruism (or EA) is a research project and community that uses evidence and reason to do the most good. Tangibly, it’s a burgeoning intellectual movement with close to 10,000 engaged folks globally, across 70 countries (although mostly in the EU and US). People in the effective altruism community focus primarily on alleviating global poverty and diseases, reducing the suffering of animals used for food, and preventing existential risks from worst-case pandemics or misaligned artificial intelligence.Due to the close association between Sam Bankman-Fried and Effective Altruism, EA has gotten a fair bit of criticism recently about potentially not being the ideal do-gooder project it set out to be. But I think some of these critiques miss the mark. Often, they criticise issues that don't actually exist in the community, but sound good. They also seem to glance over evidence that disputes their claims, such as the huge amount of effort that Effective Altruists have put towards improving the lives of some of the most poor people globally. For example, via GiveWell, an EA-aligned charity evaluator, over 110,000 donors have moved over $1 billion to charities helping people in extreme poverty, by providing malaria bed nets, direct cash transfers, or more. This is amazing. GiveWell thinks these actions will have saved over 150,000 lives, as well as providing $175 million in direct cash to the global poor. See below for a breakdown of EA funding to date, as of August 2022.So what are the issues that critics of EA point out? I’ll discuss a few from this recent critique in the Guardian, which, in my opinion, offers a somewhat canonical and oft-heard angle. I’ll paraphrase what I believe some of these common critiques are, but it may not be perfect:Philanthropy concentrates power in the hands of a few wealthy donors. If organisations decide to pursue projects outside the scope of donor interests, then funders can easily pull their giving.EA donors give money to projects they find the most interesting, rather than what actually does the most good. This is partly down to donors defining effectiveness in their own terms, and funding projects that meet their criteria.Effective Altruism doesn’t tackle root issues, such as systems of oppression. Instead, it tinkers around the edge by proposing small reforms to our current capitalist, racist, patriarchal (etc.) system, without attempting to change the status quo.Longtermism, the view that we should be doing much more to protect future generations, is morally dubious, ignores the su...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Altruism: Not as bad as you think, published by James Ozden on November 24, 2022 on The Effective Altruism Forum.Note: Linking this from my personal blog. Written fairly quickly and informally, for a largely non-EA audience. I felt pretty annoyed by misleading opinion pieces slamming EA, and this was the result.It’s been a pretty hectic few weeks if you follow Effective Altruism, crypto, tech firms or basically any news. FTX, one of the largest cryptocurrency exchanges in the world, recently lost all of its value, went bankrupt, and lost about $1 billion in customer funds. News has emerged that the CEO, Sam Bankman-Fried, might have been mishandling customer funds and that the mismanagement of money was far worse than in the collapse of energy giant Enron. This is pretty awful for a lot of people, whether it’s the approximately one million people who lost their crypto investments that were stored in FTX, or all the charities that are worried they will have to pay back any donations made by FTX-related entities (of which there was over $160 million).Sam Bankman-Fried has openly spoken about Effective Altruism a fair bit in the past, and committed to giving away almost all of his wealth to charitable causes. For those who don’t know, Effective Altruism (or EA) is a research project and community that uses evidence and reason to do the most good. Tangibly, it’s a burgeoning intellectual movement with close to 10,000 engaged folks globally, across 70 countries (although mostly in the EU and US). People in the effective altruism community focus primarily on alleviating global poverty and diseases, reducing the suffering of animals used for food, and preventing existential risks from worst-case pandemics or misaligned artificial intelligence.Due to the close association between Sam Bankman-Fried and Effective Altruism, EA has gotten a fair bit of criticism recently about potentially not being the ideal do-gooder project it set out to be. But I think some of these critiques miss the mark. Often, they criticise issues that don't actually exist in the community, but sound good. They also seem to glance over evidence that disputes their claims, such as the huge amount of effort that Effective Altruists have put towards improving the lives of some of the most poor people globally. For example, via GiveWell, an EA-aligned charity evaluator, over 110,000 donors have moved over $1 billion to charities helping people in extreme poverty, by providing malaria bed nets, direct cash transfers, or more. This is amazing. GiveWell thinks these actions will have saved over 150,000 lives, as well as providing $175 million in direct cash to the global poor. See below for a breakdown of EA funding to date, as of August 2022.So what are the issues that critics of EA point out? I’ll discuss a few from this recent critique in the Guardian, which, in my opinion, offers a somewhat canonical and oft-heard angle. I’ll paraphrase what I believe some of these common critiques are, but it may not be perfect:Philanthropy concentrates power in the hands of a few wealthy donors. If organisations decide to pursue projects outside the scope of donor interests, then funders can easily pull their giving.EA donors give money to projects they find the most interesting, rather than what actually does the most good. This is partly down to donors defining effectiveness in their own terms, and funding projects that meet their criteria.Effective Altruism doesn’t tackle root issues, such as systems of oppression. Instead, it tinkers around the edge by proposing small reforms to our current capitalist, racist, patriarchal (etc.) system, without attempting to change the status quo.Longtermism, the view that we should be doing much more to protect future generations, is morally dubious, ignores the su...]]>
James Ozden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:26 None full 3895
82heDPsmvhThda3af_EA EA - AMA: Sean Mayberry, Founder and CEO of StrongMinds by Sean Mayberry Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Sean Mayberry, Founder & CEO of StrongMinds, published by Sean Mayberry on November 24, 2022 on The Effective Altruism Forum.I'm Sean Mayberry, and I’m the Founder/ Chief Executive Officer of StrongMinds. I will spend time on the Monday after the Thanksgiving holiday answering questions here (though I may get to some questions sooner).A little background information about me:I founded StrongMinds in 2013. We are a social enterprise/NGO that treats depression in low-income women and adolescents by providing group interpersonal therapy (IPT-G) delivered by lay community health workers. StrongMinds is the only organization scaling a cost-effective solution to the depression epidemic in Africa.Our model developed from the findings of a randomized controlled trial in Uganda in 2002 that had remarkable success in treating depression with group interpersonal psychotherapy (IPT-G). The study, by researchers from Johns Hopkins University (JHU), used lay community workers with only a high school education.I left my position as the CEO of a global antipoverty organization and foundedStrongMinds, concentrating in Uganda, the site of the previous randomized controlled trial. I used my family’s savings to accomplish this and volunteered full-time for the first 18 months until supporters were identified. We would seek out individuals with an interest in being data-driven, entrepreneurial, people-focused, passionate, open, and collaborative. Those traits eventually informed the core values of the company culture at StrongMinds.StrongMinds has now treated over 160,000 women with depression to date in Uganda and Zambia. On average, 80% of the women we treat remain depression-free six months after the conclusion of therapy. When our clients become depression-free, they can work more, and their kids eat and attend school more regularly. They also report that they no longer feel isolated and have people to turn to for social support. By the end of 2022, we will have treated over 210,000 women and adolescents through our work.Drawing on evidence from over 80 academic studies, Happier Lives Institute has found that the group interpersonal therapy provided by StrongMinds is almost ten times more cost-effective than giving cash to people in extreme poverty (a standard benchmark for aid effectiveness).I have been honored to present at a few Effective Altruism events. We love that thethe community has taken such an interest in StrongMinds’ approach centered around data collection, transparency, cultural competence/appropriateness, and human well-being.Please ask me anything! I look forward to answering all of your questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sean Mayberry https://forum.effectivealtruism.org/posts/82heDPsmvhThda3af/ama-sean-mayberry-founder-and-ceo-of-strongminds Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Sean Mayberry, Founder & CEO of StrongMinds, published by Sean Mayberry on November 24, 2022 on The Effective Altruism Forum.I'm Sean Mayberry, and I’m the Founder/ Chief Executive Officer of StrongMinds. I will spend time on the Monday after the Thanksgiving holiday answering questions here (though I may get to some questions sooner).A little background information about me:I founded StrongMinds in 2013. We are a social enterprise/NGO that treats depression in low-income women and adolescents by providing group interpersonal therapy (IPT-G) delivered by lay community health workers. StrongMinds is the only organization scaling a cost-effective solution to the depression epidemic in Africa.Our model developed from the findings of a randomized controlled trial in Uganda in 2002 that had remarkable success in treating depression with group interpersonal psychotherapy (IPT-G). The study, by researchers from Johns Hopkins University (JHU), used lay community workers with only a high school education.I left my position as the CEO of a global antipoverty organization and foundedStrongMinds, concentrating in Uganda, the site of the previous randomized controlled trial. I used my family’s savings to accomplish this and volunteered full-time for the first 18 months until supporters were identified. We would seek out individuals with an interest in being data-driven, entrepreneurial, people-focused, passionate, open, and collaborative. Those traits eventually informed the core values of the company culture at StrongMinds.StrongMinds has now treated over 160,000 women with depression to date in Uganda and Zambia. On average, 80% of the women we treat remain depression-free six months after the conclusion of therapy. When our clients become depression-free, they can work more, and their kids eat and attend school more regularly. They also report that they no longer feel isolated and have people to turn to for social support. By the end of 2022, we will have treated over 210,000 women and adolescents through our work.Drawing on evidence from over 80 academic studies, Happier Lives Institute has found that the group interpersonal therapy provided by StrongMinds is almost ten times more cost-effective than giving cash to people in extreme poverty (a standard benchmark for aid effectiveness).I have been honored to present at a few Effective Altruism events. We love that thethe community has taken such an interest in StrongMinds’ approach centered around data collection, transparency, cultural competence/appropriateness, and human well-being.Please ask me anything! I look forward to answering all of your questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 24 Nov 2022 11:11:10 +0000 EA - AMA: Sean Mayberry, Founder and CEO of StrongMinds by Sean Mayberry Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Sean Mayberry, Founder & CEO of StrongMinds, published by Sean Mayberry on November 24, 2022 on The Effective Altruism Forum.I'm Sean Mayberry, and I’m the Founder/ Chief Executive Officer of StrongMinds. I will spend time on the Monday after the Thanksgiving holiday answering questions here (though I may get to some questions sooner).A little background information about me:I founded StrongMinds in 2013. We are a social enterprise/NGO that treats depression in low-income women and adolescents by providing group interpersonal therapy (IPT-G) delivered by lay community health workers. StrongMinds is the only organization scaling a cost-effective solution to the depression epidemic in Africa.Our model developed from the findings of a randomized controlled trial in Uganda in 2002 that had remarkable success in treating depression with group interpersonal psychotherapy (IPT-G). The study, by researchers from Johns Hopkins University (JHU), used lay community workers with only a high school education.I left my position as the CEO of a global antipoverty organization and foundedStrongMinds, concentrating in Uganda, the site of the previous randomized controlled trial. I used my family’s savings to accomplish this and volunteered full-time for the first 18 months until supporters were identified. We would seek out individuals with an interest in being data-driven, entrepreneurial, people-focused, passionate, open, and collaborative. Those traits eventually informed the core values of the company culture at StrongMinds.StrongMinds has now treated over 160,000 women with depression to date in Uganda and Zambia. On average, 80% of the women we treat remain depression-free six months after the conclusion of therapy. When our clients become depression-free, they can work more, and their kids eat and attend school more regularly. They also report that they no longer feel isolated and have people to turn to for social support. By the end of 2022, we will have treated over 210,000 women and adolescents through our work.Drawing on evidence from over 80 academic studies, Happier Lives Institute has found that the group interpersonal therapy provided by StrongMinds is almost ten times more cost-effective than giving cash to people in extreme poverty (a standard benchmark for aid effectiveness).I have been honored to present at a few Effective Altruism events. We love that thethe community has taken such an interest in StrongMinds’ approach centered around data collection, transparency, cultural competence/appropriateness, and human well-being.Please ask me anything! I look forward to answering all of your questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Sean Mayberry, Founder & CEO of StrongMinds, published by Sean Mayberry on November 24, 2022 on The Effective Altruism Forum.I'm Sean Mayberry, and I’m the Founder/ Chief Executive Officer of StrongMinds. I will spend time on the Monday after the Thanksgiving holiday answering questions here (though I may get to some questions sooner).A little background information about me:I founded StrongMinds in 2013. We are a social enterprise/NGO that treats depression in low-income women and adolescents by providing group interpersonal therapy (IPT-G) delivered by lay community health workers. StrongMinds is the only organization scaling a cost-effective solution to the depression epidemic in Africa.Our model developed from the findings of a randomized controlled trial in Uganda in 2002 that had remarkable success in treating depression with group interpersonal psychotherapy (IPT-G). The study, by researchers from Johns Hopkins University (JHU), used lay community workers with only a high school education.I left my position as the CEO of a global antipoverty organization and foundedStrongMinds, concentrating in Uganda, the site of the previous randomized controlled trial. I used my family’s savings to accomplish this and volunteered full-time for the first 18 months until supporters were identified. We would seek out individuals with an interest in being data-driven, entrepreneurial, people-focused, passionate, open, and collaborative. Those traits eventually informed the core values of the company culture at StrongMinds.StrongMinds has now treated over 160,000 women with depression to date in Uganda and Zambia. On average, 80% of the women we treat remain depression-free six months after the conclusion of therapy. When our clients become depression-free, they can work more, and their kids eat and attend school more regularly. They also report that they no longer feel isolated and have people to turn to for social support. By the end of 2022, we will have treated over 210,000 women and adolescents through our work.Drawing on evidence from over 80 academic studies, Happier Lives Institute has found that the group interpersonal therapy provided by StrongMinds is almost ten times more cost-effective than giving cash to people in extreme poverty (a standard benchmark for aid effectiveness).I have been honored to present at a few Effective Altruism events. We love that thethe community has taken such an interest in StrongMinds’ approach centered around data collection, transparency, cultural competence/appropriateness, and human well-being.Please ask me anything! I look forward to answering all of your questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sean Mayberry https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:48 None full 3897
ebN8eB7DoN2Frd7Wy_EA EA - Announcing GWWC's new giving recommendations by SjirH Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing GWWC's new giving recommendations, published by SjirH on November 24, 2022 on The Effective Altruism Forum.We are delighted to announce that Giving What We Can have updated our giving recommendations page and donation platform (previously EA Funds), reflecting the latest recommendations of our five trusted evaluators this giving season (GiveWell, Animal Charity Evaluators, Founders Pledge, EA Funds and Longview Philanthropy) and applying our new inclusion criteria for funds and charities.We are currently recommending 12 top-rated funds and 14 top-rated charities spread across the "cause areas" of improving human wellbeing, improving animal welfare, and creating a better future. We have also recently added content on why we recommend donors to use funds over donating directly to charities, and have added a transparency page for Giving What We Can.Read more about GWWC's broader activities around giving season here, and about our research plans for next year here.We plan to add up to 10 more top-rated funds and charities over the coming few weeks, conditional on them completing due diligence and onboarding.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
SjirH https://forum.effectivealtruism.org/posts/ebN8eB7DoN2Frd7Wy/announcing-gwwc-s-new-giving-recommendations Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing GWWC's new giving recommendations, published by SjirH on November 24, 2022 on The Effective Altruism Forum.We are delighted to announce that Giving What We Can have updated our giving recommendations page and donation platform (previously EA Funds), reflecting the latest recommendations of our five trusted evaluators this giving season (GiveWell, Animal Charity Evaluators, Founders Pledge, EA Funds and Longview Philanthropy) and applying our new inclusion criteria for funds and charities.We are currently recommending 12 top-rated funds and 14 top-rated charities spread across the "cause areas" of improving human wellbeing, improving animal welfare, and creating a better future. We have also recently added content on why we recommend donors to use funds over donating directly to charities, and have added a transparency page for Giving What We Can.Read more about GWWC's broader activities around giving season here, and about our research plans for next year here.We plan to add up to 10 more top-rated funds and charities over the coming few weeks, conditional on them completing due diligence and onboarding.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 24 Nov 2022 10:29:14 +0000 EA - Announcing GWWC's new giving recommendations by SjirH Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing GWWC's new giving recommendations, published by SjirH on November 24, 2022 on The Effective Altruism Forum.We are delighted to announce that Giving What We Can have updated our giving recommendations page and donation platform (previously EA Funds), reflecting the latest recommendations of our five trusted evaluators this giving season (GiveWell, Animal Charity Evaluators, Founders Pledge, EA Funds and Longview Philanthropy) and applying our new inclusion criteria for funds and charities.We are currently recommending 12 top-rated funds and 14 top-rated charities spread across the "cause areas" of improving human wellbeing, improving animal welfare, and creating a better future. We have also recently added content on why we recommend donors to use funds over donating directly to charities, and have added a transparency page for Giving What We Can.Read more about GWWC's broader activities around giving season here, and about our research plans for next year here.We plan to add up to 10 more top-rated funds and charities over the coming few weeks, conditional on them completing due diligence and onboarding.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing GWWC's new giving recommendations, published by SjirH on November 24, 2022 on The Effective Altruism Forum.We are delighted to announce that Giving What We Can have updated our giving recommendations page and donation platform (previously EA Funds), reflecting the latest recommendations of our five trusted evaluators this giving season (GiveWell, Animal Charity Evaluators, Founders Pledge, EA Funds and Longview Philanthropy) and applying our new inclusion criteria for funds and charities.We are currently recommending 12 top-rated funds and 14 top-rated charities spread across the "cause areas" of improving human wellbeing, improving animal welfare, and creating a better future. We have also recently added content on why we recommend donors to use funds over donating directly to charities, and have added a transparency page for Giving What We Can.Read more about GWWC's broader activities around giving season here, and about our research plans for next year here.We plan to add up to 10 more top-rated funds and charities over the coming few weeks, conditional on them completing due diligence and onboarding.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
SjirH https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:21 None full 3886
uY5SwjHTXgTaWC85f_EA EA - Don’t give well, give WELLBYs: HLI’s 2022 charity recommendation by MichaelPlant Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don’t give well, give WELLBYs: HLI’s 2022 charity recommendation, published by MichaelPlant on November 24, 2022 on The Effective Altruism Forum.This post sets out the Happier Lives Institute’s charity recommendation for 2022, how we got here, and what’s next. We provide a summary first, followed by a more detailed version.SummaryHLI’s charity recommendation for 2022 is StrongMinds, a non-profit that provides group psychotherapy for women in Uganda and Zambia who are struggling with depression.We compared StrongMinds to three interventions that have been recommended by GiveWell as being amongst the most cost-effective in the world: cash transfers, deworming pills, and anti-malarial bednets. We find that StrongMinds is more cost-effective (in almost all cases).HLI is pioneering a new and improved approach to evaluating charities. We focus directly on what really matters, how much they improve people’s happiness, rather than on health or wealth. We measure effectiveness in WELLBYs (wellbeing-adjusted life years).We estimate that StrongMinds is ~10x more cost-effective than GiveDirectly, which provides cash transfers. StrongMinds’ 8-10 week programme of group interpersonal therapy has a slightly larger effect than a $1,000 cash transfer but costs only $170 per person to deliver.For deworming, our forthcoming analysis finds it has a small but statistically non-significant effect on happiness. Even if we assume this effect is true, deworming is still half as cost-effective as StrongMinds. We expect to publish our full report in the coming days (sadly, it’s been delayed due to a bereavement for one of the authors).In our new report, The Elephant in the Bednet, we show that the relative value of life-extending and life-improving interventions depends very heavily on the philosophical assumptions you make. This issue is usually glossed over and there is no simple answer.We conclude that the Against Malaria Foundation is less cost-effective than StrongMinds under almost all assumptions. We expect this conclusion will similarly apply to the other life-extending charities recommended by GiveWell.HLI’s original mission, when we started three years ago, was to take what appeared to be the world’s top charities - the ones GiveWell recommended - reevaluate them in terms of subjective wellbeing, and then try to find something better. We believe we’ve now accomplished that mission: treating depression at scale allows you to do even more good with your money.We’re now moving to ‘Phase 2’, analysing a wider range of interventions and charities in WELLBYs to find even better opportunities for donors.StrongMinds aims to raise $20 million over the next two years and there’s over $800,000 of matching funds available for StrongMinds this giving season.Why does HLI exist?The Happier Lives Institute advises donors how to maximise the impact of their donations. Our distinctive approach is to focus directly on what really matters to people, improving their subjective wellbeing, how they feel during and about their lives. The idea that we should take happiness seriously is simple:Happiness matters. Although it’s common to think about impact in terms of health and wealth, those are just a means, not an end in themselves. What’s really important is that people enjoy their lives and are free from suffering.We can measure happiness by asking people how they feel. Lots of research has shown that subjective wellbeing surveys are scientifically valid (e.g. OECD, 2013; Kaiser & Oswald, 2022). A typical question is, “Overall, how satisfied are you with your life, nowadays?” (0 - not at all satisfied, 10 - completely satisfied).Our expectations about happiness are often wrong. When we try to guess what life would be like, for others or our future selves, we suffer from biases. When we put...]]>
MichaelPlant https://forum.effectivealtruism.org/posts/uY5SwjHTXgTaWC85f/don-t-give-well-give-wellbys-hli-s-2022-charity Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don’t give well, give WELLBYs: HLI’s 2022 charity recommendation, published by MichaelPlant on November 24, 2022 on The Effective Altruism Forum.This post sets out the Happier Lives Institute’s charity recommendation for 2022, how we got here, and what’s next. We provide a summary first, followed by a more detailed version.SummaryHLI’s charity recommendation for 2022 is StrongMinds, a non-profit that provides group psychotherapy for women in Uganda and Zambia who are struggling with depression.We compared StrongMinds to three interventions that have been recommended by GiveWell as being amongst the most cost-effective in the world: cash transfers, deworming pills, and anti-malarial bednets. We find that StrongMinds is more cost-effective (in almost all cases).HLI is pioneering a new and improved approach to evaluating charities. We focus directly on what really matters, how much they improve people’s happiness, rather than on health or wealth. We measure effectiveness in WELLBYs (wellbeing-adjusted life years).We estimate that StrongMinds is ~10x more cost-effective than GiveDirectly, which provides cash transfers. StrongMinds’ 8-10 week programme of group interpersonal therapy has a slightly larger effect than a $1,000 cash transfer but costs only $170 per person to deliver.For deworming, our forthcoming analysis finds it has a small but statistically non-significant effect on happiness. Even if we assume this effect is true, deworming is still half as cost-effective as StrongMinds. We expect to publish our full report in the coming days (sadly, it’s been delayed due to a bereavement for one of the authors).In our new report, The Elephant in the Bednet, we show that the relative value of life-extending and life-improving interventions depends very heavily on the philosophical assumptions you make. This issue is usually glossed over and there is no simple answer.We conclude that the Against Malaria Foundation is less cost-effective than StrongMinds under almost all assumptions. We expect this conclusion will similarly apply to the other life-extending charities recommended by GiveWell.HLI’s original mission, when we started three years ago, was to take what appeared to be the world’s top charities - the ones GiveWell recommended - reevaluate them in terms of subjective wellbeing, and then try to find something better. We believe we’ve now accomplished that mission: treating depression at scale allows you to do even more good with your money.We’re now moving to ‘Phase 2’, analysing a wider range of interventions and charities in WELLBYs to find even better opportunities for donors.StrongMinds aims to raise $20 million over the next two years and there’s over $800,000 of matching funds available for StrongMinds this giving season.Why does HLI exist?The Happier Lives Institute advises donors how to maximise the impact of their donations. Our distinctive approach is to focus directly on what really matters to people, improving their subjective wellbeing, how they feel during and about their lives. The idea that we should take happiness seriously is simple:Happiness matters. Although it’s common to think about impact in terms of health and wealth, those are just a means, not an end in themselves. What’s really important is that people enjoy their lives and are free from suffering.We can measure happiness by asking people how they feel. Lots of research has shown that subjective wellbeing surveys are scientifically valid (e.g. OECD, 2013; Kaiser & Oswald, 2022). A typical question is, “Overall, how satisfied are you with your life, nowadays?” (0 - not at all satisfied, 10 - completely satisfied).Our expectations about happiness are often wrong. When we try to guess what life would be like, for others or our future selves, we suffer from biases. When we put...]]>
Thu, 24 Nov 2022 10:02:33 +0000 EA - Don’t give well, give WELLBYs: HLI’s 2022 charity recommendation by MichaelPlant Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don’t give well, give WELLBYs: HLI’s 2022 charity recommendation, published by MichaelPlant on November 24, 2022 on The Effective Altruism Forum.This post sets out the Happier Lives Institute’s charity recommendation for 2022, how we got here, and what’s next. We provide a summary first, followed by a more detailed version.SummaryHLI’s charity recommendation for 2022 is StrongMinds, a non-profit that provides group psychotherapy for women in Uganda and Zambia who are struggling with depression.We compared StrongMinds to three interventions that have been recommended by GiveWell as being amongst the most cost-effective in the world: cash transfers, deworming pills, and anti-malarial bednets. We find that StrongMinds is more cost-effective (in almost all cases).HLI is pioneering a new and improved approach to evaluating charities. We focus directly on what really matters, how much they improve people’s happiness, rather than on health or wealth. We measure effectiveness in WELLBYs (wellbeing-adjusted life years).We estimate that StrongMinds is ~10x more cost-effective than GiveDirectly, which provides cash transfers. StrongMinds’ 8-10 week programme of group interpersonal therapy has a slightly larger effect than a $1,000 cash transfer but costs only $170 per person to deliver.For deworming, our forthcoming analysis finds it has a small but statistically non-significant effect on happiness. Even if we assume this effect is true, deworming is still half as cost-effective as StrongMinds. We expect to publish our full report in the coming days (sadly, it’s been delayed due to a bereavement for one of the authors).In our new report, The Elephant in the Bednet, we show that the relative value of life-extending and life-improving interventions depends very heavily on the philosophical assumptions you make. This issue is usually glossed over and there is no simple answer.We conclude that the Against Malaria Foundation is less cost-effective than StrongMinds under almost all assumptions. We expect this conclusion will similarly apply to the other life-extending charities recommended by GiveWell.HLI’s original mission, when we started three years ago, was to take what appeared to be the world’s top charities - the ones GiveWell recommended - reevaluate them in terms of subjective wellbeing, and then try to find something better. We believe we’ve now accomplished that mission: treating depression at scale allows you to do even more good with your money.We’re now moving to ‘Phase 2’, analysing a wider range of interventions and charities in WELLBYs to find even better opportunities for donors.StrongMinds aims to raise $20 million over the next two years and there’s over $800,000 of matching funds available for StrongMinds this giving season.Why does HLI exist?The Happier Lives Institute advises donors how to maximise the impact of their donations. Our distinctive approach is to focus directly on what really matters to people, improving their subjective wellbeing, how they feel during and about their lives. The idea that we should take happiness seriously is simple:Happiness matters. Although it’s common to think about impact in terms of health and wealth, those are just a means, not an end in themselves. What’s really important is that people enjoy their lives and are free from suffering.We can measure happiness by asking people how they feel. Lots of research has shown that subjective wellbeing surveys are scientifically valid (e.g. OECD, 2013; Kaiser & Oswald, 2022). A typical question is, “Overall, how satisfied are you with your life, nowadays?” (0 - not at all satisfied, 10 - completely satisfied).Our expectations about happiness are often wrong. When we try to guess what life would be like, for others or our future selves, we suffer from biases. When we put...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don’t give well, give WELLBYs: HLI’s 2022 charity recommendation, published by MichaelPlant on November 24, 2022 on The Effective Altruism Forum.This post sets out the Happier Lives Institute’s charity recommendation for 2022, how we got here, and what’s next. We provide a summary first, followed by a more detailed version.SummaryHLI’s charity recommendation for 2022 is StrongMinds, a non-profit that provides group psychotherapy for women in Uganda and Zambia who are struggling with depression.We compared StrongMinds to three interventions that have been recommended by GiveWell as being amongst the most cost-effective in the world: cash transfers, deworming pills, and anti-malarial bednets. We find that StrongMinds is more cost-effective (in almost all cases).HLI is pioneering a new and improved approach to evaluating charities. We focus directly on what really matters, how much they improve people’s happiness, rather than on health or wealth. We measure effectiveness in WELLBYs (wellbeing-adjusted life years).We estimate that StrongMinds is ~10x more cost-effective than GiveDirectly, which provides cash transfers. StrongMinds’ 8-10 week programme of group interpersonal therapy has a slightly larger effect than a $1,000 cash transfer but costs only $170 per person to deliver.For deworming, our forthcoming analysis finds it has a small but statistically non-significant effect on happiness. Even if we assume this effect is true, deworming is still half as cost-effective as StrongMinds. We expect to publish our full report in the coming days (sadly, it’s been delayed due to a bereavement for one of the authors).In our new report, The Elephant in the Bednet, we show that the relative value of life-extending and life-improving interventions depends very heavily on the philosophical assumptions you make. This issue is usually glossed over and there is no simple answer.We conclude that the Against Malaria Foundation is less cost-effective than StrongMinds under almost all assumptions. We expect this conclusion will similarly apply to the other life-extending charities recommended by GiveWell.HLI’s original mission, when we started three years ago, was to take what appeared to be the world’s top charities - the ones GiveWell recommended - reevaluate them in terms of subjective wellbeing, and then try to find something better. We believe we’ve now accomplished that mission: treating depression at scale allows you to do even more good with your money.We’re now moving to ‘Phase 2’, analysing a wider range of interventions and charities in WELLBYs to find even better opportunities for donors.StrongMinds aims to raise $20 million over the next two years and there’s over $800,000 of matching funds available for StrongMinds this giving season.Why does HLI exist?The Happier Lives Institute advises donors how to maximise the impact of their donations. Our distinctive approach is to focus directly on what really matters to people, improving their subjective wellbeing, how they feel during and about their lives. The idea that we should take happiness seriously is simple:Happiness matters. Although it’s common to think about impact in terms of health and wealth, those are just a means, not an end in themselves. What’s really important is that people enjoy their lives and are free from suffering.We can measure happiness by asking people how they feel. Lots of research has shown that subjective wellbeing surveys are scientifically valid (e.g. OECD, 2013; Kaiser & Oswald, 2022). A typical question is, “Overall, how satisfied are you with your life, nowadays?” (0 - not at all satisfied, 10 - completely satisfied).Our expectations about happiness are often wrong. When we try to guess what life would be like, for others or our future selves, we suffer from biases. When we put...]]>
MichaelPlant https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 17:31 None full 3884
pp2jmWHyDK9sfC4Rh_EA EA - "Evaluating the evaluators": GWWC's research direction by SjirH Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Evaluating the evaluators": GWWC's research direction, published by SjirH on November 24, 2022 on The Effective Altruism Forum.This post is about GWWC's research plans for next year; for our giving recommendations this giving season please see this post and for our other activities see this post.The public effective giving ecosystem now consists of over 40 organisations and projects. These are initiatives that either try to identify publicly accessible philanthropic funding opportunities using an effective-altruism-inspired methodology (evaluators), or to fundraise for the funding opportunities that have already been identified (fundraisers), or both.Over 25 of these organisations and projects are purely fundraisers and do not have any research capacity of their own: they have to rely on evaluators for their giving recommendations, and in practice currently mainly rely on three of those: GiveWell, Animal Charity Evaluators and Founders Pledge.At the moment, fundraisers and individual donors have very little to go on to select which evaluators they rely on and how to curate the exact recommendations and donations they make. These decisions seem to be made based on public reputation of evaluators, personal impressions and trust, and perhaps in some cases a lack of information about existing alternatives or simple legacy/historical artefact. Furthermore, many fundraisers currently maintain separate relationships with the evaluators they use recommendations from and with the charities they end up recommending, causing extra overhead for all involved parties.Considering this situation and from checking with a subset of fundraising organisations, it seems there is a pressing need for (1) a quality check on new and existing evaluators (“evaluating the evaluators”) and (2) an accessible overview of all recommendations made by evaluators whose methodology meets a certain quality standard. This need is becoming more pressing with the ecosystem growing both on the supply (evaluator) and demand (fundraiser) side.The new GWWC research team is looking to start filling this gap: to help connect evaluators and donors/fundraisers in the effective giving ecosystem in a more effective (higher-quality recommendations) and efficient (lower transaction costs) way.Starting in 2023, the GWWC research team plan to evaluate funding opportunity evaluators on their methodology, to share our findings with other effective giving organisations and projects, and to promote the recommendations of those evaluators that we find meet a certain quality standard. In all of this, we aim to take an inclusive approach in terms of worldviews and values: we are open to evaluating all evaluators that could be seen to maximise positive impact according to some reasonably common worldview or value system, even though we appreciate the challenge here and admit we can never be perfectly “neutral”.We also appreciate this is an ambitious project for a small team (currently only 2!) to take on, and expect it to take us time to build our capacity to evaluate all suitable evaluators at the quality level at which we'd like to evaluate them. Especially in this first year, we may be limited in the number of evaluators we can evaluate and in the time we can spend on evaluating each, and we may not yet be able to provide the full "quality check" we aim to ultimately provide. We'll try to prioritise our time to address the most pressing needs first, and aim to communicate transparently about the confidence of our conclusions, the limitations of our processes, and the mistakes we are inevitably going to make.We very much welcome any questions or feedback on our plans, and look forward to working with others on further improving the state of the effective giving ecosystem, getting more money to where it is needed mos...]]>
SjirH https://forum.effectivealtruism.org/posts/pp2jmWHyDK9sfC4Rh/evaluating-the-evaluators-gwwc-s-research-direction Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Evaluating the evaluators": GWWC's research direction, published by SjirH on November 24, 2022 on The Effective Altruism Forum.This post is about GWWC's research plans for next year; for our giving recommendations this giving season please see this post and for our other activities see this post.The public effective giving ecosystem now consists of over 40 organisations and projects. These are initiatives that either try to identify publicly accessible philanthropic funding opportunities using an effective-altruism-inspired methodology (evaluators), or to fundraise for the funding opportunities that have already been identified (fundraisers), or both.Over 25 of these organisations and projects are purely fundraisers and do not have any research capacity of their own: they have to rely on evaluators for their giving recommendations, and in practice currently mainly rely on three of those: GiveWell, Animal Charity Evaluators and Founders Pledge.At the moment, fundraisers and individual donors have very little to go on to select which evaluators they rely on and how to curate the exact recommendations and donations they make. These decisions seem to be made based on public reputation of evaluators, personal impressions and trust, and perhaps in some cases a lack of information about existing alternatives or simple legacy/historical artefact. Furthermore, many fundraisers currently maintain separate relationships with the evaluators they use recommendations from and with the charities they end up recommending, causing extra overhead for all involved parties.Considering this situation and from checking with a subset of fundraising organisations, it seems there is a pressing need for (1) a quality check on new and existing evaluators (“evaluating the evaluators”) and (2) an accessible overview of all recommendations made by evaluators whose methodology meets a certain quality standard. This need is becoming more pressing with the ecosystem growing both on the supply (evaluator) and demand (fundraiser) side.The new GWWC research team is looking to start filling this gap: to help connect evaluators and donors/fundraisers in the effective giving ecosystem in a more effective (higher-quality recommendations) and efficient (lower transaction costs) way.Starting in 2023, the GWWC research team plan to evaluate funding opportunity evaluators on their methodology, to share our findings with other effective giving organisations and projects, and to promote the recommendations of those evaluators that we find meet a certain quality standard. In all of this, we aim to take an inclusive approach in terms of worldviews and values: we are open to evaluating all evaluators that could be seen to maximise positive impact according to some reasonably common worldview or value system, even though we appreciate the challenge here and admit we can never be perfectly “neutral”.We also appreciate this is an ambitious project for a small team (currently only 2!) to take on, and expect it to take us time to build our capacity to evaluate all suitable evaluators at the quality level at which we'd like to evaluate them. Especially in this first year, we may be limited in the number of evaluators we can evaluate and in the time we can spend on evaluating each, and we may not yet be able to provide the full "quality check" we aim to ultimately provide. We'll try to prioritise our time to address the most pressing needs first, and aim to communicate transparently about the confidence of our conclusions, the limitations of our processes, and the mistakes we are inevitably going to make.We very much welcome any questions or feedback on our plans, and look forward to working with others on further improving the state of the effective giving ecosystem, getting more money to where it is needed mos...]]>
Thu, 24 Nov 2022 06:48:35 +0000 EA - "Evaluating the evaluators": GWWC's research direction by SjirH Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Evaluating the evaluators": GWWC's research direction, published by SjirH on November 24, 2022 on The Effective Altruism Forum.This post is about GWWC's research plans for next year; for our giving recommendations this giving season please see this post and for our other activities see this post.The public effective giving ecosystem now consists of over 40 organisations and projects. These are initiatives that either try to identify publicly accessible philanthropic funding opportunities using an effective-altruism-inspired methodology (evaluators), or to fundraise for the funding opportunities that have already been identified (fundraisers), or both.Over 25 of these organisations and projects are purely fundraisers and do not have any research capacity of their own: they have to rely on evaluators for their giving recommendations, and in practice currently mainly rely on three of those: GiveWell, Animal Charity Evaluators and Founders Pledge.At the moment, fundraisers and individual donors have very little to go on to select which evaluators they rely on and how to curate the exact recommendations and donations they make. These decisions seem to be made based on public reputation of evaluators, personal impressions and trust, and perhaps in some cases a lack of information about existing alternatives or simple legacy/historical artefact. Furthermore, many fundraisers currently maintain separate relationships with the evaluators they use recommendations from and with the charities they end up recommending, causing extra overhead for all involved parties.Considering this situation and from checking with a subset of fundraising organisations, it seems there is a pressing need for (1) a quality check on new and existing evaluators (“evaluating the evaluators”) and (2) an accessible overview of all recommendations made by evaluators whose methodology meets a certain quality standard. This need is becoming more pressing with the ecosystem growing both on the supply (evaluator) and demand (fundraiser) side.The new GWWC research team is looking to start filling this gap: to help connect evaluators and donors/fundraisers in the effective giving ecosystem in a more effective (higher-quality recommendations) and efficient (lower transaction costs) way.Starting in 2023, the GWWC research team plan to evaluate funding opportunity evaluators on their methodology, to share our findings with other effective giving organisations and projects, and to promote the recommendations of those evaluators that we find meet a certain quality standard. In all of this, we aim to take an inclusive approach in terms of worldviews and values: we are open to evaluating all evaluators that could be seen to maximise positive impact according to some reasonably common worldview or value system, even though we appreciate the challenge here and admit we can never be perfectly “neutral”.We also appreciate this is an ambitious project for a small team (currently only 2!) to take on, and expect it to take us time to build our capacity to evaluate all suitable evaluators at the quality level at which we'd like to evaluate them. Especially in this first year, we may be limited in the number of evaluators we can evaluate and in the time we can spend on evaluating each, and we may not yet be able to provide the full "quality check" we aim to ultimately provide. We'll try to prioritise our time to address the most pressing needs first, and aim to communicate transparently about the confidence of our conclusions, the limitations of our processes, and the mistakes we are inevitably going to make.We very much welcome any questions or feedback on our plans, and look forward to working with others on further improving the state of the effective giving ecosystem, getting more money to where it is needed mos...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Evaluating the evaluators": GWWC's research direction, published by SjirH on November 24, 2022 on The Effective Altruism Forum.This post is about GWWC's research plans for next year; for our giving recommendations this giving season please see this post and for our other activities see this post.The public effective giving ecosystem now consists of over 40 organisations and projects. These are initiatives that either try to identify publicly accessible philanthropic funding opportunities using an effective-altruism-inspired methodology (evaluators), or to fundraise for the funding opportunities that have already been identified (fundraisers), or both.Over 25 of these organisations and projects are purely fundraisers and do not have any research capacity of their own: they have to rely on evaluators for their giving recommendations, and in practice currently mainly rely on three of those: GiveWell, Animal Charity Evaluators and Founders Pledge.At the moment, fundraisers and individual donors have very little to go on to select which evaluators they rely on and how to curate the exact recommendations and donations they make. These decisions seem to be made based on public reputation of evaluators, personal impressions and trust, and perhaps in some cases a lack of information about existing alternatives or simple legacy/historical artefact. Furthermore, many fundraisers currently maintain separate relationships with the evaluators they use recommendations from and with the charities they end up recommending, causing extra overhead for all involved parties.Considering this situation and from checking with a subset of fundraising organisations, it seems there is a pressing need for (1) a quality check on new and existing evaluators (“evaluating the evaluators”) and (2) an accessible overview of all recommendations made by evaluators whose methodology meets a certain quality standard. This need is becoming more pressing with the ecosystem growing both on the supply (evaluator) and demand (fundraiser) side.The new GWWC research team is looking to start filling this gap: to help connect evaluators and donors/fundraisers in the effective giving ecosystem in a more effective (higher-quality recommendations) and efficient (lower transaction costs) way.Starting in 2023, the GWWC research team plan to evaluate funding opportunity evaluators on their methodology, to share our findings with other effective giving organisations and projects, and to promote the recommendations of those evaluators that we find meet a certain quality standard. In all of this, we aim to take an inclusive approach in terms of worldviews and values: we are open to evaluating all evaluators that could be seen to maximise positive impact according to some reasonably common worldview or value system, even though we appreciate the challenge here and admit we can never be perfectly “neutral”.We also appreciate this is an ambitious project for a small team (currently only 2!) to take on, and expect it to take us time to build our capacity to evaluate all suitable evaluators at the quality level at which we'd like to evaluate them. Especially in this first year, we may be limited in the number of evaluators we can evaluate and in the time we can spend on evaluating each, and we may not yet be able to provide the full "quality check" we aim to ultimately provide. We'll try to prioritise our time to address the most pressing needs first, and aim to communicate transparently about the confidence of our conclusions, the limitations of our processes, and the mistakes we are inevitably going to make.We very much welcome any questions or feedback on our plans, and look forward to working with others on further improving the state of the effective giving ecosystem, getting more money to where it is needed mos...]]>
SjirH https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:46 None full 3885
NxEuiEWHffi5s6qQz_EA EA - Thoughts on legal concerns surrounding the FTX situation: document preservation and communications by Molly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on legal concerns surrounding the FTX situation: document preservation and communications, published by Molly on November 24, 2022 on The Effective Altruism Forum.Aside from concerns about bankruptcy clawbacks, the biggest repeat legal-oriented question I've been getting since the FTX fallout is: are my communications going to end up in court or in Bloomberg? Should I just delete everything? Should I save everything?It’s borderline impossible to give a large diffuse group of people clear guidance about their obligations in this regard. (And the usual disclaimers about not being able to provide legal advice to entities/people other than Open Phil still apply.) But I can lay out some principles and processes that should help people make better-informed decisions. Unfortunately this probably cannot be reliably exported to non-US contexts.Are My Records Going to Become Public?I think the most useful piece of information I’ve been able to give people who have asked about this kind of thing in the last two weeks is that being asked to turn things over to an investigation or discovery process does not mean you have to turn entire email servers or hard drives over. You only have to turn over what’s responsive to the inquiry. And having to turn over your materials does not automatically mean that they become public.The materials will probably go to some windowless legal office where dozens of junior attorneys (or interns) are listening to podcasts under fluorescent lights while combing through thousands of pages of documents trying to find stuff that’s relevant to the matter they’re working on. When they do notice something that could be relevant, they’ll stop and read more carefully. If it informs the matter, even in a small way, it could become an exhibit in a litigation proceeding, and that would make it accessible to the public, and therefore fair game for the media.It’s uncommon for these review processes to dig into issues that aren’t relevant to the proceeding that triggered the discovery. The peccadillos of day-to-day life are unlikely to draw additional investigative scrutiny.That said, if, for example, you’re regularly corresponding with Sam Bankman-Fried or Caroline Ellison, I would just assume all of that correspondence is going to be relevant and, eventually, public. The farther removed the correspondence is from anyone at the epicenter of any investigations, or from investigation-relevant subject-matter, the less likely it is to have to be turned over, or if turned over, made public.A final, minor, point: if you’re worried about leaks, I think it’s fair to say that most of those don’t come from legal teams, who would be risking their careers and credentials in leaking confidential material.Quick recap of the process:Discovery request or subpoena you turn over material responsive to the request junior lawyers look through massive quantity of information for relevant material relevant material may become an exhibit in legal proceedings, which are accessible to the public material accessible to the public can get picked up in the mediaShould I save/delete everything?Legal obligations to preserve documents generally hinge on whether you have a “reasonable anticipation of litigation.” This can mean you think you’re actually going to be a party to a lawsuit (either because you sue someone, or they sue you). It could also mean that you get a discovery demand or a subpoena or some other formal notice from a court.If something does trigger a legal obligation to preserve documents, you need to make sure that the material in your possession that could be relevant to the proceeding in question doesn’t get deleted. For example, if you use Signal a lot, and you have Signal messages set to delete after a week, you should screenshot any relevant texts ...]]>
Molly https://forum.effectivealtruism.org/posts/NxEuiEWHffi5s6qQz/thoughts-on-legal-concerns-surrounding-the-ftx-situation-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on legal concerns surrounding the FTX situation: document preservation and communications, published by Molly on November 24, 2022 on The Effective Altruism Forum.Aside from concerns about bankruptcy clawbacks, the biggest repeat legal-oriented question I've been getting since the FTX fallout is: are my communications going to end up in court or in Bloomberg? Should I just delete everything? Should I save everything?It’s borderline impossible to give a large diffuse group of people clear guidance about their obligations in this regard. (And the usual disclaimers about not being able to provide legal advice to entities/people other than Open Phil still apply.) But I can lay out some principles and processes that should help people make better-informed decisions. Unfortunately this probably cannot be reliably exported to non-US contexts.Are My Records Going to Become Public?I think the most useful piece of information I’ve been able to give people who have asked about this kind of thing in the last two weeks is that being asked to turn things over to an investigation or discovery process does not mean you have to turn entire email servers or hard drives over. You only have to turn over what’s responsive to the inquiry. And having to turn over your materials does not automatically mean that they become public.The materials will probably go to some windowless legal office where dozens of junior attorneys (or interns) are listening to podcasts under fluorescent lights while combing through thousands of pages of documents trying to find stuff that’s relevant to the matter they’re working on. When they do notice something that could be relevant, they’ll stop and read more carefully. If it informs the matter, even in a small way, it could become an exhibit in a litigation proceeding, and that would make it accessible to the public, and therefore fair game for the media.It’s uncommon for these review processes to dig into issues that aren’t relevant to the proceeding that triggered the discovery. The peccadillos of day-to-day life are unlikely to draw additional investigative scrutiny.That said, if, for example, you’re regularly corresponding with Sam Bankman-Fried or Caroline Ellison, I would just assume all of that correspondence is going to be relevant and, eventually, public. The farther removed the correspondence is from anyone at the epicenter of any investigations, or from investigation-relevant subject-matter, the less likely it is to have to be turned over, or if turned over, made public.A final, minor, point: if you’re worried about leaks, I think it’s fair to say that most of those don’t come from legal teams, who would be risking their careers and credentials in leaking confidential material.Quick recap of the process:Discovery request or subpoena you turn over material responsive to the request junior lawyers look through massive quantity of information for relevant material relevant material may become an exhibit in legal proceedings, which are accessible to the public material accessible to the public can get picked up in the mediaShould I save/delete everything?Legal obligations to preserve documents generally hinge on whether you have a “reasonable anticipation of litigation.” This can mean you think you’re actually going to be a party to a lawsuit (either because you sue someone, or they sue you). It could also mean that you get a discovery demand or a subpoena or some other formal notice from a court.If something does trigger a legal obligation to preserve documents, you need to make sure that the material in your possession that could be relevant to the proceeding in question doesn’t get deleted. For example, if you use Signal a lot, and you have Signal messages set to delete after a week, you should screenshot any relevant texts ...]]>
Thu, 24 Nov 2022 03:46:57 +0000 EA - Thoughts on legal concerns surrounding the FTX situation: document preservation and communications by Molly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on legal concerns surrounding the FTX situation: document preservation and communications, published by Molly on November 24, 2022 on The Effective Altruism Forum.Aside from concerns about bankruptcy clawbacks, the biggest repeat legal-oriented question I've been getting since the FTX fallout is: are my communications going to end up in court or in Bloomberg? Should I just delete everything? Should I save everything?It’s borderline impossible to give a large diffuse group of people clear guidance about their obligations in this regard. (And the usual disclaimers about not being able to provide legal advice to entities/people other than Open Phil still apply.) But I can lay out some principles and processes that should help people make better-informed decisions. Unfortunately this probably cannot be reliably exported to non-US contexts.Are My Records Going to Become Public?I think the most useful piece of information I’ve been able to give people who have asked about this kind of thing in the last two weeks is that being asked to turn things over to an investigation or discovery process does not mean you have to turn entire email servers or hard drives over. You only have to turn over what’s responsive to the inquiry. And having to turn over your materials does not automatically mean that they become public.The materials will probably go to some windowless legal office where dozens of junior attorneys (or interns) are listening to podcasts under fluorescent lights while combing through thousands of pages of documents trying to find stuff that’s relevant to the matter they’re working on. When they do notice something that could be relevant, they’ll stop and read more carefully. If it informs the matter, even in a small way, it could become an exhibit in a litigation proceeding, and that would make it accessible to the public, and therefore fair game for the media.It’s uncommon for these review processes to dig into issues that aren’t relevant to the proceeding that triggered the discovery. The peccadillos of day-to-day life are unlikely to draw additional investigative scrutiny.That said, if, for example, you’re regularly corresponding with Sam Bankman-Fried or Caroline Ellison, I would just assume all of that correspondence is going to be relevant and, eventually, public. The farther removed the correspondence is from anyone at the epicenter of any investigations, or from investigation-relevant subject-matter, the less likely it is to have to be turned over, or if turned over, made public.A final, minor, point: if you’re worried about leaks, I think it’s fair to say that most of those don’t come from legal teams, who would be risking their careers and credentials in leaking confidential material.Quick recap of the process:Discovery request or subpoena you turn over material responsive to the request junior lawyers look through massive quantity of information for relevant material relevant material may become an exhibit in legal proceedings, which are accessible to the public material accessible to the public can get picked up in the mediaShould I save/delete everything?Legal obligations to preserve documents generally hinge on whether you have a “reasonable anticipation of litigation.” This can mean you think you’re actually going to be a party to a lawsuit (either because you sue someone, or they sue you). It could also mean that you get a discovery demand or a subpoena or some other formal notice from a court.If something does trigger a legal obligation to preserve documents, you need to make sure that the material in your possession that could be relevant to the proceeding in question doesn’t get deleted. For example, if you use Signal a lot, and you have Signal messages set to delete after a week, you should screenshot any relevant texts ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on legal concerns surrounding the FTX situation: document preservation and communications, published by Molly on November 24, 2022 on The Effective Altruism Forum.Aside from concerns about bankruptcy clawbacks, the biggest repeat legal-oriented question I've been getting since the FTX fallout is: are my communications going to end up in court or in Bloomberg? Should I just delete everything? Should I save everything?It’s borderline impossible to give a large diffuse group of people clear guidance about their obligations in this regard. (And the usual disclaimers about not being able to provide legal advice to entities/people other than Open Phil still apply.) But I can lay out some principles and processes that should help people make better-informed decisions. Unfortunately this probably cannot be reliably exported to non-US contexts.Are My Records Going to Become Public?I think the most useful piece of information I’ve been able to give people who have asked about this kind of thing in the last two weeks is that being asked to turn things over to an investigation or discovery process does not mean you have to turn entire email servers or hard drives over. You only have to turn over what’s responsive to the inquiry. And having to turn over your materials does not automatically mean that they become public.The materials will probably go to some windowless legal office where dozens of junior attorneys (or interns) are listening to podcasts under fluorescent lights while combing through thousands of pages of documents trying to find stuff that’s relevant to the matter they’re working on. When they do notice something that could be relevant, they’ll stop and read more carefully. If it informs the matter, even in a small way, it could become an exhibit in a litigation proceeding, and that would make it accessible to the public, and therefore fair game for the media.It’s uncommon for these review processes to dig into issues that aren’t relevant to the proceeding that triggered the discovery. The peccadillos of day-to-day life are unlikely to draw additional investigative scrutiny.That said, if, for example, you’re regularly corresponding with Sam Bankman-Fried or Caroline Ellison, I would just assume all of that correspondence is going to be relevant and, eventually, public. The farther removed the correspondence is from anyone at the epicenter of any investigations, or from investigation-relevant subject-matter, the less likely it is to have to be turned over, or if turned over, made public.A final, minor, point: if you’re worried about leaks, I think it’s fair to say that most of those don’t come from legal teams, who would be risking their careers and credentials in leaking confidential material.Quick recap of the process:Discovery request or subpoena you turn over material responsive to the request junior lawyers look through massive quantity of information for relevant material relevant material may become an exhibit in legal proceedings, which are accessible to the public material accessible to the public can get picked up in the mediaShould I save/delete everything?Legal obligations to preserve documents generally hinge on whether you have a “reasonable anticipation of litigation.” This can mean you think you’re actually going to be a party to a lawsuit (either because you sue someone, or they sue you). It could also mean that you get a discovery demand or a subpoena or some other formal notice from a court.If something does trigger a legal obligation to preserve documents, you need to make sure that the material in your possession that could be relevant to the proceeding in question doesn’t get deleted. For example, if you use Signal a lot, and you have Signal messages set to delete after a week, you should screenshot any relevant texts ...]]>
Molly https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:42 None full 3888
NeK9XYY2mDsH5bJdD_EA EA - Our recommendations for giving in 2022 by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our recommendations for giving in 2022, published by GiveWell on November 23, 2022 on The Effective Altruism Forum.Author: Miranda Kaplan, GiveWell Communications AssociateWe wrote back in July that we expected to be funding-constrained this year. That remains true as we approach the end of the year, putting us in the unusual position of leaving impact on the table.We've set a goal of raising $600 million in 2022, but our research team has identified $900 million in highly cost-effective funding gaps. That leaves $300 million in funding gaps unfilled. By donating this year, you can help us not only meet but exceed our goal—and say yes to more excellent opportunities to save and improve lives.Additionally, our giving guidance for donors has changed this year. For the first time, our top recommendation is to give to our new All Grants Fund, which we allocate to any need that meets our cost-effectiveness bar. We think it's the best bet for donors who want to support the most promising opportunities we've found to help people, regardless of program or location. And it reflects our current views on how we can best meet our goal of maximizing global well-being—by taking advantage of every path to impact, whether that's funding top charities, seeding and scaling newer programs, or funding research. See "Our giving funds, and our top recommendation" for more on all three of our giving funds.Why your support is so importantWe rely heavily on numbers to think through our funding decisions. But it’s important to remind ourselves what those numbers represent.[1] If we reach our goal of $600 million this year, we speculatively guess that that funding would save around 70,000 lives.[2] That's approximately the population of Portland, Maine.[3]To make the image a little more specific: we also expect most of the lives saved will be those of very young children, under five years old.[4] If they reach their fifth birthday, they'll have a much higher chance of surviving into adulthood.[5] We think about 49,000 of the lives these donations are expected to save will be those of children under five[6]—enough to fill more than 2,000 average US primary school classrooms.[7]But raising $600 million is not a given. We expect $350 million of our funding this year to come from Open Philanthropy, our single largest donor.[8] The rest will come from our broader community of supporters (like you!), and our projections for this category of our fundraising are fairly uncertain.What $600 million will enableLast year, our research team had tremendous success in identifying new room for more funding,[9] and those efforts have continued to bear fruit in 2022. We've found highly cost-effective funding opportunities in both the interventions implemented by our top charities—malaria prevention, incentives for vaccination, and vitamin A supplementation (VAS)—and newer-to-us areas, such as water treatment, iron fortification, and maternal syphilis screening and treatment.Below are just a few examples of what the money we raise this year will likely fund:$30.2 million to New Incentives for continued expansion of its program. New Incentives, a top charity since 2020, provides cash transfers to incentivize caregivers in northern Nigeria to get their infants vaccinated. Funding from GiveWell and others has enabled the program to grow rapidly over the past couple of years.[10] Prior to this grant, New Incentives had raised funding to reach approximately 3.2 million children across northern Nigeria; we estimate that this grant will allow it to reach approximately 1.4 million additional children.[11]$2.4 million to r.i.c.e., a nonprofit that has partnered with the government of Uttar Pradesh, India, to continue operating the Project on Breastfeeding and Newborn Care. The project focuses on kangaroo mot...]]>
GiveWell https://forum.effectivealtruism.org/posts/NeK9XYY2mDsH5bJdD/our-recommendations-for-giving-in-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our recommendations for giving in 2022, published by GiveWell on November 23, 2022 on The Effective Altruism Forum.Author: Miranda Kaplan, GiveWell Communications AssociateWe wrote back in July that we expected to be funding-constrained this year. That remains true as we approach the end of the year, putting us in the unusual position of leaving impact on the table.We've set a goal of raising $600 million in 2022, but our research team has identified $900 million in highly cost-effective funding gaps. That leaves $300 million in funding gaps unfilled. By donating this year, you can help us not only meet but exceed our goal—and say yes to more excellent opportunities to save and improve lives.Additionally, our giving guidance for donors has changed this year. For the first time, our top recommendation is to give to our new All Grants Fund, which we allocate to any need that meets our cost-effectiveness bar. We think it's the best bet for donors who want to support the most promising opportunities we've found to help people, regardless of program or location. And it reflects our current views on how we can best meet our goal of maximizing global well-being—by taking advantage of every path to impact, whether that's funding top charities, seeding and scaling newer programs, or funding research. See "Our giving funds, and our top recommendation" for more on all three of our giving funds.Why your support is so importantWe rely heavily on numbers to think through our funding decisions. But it’s important to remind ourselves what those numbers represent.[1] If we reach our goal of $600 million this year, we speculatively guess that that funding would save around 70,000 lives.[2] That's approximately the population of Portland, Maine.[3]To make the image a little more specific: we also expect most of the lives saved will be those of very young children, under five years old.[4] If they reach their fifth birthday, they'll have a much higher chance of surviving into adulthood.[5] We think about 49,000 of the lives these donations are expected to save will be those of children under five[6]—enough to fill more than 2,000 average US primary school classrooms.[7]But raising $600 million is not a given. We expect $350 million of our funding this year to come from Open Philanthropy, our single largest donor.[8] The rest will come from our broader community of supporters (like you!), and our projections for this category of our fundraising are fairly uncertain.What $600 million will enableLast year, our research team had tremendous success in identifying new room for more funding,[9] and those efforts have continued to bear fruit in 2022. We've found highly cost-effective funding opportunities in both the interventions implemented by our top charities—malaria prevention, incentives for vaccination, and vitamin A supplementation (VAS)—and newer-to-us areas, such as water treatment, iron fortification, and maternal syphilis screening and treatment.Below are just a few examples of what the money we raise this year will likely fund:$30.2 million to New Incentives for continued expansion of its program. New Incentives, a top charity since 2020, provides cash transfers to incentivize caregivers in northern Nigeria to get their infants vaccinated. Funding from GiveWell and others has enabled the program to grow rapidly over the past couple of years.[10] Prior to this grant, New Incentives had raised funding to reach approximately 3.2 million children across northern Nigeria; we estimate that this grant will allow it to reach approximately 1.4 million additional children.[11]$2.4 million to r.i.c.e., a nonprofit that has partnered with the government of Uttar Pradesh, India, to continue operating the Project on Breastfeeding and Newborn Care. The project focuses on kangaroo mot...]]>
Thu, 24 Nov 2022 01:17:42 +0000 EA - Our recommendations for giving in 2022 by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our recommendations for giving in 2022, published by GiveWell on November 23, 2022 on The Effective Altruism Forum.Author: Miranda Kaplan, GiveWell Communications AssociateWe wrote back in July that we expected to be funding-constrained this year. That remains true as we approach the end of the year, putting us in the unusual position of leaving impact on the table.We've set a goal of raising $600 million in 2022, but our research team has identified $900 million in highly cost-effective funding gaps. That leaves $300 million in funding gaps unfilled. By donating this year, you can help us not only meet but exceed our goal—and say yes to more excellent opportunities to save and improve lives.Additionally, our giving guidance for donors has changed this year. For the first time, our top recommendation is to give to our new All Grants Fund, which we allocate to any need that meets our cost-effectiveness bar. We think it's the best bet for donors who want to support the most promising opportunities we've found to help people, regardless of program or location. And it reflects our current views on how we can best meet our goal of maximizing global well-being—by taking advantage of every path to impact, whether that's funding top charities, seeding and scaling newer programs, or funding research. See "Our giving funds, and our top recommendation" for more on all three of our giving funds.Why your support is so importantWe rely heavily on numbers to think through our funding decisions. But it’s important to remind ourselves what those numbers represent.[1] If we reach our goal of $600 million this year, we speculatively guess that that funding would save around 70,000 lives.[2] That's approximately the population of Portland, Maine.[3]To make the image a little more specific: we also expect most of the lives saved will be those of very young children, under five years old.[4] If they reach their fifth birthday, they'll have a much higher chance of surviving into adulthood.[5] We think about 49,000 of the lives these donations are expected to save will be those of children under five[6]—enough to fill more than 2,000 average US primary school classrooms.[7]But raising $600 million is not a given. We expect $350 million of our funding this year to come from Open Philanthropy, our single largest donor.[8] The rest will come from our broader community of supporters (like you!), and our projections for this category of our fundraising are fairly uncertain.What $600 million will enableLast year, our research team had tremendous success in identifying new room for more funding,[9] and those efforts have continued to bear fruit in 2022. We've found highly cost-effective funding opportunities in both the interventions implemented by our top charities—malaria prevention, incentives for vaccination, and vitamin A supplementation (VAS)—and newer-to-us areas, such as water treatment, iron fortification, and maternal syphilis screening and treatment.Below are just a few examples of what the money we raise this year will likely fund:$30.2 million to New Incentives for continued expansion of its program. New Incentives, a top charity since 2020, provides cash transfers to incentivize caregivers in northern Nigeria to get their infants vaccinated. Funding from GiveWell and others has enabled the program to grow rapidly over the past couple of years.[10] Prior to this grant, New Incentives had raised funding to reach approximately 3.2 million children across northern Nigeria; we estimate that this grant will allow it to reach approximately 1.4 million additional children.[11]$2.4 million to r.i.c.e., a nonprofit that has partnered with the government of Uttar Pradesh, India, to continue operating the Project on Breastfeeding and Newborn Care. The project focuses on kangaroo mot...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our recommendations for giving in 2022, published by GiveWell on November 23, 2022 on The Effective Altruism Forum.Author: Miranda Kaplan, GiveWell Communications AssociateWe wrote back in July that we expected to be funding-constrained this year. That remains true as we approach the end of the year, putting us in the unusual position of leaving impact on the table.We've set a goal of raising $600 million in 2022, but our research team has identified $900 million in highly cost-effective funding gaps. That leaves $300 million in funding gaps unfilled. By donating this year, you can help us not only meet but exceed our goal—and say yes to more excellent opportunities to save and improve lives.Additionally, our giving guidance for donors has changed this year. For the first time, our top recommendation is to give to our new All Grants Fund, which we allocate to any need that meets our cost-effectiveness bar. We think it's the best bet for donors who want to support the most promising opportunities we've found to help people, regardless of program or location. And it reflects our current views on how we can best meet our goal of maximizing global well-being—by taking advantage of every path to impact, whether that's funding top charities, seeding and scaling newer programs, or funding research. See "Our giving funds, and our top recommendation" for more on all three of our giving funds.Why your support is so importantWe rely heavily on numbers to think through our funding decisions. But it’s important to remind ourselves what those numbers represent.[1] If we reach our goal of $600 million this year, we speculatively guess that that funding would save around 70,000 lives.[2] That's approximately the population of Portland, Maine.[3]To make the image a little more specific: we also expect most of the lives saved will be those of very young children, under five years old.[4] If they reach their fifth birthday, they'll have a much higher chance of surviving into adulthood.[5] We think about 49,000 of the lives these donations are expected to save will be those of children under five[6]—enough to fill more than 2,000 average US primary school classrooms.[7]But raising $600 million is not a given. We expect $350 million of our funding this year to come from Open Philanthropy, our single largest donor.[8] The rest will come from our broader community of supporters (like you!), and our projections for this category of our fundraising are fairly uncertain.What $600 million will enableLast year, our research team had tremendous success in identifying new room for more funding,[9] and those efforts have continued to bear fruit in 2022. We've found highly cost-effective funding opportunities in both the interventions implemented by our top charities—malaria prevention, incentives for vaccination, and vitamin A supplementation (VAS)—and newer-to-us areas, such as water treatment, iron fortification, and maternal syphilis screening and treatment.Below are just a few examples of what the money we raise this year will likely fund:$30.2 million to New Incentives for continued expansion of its program. New Incentives, a top charity since 2020, provides cash transfers to incentivize caregivers in northern Nigeria to get their infants vaccinated. Funding from GiveWell and others has enabled the program to grow rapidly over the past couple of years.[10] Prior to this grant, New Incentives had raised funding to reach approximately 3.2 million children across northern Nigeria; we estimate that this grant will allow it to reach approximately 1.4 million additional children.[11]$2.4 million to r.i.c.e., a nonprofit that has partnered with the government of Uttar Pradesh, India, to continue operating the Project on Breastfeeding and Newborn Care. The project focuses on kangaroo mot...]]>
GiveWell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:03 None full 3889
rxojcFfpN88YNwGop_EA EA - Rethink Priorities’ Leadership Statement on the FTX situation by abrahamrowe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities’ Leadership Statement on the FTX situation, published by abrahamrowe on November 23, 2022 on The Effective Altruism Forum.From the Executive Team and Board of Directors of Rethink Priorities (Peter Wildeford, Marcus Davis, Abraham Rowe, Kieran Greig, David Moss, Ozzie Gooen, Cameron Meyer Shorb, and Vicky Bond).We were saddened and shocked to learn about the extremely serious alleged misdeeds and misconduct of Sam Bankman-Fried and FTX. While we are still trying to understand what happened and the consequences of these events, we are dismayed that customer funds may have been used improperly, and that, currently, many customers are unable to retrieve funds held by FTX. We unequivocally and in the strongest possible terms condemn any potential fraud or misuse of customer funds and trust that occurred at FTX. The actions that Bankman-Fried and FTX have been accused of are out of line with the values that we believe in and try to represent as an organization.At this time, Rethink Priorities remains in a stable financial and legal position. We do not plan on laying off staff or cutting salaries in response to these events or to the changed financial condition of the EA space. However, the strategies of our General Longtermism, Special Projects, and Surveys teams were partly based on the existence of FTX funding for Rethink Priorities and others in the EA community. For the time being, we've mainly paused further hiring for these programs and are revisiting our strategies for them going forward. We’ve decided that hiring for our Special Projects team, which was already in progress before we learned about the FTX situation, will proceed in order to evaluate and onboard new fiscal sponsees.Unfortunately, this situation does impact our long-term financial outlook and our ability to keep growing. Rethink Priorities continues to have large funding needs and we look forward to sharing more about our plans with the community in the next few days. We will need to address the funding gap left by these changed conditions for the coming years.In terms of legal exposure, Rethink Priorities’ legal counsel are looking into the possibility of clawbacks of funds previously donated to us by FTX-related sources. At this time, we are not aware of any other significant legal exposure for Rethink Priorities or its staff.Prior to the news breaking this month, we already had procedures in place intended to mitigate potential financial risks from relying on FTX or other cryptocurrency donors. Internally, we've always had a practice of treating pledged or anticipated cryptocurrency donations as less reliable than other types of donations for fundraising forecasting purposes, simply due to volatility in that sector. As a part of regular crisis management exercises, we also engaged in an internal simulation in August around the possibility of FTX funds no longer being available. We did this exercise due to the relative size and importance of the funding to us, and the base failure rates of cryptocurrency projects, not due to having non-public information about FTX or Bankman-Fried.In hindsight, we believe we could have done more to share these internal risk assessments with the rest of the EA community. Going forward, we are reevaluating our own approach to risk management and the assessment of donors, though we do not believe any changes we will make would have caught this specific issue.As mentioned above, Rethink Priorities is receiving legal advice on clawbacks, and we are happy to share resources with other organizations that are concerned about their exposure. We cannot provide legal advice, but we are able to provide information on our own response—please reach out to Abraham Rowe (abraham@rethinkpriorities.org) for more information.Thanks for listening. To help u...]]>
abrahamrowe https://forum.effectivealtruism.org/posts/rxojcFfpN88YNwGop/rethink-priorities-leadership-statement-on-the-ftx-situation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities’ Leadership Statement on the FTX situation, published by abrahamrowe on November 23, 2022 on The Effective Altruism Forum.From the Executive Team and Board of Directors of Rethink Priorities (Peter Wildeford, Marcus Davis, Abraham Rowe, Kieran Greig, David Moss, Ozzie Gooen, Cameron Meyer Shorb, and Vicky Bond).We were saddened and shocked to learn about the extremely serious alleged misdeeds and misconduct of Sam Bankman-Fried and FTX. While we are still trying to understand what happened and the consequences of these events, we are dismayed that customer funds may have been used improperly, and that, currently, many customers are unable to retrieve funds held by FTX. We unequivocally and in the strongest possible terms condemn any potential fraud or misuse of customer funds and trust that occurred at FTX. The actions that Bankman-Fried and FTX have been accused of are out of line with the values that we believe in and try to represent as an organization.At this time, Rethink Priorities remains in a stable financial and legal position. We do not plan on laying off staff or cutting salaries in response to these events or to the changed financial condition of the EA space. However, the strategies of our General Longtermism, Special Projects, and Surveys teams were partly based on the existence of FTX funding for Rethink Priorities and others in the EA community. For the time being, we've mainly paused further hiring for these programs and are revisiting our strategies for them going forward. We’ve decided that hiring for our Special Projects team, which was already in progress before we learned about the FTX situation, will proceed in order to evaluate and onboard new fiscal sponsees.Unfortunately, this situation does impact our long-term financial outlook and our ability to keep growing. Rethink Priorities continues to have large funding needs and we look forward to sharing more about our plans with the community in the next few days. We will need to address the funding gap left by these changed conditions for the coming years.In terms of legal exposure, Rethink Priorities’ legal counsel are looking into the possibility of clawbacks of funds previously donated to us by FTX-related sources. At this time, we are not aware of any other significant legal exposure for Rethink Priorities or its staff.Prior to the news breaking this month, we already had procedures in place intended to mitigate potential financial risks from relying on FTX or other cryptocurrency donors. Internally, we've always had a practice of treating pledged or anticipated cryptocurrency donations as less reliable than other types of donations for fundraising forecasting purposes, simply due to volatility in that sector. As a part of regular crisis management exercises, we also engaged in an internal simulation in August around the possibility of FTX funds no longer being available. We did this exercise due to the relative size and importance of the funding to us, and the base failure rates of cryptocurrency projects, not due to having non-public information about FTX or Bankman-Fried.In hindsight, we believe we could have done more to share these internal risk assessments with the rest of the EA community. Going forward, we are reevaluating our own approach to risk management and the assessment of donors, though we do not believe any changes we will make would have caught this specific issue.As mentioned above, Rethink Priorities is receiving legal advice on clawbacks, and we are happy to share resources with other organizations that are concerned about their exposure. We cannot provide legal advice, but we are able to provide information on our own response—please reach out to Abraham Rowe (abraham@rethinkpriorities.org) for more information.Thanks for listening. To help u...]]>
Wed, 23 Nov 2022 23:16:02 +0000 EA - Rethink Priorities’ Leadership Statement on the FTX situation by abrahamrowe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities’ Leadership Statement on the FTX situation, published by abrahamrowe on November 23, 2022 on The Effective Altruism Forum.From the Executive Team and Board of Directors of Rethink Priorities (Peter Wildeford, Marcus Davis, Abraham Rowe, Kieran Greig, David Moss, Ozzie Gooen, Cameron Meyer Shorb, and Vicky Bond).We were saddened and shocked to learn about the extremely serious alleged misdeeds and misconduct of Sam Bankman-Fried and FTX. While we are still trying to understand what happened and the consequences of these events, we are dismayed that customer funds may have been used improperly, and that, currently, many customers are unable to retrieve funds held by FTX. We unequivocally and in the strongest possible terms condemn any potential fraud or misuse of customer funds and trust that occurred at FTX. The actions that Bankman-Fried and FTX have been accused of are out of line with the values that we believe in and try to represent as an organization.At this time, Rethink Priorities remains in a stable financial and legal position. We do not plan on laying off staff or cutting salaries in response to these events or to the changed financial condition of the EA space. However, the strategies of our General Longtermism, Special Projects, and Surveys teams were partly based on the existence of FTX funding for Rethink Priorities and others in the EA community. For the time being, we've mainly paused further hiring for these programs and are revisiting our strategies for them going forward. We’ve decided that hiring for our Special Projects team, which was already in progress before we learned about the FTX situation, will proceed in order to evaluate and onboard new fiscal sponsees.Unfortunately, this situation does impact our long-term financial outlook and our ability to keep growing. Rethink Priorities continues to have large funding needs and we look forward to sharing more about our plans with the community in the next few days. We will need to address the funding gap left by these changed conditions for the coming years.In terms of legal exposure, Rethink Priorities’ legal counsel are looking into the possibility of clawbacks of funds previously donated to us by FTX-related sources. At this time, we are not aware of any other significant legal exposure for Rethink Priorities or its staff.Prior to the news breaking this month, we already had procedures in place intended to mitigate potential financial risks from relying on FTX or other cryptocurrency donors. Internally, we've always had a practice of treating pledged or anticipated cryptocurrency donations as less reliable than other types of donations for fundraising forecasting purposes, simply due to volatility in that sector. As a part of regular crisis management exercises, we also engaged in an internal simulation in August around the possibility of FTX funds no longer being available. We did this exercise due to the relative size and importance of the funding to us, and the base failure rates of cryptocurrency projects, not due to having non-public information about FTX or Bankman-Fried.In hindsight, we believe we could have done more to share these internal risk assessments with the rest of the EA community. Going forward, we are reevaluating our own approach to risk management and the assessment of donors, though we do not believe any changes we will make would have caught this specific issue.As mentioned above, Rethink Priorities is receiving legal advice on clawbacks, and we are happy to share resources with other organizations that are concerned about their exposure. We cannot provide legal advice, but we are able to provide information on our own response—please reach out to Abraham Rowe (abraham@rethinkpriorities.org) for more information.Thanks for listening. To help u...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities’ Leadership Statement on the FTX situation, published by abrahamrowe on November 23, 2022 on The Effective Altruism Forum.From the Executive Team and Board of Directors of Rethink Priorities (Peter Wildeford, Marcus Davis, Abraham Rowe, Kieran Greig, David Moss, Ozzie Gooen, Cameron Meyer Shorb, and Vicky Bond).We were saddened and shocked to learn about the extremely serious alleged misdeeds and misconduct of Sam Bankman-Fried and FTX. While we are still trying to understand what happened and the consequences of these events, we are dismayed that customer funds may have been used improperly, and that, currently, many customers are unable to retrieve funds held by FTX. We unequivocally and in the strongest possible terms condemn any potential fraud or misuse of customer funds and trust that occurred at FTX. The actions that Bankman-Fried and FTX have been accused of are out of line with the values that we believe in and try to represent as an organization.At this time, Rethink Priorities remains in a stable financial and legal position. We do not plan on laying off staff or cutting salaries in response to these events or to the changed financial condition of the EA space. However, the strategies of our General Longtermism, Special Projects, and Surveys teams were partly based on the existence of FTX funding for Rethink Priorities and others in the EA community. For the time being, we've mainly paused further hiring for these programs and are revisiting our strategies for them going forward. We’ve decided that hiring for our Special Projects team, which was already in progress before we learned about the FTX situation, will proceed in order to evaluate and onboard new fiscal sponsees.Unfortunately, this situation does impact our long-term financial outlook and our ability to keep growing. Rethink Priorities continues to have large funding needs and we look forward to sharing more about our plans with the community in the next few days. We will need to address the funding gap left by these changed conditions for the coming years.In terms of legal exposure, Rethink Priorities’ legal counsel are looking into the possibility of clawbacks of funds previously donated to us by FTX-related sources. At this time, we are not aware of any other significant legal exposure for Rethink Priorities or its staff.Prior to the news breaking this month, we already had procedures in place intended to mitigate potential financial risks from relying on FTX or other cryptocurrency donors. Internally, we've always had a practice of treating pledged or anticipated cryptocurrency donations as less reliable than other types of donations for fundraising forecasting purposes, simply due to volatility in that sector. As a part of regular crisis management exercises, we also engaged in an internal simulation in August around the possibility of FTX funds no longer being available. We did this exercise due to the relative size and importance of the funding to us, and the base failure rates of cryptocurrency projects, not due to having non-public information about FTX or Bankman-Fried.In hindsight, we believe we could have done more to share these internal risk assessments with the rest of the EA community. Going forward, we are reevaluating our own approach to risk management and the assessment of donors, though we do not believe any changes we will make would have caught this specific issue.As mentioned above, Rethink Priorities is receiving legal advice on clawbacks, and we are happy to share resources with other organizations that are concerned about their exposure. We cannot provide legal advice, but we are able to provide information on our own response—please reach out to Abraham Rowe (abraham@rethinkpriorities.org) for more information.Thanks for listening. To help u...]]>
abrahamrowe https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:39 None full 3890
y568Gat7np8hsLXNf_EA EA - Timeline of FTX collapse by vipulnaik Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Timeline of FTX collapse, published by vipulnaik on November 23, 2022 on The Effective Altruism Forum.Sebastian Sanchez has been working on a timeline of FTX collapse on the Timelines Wiki; I'm paying for the work and also providing some input and direction. The timeline isn't quite done yet; you can see some pending (unchecked) items in the What the timeline is still missing section. And we may need to expand it further as we find more information or new stuff happens. Further proofreading and fact-checking are also pending.The timeline is focused largely on the collapse in November 2022; Sebastian plans to work on a timeline ofFTX that will go more into the background. For more on what the collapse timeline is focused on, see the inclusion criteria section of the timeline.Thoughts welcome!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
vipulnaik https://forum.effectivealtruism.org/posts/y568Gat7np8hsLXNf/timeline-of-ftx-collapse Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Timeline of FTX collapse, published by vipulnaik on November 23, 2022 on The Effective Altruism Forum.Sebastian Sanchez has been working on a timeline of FTX collapse on the Timelines Wiki; I'm paying for the work and also providing some input and direction. The timeline isn't quite done yet; you can see some pending (unchecked) items in the What the timeline is still missing section. And we may need to expand it further as we find more information or new stuff happens. Further proofreading and fact-checking are also pending.The timeline is focused largely on the collapse in November 2022; Sebastian plans to work on a timeline ofFTX that will go more into the background. For more on what the collapse timeline is focused on, see the inclusion criteria section of the timeline.Thoughts welcome!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 23 Nov 2022 22:46:54 +0000 EA - Timeline of FTX collapse by vipulnaik Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Timeline of FTX collapse, published by vipulnaik on November 23, 2022 on The Effective Altruism Forum.Sebastian Sanchez has been working on a timeline of FTX collapse on the Timelines Wiki; I'm paying for the work and also providing some input and direction. The timeline isn't quite done yet; you can see some pending (unchecked) items in the What the timeline is still missing section. And we may need to expand it further as we find more information or new stuff happens. Further proofreading and fact-checking are also pending.The timeline is focused largely on the collapse in November 2022; Sebastian plans to work on a timeline ofFTX that will go more into the background. For more on what the collapse timeline is focused on, see the inclusion criteria section of the timeline.Thoughts welcome!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Timeline of FTX collapse, published by vipulnaik on November 23, 2022 on The Effective Altruism Forum.Sebastian Sanchez has been working on a timeline of FTX collapse on the Timelines Wiki; I'm paying for the work and also providing some input and direction. The timeline isn't quite done yet; you can see some pending (unchecked) items in the What the timeline is still missing section. And we may need to expand it further as we find more information or new stuff happens. Further proofreading and fact-checking are also pending.The timeline is focused largely on the collapse in November 2022; Sebastian plans to work on a timeline ofFTX that will go more into the background. For more on what the collapse timeline is focused on, see the inclusion criteria section of the timeline.Thoughts welcome!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
vipulnaik https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:04 None full 3870
jsByfxvNA4x23stLY_EA EA - A Letter to the Bulletin of Atomic Scientists by John G. Halstead Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Letter to the Bulletin of Atomic Scientists, published by John G. Halstead on November 23, 2022 on The Effective Altruism Forum.Tldr: This is a letter I wrote to the Climate Contributing Editor of the Bulletin Atomic Scientists, Dawn Stover, about Emile Torres' latest piece criticising EA. In short:In advance of the publication of the article, Ms Stover reached out to us to check on what Torres calls their most "disturbing" claim viz. that Will MacAskill lied about getting advice from five climate experts.We showed them that this was false.The Bulletin published the claim anyway, and then tweeted it.In my opinion, this is outrageous, so I have asked them to issue a correction and an apology.Dear Ms Stover,I have long admired the work of the Bulletin of the Atomic Scientists. However, I am extremely disappointed by your publication of the latest piece by Emile Torres.I knew long ago that Torres would publish a piece critical of What We Owe the Future, and on me following my report on climate change. However, I am surprised that the Bulletin has chosen to publish this particular piece in its current form. There are many things wrong with the piece, but the most important is that it accuses Will MacAskill and his research assistants of research misconduct. Specifically, Torres contends that five of the climate experts we listed in the acknowledgements for the book were not actually consulted.Ms Stover: you contacted us about this claim in advance of the article’s publication, and we informed you that it was not true. Overall, we consulted around 106 experts in the research process for What We Owe The Future. Torres suggests that five experts were never consulted at all, but this is not true — as Will stated in his earlier email to you, four of those five experts were consulted. I am happy to provide evidence for this. The article would have readers think that we made up the citations out of thin air. One of them was contacted but didn’t have time to give feedback, and was incorrectly credited in the acknowledgements, which we will change in future editions: this was an honest mistake. The Bulletin also went on to tweet the false claim that multiple people hadn’t been consulted at all.The acknowledgements are also clear that we are not claiming that those listed checked and agreed with every claim in the book. Immediately after the acknowledgements of subject-matter experts, Will writes: “These advisers don’t necessarily agree with the claims I make in the book, and all errors in the book are my responsibility alone.”To accuse someone of research misconduct is a very serious allegation. After you check it and find out that it is false, it is extremely poor form to let the claim go out anyway and then to tweet it. The Bulletin should issue a correction to the article, and to the false claim they put out in a tweet.I also have concerns about the nature of Torres’ background work for article — they seemingly sent every person that was acknowledged for the book a misleading email, telling them that we lied in the acknowledgements, and making some reviewers quite uncomfortable.To reiterate, I am very disappointed by the journalistic standards demonstrated in this article. I will be publishing something separately about Torres’ (as usual) misrepresented substantive claims, but the most serious allegation of research misconduct needs to be retracted and we need an apology.(Also, a more minor point: it's not true that I am Head of Applied Research at Founders Pledge. I left that role in 2019.)JohnThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
John G. Halstead https://forum.effectivealtruism.org/posts/jsByfxvNA4x23stLY/a-letter-to-the-bulletin-of-atomic-scientists Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Letter to the Bulletin of Atomic Scientists, published by John G. Halstead on November 23, 2022 on The Effective Altruism Forum.Tldr: This is a letter I wrote to the Climate Contributing Editor of the Bulletin Atomic Scientists, Dawn Stover, about Emile Torres' latest piece criticising EA. In short:In advance of the publication of the article, Ms Stover reached out to us to check on what Torres calls their most "disturbing" claim viz. that Will MacAskill lied about getting advice from five climate experts.We showed them that this was false.The Bulletin published the claim anyway, and then tweeted it.In my opinion, this is outrageous, so I have asked them to issue a correction and an apology.Dear Ms Stover,I have long admired the work of the Bulletin of the Atomic Scientists. However, I am extremely disappointed by your publication of the latest piece by Emile Torres.I knew long ago that Torres would publish a piece critical of What We Owe the Future, and on me following my report on climate change. However, I am surprised that the Bulletin has chosen to publish this particular piece in its current form. There are many things wrong with the piece, but the most important is that it accuses Will MacAskill and his research assistants of research misconduct. Specifically, Torres contends that five of the climate experts we listed in the acknowledgements for the book were not actually consulted.Ms Stover: you contacted us about this claim in advance of the article’s publication, and we informed you that it was not true. Overall, we consulted around 106 experts in the research process for What We Owe The Future. Torres suggests that five experts were never consulted at all, but this is not true — as Will stated in his earlier email to you, four of those five experts were consulted. I am happy to provide evidence for this. The article would have readers think that we made up the citations out of thin air. One of them was contacted but didn’t have time to give feedback, and was incorrectly credited in the acknowledgements, which we will change in future editions: this was an honest mistake. The Bulletin also went on to tweet the false claim that multiple people hadn’t been consulted at all.The acknowledgements are also clear that we are not claiming that those listed checked and agreed with every claim in the book. Immediately after the acknowledgements of subject-matter experts, Will writes: “These advisers don’t necessarily agree with the claims I make in the book, and all errors in the book are my responsibility alone.”To accuse someone of research misconduct is a very serious allegation. After you check it and find out that it is false, it is extremely poor form to let the claim go out anyway and then to tweet it. The Bulletin should issue a correction to the article, and to the false claim they put out in a tweet.I also have concerns about the nature of Torres’ background work for article — they seemingly sent every person that was acknowledged for the book a misleading email, telling them that we lied in the acknowledgements, and making some reviewers quite uncomfortable.To reiterate, I am very disappointed by the journalistic standards demonstrated in this article. I will be publishing something separately about Torres’ (as usual) misrepresented substantive claims, but the most serious allegation of research misconduct needs to be retracted and we need an apology.(Also, a more minor point: it's not true that I am Head of Applied Research at Founders Pledge. I left that role in 2019.)JohnThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 23 Nov 2022 20:22:38 +0000 EA - A Letter to the Bulletin of Atomic Scientists by John G. Halstead Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Letter to the Bulletin of Atomic Scientists, published by John G. Halstead on November 23, 2022 on The Effective Altruism Forum.Tldr: This is a letter I wrote to the Climate Contributing Editor of the Bulletin Atomic Scientists, Dawn Stover, about Emile Torres' latest piece criticising EA. In short:In advance of the publication of the article, Ms Stover reached out to us to check on what Torres calls their most "disturbing" claim viz. that Will MacAskill lied about getting advice from five climate experts.We showed them that this was false.The Bulletin published the claim anyway, and then tweeted it.In my opinion, this is outrageous, so I have asked them to issue a correction and an apology.Dear Ms Stover,I have long admired the work of the Bulletin of the Atomic Scientists. However, I am extremely disappointed by your publication of the latest piece by Emile Torres.I knew long ago that Torres would publish a piece critical of What We Owe the Future, and on me following my report on climate change. However, I am surprised that the Bulletin has chosen to publish this particular piece in its current form. There are many things wrong with the piece, but the most important is that it accuses Will MacAskill and his research assistants of research misconduct. Specifically, Torres contends that five of the climate experts we listed in the acknowledgements for the book were not actually consulted.Ms Stover: you contacted us about this claim in advance of the article’s publication, and we informed you that it was not true. Overall, we consulted around 106 experts in the research process for What We Owe The Future. Torres suggests that five experts were never consulted at all, but this is not true — as Will stated in his earlier email to you, four of those five experts were consulted. I am happy to provide evidence for this. The article would have readers think that we made up the citations out of thin air. One of them was contacted but didn’t have time to give feedback, and was incorrectly credited in the acknowledgements, which we will change in future editions: this was an honest mistake. The Bulletin also went on to tweet the false claim that multiple people hadn’t been consulted at all.The acknowledgements are also clear that we are not claiming that those listed checked and agreed with every claim in the book. Immediately after the acknowledgements of subject-matter experts, Will writes: “These advisers don’t necessarily agree with the claims I make in the book, and all errors in the book are my responsibility alone.”To accuse someone of research misconduct is a very serious allegation. After you check it and find out that it is false, it is extremely poor form to let the claim go out anyway and then to tweet it. The Bulletin should issue a correction to the article, and to the false claim they put out in a tweet.I also have concerns about the nature of Torres’ background work for article — they seemingly sent every person that was acknowledged for the book a misleading email, telling them that we lied in the acknowledgements, and making some reviewers quite uncomfortable.To reiterate, I am very disappointed by the journalistic standards demonstrated in this article. I will be publishing something separately about Torres’ (as usual) misrepresented substantive claims, but the most serious allegation of research misconduct needs to be retracted and we need an apology.(Also, a more minor point: it's not true that I am Head of Applied Research at Founders Pledge. I left that role in 2019.)JohnThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Letter to the Bulletin of Atomic Scientists, published by John G. Halstead on November 23, 2022 on The Effective Altruism Forum.Tldr: This is a letter I wrote to the Climate Contributing Editor of the Bulletin Atomic Scientists, Dawn Stover, about Emile Torres' latest piece criticising EA. In short:In advance of the publication of the article, Ms Stover reached out to us to check on what Torres calls their most "disturbing" claim viz. that Will MacAskill lied about getting advice from five climate experts.We showed them that this was false.The Bulletin published the claim anyway, and then tweeted it.In my opinion, this is outrageous, so I have asked them to issue a correction and an apology.Dear Ms Stover,I have long admired the work of the Bulletin of the Atomic Scientists. However, I am extremely disappointed by your publication of the latest piece by Emile Torres.I knew long ago that Torres would publish a piece critical of What We Owe the Future, and on me following my report on climate change. However, I am surprised that the Bulletin has chosen to publish this particular piece in its current form. There are many things wrong with the piece, but the most important is that it accuses Will MacAskill and his research assistants of research misconduct. Specifically, Torres contends that five of the climate experts we listed in the acknowledgements for the book were not actually consulted.Ms Stover: you contacted us about this claim in advance of the article’s publication, and we informed you that it was not true. Overall, we consulted around 106 experts in the research process for What We Owe The Future. Torres suggests that five experts were never consulted at all, but this is not true — as Will stated in his earlier email to you, four of those five experts were consulted. I am happy to provide evidence for this. The article would have readers think that we made up the citations out of thin air. One of them was contacted but didn’t have time to give feedback, and was incorrectly credited in the acknowledgements, which we will change in future editions: this was an honest mistake. The Bulletin also went on to tweet the false claim that multiple people hadn’t been consulted at all.The acknowledgements are also clear that we are not claiming that those listed checked and agreed with every claim in the book. Immediately after the acknowledgements of subject-matter experts, Will writes: “These advisers don’t necessarily agree with the claims I make in the book, and all errors in the book are my responsibility alone.”To accuse someone of research misconduct is a very serious allegation. After you check it and find out that it is false, it is extremely poor form to let the claim go out anyway and then to tweet it. The Bulletin should issue a correction to the article, and to the false claim they put out in a tweet.I also have concerns about the nature of Torres’ background work for article — they seemingly sent every person that was acknowledged for the book a misleading email, telling them that we lied in the acknowledgements, and making some reviewers quite uncomfortable.To reiterate, I am very disappointed by the journalistic standards demonstrated in this article. I will be publishing something separately about Torres’ (as usual) misrepresented substantive claims, but the most serious allegation of research misconduct needs to be retracted and we need an apology.(Also, a more minor point: it's not true that I am Head of Applied Research at Founders Pledge. I left that role in 2019.)JohnThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
John G. Halstead https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:22 None full 3871
CCXSdmkPvrLQZgyCk_EA EA - Announcing AI safety Mentors and Mentees by mariushobbhahn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing AI safety Mentors and Mentees, published by mariushobbhahn on November 23, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
mariushobbhahn https://forum.effectivealtruism.org/posts/CCXSdmkPvrLQZgyCk/announcing-ai-safety-mentors-and-mentees Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing AI safety Mentors and Mentees, published by mariushobbhahn on November 23, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 23 Nov 2022 17:58:55 +0000 EA - Announcing AI safety Mentors and Mentees by mariushobbhahn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing AI safety Mentors and Mentees, published by mariushobbhahn on November 23, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing AI safety Mentors and Mentees, published by mariushobbhahn on November 23, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
mariushobbhahn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:27 None full 3872
LNbzDCgCH2py3cnJv_EA EA - Where are you donating this year, and why? (Open thread) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where are you donating this year, and why? (Open thread), published by Lizka on November 23, 2022 on The Effective Altruism Forum.This post is intended as an open thread for anyone to share where you donated or plan to donate in 2022, and why.I encourage you to share regardless of how small or large a donation you’re making! And you shouldn’t feel obliged to share the amount that you’re donating.You can share as much or as little detail as you want (anything from 1 sentence simply describing where you’re giving, to multiple pages explaining your decision process and key considerations).And if you have thoughts or feedback on someone else’s donation plans, I’d encourage you to share that in a reply to their “answer”, unless the person indicated they don’t want that. (But remember to be respectful and kind while doing this! See also supportive scepticism.)Why commenting on this post might be useful:You might get useful feedback on your donation planReaders might form better donation plans by learning about donation options you're considering, seeing your reasoning, etc.Commenting or reading might help you/other people become or stay inspired to give (and to give effectively)Related:Effective Giving Day is coming up — November 28 — next week!Talk about donations earlier and morePrevious posts of this kind:Where are you donating in 2021, and why?Where are you donating in 2020 and why?Where are you donating this year and why – in 2019? Open thread for discussion.As a final note: we’re enabling emoji reactions for this thread.Adapted almost entirely from Where are you donating in 2020 and why?, with permission.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://forum.effectivealtruism.org/posts/LNbzDCgCH2py3cnJv/where-are-you-donating-this-year-and-why-open-thread Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where are you donating this year, and why? (Open thread), published by Lizka on November 23, 2022 on The Effective Altruism Forum.This post is intended as an open thread for anyone to share where you donated or plan to donate in 2022, and why.I encourage you to share regardless of how small or large a donation you’re making! And you shouldn’t feel obliged to share the amount that you’re donating.You can share as much or as little detail as you want (anything from 1 sentence simply describing where you’re giving, to multiple pages explaining your decision process and key considerations).And if you have thoughts or feedback on someone else’s donation plans, I’d encourage you to share that in a reply to their “answer”, unless the person indicated they don’t want that. (But remember to be respectful and kind while doing this! See also supportive scepticism.)Why commenting on this post might be useful:You might get useful feedback on your donation planReaders might form better donation plans by learning about donation options you're considering, seeing your reasoning, etc.Commenting or reading might help you/other people become or stay inspired to give (and to give effectively)Related:Effective Giving Day is coming up — November 28 — next week!Talk about donations earlier and morePrevious posts of this kind:Where are you donating in 2021, and why?Where are you donating in 2020 and why?Where are you donating this year and why – in 2019? Open thread for discussion.As a final note: we’re enabling emoji reactions for this thread.Adapted almost entirely from Where are you donating in 2020 and why?, with permission.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 23 Nov 2022 13:08:58 +0000 EA - Where are you donating this year, and why? (Open thread) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where are you donating this year, and why? (Open thread), published by Lizka on November 23, 2022 on The Effective Altruism Forum.This post is intended as an open thread for anyone to share where you donated or plan to donate in 2022, and why.I encourage you to share regardless of how small or large a donation you’re making! And you shouldn’t feel obliged to share the amount that you’re donating.You can share as much or as little detail as you want (anything from 1 sentence simply describing where you’re giving, to multiple pages explaining your decision process and key considerations).And if you have thoughts or feedback on someone else’s donation plans, I’d encourage you to share that in a reply to their “answer”, unless the person indicated they don’t want that. (But remember to be respectful and kind while doing this! See also supportive scepticism.)Why commenting on this post might be useful:You might get useful feedback on your donation planReaders might form better donation plans by learning about donation options you're considering, seeing your reasoning, etc.Commenting or reading might help you/other people become or stay inspired to give (and to give effectively)Related:Effective Giving Day is coming up — November 28 — next week!Talk about donations earlier and morePrevious posts of this kind:Where are you donating in 2021, and why?Where are you donating in 2020 and why?Where are you donating this year and why – in 2019? Open thread for discussion.As a final note: we’re enabling emoji reactions for this thread.Adapted almost entirely from Where are you donating in 2020 and why?, with permission.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where are you donating this year, and why? (Open thread), published by Lizka on November 23, 2022 on The Effective Altruism Forum.This post is intended as an open thread for anyone to share where you donated or plan to donate in 2022, and why.I encourage you to share regardless of how small or large a donation you’re making! And you shouldn’t feel obliged to share the amount that you’re donating.You can share as much or as little detail as you want (anything from 1 sentence simply describing where you’re giving, to multiple pages explaining your decision process and key considerations).And if you have thoughts or feedback on someone else’s donation plans, I’d encourage you to share that in a reply to their “answer”, unless the person indicated they don’t want that. (But remember to be respectful and kind while doing this! See also supportive scepticism.)Why commenting on this post might be useful:You might get useful feedback on your donation planReaders might form better donation plans by learning about donation options you're considering, seeing your reasoning, etc.Commenting or reading might help you/other people become or stay inspired to give (and to give effectively)Related:Effective Giving Day is coming up — November 28 — next week!Talk about donations earlier and morePrevious posts of this kind:Where are you donating in 2021, and why?Where are you donating in 2020 and why?Where are you donating this year and why – in 2019? Open thread for discussion.As a final note: we’re enabling emoji reactions for this thread.Adapted almost entirely from Where are you donating in 2020 and why?, with permission.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:51 None full 3873
LFNibDPRRmPW6jj9q_EA EA - Insurance against FTX clawbacks by Grant Demaree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Insurance against FTX clawbacks, published by Grant Demaree on November 23, 2022 on The Effective Altruism Forum.I've been reviewing funding requests as part of Nonlinear's effort to help FTXF grantees. Many applications sound like this:FTX Future Fund already paid me, so I have plenty of money. I can't spend any of it, since I'm worried about clawbacks. This leaves me practically destitute.Imagine you could buy insurance against these clawbacks:Amy has $20,000 in her bank account, but it's all from an FTX grantBill owns an insurance company. He believes the chance of a successful clawback is much less than 25%, and he charges Amy $5,000 for the insuranceNow Amy is comfortable spending her remaining $15,000. If a clawback happens, Bill pays her $20,000, and she's unharmedSeems like a win-win for Amy and Bill. Is there a way to make it happen?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Grant Demaree https://forum.effectivealtruism.org/posts/LFNibDPRRmPW6jj9q/insurance-against-ftx-clawbacks Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Insurance against FTX clawbacks, published by Grant Demaree on November 23, 2022 on The Effective Altruism Forum.I've been reviewing funding requests as part of Nonlinear's effort to help FTXF grantees. Many applications sound like this:FTX Future Fund already paid me, so I have plenty of money. I can't spend any of it, since I'm worried about clawbacks. This leaves me practically destitute.Imagine you could buy insurance against these clawbacks:Amy has $20,000 in her bank account, but it's all from an FTX grantBill owns an insurance company. He believes the chance of a successful clawback is much less than 25%, and he charges Amy $5,000 for the insuranceNow Amy is comfortable spending her remaining $15,000. If a clawback happens, Bill pays her $20,000, and she's unharmedSeems like a win-win for Amy and Bill. Is there a way to make it happen?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 23 Nov 2022 12:39:25 +0000 EA - Insurance against FTX clawbacks by Grant Demaree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Insurance against FTX clawbacks, published by Grant Demaree on November 23, 2022 on The Effective Altruism Forum.I've been reviewing funding requests as part of Nonlinear's effort to help FTXF grantees. Many applications sound like this:FTX Future Fund already paid me, so I have plenty of money. I can't spend any of it, since I'm worried about clawbacks. This leaves me practically destitute.Imagine you could buy insurance against these clawbacks:Amy has $20,000 in her bank account, but it's all from an FTX grantBill owns an insurance company. He believes the chance of a successful clawback is much less than 25%, and he charges Amy $5,000 for the insuranceNow Amy is comfortable spending her remaining $15,000. If a clawback happens, Bill pays her $20,000, and she's unharmedSeems like a win-win for Amy and Bill. Is there a way to make it happen?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Insurance against FTX clawbacks, published by Grant Demaree on November 23, 2022 on The Effective Altruism Forum.I've been reviewing funding requests as part of Nonlinear's effort to help FTXF grantees. Many applications sound like this:FTX Future Fund already paid me, so I have plenty of money. I can't spend any of it, since I'm worried about clawbacks. This leaves me practically destitute.Imagine you could buy insurance against these clawbacks:Amy has $20,000 in her bank account, but it's all from an FTX grantBill owns an insurance company. He believes the chance of a successful clawback is much less than 25%, and he charges Amy $5,000 for the insuranceNow Amy is comfortable spending her remaining $15,000. If a clawback happens, Bill pays her $20,000, and she's unharmedSeems like a win-win for Amy and Bill. Is there a way to make it happen?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Grant Demaree https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:12 None full 3874
wDGcTPTyADHAjomNC_EA EA - Announcing AI Alignment Awards: $100k research contests about goal misgeneralization and corrigibility by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility, published by Akash on November 22, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/wDGcTPTyADHAjomNC/announcing-ai-alignment-awards-usd100k-research-contests Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility, published by Akash on November 22, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 23 Nov 2022 00:54:03 +0000 EA - Announcing AI Alignment Awards: $100k research contests about goal misgeneralization and corrigibility by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility, published by Akash on November 22, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility, published by Akash on November 22, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:34 None full 3862
mGbptXsRFrFLfR45J_EA EA - A Thanksgiving gratitude exercise for EAs by Geoffrey Miller Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Thanksgiving gratitude exercise for EAs, published by Geoffrey Miller on November 22, 2022 on The Effective Altruism Forum.We've all had a rough time lately with the recent FTX crisis, and many of us have probably been feeling anxious, depressed, angry, betrayed, confused, etc., and worried about our funding, our careers, our future, and the future of EA.The American Thanksgiving holiday is coming up in a couple of days, which is traditionally a time for reconnecting with family and friends, and focusing on the many things to be thankful for in life. Also, there's considerable research from positive psychology that 'gratitude exercises' can promote happiness and health.So, I thought it might be helpful for EAs to do a little gratitude exercise here on EA Forum, focused on ways that -- despite the recent crisis -- you're still thankful for EA ideas, insights, ideals, organizations, colleagues, and friendships.In the comments below, please share one or two brief ways that you're grateful for being involved in the EA movement. I hope we can take a step back from the recent news, reconnect with why we care about EA, and find some positivity this week.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Geoffrey Miller https://forum.effectivealtruism.org/posts/mGbptXsRFrFLfR45J/a-thanksgiving-gratitude-exercise-for-eas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Thanksgiving gratitude exercise for EAs, published by Geoffrey Miller on November 22, 2022 on The Effective Altruism Forum.We've all had a rough time lately with the recent FTX crisis, and many of us have probably been feeling anxious, depressed, angry, betrayed, confused, etc., and worried about our funding, our careers, our future, and the future of EA.The American Thanksgiving holiday is coming up in a couple of days, which is traditionally a time for reconnecting with family and friends, and focusing on the many things to be thankful for in life. Also, there's considerable research from positive psychology that 'gratitude exercises' can promote happiness and health.So, I thought it might be helpful for EAs to do a little gratitude exercise here on EA Forum, focused on ways that -- despite the recent crisis -- you're still thankful for EA ideas, insights, ideals, organizations, colleagues, and friendships.In the comments below, please share one or two brief ways that you're grateful for being involved in the EA movement. I hope we can take a step back from the recent news, reconnect with why we care about EA, and find some positivity this week.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 22 Nov 2022 22:51:46 +0000 EA - A Thanksgiving gratitude exercise for EAs by Geoffrey Miller Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Thanksgiving gratitude exercise for EAs, published by Geoffrey Miller on November 22, 2022 on The Effective Altruism Forum.We've all had a rough time lately with the recent FTX crisis, and many of us have probably been feeling anxious, depressed, angry, betrayed, confused, etc., and worried about our funding, our careers, our future, and the future of EA.The American Thanksgiving holiday is coming up in a couple of days, which is traditionally a time for reconnecting with family and friends, and focusing on the many things to be thankful for in life. Also, there's considerable research from positive psychology that 'gratitude exercises' can promote happiness and health.So, I thought it might be helpful for EAs to do a little gratitude exercise here on EA Forum, focused on ways that -- despite the recent crisis -- you're still thankful for EA ideas, insights, ideals, organizations, colleagues, and friendships.In the comments below, please share one or two brief ways that you're grateful for being involved in the EA movement. I hope we can take a step back from the recent news, reconnect with why we care about EA, and find some positivity this week.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Thanksgiving gratitude exercise for EAs, published by Geoffrey Miller on November 22, 2022 on The Effective Altruism Forum.We've all had a rough time lately with the recent FTX crisis, and many of us have probably been feeling anxious, depressed, angry, betrayed, confused, etc., and worried about our funding, our careers, our future, and the future of EA.The American Thanksgiving holiday is coming up in a couple of days, which is traditionally a time for reconnecting with family and friends, and focusing on the many things to be thankful for in life. Also, there's considerable research from positive psychology that 'gratitude exercises' can promote happiness and health.So, I thought it might be helpful for EAs to do a little gratitude exercise here on EA Forum, focused on ways that -- despite the recent crisis -- you're still thankful for EA ideas, insights, ideals, organizations, colleagues, and friendships.In the comments below, please share one or two brief ways that you're grateful for being involved in the EA movement. I hope we can take a step back from the recent news, reconnect with why we care about EA, and find some positivity this week.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Geoffrey Miller https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:24 None full 3853
TaBJqDMJtEH68Hhfw_EA EA - EA should blurt by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA should blurt, published by RobBensinger on November 22, 2022 on The Effective Altruism Forum.A lot of EAs are reporting that some things seem like early signs of character or judgment flaws in SBF — an argument that seems wrong, an action that seems unjustified, etc. — now that they can reexamine those data points with the benefit of hindsight.But the mental motions involved in "revisit the past and do a mental search for warning signs confirming that a Bad Person is bad" are pretty different from the mental motions involved in noticing and responding to problems before the person seems Bad at all."Noticing red flags" often isn't what it feels like from the inside to properly notice, respond to, and propagate warning signs that someone you respect is fucking up in a surprising way.Things usually feel like "red flags" after you're suspicious, rather than before.You're hopefully learning some real-world patterns via this "reinterpret old data points in a new light" process. But you aren't necessarily training the relevant skills and habits by doing this.From my perspective, the whole idea that the relevant skillset is specifically about spotting Bad Actors is itself sort of confused. Like, EAs might indeed have too low a prior on bad actors existing, but also, the idea that the world is sharply divided into Fully Good Actors and Fully Bad Actors is part of what protected SBF in the first place!It kept us from doing mundane epistemic accounting before he seemed Bad. If you're discouraged from just raising a minor local Criticism or Objection for its own sake — if you need some larger thesis or agenda or axe to grind, before it's OK to say "hey wait, I don't get X" — then it will be a lot harder to update incrementally and spot problems early.(And, incidentally, a lot harder to trust your information sources! EA will inevitably make slower intellectual progress insofar as we don't trust each other to just say what's on our mind like an ordinary group of acquaintances working on a project together, and instead have to try to correct for various agendas or strategies we think the other party might be implementing.)(Even if nobody's lying, we have to worry about filtered evidence, where people are willing to say X if they believe X but unwilling to say not-X if they believe not-X.)Suppose that I say "the mental motions needed to spot SBF's issues early are mostly the same as the mental motions needed to notice when Eliezer's saying something that doesn't seem to make sense, casually updating at least a little against Eliezer's judgment in this domain, and naively blurting out 'wait, that doesn't currently make sense to me, what about objection X?'"(Or if you don't have much respect for Eliezer, pick someone you do have respect for — Holden Karnofsky, or Paul Graham, or Peter Singer, or whoever.)I imagine some people's reaction to that being: "But wait! Are you saying that Eliezer/Holden/whoever is a bad actor?? That seems totally wrong, what about evidence A B C X Y Z..."Which seems to me to be missing the point:1. The processes required to catch bad actors reliably, are often (though not always) similar to the processes required to correct innocent errors by good actors.You do need to also have "bad actor" in your hypothesis space, or you'll be fooled forever even as you keep noting weird data points. (More concretely, since "bad actor" is vague verbiage: you need to have probability mass on people being liars, promise-breakers, Machiavellian manipulators, etc.)But in practice, I think most of the problem lies in people not noticing or sharing the data points in the first place. Certainly in SBF's case, I (and I think most EAs) had never even heard any of the red flags about SBF, as opposed to us hearing a ton of flags and trying to explain them away.So...]]>
RobBensinger https://forum.effectivealtruism.org/posts/TaBJqDMJtEH68Hhfw/ea-should-blurt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA should blurt, published by RobBensinger on November 22, 2022 on The Effective Altruism Forum.A lot of EAs are reporting that some things seem like early signs of character or judgment flaws in SBF — an argument that seems wrong, an action that seems unjustified, etc. — now that they can reexamine those data points with the benefit of hindsight.But the mental motions involved in "revisit the past and do a mental search for warning signs confirming that a Bad Person is bad" are pretty different from the mental motions involved in noticing and responding to problems before the person seems Bad at all."Noticing red flags" often isn't what it feels like from the inside to properly notice, respond to, and propagate warning signs that someone you respect is fucking up in a surprising way.Things usually feel like "red flags" after you're suspicious, rather than before.You're hopefully learning some real-world patterns via this "reinterpret old data points in a new light" process. But you aren't necessarily training the relevant skills and habits by doing this.From my perspective, the whole idea that the relevant skillset is specifically about spotting Bad Actors is itself sort of confused. Like, EAs might indeed have too low a prior on bad actors existing, but also, the idea that the world is sharply divided into Fully Good Actors and Fully Bad Actors is part of what protected SBF in the first place!It kept us from doing mundane epistemic accounting before he seemed Bad. If you're discouraged from just raising a minor local Criticism or Objection for its own sake — if you need some larger thesis or agenda or axe to grind, before it's OK to say "hey wait, I don't get X" — then it will be a lot harder to update incrementally and spot problems early.(And, incidentally, a lot harder to trust your information sources! EA will inevitably make slower intellectual progress insofar as we don't trust each other to just say what's on our mind like an ordinary group of acquaintances working on a project together, and instead have to try to correct for various agendas or strategies we think the other party might be implementing.)(Even if nobody's lying, we have to worry about filtered evidence, where people are willing to say X if they believe X but unwilling to say not-X if they believe not-X.)Suppose that I say "the mental motions needed to spot SBF's issues early are mostly the same as the mental motions needed to notice when Eliezer's saying something that doesn't seem to make sense, casually updating at least a little against Eliezer's judgment in this domain, and naively blurting out 'wait, that doesn't currently make sense to me, what about objection X?'"(Or if you don't have much respect for Eliezer, pick someone you do have respect for — Holden Karnofsky, or Paul Graham, or Peter Singer, or whoever.)I imagine some people's reaction to that being: "But wait! Are you saying that Eliezer/Holden/whoever is a bad actor?? That seems totally wrong, what about evidence A B C X Y Z..."Which seems to me to be missing the point:1. The processes required to catch bad actors reliably, are often (though not always) similar to the processes required to correct innocent errors by good actors.You do need to also have "bad actor" in your hypothesis space, or you'll be fooled forever even as you keep noting weird data points. (More concretely, since "bad actor" is vague verbiage: you need to have probability mass on people being liars, promise-breakers, Machiavellian manipulators, etc.)But in practice, I think most of the problem lies in people not noticing or sharing the data points in the first place. Certainly in SBF's case, I (and I think most EAs) had never even heard any of the red flags about SBF, as opposed to us hearing a ton of flags and trying to explain them away.So...]]>
Tue, 22 Nov 2022 22:33:03 +0000 EA - EA should blurt by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA should blurt, published by RobBensinger on November 22, 2022 on The Effective Altruism Forum.A lot of EAs are reporting that some things seem like early signs of character or judgment flaws in SBF — an argument that seems wrong, an action that seems unjustified, etc. — now that they can reexamine those data points with the benefit of hindsight.But the mental motions involved in "revisit the past and do a mental search for warning signs confirming that a Bad Person is bad" are pretty different from the mental motions involved in noticing and responding to problems before the person seems Bad at all."Noticing red flags" often isn't what it feels like from the inside to properly notice, respond to, and propagate warning signs that someone you respect is fucking up in a surprising way.Things usually feel like "red flags" after you're suspicious, rather than before.You're hopefully learning some real-world patterns via this "reinterpret old data points in a new light" process. But you aren't necessarily training the relevant skills and habits by doing this.From my perspective, the whole idea that the relevant skillset is specifically about spotting Bad Actors is itself sort of confused. Like, EAs might indeed have too low a prior on bad actors existing, but also, the idea that the world is sharply divided into Fully Good Actors and Fully Bad Actors is part of what protected SBF in the first place!It kept us from doing mundane epistemic accounting before he seemed Bad. If you're discouraged from just raising a minor local Criticism or Objection for its own sake — if you need some larger thesis or agenda or axe to grind, before it's OK to say "hey wait, I don't get X" — then it will be a lot harder to update incrementally and spot problems early.(And, incidentally, a lot harder to trust your information sources! EA will inevitably make slower intellectual progress insofar as we don't trust each other to just say what's on our mind like an ordinary group of acquaintances working on a project together, and instead have to try to correct for various agendas or strategies we think the other party might be implementing.)(Even if nobody's lying, we have to worry about filtered evidence, where people are willing to say X if they believe X but unwilling to say not-X if they believe not-X.)Suppose that I say "the mental motions needed to spot SBF's issues early are mostly the same as the mental motions needed to notice when Eliezer's saying something that doesn't seem to make sense, casually updating at least a little against Eliezer's judgment in this domain, and naively blurting out 'wait, that doesn't currently make sense to me, what about objection X?'"(Or if you don't have much respect for Eliezer, pick someone you do have respect for — Holden Karnofsky, or Paul Graham, or Peter Singer, or whoever.)I imagine some people's reaction to that being: "But wait! Are you saying that Eliezer/Holden/whoever is a bad actor?? That seems totally wrong, what about evidence A B C X Y Z..."Which seems to me to be missing the point:1. The processes required to catch bad actors reliably, are often (though not always) similar to the processes required to correct innocent errors by good actors.You do need to also have "bad actor" in your hypothesis space, or you'll be fooled forever even as you keep noting weird data points. (More concretely, since "bad actor" is vague verbiage: you need to have probability mass on people being liars, promise-breakers, Machiavellian manipulators, etc.)But in practice, I think most of the problem lies in people not noticing or sharing the data points in the first place. Certainly in SBF's case, I (and I think most EAs) had never even heard any of the red flags about SBF, as opposed to us hearing a ton of flags and trying to explain them away.So...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA should blurt, published by RobBensinger on November 22, 2022 on The Effective Altruism Forum.A lot of EAs are reporting that some things seem like early signs of character or judgment flaws in SBF — an argument that seems wrong, an action that seems unjustified, etc. — now that they can reexamine those data points with the benefit of hindsight.But the mental motions involved in "revisit the past and do a mental search for warning signs confirming that a Bad Person is bad" are pretty different from the mental motions involved in noticing and responding to problems before the person seems Bad at all."Noticing red flags" often isn't what it feels like from the inside to properly notice, respond to, and propagate warning signs that someone you respect is fucking up in a surprising way.Things usually feel like "red flags" after you're suspicious, rather than before.You're hopefully learning some real-world patterns via this "reinterpret old data points in a new light" process. But you aren't necessarily training the relevant skills and habits by doing this.From my perspective, the whole idea that the relevant skillset is specifically about spotting Bad Actors is itself sort of confused. Like, EAs might indeed have too low a prior on bad actors existing, but also, the idea that the world is sharply divided into Fully Good Actors and Fully Bad Actors is part of what protected SBF in the first place!It kept us from doing mundane epistemic accounting before he seemed Bad. If you're discouraged from just raising a minor local Criticism or Objection for its own sake — if you need some larger thesis or agenda or axe to grind, before it's OK to say "hey wait, I don't get X" — then it will be a lot harder to update incrementally and spot problems early.(And, incidentally, a lot harder to trust your information sources! EA will inevitably make slower intellectual progress insofar as we don't trust each other to just say what's on our mind like an ordinary group of acquaintances working on a project together, and instead have to try to correct for various agendas or strategies we think the other party might be implementing.)(Even if nobody's lying, we have to worry about filtered evidence, where people are willing to say X if they believe X but unwilling to say not-X if they believe not-X.)Suppose that I say "the mental motions needed to spot SBF's issues early are mostly the same as the mental motions needed to notice when Eliezer's saying something that doesn't seem to make sense, casually updating at least a little against Eliezer's judgment in this domain, and naively blurting out 'wait, that doesn't currently make sense to me, what about objection X?'"(Or if you don't have much respect for Eliezer, pick someone you do have respect for — Holden Karnofsky, or Paul Graham, or Peter Singer, or whoever.)I imagine some people's reaction to that being: "But wait! Are you saying that Eliezer/Holden/whoever is a bad actor?? That seems totally wrong, what about evidence A B C X Y Z..."Which seems to me to be missing the point:1. The processes required to catch bad actors reliably, are often (though not always) similar to the processes required to correct innocent errors by good actors.You do need to also have "bad actor" in your hypothesis space, or you'll be fooled forever even as you keep noting weird data points. (More concretely, since "bad actor" is vague verbiage: you need to have probability mass on people being liars, promise-breakers, Machiavellian manipulators, etc.)But in practice, I think most of the problem lies in people not noticing or sharing the data points in the first place. Certainly in SBF's case, I (and I think most EAs) had never even heard any of the red flags about SBF, as opposed to us hearing a ton of flags and trying to explain them away.So...]]>
RobBensinger https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:22 None full 3852
cGn5HrDKdCkaf3REy_EA EA - Announcing our 2022 charity recommendations by Animal Charity Evaluators Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing our 2022 charity recommendations, published by Animal Charity Evaluators on November 22, 2022 on The Effective Altruism Forum.Every year, Animal Charity Evaluators (ACE) spends several months evaluating animal advocacy organizations to identify those that work effectively and are able to do the most good with additional donations. Our goal is to help people help animals by providing donors with impactful giving opportunities that reduce suffering to the greatest extent possible. This year, we are excited to announce that we have selected one Top Charity and four Standout Charities.In 2022, we conducted comprehensive evaluations of 12 animal advocacy organizations that are doing promising work. Per our evaluation criteria, the five charities we recommended this year have the most impactful programs, are highly cost-effective, and have the most room for additional funding, making them exceptional choices for end-of-year giving.Because we changed the re-evaluation frequency of Top Charities from one to two years, The Humane League, Wild Animal Initiative, and Faunalytics have all retained their Top Charity status from 2021. The Good Food Institute now joins their ranks!We are also pleased to recommend Fish Welfare Initiative, Dansk Vegetarisk Forening, and Çiftlik Hayvanlarını Koruma Derneği as new Standout Charities. Additionally, Sinergia Animal retained their status as a Standout Charity after being re-evaluated this year. These charities join the seven other Standout Charities that retain their status from last year: Compassion USA, Dharma Voices for Animals, Federation of Indian Animal Protection Organisations, Material Innovation Initiative, Mercy For Animals, New Harvest, and xiaobuVEGAN.Below, you will find a brief overview of each of our Top and Standout charities. For more details, please check out our comprehensive charity reviews.Top CharitiesEvaluated in 2022The Good Food Institute (GFI) currently operates in the U.S., Brazil, India, Asia-Pacific, Europe, and Israel, where they work to increase the availability of animal-free products through supporting the development and marketing of plant-based and cell-cultured alternatives to animal products. They achieve this through corporate engagement, institutional outreach, and policy work. They also work to strengthen the capacity of the animal advocacy movement through supporting research and start-ups focused on alternative proteins. GFI was one of our Top Charities from November 2016 to November 2021. To learn more, read our 2022 comprehensive review of the Good Food Institute.Evaluated in 2021Faunalytics is a U.S.-based organization working to connect animal advocates with information relevant to advocacy. This mostly involves cоnducting and publishing independent research, working directly with partner organizations on various research projects, and promoting existing research and data for individual advocates through their website’s content library. Faunalytics was one of our Standout Charities from December 2015 to November 2021. To learn more, read our 2021 comprehensive review of Faunalytics.The Humane League (THL) operates in the U.S., Mexico, the U.K., and Japan, where they work to improve animal welfare standards through grassroots campaigns, movement building, vegan advocacy, research, and advocacy training, as well as through corporate, media, and community outreach. They work to build the animal advocacy movement internationally through the Open Wing Alliance (OWA), a coalition founded by THL whose mission is to end the use of battery cages globally. THL has been one of ACE’s Top Charities since August 2012, when we used a different evaluation process and did not publish reviews. In 2014, THL was awarded Top Charity status in our first official round of ACE charity evaluation...]]>
Animal Charity Evaluators https://forum.effectivealtruism.org/posts/cGn5HrDKdCkaf3REy/announcing-our-2022-charity-recommendations Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing our 2022 charity recommendations, published by Animal Charity Evaluators on November 22, 2022 on The Effective Altruism Forum.Every year, Animal Charity Evaluators (ACE) spends several months evaluating animal advocacy organizations to identify those that work effectively and are able to do the most good with additional donations. Our goal is to help people help animals by providing donors with impactful giving opportunities that reduce suffering to the greatest extent possible. This year, we are excited to announce that we have selected one Top Charity and four Standout Charities.In 2022, we conducted comprehensive evaluations of 12 animal advocacy organizations that are doing promising work. Per our evaluation criteria, the five charities we recommended this year have the most impactful programs, are highly cost-effective, and have the most room for additional funding, making them exceptional choices for end-of-year giving.Because we changed the re-evaluation frequency of Top Charities from one to two years, The Humane League, Wild Animal Initiative, and Faunalytics have all retained their Top Charity status from 2021. The Good Food Institute now joins their ranks!We are also pleased to recommend Fish Welfare Initiative, Dansk Vegetarisk Forening, and Çiftlik Hayvanlarını Koruma Derneği as new Standout Charities. Additionally, Sinergia Animal retained their status as a Standout Charity after being re-evaluated this year. These charities join the seven other Standout Charities that retain their status from last year: Compassion USA, Dharma Voices for Animals, Federation of Indian Animal Protection Organisations, Material Innovation Initiative, Mercy For Animals, New Harvest, and xiaobuVEGAN.Below, you will find a brief overview of each of our Top and Standout charities. For more details, please check out our comprehensive charity reviews.Top CharitiesEvaluated in 2022The Good Food Institute (GFI) currently operates in the U.S., Brazil, India, Asia-Pacific, Europe, and Israel, where they work to increase the availability of animal-free products through supporting the development and marketing of plant-based and cell-cultured alternatives to animal products. They achieve this through corporate engagement, institutional outreach, and policy work. They also work to strengthen the capacity of the animal advocacy movement through supporting research and start-ups focused on alternative proteins. GFI was one of our Top Charities from November 2016 to November 2021. To learn more, read our 2022 comprehensive review of the Good Food Institute.Evaluated in 2021Faunalytics is a U.S.-based organization working to connect animal advocates with information relevant to advocacy. This mostly involves cоnducting and publishing independent research, working directly with partner organizations on various research projects, and promoting existing research and data for individual advocates through their website’s content library. Faunalytics was one of our Standout Charities from December 2015 to November 2021. To learn more, read our 2021 comprehensive review of Faunalytics.The Humane League (THL) operates in the U.S., Mexico, the U.K., and Japan, where they work to improve animal welfare standards through grassroots campaigns, movement building, vegan advocacy, research, and advocacy training, as well as through corporate, media, and community outreach. They work to build the animal advocacy movement internationally through the Open Wing Alliance (OWA), a coalition founded by THL whose mission is to end the use of battery cages globally. THL has been one of ACE’s Top Charities since August 2012, when we used a different evaluation process and did not publish reviews. In 2014, THL was awarded Top Charity status in our first official round of ACE charity evaluation...]]>
Tue, 22 Nov 2022 19:35:00 +0000 EA - Announcing our 2022 charity recommendations by Animal Charity Evaluators Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing our 2022 charity recommendations, published by Animal Charity Evaluators on November 22, 2022 on The Effective Altruism Forum.Every year, Animal Charity Evaluators (ACE) spends several months evaluating animal advocacy organizations to identify those that work effectively and are able to do the most good with additional donations. Our goal is to help people help animals by providing donors with impactful giving opportunities that reduce suffering to the greatest extent possible. This year, we are excited to announce that we have selected one Top Charity and four Standout Charities.In 2022, we conducted comprehensive evaluations of 12 animal advocacy organizations that are doing promising work. Per our evaluation criteria, the five charities we recommended this year have the most impactful programs, are highly cost-effective, and have the most room for additional funding, making them exceptional choices for end-of-year giving.Because we changed the re-evaluation frequency of Top Charities from one to two years, The Humane League, Wild Animal Initiative, and Faunalytics have all retained their Top Charity status from 2021. The Good Food Institute now joins their ranks!We are also pleased to recommend Fish Welfare Initiative, Dansk Vegetarisk Forening, and Çiftlik Hayvanlarını Koruma Derneği as new Standout Charities. Additionally, Sinergia Animal retained their status as a Standout Charity after being re-evaluated this year. These charities join the seven other Standout Charities that retain their status from last year: Compassion USA, Dharma Voices for Animals, Federation of Indian Animal Protection Organisations, Material Innovation Initiative, Mercy For Animals, New Harvest, and xiaobuVEGAN.Below, you will find a brief overview of each of our Top and Standout charities. For more details, please check out our comprehensive charity reviews.Top CharitiesEvaluated in 2022The Good Food Institute (GFI) currently operates in the U.S., Brazil, India, Asia-Pacific, Europe, and Israel, where they work to increase the availability of animal-free products through supporting the development and marketing of plant-based and cell-cultured alternatives to animal products. They achieve this through corporate engagement, institutional outreach, and policy work. They also work to strengthen the capacity of the animal advocacy movement through supporting research and start-ups focused on alternative proteins. GFI was one of our Top Charities from November 2016 to November 2021. To learn more, read our 2022 comprehensive review of the Good Food Institute.Evaluated in 2021Faunalytics is a U.S.-based organization working to connect animal advocates with information relevant to advocacy. This mostly involves cоnducting and publishing independent research, working directly with partner organizations on various research projects, and promoting existing research and data for individual advocates through their website’s content library. Faunalytics was one of our Standout Charities from December 2015 to November 2021. To learn more, read our 2021 comprehensive review of Faunalytics.The Humane League (THL) operates in the U.S., Mexico, the U.K., and Japan, where they work to improve animal welfare standards through grassroots campaigns, movement building, vegan advocacy, research, and advocacy training, as well as through corporate, media, and community outreach. They work to build the animal advocacy movement internationally through the Open Wing Alliance (OWA), a coalition founded by THL whose mission is to end the use of battery cages globally. THL has been one of ACE’s Top Charities since August 2012, when we used a different evaluation process and did not publish reviews. In 2014, THL was awarded Top Charity status in our first official round of ACE charity evaluation...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing our 2022 charity recommendations, published by Animal Charity Evaluators on November 22, 2022 on The Effective Altruism Forum.Every year, Animal Charity Evaluators (ACE) spends several months evaluating animal advocacy organizations to identify those that work effectively and are able to do the most good with additional donations. Our goal is to help people help animals by providing donors with impactful giving opportunities that reduce suffering to the greatest extent possible. This year, we are excited to announce that we have selected one Top Charity and four Standout Charities.In 2022, we conducted comprehensive evaluations of 12 animal advocacy organizations that are doing promising work. Per our evaluation criteria, the five charities we recommended this year have the most impactful programs, are highly cost-effective, and have the most room for additional funding, making them exceptional choices for end-of-year giving.Because we changed the re-evaluation frequency of Top Charities from one to two years, The Humane League, Wild Animal Initiative, and Faunalytics have all retained their Top Charity status from 2021. The Good Food Institute now joins their ranks!We are also pleased to recommend Fish Welfare Initiative, Dansk Vegetarisk Forening, and Çiftlik Hayvanlarını Koruma Derneği as new Standout Charities. Additionally, Sinergia Animal retained their status as a Standout Charity after being re-evaluated this year. These charities join the seven other Standout Charities that retain their status from last year: Compassion USA, Dharma Voices for Animals, Federation of Indian Animal Protection Organisations, Material Innovation Initiative, Mercy For Animals, New Harvest, and xiaobuVEGAN.Below, you will find a brief overview of each of our Top and Standout charities. For more details, please check out our comprehensive charity reviews.Top CharitiesEvaluated in 2022The Good Food Institute (GFI) currently operates in the U.S., Brazil, India, Asia-Pacific, Europe, and Israel, where they work to increase the availability of animal-free products through supporting the development and marketing of plant-based and cell-cultured alternatives to animal products. They achieve this through corporate engagement, institutional outreach, and policy work. They also work to strengthen the capacity of the animal advocacy movement through supporting research and start-ups focused on alternative proteins. GFI was one of our Top Charities from November 2016 to November 2021. To learn more, read our 2022 comprehensive review of the Good Food Institute.Evaluated in 2021Faunalytics is a U.S.-based organization working to connect animal advocates with information relevant to advocacy. This mostly involves cоnducting and publishing independent research, working directly with partner organizations on various research projects, and promoting existing research and data for individual advocates through their website’s content library. Faunalytics was one of our Standout Charities from December 2015 to November 2021. To learn more, read our 2021 comprehensive review of Faunalytics.The Humane League (THL) operates in the U.S., Mexico, the U.K., and Japan, where they work to improve animal welfare standards through grassroots campaigns, movement building, vegan advocacy, research, and advocacy training, as well as through corporate, media, and community outreach. They work to build the animal advocacy movement internationally through the Open Wing Alliance (OWA), a coalition founded by THL whose mission is to end the use of battery cages globally. THL has been one of ACE’s Top Charities since August 2012, when we used a different evaluation process and did not publish reviews. In 2014, THL was awarded Top Charity status in our first official round of ACE charity evaluation...]]>
Animal Charity Evaluators https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:13 None full 3854
ofuGvRMjLx6gMLinF_EA EA - Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue) by Jacy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue), published by Jacy on November 22, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jacy https://forum.effectivealtruism.org/posts/ofuGvRMjLx6gMLinF/meta-ai-announces-cicero-human-level-diplomacy-play-with Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue), published by Jacy on November 22, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 22 Nov 2022 18:05:26 +0000 EA - Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue) by Jacy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue), published by Jacy on November 22, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue), published by Jacy on November 22, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jacy https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:31 None full 3855
yPDXXxdeK9cgCfLwj_EA EA - Short Research Summary: Can insects feel pain? A review of the neural and behavioural evidence by Gibbons et al. 2022 by Meghan Barrett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Short Research Summary: Can insects feel pain? A review of the neural and behavioural evidence by Gibbons et al. 2022, published by Meghan Barrett on November 22, 2022 on The Effective Altruism Forum.This short research summary briefly highlights the major results of a new publication on the scientific evidence for insect pain in Advances in Insect Physiology by Gibbons et al. (2022). This EA Forum post was prepared by Meghan Barrett, Lars Chittka, Andrew Crump, Matilda Gibbons, and Sajedeh Sarlak.The 75-page publication summarizes over 350 scientific studies to assess the scientific evidence for pain across six orders of insects at, minimally, two developmental time points (juvenile, adult). In addition, the paper discusses the use and management of insects in farmed, wild, and research contexts. The publication in its entirety can be reviewed here. The original publication was authored by Matilda Gibbons, Andrew Crump, Meghan Barrett, Sajedeh Sarlak, Jonathan Birch, and Lars Chittka.Major TakeawayWe find strong evidence for pain in adult insects of two orders (Blattodea: cockroaches and termites; Diptera: flies and mosquitoes). We find substantial evidence for pain in adult insects of three additional orders, as well as some juveniles. For several criteria, evidence was distributed across the insect phylogeny, providing some reason to believe that certain kinds of evidence for pain will be found in other taxa. Trillions of insects are directly impacted by humans each year (farmed, managed, killed, etc.). Significant welfare concerns have been identified as the result of human activities. Insect welfare is both completely unregulated and infrequently researched.Given the evidence reviewed in Gibbons et al. (2022), insect welfare is both important and highly neglected.Research SummaryThe Birch et al. (2021) framework, which the UK government has applied to assess evidence for animal pain, uses eight neural and behavioral criteria to assess the likelihood for sentience in invertebrates: 1) nociception; 2) sensory integration; 3) integrated nociception; 4) analgesia; 5) motivational trade-offs; 6) flexible self-protection; 7) associative learning; and 8) analgesia preference.Definitions of these criteria can be found on pages 4 & 5 of the publication's main text.Gibbons et al. (2022) applies the framework to six orders of insects at, minimally, two developmental time points per order (juvenile, adult).Insect orders assessed: Blattodea (cockroaches, termites), Coleoptera (beetles), Diptera (flies, mosquitoes), Hymenoptera (bees, ants, wasps, sawflies), Lepidoptera (butterflies, moths), Orthoptera (crickets, katydids, grasshoppers).Adult Blattodea and Diptera meet 6/8 criteria to a high or very high level of confidence, constituting strong evidence for pain (see Table 1, below). This is stronger evidence for pain than Birch et al. (2021) found for decapod crustaceans (5/8), which are currently protected via the UK Animal Welfare (Sentience) Act 2022.Adults of the remaining orders (except Coleoptera) and some juveniles (Blattodea, Diptera, and last juvenile stage Lepidoptera) satisfy 3 or 4 criteria, constituting substantial evidence for pain (see Tables 1 + 2).We found no good evidence that any insect failed a criterion.For several criteria, evidence was distributed across the insect phylogeny (Figure 1), including across the major split between the hemimetabolous (incomplete metamorphosis) and holometabolous (complete metamorphosis) insects. This provides some reason to believe that certain kinds of evidence for pain (e.g., integrated nociception in adults) will be found in other taxa.Our review demonstrates that there are many areas of insect pain research that have been completely unexplored. Research gaps are particularly substantial for juveniles, hig...]]>
Meghan Barrett https://forum.effectivealtruism.org/posts/yPDXXxdeK9cgCfLwj/short-research-summary-can-insects-feel-pain-a-review-of-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Short Research Summary: Can insects feel pain? A review of the neural and behavioural evidence by Gibbons et al. 2022, published by Meghan Barrett on November 22, 2022 on The Effective Altruism Forum.This short research summary briefly highlights the major results of a new publication on the scientific evidence for insect pain in Advances in Insect Physiology by Gibbons et al. (2022). This EA Forum post was prepared by Meghan Barrett, Lars Chittka, Andrew Crump, Matilda Gibbons, and Sajedeh Sarlak.The 75-page publication summarizes over 350 scientific studies to assess the scientific evidence for pain across six orders of insects at, minimally, two developmental time points (juvenile, adult). In addition, the paper discusses the use and management of insects in farmed, wild, and research contexts. The publication in its entirety can be reviewed here. The original publication was authored by Matilda Gibbons, Andrew Crump, Meghan Barrett, Sajedeh Sarlak, Jonathan Birch, and Lars Chittka.Major TakeawayWe find strong evidence for pain in adult insects of two orders (Blattodea: cockroaches and termites; Diptera: flies and mosquitoes). We find substantial evidence for pain in adult insects of three additional orders, as well as some juveniles. For several criteria, evidence was distributed across the insect phylogeny, providing some reason to believe that certain kinds of evidence for pain will be found in other taxa. Trillions of insects are directly impacted by humans each year (farmed, managed, killed, etc.). Significant welfare concerns have been identified as the result of human activities. Insect welfare is both completely unregulated and infrequently researched.Given the evidence reviewed in Gibbons et al. (2022), insect welfare is both important and highly neglected.Research SummaryThe Birch et al. (2021) framework, which the UK government has applied to assess evidence for animal pain, uses eight neural and behavioral criteria to assess the likelihood for sentience in invertebrates: 1) nociception; 2) sensory integration; 3) integrated nociception; 4) analgesia; 5) motivational trade-offs; 6) flexible self-protection; 7) associative learning; and 8) analgesia preference.Definitions of these criteria can be found on pages 4 & 5 of the publication's main text.Gibbons et al. (2022) applies the framework to six orders of insects at, minimally, two developmental time points per order (juvenile, adult).Insect orders assessed: Blattodea (cockroaches, termites), Coleoptera (beetles), Diptera (flies, mosquitoes), Hymenoptera (bees, ants, wasps, sawflies), Lepidoptera (butterflies, moths), Orthoptera (crickets, katydids, grasshoppers).Adult Blattodea and Diptera meet 6/8 criteria to a high or very high level of confidence, constituting strong evidence for pain (see Table 1, below). This is stronger evidence for pain than Birch et al. (2021) found for decapod crustaceans (5/8), which are currently protected via the UK Animal Welfare (Sentience) Act 2022.Adults of the remaining orders (except Coleoptera) and some juveniles (Blattodea, Diptera, and last juvenile stage Lepidoptera) satisfy 3 or 4 criteria, constituting substantial evidence for pain (see Tables 1 + 2).We found no good evidence that any insect failed a criterion.For several criteria, evidence was distributed across the insect phylogeny (Figure 1), including across the major split between the hemimetabolous (incomplete metamorphosis) and holometabolous (complete metamorphosis) insects. This provides some reason to believe that certain kinds of evidence for pain (e.g., integrated nociception in adults) will be found in other taxa.Our review demonstrates that there are many areas of insect pain research that have been completely unexplored. Research gaps are particularly substantial for juveniles, hig...]]>
Tue, 22 Nov 2022 17:27:57 +0000 EA - Short Research Summary: Can insects feel pain? A review of the neural and behavioural evidence by Gibbons et al. 2022 by Meghan Barrett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Short Research Summary: Can insects feel pain? A review of the neural and behavioural evidence by Gibbons et al. 2022, published by Meghan Barrett on November 22, 2022 on The Effective Altruism Forum.This short research summary briefly highlights the major results of a new publication on the scientific evidence for insect pain in Advances in Insect Physiology by Gibbons et al. (2022). This EA Forum post was prepared by Meghan Barrett, Lars Chittka, Andrew Crump, Matilda Gibbons, and Sajedeh Sarlak.The 75-page publication summarizes over 350 scientific studies to assess the scientific evidence for pain across six orders of insects at, minimally, two developmental time points (juvenile, adult). In addition, the paper discusses the use and management of insects in farmed, wild, and research contexts. The publication in its entirety can be reviewed here. The original publication was authored by Matilda Gibbons, Andrew Crump, Meghan Barrett, Sajedeh Sarlak, Jonathan Birch, and Lars Chittka.Major TakeawayWe find strong evidence for pain in adult insects of two orders (Blattodea: cockroaches and termites; Diptera: flies and mosquitoes). We find substantial evidence for pain in adult insects of three additional orders, as well as some juveniles. For several criteria, evidence was distributed across the insect phylogeny, providing some reason to believe that certain kinds of evidence for pain will be found in other taxa. Trillions of insects are directly impacted by humans each year (farmed, managed, killed, etc.). Significant welfare concerns have been identified as the result of human activities. Insect welfare is both completely unregulated and infrequently researched.Given the evidence reviewed in Gibbons et al. (2022), insect welfare is both important and highly neglected.Research SummaryThe Birch et al. (2021) framework, which the UK government has applied to assess evidence for animal pain, uses eight neural and behavioral criteria to assess the likelihood for sentience in invertebrates: 1) nociception; 2) sensory integration; 3) integrated nociception; 4) analgesia; 5) motivational trade-offs; 6) flexible self-protection; 7) associative learning; and 8) analgesia preference.Definitions of these criteria can be found on pages 4 & 5 of the publication's main text.Gibbons et al. (2022) applies the framework to six orders of insects at, minimally, two developmental time points per order (juvenile, adult).Insect orders assessed: Blattodea (cockroaches, termites), Coleoptera (beetles), Diptera (flies, mosquitoes), Hymenoptera (bees, ants, wasps, sawflies), Lepidoptera (butterflies, moths), Orthoptera (crickets, katydids, grasshoppers).Adult Blattodea and Diptera meet 6/8 criteria to a high or very high level of confidence, constituting strong evidence for pain (see Table 1, below). This is stronger evidence for pain than Birch et al. (2021) found for decapod crustaceans (5/8), which are currently protected via the UK Animal Welfare (Sentience) Act 2022.Adults of the remaining orders (except Coleoptera) and some juveniles (Blattodea, Diptera, and last juvenile stage Lepidoptera) satisfy 3 or 4 criteria, constituting substantial evidence for pain (see Tables 1 + 2).We found no good evidence that any insect failed a criterion.For several criteria, evidence was distributed across the insect phylogeny (Figure 1), including across the major split between the hemimetabolous (incomplete metamorphosis) and holometabolous (complete metamorphosis) insects. This provides some reason to believe that certain kinds of evidence for pain (e.g., integrated nociception in adults) will be found in other taxa.Our review demonstrates that there are many areas of insect pain research that have been completely unexplored. Research gaps are particularly substantial for juveniles, hig...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Short Research Summary: Can insects feel pain? A review of the neural and behavioural evidence by Gibbons et al. 2022, published by Meghan Barrett on November 22, 2022 on The Effective Altruism Forum.This short research summary briefly highlights the major results of a new publication on the scientific evidence for insect pain in Advances in Insect Physiology by Gibbons et al. (2022). This EA Forum post was prepared by Meghan Barrett, Lars Chittka, Andrew Crump, Matilda Gibbons, and Sajedeh Sarlak.The 75-page publication summarizes over 350 scientific studies to assess the scientific evidence for pain across six orders of insects at, minimally, two developmental time points (juvenile, adult). In addition, the paper discusses the use and management of insects in farmed, wild, and research contexts. The publication in its entirety can be reviewed here. The original publication was authored by Matilda Gibbons, Andrew Crump, Meghan Barrett, Sajedeh Sarlak, Jonathan Birch, and Lars Chittka.Major TakeawayWe find strong evidence for pain in adult insects of two orders (Blattodea: cockroaches and termites; Diptera: flies and mosquitoes). We find substantial evidence for pain in adult insects of three additional orders, as well as some juveniles. For several criteria, evidence was distributed across the insect phylogeny, providing some reason to believe that certain kinds of evidence for pain will be found in other taxa. Trillions of insects are directly impacted by humans each year (farmed, managed, killed, etc.). Significant welfare concerns have been identified as the result of human activities. Insect welfare is both completely unregulated and infrequently researched.Given the evidence reviewed in Gibbons et al. (2022), insect welfare is both important and highly neglected.Research SummaryThe Birch et al. (2021) framework, which the UK government has applied to assess evidence for animal pain, uses eight neural and behavioral criteria to assess the likelihood for sentience in invertebrates: 1) nociception; 2) sensory integration; 3) integrated nociception; 4) analgesia; 5) motivational trade-offs; 6) flexible self-protection; 7) associative learning; and 8) analgesia preference.Definitions of these criteria can be found on pages 4 & 5 of the publication's main text.Gibbons et al. (2022) applies the framework to six orders of insects at, minimally, two developmental time points per order (juvenile, adult).Insect orders assessed: Blattodea (cockroaches, termites), Coleoptera (beetles), Diptera (flies, mosquitoes), Hymenoptera (bees, ants, wasps, sawflies), Lepidoptera (butterflies, moths), Orthoptera (crickets, katydids, grasshoppers).Adult Blattodea and Diptera meet 6/8 criteria to a high or very high level of confidence, constituting strong evidence for pain (see Table 1, below). This is stronger evidence for pain than Birch et al. (2021) found for decapod crustaceans (5/8), which are currently protected via the UK Animal Welfare (Sentience) Act 2022.Adults of the remaining orders (except Coleoptera) and some juveniles (Blattodea, Diptera, and last juvenile stage Lepidoptera) satisfy 3 or 4 criteria, constituting substantial evidence for pain (see Tables 1 + 2).We found no good evidence that any insect failed a criterion.For several criteria, evidence was distributed across the insect phylogeny (Figure 1), including across the major split between the hemimetabolous (incomplete metamorphosis) and holometabolous (complete metamorphosis) insects. This provides some reason to believe that certain kinds of evidence for pain (e.g., integrated nociception in adults) will be found in other taxa.Our review demonstrates that there are many areas of insect pain research that have been completely unexplored. Research gaps are particularly substantial for juveniles, hig...]]>
Meghan Barrett https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:24 None full 3856
f8BY2yiLBzHLntjTL_EA EA - Toby Ord's new report on lessons from the development of the atomic bomb by Ishan Mukherjee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Toby Ord's new report on lessons from the development of the atomic bomb, published by Ishan Mukherjee on November 22, 2022 on The Effective Altruism Forum.Toby Ord has written a new report with GovAI on lessons from the development of the atomic bomb relevant to emerging technologies like artificial intelligence, synthetic biology, and nanotechnology.The creation of the atomic bomb is one of the most famous and well-studied examples of developing a transformative technology — one that changes the shape of human affairs. There is much we don’t know about the future development of these technologies. This makes it much more difficult to reason about the strategic landscape that surrounds them. Which, in turn, makes it more difficult to help make sure the development is safe and beneficial for humanity. It is thus very useful to have a case study of developing a transformative technology.The making of the atomic bomb provides such a reference case. This report summarises the most important aspects of the development of atomic weapons and draws out a number of important insights for the development of similarly important technologies.One should treat the development of the atomic bomb not as a map to one’s destination, but as a detailed account of another traveller’s journey in a nearby land.Something that provides valuable hints to important dangers or strategies we might not have considered, and which we neglect at our own peril.Read the report here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ishan Mukherjee https://forum.effectivealtruism.org/posts/f8BY2yiLBzHLntjTL/toby-ord-s-new-report-on-lessons-from-the-development-of-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Toby Ord's new report on lessons from the development of the atomic bomb, published by Ishan Mukherjee on November 22, 2022 on The Effective Altruism Forum.Toby Ord has written a new report with GovAI on lessons from the development of the atomic bomb relevant to emerging technologies like artificial intelligence, synthetic biology, and nanotechnology.The creation of the atomic bomb is one of the most famous and well-studied examples of developing a transformative technology — one that changes the shape of human affairs. There is much we don’t know about the future development of these technologies. This makes it much more difficult to reason about the strategic landscape that surrounds them. Which, in turn, makes it more difficult to help make sure the development is safe and beneficial for humanity. It is thus very useful to have a case study of developing a transformative technology.The making of the atomic bomb provides such a reference case. This report summarises the most important aspects of the development of atomic weapons and draws out a number of important insights for the development of similarly important technologies.One should treat the development of the atomic bomb not as a map to one’s destination, but as a detailed account of another traveller’s journey in a nearby land.Something that provides valuable hints to important dangers or strategies we might not have considered, and which we neglect at our own peril.Read the report here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 22 Nov 2022 15:50:25 +0000 EA - Toby Ord's new report on lessons from the development of the atomic bomb by Ishan Mukherjee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Toby Ord's new report on lessons from the development of the atomic bomb, published by Ishan Mukherjee on November 22, 2022 on The Effective Altruism Forum.Toby Ord has written a new report with GovAI on lessons from the development of the atomic bomb relevant to emerging technologies like artificial intelligence, synthetic biology, and nanotechnology.The creation of the atomic bomb is one of the most famous and well-studied examples of developing a transformative technology — one that changes the shape of human affairs. There is much we don’t know about the future development of these technologies. This makes it much more difficult to reason about the strategic landscape that surrounds them. Which, in turn, makes it more difficult to help make sure the development is safe and beneficial for humanity. It is thus very useful to have a case study of developing a transformative technology.The making of the atomic bomb provides such a reference case. This report summarises the most important aspects of the development of atomic weapons and draws out a number of important insights for the development of similarly important technologies.One should treat the development of the atomic bomb not as a map to one’s destination, but as a detailed account of another traveller’s journey in a nearby land.Something that provides valuable hints to important dangers or strategies we might not have considered, and which we neglect at our own peril.Read the report here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Toby Ord's new report on lessons from the development of the atomic bomb, published by Ishan Mukherjee on November 22, 2022 on The Effective Altruism Forum.Toby Ord has written a new report with GovAI on lessons from the development of the atomic bomb relevant to emerging technologies like artificial intelligence, synthetic biology, and nanotechnology.The creation of the atomic bomb is one of the most famous and well-studied examples of developing a transformative technology — one that changes the shape of human affairs. There is much we don’t know about the future development of these technologies. This makes it much more difficult to reason about the strategic landscape that surrounds them. Which, in turn, makes it more difficult to help make sure the development is safe and beneficial for humanity. It is thus very useful to have a case study of developing a transformative technology.The making of the atomic bomb provides such a reference case. This report summarises the most important aspects of the development of atomic weapons and draws out a number of important insights for the development of similarly important technologies.One should treat the development of the atomic bomb not as a map to one’s destination, but as a detailed account of another traveller’s journey in a nearby land.Something that provides valuable hints to important dangers or strategies we might not have considered, and which we neglect at our own peril.Read the report here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ishan Mukherjee https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:35 None full 3857
f58guFdECnXgsjn2v_EA EA - A socialist view on liberal progressive criticisms of EA by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A socialist view on liberal progressive criticisms of EA, published by freedomandutility on November 21, 2022 on The Effective Altruism Forum.As someone who identifies as a socialist, politically to the left of many progressive critics of EA, I wanted to outline where I disagree with some liberal progressives criticisms of EA.Obviously, I am just one socialist and my views probably differ a fair amount from the average socialist.My view of "liberal progressive criticisms of EA" is mostly based on tweets, but I think this article is a good example of what I categorise as a liberal progressive criticism of EA:/.Explicit and Implicit PrioritisationSome liberal progressive criticisms of EA treat opportunity costs and prioritisation as things that EA has created which unfairly pit good causes against one another.All political movements have limited resources and must prioritise certain issues. Often, individuals aligned with political movements do not realise that they are prioritising certain issues over others, or do not want to recognise this reality, because deeming some good causes less worthy than other good causes is deeply emotionally uncomfortable for most people.EAs are less uncomfortable with this because they are used to prioritisation, and prioritise between causes explicitly rather than implicitly.I think many liberal progressives focused on issues like student loan in Western countries do not recognise, or do not want to recognise, that they are prioritising this issue over vaccinating the world's poorest children against disease. However, EAs prioritising existential risks generally explicitly recognise and state that they are prioritising this over global health.This leads some critics to a double standard of criticising those who explicitly prioritise issues over global health in favour of those who implicitly prioritise other issues over global health, despite the impacts on global health being the same.Scope InsensitivityEAs are generally aware of scope insensitivity.If asked to compare issues, it is likely that liberal progressives would at least agree that global health should in theory be prioritised over issues such as student loans or better healthcare for poorer Westerners. However, liberal progressives will likely still not prioritise pandemic preparedness and global health sufficiently because of scope insensitivity and failures to understand the gulf in impact between initiatives focused on poorer Westerners vs the global poor.Differences in Priorities and NeglectednessLiberal progressivism is a much more successful ideology / movement than EA, in terms of reach and influence. EA claims to do the most good, but adherents of many ideologies and movements expect that they are doing the most good too, and are likely surprised when EA's cause priorities do not reflect their own. They may also simply critique EA out of disagreement between their priorities and EA's priorities.But EA priorities differing from the priorities of mainstream political movements is a systematic feature of EA, because EA focuses on neglected issues. If liberal progressivism or conservatism sufficiently prioritised AI safety, pandemic preparedness, global health or factory farming, there is a good chance that EA would switch to new priorities.EA does not focus on the "most important" issues - it focuses on the issues where additional time or money will have the largest social impact.Localism, Internationalism and XenophobiaMy view of liberal progressivism is that it largely embraces localism with a preference for local actors solving local problems using local resources.However, the world is very unequal. The areas with the best resources and the most skilled actors to deploy them are also the areas with the least severe problems. The areas with the most...]]>
freedomandutility https://forum.effectivealtruism.org/posts/f58guFdECnXgsjn2v/a-socialist-view-on-liberal-progressive-criticisms-of-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A socialist view on liberal progressive criticisms of EA, published by freedomandutility on November 21, 2022 on The Effective Altruism Forum.As someone who identifies as a socialist, politically to the left of many progressive critics of EA, I wanted to outline where I disagree with some liberal progressives criticisms of EA.Obviously, I am just one socialist and my views probably differ a fair amount from the average socialist.My view of "liberal progressive criticisms of EA" is mostly based on tweets, but I think this article is a good example of what I categorise as a liberal progressive criticism of EA:/.Explicit and Implicit PrioritisationSome liberal progressive criticisms of EA treat opportunity costs and prioritisation as things that EA has created which unfairly pit good causes against one another.All political movements have limited resources and must prioritise certain issues. Often, individuals aligned with political movements do not realise that they are prioritising certain issues over others, or do not want to recognise this reality, because deeming some good causes less worthy than other good causes is deeply emotionally uncomfortable for most people.EAs are less uncomfortable with this because they are used to prioritisation, and prioritise between causes explicitly rather than implicitly.I think many liberal progressives focused on issues like student loan in Western countries do not recognise, or do not want to recognise, that they are prioritising this issue over vaccinating the world's poorest children against disease. However, EAs prioritising existential risks generally explicitly recognise and state that they are prioritising this over global health.This leads some critics to a double standard of criticising those who explicitly prioritise issues over global health in favour of those who implicitly prioritise other issues over global health, despite the impacts on global health being the same.Scope InsensitivityEAs are generally aware of scope insensitivity.If asked to compare issues, it is likely that liberal progressives would at least agree that global health should in theory be prioritised over issues such as student loans or better healthcare for poorer Westerners. However, liberal progressives will likely still not prioritise pandemic preparedness and global health sufficiently because of scope insensitivity and failures to understand the gulf in impact between initiatives focused on poorer Westerners vs the global poor.Differences in Priorities and NeglectednessLiberal progressivism is a much more successful ideology / movement than EA, in terms of reach and influence. EA claims to do the most good, but adherents of many ideologies and movements expect that they are doing the most good too, and are likely surprised when EA's cause priorities do not reflect their own. They may also simply critique EA out of disagreement between their priorities and EA's priorities.But EA priorities differing from the priorities of mainstream political movements is a systematic feature of EA, because EA focuses on neglected issues. If liberal progressivism or conservatism sufficiently prioritised AI safety, pandemic preparedness, global health or factory farming, there is a good chance that EA would switch to new priorities.EA does not focus on the "most important" issues - it focuses on the issues where additional time or money will have the largest social impact.Localism, Internationalism and XenophobiaMy view of liberal progressivism is that it largely embraces localism with a preference for local actors solving local problems using local resources.However, the world is very unequal. The areas with the best resources and the most skilled actors to deploy them are also the areas with the least severe problems. The areas with the most...]]>
Tue, 22 Nov 2022 02:33:49 +0000 EA - A socialist view on liberal progressive criticisms of EA by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A socialist view on liberal progressive criticisms of EA, published by freedomandutility on November 21, 2022 on The Effective Altruism Forum.As someone who identifies as a socialist, politically to the left of many progressive critics of EA, I wanted to outline where I disagree with some liberal progressives criticisms of EA.Obviously, I am just one socialist and my views probably differ a fair amount from the average socialist.My view of "liberal progressive criticisms of EA" is mostly based on tweets, but I think this article is a good example of what I categorise as a liberal progressive criticism of EA:/.Explicit and Implicit PrioritisationSome liberal progressive criticisms of EA treat opportunity costs and prioritisation as things that EA has created which unfairly pit good causes against one another.All political movements have limited resources and must prioritise certain issues. Often, individuals aligned with political movements do not realise that they are prioritising certain issues over others, or do not want to recognise this reality, because deeming some good causes less worthy than other good causes is deeply emotionally uncomfortable for most people.EAs are less uncomfortable with this because they are used to prioritisation, and prioritise between causes explicitly rather than implicitly.I think many liberal progressives focused on issues like student loan in Western countries do not recognise, or do not want to recognise, that they are prioritising this issue over vaccinating the world's poorest children against disease. However, EAs prioritising existential risks generally explicitly recognise and state that they are prioritising this over global health.This leads some critics to a double standard of criticising those who explicitly prioritise issues over global health in favour of those who implicitly prioritise other issues over global health, despite the impacts on global health being the same.Scope InsensitivityEAs are generally aware of scope insensitivity.If asked to compare issues, it is likely that liberal progressives would at least agree that global health should in theory be prioritised over issues such as student loans or better healthcare for poorer Westerners. However, liberal progressives will likely still not prioritise pandemic preparedness and global health sufficiently because of scope insensitivity and failures to understand the gulf in impact between initiatives focused on poorer Westerners vs the global poor.Differences in Priorities and NeglectednessLiberal progressivism is a much more successful ideology / movement than EA, in terms of reach and influence. EA claims to do the most good, but adherents of many ideologies and movements expect that they are doing the most good too, and are likely surprised when EA's cause priorities do not reflect their own. They may also simply critique EA out of disagreement between their priorities and EA's priorities.But EA priorities differing from the priorities of mainstream political movements is a systematic feature of EA, because EA focuses on neglected issues. If liberal progressivism or conservatism sufficiently prioritised AI safety, pandemic preparedness, global health or factory farming, there is a good chance that EA would switch to new priorities.EA does not focus on the "most important" issues - it focuses on the issues where additional time or money will have the largest social impact.Localism, Internationalism and XenophobiaMy view of liberal progressivism is that it largely embraces localism with a preference for local actors solving local problems using local resources.However, the world is very unequal. The areas with the best resources and the most skilled actors to deploy them are also the areas with the least severe problems. The areas with the most...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A socialist view on liberal progressive criticisms of EA, published by freedomandutility on November 21, 2022 on The Effective Altruism Forum.As someone who identifies as a socialist, politically to the left of many progressive critics of EA, I wanted to outline where I disagree with some liberal progressives criticisms of EA.Obviously, I am just one socialist and my views probably differ a fair amount from the average socialist.My view of "liberal progressive criticisms of EA" is mostly based on tweets, but I think this article is a good example of what I categorise as a liberal progressive criticism of EA:/.Explicit and Implicit PrioritisationSome liberal progressive criticisms of EA treat opportunity costs and prioritisation as things that EA has created which unfairly pit good causes against one another.All political movements have limited resources and must prioritise certain issues. Often, individuals aligned with political movements do not realise that they are prioritising certain issues over others, or do not want to recognise this reality, because deeming some good causes less worthy than other good causes is deeply emotionally uncomfortable for most people.EAs are less uncomfortable with this because they are used to prioritisation, and prioritise between causes explicitly rather than implicitly.I think many liberal progressives focused on issues like student loan in Western countries do not recognise, or do not want to recognise, that they are prioritising this issue over vaccinating the world's poorest children against disease. However, EAs prioritising existential risks generally explicitly recognise and state that they are prioritising this over global health.This leads some critics to a double standard of criticising those who explicitly prioritise issues over global health in favour of those who implicitly prioritise other issues over global health, despite the impacts on global health being the same.Scope InsensitivityEAs are generally aware of scope insensitivity.If asked to compare issues, it is likely that liberal progressives would at least agree that global health should in theory be prioritised over issues such as student loans or better healthcare for poorer Westerners. However, liberal progressives will likely still not prioritise pandemic preparedness and global health sufficiently because of scope insensitivity and failures to understand the gulf in impact between initiatives focused on poorer Westerners vs the global poor.Differences in Priorities and NeglectednessLiberal progressivism is a much more successful ideology / movement than EA, in terms of reach and influence. EA claims to do the most good, but adherents of many ideologies and movements expect that they are doing the most good too, and are likely surprised when EA's cause priorities do not reflect their own. They may also simply critique EA out of disagreement between their priorities and EA's priorities.But EA priorities differing from the priorities of mainstream political movements is a systematic feature of EA, because EA focuses on neglected issues. If liberal progressivism or conservatism sufficiently prioritised AI safety, pandemic preparedness, global health or factory farming, there is a good chance that EA would switch to new priorities.EA does not focus on the "most important" issues - it focuses on the issues where additional time or money will have the largest social impact.Localism, Internationalism and XenophobiaMy view of liberal progressivism is that it largely embraces localism with a preference for local actors solving local problems using local resources.However, the world is very unequal. The areas with the best resources and the most skilled actors to deploy them are also the areas with the least severe problems. The areas with the most...]]>
freedomandutility https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:39 None full 3849
yPpCCC4REq3zKXWdJ_EA EA - Review: What We Owe The Future by Kelsey Piper Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review: What We Owe The Future, published by Kelsey Piper on November 21, 2022 on The Effective Altruism Forum.For the inaugural edition of Asterisk, I wrote about What We Owe The Future. Some highlights:What is the longtermist worldview? First — that humanity’s potential future is vast beyond comprehension, that trillions of lives may lie ahead of us, and that we should try to secure and shape that future if possible.Here there’s little disagreement among effective altruists. The catch is the qualifier: “if possible.” When I talk to people working on cash transfers or clean water or accelerating vaccine timelines, their reason for prioritizing those projects over long-term-future ones is approximately never “because future people aren’t of moral importance”; it’s usually “because I don’t think we can predictably affect the lives of future people in the desired direction.”As it happens, I think we can — but not through the pathways outlined in What We Owe the Future.The stakes are as high as MacAskill says — but when you start trying to figure out what to do about it, you end up face-to-face with problems that are deeply unclear and solutions that are deeply technical.I think we’re in a dangerous world, one with perils ahead for which we’re not at all prepared, one where we’re likely to make an irrecoverable mistake and all die. Most of the obligation I feel toward the future is an obligation to not screw up so badly that it never exists. Most longtermists are scared, and the absence of that sentiment from What We Owe the Future feels glaring.If we grant MacAskill’s premise that values change matters, though, the value I would want to impart is this one: an appetite for these details, however tedious they may seem.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Kelsey Piper https://forum.effectivealtruism.org/posts/yPpCCC4REq3zKXWdJ/review-what-we-owe-the-future Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review: What We Owe The Future, published by Kelsey Piper on November 21, 2022 on The Effective Altruism Forum.For the inaugural edition of Asterisk, I wrote about What We Owe The Future. Some highlights:What is the longtermist worldview? First — that humanity’s potential future is vast beyond comprehension, that trillions of lives may lie ahead of us, and that we should try to secure and shape that future if possible.Here there’s little disagreement among effective altruists. The catch is the qualifier: “if possible.” When I talk to people working on cash transfers or clean water or accelerating vaccine timelines, their reason for prioritizing those projects over long-term-future ones is approximately never “because future people aren’t of moral importance”; it’s usually “because I don’t think we can predictably affect the lives of future people in the desired direction.”As it happens, I think we can — but not through the pathways outlined in What We Owe the Future.The stakes are as high as MacAskill says — but when you start trying to figure out what to do about it, you end up face-to-face with problems that are deeply unclear and solutions that are deeply technical.I think we’re in a dangerous world, one with perils ahead for which we’re not at all prepared, one where we’re likely to make an irrecoverable mistake and all die. Most of the obligation I feel toward the future is an obligation to not screw up so badly that it never exists. Most longtermists are scared, and the absence of that sentiment from What We Owe the Future feels glaring.If we grant MacAskill’s premise that values change matters, though, the value I would want to impart is this one: an appetite for these details, however tedious they may seem.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 21 Nov 2022 23:22:22 +0000 EA - Review: What We Owe The Future by Kelsey Piper Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review: What We Owe The Future, published by Kelsey Piper on November 21, 2022 on The Effective Altruism Forum.For the inaugural edition of Asterisk, I wrote about What We Owe The Future. Some highlights:What is the longtermist worldview? First — that humanity’s potential future is vast beyond comprehension, that trillions of lives may lie ahead of us, and that we should try to secure and shape that future if possible.Here there’s little disagreement among effective altruists. The catch is the qualifier: “if possible.” When I talk to people working on cash transfers or clean water or accelerating vaccine timelines, their reason for prioritizing those projects over long-term-future ones is approximately never “because future people aren’t of moral importance”; it’s usually “because I don’t think we can predictably affect the lives of future people in the desired direction.”As it happens, I think we can — but not through the pathways outlined in What We Owe the Future.The stakes are as high as MacAskill says — but when you start trying to figure out what to do about it, you end up face-to-face with problems that are deeply unclear and solutions that are deeply technical.I think we’re in a dangerous world, one with perils ahead for which we’re not at all prepared, one where we’re likely to make an irrecoverable mistake and all die. Most of the obligation I feel toward the future is an obligation to not screw up so badly that it never exists. Most longtermists are scared, and the absence of that sentiment from What We Owe the Future feels glaring.If we grant MacAskill’s premise that values change matters, though, the value I would want to impart is this one: an appetite for these details, however tedious they may seem.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review: What We Owe The Future, published by Kelsey Piper on November 21, 2022 on The Effective Altruism Forum.For the inaugural edition of Asterisk, I wrote about What We Owe The Future. Some highlights:What is the longtermist worldview? First — that humanity’s potential future is vast beyond comprehension, that trillions of lives may lie ahead of us, and that we should try to secure and shape that future if possible.Here there’s little disagreement among effective altruists. The catch is the qualifier: “if possible.” When I talk to people working on cash transfers or clean water or accelerating vaccine timelines, their reason for prioritizing those projects over long-term-future ones is approximately never “because future people aren’t of moral importance”; it’s usually “because I don’t think we can predictably affect the lives of future people in the desired direction.”As it happens, I think we can — but not through the pathways outlined in What We Owe the Future.The stakes are as high as MacAskill says — but when you start trying to figure out what to do about it, you end up face-to-face with problems that are deeply unclear and solutions that are deeply technical.I think we’re in a dangerous world, one with perils ahead for which we’re not at all prepared, one where we’re likely to make an irrecoverable mistake and all die. Most of the obligation I feel toward the future is an obligation to not screw up so badly that it never exists. Most longtermists are scared, and the absence of that sentiment from What We Owe the Future feels glaring.If we grant MacAskill’s premise that values change matters, though, the value I would want to impart is this one: an appetite for these details, however tedious they may seem.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Kelsey Piper https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:48 None full 3850
3kaojgsu6qy2n8TdC_EA EA - Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest by Jason Schukraft Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest, published by Jason Schukraft on November 21, 2022 on The Effective Altruism Forum.At Open Philanthropy we believe that future developments in AI could be extremely important, but the timing, pathways, and implications of those developments are uncertain. We want to continually test our arguments about AI and work to surface new considerations that could inform our thinking.We were pleased when the Future Fund announced a competition earlier this year to challenge their fundamental assumptions about AI. We believe this sort of openness to criticism is good for the AI, longtermist, and EA communities. Given recent developments, it seems likely that competition is no longer moving forward.We recognize that many people have already invested significant time and thought into their contest entries. We don’t want that effort to be wasted, and we want to incentivize further work in the same vein. For these reasons, Open Phil will run its own AI Worldviews Contest in early 2023.To be clear, this is a new contest, not a continuation of the Future Fund competition. There will be substantial differences, including:A smaller overall prize poolA different panel of judgesChanges to the operationalization of winning entriesThe spirit and purpose of the two competitions, however, remains the same. We expect it will be easy to adapt Future Fund submissions for the Open Phil contest.More details will be published when we formally announce the competition in early 2023. We are releasing this post now to try to alleviate some of the fear, uncertainty, and doubt surrounding the old Future Fund competition and also to capture some of the value that has already been generated by the Future Fund competition before it dissipates.We are still figuring out the logistics of the competition, and as such we are not yet in a position to answer many concrete questions (e.g., about deadlines or prize amounts). Nonetheless, if you have questions about the contest you think we might be able to answer, you can leave them as comments below, and we will do our best to answer them over the next few weeks.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jason Schukraft https://forum.effectivealtruism.org/posts/3kaojgsu6qy2n8TdC/pre-announcing-the-2023-open-philanthropy-ai-worldviews Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest, published by Jason Schukraft on November 21, 2022 on The Effective Altruism Forum.At Open Philanthropy we believe that future developments in AI could be extremely important, but the timing, pathways, and implications of those developments are uncertain. We want to continually test our arguments about AI and work to surface new considerations that could inform our thinking.We were pleased when the Future Fund announced a competition earlier this year to challenge their fundamental assumptions about AI. We believe this sort of openness to criticism is good for the AI, longtermist, and EA communities. Given recent developments, it seems likely that competition is no longer moving forward.We recognize that many people have already invested significant time and thought into their contest entries. We don’t want that effort to be wasted, and we want to incentivize further work in the same vein. For these reasons, Open Phil will run its own AI Worldviews Contest in early 2023.To be clear, this is a new contest, not a continuation of the Future Fund competition. There will be substantial differences, including:A smaller overall prize poolA different panel of judgesChanges to the operationalization of winning entriesThe spirit and purpose of the two competitions, however, remains the same. We expect it will be easy to adapt Future Fund submissions for the Open Phil contest.More details will be published when we formally announce the competition in early 2023. We are releasing this post now to try to alleviate some of the fear, uncertainty, and doubt surrounding the old Future Fund competition and also to capture some of the value that has already been generated by the Future Fund competition before it dissipates.We are still figuring out the logistics of the competition, and as such we are not yet in a position to answer many concrete questions (e.g., about deadlines or prize amounts). Nonetheless, if you have questions about the contest you think we might be able to answer, you can leave them as comments below, and we will do our best to answer them over the next few weeks.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 21 Nov 2022 21:55:36 +0000 EA - Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest by Jason Schukraft Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest, published by Jason Schukraft on November 21, 2022 on The Effective Altruism Forum.At Open Philanthropy we believe that future developments in AI could be extremely important, but the timing, pathways, and implications of those developments are uncertain. We want to continually test our arguments about AI and work to surface new considerations that could inform our thinking.We were pleased when the Future Fund announced a competition earlier this year to challenge their fundamental assumptions about AI. We believe this sort of openness to criticism is good for the AI, longtermist, and EA communities. Given recent developments, it seems likely that competition is no longer moving forward.We recognize that many people have already invested significant time and thought into their contest entries. We don’t want that effort to be wasted, and we want to incentivize further work in the same vein. For these reasons, Open Phil will run its own AI Worldviews Contest in early 2023.To be clear, this is a new contest, not a continuation of the Future Fund competition. There will be substantial differences, including:A smaller overall prize poolA different panel of judgesChanges to the operationalization of winning entriesThe spirit and purpose of the two competitions, however, remains the same. We expect it will be easy to adapt Future Fund submissions for the Open Phil contest.More details will be published when we formally announce the competition in early 2023. We are releasing this post now to try to alleviate some of the fear, uncertainty, and doubt surrounding the old Future Fund competition and also to capture some of the value that has already been generated by the Future Fund competition before it dissipates.We are still figuring out the logistics of the competition, and as such we are not yet in a position to answer many concrete questions (e.g., about deadlines or prize amounts). Nonetheless, if you have questions about the contest you think we might be able to answer, you can leave them as comments below, and we will do our best to answer them over the next few weeks.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest, published by Jason Schukraft on November 21, 2022 on The Effective Altruism Forum.At Open Philanthropy we believe that future developments in AI could be extremely important, but the timing, pathways, and implications of those developments are uncertain. We want to continually test our arguments about AI and work to surface new considerations that could inform our thinking.We were pleased when the Future Fund announced a competition earlier this year to challenge their fundamental assumptions about AI. We believe this sort of openness to criticism is good for the AI, longtermist, and EA communities. Given recent developments, it seems likely that competition is no longer moving forward.We recognize that many people have already invested significant time and thought into their contest entries. We don’t want that effort to be wasted, and we want to incentivize further work in the same vein. For these reasons, Open Phil will run its own AI Worldviews Contest in early 2023.To be clear, this is a new contest, not a continuation of the Future Fund competition. There will be substantial differences, including:A smaller overall prize poolA different panel of judgesChanges to the operationalization of winning entriesThe spirit and purpose of the two competitions, however, remains the same. We expect it will be easy to adapt Future Fund submissions for the Open Phil contest.More details will be published when we formally announce the competition in early 2023. We are releasing this post now to try to alleviate some of the fear, uncertainty, and doubt surrounding the old Future Fund competition and also to capture some of the value that has already been generated by the Future Fund competition before it dissipates.We are still figuring out the logistics of the competition, and as such we are not yet in a position to answer many concrete questions (e.g., about deadlines or prize amounts). Nonetheless, if you have questions about the contest you think we might be able to answer, you can leave them as comments below, and we will do our best to answer them over the next few weeks.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jason Schukraft https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:16 None full 3841
cXH2sG3taM5hKbiva_EA EA - Beyond Simple Existential Risk: Survival in a Complex Interconnected World by Gideon Futerman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beyond Simple Existential Risk: Survival in a Complex Interconnected World, published by Gideon Futerman on November 21, 2022 on The Effective Altruism Forum.This is the script of a talk I gave at EAGx Rotterdam, with some citations and references linked throughout. I lay out the argument challenging the relatively narrow focus EA has in existential risk studies, and in favour of more methodological pluralism. This isn't a finalised thesis, and nor should it be taken as anything except a conversation starter. I hope to follow this up with more rigorous work exploring the questions I pose over the next few years, and hope other do too, but I thought to post the script to give everyone an opportunity to see what was said. Note, however, the tone of this is obviously the tone of a speech, not as much of a forum post. I hope to link the video when its up. Mostly, however, this is really synthesising the work of others; very little of this is my own original thought. If people are interested in talking to me about this, please DM me on here.Existential Risk Studies; the interdisciplinary “science” of studying existential and global catastrophic risk. So, what is the object of our study? There are many definitions of Existential Risk, including an irrecoverable loss of humanity's potential or a major loss of the expected value of the future, both of these from essentially a transhumanist perspective. In this talk, however, I will be using Existential Risk in the broadest sense, taking my definition from Beard et al 2020, with Existential Risk being risk that may result in the very worst catastrophes “encompassing human extinction, civilizational collapse and any major catastrophe commonly associated with these things.”X-Risk is a risk, not an event. It is defined therefore by potentiality, and thus is inherently uncertain. We can thus clearly distinguish between different global and existential catastrophes (nuclear winters, pandemics) and drivers of existential risk, and there are no one to one mapping of these. The IPCC commonly, and helpfully, splits drivers of risk into hazards, vulnerabilities, exposures, and responses, and through this lens, it is clear that risk isn’t something exogenous, but is reliant on decision making and governance failures, even if that failure is merely a failure of response.The thesis I present here is not original, and draws on the work of a variety of thinkers, although I accept full blame for things that may be wrong. I will argue there are two different paradigms of studying X-Risk: a simple paradigm and a complex paradigm. I will argue that EA unfairly neglects the complex paradigm, and that this is dangerous if we want to have a complete understanding of X-Risk to be able to combat it. I am not suggesting the simple paradigm is “wrong”; but that alone it currently doesn’t and never truly can, capture the full picture of X-Risk. I think the differences in the two paradigms of existential risk are diverse, with some of the differences being “intellectual” due to fundamentally different assumptions about the nature of the world we live in, and some are “cultural” which is more contingent on which thinkers works gain prominence.I won’t really try and distinguish between these differences too hard, as I think this will make everything a bit too complicated. This presentation is merely a start, a challenge to the status quo, not asking for it to be torn down, but arguing for more epistemic and methodological pluralism. This call for pluralism is the core of my argument.The “simple” paradigm of existential risk is at present dominant in EA-circles. It tends to assume that the best way to combat X-Risk is identify the most important hazards, find out the most tractable and neglected solutions to those, and work on that. It often takes a ...]]>
Gideon Futerman https://forum.effectivealtruism.org/posts/cXH2sG3taM5hKbiva/beyond-simple-existential-risk-survival-in-a-complex Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beyond Simple Existential Risk: Survival in a Complex Interconnected World, published by Gideon Futerman on November 21, 2022 on The Effective Altruism Forum.This is the script of a talk I gave at EAGx Rotterdam, with some citations and references linked throughout. I lay out the argument challenging the relatively narrow focus EA has in existential risk studies, and in favour of more methodological pluralism. This isn't a finalised thesis, and nor should it be taken as anything except a conversation starter. I hope to follow this up with more rigorous work exploring the questions I pose over the next few years, and hope other do too, but I thought to post the script to give everyone an opportunity to see what was said. Note, however, the tone of this is obviously the tone of a speech, not as much of a forum post. I hope to link the video when its up. Mostly, however, this is really synthesising the work of others; very little of this is my own original thought. If people are interested in talking to me about this, please DM me on here.Existential Risk Studies; the interdisciplinary “science” of studying existential and global catastrophic risk. So, what is the object of our study? There are many definitions of Existential Risk, including an irrecoverable loss of humanity's potential or a major loss of the expected value of the future, both of these from essentially a transhumanist perspective. In this talk, however, I will be using Existential Risk in the broadest sense, taking my definition from Beard et al 2020, with Existential Risk being risk that may result in the very worst catastrophes “encompassing human extinction, civilizational collapse and any major catastrophe commonly associated with these things.”X-Risk is a risk, not an event. It is defined therefore by potentiality, and thus is inherently uncertain. We can thus clearly distinguish between different global and existential catastrophes (nuclear winters, pandemics) and drivers of existential risk, and there are no one to one mapping of these. The IPCC commonly, and helpfully, splits drivers of risk into hazards, vulnerabilities, exposures, and responses, and through this lens, it is clear that risk isn’t something exogenous, but is reliant on decision making and governance failures, even if that failure is merely a failure of response.The thesis I present here is not original, and draws on the work of a variety of thinkers, although I accept full blame for things that may be wrong. I will argue there are two different paradigms of studying X-Risk: a simple paradigm and a complex paradigm. I will argue that EA unfairly neglects the complex paradigm, and that this is dangerous if we want to have a complete understanding of X-Risk to be able to combat it. I am not suggesting the simple paradigm is “wrong”; but that alone it currently doesn’t and never truly can, capture the full picture of X-Risk. I think the differences in the two paradigms of existential risk are diverse, with some of the differences being “intellectual” due to fundamentally different assumptions about the nature of the world we live in, and some are “cultural” which is more contingent on which thinkers works gain prominence.I won’t really try and distinguish between these differences too hard, as I think this will make everything a bit too complicated. This presentation is merely a start, a challenge to the status quo, not asking for it to be torn down, but arguing for more epistemic and methodological pluralism. This call for pluralism is the core of my argument.The “simple” paradigm of existential risk is at present dominant in EA-circles. It tends to assume that the best way to combat X-Risk is identify the most important hazards, find out the most tractable and neglected solutions to those, and work on that. It often takes a ...]]>
Mon, 21 Nov 2022 19:41:02 +0000 EA - Beyond Simple Existential Risk: Survival in a Complex Interconnected World by Gideon Futerman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beyond Simple Existential Risk: Survival in a Complex Interconnected World, published by Gideon Futerman on November 21, 2022 on The Effective Altruism Forum.This is the script of a talk I gave at EAGx Rotterdam, with some citations and references linked throughout. I lay out the argument challenging the relatively narrow focus EA has in existential risk studies, and in favour of more methodological pluralism. This isn't a finalised thesis, and nor should it be taken as anything except a conversation starter. I hope to follow this up with more rigorous work exploring the questions I pose over the next few years, and hope other do too, but I thought to post the script to give everyone an opportunity to see what was said. Note, however, the tone of this is obviously the tone of a speech, not as much of a forum post. I hope to link the video when its up. Mostly, however, this is really synthesising the work of others; very little of this is my own original thought. If people are interested in talking to me about this, please DM me on here.Existential Risk Studies; the interdisciplinary “science” of studying existential and global catastrophic risk. So, what is the object of our study? There are many definitions of Existential Risk, including an irrecoverable loss of humanity's potential or a major loss of the expected value of the future, both of these from essentially a transhumanist perspective. In this talk, however, I will be using Existential Risk in the broadest sense, taking my definition from Beard et al 2020, with Existential Risk being risk that may result in the very worst catastrophes “encompassing human extinction, civilizational collapse and any major catastrophe commonly associated with these things.”X-Risk is a risk, not an event. It is defined therefore by potentiality, and thus is inherently uncertain. We can thus clearly distinguish between different global and existential catastrophes (nuclear winters, pandemics) and drivers of existential risk, and there are no one to one mapping of these. The IPCC commonly, and helpfully, splits drivers of risk into hazards, vulnerabilities, exposures, and responses, and through this lens, it is clear that risk isn’t something exogenous, but is reliant on decision making and governance failures, even if that failure is merely a failure of response.The thesis I present here is not original, and draws on the work of a variety of thinkers, although I accept full blame for things that may be wrong. I will argue there are two different paradigms of studying X-Risk: a simple paradigm and a complex paradigm. I will argue that EA unfairly neglects the complex paradigm, and that this is dangerous if we want to have a complete understanding of X-Risk to be able to combat it. I am not suggesting the simple paradigm is “wrong”; but that alone it currently doesn’t and never truly can, capture the full picture of X-Risk. I think the differences in the two paradigms of existential risk are diverse, with some of the differences being “intellectual” due to fundamentally different assumptions about the nature of the world we live in, and some are “cultural” which is more contingent on which thinkers works gain prominence.I won’t really try and distinguish between these differences too hard, as I think this will make everything a bit too complicated. This presentation is merely a start, a challenge to the status quo, not asking for it to be torn down, but arguing for more epistemic and methodological pluralism. This call for pluralism is the core of my argument.The “simple” paradigm of existential risk is at present dominant in EA-circles. It tends to assume that the best way to combat X-Risk is identify the most important hazards, find out the most tractable and neglected solutions to those, and work on that. It often takes a ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beyond Simple Existential Risk: Survival in a Complex Interconnected World, published by Gideon Futerman on November 21, 2022 on The Effective Altruism Forum.This is the script of a talk I gave at EAGx Rotterdam, with some citations and references linked throughout. I lay out the argument challenging the relatively narrow focus EA has in existential risk studies, and in favour of more methodological pluralism. This isn't a finalised thesis, and nor should it be taken as anything except a conversation starter. I hope to follow this up with more rigorous work exploring the questions I pose over the next few years, and hope other do too, but I thought to post the script to give everyone an opportunity to see what was said. Note, however, the tone of this is obviously the tone of a speech, not as much of a forum post. I hope to link the video when its up. Mostly, however, this is really synthesising the work of others; very little of this is my own original thought. If people are interested in talking to me about this, please DM me on here.Existential Risk Studies; the interdisciplinary “science” of studying existential and global catastrophic risk. So, what is the object of our study? There are many definitions of Existential Risk, including an irrecoverable loss of humanity's potential or a major loss of the expected value of the future, both of these from essentially a transhumanist perspective. In this talk, however, I will be using Existential Risk in the broadest sense, taking my definition from Beard et al 2020, with Existential Risk being risk that may result in the very worst catastrophes “encompassing human extinction, civilizational collapse and any major catastrophe commonly associated with these things.”X-Risk is a risk, not an event. It is defined therefore by potentiality, and thus is inherently uncertain. We can thus clearly distinguish between different global and existential catastrophes (nuclear winters, pandemics) and drivers of existential risk, and there are no one to one mapping of these. The IPCC commonly, and helpfully, splits drivers of risk into hazards, vulnerabilities, exposures, and responses, and through this lens, it is clear that risk isn’t something exogenous, but is reliant on decision making and governance failures, even if that failure is merely a failure of response.The thesis I present here is not original, and draws on the work of a variety of thinkers, although I accept full blame for things that may be wrong. I will argue there are two different paradigms of studying X-Risk: a simple paradigm and a complex paradigm. I will argue that EA unfairly neglects the complex paradigm, and that this is dangerous if we want to have a complete understanding of X-Risk to be able to combat it. I am not suggesting the simple paradigm is “wrong”; but that alone it currently doesn’t and never truly can, capture the full picture of X-Risk. I think the differences in the two paradigms of existential risk are diverse, with some of the differences being “intellectual” due to fundamentally different assumptions about the nature of the world we live in, and some are “cultural” which is more contingent on which thinkers works gain prominence.I won’t really try and distinguish between these differences too hard, as I think this will make everything a bit too complicated. This presentation is merely a start, a challenge to the status quo, not asking for it to be torn down, but arguing for more epistemic and methodological pluralism. This call for pluralism is the core of my argument.The “simple” paradigm of existential risk is at present dominant in EA-circles. It tends to assume that the best way to combat X-Risk is identify the most important hazards, find out the most tractable and neglected solutions to those, and work on that. It often takes a ...]]>
Gideon Futerman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 33:09 None full 3851
YdHMxWKBaa79JcsSW_EA EA - Announcing the first issue of Asterisk by Clara Collier Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the first issue of Asterisk, published by Clara Collier on November 21, 2022 on The Effective Altruism Forum.Are you a fan of engaging, epistemically rigorous longform writing about the world's most pressing problems? Interested in in-depth interviews with leading scholars? A reader of taste and discernment? Sick of FTXcourse?Distract yourself with the inaugural issue of Asterisk Magazine, out now!Asterisk is a new quarterly journal of clear writing and clear thinking about things that matter (and, occasionally, things we just think are interesting). In this issue:Kelsey Piper argues that What We Owe The Future can't quite support the weight of its own premises.Kevin Esvelt talks about how we can prevent the next pandemic.Jared Leibowich gives us a superforecaster's approach to modeling monkeypox.Christopher Leslie Brown on the history of abolitionism and the slippery concept of moral progressStuart Ritchie tries to find out if the replication crisis has really made science better.Dietrich Vollrath explains what economists do and don't know about why some countries become rich and others don't.Scott Alexander asks: is wine fake?Karson Elmgren on the history and future of China's semiconductor industry.Xander Balwit imagines a future where genetic engineering has radically altered the animals we eat.A huge thank you to everyone in the community who helped us make Asterisk a reality. We hope you all enjoy reading it as much as we enjoyed making it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Clara Collier https://forum.effectivealtruism.org/posts/YdHMxWKBaa79JcsSW/announcing-the-first-issue-of-asterisk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the first issue of Asterisk, published by Clara Collier on November 21, 2022 on The Effective Altruism Forum.Are you a fan of engaging, epistemically rigorous longform writing about the world's most pressing problems? Interested in in-depth interviews with leading scholars? A reader of taste and discernment? Sick of FTXcourse?Distract yourself with the inaugural issue of Asterisk Magazine, out now!Asterisk is a new quarterly journal of clear writing and clear thinking about things that matter (and, occasionally, things we just think are interesting). In this issue:Kelsey Piper argues that What We Owe The Future can't quite support the weight of its own premises.Kevin Esvelt talks about how we can prevent the next pandemic.Jared Leibowich gives us a superforecaster's approach to modeling monkeypox.Christopher Leslie Brown on the history of abolitionism and the slippery concept of moral progressStuart Ritchie tries to find out if the replication crisis has really made science better.Dietrich Vollrath explains what economists do and don't know about why some countries become rich and others don't.Scott Alexander asks: is wine fake?Karson Elmgren on the history and future of China's semiconductor industry.Xander Balwit imagines a future where genetic engineering has radically altered the animals we eat.A huge thank you to everyone in the community who helped us make Asterisk a reality. We hope you all enjoy reading it as much as we enjoyed making it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 21 Nov 2022 19:38:36 +0000 EA - Announcing the first issue of Asterisk by Clara Collier Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the first issue of Asterisk, published by Clara Collier on November 21, 2022 on The Effective Altruism Forum.Are you a fan of engaging, epistemically rigorous longform writing about the world's most pressing problems? Interested in in-depth interviews with leading scholars? A reader of taste and discernment? Sick of FTXcourse?Distract yourself with the inaugural issue of Asterisk Magazine, out now!Asterisk is a new quarterly journal of clear writing and clear thinking about things that matter (and, occasionally, things we just think are interesting). In this issue:Kelsey Piper argues that What We Owe The Future can't quite support the weight of its own premises.Kevin Esvelt talks about how we can prevent the next pandemic.Jared Leibowich gives us a superforecaster's approach to modeling monkeypox.Christopher Leslie Brown on the history of abolitionism and the slippery concept of moral progressStuart Ritchie tries to find out if the replication crisis has really made science better.Dietrich Vollrath explains what economists do and don't know about why some countries become rich and others don't.Scott Alexander asks: is wine fake?Karson Elmgren on the history and future of China's semiconductor industry.Xander Balwit imagines a future where genetic engineering has radically altered the animals we eat.A huge thank you to everyone in the community who helped us make Asterisk a reality. We hope you all enjoy reading it as much as we enjoyed making it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the first issue of Asterisk, published by Clara Collier on November 21, 2022 on The Effective Altruism Forum.Are you a fan of engaging, epistemically rigorous longform writing about the world's most pressing problems? Interested in in-depth interviews with leading scholars? A reader of taste and discernment? Sick of FTXcourse?Distract yourself with the inaugural issue of Asterisk Magazine, out now!Asterisk is a new quarterly journal of clear writing and clear thinking about things that matter (and, occasionally, things we just think are interesting). In this issue:Kelsey Piper argues that What We Owe The Future can't quite support the weight of its own premises.Kevin Esvelt talks about how we can prevent the next pandemic.Jared Leibowich gives us a superforecaster's approach to modeling monkeypox.Christopher Leslie Brown on the history of abolitionism and the slippery concept of moral progressStuart Ritchie tries to find out if the replication crisis has really made science better.Dietrich Vollrath explains what economists do and don't know about why some countries become rich and others don't.Scott Alexander asks: is wine fake?Karson Elmgren on the history and future of China's semiconductor industry.Xander Balwit imagines a future where genetic engineering has radically altered the animals we eat.A huge thank you to everyone in the community who helped us make Asterisk a reality. We hope you all enjoy reading it as much as we enjoyed making it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Clara Collier https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:39 None full 3842
gbjxQuEhjAYsgWz8T_EA EA - A job matching service for affected FTXFF grantees by High Impact Professionals Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A job matching service for affected FTXFF grantees, published by High Impact Professionals on November 21, 2022 on The Effective Altruism Forum.In light of recent developments with the FTX Future Fund (FTXFF), there are many uncertainties around promised funding for grantees. Many in our community are rallying to support those affected, from potential funding opportunities to avenues of health support.To help those impacted FTXFF grantees who are now seeking alternative job opportunities, High Impact Professionals (HIP) is providing a service to help match grantees with high-impact opportunities.This works as follows:You sign up for a talent directory to provide us information on your background, interests, availability, and the like.We help match you to high-impact roles in the following ways, depending on the level of consent you give us (laid out more in the linked form):We reach out to you with roles that are potentially a good fit for your profile.We share your information with EA partner organizations that are looking to fill roles.We add your information to a talent directory list that will be published on our website (similar to/).If you are an affected FTXFF grantee who is now pursuing other jobs, please fill out this form. We plan to keep the survey open through the end of the year; however, we encourage you to complete it as soon and as thoroughly as possible. We will consider you on our list to match to organizations until you explicitly tell us not to, which you can do by reaching out.As a side note, we plan to send around a similar but more general form in the coming weeks for anyone interested in being placed at a high-impact organization. But, we wanted to get this version out quickly as a bridge for those affected.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
High Impact Professionals https://forum.effectivealtruism.org/posts/gbjxQuEhjAYsgWz8T/a-job-matching-service-for-affected-ftxff-grantees Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A job matching service for affected FTXFF grantees, published by High Impact Professionals on November 21, 2022 on The Effective Altruism Forum.In light of recent developments with the FTX Future Fund (FTXFF), there are many uncertainties around promised funding for grantees. Many in our community are rallying to support those affected, from potential funding opportunities to avenues of health support.To help those impacted FTXFF grantees who are now seeking alternative job opportunities, High Impact Professionals (HIP) is providing a service to help match grantees with high-impact opportunities.This works as follows:You sign up for a talent directory to provide us information on your background, interests, availability, and the like.We help match you to high-impact roles in the following ways, depending on the level of consent you give us (laid out more in the linked form):We reach out to you with roles that are potentially a good fit for your profile.We share your information with EA partner organizations that are looking to fill roles.We add your information to a talent directory list that will be published on our website (similar to/).If you are an affected FTXFF grantee who is now pursuing other jobs, please fill out this form. We plan to keep the survey open through the end of the year; however, we encourage you to complete it as soon and as thoroughly as possible. We will consider you on our list to match to organizations until you explicitly tell us not to, which you can do by reaching out.As a side note, we plan to send around a similar but more general form in the coming weeks for anyone interested in being placed at a high-impact organization. But, we wanted to get this version out quickly as a bridge for those affected.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 21 Nov 2022 13:13:00 +0000 EA - A job matching service for affected FTXFF grantees by High Impact Professionals Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A job matching service for affected FTXFF grantees, published by High Impact Professionals on November 21, 2022 on The Effective Altruism Forum.In light of recent developments with the FTX Future Fund (FTXFF), there are many uncertainties around promised funding for grantees. Many in our community are rallying to support those affected, from potential funding opportunities to avenues of health support.To help those impacted FTXFF grantees who are now seeking alternative job opportunities, High Impact Professionals (HIP) is providing a service to help match grantees with high-impact opportunities.This works as follows:You sign up for a talent directory to provide us information on your background, interests, availability, and the like.We help match you to high-impact roles in the following ways, depending on the level of consent you give us (laid out more in the linked form):We reach out to you with roles that are potentially a good fit for your profile.We share your information with EA partner organizations that are looking to fill roles.We add your information to a talent directory list that will be published on our website (similar to/).If you are an affected FTXFF grantee who is now pursuing other jobs, please fill out this form. We plan to keep the survey open through the end of the year; however, we encourage you to complete it as soon and as thoroughly as possible. We will consider you on our list to match to organizations until you explicitly tell us not to, which you can do by reaching out.As a side note, we plan to send around a similar but more general form in the coming weeks for anyone interested in being placed at a high-impact organization. But, we wanted to get this version out quickly as a bridge for those affected.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A job matching service for affected FTXFF grantees, published by High Impact Professionals on November 21, 2022 on The Effective Altruism Forum.In light of recent developments with the FTX Future Fund (FTXFF), there are many uncertainties around promised funding for grantees. Many in our community are rallying to support those affected, from potential funding opportunities to avenues of health support.To help those impacted FTXFF grantees who are now seeking alternative job opportunities, High Impact Professionals (HIP) is providing a service to help match grantees with high-impact opportunities.This works as follows:You sign up for a talent directory to provide us information on your background, interests, availability, and the like.We help match you to high-impact roles in the following ways, depending on the level of consent you give us (laid out more in the linked form):We reach out to you with roles that are potentially a good fit for your profile.We share your information with EA partner organizations that are looking to fill roles.We add your information to a talent directory list that will be published on our website (similar to/).If you are an affected FTXFF grantee who is now pursuing other jobs, please fill out this form. We plan to keep the survey open through the end of the year; however, we encourage you to complete it as soon and as thoroughly as possible. We will consider you on our list to match to organizations until you explicitly tell us not to, which you can do by reaching out.As a side note, we plan to send around a similar but more general form in the coming weeks for anyone interested in being placed at a high-impact organization. But, we wanted to get this version out quickly as a bridge for those affected.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
High Impact Professionals https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:54 None full 3843
G3DJ2qsZiZjdvmgSv_EA EA - Shallow Report on Hypertension by Joel Tan (CEARCH) Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Report on Hypertension, published by Joel Tan (CEARCH) on November 21, 2022 on The Effective Altruism Forum.SummaryTaking into account the expected benefits of eliminating hypertension (i.e. improved health and greater economic output), as well as the tractability of sodium taxation policy advocacy, I find that the marginal expected value of sodium taxation policy advocacy to control hypertension to be 190,927 DALYs per USD 100,000, which is around 300x as cost-effective as giving to a GiveWell top charity.Key PointsImportance: This is a strongly important cause, with 1.28 1010 DALYs at stake from now to the indefinite future. Around 84% of the burden is health related, while 16% is economic in nature.Neglectedness: Whatever governments/charities/businesses are doing to solve this problem (e.g. labelling laws/providing low sodium food in food banks/developing new hypertension drugs) may be making a difference, since age-standardized DALYs lost are falling, but (a) attribution is hard, and structural factors (e.g. more educated populations eating and exercising better, or economic development expanding access to healthcare) will also be behind the decline; and (b) all this is insufficient all the same, with population growth and ageing driving an increase in DALYs lost over time for the coming decades.Tractability: A moderately tractable solution in the form of sodium taxes is available. This is highly effective if and when implemented, but there is of course considerably uncertainty as to whether advocacy for taxes on food – which are highly unpopular – can succeed.Further DiscussionThis is a highly promising cause area that CEARCH will be conducting deeper research into, but it is important to note that early stage CEAs tend to be overoptimistic, and it is likely that this initial x300 GiveWell estimate will be revised downwards after more research and greater scrutiny, possibly extremely substantially (e.g. a one or two magnitude downgrade in cost-effectiveness).DALYs lost to hypertension have grown tremendously (43.5%) from 1990 to 2015, and it is certainly not just a rich world problem – over that same period, DALYs lost to hypertension in LMICs exploded (45% increase in high-middle income countries, 72% increase in middle income countries, 94% in low-middle income countries, and 86% in low income countries); and of the large countries, Bangladesh notably saw a fairly staggering near-tripling of DALYs lost.Growth in DALYs lost is driven not just by population growth and ageing, but also by urbanization and corresponding lifestyle changes (e.g. excessive dietary sodium, stress, sedentary lifestyle etc).Note that the analysis here does not model income effects from the tax (i.e. reduced purchasing power causing less consumption of healthy food) or substitution effects, whether positive (e.g. reducing sugar and fat consumption from food – such as junk food – that is high in not just salt but also sugar and fat) or negative (i.e. causing people to switch to low-salt high-sugar food or drinks); the analysis here also does not model the impact of industry reformulating food products in response to a sodium tax. My sense is that these balance out to some extent, but it is very hard to say.There is extremely high uncertainty over the calculations over how the problem will grow or shrink in the coming decades. This is certainly an area where expert advice and expert epidemiological modelling would be extremely valuable, and is something that CEARCH will pursue at deeper research stages.We underestimate the economic burden insofar as it focuses on the burden from hypertension (i.e. SBP of > 140 mm Hg) even though high systolic blood pressure (i.e. SBP of > 110-115 mm Hg) has adverse health consequences and presumably negative economic effects as well....]]>
Joel Tan (CEARCH) https://forum.effectivealtruism.org/posts/G3DJ2qsZiZjdvmgSv/shallow-report-on-hypertension Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Report on Hypertension, published by Joel Tan (CEARCH) on November 21, 2022 on The Effective Altruism Forum.SummaryTaking into account the expected benefits of eliminating hypertension (i.e. improved health and greater economic output), as well as the tractability of sodium taxation policy advocacy, I find that the marginal expected value of sodium taxation policy advocacy to control hypertension to be 190,927 DALYs per USD 100,000, which is around 300x as cost-effective as giving to a GiveWell top charity.Key PointsImportance: This is a strongly important cause, with 1.28 1010 DALYs at stake from now to the indefinite future. Around 84% of the burden is health related, while 16% is economic in nature.Neglectedness: Whatever governments/charities/businesses are doing to solve this problem (e.g. labelling laws/providing low sodium food in food banks/developing new hypertension drugs) may be making a difference, since age-standardized DALYs lost are falling, but (a) attribution is hard, and structural factors (e.g. more educated populations eating and exercising better, or economic development expanding access to healthcare) will also be behind the decline; and (b) all this is insufficient all the same, with population growth and ageing driving an increase in DALYs lost over time for the coming decades.Tractability: A moderately tractable solution in the form of sodium taxes is available. This is highly effective if and when implemented, but there is of course considerably uncertainty as to whether advocacy for taxes on food – which are highly unpopular – can succeed.Further DiscussionThis is a highly promising cause area that CEARCH will be conducting deeper research into, but it is important to note that early stage CEAs tend to be overoptimistic, and it is likely that this initial x300 GiveWell estimate will be revised downwards after more research and greater scrutiny, possibly extremely substantially (e.g. a one or two magnitude downgrade in cost-effectiveness).DALYs lost to hypertension have grown tremendously (43.5%) from 1990 to 2015, and it is certainly not just a rich world problem – over that same period, DALYs lost to hypertension in LMICs exploded (45% increase in high-middle income countries, 72% increase in middle income countries, 94% in low-middle income countries, and 86% in low income countries); and of the large countries, Bangladesh notably saw a fairly staggering near-tripling of DALYs lost.Growth in DALYs lost is driven not just by population growth and ageing, but also by urbanization and corresponding lifestyle changes (e.g. excessive dietary sodium, stress, sedentary lifestyle etc).Note that the analysis here does not model income effects from the tax (i.e. reduced purchasing power causing less consumption of healthy food) or substitution effects, whether positive (e.g. reducing sugar and fat consumption from food – such as junk food – that is high in not just salt but also sugar and fat) or negative (i.e. causing people to switch to low-salt high-sugar food or drinks); the analysis here also does not model the impact of industry reformulating food products in response to a sodium tax. My sense is that these balance out to some extent, but it is very hard to say.There is extremely high uncertainty over the calculations over how the problem will grow or shrink in the coming decades. This is certainly an area where expert advice and expert epidemiological modelling would be extremely valuable, and is something that CEARCH will pursue at deeper research stages.We underestimate the economic burden insofar as it focuses on the burden from hypertension (i.e. SBP of > 140 mm Hg) even though high systolic blood pressure (i.e. SBP of > 110-115 mm Hg) has adverse health consequences and presumably negative economic effects as well....]]>
Mon, 21 Nov 2022 11:08:28 +0000 EA - Shallow Report on Hypertension by Joel Tan (CEARCH) Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Report on Hypertension, published by Joel Tan (CEARCH) on November 21, 2022 on The Effective Altruism Forum.SummaryTaking into account the expected benefits of eliminating hypertension (i.e. improved health and greater economic output), as well as the tractability of sodium taxation policy advocacy, I find that the marginal expected value of sodium taxation policy advocacy to control hypertension to be 190,927 DALYs per USD 100,000, which is around 300x as cost-effective as giving to a GiveWell top charity.Key PointsImportance: This is a strongly important cause, with 1.28 1010 DALYs at stake from now to the indefinite future. Around 84% of the burden is health related, while 16% is economic in nature.Neglectedness: Whatever governments/charities/businesses are doing to solve this problem (e.g. labelling laws/providing low sodium food in food banks/developing new hypertension drugs) may be making a difference, since age-standardized DALYs lost are falling, but (a) attribution is hard, and structural factors (e.g. more educated populations eating and exercising better, or economic development expanding access to healthcare) will also be behind the decline; and (b) all this is insufficient all the same, with population growth and ageing driving an increase in DALYs lost over time for the coming decades.Tractability: A moderately tractable solution in the form of sodium taxes is available. This is highly effective if and when implemented, but there is of course considerably uncertainty as to whether advocacy for taxes on food – which are highly unpopular – can succeed.Further DiscussionThis is a highly promising cause area that CEARCH will be conducting deeper research into, but it is important to note that early stage CEAs tend to be overoptimistic, and it is likely that this initial x300 GiveWell estimate will be revised downwards after more research and greater scrutiny, possibly extremely substantially (e.g. a one or two magnitude downgrade in cost-effectiveness).DALYs lost to hypertension have grown tremendously (43.5%) from 1990 to 2015, and it is certainly not just a rich world problem – over that same period, DALYs lost to hypertension in LMICs exploded (45% increase in high-middle income countries, 72% increase in middle income countries, 94% in low-middle income countries, and 86% in low income countries); and of the large countries, Bangladesh notably saw a fairly staggering near-tripling of DALYs lost.Growth in DALYs lost is driven not just by population growth and ageing, but also by urbanization and corresponding lifestyle changes (e.g. excessive dietary sodium, stress, sedentary lifestyle etc).Note that the analysis here does not model income effects from the tax (i.e. reduced purchasing power causing less consumption of healthy food) or substitution effects, whether positive (e.g. reducing sugar and fat consumption from food – such as junk food – that is high in not just salt but also sugar and fat) or negative (i.e. causing people to switch to low-salt high-sugar food or drinks); the analysis here also does not model the impact of industry reformulating food products in response to a sodium tax. My sense is that these balance out to some extent, but it is very hard to say.There is extremely high uncertainty over the calculations over how the problem will grow or shrink in the coming decades. This is certainly an area where expert advice and expert epidemiological modelling would be extremely valuable, and is something that CEARCH will pursue at deeper research stages.We underestimate the economic burden insofar as it focuses on the burden from hypertension (i.e. SBP of > 140 mm Hg) even though high systolic blood pressure (i.e. SBP of > 110-115 mm Hg) has adverse health consequences and presumably negative economic effects as well....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Report on Hypertension, published by Joel Tan (CEARCH) on November 21, 2022 on The Effective Altruism Forum.SummaryTaking into account the expected benefits of eliminating hypertension (i.e. improved health and greater economic output), as well as the tractability of sodium taxation policy advocacy, I find that the marginal expected value of sodium taxation policy advocacy to control hypertension to be 190,927 DALYs per USD 100,000, which is around 300x as cost-effective as giving to a GiveWell top charity.Key PointsImportance: This is a strongly important cause, with 1.28 1010 DALYs at stake from now to the indefinite future. Around 84% of the burden is health related, while 16% is economic in nature.Neglectedness: Whatever governments/charities/businesses are doing to solve this problem (e.g. labelling laws/providing low sodium food in food banks/developing new hypertension drugs) may be making a difference, since age-standardized DALYs lost are falling, but (a) attribution is hard, and structural factors (e.g. more educated populations eating and exercising better, or economic development expanding access to healthcare) will also be behind the decline; and (b) all this is insufficient all the same, with population growth and ageing driving an increase in DALYs lost over time for the coming decades.Tractability: A moderately tractable solution in the form of sodium taxes is available. This is highly effective if and when implemented, but there is of course considerably uncertainty as to whether advocacy for taxes on food – which are highly unpopular – can succeed.Further DiscussionThis is a highly promising cause area that CEARCH will be conducting deeper research into, but it is important to note that early stage CEAs tend to be overoptimistic, and it is likely that this initial x300 GiveWell estimate will be revised downwards after more research and greater scrutiny, possibly extremely substantially (e.g. a one or two magnitude downgrade in cost-effectiveness).DALYs lost to hypertension have grown tremendously (43.5%) from 1990 to 2015, and it is certainly not just a rich world problem – over that same period, DALYs lost to hypertension in LMICs exploded (45% increase in high-middle income countries, 72% increase in middle income countries, 94% in low-middle income countries, and 86% in low income countries); and of the large countries, Bangladesh notably saw a fairly staggering near-tripling of DALYs lost.Growth in DALYs lost is driven not just by population growth and ageing, but also by urbanization and corresponding lifestyle changes (e.g. excessive dietary sodium, stress, sedentary lifestyle etc).Note that the analysis here does not model income effects from the tax (i.e. reduced purchasing power causing less consumption of healthy food) or substitution effects, whether positive (e.g. reducing sugar and fat consumption from food – such as junk food – that is high in not just salt but also sugar and fat) or negative (i.e. causing people to switch to low-salt high-sugar food or drinks); the analysis here also does not model the impact of industry reformulating food products in response to a sodium tax. My sense is that these balance out to some extent, but it is very hard to say.There is extremely high uncertainty over the calculations over how the problem will grow or shrink in the coming decades. This is certainly an area where expert advice and expert epidemiological modelling would be extremely valuable, and is something that CEARCH will pursue at deeper research stages.We underestimate the economic burden insofar as it focuses on the burden from hypertension (i.e. SBP of > 140 mm Hg) even though high systolic blood pressure (i.e. SBP of > 110-115 mm Hg) has adverse health consequences and presumably negative economic effects as well....]]>
Joel Tan (CEARCH) https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 27:59 None full 3844
XdZvCQoF3TsoWA4rZ_EA EA - Are you neglecting your health? by dotsam Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are you neglecting your health?, published by dotsam on November 20, 2022 on The Effective Altruism Forum.Do you take your physical and mental health seriously?If you are trying to secure a good future for humanity, make sure you are also working to secure a good future for your own body and mind too.Do you do regular exercise? If not (and you don't have a medical reason not to exercise) spend 5 minutes thinking of actionable ways you could start exercising more, make a plan, and get to work (you can literally do this right now).It doesn't have to be perfect. You can start very small and work up. Doing a little consistently is better than trying to do too much and then giving up. Plan to experiment to find what will work best for you, if you don't already know.Take another 5 minutes and do the same for the other areas that robustly affect your ability to live a healthy and long life - check in on your diet, your sleep patterns and your stress levels.Even if you don't much like yourself, even if you detest physical exercise, even if you are imperfect at keeping to routines or impulse control, and even if you were bad at sports as a kid, this is worth doing.The world is a better place with you in it. Be the kind of person who works actively to keep fit and healthy. It matters!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
dotsam https://forum.effectivealtruism.org/posts/XdZvCQoF3TsoWA4rZ/are-you-neglecting-your-health Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are you neglecting your health?, published by dotsam on November 20, 2022 on The Effective Altruism Forum.Do you take your physical and mental health seriously?If you are trying to secure a good future for humanity, make sure you are also working to secure a good future for your own body and mind too.Do you do regular exercise? If not (and you don't have a medical reason not to exercise) spend 5 minutes thinking of actionable ways you could start exercising more, make a plan, and get to work (you can literally do this right now).It doesn't have to be perfect. You can start very small and work up. Doing a little consistently is better than trying to do too much and then giving up. Plan to experiment to find what will work best for you, if you don't already know.Take another 5 minutes and do the same for the other areas that robustly affect your ability to live a healthy and long life - check in on your diet, your sleep patterns and your stress levels.Even if you don't much like yourself, even if you detest physical exercise, even if you are imperfect at keeping to routines or impulse control, and even if you were bad at sports as a kid, this is worth doing.The world is a better place with you in it. Be the kind of person who works actively to keep fit and healthy. It matters!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 21 Nov 2022 03:55:39 +0000 EA - Are you neglecting your health? by dotsam Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are you neglecting your health?, published by dotsam on November 20, 2022 on The Effective Altruism Forum.Do you take your physical and mental health seriously?If you are trying to secure a good future for humanity, make sure you are also working to secure a good future for your own body and mind too.Do you do regular exercise? If not (and you don't have a medical reason not to exercise) spend 5 minutes thinking of actionable ways you could start exercising more, make a plan, and get to work (you can literally do this right now).It doesn't have to be perfect. You can start very small and work up. Doing a little consistently is better than trying to do too much and then giving up. Plan to experiment to find what will work best for you, if you don't already know.Take another 5 minutes and do the same for the other areas that robustly affect your ability to live a healthy and long life - check in on your diet, your sleep patterns and your stress levels.Even if you don't much like yourself, even if you detest physical exercise, even if you are imperfect at keeping to routines or impulse control, and even if you were bad at sports as a kid, this is worth doing.The world is a better place with you in it. Be the kind of person who works actively to keep fit and healthy. It matters!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are you neglecting your health?, published by dotsam on November 20, 2022 on The Effective Altruism Forum.Do you take your physical and mental health seriously?If you are trying to secure a good future for humanity, make sure you are also working to secure a good future for your own body and mind too.Do you do regular exercise? If not (and you don't have a medical reason not to exercise) spend 5 minutes thinking of actionable ways you could start exercising more, make a plan, and get to work (you can literally do this right now).It doesn't have to be perfect. You can start very small and work up. Doing a little consistently is better than trying to do too much and then giving up. Plan to experiment to find what will work best for you, if you don't already know.Take another 5 minutes and do the same for the other areas that robustly affect your ability to live a healthy and long life - check in on your diet, your sleep patterns and your stress levels.Even if you don't much like yourself, even if you detest physical exercise, even if you are imperfect at keeping to routines or impulse control, and even if you were bad at sports as a kid, this is worth doing.The world is a better place with you in it. Be the kind of person who works actively to keep fit and healthy. It matters!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
dotsam https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:26 None full 3840
eWYQwbX895HnyuhhY_EA EA - The relative silence on FTX/SBF is likely the result of sound legal advice by Tyler Whitmer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The relative silence on FTX/SBF is likely the result of sound legal advice, published by Tyler Whitmer on November 21, 2022 on The Effective Altruism Forum.The purpose of this post is to explain why I think we should expect to hear very little, if anything, about the FTX/SBF fiasco from any EA public figures or institutions, and why I wouldn’t read anything into that other than that folks are receiving good legal advice and appropriately following it.A little about me and why I’m writing thisA little background on me, as this is my first real post here (gulp!) after lurking on and off for a long time. I recently resigned from the partnership of a big international law firm based in the US so I could take some time off. During my 16ish-year career as a big firm lawyer, my practice often focused on plaintiff-side restructuring-related litigation, including cases stemming from some of the biggest financial blow-ups in US history. Early in my career, I worked on litigation about deceptive lending practices at Enron. I then spent the better part of a decade working on litigation against the big money center banks by creditors of Lehman Brothers. I was also involved in litigation against the banks by the Federal Housing Finance Agency, after it took over Fannie Mae and Freddie Mac.I’ve been what I’ll call EA-adjacent for a long time, but I’ve recently gotten more involved. For the past few months during my time off, I’ve been meeting as many lawyers and other mid-career folks in the EA community as I can (hi to those I’ve met!), and I’ve pitched in on a few projects led by others. I was informed on November 2 that I would receive funding for my own project from the FTX Future Fund. I obviously don’t expect to see that funding now. I’m upset and sad about that. I’m really excited about the project, and while I can do some of it by donating my time to it, some of it really does require a bit of funding that I’m not sure I can get now. That sucks, and I’m still working out how I feel about it.Obligatory “this is not legal advice” throat clearingI’m not acting as anyone’s lawyer here, and I don’t want to (at least right now with respect to these issues). Nothing in this post is legal advice. This is just my two cents, off the top of my head, based entirely on my experience and no research at all.Why it’s best for folks not to comment on the FTX/SFB situation, and not just for their own sakeI don’t know anything about the FTX/SBF situation other than what’s in the public record, which I’ve been following closely as it develops. My thoughts here are inspired a bit by this post by Shakeel Hashim and this one by Holden Karnofsky, and the comments to each, but it’s also a response more generally to frustration I've seen here and on Twitter about the fact that many public figure EAs and senior folks at EA institutions are not directly addressing the FTX/SBF situation as much as some would like.Lawyers almost always advise both individual clients and folks representing client entities not to speak to anyone, including friends and family, about ongoing litigation or any facts and circumstances likely to lead to litigation, including bankruptcy proceedings. People often feel frustrated by being “muzzled” in this way—especially where a narrative is establishing itself publicly that they see as casting them in a negative light. I suspect many are feeling that way right now about FTX/SBF and how the press is reporting on it. But smart people will continue to follow their lawyers’ advice. There are good reasons for this, including many that go beyond self-interest.I’ve seen most of the self-interest angles addressed elsewhere, but I’ll say a bit about it just to drive the point home. Being involved in litigation, even as a totally blameless witness—or even a perceived witness who...]]>
Tyler Whitmer https://forum.effectivealtruism.org/posts/eWYQwbX895HnyuhhY/the-relative-silence-on-ftx-sbf-is-likely-the-result-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The relative silence on FTX/SBF is likely the result of sound legal advice, published by Tyler Whitmer on November 21, 2022 on The Effective Altruism Forum.The purpose of this post is to explain why I think we should expect to hear very little, if anything, about the FTX/SBF fiasco from any EA public figures or institutions, and why I wouldn’t read anything into that other than that folks are receiving good legal advice and appropriately following it.A little about me and why I’m writing thisA little background on me, as this is my first real post here (gulp!) after lurking on and off for a long time. I recently resigned from the partnership of a big international law firm based in the US so I could take some time off. During my 16ish-year career as a big firm lawyer, my practice often focused on plaintiff-side restructuring-related litigation, including cases stemming from some of the biggest financial blow-ups in US history. Early in my career, I worked on litigation about deceptive lending practices at Enron. I then spent the better part of a decade working on litigation against the big money center banks by creditors of Lehman Brothers. I was also involved in litigation against the banks by the Federal Housing Finance Agency, after it took over Fannie Mae and Freddie Mac.I’ve been what I’ll call EA-adjacent for a long time, but I’ve recently gotten more involved. For the past few months during my time off, I’ve been meeting as many lawyers and other mid-career folks in the EA community as I can (hi to those I’ve met!), and I’ve pitched in on a few projects led by others. I was informed on November 2 that I would receive funding for my own project from the FTX Future Fund. I obviously don’t expect to see that funding now. I’m upset and sad about that. I’m really excited about the project, and while I can do some of it by donating my time to it, some of it really does require a bit of funding that I’m not sure I can get now. That sucks, and I’m still working out how I feel about it.Obligatory “this is not legal advice” throat clearingI’m not acting as anyone’s lawyer here, and I don’t want to (at least right now with respect to these issues). Nothing in this post is legal advice. This is just my two cents, off the top of my head, based entirely on my experience and no research at all.Why it’s best for folks not to comment on the FTX/SFB situation, and not just for their own sakeI don’t know anything about the FTX/SBF situation other than what’s in the public record, which I’ve been following closely as it develops. My thoughts here are inspired a bit by this post by Shakeel Hashim and this one by Holden Karnofsky, and the comments to each, but it’s also a response more generally to frustration I've seen here and on Twitter about the fact that many public figure EAs and senior folks at EA institutions are not directly addressing the FTX/SBF situation as much as some would like.Lawyers almost always advise both individual clients and folks representing client entities not to speak to anyone, including friends and family, about ongoing litigation or any facts and circumstances likely to lead to litigation, including bankruptcy proceedings. People often feel frustrated by being “muzzled” in this way—especially where a narrative is establishing itself publicly that they see as casting them in a negative light. I suspect many are feeling that way right now about FTX/SBF and how the press is reporting on it. But smart people will continue to follow their lawyers’ advice. There are good reasons for this, including many that go beyond self-interest.I’ve seen most of the self-interest angles addressed elsewhere, but I’ll say a bit about it just to drive the point home. Being involved in litigation, even as a totally blameless witness—or even a perceived witness who...]]>
Mon, 21 Nov 2022 02:45:41 +0000 EA - The relative silence on FTX/SBF is likely the result of sound legal advice by Tyler Whitmer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The relative silence on FTX/SBF is likely the result of sound legal advice, published by Tyler Whitmer on November 21, 2022 on The Effective Altruism Forum.The purpose of this post is to explain why I think we should expect to hear very little, if anything, about the FTX/SBF fiasco from any EA public figures or institutions, and why I wouldn’t read anything into that other than that folks are receiving good legal advice and appropriately following it.A little about me and why I’m writing thisA little background on me, as this is my first real post here (gulp!) after lurking on and off for a long time. I recently resigned from the partnership of a big international law firm based in the US so I could take some time off. During my 16ish-year career as a big firm lawyer, my practice often focused on plaintiff-side restructuring-related litigation, including cases stemming from some of the biggest financial blow-ups in US history. Early in my career, I worked on litigation about deceptive lending practices at Enron. I then spent the better part of a decade working on litigation against the big money center banks by creditors of Lehman Brothers. I was also involved in litigation against the banks by the Federal Housing Finance Agency, after it took over Fannie Mae and Freddie Mac.I’ve been what I’ll call EA-adjacent for a long time, but I’ve recently gotten more involved. For the past few months during my time off, I’ve been meeting as many lawyers and other mid-career folks in the EA community as I can (hi to those I’ve met!), and I’ve pitched in on a few projects led by others. I was informed on November 2 that I would receive funding for my own project from the FTX Future Fund. I obviously don’t expect to see that funding now. I’m upset and sad about that. I’m really excited about the project, and while I can do some of it by donating my time to it, some of it really does require a bit of funding that I’m not sure I can get now. That sucks, and I’m still working out how I feel about it.Obligatory “this is not legal advice” throat clearingI’m not acting as anyone’s lawyer here, and I don’t want to (at least right now with respect to these issues). Nothing in this post is legal advice. This is just my two cents, off the top of my head, based entirely on my experience and no research at all.Why it’s best for folks not to comment on the FTX/SFB situation, and not just for their own sakeI don’t know anything about the FTX/SBF situation other than what’s in the public record, which I’ve been following closely as it develops. My thoughts here are inspired a bit by this post by Shakeel Hashim and this one by Holden Karnofsky, and the comments to each, but it’s also a response more generally to frustration I've seen here and on Twitter about the fact that many public figure EAs and senior folks at EA institutions are not directly addressing the FTX/SBF situation as much as some would like.Lawyers almost always advise both individual clients and folks representing client entities not to speak to anyone, including friends and family, about ongoing litigation or any facts and circumstances likely to lead to litigation, including bankruptcy proceedings. People often feel frustrated by being “muzzled” in this way—especially where a narrative is establishing itself publicly that they see as casting them in a negative light. I suspect many are feeling that way right now about FTX/SBF and how the press is reporting on it. But smart people will continue to follow their lawyers’ advice. There are good reasons for this, including many that go beyond self-interest.I’ve seen most of the self-interest angles addressed elsewhere, but I’ll say a bit about it just to drive the point home. Being involved in litigation, even as a totally blameless witness—or even a perceived witness who...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The relative silence on FTX/SBF is likely the result of sound legal advice, published by Tyler Whitmer on November 21, 2022 on The Effective Altruism Forum.The purpose of this post is to explain why I think we should expect to hear very little, if anything, about the FTX/SBF fiasco from any EA public figures or institutions, and why I wouldn’t read anything into that other than that folks are receiving good legal advice and appropriately following it.A little about me and why I’m writing thisA little background on me, as this is my first real post here (gulp!) after lurking on and off for a long time. I recently resigned from the partnership of a big international law firm based in the US so I could take some time off. During my 16ish-year career as a big firm lawyer, my practice often focused on plaintiff-side restructuring-related litigation, including cases stemming from some of the biggest financial blow-ups in US history. Early in my career, I worked on litigation about deceptive lending practices at Enron. I then spent the better part of a decade working on litigation against the big money center banks by creditors of Lehman Brothers. I was also involved in litigation against the banks by the Federal Housing Finance Agency, after it took over Fannie Mae and Freddie Mac.I’ve been what I’ll call EA-adjacent for a long time, but I’ve recently gotten more involved. For the past few months during my time off, I’ve been meeting as many lawyers and other mid-career folks in the EA community as I can (hi to those I’ve met!), and I’ve pitched in on a few projects led by others. I was informed on November 2 that I would receive funding for my own project from the FTX Future Fund. I obviously don’t expect to see that funding now. I’m upset and sad about that. I’m really excited about the project, and while I can do some of it by donating my time to it, some of it really does require a bit of funding that I’m not sure I can get now. That sucks, and I’m still working out how I feel about it.Obligatory “this is not legal advice” throat clearingI’m not acting as anyone’s lawyer here, and I don’t want to (at least right now with respect to these issues). Nothing in this post is legal advice. This is just my two cents, off the top of my head, based entirely on my experience and no research at all.Why it’s best for folks not to comment on the FTX/SFB situation, and not just for their own sakeI don’t know anything about the FTX/SBF situation other than what’s in the public record, which I’ve been following closely as it develops. My thoughts here are inspired a bit by this post by Shakeel Hashim and this one by Holden Karnofsky, and the comments to each, but it’s also a response more generally to frustration I've seen here and on Twitter about the fact that many public figure EAs and senior folks at EA institutions are not directly addressing the FTX/SBF situation as much as some would like.Lawyers almost always advise both individual clients and folks representing client entities not to speak to anyone, including friends and family, about ongoing litigation or any facts and circumstances likely to lead to litigation, including bankruptcy proceedings. People often feel frustrated by being “muzzled” in this way—especially where a narrative is establishing itself publicly that they see as casting them in a negative light. I suspect many are feeling that way right now about FTX/SBF and how the press is reporting on it. But smart people will continue to follow their lawyers’ advice. There are good reasons for this, including many that go beyond self-interest.I’ve seen most of the self-interest angles addressed elsewhere, but I’ll say a bit about it just to drive the point home. Being involved in litigation, even as a totally blameless witness—or even a perceived witness who...]]>
Tyler Whitmer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:23 None full 3839
sD4kdobiRaBpxcL8M_EA EA - What happened to the "Women and Effective Altruism" post? by Cornelis Dirk Haupt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What happened to the "Women and Effective Altruism" post?, published by Cornelis Dirk Haupt on November 20, 2022 on The Effective Altruism Forum.There was a post up some time last week I wanted to read called Women and Effective Altruism. Later last week I noticed the page seemed down and now I just get "Sorry, you don't have access to this page." I was wondering why that was. My 60 second glance at the two top comments gave me the impression that the post stimulated healthy discourse among EA women on their experiences in EA.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Cornelis Dirk Haupt https://forum.effectivealtruism.org/posts/sD4kdobiRaBpxcL8M/what-happened-to-the-women-and-effective-altruism-post Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What happened to the "Women and Effective Altruism" post?, published by Cornelis Dirk Haupt on November 20, 2022 on The Effective Altruism Forum.There was a post up some time last week I wanted to read called Women and Effective Altruism. Later last week I noticed the page seemed down and now I just get "Sorry, you don't have access to this page." I was wondering why that was. My 60 second glance at the two top comments gave me the impression that the post stimulated healthy discourse among EA women on their experiences in EA.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 20 Nov 2022 22:00:55 +0000 EA - What happened to the "Women and Effective Altruism" post? by Cornelis Dirk Haupt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What happened to the "Women and Effective Altruism" post?, published by Cornelis Dirk Haupt on November 20, 2022 on The Effective Altruism Forum.There was a post up some time last week I wanted to read called Women and Effective Altruism. Later last week I noticed the page seemed down and now I just get "Sorry, you don't have access to this page." I was wondering why that was. My 60 second glance at the two top comments gave me the impression that the post stimulated healthy discourse among EA women on their experiences in EA.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What happened to the "Women and Effective Altruism" post?, published by Cornelis Dirk Haupt on November 20, 2022 on The Effective Altruism Forum.There was a post up some time last week I wanted to read called Women and Effective Altruism. Later last week I noticed the page seemed down and now I just get "Sorry, you don't have access to this page." I was wondering why that was. My 60 second glance at the two top comments gave me the impression that the post stimulated healthy discourse among EA women on their experiences in EA.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Cornelis Dirk Haupt https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:47 None full 3830
iZxY6QqTSQm2afqyq_EA EA - Some data on the stock of EAâ„¢ funding by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some data on the stock of EA™ funding, published by NunoSempere on November 20, 2022 on The Effective Altruism Forum.Overall Open Philanthropy fundingOpen Philanthropy’s allocation of funding through time looks as follows:Dustin Moskovitz’s wealth looks, per Bloomberg, like this:If we plot the two together, we don’t see that much of a correlation:Holden Karnofsky, head of Open Philanthropy, writes that the Blomberg estimates might not be all that accurate:Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna’s net worth give a substantially understated picture of our available resources. That’s because, among other issues, they don’t include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume)In mid 2022, Forbes put Sam Bankman-Fried’s wealth at $24B. So in some sense, the amount of money allocated to or according to Effective Altruism™ peaked somewhere close to $50B.Funding flow restricted to longtermism & global catatrophic risks (GCRs)The analysis becomes a bit more interesting if we look only at longtermism and GCRs:In contrast, per Forbes,[^1] the FTX Foundation had given out $160M by September 2022. My sense is that most (say, maybe 50% to 80%) of those grants went to “longtermist” cause areas, broadly defined. In addition, SBF and other FTX employees led a $580M funding round for AnthropicFurther analysisIt’s unclear what would have to happen for Open Philanthropy to pick up the slack here. In practical terms, I’m not sure whether their team has enough evaluation capacity for an additional $100M/year, or whether they will choose to expand that.Two somewhat informative posts from Open Philanthropy on this are here and hereI’d be curious about both interpretative analysis and forecasting on these numbers. I am up for supporting the later by e.g., committing to rerunning this analysis in a year.Appendix: CodeThe code to produce these plots can be found here; lines 42 to 48 make the division into categories fairly apparent. To execute this code you will need a working R installation and a document named grants.csv, which can be downloaded from Open Philanthropy’s website.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
NunoSempere https://forum.effectivealtruism.org/posts/iZxY6QqTSQm2afqyq/some-data-on-the-stock-of-ea-tm-funding Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some data on the stock of EA™ funding, published by NunoSempere on November 20, 2022 on The Effective Altruism Forum.Overall Open Philanthropy fundingOpen Philanthropy’s allocation of funding through time looks as follows:Dustin Moskovitz’s wealth looks, per Bloomberg, like this:If we plot the two together, we don’t see that much of a correlation:Holden Karnofsky, head of Open Philanthropy, writes that the Blomberg estimates might not be all that accurate:Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna’s net worth give a substantially understated picture of our available resources. That’s because, among other issues, they don’t include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume)In mid 2022, Forbes put Sam Bankman-Fried’s wealth at $24B. So in some sense, the amount of money allocated to or according to Effective Altruism™ peaked somewhere close to $50B.Funding flow restricted to longtermism & global catatrophic risks (GCRs)The analysis becomes a bit more interesting if we look only at longtermism and GCRs:In contrast, per Forbes,[^1] the FTX Foundation had given out $160M by September 2022. My sense is that most (say, maybe 50% to 80%) of those grants went to “longtermist” cause areas, broadly defined. In addition, SBF and other FTX employees led a $580M funding round for AnthropicFurther analysisIt’s unclear what would have to happen for Open Philanthropy to pick up the slack here. In practical terms, I’m not sure whether their team has enough evaluation capacity for an additional $100M/year, or whether they will choose to expand that.Two somewhat informative posts from Open Philanthropy on this are here and hereI’d be curious about both interpretative analysis and forecasting on these numbers. I am up for supporting the later by e.g., committing to rerunning this analysis in a year.Appendix: CodeThe code to produce these plots can be found here; lines 42 to 48 make the division into categories fairly apparent. To execute this code you will need a working R installation and a document named grants.csv, which can be downloaded from Open Philanthropy’s website.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 20 Nov 2022 15:27:06 +0000 EA - Some data on the stock of EAâ„¢ funding by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some data on the stock of EA™ funding, published by NunoSempere on November 20, 2022 on The Effective Altruism Forum.Overall Open Philanthropy fundingOpen Philanthropy’s allocation of funding through time looks as follows:Dustin Moskovitz’s wealth looks, per Bloomberg, like this:If we plot the two together, we don’t see that much of a correlation:Holden Karnofsky, head of Open Philanthropy, writes that the Blomberg estimates might not be all that accurate:Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna’s net worth give a substantially understated picture of our available resources. That’s because, among other issues, they don’t include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume)In mid 2022, Forbes put Sam Bankman-Fried’s wealth at $24B. So in some sense, the amount of money allocated to or according to Effective Altruism™ peaked somewhere close to $50B.Funding flow restricted to longtermism & global catatrophic risks (GCRs)The analysis becomes a bit more interesting if we look only at longtermism and GCRs:In contrast, per Forbes,[^1] the FTX Foundation had given out $160M by September 2022. My sense is that most (say, maybe 50% to 80%) of those grants went to “longtermist” cause areas, broadly defined. In addition, SBF and other FTX employees led a $580M funding round for AnthropicFurther analysisIt’s unclear what would have to happen for Open Philanthropy to pick up the slack here. In practical terms, I’m not sure whether their team has enough evaluation capacity for an additional $100M/year, or whether they will choose to expand that.Two somewhat informative posts from Open Philanthropy on this are here and hereI’d be curious about both interpretative analysis and forecasting on these numbers. I am up for supporting the later by e.g., committing to rerunning this analysis in a year.Appendix: CodeThe code to produce these plots can be found here; lines 42 to 48 make the division into categories fairly apparent. To execute this code you will need a working R installation and a document named grants.csv, which can be downloaded from Open Philanthropy’s website.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some data on the stock of EA™ funding, published by NunoSempere on November 20, 2022 on The Effective Altruism Forum.Overall Open Philanthropy fundingOpen Philanthropy’s allocation of funding through time looks as follows:Dustin Moskovitz’s wealth looks, per Bloomberg, like this:If we plot the two together, we don’t see that much of a correlation:Holden Karnofsky, head of Open Philanthropy, writes that the Blomberg estimates might not be all that accurate:Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna’s net worth give a substantially understated picture of our available resources. That’s because, among other issues, they don’t include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume)In mid 2022, Forbes put Sam Bankman-Fried’s wealth at $24B. So in some sense, the amount of money allocated to or according to Effective Altruism™ peaked somewhere close to $50B.Funding flow restricted to longtermism & global catatrophic risks (GCRs)The analysis becomes a bit more interesting if we look only at longtermism and GCRs:In contrast, per Forbes,[^1] the FTX Foundation had given out $160M by September 2022. My sense is that most (say, maybe 50% to 80%) of those grants went to “longtermist” cause areas, broadly defined. In addition, SBF and other FTX employees led a $580M funding round for AnthropicFurther analysisIt’s unclear what would have to happen for Open Philanthropy to pick up the slack here. In practical terms, I’m not sure whether their team has enough evaluation capacity for an additional $100M/year, or whether they will choose to expand that.Two somewhat informative posts from Open Philanthropy on this are here and hereI’d be curious about both interpretative analysis and forecasting on these numbers. I am up for supporting the later by e.g., committing to rerunning this analysis in a year.Appendix: CodeThe code to produce these plots can be found here; lines 42 to 48 make the division into categories fairly apparent. To execute this code you will need a working R installation and a document named grants.csv, which can be downloaded from Open Philanthropy’s website.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
NunoSempere https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:38 None full 3831
rEHGbC6cAiMGBbB2o_EA EA - On EA messaging - being a doctor in a poorer country by Luke Eure Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On EA messaging - being a doctor in a poorer country, published by Luke Eure on November 20, 2022 on The Effective Altruism Forum.EA has growing communities in low and middle-income countries. Off the top of my head: Philippines, Nigeria, South Africa, Kenya.This is very good. It also means that EA orgs and communicators should move away from "western-by-default" messaging and thinking.Here's an instance of how assuming readers are all western can lead to wrong, or at least incompletely thought-through, conclusions.EA thinking tends to lead people in the US/UK away from being doctors. I've heard "don't be a doctor if you care about people" used as a shorthand for the at-first-counterintuitive recommendations EA can sometimes give.But, in poorer countries, it might be reasonable on EA grounds to be a doctor:There is a shortage of doctors in countries like Kenya -> higher direct impactEarning opportunities are much lower -> lower impact through counterfactual optionsIt may be that it still nets out that being a doctor is not the best career choice for people in poorer countries. Seems pretty uncertain, because I don’t think anyone has thought about it in detail. In the 80k hours post, the top-level recommendation "people likely to succeed at medical school admission could have a greater impact outside medicine" is only backed up with evidence from US/UK.Takeaway: As EA attracts people from all over the world, we need to move away from “western-by-default” communications to ensure people get the correct, and correctly-reasoned, advice.In many circumstances it will still make sense to focus on a western audience for different types of communication. But this should be a decision based on the specifics of what you're communicating, not an unthinking default based on "all EAs are western"Quick back of the envelope on this based on stats from an 80k interview with Gregory Lewis:Assume that over a UK career, a doctor saves saves 6 lives. You could do the same by donating ~$30k (assuming $5k per life saved)He says it could be 10x in a developing country (here he says it could be more like 40-50x). This would require donating $300k to offsetA very very good job in Kenya would be earning $50k / year. Someone making that much and donating 10% would not donate $300k over their lifetime (unless they worked for 60 years)> being a doctor looks pretty reasonable for someone in Kenya, relative to earning to giveThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Luke Eure https://forum.effectivealtruism.org/posts/rEHGbC6cAiMGBbB2o/on-ea-messaging-being-a-doctor-in-a-poorer-country Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On EA messaging - being a doctor in a poorer country, published by Luke Eure on November 20, 2022 on The Effective Altruism Forum.EA has growing communities in low and middle-income countries. Off the top of my head: Philippines, Nigeria, South Africa, Kenya.This is very good. It also means that EA orgs and communicators should move away from "western-by-default" messaging and thinking.Here's an instance of how assuming readers are all western can lead to wrong, or at least incompletely thought-through, conclusions.EA thinking tends to lead people in the US/UK away from being doctors. I've heard "don't be a doctor if you care about people" used as a shorthand for the at-first-counterintuitive recommendations EA can sometimes give.But, in poorer countries, it might be reasonable on EA grounds to be a doctor:There is a shortage of doctors in countries like Kenya -> higher direct impactEarning opportunities are much lower -> lower impact through counterfactual optionsIt may be that it still nets out that being a doctor is not the best career choice for people in poorer countries. Seems pretty uncertain, because I don’t think anyone has thought about it in detail. In the 80k hours post, the top-level recommendation "people likely to succeed at medical school admission could have a greater impact outside medicine" is only backed up with evidence from US/UK.Takeaway: As EA attracts people from all over the world, we need to move away from “western-by-default” communications to ensure people get the correct, and correctly-reasoned, advice.In many circumstances it will still make sense to focus on a western audience for different types of communication. But this should be a decision based on the specifics of what you're communicating, not an unthinking default based on "all EAs are western"Quick back of the envelope on this based on stats from an 80k interview with Gregory Lewis:Assume that over a UK career, a doctor saves saves 6 lives. You could do the same by donating ~$30k (assuming $5k per life saved)He says it could be 10x in a developing country (here he says it could be more like 40-50x). This would require donating $300k to offsetA very very good job in Kenya would be earning $50k / year. Someone making that much and donating 10% would not donate $300k over their lifetime (unless they worked for 60 years)> being a doctor looks pretty reasonable for someone in Kenya, relative to earning to giveThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 20 Nov 2022 12:27:55 +0000 EA - On EA messaging - being a doctor in a poorer country by Luke Eure Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On EA messaging - being a doctor in a poorer country, published by Luke Eure on November 20, 2022 on The Effective Altruism Forum.EA has growing communities in low and middle-income countries. Off the top of my head: Philippines, Nigeria, South Africa, Kenya.This is very good. It also means that EA orgs and communicators should move away from "western-by-default" messaging and thinking.Here's an instance of how assuming readers are all western can lead to wrong, or at least incompletely thought-through, conclusions.EA thinking tends to lead people in the US/UK away from being doctors. I've heard "don't be a doctor if you care about people" used as a shorthand for the at-first-counterintuitive recommendations EA can sometimes give.But, in poorer countries, it might be reasonable on EA grounds to be a doctor:There is a shortage of doctors in countries like Kenya -> higher direct impactEarning opportunities are much lower -> lower impact through counterfactual optionsIt may be that it still nets out that being a doctor is not the best career choice for people in poorer countries. Seems pretty uncertain, because I don’t think anyone has thought about it in detail. In the 80k hours post, the top-level recommendation "people likely to succeed at medical school admission could have a greater impact outside medicine" is only backed up with evidence from US/UK.Takeaway: As EA attracts people from all over the world, we need to move away from “western-by-default” communications to ensure people get the correct, and correctly-reasoned, advice.In many circumstances it will still make sense to focus on a western audience for different types of communication. But this should be a decision based on the specifics of what you're communicating, not an unthinking default based on "all EAs are western"Quick back of the envelope on this based on stats from an 80k interview with Gregory Lewis:Assume that over a UK career, a doctor saves saves 6 lives. You could do the same by donating ~$30k (assuming $5k per life saved)He says it could be 10x in a developing country (here he says it could be more like 40-50x). This would require donating $300k to offsetA very very good job in Kenya would be earning $50k / year. Someone making that much and donating 10% would not donate $300k over their lifetime (unless they worked for 60 years)> being a doctor looks pretty reasonable for someone in Kenya, relative to earning to giveThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On EA messaging - being a doctor in a poorer country, published by Luke Eure on November 20, 2022 on The Effective Altruism Forum.EA has growing communities in low and middle-income countries. Off the top of my head: Philippines, Nigeria, South Africa, Kenya.This is very good. It also means that EA orgs and communicators should move away from "western-by-default" messaging and thinking.Here's an instance of how assuming readers are all western can lead to wrong, or at least incompletely thought-through, conclusions.EA thinking tends to lead people in the US/UK away from being doctors. I've heard "don't be a doctor if you care about people" used as a shorthand for the at-first-counterintuitive recommendations EA can sometimes give.But, in poorer countries, it might be reasonable on EA grounds to be a doctor:There is a shortage of doctors in countries like Kenya -> higher direct impactEarning opportunities are much lower -> lower impact through counterfactual optionsIt may be that it still nets out that being a doctor is not the best career choice for people in poorer countries. Seems pretty uncertain, because I don’t think anyone has thought about it in detail. In the 80k hours post, the top-level recommendation "people likely to succeed at medical school admission could have a greater impact outside medicine" is only backed up with evidence from US/UK.Takeaway: As EA attracts people from all over the world, we need to move away from “western-by-default” communications to ensure people get the correct, and correctly-reasoned, advice.In many circumstances it will still make sense to focus on a western audience for different types of communication. But this should be a decision based on the specifics of what you're communicating, not an unthinking default based on "all EAs are western"Quick back of the envelope on this based on stats from an 80k interview with Gregory Lewis:Assume that over a UK career, a doctor saves saves 6 lives. You could do the same by donating ~$30k (assuming $5k per life saved)He says it could be 10x in a developing country (here he says it could be more like 40-50x). This would require donating $300k to offsetA very very good job in Kenya would be earning $50k / year. Someone making that much and donating 10% would not donate $300k over their lifetime (unless they worked for 60 years)> being a doctor looks pretty reasonable for someone in Kenya, relative to earning to giveThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Luke Eure https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:37 None full 3832
qLPtqEqsadiBSK33b_EA EA - EA Organization Updates: November 2022 by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: November 2022, published by Lizka on November 19, 2022 on The Effective Altruism Forum.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have extremely pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series.The organizations are in alphabetical order.Note also: Why you’re not hearing as much from EA orgs as you’d like.AnnouncementsThe EA survey is out! Consider taking 10 minutes to complete it (regardless of your level of involvement with effective altruism).Applications are open for EAGxBerkeley (2-4 December 2022). The conference will be bringing together people interested in effective altruism from across the United States, particularly those who are looking to build their network and engage more deeply with EA concepts.Job listingsConsider also exploring jobs listed on “Job listing (open).”Animal Advocacy Careers:Researcher and M&E Manager (12-month contract, Remote, apply by 20 November)Clinton Health Access Initiative:Technical Advisor, Maximum Impact Incubator (Remote)Program Manager, Maximum Impact Incubator (Remote)Program Associate, Maximum Impact Incubator (Remote)Effective Institutions Project:Spring and Summer Research Fellows (Remote, apply by December 4)Effective Ventures Operations:Office Manager at the Harvard Square EA Office (Cambridge, MA)EpochResearch Data Analyst (12-month contract, remote, apply by 14 December)Fish Welfare Initiative:International Generalist (Partially in India and partially remote)Project Design & Research Manager (India preferred / remote possible)Program Coordinator (India)Family Empowerment MediaHead of Research (Remote)Director of Development (Remote)Founders Pledge:Senior Researcher/Grantmaker (Climate) (Remote)Senior Advisor (Remote)Accountant (RemoteGiveDirectlyCountry Directors (Liberia, Mozambique, & Nigeria)Data Scientist (Remote)Senior UX/UI Designer (Remote)GiveWell:Research Analyst (Remote / Oakland, CA; apply by 20 November)Senior Researcher (Remote / Oakland, CA)Other jobs on our research team or content editing team (Remote / Oakland, CA)Global Priorities Institute:Research Fellows/Senior Research Fellows in Philosophy (Oxford, apply by 6 December)Research Fellows/Senior Research Fellows in Economics (Oxford, apply by 6 December)Predoctoral Research Fellows in Economics (Oxford, apply by 6 January)IDinsight:India Regional Director (India)Associate Director/Director (Philippines)Junior Data Engineer Francophone (Morocco)Associates and Senior Associates (Multiple locations)Longview Philanthropy:Head of Events Production (London, apply by 18 December)Sol Head of Office Operations (London, apply by 11 December)Sol Office Operations Associate (London, apply by 11 December)Open Philanthropy:Lead Researcher (Remote)Program Operations Assistant, Global Health & Wellbeing (Remote, US working hours)Probably Good:Multiple positions in operations, growth, and community management (Remote)Organizational updatesThese are in alphabetical order.80,000 HoursThis month, 80,000 Hours launched their updated job board, which now has a search function, improved job filtering, and an email alert system.They also released Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities? and shared seven new problem profiles:Immigration restrictionsPreven...]]>
Lizka https://forum.effectivealtruism.org/posts/qLPtqEqsadiBSK33b/ea-organization-updates-november-2022-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: November 2022, published by Lizka on November 19, 2022 on The Effective Altruism Forum.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have extremely pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series.The organizations are in alphabetical order.Note also: Why you’re not hearing as much from EA orgs as you’d like.AnnouncementsThe EA survey is out! Consider taking 10 minutes to complete it (regardless of your level of involvement with effective altruism).Applications are open for EAGxBerkeley (2-4 December 2022). The conference will be bringing together people interested in effective altruism from across the United States, particularly those who are looking to build their network and engage more deeply with EA concepts.Job listingsConsider also exploring jobs listed on “Job listing (open).”Animal Advocacy Careers:Researcher and M&E Manager (12-month contract, Remote, apply by 20 November)Clinton Health Access Initiative:Technical Advisor, Maximum Impact Incubator (Remote)Program Manager, Maximum Impact Incubator (Remote)Program Associate, Maximum Impact Incubator (Remote)Effective Institutions Project:Spring and Summer Research Fellows (Remote, apply by December 4)Effective Ventures Operations:Office Manager at the Harvard Square EA Office (Cambridge, MA)EpochResearch Data Analyst (12-month contract, remote, apply by 14 December)Fish Welfare Initiative:International Generalist (Partially in India and partially remote)Project Design & Research Manager (India preferred / remote possible)Program Coordinator (India)Family Empowerment MediaHead of Research (Remote)Director of Development (Remote)Founders Pledge:Senior Researcher/Grantmaker (Climate) (Remote)Senior Advisor (Remote)Accountant (RemoteGiveDirectlyCountry Directors (Liberia, Mozambique, & Nigeria)Data Scientist (Remote)Senior UX/UI Designer (Remote)GiveWell:Research Analyst (Remote / Oakland, CA; apply by 20 November)Senior Researcher (Remote / Oakland, CA)Other jobs on our research team or content editing team (Remote / Oakland, CA)Global Priorities Institute:Research Fellows/Senior Research Fellows in Philosophy (Oxford, apply by 6 December)Research Fellows/Senior Research Fellows in Economics (Oxford, apply by 6 December)Predoctoral Research Fellows in Economics (Oxford, apply by 6 January)IDinsight:India Regional Director (India)Associate Director/Director (Philippines)Junior Data Engineer Francophone (Morocco)Associates and Senior Associates (Multiple locations)Longview Philanthropy:Head of Events Production (London, apply by 18 December)Sol Head of Office Operations (London, apply by 11 December)Sol Office Operations Associate (London, apply by 11 December)Open Philanthropy:Lead Researcher (Remote)Program Operations Assistant, Global Health & Wellbeing (Remote, US working hours)Probably Good:Multiple positions in operations, growth, and community management (Remote)Organizational updatesThese are in alphabetical order.80,000 HoursThis month, 80,000 Hours launched their updated job board, which now has a search function, improved job filtering, and an email alert system.They also released Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities? and shared seven new problem profiles:Immigration restrictionsPreven...]]>
Sat, 19 Nov 2022 22:44:12 +0000 EA - EA Organization Updates: November 2022 by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: November 2022, published by Lizka on November 19, 2022 on The Effective Altruism Forum.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have extremely pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series.The organizations are in alphabetical order.Note also: Why you’re not hearing as much from EA orgs as you’d like.AnnouncementsThe EA survey is out! Consider taking 10 minutes to complete it (regardless of your level of involvement with effective altruism).Applications are open for EAGxBerkeley (2-4 December 2022). The conference will be bringing together people interested in effective altruism from across the United States, particularly those who are looking to build their network and engage more deeply with EA concepts.Job listingsConsider also exploring jobs listed on “Job listing (open).”Animal Advocacy Careers:Researcher and M&E Manager (12-month contract, Remote, apply by 20 November)Clinton Health Access Initiative:Technical Advisor, Maximum Impact Incubator (Remote)Program Manager, Maximum Impact Incubator (Remote)Program Associate, Maximum Impact Incubator (Remote)Effective Institutions Project:Spring and Summer Research Fellows (Remote, apply by December 4)Effective Ventures Operations:Office Manager at the Harvard Square EA Office (Cambridge, MA)EpochResearch Data Analyst (12-month contract, remote, apply by 14 December)Fish Welfare Initiative:International Generalist (Partially in India and partially remote)Project Design & Research Manager (India preferred / remote possible)Program Coordinator (India)Family Empowerment MediaHead of Research (Remote)Director of Development (Remote)Founders Pledge:Senior Researcher/Grantmaker (Climate) (Remote)Senior Advisor (Remote)Accountant (RemoteGiveDirectlyCountry Directors (Liberia, Mozambique, & Nigeria)Data Scientist (Remote)Senior UX/UI Designer (Remote)GiveWell:Research Analyst (Remote / Oakland, CA; apply by 20 November)Senior Researcher (Remote / Oakland, CA)Other jobs on our research team or content editing team (Remote / Oakland, CA)Global Priorities Institute:Research Fellows/Senior Research Fellows in Philosophy (Oxford, apply by 6 December)Research Fellows/Senior Research Fellows in Economics (Oxford, apply by 6 December)Predoctoral Research Fellows in Economics (Oxford, apply by 6 January)IDinsight:India Regional Director (India)Associate Director/Director (Philippines)Junior Data Engineer Francophone (Morocco)Associates and Senior Associates (Multiple locations)Longview Philanthropy:Head of Events Production (London, apply by 18 December)Sol Head of Office Operations (London, apply by 11 December)Sol Office Operations Associate (London, apply by 11 December)Open Philanthropy:Lead Researcher (Remote)Program Operations Assistant, Global Health & Wellbeing (Remote, US working hours)Probably Good:Multiple positions in operations, growth, and community management (Remote)Organizational updatesThese are in alphabetical order.80,000 HoursThis month, 80,000 Hours launched their updated job board, which now has a search function, improved job filtering, and an email alert system.They also released Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities? and shared seven new problem profiles:Immigration restrictionsPreven...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: November 2022, published by Lizka on November 19, 2022 on The Effective Altruism Forum.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have extremely pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series.The organizations are in alphabetical order.Note also: Why you’re not hearing as much from EA orgs as you’d like.AnnouncementsThe EA survey is out! Consider taking 10 minutes to complete it (regardless of your level of involvement with effective altruism).Applications are open for EAGxBerkeley (2-4 December 2022). The conference will be bringing together people interested in effective altruism from across the United States, particularly those who are looking to build their network and engage more deeply with EA concepts.Job listingsConsider also exploring jobs listed on “Job listing (open).”Animal Advocacy Careers:Researcher and M&E Manager (12-month contract, Remote, apply by 20 November)Clinton Health Access Initiative:Technical Advisor, Maximum Impact Incubator (Remote)Program Manager, Maximum Impact Incubator (Remote)Program Associate, Maximum Impact Incubator (Remote)Effective Institutions Project:Spring and Summer Research Fellows (Remote, apply by December 4)Effective Ventures Operations:Office Manager at the Harvard Square EA Office (Cambridge, MA)EpochResearch Data Analyst (12-month contract, remote, apply by 14 December)Fish Welfare Initiative:International Generalist (Partially in India and partially remote)Project Design & Research Manager (India preferred / remote possible)Program Coordinator (India)Family Empowerment MediaHead of Research (Remote)Director of Development (Remote)Founders Pledge:Senior Researcher/Grantmaker (Climate) (Remote)Senior Advisor (Remote)Accountant (RemoteGiveDirectlyCountry Directors (Liberia, Mozambique, & Nigeria)Data Scientist (Remote)Senior UX/UI Designer (Remote)GiveWell:Research Analyst (Remote / Oakland, CA; apply by 20 November)Senior Researcher (Remote / Oakland, CA)Other jobs on our research team or content editing team (Remote / Oakland, CA)Global Priorities Institute:Research Fellows/Senior Research Fellows in Philosophy (Oxford, apply by 6 December)Research Fellows/Senior Research Fellows in Economics (Oxford, apply by 6 December)Predoctoral Research Fellows in Economics (Oxford, apply by 6 January)IDinsight:India Regional Director (India)Associate Director/Director (Philippines)Junior Data Engineer Francophone (Morocco)Associates and Senior Associates (Multiple locations)Longview Philanthropy:Head of Events Production (London, apply by 18 December)Sol Head of Office Operations (London, apply by 11 December)Sol Office Operations Associate (London, apply by 11 December)Open Philanthropy:Lead Researcher (Remote)Program Operations Assistant, Global Health & Wellbeing (Remote, US working hours)Probably Good:Multiple positions in operations, growth, and community management (Remote)Organizational updatesThese are in alphabetical order.80,000 HoursThis month, 80,000 Hours launched their updated job board, which now has a search function, improved job filtering, and an email alert system.They also released Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities? and shared seven new problem profiles:Immigration restrictionsPreven...]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 19:40 None full 3824
tSQKuohL6WZvh2scz_EA EA - Trying to keep my head on straight by ChanaMessinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Trying to keep my head on straight, published by ChanaMessinger on November 19, 2022 on The Effective Altruism Forum.I’m a little confused who this is for: I think it’s for anyone who might want thoughts on orienting to the FTX situation in ways they’d most endorse later, especially if they are in a position of leadership or have people relying on them for guidance. It might not be coherent, it's just some thoughts, in the spirit of Scattered Takes and Unsolicited Advice.This is written in my personal capacity, not as an employee of CEASomething I’m thinking about a lot right now is how rationality, values, and judgment can be hardest to use when you need them most. My vision of a community is one that makes you most likely to be your best self in those times. I think I'm seeing a lot of this already, and I hope to see even more.So for anyone thinking about FTX things, or talking about them with others, or planning to write things in that strange dialect known as “comms”, here’s my set of things I don’t want to forget. Please feel encouraged to add your own in the comments.IntegrityThere is no party line (I say by fiat) - I want EA to be ok after this, and it’s sure true that there are things people could say that would make that less likely, but I just really really don’t want EA to be a place where people can’t think and say thingsI want to give explicit okness to questions and wondering around judgment, decision quality or integrity of EA, EAs and EA leaders, and I don’t want to have a missing mood about people’s understandable curiosity and concernThat said, obviously people in sensitive situations may not answer all questions you’re curious about, and in fact many of them have legal reasons not to.It is not your job by dint of being an EA to protect “EA the brand”. You may decide that the brand is valuable in service of its goals; you may also have your own opinions on what the brand worth protecting is.Sometimes I have opinions about what makes sense to share based on confidentiality or other things, but at a broad stroke, I tend to be into people saying the truth out loud (or if my system 1 says different, I want to be into it).Soldier-iness (here meaning the feeling of wanting to defend “your tribe”) is normal and some of it is tracking real and important things about the value of what we’ve built here. Integrity doesn’t mean highlighting every bad faith criticism. (But also don’t let the desire to protect what is valuable warp your own beliefs about the world)There are going to be a lot of incentives to pile on, especially if any particular narrative starts emerging, and I also want EA to be a place where you can say “this thing that looks bad doesn’t seem actually object level bad to me for these reasons”, or “Utilitarianism is good, actually” or “EA is/isn’t worse than the reference class on this” or “I think the ways in which EA is different from other movements was a worthwhile bet, even if it added risk” or “I don’t think I know enough about this to have a take.”Updating your views makes sense, but probably you for the moment have most of the same views you had two weeks ago, and overupdating also lands you in the wrong placeI would be sad if people jumped too quickly to repudiate their system of ethics, or all the unusual features of it that have let us aim at doing an unusual amount of good. I would also be sad if the vibe of our response felt disingenuous - aiming to appear less consequentialist than is the case (whatever that true case is), less willing to think about tradeoffs, etc.You don’t even need to have one take - you can just say a lot of things that seem true to youI want to say things here, on twitter, out loud, etc, that are filtered by “is this helping me and others think better and more clearly”. I might not always be ma...]]>
ChanaMessinger https://forum.effectivealtruism.org/posts/tSQKuohL6WZvh2scz/trying-to-keep-my-head-on-straight Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Trying to keep my head on straight, published by ChanaMessinger on November 19, 2022 on The Effective Altruism Forum.I’m a little confused who this is for: I think it’s for anyone who might want thoughts on orienting to the FTX situation in ways they’d most endorse later, especially if they are in a position of leadership or have people relying on them for guidance. It might not be coherent, it's just some thoughts, in the spirit of Scattered Takes and Unsolicited Advice.This is written in my personal capacity, not as an employee of CEASomething I’m thinking about a lot right now is how rationality, values, and judgment can be hardest to use when you need them most. My vision of a community is one that makes you most likely to be your best self in those times. I think I'm seeing a lot of this already, and I hope to see even more.So for anyone thinking about FTX things, or talking about them with others, or planning to write things in that strange dialect known as “comms”, here’s my set of things I don’t want to forget. Please feel encouraged to add your own in the comments.IntegrityThere is no party line (I say by fiat) - I want EA to be ok after this, and it’s sure true that there are things people could say that would make that less likely, but I just really really don’t want EA to be a place where people can’t think and say thingsI want to give explicit okness to questions and wondering around judgment, decision quality or integrity of EA, EAs and EA leaders, and I don’t want to have a missing mood about people’s understandable curiosity and concernThat said, obviously people in sensitive situations may not answer all questions you’re curious about, and in fact many of them have legal reasons not to.It is not your job by dint of being an EA to protect “EA the brand”. You may decide that the brand is valuable in service of its goals; you may also have your own opinions on what the brand worth protecting is.Sometimes I have opinions about what makes sense to share based on confidentiality or other things, but at a broad stroke, I tend to be into people saying the truth out loud (or if my system 1 says different, I want to be into it).Soldier-iness (here meaning the feeling of wanting to defend “your tribe”) is normal and some of it is tracking real and important things about the value of what we’ve built here. Integrity doesn’t mean highlighting every bad faith criticism. (But also don’t let the desire to protect what is valuable warp your own beliefs about the world)There are going to be a lot of incentives to pile on, especially if any particular narrative starts emerging, and I also want EA to be a place where you can say “this thing that looks bad doesn’t seem actually object level bad to me for these reasons”, or “Utilitarianism is good, actually” or “EA is/isn’t worse than the reference class on this” or “I think the ways in which EA is different from other movements was a worthwhile bet, even if it added risk” or “I don’t think I know enough about this to have a take.”Updating your views makes sense, but probably you for the moment have most of the same views you had two weeks ago, and overupdating also lands you in the wrong placeI would be sad if people jumped too quickly to repudiate their system of ethics, or all the unusual features of it that have let us aim at doing an unusual amount of good. I would also be sad if the vibe of our response felt disingenuous - aiming to appear less consequentialist than is the case (whatever that true case is), less willing to think about tradeoffs, etc.You don’t even need to have one take - you can just say a lot of things that seem true to youI want to say things here, on twitter, out loud, etc, that are filtered by “is this helping me and others think better and more clearly”. I might not always be ma...]]>
Sat, 19 Nov 2022 19:06:14 +0000 EA - Trying to keep my head on straight by ChanaMessinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Trying to keep my head on straight, published by ChanaMessinger on November 19, 2022 on The Effective Altruism Forum.I’m a little confused who this is for: I think it’s for anyone who might want thoughts on orienting to the FTX situation in ways they’d most endorse later, especially if they are in a position of leadership or have people relying on them for guidance. It might not be coherent, it's just some thoughts, in the spirit of Scattered Takes and Unsolicited Advice.This is written in my personal capacity, not as an employee of CEASomething I’m thinking about a lot right now is how rationality, values, and judgment can be hardest to use when you need them most. My vision of a community is one that makes you most likely to be your best self in those times. I think I'm seeing a lot of this already, and I hope to see even more.So for anyone thinking about FTX things, or talking about them with others, or planning to write things in that strange dialect known as “comms”, here’s my set of things I don’t want to forget. Please feel encouraged to add your own in the comments.IntegrityThere is no party line (I say by fiat) - I want EA to be ok after this, and it’s sure true that there are things people could say that would make that less likely, but I just really really don’t want EA to be a place where people can’t think and say thingsI want to give explicit okness to questions and wondering around judgment, decision quality or integrity of EA, EAs and EA leaders, and I don’t want to have a missing mood about people’s understandable curiosity and concernThat said, obviously people in sensitive situations may not answer all questions you’re curious about, and in fact many of them have legal reasons not to.It is not your job by dint of being an EA to protect “EA the brand”. You may decide that the brand is valuable in service of its goals; you may also have your own opinions on what the brand worth protecting is.Sometimes I have opinions about what makes sense to share based on confidentiality or other things, but at a broad stroke, I tend to be into people saying the truth out loud (or if my system 1 says different, I want to be into it).Soldier-iness (here meaning the feeling of wanting to defend “your tribe”) is normal and some of it is tracking real and important things about the value of what we’ve built here. Integrity doesn’t mean highlighting every bad faith criticism. (But also don’t let the desire to protect what is valuable warp your own beliefs about the world)There are going to be a lot of incentives to pile on, especially if any particular narrative starts emerging, and I also want EA to be a place where you can say “this thing that looks bad doesn’t seem actually object level bad to me for these reasons”, or “Utilitarianism is good, actually” or “EA is/isn’t worse than the reference class on this” or “I think the ways in which EA is different from other movements was a worthwhile bet, even if it added risk” or “I don’t think I know enough about this to have a take.”Updating your views makes sense, but probably you for the moment have most of the same views you had two weeks ago, and overupdating also lands you in the wrong placeI would be sad if people jumped too quickly to repudiate their system of ethics, or all the unusual features of it that have let us aim at doing an unusual amount of good. I would also be sad if the vibe of our response felt disingenuous - aiming to appear less consequentialist than is the case (whatever that true case is), less willing to think about tradeoffs, etc.You don’t even need to have one take - you can just say a lot of things that seem true to youI want to say things here, on twitter, out loud, etc, that are filtered by “is this helping me and others think better and more clearly”. I might not always be ma...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Trying to keep my head on straight, published by ChanaMessinger on November 19, 2022 on The Effective Altruism Forum.I’m a little confused who this is for: I think it’s for anyone who might want thoughts on orienting to the FTX situation in ways they’d most endorse later, especially if they are in a position of leadership or have people relying on them for guidance. It might not be coherent, it's just some thoughts, in the spirit of Scattered Takes and Unsolicited Advice.This is written in my personal capacity, not as an employee of CEASomething I’m thinking about a lot right now is how rationality, values, and judgment can be hardest to use when you need them most. My vision of a community is one that makes you most likely to be your best self in those times. I think I'm seeing a lot of this already, and I hope to see even more.So for anyone thinking about FTX things, or talking about them with others, or planning to write things in that strange dialect known as “comms”, here’s my set of things I don’t want to forget. Please feel encouraged to add your own in the comments.IntegrityThere is no party line (I say by fiat) - I want EA to be ok after this, and it’s sure true that there are things people could say that would make that less likely, but I just really really don’t want EA to be a place where people can’t think and say thingsI want to give explicit okness to questions and wondering around judgment, decision quality or integrity of EA, EAs and EA leaders, and I don’t want to have a missing mood about people’s understandable curiosity and concernThat said, obviously people in sensitive situations may not answer all questions you’re curious about, and in fact many of them have legal reasons not to.It is not your job by dint of being an EA to protect “EA the brand”. You may decide that the brand is valuable in service of its goals; you may also have your own opinions on what the brand worth protecting is.Sometimes I have opinions about what makes sense to share based on confidentiality or other things, but at a broad stroke, I tend to be into people saying the truth out loud (or if my system 1 says different, I want to be into it).Soldier-iness (here meaning the feeling of wanting to defend “your tribe”) is normal and some of it is tracking real and important things about the value of what we’ve built here. Integrity doesn’t mean highlighting every bad faith criticism. (But also don’t let the desire to protect what is valuable warp your own beliefs about the world)There are going to be a lot of incentives to pile on, especially if any particular narrative starts emerging, and I also want EA to be a place where you can say “this thing that looks bad doesn’t seem actually object level bad to me for these reasons”, or “Utilitarianism is good, actually” or “EA is/isn’t worse than the reference class on this” or “I think the ways in which EA is different from other movements was a worthwhile bet, even if it added risk” or “I don’t think I know enough about this to have a take.”Updating your views makes sense, but probably you for the moment have most of the same views you had two weeks ago, and overupdating also lands you in the wrong placeI would be sad if people jumped too quickly to repudiate their system of ethics, or all the unusual features of it that have let us aim at doing an unusual amount of good. I would also be sad if the vibe of our response felt disingenuous - aiming to appear less consequentialist than is the case (whatever that true case is), less willing to think about tradeoffs, etc.You don’t even need to have one take - you can just say a lot of things that seem true to youI want to say things here, on twitter, out loud, etc, that are filtered by “is this helping me and others think better and more clearly”. I might not always be ma...]]>
ChanaMessinger https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:13 None full 3825
PtnGywLuk3Keo642c_EA EA - Polis (Cluster) Poll: What are our opinions on EA, governance and FTX? (Now with visualisation) by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polis (Cluster) Poll: What are our opinions on EA, governance and FTX? (Now with visualisation), published by Nathan Young on November 19, 2022 on The Effective Altruism Forum.Tl;drI am editing this as I goPolis is a tool for understanding community opinionIt allows us to see clusters of similar opinionsIt not a representative poll nor does it claim to be- it's not saying how many people think X just that the cluster exists.What are people's views on community/governance/FTX in EA?It creates a fun/interesting visualisation when enough people have voted. I think it will be about the most useful way to understand what different groups feel right now.Add your own commentsIt was mentioned on the 80k podcast (by Audrey Tang)It is anonymous unless you log inI will add the visualisations when we have enough votes. Polis is being a bit slow today so this may be delayed.Link here:Report here (this now works):Longer introIt is easy to have a false sense of what people think. But we don't have to - we have tools that allow us to hear what people actually think. One is representative polling. This isn't that. Here we can do cluster analysis (not a technical term) and look at what opinions are held in common. Do those who think Longtermisms is true tend to think that certain solutions are better than others? If so, why.It shouldn't be used to say X% of EAs think this. That's not what this tool is for. Instead, it should suggest interesting questions to ask in future.It would be very easy to use the loudest voices to decided what EAs really think. But that is unwise. Instead, if we want community input we should try and understand what groups form the community as a whole and understand what underlies the desires of each group.Example questionReportClusters and visualisation (Coming when Polis decides to work)There are three clusters currently:Group A largely think this is overrated and want clear suggestionsGroup B wants more robust internal mechanisms and hold EA responsible for not spotting thisGroup C is similar to group A but is more likely to hold those views. They also thinks that flirting at EAGs during the day time should be okayThe majority views:Note that the red bars are thing that everyone disagrees with.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://forum.effectivealtruism.org/posts/PtnGywLuk3Keo642c/polis-cluster-poll-what-are-our-opinions-on-ea-governance Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polis (Cluster) Poll: What are our opinions on EA, governance and FTX? (Now with visualisation), published by Nathan Young on November 19, 2022 on The Effective Altruism Forum.Tl;drI am editing this as I goPolis is a tool for understanding community opinionIt allows us to see clusters of similar opinionsIt not a representative poll nor does it claim to be- it's not saying how many people think X just that the cluster exists.What are people's views on community/governance/FTX in EA?It creates a fun/interesting visualisation when enough people have voted. I think it will be about the most useful way to understand what different groups feel right now.Add your own commentsIt was mentioned on the 80k podcast (by Audrey Tang)It is anonymous unless you log inI will add the visualisations when we have enough votes. Polis is being a bit slow today so this may be delayed.Link here:Report here (this now works):Longer introIt is easy to have a false sense of what people think. But we don't have to - we have tools that allow us to hear what people actually think. One is representative polling. This isn't that. Here we can do cluster analysis (not a technical term) and look at what opinions are held in common. Do those who think Longtermisms is true tend to think that certain solutions are better than others? If so, why.It shouldn't be used to say X% of EAs think this. That's not what this tool is for. Instead, it should suggest interesting questions to ask in future.It would be very easy to use the loudest voices to decided what EAs really think. But that is unwise. Instead, if we want community input we should try and understand what groups form the community as a whole and understand what underlies the desires of each group.Example questionReportClusters and visualisation (Coming when Polis decides to work)There are three clusters currently:Group A largely think this is overrated and want clear suggestionsGroup B wants more robust internal mechanisms and hold EA responsible for not spotting thisGroup C is similar to group A but is more likely to hold those views. They also thinks that flirting at EAGs during the day time should be okayThe majority views:Note that the red bars are thing that everyone disagrees with.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 19 Nov 2022 15:57:49 +0000 EA - Polis (Cluster) Poll: What are our opinions on EA, governance and FTX? (Now with visualisation) by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polis (Cluster) Poll: What are our opinions on EA, governance and FTX? (Now with visualisation), published by Nathan Young on November 19, 2022 on The Effective Altruism Forum.Tl;drI am editing this as I goPolis is a tool for understanding community opinionIt allows us to see clusters of similar opinionsIt not a representative poll nor does it claim to be- it's not saying how many people think X just that the cluster exists.What are people's views on community/governance/FTX in EA?It creates a fun/interesting visualisation when enough people have voted. I think it will be about the most useful way to understand what different groups feel right now.Add your own commentsIt was mentioned on the 80k podcast (by Audrey Tang)It is anonymous unless you log inI will add the visualisations when we have enough votes. Polis is being a bit slow today so this may be delayed.Link here:Report here (this now works):Longer introIt is easy to have a false sense of what people think. But we don't have to - we have tools that allow us to hear what people actually think. One is representative polling. This isn't that. Here we can do cluster analysis (not a technical term) and look at what opinions are held in common. Do those who think Longtermisms is true tend to think that certain solutions are better than others? If so, why.It shouldn't be used to say X% of EAs think this. That's not what this tool is for. Instead, it should suggest interesting questions to ask in future.It would be very easy to use the loudest voices to decided what EAs really think. But that is unwise. Instead, if we want community input we should try and understand what groups form the community as a whole and understand what underlies the desires of each group.Example questionReportClusters and visualisation (Coming when Polis decides to work)There are three clusters currently:Group A largely think this is overrated and want clear suggestionsGroup B wants more robust internal mechanisms and hold EA responsible for not spotting thisGroup C is similar to group A but is more likely to hold those views. They also thinks that flirting at EAGs during the day time should be okayThe majority views:Note that the red bars are thing that everyone disagrees with.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polis (Cluster) Poll: What are our opinions on EA, governance and FTX? (Now with visualisation), published by Nathan Young on November 19, 2022 on The Effective Altruism Forum.Tl;drI am editing this as I goPolis is a tool for understanding community opinionIt allows us to see clusters of similar opinionsIt not a representative poll nor does it claim to be- it's not saying how many people think X just that the cluster exists.What are people's views on community/governance/FTX in EA?It creates a fun/interesting visualisation when enough people have voted. I think it will be about the most useful way to understand what different groups feel right now.Add your own commentsIt was mentioned on the 80k podcast (by Audrey Tang)It is anonymous unless you log inI will add the visualisations when we have enough votes. Polis is being a bit slow today so this may be delayed.Link here:Report here (this now works):Longer introIt is easy to have a false sense of what people think. But we don't have to - we have tools that allow us to hear what people actually think. One is representative polling. This isn't that. Here we can do cluster analysis (not a technical term) and look at what opinions are held in common. Do those who think Longtermisms is true tend to think that certain solutions are better than others? If so, why.It shouldn't be used to say X% of EAs think this. That's not what this tool is for. Instead, it should suggest interesting questions to ask in future.It would be very easy to use the loudest voices to decided what EAs really think. But that is unwise. Instead, if we want community input we should try and understand what groups form the community as a whole and understand what underlies the desires of each group.Example questionReportClusters and visualisation (Coming when Polis decides to work)There are three clusters currently:Group A largely think this is overrated and want clear suggestionsGroup B wants more robust internal mechanisms and hold EA responsible for not spotting thisGroup C is similar to group A but is more likely to hold those views. They also thinks that flirting at EAGs during the day time should be okayThe majority views:Note that the red bars are thing that everyone disagrees with.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:36 None full 3826
38LGZhQsPNEBRq5Ke_EA EA - First FDA Cultured Meat Safety Approval by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: First FDA Cultured Meat Safety Approval, published by Ben West on November 19, 2022 on The Effective Altruism Forum.In a major first, the U.S. Food and Drug Administration just offered its safety blessing to a cultivated meat product startup. It completed its first pre-market consultation with Upside Foods to examine human food made from the cultured cells of animals, and it concluded that it had “no further questions” related to the way Upside is producing its chicken.Note: I don't fully understand all the legal hoops that a cultured meat company has to jump through; it seems like this is a major one, but by no means the last.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben West https://forum.effectivealtruism.org/posts/38LGZhQsPNEBRq5Ke/first-fda-cultured-meat-safety-approval Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: First FDA Cultured Meat Safety Approval, published by Ben West on November 19, 2022 on The Effective Altruism Forum.In a major first, the U.S. Food and Drug Administration just offered its safety blessing to a cultivated meat product startup. It completed its first pre-market consultation with Upside Foods to examine human food made from the cultured cells of animals, and it concluded that it had “no further questions” related to the way Upside is producing its chicken.Note: I don't fully understand all the legal hoops that a cultured meat company has to jump through; it seems like this is a major one, but by no means the last.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 19 Nov 2022 01:08:02 +0000 EA - First FDA Cultured Meat Safety Approval by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: First FDA Cultured Meat Safety Approval, published by Ben West on November 19, 2022 on The Effective Altruism Forum.In a major first, the U.S. Food and Drug Administration just offered its safety blessing to a cultivated meat product startup. It completed its first pre-market consultation with Upside Foods to examine human food made from the cultured cells of animals, and it concluded that it had “no further questions” related to the way Upside is producing its chicken.Note: I don't fully understand all the legal hoops that a cultured meat company has to jump through; it seems like this is a major one, but by no means the last.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: First FDA Cultured Meat Safety Approval, published by Ben West on November 19, 2022 on The Effective Altruism Forum.In a major first, the U.S. Food and Drug Administration just offered its safety blessing to a cultivated meat product startup. It completed its first pre-market consultation with Upside Foods to examine human food made from the cultured cells of animals, and it concluded that it had “no further questions” related to the way Upside is producing its chicken.Note: I don't fully understand all the legal hoops that a cultured meat company has to jump through; it seems like this is a major one, but by no means the last.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben West https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:53 None full 3820
qFEwQbetaaSpvHm9e_EA EA - My takes on the FTX situation will (mostly) be cold, not hot by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My takes on the FTX situation will (mostly) be cold, not hot, published by Holden Karnofsky on November 18, 2022 on The Effective Altruism Forum.The FTX situation is raising a lot of good questions. Could this have been prevented? What warning signs were there, and did people act on them as much as they should have? What steps could be taken to lower the odds of a similar situation in the future?I want to think hard about questions like these, and I want to have a good (and public) discussion about them. But I don’t want to rush to make sure that happens as fast as possible. (I will continue to communicate things that seem directly and time-sensitively action-relevant; what I don’t want to rush is reflection on what went wrong and what we can learn.)The overarching reason for this is that I think discussion will be better - more thoughtful, more honest, more productive - to the extent that it happens after the dust has settled a bit. (I’m guessing this will be some number of weeks or months, as opposed to days or years.)I’m hearing calls from various members of the community to discuss all these issues quickly, and concerns (from people who’d rather not move so quickly) that engaging too slowly could risk losing the trust of the community. As a sort of compromise, I’m rushing out this post on why I disprefer rushing.1My guess is that a number of other people have similar thoughts to mine on this point, but I’ll speak only for myself.I expect some people to read this as implicitly suggesting that others behave as I do. That’s mostly not right, so I’ll be explicit about my goals. My primary goal is just to explain my own behavior. My secondary goal is to make it easier to understand why some others might be behaving as I am. My third goal is to put some considerations out there that might change some other people’s minds somewhat about what they want to do; but I don’t expect or want everyone to make the same calls I’m making (actually it would be very weird if the EA Forum were quiet right now; that’s not something I wish for).So, reasons why I mostly expect to stick to cold takes (weeks or months from now) rather than hot takes (days):I think cold takes will be more intelligent and thoughtful. In general, I find that I have better thoughts on anything after I have a while to process it. In the immediate aftermath of new information, I have tons of quick reactions that tend not to hold up well; they’re often emotion-driven, often overcorrections to what I thought before and overreactions to what others are saying, etc.Waiting also tends to mean I get to take in a lot more information, and angles from other people, that can affect my thinking. (This is especially the case with other people being so into hot takes!)It also tends to give me more space for minimal-trust thinking. If I want to form the most accurate possible belief in the heat of the moment, I tend to look to people who have thought more about the matter than I have, and think about which of them I want to bet on and defer to. But if I have more time, I can develop my own models and come to the point where I can personally stand behind my opinions. (In general I’ve been slower than some to adopt ideas like the most important century hypothesis, but I also think I have more detailed understanding and more gut-level seriousness about such ideas than I would’ve if I’d adopted them more quickly and switched from “explore” to “exploit” mode earlier.)These factors seem especially important for topics like “What went wrong here and what can we learn for the future?” It’s easy to learn the wrong lessons from a new development, and I think the extra info and thought is likely to really pay off.I think cold takes will pose less risk of doing harm. Right now there is a lot of interest in the FTX situation,...]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/qFEwQbetaaSpvHm9e/my-takes-on-the-ftx-situation-will-mostly-be-cold-not-hot Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My takes on the FTX situation will (mostly) be cold, not hot, published by Holden Karnofsky on November 18, 2022 on The Effective Altruism Forum.The FTX situation is raising a lot of good questions. Could this have been prevented? What warning signs were there, and did people act on them as much as they should have? What steps could be taken to lower the odds of a similar situation in the future?I want to think hard about questions like these, and I want to have a good (and public) discussion about them. But I don’t want to rush to make sure that happens as fast as possible. (I will continue to communicate things that seem directly and time-sensitively action-relevant; what I don’t want to rush is reflection on what went wrong and what we can learn.)The overarching reason for this is that I think discussion will be better - more thoughtful, more honest, more productive - to the extent that it happens after the dust has settled a bit. (I’m guessing this will be some number of weeks or months, as opposed to days or years.)I’m hearing calls from various members of the community to discuss all these issues quickly, and concerns (from people who’d rather not move so quickly) that engaging too slowly could risk losing the trust of the community. As a sort of compromise, I’m rushing out this post on why I disprefer rushing.1My guess is that a number of other people have similar thoughts to mine on this point, but I’ll speak only for myself.I expect some people to read this as implicitly suggesting that others behave as I do. That’s mostly not right, so I’ll be explicit about my goals. My primary goal is just to explain my own behavior. My secondary goal is to make it easier to understand why some others might be behaving as I am. My third goal is to put some considerations out there that might change some other people’s minds somewhat about what they want to do; but I don’t expect or want everyone to make the same calls I’m making (actually it would be very weird if the EA Forum were quiet right now; that’s not something I wish for).So, reasons why I mostly expect to stick to cold takes (weeks or months from now) rather than hot takes (days):I think cold takes will be more intelligent and thoughtful. In general, I find that I have better thoughts on anything after I have a while to process it. In the immediate aftermath of new information, I have tons of quick reactions that tend not to hold up well; they’re often emotion-driven, often overcorrections to what I thought before and overreactions to what others are saying, etc.Waiting also tends to mean I get to take in a lot more information, and angles from other people, that can affect my thinking. (This is especially the case with other people being so into hot takes!)It also tends to give me more space for minimal-trust thinking. If I want to form the most accurate possible belief in the heat of the moment, I tend to look to people who have thought more about the matter than I have, and think about which of them I want to bet on and defer to. But if I have more time, I can develop my own models and come to the point where I can personally stand behind my opinions. (In general I’ve been slower than some to adopt ideas like the most important century hypothesis, but I also think I have more detailed understanding and more gut-level seriousness about such ideas than I would’ve if I’d adopted them more quickly and switched from “explore” to “exploit” mode earlier.)These factors seem especially important for topics like “What went wrong here and what can we learn for the future?” It’s easy to learn the wrong lessons from a new development, and I think the extra info and thought is likely to really pay off.I think cold takes will pose less risk of doing harm. Right now there is a lot of interest in the FTX situation,...]]>
Sat, 19 Nov 2022 00:04:57 +0000 EA - My takes on the FTX situation will (mostly) be cold, not hot by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My takes on the FTX situation will (mostly) be cold, not hot, published by Holden Karnofsky on November 18, 2022 on The Effective Altruism Forum.The FTX situation is raising a lot of good questions. Could this have been prevented? What warning signs were there, and did people act on them as much as they should have? What steps could be taken to lower the odds of a similar situation in the future?I want to think hard about questions like these, and I want to have a good (and public) discussion about them. But I don’t want to rush to make sure that happens as fast as possible. (I will continue to communicate things that seem directly and time-sensitively action-relevant; what I don’t want to rush is reflection on what went wrong and what we can learn.)The overarching reason for this is that I think discussion will be better - more thoughtful, more honest, more productive - to the extent that it happens after the dust has settled a bit. (I’m guessing this will be some number of weeks or months, as opposed to days or years.)I’m hearing calls from various members of the community to discuss all these issues quickly, and concerns (from people who’d rather not move so quickly) that engaging too slowly could risk losing the trust of the community. As a sort of compromise, I’m rushing out this post on why I disprefer rushing.1My guess is that a number of other people have similar thoughts to mine on this point, but I’ll speak only for myself.I expect some people to read this as implicitly suggesting that others behave as I do. That’s mostly not right, so I’ll be explicit about my goals. My primary goal is just to explain my own behavior. My secondary goal is to make it easier to understand why some others might be behaving as I am. My third goal is to put some considerations out there that might change some other people’s minds somewhat about what they want to do; but I don’t expect or want everyone to make the same calls I’m making (actually it would be very weird if the EA Forum were quiet right now; that’s not something I wish for).So, reasons why I mostly expect to stick to cold takes (weeks or months from now) rather than hot takes (days):I think cold takes will be more intelligent and thoughtful. In general, I find that I have better thoughts on anything after I have a while to process it. In the immediate aftermath of new information, I have tons of quick reactions that tend not to hold up well; they’re often emotion-driven, often overcorrections to what I thought before and overreactions to what others are saying, etc.Waiting also tends to mean I get to take in a lot more information, and angles from other people, that can affect my thinking. (This is especially the case with other people being so into hot takes!)It also tends to give me more space for minimal-trust thinking. If I want to form the most accurate possible belief in the heat of the moment, I tend to look to people who have thought more about the matter than I have, and think about which of them I want to bet on and defer to. But if I have more time, I can develop my own models and come to the point where I can personally stand behind my opinions. (In general I’ve been slower than some to adopt ideas like the most important century hypothesis, but I also think I have more detailed understanding and more gut-level seriousness about such ideas than I would’ve if I’d adopted them more quickly and switched from “explore” to “exploit” mode earlier.)These factors seem especially important for topics like “What went wrong here and what can we learn for the future?” It’s easy to learn the wrong lessons from a new development, and I think the extra info and thought is likely to really pay off.I think cold takes will pose less risk of doing harm. Right now there is a lot of interest in the FTX situation,...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My takes on the FTX situation will (mostly) be cold, not hot, published by Holden Karnofsky on November 18, 2022 on The Effective Altruism Forum.The FTX situation is raising a lot of good questions. Could this have been prevented? What warning signs were there, and did people act on them as much as they should have? What steps could be taken to lower the odds of a similar situation in the future?I want to think hard about questions like these, and I want to have a good (and public) discussion about them. But I don’t want to rush to make sure that happens as fast as possible. (I will continue to communicate things that seem directly and time-sensitively action-relevant; what I don’t want to rush is reflection on what went wrong and what we can learn.)The overarching reason for this is that I think discussion will be better - more thoughtful, more honest, more productive - to the extent that it happens after the dust has settled a bit. (I’m guessing this will be some number of weeks or months, as opposed to days or years.)I’m hearing calls from various members of the community to discuss all these issues quickly, and concerns (from people who’d rather not move so quickly) that engaging too slowly could risk losing the trust of the community. As a sort of compromise, I’m rushing out this post on why I disprefer rushing.1My guess is that a number of other people have similar thoughts to mine on this point, but I’ll speak only for myself.I expect some people to read this as implicitly suggesting that others behave as I do. That’s mostly not right, so I’ll be explicit about my goals. My primary goal is just to explain my own behavior. My secondary goal is to make it easier to understand why some others might be behaving as I am. My third goal is to put some considerations out there that might change some other people’s minds somewhat about what they want to do; but I don’t expect or want everyone to make the same calls I’m making (actually it would be very weird if the EA Forum were quiet right now; that’s not something I wish for).So, reasons why I mostly expect to stick to cold takes (weeks or months from now) rather than hot takes (days):I think cold takes will be more intelligent and thoughtful. In general, I find that I have better thoughts on anything after I have a while to process it. In the immediate aftermath of new information, I have tons of quick reactions that tend not to hold up well; they’re often emotion-driven, often overcorrections to what I thought before and overreactions to what others are saying, etc.Waiting also tends to mean I get to take in a lot more information, and angles from other people, that can affect my thinking. (This is especially the case with other people being so into hot takes!)It also tends to give me more space for minimal-trust thinking. If I want to form the most accurate possible belief in the heat of the moment, I tend to look to people who have thought more about the matter than I have, and think about which of them I want to bet on and defer to. But if I have more time, I can develop my own models and come to the point where I can personally stand behind my opinions. (In general I’ve been slower than some to adopt ideas like the most important century hypothesis, but I also think I have more detailed understanding and more gut-level seriousness about such ideas than I would’ve if I’d adopted them more quickly and switched from “explore” to “exploit” mode earlier.)These factors seem especially important for topics like “What went wrong here and what can we learn for the future?” It’s easy to learn the wrong lessons from a new development, and I think the extra info and thought is likely to really pay off.I think cold takes will pose less risk of doing harm. Right now there is a lot of interest in the FTX situation,...]]>
Holden Karnofsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:16 None full 3821
dFqydq8nuukH9faX7_EA EA - Take the EA Forum Survey and Help us Improve the Forum by Sharang Phadke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Take the EA Forum Survey and Help us Improve the Forum, published by Sharang Phadke on November 18, 2022 on The Effective Altruism Forum.The EA Forum Team is running a survey of Forum users to help us improve our strategy and better serve the EA Community. We invite you to take this 10-minute survey here by the end of the day on Friday, December 9.Everyone who engages with the Forum is encouraged to take the survey — even if you don’t have an account, look at the Forum very rarely, or feel like you’re not a frequent user. You can also take the survey anonymously.We are offering free books to the first 200 respondents to encourage newcomers (and anyone who likes reading!) to participate. If you complete the survey, you will be given a link to claim any one of the books listed on effectivealtruism.org and we will ship it to you with our gratitude.Feel free to ask any questions in the comments below. Thank you in advance for your time and feedback!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sharang Phadke https://forum.effectivealtruism.org/posts/dFqydq8nuukH9faX7/take-the-ea-forum-survey-and-help-us-improve-the-forum Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Take the EA Forum Survey and Help us Improve the Forum, published by Sharang Phadke on November 18, 2022 on The Effective Altruism Forum.The EA Forum Team is running a survey of Forum users to help us improve our strategy and better serve the EA Community. We invite you to take this 10-minute survey here by the end of the day on Friday, December 9.Everyone who engages with the Forum is encouraged to take the survey — even if you don’t have an account, look at the Forum very rarely, or feel like you’re not a frequent user. You can also take the survey anonymously.We are offering free books to the first 200 respondents to encourage newcomers (and anyone who likes reading!) to participate. If you complete the survey, you will be given a link to claim any one of the books listed on effectivealtruism.org and we will ship it to you with our gratitude.Feel free to ask any questions in the comments below. Thank you in advance for your time and feedback!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 18 Nov 2022 23:26:17 +0000 EA - Take the EA Forum Survey and Help us Improve the Forum by Sharang Phadke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Take the EA Forum Survey and Help us Improve the Forum, published by Sharang Phadke on November 18, 2022 on The Effective Altruism Forum.The EA Forum Team is running a survey of Forum users to help us improve our strategy and better serve the EA Community. We invite you to take this 10-minute survey here by the end of the day on Friday, December 9.Everyone who engages with the Forum is encouraged to take the survey — even if you don’t have an account, look at the Forum very rarely, or feel like you’re not a frequent user. You can also take the survey anonymously.We are offering free books to the first 200 respondents to encourage newcomers (and anyone who likes reading!) to participate. If you complete the survey, you will be given a link to claim any one of the books listed on effectivealtruism.org and we will ship it to you with our gratitude.Feel free to ask any questions in the comments below. Thank you in advance for your time and feedback!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Take the EA Forum Survey and Help us Improve the Forum, published by Sharang Phadke on November 18, 2022 on The Effective Altruism Forum.The EA Forum Team is running a survey of Forum users to help us improve our strategy and better serve the EA Community. We invite you to take this 10-minute survey here by the end of the day on Friday, December 9.Everyone who engages with the Forum is encouraged to take the survey — even if you don’t have an account, look at the Forum very rarely, or feel like you’re not a frequent user. You can also take the survey anonymously.We are offering free books to the first 200 respondents to encourage newcomers (and anyone who likes reading!) to participate. If you complete the survey, you will be given a link to claim any one of the books listed on effectivealtruism.org and we will ship it to you with our gratitude.Feel free to ask any questions in the comments below. Thank you in advance for your time and feedback!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sharang Phadke https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:10 None full 3822
GqvSoerAurEbiP9GE_EA EA - Moderator Appreciation Thread by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moderator Appreciation Thread, published by Ben West on November 18, 2022 on The Effective Altruism Forum.You might have noticed that the EA Forum has gotten more engagement recently:A sudden 10x increase engagement would be a strain under the best of circumstances, but the moderation load has been even more than the increased engagement would imply, because recent topics of conversation have been unusually heated and targeted by spam, trolls, etc.Thank you to Lorenzo Buonanno and Ryan Fugate, who joined the moderation team a few weeks ago without expecting this, but have hugely stepped up; Amber Dawn, who has tirelessly flagged trolls, spam, and tagged posts; our long time moderators Edo Arad and Julia Wise and moderation advisor Aaron Gertler, who have taken substantial time out from their extremely busy other jobs to help; and of course Lizka Vaintrob for being the non-technical Forum lead.(For those who don’t know me: I manage CEA’s Online team, which includes the EA Forum.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben West https://forum.effectivealtruism.org/posts/GqvSoerAurEbiP9GE/moderator-appreciation-thread Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moderator Appreciation Thread, published by Ben West on November 18, 2022 on The Effective Altruism Forum.You might have noticed that the EA Forum has gotten more engagement recently:A sudden 10x increase engagement would be a strain under the best of circumstances, but the moderation load has been even more than the increased engagement would imply, because recent topics of conversation have been unusually heated and targeted by spam, trolls, etc.Thank you to Lorenzo Buonanno and Ryan Fugate, who joined the moderation team a few weeks ago without expecting this, but have hugely stepped up; Amber Dawn, who has tirelessly flagged trolls, spam, and tagged posts; our long time moderators Edo Arad and Julia Wise and moderation advisor Aaron Gertler, who have taken substantial time out from their extremely busy other jobs to help; and of course Lizka Vaintrob for being the non-technical Forum lead.(For those who don’t know me: I manage CEA’s Online team, which includes the EA Forum.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 18 Nov 2022 22:57:48 +0000 EA - Moderator Appreciation Thread by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moderator Appreciation Thread, published by Ben West on November 18, 2022 on The Effective Altruism Forum.You might have noticed that the EA Forum has gotten more engagement recently:A sudden 10x increase engagement would be a strain under the best of circumstances, but the moderation load has been even more than the increased engagement would imply, because recent topics of conversation have been unusually heated and targeted by spam, trolls, etc.Thank you to Lorenzo Buonanno and Ryan Fugate, who joined the moderation team a few weeks ago without expecting this, but have hugely stepped up; Amber Dawn, who has tirelessly flagged trolls, spam, and tagged posts; our long time moderators Edo Arad and Julia Wise and moderation advisor Aaron Gertler, who have taken substantial time out from their extremely busy other jobs to help; and of course Lizka Vaintrob for being the non-technical Forum lead.(For those who don’t know me: I manage CEA’s Online team, which includes the EA Forum.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moderator Appreciation Thread, published by Ben West on November 18, 2022 on The Effective Altruism Forum.You might have noticed that the EA Forum has gotten more engagement recently:A sudden 10x increase engagement would be a strain under the best of circumstances, but the moderation load has been even more than the increased engagement would imply, because recent topics of conversation have been unusually heated and targeted by spam, trolls, etc.Thank you to Lorenzo Buonanno and Ryan Fugate, who joined the moderation team a few weeks ago without expecting this, but have hugely stepped up; Amber Dawn, who has tirelessly flagged trolls, spam, and tagged posts; our long time moderators Edo Arad and Julia Wise and moderation advisor Aaron Gertler, who have taken substantial time out from their extremely busy other jobs to help; and of course Lizka Vaintrob for being the non-technical Forum lead.(For those who don’t know me: I manage CEA’s Online team, which includes the EA Forum.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben West https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:12 None full 3814
STJ2DAC8wyFbviPc9_EA EA - A long-termist perspective on EA's current PR crisis by Geoffrey Miller Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A long-termist perspective on EA's current PR crisis, published by Geoffrey Miller on November 18, 2022 on The Effective Altruism Forum.EAs seem so good at taking long-termist perspectives on everything -- other than our own movement's public relations (PR) issues.Let's try to apply some of our long-termist perspective-taking and future point-of-view imagination to the current situation.In ten years, we'll still remember the FTX/SBF crisis as a dramatic betrayal, a public relations (PR) crisis, and a significant financial and reputational setback.However, I'm also confident that in ten years, EA will still exist, will have restored its reputation, will still be attracting great talent, resources, and ideas, and will be doing more good than ever in the world.When we happen to find ourselves living in 'interesting times', it's useful to remember the age-old wisdom: the dogs bark, but the caravan moves on. This too shall pass. Think about all the 'huge crises!!!' that the news has covered over the last few years -- and how quickly each news story died down when the news media shifted to some new crisis narrative that carries enough monetizable outrage potential that it could attract clicks, subscribers, and advertisers.We are part of this week's monetizable outrage narrative. Every other week, ever since the development of the 24-hour news cycle, has had its own monetizable outrage narrative. If you've never been part of an outrage narrative before, welcome to the club. It sucks. It leaves scars. But it is survivable. (Speaking as someone who has survived my own share of public controversy, cancellation, and outrage narratives, and who has worked in several academic subfields that are routinely demonized by the press.)Also, haters gonna hate. Many people resent EA because it makes their warm-glow virtue-signaling seem superficial, irrational, and performative. They know deep in their hearts that it's hard to argue with the core EA values that we should use reason, evidence, and open debate to try to reduce suffering, and to promote a better long-term future. They've been waiting for an opportunity to attack EA as hypocritical or hyper-earnest, as hyper-rational or irrational, as hyper-nerdy or suspiciously cool, as hyper-sexual or incel-adjacent asexual -- anything they can think of. Just remember -- if you wouldn't have taken any misguided, uninformed, snarky critiques of EA seriously last month, don't take them seriously this week. Haters gonna hate.None of this is to minimize the actual, real-world impact of the current crisis. Many people lost of lot of money to a company they trusted. A major tech industry (crypto) will be handicapped for a while. Our organizations face real losses of resources, for a while. These are real problems, but I think they are not existential threats to EA as a movement, and they are certainly not good counter-arguments against EA principles and insights.There will come a time, maybe in the 2050s, when you may be sitting in front of a cheerful Christmas fireplace, a grandkid bouncing on your knee, and your adult kids may ask you to tell them once more the tale of the Great FTX Crisis of 2022, and how it all played out, and died down, and how EA survived and prospered. You won't remember all the breathless EA forum posts, the in-fighting, and the crisis management. You'll just remember that you either kept your faith in the cause and the community -- or you didn't.I hope more of us can take this kind of long-term perspective on the current situation.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Geoffrey Miller https://forum.effectivealtruism.org/posts/STJ2DAC8wyFbviPc9/a-long-termist-perspective-on-ea-s-current-pr-crisis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A long-termist perspective on EA's current PR crisis, published by Geoffrey Miller on November 18, 2022 on The Effective Altruism Forum.EAs seem so good at taking long-termist perspectives on everything -- other than our own movement's public relations (PR) issues.Let's try to apply some of our long-termist perspective-taking and future point-of-view imagination to the current situation.In ten years, we'll still remember the FTX/SBF crisis as a dramatic betrayal, a public relations (PR) crisis, and a significant financial and reputational setback.However, I'm also confident that in ten years, EA will still exist, will have restored its reputation, will still be attracting great talent, resources, and ideas, and will be doing more good than ever in the world.When we happen to find ourselves living in 'interesting times', it's useful to remember the age-old wisdom: the dogs bark, but the caravan moves on. This too shall pass. Think about all the 'huge crises!!!' that the news has covered over the last few years -- and how quickly each news story died down when the news media shifted to some new crisis narrative that carries enough monetizable outrage potential that it could attract clicks, subscribers, and advertisers.We are part of this week's monetizable outrage narrative. Every other week, ever since the development of the 24-hour news cycle, has had its own monetizable outrage narrative. If you've never been part of an outrage narrative before, welcome to the club. It sucks. It leaves scars. But it is survivable. (Speaking as someone who has survived my own share of public controversy, cancellation, and outrage narratives, and who has worked in several academic subfields that are routinely demonized by the press.)Also, haters gonna hate. Many people resent EA because it makes their warm-glow virtue-signaling seem superficial, irrational, and performative. They know deep in their hearts that it's hard to argue with the core EA values that we should use reason, evidence, and open debate to try to reduce suffering, and to promote a better long-term future. They've been waiting for an opportunity to attack EA as hypocritical or hyper-earnest, as hyper-rational or irrational, as hyper-nerdy or suspiciously cool, as hyper-sexual or incel-adjacent asexual -- anything they can think of. Just remember -- if you wouldn't have taken any misguided, uninformed, snarky critiques of EA seriously last month, don't take them seriously this week. Haters gonna hate.None of this is to minimize the actual, real-world impact of the current crisis. Many people lost of lot of money to a company they trusted. A major tech industry (crypto) will be handicapped for a while. Our organizations face real losses of resources, for a while. These are real problems, but I think they are not existential threats to EA as a movement, and they are certainly not good counter-arguments against EA principles and insights.There will come a time, maybe in the 2050s, when you may be sitting in front of a cheerful Christmas fireplace, a grandkid bouncing on your knee, and your adult kids may ask you to tell them once more the tale of the Great FTX Crisis of 2022, and how it all played out, and died down, and how EA survived and prospered. You won't remember all the breathless EA forum posts, the in-fighting, and the crisis management. You'll just remember that you either kept your faith in the cause and the community -- or you didn't.I hope more of us can take this kind of long-term perspective on the current situation.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 18 Nov 2022 19:22:47 +0000 EA - A long-termist perspective on EA's current PR crisis by Geoffrey Miller Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A long-termist perspective on EA's current PR crisis, published by Geoffrey Miller on November 18, 2022 on The Effective Altruism Forum.EAs seem so good at taking long-termist perspectives on everything -- other than our own movement's public relations (PR) issues.Let's try to apply some of our long-termist perspective-taking and future point-of-view imagination to the current situation.In ten years, we'll still remember the FTX/SBF crisis as a dramatic betrayal, a public relations (PR) crisis, and a significant financial and reputational setback.However, I'm also confident that in ten years, EA will still exist, will have restored its reputation, will still be attracting great talent, resources, and ideas, and will be doing more good than ever in the world.When we happen to find ourselves living in 'interesting times', it's useful to remember the age-old wisdom: the dogs bark, but the caravan moves on. This too shall pass. Think about all the 'huge crises!!!' that the news has covered over the last few years -- and how quickly each news story died down when the news media shifted to some new crisis narrative that carries enough monetizable outrage potential that it could attract clicks, subscribers, and advertisers.We are part of this week's monetizable outrage narrative. Every other week, ever since the development of the 24-hour news cycle, has had its own monetizable outrage narrative. If you've never been part of an outrage narrative before, welcome to the club. It sucks. It leaves scars. But it is survivable. (Speaking as someone who has survived my own share of public controversy, cancellation, and outrage narratives, and who has worked in several academic subfields that are routinely demonized by the press.)Also, haters gonna hate. Many people resent EA because it makes their warm-glow virtue-signaling seem superficial, irrational, and performative. They know deep in their hearts that it's hard to argue with the core EA values that we should use reason, evidence, and open debate to try to reduce suffering, and to promote a better long-term future. They've been waiting for an opportunity to attack EA as hypocritical or hyper-earnest, as hyper-rational or irrational, as hyper-nerdy or suspiciously cool, as hyper-sexual or incel-adjacent asexual -- anything they can think of. Just remember -- if you wouldn't have taken any misguided, uninformed, snarky critiques of EA seriously last month, don't take them seriously this week. Haters gonna hate.None of this is to minimize the actual, real-world impact of the current crisis. Many people lost of lot of money to a company they trusted. A major tech industry (crypto) will be handicapped for a while. Our organizations face real losses of resources, for a while. These are real problems, but I think they are not existential threats to EA as a movement, and they are certainly not good counter-arguments against EA principles and insights.There will come a time, maybe in the 2050s, when you may be sitting in front of a cheerful Christmas fireplace, a grandkid bouncing on your knee, and your adult kids may ask you to tell them once more the tale of the Great FTX Crisis of 2022, and how it all played out, and died down, and how EA survived and prospered. You won't remember all the breathless EA forum posts, the in-fighting, and the crisis management. You'll just remember that you either kept your faith in the cause and the community -- or you didn't.I hope more of us can take this kind of long-term perspective on the current situation.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A long-termist perspective on EA's current PR crisis, published by Geoffrey Miller on November 18, 2022 on The Effective Altruism Forum.EAs seem so good at taking long-termist perspectives on everything -- other than our own movement's public relations (PR) issues.Let's try to apply some of our long-termist perspective-taking and future point-of-view imagination to the current situation.In ten years, we'll still remember the FTX/SBF crisis as a dramatic betrayal, a public relations (PR) crisis, and a significant financial and reputational setback.However, I'm also confident that in ten years, EA will still exist, will have restored its reputation, will still be attracting great talent, resources, and ideas, and will be doing more good than ever in the world.When we happen to find ourselves living in 'interesting times', it's useful to remember the age-old wisdom: the dogs bark, but the caravan moves on. This too shall pass. Think about all the 'huge crises!!!' that the news has covered over the last few years -- and how quickly each news story died down when the news media shifted to some new crisis narrative that carries enough monetizable outrage potential that it could attract clicks, subscribers, and advertisers.We are part of this week's monetizable outrage narrative. Every other week, ever since the development of the 24-hour news cycle, has had its own monetizable outrage narrative. If you've never been part of an outrage narrative before, welcome to the club. It sucks. It leaves scars. But it is survivable. (Speaking as someone who has survived my own share of public controversy, cancellation, and outrage narratives, and who has worked in several academic subfields that are routinely demonized by the press.)Also, haters gonna hate. Many people resent EA because it makes their warm-glow virtue-signaling seem superficial, irrational, and performative. They know deep in their hearts that it's hard to argue with the core EA values that we should use reason, evidence, and open debate to try to reduce suffering, and to promote a better long-term future. They've been waiting for an opportunity to attack EA as hypocritical or hyper-earnest, as hyper-rational or irrational, as hyper-nerdy or suspiciously cool, as hyper-sexual or incel-adjacent asexual -- anything they can think of. Just remember -- if you wouldn't have taken any misguided, uninformed, snarky critiques of EA seriously last month, don't take them seriously this week. Haters gonna hate.None of this is to minimize the actual, real-world impact of the current crisis. Many people lost of lot of money to a company they trusted. A major tech industry (crypto) will be handicapped for a while. Our organizations face real losses of resources, for a while. These are real problems, but I think they are not existential threats to EA as a movement, and they are certainly not good counter-arguments against EA principles and insights.There will come a time, maybe in the 2050s, when you may be sitting in front of a cheerful Christmas fireplace, a grandkid bouncing on your knee, and your adult kids may ask you to tell them once more the tale of the Great FTX Crisis of 2022, and how it all played out, and died down, and how EA survived and prospered. You won't remember all the breathless EA forum posts, the in-fighting, and the crisis management. You'll just remember that you either kept your faith in the cause and the community -- or you didn't.I hope more of us can take this kind of long-term perspective on the current situation.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Geoffrey Miller https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:33 None full 3815
jFPWZLP5LJ6EsNifj_EA EA - Introducing the Animal Advocacy Bi-Weekly Digest (Nov 4 - Nov 18) by James Ozden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Animal Advocacy Bi-Weekly Digest (Nov 4 - Nov 18), published by James Ozden on November 18, 2022 on The Effective Altruism Forum.We are launching the Animal Advocacy Digest, a project aimed at collating and distributing the best research, news, and updates in animal advocacy on a bi-weekly cadence. Our goal with the project is to help people in the animal advocacy community quickly keep up with the most important information in the space even if they don’t browse the EA Forum or other sources daily. We’ll post the digest as an EA Forum post, as well as distribute it on this public email distribution - so you should sign up if you're interested! To start with, we'll only compile content posted on the EA Forum. This project will run as an experiment for 3 months, at which point we’ll evaluate how useful it is to users.Without further ado, here are the following posts summarised in this digest:Does the US public support radical action against factory farming in the name of animal welfare? by Neil Dullaghan (Rethink Priorities)Subsidies: Which reforms can help animals? by Ren Springlea (Animal Ask)The Welfare Range Table by Bob Fischer (Rethink Priorities)Theories of Welfare and Welfare Range Estimates by Bob Fischer (Rethink Priorities)What matters to shrimps? Factors affecting shrimp suffering in aquaculture by Lucas Lewit-Mendes & Aaron Boddy (Shrimp Welfare Project)Local Action for Animals as a Stepping Stone to State Protections by Precious Hose (original author, Faunalytics)(Bonus extra summary of interesting research) Low-cost climate-change informational intervention reduces meat consumption among students for years - Jalil et. al (2022)Does the US public support radical action against factory farming in the name of animal welfare? By Neil Dullaghan (Rethink Priorities)A pre-registered and US nationally representative survey by Rethink Priorities found approximately ~20% of respondents supported a ban on slaughterhouses when presented with arguments for and against. This number dropped to ~8% when participants were asked to explain their reasoning. Previous surveys by the Sentience Institute in the US found ~40% supported banning slaughterhouses or said ‘don’t know / no opinion’ to questions, highlighting a large discrepancy.Rethink Priorities suggests that previous polls which determined attitudes in response to broad questions (e.g. “I support a ban on slaughterhouses”) may not be accurate indicators of support for certain policies. These findings are notable for animal advocates as previous findings had been cited as support for bold reforms.Neil also notes that for future research, it might be useful to test a radical ask (ban factory farming) and a moderate ask (labelling for cage-free eggs, say) each with a radical message ("meat is murder") versus a moderate one ("human/consumer welfare") somewhat similar to this paper.Subsidies: Which reforms can help animals? - Ren Springlea (Animal Ask)A report by Animal Ask examines different subsidies reforms and how impactful these could be as campaign options for animal advocacy organisations. The five options examined are:Promoting welfare-conditional subsidiesAbolishing or reducing subsidies for meat productionSubsidies for feed cropsPromoting subsidies for plant-based foodsAbolishing subsidies for fisheriesThe authors say that they believe promoting welfare-conditional subsidies is the most promising campaign for animal advocacy organisations. They also believe reducing subsidies for meat production could be promising, however they suggest additional research to ensure that this doesn’t lead to an increase in the number of chickens (and overall animals) killed. They also note that they don’t believe options 3-5 above are promising campaigns, and encourage organisations to stee...]]>
James Ozden https://forum.effectivealtruism.org/posts/jFPWZLP5LJ6EsNifj/introducing-the-animal-advocacy-bi-weekly-digest-nov-4-nov Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Animal Advocacy Bi-Weekly Digest (Nov 4 - Nov 18), published by James Ozden on November 18, 2022 on The Effective Altruism Forum.We are launching the Animal Advocacy Digest, a project aimed at collating and distributing the best research, news, and updates in animal advocacy on a bi-weekly cadence. Our goal with the project is to help people in the animal advocacy community quickly keep up with the most important information in the space even if they don’t browse the EA Forum or other sources daily. We’ll post the digest as an EA Forum post, as well as distribute it on this public email distribution - so you should sign up if you're interested! To start with, we'll only compile content posted on the EA Forum. This project will run as an experiment for 3 months, at which point we’ll evaluate how useful it is to users.Without further ado, here are the following posts summarised in this digest:Does the US public support radical action against factory farming in the name of animal welfare? by Neil Dullaghan (Rethink Priorities)Subsidies: Which reforms can help animals? by Ren Springlea (Animal Ask)The Welfare Range Table by Bob Fischer (Rethink Priorities)Theories of Welfare and Welfare Range Estimates by Bob Fischer (Rethink Priorities)What matters to shrimps? Factors affecting shrimp suffering in aquaculture by Lucas Lewit-Mendes & Aaron Boddy (Shrimp Welfare Project)Local Action for Animals as a Stepping Stone to State Protections by Precious Hose (original author, Faunalytics)(Bonus extra summary of interesting research) Low-cost climate-change informational intervention reduces meat consumption among students for years - Jalil et. al (2022)Does the US public support radical action against factory farming in the name of animal welfare? By Neil Dullaghan (Rethink Priorities)A pre-registered and US nationally representative survey by Rethink Priorities found approximately ~20% of respondents supported a ban on slaughterhouses when presented with arguments for and against. This number dropped to ~8% when participants were asked to explain their reasoning. Previous surveys by the Sentience Institute in the US found ~40% supported banning slaughterhouses or said ‘don’t know / no opinion’ to questions, highlighting a large discrepancy.Rethink Priorities suggests that previous polls which determined attitudes in response to broad questions (e.g. “I support a ban on slaughterhouses”) may not be accurate indicators of support for certain policies. These findings are notable for animal advocates as previous findings had been cited as support for bold reforms.Neil also notes that for future research, it might be useful to test a radical ask (ban factory farming) and a moderate ask (labelling for cage-free eggs, say) each with a radical message ("meat is murder") versus a moderate one ("human/consumer welfare") somewhat similar to this paper.Subsidies: Which reforms can help animals? - Ren Springlea (Animal Ask)A report by Animal Ask examines different subsidies reforms and how impactful these could be as campaign options for animal advocacy organisations. The five options examined are:Promoting welfare-conditional subsidiesAbolishing or reducing subsidies for meat productionSubsidies for feed cropsPromoting subsidies for plant-based foodsAbolishing subsidies for fisheriesThe authors say that they believe promoting welfare-conditional subsidies is the most promising campaign for animal advocacy organisations. They also believe reducing subsidies for meat production could be promising, however they suggest additional research to ensure that this doesn’t lead to an increase in the number of chickens (and overall animals) killed. They also note that they don’t believe options 3-5 above are promising campaigns, and encourage organisations to stee...]]>
Fri, 18 Nov 2022 15:31:04 +0000 EA - Introducing the Animal Advocacy Bi-Weekly Digest (Nov 4 - Nov 18) by James Ozden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Animal Advocacy Bi-Weekly Digest (Nov 4 - Nov 18), published by James Ozden on November 18, 2022 on The Effective Altruism Forum.We are launching the Animal Advocacy Digest, a project aimed at collating and distributing the best research, news, and updates in animal advocacy on a bi-weekly cadence. Our goal with the project is to help people in the animal advocacy community quickly keep up with the most important information in the space even if they don’t browse the EA Forum or other sources daily. We’ll post the digest as an EA Forum post, as well as distribute it on this public email distribution - so you should sign up if you're interested! To start with, we'll only compile content posted on the EA Forum. This project will run as an experiment for 3 months, at which point we’ll evaluate how useful it is to users.Without further ado, here are the following posts summarised in this digest:Does the US public support radical action against factory farming in the name of animal welfare? by Neil Dullaghan (Rethink Priorities)Subsidies: Which reforms can help animals? by Ren Springlea (Animal Ask)The Welfare Range Table by Bob Fischer (Rethink Priorities)Theories of Welfare and Welfare Range Estimates by Bob Fischer (Rethink Priorities)What matters to shrimps? Factors affecting shrimp suffering in aquaculture by Lucas Lewit-Mendes & Aaron Boddy (Shrimp Welfare Project)Local Action for Animals as a Stepping Stone to State Protections by Precious Hose (original author, Faunalytics)(Bonus extra summary of interesting research) Low-cost climate-change informational intervention reduces meat consumption among students for years - Jalil et. al (2022)Does the US public support radical action against factory farming in the name of animal welfare? By Neil Dullaghan (Rethink Priorities)A pre-registered and US nationally representative survey by Rethink Priorities found approximately ~20% of respondents supported a ban on slaughterhouses when presented with arguments for and against. This number dropped to ~8% when participants were asked to explain their reasoning. Previous surveys by the Sentience Institute in the US found ~40% supported banning slaughterhouses or said ‘don’t know / no opinion’ to questions, highlighting a large discrepancy.Rethink Priorities suggests that previous polls which determined attitudes in response to broad questions (e.g. “I support a ban on slaughterhouses”) may not be accurate indicators of support for certain policies. These findings are notable for animal advocates as previous findings had been cited as support for bold reforms.Neil also notes that for future research, it might be useful to test a radical ask (ban factory farming) and a moderate ask (labelling for cage-free eggs, say) each with a radical message ("meat is murder") versus a moderate one ("human/consumer welfare") somewhat similar to this paper.Subsidies: Which reforms can help animals? - Ren Springlea (Animal Ask)A report by Animal Ask examines different subsidies reforms and how impactful these could be as campaign options for animal advocacy organisations. The five options examined are:Promoting welfare-conditional subsidiesAbolishing or reducing subsidies for meat productionSubsidies for feed cropsPromoting subsidies for plant-based foodsAbolishing subsidies for fisheriesThe authors say that they believe promoting welfare-conditional subsidies is the most promising campaign for animal advocacy organisations. They also believe reducing subsidies for meat production could be promising, however they suggest additional research to ensure that this doesn’t lead to an increase in the number of chickens (and overall animals) killed. They also note that they don’t believe options 3-5 above are promising campaigns, and encourage organisations to stee...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Animal Advocacy Bi-Weekly Digest (Nov 4 - Nov 18), published by James Ozden on November 18, 2022 on The Effective Altruism Forum.We are launching the Animal Advocacy Digest, a project aimed at collating and distributing the best research, news, and updates in animal advocacy on a bi-weekly cadence. Our goal with the project is to help people in the animal advocacy community quickly keep up with the most important information in the space even if they don’t browse the EA Forum or other sources daily. We’ll post the digest as an EA Forum post, as well as distribute it on this public email distribution - so you should sign up if you're interested! To start with, we'll only compile content posted on the EA Forum. This project will run as an experiment for 3 months, at which point we’ll evaluate how useful it is to users.Without further ado, here are the following posts summarised in this digest:Does the US public support radical action against factory farming in the name of animal welfare? by Neil Dullaghan (Rethink Priorities)Subsidies: Which reforms can help animals? by Ren Springlea (Animal Ask)The Welfare Range Table by Bob Fischer (Rethink Priorities)Theories of Welfare and Welfare Range Estimates by Bob Fischer (Rethink Priorities)What matters to shrimps? Factors affecting shrimp suffering in aquaculture by Lucas Lewit-Mendes & Aaron Boddy (Shrimp Welfare Project)Local Action for Animals as a Stepping Stone to State Protections by Precious Hose (original author, Faunalytics)(Bonus extra summary of interesting research) Low-cost climate-change informational intervention reduces meat consumption among students for years - Jalil et. al (2022)Does the US public support radical action against factory farming in the name of animal welfare? By Neil Dullaghan (Rethink Priorities)A pre-registered and US nationally representative survey by Rethink Priorities found approximately ~20% of respondents supported a ban on slaughterhouses when presented with arguments for and against. This number dropped to ~8% when participants were asked to explain their reasoning. Previous surveys by the Sentience Institute in the US found ~40% supported banning slaughterhouses or said ‘don’t know / no opinion’ to questions, highlighting a large discrepancy.Rethink Priorities suggests that previous polls which determined attitudes in response to broad questions (e.g. “I support a ban on slaughterhouses”) may not be accurate indicators of support for certain policies. These findings are notable for animal advocates as previous findings had been cited as support for bold reforms.Neil also notes that for future research, it might be useful to test a radical ask (ban factory farming) and a moderate ask (labelling for cage-free eggs, say) each with a radical message ("meat is murder") versus a moderate one ("human/consumer welfare") somewhat similar to this paper.Subsidies: Which reforms can help animals? - Ren Springlea (Animal Ask)A report by Animal Ask examines different subsidies reforms and how impactful these could be as campaign options for animal advocacy organisations. The five options examined are:Promoting welfare-conditional subsidiesAbolishing or reducing subsidies for meat productionSubsidies for feed cropsPromoting subsidies for plant-based foodsAbolishing subsidies for fisheriesThe authors say that they believe promoting welfare-conditional subsidies is the most promising campaign for animal advocacy organisations. They also believe reducing subsidies for meat production could be promising, however they suggest additional research to ensure that this doesn’t lead to an increase in the number of chickens (and overall animals) killed. They also note that they don’t believe options 3-5 above are promising campaigns, and encourage organisations to stee...]]>
James Ozden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:30 None full 3816
JgqEqsa6iAtqGLYmw_EA EA - The elephant in the bednet: the importance of philosophy when choosing between extending and improving lives by MichaelPlant Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The elephant in the bednet: the importance of philosophy when choosing between extending and improving lives, published by MichaelPlant on November 18, 2022 on The Effective Altruism Forum.Michael Plant, Joel McGuire, and Samuel DupretSummaryHow should we compare the value of extending lives to improving lives? Doing so requires us to make various philosophical assumptions, either implicitly or explicitly. But these choices are rarely acknowledged or discussed by decision-makers, all of them are controversial, and they have significant implications for how resources should be distributed.We set out two crucial philosophical issues: (A) an account of the badness of death, how to determine the relative value of deaths at different ages, and (B) locating the neutral point, the place on the wellbeing scale at which life is neither good nor bad for someone. We then illustrate how different choices for (A) and (B) alter the cost-effectiveness of three charities which operate in low-income countries, provide different interventions, and are considered to be some of the most cost-effective ways to help others: Against Malaria Foundation (insecticide-treated nets), GiveDirectly (cash transfers), and StrongMinds (group therapy for depression). We assess all three in terms of wellbeing-adjusted life years (WELLBYs) and explain why we do not, and cannot, use standard health metrics (QALYs and DALYs) for this purpose. We show how much cost-effectiveness changes by shifting from one extreme of (reasonable) opinion to the other. At one end, AMF is 1.3x better than StrongMinds.At the other, StrongMinds is 12x better than AMF. We do not advocate for any particular view. Our aim is simply to show that these philosophical choices are decision-relevant and merit further discussion.Our results are displayed in the chart below, which plots the cost-effectiveness of the three charities in WELLBYs/$1,000.StrongMinds and GiveDirectly are represented with flat, dashed lines because their cost-effectiveness does not change under the different assumptions. The changes in AMF’s cost-effectiveness are a result of two varying factors. One is using different accounts of the badness of death, that is, ways to assign value to saving lives at different ages; these three accounts go by unintuitive names in the philosophical literature, so we’ve put a slogan in brackets after each one to clarify their differences: deprivationism (prioritise the youngest), the time-relative interest account (prioritise older children over infants), and Epicureanism (death isn’t bad for anyone – prioritise living well, not living long). We also consider including two variants of the time-relative interest account (TRIA); on these, life has a maximum value at the ages of either 5 or 25.The other factor is where to locate the neutral point, the place at which someone has overall zero wellbeing, on a 0-10 life satisfaction scale; we assess that as being at each location between 0/10 and 5/10. As you can see, AMF’s cost-effectiveness changes a lot. It is only more cost-effective than StrongMinds if you adopt deprivationism and place the neutral point below 1.1. IntroductionHow should we compare the value of extending lives to improving lives? Let’s focus our minds with a real choice. On current estimates, for around $4,500, you can expect to save one child’s life by providing insecticide-treated nets (ITNs). Alternatively, that sum could provide a $1,000 cash transfer to four-and-a-half families living in extreme poverty ($1,000 is about a year’s household income). The cost of both choices is the same, but the outcomes differ. Which one will do the most good?This is a difficult and discomforting ethical question. How might we answer it? And how much would different answers change the priorities?There are various m...]]>
MichaelPlant https://forum.effectivealtruism.org/posts/JgqEqsa6iAtqGLYmw/the-elephant-in-the-bednet-the-importance-of-philosophy-when-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The elephant in the bednet: the importance of philosophy when choosing between extending and improving lives, published by MichaelPlant on November 18, 2022 on The Effective Altruism Forum.Michael Plant, Joel McGuire, and Samuel DupretSummaryHow should we compare the value of extending lives to improving lives? Doing so requires us to make various philosophical assumptions, either implicitly or explicitly. But these choices are rarely acknowledged or discussed by decision-makers, all of them are controversial, and they have significant implications for how resources should be distributed.We set out two crucial philosophical issues: (A) an account of the badness of death, how to determine the relative value of deaths at different ages, and (B) locating the neutral point, the place on the wellbeing scale at which life is neither good nor bad for someone. We then illustrate how different choices for (A) and (B) alter the cost-effectiveness of three charities which operate in low-income countries, provide different interventions, and are considered to be some of the most cost-effective ways to help others: Against Malaria Foundation (insecticide-treated nets), GiveDirectly (cash transfers), and StrongMinds (group therapy for depression). We assess all three in terms of wellbeing-adjusted life years (WELLBYs) and explain why we do not, and cannot, use standard health metrics (QALYs and DALYs) for this purpose. We show how much cost-effectiveness changes by shifting from one extreme of (reasonable) opinion to the other. At one end, AMF is 1.3x better than StrongMinds.At the other, StrongMinds is 12x better than AMF. We do not advocate for any particular view. Our aim is simply to show that these philosophical choices are decision-relevant and merit further discussion.Our results are displayed in the chart below, which plots the cost-effectiveness of the three charities in WELLBYs/$1,000.StrongMinds and GiveDirectly are represented with flat, dashed lines because their cost-effectiveness does not change under the different assumptions. The changes in AMF’s cost-effectiveness are a result of two varying factors. One is using different accounts of the badness of death, that is, ways to assign value to saving lives at different ages; these three accounts go by unintuitive names in the philosophical literature, so we’ve put a slogan in brackets after each one to clarify their differences: deprivationism (prioritise the youngest), the time-relative interest account (prioritise older children over infants), and Epicureanism (death isn’t bad for anyone – prioritise living well, not living long). We also consider including two variants of the time-relative interest account (TRIA); on these, life has a maximum value at the ages of either 5 or 25.The other factor is where to locate the neutral point, the place at which someone has overall zero wellbeing, on a 0-10 life satisfaction scale; we assess that as being at each location between 0/10 and 5/10. As you can see, AMF’s cost-effectiveness changes a lot. It is only more cost-effective than StrongMinds if you adopt deprivationism and place the neutral point below 1.1. IntroductionHow should we compare the value of extending lives to improving lives? Let’s focus our minds with a real choice. On current estimates, for around $4,500, you can expect to save one child’s life by providing insecticide-treated nets (ITNs). Alternatively, that sum could provide a $1,000 cash transfer to four-and-a-half families living in extreme poverty ($1,000 is about a year’s household income). The cost of both choices is the same, but the outcomes differ. Which one will do the most good?This is a difficult and discomforting ethical question. How might we answer it? And how much would different answers change the priorities?There are various m...]]>
Fri, 18 Nov 2022 13:40:01 +0000 EA - The elephant in the bednet: the importance of philosophy when choosing between extending and improving lives by MichaelPlant Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The elephant in the bednet: the importance of philosophy when choosing between extending and improving lives, published by MichaelPlant on November 18, 2022 on The Effective Altruism Forum.Michael Plant, Joel McGuire, and Samuel DupretSummaryHow should we compare the value of extending lives to improving lives? Doing so requires us to make various philosophical assumptions, either implicitly or explicitly. But these choices are rarely acknowledged or discussed by decision-makers, all of them are controversial, and they have significant implications for how resources should be distributed.We set out two crucial philosophical issues: (A) an account of the badness of death, how to determine the relative value of deaths at different ages, and (B) locating the neutral point, the place on the wellbeing scale at which life is neither good nor bad for someone. We then illustrate how different choices for (A) and (B) alter the cost-effectiveness of three charities which operate in low-income countries, provide different interventions, and are considered to be some of the most cost-effective ways to help others: Against Malaria Foundation (insecticide-treated nets), GiveDirectly (cash transfers), and StrongMinds (group therapy for depression). We assess all three in terms of wellbeing-adjusted life years (WELLBYs) and explain why we do not, and cannot, use standard health metrics (QALYs and DALYs) for this purpose. We show how much cost-effectiveness changes by shifting from one extreme of (reasonable) opinion to the other. At one end, AMF is 1.3x better than StrongMinds.At the other, StrongMinds is 12x better than AMF. We do not advocate for any particular view. Our aim is simply to show that these philosophical choices are decision-relevant and merit further discussion.Our results are displayed in the chart below, which plots the cost-effectiveness of the three charities in WELLBYs/$1,000.StrongMinds and GiveDirectly are represented with flat, dashed lines because their cost-effectiveness does not change under the different assumptions. The changes in AMF’s cost-effectiveness are a result of two varying factors. One is using different accounts of the badness of death, that is, ways to assign value to saving lives at different ages; these three accounts go by unintuitive names in the philosophical literature, so we’ve put a slogan in brackets after each one to clarify their differences: deprivationism (prioritise the youngest), the time-relative interest account (prioritise older children over infants), and Epicureanism (death isn’t bad for anyone – prioritise living well, not living long). We also consider including two variants of the time-relative interest account (TRIA); on these, life has a maximum value at the ages of either 5 or 25.The other factor is where to locate the neutral point, the place at which someone has overall zero wellbeing, on a 0-10 life satisfaction scale; we assess that as being at each location between 0/10 and 5/10. As you can see, AMF’s cost-effectiveness changes a lot. It is only more cost-effective than StrongMinds if you adopt deprivationism and place the neutral point below 1.1. IntroductionHow should we compare the value of extending lives to improving lives? Let’s focus our minds with a real choice. On current estimates, for around $4,500, you can expect to save one child’s life by providing insecticide-treated nets (ITNs). Alternatively, that sum could provide a $1,000 cash transfer to four-and-a-half families living in extreme poverty ($1,000 is about a year’s household income). The cost of both choices is the same, but the outcomes differ. Which one will do the most good?This is a difficult and discomforting ethical question. How might we answer it? And how much would different answers change the priorities?There are various m...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The elephant in the bednet: the importance of philosophy when choosing between extending and improving lives, published by MichaelPlant on November 18, 2022 on The Effective Altruism Forum.Michael Plant, Joel McGuire, and Samuel DupretSummaryHow should we compare the value of extending lives to improving lives? Doing so requires us to make various philosophical assumptions, either implicitly or explicitly. But these choices are rarely acknowledged or discussed by decision-makers, all of them are controversial, and they have significant implications for how resources should be distributed.We set out two crucial philosophical issues: (A) an account of the badness of death, how to determine the relative value of deaths at different ages, and (B) locating the neutral point, the place on the wellbeing scale at which life is neither good nor bad for someone. We then illustrate how different choices for (A) and (B) alter the cost-effectiveness of three charities which operate in low-income countries, provide different interventions, and are considered to be some of the most cost-effective ways to help others: Against Malaria Foundation (insecticide-treated nets), GiveDirectly (cash transfers), and StrongMinds (group therapy for depression). We assess all three in terms of wellbeing-adjusted life years (WELLBYs) and explain why we do not, and cannot, use standard health metrics (QALYs and DALYs) for this purpose. We show how much cost-effectiveness changes by shifting from one extreme of (reasonable) opinion to the other. At one end, AMF is 1.3x better than StrongMinds.At the other, StrongMinds is 12x better than AMF. We do not advocate for any particular view. Our aim is simply to show that these philosophical choices are decision-relevant and merit further discussion.Our results are displayed in the chart below, which plots the cost-effectiveness of the three charities in WELLBYs/$1,000.StrongMinds and GiveDirectly are represented with flat, dashed lines because their cost-effectiveness does not change under the different assumptions. The changes in AMF’s cost-effectiveness are a result of two varying factors. One is using different accounts of the badness of death, that is, ways to assign value to saving lives at different ages; these three accounts go by unintuitive names in the philosophical literature, so we’ve put a slogan in brackets after each one to clarify their differences: deprivationism (prioritise the youngest), the time-relative interest account (prioritise older children over infants), and Epicureanism (death isn’t bad for anyone – prioritise living well, not living long). We also consider including two variants of the time-relative interest account (TRIA); on these, life has a maximum value at the ages of either 5 or 25.The other factor is where to locate the neutral point, the place at which someone has overall zero wellbeing, on a 0-10 life satisfaction scale; we assess that as being at each location between 0/10 and 5/10. As you can see, AMF’s cost-effectiveness changes a lot. It is only more cost-effective than StrongMinds if you adopt deprivationism and place the neutral point below 1.1. IntroductionHow should we compare the value of extending lives to improving lives? Let’s focus our minds with a real choice. On current estimates, for around $4,500, you can expect to save one child’s life by providing insecticide-treated nets (ITNs). Alternatively, that sum could provide a $1,000 cash transfer to four-and-a-half families living in extreme poverty ($1,000 is about a year’s household income). The cost of both choices is the same, but the outcomes differ. Which one will do the most good?This is a difficult and discomforting ethical question. How might we answer it? And how much would different answers change the priorities?There are various m...]]>
MichaelPlant https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 55:54 None full 3818
7ZjJ9w2xf7Mkdofk8_EA EA - Does Sam make me want to renounce the actions of the EA community? No. Does your reaction? Absolutely. by GoodEAGoneBad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does Sam make me want to renounce the actions of the EA community? No. Does your reaction? Absolutely., published by GoodEAGoneBad on November 18, 2022 on The Effective Altruism Forum.Alright, you finally broke me. No honey, all vinegar.I have spent close to ten years in EA. Over those ten years, I have worked extremely hard and invested in our community. I have organized and helped newcomers and junior members when I’m exhausted, when I’m facing family crises, when the last thing I want to do is get on the phone or train. Instead of going to the private sector and making money, I have stuck around doing impactful jobs that I largely hate as a way of dedicating myself to helping others and this community to thrive. I don’t regret this. Do FTX or Sam Bankman-Fried's actions change my assessment of my actions? No. Do the reactions I have seen here? Yes, I’m afraid so.The things that draw me to the EA community are, above anything else, its commitment to supporting one another and working together as a team to reduce suffering. Throughout the years, I have held friends as they cried because a project they were doing failed, celebrated their successes, and watched again and again as EAs do the brave thing and are then there for one another. Through this, I have built friendships and relationships that are the joys of my life.Now the new catechism. Do I condemn fraud? Yes. Of course, I do. This is a stupid question EAs keep performatively asking and answering. Everyone opposes fraud, there is no one on the other side of this issue. Sam’s actions were awful and I condemn them. Do I believe we should circle our wagons and defend Sam? No. However, there is a huge difference between condemning his actions while rallying together to support one another through this awful time and what I see happening here which I believe can best be described as a witch-hunt against everything and everyone that ever intersected with Sam or his beliefs.Over the last few days, posters on this forum, Twitter, and Facebook have used this scandal to attack and air every single grievance they have ever had against Effective Altruism or associated individuals and organizations. Even imaginary ones. Especially imaginary ones. This has included Will MacAskill and other thought leaders for the grave sin of not magically predicting that someone whose every external action suggested that he wanted to work with us to make the world a better place, would YOLO it and go Bernie Madoff. The hunt has included members of Sam’s family for the grave sin of being related to him. It has included attributing the cause of Sam’s actions to everything from issues with diversity and inclusivity, lack of transparency in EAG admissions, the pitfalls of caring if we all get eviscerated by a nuke or rogue AI, and, of course, our office spaces and dating habits.Like Stop. There are lessons to be learned here and I would have been fully down for learning them and working together to fix them with all of you. But why exactly should I help those in the community who believe that the moral thing to do when someone is on their knees is to curb stomp them while yelling “I should have been admitted to EAG 2016!”? Why should I expose myself further by doing ambitious things (No I don’t mean fraud- that’s not an ambitious thing that’s a --- criminal--- thing) when if I fail people are going to make everything worse by screaming “I told you so” to signal that they never would have been such a newb? Yeah. No. The circle I’m drawing around who is and is not in my community is getting dramatically redrawn. This is not because one person or company made a series of very bad decisions, it's because so many of your actions are those of people I will not invest in further and who I don't want anywhere near my life or life’s work.I’ll k...]]>
GoodEAGoneBad https://forum.effectivealtruism.org/posts/7ZjJ9w2xf7Mkdofk8/does-sam-make-me-want-to-renounce-the-actions-of-the-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does Sam make me want to renounce the actions of the EA community? No. Does your reaction? Absolutely., published by GoodEAGoneBad on November 18, 2022 on The Effective Altruism Forum.Alright, you finally broke me. No honey, all vinegar.I have spent close to ten years in EA. Over those ten years, I have worked extremely hard and invested in our community. I have organized and helped newcomers and junior members when I’m exhausted, when I’m facing family crises, when the last thing I want to do is get on the phone or train. Instead of going to the private sector and making money, I have stuck around doing impactful jobs that I largely hate as a way of dedicating myself to helping others and this community to thrive. I don’t regret this. Do FTX or Sam Bankman-Fried's actions change my assessment of my actions? No. Do the reactions I have seen here? Yes, I’m afraid so.The things that draw me to the EA community are, above anything else, its commitment to supporting one another and working together as a team to reduce suffering. Throughout the years, I have held friends as they cried because a project they were doing failed, celebrated their successes, and watched again and again as EAs do the brave thing and are then there for one another. Through this, I have built friendships and relationships that are the joys of my life.Now the new catechism. Do I condemn fraud? Yes. Of course, I do. This is a stupid question EAs keep performatively asking and answering. Everyone opposes fraud, there is no one on the other side of this issue. Sam’s actions were awful and I condemn them. Do I believe we should circle our wagons and defend Sam? No. However, there is a huge difference between condemning his actions while rallying together to support one another through this awful time and what I see happening here which I believe can best be described as a witch-hunt against everything and everyone that ever intersected with Sam or his beliefs.Over the last few days, posters on this forum, Twitter, and Facebook have used this scandal to attack and air every single grievance they have ever had against Effective Altruism or associated individuals and organizations. Even imaginary ones. Especially imaginary ones. This has included Will MacAskill and other thought leaders for the grave sin of not magically predicting that someone whose every external action suggested that he wanted to work with us to make the world a better place, would YOLO it and go Bernie Madoff. The hunt has included members of Sam’s family for the grave sin of being related to him. It has included attributing the cause of Sam’s actions to everything from issues with diversity and inclusivity, lack of transparency in EAG admissions, the pitfalls of caring if we all get eviscerated by a nuke or rogue AI, and, of course, our office spaces and dating habits.Like Stop. There are lessons to be learned here and I would have been fully down for learning them and working together to fix them with all of you. But why exactly should I help those in the community who believe that the moral thing to do when someone is on their knees is to curb stomp them while yelling “I should have been admitted to EAG 2016!”? Why should I expose myself further by doing ambitious things (No I don’t mean fraud- that’s not an ambitious thing that’s a --- criminal--- thing) when if I fail people are going to make everything worse by screaming “I told you so” to signal that they never would have been such a newb? Yeah. No. The circle I’m drawing around who is and is not in my community is getting dramatically redrawn. This is not because one person or company made a series of very bad decisions, it's because so many of your actions are those of people I will not invest in further and who I don't want anywhere near my life or life’s work.I’ll k...]]>
Fri, 18 Nov 2022 11:47:07 +0000 EA - Does Sam make me want to renounce the actions of the EA community? No. Does your reaction? Absolutely. by GoodEAGoneBad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does Sam make me want to renounce the actions of the EA community? No. Does your reaction? Absolutely., published by GoodEAGoneBad on November 18, 2022 on The Effective Altruism Forum.Alright, you finally broke me. No honey, all vinegar.I have spent close to ten years in EA. Over those ten years, I have worked extremely hard and invested in our community. I have organized and helped newcomers and junior members when I’m exhausted, when I’m facing family crises, when the last thing I want to do is get on the phone or train. Instead of going to the private sector and making money, I have stuck around doing impactful jobs that I largely hate as a way of dedicating myself to helping others and this community to thrive. I don’t regret this. Do FTX or Sam Bankman-Fried's actions change my assessment of my actions? No. Do the reactions I have seen here? Yes, I’m afraid so.The things that draw me to the EA community are, above anything else, its commitment to supporting one another and working together as a team to reduce suffering. Throughout the years, I have held friends as they cried because a project they were doing failed, celebrated their successes, and watched again and again as EAs do the brave thing and are then there for one another. Through this, I have built friendships and relationships that are the joys of my life.Now the new catechism. Do I condemn fraud? Yes. Of course, I do. This is a stupid question EAs keep performatively asking and answering. Everyone opposes fraud, there is no one on the other side of this issue. Sam’s actions were awful and I condemn them. Do I believe we should circle our wagons and defend Sam? No. However, there is a huge difference between condemning his actions while rallying together to support one another through this awful time and what I see happening here which I believe can best be described as a witch-hunt against everything and everyone that ever intersected with Sam or his beliefs.Over the last few days, posters on this forum, Twitter, and Facebook have used this scandal to attack and air every single grievance they have ever had against Effective Altruism or associated individuals and organizations. Even imaginary ones. Especially imaginary ones. This has included Will MacAskill and other thought leaders for the grave sin of not magically predicting that someone whose every external action suggested that he wanted to work with us to make the world a better place, would YOLO it and go Bernie Madoff. The hunt has included members of Sam’s family for the grave sin of being related to him. It has included attributing the cause of Sam’s actions to everything from issues with diversity and inclusivity, lack of transparency in EAG admissions, the pitfalls of caring if we all get eviscerated by a nuke or rogue AI, and, of course, our office spaces and dating habits.Like Stop. There are lessons to be learned here and I would have been fully down for learning them and working together to fix them with all of you. But why exactly should I help those in the community who believe that the moral thing to do when someone is on their knees is to curb stomp them while yelling “I should have been admitted to EAG 2016!”? Why should I expose myself further by doing ambitious things (No I don’t mean fraud- that’s not an ambitious thing that’s a --- criminal--- thing) when if I fail people are going to make everything worse by screaming “I told you so” to signal that they never would have been such a newb? Yeah. No. The circle I’m drawing around who is and is not in my community is getting dramatically redrawn. This is not because one person or company made a series of very bad decisions, it's because so many of your actions are those of people I will not invest in further and who I don't want anywhere near my life or life’s work.I’ll k...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does Sam make me want to renounce the actions of the EA community? No. Does your reaction? Absolutely., published by GoodEAGoneBad on November 18, 2022 on The Effective Altruism Forum.Alright, you finally broke me. No honey, all vinegar.I have spent close to ten years in EA. Over those ten years, I have worked extremely hard and invested in our community. I have organized and helped newcomers and junior members when I’m exhausted, when I’m facing family crises, when the last thing I want to do is get on the phone or train. Instead of going to the private sector and making money, I have stuck around doing impactful jobs that I largely hate as a way of dedicating myself to helping others and this community to thrive. I don’t regret this. Do FTX or Sam Bankman-Fried's actions change my assessment of my actions? No. Do the reactions I have seen here? Yes, I’m afraid so.The things that draw me to the EA community are, above anything else, its commitment to supporting one another and working together as a team to reduce suffering. Throughout the years, I have held friends as they cried because a project they were doing failed, celebrated their successes, and watched again and again as EAs do the brave thing and are then there for one another. Through this, I have built friendships and relationships that are the joys of my life.Now the new catechism. Do I condemn fraud? Yes. Of course, I do. This is a stupid question EAs keep performatively asking and answering. Everyone opposes fraud, there is no one on the other side of this issue. Sam’s actions were awful and I condemn them. Do I believe we should circle our wagons and defend Sam? No. However, there is a huge difference between condemning his actions while rallying together to support one another through this awful time and what I see happening here which I believe can best be described as a witch-hunt against everything and everyone that ever intersected with Sam or his beliefs.Over the last few days, posters on this forum, Twitter, and Facebook have used this scandal to attack and air every single grievance they have ever had against Effective Altruism or associated individuals and organizations. Even imaginary ones. Especially imaginary ones. This has included Will MacAskill and other thought leaders for the grave sin of not magically predicting that someone whose every external action suggested that he wanted to work with us to make the world a better place, would YOLO it and go Bernie Madoff. The hunt has included members of Sam’s family for the grave sin of being related to him. It has included attributing the cause of Sam’s actions to everything from issues with diversity and inclusivity, lack of transparency in EAG admissions, the pitfalls of caring if we all get eviscerated by a nuke or rogue AI, and, of course, our office spaces and dating habits.Like Stop. There are lessons to be learned here and I would have been fully down for learning them and working together to fix them with all of you. But why exactly should I help those in the community who believe that the moral thing to do when someone is on their knees is to curb stomp them while yelling “I should have been admitted to EAG 2016!”? Why should I expose myself further by doing ambitious things (No I don’t mean fraud- that’s not an ambitious thing that’s a --- criminal--- thing) when if I fail people are going to make everything worse by screaming “I told you so” to signal that they never would have been such a newb? Yeah. No. The circle I’m drawing around who is and is not in my community is getting dramatically redrawn. This is not because one person or company made a series of very bad decisions, it's because so many of your actions are those of people I will not invest in further and who I don't want anywhere near my life or life’s work.I’ll k...]]>
GoodEAGoneBad https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:15 None full 3819
56CHyqoZskFejWgae_EA EA - EA is a global community - but should it be? by Davidmanheim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is a global community - but should it be?, published by Davidmanheim on November 18, 2022 on The Effective Altruism Forum.Without trying to wade into definitions, effective altruism is not just a philosophy and a plan of action, it’s also a community. And that means that community dynamics are incredibly important in shaping both the people involved, and the ideas. Healthy communities can make people happier, more effective, and better citizens locally and globally - but not all communities are healthy. A number of people have voiced concerns about the EA community in the recent past, and I said at the time that I think that we needed to take those concerns seriously. The failure of the community to realize what was happening with FTX isn’t itself an indictment of the community - especially given that their major investors did not know - but it’s a symptom that reinforces many of the earlier complaints.The solutions seem unclear, but there are two very different paths that would address the failure - either reform, or rethinking the entire idea of EA as a community. So while people are thinking about changes, I’d like to suggest that we not take the default path of least resistance reforms, at least without seriously considering the alternative.“The community” failed?Many people have said that the EA community failed when they didn’t realize what SBF was doing. Others have responded that no, we should not blame ourselves. (As an aside, when Eliezer Yudkowsky is telling you that you’re overdoing heroic responsibility, you’ve clearly gone too far.) But when someone begins giving to EA causes, whether individually, or via Founders Pledge, or via setting up something like SFF, there is no-one vetting them for being honest or well controlled.The community was trusting - in this case, much too trusting. And people have said that they trusted the apparent (but illusory) consensus of EAs about FTX. I am one of them. We were all too trusting of someone who, according to several reports, had a history of breaking rules and cheating others, including an acrimonious split that happened early on at Alameda, and evidently more recently frontrunning. But the people who raised flags were evidently ignored, or in other cases feared being pariahs for speaking out more publicly.But the idea that I and others trusted in “the community” is itself a problem. Like Rob Wiblin, I generally subscribe to the idea that most people can be trusted. But I wasn’t sufficiently cautious about how trust that applies to “you won’t steal from my wallet, even if you’re pretty sure you can get away with it,” doesn’t scale to “you can run a large business or charity with effectively no oversight.” A community that trusts by default is only sustainable if it is small. Claiming to subscribe to EA ideas, especially in a scenario where you can be paid well to do so, isn’t much of a reason to trust anyone. And given the size of the EA community, we’ve already passed the limits of where trusting others because of shared values is viable.Failures of TrustThere are two ways to have high trust: naivety, and sophistication. The naive way is what EA groups have employed so far, and the sophisticated way requires infrastructure to make cheating difficult and costly.To explain, when I started in graduate school, I entered a high-trust environment. I never thought about it, partly because I grew up in a religious community that was high trust. So in grad school, I was comfortable if I left my wallet on my desk when going to the bathroom, or even sometimes when I had an hour-long meeting elsewhere in the building.I think during my second year, someone had something stolen from their desk - I don’t recall what, maybe it was a wallet. We all received an email saying that if someone took it, they would be expel...]]>
Davidmanheim https://forum.effectivealtruism.org/posts/56CHyqoZskFejWgae/ea-is-a-global-community-but-should-it-be Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is a global community - but should it be?, published by Davidmanheim on November 18, 2022 on The Effective Altruism Forum.Without trying to wade into definitions, effective altruism is not just a philosophy and a plan of action, it’s also a community. And that means that community dynamics are incredibly important in shaping both the people involved, and the ideas. Healthy communities can make people happier, more effective, and better citizens locally and globally - but not all communities are healthy. A number of people have voiced concerns about the EA community in the recent past, and I said at the time that I think that we needed to take those concerns seriously. The failure of the community to realize what was happening with FTX isn’t itself an indictment of the community - especially given that their major investors did not know - but it’s a symptom that reinforces many of the earlier complaints.The solutions seem unclear, but there are two very different paths that would address the failure - either reform, or rethinking the entire idea of EA as a community. So while people are thinking about changes, I’d like to suggest that we not take the default path of least resistance reforms, at least without seriously considering the alternative.“The community” failed?Many people have said that the EA community failed when they didn’t realize what SBF was doing. Others have responded that no, we should not blame ourselves. (As an aside, when Eliezer Yudkowsky is telling you that you’re overdoing heroic responsibility, you’ve clearly gone too far.) But when someone begins giving to EA causes, whether individually, or via Founders Pledge, or via setting up something like SFF, there is no-one vetting them for being honest or well controlled.The community was trusting - in this case, much too trusting. And people have said that they trusted the apparent (but illusory) consensus of EAs about FTX. I am one of them. We were all too trusting of someone who, according to several reports, had a history of breaking rules and cheating others, including an acrimonious split that happened early on at Alameda, and evidently more recently frontrunning. But the people who raised flags were evidently ignored, or in other cases feared being pariahs for speaking out more publicly.But the idea that I and others trusted in “the community” is itself a problem. Like Rob Wiblin, I generally subscribe to the idea that most people can be trusted. But I wasn’t sufficiently cautious about how trust that applies to “you won’t steal from my wallet, even if you’re pretty sure you can get away with it,” doesn’t scale to “you can run a large business or charity with effectively no oversight.” A community that trusts by default is only sustainable if it is small. Claiming to subscribe to EA ideas, especially in a scenario where you can be paid well to do so, isn’t much of a reason to trust anyone. And given the size of the EA community, we’ve already passed the limits of where trusting others because of shared values is viable.Failures of TrustThere are two ways to have high trust: naivety, and sophistication. The naive way is what EA groups have employed so far, and the sophisticated way requires infrastructure to make cheating difficult and costly.To explain, when I started in graduate school, I entered a high-trust environment. I never thought about it, partly because I grew up in a religious community that was high trust. So in grad school, I was comfortable if I left my wallet on my desk when going to the bathroom, or even sometimes when I had an hour-long meeting elsewhere in the building.I think during my second year, someone had something stolen from their desk - I don’t recall what, maybe it was a wallet. We all received an email saying that if someone took it, they would be expel...]]>
Fri, 18 Nov 2022 10:39:41 +0000 EA - EA is a global community - but should it be? by Davidmanheim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is a global community - but should it be?, published by Davidmanheim on November 18, 2022 on The Effective Altruism Forum.Without trying to wade into definitions, effective altruism is not just a philosophy and a plan of action, it’s also a community. And that means that community dynamics are incredibly important in shaping both the people involved, and the ideas. Healthy communities can make people happier, more effective, and better citizens locally and globally - but not all communities are healthy. A number of people have voiced concerns about the EA community in the recent past, and I said at the time that I think that we needed to take those concerns seriously. The failure of the community to realize what was happening with FTX isn’t itself an indictment of the community - especially given that their major investors did not know - but it’s a symptom that reinforces many of the earlier complaints.The solutions seem unclear, but there are two very different paths that would address the failure - either reform, or rethinking the entire idea of EA as a community. So while people are thinking about changes, I’d like to suggest that we not take the default path of least resistance reforms, at least without seriously considering the alternative.“The community” failed?Many people have said that the EA community failed when they didn’t realize what SBF was doing. Others have responded that no, we should not blame ourselves. (As an aside, when Eliezer Yudkowsky is telling you that you’re overdoing heroic responsibility, you’ve clearly gone too far.) But when someone begins giving to EA causes, whether individually, or via Founders Pledge, or via setting up something like SFF, there is no-one vetting them for being honest or well controlled.The community was trusting - in this case, much too trusting. And people have said that they trusted the apparent (but illusory) consensus of EAs about FTX. I am one of them. We were all too trusting of someone who, according to several reports, had a history of breaking rules and cheating others, including an acrimonious split that happened early on at Alameda, and evidently more recently frontrunning. But the people who raised flags were evidently ignored, or in other cases feared being pariahs for speaking out more publicly.But the idea that I and others trusted in “the community” is itself a problem. Like Rob Wiblin, I generally subscribe to the idea that most people can be trusted. But I wasn’t sufficiently cautious about how trust that applies to “you won’t steal from my wallet, even if you’re pretty sure you can get away with it,” doesn’t scale to “you can run a large business or charity with effectively no oversight.” A community that trusts by default is only sustainable if it is small. Claiming to subscribe to EA ideas, especially in a scenario where you can be paid well to do so, isn’t much of a reason to trust anyone. And given the size of the EA community, we’ve already passed the limits of where trusting others because of shared values is viable.Failures of TrustThere are two ways to have high trust: naivety, and sophistication. The naive way is what EA groups have employed so far, and the sophisticated way requires infrastructure to make cheating difficult and costly.To explain, when I started in graduate school, I entered a high-trust environment. I never thought about it, partly because I grew up in a religious community that was high trust. So in grad school, I was comfortable if I left my wallet on my desk when going to the bathroom, or even sometimes when I had an hour-long meeting elsewhere in the building.I think during my second year, someone had something stolen from their desk - I don’t recall what, maybe it was a wallet. We all received an email saying that if someone took it, they would be expel...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is a global community - but should it be?, published by Davidmanheim on November 18, 2022 on The Effective Altruism Forum.Without trying to wade into definitions, effective altruism is not just a philosophy and a plan of action, it’s also a community. And that means that community dynamics are incredibly important in shaping both the people involved, and the ideas. Healthy communities can make people happier, more effective, and better citizens locally and globally - but not all communities are healthy. A number of people have voiced concerns about the EA community in the recent past, and I said at the time that I think that we needed to take those concerns seriously. The failure of the community to realize what was happening with FTX isn’t itself an indictment of the community - especially given that their major investors did not know - but it’s a symptom that reinforces many of the earlier complaints.The solutions seem unclear, but there are two very different paths that would address the failure - either reform, or rethinking the entire idea of EA as a community. So while people are thinking about changes, I’d like to suggest that we not take the default path of least resistance reforms, at least without seriously considering the alternative.“The community” failed?Many people have said that the EA community failed when they didn’t realize what SBF was doing. Others have responded that no, we should not blame ourselves. (As an aside, when Eliezer Yudkowsky is telling you that you’re overdoing heroic responsibility, you’ve clearly gone too far.) But when someone begins giving to EA causes, whether individually, or via Founders Pledge, or via setting up something like SFF, there is no-one vetting them for being honest or well controlled.The community was trusting - in this case, much too trusting. And people have said that they trusted the apparent (but illusory) consensus of EAs about FTX. I am one of them. We were all too trusting of someone who, according to several reports, had a history of breaking rules and cheating others, including an acrimonious split that happened early on at Alameda, and evidently more recently frontrunning. But the people who raised flags were evidently ignored, or in other cases feared being pariahs for speaking out more publicly.But the idea that I and others trusted in “the community” is itself a problem. Like Rob Wiblin, I generally subscribe to the idea that most people can be trusted. But I wasn’t sufficiently cautious about how trust that applies to “you won’t steal from my wallet, even if you’re pretty sure you can get away with it,” doesn’t scale to “you can run a large business or charity with effectively no oversight.” A community that trusts by default is only sustainable if it is small. Claiming to subscribe to EA ideas, especially in a scenario where you can be paid well to do so, isn’t much of a reason to trust anyone. And given the size of the EA community, we’ve already passed the limits of where trusting others because of shared values is viable.Failures of TrustThere are two ways to have high trust: naivety, and sophistication. The naive way is what EA groups have employed so far, and the sophisticated way requires infrastructure to make cheating difficult and costly.To explain, when I started in graduate school, I entered a high-trust environment. I never thought about it, partly because I grew up in a religious community that was high trust. So in grad school, I was comfortable if I left my wallet on my desk when going to the bathroom, or even sometimes when I had an hour-long meeting elsewhere in the building.I think during my second year, someone had something stolen from their desk - I don’t recall what, maybe it was a wallet. We all received an email saying that if someone took it, they would be expel...]]>
Davidmanheim https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:10 None full 3811
CZJ93Y7hinjvqt87j_EA EA - Introducing new leadership in Animal Charity Evaluators’ Research team by Animal Charity Evaluators Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing new leadership in Animal Charity Evaluators’ Research team, published by Animal Charity Evaluators on November 17, 2022 on The Effective Altruism Forum.IntroductionHello EA Forum! As some of you may already be aware, Animal Charity Evaluators (ACE) has recently expanded our team, most notably in Research. As such, we'd like to introduce ourselves as ACE's new Director of Research and Evaluations Program Manager and share how we intend to engage the EA community via this forum moving forward. We both joined ACE this summer, right as the annual charity evaluation season kicked off. Since we were hired, two new researchers, Alina Salmen, Ph.D. and Max Taylor, have joined, as well as our new Executive Director, Stien van der Ploeg.Our BackgroundsElisabeth Ormandy - Director of ResearchI have an academic background in Neuroscience (B.Sc.), Applied Animal Behaviour and Welfare (M.Sc.), and Animal Welfare and Ethics (Ph.D.). I completed a research fellowship in animal policy development for the Canadian Council on Animal Care and held a post-doctoral fellowship position at the University of British Columbia (UBC). I also taught various undergraduate courses at UBC including: Animals and Society, Animals and Global Issues, Scholarly Writing and Argumentation, and Animals, Politics and Ethics. In 2015, I opted to leave academia to co-found the Canadian Society for Humane Science—I served as their Executive Director until I joined ACE. In that role, I gained experience in nonprofit management, impact assessment, and strategic planning, and learned the importance of strengthening the animal advocacy movement.Alongside my paid work in animal advocacy, I currently serve in a number of volunteer roles, most notably as a Board member for the Association for the Protection of Fur-Bearing Animals.Vince Mak - Evaluations Program ManagerI have a generalist background—I graduated from the Wharton School at the University of Pennsylvania and spent the beginning of my career in financial services. I discovered effective altruism in 2019, developed an interest in animal advocacy after reading Animal Liberation in 2020, and began doing EA-style charity evaluations across cause areas as a volunteer with SoGive in 2021. Outside of my job at ACE, I currently assist in various capacities with EA research, grantmaking, and movement building.ACE's Next Steps on the EA ForumOur intentionsIn the past few years, ACE's activity on the EA Forum has been to provide updates on our work and share our thinking. As an EA organization dedicated to transparency and intellectual rigor, we would like to take it a step further and interact more closely with the community that shares these values. As we tinker with our evaluation methodology in the months and years ahead, we plan to invite your feedback during the intermediate stages; we do not want to just inform you, we want to be informed by you. In addition to opportunities to critique our thinking, you can also expect from ACE transparency about our processes, responsiveness to fair and genuine criticism, and a commitment to evolve our thinking in response to new evidence.Adjustments to our evaluations process so farThis year, our research team altered the way we approach our evaluations, partially thanks to input from people on this forum in the past. For instance, we have now implemented quantitative scoring frameworks for our Programs and Cost Effectiveness evaluation criteria—a marked departure from how we assessed these criteria last year.We appreciate the EA community's willingness to provide constructive criticism so that we can continually refine our methods. More recently, some of you have already volunteered your time and expertise by sharing proposals for how ACE can improve. In particular, we thank Nuno Sempere for a th...]]>
Animal Charity Evaluators https://forum.effectivealtruism.org/posts/CZJ93Y7hinjvqt87j/introducing-new-leadership-in-animal-charity-evaluators Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing new leadership in Animal Charity Evaluators’ Research team, published by Animal Charity Evaluators on November 17, 2022 on The Effective Altruism Forum.IntroductionHello EA Forum! As some of you may already be aware, Animal Charity Evaluators (ACE) has recently expanded our team, most notably in Research. As such, we'd like to introduce ourselves as ACE's new Director of Research and Evaluations Program Manager and share how we intend to engage the EA community via this forum moving forward. We both joined ACE this summer, right as the annual charity evaluation season kicked off. Since we were hired, two new researchers, Alina Salmen, Ph.D. and Max Taylor, have joined, as well as our new Executive Director, Stien van der Ploeg.Our BackgroundsElisabeth Ormandy - Director of ResearchI have an academic background in Neuroscience (B.Sc.), Applied Animal Behaviour and Welfare (M.Sc.), and Animal Welfare and Ethics (Ph.D.). I completed a research fellowship in animal policy development for the Canadian Council on Animal Care and held a post-doctoral fellowship position at the University of British Columbia (UBC). I also taught various undergraduate courses at UBC including: Animals and Society, Animals and Global Issues, Scholarly Writing and Argumentation, and Animals, Politics and Ethics. In 2015, I opted to leave academia to co-found the Canadian Society for Humane Science—I served as their Executive Director until I joined ACE. In that role, I gained experience in nonprofit management, impact assessment, and strategic planning, and learned the importance of strengthening the animal advocacy movement.Alongside my paid work in animal advocacy, I currently serve in a number of volunteer roles, most notably as a Board member for the Association for the Protection of Fur-Bearing Animals.Vince Mak - Evaluations Program ManagerI have a generalist background—I graduated from the Wharton School at the University of Pennsylvania and spent the beginning of my career in financial services. I discovered effective altruism in 2019, developed an interest in animal advocacy after reading Animal Liberation in 2020, and began doing EA-style charity evaluations across cause areas as a volunteer with SoGive in 2021. Outside of my job at ACE, I currently assist in various capacities with EA research, grantmaking, and movement building.ACE's Next Steps on the EA ForumOur intentionsIn the past few years, ACE's activity on the EA Forum has been to provide updates on our work and share our thinking. As an EA organization dedicated to transparency and intellectual rigor, we would like to take it a step further and interact more closely with the community that shares these values. As we tinker with our evaluation methodology in the months and years ahead, we plan to invite your feedback during the intermediate stages; we do not want to just inform you, we want to be informed by you. In addition to opportunities to critique our thinking, you can also expect from ACE transparency about our processes, responsiveness to fair and genuine criticism, and a commitment to evolve our thinking in response to new evidence.Adjustments to our evaluations process so farThis year, our research team altered the way we approach our evaluations, partially thanks to input from people on this forum in the past. For instance, we have now implemented quantitative scoring frameworks for our Programs and Cost Effectiveness evaluation criteria—a marked departure from how we assessed these criteria last year.We appreciate the EA community's willingness to provide constructive criticism so that we can continually refine our methods. More recently, some of you have already volunteered your time and expertise by sharing proposals for how ACE can improve. In particular, we thank Nuno Sempere for a th...]]>
Thu, 17 Nov 2022 22:53:36 +0000 EA - Introducing new leadership in Animal Charity Evaluators’ Research team by Animal Charity Evaluators Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing new leadership in Animal Charity Evaluators’ Research team, published by Animal Charity Evaluators on November 17, 2022 on The Effective Altruism Forum.IntroductionHello EA Forum! As some of you may already be aware, Animal Charity Evaluators (ACE) has recently expanded our team, most notably in Research. As such, we'd like to introduce ourselves as ACE's new Director of Research and Evaluations Program Manager and share how we intend to engage the EA community via this forum moving forward. We both joined ACE this summer, right as the annual charity evaluation season kicked off. Since we were hired, two new researchers, Alina Salmen, Ph.D. and Max Taylor, have joined, as well as our new Executive Director, Stien van der Ploeg.Our BackgroundsElisabeth Ormandy - Director of ResearchI have an academic background in Neuroscience (B.Sc.), Applied Animal Behaviour and Welfare (M.Sc.), and Animal Welfare and Ethics (Ph.D.). I completed a research fellowship in animal policy development for the Canadian Council on Animal Care and held a post-doctoral fellowship position at the University of British Columbia (UBC). I also taught various undergraduate courses at UBC including: Animals and Society, Animals and Global Issues, Scholarly Writing and Argumentation, and Animals, Politics and Ethics. In 2015, I opted to leave academia to co-found the Canadian Society for Humane Science—I served as their Executive Director until I joined ACE. In that role, I gained experience in nonprofit management, impact assessment, and strategic planning, and learned the importance of strengthening the animal advocacy movement.Alongside my paid work in animal advocacy, I currently serve in a number of volunteer roles, most notably as a Board member for the Association for the Protection of Fur-Bearing Animals.Vince Mak - Evaluations Program ManagerI have a generalist background—I graduated from the Wharton School at the University of Pennsylvania and spent the beginning of my career in financial services. I discovered effective altruism in 2019, developed an interest in animal advocacy after reading Animal Liberation in 2020, and began doing EA-style charity evaluations across cause areas as a volunteer with SoGive in 2021. Outside of my job at ACE, I currently assist in various capacities with EA research, grantmaking, and movement building.ACE's Next Steps on the EA ForumOur intentionsIn the past few years, ACE's activity on the EA Forum has been to provide updates on our work and share our thinking. As an EA organization dedicated to transparency and intellectual rigor, we would like to take it a step further and interact more closely with the community that shares these values. As we tinker with our evaluation methodology in the months and years ahead, we plan to invite your feedback during the intermediate stages; we do not want to just inform you, we want to be informed by you. In addition to opportunities to critique our thinking, you can also expect from ACE transparency about our processes, responsiveness to fair and genuine criticism, and a commitment to evolve our thinking in response to new evidence.Adjustments to our evaluations process so farThis year, our research team altered the way we approach our evaluations, partially thanks to input from people on this forum in the past. For instance, we have now implemented quantitative scoring frameworks for our Programs and Cost Effectiveness evaluation criteria—a marked departure from how we assessed these criteria last year.We appreciate the EA community's willingness to provide constructive criticism so that we can continually refine our methods. More recently, some of you have already volunteered your time and expertise by sharing proposals for how ACE can improve. In particular, we thank Nuno Sempere for a th...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing new leadership in Animal Charity Evaluators’ Research team, published by Animal Charity Evaluators on November 17, 2022 on The Effective Altruism Forum.IntroductionHello EA Forum! As some of you may already be aware, Animal Charity Evaluators (ACE) has recently expanded our team, most notably in Research. As such, we'd like to introduce ourselves as ACE's new Director of Research and Evaluations Program Manager and share how we intend to engage the EA community via this forum moving forward. We both joined ACE this summer, right as the annual charity evaluation season kicked off. Since we were hired, two new researchers, Alina Salmen, Ph.D. and Max Taylor, have joined, as well as our new Executive Director, Stien van der Ploeg.Our BackgroundsElisabeth Ormandy - Director of ResearchI have an academic background in Neuroscience (B.Sc.), Applied Animal Behaviour and Welfare (M.Sc.), and Animal Welfare and Ethics (Ph.D.). I completed a research fellowship in animal policy development for the Canadian Council on Animal Care and held a post-doctoral fellowship position at the University of British Columbia (UBC). I also taught various undergraduate courses at UBC including: Animals and Society, Animals and Global Issues, Scholarly Writing and Argumentation, and Animals, Politics and Ethics. In 2015, I opted to leave academia to co-found the Canadian Society for Humane Science—I served as their Executive Director until I joined ACE. In that role, I gained experience in nonprofit management, impact assessment, and strategic planning, and learned the importance of strengthening the animal advocacy movement.Alongside my paid work in animal advocacy, I currently serve in a number of volunteer roles, most notably as a Board member for the Association for the Protection of Fur-Bearing Animals.Vince Mak - Evaluations Program ManagerI have a generalist background—I graduated from the Wharton School at the University of Pennsylvania and spent the beginning of my career in financial services. I discovered effective altruism in 2019, developed an interest in animal advocacy after reading Animal Liberation in 2020, and began doing EA-style charity evaluations across cause areas as a volunteer with SoGive in 2021. Outside of my job at ACE, I currently assist in various capacities with EA research, grantmaking, and movement building.ACE's Next Steps on the EA ForumOur intentionsIn the past few years, ACE's activity on the EA Forum has been to provide updates on our work and share our thinking. As an EA organization dedicated to transparency and intellectual rigor, we would like to take it a step further and interact more closely with the community that shares these values. As we tinker with our evaluation methodology in the months and years ahead, we plan to invite your feedback during the intermediate stages; we do not want to just inform you, we want to be informed by you. In addition to opportunities to critique our thinking, you can also expect from ACE transparency about our processes, responsiveness to fair and genuine criticism, and a commitment to evolve our thinking in response to new evidence.Adjustments to our evaluations process so farThis year, our research team altered the way we approach our evaluations, partially thanks to input from people on this forum in the past. For instance, we have now implemented quantitative scoring frameworks for our Programs and Cost Effectiveness evaluation criteria—a marked departure from how we assessed these criteria last year.We appreciate the EA community's willingness to provide constructive criticism so that we can continually refine our methods. More recently, some of you have already volunteered your time and expertise by sharing proposals for how ACE can improve. In particular, we thank Nuno Sempere for a th...]]>
Animal Charity Evaluators https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:35 None full 3794
iFNemktkcZjRynAkH_EA EA - Diversification is Underrated by Justis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Diversification is Underrated, published by Justis on November 17, 2022 on The Effective Altruism Forum.Note: This is not an FTX post, and I don't think its content hinges on current events. Also - though this is probably obvious - I'm speaking in a strictly personal capacity.Formal optimization problems often avail themselves of one solution - there can be multiple optima, but by default there tends to be one optimum for any given problem setup, and the highest expected value move is just to dump everything into that optimum.As a community, we tend to enjoy framing things as formal optimization problems. This is pretty good! But the thing about formal problem setups is they encode lots of assumptions, and those assumptions can have several degrees of freedom. Sometimes the assumptions are just plain qualitative, where quantifying them misses the point; the key isn't to just add another order-of-magnitude (or three) variable to express uncertainty. Rather, the key is to adopt a portfolio approach such that you're hitting optima or near-optima under a variety of plausible assumptions, even mutually exclusive ones.This isn't a new idea. In various guises and on various scales, it's called moral parliament, buckets, cluster thinking, or even just plain hedging. As a community, to our credit, we do a lot of this stuff.But I think we could do more, and be more confident and happy about it.Case study: meI do/have done the following things, that are likely EA-related:Every month, I donate 10% of my pre-tax income to the Against Malaria Foundation.I also donate $100 to Compassion in World Farming, mostly because I feel bad about eating meat.In my spare time, I provide editing services to various organizations as a contractor. The content I edit is often informed by a longtermist perspective, and the modal topic is probably AI safety.I once was awarded (part of a) LTFF (not FTX, the EA Funds one) grant, editing writeups on current cutting-edge AI safety research and researchers.Case study from a causes perspectiveOn a typical longtermist view, my financial donations don't make that much sense - they're morally fine, but it'd be dramatically better in expectation to donate toward reducing x-risk.On a longtermist-skeptical view, the bulk of my editing doesn't accomplish much for altruistic purposes. It's morally fine, but it'd be better to polish general outreach communications for the more legible global poverty and health sector.And depending on how you feel about farmed animals, that smaller piece of the pie could dwarf everything else (even just the $100 a month is plausibly saving more chickens from bad lives than my AMF donations save human lives), or irrelevant (if you don't care about chicken welfare basically at all).I much prefer my situation to a more "aligned" situation, where all my efforts go the same single direction.It's totally plausible to me that work being done right now on AI safety makes a really big difference for how well things go in the next couple decades. It's also plausible to me that none of it matters, either because we're doomed in any case or because our current trajectory is just basically fine.Similarly, it's plausible to me (though I think unlikely) that I learn that AMF's numbers are super inflated somehow, or that its effectiveness collapsed and nobody bothered to check. And it's plausible that in 20 years, we will have made sufficient progress in global poverty and health that there no longer exist donation opportunities in the space as high leverage as there are right now, and so now is a really important time.So I'm really happy to just do both. I don't have quantitative credences here, though I'm normally a huge fan of those. I just don't think they work that well for the outside view of the portfolio approach - I've ...]]>
Justis https://forum.effectivealtruism.org/posts/iFNemktkcZjRynAkH/diversification-is-underrated Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Diversification is Underrated, published by Justis on November 17, 2022 on The Effective Altruism Forum.Note: This is not an FTX post, and I don't think its content hinges on current events. Also - though this is probably obvious - I'm speaking in a strictly personal capacity.Formal optimization problems often avail themselves of one solution - there can be multiple optima, but by default there tends to be one optimum for any given problem setup, and the highest expected value move is just to dump everything into that optimum.As a community, we tend to enjoy framing things as formal optimization problems. This is pretty good! But the thing about formal problem setups is they encode lots of assumptions, and those assumptions can have several degrees of freedom. Sometimes the assumptions are just plain qualitative, where quantifying them misses the point; the key isn't to just add another order-of-magnitude (or three) variable to express uncertainty. Rather, the key is to adopt a portfolio approach such that you're hitting optima or near-optima under a variety of plausible assumptions, even mutually exclusive ones.This isn't a new idea. In various guises and on various scales, it's called moral parliament, buckets, cluster thinking, or even just plain hedging. As a community, to our credit, we do a lot of this stuff.But I think we could do more, and be more confident and happy about it.Case study: meI do/have done the following things, that are likely EA-related:Every month, I donate 10% of my pre-tax income to the Against Malaria Foundation.I also donate $100 to Compassion in World Farming, mostly because I feel bad about eating meat.In my spare time, I provide editing services to various organizations as a contractor. The content I edit is often informed by a longtermist perspective, and the modal topic is probably AI safety.I once was awarded (part of a) LTFF (not FTX, the EA Funds one) grant, editing writeups on current cutting-edge AI safety research and researchers.Case study from a causes perspectiveOn a typical longtermist view, my financial donations don't make that much sense - they're morally fine, but it'd be dramatically better in expectation to donate toward reducing x-risk.On a longtermist-skeptical view, the bulk of my editing doesn't accomplish much for altruistic purposes. It's morally fine, but it'd be better to polish general outreach communications for the more legible global poverty and health sector.And depending on how you feel about farmed animals, that smaller piece of the pie could dwarf everything else (even just the $100 a month is plausibly saving more chickens from bad lives than my AMF donations save human lives), or irrelevant (if you don't care about chicken welfare basically at all).I much prefer my situation to a more "aligned" situation, where all my efforts go the same single direction.It's totally plausible to me that work being done right now on AI safety makes a really big difference for how well things go in the next couple decades. It's also plausible to me that none of it matters, either because we're doomed in any case or because our current trajectory is just basically fine.Similarly, it's plausible to me (though I think unlikely) that I learn that AMF's numbers are super inflated somehow, or that its effectiveness collapsed and nobody bothered to check. And it's plausible that in 20 years, we will have made sufficient progress in global poverty and health that there no longer exist donation opportunities in the space as high leverage as there are right now, and so now is a really important time.So I'm really happy to just do both. I don't have quantitative credences here, though I'm normally a huge fan of those. I just don't think they work that well for the outside view of the portfolio approach - I've ...]]>
Thu, 17 Nov 2022 22:36:50 +0000 EA - Diversification is Underrated by Justis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Diversification is Underrated, published by Justis on November 17, 2022 on The Effective Altruism Forum.Note: This is not an FTX post, and I don't think its content hinges on current events. Also - though this is probably obvious - I'm speaking in a strictly personal capacity.Formal optimization problems often avail themselves of one solution - there can be multiple optima, but by default there tends to be one optimum for any given problem setup, and the highest expected value move is just to dump everything into that optimum.As a community, we tend to enjoy framing things as formal optimization problems. This is pretty good! But the thing about formal problem setups is they encode lots of assumptions, and those assumptions can have several degrees of freedom. Sometimes the assumptions are just plain qualitative, where quantifying them misses the point; the key isn't to just add another order-of-magnitude (or three) variable to express uncertainty. Rather, the key is to adopt a portfolio approach such that you're hitting optima or near-optima under a variety of plausible assumptions, even mutually exclusive ones.This isn't a new idea. In various guises and on various scales, it's called moral parliament, buckets, cluster thinking, or even just plain hedging. As a community, to our credit, we do a lot of this stuff.But I think we could do more, and be more confident and happy about it.Case study: meI do/have done the following things, that are likely EA-related:Every month, I donate 10% of my pre-tax income to the Against Malaria Foundation.I also donate $100 to Compassion in World Farming, mostly because I feel bad about eating meat.In my spare time, I provide editing services to various organizations as a contractor. The content I edit is often informed by a longtermist perspective, and the modal topic is probably AI safety.I once was awarded (part of a) LTFF (not FTX, the EA Funds one) grant, editing writeups on current cutting-edge AI safety research and researchers.Case study from a causes perspectiveOn a typical longtermist view, my financial donations don't make that much sense - they're morally fine, but it'd be dramatically better in expectation to donate toward reducing x-risk.On a longtermist-skeptical view, the bulk of my editing doesn't accomplish much for altruistic purposes. It's morally fine, but it'd be better to polish general outreach communications for the more legible global poverty and health sector.And depending on how you feel about farmed animals, that smaller piece of the pie could dwarf everything else (even just the $100 a month is plausibly saving more chickens from bad lives than my AMF donations save human lives), or irrelevant (if you don't care about chicken welfare basically at all).I much prefer my situation to a more "aligned" situation, where all my efforts go the same single direction.It's totally plausible to me that work being done right now on AI safety makes a really big difference for how well things go in the next couple decades. It's also plausible to me that none of it matters, either because we're doomed in any case or because our current trajectory is just basically fine.Similarly, it's plausible to me (though I think unlikely) that I learn that AMF's numbers are super inflated somehow, or that its effectiveness collapsed and nobody bothered to check. And it's plausible that in 20 years, we will have made sufficient progress in global poverty and health that there no longer exist donation opportunities in the space as high leverage as there are right now, and so now is a really important time.So I'm really happy to just do both. I don't have quantitative credences here, though I'm normally a huge fan of those. I just don't think they work that well for the outside view of the portfolio approach - I've ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Diversification is Underrated, published by Justis on November 17, 2022 on The Effective Altruism Forum.Note: This is not an FTX post, and I don't think its content hinges on current events. Also - though this is probably obvious - I'm speaking in a strictly personal capacity.Formal optimization problems often avail themselves of one solution - there can be multiple optima, but by default there tends to be one optimum for any given problem setup, and the highest expected value move is just to dump everything into that optimum.As a community, we tend to enjoy framing things as formal optimization problems. This is pretty good! But the thing about formal problem setups is they encode lots of assumptions, and those assumptions can have several degrees of freedom. Sometimes the assumptions are just plain qualitative, where quantifying them misses the point; the key isn't to just add another order-of-magnitude (or three) variable to express uncertainty. Rather, the key is to adopt a portfolio approach such that you're hitting optima or near-optima under a variety of plausible assumptions, even mutually exclusive ones.This isn't a new idea. In various guises and on various scales, it's called moral parliament, buckets, cluster thinking, or even just plain hedging. As a community, to our credit, we do a lot of this stuff.But I think we could do more, and be more confident and happy about it.Case study: meI do/have done the following things, that are likely EA-related:Every month, I donate 10% of my pre-tax income to the Against Malaria Foundation.I also donate $100 to Compassion in World Farming, mostly because I feel bad about eating meat.In my spare time, I provide editing services to various organizations as a contractor. The content I edit is often informed by a longtermist perspective, and the modal topic is probably AI safety.I once was awarded (part of a) LTFF (not FTX, the EA Funds one) grant, editing writeups on current cutting-edge AI safety research and researchers.Case study from a causes perspectiveOn a typical longtermist view, my financial donations don't make that much sense - they're morally fine, but it'd be dramatically better in expectation to donate toward reducing x-risk.On a longtermist-skeptical view, the bulk of my editing doesn't accomplish much for altruistic purposes. It's morally fine, but it'd be better to polish general outreach communications for the more legible global poverty and health sector.And depending on how you feel about farmed animals, that smaller piece of the pie could dwarf everything else (even just the $100 a month is plausibly saving more chickens from bad lives than my AMF donations save human lives), or irrelevant (if you don't care about chicken welfare basically at all).I much prefer my situation to a more "aligned" situation, where all my efforts go the same single direction.It's totally plausible to me that work being done right now on AI safety makes a really big difference for how well things go in the next couple decades. It's also plausible to me that none of it matters, either because we're doomed in any case or because our current trajectory is just basically fine.Similarly, it's plausible to me (though I think unlikely) that I learn that AMF's numbers are super inflated somehow, or that its effectiveness collapsed and nobody bothered to check. And it's plausible that in 20 years, we will have made sufficient progress in global poverty and health that there no longer exist donation opportunities in the space as high leverage as there are right now, and so now is a really important time.So I'm really happy to just do both. I don't have quantitative credences here, though I'm normally a huge fan of those. I just don't think they work that well for the outside view of the portfolio approach - I've ...]]>
Justis https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:15 None full 3793
CBzn8wnZoc8z4ZhhZ_EA EA - [Closing Nov 20th] University Group Accelerator Program Applications are Open by jessica mccurdy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Closing Nov 20th] University Group Accelerator Program Applications are Open, published by jessica mccurdy on November 17, 2022 on The Effective Altruism Forum.This post is a reminder that applications are open for the next round of CEA’s University Group Accelerator Program (UGAP)!Are you a student at a university without a university group? Consider starting one by applying to UGAP!The University Group Accelerator Program (UGAP) is a program run by the Centre For Effective Altruism that aims to take a university group from a couple of interested organizers to a mid-sized new EA group. It offers regular meetings with an experienced mentor, trainings, and useful resources to run your first intro seminar (or fellowship) program.Applications for next semester are open!Learn more and apply here by 11:59pm ET, on November 20thWe are also looking for additional experienced organizers to serve as mentors. You can find out more about and express interest in becoming a UGAP mentor here.We have had groups from every populated continent participate and are excited to expand!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
jessica mccurdy https://forum.effectivealtruism.org/posts/CBzn8wnZoc8z4ZhhZ/closing-nov-20th-university-group-accelerator-program Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Closing Nov 20th] University Group Accelerator Program Applications are Open, published by jessica mccurdy on November 17, 2022 on The Effective Altruism Forum.This post is a reminder that applications are open for the next round of CEA’s University Group Accelerator Program (UGAP)!Are you a student at a university without a university group? Consider starting one by applying to UGAP!The University Group Accelerator Program (UGAP) is a program run by the Centre For Effective Altruism that aims to take a university group from a couple of interested organizers to a mid-sized new EA group. It offers regular meetings with an experienced mentor, trainings, and useful resources to run your first intro seminar (or fellowship) program.Applications for next semester are open!Learn more and apply here by 11:59pm ET, on November 20thWe are also looking for additional experienced organizers to serve as mentors. You can find out more about and express interest in becoming a UGAP mentor here.We have had groups from every populated continent participate and are excited to expand!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 17 Nov 2022 21:37:40 +0000 EA - [Closing Nov 20th] University Group Accelerator Program Applications are Open by jessica mccurdy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Closing Nov 20th] University Group Accelerator Program Applications are Open, published by jessica mccurdy on November 17, 2022 on The Effective Altruism Forum.This post is a reminder that applications are open for the next round of CEA’s University Group Accelerator Program (UGAP)!Are you a student at a university without a university group? Consider starting one by applying to UGAP!The University Group Accelerator Program (UGAP) is a program run by the Centre For Effective Altruism that aims to take a university group from a couple of interested organizers to a mid-sized new EA group. It offers regular meetings with an experienced mentor, trainings, and useful resources to run your first intro seminar (or fellowship) program.Applications for next semester are open!Learn more and apply here by 11:59pm ET, on November 20thWe are also looking for additional experienced organizers to serve as mentors. You can find out more about and express interest in becoming a UGAP mentor here.We have had groups from every populated continent participate and are excited to expand!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Closing Nov 20th] University Group Accelerator Program Applications are Open, published by jessica mccurdy on November 17, 2022 on The Effective Altruism Forum.This post is a reminder that applications are open for the next round of CEA’s University Group Accelerator Program (UGAP)!Are you a student at a university without a university group? Consider starting one by applying to UGAP!The University Group Accelerator Program (UGAP) is a program run by the Centre For Effective Altruism that aims to take a university group from a couple of interested organizers to a mid-sized new EA group. It offers regular meetings with an experienced mentor, trainings, and useful resources to run your first intro seminar (or fellowship) program.Applications for next semester are open!Learn more and apply here by 11:59pm ET, on November 20thWe are also looking for additional experienced organizers to serve as mentors. You can find out more about and express interest in becoming a UGAP mentor here.We have had groups from every populated continent participate and are excited to expand!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
jessica mccurdy https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:24 None full 3797
ddeCNBhYc2sANsixS_EA EA - AI Forecasting Research Ideas by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Forecasting Research Ideas, published by Jaime Sevilla on November 17, 2022 on The Effective Altruism Forum.OverviewThe linked document contains a collection of AI Forecasting research ideas, prepared by some Epoch employees in a personal capacityWe think that these are interesting and valuable projects that research interns or students could look into, though they may vary in difficulty (depending on your background/experience)This is the result of a quick brainstorming and curation, rather than a thorough deliberative process. We encourage a critical outlook when reading themYou may also be interested in these other forecasting research ideas suggested by Jaime SevillaYou can use Epoch’s database as a resource for finding notable machine learning papers with parameter, compute, and dataset sizes. On Epoch website you can also find our past research, a tool for visualising the dataset, and some other tools like this compute calculator.Please feel free to contact us for clarification about these questions!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jaime Sevilla https://forum.effectivealtruism.org/posts/ddeCNBhYc2sANsixS/ai-forecasting-research-ideas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Forecasting Research Ideas, published by Jaime Sevilla on November 17, 2022 on The Effective Altruism Forum.OverviewThe linked document contains a collection of AI Forecasting research ideas, prepared by some Epoch employees in a personal capacityWe think that these are interesting and valuable projects that research interns or students could look into, though they may vary in difficulty (depending on your background/experience)This is the result of a quick brainstorming and curation, rather than a thorough deliberative process. We encourage a critical outlook when reading themYou may also be interested in these other forecasting research ideas suggested by Jaime SevillaYou can use Epoch’s database as a resource for finding notable machine learning papers with parameter, compute, and dataset sizes. On Epoch website you can also find our past research, a tool for visualising the dataset, and some other tools like this compute calculator.Please feel free to contact us for clarification about these questions!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 17 Nov 2022 19:25:22 +0000 EA - AI Forecasting Research Ideas by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Forecasting Research Ideas, published by Jaime Sevilla on November 17, 2022 on The Effective Altruism Forum.OverviewThe linked document contains a collection of AI Forecasting research ideas, prepared by some Epoch employees in a personal capacityWe think that these are interesting and valuable projects that research interns or students could look into, though they may vary in difficulty (depending on your background/experience)This is the result of a quick brainstorming and curation, rather than a thorough deliberative process. We encourage a critical outlook when reading themYou may also be interested in these other forecasting research ideas suggested by Jaime SevillaYou can use Epoch’s database as a resource for finding notable machine learning papers with parameter, compute, and dataset sizes. On Epoch website you can also find our past research, a tool for visualising the dataset, and some other tools like this compute calculator.Please feel free to contact us for clarification about these questions!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Forecasting Research Ideas, published by Jaime Sevilla on November 17, 2022 on The Effective Altruism Forum.OverviewThe linked document contains a collection of AI Forecasting research ideas, prepared by some Epoch employees in a personal capacityWe think that these are interesting and valuable projects that research interns or students could look into, though they may vary in difficulty (depending on your background/experience)This is the result of a quick brainstorming and curation, rather than a thorough deliberative process. We encourage a critical outlook when reading themYou may also be interested in these other forecasting research ideas suggested by Jaime SevillaYou can use Epoch’s database as a resource for finding notable machine learning papers with parameter, compute, and dataset sizes. On Epoch website you can also find our past research, a tool for visualising the dataset, and some other tools like this compute calculator.Please feel free to contact us for clarification about these questions!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jaime Sevilla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:14 None full 3796
D7tkpztwmgg3KrykY_EA EA - Media attention on EA (again) by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Media attention on EA (again), published by Julia Wise on November 17, 2022 on The Effective Altruism Forum.Six months ago I wrote that EA would likely get more attention soon.Well, that’s certainly true now (and not for the reason I expected). Here’s where I think things stand now in this regard:EA has more communications expertise than it did six months ago. My colleague Shakeel Hashim at CEA is focused on communications for all of EA, not CEA in particular.Journalists might be contacting organizations, groups, and individuals affiliated with EA. CEA’s usual advice about talking to journalists still stands. As someone who’s put my foot in my mouth more than once while talking to a journalist, I expect this is an especially hard time to do interviews. It’s particularly tricky because any comment you make publicly about FTX could have legal implications. Feel free to ask Shakeel for advice if you receive enquiries: media@centreforeffectivealtruism.orgThere will likely be a bunch of negative media pieces about EA. There’s probably not much for a typical EA to do about that.As Shakeel wrote here, the leaders of EA organizations can’t say a lot right now, and we know that’s really frustrating.For people working on projects that are able to continue, keeping up the heartbeat of EA’s work toward a better world is so valuable. Thank you.Doomscrolling is not that good for most of us.I don’t mean any of this as “stop discussing community problems and how to solve them.” It’s important work to reckon with whatever the hell just happened, what it means, and what changes we should make as a community.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Julia Wise https://forum.effectivealtruism.org/posts/D7tkpztwmgg3KrykY/media-attention-on-ea-again Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Media attention on EA (again), published by Julia Wise on November 17, 2022 on The Effective Altruism Forum.Six months ago I wrote that EA would likely get more attention soon.Well, that’s certainly true now (and not for the reason I expected). Here’s where I think things stand now in this regard:EA has more communications expertise than it did six months ago. My colleague Shakeel Hashim at CEA is focused on communications for all of EA, not CEA in particular.Journalists might be contacting organizations, groups, and individuals affiliated with EA. CEA’s usual advice about talking to journalists still stands. As someone who’s put my foot in my mouth more than once while talking to a journalist, I expect this is an especially hard time to do interviews. It’s particularly tricky because any comment you make publicly about FTX could have legal implications. Feel free to ask Shakeel for advice if you receive enquiries: media@centreforeffectivealtruism.orgThere will likely be a bunch of negative media pieces about EA. There’s probably not much for a typical EA to do about that.As Shakeel wrote here, the leaders of EA organizations can’t say a lot right now, and we know that’s really frustrating.For people working on projects that are able to continue, keeping up the heartbeat of EA’s work toward a better world is so valuable. Thank you.Doomscrolling is not that good for most of us.I don’t mean any of this as “stop discussing community problems and how to solve them.” It’s important work to reckon with whatever the hell just happened, what it means, and what changes we should make as a community.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 17 Nov 2022 18:35:07 +0000 EA - Media attention on EA (again) by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Media attention on EA (again), published by Julia Wise on November 17, 2022 on The Effective Altruism Forum.Six months ago I wrote that EA would likely get more attention soon.Well, that’s certainly true now (and not for the reason I expected). Here’s where I think things stand now in this regard:EA has more communications expertise than it did six months ago. My colleague Shakeel Hashim at CEA is focused on communications for all of EA, not CEA in particular.Journalists might be contacting organizations, groups, and individuals affiliated with EA. CEA’s usual advice about talking to journalists still stands. As someone who’s put my foot in my mouth more than once while talking to a journalist, I expect this is an especially hard time to do interviews. It’s particularly tricky because any comment you make publicly about FTX could have legal implications. Feel free to ask Shakeel for advice if you receive enquiries: media@centreforeffectivealtruism.orgThere will likely be a bunch of negative media pieces about EA. There’s probably not much for a typical EA to do about that.As Shakeel wrote here, the leaders of EA organizations can’t say a lot right now, and we know that’s really frustrating.For people working on projects that are able to continue, keeping up the heartbeat of EA’s work toward a better world is so valuable. Thank you.Doomscrolling is not that good for most of us.I don’t mean any of this as “stop discussing community problems and how to solve them.” It’s important work to reckon with whatever the hell just happened, what it means, and what changes we should make as a community.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Media attention on EA (again), published by Julia Wise on November 17, 2022 on The Effective Altruism Forum.Six months ago I wrote that EA would likely get more attention soon.Well, that’s certainly true now (and not for the reason I expected). Here’s where I think things stand now in this regard:EA has more communications expertise than it did six months ago. My colleague Shakeel Hashim at CEA is focused on communications for all of EA, not CEA in particular.Journalists might be contacting organizations, groups, and individuals affiliated with EA. CEA’s usual advice about talking to journalists still stands. As someone who’s put my foot in my mouth more than once while talking to a journalist, I expect this is an especially hard time to do interviews. It’s particularly tricky because any comment you make publicly about FTX could have legal implications. Feel free to ask Shakeel for advice if you receive enquiries: media@centreforeffectivealtruism.orgThere will likely be a bunch of negative media pieces about EA. There’s probably not much for a typical EA to do about that.As Shakeel wrote here, the leaders of EA organizations can’t say a lot right now, and we know that’s really frustrating.For people working on projects that are able to continue, keeping up the heartbeat of EA’s work toward a better world is so valuable. Thank you.Doomscrolling is not that good for most of us.I don’t mean any of this as “stop discussing community problems and how to solve them.” It’s important work to reckon with whatever the hell just happened, what it means, and what changes we should make as a community.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Julia Wise https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:50 None full 3795
f9rFsbsd4rGLQfpQg_EA EA - Sadly, FTX by Zvi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sadly, FTX, published by Zvi on November 17, 2022 on The Effective Altruism Forum.It has been quite a past two weeks, with different spheres in deeply divided narratives. In addition to the liberation of Kherson City, the midterm elections that of course took forever to resolve and the ongoing hijinks of Elon Musk taking over Twitter, there was the complete implosion of Sam Bankman-Fried (hereafter ‘SBF’), his crypto exchange FTX and his crypto trading firm Alameda Research.The situation somehow kept getting worse several times a day, even relative to my updated expectations, as new events happened and new revelations came out.In the wake of those events, there are not only many questions but many categories of questions. Here are some of them.What just happened?What happened in the lead-up to this happening?Why did all of this happen?What is going to happen to those involved going forward?What is going to happen to crypto in general?Why didn’t we see this coming, or those who did see it speak louder?What does this mean for FTX’s charitable efforts and those getting funding?What does this mean for Effective Altruism? Who knew what when?What if anything does this say about utilitarianism?How are we casting and framing the movie Michael Lewis is selling, in which he was previously (it seems) planning on portraying Sam Bankman-Fried as the Luke Skywalker to CZ’s Darth Vader? Presumably that will change a bit.This post is my attempt to take my best stab at as much of this as possible, give my model of what happened and is likely to happen from here, what implications we can and should draw, and to compile as many sources as possible that I have found useful.This is a fast-moving complicated situation that involves a lot of lying and fraud. Anyone attempting to sort it all out, especially quickly, is going to make mistakes. I decided that this was the time when the value of synthesis exceeded the cost of such errors. Still, doubtless there will be mistakes here. I will correct them as they are discovered, either by myself or others. I apologize for them in advance.Thus I highly encourage you to read this post on the web rather than via an email or RSS version, in case there have been substantial revisions. There have as of yet not been any substantive revisions since publication, which was the morning of 11/17/22. The original version (and the one I will try hardest to keep updated) is here.Also, yes, long post is long. By all means read only the sections you care about.What The Hell Happened?Background: FTX is a crypto exchange largely owned and run by SBF. Alameda Research is a prop trading firm owned by SBF. FTT is FTX’s exchange token, where they commit a portion of profits to buying back FTT. Thus, FTT is a non-registered security, functionally similar to junior non-voting stock in FTX.The events of the past week, and the proximate cause of them, seem to have gone as follows, with some events likely slightly out of order or happening simultaneously.Alameda Research, SBF’s crypto trading firm, has its balance sheet leaked.The balance sheet contains a lot of FTT, such that if FTT loses its value it is not clear that Alameda would remain solvent.This raises questions. Some people notice. Caroline, CEO of Alameda, says this is incomplete and that Alameda is fine.CZ, head of Binance, who had been in various battles with SBF, notices. He announces the intention to sell >$500mm of FTT, but does not actually sell.Alameda offers to buy all his FTT at $22/coin, almost full market price at the time but a deal cannot be reached.Other people sell a lot of FTT. There are not other buyers. Alameda spends capital trying to defend the $22 price, and ultimately fails. They continue to sell everything they can to support FTT and to pay FTX depositors, but ulti...]]>
Zvi https://forum.effectivealtruism.org/posts/f9rFsbsd4rGLQfpQg/sadly-ftx Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sadly, FTX, published by Zvi on November 17, 2022 on The Effective Altruism Forum.It has been quite a past two weeks, with different spheres in deeply divided narratives. In addition to the liberation of Kherson City, the midterm elections that of course took forever to resolve and the ongoing hijinks of Elon Musk taking over Twitter, there was the complete implosion of Sam Bankman-Fried (hereafter ‘SBF’), his crypto exchange FTX and his crypto trading firm Alameda Research.The situation somehow kept getting worse several times a day, even relative to my updated expectations, as new events happened and new revelations came out.In the wake of those events, there are not only many questions but many categories of questions. Here are some of them.What just happened?What happened in the lead-up to this happening?Why did all of this happen?What is going to happen to those involved going forward?What is going to happen to crypto in general?Why didn’t we see this coming, or those who did see it speak louder?What does this mean for FTX’s charitable efforts and those getting funding?What does this mean for Effective Altruism? Who knew what when?What if anything does this say about utilitarianism?How are we casting and framing the movie Michael Lewis is selling, in which he was previously (it seems) planning on portraying Sam Bankman-Fried as the Luke Skywalker to CZ’s Darth Vader? Presumably that will change a bit.This post is my attempt to take my best stab at as much of this as possible, give my model of what happened and is likely to happen from here, what implications we can and should draw, and to compile as many sources as possible that I have found useful.This is a fast-moving complicated situation that involves a lot of lying and fraud. Anyone attempting to sort it all out, especially quickly, is going to make mistakes. I decided that this was the time when the value of synthesis exceeded the cost of such errors. Still, doubtless there will be mistakes here. I will correct them as they are discovered, either by myself or others. I apologize for them in advance.Thus I highly encourage you to read this post on the web rather than via an email or RSS version, in case there have been substantial revisions. There have as of yet not been any substantive revisions since publication, which was the morning of 11/17/22. The original version (and the one I will try hardest to keep updated) is here.Also, yes, long post is long. By all means read only the sections you care about.What The Hell Happened?Background: FTX is a crypto exchange largely owned and run by SBF. Alameda Research is a prop trading firm owned by SBF. FTT is FTX’s exchange token, where they commit a portion of profits to buying back FTT. Thus, FTT is a non-registered security, functionally similar to junior non-voting stock in FTX.The events of the past week, and the proximate cause of them, seem to have gone as follows, with some events likely slightly out of order or happening simultaneously.Alameda Research, SBF’s crypto trading firm, has its balance sheet leaked.The balance sheet contains a lot of FTT, such that if FTT loses its value it is not clear that Alameda would remain solvent.This raises questions. Some people notice. Caroline, CEO of Alameda, says this is incomplete and that Alameda is fine.CZ, head of Binance, who had been in various battles with SBF, notices. He announces the intention to sell >$500mm of FTT, but does not actually sell.Alameda offers to buy all his FTT at $22/coin, almost full market price at the time but a deal cannot be reached.Other people sell a lot of FTT. There are not other buyers. Alameda spends capital trying to defend the $22 price, and ultimately fails. They continue to sell everything they can to support FTT and to pay FTX depositors, but ulti...]]>
Thu, 17 Nov 2022 17:03:10 +0000 EA - Sadly, FTX by Zvi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sadly, FTX, published by Zvi on November 17, 2022 on The Effective Altruism Forum.It has been quite a past two weeks, with different spheres in deeply divided narratives. In addition to the liberation of Kherson City, the midterm elections that of course took forever to resolve and the ongoing hijinks of Elon Musk taking over Twitter, there was the complete implosion of Sam Bankman-Fried (hereafter ‘SBF’), his crypto exchange FTX and his crypto trading firm Alameda Research.The situation somehow kept getting worse several times a day, even relative to my updated expectations, as new events happened and new revelations came out.In the wake of those events, there are not only many questions but many categories of questions. Here are some of them.What just happened?What happened in the lead-up to this happening?Why did all of this happen?What is going to happen to those involved going forward?What is going to happen to crypto in general?Why didn’t we see this coming, or those who did see it speak louder?What does this mean for FTX’s charitable efforts and those getting funding?What does this mean for Effective Altruism? Who knew what when?What if anything does this say about utilitarianism?How are we casting and framing the movie Michael Lewis is selling, in which he was previously (it seems) planning on portraying Sam Bankman-Fried as the Luke Skywalker to CZ’s Darth Vader? Presumably that will change a bit.This post is my attempt to take my best stab at as much of this as possible, give my model of what happened and is likely to happen from here, what implications we can and should draw, and to compile as many sources as possible that I have found useful.This is a fast-moving complicated situation that involves a lot of lying and fraud. Anyone attempting to sort it all out, especially quickly, is going to make mistakes. I decided that this was the time when the value of synthesis exceeded the cost of such errors. Still, doubtless there will be mistakes here. I will correct them as they are discovered, either by myself or others. I apologize for them in advance.Thus I highly encourage you to read this post on the web rather than via an email or RSS version, in case there have been substantial revisions. There have as of yet not been any substantive revisions since publication, which was the morning of 11/17/22. The original version (and the one I will try hardest to keep updated) is here.Also, yes, long post is long. By all means read only the sections you care about.What The Hell Happened?Background: FTX is a crypto exchange largely owned and run by SBF. Alameda Research is a prop trading firm owned by SBF. FTT is FTX’s exchange token, where they commit a portion of profits to buying back FTT. Thus, FTT is a non-registered security, functionally similar to junior non-voting stock in FTX.The events of the past week, and the proximate cause of them, seem to have gone as follows, with some events likely slightly out of order or happening simultaneously.Alameda Research, SBF’s crypto trading firm, has its balance sheet leaked.The balance sheet contains a lot of FTT, such that if FTT loses its value it is not clear that Alameda would remain solvent.This raises questions. Some people notice. Caroline, CEO of Alameda, says this is incomplete and that Alameda is fine.CZ, head of Binance, who had been in various battles with SBF, notices. He announces the intention to sell >$500mm of FTT, but does not actually sell.Alameda offers to buy all his FTT at $22/coin, almost full market price at the time but a deal cannot be reached.Other people sell a lot of FTT. There are not other buyers. Alameda spends capital trying to defend the $22 price, and ultimately fails. They continue to sell everything they can to support FTT and to pay FTX depositors, but ulti...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sadly, FTX, published by Zvi on November 17, 2022 on The Effective Altruism Forum.It has been quite a past two weeks, with different spheres in deeply divided narratives. In addition to the liberation of Kherson City, the midterm elections that of course took forever to resolve and the ongoing hijinks of Elon Musk taking over Twitter, there was the complete implosion of Sam Bankman-Fried (hereafter ‘SBF’), his crypto exchange FTX and his crypto trading firm Alameda Research.The situation somehow kept getting worse several times a day, even relative to my updated expectations, as new events happened and new revelations came out.In the wake of those events, there are not only many questions but many categories of questions. Here are some of them.What just happened?What happened in the lead-up to this happening?Why did all of this happen?What is going to happen to those involved going forward?What is going to happen to crypto in general?Why didn’t we see this coming, or those who did see it speak louder?What does this mean for FTX’s charitable efforts and those getting funding?What does this mean for Effective Altruism? Who knew what when?What if anything does this say about utilitarianism?How are we casting and framing the movie Michael Lewis is selling, in which he was previously (it seems) planning on portraying Sam Bankman-Fried as the Luke Skywalker to CZ’s Darth Vader? Presumably that will change a bit.This post is my attempt to take my best stab at as much of this as possible, give my model of what happened and is likely to happen from here, what implications we can and should draw, and to compile as many sources as possible that I have found useful.This is a fast-moving complicated situation that involves a lot of lying and fraud. Anyone attempting to sort it all out, especially quickly, is going to make mistakes. I decided that this was the time when the value of synthesis exceeded the cost of such errors. Still, doubtless there will be mistakes here. I will correct them as they are discovered, either by myself or others. I apologize for them in advance.Thus I highly encourage you to read this post on the web rather than via an email or RSS version, in case there have been substantial revisions. There have as of yet not been any substantive revisions since publication, which was the morning of 11/17/22. The original version (and the one I will try hardest to keep updated) is here.Also, yes, long post is long. By all means read only the sections you care about.What The Hell Happened?Background: FTX is a crypto exchange largely owned and run by SBF. Alameda Research is a prop trading firm owned by SBF. FTT is FTX’s exchange token, where they commit a portion of profits to buying back FTT. Thus, FTT is a non-registered security, functionally similar to junior non-voting stock in FTX.The events of the past week, and the proximate cause of them, seem to have gone as follows, with some events likely slightly out of order or happening simultaneously.Alameda Research, SBF’s crypto trading firm, has its balance sheet leaked.The balance sheet contains a lot of FTT, such that if FTT loses its value it is not clear that Alameda would remain solvent.This raises questions. Some people notice. Caroline, CEO of Alameda, says this is incomplete and that Alameda is fine.CZ, head of Binance, who had been in various battles with SBF, notices. He announces the intention to sell >$500mm of FTT, but does not actually sell.Alameda offers to buy all his FTT at $22/coin, almost full market price at the time but a deal cannot be reached.Other people sell a lot of FTT. There are not other buyers. Alameda spends capital trying to defend the $22 price, and ultimately fails. They continue to sell everything they can to support FTT and to pay FTX depositors, but ulti...]]>
Zvi https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:11:33 None full 3799
ctiLEKuC23oTzsDtm_EA EA - Samo Burja: What the collapse of FTX means for effective altruism by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Samo Burja: What the collapse of FTX means for effective altruism, published by peterhartree on November 17, 2022 on The Effective Altruism Forum.Samo Burja has the best analysis I've seen so far.CONTENT WARNING: Samo's analysis may be upsetting and demoralising.If you're feeling low, anxious, or otherwise in a bad way, I strongly recommend you bookmark this post and come back when you're on better form.If you're ready for a calm attempt to understand what is happening, and what this all means, read on.This post is written in a personal capacityI'm sharing this in a personal capacity.I am not speaking on behalf of any current or past employer.I asked a couple of people for their thoughts before I posted, but the decision to post is mine alone.DisclosureI personally received ~$60K funding from FTX senior staff (Nishad & Claire) in 2020, to pursue a year of independent study. (We had no pre-existing social or professional relationship before they made the grant.)Two projects I run, Radio Bostrom and Parfit Archive, were funded via the FTX Future Fund Regrantors Programme.80,000 Hours, a previous employer for whom I currently serve as a freelance advisor, also received significant donations from SBF and Alameda.I don't follow crypto closely, but I did put about $10K into FTX wallets over the past couple of years. I've not logged into these accounts for months, so I've no idea what they are/were worth.Personal commentI don't follow crypto closely. My understanding of things over at FTX and Alameda is entirely based on what is being reported on Twitter and the newspapers.I can't verify all of the factual claims that I've excerpted below. The ones I actually know about all seem correct, to the best of my knowledge.I am personally feeling calm and fine about things. Part of me is devasted, another part is angry. But I am good at compartmentalising. My capability for questionable gallows humor remains in evidence.I agree with (put high credence on) roughly all of Samo's takes that are excerpted below.I semi-independently arrived at most of these takes between Wednesday 9 - and Friday 11 November, and they've been fairly confident and robust since then. My thinking was supported by many conversations over those days (mostly not with staff from Effective Ventures, CEA or 80K—who are all extremely busy).For me, this analysis is a big positive update on Samo and his team's general capability for insight and analysis. I have followed Samo for a while, read his Great Founder Theory, and listened to at least 10 interviews. I always have a difficult time assessing claims about history and historical forces, though—especially when they have fluent, charismatic and media-savvy proponents.Selected excerptsThe moral authority and intellectual legitimacy of EA will be reducedMost importantly, the collapse of FTX will reduce the moral authority and intellectual legitimacy of the “Effective Altruism” (EA) movement, which has become a rising cultural force in Silicon Valley and a rising financial force in global philanthropy. Sam Bankman-Fried was a highly visible example of an “Effective Altruist.” He was a major donor both to the movement and to causes favored by the movement, for example committing $160 million in grants through the FTX Future Fund. Bankman-Fried also frequently boosted its ideology in press interviews and reportedly only earned money so that he could donate more of it to the cause, an ethos known as “earn to give” that was invented by the EA movement.Good summary of SBF political activityBankman-Fried has also shown a serious interest in engaging with mainstream U.S. politics. He has funded political candidates aligned with his own views in congressional primary races and ballot initiatives in California. In 2020, he became President Joseph Bide...]]>
peterhartree https://forum.effectivealtruism.org/posts/ctiLEKuC23oTzsDtm/samo-burja-what-the-collapse-of-ftx-means-for-effective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Samo Burja: What the collapse of FTX means for effective altruism, published by peterhartree on November 17, 2022 on The Effective Altruism Forum.Samo Burja has the best analysis I've seen so far.CONTENT WARNING: Samo's analysis may be upsetting and demoralising.If you're feeling low, anxious, or otherwise in a bad way, I strongly recommend you bookmark this post and come back when you're on better form.If you're ready for a calm attempt to understand what is happening, and what this all means, read on.This post is written in a personal capacityI'm sharing this in a personal capacity.I am not speaking on behalf of any current or past employer.I asked a couple of people for their thoughts before I posted, but the decision to post is mine alone.DisclosureI personally received ~$60K funding from FTX senior staff (Nishad & Claire) in 2020, to pursue a year of independent study. (We had no pre-existing social or professional relationship before they made the grant.)Two projects I run, Radio Bostrom and Parfit Archive, were funded via the FTX Future Fund Regrantors Programme.80,000 Hours, a previous employer for whom I currently serve as a freelance advisor, also received significant donations from SBF and Alameda.I don't follow crypto closely, but I did put about $10K into FTX wallets over the past couple of years. I've not logged into these accounts for months, so I've no idea what they are/were worth.Personal commentI don't follow crypto closely. My understanding of things over at FTX and Alameda is entirely based on what is being reported on Twitter and the newspapers.I can't verify all of the factual claims that I've excerpted below. The ones I actually know about all seem correct, to the best of my knowledge.I am personally feeling calm and fine about things. Part of me is devasted, another part is angry. But I am good at compartmentalising. My capability for questionable gallows humor remains in evidence.I agree with (put high credence on) roughly all of Samo's takes that are excerpted below.I semi-independently arrived at most of these takes between Wednesday 9 - and Friday 11 November, and they've been fairly confident and robust since then. My thinking was supported by many conversations over those days (mostly not with staff from Effective Ventures, CEA or 80K—who are all extremely busy).For me, this analysis is a big positive update on Samo and his team's general capability for insight and analysis. I have followed Samo for a while, read his Great Founder Theory, and listened to at least 10 interviews. I always have a difficult time assessing claims about history and historical forces, though—especially when they have fluent, charismatic and media-savvy proponents.Selected excerptsThe moral authority and intellectual legitimacy of EA will be reducedMost importantly, the collapse of FTX will reduce the moral authority and intellectual legitimacy of the “Effective Altruism” (EA) movement, which has become a rising cultural force in Silicon Valley and a rising financial force in global philanthropy. Sam Bankman-Fried was a highly visible example of an “Effective Altruist.” He was a major donor both to the movement and to causes favored by the movement, for example committing $160 million in grants through the FTX Future Fund. Bankman-Fried also frequently boosted its ideology in press interviews and reportedly only earned money so that he could donate more of it to the cause, an ethos known as “earn to give” that was invented by the EA movement.Good summary of SBF political activityBankman-Fried has also shown a serious interest in engaging with mainstream U.S. politics. He has funded political candidates aligned with his own views in congressional primary races and ballot initiatives in California. In 2020, he became President Joseph Bide...]]>
Thu, 17 Nov 2022 14:26:39 +0000 EA - Samo Burja: What the collapse of FTX means for effective altruism by peterhartree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Samo Burja: What the collapse of FTX means for effective altruism, published by peterhartree on November 17, 2022 on The Effective Altruism Forum.Samo Burja has the best analysis I've seen so far.CONTENT WARNING: Samo's analysis may be upsetting and demoralising.If you're feeling low, anxious, or otherwise in a bad way, I strongly recommend you bookmark this post and come back when you're on better form.If you're ready for a calm attempt to understand what is happening, and what this all means, read on.This post is written in a personal capacityI'm sharing this in a personal capacity.I am not speaking on behalf of any current or past employer.I asked a couple of people for their thoughts before I posted, but the decision to post is mine alone.DisclosureI personally received ~$60K funding from FTX senior staff (Nishad & Claire) in 2020, to pursue a year of independent study. (We had no pre-existing social or professional relationship before they made the grant.)Two projects I run, Radio Bostrom and Parfit Archive, were funded via the FTX Future Fund Regrantors Programme.80,000 Hours, a previous employer for whom I currently serve as a freelance advisor, also received significant donations from SBF and Alameda.I don't follow crypto closely, but I did put about $10K into FTX wallets over the past couple of years. I've not logged into these accounts for months, so I've no idea what they are/were worth.Personal commentI don't follow crypto closely. My understanding of things over at FTX and Alameda is entirely based on what is being reported on Twitter and the newspapers.I can't verify all of the factual claims that I've excerpted below. The ones I actually know about all seem correct, to the best of my knowledge.I am personally feeling calm and fine about things. Part of me is devasted, another part is angry. But I am good at compartmentalising. My capability for questionable gallows humor remains in evidence.I agree with (put high credence on) roughly all of Samo's takes that are excerpted below.I semi-independently arrived at most of these takes between Wednesday 9 - and Friday 11 November, and they've been fairly confident and robust since then. My thinking was supported by many conversations over those days (mostly not with staff from Effective Ventures, CEA or 80K—who are all extremely busy).For me, this analysis is a big positive update on Samo and his team's general capability for insight and analysis. I have followed Samo for a while, read his Great Founder Theory, and listened to at least 10 interviews. I always have a difficult time assessing claims about history and historical forces, though—especially when they have fluent, charismatic and media-savvy proponents.Selected excerptsThe moral authority and intellectual legitimacy of EA will be reducedMost importantly, the collapse of FTX will reduce the moral authority and intellectual legitimacy of the “Effective Altruism” (EA) movement, which has become a rising cultural force in Silicon Valley and a rising financial force in global philanthropy. Sam Bankman-Fried was a highly visible example of an “Effective Altruist.” He was a major donor both to the movement and to causes favored by the movement, for example committing $160 million in grants through the FTX Future Fund. Bankman-Fried also frequently boosted its ideology in press interviews and reportedly only earned money so that he could donate more of it to the cause, an ethos known as “earn to give” that was invented by the EA movement.Good summary of SBF political activityBankman-Fried has also shown a serious interest in engaging with mainstream U.S. politics. He has funded political candidates aligned with his own views in congressional primary races and ballot initiatives in California. In 2020, he became President Joseph Bide...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Samo Burja: What the collapse of FTX means for effective altruism, published by peterhartree on November 17, 2022 on The Effective Altruism Forum.Samo Burja has the best analysis I've seen so far.CONTENT WARNING: Samo's analysis may be upsetting and demoralising.If you're feeling low, anxious, or otherwise in a bad way, I strongly recommend you bookmark this post and come back when you're on better form.If you're ready for a calm attempt to understand what is happening, and what this all means, read on.This post is written in a personal capacityI'm sharing this in a personal capacity.I am not speaking on behalf of any current or past employer.I asked a couple of people for their thoughts before I posted, but the decision to post is mine alone.DisclosureI personally received ~$60K funding from FTX senior staff (Nishad & Claire) in 2020, to pursue a year of independent study. (We had no pre-existing social or professional relationship before they made the grant.)Two projects I run, Radio Bostrom and Parfit Archive, were funded via the FTX Future Fund Regrantors Programme.80,000 Hours, a previous employer for whom I currently serve as a freelance advisor, also received significant donations from SBF and Alameda.I don't follow crypto closely, but I did put about $10K into FTX wallets over the past couple of years. I've not logged into these accounts for months, so I've no idea what they are/were worth.Personal commentI don't follow crypto closely. My understanding of things over at FTX and Alameda is entirely based on what is being reported on Twitter and the newspapers.I can't verify all of the factual claims that I've excerpted below. The ones I actually know about all seem correct, to the best of my knowledge.I am personally feeling calm and fine about things. Part of me is devasted, another part is angry. But I am good at compartmentalising. My capability for questionable gallows humor remains in evidence.I agree with (put high credence on) roughly all of Samo's takes that are excerpted below.I semi-independently arrived at most of these takes between Wednesday 9 - and Friday 11 November, and they've been fairly confident and robust since then. My thinking was supported by many conversations over those days (mostly not with staff from Effective Ventures, CEA or 80K—who are all extremely busy).For me, this analysis is a big positive update on Samo and his team's general capability for insight and analysis. I have followed Samo for a while, read his Great Founder Theory, and listened to at least 10 interviews. I always have a difficult time assessing claims about history and historical forces, though—especially when they have fluent, charismatic and media-savvy proponents.Selected excerptsThe moral authority and intellectual legitimacy of EA will be reducedMost importantly, the collapse of FTX will reduce the moral authority and intellectual legitimacy of the “Effective Altruism” (EA) movement, which has become a rising cultural force in Silicon Valley and a rising financial force in global philanthropy. Sam Bankman-Fried was a highly visible example of an “Effective Altruist.” He was a major donor both to the movement and to causes favored by the movement, for example committing $160 million in grants through the FTX Future Fund. Bankman-Fried also frequently boosted its ideology in press interviews and reportedly only earned money so that he could donate more of it to the cause, an ethos known as “earn to give” that was invented by the EA movement.Good summary of SBF political activityBankman-Fried has also shown a serious interest in engaging with mainstream U.S. politics. He has funded political candidates aligned with his own views in congressional primary races and ballot initiatives in California. In 2020, he became President Joseph Bide...]]>
peterhartree https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:21 None full 3801
D5RdHgzeczrDKKywH_EA EA - Introducing School of Thinking by Luca Parodi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing School of Thinking, published by Luca Parodi on November 17, 2022 on The Effective Altruism Forum.(Cross-posted on LessWrong)IntroductionSchool of Thinking (SoT) is a media startup.Our purpose is to spread Effective Altruist, longtermist, and rationalist values and ideas as much as possible to the general public by leveraging new media. We aim to reach our goal through the creation of high-quality material posted on an ecosystem of YouTube channels, profiles on social media platforms, podcasts, and SoT's website.Our priority is to produce content in English and Italian, but we will cover more languages down the line. We have been funded by the Effective Altruism Infrastructure Fund (EAIF) and the FTX Future Fund.SummarySchool of Thinking was launched on Instagram in 2020 by Luca Parodi. For a year and a half, Luca's goal has been to do rationalist outreach on Instagram in Italian. The profile reached 18.000 followers in March 2022. In January 2022, Luca received his first grant (from EAIF) to leave his job, work full-time on the project, and expand it on other platforms in Italian. The goal at that moment was to do EA community building in Italy.In March 2022, Eloisa Margherita Calafiore joined the project as a co-founder and Chief Operating Officer, and in April 2022, School of Thinking received a grant from FTX Future Fund to expand globally. Being able to upscale the project, the goal shifted towards doing outreach more globally (whilst continuing our local work in Italian).Our current strategy is to focus on creating weekly long-format videos for YouTube and daily short-format videos and other content (e.g. carousels and IG stories) on TikTok, YouTube, and Instagram. In the last few weeks, we had several viral videos on Instagram Italy (+630.000 views in the last 30 days). We have a total of ~25.000 followers across six platforms and two languages and are growing steadily.We create educational content about several important concepts related to rationalism, longtermism, and effective altruism. We aim to explain them to the general public in an easy, engaging, and practical yet accurate way.We have two paths to impact, listed here in descending order of importance:Attracting new people in EAHelp people new to the Effective Altruist, longtermist, and rationalist communities become more engaged.Spreading Good Values and Improving Reflective ProcessesRaise awareness about specific topics in untouched but influential portions of the general populationIncrease epistemic hygieneSpreading important ideasOur HistoryThe First Few YearsThe general idea for School of Thinking (SoT) was formulated for the first time by me (Luca Parodi) in 2017 when I discovered LessWrong. Nonetheless, I worked on it informally for years before launching the project. The official launch happened in July 2020, with an Instagram profile with the bold goal of becoming the Italian LessWrong's voice on social media. For a year and a half, SoT was just a side activity while I worked as a management consultant. I dedicated an average of 15 extra work hours per week of my time to this project.In September 2021, the Instagram profile reached 10.000 followers, and in November 2021, I applied for the Effective Altruism Infrastructure Fund to leave my job and work full time on School of Thinking. The goal was to move to other platforms - like YouTube or a podcast - ‌and leverage School of Thinking's audience to grow the Italian EA community. EAIF accepted my grant request, so I left my job and started working on SoT full-time in January 2022.Rapidly, I realized that I was one of the few full-time content creators with a strong track record (18k followers on IG in 17 months) within the global EA community, so I started thinking globally. With Eloisa Margherita Calafiore, n...]]>
Luca Parodi https://forum.effectivealtruism.org/posts/D5RdHgzeczrDKKywH/introducing-school-of-thinking-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing School of Thinking, published by Luca Parodi on November 17, 2022 on The Effective Altruism Forum.(Cross-posted on LessWrong)IntroductionSchool of Thinking (SoT) is a media startup.Our purpose is to spread Effective Altruist, longtermist, and rationalist values and ideas as much as possible to the general public by leveraging new media. We aim to reach our goal through the creation of high-quality material posted on an ecosystem of YouTube channels, profiles on social media platforms, podcasts, and SoT's website.Our priority is to produce content in English and Italian, but we will cover more languages down the line. We have been funded by the Effective Altruism Infrastructure Fund (EAIF) and the FTX Future Fund.SummarySchool of Thinking was launched on Instagram in 2020 by Luca Parodi. For a year and a half, Luca's goal has been to do rationalist outreach on Instagram in Italian. The profile reached 18.000 followers in March 2022. In January 2022, Luca received his first grant (from EAIF) to leave his job, work full-time on the project, and expand it on other platforms in Italian. The goal at that moment was to do EA community building in Italy.In March 2022, Eloisa Margherita Calafiore joined the project as a co-founder and Chief Operating Officer, and in April 2022, School of Thinking received a grant from FTX Future Fund to expand globally. Being able to upscale the project, the goal shifted towards doing outreach more globally (whilst continuing our local work in Italian).Our current strategy is to focus on creating weekly long-format videos for YouTube and daily short-format videos and other content (e.g. carousels and IG stories) on TikTok, YouTube, and Instagram. In the last few weeks, we had several viral videos on Instagram Italy (+630.000 views in the last 30 days). We have a total of ~25.000 followers across six platforms and two languages and are growing steadily.We create educational content about several important concepts related to rationalism, longtermism, and effective altruism. We aim to explain them to the general public in an easy, engaging, and practical yet accurate way.We have two paths to impact, listed here in descending order of importance:Attracting new people in EAHelp people new to the Effective Altruist, longtermist, and rationalist communities become more engaged.Spreading Good Values and Improving Reflective ProcessesRaise awareness about specific topics in untouched but influential portions of the general populationIncrease epistemic hygieneSpreading important ideasOur HistoryThe First Few YearsThe general idea for School of Thinking (SoT) was formulated for the first time by me (Luca Parodi) in 2017 when I discovered LessWrong. Nonetheless, I worked on it informally for years before launching the project. The official launch happened in July 2020, with an Instagram profile with the bold goal of becoming the Italian LessWrong's voice on social media. For a year and a half, SoT was just a side activity while I worked as a management consultant. I dedicated an average of 15 extra work hours per week of my time to this project.In September 2021, the Instagram profile reached 10.000 followers, and in November 2021, I applied for the Effective Altruism Infrastructure Fund to leave my job and work full time on School of Thinking. The goal was to move to other platforms - like YouTube or a podcast - ‌and leverage School of Thinking's audience to grow the Italian EA community. EAIF accepted my grant request, so I left my job and started working on SoT full-time in January 2022.Rapidly, I realized that I was one of the few full-time content creators with a strong track record (18k followers on IG in 17 months) within the global EA community, so I started thinking globally. With Eloisa Margherita Calafiore, n...]]>
Thu, 17 Nov 2022 13:03:18 +0000 EA - Introducing School of Thinking by Luca Parodi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing School of Thinking, published by Luca Parodi on November 17, 2022 on The Effective Altruism Forum.(Cross-posted on LessWrong)IntroductionSchool of Thinking (SoT) is a media startup.Our purpose is to spread Effective Altruist, longtermist, and rationalist values and ideas as much as possible to the general public by leveraging new media. We aim to reach our goal through the creation of high-quality material posted on an ecosystem of YouTube channels, profiles on social media platforms, podcasts, and SoT's website.Our priority is to produce content in English and Italian, but we will cover more languages down the line. We have been funded by the Effective Altruism Infrastructure Fund (EAIF) and the FTX Future Fund.SummarySchool of Thinking was launched on Instagram in 2020 by Luca Parodi. For a year and a half, Luca's goal has been to do rationalist outreach on Instagram in Italian. The profile reached 18.000 followers in March 2022. In January 2022, Luca received his first grant (from EAIF) to leave his job, work full-time on the project, and expand it on other platforms in Italian. The goal at that moment was to do EA community building in Italy.In March 2022, Eloisa Margherita Calafiore joined the project as a co-founder and Chief Operating Officer, and in April 2022, School of Thinking received a grant from FTX Future Fund to expand globally. Being able to upscale the project, the goal shifted towards doing outreach more globally (whilst continuing our local work in Italian).Our current strategy is to focus on creating weekly long-format videos for YouTube and daily short-format videos and other content (e.g. carousels and IG stories) on TikTok, YouTube, and Instagram. In the last few weeks, we had several viral videos on Instagram Italy (+630.000 views in the last 30 days). We have a total of ~25.000 followers across six platforms and two languages and are growing steadily.We create educational content about several important concepts related to rationalism, longtermism, and effective altruism. We aim to explain them to the general public in an easy, engaging, and practical yet accurate way.We have two paths to impact, listed here in descending order of importance:Attracting new people in EAHelp people new to the Effective Altruist, longtermist, and rationalist communities become more engaged.Spreading Good Values and Improving Reflective ProcessesRaise awareness about specific topics in untouched but influential portions of the general populationIncrease epistemic hygieneSpreading important ideasOur HistoryThe First Few YearsThe general idea for School of Thinking (SoT) was formulated for the first time by me (Luca Parodi) in 2017 when I discovered LessWrong. Nonetheless, I worked on it informally for years before launching the project. The official launch happened in July 2020, with an Instagram profile with the bold goal of becoming the Italian LessWrong's voice on social media. For a year and a half, SoT was just a side activity while I worked as a management consultant. I dedicated an average of 15 extra work hours per week of my time to this project.In September 2021, the Instagram profile reached 10.000 followers, and in November 2021, I applied for the Effective Altruism Infrastructure Fund to leave my job and work full time on School of Thinking. The goal was to move to other platforms - like YouTube or a podcast - ‌and leverage School of Thinking's audience to grow the Italian EA community. EAIF accepted my grant request, so I left my job and started working on SoT full-time in January 2022.Rapidly, I realized that I was one of the few full-time content creators with a strong track record (18k followers on IG in 17 months) within the global EA community, so I started thinking globally. With Eloisa Margherita Calafiore, n...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing School of Thinking, published by Luca Parodi on November 17, 2022 on The Effective Altruism Forum.(Cross-posted on LessWrong)IntroductionSchool of Thinking (SoT) is a media startup.Our purpose is to spread Effective Altruist, longtermist, and rationalist values and ideas as much as possible to the general public by leveraging new media. We aim to reach our goal through the creation of high-quality material posted on an ecosystem of YouTube channels, profiles on social media platforms, podcasts, and SoT's website.Our priority is to produce content in English and Italian, but we will cover more languages down the line. We have been funded by the Effective Altruism Infrastructure Fund (EAIF) and the FTX Future Fund.SummarySchool of Thinking was launched on Instagram in 2020 by Luca Parodi. For a year and a half, Luca's goal has been to do rationalist outreach on Instagram in Italian. The profile reached 18.000 followers in March 2022. In January 2022, Luca received his first grant (from EAIF) to leave his job, work full-time on the project, and expand it on other platforms in Italian. The goal at that moment was to do EA community building in Italy.In March 2022, Eloisa Margherita Calafiore joined the project as a co-founder and Chief Operating Officer, and in April 2022, School of Thinking received a grant from FTX Future Fund to expand globally. Being able to upscale the project, the goal shifted towards doing outreach more globally (whilst continuing our local work in Italian).Our current strategy is to focus on creating weekly long-format videos for YouTube and daily short-format videos and other content (e.g. carousels and IG stories) on TikTok, YouTube, and Instagram. In the last few weeks, we had several viral videos on Instagram Italy (+630.000 views in the last 30 days). We have a total of ~25.000 followers across six platforms and two languages and are growing steadily.We create educational content about several important concepts related to rationalism, longtermism, and effective altruism. We aim to explain them to the general public in an easy, engaging, and practical yet accurate way.We have two paths to impact, listed here in descending order of importance:Attracting new people in EAHelp people new to the Effective Altruist, longtermist, and rationalist communities become more engaged.Spreading Good Values and Improving Reflective ProcessesRaise awareness about specific topics in untouched but influential portions of the general populationIncrease epistemic hygieneSpreading important ideasOur HistoryThe First Few YearsThe general idea for School of Thinking (SoT) was formulated for the first time by me (Luca Parodi) in 2017 when I discovered LessWrong. Nonetheless, I worked on it informally for years before launching the project. The official launch happened in July 2020, with an Instagram profile with the bold goal of becoming the Italian LessWrong's voice on social media. For a year and a half, SoT was just a side activity while I worked as a management consultant. I dedicated an average of 15 extra work hours per week of my time to this project.In September 2021, the Instagram profile reached 10.000 followers, and in November 2021, I applied for the Effective Altruism Infrastructure Fund to leave my job and work full time on School of Thinking. The goal was to move to other platforms - like YouTube or a podcast - ‌and leverage School of Thinking's audience to grow the Italian EA community. EAIF accepted my grant request, so I left my job and started working on SoT full-time in January 2022.Rapidly, I realized that I was one of the few full-time content creators with a strong track record (18k followers on IG in 17 months) within the global EA community, so I started thinking globally. With Eloisa Margherita Calafiore, n...]]>
Luca Parodi https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 17:49 None full 3800
8FmaQDyRqDE7bpXtv_EA EA - late entry for the EA criticism contest by eigenrobot Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: late entry for the EA criticism contest, published by eigenrobot on November 17, 2022 on The Effective Altruism Forum.I recently wrote a post digesting my impression of what the SBF meltdown means for EA. I hadn't intended to post it here (I don't identify as an EA and this feels a bit like barging into a conversation) but an EA mutual suggested that I ought to, and that it would be welcome. I'm still not sure the tone is quite right for this forum, and my epistemic confidence isn't especially high, but on the off chance any of it's useful: here you are.Quick summary: I think EA as a movement is extremely vulnerable to capture by individuals and organizations with very different value systems, but there are some things that might be done to at least mitigate this danger.Good luck guys.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
eigenrobot https://forum.effectivealtruism.org/posts/8FmaQDyRqDE7bpXtv/late-entry-for-the-ea-criticism-contest Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: late entry for the EA criticism contest, published by eigenrobot on November 17, 2022 on The Effective Altruism Forum.I recently wrote a post digesting my impression of what the SBF meltdown means for EA. I hadn't intended to post it here (I don't identify as an EA and this feels a bit like barging into a conversation) but an EA mutual suggested that I ought to, and that it would be welcome. I'm still not sure the tone is quite right for this forum, and my epistemic confidence isn't especially high, but on the off chance any of it's useful: here you are.Quick summary: I think EA as a movement is extremely vulnerable to capture by individuals and organizations with very different value systems, but there are some things that might be done to at least mitigate this danger.Good luck guys.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 17 Nov 2022 07:49:22 +0000 EA - late entry for the EA criticism contest by eigenrobot Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: late entry for the EA criticism contest, published by eigenrobot on November 17, 2022 on The Effective Altruism Forum.I recently wrote a post digesting my impression of what the SBF meltdown means for EA. I hadn't intended to post it here (I don't identify as an EA and this feels a bit like barging into a conversation) but an EA mutual suggested that I ought to, and that it would be welcome. I'm still not sure the tone is quite right for this forum, and my epistemic confidence isn't especially high, but on the off chance any of it's useful: here you are.Quick summary: I think EA as a movement is extremely vulnerable to capture by individuals and organizations with very different value systems, but there are some things that might be done to at least mitigate this danger.Good luck guys.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: late entry for the EA criticism contest, published by eigenrobot on November 17, 2022 on The Effective Altruism Forum.I recently wrote a post digesting my impression of what the SBF meltdown means for EA. I hadn't intended to post it here (I don't identify as an EA and this feels a bit like barging into a conversation) but an EA mutual suggested that I ought to, and that it would be welcome. I'm still not sure the tone is quite right for this forum, and my epistemic confidence isn't especially high, but on the off chance any of it's useful: here you are.Quick summary: I think EA as a movement is extremely vulnerable to capture by individuals and organizations with very different value systems, but there are some things that might be done to at least mitigate this danger.Good luck guys.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
eigenrobot https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:01 None full 3788
dMcPi5BDy5mQHZHGH_EA EA - Stop Overreacting re: FTX / SBF by MattBall Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stop Overreacting re: FTX / SBF, published by MattBall on November 16, 2022 on The Effective Altruism Forum.Written as tomorrow's public blog post in part as a reaction to this, but given what I see on this forum and what I hear from otherwise level-headed individuals, I'm sharing it here now:As anyone who has read my blog or Losing My Religions knows, I have my disagreements with some effective altruists (EAs). I also have some fundamental philosophical differences. But after this FTX fiasco, some non-EAs are taking one person's failures as an excuse to mock and condemn the entire idea of effective altruism. (And some EAs are losing their minds and questioning their very identity and "purpose" in life.)Think about it. Fine, SBF seemingly had lots of money. And he said he was pursuing money for EA causes. But he is just one guy! His money doesn’t make him any more important than anyone else; even less so because his thoughts and ideas are not central to EA.Just because one Effective Altruist did things we don’t like doesn’t mean we throw out utilitarianism or toss Peter Singer under the bus, just as we don’t become Republicans because Bill Clinton was sleazy or give up on meditation because some teachers were predators.The EA community is a lot of people.Some of them are bad. Some of them are mentally unstable. Some of them are crazy. We don’t judge all of EA on one person – that is lunacy.Let’s think about this from first principles.These people are altruists. In short, they care about others, not just themselves. They want others to not suffer but instead thrive. But they also know:Some individuals are suffering more than others. My hands and back hurt and my tinnitus is screaming, but I can say with absolute certainty that many people out there are suffering worse, such that they wish to die. So if you are a rational altruist, you recognize differences in need, regardless of any other factors.Efforts and money have different impacts in different situations. An additional million dollars to Harvard has a different impact than a million dollars in Polio eradication. (And both have a different sign compared to a million dollars to the homophobic Church of Jesus Christ and Latter-day Saints.)Now there is a lot more to say about effective altruism, and no one knows this more than EAs. But this is the bottom line: They want to be as effective as possible in helping others.Is this true of the critics? Or do the critics simply not want to feel guilty? Guilty about buying a fancy car, owning several houses, taking lavish vacations? Giving to their church, their alma mater, their kid's soccer team? Caring infinitely more for their kids than every non-human animal on the planet?Before you pile on along with those attacking effective altruists, ask yourself if doing so is the best way to help the less fortunate.And note: If you are sure that you know what EAs should do and how they should do it, I am willing to bet all my savings that you are wrong. These questions are simply too complex to have certain questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
MattBall https://forum.effectivealtruism.org/posts/dMcPi5BDy5mQHZHGH/stop-overreacting-re-ftx-sbf Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stop Overreacting re: FTX / SBF, published by MattBall on November 16, 2022 on The Effective Altruism Forum.Written as tomorrow's public blog post in part as a reaction to this, but given what I see on this forum and what I hear from otherwise level-headed individuals, I'm sharing it here now:As anyone who has read my blog or Losing My Religions knows, I have my disagreements with some effective altruists (EAs). I also have some fundamental philosophical differences. But after this FTX fiasco, some non-EAs are taking one person's failures as an excuse to mock and condemn the entire idea of effective altruism. (And some EAs are losing their minds and questioning their very identity and "purpose" in life.)Think about it. Fine, SBF seemingly had lots of money. And he said he was pursuing money for EA causes. But he is just one guy! His money doesn’t make him any more important than anyone else; even less so because his thoughts and ideas are not central to EA.Just because one Effective Altruist did things we don’t like doesn’t mean we throw out utilitarianism or toss Peter Singer under the bus, just as we don’t become Republicans because Bill Clinton was sleazy or give up on meditation because some teachers were predators.The EA community is a lot of people.Some of them are bad. Some of them are mentally unstable. Some of them are crazy. We don’t judge all of EA on one person – that is lunacy.Let’s think about this from first principles.These people are altruists. In short, they care about others, not just themselves. They want others to not suffer but instead thrive. But they also know:Some individuals are suffering more than others. My hands and back hurt and my tinnitus is screaming, but I can say with absolute certainty that many people out there are suffering worse, such that they wish to die. So if you are a rational altruist, you recognize differences in need, regardless of any other factors.Efforts and money have different impacts in different situations. An additional million dollars to Harvard has a different impact than a million dollars in Polio eradication. (And both have a different sign compared to a million dollars to the homophobic Church of Jesus Christ and Latter-day Saints.)Now there is a lot more to say about effective altruism, and no one knows this more than EAs. But this is the bottom line: They want to be as effective as possible in helping others.Is this true of the critics? Or do the critics simply not want to feel guilty? Guilty about buying a fancy car, owning several houses, taking lavish vacations? Giving to their church, their alma mater, their kid's soccer team? Caring infinitely more for their kids than every non-human animal on the planet?Before you pile on along with those attacking effective altruists, ask yourself if doing so is the best way to help the less fortunate.And note: If you are sure that you know what EAs should do and how they should do it, I am willing to bet all my savings that you are wrong. These questions are simply too complex to have certain questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 17 Nov 2022 02:44:22 +0000 EA - Stop Overreacting re: FTX / SBF by MattBall Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stop Overreacting re: FTX / SBF, published by MattBall on November 16, 2022 on The Effective Altruism Forum.Written as tomorrow's public blog post in part as a reaction to this, but given what I see on this forum and what I hear from otherwise level-headed individuals, I'm sharing it here now:As anyone who has read my blog or Losing My Religions knows, I have my disagreements with some effective altruists (EAs). I also have some fundamental philosophical differences. But after this FTX fiasco, some non-EAs are taking one person's failures as an excuse to mock and condemn the entire idea of effective altruism. (And some EAs are losing their minds and questioning their very identity and "purpose" in life.)Think about it. Fine, SBF seemingly had lots of money. And he said he was pursuing money for EA causes. But he is just one guy! His money doesn’t make him any more important than anyone else; even less so because his thoughts and ideas are not central to EA.Just because one Effective Altruist did things we don’t like doesn’t mean we throw out utilitarianism or toss Peter Singer under the bus, just as we don’t become Republicans because Bill Clinton was sleazy or give up on meditation because some teachers were predators.The EA community is a lot of people.Some of them are bad. Some of them are mentally unstable. Some of them are crazy. We don’t judge all of EA on one person – that is lunacy.Let’s think about this from first principles.These people are altruists. In short, they care about others, not just themselves. They want others to not suffer but instead thrive. But they also know:Some individuals are suffering more than others. My hands and back hurt and my tinnitus is screaming, but I can say with absolute certainty that many people out there are suffering worse, such that they wish to die. So if you are a rational altruist, you recognize differences in need, regardless of any other factors.Efforts and money have different impacts in different situations. An additional million dollars to Harvard has a different impact than a million dollars in Polio eradication. (And both have a different sign compared to a million dollars to the homophobic Church of Jesus Christ and Latter-day Saints.)Now there is a lot more to say about effective altruism, and no one knows this more than EAs. But this is the bottom line: They want to be as effective as possible in helping others.Is this true of the critics? Or do the critics simply not want to feel guilty? Guilty about buying a fancy car, owning several houses, taking lavish vacations? Giving to their church, their alma mater, their kid's soccer team? Caring infinitely more for their kids than every non-human animal on the planet?Before you pile on along with those attacking effective altruists, ask yourself if doing so is the best way to help the less fortunate.And note: If you are sure that you know what EAs should do and how they should do it, I am willing to bet all my savings that you are wrong. These questions are simply too complex to have certain questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stop Overreacting re: FTX / SBF, published by MattBall on November 16, 2022 on The Effective Altruism Forum.Written as tomorrow's public blog post in part as a reaction to this, but given what I see on this forum and what I hear from otherwise level-headed individuals, I'm sharing it here now:As anyone who has read my blog or Losing My Religions knows, I have my disagreements with some effective altruists (EAs). I also have some fundamental philosophical differences. But after this FTX fiasco, some non-EAs are taking one person's failures as an excuse to mock and condemn the entire idea of effective altruism. (And some EAs are losing their minds and questioning their very identity and "purpose" in life.)Think about it. Fine, SBF seemingly had lots of money. And he said he was pursuing money for EA causes. But he is just one guy! His money doesn’t make him any more important than anyone else; even less so because his thoughts and ideas are not central to EA.Just because one Effective Altruist did things we don’t like doesn’t mean we throw out utilitarianism or toss Peter Singer under the bus, just as we don’t become Republicans because Bill Clinton was sleazy or give up on meditation because some teachers were predators.The EA community is a lot of people.Some of them are bad. Some of them are mentally unstable. Some of them are crazy. We don’t judge all of EA on one person – that is lunacy.Let’s think about this from first principles.These people are altruists. In short, they care about others, not just themselves. They want others to not suffer but instead thrive. But they also know:Some individuals are suffering more than others. My hands and back hurt and my tinnitus is screaming, but I can say with absolute certainty that many people out there are suffering worse, such that they wish to die. So if you are a rational altruist, you recognize differences in need, regardless of any other factors.Efforts and money have different impacts in different situations. An additional million dollars to Harvard has a different impact than a million dollars in Polio eradication. (And both have a different sign compared to a million dollars to the homophobic Church of Jesus Christ and Latter-day Saints.)Now there is a lot more to say about effective altruism, and no one knows this more than EAs. But this is the bottom line: They want to be as effective as possible in helping others.Is this true of the critics? Or do the critics simply not want to feel guilty? Guilty about buying a fancy car, owning several houses, taking lavish vacations? Giving to their church, their alma mater, their kid's soccer team? Caring infinitely more for their kids than every non-human animal on the planet?Before you pile on along with those attacking effective altruists, ask yourself if doing so is the best way to help the less fortunate.And note: If you are sure that you know what EAs should do and how they should do it, I am willing to bet all my savings that you are wrong. These questions are simply too complex to have certain questions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
MattBall https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:03 None full 3789
SB279h7bZbveNTxYB_EA EA - New Faunalytics Study on Local Action for Animals as a Stepping Stone to State Protections by JLRiedi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Faunalytics Study on Local Action for Animals as a Stepping Stone to State Protections, published by JLRiedi on November 16, 2022 on The Effective Altruism Forum.Legislation is a key avenue advocates use to effect change for animals, but there isn’t much research about how to choose tractable issues and lobby for them successfully. Nonprofit research organization Faunalytics has released a new study which found evidence that local laws can be an effective way to implement animal welfare protections, and could lead to great success at the state level.Read the full study here:/BackgroundMunicipal ordinances can be an effective way to create animal protection laws at the local level, and could lead to great success at the state level. Passing laws at the local level allows people to help animals in their communities, while providing a model for other cities and jurisdictions. Local laws can also create momentum for statewide initiatives, which demonstrates a state’s strong commitment to protecting animals.The goal of this project was to look at whether local laws have laid the groundwork for laws at the state level of government, as a potential avenue for change. This study aimed to determine whether there is evidence that local animal laws have been or could influence state laws. And secondarily, whether case law has influenced state legislation.To this end, we reviewed legal materials relating to animal welfare in the United States. The scope of this review included legislation and case law from the past twenty years, related to a range of animal welfare topics. Our primary focus was on farmed animal issues, but with consideration given to other issues that are similar and potentially generalizable. Our goals were to identify any trends and provide recommendations to advocates based on previous attempts to broaden the scope of animal welfare laws.Research TeamThe project’s lead author was Precious Hose (Elisabeth Haub School of Law at Pace University). Dr. Jo Anderson (Faunalytics) reviewed and oversaw the work.ConclusionSuccesses & ChallengesLocal laws can sometimes create meaningful change at the state level. Success is far from guaranteed, but this review found evidence of states taking into account local laws and resolutions during discussion of bills on animal topics including battery cages, gestation crates, veal crates, foie gras, and meat reduction. Some of those attempts failed, but they suggest that at the bare minimum, states will generally consider local examples, even outside of their own state.This research encompassing all fifty U.S. states found evidence suggesting that when similar laws are widely adopted across multiple municipalities, it appears to increase the chances of passing related state laws. In the strongest example, over 400 municipalities passed their own ordinances banning puppy mill sales and the widespread support from many municipalities supported five states passing statewide bans on puppy mill sales within the past six years.The biggest barrier to creating change from the ground up is state preemption of local laws. While existing preemptions—which are particularly common for laws around animal farming—pose a major hindrance to progress, the worst-case scenario is for opponents of animal protection fight back against legislation by bringing a state bill to preempt pro-animal local ordinances, as occurred in the case of Puppies ’N Love v. City of Phoenix, 2017.Some opponents may also bring lawsuits against the city enforcing the ordinance. While this is a concerning possible outcome to consider, we found that it is a relatively rare outcome to date, having occurred in only four examples we reviewed. Even in instances where there is a lot of support for animal protection actions like banning puppy mill sales—which acc...]]>
JLRiedi https://forum.effectivealtruism.org/posts/SB279h7bZbveNTxYB/new-faunalytics-study-on-local-action-for-animals-as-a Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Faunalytics Study on Local Action for Animals as a Stepping Stone to State Protections, published by JLRiedi on November 16, 2022 on The Effective Altruism Forum.Legislation is a key avenue advocates use to effect change for animals, but there isn’t much research about how to choose tractable issues and lobby for them successfully. Nonprofit research organization Faunalytics has released a new study which found evidence that local laws can be an effective way to implement animal welfare protections, and could lead to great success at the state level.Read the full study here:/BackgroundMunicipal ordinances can be an effective way to create animal protection laws at the local level, and could lead to great success at the state level. Passing laws at the local level allows people to help animals in their communities, while providing a model for other cities and jurisdictions. Local laws can also create momentum for statewide initiatives, which demonstrates a state’s strong commitment to protecting animals.The goal of this project was to look at whether local laws have laid the groundwork for laws at the state level of government, as a potential avenue for change. This study aimed to determine whether there is evidence that local animal laws have been or could influence state laws. And secondarily, whether case law has influenced state legislation.To this end, we reviewed legal materials relating to animal welfare in the United States. The scope of this review included legislation and case law from the past twenty years, related to a range of animal welfare topics. Our primary focus was on farmed animal issues, but with consideration given to other issues that are similar and potentially generalizable. Our goals were to identify any trends and provide recommendations to advocates based on previous attempts to broaden the scope of animal welfare laws.Research TeamThe project’s lead author was Precious Hose (Elisabeth Haub School of Law at Pace University). Dr. Jo Anderson (Faunalytics) reviewed and oversaw the work.ConclusionSuccesses & ChallengesLocal laws can sometimes create meaningful change at the state level. Success is far from guaranteed, but this review found evidence of states taking into account local laws and resolutions during discussion of bills on animal topics including battery cages, gestation crates, veal crates, foie gras, and meat reduction. Some of those attempts failed, but they suggest that at the bare minimum, states will generally consider local examples, even outside of their own state.This research encompassing all fifty U.S. states found evidence suggesting that when similar laws are widely adopted across multiple municipalities, it appears to increase the chances of passing related state laws. In the strongest example, over 400 municipalities passed their own ordinances banning puppy mill sales and the widespread support from many municipalities supported five states passing statewide bans on puppy mill sales within the past six years.The biggest barrier to creating change from the ground up is state preemption of local laws. While existing preemptions—which are particularly common for laws around animal farming—pose a major hindrance to progress, the worst-case scenario is for opponents of animal protection fight back against legislation by bringing a state bill to preempt pro-animal local ordinances, as occurred in the case of Puppies ’N Love v. City of Phoenix, 2017.Some opponents may also bring lawsuits against the city enforcing the ordinance. While this is a concerning possible outcome to consider, we found that it is a relatively rare outcome to date, having occurred in only four examples we reviewed. Even in instances where there is a lot of support for animal protection actions like banning puppy mill sales—which acc...]]>
Wed, 16 Nov 2022 23:00:59 +0000 EA - New Faunalytics Study on Local Action for Animals as a Stepping Stone to State Protections by JLRiedi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Faunalytics Study on Local Action for Animals as a Stepping Stone to State Protections, published by JLRiedi on November 16, 2022 on The Effective Altruism Forum.Legislation is a key avenue advocates use to effect change for animals, but there isn’t much research about how to choose tractable issues and lobby for them successfully. Nonprofit research organization Faunalytics has released a new study which found evidence that local laws can be an effective way to implement animal welfare protections, and could lead to great success at the state level.Read the full study here:/BackgroundMunicipal ordinances can be an effective way to create animal protection laws at the local level, and could lead to great success at the state level. Passing laws at the local level allows people to help animals in their communities, while providing a model for other cities and jurisdictions. Local laws can also create momentum for statewide initiatives, which demonstrates a state’s strong commitment to protecting animals.The goal of this project was to look at whether local laws have laid the groundwork for laws at the state level of government, as a potential avenue for change. This study aimed to determine whether there is evidence that local animal laws have been or could influence state laws. And secondarily, whether case law has influenced state legislation.To this end, we reviewed legal materials relating to animal welfare in the United States. The scope of this review included legislation and case law from the past twenty years, related to a range of animal welfare topics. Our primary focus was on farmed animal issues, but with consideration given to other issues that are similar and potentially generalizable. Our goals were to identify any trends and provide recommendations to advocates based on previous attempts to broaden the scope of animal welfare laws.Research TeamThe project’s lead author was Precious Hose (Elisabeth Haub School of Law at Pace University). Dr. Jo Anderson (Faunalytics) reviewed and oversaw the work.ConclusionSuccesses & ChallengesLocal laws can sometimes create meaningful change at the state level. Success is far from guaranteed, but this review found evidence of states taking into account local laws and resolutions during discussion of bills on animal topics including battery cages, gestation crates, veal crates, foie gras, and meat reduction. Some of those attempts failed, but they suggest that at the bare minimum, states will generally consider local examples, even outside of their own state.This research encompassing all fifty U.S. states found evidence suggesting that when similar laws are widely adopted across multiple municipalities, it appears to increase the chances of passing related state laws. In the strongest example, over 400 municipalities passed their own ordinances banning puppy mill sales and the widespread support from many municipalities supported five states passing statewide bans on puppy mill sales within the past six years.The biggest barrier to creating change from the ground up is state preemption of local laws. While existing preemptions—which are particularly common for laws around animal farming—pose a major hindrance to progress, the worst-case scenario is for opponents of animal protection fight back against legislation by bringing a state bill to preempt pro-animal local ordinances, as occurred in the case of Puppies ’N Love v. City of Phoenix, 2017.Some opponents may also bring lawsuits against the city enforcing the ordinance. While this is a concerning possible outcome to consider, we found that it is a relatively rare outcome to date, having occurred in only four examples we reviewed. Even in instances where there is a lot of support for animal protection actions like banning puppy mill sales—which acc...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Faunalytics Study on Local Action for Animals as a Stepping Stone to State Protections, published by JLRiedi on November 16, 2022 on The Effective Altruism Forum.Legislation is a key avenue advocates use to effect change for animals, but there isn’t much research about how to choose tractable issues and lobby for them successfully. Nonprofit research organization Faunalytics has released a new study which found evidence that local laws can be an effective way to implement animal welfare protections, and could lead to great success at the state level.Read the full study here:/BackgroundMunicipal ordinances can be an effective way to create animal protection laws at the local level, and could lead to great success at the state level. Passing laws at the local level allows people to help animals in their communities, while providing a model for other cities and jurisdictions. Local laws can also create momentum for statewide initiatives, which demonstrates a state’s strong commitment to protecting animals.The goal of this project was to look at whether local laws have laid the groundwork for laws at the state level of government, as a potential avenue for change. This study aimed to determine whether there is evidence that local animal laws have been or could influence state laws. And secondarily, whether case law has influenced state legislation.To this end, we reviewed legal materials relating to animal welfare in the United States. The scope of this review included legislation and case law from the past twenty years, related to a range of animal welfare topics. Our primary focus was on farmed animal issues, but with consideration given to other issues that are similar and potentially generalizable. Our goals were to identify any trends and provide recommendations to advocates based on previous attempts to broaden the scope of animal welfare laws.Research TeamThe project’s lead author was Precious Hose (Elisabeth Haub School of Law at Pace University). Dr. Jo Anderson (Faunalytics) reviewed and oversaw the work.ConclusionSuccesses & ChallengesLocal laws can sometimes create meaningful change at the state level. Success is far from guaranteed, but this review found evidence of states taking into account local laws and resolutions during discussion of bills on animal topics including battery cages, gestation crates, veal crates, foie gras, and meat reduction. Some of those attempts failed, but they suggest that at the bare minimum, states will generally consider local examples, even outside of their own state.This research encompassing all fifty U.S. states found evidence suggesting that when similar laws are widely adopted across multiple municipalities, it appears to increase the chances of passing related state laws. In the strongest example, over 400 municipalities passed their own ordinances banning puppy mill sales and the widespread support from many municipalities supported five states passing statewide bans on puppy mill sales within the past six years.The biggest barrier to creating change from the ground up is state preemption of local laws. While existing preemptions—which are particularly common for laws around animal farming—pose a major hindrance to progress, the worst-case scenario is for opponents of animal protection fight back against legislation by bringing a state bill to preempt pro-animal local ordinances, as occurred in the case of Puppies ’N Love v. City of Phoenix, 2017.Some opponents may also bring lawsuits against the city enforcing the ordinance. While this is a concerning possible outcome to consider, we found that it is a relatively rare outcome to date, having occurred in only four examples we reviewed. Even in instances where there is a lot of support for animal protection actions like banning puppy mill sales—which acc...]]>
JLRiedi https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:41 None full 3790
vjyWBnCmXjErAN6sZ_EA EA - Kelsey Piper's recent interview of SBF by Agustín Covarrubias Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kelsey Piper's recent interview of SBF, published by Agustín Covarrubias on November 16, 2022 on The Effective Altruism Forum.Kelsey Piper from Vox's Future Perfect very recently released an interview (made through Twitter DMs) with Sam Bankman-Fried. The interview goes in depth into the events surrounding FTX and Alameda Research.As we messaged, I was trying to make sense of what, behind the PR and the charitable donations and the lobbying, Bankman-Fried actually believes about what’s right and what’s wrong — and especially the ethics of what he did and the industry he worked in. Looming over our whole conversation was the fact that people who trusted him have lost their savings, and that he’s done incalculable damage to everything he proclaimed only a few weeks ago to care about. The grief and pain he has caused is immense, and I came away from our conversation appalled by much of what he said. But if these mistakes haunted him, he largely didn’t show it.The interview gives a much-awaited outlet into SBF's thinking, specifically in relation to prior questions in the community regarding whether SBF was practicing some form of naive consequentialism or whether the events surround the crisis largely emerged from incompetence.During the interview, Kelsey asked explicitly about previous statements by SBF agreeing with the existence of strong moral boundaries to maximizing good. His answers seem to suggest he had intentionally misrepresented his views on the issue:This seems to give some credit to the theory that SBF could have been acting like a naive utilitarian, choosing to engage in morally objectionable behavior to maximize his positive impact, while explicitly misrepresenting his views to others.However, Kelsey also asked directly regarding the lending out of customer deposits alongside Alameda Research:All of his claims are at least consistent with the view of SBF acting like an incompetent investor. FTX and Alameda Research seems to have had serious governance and accounting problems, and SBF seems to have taken several decisions which to him sounded individually reasonable, all based on bad information. He repeatedly doubled down, instead of cutting his losses.I'm still not sure what to take out of this interview, especially because Sam seems, at best, somewhat incoherent regarding his moral views and previous mistakes. This might have to do with his emotional state at the time of the interview, or even be a sign that he's blatantly lying, but I still think there is a lot of stuff to update from.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Agustín Covarrubias https://forum.effectivealtruism.org/posts/vjyWBnCmXjErAN6sZ/kelsey-piper-s-recent-interview-of-sbf Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kelsey Piper's recent interview of SBF, published by Agustín Covarrubias on November 16, 2022 on The Effective Altruism Forum.Kelsey Piper from Vox's Future Perfect very recently released an interview (made through Twitter DMs) with Sam Bankman-Fried. The interview goes in depth into the events surrounding FTX and Alameda Research.As we messaged, I was trying to make sense of what, behind the PR and the charitable donations and the lobbying, Bankman-Fried actually believes about what’s right and what’s wrong — and especially the ethics of what he did and the industry he worked in. Looming over our whole conversation was the fact that people who trusted him have lost their savings, and that he’s done incalculable damage to everything he proclaimed only a few weeks ago to care about. The grief and pain he has caused is immense, and I came away from our conversation appalled by much of what he said. But if these mistakes haunted him, he largely didn’t show it.The interview gives a much-awaited outlet into SBF's thinking, specifically in relation to prior questions in the community regarding whether SBF was practicing some form of naive consequentialism or whether the events surround the crisis largely emerged from incompetence.During the interview, Kelsey asked explicitly about previous statements by SBF agreeing with the existence of strong moral boundaries to maximizing good. His answers seem to suggest he had intentionally misrepresented his views on the issue:This seems to give some credit to the theory that SBF could have been acting like a naive utilitarian, choosing to engage in morally objectionable behavior to maximize his positive impact, while explicitly misrepresenting his views to others.However, Kelsey also asked directly regarding the lending out of customer deposits alongside Alameda Research:All of his claims are at least consistent with the view of SBF acting like an incompetent investor. FTX and Alameda Research seems to have had serious governance and accounting problems, and SBF seems to have taken several decisions which to him sounded individually reasonable, all based on bad information. He repeatedly doubled down, instead of cutting his losses.I'm still not sure what to take out of this interview, especially because Sam seems, at best, somewhat incoherent regarding his moral views and previous mistakes. This might have to do with his emotional state at the time of the interview, or even be a sign that he's blatantly lying, but I still think there is a lot of stuff to update from.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 16 Nov 2022 21:02:57 +0000 EA - Kelsey Piper's recent interview of SBF by Agustín Covarrubias Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kelsey Piper's recent interview of SBF, published by Agustín Covarrubias on November 16, 2022 on The Effective Altruism Forum.Kelsey Piper from Vox's Future Perfect very recently released an interview (made through Twitter DMs) with Sam Bankman-Fried. The interview goes in depth into the events surrounding FTX and Alameda Research.As we messaged, I was trying to make sense of what, behind the PR and the charitable donations and the lobbying, Bankman-Fried actually believes about what’s right and what’s wrong — and especially the ethics of what he did and the industry he worked in. Looming over our whole conversation was the fact that people who trusted him have lost their savings, and that he’s done incalculable damage to everything he proclaimed only a few weeks ago to care about. The grief and pain he has caused is immense, and I came away from our conversation appalled by much of what he said. But if these mistakes haunted him, he largely didn’t show it.The interview gives a much-awaited outlet into SBF's thinking, specifically in relation to prior questions in the community regarding whether SBF was practicing some form of naive consequentialism or whether the events surround the crisis largely emerged from incompetence.During the interview, Kelsey asked explicitly about previous statements by SBF agreeing with the existence of strong moral boundaries to maximizing good. His answers seem to suggest he had intentionally misrepresented his views on the issue:This seems to give some credit to the theory that SBF could have been acting like a naive utilitarian, choosing to engage in morally objectionable behavior to maximize his positive impact, while explicitly misrepresenting his views to others.However, Kelsey also asked directly regarding the lending out of customer deposits alongside Alameda Research:All of his claims are at least consistent with the view of SBF acting like an incompetent investor. FTX and Alameda Research seems to have had serious governance and accounting problems, and SBF seems to have taken several decisions which to him sounded individually reasonable, all based on bad information. He repeatedly doubled down, instead of cutting his losses.I'm still not sure what to take out of this interview, especially because Sam seems, at best, somewhat incoherent regarding his moral views and previous mistakes. This might have to do with his emotional state at the time of the interview, or even be a sign that he's blatantly lying, but I still think there is a lot of stuff to update from.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kelsey Piper's recent interview of SBF, published by Agustín Covarrubias on November 16, 2022 on The Effective Altruism Forum.Kelsey Piper from Vox's Future Perfect very recently released an interview (made through Twitter DMs) with Sam Bankman-Fried. The interview goes in depth into the events surrounding FTX and Alameda Research.As we messaged, I was trying to make sense of what, behind the PR and the charitable donations and the lobbying, Bankman-Fried actually believes about what’s right and what’s wrong — and especially the ethics of what he did and the industry he worked in. Looming over our whole conversation was the fact that people who trusted him have lost their savings, and that he’s done incalculable damage to everything he proclaimed only a few weeks ago to care about. The grief and pain he has caused is immense, and I came away from our conversation appalled by much of what he said. But if these mistakes haunted him, he largely didn’t show it.The interview gives a much-awaited outlet into SBF's thinking, specifically in relation to prior questions in the community regarding whether SBF was practicing some form of naive consequentialism or whether the events surround the crisis largely emerged from incompetence.During the interview, Kelsey asked explicitly about previous statements by SBF agreeing with the existence of strong moral boundaries to maximizing good. His answers seem to suggest he had intentionally misrepresented his views on the issue:This seems to give some credit to the theory that SBF could have been acting like a naive utilitarian, choosing to engage in morally objectionable behavior to maximize his positive impact, while explicitly misrepresenting his views to others.However, Kelsey also asked directly regarding the lending out of customer deposits alongside Alameda Research:All of his claims are at least consistent with the view of SBF acting like an incompetent investor. FTX and Alameda Research seems to have had serious governance and accounting problems, and SBF seems to have taken several decisions which to him sounded individually reasonable, all based on bad information. He repeatedly doubled down, instead of cutting his losses.I'm still not sure what to take out of this interview, especially because Sam seems, at best, somewhat incoherent regarding his moral views and previous mistakes. This might have to do with his emotional state at the time of the interview, or even be a sign that he's blatantly lying, but I still think there is a lot of stuff to update from.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Agustín Covarrubias https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:30 None full 3781
RPTPo8eHTnruoFyRH_EA EA - Some important questions for the EA Leadership by Gideon Futerman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some important questions for the EA Leadership, published by Gideon Futerman on November 16, 2022 on The Effective Altruism Forum.The recent FTX scandal has, I think, caused a major dent in the confidence many in the EA Community have in our leadership. It seems to me increasingly less obvious that the control of a lot of EA by a narrow group of funders and thought leaders is the best way for this community full of smart and passionate people to do good in the world. The assumption I had is we defer a lot of power, both intellectual, social and financial, to a small group of broadly unaccountable, non-transparent people on the assumption they are uniquely good at making decisions, noticing risks to the EA enterprise and combatting them, and that this unique competence is what justifies the power structures we have in EA. A series of failure by the community this year, including the Carrick Flynn campaign and now the FTX scandal has shattered my confidence in this group.I really think EA is amazing, and I am proud to be on the committee of EA Oxford (this represent my own views), having been a summer research fellow at CERI and having spoken at EAGx Rotterdam; my confidence in the EA leadership, however, is exceptionally low, and I think having an answer to some of these questions would be very useful.An aside: maybe I’m wrong about power structures in EA being unaccountable, centralised and non-transparent. If so, the fact it feels like that is also a sign something is going wrong.Thus, I have a number of questions for the “leadership group” about how decisions are made in EA and rationale for these. This list is neither exhaustive nor meant as an attack; there possibly are innocuous answers to many of these questions.Who is invited to the coordination forum and who attends? What sort of decisions are made? How does the coordination forum impact the direction the community moves in? Who decides who goes to the coordination forum? How? What's the rationale for keeping the attendees of the coordination forum secret (or is it not purposeful)?Which senior decision makers in EA played a part in the decision to make the Carrick Flynn campaign happen? Did any express the desire for it not to? Who signed off on the decision to make the campaign manager someone with no political experience?Why did Will MacAskill introduce Sam Bankman-Fried to Elon Musk with the intention of getting SBF to help Elon buy twitter? What was the rationale that this would have been a cost effective use of $8-15 Billion? Who else was consulted on this?Why did Will MacAskill choose not to take on board any of the suggestions of Zoe Cremer that she set out when she met with him?Will MacAskill has expressed public discomfort with the degree of hero-worship towards him. What steps has he taken to reduce this? What plans have decision makers tried to enact to reduce the amount of hero worship in EA?The EA community prides itself on being an open forum for discussion without fear of reprisal for disagreement. A very large number of people in the community however do not feel it is, and feel pressure to conform and not to express their disagreement with the community, with senior leaders or even with lower level community builders.Has there been discussions within the community health team with how to deal with this? What approaches are they taking community wide rather than just dealing with ad hoc incidents?A number of people have expressed suspicion or worry that they have been rejected from grants because of publicly expressing disagreements with EA. Has this ever been part of the rationale for rejecting someone from a grant?FTX Future Fund decided to fund me on a project working on SRM and GCR, but refused to publicise it on their website. How many other projects were funded but not public...]]>
Gideon Futerman https://forum.effectivealtruism.org/posts/RPTPo8eHTnruoFyRH/some-important-questions-for-the-ea-leadership Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some important questions for the EA Leadership, published by Gideon Futerman on November 16, 2022 on The Effective Altruism Forum.The recent FTX scandal has, I think, caused a major dent in the confidence many in the EA Community have in our leadership. It seems to me increasingly less obvious that the control of a lot of EA by a narrow group of funders and thought leaders is the best way for this community full of smart and passionate people to do good in the world. The assumption I had is we defer a lot of power, both intellectual, social and financial, to a small group of broadly unaccountable, non-transparent people on the assumption they are uniquely good at making decisions, noticing risks to the EA enterprise and combatting them, and that this unique competence is what justifies the power structures we have in EA. A series of failure by the community this year, including the Carrick Flynn campaign and now the FTX scandal has shattered my confidence in this group.I really think EA is amazing, and I am proud to be on the committee of EA Oxford (this represent my own views), having been a summer research fellow at CERI and having spoken at EAGx Rotterdam; my confidence in the EA leadership, however, is exceptionally low, and I think having an answer to some of these questions would be very useful.An aside: maybe I’m wrong about power structures in EA being unaccountable, centralised and non-transparent. If so, the fact it feels like that is also a sign something is going wrong.Thus, I have a number of questions for the “leadership group” about how decisions are made in EA and rationale for these. This list is neither exhaustive nor meant as an attack; there possibly are innocuous answers to many of these questions.Who is invited to the coordination forum and who attends? What sort of decisions are made? How does the coordination forum impact the direction the community moves in? Who decides who goes to the coordination forum? How? What's the rationale for keeping the attendees of the coordination forum secret (or is it not purposeful)?Which senior decision makers in EA played a part in the decision to make the Carrick Flynn campaign happen? Did any express the desire for it not to? Who signed off on the decision to make the campaign manager someone with no political experience?Why did Will MacAskill introduce Sam Bankman-Fried to Elon Musk with the intention of getting SBF to help Elon buy twitter? What was the rationale that this would have been a cost effective use of $8-15 Billion? Who else was consulted on this?Why did Will MacAskill choose not to take on board any of the suggestions of Zoe Cremer that she set out when she met with him?Will MacAskill has expressed public discomfort with the degree of hero-worship towards him. What steps has he taken to reduce this? What plans have decision makers tried to enact to reduce the amount of hero worship in EA?The EA community prides itself on being an open forum for discussion without fear of reprisal for disagreement. A very large number of people in the community however do not feel it is, and feel pressure to conform and not to express their disagreement with the community, with senior leaders or even with lower level community builders.Has there been discussions within the community health team with how to deal with this? What approaches are they taking community wide rather than just dealing with ad hoc incidents?A number of people have expressed suspicion or worry that they have been rejected from grants because of publicly expressing disagreements with EA. Has this ever been part of the rationale for rejecting someone from a grant?FTX Future Fund decided to fund me on a project working on SRM and GCR, but refused to publicise it on their website. How many other projects were funded but not public...]]>
Wed, 16 Nov 2022 17:45:47 +0000 EA - Some important questions for the EA Leadership by Gideon Futerman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some important questions for the EA Leadership, published by Gideon Futerman on November 16, 2022 on The Effective Altruism Forum.The recent FTX scandal has, I think, caused a major dent in the confidence many in the EA Community have in our leadership. It seems to me increasingly less obvious that the control of a lot of EA by a narrow group of funders and thought leaders is the best way for this community full of smart and passionate people to do good in the world. The assumption I had is we defer a lot of power, both intellectual, social and financial, to a small group of broadly unaccountable, non-transparent people on the assumption they are uniquely good at making decisions, noticing risks to the EA enterprise and combatting them, and that this unique competence is what justifies the power structures we have in EA. A series of failure by the community this year, including the Carrick Flynn campaign and now the FTX scandal has shattered my confidence in this group.I really think EA is amazing, and I am proud to be on the committee of EA Oxford (this represent my own views), having been a summer research fellow at CERI and having spoken at EAGx Rotterdam; my confidence in the EA leadership, however, is exceptionally low, and I think having an answer to some of these questions would be very useful.An aside: maybe I’m wrong about power structures in EA being unaccountable, centralised and non-transparent. If so, the fact it feels like that is also a sign something is going wrong.Thus, I have a number of questions for the “leadership group” about how decisions are made in EA and rationale for these. This list is neither exhaustive nor meant as an attack; there possibly are innocuous answers to many of these questions.Who is invited to the coordination forum and who attends? What sort of decisions are made? How does the coordination forum impact the direction the community moves in? Who decides who goes to the coordination forum? How? What's the rationale for keeping the attendees of the coordination forum secret (or is it not purposeful)?Which senior decision makers in EA played a part in the decision to make the Carrick Flynn campaign happen? Did any express the desire for it not to? Who signed off on the decision to make the campaign manager someone with no political experience?Why did Will MacAskill introduce Sam Bankman-Fried to Elon Musk with the intention of getting SBF to help Elon buy twitter? What was the rationale that this would have been a cost effective use of $8-15 Billion? Who else was consulted on this?Why did Will MacAskill choose not to take on board any of the suggestions of Zoe Cremer that she set out when she met with him?Will MacAskill has expressed public discomfort with the degree of hero-worship towards him. What steps has he taken to reduce this? What plans have decision makers tried to enact to reduce the amount of hero worship in EA?The EA community prides itself on being an open forum for discussion without fear of reprisal for disagreement. A very large number of people in the community however do not feel it is, and feel pressure to conform and not to express their disagreement with the community, with senior leaders or even with lower level community builders.Has there been discussions within the community health team with how to deal with this? What approaches are they taking community wide rather than just dealing with ad hoc incidents?A number of people have expressed suspicion or worry that they have been rejected from grants because of publicly expressing disagreements with EA. Has this ever been part of the rationale for rejecting someone from a grant?FTX Future Fund decided to fund me on a project working on SRM and GCR, but refused to publicise it on their website. How many other projects were funded but not public...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some important questions for the EA Leadership, published by Gideon Futerman on November 16, 2022 on The Effective Altruism Forum.The recent FTX scandal has, I think, caused a major dent in the confidence many in the EA Community have in our leadership. It seems to me increasingly less obvious that the control of a lot of EA by a narrow group of funders and thought leaders is the best way for this community full of smart and passionate people to do good in the world. The assumption I had is we defer a lot of power, both intellectual, social and financial, to a small group of broadly unaccountable, non-transparent people on the assumption they are uniquely good at making decisions, noticing risks to the EA enterprise and combatting them, and that this unique competence is what justifies the power structures we have in EA. A series of failure by the community this year, including the Carrick Flynn campaign and now the FTX scandal has shattered my confidence in this group.I really think EA is amazing, and I am proud to be on the committee of EA Oxford (this represent my own views), having been a summer research fellow at CERI and having spoken at EAGx Rotterdam; my confidence in the EA leadership, however, is exceptionally low, and I think having an answer to some of these questions would be very useful.An aside: maybe I’m wrong about power structures in EA being unaccountable, centralised and non-transparent. If so, the fact it feels like that is also a sign something is going wrong.Thus, I have a number of questions for the “leadership group” about how decisions are made in EA and rationale for these. This list is neither exhaustive nor meant as an attack; there possibly are innocuous answers to many of these questions.Who is invited to the coordination forum and who attends? What sort of decisions are made? How does the coordination forum impact the direction the community moves in? Who decides who goes to the coordination forum? How? What's the rationale for keeping the attendees of the coordination forum secret (or is it not purposeful)?Which senior decision makers in EA played a part in the decision to make the Carrick Flynn campaign happen? Did any express the desire for it not to? Who signed off on the decision to make the campaign manager someone with no political experience?Why did Will MacAskill introduce Sam Bankman-Fried to Elon Musk with the intention of getting SBF to help Elon buy twitter? What was the rationale that this would have been a cost effective use of $8-15 Billion? Who else was consulted on this?Why did Will MacAskill choose not to take on board any of the suggestions of Zoe Cremer that she set out when she met with him?Will MacAskill has expressed public discomfort with the degree of hero-worship towards him. What steps has he taken to reduce this? What plans have decision makers tried to enact to reduce the amount of hero worship in EA?The EA community prides itself on being an open forum for discussion without fear of reprisal for disagreement. A very large number of people in the community however do not feel it is, and feel pressure to conform and not to express their disagreement with the community, with senior leaders or even with lower level community builders.Has there been discussions within the community health team with how to deal with this? What approaches are they taking community wide rather than just dealing with ad hoc incidents?A number of people have expressed suspicion or worry that they have been rejected from grants because of publicly expressing disagreements with EA. Has this ever been part of the rationale for rejecting someone from a grant?FTX Future Fund decided to fund me on a project working on SRM and GCR, but refused to publicise it on their website. How many other projects were funded but not public...]]>
Gideon Futerman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:06 None full 3782
rJ4cG9xcKPsqJmLTv_EA EA - Deontology is not the solution by Peter McLaughlin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deontology is not the solution, published by Peter McLaughlin on November 16, 2022 on The Effective Altruism Forum.This is a lightly-edited extract from a longer post I have been writing about the problems Effective Altruism has with power. That post will likely be uploaded soon, but I wanted to upload this extract first since I think it's especially relevant to the kind of reflection that is currently happening in this community, and because I think it's more polished than the rest of my work-in-progress. Thank you to Julian Hazell and Keir Bradwell for reading and commenting on an earlier draft.In the wake of revelations about FTX and Sam Bankman-Fried's behaviour, Effective Altruists have begun reflecting on how they might respond to this situation, and if the movement needs to reform itself before 'next time'. And I have begun to notice a pattern emerging: people saying that this fuck-up is evidence of too little 'deontology' in Effective Altruism. As this diagnosis goes, Bankman-Fried's behaviour was partly (though not entirely) the result of attitudes that are unfortunately general among Effective Altruists, such as a too-easy willingness to violate side-constraints, too little concern with honesty and transparency, and sometimes a lack of integrity. This thread by Dustin Moskovitz and this post by Julian Hazell both exemplify the conclusion that EA needs to be a bit more 'deontological'.I’m sympathetic here: I’m an ethics guy by background, and I think it’s an important and insightful field. I understand that EA and longtermism emerged out of moral philosophy, that some of the movement’s most prominent leaders are analytic ethicists in their day jobs, and that the language of the movement is (in large part) the language of analytic ethics. So it makes sense that EAs reach for ethical distinctions and ideas when trying to think about a question, such as ‘what went wrong with FTX?’. But I think that it is completely the wrong way to think about cases where people abuse their power, like Bankman-Fried abused his.The problem with the abuse of power is not simply that having power lets you do things that fuck over other people (in potentially self-defeating ways). You will always have opportunities to fuck people over for influence and leverage, and it is always possible, at least in principle, that you will get too carried away by your own vision and take these opportunities (even if they are self-defeating). This applies no matter if you are the President of the United States or if you’re just asking your friend for £20; it applies even if you are purely altruistically motivated.However, morally thoughtful people tend to have good ‘intuitions’ about everyday cases: it is these that common-sense morality was designed to handle. We know that it’s wrong to take someone else’s money and not pay it back; we know that it’s typically wrong to lie solely for our own benefit; we understand that it’s good to be trustworthy and honest. Indeed, in everyday contexts certain options are just entirely unthinkable. For example, a surgeon won’t typically even ask themselves ‘should I cut up this patient and redistribute their organs to maximise utility?’—the idea to do such a thing would never even enter their mind—and you would probably be a bit uneasy with a surgeon who had indeed asked himself this question, even if he had concluded that he shouldn’t cut you up.This kind of everyday moral reasoning is exactly what is captured by the kinds of deontological ‘side constraints’ most often discussed in the Effective Altruism community. As this post makes wonderfully clear, the reason why even consequentialists should be concerned with side-constraints is because you can predict ahead of time that you will face certain kinds of situations, and you know that it would be better ...]]>
Peter McLaughlin https://forum.effectivealtruism.org/posts/rJ4cG9xcKPsqJmLTv/deontology-is-not-the-solution Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deontology is not the solution, published by Peter McLaughlin on November 16, 2022 on The Effective Altruism Forum.This is a lightly-edited extract from a longer post I have been writing about the problems Effective Altruism has with power. That post will likely be uploaded soon, but I wanted to upload this extract first since I think it's especially relevant to the kind of reflection that is currently happening in this community, and because I think it's more polished than the rest of my work-in-progress. Thank you to Julian Hazell and Keir Bradwell for reading and commenting on an earlier draft.In the wake of revelations about FTX and Sam Bankman-Fried's behaviour, Effective Altruists have begun reflecting on how they might respond to this situation, and if the movement needs to reform itself before 'next time'. And I have begun to notice a pattern emerging: people saying that this fuck-up is evidence of too little 'deontology' in Effective Altruism. As this diagnosis goes, Bankman-Fried's behaviour was partly (though not entirely) the result of attitudes that are unfortunately general among Effective Altruists, such as a too-easy willingness to violate side-constraints, too little concern with honesty and transparency, and sometimes a lack of integrity. This thread by Dustin Moskovitz and this post by Julian Hazell both exemplify the conclusion that EA needs to be a bit more 'deontological'.I’m sympathetic here: I’m an ethics guy by background, and I think it’s an important and insightful field. I understand that EA and longtermism emerged out of moral philosophy, that some of the movement’s most prominent leaders are analytic ethicists in their day jobs, and that the language of the movement is (in large part) the language of analytic ethics. So it makes sense that EAs reach for ethical distinctions and ideas when trying to think about a question, such as ‘what went wrong with FTX?’. But I think that it is completely the wrong way to think about cases where people abuse their power, like Bankman-Fried abused his.The problem with the abuse of power is not simply that having power lets you do things that fuck over other people (in potentially self-defeating ways). You will always have opportunities to fuck people over for influence and leverage, and it is always possible, at least in principle, that you will get too carried away by your own vision and take these opportunities (even if they are self-defeating). This applies no matter if you are the President of the United States or if you’re just asking your friend for £20; it applies even if you are purely altruistically motivated.However, morally thoughtful people tend to have good ‘intuitions’ about everyday cases: it is these that common-sense morality was designed to handle. We know that it’s wrong to take someone else’s money and not pay it back; we know that it’s typically wrong to lie solely for our own benefit; we understand that it’s good to be trustworthy and honest. Indeed, in everyday contexts certain options are just entirely unthinkable. For example, a surgeon won’t typically even ask themselves ‘should I cut up this patient and redistribute their organs to maximise utility?’—the idea to do such a thing would never even enter their mind—and you would probably be a bit uneasy with a surgeon who had indeed asked himself this question, even if he had concluded that he shouldn’t cut you up.This kind of everyday moral reasoning is exactly what is captured by the kinds of deontological ‘side constraints’ most often discussed in the Effective Altruism community. As this post makes wonderfully clear, the reason why even consequentialists should be concerned with side-constraints is because you can predict ahead of time that you will face certain kinds of situations, and you know that it would be better ...]]>
Wed, 16 Nov 2022 15:34:05 +0000 EA - Deontology is not the solution by Peter McLaughlin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deontology is not the solution, published by Peter McLaughlin on November 16, 2022 on The Effective Altruism Forum.This is a lightly-edited extract from a longer post I have been writing about the problems Effective Altruism has with power. That post will likely be uploaded soon, but I wanted to upload this extract first since I think it's especially relevant to the kind of reflection that is currently happening in this community, and because I think it's more polished than the rest of my work-in-progress. Thank you to Julian Hazell and Keir Bradwell for reading and commenting on an earlier draft.In the wake of revelations about FTX and Sam Bankman-Fried's behaviour, Effective Altruists have begun reflecting on how they might respond to this situation, and if the movement needs to reform itself before 'next time'. And I have begun to notice a pattern emerging: people saying that this fuck-up is evidence of too little 'deontology' in Effective Altruism. As this diagnosis goes, Bankman-Fried's behaviour was partly (though not entirely) the result of attitudes that are unfortunately general among Effective Altruists, such as a too-easy willingness to violate side-constraints, too little concern with honesty and transparency, and sometimes a lack of integrity. This thread by Dustin Moskovitz and this post by Julian Hazell both exemplify the conclusion that EA needs to be a bit more 'deontological'.I’m sympathetic here: I’m an ethics guy by background, and I think it’s an important and insightful field. I understand that EA and longtermism emerged out of moral philosophy, that some of the movement’s most prominent leaders are analytic ethicists in their day jobs, and that the language of the movement is (in large part) the language of analytic ethics. So it makes sense that EAs reach for ethical distinctions and ideas when trying to think about a question, such as ‘what went wrong with FTX?’. But I think that it is completely the wrong way to think about cases where people abuse their power, like Bankman-Fried abused his.The problem with the abuse of power is not simply that having power lets you do things that fuck over other people (in potentially self-defeating ways). You will always have opportunities to fuck people over for influence and leverage, and it is always possible, at least in principle, that you will get too carried away by your own vision and take these opportunities (even if they are self-defeating). This applies no matter if you are the President of the United States or if you’re just asking your friend for £20; it applies even if you are purely altruistically motivated.However, morally thoughtful people tend to have good ‘intuitions’ about everyday cases: it is these that common-sense morality was designed to handle. We know that it’s wrong to take someone else’s money and not pay it back; we know that it’s typically wrong to lie solely for our own benefit; we understand that it’s good to be trustworthy and honest. Indeed, in everyday contexts certain options are just entirely unthinkable. For example, a surgeon won’t typically even ask themselves ‘should I cut up this patient and redistribute their organs to maximise utility?’—the idea to do such a thing would never even enter their mind—and you would probably be a bit uneasy with a surgeon who had indeed asked himself this question, even if he had concluded that he shouldn’t cut you up.This kind of everyday moral reasoning is exactly what is captured by the kinds of deontological ‘side constraints’ most often discussed in the Effective Altruism community. As this post makes wonderfully clear, the reason why even consequentialists should be concerned with side-constraints is because you can predict ahead of time that you will face certain kinds of situations, and you know that it would be better ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deontology is not the solution, published by Peter McLaughlin on November 16, 2022 on The Effective Altruism Forum.This is a lightly-edited extract from a longer post I have been writing about the problems Effective Altruism has with power. That post will likely be uploaded soon, but I wanted to upload this extract first since I think it's especially relevant to the kind of reflection that is currently happening in this community, and because I think it's more polished than the rest of my work-in-progress. Thank you to Julian Hazell and Keir Bradwell for reading and commenting on an earlier draft.In the wake of revelations about FTX and Sam Bankman-Fried's behaviour, Effective Altruists have begun reflecting on how they might respond to this situation, and if the movement needs to reform itself before 'next time'. And I have begun to notice a pattern emerging: people saying that this fuck-up is evidence of too little 'deontology' in Effective Altruism. As this diagnosis goes, Bankman-Fried's behaviour was partly (though not entirely) the result of attitudes that are unfortunately general among Effective Altruists, such as a too-easy willingness to violate side-constraints, too little concern with honesty and transparency, and sometimes a lack of integrity. This thread by Dustin Moskovitz and this post by Julian Hazell both exemplify the conclusion that EA needs to be a bit more 'deontological'.I’m sympathetic here: I’m an ethics guy by background, and I think it’s an important and insightful field. I understand that EA and longtermism emerged out of moral philosophy, that some of the movement’s most prominent leaders are analytic ethicists in their day jobs, and that the language of the movement is (in large part) the language of analytic ethics. So it makes sense that EAs reach for ethical distinctions and ideas when trying to think about a question, such as ‘what went wrong with FTX?’. But I think that it is completely the wrong way to think about cases where people abuse their power, like Bankman-Fried abused his.The problem with the abuse of power is not simply that having power lets you do things that fuck over other people (in potentially self-defeating ways). You will always have opportunities to fuck people over for influence and leverage, and it is always possible, at least in principle, that you will get too carried away by your own vision and take these opportunities (even if they are self-defeating). This applies no matter if you are the President of the United States or if you’re just asking your friend for £20; it applies even if you are purely altruistically motivated.However, morally thoughtful people tend to have good ‘intuitions’ about everyday cases: it is these that common-sense morality was designed to handle. We know that it’s wrong to take someone else’s money and not pay it back; we know that it’s typically wrong to lie solely for our own benefit; we understand that it’s good to be trustworthy and honest. Indeed, in everyday contexts certain options are just entirely unthinkable. For example, a surgeon won’t typically even ask themselves ‘should I cut up this patient and redistribute their organs to maximise utility?’—the idea to do such a thing would never even enter their mind—and you would probably be a bit uneasy with a surgeon who had indeed asked himself this question, even if he had concluded that he shouldn’t cut you up.This kind of everyday moral reasoning is exactly what is captured by the kinds of deontological ‘side constraints’ most often discussed in the Effective Altruism community. As this post makes wonderfully clear, the reason why even consequentialists should be concerned with side-constraints is because you can predict ahead of time that you will face certain kinds of situations, and you know that it would be better ...]]>
Peter McLaughlin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:52 None full 3785
gWSa7e2CS7KCu78D8_EA EA - Disagreement with bio anchors that lead to shorter timelines by mariushobbhahn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Disagreement with bio anchors that lead to shorter timelines, published by mariushobbhahn on November 16, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
mariushobbhahn https://forum.effectivealtruism.org/posts/gWSa7e2CS7KCu78D8/disagreement-with-bio-anchors-that-lead-to-shorter-timelines Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Disagreement with bio anchors that lead to shorter timelines, published by mariushobbhahn on November 16, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 16 Nov 2022 15:26:06 +0000 EA - Disagreement with bio anchors that lead to shorter timelines by mariushobbhahn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Disagreement with bio anchors that lead to shorter timelines, published by mariushobbhahn on November 16, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Disagreement with bio anchors that lead to shorter timelines, published by mariushobbhahn on November 16, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
mariushobbhahn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 3784
7o8foiTKYdRrgtzyE_EA EA - If Professional Investors Missed This... by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If Professional Investors Missed This..., published by Jeff Kaufman on November 16, 2022 on The Effective Altruism Forum.One of the largest cryptocurrency exchanges,FTX, recently imploded afterapparently transferring customer funds to cover losses at their affiliated hedge fund. Matt Levine hasgood coverage, especially his recent post ontheir balance sheet. Normally a crypto exchange going bust isn't something I'd pay that much attention to, aside from sympathy for its customers, but itsFutureFund was one of the largest funders in effective altruism (EA).One reaction I've seen in several places, mostly outside EA, is something like, "this was obviously a fraud from the start, look at all the red flags, how could EAs have been so credulous?" I think this is mostly wrong: the red flags they cite (size of FTX's claimed profits, located in the Bahamas, involved in crypto, relatively young founders, etc.) are not actually strong indicators here. Cause for scrutiny, sure, but short of anything obviously wrong.The opposite reaction, which I've also seen in several places, mostly within EA, is more like, "how could we have caught this when serious insitutional investors with hundreds of millions of dollars on the line missed it?" FTX had raised about$2B in external funding, including ~$200M from Sequoia, ~$100M from SoftBank, and ~$100M from the Ontario Teacher's Pension Plan. I think this argument does have some truth in it: this is part of whyI'm ok dismissing the "obvious fraud" view of the previous paragraph. But I also think this lets EA off too easily.The issue is, we had a lot more on the line than their investors did.Their worst case was that their investments would go to zero and they would have mild public embarrassment at having funded something that turned out so poorly. A strategy of making a lot of risky bets can do well, especially if spending more time investigating each opportunity trades off against making more investments or means that they sometimes lose the best opportunities to competitor funds. Half of their investments could fail and they could still come out ahead if the other half did well enough. Sequoia wrote after, "We are in the business of taking risk. Some investments will surprise to the upside, and some will surprise to the downside."This was not our situation:The money FTX planned to donate represented a far greater portion of the EA "portfolio" than FTX did for these institutional investors, The FTX Future Fund was probably the biggest source of EA funding after OpenPhilanthropy, and was ramping up very quickly.This bankruptcy means that many organizations now suddenly have much less money than they expected: the FTX Future Fund's committed grants won't be paid out, and the moral and legal status of past grants is unclear. [1] Institutional investors were not relying on the continued healthy operation of FTX or any other single company they invested in, and were thinking of the venture capital segment of their portfolios as a long-term investment.FTX and their affiliated hedge fund, Alameda Research, were founded and run by people from the effective altruism community with the explicit goal of earning money to donate. Their founder, SamBankman-Fried, was profiled by 80,000 Hours and listed on their homepage as an example earning to give, back when he was a first-year trader at JaneStreet, and he was later on the board of the Centre forEffective Altruism's US branch. FTX, and Bankman-Fried in particular, represented in part an investment of reputation, and unlike typical financial investments reputational investments can go negative.These other investors did have much more experience evaluating large startups than most EAs, but we have people in the community who do this kind of evaluation professionally, and it would ...]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/7o8foiTKYdRrgtzyE/if-professional-investors-missed-this Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If Professional Investors Missed This..., published by Jeff Kaufman on November 16, 2022 on The Effective Altruism Forum.One of the largest cryptocurrency exchanges,FTX, recently imploded afterapparently transferring customer funds to cover losses at their affiliated hedge fund. Matt Levine hasgood coverage, especially his recent post ontheir balance sheet. Normally a crypto exchange going bust isn't something I'd pay that much attention to, aside from sympathy for its customers, but itsFutureFund was one of the largest funders in effective altruism (EA).One reaction I've seen in several places, mostly outside EA, is something like, "this was obviously a fraud from the start, look at all the red flags, how could EAs have been so credulous?" I think this is mostly wrong: the red flags they cite (size of FTX's claimed profits, located in the Bahamas, involved in crypto, relatively young founders, etc.) are not actually strong indicators here. Cause for scrutiny, sure, but short of anything obviously wrong.The opposite reaction, which I've also seen in several places, mostly within EA, is more like, "how could we have caught this when serious insitutional investors with hundreds of millions of dollars on the line missed it?" FTX had raised about$2B in external funding, including ~$200M from Sequoia, ~$100M from SoftBank, and ~$100M from the Ontario Teacher's Pension Plan. I think this argument does have some truth in it: this is part of whyI'm ok dismissing the "obvious fraud" view of the previous paragraph. But I also think this lets EA off too easily.The issue is, we had a lot more on the line than their investors did.Their worst case was that their investments would go to zero and they would have mild public embarrassment at having funded something that turned out so poorly. A strategy of making a lot of risky bets can do well, especially if spending more time investigating each opportunity trades off against making more investments or means that they sometimes lose the best opportunities to competitor funds. Half of their investments could fail and they could still come out ahead if the other half did well enough. Sequoia wrote after, "We are in the business of taking risk. Some investments will surprise to the upside, and some will surprise to the downside."This was not our situation:The money FTX planned to donate represented a far greater portion of the EA "portfolio" than FTX did for these institutional investors, The FTX Future Fund was probably the biggest source of EA funding after OpenPhilanthropy, and was ramping up very quickly.This bankruptcy means that many organizations now suddenly have much less money than they expected: the FTX Future Fund's committed grants won't be paid out, and the moral and legal status of past grants is unclear. [1] Institutional investors were not relying on the continued healthy operation of FTX or any other single company they invested in, and were thinking of the venture capital segment of their portfolios as a long-term investment.FTX and their affiliated hedge fund, Alameda Research, were founded and run by people from the effective altruism community with the explicit goal of earning money to donate. Their founder, SamBankman-Fried, was profiled by 80,000 Hours and listed on their homepage as an example earning to give, back when he was a first-year trader at JaneStreet, and he was later on the board of the Centre forEffective Altruism's US branch. FTX, and Bankman-Fried in particular, represented in part an investment of reputation, and unlike typical financial investments reputational investments can go negative.These other investors did have much more experience evaluating large startups than most EAs, but we have people in the community who do this kind of evaluation professionally, and it would ...]]>
Wed, 16 Nov 2022 15:22:04 +0000 EA - If Professional Investors Missed This... by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If Professional Investors Missed This..., published by Jeff Kaufman on November 16, 2022 on The Effective Altruism Forum.One of the largest cryptocurrency exchanges,FTX, recently imploded afterapparently transferring customer funds to cover losses at their affiliated hedge fund. Matt Levine hasgood coverage, especially his recent post ontheir balance sheet. Normally a crypto exchange going bust isn't something I'd pay that much attention to, aside from sympathy for its customers, but itsFutureFund was one of the largest funders in effective altruism (EA).One reaction I've seen in several places, mostly outside EA, is something like, "this was obviously a fraud from the start, look at all the red flags, how could EAs have been so credulous?" I think this is mostly wrong: the red flags they cite (size of FTX's claimed profits, located in the Bahamas, involved in crypto, relatively young founders, etc.) are not actually strong indicators here. Cause for scrutiny, sure, but short of anything obviously wrong.The opposite reaction, which I've also seen in several places, mostly within EA, is more like, "how could we have caught this when serious insitutional investors with hundreds of millions of dollars on the line missed it?" FTX had raised about$2B in external funding, including ~$200M from Sequoia, ~$100M from SoftBank, and ~$100M from the Ontario Teacher's Pension Plan. I think this argument does have some truth in it: this is part of whyI'm ok dismissing the "obvious fraud" view of the previous paragraph. But I also think this lets EA off too easily.The issue is, we had a lot more on the line than their investors did.Their worst case was that their investments would go to zero and they would have mild public embarrassment at having funded something that turned out so poorly. A strategy of making a lot of risky bets can do well, especially if spending more time investigating each opportunity trades off against making more investments or means that they sometimes lose the best opportunities to competitor funds. Half of their investments could fail and they could still come out ahead if the other half did well enough. Sequoia wrote after, "We are in the business of taking risk. Some investments will surprise to the upside, and some will surprise to the downside."This was not our situation:The money FTX planned to donate represented a far greater portion of the EA "portfolio" than FTX did for these institutional investors, The FTX Future Fund was probably the biggest source of EA funding after OpenPhilanthropy, and was ramping up very quickly.This bankruptcy means that many organizations now suddenly have much less money than they expected: the FTX Future Fund's committed grants won't be paid out, and the moral and legal status of past grants is unclear. [1] Institutional investors were not relying on the continued healthy operation of FTX or any other single company they invested in, and were thinking of the venture capital segment of their portfolios as a long-term investment.FTX and their affiliated hedge fund, Alameda Research, were founded and run by people from the effective altruism community with the explicit goal of earning money to donate. Their founder, SamBankman-Fried, was profiled by 80,000 Hours and listed on their homepage as an example earning to give, back when he was a first-year trader at JaneStreet, and he was later on the board of the Centre forEffective Altruism's US branch. FTX, and Bankman-Fried in particular, represented in part an investment of reputation, and unlike typical financial investments reputational investments can go negative.These other investors did have much more experience evaluating large startups than most EAs, but we have people in the community who do this kind of evaluation professionally, and it would ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If Professional Investors Missed This..., published by Jeff Kaufman on November 16, 2022 on The Effective Altruism Forum.One of the largest cryptocurrency exchanges,FTX, recently imploded afterapparently transferring customer funds to cover losses at their affiliated hedge fund. Matt Levine hasgood coverage, especially his recent post ontheir balance sheet. Normally a crypto exchange going bust isn't something I'd pay that much attention to, aside from sympathy for its customers, but itsFutureFund was one of the largest funders in effective altruism (EA).One reaction I've seen in several places, mostly outside EA, is something like, "this was obviously a fraud from the start, look at all the red flags, how could EAs have been so credulous?" I think this is mostly wrong: the red flags they cite (size of FTX's claimed profits, located in the Bahamas, involved in crypto, relatively young founders, etc.) are not actually strong indicators here. Cause for scrutiny, sure, but short of anything obviously wrong.The opposite reaction, which I've also seen in several places, mostly within EA, is more like, "how could we have caught this when serious insitutional investors with hundreds of millions of dollars on the line missed it?" FTX had raised about$2B in external funding, including ~$200M from Sequoia, ~$100M from SoftBank, and ~$100M from the Ontario Teacher's Pension Plan. I think this argument does have some truth in it: this is part of whyI'm ok dismissing the "obvious fraud" view of the previous paragraph. But I also think this lets EA off too easily.The issue is, we had a lot more on the line than their investors did.Their worst case was that their investments would go to zero and they would have mild public embarrassment at having funded something that turned out so poorly. A strategy of making a lot of risky bets can do well, especially if spending more time investigating each opportunity trades off against making more investments or means that they sometimes lose the best opportunities to competitor funds. Half of their investments could fail and they could still come out ahead if the other half did well enough. Sequoia wrote after, "We are in the business of taking risk. Some investments will surprise to the upside, and some will surprise to the downside."This was not our situation:The money FTX planned to donate represented a far greater portion of the EA "portfolio" than FTX did for these institutional investors, The FTX Future Fund was probably the biggest source of EA funding after OpenPhilanthropy, and was ramping up very quickly.This bankruptcy means that many organizations now suddenly have much less money than they expected: the FTX Future Fund's committed grants won't be paid out, and the moral and legal status of past grants is unclear. [1] Institutional investors were not relying on the continued healthy operation of FTX or any other single company they invested in, and were thinking of the venture capital segment of their portfolios as a long-term investment.FTX and their affiliated hedge fund, Alameda Research, were founded and run by people from the effective altruism community with the explicit goal of earning money to donate. Their founder, SamBankman-Fried, was profiled by 80,000 Hours and listed on their homepage as an example earning to give, back when he was a first-year trader at JaneStreet, and he was later on the board of the Centre forEffective Altruism's US branch. FTX, and Bankman-Fried in particular, represented in part an investment of reputation, and unlike typical financial investments reputational investments can go negative.These other investors did have much more experience evaluating large startups than most EAs, but we have people in the community who do this kind of evaluation professionally, and it would ...]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:48 None full 3783
W5bT8mxvXzkrKifTE_EA EA - Eirik Mofoss named "Young leader of the year" in Norway by Jorgen Ljones Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eirik Mofoss named "Young leader of the year" in Norway, published by Jorgen Ljones on November 16, 2022 on The Effective Altruism Forum.Written by Jørgen Ljønes, managing director of Gi Effektivt and co-founder of the Norwegian effective altruism movement.SummaryEirik Mofoss was recently named “Young leader of the year” by the largest business newspaper in Norway. He is the co-founder of the Norwegian effective altruism movement and—as a member of Giving What We Can—donates 20 percent of his income to organisations that, through documented effectiveness, make a positive difference in the world. The award provided lots of valuable attention and traffic to EA Norway and Gi Effektivt, our donation plattform. We wanted to share and celebrate this, not only as a recognition of Eirik, but also for all the Norwegian EAs who has made this possible.BackgroundDagens Næringsliv (Norwegian for "Today's Business"), commonly known as DN, is the third-largest daily newspaper in Norway. Every year, DN nominates 30 people under the age of 30 who, through good ideas, entrepreneurial or leadership skills, represent guiding figures for others, and want to make the world more sustainable. First, DN’s readership nominates a selection of people, then an independent jury picks the final 30, based on the UN's 17 sustainable development goals (SDGs). The jury, together with readers, then selects a single winner, among people whose accomplishments have yielded some kind of measurable impact in the realm of sustainable development.The nomination of Eirik as a candidate for the awardEirik Mofoss co-founded the first Norwegian branch of the Effective Altruism movement at NTNU in 2014, and has remained true to his ideals ever since. Eirik has committed to donating 20 percent of his income for the rest of his life, and has—through his nomination—imparted his mode of working and living on a readership of over 150 000 people. He has also widely publicised his altruistic ideals through his earlier work, as adviser to both the Conservative Party in Parliament, as well as adviser to Norad (Norwegian Agency for Development Cooperation). According to the jury:Eirik Mofoss examines our fundamental assumptions about what aid currently is, and our vision for what it could be; how one can—and should—contribute, as a private citizen.DN’s first interview with Eirik Mofoss quickly became their most read “30 under 30”-interview ever, and the newspaper’s most read piece in 2022.Eirik Mofoss is 29 years old, born in Tromsø, in the north of Norway, and has a master's degree in industrial economics and technology management from NTNU. Mofoss has previously worked as a political and financial adviser for the Conservative Party of Norway, and as a business development manager for Visma. Almost ten years ago, he became a member of Giving What We Can, and committed to donating 20 percent of his income for the rest of his life. Mofoss co-founded the first effective altruism group in Norway in 2014 while studying at NTNU, and has later co-founded the foundation Gi Effektivt that fundraises for charities recommended by GiveWell.The jury’s reasoning for naming Eirik "Young leader of the year"Eirik Mofoss shows us that our direct actions still yield impactful results during a time when many feel increasingly powerless. Mofoss builds bridges, and has skillfully integrated his personal and working lives. He seeks, through his activism and modes of thinking, to elevate the discourse around sustainability—away from individualistic critiques, and toward systemic ones.Mofoss’ generation has been raised in the global labour aristocracy. Despite this, some of them have recognized the extent to which their privileges and positions can be used for good. Mofoss challenges how we as individuals can contribute, and works to ...]]>
Jorgen Ljones https://forum.effectivealtruism.org/posts/W5bT8mxvXzkrKifTE/eirik-mofoss-named-young-leader-of-the-year-in-norway Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eirik Mofoss named "Young leader of the year" in Norway, published by Jorgen Ljones on November 16, 2022 on The Effective Altruism Forum.Written by Jørgen Ljønes, managing director of Gi Effektivt and co-founder of the Norwegian effective altruism movement.SummaryEirik Mofoss was recently named “Young leader of the year” by the largest business newspaper in Norway. He is the co-founder of the Norwegian effective altruism movement and—as a member of Giving What We Can—donates 20 percent of his income to organisations that, through documented effectiveness, make a positive difference in the world. The award provided lots of valuable attention and traffic to EA Norway and Gi Effektivt, our donation plattform. We wanted to share and celebrate this, not only as a recognition of Eirik, but also for all the Norwegian EAs who has made this possible.BackgroundDagens Næringsliv (Norwegian for "Today's Business"), commonly known as DN, is the third-largest daily newspaper in Norway. Every year, DN nominates 30 people under the age of 30 who, through good ideas, entrepreneurial or leadership skills, represent guiding figures for others, and want to make the world more sustainable. First, DN’s readership nominates a selection of people, then an independent jury picks the final 30, based on the UN's 17 sustainable development goals (SDGs). The jury, together with readers, then selects a single winner, among people whose accomplishments have yielded some kind of measurable impact in the realm of sustainable development.The nomination of Eirik as a candidate for the awardEirik Mofoss co-founded the first Norwegian branch of the Effective Altruism movement at NTNU in 2014, and has remained true to his ideals ever since. Eirik has committed to donating 20 percent of his income for the rest of his life, and has—through his nomination—imparted his mode of working and living on a readership of over 150 000 people. He has also widely publicised his altruistic ideals through his earlier work, as adviser to both the Conservative Party in Parliament, as well as adviser to Norad (Norwegian Agency for Development Cooperation). According to the jury:Eirik Mofoss examines our fundamental assumptions about what aid currently is, and our vision for what it could be; how one can—and should—contribute, as a private citizen.DN’s first interview with Eirik Mofoss quickly became their most read “30 under 30”-interview ever, and the newspaper’s most read piece in 2022.Eirik Mofoss is 29 years old, born in Tromsø, in the north of Norway, and has a master's degree in industrial economics and technology management from NTNU. Mofoss has previously worked as a political and financial adviser for the Conservative Party of Norway, and as a business development manager for Visma. Almost ten years ago, he became a member of Giving What We Can, and committed to donating 20 percent of his income for the rest of his life. Mofoss co-founded the first effective altruism group in Norway in 2014 while studying at NTNU, and has later co-founded the foundation Gi Effektivt that fundraises for charities recommended by GiveWell.The jury’s reasoning for naming Eirik "Young leader of the year"Eirik Mofoss shows us that our direct actions still yield impactful results during a time when many feel increasingly powerless. Mofoss builds bridges, and has skillfully integrated his personal and working lives. He seeks, through his activism and modes of thinking, to elevate the discourse around sustainability—away from individualistic critiques, and toward systemic ones.Mofoss’ generation has been raised in the global labour aristocracy. Despite this, some of them have recognized the extent to which their privileges and positions can be used for good. Mofoss challenges how we as individuals can contribute, and works to ...]]>
Wed, 16 Nov 2022 14:30:20 +0000 EA - Eirik Mofoss named "Young leader of the year" in Norway by Jorgen Ljones Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eirik Mofoss named "Young leader of the year" in Norway, published by Jorgen Ljones on November 16, 2022 on The Effective Altruism Forum.Written by Jørgen Ljønes, managing director of Gi Effektivt and co-founder of the Norwegian effective altruism movement.SummaryEirik Mofoss was recently named “Young leader of the year” by the largest business newspaper in Norway. He is the co-founder of the Norwegian effective altruism movement and—as a member of Giving What We Can—donates 20 percent of his income to organisations that, through documented effectiveness, make a positive difference in the world. The award provided lots of valuable attention and traffic to EA Norway and Gi Effektivt, our donation plattform. We wanted to share and celebrate this, not only as a recognition of Eirik, but also for all the Norwegian EAs who has made this possible.BackgroundDagens Næringsliv (Norwegian for "Today's Business"), commonly known as DN, is the third-largest daily newspaper in Norway. Every year, DN nominates 30 people under the age of 30 who, through good ideas, entrepreneurial or leadership skills, represent guiding figures for others, and want to make the world more sustainable. First, DN’s readership nominates a selection of people, then an independent jury picks the final 30, based on the UN's 17 sustainable development goals (SDGs). The jury, together with readers, then selects a single winner, among people whose accomplishments have yielded some kind of measurable impact in the realm of sustainable development.The nomination of Eirik as a candidate for the awardEirik Mofoss co-founded the first Norwegian branch of the Effective Altruism movement at NTNU in 2014, and has remained true to his ideals ever since. Eirik has committed to donating 20 percent of his income for the rest of his life, and has—through his nomination—imparted his mode of working and living on a readership of over 150 000 people. He has also widely publicised his altruistic ideals through his earlier work, as adviser to both the Conservative Party in Parliament, as well as adviser to Norad (Norwegian Agency for Development Cooperation). According to the jury:Eirik Mofoss examines our fundamental assumptions about what aid currently is, and our vision for what it could be; how one can—and should—contribute, as a private citizen.DN’s first interview with Eirik Mofoss quickly became their most read “30 under 30”-interview ever, and the newspaper’s most read piece in 2022.Eirik Mofoss is 29 years old, born in Tromsø, in the north of Norway, and has a master's degree in industrial economics and technology management from NTNU. Mofoss has previously worked as a political and financial adviser for the Conservative Party of Norway, and as a business development manager for Visma. Almost ten years ago, he became a member of Giving What We Can, and committed to donating 20 percent of his income for the rest of his life. Mofoss co-founded the first effective altruism group in Norway in 2014 while studying at NTNU, and has later co-founded the foundation Gi Effektivt that fundraises for charities recommended by GiveWell.The jury’s reasoning for naming Eirik "Young leader of the year"Eirik Mofoss shows us that our direct actions still yield impactful results during a time when many feel increasingly powerless. Mofoss builds bridges, and has skillfully integrated his personal and working lives. He seeks, through his activism and modes of thinking, to elevate the discourse around sustainability—away from individualistic critiques, and toward systemic ones.Mofoss’ generation has been raised in the global labour aristocracy. Despite this, some of them have recognized the extent to which their privileges and positions can be used for good. Mofoss challenges how we as individuals can contribute, and works to ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eirik Mofoss named "Young leader of the year" in Norway, published by Jorgen Ljones on November 16, 2022 on The Effective Altruism Forum.Written by Jørgen Ljønes, managing director of Gi Effektivt and co-founder of the Norwegian effective altruism movement.SummaryEirik Mofoss was recently named “Young leader of the year” by the largest business newspaper in Norway. He is the co-founder of the Norwegian effective altruism movement and—as a member of Giving What We Can—donates 20 percent of his income to organisations that, through documented effectiveness, make a positive difference in the world. The award provided lots of valuable attention and traffic to EA Norway and Gi Effektivt, our donation plattform. We wanted to share and celebrate this, not only as a recognition of Eirik, but also for all the Norwegian EAs who has made this possible.BackgroundDagens Næringsliv (Norwegian for "Today's Business"), commonly known as DN, is the third-largest daily newspaper in Norway. Every year, DN nominates 30 people under the age of 30 who, through good ideas, entrepreneurial or leadership skills, represent guiding figures for others, and want to make the world more sustainable. First, DN’s readership nominates a selection of people, then an independent jury picks the final 30, based on the UN's 17 sustainable development goals (SDGs). The jury, together with readers, then selects a single winner, among people whose accomplishments have yielded some kind of measurable impact in the realm of sustainable development.The nomination of Eirik as a candidate for the awardEirik Mofoss co-founded the first Norwegian branch of the Effective Altruism movement at NTNU in 2014, and has remained true to his ideals ever since. Eirik has committed to donating 20 percent of his income for the rest of his life, and has—through his nomination—imparted his mode of working and living on a readership of over 150 000 people. He has also widely publicised his altruistic ideals through his earlier work, as adviser to both the Conservative Party in Parliament, as well as adviser to Norad (Norwegian Agency for Development Cooperation). According to the jury:Eirik Mofoss examines our fundamental assumptions about what aid currently is, and our vision for what it could be; how one can—and should—contribute, as a private citizen.DN’s first interview with Eirik Mofoss quickly became their most read “30 under 30”-interview ever, and the newspaper’s most read piece in 2022.Eirik Mofoss is 29 years old, born in Tromsø, in the north of Norway, and has a master's degree in industrial economics and technology management from NTNU. Mofoss has previously worked as a political and financial adviser for the Conservative Party of Norway, and as a business development manager for Visma. Almost ten years ago, he became a member of Giving What We Can, and committed to donating 20 percent of his income for the rest of his life. Mofoss co-founded the first effective altruism group in Norway in 2014 while studying at NTNU, and has later co-founded the foundation Gi Effektivt that fundraises for charities recommended by GiveWell.The jury’s reasoning for naming Eirik "Young leader of the year"Eirik Mofoss shows us that our direct actions still yield impactful results during a time when many feel increasingly powerless. Mofoss builds bridges, and has skillfully integrated his personal and working lives. He seeks, through his activism and modes of thinking, to elevate the discourse around sustainability—away from individualistic critiques, and toward systemic ones.Mofoss’ generation has been raised in the global labour aristocracy. Despite this, some of them have recognized the extent to which their privileges and positions can be used for good. Mofoss challenges how we as individuals can contribute, and works to ...]]>
Jorgen Ljones https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:05 None full 3786
pqGnC4sAjzzZHoqXY_EA EA - Cruxes for nuclear risk reduction efforts - A proposal by Sarah Weiler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cruxes for nuclear risk reduction efforts - A proposal, published by Sarah Weiler on November 16, 2022 on The Effective Altruism Forum.This is my attempt to give an overview of debates and arguments relevant to the question of how to mitigate nuclear risks effectively. I list a number of crucial questions that I think need to be answered by anyone (individual or group) seeking to find their role as a contributor to nuclear risk mitigation efforts. I give a high-level overview of the cruxes in Figure 1:These questions are based on my moderately extensive engagement with the nuclear risk field; they are likely not exhaustive and might well be phrased in a less-than-optimal way — I thus welcome any feedback for how to improve the list found below. I hope that this list can help people (and groups) reflect on the cause of nuclear risk reduction by highlighting relevant considerations and structuring the large amount of thinking that has gone into the topic already. I do not provide definitive answers to the questions listed, but try to outline competing responses to each question and flesh out my own current position on some of them in separate posts/write-ups (linked to below).The post consists of the following sections:Setting the stage: some background on my CERI research projectOutline of my work on nuclear issues prior to the summer fellowshipSummary of work by others with some similarity to mineA defense of the value of my project and the output presented hereMain body: list of cruxes in the nuclear risk debateSubstantive cruxes: questions to determine which nuclear risks to work on and how to do soSub-cruxes: questions to help tackle the cruxes aboveMeta-level cruxes: methodological and epistemological questionsLinks and referencesSetting the stageFor a couple of months, I have been engaged in an effort to disentangle the nuclear risk cause area, i.e., to figure out which specific risks it encompasses and to get a sense for what can and should be done about these risks. I took several stabs at the problem, and this is my latest attempt to make progress on this disentanglement goal.My previous attempts to disentangle nuclear riskWhile I had some exposure to nuclear affairs during my studies of global politics at uni (i.e., at least since 2018) and have been reading about the topic throughout the last few years, I’ve been engaging with the topic more seriously only since the beginning of this year (2022), when I did a part-time research fellowship in which I decided to focus on nuclear risks.For that fellowship, I started by brainstorming my thoughts and uncertainties about nuclear risk as a problem area that I might want to work on (resulting in a list of questions and my preliminary thoughts on them), did a limited survey of the academic literature on different intellectual approaches to the topic of nuclear weapons, and conducted a small-scale empirical investigation into how three different non-profits in the nuclear risk field (the Nuclear Threat Initiative, the International Campaign to Abolish Nuclear Weapons, and the RAND Corporation) conceptualize and justify their work on nuclear risk (resulting in a sketch of the theory of change. of each organization, constructed based on the information they provide on their websites).During ten weeks over this summer (Jul-Sep 2022), my participation in the Cambridge Existential Risk Initiative — a full-time, paid and mentored research fellowship — has allowed me to dedicate more time to this project and to test out a few more approaches to understanding the nuclear risk field. I spent around three weeks with the goal of compiling a list of organizations working on nuclear risk issues, collecting information on their self-described theory of change, and categorizing the organizations in a broad typolog...]]>
Sarah Weiler https://forum.effectivealtruism.org/posts/pqGnC4sAjzzZHoqXY/cruxes-for-nuclear-risk-reduction-efforts-a-proposal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cruxes for nuclear risk reduction efforts - A proposal, published by Sarah Weiler on November 16, 2022 on The Effective Altruism Forum.This is my attempt to give an overview of debates and arguments relevant to the question of how to mitigate nuclear risks effectively. I list a number of crucial questions that I think need to be answered by anyone (individual or group) seeking to find their role as a contributor to nuclear risk mitigation efforts. I give a high-level overview of the cruxes in Figure 1:These questions are based on my moderately extensive engagement with the nuclear risk field; they are likely not exhaustive and might well be phrased in a less-than-optimal way — I thus welcome any feedback for how to improve the list found below. I hope that this list can help people (and groups) reflect on the cause of nuclear risk reduction by highlighting relevant considerations and structuring the large amount of thinking that has gone into the topic already. I do not provide definitive answers to the questions listed, but try to outline competing responses to each question and flesh out my own current position on some of them in separate posts/write-ups (linked to below).The post consists of the following sections:Setting the stage: some background on my CERI research projectOutline of my work on nuclear issues prior to the summer fellowshipSummary of work by others with some similarity to mineA defense of the value of my project and the output presented hereMain body: list of cruxes in the nuclear risk debateSubstantive cruxes: questions to determine which nuclear risks to work on and how to do soSub-cruxes: questions to help tackle the cruxes aboveMeta-level cruxes: methodological and epistemological questionsLinks and referencesSetting the stageFor a couple of months, I have been engaged in an effort to disentangle the nuclear risk cause area, i.e., to figure out which specific risks it encompasses and to get a sense for what can and should be done about these risks. I took several stabs at the problem, and this is my latest attempt to make progress on this disentanglement goal.My previous attempts to disentangle nuclear riskWhile I had some exposure to nuclear affairs during my studies of global politics at uni (i.e., at least since 2018) and have been reading about the topic throughout the last few years, I’ve been engaging with the topic more seriously only since the beginning of this year (2022), when I did a part-time research fellowship in which I decided to focus on nuclear risks.For that fellowship, I started by brainstorming my thoughts and uncertainties about nuclear risk as a problem area that I might want to work on (resulting in a list of questions and my preliminary thoughts on them), did a limited survey of the academic literature on different intellectual approaches to the topic of nuclear weapons, and conducted a small-scale empirical investigation into how three different non-profits in the nuclear risk field (the Nuclear Threat Initiative, the International Campaign to Abolish Nuclear Weapons, and the RAND Corporation) conceptualize and justify their work on nuclear risk (resulting in a sketch of the theory of change. of each organization, constructed based on the information they provide on their websites).During ten weeks over this summer (Jul-Sep 2022), my participation in the Cambridge Existential Risk Initiative — a full-time, paid and mentored research fellowship — has allowed me to dedicate more time to this project and to test out a few more approaches to understanding the nuclear risk field. I spent around three weeks with the goal of compiling a list of organizations working on nuclear risk issues, collecting information on their self-described theory of change, and categorizing the organizations in a broad typolog...]]>
Wed, 16 Nov 2022 14:20:38 +0000 EA - Cruxes for nuclear risk reduction efforts - A proposal by Sarah Weiler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cruxes for nuclear risk reduction efforts - A proposal, published by Sarah Weiler on November 16, 2022 on The Effective Altruism Forum.This is my attempt to give an overview of debates and arguments relevant to the question of how to mitigate nuclear risks effectively. I list a number of crucial questions that I think need to be answered by anyone (individual or group) seeking to find their role as a contributor to nuclear risk mitigation efforts. I give a high-level overview of the cruxes in Figure 1:These questions are based on my moderately extensive engagement with the nuclear risk field; they are likely not exhaustive and might well be phrased in a less-than-optimal way — I thus welcome any feedback for how to improve the list found below. I hope that this list can help people (and groups) reflect on the cause of nuclear risk reduction by highlighting relevant considerations and structuring the large amount of thinking that has gone into the topic already. I do not provide definitive answers to the questions listed, but try to outline competing responses to each question and flesh out my own current position on some of them in separate posts/write-ups (linked to below).The post consists of the following sections:Setting the stage: some background on my CERI research projectOutline of my work on nuclear issues prior to the summer fellowshipSummary of work by others with some similarity to mineA defense of the value of my project and the output presented hereMain body: list of cruxes in the nuclear risk debateSubstantive cruxes: questions to determine which nuclear risks to work on and how to do soSub-cruxes: questions to help tackle the cruxes aboveMeta-level cruxes: methodological and epistemological questionsLinks and referencesSetting the stageFor a couple of months, I have been engaged in an effort to disentangle the nuclear risk cause area, i.e., to figure out which specific risks it encompasses and to get a sense for what can and should be done about these risks. I took several stabs at the problem, and this is my latest attempt to make progress on this disentanglement goal.My previous attempts to disentangle nuclear riskWhile I had some exposure to nuclear affairs during my studies of global politics at uni (i.e., at least since 2018) and have been reading about the topic throughout the last few years, I’ve been engaging with the topic more seriously only since the beginning of this year (2022), when I did a part-time research fellowship in which I decided to focus on nuclear risks.For that fellowship, I started by brainstorming my thoughts and uncertainties about nuclear risk as a problem area that I might want to work on (resulting in a list of questions and my preliminary thoughts on them), did a limited survey of the academic literature on different intellectual approaches to the topic of nuclear weapons, and conducted a small-scale empirical investigation into how three different non-profits in the nuclear risk field (the Nuclear Threat Initiative, the International Campaign to Abolish Nuclear Weapons, and the RAND Corporation) conceptualize and justify their work on nuclear risk (resulting in a sketch of the theory of change. of each organization, constructed based on the information they provide on their websites).During ten weeks over this summer (Jul-Sep 2022), my participation in the Cambridge Existential Risk Initiative — a full-time, paid and mentored research fellowship — has allowed me to dedicate more time to this project and to test out a few more approaches to understanding the nuclear risk field. I spent around three weeks with the goal of compiling a list of organizations working on nuclear risk issues, collecting information on their self-described theory of change, and categorizing the organizations in a broad typolog...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cruxes for nuclear risk reduction efforts - A proposal, published by Sarah Weiler on November 16, 2022 on The Effective Altruism Forum.This is my attempt to give an overview of debates and arguments relevant to the question of how to mitigate nuclear risks effectively. I list a number of crucial questions that I think need to be answered by anyone (individual or group) seeking to find their role as a contributor to nuclear risk mitigation efforts. I give a high-level overview of the cruxes in Figure 1:These questions are based on my moderately extensive engagement with the nuclear risk field; they are likely not exhaustive and might well be phrased in a less-than-optimal way — I thus welcome any feedback for how to improve the list found below. I hope that this list can help people (and groups) reflect on the cause of nuclear risk reduction by highlighting relevant considerations and structuring the large amount of thinking that has gone into the topic already. I do not provide definitive answers to the questions listed, but try to outline competing responses to each question and flesh out my own current position on some of them in separate posts/write-ups (linked to below).The post consists of the following sections:Setting the stage: some background on my CERI research projectOutline of my work on nuclear issues prior to the summer fellowshipSummary of work by others with some similarity to mineA defense of the value of my project and the output presented hereMain body: list of cruxes in the nuclear risk debateSubstantive cruxes: questions to determine which nuclear risks to work on and how to do soSub-cruxes: questions to help tackle the cruxes aboveMeta-level cruxes: methodological and epistemological questionsLinks and referencesSetting the stageFor a couple of months, I have been engaged in an effort to disentangle the nuclear risk cause area, i.e., to figure out which specific risks it encompasses and to get a sense for what can and should be done about these risks. I took several stabs at the problem, and this is my latest attempt to make progress on this disentanglement goal.My previous attempts to disentangle nuclear riskWhile I had some exposure to nuclear affairs during my studies of global politics at uni (i.e., at least since 2018) and have been reading about the topic throughout the last few years, I’ve been engaging with the topic more seriously only since the beginning of this year (2022), when I did a part-time research fellowship in which I decided to focus on nuclear risks.For that fellowship, I started by brainstorming my thoughts and uncertainties about nuclear risk as a problem area that I might want to work on (resulting in a list of questions and my preliminary thoughts on them), did a limited survey of the academic literature on different intellectual approaches to the topic of nuclear weapons, and conducted a small-scale empirical investigation into how three different non-profits in the nuclear risk field (the Nuclear Threat Initiative, the International Campaign to Abolish Nuclear Weapons, and the RAND Corporation) conceptualize and justify their work on nuclear risk (resulting in a sketch of the theory of change. of each organization, constructed based on the information they provide on their websites).During ten weeks over this summer (Jul-Sep 2022), my participation in the Cambridge Existential Risk Initiative — a full-time, paid and mentored research fellowship — has allowed me to dedicate more time to this project and to test out a few more approaches to understanding the nuclear risk field. I spent around three weeks with the goal of compiling a list of organizations working on nuclear risk issues, collecting information on their self-described theory of change, and categorizing the organizations in a broad typolog...]]>
Sarah Weiler https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 48:25 None full 3787
4pXiFKbpzpDH6p2Qk_EA EA - [Linkpost] Sam Harris on "The Fall of Sam Bankman-Fried" by michel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Sam Harris on "The Fall of Sam Bankman-Fried", published by michel on November 16, 2022 on The Effective Altruism Forum.Sam Harris just shared a 20 min podcast with his thoughts on the FTX crash, Sam Bankman-Fried (SBF), and effective altruism. I think he had a good take, and I'm glad he shared it.Highlights (written retrospectively; hope I'm not misrepresenting):Sam Harris discusses how he didn't expect what looks like serious wrongdoing – and likely fraud – from SBF, who he had glorified on his podcast in December 2021 for an episode on Earning to Give. Even listening to the podcast back again, Sam Harris didn't detect any deviousness in SBF.He acknowledges there's a lot we don't know yet about SBF's character and how long this has been going on.Sam mentions that he's steered clear of the EA community because he has always found it "cult-like"But he defends EA principles and says that this whole situation means "exactly zero" for his commitment to those ideas.He compares criticizing EA ideas in light of FTX to criticizing the scientific method in light of elite scientists faking results.He extends his defense to utilitarianism and consequentialism: if you're critiquing consequentialism ethics on the basis that it led to bad consequences you're "seriously confused."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
michel https://forum.effectivealtruism.org/posts/4pXiFKbpzpDH6p2Qk/linkpost-sam-harris-on-the-fall-of-sam-bankman-fried Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Sam Harris on "The Fall of Sam Bankman-Fried", published by michel on November 16, 2022 on The Effective Altruism Forum.Sam Harris just shared a 20 min podcast with his thoughts on the FTX crash, Sam Bankman-Fried (SBF), and effective altruism. I think he had a good take, and I'm glad he shared it.Highlights (written retrospectively; hope I'm not misrepresenting):Sam Harris discusses how he didn't expect what looks like serious wrongdoing – and likely fraud – from SBF, who he had glorified on his podcast in December 2021 for an episode on Earning to Give. Even listening to the podcast back again, Sam Harris didn't detect any deviousness in SBF.He acknowledges there's a lot we don't know yet about SBF's character and how long this has been going on.Sam mentions that he's steered clear of the EA community because he has always found it "cult-like"But he defends EA principles and says that this whole situation means "exactly zero" for his commitment to those ideas.He compares criticizing EA ideas in light of FTX to criticizing the scientific method in light of elite scientists faking results.He extends his defense to utilitarianism and consequentialism: if you're critiquing consequentialism ethics on the basis that it led to bad consequences you're "seriously confused."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 16 Nov 2022 07:32:31 +0000 EA - [Linkpost] Sam Harris on "The Fall of Sam Bankman-Fried" by michel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Sam Harris on "The Fall of Sam Bankman-Fried", published by michel on November 16, 2022 on The Effective Altruism Forum.Sam Harris just shared a 20 min podcast with his thoughts on the FTX crash, Sam Bankman-Fried (SBF), and effective altruism. I think he had a good take, and I'm glad he shared it.Highlights (written retrospectively; hope I'm not misrepresenting):Sam Harris discusses how he didn't expect what looks like serious wrongdoing – and likely fraud – from SBF, who he had glorified on his podcast in December 2021 for an episode on Earning to Give. Even listening to the podcast back again, Sam Harris didn't detect any deviousness in SBF.He acknowledges there's a lot we don't know yet about SBF's character and how long this has been going on.Sam mentions that he's steered clear of the EA community because he has always found it "cult-like"But he defends EA principles and says that this whole situation means "exactly zero" for his commitment to those ideas.He compares criticizing EA ideas in light of FTX to criticizing the scientific method in light of elite scientists faking results.He extends his defense to utilitarianism and consequentialism: if you're critiquing consequentialism ethics on the basis that it led to bad consequences you're "seriously confused."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Sam Harris on "The Fall of Sam Bankman-Fried", published by michel on November 16, 2022 on The Effective Altruism Forum.Sam Harris just shared a 20 min podcast with his thoughts on the FTX crash, Sam Bankman-Fried (SBF), and effective altruism. I think he had a good take, and I'm glad he shared it.Highlights (written retrospectively; hope I'm not misrepresenting):Sam Harris discusses how he didn't expect what looks like serious wrongdoing – and likely fraud – from SBF, who he had glorified on his podcast in December 2021 for an episode on Earning to Give. Even listening to the podcast back again, Sam Harris didn't detect any deviousness in SBF.He acknowledges there's a lot we don't know yet about SBF's character and how long this has been going on.Sam mentions that he's steered clear of the EA community because he has always found it "cult-like"But he defends EA principles and says that this whole situation means "exactly zero" for his commitment to those ideas.He compares criticizing EA ideas in light of FTX to criticizing the scientific method in light of elite scientists faking results.He extends his defense to utilitarianism and consequentialism: if you're critiquing consequentialism ethics on the basis that it led to bad consequences you're "seriously confused."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
michel https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:32 None full 3773
HPdWWetJbv4z8eJEe_EA EA - Open Phil is seeking applications from grantees impacted by recent events by Bastian Stern Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Phil is seeking applications from grantees impacted by recent events, published by Bastian Stern on November 16, 2022 on The Effective Altruism Forum.We (Open Phil) are seeking applications from grantees affected by the recent collapse of the FTX Future Fund (FTXFF) who fall within our long-termist focus areas (biosecurity, AI risk, and building the long-termist EA community). If you fit the following description, please fill out this application form.We’re open to applications from:Grantees who never received some or all of their committed funds from FTXFF.Grantees who received funds, but want to set them aside to return to creditors or depositors.We think there could be a number of complex considerations here, and we don’t yet have a clear picture of how we’ll treat requests like these. We’d encourage people to apply if in doubt, but to avoid making assumptions about whether you’ll be funded (and about what our take will end up being on what the right thing to do is for your case). (Additionally, we’re unsure if there will be legal barriers to returning funds.) That said, we’ll do our best to respond to urgent requests quickly, so you have clarity as soon as possible.Grantees whose funding was otherwise affected by recent events.Please note that this form does not guarantee funding. We intend to evaluate applications using the same standard as we would if they were coming through one of our other longtermist programs — we will evaluate whether they are a cost-effective way to positively influence the long-term future. As described in Holden’s post, we expect our cost-effectiveness “bar” to rise relative to what it has been in the past, so unfortunately we expect that some of the applications we receive (and possibly a sizeable fraction of them) will not be successful. That said, this is a sudden disruptive event and we plan to take into account the benefits of stability and continuity in our assessment.We’ll prioritize getting back to applicants who indicate time sensitivity and whose work seems highly likely to fall above our updated bar. If we’re unsure whether an application is above our new bar, we’ll do our best to get back within the indicated deadline (or within 6 weeks, if the application isn’t time-sensitive), but we may take longer as we reevaluate where the bar should be.We’re aware that others may want to help out financially. If you would like to identify yourself as a potential donor either to this effort or to a different one aimed at impacted FTXFF grantees, you can get in contact with us at inquiries@openphilanthropy.org. We sincerely appreciate anyone who wants to help, but due to logistical reasons we can only respond to emails from people who think there’s a serious chance they’d be willing to contribute over $250k.We’ve left an option on the form to explain specific circumstances – we can imagine many ways that recent events could be disruptive. (For example, if FTXFF had committed to funding a grantee that planned to regrant some of those funds, anyone anticipating a regrant could be affected.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Bastian Stern https://forum.effectivealtruism.org/posts/HPdWWetJbv4z8eJEe/open-phil-is-seeking-applications-from-grantees-impacted-by Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Phil is seeking applications from grantees impacted by recent events, published by Bastian Stern on November 16, 2022 on The Effective Altruism Forum.We (Open Phil) are seeking applications from grantees affected by the recent collapse of the FTX Future Fund (FTXFF) who fall within our long-termist focus areas (biosecurity, AI risk, and building the long-termist EA community). If you fit the following description, please fill out this application form.We’re open to applications from:Grantees who never received some or all of their committed funds from FTXFF.Grantees who received funds, but want to set them aside to return to creditors or depositors.We think there could be a number of complex considerations here, and we don’t yet have a clear picture of how we’ll treat requests like these. We’d encourage people to apply if in doubt, but to avoid making assumptions about whether you’ll be funded (and about what our take will end up being on what the right thing to do is for your case). (Additionally, we’re unsure if there will be legal barriers to returning funds.) That said, we’ll do our best to respond to urgent requests quickly, so you have clarity as soon as possible.Grantees whose funding was otherwise affected by recent events.Please note that this form does not guarantee funding. We intend to evaluate applications using the same standard as we would if they were coming through one of our other longtermist programs — we will evaluate whether they are a cost-effective way to positively influence the long-term future. As described in Holden’s post, we expect our cost-effectiveness “bar” to rise relative to what it has been in the past, so unfortunately we expect that some of the applications we receive (and possibly a sizeable fraction of them) will not be successful. That said, this is a sudden disruptive event and we plan to take into account the benefits of stability and continuity in our assessment.We’ll prioritize getting back to applicants who indicate time sensitivity and whose work seems highly likely to fall above our updated bar. If we’re unsure whether an application is above our new bar, we’ll do our best to get back within the indicated deadline (or within 6 weeks, if the application isn’t time-sensitive), but we may take longer as we reevaluate where the bar should be.We’re aware that others may want to help out financially. If you would like to identify yourself as a potential donor either to this effort or to a different one aimed at impacted FTXFF grantees, you can get in contact with us at inquiries@openphilanthropy.org. We sincerely appreciate anyone who wants to help, but due to logistical reasons we can only respond to emails from people who think there’s a serious chance they’d be willing to contribute over $250k.We’ve left an option on the form to explain specific circumstances – we can imagine many ways that recent events could be disruptive. (For example, if FTXFF had committed to funding a grantee that planned to regrant some of those funds, anyone anticipating a regrant could be affected.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 16 Nov 2022 05:17:53 +0000 EA - Open Phil is seeking applications from grantees impacted by recent events by Bastian Stern Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Phil is seeking applications from grantees impacted by recent events, published by Bastian Stern on November 16, 2022 on The Effective Altruism Forum.We (Open Phil) are seeking applications from grantees affected by the recent collapse of the FTX Future Fund (FTXFF) who fall within our long-termist focus areas (biosecurity, AI risk, and building the long-termist EA community). If you fit the following description, please fill out this application form.We’re open to applications from:Grantees who never received some or all of their committed funds from FTXFF.Grantees who received funds, but want to set them aside to return to creditors or depositors.We think there could be a number of complex considerations here, and we don’t yet have a clear picture of how we’ll treat requests like these. We’d encourage people to apply if in doubt, but to avoid making assumptions about whether you’ll be funded (and about what our take will end up being on what the right thing to do is for your case). (Additionally, we’re unsure if there will be legal barriers to returning funds.) That said, we’ll do our best to respond to urgent requests quickly, so you have clarity as soon as possible.Grantees whose funding was otherwise affected by recent events.Please note that this form does not guarantee funding. We intend to evaluate applications using the same standard as we would if they were coming through one of our other longtermist programs — we will evaluate whether they are a cost-effective way to positively influence the long-term future. As described in Holden’s post, we expect our cost-effectiveness “bar” to rise relative to what it has been in the past, so unfortunately we expect that some of the applications we receive (and possibly a sizeable fraction of them) will not be successful. That said, this is a sudden disruptive event and we plan to take into account the benefits of stability and continuity in our assessment.We’ll prioritize getting back to applicants who indicate time sensitivity and whose work seems highly likely to fall above our updated bar. If we’re unsure whether an application is above our new bar, we’ll do our best to get back within the indicated deadline (or within 6 weeks, if the application isn’t time-sensitive), but we may take longer as we reevaluate where the bar should be.We’re aware that others may want to help out financially. If you would like to identify yourself as a potential donor either to this effort or to a different one aimed at impacted FTXFF grantees, you can get in contact with us at inquiries@openphilanthropy.org. We sincerely appreciate anyone who wants to help, but due to logistical reasons we can only respond to emails from people who think there’s a serious chance they’d be willing to contribute over $250k.We’ve left an option on the form to explain specific circumstances – we can imagine many ways that recent events could be disruptive. (For example, if FTXFF had committed to funding a grantee that planned to regrant some of those funds, anyone anticipating a regrant could be affected.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Phil is seeking applications from grantees impacted by recent events, published by Bastian Stern on November 16, 2022 on The Effective Altruism Forum.We (Open Phil) are seeking applications from grantees affected by the recent collapse of the FTX Future Fund (FTXFF) who fall within our long-termist focus areas (biosecurity, AI risk, and building the long-termist EA community). If you fit the following description, please fill out this application form.We’re open to applications from:Grantees who never received some or all of their committed funds from FTXFF.Grantees who received funds, but want to set them aside to return to creditors or depositors.We think there could be a number of complex considerations here, and we don’t yet have a clear picture of how we’ll treat requests like these. We’d encourage people to apply if in doubt, but to avoid making assumptions about whether you’ll be funded (and about what our take will end up being on what the right thing to do is for your case). (Additionally, we’re unsure if there will be legal barriers to returning funds.) That said, we’ll do our best to respond to urgent requests quickly, so you have clarity as soon as possible.Grantees whose funding was otherwise affected by recent events.Please note that this form does not guarantee funding. We intend to evaluate applications using the same standard as we would if they were coming through one of our other longtermist programs — we will evaluate whether they are a cost-effective way to positively influence the long-term future. As described in Holden’s post, we expect our cost-effectiveness “bar” to rise relative to what it has been in the past, so unfortunately we expect that some of the applications we receive (and possibly a sizeable fraction of them) will not be successful. That said, this is a sudden disruptive event and we plan to take into account the benefits of stability and continuity in our assessment.We’ll prioritize getting back to applicants who indicate time sensitivity and whose work seems highly likely to fall above our updated bar. If we’re unsure whether an application is above our new bar, we’ll do our best to get back within the indicated deadline (or within 6 weeks, if the application isn’t time-sensitive), but we may take longer as we reevaluate where the bar should be.We’re aware that others may want to help out financially. If you would like to identify yourself as a potential donor either to this effort or to a different one aimed at impacted FTXFF grantees, you can get in contact with us at inquiries@openphilanthropy.org. We sincerely appreciate anyone who wants to help, but due to logistical reasons we can only respond to emails from people who think there’s a serious chance they’d be willing to contribute over $250k.We’ve left an option on the form to explain specific circumstances – we can imagine many ways that recent events could be disruptive. (For example, if FTXFF had committed to funding a grantee that planned to regrant some of those funds, anyone anticipating a regrant could be affected.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Bastian Stern https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:00 None full 3774
JWBhv5EfHyQoFd6SL_EA EA - I think EA will make it through stronger by lincolnq Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I think EA will make it through stronger, published by lincolnq on November 16, 2022 on The Effective Altruism Forum.status: kind of rambly but I wanted to get this out there in case it helpsThis week's events triggered in me some soul-searching, wondering whether effective altruism even makes sense as a coherent thing anymore.The reason I thought EA might break up or dissolve was something like: EA mostly attracted naive maximizer-types ("do the Most Good, using reasoning"), but now it's obvious that the idea of maximizing goodness doesn't work in practice--we have a really clear example of where trying to do that fails (SBF if you attribute pure motives to him); as well as a lot of recent quotes from EA luminaries saying that you shouldn't do that. I didn't see what else holds us together besides the maximizing thing.But I was kind of ignoring the reasoning thing! I thought about it, and I think that we can make minimal changes: The framing I like is "Do good, using REASONING". With capital letters :)I think deleting "the most" is a change we should have made a long time ago; few of the important people in EA were claiming that they were doing the most good anyway. And EA at its core is about reasoning: reasoning carefully, using evidence; thinking about first-order and second-order effects; comparing options in front of you; argument and debate. The simpler phrasing of this new mission is intended to make reasoning stand out.If this direction is adopted, I have the following hopes:that EA will become a "bigger tent," accepting of more types of people doing more types of good things in the world and reasoning about them. e.g., we'll welcome anyone who is trying to do good, and is open to talking through the 'why' behind what they are doingthat naive utilitarian maximizers will go away or be a bit more humble :)that people will put more emphasis on developing and relying on their own reasoning processes, and rely less on the reasoning of others to make big decisions in their lives.that cause prioritization will get less emphasis, especially career cause prioritization (I think the maximizing thingy regularly causes people to make bad career decisions)(Some color on the final one: I've had a blog post brewing for a long time against strong career cause prio but haven't really managed to write it up in a convincing way. e.g., I think AI is a bad career direction for a lot of people, but young EAs are convinced to try it anyway because AI is held up as the priority path and they'll have so much more impact if they make it. This seems bad for lots of reasons which I will try to write up in a post if I can ever figure out how to articulate them.)Anyway, I think the above hopes, if they pan out, will make the community stronger. And, though I am normally loath to argue about optics, I do think this change would counter most of the arguments that you regularly see in news media against EA principles (such as that EA is about dangerous maximizing, or that it's only for elites, or that young people's careers are affected in unstable/chaotic ways when they encounter EA).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
lincolnq https://forum.effectivealtruism.org/posts/JWBhv5EfHyQoFd6SL/i-think-ea-will-make-it-through-stronger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I think EA will make it through stronger, published by lincolnq on November 16, 2022 on The Effective Altruism Forum.status: kind of rambly but I wanted to get this out there in case it helpsThis week's events triggered in me some soul-searching, wondering whether effective altruism even makes sense as a coherent thing anymore.The reason I thought EA might break up or dissolve was something like: EA mostly attracted naive maximizer-types ("do the Most Good, using reasoning"), but now it's obvious that the idea of maximizing goodness doesn't work in practice--we have a really clear example of where trying to do that fails (SBF if you attribute pure motives to him); as well as a lot of recent quotes from EA luminaries saying that you shouldn't do that. I didn't see what else holds us together besides the maximizing thing.But I was kind of ignoring the reasoning thing! I thought about it, and I think that we can make minimal changes: The framing I like is "Do good, using REASONING". With capital letters :)I think deleting "the most" is a change we should have made a long time ago; few of the important people in EA were claiming that they were doing the most good anyway. And EA at its core is about reasoning: reasoning carefully, using evidence; thinking about first-order and second-order effects; comparing options in front of you; argument and debate. The simpler phrasing of this new mission is intended to make reasoning stand out.If this direction is adopted, I have the following hopes:that EA will become a "bigger tent," accepting of more types of people doing more types of good things in the world and reasoning about them. e.g., we'll welcome anyone who is trying to do good, and is open to talking through the 'why' behind what they are doingthat naive utilitarian maximizers will go away or be a bit more humble :)that people will put more emphasis on developing and relying on their own reasoning processes, and rely less on the reasoning of others to make big decisions in their lives.that cause prioritization will get less emphasis, especially career cause prioritization (I think the maximizing thingy regularly causes people to make bad career decisions)(Some color on the final one: I've had a blog post brewing for a long time against strong career cause prio but haven't really managed to write it up in a convincing way. e.g., I think AI is a bad career direction for a lot of people, but young EAs are convinced to try it anyway because AI is held up as the priority path and they'll have so much more impact if they make it. This seems bad for lots of reasons which I will try to write up in a post if I can ever figure out how to articulate them.)Anyway, I think the above hopes, if they pan out, will make the community stronger. And, though I am normally loath to argue about optics, I do think this change would counter most of the arguments that you regularly see in news media against EA principles (such as that EA is about dangerous maximizing, or that it's only for elites, or that young people's careers are affected in unstable/chaotic ways when they encounter EA).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 16 Nov 2022 05:14:36 +0000 EA - I think EA will make it through stronger by lincolnq Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I think EA will make it through stronger, published by lincolnq on November 16, 2022 on The Effective Altruism Forum.status: kind of rambly but I wanted to get this out there in case it helpsThis week's events triggered in me some soul-searching, wondering whether effective altruism even makes sense as a coherent thing anymore.The reason I thought EA might break up or dissolve was something like: EA mostly attracted naive maximizer-types ("do the Most Good, using reasoning"), but now it's obvious that the idea of maximizing goodness doesn't work in practice--we have a really clear example of where trying to do that fails (SBF if you attribute pure motives to him); as well as a lot of recent quotes from EA luminaries saying that you shouldn't do that. I didn't see what else holds us together besides the maximizing thing.But I was kind of ignoring the reasoning thing! I thought about it, and I think that we can make minimal changes: The framing I like is "Do good, using REASONING". With capital letters :)I think deleting "the most" is a change we should have made a long time ago; few of the important people in EA were claiming that they were doing the most good anyway. And EA at its core is about reasoning: reasoning carefully, using evidence; thinking about first-order and second-order effects; comparing options in front of you; argument and debate. The simpler phrasing of this new mission is intended to make reasoning stand out.If this direction is adopted, I have the following hopes:that EA will become a "bigger tent," accepting of more types of people doing more types of good things in the world and reasoning about them. e.g., we'll welcome anyone who is trying to do good, and is open to talking through the 'why' behind what they are doingthat naive utilitarian maximizers will go away or be a bit more humble :)that people will put more emphasis on developing and relying on their own reasoning processes, and rely less on the reasoning of others to make big decisions in their lives.that cause prioritization will get less emphasis, especially career cause prioritization (I think the maximizing thingy regularly causes people to make bad career decisions)(Some color on the final one: I've had a blog post brewing for a long time against strong career cause prio but haven't really managed to write it up in a convincing way. e.g., I think AI is a bad career direction for a lot of people, but young EAs are convinced to try it anyway because AI is held up as the priority path and they'll have so much more impact if they make it. This seems bad for lots of reasons which I will try to write up in a post if I can ever figure out how to articulate them.)Anyway, I think the above hopes, if they pan out, will make the community stronger. And, though I am normally loath to argue about optics, I do think this change would counter most of the arguments that you regularly see in news media against EA principles (such as that EA is about dangerous maximizing, or that it's only for elites, or that young people's careers are affected in unstable/chaotic ways when they encounter EA).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I think EA will make it through stronger, published by lincolnq on November 16, 2022 on The Effective Altruism Forum.status: kind of rambly but I wanted to get this out there in case it helpsThis week's events triggered in me some soul-searching, wondering whether effective altruism even makes sense as a coherent thing anymore.The reason I thought EA might break up or dissolve was something like: EA mostly attracted naive maximizer-types ("do the Most Good, using reasoning"), but now it's obvious that the idea of maximizing goodness doesn't work in practice--we have a really clear example of where trying to do that fails (SBF if you attribute pure motives to him); as well as a lot of recent quotes from EA luminaries saying that you shouldn't do that. I didn't see what else holds us together besides the maximizing thing.But I was kind of ignoring the reasoning thing! I thought about it, and I think that we can make minimal changes: The framing I like is "Do good, using REASONING". With capital letters :)I think deleting "the most" is a change we should have made a long time ago; few of the important people in EA were claiming that they were doing the most good anyway. And EA at its core is about reasoning: reasoning carefully, using evidence; thinking about first-order and second-order effects; comparing options in front of you; argument and debate. The simpler phrasing of this new mission is intended to make reasoning stand out.If this direction is adopted, I have the following hopes:that EA will become a "bigger tent," accepting of more types of people doing more types of good things in the world and reasoning about them. e.g., we'll welcome anyone who is trying to do good, and is open to talking through the 'why' behind what they are doingthat naive utilitarian maximizers will go away or be a bit more humble :)that people will put more emphasis on developing and relying on their own reasoning processes, and rely less on the reasoning of others to make big decisions in their lives.that cause prioritization will get less emphasis, especially career cause prioritization (I think the maximizing thingy regularly causes people to make bad career decisions)(Some color on the final one: I've had a blog post brewing for a long time against strong career cause prio but haven't really managed to write it up in a convincing way. e.g., I think AI is a bad career direction for a lot of people, but young EAs are convinced to try it anyway because AI is held up as the priority path and they'll have so much more impact if they make it. This seems bad for lots of reasons which I will try to write up in a post if I can ever figure out how to articulate them.)Anyway, I think the above hopes, if they pan out, will make the community stronger. And, though I am normally loath to argue about optics, I do think this change would counter most of the arguments that you regularly see in news media against EA principles (such as that EA is about dangerous maximizing, or that it's only for elites, or that young people's careers are affected in unstable/chaotic ways when they encounter EA).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
lincolnq https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:57 None full 3776
SP3Hkas3jo6i2cmqb_EA EA - Who's at fault for FTX's wrongdoing by EliezerYudkowsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who's at fault for FTX's wrongdoing, published by EliezerYudkowsky on November 16, 2022 on The Effective Altruism Forum.Caroline Ellison, co-CEO and later CEO of Alameda, had a now-deleted blog, "worldoptimization" on Tumblr. One does not usually post excerpts from deleted blogs - the Internet has, of course, saved it by now - but it looks like Caroline violated enough deontology to be less protected than usual in turn, and also I think it's important for people to see what signals are apparently not reliable signs of honesty and goodness.In a post on Oct 10 2022, Caroline Ellison crossposted her Goodreads review of The Golden Enclaves, book 3 of Scholomance by Naomi Novik. Caroline Ellison writes, including very light / abstract spoilers only:A pretty good conclusion to the series.Biggest pro was the resolution of mysteries/open questions from the first two books. It wrapped everything up in a way that felt very satisfying.Biggest con was . I think I felt less bought into the ethics of the story than I had for the previous two books?The first two books often have a vibe of “you can either do the thing that’s easy and safe or you can do the thing that’s hard and scary but right, and being a good person is doing the right thing.” And I’m super on board with that.Whereas if I had to sum up the moral message of the third book I might go with “there is no ethical consumption under late capitalism.”For someone like myself, this is a pretty shocking thing to hear somebody say, on a Tumblr blog not then associated with their main corporate persona, not in a way that sounds like the usual performativity, not like it's meant to impress anybody (because then you're probably not writing about anything as undignified as fantasy fiction in the first place). It sounds like - Caroline might have been under the impression, as late as Oct 10, that what she was doing at FTX was the thing that's hard and scary but right? That she was doing, even, what Naomi Novik would have told her to do?The Scholomance novels feature a protagonist, Galadriel Higgins, with unusually dark and scary powers, with a dark and scary prophecy about herself, trying to do the right thing anyways and being misinterpreted by her classmates, in an incredibly hostile environment.The line of causality seems clear - Naomi Novik, by telling her readers to do the right thing, probably contributed to Caroline Ellison doing what she thought was the right thing - misusing Alameda's customer deposits. Furthermore, the Scholomance novels romanticized people with dark and scary powers, and those people not just immediately killing themselves in the face of a prophecy that they'd do immense harm later, i.e., sending the message that it's okay for them to take huge risks with other people's interests.I expect this to be a very serious blow to Naomi Novik's reputation, possibly the reputation of fantasy fiction in general. The now-deleted Tumblr post is tantamount to a declaration that Caroline Ellison was doing this because she thought Naomi Novik told her to. We can infer that probably at least $30 of Scholomance sales are due to Caroline Ellison, and with the resources that Ellison commanded as co-CEO of Alameda, some unknown other fraction of Scholomance's entire revenues could have been due to phantom purchases that Ellison funded in order to channel customer deposits to her favorite author.My moral here? It can also be summed up in an old joke that goes as follows: "He has no right to make himself that small; he is not that great."The best summary of the FTX affair that I've read so is Milky Eggs's "What Happened at Alameda Research?" If you haven't read it already, and you're at all interested in this affair, I recommend that you go read it right now.Pieced together from various sources, including some alleged...]]>
EliezerYudkowsky https://forum.effectivealtruism.org/posts/SP3Hkas3jo6i2cmqb/who-s-at-fault-for-ftx-s-wrongdoing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who's at fault for FTX's wrongdoing, published by EliezerYudkowsky on November 16, 2022 on The Effective Altruism Forum.Caroline Ellison, co-CEO and later CEO of Alameda, had a now-deleted blog, "worldoptimization" on Tumblr. One does not usually post excerpts from deleted blogs - the Internet has, of course, saved it by now - but it looks like Caroline violated enough deontology to be less protected than usual in turn, and also I think it's important for people to see what signals are apparently not reliable signs of honesty and goodness.In a post on Oct 10 2022, Caroline Ellison crossposted her Goodreads review of The Golden Enclaves, book 3 of Scholomance by Naomi Novik. Caroline Ellison writes, including very light / abstract spoilers only:A pretty good conclusion to the series.Biggest pro was the resolution of mysteries/open questions from the first two books. It wrapped everything up in a way that felt very satisfying.Biggest con was . I think I felt less bought into the ethics of the story than I had for the previous two books?The first two books often have a vibe of “you can either do the thing that’s easy and safe or you can do the thing that’s hard and scary but right, and being a good person is doing the right thing.” And I’m super on board with that.Whereas if I had to sum up the moral message of the third book I might go with “there is no ethical consumption under late capitalism.”For someone like myself, this is a pretty shocking thing to hear somebody say, on a Tumblr blog not then associated with their main corporate persona, not in a way that sounds like the usual performativity, not like it's meant to impress anybody (because then you're probably not writing about anything as undignified as fantasy fiction in the first place). It sounds like - Caroline might have been under the impression, as late as Oct 10, that what she was doing at FTX was the thing that's hard and scary but right? That she was doing, even, what Naomi Novik would have told her to do?The Scholomance novels feature a protagonist, Galadriel Higgins, with unusually dark and scary powers, with a dark and scary prophecy about herself, trying to do the right thing anyways and being misinterpreted by her classmates, in an incredibly hostile environment.The line of causality seems clear - Naomi Novik, by telling her readers to do the right thing, probably contributed to Caroline Ellison doing what she thought was the right thing - misusing Alameda's customer deposits. Furthermore, the Scholomance novels romanticized people with dark and scary powers, and those people not just immediately killing themselves in the face of a prophecy that they'd do immense harm later, i.e., sending the message that it's okay for them to take huge risks with other people's interests.I expect this to be a very serious blow to Naomi Novik's reputation, possibly the reputation of fantasy fiction in general. The now-deleted Tumblr post is tantamount to a declaration that Caroline Ellison was doing this because she thought Naomi Novik told her to. We can infer that probably at least $30 of Scholomance sales are due to Caroline Ellison, and with the resources that Ellison commanded as co-CEO of Alameda, some unknown other fraction of Scholomance's entire revenues could have been due to phantom purchases that Ellison funded in order to channel customer deposits to her favorite author.My moral here? It can also be summed up in an old joke that goes as follows: "He has no right to make himself that small; he is not that great."The best summary of the FTX affair that I've read so is Milky Eggs's "What Happened at Alameda Research?" If you haven't read it already, and you're at all interested in this affair, I recommend that you go read it right now.Pieced together from various sources, including some alleged...]]>
Wed, 16 Nov 2022 04:55:15 +0000 EA - Who's at fault for FTX's wrongdoing by EliezerYudkowsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who's at fault for FTX's wrongdoing, published by EliezerYudkowsky on November 16, 2022 on The Effective Altruism Forum.Caroline Ellison, co-CEO and later CEO of Alameda, had a now-deleted blog, "worldoptimization" on Tumblr. One does not usually post excerpts from deleted blogs - the Internet has, of course, saved it by now - but it looks like Caroline violated enough deontology to be less protected than usual in turn, and also I think it's important for people to see what signals are apparently not reliable signs of honesty and goodness.In a post on Oct 10 2022, Caroline Ellison crossposted her Goodreads review of The Golden Enclaves, book 3 of Scholomance by Naomi Novik. Caroline Ellison writes, including very light / abstract spoilers only:A pretty good conclusion to the series.Biggest pro was the resolution of mysteries/open questions from the first two books. It wrapped everything up in a way that felt very satisfying.Biggest con was . I think I felt less bought into the ethics of the story than I had for the previous two books?The first two books often have a vibe of “you can either do the thing that’s easy and safe or you can do the thing that’s hard and scary but right, and being a good person is doing the right thing.” And I’m super on board with that.Whereas if I had to sum up the moral message of the third book I might go with “there is no ethical consumption under late capitalism.”For someone like myself, this is a pretty shocking thing to hear somebody say, on a Tumblr blog not then associated with their main corporate persona, not in a way that sounds like the usual performativity, not like it's meant to impress anybody (because then you're probably not writing about anything as undignified as fantasy fiction in the first place). It sounds like - Caroline might have been under the impression, as late as Oct 10, that what she was doing at FTX was the thing that's hard and scary but right? That she was doing, even, what Naomi Novik would have told her to do?The Scholomance novels feature a protagonist, Galadriel Higgins, with unusually dark and scary powers, with a dark and scary prophecy about herself, trying to do the right thing anyways and being misinterpreted by her classmates, in an incredibly hostile environment.The line of causality seems clear - Naomi Novik, by telling her readers to do the right thing, probably contributed to Caroline Ellison doing what she thought was the right thing - misusing Alameda's customer deposits. Furthermore, the Scholomance novels romanticized people with dark and scary powers, and those people not just immediately killing themselves in the face of a prophecy that they'd do immense harm later, i.e., sending the message that it's okay for them to take huge risks with other people's interests.I expect this to be a very serious blow to Naomi Novik's reputation, possibly the reputation of fantasy fiction in general. The now-deleted Tumblr post is tantamount to a declaration that Caroline Ellison was doing this because she thought Naomi Novik told her to. We can infer that probably at least $30 of Scholomance sales are due to Caroline Ellison, and with the resources that Ellison commanded as co-CEO of Alameda, some unknown other fraction of Scholomance's entire revenues could have been due to phantom purchases that Ellison funded in order to channel customer deposits to her favorite author.My moral here? It can also be summed up in an old joke that goes as follows: "He has no right to make himself that small; he is not that great."The best summary of the FTX affair that I've read so is Milky Eggs's "What Happened at Alameda Research?" If you haven't read it already, and you're at all interested in this affair, I recommend that you go read it right now.Pieced together from various sources, including some alleged...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who's at fault for FTX's wrongdoing, published by EliezerYudkowsky on November 16, 2022 on The Effective Altruism Forum.Caroline Ellison, co-CEO and later CEO of Alameda, had a now-deleted blog, "worldoptimization" on Tumblr. One does not usually post excerpts from deleted blogs - the Internet has, of course, saved it by now - but it looks like Caroline violated enough deontology to be less protected than usual in turn, and also I think it's important for people to see what signals are apparently not reliable signs of honesty and goodness.In a post on Oct 10 2022, Caroline Ellison crossposted her Goodreads review of The Golden Enclaves, book 3 of Scholomance by Naomi Novik. Caroline Ellison writes, including very light / abstract spoilers only:A pretty good conclusion to the series.Biggest pro was the resolution of mysteries/open questions from the first two books. It wrapped everything up in a way that felt very satisfying.Biggest con was . I think I felt less bought into the ethics of the story than I had for the previous two books?The first two books often have a vibe of “you can either do the thing that’s easy and safe or you can do the thing that’s hard and scary but right, and being a good person is doing the right thing.” And I’m super on board with that.Whereas if I had to sum up the moral message of the third book I might go with “there is no ethical consumption under late capitalism.”For someone like myself, this is a pretty shocking thing to hear somebody say, on a Tumblr blog not then associated with their main corporate persona, not in a way that sounds like the usual performativity, not like it's meant to impress anybody (because then you're probably not writing about anything as undignified as fantasy fiction in the first place). It sounds like - Caroline might have been under the impression, as late as Oct 10, that what she was doing at FTX was the thing that's hard and scary but right? That she was doing, even, what Naomi Novik would have told her to do?The Scholomance novels feature a protagonist, Galadriel Higgins, with unusually dark and scary powers, with a dark and scary prophecy about herself, trying to do the right thing anyways and being misinterpreted by her classmates, in an incredibly hostile environment.The line of causality seems clear - Naomi Novik, by telling her readers to do the right thing, probably contributed to Caroline Ellison doing what she thought was the right thing - misusing Alameda's customer deposits. Furthermore, the Scholomance novels romanticized people with dark and scary powers, and those people not just immediately killing themselves in the face of a prophecy that they'd do immense harm later, i.e., sending the message that it's okay for them to take huge risks with other people's interests.I expect this to be a very serious blow to Naomi Novik's reputation, possibly the reputation of fantasy fiction in general. The now-deleted Tumblr post is tantamount to a declaration that Caroline Ellison was doing this because she thought Naomi Novik told her to. We can infer that probably at least $30 of Scholomance sales are due to Caroline Ellison, and with the resources that Ellison commanded as co-CEO of Alameda, some unknown other fraction of Scholomance's entire revenues could have been due to phantom purchases that Ellison funded in order to channel customer deposits to her favorite author.My moral here? It can also be summed up in an old joke that goes as follows: "He has no right to make himself that small; he is not that great."The best summary of the FTX affair that I've read so is Milky Eggs's "What Happened at Alameda Research?" If you haven't read it already, and you're at all interested in this affair, I recommend that you go read it right now.Pieced together from various sources, including some alleged...]]>
EliezerYudkowsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:48 None full 3775
6uwAXinuaxyssofBB_EA EA - Assessing the case for population growth as a priority by Charlotte Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Assessing the case for population growth as a priority, published by Charlotte on November 15, 2022 on The Effective Altruism Forum.We are assessing the case for population growth as a priority. Here is the Google Docs version if you prefer to read it as a document. Charlotte Siegmann and Luis MotaSummaryRecently, population growth as a cause area has been receiving more attention (MacAskill, 2022, PWI, Jones, 2022a, Bricker and Ibbitson, 2019). We outline and discuss arguments making that case, and argue that population growth falls short of being a top cause area under the longtermism paradigm. But our main contribution may be to just outline and explain all considerations and arguments.We consider three value propositions of population growth, and argue that:Long-run population size is likely determined by factors apart from biological population growth rates. MoreBiological reproduction will be replaced if the future is to be large, which should nullify the effects of population growth in timescales larger than a thousand years.Assuming biological reproduction continues, long-run population size is only influenced by current population growth in particular scenarios.Population size may impact economic growth, but its long-run effects are comparatively small. MorePopulation is one of the current drivers of economic growth.We argue that the probability and long-term harms of economic stagnation, such as moral regress, are overstated.We argue that the effects of economic growth on extinction risk are small.Population size has negligible effects on humanity’s resilience to catastrophes. MoreWe think that the most compelling case for intervening on population growth comes from its effects on economic growth. Overall, while we think that increasing population growth rates is positive, the scale of its benefits appears to be orders of magnitude smaller than those of top cause areas.IntroductionToday is the Day of Eight Billion, a day projected by the UN to be roughly the one in which the world human population reaches 8 billion people. The last time the world population increased by a round billion was 11 years ago, and the UN projection suggests that the world population will reach the 9 billion mark in 2037, and the 10 billion mark in the mid-2050s. This projection never gets to 11 billion, as it peaks at 10.4 billion during the 2080s and slowly declines throughout the last decades of this century. Other population projections predict an even earlier peak in the human population, which would happen before there are 10 billion of us (Lutz et al. 2018, Vollset et al. 2020).There is a reasonable amount of uncertainty about whether the peak will happen by the end of this century. In the UN projection, with a 5% probability, the population size will be larger than 12 billion by 2100. But fertility trends indicate that a peak will eventually happen. The world fertility rate has been declining since the 1960s and is expected to decline to the replacement level of 2.1 children per woman by the mid-century. Once it is below that point, growth would only be due to population momentum. This transition to below replacement levels of fertility is already on its way, as more than half of the world population lives in a country where fertility is already below the replacement rate.The expectation of a population decline raises the question: should population growth be a priority for actors wanting to do the most good? For one, a higher population might be desirable on its own. For example, according to the total view in population ethics, creating lives with positive wellbeing is good, all else equal. Moreover, population growth may also affect other variables of moral importance, such as economic growth (Jones, 2022b) and social and political norms (MacAskill 2022,...]]>
Charlotte https://forum.effectivealtruism.org/posts/6uwAXinuaxyssofBB/assessing-the-case-for-population-growth-as-a-priority Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Assessing the case for population growth as a priority, published by Charlotte on November 15, 2022 on The Effective Altruism Forum.We are assessing the case for population growth as a priority. Here is the Google Docs version if you prefer to read it as a document. Charlotte Siegmann and Luis MotaSummaryRecently, population growth as a cause area has been receiving more attention (MacAskill, 2022, PWI, Jones, 2022a, Bricker and Ibbitson, 2019). We outline and discuss arguments making that case, and argue that population growth falls short of being a top cause area under the longtermism paradigm. But our main contribution may be to just outline and explain all considerations and arguments.We consider three value propositions of population growth, and argue that:Long-run population size is likely determined by factors apart from biological population growth rates. MoreBiological reproduction will be replaced if the future is to be large, which should nullify the effects of population growth in timescales larger than a thousand years.Assuming biological reproduction continues, long-run population size is only influenced by current population growth in particular scenarios.Population size may impact economic growth, but its long-run effects are comparatively small. MorePopulation is one of the current drivers of economic growth.We argue that the probability and long-term harms of economic stagnation, such as moral regress, are overstated.We argue that the effects of economic growth on extinction risk are small.Population size has negligible effects on humanity’s resilience to catastrophes. MoreWe think that the most compelling case for intervening on population growth comes from its effects on economic growth. Overall, while we think that increasing population growth rates is positive, the scale of its benefits appears to be orders of magnitude smaller than those of top cause areas.IntroductionToday is the Day of Eight Billion, a day projected by the UN to be roughly the one in which the world human population reaches 8 billion people. The last time the world population increased by a round billion was 11 years ago, and the UN projection suggests that the world population will reach the 9 billion mark in 2037, and the 10 billion mark in the mid-2050s. This projection never gets to 11 billion, as it peaks at 10.4 billion during the 2080s and slowly declines throughout the last decades of this century. Other population projections predict an even earlier peak in the human population, which would happen before there are 10 billion of us (Lutz et al. 2018, Vollset et al. 2020).There is a reasonable amount of uncertainty about whether the peak will happen by the end of this century. In the UN projection, with a 5% probability, the population size will be larger than 12 billion by 2100. But fertility trends indicate that a peak will eventually happen. The world fertility rate has been declining since the 1960s and is expected to decline to the replacement level of 2.1 children per woman by the mid-century. Once it is below that point, growth would only be due to population momentum. This transition to below replacement levels of fertility is already on its way, as more than half of the world population lives in a country where fertility is already below the replacement rate.The expectation of a population decline raises the question: should population growth be a priority for actors wanting to do the most good? For one, a higher population might be desirable on its own. For example, according to the total view in population ethics, creating lives with positive wellbeing is good, all else equal. Moreover, population growth may also affect other variables of moral importance, such as economic growth (Jones, 2022b) and social and political norms (MacAskill 2022,...]]>
Wed, 16 Nov 2022 03:46:14 +0000 EA - Assessing the case for population growth as a priority by Charlotte Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Assessing the case for population growth as a priority, published by Charlotte on November 15, 2022 on The Effective Altruism Forum.We are assessing the case for population growth as a priority. Here is the Google Docs version if you prefer to read it as a document. Charlotte Siegmann and Luis MotaSummaryRecently, population growth as a cause area has been receiving more attention (MacAskill, 2022, PWI, Jones, 2022a, Bricker and Ibbitson, 2019). We outline and discuss arguments making that case, and argue that population growth falls short of being a top cause area under the longtermism paradigm. But our main contribution may be to just outline and explain all considerations and arguments.We consider three value propositions of population growth, and argue that:Long-run population size is likely determined by factors apart from biological population growth rates. MoreBiological reproduction will be replaced if the future is to be large, which should nullify the effects of population growth in timescales larger than a thousand years.Assuming biological reproduction continues, long-run population size is only influenced by current population growth in particular scenarios.Population size may impact economic growth, but its long-run effects are comparatively small. MorePopulation is one of the current drivers of economic growth.We argue that the probability and long-term harms of economic stagnation, such as moral regress, are overstated.We argue that the effects of economic growth on extinction risk are small.Population size has negligible effects on humanity’s resilience to catastrophes. MoreWe think that the most compelling case for intervening on population growth comes from its effects on economic growth. Overall, while we think that increasing population growth rates is positive, the scale of its benefits appears to be orders of magnitude smaller than those of top cause areas.IntroductionToday is the Day of Eight Billion, a day projected by the UN to be roughly the one in which the world human population reaches 8 billion people. The last time the world population increased by a round billion was 11 years ago, and the UN projection suggests that the world population will reach the 9 billion mark in 2037, and the 10 billion mark in the mid-2050s. This projection never gets to 11 billion, as it peaks at 10.4 billion during the 2080s and slowly declines throughout the last decades of this century. Other population projections predict an even earlier peak in the human population, which would happen before there are 10 billion of us (Lutz et al. 2018, Vollset et al. 2020).There is a reasonable amount of uncertainty about whether the peak will happen by the end of this century. In the UN projection, with a 5% probability, the population size will be larger than 12 billion by 2100. But fertility trends indicate that a peak will eventually happen. The world fertility rate has been declining since the 1960s and is expected to decline to the replacement level of 2.1 children per woman by the mid-century. Once it is below that point, growth would only be due to population momentum. This transition to below replacement levels of fertility is already on its way, as more than half of the world population lives in a country where fertility is already below the replacement rate.The expectation of a population decline raises the question: should population growth be a priority for actors wanting to do the most good? For one, a higher population might be desirable on its own. For example, according to the total view in population ethics, creating lives with positive wellbeing is good, all else equal. Moreover, population growth may also affect other variables of moral importance, such as economic growth (Jones, 2022b) and social and political norms (MacAskill 2022,...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Assessing the case for population growth as a priority, published by Charlotte on November 15, 2022 on The Effective Altruism Forum.We are assessing the case for population growth as a priority. Here is the Google Docs version if you prefer to read it as a document. Charlotte Siegmann and Luis MotaSummaryRecently, population growth as a cause area has been receiving more attention (MacAskill, 2022, PWI, Jones, 2022a, Bricker and Ibbitson, 2019). We outline and discuss arguments making that case, and argue that population growth falls short of being a top cause area under the longtermism paradigm. But our main contribution may be to just outline and explain all considerations and arguments.We consider three value propositions of population growth, and argue that:Long-run population size is likely determined by factors apart from biological population growth rates. MoreBiological reproduction will be replaced if the future is to be large, which should nullify the effects of population growth in timescales larger than a thousand years.Assuming biological reproduction continues, long-run population size is only influenced by current population growth in particular scenarios.Population size may impact economic growth, but its long-run effects are comparatively small. MorePopulation is one of the current drivers of economic growth.We argue that the probability and long-term harms of economic stagnation, such as moral regress, are overstated.We argue that the effects of economic growth on extinction risk are small.Population size has negligible effects on humanity’s resilience to catastrophes. MoreWe think that the most compelling case for intervening on population growth comes from its effects on economic growth. Overall, while we think that increasing population growth rates is positive, the scale of its benefits appears to be orders of magnitude smaller than those of top cause areas.IntroductionToday is the Day of Eight Billion, a day projected by the UN to be roughly the one in which the world human population reaches 8 billion people. The last time the world population increased by a round billion was 11 years ago, and the UN projection suggests that the world population will reach the 9 billion mark in 2037, and the 10 billion mark in the mid-2050s. This projection never gets to 11 billion, as it peaks at 10.4 billion during the 2080s and slowly declines throughout the last decades of this century. Other population projections predict an even earlier peak in the human population, which would happen before there are 10 billion of us (Lutz et al. 2018, Vollset et al. 2020).There is a reasonable amount of uncertainty about whether the peak will happen by the end of this century. In the UN projection, with a 5% probability, the population size will be larger than 12 billion by 2100. But fertility trends indicate that a peak will eventually happen. The world fertility rate has been declining since the 1960s and is expected to decline to the replacement level of 2.1 children per woman by the mid-century. Once it is below that point, growth would only be due to population momentum. This transition to below replacement levels of fertility is already on its way, as more than half of the world population lives in a country where fertility is already below the replacement rate.The expectation of a population decline raises the question: should population growth be a priority for actors wanting to do the most good? For one, a higher population might be desirable on its own. For example, according to the total view in population ethics, creating lives with positive wellbeing is good, all else equal. Moreover, population growth may also affect other variables of moral importance, such as economic growth (Jones, 2022b) and social and political norms (MacAskill 2022,...]]>
Charlotte https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 43:47 None full 3777
YAsJCsvi4tBpCwuFB_EA EA - Why didn't the FTX Foundation secure its bag? by Greg Colbourn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why didn't the FTX Foundation secure its bag?, published by Greg Colbourn on November 15, 2022 on The Effective Altruism Forum.“Asked whether he had set up any kind of endowment for his giving, Mr. Bankman-Fried said in the Times interview last month: “It’s more of a pay-as-we-go thing, and the reason for that, frankly, is I’m not liquid enough for it to make sense to do an endowment right now.”” (Source (a))Quite apart from the fact that in hindsight, the above quote should’ve been ringing alarm bells, I think a big lesson here is that foundations should "secure their bags" – aka secure assets/their endowment – early. The fact that the FTX Foundation didn't do this is a massive loss - for the world, EA, and FTX creditors. It's not even just about fraud or bankruptcy risk. Basic risk management would suggest that it should be done to hedge against the possibility of the major donors' wealth diminishing for any reason.This is a massive risk management failure from the high level EA community point of view of doing good with the money. Equity, or crypto, should've been donated to the foundation, and sold for cash by it. And the foundation should've been controlled by people who weren't FTX/Alameda! As a community, we put way too much faith and trust in those who owned/controlled FTX/Alameda, and insufficient effort into pressing for commitments to be secured. Trust, but verify.As I understand it, OpenPhil has a few $B that is theirs, under independent legal control (i.e. separate from Moskovitz and Tuna, who, whilst being board members, do not have unilateral control of the funds; they make up ⅖ of the board).The point I'm making here is bigger than just the loss of current promised grants. It's about making sure that money is secured when pledges are made, to minimise the risks of losing it. Otherwise the pledges don't count for much (as we can see here). If Moskovitz and Tuna went down now, OpenPhil would still have $Bs at its disposal. Sure, funds might be frozen for some amount of time (if the donors went bankrupt), but it is very unlikely that they would be lost.For a much smaller example, I can attest that with CEEALAR (the org I founded), we set up a charity, for which I only control ⅓ (i.e. the other two Trustees could out vote me), and I have donated the building to the charity. It can go on without me.Regarding any moral/PR risks of the FTX Foundation spending money now were it to have it; suppose the FTX Foundation was set-up like OpenPhil so it had at least $1B in assets that weren't under the control of SBF and his cronies. And suppose that money was made in a non-fraudulent way (which seems likely if it were to have been given last year, significantly before all the financial trouble started at FTX/Alameda). We would be in a much better position now even from the point of view of wanting (or being compelled) to use the money to pay back FTX creditors (i.e. there would be significantly more money for the creditors).In practice, maybe securing significant assets wasn’t really possible with the FTX Foundation, given the likely illiquid nature of any sizable assets that FTX/Alameda could’ve donated (e.g. FTT or FTX stock), which have now collapsed in value. But it still would’ve been a step in the right direction. Did anyone at the FTX Foundation or Future Fund, or others in advisory roles, press for this? A side benefit might’ve also been uncovering the fraud earlier and perhaps mitigating it somewhat.Although note that I can’t seem to find any public reference to the original source of the quote from “last month”.To use a phrase that is popular in crypto.See penultimate paragraph.I can imagine SBF saying something like the max-EV thing to do is keeping all the funds in the for-profit companies to maximise their growth, and the FTX Foundation / F...]]>
Greg Colbourn https://forum.effectivealtruism.org/posts/YAsJCsvi4tBpCwuFB/why-didn-t-the-ftx-foundation-secure-its-bag Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why didn't the FTX Foundation secure its bag?, published by Greg Colbourn on November 15, 2022 on The Effective Altruism Forum.“Asked whether he had set up any kind of endowment for his giving, Mr. Bankman-Fried said in the Times interview last month: “It’s more of a pay-as-we-go thing, and the reason for that, frankly, is I’m not liquid enough for it to make sense to do an endowment right now.”” (Source (a))Quite apart from the fact that in hindsight, the above quote should’ve been ringing alarm bells, I think a big lesson here is that foundations should "secure their bags" – aka secure assets/their endowment – early. The fact that the FTX Foundation didn't do this is a massive loss - for the world, EA, and FTX creditors. It's not even just about fraud or bankruptcy risk. Basic risk management would suggest that it should be done to hedge against the possibility of the major donors' wealth diminishing for any reason.This is a massive risk management failure from the high level EA community point of view of doing good with the money. Equity, or crypto, should've been donated to the foundation, and sold for cash by it. And the foundation should've been controlled by people who weren't FTX/Alameda! As a community, we put way too much faith and trust in those who owned/controlled FTX/Alameda, and insufficient effort into pressing for commitments to be secured. Trust, but verify.As I understand it, OpenPhil has a few $B that is theirs, under independent legal control (i.e. separate from Moskovitz and Tuna, who, whilst being board members, do not have unilateral control of the funds; they make up ⅖ of the board).The point I'm making here is bigger than just the loss of current promised grants. It's about making sure that money is secured when pledges are made, to minimise the risks of losing it. Otherwise the pledges don't count for much (as we can see here). If Moskovitz and Tuna went down now, OpenPhil would still have $Bs at its disposal. Sure, funds might be frozen for some amount of time (if the donors went bankrupt), but it is very unlikely that they would be lost.For a much smaller example, I can attest that with CEEALAR (the org I founded), we set up a charity, for which I only control ⅓ (i.e. the other two Trustees could out vote me), and I have donated the building to the charity. It can go on without me.Regarding any moral/PR risks of the FTX Foundation spending money now were it to have it; suppose the FTX Foundation was set-up like OpenPhil so it had at least $1B in assets that weren't under the control of SBF and his cronies. And suppose that money was made in a non-fraudulent way (which seems likely if it were to have been given last year, significantly before all the financial trouble started at FTX/Alameda). We would be in a much better position now even from the point of view of wanting (or being compelled) to use the money to pay back FTX creditors (i.e. there would be significantly more money for the creditors).In practice, maybe securing significant assets wasn’t really possible with the FTX Foundation, given the likely illiquid nature of any sizable assets that FTX/Alameda could’ve donated (e.g. FTT or FTX stock), which have now collapsed in value. But it still would’ve been a step in the right direction. Did anyone at the FTX Foundation or Future Fund, or others in advisory roles, press for this? A side benefit might’ve also been uncovering the fraud earlier and perhaps mitigating it somewhat.Although note that I can’t seem to find any public reference to the original source of the quote from “last month”.To use a phrase that is popular in crypto.See penultimate paragraph.I can imagine SBF saying something like the max-EV thing to do is keeping all the funds in the for-profit companies to maximise their growth, and the FTX Foundation / F...]]>
Wed, 16 Nov 2022 01:59:51 +0000 EA - Why didn't the FTX Foundation secure its bag? by Greg Colbourn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why didn't the FTX Foundation secure its bag?, published by Greg Colbourn on November 15, 2022 on The Effective Altruism Forum.“Asked whether he had set up any kind of endowment for his giving, Mr. Bankman-Fried said in the Times interview last month: “It’s more of a pay-as-we-go thing, and the reason for that, frankly, is I’m not liquid enough for it to make sense to do an endowment right now.”” (Source (a))Quite apart from the fact that in hindsight, the above quote should’ve been ringing alarm bells, I think a big lesson here is that foundations should "secure their bags" – aka secure assets/their endowment – early. The fact that the FTX Foundation didn't do this is a massive loss - for the world, EA, and FTX creditors. It's not even just about fraud or bankruptcy risk. Basic risk management would suggest that it should be done to hedge against the possibility of the major donors' wealth diminishing for any reason.This is a massive risk management failure from the high level EA community point of view of doing good with the money. Equity, or crypto, should've been donated to the foundation, and sold for cash by it. And the foundation should've been controlled by people who weren't FTX/Alameda! As a community, we put way too much faith and trust in those who owned/controlled FTX/Alameda, and insufficient effort into pressing for commitments to be secured. Trust, but verify.As I understand it, OpenPhil has a few $B that is theirs, under independent legal control (i.e. separate from Moskovitz and Tuna, who, whilst being board members, do not have unilateral control of the funds; they make up ⅖ of the board).The point I'm making here is bigger than just the loss of current promised grants. It's about making sure that money is secured when pledges are made, to minimise the risks of losing it. Otherwise the pledges don't count for much (as we can see here). If Moskovitz and Tuna went down now, OpenPhil would still have $Bs at its disposal. Sure, funds might be frozen for some amount of time (if the donors went bankrupt), but it is very unlikely that they would be lost.For a much smaller example, I can attest that with CEEALAR (the org I founded), we set up a charity, for which I only control ⅓ (i.e. the other two Trustees could out vote me), and I have donated the building to the charity. It can go on without me.Regarding any moral/PR risks of the FTX Foundation spending money now were it to have it; suppose the FTX Foundation was set-up like OpenPhil so it had at least $1B in assets that weren't under the control of SBF and his cronies. And suppose that money was made in a non-fraudulent way (which seems likely if it were to have been given last year, significantly before all the financial trouble started at FTX/Alameda). We would be in a much better position now even from the point of view of wanting (or being compelled) to use the money to pay back FTX creditors (i.e. there would be significantly more money for the creditors).In practice, maybe securing significant assets wasn’t really possible with the FTX Foundation, given the likely illiquid nature of any sizable assets that FTX/Alameda could’ve donated (e.g. FTT or FTX stock), which have now collapsed in value. But it still would’ve been a step in the right direction. Did anyone at the FTX Foundation or Future Fund, or others in advisory roles, press for this? A side benefit might’ve also been uncovering the fraud earlier and perhaps mitigating it somewhat.Although note that I can’t seem to find any public reference to the original source of the quote from “last month”.To use a phrase that is popular in crypto.See penultimate paragraph.I can imagine SBF saying something like the max-EV thing to do is keeping all the funds in the for-profit companies to maximise their growth, and the FTX Foundation / F...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why didn't the FTX Foundation secure its bag?, published by Greg Colbourn on November 15, 2022 on The Effective Altruism Forum.“Asked whether he had set up any kind of endowment for his giving, Mr. Bankman-Fried said in the Times interview last month: “It’s more of a pay-as-we-go thing, and the reason for that, frankly, is I’m not liquid enough for it to make sense to do an endowment right now.”” (Source (a))Quite apart from the fact that in hindsight, the above quote should’ve been ringing alarm bells, I think a big lesson here is that foundations should "secure their bags" – aka secure assets/their endowment – early. The fact that the FTX Foundation didn't do this is a massive loss - for the world, EA, and FTX creditors. It's not even just about fraud or bankruptcy risk. Basic risk management would suggest that it should be done to hedge against the possibility of the major donors' wealth diminishing for any reason.This is a massive risk management failure from the high level EA community point of view of doing good with the money. Equity, or crypto, should've been donated to the foundation, and sold for cash by it. And the foundation should've been controlled by people who weren't FTX/Alameda! As a community, we put way too much faith and trust in those who owned/controlled FTX/Alameda, and insufficient effort into pressing for commitments to be secured. Trust, but verify.As I understand it, OpenPhil has a few $B that is theirs, under independent legal control (i.e. separate from Moskovitz and Tuna, who, whilst being board members, do not have unilateral control of the funds; they make up ⅖ of the board).The point I'm making here is bigger than just the loss of current promised grants. It's about making sure that money is secured when pledges are made, to minimise the risks of losing it. Otherwise the pledges don't count for much (as we can see here). If Moskovitz and Tuna went down now, OpenPhil would still have $Bs at its disposal. Sure, funds might be frozen for some amount of time (if the donors went bankrupt), but it is very unlikely that they would be lost.For a much smaller example, I can attest that with CEEALAR (the org I founded), we set up a charity, for which I only control ⅓ (i.e. the other two Trustees could out vote me), and I have donated the building to the charity. It can go on without me.Regarding any moral/PR risks of the FTX Foundation spending money now were it to have it; suppose the FTX Foundation was set-up like OpenPhil so it had at least $1B in assets that weren't under the control of SBF and his cronies. And suppose that money was made in a non-fraudulent way (which seems likely if it were to have been given last year, significantly before all the financial trouble started at FTX/Alameda). We would be in a much better position now even from the point of view of wanting (or being compelled) to use the money to pay back FTX creditors (i.e. there would be significantly more money for the creditors).In practice, maybe securing significant assets wasn’t really possible with the FTX Foundation, given the likely illiquid nature of any sizable assets that FTX/Alameda could’ve donated (e.g. FTT or FTX stock), which have now collapsed in value. But it still would’ve been a step in the right direction. Did anyone at the FTX Foundation or Future Fund, or others in advisory roles, press for this? A side benefit might’ve also been uncovering the fraud earlier and perhaps mitigating it somewhat.Although note that I can’t seem to find any public reference to the original source of the quote from “last month”.To use a phrase that is popular in crypto.See penultimate paragraph.I can imagine SBF saying something like the max-EV thing to do is keeping all the funds in the for-profit companies to maximise their growth, and the FTX Foundation / F...]]>
Greg Colbourn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:16 None full 3778
ozPL3mLGShqvjhiaG_EA EA - Some research ideas in forecasting by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some research ideas in forecasting, published by Jaime Sevilla on November 15, 2022 on The Effective Altruism Forum.In the past, I have researched how we can effectively pool the predictions of many experts. For the most part, I am now focusing on directing Epoch and AI forecasting.However, I have accumulated a log of research projects related to forecasting. I have the vague intention of working on them at some point, but this will likely be months or years, and meanwhile I would be elated if someone else takes my ideas and develops them.And with the Million Predictions Hackathon by Metaculus looming, now seems a particularly good moment to write down some of these project ideas.Compare different aggregation methodsDifficulty: easyThe ultimate arbiter of what aggregation works is what performs best in practice.Redoing a comparison of forecast aggregation methods on metaculus / INFER / etc questions would be helpful data for that purpose.For example, here is a script I wrote to compare some aggregation methods, and the results I obtained:MethodWeightedBrier-logQuestionsNeyman aggregate (p=0.36)Yes0.1060.340899Extremized mean of logodds (d=1.55)Yes0.1110.350899Neyman aggregate (p=0.5)Yes0.1110.351899Extremized mean of probabilities (d=1.60)Yes0.1120.355899Metaculus predictionYes0.1110.361774Mean of logoddsYes0.1160.370899Neyman aggregate (p=0.36)No0.1200.377899MedianYes0.1210.381899Extremized mean of logodds (d=1.50)No0.1260.391899Mean of probabilitiesYes0.1220.392899Neyman aggregate (o=1.00)No0.1260.393899Extremized mean of probabilities (d=1.60)No0.1270.399899Mean of logoddsNo0.1300.410899MedianNo0.1340.418899Mean of probabilitiesNo0.1380.439899Baseline (p = 0.36)N/A0.2300.652899It would be straightforward to extend this analysis with new questions that resolved since then, other dataset or new techniques.Literature review of weight aggregationDifficulty: easyWhen aggregating forecast, we usually resort to formulas like ∑iailogo1, where oi are the individual predictions (expressed in odds) and ai the weights assigned to each prediction.Right now I have a lot of uncertainty about what are the best theoretical and empirical approaches to assigning weights to predictions. These could be based on factors like the date of the prediction, the track record of the forecaster or other factors.The first step would be to a literature review of schemes to weigh the predictions of experts when aggregating, and compare them using Metaculus data.Comparing methods for predicting base ratesDifficulty: mediumUsing historical data is always a must when forecasting.While one can rely on intuition to extract lessons from the past, it is often convenient to have some rules of thumb that inform how to translate historical frquencies to baserate probabilities.The classical method in this situation is Laplace's rule of succession. However, we showed that this method gives inconsistent results when trying to apply it to observations over a time period, and we proposed a fix here.Number of observed successes S during time TProbability of no successes during t timeS=0(1+tT)−1S>0(1+tT)−S if the sampling time period is variable(1+tT)−(S+1) if the sampling time period is fixedWhile theoretically appealing, we did not show that employing this fix actually improves performance, so there is a good research opportunity for someone to collect data and investigate this.Decay of predictionsDifficulty: mediumImagine I predict that no earthquakes will happen in Chile before 2024 with 60% probability today. Then in April 2023, if no earthquakes have happened, my implied probability should be lower than 60%.Theoretically, we should be derive the implied probabability under some mild assumptions that the probability was uniform over time, maybe following a framework like the time-...]]>
Jaime Sevilla https://forum.effectivealtruism.org/posts/ozPL3mLGShqvjhiaG/some-research-ideas-in-forecasting Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some research ideas in forecasting, published by Jaime Sevilla on November 15, 2022 on The Effective Altruism Forum.In the past, I have researched how we can effectively pool the predictions of many experts. For the most part, I am now focusing on directing Epoch and AI forecasting.However, I have accumulated a log of research projects related to forecasting. I have the vague intention of working on them at some point, but this will likely be months or years, and meanwhile I would be elated if someone else takes my ideas and develops them.And with the Million Predictions Hackathon by Metaculus looming, now seems a particularly good moment to write down some of these project ideas.Compare different aggregation methodsDifficulty: easyThe ultimate arbiter of what aggregation works is what performs best in practice.Redoing a comparison of forecast aggregation methods on metaculus / INFER / etc questions would be helpful data for that purpose.For example, here is a script I wrote to compare some aggregation methods, and the results I obtained:MethodWeightedBrier-logQuestionsNeyman aggregate (p=0.36)Yes0.1060.340899Extremized mean of logodds (d=1.55)Yes0.1110.350899Neyman aggregate (p=0.5)Yes0.1110.351899Extremized mean of probabilities (d=1.60)Yes0.1120.355899Metaculus predictionYes0.1110.361774Mean of logoddsYes0.1160.370899Neyman aggregate (p=0.36)No0.1200.377899MedianYes0.1210.381899Extremized mean of logodds (d=1.50)No0.1260.391899Mean of probabilitiesYes0.1220.392899Neyman aggregate (o=1.00)No0.1260.393899Extremized mean of probabilities (d=1.60)No0.1270.399899Mean of logoddsNo0.1300.410899MedianNo0.1340.418899Mean of probabilitiesNo0.1380.439899Baseline (p = 0.36)N/A0.2300.652899It would be straightforward to extend this analysis with new questions that resolved since then, other dataset or new techniques.Literature review of weight aggregationDifficulty: easyWhen aggregating forecast, we usually resort to formulas like ∑iailogo1, where oi are the individual predictions (expressed in odds) and ai the weights assigned to each prediction.Right now I have a lot of uncertainty about what are the best theoretical and empirical approaches to assigning weights to predictions. These could be based on factors like the date of the prediction, the track record of the forecaster or other factors.The first step would be to a literature review of schemes to weigh the predictions of experts when aggregating, and compare them using Metaculus data.Comparing methods for predicting base ratesDifficulty: mediumUsing historical data is always a must when forecasting.While one can rely on intuition to extract lessons from the past, it is often convenient to have some rules of thumb that inform how to translate historical frquencies to baserate probabilities.The classical method in this situation is Laplace's rule of succession. However, we showed that this method gives inconsistent results when trying to apply it to observations over a time period, and we proposed a fix here.Number of observed successes S during time TProbability of no successes during t timeS=0(1+tT)−1S>0(1+tT)−S if the sampling time period is variable(1+tT)−(S+1) if the sampling time period is fixedWhile theoretically appealing, we did not show that employing this fix actually improves performance, so there is a good research opportunity for someone to collect data and investigate this.Decay of predictionsDifficulty: mediumImagine I predict that no earthquakes will happen in Chile before 2024 with 60% probability today. Then in April 2023, if no earthquakes have happened, my implied probability should be lower than 60%.Theoretically, we should be derive the implied probabability under some mild assumptions that the probability was uniform over time, maybe following a framework like the time-...]]>
Tue, 15 Nov 2022 21:36:24 +0000 EA - Some research ideas in forecasting by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some research ideas in forecasting, published by Jaime Sevilla on November 15, 2022 on The Effective Altruism Forum.In the past, I have researched how we can effectively pool the predictions of many experts. For the most part, I am now focusing on directing Epoch and AI forecasting.However, I have accumulated a log of research projects related to forecasting. I have the vague intention of working on them at some point, but this will likely be months or years, and meanwhile I would be elated if someone else takes my ideas and develops them.And with the Million Predictions Hackathon by Metaculus looming, now seems a particularly good moment to write down some of these project ideas.Compare different aggregation methodsDifficulty: easyThe ultimate arbiter of what aggregation works is what performs best in practice.Redoing a comparison of forecast aggregation methods on metaculus / INFER / etc questions would be helpful data for that purpose.For example, here is a script I wrote to compare some aggregation methods, and the results I obtained:MethodWeightedBrier-logQuestionsNeyman aggregate (p=0.36)Yes0.1060.340899Extremized mean of logodds (d=1.55)Yes0.1110.350899Neyman aggregate (p=0.5)Yes0.1110.351899Extremized mean of probabilities (d=1.60)Yes0.1120.355899Metaculus predictionYes0.1110.361774Mean of logoddsYes0.1160.370899Neyman aggregate (p=0.36)No0.1200.377899MedianYes0.1210.381899Extremized mean of logodds (d=1.50)No0.1260.391899Mean of probabilitiesYes0.1220.392899Neyman aggregate (o=1.00)No0.1260.393899Extremized mean of probabilities (d=1.60)No0.1270.399899Mean of logoddsNo0.1300.410899MedianNo0.1340.418899Mean of probabilitiesNo0.1380.439899Baseline (p = 0.36)N/A0.2300.652899It would be straightforward to extend this analysis with new questions that resolved since then, other dataset or new techniques.Literature review of weight aggregationDifficulty: easyWhen aggregating forecast, we usually resort to formulas like ∑iailogo1, where oi are the individual predictions (expressed in odds) and ai the weights assigned to each prediction.Right now I have a lot of uncertainty about what are the best theoretical and empirical approaches to assigning weights to predictions. These could be based on factors like the date of the prediction, the track record of the forecaster or other factors.The first step would be to a literature review of schemes to weigh the predictions of experts when aggregating, and compare them using Metaculus data.Comparing methods for predicting base ratesDifficulty: mediumUsing historical data is always a must when forecasting.While one can rely on intuition to extract lessons from the past, it is often convenient to have some rules of thumb that inform how to translate historical frquencies to baserate probabilities.The classical method in this situation is Laplace's rule of succession. However, we showed that this method gives inconsistent results when trying to apply it to observations over a time period, and we proposed a fix here.Number of observed successes S during time TProbability of no successes during t timeS=0(1+tT)−1S>0(1+tT)−S if the sampling time period is variable(1+tT)−(S+1) if the sampling time period is fixedWhile theoretically appealing, we did not show that employing this fix actually improves performance, so there is a good research opportunity for someone to collect data and investigate this.Decay of predictionsDifficulty: mediumImagine I predict that no earthquakes will happen in Chile before 2024 with 60% probability today. Then in April 2023, if no earthquakes have happened, my implied probability should be lower than 60%.Theoretically, we should be derive the implied probabability under some mild assumptions that the probability was uniform over time, maybe following a framework like the time-...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some research ideas in forecasting, published by Jaime Sevilla on November 15, 2022 on The Effective Altruism Forum.In the past, I have researched how we can effectively pool the predictions of many experts. For the most part, I am now focusing on directing Epoch and AI forecasting.However, I have accumulated a log of research projects related to forecasting. I have the vague intention of working on them at some point, but this will likely be months or years, and meanwhile I would be elated if someone else takes my ideas and develops them.And with the Million Predictions Hackathon by Metaculus looming, now seems a particularly good moment to write down some of these project ideas.Compare different aggregation methodsDifficulty: easyThe ultimate arbiter of what aggregation works is what performs best in practice.Redoing a comparison of forecast aggregation methods on metaculus / INFER / etc questions would be helpful data for that purpose.For example, here is a script I wrote to compare some aggregation methods, and the results I obtained:MethodWeightedBrier-logQuestionsNeyman aggregate (p=0.36)Yes0.1060.340899Extremized mean of logodds (d=1.55)Yes0.1110.350899Neyman aggregate (p=0.5)Yes0.1110.351899Extremized mean of probabilities (d=1.60)Yes0.1120.355899Metaculus predictionYes0.1110.361774Mean of logoddsYes0.1160.370899Neyman aggregate (p=0.36)No0.1200.377899MedianYes0.1210.381899Extremized mean of logodds (d=1.50)No0.1260.391899Mean of probabilitiesYes0.1220.392899Neyman aggregate (o=1.00)No0.1260.393899Extremized mean of probabilities (d=1.60)No0.1270.399899Mean of logoddsNo0.1300.410899MedianNo0.1340.418899Mean of probabilitiesNo0.1380.439899Baseline (p = 0.36)N/A0.2300.652899It would be straightforward to extend this analysis with new questions that resolved since then, other dataset or new techniques.Literature review of weight aggregationDifficulty: easyWhen aggregating forecast, we usually resort to formulas like ∑iailogo1, where oi are the individual predictions (expressed in odds) and ai the weights assigned to each prediction.Right now I have a lot of uncertainty about what are the best theoretical and empirical approaches to assigning weights to predictions. These could be based on factors like the date of the prediction, the track record of the forecaster or other factors.The first step would be to a literature review of schemes to weigh the predictions of experts when aggregating, and compare them using Metaculus data.Comparing methods for predicting base ratesDifficulty: mediumUsing historical data is always a must when forecasting.While one can rely on intuition to extract lessons from the past, it is often convenient to have some rules of thumb that inform how to translate historical frquencies to baserate probabilities.The classical method in this situation is Laplace's rule of succession. However, we showed that this method gives inconsistent results when trying to apply it to observations over a time period, and we proposed a fix here.Number of observed successes S during time TProbability of no successes during t timeS=0(1+tT)−1S>0(1+tT)−S if the sampling time period is variable(1+tT)−(S+1) if the sampling time period is fixedWhile theoretically appealing, we did not show that employing this fix actually improves performance, so there is a good research opportunity for someone to collect data and investigate this.Decay of predictionsDifficulty: mediumImagine I predict that no earthquakes will happen in Chile before 2024 with 60% probability today. Then in April 2023, if no earthquakes have happened, my implied probability should be lower than 60%.Theoretically, we should be derive the implied probabability under some mild assumptions that the probability was uniform over time, maybe following a framework like the time-...]]>
Jaime Sevilla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:20 None full 3761
23ACb9MjQn7tizKY7_EA EA - Selective truth-telling: concerns about EA leadership communication. by tcelferact Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Selective truth-telling: concerns about EA leadership communication., published by tcelferact on November 15, 2022 on The Effective Altruism Forum.IntroI have had concerns about EA leadership communication as long as I've known about EA, and the FTX meltdown has persuaded me I should have taken them more seriously.(By leadership I mean well-known public-facing EAs and organizations.)This post attempts to explain why I'm concerned by listing some of the experiences that have made me uncomfortable. tl;dr: EA leadership has a history of being selective in the information they share in a way that increases their appeal, and this raises doubts for me over what was and wasn't known about FTX.I do not think I'm sharing any major transgressions here, and I suspect some readers will find all these points pretty minor.I'm sharing them anyway because I'm increasingly involved in EA (two EAGs and exploring funding for an EA non-profit) and I've lost confidence in leadership of the movement. A reflection on why I've lost confidence and what would help me regain it seems like useful feedback, and it may also resonate with others.i.e. This is intended to be a personal account of why I'm lacking confidence, not an argument for why you, the reader, should also lack confidence.2014: 80K promotional event in Oxford.I really wish I could find/remember more concrete information on this event, and if anyone recognizes what I'm talking about and has access to the original promotional material then please share it.In 2014 I was an undergraduate at Oxford and had a vague awareness of EA and 80,000 hours as orgs that cared about highly data-driven charitable interventions. At the time this was not something that interested me, I was really focussed on art!I saw a flyer for an event with a title something like 'How to be an artist and help improve the world!' I don't remember any mention of 80K or EA, and the impression it left on me was 'this is an event on how to be a less pretentious version of Bono from U2'. (I'm happy to walk all of this back if someone from 80K still has the flyer somewhere and can share it, but this is at least the impression it left on me.)So I went to the event, and it was an 80K event with Ben Todd and Will MacAskill. The keynote speaker was an art dealer (I cannot remember his name) who talked about his own career, donating large chunks of his income, and encouraging others to do the same. He also did a stump speech for 80K and announced ~£180K of donations he was making to the org.This was a great event with a great speaker! It was also not remotely the event I had signed up for. Talking to Ben after the event didn't help: his answers to my questions felt similar to the marketing for the event itself, i.e. say what you need to say to get me in the door. (Two rough questions I remember: Q: Is your approach utilitarian? A: It's utilitarian flavoured. Q: What would you say to someone who e.g. really cares about art and doesn't want to earn to give? A: Will is actually a great example of someone I think shouldn't earn to give (he intended to at the time) as we need him doing philosophical analysis of the best ways to donate instead.)This all left me highly suspicious of EA, and as a result I didn't pay much attention to them after that for years. I started engaging again in 2017, and more deeply in 2021, when I figured everyone involved had been young, they had only been minorly dishonest (if I was even remembering things correctly), and I should just give them a pass.Philosophy, but also not Philosophy?: Underemphasizing risk on the 80K websiteMy undergraduate degree was in philosophy, and when I started thinking about EA involvement more seriously I took a look at global priorities research. It was one of five top-recommended career paths on 80K's web...]]>
tcelferact https://forum.effectivealtruism.org/posts/23ACb9MjQn7tizKY7/selective-truth-telling-concerns-about-ea-leadership Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Selective truth-telling: concerns about EA leadership communication., published by tcelferact on November 15, 2022 on The Effective Altruism Forum.IntroI have had concerns about EA leadership communication as long as I've known about EA, and the FTX meltdown has persuaded me I should have taken them more seriously.(By leadership I mean well-known public-facing EAs and organizations.)This post attempts to explain why I'm concerned by listing some of the experiences that have made me uncomfortable. tl;dr: EA leadership has a history of being selective in the information they share in a way that increases their appeal, and this raises doubts for me over what was and wasn't known about FTX.I do not think I'm sharing any major transgressions here, and I suspect some readers will find all these points pretty minor.I'm sharing them anyway because I'm increasingly involved in EA (two EAGs and exploring funding for an EA non-profit) and I've lost confidence in leadership of the movement. A reflection on why I've lost confidence and what would help me regain it seems like useful feedback, and it may also resonate with others.i.e. This is intended to be a personal account of why I'm lacking confidence, not an argument for why you, the reader, should also lack confidence.2014: 80K promotional event in Oxford.I really wish I could find/remember more concrete information on this event, and if anyone recognizes what I'm talking about and has access to the original promotional material then please share it.In 2014 I was an undergraduate at Oxford and had a vague awareness of EA and 80,000 hours as orgs that cared about highly data-driven charitable interventions. At the time this was not something that interested me, I was really focussed on art!I saw a flyer for an event with a title something like 'How to be an artist and help improve the world!' I don't remember any mention of 80K or EA, and the impression it left on me was 'this is an event on how to be a less pretentious version of Bono from U2'. (I'm happy to walk all of this back if someone from 80K still has the flyer somewhere and can share it, but this is at least the impression it left on me.)So I went to the event, and it was an 80K event with Ben Todd and Will MacAskill. The keynote speaker was an art dealer (I cannot remember his name) who talked about his own career, donating large chunks of his income, and encouraging others to do the same. He also did a stump speech for 80K and announced ~£180K of donations he was making to the org.This was a great event with a great speaker! It was also not remotely the event I had signed up for. Talking to Ben after the event didn't help: his answers to my questions felt similar to the marketing for the event itself, i.e. say what you need to say to get me in the door. (Two rough questions I remember: Q: Is your approach utilitarian? A: It's utilitarian flavoured. Q: What would you say to someone who e.g. really cares about art and doesn't want to earn to give? A: Will is actually a great example of someone I think shouldn't earn to give (he intended to at the time) as we need him doing philosophical analysis of the best ways to donate instead.)This all left me highly suspicious of EA, and as a result I didn't pay much attention to them after that for years. I started engaging again in 2017, and more deeply in 2021, when I figured everyone involved had been young, they had only been minorly dishonest (if I was even remembering things correctly), and I should just give them a pass.Philosophy, but also not Philosophy?: Underemphasizing risk on the 80K websiteMy undergraduate degree was in philosophy, and when I started thinking about EA involvement more seriously I took a look at global priorities research. It was one of five top-recommended career paths on 80K's web...]]>
Tue, 15 Nov 2022 20:48:03 +0000 EA - Selective truth-telling: concerns about EA leadership communication. by tcelferact Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Selective truth-telling: concerns about EA leadership communication., published by tcelferact on November 15, 2022 on The Effective Altruism Forum.IntroI have had concerns about EA leadership communication as long as I've known about EA, and the FTX meltdown has persuaded me I should have taken them more seriously.(By leadership I mean well-known public-facing EAs and organizations.)This post attempts to explain why I'm concerned by listing some of the experiences that have made me uncomfortable. tl;dr: EA leadership has a history of being selective in the information they share in a way that increases their appeal, and this raises doubts for me over what was and wasn't known about FTX.I do not think I'm sharing any major transgressions here, and I suspect some readers will find all these points pretty minor.I'm sharing them anyway because I'm increasingly involved in EA (two EAGs and exploring funding for an EA non-profit) and I've lost confidence in leadership of the movement. A reflection on why I've lost confidence and what would help me regain it seems like useful feedback, and it may also resonate with others.i.e. This is intended to be a personal account of why I'm lacking confidence, not an argument for why you, the reader, should also lack confidence.2014: 80K promotional event in Oxford.I really wish I could find/remember more concrete information on this event, and if anyone recognizes what I'm talking about and has access to the original promotional material then please share it.In 2014 I was an undergraduate at Oxford and had a vague awareness of EA and 80,000 hours as orgs that cared about highly data-driven charitable interventions. At the time this was not something that interested me, I was really focussed on art!I saw a flyer for an event with a title something like 'How to be an artist and help improve the world!' I don't remember any mention of 80K or EA, and the impression it left on me was 'this is an event on how to be a less pretentious version of Bono from U2'. (I'm happy to walk all of this back if someone from 80K still has the flyer somewhere and can share it, but this is at least the impression it left on me.)So I went to the event, and it was an 80K event with Ben Todd and Will MacAskill. The keynote speaker was an art dealer (I cannot remember his name) who talked about his own career, donating large chunks of his income, and encouraging others to do the same. He also did a stump speech for 80K and announced ~£180K of donations he was making to the org.This was a great event with a great speaker! It was also not remotely the event I had signed up for. Talking to Ben after the event didn't help: his answers to my questions felt similar to the marketing for the event itself, i.e. say what you need to say to get me in the door. (Two rough questions I remember: Q: Is your approach utilitarian? A: It's utilitarian flavoured. Q: What would you say to someone who e.g. really cares about art and doesn't want to earn to give? A: Will is actually a great example of someone I think shouldn't earn to give (he intended to at the time) as we need him doing philosophical analysis of the best ways to donate instead.)This all left me highly suspicious of EA, and as a result I didn't pay much attention to them after that for years. I started engaging again in 2017, and more deeply in 2021, when I figured everyone involved had been young, they had only been minorly dishonest (if I was even remembering things correctly), and I should just give them a pass.Philosophy, but also not Philosophy?: Underemphasizing risk on the 80K websiteMy undergraduate degree was in philosophy, and when I started thinking about EA involvement more seriously I took a look at global priorities research. It was one of five top-recommended career paths on 80K's web...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Selective truth-telling: concerns about EA leadership communication., published by tcelferact on November 15, 2022 on The Effective Altruism Forum.IntroI have had concerns about EA leadership communication as long as I've known about EA, and the FTX meltdown has persuaded me I should have taken them more seriously.(By leadership I mean well-known public-facing EAs and organizations.)This post attempts to explain why I'm concerned by listing some of the experiences that have made me uncomfortable. tl;dr: EA leadership has a history of being selective in the information they share in a way that increases their appeal, and this raises doubts for me over what was and wasn't known about FTX.I do not think I'm sharing any major transgressions here, and I suspect some readers will find all these points pretty minor.I'm sharing them anyway because I'm increasingly involved in EA (two EAGs and exploring funding for an EA non-profit) and I've lost confidence in leadership of the movement. A reflection on why I've lost confidence and what would help me regain it seems like useful feedback, and it may also resonate with others.i.e. This is intended to be a personal account of why I'm lacking confidence, not an argument for why you, the reader, should also lack confidence.2014: 80K promotional event in Oxford.I really wish I could find/remember more concrete information on this event, and if anyone recognizes what I'm talking about and has access to the original promotional material then please share it.In 2014 I was an undergraduate at Oxford and had a vague awareness of EA and 80,000 hours as orgs that cared about highly data-driven charitable interventions. At the time this was not something that interested me, I was really focussed on art!I saw a flyer for an event with a title something like 'How to be an artist and help improve the world!' I don't remember any mention of 80K or EA, and the impression it left on me was 'this is an event on how to be a less pretentious version of Bono from U2'. (I'm happy to walk all of this back if someone from 80K still has the flyer somewhere and can share it, but this is at least the impression it left on me.)So I went to the event, and it was an 80K event with Ben Todd and Will MacAskill. The keynote speaker was an art dealer (I cannot remember his name) who talked about his own career, donating large chunks of his income, and encouraging others to do the same. He also did a stump speech for 80K and announced ~£180K of donations he was making to the org.This was a great event with a great speaker! It was also not remotely the event I had signed up for. Talking to Ben after the event didn't help: his answers to my questions felt similar to the marketing for the event itself, i.e. say what you need to say to get me in the door. (Two rough questions I remember: Q: Is your approach utilitarian? A: It's utilitarian flavoured. Q: What would you say to someone who e.g. really cares about art and doesn't want to earn to give? A: Will is actually a great example of someone I think shouldn't earn to give (he intended to at the time) as we need him doing philosophical analysis of the best ways to donate instead.)This all left me highly suspicious of EA, and as a result I didn't pay much attention to them after that for years. I started engaging again in 2017, and more deeply in 2021, when I figured everyone involved had been young, they had only been minorly dishonest (if I was even remembering things correctly), and I should just give them a pass.Philosophy, but also not Philosophy?: Underemphasizing risk on the 80K websiteMy undergraduate degree was in philosophy, and when I started thinking about EA involvement more seriously I took a look at global priorities research. It was one of five top-recommended career paths on 80K's web...]]>
tcelferact https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:14 None full 3762
Et7oPMu6czhEd8ExW_EA EA - Why you’re not hearing as much from EA orgs as you’d like by Shakeel Hashim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why you’re not hearing as much from EA orgs as you’d like, published by Shakeel Hashim on November 15, 2022 on The Effective Altruism Forum.Hi everyone — I wanted to update you about the sorts of communication you’ll expect to hear from EA leadership and EA organisations, and why this will probably be an intensely frustrating situation for all involved. For those that don’t know, I’m head of communications at CEA, and am working on Effective Ventures’ response to the current situation.In particular, I expect that in the short term you’ll get a lot less communication about things than you’d want. This is for a few reasons:Legal risk. It’s likely that there will be extensive legal proceedings around FTX that will drag on for a very long time. This means that anything that is said by anyone who is even tangentially involved is at risk of being scrutinised and multiply interpreted by dozens of people, including people whose role (rightly) is to advocate for their clients or those they represent.Lack of information. Everything has happened very quickly, and everyone is still trying to gather facts and figure out what’s going on. We don’t even fully know what we don’t know. So we’re figuring out things as we go, and don’t want to share information that might later turn out to be inaccurate.This is compounded by the fact that everyone is incredibly busy and dealing with a ton of different things (legal, financial, operational, management) all at once.This sucks. I really want to be saying everything on my mind right now, and I would love for other people at EA orgs to do the same. I also want to try to make sure people don’t say things they’ll regret in the years to come. But these are hard tradeoffs, and I’m not sure we’ll always get them right.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Shakeel Hashim https://forum.effectivealtruism.org/posts/Et7oPMu6czhEd8ExW/why-you-re-not-hearing-as-much-from-ea-orgs-as-you-d-like Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why you’re not hearing as much from EA orgs as you’d like, published by Shakeel Hashim on November 15, 2022 on The Effective Altruism Forum.Hi everyone — I wanted to update you about the sorts of communication you’ll expect to hear from EA leadership and EA organisations, and why this will probably be an intensely frustrating situation for all involved. For those that don’t know, I’m head of communications at CEA, and am working on Effective Ventures’ response to the current situation.In particular, I expect that in the short term you’ll get a lot less communication about things than you’d want. This is for a few reasons:Legal risk. It’s likely that there will be extensive legal proceedings around FTX that will drag on for a very long time. This means that anything that is said by anyone who is even tangentially involved is at risk of being scrutinised and multiply interpreted by dozens of people, including people whose role (rightly) is to advocate for their clients or those they represent.Lack of information. Everything has happened very quickly, and everyone is still trying to gather facts and figure out what’s going on. We don’t even fully know what we don’t know. So we’re figuring out things as we go, and don’t want to share information that might later turn out to be inaccurate.This is compounded by the fact that everyone is incredibly busy and dealing with a ton of different things (legal, financial, operational, management) all at once.This sucks. I really want to be saying everything on my mind right now, and I would love for other people at EA orgs to do the same. I also want to try to make sure people don’t say things they’ll regret in the years to come. But these are hard tradeoffs, and I’m not sure we’ll always get them right.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 15 Nov 2022 20:06:38 +0000 EA - Why you’re not hearing as much from EA orgs as you’d like by Shakeel Hashim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why you’re not hearing as much from EA orgs as you’d like, published by Shakeel Hashim on November 15, 2022 on The Effective Altruism Forum.Hi everyone — I wanted to update you about the sorts of communication you’ll expect to hear from EA leadership and EA organisations, and why this will probably be an intensely frustrating situation for all involved. For those that don’t know, I’m head of communications at CEA, and am working on Effective Ventures’ response to the current situation.In particular, I expect that in the short term you’ll get a lot less communication about things than you’d want. This is for a few reasons:Legal risk. It’s likely that there will be extensive legal proceedings around FTX that will drag on for a very long time. This means that anything that is said by anyone who is even tangentially involved is at risk of being scrutinised and multiply interpreted by dozens of people, including people whose role (rightly) is to advocate for their clients or those they represent.Lack of information. Everything has happened very quickly, and everyone is still trying to gather facts and figure out what’s going on. We don’t even fully know what we don’t know. So we’re figuring out things as we go, and don’t want to share information that might later turn out to be inaccurate.This is compounded by the fact that everyone is incredibly busy and dealing with a ton of different things (legal, financial, operational, management) all at once.This sucks. I really want to be saying everything on my mind right now, and I would love for other people at EA orgs to do the same. I also want to try to make sure people don’t say things they’ll regret in the years to come. But these are hard tradeoffs, and I’m not sure we’ll always get them right.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why you’re not hearing as much from EA orgs as you’d like, published by Shakeel Hashim on November 15, 2022 on The Effective Altruism Forum.Hi everyone — I wanted to update you about the sorts of communication you’ll expect to hear from EA leadership and EA organisations, and why this will probably be an intensely frustrating situation for all involved. For those that don’t know, I’m head of communications at CEA, and am working on Effective Ventures’ response to the current situation.In particular, I expect that in the short term you’ll get a lot less communication about things than you’d want. This is for a few reasons:Legal risk. It’s likely that there will be extensive legal proceedings around FTX that will drag on for a very long time. This means that anything that is said by anyone who is even tangentially involved is at risk of being scrutinised and multiply interpreted by dozens of people, including people whose role (rightly) is to advocate for their clients or those they represent.Lack of information. Everything has happened very quickly, and everyone is still trying to gather facts and figure out what’s going on. We don’t even fully know what we don’t know. So we’re figuring out things as we go, and don’t want to share information that might later turn out to be inaccurate.This is compounded by the fact that everyone is incredibly busy and dealing with a ton of different things (legal, financial, operational, management) all at once.This sucks. I really want to be saying everything on my mind right now, and I would love for other people at EA orgs to do the same. I also want to try to make sure people don’t say things they’ll regret in the years to come. But these are hard tradeoffs, and I’m not sure we’ll always get them right.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Shakeel Hashim https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:50 None full 3763
kEbdGgMt4Qfair3aN_EA EA - Friendship Forever (new EA cause area?) by rogersbacon1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Friendship Forever (new EA cause area?), published by rogersbacon1 on November 15, 2022 on The Effective Altruism Forum.I.I was surprised to learn recently that it was common for men in western cultures to hold hands throughout the 19th century, and that friendship for both men and women was generally valued much more highly than it is today, a time when people say things like “No New Friends” and this:Contrast this to:From the Civil War through the 1920′s, it was very common for male friends to visit a photographer's studio together to have a portrait done as a memento of their love and loyalty. Sometimes the men would act out scenes; sometimes they'd simply sit side-by-side; sometimes they'd sit on each other's laps or hold hands. The men's very comfortable and familiar poses and body language might make the men look like gay lovers to the modern eye — and they could very well have been — but that was not the message they were sending at the time... Because homosexuality, even if thought of as a practice rather than an identity, was not something publicly expressed, these men were not knowingly outing themselves in these shots; their poses were common, and simply reflected the intimacy and intensity of male friendships at the time — none of these photos would have caused their contemporaries to bat an eye.“Bosom Buddies: a Photo History of Male Affection”Photos of men with their wives, an often functional arrangement not rendered from romantic love, are often much less affectionate or loving. Women, too, thought of their best friends as more than what a husband could ever offer: loyal, supportive, sympathetic, clever, and funny. Same sex friendships were often considered the best relationships one had in life outside of one’s parents.“Men Holding Hands? Friendship in the 19th Century Was Quite Different”I'm not really sure why I was surprised that the friendship was regarded very different in the past—it seems obvious that romantic relationships have evolved considerably over time, for example. Maybe it’s just my personal ignorance (probably), but I have a suspicion that something deeper is going on here, a kind of blind spot that we have surrounding friendship, a under-appreciation for its importance in our lives that leads to a broader under-representation in the arts, politics, philosophy, and culture writ large.For instance, how often do we reflect on the fact that 65% of children up to the age of 7 have imaginary friends (“There is little research about the concept of imaginary friends in children's imaginations” according to Wikipedia,), or that adults, if desperate enough, will befriend inanimate objects such as volleyballs?The dynamic of friendship is almost always underestimated as a constant force in human life. A diminishing circle of friends is the first terrible diagnostic of a life in deep trouble: of overwork, of too much emphasis on a professional identity, of forgetting who will be there when our armoured personalities run into the inevitable natural disasters and vulnerabilities found in even the most average existence.David WhyteI suppose there is no a priori expectation for how much romantically-themed art there should be relative to friendship-themed art, but it’s hard not feel like there is a paucity of the latter (how many popular friendship songs can you think of besides this masterpiece?). The paucity of friendship-themed work may be even greater in “serious” intellectual domains; as one example, there is effectively no discussion of friendship amongst effective altruists (google it or search the effective altruism forum to see what I mean; what discussion does exist focuses on friendship as a tool for community building within EA, not as something that EA should focus on promoting in and of itself).Is it not strange that a phi...]]>
rogersbacon1 https://forum.effectivealtruism.org/posts/kEbdGgMt4Qfair3aN/friendship-forever-new-ea-cause-area Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Friendship Forever (new EA cause area?), published by rogersbacon1 on November 15, 2022 on The Effective Altruism Forum.I.I was surprised to learn recently that it was common for men in western cultures to hold hands throughout the 19th century, and that friendship for both men and women was generally valued much more highly than it is today, a time when people say things like “No New Friends” and this:Contrast this to:From the Civil War through the 1920′s, it was very common for male friends to visit a photographer's studio together to have a portrait done as a memento of their love and loyalty. Sometimes the men would act out scenes; sometimes they'd simply sit side-by-side; sometimes they'd sit on each other's laps or hold hands. The men's very comfortable and familiar poses and body language might make the men look like gay lovers to the modern eye — and they could very well have been — but that was not the message they were sending at the time... Because homosexuality, even if thought of as a practice rather than an identity, was not something publicly expressed, these men were not knowingly outing themselves in these shots; their poses were common, and simply reflected the intimacy and intensity of male friendships at the time — none of these photos would have caused their contemporaries to bat an eye.“Bosom Buddies: a Photo History of Male Affection”Photos of men with their wives, an often functional arrangement not rendered from romantic love, are often much less affectionate or loving. Women, too, thought of their best friends as more than what a husband could ever offer: loyal, supportive, sympathetic, clever, and funny. Same sex friendships were often considered the best relationships one had in life outside of one’s parents.“Men Holding Hands? Friendship in the 19th Century Was Quite Different”I'm not really sure why I was surprised that the friendship was regarded very different in the past—it seems obvious that romantic relationships have evolved considerably over time, for example. Maybe it’s just my personal ignorance (probably), but I have a suspicion that something deeper is going on here, a kind of blind spot that we have surrounding friendship, a under-appreciation for its importance in our lives that leads to a broader under-representation in the arts, politics, philosophy, and culture writ large.For instance, how often do we reflect on the fact that 65% of children up to the age of 7 have imaginary friends (“There is little research about the concept of imaginary friends in children's imaginations” according to Wikipedia,), or that adults, if desperate enough, will befriend inanimate objects such as volleyballs?The dynamic of friendship is almost always underestimated as a constant force in human life. A diminishing circle of friends is the first terrible diagnostic of a life in deep trouble: of overwork, of too much emphasis on a professional identity, of forgetting who will be there when our armoured personalities run into the inevitable natural disasters and vulnerabilities found in even the most average existence.David WhyteI suppose there is no a priori expectation for how much romantically-themed art there should be relative to friendship-themed art, but it’s hard not feel like there is a paucity of the latter (how many popular friendship songs can you think of besides this masterpiece?). The paucity of friendship-themed work may be even greater in “serious” intellectual domains; as one example, there is effectively no discussion of friendship amongst effective altruists (google it or search the effective altruism forum to see what I mean; what discussion does exist focuses on friendship as a tool for community building within EA, not as something that EA should focus on promoting in and of itself).Is it not strange that a phi...]]>
Tue, 15 Nov 2022 19:46:46 +0000 EA - Friendship Forever (new EA cause area?) by rogersbacon1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Friendship Forever (new EA cause area?), published by rogersbacon1 on November 15, 2022 on The Effective Altruism Forum.I.I was surprised to learn recently that it was common for men in western cultures to hold hands throughout the 19th century, and that friendship for both men and women was generally valued much more highly than it is today, a time when people say things like “No New Friends” and this:Contrast this to:From the Civil War through the 1920′s, it was very common for male friends to visit a photographer's studio together to have a portrait done as a memento of their love and loyalty. Sometimes the men would act out scenes; sometimes they'd simply sit side-by-side; sometimes they'd sit on each other's laps or hold hands. The men's very comfortable and familiar poses and body language might make the men look like gay lovers to the modern eye — and they could very well have been — but that was not the message they were sending at the time... Because homosexuality, even if thought of as a practice rather than an identity, was not something publicly expressed, these men were not knowingly outing themselves in these shots; their poses were common, and simply reflected the intimacy and intensity of male friendships at the time — none of these photos would have caused their contemporaries to bat an eye.“Bosom Buddies: a Photo History of Male Affection”Photos of men with their wives, an often functional arrangement not rendered from romantic love, are often much less affectionate or loving. Women, too, thought of their best friends as more than what a husband could ever offer: loyal, supportive, sympathetic, clever, and funny. Same sex friendships were often considered the best relationships one had in life outside of one’s parents.“Men Holding Hands? Friendship in the 19th Century Was Quite Different”I'm not really sure why I was surprised that the friendship was regarded very different in the past—it seems obvious that romantic relationships have evolved considerably over time, for example. Maybe it’s just my personal ignorance (probably), but I have a suspicion that something deeper is going on here, a kind of blind spot that we have surrounding friendship, a under-appreciation for its importance in our lives that leads to a broader under-representation in the arts, politics, philosophy, and culture writ large.For instance, how often do we reflect on the fact that 65% of children up to the age of 7 have imaginary friends (“There is little research about the concept of imaginary friends in children's imaginations” according to Wikipedia,), or that adults, if desperate enough, will befriend inanimate objects such as volleyballs?The dynamic of friendship is almost always underestimated as a constant force in human life. A diminishing circle of friends is the first terrible diagnostic of a life in deep trouble: of overwork, of too much emphasis on a professional identity, of forgetting who will be there when our armoured personalities run into the inevitable natural disasters and vulnerabilities found in even the most average existence.David WhyteI suppose there is no a priori expectation for how much romantically-themed art there should be relative to friendship-themed art, but it’s hard not feel like there is a paucity of the latter (how many popular friendship songs can you think of besides this masterpiece?). The paucity of friendship-themed work may be even greater in “serious” intellectual domains; as one example, there is effectively no discussion of friendship amongst effective altruists (google it or search the effective altruism forum to see what I mean; what discussion does exist focuses on friendship as a tool for community building within EA, not as something that EA should focus on promoting in and of itself).Is it not strange that a phi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Friendship Forever (new EA cause area?), published by rogersbacon1 on November 15, 2022 on The Effective Altruism Forum.I.I was surprised to learn recently that it was common for men in western cultures to hold hands throughout the 19th century, and that friendship for both men and women was generally valued much more highly than it is today, a time when people say things like “No New Friends” and this:Contrast this to:From the Civil War through the 1920′s, it was very common for male friends to visit a photographer's studio together to have a portrait done as a memento of their love and loyalty. Sometimes the men would act out scenes; sometimes they'd simply sit side-by-side; sometimes they'd sit on each other's laps or hold hands. The men's very comfortable and familiar poses and body language might make the men look like gay lovers to the modern eye — and they could very well have been — but that was not the message they were sending at the time... Because homosexuality, even if thought of as a practice rather than an identity, was not something publicly expressed, these men were not knowingly outing themselves in these shots; their poses were common, and simply reflected the intimacy and intensity of male friendships at the time — none of these photos would have caused their contemporaries to bat an eye.“Bosom Buddies: a Photo History of Male Affection”Photos of men with their wives, an often functional arrangement not rendered from romantic love, are often much less affectionate or loving. Women, too, thought of their best friends as more than what a husband could ever offer: loyal, supportive, sympathetic, clever, and funny. Same sex friendships were often considered the best relationships one had in life outside of one’s parents.“Men Holding Hands? Friendship in the 19th Century Was Quite Different”I'm not really sure why I was surprised that the friendship was regarded very different in the past—it seems obvious that romantic relationships have evolved considerably over time, for example. Maybe it’s just my personal ignorance (probably), but I have a suspicion that something deeper is going on here, a kind of blind spot that we have surrounding friendship, a under-appreciation for its importance in our lives that leads to a broader under-representation in the arts, politics, philosophy, and culture writ large.For instance, how often do we reflect on the fact that 65% of children up to the age of 7 have imaginary friends (“There is little research about the concept of imaginary friends in children's imaginations” according to Wikipedia,), or that adults, if desperate enough, will befriend inanimate objects such as volleyballs?The dynamic of friendship is almost always underestimated as a constant force in human life. A diminishing circle of friends is the first terrible diagnostic of a life in deep trouble: of overwork, of too much emphasis on a professional identity, of forgetting who will be there when our armoured personalities run into the inevitable natural disasters and vulnerabilities found in even the most average existence.David WhyteI suppose there is no a priori expectation for how much romantically-themed art there should be relative to friendship-themed art, but it’s hard not feel like there is a paucity of the latter (how many popular friendship songs can you think of besides this masterpiece?). The paucity of friendship-themed work may be even greater in “serious” intellectual domains; as one example, there is effectively no discussion of friendship amongst effective altruists (google it or search the effective altruism forum to see what I mean; what discussion does exist focuses on friendship as a tool for community building within EA, not as something that EA should focus on promoting in and of itself).Is it not strange that a phi...]]>
rogersbacon1 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 20:04 None full 3764
22zk3tZyYWoanQwt7_EA EA - Training for Good - Update and Plans for 2023 by Cillian Crosson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Training for Good - Update & Plans for 2023, published by Cillian Crosson on November 15, 2022 on The Effective Altruism Forum.SummaryTraining for Good will now focus exclusively on programmes that enable talented and altruistic early-career professionals to directly enter the first stage of high impact careers.Concretely, we will only run the following programmes in Sep 2022 - Aug 2023:1. EU Tech Policy Fellowship.2. Tarbell Fellowship (journalism)3. An unannounced 3rd programme which is still under developmentApplications for the 2023 EU Tech Policy Fellowship are open until December 11. Apply here.In year 1, we experimented with ~7 different programmes, 6 of which have now been discontinued. This is largely because we believe that focus is important & wanted to double-down on the most promising programme we had identified thus far (the EU Tech Policy Fellowship which successfully placed 7 fellows in relevant European think tanks focused on emerging technology policy).We plan to have an external review of Training for Good conducted between July 2023 - December 2023. We will default to sharing this publicly.IntroductionTraining for Good is an impact-focused training organisation, incubated by Charity Entrepreneurship in 2021.Quite a lot has changed since our launch in September 2021. We considered our first year to be an exploratory period in which we ran many, many different projects. We’ve now discontinued the majority of these programmes and have narrowed our focus to running fellowships that directly place early-career individuals in impactful careers.Now that TFG has a clearer focus, we're writing this post to update others in the EA community on our activities and the scope of our organisation.What we doTraining for Good runs fellowships that place talented professionals in impactful careers in policy, journalism & other areas. We do this by providing a combination of stipends, mentorship from experienced professionals, training and placements in relevant organisations.Between Sep 2022 - Aug 2023 (i.e. year 2), we plan to only run the following programmes:EU Tech Policy FellowshipTarbell FellowshipAn unannounced 3rd programme which is still under developmentWhy this might be importantMany high impact career paths are neglected by talented and altruistic people, often because they lack clear pathways for entry. This is limiting progress on some of the world’s most important problems: reducing existential risk, ending factory farming and tackling global poverty.TFG seeks to provide concrete opportunities for early-career professionals to gain entry level roles in impactful career paths that are difficult to enter. Building these talent pipelines could be important because:Direct progress on problems: Talented individuals in these career paths can directly contribute to progress on solving the world’s most important problemsCloser towards the “ideal portfolio”: We mostly take a portfolio approach to doing good. One could imagine an optimal distribution of talent within the effective altruism community, which might involve people pursuing a variety of different career paths. With our fellowships, we are attempting to move the effective altruism community closer towards this ideal allocation by enabling people to pursue paths that we believe are currently underrepresented (and expect to remain so) within this community’s portfolio. We believe that thinking in these terms is particularly useful partly due to:Diminishing returns from certain career pathsEpistemic uncertainty about which career paths are best (and the associated information value from reducing this uncertainty somewhat)Differing personal fit for individuals across different career pathsConcrete opportunities: The number of people interested in effective altruism has been g...]]>
Cillian Crosson https://forum.effectivealtruism.org/posts/22zk3tZyYWoanQwt7/training-for-good-update-and-plans-for-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Training for Good - Update & Plans for 2023, published by Cillian Crosson on November 15, 2022 on The Effective Altruism Forum.SummaryTraining for Good will now focus exclusively on programmes that enable talented and altruistic early-career professionals to directly enter the first stage of high impact careers.Concretely, we will only run the following programmes in Sep 2022 - Aug 2023:1. EU Tech Policy Fellowship.2. Tarbell Fellowship (journalism)3. An unannounced 3rd programme which is still under developmentApplications for the 2023 EU Tech Policy Fellowship are open until December 11. Apply here.In year 1, we experimented with ~7 different programmes, 6 of which have now been discontinued. This is largely because we believe that focus is important & wanted to double-down on the most promising programme we had identified thus far (the EU Tech Policy Fellowship which successfully placed 7 fellows in relevant European think tanks focused on emerging technology policy).We plan to have an external review of Training for Good conducted between July 2023 - December 2023. We will default to sharing this publicly.IntroductionTraining for Good is an impact-focused training organisation, incubated by Charity Entrepreneurship in 2021.Quite a lot has changed since our launch in September 2021. We considered our first year to be an exploratory period in which we ran many, many different projects. We’ve now discontinued the majority of these programmes and have narrowed our focus to running fellowships that directly place early-career individuals in impactful careers.Now that TFG has a clearer focus, we're writing this post to update others in the EA community on our activities and the scope of our organisation.What we doTraining for Good runs fellowships that place talented professionals in impactful careers in policy, journalism & other areas. We do this by providing a combination of stipends, mentorship from experienced professionals, training and placements in relevant organisations.Between Sep 2022 - Aug 2023 (i.e. year 2), we plan to only run the following programmes:EU Tech Policy FellowshipTarbell FellowshipAn unannounced 3rd programme which is still under developmentWhy this might be importantMany high impact career paths are neglected by talented and altruistic people, often because they lack clear pathways for entry. This is limiting progress on some of the world’s most important problems: reducing existential risk, ending factory farming and tackling global poverty.TFG seeks to provide concrete opportunities for early-career professionals to gain entry level roles in impactful career paths that are difficult to enter. Building these talent pipelines could be important because:Direct progress on problems: Talented individuals in these career paths can directly contribute to progress on solving the world’s most important problemsCloser towards the “ideal portfolio”: We mostly take a portfolio approach to doing good. One could imagine an optimal distribution of talent within the effective altruism community, which might involve people pursuing a variety of different career paths. With our fellowships, we are attempting to move the effective altruism community closer towards this ideal allocation by enabling people to pursue paths that we believe are currently underrepresented (and expect to remain so) within this community’s portfolio. We believe that thinking in these terms is particularly useful partly due to:Diminishing returns from certain career pathsEpistemic uncertainty about which career paths are best (and the associated information value from reducing this uncertainty somewhat)Differing personal fit for individuals across different career pathsConcrete opportunities: The number of people interested in effective altruism has been g...]]>
Tue, 15 Nov 2022 19:00:25 +0000 EA - Training for Good - Update and Plans for 2023 by Cillian Crosson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Training for Good - Update & Plans for 2023, published by Cillian Crosson on November 15, 2022 on The Effective Altruism Forum.SummaryTraining for Good will now focus exclusively on programmes that enable talented and altruistic early-career professionals to directly enter the first stage of high impact careers.Concretely, we will only run the following programmes in Sep 2022 - Aug 2023:1. EU Tech Policy Fellowship.2. Tarbell Fellowship (journalism)3. An unannounced 3rd programme which is still under developmentApplications for the 2023 EU Tech Policy Fellowship are open until December 11. Apply here.In year 1, we experimented with ~7 different programmes, 6 of which have now been discontinued. This is largely because we believe that focus is important & wanted to double-down on the most promising programme we had identified thus far (the EU Tech Policy Fellowship which successfully placed 7 fellows in relevant European think tanks focused on emerging technology policy).We plan to have an external review of Training for Good conducted between July 2023 - December 2023. We will default to sharing this publicly.IntroductionTraining for Good is an impact-focused training organisation, incubated by Charity Entrepreneurship in 2021.Quite a lot has changed since our launch in September 2021. We considered our first year to be an exploratory period in which we ran many, many different projects. We’ve now discontinued the majority of these programmes and have narrowed our focus to running fellowships that directly place early-career individuals in impactful careers.Now that TFG has a clearer focus, we're writing this post to update others in the EA community on our activities and the scope of our organisation.What we doTraining for Good runs fellowships that place talented professionals in impactful careers in policy, journalism & other areas. We do this by providing a combination of stipends, mentorship from experienced professionals, training and placements in relevant organisations.Between Sep 2022 - Aug 2023 (i.e. year 2), we plan to only run the following programmes:EU Tech Policy FellowshipTarbell FellowshipAn unannounced 3rd programme which is still under developmentWhy this might be importantMany high impact career paths are neglected by talented and altruistic people, often because they lack clear pathways for entry. This is limiting progress on some of the world’s most important problems: reducing existential risk, ending factory farming and tackling global poverty.TFG seeks to provide concrete opportunities for early-career professionals to gain entry level roles in impactful career paths that are difficult to enter. Building these talent pipelines could be important because:Direct progress on problems: Talented individuals in these career paths can directly contribute to progress on solving the world’s most important problemsCloser towards the “ideal portfolio”: We mostly take a portfolio approach to doing good. One could imagine an optimal distribution of talent within the effective altruism community, which might involve people pursuing a variety of different career paths. With our fellowships, we are attempting to move the effective altruism community closer towards this ideal allocation by enabling people to pursue paths that we believe are currently underrepresented (and expect to remain so) within this community’s portfolio. We believe that thinking in these terms is particularly useful partly due to:Diminishing returns from certain career pathsEpistemic uncertainty about which career paths are best (and the associated information value from reducing this uncertainty somewhat)Differing personal fit for individuals across different career pathsConcrete opportunities: The number of people interested in effective altruism has been g...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Training for Good - Update & Plans for 2023, published by Cillian Crosson on November 15, 2022 on The Effective Altruism Forum.SummaryTraining for Good will now focus exclusively on programmes that enable talented and altruistic early-career professionals to directly enter the first stage of high impact careers.Concretely, we will only run the following programmes in Sep 2022 - Aug 2023:1. EU Tech Policy Fellowship.2. Tarbell Fellowship (journalism)3. An unannounced 3rd programme which is still under developmentApplications for the 2023 EU Tech Policy Fellowship are open until December 11. Apply here.In year 1, we experimented with ~7 different programmes, 6 of which have now been discontinued. This is largely because we believe that focus is important & wanted to double-down on the most promising programme we had identified thus far (the EU Tech Policy Fellowship which successfully placed 7 fellows in relevant European think tanks focused on emerging technology policy).We plan to have an external review of Training for Good conducted between July 2023 - December 2023. We will default to sharing this publicly.IntroductionTraining for Good is an impact-focused training organisation, incubated by Charity Entrepreneurship in 2021.Quite a lot has changed since our launch in September 2021. We considered our first year to be an exploratory period in which we ran many, many different projects. We’ve now discontinued the majority of these programmes and have narrowed our focus to running fellowships that directly place early-career individuals in impactful careers.Now that TFG has a clearer focus, we're writing this post to update others in the EA community on our activities and the scope of our organisation.What we doTraining for Good runs fellowships that place talented professionals in impactful careers in policy, journalism & other areas. We do this by providing a combination of stipends, mentorship from experienced professionals, training and placements in relevant organisations.Between Sep 2022 - Aug 2023 (i.e. year 2), we plan to only run the following programmes:EU Tech Policy FellowshipTarbell FellowshipAn unannounced 3rd programme which is still under developmentWhy this might be importantMany high impact career paths are neglected by talented and altruistic people, often because they lack clear pathways for entry. This is limiting progress on some of the world’s most important problems: reducing existential risk, ending factory farming and tackling global poverty.TFG seeks to provide concrete opportunities for early-career professionals to gain entry level roles in impactful career paths that are difficult to enter. Building these talent pipelines could be important because:Direct progress on problems: Talented individuals in these career paths can directly contribute to progress on solving the world’s most important problemsCloser towards the “ideal portfolio”: We mostly take a portfolio approach to doing good. One could imagine an optimal distribution of talent within the effective altruism community, which might involve people pursuing a variety of different career paths. With our fellowships, we are attempting to move the effective altruism community closer towards this ideal allocation by enabling people to pursue paths that we believe are currently underrepresented (and expect to remain so) within this community’s portfolio. We believe that thinking in these terms is particularly useful partly due to:Diminishing returns from certain career pathsEpistemic uncertainty about which career paths are best (and the associated information value from reducing this uncertainty somewhat)Differing personal fit for individuals across different career pathsConcrete opportunities: The number of people interested in effective altruism has been g...]]>
Cillian Crosson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 17:29 None full 3766
3EvLvjfsujzjjXTJM_EA EA - Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics by Kevin Esvelt by Jeremy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics by Kevin Esvelt, published by Jeremy on November 15, 2022 on The Effective Altruism Forum.At EAGxBoston in April, I attended his talk of the same name (embedded below for those who prefer video format). I found it to be a great introduction to the current state of pandemic preparedness/GCBR. It has now been published as a paper by the Geneva Centre for Security Policy. I had some trouble finding it even on their website, so it seemed worth linkposting here. Executive summary and key takeaways are reproduced below.Executive summaryThe world is demonstrably vulnerable to the introduction of a single pandemic virus with a comparatively low case fatality rate. The deliberate and simultaneous release of many pandemic viruses across travel hubs could threaten the stability of civilisation. Current trends suggest that within a decade, tens of thousands of skilled individuals will be able to access the information required for them to single-handedly cause new pandemics. Safeguarding civilisation from the catastrophic misuse of biotechnology requires delaying the development and misuse of pandemic-class agents while building systems capable of reliably detecting threats and preventing nearly all infections.Key takeawaysBackgroundWe don't yet know of any credible viruses that could cause new pandemics, but ongoing research projects aim to publicly identify them.Identifying a sequenced virus as pandemic-capable will allow >1,000 individuals to assemble it.One person with a list of such viruses could simultaneously ignite multiple pandemics.Viruses can spread faster than vaccines or antivirals can be distributed.Pandemic agents are more lethal than nuclear devices and will be accessible to terrorists.DelayA pandemic test-ban treaty will delay proliferation without slowing beneficial advances.Liability and insurance for catastrophic outcomes will compensate for negative externalities.Secure and universal DNA synthesis screening can reduce unauthorised access by >100-fold.DetectUntargeted sequencing can reliably detect all exponentially spreading biological threatsDefendGoal: eliminate the virus while providing food, water, power, lawenforcement, and healthcareDevelop and distribute pandemic-proof protective equipment for all essential workersComfortable, stylish, durable powered respirators must be proven to work reliablyFoster resilient supply chains, local production, and behavioural outbreak controlStrengthen systems and offer individualised early warning to block transmissionDevelop and install germicidal low-wavelength lights, which appear to be harmless to humansOverhead fixtures can reduce airborne and surface pathogens by >90 per cent in secondsRead the Full PDF.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeremy https://forum.effectivealtruism.org/posts/3EvLvjfsujzjjXTJM/delay-detect-defend-preparing-for-a-future-in-which Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics by Kevin Esvelt, published by Jeremy on November 15, 2022 on The Effective Altruism Forum.At EAGxBoston in April, I attended his talk of the same name (embedded below for those who prefer video format). I found it to be a great introduction to the current state of pandemic preparedness/GCBR. It has now been published as a paper by the Geneva Centre for Security Policy. I had some trouble finding it even on their website, so it seemed worth linkposting here. Executive summary and key takeaways are reproduced below.Executive summaryThe world is demonstrably vulnerable to the introduction of a single pandemic virus with a comparatively low case fatality rate. The deliberate and simultaneous release of many pandemic viruses across travel hubs could threaten the stability of civilisation. Current trends suggest that within a decade, tens of thousands of skilled individuals will be able to access the information required for them to single-handedly cause new pandemics. Safeguarding civilisation from the catastrophic misuse of biotechnology requires delaying the development and misuse of pandemic-class agents while building systems capable of reliably detecting threats and preventing nearly all infections.Key takeawaysBackgroundWe don't yet know of any credible viruses that could cause new pandemics, but ongoing research projects aim to publicly identify them.Identifying a sequenced virus as pandemic-capable will allow >1,000 individuals to assemble it.One person with a list of such viruses could simultaneously ignite multiple pandemics.Viruses can spread faster than vaccines or antivirals can be distributed.Pandemic agents are more lethal than nuclear devices and will be accessible to terrorists.DelayA pandemic test-ban treaty will delay proliferation without slowing beneficial advances.Liability and insurance for catastrophic outcomes will compensate for negative externalities.Secure and universal DNA synthesis screening can reduce unauthorised access by >100-fold.DetectUntargeted sequencing can reliably detect all exponentially spreading biological threatsDefendGoal: eliminate the virus while providing food, water, power, lawenforcement, and healthcareDevelop and distribute pandemic-proof protective equipment for all essential workersComfortable, stylish, durable powered respirators must be proven to work reliablyFoster resilient supply chains, local production, and behavioural outbreak controlStrengthen systems and offer individualised early warning to block transmissionDevelop and install germicidal low-wavelength lights, which appear to be harmless to humansOverhead fixtures can reduce airborne and surface pathogens by >90 per cent in secondsRead the Full PDF.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 15 Nov 2022 17:14:43 +0000 EA - Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics by Kevin Esvelt by Jeremy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics by Kevin Esvelt, published by Jeremy on November 15, 2022 on The Effective Altruism Forum.At EAGxBoston in April, I attended his talk of the same name (embedded below for those who prefer video format). I found it to be a great introduction to the current state of pandemic preparedness/GCBR. It has now been published as a paper by the Geneva Centre for Security Policy. I had some trouble finding it even on their website, so it seemed worth linkposting here. Executive summary and key takeaways are reproduced below.Executive summaryThe world is demonstrably vulnerable to the introduction of a single pandemic virus with a comparatively low case fatality rate. The deliberate and simultaneous release of many pandemic viruses across travel hubs could threaten the stability of civilisation. Current trends suggest that within a decade, tens of thousands of skilled individuals will be able to access the information required for them to single-handedly cause new pandemics. Safeguarding civilisation from the catastrophic misuse of biotechnology requires delaying the development and misuse of pandemic-class agents while building systems capable of reliably detecting threats and preventing nearly all infections.Key takeawaysBackgroundWe don't yet know of any credible viruses that could cause new pandemics, but ongoing research projects aim to publicly identify them.Identifying a sequenced virus as pandemic-capable will allow >1,000 individuals to assemble it.One person with a list of such viruses could simultaneously ignite multiple pandemics.Viruses can spread faster than vaccines or antivirals can be distributed.Pandemic agents are more lethal than nuclear devices and will be accessible to terrorists.DelayA pandemic test-ban treaty will delay proliferation without slowing beneficial advances.Liability and insurance for catastrophic outcomes will compensate for negative externalities.Secure and universal DNA synthesis screening can reduce unauthorised access by >100-fold.DetectUntargeted sequencing can reliably detect all exponentially spreading biological threatsDefendGoal: eliminate the virus while providing food, water, power, lawenforcement, and healthcareDevelop and distribute pandemic-proof protective equipment for all essential workersComfortable, stylish, durable powered respirators must be proven to work reliablyFoster resilient supply chains, local production, and behavioural outbreak controlStrengthen systems and offer individualised early warning to block transmissionDevelop and install germicidal low-wavelength lights, which appear to be harmless to humansOverhead fixtures can reduce airborne and surface pathogens by >90 per cent in secondsRead the Full PDF.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics by Kevin Esvelt, published by Jeremy on November 15, 2022 on The Effective Altruism Forum.At EAGxBoston in April, I attended his talk of the same name (embedded below for those who prefer video format). I found it to be a great introduction to the current state of pandemic preparedness/GCBR. It has now been published as a paper by the Geneva Centre for Security Policy. I had some trouble finding it even on their website, so it seemed worth linkposting here. Executive summary and key takeaways are reproduced below.Executive summaryThe world is demonstrably vulnerable to the introduction of a single pandemic virus with a comparatively low case fatality rate. The deliberate and simultaneous release of many pandemic viruses across travel hubs could threaten the stability of civilisation. Current trends suggest that within a decade, tens of thousands of skilled individuals will be able to access the information required for them to single-handedly cause new pandemics. Safeguarding civilisation from the catastrophic misuse of biotechnology requires delaying the development and misuse of pandemic-class agents while building systems capable of reliably detecting threats and preventing nearly all infections.Key takeawaysBackgroundWe don't yet know of any credible viruses that could cause new pandemics, but ongoing research projects aim to publicly identify them.Identifying a sequenced virus as pandemic-capable will allow >1,000 individuals to assemble it.One person with a list of such viruses could simultaneously ignite multiple pandemics.Viruses can spread faster than vaccines or antivirals can be distributed.Pandemic agents are more lethal than nuclear devices and will be accessible to terrorists.DelayA pandemic test-ban treaty will delay proliferation without slowing beneficial advances.Liability and insurance for catastrophic outcomes will compensate for negative externalities.Secure and universal DNA synthesis screening can reduce unauthorised access by >100-fold.DetectUntargeted sequencing can reliably detect all exponentially spreading biological threatsDefendGoal: eliminate the virus while providing food, water, power, lawenforcement, and healthcareDevelop and distribute pandemic-proof protective equipment for all essential workersComfortable, stylish, durable powered respirators must be proven to work reliablyFoster resilient supply chains, local production, and behavioural outbreak controlStrengthen systems and offer individualised early warning to block transmissionDevelop and install germicidal low-wavelength lights, which appear to be harmless to humansOverhead fixtures can reduce airborne and surface pathogens by >90 per cent in secondsRead the Full PDF.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Jeremy https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:04 None full 3765
qegC9AwJuWbCkj8xY_EA EA - If FTX is liquidated, who ends up controlling Anthropic? by ofer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If FTX is liquidated, who ends up controlling Anthropic?, published by ofer on November 15, 2022 on The Effective Altruism Forum.[EDIT: The original title of this question was: "If FTX/Alameda currently control Anthropic, who will end up controlling it?". Based on the comments so far, it does not seem likely that FTX/Alameda currently control Anthropic.]If I understand correctly, FTX/Alameda have seemingly invested $500M in Anthropic, according to a Bloomberg article:FTX/Alameda were funneling customer money into effective altruism. Bankman-Fried seems to have generously funded a lot of effective altruism charities, artificial-intelligence and pandemic research, Democratic political candidates, etc. One $500 million entry on the desperation balance sheet is “Anthropic,” a venture investment in an AI safety company. [...]That seems consistent with the following excerpts from this page on Anthropic's website:Anthropic, an AI safety and research company, has raised $580 million in a Series B.The Series B follows the company raising $124 million in a Series A round in 2021. The Series B round was led by Sam Bankman-Fried, CEO of FTX. The round also included participation from Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn, and the Center for Emerging Risk Research (CERR).Is it likely that FTX/Alameda currently have >50% voting power over Anthropic? If they do, who will end up having control over Anthropic?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ofer https://forum.effectivealtruism.org/posts/qegC9AwJuWbCkj8xY/if-ftx-is-liquidated-who-ends-up-controlling-anthropic Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If FTX is liquidated, who ends up controlling Anthropic?, published by ofer on November 15, 2022 on The Effective Altruism Forum.[EDIT: The original title of this question was: "If FTX/Alameda currently control Anthropic, who will end up controlling it?". Based on the comments so far, it does not seem likely that FTX/Alameda currently control Anthropic.]If I understand correctly, FTX/Alameda have seemingly invested $500M in Anthropic, according to a Bloomberg article:FTX/Alameda were funneling customer money into effective altruism. Bankman-Fried seems to have generously funded a lot of effective altruism charities, artificial-intelligence and pandemic research, Democratic political candidates, etc. One $500 million entry on the desperation balance sheet is “Anthropic,” a venture investment in an AI safety company. [...]That seems consistent with the following excerpts from this page on Anthropic's website:Anthropic, an AI safety and research company, has raised $580 million in a Series B.The Series B follows the company raising $124 million in a Series A round in 2021. The Series B round was led by Sam Bankman-Fried, CEO of FTX. The round also included participation from Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn, and the Center for Emerging Risk Research (CERR).Is it likely that FTX/Alameda currently have >50% voting power over Anthropic? If they do, who will end up having control over Anthropic?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 15 Nov 2022 15:56:14 +0000 EA - If FTX is liquidated, who ends up controlling Anthropic? by ofer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If FTX is liquidated, who ends up controlling Anthropic?, published by ofer on November 15, 2022 on The Effective Altruism Forum.[EDIT: The original title of this question was: "If FTX/Alameda currently control Anthropic, who will end up controlling it?". Based on the comments so far, it does not seem likely that FTX/Alameda currently control Anthropic.]If I understand correctly, FTX/Alameda have seemingly invested $500M in Anthropic, according to a Bloomberg article:FTX/Alameda were funneling customer money into effective altruism. Bankman-Fried seems to have generously funded a lot of effective altruism charities, artificial-intelligence and pandemic research, Democratic political candidates, etc. One $500 million entry on the desperation balance sheet is “Anthropic,” a venture investment in an AI safety company. [...]That seems consistent with the following excerpts from this page on Anthropic's website:Anthropic, an AI safety and research company, has raised $580 million in a Series B.The Series B follows the company raising $124 million in a Series A round in 2021. The Series B round was led by Sam Bankman-Fried, CEO of FTX. The round also included participation from Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn, and the Center for Emerging Risk Research (CERR).Is it likely that FTX/Alameda currently have >50% voting power over Anthropic? If they do, who will end up having control over Anthropic?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If FTX is liquidated, who ends up controlling Anthropic?, published by ofer on November 15, 2022 on The Effective Altruism Forum.[EDIT: The original title of this question was: "If FTX/Alameda currently control Anthropic, who will end up controlling it?". Based on the comments so far, it does not seem likely that FTX/Alameda currently control Anthropic.]If I understand correctly, FTX/Alameda have seemingly invested $500M in Anthropic, according to a Bloomberg article:FTX/Alameda were funneling customer money into effective altruism. Bankman-Fried seems to have generously funded a lot of effective altruism charities, artificial-intelligence and pandemic research, Democratic political candidates, etc. One $500 million entry on the desperation balance sheet is “Anthropic,” a venture investment in an AI safety company. [...]That seems consistent with the following excerpts from this page on Anthropic's website:Anthropic, an AI safety and research company, has raised $580 million in a Series B.The Series B follows the company raising $124 million in a Series A round in 2021. The Series B round was led by Sam Bankman-Fried, CEO of FTX. The round also included participation from Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn, and the Center for Emerging Risk Research (CERR).Is it likely that FTX/Alameda currently have >50% voting power over Anthropic? If they do, who will end up having control over Anthropic?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ofer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:49 None full 3767
oiEArRjkajAKayMCp_EA EA - What might FTX mean for effective giving and EA funding by Jack Lewars Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What might FTX mean for effective giving and EA funding, published by Jack Lewars on November 15, 2022 on The Effective Altruism Forum.FTX's demise is understandably dominating EA discussion at the moment. Like everyone in the community, I’m primarily just really sad about the news. A lot of people have been hurt; and a lot of funding that could have been used to do a staggering amount of good is no longer available.I’m also aware of the dangers of publishing reactions too quickly. Many of the facts about FTX are not clear. However, the major effect of recent events on EA funding seems fairly unambiguous: there is substantially less funding available to EA today than there was a week ago.Accordingly, I think we can lay out some early food for thought from the last year or so; and hopefully start an evolving conversation around funding in EA, and effective giving more broadly, as we learn more.(P.s. Much of what is below has been said before but it seems helpful to summarize it in one place. If it’s your work I’m drawing on, I hope you see this as validation of what you saw that others didn’t. I’ll link to you where I can!)We need to aim for greater funding diversity and properly resource efforts to achieve thisSaying “we should diversify our funding” is almost cringingly obvious in the current circumstances. But a lot of the discussion about funding over the last year seemed not to acknowledge the risks when ~85% of expected funding is supplied by two pots of money; and when those pots are effectively reliant on the value of 2-3 highly volatile assets. Even when these issues were explicitly acknowledged, they didn’t seem to influence the way funding was forecasted sufficiently - discussions rarely seemed to consider a future where one or both sources of funding rapidly declined in value. We may find out in the coming months that grantees didn’t consider this risk sufficiently either, for example if they bought fixed assets with significant running costs.Funding diversity is important not just to protect the overall amount of funding available to EA. It’s also vital to avoid a very small number of decision-makers having too much influence (even if they don’t want that level of influence in the first place). If we have more sources of funding and more decision-makers, it is likely to improve the overall quality of funding decisions and, critically, reduce the consequences for grantees if they are rejected by just one or two major funders. It might also reduce the dangers of money unduly influencing moral and philosophical views in a particular direction.In short: it's still important to find new donors, even if the amount of existing funding is very large in expectation.Of course, it is significantly easier to say we should diversify than actually to diversify. So what could we do to try to mitigate the risks of overly concentrated sources of funding?Bring more donors at every level into EAIt seems wise to support organizations that can broaden the funding base of EA. Recently, there has been a move to prioritize securing major donors (and, in some extreme cases, to suggest that anything attracting small donors is a waste of time). Seeking more super wealthy donors is a sensible move (see below), but there is a reasonable case for seeking more donors in general at the same time. More donors in EA means more reliable, less volatile funding overall. Additionally, even if the amounts they give seem trivial by comparison, small donors can still make a difference, for example by funging dollars back to larger donors, who can then give more to things that are less popular, more technical or higher risk.To this end, we should celebrate, encourage and (most importantly) fund the wide range of projects that attract effective givers - pledge organizations (One for th...]]>
Jack Lewars https://forum.effectivealtruism.org/posts/oiEArRjkajAKayMCp/what-might-ftx-mean-for-effective-giving-and-ea-funding Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What might FTX mean for effective giving and EA funding, published by Jack Lewars on November 15, 2022 on The Effective Altruism Forum.FTX's demise is understandably dominating EA discussion at the moment. Like everyone in the community, I’m primarily just really sad about the news. A lot of people have been hurt; and a lot of funding that could have been used to do a staggering amount of good is no longer available.I’m also aware of the dangers of publishing reactions too quickly. Many of the facts about FTX are not clear. However, the major effect of recent events on EA funding seems fairly unambiguous: there is substantially less funding available to EA today than there was a week ago.Accordingly, I think we can lay out some early food for thought from the last year or so; and hopefully start an evolving conversation around funding in EA, and effective giving more broadly, as we learn more.(P.s. Much of what is below has been said before but it seems helpful to summarize it in one place. If it’s your work I’m drawing on, I hope you see this as validation of what you saw that others didn’t. I’ll link to you where I can!)We need to aim for greater funding diversity and properly resource efforts to achieve thisSaying “we should diversify our funding” is almost cringingly obvious in the current circumstances. But a lot of the discussion about funding over the last year seemed not to acknowledge the risks when ~85% of expected funding is supplied by two pots of money; and when those pots are effectively reliant on the value of 2-3 highly volatile assets. Even when these issues were explicitly acknowledged, they didn’t seem to influence the way funding was forecasted sufficiently - discussions rarely seemed to consider a future where one or both sources of funding rapidly declined in value. We may find out in the coming months that grantees didn’t consider this risk sufficiently either, for example if they bought fixed assets with significant running costs.Funding diversity is important not just to protect the overall amount of funding available to EA. It’s also vital to avoid a very small number of decision-makers having too much influence (even if they don’t want that level of influence in the first place). If we have more sources of funding and more decision-makers, it is likely to improve the overall quality of funding decisions and, critically, reduce the consequences for grantees if they are rejected by just one or two major funders. It might also reduce the dangers of money unduly influencing moral and philosophical views in a particular direction.In short: it's still important to find new donors, even if the amount of existing funding is very large in expectation.Of course, it is significantly easier to say we should diversify than actually to diversify. So what could we do to try to mitigate the risks of overly concentrated sources of funding?Bring more donors at every level into EAIt seems wise to support organizations that can broaden the funding base of EA. Recently, there has been a move to prioritize securing major donors (and, in some extreme cases, to suggest that anything attracting small donors is a waste of time). Seeking more super wealthy donors is a sensible move (see below), but there is a reasonable case for seeking more donors in general at the same time. More donors in EA means more reliable, less volatile funding overall. Additionally, even if the amounts they give seem trivial by comparison, small donors can still make a difference, for example by funging dollars back to larger donors, who can then give more to things that are less popular, more technical or higher risk.To this end, we should celebrate, encourage and (most importantly) fund the wide range of projects that attract effective givers - pledge organizations (One for th...]]>
Tue, 15 Nov 2022 13:51:53 +0000 EA - What might FTX mean for effective giving and EA funding by Jack Lewars Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What might FTX mean for effective giving and EA funding, published by Jack Lewars on November 15, 2022 on The Effective Altruism Forum.FTX's demise is understandably dominating EA discussion at the moment. Like everyone in the community, I’m primarily just really sad about the news. A lot of people have been hurt; and a lot of funding that could have been used to do a staggering amount of good is no longer available.I’m also aware of the dangers of publishing reactions too quickly. Many of the facts about FTX are not clear. However, the major effect of recent events on EA funding seems fairly unambiguous: there is substantially less funding available to EA today than there was a week ago.Accordingly, I think we can lay out some early food for thought from the last year or so; and hopefully start an evolving conversation around funding in EA, and effective giving more broadly, as we learn more.(P.s. Much of what is below has been said before but it seems helpful to summarize it in one place. If it’s your work I’m drawing on, I hope you see this as validation of what you saw that others didn’t. I’ll link to you where I can!)We need to aim for greater funding diversity and properly resource efforts to achieve thisSaying “we should diversify our funding” is almost cringingly obvious in the current circumstances. But a lot of the discussion about funding over the last year seemed not to acknowledge the risks when ~85% of expected funding is supplied by two pots of money; and when those pots are effectively reliant on the value of 2-3 highly volatile assets. Even when these issues were explicitly acknowledged, they didn’t seem to influence the way funding was forecasted sufficiently - discussions rarely seemed to consider a future where one or both sources of funding rapidly declined in value. We may find out in the coming months that grantees didn’t consider this risk sufficiently either, for example if they bought fixed assets with significant running costs.Funding diversity is important not just to protect the overall amount of funding available to EA. It’s also vital to avoid a very small number of decision-makers having too much influence (even if they don’t want that level of influence in the first place). If we have more sources of funding and more decision-makers, it is likely to improve the overall quality of funding decisions and, critically, reduce the consequences for grantees if they are rejected by just one or two major funders. It might also reduce the dangers of money unduly influencing moral and philosophical views in a particular direction.In short: it's still important to find new donors, even if the amount of existing funding is very large in expectation.Of course, it is significantly easier to say we should diversify than actually to diversify. So what could we do to try to mitigate the risks of overly concentrated sources of funding?Bring more donors at every level into EAIt seems wise to support organizations that can broaden the funding base of EA. Recently, there has been a move to prioritize securing major donors (and, in some extreme cases, to suggest that anything attracting small donors is a waste of time). Seeking more super wealthy donors is a sensible move (see below), but there is a reasonable case for seeking more donors in general at the same time. More donors in EA means more reliable, less volatile funding overall. Additionally, even if the amounts they give seem trivial by comparison, small donors can still make a difference, for example by funging dollars back to larger donors, who can then give more to things that are less popular, more technical or higher risk.To this end, we should celebrate, encourage and (most importantly) fund the wide range of projects that attract effective givers - pledge organizations (One for th...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What might FTX mean for effective giving and EA funding, published by Jack Lewars on November 15, 2022 on The Effective Altruism Forum.FTX's demise is understandably dominating EA discussion at the moment. Like everyone in the community, I’m primarily just really sad about the news. A lot of people have been hurt; and a lot of funding that could have been used to do a staggering amount of good is no longer available.I’m also aware of the dangers of publishing reactions too quickly. Many of the facts about FTX are not clear. However, the major effect of recent events on EA funding seems fairly unambiguous: there is substantially less funding available to EA today than there was a week ago.Accordingly, I think we can lay out some early food for thought from the last year or so; and hopefully start an evolving conversation around funding in EA, and effective giving more broadly, as we learn more.(P.s. Much of what is below has been said before but it seems helpful to summarize it in one place. If it’s your work I’m drawing on, I hope you see this as validation of what you saw that others didn’t. I’ll link to you where I can!)We need to aim for greater funding diversity and properly resource efforts to achieve thisSaying “we should diversify our funding” is almost cringingly obvious in the current circumstances. But a lot of the discussion about funding over the last year seemed not to acknowledge the risks when ~85% of expected funding is supplied by two pots of money; and when those pots are effectively reliant on the value of 2-3 highly volatile assets. Even when these issues were explicitly acknowledged, they didn’t seem to influence the way funding was forecasted sufficiently - discussions rarely seemed to consider a future where one or both sources of funding rapidly declined in value. We may find out in the coming months that grantees didn’t consider this risk sufficiently either, for example if they bought fixed assets with significant running costs.Funding diversity is important not just to protect the overall amount of funding available to EA. It’s also vital to avoid a very small number of decision-makers having too much influence (even if they don’t want that level of influence in the first place). If we have more sources of funding and more decision-makers, it is likely to improve the overall quality of funding decisions and, critically, reduce the consequences for grantees if they are rejected by just one or two major funders. It might also reduce the dangers of money unduly influencing moral and philosophical views in a particular direction.In short: it's still important to find new donors, even if the amount of existing funding is very large in expectation.Of course, it is significantly easier to say we should diversify than actually to diversify. So what could we do to try to mitigate the risks of overly concentrated sources of funding?Bring more donors at every level into EAIt seems wise to support organizations that can broaden the funding base of EA. Recently, there has been a move to prioritize securing major donors (and, in some extreme cases, to suggest that anything attracting small donors is a waste of time). Seeking more super wealthy donors is a sensible move (see below), but there is a reasonable case for seeking more donors in general at the same time. More donors in EA means more reliable, less volatile funding overall. Additionally, even if the amounts they give seem trivial by comparison, small donors can still make a difference, for example by funging dollars back to larger donors, who can then give more to things that are less popular, more technical or higher risk.To this end, we should celebrate, encourage and (most importantly) fund the wide range of projects that attract effective givers - pledge organizations (One for th...]]>
Jack Lewars https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:43 None full 3768
AbohvyvtF6P7cXBgy_EA EA - Brainstorming ways to make EA safer and more inclusive by richard ngo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brainstorming ways to make EA safer and more inclusive, published by richard ngo on November 15, 2022 on The Effective Altruism Forum.After some recent discussion on the forum and on twitter about negative experiences that women have had in EA community spaces, I wanted to start a discussion about concrete actions that could be taken to make EA spaces safer, more comortable, and more inclusive for women. The community health team describes some of their work related to interpersonal harm here, but I expect there's a lot more that the wider community can do to prevent sexual harrassment and abusive behavior, particularly when it comes to setting up norms that proactively prevent problems rather than just dealing with them afterwards. Some prompts for discussion:What negative experiences have you had, and what do you wish the EA community had done differently in response to them?What specific behaviors have you seen which you wish were less common/wish there were stronger norms against? What would have helped you push back against them?As the movement becomes larger and more professionalized, how can we enable people to set clear boundaries and deal with conflicts of interest in workplaces and grantmaking?How can we set clearer norms related to informal power structures (e.g. people who are respected or well-connected within EA, community organizers, etc)?What codes of conduct should we have around events like EA Global? Here's the current code; are there things which should be included in there that aren't currently (e.g. explicitly talking about not asking people out in work-related 1:1s)?What are the best ways to get feedback to the right people on an ongoing basis? E.g. what sort of reporting mechanisms would make sure that concerning patterns in specific EA groups get noticed early? And which ones are currently in place?How can we enable people who are best at creating safe, welcoming environments to share that knowledge? Are there specific posts which should be written about best practices and lessons learned (e.g. additions to the community health resources here)?I'd welcome people's thoughts and experiences, whether detailed discussions or just off-the-cuff comments. I'm particularly excited about suggestions for ways to translate these ideas to concrete actions going forward.EDIT: here's a google form for people who want to comment anonymously; the answers should be visible here. And feel free to reach out to me in messages or in person if you have suggestions for how to do this better.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
richard ngo https://forum.effectivealtruism.org/posts/AbohvyvtF6P7cXBgy/brainstorming-ways-to-make-ea-safer-and-more-inclusive Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brainstorming ways to make EA safer and more inclusive, published by richard ngo on November 15, 2022 on The Effective Altruism Forum.After some recent discussion on the forum and on twitter about negative experiences that women have had in EA community spaces, I wanted to start a discussion about concrete actions that could be taken to make EA spaces safer, more comortable, and more inclusive for women. The community health team describes some of their work related to interpersonal harm here, but I expect there's a lot more that the wider community can do to prevent sexual harrassment and abusive behavior, particularly when it comes to setting up norms that proactively prevent problems rather than just dealing with them afterwards. Some prompts for discussion:What negative experiences have you had, and what do you wish the EA community had done differently in response to them?What specific behaviors have you seen which you wish were less common/wish there were stronger norms against? What would have helped you push back against them?As the movement becomes larger and more professionalized, how can we enable people to set clear boundaries and deal with conflicts of interest in workplaces and grantmaking?How can we set clearer norms related to informal power structures (e.g. people who are respected or well-connected within EA, community organizers, etc)?What codes of conduct should we have around events like EA Global? Here's the current code; are there things which should be included in there that aren't currently (e.g. explicitly talking about not asking people out in work-related 1:1s)?What are the best ways to get feedback to the right people on an ongoing basis? E.g. what sort of reporting mechanisms would make sure that concerning patterns in specific EA groups get noticed early? And which ones are currently in place?How can we enable people who are best at creating safe, welcoming environments to share that knowledge? Are there specific posts which should be written about best practices and lessons learned (e.g. additions to the community health resources here)?I'd welcome people's thoughts and experiences, whether detailed discussions or just off-the-cuff comments. I'm particularly excited about suggestions for ways to translate these ideas to concrete actions going forward.EDIT: here's a google form for people who want to comment anonymously; the answers should be visible here. And feel free to reach out to me in messages or in person if you have suggestions for how to do this better.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 15 Nov 2022 12:00:12 +0000 EA - Brainstorming ways to make EA safer and more inclusive by richard ngo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brainstorming ways to make EA safer and more inclusive, published by richard ngo on November 15, 2022 on The Effective Altruism Forum.After some recent discussion on the forum and on twitter about negative experiences that women have had in EA community spaces, I wanted to start a discussion about concrete actions that could be taken to make EA spaces safer, more comortable, and more inclusive for women. The community health team describes some of their work related to interpersonal harm here, but I expect there's a lot more that the wider community can do to prevent sexual harrassment and abusive behavior, particularly when it comes to setting up norms that proactively prevent problems rather than just dealing with them afterwards. Some prompts for discussion:What negative experiences have you had, and what do you wish the EA community had done differently in response to them?What specific behaviors have you seen which you wish were less common/wish there were stronger norms against? What would have helped you push back against them?As the movement becomes larger and more professionalized, how can we enable people to set clear boundaries and deal with conflicts of interest in workplaces and grantmaking?How can we set clearer norms related to informal power structures (e.g. people who are respected or well-connected within EA, community organizers, etc)?What codes of conduct should we have around events like EA Global? Here's the current code; are there things which should be included in there that aren't currently (e.g. explicitly talking about not asking people out in work-related 1:1s)?What are the best ways to get feedback to the right people on an ongoing basis? E.g. what sort of reporting mechanisms would make sure that concerning patterns in specific EA groups get noticed early? And which ones are currently in place?How can we enable people who are best at creating safe, welcoming environments to share that knowledge? Are there specific posts which should be written about best practices and lessons learned (e.g. additions to the community health resources here)?I'd welcome people's thoughts and experiences, whether detailed discussions or just off-the-cuff comments. I'm particularly excited about suggestions for ways to translate these ideas to concrete actions going forward.EDIT: here's a google form for people who want to comment anonymously; the answers should be visible here. And feel free to reach out to me in messages or in person if you have suggestions for how to do this better.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brainstorming ways to make EA safer and more inclusive, published by richard ngo on November 15, 2022 on The Effective Altruism Forum.After some recent discussion on the forum and on twitter about negative experiences that women have had in EA community spaces, I wanted to start a discussion about concrete actions that could be taken to make EA spaces safer, more comortable, and more inclusive for women. The community health team describes some of their work related to interpersonal harm here, but I expect there's a lot more that the wider community can do to prevent sexual harrassment and abusive behavior, particularly when it comes to setting up norms that proactively prevent problems rather than just dealing with them afterwards. Some prompts for discussion:What negative experiences have you had, and what do you wish the EA community had done differently in response to them?What specific behaviors have you seen which you wish were less common/wish there were stronger norms against? What would have helped you push back against them?As the movement becomes larger and more professionalized, how can we enable people to set clear boundaries and deal with conflicts of interest in workplaces and grantmaking?How can we set clearer norms related to informal power structures (e.g. people who are respected or well-connected within EA, community organizers, etc)?What codes of conduct should we have around events like EA Global? Here's the current code; are there things which should be included in there that aren't currently (e.g. explicitly talking about not asking people out in work-related 1:1s)?What are the best ways to get feedback to the right people on an ongoing basis? E.g. what sort of reporting mechanisms would make sure that concerning patterns in specific EA groups get noticed early? And which ones are currently in place?How can we enable people who are best at creating safe, welcoming environments to share that knowledge? Are there specific posts which should be written about best practices and lessons learned (e.g. additions to the community health resources here)?I'd welcome people's thoughts and experiences, whether detailed discussions or just off-the-cuff comments. I'm particularly excited about suggestions for ways to translate these ideas to concrete actions going forward.EDIT: here's a google form for people who want to comment anonymously; the answers should be visible here. And feel free to reach out to me in messages or in person if you have suggestions for how to do this better.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
richard ngo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:29 None full 3769
jpc5FLcTwHyFzWNFy_EA EA - Want advice on management/organization-building? by Ben Kuhn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Want advice on management/organization-building?, published by Ben Kuhn on November 15, 2022 on The Effective Altruism Forum.Someone observed to me recently that there are a lot of new EA organizations whose founders don't have much experience building teams and could benefit from advice from someone further along in the scaling process. Is this you, or someone you know? If so, I'd be interested in talking about your management/org-building challenges :) You can reach out through the forum's messaging feature or email me (ben dot s dot kuhn at the most common email address suffix).About me / credentials: I'm the CTO of Wave, a startup building financial infrastructure for unbanked people in sub-Saharan Africa. I joined as an engineer in the very early days (employee #6), became CTO in 2019, and subsequently grew the engineering team from ~2 to 70+ engineers while the company overall scaled to ~2k people. Along the way I had to address a large number of different team-scaling problems, both within engineering and across the rest of Wave. I also write a blog with some advice posts that people have found useful.Example areas I might have useful input on:Hiring: clarifying roles, writing job descriptions, developing interview loops, executing hiring processes, headcount planning...People management: coaching, feedback, handling unhappy or underperforming people, designing processes for things like performance reviewsOrganizational structure: grouping people into teams, figuring out good boundaries between teams, adding management layersAbout you: a leader at an organization that's experiencing (or about to experience) a bunch of growth, and could use advice on how to navigate the scaling problems that come with that.Structure: pretty uncertain since I've never done this before, but I'm thinking some sort of biweekly or weekly checkin (after an initial convo to determine fit—I'll be doing this on a volunteer basis with a smallish chunk of time, which means I may need to prioritize folks based on where I feel the most useful).Disclaimer: this is an experiment—I've never done this before, and giving good advice is hard, so I can't guarantee that I'll be useful :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben Kuhn https://forum.effectivealtruism.org/posts/jpc5FLcTwHyFzWNFy/want-advice-on-management-organization-building Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Want advice on management/organization-building?, published by Ben Kuhn on November 15, 2022 on The Effective Altruism Forum.Someone observed to me recently that there are a lot of new EA organizations whose founders don't have much experience building teams and could benefit from advice from someone further along in the scaling process. Is this you, or someone you know? If so, I'd be interested in talking about your management/org-building challenges :) You can reach out through the forum's messaging feature or email me (ben dot s dot kuhn at the most common email address suffix).About me / credentials: I'm the CTO of Wave, a startup building financial infrastructure for unbanked people in sub-Saharan Africa. I joined as an engineer in the very early days (employee #6), became CTO in 2019, and subsequently grew the engineering team from ~2 to 70+ engineers while the company overall scaled to ~2k people. Along the way I had to address a large number of different team-scaling problems, both within engineering and across the rest of Wave. I also write a blog with some advice posts that people have found useful.Example areas I might have useful input on:Hiring: clarifying roles, writing job descriptions, developing interview loops, executing hiring processes, headcount planning...People management: coaching, feedback, handling unhappy or underperforming people, designing processes for things like performance reviewsOrganizational structure: grouping people into teams, figuring out good boundaries between teams, adding management layersAbout you: a leader at an organization that's experiencing (or about to experience) a bunch of growth, and could use advice on how to navigate the scaling problems that come with that.Structure: pretty uncertain since I've never done this before, but I'm thinking some sort of biweekly or weekly checkin (after an initial convo to determine fit—I'll be doing this on a volunteer basis with a smallish chunk of time, which means I may need to prioritize folks based on where I feel the most useful).Disclaimer: this is an experiment—I've never done this before, and giving good advice is hard, so I can't guarantee that I'll be useful :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 15 Nov 2022 05:31:33 +0000 EA - Want advice on management/organization-building? by Ben Kuhn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Want advice on management/organization-building?, published by Ben Kuhn on November 15, 2022 on The Effective Altruism Forum.Someone observed to me recently that there are a lot of new EA organizations whose founders don't have much experience building teams and could benefit from advice from someone further along in the scaling process. Is this you, or someone you know? If so, I'd be interested in talking about your management/org-building challenges :) You can reach out through the forum's messaging feature or email me (ben dot s dot kuhn at the most common email address suffix).About me / credentials: I'm the CTO of Wave, a startup building financial infrastructure for unbanked people in sub-Saharan Africa. I joined as an engineer in the very early days (employee #6), became CTO in 2019, and subsequently grew the engineering team from ~2 to 70+ engineers while the company overall scaled to ~2k people. Along the way I had to address a large number of different team-scaling problems, both within engineering and across the rest of Wave. I also write a blog with some advice posts that people have found useful.Example areas I might have useful input on:Hiring: clarifying roles, writing job descriptions, developing interview loops, executing hiring processes, headcount planning...People management: coaching, feedback, handling unhappy or underperforming people, designing processes for things like performance reviewsOrganizational structure: grouping people into teams, figuring out good boundaries between teams, adding management layersAbout you: a leader at an organization that's experiencing (or about to experience) a bunch of growth, and could use advice on how to navigate the scaling problems that come with that.Structure: pretty uncertain since I've never done this before, but I'm thinking some sort of biweekly or weekly checkin (after an initial convo to determine fit—I'll be doing this on a volunteer basis with a smallish chunk of time, which means I may need to prioritize folks based on where I feel the most useful).Disclaimer: this is an experiment—I've never done this before, and giving good advice is hard, so I can't guarantee that I'll be useful :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Want advice on management/organization-building?, published by Ben Kuhn on November 15, 2022 on The Effective Altruism Forum.Someone observed to me recently that there are a lot of new EA organizations whose founders don't have much experience building teams and could benefit from advice from someone further along in the scaling process. Is this you, or someone you know? If so, I'd be interested in talking about your management/org-building challenges :) You can reach out through the forum's messaging feature or email me (ben dot s dot kuhn at the most common email address suffix).About me / credentials: I'm the CTO of Wave, a startup building financial infrastructure for unbanked people in sub-Saharan Africa. I joined as an engineer in the very early days (employee #6), became CTO in 2019, and subsequently grew the engineering team from ~2 to 70+ engineers while the company overall scaled to ~2k people. Along the way I had to address a large number of different team-scaling problems, both within engineering and across the rest of Wave. I also write a blog with some advice posts that people have found useful.Example areas I might have useful input on:Hiring: clarifying roles, writing job descriptions, developing interview loops, executing hiring processes, headcount planning...People management: coaching, feedback, handling unhappy or underperforming people, designing processes for things like performance reviewsOrganizational structure: grouping people into teams, figuring out good boundaries between teams, adding management layersAbout you: a leader at an organization that's experiencing (or about to experience) a bunch of growth, and could use advice on how to navigate the scaling problems that come with that.Structure: pretty uncertain since I've never done this before, but I'm thinking some sort of biweekly or weekly checkin (after an initial convo to determine fit—I'll be doing this on a volunteer basis with a smallish chunk of time, which means I may need to prioritize folks based on where I feel the most useful).Disclaimer: this is an experiment—I've never done this before, and giving good advice is hard, so I can't guarantee that I'll be useful :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben Kuhn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:19 None full 3759
ZnhLAzQxSi9svpkLQ_EA EA - The NY Times Interviewed SBF on Sunday by Lauren Maria Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The NY Times Interviewed SBF on Sunday, published by Lauren Maria on November 14, 2022 on The Effective Altruism Forum.The NY times did an interview with SBF yesterday that "stretched past midnight". Here are some quotes from the article: “Had I been a bit more concentrated on what I was doing, I would have been able to be more thorough,” he said. “That would have allowed me to catch what was going on on the risk side.”Mr. Bankman-Fried, who is based in the Bahamas, declined to comment on his current location, citing safety concerns. Lawyers for FTX and Mr. Bankman-Fried did not respond to requests for comment."Meanwhile, at a meeting with Alameda employees on Wednesday, Ms. Ellison explained what had caused the collapse, according to a person familiar with the matter. Her voice shaking, she apologized, saying she had let the group down. Over recent months, she said, Alameda had taken out loans and used the money to make venture capital investments, among other expenditures.Around the time the crypto market crashed this spring, Ms. Ellison explained, lenders moved to recall those loans, the person familiar with the meeting said. But the funds that Alameda had spent were no longer easily available, so the company used FTX customer funds to make the payments. Besides her and Mr. Bankman-Fried, she said, two other people knew about the arrangement: Mr. Singh and Mr. Wang."I "gifted" this article here, but if it doesn't work for some reason, I can post it again in the comments. Edited to add an extra quoteThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lauren Maria https://forum.effectivealtruism.org/posts/ZnhLAzQxSi9svpkLQ/the-ny-times-interviewed-sbf-on-sunday Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The NY Times Interviewed SBF on Sunday, published by Lauren Maria on November 14, 2022 on The Effective Altruism Forum.The NY times did an interview with SBF yesterday that "stretched past midnight". Here are some quotes from the article: “Had I been a bit more concentrated on what I was doing, I would have been able to be more thorough,” he said. “That would have allowed me to catch what was going on on the risk side.”Mr. Bankman-Fried, who is based in the Bahamas, declined to comment on his current location, citing safety concerns. Lawyers for FTX and Mr. Bankman-Fried did not respond to requests for comment."Meanwhile, at a meeting with Alameda employees on Wednesday, Ms. Ellison explained what had caused the collapse, according to a person familiar with the matter. Her voice shaking, she apologized, saying she had let the group down. Over recent months, she said, Alameda had taken out loans and used the money to make venture capital investments, among other expenditures.Around the time the crypto market crashed this spring, Ms. Ellison explained, lenders moved to recall those loans, the person familiar with the meeting said. But the funds that Alameda had spent were no longer easily available, so the company used FTX customer funds to make the payments. Besides her and Mr. Bankman-Fried, she said, two other people knew about the arrangement: Mr. Singh and Mr. Wang."I "gifted" this article here, but if it doesn't work for some reason, I can post it again in the comments. Edited to add an extra quoteThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 15 Nov 2022 01:42:28 +0000 EA - The NY Times Interviewed SBF on Sunday by Lauren Maria Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The NY Times Interviewed SBF on Sunday, published by Lauren Maria on November 14, 2022 on The Effective Altruism Forum.The NY times did an interview with SBF yesterday that "stretched past midnight". Here are some quotes from the article: “Had I been a bit more concentrated on what I was doing, I would have been able to be more thorough,” he said. “That would have allowed me to catch what was going on on the risk side.”Mr. Bankman-Fried, who is based in the Bahamas, declined to comment on his current location, citing safety concerns. Lawyers for FTX and Mr. Bankman-Fried did not respond to requests for comment."Meanwhile, at a meeting with Alameda employees on Wednesday, Ms. Ellison explained what had caused the collapse, according to a person familiar with the matter. Her voice shaking, she apologized, saying she had let the group down. Over recent months, she said, Alameda had taken out loans and used the money to make venture capital investments, among other expenditures.Around the time the crypto market crashed this spring, Ms. Ellison explained, lenders moved to recall those loans, the person familiar with the meeting said. But the funds that Alameda had spent were no longer easily available, so the company used FTX customer funds to make the payments. Besides her and Mr. Bankman-Fried, she said, two other people knew about the arrangement: Mr. Singh and Mr. Wang."I "gifted" this article here, but if it doesn't work for some reason, I can post it again in the comments. Edited to add an extra quoteThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The NY Times Interviewed SBF on Sunday, published by Lauren Maria on November 14, 2022 on The Effective Altruism Forum.The NY times did an interview with SBF yesterday that "stretched past midnight". Here are some quotes from the article: “Had I been a bit more concentrated on what I was doing, I would have been able to be more thorough,” he said. “That would have allowed me to catch what was going on on the risk side.”Mr. Bankman-Fried, who is based in the Bahamas, declined to comment on his current location, citing safety concerns. Lawyers for FTX and Mr. Bankman-Fried did not respond to requests for comment."Meanwhile, at a meeting with Alameda employees on Wednesday, Ms. Ellison explained what had caused the collapse, according to a person familiar with the matter. Her voice shaking, she apologized, saying she had let the group down. Over recent months, she said, Alameda had taken out loans and used the money to make venture capital investments, among other expenditures.Around the time the crypto market crashed this spring, Ms. Ellison explained, lenders moved to recall those loans, the person familiar with the meeting said. But the funds that Alameda had spent were no longer easily available, so the company used FTX customer funds to make the payments. Besides her and Mr. Bankman-Fried, she said, two other people knew about the arrangement: Mr. Singh and Mr. Wang."I "gifted" this article here, but if it doesn't work for some reason, I can post it again in the comments. Edited to add an extra quoteThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lauren Maria https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:41 None full 3760
nWJqDJvkM3hnx9Acv_EA EA - Suggestion - separate out the FTX threads somehow by Arepo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggestion - separate out the FTX threads somehow, published by Arepo on November 14, 2022 on The Effective Altruism Forum.In general the frontpage has felt overloaded to me, but that's a broader topic of which this is an acute example. For now, could we just quickly set up a way to stop other subjects from getting drowned out? Maybe just filter FTX-related posts from off the frontpage, with an announcement at the top that you've done so and linking to the FTX crisis tag?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Arepo https://forum.effectivealtruism.org/posts/nWJqDJvkM3hnx9Acv/suggestion-separate-out-the-ftx-threads-somehow Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggestion - separate out the FTX threads somehow, published by Arepo on November 14, 2022 on The Effective Altruism Forum.In general the frontpage has felt overloaded to me, but that's a broader topic of which this is an acute example. For now, could we just quickly set up a way to stop other subjects from getting drowned out? Maybe just filter FTX-related posts from off the frontpage, with an announcement at the top that you've done so and linking to the FTX crisis tag?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 14 Nov 2022 22:13:53 +0000 EA - Suggestion - separate out the FTX threads somehow by Arepo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggestion - separate out the FTX threads somehow, published by Arepo on November 14, 2022 on The Effective Altruism Forum.In general the frontpage has felt overloaded to me, but that's a broader topic of which this is an acute example. For now, could we just quickly set up a way to stop other subjects from getting drowned out? Maybe just filter FTX-related posts from off the frontpage, with an announcement at the top that you've done so and linking to the FTX crisis tag?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggestion - separate out the FTX threads somehow, published by Arepo on November 14, 2022 on The Effective Altruism Forum.In general the frontpage has felt overloaded to me, but that's a broader topic of which this is an acute example. For now, could we just quickly set up a way to stop other subjects from getting drowned out? Maybe just filter FTX-related posts from off the frontpage, with an announcement at the top that you've done so and linking to the FTX crisis tag?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Arepo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:46 None full 3753
xauPtrTdKjQo9EogZ_EA EA - Money Stuff: FTX’s Balance Sheet Was Bad by Elliot Temple Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Money Stuff: FTX’s Balance Sheet Was Bad, published by Elliot Temple on November 14, 2022 on The Effective Altruism Forum.Matt Levine explains that FTX's balance sheet (which has been leaked) is a nightmare. Sample quote:And then the basic question is, how bad is the mismatch [between liabilities and assets on the balance sheet]. Like, $16 billion of dollar liabilities and $16 billion of liquid dollar-denominated assets? Sure, great. $16 billion of dollar liabilities and $16 billion worth of Bitcoin assets? Not ideal, incredibly risky, but in some broad sense understandable. $16 billion of dollar liabilities and assets consisting entirely of some magic beans that you bought in the market for $16 billion? Very bad. $16 billion of dollar liabilities and assets consisting mostly of some magic beans that you invented yourself and acquired for zero dollars? WHAT? Never mind the valuation of the beans; where did the money go? What happened to the $16 billion? Spending $5 billion of customer money on Serum would have been horrible, but FTX didn’t do that, and couldn’t have, because there wasn’t $5 billion of Serum available to buy.FTX shot its customer money into some still-unexplained reaches of the astral plane and was like “well we do have $5 billion of this Serum token we made up, that’s something?” No it isn’t!Mirror to avoid paywall: (Loading the article in a private browsing window would probably work too.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Elliot Temple https://forum.effectivealtruism.org/posts/xauPtrTdKjQo9EogZ/money-stuff-ftx-s-balance-sheet-was-bad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Money Stuff: FTX’s Balance Sheet Was Bad, published by Elliot Temple on November 14, 2022 on The Effective Altruism Forum.Matt Levine explains that FTX's balance sheet (which has been leaked) is a nightmare. Sample quote:And then the basic question is, how bad is the mismatch [between liabilities and assets on the balance sheet]. Like, $16 billion of dollar liabilities and $16 billion of liquid dollar-denominated assets? Sure, great. $16 billion of dollar liabilities and $16 billion worth of Bitcoin assets? Not ideal, incredibly risky, but in some broad sense understandable. $16 billion of dollar liabilities and assets consisting entirely of some magic beans that you bought in the market for $16 billion? Very bad. $16 billion of dollar liabilities and assets consisting mostly of some magic beans that you invented yourself and acquired for zero dollars? WHAT? Never mind the valuation of the beans; where did the money go? What happened to the $16 billion? Spending $5 billion of customer money on Serum would have been horrible, but FTX didn’t do that, and couldn’t have, because there wasn’t $5 billion of Serum available to buy.FTX shot its customer money into some still-unexplained reaches of the astral plane and was like “well we do have $5 billion of this Serum token we made up, that’s something?” No it isn’t!Mirror to avoid paywall: (Loading the article in a private browsing window would probably work too.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 14 Nov 2022 21:21:17 +0000 EA - Money Stuff: FTX’s Balance Sheet Was Bad by Elliot Temple Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Money Stuff: FTX’s Balance Sheet Was Bad, published by Elliot Temple on November 14, 2022 on The Effective Altruism Forum.Matt Levine explains that FTX's balance sheet (which has been leaked) is a nightmare. Sample quote:And then the basic question is, how bad is the mismatch [between liabilities and assets on the balance sheet]. Like, $16 billion of dollar liabilities and $16 billion of liquid dollar-denominated assets? Sure, great. $16 billion of dollar liabilities and $16 billion worth of Bitcoin assets? Not ideal, incredibly risky, but in some broad sense understandable. $16 billion of dollar liabilities and assets consisting entirely of some magic beans that you bought in the market for $16 billion? Very bad. $16 billion of dollar liabilities and assets consisting mostly of some magic beans that you invented yourself and acquired for zero dollars? WHAT? Never mind the valuation of the beans; where did the money go? What happened to the $16 billion? Spending $5 billion of customer money on Serum would have been horrible, but FTX didn’t do that, and couldn’t have, because there wasn’t $5 billion of Serum available to buy.FTX shot its customer money into some still-unexplained reaches of the astral plane and was like “well we do have $5 billion of this Serum token we made up, that’s something?” No it isn’t!Mirror to avoid paywall: (Loading the article in a private browsing window would probably work too.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Money Stuff: FTX’s Balance Sheet Was Bad, published by Elliot Temple on November 14, 2022 on The Effective Altruism Forum.Matt Levine explains that FTX's balance sheet (which has been leaked) is a nightmare. Sample quote:And then the basic question is, how bad is the mismatch [between liabilities and assets on the balance sheet]. Like, $16 billion of dollar liabilities and $16 billion of liquid dollar-denominated assets? Sure, great. $16 billion of dollar liabilities and $16 billion worth of Bitcoin assets? Not ideal, incredibly risky, but in some broad sense understandable. $16 billion of dollar liabilities and assets consisting entirely of some magic beans that you bought in the market for $16 billion? Very bad. $16 billion of dollar liabilities and assets consisting mostly of some magic beans that you invented yourself and acquired for zero dollars? WHAT? Never mind the valuation of the beans; where did the money go? What happened to the $16 billion? Spending $5 billion of customer money on Serum would have been horrible, but FTX didn’t do that, and couldn’t have, because there wasn’t $5 billion of Serum available to buy.FTX shot its customer money into some still-unexplained reaches of the astral plane and was like “well we do have $5 billion of this Serum token we made up, that’s something?” No it isn’t!Mirror to avoid paywall: (Loading the article in a private browsing window would probably work too.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Elliot Temple https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:44 None full 3748
kKidKRiCcZ5uGJ6w5_EA EA - Stop Thinking about FTX. Think About Getting Zika Instead. by jeberts Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stop Thinking about FTX. Think About Getting Zika Instead., published by jeberts on November 14, 2022 on The Effective Altruism Forum.(Or about getting malaria, or hepatitis C (see below), or another exciting disease in an artisanal, curated list of trials in the UK, Canada, and US by 1Day Sooner.)Hi! My name is Jake. I got dysentery as part of a human challenge trial for a vaccine against Shigella, a group of bacteria that are the primary cause of dysentery globally. I quite literally shtposted through it on Twitter and earned fifteen minutes of Internet fame.I now work for 1Day Sooner, which was founded as an advocacy group in early 2020 for Covid-19 human challenge trials. (Who knew dysentery could lead to a career change?) 1Day is also involved in a range of other things, including pandemic preparedness policy and getting hepatitis C challenge trials off the ground.What I want to focus on right now is Zika. Specifically, I want to convince people assigned female at birth aged 18-40 in the DC-Baltimore/DMV area reading this post to take a few minutes to consider signing up for screening for the first-ever human challenge trial for Zika virus (ZIKV) at Johns Hopkins University in Baltimore.Even if you fall outside that category, I figure this might be something interesting to ponder over, and probably less stressful than the cryptocurrency debacle that shall not be named.1Day Sooner is not compensated in any way by the study/Hopkins for this, nor do I/we represent the study in any official sense. I happen to have become very fascinated by this one in particular because it represents how challenge trials can be used for pandemic prevention with comparatively few resources. This post is meant to inform you of the study with a bit more detail from an EA perspective, but does not supplant information provided by the study staff.(If you’re a DMV male like me bummed you can’t take part, ask me about current or upcoming malaria vaccine and monoclonal antibody trials taking place at the University of Maryland — I'll be screening next week! If you’re not in the DMV or otherwise just can’t do anything for Zika or malaria, you can still sign up for our volunteer base and newsletter, which will help you keep tabs on future studies. Something we’re very excited about is the emerging push for hepatitis C challenge studies, see the link above.)Zika 101Zika is a mainly mosquito-borne disease that has been known since 1947 — The 2015-2016 western hemisphere epidemic showed that Zika could cause grave birth defects (congenital Zika syndrome, CZS) — The disease is very mild in adults at presentThe Zika virus (ZIKV) was discovered in the Zika forest of Uganda in 1947, and a few years later, we learned it could cause what was universally assumed to be extremely mild disease in humans. The 2015-16 Zika epidemic that started in South America, and which was particularly severe in Brazil, proved otherwise. This is when it became clear that Zika was linked to horrific birth defects. Zika has since earned its place on the WHO priority pathogen list.To briefly review the basics of Zika:The Zika virus is a flavivirus, in the same family as dengue fever, yellow fever, and Chikungunya, among others. Flaviviruses are RNA viruses, which generally lack genomic proofreading ability and are thus more prone to mutation.Zika infection sometimes causes Zika fever, though asymptomatic cases are very common. Zika fever is usually very, very mild, and direct deaths are extremely rare.Zika is much more of a concern if you are pregnant — in about 7% of infections during pregnancy, the virus infects a fetus and causes serious, even fatal, birth defects. These defects, which most frequently include microcephaly, are referred to as congenital Zika syndrome (CZS).Zika is also thought to increase t...]]>
jeberts https://forum.effectivealtruism.org/posts/kKidKRiCcZ5uGJ6w5/stop-thinking-about-ftx-think-about-getting-zika-instead Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stop Thinking about FTX. Think About Getting Zika Instead., published by jeberts on November 14, 2022 on The Effective Altruism Forum.(Or about getting malaria, or hepatitis C (see below), or another exciting disease in an artisanal, curated list of trials in the UK, Canada, and US by 1Day Sooner.)Hi! My name is Jake. I got dysentery as part of a human challenge trial for a vaccine against Shigella, a group of bacteria that are the primary cause of dysentery globally. I quite literally shtposted through it on Twitter and earned fifteen minutes of Internet fame.I now work for 1Day Sooner, which was founded as an advocacy group in early 2020 for Covid-19 human challenge trials. (Who knew dysentery could lead to a career change?) 1Day is also involved in a range of other things, including pandemic preparedness policy and getting hepatitis C challenge trials off the ground.What I want to focus on right now is Zika. Specifically, I want to convince people assigned female at birth aged 18-40 in the DC-Baltimore/DMV area reading this post to take a few minutes to consider signing up for screening for the first-ever human challenge trial for Zika virus (ZIKV) at Johns Hopkins University in Baltimore.Even if you fall outside that category, I figure this might be something interesting to ponder over, and probably less stressful than the cryptocurrency debacle that shall not be named.1Day Sooner is not compensated in any way by the study/Hopkins for this, nor do I/we represent the study in any official sense. I happen to have become very fascinated by this one in particular because it represents how challenge trials can be used for pandemic prevention with comparatively few resources. This post is meant to inform you of the study with a bit more detail from an EA perspective, but does not supplant information provided by the study staff.(If you’re a DMV male like me bummed you can’t take part, ask me about current or upcoming malaria vaccine and monoclonal antibody trials taking place at the University of Maryland — I'll be screening next week! If you’re not in the DMV or otherwise just can’t do anything for Zika or malaria, you can still sign up for our volunteer base and newsletter, which will help you keep tabs on future studies. Something we’re very excited about is the emerging push for hepatitis C challenge studies, see the link above.)Zika 101Zika is a mainly mosquito-borne disease that has been known since 1947 — The 2015-2016 western hemisphere epidemic showed that Zika could cause grave birth defects (congenital Zika syndrome, CZS) — The disease is very mild in adults at presentThe Zika virus (ZIKV) was discovered in the Zika forest of Uganda in 1947, and a few years later, we learned it could cause what was universally assumed to be extremely mild disease in humans. The 2015-16 Zika epidemic that started in South America, and which was particularly severe in Brazil, proved otherwise. This is when it became clear that Zika was linked to horrific birth defects. Zika has since earned its place on the WHO priority pathogen list.To briefly review the basics of Zika:The Zika virus is a flavivirus, in the same family as dengue fever, yellow fever, and Chikungunya, among others. Flaviviruses are RNA viruses, which generally lack genomic proofreading ability and are thus more prone to mutation.Zika infection sometimes causes Zika fever, though asymptomatic cases are very common. Zika fever is usually very, very mild, and direct deaths are extremely rare.Zika is much more of a concern if you are pregnant — in about 7% of infections during pregnancy, the virus infects a fetus and causes serious, even fatal, birth defects. These defects, which most frequently include microcephaly, are referred to as congenital Zika syndrome (CZS).Zika is also thought to increase t...]]>
Mon, 14 Nov 2022 20:26:14 +0000 EA - Stop Thinking about FTX. Think About Getting Zika Instead. by jeberts Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stop Thinking about FTX. Think About Getting Zika Instead., published by jeberts on November 14, 2022 on The Effective Altruism Forum.(Or about getting malaria, or hepatitis C (see below), or another exciting disease in an artisanal, curated list of trials in the UK, Canada, and US by 1Day Sooner.)Hi! My name is Jake. I got dysentery as part of a human challenge trial for a vaccine against Shigella, a group of bacteria that are the primary cause of dysentery globally. I quite literally shtposted through it on Twitter and earned fifteen minutes of Internet fame.I now work for 1Day Sooner, which was founded as an advocacy group in early 2020 for Covid-19 human challenge trials. (Who knew dysentery could lead to a career change?) 1Day is also involved in a range of other things, including pandemic preparedness policy and getting hepatitis C challenge trials off the ground.What I want to focus on right now is Zika. Specifically, I want to convince people assigned female at birth aged 18-40 in the DC-Baltimore/DMV area reading this post to take a few minutes to consider signing up for screening for the first-ever human challenge trial for Zika virus (ZIKV) at Johns Hopkins University in Baltimore.Even if you fall outside that category, I figure this might be something interesting to ponder over, and probably less stressful than the cryptocurrency debacle that shall not be named.1Day Sooner is not compensated in any way by the study/Hopkins for this, nor do I/we represent the study in any official sense. I happen to have become very fascinated by this one in particular because it represents how challenge trials can be used for pandemic prevention with comparatively few resources. This post is meant to inform you of the study with a bit more detail from an EA perspective, but does not supplant information provided by the study staff.(If you’re a DMV male like me bummed you can’t take part, ask me about current or upcoming malaria vaccine and monoclonal antibody trials taking place at the University of Maryland — I'll be screening next week! If you’re not in the DMV or otherwise just can’t do anything for Zika or malaria, you can still sign up for our volunteer base and newsletter, which will help you keep tabs on future studies. Something we’re very excited about is the emerging push for hepatitis C challenge studies, see the link above.)Zika 101Zika is a mainly mosquito-borne disease that has been known since 1947 — The 2015-2016 western hemisphere epidemic showed that Zika could cause grave birth defects (congenital Zika syndrome, CZS) — The disease is very mild in adults at presentThe Zika virus (ZIKV) was discovered in the Zika forest of Uganda in 1947, and a few years later, we learned it could cause what was universally assumed to be extremely mild disease in humans. The 2015-16 Zika epidemic that started in South America, and which was particularly severe in Brazil, proved otherwise. This is when it became clear that Zika was linked to horrific birth defects. Zika has since earned its place on the WHO priority pathogen list.To briefly review the basics of Zika:The Zika virus is a flavivirus, in the same family as dengue fever, yellow fever, and Chikungunya, among others. Flaviviruses are RNA viruses, which generally lack genomic proofreading ability and are thus more prone to mutation.Zika infection sometimes causes Zika fever, though asymptomatic cases are very common. Zika fever is usually very, very mild, and direct deaths are extremely rare.Zika is much more of a concern if you are pregnant — in about 7% of infections during pregnancy, the virus infects a fetus and causes serious, even fatal, birth defects. These defects, which most frequently include microcephaly, are referred to as congenital Zika syndrome (CZS).Zika is also thought to increase t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stop Thinking about FTX. Think About Getting Zika Instead., published by jeberts on November 14, 2022 on The Effective Altruism Forum.(Or about getting malaria, or hepatitis C (see below), or another exciting disease in an artisanal, curated list of trials in the UK, Canada, and US by 1Day Sooner.)Hi! My name is Jake. I got dysentery as part of a human challenge trial for a vaccine against Shigella, a group of bacteria that are the primary cause of dysentery globally. I quite literally shtposted through it on Twitter and earned fifteen minutes of Internet fame.I now work for 1Day Sooner, which was founded as an advocacy group in early 2020 for Covid-19 human challenge trials. (Who knew dysentery could lead to a career change?) 1Day is also involved in a range of other things, including pandemic preparedness policy and getting hepatitis C challenge trials off the ground.What I want to focus on right now is Zika. Specifically, I want to convince people assigned female at birth aged 18-40 in the DC-Baltimore/DMV area reading this post to take a few minutes to consider signing up for screening for the first-ever human challenge trial for Zika virus (ZIKV) at Johns Hopkins University in Baltimore.Even if you fall outside that category, I figure this might be something interesting to ponder over, and probably less stressful than the cryptocurrency debacle that shall not be named.1Day Sooner is not compensated in any way by the study/Hopkins for this, nor do I/we represent the study in any official sense. I happen to have become very fascinated by this one in particular because it represents how challenge trials can be used for pandemic prevention with comparatively few resources. This post is meant to inform you of the study with a bit more detail from an EA perspective, but does not supplant information provided by the study staff.(If you’re a DMV male like me bummed you can’t take part, ask me about current or upcoming malaria vaccine and monoclonal antibody trials taking place at the University of Maryland — I'll be screening next week! If you’re not in the DMV or otherwise just can’t do anything for Zika or malaria, you can still sign up for our volunteer base and newsletter, which will help you keep tabs on future studies. Something we’re very excited about is the emerging push for hepatitis C challenge studies, see the link above.)Zika 101Zika is a mainly mosquito-borne disease that has been known since 1947 — The 2015-2016 western hemisphere epidemic showed that Zika could cause grave birth defects (congenital Zika syndrome, CZS) — The disease is very mild in adults at presentThe Zika virus (ZIKV) was discovered in the Zika forest of Uganda in 1947, and a few years later, we learned it could cause what was universally assumed to be extremely mild disease in humans. The 2015-16 Zika epidemic that started in South America, and which was particularly severe in Brazil, proved otherwise. This is when it became clear that Zika was linked to horrific birth defects. Zika has since earned its place on the WHO priority pathogen list.To briefly review the basics of Zika:The Zika virus is a flavivirus, in the same family as dengue fever, yellow fever, and Chikungunya, among others. Flaviviruses are RNA viruses, which generally lack genomic proofreading ability and are thus more prone to mutation.Zika infection sometimes causes Zika fever, though asymptomatic cases are very common. Zika fever is usually very, very mild, and direct deaths are extremely rare.Zika is much more of a concern if you are pregnant — in about 7% of infections during pregnancy, the virus infects a fetus and causes serious, even fatal, birth defects. These defects, which most frequently include microcephaly, are referred to as congenital Zika syndrome (CZS).Zika is also thought to increase t...]]>
jeberts https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 22:01 None full 3749
aHPhh6GjHtTBhe7cX_EA EA - Proposals for reform should come with detailed stories by Eric Neyman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposals for reform should come with detailed stories, published by Eric Neyman on November 14, 2022 on The Effective Altruism Forum.Following the collapse of FTX, many of us have been asking ourselves:What should members of the EA community have done differently, given the information they had at the time?I think that proposed answers to this question are considerably more useful if they meet (and explicitly address) the following three criteria:(1) The (expected) benefits of the proposal outweigh the costs, given information available at the time.(2) Implementing the proposal would have been realistic.(3) There is a somewhat detailed, plausible story of how the proposal could have led to a significantly better outcome with regard to the FTX collapse.Hopefully it's clear why we want the first two of these. (1) is EA bread and butter. And (2) is important to consider given how decentralized EA is: if your proposal is "all EAs should refuse funding from a billionaire without a credible audit of their finances", you face a pretty impossible coordination problem.But most of the proposals I've seen have been failing (3). Here's a proposal I've heard:The people SBF was seeking out to run the FTX Foundation should have requested an audit of FTX's finances at the start.This isn't enough for me to construct a detailed, plausible story for things going better. In particular, I think one of two things would have happened:The FTX leadership would have agreed to a basic audit that wouldn't have uncovered underlying problems (much as the entire finance industry failed to uncover the problems).The FTX leadership would have refused to accede to the audit.In the first case, the proposal fails at accomplishing its goal. In the second case, I want more details! What should the FTX Foundation people have done in response, and how could their actions have led to a better outcome?A more detailed proposal:The people SBF was seeking out to run the FTX Foundation should have requested a thorough, rigorous audit of FTX's finances at the start. If FTX leadership had refused, they should have refused to run the FTX Foundation.This still isn't detailed enough: SBF would have just asked someone else to run the Foundation. It seems like the default outcome is that someone who's a bit worse at the job would have been in charge instead.A more detailed proposal:The people SBF was seeking out to run the FTX Foundation should have requested a thorough, rigorous audit of FTX's finances at the start. If FTX leadership had refused, they should have refused to run the FTX Foundation and made it public that FTX leadership had refused the audit. Then, EA leaders should have discouraged major EA organizations from taking money from the FTX Foundation and promoted a culture of looking down on anyone who took money from the Foundation.This is starting to get to the point where something could have realistically changed as a result of the actions. Maybe the pressure for transparency would have been strong enough that SBF would have acceded to an audit -- though I still think the audit wouldn't have uncovered anything. Or maybe he wouldn't have acceded, and for a while EA organizations would have refused his money, before eventually giving in to the significant incentives to take the money. Or maybe they would have refused his money for many years -- in that case, I would question whether criterion (1) is still satisfied (given only information we would have had at the time, remember). But at least we have a somewhat detailed story to argue about.Without a detailed story, it's easy to convince yourself that the change you propose would have plausibly averted the disaster we face. A detailed story forces you to spell out your assumptions in a way that makes it easier for other people to poke ...]]>
Eric Neyman https://forum.effectivealtruism.org/posts/aHPhh6GjHtTBhe7cX/proposals-for-reform-should-come-with-detailed-stories Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposals for reform should come with detailed stories, published by Eric Neyman on November 14, 2022 on The Effective Altruism Forum.Following the collapse of FTX, many of us have been asking ourselves:What should members of the EA community have done differently, given the information they had at the time?I think that proposed answers to this question are considerably more useful if they meet (and explicitly address) the following three criteria:(1) The (expected) benefits of the proposal outweigh the costs, given information available at the time.(2) Implementing the proposal would have been realistic.(3) There is a somewhat detailed, plausible story of how the proposal could have led to a significantly better outcome with regard to the FTX collapse.Hopefully it's clear why we want the first two of these. (1) is EA bread and butter. And (2) is important to consider given how decentralized EA is: if your proposal is "all EAs should refuse funding from a billionaire without a credible audit of their finances", you face a pretty impossible coordination problem.But most of the proposals I've seen have been failing (3). Here's a proposal I've heard:The people SBF was seeking out to run the FTX Foundation should have requested an audit of FTX's finances at the start.This isn't enough for me to construct a detailed, plausible story for things going better. In particular, I think one of two things would have happened:The FTX leadership would have agreed to a basic audit that wouldn't have uncovered underlying problems (much as the entire finance industry failed to uncover the problems).The FTX leadership would have refused to accede to the audit.In the first case, the proposal fails at accomplishing its goal. In the second case, I want more details! What should the FTX Foundation people have done in response, and how could their actions have led to a better outcome?A more detailed proposal:The people SBF was seeking out to run the FTX Foundation should have requested a thorough, rigorous audit of FTX's finances at the start. If FTX leadership had refused, they should have refused to run the FTX Foundation.This still isn't detailed enough: SBF would have just asked someone else to run the Foundation. It seems like the default outcome is that someone who's a bit worse at the job would have been in charge instead.A more detailed proposal:The people SBF was seeking out to run the FTX Foundation should have requested a thorough, rigorous audit of FTX's finances at the start. If FTX leadership had refused, they should have refused to run the FTX Foundation and made it public that FTX leadership had refused the audit. Then, EA leaders should have discouraged major EA organizations from taking money from the FTX Foundation and promoted a culture of looking down on anyone who took money from the Foundation.This is starting to get to the point where something could have realistically changed as a result of the actions. Maybe the pressure for transparency would have been strong enough that SBF would have acceded to an audit -- though I still think the audit wouldn't have uncovered anything. Or maybe he wouldn't have acceded, and for a while EA organizations would have refused his money, before eventually giving in to the significant incentives to take the money. Or maybe they would have refused his money for many years -- in that case, I would question whether criterion (1) is still satisfied (given only information we would have had at the time, remember). But at least we have a somewhat detailed story to argue about.Without a detailed story, it's easy to convince yourself that the change you propose would have plausibly averted the disaster we face. A detailed story forces you to spell out your assumptions in a way that makes it easier for other people to poke ...]]>
Mon, 14 Nov 2022 19:12:03 +0000 EA - Proposals for reform should come with detailed stories by Eric Neyman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposals for reform should come with detailed stories, published by Eric Neyman on November 14, 2022 on The Effective Altruism Forum.Following the collapse of FTX, many of us have been asking ourselves:What should members of the EA community have done differently, given the information they had at the time?I think that proposed answers to this question are considerably more useful if they meet (and explicitly address) the following three criteria:(1) The (expected) benefits of the proposal outweigh the costs, given information available at the time.(2) Implementing the proposal would have been realistic.(3) There is a somewhat detailed, plausible story of how the proposal could have led to a significantly better outcome with regard to the FTX collapse.Hopefully it's clear why we want the first two of these. (1) is EA bread and butter. And (2) is important to consider given how decentralized EA is: if your proposal is "all EAs should refuse funding from a billionaire without a credible audit of their finances", you face a pretty impossible coordination problem.But most of the proposals I've seen have been failing (3). Here's a proposal I've heard:The people SBF was seeking out to run the FTX Foundation should have requested an audit of FTX's finances at the start.This isn't enough for me to construct a detailed, plausible story for things going better. In particular, I think one of two things would have happened:The FTX leadership would have agreed to a basic audit that wouldn't have uncovered underlying problems (much as the entire finance industry failed to uncover the problems).The FTX leadership would have refused to accede to the audit.In the first case, the proposal fails at accomplishing its goal. In the second case, I want more details! What should the FTX Foundation people have done in response, and how could their actions have led to a better outcome?A more detailed proposal:The people SBF was seeking out to run the FTX Foundation should have requested a thorough, rigorous audit of FTX's finances at the start. If FTX leadership had refused, they should have refused to run the FTX Foundation.This still isn't detailed enough: SBF would have just asked someone else to run the Foundation. It seems like the default outcome is that someone who's a bit worse at the job would have been in charge instead.A more detailed proposal:The people SBF was seeking out to run the FTX Foundation should have requested a thorough, rigorous audit of FTX's finances at the start. If FTX leadership had refused, they should have refused to run the FTX Foundation and made it public that FTX leadership had refused the audit. Then, EA leaders should have discouraged major EA organizations from taking money from the FTX Foundation and promoted a culture of looking down on anyone who took money from the Foundation.This is starting to get to the point where something could have realistically changed as a result of the actions. Maybe the pressure for transparency would have been strong enough that SBF would have acceded to an audit -- though I still think the audit wouldn't have uncovered anything. Or maybe he wouldn't have acceded, and for a while EA organizations would have refused his money, before eventually giving in to the significant incentives to take the money. Or maybe they would have refused his money for many years -- in that case, I would question whether criterion (1) is still satisfied (given only information we would have had at the time, remember). But at least we have a somewhat detailed story to argue about.Without a detailed story, it's easy to convince yourself that the change you propose would have plausibly averted the disaster we face. A detailed story forces you to spell out your assumptions in a way that makes it easier for other people to poke ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposals for reform should come with detailed stories, published by Eric Neyman on November 14, 2022 on The Effective Altruism Forum.Following the collapse of FTX, many of us have been asking ourselves:What should members of the EA community have done differently, given the information they had at the time?I think that proposed answers to this question are considerably more useful if they meet (and explicitly address) the following three criteria:(1) The (expected) benefits of the proposal outweigh the costs, given information available at the time.(2) Implementing the proposal would have been realistic.(3) There is a somewhat detailed, plausible story of how the proposal could have led to a significantly better outcome with regard to the FTX collapse.Hopefully it's clear why we want the first two of these. (1) is EA bread and butter. And (2) is important to consider given how decentralized EA is: if your proposal is "all EAs should refuse funding from a billionaire without a credible audit of their finances", you face a pretty impossible coordination problem.But most of the proposals I've seen have been failing (3). Here's a proposal I've heard:The people SBF was seeking out to run the FTX Foundation should have requested an audit of FTX's finances at the start.This isn't enough for me to construct a detailed, plausible story for things going better. In particular, I think one of two things would have happened:The FTX leadership would have agreed to a basic audit that wouldn't have uncovered underlying problems (much as the entire finance industry failed to uncover the problems).The FTX leadership would have refused to accede to the audit.In the first case, the proposal fails at accomplishing its goal. In the second case, I want more details! What should the FTX Foundation people have done in response, and how could their actions have led to a better outcome?A more detailed proposal:The people SBF was seeking out to run the FTX Foundation should have requested a thorough, rigorous audit of FTX's finances at the start. If FTX leadership had refused, they should have refused to run the FTX Foundation.This still isn't detailed enough: SBF would have just asked someone else to run the Foundation. It seems like the default outcome is that someone who's a bit worse at the job would have been in charge instead.A more detailed proposal:The people SBF was seeking out to run the FTX Foundation should have requested a thorough, rigorous audit of FTX's finances at the start. If FTX leadership had refused, they should have refused to run the FTX Foundation and made it public that FTX leadership had refused the audit. Then, EA leaders should have discouraged major EA organizations from taking money from the FTX Foundation and promoted a culture of looking down on anyone who took money from the Foundation.This is starting to get to the point where something could have realistically changed as a result of the actions. Maybe the pressure for transparency would have been strong enough that SBF would have acceded to an audit -- though I still think the audit wouldn't have uncovered anything. Or maybe he wouldn't have acceded, and for a while EA organizations would have refused his money, before eventually giving in to the significant incentives to take the money. Or maybe they would have refused his money for many years -- in that case, I would question whether criterion (1) is still satisfied (given only information we would have had at the time, remember). But at least we have a somewhat detailed story to argue about.Without a detailed story, it's easy to convince yourself that the change you propose would have plausibly averted the disaster we face. A detailed story forces you to spell out your assumptions in a way that makes it easier for other people to poke ...]]>
Eric Neyman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:26 None full 3750
WfeWN2X4k8w8nTeaS_EA EA - Theories of Welfare and Welfare Range Estimates by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Theories of Welfare and Welfare Range Estimates, published by Bob Fischer on November 14, 2022 on The Effective Altruism Forum.Key TakeawaysMany theories of welfare imply that there are probably differences in animals’ welfare ranges. However, these theories do not agree about the sizes of those differences.The Moral Weight Project assumes that hedonism is true. This post tries to estimate how different our welfare range estimates could be if we were to assume some other theory of welfare.We argue that even if hedonic goods and bads (i.e., pleasures and pains) aren't all of welfare, they’re a lot of it. So, probably, the choice of a theory of welfare will only have a modest (less than 10x) impact on the differences we estimate between humans' and nonhumans' welfare ranges.IntroductionThis is the third post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization. The aim of this post is to suggest a way to quantify the impact of assuming hedonism on welfare range estimates.MotivationsTheories of welfare disagree about the determinants of welfare. According to hedonism, the determinants of welfare are positively and negatively valenced experiences. According to desire satisfaction theory, the determinants are satisfied and frustrated desires. According to a garden variety objective list theory, the determinants are something like knowledge, developing and maintaining friendships, engaging in meaningful activities, and so on. Now, some animals probably have more intense pains than others; some probably have richer, more complex desires; some are able to acquire more sophisticated knowledge of the world; others can make stronger, more complex relationships with others. If animals systematically vary with respect to their ability to realize the determinants of welfare, then they probably vary in their welfare ranges.That is, some of them can probably realize more positive welfare at a time than others; likewise, some of them can probably realize more negative welfare at a time than others. As a result, animals probably vary with respect to the differences between the best and worst welfare states they can realize. The upshot: many theories of welfare imply that there are probably differences in animals’ welfare ranges.However, theories of welfare do not obviously agree about the sizes of those differences. Consider a garden variety objective list theory on which the following things contribute positively to welfare: acting autonomously, gaining knowledge, having friends, being in a loving relationship, doing meaningful work, creating valuable institutions, experiencing pleasure, and so on. Now consider a simple version of hedonism (i.e., one that rejects the higher / lower pleasure distinction) on which just one thing contributes positively to welfare: experiencing pleasure. Presumably, while many nonhuman animals (henceforth, animals) can experience pleasure, they can’t realize many of the other things that matter according to the objective list theory. Given as much, it’s plausible that if the objective list theory is true, there will be larger differences in welfare ranges between many humans and animals than there will be if hedonism is true.For practical and theoretical reasons, the Moral Weight Project assumes that hedonism is true. On the practical side, we needed to make some assumptions to make any progress in the time we had available. On the theoretical side, there are powerful arguments for hedonism. Still, those who reject hedonism will rightly wonder about the impact of assuming hedonism. How different would our welfare range estimates be if we were to assume some other theory of welfare?In the ...]]>
Bob Fischer https://forum.effectivealtruism.org/posts/WfeWN2X4k8w8nTeaS/theories-of-welfare-and-welfare-range-estimates Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Theories of Welfare and Welfare Range Estimates, published by Bob Fischer on November 14, 2022 on The Effective Altruism Forum.Key TakeawaysMany theories of welfare imply that there are probably differences in animals’ welfare ranges. However, these theories do not agree about the sizes of those differences.The Moral Weight Project assumes that hedonism is true. This post tries to estimate how different our welfare range estimates could be if we were to assume some other theory of welfare.We argue that even if hedonic goods and bads (i.e., pleasures and pains) aren't all of welfare, they’re a lot of it. So, probably, the choice of a theory of welfare will only have a modest (less than 10x) impact on the differences we estimate between humans' and nonhumans' welfare ranges.IntroductionThis is the third post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization. The aim of this post is to suggest a way to quantify the impact of assuming hedonism on welfare range estimates.MotivationsTheories of welfare disagree about the determinants of welfare. According to hedonism, the determinants of welfare are positively and negatively valenced experiences. According to desire satisfaction theory, the determinants are satisfied and frustrated desires. According to a garden variety objective list theory, the determinants are something like knowledge, developing and maintaining friendships, engaging in meaningful activities, and so on. Now, some animals probably have more intense pains than others; some probably have richer, more complex desires; some are able to acquire more sophisticated knowledge of the world; others can make stronger, more complex relationships with others. If animals systematically vary with respect to their ability to realize the determinants of welfare, then they probably vary in their welfare ranges.That is, some of them can probably realize more positive welfare at a time than others; likewise, some of them can probably realize more negative welfare at a time than others. As a result, animals probably vary with respect to the differences between the best and worst welfare states they can realize. The upshot: many theories of welfare imply that there are probably differences in animals’ welfare ranges.However, theories of welfare do not obviously agree about the sizes of those differences. Consider a garden variety objective list theory on which the following things contribute positively to welfare: acting autonomously, gaining knowledge, having friends, being in a loving relationship, doing meaningful work, creating valuable institutions, experiencing pleasure, and so on. Now consider a simple version of hedonism (i.e., one that rejects the higher / lower pleasure distinction) on which just one thing contributes positively to welfare: experiencing pleasure. Presumably, while many nonhuman animals (henceforth, animals) can experience pleasure, they can’t realize many of the other things that matter according to the objective list theory. Given as much, it’s plausible that if the objective list theory is true, there will be larger differences in welfare ranges between many humans and animals than there will be if hedonism is true.For practical and theoretical reasons, the Moral Weight Project assumes that hedonism is true. On the practical side, we needed to make some assumptions to make any progress in the time we had available. On the theoretical side, there are powerful arguments for hedonism. Still, those who reject hedonism will rightly wonder about the impact of assuming hedonism. How different would our welfare range estimates be if we were to assume some other theory of welfare?In the ...]]>
Mon, 14 Nov 2022 15:58:14 +0000 EA - Theories of Welfare and Welfare Range Estimates by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Theories of Welfare and Welfare Range Estimates, published by Bob Fischer on November 14, 2022 on The Effective Altruism Forum.Key TakeawaysMany theories of welfare imply that there are probably differences in animals’ welfare ranges. However, these theories do not agree about the sizes of those differences.The Moral Weight Project assumes that hedonism is true. This post tries to estimate how different our welfare range estimates could be if we were to assume some other theory of welfare.We argue that even if hedonic goods and bads (i.e., pleasures and pains) aren't all of welfare, they’re a lot of it. So, probably, the choice of a theory of welfare will only have a modest (less than 10x) impact on the differences we estimate between humans' and nonhumans' welfare ranges.IntroductionThis is the third post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization. The aim of this post is to suggest a way to quantify the impact of assuming hedonism on welfare range estimates.MotivationsTheories of welfare disagree about the determinants of welfare. According to hedonism, the determinants of welfare are positively and negatively valenced experiences. According to desire satisfaction theory, the determinants are satisfied and frustrated desires. According to a garden variety objective list theory, the determinants are something like knowledge, developing and maintaining friendships, engaging in meaningful activities, and so on. Now, some animals probably have more intense pains than others; some probably have richer, more complex desires; some are able to acquire more sophisticated knowledge of the world; others can make stronger, more complex relationships with others. If animals systematically vary with respect to their ability to realize the determinants of welfare, then they probably vary in their welfare ranges.That is, some of them can probably realize more positive welfare at a time than others; likewise, some of them can probably realize more negative welfare at a time than others. As a result, animals probably vary with respect to the differences between the best and worst welfare states they can realize. The upshot: many theories of welfare imply that there are probably differences in animals’ welfare ranges.However, theories of welfare do not obviously agree about the sizes of those differences. Consider a garden variety objective list theory on which the following things contribute positively to welfare: acting autonomously, gaining knowledge, having friends, being in a loving relationship, doing meaningful work, creating valuable institutions, experiencing pleasure, and so on. Now consider a simple version of hedonism (i.e., one that rejects the higher / lower pleasure distinction) on which just one thing contributes positively to welfare: experiencing pleasure. Presumably, while many nonhuman animals (henceforth, animals) can experience pleasure, they can’t realize many of the other things that matter according to the objective list theory. Given as much, it’s plausible that if the objective list theory is true, there will be larger differences in welfare ranges between many humans and animals than there will be if hedonism is true.For practical and theoretical reasons, the Moral Weight Project assumes that hedonism is true. On the practical side, we needed to make some assumptions to make any progress in the time we had available. On the theoretical side, there are powerful arguments for hedonism. Still, those who reject hedonism will rightly wonder about the impact of assuming hedonism. How different would our welfare range estimates be if we were to assume some other theory of welfare?In the ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Theories of Welfare and Welfare Range Estimates, published by Bob Fischer on November 14, 2022 on The Effective Altruism Forum.Key TakeawaysMany theories of welfare imply that there are probably differences in animals’ welfare ranges. However, these theories do not agree about the sizes of those differences.The Moral Weight Project assumes that hedonism is true. This post tries to estimate how different our welfare range estimates could be if we were to assume some other theory of welfare.We argue that even if hedonic goods and bads (i.e., pleasures and pains) aren't all of welfare, they’re a lot of it. So, probably, the choice of a theory of welfare will only have a modest (less than 10x) impact on the differences we estimate between humans' and nonhumans' welfare ranges.IntroductionThis is the third post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization. The aim of this post is to suggest a way to quantify the impact of assuming hedonism on welfare range estimates.MotivationsTheories of welfare disagree about the determinants of welfare. According to hedonism, the determinants of welfare are positively and negatively valenced experiences. According to desire satisfaction theory, the determinants are satisfied and frustrated desires. According to a garden variety objective list theory, the determinants are something like knowledge, developing and maintaining friendships, engaging in meaningful activities, and so on. Now, some animals probably have more intense pains than others; some probably have richer, more complex desires; some are able to acquire more sophisticated knowledge of the world; others can make stronger, more complex relationships with others. If animals systematically vary with respect to their ability to realize the determinants of welfare, then they probably vary in their welfare ranges.That is, some of them can probably realize more positive welfare at a time than others; likewise, some of them can probably realize more negative welfare at a time than others. As a result, animals probably vary with respect to the differences between the best and worst welfare states they can realize. The upshot: many theories of welfare imply that there are probably differences in animals’ welfare ranges.However, theories of welfare do not obviously agree about the sizes of those differences. Consider a garden variety objective list theory on which the following things contribute positively to welfare: acting autonomously, gaining knowledge, having friends, being in a loving relationship, doing meaningful work, creating valuable institutions, experiencing pleasure, and so on. Now consider a simple version of hedonism (i.e., one that rejects the higher / lower pleasure distinction) on which just one thing contributes positively to welfare: experiencing pleasure. Presumably, while many nonhuman animals (henceforth, animals) can experience pleasure, they can’t realize many of the other things that matter according to the objective list theory. Given as much, it’s plausible that if the objective list theory is true, there will be larger differences in welfare ranges between many humans and animals than there will be if hedonism is true.For practical and theoretical reasons, the Moral Weight Project assumes that hedonism is true. On the practical side, we needed to make some assumptions to make any progress in the time we had available. On the theoretical side, there are powerful arguments for hedonism. Still, those who reject hedonism will rightly wonder about the impact of assuming hedonism. How different would our welfare range estimates be if we were to assume some other theory of welfare?In the ...]]>
Bob Fischer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:29 None full 3752
ucqPQ5sgMsj5WMpsT_EA EA - Jeff Bezos announces donation plans (in response to question) by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Jeff Bezos announces donation plans (in response to question), published by david reinstein on November 14, 2022 on The Effective Altruism Forum.For a change, some good billionaire philanthropy newsCaveats:It seems to have been in response to the question "Do you plan to give away the majority of your wealth in your lifetime?"; I don't know whether he encouraged them to ask thisI suspect this is not entirely 'news'; iirc he has made noises like this in the pastAmazon founder Jeff Bezos plans to give away the majority of his $124 billion net worth during his lifetime, telling CNN in an exclusive interview he will devote the bulk of his wealth to fighting climate change and supporting people who can unify humanity in the face of deep social and political divisions.This seems potentially promising: he seems to prioritize effectiveness.“The hard part is figuring out how to do it in a levered way,” he said, implying that even as he gives away his billions, he is still looking to maximize his return. “It’s not easy. Building Amazon was not easy. It took a lot of hard work, a bunch of very smart teammates, hard-working teammates, and I’m finding — and I think Lauren is finding the same thing — that charity, philanthropy, is very similar.” “There are a bunch of ways that I think you could do ineffective things, too,” he added. “So you have to think about it carefully and you have to have brilliant people on the team.”Bezos’ methodical approach to giving stands in sharp contrast to that of his ex-wife, the philanthropist MacKenzie Scott, who recently gave away nearly $4 billion to 465 organizations in the span of less than a year.In terms of specifics, the Earth Fund seems relatively good, to me:Bezos has committed $10 billion over 10 years, or about 8% of his current net worth, to the Bezos Earth Fund, which Sánchez co-chairs. Among its priorities are reducing the carbon footprint of construction-grade cement and steel; pushing financial regulators to consider climate-related risks; advancing data and mapping technologies to monitor carbon emissions; and building natural, plant-based carbon sinks on a large scale.I’m less enthusiastic about the “Bezos Courage and Civility Award” which seems celebrity-driven (as much as I love Dolly Parton) and perhaps less likely to target global priorities/effectiveness.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
david reinstein https://forum.effectivealtruism.org/posts/ucqPQ5sgMsj5WMpsT/jeff-bezos-announces-donation-plans-in-response-to-question Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Jeff Bezos announces donation plans (in response to question), published by david reinstein on November 14, 2022 on The Effective Altruism Forum.For a change, some good billionaire philanthropy newsCaveats:It seems to have been in response to the question "Do you plan to give away the majority of your wealth in your lifetime?"; I don't know whether he encouraged them to ask thisI suspect this is not entirely 'news'; iirc he has made noises like this in the pastAmazon founder Jeff Bezos plans to give away the majority of his $124 billion net worth during his lifetime, telling CNN in an exclusive interview he will devote the bulk of his wealth to fighting climate change and supporting people who can unify humanity in the face of deep social and political divisions.This seems potentially promising: he seems to prioritize effectiveness.“The hard part is figuring out how to do it in a levered way,” he said, implying that even as he gives away his billions, he is still looking to maximize his return. “It’s not easy. Building Amazon was not easy. It took a lot of hard work, a bunch of very smart teammates, hard-working teammates, and I’m finding — and I think Lauren is finding the same thing — that charity, philanthropy, is very similar.” “There are a bunch of ways that I think you could do ineffective things, too,” he added. “So you have to think about it carefully and you have to have brilliant people on the team.”Bezos’ methodical approach to giving stands in sharp contrast to that of his ex-wife, the philanthropist MacKenzie Scott, who recently gave away nearly $4 billion to 465 organizations in the span of less than a year.In terms of specifics, the Earth Fund seems relatively good, to me:Bezos has committed $10 billion over 10 years, or about 8% of his current net worth, to the Bezos Earth Fund, which Sánchez co-chairs. Among its priorities are reducing the carbon footprint of construction-grade cement and steel; pushing financial regulators to consider climate-related risks; advancing data and mapping technologies to monitor carbon emissions; and building natural, plant-based carbon sinks on a large scale.I’m less enthusiastic about the “Bezos Courage and Civility Award” which seems celebrity-driven (as much as I love Dolly Parton) and perhaps less likely to target global priorities/effectiveness.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 14 Nov 2022 15:37:58 +0000 EA - Jeff Bezos announces donation plans (in response to question) by david reinstein Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Jeff Bezos announces donation plans (in response to question), published by david reinstein on November 14, 2022 on The Effective Altruism Forum.For a change, some good billionaire philanthropy newsCaveats:It seems to have been in response to the question "Do you plan to give away the majority of your wealth in your lifetime?"; I don't know whether he encouraged them to ask thisI suspect this is not entirely 'news'; iirc he has made noises like this in the pastAmazon founder Jeff Bezos plans to give away the majority of his $124 billion net worth during his lifetime, telling CNN in an exclusive interview he will devote the bulk of his wealth to fighting climate change and supporting people who can unify humanity in the face of deep social and political divisions.This seems potentially promising: he seems to prioritize effectiveness.“The hard part is figuring out how to do it in a levered way,” he said, implying that even as he gives away his billions, he is still looking to maximize his return. “It’s not easy. Building Amazon was not easy. It took a lot of hard work, a bunch of very smart teammates, hard-working teammates, and I’m finding — and I think Lauren is finding the same thing — that charity, philanthropy, is very similar.” “There are a bunch of ways that I think you could do ineffective things, too,” he added. “So you have to think about it carefully and you have to have brilliant people on the team.”Bezos’ methodical approach to giving stands in sharp contrast to that of his ex-wife, the philanthropist MacKenzie Scott, who recently gave away nearly $4 billion to 465 organizations in the span of less than a year.In terms of specifics, the Earth Fund seems relatively good, to me:Bezos has committed $10 billion over 10 years, or about 8% of his current net worth, to the Bezos Earth Fund, which Sánchez co-chairs. Among its priorities are reducing the carbon footprint of construction-grade cement and steel; pushing financial regulators to consider climate-related risks; advancing data and mapping technologies to monitor carbon emissions; and building natural, plant-based carbon sinks on a large scale.I’m less enthusiastic about the “Bezos Courage and Civility Award” which seems celebrity-driven (as much as I love Dolly Parton) and perhaps less likely to target global priorities/effectiveness.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Jeff Bezos announces donation plans (in response to question), published by david reinstein on November 14, 2022 on The Effective Altruism Forum.For a change, some good billionaire philanthropy newsCaveats:It seems to have been in response to the question "Do you plan to give away the majority of your wealth in your lifetime?"; I don't know whether he encouraged them to ask thisI suspect this is not entirely 'news'; iirc he has made noises like this in the pastAmazon founder Jeff Bezos plans to give away the majority of his $124 billion net worth during his lifetime, telling CNN in an exclusive interview he will devote the bulk of his wealth to fighting climate change and supporting people who can unify humanity in the face of deep social and political divisions.This seems potentially promising: he seems to prioritize effectiveness.“The hard part is figuring out how to do it in a levered way,” he said, implying that even as he gives away his billions, he is still looking to maximize his return. “It’s not easy. Building Amazon was not easy. It took a lot of hard work, a bunch of very smart teammates, hard-working teammates, and I’m finding — and I think Lauren is finding the same thing — that charity, philanthropy, is very similar.” “There are a bunch of ways that I think you could do ineffective things, too,” he added. “So you have to think about it carefully and you have to have brilliant people on the team.”Bezos’ methodical approach to giving stands in sharp contrast to that of his ex-wife, the philanthropist MacKenzie Scott, who recently gave away nearly $4 billion to 465 organizations in the span of less than a year.In terms of specifics, the Earth Fund seems relatively good, to me:Bezos has committed $10 billion over 10 years, or about 8% of his current net worth, to the Bezos Earth Fund, which Sánchez co-chairs. Among its priorities are reducing the carbon footprint of construction-grade cement and steel; pushing financial regulators to consider climate-related risks; advancing data and mapping technologies to monitor carbon emissions; and building natural, plant-based carbon sinks on a large scale.I’m less enthusiastic about the “Bezos Courage and Civility Award” which seems celebrity-driven (as much as I love Dolly Parton) and perhaps less likely to target global priorities/effectiveness.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
david reinstein https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:28 None full 3751
7PqmnrBhSX4yCyMCk_EA EA - Effective Peer Support Network in FTX crisis (Update) by Emily Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Peer Support Network in FTX crisis (Update), published by Emily on November 14, 2022 on The Effective Altruism Forum.Are you going through a rough patch due to the FTX crisis?Or do you want to help EA peers who are?Following up on our post from last Friday, 'Get and Give Mental Health Support During These Hard Times', we [Rethink Wellbeing and the Mental Health Navigator] set up a support network to react to the many people in our community affected by the FTX crisis. The number of people who have joined is growing! These are the two modes of action:(1) In this table, you can find experienced supporters.These supporters want to help (for free), and you can just contact them. The community health team, as well as a growing number of coaches and therapists that are informed about EA and the FTX crisis, are already listed.If you have experience in supporting others and would like to dedicate >=1 hour in the next few weeks to support one or more EA peers that are having a hard time, you can take 1 min to add your details to the table. We likely need more support to take care of this situation. Consultants might be helpful, too.(2) You can join our new Peer Support Network Slack here.People can share and discuss their issues, get together in groups and 1:1, as well as get support from the trained helpers as well as peers:It enables you to chat with a trained helper anonymously—if you use a Nickname and an email address that doesn't contain your name.You can create closed sub-channels here for a small group of people with similar issues that want to support each other more closely.One can tackle specific topics in sub-channels, e.g., dealing with loss, future worries, or personal crises.Feel free to send this to anyone who might need or wish to provide help!Rethink Wellbeing was supposed to receive FTX funding to implement Effective Peer Support. Finding an alternative funding source (USD 25-63k), we could also provide a simple first version of mental health support groups and 1:1s to people affected by the FTX crisis quickly. Let us know if you'd like to make that possible. Around 300 EAs have signed up to participate in Effective Peer Support already, mainly after our post in October.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Emily https://forum.effectivealtruism.org/posts/7PqmnrBhSX4yCyMCk/effective-peer-support-network-in-ftx-crisis-update Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Peer Support Network in FTX crisis (Update), published by Emily on November 14, 2022 on The Effective Altruism Forum.Are you going through a rough patch due to the FTX crisis?Or do you want to help EA peers who are?Following up on our post from last Friday, 'Get and Give Mental Health Support During These Hard Times', we [Rethink Wellbeing and the Mental Health Navigator] set up a support network to react to the many people in our community affected by the FTX crisis. The number of people who have joined is growing! These are the two modes of action:(1) In this table, you can find experienced supporters.These supporters want to help (for free), and you can just contact them. The community health team, as well as a growing number of coaches and therapists that are informed about EA and the FTX crisis, are already listed.If you have experience in supporting others and would like to dedicate >=1 hour in the next few weeks to support one or more EA peers that are having a hard time, you can take 1 min to add your details to the table. We likely need more support to take care of this situation. Consultants might be helpful, too.(2) You can join our new Peer Support Network Slack here.People can share and discuss their issues, get together in groups and 1:1, as well as get support from the trained helpers as well as peers:It enables you to chat with a trained helper anonymously—if you use a Nickname and an email address that doesn't contain your name.You can create closed sub-channels here for a small group of people with similar issues that want to support each other more closely.One can tackle specific topics in sub-channels, e.g., dealing with loss, future worries, or personal crises.Feel free to send this to anyone who might need or wish to provide help!Rethink Wellbeing was supposed to receive FTX funding to implement Effective Peer Support. Finding an alternative funding source (USD 25-63k), we could also provide a simple first version of mental health support groups and 1:1s to people affected by the FTX crisis quickly. Let us know if you'd like to make that possible. Around 300 EAs have signed up to participate in Effective Peer Support already, mainly after our post in October.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 14 Nov 2022 14:22:05 +0000 EA - Effective Peer Support Network in FTX crisis (Update) by Emily Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Peer Support Network in FTX crisis (Update), published by Emily on November 14, 2022 on The Effective Altruism Forum.Are you going through a rough patch due to the FTX crisis?Or do you want to help EA peers who are?Following up on our post from last Friday, 'Get and Give Mental Health Support During These Hard Times', we [Rethink Wellbeing and the Mental Health Navigator] set up a support network to react to the many people in our community affected by the FTX crisis. The number of people who have joined is growing! These are the two modes of action:(1) In this table, you can find experienced supporters.These supporters want to help (for free), and you can just contact them. The community health team, as well as a growing number of coaches and therapists that are informed about EA and the FTX crisis, are already listed.If you have experience in supporting others and would like to dedicate >=1 hour in the next few weeks to support one or more EA peers that are having a hard time, you can take 1 min to add your details to the table. We likely need more support to take care of this situation. Consultants might be helpful, too.(2) You can join our new Peer Support Network Slack here.People can share and discuss their issues, get together in groups and 1:1, as well as get support from the trained helpers as well as peers:It enables you to chat with a trained helper anonymously—if you use a Nickname and an email address that doesn't contain your name.You can create closed sub-channels here for a small group of people with similar issues that want to support each other more closely.One can tackle specific topics in sub-channels, e.g., dealing with loss, future worries, or personal crises.Feel free to send this to anyone who might need or wish to provide help!Rethink Wellbeing was supposed to receive FTX funding to implement Effective Peer Support. Finding an alternative funding source (USD 25-63k), we could also provide a simple first version of mental health support groups and 1:1s to people affected by the FTX crisis quickly. Let us know if you'd like to make that possible. Around 300 EAs have signed up to participate in Effective Peer Support already, mainly after our post in October.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Peer Support Network in FTX crisis (Update), published by Emily on November 14, 2022 on The Effective Altruism Forum.Are you going through a rough patch due to the FTX crisis?Or do you want to help EA peers who are?Following up on our post from last Friday, 'Get and Give Mental Health Support During These Hard Times', we [Rethink Wellbeing and the Mental Health Navigator] set up a support network to react to the many people in our community affected by the FTX crisis. The number of people who have joined is growing! These are the two modes of action:(1) In this table, you can find experienced supporters.These supporters want to help (for free), and you can just contact them. The community health team, as well as a growing number of coaches and therapists that are informed about EA and the FTX crisis, are already listed.If you have experience in supporting others and would like to dedicate >=1 hour in the next few weeks to support one or more EA peers that are having a hard time, you can take 1 min to add your details to the table. We likely need more support to take care of this situation. Consultants might be helpful, too.(2) You can join our new Peer Support Network Slack here.People can share and discuss their issues, get together in groups and 1:1, as well as get support from the trained helpers as well as peers:It enables you to chat with a trained helper anonymously—if you use a Nickname and an email address that doesn't contain your name.You can create closed sub-channels here for a small group of people with similar issues that want to support each other more closely.One can tackle specific topics in sub-channels, e.g., dealing with loss, future worries, or personal crises.Feel free to send this to anyone who might need or wish to provide help!Rethink Wellbeing was supposed to receive FTX funding to implement Effective Peer Support. Finding an alternative funding source (USD 25-63k), we could also provide a simple first version of mental health support groups and 1:1s to people affected by the FTX crisis quickly. Let us know if you'd like to make that possible. Around 300 EAs have signed up to participate in Effective Peer Support already, mainly after our post in October.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Emily https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:22 None full 3755
sEpWkCvvJfoEbhnsd_EA EA - The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governance by Fods12 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governance, published by Fods12 on November 14, 2022 on The Effective Altruism Forum.IntroductionIn this piece, I will explain why I don't think the collapse of FTX and resulting fallout for Future Fund and EA community in general is a one-off or 'black swan' event as some have argued on this forum. Rather, I think that what happened was part of a broader pattern of failures and oversights that have been persistent within EA and EA-adjacent organisations since the beginning of the movement.As a disclaimer, I do not have any inside knowledge or special expertise about FTX or any of the other organisations I will mention in this post. I speak simply as a long-standing and concerned member of the EA community.Weak Norms of GovernanceThe essential point I want to make in this post is that the EA community has not been very successful in fostering norms of transparency, accountability, and institutionalisation of decision-making. Many EA organisations began as ad hoc collections of like-minded individuals with very ambitions goals but relatively little career experience. This has often led to inadequate organisational structures and procedures being established for proper management of personal, financial oversight, external auditing, or accountability to stakeholders. Let me illustrate my point with some major examples I am aware of from EA and EA-adjacent organisations:Weak governance structures and financial oversight at the Singularity Institute, leading to the theft of over $100,000 in 2009.Inadequate record keeping, rapid executive turnover, and insufficient board oversight at the Centre for Effective Altruism over the period 2016-2019.Inadequate financial record keeping at 80,000 Hours during 2018.Insufficient oversight, unhealthy power dynamics, and other harmful practices reported at MIRI/CFAR during 2015-2017.Similar problems reported at the EA-adjacent organisation Leverage Research during 2017-2019.'Loose norms around board of directors and conflicts of interests between funding orgs and grantees' at FTX and the Future Fund from 2021-2022.While these specific issues are somewhat diverse, I think what they have in common is an insufficient emphasis on principles of good organisational governance. This ranges from the most basic such as clear objectives and good record keeping, to more complex issues such as external auditing, good systems of accountability, transparency of the organisation to its stakeholders, avoiding conflicts of interest, and ensuring that systems exist to protect participants in asymmetric power relationships. I believe that these aspects of good governance and robust institution building have not been very highly valued in the broader EA community. In my experience, EAs like to talk about philosophy, outreach, career choice, and other nerdy stuff. Discussing best practise of organisational governance and systems of accountability doesn't seem very high status or 'sexy' in the EA space.There has been some discussion of such issues on this forum (e.g. this thoughtful post), but overall EA culture seems to have failed to properly absorb these lessons.EA projects are often run by small groups of young idealistic people who have similar educational and social backgrounds, who often socialise together, and (in many cases) participate in romantic relationships with one another - The case of Sam Bankman-Fried and Caroline Ellison is certainly not the only such example in the EA community. The EA culture seems to be heavily influenced by start-up culture and entrepreneurialism, with a focus on moving quickly and relying on finding highly-skilled and highly-aligned people and then providing them funding and space to work with minimal oversi...]]>
Fods12 https://forum.effectivealtruism.org/posts/sEpWkCvvJfoEbhnsd/the-ftx-crisis-highlights-a-deeper-cultural-problem-within Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governance, published by Fods12 on November 14, 2022 on The Effective Altruism Forum.IntroductionIn this piece, I will explain why I don't think the collapse of FTX and resulting fallout for Future Fund and EA community in general is a one-off or 'black swan' event as some have argued on this forum. Rather, I think that what happened was part of a broader pattern of failures and oversights that have been persistent within EA and EA-adjacent organisations since the beginning of the movement.As a disclaimer, I do not have any inside knowledge or special expertise about FTX or any of the other organisations I will mention in this post. I speak simply as a long-standing and concerned member of the EA community.Weak Norms of GovernanceThe essential point I want to make in this post is that the EA community has not been very successful in fostering norms of transparency, accountability, and institutionalisation of decision-making. Many EA organisations began as ad hoc collections of like-minded individuals with very ambitions goals but relatively little career experience. This has often led to inadequate organisational structures and procedures being established for proper management of personal, financial oversight, external auditing, or accountability to stakeholders. Let me illustrate my point with some major examples I am aware of from EA and EA-adjacent organisations:Weak governance structures and financial oversight at the Singularity Institute, leading to the theft of over $100,000 in 2009.Inadequate record keeping, rapid executive turnover, and insufficient board oversight at the Centre for Effective Altruism over the period 2016-2019.Inadequate financial record keeping at 80,000 Hours during 2018.Insufficient oversight, unhealthy power dynamics, and other harmful practices reported at MIRI/CFAR during 2015-2017.Similar problems reported at the EA-adjacent organisation Leverage Research during 2017-2019.'Loose norms around board of directors and conflicts of interests between funding orgs and grantees' at FTX and the Future Fund from 2021-2022.While these specific issues are somewhat diverse, I think what they have in common is an insufficient emphasis on principles of good organisational governance. This ranges from the most basic such as clear objectives and good record keeping, to more complex issues such as external auditing, good systems of accountability, transparency of the organisation to its stakeholders, avoiding conflicts of interest, and ensuring that systems exist to protect participants in asymmetric power relationships. I believe that these aspects of good governance and robust institution building have not been very highly valued in the broader EA community. In my experience, EAs like to talk about philosophy, outreach, career choice, and other nerdy stuff. Discussing best practise of organisational governance and systems of accountability doesn't seem very high status or 'sexy' in the EA space.There has been some discussion of such issues on this forum (e.g. this thoughtful post), but overall EA culture seems to have failed to properly absorb these lessons.EA projects are often run by small groups of young idealistic people who have similar educational and social backgrounds, who often socialise together, and (in many cases) participate in romantic relationships with one another - The case of Sam Bankman-Fried and Caroline Ellison is certainly not the only such example in the EA community. The EA culture seems to be heavily influenced by start-up culture and entrepreneurialism, with a focus on moving quickly and relying on finding highly-skilled and highly-aligned people and then providing them funding and space to work with minimal oversi...]]>
Mon, 14 Nov 2022 13:04:08 +0000 EA - The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governance by Fods12 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governance, published by Fods12 on November 14, 2022 on The Effective Altruism Forum.IntroductionIn this piece, I will explain why I don't think the collapse of FTX and resulting fallout for Future Fund and EA community in general is a one-off or 'black swan' event as some have argued on this forum. Rather, I think that what happened was part of a broader pattern of failures and oversights that have been persistent within EA and EA-adjacent organisations since the beginning of the movement.As a disclaimer, I do not have any inside knowledge or special expertise about FTX or any of the other organisations I will mention in this post. I speak simply as a long-standing and concerned member of the EA community.Weak Norms of GovernanceThe essential point I want to make in this post is that the EA community has not been very successful in fostering norms of transparency, accountability, and institutionalisation of decision-making. Many EA organisations began as ad hoc collections of like-minded individuals with very ambitions goals but relatively little career experience. This has often led to inadequate organisational structures and procedures being established for proper management of personal, financial oversight, external auditing, or accountability to stakeholders. Let me illustrate my point with some major examples I am aware of from EA and EA-adjacent organisations:Weak governance structures and financial oversight at the Singularity Institute, leading to the theft of over $100,000 in 2009.Inadequate record keeping, rapid executive turnover, and insufficient board oversight at the Centre for Effective Altruism over the period 2016-2019.Inadequate financial record keeping at 80,000 Hours during 2018.Insufficient oversight, unhealthy power dynamics, and other harmful practices reported at MIRI/CFAR during 2015-2017.Similar problems reported at the EA-adjacent organisation Leverage Research during 2017-2019.'Loose norms around board of directors and conflicts of interests between funding orgs and grantees' at FTX and the Future Fund from 2021-2022.While these specific issues are somewhat diverse, I think what they have in common is an insufficient emphasis on principles of good organisational governance. This ranges from the most basic such as clear objectives and good record keeping, to more complex issues such as external auditing, good systems of accountability, transparency of the organisation to its stakeholders, avoiding conflicts of interest, and ensuring that systems exist to protect participants in asymmetric power relationships. I believe that these aspects of good governance and robust institution building have not been very highly valued in the broader EA community. In my experience, EAs like to talk about philosophy, outreach, career choice, and other nerdy stuff. Discussing best practise of organisational governance and systems of accountability doesn't seem very high status or 'sexy' in the EA space.There has been some discussion of such issues on this forum (e.g. this thoughtful post), but overall EA culture seems to have failed to properly absorb these lessons.EA projects are often run by small groups of young idealistic people who have similar educational and social backgrounds, who often socialise together, and (in many cases) participate in romantic relationships with one another - The case of Sam Bankman-Fried and Caroline Ellison is certainly not the only such example in the EA community. The EA culture seems to be heavily influenced by start-up culture and entrepreneurialism, with a focus on moving quickly and relying on finding highly-skilled and highly-aligned people and then providing them funding and space to work with minimal oversi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governance, published by Fods12 on November 14, 2022 on The Effective Altruism Forum.IntroductionIn this piece, I will explain why I don't think the collapse of FTX and resulting fallout for Future Fund and EA community in general is a one-off or 'black swan' event as some have argued on this forum. Rather, I think that what happened was part of a broader pattern of failures and oversights that have been persistent within EA and EA-adjacent organisations since the beginning of the movement.As a disclaimer, I do not have any inside knowledge or special expertise about FTX or any of the other organisations I will mention in this post. I speak simply as a long-standing and concerned member of the EA community.Weak Norms of GovernanceThe essential point I want to make in this post is that the EA community has not been very successful in fostering norms of transparency, accountability, and institutionalisation of decision-making. Many EA organisations began as ad hoc collections of like-minded individuals with very ambitions goals but relatively little career experience. This has often led to inadequate organisational structures and procedures being established for proper management of personal, financial oversight, external auditing, or accountability to stakeholders. Let me illustrate my point with some major examples I am aware of from EA and EA-adjacent organisations:Weak governance structures and financial oversight at the Singularity Institute, leading to the theft of over $100,000 in 2009.Inadequate record keeping, rapid executive turnover, and insufficient board oversight at the Centre for Effective Altruism over the period 2016-2019.Inadequate financial record keeping at 80,000 Hours during 2018.Insufficient oversight, unhealthy power dynamics, and other harmful practices reported at MIRI/CFAR during 2015-2017.Similar problems reported at the EA-adjacent organisation Leverage Research during 2017-2019.'Loose norms around board of directors and conflicts of interests between funding orgs and grantees' at FTX and the Future Fund from 2021-2022.While these specific issues are somewhat diverse, I think what they have in common is an insufficient emphasis on principles of good organisational governance. This ranges from the most basic such as clear objectives and good record keeping, to more complex issues such as external auditing, good systems of accountability, transparency of the organisation to its stakeholders, avoiding conflicts of interest, and ensuring that systems exist to protect participants in asymmetric power relationships. I believe that these aspects of good governance and robust institution building have not been very highly valued in the broader EA community. In my experience, EAs like to talk about philosophy, outreach, career choice, and other nerdy stuff. Discussing best practise of organisational governance and systems of accountability doesn't seem very high status or 'sexy' in the EA space.There has been some discussion of such issues on this forum (e.g. this thoughtful post), but overall EA culture seems to have failed to properly absorb these lessons.EA projects are often run by small groups of young idealistic people who have similar educational and social backgrounds, who often socialise together, and (in many cases) participate in romantic relationships with one another - The case of Sam Bankman-Fried and Caroline Ellison is certainly not the only such example in the EA community. The EA culture seems to be heavily influenced by start-up culture and entrepreneurialism, with a focus on moving quickly and relying on finding highly-skilled and highly-aligned people and then providing them funding and space to work with minimal oversi...]]>
Fods12 https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:28 None full 3754
iCYxXmFri5QSj9K6J_EA EA - Lessons from taking group members to an EAGx conference by Irene H Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lessons from taking group members to an EAGx conference, published by Irene H on November 14, 2022 on The Effective Altruism Forum.We (Irene and Jelle) run the EA Eindhoven university group and went to EAGxRotterdam (4-6 November 2022) with 16 of our members.Our members told us they really enjoyed the conference and have plans for how they want to pursue their EA journeys. We are excited that almost all Dutch universities have EA groups now and felt EAGxRotterdam was the capstone of this year in which the Dutch EA community has really taken off.These are some of the lessons we learned and best practices we discovered in our preparation for the conference as well as what we did during the conference itself. We hope other group organizers can benefit from these lessons too. We of course hope to visit many more EA conferences in the future and grow as community builders, which means this guide is a work in progress. We are also excited to learn about the best practices of other community builders and welcome their suggestions. Feel free to place comments!Some things to keep in mind when reading thisCircumstance-specific things that were the case for us:Our group is only a few months old, our members all became engaged with EA only recently (most of them through our Introduction Fellowship) and our two organizers were the only people who had ever attended an EA conference before. This is why we put quite a lot of effort into encouraging members to apply and helping them prepare.Rotterdam is only a 1-hour train ride from Eindhoven, so it was relatively easy for us to convince members to attend. Some of our members stayed with friends or traveled back and forth every day.The conference was on the weekend between two exam weeks at our university and a few of our members cited this as a reason for not applying. One of our members got accepted but never showed up to the conference in order to work on a class assignment. We really regret these things but do not know what we could have done about them.Because this conference in the Netherlands was such a rare event, we also advised some members to apply even though we were not 100% sure if they were engaged enough in EA. We would probably be stricter with our advice to them about this for conferences that are further away.For members who had not done an Introduction Fellowship (or equivalent), we made it clear that the conference was not going to be useful if they did not do some kind of preparation. We agreed with them that they would go over the EA Handbook and scheduled a 1-on-1 to discuss the preparation. In the end, all people who were interested were willing to do this preparation and carried it out. We spoke to them after to conference and they told us they found the conference interesting and returned with new ideas.We had 3 people show up at our collective application night, which is not a lot, but they all applied. We had approximately 10 people in total show up at our preparation evenings (we hosted 2 in a student café on our campus).We were both volunteers at EAGxRotterdam, had a lot of 1-on-1s scheduled and Jelle was also a speaker. Irene also met a community builder at the conference who was mainly there to guide his members, but in our case, that was not our only priority.Encourage and help members to applyPlan other programs (especially the Introduction Fellowship) so that they finish in time to still have space for promoting the conference and giving members the time to apply.Pitch the conference during your programs and eventsHost a collective application nightGuide for Collective application night by the EAGxRotterdam team that we usedFocus on members you think would benefit most from the conferenceMembers who have already done at least an Introduction Fellowship (or equivalent)Have 1-on-1...]]>
Irene H https://forum.effectivealtruism.org/posts/iCYxXmFri5QSj9K6J/lessons-from-taking-group-members-to-an-eagx-conference Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lessons from taking group members to an EAGx conference, published by Irene H on November 14, 2022 on The Effective Altruism Forum.We (Irene and Jelle) run the EA Eindhoven university group and went to EAGxRotterdam (4-6 November 2022) with 16 of our members.Our members told us they really enjoyed the conference and have plans for how they want to pursue their EA journeys. We are excited that almost all Dutch universities have EA groups now and felt EAGxRotterdam was the capstone of this year in which the Dutch EA community has really taken off.These are some of the lessons we learned and best practices we discovered in our preparation for the conference as well as what we did during the conference itself. We hope other group organizers can benefit from these lessons too. We of course hope to visit many more EA conferences in the future and grow as community builders, which means this guide is a work in progress. We are also excited to learn about the best practices of other community builders and welcome their suggestions. Feel free to place comments!Some things to keep in mind when reading thisCircumstance-specific things that were the case for us:Our group is only a few months old, our members all became engaged with EA only recently (most of them through our Introduction Fellowship) and our two organizers were the only people who had ever attended an EA conference before. This is why we put quite a lot of effort into encouraging members to apply and helping them prepare.Rotterdam is only a 1-hour train ride from Eindhoven, so it was relatively easy for us to convince members to attend. Some of our members stayed with friends or traveled back and forth every day.The conference was on the weekend between two exam weeks at our university and a few of our members cited this as a reason for not applying. One of our members got accepted but never showed up to the conference in order to work on a class assignment. We really regret these things but do not know what we could have done about them.Because this conference in the Netherlands was such a rare event, we also advised some members to apply even though we were not 100% sure if they were engaged enough in EA. We would probably be stricter with our advice to them about this for conferences that are further away.For members who had not done an Introduction Fellowship (or equivalent), we made it clear that the conference was not going to be useful if they did not do some kind of preparation. We agreed with them that they would go over the EA Handbook and scheduled a 1-on-1 to discuss the preparation. In the end, all people who were interested were willing to do this preparation and carried it out. We spoke to them after to conference and they told us they found the conference interesting and returned with new ideas.We had 3 people show up at our collective application night, which is not a lot, but they all applied. We had approximately 10 people in total show up at our preparation evenings (we hosted 2 in a student café on our campus).We were both volunteers at EAGxRotterdam, had a lot of 1-on-1s scheduled and Jelle was also a speaker. Irene also met a community builder at the conference who was mainly there to guide his members, but in our case, that was not our only priority.Encourage and help members to applyPlan other programs (especially the Introduction Fellowship) so that they finish in time to still have space for promoting the conference and giving members the time to apply.Pitch the conference during your programs and eventsHost a collective application nightGuide for Collective application night by the EAGxRotterdam team that we usedFocus on members you think would benefit most from the conferenceMembers who have already done at least an Introduction Fellowship (or equivalent)Have 1-on-1...]]>
Mon, 14 Nov 2022 12:59:48 +0000 EA - Lessons from taking group members to an EAGx conference by Irene H Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lessons from taking group members to an EAGx conference, published by Irene H on November 14, 2022 on The Effective Altruism Forum.We (Irene and Jelle) run the EA Eindhoven university group and went to EAGxRotterdam (4-6 November 2022) with 16 of our members.Our members told us they really enjoyed the conference and have plans for how they want to pursue their EA journeys. We are excited that almost all Dutch universities have EA groups now and felt EAGxRotterdam was the capstone of this year in which the Dutch EA community has really taken off.These are some of the lessons we learned and best practices we discovered in our preparation for the conference as well as what we did during the conference itself. We hope other group organizers can benefit from these lessons too. We of course hope to visit many more EA conferences in the future and grow as community builders, which means this guide is a work in progress. We are also excited to learn about the best practices of other community builders and welcome their suggestions. Feel free to place comments!Some things to keep in mind when reading thisCircumstance-specific things that were the case for us:Our group is only a few months old, our members all became engaged with EA only recently (most of them through our Introduction Fellowship) and our two organizers were the only people who had ever attended an EA conference before. This is why we put quite a lot of effort into encouraging members to apply and helping them prepare.Rotterdam is only a 1-hour train ride from Eindhoven, so it was relatively easy for us to convince members to attend. Some of our members stayed with friends or traveled back and forth every day.The conference was on the weekend between two exam weeks at our university and a few of our members cited this as a reason for not applying. One of our members got accepted but never showed up to the conference in order to work on a class assignment. We really regret these things but do not know what we could have done about them.Because this conference in the Netherlands was such a rare event, we also advised some members to apply even though we were not 100% sure if they were engaged enough in EA. We would probably be stricter with our advice to them about this for conferences that are further away.For members who had not done an Introduction Fellowship (or equivalent), we made it clear that the conference was not going to be useful if they did not do some kind of preparation. We agreed with them that they would go over the EA Handbook and scheduled a 1-on-1 to discuss the preparation. In the end, all people who were interested were willing to do this preparation and carried it out. We spoke to them after to conference and they told us they found the conference interesting and returned with new ideas.We had 3 people show up at our collective application night, which is not a lot, but they all applied. We had approximately 10 people in total show up at our preparation evenings (we hosted 2 in a student café on our campus).We were both volunteers at EAGxRotterdam, had a lot of 1-on-1s scheduled and Jelle was also a speaker. Irene also met a community builder at the conference who was mainly there to guide his members, but in our case, that was not our only priority.Encourage and help members to applyPlan other programs (especially the Introduction Fellowship) so that they finish in time to still have space for promoting the conference and giving members the time to apply.Pitch the conference during your programs and eventsHost a collective application nightGuide for Collective application night by the EAGxRotterdam team that we usedFocus on members you think would benefit most from the conferenceMembers who have already done at least an Introduction Fellowship (or equivalent)Have 1-on-1...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lessons from taking group members to an EAGx conference, published by Irene H on November 14, 2022 on The Effective Altruism Forum.We (Irene and Jelle) run the EA Eindhoven university group and went to EAGxRotterdam (4-6 November 2022) with 16 of our members.Our members told us they really enjoyed the conference and have plans for how they want to pursue their EA journeys. We are excited that almost all Dutch universities have EA groups now and felt EAGxRotterdam was the capstone of this year in which the Dutch EA community has really taken off.These are some of the lessons we learned and best practices we discovered in our preparation for the conference as well as what we did during the conference itself. We hope other group organizers can benefit from these lessons too. We of course hope to visit many more EA conferences in the future and grow as community builders, which means this guide is a work in progress. We are also excited to learn about the best practices of other community builders and welcome their suggestions. Feel free to place comments!Some things to keep in mind when reading thisCircumstance-specific things that were the case for us:Our group is only a few months old, our members all became engaged with EA only recently (most of them through our Introduction Fellowship) and our two organizers were the only people who had ever attended an EA conference before. This is why we put quite a lot of effort into encouraging members to apply and helping them prepare.Rotterdam is only a 1-hour train ride from Eindhoven, so it was relatively easy for us to convince members to attend. Some of our members stayed with friends or traveled back and forth every day.The conference was on the weekend between two exam weeks at our university and a few of our members cited this as a reason for not applying. One of our members got accepted but never showed up to the conference in order to work on a class assignment. We really regret these things but do not know what we could have done about them.Because this conference in the Netherlands was such a rare event, we also advised some members to apply even though we were not 100% sure if they were engaged enough in EA. We would probably be stricter with our advice to them about this for conferences that are further away.For members who had not done an Introduction Fellowship (or equivalent), we made it clear that the conference was not going to be useful if they did not do some kind of preparation. We agreed with them that they would go over the EA Handbook and scheduled a 1-on-1 to discuss the preparation. In the end, all people who were interested were willing to do this preparation and carried it out. We spoke to them after to conference and they told us they found the conference interesting and returned with new ideas.We had 3 people show up at our collective application night, which is not a lot, but they all applied. We had approximately 10 people in total show up at our preparation evenings (we hosted 2 in a student café on our campus).We were both volunteers at EAGxRotterdam, had a lot of 1-on-1s scheduled and Jelle was also a speaker. Irene also met a community builder at the conference who was mainly there to guide his members, but in our case, that was not our only priority.Encourage and help members to applyPlan other programs (especially the Introduction Fellowship) so that they finish in time to still have space for promoting the conference and giving members the time to apply.Pitch the conference during your programs and eventsHost a collective application nightGuide for Collective application night by the EAGxRotterdam team that we usedFocus on members you think would benefit most from the conferenceMembers who have already done at least an Introduction Fellowship (or equivalent)Have 1-on-1...]]>
Irene H https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:27 None full 3756
LYqkptuAiPQcmmGbs_EA EA - AI Safety Microgrant Round by Chris Leong Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Microgrant Round, published by Chris Leong on November 14, 2022 on The Effective Altruism Forum.We are pleased to announce an AI Safety Microgrants Round, which will provide micro-grants to field-building projects and other initiatives that can be done with less.We believe there are projects and individuals in the AI Safety space who lack funding but have high agency and potential. We think individuals helping to fund projects will be particularly important given recent changes in the availability of funding. For this reason, we decided to experiment with a small micro-grant round as a test and demonstration of this concept.To keep the evaluation simple, we’re focusing on field-building projects rather than projects that would require complex evaluation (we expect that most technical projects would be much more difficult to evaluate).We are offering microgrants up to $2,000 USD with the total size of this round being $6,000 USD (we know that this is a tiny round, but we are running this as a proof-of-concept). One possible way this could pan out would be two grants of $2000 and two of $1000, although we aren’t wedded and we are fine with request for less. We want to fund grant requests of this size where EA funds possibly has a bit too much overhead.The process is as follows:Fill out this form at microgrant.ai (<15 min).Shortlisted applications receive a follow-up call within two weeks after the applications closeWe may also send an email with follow-up questions if we need more information. We would expect a reply within a few days so that we could confirm the grant within a weekTo inspire applicants, here are some examples of projects where a microgrant would have been helpful:Chris recently received a grant through an Australian-based organisation to hire two facilitators at $1,000 each for running a local version of the AGI safety fundamentals course. Intro fellowships have proven to be a great way of engaging people and we would be excited about funding this if it would help kickstart a new local AI safety group rather than just being an isolated project.We might have funded something like the AI safety nudge competition to help people overcome their procrastination, and are excited about the potential to motivate a large number of people to accelerate their AI safety journeys and about experimenting with a potentially scalable intervention.The Sydney AI Safety Fellowship was originally going to be funded with three people each throwing in $2000, which would have been enough for a coworking space, weekly lunches and some socials for a few participants. We would be excited about funding a similar project if they were likely to attract good candidates - especially in communities and countries where this is currently nonexistent.We are open to smaller grant applications, but in a lot of cases, small grants don't make much of a counterfactual difference, however here are two examples of the kinds of grants we would be excited about:One of us thinks there should be a logo for AI safety just like EA has the light bulb. This may be considered for a microgrant.One of us recently granted $200 to half subsidise ten maths lessons. This allowed the grantee to build up evidence to subsequently get a grant to half-subsidise their lessons. We are excited about grants that assist someone in testing their fit. One of us had also granted $100 to someone in a low and middle-income country to get them a device for self-study so they could upskill themselves.In contrast, here are the kinds of grants we wouldn't be very excited about:Any project with significant downside risk$2000 towards a project which requires $8,000 which hasn't been obtained yetFunding to add one more fellow to a program that already has ten fellows. These kind of grants could be imp...]]>
Chris Leong https://forum.effectivealtruism.org/posts/LYqkptuAiPQcmmGbs/ai-safety-microgrant-round Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Microgrant Round, published by Chris Leong on November 14, 2022 on The Effective Altruism Forum.We are pleased to announce an AI Safety Microgrants Round, which will provide micro-grants to field-building projects and other initiatives that can be done with less.We believe there are projects and individuals in the AI Safety space who lack funding but have high agency and potential. We think individuals helping to fund projects will be particularly important given recent changes in the availability of funding. For this reason, we decided to experiment with a small micro-grant round as a test and demonstration of this concept.To keep the evaluation simple, we’re focusing on field-building projects rather than projects that would require complex evaluation (we expect that most technical projects would be much more difficult to evaluate).We are offering microgrants up to $2,000 USD with the total size of this round being $6,000 USD (we know that this is a tiny round, but we are running this as a proof-of-concept). One possible way this could pan out would be two grants of $2000 and two of $1000, although we aren’t wedded and we are fine with request for less. We want to fund grant requests of this size where EA funds possibly has a bit too much overhead.The process is as follows:Fill out this form at microgrant.ai (<15 min).Shortlisted applications receive a follow-up call within two weeks after the applications closeWe may also send an email with follow-up questions if we need more information. We would expect a reply within a few days so that we could confirm the grant within a weekTo inspire applicants, here are some examples of projects where a microgrant would have been helpful:Chris recently received a grant through an Australian-based organisation to hire two facilitators at $1,000 each for running a local version of the AGI safety fundamentals course. Intro fellowships have proven to be a great way of engaging people and we would be excited about funding this if it would help kickstart a new local AI safety group rather than just being an isolated project.We might have funded something like the AI safety nudge competition to help people overcome their procrastination, and are excited about the potential to motivate a large number of people to accelerate their AI safety journeys and about experimenting with a potentially scalable intervention.The Sydney AI Safety Fellowship was originally going to be funded with three people each throwing in $2000, which would have been enough for a coworking space, weekly lunches and some socials for a few participants. We would be excited about funding a similar project if they were likely to attract good candidates - especially in communities and countries where this is currently nonexistent.We are open to smaller grant applications, but in a lot of cases, small grants don't make much of a counterfactual difference, however here are two examples of the kinds of grants we would be excited about:One of us thinks there should be a logo for AI safety just like EA has the light bulb. This may be considered for a microgrant.One of us recently granted $200 to half subsidise ten maths lessons. This allowed the grantee to build up evidence to subsequently get a grant to half-subsidise their lessons. We are excited about grants that assist someone in testing their fit. One of us had also granted $100 to someone in a low and middle-income country to get them a device for self-study so they could upskill themselves.In contrast, here are the kinds of grants we wouldn't be very excited about:Any project with significant downside risk$2000 towards a project which requires $8,000 which hasn't been obtained yetFunding to add one more fellow to a program that already has ten fellows. These kind of grants could be imp...]]>
Mon, 14 Nov 2022 05:53:27 +0000 EA - AI Safety Microgrant Round by Chris Leong Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Microgrant Round, published by Chris Leong on November 14, 2022 on The Effective Altruism Forum.We are pleased to announce an AI Safety Microgrants Round, which will provide micro-grants to field-building projects and other initiatives that can be done with less.We believe there are projects and individuals in the AI Safety space who lack funding but have high agency and potential. We think individuals helping to fund projects will be particularly important given recent changes in the availability of funding. For this reason, we decided to experiment with a small micro-grant round as a test and demonstration of this concept.To keep the evaluation simple, we’re focusing on field-building projects rather than projects that would require complex evaluation (we expect that most technical projects would be much more difficult to evaluate).We are offering microgrants up to $2,000 USD with the total size of this round being $6,000 USD (we know that this is a tiny round, but we are running this as a proof-of-concept). One possible way this could pan out would be two grants of $2000 and two of $1000, although we aren’t wedded and we are fine with request for less. We want to fund grant requests of this size where EA funds possibly has a bit too much overhead.The process is as follows:Fill out this form at microgrant.ai (<15 min).Shortlisted applications receive a follow-up call within two weeks after the applications closeWe may also send an email with follow-up questions if we need more information. We would expect a reply within a few days so that we could confirm the grant within a weekTo inspire applicants, here are some examples of projects where a microgrant would have been helpful:Chris recently received a grant through an Australian-based organisation to hire two facilitators at $1,000 each for running a local version of the AGI safety fundamentals course. Intro fellowships have proven to be a great way of engaging people and we would be excited about funding this if it would help kickstart a new local AI safety group rather than just being an isolated project.We might have funded something like the AI safety nudge competition to help people overcome their procrastination, and are excited about the potential to motivate a large number of people to accelerate their AI safety journeys and about experimenting with a potentially scalable intervention.The Sydney AI Safety Fellowship was originally going to be funded with three people each throwing in $2000, which would have been enough for a coworking space, weekly lunches and some socials for a few participants. We would be excited about funding a similar project if they were likely to attract good candidates - especially in communities and countries where this is currently nonexistent.We are open to smaller grant applications, but in a lot of cases, small grants don't make much of a counterfactual difference, however here are two examples of the kinds of grants we would be excited about:One of us thinks there should be a logo for AI safety just like EA has the light bulb. This may be considered for a microgrant.One of us recently granted $200 to half subsidise ten maths lessons. This allowed the grantee to build up evidence to subsequently get a grant to half-subsidise their lessons. We are excited about grants that assist someone in testing their fit. One of us had also granted $100 to someone in a low and middle-income country to get them a device for self-study so they could upskill themselves.In contrast, here are the kinds of grants we wouldn't be very excited about:Any project with significant downside risk$2000 towards a project which requires $8,000 which hasn't been obtained yetFunding to add one more fellow to a program that already has ten fellows. These kind of grants could be imp...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Microgrant Round, published by Chris Leong on November 14, 2022 on The Effective Altruism Forum.We are pleased to announce an AI Safety Microgrants Round, which will provide micro-grants to field-building projects and other initiatives that can be done with less.We believe there are projects and individuals in the AI Safety space who lack funding but have high agency and potential. We think individuals helping to fund projects will be particularly important given recent changes in the availability of funding. For this reason, we decided to experiment with a small micro-grant round as a test and demonstration of this concept.To keep the evaluation simple, we’re focusing on field-building projects rather than projects that would require complex evaluation (we expect that most technical projects would be much more difficult to evaluate).We are offering microgrants up to $2,000 USD with the total size of this round being $6,000 USD (we know that this is a tiny round, but we are running this as a proof-of-concept). One possible way this could pan out would be two grants of $2000 and two of $1000, although we aren’t wedded and we are fine with request for less. We want to fund grant requests of this size where EA funds possibly has a bit too much overhead.The process is as follows:Fill out this form at microgrant.ai (<15 min).Shortlisted applications receive a follow-up call within two weeks after the applications closeWe may also send an email with follow-up questions if we need more information. We would expect a reply within a few days so that we could confirm the grant within a weekTo inspire applicants, here are some examples of projects where a microgrant would have been helpful:Chris recently received a grant through an Australian-based organisation to hire two facilitators at $1,000 each for running a local version of the AGI safety fundamentals course. Intro fellowships have proven to be a great way of engaging people and we would be excited about funding this if it would help kickstart a new local AI safety group rather than just being an isolated project.We might have funded something like the AI safety nudge competition to help people overcome their procrastination, and are excited about the potential to motivate a large number of people to accelerate their AI safety journeys and about experimenting with a potentially scalable intervention.The Sydney AI Safety Fellowship was originally going to be funded with three people each throwing in $2000, which would have been enough for a coworking space, weekly lunches and some socials for a few participants. We would be excited about funding a similar project if they were likely to attract good candidates - especially in communities and countries where this is currently nonexistent.We are open to smaller grant applications, but in a lot of cases, small grants don't make much of a counterfactual difference, however here are two examples of the kinds of grants we would be excited about:One of us thinks there should be a logo for AI safety just like EA has the light bulb. This may be considered for a microgrant.One of us recently granted $200 to half subsidise ten maths lessons. This allowed the grantee to build up evidence to subsequently get a grant to half-subsidise their lessons. We are excited about grants that assist someone in testing their fit. One of us had also granted $100 to someone in a low and middle-income country to get them a device for self-study so they could upskill themselves.In contrast, here are the kinds of grants we wouldn't be very excited about:Any project with significant downside risk$2000 towards a project which requires $8,000 which hasn't been obtained yetFunding to add one more fellow to a program that already has ten fellows. These kind of grants could be imp...]]>
Chris Leong https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:34 None full 3744
efGNMe6uB87qXozXJ_EA EA - NY Times on the FTX implosion's impact on EA by AllAmericanBreakfast Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: NY Times on the FTX implosion's impact on EA, published by AllAmericanBreakfast on November 14, 2022 on The Effective Altruism Forum.The impact of the FTX scandal on EA is starting to hit the news. The coverage in this NY Times article seems fair to me. I also think that the FTX Future Fund leadership decision to jointly resign both was the right thing to do, and comes across that way in the article. Will MacAskill, I think, is continuing to show leadership interfacing with the media - it's a big transition from his book tour not long ago to giving quotes to the press about FTX's implosion.The article focuses on the impact this has had on EA:[The collapse of FTX] has also dealt a significant blow to the corner of philanthropy known as effective altruism, a philosophy that advocates applying data and evidence to doing the most good for the many and that is deeply tied to Mr. Bankman-Fried, one of its leading proponents and donors. Now nonprofits are scrambling to replace millions in grant commitments from Mr. Bankman-Fried’s charitable vehicles, and members of the effective altruism community are asking themselves whether they might have helped burnish his reputation...... For a relatively young movement that was already wrestling over its growth and focus, such a high-profile scandal implicating one of the group’s most famous proponents represents a significant setback.The article mentions the FTX Future Fund joint resignation, focusing on the grants that will not be able to be honored and what those might have helped.The article talks about Will MacAskill inspiring SBF to switch his career plans to pursue earning to give, but doesn't try to blame the fraud on utilitarianism or on EA. This is my opinion, but I'm just confused by people's eagerness to blame this on utilitarianism or the EA movement. The common-sense American lens to view these sorts of outcomes is a framework of personal responsibility. If SBF committed fraud, that is indicative of a problem with his personal character, not the moral philosophy he claims to subscribe to.His connection to the movement in fact predates the vast fortune he won and lost in the cryptocurrency field. Over lunch a decade ago while he was still in college, Mr. Bankman-Fried told Mr. MacAskill, the philosopher, that he wanted to work on animal-welfare issues. Mr. MacAskill suggested the young man could do more good earning large sums of money and donating the bulk of it to good causes instead.Mr. Bankman-Fried went into finance with the stated intention of making a fortune that he could then give away. In an interview with The New York Times last month about effective altruism, Mr. Bankman-Fried said he planned to give away a vast majority of his fortune in the next 10 to 20 years to effective altruist causes. He did not respond to a request for comment for this article.Contrary to my expectation, the article was pretty straightforward in describing the global health/longtermism aspects of EA:Effective altruism focuses on the question of how individuals can do as much good as possible with the money and time available to them. Historically, the community focused on low-cost medical interventions, such as insecticide-treated bed nets to prevent mosquitoes from giving people malaria.More recently many members of the movement have focused on issues that could have a greater impact on the future, like pandemic prevention and nuclear nonproliferation as well as preventing artificial intelligence from running amok and sending people to distant planets to increase our chances of survival as a species.Probably the most critical aspect of the article was this:Benjamin Soskis, senior research associate in the Center on Nonprofits and Philanthropy at the Urban Institute, said that the issues raised by Mr. Bankman-Fried’s rev...]]>
AllAmericanBreakfast https://forum.effectivealtruism.org/posts/efGNMe6uB87qXozXJ/ny-times-on-the-ftx-implosion-s-impact-on-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: NY Times on the FTX implosion's impact on EA, published by AllAmericanBreakfast on November 14, 2022 on The Effective Altruism Forum.The impact of the FTX scandal on EA is starting to hit the news. The coverage in this NY Times article seems fair to me. I also think that the FTX Future Fund leadership decision to jointly resign both was the right thing to do, and comes across that way in the article. Will MacAskill, I think, is continuing to show leadership interfacing with the media - it's a big transition from his book tour not long ago to giving quotes to the press about FTX's implosion.The article focuses on the impact this has had on EA:[The collapse of FTX] has also dealt a significant blow to the corner of philanthropy known as effective altruism, a philosophy that advocates applying data and evidence to doing the most good for the many and that is deeply tied to Mr. Bankman-Fried, one of its leading proponents and donors. Now nonprofits are scrambling to replace millions in grant commitments from Mr. Bankman-Fried’s charitable vehicles, and members of the effective altruism community are asking themselves whether they might have helped burnish his reputation...... For a relatively young movement that was already wrestling over its growth and focus, such a high-profile scandal implicating one of the group’s most famous proponents represents a significant setback.The article mentions the FTX Future Fund joint resignation, focusing on the grants that will not be able to be honored and what those might have helped.The article talks about Will MacAskill inspiring SBF to switch his career plans to pursue earning to give, but doesn't try to blame the fraud on utilitarianism or on EA. This is my opinion, but I'm just confused by people's eagerness to blame this on utilitarianism or the EA movement. The common-sense American lens to view these sorts of outcomes is a framework of personal responsibility. If SBF committed fraud, that is indicative of a problem with his personal character, not the moral philosophy he claims to subscribe to.His connection to the movement in fact predates the vast fortune he won and lost in the cryptocurrency field. Over lunch a decade ago while he was still in college, Mr. Bankman-Fried told Mr. MacAskill, the philosopher, that he wanted to work on animal-welfare issues. Mr. MacAskill suggested the young man could do more good earning large sums of money and donating the bulk of it to good causes instead.Mr. Bankman-Fried went into finance with the stated intention of making a fortune that he could then give away. In an interview with The New York Times last month about effective altruism, Mr. Bankman-Fried said he planned to give away a vast majority of his fortune in the next 10 to 20 years to effective altruist causes. He did not respond to a request for comment for this article.Contrary to my expectation, the article was pretty straightforward in describing the global health/longtermism aspects of EA:Effective altruism focuses on the question of how individuals can do as much good as possible with the money and time available to them. Historically, the community focused on low-cost medical interventions, such as insecticide-treated bed nets to prevent mosquitoes from giving people malaria.More recently many members of the movement have focused on issues that could have a greater impact on the future, like pandemic prevention and nuclear nonproliferation as well as preventing artificial intelligence from running amok and sending people to distant planets to increase our chances of survival as a species.Probably the most critical aspect of the article was this:Benjamin Soskis, senior research associate in the Center on Nonprofits and Philanthropy at the Urban Institute, said that the issues raised by Mr. Bankman-Fried’s rev...]]>
Mon, 14 Nov 2022 04:37:13 +0000 EA - NY Times on the FTX implosion's impact on EA by AllAmericanBreakfast Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: NY Times on the FTX implosion's impact on EA, published by AllAmericanBreakfast on November 14, 2022 on The Effective Altruism Forum.The impact of the FTX scandal on EA is starting to hit the news. The coverage in this NY Times article seems fair to me. I also think that the FTX Future Fund leadership decision to jointly resign both was the right thing to do, and comes across that way in the article. Will MacAskill, I think, is continuing to show leadership interfacing with the media - it's a big transition from his book tour not long ago to giving quotes to the press about FTX's implosion.The article focuses on the impact this has had on EA:[The collapse of FTX] has also dealt a significant blow to the corner of philanthropy known as effective altruism, a philosophy that advocates applying data and evidence to doing the most good for the many and that is deeply tied to Mr. Bankman-Fried, one of its leading proponents and donors. Now nonprofits are scrambling to replace millions in grant commitments from Mr. Bankman-Fried’s charitable vehicles, and members of the effective altruism community are asking themselves whether they might have helped burnish his reputation...... For a relatively young movement that was already wrestling over its growth and focus, such a high-profile scandal implicating one of the group’s most famous proponents represents a significant setback.The article mentions the FTX Future Fund joint resignation, focusing on the grants that will not be able to be honored and what those might have helped.The article talks about Will MacAskill inspiring SBF to switch his career plans to pursue earning to give, but doesn't try to blame the fraud on utilitarianism or on EA. This is my opinion, but I'm just confused by people's eagerness to blame this on utilitarianism or the EA movement. The common-sense American lens to view these sorts of outcomes is a framework of personal responsibility. If SBF committed fraud, that is indicative of a problem with his personal character, not the moral philosophy he claims to subscribe to.His connection to the movement in fact predates the vast fortune he won and lost in the cryptocurrency field. Over lunch a decade ago while he was still in college, Mr. Bankman-Fried told Mr. MacAskill, the philosopher, that he wanted to work on animal-welfare issues. Mr. MacAskill suggested the young man could do more good earning large sums of money and donating the bulk of it to good causes instead.Mr. Bankman-Fried went into finance with the stated intention of making a fortune that he could then give away. In an interview with The New York Times last month about effective altruism, Mr. Bankman-Fried said he planned to give away a vast majority of his fortune in the next 10 to 20 years to effective altruist causes. He did not respond to a request for comment for this article.Contrary to my expectation, the article was pretty straightforward in describing the global health/longtermism aspects of EA:Effective altruism focuses on the question of how individuals can do as much good as possible with the money and time available to them. Historically, the community focused on low-cost medical interventions, such as insecticide-treated bed nets to prevent mosquitoes from giving people malaria.More recently many members of the movement have focused on issues that could have a greater impact on the future, like pandemic prevention and nuclear nonproliferation as well as preventing artificial intelligence from running amok and sending people to distant planets to increase our chances of survival as a species.Probably the most critical aspect of the article was this:Benjamin Soskis, senior research associate in the Center on Nonprofits and Philanthropy at the Urban Institute, said that the issues raised by Mr. Bankman-Fried’s rev...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: NY Times on the FTX implosion's impact on EA, published by AllAmericanBreakfast on November 14, 2022 on The Effective Altruism Forum.The impact of the FTX scandal on EA is starting to hit the news. The coverage in this NY Times article seems fair to me. I also think that the FTX Future Fund leadership decision to jointly resign both was the right thing to do, and comes across that way in the article. Will MacAskill, I think, is continuing to show leadership interfacing with the media - it's a big transition from his book tour not long ago to giving quotes to the press about FTX's implosion.The article focuses on the impact this has had on EA:[The collapse of FTX] has also dealt a significant blow to the corner of philanthropy known as effective altruism, a philosophy that advocates applying data and evidence to doing the most good for the many and that is deeply tied to Mr. Bankman-Fried, one of its leading proponents and donors. Now nonprofits are scrambling to replace millions in grant commitments from Mr. Bankman-Fried’s charitable vehicles, and members of the effective altruism community are asking themselves whether they might have helped burnish his reputation...... For a relatively young movement that was already wrestling over its growth and focus, such a high-profile scandal implicating one of the group’s most famous proponents represents a significant setback.The article mentions the FTX Future Fund joint resignation, focusing on the grants that will not be able to be honored and what those might have helped.The article talks about Will MacAskill inspiring SBF to switch his career plans to pursue earning to give, but doesn't try to blame the fraud on utilitarianism or on EA. This is my opinion, but I'm just confused by people's eagerness to blame this on utilitarianism or the EA movement. The common-sense American lens to view these sorts of outcomes is a framework of personal responsibility. If SBF committed fraud, that is indicative of a problem with his personal character, not the moral philosophy he claims to subscribe to.His connection to the movement in fact predates the vast fortune he won and lost in the cryptocurrency field. Over lunch a decade ago while he was still in college, Mr. Bankman-Fried told Mr. MacAskill, the philosopher, that he wanted to work on animal-welfare issues. Mr. MacAskill suggested the young man could do more good earning large sums of money and donating the bulk of it to good causes instead.Mr. Bankman-Fried went into finance with the stated intention of making a fortune that he could then give away. In an interview with The New York Times last month about effective altruism, Mr. Bankman-Fried said he planned to give away a vast majority of his fortune in the next 10 to 20 years to effective altruist causes. He did not respond to a request for comment for this article.Contrary to my expectation, the article was pretty straightforward in describing the global health/longtermism aspects of EA:Effective altruism focuses on the question of how individuals can do as much good as possible with the money and time available to them. Historically, the community focused on low-cost medical interventions, such as insecticide-treated bed nets to prevent mosquitoes from giving people malaria.More recently many members of the movement have focused on issues that could have a greater impact on the future, like pandemic prevention and nuclear nonproliferation as well as preventing artificial intelligence from running amok and sending people to distant planets to increase our chances of survival as a species.Probably the most critical aspect of the article was this:Benjamin Soskis, senior research associate in the Center on Nonprofits and Philanthropy at the Urban Institute, said that the issues raised by Mr. Bankman-Fried’s rev...]]>
AllAmericanBreakfast https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:31 None full 3745
DB9ggzc5u9RMBosoz_EA EA - Wrong lessons from the FTX catastrophe by burner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wrong lessons from the FTX catastrophe, published by burner on November 14, 2022 on The Effective Altruism Forum.There remains a large amount of uncertainty about what exactly happened inside FTX and Alameda. I do not offer any new takes on what occurred. However, there are the inklings of some "lessons" from the situation that are incorrect regardless of how the details flesh out. Pointing these out now may seem in poor taste while many are still coming to terms with what happened, but it important to do so before they become the canonical lessons.1. Ambition was a mistakeThere have been a number of calls for EA to "go back to bed nets." It is notable that this refrain conflates the alleged illegal and unethical behavior from FTX/Alameda with the philosophical position of longtermism. Rather than evaluating the two issues as distinct, the call seems to assume both were born out of the same runaway ambition.Logically, this is obviously not the case. Longtermism grew in popularity during SBFs rise, and Future Fund did focus on longtermist projects, but FTX/Alameda situation has no bearing on the truth of longtermism and associated project's prioritization.To the extent that both becoming incredibly rich and affecting the longterm trajectory of humanity are "ambitious" goals, ambition is not the problem. Committing financial crimes is a problem. Longtermism has problems, like knowing how to act given uncertainty about the future. But an enlightened understanding of ambition accommodates these problems: We should be ambitious in our goals while understanding our limitations in solving them.There is an uncomfortable reality that SBF symbolized a new level of ambition for EA. That sense of ambition should be retained. His malfeasance should not be.This is not to say that there may be lessons to learn about transparency, overconfidence, centralized power, trusting leaders, etc from these events. But all of these are distinct from a lesson about ambition, which depends more on vague allusions to Icarus than argument.2. No more earning to giveI am not sure that this is being learned as a "lesson" or if this situation simply "leaves a bad taste" in EA's mouths about earning to give. The alleged actions of the FTX/Alameda team in no way suggest that earning money to donate to effective causes is a poor career path.Certain employees of FTX/Alameda seem to have been doing distinctly unethical work, such as grossly mismanaging client funds. One of the only arguments for why one might be open to working a possibly unethical job like trading crypto currency is because an ethically motivated actor would do that job in a more ethical way than the next person would (from Todd and MacAskill). Earning to give never asked EAs to pursue unethical work, and encouraged them to pursue any line of work in an ethically upstanding way.(I emphasize Todd and MacAskill conclude in their 2017 post: "We believe that in the vast majority of cases, it’s a mistake to pursue a career in which the direct effects of the work are seriously harmful").Earning to give in the way it has always been advised-by doing work that is basically ethical-continues to be a highly promising route to impact. This is especially true when total EA assets have significantly depreciated.There is a risk that EAs do not continue to pursue earning to give, thinking either that it is icky post-FTX or that someone else has it covered. This is a poor strategy. It is imperative that some EAs who are well suited for founding companies shift their careers into entrepreneurship as soon as possible.3. General FUD from nefarious actorsAs the EA community reals from the FTX/Alameda blow up, a number of actors with histories of hating EA have chimed in with threads about how this catastrophe is in line with X thing they alre...]]>
burner https://forum.effectivealtruism.org/posts/DB9ggzc5u9RMBosoz/wrong-lessons-from-the-ftx-catastrophe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wrong lessons from the FTX catastrophe, published by burner on November 14, 2022 on The Effective Altruism Forum.There remains a large amount of uncertainty about what exactly happened inside FTX and Alameda. I do not offer any new takes on what occurred. However, there are the inklings of some "lessons" from the situation that are incorrect regardless of how the details flesh out. Pointing these out now may seem in poor taste while many are still coming to terms with what happened, but it important to do so before they become the canonical lessons.1. Ambition was a mistakeThere have been a number of calls for EA to "go back to bed nets." It is notable that this refrain conflates the alleged illegal and unethical behavior from FTX/Alameda with the philosophical position of longtermism. Rather than evaluating the two issues as distinct, the call seems to assume both were born out of the same runaway ambition.Logically, this is obviously not the case. Longtermism grew in popularity during SBFs rise, and Future Fund did focus on longtermist projects, but FTX/Alameda situation has no bearing on the truth of longtermism and associated project's prioritization.To the extent that both becoming incredibly rich and affecting the longterm trajectory of humanity are "ambitious" goals, ambition is not the problem. Committing financial crimes is a problem. Longtermism has problems, like knowing how to act given uncertainty about the future. But an enlightened understanding of ambition accommodates these problems: We should be ambitious in our goals while understanding our limitations in solving them.There is an uncomfortable reality that SBF symbolized a new level of ambition for EA. That sense of ambition should be retained. His malfeasance should not be.This is not to say that there may be lessons to learn about transparency, overconfidence, centralized power, trusting leaders, etc from these events. But all of these are distinct from a lesson about ambition, which depends more on vague allusions to Icarus than argument.2. No more earning to giveI am not sure that this is being learned as a "lesson" or if this situation simply "leaves a bad taste" in EA's mouths about earning to give. The alleged actions of the FTX/Alameda team in no way suggest that earning money to donate to effective causes is a poor career path.Certain employees of FTX/Alameda seem to have been doing distinctly unethical work, such as grossly mismanaging client funds. One of the only arguments for why one might be open to working a possibly unethical job like trading crypto currency is because an ethically motivated actor would do that job in a more ethical way than the next person would (from Todd and MacAskill). Earning to give never asked EAs to pursue unethical work, and encouraged them to pursue any line of work in an ethically upstanding way.(I emphasize Todd and MacAskill conclude in their 2017 post: "We believe that in the vast majority of cases, it’s a mistake to pursue a career in which the direct effects of the work are seriously harmful").Earning to give in the way it has always been advised-by doing work that is basically ethical-continues to be a highly promising route to impact. This is especially true when total EA assets have significantly depreciated.There is a risk that EAs do not continue to pursue earning to give, thinking either that it is icky post-FTX or that someone else has it covered. This is a poor strategy. It is imperative that some EAs who are well suited for founding companies shift their careers into entrepreneurship as soon as possible.3. General FUD from nefarious actorsAs the EA community reals from the FTX/Alameda blow up, a number of actors with histories of hating EA have chimed in with threads about how this catastrophe is in line with X thing they alre...]]>
Mon, 14 Nov 2022 01:14:15 +0000 EA - Wrong lessons from the FTX catastrophe by burner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wrong lessons from the FTX catastrophe, published by burner on November 14, 2022 on The Effective Altruism Forum.There remains a large amount of uncertainty about what exactly happened inside FTX and Alameda. I do not offer any new takes on what occurred. However, there are the inklings of some "lessons" from the situation that are incorrect regardless of how the details flesh out. Pointing these out now may seem in poor taste while many are still coming to terms with what happened, but it important to do so before they become the canonical lessons.1. Ambition was a mistakeThere have been a number of calls for EA to "go back to bed nets." It is notable that this refrain conflates the alleged illegal and unethical behavior from FTX/Alameda with the philosophical position of longtermism. Rather than evaluating the two issues as distinct, the call seems to assume both were born out of the same runaway ambition.Logically, this is obviously not the case. Longtermism grew in popularity during SBFs rise, and Future Fund did focus on longtermist projects, but FTX/Alameda situation has no bearing on the truth of longtermism and associated project's prioritization.To the extent that both becoming incredibly rich and affecting the longterm trajectory of humanity are "ambitious" goals, ambition is not the problem. Committing financial crimes is a problem. Longtermism has problems, like knowing how to act given uncertainty about the future. But an enlightened understanding of ambition accommodates these problems: We should be ambitious in our goals while understanding our limitations in solving them.There is an uncomfortable reality that SBF symbolized a new level of ambition for EA. That sense of ambition should be retained. His malfeasance should not be.This is not to say that there may be lessons to learn about transparency, overconfidence, centralized power, trusting leaders, etc from these events. But all of these are distinct from a lesson about ambition, which depends more on vague allusions to Icarus than argument.2. No more earning to giveI am not sure that this is being learned as a "lesson" or if this situation simply "leaves a bad taste" in EA's mouths about earning to give. The alleged actions of the FTX/Alameda team in no way suggest that earning money to donate to effective causes is a poor career path.Certain employees of FTX/Alameda seem to have been doing distinctly unethical work, such as grossly mismanaging client funds. One of the only arguments for why one might be open to working a possibly unethical job like trading crypto currency is because an ethically motivated actor would do that job in a more ethical way than the next person would (from Todd and MacAskill). Earning to give never asked EAs to pursue unethical work, and encouraged them to pursue any line of work in an ethically upstanding way.(I emphasize Todd and MacAskill conclude in their 2017 post: "We believe that in the vast majority of cases, it’s a mistake to pursue a career in which the direct effects of the work are seriously harmful").Earning to give in the way it has always been advised-by doing work that is basically ethical-continues to be a highly promising route to impact. This is especially true when total EA assets have significantly depreciated.There is a risk that EAs do not continue to pursue earning to give, thinking either that it is icky post-FTX or that someone else has it covered. This is a poor strategy. It is imperative that some EAs who are well suited for founding companies shift their careers into entrepreneurship as soon as possible.3. General FUD from nefarious actorsAs the EA community reals from the FTX/Alameda blow up, a number of actors with histories of hating EA have chimed in with threads about how this catastrophe is in line with X thing they alre...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wrong lessons from the FTX catastrophe, published by burner on November 14, 2022 on The Effective Altruism Forum.There remains a large amount of uncertainty about what exactly happened inside FTX and Alameda. I do not offer any new takes on what occurred. However, there are the inklings of some "lessons" from the situation that are incorrect regardless of how the details flesh out. Pointing these out now may seem in poor taste while many are still coming to terms with what happened, but it important to do so before they become the canonical lessons.1. Ambition was a mistakeThere have been a number of calls for EA to "go back to bed nets." It is notable that this refrain conflates the alleged illegal and unethical behavior from FTX/Alameda with the philosophical position of longtermism. Rather than evaluating the two issues as distinct, the call seems to assume both were born out of the same runaway ambition.Logically, this is obviously not the case. Longtermism grew in popularity during SBFs rise, and Future Fund did focus on longtermist projects, but FTX/Alameda situation has no bearing on the truth of longtermism and associated project's prioritization.To the extent that both becoming incredibly rich and affecting the longterm trajectory of humanity are "ambitious" goals, ambition is not the problem. Committing financial crimes is a problem. Longtermism has problems, like knowing how to act given uncertainty about the future. But an enlightened understanding of ambition accommodates these problems: We should be ambitious in our goals while understanding our limitations in solving them.There is an uncomfortable reality that SBF symbolized a new level of ambition for EA. That sense of ambition should be retained. His malfeasance should not be.This is not to say that there may be lessons to learn about transparency, overconfidence, centralized power, trusting leaders, etc from these events. But all of these are distinct from a lesson about ambition, which depends more on vague allusions to Icarus than argument.2. No more earning to giveI am not sure that this is being learned as a "lesson" or if this situation simply "leaves a bad taste" in EA's mouths about earning to give. The alleged actions of the FTX/Alameda team in no way suggest that earning money to donate to effective causes is a poor career path.Certain employees of FTX/Alameda seem to have been doing distinctly unethical work, such as grossly mismanaging client funds. One of the only arguments for why one might be open to working a possibly unethical job like trading crypto currency is because an ethically motivated actor would do that job in a more ethical way than the next person would (from Todd and MacAskill). Earning to give never asked EAs to pursue unethical work, and encouraged them to pursue any line of work in an ethically upstanding way.(I emphasize Todd and MacAskill conclude in their 2017 post: "We believe that in the vast majority of cases, it’s a mistake to pursue a career in which the direct effects of the work are seriously harmful").Earning to give in the way it has always been advised-by doing work that is basically ethical-continues to be a highly promising route to impact. This is especially true when total EA assets have significantly depreciated.There is a risk that EAs do not continue to pursue earning to give, thinking either that it is icky post-FTX or that someone else has it covered. This is a poor strategy. It is imperative that some EAs who are well suited for founding companies shift their careers into entrepreneurship as soon as possible.3. General FUD from nefarious actorsAs the EA community reals from the FTX/Alameda blow up, a number of actors with histories of hating EA have chimed in with threads about how this catastrophe is in line with X thing they alre...]]>
burner https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:40 None full 3746
7q8uEwksmP5wK6kqA_EA EA - How the FTX crash damaged the Altruistic Agency by Markus Amalthea Magnuson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How the FTX crash damaged the Altruistic Agency, published by Markus Amalthea Magnuson on November 13, 2022 on The Effective Altruism Forum.IntroductionFirst, I’d like to express my deepest sympathies to everyone affected by the FTX breakdown. A lot of people have lost their life savings and are suffering terribly, something I do not wish to diminish in any way. Many are affected in far worse ways than I can even imagine, and I hope as many of those as possible will be able to find the support they need to get through these challenging times.I got confirmation this week on Wednesday (Nov 9) that payouts from the FTX Future Fund had stopped. Since I had an outstanding grant with them, this was of great concern to me personally. The days following that, and what is still unravelling minute by minute, it seems, was the complete meltdown of FTX and all related entities, including the Future Fund. You are all following these events in other places, so I won’t go into that much, but I wanted to offer a perspective from a Future Fund grantee and the specific ways something like this can do damage.This post is also a call for support since all funding for my next year of operations has suddenly evaporated entirely. Specifically, if you or someone you know is a grantmaker or donor to EA meta/infrastructure/operations projects and would be interested in funding a new organisation with a good track record, please get in touch. I’d be happy to share much more details and data on the proven value so far, my grant application to the Future Fund, and anything else that might be of interest.I will also mention the benefits of being vigilant about organisational structure and how it can save your organisation in the long run, even though it might be an upfront and ongoing cost of time and money.BackgroundI founded the Altruistic Agency in January this year. The idea was to apply my knowledge and experience from 15+ years as a full-stack developer to help organisations and others in the EA community with tech expertise. I hypothesised that the kind of work I had done mostly in a commercial context for most of my professional life is also highly valuable to nonprofits/charities and others doing high-impact work within EA cause areas. I have always been fond of meta projects, and this seemed like something that could greatly increase productivity (like technology does), especially by saving people lots of time that they could instead spend on their core mission.Thanks to a grant from the EA Infrastructure Fund, I was able to test this hypothesis full-time and spent the first half of this year providing free tech support to many EA organisations and individuals. During that time, I worked with around 45 of them in areas such as existential risk, animal advocacy, climate, legal research, mental health, and effective fundraising. The work ranged from small tasks, such as improving email deliverability, fixing website bugs, and making software recommendations, to larger tasks, such as building websites, doing security audits, and software integration.The response from the community was overwhelmingly positive, and the data from the January–June pilot phase of the Altruistic Agency indicates high value. I solved issues (on median) in a fifth of the time it would have taken organisations themselves. The data indicates just one person doing this work for six months saved EA organisations at least 900 hours of work. Additionally, many respondents said they learned a lot from working with me, both about their own systems and setups and about tech in general. Not least, increased awareness of security issues in code and systems, which will only become more crucial over time, and can be significantly harmful if not dealt with properly.Insights from the pilot phase told me two important t...]]>
Markus Amalthea Magnuson https://forum.effectivealtruism.org/posts/7q8uEwksmP5wK6kqA/how-the-ftx-crash-damaged-the-altruistic-agency Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How the FTX crash damaged the Altruistic Agency, published by Markus Amalthea Magnuson on November 13, 2022 on The Effective Altruism Forum.IntroductionFirst, I’d like to express my deepest sympathies to everyone affected by the FTX breakdown. A lot of people have lost their life savings and are suffering terribly, something I do not wish to diminish in any way. Many are affected in far worse ways than I can even imagine, and I hope as many of those as possible will be able to find the support they need to get through these challenging times.I got confirmation this week on Wednesday (Nov 9) that payouts from the FTX Future Fund had stopped. Since I had an outstanding grant with them, this was of great concern to me personally. The days following that, and what is still unravelling minute by minute, it seems, was the complete meltdown of FTX and all related entities, including the Future Fund. You are all following these events in other places, so I won’t go into that much, but I wanted to offer a perspective from a Future Fund grantee and the specific ways something like this can do damage.This post is also a call for support since all funding for my next year of operations has suddenly evaporated entirely. Specifically, if you or someone you know is a grantmaker or donor to EA meta/infrastructure/operations projects and would be interested in funding a new organisation with a good track record, please get in touch. I’d be happy to share much more details and data on the proven value so far, my grant application to the Future Fund, and anything else that might be of interest.I will also mention the benefits of being vigilant about organisational structure and how it can save your organisation in the long run, even though it might be an upfront and ongoing cost of time and money.BackgroundI founded the Altruistic Agency in January this year. The idea was to apply my knowledge and experience from 15+ years as a full-stack developer to help organisations and others in the EA community with tech expertise. I hypothesised that the kind of work I had done mostly in a commercial context for most of my professional life is also highly valuable to nonprofits/charities and others doing high-impact work within EA cause areas. I have always been fond of meta projects, and this seemed like something that could greatly increase productivity (like technology does), especially by saving people lots of time that they could instead spend on their core mission.Thanks to a grant from the EA Infrastructure Fund, I was able to test this hypothesis full-time and spent the first half of this year providing free tech support to many EA organisations and individuals. During that time, I worked with around 45 of them in areas such as existential risk, animal advocacy, climate, legal research, mental health, and effective fundraising. The work ranged from small tasks, such as improving email deliverability, fixing website bugs, and making software recommendations, to larger tasks, such as building websites, doing security audits, and software integration.The response from the community was overwhelmingly positive, and the data from the January–June pilot phase of the Altruistic Agency indicates high value. I solved issues (on median) in a fifth of the time it would have taken organisations themselves. The data indicates just one person doing this work for six months saved EA organisations at least 900 hours of work. Additionally, many respondents said they learned a lot from working with me, both about their own systems and setups and about tech in general. Not least, increased awareness of security issues in code and systems, which will only become more crucial over time, and can be significantly harmful if not dealt with properly.Insights from the pilot phase told me two important t...]]>
Mon, 14 Nov 2022 00:20:56 +0000 EA - How the FTX crash damaged the Altruistic Agency by Markus Amalthea Magnuson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How the FTX crash damaged the Altruistic Agency, published by Markus Amalthea Magnuson on November 13, 2022 on The Effective Altruism Forum.IntroductionFirst, I’d like to express my deepest sympathies to everyone affected by the FTX breakdown. A lot of people have lost their life savings and are suffering terribly, something I do not wish to diminish in any way. Many are affected in far worse ways than I can even imagine, and I hope as many of those as possible will be able to find the support they need to get through these challenging times.I got confirmation this week on Wednesday (Nov 9) that payouts from the FTX Future Fund had stopped. Since I had an outstanding grant with them, this was of great concern to me personally. The days following that, and what is still unravelling minute by minute, it seems, was the complete meltdown of FTX and all related entities, including the Future Fund. You are all following these events in other places, so I won’t go into that much, but I wanted to offer a perspective from a Future Fund grantee and the specific ways something like this can do damage.This post is also a call for support since all funding for my next year of operations has suddenly evaporated entirely. Specifically, if you or someone you know is a grantmaker or donor to EA meta/infrastructure/operations projects and would be interested in funding a new organisation with a good track record, please get in touch. I’d be happy to share much more details and data on the proven value so far, my grant application to the Future Fund, and anything else that might be of interest.I will also mention the benefits of being vigilant about organisational structure and how it can save your organisation in the long run, even though it might be an upfront and ongoing cost of time and money.BackgroundI founded the Altruistic Agency in January this year. The idea was to apply my knowledge and experience from 15+ years as a full-stack developer to help organisations and others in the EA community with tech expertise. I hypothesised that the kind of work I had done mostly in a commercial context for most of my professional life is also highly valuable to nonprofits/charities and others doing high-impact work within EA cause areas. I have always been fond of meta projects, and this seemed like something that could greatly increase productivity (like technology does), especially by saving people lots of time that they could instead spend on their core mission.Thanks to a grant from the EA Infrastructure Fund, I was able to test this hypothesis full-time and spent the first half of this year providing free tech support to many EA organisations and individuals. During that time, I worked with around 45 of them in areas such as existential risk, animal advocacy, climate, legal research, mental health, and effective fundraising. The work ranged from small tasks, such as improving email deliverability, fixing website bugs, and making software recommendations, to larger tasks, such as building websites, doing security audits, and software integration.The response from the community was overwhelmingly positive, and the data from the January–June pilot phase of the Altruistic Agency indicates high value. I solved issues (on median) in a fifth of the time it would have taken organisations themselves. The data indicates just one person doing this work for six months saved EA organisations at least 900 hours of work. Additionally, many respondents said they learned a lot from working with me, both about their own systems and setups and about tech in general. Not least, increased awareness of security issues in code and systems, which will only become more crucial over time, and can be significantly harmful if not dealt with properly.Insights from the pilot phase told me two important t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How the FTX crash damaged the Altruistic Agency, published by Markus Amalthea Magnuson on November 13, 2022 on The Effective Altruism Forum.IntroductionFirst, I’d like to express my deepest sympathies to everyone affected by the FTX breakdown. A lot of people have lost their life savings and are suffering terribly, something I do not wish to diminish in any way. Many are affected in far worse ways than I can even imagine, and I hope as many of those as possible will be able to find the support they need to get through these challenging times.I got confirmation this week on Wednesday (Nov 9) that payouts from the FTX Future Fund had stopped. Since I had an outstanding grant with them, this was of great concern to me personally. The days following that, and what is still unravelling minute by minute, it seems, was the complete meltdown of FTX and all related entities, including the Future Fund. You are all following these events in other places, so I won’t go into that much, but I wanted to offer a perspective from a Future Fund grantee and the specific ways something like this can do damage.This post is also a call for support since all funding for my next year of operations has suddenly evaporated entirely. Specifically, if you or someone you know is a grantmaker or donor to EA meta/infrastructure/operations projects and would be interested in funding a new organisation with a good track record, please get in touch. I’d be happy to share much more details and data on the proven value so far, my grant application to the Future Fund, and anything else that might be of interest.I will also mention the benefits of being vigilant about organisational structure and how it can save your organisation in the long run, even though it might be an upfront and ongoing cost of time and money.BackgroundI founded the Altruistic Agency in January this year. The idea was to apply my knowledge and experience from 15+ years as a full-stack developer to help organisations and others in the EA community with tech expertise. I hypothesised that the kind of work I had done mostly in a commercial context for most of my professional life is also highly valuable to nonprofits/charities and others doing high-impact work within EA cause areas. I have always been fond of meta projects, and this seemed like something that could greatly increase productivity (like technology does), especially by saving people lots of time that they could instead spend on their core mission.Thanks to a grant from the EA Infrastructure Fund, I was able to test this hypothesis full-time and spent the first half of this year providing free tech support to many EA organisations and individuals. During that time, I worked with around 45 of them in areas such as existential risk, animal advocacy, climate, legal research, mental health, and effective fundraising. The work ranged from small tasks, such as improving email deliverability, fixing website bugs, and making software recommendations, to larger tasks, such as building websites, doing security audits, and software integration.The response from the community was overwhelmingly positive, and the data from the January–June pilot phase of the Altruistic Agency indicates high value. I solved issues (on median) in a fifth of the time it would have taken organisations themselves. The data indicates just one person doing this work for six months saved EA organisations at least 900 hours of work. Additionally, many respondents said they learned a lot from working with me, both about their own systems and setups and about tech in general. Not least, increased awareness of security issues in code and systems, which will only become more crucial over time, and can be significantly harmful if not dealt with properly.Insights from the pilot phase told me two important t...]]>
Markus Amalthea Magnuson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:43 None full 3747
ArDhtEcRbkwo42N9H_EA EA - The FTX Situation: Wait for more information before proposing solutions by D0TheMath Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX Situation: Wait for more information before proposing solutions, published by D0TheMath on November 13, 2022 on The Effective Altruism Forum.Edit: Eli has a great comment on this which I suggest everyone read. He corrects me on a few things, and gives his far more informed takes.I'm slightly scared that EA will overcorrect in an irrelevant direction to the FTX situation in a way I think is net harmful, and I think a major reason for this fear is seeing lots of people espousing conclusions about solutions to problems without us actually knowing what the problems are yet.Some examples of this I've seen recently on the forum follow.IntegrityIt is uncertain whether SBF intentionally committed fraud, or just made a mistake, but people seem to be reacting as if the takeaway from this is that fraud is bad.These articles are mostly saying things of the form 'if FTX engaged in fraud, then EA needs to make sure people don't do more fraud in the service of utilitarianism.' from a worrying-about-group-think perspective, this is only a little less concerning than directly saying 'FTX engaged in fraud, so EA should make sure people don't do more fraud'.Even though these articles aren't literally saying that FTX engaged in fraud in the service of utilitarianism, I worry these articles will shift the narrative EA tells itself towards up-weighting hypotheses which say FTX engaged in fraud in the service of utilitarianism, especially in worlds where it turned out that FTX did commit fraud, but it was motivated by pride, or other selfish desires.DatingSome have claimed FTX's downfall happened as a result of everyone sleeping with each other, and this interpretation is not obviously unpopular on the forum. This seems quite unlikely compared to alternative explanations, and the post Women and Effective Altruism takes on a tone & content I find toxic to community epistemics[1], and anticipate wouldn't fly on the forum a week ago.I worry the reason we see this post now is that EA is confused, wants to do something, and is really searching for anything to blame for the FTX situation. If you are confused about what your problems are, you should not go searching for solutions! You should ask questions, make predictions, and try to understand what's going on. Then you should ask how you could have prevented or mitigated the bad events, and ask whether those prevention and mitigation efforts would be worth their costs.I think this problem is important to address, and am uncertain about whether this post is good or bad on net. The point is that I'm seeing a bunch of heated emotions on the forum right now, this is not like the forum I'm used to, and lots of these heated discussions seem to be directed towards pushing new EA policy proposals rather than trying to figure out what's going on.Vetting fundingWe could immediately launch a costly investigation to see who had knowledge of fraud that occurred before we actually know if fraud occured or why. In worlds where we’re wrong about whether or why fraud occurred this would be very costly. My suggestion: wait for information to costlessly come out, discuss what happened when not in the midst of the fog and emotions of current events, and then decide whether we should launch this costly investigation.Adjacently, some are arguing EA could have vetted FTX and Sam better, and averted this situation. This reeks of hindsight bias! Probably EA could not have done better than all the investors who originally vetted FTX before giving them a buttload of money!Maybe EA should investigate funders more, but arguments for this are orthogonal to recent events, unless CEA believes their comparative advantage in the wider market is high-quality vetting of corporations. If so, they could stand to make quite a bit of money selling this service,...]]>
D0TheMath https://forum.effectivealtruism.org/posts/ArDhtEcRbkwo42N9H/the-ftx-situation-wait-for-more-information-before-proposing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX Situation: Wait for more information before proposing solutions, published by D0TheMath on November 13, 2022 on The Effective Altruism Forum.Edit: Eli has a great comment on this which I suggest everyone read. He corrects me on a few things, and gives his far more informed takes.I'm slightly scared that EA will overcorrect in an irrelevant direction to the FTX situation in a way I think is net harmful, and I think a major reason for this fear is seeing lots of people espousing conclusions about solutions to problems without us actually knowing what the problems are yet.Some examples of this I've seen recently on the forum follow.IntegrityIt is uncertain whether SBF intentionally committed fraud, or just made a mistake, but people seem to be reacting as if the takeaway from this is that fraud is bad.These articles are mostly saying things of the form 'if FTX engaged in fraud, then EA needs to make sure people don't do more fraud in the service of utilitarianism.' from a worrying-about-group-think perspective, this is only a little less concerning than directly saying 'FTX engaged in fraud, so EA should make sure people don't do more fraud'.Even though these articles aren't literally saying that FTX engaged in fraud in the service of utilitarianism, I worry these articles will shift the narrative EA tells itself towards up-weighting hypotheses which say FTX engaged in fraud in the service of utilitarianism, especially in worlds where it turned out that FTX did commit fraud, but it was motivated by pride, or other selfish desires.DatingSome have claimed FTX's downfall happened as a result of everyone sleeping with each other, and this interpretation is not obviously unpopular on the forum. This seems quite unlikely compared to alternative explanations, and the post Women and Effective Altruism takes on a tone & content I find toxic to community epistemics[1], and anticipate wouldn't fly on the forum a week ago.I worry the reason we see this post now is that EA is confused, wants to do something, and is really searching for anything to blame for the FTX situation. If you are confused about what your problems are, you should not go searching for solutions! You should ask questions, make predictions, and try to understand what's going on. Then you should ask how you could have prevented or mitigated the bad events, and ask whether those prevention and mitigation efforts would be worth their costs.I think this problem is important to address, and am uncertain about whether this post is good or bad on net. The point is that I'm seeing a bunch of heated emotions on the forum right now, this is not like the forum I'm used to, and lots of these heated discussions seem to be directed towards pushing new EA policy proposals rather than trying to figure out what's going on.Vetting fundingWe could immediately launch a costly investigation to see who had knowledge of fraud that occurred before we actually know if fraud occured or why. In worlds where we’re wrong about whether or why fraud occurred this would be very costly. My suggestion: wait for information to costlessly come out, discuss what happened when not in the midst of the fog and emotions of current events, and then decide whether we should launch this costly investigation.Adjacently, some are arguing EA could have vetted FTX and Sam better, and averted this situation. This reeks of hindsight bias! Probably EA could not have done better than all the investors who originally vetted FTX before giving them a buttload of money!Maybe EA should investigate funders more, but arguments for this are orthogonal to recent events, unless CEA believes their comparative advantage in the wider market is high-quality vetting of corporations. If so, they could stand to make quite a bit of money selling this service,...]]>
Sun, 13 Nov 2022 21:11:49 +0000 EA - The FTX Situation: Wait for more information before proposing solutions by D0TheMath Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX Situation: Wait for more information before proposing solutions, published by D0TheMath on November 13, 2022 on The Effective Altruism Forum.Edit: Eli has a great comment on this which I suggest everyone read. He corrects me on a few things, and gives his far more informed takes.I'm slightly scared that EA will overcorrect in an irrelevant direction to the FTX situation in a way I think is net harmful, and I think a major reason for this fear is seeing lots of people espousing conclusions about solutions to problems without us actually knowing what the problems are yet.Some examples of this I've seen recently on the forum follow.IntegrityIt is uncertain whether SBF intentionally committed fraud, or just made a mistake, but people seem to be reacting as if the takeaway from this is that fraud is bad.These articles are mostly saying things of the form 'if FTX engaged in fraud, then EA needs to make sure people don't do more fraud in the service of utilitarianism.' from a worrying-about-group-think perspective, this is only a little less concerning than directly saying 'FTX engaged in fraud, so EA should make sure people don't do more fraud'.Even though these articles aren't literally saying that FTX engaged in fraud in the service of utilitarianism, I worry these articles will shift the narrative EA tells itself towards up-weighting hypotheses which say FTX engaged in fraud in the service of utilitarianism, especially in worlds where it turned out that FTX did commit fraud, but it was motivated by pride, or other selfish desires.DatingSome have claimed FTX's downfall happened as a result of everyone sleeping with each other, and this interpretation is not obviously unpopular on the forum. This seems quite unlikely compared to alternative explanations, and the post Women and Effective Altruism takes on a tone & content I find toxic to community epistemics[1], and anticipate wouldn't fly on the forum a week ago.I worry the reason we see this post now is that EA is confused, wants to do something, and is really searching for anything to blame for the FTX situation. If you are confused about what your problems are, you should not go searching for solutions! You should ask questions, make predictions, and try to understand what's going on. Then you should ask how you could have prevented or mitigated the bad events, and ask whether those prevention and mitigation efforts would be worth their costs.I think this problem is important to address, and am uncertain about whether this post is good or bad on net. The point is that I'm seeing a bunch of heated emotions on the forum right now, this is not like the forum I'm used to, and lots of these heated discussions seem to be directed towards pushing new EA policy proposals rather than trying to figure out what's going on.Vetting fundingWe could immediately launch a costly investigation to see who had knowledge of fraud that occurred before we actually know if fraud occured or why. In worlds where we’re wrong about whether or why fraud occurred this would be very costly. My suggestion: wait for information to costlessly come out, discuss what happened when not in the midst of the fog and emotions of current events, and then decide whether we should launch this costly investigation.Adjacently, some are arguing EA could have vetted FTX and Sam better, and averted this situation. This reeks of hindsight bias! Probably EA could not have done better than all the investors who originally vetted FTX before giving them a buttload of money!Maybe EA should investigate funders more, but arguments for this are orthogonal to recent events, unless CEA believes their comparative advantage in the wider market is high-quality vetting of corporations. If so, they could stand to make quite a bit of money selling this service,...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX Situation: Wait for more information before proposing solutions, published by D0TheMath on November 13, 2022 on The Effective Altruism Forum.Edit: Eli has a great comment on this which I suggest everyone read. He corrects me on a few things, and gives his far more informed takes.I'm slightly scared that EA will overcorrect in an irrelevant direction to the FTX situation in a way I think is net harmful, and I think a major reason for this fear is seeing lots of people espousing conclusions about solutions to problems without us actually knowing what the problems are yet.Some examples of this I've seen recently on the forum follow.IntegrityIt is uncertain whether SBF intentionally committed fraud, or just made a mistake, but people seem to be reacting as if the takeaway from this is that fraud is bad.These articles are mostly saying things of the form 'if FTX engaged in fraud, then EA needs to make sure people don't do more fraud in the service of utilitarianism.' from a worrying-about-group-think perspective, this is only a little less concerning than directly saying 'FTX engaged in fraud, so EA should make sure people don't do more fraud'.Even though these articles aren't literally saying that FTX engaged in fraud in the service of utilitarianism, I worry these articles will shift the narrative EA tells itself towards up-weighting hypotheses which say FTX engaged in fraud in the service of utilitarianism, especially in worlds where it turned out that FTX did commit fraud, but it was motivated by pride, or other selfish desires.DatingSome have claimed FTX's downfall happened as a result of everyone sleeping with each other, and this interpretation is not obviously unpopular on the forum. This seems quite unlikely compared to alternative explanations, and the post Women and Effective Altruism takes on a tone & content I find toxic to community epistemics[1], and anticipate wouldn't fly on the forum a week ago.I worry the reason we see this post now is that EA is confused, wants to do something, and is really searching for anything to blame for the FTX situation. If you are confused about what your problems are, you should not go searching for solutions! You should ask questions, make predictions, and try to understand what's going on. Then you should ask how you could have prevented or mitigated the bad events, and ask whether those prevention and mitigation efforts would be worth their costs.I think this problem is important to address, and am uncertain about whether this post is good or bad on net. The point is that I'm seeing a bunch of heated emotions on the forum right now, this is not like the forum I'm used to, and lots of these heated discussions seem to be directed towards pushing new EA policy proposals rather than trying to figure out what's going on.Vetting fundingWe could immediately launch a costly investigation to see who had knowledge of fraud that occurred before we actually know if fraud occured or why. In worlds where we’re wrong about whether or why fraud occurred this would be very costly. My suggestion: wait for information to costlessly come out, discuss what happened when not in the midst of the fog and emotions of current events, and then decide whether we should launch this costly investigation.Adjacently, some are arguing EA could have vetted FTX and Sam better, and averted this situation. This reeks of hindsight bias! Probably EA could not have done better than all the investors who originally vetted FTX before giving them a buttload of money!Maybe EA should investigate funders more, but arguments for this are orthogonal to recent events, unless CEA believes their comparative advantage in the wider market is high-quality vetting of corporations. If so, they could stand to make quite a bit of money selling this service,...]]>
D0TheMath https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:00 None full 3738
aryJKRDxLejPHommx_EA EA - SBF, extreme risk-taking, expected value, and effective altruism by vipulnaik Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SBF, extreme risk-taking, expected value, and effective altruism, published by vipulnaik on November 13, 2022 on The Effective Altruism Forum.NOTE: I have some indirect associations with SBF and his companies, though probably less so than many of the others who've been posting and commenting on the forum. I don't expect anything I write here to meaningfully affect how things play out in the future for me, so I don't think this creates a conflict of interest, but feel free to discount what I say.NOTE 2: I'm publishing this post without having spent the level of effort polishing and refining it that I normally try to spend. This is due to the time-sensitive nature of the subject matter and becauseI expect to get more value from being corrected in the comments on the post than from refining the post myself. If errors are pointed out, I will try to correct them, but may not always be able to make timely corrections, so if you're reading the post, please also check the comments to check for flaws identified by comments.The collapse of Sam Bankman-Fried (SBF) and his companies FTX andAlameda Research is the topic du jour on the Effective Altruism Forum, and there have been several posts on theForum discussing what happened and what we can learn from it. The post FTXFAQ provides a good summary of what we know as of the time I'm writing this post. I'm also funding work on a timeline of FTX collapse (still a work in progress, but with enough coverage already to be useful if you are starting with very little knowledge).Based on information so far, fraud and deception on the part of SBF(and/or others in FTX and/or Alameda Research) likely happened and were likely key to the way things played out and the extent of damage caused. The trigger seems to be the big loan that FTX provided toAlameda Research to bail it out, using customer funds for the purpose. If FTX hadn't bailed out Alameda, it's quite likely that the spectacular death of FTX we saw (with depositors losing all their money as well) wouldn't have happened. But it's also plausible that without the loan, the situation with Alameda Research was dire enough that Alameda Research, and then FTX, would have died due to the lack of funds. Hopefully that would have been a more graceful death with less pain to depositors. That is a very important difference. Nonetheless, I suspect that by the time of the bailout, we were already at a kind of endgame.In this post, I try to step back a bit from the endgame, and even get away from the specifics of FTX and Alameda Research (that I know very little about) and in fact even get away from the specifics of SBF's business practices (where again I know very little). Rather, I talk about SBF's overall philosophy around risk and expected value, as he has articulated himself, and has been approvingly amplified by severalEA websites and groups. I think the philosophy was key to the overall way things played out. And I also discuss the relationship between the philosophy and the ideas of effective altruism, both in the abstract and as specifically championed by many leaders in effective altruism (including the team at 80,000 Hours). My goal is to encourage people to reassess the philosophy and make appropriate updates.I make two claims:Claim 1: SBF engages in extreme risk-taking that is a crude approximation to the idea of expected value maximization as perceived by him.Claim 2: At least part of the motivation for SBF's risk-taking comes from ideas in effective altruism, and in particular specific points made by EA leaders including people affiliated with 80,000Hours. While personality probably accounts for a lot of SBF's decisions, the role of EA ideas as a catalyst cannot be dismissed based on the evidence.Here are a few things I am not claiming (some of these are discussed ...]]>
vipulnaik https://forum.effectivealtruism.org/posts/aryJKRDxLejPHommx/sbf-extreme-risk-taking-expected-value-and-effective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SBF, extreme risk-taking, expected value, and effective altruism, published by vipulnaik on November 13, 2022 on The Effective Altruism Forum.NOTE: I have some indirect associations with SBF and his companies, though probably less so than many of the others who've been posting and commenting on the forum. I don't expect anything I write here to meaningfully affect how things play out in the future for me, so I don't think this creates a conflict of interest, but feel free to discount what I say.NOTE 2: I'm publishing this post without having spent the level of effort polishing and refining it that I normally try to spend. This is due to the time-sensitive nature of the subject matter and becauseI expect to get more value from being corrected in the comments on the post than from refining the post myself. If errors are pointed out, I will try to correct them, but may not always be able to make timely corrections, so if you're reading the post, please also check the comments to check for flaws identified by comments.The collapse of Sam Bankman-Fried (SBF) and his companies FTX andAlameda Research is the topic du jour on the Effective Altruism Forum, and there have been several posts on theForum discussing what happened and what we can learn from it. The post FTXFAQ provides a good summary of what we know as of the time I'm writing this post. I'm also funding work on a timeline of FTX collapse (still a work in progress, but with enough coverage already to be useful if you are starting with very little knowledge).Based on information so far, fraud and deception on the part of SBF(and/or others in FTX and/or Alameda Research) likely happened and were likely key to the way things played out and the extent of damage caused. The trigger seems to be the big loan that FTX provided toAlameda Research to bail it out, using customer funds for the purpose. If FTX hadn't bailed out Alameda, it's quite likely that the spectacular death of FTX we saw (with depositors losing all their money as well) wouldn't have happened. But it's also plausible that without the loan, the situation with Alameda Research was dire enough that Alameda Research, and then FTX, would have died due to the lack of funds. Hopefully that would have been a more graceful death with less pain to depositors. That is a very important difference. Nonetheless, I suspect that by the time of the bailout, we were already at a kind of endgame.In this post, I try to step back a bit from the endgame, and even get away from the specifics of FTX and Alameda Research (that I know very little about) and in fact even get away from the specifics of SBF's business practices (where again I know very little). Rather, I talk about SBF's overall philosophy around risk and expected value, as he has articulated himself, and has been approvingly amplified by severalEA websites and groups. I think the philosophy was key to the overall way things played out. And I also discuss the relationship between the philosophy and the ideas of effective altruism, both in the abstract and as specifically championed by many leaders in effective altruism (including the team at 80,000 Hours). My goal is to encourage people to reassess the philosophy and make appropriate updates.I make two claims:Claim 1: SBF engages in extreme risk-taking that is a crude approximation to the idea of expected value maximization as perceived by him.Claim 2: At least part of the motivation for SBF's risk-taking comes from ideas in effective altruism, and in particular specific points made by EA leaders including people affiliated with 80,000Hours. While personality probably accounts for a lot of SBF's decisions, the role of EA ideas as a catalyst cannot be dismissed based on the evidence.Here are a few things I am not claiming (some of these are discussed ...]]>
Sun, 13 Nov 2022 20:06:15 +0000 EA - SBF, extreme risk-taking, expected value, and effective altruism by vipulnaik Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SBF, extreme risk-taking, expected value, and effective altruism, published by vipulnaik on November 13, 2022 on The Effective Altruism Forum.NOTE: I have some indirect associations with SBF and his companies, though probably less so than many of the others who've been posting and commenting on the forum. I don't expect anything I write here to meaningfully affect how things play out in the future for me, so I don't think this creates a conflict of interest, but feel free to discount what I say.NOTE 2: I'm publishing this post without having spent the level of effort polishing and refining it that I normally try to spend. This is due to the time-sensitive nature of the subject matter and becauseI expect to get more value from being corrected in the comments on the post than from refining the post myself. If errors are pointed out, I will try to correct them, but may not always be able to make timely corrections, so if you're reading the post, please also check the comments to check for flaws identified by comments.The collapse of Sam Bankman-Fried (SBF) and his companies FTX andAlameda Research is the topic du jour on the Effective Altruism Forum, and there have been several posts on theForum discussing what happened and what we can learn from it. The post FTXFAQ provides a good summary of what we know as of the time I'm writing this post. I'm also funding work on a timeline of FTX collapse (still a work in progress, but with enough coverage already to be useful if you are starting with very little knowledge).Based on information so far, fraud and deception on the part of SBF(and/or others in FTX and/or Alameda Research) likely happened and were likely key to the way things played out and the extent of damage caused. The trigger seems to be the big loan that FTX provided toAlameda Research to bail it out, using customer funds for the purpose. If FTX hadn't bailed out Alameda, it's quite likely that the spectacular death of FTX we saw (with depositors losing all their money as well) wouldn't have happened. But it's also plausible that without the loan, the situation with Alameda Research was dire enough that Alameda Research, and then FTX, would have died due to the lack of funds. Hopefully that would have been a more graceful death with less pain to depositors. That is a very important difference. Nonetheless, I suspect that by the time of the bailout, we were already at a kind of endgame.In this post, I try to step back a bit from the endgame, and even get away from the specifics of FTX and Alameda Research (that I know very little about) and in fact even get away from the specifics of SBF's business practices (where again I know very little). Rather, I talk about SBF's overall philosophy around risk and expected value, as he has articulated himself, and has been approvingly amplified by severalEA websites and groups. I think the philosophy was key to the overall way things played out. And I also discuss the relationship between the philosophy and the ideas of effective altruism, both in the abstract and as specifically championed by many leaders in effective altruism (including the team at 80,000 Hours). My goal is to encourage people to reassess the philosophy and make appropriate updates.I make two claims:Claim 1: SBF engages in extreme risk-taking that is a crude approximation to the idea of expected value maximization as perceived by him.Claim 2: At least part of the motivation for SBF's risk-taking comes from ideas in effective altruism, and in particular specific points made by EA leaders including people affiliated with 80,000Hours. While personality probably accounts for a lot of SBF's decisions, the role of EA ideas as a catalyst cannot be dismissed based on the evidence.Here are a few things I am not claiming (some of these are discussed ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SBF, extreme risk-taking, expected value, and effective altruism, published by vipulnaik on November 13, 2022 on The Effective Altruism Forum.NOTE: I have some indirect associations with SBF and his companies, though probably less so than many of the others who've been posting and commenting on the forum. I don't expect anything I write here to meaningfully affect how things play out in the future for me, so I don't think this creates a conflict of interest, but feel free to discount what I say.NOTE 2: I'm publishing this post without having spent the level of effort polishing and refining it that I normally try to spend. This is due to the time-sensitive nature of the subject matter and becauseI expect to get more value from being corrected in the comments on the post than from refining the post myself. If errors are pointed out, I will try to correct them, but may not always be able to make timely corrections, so if you're reading the post, please also check the comments to check for flaws identified by comments.The collapse of Sam Bankman-Fried (SBF) and his companies FTX andAlameda Research is the topic du jour on the Effective Altruism Forum, and there have been several posts on theForum discussing what happened and what we can learn from it. The post FTXFAQ provides a good summary of what we know as of the time I'm writing this post. I'm also funding work on a timeline of FTX collapse (still a work in progress, but with enough coverage already to be useful if you are starting with very little knowledge).Based on information so far, fraud and deception on the part of SBF(and/or others in FTX and/or Alameda Research) likely happened and were likely key to the way things played out and the extent of damage caused. The trigger seems to be the big loan that FTX provided toAlameda Research to bail it out, using customer funds for the purpose. If FTX hadn't bailed out Alameda, it's quite likely that the spectacular death of FTX we saw (with depositors losing all their money as well) wouldn't have happened. But it's also plausible that without the loan, the situation with Alameda Research was dire enough that Alameda Research, and then FTX, would have died due to the lack of funds. Hopefully that would have been a more graceful death with less pain to depositors. That is a very important difference. Nonetheless, I suspect that by the time of the bailout, we were already at a kind of endgame.In this post, I try to step back a bit from the endgame, and even get away from the specifics of FTX and Alameda Research (that I know very little about) and in fact even get away from the specifics of SBF's business practices (where again I know very little). Rather, I talk about SBF's overall philosophy around risk and expected value, as he has articulated himself, and has been approvingly amplified by severalEA websites and groups. I think the philosophy was key to the overall way things played out. And I also discuss the relationship between the philosophy and the ideas of effective altruism, both in the abstract and as specifically championed by many leaders in effective altruism (including the team at 80,000 Hours). My goal is to encourage people to reassess the philosophy and make appropriate updates.I make two claims:Claim 1: SBF engages in extreme risk-taking that is a crude approximation to the idea of expected value maximization as perceived by him.Claim 2: At least part of the motivation for SBF's risk-taking comes from ideas in effective altruism, and in particular specific points made by EA leaders including people affiliated with 80,000Hours. While personality probably accounts for a lot of SBF's decisions, the role of EA ideas as a catalyst cannot be dismissed based on the evidence.Here are a few things I am not claiming (some of these are discussed ...]]>
vipulnaik https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 35:40 None full 3740
L4S2NCysoJxgCBuB6_EA EA - Announcing Nonlinear Emergency Funding by Kat Woods Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Nonlinear Emergency Funding, published by Kat Woods on November 13, 2022 on The Effective Altruism Forum.Like most of you, we at Nonlinear are horrified and saddened by recent events concerning FTX. Some of you counting on Future Fund grants are suddenly finding yourselves facing an existential financial crisis, so, inspired by the Covid Fast Grants program, we’re trying something similar for EA. If you are a Future Fund grantee and <$10,000 of bridge funding would be of substantial help to you, fill out this short form (<10 mins) and we’ll get back to you ASAP.We have a small budget, so if you’re a funder and would like to help, please reach out: katwoods@nonlinear.orgThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Kat Woods https://forum.effectivealtruism.org/posts/L4S2NCysoJxgCBuB6/announcing-nonlinear-emergency-funding Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Nonlinear Emergency Funding, published by Kat Woods on November 13, 2022 on The Effective Altruism Forum.Like most of you, we at Nonlinear are horrified and saddened by recent events concerning FTX. Some of you counting on Future Fund grants are suddenly finding yourselves facing an existential financial crisis, so, inspired by the Covid Fast Grants program, we’re trying something similar for EA. If you are a Future Fund grantee and <$10,000 of bridge funding would be of substantial help to you, fill out this short form (<10 mins) and we’ll get back to you ASAP.We have a small budget, so if you’re a funder and would like to help, please reach out: katwoods@nonlinear.orgThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 13 Nov 2022 18:17:27 +0000 EA - Announcing Nonlinear Emergency Funding by Kat Woods Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Nonlinear Emergency Funding, published by Kat Woods on November 13, 2022 on The Effective Altruism Forum.Like most of you, we at Nonlinear are horrified and saddened by recent events concerning FTX. Some of you counting on Future Fund grants are suddenly finding yourselves facing an existential financial crisis, so, inspired by the Covid Fast Grants program, we’re trying something similar for EA. If you are a Future Fund grantee and <$10,000 of bridge funding would be of substantial help to you, fill out this short form (<10 mins) and we’ll get back to you ASAP.We have a small budget, so if you’re a funder and would like to help, please reach out: katwoods@nonlinear.orgThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Nonlinear Emergency Funding, published by Kat Woods on November 13, 2022 on The Effective Altruism Forum.Like most of you, we at Nonlinear are horrified and saddened by recent events concerning FTX. Some of you counting on Future Fund grants are suddenly finding yourselves facing an existential financial crisis, so, inspired by the Covid Fast Grants program, we’re trying something similar for EA. If you are a Future Fund grantee and <$10,000 of bridge funding would be of substantial help to you, fill out this short form (<10 mins) and we’ll get back to you ASAP.We have a small budget, so if you’re a funder and would like to help, please reach out: katwoods@nonlinear.orgThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Kat Woods https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:58 None full 3739
o8B9kCkwteSqZg9zc_EA EA - Thoughts on legal concerns surrounding the FTX situation by Molly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on legal concerns surrounding the FTX situation, published by Molly on November 13, 2022 on The Effective Altruism Forum.As Open Phil’s managing counsel, and as a member of the EA community, I’ve received an onslaught of questions relating to the FTX fiasco in the last few days. Unfortunately, I’ve been unable to answer most of them either because I don’t have the relevant legal expertise, or because I can’t provide legal advice to non-clients (and Open Philanthropy is my only client in this situation), or because the facts I’d need to answer the questions just aren’t available yet.The biggest topic of concern is something along the lines of: if I received FTX-related grant money, what am I supposed to do? Will it be clawed back? This post aims to provide legal context on this topic; it doesn’t address ethical or other practical perspectives on the topic.Before diving into that, I want to offer this context: this is the first few days of what is going to be a multi-year legal process. It will be drawn out and tedious. We cannot will the information we want into existence, so we can’t be as strategic as we’d like.But there’s an upside to that. Emotions are high and many people are probably not in a great mental place to make big decisions. This externally-imposed waiting period can be put to good use as a time to process.I understand that for some people who received FTX-related grant money, waiting doesn’t feel like an option; people need to know whether they can pay rent, or if their organization still exists. I hope some of the information below will provide a little more context for individual situations and decisions.I also committed to putting out an explainer on clawbacks. That is here, though I think the information in this post is more useful.Bankruptcy and ClawbacksBackgroundThe information in this section is based on publicly available information and general conversations with bankruptcy lawyers. I do not have access to any nonpublic information about this case. None of this should be taken as legal advice to you.FTX filed for bankruptcy on Friday (November 11th, 2022). More specifically, Alameda Research Ltd. filed a voluntary petition for bankruptcy under Chapter 11 of the Bankruptcy Code, by filing a standard form in the United States Bankruptcy Court for the District of Delaware. The filing includes 134 “debtor entities” (listed as Annex I); it looks like this covers the full FTX corporate group.This means that the full FTX group is now under the protection of the bankruptcy court, and over the coming months, all of the assets in the debtor group will be administered for the benefit of FTX’s creditors. By filing under Chapter 11 (instead of Chapter 7), FTX has preserved the option of emerging out of the bankruptcy proceeding and continuing to operate in some capacity. You can read a useful explainer on the bankruptcy process here.The rules in the Bankruptcy Code are ultimately trying to ensure a fair outcome for creditors. This includes capturing certain payment transactions that occurred in the past. Basically, the debtor can reach back in time to undo deals it made and recoup monies it paid; this money comes back into the estate through clawbacks and gets redistributed to creditors according to the bankruptcy rules.ClawbacksThere are two main types of clawback processes. The first and most common (called a “preference claim”) target transactions that happened in the 90 days prior to the bankruptcy filing. Essentially, if you received money from an FTX entity in the debtor group anytime on or after approximately August 11, 2022, the bankruptcy process will probably ask you, at some point, to pay all or part of that money back. It’s almost impossible to say right now whether any specific grant or contract will be subject to clawb...]]>
Molly https://forum.effectivealtruism.org/posts/o8B9kCkwteSqZg9zc/thoughts-on-legal-concerns-surrounding-the-ftx-situation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on legal concerns surrounding the FTX situation, published by Molly on November 13, 2022 on The Effective Altruism Forum.As Open Phil’s managing counsel, and as a member of the EA community, I’ve received an onslaught of questions relating to the FTX fiasco in the last few days. Unfortunately, I’ve been unable to answer most of them either because I don’t have the relevant legal expertise, or because I can’t provide legal advice to non-clients (and Open Philanthropy is my only client in this situation), or because the facts I’d need to answer the questions just aren’t available yet.The biggest topic of concern is something along the lines of: if I received FTX-related grant money, what am I supposed to do? Will it be clawed back? This post aims to provide legal context on this topic; it doesn’t address ethical or other practical perspectives on the topic.Before diving into that, I want to offer this context: this is the first few days of what is going to be a multi-year legal process. It will be drawn out and tedious. We cannot will the information we want into existence, so we can’t be as strategic as we’d like.But there’s an upside to that. Emotions are high and many people are probably not in a great mental place to make big decisions. This externally-imposed waiting period can be put to good use as a time to process.I understand that for some people who received FTX-related grant money, waiting doesn’t feel like an option; people need to know whether they can pay rent, or if their organization still exists. I hope some of the information below will provide a little more context for individual situations and decisions.I also committed to putting out an explainer on clawbacks. That is here, though I think the information in this post is more useful.Bankruptcy and ClawbacksBackgroundThe information in this section is based on publicly available information and general conversations with bankruptcy lawyers. I do not have access to any nonpublic information about this case. None of this should be taken as legal advice to you.FTX filed for bankruptcy on Friday (November 11th, 2022). More specifically, Alameda Research Ltd. filed a voluntary petition for bankruptcy under Chapter 11 of the Bankruptcy Code, by filing a standard form in the United States Bankruptcy Court for the District of Delaware. The filing includes 134 “debtor entities” (listed as Annex I); it looks like this covers the full FTX corporate group.This means that the full FTX group is now under the protection of the bankruptcy court, and over the coming months, all of the assets in the debtor group will be administered for the benefit of FTX’s creditors. By filing under Chapter 11 (instead of Chapter 7), FTX has preserved the option of emerging out of the bankruptcy proceeding and continuing to operate in some capacity. You can read a useful explainer on the bankruptcy process here.The rules in the Bankruptcy Code are ultimately trying to ensure a fair outcome for creditors. This includes capturing certain payment transactions that occurred in the past. Basically, the debtor can reach back in time to undo deals it made and recoup monies it paid; this money comes back into the estate through clawbacks and gets redistributed to creditors according to the bankruptcy rules.ClawbacksThere are two main types of clawback processes. The first and most common (called a “preference claim”) target transactions that happened in the 90 days prior to the bankruptcy filing. Essentially, if you received money from an FTX entity in the debtor group anytime on or after approximately August 11, 2022, the bankruptcy process will probably ask you, at some point, to pay all or part of that money back. It’s almost impossible to say right now whether any specific grant or contract will be subject to clawb...]]>
Sun, 13 Nov 2022 17:19:20 +0000 EA - Thoughts on legal concerns surrounding the FTX situation by Molly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on legal concerns surrounding the FTX situation, published by Molly on November 13, 2022 on The Effective Altruism Forum.As Open Phil’s managing counsel, and as a member of the EA community, I’ve received an onslaught of questions relating to the FTX fiasco in the last few days. Unfortunately, I’ve been unable to answer most of them either because I don’t have the relevant legal expertise, or because I can’t provide legal advice to non-clients (and Open Philanthropy is my only client in this situation), or because the facts I’d need to answer the questions just aren’t available yet.The biggest topic of concern is something along the lines of: if I received FTX-related grant money, what am I supposed to do? Will it be clawed back? This post aims to provide legal context on this topic; it doesn’t address ethical or other practical perspectives on the topic.Before diving into that, I want to offer this context: this is the first few days of what is going to be a multi-year legal process. It will be drawn out and tedious. We cannot will the information we want into existence, so we can’t be as strategic as we’d like.But there’s an upside to that. Emotions are high and many people are probably not in a great mental place to make big decisions. This externally-imposed waiting period can be put to good use as a time to process.I understand that for some people who received FTX-related grant money, waiting doesn’t feel like an option; people need to know whether they can pay rent, or if their organization still exists. I hope some of the information below will provide a little more context for individual situations and decisions.I also committed to putting out an explainer on clawbacks. That is here, though I think the information in this post is more useful.Bankruptcy and ClawbacksBackgroundThe information in this section is based on publicly available information and general conversations with bankruptcy lawyers. I do not have access to any nonpublic information about this case. None of this should be taken as legal advice to you.FTX filed for bankruptcy on Friday (November 11th, 2022). More specifically, Alameda Research Ltd. filed a voluntary petition for bankruptcy under Chapter 11 of the Bankruptcy Code, by filing a standard form in the United States Bankruptcy Court for the District of Delaware. The filing includes 134 “debtor entities” (listed as Annex I); it looks like this covers the full FTX corporate group.This means that the full FTX group is now under the protection of the bankruptcy court, and over the coming months, all of the assets in the debtor group will be administered for the benefit of FTX’s creditors. By filing under Chapter 11 (instead of Chapter 7), FTX has preserved the option of emerging out of the bankruptcy proceeding and continuing to operate in some capacity. You can read a useful explainer on the bankruptcy process here.The rules in the Bankruptcy Code are ultimately trying to ensure a fair outcome for creditors. This includes capturing certain payment transactions that occurred in the past. Basically, the debtor can reach back in time to undo deals it made and recoup monies it paid; this money comes back into the estate through clawbacks and gets redistributed to creditors according to the bankruptcy rules.ClawbacksThere are two main types of clawback processes. The first and most common (called a “preference claim”) target transactions that happened in the 90 days prior to the bankruptcy filing. Essentially, if you received money from an FTX entity in the debtor group anytime on or after approximately August 11, 2022, the bankruptcy process will probably ask you, at some point, to pay all or part of that money back. It’s almost impossible to say right now whether any specific grant or contract will be subject to clawb...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on legal concerns surrounding the FTX situation, published by Molly on November 13, 2022 on The Effective Altruism Forum.As Open Phil’s managing counsel, and as a member of the EA community, I’ve received an onslaught of questions relating to the FTX fiasco in the last few days. Unfortunately, I’ve been unable to answer most of them either because I don’t have the relevant legal expertise, or because I can’t provide legal advice to non-clients (and Open Philanthropy is my only client in this situation), or because the facts I’d need to answer the questions just aren’t available yet.The biggest topic of concern is something along the lines of: if I received FTX-related grant money, what am I supposed to do? Will it be clawed back? This post aims to provide legal context on this topic; it doesn’t address ethical or other practical perspectives on the topic.Before diving into that, I want to offer this context: this is the first few days of what is going to be a multi-year legal process. It will be drawn out and tedious. We cannot will the information we want into existence, so we can’t be as strategic as we’d like.But there’s an upside to that. Emotions are high and many people are probably not in a great mental place to make big decisions. This externally-imposed waiting period can be put to good use as a time to process.I understand that for some people who received FTX-related grant money, waiting doesn’t feel like an option; people need to know whether they can pay rent, or if their organization still exists. I hope some of the information below will provide a little more context for individual situations and decisions.I also committed to putting out an explainer on clawbacks. That is here, though I think the information in this post is more useful.Bankruptcy and ClawbacksBackgroundThe information in this section is based on publicly available information and general conversations with bankruptcy lawyers. I do not have access to any nonpublic information about this case. None of this should be taken as legal advice to you.FTX filed for bankruptcy on Friday (November 11th, 2022). More specifically, Alameda Research Ltd. filed a voluntary petition for bankruptcy under Chapter 11 of the Bankruptcy Code, by filing a standard form in the United States Bankruptcy Court for the District of Delaware. The filing includes 134 “debtor entities” (listed as Annex I); it looks like this covers the full FTX corporate group.This means that the full FTX group is now under the protection of the bankruptcy court, and over the coming months, all of the assets in the debtor group will be administered for the benefit of FTX’s creditors. By filing under Chapter 11 (instead of Chapter 7), FTX has preserved the option of emerging out of the bankruptcy proceeding and continuing to operate in some capacity. You can read a useful explainer on the bankruptcy process here.The rules in the Bankruptcy Code are ultimately trying to ensure a fair outcome for creditors. This includes capturing certain payment transactions that occurred in the past. Basically, the debtor can reach back in time to undo deals it made and recoup monies it paid; this money comes back into the estate through clawbacks and gets redistributed to creditors according to the bankruptcy rules.ClawbacksThere are two main types of clawback processes. The first and most common (called a “preference claim”) target transactions that happened in the 90 days prior to the bankruptcy filing. Essentially, if you received money from an FTX entity in the debtor group anytime on or after approximately August 11, 2022, the bankruptcy process will probably ask you, at some point, to pay all or part of that money back. It’s almost impossible to say right now whether any specific grant or contract will be subject to clawb...]]>
Molly https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:12 None full 3741
eoLwR3y2gcZ8wgECc_EA EA - Hubris and coldness within EA (my experience) by James Gough Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hubris and coldness within EA (my experience), published by James Gough on November 13, 2022 on The Effective Altruism Forum.Hi all.Like a lot of people that have had a connection to EA I am appalled by the close connection between the FTX scandal and EA. But not surprised.The EA community events I attended totally killed my passion for EA. I attended an EA global conference in London and left feeling really really sad. Before the conference I was told I was not important enough or not worth the time to get career advice. One person I'd met before at local EA events made it clear that he didn't want to waste time talking to me (this was in the guide btw to make it clear if you don't think someone is worth your time). Well it certainly made me unconfident and uncomfortable to approach anyone else. I found the whole thing miserable. Everyone went out to take photo for the conference and I didn't bother. I don't want to be part of a community that I didn't feel happy in.On a less personal level, I overheard some unpleasant conversations about how EA should only be reserved for the intellectual elite (whatever the fuck that is) and how diversity didn't really matter. How they were annoyed that women got talks just for being women.Honestly, the whole place just reeked of hubris - everyone was so sure they were right, people had no interest in you as a person. I have never experienced more unfriendly, self-important, uncompassionate people in my life (I am 31 now). It was of course the last time I was ever involved with anything EA related.Maybe you read this and can dismiss it with yeah but issues are too important to waste time with petty small talk or showing interest in others. Or your subjective experience doesn't matter. Or we talk about rationality and complex ideas here , not personal opinions.But that is the whole point I'm trying to make. When you take away the human element, when you're so focused on grandiose ideas and certain of your perfect rationality, you end up dismissing the fast thinking necessary to make good ethical decisions. Anyone that values human kindness would run a mile from someone that doesn't have the respect to listen to someone talking to them and makes clear that their video game is valued above that person. Similarly to the long history of Musk's contempt for ordinary people.EA just seems so focused on being ethical that it forgot how to be nice. In my opinion, a new more inclusive organisation with a focus on making a positive impact needs to be created - with a better name.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
James Gough https://forum.effectivealtruism.org/posts/eoLwR3y2gcZ8wgECc/hubris-and-coldness-within-ea-my-experience Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hubris and coldness within EA (my experience), published by James Gough on November 13, 2022 on The Effective Altruism Forum.Hi all.Like a lot of people that have had a connection to EA I am appalled by the close connection between the FTX scandal and EA. But not surprised.The EA community events I attended totally killed my passion for EA. I attended an EA global conference in London and left feeling really really sad. Before the conference I was told I was not important enough or not worth the time to get career advice. One person I'd met before at local EA events made it clear that he didn't want to waste time talking to me (this was in the guide btw to make it clear if you don't think someone is worth your time). Well it certainly made me unconfident and uncomfortable to approach anyone else. I found the whole thing miserable. Everyone went out to take photo for the conference and I didn't bother. I don't want to be part of a community that I didn't feel happy in.On a less personal level, I overheard some unpleasant conversations about how EA should only be reserved for the intellectual elite (whatever the fuck that is) and how diversity didn't really matter. How they were annoyed that women got talks just for being women.Honestly, the whole place just reeked of hubris - everyone was so sure they were right, people had no interest in you as a person. I have never experienced more unfriendly, self-important, uncompassionate people in my life (I am 31 now). It was of course the last time I was ever involved with anything EA related.Maybe you read this and can dismiss it with yeah but issues are too important to waste time with petty small talk or showing interest in others. Or your subjective experience doesn't matter. Or we talk about rationality and complex ideas here , not personal opinions.But that is the whole point I'm trying to make. When you take away the human element, when you're so focused on grandiose ideas and certain of your perfect rationality, you end up dismissing the fast thinking necessary to make good ethical decisions. Anyone that values human kindness would run a mile from someone that doesn't have the respect to listen to someone talking to them and makes clear that their video game is valued above that person. Similarly to the long history of Musk's contempt for ordinary people.EA just seems so focused on being ethical that it forgot how to be nice. In my opinion, a new more inclusive organisation with a focus on making a positive impact needs to be created - with a better name.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 13 Nov 2022 15:45:02 +0000 EA - Hubris and coldness within EA (my experience) by James Gough Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hubris and coldness within EA (my experience), published by James Gough on November 13, 2022 on The Effective Altruism Forum.Hi all.Like a lot of people that have had a connection to EA I am appalled by the close connection between the FTX scandal and EA. But not surprised.The EA community events I attended totally killed my passion for EA. I attended an EA global conference in London and left feeling really really sad. Before the conference I was told I was not important enough or not worth the time to get career advice. One person I'd met before at local EA events made it clear that he didn't want to waste time talking to me (this was in the guide btw to make it clear if you don't think someone is worth your time). Well it certainly made me unconfident and uncomfortable to approach anyone else. I found the whole thing miserable. Everyone went out to take photo for the conference and I didn't bother. I don't want to be part of a community that I didn't feel happy in.On a less personal level, I overheard some unpleasant conversations about how EA should only be reserved for the intellectual elite (whatever the fuck that is) and how diversity didn't really matter. How they were annoyed that women got talks just for being women.Honestly, the whole place just reeked of hubris - everyone was so sure they were right, people had no interest in you as a person. I have never experienced more unfriendly, self-important, uncompassionate people in my life (I am 31 now). It was of course the last time I was ever involved with anything EA related.Maybe you read this and can dismiss it with yeah but issues are too important to waste time with petty small talk or showing interest in others. Or your subjective experience doesn't matter. Or we talk about rationality and complex ideas here , not personal opinions.But that is the whole point I'm trying to make. When you take away the human element, when you're so focused on grandiose ideas and certain of your perfect rationality, you end up dismissing the fast thinking necessary to make good ethical decisions. Anyone that values human kindness would run a mile from someone that doesn't have the respect to listen to someone talking to them and makes clear that their video game is valued above that person. Similarly to the long history of Musk's contempt for ordinary people.EA just seems so focused on being ethical that it forgot how to be nice. In my opinion, a new more inclusive organisation with a focus on making a positive impact needs to be created - with a better name.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hubris and coldness within EA (my experience), published by James Gough on November 13, 2022 on The Effective Altruism Forum.Hi all.Like a lot of people that have had a connection to EA I am appalled by the close connection between the FTX scandal and EA. But not surprised.The EA community events I attended totally killed my passion for EA. I attended an EA global conference in London and left feeling really really sad. Before the conference I was told I was not important enough or not worth the time to get career advice. One person I'd met before at local EA events made it clear that he didn't want to waste time talking to me (this was in the guide btw to make it clear if you don't think someone is worth your time). Well it certainly made me unconfident and uncomfortable to approach anyone else. I found the whole thing miserable. Everyone went out to take photo for the conference and I didn't bother. I don't want to be part of a community that I didn't feel happy in.On a less personal level, I overheard some unpleasant conversations about how EA should only be reserved for the intellectual elite (whatever the fuck that is) and how diversity didn't really matter. How they were annoyed that women got talks just for being women.Honestly, the whole place just reeked of hubris - everyone was so sure they were right, people had no interest in you as a person. I have never experienced more unfriendly, self-important, uncompassionate people in my life (I am 31 now). It was of course the last time I was ever involved with anything EA related.Maybe you read this and can dismiss it with yeah but issues are too important to waste time with petty small talk or showing interest in others. Or your subjective experience doesn't matter. Or we talk about rationality and complex ideas here , not personal opinions.But that is the whole point I'm trying to make. When you take away the human element, when you're so focused on grandiose ideas and certain of your perfect rationality, you end up dismissing the fast thinking necessary to make good ethical decisions. Anyone that values human kindness would run a mile from someone that doesn't have the respect to listen to someone talking to them and makes clear that their video game is valued above that person. Similarly to the long history of Musk's contempt for ordinary people.EA just seems so focused on being ethical that it forgot how to be nice. In my opinion, a new more inclusive organisation with a focus on making a positive impact needs to be created - with a better name.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
James Gough https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:32 None full 3742
4bDRdeRHH57G34fv9_EA EA - In favour of ‘personal policies’ by Michelle Hutchinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In favour of ‘personal policies’, published by Michelle Hutchinson on November 13, 2022 on The Effective Altruism Forum.I’m pretty sad this week and isolating due to having covid. I thought I’d try to have something positive come out of that, and write up a blog post I’ve had in the back of my mind for a while.I wanted to share my positive experience with setting ‘policies’ for myself. They’re basically heuristics I use to avoid having to make as many decisions (particular about things that are somewhat stressful). I got the idea, and suggestions for ways to implement it, from Brenton Mayer (thank you!).What are personal policies?There are various classes of decisions I make periodically, for which I’d like to have an answer in advance rather than deciding in individual cases. Those are the kinds of cases in which I try to make a ‘policy decision’ going forward.This is the kind of thing we do all the time for particular types of actions. For example, someone might decide to be a vegetarian. From then, they no longer consider in each individual instance whether they should eat some dish with meat in, they’ve made a blanket rule not to do so in any individual case.There are a number of things like ‘being a vegetarian’ which we’re used to. We’re less likely to make up our own of these ‘rules I plan to live by’. A way we might frame it when we do is as ‘getting into a habit’. I sometimes prefer the framing of ‘policy’ in that it’s instantaneous (whereas something can’t really be a habit until you’ve done it a few times) and it sounds like a clear decision you’re acting on.A way I like to think of this is: For tricky repeating decisions, make them only once.Having said that, for long run policies, it’s likely you’ll want to have periodic re-examinations of them to check you still endorse them. I keep a list of my policies, both to make them easier to remember and to come back and re-evaluate them.Use cases and benefitsMake faster decisionsHaving a policy for some type of decision means you don’t have to spend time making a decision in each specific case.One of my friends has the policy of always running for a train if there’s one she wants to be on which looks about to leave. This is the kind of situation where we’re often unsure what to do - is it impossible to make the train however fast I am? Do I have plenty of time because it’s not about to leave? Time spent dithering increases the chance you miss the train even if you then choose to run for it. So having made the decision in advance means over the long run you’ll catch more trains than you otherwise would have. (And given the downside is basically some extra cardio, this seems easily worth it.)Make better decisionsMore thoughtful decisions: Even aside from cases where you don’t have enough time to think much about a decision (like catching a train), it’s worth putting more time into a blanket policy than an individual decision. Rather than half-thinking through a type of decision a number of times, you might act better if you make a careful/thorough decision once and then act on that repeatedly (When the decisions are relevantly similar.).More objective decisions: In some cases, individual decision points are emotionally laden in a way that will bias your decision. In cases like that, you might make a more objective decision in advance than you would in the moment. For example, you might find it hard not to donate to a charity that’s raising money on the street if you have to decide whether to do that on the spur of the moment. You might feel your decision will be more objective if you think beforehand about the circumstances under which you do and don’t want to donate to charities when you pass fundraisers for them (eg yes for particular interventions, no for others).Help from others: I often find i...]]>
Michelle Hutchinson https://forum.effectivealtruism.org/posts/4bDRdeRHH57G34fv9/in-favour-of-personal-policies Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In favour of ‘personal policies’, published by Michelle Hutchinson on November 13, 2022 on The Effective Altruism Forum.I’m pretty sad this week and isolating due to having covid. I thought I’d try to have something positive come out of that, and write up a blog post I’ve had in the back of my mind for a while.I wanted to share my positive experience with setting ‘policies’ for myself. They’re basically heuristics I use to avoid having to make as many decisions (particular about things that are somewhat stressful). I got the idea, and suggestions for ways to implement it, from Brenton Mayer (thank you!).What are personal policies?There are various classes of decisions I make periodically, for which I’d like to have an answer in advance rather than deciding in individual cases. Those are the kinds of cases in which I try to make a ‘policy decision’ going forward.This is the kind of thing we do all the time for particular types of actions. For example, someone might decide to be a vegetarian. From then, they no longer consider in each individual instance whether they should eat some dish with meat in, they’ve made a blanket rule not to do so in any individual case.There are a number of things like ‘being a vegetarian’ which we’re used to. We’re less likely to make up our own of these ‘rules I plan to live by’. A way we might frame it when we do is as ‘getting into a habit’. I sometimes prefer the framing of ‘policy’ in that it’s instantaneous (whereas something can’t really be a habit until you’ve done it a few times) and it sounds like a clear decision you’re acting on.A way I like to think of this is: For tricky repeating decisions, make them only once.Having said that, for long run policies, it’s likely you’ll want to have periodic re-examinations of them to check you still endorse them. I keep a list of my policies, both to make them easier to remember and to come back and re-evaluate them.Use cases and benefitsMake faster decisionsHaving a policy for some type of decision means you don’t have to spend time making a decision in each specific case.One of my friends has the policy of always running for a train if there’s one she wants to be on which looks about to leave. This is the kind of situation where we’re often unsure what to do - is it impossible to make the train however fast I am? Do I have plenty of time because it’s not about to leave? Time spent dithering increases the chance you miss the train even if you then choose to run for it. So having made the decision in advance means over the long run you’ll catch more trains than you otherwise would have. (And given the downside is basically some extra cardio, this seems easily worth it.)Make better decisionsMore thoughtful decisions: Even aside from cases where you don’t have enough time to think much about a decision (like catching a train), it’s worth putting more time into a blanket policy than an individual decision. Rather than half-thinking through a type of decision a number of times, you might act better if you make a careful/thorough decision once and then act on that repeatedly (When the decisions are relevantly similar.).More objective decisions: In some cases, individual decision points are emotionally laden in a way that will bias your decision. In cases like that, you might make a more objective decision in advance than you would in the moment. For example, you might find it hard not to donate to a charity that’s raising money on the street if you have to decide whether to do that on the spur of the moment. You might feel your decision will be more objective if you think beforehand about the circumstances under which you do and don’t want to donate to charities when you pass fundraisers for them (eg yes for particular interventions, no for others).Help from others: I often find i...]]>
Sun, 13 Nov 2022 12:42:45 +0000 EA - In favour of ‘personal policies’ by Michelle Hutchinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In favour of ‘personal policies’, published by Michelle Hutchinson on November 13, 2022 on The Effective Altruism Forum.I’m pretty sad this week and isolating due to having covid. I thought I’d try to have something positive come out of that, and write up a blog post I’ve had in the back of my mind for a while.I wanted to share my positive experience with setting ‘policies’ for myself. They’re basically heuristics I use to avoid having to make as many decisions (particular about things that are somewhat stressful). I got the idea, and suggestions for ways to implement it, from Brenton Mayer (thank you!).What are personal policies?There are various classes of decisions I make periodically, for which I’d like to have an answer in advance rather than deciding in individual cases. Those are the kinds of cases in which I try to make a ‘policy decision’ going forward.This is the kind of thing we do all the time for particular types of actions. For example, someone might decide to be a vegetarian. From then, they no longer consider in each individual instance whether they should eat some dish with meat in, they’ve made a blanket rule not to do so in any individual case.There are a number of things like ‘being a vegetarian’ which we’re used to. We’re less likely to make up our own of these ‘rules I plan to live by’. A way we might frame it when we do is as ‘getting into a habit’. I sometimes prefer the framing of ‘policy’ in that it’s instantaneous (whereas something can’t really be a habit until you’ve done it a few times) and it sounds like a clear decision you’re acting on.A way I like to think of this is: For tricky repeating decisions, make them only once.Having said that, for long run policies, it’s likely you’ll want to have periodic re-examinations of them to check you still endorse them. I keep a list of my policies, both to make them easier to remember and to come back and re-evaluate them.Use cases and benefitsMake faster decisionsHaving a policy for some type of decision means you don’t have to spend time making a decision in each specific case.One of my friends has the policy of always running for a train if there’s one she wants to be on which looks about to leave. This is the kind of situation where we’re often unsure what to do - is it impossible to make the train however fast I am? Do I have plenty of time because it’s not about to leave? Time spent dithering increases the chance you miss the train even if you then choose to run for it. So having made the decision in advance means over the long run you’ll catch more trains than you otherwise would have. (And given the downside is basically some extra cardio, this seems easily worth it.)Make better decisionsMore thoughtful decisions: Even aside from cases where you don’t have enough time to think much about a decision (like catching a train), it’s worth putting more time into a blanket policy than an individual decision. Rather than half-thinking through a type of decision a number of times, you might act better if you make a careful/thorough decision once and then act on that repeatedly (When the decisions are relevantly similar.).More objective decisions: In some cases, individual decision points are emotionally laden in a way that will bias your decision. In cases like that, you might make a more objective decision in advance than you would in the moment. For example, you might find it hard not to donate to a charity that’s raising money on the street if you have to decide whether to do that on the spur of the moment. You might feel your decision will be more objective if you think beforehand about the circumstances under which you do and don’t want to donate to charities when you pass fundraisers for them (eg yes for particular interventions, no for others).Help from others: I often find i...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In favour of ‘personal policies’, published by Michelle Hutchinson on November 13, 2022 on The Effective Altruism Forum.I’m pretty sad this week and isolating due to having covid. I thought I’d try to have something positive come out of that, and write up a blog post I’ve had in the back of my mind for a while.I wanted to share my positive experience with setting ‘policies’ for myself. They’re basically heuristics I use to avoid having to make as many decisions (particular about things that are somewhat stressful). I got the idea, and suggestions for ways to implement it, from Brenton Mayer (thank you!).What are personal policies?There are various classes of decisions I make periodically, for which I’d like to have an answer in advance rather than deciding in individual cases. Those are the kinds of cases in which I try to make a ‘policy decision’ going forward.This is the kind of thing we do all the time for particular types of actions. For example, someone might decide to be a vegetarian. From then, they no longer consider in each individual instance whether they should eat some dish with meat in, they’ve made a blanket rule not to do so in any individual case.There are a number of things like ‘being a vegetarian’ which we’re used to. We’re less likely to make up our own of these ‘rules I plan to live by’. A way we might frame it when we do is as ‘getting into a habit’. I sometimes prefer the framing of ‘policy’ in that it’s instantaneous (whereas something can’t really be a habit until you’ve done it a few times) and it sounds like a clear decision you’re acting on.A way I like to think of this is: For tricky repeating decisions, make them only once.Having said that, for long run policies, it’s likely you’ll want to have periodic re-examinations of them to check you still endorse them. I keep a list of my policies, both to make them easier to remember and to come back and re-evaluate them.Use cases and benefitsMake faster decisionsHaving a policy for some type of decision means you don’t have to spend time making a decision in each specific case.One of my friends has the policy of always running for a train if there’s one she wants to be on which looks about to leave. This is the kind of situation where we’re often unsure what to do - is it impossible to make the train however fast I am? Do I have plenty of time because it’s not about to leave? Time spent dithering increases the chance you miss the train even if you then choose to run for it. So having made the decision in advance means over the long run you’ll catch more trains than you otherwise would have. (And given the downside is basically some extra cardio, this seems easily worth it.)Make better decisionsMore thoughtful decisions: Even aside from cases where you don’t have enough time to think much about a decision (like catching a train), it’s worth putting more time into a blanket policy than an individual decision. Rather than half-thinking through a type of decision a number of times, you might act better if you make a careful/thorough decision once and then act on that repeatedly (When the decisions are relevantly similar.).More objective decisions: In some cases, individual decision points are emotionally laden in a way that will bias your decision. In cases like that, you might make a more objective decision in advance than you would in the moment. For example, you might find it hard not to donate to a charity that’s raising money on the street if you have to decide whether to do that on the spur of the moment. You might feel your decision will be more objective if you think beforehand about the circumstances under which you do and don’t want to donate to charities when you pass fundraisers for them (eg yes for particular interventions, no for others).Help from others: I often find i...]]>
Michelle Hutchinson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:31 None full 3743
vuWGw4nvq2qJn5eCE_EA EA - Noting an unsubstantiated belief about the FTX disaster by Yitz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Noting an unsubstantiated belief about the FTX disaster, published by Yitz on November 13, 2022 on The Effective Altruism Forum.There is a narrative about the FTX collapse that I have noticed emerging as a commonly-held belief, despite little concrete evidence for or against it. The belief goes something like this:Sam Bankman Fried did what he did primarily for the sake of "Effective Altruism," as he understood it. Even though from a purely utilitarian perspective his actions were negative in expectation, he justified the fraud to himself because it was "for the greater good." As such, poor messaging on our part may be partially at fault for his downfall.This take may be more or less plausible, but it is also unsubstantiated. As Astrid Wilde noted on Twitter, there is a distinct possibility that the causality of the situation may have run the other way, with SBF as a conman taking advantage of the EA community's high-trust environment to boost himself. Alternatively (or additionally), it also seems quite plausible to me that the downfall of FTX had something to do with the social dynamics of the company, much as Enron's downfall can be traced back to [insert your favorite theory for why Enron collapsed here]. We do not, and to some degree cannot, know what SBF's internal monologue has been, and if we are to update our actions responsibly in order to avoid future mistakes of this magnitude (which we absolutely should do), we must deal with the facts as they most likely are, not as we would like or fear them to be.All of this said, I strongly suspect that in ten years from now, conventional wisdom will hold the above belief as being basically cannon, regardless of further evidence in either direction. This is because it presents an intrinsically interesting, almost Hollywood villain-esque narrative, one that will surely evoke endless "hot takes" which journalists, bloggers, etc. will have a hard time passing over. Expect this to become the default understanding of what happened (from outsiders at least), and prepare accordingly. At the same time, be cautious when updating your internal beliefs so as not to assume automatically that this story must be the truth of the matter.We need to carefully examine where our focus in self-improvement should lie moving forward, and it may not be the case that a revamping of our internal messaging is necessary (though it may very well be in the end; I certainly do not feel qualified to make that final call, only to point out what I recognize from experience as a temptingly powerful story beat which may influence it).Primarily on the Effective Altruism forum, but also on Twitter.See e.g. "pro fanaticism" messaging from some community factions, though it should be noted that this has always been a minority position.With roughly 80% confidence, conditional on 1.) No obviously true alternative story coming out about FTX that totally accounts for all their misdeeds somehow, and 2.) This post (or one containing the same observation), not becoming widely cited (since feedback loops can get complex and I don't want to bother accounting for that).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Yitz https://forum.effectivealtruism.org/posts/vuWGw4nvq2qJn5eCE/noting-an-unsubstantiated-belief-about-the-ftx-disaster Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Noting an unsubstantiated belief about the FTX disaster, published by Yitz on November 13, 2022 on The Effective Altruism Forum.There is a narrative about the FTX collapse that I have noticed emerging as a commonly-held belief, despite little concrete evidence for or against it. The belief goes something like this:Sam Bankman Fried did what he did primarily for the sake of "Effective Altruism," as he understood it. Even though from a purely utilitarian perspective his actions were negative in expectation, he justified the fraud to himself because it was "for the greater good." As such, poor messaging on our part may be partially at fault for his downfall.This take may be more or less plausible, but it is also unsubstantiated. As Astrid Wilde noted on Twitter, there is a distinct possibility that the causality of the situation may have run the other way, with SBF as a conman taking advantage of the EA community's high-trust environment to boost himself. Alternatively (or additionally), it also seems quite plausible to me that the downfall of FTX had something to do with the social dynamics of the company, much as Enron's downfall can be traced back to [insert your favorite theory for why Enron collapsed here]. We do not, and to some degree cannot, know what SBF's internal monologue has been, and if we are to update our actions responsibly in order to avoid future mistakes of this magnitude (which we absolutely should do), we must deal with the facts as they most likely are, not as we would like or fear them to be.All of this said, I strongly suspect that in ten years from now, conventional wisdom will hold the above belief as being basically cannon, regardless of further evidence in either direction. This is because it presents an intrinsically interesting, almost Hollywood villain-esque narrative, one that will surely evoke endless "hot takes" which journalists, bloggers, etc. will have a hard time passing over. Expect this to become the default understanding of what happened (from outsiders at least), and prepare accordingly. At the same time, be cautious when updating your internal beliefs so as not to assume automatically that this story must be the truth of the matter.We need to carefully examine where our focus in self-improvement should lie moving forward, and it may not be the case that a revamping of our internal messaging is necessary (though it may very well be in the end; I certainly do not feel qualified to make that final call, only to point out what I recognize from experience as a temptingly powerful story beat which may influence it).Primarily on the Effective Altruism forum, but also on Twitter.See e.g. "pro fanaticism" messaging from some community factions, though it should be noted that this has always been a minority position.With roughly 80% confidence, conditional on 1.) No obviously true alternative story coming out about FTX that totally accounts for all their misdeeds somehow, and 2.) This post (or one containing the same observation), not becoming widely cited (since feedback loops can get complex and I don't want to bother accounting for that).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 13 Nov 2022 08:30:57 +0000 EA - Noting an unsubstantiated belief about the FTX disaster by Yitz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Noting an unsubstantiated belief about the FTX disaster, published by Yitz on November 13, 2022 on The Effective Altruism Forum.There is a narrative about the FTX collapse that I have noticed emerging as a commonly-held belief, despite little concrete evidence for or against it. The belief goes something like this:Sam Bankman Fried did what he did primarily for the sake of "Effective Altruism," as he understood it. Even though from a purely utilitarian perspective his actions were negative in expectation, he justified the fraud to himself because it was "for the greater good." As such, poor messaging on our part may be partially at fault for his downfall.This take may be more or less plausible, but it is also unsubstantiated. As Astrid Wilde noted on Twitter, there is a distinct possibility that the causality of the situation may have run the other way, with SBF as a conman taking advantage of the EA community's high-trust environment to boost himself. Alternatively (or additionally), it also seems quite plausible to me that the downfall of FTX had something to do with the social dynamics of the company, much as Enron's downfall can be traced back to [insert your favorite theory for why Enron collapsed here]. We do not, and to some degree cannot, know what SBF's internal monologue has been, and if we are to update our actions responsibly in order to avoid future mistakes of this magnitude (which we absolutely should do), we must deal with the facts as they most likely are, not as we would like or fear them to be.All of this said, I strongly suspect that in ten years from now, conventional wisdom will hold the above belief as being basically cannon, regardless of further evidence in either direction. This is because it presents an intrinsically interesting, almost Hollywood villain-esque narrative, one that will surely evoke endless "hot takes" which journalists, bloggers, etc. will have a hard time passing over. Expect this to become the default understanding of what happened (from outsiders at least), and prepare accordingly. At the same time, be cautious when updating your internal beliefs so as not to assume automatically that this story must be the truth of the matter.We need to carefully examine where our focus in self-improvement should lie moving forward, and it may not be the case that a revamping of our internal messaging is necessary (though it may very well be in the end; I certainly do not feel qualified to make that final call, only to point out what I recognize from experience as a temptingly powerful story beat which may influence it).Primarily on the Effective Altruism forum, but also on Twitter.See e.g. "pro fanaticism" messaging from some community factions, though it should be noted that this has always been a minority position.With roughly 80% confidence, conditional on 1.) No obviously true alternative story coming out about FTX that totally accounts for all their misdeeds somehow, and 2.) This post (or one containing the same observation), not becoming widely cited (since feedback loops can get complex and I don't want to bother accounting for that).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Noting an unsubstantiated belief about the FTX disaster, published by Yitz on November 13, 2022 on The Effective Altruism Forum.There is a narrative about the FTX collapse that I have noticed emerging as a commonly-held belief, despite little concrete evidence for or against it. The belief goes something like this:Sam Bankman Fried did what he did primarily for the sake of "Effective Altruism," as he understood it. Even though from a purely utilitarian perspective his actions were negative in expectation, he justified the fraud to himself because it was "for the greater good." As such, poor messaging on our part may be partially at fault for his downfall.This take may be more or less plausible, but it is also unsubstantiated. As Astrid Wilde noted on Twitter, there is a distinct possibility that the causality of the situation may have run the other way, with SBF as a conman taking advantage of the EA community's high-trust environment to boost himself. Alternatively (or additionally), it also seems quite plausible to me that the downfall of FTX had something to do with the social dynamics of the company, much as Enron's downfall can be traced back to [insert your favorite theory for why Enron collapsed here]. We do not, and to some degree cannot, know what SBF's internal monologue has been, and if we are to update our actions responsibly in order to avoid future mistakes of this magnitude (which we absolutely should do), we must deal with the facts as they most likely are, not as we would like or fear them to be.All of this said, I strongly suspect that in ten years from now, conventional wisdom will hold the above belief as being basically cannon, regardless of further evidence in either direction. This is because it presents an intrinsically interesting, almost Hollywood villain-esque narrative, one that will surely evoke endless "hot takes" which journalists, bloggers, etc. will have a hard time passing over. Expect this to become the default understanding of what happened (from outsiders at least), and prepare accordingly. At the same time, be cautious when updating your internal beliefs so as not to assume automatically that this story must be the truth of the matter.We need to carefully examine where our focus in self-improvement should lie moving forward, and it may not be the case that a revamping of our internal messaging is necessary (though it may very well be in the end; I certainly do not feel qualified to make that final call, only to point out what I recognize from experience as a temptingly powerful story beat which may influence it).Primarily on the Effective Altruism forum, but also on Twitter.See e.g. "pro fanaticism" messaging from some community factions, though it should be noted that this has always been a minority position.With roughly 80% confidence, conditional on 1.) No obviously true alternative story coming out about FTX that totally accounts for all their misdeeds somehow, and 2.) This post (or one containing the same observation), not becoming widely cited (since feedback loops can get complex and I don't want to bother accounting for that).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Yitz https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:02 None full 3731
j7sDfXKEMeT2SRvLG_EA EA - FTX FAQ by Hamish Doodles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX FAQ, published by Hamish Doodles on November 13, 2022 on The Effective Altruism Forum.What is this?There's a lot of information flying around at the moment, and I'm going to try and organise it into an FAQ.I expect I have made a lot of mistakes, so please don't assume any specific claim here is true. This is definitely not legal or financial advice or anything like that.Please let me know if anything is wrong/unclear/misleading.Please suggest quesetions and/or answers in the comments.Update: actually, I would advise against wading into the comments.I'm erring on the side of brevity, so if you need more information follow the links.What is FTX?FTX is a Cryptocurrency Derivatives Exchange. It is now bankrupt.Who is Sam Bankman-Fried (SBF)?The founder of FTX. He was recently a billionaire and the richest person under 30.How is FTX connected to effective altruism?In the last couple of years, effective altruism received millions of dollars of funding from SBF and FTX via the Future Fund.SBF was following a strategy of "make tons of money to give it to charity." This is called "earning to give", and it's an idea that was spread by EA in early-to-mid 2010s. SBF was definitely encouraged onto his current path by engaging with EA.SBF was something of a "golden boy" to EA. For example, this.How did FTX go bankrupt?FTX gambled with user deposits rather than keeping them in reserve.Binance, a competitor, triggered a run on the bank where depositors attempted to get their money out.It looked like Binance was going to acquire FTX at one point, but they pulled out after due diligence.Now FTX and SBF are bunkrupt, and SBF will likely be convicted of felony.SourceHow bad is this?"It is therefore very likely to lead to the loss of deposits which will hurt the lives of 10,000s of people eg here"SBF will likely be convicted of felony.SourceDid SBF definitely do something illegal and/or immoral?The vibe I'm reading is "very likely, but not yet certain."Does EA still have funding?Yes.Before FTX there was Open Philanthropy (OP), which is mostly funded by Dustin Muskovitz and Cari Tuna. None of this is connected to FTX, and OP's funding is unaffected.Is Open Philanthropy funding impacted?Global health and wellbeing funding will continue as normal.Because the total pool of funding to longtermism has shrunk, Open Philanthropy will have to raise the bar on longtermist grant making.Thus, Open Philanthropy is "pausing most new longtermist funding commitments" (longtermism includes AI, Biosecurity, and Community Growth) for a couple of months to recalibrate.SourceIf you got money from FTX, do you have to give it back?It's possible, but we don't know.What if you've already spent money from FTX?It's still possible that you may have to give it back. Again, we don't know.If you got money from FTX, should you give it back?You probably shouldn't, at least for the moment. If you gave the money back, there's the possibility that because it wasn't done through the proper legal channels you end up having to give the money back again.If you got money from FTX, should you spend it?Probably not. At least for the next few days. You may have to give it back.I feel terrible about having FTX money.Reading this may help.What if I'm still expecting FTX money?The board of the FTX future fund has all resigned, but "grantees may email grantee-reachout@googlegroups.com."How can I get support/help?"Here’s the best place to reach us if you’d like to talk. I know a form isn’t the warmest, but a real person will get back to you soon." (source)Some mental health advice here.How are people reacting?Will MacAskill: "If there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such ha...]]>
Hamish Doodles https://forum.effectivealtruism.org/posts/j7sDfXKEMeT2SRvLG/ftx-faq Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX FAQ, published by Hamish Doodles on November 13, 2022 on The Effective Altruism Forum.What is this?There's a lot of information flying around at the moment, and I'm going to try and organise it into an FAQ.I expect I have made a lot of mistakes, so please don't assume any specific claim here is true. This is definitely not legal or financial advice or anything like that.Please let me know if anything is wrong/unclear/misleading.Please suggest quesetions and/or answers in the comments.Update: actually, I would advise against wading into the comments.I'm erring on the side of brevity, so if you need more information follow the links.What is FTX?FTX is a Cryptocurrency Derivatives Exchange. It is now bankrupt.Who is Sam Bankman-Fried (SBF)?The founder of FTX. He was recently a billionaire and the richest person under 30.How is FTX connected to effective altruism?In the last couple of years, effective altruism received millions of dollars of funding from SBF and FTX via the Future Fund.SBF was following a strategy of "make tons of money to give it to charity." This is called "earning to give", and it's an idea that was spread by EA in early-to-mid 2010s. SBF was definitely encouraged onto his current path by engaging with EA.SBF was something of a "golden boy" to EA. For example, this.How did FTX go bankrupt?FTX gambled with user deposits rather than keeping them in reserve.Binance, a competitor, triggered a run on the bank where depositors attempted to get their money out.It looked like Binance was going to acquire FTX at one point, but they pulled out after due diligence.Now FTX and SBF are bunkrupt, and SBF will likely be convicted of felony.SourceHow bad is this?"It is therefore very likely to lead to the loss of deposits which will hurt the lives of 10,000s of people eg here"SBF will likely be convicted of felony.SourceDid SBF definitely do something illegal and/or immoral?The vibe I'm reading is "very likely, but not yet certain."Does EA still have funding?Yes.Before FTX there was Open Philanthropy (OP), which is mostly funded by Dustin Muskovitz and Cari Tuna. None of this is connected to FTX, and OP's funding is unaffected.Is Open Philanthropy funding impacted?Global health and wellbeing funding will continue as normal.Because the total pool of funding to longtermism has shrunk, Open Philanthropy will have to raise the bar on longtermist grant making.Thus, Open Philanthropy is "pausing most new longtermist funding commitments" (longtermism includes AI, Biosecurity, and Community Growth) for a couple of months to recalibrate.SourceIf you got money from FTX, do you have to give it back?It's possible, but we don't know.What if you've already spent money from FTX?It's still possible that you may have to give it back. Again, we don't know.If you got money from FTX, should you give it back?You probably shouldn't, at least for the moment. If you gave the money back, there's the possibility that because it wasn't done through the proper legal channels you end up having to give the money back again.If you got money from FTX, should you spend it?Probably not. At least for the next few days. You may have to give it back.I feel terrible about having FTX money.Reading this may help.What if I'm still expecting FTX money?The board of the FTX future fund has all resigned, but "grantees may email grantee-reachout@googlegroups.com."How can I get support/help?"Here’s the best place to reach us if you’d like to talk. I know a form isn’t the warmest, but a real person will get back to you soon." (source)Some mental health advice here.How are people reacting?Will MacAskill: "If there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such ha...]]>
Sun, 13 Nov 2022 06:26:23 +0000 EA - FTX FAQ by Hamish Doodles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX FAQ, published by Hamish Doodles on November 13, 2022 on The Effective Altruism Forum.What is this?There's a lot of information flying around at the moment, and I'm going to try and organise it into an FAQ.I expect I have made a lot of mistakes, so please don't assume any specific claim here is true. This is definitely not legal or financial advice or anything like that.Please let me know if anything is wrong/unclear/misleading.Please suggest quesetions and/or answers in the comments.Update: actually, I would advise against wading into the comments.I'm erring on the side of brevity, so if you need more information follow the links.What is FTX?FTX is a Cryptocurrency Derivatives Exchange. It is now bankrupt.Who is Sam Bankman-Fried (SBF)?The founder of FTX. He was recently a billionaire and the richest person under 30.How is FTX connected to effective altruism?In the last couple of years, effective altruism received millions of dollars of funding from SBF and FTX via the Future Fund.SBF was following a strategy of "make tons of money to give it to charity." This is called "earning to give", and it's an idea that was spread by EA in early-to-mid 2010s. SBF was definitely encouraged onto his current path by engaging with EA.SBF was something of a "golden boy" to EA. For example, this.How did FTX go bankrupt?FTX gambled with user deposits rather than keeping them in reserve.Binance, a competitor, triggered a run on the bank where depositors attempted to get their money out.It looked like Binance was going to acquire FTX at one point, but they pulled out after due diligence.Now FTX and SBF are bunkrupt, and SBF will likely be convicted of felony.SourceHow bad is this?"It is therefore very likely to lead to the loss of deposits which will hurt the lives of 10,000s of people eg here"SBF will likely be convicted of felony.SourceDid SBF definitely do something illegal and/or immoral?The vibe I'm reading is "very likely, but not yet certain."Does EA still have funding?Yes.Before FTX there was Open Philanthropy (OP), which is mostly funded by Dustin Muskovitz and Cari Tuna. None of this is connected to FTX, and OP's funding is unaffected.Is Open Philanthropy funding impacted?Global health and wellbeing funding will continue as normal.Because the total pool of funding to longtermism has shrunk, Open Philanthropy will have to raise the bar on longtermist grant making.Thus, Open Philanthropy is "pausing most new longtermist funding commitments" (longtermism includes AI, Biosecurity, and Community Growth) for a couple of months to recalibrate.SourceIf you got money from FTX, do you have to give it back?It's possible, but we don't know.What if you've already spent money from FTX?It's still possible that you may have to give it back. Again, we don't know.If you got money from FTX, should you give it back?You probably shouldn't, at least for the moment. If you gave the money back, there's the possibility that because it wasn't done through the proper legal channels you end up having to give the money back again.If you got money from FTX, should you spend it?Probably not. At least for the next few days. You may have to give it back.I feel terrible about having FTX money.Reading this may help.What if I'm still expecting FTX money?The board of the FTX future fund has all resigned, but "grantees may email grantee-reachout@googlegroups.com."How can I get support/help?"Here’s the best place to reach us if you’d like to talk. I know a form isn’t the warmest, but a real person will get back to you soon." (source)Some mental health advice here.How are people reacting?Will MacAskill: "If there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such ha...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX FAQ, published by Hamish Doodles on November 13, 2022 on The Effective Altruism Forum.What is this?There's a lot of information flying around at the moment, and I'm going to try and organise it into an FAQ.I expect I have made a lot of mistakes, so please don't assume any specific claim here is true. This is definitely not legal or financial advice or anything like that.Please let me know if anything is wrong/unclear/misleading.Please suggest quesetions and/or answers in the comments.Update: actually, I would advise against wading into the comments.I'm erring on the side of brevity, so if you need more information follow the links.What is FTX?FTX is a Cryptocurrency Derivatives Exchange. It is now bankrupt.Who is Sam Bankman-Fried (SBF)?The founder of FTX. He was recently a billionaire and the richest person under 30.How is FTX connected to effective altruism?In the last couple of years, effective altruism received millions of dollars of funding from SBF and FTX via the Future Fund.SBF was following a strategy of "make tons of money to give it to charity." This is called "earning to give", and it's an idea that was spread by EA in early-to-mid 2010s. SBF was definitely encouraged onto his current path by engaging with EA.SBF was something of a "golden boy" to EA. For example, this.How did FTX go bankrupt?FTX gambled with user deposits rather than keeping them in reserve.Binance, a competitor, triggered a run on the bank where depositors attempted to get their money out.It looked like Binance was going to acquire FTX at one point, but they pulled out after due diligence.Now FTX and SBF are bunkrupt, and SBF will likely be convicted of felony.SourceHow bad is this?"It is therefore very likely to lead to the loss of deposits which will hurt the lives of 10,000s of people eg here"SBF will likely be convicted of felony.SourceDid SBF definitely do something illegal and/or immoral?The vibe I'm reading is "very likely, but not yet certain."Does EA still have funding?Yes.Before FTX there was Open Philanthropy (OP), which is mostly funded by Dustin Muskovitz and Cari Tuna. None of this is connected to FTX, and OP's funding is unaffected.Is Open Philanthropy funding impacted?Global health and wellbeing funding will continue as normal.Because the total pool of funding to longtermism has shrunk, Open Philanthropy will have to raise the bar on longtermist grant making.Thus, Open Philanthropy is "pausing most new longtermist funding commitments" (longtermism includes AI, Biosecurity, and Community Growth) for a couple of months to recalibrate.SourceIf you got money from FTX, do you have to give it back?It's possible, but we don't know.What if you've already spent money from FTX?It's still possible that you may have to give it back. Again, we don't know.If you got money from FTX, should you give it back?You probably shouldn't, at least for the moment. If you gave the money back, there's the possibility that because it wasn't done through the proper legal channels you end up having to give the money back again.If you got money from FTX, should you spend it?Probably not. At least for the next few days. You may have to give it back.I feel terrible about having FTX money.Reading this may help.What if I'm still expecting FTX money?The board of the FTX future fund has all resigned, but "grantees may email grantee-reachout@googlegroups.com."How can I get support/help?"Here’s the best place to reach us if you’d like to talk. I know a form isn’t the warmest, but a real person will get back to you soon." (source)Some mental health advice here.How are people reacting?Will MacAskill: "If there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such ha...]]>
Hamish Doodles https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:17 None full 3732
sdAgwtHkBZmkmkcpZ_EA EA - In favour of compassion, and against bandwagons of outrage by Emrik Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In favour of compassion, and against bandwagons of outrage, published by Emrik on November 13, 2022 on The Effective Altruism Forum.I hate to add to the number of FTX posts on the forum, but after some (imo) inappropriate and unkind memes and comments in the Dank EA Memes fb group and elsewhere, I wanted to push back against what seems like a bandwagon of anger and ridicule spiralling too far, and I wish to call attention to it.But first, I should point out that I personally, at this time, know not nearly enough to make confident conclusions regarding what's happened at FTX. That means I will not make any morally relevant judgments. I will especially not insinuate them without sufficient evidence. That just amounts to irresponsibly fuelling the bandwagon while maintaining plausible deniability, which is arguably worse.You are not required to pretend to know more than you do just so you can empathise with the outrage of your friends. That shouldn't be how friendship works.This topic is not without nuance. There's a good case to be made for why ridicule can be pro-social, and I think Alex makes it here:"Ridicule makes clear our commitment to punishing ultimately harmful behavior, in a tit-for-tat sense; we are not the government so we cannot lock up wrongdoers, and acting as a vigilante assassin is precluded by other issues, so our top utility-realizing option is to meme harmful behavior out of the sphere of social acceptability."I don't disagree with condemning someone for having behaved unethically. It's a necessary part of maintaining civil society, and it enables people to cooperate and trade in good faith. But if you accuse someone of having ill-advisedly forsaken ethics in the (putative) service of the greater good, then retaliating by forsaking compassion in the service of unchecked mockery can't possibly make anything better.Why bother with compassion, you might ask? After all, compassion is superfluous for positive-sum cooperation. What we really need for essential social institutions to work at all is widespread trust in the basic ethics of people we trade with. So when a public figure gets caught depreciating that trust, it's imperative that we send a strong signal that this is completely unacceptable.This I all agree with. Judicious punishments are essential for safeguarding prevailing social institutions. Plain fact. But what if prevailing social institutions are unjust? When we jump on a bandwagon for humiliating the accused transgressor after their life has already fallen apart, we are exercising our instincts for mob justice, and we are indirectly strengthening the norm for coercing deviants more generally.Advocating punitive attitudes trades off against advocating for compassion to some extent. Especially if the way you're trying to advocate for punishments is by means of gleefwly inflicting harm.In a society where most people are all too eager to join in when they see their in-group massing against deviants, and where groups have wildly different opinions on who the deviants are in the first place, we need an alternative set of principles.Compassion is a corrective on unjust social norms. It lets us see more clearly where prevailing ethics strays from what's kind and good. In essence, that's the whole purpose of effective altruism: to do better than the unquestioned norms that's been handed down to us.Hence why I hope we can outgrow--or at least lend nuance to--our reflexive instinct to punish, and instead cultivate whatever embers of compassion we can find. Let that be our cultural contribution, because the alternative, advocating punitive sentiments, just isn't a neglected cause area.example, example, example, and example.commentThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Emrik https://forum.effectivealtruism.org/posts/sdAgwtHkBZmkmkcpZ/in-favour-of-compassion-and-against-bandwagons-of-outrage Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In favour of compassion, and against bandwagons of outrage, published by Emrik on November 13, 2022 on The Effective Altruism Forum.I hate to add to the number of FTX posts on the forum, but after some (imo) inappropriate and unkind memes and comments in the Dank EA Memes fb group and elsewhere, I wanted to push back against what seems like a bandwagon of anger and ridicule spiralling too far, and I wish to call attention to it.But first, I should point out that I personally, at this time, know not nearly enough to make confident conclusions regarding what's happened at FTX. That means I will not make any morally relevant judgments. I will especially not insinuate them without sufficient evidence. That just amounts to irresponsibly fuelling the bandwagon while maintaining plausible deniability, which is arguably worse.You are not required to pretend to know more than you do just so you can empathise with the outrage of your friends. That shouldn't be how friendship works.This topic is not without nuance. There's a good case to be made for why ridicule can be pro-social, and I think Alex makes it here:"Ridicule makes clear our commitment to punishing ultimately harmful behavior, in a tit-for-tat sense; we are not the government so we cannot lock up wrongdoers, and acting as a vigilante assassin is precluded by other issues, so our top utility-realizing option is to meme harmful behavior out of the sphere of social acceptability."I don't disagree with condemning someone for having behaved unethically. It's a necessary part of maintaining civil society, and it enables people to cooperate and trade in good faith. But if you accuse someone of having ill-advisedly forsaken ethics in the (putative) service of the greater good, then retaliating by forsaking compassion in the service of unchecked mockery can't possibly make anything better.Why bother with compassion, you might ask? After all, compassion is superfluous for positive-sum cooperation. What we really need for essential social institutions to work at all is widespread trust in the basic ethics of people we trade with. So when a public figure gets caught depreciating that trust, it's imperative that we send a strong signal that this is completely unacceptable.This I all agree with. Judicious punishments are essential for safeguarding prevailing social institutions. Plain fact. But what if prevailing social institutions are unjust? When we jump on a bandwagon for humiliating the accused transgressor after their life has already fallen apart, we are exercising our instincts for mob justice, and we are indirectly strengthening the norm for coercing deviants more generally.Advocating punitive attitudes trades off against advocating for compassion to some extent. Especially if the way you're trying to advocate for punishments is by means of gleefwly inflicting harm.In a society where most people are all too eager to join in when they see their in-group massing against deviants, and where groups have wildly different opinions on who the deviants are in the first place, we need an alternative set of principles.Compassion is a corrective on unjust social norms. It lets us see more clearly where prevailing ethics strays from what's kind and good. In essence, that's the whole purpose of effective altruism: to do better than the unquestioned norms that's been handed down to us.Hence why I hope we can outgrow--or at least lend nuance to--our reflexive instinct to punish, and instead cultivate whatever embers of compassion we can find. Let that be our cultural contribution, because the alternative, advocating punitive sentiments, just isn't a neglected cause area.example, example, example, and example.commentThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 13 Nov 2022 02:23:26 +0000 EA - In favour of compassion, and against bandwagons of outrage by Emrik Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In favour of compassion, and against bandwagons of outrage, published by Emrik on November 13, 2022 on The Effective Altruism Forum.I hate to add to the number of FTX posts on the forum, but after some (imo) inappropriate and unkind memes and comments in the Dank EA Memes fb group and elsewhere, I wanted to push back against what seems like a bandwagon of anger and ridicule spiralling too far, and I wish to call attention to it.But first, I should point out that I personally, at this time, know not nearly enough to make confident conclusions regarding what's happened at FTX. That means I will not make any morally relevant judgments. I will especially not insinuate them without sufficient evidence. That just amounts to irresponsibly fuelling the bandwagon while maintaining plausible deniability, which is arguably worse.You are not required to pretend to know more than you do just so you can empathise with the outrage of your friends. That shouldn't be how friendship works.This topic is not without nuance. There's a good case to be made for why ridicule can be pro-social, and I think Alex makes it here:"Ridicule makes clear our commitment to punishing ultimately harmful behavior, in a tit-for-tat sense; we are not the government so we cannot lock up wrongdoers, and acting as a vigilante assassin is precluded by other issues, so our top utility-realizing option is to meme harmful behavior out of the sphere of social acceptability."I don't disagree with condemning someone for having behaved unethically. It's a necessary part of maintaining civil society, and it enables people to cooperate and trade in good faith. But if you accuse someone of having ill-advisedly forsaken ethics in the (putative) service of the greater good, then retaliating by forsaking compassion in the service of unchecked mockery can't possibly make anything better.Why bother with compassion, you might ask? After all, compassion is superfluous for positive-sum cooperation. What we really need for essential social institutions to work at all is widespread trust in the basic ethics of people we trade with. So when a public figure gets caught depreciating that trust, it's imperative that we send a strong signal that this is completely unacceptable.This I all agree with. Judicious punishments are essential for safeguarding prevailing social institutions. Plain fact. But what if prevailing social institutions are unjust? When we jump on a bandwagon for humiliating the accused transgressor after their life has already fallen apart, we are exercising our instincts for mob justice, and we are indirectly strengthening the norm for coercing deviants more generally.Advocating punitive attitudes trades off against advocating for compassion to some extent. Especially if the way you're trying to advocate for punishments is by means of gleefwly inflicting harm.In a society where most people are all too eager to join in when they see their in-group massing against deviants, and where groups have wildly different opinions on who the deviants are in the first place, we need an alternative set of principles.Compassion is a corrective on unjust social norms. It lets us see more clearly where prevailing ethics strays from what's kind and good. In essence, that's the whole purpose of effective altruism: to do better than the unquestioned norms that's been handed down to us.Hence why I hope we can outgrow--or at least lend nuance to--our reflexive instinct to punish, and instead cultivate whatever embers of compassion we can find. Let that be our cultural contribution, because the alternative, advocating punitive sentiments, just isn't a neglected cause area.example, example, example, and example.commentThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In favour of compassion, and against bandwagons of outrage, published by Emrik on November 13, 2022 on The Effective Altruism Forum.I hate to add to the number of FTX posts on the forum, but after some (imo) inappropriate and unkind memes and comments in the Dank EA Memes fb group and elsewhere, I wanted to push back against what seems like a bandwagon of anger and ridicule spiralling too far, and I wish to call attention to it.But first, I should point out that I personally, at this time, know not nearly enough to make confident conclusions regarding what's happened at FTX. That means I will not make any morally relevant judgments. I will especially not insinuate them without sufficient evidence. That just amounts to irresponsibly fuelling the bandwagon while maintaining plausible deniability, which is arguably worse.You are not required to pretend to know more than you do just so you can empathise with the outrage of your friends. That shouldn't be how friendship works.This topic is not without nuance. There's a good case to be made for why ridicule can be pro-social, and I think Alex makes it here:"Ridicule makes clear our commitment to punishing ultimately harmful behavior, in a tit-for-tat sense; we are not the government so we cannot lock up wrongdoers, and acting as a vigilante assassin is precluded by other issues, so our top utility-realizing option is to meme harmful behavior out of the sphere of social acceptability."I don't disagree with condemning someone for having behaved unethically. It's a necessary part of maintaining civil society, and it enables people to cooperate and trade in good faith. But if you accuse someone of having ill-advisedly forsaken ethics in the (putative) service of the greater good, then retaliating by forsaking compassion in the service of unchecked mockery can't possibly make anything better.Why bother with compassion, you might ask? After all, compassion is superfluous for positive-sum cooperation. What we really need for essential social institutions to work at all is widespread trust in the basic ethics of people we trade with. So when a public figure gets caught depreciating that trust, it's imperative that we send a strong signal that this is completely unacceptable.This I all agree with. Judicious punishments are essential for safeguarding prevailing social institutions. Plain fact. But what if prevailing social institutions are unjust? When we jump on a bandwagon for humiliating the accused transgressor after their life has already fallen apart, we are exercising our instincts for mob justice, and we are indirectly strengthening the norm for coercing deviants more generally.Advocating punitive attitudes trades off against advocating for compassion to some extent. Especially if the way you're trying to advocate for punishments is by means of gleefwly inflicting harm.In a society where most people are all too eager to join in when they see their in-group massing against deviants, and where groups have wildly different opinions on who the deviants are in the first place, we need an alternative set of principles.Compassion is a corrective on unjust social norms. It lets us see more clearly where prevailing ethics strays from what's kind and good. In essence, that's the whole purpose of effective altruism: to do better than the unquestioned norms that's been handed down to us.Hence why I hope we can outgrow--or at least lend nuance to--our reflexive instinct to punish, and instead cultivate whatever embers of compassion we can find. Let that be our cultural contribution, because the alternative, advocating punitive sentiments, just isn't a neglected cause area.example, example, example, and example.commentThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Emrik https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:33 None full 3733
K4LCmbsAzsedWoNzg_EA EA - Ways to buy time by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ways to buy time, published by Akash on November 12, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/K4LCmbsAzsedWoNzg/ways-to-buy-time Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ways to buy time, published by Akash on November 12, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 13 Nov 2022 00:50:19 +0000 EA - Ways to buy time by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ways to buy time, published by Akash on November 12, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ways to buy time, published by Akash on November 12, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:23 None full 3734
NacFjEJGoFFWRqsc8_EA EA - Women and Effective Altruism by Keerthana Gopalakrishnan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Women and Effective Altruism, published by Keerthana Gopalakrishnan on November 12, 2022 on The Effective Altruism Forum.A lot has been talked about SBF/FTX/EA but this coverage reminds me that is time to talk about the toxicity of the culture within EA communities, especially as it relates to women.EA circles, much like the group house in Bahamas, are widely incestous where people mix their work life (in EA cause areas), their often polyamorous love life and social life in one amalgomous mix without strict separations. This is the default status quo. This means that if you’re a reasonably attractive woman entering an EA community, you get a ton of sexual requests to join polycules, often from poly and partnered men. Some of these men control funding for projects and enjoy high status in EA communities and that means there are real downsides to refusing their sexual advances and pressure to say yes, especially if your career is in an EA cause area or is funded by them. There are also upsides, as reported by CoinDesk on Caroline Ellison. From experience it appears that, a ‘no’ once said is not enough for many men in EA.Having to keep replenishing that ‘no’ becomes annoying very fast, and becomes harder to give informed consent when socializing in the presence of alcohol/psychedelics. It puts your safety at risk. From experience, EA as a community, has very little respect for monogamy and many men, often competing with each other, will persuade you to join polyamory using LessWrong style jedi mindtricks while they stand to benefit from the erosion of your boundaries. (Edit: I have personally experienced this more than three times in less than one year of attending EA events and that is far too many times. )So how do these men maintain polycules and find sexual novelty? EA meet ups of course. Several EA communities are grounds for predatory men in search of their nth polycule partner and to fill their “dancecards”. I have seen this in NYC EA circles, I have seen this in SF. I decided to stop socializing in EA circles a couple months ago due to this toxicity, the benefits are not worth the uncovered downside risk. I also am lucky enough to not work for an EA aligned organization / cause area and socially diversified enough to take that hit. The power enjoyed by men who are predatory, the rate of occurrence and a lack of visible push back equals to a tacit and somewhat widespread backing for this behaviour. My experience resonates with a few other women in SF I have spoken to. They have also met red pilled, exploitative men in EA/rationalist circles. EA/rationalism and redpill fit like yin and yang.Akin to how EA is an optimization of altruism with “suboptimal” human tendencies like morality and empathy stripped from it, red pill is an optimized sexual strategy with the humanity of women stripped from it. You’ll also, surprisingly, encounter many women who are redpilled and manifest internalized misogyny in EA. How to check if you’re one: if terms like SMV, hypergamy etc are part of your everyday vocabulary and thought processes, you might be affected. You’ll also encounter many women who are unhappy participants in polygamous relationships; some of them really smart women who agree to be unhappy (dump him, sis). And if you're a lucky woman who hasn't experienced this in EA, great, and your experience does not need to negate those of others.Despite this culture, EA as a philosophy has a lot of good in it and they should fix this bug with some introspection. Now mind you, this is not a criticism of polyamory itself. If polyamorous love happens between consenting adults without favoritism in professional settings, all is well and good. But EA is an organization and community focused on a mission of altruism, enjoy huge swathes of donor money and exert socio-political ...]]>
Keerthana Gopalakrishnan https://forum.effectivealtruism.org/posts/NacFjEJGoFFWRqsc8/women-and-effective-altruism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Women and Effective Altruism, published by Keerthana Gopalakrishnan on November 12, 2022 on The Effective Altruism Forum.A lot has been talked about SBF/FTX/EA but this coverage reminds me that is time to talk about the toxicity of the culture within EA communities, especially as it relates to women.EA circles, much like the group house in Bahamas, are widely incestous where people mix their work life (in EA cause areas), their often polyamorous love life and social life in one amalgomous mix without strict separations. This is the default status quo. This means that if you’re a reasonably attractive woman entering an EA community, you get a ton of sexual requests to join polycules, often from poly and partnered men. Some of these men control funding for projects and enjoy high status in EA communities and that means there are real downsides to refusing their sexual advances and pressure to say yes, especially if your career is in an EA cause area or is funded by them. There are also upsides, as reported by CoinDesk on Caroline Ellison. From experience it appears that, a ‘no’ once said is not enough for many men in EA.Having to keep replenishing that ‘no’ becomes annoying very fast, and becomes harder to give informed consent when socializing in the presence of alcohol/psychedelics. It puts your safety at risk. From experience, EA as a community, has very little respect for monogamy and many men, often competing with each other, will persuade you to join polyamory using LessWrong style jedi mindtricks while they stand to benefit from the erosion of your boundaries. (Edit: I have personally experienced this more than three times in less than one year of attending EA events and that is far too many times. )So how do these men maintain polycules and find sexual novelty? EA meet ups of course. Several EA communities are grounds for predatory men in search of their nth polycule partner and to fill their “dancecards”. I have seen this in NYC EA circles, I have seen this in SF. I decided to stop socializing in EA circles a couple months ago due to this toxicity, the benefits are not worth the uncovered downside risk. I also am lucky enough to not work for an EA aligned organization / cause area and socially diversified enough to take that hit. The power enjoyed by men who are predatory, the rate of occurrence and a lack of visible push back equals to a tacit and somewhat widespread backing for this behaviour. My experience resonates with a few other women in SF I have spoken to. They have also met red pilled, exploitative men in EA/rationalist circles. EA/rationalism and redpill fit like yin and yang.Akin to how EA is an optimization of altruism with “suboptimal” human tendencies like morality and empathy stripped from it, red pill is an optimized sexual strategy with the humanity of women stripped from it. You’ll also, surprisingly, encounter many women who are redpilled and manifest internalized misogyny in EA. How to check if you’re one: if terms like SMV, hypergamy etc are part of your everyday vocabulary and thought processes, you might be affected. You’ll also encounter many women who are unhappy participants in polygamous relationships; some of them really smart women who agree to be unhappy (dump him, sis). And if you're a lucky woman who hasn't experienced this in EA, great, and your experience does not need to negate those of others.Despite this culture, EA as a philosophy has a lot of good in it and they should fix this bug with some introspection. Now mind you, this is not a criticism of polyamory itself. If polyamorous love happens between consenting adults without favoritism in professional settings, all is well and good. But EA is an organization and community focused on a mission of altruism, enjoy huge swathes of donor money and exert socio-political ...]]>
Sat, 12 Nov 2022 20:58:28 +0000 EA - Women and Effective Altruism by Keerthana Gopalakrishnan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Women and Effective Altruism, published by Keerthana Gopalakrishnan on November 12, 2022 on The Effective Altruism Forum.A lot has been talked about SBF/FTX/EA but this coverage reminds me that is time to talk about the toxicity of the culture within EA communities, especially as it relates to women.EA circles, much like the group house in Bahamas, are widely incestous where people mix their work life (in EA cause areas), their often polyamorous love life and social life in one amalgomous mix without strict separations. This is the default status quo. This means that if you’re a reasonably attractive woman entering an EA community, you get a ton of sexual requests to join polycules, often from poly and partnered men. Some of these men control funding for projects and enjoy high status in EA communities and that means there are real downsides to refusing their sexual advances and pressure to say yes, especially if your career is in an EA cause area or is funded by them. There are also upsides, as reported by CoinDesk on Caroline Ellison. From experience it appears that, a ‘no’ once said is not enough for many men in EA.Having to keep replenishing that ‘no’ becomes annoying very fast, and becomes harder to give informed consent when socializing in the presence of alcohol/psychedelics. It puts your safety at risk. From experience, EA as a community, has very little respect for monogamy and many men, often competing with each other, will persuade you to join polyamory using LessWrong style jedi mindtricks while they stand to benefit from the erosion of your boundaries. (Edit: I have personally experienced this more than three times in less than one year of attending EA events and that is far too many times. )So how do these men maintain polycules and find sexual novelty? EA meet ups of course. Several EA communities are grounds for predatory men in search of their nth polycule partner and to fill their “dancecards”. I have seen this in NYC EA circles, I have seen this in SF. I decided to stop socializing in EA circles a couple months ago due to this toxicity, the benefits are not worth the uncovered downside risk. I also am lucky enough to not work for an EA aligned organization / cause area and socially diversified enough to take that hit. The power enjoyed by men who are predatory, the rate of occurrence and a lack of visible push back equals to a tacit and somewhat widespread backing for this behaviour. My experience resonates with a few other women in SF I have spoken to. They have also met red pilled, exploitative men in EA/rationalist circles. EA/rationalism and redpill fit like yin and yang.Akin to how EA is an optimization of altruism with “suboptimal” human tendencies like morality and empathy stripped from it, red pill is an optimized sexual strategy with the humanity of women stripped from it. You’ll also, surprisingly, encounter many women who are redpilled and manifest internalized misogyny in EA. How to check if you’re one: if terms like SMV, hypergamy etc are part of your everyday vocabulary and thought processes, you might be affected. You’ll also encounter many women who are unhappy participants in polygamous relationships; some of them really smart women who agree to be unhappy (dump him, sis). And if you're a lucky woman who hasn't experienced this in EA, great, and your experience does not need to negate those of others.Despite this culture, EA as a philosophy has a lot of good in it and they should fix this bug with some introspection. Now mind you, this is not a criticism of polyamory itself. If polyamorous love happens between consenting adults without favoritism in professional settings, all is well and good. But EA is an organization and community focused on a mission of altruism, enjoy huge swathes of donor money and exert socio-political ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Women and Effective Altruism, published by Keerthana Gopalakrishnan on November 12, 2022 on The Effective Altruism Forum.A lot has been talked about SBF/FTX/EA but this coverage reminds me that is time to talk about the toxicity of the culture within EA communities, especially as it relates to women.EA circles, much like the group house in Bahamas, are widely incestous where people mix their work life (in EA cause areas), their often polyamorous love life and social life in one amalgomous mix without strict separations. This is the default status quo. This means that if you’re a reasonably attractive woman entering an EA community, you get a ton of sexual requests to join polycules, often from poly and partnered men. Some of these men control funding for projects and enjoy high status in EA communities and that means there are real downsides to refusing their sexual advances and pressure to say yes, especially if your career is in an EA cause area or is funded by them. There are also upsides, as reported by CoinDesk on Caroline Ellison. From experience it appears that, a ‘no’ once said is not enough for many men in EA.Having to keep replenishing that ‘no’ becomes annoying very fast, and becomes harder to give informed consent when socializing in the presence of alcohol/psychedelics. It puts your safety at risk. From experience, EA as a community, has very little respect for monogamy and many men, often competing with each other, will persuade you to join polyamory using LessWrong style jedi mindtricks while they stand to benefit from the erosion of your boundaries. (Edit: I have personally experienced this more than three times in less than one year of attending EA events and that is far too many times. )So how do these men maintain polycules and find sexual novelty? EA meet ups of course. Several EA communities are grounds for predatory men in search of their nth polycule partner and to fill their “dancecards”. I have seen this in NYC EA circles, I have seen this in SF. I decided to stop socializing in EA circles a couple months ago due to this toxicity, the benefits are not worth the uncovered downside risk. I also am lucky enough to not work for an EA aligned organization / cause area and socially diversified enough to take that hit. The power enjoyed by men who are predatory, the rate of occurrence and a lack of visible push back equals to a tacit and somewhat widespread backing for this behaviour. My experience resonates with a few other women in SF I have spoken to. They have also met red pilled, exploitative men in EA/rationalist circles. EA/rationalism and redpill fit like yin and yang.Akin to how EA is an optimization of altruism with “suboptimal” human tendencies like morality and empathy stripped from it, red pill is an optimized sexual strategy with the humanity of women stripped from it. You’ll also, surprisingly, encounter many women who are redpilled and manifest internalized misogyny in EA. How to check if you’re one: if terms like SMV, hypergamy etc are part of your everyday vocabulary and thought processes, you might be affected. You’ll also encounter many women who are unhappy participants in polygamous relationships; some of them really smart women who agree to be unhappy (dump him, sis). And if you're a lucky woman who hasn't experienced this in EA, great, and your experience does not need to negate those of others.Despite this culture, EA as a philosophy has a lot of good in it and they should fix this bug with some introspection. Now mind you, this is not a criticism of polyamory itself. If polyamorous love happens between consenting adults without favoritism in professional settings, all is well and good. But EA is an organization and community focused on a mission of altruism, enjoy huge swathes of donor money and exert socio-political ...]]>
Keerthana Gopalakrishnan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:38 None full 3722
cvJYmLWLfwEtYtuXJ_EA EA - Internalizing the damage of bad-acting partners creates incentives for due diligence by tailcalled Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Internalizing the damage of bad-acting partners creates incentives for due diligence, published by tailcalled on November 11, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
tailcalled https://forum.effectivealtruism.org/posts/cvJYmLWLfwEtYtuXJ/internalizing-the-damage-of-bad-acting-partners-creates Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Internalizing the damage of bad-acting partners creates incentives for due diligence, published by tailcalled on November 11, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 12 Nov 2022 18:04:47 +0000 EA - Internalizing the damage of bad-acting partners creates incentives for due diligence by tailcalled Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Internalizing the damage of bad-acting partners creates incentives for due diligence, published by tailcalled on November 11, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Internalizing the damage of bad-acting partners creates incentives for due diligence, published by tailcalled on November 11, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
tailcalled https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:30 None full 3728
9rmGJQNgHJeseTGQL_EA EA - After recent FTX events, what are alternative sources of funding for longtermist projects? by CarolineJ Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: After recent FTX events, what are alternative sources of funding for longtermist projects?, published by CarolineJ on November 12, 2022 on The Effective Altruism Forum.Now that we know "that it looks likely that there are many committed grants that the Future Fund will be unable to honor" according to the former team of FTX Future Funds, it would be useful for a number of us to have alternatives.Current large funders are re-considering their grant strategy (such as OpenPhil). These donors are going to be extremely busy in the next few weeks. If you're not too funding-constrained, it seems like a good and pro-social strategy is to wait until after the storm and let others who have urgent and important grants figure things out.However, it seems that this may not be true if you are very funding-restrained right now, or if some grants or fellowships have deadlines coming up soon that it may be useful to have on your radar.So, what are good places to apply for funding now? (and in the future too)To start, the FLI AI Existential Safety PhD Fellowship has a deadline on November 15.The Vitalik Buterin PhD Fellowship in AI Existential Safety is for PhD students who plan to work on AI existential safety research, or for existing PhD students who would not otherwise have funding to work on AI existential safety research. It will fund students for 5 years of their PhD, with extension funding possible. At universities in the US, UK, or Canada, annual funding will cover tuition, fees, and the stipend of the student's PhD program up to $40,000, as well as a fund of $10,000 that can be used for research-related expenses such as travel and computing. At universities not in the US, UK or Canada, the stipend amount will be adjusted to match local conditions. Fellows will also be invited to workshops where they will be able to interact with other researchers in the field.Applicants who are short-listed for the Fellowship will be reimbursed for application fees for up to 5 PhD programs, and will be invited to an information session about research groups that can serve as good homes for AI existential safety research.More about the fellowship here.What are other options?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
CarolineJ https://forum.effectivealtruism.org/posts/9rmGJQNgHJeseTGQL/after-recent-ftx-events-what-are-alternative-sources-of Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: After recent FTX events, what are alternative sources of funding for longtermist projects?, published by CarolineJ on November 12, 2022 on The Effective Altruism Forum.Now that we know "that it looks likely that there are many committed grants that the Future Fund will be unable to honor" according to the former team of FTX Future Funds, it would be useful for a number of us to have alternatives.Current large funders are re-considering their grant strategy (such as OpenPhil). These donors are going to be extremely busy in the next few weeks. If you're not too funding-constrained, it seems like a good and pro-social strategy is to wait until after the storm and let others who have urgent and important grants figure things out.However, it seems that this may not be true if you are very funding-restrained right now, or if some grants or fellowships have deadlines coming up soon that it may be useful to have on your radar.So, what are good places to apply for funding now? (and in the future too)To start, the FLI AI Existential Safety PhD Fellowship has a deadline on November 15.The Vitalik Buterin PhD Fellowship in AI Existential Safety is for PhD students who plan to work on AI existential safety research, or for existing PhD students who would not otherwise have funding to work on AI existential safety research. It will fund students for 5 years of their PhD, with extension funding possible. At universities in the US, UK, or Canada, annual funding will cover tuition, fees, and the stipend of the student's PhD program up to $40,000, as well as a fund of $10,000 that can be used for research-related expenses such as travel and computing. At universities not in the US, UK or Canada, the stipend amount will be adjusted to match local conditions. Fellows will also be invited to workshops where they will be able to interact with other researchers in the field.Applicants who are short-listed for the Fellowship will be reimbursed for application fees for up to 5 PhD programs, and will be invited to an information session about research groups that can serve as good homes for AI existential safety research.More about the fellowship here.What are other options?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 12 Nov 2022 17:36:25 +0000 EA - After recent FTX events, what are alternative sources of funding for longtermist projects? by CarolineJ Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: After recent FTX events, what are alternative sources of funding for longtermist projects?, published by CarolineJ on November 12, 2022 on The Effective Altruism Forum.Now that we know "that it looks likely that there are many committed grants that the Future Fund will be unable to honor" according to the former team of FTX Future Funds, it would be useful for a number of us to have alternatives.Current large funders are re-considering their grant strategy (such as OpenPhil). These donors are going to be extremely busy in the next few weeks. If you're not too funding-constrained, it seems like a good and pro-social strategy is to wait until after the storm and let others who have urgent and important grants figure things out.However, it seems that this may not be true if you are very funding-restrained right now, or if some grants or fellowships have deadlines coming up soon that it may be useful to have on your radar.So, what are good places to apply for funding now? (and in the future too)To start, the FLI AI Existential Safety PhD Fellowship has a deadline on November 15.The Vitalik Buterin PhD Fellowship in AI Existential Safety is for PhD students who plan to work on AI existential safety research, or for existing PhD students who would not otherwise have funding to work on AI existential safety research. It will fund students for 5 years of their PhD, with extension funding possible. At universities in the US, UK, or Canada, annual funding will cover tuition, fees, and the stipend of the student's PhD program up to $40,000, as well as a fund of $10,000 that can be used for research-related expenses such as travel and computing. At universities not in the US, UK or Canada, the stipend amount will be adjusted to match local conditions. Fellows will also be invited to workshops where they will be able to interact with other researchers in the field.Applicants who are short-listed for the Fellowship will be reimbursed for application fees for up to 5 PhD programs, and will be invited to an information session about research groups that can serve as good homes for AI existential safety research.More about the fellowship here.What are other options?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: After recent FTX events, what are alternative sources of funding for longtermist projects?, published by CarolineJ on November 12, 2022 on The Effective Altruism Forum.Now that we know "that it looks likely that there are many committed grants that the Future Fund will be unable to honor" according to the former team of FTX Future Funds, it would be useful for a number of us to have alternatives.Current large funders are re-considering their grant strategy (such as OpenPhil). These donors are going to be extremely busy in the next few weeks. If you're not too funding-constrained, it seems like a good and pro-social strategy is to wait until after the storm and let others who have urgent and important grants figure things out.However, it seems that this may not be true if you are very funding-restrained right now, or if some grants or fellowships have deadlines coming up soon that it may be useful to have on your radar.So, what are good places to apply for funding now? (and in the future too)To start, the FLI AI Existential Safety PhD Fellowship has a deadline on November 15.The Vitalik Buterin PhD Fellowship in AI Existential Safety is for PhD students who plan to work on AI existential safety research, or for existing PhD students who would not otherwise have funding to work on AI existential safety research. It will fund students for 5 years of their PhD, with extension funding possible. At universities in the US, UK, or Canada, annual funding will cover tuition, fees, and the stipend of the student's PhD program up to $40,000, as well as a fund of $10,000 that can be used for research-related expenses such as travel and computing. At universities not in the US, UK or Canada, the stipend amount will be adjusted to match local conditions. Fellows will also be invited to workshops where they will be able to interact with other researchers in the field.Applicants who are short-listed for the Fellowship will be reimbursed for application fees for up to 5 PhD programs, and will be invited to an information session about research groups that can serve as good homes for AI existential safety research.More about the fellowship here.What are other options?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
CarolineJ https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:21 None full 3723
WdeiPrwgqW2wHAxgT_EA EA - A personal statement on FTX by William MacAskill Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A personal statement on FTX, published by William MacAskill on November 12, 2022 on The Effective Altruism Forum.This is a repost from a Twitter thread I made last night. It reads a little oddly when presented as a Forum post, but I wanted to have the content shared here for those not on Twitter.This is a thread of my thoughts and feelings about the actions that led to FTX’s bankruptcy, and the enormous harm that was caused as a result, involving the likely loss of many thousands of innocent people’s savings.Based on publicly available information, it seems to me more likely than not that senior leadership at FTX used customer deposits to bail out Alameda, despite terms of service prohibiting this, and a (later deleted) tweet from Sam claiming customer deposits are never invested.Some places making the case for this view include this article from Wall Street Journal, this tweet from jonwu.eth, this article from Bloomberg (and follow on articles).I am not certain that this is what happened. I haven’t been in contact with anyone at FTX (other than those at Future Fund), except a short email to resign from my unpaid advisor role at Future Fund. If new information vindicates FTX, I will change my view and offer an apology.But if there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.I want to make it utterly clear: if those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.If this is what happened, then I cannot in words convey how strongly I condemn what they did. I had put my trust in Sam, and if he lied and misused customer funds he betrayed me, just as he betrayed his customers, his employees, his investors, & the communities he was a part of.For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations.A clear-thinking EA should strongly oppose “ends justify the means” reasoning. I hope to write more soon about this. In the meantime, here are some links to writings produced over the years.These are some relevant sections from What We Owe The Future:Here is Toby Ord in The Precipice:Here is Holden Karnofsky:Here are the Centre for Effective Altruism’s Guiding Principles:If FTX misused customer funds, then I personally will have much to reflect on. Sam and FTX had a lot of goodwill – and some of that goodwill was the result of association with ideas I have spent my career promoting. If that goodwill laundered fraud, I am ashamed.As a community, too, we will need to reflect on what has happened, and how we could reduce the chance of anything like this from happening again. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that.But that in no way justifies fraud. If you think that you’re the exception, you’re duping yourself.We must make clear that we do not see ourselves as above common-sense ethical norms, and must engage criticism with humility.I know that others from inside and outside of the community have worried about the misuse of EA ideas in ways that could cause harm. I used to think these worries, though worth taking seriously, seemed speculative and unlikely.I was probably wrong. I will be reflecting on this in the days and months to come, and thinking through what should change.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
William MacAskill https://forum.effectivealtruism.org/posts/WdeiPrwgqW2wHAxgT/a-personal-statement-on-ftx Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A personal statement on FTX, published by William MacAskill on November 12, 2022 on The Effective Altruism Forum.This is a repost from a Twitter thread I made last night. It reads a little oddly when presented as a Forum post, but I wanted to have the content shared here for those not on Twitter.This is a thread of my thoughts and feelings about the actions that led to FTX’s bankruptcy, and the enormous harm that was caused as a result, involving the likely loss of many thousands of innocent people’s savings.Based on publicly available information, it seems to me more likely than not that senior leadership at FTX used customer deposits to bail out Alameda, despite terms of service prohibiting this, and a (later deleted) tweet from Sam claiming customer deposits are never invested.Some places making the case for this view include this article from Wall Street Journal, this tweet from jonwu.eth, this article from Bloomberg (and follow on articles).I am not certain that this is what happened. I haven’t been in contact with anyone at FTX (other than those at Future Fund), except a short email to resign from my unpaid advisor role at Future Fund. If new information vindicates FTX, I will change my view and offer an apology.But if there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.I want to make it utterly clear: if those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.If this is what happened, then I cannot in words convey how strongly I condemn what they did. I had put my trust in Sam, and if he lied and misused customer funds he betrayed me, just as he betrayed his customers, his employees, his investors, & the communities he was a part of.For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations.A clear-thinking EA should strongly oppose “ends justify the means” reasoning. I hope to write more soon about this. In the meantime, here are some links to writings produced over the years.These are some relevant sections from What We Owe The Future:Here is Toby Ord in The Precipice:Here is Holden Karnofsky:Here are the Centre for Effective Altruism’s Guiding Principles:If FTX misused customer funds, then I personally will have much to reflect on. Sam and FTX had a lot of goodwill – and some of that goodwill was the result of association with ideas I have spent my career promoting. If that goodwill laundered fraud, I am ashamed.As a community, too, we will need to reflect on what has happened, and how we could reduce the chance of anything like this from happening again. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that.But that in no way justifies fraud. If you think that you’re the exception, you’re duping yourself.We must make clear that we do not see ourselves as above common-sense ethical norms, and must engage criticism with humility.I know that others from inside and outside of the community have worried about the misuse of EA ideas in ways that could cause harm. I used to think these worries, though worth taking seriously, seemed speculative and unlikely.I was probably wrong. I will be reflecting on this in the days and months to come, and thinking through what should change.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 12 Nov 2022 16:48:05 +0000 EA - A personal statement on FTX by William MacAskill Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A personal statement on FTX, published by William MacAskill on November 12, 2022 on The Effective Altruism Forum.This is a repost from a Twitter thread I made last night. It reads a little oddly when presented as a Forum post, but I wanted to have the content shared here for those not on Twitter.This is a thread of my thoughts and feelings about the actions that led to FTX’s bankruptcy, and the enormous harm that was caused as a result, involving the likely loss of many thousands of innocent people’s savings.Based on publicly available information, it seems to me more likely than not that senior leadership at FTX used customer deposits to bail out Alameda, despite terms of service prohibiting this, and a (later deleted) tweet from Sam claiming customer deposits are never invested.Some places making the case for this view include this article from Wall Street Journal, this tweet from jonwu.eth, this article from Bloomberg (and follow on articles).I am not certain that this is what happened. I haven’t been in contact with anyone at FTX (other than those at Future Fund), except a short email to resign from my unpaid advisor role at Future Fund. If new information vindicates FTX, I will change my view and offer an apology.But if there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.I want to make it utterly clear: if those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.If this is what happened, then I cannot in words convey how strongly I condemn what they did. I had put my trust in Sam, and if he lied and misused customer funds he betrayed me, just as he betrayed his customers, his employees, his investors, & the communities he was a part of.For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations.A clear-thinking EA should strongly oppose “ends justify the means” reasoning. I hope to write more soon about this. In the meantime, here are some links to writings produced over the years.These are some relevant sections from What We Owe The Future:Here is Toby Ord in The Precipice:Here is Holden Karnofsky:Here are the Centre for Effective Altruism’s Guiding Principles:If FTX misused customer funds, then I personally will have much to reflect on. Sam and FTX had a lot of goodwill – and some of that goodwill was the result of association with ideas I have spent my career promoting. If that goodwill laundered fraud, I am ashamed.As a community, too, we will need to reflect on what has happened, and how we could reduce the chance of anything like this from happening again. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that.But that in no way justifies fraud. If you think that you’re the exception, you’re duping yourself.We must make clear that we do not see ourselves as above common-sense ethical norms, and must engage criticism with humility.I know that others from inside and outside of the community have worried about the misuse of EA ideas in ways that could cause harm. I used to think these worries, though worth taking seriously, seemed speculative and unlikely.I was probably wrong. I will be reflecting on this in the days and months to come, and thinking through what should change.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A personal statement on FTX, published by William MacAskill on November 12, 2022 on The Effective Altruism Forum.This is a repost from a Twitter thread I made last night. It reads a little oddly when presented as a Forum post, but I wanted to have the content shared here for those not on Twitter.This is a thread of my thoughts and feelings about the actions that led to FTX’s bankruptcy, and the enormous harm that was caused as a result, involving the likely loss of many thousands of innocent people’s savings.Based on publicly available information, it seems to me more likely than not that senior leadership at FTX used customer deposits to bail out Alameda, despite terms of service prohibiting this, and a (later deleted) tweet from Sam claiming customer deposits are never invested.Some places making the case for this view include this article from Wall Street Journal, this tweet from jonwu.eth, this article from Bloomberg (and follow on articles).I am not certain that this is what happened. I haven’t been in contact with anyone at FTX (other than those at Future Fund), except a short email to resign from my unpaid advisor role at Future Fund. If new information vindicates FTX, I will change my view and offer an apology.But if there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.I want to make it utterly clear: if those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.If this is what happened, then I cannot in words convey how strongly I condemn what they did. I had put my trust in Sam, and if he lied and misused customer funds he betrayed me, just as he betrayed his customers, his employees, his investors, & the communities he was a part of.For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations.A clear-thinking EA should strongly oppose “ends justify the means” reasoning. I hope to write more soon about this. In the meantime, here are some links to writings produced over the years.These are some relevant sections from What We Owe The Future:Here is Toby Ord in The Precipice:Here is Holden Karnofsky:Here are the Centre for Effective Altruism’s Guiding Principles:If FTX misused customer funds, then I personally will have much to reflect on. Sam and FTX had a lot of goodwill – and some of that goodwill was the result of association with ideas I have spent my career promoting. If that goodwill laundered fraud, I am ashamed.As a community, too, we will need to reflect on what has happened, and how we could reduce the chance of anything like this from happening again. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that.But that in no way justifies fraud. If you think that you’re the exception, you’re duping yourself.We must make clear that we do not see ourselves as above common-sense ethical norms, and must engage criticism with humility.I know that others from inside and outside of the community have worried about the misuse of EA ideas in ways that could cause harm. I used to think these worries, though worth taking seriously, seemed speculative and unlikely.I was probably wrong. I will be reflecting on this in the days and months to come, and thinking through what should change.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
William MacAskill https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:36 None full 3724
mQzLtPPGfv89Xo4wC_EA EA - GiveWell is hiring a Research Analyst (apply by November 20) by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell is hiring a Research Analyst (apply by November 20), published by GiveWell on November 12, 2022 on The Effective Altruism Forum.GiveWell is looking for a Research Analyst to join our core interventions team, which investigates and makes funding decisions about programs we're already supporting at scale, including our top charities. The application deadline for this role is midnight Pacific Standard Time on Sunday, November 20. This role does not require any particular academic credentials or work experience. However, it does demand strong analytical and communication skills and a high degree of comfort in interpreting data.More details follow. We invite anyone who feels this role would be a good fit for them to apply by the above deadline.The roleAs a Research Analyst on GiveWell's core interventions team, you will help the team decide how hundreds of millions of dollars will be spent with the goal of saving and improving the lives of people living in the lowest-income communities in the world. The core interventions team focuses on programs that we are already supporting at scale—this includes our top charities. This team updates our cost-effectiveness estimates for these programs on a rolling basis. Your work will support our goal of always directing money to the most cost-effective funding opportunities available at the time of our grantmaking.You would be responsible for:Analyzing data on spending and numbers of people previously served by a program to generate estimates of cost per person reached in future programs.Adapting cost-effectiveness models to apply to new locations that a program may expand to, including determining which inputs to update and sourcing data for those inputs.Applying updates to our top charities cost-effectiveness analysis, a process which includes archiving the previous model, documenting the change in results, and managing a quality assurance procedure.Writing entries for cost-effectiveness analysis changelogs (examples here) and drafting updates to reports that summarize our cost-effectiveness analyses (example here).Creating cost-effectiveness analysis inputs for programs where we have a framework for these inputs but need to interpret messy data to apply that framework to a particular case. Examples include inputs for mosquito resistance to insecticides used in malaria nets, population-level burden of infection with certain parasites, and length of time a malaria net remains effective in a given context.About youNote: Confidence can sometimes hold us back from applying for a job. Here’s a secret: there's no such thing as a "perfect" candidate. GiveWell is looking for exceptional people who want to make a positive impact through their work and help create an organization where everyone can thrive. So whatever background you bring with you, please apply if this role would make you excited to come to work every day.We expect you will be characterized by many of the below qualities. We encourage you to apply if you would use the majority of these characteristics to describe yourself:Conscientious: You are able and willing to carefully follow a process with many steps. You carefully document processes and sources. You are thoughtful about your approach and perform high-quality work, with or without supervision. You exhibit meticulous attention to detail, including fine print, and don’t cut corners. (This doesn’t mean you never make mistakes, but you learn from them and rarely repeat one.)Strong communicator: You write clearly and concisely. You clearly communicate what you believe and why, as well as what you are uncertain about. You can explain our cost-effectiveness analysis parameters and our reasons for changing them clearly and succinctly to a semi-informed audience (e.g., GiveWell staff and donors).Analytic...]]>
GiveWell https://forum.effectivealtruism.org/posts/mQzLtPPGfv89Xo4wC/givewell-is-hiring-a-research-analyst-apply-by-november-20 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell is hiring a Research Analyst (apply by November 20), published by GiveWell on November 12, 2022 on The Effective Altruism Forum.GiveWell is looking for a Research Analyst to join our core interventions team, which investigates and makes funding decisions about programs we're already supporting at scale, including our top charities. The application deadline for this role is midnight Pacific Standard Time on Sunday, November 20. This role does not require any particular academic credentials or work experience. However, it does demand strong analytical and communication skills and a high degree of comfort in interpreting data.More details follow. We invite anyone who feels this role would be a good fit for them to apply by the above deadline.The roleAs a Research Analyst on GiveWell's core interventions team, you will help the team decide how hundreds of millions of dollars will be spent with the goal of saving and improving the lives of people living in the lowest-income communities in the world. The core interventions team focuses on programs that we are already supporting at scale—this includes our top charities. This team updates our cost-effectiveness estimates for these programs on a rolling basis. Your work will support our goal of always directing money to the most cost-effective funding opportunities available at the time of our grantmaking.You would be responsible for:Analyzing data on spending and numbers of people previously served by a program to generate estimates of cost per person reached in future programs.Adapting cost-effectiveness models to apply to new locations that a program may expand to, including determining which inputs to update and sourcing data for those inputs.Applying updates to our top charities cost-effectiveness analysis, a process which includes archiving the previous model, documenting the change in results, and managing a quality assurance procedure.Writing entries for cost-effectiveness analysis changelogs (examples here) and drafting updates to reports that summarize our cost-effectiveness analyses (example here).Creating cost-effectiveness analysis inputs for programs where we have a framework for these inputs but need to interpret messy data to apply that framework to a particular case. Examples include inputs for mosquito resistance to insecticides used in malaria nets, population-level burden of infection with certain parasites, and length of time a malaria net remains effective in a given context.About youNote: Confidence can sometimes hold us back from applying for a job. Here’s a secret: there's no such thing as a "perfect" candidate. GiveWell is looking for exceptional people who want to make a positive impact through their work and help create an organization where everyone can thrive. So whatever background you bring with you, please apply if this role would make you excited to come to work every day.We expect you will be characterized by many of the below qualities. We encourage you to apply if you would use the majority of these characteristics to describe yourself:Conscientious: You are able and willing to carefully follow a process with many steps. You carefully document processes and sources. You are thoughtful about your approach and perform high-quality work, with or without supervision. You exhibit meticulous attention to detail, including fine print, and don’t cut corners. (This doesn’t mean you never make mistakes, but you learn from them and rarely repeat one.)Strong communicator: You write clearly and concisely. You clearly communicate what you believe and why, as well as what you are uncertain about. You can explain our cost-effectiveness analysis parameters and our reasons for changing them clearly and succinctly to a semi-informed audience (e.g., GiveWell staff and donors).Analytic...]]>
Sat, 12 Nov 2022 16:43:36 +0000 EA - GiveWell is hiring a Research Analyst (apply by November 20) by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell is hiring a Research Analyst (apply by November 20), published by GiveWell on November 12, 2022 on The Effective Altruism Forum.GiveWell is looking for a Research Analyst to join our core interventions team, which investigates and makes funding decisions about programs we're already supporting at scale, including our top charities. The application deadline for this role is midnight Pacific Standard Time on Sunday, November 20. This role does not require any particular academic credentials or work experience. However, it does demand strong analytical and communication skills and a high degree of comfort in interpreting data.More details follow. We invite anyone who feels this role would be a good fit for them to apply by the above deadline.The roleAs a Research Analyst on GiveWell's core interventions team, you will help the team decide how hundreds of millions of dollars will be spent with the goal of saving and improving the lives of people living in the lowest-income communities in the world. The core interventions team focuses on programs that we are already supporting at scale—this includes our top charities. This team updates our cost-effectiveness estimates for these programs on a rolling basis. Your work will support our goal of always directing money to the most cost-effective funding opportunities available at the time of our grantmaking.You would be responsible for:Analyzing data on spending and numbers of people previously served by a program to generate estimates of cost per person reached in future programs.Adapting cost-effectiveness models to apply to new locations that a program may expand to, including determining which inputs to update and sourcing data for those inputs.Applying updates to our top charities cost-effectiveness analysis, a process which includes archiving the previous model, documenting the change in results, and managing a quality assurance procedure.Writing entries for cost-effectiveness analysis changelogs (examples here) and drafting updates to reports that summarize our cost-effectiveness analyses (example here).Creating cost-effectiveness analysis inputs for programs where we have a framework for these inputs but need to interpret messy data to apply that framework to a particular case. Examples include inputs for mosquito resistance to insecticides used in malaria nets, population-level burden of infection with certain parasites, and length of time a malaria net remains effective in a given context.About youNote: Confidence can sometimes hold us back from applying for a job. Here’s a secret: there's no such thing as a "perfect" candidate. GiveWell is looking for exceptional people who want to make a positive impact through their work and help create an organization where everyone can thrive. So whatever background you bring with you, please apply if this role would make you excited to come to work every day.We expect you will be characterized by many of the below qualities. We encourage you to apply if you would use the majority of these characteristics to describe yourself:Conscientious: You are able and willing to carefully follow a process with many steps. You carefully document processes and sources. You are thoughtful about your approach and perform high-quality work, with or without supervision. You exhibit meticulous attention to detail, including fine print, and don’t cut corners. (This doesn’t mean you never make mistakes, but you learn from them and rarely repeat one.)Strong communicator: You write clearly and concisely. You clearly communicate what you believe and why, as well as what you are uncertain about. You can explain our cost-effectiveness analysis parameters and our reasons for changing them clearly and succinctly to a semi-informed audience (e.g., GiveWell staff and donors).Analytic...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell is hiring a Research Analyst (apply by November 20), published by GiveWell on November 12, 2022 on The Effective Altruism Forum.GiveWell is looking for a Research Analyst to join our core interventions team, which investigates and makes funding decisions about programs we're already supporting at scale, including our top charities. The application deadline for this role is midnight Pacific Standard Time on Sunday, November 20. This role does not require any particular academic credentials or work experience. However, it does demand strong analytical and communication skills and a high degree of comfort in interpreting data.More details follow. We invite anyone who feels this role would be a good fit for them to apply by the above deadline.The roleAs a Research Analyst on GiveWell's core interventions team, you will help the team decide how hundreds of millions of dollars will be spent with the goal of saving and improving the lives of people living in the lowest-income communities in the world. The core interventions team focuses on programs that we are already supporting at scale—this includes our top charities. This team updates our cost-effectiveness estimates for these programs on a rolling basis. Your work will support our goal of always directing money to the most cost-effective funding opportunities available at the time of our grantmaking.You would be responsible for:Analyzing data on spending and numbers of people previously served by a program to generate estimates of cost per person reached in future programs.Adapting cost-effectiveness models to apply to new locations that a program may expand to, including determining which inputs to update and sourcing data for those inputs.Applying updates to our top charities cost-effectiveness analysis, a process which includes archiving the previous model, documenting the change in results, and managing a quality assurance procedure.Writing entries for cost-effectiveness analysis changelogs (examples here) and drafting updates to reports that summarize our cost-effectiveness analyses (example here).Creating cost-effectiveness analysis inputs for programs where we have a framework for these inputs but need to interpret messy data to apply that framework to a particular case. Examples include inputs for mosquito resistance to insecticides used in malaria nets, population-level burden of infection with certain parasites, and length of time a malaria net remains effective in a given context.About youNote: Confidence can sometimes hold us back from applying for a job. Here’s a secret: there's no such thing as a "perfect" candidate. GiveWell is looking for exceptional people who want to make a positive impact through their work and help create an organization where everyone can thrive. So whatever background you bring with you, please apply if this role would make you excited to come to work every day.We expect you will be characterized by many of the below qualities. We encourage you to apply if you would use the majority of these characteristics to describe yourself:Conscientious: You are able and willing to carefully follow a process with many steps. You carefully document processes and sources. You are thoughtful about your approach and perform high-quality work, with or without supervision. You exhibit meticulous attention to detail, including fine print, and don’t cut corners. (This doesn’t mean you never make mistakes, but you learn from them and rarely repeat one.)Strong communicator: You write clearly and concisely. You clearly communicate what you believe and why, as well as what you are uncertain about. You can explain our cost-effectiveness analysis parameters and our reasons for changing them clearly and succinctly to a semi-informed audience (e.g., GiveWell staff and donors).Analytic...]]>
GiveWell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:50 None full 3727
HyHCkK3aDsfY95MoD_EA EA - CEA/EV + OP + RP should engage an independent investigator to determine whether key figures in EA knew about the (likely) fraud at FTX by Tyrone-Jay Barugh Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA/EV + OP + RP should engage an independent investigator to determine whether key figures in EA knew about the (likely) fraud at FTX, published by Tyrone-Jay Barugh on November 12, 2022 on The Effective Altruism Forum.I think that key EA orgs (perhaps collectively) like the Center for Effective Altruism/Effective Ventures, Open Philanthropy, and Rethink Priorities should consider engaging an independent investigator (with no connection to EA) to try to identify whether key figures in those organisations knew (or can reasonably be inferred to have known, based on other things they knew) about the (likely) fraud at FTX.The investigator should also be contactable (probably confidentially?) by members of the community and others who might have relevant information.Typically a lawyer might be engaged to carry out the investigation, particularly because of professional obligations in relation to confidentiality (subject to the terms of reference of the investigation) and natural justice. But other professionals also conduct independent investigations, and there is no in principle reason why a lawyer needs to lead this work.My sense is that this should happen very promptly. If anyone did know about the (likely) fraud at FTX, then delay potentially increases the risk that any such person hides evidence or spreads an alternative account that vindicates them.I'm torn about whether to post this, as it may well be something that leadership (or lawyers) in the key EA orgs are already thinking about, and posting this prematurely might result in those orgs being pressured to launch an investigation hastily with bad terms of reference. On the other hand, I've had the concern that there is no whistleblower protection in EA for some time (raised in my March 2022 post on legal needs within EA), and others (e.g. Carla Zoe C) have made this point earlier still. I am not posting this because I have a strong belief that anyone in a key EA org did know - I have no information in this regard beyond vague speculation I have seen on Twitter.If you have a better suggestion, I would appreciate you sharing it (even if anonymously).Epistemic status: pretty uncertain, slightly anxious this will make the situation worse, but on balance think worth raising.Relevant disclosure: I received a regrant from the FTX Future Fund to investigate the legal needs of effective altruist organisations.Edit: I want to clarify that I don't think that any particular person knew. I still trust all the same community figures I trusted one week ago, other than folks in the FTX business. For each 'High Profile EA' I can think of, I would be very surprised if that person in particular knew. But even if we think there is only a 0.1% chance that any of the most influential, say, 100 EAs knew, then the chance that none of them knew is 0.999^100, which is about 90.4% (assuming we naively treat those as independent events). If we care about the top 1000 most influential EAs, then we could get that 90.4% chance with just a 0.01% chance of failure.Edit: I think Ryan Carey's comment is further in the right direction than this post (subject to my view that an independent investigation should stick to fact-finding rather than making philosophical/moral calls for EA) plus I've also had other people contact me spitballing ideas that seem sensible. I don't know what the terms of reference of an investigation would be, but it does seem like simply answering "did anybody know" might be the wrong approach. If you have further suggestions for the sorts of things that should be considered, it might be worth dropping those into the comments.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tyrone-Jay Barugh https://forum.effectivealtruism.org/posts/HyHCkK3aDsfY95MoD/cea-ev-op-rp-should-engage-an-independent-investigator-to Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA/EV + OP + RP should engage an independent investigator to determine whether key figures in EA knew about the (likely) fraud at FTX, published by Tyrone-Jay Barugh on November 12, 2022 on The Effective Altruism Forum.I think that key EA orgs (perhaps collectively) like the Center for Effective Altruism/Effective Ventures, Open Philanthropy, and Rethink Priorities should consider engaging an independent investigator (with no connection to EA) to try to identify whether key figures in those organisations knew (or can reasonably be inferred to have known, based on other things they knew) about the (likely) fraud at FTX.The investigator should also be contactable (probably confidentially?) by members of the community and others who might have relevant information.Typically a lawyer might be engaged to carry out the investigation, particularly because of professional obligations in relation to confidentiality (subject to the terms of reference of the investigation) and natural justice. But other professionals also conduct independent investigations, and there is no in principle reason why a lawyer needs to lead this work.My sense is that this should happen very promptly. If anyone did know about the (likely) fraud at FTX, then delay potentially increases the risk that any such person hides evidence or spreads an alternative account that vindicates them.I'm torn about whether to post this, as it may well be something that leadership (or lawyers) in the key EA orgs are already thinking about, and posting this prematurely might result in those orgs being pressured to launch an investigation hastily with bad terms of reference. On the other hand, I've had the concern that there is no whistleblower protection in EA for some time (raised in my March 2022 post on legal needs within EA), and others (e.g. Carla Zoe C) have made this point earlier still. I am not posting this because I have a strong belief that anyone in a key EA org did know - I have no information in this regard beyond vague speculation I have seen on Twitter.If you have a better suggestion, I would appreciate you sharing it (even if anonymously).Epistemic status: pretty uncertain, slightly anxious this will make the situation worse, but on balance think worth raising.Relevant disclosure: I received a regrant from the FTX Future Fund to investigate the legal needs of effective altruist organisations.Edit: I want to clarify that I don't think that any particular person knew. I still trust all the same community figures I trusted one week ago, other than folks in the FTX business. For each 'High Profile EA' I can think of, I would be very surprised if that person in particular knew. But even if we think there is only a 0.1% chance that any of the most influential, say, 100 EAs knew, then the chance that none of them knew is 0.999^100, which is about 90.4% (assuming we naively treat those as independent events). If we care about the top 1000 most influential EAs, then we could get that 90.4% chance with just a 0.01% chance of failure.Edit: I think Ryan Carey's comment is further in the right direction than this post (subject to my view that an independent investigation should stick to fact-finding rather than making philosophical/moral calls for EA) plus I've also had other people contact me spitballing ideas that seem sensible. I don't know what the terms of reference of an investigation would be, but it does seem like simply answering "did anybody know" might be the wrong approach. If you have further suggestions for the sorts of things that should be considered, it might be worth dropping those into the comments.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 12 Nov 2022 14:55:31 +0000 EA - CEA/EV + OP + RP should engage an independent investigator to determine whether key figures in EA knew about the (likely) fraud at FTX by Tyrone-Jay Barugh Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA/EV + OP + RP should engage an independent investigator to determine whether key figures in EA knew about the (likely) fraud at FTX, published by Tyrone-Jay Barugh on November 12, 2022 on The Effective Altruism Forum.I think that key EA orgs (perhaps collectively) like the Center for Effective Altruism/Effective Ventures, Open Philanthropy, and Rethink Priorities should consider engaging an independent investigator (with no connection to EA) to try to identify whether key figures in those organisations knew (or can reasonably be inferred to have known, based on other things they knew) about the (likely) fraud at FTX.The investigator should also be contactable (probably confidentially?) by members of the community and others who might have relevant information.Typically a lawyer might be engaged to carry out the investigation, particularly because of professional obligations in relation to confidentiality (subject to the terms of reference of the investigation) and natural justice. But other professionals also conduct independent investigations, and there is no in principle reason why a lawyer needs to lead this work.My sense is that this should happen very promptly. If anyone did know about the (likely) fraud at FTX, then delay potentially increases the risk that any such person hides evidence or spreads an alternative account that vindicates them.I'm torn about whether to post this, as it may well be something that leadership (or lawyers) in the key EA orgs are already thinking about, and posting this prematurely might result in those orgs being pressured to launch an investigation hastily with bad terms of reference. On the other hand, I've had the concern that there is no whistleblower protection in EA for some time (raised in my March 2022 post on legal needs within EA), and others (e.g. Carla Zoe C) have made this point earlier still. I am not posting this because I have a strong belief that anyone in a key EA org did know - I have no information in this regard beyond vague speculation I have seen on Twitter.If you have a better suggestion, I would appreciate you sharing it (even if anonymously).Epistemic status: pretty uncertain, slightly anxious this will make the situation worse, but on balance think worth raising.Relevant disclosure: I received a regrant from the FTX Future Fund to investigate the legal needs of effective altruist organisations.Edit: I want to clarify that I don't think that any particular person knew. I still trust all the same community figures I trusted one week ago, other than folks in the FTX business. For each 'High Profile EA' I can think of, I would be very surprised if that person in particular knew. But even if we think there is only a 0.1% chance that any of the most influential, say, 100 EAs knew, then the chance that none of them knew is 0.999^100, which is about 90.4% (assuming we naively treat those as independent events). If we care about the top 1000 most influential EAs, then we could get that 90.4% chance with just a 0.01% chance of failure.Edit: I think Ryan Carey's comment is further in the right direction than this post (subject to my view that an independent investigation should stick to fact-finding rather than making philosophical/moral calls for EA) plus I've also had other people contact me spitballing ideas that seem sensible. I don't know what the terms of reference of an investigation would be, but it does seem like simply answering "did anybody know" might be the wrong approach. If you have further suggestions for the sorts of things that should be considered, it might be worth dropping those into the comments.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA/EV + OP + RP should engage an independent investigator to determine whether key figures in EA knew about the (likely) fraud at FTX, published by Tyrone-Jay Barugh on November 12, 2022 on The Effective Altruism Forum.I think that key EA orgs (perhaps collectively) like the Center for Effective Altruism/Effective Ventures, Open Philanthropy, and Rethink Priorities should consider engaging an independent investigator (with no connection to EA) to try to identify whether key figures in those organisations knew (or can reasonably be inferred to have known, based on other things they knew) about the (likely) fraud at FTX.The investigator should also be contactable (probably confidentially?) by members of the community and others who might have relevant information.Typically a lawyer might be engaged to carry out the investigation, particularly because of professional obligations in relation to confidentiality (subject to the terms of reference of the investigation) and natural justice. But other professionals also conduct independent investigations, and there is no in principle reason why a lawyer needs to lead this work.My sense is that this should happen very promptly. If anyone did know about the (likely) fraud at FTX, then delay potentially increases the risk that any such person hides evidence or spreads an alternative account that vindicates them.I'm torn about whether to post this, as it may well be something that leadership (or lawyers) in the key EA orgs are already thinking about, and posting this prematurely might result in those orgs being pressured to launch an investigation hastily with bad terms of reference. On the other hand, I've had the concern that there is no whistleblower protection in EA for some time (raised in my March 2022 post on legal needs within EA), and others (e.g. Carla Zoe C) have made this point earlier still. I am not posting this because I have a strong belief that anyone in a key EA org did know - I have no information in this regard beyond vague speculation I have seen on Twitter.If you have a better suggestion, I would appreciate you sharing it (even if anonymously).Epistemic status: pretty uncertain, slightly anxious this will make the situation worse, but on balance think worth raising.Relevant disclosure: I received a regrant from the FTX Future Fund to investigate the legal needs of effective altruist organisations.Edit: I want to clarify that I don't think that any particular person knew. I still trust all the same community figures I trusted one week ago, other than folks in the FTX business. For each 'High Profile EA' I can think of, I would be very surprised if that person in particular knew. But even if we think there is only a 0.1% chance that any of the most influential, say, 100 EAs knew, then the chance that none of them knew is 0.999^100, which is about 90.4% (assuming we naively treat those as independent events). If we care about the top 1000 most influential EAs, then we could get that 90.4% chance with just a 0.01% chance of failure.Edit: I think Ryan Carey's comment is further in the right direction than this post (subject to my view that an independent investigation should stick to fact-finding rather than making philosophical/moral calls for EA) plus I've also had other people contact me spitballing ideas that seem sensible. I don't know what the terms of reference of an investigation would be, but it does seem like simply answering "did anybody know" might be the wrong approach. If you have further suggestions for the sorts of things that should be considered, it might be worth dropping those into the comments.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tyrone-Jay Barugh https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:42 None full 3725
4zjnFxGWYkEF4nqMi_EA EA - How could we have avoided this? by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How could we have avoided this?, published by Nathan Young on November 12, 2022 on The Effective Altruism Forum.It seems to me that the information that betting so heavily on FTX and SBF was an avoidable failure. So what could we have done ex-ante to avoid it?You have to suggest things we could have actually done with the information we had. Some examples of information we had:First, the best counterargument:Then again, if we think we are better at spotting x-risks then these people maybe this should make us update towards being worse at predicting things.Also I know there is a temptation to wait until the dust settles, but I don't think that's right. We are a community with useful information-gathering technology. We are capable of discussing here.Things we knew at the timeWe knew that about half of Alameda left at one time. I'm pretty sure many are EAs or know them and they would have had some sense of this.We knew that SBF's wealth was a very high proportion of effective altruism's total wealth. And we ought to have known that something that took him down would be catastrophic to us.This was Charles Dillon's take, but he tweets behind a locked account and gave me permission to tweet it.Peter Wildeford noted the possible reputational risk 6 months ago:We knew that corruption is possible and that large institutions need to work hard to avoid being coopted by bad actors.Many people found crypto distasteful or felt that crypto could have been a scam.FTX's Chief Compliance Officer, Daniel S. Friedberg, had behaved fraudulently In the past. This from august 2021.In 2013, an audio recording surfaced that made mincemeat of UB’s original version of events. The recording of an early 2008 meeting with the principal cheater (Russ Hamilton) features Daniel S. Friedberg actively conspiring with the other principals in attendance to (a) publicly obfuscate the source of the cheating, (b) minimize the amount of restitution made to players, and (c) force shareholders to shoulder most of the bill.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://forum.effectivealtruism.org/posts/4zjnFxGWYkEF4nqMi/how-could-we-have-avoided-this Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How could we have avoided this?, published by Nathan Young on November 12, 2022 on The Effective Altruism Forum.It seems to me that the information that betting so heavily on FTX and SBF was an avoidable failure. So what could we have done ex-ante to avoid it?You have to suggest things we could have actually done with the information we had. Some examples of information we had:First, the best counterargument:Then again, if we think we are better at spotting x-risks then these people maybe this should make us update towards being worse at predicting things.Also I know there is a temptation to wait until the dust settles, but I don't think that's right. We are a community with useful information-gathering technology. We are capable of discussing here.Things we knew at the timeWe knew that about half of Alameda left at one time. I'm pretty sure many are EAs or know them and they would have had some sense of this.We knew that SBF's wealth was a very high proportion of effective altruism's total wealth. And we ought to have known that something that took him down would be catastrophic to us.This was Charles Dillon's take, but he tweets behind a locked account and gave me permission to tweet it.Peter Wildeford noted the possible reputational risk 6 months ago:We knew that corruption is possible and that large institutions need to work hard to avoid being coopted by bad actors.Many people found crypto distasteful or felt that crypto could have been a scam.FTX's Chief Compliance Officer, Daniel S. Friedberg, had behaved fraudulently In the past. This from august 2021.In 2013, an audio recording surfaced that made mincemeat of UB’s original version of events. The recording of an early 2008 meeting with the principal cheater (Russ Hamilton) features Daniel S. Friedberg actively conspiring with the other principals in attendance to (a) publicly obfuscate the source of the cheating, (b) minimize the amount of restitution made to players, and (c) force shareholders to shoulder most of the bill.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 12 Nov 2022 14:19:46 +0000 EA - How could we have avoided this? by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How could we have avoided this?, published by Nathan Young on November 12, 2022 on The Effective Altruism Forum.It seems to me that the information that betting so heavily on FTX and SBF was an avoidable failure. So what could we have done ex-ante to avoid it?You have to suggest things we could have actually done with the information we had. Some examples of information we had:First, the best counterargument:Then again, if we think we are better at spotting x-risks then these people maybe this should make us update towards being worse at predicting things.Also I know there is a temptation to wait until the dust settles, but I don't think that's right. We are a community with useful information-gathering technology. We are capable of discussing here.Things we knew at the timeWe knew that about half of Alameda left at one time. I'm pretty sure many are EAs or know them and they would have had some sense of this.We knew that SBF's wealth was a very high proportion of effective altruism's total wealth. And we ought to have known that something that took him down would be catastrophic to us.This was Charles Dillon's take, but he tweets behind a locked account and gave me permission to tweet it.Peter Wildeford noted the possible reputational risk 6 months ago:We knew that corruption is possible and that large institutions need to work hard to avoid being coopted by bad actors.Many people found crypto distasteful or felt that crypto could have been a scam.FTX's Chief Compliance Officer, Daniel S. Friedberg, had behaved fraudulently In the past. This from august 2021.In 2013, an audio recording surfaced that made mincemeat of UB’s original version of events. The recording of an early 2008 meeting with the principal cheater (Russ Hamilton) features Daniel S. Friedberg actively conspiring with the other principals in attendance to (a) publicly obfuscate the source of the cheating, (b) minimize the amount of restitution made to players, and (c) force shareholders to shoulder most of the bill.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How could we have avoided this?, published by Nathan Young on November 12, 2022 on The Effective Altruism Forum.It seems to me that the information that betting so heavily on FTX and SBF was an avoidable failure. So what could we have done ex-ante to avoid it?You have to suggest things we could have actually done with the information we had. Some examples of information we had:First, the best counterargument:Then again, if we think we are better at spotting x-risks then these people maybe this should make us update towards being worse at predicting things.Also I know there is a temptation to wait until the dust settles, but I don't think that's right. We are a community with useful information-gathering technology. We are capable of discussing here.Things we knew at the timeWe knew that about half of Alameda left at one time. I'm pretty sure many are EAs or know them and they would have had some sense of this.We knew that SBF's wealth was a very high proportion of effective altruism's total wealth. And we ought to have known that something that took him down would be catastrophic to us.This was Charles Dillon's take, but he tweets behind a locked account and gave me permission to tweet it.Peter Wildeford noted the possible reputational risk 6 months ago:We knew that corruption is possible and that large institutions need to work hard to avoid being coopted by bad actors.Many people found crypto distasteful or felt that crypto could have been a scam.FTX's Chief Compliance Officer, Daniel S. Friedberg, had behaved fraudulently In the past. This from august 2021.In 2013, an audio recording surfaced that made mincemeat of UB’s original version of events. The recording of an early 2008 meeting with the principal cheater (Russ Hamilton) features Daniel S. Friedberg actively conspiring with the other principals in attendance to (a) publicly obfuscate the source of the cheating, (b) minimize the amount of restitution made to players, and (c) force shareholders to shoulder most of the bill.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nathan Young https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:07 None full 3726
nvus8kuGxyacyfXeg_EA EA - Naïve vs Prudent Utilitarianism by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Naïve vs Prudent Utilitarianism, published by Richard Y Chappell on November 11, 2022 on The Effective Altruism Forum.Critics sometimes imagine that utilitarianism directs us to act disreputably whenever it appears (however fleetingly) that the act would have good consequences. Or whenever crudely calculating the most salient first-order consequences (in isolation) yields a positive number. This “naïve utilitarian” decision procedure is clearly daft, and not something that any sound utilitarian actually advocates. On the other hand, critics sometimes mistake this point for the claim that utilitarianism itself is plainly counterproductive, and necessarily advocates against its own acceptance. While that’s always a conceptual possibility, I don’t think it has any empirical credibility. Most who think otherwise are still making the mistake of conflating naïve utilitarianism with utilitarianism proper. The latter is a much more prudent view, as I’ll now explain.Adjusting for BiasImagine an archer, trying to hit a target on a windy day. A naive archer might ignore the wind, aim directly at the target, and (predictably) miss as their arrow is blown off-course. A more sophisticated archer will deliberately re-calibrate, superficially seeming to aim “off-target” but in a way that makes them more likely to hit. Finally, a master archer will automatically adjust as needed, doing what (to her) seems obviously how to hit the target, though to a naïve observer it might look like she was aiming awry.Is the best way to be a successful archer on a windy day to stop even trying to hit the target? Surely not. (It’s conceivable that an evil demon might interfere in such a way as to make this so — i.e., so that only people genuinely trying to miss would end up hitting the target — but that’s a much weirder case than what we’re talking about.) The point is just that naïve targeting is likely to miss. Making appropriate adjustments to one’s aim (overriding naive judgments of how to achieve the goal) is not at all the same thing as abandoning the goal altogether.And so it goes in ethics. Crudely calculating the expected utility of (e.g.) murdering your rivals and harvesting their vital organs, and naively acting upon such first-pass calculations, would be predictably disastrous. This doesn’t mean that you should abandon the goal of doing good. It just means that you should pursue it in a prudent rather than naive manner.Metacoherence prohibits naïve utilitarianism“But doesn’t utilitarianism direct us to maximize expected value?” you may ask. Only in the same way that norms of archery direct our archer to hit the target. There’s nothing in either norm that requires (or even permits) it to be pursued naively, without obviously-called-for bias adjustments.This is something that has been stressed by utilitarian theorists from Mill and Sidgwick through to R.M. Hare, Pettit, and Railton—to name but a few. Here’s a pithy listing from J.L. Mackie of six reasons why utilitarians oppose naïve calculation as a decision procedure:Shortage of time and energy will in general preclude such calculations.Even if time and energy are available, the relevant information commonly is not.An agent's judgment on particular issues is likely to be distorted by his own interests and special affections.Even if he were intellectually able to determine the right choice, weakness of will would be likely to impair his putting of it into effect.Even decisions that are right in themselves and actions based on them are liable to be misused as precedents, so that they will encourage and seem to legitimate wrong actions that are superficially similar to them.And, human nature being what it is, a practical working morality must not be too demanding: it is worse than useless to set standards so high that there is ...]]>
Richard Y Chappell https://forum.effectivealtruism.org/posts/nvus8kuGxyacyfXeg/naive-vs-prudent-utilitarianism Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Naïve vs Prudent Utilitarianism, published by Richard Y Chappell on November 11, 2022 on The Effective Altruism Forum.Critics sometimes imagine that utilitarianism directs us to act disreputably whenever it appears (however fleetingly) that the act would have good consequences. Or whenever crudely calculating the most salient first-order consequences (in isolation) yields a positive number. This “naïve utilitarian” decision procedure is clearly daft, and not something that any sound utilitarian actually advocates. On the other hand, critics sometimes mistake this point for the claim that utilitarianism itself is plainly counterproductive, and necessarily advocates against its own acceptance. While that’s always a conceptual possibility, I don’t think it has any empirical credibility. Most who think otherwise are still making the mistake of conflating naïve utilitarianism with utilitarianism proper. The latter is a much more prudent view, as I’ll now explain.Adjusting for BiasImagine an archer, trying to hit a target on a windy day. A naive archer might ignore the wind, aim directly at the target, and (predictably) miss as their arrow is blown off-course. A more sophisticated archer will deliberately re-calibrate, superficially seeming to aim “off-target” but in a way that makes them more likely to hit. Finally, a master archer will automatically adjust as needed, doing what (to her) seems obviously how to hit the target, though to a naïve observer it might look like she was aiming awry.Is the best way to be a successful archer on a windy day to stop even trying to hit the target? Surely not. (It’s conceivable that an evil demon might interfere in such a way as to make this so — i.e., so that only people genuinely trying to miss would end up hitting the target — but that’s a much weirder case than what we’re talking about.) The point is just that naïve targeting is likely to miss. Making appropriate adjustments to one’s aim (overriding naive judgments of how to achieve the goal) is not at all the same thing as abandoning the goal altogether.And so it goes in ethics. Crudely calculating the expected utility of (e.g.) murdering your rivals and harvesting their vital organs, and naively acting upon such first-pass calculations, would be predictably disastrous. This doesn’t mean that you should abandon the goal of doing good. It just means that you should pursue it in a prudent rather than naive manner.Metacoherence prohibits naïve utilitarianism“But doesn’t utilitarianism direct us to maximize expected value?” you may ask. Only in the same way that norms of archery direct our archer to hit the target. There’s nothing in either norm that requires (or even permits) it to be pursued naively, without obviously-called-for bias adjustments.This is something that has been stressed by utilitarian theorists from Mill and Sidgwick through to R.M. Hare, Pettit, and Railton—to name but a few. Here’s a pithy listing from J.L. Mackie of six reasons why utilitarians oppose naïve calculation as a decision procedure:Shortage of time and energy will in general preclude such calculations.Even if time and energy are available, the relevant information commonly is not.An agent's judgment on particular issues is likely to be distorted by his own interests and special affections.Even if he were intellectually able to determine the right choice, weakness of will would be likely to impair his putting of it into effect.Even decisions that are right in themselves and actions based on them are liable to be misused as precedents, so that they will encourage and seem to legitimate wrong actions that are superficially similar to them.And, human nature being what it is, a practical working morality must not be too demanding: it is worse than useless to set standards so high that there is ...]]>
Sat, 12 Nov 2022 06:20:53 +0000 EA - Naïve vs Prudent Utilitarianism by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Naïve vs Prudent Utilitarianism, published by Richard Y Chappell on November 11, 2022 on The Effective Altruism Forum.Critics sometimes imagine that utilitarianism directs us to act disreputably whenever it appears (however fleetingly) that the act would have good consequences. Or whenever crudely calculating the most salient first-order consequences (in isolation) yields a positive number. This “naïve utilitarian” decision procedure is clearly daft, and not something that any sound utilitarian actually advocates. On the other hand, critics sometimes mistake this point for the claim that utilitarianism itself is plainly counterproductive, and necessarily advocates against its own acceptance. While that’s always a conceptual possibility, I don’t think it has any empirical credibility. Most who think otherwise are still making the mistake of conflating naïve utilitarianism with utilitarianism proper. The latter is a much more prudent view, as I’ll now explain.Adjusting for BiasImagine an archer, trying to hit a target on a windy day. A naive archer might ignore the wind, aim directly at the target, and (predictably) miss as their arrow is blown off-course. A more sophisticated archer will deliberately re-calibrate, superficially seeming to aim “off-target” but in a way that makes them more likely to hit. Finally, a master archer will automatically adjust as needed, doing what (to her) seems obviously how to hit the target, though to a naïve observer it might look like she was aiming awry.Is the best way to be a successful archer on a windy day to stop even trying to hit the target? Surely not. (It’s conceivable that an evil demon might interfere in such a way as to make this so — i.e., so that only people genuinely trying to miss would end up hitting the target — but that’s a much weirder case than what we’re talking about.) The point is just that naïve targeting is likely to miss. Making appropriate adjustments to one’s aim (overriding naive judgments of how to achieve the goal) is not at all the same thing as abandoning the goal altogether.And so it goes in ethics. Crudely calculating the expected utility of (e.g.) murdering your rivals and harvesting their vital organs, and naively acting upon such first-pass calculations, would be predictably disastrous. This doesn’t mean that you should abandon the goal of doing good. It just means that you should pursue it in a prudent rather than naive manner.Metacoherence prohibits naïve utilitarianism“But doesn’t utilitarianism direct us to maximize expected value?” you may ask. Only in the same way that norms of archery direct our archer to hit the target. There’s nothing in either norm that requires (or even permits) it to be pursued naively, without obviously-called-for bias adjustments.This is something that has been stressed by utilitarian theorists from Mill and Sidgwick through to R.M. Hare, Pettit, and Railton—to name but a few. Here’s a pithy listing from J.L. Mackie of six reasons why utilitarians oppose naïve calculation as a decision procedure:Shortage of time and energy will in general preclude such calculations.Even if time and energy are available, the relevant information commonly is not.An agent's judgment on particular issues is likely to be distorted by his own interests and special affections.Even if he were intellectually able to determine the right choice, weakness of will would be likely to impair his putting of it into effect.Even decisions that are right in themselves and actions based on them are liable to be misused as precedents, so that they will encourage and seem to legitimate wrong actions that are superficially similar to them.And, human nature being what it is, a practical working morality must not be too demanding: it is worse than useless to set standards so high that there is ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Naïve vs Prudent Utilitarianism, published by Richard Y Chappell on November 11, 2022 on The Effective Altruism Forum.Critics sometimes imagine that utilitarianism directs us to act disreputably whenever it appears (however fleetingly) that the act would have good consequences. Or whenever crudely calculating the most salient first-order consequences (in isolation) yields a positive number. This “naïve utilitarian” decision procedure is clearly daft, and not something that any sound utilitarian actually advocates. On the other hand, critics sometimes mistake this point for the claim that utilitarianism itself is plainly counterproductive, and necessarily advocates against its own acceptance. While that’s always a conceptual possibility, I don’t think it has any empirical credibility. Most who think otherwise are still making the mistake of conflating naïve utilitarianism with utilitarianism proper. The latter is a much more prudent view, as I’ll now explain.Adjusting for BiasImagine an archer, trying to hit a target on a windy day. A naive archer might ignore the wind, aim directly at the target, and (predictably) miss as their arrow is blown off-course. A more sophisticated archer will deliberately re-calibrate, superficially seeming to aim “off-target” but in a way that makes them more likely to hit. Finally, a master archer will automatically adjust as needed, doing what (to her) seems obviously how to hit the target, though to a naïve observer it might look like she was aiming awry.Is the best way to be a successful archer on a windy day to stop even trying to hit the target? Surely not. (It’s conceivable that an evil demon might interfere in such a way as to make this so — i.e., so that only people genuinely trying to miss would end up hitting the target — but that’s a much weirder case than what we’re talking about.) The point is just that naïve targeting is likely to miss. Making appropriate adjustments to one’s aim (overriding naive judgments of how to achieve the goal) is not at all the same thing as abandoning the goal altogether.And so it goes in ethics. Crudely calculating the expected utility of (e.g.) murdering your rivals and harvesting their vital organs, and naively acting upon such first-pass calculations, would be predictably disastrous. This doesn’t mean that you should abandon the goal of doing good. It just means that you should pursue it in a prudent rather than naive manner.Metacoherence prohibits naïve utilitarianism“But doesn’t utilitarianism direct us to maximize expected value?” you may ask. Only in the same way that norms of archery direct our archer to hit the target. There’s nothing in either norm that requires (or even permits) it to be pursued naively, without obviously-called-for bias adjustments.This is something that has been stressed by utilitarian theorists from Mill and Sidgwick through to R.M. Hare, Pettit, and Railton—to name but a few. Here’s a pithy listing from J.L. Mackie of six reasons why utilitarians oppose naïve calculation as a decision procedure:Shortage of time and energy will in general preclude such calculations.Even if time and energy are available, the relevant information commonly is not.An agent's judgment on particular issues is likely to be distorted by his own interests and special affections.Even if he were intellectually able to determine the right choice, weakness of will would be likely to impair his putting of it into effect.Even decisions that are right in themselves and actions based on them are liable to be misused as precedents, so that they will encourage and seem to legitimate wrong actions that are superficially similar to them.And, human nature being what it is, a practical working morality must not be too demanding: it is worse than useless to set standards so high that there is ...]]>
Richard Y Chappell https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:37 None full 3720
FKJ8yiF3KjFhAuivt_EA EA - IMPCO, don't injure yourself by returning FTXFF money for services you already provided by EliezerYudkowsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: IMPCO, don't injure yourself by returning FTXFF money for services you already provided, published by EliezerYudkowsky on November 12, 2022 on The Effective Altruism Forum.In my possibly contrarian opinion, and speaking as somebody who I don't think actually got any money directly from FTX Future Fund that I can recall; also speaking for myself and hastily, having not run this post past any other major leaders in EA:You are not obligated to return funding that got to you ultimately by way of FTX; especially if it's been given for a service you already rendered, any more than the electrical utility ought to return FTX's money that's already been spent on electricity; especially if that would put you to hardship. This is not a for-the-greater-good argument; I don't think you're obligated to that much personal martyrdom in the first place, just like the utility company isn't so obligated.It's fine to be somebody who sells utilons for money, just like utilities sell electricity for money. People who work in the philanthropic sector, and don't capture all of the gain they create, do not thereby relinquish the solidity of their claim to the part of that gain they do capture, to below the levels of an electrical utility's claim to similar money. The money you hold could maybe possibly compensate some FTX users - if it doesn't just get immediately captured by users selling NFTs to Bahamian accounts, or the equivalent in later bankruptcy proceedings - but that's equally true of the electrical utility's money, or, heck, money held by any number of people richer than you. Plumbers who worked on the FTX building should likewise not anguish about that and give the money back; yes, even though plumbers are probably well above average in income for the Bahamas.You are not more deeply implicit in FTX's sins, by way of the FTX FF connection, than the plumber who worked directly on their building.I don't like the way that some people think about charity, like anybody who works in the philanthropic sector gives up the right to really own anything. You can try to do a little good in the world, or even sell a little good into the world, without signing up to be the martyr who gets all the blame for not being better when something goes wrong.You probably forwent some of your possible gains to work in the charity sector at all, and took a bit of a generally riskier job. (If you didn't know that, my condolences.) You may suddenly need to look for a new job. You did not sign away your legal or moral right to keep title to money that was already given you, if you've already performed the corresponding service, or you're still going to perform that service. If you can't perform the service anymore, then maybe return some of that money once it's clear that it'll actually make its way to FTX customers; but keep what covers the cost of a month to re-search for a job, or the month you already spent searching for that job.It's fine to call it your charity and your contribution that you undercharge for those utilons and don't capture as much value as you create - if you're nice enough to do that, which you don't have to be, you can be Lawful Neutral instead of Lawful Good and I won't think you're as cool but I'll still happily trade with you.(I apologize for resorting to the D&D alignment chart, at this point, but I really am not sure how to compactly express these ideas without those concepts, or concepts that I could define more elaborately that would mean the same thing.)That you're trying to be some degree of Good, by undercharging for the utilons you provide, doesn't mean you can't still hold that money that you got in Lawful Neutral exchange for the services you sold. Just like any ordinary plumber is entitled to do, if it turned out they unwittingly fixed the toilet of a bad bad pers...]]>
EliezerYudkowsky https://forum.effectivealtruism.org/posts/FKJ8yiF3KjFhAuivt/impco-don-t-injure-yourself-by-returning-ftxff-money-for Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: IMPCO, don't injure yourself by returning FTXFF money for services you already provided, published by EliezerYudkowsky on November 12, 2022 on The Effective Altruism Forum.In my possibly contrarian opinion, and speaking as somebody who I don't think actually got any money directly from FTX Future Fund that I can recall; also speaking for myself and hastily, having not run this post past any other major leaders in EA:You are not obligated to return funding that got to you ultimately by way of FTX; especially if it's been given for a service you already rendered, any more than the electrical utility ought to return FTX's money that's already been spent on electricity; especially if that would put you to hardship. This is not a for-the-greater-good argument; I don't think you're obligated to that much personal martyrdom in the first place, just like the utility company isn't so obligated.It's fine to be somebody who sells utilons for money, just like utilities sell electricity for money. People who work in the philanthropic sector, and don't capture all of the gain they create, do not thereby relinquish the solidity of their claim to the part of that gain they do capture, to below the levels of an electrical utility's claim to similar money. The money you hold could maybe possibly compensate some FTX users - if it doesn't just get immediately captured by users selling NFTs to Bahamian accounts, or the equivalent in later bankruptcy proceedings - but that's equally true of the electrical utility's money, or, heck, money held by any number of people richer than you. Plumbers who worked on the FTX building should likewise not anguish about that and give the money back; yes, even though plumbers are probably well above average in income for the Bahamas.You are not more deeply implicit in FTX's sins, by way of the FTX FF connection, than the plumber who worked directly on their building.I don't like the way that some people think about charity, like anybody who works in the philanthropic sector gives up the right to really own anything. You can try to do a little good in the world, or even sell a little good into the world, without signing up to be the martyr who gets all the blame for not being better when something goes wrong.You probably forwent some of your possible gains to work in the charity sector at all, and took a bit of a generally riskier job. (If you didn't know that, my condolences.) You may suddenly need to look for a new job. You did not sign away your legal or moral right to keep title to money that was already given you, if you've already performed the corresponding service, or you're still going to perform that service. If you can't perform the service anymore, then maybe return some of that money once it's clear that it'll actually make its way to FTX customers; but keep what covers the cost of a month to re-search for a job, or the month you already spent searching for that job.It's fine to call it your charity and your contribution that you undercharge for those utilons and don't capture as much value as you create - if you're nice enough to do that, which you don't have to be, you can be Lawful Neutral instead of Lawful Good and I won't think you're as cool but I'll still happily trade with you.(I apologize for resorting to the D&D alignment chart, at this point, but I really am not sure how to compactly express these ideas without those concepts, or concepts that I could define more elaborately that would mean the same thing.)That you're trying to be some degree of Good, by undercharging for the utilons you provide, doesn't mean you can't still hold that money that you got in Lawful Neutral exchange for the services you sold. Just like any ordinary plumber is entitled to do, if it turned out they unwittingly fixed the toilet of a bad bad pers...]]>
Sat, 12 Nov 2022 05:09:14 +0000 EA - IMPCO, don't injure yourself by returning FTXFF money for services you already provided by EliezerYudkowsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: IMPCO, don't injure yourself by returning FTXFF money for services you already provided, published by EliezerYudkowsky on November 12, 2022 on The Effective Altruism Forum.In my possibly contrarian opinion, and speaking as somebody who I don't think actually got any money directly from FTX Future Fund that I can recall; also speaking for myself and hastily, having not run this post past any other major leaders in EA:You are not obligated to return funding that got to you ultimately by way of FTX; especially if it's been given for a service you already rendered, any more than the electrical utility ought to return FTX's money that's already been spent on electricity; especially if that would put you to hardship. This is not a for-the-greater-good argument; I don't think you're obligated to that much personal martyrdom in the first place, just like the utility company isn't so obligated.It's fine to be somebody who sells utilons for money, just like utilities sell electricity for money. People who work in the philanthropic sector, and don't capture all of the gain they create, do not thereby relinquish the solidity of their claim to the part of that gain they do capture, to below the levels of an electrical utility's claim to similar money. The money you hold could maybe possibly compensate some FTX users - if it doesn't just get immediately captured by users selling NFTs to Bahamian accounts, or the equivalent in later bankruptcy proceedings - but that's equally true of the electrical utility's money, or, heck, money held by any number of people richer than you. Plumbers who worked on the FTX building should likewise not anguish about that and give the money back; yes, even though plumbers are probably well above average in income for the Bahamas.You are not more deeply implicit in FTX's sins, by way of the FTX FF connection, than the plumber who worked directly on their building.I don't like the way that some people think about charity, like anybody who works in the philanthropic sector gives up the right to really own anything. You can try to do a little good in the world, or even sell a little good into the world, without signing up to be the martyr who gets all the blame for not being better when something goes wrong.You probably forwent some of your possible gains to work in the charity sector at all, and took a bit of a generally riskier job. (If you didn't know that, my condolences.) You may suddenly need to look for a new job. You did not sign away your legal or moral right to keep title to money that was already given you, if you've already performed the corresponding service, or you're still going to perform that service. If you can't perform the service anymore, then maybe return some of that money once it's clear that it'll actually make its way to FTX customers; but keep what covers the cost of a month to re-search for a job, or the month you already spent searching for that job.It's fine to call it your charity and your contribution that you undercharge for those utilons and don't capture as much value as you create - if you're nice enough to do that, which you don't have to be, you can be Lawful Neutral instead of Lawful Good and I won't think you're as cool but I'll still happily trade with you.(I apologize for resorting to the D&D alignment chart, at this point, but I really am not sure how to compactly express these ideas without those concepts, or concepts that I could define more elaborately that would mean the same thing.)That you're trying to be some degree of Good, by undercharging for the utilons you provide, doesn't mean you can't still hold that money that you got in Lawful Neutral exchange for the services you sold. Just like any ordinary plumber is entitled to do, if it turned out they unwittingly fixed the toilet of a bad bad pers...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: IMPCO, don't injure yourself by returning FTXFF money for services you already provided, published by EliezerYudkowsky on November 12, 2022 on The Effective Altruism Forum.In my possibly contrarian opinion, and speaking as somebody who I don't think actually got any money directly from FTX Future Fund that I can recall; also speaking for myself and hastily, having not run this post past any other major leaders in EA:You are not obligated to return funding that got to you ultimately by way of FTX; especially if it's been given for a service you already rendered, any more than the electrical utility ought to return FTX's money that's already been spent on electricity; especially if that would put you to hardship. This is not a for-the-greater-good argument; I don't think you're obligated to that much personal martyrdom in the first place, just like the utility company isn't so obligated.It's fine to be somebody who sells utilons for money, just like utilities sell electricity for money. People who work in the philanthropic sector, and don't capture all of the gain they create, do not thereby relinquish the solidity of their claim to the part of that gain they do capture, to below the levels of an electrical utility's claim to similar money. The money you hold could maybe possibly compensate some FTX users - if it doesn't just get immediately captured by users selling NFTs to Bahamian accounts, or the equivalent in later bankruptcy proceedings - but that's equally true of the electrical utility's money, or, heck, money held by any number of people richer than you. Plumbers who worked on the FTX building should likewise not anguish about that and give the money back; yes, even though plumbers are probably well above average in income for the Bahamas.You are not more deeply implicit in FTX's sins, by way of the FTX FF connection, than the plumber who worked directly on their building.I don't like the way that some people think about charity, like anybody who works in the philanthropic sector gives up the right to really own anything. You can try to do a little good in the world, or even sell a little good into the world, without signing up to be the martyr who gets all the blame for not being better when something goes wrong.You probably forwent some of your possible gains to work in the charity sector at all, and took a bit of a generally riskier job. (If you didn't know that, my condolences.) You may suddenly need to look for a new job. You did not sign away your legal or moral right to keep title to money that was already given you, if you've already performed the corresponding service, or you're still going to perform that service. If you can't perform the service anymore, then maybe return some of that money once it's clear that it'll actually make its way to FTX customers; but keep what covers the cost of a month to re-search for a job, or the month you already spent searching for that job.It's fine to call it your charity and your contribution that you undercharge for those utilons and don't capture as much value as you create - if you're nice enough to do that, which you don't have to be, you can be Lawful Neutral instead of Lawful Good and I won't think you're as cool but I'll still happily trade with you.(I apologize for resorting to the D&D alignment chart, at this point, but I really am not sure how to compactly express these ideas without those concepts, or concepts that I could define more elaborately that would mean the same thing.)That you're trying to be some degree of Good, by undercharging for the utilons you provide, doesn't mean you can't still hold that money that you got in Lawful Neutral exchange for the services you sold. Just like any ordinary plumber is entitled to do, if it turned out they unwittingly fixed the toilet of a bad bad pers...]]>
EliezerYudkowsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:37 None full 3719
WAA9jborjiXqczsTk_EA EA - Thoughts on FTX and returning to our ideals by michel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on FTX and returning to our ideals, published by michel on November 11, 2022 on The Effective Altruism Forum.I can’t work today. I knew I wouldn’t be able to work as soon as I woke up. The ordinary tasks I’d dedicate myself to feel so distant from the shitshow unfolding as I sit at my desk.The news of FTXs unraveling hit me hard. Most immediately, I feel for the people who entrusted their life savings to an organization that didn’t deserve their trust.It’s easy to extrapolate from the twitter updates I scroll past of people’s net worth halving in hours, of people frantically trying to withdraw money they’re not sure exists anymore: people’s lives just got ruined. Families will struggle to send their kids to college. Young adults will need to take on new jobs to stay afloat. Poor parents who looked at FTX for a way out will lose their trust and so much more in a system that continues to fail them. I hope those responsible for gambling money that wasn’t theirs to gamble are held responsible, and I dearly hope FTX can find a way to repay the people who trusted them.I grieve for those who trusted in FTX, and that includes people in the effective altruism community. We’re not the victims – we benefited incredibly from Sam Bankman–Fried and others at FTX over the past years. But, to my knowledge, we had no idea that what appears to be fraud at this level was a possibility when we signed onto a billionaire’s backing. Money changes the world, and I don’t hate us for getting it. Up until this week, I had a very favorable impression of Sam Bankman-Fried. I saw him as an altruist who encapsulated what it meant to think big. No longer; doing good means acting with integrity. This feels like the moment that you learn that a childhood hero of yours might be no hero after all.A lot just changed. Projects from people in the effective altruism community that I think would have genuinely improved the world, like pandemic preparedness initiatives and charity startups, may be delayed for years – or never arrive at all. Our community's entire trust networks, emphasis on ambition, and expectation that standout ideas need not be held back my insufficient funds feel as if they’re beginning to shake, and it’s not clear how much they’ll withstand over the coming months. At a personal level, too, those grant applications I’ve been waiting on to fund my past months of independent work seem awfully precarious. I know I’ll be fine, but others won’t be, including the people alive today and far beyond tomorrow who aspiring effective altruists are trying to help.That’s something that weighs on me: while so much feels like it’s changed, the problems in this world haven’t. People will still die from preventable diseases today; we’ll still experience a background noise of nuclear armageddon threats tomorrow; and emerging technologies coming in the next decades could still pose a threat to all sentient life.It’s in this context – a community that implicitly trusted FTX in their efforts to do good being shaken up and the world’s problems staying awfully still – that I’m drawn back to effective altruism’s core project: trying to improve the world as much as we can with the resources that we have.Amidst everything shaking right now, I notice my personal commitment to effective altruism's ideals standing still. And I can picture my friends' commitments to effective altruism's ideals standing steadfast too. Over the past two years, engaging with this community has introduced me to some of the most incredible, kind-hearted people I know. I haven’t talked to many of them as I’ve tried to gather my thoughts over the past day, but I bet they too are still committed to creating a world better than the one they were born into.Absolutely, we’ll all need to reassess how we bring our altruistic proj...]]>
michel https://forum.effectivealtruism.org/posts/WAA9jborjiXqczsTk/thoughts-on-ftx-and-returning-to-our-ideals Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on FTX and returning to our ideals, published by michel on November 11, 2022 on The Effective Altruism Forum.I can’t work today. I knew I wouldn’t be able to work as soon as I woke up. The ordinary tasks I’d dedicate myself to feel so distant from the shitshow unfolding as I sit at my desk.The news of FTXs unraveling hit me hard. Most immediately, I feel for the people who entrusted their life savings to an organization that didn’t deserve their trust.It’s easy to extrapolate from the twitter updates I scroll past of people’s net worth halving in hours, of people frantically trying to withdraw money they’re not sure exists anymore: people’s lives just got ruined. Families will struggle to send their kids to college. Young adults will need to take on new jobs to stay afloat. Poor parents who looked at FTX for a way out will lose their trust and so much more in a system that continues to fail them. I hope those responsible for gambling money that wasn’t theirs to gamble are held responsible, and I dearly hope FTX can find a way to repay the people who trusted them.I grieve for those who trusted in FTX, and that includes people in the effective altruism community. We’re not the victims – we benefited incredibly from Sam Bankman–Fried and others at FTX over the past years. But, to my knowledge, we had no idea that what appears to be fraud at this level was a possibility when we signed onto a billionaire’s backing. Money changes the world, and I don’t hate us for getting it. Up until this week, I had a very favorable impression of Sam Bankman-Fried. I saw him as an altruist who encapsulated what it meant to think big. No longer; doing good means acting with integrity. This feels like the moment that you learn that a childhood hero of yours might be no hero after all.A lot just changed. Projects from people in the effective altruism community that I think would have genuinely improved the world, like pandemic preparedness initiatives and charity startups, may be delayed for years – or never arrive at all. Our community's entire trust networks, emphasis on ambition, and expectation that standout ideas need not be held back my insufficient funds feel as if they’re beginning to shake, and it’s not clear how much they’ll withstand over the coming months. At a personal level, too, those grant applications I’ve been waiting on to fund my past months of independent work seem awfully precarious. I know I’ll be fine, but others won’t be, including the people alive today and far beyond tomorrow who aspiring effective altruists are trying to help.That’s something that weighs on me: while so much feels like it’s changed, the problems in this world haven’t. People will still die from preventable diseases today; we’ll still experience a background noise of nuclear armageddon threats tomorrow; and emerging technologies coming in the next decades could still pose a threat to all sentient life.It’s in this context – a community that implicitly trusted FTX in their efforts to do good being shaken up and the world’s problems staying awfully still – that I’m drawn back to effective altruism’s core project: trying to improve the world as much as we can with the resources that we have.Amidst everything shaking right now, I notice my personal commitment to effective altruism's ideals standing still. And I can picture my friends' commitments to effective altruism's ideals standing steadfast too. Over the past two years, engaging with this community has introduced me to some of the most incredible, kind-hearted people I know. I haven’t talked to many of them as I’ve tried to gather my thoughts over the past day, but I bet they too are still committed to creating a world better than the one they were born into.Absolutely, we’ll all need to reassess how we bring our altruistic proj...]]>
Fri, 11 Nov 2022 21:39:59 +0000 EA - Thoughts on FTX and returning to our ideals by michel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on FTX and returning to our ideals, published by michel on November 11, 2022 on The Effective Altruism Forum.I can’t work today. I knew I wouldn’t be able to work as soon as I woke up. The ordinary tasks I’d dedicate myself to feel so distant from the shitshow unfolding as I sit at my desk.The news of FTXs unraveling hit me hard. Most immediately, I feel for the people who entrusted their life savings to an organization that didn’t deserve their trust.It’s easy to extrapolate from the twitter updates I scroll past of people’s net worth halving in hours, of people frantically trying to withdraw money they’re not sure exists anymore: people’s lives just got ruined. Families will struggle to send their kids to college. Young adults will need to take on new jobs to stay afloat. Poor parents who looked at FTX for a way out will lose their trust and so much more in a system that continues to fail them. I hope those responsible for gambling money that wasn’t theirs to gamble are held responsible, and I dearly hope FTX can find a way to repay the people who trusted them.I grieve for those who trusted in FTX, and that includes people in the effective altruism community. We’re not the victims – we benefited incredibly from Sam Bankman–Fried and others at FTX over the past years. But, to my knowledge, we had no idea that what appears to be fraud at this level was a possibility when we signed onto a billionaire’s backing. Money changes the world, and I don’t hate us for getting it. Up until this week, I had a very favorable impression of Sam Bankman-Fried. I saw him as an altruist who encapsulated what it meant to think big. No longer; doing good means acting with integrity. This feels like the moment that you learn that a childhood hero of yours might be no hero after all.A lot just changed. Projects from people in the effective altruism community that I think would have genuinely improved the world, like pandemic preparedness initiatives and charity startups, may be delayed for years – or never arrive at all. Our community's entire trust networks, emphasis on ambition, and expectation that standout ideas need not be held back my insufficient funds feel as if they’re beginning to shake, and it’s not clear how much they’ll withstand over the coming months. At a personal level, too, those grant applications I’ve been waiting on to fund my past months of independent work seem awfully precarious. I know I’ll be fine, but others won’t be, including the people alive today and far beyond tomorrow who aspiring effective altruists are trying to help.That’s something that weighs on me: while so much feels like it’s changed, the problems in this world haven’t. People will still die from preventable diseases today; we’ll still experience a background noise of nuclear armageddon threats tomorrow; and emerging technologies coming in the next decades could still pose a threat to all sentient life.It’s in this context – a community that implicitly trusted FTX in their efforts to do good being shaken up and the world’s problems staying awfully still – that I’m drawn back to effective altruism’s core project: trying to improve the world as much as we can with the resources that we have.Amidst everything shaking right now, I notice my personal commitment to effective altruism's ideals standing still. And I can picture my friends' commitments to effective altruism's ideals standing steadfast too. Over the past two years, engaging with this community has introduced me to some of the most incredible, kind-hearted people I know. I haven’t talked to many of them as I’ve tried to gather my thoughts over the past day, but I bet they too are still committed to creating a world better than the one they were born into.Absolutely, we’ll all need to reassess how we bring our altruistic proj...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on FTX and returning to our ideals, published by michel on November 11, 2022 on The Effective Altruism Forum.I can’t work today. I knew I wouldn’t be able to work as soon as I woke up. The ordinary tasks I’d dedicate myself to feel so distant from the shitshow unfolding as I sit at my desk.The news of FTXs unraveling hit me hard. Most immediately, I feel for the people who entrusted their life savings to an organization that didn’t deserve their trust.It’s easy to extrapolate from the twitter updates I scroll past of people’s net worth halving in hours, of people frantically trying to withdraw money they’re not sure exists anymore: people’s lives just got ruined. Families will struggle to send their kids to college. Young adults will need to take on new jobs to stay afloat. Poor parents who looked at FTX for a way out will lose their trust and so much more in a system that continues to fail them. I hope those responsible for gambling money that wasn’t theirs to gamble are held responsible, and I dearly hope FTX can find a way to repay the people who trusted them.I grieve for those who trusted in FTX, and that includes people in the effective altruism community. We’re not the victims – we benefited incredibly from Sam Bankman–Fried and others at FTX over the past years. But, to my knowledge, we had no idea that what appears to be fraud at this level was a possibility when we signed onto a billionaire’s backing. Money changes the world, and I don’t hate us for getting it. Up until this week, I had a very favorable impression of Sam Bankman-Fried. I saw him as an altruist who encapsulated what it meant to think big. No longer; doing good means acting with integrity. This feels like the moment that you learn that a childhood hero of yours might be no hero after all.A lot just changed. Projects from people in the effective altruism community that I think would have genuinely improved the world, like pandemic preparedness initiatives and charity startups, may be delayed for years – or never arrive at all. Our community's entire trust networks, emphasis on ambition, and expectation that standout ideas need not be held back my insufficient funds feel as if they’re beginning to shake, and it’s not clear how much they’ll withstand over the coming months. At a personal level, too, those grant applications I’ve been waiting on to fund my past months of independent work seem awfully precarious. I know I’ll be fine, but others won’t be, including the people alive today and far beyond tomorrow who aspiring effective altruists are trying to help.That’s something that weighs on me: while so much feels like it’s changed, the problems in this world haven’t. People will still die from preventable diseases today; we’ll still experience a background noise of nuclear armageddon threats tomorrow; and emerging technologies coming in the next decades could still pose a threat to all sentient life.It’s in this context – a community that implicitly trusted FTX in their efforts to do good being shaken up and the world’s problems staying awfully still – that I’m drawn back to effective altruism’s core project: trying to improve the world as much as we can with the resources that we have.Amidst everything shaking right now, I notice my personal commitment to effective altruism's ideals standing still. And I can picture my friends' commitments to effective altruism's ideals standing steadfast too. Over the past two years, engaging with this community has introduced me to some of the most incredible, kind-hearted people I know. I haven’t talked to many of them as I’ve tried to gather my thoughts over the past day, but I bet they too are still committed to creating a world better than the one they were born into.Absolutely, we’ll all need to reassess how we bring our altruistic proj...]]>
michel https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:10 None full 3706
9YodZj6J6iv3xua4f_EA EA - Another FTX post: suggestions for change by SaraAzubuike Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Another FTX post: suggestions for change, published by SaraAzubuike on November 11, 2022 on The Effective Altruism Forum.I suggested that we would have trouble with FTX and funding around 4 months ago.SBF has been giving lots of money to EA. He admits it's a massively speculative bubble. Crypto crash hurts the most vulnerable, because poor uneducated people put lots of money into it (Krugman). Crypto is currently small, but should be regulated and has potential contagion effects (BIS). EA as a whole is getting loose with it's money due to large crypto flows (MacAskill). An inevitable crypto crash leads to either a) bad optics leading to less interest in EA or b) lots of dead projects.It was quite obvious that this would happen--although the specific details with Alameda were not obvious. Stuart Buck and Blonergan (and a few others) were the only one who took me seriously at the time.Below are some suggestions for change.1. The new button of "support" is great, but I think EA forum should have a way to sort by controversiality. And, have the EA forum algorithm occasionally (some ϵ% of the time), punt controversial posts back upwards to the front page. If you're like me, you read the forum sorted by Magic (New and Upvoted). But this promotes herd mentality. The red-teaming and self-criticism is excellent, but if the only way we aggregate how "good" is is by up-votes, that is flawed. Perhaps the best way to know that criticism has touched a nerve is to compute a fraction: how many members of the community disagree vs how many agree. (or, even better, if you are in an organization, use a weighted fraction, where you put lower weight on the people in the organization that are in positions of power (obviously difficult to implement in practice))2. More of you should consider anonymous posts. This is EA forum. I cannot believe that some of you delete your posts simply because it ends up being downvoted. Especially if you're working higher up in an EA org, you ought to be actively voicing your dissent and helping to monitor EA.For example, this is not good:"Members of the mutinous cohort told me that the movement’s leaders were not to be taken at their word—that they would say anything in public to maximize impact. Some of the paranoia—rumor-mill references to secret Google docs and ruthless clandestine councils—seemed overstated, but there was a core cadre that exercised control over public messaging; its members debated, for example, how to formulate their position that climate change was probably not as important as runaway A.I. without sounding like denialists or jerks." (New Yorker)What makes EA, EA, what makes EA antifragile, is its ruthless transparency. If we are self-censoring because we have already concluded something is super effective, then there is no point in EA. Go do your own thing with your own money. Become Bill Gates. But don't associate with EA.3. Finances should be partially anonymized. If an EA org receives some money above a certain threshold from an individual contribution, we should be transparent in saying that we will reject said money if it is not donated anonymously. You may protest that this would decrease the number of donations by rich billionaires. But take it this way: if they donate to EA, it's because they believe that EA can spend it better. Thus, they should be willing to donate anonymously, to not affect how EA spends money. If they don't donate to EA, then they can establish a different philanthropic organization and hire EA-adjacent staff, making for more competition. [Edit--see comments/revision]Revision:Blonergan took my previous post very seriously--apologies.Anonymizing finances may not be the best option. I am clearly naive about the legal implications. Perhaps other members of the community have suggestions about h...]]>
SaraAzubuike https://forum.effectivealtruism.org/posts/9YodZj6J6iv3xua4f/another-ftx-post-suggestions-for-change Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Another FTX post: suggestions for change, published by SaraAzubuike on November 11, 2022 on The Effective Altruism Forum.I suggested that we would have trouble with FTX and funding around 4 months ago.SBF has been giving lots of money to EA. He admits it's a massively speculative bubble. Crypto crash hurts the most vulnerable, because poor uneducated people put lots of money into it (Krugman). Crypto is currently small, but should be regulated and has potential contagion effects (BIS). EA as a whole is getting loose with it's money due to large crypto flows (MacAskill). An inevitable crypto crash leads to either a) bad optics leading to less interest in EA or b) lots of dead projects.It was quite obvious that this would happen--although the specific details with Alameda were not obvious. Stuart Buck and Blonergan (and a few others) were the only one who took me seriously at the time.Below are some suggestions for change.1. The new button of "support" is great, but I think EA forum should have a way to sort by controversiality. And, have the EA forum algorithm occasionally (some ϵ% of the time), punt controversial posts back upwards to the front page. If you're like me, you read the forum sorted by Magic (New and Upvoted). But this promotes herd mentality. The red-teaming and self-criticism is excellent, but if the only way we aggregate how "good" is is by up-votes, that is flawed. Perhaps the best way to know that criticism has touched a nerve is to compute a fraction: how many members of the community disagree vs how many agree. (or, even better, if you are in an organization, use a weighted fraction, where you put lower weight on the people in the organization that are in positions of power (obviously difficult to implement in practice))2. More of you should consider anonymous posts. This is EA forum. I cannot believe that some of you delete your posts simply because it ends up being downvoted. Especially if you're working higher up in an EA org, you ought to be actively voicing your dissent and helping to monitor EA.For example, this is not good:"Members of the mutinous cohort told me that the movement’s leaders were not to be taken at their word—that they would say anything in public to maximize impact. Some of the paranoia—rumor-mill references to secret Google docs and ruthless clandestine councils—seemed overstated, but there was a core cadre that exercised control over public messaging; its members debated, for example, how to formulate their position that climate change was probably not as important as runaway A.I. without sounding like denialists or jerks." (New Yorker)What makes EA, EA, what makes EA antifragile, is its ruthless transparency. If we are self-censoring because we have already concluded something is super effective, then there is no point in EA. Go do your own thing with your own money. Become Bill Gates. But don't associate with EA.3. Finances should be partially anonymized. If an EA org receives some money above a certain threshold from an individual contribution, we should be transparent in saying that we will reject said money if it is not donated anonymously. You may protest that this would decrease the number of donations by rich billionaires. But take it this way: if they donate to EA, it's because they believe that EA can spend it better. Thus, they should be willing to donate anonymously, to not affect how EA spends money. If they don't donate to EA, then they can establish a different philanthropic organization and hire EA-adjacent staff, making for more competition. [Edit--see comments/revision]Revision:Blonergan took my previous post very seriously--apologies.Anonymizing finances may not be the best option. I am clearly naive about the legal implications. Perhaps other members of the community have suggestions about h...]]>
Fri, 11 Nov 2022 21:25:08 +0000 EA - Another FTX post: suggestions for change by SaraAzubuike Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Another FTX post: suggestions for change, published by SaraAzubuike on November 11, 2022 on The Effective Altruism Forum.I suggested that we would have trouble with FTX and funding around 4 months ago.SBF has been giving lots of money to EA. He admits it's a massively speculative bubble. Crypto crash hurts the most vulnerable, because poor uneducated people put lots of money into it (Krugman). Crypto is currently small, but should be regulated and has potential contagion effects (BIS). EA as a whole is getting loose with it's money due to large crypto flows (MacAskill). An inevitable crypto crash leads to either a) bad optics leading to less interest in EA or b) lots of dead projects.It was quite obvious that this would happen--although the specific details with Alameda were not obvious. Stuart Buck and Blonergan (and a few others) were the only one who took me seriously at the time.Below are some suggestions for change.1. The new button of "support" is great, but I think EA forum should have a way to sort by controversiality. And, have the EA forum algorithm occasionally (some ϵ% of the time), punt controversial posts back upwards to the front page. If you're like me, you read the forum sorted by Magic (New and Upvoted). But this promotes herd mentality. The red-teaming and self-criticism is excellent, but if the only way we aggregate how "good" is is by up-votes, that is flawed. Perhaps the best way to know that criticism has touched a nerve is to compute a fraction: how many members of the community disagree vs how many agree. (or, even better, if you are in an organization, use a weighted fraction, where you put lower weight on the people in the organization that are in positions of power (obviously difficult to implement in practice))2. More of you should consider anonymous posts. This is EA forum. I cannot believe that some of you delete your posts simply because it ends up being downvoted. Especially if you're working higher up in an EA org, you ought to be actively voicing your dissent and helping to monitor EA.For example, this is not good:"Members of the mutinous cohort told me that the movement’s leaders were not to be taken at their word—that they would say anything in public to maximize impact. Some of the paranoia—rumor-mill references to secret Google docs and ruthless clandestine councils—seemed overstated, but there was a core cadre that exercised control over public messaging; its members debated, for example, how to formulate their position that climate change was probably not as important as runaway A.I. without sounding like denialists or jerks." (New Yorker)What makes EA, EA, what makes EA antifragile, is its ruthless transparency. If we are self-censoring because we have already concluded something is super effective, then there is no point in EA. Go do your own thing with your own money. Become Bill Gates. But don't associate with EA.3. Finances should be partially anonymized. If an EA org receives some money above a certain threshold from an individual contribution, we should be transparent in saying that we will reject said money if it is not donated anonymously. You may protest that this would decrease the number of donations by rich billionaires. But take it this way: if they donate to EA, it's because they believe that EA can spend it better. Thus, they should be willing to donate anonymously, to not affect how EA spends money. If they don't donate to EA, then they can establish a different philanthropic organization and hire EA-adjacent staff, making for more competition. [Edit--see comments/revision]Revision:Blonergan took my previous post very seriously--apologies.Anonymizing finances may not be the best option. I am clearly naive about the legal implications. Perhaps other members of the community have suggestions about h...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Another FTX post: suggestions for change, published by SaraAzubuike on November 11, 2022 on The Effective Altruism Forum.I suggested that we would have trouble with FTX and funding around 4 months ago.SBF has been giving lots of money to EA. He admits it's a massively speculative bubble. Crypto crash hurts the most vulnerable, because poor uneducated people put lots of money into it (Krugman). Crypto is currently small, but should be regulated and has potential contagion effects (BIS). EA as a whole is getting loose with it's money due to large crypto flows (MacAskill). An inevitable crypto crash leads to either a) bad optics leading to less interest in EA or b) lots of dead projects.It was quite obvious that this would happen--although the specific details with Alameda were not obvious. Stuart Buck and Blonergan (and a few others) were the only one who took me seriously at the time.Below are some suggestions for change.1. The new button of "support" is great, but I think EA forum should have a way to sort by controversiality. And, have the EA forum algorithm occasionally (some ϵ% of the time), punt controversial posts back upwards to the front page. If you're like me, you read the forum sorted by Magic (New and Upvoted). But this promotes herd mentality. The red-teaming and self-criticism is excellent, but if the only way we aggregate how "good" is is by up-votes, that is flawed. Perhaps the best way to know that criticism has touched a nerve is to compute a fraction: how many members of the community disagree vs how many agree. (or, even better, if you are in an organization, use a weighted fraction, where you put lower weight on the people in the organization that are in positions of power (obviously difficult to implement in practice))2. More of you should consider anonymous posts. This is EA forum. I cannot believe that some of you delete your posts simply because it ends up being downvoted. Especially if you're working higher up in an EA org, you ought to be actively voicing your dissent and helping to monitor EA.For example, this is not good:"Members of the mutinous cohort told me that the movement’s leaders were not to be taken at their word—that they would say anything in public to maximize impact. Some of the paranoia—rumor-mill references to secret Google docs and ruthless clandestine councils—seemed overstated, but there was a core cadre that exercised control over public messaging; its members debated, for example, how to formulate their position that climate change was probably not as important as runaway A.I. without sounding like denialists or jerks." (New Yorker)What makes EA, EA, what makes EA antifragile, is its ruthless transparency. If we are self-censoring because we have already concluded something is super effective, then there is no point in EA. Go do your own thing with your own money. Become Bill Gates. But don't associate with EA.3. Finances should be partially anonymized. If an EA org receives some money above a certain threshold from an individual contribution, we should be transparent in saying that we will reject said money if it is not donated anonymously. You may protest that this would decrease the number of donations by rich billionaires. But take it this way: if they donate to EA, it's because they believe that EA can spend it better. Thus, they should be willing to donate anonymously, to not affect how EA spends money. If they don't donate to EA, then they can establish a different philanthropic organization and hire EA-adjacent staff, making for more competition. [Edit--see comments/revision]Revision:Blonergan took my previous post very seriously--apologies.Anonymizing finances may not be the best option. I am clearly naive about the legal implications. Perhaps other members of the community have suggestions about h...]]>
SaraAzubuike https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:07 None full 3708
CcNaQmrtdeC9PPixK_EA EA - Under what conditions should FTX grantees voluntarily return their grants? by sawyer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Under what conditions should FTX grantees voluntarily return their grants?, published by sawyer on November 11, 2022 on The Effective Altruism Forum.There is a possibility of clawbacks, in which case orgs could be legally obligated to return funds. But in cases in which we're not legally required to return a grant, could it still be morally good to do so?I think it could be useful to discuss this before even the basic details of the case have been settled, since there will be a high potential for motivated reasoning in favor of keeping the grants.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
sawyer https://forum.effectivealtruism.org/posts/CcNaQmrtdeC9PPixK/under-what-conditions-should-ftx-grantees-voluntarily-return Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Under what conditions should FTX grantees voluntarily return their grants?, published by sawyer on November 11, 2022 on The Effective Altruism Forum.There is a possibility of clawbacks, in which case orgs could be legally obligated to return funds. But in cases in which we're not legally required to return a grant, could it still be morally good to do so?I think it could be useful to discuss this before even the basic details of the case have been settled, since there will be a high potential for motivated reasoning in favor of keeping the grants.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 11 Nov 2022 20:28:24 +0000 EA - Under what conditions should FTX grantees voluntarily return their grants? by sawyer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Under what conditions should FTX grantees voluntarily return their grants?, published by sawyer on November 11, 2022 on The Effective Altruism Forum.There is a possibility of clawbacks, in which case orgs could be legally obligated to return funds. But in cases in which we're not legally required to return a grant, could it still be morally good to do so?I think it could be useful to discuss this before even the basic details of the case have been settled, since there will be a high potential for motivated reasoning in favor of keeping the grants.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Under what conditions should FTX grantees voluntarily return their grants?, published by sawyer on November 11, 2022 on The Effective Altruism Forum.There is a possibility of clawbacks, in which case orgs could be legally obligated to return funds. But in cases in which we're not legally required to return a grant, could it still be morally good to do so?I think it could be useful to discuss this before even the basic details of the case have been settled, since there will be a high potential for motivated reasoning in favor of keeping the grants.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
sawyer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:49 None full 3709
ypLeyBAs8yWxRkaNk_EA EA - EA Images by Bob Jacobs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Images, published by Bob Jacobs on November 11, 2022 on The Effective Altruism Forum.A while back there was a contest on this forum to design a flag for utilitarianism. Since getting money as an artist is very difficult, I entered hoping to win the prize money. However, after I submitted my design, the organizer changed the rules making my design retroactively ineligible. The organizer later deleted the posts, which not only means that I can no longer get the prize money, but also that my work is no longer visible on the site. Therefore, I decided to make this post to showcase not only these flag designs, but also some other works I made for EA but hadn't posted on the forum before.Flag of utilitarianism:Yellow stands for happiness, that which utilitarianism pursuesWhite stands for morality, that which utilitarianism isThe symbol is a sigma, since utilitarians care about the sum of all utilityThe symbol is also an hourglass, since utilitarians care about the (longterm) future consequencesIf you don't like the rounded design I also have a more angular design:Logo EA Brussels:It displays the Atomium, a famous building in Brussels:Logo EA Ghent:It incorporates elements from the logo of the University of Ghent (where I organize my group):And here is a banner I made for the facebook group (In Dutch you write "Gent"):I also made a bunch of banners and thumbnails for sequences on this site (although a lot of them are uncredited) and the images for the discussion norms.Lastly I made a symbol for slack/scout-mindset and moloch/soldier-mindset:There is a balance between Moloch (which I think of as the forces of exploitation) and Slack (which I think of as the forces of exploration).Scott Alexander writes:"Think of slack as a paradox – the Taoist art of winning competitions by not trying too hard at them. Moloch and Slack are opposites and complements, like yin and yang. Neither is stronger than the other, but their interplay creates the ten thousand things."Here is a Taijitu of Moloch and Slack as created by the inadequate equilibria and the hillock:You can find a higher resolution image and a svg-file of this symbol hereIf you want a graphic design for your EA projects, feel free to message me.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Bob Jacobs https://forum.effectivealtruism.org/posts/ypLeyBAs8yWxRkaNk/ea-images Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Images, published by Bob Jacobs on November 11, 2022 on The Effective Altruism Forum.A while back there was a contest on this forum to design a flag for utilitarianism. Since getting money as an artist is very difficult, I entered hoping to win the prize money. However, after I submitted my design, the organizer changed the rules making my design retroactively ineligible. The organizer later deleted the posts, which not only means that I can no longer get the prize money, but also that my work is no longer visible on the site. Therefore, I decided to make this post to showcase not only these flag designs, but also some other works I made for EA but hadn't posted on the forum before.Flag of utilitarianism:Yellow stands for happiness, that which utilitarianism pursuesWhite stands for morality, that which utilitarianism isThe symbol is a sigma, since utilitarians care about the sum of all utilityThe symbol is also an hourglass, since utilitarians care about the (longterm) future consequencesIf you don't like the rounded design I also have a more angular design:Logo EA Brussels:It displays the Atomium, a famous building in Brussels:Logo EA Ghent:It incorporates elements from the logo of the University of Ghent (where I organize my group):And here is a banner I made for the facebook group (In Dutch you write "Gent"):I also made a bunch of banners and thumbnails for sequences on this site (although a lot of them are uncredited) and the images for the discussion norms.Lastly I made a symbol for slack/scout-mindset and moloch/soldier-mindset:There is a balance between Moloch (which I think of as the forces of exploitation) and Slack (which I think of as the forces of exploration).Scott Alexander writes:"Think of slack as a paradox – the Taoist art of winning competitions by not trying too hard at them. Moloch and Slack are opposites and complements, like yin and yang. Neither is stronger than the other, but their interplay creates the ten thousand things."Here is a Taijitu of Moloch and Slack as created by the inadequate equilibria and the hillock:You can find a higher resolution image and a svg-file of this symbol hereIf you want a graphic design for your EA projects, feel free to message me.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 11 Nov 2022 20:17:00 +0000 EA - EA Images by Bob Jacobs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Images, published by Bob Jacobs on November 11, 2022 on The Effective Altruism Forum.A while back there was a contest on this forum to design a flag for utilitarianism. Since getting money as an artist is very difficult, I entered hoping to win the prize money. However, after I submitted my design, the organizer changed the rules making my design retroactively ineligible. The organizer later deleted the posts, which not only means that I can no longer get the prize money, but also that my work is no longer visible on the site. Therefore, I decided to make this post to showcase not only these flag designs, but also some other works I made for EA but hadn't posted on the forum before.Flag of utilitarianism:Yellow stands for happiness, that which utilitarianism pursuesWhite stands for morality, that which utilitarianism isThe symbol is a sigma, since utilitarians care about the sum of all utilityThe symbol is also an hourglass, since utilitarians care about the (longterm) future consequencesIf you don't like the rounded design I also have a more angular design:Logo EA Brussels:It displays the Atomium, a famous building in Brussels:Logo EA Ghent:It incorporates elements from the logo of the University of Ghent (where I organize my group):And here is a banner I made for the facebook group (In Dutch you write "Gent"):I also made a bunch of banners and thumbnails for sequences on this site (although a lot of them are uncredited) and the images for the discussion norms.Lastly I made a symbol for slack/scout-mindset and moloch/soldier-mindset:There is a balance between Moloch (which I think of as the forces of exploitation) and Slack (which I think of as the forces of exploration).Scott Alexander writes:"Think of slack as a paradox – the Taoist art of winning competitions by not trying too hard at them. Moloch and Slack are opposites and complements, like yin and yang. Neither is stronger than the other, but their interplay creates the ten thousand things."Here is a Taijitu of Moloch and Slack as created by the inadequate equilibria and the hillock:You can find a higher resolution image and a svg-file of this symbol hereIf you want a graphic design for your EA projects, feel free to message me.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Images, published by Bob Jacobs on November 11, 2022 on The Effective Altruism Forum.A while back there was a contest on this forum to design a flag for utilitarianism. Since getting money as an artist is very difficult, I entered hoping to win the prize money. However, after I submitted my design, the organizer changed the rules making my design retroactively ineligible. The organizer later deleted the posts, which not only means that I can no longer get the prize money, but also that my work is no longer visible on the site. Therefore, I decided to make this post to showcase not only these flag designs, but also some other works I made for EA but hadn't posted on the forum before.Flag of utilitarianism:Yellow stands for happiness, that which utilitarianism pursuesWhite stands for morality, that which utilitarianism isThe symbol is a sigma, since utilitarians care about the sum of all utilityThe symbol is also an hourglass, since utilitarians care about the (longterm) future consequencesIf you don't like the rounded design I also have a more angular design:Logo EA Brussels:It displays the Atomium, a famous building in Brussels:Logo EA Ghent:It incorporates elements from the logo of the University of Ghent (where I organize my group):And here is a banner I made for the facebook group (In Dutch you write "Gent"):I also made a bunch of banners and thumbnails for sequences on this site (although a lot of them are uncredited) and the images for the discussion norms.Lastly I made a symbol for slack/scout-mindset and moloch/soldier-mindset:There is a balance between Moloch (which I think of as the forces of exploitation) and Slack (which I think of as the forces of exploration).Scott Alexander writes:"Think of slack as a paradox – the Taoist art of winning competitions by not trying too hard at them. Moloch and Slack are opposites and complements, like yin and yang. Neither is stronger than the other, but their interplay creates the ten thousand things."Here is a Taijitu of Moloch and Slack as created by the inadequate equilibria and the hillock:You can find a higher resolution image and a svg-file of this symbol hereIf you want a graphic design for your EA projects, feel free to message me.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Bob Jacobs https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:23 None full 3710
txGgPvKgZFpphdkHe_EA EA - My reaction to FTX: appalled by Robert Wiblin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My reaction to FTX: appalled, published by Robert Wiblin on November 11, 2022 on The Effective Altruism Forum.I thought I would repost this thread I wrote for Twitter.I've been waiting for the Future Fund people to have their say, and they have all resigned ().So now you can hear what I think.I am appalled.If media reports of what happened are at all accurate, what at least two people high up at FTX and Alameda have done here is inexcusable (e.g.).Making risky trades with depositors’ funds without telling them is grossly immoral.(I'm gripped reading the news and Twitter like everyone else and this is all based on my reading between the lines of e.g.:,,,I also speak only for myself here.)Probably some story will come out about why they felt they had no choice, but one always has a choice to act with integrity or not to.One or more leaders at FTX have betrayed the trust of everyone who was counting on them.Most importantly FTX's depositors, who didn't stand to gain on the upside but were unwittingly exposed to a massive downside and may lose savings they and their families were relying on.FTX leaders also betrayed investors, staff, collaborators, and the groups working to reduce suffering and the risk of future tragedies that they committed to help.No plausible ethics permits one to lose money trading then take other people's money to make yet more risky bets in the hope that doing so will help you make it back.That basic story has blown up banks and destroyed lives many times through history.Good leaders resist the temptation to double down, and instead eat their losses up front.In his tweets Sam claims that he's working to get depositors paid back as much as possible.I hope that is his only focus and that it's possible to compensate the most vulnerable FTX depositors to the greatest extent.To people who have quit jobs or made life plans assuming that FTX wouldn't implode overnight, my heart goes out to you. This situation is fucked, not your fault and foreseen by almost no one.To those who quit their jobs hoping to work to reduce suffering and catastrophic risks using funds that have now evaporated: I hope that other donors can temporarily fill the gap and smooth the path to a new equilibrium level of funding for pandemic prevention, etc.I feel it's clear mistakes have been made. We were too quick to trust folks who hadn't proven they deserved that level of confidence.One always wants to believe the best about others. In life I've mostly found people to be good and kind, sometimes to an astonishing degree.Hindsight is 20/20 and this week's events have been frankly insane.But I will be less trusting of people with huge responsibilities going forward, maybe just less trusting across the board.Mass destruction of trust is exactly what results from this kind of wrong-doing.Some people are saying this is no surprise, as all of crypto was a Ponzi scheme from the start.I'm pretty skeptical of crypto having many productive applications, but there's a big dif between investing in good faith in a speculative unproven technology, and having your assets misappropriated from you.The first (foolish or not) is business. The second is illegal.I'll have more to say, maybe after I calm down, maybe not.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Robert Wiblin https://forum.effectivealtruism.org/posts/txGgPvKgZFpphdkHe/my-reaction-to-ftx-appalled Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My reaction to FTX: appalled, published by Robert Wiblin on November 11, 2022 on The Effective Altruism Forum.I thought I would repost this thread I wrote for Twitter.I've been waiting for the Future Fund people to have their say, and they have all resigned ().So now you can hear what I think.I am appalled.If media reports of what happened are at all accurate, what at least two people high up at FTX and Alameda have done here is inexcusable (e.g.).Making risky trades with depositors’ funds without telling them is grossly immoral.(I'm gripped reading the news and Twitter like everyone else and this is all based on my reading between the lines of e.g.:,,,I also speak only for myself here.)Probably some story will come out about why they felt they had no choice, but one always has a choice to act with integrity or not to.One or more leaders at FTX have betrayed the trust of everyone who was counting on them.Most importantly FTX's depositors, who didn't stand to gain on the upside but were unwittingly exposed to a massive downside and may lose savings they and their families were relying on.FTX leaders also betrayed investors, staff, collaborators, and the groups working to reduce suffering and the risk of future tragedies that they committed to help.No plausible ethics permits one to lose money trading then take other people's money to make yet more risky bets in the hope that doing so will help you make it back.That basic story has blown up banks and destroyed lives many times through history.Good leaders resist the temptation to double down, and instead eat their losses up front.In his tweets Sam claims that he's working to get depositors paid back as much as possible.I hope that is his only focus and that it's possible to compensate the most vulnerable FTX depositors to the greatest extent.To people who have quit jobs or made life plans assuming that FTX wouldn't implode overnight, my heart goes out to you. This situation is fucked, not your fault and foreseen by almost no one.To those who quit their jobs hoping to work to reduce suffering and catastrophic risks using funds that have now evaporated: I hope that other donors can temporarily fill the gap and smooth the path to a new equilibrium level of funding for pandemic prevention, etc.I feel it's clear mistakes have been made. We were too quick to trust folks who hadn't proven they deserved that level of confidence.One always wants to believe the best about others. In life I've mostly found people to be good and kind, sometimes to an astonishing degree.Hindsight is 20/20 and this week's events have been frankly insane.But I will be less trusting of people with huge responsibilities going forward, maybe just less trusting across the board.Mass destruction of trust is exactly what results from this kind of wrong-doing.Some people are saying this is no surprise, as all of crypto was a Ponzi scheme from the start.I'm pretty skeptical of crypto having many productive applications, but there's a big dif between investing in good faith in a speculative unproven technology, and having your assets misappropriated from you.The first (foolish or not) is business. The second is illegal.I'll have more to say, maybe after I calm down, maybe not.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 11 Nov 2022 18:17:47 +0000 EA - My reaction to FTX: appalled by Robert Wiblin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My reaction to FTX: appalled, published by Robert Wiblin on November 11, 2022 on The Effective Altruism Forum.I thought I would repost this thread I wrote for Twitter.I've been waiting for the Future Fund people to have their say, and they have all resigned ().So now you can hear what I think.I am appalled.If media reports of what happened are at all accurate, what at least two people high up at FTX and Alameda have done here is inexcusable (e.g.).Making risky trades with depositors’ funds without telling them is grossly immoral.(I'm gripped reading the news and Twitter like everyone else and this is all based on my reading between the lines of e.g.:,,,I also speak only for myself here.)Probably some story will come out about why they felt they had no choice, but one always has a choice to act with integrity or not to.One or more leaders at FTX have betrayed the trust of everyone who was counting on them.Most importantly FTX's depositors, who didn't stand to gain on the upside but were unwittingly exposed to a massive downside and may lose savings they and their families were relying on.FTX leaders also betrayed investors, staff, collaborators, and the groups working to reduce suffering and the risk of future tragedies that they committed to help.No plausible ethics permits one to lose money trading then take other people's money to make yet more risky bets in the hope that doing so will help you make it back.That basic story has blown up banks and destroyed lives many times through history.Good leaders resist the temptation to double down, and instead eat their losses up front.In his tweets Sam claims that he's working to get depositors paid back as much as possible.I hope that is his only focus and that it's possible to compensate the most vulnerable FTX depositors to the greatest extent.To people who have quit jobs or made life plans assuming that FTX wouldn't implode overnight, my heart goes out to you. This situation is fucked, not your fault and foreseen by almost no one.To those who quit their jobs hoping to work to reduce suffering and catastrophic risks using funds that have now evaporated: I hope that other donors can temporarily fill the gap and smooth the path to a new equilibrium level of funding for pandemic prevention, etc.I feel it's clear mistakes have been made. We were too quick to trust folks who hadn't proven they deserved that level of confidence.One always wants to believe the best about others. In life I've mostly found people to be good and kind, sometimes to an astonishing degree.Hindsight is 20/20 and this week's events have been frankly insane.But I will be less trusting of people with huge responsibilities going forward, maybe just less trusting across the board.Mass destruction of trust is exactly what results from this kind of wrong-doing.Some people are saying this is no surprise, as all of crypto was a Ponzi scheme from the start.I'm pretty skeptical of crypto having many productive applications, but there's a big dif between investing in good faith in a speculative unproven technology, and having your assets misappropriated from you.The first (foolish or not) is business. The second is illegal.I'll have more to say, maybe after I calm down, maybe not.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My reaction to FTX: appalled, published by Robert Wiblin on November 11, 2022 on The Effective Altruism Forum.I thought I would repost this thread I wrote for Twitter.I've been waiting for the Future Fund people to have their say, and they have all resigned ().So now you can hear what I think.I am appalled.If media reports of what happened are at all accurate, what at least two people high up at FTX and Alameda have done here is inexcusable (e.g.).Making risky trades with depositors’ funds without telling them is grossly immoral.(I'm gripped reading the news and Twitter like everyone else and this is all based on my reading between the lines of e.g.:,,,I also speak only for myself here.)Probably some story will come out about why they felt they had no choice, but one always has a choice to act with integrity or not to.One or more leaders at FTX have betrayed the trust of everyone who was counting on them.Most importantly FTX's depositors, who didn't stand to gain on the upside but were unwittingly exposed to a massive downside and may lose savings they and their families were relying on.FTX leaders also betrayed investors, staff, collaborators, and the groups working to reduce suffering and the risk of future tragedies that they committed to help.No plausible ethics permits one to lose money trading then take other people's money to make yet more risky bets in the hope that doing so will help you make it back.That basic story has blown up banks and destroyed lives many times through history.Good leaders resist the temptation to double down, and instead eat their losses up front.In his tweets Sam claims that he's working to get depositors paid back as much as possible.I hope that is his only focus and that it's possible to compensate the most vulnerable FTX depositors to the greatest extent.To people who have quit jobs or made life plans assuming that FTX wouldn't implode overnight, my heart goes out to you. This situation is fucked, not your fault and foreseen by almost no one.To those who quit their jobs hoping to work to reduce suffering and catastrophic risks using funds that have now evaporated: I hope that other donors can temporarily fill the gap and smooth the path to a new equilibrium level of funding for pandemic prevention, etc.I feel it's clear mistakes have been made. We were too quick to trust folks who hadn't proven they deserved that level of confidence.One always wants to believe the best about others. In life I've mostly found people to be good and kind, sometimes to an astonishing degree.Hindsight is 20/20 and this week's events have been frankly insane.But I will be less trusting of people with huge responsibilities going forward, maybe just less trusting across the board.Mass destruction of trust is exactly what results from this kind of wrong-doing.Some people are saying this is no surprise, as all of crypto was a Ponzi scheme from the start.I'm pretty skeptical of crypto having many productive applications, but there's a big dif between investing in good faith in a speculative unproven technology, and having your assets misappropriated from you.The first (foolish or not) is business. The second is illegal.I'll have more to say, maybe after I calm down, maybe not.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Robert Wiblin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:10 None full 3707
yMKmCbmL8ekDcJhQd_EA EA - If EA has overestimated its projected funding, which decisions must be revised? by strawberry calm Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If EA has overestimated its projected funding, which decisions must be revised?, published by strawberry calm on November 11, 2022 on The Effective Altruism Forum.FTX, a big source of EA funding, has imploded.There's mounting evidence that FTX was engaged in theft/fraud, which would be straightforwardly unethical.There's been a big drop in the funding that EA organisations expect to receive over the next few years.Because these organisations were acting under false information, they would've made (ex-post) wrong decisions, which they will now need to revise.Which revisions are most pressing?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
strawberry calm https://forum.effectivealtruism.org/posts/yMKmCbmL8ekDcJhQd/if-ea-has-overestimated-its-projected-funding-which Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If EA has overestimated its projected funding, which decisions must be revised?, published by strawberry calm on November 11, 2022 on The Effective Altruism Forum.FTX, a big source of EA funding, has imploded.There's mounting evidence that FTX was engaged in theft/fraud, which would be straightforwardly unethical.There's been a big drop in the funding that EA organisations expect to receive over the next few years.Because these organisations were acting under false information, they would've made (ex-post) wrong decisions, which they will now need to revise.Which revisions are most pressing?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 11 Nov 2022 14:18:38 +0000 EA - If EA has overestimated its projected funding, which decisions must be revised? by strawberry calm Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If EA has overestimated its projected funding, which decisions must be revised?, published by strawberry calm on November 11, 2022 on The Effective Altruism Forum.FTX, a big source of EA funding, has imploded.There's mounting evidence that FTX was engaged in theft/fraud, which would be straightforwardly unethical.There's been a big drop in the funding that EA organisations expect to receive over the next few years.Because these organisations were acting under false information, they would've made (ex-post) wrong decisions, which they will now need to revise.Which revisions are most pressing?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If EA has overestimated its projected funding, which decisions must be revised?, published by strawberry calm on November 11, 2022 on The Effective Altruism Forum.FTX, a big source of EA funding, has imploded.There's mounting evidence that FTX was engaged in theft/fraud, which would be straightforwardly unethical.There's been a big drop in the funding that EA organisations expect to receive over the next few years.Because these organisations were acting under false information, they would've made (ex-post) wrong decisions, which they will now need to revise.Which revisions are most pressing?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
strawberry calm https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:56 None full 3714
HBgAHoSSqpJ7xNYp7_EA EA - Resorting constantly to bets can be bad for optics, especially in the current situation by EdMathieu Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Resorting constantly to bets can be bad for optics, especially in the current situation, published by EdMathieu on November 11, 2022 on The Effective Altruism Forum.I currently don't have time to write much more than a quick post on this; apologies if this feels like it's not fleshed out enough.I've seen many comments online (especially here and on Twitter) in the last few days about betting on the evolving FTX situation, as a way to force people to express certainty around their statements.For example, person A says something really strong about what happened, or the way the situation is going to evolve; and person B comes in and says "that seems exaggerated, are you willing to bet against me on that?".While I understand that betting, predictions, etc. are things the EA community relies on for clearer thinking and good epistemics, I would urge everyone to consider the optics of trivially making financial bets in a situation where many people are facing a possible loss of their personal savings.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
EdMathieu https://forum.effectivealtruism.org/posts/HBgAHoSSqpJ7xNYp7/resorting-constantly-to-bets-can-be-bad-for-optics Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Resorting constantly to bets can be bad for optics, especially in the current situation, published by EdMathieu on November 11, 2022 on The Effective Altruism Forum.I currently don't have time to write much more than a quick post on this; apologies if this feels like it's not fleshed out enough.I've seen many comments online (especially here and on Twitter) in the last few days about betting on the evolving FTX situation, as a way to force people to express certainty around their statements.For example, person A says something really strong about what happened, or the way the situation is going to evolve; and person B comes in and says "that seems exaggerated, are you willing to bet against me on that?".While I understand that betting, predictions, etc. are things the EA community relies on for clearer thinking and good epistemics, I would urge everyone to consider the optics of trivially making financial bets in a situation where many people are facing a possible loss of their personal savings.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 11 Nov 2022 12:09:29 +0000 EA - Resorting constantly to bets can be bad for optics, especially in the current situation by EdMathieu Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Resorting constantly to bets can be bad for optics, especially in the current situation, published by EdMathieu on November 11, 2022 on The Effective Altruism Forum.I currently don't have time to write much more than a quick post on this; apologies if this feels like it's not fleshed out enough.I've seen many comments online (especially here and on Twitter) in the last few days about betting on the evolving FTX situation, as a way to force people to express certainty around their statements.For example, person A says something really strong about what happened, or the way the situation is going to evolve; and person B comes in and says "that seems exaggerated, are you willing to bet against me on that?".While I understand that betting, predictions, etc. are things the EA community relies on for clearer thinking and good epistemics, I would urge everyone to consider the optics of trivially making financial bets in a situation where many people are facing a possible loss of their personal savings.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Resorting constantly to bets can be bad for optics, especially in the current situation, published by EdMathieu on November 11, 2022 on The Effective Altruism Forum.I currently don't have time to write much more than a quick post on this; apologies if this feels like it's not fleshed out enough.I've seen many comments online (especially here and on Twitter) in the last few days about betting on the evolving FTX situation, as a way to force people to express certainty around their statements.For example, person A says something really strong about what happened, or the way the situation is going to evolve; and person B comes in and says "that seems exaggerated, are you willing to bet against me on that?".While I understand that betting, predictions, etc. are things the EA community relies on for clearer thinking and good epistemics, I would urge everyone to consider the optics of trivially making financial bets in a situation where many people are facing a possible loss of their personal savings.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
EdMathieu https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:13 None full 3711
ReexceojC5mwDQwzX_EA EA - For the mental health of those affected by the FTX crisis... by Daystar Eld Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: For the mental health of those affected by the FTX crisis..., published by Daystar Eld on November 11, 2022 on The Effective Altruism Forum.There's a thing that happens when we're in crisis that feels like the world is collapsing around us, and all hope for the future feels lost.Whether you worked for FTX, were relying on grant money from it, had savings held there that are now in question, or are just observing from the sidelines as friends and colleagues struggle with their new shattered reality, picking up the pieces can feel not just monumental, but impossible.This mantra came to me after I was in very bad situation over a decade ago. In that situation, not all parts of this mantra were true, and it legitimately seemed like my life might be over, or utterly derailed for the perpetual worse even if not.So the next time I faced crisis, I couldn't help but compare it to that one, and generalized to the other ways in which things weren't as bleak as they could have been. It has since helped me keep perspective and hope in lesser crises, and a number of my clients have reported being helped by it as well. So I thought I'd share it, in the hopes it helps you too:"I'm going to be okay. I'm not dead, in the hospital, or in jail. I've still got my health, my family, and my friends. I'm going to be okay."This isn't done to diminish the losses and pain many of you are feeling. There's no one general platitude that will cover what everyone has lost or how they've been affected by any crisis. For some, maybe parts of that mantra aren't true, for complex reasons. Maybe not everyone has the same social support networks, which can definitely make crises much worse. Or perhaps someone is going through a medical crisis, and they've lost the income or savings or insurance they were relying on for treatment. And maybe some have lost friends over this, or will.The point of the mantra is not to insist that you've lost nothing, or that life will be unchanged. Many have lost something, large or small, present or in the future, and it makes sense that it hurts. Many were hoping to do good with the opportunities and funding FTX provided, and those problems are still there in the world, and in need of solving.But the core of who you are is still here, still capable of enjoying life and doing good, and in time what feels like the end of everything will, with some hope for a regression to the mean, mostly just be a sad memory.Take an extra deep breath, now and then. Spend a bit more time in the sunlight, or with friends. Give yourself time to nurse the hurt and heal. It's an ongoing process, even as you start to pick up the pieces and reforge something new.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Daystar Eld https://forum.effectivealtruism.org/posts/ReexceojC5mwDQwzX/for-the-mental-health-of-those-affected-by-the-ftx-crisis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: For the mental health of those affected by the FTX crisis..., published by Daystar Eld on November 11, 2022 on The Effective Altruism Forum.There's a thing that happens when we're in crisis that feels like the world is collapsing around us, and all hope for the future feels lost.Whether you worked for FTX, were relying on grant money from it, had savings held there that are now in question, or are just observing from the sidelines as friends and colleagues struggle with their new shattered reality, picking up the pieces can feel not just monumental, but impossible.This mantra came to me after I was in very bad situation over a decade ago. In that situation, not all parts of this mantra were true, and it legitimately seemed like my life might be over, or utterly derailed for the perpetual worse even if not.So the next time I faced crisis, I couldn't help but compare it to that one, and generalized to the other ways in which things weren't as bleak as they could have been. It has since helped me keep perspective and hope in lesser crises, and a number of my clients have reported being helped by it as well. So I thought I'd share it, in the hopes it helps you too:"I'm going to be okay. I'm not dead, in the hospital, or in jail. I've still got my health, my family, and my friends. I'm going to be okay."This isn't done to diminish the losses and pain many of you are feeling. There's no one general platitude that will cover what everyone has lost or how they've been affected by any crisis. For some, maybe parts of that mantra aren't true, for complex reasons. Maybe not everyone has the same social support networks, which can definitely make crises much worse. Or perhaps someone is going through a medical crisis, and they've lost the income or savings or insurance they were relying on for treatment. And maybe some have lost friends over this, or will.The point of the mantra is not to insist that you've lost nothing, or that life will be unchanged. Many have lost something, large or small, present or in the future, and it makes sense that it hurts. Many were hoping to do good with the opportunities and funding FTX provided, and those problems are still there in the world, and in need of solving.But the core of who you are is still here, still capable of enjoying life and doing good, and in time what feels like the end of everything will, with some hope for a regression to the mean, mostly just be a sad memory.Take an extra deep breath, now and then. Spend a bit more time in the sunlight, or with friends. Give yourself time to nurse the hurt and heal. It's an ongoing process, even as you start to pick up the pieces and reforge something new.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 11 Nov 2022 11:58:37 +0000 EA - For the mental health of those affected by the FTX crisis... by Daystar Eld Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: For the mental health of those affected by the FTX crisis..., published by Daystar Eld on November 11, 2022 on The Effective Altruism Forum.There's a thing that happens when we're in crisis that feels like the world is collapsing around us, and all hope for the future feels lost.Whether you worked for FTX, were relying on grant money from it, had savings held there that are now in question, or are just observing from the sidelines as friends and colleagues struggle with their new shattered reality, picking up the pieces can feel not just monumental, but impossible.This mantra came to me after I was in very bad situation over a decade ago. In that situation, not all parts of this mantra were true, and it legitimately seemed like my life might be over, or utterly derailed for the perpetual worse even if not.So the next time I faced crisis, I couldn't help but compare it to that one, and generalized to the other ways in which things weren't as bleak as they could have been. It has since helped me keep perspective and hope in lesser crises, and a number of my clients have reported being helped by it as well. So I thought I'd share it, in the hopes it helps you too:"I'm going to be okay. I'm not dead, in the hospital, or in jail. I've still got my health, my family, and my friends. I'm going to be okay."This isn't done to diminish the losses and pain many of you are feeling. There's no one general platitude that will cover what everyone has lost or how they've been affected by any crisis. For some, maybe parts of that mantra aren't true, for complex reasons. Maybe not everyone has the same social support networks, which can definitely make crises much worse. Or perhaps someone is going through a medical crisis, and they've lost the income or savings or insurance they were relying on for treatment. And maybe some have lost friends over this, or will.The point of the mantra is not to insist that you've lost nothing, or that life will be unchanged. Many have lost something, large or small, present or in the future, and it makes sense that it hurts. Many were hoping to do good with the opportunities and funding FTX provided, and those problems are still there in the world, and in need of solving.But the core of who you are is still here, still capable of enjoying life and doing good, and in time what feels like the end of everything will, with some hope for a regression to the mean, mostly just be a sad memory.Take an extra deep breath, now and then. Spend a bit more time in the sunlight, or with friends. Give yourself time to nurse the hurt and heal. It's an ongoing process, even as you start to pick up the pieces and reforge something new.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: For the mental health of those affected by the FTX crisis..., published by Daystar Eld on November 11, 2022 on The Effective Altruism Forum.There's a thing that happens when we're in crisis that feels like the world is collapsing around us, and all hope for the future feels lost.Whether you worked for FTX, were relying on grant money from it, had savings held there that are now in question, or are just observing from the sidelines as friends and colleagues struggle with their new shattered reality, picking up the pieces can feel not just monumental, but impossible.This mantra came to me after I was in very bad situation over a decade ago. In that situation, not all parts of this mantra were true, and it legitimately seemed like my life might be over, or utterly derailed for the perpetual worse even if not.So the next time I faced crisis, I couldn't help but compare it to that one, and generalized to the other ways in which things weren't as bleak as they could have been. It has since helped me keep perspective and hope in lesser crises, and a number of my clients have reported being helped by it as well. So I thought I'd share it, in the hopes it helps you too:"I'm going to be okay. I'm not dead, in the hospital, or in jail. I've still got my health, my family, and my friends. I'm going to be okay."This isn't done to diminish the losses and pain many of you are feeling. There's no one general platitude that will cover what everyone has lost or how they've been affected by any crisis. For some, maybe parts of that mantra aren't true, for complex reasons. Maybe not everyone has the same social support networks, which can definitely make crises much worse. Or perhaps someone is going through a medical crisis, and they've lost the income or savings or insurance they were relying on for treatment. And maybe some have lost friends over this, or will.The point of the mantra is not to insist that you've lost nothing, or that life will be unchanged. Many have lost something, large or small, present or in the future, and it makes sense that it hurts. Many were hoping to do good with the opportunities and funding FTX provided, and those problems are still there in the world, and in need of solving.But the core of who you are is still here, still capable of enjoying life and doing good, and in time what feels like the end of everything will, with some hope for a regression to the mean, mostly just be a sad memory.Take an extra deep breath, now and then. Spend a bit more time in the sunlight, or with friends. Give yourself time to nurse the hurt and heal. It's an ongoing process, even as you start to pick up the pieces and reforge something new.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Daystar Eld https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:40 None full 3712
adgGhehBAAjpdwKJT_EA EA - Apply now for the EU Tech Policy Fellowship 2023 by Jan-WillemvanPutten Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply now for the EU Tech Policy Fellowship 2023, published by Jan-WillemvanPutten on November 11, 2022 on The Effective Altruism Forum.Announcing the EU Tech Policy Fellowship 2023, an 8-month programme to catapult ambitious graduates into high-impact career paths in EU policy, mainly working on the topic of AI Governance.SummaryTraining for Good is excited to announce the second edition of the EU Tech Policy Fellowship. This programme enables promising EU citizens to launch careers focused on regulating high-priority emerging technologies, mainly AI. Apply here by December 11th.This fellowship consists of three components:Remote study group (July - August, 4 hours a week): A 6 week part-time study group covering AI governance & technology policy fundamentals.2 x Policy training in Brussels (June 26-30 and September 3 - 8 exact date TBC): Two intensive week-long bootcamps in Brussels featuring workshops, guest lectures from relevant experts and networking events.Fellows will then participate in one of two tracks depending on their interests.Track 1 (September - February full time): Fellows will be matched with a host organisation working on European tech regulation for a ~5 month placement between September 2023 and February 2024.Host organisations include The Future Society, Centre for European Policy Studies, and German Marshall Fund (among others).Track 2 (September): Fellows will receive job application support and guidance to pursue a career in the European Commission, party politics or related policy jobs in Europe.This may include career workshops, feedback on CVs, interview training and mentorship from experienced policy professionals.Other important points:If you have any questions or would like to learn more about the program and whether or not it’s the right fit for you, Training for Good will be hosting an informal information session on Thursday November 24 (5.30pm CET), please subscribe here for that session.This fellowship is only open to EU citizens.Modest stipends are available to cover living and relocation costs. We expect most stipends to be between €1,750 and €2,250 per monthFor track 1, stipends are available for up to 6 months while participating in placements.For track 2, stipends are available for up to 1 month while exploring and applying for policy roles.Apply here by December 11th.The ProgrammeThe programme spans a maximum of 8 months from June 2023 to February 2024, is fully cost-covered, and where needed, participants can avail of stipends to cover living costs. It consists of 4 major parts:Policy training in Brussels (June 26-30, dates TBC): An intensive week-long bootcamp in Brussels featuring workshops, guest lectures from relevant experts and networking events.Main focus: Understanding Brussels bubble (including networking) and creating your own goals for the FellowshipAll accomodation, food & travel costs will be fully covered by Training for GoodRemote study group (July - August): A 7 week study group covering AI governance & technology policy fundamentals. Every week consists of ~4 hours of readings, a 1 hour discussion and a guest lecture.Policy training in Brussels (September 3-8, dates TBC): An intensive week-long bootcamp in Brussels featuring workshops, guest lectures from relevant experts and networking events.The goal of this week is to come up with a policy proposal, inspired by the latest insights from governance research covered in the 7 week reading group.All accomodation, food & travel costs will be fully covered by Training for GoodFellows will then participate in one of two tracks depending on their interests.Track 1 (September - February): Fellows will be matched with a host organisation working on European tech regulation for a ~5 month placement between September 2023 and February 202...]]>
Jan-WillemvanPutten https://forum.effectivealtruism.org/posts/adgGhehBAAjpdwKJT/apply-now-for-the-eu-tech-policy-fellowship-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply now for the EU Tech Policy Fellowship 2023, published by Jan-WillemvanPutten on November 11, 2022 on The Effective Altruism Forum.Announcing the EU Tech Policy Fellowship 2023, an 8-month programme to catapult ambitious graduates into high-impact career paths in EU policy, mainly working on the topic of AI Governance.SummaryTraining for Good is excited to announce the second edition of the EU Tech Policy Fellowship. This programme enables promising EU citizens to launch careers focused on regulating high-priority emerging technologies, mainly AI. Apply here by December 11th.This fellowship consists of three components:Remote study group (July - August, 4 hours a week): A 6 week part-time study group covering AI governance & technology policy fundamentals.2 x Policy training in Brussels (June 26-30 and September 3 - 8 exact date TBC): Two intensive week-long bootcamps in Brussels featuring workshops, guest lectures from relevant experts and networking events.Fellows will then participate in one of two tracks depending on their interests.Track 1 (September - February full time): Fellows will be matched with a host organisation working on European tech regulation for a ~5 month placement between September 2023 and February 2024.Host organisations include The Future Society, Centre for European Policy Studies, and German Marshall Fund (among others).Track 2 (September): Fellows will receive job application support and guidance to pursue a career in the European Commission, party politics or related policy jobs in Europe.This may include career workshops, feedback on CVs, interview training and mentorship from experienced policy professionals.Other important points:If you have any questions or would like to learn more about the program and whether or not it’s the right fit for you, Training for Good will be hosting an informal information session on Thursday November 24 (5.30pm CET), please subscribe here for that session.This fellowship is only open to EU citizens.Modest stipends are available to cover living and relocation costs. We expect most stipends to be between €1,750 and €2,250 per monthFor track 1, stipends are available for up to 6 months while participating in placements.For track 2, stipends are available for up to 1 month while exploring and applying for policy roles.Apply here by December 11th.The ProgrammeThe programme spans a maximum of 8 months from June 2023 to February 2024, is fully cost-covered, and where needed, participants can avail of stipends to cover living costs. It consists of 4 major parts:Policy training in Brussels (June 26-30, dates TBC): An intensive week-long bootcamp in Brussels featuring workshops, guest lectures from relevant experts and networking events.Main focus: Understanding Brussels bubble (including networking) and creating your own goals for the FellowshipAll accomodation, food & travel costs will be fully covered by Training for GoodRemote study group (July - August): A 7 week study group covering AI governance & technology policy fundamentals. Every week consists of ~4 hours of readings, a 1 hour discussion and a guest lecture.Policy training in Brussels (September 3-8, dates TBC): An intensive week-long bootcamp in Brussels featuring workshops, guest lectures from relevant experts and networking events.The goal of this week is to come up with a policy proposal, inspired by the latest insights from governance research covered in the 7 week reading group.All accomodation, food & travel costs will be fully covered by Training for GoodFellows will then participate in one of two tracks depending on their interests.Track 1 (September - February): Fellows will be matched with a host organisation working on European tech regulation for a ~5 month placement between September 2023 and February 202...]]>
Fri, 11 Nov 2022 11:24:10 +0000 EA - Apply now for the EU Tech Policy Fellowship 2023 by Jan-WillemvanPutten Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply now for the EU Tech Policy Fellowship 2023, published by Jan-WillemvanPutten on November 11, 2022 on The Effective Altruism Forum.Announcing the EU Tech Policy Fellowship 2023, an 8-month programme to catapult ambitious graduates into high-impact career paths in EU policy, mainly working on the topic of AI Governance.SummaryTraining for Good is excited to announce the second edition of the EU Tech Policy Fellowship. This programme enables promising EU citizens to launch careers focused on regulating high-priority emerging technologies, mainly AI. Apply here by December 11th.This fellowship consists of three components:Remote study group (July - August, 4 hours a week): A 6 week part-time study group covering AI governance & technology policy fundamentals.2 x Policy training in Brussels (June 26-30 and September 3 - 8 exact date TBC): Two intensive week-long bootcamps in Brussels featuring workshops, guest lectures from relevant experts and networking events.Fellows will then participate in one of two tracks depending on their interests.Track 1 (September - February full time): Fellows will be matched with a host organisation working on European tech regulation for a ~5 month placement between September 2023 and February 2024.Host organisations include The Future Society, Centre for European Policy Studies, and German Marshall Fund (among others).Track 2 (September): Fellows will receive job application support and guidance to pursue a career in the European Commission, party politics or related policy jobs in Europe.This may include career workshops, feedback on CVs, interview training and mentorship from experienced policy professionals.Other important points:If you have any questions or would like to learn more about the program and whether or not it’s the right fit for you, Training for Good will be hosting an informal information session on Thursday November 24 (5.30pm CET), please subscribe here for that session.This fellowship is only open to EU citizens.Modest stipends are available to cover living and relocation costs. We expect most stipends to be between €1,750 and €2,250 per monthFor track 1, stipends are available for up to 6 months while participating in placements.For track 2, stipends are available for up to 1 month while exploring and applying for policy roles.Apply here by December 11th.The ProgrammeThe programme spans a maximum of 8 months from June 2023 to February 2024, is fully cost-covered, and where needed, participants can avail of stipends to cover living costs. It consists of 4 major parts:Policy training in Brussels (June 26-30, dates TBC): An intensive week-long bootcamp in Brussels featuring workshops, guest lectures from relevant experts and networking events.Main focus: Understanding Brussels bubble (including networking) and creating your own goals for the FellowshipAll accomodation, food & travel costs will be fully covered by Training for GoodRemote study group (July - August): A 7 week study group covering AI governance & technology policy fundamentals. Every week consists of ~4 hours of readings, a 1 hour discussion and a guest lecture.Policy training in Brussels (September 3-8, dates TBC): An intensive week-long bootcamp in Brussels featuring workshops, guest lectures from relevant experts and networking events.The goal of this week is to come up with a policy proposal, inspired by the latest insights from governance research covered in the 7 week reading group.All accomodation, food & travel costs will be fully covered by Training for GoodFellows will then participate in one of two tracks depending on their interests.Track 1 (September - February): Fellows will be matched with a host organisation working on European tech regulation for a ~5 month placement between September 2023 and February 202...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply now for the EU Tech Policy Fellowship 2023, published by Jan-WillemvanPutten on November 11, 2022 on The Effective Altruism Forum.Announcing the EU Tech Policy Fellowship 2023, an 8-month programme to catapult ambitious graduates into high-impact career paths in EU policy, mainly working on the topic of AI Governance.SummaryTraining for Good is excited to announce the second edition of the EU Tech Policy Fellowship. This programme enables promising EU citizens to launch careers focused on regulating high-priority emerging technologies, mainly AI. Apply here by December 11th.This fellowship consists of three components:Remote study group (July - August, 4 hours a week): A 6 week part-time study group covering AI governance & technology policy fundamentals.2 x Policy training in Brussels (June 26-30 and September 3 - 8 exact date TBC): Two intensive week-long bootcamps in Brussels featuring workshops, guest lectures from relevant experts and networking events.Fellows will then participate in one of two tracks depending on their interests.Track 1 (September - February full time): Fellows will be matched with a host organisation working on European tech regulation for a ~5 month placement between September 2023 and February 2024.Host organisations include The Future Society, Centre for European Policy Studies, and German Marshall Fund (among others).Track 2 (September): Fellows will receive job application support and guidance to pursue a career in the European Commission, party politics or related policy jobs in Europe.This may include career workshops, feedback on CVs, interview training and mentorship from experienced policy professionals.Other important points:If you have any questions or would like to learn more about the program and whether or not it’s the right fit for you, Training for Good will be hosting an informal information session on Thursday November 24 (5.30pm CET), please subscribe here for that session.This fellowship is only open to EU citizens.Modest stipends are available to cover living and relocation costs. We expect most stipends to be between €1,750 and €2,250 per monthFor track 1, stipends are available for up to 6 months while participating in placements.For track 2, stipends are available for up to 1 month while exploring and applying for policy roles.Apply here by December 11th.The ProgrammeThe programme spans a maximum of 8 months from June 2023 to February 2024, is fully cost-covered, and where needed, participants can avail of stipends to cover living costs. It consists of 4 major parts:Policy training in Brussels (June 26-30, dates TBC): An intensive week-long bootcamp in Brussels featuring workshops, guest lectures from relevant experts and networking events.Main focus: Understanding Brussels bubble (including networking) and creating your own goals for the FellowshipAll accomodation, food & travel costs will be fully covered by Training for GoodRemote study group (July - August): A 7 week study group covering AI governance & technology policy fundamentals. Every week consists of ~4 hours of readings, a 1 hour discussion and a guest lecture.Policy training in Brussels (September 3-8, dates TBC): An intensive week-long bootcamp in Brussels featuring workshops, guest lectures from relevant experts and networking events.The goal of this week is to come up with a policy proposal, inspired by the latest insights from governance research covered in the 7 week reading group.All accomodation, food & travel costs will be fully covered by Training for GoodFellows will then participate in one of two tracks depending on their interests.Track 1 (September - February): Fellows will be matched with a host organisation working on European tech regulation for a ~5 month placement between September 2023 and February 202...]]>
Jan-WillemvanPutten https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:11 None full 3713
JHzgJ5FA3yDnuk6tj_EA EA - Should companies aligned to EA be open about that? by Worried Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should companies aligned to EA be open about that?, published by Worried on November 10, 2022 on The Effective Altruism Forum.I own a company that's closely related to EA. We've been vocal about that relationship and actively acquanting and educating our customers about EA. Almost all of our customers and employees are non-EA's, we operate B2C and we are for-profit. I no longer think it's in the best interest of my company, and its impact and future survival odds, to have a visual relationship with EA. I do continue to believe its very good to acquaint and educate people about EA principles, but I just think it's too risky associating with it publicly now. I have already been worried about the optics of longtermism (although I'm a longtermist), flying students to conferences and having them stay in fancy hotels (although I believe that to be high EV) and everything immoral Elon Musk does in the name of longtermism, but I could deal with and defend that.However, FTX seemingly plagueing fraud to move more money to effective causes is something I know I won't be able to defend (even when that's high EV), and seems like a net-negative thing to be associated with.I realize that asking this question might be bad because it could persuade other people and companies to be silent EA's. I also think for some that might be the highest EV course of action.I'm not new to this forum, but I've chosen to post this anonymously. I look forward to reading your arguments around this question.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Worried https://forum.effectivealtruism.org/posts/JHzgJ5FA3yDnuk6tj/should-companies-aligned-to-ea-be-open-about-that Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should companies aligned to EA be open about that?, published by Worried on November 10, 2022 on The Effective Altruism Forum.I own a company that's closely related to EA. We've been vocal about that relationship and actively acquanting and educating our customers about EA. Almost all of our customers and employees are non-EA's, we operate B2C and we are for-profit. I no longer think it's in the best interest of my company, and its impact and future survival odds, to have a visual relationship with EA. I do continue to believe its very good to acquaint and educate people about EA principles, but I just think it's too risky associating with it publicly now. I have already been worried about the optics of longtermism (although I'm a longtermist), flying students to conferences and having them stay in fancy hotels (although I believe that to be high EV) and everything immoral Elon Musk does in the name of longtermism, but I could deal with and defend that.However, FTX seemingly plagueing fraud to move more money to effective causes is something I know I won't be able to defend (even when that's high EV), and seems like a net-negative thing to be associated with.I realize that asking this question might be bad because it could persuade other people and companies to be silent EA's. I also think for some that might be the highest EV course of action.I'm not new to this forum, but I've chosen to post this anonymously. I look forward to reading your arguments around this question.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 11 Nov 2022 03:15:05 +0000 EA - Should companies aligned to EA be open about that? by Worried Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should companies aligned to EA be open about that?, published by Worried on November 10, 2022 on The Effective Altruism Forum.I own a company that's closely related to EA. We've been vocal about that relationship and actively acquanting and educating our customers about EA. Almost all of our customers and employees are non-EA's, we operate B2C and we are for-profit. I no longer think it's in the best interest of my company, and its impact and future survival odds, to have a visual relationship with EA. I do continue to believe its very good to acquaint and educate people about EA principles, but I just think it's too risky associating with it publicly now. I have already been worried about the optics of longtermism (although I'm a longtermist), flying students to conferences and having them stay in fancy hotels (although I believe that to be high EV) and everything immoral Elon Musk does in the name of longtermism, but I could deal with and defend that.However, FTX seemingly plagueing fraud to move more money to effective causes is something I know I won't be able to defend (even when that's high EV), and seems like a net-negative thing to be associated with.I realize that asking this question might be bad because it could persuade other people and companies to be silent EA's. I also think for some that might be the highest EV course of action.I'm not new to this forum, but I've chosen to post this anonymously. I look forward to reading your arguments around this question.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should companies aligned to EA be open about that?, published by Worried on November 10, 2022 on The Effective Altruism Forum.I own a company that's closely related to EA. We've been vocal about that relationship and actively acquanting and educating our customers about EA. Almost all of our customers and employees are non-EA's, we operate B2C and we are for-profit. I no longer think it's in the best interest of my company, and its impact and future survival odds, to have a visual relationship with EA. I do continue to believe its very good to acquaint and educate people about EA principles, but I just think it's too risky associating with it publicly now. I have already been worried about the optics of longtermism (although I'm a longtermist), flying students to conferences and having them stay in fancy hotels (although I believe that to be high EV) and everything immoral Elon Musk does in the name of longtermism, but I could deal with and defend that.However, FTX seemingly plagueing fraud to move more money to effective causes is something I know I won't be able to defend (even when that's high EV), and seems like a net-negative thing to be associated with.I realize that asking this question might be bad because it could persuade other people and companies to be silent EA's. I also think for some that might be the highest EV course of action.I'm not new to this forum, but I've chosen to post this anonymously. I look forward to reading your arguments around this question.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Worried https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:36 None full 3705
xafpj3on76uRDoBja_EA EA - The FTX Future Fund team has resigned by Nick Beckstead Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX Future Fund team has resigned, published by Nick Beckstead on November 11, 2022 on The Effective Altruism Forum.We were shocked and immensely saddened to learn of the recent events at FTX. Our hearts go out to the thousands of FTX customers whose finances may have been jeopardized or destroyed.We are now unable to perform our work or process grants, and we have fundamental questions about the legitimacy and integrity of the business operations that were funding the FTX Foundation and the Future Fund. As a result, we resigned earlier today.We don’t yet have a full picture of what went wrong, and we are following the news online as it unfolds. But to the extent that the leadership of FTX may have engaged in deception or dishonesty, we condemn that behavior in the strongest possible terms. We believe that being a good actor in the world means striving to act with honesty and integrity.We are devastated to say that it looks likely that there are many committed grants that the Future Fund will be unable to honor. We are so sorry that it has come to this. We are no longer employed by the Future Fund, but, in our personal capacities, we are exploring ways to help with this awful situation. We joined the Future Fund to support incredible people and projects, and this outcome is heartbreaking to us.We appreciate the grantees' work to help build a better future, and we have been honored to support it. We're sorry that we won't be able to continue to do so going forward, and we deeply regret the difficult, painful, and stressful position that many of you are now in.To reach us, grantees may email grantee-reachout@googlegroups.com. We know grantees must have many questions, and in our personal capacities we will try to answer them as best as we can given the circumstances.Nick BecksteadLeopold AschenbrennerAvital BalwitKetan RamakrishnanWill MacAskillThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nick Beckstead https://forum.effectivealtruism.org/posts/xafpj3on76uRDoBja/the-ftx-future-fund-team-has-resigned-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX Future Fund team has resigned, published by Nick Beckstead on November 11, 2022 on The Effective Altruism Forum.We were shocked and immensely saddened to learn of the recent events at FTX. Our hearts go out to the thousands of FTX customers whose finances may have been jeopardized or destroyed.We are now unable to perform our work or process grants, and we have fundamental questions about the legitimacy and integrity of the business operations that were funding the FTX Foundation and the Future Fund. As a result, we resigned earlier today.We don’t yet have a full picture of what went wrong, and we are following the news online as it unfolds. But to the extent that the leadership of FTX may have engaged in deception or dishonesty, we condemn that behavior in the strongest possible terms. We believe that being a good actor in the world means striving to act with honesty and integrity.We are devastated to say that it looks likely that there are many committed grants that the Future Fund will be unable to honor. We are so sorry that it has come to this. We are no longer employed by the Future Fund, but, in our personal capacities, we are exploring ways to help with this awful situation. We joined the Future Fund to support incredible people and projects, and this outcome is heartbreaking to us.We appreciate the grantees' work to help build a better future, and we have been honored to support it. We're sorry that we won't be able to continue to do so going forward, and we deeply regret the difficult, painful, and stressful position that many of you are now in.To reach us, grantees may email grantee-reachout@googlegroups.com. We know grantees must have many questions, and in our personal capacities we will try to answer them as best as we can given the circumstances.Nick BecksteadLeopold AschenbrennerAvital BalwitKetan RamakrishnanWill MacAskillThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 11 Nov 2022 02:10:19 +0000 EA - The FTX Future Fund team has resigned by Nick Beckstead Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX Future Fund team has resigned, published by Nick Beckstead on November 11, 2022 on The Effective Altruism Forum.We were shocked and immensely saddened to learn of the recent events at FTX. Our hearts go out to the thousands of FTX customers whose finances may have been jeopardized or destroyed.We are now unable to perform our work or process grants, and we have fundamental questions about the legitimacy and integrity of the business operations that were funding the FTX Foundation and the Future Fund. As a result, we resigned earlier today.We don’t yet have a full picture of what went wrong, and we are following the news online as it unfolds. But to the extent that the leadership of FTX may have engaged in deception or dishonesty, we condemn that behavior in the strongest possible terms. We believe that being a good actor in the world means striving to act with honesty and integrity.We are devastated to say that it looks likely that there are many committed grants that the Future Fund will be unable to honor. We are so sorry that it has come to this. We are no longer employed by the Future Fund, but, in our personal capacities, we are exploring ways to help with this awful situation. We joined the Future Fund to support incredible people and projects, and this outcome is heartbreaking to us.We appreciate the grantees' work to help build a better future, and we have been honored to support it. We're sorry that we won't be able to continue to do so going forward, and we deeply regret the difficult, painful, and stressful position that many of you are now in.To reach us, grantees may email grantee-reachout@googlegroups.com. We know grantees must have many questions, and in our personal capacities we will try to answer them as best as we can given the circumstances.Nick BecksteadLeopold AschenbrennerAvital BalwitKetan RamakrishnanWill MacAskillThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX Future Fund team has resigned, published by Nick Beckstead on November 11, 2022 on The Effective Altruism Forum.We were shocked and immensely saddened to learn of the recent events at FTX. Our hearts go out to the thousands of FTX customers whose finances may have been jeopardized or destroyed.We are now unable to perform our work or process grants, and we have fundamental questions about the legitimacy and integrity of the business operations that were funding the FTX Foundation and the Future Fund. As a result, we resigned earlier today.We don’t yet have a full picture of what went wrong, and we are following the news online as it unfolds. But to the extent that the leadership of FTX may have engaged in deception or dishonesty, we condemn that behavior in the strongest possible terms. We believe that being a good actor in the world means striving to act with honesty and integrity.We are devastated to say that it looks likely that there are many committed grants that the Future Fund will be unable to honor. We are so sorry that it has come to this. We are no longer employed by the Future Fund, but, in our personal capacities, we are exploring ways to help with this awful situation. We joined the Future Fund to support incredible people and projects, and this outcome is heartbreaking to us.We appreciate the grantees' work to help build a better future, and we have been honored to support it. We're sorry that we won't be able to continue to do so going forward, and we deeply regret the difficult, painful, and stressful position that many of you are now in.To reach us, grantees may email grantee-reachout@googlegroups.com. We know grantees must have many questions, and in our personal capacities we will try to answer them as best as we can given the circumstances.Nick BecksteadLeopold AschenbrennerAvital BalwitKetan RamakrishnanWill MacAskillThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nick Beckstead https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:59 None full 3703
XHrHsrQGyr4NnqCA7_EA EA - We must be very clear: fraud in the service of effective altruism is unacceptable by evhub Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We must be very clear: fraud in the service of effective altruism is unacceptable, published by evhub on November 10, 2022 on The Effective Altruism Forum.I care deeply about the future of humanity—more so than I care about anything else in the world. And I believe that Sam and others at FTX shared that care for the world.Nevertheless, if some hypothetical person had come to me several years ago and asked “Is it worth it to engage in fraud to send billions of dollars to effective causes?”, I would have said unequivocally no.At this stage, it is quite unclear just from public information exactly what happened to FTX, and I don't want to accuse anyone of anything that they didn't do. However, I think it is starting to look increasingly likely that, even if FTX's handling of its customer's money was not technically legally fraudulent, it seems likely to have been fraudulent in spirit.And regardless of whether FTX's business was in fact fraudulent, it is clear that many people—customers and employees—have been deeply hurt by FTX's collapse. People's life savings and careers were very rapidly wiped out. I think that compassion and support for those people is very important. In addition, I think there's another thing that we as a community have an obligation to do right now as well.Assuming FTX's business was in fact fraudulent, I think that we—as people who unknowingly benefitted from it and whose work for the world was potentially used to whitewash it—have an obligation to condemn it in no uncertain terms. This is especially true for public figures who supported or were associated with FTX or its endeavors.I don't want a witch hunt, I don't think anyone should start pulling out pitchforks, and so I think we should avoid a focus on any individual people here. We likely won't know for a long time exactly who was responsible for what, nor do I think it really matters—what's done is done, and what's important now is making very clear where EA stands with regards to fraudulent activity, not throwing any individual people under the bus.Right now, I think the best course of action is for us—and I mean all of us, anyone who has any sort of a public platform—to make clear that we don't support fraud done in the service of effective altruism. Regardless of what FTX did or did not do, I think that is a statement that should be clearly and unambiguously defensible and that we should be happy to stand by regardless of what comes out. And I think it is an important statement for us to make: outside observers will be looking to see what EA has to say about all of this, and I think we need to be very clear that fraud is not something that we ever support.In that spirit, I think it's worth us carefully confronting the moral question here: is fraud in the service of raising money for effective causes wrong? I think it is, quite clearly—and that's as someone who is about as hardcore of a total utilitarian as it is possible to be.When we, as humans, consider whether or not it makes sense to break the rules for our own benefit, we are running on corrupted hardware: we are very good at justifying to ourselves that seizing money and power for own benefit is really for the good of everyone. If I found myself in a situation where it seemed to me like seizing power for myself was net good, I would worry that in fact I was fooling myself—and even if I was pretty sure I wasn't fooling myself, I would still worry that I was falling prey to the unilateralist's curse if it wasn't very clearly a good idea to others as well.Additionally, if you're familiar with decision theory, you'll know that credibly pre-committing to follow certain principles—such as never engaging in fraud—is extremely advantageous, as it makes clear to other agents that you are a trustworthy actor who can be relied upon....]]>
evhub https://forum.effectivealtruism.org/posts/XHrHsrQGyr4NnqCA7/we-must-be-very-clear-fraud-in-the-service-of-effective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We must be very clear: fraud in the service of effective altruism is unacceptable, published by evhub on November 10, 2022 on The Effective Altruism Forum.I care deeply about the future of humanity—more so than I care about anything else in the world. And I believe that Sam and others at FTX shared that care for the world.Nevertheless, if some hypothetical person had come to me several years ago and asked “Is it worth it to engage in fraud to send billions of dollars to effective causes?”, I would have said unequivocally no.At this stage, it is quite unclear just from public information exactly what happened to FTX, and I don't want to accuse anyone of anything that they didn't do. However, I think it is starting to look increasingly likely that, even if FTX's handling of its customer's money was not technically legally fraudulent, it seems likely to have been fraudulent in spirit.And regardless of whether FTX's business was in fact fraudulent, it is clear that many people—customers and employees—have been deeply hurt by FTX's collapse. People's life savings and careers were very rapidly wiped out. I think that compassion and support for those people is very important. In addition, I think there's another thing that we as a community have an obligation to do right now as well.Assuming FTX's business was in fact fraudulent, I think that we—as people who unknowingly benefitted from it and whose work for the world was potentially used to whitewash it—have an obligation to condemn it in no uncertain terms. This is especially true for public figures who supported or were associated with FTX or its endeavors.I don't want a witch hunt, I don't think anyone should start pulling out pitchforks, and so I think we should avoid a focus on any individual people here. We likely won't know for a long time exactly who was responsible for what, nor do I think it really matters—what's done is done, and what's important now is making very clear where EA stands with regards to fraudulent activity, not throwing any individual people under the bus.Right now, I think the best course of action is for us—and I mean all of us, anyone who has any sort of a public platform—to make clear that we don't support fraud done in the service of effective altruism. Regardless of what FTX did or did not do, I think that is a statement that should be clearly and unambiguously defensible and that we should be happy to stand by regardless of what comes out. And I think it is an important statement for us to make: outside observers will be looking to see what EA has to say about all of this, and I think we need to be very clear that fraud is not something that we ever support.In that spirit, I think it's worth us carefully confronting the moral question here: is fraud in the service of raising money for effective causes wrong? I think it is, quite clearly—and that's as someone who is about as hardcore of a total utilitarian as it is possible to be.When we, as humans, consider whether or not it makes sense to break the rules for our own benefit, we are running on corrupted hardware: we are very good at justifying to ourselves that seizing money and power for own benefit is really for the good of everyone. If I found myself in a situation where it seemed to me like seizing power for myself was net good, I would worry that in fact I was fooling myself—and even if I was pretty sure I wasn't fooling myself, I would still worry that I was falling prey to the unilateralist's curse if it wasn't very clearly a good idea to others as well.Additionally, if you're familiar with decision theory, you'll know that credibly pre-committing to follow certain principles—such as never engaging in fraud—is extremely advantageous, as it makes clear to other agents that you are a trustworthy actor who can be relied upon....]]>
Thu, 10 Nov 2022 23:44:19 +0000 EA - We must be very clear: fraud in the service of effective altruism is unacceptable by evhub Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We must be very clear: fraud in the service of effective altruism is unacceptable, published by evhub on November 10, 2022 on The Effective Altruism Forum.I care deeply about the future of humanity—more so than I care about anything else in the world. And I believe that Sam and others at FTX shared that care for the world.Nevertheless, if some hypothetical person had come to me several years ago and asked “Is it worth it to engage in fraud to send billions of dollars to effective causes?”, I would have said unequivocally no.At this stage, it is quite unclear just from public information exactly what happened to FTX, and I don't want to accuse anyone of anything that they didn't do. However, I think it is starting to look increasingly likely that, even if FTX's handling of its customer's money was not technically legally fraudulent, it seems likely to have been fraudulent in spirit.And regardless of whether FTX's business was in fact fraudulent, it is clear that many people—customers and employees—have been deeply hurt by FTX's collapse. People's life savings and careers were very rapidly wiped out. I think that compassion and support for those people is very important. In addition, I think there's another thing that we as a community have an obligation to do right now as well.Assuming FTX's business was in fact fraudulent, I think that we—as people who unknowingly benefitted from it and whose work for the world was potentially used to whitewash it—have an obligation to condemn it in no uncertain terms. This is especially true for public figures who supported or were associated with FTX or its endeavors.I don't want a witch hunt, I don't think anyone should start pulling out pitchforks, and so I think we should avoid a focus on any individual people here. We likely won't know for a long time exactly who was responsible for what, nor do I think it really matters—what's done is done, and what's important now is making very clear where EA stands with regards to fraudulent activity, not throwing any individual people under the bus.Right now, I think the best course of action is for us—and I mean all of us, anyone who has any sort of a public platform—to make clear that we don't support fraud done in the service of effective altruism. Regardless of what FTX did or did not do, I think that is a statement that should be clearly and unambiguously defensible and that we should be happy to stand by regardless of what comes out. And I think it is an important statement for us to make: outside observers will be looking to see what EA has to say about all of this, and I think we need to be very clear that fraud is not something that we ever support.In that spirit, I think it's worth us carefully confronting the moral question here: is fraud in the service of raising money for effective causes wrong? I think it is, quite clearly—and that's as someone who is about as hardcore of a total utilitarian as it is possible to be.When we, as humans, consider whether or not it makes sense to break the rules for our own benefit, we are running on corrupted hardware: we are very good at justifying to ourselves that seizing money and power for own benefit is really for the good of everyone. If I found myself in a situation where it seemed to me like seizing power for myself was net good, I would worry that in fact I was fooling myself—and even if I was pretty sure I wasn't fooling myself, I would still worry that I was falling prey to the unilateralist's curse if it wasn't very clearly a good idea to others as well.Additionally, if you're familiar with decision theory, you'll know that credibly pre-committing to follow certain principles—such as never engaging in fraud—is extremely advantageous, as it makes clear to other agents that you are a trustworthy actor who can be relied upon....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We must be very clear: fraud in the service of effective altruism is unacceptable, published by evhub on November 10, 2022 on The Effective Altruism Forum.I care deeply about the future of humanity—more so than I care about anything else in the world. And I believe that Sam and others at FTX shared that care for the world.Nevertheless, if some hypothetical person had come to me several years ago and asked “Is it worth it to engage in fraud to send billions of dollars to effective causes?”, I would have said unequivocally no.At this stage, it is quite unclear just from public information exactly what happened to FTX, and I don't want to accuse anyone of anything that they didn't do. However, I think it is starting to look increasingly likely that, even if FTX's handling of its customer's money was not technically legally fraudulent, it seems likely to have been fraudulent in spirit.And regardless of whether FTX's business was in fact fraudulent, it is clear that many people—customers and employees—have been deeply hurt by FTX's collapse. People's life savings and careers were very rapidly wiped out. I think that compassion and support for those people is very important. In addition, I think there's another thing that we as a community have an obligation to do right now as well.Assuming FTX's business was in fact fraudulent, I think that we—as people who unknowingly benefitted from it and whose work for the world was potentially used to whitewash it—have an obligation to condemn it in no uncertain terms. This is especially true for public figures who supported or were associated with FTX or its endeavors.I don't want a witch hunt, I don't think anyone should start pulling out pitchforks, and so I think we should avoid a focus on any individual people here. We likely won't know for a long time exactly who was responsible for what, nor do I think it really matters—what's done is done, and what's important now is making very clear where EA stands with regards to fraudulent activity, not throwing any individual people under the bus.Right now, I think the best course of action is for us—and I mean all of us, anyone who has any sort of a public platform—to make clear that we don't support fraud done in the service of effective altruism. Regardless of what FTX did or did not do, I think that is a statement that should be clearly and unambiguously defensible and that we should be happy to stand by regardless of what comes out. And I think it is an important statement for us to make: outside observers will be looking to see what EA has to say about all of this, and I think we need to be very clear that fraud is not something that we ever support.In that spirit, I think it's worth us carefully confronting the moral question here: is fraud in the service of raising money for effective causes wrong? I think it is, quite clearly—and that's as someone who is about as hardcore of a total utilitarian as it is possible to be.When we, as humans, consider whether or not it makes sense to break the rules for our own benefit, we are running on corrupted hardware: we are very good at justifying to ourselves that seizing money and power for own benefit is really for the good of everyone. If I found myself in a situation where it seemed to me like seizing power for myself was net good, I would worry that in fact I was fooling myself—and even if I was pretty sure I wasn't fooling myself, I would still worry that I was falling prey to the unilateralist's curse if it wasn't very clearly a good idea to others as well.Additionally, if you're familiar with decision theory, you'll know that credibly pre-committing to follow certain principles—such as never engaging in fraud—is extremely advantageous, as it makes clear to other agents that you are a trustworthy actor who can be relied upon....]]>
evhub https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:58 None full 3704
mCCutDxCavtnhxhBR_EA EA - Some comments on recent FTX-related events by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some comments on recent FTX-related events, published by Holden Karnofsky on November 10, 2022 on The Effective Altruism Forum.It appears that FTX, whose principals support the FTX Foundation, is in serious trouble. We’ve been getting a lot of questions related to these events.I’ve made an attempt to get some basic points out quickly that might be helpful to people, but the situation appears to be developing quickly and I have little understanding of what’s going on, so this post will necessarily be incomplete and nonauthoritative.One thing I’d like to say up front (more on how this relates to FTX below) is that Open Philanthropy remains committed to our longtermist focus areas and still expects to spend billions of dollars on them over the coming decades. We will raise the bar for our giving, and we don’t know how many existing projects that will affect, but we still expect longtermist projects to grow in terms of their impact and output.Are the funds directed by Open Philanthropy invested in or otherwise exposed to FTX or related entities?No.The FTX Foundation has quickly become a major funder of many longtermist and effective altruist organizations. If it stops (or greatly reduces) funding them, how might that affect Open Philanthropy’s funding practices?If the FTX Foundation stops (or greatly reduces) funding such people and organizations, then Open Philanthropy will have to consider a substantially larger set of funding opportunities than we were considering before.In this case, we will have to raise our bar for longtermist grantmaking: with more funding opportunities that we’re choosing between, we’ll have to fund a lower percentage of them. This means grants that we would’ve made before might no longer be made, and/or we might want to provide smaller amounts of money to projects we previously would have supported more generously.Does Open Philanthropy also need to raise its bar in light of general market movements (particularly the fall in META stock) and other factors?Yes:Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna’s net worth give a substantially understated picture of our available resources. That’s because, among other issues, they don’t include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume.) Dustin and Cari still expect to spend nearly all of their resources in their lifetimes on philanthropy that aims to accomplish as much good per dollar as possible.Additionally, the longtermist community has been growing; our rate of spending has been going up; and we expect both of these trends to continue. This further contributes to the need to raise our bar.As stated above, we remain committed to our focus areas and still expect to spend billions of dollars on them over the coming decades.So how much might Open Philanthropy raise its bar for longtermist grantmaking, and what does this mean for today’s potential grantees?We don’t know yet — the news about FTX was sudden, and we’re working to figure things out.It’s a priority for us to think through how much to raise the bar for longtermist grantmaking, and therefore what kinds of giving opportunities to fund. We hope to gain some clarity on this in the next month or so, but right now we’re dealing with major new information and don’t have a lot to say about what it means. It could mean reducing support for a lot of projects, or for relatively few.(We don’t have a crisp formalization of “the bar”; instead we have general guidance to grantmakers on what sorts of requests should be generously funded vs. carefully considered vs. rejected. We need to rethink and revise this guidance.)Because of this, we are pausing mo...]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/mCCutDxCavtnhxhBR/some-comments-on-recent-ftx-related-events Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some comments on recent FTX-related events, published by Holden Karnofsky on November 10, 2022 on The Effective Altruism Forum.It appears that FTX, whose principals support the FTX Foundation, is in serious trouble. We’ve been getting a lot of questions related to these events.I’ve made an attempt to get some basic points out quickly that might be helpful to people, but the situation appears to be developing quickly and I have little understanding of what’s going on, so this post will necessarily be incomplete and nonauthoritative.One thing I’d like to say up front (more on how this relates to FTX below) is that Open Philanthropy remains committed to our longtermist focus areas and still expects to spend billions of dollars on them over the coming decades. We will raise the bar for our giving, and we don’t know how many existing projects that will affect, but we still expect longtermist projects to grow in terms of their impact and output.Are the funds directed by Open Philanthropy invested in or otherwise exposed to FTX or related entities?No.The FTX Foundation has quickly become a major funder of many longtermist and effective altruist organizations. If it stops (or greatly reduces) funding them, how might that affect Open Philanthropy’s funding practices?If the FTX Foundation stops (or greatly reduces) funding such people and organizations, then Open Philanthropy will have to consider a substantially larger set of funding opportunities than we were considering before.In this case, we will have to raise our bar for longtermist grantmaking: with more funding opportunities that we’re choosing between, we’ll have to fund a lower percentage of them. This means grants that we would’ve made before might no longer be made, and/or we might want to provide smaller amounts of money to projects we previously would have supported more generously.Does Open Philanthropy also need to raise its bar in light of general market movements (particularly the fall in META stock) and other factors?Yes:Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna’s net worth give a substantially understated picture of our available resources. That’s because, among other issues, they don’t include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume.) Dustin and Cari still expect to spend nearly all of their resources in their lifetimes on philanthropy that aims to accomplish as much good per dollar as possible.Additionally, the longtermist community has been growing; our rate of spending has been going up; and we expect both of these trends to continue. This further contributes to the need to raise our bar.As stated above, we remain committed to our focus areas and still expect to spend billions of dollars on them over the coming decades.So how much might Open Philanthropy raise its bar for longtermist grantmaking, and what does this mean for today’s potential grantees?We don’t know yet — the news about FTX was sudden, and we’re working to figure things out.It’s a priority for us to think through how much to raise the bar for longtermist grantmaking, and therefore what kinds of giving opportunities to fund. We hope to gain some clarity on this in the next month or so, but right now we’re dealing with major new information and don’t have a lot to say about what it means. It could mean reducing support for a lot of projects, or for relatively few.(We don’t have a crisp formalization of “the bar”; instead we have general guidance to grantmakers on what sorts of requests should be generously funded vs. carefully considered vs. rejected. We need to rethink and revise this guidance.)Because of this, we are pausing mo...]]>
Thu, 10 Nov 2022 22:30:17 +0000 EA - Some comments on recent FTX-related events by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some comments on recent FTX-related events, published by Holden Karnofsky on November 10, 2022 on The Effective Altruism Forum.It appears that FTX, whose principals support the FTX Foundation, is in serious trouble. We’ve been getting a lot of questions related to these events.I’ve made an attempt to get some basic points out quickly that might be helpful to people, but the situation appears to be developing quickly and I have little understanding of what’s going on, so this post will necessarily be incomplete and nonauthoritative.One thing I’d like to say up front (more on how this relates to FTX below) is that Open Philanthropy remains committed to our longtermist focus areas and still expects to spend billions of dollars on them over the coming decades. We will raise the bar for our giving, and we don’t know how many existing projects that will affect, but we still expect longtermist projects to grow in terms of their impact and output.Are the funds directed by Open Philanthropy invested in or otherwise exposed to FTX or related entities?No.The FTX Foundation has quickly become a major funder of many longtermist and effective altruist organizations. If it stops (or greatly reduces) funding them, how might that affect Open Philanthropy’s funding practices?If the FTX Foundation stops (or greatly reduces) funding such people and organizations, then Open Philanthropy will have to consider a substantially larger set of funding opportunities than we were considering before.In this case, we will have to raise our bar for longtermist grantmaking: with more funding opportunities that we’re choosing between, we’ll have to fund a lower percentage of them. This means grants that we would’ve made before might no longer be made, and/or we might want to provide smaller amounts of money to projects we previously would have supported more generously.Does Open Philanthropy also need to raise its bar in light of general market movements (particularly the fall in META stock) and other factors?Yes:Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna’s net worth give a substantially understated picture of our available resources. That’s because, among other issues, they don’t include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume.) Dustin and Cari still expect to spend nearly all of their resources in their lifetimes on philanthropy that aims to accomplish as much good per dollar as possible.Additionally, the longtermist community has been growing; our rate of spending has been going up; and we expect both of these trends to continue. This further contributes to the need to raise our bar.As stated above, we remain committed to our focus areas and still expect to spend billions of dollars on them over the coming decades.So how much might Open Philanthropy raise its bar for longtermist grantmaking, and what does this mean for today’s potential grantees?We don’t know yet — the news about FTX was sudden, and we’re working to figure things out.It’s a priority for us to think through how much to raise the bar for longtermist grantmaking, and therefore what kinds of giving opportunities to fund. We hope to gain some clarity on this in the next month or so, but right now we’re dealing with major new information and don’t have a lot to say about what it means. It could mean reducing support for a lot of projects, or for relatively few.(We don’t have a crisp formalization of “the bar”; instead we have general guidance to grantmakers on what sorts of requests should be generously funded vs. carefully considered vs. rejected. We need to rethink and revise this guidance.)Because of this, we are pausing mo...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some comments on recent FTX-related events, published by Holden Karnofsky on November 10, 2022 on The Effective Altruism Forum.It appears that FTX, whose principals support the FTX Foundation, is in serious trouble. We’ve been getting a lot of questions related to these events.I’ve made an attempt to get some basic points out quickly that might be helpful to people, but the situation appears to be developing quickly and I have little understanding of what’s going on, so this post will necessarily be incomplete and nonauthoritative.One thing I’d like to say up front (more on how this relates to FTX below) is that Open Philanthropy remains committed to our longtermist focus areas and still expects to spend billions of dollars on them over the coming decades. We will raise the bar for our giving, and we don’t know how many existing projects that will affect, but we still expect longtermist projects to grow in terms of their impact and output.Are the funds directed by Open Philanthropy invested in or otherwise exposed to FTX or related entities?No.The FTX Foundation has quickly become a major funder of many longtermist and effective altruist organizations. If it stops (or greatly reduces) funding them, how might that affect Open Philanthropy’s funding practices?If the FTX Foundation stops (or greatly reduces) funding such people and organizations, then Open Philanthropy will have to consider a substantially larger set of funding opportunities than we were considering before.In this case, we will have to raise our bar for longtermist grantmaking: with more funding opportunities that we’re choosing between, we’ll have to fund a lower percentage of them. This means grants that we would’ve made before might no longer be made, and/or we might want to provide smaller amounts of money to projects we previously would have supported more generously.Does Open Philanthropy also need to raise its bar in light of general market movements (particularly the fall in META stock) and other factors?Yes:Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna’s net worth give a substantially understated picture of our available resources. That’s because, among other issues, they don’t include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume.) Dustin and Cari still expect to spend nearly all of their resources in their lifetimes on philanthropy that aims to accomplish as much good per dollar as possible.Additionally, the longtermist community has been growing; our rate of spending has been going up; and we expect both of these trends to continue. This further contributes to the need to raise our bar.As stated above, we remain committed to our focus areas and still expect to spend billions of dollars on them over the coming decades.So how much might Open Philanthropy raise its bar for longtermist grantmaking, and what does this mean for today’s potential grantees?We don’t know yet — the news about FTX was sudden, and we’re working to figure things out.It’s a priority for us to think through how much to raise the bar for longtermist grantmaking, and therefore what kinds of giving opportunities to fund. We hope to gain some clarity on this in the next month or so, but right now we’re dealing with major new information and don’t have a lot to say about what it means. It could mean reducing support for a lot of projects, or for relatively few.(We don’t have a crisp formalization of “the bar”; instead we have general guidance to grantmakers on what sorts of requests should be generously funded vs. carefully considered vs. rejected. We need to rethink and revise this guidance.)Because of this, we are pausing mo...]]>
Holden Karnofsky https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:03 None full 3701
BesfLENShzSMeb7Xi_EA EA - Community support given FTX situation by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community support given FTX situation, published by Julia Wise on November 10, 2022 on The Effective Altruism Forum.A lot of us are looking at the news about FTX’s financial troubles and wondering what it will mean.For some people, that will be because they had investments held within FTX. For others, it will be because they were working on projects funded by the FTX Foundation. And for many, it will be because of the broader effects of this change on the world.I know people want answers, and I wish CEA had information to share.If you’re personally affected by what’s happening, the community health team wants to be here for you. We’ve already heard from people who are feeling worried, angry, or sad. We don’t have answers, but if you want to chat or vent, we’re available.Here’s the best place to reach us if you’d like to talk. I know a form isn’t the warmest, but a real person will get back to you soon.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Julia Wise https://forum.effectivealtruism.org/posts/BesfLENShzSMeb7Xi/community-support-given-ftx-situation Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community support given FTX situation, published by Julia Wise on November 10, 2022 on The Effective Altruism Forum.A lot of us are looking at the news about FTX’s financial troubles and wondering what it will mean.For some people, that will be because they had investments held within FTX. For others, it will be because they were working on projects funded by the FTX Foundation. And for many, it will be because of the broader effects of this change on the world.I know people want answers, and I wish CEA had information to share.If you’re personally affected by what’s happening, the community health team wants to be here for you. We’ve already heard from people who are feeling worried, angry, or sad. We don’t have answers, but if you want to chat or vent, we’re available.Here’s the best place to reach us if you’d like to talk. I know a form isn’t the warmest, but a real person will get back to you soon.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 10 Nov 2022 12:02:18 +0000 EA - Community support given FTX situation by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community support given FTX situation, published by Julia Wise on November 10, 2022 on The Effective Altruism Forum.A lot of us are looking at the news about FTX’s financial troubles and wondering what it will mean.For some people, that will be because they had investments held within FTX. For others, it will be because they were working on projects funded by the FTX Foundation. And for many, it will be because of the broader effects of this change on the world.I know people want answers, and I wish CEA had information to share.If you’re personally affected by what’s happening, the community health team wants to be here for you. We’ve already heard from people who are feeling worried, angry, or sad. We don’t have answers, but if you want to chat or vent, we’re available.Here’s the best place to reach us if you’d like to talk. I know a form isn’t the warmest, but a real person will get back to you soon.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community support given FTX situation, published by Julia Wise on November 10, 2022 on The Effective Altruism Forum.A lot of us are looking at the news about FTX’s financial troubles and wondering what it will mean.For some people, that will be because they had investments held within FTX. For others, it will be because they were working on projects funded by the FTX Foundation. And for many, it will be because of the broader effects of this change on the world.I know people want answers, and I wish CEA had information to share.If you’re personally affected by what’s happening, the community health team wants to be here for you. We’ve already heard from people who are feeling worried, angry, or sad. We don’t have answers, but if you want to chat or vent, we’re available.Here’s the best place to reach us if you’d like to talk. I know a form isn’t the warmest, but a real person will get back to you soon.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Julia Wise https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:08 None full 3702
dn2nLRgFAfodcTjQw_EA EA - What should I ask Joe Carlsmith — Open Phil researcher, philosopher and blogger? by Robert Wiblin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What should I ask Joe Carlsmith — Open Phil researcher, philosopher and blogger?, published by Robert Wiblin on November 9, 2022 on The Effective Altruism Forum.Next week for The 80,000 Hours Podcast I'll be interviewing Joe Carlsmith, Senior Research Analyst at Open Philanthropy.Joe's did a BPhil in philosophy at Oxford University and is a prolific writer on topics both philosophical and practical (until recently his blog was called 'Hands and Cities' but it's all now collected on his personal site.What should I ask him?Some things Joe has written which we could talk about include:Is Power-Seeking AI an Existential Risk?Actually possible: thoughts on UtopiaOn infinite ethics —XIV. The death of a utilitarian dreamAnthropics: Learning from the fact that you existAgainst neutrality about creating happy livesWholehearted choices and “morality as taxes”On clingingCan you control the past?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Robert Wiblin https://forum.effectivealtruism.org/posts/dn2nLRgFAfodcTjQw/what-should-i-ask-joe-carlsmith-open-phil-researcher Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What should I ask Joe Carlsmith — Open Phil researcher, philosopher and blogger?, published by Robert Wiblin on November 9, 2022 on The Effective Altruism Forum.Next week for The 80,000 Hours Podcast I'll be interviewing Joe Carlsmith, Senior Research Analyst at Open Philanthropy.Joe's did a BPhil in philosophy at Oxford University and is a prolific writer on topics both philosophical and practical (until recently his blog was called 'Hands and Cities' but it's all now collected on his personal site.What should I ask him?Some things Joe has written which we could talk about include:Is Power-Seeking AI an Existential Risk?Actually possible: thoughts on UtopiaOn infinite ethics —XIV. The death of a utilitarian dreamAnthropics: Learning from the fact that you existAgainst neutrality about creating happy livesWholehearted choices and “morality as taxes”On clingingCan you control the past?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 10 Nov 2022 09:42:53 +0000 EA - What should I ask Joe Carlsmith — Open Phil researcher, philosopher and blogger? by Robert Wiblin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What should I ask Joe Carlsmith — Open Phil researcher, philosopher and blogger?, published by Robert Wiblin on November 9, 2022 on The Effective Altruism Forum.Next week for The 80,000 Hours Podcast I'll be interviewing Joe Carlsmith, Senior Research Analyst at Open Philanthropy.Joe's did a BPhil in philosophy at Oxford University and is a prolific writer on topics both philosophical and practical (until recently his blog was called 'Hands and Cities' but it's all now collected on his personal site.What should I ask him?Some things Joe has written which we could talk about include:Is Power-Seeking AI an Existential Risk?Actually possible: thoughts on UtopiaOn infinite ethics —XIV. The death of a utilitarian dreamAnthropics: Learning from the fact that you existAgainst neutrality about creating happy livesWholehearted choices and “morality as taxes”On clingingCan you control the past?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What should I ask Joe Carlsmith — Open Phil researcher, philosopher and blogger?, published by Robert Wiblin on November 9, 2022 on The Effective Altruism Forum.Next week for The 80,000 Hours Podcast I'll be interviewing Joe Carlsmith, Senior Research Analyst at Open Philanthropy.Joe's did a BPhil in philosophy at Oxford University and is a prolific writer on topics both philosophical and practical (until recently his blog was called 'Hands and Cities' but it's all now collected on his personal site.What should I ask him?Some things Joe has written which we could talk about include:Is Power-Seeking AI an Existential Risk?Actually possible: thoughts on UtopiaOn infinite ethics —XIV. The death of a utilitarian dreamAnthropics: Learning from the fact that you existAgainst neutrality about creating happy livesWholehearted choices and “morality as taxes”On clingingCan you control the past?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Robert Wiblin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:13 None full 3698
vEAieBkRqL7Rj8KvY_EA EA - AI Safety groups should imitate career development clubs by Joshc Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety groups should imitate career development clubs, published by Joshc on November 9, 2022 on The Effective Altruism Forum.If you want to get people to do things (like learn about AI Safety) you have to offer them something valuable.Here’s one of the posters we used when I was in charge of marketing for the Columbia EA group:It’s a pretty graphic, but what valuable thing is it offering?The message is “scan this link to talk about AI.” To be fair, people like talking about AI. We had applicants.But we didn’t attract talented ML students.If you want to attract talented people, you have to know what they want. Serious and ambitious people probably don’t want to sit around having philosophical discussions. They want to build their careers.Enter ML @ Berkeley, a thriving group of 50 ML students who put 15 hours per week into projects and courses to become better at ML. No one gets paid – not even the organizers. And they are very selective. Only around 7% get in.Why is this group successful? For starters, they offer career capital. They give students projects that often turn into real published papers. They also concentrate talent. Ambitious people want to work with other ambitious people.AI safety student groups should consider imitating ML @ Berkeley.I’m not saying that we should eliminate philosophical discussions and replace them with resume boosting factories. We still want people to think AI Safety and X-risk are important. But discussions don’t need to be the primary selling point.Maybe for cultivating conceptual researchers, it makes more sense for discussions to be central. But conceptual and empirical AI Safety research are very different. ML students are probably more interested in projects and skill-building.More rigorous programming could also make it easier to identify talent.Talking about AI is fun, but top ML researchers work extremely hard. Rigorous technical curricula can filter out the ones that are driven.There is nothing like a trial by fire. Instead of trying to predict in advance who will be good at research, why not have lots of people try it and invest in those that do well?USC field builders are experimenting with a curriculum that, in addition to introducing X-risk, is packed-full with technical projects. In their first semester, they attracted 30 students who all have strong ML backgrounds. I’m interested in seeing how this goes and would be excited about more AI Safety groups running experiments on these lines.People could also try:checking whether grad students are willing to supervise group research projects.running deep learning courses and training programs (like Redwood’s MLAB)running an in-person section of intro to ML Safety (a technical course that covers safety topics).ConclusionAs far as I can tell, no one has AI safety university field-building all figured out. Rather than copying the same old discussion group model, people should experiment with new approaches. A good start could be to imitate career development clubs like ML @ Berkeley that have been highly successful.Thanks to Nat Li and Oliver Zhang for thoughts and feedback and to Dan Hendrycks for conversations that inspired this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joshc https://forum.effectivealtruism.org/posts/vEAieBkRqL7Rj8KvY/ai-safety-groups-should-imitate-career-development-clubs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety groups should imitate career development clubs, published by Joshc on November 9, 2022 on The Effective Altruism Forum.If you want to get people to do things (like learn about AI Safety) you have to offer them something valuable.Here’s one of the posters we used when I was in charge of marketing for the Columbia EA group:It’s a pretty graphic, but what valuable thing is it offering?The message is “scan this link to talk about AI.” To be fair, people like talking about AI. We had applicants.But we didn’t attract talented ML students.If you want to attract talented people, you have to know what they want. Serious and ambitious people probably don’t want to sit around having philosophical discussions. They want to build their careers.Enter ML @ Berkeley, a thriving group of 50 ML students who put 15 hours per week into projects and courses to become better at ML. No one gets paid – not even the organizers. And they are very selective. Only around 7% get in.Why is this group successful? For starters, they offer career capital. They give students projects that often turn into real published papers. They also concentrate talent. Ambitious people want to work with other ambitious people.AI safety student groups should consider imitating ML @ Berkeley.I’m not saying that we should eliminate philosophical discussions and replace them with resume boosting factories. We still want people to think AI Safety and X-risk are important. But discussions don’t need to be the primary selling point.Maybe for cultivating conceptual researchers, it makes more sense for discussions to be central. But conceptual and empirical AI Safety research are very different. ML students are probably more interested in projects and skill-building.More rigorous programming could also make it easier to identify talent.Talking about AI is fun, but top ML researchers work extremely hard. Rigorous technical curricula can filter out the ones that are driven.There is nothing like a trial by fire. Instead of trying to predict in advance who will be good at research, why not have lots of people try it and invest in those that do well?USC field builders are experimenting with a curriculum that, in addition to introducing X-risk, is packed-full with technical projects. In their first semester, they attracted 30 students who all have strong ML backgrounds. I’m interested in seeing how this goes and would be excited about more AI Safety groups running experiments on these lines.People could also try:checking whether grad students are willing to supervise group research projects.running deep learning courses and training programs (like Redwood’s MLAB)running an in-person section of intro to ML Safety (a technical course that covers safety topics).ConclusionAs far as I can tell, no one has AI safety university field-building all figured out. Rather than copying the same old discussion group model, people should experiment with new approaches. A good start could be to imitate career development clubs like ML @ Berkeley that have been highly successful.Thanks to Nat Li and Oliver Zhang for thoughts and feedback and to Dan Hendrycks for conversations that inspired this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 10 Nov 2022 06:56:42 +0000 EA - AI Safety groups should imitate career development clubs by Joshc Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety groups should imitate career development clubs, published by Joshc on November 9, 2022 on The Effective Altruism Forum.If you want to get people to do things (like learn about AI Safety) you have to offer them something valuable.Here’s one of the posters we used when I was in charge of marketing for the Columbia EA group:It’s a pretty graphic, but what valuable thing is it offering?The message is “scan this link to talk about AI.” To be fair, people like talking about AI. We had applicants.But we didn’t attract talented ML students.If you want to attract talented people, you have to know what they want. Serious and ambitious people probably don’t want to sit around having philosophical discussions. They want to build their careers.Enter ML @ Berkeley, a thriving group of 50 ML students who put 15 hours per week into projects and courses to become better at ML. No one gets paid – not even the organizers. And they are very selective. Only around 7% get in.Why is this group successful? For starters, they offer career capital. They give students projects that often turn into real published papers. They also concentrate talent. Ambitious people want to work with other ambitious people.AI safety student groups should consider imitating ML @ Berkeley.I’m not saying that we should eliminate philosophical discussions and replace them with resume boosting factories. We still want people to think AI Safety and X-risk are important. But discussions don’t need to be the primary selling point.Maybe for cultivating conceptual researchers, it makes more sense for discussions to be central. But conceptual and empirical AI Safety research are very different. ML students are probably more interested in projects and skill-building.More rigorous programming could also make it easier to identify talent.Talking about AI is fun, but top ML researchers work extremely hard. Rigorous technical curricula can filter out the ones that are driven.There is nothing like a trial by fire. Instead of trying to predict in advance who will be good at research, why not have lots of people try it and invest in those that do well?USC field builders are experimenting with a curriculum that, in addition to introducing X-risk, is packed-full with technical projects. In their first semester, they attracted 30 students who all have strong ML backgrounds. I’m interested in seeing how this goes and would be excited about more AI Safety groups running experiments on these lines.People could also try:checking whether grad students are willing to supervise group research projects.running deep learning courses and training programs (like Redwood’s MLAB)running an in-person section of intro to ML Safety (a technical course that covers safety topics).ConclusionAs far as I can tell, no one has AI safety university field-building all figured out. Rather than copying the same old discussion group model, people should experiment with new approaches. A good start could be to imitate career development clubs like ML @ Berkeley that have been highly successful.Thanks to Nat Li and Oliver Zhang for thoughts and feedback and to Dan Hendrycks for conversations that inspired this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety groups should imitate career development clubs, published by Joshc on November 9, 2022 on The Effective Altruism Forum.If you want to get people to do things (like learn about AI Safety) you have to offer them something valuable.Here’s one of the posters we used when I was in charge of marketing for the Columbia EA group:It’s a pretty graphic, but what valuable thing is it offering?The message is “scan this link to talk about AI.” To be fair, people like talking about AI. We had applicants.But we didn’t attract talented ML students.If you want to attract talented people, you have to know what they want. Serious and ambitious people probably don’t want to sit around having philosophical discussions. They want to build their careers.Enter ML @ Berkeley, a thriving group of 50 ML students who put 15 hours per week into projects and courses to become better at ML. No one gets paid – not even the organizers. And they are very selective. Only around 7% get in.Why is this group successful? For starters, they offer career capital. They give students projects that often turn into real published papers. They also concentrate talent. Ambitious people want to work with other ambitious people.AI safety student groups should consider imitating ML @ Berkeley.I’m not saying that we should eliminate philosophical discussions and replace them with resume boosting factories. We still want people to think AI Safety and X-risk are important. But discussions don’t need to be the primary selling point.Maybe for cultivating conceptual researchers, it makes more sense for discussions to be central. But conceptual and empirical AI Safety research are very different. ML students are probably more interested in projects and skill-building.More rigorous programming could also make it easier to identify talent.Talking about AI is fun, but top ML researchers work extremely hard. Rigorous technical curricula can filter out the ones that are driven.There is nothing like a trial by fire. Instead of trying to predict in advance who will be good at research, why not have lots of people try it and invest in those that do well?USC field builders are experimenting with a curriculum that, in addition to introducing X-risk, is packed-full with technical projects. In their first semester, they attracted 30 students who all have strong ML backgrounds. I’m interested in seeing how this goes and would be excited about more AI Safety groups running experiments on these lines.People could also try:checking whether grad students are willing to supervise group research projects.running deep learning courses and training programs (like Redwood’s MLAB)running an in-person section of intro to ML Safety (a technical course that covers safety topics).ConclusionAs far as I can tell, no one has AI safety university field-building all figured out. Rather than copying the same old discussion group model, people should experiment with new approaches. A good start could be to imitate career development clubs like ML @ Berkeley that have been highly successful.Thanks to Nat Li and Oliver Zhang for thoughts and feedback and to Dan Hendrycks for conversations that inspired this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joshc https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:12 None full 3696
Gr3t8vWcWpwBoNNTk_EA EA - Does the US public support radical action against factory farming in the name of animal welfare? by Neil Dullaghan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does the US public support radical action against factory farming in the name of animal welfare?, published by Neil Dullaghan on November 9, 2022 on The Effective Altruism Forum.SummarySurveys from Sentience Institute (2020, 2017) and Norwood & Murray (2018) showed substantial levels of support in the US for banning slaughterhouses (~39-43% support when including those who chose no opinion/don't know). Evidence of this considerable public support for radical action has been suggested as a reason for animal advocates to push stronger messages and bolder proposals against animal agriculture.A preregistered study that we present here casts doubt on how substantial support for such radical action against factory farming actually is. In an experiment of 700 US survey respondents, we found 7.85% support (95% CI [4.3% - 14%]), when arguments framed around animal welfare for and against are presented, and respondents are asked to explain their reasoning. We also found 20.41% support (95% CI [11%-34.7%]) in the control condition when respondents were not asked to explain their reasoning.In the second survey of 2,698 US respondents, the weighted results show 15.7% (95% CI [13%-18.8%]) support for a policy banning slaughterhouses, when arguments framed around animal welfare for and against are presented, and respondents are asked to explain their reasoning.The attitudes expressed by poll respondents in response to broad questions may not be reliable indicators of actual support for specific policies or messages. It would be better to test people's responses to more detailed messages and policy proposals, paying special attention to how radical messages compare to counterfactual moderate messages.Results: Attitudes towards a proposal to ban slaughterhousesIn July and August of 2022, we ran two online surveys, one with an experimental design. In both, respondents were presented with a proposal that included a definition of slaughterhouses and arguments framed around animal welfare for and against the proposal. In the survey experiment, the treatment condition asked respondents to explain their reasoning, with this prompt removed in the control condition. In the second survey, only the treatment condition was presented. The question wording was as follows:Supporters of this policy say that slaughterhouses should be banned because it is wrong to kill animals. There is no way to kill animals for their meat which is “humane,” so this should be banned.Opponents of this policy say that people have a right to eat meat if they choose. The practices in place are humane and produce quality meat for consumers at an affordable price. It is possible to prevent animals being killed inhumanely without banning slaughterhouses.Do you support or oppose this proposed policy? [Support/Oppose/Don’t Know][Treatment condition] Please explain your reasons for the answer you gave aboveIn the survey experiment, with a sample of 700 US respondents:We found much lower levels of support in both control (20.41%, 95% CI [11%-34.7%]) and intervention (7.85%, 95% CI [4.3% - 14%]) conditions compared to the Sentience Institute (2020, 2017) and Norwood & Murray (2018) studies (43% when including “No Opinion” responses). (More results in the appendix.)In both the unweighted and weighted analyses, support was lower in the intervention condition than in the control condition. The share of “Don’t Know” respondents increased in the intervention in the unweighted analysis, while in the weighted analysis, the lost support seems to come directly from people choosing to oppose instead.Weighted results for a proposal banning slaughterhouses (control and treatment)The second survey had a sample of 2,698 US respondents (after filtering based on an honesty check and a basic attention/comprehension check). We pr...]]>
Neil Dullaghan https://forum.effectivealtruism.org/posts/Gr3t8vWcWpwBoNNTk/does-the-us-public-support-radical-action-against-factory-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does the US public support radical action against factory farming in the name of animal welfare?, published by Neil Dullaghan on November 9, 2022 on The Effective Altruism Forum.SummarySurveys from Sentience Institute (2020, 2017) and Norwood & Murray (2018) showed substantial levels of support in the US for banning slaughterhouses (~39-43% support when including those who chose no opinion/don't know). Evidence of this considerable public support for radical action has been suggested as a reason for animal advocates to push stronger messages and bolder proposals against animal agriculture.A preregistered study that we present here casts doubt on how substantial support for such radical action against factory farming actually is. In an experiment of 700 US survey respondents, we found 7.85% support (95% CI [4.3% - 14%]), when arguments framed around animal welfare for and against are presented, and respondents are asked to explain their reasoning. We also found 20.41% support (95% CI [11%-34.7%]) in the control condition when respondents were not asked to explain their reasoning.In the second survey of 2,698 US respondents, the weighted results show 15.7% (95% CI [13%-18.8%]) support for a policy banning slaughterhouses, when arguments framed around animal welfare for and against are presented, and respondents are asked to explain their reasoning.The attitudes expressed by poll respondents in response to broad questions may not be reliable indicators of actual support for specific policies or messages. It would be better to test people's responses to more detailed messages and policy proposals, paying special attention to how radical messages compare to counterfactual moderate messages.Results: Attitudes towards a proposal to ban slaughterhousesIn July and August of 2022, we ran two online surveys, one with an experimental design. In both, respondents were presented with a proposal that included a definition of slaughterhouses and arguments framed around animal welfare for and against the proposal. In the survey experiment, the treatment condition asked respondents to explain their reasoning, with this prompt removed in the control condition. In the second survey, only the treatment condition was presented. The question wording was as follows:Supporters of this policy say that slaughterhouses should be banned because it is wrong to kill animals. There is no way to kill animals for their meat which is “humane,” so this should be banned.Opponents of this policy say that people have a right to eat meat if they choose. The practices in place are humane and produce quality meat for consumers at an affordable price. It is possible to prevent animals being killed inhumanely without banning slaughterhouses.Do you support or oppose this proposed policy? [Support/Oppose/Don’t Know][Treatment condition] Please explain your reasons for the answer you gave aboveIn the survey experiment, with a sample of 700 US respondents:We found much lower levels of support in both control (20.41%, 95% CI [11%-34.7%]) and intervention (7.85%, 95% CI [4.3% - 14%]) conditions compared to the Sentience Institute (2020, 2017) and Norwood & Murray (2018) studies (43% when including “No Opinion” responses). (More results in the appendix.)In both the unweighted and weighted analyses, support was lower in the intervention condition than in the control condition. The share of “Don’t Know” respondents increased in the intervention in the unweighted analysis, while in the weighted analysis, the lost support seems to come directly from people choosing to oppose instead.Weighted results for a proposal banning slaughterhouses (control and treatment)The second survey had a sample of 2,698 US respondents (after filtering based on an honesty check and a basic attention/comprehension check). We pr...]]>
Wed, 09 Nov 2022 23:13:56 +0000 EA - Does the US public support radical action against factory farming in the name of animal welfare? by Neil Dullaghan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does the US public support radical action against factory farming in the name of animal welfare?, published by Neil Dullaghan on November 9, 2022 on The Effective Altruism Forum.SummarySurveys from Sentience Institute (2020, 2017) and Norwood & Murray (2018) showed substantial levels of support in the US for banning slaughterhouses (~39-43% support when including those who chose no opinion/don't know). Evidence of this considerable public support for radical action has been suggested as a reason for animal advocates to push stronger messages and bolder proposals against animal agriculture.A preregistered study that we present here casts doubt on how substantial support for such radical action against factory farming actually is. In an experiment of 700 US survey respondents, we found 7.85% support (95% CI [4.3% - 14%]), when arguments framed around animal welfare for and against are presented, and respondents are asked to explain their reasoning. We also found 20.41% support (95% CI [11%-34.7%]) in the control condition when respondents were not asked to explain their reasoning.In the second survey of 2,698 US respondents, the weighted results show 15.7% (95% CI [13%-18.8%]) support for a policy banning slaughterhouses, when arguments framed around animal welfare for and against are presented, and respondents are asked to explain their reasoning.The attitudes expressed by poll respondents in response to broad questions may not be reliable indicators of actual support for specific policies or messages. It would be better to test people's responses to more detailed messages and policy proposals, paying special attention to how radical messages compare to counterfactual moderate messages.Results: Attitudes towards a proposal to ban slaughterhousesIn July and August of 2022, we ran two online surveys, one with an experimental design. In both, respondents were presented with a proposal that included a definition of slaughterhouses and arguments framed around animal welfare for and against the proposal. In the survey experiment, the treatment condition asked respondents to explain their reasoning, with this prompt removed in the control condition. In the second survey, only the treatment condition was presented. The question wording was as follows:Supporters of this policy say that slaughterhouses should be banned because it is wrong to kill animals. There is no way to kill animals for their meat which is “humane,” so this should be banned.Opponents of this policy say that people have a right to eat meat if they choose. The practices in place are humane and produce quality meat for consumers at an affordable price. It is possible to prevent animals being killed inhumanely without banning slaughterhouses.Do you support or oppose this proposed policy? [Support/Oppose/Don’t Know][Treatment condition] Please explain your reasons for the answer you gave aboveIn the survey experiment, with a sample of 700 US respondents:We found much lower levels of support in both control (20.41%, 95% CI [11%-34.7%]) and intervention (7.85%, 95% CI [4.3% - 14%]) conditions compared to the Sentience Institute (2020, 2017) and Norwood & Murray (2018) studies (43% when including “No Opinion” responses). (More results in the appendix.)In both the unweighted and weighted analyses, support was lower in the intervention condition than in the control condition. The share of “Don’t Know” respondents increased in the intervention in the unweighted analysis, while in the weighted analysis, the lost support seems to come directly from people choosing to oppose instead.Weighted results for a proposal banning slaughterhouses (control and treatment)The second survey had a sample of 2,698 US respondents (after filtering based on an honesty check and a basic attention/comprehension check). We pr...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does the US public support radical action against factory farming in the name of animal welfare?, published by Neil Dullaghan on November 9, 2022 on The Effective Altruism Forum.SummarySurveys from Sentience Institute (2020, 2017) and Norwood & Murray (2018) showed substantial levels of support in the US for banning slaughterhouses (~39-43% support when including those who chose no opinion/don't know). Evidence of this considerable public support for radical action has been suggested as a reason for animal advocates to push stronger messages and bolder proposals against animal agriculture.A preregistered study that we present here casts doubt on how substantial support for such radical action against factory farming actually is. In an experiment of 700 US survey respondents, we found 7.85% support (95% CI [4.3% - 14%]), when arguments framed around animal welfare for and against are presented, and respondents are asked to explain their reasoning. We also found 20.41% support (95% CI [11%-34.7%]) in the control condition when respondents were not asked to explain their reasoning.In the second survey of 2,698 US respondents, the weighted results show 15.7% (95% CI [13%-18.8%]) support for a policy banning slaughterhouses, when arguments framed around animal welfare for and against are presented, and respondents are asked to explain their reasoning.The attitudes expressed by poll respondents in response to broad questions may not be reliable indicators of actual support for specific policies or messages. It would be better to test people's responses to more detailed messages and policy proposals, paying special attention to how radical messages compare to counterfactual moderate messages.Results: Attitudes towards a proposal to ban slaughterhousesIn July and August of 2022, we ran two online surveys, one with an experimental design. In both, respondents were presented with a proposal that included a definition of slaughterhouses and arguments framed around animal welfare for and against the proposal. In the survey experiment, the treatment condition asked respondents to explain their reasoning, with this prompt removed in the control condition. In the second survey, only the treatment condition was presented. The question wording was as follows:Supporters of this policy say that slaughterhouses should be banned because it is wrong to kill animals. There is no way to kill animals for their meat which is “humane,” so this should be banned.Opponents of this policy say that people have a right to eat meat if they choose. The practices in place are humane and produce quality meat for consumers at an affordable price. It is possible to prevent animals being killed inhumanely without banning slaughterhouses.Do you support or oppose this proposed policy? [Support/Oppose/Don’t Know][Treatment condition] Please explain your reasons for the answer you gave aboveIn the survey experiment, with a sample of 700 US respondents:We found much lower levels of support in both control (20.41%, 95% CI [11%-34.7%]) and intervention (7.85%, 95% CI [4.3% - 14%]) conditions compared to the Sentience Institute (2020, 2017) and Norwood & Murray (2018) studies (43% when including “No Opinion” responses). (More results in the appendix.)In both the unweighted and weighted analyses, support was lower in the intervention condition than in the control condition. The share of “Don’t Know” respondents increased in the intervention in the unweighted analysis, while in the weighted analysis, the lost support seems to come directly from people choosing to oppose instead.Weighted results for a proposal banning slaughterhouses (control and treatment)The second survey had a sample of 2,698 US respondents (after filtering based on an honesty check and a basic attention/comprehension check). We pr...]]>
Neil Dullaghan https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:23 None full 3697
EKN9Nn89ixriwSXpP_EA EA - Money Stuff: FTX Had a Death Spiral by Elliot Temple Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Money Stuff: FTX Had a Death Spiral, published by Elliot Temple on November 9, 2022 on The Effective Altruism Forum.Matt Levine is a financial writer I like and he explains (the latest information about) what happened with FTX.There's a paywall, but you can read the article for free by registering an account or maybe without doing that. I got it to load for free by opening it in a private browsing window. The newsletter is also free by email which is how I receive it. The free email signup is at but that won't help you with past issues.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Elliot Temple https://forum.effectivealtruism.org/posts/EKN9Nn89ixriwSXpP/money-stuff-ftx-had-a-death-spiral Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Money Stuff: FTX Had a Death Spiral, published by Elliot Temple on November 9, 2022 on The Effective Altruism Forum.Matt Levine is a financial writer I like and he explains (the latest information about) what happened with FTX.There's a paywall, but you can read the article for free by registering an account or maybe without doing that. I got it to load for free by opening it in a private browsing window. The newsletter is also free by email which is how I receive it. The free email signup is at but that won't help you with past issues.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 09 Nov 2022 22:08:27 +0000 EA - Money Stuff: FTX Had a Death Spiral by Elliot Temple Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Money Stuff: FTX Had a Death Spiral, published by Elliot Temple on November 9, 2022 on The Effective Altruism Forum.Matt Levine is a financial writer I like and he explains (the latest information about) what happened with FTX.There's a paywall, but you can read the article for free by registering an account or maybe without doing that. I got it to load for free by opening it in a private browsing window. The newsletter is also free by email which is how I receive it. The free email signup is at but that won't help you with past issues.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Money Stuff: FTX Had a Death Spiral, published by Elliot Temple on November 9, 2022 on The Effective Altruism Forum.Matt Levine is a financial writer I like and he explains (the latest information about) what happened with FTX.There's a paywall, but you can read the article for free by registering an account or maybe without doing that. I got it to load for free by opening it in a private browsing window. The newsletter is also free by email which is how I receive it. The free email signup is at but that won't help you with past issues.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Elliot Temple https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:49 None full 3689
6ASCLT7HDMDy8rptE_EA EA - The 8-week mental health programme for EAs finally published by tereziekosik Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 8-week mental health programme for EAs finally published, published by tereziekosik on November 9, 2022 on The Effective Altruism Forum.We are happy to announce that we have finally finished and published an 8-week-long mental health programme for EAs on Mental Health Navigator.The whole programme is composed of the following topics:Introduction to Mindfulness and EmotionsSelf-CompassionWidening the Circles of CompassionRelating to OthersImposter SyndromeSustainable Motivation and Preventing BurnoutCreating a Healthy CommunityIntroduction to Well-being and Behavioral ChangeHow to use this programme?The format of the programme is very similar to the EA Fellowship. Each week focuses on one topic/workshop. It is also possible to run the workshops individually. Prior to the workshop, the participants are asked to read one chapter of the workbook that corresponds to the workshop's topic.The workshops are run by facilitators. The facilitators do not have to be mental health specialists, as this programme is not meant to serve as an intervention. It is more about opening the topic, creating a safe space to talk about mental health issues in the community, and enabling peer-to-peer support in the search for well-being and work-life balance.If you are interested, you can find more information about the programme, its liability, and safety on Mental Health Navigator.What are the lessons learned that we gained from designing and testing the programme?(Given the nature and size of the project, we evaluated all the workshops as well as the pilot programme mainly qualitatively via focus groups and in-depth interviews).First, designing and testing the workshop supported our original hypothesis that there is a demand for activities supporting mental health in the EA community; opening the topic via this project seemed like a good first step. We had many EA members subscribed to our newsletter (around 100 people in total) and have received positive feedback from the participants of the workshops. We were invited to run the workshops at the CARE conference or for the Stanford Existential Risks Initiative. We are currently in contact with different local organisers interested in facilitating the programme in their groups.Second, it seems that programmes that create space for peer support and target problems prevalent in the community are promising. Indeed, one of the most valuable things of the programme was to enable like-minded people to share and support each other. Being an effective altruist might be challenging in many ways; social support and a safe environment seem to be powerful tools for overcoming these challenges. Thus, we conclude that putting effort into creating materials and activities to support mental health in the community is worth investing our resources in, and we would like to encourage others to focus on that. Given the time we had and the complexity such a project presents, there are surely still improvements to be made. We would be very grateful if it served as a starting point from which others could continue.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
tereziekosik https://forum.effectivealtruism.org/posts/6ASCLT7HDMDy8rptE/the-8-week-mental-health-programme-for-eas-finally-published Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 8-week mental health programme for EAs finally published, published by tereziekosik on November 9, 2022 on The Effective Altruism Forum.We are happy to announce that we have finally finished and published an 8-week-long mental health programme for EAs on Mental Health Navigator.The whole programme is composed of the following topics:Introduction to Mindfulness and EmotionsSelf-CompassionWidening the Circles of CompassionRelating to OthersImposter SyndromeSustainable Motivation and Preventing BurnoutCreating a Healthy CommunityIntroduction to Well-being and Behavioral ChangeHow to use this programme?The format of the programme is very similar to the EA Fellowship. Each week focuses on one topic/workshop. It is also possible to run the workshops individually. Prior to the workshop, the participants are asked to read one chapter of the workbook that corresponds to the workshop's topic.The workshops are run by facilitators. The facilitators do not have to be mental health specialists, as this programme is not meant to serve as an intervention. It is more about opening the topic, creating a safe space to talk about mental health issues in the community, and enabling peer-to-peer support in the search for well-being and work-life balance.If you are interested, you can find more information about the programme, its liability, and safety on Mental Health Navigator.What are the lessons learned that we gained from designing and testing the programme?(Given the nature and size of the project, we evaluated all the workshops as well as the pilot programme mainly qualitatively via focus groups and in-depth interviews).First, designing and testing the workshop supported our original hypothesis that there is a demand for activities supporting mental health in the EA community; opening the topic via this project seemed like a good first step. We had many EA members subscribed to our newsletter (around 100 people in total) and have received positive feedback from the participants of the workshops. We were invited to run the workshops at the CARE conference or for the Stanford Existential Risks Initiative. We are currently in contact with different local organisers interested in facilitating the programme in their groups.Second, it seems that programmes that create space for peer support and target problems prevalent in the community are promising. Indeed, one of the most valuable things of the programme was to enable like-minded people to share and support each other. Being an effective altruist might be challenging in many ways; social support and a safe environment seem to be powerful tools for overcoming these challenges. Thus, we conclude that putting effort into creating materials and activities to support mental health in the community is worth investing our resources in, and we would like to encourage others to focus on that. Given the time we had and the complexity such a project presents, there are surely still improvements to be made. We would be very grateful if it served as a starting point from which others could continue.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 09 Nov 2022 18:03:47 +0000 EA - The 8-week mental health programme for EAs finally published by tereziekosik Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 8-week mental health programme for EAs finally published, published by tereziekosik on November 9, 2022 on The Effective Altruism Forum.We are happy to announce that we have finally finished and published an 8-week-long mental health programme for EAs on Mental Health Navigator.The whole programme is composed of the following topics:Introduction to Mindfulness and EmotionsSelf-CompassionWidening the Circles of CompassionRelating to OthersImposter SyndromeSustainable Motivation and Preventing BurnoutCreating a Healthy CommunityIntroduction to Well-being and Behavioral ChangeHow to use this programme?The format of the programme is very similar to the EA Fellowship. Each week focuses on one topic/workshop. It is also possible to run the workshops individually. Prior to the workshop, the participants are asked to read one chapter of the workbook that corresponds to the workshop's topic.The workshops are run by facilitators. The facilitators do not have to be mental health specialists, as this programme is not meant to serve as an intervention. It is more about opening the topic, creating a safe space to talk about mental health issues in the community, and enabling peer-to-peer support in the search for well-being and work-life balance.If you are interested, you can find more information about the programme, its liability, and safety on Mental Health Navigator.What are the lessons learned that we gained from designing and testing the programme?(Given the nature and size of the project, we evaluated all the workshops as well as the pilot programme mainly qualitatively via focus groups and in-depth interviews).First, designing and testing the workshop supported our original hypothesis that there is a demand for activities supporting mental health in the EA community; opening the topic via this project seemed like a good first step. We had many EA members subscribed to our newsletter (around 100 people in total) and have received positive feedback from the participants of the workshops. We were invited to run the workshops at the CARE conference or for the Stanford Existential Risks Initiative. We are currently in contact with different local organisers interested in facilitating the programme in their groups.Second, it seems that programmes that create space for peer support and target problems prevalent in the community are promising. Indeed, one of the most valuable things of the programme was to enable like-minded people to share and support each other. Being an effective altruist might be challenging in many ways; social support and a safe environment seem to be powerful tools for overcoming these challenges. Thus, we conclude that putting effort into creating materials and activities to support mental health in the community is worth investing our resources in, and we would like to encourage others to focus on that. Given the time we had and the complexity such a project presents, there are surely still improvements to be made. We would be very grateful if it served as a starting point from which others could continue.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 8-week mental health programme for EAs finally published, published by tereziekosik on November 9, 2022 on The Effective Altruism Forum.We are happy to announce that we have finally finished and published an 8-week-long mental health programme for EAs on Mental Health Navigator.The whole programme is composed of the following topics:Introduction to Mindfulness and EmotionsSelf-CompassionWidening the Circles of CompassionRelating to OthersImposter SyndromeSustainable Motivation and Preventing BurnoutCreating a Healthy CommunityIntroduction to Well-being and Behavioral ChangeHow to use this programme?The format of the programme is very similar to the EA Fellowship. Each week focuses on one topic/workshop. It is also possible to run the workshops individually. Prior to the workshop, the participants are asked to read one chapter of the workbook that corresponds to the workshop's topic.The workshops are run by facilitators. The facilitators do not have to be mental health specialists, as this programme is not meant to serve as an intervention. It is more about opening the topic, creating a safe space to talk about mental health issues in the community, and enabling peer-to-peer support in the search for well-being and work-life balance.If you are interested, you can find more information about the programme, its liability, and safety on Mental Health Navigator.What are the lessons learned that we gained from designing and testing the programme?(Given the nature and size of the project, we evaluated all the workshops as well as the pilot programme mainly qualitatively via focus groups and in-depth interviews).First, designing and testing the workshop supported our original hypothesis that there is a demand for activities supporting mental health in the EA community; opening the topic via this project seemed like a good first step. We had many EA members subscribed to our newsletter (around 100 people in total) and have received positive feedback from the participants of the workshops. We were invited to run the workshops at the CARE conference or for the Stanford Existential Risks Initiative. We are currently in contact with different local organisers interested in facilitating the programme in their groups.Second, it seems that programmes that create space for peer support and target problems prevalent in the community are promising. Indeed, one of the most valuable things of the programme was to enable like-minded people to share and support each other. Being an effective altruist might be challenging in many ways; social support and a safe environment seem to be powerful tools for overcoming these challenges. Thus, we conclude that putting effort into creating materials and activities to support mental health in the community is worth investing our resources in, and we would like to encourage others to focus on that. Given the time we had and the complexity such a project presents, there are surely still improvements to be made. We would be very grateful if it served as a starting point from which others could continue.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
tereziekosik https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:59 None full 3691
ytHCpLbT6A4gxqH8s_EA EA - Tracking the money flows in forecasting by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tracking the money flows in forecasting, published by NunoSempere on November 9, 2022 on The Effective Altruism Forum.This list of forecasting organizations includes:A brief description of each organizationA monetary estimate of value. This can serve as a rough but hard-to-fake proxy of value. Sometimes this is a flow (e.g., budget per year), and sometimes this is an estimate of total value (e.g., valuation).A more subjective, rough, and verbal estimate of how much value the organization produces.This started as a breadth first evaluation of the forecasting system, and to some extent it still is, i.e., it might be useful to get a rough sense of the ecosystem as a whole. After some discussion on whether very rough evaluations are worth it (a), people who prefer their evaluations to have a high threshold of quality and polish might want to either ignore this post or just pay attention to the monetary estimates.Summary tableNote that the monetary value column has different types of estimates.NameMonetary valuePurposeFlutter Entertainment~$20B (~£18B) market capGamblingKalshi~$30M in VC fundingPrediction marketMetaculus~$6M in grantsForecasting sitePolymarket~$4M in VC fundingPrediction marketCultivate Labs$5M to $80M (estimated valuation)Forecasting platform as a serviceGood Judgment$3M to $50M (estimated valuation)Forecasting consultingManifold Markets~$2M in early stage fundingPlay money prediction marketInsight Prediction—Prediction marketTetlock’s research group$1M to $20M (estimated grant flow)Research groupPredictIt$0.5M to $5M (estimated value)Prediction marketEpoch$2M in grantsResearch group on AI progressSwift Centre$2M in grantsForecasts as a public goodQuantified Uncertainty Research Institute$780k in grantsSoftware and forecasting researchSamotsvety Forecasting$200k to $5M (estimated valuation)Forecasts as a public good, forecasting consultingCzech Priorities—Institutional decision-making using forecastingHedgehog Markets$3.5M in VC fundingPrediction market(crypto)INFERUncertain. Estimated $2M/year in grantsForecasting platformNathan Young$180k in grantsForecasting question creationSage$700k in grantsForecasting researchGlobal Guessing$330k in grantsForecasting journalismSocial Science Prediction PlatformAt least $838k in grantsForecasting in an academic contextAugur$60M crypto market capPrediction marketConfido$190k in grantsForecasting toolingHypermind—Play money prediction marketReplication Markets$150k in forecaster rewardsForecasting experimentPredictionBook—Prediction databaseGnosis$230M crypto market capSmart contracts, including for prediction marketsNameMonetary valuePurposeFlutter Entertainment~$20B (~£18B) market capGamblingKalshi~$30M in VC fundingPrediction marketMetaculus~$6M in grantsForecasting sitePolymarket~$4M in VC fundingPrediction marketCultivate Labs$5M to $80M (estimated valuation)Forecasting platform as a serviceGood Judgment$3M to $50M (estimated valuation)Forecasting consultingManifold Markets~$2M in early stage fundingPlay money prediction marketInsight Prediction—Prediction marketTetlock’s research group$1M to $20M (estimated grant flow)Research groupPredictIt$0.5M to $5M (estimated value)Prediction marketEpoch$2M in grantsResearch group on AI progressSwift Centre$2M in grantsForecasts as a public goodQuantified Uncertainty Research Institute$780k in grantsSoftware and forecasting researchSamotsvety Forecasting$200k to $5M (estimated valuation)Forecasts as a public good, forecasting consultingCzech Priorities—Institutional decision-making using forecastingHedgehog Markets$3.5M in VC fundingPrediction market (crypto)INFERUncertain.Estimated $2M/year in grantsForecasting platformNathan Young$180k in grantsForecasting question creationSage$700k in grantsForecasting researchGlobal Guessing$330k in grantsForecasting journal...]]>
NunoSempere https://forum.effectivealtruism.org/posts/ytHCpLbT6A4gxqH8s/tracking-the-money-flows-in-forecasting Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tracking the money flows in forecasting, published by NunoSempere on November 9, 2022 on The Effective Altruism Forum.This list of forecasting organizations includes:A brief description of each organizationA monetary estimate of value. This can serve as a rough but hard-to-fake proxy of value. Sometimes this is a flow (e.g., budget per year), and sometimes this is an estimate of total value (e.g., valuation).A more subjective, rough, and verbal estimate of how much value the organization produces.This started as a breadth first evaluation of the forecasting system, and to some extent it still is, i.e., it might be useful to get a rough sense of the ecosystem as a whole. After some discussion on whether very rough evaluations are worth it (a), people who prefer their evaluations to have a high threshold of quality and polish might want to either ignore this post or just pay attention to the monetary estimates.Summary tableNote that the monetary value column has different types of estimates.NameMonetary valuePurposeFlutter Entertainment~$20B (~£18B) market capGamblingKalshi~$30M in VC fundingPrediction marketMetaculus~$6M in grantsForecasting sitePolymarket~$4M in VC fundingPrediction marketCultivate Labs$5M to $80M (estimated valuation)Forecasting platform as a serviceGood Judgment$3M to $50M (estimated valuation)Forecasting consultingManifold Markets~$2M in early stage fundingPlay money prediction marketInsight Prediction—Prediction marketTetlock’s research group$1M to $20M (estimated grant flow)Research groupPredictIt$0.5M to $5M (estimated value)Prediction marketEpoch$2M in grantsResearch group on AI progressSwift Centre$2M in grantsForecasts as a public goodQuantified Uncertainty Research Institute$780k in grantsSoftware and forecasting researchSamotsvety Forecasting$200k to $5M (estimated valuation)Forecasts as a public good, forecasting consultingCzech Priorities—Institutional decision-making using forecastingHedgehog Markets$3.5M in VC fundingPrediction market(crypto)INFERUncertain. Estimated $2M/year in grantsForecasting platformNathan Young$180k in grantsForecasting question creationSage$700k in grantsForecasting researchGlobal Guessing$330k in grantsForecasting journalismSocial Science Prediction PlatformAt least $838k in grantsForecasting in an academic contextAugur$60M crypto market capPrediction marketConfido$190k in grantsForecasting toolingHypermind—Play money prediction marketReplication Markets$150k in forecaster rewardsForecasting experimentPredictionBook—Prediction databaseGnosis$230M crypto market capSmart contracts, including for prediction marketsNameMonetary valuePurposeFlutter Entertainment~$20B (~£18B) market capGamblingKalshi~$30M in VC fundingPrediction marketMetaculus~$6M in grantsForecasting sitePolymarket~$4M in VC fundingPrediction marketCultivate Labs$5M to $80M (estimated valuation)Forecasting platform as a serviceGood Judgment$3M to $50M (estimated valuation)Forecasting consultingManifold Markets~$2M in early stage fundingPlay money prediction marketInsight Prediction—Prediction marketTetlock’s research group$1M to $20M (estimated grant flow)Research groupPredictIt$0.5M to $5M (estimated value)Prediction marketEpoch$2M in grantsResearch group on AI progressSwift Centre$2M in grantsForecasts as a public goodQuantified Uncertainty Research Institute$780k in grantsSoftware and forecasting researchSamotsvety Forecasting$200k to $5M (estimated valuation)Forecasts as a public good, forecasting consultingCzech Priorities—Institutional decision-making using forecastingHedgehog Markets$3.5M in VC fundingPrediction market (crypto)INFERUncertain.Estimated $2M/year in grantsForecasting platformNathan Young$180k in grantsForecasting question creationSage$700k in grantsForecasting researchGlobal Guessing$330k in grantsForecasting journal...]]>
Wed, 09 Nov 2022 17:52:26 +0000 EA - Tracking the money flows in forecasting by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tracking the money flows in forecasting, published by NunoSempere on November 9, 2022 on The Effective Altruism Forum.This list of forecasting organizations includes:A brief description of each organizationA monetary estimate of value. This can serve as a rough but hard-to-fake proxy of value. Sometimes this is a flow (e.g., budget per year), and sometimes this is an estimate of total value (e.g., valuation).A more subjective, rough, and verbal estimate of how much value the organization produces.This started as a breadth first evaluation of the forecasting system, and to some extent it still is, i.e., it might be useful to get a rough sense of the ecosystem as a whole. After some discussion on whether very rough evaluations are worth it (a), people who prefer their evaluations to have a high threshold of quality and polish might want to either ignore this post or just pay attention to the monetary estimates.Summary tableNote that the monetary value column has different types of estimates.NameMonetary valuePurposeFlutter Entertainment~$20B (~£18B) market capGamblingKalshi~$30M in VC fundingPrediction marketMetaculus~$6M in grantsForecasting sitePolymarket~$4M in VC fundingPrediction marketCultivate Labs$5M to $80M (estimated valuation)Forecasting platform as a serviceGood Judgment$3M to $50M (estimated valuation)Forecasting consultingManifold Markets~$2M in early stage fundingPlay money prediction marketInsight Prediction—Prediction marketTetlock’s research group$1M to $20M (estimated grant flow)Research groupPredictIt$0.5M to $5M (estimated value)Prediction marketEpoch$2M in grantsResearch group on AI progressSwift Centre$2M in grantsForecasts as a public goodQuantified Uncertainty Research Institute$780k in grantsSoftware and forecasting researchSamotsvety Forecasting$200k to $5M (estimated valuation)Forecasts as a public good, forecasting consultingCzech Priorities—Institutional decision-making using forecastingHedgehog Markets$3.5M in VC fundingPrediction market(crypto)INFERUncertain. Estimated $2M/year in grantsForecasting platformNathan Young$180k in grantsForecasting question creationSage$700k in grantsForecasting researchGlobal Guessing$330k in grantsForecasting journalismSocial Science Prediction PlatformAt least $838k in grantsForecasting in an academic contextAugur$60M crypto market capPrediction marketConfido$190k in grantsForecasting toolingHypermind—Play money prediction marketReplication Markets$150k in forecaster rewardsForecasting experimentPredictionBook—Prediction databaseGnosis$230M crypto market capSmart contracts, including for prediction marketsNameMonetary valuePurposeFlutter Entertainment~$20B (~£18B) market capGamblingKalshi~$30M in VC fundingPrediction marketMetaculus~$6M in grantsForecasting sitePolymarket~$4M in VC fundingPrediction marketCultivate Labs$5M to $80M (estimated valuation)Forecasting platform as a serviceGood Judgment$3M to $50M (estimated valuation)Forecasting consultingManifold Markets~$2M in early stage fundingPlay money prediction marketInsight Prediction—Prediction marketTetlock’s research group$1M to $20M (estimated grant flow)Research groupPredictIt$0.5M to $5M (estimated value)Prediction marketEpoch$2M in grantsResearch group on AI progressSwift Centre$2M in grantsForecasts as a public goodQuantified Uncertainty Research Institute$780k in grantsSoftware and forecasting researchSamotsvety Forecasting$200k to $5M (estimated valuation)Forecasts as a public good, forecasting consultingCzech Priorities—Institutional decision-making using forecastingHedgehog Markets$3.5M in VC fundingPrediction market (crypto)INFERUncertain.Estimated $2M/year in grantsForecasting platformNathan Young$180k in grantsForecasting question creationSage$700k in grantsForecasting researchGlobal Guessing$330k in grantsForecasting journal...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tracking the money flows in forecasting, published by NunoSempere on November 9, 2022 on The Effective Altruism Forum.This list of forecasting organizations includes:A brief description of each organizationA monetary estimate of value. This can serve as a rough but hard-to-fake proxy of value. Sometimes this is a flow (e.g., budget per year), and sometimes this is an estimate of total value (e.g., valuation).A more subjective, rough, and verbal estimate of how much value the organization produces.This started as a breadth first evaluation of the forecasting system, and to some extent it still is, i.e., it might be useful to get a rough sense of the ecosystem as a whole. After some discussion on whether very rough evaluations are worth it (a), people who prefer their evaluations to have a high threshold of quality and polish might want to either ignore this post or just pay attention to the monetary estimates.Summary tableNote that the monetary value column has different types of estimates.NameMonetary valuePurposeFlutter Entertainment~$20B (~£18B) market capGamblingKalshi~$30M in VC fundingPrediction marketMetaculus~$6M in grantsForecasting sitePolymarket~$4M in VC fundingPrediction marketCultivate Labs$5M to $80M (estimated valuation)Forecasting platform as a serviceGood Judgment$3M to $50M (estimated valuation)Forecasting consultingManifold Markets~$2M in early stage fundingPlay money prediction marketInsight Prediction—Prediction marketTetlock’s research group$1M to $20M (estimated grant flow)Research groupPredictIt$0.5M to $5M (estimated value)Prediction marketEpoch$2M in grantsResearch group on AI progressSwift Centre$2M in grantsForecasts as a public goodQuantified Uncertainty Research Institute$780k in grantsSoftware and forecasting researchSamotsvety Forecasting$200k to $5M (estimated valuation)Forecasts as a public good, forecasting consultingCzech Priorities—Institutional decision-making using forecastingHedgehog Markets$3.5M in VC fundingPrediction market(crypto)INFERUncertain. Estimated $2M/year in grantsForecasting platformNathan Young$180k in grantsForecasting question creationSage$700k in grantsForecasting researchGlobal Guessing$330k in grantsForecasting journalismSocial Science Prediction PlatformAt least $838k in grantsForecasting in an academic contextAugur$60M crypto market capPrediction marketConfido$190k in grantsForecasting toolingHypermind—Play money prediction marketReplication Markets$150k in forecaster rewardsForecasting experimentPredictionBook—Prediction databaseGnosis$230M crypto market capSmart contracts, including for prediction marketsNameMonetary valuePurposeFlutter Entertainment~$20B (~£18B) market capGamblingKalshi~$30M in VC fundingPrediction marketMetaculus~$6M in grantsForecasting sitePolymarket~$4M in VC fundingPrediction marketCultivate Labs$5M to $80M (estimated valuation)Forecasting platform as a serviceGood Judgment$3M to $50M (estimated valuation)Forecasting consultingManifold Markets~$2M in early stage fundingPlay money prediction marketInsight Prediction—Prediction marketTetlock’s research group$1M to $20M (estimated grant flow)Research groupPredictIt$0.5M to $5M (estimated value)Prediction marketEpoch$2M in grantsResearch group on AI progressSwift Centre$2M in grantsForecasts as a public goodQuantified Uncertainty Research Institute$780k in grantsSoftware and forecasting researchSamotsvety Forecasting$200k to $5M (estimated valuation)Forecasts as a public good, forecasting consultingCzech Priorities—Institutional decision-making using forecastingHedgehog Markets$3.5M in VC fundingPrediction market (crypto)INFERUncertain.Estimated $2M/year in grantsForecasting platformNathan Young$180k in grantsForecasting question creationSage$700k in grantsForecasting researchGlobal Guessing$330k in grantsForecasting journal...]]>
NunoSempere https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 20:32 None full 3690
erh7FydqcmwiriYzv_EA EA - Google Scholar is now listing (some) EA Forum posts by PeterSlattery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Google Scholar is now listing (some) EA Forum posts, published by PeterSlattery on November 9, 2022 on The Effective Altruism Forum.A quick post because this seems to be a new development that I hadn't seen before. I was just checking Google Scholar for any recent behavioural science research to put in the EA Behavioural Science Newsletter and noticed that it now returns posts from the EA Forum. See below:I don't know why it doesn't retrieve the right titles. This may be something for the EA forum team to look into. I don't know whether all posts are being indexed or there are some criteria being used to filter between them. I don't understand why this post is the top result when searching for "effective altruism".Not sure what the implications are, but wanted to make people aware. From my perspective, Google Scholar listing posts from the EA Forum seems weakly positive. It probably means that more scientists and researchers will read and cite posts. If we can identify the criteria for being indexed by Google Scholar then maybe we should try to ensure that good posts meet those criteria?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
PeterSlattery https://forum.effectivealtruism.org/posts/erh7FydqcmwiriYzv/google-scholar-is-now-listing-some-ea-forum-posts Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Google Scholar is now listing (some) EA Forum posts, published by PeterSlattery on November 9, 2022 on The Effective Altruism Forum.A quick post because this seems to be a new development that I hadn't seen before. I was just checking Google Scholar for any recent behavioural science research to put in the EA Behavioural Science Newsletter and noticed that it now returns posts from the EA Forum. See below:I don't know why it doesn't retrieve the right titles. This may be something for the EA forum team to look into. I don't know whether all posts are being indexed or there are some criteria being used to filter between them. I don't understand why this post is the top result when searching for "effective altruism".Not sure what the implications are, but wanted to make people aware. From my perspective, Google Scholar listing posts from the EA Forum seems weakly positive. It probably means that more scientists and researchers will read and cite posts. If we can identify the criteria for being indexed by Google Scholar then maybe we should try to ensure that good posts meet those criteria?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 09 Nov 2022 03:57:10 +0000 EA - Google Scholar is now listing (some) EA Forum posts by PeterSlattery Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Google Scholar is now listing (some) EA Forum posts, published by PeterSlattery on November 9, 2022 on The Effective Altruism Forum.A quick post because this seems to be a new development that I hadn't seen before. I was just checking Google Scholar for any recent behavioural science research to put in the EA Behavioural Science Newsletter and noticed that it now returns posts from the EA Forum. See below:I don't know why it doesn't retrieve the right titles. This may be something for the EA forum team to look into. I don't know whether all posts are being indexed or there are some criteria being used to filter between them. I don't understand why this post is the top result when searching for "effective altruism".Not sure what the implications are, but wanted to make people aware. From my perspective, Google Scholar listing posts from the EA Forum seems weakly positive. It probably means that more scientists and researchers will read and cite posts. If we can identify the criteria for being indexed by Google Scholar then maybe we should try to ensure that good posts meet those criteria?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Google Scholar is now listing (some) EA Forum posts, published by PeterSlattery on November 9, 2022 on The Effective Altruism Forum.A quick post because this seems to be a new development that I hadn't seen before. I was just checking Google Scholar for any recent behavioural science research to put in the EA Behavioural Science Newsletter and noticed that it now returns posts from the EA Forum. See below:I don't know why it doesn't retrieve the right titles. This may be something for the EA forum team to look into. I don't know whether all posts are being indexed or there are some criteria being used to filter between them. I don't understand why this post is the top result when searching for "effective altruism".Not sure what the implications are, but wanted to make people aware. From my perspective, Google Scholar listing posts from the EA Forum seems weakly positive. It probably means that more scientists and researchers will read and cite posts. If we can identify the criteria for being indexed by Google Scholar then maybe we should try to ensure that good posts meet those criteria?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
PeterSlattery https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:19 None full 3685
ghS52n8vCpnnuJSge_EA EA - Some advice on independent research by mariushobbhahn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some advice on independent research, published by mariushobbhahn on November 8, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
mariushobbhahn https://forum.effectivealtruism.org/posts/ghS52n8vCpnnuJSge/some-advice-on-independent-research Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some advice on independent research, published by mariushobbhahn on November 8, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 09 Nov 2022 03:07:31 +0000 EA - Some advice on independent research by mariushobbhahn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some advice on independent research, published by mariushobbhahn on November 8, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some advice on independent research, published by mariushobbhahn on November 8, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
mariushobbhahn https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:26 None full 3688
GoaKjhQGqAgk6sKKT_EA EA - Doing Ops in EA FAQ: before you join (2022) by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Doing Ops in EA FAQ: before you join (2022), published by Vaidehi Agarwalla on November 8, 2022 on The Effective Altruism Forum.Last edited: November 7th 2022This guide was written by the Pineapple Operations team with inputs from several operations staff at various organizations to provide an overview of considerations for entering operations work at EA orgs. We think it'll be especially useful for people new to EA and/or operations who are considering working in this space. We think it could also be valuable for people looking to hire for these roles to understand the perspectives of candidates but it has less directly relevant advice (we may write up something in the future).We've chosen to publish this now because it seems like it could be useful to many people. We may make updates in the future and will keep a change log in the appendix.1. What is operations in EA?TL;DR it’s come to mean “everything that supports the core work and allows other people to focus on the core work” (this is not normally what operations means outside of EA, administration may be a more appropriate term).Operations in EA is a very broad area that can mean a lot of different things. This guide focuses on most operations roles (excluding PA/ExA roles and operations leadership roles). There are a few roles that we’ve seen open positions for in the last year (2022). Note that many roles will include several items from this list:Operations Manager (most often at small / new organisations)Implementing and maintaining general systems & processesManaging Accounting, Payroll & LegalFundraisingMarketing & CommunicationsPA tasks for the teamOther ad-hoc projectsOffice & Community ManagerPeople Ops/HRRecruiting CoordinatorSpecial Projects Associate / Project Manager (usually helping incubate new projects or do project management)Events Associate (events planning & execution)Logistics/Supply ChainSome roles will also have operations staff doing direct generalist work - such as research or program development - as needed, and generalist roles at smaller orgs will also involve operations work. Generalists can do many different things, well outside the ops domain - could be research, sales, and even having inputs on strategy. Generalist work that involves research will be often very different in nature to operations where task switching is common, or external facing work vs back-office admin tasks which don’t require much human contact. If you are hired as an operations person, keep in mind that this will likely be your top priority - it’s literally what keeps the lights on.Read the job description for each role carefully before applying - roles with the same title might have very different responsibilities, and be clear about what proportion of your time could actually be spent on generalist work if that’s mentioned in the JD (and that orgs may not be able to always predict this ahead of time)."In my current EA role, which is generalist but more explicitly ops-focused, my responsibilities have included ops, communications, and research, and shift based on my comparative advantage relative to the rest of the team. In this type of role it’s important to have a clear sense of priorities and boundaries, because your work could easily encompass. everything. It’s also important to learn to communicate clearly across functions and outside of your org, and to be flexible and excited about learning new things!"Emily Thai, Operations Manager @ Giving GreenOperations is often recommended as a good fit for community builders or people with experience organising local groups. There is typically a fair amount of overlap between the two roles - community building can involve many tasks that often fall under ops in EA e.g. events, implementing systems & processes, managing people/community me...]]>
Vaidehi Agarwalla https://forum.effectivealtruism.org/posts/GoaKjhQGqAgk6sKKT/doing-ops-in-ea-faq-before-you-join-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Doing Ops in EA FAQ: before you join (2022), published by Vaidehi Agarwalla on November 8, 2022 on The Effective Altruism Forum.Last edited: November 7th 2022This guide was written by the Pineapple Operations team with inputs from several operations staff at various organizations to provide an overview of considerations for entering operations work at EA orgs. We think it'll be especially useful for people new to EA and/or operations who are considering working in this space. We think it could also be valuable for people looking to hire for these roles to understand the perspectives of candidates but it has less directly relevant advice (we may write up something in the future).We've chosen to publish this now because it seems like it could be useful to many people. We may make updates in the future and will keep a change log in the appendix.1. What is operations in EA?TL;DR it’s come to mean “everything that supports the core work and allows other people to focus on the core work” (this is not normally what operations means outside of EA, administration may be a more appropriate term).Operations in EA is a very broad area that can mean a lot of different things. This guide focuses on most operations roles (excluding PA/ExA roles and operations leadership roles). There are a few roles that we’ve seen open positions for in the last year (2022). Note that many roles will include several items from this list:Operations Manager (most often at small / new organisations)Implementing and maintaining general systems & processesManaging Accounting, Payroll & LegalFundraisingMarketing & CommunicationsPA tasks for the teamOther ad-hoc projectsOffice & Community ManagerPeople Ops/HRRecruiting CoordinatorSpecial Projects Associate / Project Manager (usually helping incubate new projects or do project management)Events Associate (events planning & execution)Logistics/Supply ChainSome roles will also have operations staff doing direct generalist work - such as research or program development - as needed, and generalist roles at smaller orgs will also involve operations work. Generalists can do many different things, well outside the ops domain - could be research, sales, and even having inputs on strategy. Generalist work that involves research will be often very different in nature to operations where task switching is common, or external facing work vs back-office admin tasks which don’t require much human contact. If you are hired as an operations person, keep in mind that this will likely be your top priority - it’s literally what keeps the lights on.Read the job description for each role carefully before applying - roles with the same title might have very different responsibilities, and be clear about what proportion of your time could actually be spent on generalist work if that’s mentioned in the JD (and that orgs may not be able to always predict this ahead of time)."In my current EA role, which is generalist but more explicitly ops-focused, my responsibilities have included ops, communications, and research, and shift based on my comparative advantage relative to the rest of the team. In this type of role it’s important to have a clear sense of priorities and boundaries, because your work could easily encompass. everything. It’s also important to learn to communicate clearly across functions and outside of your org, and to be flexible and excited about learning new things!"Emily Thai, Operations Manager @ Giving GreenOperations is often recommended as a good fit for community builders or people with experience organising local groups. There is typically a fair amount of overlap between the two roles - community building can involve many tasks that often fall under ops in EA e.g. events, implementing systems & processes, managing people/community me...]]>
Wed, 09 Nov 2022 00:41:59 +0000 EA - Doing Ops in EA FAQ: before you join (2022) by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Doing Ops in EA FAQ: before you join (2022), published by Vaidehi Agarwalla on November 8, 2022 on The Effective Altruism Forum.Last edited: November 7th 2022This guide was written by the Pineapple Operations team with inputs from several operations staff at various organizations to provide an overview of considerations for entering operations work at EA orgs. We think it'll be especially useful for people new to EA and/or operations who are considering working in this space. We think it could also be valuable for people looking to hire for these roles to understand the perspectives of candidates but it has less directly relevant advice (we may write up something in the future).We've chosen to publish this now because it seems like it could be useful to many people. We may make updates in the future and will keep a change log in the appendix.1. What is operations in EA?TL;DR it’s come to mean “everything that supports the core work and allows other people to focus on the core work” (this is not normally what operations means outside of EA, administration may be a more appropriate term).Operations in EA is a very broad area that can mean a lot of different things. This guide focuses on most operations roles (excluding PA/ExA roles and operations leadership roles). There are a few roles that we’ve seen open positions for in the last year (2022). Note that many roles will include several items from this list:Operations Manager (most often at small / new organisations)Implementing and maintaining general systems & processesManaging Accounting, Payroll & LegalFundraisingMarketing & CommunicationsPA tasks for the teamOther ad-hoc projectsOffice & Community ManagerPeople Ops/HRRecruiting CoordinatorSpecial Projects Associate / Project Manager (usually helping incubate new projects or do project management)Events Associate (events planning & execution)Logistics/Supply ChainSome roles will also have operations staff doing direct generalist work - such as research or program development - as needed, and generalist roles at smaller orgs will also involve operations work. Generalists can do many different things, well outside the ops domain - could be research, sales, and even having inputs on strategy. Generalist work that involves research will be often very different in nature to operations where task switching is common, or external facing work vs back-office admin tasks which don’t require much human contact. If you are hired as an operations person, keep in mind that this will likely be your top priority - it’s literally what keeps the lights on.Read the job description for each role carefully before applying - roles with the same title might have very different responsibilities, and be clear about what proportion of your time could actually be spent on generalist work if that’s mentioned in the JD (and that orgs may not be able to always predict this ahead of time)."In my current EA role, which is generalist but more explicitly ops-focused, my responsibilities have included ops, communications, and research, and shift based on my comparative advantage relative to the rest of the team. In this type of role it’s important to have a clear sense of priorities and boundaries, because your work could easily encompass. everything. It’s also important to learn to communicate clearly across functions and outside of your org, and to be flexible and excited about learning new things!"Emily Thai, Operations Manager @ Giving GreenOperations is often recommended as a good fit for community builders or people with experience organising local groups. There is typically a fair amount of overlap between the two roles - community building can involve many tasks that often fall under ops in EA e.g. events, implementing systems & processes, managing people/community me...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Doing Ops in EA FAQ: before you join (2022), published by Vaidehi Agarwalla on November 8, 2022 on The Effective Altruism Forum.Last edited: November 7th 2022This guide was written by the Pineapple Operations team with inputs from several operations staff at various organizations to provide an overview of considerations for entering operations work at EA orgs. We think it'll be especially useful for people new to EA and/or operations who are considering working in this space. We think it could also be valuable for people looking to hire for these roles to understand the perspectives of candidates but it has less directly relevant advice (we may write up something in the future).We've chosen to publish this now because it seems like it could be useful to many people. We may make updates in the future and will keep a change log in the appendix.1. What is operations in EA?TL;DR it’s come to mean “everything that supports the core work and allows other people to focus on the core work” (this is not normally what operations means outside of EA, administration may be a more appropriate term).Operations in EA is a very broad area that can mean a lot of different things. This guide focuses on most operations roles (excluding PA/ExA roles and operations leadership roles). There are a few roles that we’ve seen open positions for in the last year (2022). Note that many roles will include several items from this list:Operations Manager (most often at small / new organisations)Implementing and maintaining general systems & processesManaging Accounting, Payroll & LegalFundraisingMarketing & CommunicationsPA tasks for the teamOther ad-hoc projectsOffice & Community ManagerPeople Ops/HRRecruiting CoordinatorSpecial Projects Associate / Project Manager (usually helping incubate new projects or do project management)Events Associate (events planning & execution)Logistics/Supply ChainSome roles will also have operations staff doing direct generalist work - such as research or program development - as needed, and generalist roles at smaller orgs will also involve operations work. Generalists can do many different things, well outside the ops domain - could be research, sales, and even having inputs on strategy. Generalist work that involves research will be often very different in nature to operations where task switching is common, or external facing work vs back-office admin tasks which don’t require much human contact. If you are hired as an operations person, keep in mind that this will likely be your top priority - it’s literally what keeps the lights on.Read the job description for each role carefully before applying - roles with the same title might have very different responsibilities, and be clear about what proportion of your time could actually be spent on generalist work if that’s mentioned in the JD (and that orgs may not be able to always predict this ahead of time)."In my current EA role, which is generalist but more explicitly ops-focused, my responsibilities have included ops, communications, and research, and shift based on my comparative advantage relative to the rest of the team. In this type of role it’s important to have a clear sense of priorities and boundaries, because your work could easily encompass. everything. It’s also important to learn to communicate clearly across functions and outside of your org, and to be flexible and excited about learning new things!"Emily Thai, Operations Manager @ Giving GreenOperations is often recommended as a good fit for community builders or people with experience organising local groups. There is typically a fair amount of overlap between the two roles - community building can involve many tasks that often fall under ops in EA e.g. events, implementing systems & processes, managing people/community me...]]>
Vaidehi Agarwalla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 30:49 None full 3686
tm3RMfxetLsmcwftQ_EA EA - EA and LW Forums Weekly Summary (31st Oct - 6th Nov 22') by Zoe Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & LW Forums Weekly Summary (31st Oct - 6th Nov 22'), published by Zoe Williams on November 8, 2022 on The Effective Altruism Forum.Supported by Rethink PrioritiesThis is part of a weekly series summarizing the top (40+ karma) posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology.If you'd like to receive these summaries via email, you can subscribe here.Podcast version: prefer your summaries in podcast form? A big thanks to Coleman Snell for producing these! Subscribe on your favorite podcast app by searching for 'Effective Altruism Forum Podcast'.Top / Curated ReadingsDesigned for those without the time to read all the summaries. Everything here is also within the relevant sections later on so feel free to skip if you’re planning to read it all.The Challenges with Measuring the Impact of Lobbyingby Animal Ask, Ren SpringleaSummary of a report by Animal Ask assessing if academic literature contains quantitative estimates that could be useful in gauging the counterfactual impact of lobbying. This requires breaking policy success into a baseline rate and counterfactual increase from lobbying.They found the answer is no - lobbying literature contains many well-known weaknesses, a systematic review found a strange result (narrow range around 50% success, possibly because there are often lobbyists on both sides of an issue), and only one study attempted to identify counterfactual impact (and only for one policy).The authors instead suggest using expert judgment and superforecasters.What matters to shrimps? Factors affecting shrimp welfare in aquacultureby Lucas Lewit-Mendes, Aaron BoddyReport by Shrimp Welfare Project on the importance of various factors for the welfare of farmed shrimps. Welfare can be measured via biological markers (eg. chemicals associated with immune or stress responses), behavior (eg. avoidance, aggression), physical condition and survival rates.The strongest evidence for harm to shrimp welfare was with eyestalk ablation, disease, stunning / slaughter, and insufficient dissolved oxygen or high un-ionized ammonia in the water. The authors have very high confidence that small to medium improvements in these would reduce harm to shrimp.An Introduction to the Moral Weight Projectby Bob FischerIn 2020, Rethink Priorities published the Moral Weight Series—a collection of five reports addressing questions around interspecies cause prioritization. This research was expanded in May 2021 - Oct 2022, and this is the first in a sequence of posts that will overview that research.Author’s summary (lightly edited): “Our objective is to provide moral weights for 11 farmed species, under assumptions of utilitarianism, hedonism, valence symmetry, and unitarianism. The moral weight is the capacity for welfare - calculated as the welfare range (the difference between the best and worst welfare states the individual can realize at a time) × lifespan. Given welfare ranges, we can convert welfare improvements into DALY-equivalents averted, making cross-species cost-effectiveness analyses possible.”EA ForumPhilosophy and MethodologiesAn Introduction to the Moral Weight Projectby Bob FischerIn 2020, Rethink Priorities published the Moral Weight Series—a collection of five reports addressing questions around interspecies cause prioritization. This research was expanded in May 2021 - Oct 2022, and this is the first in a sequence of posts that will overview that research.Author’s summary (lightly edited): “Our objective is to provide moral weights for 11 farmed species, under assumptions of utilitarianism, hedonism, valence symmetry, and unitarianism. The moral weight is the capacity for welfare - calculated as the welfare range (the difference between the best and worst ...]]>
Zoe Williams https://forum.effectivealtruism.org/posts/tm3RMfxetLsmcwftQ/ea-and-lw-forums-weekly-summary-31st-oct-6th-nov-22 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & LW Forums Weekly Summary (31st Oct - 6th Nov 22'), published by Zoe Williams on November 8, 2022 on The Effective Altruism Forum.Supported by Rethink PrioritiesThis is part of a weekly series summarizing the top (40+ karma) posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology.If you'd like to receive these summaries via email, you can subscribe here.Podcast version: prefer your summaries in podcast form? A big thanks to Coleman Snell for producing these! Subscribe on your favorite podcast app by searching for 'Effective Altruism Forum Podcast'.Top / Curated ReadingsDesigned for those without the time to read all the summaries. Everything here is also within the relevant sections later on so feel free to skip if you’re planning to read it all.The Challenges with Measuring the Impact of Lobbyingby Animal Ask, Ren SpringleaSummary of a report by Animal Ask assessing if academic literature contains quantitative estimates that could be useful in gauging the counterfactual impact of lobbying. This requires breaking policy success into a baseline rate and counterfactual increase from lobbying.They found the answer is no - lobbying literature contains many well-known weaknesses, a systematic review found a strange result (narrow range around 50% success, possibly because there are often lobbyists on both sides of an issue), and only one study attempted to identify counterfactual impact (and only for one policy).The authors instead suggest using expert judgment and superforecasters.What matters to shrimps? Factors affecting shrimp welfare in aquacultureby Lucas Lewit-Mendes, Aaron BoddyReport by Shrimp Welfare Project on the importance of various factors for the welfare of farmed shrimps. Welfare can be measured via biological markers (eg. chemicals associated with immune or stress responses), behavior (eg. avoidance, aggression), physical condition and survival rates.The strongest evidence for harm to shrimp welfare was with eyestalk ablation, disease, stunning / slaughter, and insufficient dissolved oxygen or high un-ionized ammonia in the water. The authors have very high confidence that small to medium improvements in these would reduce harm to shrimp.An Introduction to the Moral Weight Projectby Bob FischerIn 2020, Rethink Priorities published the Moral Weight Series—a collection of five reports addressing questions around interspecies cause prioritization. This research was expanded in May 2021 - Oct 2022, and this is the first in a sequence of posts that will overview that research.Author’s summary (lightly edited): “Our objective is to provide moral weights for 11 farmed species, under assumptions of utilitarianism, hedonism, valence symmetry, and unitarianism. The moral weight is the capacity for welfare - calculated as the welfare range (the difference between the best and worst welfare states the individual can realize at a time) × lifespan. Given welfare ranges, we can convert welfare improvements into DALY-equivalents averted, making cross-species cost-effectiveness analyses possible.”EA ForumPhilosophy and MethodologiesAn Introduction to the Moral Weight Projectby Bob FischerIn 2020, Rethink Priorities published the Moral Weight Series—a collection of five reports addressing questions around interspecies cause prioritization. This research was expanded in May 2021 - Oct 2022, and this is the first in a sequence of posts that will overview that research.Author’s summary (lightly edited): “Our objective is to provide moral weights for 11 farmed species, under assumptions of utilitarianism, hedonism, valence symmetry, and unitarianism. The moral weight is the capacity for welfare - calculated as the welfare range (the difference between the best and worst ...]]>
Tue, 08 Nov 2022 21:13:02 +0000 EA - EA and LW Forums Weekly Summary (31st Oct - 6th Nov 22') by Zoe Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & LW Forums Weekly Summary (31st Oct - 6th Nov 22'), published by Zoe Williams on November 8, 2022 on The Effective Altruism Forum.Supported by Rethink PrioritiesThis is part of a weekly series summarizing the top (40+ karma) posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology.If you'd like to receive these summaries via email, you can subscribe here.Podcast version: prefer your summaries in podcast form? A big thanks to Coleman Snell for producing these! Subscribe on your favorite podcast app by searching for 'Effective Altruism Forum Podcast'.Top / Curated ReadingsDesigned for those without the time to read all the summaries. Everything here is also within the relevant sections later on so feel free to skip if you’re planning to read it all.The Challenges with Measuring the Impact of Lobbyingby Animal Ask, Ren SpringleaSummary of a report by Animal Ask assessing if academic literature contains quantitative estimates that could be useful in gauging the counterfactual impact of lobbying. This requires breaking policy success into a baseline rate and counterfactual increase from lobbying.They found the answer is no - lobbying literature contains many well-known weaknesses, a systematic review found a strange result (narrow range around 50% success, possibly because there are often lobbyists on both sides of an issue), and only one study attempted to identify counterfactual impact (and only for one policy).The authors instead suggest using expert judgment and superforecasters.What matters to shrimps? Factors affecting shrimp welfare in aquacultureby Lucas Lewit-Mendes, Aaron BoddyReport by Shrimp Welfare Project on the importance of various factors for the welfare of farmed shrimps. Welfare can be measured via biological markers (eg. chemicals associated with immune or stress responses), behavior (eg. avoidance, aggression), physical condition and survival rates.The strongest evidence for harm to shrimp welfare was with eyestalk ablation, disease, stunning / slaughter, and insufficient dissolved oxygen or high un-ionized ammonia in the water. The authors have very high confidence that small to medium improvements in these would reduce harm to shrimp.An Introduction to the Moral Weight Projectby Bob FischerIn 2020, Rethink Priorities published the Moral Weight Series—a collection of five reports addressing questions around interspecies cause prioritization. This research was expanded in May 2021 - Oct 2022, and this is the first in a sequence of posts that will overview that research.Author’s summary (lightly edited): “Our objective is to provide moral weights for 11 farmed species, under assumptions of utilitarianism, hedonism, valence symmetry, and unitarianism. The moral weight is the capacity for welfare - calculated as the welfare range (the difference between the best and worst welfare states the individual can realize at a time) × lifespan. Given welfare ranges, we can convert welfare improvements into DALY-equivalents averted, making cross-species cost-effectiveness analyses possible.”EA ForumPhilosophy and MethodologiesAn Introduction to the Moral Weight Projectby Bob FischerIn 2020, Rethink Priorities published the Moral Weight Series—a collection of five reports addressing questions around interspecies cause prioritization. This research was expanded in May 2021 - Oct 2022, and this is the first in a sequence of posts that will overview that research.Author’s summary (lightly edited): “Our objective is to provide moral weights for 11 farmed species, under assumptions of utilitarianism, hedonism, valence symmetry, and unitarianism. The moral weight is the capacity for welfare - calculated as the welfare range (the difference between the best and worst ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA & LW Forums Weekly Summary (31st Oct - 6th Nov 22'), published by Zoe Williams on November 8, 2022 on The Effective Altruism Forum.Supported by Rethink PrioritiesThis is part of a weekly series summarizing the top (40+ karma) posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology.If you'd like to receive these summaries via email, you can subscribe here.Podcast version: prefer your summaries in podcast form? A big thanks to Coleman Snell for producing these! Subscribe on your favorite podcast app by searching for 'Effective Altruism Forum Podcast'.Top / Curated ReadingsDesigned for those without the time to read all the summaries. Everything here is also within the relevant sections later on so feel free to skip if you’re planning to read it all.The Challenges with Measuring the Impact of Lobbyingby Animal Ask, Ren SpringleaSummary of a report by Animal Ask assessing if academic literature contains quantitative estimates that could be useful in gauging the counterfactual impact of lobbying. This requires breaking policy success into a baseline rate and counterfactual increase from lobbying.They found the answer is no - lobbying literature contains many well-known weaknesses, a systematic review found a strange result (narrow range around 50% success, possibly because there are often lobbyists on both sides of an issue), and only one study attempted to identify counterfactual impact (and only for one policy).The authors instead suggest using expert judgment and superforecasters.What matters to shrimps? Factors affecting shrimp welfare in aquacultureby Lucas Lewit-Mendes, Aaron BoddyReport by Shrimp Welfare Project on the importance of various factors for the welfare of farmed shrimps. Welfare can be measured via biological markers (eg. chemicals associated with immune or stress responses), behavior (eg. avoidance, aggression), physical condition and survival rates.The strongest evidence for harm to shrimp welfare was with eyestalk ablation, disease, stunning / slaughter, and insufficient dissolved oxygen or high un-ionized ammonia in the water. The authors have very high confidence that small to medium improvements in these would reduce harm to shrimp.An Introduction to the Moral Weight Projectby Bob FischerIn 2020, Rethink Priorities published the Moral Weight Series—a collection of five reports addressing questions around interspecies cause prioritization. This research was expanded in May 2021 - Oct 2022, and this is the first in a sequence of posts that will overview that research.Author’s summary (lightly edited): “Our objective is to provide moral weights for 11 farmed species, under assumptions of utilitarianism, hedonism, valence symmetry, and unitarianism. The moral weight is the capacity for welfare - calculated as the welfare range (the difference between the best and worst welfare states the individual can realize at a time) × lifespan. Given welfare ranges, we can convert welfare improvements into DALY-equivalents averted, making cross-species cost-effectiveness analyses possible.”EA ForumPhilosophy and MethodologiesAn Introduction to the Moral Weight Projectby Bob FischerIn 2020, Rethink Priorities published the Moral Weight Series—a collection of five reports addressing questions around interspecies cause prioritization. This research was expanded in May 2021 - Oct 2022, and this is the first in a sequence of posts that will overview that research.Author’s summary (lightly edited): “Our objective is to provide moral weights for 11 farmed species, under assumptions of utilitarianism, hedonism, valence symmetry, and unitarianism. The moral weight is the capacity for welfare - calculated as the welfare range (the difference between the best and worst ...]]>
Zoe Williams https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 34:16 None full 3683
yjGye7Q2jRG3jNfi2_EA EA - FTX will probably be sold at a steep discount. What we know and some forecasts on what will happen next by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX will probably be sold at a steep discount. What we know and some forecasts on what will happen next, published by Nathan Young on November 8, 2022 on The Effective Altruism Forum.Tl;dr:The purpose of the document is to add clarity. It was written quickly. If you think it is net harmful, comment saying so, I'm pretty happy to delete. How ever I (Nathan) largely stand by its contents.Binance, a competitor sold a large stake of FTT, FTX’s native token and implied that FTX was at risk by mentioning a recent crash (LUNA)This started a run on the bank where depositors attempted to get their money out. FTX paused withdrawals for a while and seemed to be strugglingSBF tweeted that FTX.com (not FTX US or Alameda) was beginning the process of being sold to Binance in order to safeguard depositor assetsFTX.com comprises ~39% of Sam's assets and will be sold at a steep discount, affecting the value of FTX US and AlamedaAll depositors got their deposits back, which is goodThis likely means there will be a lot fewer assets for the effective causes, which is a tragedyOur sympathy lies with SBF and the teamSince most of the knowledge we want is around, prediction markets will probably be pretty accurate. There are some around a range of outcomes such as the sale value and FTX Future Fund regranting (scroll down)We should wait and see what happensPlease flag any issues and we'll try and correct themLonger versionThere are three key entities here (prices according to Bloomberg, so probably wrong):FTX (The worldwide business) that composes about 39%FTX.US (FTX’s US arm) a crypto exchange that composes about 13% of SBF’s wealthAlameda, a hedge fund which composes 46%Alameda was SBF’s original hedge fund and made markets for FTX. The behaviour of the two was correlated, and Alameda held large positions in FTT, FTX’s token. Coindesk reported Alameda were in trouble, and some internal documents were leaked. Alameda CEO, Carlone Ellison rebutted.Binance left/was pushed out of an early funding round of FTX and were paid in FTT, FTX’s native token. It seems like there was bad blood. This week Binance said they were selling their FTT and referenced LUNA a coin that recently crashed. It is common for projects in crypto to fail, so when there is a sense they will, people withdraw their money rapidly. This started a run on FTX.Earlier today SBF announced that, FTX.com, the non-US business had been agreed in principle to be sold. SBF talks about that here. It will likely be sold at a steep discount. This is tragic and will likely lead to fewer resources for effective causes.Cz (Binance)’s Twitter thread:There are some large unanswered questions both for EA and FTX. This reduction in resources will mean many fewer are dedicated to the worlds top problems as may of us see them. To me that is an enormous tragedy. There are questions about funding which will affect some jobs and we've tried to create some clear signals below. Finally I'm sure this has been a terrible few days for Sam and his team and will likely continue to.Relevant forecastsHere is a section of relevant forecasts to try and give people a picture of what might happen.Will CZ/Binance acquire ftx.com? If it doesn’t, I don’t know what will happen.How much will the deal imply that FTX.com is worth. Since we don’t know how much of FTX got sold, we can only guess the price implied by the sale. More than $3bn would be bad but better than expected. Less than $.5 would be catastrophic.If FTX.com is bought by Binance, will the deal imply FTX.com is worth more than $.5Bn?If FTX.com is bought by Binance, will the deal imply FTX.com is worth more than $3Bn? 66%The other key question is what happens to the FTX Foundation. How much will it spend next year? 66%Will the FTX Future Fund spend more than $300mn in 2023? ...]]>
Nathan Young https://forum.effectivealtruism.org/posts/yjGye7Q2jRG3jNfi2/ftx-will-probably-be-sold-at-a-steep-discount-what-we-know Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX will probably be sold at a steep discount. What we know and some forecasts on what will happen next, published by Nathan Young on November 8, 2022 on The Effective Altruism Forum.Tl;dr:The purpose of the document is to add clarity. It was written quickly. If you think it is net harmful, comment saying so, I'm pretty happy to delete. How ever I (Nathan) largely stand by its contents.Binance, a competitor sold a large stake of FTT, FTX’s native token and implied that FTX was at risk by mentioning a recent crash (LUNA)This started a run on the bank where depositors attempted to get their money out. FTX paused withdrawals for a while and seemed to be strugglingSBF tweeted that FTX.com (not FTX US or Alameda) was beginning the process of being sold to Binance in order to safeguard depositor assetsFTX.com comprises ~39% of Sam's assets and will be sold at a steep discount, affecting the value of FTX US and AlamedaAll depositors got their deposits back, which is goodThis likely means there will be a lot fewer assets for the effective causes, which is a tragedyOur sympathy lies with SBF and the teamSince most of the knowledge we want is around, prediction markets will probably be pretty accurate. There are some around a range of outcomes such as the sale value and FTX Future Fund regranting (scroll down)We should wait and see what happensPlease flag any issues and we'll try and correct themLonger versionThere are three key entities here (prices according to Bloomberg, so probably wrong):FTX (The worldwide business) that composes about 39%FTX.US (FTX’s US arm) a crypto exchange that composes about 13% of SBF’s wealthAlameda, a hedge fund which composes 46%Alameda was SBF’s original hedge fund and made markets for FTX. The behaviour of the two was correlated, and Alameda held large positions in FTT, FTX’s token. Coindesk reported Alameda were in trouble, and some internal documents were leaked. Alameda CEO, Carlone Ellison rebutted.Binance left/was pushed out of an early funding round of FTX and were paid in FTT, FTX’s native token. It seems like there was bad blood. This week Binance said they were selling their FTT and referenced LUNA a coin that recently crashed. It is common for projects in crypto to fail, so when there is a sense they will, people withdraw their money rapidly. This started a run on FTX.Earlier today SBF announced that, FTX.com, the non-US business had been agreed in principle to be sold. SBF talks about that here. It will likely be sold at a steep discount. This is tragic and will likely lead to fewer resources for effective causes.Cz (Binance)’s Twitter thread:There are some large unanswered questions both for EA and FTX. This reduction in resources will mean many fewer are dedicated to the worlds top problems as may of us see them. To me that is an enormous tragedy. There are questions about funding which will affect some jobs and we've tried to create some clear signals below. Finally I'm sure this has been a terrible few days for Sam and his team and will likely continue to.Relevant forecastsHere is a section of relevant forecasts to try and give people a picture of what might happen.Will CZ/Binance acquire ftx.com? If it doesn’t, I don’t know what will happen.How much will the deal imply that FTX.com is worth. Since we don’t know how much of FTX got sold, we can only guess the price implied by the sale. More than $3bn would be bad but better than expected. Less than $.5 would be catastrophic.If FTX.com is bought by Binance, will the deal imply FTX.com is worth more than $.5Bn?If FTX.com is bought by Binance, will the deal imply FTX.com is worth more than $3Bn? 66%The other key question is what happens to the FTX Foundation. How much will it spend next year? 66%Will the FTX Future Fund spend more than $300mn in 2023? ...]]>
Tue, 08 Nov 2022 19:33:40 +0000 EA - FTX will probably be sold at a steep discount. What we know and some forecasts on what will happen next by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX will probably be sold at a steep discount. What we know and some forecasts on what will happen next, published by Nathan Young on November 8, 2022 on The Effective Altruism Forum.Tl;dr:The purpose of the document is to add clarity. It was written quickly. If you think it is net harmful, comment saying so, I'm pretty happy to delete. How ever I (Nathan) largely stand by its contents.Binance, a competitor sold a large stake of FTT, FTX’s native token and implied that FTX was at risk by mentioning a recent crash (LUNA)This started a run on the bank where depositors attempted to get their money out. FTX paused withdrawals for a while and seemed to be strugglingSBF tweeted that FTX.com (not FTX US or Alameda) was beginning the process of being sold to Binance in order to safeguard depositor assetsFTX.com comprises ~39% of Sam's assets and will be sold at a steep discount, affecting the value of FTX US and AlamedaAll depositors got their deposits back, which is goodThis likely means there will be a lot fewer assets for the effective causes, which is a tragedyOur sympathy lies with SBF and the teamSince most of the knowledge we want is around, prediction markets will probably be pretty accurate. There are some around a range of outcomes such as the sale value and FTX Future Fund regranting (scroll down)We should wait and see what happensPlease flag any issues and we'll try and correct themLonger versionThere are three key entities here (prices according to Bloomberg, so probably wrong):FTX (The worldwide business) that composes about 39%FTX.US (FTX’s US arm) a crypto exchange that composes about 13% of SBF’s wealthAlameda, a hedge fund which composes 46%Alameda was SBF’s original hedge fund and made markets for FTX. The behaviour of the two was correlated, and Alameda held large positions in FTT, FTX’s token. Coindesk reported Alameda were in trouble, and some internal documents were leaked. Alameda CEO, Carlone Ellison rebutted.Binance left/was pushed out of an early funding round of FTX and were paid in FTT, FTX’s native token. It seems like there was bad blood. This week Binance said they were selling their FTT and referenced LUNA a coin that recently crashed. It is common for projects in crypto to fail, so when there is a sense they will, people withdraw their money rapidly. This started a run on FTX.Earlier today SBF announced that, FTX.com, the non-US business had been agreed in principle to be sold. SBF talks about that here. It will likely be sold at a steep discount. This is tragic and will likely lead to fewer resources for effective causes.Cz (Binance)’s Twitter thread:There are some large unanswered questions both for EA and FTX. This reduction in resources will mean many fewer are dedicated to the worlds top problems as may of us see them. To me that is an enormous tragedy. There are questions about funding which will affect some jobs and we've tried to create some clear signals below. Finally I'm sure this has been a terrible few days for Sam and his team and will likely continue to.Relevant forecastsHere is a section of relevant forecasts to try and give people a picture of what might happen.Will CZ/Binance acquire ftx.com? If it doesn’t, I don’t know what will happen.How much will the deal imply that FTX.com is worth. Since we don’t know how much of FTX got sold, we can only guess the price implied by the sale. More than $3bn would be bad but better than expected. Less than $.5 would be catastrophic.If FTX.com is bought by Binance, will the deal imply FTX.com is worth more than $.5Bn?If FTX.com is bought by Binance, will the deal imply FTX.com is worth more than $3Bn? 66%The other key question is what happens to the FTX Foundation. How much will it spend next year? 66%Will the FTX Future Fund spend more than $300mn in 2023? ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX will probably be sold at a steep discount. What we know and some forecasts on what will happen next, published by Nathan Young on November 8, 2022 on The Effective Altruism Forum.Tl;dr:The purpose of the document is to add clarity. It was written quickly. If you think it is net harmful, comment saying so, I'm pretty happy to delete. How ever I (Nathan) largely stand by its contents.Binance, a competitor sold a large stake of FTT, FTX’s native token and implied that FTX was at risk by mentioning a recent crash (LUNA)This started a run on the bank where depositors attempted to get their money out. FTX paused withdrawals for a while and seemed to be strugglingSBF tweeted that FTX.com (not FTX US or Alameda) was beginning the process of being sold to Binance in order to safeguard depositor assetsFTX.com comprises ~39% of Sam's assets and will be sold at a steep discount, affecting the value of FTX US and AlamedaAll depositors got their deposits back, which is goodThis likely means there will be a lot fewer assets for the effective causes, which is a tragedyOur sympathy lies with SBF and the teamSince most of the knowledge we want is around, prediction markets will probably be pretty accurate. There are some around a range of outcomes such as the sale value and FTX Future Fund regranting (scroll down)We should wait and see what happensPlease flag any issues and we'll try and correct themLonger versionThere are three key entities here (prices according to Bloomberg, so probably wrong):FTX (The worldwide business) that composes about 39%FTX.US (FTX’s US arm) a crypto exchange that composes about 13% of SBF’s wealthAlameda, a hedge fund which composes 46%Alameda was SBF’s original hedge fund and made markets for FTX. The behaviour of the two was correlated, and Alameda held large positions in FTT, FTX’s token. Coindesk reported Alameda were in trouble, and some internal documents were leaked. Alameda CEO, Carlone Ellison rebutted.Binance left/was pushed out of an early funding round of FTX and were paid in FTT, FTX’s native token. It seems like there was bad blood. This week Binance said they were selling their FTT and referenced LUNA a coin that recently crashed. It is common for projects in crypto to fail, so when there is a sense they will, people withdraw their money rapidly. This started a run on FTX.Earlier today SBF announced that, FTX.com, the non-US business had been agreed in principle to be sold. SBF talks about that here. It will likely be sold at a steep discount. This is tragic and will likely lead to fewer resources for effective causes.Cz (Binance)’s Twitter thread:There are some large unanswered questions both for EA and FTX. This reduction in resources will mean many fewer are dedicated to the worlds top problems as may of us see them. To me that is an enormous tragedy. There are questions about funding which will affect some jobs and we've tried to create some clear signals below. Finally I'm sure this has been a terrible few days for Sam and his team and will likely continue to.Relevant forecastsHere is a section of relevant forecasts to try and give people a picture of what might happen.Will CZ/Binance acquire ftx.com? If it doesn’t, I don’t know what will happen.How much will the deal imply that FTX.com is worth. Since we don’t know how much of FTX got sold, we can only guess the price implied by the sale. More than $3bn would be bad but better than expected. Less than $.5 would be catastrophic.If FTX.com is bought by Binance, will the deal imply FTX.com is worth more than $.5Bn?If FTX.com is bought by Binance, will the deal imply FTX.com is worth more than $3Bn? 66%The other key question is what happens to the FTX Foundation. How much will it spend next year? 66%Will the FTX Future Fund spend more than $300mn in 2023? ...]]>
Nathan Young https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:51 None full 3681
tdLRvYHpfYjimwhyL_EA EA - FTX.com bought out by Binance by Charles He Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX.com bought out by Binance, published by Charles He on November 8, 2022 on The Effective Altruism Forum.See:This is probably related to liquidity issues / solvency issues.Sketch of timeline:A CoinDesk article comes out claiming that much of FTX and Alameda assets are just its own tokens ("TFF" or "SOL") and there is a circular relationship in assets between the two entities.Aggressive/hostile, but crisp analysis here:As a hostile action, another exchange / leader ("CZ") publicly announced it was liquidating FTT token.FTT falls by about 26% on Nov 7.There is probably further pressure on FTX, see HN discussion of FTX withdrawals:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Charles He https://forum.effectivealtruism.org/posts/tdLRvYHpfYjimwhyL/ftx-com-bought-out-by-binance Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX.com bought out by Binance, published by Charles He on November 8, 2022 on The Effective Altruism Forum.See:This is probably related to liquidity issues / solvency issues.Sketch of timeline:A CoinDesk article comes out claiming that much of FTX and Alameda assets are just its own tokens ("TFF" or "SOL") and there is a circular relationship in assets between the two entities.Aggressive/hostile, but crisp analysis here:As a hostile action, another exchange / leader ("CZ") publicly announced it was liquidating FTT token.FTT falls by about 26% on Nov 7.There is probably further pressure on FTX, see HN discussion of FTX withdrawals:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 08 Nov 2022 17:50:03 +0000 EA - FTX.com bought out by Binance by Charles He Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX.com bought out by Binance, published by Charles He on November 8, 2022 on The Effective Altruism Forum.See:This is probably related to liquidity issues / solvency issues.Sketch of timeline:A CoinDesk article comes out claiming that much of FTX and Alameda assets are just its own tokens ("TFF" or "SOL") and there is a circular relationship in assets between the two entities.Aggressive/hostile, but crisp analysis here:As a hostile action, another exchange / leader ("CZ") publicly announced it was liquidating FTT token.FTT falls by about 26% on Nov 7.There is probably further pressure on FTX, see HN discussion of FTX withdrawals:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX.com bought out by Binance, published by Charles He on November 8, 2022 on The Effective Altruism Forum.See:This is probably related to liquidity issues / solvency issues.Sketch of timeline:A CoinDesk article comes out claiming that much of FTX and Alameda assets are just its own tokens ("TFF" or "SOL") and there is a circular relationship in assets between the two entities.Aggressive/hostile, but crisp analysis here:As a hostile action, another exchange / leader ("CZ") publicly announced it was liquidating FTT token.FTT falls by about 26% on Nov 7.There is probably further pressure on FTX, see HN discussion of FTX withdrawals:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Charles He https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:01 None full 3682
AuhkDHEuLNxqx9rgZ_EA EA - A new database of nanotechnology strategy resources by Ben Snodin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new database of nanotechnology strategy resources, published by Ben Snodin on November 5, 2022 on The Effective Altruism Forum.Target audience: people who are considering trying out nanotechnology strategy research, or who want to learn more about nanotechnology (strategy)-related things.Marie Davidsen Buhl and I have made a database of resources relevant for nanotechnology strategy research. It contains over 40 articles, sorted by relevance for people new to nanotechnology strategy research.Additional context:Why nanotechnology strategy research might be important from a longtermist EA perspective: Advanced nanotechnology might arrive in the next couple of decades (my wild guess: there’s a 1-2% chance in the absence of transformative AI) and could have very positive or very negative implications for existential risk. There has been relatively little high-quality thinking on how to make the arrival of advanced nanotechnology go well, and I think there should be more work in this area (very tentatively, I suggest we want 2-3 people spending at least 50% of their time on this by 3 years from now).See also my EA Forum post on this area from earlier this year, and the 80000 Hours cause profile.Marie and I created this resources database as part of our roles at Rethink Priorities.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben Snodin https://forum.effectivealtruism.org/posts/AuhkDHEuLNxqx9rgZ/a-new-database-of-nanotechnology-strategy-resources Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new database of nanotechnology strategy resources, published by Ben Snodin on November 5, 2022 on The Effective Altruism Forum.Target audience: people who are considering trying out nanotechnology strategy research, or who want to learn more about nanotechnology (strategy)-related things.Marie Davidsen Buhl and I have made a database of resources relevant for nanotechnology strategy research. It contains over 40 articles, sorted by relevance for people new to nanotechnology strategy research.Additional context:Why nanotechnology strategy research might be important from a longtermist EA perspective: Advanced nanotechnology might arrive in the next couple of decades (my wild guess: there’s a 1-2% chance in the absence of transformative AI) and could have very positive or very negative implications for existential risk. There has been relatively little high-quality thinking on how to make the arrival of advanced nanotechnology go well, and I think there should be more work in this area (very tentatively, I suggest we want 2-3 people spending at least 50% of their time on this by 3 years from now).See also my EA Forum post on this area from earlier this year, and the 80000 Hours cause profile.Marie and I created this resources database as part of our roles at Rethink Priorities.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 07 Nov 2022 19:50:56 +0000 EA - A new database of nanotechnology strategy resources by Ben Snodin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new database of nanotechnology strategy resources, published by Ben Snodin on November 5, 2022 on The Effective Altruism Forum.Target audience: people who are considering trying out nanotechnology strategy research, or who want to learn more about nanotechnology (strategy)-related things.Marie Davidsen Buhl and I have made a database of resources relevant for nanotechnology strategy research. It contains over 40 articles, sorted by relevance for people new to nanotechnology strategy research.Additional context:Why nanotechnology strategy research might be important from a longtermist EA perspective: Advanced nanotechnology might arrive in the next couple of decades (my wild guess: there’s a 1-2% chance in the absence of transformative AI) and could have very positive or very negative implications for existential risk. There has been relatively little high-quality thinking on how to make the arrival of advanced nanotechnology go well, and I think there should be more work in this area (very tentatively, I suggest we want 2-3 people spending at least 50% of their time on this by 3 years from now).See also my EA Forum post on this area from earlier this year, and the 80000 Hours cause profile.Marie and I created this resources database as part of our roles at Rethink Priorities.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A new database of nanotechnology strategy resources, published by Ben Snodin on November 5, 2022 on The Effective Altruism Forum.Target audience: people who are considering trying out nanotechnology strategy research, or who want to learn more about nanotechnology (strategy)-related things.Marie Davidsen Buhl and I have made a database of resources relevant for nanotechnology strategy research. It contains over 40 articles, sorted by relevance for people new to nanotechnology strategy research.Additional context:Why nanotechnology strategy research might be important from a longtermist EA perspective: Advanced nanotechnology might arrive in the next couple of decades (my wild guess: there’s a 1-2% chance in the absence of transformative AI) and could have very positive or very negative implications for existential risk. There has been relatively little high-quality thinking on how to make the arrival of advanced nanotechnology go well, and I think there should be more work in this area (very tentatively, I suggest we want 2-3 people spending at least 50% of their time on this by 3 years from now).See also my EA Forum post on this area from earlier this year, and the 80000 Hours cause profile.Marie and I created this resources database as part of our roles at Rethink Priorities.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ben Snodin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:32 None full 3676
AiSAAkvZzWKkt4uBB_EA EA - [Links post] Economists Chris Blattman and Noah Smith on China, Taiwan, and the likelihood of war by Stephen Clare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Links post] Economists Chris Blattman and Noah Smith on China, Taiwan, and the likelihood of war, published by Stephen Clare on November 7, 2022 on The Effective Altruism Forum.Two prominent economists and writers, Chris Blattman and Noah Smith, both recently and independently published blog posts on the likelihood of war between China and the US over Taiwan. They are not optimistic. Both seem to think that war is relatively likely. Unfortunately, neither offers a quantitative forecast, but I think it's interesting and useful for serious researchers to write about their views on such important questions.Chris's post is called "The prospects for war with China: Why I see a serious chance of World War III in the next decade". Noah's is "Why I think an invasion of Taiwan probably means WW3". They complement each other well, with Chris's post arguing that war seems likely and Noah's post arguing that the US would probably get involved in the event of a war.Chris's post on why bargaining may break downChris applies the bargaining framework he used throughout his book Why We Fight to show why war is becoming more likely as China continues to grow its economy and strengthen its military. Eventually, the status quo of de facto Taiwanese independence will slip outside of its "compromise range". When this happens, Chinese leaders may decide going to war to try to bring about a different outcome may be preferable, despite the costs.Still, war is risky and negative-sum. A negotiated settlement should, in principle, be preferable for all parties. Chris suggests that negotiations may fail because ideological principles (e.g. democracy vs autocracy) can be non-negotiable, China's crackdown on Hong Kong harmed its reputation for sticking to negotiated settlements, and Xi Jinping is increasingly unchecked and isolated in his leadership.Noah's post on why the US would probably get drawn into the warNoah also applies economic thinking to inform his geopolitical analysis, using game theory to predict which war scenarios seem more likely based on the interests of the actors involved.Noah considers about how important various factors, like national pride, reputation, and military costs, seem to Chinese and American leaders. He then tries to weigh up how important these costs and benefits seem relative to each other to work out the payoffs to each actor is in each of the four possible outcomes of the game. Then he assigns probabilities to work out expected payoffs, and figures out the equilibrium through backwards induction. Long story short, the equilibrium solution under these assumptions is for China to invade and attack the US to maximize its chances of victory, nearly assuring the outbreak of a major great power war.Of course, there are many ways this analysis could be wrong, and Noah touches on them at the end of the post. For example, given escalation risk, perhaps the payoffs in any "US fights" outcome are hugely negative for both parties, making it very unlikely that they would both choose to fight. Or, perhaps losing a fight over Taiwan would be so negative for China's leadership that it's just too risky (the expected payoff is negative) to undertake. Or, perhaps the leadership of each country is misinformed about the likelihood of each outcome, leading to miscalculation, bad decision-making, and a sub-optimal (i.e. very costly) outcome.Why they're worth readingBoth posts analyze the US-China-Taiwan situation from a specific, economic framework. How insightful you find them will depend somewhat on how much resemblance you think this kind of rational analysis of weighing expected costs and benefits bears to actual foreign policy decision-making. But I think it's great to see serious thinkers trying to make progress on important questions like "Will World War III happen...]]>
Stephen Clare https://forum.effectivealtruism.org/posts/AiSAAkvZzWKkt4uBB/links-post-economists-chris-blattman-and-noah-smith-on-china Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Links post] Economists Chris Blattman and Noah Smith on China, Taiwan, and the likelihood of war, published by Stephen Clare on November 7, 2022 on The Effective Altruism Forum.Two prominent economists and writers, Chris Blattman and Noah Smith, both recently and independently published blog posts on the likelihood of war between China and the US over Taiwan. They are not optimistic. Both seem to think that war is relatively likely. Unfortunately, neither offers a quantitative forecast, but I think it's interesting and useful for serious researchers to write about their views on such important questions.Chris's post is called "The prospects for war with China: Why I see a serious chance of World War III in the next decade". Noah's is "Why I think an invasion of Taiwan probably means WW3". They complement each other well, with Chris's post arguing that war seems likely and Noah's post arguing that the US would probably get involved in the event of a war.Chris's post on why bargaining may break downChris applies the bargaining framework he used throughout his book Why We Fight to show why war is becoming more likely as China continues to grow its economy and strengthen its military. Eventually, the status quo of de facto Taiwanese independence will slip outside of its "compromise range". When this happens, Chinese leaders may decide going to war to try to bring about a different outcome may be preferable, despite the costs.Still, war is risky and negative-sum. A negotiated settlement should, in principle, be preferable for all parties. Chris suggests that negotiations may fail because ideological principles (e.g. democracy vs autocracy) can be non-negotiable, China's crackdown on Hong Kong harmed its reputation for sticking to negotiated settlements, and Xi Jinping is increasingly unchecked and isolated in his leadership.Noah's post on why the US would probably get drawn into the warNoah also applies economic thinking to inform his geopolitical analysis, using game theory to predict which war scenarios seem more likely based on the interests of the actors involved.Noah considers about how important various factors, like national pride, reputation, and military costs, seem to Chinese and American leaders. He then tries to weigh up how important these costs and benefits seem relative to each other to work out the payoffs to each actor is in each of the four possible outcomes of the game. Then he assigns probabilities to work out expected payoffs, and figures out the equilibrium through backwards induction. Long story short, the equilibrium solution under these assumptions is for China to invade and attack the US to maximize its chances of victory, nearly assuring the outbreak of a major great power war.Of course, there are many ways this analysis could be wrong, and Noah touches on them at the end of the post. For example, given escalation risk, perhaps the payoffs in any "US fights" outcome are hugely negative for both parties, making it very unlikely that they would both choose to fight. Or, perhaps losing a fight over Taiwan would be so negative for China's leadership that it's just too risky (the expected payoff is negative) to undertake. Or, perhaps the leadership of each country is misinformed about the likelihood of each outcome, leading to miscalculation, bad decision-making, and a sub-optimal (i.e. very costly) outcome.Why they're worth readingBoth posts analyze the US-China-Taiwan situation from a specific, economic framework. How insightful you find them will depend somewhat on how much resemblance you think this kind of rational analysis of weighing expected costs and benefits bears to actual foreign policy decision-making. But I think it's great to see serious thinkers trying to make progress on important questions like "Will World War III happen...]]>
Mon, 07 Nov 2022 16:43:27 +0000 EA - [Links post] Economists Chris Blattman and Noah Smith on China, Taiwan, and the likelihood of war by Stephen Clare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Links post] Economists Chris Blattman and Noah Smith on China, Taiwan, and the likelihood of war, published by Stephen Clare on November 7, 2022 on The Effective Altruism Forum.Two prominent economists and writers, Chris Blattman and Noah Smith, both recently and independently published blog posts on the likelihood of war between China and the US over Taiwan. They are not optimistic. Both seem to think that war is relatively likely. Unfortunately, neither offers a quantitative forecast, but I think it's interesting and useful for serious researchers to write about their views on such important questions.Chris's post is called "The prospects for war with China: Why I see a serious chance of World War III in the next decade". Noah's is "Why I think an invasion of Taiwan probably means WW3". They complement each other well, with Chris's post arguing that war seems likely and Noah's post arguing that the US would probably get involved in the event of a war.Chris's post on why bargaining may break downChris applies the bargaining framework he used throughout his book Why We Fight to show why war is becoming more likely as China continues to grow its economy and strengthen its military. Eventually, the status quo of de facto Taiwanese independence will slip outside of its "compromise range". When this happens, Chinese leaders may decide going to war to try to bring about a different outcome may be preferable, despite the costs.Still, war is risky and negative-sum. A negotiated settlement should, in principle, be preferable for all parties. Chris suggests that negotiations may fail because ideological principles (e.g. democracy vs autocracy) can be non-negotiable, China's crackdown on Hong Kong harmed its reputation for sticking to negotiated settlements, and Xi Jinping is increasingly unchecked and isolated in his leadership.Noah's post on why the US would probably get drawn into the warNoah also applies economic thinking to inform his geopolitical analysis, using game theory to predict which war scenarios seem more likely based on the interests of the actors involved.Noah considers about how important various factors, like national pride, reputation, and military costs, seem to Chinese and American leaders. He then tries to weigh up how important these costs and benefits seem relative to each other to work out the payoffs to each actor is in each of the four possible outcomes of the game. Then he assigns probabilities to work out expected payoffs, and figures out the equilibrium through backwards induction. Long story short, the equilibrium solution under these assumptions is for China to invade and attack the US to maximize its chances of victory, nearly assuring the outbreak of a major great power war.Of course, there are many ways this analysis could be wrong, and Noah touches on them at the end of the post. For example, given escalation risk, perhaps the payoffs in any "US fights" outcome are hugely negative for both parties, making it very unlikely that they would both choose to fight. Or, perhaps losing a fight over Taiwan would be so negative for China's leadership that it's just too risky (the expected payoff is negative) to undertake. Or, perhaps the leadership of each country is misinformed about the likelihood of each outcome, leading to miscalculation, bad decision-making, and a sub-optimal (i.e. very costly) outcome.Why they're worth readingBoth posts analyze the US-China-Taiwan situation from a specific, economic framework. How insightful you find them will depend somewhat on how much resemblance you think this kind of rational analysis of weighing expected costs and benefits bears to actual foreign policy decision-making. But I think it's great to see serious thinkers trying to make progress on important questions like "Will World War III happen...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Links post] Economists Chris Blattman and Noah Smith on China, Taiwan, and the likelihood of war, published by Stephen Clare on November 7, 2022 on The Effective Altruism Forum.Two prominent economists and writers, Chris Blattman and Noah Smith, both recently and independently published blog posts on the likelihood of war between China and the US over Taiwan. They are not optimistic. Both seem to think that war is relatively likely. Unfortunately, neither offers a quantitative forecast, but I think it's interesting and useful for serious researchers to write about their views on such important questions.Chris's post is called "The prospects for war with China: Why I see a serious chance of World War III in the next decade". Noah's is "Why I think an invasion of Taiwan probably means WW3". They complement each other well, with Chris's post arguing that war seems likely and Noah's post arguing that the US would probably get involved in the event of a war.Chris's post on why bargaining may break downChris applies the bargaining framework he used throughout his book Why We Fight to show why war is becoming more likely as China continues to grow its economy and strengthen its military. Eventually, the status quo of de facto Taiwanese independence will slip outside of its "compromise range". When this happens, Chinese leaders may decide going to war to try to bring about a different outcome may be preferable, despite the costs.Still, war is risky and negative-sum. A negotiated settlement should, in principle, be preferable for all parties. Chris suggests that negotiations may fail because ideological principles (e.g. democracy vs autocracy) can be non-negotiable, China's crackdown on Hong Kong harmed its reputation for sticking to negotiated settlements, and Xi Jinping is increasingly unchecked and isolated in his leadership.Noah's post on why the US would probably get drawn into the warNoah also applies economic thinking to inform his geopolitical analysis, using game theory to predict which war scenarios seem more likely based on the interests of the actors involved.Noah considers about how important various factors, like national pride, reputation, and military costs, seem to Chinese and American leaders. He then tries to weigh up how important these costs and benefits seem relative to each other to work out the payoffs to each actor is in each of the four possible outcomes of the game. Then he assigns probabilities to work out expected payoffs, and figures out the equilibrium through backwards induction. Long story short, the equilibrium solution under these assumptions is for China to invade and attack the US to maximize its chances of victory, nearly assuring the outbreak of a major great power war.Of course, there are many ways this analysis could be wrong, and Noah touches on them at the end of the post. For example, given escalation risk, perhaps the payoffs in any "US fights" outcome are hugely negative for both parties, making it very unlikely that they would both choose to fight. Or, perhaps losing a fight over Taiwan would be so negative for China's leadership that it's just too risky (the expected payoff is negative) to undertake. Or, perhaps the leadership of each country is misinformed about the likelihood of each outcome, leading to miscalculation, bad decision-making, and a sub-optimal (i.e. very costly) outcome.Why they're worth readingBoth posts analyze the US-China-Taiwan situation from a specific, economic framework. How insightful you find them will depend somewhat on how much resemblance you think this kind of rational analysis of weighing expected costs and benefits bears to actual foreign policy decision-making. But I think it's great to see serious thinkers trying to make progress on important questions like "Will World War III happen...]]>
Stephen Clare https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:48 None full 3674
RGwJviLRnrSojwYwu_EA EA - Opportunities that surprised us during our Clearer Thinking Regrants program by spencerg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Opportunities that surprised us during our Clearer Thinking Regrants program, published by spencerg on November 7, 2022 on The Effective Altruism Forum.This post was written by a subset of Clearer Thinking team members, and not all of the team members involved with the regranting necessarily agree with everything said here.As part of the Clearer Thinking Regrants program, we evaluated over 630 project proposals and ended up with 37 finalists. In the many hours that we spent evaluating these finalists, we learned some things that surprised us and saw many potential opportunities to help the world that we hadn’t considered before. Of course, if you’re an expert in any of these areas, what surprised us non-experts may not surprise you.Our aim in this post is to share object-level information that we did not know before the regranting program and that we think readers may find interesting or useful. In several cases, we are sharing the fact that specific organizations or projects have significantly more room for funding than we would have guessed, even after accounting for the outcomes of our regrants program. We hope that readers find this information useful. By highlighting some organizations that have room for more funding, we hope that this will lead to impactful giving opportunities.Summary:We were surprised that there hasn't been more work to quantify the risks from large-magnitude volcanic eruptions (considering the impact that such eruptions could have).We were surprised that the Rethink Priorities Surveys Team has significant room for funding for their own project ideas (since most of their work is research/consulting for existing orgs).We hadn’t previously considered the extent to which the use of boiling as a water treatment method in low- and middle-income countries contributes to indoor air pollution (and, therefore, to potentially negative health outcomes).We were surprised to learn that the Happier Lives Institute (HLI) has substantial room for additional funding.We hadn’t considered that there may be significant new ideas about how to reduce the chance of nuclear war (given the age of the field).Some of our team hadn’t realized how difficult it can be to tell whether there’s a vitamin deficiency in a population, and none of us had realized that point-of-care biosensors might be feasible to roll out in the foreseeable future.We hadn’t realized that 1Day Sooner conducts activities outside of their human challenge trial work, and that these activities have significant room for funding.1. We were surprised that there hasn't been more work to quantify the risks from large-magnitude volcanic eruptions (considering the impact that such eruptions could have).Toby Ord’s best guess (discussed in The Precipice) is that there is a ~1 in 10,000 chance of an existential catastrophe via supervolcano eruption within the next 100 years. He also thinks that most of the risks associated with volcanic eruptions would relate to them causing the collapse of civilization and/or reducing the chance of recovering from that collapse, and that further research quantifying these risks would be helpful.Volcanologists Mike Cassidy and Lara Mani have argued that there has been insufficient attention paid to quantifying these risks, as well as to identifying where large-magnitude volcanic eruptions are most likely to occur and which measures should be taken to mitigate these risks. It was surprising to us that these questions have not already been investigated in greater detail, considering the impact that such eruptions could have (and that improved quantification may be quite low-cost if carried out by the right team).2. We were surprised that the Rethink Priorities Surveys Team has significant room for funding for their own project ideas (since most of their work is re...]]>
spencerg https://forum.effectivealtruism.org/posts/RGwJviLRnrSojwYwu/opportunities-that-surprised-us-during-our-clearer-thinking Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Opportunities that surprised us during our Clearer Thinking Regrants program, published by spencerg on November 7, 2022 on The Effective Altruism Forum.This post was written by a subset of Clearer Thinking team members, and not all of the team members involved with the regranting necessarily agree with everything said here.As part of the Clearer Thinking Regrants program, we evaluated over 630 project proposals and ended up with 37 finalists. In the many hours that we spent evaluating these finalists, we learned some things that surprised us and saw many potential opportunities to help the world that we hadn’t considered before. Of course, if you’re an expert in any of these areas, what surprised us non-experts may not surprise you.Our aim in this post is to share object-level information that we did not know before the regranting program and that we think readers may find interesting or useful. In several cases, we are sharing the fact that specific organizations or projects have significantly more room for funding than we would have guessed, even after accounting for the outcomes of our regrants program. We hope that readers find this information useful. By highlighting some organizations that have room for more funding, we hope that this will lead to impactful giving opportunities.Summary:We were surprised that there hasn't been more work to quantify the risks from large-magnitude volcanic eruptions (considering the impact that such eruptions could have).We were surprised that the Rethink Priorities Surveys Team has significant room for funding for their own project ideas (since most of their work is research/consulting for existing orgs).We hadn’t previously considered the extent to which the use of boiling as a water treatment method in low- and middle-income countries contributes to indoor air pollution (and, therefore, to potentially negative health outcomes).We were surprised to learn that the Happier Lives Institute (HLI) has substantial room for additional funding.We hadn’t considered that there may be significant new ideas about how to reduce the chance of nuclear war (given the age of the field).Some of our team hadn’t realized how difficult it can be to tell whether there’s a vitamin deficiency in a population, and none of us had realized that point-of-care biosensors might be feasible to roll out in the foreseeable future.We hadn’t realized that 1Day Sooner conducts activities outside of their human challenge trial work, and that these activities have significant room for funding.1. We were surprised that there hasn't been more work to quantify the risks from large-magnitude volcanic eruptions (considering the impact that such eruptions could have).Toby Ord’s best guess (discussed in The Precipice) is that there is a ~1 in 10,000 chance of an existential catastrophe via supervolcano eruption within the next 100 years. He also thinks that most of the risks associated with volcanic eruptions would relate to them causing the collapse of civilization and/or reducing the chance of recovering from that collapse, and that further research quantifying these risks would be helpful.Volcanologists Mike Cassidy and Lara Mani have argued that there has been insufficient attention paid to quantifying these risks, as well as to identifying where large-magnitude volcanic eruptions are most likely to occur and which measures should be taken to mitigate these risks. It was surprising to us that these questions have not already been investigated in greater detail, considering the impact that such eruptions could have (and that improved quantification may be quite low-cost if carried out by the right team).2. We were surprised that the Rethink Priorities Surveys Team has significant room for funding for their own project ideas (since most of their work is re...]]>
Mon, 07 Nov 2022 13:55:31 +0000 EA - Opportunities that surprised us during our Clearer Thinking Regrants program by spencerg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Opportunities that surprised us during our Clearer Thinking Regrants program, published by spencerg on November 7, 2022 on The Effective Altruism Forum.This post was written by a subset of Clearer Thinking team members, and not all of the team members involved with the regranting necessarily agree with everything said here.As part of the Clearer Thinking Regrants program, we evaluated over 630 project proposals and ended up with 37 finalists. In the many hours that we spent evaluating these finalists, we learned some things that surprised us and saw many potential opportunities to help the world that we hadn’t considered before. Of course, if you’re an expert in any of these areas, what surprised us non-experts may not surprise you.Our aim in this post is to share object-level information that we did not know before the regranting program and that we think readers may find interesting or useful. In several cases, we are sharing the fact that specific organizations or projects have significantly more room for funding than we would have guessed, even after accounting for the outcomes of our regrants program. We hope that readers find this information useful. By highlighting some organizations that have room for more funding, we hope that this will lead to impactful giving opportunities.Summary:We were surprised that there hasn't been more work to quantify the risks from large-magnitude volcanic eruptions (considering the impact that such eruptions could have).We were surprised that the Rethink Priorities Surveys Team has significant room for funding for their own project ideas (since most of their work is research/consulting for existing orgs).We hadn’t previously considered the extent to which the use of boiling as a water treatment method in low- and middle-income countries contributes to indoor air pollution (and, therefore, to potentially negative health outcomes).We were surprised to learn that the Happier Lives Institute (HLI) has substantial room for additional funding.We hadn’t considered that there may be significant new ideas about how to reduce the chance of nuclear war (given the age of the field).Some of our team hadn’t realized how difficult it can be to tell whether there’s a vitamin deficiency in a population, and none of us had realized that point-of-care biosensors might be feasible to roll out in the foreseeable future.We hadn’t realized that 1Day Sooner conducts activities outside of their human challenge trial work, and that these activities have significant room for funding.1. We were surprised that there hasn't been more work to quantify the risks from large-magnitude volcanic eruptions (considering the impact that such eruptions could have).Toby Ord’s best guess (discussed in The Precipice) is that there is a ~1 in 10,000 chance of an existential catastrophe via supervolcano eruption within the next 100 years. He also thinks that most of the risks associated with volcanic eruptions would relate to them causing the collapse of civilization and/or reducing the chance of recovering from that collapse, and that further research quantifying these risks would be helpful.Volcanologists Mike Cassidy and Lara Mani have argued that there has been insufficient attention paid to quantifying these risks, as well as to identifying where large-magnitude volcanic eruptions are most likely to occur and which measures should be taken to mitigate these risks. It was surprising to us that these questions have not already been investigated in greater detail, considering the impact that such eruptions could have (and that improved quantification may be quite low-cost if carried out by the right team).2. We were surprised that the Rethink Priorities Surveys Team has significant room for funding for their own project ideas (since most of their work is re...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Opportunities that surprised us during our Clearer Thinking Regrants program, published by spencerg on November 7, 2022 on The Effective Altruism Forum.This post was written by a subset of Clearer Thinking team members, and not all of the team members involved with the regranting necessarily agree with everything said here.As part of the Clearer Thinking Regrants program, we evaluated over 630 project proposals and ended up with 37 finalists. In the many hours that we spent evaluating these finalists, we learned some things that surprised us and saw many potential opportunities to help the world that we hadn’t considered before. Of course, if you’re an expert in any of these areas, what surprised us non-experts may not surprise you.Our aim in this post is to share object-level information that we did not know before the regranting program and that we think readers may find interesting or useful. In several cases, we are sharing the fact that specific organizations or projects have significantly more room for funding than we would have guessed, even after accounting for the outcomes of our regrants program. We hope that readers find this information useful. By highlighting some organizations that have room for more funding, we hope that this will lead to impactful giving opportunities.Summary:We were surprised that there hasn't been more work to quantify the risks from large-magnitude volcanic eruptions (considering the impact that such eruptions could have).We were surprised that the Rethink Priorities Surveys Team has significant room for funding for their own project ideas (since most of their work is research/consulting for existing orgs).We hadn’t previously considered the extent to which the use of boiling as a water treatment method in low- and middle-income countries contributes to indoor air pollution (and, therefore, to potentially negative health outcomes).We were surprised to learn that the Happier Lives Institute (HLI) has substantial room for additional funding.We hadn’t considered that there may be significant new ideas about how to reduce the chance of nuclear war (given the age of the field).Some of our team hadn’t realized how difficult it can be to tell whether there’s a vitamin deficiency in a population, and none of us had realized that point-of-care biosensors might be feasible to roll out in the foreseeable future.We hadn’t realized that 1Day Sooner conducts activities outside of their human challenge trial work, and that these activities have significant room for funding.1. We were surprised that there hasn't been more work to quantify the risks from large-magnitude volcanic eruptions (considering the impact that such eruptions could have).Toby Ord’s best guess (discussed in The Precipice) is that there is a ~1 in 10,000 chance of an existential catastrophe via supervolcano eruption within the next 100 years. He also thinks that most of the risks associated with volcanic eruptions would relate to them causing the collapse of civilization and/or reducing the chance of recovering from that collapse, and that further research quantifying these risks would be helpful.Volcanologists Mike Cassidy and Lara Mani have argued that there has been insufficient attention paid to quantifying these risks, as well as to identifying where large-magnitude volcanic eruptions are most likely to occur and which measures should be taken to mitigate these risks. It was surprising to us that these questions have not already been investigated in greater detail, considering the impact that such eruptions could have (and that improved quantification may be quite low-cost if carried out by the right team).2. We were surprised that the Rethink Priorities Surveys Team has significant room for funding for their own project ideas (since most of their work is re...]]>
spencerg https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 15:03 None full 3673
tnSg6o7crcHFLc395_EA EA - The Welfare Range Table by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Welfare Range Table, published by Bob Fischer on November 7, 2022 on The Effective Altruism Forum.Key TakeawaysOur objective: estimate the welfare ranges of 11 farmed species.Given hedonism, an individual’s welfare range is the difference between the welfare level associated with the most intense positively valenced state that the individual can realize and the welfare level associated with the most intense negatively valenced state that the individual can realize.Given some prominent theories about the functions of valenced states, we identified over 90 empirical proxies that might provide evidence of variation in the potential intensities of those states.There are many unknowns across many species.It’s rare to have evidence that animals lack a given trait.We know less about the presence or absence of traits as we move from terrestrial vertebrates to most invertebrates.Many of the traits about which we know the least are affective traits.We do have information about some significant traits for many animals.IntroductionThis is the second post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to provide an overview of the Welfare Range Table, which records the results of a literature review covering over 90 empirical traits across 11 farmed species.MotivationsIf we want to do as much good as possible, we have to compare all the ways of doing good—including ways that involve helping members of different species. The Moral Weight Project’s assumptions entail that everyone’s welfare counts the same and that all welfare improvements count equally. Still, some may be able to realize more welfare than others. We’re particularly interested in how much welfare different individuals can realize at a time—that is, their respective welfare ranges. An individual’s welfare range is the difference between the best and worst welfare states the individual can realize at a time. We assume hedonism, according to which all and only positively valenced states increase welfare and all and only negatively valenced states decrease welfare.Given as much, an individual’s welfare range is the difference between the welfare level associated with the most intense positively valenced state that the individual can realize and the welfare level associated with the most intense negatively valenced state that the individual can realize. In the case of pigs, for instance, that might be the difference between the welfare level we associate with being fully healthy on a farm sanctuary, on the one hand, and a botched slaughter, on the other.If there’s variation in welfare ranges across taxa, then there’s variation in the capacities that generate the determinants of welfare. So, if there’s such variation and hedonism is true, then there’s variation in the capacities that generate positively and negatively valenced experiences.As Jason Schukraft argues, we don’t have any good direct measures of the intensity of valenced states that let us make interspecific comparisons. Indeed, we rely on indirect measures even in humans: behavior, physiological changes, and verbal reports. We can observe behavior and physiological changes in nonhumans, but most of them aren’t verbal. So, we have to rely on other indirect proxies, piecing together an understanding from animals’ cognitive and affective traits or capabilities. The Welfare Range Table includes over 90 such traits: some behavioral, others physiological; some more cognitive, others more affective. Then, it indicates whether the empirical literature provides reason to think that members of 11 farmed spec...]]>
Bob Fischer https://forum.effectivealtruism.org/posts/tnSg6o7crcHFLc395/the-welfare-range-table Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Welfare Range Table, published by Bob Fischer on November 7, 2022 on The Effective Altruism Forum.Key TakeawaysOur objective: estimate the welfare ranges of 11 farmed species.Given hedonism, an individual’s welfare range is the difference between the welfare level associated with the most intense positively valenced state that the individual can realize and the welfare level associated with the most intense negatively valenced state that the individual can realize.Given some prominent theories about the functions of valenced states, we identified over 90 empirical proxies that might provide evidence of variation in the potential intensities of those states.There are many unknowns across many species.It’s rare to have evidence that animals lack a given trait.We know less about the presence or absence of traits as we move from terrestrial vertebrates to most invertebrates.Many of the traits about which we know the least are affective traits.We do have information about some significant traits for many animals.IntroductionThis is the second post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to provide an overview of the Welfare Range Table, which records the results of a literature review covering over 90 empirical traits across 11 farmed species.MotivationsIf we want to do as much good as possible, we have to compare all the ways of doing good—including ways that involve helping members of different species. The Moral Weight Project’s assumptions entail that everyone’s welfare counts the same and that all welfare improvements count equally. Still, some may be able to realize more welfare than others. We’re particularly interested in how much welfare different individuals can realize at a time—that is, their respective welfare ranges. An individual’s welfare range is the difference between the best and worst welfare states the individual can realize at a time. We assume hedonism, according to which all and only positively valenced states increase welfare and all and only negatively valenced states decrease welfare.Given as much, an individual’s welfare range is the difference between the welfare level associated with the most intense positively valenced state that the individual can realize and the welfare level associated with the most intense negatively valenced state that the individual can realize. In the case of pigs, for instance, that might be the difference between the welfare level we associate with being fully healthy on a farm sanctuary, on the one hand, and a botched slaughter, on the other.If there’s variation in welfare ranges across taxa, then there’s variation in the capacities that generate the determinants of welfare. So, if there’s such variation and hedonism is true, then there’s variation in the capacities that generate positively and negatively valenced experiences.As Jason Schukraft argues, we don’t have any good direct measures of the intensity of valenced states that let us make interspecific comparisons. Indeed, we rely on indirect measures even in humans: behavior, physiological changes, and verbal reports. We can observe behavior and physiological changes in nonhumans, but most of them aren’t verbal. So, we have to rely on other indirect proxies, piecing together an understanding from animals’ cognitive and affective traits or capabilities. The Welfare Range Table includes over 90 such traits: some behavioral, others physiological; some more cognitive, others more affective. Then, it indicates whether the empirical literature provides reason to think that members of 11 farmed spec...]]>
Mon, 07 Nov 2022 12:55:54 +0000 EA - The Welfare Range Table by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Welfare Range Table, published by Bob Fischer on November 7, 2022 on The Effective Altruism Forum.Key TakeawaysOur objective: estimate the welfare ranges of 11 farmed species.Given hedonism, an individual’s welfare range is the difference between the welfare level associated with the most intense positively valenced state that the individual can realize and the welfare level associated with the most intense negatively valenced state that the individual can realize.Given some prominent theories about the functions of valenced states, we identified over 90 empirical proxies that might provide evidence of variation in the potential intensities of those states.There are many unknowns across many species.It’s rare to have evidence that animals lack a given trait.We know less about the presence or absence of traits as we move from terrestrial vertebrates to most invertebrates.Many of the traits about which we know the least are affective traits.We do have information about some significant traits for many animals.IntroductionThis is the second post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to provide an overview of the Welfare Range Table, which records the results of a literature review covering over 90 empirical traits across 11 farmed species.MotivationsIf we want to do as much good as possible, we have to compare all the ways of doing good—including ways that involve helping members of different species. The Moral Weight Project’s assumptions entail that everyone’s welfare counts the same and that all welfare improvements count equally. Still, some may be able to realize more welfare than others. We’re particularly interested in how much welfare different individuals can realize at a time—that is, their respective welfare ranges. An individual’s welfare range is the difference between the best and worst welfare states the individual can realize at a time. We assume hedonism, according to which all and only positively valenced states increase welfare and all and only negatively valenced states decrease welfare.Given as much, an individual’s welfare range is the difference between the welfare level associated with the most intense positively valenced state that the individual can realize and the welfare level associated with the most intense negatively valenced state that the individual can realize. In the case of pigs, for instance, that might be the difference between the welfare level we associate with being fully healthy on a farm sanctuary, on the one hand, and a botched slaughter, on the other.If there’s variation in welfare ranges across taxa, then there’s variation in the capacities that generate the determinants of welfare. So, if there’s such variation and hedonism is true, then there’s variation in the capacities that generate positively and negatively valenced experiences.As Jason Schukraft argues, we don’t have any good direct measures of the intensity of valenced states that let us make interspecific comparisons. Indeed, we rely on indirect measures even in humans: behavior, physiological changes, and verbal reports. We can observe behavior and physiological changes in nonhumans, but most of them aren’t verbal. So, we have to rely on other indirect proxies, piecing together an understanding from animals’ cognitive and affective traits or capabilities. The Welfare Range Table includes over 90 such traits: some behavioral, others physiological; some more cognitive, others more affective. Then, it indicates whether the empirical literature provides reason to think that members of 11 farmed spec...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Welfare Range Table, published by Bob Fischer on November 7, 2022 on The Effective Altruism Forum.Key TakeawaysOur objective: estimate the welfare ranges of 11 farmed species.Given hedonism, an individual’s welfare range is the difference between the welfare level associated with the most intense positively valenced state that the individual can realize and the welfare level associated with the most intense negatively valenced state that the individual can realize.Given some prominent theories about the functions of valenced states, we identified over 90 empirical proxies that might provide evidence of variation in the potential intensities of those states.There are many unknowns across many species.It’s rare to have evidence that animals lack a given trait.We know less about the presence or absence of traits as we move from terrestrial vertebrates to most invertebrates.Many of the traits about which we know the least are affective traits.We do have information about some significant traits for many animals.IntroductionThis is the second post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to provide an overview of the Welfare Range Table, which records the results of a literature review covering over 90 empirical traits across 11 farmed species.MotivationsIf we want to do as much good as possible, we have to compare all the ways of doing good—including ways that involve helping members of different species. The Moral Weight Project’s assumptions entail that everyone’s welfare counts the same and that all welfare improvements count equally. Still, some may be able to realize more welfare than others. We’re particularly interested in how much welfare different individuals can realize at a time—that is, their respective welfare ranges. An individual’s welfare range is the difference between the best and worst welfare states the individual can realize at a time. We assume hedonism, according to which all and only positively valenced states increase welfare and all and only negatively valenced states decrease welfare.Given as much, an individual’s welfare range is the difference between the welfare level associated with the most intense positively valenced state that the individual can realize and the welfare level associated with the most intense negatively valenced state that the individual can realize. In the case of pigs, for instance, that might be the difference between the welfare level we associate with being fully healthy on a farm sanctuary, on the one hand, and a botched slaughter, on the other.If there’s variation in welfare ranges across taxa, then there’s variation in the capacities that generate the determinants of welfare. So, if there’s such variation and hedonism is true, then there’s variation in the capacities that generate positively and negatively valenced experiences.As Jason Schukraft argues, we don’t have any good direct measures of the intensity of valenced states that let us make interspecific comparisons. Indeed, we rely on indirect measures even in humans: behavior, physiological changes, and verbal reports. We can observe behavior and physiological changes in nonhumans, but most of them aren’t verbal. So, we have to rely on other indirect proxies, piecing together an understanding from animals’ cognitive and affective traits or capabilities. The Welfare Range Table includes over 90 such traits: some behavioral, others physiological; some more cognitive, others more affective. Then, it indicates whether the empirical literature provides reason to think that members of 11 farmed spec...]]>
Bob Fischer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 14:32 None full 3675
GWyidA3fbXKXErDn4_EA EA - Effective altruism as a lifestyle movement (A Master’s Thesis) by Ada-Maaria Hyvärinen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruism as a lifestyle movement (A Master’s Thesis), published by Ada-Maaria Hyvärinen on November 6, 2022 on The Effective Altruism Forum.Earlier this year, Jemina Mikkonen published her Master’s Thesis in Sociology on the topic of “Effective altruism as a lifestyle movement”. For her research, she interviewed 8 persons who were or had been involved in effective altruism through the university/city group Effective Altruism Helsinki. The purpose of the study was to find out if EA is a lifestyle movement as defined by Haenfler, Johnson, Jones (2012). Mikkonen concluded it is.If you understand Finnish, you can read the whole thesis here. Since the contents of the thesis might be of interest to non-Finnish-speakers as well, I will summarize her key points in this post. To my understanding, the thesis received the grade “good”, so the research is not exceptionally well done, but should fill the expected function of a thesis. Reading these findings you should keep in mind that this work was done by a student, not by an experienced researcher.To my knowledge this is the only time anyone has conducted sociology research on an EA group. Mikkonen could not find previous research on EA groups specifically, making it her main motivation to choose EA as a thesis topic. Mikkonen is not involved in EA, but followed the EA Helsinki Telegram chat for a while and participated in one online meetup for the purposes of her study.According to Mikkonen, EA is a lifestyle movementThe concept of a lifestyle movement was developed by Haenfler, Johnson, Jones (2012). Lifestyle movements are loosely organized or non-organized movements that aim for social change primarily by the means of individual lifestyle choices. They are different from “traditional” social movements that have an external focus, aim for collective (often political) action and have some degree of organization. They define lifestyle movements by the following characteristics:individual (as opposed to collective) action: participation occurs primarily at the individual level with the subjective understanding that others are taking similar action, collectively adding up to social changeprivate and ongoing action: participation occurs in daily life (so its not public and not episodic)action is understood as an effort towards social change (so it is not for example exclusively done as self-help or religious exploration)personal identity is a site of social change: adherents engage in identity work, focusing particularly on cultivating a morally coherent, personally meaningful identity in the context of a collective identityMikkonen found that EA satisfied this definition in the following way:Action is individual: the interviewees felt their impact comes from personal choices, such as donating and studying to later build an EA aligned careerEA is present in daily life: study and work are methods of doing good, donating influences finances and participants use their free time to learn more about EAAction is done for social change: Mikkonen noted that EA stands out from other lifestyle movements in the sense that in EA, actions are never taken just for the sake of participating in the movement, but they are always tied to the end goal of having an impactIdentity work: Usually, members of a social movement can see their actions as a sign of moral virtue, and taking action helps them keep up with the idea of themselves as a good person. However, in Mikkonen’s interviews nobody seemed to get this feeling out of EA. But many people described having experienced pressure to follow EA principles, and emphasized that they were “just ordinary people”, which Mikkonen interpreted as a way of coping with the high standards of EA. According to her, EA can act as a way of avoiding the identity of an immoral person, rathe...]]>
Ada-Maaria Hyvärinen https://forum.effectivealtruism.org/posts/GWyidA3fbXKXErDn4/effective-altruism-as-a-lifestyle-movement-a-master-s-thesis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruism as a lifestyle movement (A Master’s Thesis), published by Ada-Maaria Hyvärinen on November 6, 2022 on The Effective Altruism Forum.Earlier this year, Jemina Mikkonen published her Master’s Thesis in Sociology on the topic of “Effective altruism as a lifestyle movement”. For her research, she interviewed 8 persons who were or had been involved in effective altruism through the university/city group Effective Altruism Helsinki. The purpose of the study was to find out if EA is a lifestyle movement as defined by Haenfler, Johnson, Jones (2012). Mikkonen concluded it is.If you understand Finnish, you can read the whole thesis here. Since the contents of the thesis might be of interest to non-Finnish-speakers as well, I will summarize her key points in this post. To my understanding, the thesis received the grade “good”, so the research is not exceptionally well done, but should fill the expected function of a thesis. Reading these findings you should keep in mind that this work was done by a student, not by an experienced researcher.To my knowledge this is the only time anyone has conducted sociology research on an EA group. Mikkonen could not find previous research on EA groups specifically, making it her main motivation to choose EA as a thesis topic. Mikkonen is not involved in EA, but followed the EA Helsinki Telegram chat for a while and participated in one online meetup for the purposes of her study.According to Mikkonen, EA is a lifestyle movementThe concept of a lifestyle movement was developed by Haenfler, Johnson, Jones (2012). Lifestyle movements are loosely organized or non-organized movements that aim for social change primarily by the means of individual lifestyle choices. They are different from “traditional” social movements that have an external focus, aim for collective (often political) action and have some degree of organization. They define lifestyle movements by the following characteristics:individual (as opposed to collective) action: participation occurs primarily at the individual level with the subjective understanding that others are taking similar action, collectively adding up to social changeprivate and ongoing action: participation occurs in daily life (so its not public and not episodic)action is understood as an effort towards social change (so it is not for example exclusively done as self-help or religious exploration)personal identity is a site of social change: adherents engage in identity work, focusing particularly on cultivating a morally coherent, personally meaningful identity in the context of a collective identityMikkonen found that EA satisfied this definition in the following way:Action is individual: the interviewees felt their impact comes from personal choices, such as donating and studying to later build an EA aligned careerEA is present in daily life: study and work are methods of doing good, donating influences finances and participants use their free time to learn more about EAAction is done for social change: Mikkonen noted that EA stands out from other lifestyle movements in the sense that in EA, actions are never taken just for the sake of participating in the movement, but they are always tied to the end goal of having an impactIdentity work: Usually, members of a social movement can see their actions as a sign of moral virtue, and taking action helps them keep up with the idea of themselves as a good person. However, in Mikkonen’s interviews nobody seemed to get this feeling out of EA. But many people described having experienced pressure to follow EA principles, and emphasized that they were “just ordinary people”, which Mikkonen interpreted as a way of coping with the high standards of EA. According to her, EA can act as a way of avoiding the identity of an immoral person, rathe...]]>
Mon, 07 Nov 2022 08:14:21 +0000 EA - Effective altruism as a lifestyle movement (A Master’s Thesis) by Ada-Maaria Hyvärinen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruism as a lifestyle movement (A Master’s Thesis), published by Ada-Maaria Hyvärinen on November 6, 2022 on The Effective Altruism Forum.Earlier this year, Jemina Mikkonen published her Master’s Thesis in Sociology on the topic of “Effective altruism as a lifestyle movement”. For her research, she interviewed 8 persons who were or had been involved in effective altruism through the university/city group Effective Altruism Helsinki. The purpose of the study was to find out if EA is a lifestyle movement as defined by Haenfler, Johnson, Jones (2012). Mikkonen concluded it is.If you understand Finnish, you can read the whole thesis here. Since the contents of the thesis might be of interest to non-Finnish-speakers as well, I will summarize her key points in this post. To my understanding, the thesis received the grade “good”, so the research is not exceptionally well done, but should fill the expected function of a thesis. Reading these findings you should keep in mind that this work was done by a student, not by an experienced researcher.To my knowledge this is the only time anyone has conducted sociology research on an EA group. Mikkonen could not find previous research on EA groups specifically, making it her main motivation to choose EA as a thesis topic. Mikkonen is not involved in EA, but followed the EA Helsinki Telegram chat for a while and participated in one online meetup for the purposes of her study.According to Mikkonen, EA is a lifestyle movementThe concept of a lifestyle movement was developed by Haenfler, Johnson, Jones (2012). Lifestyle movements are loosely organized or non-organized movements that aim for social change primarily by the means of individual lifestyle choices. They are different from “traditional” social movements that have an external focus, aim for collective (often political) action and have some degree of organization. They define lifestyle movements by the following characteristics:individual (as opposed to collective) action: participation occurs primarily at the individual level with the subjective understanding that others are taking similar action, collectively adding up to social changeprivate and ongoing action: participation occurs in daily life (so its not public and not episodic)action is understood as an effort towards social change (so it is not for example exclusively done as self-help or religious exploration)personal identity is a site of social change: adherents engage in identity work, focusing particularly on cultivating a morally coherent, personally meaningful identity in the context of a collective identityMikkonen found that EA satisfied this definition in the following way:Action is individual: the interviewees felt their impact comes from personal choices, such as donating and studying to later build an EA aligned careerEA is present in daily life: study and work are methods of doing good, donating influences finances and participants use their free time to learn more about EAAction is done for social change: Mikkonen noted that EA stands out from other lifestyle movements in the sense that in EA, actions are never taken just for the sake of participating in the movement, but they are always tied to the end goal of having an impactIdentity work: Usually, members of a social movement can see their actions as a sign of moral virtue, and taking action helps them keep up with the idea of themselves as a good person. However, in Mikkonen’s interviews nobody seemed to get this feeling out of EA. But many people described having experienced pressure to follow EA principles, and emphasized that they were “just ordinary people”, which Mikkonen interpreted as a way of coping with the high standards of EA. According to her, EA can act as a way of avoiding the identity of an immoral person, rathe...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruism as a lifestyle movement (A Master’s Thesis), published by Ada-Maaria Hyvärinen on November 6, 2022 on The Effective Altruism Forum.Earlier this year, Jemina Mikkonen published her Master’s Thesis in Sociology on the topic of “Effective altruism as a lifestyle movement”. For her research, she interviewed 8 persons who were or had been involved in effective altruism through the university/city group Effective Altruism Helsinki. The purpose of the study was to find out if EA is a lifestyle movement as defined by Haenfler, Johnson, Jones (2012). Mikkonen concluded it is.If you understand Finnish, you can read the whole thesis here. Since the contents of the thesis might be of interest to non-Finnish-speakers as well, I will summarize her key points in this post. To my understanding, the thesis received the grade “good”, so the research is not exceptionally well done, but should fill the expected function of a thesis. Reading these findings you should keep in mind that this work was done by a student, not by an experienced researcher.To my knowledge this is the only time anyone has conducted sociology research on an EA group. Mikkonen could not find previous research on EA groups specifically, making it her main motivation to choose EA as a thesis topic. Mikkonen is not involved in EA, but followed the EA Helsinki Telegram chat for a while and participated in one online meetup for the purposes of her study.According to Mikkonen, EA is a lifestyle movementThe concept of a lifestyle movement was developed by Haenfler, Johnson, Jones (2012). Lifestyle movements are loosely organized or non-organized movements that aim for social change primarily by the means of individual lifestyle choices. They are different from “traditional” social movements that have an external focus, aim for collective (often political) action and have some degree of organization. They define lifestyle movements by the following characteristics:individual (as opposed to collective) action: participation occurs primarily at the individual level with the subjective understanding that others are taking similar action, collectively adding up to social changeprivate and ongoing action: participation occurs in daily life (so its not public and not episodic)action is understood as an effort towards social change (so it is not for example exclusively done as self-help or religious exploration)personal identity is a site of social change: adherents engage in identity work, focusing particularly on cultivating a morally coherent, personally meaningful identity in the context of a collective identityMikkonen found that EA satisfied this definition in the following way:Action is individual: the interviewees felt their impact comes from personal choices, such as donating and studying to later build an EA aligned careerEA is present in daily life: study and work are methods of doing good, donating influences finances and participants use their free time to learn more about EAAction is done for social change: Mikkonen noted that EA stands out from other lifestyle movements in the sense that in EA, actions are never taken just for the sake of participating in the movement, but they are always tied to the end goal of having an impactIdentity work: Usually, members of a social movement can see their actions as a sign of moral virtue, and taking action helps them keep up with the idea of themselves as a good person. However, in Mikkonen’s interviews nobody seemed to get this feeling out of EA. But many people described having experienced pressure to follow EA principles, and emphasized that they were “just ordinary people”, which Mikkonen interpreted as a way of coping with the high standards of EA. According to her, EA can act as a way of avoiding the identity of an immoral person, rathe...]]>
Ada-Maaria Hyvärinen https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:27 None full 3669
vGiyvfaGGFEzQsETR_EA EA - What's Happening in Australia by Bradley Tjandra Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's Happening in Australia, published by Bradley Tjandra on November 7, 2022 on The Effective Altruism Forum.IntroductionCrikey! There's a lot going on in 'Straya right now!There are a lot of new and exciting projects happening in Australia— we want to share some of that with the wider community and talk about how you can be involved.Projects in AustraliaMuch of the effective altruist aligned work being done in Australia involves working remotely with the international community, at organisations already known by the international community. Given that, we decided to focus on projects which are being at least partially led by Australians.If you’re leading an exciting project in Australia, please send me a message with a completed copy of this template and I can add it into the post.AI Safety Australia & New ZealandAI Safety SupportEA PathfinderFoundations for TomorrowGiving What We CanGood Ancestors Project (Sydney Office)Good Ancestors Project (Operations)Good Ancestors Project (Policy)High Impact EngineersHigh Impact RecruitmentInsights for ImpactLead Exposure Elimination ProjectQuantifying Uncertainty in GiveWell CEAsReady ResearchSentience InstituteAI Safety Australia & New ZealandWhat do we do?We’re organising a number of community building programs:AI Sydney Fellowship - A fellowship for aspiring AI researchers in Australian/New Zealand to meet and collaborate on their projects.Sydney Retreat - A weekend getaway near Maroubra Beach and trying out a range of activities to facilitate connection, ideation, sharing, and, of course, alignment.Informal chats - Discussion group for people who have already completed the AGI Safety Fundamentals technical course or otherwise have an equivalent level of knowledge.About the teamChris Leong - Main Organiser. Previously interned at Non-Linear.Yanni Kyriacos - Marketing & Communications Lead. Recently employed at Spark Wave as Head of Marketing. 10 years of experience as a marketing strategist.Mark Carauleanu - Intern. Previously Admin Support at CEA, participated in the SERI Summer Fellowship and MLSS.Jenna - Operations Contractor. Also Design & Communications Consultant at Fortify Health and Cofounder of EA Canberra.Call to actionReach out to chris [at] aisafetysupport.org.More linksJoin the Facebook groupNudge competitionAI Safety SupportWhat do we do?Providing support services to early career, independent and transitioning AI alignment researchers. This is done through career coaching, events, training programs and fiscal sponsorship.About the teamJJ Hepburn - Cofounder and CEO based in SydneyRachel Williams - COO based in Colorado SpringsFrances Lorenz - Operations Manager based in BostonShay Gestal - Health Coach based in BerlingtonCall to actionSignup for our mailing listJoin the AI Alignment slackApply for coachingCheckout our free Health CoachingContact us about fiscal sponsorshipsMore linksaisafetysupport.orgEA PathfinderWhat do we do?EA Pathfinder is an organisation that wants to create a world in which professionals motivated by Effective Altruism can do high impact work. We are addressing the bottlenecks specific to mid-career professionals by offering career advising, peer support, research mentorship and project management support.About the teamNeil Ferro has recently come on board as a career advisor.Call to actionIf you have 5+ years of career experience and have a successful career, but feel your job doesn't have the positive impact you’d like then you can apply for free career advice here.More linksYou can visit the website hereFoundations for TomorrowWhat do we do?Foundations for Tomorrow is a proudly independent, leader-focused non-profit with the mission of protecting Australia’s future interests. Our aspiration over the next 3-5 years i...]]>
Bradley Tjandra https://forum.effectivealtruism.org/posts/vGiyvfaGGFEzQsETR/what-s-happening-in-australia Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's Happening in Australia, published by Bradley Tjandra on November 7, 2022 on The Effective Altruism Forum.IntroductionCrikey! There's a lot going on in 'Straya right now!There are a lot of new and exciting projects happening in Australia— we want to share some of that with the wider community and talk about how you can be involved.Projects in AustraliaMuch of the effective altruist aligned work being done in Australia involves working remotely with the international community, at organisations already known by the international community. Given that, we decided to focus on projects which are being at least partially led by Australians.If you’re leading an exciting project in Australia, please send me a message with a completed copy of this template and I can add it into the post.AI Safety Australia & New ZealandAI Safety SupportEA PathfinderFoundations for TomorrowGiving What We CanGood Ancestors Project (Sydney Office)Good Ancestors Project (Operations)Good Ancestors Project (Policy)High Impact EngineersHigh Impact RecruitmentInsights for ImpactLead Exposure Elimination ProjectQuantifying Uncertainty in GiveWell CEAsReady ResearchSentience InstituteAI Safety Australia & New ZealandWhat do we do?We’re organising a number of community building programs:AI Sydney Fellowship - A fellowship for aspiring AI researchers in Australian/New Zealand to meet and collaborate on their projects.Sydney Retreat - A weekend getaway near Maroubra Beach and trying out a range of activities to facilitate connection, ideation, sharing, and, of course, alignment.Informal chats - Discussion group for people who have already completed the AGI Safety Fundamentals technical course or otherwise have an equivalent level of knowledge.About the teamChris Leong - Main Organiser. Previously interned at Non-Linear.Yanni Kyriacos - Marketing & Communications Lead. Recently employed at Spark Wave as Head of Marketing. 10 years of experience as a marketing strategist.Mark Carauleanu - Intern. Previously Admin Support at CEA, participated in the SERI Summer Fellowship and MLSS.Jenna - Operations Contractor. Also Design & Communications Consultant at Fortify Health and Cofounder of EA Canberra.Call to actionReach out to chris [at] aisafetysupport.org.More linksJoin the Facebook groupNudge competitionAI Safety SupportWhat do we do?Providing support services to early career, independent and transitioning AI alignment researchers. This is done through career coaching, events, training programs and fiscal sponsorship.About the teamJJ Hepburn - Cofounder and CEO based in SydneyRachel Williams - COO based in Colorado SpringsFrances Lorenz - Operations Manager based in BostonShay Gestal - Health Coach based in BerlingtonCall to actionSignup for our mailing listJoin the AI Alignment slackApply for coachingCheckout our free Health CoachingContact us about fiscal sponsorshipsMore linksaisafetysupport.orgEA PathfinderWhat do we do?EA Pathfinder is an organisation that wants to create a world in which professionals motivated by Effective Altruism can do high impact work. We are addressing the bottlenecks specific to mid-career professionals by offering career advising, peer support, research mentorship and project management support.About the teamNeil Ferro has recently come on board as a career advisor.Call to actionIf you have 5+ years of career experience and have a successful career, but feel your job doesn't have the positive impact you’d like then you can apply for free career advice here.More linksYou can visit the website hereFoundations for TomorrowWhat do we do?Foundations for Tomorrow is a proudly independent, leader-focused non-profit with the mission of protecting Australia’s future interests. Our aspiration over the next 3-5 years i...]]>
Mon, 07 Nov 2022 01:03:56 +0000 EA - What's Happening in Australia by Bradley Tjandra Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's Happening in Australia, published by Bradley Tjandra on November 7, 2022 on The Effective Altruism Forum.IntroductionCrikey! There's a lot going on in 'Straya right now!There are a lot of new and exciting projects happening in Australia— we want to share some of that with the wider community and talk about how you can be involved.Projects in AustraliaMuch of the effective altruist aligned work being done in Australia involves working remotely with the international community, at organisations already known by the international community. Given that, we decided to focus on projects which are being at least partially led by Australians.If you’re leading an exciting project in Australia, please send me a message with a completed copy of this template and I can add it into the post.AI Safety Australia & New ZealandAI Safety SupportEA PathfinderFoundations for TomorrowGiving What We CanGood Ancestors Project (Sydney Office)Good Ancestors Project (Operations)Good Ancestors Project (Policy)High Impact EngineersHigh Impact RecruitmentInsights for ImpactLead Exposure Elimination ProjectQuantifying Uncertainty in GiveWell CEAsReady ResearchSentience InstituteAI Safety Australia & New ZealandWhat do we do?We’re organising a number of community building programs:AI Sydney Fellowship - A fellowship for aspiring AI researchers in Australian/New Zealand to meet and collaborate on their projects.Sydney Retreat - A weekend getaway near Maroubra Beach and trying out a range of activities to facilitate connection, ideation, sharing, and, of course, alignment.Informal chats - Discussion group for people who have already completed the AGI Safety Fundamentals technical course or otherwise have an equivalent level of knowledge.About the teamChris Leong - Main Organiser. Previously interned at Non-Linear.Yanni Kyriacos - Marketing & Communications Lead. Recently employed at Spark Wave as Head of Marketing. 10 years of experience as a marketing strategist.Mark Carauleanu - Intern. Previously Admin Support at CEA, participated in the SERI Summer Fellowship and MLSS.Jenna - Operations Contractor. Also Design & Communications Consultant at Fortify Health and Cofounder of EA Canberra.Call to actionReach out to chris [at] aisafetysupport.org.More linksJoin the Facebook groupNudge competitionAI Safety SupportWhat do we do?Providing support services to early career, independent and transitioning AI alignment researchers. This is done through career coaching, events, training programs and fiscal sponsorship.About the teamJJ Hepburn - Cofounder and CEO based in SydneyRachel Williams - COO based in Colorado SpringsFrances Lorenz - Operations Manager based in BostonShay Gestal - Health Coach based in BerlingtonCall to actionSignup for our mailing listJoin the AI Alignment slackApply for coachingCheckout our free Health CoachingContact us about fiscal sponsorshipsMore linksaisafetysupport.orgEA PathfinderWhat do we do?EA Pathfinder is an organisation that wants to create a world in which professionals motivated by Effective Altruism can do high impact work. We are addressing the bottlenecks specific to mid-career professionals by offering career advising, peer support, research mentorship and project management support.About the teamNeil Ferro has recently come on board as a career advisor.Call to actionIf you have 5+ years of career experience and have a successful career, but feel your job doesn't have the positive impact you’d like then you can apply for free career advice here.More linksYou can visit the website hereFoundations for TomorrowWhat do we do?Foundations for Tomorrow is a proudly independent, leader-focused non-profit with the mission of protecting Australia’s future interests. Our aspiration over the next 3-5 years i...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's Happening in Australia, published by Bradley Tjandra on November 7, 2022 on The Effective Altruism Forum.IntroductionCrikey! There's a lot going on in 'Straya right now!There are a lot of new and exciting projects happening in Australia— we want to share some of that with the wider community and talk about how you can be involved.Projects in AustraliaMuch of the effective altruist aligned work being done in Australia involves working remotely with the international community, at organisations already known by the international community. Given that, we decided to focus on projects which are being at least partially led by Australians.If you’re leading an exciting project in Australia, please send me a message with a completed copy of this template and I can add it into the post.AI Safety Australia & New ZealandAI Safety SupportEA PathfinderFoundations for TomorrowGiving What We CanGood Ancestors Project (Sydney Office)Good Ancestors Project (Operations)Good Ancestors Project (Policy)High Impact EngineersHigh Impact RecruitmentInsights for ImpactLead Exposure Elimination ProjectQuantifying Uncertainty in GiveWell CEAsReady ResearchSentience InstituteAI Safety Australia & New ZealandWhat do we do?We’re organising a number of community building programs:AI Sydney Fellowship - A fellowship for aspiring AI researchers in Australian/New Zealand to meet and collaborate on their projects.Sydney Retreat - A weekend getaway near Maroubra Beach and trying out a range of activities to facilitate connection, ideation, sharing, and, of course, alignment.Informal chats - Discussion group for people who have already completed the AGI Safety Fundamentals technical course or otherwise have an equivalent level of knowledge.About the teamChris Leong - Main Organiser. Previously interned at Non-Linear.Yanni Kyriacos - Marketing & Communications Lead. Recently employed at Spark Wave as Head of Marketing. 10 years of experience as a marketing strategist.Mark Carauleanu - Intern. Previously Admin Support at CEA, participated in the SERI Summer Fellowship and MLSS.Jenna - Operations Contractor. Also Design & Communications Consultant at Fortify Health and Cofounder of EA Canberra.Call to actionReach out to chris [at] aisafetysupport.org.More linksJoin the Facebook groupNudge competitionAI Safety SupportWhat do we do?Providing support services to early career, independent and transitioning AI alignment researchers. This is done through career coaching, events, training programs and fiscal sponsorship.About the teamJJ Hepburn - Cofounder and CEO based in SydneyRachel Williams - COO based in Colorado SpringsFrances Lorenz - Operations Manager based in BostonShay Gestal - Health Coach based in BerlingtonCall to actionSignup for our mailing listJoin the AI Alignment slackApply for coachingCheckout our free Health CoachingContact us about fiscal sponsorshipsMore linksaisafetysupport.orgEA PathfinderWhat do we do?EA Pathfinder is an organisation that wants to create a world in which professionals motivated by Effective Altruism can do high impact work. We are addressing the bottlenecks specific to mid-career professionals by offering career advising, peer support, research mentorship and project management support.About the teamNeil Ferro has recently come on board as a career advisor.Call to actionIf you have 5+ years of career experience and have a successful career, but feel your job doesn't have the positive impact you’d like then you can apply for free career advice here.More linksYou can visit the website hereFoundations for TomorrowWhat do we do?Foundations for Tomorrow is a proudly independent, leader-focused non-profit with the mission of protecting Australia’s future interests. Our aspiration over the next 3-5 years i...]]>
Bradley Tjandra https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 19:20 None full 3668
8CMuNwKMcR55jhd8W_EA EA - Instead of technical research, more people should focus on buying time by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Instead of technical research, more people should focus on buying time, published by Akash on November 5, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://forum.effectivealtruism.org/posts/8CMuNwKMcR55jhd8W/instead-of-technical-research-more-people-should-focus-on Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Instead of technical research, more people should focus on buying time, published by Akash on November 5, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 06 Nov 2022 04:53:28 +0000 EA - Instead of technical research, more people should focus on buying time by Akash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Instead of technical research, more people should focus on buying time, published by Akash on November 5, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Instead of technical research, more people should focus on buying time, published by Akash on November 5, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Akash https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:28 None full 3658
Bnp9YDqErNXHmTvvE_EA EA - The Slippery Slope from DALLE-2 to Deepfake Anarchy by stecas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Slippery Slope from DALLE-2 to Deepfake Anarchy, published by stecas on November 5, 2022 on The Effective Altruism Forum.OpenAI developed DALLE-2. Then StabilityAI made an open source copycat. This is a dangerous dynamic.Stephen Casper (scasper@mit.edu)Phillip Christoffersen (philljkc@mit.edu)Rui-Jie Yew (rjy@mit.edu)Thanks to Tan Zhi-Xuan and Dylan Hadfield-Menell for feedback.This post talks about NSFW content but does not contain any. All links from this post are SFW.AbstractSince OpenAI published their work on DALLE-2 (an AI system that produces images from text prompts) in April, several copycat text-to-image models have been developed including StabilityAI’s Stable Diffusion. Stable Diffusion is open-source and can be easily misused, including for the almost-effortless development of NSFW images of specific people for blackmail or harassment. We argue that OpenAI and StabilityAI’s efforts to avoid misuse have foreseeably failed and that both share responsibility for harms from these models. And even if one is not concerned about issues specific to text-to-image models, this case study raises concerns about how copycatting and open-sourcing could lead to abuses of more dangerous systems in the future.To reduce risks, we discuss three design principles that developers should abide by when designing advanced AI systems. Finally we conclude that (1) the AI research community should curtail work on risky capabilities–or at the very least more substantially vet released models (2) the AI governance community should work to quickly adapt to heightened harms posed by copycatting in general and text-to-image models in particular, and (3) public opinion should ideally not only be critical of perpetrators for harms that they cause with AI systems, but also originators, copycatters, distributors, etc. who enable them.What’s wrong?Recent developments in AI image generation have made text-to-image models very effective at producing highly realistic images from captions. For some examples, see the paper from OpenAI on their DALLE-2 model or the release from Stability AI of their Stable Diffusion model. Deep neural image generators like StyleGan and manual image editing tools like Photoshop have been on the scene for years. But today, DALLE-2 and Stable Diffusion (which is open source) are uniquely effective at rapidly producing highly-realistic images from open-ended prompts.There are a number of risks posed by these models, and OpenAI acknowledges this. Unlike conventional art and Photoshop, today’s text-to-image models can produce images from open-ended prompts by a user in seconds. Concerns include (1) copyright and intellectual property issues (2) sensitive data being collected and learned (3) demographic biases, e.g. producing images of women when given the input, “an image of a nurse” (4) using these models for disinformation by creating images of fake events, and (5) using these models for producing non-consensual, intimate deepfakes.These are all important, but producing intimate deepfakes is where abuse of these models seems to be the most striking and possibly where we are least equipped to effectively regulate misuse. Stable Diffusion is already being used to produce realistic pornography. Reddit recently banned several subreddits dedicated to AI-generated porn including r/stablediffusionnsfw, r/unstablediffusion, and r/porndiffusion for a violation of Reddit’s rules against non-consensual intimate media.This is not to say that violations of sexual and intimate privacy are new. Before the introduction of models such as DALLE-2 and Stable Diffusion, individuals have been victims of non-consensual deepfakes. Perpetrators often make this content to discredit or humiliate people from marginalized groups, taking advantage of the negative sociocultural ...]]>
stecas https://forum.effectivealtruism.org/posts/Bnp9YDqErNXHmTvvE/the-slippery-slope-from-dalle-2-to-deepfake-anarchy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Slippery Slope from DALLE-2 to Deepfake Anarchy, published by stecas on November 5, 2022 on The Effective Altruism Forum.OpenAI developed DALLE-2. Then StabilityAI made an open source copycat. This is a dangerous dynamic.Stephen Casper (scasper@mit.edu)Phillip Christoffersen (philljkc@mit.edu)Rui-Jie Yew (rjy@mit.edu)Thanks to Tan Zhi-Xuan and Dylan Hadfield-Menell for feedback.This post talks about NSFW content but does not contain any. All links from this post are SFW.AbstractSince OpenAI published their work on DALLE-2 (an AI system that produces images from text prompts) in April, several copycat text-to-image models have been developed including StabilityAI’s Stable Diffusion. Stable Diffusion is open-source and can be easily misused, including for the almost-effortless development of NSFW images of specific people for blackmail or harassment. We argue that OpenAI and StabilityAI’s efforts to avoid misuse have foreseeably failed and that both share responsibility for harms from these models. And even if one is not concerned about issues specific to text-to-image models, this case study raises concerns about how copycatting and open-sourcing could lead to abuses of more dangerous systems in the future.To reduce risks, we discuss three design principles that developers should abide by when designing advanced AI systems. Finally we conclude that (1) the AI research community should curtail work on risky capabilities–or at the very least more substantially vet released models (2) the AI governance community should work to quickly adapt to heightened harms posed by copycatting in general and text-to-image models in particular, and (3) public opinion should ideally not only be critical of perpetrators for harms that they cause with AI systems, but also originators, copycatters, distributors, etc. who enable them.What’s wrong?Recent developments in AI image generation have made text-to-image models very effective at producing highly realistic images from captions. For some examples, see the paper from OpenAI on their DALLE-2 model or the release from Stability AI of their Stable Diffusion model. Deep neural image generators like StyleGan and manual image editing tools like Photoshop have been on the scene for years. But today, DALLE-2 and Stable Diffusion (which is open source) are uniquely effective at rapidly producing highly-realistic images from open-ended prompts.There are a number of risks posed by these models, and OpenAI acknowledges this. Unlike conventional art and Photoshop, today’s text-to-image models can produce images from open-ended prompts by a user in seconds. Concerns include (1) copyright and intellectual property issues (2) sensitive data being collected and learned (3) demographic biases, e.g. producing images of women when given the input, “an image of a nurse” (4) using these models for disinformation by creating images of fake events, and (5) using these models for producing non-consensual, intimate deepfakes.These are all important, but producing intimate deepfakes is where abuse of these models seems to be the most striking and possibly where we are least equipped to effectively regulate misuse. Stable Diffusion is already being used to produce realistic pornography. Reddit recently banned several subreddits dedicated to AI-generated porn including r/stablediffusionnsfw, r/unstablediffusion, and r/porndiffusion for a violation of Reddit’s rules against non-consensual intimate media.This is not to say that violations of sexual and intimate privacy are new. Before the introduction of models such as DALLE-2 and Stable Diffusion, individuals have been victims of non-consensual deepfakes. Perpetrators often make this content to discredit or humiliate people from marginalized groups, taking advantage of the negative sociocultural ...]]>
Sun, 06 Nov 2022 03:05:57 +0000 EA - The Slippery Slope from DALLE-2 to Deepfake Anarchy by stecas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Slippery Slope from DALLE-2 to Deepfake Anarchy, published by stecas on November 5, 2022 on The Effective Altruism Forum.OpenAI developed DALLE-2. Then StabilityAI made an open source copycat. This is a dangerous dynamic.Stephen Casper (scasper@mit.edu)Phillip Christoffersen (philljkc@mit.edu)Rui-Jie Yew (rjy@mit.edu)Thanks to Tan Zhi-Xuan and Dylan Hadfield-Menell for feedback.This post talks about NSFW content but does not contain any. All links from this post are SFW.AbstractSince OpenAI published their work on DALLE-2 (an AI system that produces images from text prompts) in April, several copycat text-to-image models have been developed including StabilityAI’s Stable Diffusion. Stable Diffusion is open-source and can be easily misused, including for the almost-effortless development of NSFW images of specific people for blackmail or harassment. We argue that OpenAI and StabilityAI’s efforts to avoid misuse have foreseeably failed and that both share responsibility for harms from these models. And even if one is not concerned about issues specific to text-to-image models, this case study raises concerns about how copycatting and open-sourcing could lead to abuses of more dangerous systems in the future.To reduce risks, we discuss three design principles that developers should abide by when designing advanced AI systems. Finally we conclude that (1) the AI research community should curtail work on risky capabilities–or at the very least more substantially vet released models (2) the AI governance community should work to quickly adapt to heightened harms posed by copycatting in general and text-to-image models in particular, and (3) public opinion should ideally not only be critical of perpetrators for harms that they cause with AI systems, but also originators, copycatters, distributors, etc. who enable them.What’s wrong?Recent developments in AI image generation have made text-to-image models very effective at producing highly realistic images from captions. For some examples, see the paper from OpenAI on their DALLE-2 model or the release from Stability AI of their Stable Diffusion model. Deep neural image generators like StyleGan and manual image editing tools like Photoshop have been on the scene for years. But today, DALLE-2 and Stable Diffusion (which is open source) are uniquely effective at rapidly producing highly-realistic images from open-ended prompts.There are a number of risks posed by these models, and OpenAI acknowledges this. Unlike conventional art and Photoshop, today’s text-to-image models can produce images from open-ended prompts by a user in seconds. Concerns include (1) copyright and intellectual property issues (2) sensitive data being collected and learned (3) demographic biases, e.g. producing images of women when given the input, “an image of a nurse” (4) using these models for disinformation by creating images of fake events, and (5) using these models for producing non-consensual, intimate deepfakes.These are all important, but producing intimate deepfakes is where abuse of these models seems to be the most striking and possibly where we are least equipped to effectively regulate misuse. Stable Diffusion is already being used to produce realistic pornography. Reddit recently banned several subreddits dedicated to AI-generated porn including r/stablediffusionnsfw, r/unstablediffusion, and r/porndiffusion for a violation of Reddit’s rules against non-consensual intimate media.This is not to say that violations of sexual and intimate privacy are new. Before the introduction of models such as DALLE-2 and Stable Diffusion, individuals have been victims of non-consensual deepfakes. Perpetrators often make this content to discredit or humiliate people from marginalized groups, taking advantage of the negative sociocultural ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Slippery Slope from DALLE-2 to Deepfake Anarchy, published by stecas on November 5, 2022 on The Effective Altruism Forum.OpenAI developed DALLE-2. Then StabilityAI made an open source copycat. This is a dangerous dynamic.Stephen Casper (scasper@mit.edu)Phillip Christoffersen (philljkc@mit.edu)Rui-Jie Yew (rjy@mit.edu)Thanks to Tan Zhi-Xuan and Dylan Hadfield-Menell for feedback.This post talks about NSFW content but does not contain any. All links from this post are SFW.AbstractSince OpenAI published their work on DALLE-2 (an AI system that produces images from text prompts) in April, several copycat text-to-image models have been developed including StabilityAI’s Stable Diffusion. Stable Diffusion is open-source and can be easily misused, including for the almost-effortless development of NSFW images of specific people for blackmail or harassment. We argue that OpenAI and StabilityAI’s efforts to avoid misuse have foreseeably failed and that both share responsibility for harms from these models. And even if one is not concerned about issues specific to text-to-image models, this case study raises concerns about how copycatting and open-sourcing could lead to abuses of more dangerous systems in the future.To reduce risks, we discuss three design principles that developers should abide by when designing advanced AI systems. Finally we conclude that (1) the AI research community should curtail work on risky capabilities–or at the very least more substantially vet released models (2) the AI governance community should work to quickly adapt to heightened harms posed by copycatting in general and text-to-image models in particular, and (3) public opinion should ideally not only be critical of perpetrators for harms that they cause with AI systems, but also originators, copycatters, distributors, etc. who enable them.What’s wrong?Recent developments in AI image generation have made text-to-image models very effective at producing highly realistic images from captions. For some examples, see the paper from OpenAI on their DALLE-2 model or the release from Stability AI of their Stable Diffusion model. Deep neural image generators like StyleGan and manual image editing tools like Photoshop have been on the scene for years. But today, DALLE-2 and Stable Diffusion (which is open source) are uniquely effective at rapidly producing highly-realistic images from open-ended prompts.There are a number of risks posed by these models, and OpenAI acknowledges this. Unlike conventional art and Photoshop, today’s text-to-image models can produce images from open-ended prompts by a user in seconds. Concerns include (1) copyright and intellectual property issues (2) sensitive data being collected and learned (3) demographic biases, e.g. producing images of women when given the input, “an image of a nurse” (4) using these models for disinformation by creating images of fake events, and (5) using these models for producing non-consensual, intimate deepfakes.These are all important, but producing intimate deepfakes is where abuse of these models seems to be the most striking and possibly where we are least equipped to effectively regulate misuse. Stable Diffusion is already being used to produce realistic pornography. Reddit recently banned several subreddits dedicated to AI-generated porn including r/stablediffusionnsfw, r/unstablediffusion, and r/porndiffusion for a violation of Reddit’s rules against non-consensual intimate media.This is not to say that violations of sexual and intimate privacy are new. Before the introduction of models such as DALLE-2 and Stable Diffusion, individuals have been victims of non-consensual deepfakes. Perpetrators often make this content to discredit or humiliate people from marginalized groups, taking advantage of the negative sociocultural ...]]>
stecas https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 29:42 None full 3659
unwLjhGqvaGfMvKxc_EA EA - The Letten Prize - An opportunity for young EA researchers? by HÃ¥kon Sandbakken Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Letten Prize - An opportunity for young EA researchers?, published by HÃ¥kon Sandbakken on November 4, 2022 on The Effective Altruism Forum.Hi,Are you a young researcher who conducts research aimed at addressing global challenges within the fields of health, development, environment and equality in all aspects of human life?Or do you know of someone who might fit this description?If so, you/they can apply for The Letten Prize.Prize money at 2,5 MNOK (~235 000 USD).Interested? View the criteria for applying here:/Disclamers: I've been involved with the EA community since 2017, eg. established EA Oslo and served on the board of EA Norway. I am currently working freelance for The Young Academy of Norway, as well as serving on the board of the Letten Foundation. This research prize are a result of a collaboration by the two parties. I see why this may be seen as "spam". However, I truly believe that we have several great candidates within our community who could contend for the prize. I will serve as the secretary of the prize committee, but have no influence on the committees decision.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
HÃ¥kon Sandbakken https://forum.effectivealtruism.org/posts/unwLjhGqvaGfMvKxc/the-letten-prize-an-opportunity-for-young-ea-researchers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Letten Prize - An opportunity for young EA researchers?, published by HÃ¥kon Sandbakken on November 4, 2022 on The Effective Altruism Forum.Hi,Are you a young researcher who conducts research aimed at addressing global challenges within the fields of health, development, environment and equality in all aspects of human life?Or do you know of someone who might fit this description?If so, you/they can apply for The Letten Prize.Prize money at 2,5 MNOK (~235 000 USD).Interested? View the criteria for applying here:/Disclamers: I've been involved with the EA community since 2017, eg. established EA Oslo and served on the board of EA Norway. I am currently working freelance for The Young Academy of Norway, as well as serving on the board of the Letten Foundation. This research prize are a result of a collaboration by the two parties. I see why this may be seen as "spam". However, I truly believe that we have several great candidates within our community who could contend for the prize. I will serve as the secretary of the prize committee, but have no influence on the committees decision.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 05 Nov 2022 07:38:36 +0000 EA - The Letten Prize - An opportunity for young EA researchers? by HÃ¥kon Sandbakken Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Letten Prize - An opportunity for young EA researchers?, published by HÃ¥kon Sandbakken on November 4, 2022 on The Effective Altruism Forum.Hi,Are you a young researcher who conducts research aimed at addressing global challenges within the fields of health, development, environment and equality in all aspects of human life?Or do you know of someone who might fit this description?If so, you/they can apply for The Letten Prize.Prize money at 2,5 MNOK (~235 000 USD).Interested? View the criteria for applying here:/Disclamers: I've been involved with the EA community since 2017, eg. established EA Oslo and served on the board of EA Norway. I am currently working freelance for The Young Academy of Norway, as well as serving on the board of the Letten Foundation. This research prize are a result of a collaboration by the two parties. I see why this may be seen as "spam". However, I truly believe that we have several great candidates within our community who could contend for the prize. I will serve as the secretary of the prize committee, but have no influence on the committees decision.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Letten Prize - An opportunity for young EA researchers?, published by HÃ¥kon Sandbakken on November 4, 2022 on The Effective Altruism Forum.Hi,Are you a young researcher who conducts research aimed at addressing global challenges within the fields of health, development, environment and equality in all aspects of human life?Or do you know of someone who might fit this description?If so, you/they can apply for The Letten Prize.Prize money at 2,5 MNOK (~235 000 USD).Interested? View the criteria for applying here:/Disclamers: I've been involved with the EA community since 2017, eg. established EA Oslo and served on the board of EA Norway. I am currently working freelance for The Young Academy of Norway, as well as serving on the board of the Letten Foundation. This research prize are a result of a collaboration by the two parties. I see why this may be seen as "spam". However, I truly believe that we have several great candidates within our community who could contend for the prize. I will serve as the secretary of the prize committee, but have no influence on the committees decision.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
HÃ¥kon Sandbakken https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:24 None full 3653
cNavhrAa9ssFjzkrK_EA EA - EA orgs should accept Summer 2023 interns in January by Abby Hoskin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA orgs should accept Summer 2023 interns in January, published by Abby Hoskin on November 5, 2022 on The Effective Altruism Forum.TLDR; EA organizations hiring graduating seniors or hosting undergraduates as 2023 summer interns should send program acceptances in January 2023 so that students don't have to accept a corporate offer before they apply to EA programs.The ProblemMultiple undergrads have complained that you can't apply to corporate internships and EA summer programs at the same time. Instead, non-EA orgs have very early deadlines (October to January), allowing them to get decisions back to students very early in the year. They typically want responses from students within a few weeks of sending out offers. Meanwhile, many EA orgs don't send out acceptances until much later, like April, well past the deadline for responding to the corporate internship offers.This puts students in an awkward position: either accept the corporate offers while planning to renege if they get a better EA internship, or gamble on the EA jobs, and potentially not get any. It's bad to miss out on anything for the summer, but lying violates most people's moral principles and going back on an offer that you previously accepted burns bridges at the organization you lied to.As a result of this application deadline system, many talented students are being funneled away from EA summer opportunities.The SolutionIdeally, students could apply for corporate programs and EA fellowships at the same time, see where they are accepted, and take the best offer without having to lie or burn bridges.Moving EA program deadlines up to compete with industry might be challenging for orgs that aren't 100% sure what kind of funding they'll have for summer programs. In this case, consider what kind of guidelines you can provide to prospective candidates to give them a better sense of how likely they will be accepted for a summer program at your org. Also, consider doing early acceptances if an extremely talented student contacts you with an exploding offer from another organization, and publicizing this policy if you decide to enact it.The CompetitionHere are some summer 2023 internship application deadlines for corporations competing with us for top talent (ie this is how early applications need to open in order to get offers back to students on a competitive timeframe):TechnologyGoogle: October 31st, 2022Palantir: Sometime in October 2022ConsultingBain: November 29, 2022McKinsey: January 19th, 2023FinanceGoldman Sachs: November 20th, 2022JP Morgan: November 27th, 2022NB: Many of these organizations have multiple summer programs which all have different deadlines. I've tried to select programs with representative deadlines, but bizarrely, some companies even have different deadlines for students from different universities! If you actually want to apply to one of these places, please go to the org's website to double check the deadline for the program that best suits you.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Abby Hoskin https://forum.effectivealtruism.org/posts/cNavhrAa9ssFjzkrK/ea-orgs-should-accept-summer-2023-interns-in-january Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA orgs should accept Summer 2023 interns in January, published by Abby Hoskin on November 5, 2022 on The Effective Altruism Forum.TLDR; EA organizations hiring graduating seniors or hosting undergraduates as 2023 summer interns should send program acceptances in January 2023 so that students don't have to accept a corporate offer before they apply to EA programs.The ProblemMultiple undergrads have complained that you can't apply to corporate internships and EA summer programs at the same time. Instead, non-EA orgs have very early deadlines (October to January), allowing them to get decisions back to students very early in the year. They typically want responses from students within a few weeks of sending out offers. Meanwhile, many EA orgs don't send out acceptances until much later, like April, well past the deadline for responding to the corporate internship offers.This puts students in an awkward position: either accept the corporate offers while planning to renege if they get a better EA internship, or gamble on the EA jobs, and potentially not get any. It's bad to miss out on anything for the summer, but lying violates most people's moral principles and going back on an offer that you previously accepted burns bridges at the organization you lied to.As a result of this application deadline system, many talented students are being funneled away from EA summer opportunities.The SolutionIdeally, students could apply for corporate programs and EA fellowships at the same time, see where they are accepted, and take the best offer without having to lie or burn bridges.Moving EA program deadlines up to compete with industry might be challenging for orgs that aren't 100% sure what kind of funding they'll have for summer programs. In this case, consider what kind of guidelines you can provide to prospective candidates to give them a better sense of how likely they will be accepted for a summer program at your org. Also, consider doing early acceptances if an extremely talented student contacts you with an exploding offer from another organization, and publicizing this policy if you decide to enact it.The CompetitionHere are some summer 2023 internship application deadlines for corporations competing with us for top talent (ie this is how early applications need to open in order to get offers back to students on a competitive timeframe):TechnologyGoogle: October 31st, 2022Palantir: Sometime in October 2022ConsultingBain: November 29, 2022McKinsey: January 19th, 2023FinanceGoldman Sachs: November 20th, 2022JP Morgan: November 27th, 2022NB: Many of these organizations have multiple summer programs which all have different deadlines. I've tried to select programs with representative deadlines, but bizarrely, some companies even have different deadlines for students from different universities! If you actually want to apply to one of these places, please go to the org's website to double check the deadline for the program that best suits you.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sat, 05 Nov 2022 07:04:28 +0000 EA - EA orgs should accept Summer 2023 interns in January by Abby Hoskin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA orgs should accept Summer 2023 interns in January, published by Abby Hoskin on November 5, 2022 on The Effective Altruism Forum.TLDR; EA organizations hiring graduating seniors or hosting undergraduates as 2023 summer interns should send program acceptances in January 2023 so that students don't have to accept a corporate offer before they apply to EA programs.The ProblemMultiple undergrads have complained that you can't apply to corporate internships and EA summer programs at the same time. Instead, non-EA orgs have very early deadlines (October to January), allowing them to get decisions back to students very early in the year. They typically want responses from students within a few weeks of sending out offers. Meanwhile, many EA orgs don't send out acceptances until much later, like April, well past the deadline for responding to the corporate internship offers.This puts students in an awkward position: either accept the corporate offers while planning to renege if they get a better EA internship, or gamble on the EA jobs, and potentially not get any. It's bad to miss out on anything for the summer, but lying violates most people's moral principles and going back on an offer that you previously accepted burns bridges at the organization you lied to.As a result of this application deadline system, many talented students are being funneled away from EA summer opportunities.The SolutionIdeally, students could apply for corporate programs and EA fellowships at the same time, see where they are accepted, and take the best offer without having to lie or burn bridges.Moving EA program deadlines up to compete with industry might be challenging for orgs that aren't 100% sure what kind of funding they'll have for summer programs. In this case, consider what kind of guidelines you can provide to prospective candidates to give them a better sense of how likely they will be accepted for a summer program at your org. Also, consider doing early acceptances if an extremely talented student contacts you with an exploding offer from another organization, and publicizing this policy if you decide to enact it.The CompetitionHere are some summer 2023 internship application deadlines for corporations competing with us for top talent (ie this is how early applications need to open in order to get offers back to students on a competitive timeframe):TechnologyGoogle: October 31st, 2022Palantir: Sometime in October 2022ConsultingBain: November 29, 2022McKinsey: January 19th, 2023FinanceGoldman Sachs: November 20th, 2022JP Morgan: November 27th, 2022NB: Many of these organizations have multiple summer programs which all have different deadlines. I've tried to select programs with representative deadlines, but bizarrely, some companies even have different deadlines for students from different universities! If you actually want to apply to one of these places, please go to the org's website to double check the deadline for the program that best suits you.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA orgs should accept Summer 2023 interns in January, published by Abby Hoskin on November 5, 2022 on The Effective Altruism Forum.TLDR; EA organizations hiring graduating seniors or hosting undergraduates as 2023 summer interns should send program acceptances in January 2023 so that students don't have to accept a corporate offer before they apply to EA programs.The ProblemMultiple undergrads have complained that you can't apply to corporate internships and EA summer programs at the same time. Instead, non-EA orgs have very early deadlines (October to January), allowing them to get decisions back to students very early in the year. They typically want responses from students within a few weeks of sending out offers. Meanwhile, many EA orgs don't send out acceptances until much later, like April, well past the deadline for responding to the corporate internship offers.This puts students in an awkward position: either accept the corporate offers while planning to renege if they get a better EA internship, or gamble on the EA jobs, and potentially not get any. It's bad to miss out on anything for the summer, but lying violates most people's moral principles and going back on an offer that you previously accepted burns bridges at the organization you lied to.As a result of this application deadline system, many talented students are being funneled away from EA summer opportunities.The SolutionIdeally, students could apply for corporate programs and EA fellowships at the same time, see where they are accepted, and take the best offer without having to lie or burn bridges.Moving EA program deadlines up to compete with industry might be challenging for orgs that aren't 100% sure what kind of funding they'll have for summer programs. In this case, consider what kind of guidelines you can provide to prospective candidates to give them a better sense of how likely they will be accepted for a summer program at your org. Also, consider doing early acceptances if an extremely talented student contacts you with an exploding offer from another organization, and publicizing this policy if you decide to enact it.The CompetitionHere are some summer 2023 internship application deadlines for corporations competing with us for top talent (ie this is how early applications need to open in order to get offers back to students on a competitive timeframe):TechnologyGoogle: October 31st, 2022Palantir: Sometime in October 2022ConsultingBain: November 29, 2022McKinsey: January 19th, 2023FinanceGoldman Sachs: November 20th, 2022JP Morgan: November 27th, 2022NB: Many of these organizations have multiple summer programs which all have different deadlines. I've tried to select programs with representative deadlines, but bizarrely, some companies even have different deadlines for students from different universities! If you actually want to apply to one of these places, please go to the org's website to double check the deadline for the program that best suits you.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Abby Hoskin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:09 None full 3650
vHxKLNQciXN4taEdd_EA EA - Applications are now open for Intro to ML Safety Spring 2023 by Joshc Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications are now open for Intro to ML Safety Spring 2023, published by Joshc on November 4, 2022 on The Effective Altruism Forum.The Center for AI Safety is running another iteration of Intro to ML Safety this Spring for people who want to learn about empirical AI safety research topics.Apply to be a participant by January 29th, 2023.Apply to be a facilitator by December 30th.Website: mlsafety.org/intro-to-ml-safetyAbout the CourseIntroduction to ML Safety is an 8-week course that aims to introduce students with a deep learning background to empirical AI Safety research. The program is designed and taught by Dan Hendrycks, a UC Berkeley ML PhD and director of the Center for AI Safety, and provides an introduction to robustness, alignment, monitoring, systemic safety, and conceptual foundations for existential risk.Each week, participants will be assigned readings, lecture videos, and required homework assignments. The materials are publicly available at course.mlsafety.org.There are two tracks:The introductory track: for people who are new to AI Safety. This track aims to familiarize students with the AI X-risk discussion alongside empirical research directions.The advanced track: for people who already have a conceptual understanding of AI X-risk and want to learn more about existing empirical safety research so they can start contributing.The course will be virtual by default, though in-person sections may be offered at some universities.How is this program different from AGISF?Intro to ML Safety is generally more focused on empirical topics rather than conceptual work. Participants are required to watch recorded lectures and complete homework assignments that test their understanding of the technical material. If you’ve already taken AGISF and are interested in empirical research, then you are the target audience for the advanced track.Intro to ML Safety also emphasizes different ideas and research directions than AGISF does. Examples include:Detecting trojans: this is a current security issue but also a potential microcosm for detecting deception and testing monitoring tools.Adversarial robustness: it is helpful for reward models to be adversary robust. Otherwise, the models they are used to train can ‘overoptimize’ them and exploit their deficiencies instead of performing as intended. This applies whenever an AI system is used to evaluate another AI system. For example, an ELK reporter must also be highly robust if its output is used as a training signal.Power averseness: Arguments for taking AI seriously as an existential risk often focus on power-seeking behavior. Can we train language models to avoid power-seeking actions in text-based games?You can read about more examples in Open Problems in AI X-risk.Time CommitmentThe program will last 8 weeks, beginning on February 20th and ending on April 14th. Participants are expected to commit at least 5 hours per week. This includes ~1 hour of recorded lectures (which will take more than one hour to digest), ~1-2 hours of readings, ~1-2 hours of written assignments, and 1 hour of discussion.We understand that 5 hours is a large time commitment, so to make our program more inclusive and remove any financial barriers, we will provide a $1000 stipend upon completion of the course. Refer to this document to check whether you are eligible to receive a stipend (most students should be).EligibilityAnyone is eligible to apply. The prerequisites are:Deep learning: you can gauge the background knowledge required by skimming the week 1 slides: deep learning review.Linear algebra or introductory statistics (e.g., AP Statistics)Multivariate differential calculusIf you are not sure whether you meet these prerequisites, err on the side of applying. We will review applications on a case-by-case basis....]]>
Joshc https://forum.effectivealtruism.org/posts/vHxKLNQciXN4taEdd/applications-are-now-open-for-intro-to-ml-safety-spring-2023 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications are now open for Intro to ML Safety Spring 2023, published by Joshc on November 4, 2022 on The Effective Altruism Forum.The Center for AI Safety is running another iteration of Intro to ML Safety this Spring for people who want to learn about empirical AI safety research topics.Apply to be a participant by January 29th, 2023.Apply to be a facilitator by December 30th.Website: mlsafety.org/intro-to-ml-safetyAbout the CourseIntroduction to ML Safety is an 8-week course that aims to introduce students with a deep learning background to empirical AI Safety research. The program is designed and taught by Dan Hendrycks, a UC Berkeley ML PhD and director of the Center for AI Safety, and provides an introduction to robustness, alignment, monitoring, systemic safety, and conceptual foundations for existential risk.Each week, participants will be assigned readings, lecture videos, and required homework assignments. The materials are publicly available at course.mlsafety.org.There are two tracks:The introductory track: for people who are new to AI Safety. This track aims to familiarize students with the AI X-risk discussion alongside empirical research directions.The advanced track: for people who already have a conceptual understanding of AI X-risk and want to learn more about existing empirical safety research so they can start contributing.The course will be virtual by default, though in-person sections may be offered at some universities.How is this program different from AGISF?Intro to ML Safety is generally more focused on empirical topics rather than conceptual work. Participants are required to watch recorded lectures and complete homework assignments that test their understanding of the technical material. If you’ve already taken AGISF and are interested in empirical research, then you are the target audience for the advanced track.Intro to ML Safety also emphasizes different ideas and research directions than AGISF does. Examples include:Detecting trojans: this is a current security issue but also a potential microcosm for detecting deception and testing monitoring tools.Adversarial robustness: it is helpful for reward models to be adversary robust. Otherwise, the models they are used to train can ‘overoptimize’ them and exploit their deficiencies instead of performing as intended. This applies whenever an AI system is used to evaluate another AI system. For example, an ELK reporter must also be highly robust if its output is used as a training signal.Power averseness: Arguments for taking AI seriously as an existential risk often focus on power-seeking behavior. Can we train language models to avoid power-seeking actions in text-based games?You can read about more examples in Open Problems in AI X-risk.Time CommitmentThe program will last 8 weeks, beginning on February 20th and ending on April 14th. Participants are expected to commit at least 5 hours per week. This includes ~1 hour of recorded lectures (which will take more than one hour to digest), ~1-2 hours of readings, ~1-2 hours of written assignments, and 1 hour of discussion.We understand that 5 hours is a large time commitment, so to make our program more inclusive and remove any financial barriers, we will provide a $1000 stipend upon completion of the course. Refer to this document to check whether you are eligible to receive a stipend (most students should be).EligibilityAnyone is eligible to apply. The prerequisites are:Deep learning: you can gauge the background knowledge required by skimming the week 1 slides: deep learning review.Linear algebra or introductory statistics (e.g., AP Statistics)Multivariate differential calculusIf you are not sure whether you meet these prerequisites, err on the side of applying. We will review applications on a case-by-case basis....]]>
Sat, 05 Nov 2022 05:37:15 +0000 EA - Applications are now open for Intro to ML Safety Spring 2023 by Joshc Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications are now open for Intro to ML Safety Spring 2023, published by Joshc on November 4, 2022 on The Effective Altruism Forum.The Center for AI Safety is running another iteration of Intro to ML Safety this Spring for people who want to learn about empirical AI safety research topics.Apply to be a participant by January 29th, 2023.Apply to be a facilitator by December 30th.Website: mlsafety.org/intro-to-ml-safetyAbout the CourseIntroduction to ML Safety is an 8-week course that aims to introduce students with a deep learning background to empirical AI Safety research. The program is designed and taught by Dan Hendrycks, a UC Berkeley ML PhD and director of the Center for AI Safety, and provides an introduction to robustness, alignment, monitoring, systemic safety, and conceptual foundations for existential risk.Each week, participants will be assigned readings, lecture videos, and required homework assignments. The materials are publicly available at course.mlsafety.org.There are two tracks:The introductory track: for people who are new to AI Safety. This track aims to familiarize students with the AI X-risk discussion alongside empirical research directions.The advanced track: for people who already have a conceptual understanding of AI X-risk and want to learn more about existing empirical safety research so they can start contributing.The course will be virtual by default, though in-person sections may be offered at some universities.How is this program different from AGISF?Intro to ML Safety is generally more focused on empirical topics rather than conceptual work. Participants are required to watch recorded lectures and complete homework assignments that test their understanding of the technical material. If you’ve already taken AGISF and are interested in empirical research, then you are the target audience for the advanced track.Intro to ML Safety also emphasizes different ideas and research directions than AGISF does. Examples include:Detecting trojans: this is a current security issue but also a potential microcosm for detecting deception and testing monitoring tools.Adversarial robustness: it is helpful for reward models to be adversary robust. Otherwise, the models they are used to train can ‘overoptimize’ them and exploit their deficiencies instead of performing as intended. This applies whenever an AI system is used to evaluate another AI system. For example, an ELK reporter must also be highly robust if its output is used as a training signal.Power averseness: Arguments for taking AI seriously as an existential risk often focus on power-seeking behavior. Can we train language models to avoid power-seeking actions in text-based games?You can read about more examples in Open Problems in AI X-risk.Time CommitmentThe program will last 8 weeks, beginning on February 20th and ending on April 14th. Participants are expected to commit at least 5 hours per week. This includes ~1 hour of recorded lectures (which will take more than one hour to digest), ~1-2 hours of readings, ~1-2 hours of written assignments, and 1 hour of discussion.We understand that 5 hours is a large time commitment, so to make our program more inclusive and remove any financial barriers, we will provide a $1000 stipend upon completion of the course. Refer to this document to check whether you are eligible to receive a stipend (most students should be).EligibilityAnyone is eligible to apply. The prerequisites are:Deep learning: you can gauge the background knowledge required by skimming the week 1 slides: deep learning review.Linear algebra or introductory statistics (e.g., AP Statistics)Multivariate differential calculusIf you are not sure whether you meet these prerequisites, err on the side of applying. We will review applications on a case-by-case basis....]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications are now open for Intro to ML Safety Spring 2023, published by Joshc on November 4, 2022 on The Effective Altruism Forum.The Center for AI Safety is running another iteration of Intro to ML Safety this Spring for people who want to learn about empirical AI safety research topics.Apply to be a participant by January 29th, 2023.Apply to be a facilitator by December 30th.Website: mlsafety.org/intro-to-ml-safetyAbout the CourseIntroduction to ML Safety is an 8-week course that aims to introduce students with a deep learning background to empirical AI Safety research. The program is designed and taught by Dan Hendrycks, a UC Berkeley ML PhD and director of the Center for AI Safety, and provides an introduction to robustness, alignment, monitoring, systemic safety, and conceptual foundations for existential risk.Each week, participants will be assigned readings, lecture videos, and required homework assignments. The materials are publicly available at course.mlsafety.org.There are two tracks:The introductory track: for people who are new to AI Safety. This track aims to familiarize students with the AI X-risk discussion alongside empirical research directions.The advanced track: for people who already have a conceptual understanding of AI X-risk and want to learn more about existing empirical safety research so they can start contributing.The course will be virtual by default, though in-person sections may be offered at some universities.How is this program different from AGISF?Intro to ML Safety is generally more focused on empirical topics rather than conceptual work. Participants are required to watch recorded lectures and complete homework assignments that test their understanding of the technical material. If you’ve already taken AGISF and are interested in empirical research, then you are the target audience for the advanced track.Intro to ML Safety also emphasizes different ideas and research directions than AGISF does. Examples include:Detecting trojans: this is a current security issue but also a potential microcosm for detecting deception and testing monitoring tools.Adversarial robustness: it is helpful for reward models to be adversary robust. Otherwise, the models they are used to train can ‘overoptimize’ them and exploit their deficiencies instead of performing as intended. This applies whenever an AI system is used to evaluate another AI system. For example, an ELK reporter must also be highly robust if its output is used as a training signal.Power averseness: Arguments for taking AI seriously as an existential risk often focus on power-seeking behavior. Can we train language models to avoid power-seeking actions in text-based games?You can read about more examples in Open Problems in AI X-risk.Time CommitmentThe program will last 8 weeks, beginning on February 20th and ending on April 14th. Participants are expected to commit at least 5 hours per week. This includes ~1 hour of recorded lectures (which will take more than one hour to digest), ~1-2 hours of readings, ~1-2 hours of written assignments, and 1 hour of discussion.We understand that 5 hours is a large time commitment, so to make our program more inclusive and remove any financial barriers, we will provide a $1000 stipend upon completion of the course. Refer to this document to check whether you are eligible to receive a stipend (most students should be).EligibilityAnyone is eligible to apply. The prerequisites are:Deep learning: you can gauge the background knowledge required by skimming the week 1 slides: deep learning review.Linear algebra or introductory statistics (e.g., AP Statistics)Multivariate differential calculusIf you are not sure whether you meet these prerequisites, err on the side of applying. We will review applications on a case-by-case basis....]]>
Joshc https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:35 None full 3651
W5taETbarXR9CQnEQ_EA EA - How CEA approaches applications to our programs by Amy Labenz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How CEA approaches applications to our programs, published by Amy Labenz on November 4, 2022 on The Effective Altruism Forum.Our programs exist to have a positive impact on the world, rather than to serve the effective altruism community as an end goal. This unfortunately means EAs will sometimes be disappointed because of decisions we’ve made — though if this results in the world being a worse place overall, then we’ve clearly made a mistake. This is one of the hard parts about how EA is both a community and a professional space.Naturally, people want to know things likeWhy didn’t I get admitted to a conference, when EA is really important to me and I’m taking actions inspired by EA?Why didn’t my friend get admitted to a conference, when they seem like a good applicant?We can understand why people would often like feedback on what they could have done differently or what they can try next time to get a better result. Or they just want to know what happened. When we have a specific idea about what would improve someone’s chances (like “you didn’t give much detail on your application, could you add more information?”) we’ll often give it.But we get thousands of applications and we don’t think it’s the best use of our staff’s time to give specific feedback about all of them. Often we don’t have constructive feedback to give.Many of the things that go into a decision are not easy to pin down — how well we think you understand EA, how we think you’ll add to the social environment, how much we think you’ll benefit from the event given the program we’ve prepared, etc. These things are subjective, and in a lot of cases, reasonable people could disagree about what call to make. There are also cases where we’ll just make mistakes (by our own standards), sometimes in favor of an applicant and sometimes against them.How we communicate about our programsIn responding to public discussion of our programs, sometimes we’ve gotten more in the weeds than we think was ideal. We’ve provided rebuttals or more information about some points but not others, which makes people understandably confused about how much information to expect from us and what the full picture is. It also uses a lot of our staff time. As the EA community grows, we need to adjust how we handle communications with the community.What you should expect from us going forward:When we think there are significant updates to our programs that the community should know about, or when there seems to be widespread confusion or misunderstanding about a particular topic, we’ll likely write a post (like this one).We’ll likely be less involved in the comments or extended public back-and-forth.We’ll read much of the public feedback about our programs, and will definitely read feedback you send directly to us (unless it’s extraordinarily long). We won’t respond in-depth to much of the feedback, though.Our programs are a work in progress, and we’ll take feedback into account as we try to improve them.What we hope you’ll do:Feel free to express your criticisms, observations, and advice about our programs. This could be publicly if you think that’s best, or by writing to us directly. Our contact form can be filled out anonymously. Or you can reach specific programs at groups@centreforeffectivealtruism.org, forum@effectivealtruism.org, or hello@eaglobal.org, or community health’s form.In general, we think it’s a good idea to fact-check public criticisms before publishing. If you send us something to fact-check, we’ll try to do so.If you think we’ve missed important considerations or information on a decision, like about someone’s application, you can send us additional information.On events specifically:We try to run a range of events that serve the breadth of the community.We recognize that there are people who ar...]]>
Amy Labenz https://forum.effectivealtruism.org/posts/W5taETbarXR9CQnEQ/how-cea-approaches-applications-to-our-programs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How CEA approaches applications to our programs, published by Amy Labenz on November 4, 2022 on The Effective Altruism Forum.Our programs exist to have a positive impact on the world, rather than to serve the effective altruism community as an end goal. This unfortunately means EAs will sometimes be disappointed because of decisions we’ve made — though if this results in the world being a worse place overall, then we’ve clearly made a mistake. This is one of the hard parts about how EA is both a community and a professional space.Naturally, people want to know things likeWhy didn’t I get admitted to a conference, when EA is really important to me and I’m taking actions inspired by EA?Why didn’t my friend get admitted to a conference, when they seem like a good applicant?We can understand why people would often like feedback on what they could have done differently or what they can try next time to get a better result. Or they just want to know what happened. When we have a specific idea about what would improve someone’s chances (like “you didn’t give much detail on your application, could you add more information?”) we’ll often give it.But we get thousands of applications and we don’t think it’s the best use of our staff’s time to give specific feedback about all of them. Often we don’t have constructive feedback to give.Many of the things that go into a decision are not easy to pin down — how well we think you understand EA, how we think you’ll add to the social environment, how much we think you’ll benefit from the event given the program we’ve prepared, etc. These things are subjective, and in a lot of cases, reasonable people could disagree about what call to make. There are also cases where we’ll just make mistakes (by our own standards), sometimes in favor of an applicant and sometimes against them.How we communicate about our programsIn responding to public discussion of our programs, sometimes we’ve gotten more in the weeds than we think was ideal. We’ve provided rebuttals or more information about some points but not others, which makes people understandably confused about how much information to expect from us and what the full picture is. It also uses a lot of our staff time. As the EA community grows, we need to adjust how we handle communications with the community.What you should expect from us going forward:When we think there are significant updates to our programs that the community should know about, or when there seems to be widespread confusion or misunderstanding about a particular topic, we’ll likely write a post (like this one).We’ll likely be less involved in the comments or extended public back-and-forth.We’ll read much of the public feedback about our programs, and will definitely read feedback you send directly to us (unless it’s extraordinarily long). We won’t respond in-depth to much of the feedback, though.Our programs are a work in progress, and we’ll take feedback into account as we try to improve them.What we hope you’ll do:Feel free to express your criticisms, observations, and advice about our programs. This could be publicly if you think that’s best, or by writing to us directly. Our contact form can be filled out anonymously. Or you can reach specific programs at groups@centreforeffectivealtruism.org, forum@effectivealtruism.org, or hello@eaglobal.org, or community health’s form.In general, we think it’s a good idea to fact-check public criticisms before publishing. If you send us something to fact-check, we’ll try to do so.If you think we’ve missed important considerations or information on a decision, like about someone’s application, you can send us additional information.On events specifically:We try to run a range of events that serve the breadth of the community.We recognize that there are people who ar...]]>
Fri, 04 Nov 2022 23:47:35 +0000 EA - How CEA approaches applications to our programs by Amy Labenz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How CEA approaches applications to our programs, published by Amy Labenz on November 4, 2022 on The Effective Altruism Forum.Our programs exist to have a positive impact on the world, rather than to serve the effective altruism community as an end goal. This unfortunately means EAs will sometimes be disappointed because of decisions we’ve made — though if this results in the world being a worse place overall, then we’ve clearly made a mistake. This is one of the hard parts about how EA is both a community and a professional space.Naturally, people want to know things likeWhy didn’t I get admitted to a conference, when EA is really important to me and I’m taking actions inspired by EA?Why didn’t my friend get admitted to a conference, when they seem like a good applicant?We can understand why people would often like feedback on what they could have done differently or what they can try next time to get a better result. Or they just want to know what happened. When we have a specific idea about what would improve someone’s chances (like “you didn’t give much detail on your application, could you add more information?”) we’ll often give it.But we get thousands of applications and we don’t think it’s the best use of our staff’s time to give specific feedback about all of them. Often we don’t have constructive feedback to give.Many of the things that go into a decision are not easy to pin down — how well we think you understand EA, how we think you’ll add to the social environment, how much we think you’ll benefit from the event given the program we’ve prepared, etc. These things are subjective, and in a lot of cases, reasonable people could disagree about what call to make. There are also cases where we’ll just make mistakes (by our own standards), sometimes in favor of an applicant and sometimes against them.How we communicate about our programsIn responding to public discussion of our programs, sometimes we’ve gotten more in the weeds than we think was ideal. We’ve provided rebuttals or more information about some points but not others, which makes people understandably confused about how much information to expect from us and what the full picture is. It also uses a lot of our staff time. As the EA community grows, we need to adjust how we handle communications with the community.What you should expect from us going forward:When we think there are significant updates to our programs that the community should know about, or when there seems to be widespread confusion or misunderstanding about a particular topic, we’ll likely write a post (like this one).We’ll likely be less involved in the comments or extended public back-and-forth.We’ll read much of the public feedback about our programs, and will definitely read feedback you send directly to us (unless it’s extraordinarily long). We won’t respond in-depth to much of the feedback, though.Our programs are a work in progress, and we’ll take feedback into account as we try to improve them.What we hope you’ll do:Feel free to express your criticisms, observations, and advice about our programs. This could be publicly if you think that’s best, or by writing to us directly. Our contact form can be filled out anonymously. Or you can reach specific programs at groups@centreforeffectivealtruism.org, forum@effectivealtruism.org, or hello@eaglobal.org, or community health’s form.In general, we think it’s a good idea to fact-check public criticisms before publishing. If you send us something to fact-check, we’ll try to do so.If you think we’ve missed important considerations or information on a decision, like about someone’s application, you can send us additional information.On events specifically:We try to run a range of events that serve the breadth of the community.We recognize that there are people who ar...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How CEA approaches applications to our programs, published by Amy Labenz on November 4, 2022 on The Effective Altruism Forum.Our programs exist to have a positive impact on the world, rather than to serve the effective altruism community as an end goal. This unfortunately means EAs will sometimes be disappointed because of decisions we’ve made — though if this results in the world being a worse place overall, then we’ve clearly made a mistake. This is one of the hard parts about how EA is both a community and a professional space.Naturally, people want to know things likeWhy didn’t I get admitted to a conference, when EA is really important to me and I’m taking actions inspired by EA?Why didn’t my friend get admitted to a conference, when they seem like a good applicant?We can understand why people would often like feedback on what they could have done differently or what they can try next time to get a better result. Or they just want to know what happened. When we have a specific idea about what would improve someone’s chances (like “you didn’t give much detail on your application, could you add more information?”) we’ll often give it.But we get thousands of applications and we don’t think it’s the best use of our staff’s time to give specific feedback about all of them. Often we don’t have constructive feedback to give.Many of the things that go into a decision are not easy to pin down — how well we think you understand EA, how we think you’ll add to the social environment, how much we think you’ll benefit from the event given the program we’ve prepared, etc. These things are subjective, and in a lot of cases, reasonable people could disagree about what call to make. There are also cases where we’ll just make mistakes (by our own standards), sometimes in favor of an applicant and sometimes against them.How we communicate about our programsIn responding to public discussion of our programs, sometimes we’ve gotten more in the weeds than we think was ideal. We’ve provided rebuttals or more information about some points but not others, which makes people understandably confused about how much information to expect from us and what the full picture is. It also uses a lot of our staff time. As the EA community grows, we need to adjust how we handle communications with the community.What you should expect from us going forward:When we think there are significant updates to our programs that the community should know about, or when there seems to be widespread confusion or misunderstanding about a particular topic, we’ll likely write a post (like this one).We’ll likely be less involved in the comments or extended public back-and-forth.We’ll read much of the public feedback about our programs, and will definitely read feedback you send directly to us (unless it’s extraordinarily long). We won’t respond in-depth to much of the feedback, though.Our programs are a work in progress, and we’ll take feedback into account as we try to improve them.What we hope you’ll do:Feel free to express your criticisms, observations, and advice about our programs. This could be publicly if you think that’s best, or by writing to us directly. Our contact form can be filled out anonymously. Or you can reach specific programs at groups@centreforeffectivealtruism.org, forum@effectivealtruism.org, or hello@eaglobal.org, or community health’s form.In general, we think it’s a good idea to fact-check public criticisms before publishing. If you send us something to fact-check, we’ll try to do so.If you think we’ve missed important considerations or information on a decision, like about someone’s application, you can send us additional information.On events specifically:We try to run a range of events that serve the breadth of the community.We recognize that there are people who ar...]]>
Amy Labenz https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:26 None full 3652
nB778dXNsHqHthFC5_EA EA - Metaforecast late 2022 update: GraphQL API, Charts, better infrastructure behind the scenes. by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaforecast late 2022 update: GraphQL API, Charts, better infrastructure behind the scenes., published by NunoSempere on November 4, 2022 on The Effective Altruism Forum.tl;dr: Metaforecast is a search engine and an associated repository for forecasting questions. Since our last update, we have added a GraphQL API, charts, and dashboards. We have also reworked our infrastructure to make it more stable.New APIOur most significant new addition is our GraphQL API. It allows other people to build on top of our efforts. It can be accessed on metaforecast.org/api/graphql, and looks similar to the EA Forum's own graphql api.To get the first 1000 questions, you could use a query like:titleurldescriptionoptions { nameprobability} qualityIndicators { numForecastsstars} timestamp} } pageInfo { endCursorstartCursorYou can find more examples, like code to download all questions, in our /scripts folder, to which we welcome contributions.Charts and question pages.Charts display a question's history. They look as follows:Charts can be accessed by clicking the expand button on the front page although they are fairly slow to load at the moment.Clicking on the expand button brings the user to a question page, which contains a chart, the full question description, and a range of quality indicators:We are also providing an endpoint at metaforecast.org/questions/embed/[id] to allow other pages to embed our charts. For instance, to embed a question whose id is betfair-1.178163916, the endpoint would be here. One would use it in the following code:You can find the necessary question id by clicking a toggle under "advanced options" on the frontpage, or simply by noticing the id in our URL when expanding the question.With time, we aim to improve these pages, make them more interactive, etc. We also think it would be a good idea to embed Metaforecast questions and dashboards into the EA Forum, and we are trying to collaborate with the Manifold team, who have done this before, to make that happen.DashboardsDashboards are collections of questions. For instance, here is a dashboard on global markets and inflation, as embedded in Global Guessing.Like questions, you can either view dashboards directly, or embed them. You can also create them, at.Better infrastructureWe have also revamped our infrastructure. We moved to from JavaScript to Typescript, from MongoDB to Postgres, and simplified our backend.We are open to collaborationsWe are very much open to collaborations. If you want to integrate Metaforecast into your project and need help do not hesitate to reach out, e.g., on our Github.Metaforecast is also open source, and we welcome contributions. You can see some to-dos here. Developing is going more slowly now because it's mostly driven by Nuño working in his spare time, so contributions would be counterfactual.AcknowledgementsMetaforecast is hosted by the Quantified Uncertainty Research Institute, and has received funding from Astral Codex Ten. It has received significant contributions from Vyacheslav Matyuhin, who was responsible for the upgrade to Typescript and GraphQL. Thanks to Clay Graubard of Global Guessing for their comments and dashboards, to Insight Prediction for help smoothing out their API, to Nathan Young for general comments, to others for their comments and suggestions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
NunoSempere https://forum.effectivealtruism.org/posts/nB778dXNsHqHthFC5/metaforecast-late-2022-update-graphql-api-charts-better Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaforecast late 2022 update: GraphQL API, Charts, better infrastructure behind the scenes., published by NunoSempere on November 4, 2022 on The Effective Altruism Forum.tl;dr: Metaforecast is a search engine and an associated repository for forecasting questions. Since our last update, we have added a GraphQL API, charts, and dashboards. We have also reworked our infrastructure to make it more stable.New APIOur most significant new addition is our GraphQL API. It allows other people to build on top of our efforts. It can be accessed on metaforecast.org/api/graphql, and looks similar to the EA Forum's own graphql api.To get the first 1000 questions, you could use a query like:titleurldescriptionoptions { nameprobability} qualityIndicators { numForecastsstars} timestamp} } pageInfo { endCursorstartCursorYou can find more examples, like code to download all questions, in our /scripts folder, to which we welcome contributions.Charts and question pages.Charts display a question's history. They look as follows:Charts can be accessed by clicking the expand button on the front page although they are fairly slow to load at the moment.Clicking on the expand button brings the user to a question page, which contains a chart, the full question description, and a range of quality indicators:We are also providing an endpoint at metaforecast.org/questions/embed/[id] to allow other pages to embed our charts. For instance, to embed a question whose id is betfair-1.178163916, the endpoint would be here. One would use it in the following code:You can find the necessary question id by clicking a toggle under "advanced options" on the frontpage, or simply by noticing the id in our URL when expanding the question.With time, we aim to improve these pages, make them more interactive, etc. We also think it would be a good idea to embed Metaforecast questions and dashboards into the EA Forum, and we are trying to collaborate with the Manifold team, who have done this before, to make that happen.DashboardsDashboards are collections of questions. For instance, here is a dashboard on global markets and inflation, as embedded in Global Guessing.Like questions, you can either view dashboards directly, or embed them. You can also create them, at.Better infrastructureWe have also revamped our infrastructure. We moved to from JavaScript to Typescript, from MongoDB to Postgres, and simplified our backend.We are open to collaborationsWe are very much open to collaborations. If you want to integrate Metaforecast into your project and need help do not hesitate to reach out, e.g., on our Github.Metaforecast is also open source, and we welcome contributions. You can see some to-dos here. Developing is going more slowly now because it's mostly driven by Nuño working in his spare time, so contributions would be counterfactual.AcknowledgementsMetaforecast is hosted by the Quantified Uncertainty Research Institute, and has received funding from Astral Codex Ten. It has received significant contributions from Vyacheslav Matyuhin, who was responsible for the upgrade to Typescript and GraphQL. Thanks to Clay Graubard of Global Guessing for their comments and dashboards, to Insight Prediction for help smoothing out their API, to Nathan Young for general comments, to others for their comments and suggestions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 04 Nov 2022 22:15:09 +0000 EA - Metaforecast late 2022 update: GraphQL API, Charts, better infrastructure behind the scenes. by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaforecast late 2022 update: GraphQL API, Charts, better infrastructure behind the scenes., published by NunoSempere on November 4, 2022 on The Effective Altruism Forum.tl;dr: Metaforecast is a search engine and an associated repository for forecasting questions. Since our last update, we have added a GraphQL API, charts, and dashboards. We have also reworked our infrastructure to make it more stable.New APIOur most significant new addition is our GraphQL API. It allows other people to build on top of our efforts. It can be accessed on metaforecast.org/api/graphql, and looks similar to the EA Forum's own graphql api.To get the first 1000 questions, you could use a query like:titleurldescriptionoptions { nameprobability} qualityIndicators { numForecastsstars} timestamp} } pageInfo { endCursorstartCursorYou can find more examples, like code to download all questions, in our /scripts folder, to which we welcome contributions.Charts and question pages.Charts display a question's history. They look as follows:Charts can be accessed by clicking the expand button on the front page although they are fairly slow to load at the moment.Clicking on the expand button brings the user to a question page, which contains a chart, the full question description, and a range of quality indicators:We are also providing an endpoint at metaforecast.org/questions/embed/[id] to allow other pages to embed our charts. For instance, to embed a question whose id is betfair-1.178163916, the endpoint would be here. One would use it in the following code:You can find the necessary question id by clicking a toggle under "advanced options" on the frontpage, or simply by noticing the id in our URL when expanding the question.With time, we aim to improve these pages, make them more interactive, etc. We also think it would be a good idea to embed Metaforecast questions and dashboards into the EA Forum, and we are trying to collaborate with the Manifold team, who have done this before, to make that happen.DashboardsDashboards are collections of questions. For instance, here is a dashboard on global markets and inflation, as embedded in Global Guessing.Like questions, you can either view dashboards directly, or embed them. You can also create them, at.Better infrastructureWe have also revamped our infrastructure. We moved to from JavaScript to Typescript, from MongoDB to Postgres, and simplified our backend.We are open to collaborationsWe are very much open to collaborations. If you want to integrate Metaforecast into your project and need help do not hesitate to reach out, e.g., on our Github.Metaforecast is also open source, and we welcome contributions. You can see some to-dos here. Developing is going more slowly now because it's mostly driven by Nuño working in his spare time, so contributions would be counterfactual.AcknowledgementsMetaforecast is hosted by the Quantified Uncertainty Research Institute, and has received funding from Astral Codex Ten. It has received significant contributions from Vyacheslav Matyuhin, who was responsible for the upgrade to Typescript and GraphQL. Thanks to Clay Graubard of Global Guessing for their comments and dashboards, to Insight Prediction for help smoothing out their API, to Nathan Young for general comments, to others for their comments and suggestions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaforecast late 2022 update: GraphQL API, Charts, better infrastructure behind the scenes., published by NunoSempere on November 4, 2022 on The Effective Altruism Forum.tl;dr: Metaforecast is a search engine and an associated repository for forecasting questions. Since our last update, we have added a GraphQL API, charts, and dashboards. We have also reworked our infrastructure to make it more stable.New APIOur most significant new addition is our GraphQL API. It allows other people to build on top of our efforts. It can be accessed on metaforecast.org/api/graphql, and looks similar to the EA Forum's own graphql api.To get the first 1000 questions, you could use a query like:titleurldescriptionoptions { nameprobability} qualityIndicators { numForecastsstars} timestamp} } pageInfo { endCursorstartCursorYou can find more examples, like code to download all questions, in our /scripts folder, to which we welcome contributions.Charts and question pages.Charts display a question's history. They look as follows:Charts can be accessed by clicking the expand button on the front page although they are fairly slow to load at the moment.Clicking on the expand button brings the user to a question page, which contains a chart, the full question description, and a range of quality indicators:We are also providing an endpoint at metaforecast.org/questions/embed/[id] to allow other pages to embed our charts. For instance, to embed a question whose id is betfair-1.178163916, the endpoint would be here. One would use it in the following code:You can find the necessary question id by clicking a toggle under "advanced options" on the frontpage, or simply by noticing the id in our URL when expanding the question.With time, we aim to improve these pages, make them more interactive, etc. We also think it would be a good idea to embed Metaforecast questions and dashboards into the EA Forum, and we are trying to collaborate with the Manifold team, who have done this before, to make that happen.DashboardsDashboards are collections of questions. For instance, here is a dashboard on global markets and inflation, as embedded in Global Guessing.Like questions, you can either view dashboards directly, or embed them. You can also create them, at.Better infrastructureWe have also revamped our infrastructure. We moved to from JavaScript to Typescript, from MongoDB to Postgres, and simplified our backend.We are open to collaborationsWe are very much open to collaborations. If you want to integrate Metaforecast into your project and need help do not hesitate to reach out, e.g., on our Github.Metaforecast is also open source, and we welcome contributions. You can see some to-dos here. Developing is going more slowly now because it's mostly driven by Nuño working in his spare time, so contributions would be counterfactual.AcknowledgementsMetaforecast is hosted by the Quantified Uncertainty Research Institute, and has received funding from Astral Codex Ten. It has received significant contributions from Vyacheslav Matyuhin, who was responsible for the upgrade to Typescript and GraphQL. Thanks to Clay Graubard of Global Guessing for their comments and dashboards, to Insight Prediction for help smoothing out their API, to Nathan Young for general comments, to others for their comments and suggestions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
NunoSempere https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:46 None full 3645
CAWvSaGWNbCnoRccd_EA EA - Supporting online connections: what I learned after trying to find impactful opportunities by Clifford Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Supporting online connections: what I learned after trying to find impactful opportunities, published by Clifford on November 4, 2022 on The Effective Altruism Forum.I’ve been exploring ideas for how the Forum could help people make a valuable connection over the course of 4+ months. In this doc, I share the different areas I’ve been thinking about, how promising I think they are and what I’ve learnt about them.Overall, I don't think any of the ideas meet the bar for focusing on connections via an online platform. My bar for an idea was that it could facilitate roughly an EAG’s worth of quality-adjusted connections each year (~10,000). I’m now more excited about meeting this bar by targeting high-value outcomes (e.g. collaborations, jobs), even if that means a lower number of total connections.I cover:Overall takeWhy did we explore connections?What ideas have we explored?FAQ: Why don’t you build a year-round Swapcard?Other ideas we didn’t exploreOverall takeI’m more excited about targeting high-value outcomes that come from connections (e.g. collaborations, jobs) rather than trying to broadly facilitate conversations online.I wanted to find a product that could plausibly match a single EAG (Effective Altruism Global) in terms of the number of connections made in a year.A single EAG costs roughly 2 FTEs a year to run (excluding other costs)If Sarah (my colleague) and I are choosing what to work on, you could imagine that the alternative we’re trying to beat or match is running another EAG.Note: this is a relatively ambitious target. I think it would be more achievable to build something that’s still cost-effective but which has less potential for scale. For example, some of the ideas I explore below would be much more cost-effective than an EAG, because it's much cheaper to run online services than in-person conferences but would generate far fewer connections. [Related Forum post.]I’ve found it hard to compete with EAG on number of “connections”, where a connection means someone you feel you can ask a favour of (likely after a half-hour conversation).At an EAG, each attendee makes an average of 8 connections via lots of 1:1 chats in an environment (in-person) optimised for good conversations.To match this, we’d need 10% of our monthly active readership to make 4 connections a year, which isn’t crazy but feels like a stretch as only 8% of that number have ever sent a message on the forum.I think it might be easier to compete with EAG on the downstream effects of connections, e.g. a person ended up collaborating on a paper, a person ended up getting hired.My best guess is that EAGs get outcomes like this for something like 1-5% of connections. I think it’s much easier for the Forum to get 100-500 additional concrete outcomes (like collaborations or jobs) a year, than 10k connections.The rest of this contains details on what we explored, why and our assessment of each idea.Why did we explore connections?We thought we should work on connections because:Connections are valuableThey are cited as an important reason that people get involved in EA in the Open Phil EA/Longtermist Survey35% of people said personal contact with EAs was important for them getting involved38% said personal contacts had the largest influence on their personal ability to have a positive impactWe had some evidence that people are making valuable connections through the Forum, even though the Forum wasn’t designed well for thisThe 2020 EA Survey revealed that a surprising number of people found connections through the EA Forum5.9% of people said they found a valuable connection through the Forum, where 11.3% said this of EAG (an event optimised for connections)Having followed up with a subset of the forum users who said this, I think it’s more like that ~2% of respondent...]]>
Clifford https://forum.effectivealtruism.org/posts/CAWvSaGWNbCnoRccd/supporting-online-connections-what-i-learned-after-trying-to Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Supporting online connections: what I learned after trying to find impactful opportunities, published by Clifford on November 4, 2022 on The Effective Altruism Forum.I’ve been exploring ideas for how the Forum could help people make a valuable connection over the course of 4+ months. In this doc, I share the different areas I’ve been thinking about, how promising I think they are and what I’ve learnt about them.Overall, I don't think any of the ideas meet the bar for focusing on connections via an online platform. My bar for an idea was that it could facilitate roughly an EAG’s worth of quality-adjusted connections each year (~10,000). I’m now more excited about meeting this bar by targeting high-value outcomes (e.g. collaborations, jobs), even if that means a lower number of total connections.I cover:Overall takeWhy did we explore connections?What ideas have we explored?FAQ: Why don’t you build a year-round Swapcard?Other ideas we didn’t exploreOverall takeI’m more excited about targeting high-value outcomes that come from connections (e.g. collaborations, jobs) rather than trying to broadly facilitate conversations online.I wanted to find a product that could plausibly match a single EAG (Effective Altruism Global) in terms of the number of connections made in a year.A single EAG costs roughly 2 FTEs a year to run (excluding other costs)If Sarah (my colleague) and I are choosing what to work on, you could imagine that the alternative we’re trying to beat or match is running another EAG.Note: this is a relatively ambitious target. I think it would be more achievable to build something that’s still cost-effective but which has less potential for scale. For example, some of the ideas I explore below would be much more cost-effective than an EAG, because it's much cheaper to run online services than in-person conferences but would generate far fewer connections. [Related Forum post.]I’ve found it hard to compete with EAG on number of “connections”, where a connection means someone you feel you can ask a favour of (likely after a half-hour conversation).At an EAG, each attendee makes an average of 8 connections via lots of 1:1 chats in an environment (in-person) optimised for good conversations.To match this, we’d need 10% of our monthly active readership to make 4 connections a year, which isn’t crazy but feels like a stretch as only 8% of that number have ever sent a message on the forum.I think it might be easier to compete with EAG on the downstream effects of connections, e.g. a person ended up collaborating on a paper, a person ended up getting hired.My best guess is that EAGs get outcomes like this for something like 1-5% of connections. I think it’s much easier for the Forum to get 100-500 additional concrete outcomes (like collaborations or jobs) a year, than 10k connections.The rest of this contains details on what we explored, why and our assessment of each idea.Why did we explore connections?We thought we should work on connections because:Connections are valuableThey are cited as an important reason that people get involved in EA in the Open Phil EA/Longtermist Survey35% of people said personal contact with EAs was important for them getting involved38% said personal contacts had the largest influence on their personal ability to have a positive impactWe had some evidence that people are making valuable connections through the Forum, even though the Forum wasn’t designed well for thisThe 2020 EA Survey revealed that a surprising number of people found connections through the EA Forum5.9% of people said they found a valuable connection through the Forum, where 11.3% said this of EAG (an event optimised for connections)Having followed up with a subset of the forum users who said this, I think it’s more like that ~2% of respondent...]]>
Fri, 04 Nov 2022 20:28:22 +0000 EA - Supporting online connections: what I learned after trying to find impactful opportunities by Clifford Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Supporting online connections: what I learned after trying to find impactful opportunities, published by Clifford on November 4, 2022 on The Effective Altruism Forum.I’ve been exploring ideas for how the Forum could help people make a valuable connection over the course of 4+ months. In this doc, I share the different areas I’ve been thinking about, how promising I think they are and what I’ve learnt about them.Overall, I don't think any of the ideas meet the bar for focusing on connections via an online platform. My bar for an idea was that it could facilitate roughly an EAG’s worth of quality-adjusted connections each year (~10,000). I’m now more excited about meeting this bar by targeting high-value outcomes (e.g. collaborations, jobs), even if that means a lower number of total connections.I cover:Overall takeWhy did we explore connections?What ideas have we explored?FAQ: Why don’t you build a year-round Swapcard?Other ideas we didn’t exploreOverall takeI’m more excited about targeting high-value outcomes that come from connections (e.g. collaborations, jobs) rather than trying to broadly facilitate conversations online.I wanted to find a product that could plausibly match a single EAG (Effective Altruism Global) in terms of the number of connections made in a year.A single EAG costs roughly 2 FTEs a year to run (excluding other costs)If Sarah (my colleague) and I are choosing what to work on, you could imagine that the alternative we’re trying to beat or match is running another EAG.Note: this is a relatively ambitious target. I think it would be more achievable to build something that’s still cost-effective but which has less potential for scale. For example, some of the ideas I explore below would be much more cost-effective than an EAG, because it's much cheaper to run online services than in-person conferences but would generate far fewer connections. [Related Forum post.]I’ve found it hard to compete with EAG on number of “connections”, where a connection means someone you feel you can ask a favour of (likely after a half-hour conversation).At an EAG, each attendee makes an average of 8 connections via lots of 1:1 chats in an environment (in-person) optimised for good conversations.To match this, we’d need 10% of our monthly active readership to make 4 connections a year, which isn’t crazy but feels like a stretch as only 8% of that number have ever sent a message on the forum.I think it might be easier to compete with EAG on the downstream effects of connections, e.g. a person ended up collaborating on a paper, a person ended up getting hired.My best guess is that EAGs get outcomes like this for something like 1-5% of connections. I think it’s much easier for the Forum to get 100-500 additional concrete outcomes (like collaborations or jobs) a year, than 10k connections.The rest of this contains details on what we explored, why and our assessment of each idea.Why did we explore connections?We thought we should work on connections because:Connections are valuableThey are cited as an important reason that people get involved in EA in the Open Phil EA/Longtermist Survey35% of people said personal contact with EAs was important for them getting involved38% said personal contacts had the largest influence on their personal ability to have a positive impactWe had some evidence that people are making valuable connections through the Forum, even though the Forum wasn’t designed well for thisThe 2020 EA Survey revealed that a surprising number of people found connections through the EA Forum5.9% of people said they found a valuable connection through the Forum, where 11.3% said this of EAG (an event optimised for connections)Having followed up with a subset of the forum users who said this, I think it’s more like that ~2% of respondent...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Supporting online connections: what I learned after trying to find impactful opportunities, published by Clifford on November 4, 2022 on The Effective Altruism Forum.I’ve been exploring ideas for how the Forum could help people make a valuable connection over the course of 4+ months. In this doc, I share the different areas I’ve been thinking about, how promising I think they are and what I’ve learnt about them.Overall, I don't think any of the ideas meet the bar for focusing on connections via an online platform. My bar for an idea was that it could facilitate roughly an EAG’s worth of quality-adjusted connections each year (~10,000). I’m now more excited about meeting this bar by targeting high-value outcomes (e.g. collaborations, jobs), even if that means a lower number of total connections.I cover:Overall takeWhy did we explore connections?What ideas have we explored?FAQ: Why don’t you build a year-round Swapcard?Other ideas we didn’t exploreOverall takeI’m more excited about targeting high-value outcomes that come from connections (e.g. collaborations, jobs) rather than trying to broadly facilitate conversations online.I wanted to find a product that could plausibly match a single EAG (Effective Altruism Global) in terms of the number of connections made in a year.A single EAG costs roughly 2 FTEs a year to run (excluding other costs)If Sarah (my colleague) and I are choosing what to work on, you could imagine that the alternative we’re trying to beat or match is running another EAG.Note: this is a relatively ambitious target. I think it would be more achievable to build something that’s still cost-effective but which has less potential for scale. For example, some of the ideas I explore below would be much more cost-effective than an EAG, because it's much cheaper to run online services than in-person conferences but would generate far fewer connections. [Related Forum post.]I’ve found it hard to compete with EAG on number of “connections”, where a connection means someone you feel you can ask a favour of (likely after a half-hour conversation).At an EAG, each attendee makes an average of 8 connections via lots of 1:1 chats in an environment (in-person) optimised for good conversations.To match this, we’d need 10% of our monthly active readership to make 4 connections a year, which isn’t crazy but feels like a stretch as only 8% of that number have ever sent a message on the forum.I think it might be easier to compete with EAG on the downstream effects of connections, e.g. a person ended up collaborating on a paper, a person ended up getting hired.My best guess is that EAGs get outcomes like this for something like 1-5% of connections. I think it’s much easier for the Forum to get 100-500 additional concrete outcomes (like collaborations or jobs) a year, than 10k connections.The rest of this contains details on what we explored, why and our assessment of each idea.Why did we explore connections?We thought we should work on connections because:Connections are valuableThey are cited as an important reason that people get involved in EA in the Open Phil EA/Longtermist Survey35% of people said personal contact with EAs was important for them getting involved38% said personal contacts had the largest influence on their personal ability to have a positive impactWe had some evidence that people are making valuable connections through the Forum, even though the Forum wasn’t designed well for thisThe 2020 EA Survey revealed that a surprising number of people found connections through the EA Forum5.9% of people said they found a valuable connection through the Forum, where 11.3% said this of EAG (an event optimised for connections)Having followed up with a subset of the forum users who said this, I think it’s more like that ~2% of respondent...]]>
Clifford https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 23:55 None full 3647
PyZCqLrDTJrQofEf7_EA EA - How bad could a war get? by Stephen Clare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How bad could a war get?, published by Stephen Clare on November 4, 2022 on The Effective Altruism Forum.Acknowledgements: Thanks to Joe Benton for research advice and Ben Harack and Max Daniel for feedback on earlier drafts.Author contributions: Stephen and Rani both did research for this post; Stephen wrote it and Rani gave comments and edits.Previously in this series: "Modelling great power conflict as an existential risk factor" and "How likely is World War III?"Introduction & ContextIn “How Likely is World War III?”, Stephen suggested the chance of an extinction-level war occurring sometime this century is just under 1%. This was a simple, rough estimate, made in the following steps:Assume that wars, i.e. conflicts that cause at least 1000 battle deaths, continue to break out at their historical average rate of one about every two years.Assume that the distribution of battle deaths in wars follows a power law.Use parameters for the power law distribution estimated by Bear Braumoeller in Only the Dead to calculate the chance that any given war escalates to 8 billion battle deathsWork out the likelihood of such a war given the expected number of wars between now and 2100.Not everybody was convinced. Arden Koehler of 80,000 Hours, for example, slammed it as “[overstating] the risk because it doesn’t consider that wars would be unlikely to continue once 90% or more of the population has been killed.” While our friendship may never recover, I (Stephen) have to admit that some skepticism is justified. An extinction-level war would be 30-to-100 times larger than World War II, the most severe war humanity has experienced so far. Is it reasonable to just assume number go up? Would the same escalatory dynamics that shape smaller wars apply at this scale?Forecasting the likelihood of enormous wars is difficult. Stephen’s extrapolatory approach creates estimates that are sensitive to the data included and the kind of distribution fit, particularly in the tails. But such efforts are important despite their defects. Estimates of the likelihood of major conflict are an important consideration for cause prioritization. And out-of-sample conflicts may account for most of the x-risk accounted for by global conflict. So in this post we interrogate two of the assumptions made in “How Likely is World War III?”:Does the distribution of battle deaths follow a power law?What do we know about the extreme tails of this distribution?Our findings are:That battle deaths per war are plausibly distributed according to a power law, but few analyses have compared the power law fit to the fit of other distributions. Plus, it’s hard to say what the tails of the distribution look like beyond the wars we’ve experienced so far.To become more confident in the power law fit, and learn more about the tails, we have to consider theory: what drives war, and how might these factors change as wars get bigger?Perhaps some factors limit the size of war, such as increasing logistical complexity. One candidate for such a factor is technology. But while it seems plausible that in the past, humanity’s war-making capacity was not sufficient to threaten extinction, this is no longer the case.This suggests that wars could get very, very bad: we shouldn’t rule out the possibility that war could cause human extinction.Battle deaths and power lawsFitting power lawsOne way to gauge the probability of out-of-sample events is to find a probability distribution, a mathematical function which gives estimates for how likely different events are, which describes the available data. If we can find a well-fitting distribution, then we can use it to predict the likelihood of events larger than anything we’ve observed, but within the range of the function describing the distribution.Several researchers have...]]>
Stephen Clare https://forum.effectivealtruism.org/posts/PyZCqLrDTJrQofEf7/how-bad-could-a-war-get Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How bad could a war get?, published by Stephen Clare on November 4, 2022 on The Effective Altruism Forum.Acknowledgements: Thanks to Joe Benton for research advice and Ben Harack and Max Daniel for feedback on earlier drafts.Author contributions: Stephen and Rani both did research for this post; Stephen wrote it and Rani gave comments and edits.Previously in this series: "Modelling great power conflict as an existential risk factor" and "How likely is World War III?"Introduction & ContextIn “How Likely is World War III?”, Stephen suggested the chance of an extinction-level war occurring sometime this century is just under 1%. This was a simple, rough estimate, made in the following steps:Assume that wars, i.e. conflicts that cause at least 1000 battle deaths, continue to break out at their historical average rate of one about every two years.Assume that the distribution of battle deaths in wars follows a power law.Use parameters for the power law distribution estimated by Bear Braumoeller in Only the Dead to calculate the chance that any given war escalates to 8 billion battle deathsWork out the likelihood of such a war given the expected number of wars between now and 2100.Not everybody was convinced. Arden Koehler of 80,000 Hours, for example, slammed it as “[overstating] the risk because it doesn’t consider that wars would be unlikely to continue once 90% or more of the population has been killed.” While our friendship may never recover, I (Stephen) have to admit that some skepticism is justified. An extinction-level war would be 30-to-100 times larger than World War II, the most severe war humanity has experienced so far. Is it reasonable to just assume number go up? Would the same escalatory dynamics that shape smaller wars apply at this scale?Forecasting the likelihood of enormous wars is difficult. Stephen’s extrapolatory approach creates estimates that are sensitive to the data included and the kind of distribution fit, particularly in the tails. But such efforts are important despite their defects. Estimates of the likelihood of major conflict are an important consideration for cause prioritization. And out-of-sample conflicts may account for most of the x-risk accounted for by global conflict. So in this post we interrogate two of the assumptions made in “How Likely is World War III?”:Does the distribution of battle deaths follow a power law?What do we know about the extreme tails of this distribution?Our findings are:That battle deaths per war are plausibly distributed according to a power law, but few analyses have compared the power law fit to the fit of other distributions. Plus, it’s hard to say what the tails of the distribution look like beyond the wars we’ve experienced so far.To become more confident in the power law fit, and learn more about the tails, we have to consider theory: what drives war, and how might these factors change as wars get bigger?Perhaps some factors limit the size of war, such as increasing logistical complexity. One candidate for such a factor is technology. But while it seems plausible that in the past, humanity’s war-making capacity was not sufficient to threaten extinction, this is no longer the case.This suggests that wars could get very, very bad: we shouldn’t rule out the possibility that war could cause human extinction.Battle deaths and power lawsFitting power lawsOne way to gauge the probability of out-of-sample events is to find a probability distribution, a mathematical function which gives estimates for how likely different events are, which describes the available data. If we can find a well-fitting distribution, then we can use it to predict the likelihood of events larger than anything we’ve observed, but within the range of the function describing the distribution.Several researchers have...]]>
Fri, 04 Nov 2022 10:52:00 +0000 EA - How bad could a war get? by Stephen Clare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How bad could a war get?, published by Stephen Clare on November 4, 2022 on The Effective Altruism Forum.Acknowledgements: Thanks to Joe Benton for research advice and Ben Harack and Max Daniel for feedback on earlier drafts.Author contributions: Stephen and Rani both did research for this post; Stephen wrote it and Rani gave comments and edits.Previously in this series: "Modelling great power conflict as an existential risk factor" and "How likely is World War III?"Introduction & ContextIn “How Likely is World War III?”, Stephen suggested the chance of an extinction-level war occurring sometime this century is just under 1%. This was a simple, rough estimate, made in the following steps:Assume that wars, i.e. conflicts that cause at least 1000 battle deaths, continue to break out at their historical average rate of one about every two years.Assume that the distribution of battle deaths in wars follows a power law.Use parameters for the power law distribution estimated by Bear Braumoeller in Only the Dead to calculate the chance that any given war escalates to 8 billion battle deathsWork out the likelihood of such a war given the expected number of wars between now and 2100.Not everybody was convinced. Arden Koehler of 80,000 Hours, for example, slammed it as “[overstating] the risk because it doesn’t consider that wars would be unlikely to continue once 90% or more of the population has been killed.” While our friendship may never recover, I (Stephen) have to admit that some skepticism is justified. An extinction-level war would be 30-to-100 times larger than World War II, the most severe war humanity has experienced so far. Is it reasonable to just assume number go up? Would the same escalatory dynamics that shape smaller wars apply at this scale?Forecasting the likelihood of enormous wars is difficult. Stephen’s extrapolatory approach creates estimates that are sensitive to the data included and the kind of distribution fit, particularly in the tails. But such efforts are important despite their defects. Estimates of the likelihood of major conflict are an important consideration for cause prioritization. And out-of-sample conflicts may account for most of the x-risk accounted for by global conflict. So in this post we interrogate two of the assumptions made in “How Likely is World War III?”:Does the distribution of battle deaths follow a power law?What do we know about the extreme tails of this distribution?Our findings are:That battle deaths per war are plausibly distributed according to a power law, but few analyses have compared the power law fit to the fit of other distributions. Plus, it’s hard to say what the tails of the distribution look like beyond the wars we’ve experienced so far.To become more confident in the power law fit, and learn more about the tails, we have to consider theory: what drives war, and how might these factors change as wars get bigger?Perhaps some factors limit the size of war, such as increasing logistical complexity. One candidate for such a factor is technology. But while it seems plausible that in the past, humanity’s war-making capacity was not sufficient to threaten extinction, this is no longer the case.This suggests that wars could get very, very bad: we shouldn’t rule out the possibility that war could cause human extinction.Battle deaths and power lawsFitting power lawsOne way to gauge the probability of out-of-sample events is to find a probability distribution, a mathematical function which gives estimates for how likely different events are, which describes the available data. If we can find a well-fitting distribution, then we can use it to predict the likelihood of events larger than anything we’ve observed, but within the range of the function describing the distribution.Several researchers have...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How bad could a war get?, published by Stephen Clare on November 4, 2022 on The Effective Altruism Forum.Acknowledgements: Thanks to Joe Benton for research advice and Ben Harack and Max Daniel for feedback on earlier drafts.Author contributions: Stephen and Rani both did research for this post; Stephen wrote it and Rani gave comments and edits.Previously in this series: "Modelling great power conflict as an existential risk factor" and "How likely is World War III?"Introduction & ContextIn “How Likely is World War III?”, Stephen suggested the chance of an extinction-level war occurring sometime this century is just under 1%. This was a simple, rough estimate, made in the following steps:Assume that wars, i.e. conflicts that cause at least 1000 battle deaths, continue to break out at their historical average rate of one about every two years.Assume that the distribution of battle deaths in wars follows a power law.Use parameters for the power law distribution estimated by Bear Braumoeller in Only the Dead to calculate the chance that any given war escalates to 8 billion battle deathsWork out the likelihood of such a war given the expected number of wars between now and 2100.Not everybody was convinced. Arden Koehler of 80,000 Hours, for example, slammed it as “[overstating] the risk because it doesn’t consider that wars would be unlikely to continue once 90% or more of the population has been killed.” While our friendship may never recover, I (Stephen) have to admit that some skepticism is justified. An extinction-level war would be 30-to-100 times larger than World War II, the most severe war humanity has experienced so far. Is it reasonable to just assume number go up? Would the same escalatory dynamics that shape smaller wars apply at this scale?Forecasting the likelihood of enormous wars is difficult. Stephen’s extrapolatory approach creates estimates that are sensitive to the data included and the kind of distribution fit, particularly in the tails. But such efforts are important despite their defects. Estimates of the likelihood of major conflict are an important consideration for cause prioritization. And out-of-sample conflicts may account for most of the x-risk accounted for by global conflict. So in this post we interrogate two of the assumptions made in “How Likely is World War III?”:Does the distribution of battle deaths follow a power law?What do we know about the extreme tails of this distribution?Our findings are:That battle deaths per war are plausibly distributed according to a power law, but few analyses have compared the power law fit to the fit of other distributions. Plus, it’s hard to say what the tails of the distribution look like beyond the wars we’ve experienced so far.To become more confident in the power law fit, and learn more about the tails, we have to consider theory: what drives war, and how might these factors change as wars get bigger?Perhaps some factors limit the size of war, such as increasing logistical complexity. One candidate for such a factor is technology. But while it seems plausible that in the past, humanity’s war-making capacity was not sufficient to threaten extinction, this is no longer the case.This suggests that wars could get very, very bad: we shouldn’t rule out the possibility that war could cause human extinction.Battle deaths and power lawsFitting power lawsOne way to gauge the probability of out-of-sample events is to find a probability distribution, a mathematical function which gives estimates for how likely different events are, which describes the available data. If we can find a well-fitting distribution, then we can use it to predict the likelihood of events larger than anything we’ve observed, but within the range of the function describing the distribution.Several researchers have...]]>
Stephen Clare https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 18:48 None full 3640
mmsEAyvrrrQz5yE27_EA EA - Major update to EA Giving Tuesday by Giving What We Can Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Major update to EA Giving Tuesday, published by Giving What We Can on November 4, 2022 on The Effective Altruism Forum.Announcing changes to EA Giving Tuesday and Meta's Giving Season matchOn Nov 1st 2022, Meta announced a significant change to their annual Giving Tuesday donation matching scheme, which affects EA Giving Tuesday.Here are the high level details of this year’s match:“To help nonprofits jumpstart their Giving Season fundraising, Meta will match your donors’ recurring donation 100% up to $100 in the next month (up to $100,000 per organization and up to $7 million in total across all organizations). All new recurring donors who start a recurring donation within November 15 - December 31, 2022 are eligible. Read the terms and conditions.”The match now requires participants to set up a recurring donation, in order to get up to $100 in matched funds. The matched funds are provided once the second transaction goes through i.e. you need to donate for two months to receive the match. We are unsure but it seems likely that a donor could set up recurring donations to multiple organizations in order to get multiple matches.We believe this opportunity is only available in the US due to the functionality appearing to be US only.What does this mean for EA Giving Tuesday?In the past, the value proposition of EA Giving Tuesday was to organise around the 100% match in the morning of Giving Tuesday. With the lower match amount per donation and the requirement for it to be recurring, we think that the matching is much less likely to be competitive and therefore the previous level of coordination does not make sense to continue.The EA Giving Tuesday team has also decided that it makes the most sense for people to donate directly to the charities via Facebook given the new requirement about recurring donations. These changes mean that EA Giving Tuesday will not support any organisations that require re-granting or restrictions due to ongoing administrative requirements of recurring donations. 501c3’s registered with Facebook fundraising tools will be able to participate in Meta’s Giving Season match.We encourage you to look for effective charities on Facebook for the match and will be listing effective charities who are interested in participating on our website.EA Giving Tuesday will share the details of any matching opportunities we think are worthwhile and conduct an impact analysis at the end of the season.How can I get my donations matched this year?This year there are two match opportunities we are sharing with you:Meta’s Giving Season match: Details and instructions (Starts Nov 15)Every.org’s Fall Giving Challenge: Details and instructions, provided by Will Kiely on the EA Forum (Already started)Once you’ve received confirmation of a match, please let us know the details via this impact evaluation form so we can quantify the value of these opportunities for future years.We’re disappointed that the match has changed significantly from the previous year, but we hope you find value in the matching opportunities from both Meta and Every.org.We will continue to search for new matching opportunities that have the ability to shift donations towards highly effective charities throughout the season.You can read more about EA Giving Tuesday at EAGivingTuesday.orgGrace and the EA Giving Tuesday Team 2022Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Giving What We Can https://forum.effectivealtruism.org/posts/mmsEAyvrrrQz5yE27/major-update-to-ea-giving-tuesday Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Major update to EA Giving Tuesday, published by Giving What We Can on November 4, 2022 on The Effective Altruism Forum.Announcing changes to EA Giving Tuesday and Meta's Giving Season matchOn Nov 1st 2022, Meta announced a significant change to their annual Giving Tuesday donation matching scheme, which affects EA Giving Tuesday.Here are the high level details of this year’s match:“To help nonprofits jumpstart their Giving Season fundraising, Meta will match your donors’ recurring donation 100% up to $100 in the next month (up to $100,000 per organization and up to $7 million in total across all organizations). All new recurring donors who start a recurring donation within November 15 - December 31, 2022 are eligible. Read the terms and conditions.”The match now requires participants to set up a recurring donation, in order to get up to $100 in matched funds. The matched funds are provided once the second transaction goes through i.e. you need to donate for two months to receive the match. We are unsure but it seems likely that a donor could set up recurring donations to multiple organizations in order to get multiple matches.We believe this opportunity is only available in the US due to the functionality appearing to be US only.What does this mean for EA Giving Tuesday?In the past, the value proposition of EA Giving Tuesday was to organise around the 100% match in the morning of Giving Tuesday. With the lower match amount per donation and the requirement for it to be recurring, we think that the matching is much less likely to be competitive and therefore the previous level of coordination does not make sense to continue.The EA Giving Tuesday team has also decided that it makes the most sense for people to donate directly to the charities via Facebook given the new requirement about recurring donations. These changes mean that EA Giving Tuesday will not support any organisations that require re-granting or restrictions due to ongoing administrative requirements of recurring donations. 501c3’s registered with Facebook fundraising tools will be able to participate in Meta’s Giving Season match.We encourage you to look for effective charities on Facebook for the match and will be listing effective charities who are interested in participating on our website.EA Giving Tuesday will share the details of any matching opportunities we think are worthwhile and conduct an impact analysis at the end of the season.How can I get my donations matched this year?This year there are two match opportunities we are sharing with you:Meta’s Giving Season match: Details and instructions (Starts Nov 15)Every.org’s Fall Giving Challenge: Details and instructions, provided by Will Kiely on the EA Forum (Already started)Once you’ve received confirmation of a match, please let us know the details via this impact evaluation form so we can quantify the value of these opportunities for future years.We’re disappointed that the match has changed significantly from the previous year, but we hope you find value in the matching opportunities from both Meta and Every.org.We will continue to search for new matching opportunities that have the ability to shift donations towards highly effective charities throughout the season.You can read more about EA Giving Tuesday at EAGivingTuesday.orgGrace and the EA Giving Tuesday Team 2022Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 04 Nov 2022 10:04:07 +0000 EA - Major update to EA Giving Tuesday by Giving What We Can Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Major update to EA Giving Tuesday, published by Giving What We Can on November 4, 2022 on The Effective Altruism Forum.Announcing changes to EA Giving Tuesday and Meta's Giving Season matchOn Nov 1st 2022, Meta announced a significant change to their annual Giving Tuesday donation matching scheme, which affects EA Giving Tuesday.Here are the high level details of this year’s match:“To help nonprofits jumpstart their Giving Season fundraising, Meta will match your donors’ recurring donation 100% up to $100 in the next month (up to $100,000 per organization and up to $7 million in total across all organizations). All new recurring donors who start a recurring donation within November 15 - December 31, 2022 are eligible. Read the terms and conditions.”The match now requires participants to set up a recurring donation, in order to get up to $100 in matched funds. The matched funds are provided once the second transaction goes through i.e. you need to donate for two months to receive the match. We are unsure but it seems likely that a donor could set up recurring donations to multiple organizations in order to get multiple matches.We believe this opportunity is only available in the US due to the functionality appearing to be US only.What does this mean for EA Giving Tuesday?In the past, the value proposition of EA Giving Tuesday was to organise around the 100% match in the morning of Giving Tuesday. With the lower match amount per donation and the requirement for it to be recurring, we think that the matching is much less likely to be competitive and therefore the previous level of coordination does not make sense to continue.The EA Giving Tuesday team has also decided that it makes the most sense for people to donate directly to the charities via Facebook given the new requirement about recurring donations. These changes mean that EA Giving Tuesday will not support any organisations that require re-granting or restrictions due to ongoing administrative requirements of recurring donations. 501c3’s registered with Facebook fundraising tools will be able to participate in Meta’s Giving Season match.We encourage you to look for effective charities on Facebook for the match and will be listing effective charities who are interested in participating on our website.EA Giving Tuesday will share the details of any matching opportunities we think are worthwhile and conduct an impact analysis at the end of the season.How can I get my donations matched this year?This year there are two match opportunities we are sharing with you:Meta’s Giving Season match: Details and instructions (Starts Nov 15)Every.org’s Fall Giving Challenge: Details and instructions, provided by Will Kiely on the EA Forum (Already started)Once you’ve received confirmation of a match, please let us know the details via this impact evaluation form so we can quantify the value of these opportunities for future years.We’re disappointed that the match has changed significantly from the previous year, but we hope you find value in the matching opportunities from both Meta and Every.org.We will continue to search for new matching opportunities that have the ability to shift donations towards highly effective charities throughout the season.You can read more about EA Giving Tuesday at EAGivingTuesday.orgGrace and the EA Giving Tuesday Team 2022Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Major update to EA Giving Tuesday, published by Giving What We Can on November 4, 2022 on The Effective Altruism Forum.Announcing changes to EA Giving Tuesday and Meta's Giving Season matchOn Nov 1st 2022, Meta announced a significant change to their annual Giving Tuesday donation matching scheme, which affects EA Giving Tuesday.Here are the high level details of this year’s match:“To help nonprofits jumpstart their Giving Season fundraising, Meta will match your donors’ recurring donation 100% up to $100 in the next month (up to $100,000 per organization and up to $7 million in total across all organizations). All new recurring donors who start a recurring donation within November 15 - December 31, 2022 are eligible. Read the terms and conditions.”The match now requires participants to set up a recurring donation, in order to get up to $100 in matched funds. The matched funds are provided once the second transaction goes through i.e. you need to donate for two months to receive the match. We are unsure but it seems likely that a donor could set up recurring donations to multiple organizations in order to get multiple matches.We believe this opportunity is only available in the US due to the functionality appearing to be US only.What does this mean for EA Giving Tuesday?In the past, the value proposition of EA Giving Tuesday was to organise around the 100% match in the morning of Giving Tuesday. With the lower match amount per donation and the requirement for it to be recurring, we think that the matching is much less likely to be competitive and therefore the previous level of coordination does not make sense to continue.The EA Giving Tuesday team has also decided that it makes the most sense for people to donate directly to the charities via Facebook given the new requirement about recurring donations. These changes mean that EA Giving Tuesday will not support any organisations that require re-granting or restrictions due to ongoing administrative requirements of recurring donations. 501c3’s registered with Facebook fundraising tools will be able to participate in Meta’s Giving Season match.We encourage you to look for effective charities on Facebook for the match and will be listing effective charities who are interested in participating on our website.EA Giving Tuesday will share the details of any matching opportunities we think are worthwhile and conduct an impact analysis at the end of the season.How can I get my donations matched this year?This year there are two match opportunities we are sharing with you:Meta’s Giving Season match: Details and instructions (Starts Nov 15)Every.org’s Fall Giving Challenge: Details and instructions, provided by Will Kiely on the EA Forum (Already started)Once you’ve received confirmation of a match, please let us know the details via this impact evaluation form so we can quantify the value of these opportunities for future years.We’re disappointed that the match has changed significantly from the previous year, but we hope you find value in the matching opportunities from both Meta and Every.org.We will continue to search for new matching opportunities that have the ability to shift donations towards highly effective charities throughout the season.You can read more about EA Giving Tuesday at EAGivingTuesday.orgGrace and the EA Giving Tuesday Team 2022Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Giving What We Can https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:18 None full 3641
3sh8rsxYMQN6rrxu8_EA EA - Should we be doing politics at all? Some (very rough) thoughts by Holly Elmore Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should we be doing politics at all? Some (very rough) thoughts, published by Holly Elmore on November 3, 2022 on The Effective Altruism Forum.This bit of pondering was beyond the scope of the manuscript I was writing (a followup to this post, which is why the examples are all about anti-rodenticide interventions), but I still wanted to share it. Cut from rough draft, lightly edited it so it would make sense to Forum readers and to make the tone more conversational.It is often difficult to directly engage in political campaigns without incentives to lie or misrepresent. This is exacerbated by the differences in expected communication styles in politics vs the general public vs EA. There is a tradition of strong arguments in broader EA (+rationality) culture for EAs to entirely steer away from politics for both epistemic and effectiveness reasons. I find these arguments persuasive but wonder whether they have become an unquestioned dogma.We don't hold any other class of interventions to the standards we hold political interventions. I find it hard to believe that effective charities working in developing countries never have to do ethically dubious things, like giving bribes, because refusing to do so would make it impossible to get anything done within the local culture. Yet EAs often consider it unacceptable for other EAs to engage in "politician-speak" or play political games to win a valuable election.A major objection to political interventions is that may swiftly age poorly or lock orgs/EA/society into particular positions rather than leaving them flexible to pursue the goals that motivated the political push. Well-intended regulations today (full of compromises and ignorant of what may become important distinctions) could lead to difficulties in implementing better solutions tomorrow. This is a serious downside.When considering legal bans on rodenticides:Most likely failure mode: after great effort, resources, and political capital spent, second gen anticoagulant bans succeed in some areas, and pest management professionals just switch to first gen or non-anticoagulants that are nearly as bad for rodent and off-target animal welfare.Worse alternative than rodenticides are developed in response, and the political strategy becomes an arms race.Successful campaign to ban rodenticides, but closing loopholes, enforcing the ban, and enforcing enforcement of the ban by governments becomes never-ending followup job.A moral qualm I have about EA playing the political game is that high leverage political campaigns may actually tend to subvert democracy, which might be bad in itself (depending on the situation, I think) and may lead to blowback against the political aim, EA, and/or a field like wild animal welfare. “High leverage” and "democratic political process" may be fundamentally at odds. EA is looking for interventions that will have the greatest effect for our goals with the fewest resource. If you consult the electorate about their goals and how they should be achieved, they rarely strongly agree on the thing we want to do. If a lot of people already agree on a course of action, chances are it's not going to be a high leverage intervention because it's either 1) already being done, or 2) ineffective, empty, or symbolic.EAs have leaned on legislative hacks like ballot measures for several of our greatest farmed animal welfare victories to date, most notably California’s Proposition 12 and Massachusetts’s Proposition 3. However, a major aspect of those amendments, the ban of animal products imported from other states with lower welfare standards, is now being challenged by the pork industry in the US Supreme Court. It is precisely because of the ease of special interests using that avenue (i.e. the high leverage for EA) that those amendments are now being ...]]>
Holly Elmore https://forum.effectivealtruism.org/posts/3sh8rsxYMQN6rrxu8/should-we-be-doing-politics-at-all-some-very-rough-thoughts Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should we be doing politics at all? Some (very rough) thoughts, published by Holly Elmore on November 3, 2022 on The Effective Altruism Forum.This bit of pondering was beyond the scope of the manuscript I was writing (a followup to this post, which is why the examples are all about anti-rodenticide interventions), but I still wanted to share it. Cut from rough draft, lightly edited it so it would make sense to Forum readers and to make the tone more conversational.It is often difficult to directly engage in political campaigns without incentives to lie or misrepresent. This is exacerbated by the differences in expected communication styles in politics vs the general public vs EA. There is a tradition of strong arguments in broader EA (+rationality) culture for EAs to entirely steer away from politics for both epistemic and effectiveness reasons. I find these arguments persuasive but wonder whether they have become an unquestioned dogma.We don't hold any other class of interventions to the standards we hold political interventions. I find it hard to believe that effective charities working in developing countries never have to do ethically dubious things, like giving bribes, because refusing to do so would make it impossible to get anything done within the local culture. Yet EAs often consider it unacceptable for other EAs to engage in "politician-speak" or play political games to win a valuable election.A major objection to political interventions is that may swiftly age poorly or lock orgs/EA/society into particular positions rather than leaving them flexible to pursue the goals that motivated the political push. Well-intended regulations today (full of compromises and ignorant of what may become important distinctions) could lead to difficulties in implementing better solutions tomorrow. This is a serious downside.When considering legal bans on rodenticides:Most likely failure mode: after great effort, resources, and political capital spent, second gen anticoagulant bans succeed in some areas, and pest management professionals just switch to first gen or non-anticoagulants that are nearly as bad for rodent and off-target animal welfare.Worse alternative than rodenticides are developed in response, and the political strategy becomes an arms race.Successful campaign to ban rodenticides, but closing loopholes, enforcing the ban, and enforcing enforcement of the ban by governments becomes never-ending followup job.A moral qualm I have about EA playing the political game is that high leverage political campaigns may actually tend to subvert democracy, which might be bad in itself (depending on the situation, I think) and may lead to blowback against the political aim, EA, and/or a field like wild animal welfare. “High leverage” and "democratic political process" may be fundamentally at odds. EA is looking for interventions that will have the greatest effect for our goals with the fewest resource. If you consult the electorate about their goals and how they should be achieved, they rarely strongly agree on the thing we want to do. If a lot of people already agree on a course of action, chances are it's not going to be a high leverage intervention because it's either 1) already being done, or 2) ineffective, empty, or symbolic.EAs have leaned on legislative hacks like ballot measures for several of our greatest farmed animal welfare victories to date, most notably California’s Proposition 12 and Massachusetts’s Proposition 3. However, a major aspect of those amendments, the ban of animal products imported from other states with lower welfare standards, is now being challenged by the pork industry in the US Supreme Court. It is precisely because of the ease of special interests using that avenue (i.e. the high leverage for EA) that those amendments are now being ...]]>
Thu, 03 Nov 2022 22:37:24 +0000 EA - Should we be doing politics at all? Some (very rough) thoughts by Holly Elmore Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should we be doing politics at all? Some (very rough) thoughts, published by Holly Elmore on November 3, 2022 on The Effective Altruism Forum.This bit of pondering was beyond the scope of the manuscript I was writing (a followup to this post, which is why the examples are all about anti-rodenticide interventions), but I still wanted to share it. Cut from rough draft, lightly edited it so it would make sense to Forum readers and to make the tone more conversational.It is often difficult to directly engage in political campaigns without incentives to lie or misrepresent. This is exacerbated by the differences in expected communication styles in politics vs the general public vs EA. There is a tradition of strong arguments in broader EA (+rationality) culture for EAs to entirely steer away from politics for both epistemic and effectiveness reasons. I find these arguments persuasive but wonder whether they have become an unquestioned dogma.We don't hold any other class of interventions to the standards we hold political interventions. I find it hard to believe that effective charities working in developing countries never have to do ethically dubious things, like giving bribes, because refusing to do so would make it impossible to get anything done within the local culture. Yet EAs often consider it unacceptable for other EAs to engage in "politician-speak" or play political games to win a valuable election.A major objection to political interventions is that may swiftly age poorly or lock orgs/EA/society into particular positions rather than leaving them flexible to pursue the goals that motivated the political push. Well-intended regulations today (full of compromises and ignorant of what may become important distinctions) could lead to difficulties in implementing better solutions tomorrow. This is a serious downside.When considering legal bans on rodenticides:Most likely failure mode: after great effort, resources, and political capital spent, second gen anticoagulant bans succeed in some areas, and pest management professionals just switch to first gen or non-anticoagulants that are nearly as bad for rodent and off-target animal welfare.Worse alternative than rodenticides are developed in response, and the political strategy becomes an arms race.Successful campaign to ban rodenticides, but closing loopholes, enforcing the ban, and enforcing enforcement of the ban by governments becomes never-ending followup job.A moral qualm I have about EA playing the political game is that high leverage political campaigns may actually tend to subvert democracy, which might be bad in itself (depending on the situation, I think) and may lead to blowback against the political aim, EA, and/or a field like wild animal welfare. “High leverage” and "democratic political process" may be fundamentally at odds. EA is looking for interventions that will have the greatest effect for our goals with the fewest resource. If you consult the electorate about their goals and how they should be achieved, they rarely strongly agree on the thing we want to do. If a lot of people already agree on a course of action, chances are it's not going to be a high leverage intervention because it's either 1) already being done, or 2) ineffective, empty, or symbolic.EAs have leaned on legislative hacks like ballot measures for several of our greatest farmed animal welfare victories to date, most notably California’s Proposition 12 and Massachusetts’s Proposition 3. However, a major aspect of those amendments, the ban of animal products imported from other states with lower welfare standards, is now being challenged by the pork industry in the US Supreme Court. It is precisely because of the ease of special interests using that avenue (i.e. the high leverage for EA) that those amendments are now being ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should we be doing politics at all? Some (very rough) thoughts, published by Holly Elmore on November 3, 2022 on The Effective Altruism Forum.This bit of pondering was beyond the scope of the manuscript I was writing (a followup to this post, which is why the examples are all about anti-rodenticide interventions), but I still wanted to share it. Cut from rough draft, lightly edited it so it would make sense to Forum readers and to make the tone more conversational.It is often difficult to directly engage in political campaigns without incentives to lie or misrepresent. This is exacerbated by the differences in expected communication styles in politics vs the general public vs EA. There is a tradition of strong arguments in broader EA (+rationality) culture for EAs to entirely steer away from politics for both epistemic and effectiveness reasons. I find these arguments persuasive but wonder whether they have become an unquestioned dogma.We don't hold any other class of interventions to the standards we hold political interventions. I find it hard to believe that effective charities working in developing countries never have to do ethically dubious things, like giving bribes, because refusing to do so would make it impossible to get anything done within the local culture. Yet EAs often consider it unacceptable for other EAs to engage in "politician-speak" or play political games to win a valuable election.A major objection to political interventions is that may swiftly age poorly or lock orgs/EA/society into particular positions rather than leaving them flexible to pursue the goals that motivated the political push. Well-intended regulations today (full of compromises and ignorant of what may become important distinctions) could lead to difficulties in implementing better solutions tomorrow. This is a serious downside.When considering legal bans on rodenticides:Most likely failure mode: after great effort, resources, and political capital spent, second gen anticoagulant bans succeed in some areas, and pest management professionals just switch to first gen or non-anticoagulants that are nearly as bad for rodent and off-target animal welfare.Worse alternative than rodenticides are developed in response, and the political strategy becomes an arms race.Successful campaign to ban rodenticides, but closing loopholes, enforcing the ban, and enforcing enforcement of the ban by governments becomes never-ending followup job.A moral qualm I have about EA playing the political game is that high leverage political campaigns may actually tend to subvert democracy, which might be bad in itself (depending on the situation, I think) and may lead to blowback against the political aim, EA, and/or a field like wild animal welfare. “High leverage” and "democratic political process" may be fundamentally at odds. EA is looking for interventions that will have the greatest effect for our goals with the fewest resource. If you consult the electorate about their goals and how they should be achieved, they rarely strongly agree on the thing we want to do. If a lot of people already agree on a course of action, chances are it's not going to be a high leverage intervention because it's either 1) already being done, or 2) ineffective, empty, or symbolic.EAs have leaned on legislative hacks like ballot measures for several of our greatest farmed animal welfare victories to date, most notably California’s Proposition 12 and Massachusetts’s Proposition 3. However, a major aspect of those amendments, the ban of animal products imported from other states with lower welfare standards, is now being challenged by the pork industry in the US Supreme Court. It is precisely because of the ease of special interests using that avenue (i.e. the high leverage for EA) that those amendments are now being ...]]>
Holly Elmore https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:03 None full 3635
nGrmemHzQvBpnXkNX_EA EA - What matters to shrimps? Factors affecting shrimp welfare in aquaculture by Lucas Lewit-Mendes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What matters to shrimps? Factors affecting shrimp welfare in aquaculture, published by Lucas Lewit-Mendes on November 3, 2022 on The Effective Altruism Forum.Shrimp Welfare Project (SWP) produced this report to guide our decision making on funding for further research into shrimp welfare and on which interventions to allocate our resources. We are cross-posting this on the forum because we think it may be useful to share the complexity of understanding the needs of beneficiaries who cannot communicate with us. We also hope it will be useful for other organisations working on shrimp welfare, and it’s also hopefully an interesting read!The report was written by Lucas Lewit-Mendes, with detailed feedback provided by Sasha Saugh and Aaron Boddy. We are thankful for and build on the work and feedback of other NGOs, including Charity Entrepreneurship, Rethink Priorities, Aquatic Life Institute, Fish Welfare Initiative, Compassion in World Farming and Crustacean Compassion. All errors and shortcomings are our own.Executive SummaryWhile many environmental conditions and farming practices could plausibly affect the welfare of shrimps, little research has been done to assess which factors most affect shrimp welfare.This report aims to assess the importance of various factors for the welfare of farmed shrimps, with a particular focus on Litopenaeus vannamei (also known as Penaeus vannamei, or whiteleg shrimp), due to the scale and intensity of farming (~171-405 billion globally per annum) (Mood and Brooke, 2019). Where evidence is scarce, we extend our research to other shrimps, other decapods, or even other aquatic animals. Further research into the most significant factors and practices affecting farmed shrimp welfare is needed.Conclusions from our review are summarised below:Eyestalk Ablation: Shrimps demonstrate aversive behavioural responses to eyestalk ablation, and applying anaesthesia before ablation has therapeutic effects. We believe this is strongly indicative that eyestalk ablation is a welfare concern.Disease: Infectious diseases cause significant mortality events. This is likely to both cause suffering prior to death and increase the total number of shrimps who are farmed and experience suffering.Stunning and Slaughter: Current slaughter practices (asphyxiation or immersion in ice slurry) are likely to be inhumane. While evidence on the optimal slaughter method for shrimps is limited, electrical stunning appears to be the most promising method to effectively stun and kill shrimps.Stocking Density: There is strong experimental evidence to suggest that reductions in stocking density indirectly improve welfare by improving water quality, reducing disease, and increasing survival. There is also some tentative evidence that stocking density directly impacts shrimp behaviour and measurable stress biomarkers (e.g. serotonin).Environmental Enrichment (EE): Environmental enrichments (e.g. feeding methods that mimic natural behaviours, hiding sites, different tank shapes and colours, plants, substrates, and sediments) probably improve shrimp survival, but there is little evidence on their impact on shrimp stress or behaviour. There is moderately strong evidence that physical enrichment (such as physical structure, plant, and substrate) improves welfare for aquatic animals, including crustaceans.Transport and Handling: Poor transport and handling practices are likely to lead to physical injury and stress, although research is limited on the welfare effects of current shrimp farming practices.Food: While some decapods appear resilient to lack of food, inadequate nutrition leads to risk of non-infectious disease and may lead to aggressive and abnormal behaviour.Water QualityDissolved Oxygen (DO): Several studies indicate that insufficient DO levels increase mort...]]>
Lucas Lewit-Mendes https://forum.effectivealtruism.org/posts/nGrmemHzQvBpnXkNX/what-matters-to-shrimps-factors-affecting-shrimp-welfare-in Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What matters to shrimps? Factors affecting shrimp welfare in aquaculture, published by Lucas Lewit-Mendes on November 3, 2022 on The Effective Altruism Forum.Shrimp Welfare Project (SWP) produced this report to guide our decision making on funding for further research into shrimp welfare and on which interventions to allocate our resources. We are cross-posting this on the forum because we think it may be useful to share the complexity of understanding the needs of beneficiaries who cannot communicate with us. We also hope it will be useful for other organisations working on shrimp welfare, and it’s also hopefully an interesting read!The report was written by Lucas Lewit-Mendes, with detailed feedback provided by Sasha Saugh and Aaron Boddy. We are thankful for and build on the work and feedback of other NGOs, including Charity Entrepreneurship, Rethink Priorities, Aquatic Life Institute, Fish Welfare Initiative, Compassion in World Farming and Crustacean Compassion. All errors and shortcomings are our own.Executive SummaryWhile many environmental conditions and farming practices could plausibly affect the welfare of shrimps, little research has been done to assess which factors most affect shrimp welfare.This report aims to assess the importance of various factors for the welfare of farmed shrimps, with a particular focus on Litopenaeus vannamei (also known as Penaeus vannamei, or whiteleg shrimp), due to the scale and intensity of farming (~171-405 billion globally per annum) (Mood and Brooke, 2019). Where evidence is scarce, we extend our research to other shrimps, other decapods, or even other aquatic animals. Further research into the most significant factors and practices affecting farmed shrimp welfare is needed.Conclusions from our review are summarised below:Eyestalk Ablation: Shrimps demonstrate aversive behavioural responses to eyestalk ablation, and applying anaesthesia before ablation has therapeutic effects. We believe this is strongly indicative that eyestalk ablation is a welfare concern.Disease: Infectious diseases cause significant mortality events. This is likely to both cause suffering prior to death and increase the total number of shrimps who are farmed and experience suffering.Stunning and Slaughter: Current slaughter practices (asphyxiation or immersion in ice slurry) are likely to be inhumane. While evidence on the optimal slaughter method for shrimps is limited, electrical stunning appears to be the most promising method to effectively stun and kill shrimps.Stocking Density: There is strong experimental evidence to suggest that reductions in stocking density indirectly improve welfare by improving water quality, reducing disease, and increasing survival. There is also some tentative evidence that stocking density directly impacts shrimp behaviour and measurable stress biomarkers (e.g. serotonin).Environmental Enrichment (EE): Environmental enrichments (e.g. feeding methods that mimic natural behaviours, hiding sites, different tank shapes and colours, plants, substrates, and sediments) probably improve shrimp survival, but there is little evidence on their impact on shrimp stress or behaviour. There is moderately strong evidence that physical enrichment (such as physical structure, plant, and substrate) improves welfare for aquatic animals, including crustaceans.Transport and Handling: Poor transport and handling practices are likely to lead to physical injury and stress, although research is limited on the welfare effects of current shrimp farming practices.Food: While some decapods appear resilient to lack of food, inadequate nutrition leads to risk of non-infectious disease and may lead to aggressive and abnormal behaviour.Water QualityDissolved Oxygen (DO): Several studies indicate that insufficient DO levels increase mort...]]>
Thu, 03 Nov 2022 11:49:34 +0000 EA - What matters to shrimps? Factors affecting shrimp welfare in aquaculture by Lucas Lewit-Mendes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What matters to shrimps? Factors affecting shrimp welfare in aquaculture, published by Lucas Lewit-Mendes on November 3, 2022 on The Effective Altruism Forum.Shrimp Welfare Project (SWP) produced this report to guide our decision making on funding for further research into shrimp welfare and on which interventions to allocate our resources. We are cross-posting this on the forum because we think it may be useful to share the complexity of understanding the needs of beneficiaries who cannot communicate with us. We also hope it will be useful for other organisations working on shrimp welfare, and it’s also hopefully an interesting read!The report was written by Lucas Lewit-Mendes, with detailed feedback provided by Sasha Saugh and Aaron Boddy. We are thankful for and build on the work and feedback of other NGOs, including Charity Entrepreneurship, Rethink Priorities, Aquatic Life Institute, Fish Welfare Initiative, Compassion in World Farming and Crustacean Compassion. All errors and shortcomings are our own.Executive SummaryWhile many environmental conditions and farming practices could plausibly affect the welfare of shrimps, little research has been done to assess which factors most affect shrimp welfare.This report aims to assess the importance of various factors for the welfare of farmed shrimps, with a particular focus on Litopenaeus vannamei (also known as Penaeus vannamei, or whiteleg shrimp), due to the scale and intensity of farming (~171-405 billion globally per annum) (Mood and Brooke, 2019). Where evidence is scarce, we extend our research to other shrimps, other decapods, or even other aquatic animals. Further research into the most significant factors and practices affecting farmed shrimp welfare is needed.Conclusions from our review are summarised below:Eyestalk Ablation: Shrimps demonstrate aversive behavioural responses to eyestalk ablation, and applying anaesthesia before ablation has therapeutic effects. We believe this is strongly indicative that eyestalk ablation is a welfare concern.Disease: Infectious diseases cause significant mortality events. This is likely to both cause suffering prior to death and increase the total number of shrimps who are farmed and experience suffering.Stunning and Slaughter: Current slaughter practices (asphyxiation or immersion in ice slurry) are likely to be inhumane. While evidence on the optimal slaughter method for shrimps is limited, electrical stunning appears to be the most promising method to effectively stun and kill shrimps.Stocking Density: There is strong experimental evidence to suggest that reductions in stocking density indirectly improve welfare by improving water quality, reducing disease, and increasing survival. There is also some tentative evidence that stocking density directly impacts shrimp behaviour and measurable stress biomarkers (e.g. serotonin).Environmental Enrichment (EE): Environmental enrichments (e.g. feeding methods that mimic natural behaviours, hiding sites, different tank shapes and colours, plants, substrates, and sediments) probably improve shrimp survival, but there is little evidence on their impact on shrimp stress or behaviour. There is moderately strong evidence that physical enrichment (such as physical structure, plant, and substrate) improves welfare for aquatic animals, including crustaceans.Transport and Handling: Poor transport and handling practices are likely to lead to physical injury and stress, although research is limited on the welfare effects of current shrimp farming practices.Food: While some decapods appear resilient to lack of food, inadequate nutrition leads to risk of non-infectious disease and may lead to aggressive and abnormal behaviour.Water QualityDissolved Oxygen (DO): Several studies indicate that insufficient DO levels increase mort...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What matters to shrimps? Factors affecting shrimp welfare in aquaculture, published by Lucas Lewit-Mendes on November 3, 2022 on The Effective Altruism Forum.Shrimp Welfare Project (SWP) produced this report to guide our decision making on funding for further research into shrimp welfare and on which interventions to allocate our resources. We are cross-posting this on the forum because we think it may be useful to share the complexity of understanding the needs of beneficiaries who cannot communicate with us. We also hope it will be useful for other organisations working on shrimp welfare, and it’s also hopefully an interesting read!The report was written by Lucas Lewit-Mendes, with detailed feedback provided by Sasha Saugh and Aaron Boddy. We are thankful for and build on the work and feedback of other NGOs, including Charity Entrepreneurship, Rethink Priorities, Aquatic Life Institute, Fish Welfare Initiative, Compassion in World Farming and Crustacean Compassion. All errors and shortcomings are our own.Executive SummaryWhile many environmental conditions and farming practices could plausibly affect the welfare of shrimps, little research has been done to assess which factors most affect shrimp welfare.This report aims to assess the importance of various factors for the welfare of farmed shrimps, with a particular focus on Litopenaeus vannamei (also known as Penaeus vannamei, or whiteleg shrimp), due to the scale and intensity of farming (~171-405 billion globally per annum) (Mood and Brooke, 2019). Where evidence is scarce, we extend our research to other shrimps, other decapods, or even other aquatic animals. Further research into the most significant factors and practices affecting farmed shrimp welfare is needed.Conclusions from our review are summarised below:Eyestalk Ablation: Shrimps demonstrate aversive behavioural responses to eyestalk ablation, and applying anaesthesia before ablation has therapeutic effects. We believe this is strongly indicative that eyestalk ablation is a welfare concern.Disease: Infectious diseases cause significant mortality events. This is likely to both cause suffering prior to death and increase the total number of shrimps who are farmed and experience suffering.Stunning and Slaughter: Current slaughter practices (asphyxiation or immersion in ice slurry) are likely to be inhumane. While evidence on the optimal slaughter method for shrimps is limited, electrical stunning appears to be the most promising method to effectively stun and kill shrimps.Stocking Density: There is strong experimental evidence to suggest that reductions in stocking density indirectly improve welfare by improving water quality, reducing disease, and increasing survival. There is also some tentative evidence that stocking density directly impacts shrimp behaviour and measurable stress biomarkers (e.g. serotonin).Environmental Enrichment (EE): Environmental enrichments (e.g. feeding methods that mimic natural behaviours, hiding sites, different tank shapes and colours, plants, substrates, and sediments) probably improve shrimp survival, but there is little evidence on their impact on shrimp stress or behaviour. There is moderately strong evidence that physical enrichment (such as physical structure, plant, and substrate) improves welfare for aquatic animals, including crustaceans.Transport and Handling: Poor transport and handling practices are likely to lead to physical injury and stress, although research is limited on the welfare effects of current shrimp farming practices.Food: While some decapods appear resilient to lack of food, inadequate nutrition leads to risk of non-infectious disease and may lead to aggressive and abnormal behaviour.Water QualityDissolved Oxygen (DO): Several studies indicate that insufficient DO levels increase mort...]]>
Lucas Lewit-Mendes https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 49:32 None full 3636
xHC9AYLGjMoZbEWkX_EA EA - CE: Who underrates their likelihood of success. Why applying is worthwhile. by SteveThompson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE: Who underrates their likelihood of success. Why applying is worthwhile., published by SteveThompson on November 3, 2022 on The Effective Altruism Forum.TL;DR: If we don't find more potential founders we may not be able to launch charities in Tobacco Taxation and Fish Welfare. We've extended the deadline to November 10 to our incubation program starting early 2023.Below we set out why applying is worthwhile.In our recent ‘Ask Charity Entrepreneurship anything’ post, the question was asked: “What kind of applicant do you think underrates their likelihood of success?” With so many upvotes, we’ve decided to give our response its own post.“You get so many applicants, it’s not worth applying”Yes, our application process is competitive and it is true that we get a lot of applicants (~3 thousand). But, and it’s a big but, ~80% of the applications are speculative, from people outside the EA community and don’t even really understand what we do. Of the 300 relevant candidates we receive, maybe 20 or so will make it onto the program. So for the purposes of those reading the EA Forum, (who one would imagine are somewhat or very involved in EA) the likelihood of getting into the later rounds of the application process are actually pretty good.“The charities you are launching will get started whether I apply or not”Nope. Unfortunately, this is often not the case. In recent years we have had more charity ideas than we have been able to find founders for. We have been trying to launch a Postpartum charity for a number of years, and ideas such as Exploratory Altruism have rolled over from one cohort to the next. The truth is that for every year a charity doesn’t get founded, that’s a year of its impact gone.Many EAs don't realize that they are exactly who we are looking for. Right now we are particularly interested in finding founders who could potentially focus on Tobacco Taxation and Fish Welfare, as it may well be the case that these interventions won’t get off the ground if we don’t find suitable co-founders very soon.To help mitigate this, we have just extended our application deadline to November 10 and we are appealing to readers here, to think once more about applying, and to spread the word.“But it pays too little”It is true that, historically, some CE founders made the decision to take meager salaries but it’s worth clarifying that the charities we launch set their own salaries (CE doesn’t dictate anything). Moreover, in recent years we’ve seen steep improvements in the speed, ease and quantity of funding achieved by our incubatees, so the choice of salaries taken, ranges greatly. Don’t expect to match corporate salaries, but our founders are living fully and comfortably.(When it comes to financial support during the program, we don’t want financial limitations to stop anyone from starting an organization, so we offer extended stipends to cover living costs, childcare, etc.. Personal financial limitations should never be a reason to not start a charity that could save thousands, if not millions, of lives.)“But I don’t have the right skills and experience”Doctors think they lack the commercial skills, business students think they lack the research skills and researchers think they lack the interpersonal skills. The truth is that nobody comes onto the program ready. That’s what the program is for!Even after the program, the first year of running the new charity is spent learning- piloting and testing, becoming a skillful and capable expert in your chosen intervention.The Charity Entrepreneurship Incubation Program is far more than just a way to get a new intervention funded. We are not looking to find ready made charity entrepreneurs, we are looking for people with great potential to, overtime, become great founders - we literally wrote the handbook on thi...]]>
SteveThompson https://forum.effectivealtruism.org/posts/xHC9AYLGjMoZbEWkX/ce-who-underrates-their-likelihood-of-success-why-applying Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE: Who underrates their likelihood of success. Why applying is worthwhile., published by SteveThompson on November 3, 2022 on The Effective Altruism Forum.TL;DR: If we don't find more potential founders we may not be able to launch charities in Tobacco Taxation and Fish Welfare. We've extended the deadline to November 10 to our incubation program starting early 2023.Below we set out why applying is worthwhile.In our recent ‘Ask Charity Entrepreneurship anything’ post, the question was asked: “What kind of applicant do you think underrates their likelihood of success?” With so many upvotes, we’ve decided to give our response its own post.“You get so many applicants, it’s not worth applying”Yes, our application process is competitive and it is true that we get a lot of applicants (~3 thousand). But, and it’s a big but, ~80% of the applications are speculative, from people outside the EA community and don’t even really understand what we do. Of the 300 relevant candidates we receive, maybe 20 or so will make it onto the program. So for the purposes of those reading the EA Forum, (who one would imagine are somewhat or very involved in EA) the likelihood of getting into the later rounds of the application process are actually pretty good.“The charities you are launching will get started whether I apply or not”Nope. Unfortunately, this is often not the case. In recent years we have had more charity ideas than we have been able to find founders for. We have been trying to launch a Postpartum charity for a number of years, and ideas such as Exploratory Altruism have rolled over from one cohort to the next. The truth is that for every year a charity doesn’t get founded, that’s a year of its impact gone.Many EAs don't realize that they are exactly who we are looking for. Right now we are particularly interested in finding founders who could potentially focus on Tobacco Taxation and Fish Welfare, as it may well be the case that these interventions won’t get off the ground if we don’t find suitable co-founders very soon.To help mitigate this, we have just extended our application deadline to November 10 and we are appealing to readers here, to think once more about applying, and to spread the word.“But it pays too little”It is true that, historically, some CE founders made the decision to take meager salaries but it’s worth clarifying that the charities we launch set their own salaries (CE doesn’t dictate anything). Moreover, in recent years we’ve seen steep improvements in the speed, ease and quantity of funding achieved by our incubatees, so the choice of salaries taken, ranges greatly. Don’t expect to match corporate salaries, but our founders are living fully and comfortably.(When it comes to financial support during the program, we don’t want financial limitations to stop anyone from starting an organization, so we offer extended stipends to cover living costs, childcare, etc.. Personal financial limitations should never be a reason to not start a charity that could save thousands, if not millions, of lives.)“But I don’t have the right skills and experience”Doctors think they lack the commercial skills, business students think they lack the research skills and researchers think they lack the interpersonal skills. The truth is that nobody comes onto the program ready. That’s what the program is for!Even after the program, the first year of running the new charity is spent learning- piloting and testing, becoming a skillful and capable expert in your chosen intervention.The Charity Entrepreneurship Incubation Program is far more than just a way to get a new intervention funded. We are not looking to find ready made charity entrepreneurs, we are looking for people with great potential to, overtime, become great founders - we literally wrote the handbook on thi...]]>
Thu, 03 Nov 2022 11:02:00 +0000 EA - CE: Who underrates their likelihood of success. Why applying is worthwhile. by SteveThompson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE: Who underrates their likelihood of success. Why applying is worthwhile., published by SteveThompson on November 3, 2022 on The Effective Altruism Forum.TL;DR: If we don't find more potential founders we may not be able to launch charities in Tobacco Taxation and Fish Welfare. We've extended the deadline to November 10 to our incubation program starting early 2023.Below we set out why applying is worthwhile.In our recent ‘Ask Charity Entrepreneurship anything’ post, the question was asked: “What kind of applicant do you think underrates their likelihood of success?” With so many upvotes, we’ve decided to give our response its own post.“You get so many applicants, it’s not worth applying”Yes, our application process is competitive and it is true that we get a lot of applicants (~3 thousand). But, and it’s a big but, ~80% of the applications are speculative, from people outside the EA community and don’t even really understand what we do. Of the 300 relevant candidates we receive, maybe 20 or so will make it onto the program. So for the purposes of those reading the EA Forum, (who one would imagine are somewhat or very involved in EA) the likelihood of getting into the later rounds of the application process are actually pretty good.“The charities you are launching will get started whether I apply or not”Nope. Unfortunately, this is often not the case. In recent years we have had more charity ideas than we have been able to find founders for. We have been trying to launch a Postpartum charity for a number of years, and ideas such as Exploratory Altruism have rolled over from one cohort to the next. The truth is that for every year a charity doesn’t get founded, that’s a year of its impact gone.Many EAs don't realize that they are exactly who we are looking for. Right now we are particularly interested in finding founders who could potentially focus on Tobacco Taxation and Fish Welfare, as it may well be the case that these interventions won’t get off the ground if we don’t find suitable co-founders very soon.To help mitigate this, we have just extended our application deadline to November 10 and we are appealing to readers here, to think once more about applying, and to spread the word.“But it pays too little”It is true that, historically, some CE founders made the decision to take meager salaries but it’s worth clarifying that the charities we launch set their own salaries (CE doesn’t dictate anything). Moreover, in recent years we’ve seen steep improvements in the speed, ease and quantity of funding achieved by our incubatees, so the choice of salaries taken, ranges greatly. Don’t expect to match corporate salaries, but our founders are living fully and comfortably.(When it comes to financial support during the program, we don’t want financial limitations to stop anyone from starting an organization, so we offer extended stipends to cover living costs, childcare, etc.. Personal financial limitations should never be a reason to not start a charity that could save thousands, if not millions, of lives.)“But I don’t have the right skills and experience”Doctors think they lack the commercial skills, business students think they lack the research skills and researchers think they lack the interpersonal skills. The truth is that nobody comes onto the program ready. That’s what the program is for!Even after the program, the first year of running the new charity is spent learning- piloting and testing, becoming a skillful and capable expert in your chosen intervention.The Charity Entrepreneurship Incubation Program is far more than just a way to get a new intervention funded. We are not looking to find ready made charity entrepreneurs, we are looking for people with great potential to, overtime, become great founders - we literally wrote the handbook on thi...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE: Who underrates their likelihood of success. Why applying is worthwhile., published by SteveThompson on November 3, 2022 on The Effective Altruism Forum.TL;DR: If we don't find more potential founders we may not be able to launch charities in Tobacco Taxation and Fish Welfare. We've extended the deadline to November 10 to our incubation program starting early 2023.Below we set out why applying is worthwhile.In our recent ‘Ask Charity Entrepreneurship anything’ post, the question was asked: “What kind of applicant do you think underrates their likelihood of success?” With so many upvotes, we’ve decided to give our response its own post.“You get so many applicants, it’s not worth applying”Yes, our application process is competitive and it is true that we get a lot of applicants (~3 thousand). But, and it’s a big but, ~80% of the applications are speculative, from people outside the EA community and don’t even really understand what we do. Of the 300 relevant candidates we receive, maybe 20 or so will make it onto the program. So for the purposes of those reading the EA Forum, (who one would imagine are somewhat or very involved in EA) the likelihood of getting into the later rounds of the application process are actually pretty good.“The charities you are launching will get started whether I apply or not”Nope. Unfortunately, this is often not the case. In recent years we have had more charity ideas than we have been able to find founders for. We have been trying to launch a Postpartum charity for a number of years, and ideas such as Exploratory Altruism have rolled over from one cohort to the next. The truth is that for every year a charity doesn’t get founded, that’s a year of its impact gone.Many EAs don't realize that they are exactly who we are looking for. Right now we are particularly interested in finding founders who could potentially focus on Tobacco Taxation and Fish Welfare, as it may well be the case that these interventions won’t get off the ground if we don’t find suitable co-founders very soon.To help mitigate this, we have just extended our application deadline to November 10 and we are appealing to readers here, to think once more about applying, and to spread the word.“But it pays too little”It is true that, historically, some CE founders made the decision to take meager salaries but it’s worth clarifying that the charities we launch set their own salaries (CE doesn’t dictate anything). Moreover, in recent years we’ve seen steep improvements in the speed, ease and quantity of funding achieved by our incubatees, so the choice of salaries taken, ranges greatly. Don’t expect to match corporate salaries, but our founders are living fully and comfortably.(When it comes to financial support during the program, we don’t want financial limitations to stop anyone from starting an organization, so we offer extended stipends to cover living costs, childcare, etc.. Personal financial limitations should never be a reason to not start a charity that could save thousands, if not millions, of lives.)“But I don’t have the right skills and experience”Doctors think they lack the commercial skills, business students think they lack the research skills and researchers think they lack the interpersonal skills. The truth is that nobody comes onto the program ready. That’s what the program is for!Even after the program, the first year of running the new charity is spent learning- piloting and testing, becoming a skillful and capable expert in your chosen intervention.The Charity Entrepreneurship Incubation Program is far more than just a way to get a new intervention funded. We are not looking to find ready made charity entrepreneurs, we are looking for people with great potential to, overtime, become great founders - we literally wrote the handbook on thi...]]>
SteveThompson https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:15 None full 3637
xHrN4APvEjCkC4ceb_EA EA - Mini summaries of GPI papers by Jack Malde Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mini summaries of GPI papers, published by Jack Malde on November 2, 2022 on The Effective Altruism Forum.I have previously written about the importance of making global priorities research accessible to a wider range of people. Many people don’t have the time or desire to read academic papers, but the findings of the research are still hugely important and action-relevant.The Global Priorities Institute (GPI) has started producing paper summaries, but even these might have somewhat limited readership given their length. They are also time-consuming for GPI to develop and aren’t all in one place.With this in mind, and given my personal interest in global priorities research, I have written a few mini-summaries of GPI papers. The extra lazy / time poor can read just “The bottom lines”. I would welcome feedback on if these samples are useful and if I should continue to make them - working towards a post with all papers summarised. It is impossible to cover everything in just a few bullet points, but I hope my summaries successfully inform of the main arguments and key takeaways. Please note that for the final two summaries I made use of the existing GPI paper summaries.On the desire to make a difference (Hilary Greaves, William MacAskill, Andreas Mogensen and Teruji Thomas)The bottom line: Preferring to make a difference yourself is in deep tension with the ideals of benevolence. If we are to be benevolent, we should solely care about how much total good is done. In practice, this means avoiding tendencies to diversify individual philanthropic portfolios or to neglect mitigation of extinction risks in favour of neartermist options that seem “safer”.My brief summary:One can consider various types of “difference-making preferences” (DMPs), where one wants to do good themselves. One example is thinking of the difference one makes in terms of their own causal impact. This can make the world worse e.g. going to great lengths to be the one to save a drowning person even if other people are better placed to do so. This way of thinking is therefore in tension with benevolence.One can instead hope to have higher outcome-comparison impact, where one compares how much better an outcome is if one acts, compared to if one does nothing. This would recommend not trying to save the drowning person, which seems the correct conclusion. However, the authors note that thinking of doing good in this way can still be in tension with benevolence. For example, one might prefer that a recent disaster were severe rather than mild so that they can do more good by helping affected people.Under uncertainty, DMPs are also in tension with benevolence, in an action-relevant way. For example, being risk averse to the difference one individually makes sometimes means choosing an action that is (stochastically) dominated by another action - essentially choosing an action that is ‘objectively’ worse under uncertainty, with respect to doing good.This can also be the case when people interact - the authors show that the presence of DMPs in collective action problems with uncertainty can lead to sub-optimal outcomes. Importantly they show that the preferences themselves are the culprits. This is also the case with DMPs under ambiguity aversion (ambiguity aversion means preferring known risks over unknown risks).One could try to rationalise DMPs by saying people are trying to achieve ‘meaning’ in their life. But people who exhibit DMPs are generally motivated by the ideal of benevolence. It seems therefore that such people, if they really do want to be benevolent, should give up their DMPs.See paper here.The unexpected value of the future (Hayden Wilkinson)The bottom line: An undefined expected value of the future doesn’t invalidate longtermism. A theory is developed to deal with undefined expe...]]>
Jack Malde https://forum.effectivealtruism.org/posts/xHrN4APvEjCkC4ceb/mini-summaries-of-gpi-papers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mini summaries of GPI papers, published by Jack Malde on November 2, 2022 on The Effective Altruism Forum.I have previously written about the importance of making global priorities research accessible to a wider range of people. Many people don’t have the time or desire to read academic papers, but the findings of the research are still hugely important and action-relevant.The Global Priorities Institute (GPI) has started producing paper summaries, but even these might have somewhat limited readership given their length. They are also time-consuming for GPI to develop and aren’t all in one place.With this in mind, and given my personal interest in global priorities research, I have written a few mini-summaries of GPI papers. The extra lazy / time poor can read just “The bottom lines”. I would welcome feedback on if these samples are useful and if I should continue to make them - working towards a post with all papers summarised. It is impossible to cover everything in just a few bullet points, but I hope my summaries successfully inform of the main arguments and key takeaways. Please note that for the final two summaries I made use of the existing GPI paper summaries.On the desire to make a difference (Hilary Greaves, William MacAskill, Andreas Mogensen and Teruji Thomas)The bottom line: Preferring to make a difference yourself is in deep tension with the ideals of benevolence. If we are to be benevolent, we should solely care about how much total good is done. In practice, this means avoiding tendencies to diversify individual philanthropic portfolios or to neglect mitigation of extinction risks in favour of neartermist options that seem “safer”.My brief summary:One can consider various types of “difference-making preferences” (DMPs), where one wants to do good themselves. One example is thinking of the difference one makes in terms of their own causal impact. This can make the world worse e.g. going to great lengths to be the one to save a drowning person even if other people are better placed to do so. This way of thinking is therefore in tension with benevolence.One can instead hope to have higher outcome-comparison impact, where one compares how much better an outcome is if one acts, compared to if one does nothing. This would recommend not trying to save the drowning person, which seems the correct conclusion. However, the authors note that thinking of doing good in this way can still be in tension with benevolence. For example, one might prefer that a recent disaster were severe rather than mild so that they can do more good by helping affected people.Under uncertainty, DMPs are also in tension with benevolence, in an action-relevant way. For example, being risk averse to the difference one individually makes sometimes means choosing an action that is (stochastically) dominated by another action - essentially choosing an action that is ‘objectively’ worse under uncertainty, with respect to doing good.This can also be the case when people interact - the authors show that the presence of DMPs in collective action problems with uncertainty can lead to sub-optimal outcomes. Importantly they show that the preferences themselves are the culprits. This is also the case with DMPs under ambiguity aversion (ambiguity aversion means preferring known risks over unknown risks).One could try to rationalise DMPs by saying people are trying to achieve ‘meaning’ in their life. But people who exhibit DMPs are generally motivated by the ideal of benevolence. It seems therefore that such people, if they really do want to be benevolent, should give up their DMPs.See paper here.The unexpected value of the future (Hayden Wilkinson)The bottom line: An undefined expected value of the future doesn’t invalidate longtermism. A theory is developed to deal with undefined expe...]]>
Thu, 03 Nov 2022 09:44:15 +0000 EA - Mini summaries of GPI papers by Jack Malde Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mini summaries of GPI papers, published by Jack Malde on November 2, 2022 on The Effective Altruism Forum.I have previously written about the importance of making global priorities research accessible to a wider range of people. Many people don’t have the time or desire to read academic papers, but the findings of the research are still hugely important and action-relevant.The Global Priorities Institute (GPI) has started producing paper summaries, but even these might have somewhat limited readership given their length. They are also time-consuming for GPI to develop and aren’t all in one place.With this in mind, and given my personal interest in global priorities research, I have written a few mini-summaries of GPI papers. The extra lazy / time poor can read just “The bottom lines”. I would welcome feedback on if these samples are useful and if I should continue to make them - working towards a post with all papers summarised. It is impossible to cover everything in just a few bullet points, but I hope my summaries successfully inform of the main arguments and key takeaways. Please note that for the final two summaries I made use of the existing GPI paper summaries.On the desire to make a difference (Hilary Greaves, William MacAskill, Andreas Mogensen and Teruji Thomas)The bottom line: Preferring to make a difference yourself is in deep tension with the ideals of benevolence. If we are to be benevolent, we should solely care about how much total good is done. In practice, this means avoiding tendencies to diversify individual philanthropic portfolios or to neglect mitigation of extinction risks in favour of neartermist options that seem “safer”.My brief summary:One can consider various types of “difference-making preferences” (DMPs), where one wants to do good themselves. One example is thinking of the difference one makes in terms of their own causal impact. This can make the world worse e.g. going to great lengths to be the one to save a drowning person even if other people are better placed to do so. This way of thinking is therefore in tension with benevolence.One can instead hope to have higher outcome-comparison impact, where one compares how much better an outcome is if one acts, compared to if one does nothing. This would recommend not trying to save the drowning person, which seems the correct conclusion. However, the authors note that thinking of doing good in this way can still be in tension with benevolence. For example, one might prefer that a recent disaster were severe rather than mild so that they can do more good by helping affected people.Under uncertainty, DMPs are also in tension with benevolence, in an action-relevant way. For example, being risk averse to the difference one individually makes sometimes means choosing an action that is (stochastically) dominated by another action - essentially choosing an action that is ‘objectively’ worse under uncertainty, with respect to doing good.This can also be the case when people interact - the authors show that the presence of DMPs in collective action problems with uncertainty can lead to sub-optimal outcomes. Importantly they show that the preferences themselves are the culprits. This is also the case with DMPs under ambiguity aversion (ambiguity aversion means preferring known risks over unknown risks).One could try to rationalise DMPs by saying people are trying to achieve ‘meaning’ in their life. But people who exhibit DMPs are generally motivated by the ideal of benevolence. It seems therefore that such people, if they really do want to be benevolent, should give up their DMPs.See paper here.The unexpected value of the future (Hayden Wilkinson)The bottom line: An undefined expected value of the future doesn’t invalidate longtermism. A theory is developed to deal with undefined expe...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mini summaries of GPI papers, published by Jack Malde on November 2, 2022 on The Effective Altruism Forum.I have previously written about the importance of making global priorities research accessible to a wider range of people. Many people don’t have the time or desire to read academic papers, but the findings of the research are still hugely important and action-relevant.The Global Priorities Institute (GPI) has started producing paper summaries, but even these might have somewhat limited readership given their length. They are also time-consuming for GPI to develop and aren’t all in one place.With this in mind, and given my personal interest in global priorities research, I have written a few mini-summaries of GPI papers. The extra lazy / time poor can read just “The bottom lines”. I would welcome feedback on if these samples are useful and if I should continue to make them - working towards a post with all papers summarised. It is impossible to cover everything in just a few bullet points, but I hope my summaries successfully inform of the main arguments and key takeaways. Please note that for the final two summaries I made use of the existing GPI paper summaries.On the desire to make a difference (Hilary Greaves, William MacAskill, Andreas Mogensen and Teruji Thomas)The bottom line: Preferring to make a difference yourself is in deep tension with the ideals of benevolence. If we are to be benevolent, we should solely care about how much total good is done. In practice, this means avoiding tendencies to diversify individual philanthropic portfolios or to neglect mitigation of extinction risks in favour of neartermist options that seem “safer”.My brief summary:One can consider various types of “difference-making preferences” (DMPs), where one wants to do good themselves. One example is thinking of the difference one makes in terms of their own causal impact. This can make the world worse e.g. going to great lengths to be the one to save a drowning person even if other people are better placed to do so. This way of thinking is therefore in tension with benevolence.One can instead hope to have higher outcome-comparison impact, where one compares how much better an outcome is if one acts, compared to if one does nothing. This would recommend not trying to save the drowning person, which seems the correct conclusion. However, the authors note that thinking of doing good in this way can still be in tension with benevolence. For example, one might prefer that a recent disaster were severe rather than mild so that they can do more good by helping affected people.Under uncertainty, DMPs are also in tension with benevolence, in an action-relevant way. For example, being risk averse to the difference one individually makes sometimes means choosing an action that is (stochastically) dominated by another action - essentially choosing an action that is ‘objectively’ worse under uncertainty, with respect to doing good.This can also be the case when people interact - the authors show that the presence of DMPs in collective action problems with uncertainty can lead to sub-optimal outcomes. Importantly they show that the preferences themselves are the culprits. This is also the case with DMPs under ambiguity aversion (ambiguity aversion means preferring known risks over unknown risks).One could try to rationalise DMPs by saying people are trying to achieve ‘meaning’ in their life. But people who exhibit DMPs are generally motivated by the ideal of benevolence. It seems therefore that such people, if they really do want to be benevolent, should give up their DMPs.See paper here.The unexpected value of the future (Hayden Wilkinson)The bottom line: An undefined expected value of the future doesn’t invalidate longtermism. A theory is developed to deal with undefined expe...]]>
Jack Malde https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:21 None full 3638
EWiCySDcLSyiHTRQn_EA EA - A Theologian's Response to Anthropogenic Existential Risk by Fr Peter Wyg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Theologian's Response to Anthropogenic Existential Risk, published by Fr Peter Wyg on November 3, 2022 on The Effective Altruism Forum.Hi all,This is very much someone outside the bailiwick of this forum looking in, but I was told it could be interesting to share this article I wrote recently.I'm a Catholic priest, with a prior background in Electronic Engineering, currently working on a PhD in Theology at Durham University. I am researching how the Catholic Church can engage with longtermism and better play its, potentially significant, part in advocating existential security. I'm particularly interested in how a Christian imagination can offer unique evaluative resources for attributing value to future human flourishing and to develop a sense of moral connection with our descendents, better motivating the sacrifices safeguarding the future demands.Much of the material will be very familiar to you as the article was written for a Catholic publication, and so also serves to introduce and promote some of the basic ideas to a new audience.I'm certainly interested to receive any comments or questions!Called to Share the Father’s Love for Humanity’s Future:A Scriptural and Patristic Perspective on Eschatological Cooperation in the Age of Anthropogenic Existential RisksAs the 16th day of July 1945 came to a close, the sun set over a changed world. For the first time, humanity had detonated an atomic bomb, and after the destruction of Hiroshima and Nagasaki later that year, society struggled to come to terms with the forces unleashed. Amidst the cacophony of devastation and the uproar of anti-nuclear movements, there were those who caught whispers of a dark threshold quietly crossed. One such thinker, Bertrand Russell, stood in the House of Lords to describe the shadow of a new kind of threat:We do not want to look at this thing simply from the point of view of the next few years; we want to look at it from the point of view of the future of mankind. The question is a simple one: Is it possible for a scientific society to continue to exist, or must such a society inevitably bring itself to destruction? ... As I go about the streets and see St. Paul's, the British Museum, the Houses of Parliament, and the other monuments of our civilization, in my mind's eye I see a nightmare vision of those buildings as heaps of rubble, surrounded by corpses.[1]Russell recognised that the development of nuclear weapons marked the dawn of a new age: humanity had become its greatest risk to itself. Adam and Eve, in eating the forbidden fruit, opened the way to individual death, but we have now “eaten more deeply of the fruit of the tree of knowledge” and are now “face to face with a second death, the death of mankind.”[2] An antithesis of God’s creatio ex nihilo, we have obtained our own absolutising power, the “potestas annihilationis, the reductio ad nihili.”[3]A philosophical response to this new power suggests that threat of nuclear apocalypse is but one example of a category of anthropogenic existential risks (AXRs). Other self-caused threats to humanity’s future potential also include engineered pandemics, human-caused climate change, and unaligned artificial intelligence, all of which could cause existential catastrophe. Further AXRs still await discovery, and we have no reason to believe these will be less hazardous.[4] Without action, the danger humanity creates for itself will continue to grow and Ord, from Oxford’s Future of Humanity Institute, argues such increasing risk is unsustainable. We will either learn to mitigate existential risks or one of them will eventually play out, causing a permanent loss of humanity’s potential.In the past, survival could be taken for granted as natural threats to the human species are vanishingly rare on the timescale of human histo...]]>
Fr Peter Wyg https://forum.effectivealtruism.org/posts/EWiCySDcLSyiHTRQn/a-theologian-s-response-to-anthropogenic-existential-risk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Theologian's Response to Anthropogenic Existential Risk, published by Fr Peter Wyg on November 3, 2022 on The Effective Altruism Forum.Hi all,This is very much someone outside the bailiwick of this forum looking in, but I was told it could be interesting to share this article I wrote recently.I'm a Catholic priest, with a prior background in Electronic Engineering, currently working on a PhD in Theology at Durham University. I am researching how the Catholic Church can engage with longtermism and better play its, potentially significant, part in advocating existential security. I'm particularly interested in how a Christian imagination can offer unique evaluative resources for attributing value to future human flourishing and to develop a sense of moral connection with our descendents, better motivating the sacrifices safeguarding the future demands.Much of the material will be very familiar to you as the article was written for a Catholic publication, and so also serves to introduce and promote some of the basic ideas to a new audience.I'm certainly interested to receive any comments or questions!Called to Share the Father’s Love for Humanity’s Future:A Scriptural and Patristic Perspective on Eschatological Cooperation in the Age of Anthropogenic Existential RisksAs the 16th day of July 1945 came to a close, the sun set over a changed world. For the first time, humanity had detonated an atomic bomb, and after the destruction of Hiroshima and Nagasaki later that year, society struggled to come to terms with the forces unleashed. Amidst the cacophony of devastation and the uproar of anti-nuclear movements, there were those who caught whispers of a dark threshold quietly crossed. One such thinker, Bertrand Russell, stood in the House of Lords to describe the shadow of a new kind of threat:We do not want to look at this thing simply from the point of view of the next few years; we want to look at it from the point of view of the future of mankind. The question is a simple one: Is it possible for a scientific society to continue to exist, or must such a society inevitably bring itself to destruction? ... As I go about the streets and see St. Paul's, the British Museum, the Houses of Parliament, and the other monuments of our civilization, in my mind's eye I see a nightmare vision of those buildings as heaps of rubble, surrounded by corpses.[1]Russell recognised that the development of nuclear weapons marked the dawn of a new age: humanity had become its greatest risk to itself. Adam and Eve, in eating the forbidden fruit, opened the way to individual death, but we have now “eaten more deeply of the fruit of the tree of knowledge” and are now “face to face with a second death, the death of mankind.”[2] An antithesis of God’s creatio ex nihilo, we have obtained our own absolutising power, the “potestas annihilationis, the reductio ad nihili.”[3]A philosophical response to this new power suggests that threat of nuclear apocalypse is but one example of a category of anthropogenic existential risks (AXRs). Other self-caused threats to humanity’s future potential also include engineered pandemics, human-caused climate change, and unaligned artificial intelligence, all of which could cause existential catastrophe. Further AXRs still await discovery, and we have no reason to believe these will be less hazardous.[4] Without action, the danger humanity creates for itself will continue to grow and Ord, from Oxford’s Future of Humanity Institute, argues such increasing risk is unsustainable. We will either learn to mitigate existential risks or one of them will eventually play out, causing a permanent loss of humanity’s potential.In the past, survival could be taken for granted as natural threats to the human species are vanishingly rare on the timescale of human histo...]]>
Thu, 03 Nov 2022 07:50:04 +0000 EA - A Theologian's Response to Anthropogenic Existential Risk by Fr Peter Wyg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Theologian's Response to Anthropogenic Existential Risk, published by Fr Peter Wyg on November 3, 2022 on The Effective Altruism Forum.Hi all,This is very much someone outside the bailiwick of this forum looking in, but I was told it could be interesting to share this article I wrote recently.I'm a Catholic priest, with a prior background in Electronic Engineering, currently working on a PhD in Theology at Durham University. I am researching how the Catholic Church can engage with longtermism and better play its, potentially significant, part in advocating existential security. I'm particularly interested in how a Christian imagination can offer unique evaluative resources for attributing value to future human flourishing and to develop a sense of moral connection with our descendents, better motivating the sacrifices safeguarding the future demands.Much of the material will be very familiar to you as the article was written for a Catholic publication, and so also serves to introduce and promote some of the basic ideas to a new audience.I'm certainly interested to receive any comments or questions!Called to Share the Father’s Love for Humanity’s Future:A Scriptural and Patristic Perspective on Eschatological Cooperation in the Age of Anthropogenic Existential RisksAs the 16th day of July 1945 came to a close, the sun set over a changed world. For the first time, humanity had detonated an atomic bomb, and after the destruction of Hiroshima and Nagasaki later that year, society struggled to come to terms with the forces unleashed. Amidst the cacophony of devastation and the uproar of anti-nuclear movements, there were those who caught whispers of a dark threshold quietly crossed. One such thinker, Bertrand Russell, stood in the House of Lords to describe the shadow of a new kind of threat:We do not want to look at this thing simply from the point of view of the next few years; we want to look at it from the point of view of the future of mankind. The question is a simple one: Is it possible for a scientific society to continue to exist, or must such a society inevitably bring itself to destruction? ... As I go about the streets and see St. Paul's, the British Museum, the Houses of Parliament, and the other monuments of our civilization, in my mind's eye I see a nightmare vision of those buildings as heaps of rubble, surrounded by corpses.[1]Russell recognised that the development of nuclear weapons marked the dawn of a new age: humanity had become its greatest risk to itself. Adam and Eve, in eating the forbidden fruit, opened the way to individual death, but we have now “eaten more deeply of the fruit of the tree of knowledge” and are now “face to face with a second death, the death of mankind.”[2] An antithesis of God’s creatio ex nihilo, we have obtained our own absolutising power, the “potestas annihilationis, the reductio ad nihili.”[3]A philosophical response to this new power suggests that threat of nuclear apocalypse is but one example of a category of anthropogenic existential risks (AXRs). Other self-caused threats to humanity’s future potential also include engineered pandemics, human-caused climate change, and unaligned artificial intelligence, all of which could cause existential catastrophe. Further AXRs still await discovery, and we have no reason to believe these will be less hazardous.[4] Without action, the danger humanity creates for itself will continue to grow and Ord, from Oxford’s Future of Humanity Institute, argues such increasing risk is unsustainable. We will either learn to mitigate existential risks or one of them will eventually play out, causing a permanent loss of humanity’s potential.In the past, survival could be taken for granted as natural threats to the human species are vanishingly rare on the timescale of human histo...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Theologian's Response to Anthropogenic Existential Risk, published by Fr Peter Wyg on November 3, 2022 on The Effective Altruism Forum.Hi all,This is very much someone outside the bailiwick of this forum looking in, but I was told it could be interesting to share this article I wrote recently.I'm a Catholic priest, with a prior background in Electronic Engineering, currently working on a PhD in Theology at Durham University. I am researching how the Catholic Church can engage with longtermism and better play its, potentially significant, part in advocating existential security. I'm particularly interested in how a Christian imagination can offer unique evaluative resources for attributing value to future human flourishing and to develop a sense of moral connection with our descendents, better motivating the sacrifices safeguarding the future demands.Much of the material will be very familiar to you as the article was written for a Catholic publication, and so also serves to introduce and promote some of the basic ideas to a new audience.I'm certainly interested to receive any comments or questions!Called to Share the Father’s Love for Humanity’s Future:A Scriptural and Patristic Perspective on Eschatological Cooperation in the Age of Anthropogenic Existential RisksAs the 16th day of July 1945 came to a close, the sun set over a changed world. For the first time, humanity had detonated an atomic bomb, and after the destruction of Hiroshima and Nagasaki later that year, society struggled to come to terms with the forces unleashed. Amidst the cacophony of devastation and the uproar of anti-nuclear movements, there were those who caught whispers of a dark threshold quietly crossed. One such thinker, Bertrand Russell, stood in the House of Lords to describe the shadow of a new kind of threat:We do not want to look at this thing simply from the point of view of the next few years; we want to look at it from the point of view of the future of mankind. The question is a simple one: Is it possible for a scientific society to continue to exist, or must such a society inevitably bring itself to destruction? ... As I go about the streets and see St. Paul's, the British Museum, the Houses of Parliament, and the other monuments of our civilization, in my mind's eye I see a nightmare vision of those buildings as heaps of rubble, surrounded by corpses.[1]Russell recognised that the development of nuclear weapons marked the dawn of a new age: humanity had become its greatest risk to itself. Adam and Eve, in eating the forbidden fruit, opened the way to individual death, but we have now “eaten more deeply of the fruit of the tree of knowledge” and are now “face to face with a second death, the death of mankind.”[2] An antithesis of God’s creatio ex nihilo, we have obtained our own absolutising power, the “potestas annihilationis, the reductio ad nihili.”[3]A philosophical response to this new power suggests that threat of nuclear apocalypse is but one example of a category of anthropogenic existential risks (AXRs). Other self-caused threats to humanity’s future potential also include engineered pandemics, human-caused climate change, and unaligned artificial intelligence, all of which could cause existential catastrophe. Further AXRs still await discovery, and we have no reason to believe these will be less hazardous.[4] Without action, the danger humanity creates for itself will continue to grow and Ord, from Oxford’s Future of Humanity Institute, argues such increasing risk is unsustainable. We will either learn to mitigate existential risks or one of them will eventually play out, causing a permanent loss of humanity’s potential.In the past, survival could be taken for granted as natural threats to the human species are vanishingly rare on the timescale of human histo...]]>
Fr Peter Wyg https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 21:41 None full 3634
oMBKeyx7ir8Jnbenz_EA EA - Make a $50 donation into $100 (6x) by WilliamKiely Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Make a $50 donation into $100 (6x), published by WilliamKiely on November 2, 2022 on The Effective Altruism Forum.Every.org's 2022 $50,000 donation match is here!TL;DR: Get up to $300 per person in donations counterfactually matched to effective charities in ~3-8 minutes:How to Get Matched1. Set up a recurring donation ($50 per month) for up to three (3) different nonprofits on Every.org.2. That's it. The first $50 of each of your donations in both November and December will be matched one-to-one. i.e. Your six total donations of $50 (totaling $300) will be matched $300 total by Every.org.NotesEven if the Funds Remaining runs out in November, recurring donations that get matched in November will also get matched in December. (When you set up your recurring donation in November, the match amount for both your November and future December donation is immediately deducted from the Funds Remaining dashboard and set aside for your December donation to get matched.)Using a credit card for your donation is easiest/fastest.Don't forget to select "Give Monthly" to get the donation matching.Do not create multiple accounts on Every.org to exploit the match limits ($50 x 2 per nonprofit, three nonprofits per person). Doing so can result in the nonprofits you are supporting losing all their matching funds.History of EA Participation in Every.org's Donation MatchesFor those curious, we've previously directed over $400,000 in matching funds to EA nonprofits through Every.org matching:Nov 2021: Make a $100 donation into $200 (or more)Dec 2020: Make a $10 donation into $35Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
WilliamKiely https://forum.effectivealtruism.org/posts/oMBKeyx7ir8Jnbenz/make-a-usd50-donation-into-usd100-6x Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Make a $50 donation into $100 (6x), published by WilliamKiely on November 2, 2022 on The Effective Altruism Forum.Every.org's 2022 $50,000 donation match is here!TL;DR: Get up to $300 per person in donations counterfactually matched to effective charities in ~3-8 minutes:How to Get Matched1. Set up a recurring donation ($50 per month) for up to three (3) different nonprofits on Every.org.2. That's it. The first $50 of each of your donations in both November and December will be matched one-to-one. i.e. Your six total donations of $50 (totaling $300) will be matched $300 total by Every.org.NotesEven if the Funds Remaining runs out in November, recurring donations that get matched in November will also get matched in December. (When you set up your recurring donation in November, the match amount for both your November and future December donation is immediately deducted from the Funds Remaining dashboard and set aside for your December donation to get matched.)Using a credit card for your donation is easiest/fastest.Don't forget to select "Give Monthly" to get the donation matching.Do not create multiple accounts on Every.org to exploit the match limits ($50 x 2 per nonprofit, three nonprofits per person). Doing so can result in the nonprofits you are supporting losing all their matching funds.History of EA Participation in Every.org's Donation MatchesFor those curious, we've previously directed over $400,000 in matching funds to EA nonprofits through Every.org matching:Nov 2021: Make a $100 donation into $200 (or more)Dec 2020: Make a $10 donation into $35Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 02 Nov 2022 21:50:28 +0000 EA - Make a $50 donation into $100 (6x) by WilliamKiely Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Make a $50 donation into $100 (6x), published by WilliamKiely on November 2, 2022 on The Effective Altruism Forum.Every.org's 2022 $50,000 donation match is here!TL;DR: Get up to $300 per person in donations counterfactually matched to effective charities in ~3-8 minutes:How to Get Matched1. Set up a recurring donation ($50 per month) for up to three (3) different nonprofits on Every.org.2. That's it. The first $50 of each of your donations in both November and December will be matched one-to-one. i.e. Your six total donations of $50 (totaling $300) will be matched $300 total by Every.org.NotesEven if the Funds Remaining runs out in November, recurring donations that get matched in November will also get matched in December. (When you set up your recurring donation in November, the match amount for both your November and future December donation is immediately deducted from the Funds Remaining dashboard and set aside for your December donation to get matched.)Using a credit card for your donation is easiest/fastest.Don't forget to select "Give Monthly" to get the donation matching.Do not create multiple accounts on Every.org to exploit the match limits ($50 x 2 per nonprofit, three nonprofits per person). Doing so can result in the nonprofits you are supporting losing all their matching funds.History of EA Participation in Every.org's Donation MatchesFor those curious, we've previously directed over $400,000 in matching funds to EA nonprofits through Every.org matching:Nov 2021: Make a $100 donation into $200 (or more)Dec 2020: Make a $10 donation into $35Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Make a $50 donation into $100 (6x), published by WilliamKiely on November 2, 2022 on The Effective Altruism Forum.Every.org's 2022 $50,000 donation match is here!TL;DR: Get up to $300 per person in donations counterfactually matched to effective charities in ~3-8 minutes:How to Get Matched1. Set up a recurring donation ($50 per month) for up to three (3) different nonprofits on Every.org.2. That's it. The first $50 of each of your donations in both November and December will be matched one-to-one. i.e. Your six total donations of $50 (totaling $300) will be matched $300 total by Every.org.NotesEven if the Funds Remaining runs out in November, recurring donations that get matched in November will also get matched in December. (When you set up your recurring donation in November, the match amount for both your November and future December donation is immediately deducted from the Funds Remaining dashboard and set aside for your December donation to get matched.)Using a credit card for your donation is easiest/fastest.Don't forget to select "Give Monthly" to get the donation matching.Do not create multiple accounts on Every.org to exploit the match limits ($50 x 2 per nonprofit, three nonprofits per person). Doing so can result in the nonprofits you are supporting losing all their matching funds.History of EA Participation in Every.org's Donation MatchesFor those curious, we've previously directed over $400,000 in matching funds to EA nonprofits through Every.org matching:Nov 2021: Make a $100 donation into $200 (or more)Dec 2020: Make a $10 donation into $35Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
WilliamKiely https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:02 None full 3625
pHKsedBYAvzFCniDF_EA EA - AI Safety Needs Great Product Builders by goodgravy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Needs Great Product Builders, published by goodgravy on November 2, 2022 on The Effective Altruism Forum.In his AI Safety Needs Great Engineers post, Andy Jones explains how software engineers can reduce the risks of unfriendly artificial intelligence. Even without deep ML knowledge, these developers can work effectively on the challenges involved in building and understanding large language models.I would broaden the claim: AI safety doesn’t need only great engineers – it needs great product builders in general.This post will describe why, list some concrete projects for a few different roles, and show how they contribute to AI going better for everyone.AudienceThis post is aimed at anyone who has been involved with building software products: web developers, product managers, designers, founders, devops, generalist software engineers, . I’ll call these “product builders”.Non-technical roles (e.g. operations, HR, finance) do exist in many organisations focussed on AI safety, but this post isn’t aimed at them.But I thought I would need a PhD!In the past, most technical AI safety work was done in academia or in research labs. This is changing because – among other things – we now have concrete ideas for how to construct AI in a safer manner.However, it’s not enough for us to merely have ideas of what to build. We need teams of people to partner with these researchers and build real systems, in order to:Test whether they work in the real world.Demonstrate that they have the nice safety features we’re looking for.Gather empirical data for future research.This strand of AI safety work looks much more like product development, which is why you – as a product builder – can have a direct impact today.Example projects, and why they’re importantTo prove there are tangible ways that product builders can contribute to AI safety, I’ll give some current examples of work we’re doing at Ought.For software engineersIn addition to working on our user-facing app, Elicit, we recently open-sourced our Interactive Composition Explorer (ICE).ICE is a tool to help us and others better understand Factored Cognition. It consists of a software framework and an interactive visualiser:On the back-end, we’re looking for better ways to instrument the cognition “recipes” such that our framework stays out of the user’s way as much as possible, while still giving a useful trace of the reasoning process. We’re using some meta-programming, and having good CS fundamentals would be helpful, but there’s no ML experience required. Plus working on open-source projects is super fun!If you are more of a front-end developer, you’ll appreciate that representing a complex deductive process is a UX challenge as much as anything else. These execution graphs can be very large, cyclic, oddly and unpredictably shaped, and each node can contain masses of information. How can we present this in a useful UI which captures the macro structure and still allows the user to dive into the minutiae?This work is important for safety because AI systems that have a legible decision-making process are easier to reason about and more trustworthy. On a more technical level, Factored Cognition looks like it will be a linchpin of Iterated Distillation and Amplification – one of the few concrete suggestions for a safer way to build AI.For product managersAt first, it might not be obvious how big of an impact product managers can have on AI safety (the same goes for designers). However, interface design is an alignment problem – and it’s even more neglected than other areas of safety research.You don’t need a super technical background, or to already be steeped in ML. The competing priorities we face every day in our product decisions will be fairly familiar to experienced PMs. Here are some example t...]]>
goodgravy https://forum.effectivealtruism.org/posts/pHKsedBYAvzFCniDF/ai-safety-needs-great-product-builders Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Needs Great Product Builders, published by goodgravy on November 2, 2022 on The Effective Altruism Forum.In his AI Safety Needs Great Engineers post, Andy Jones explains how software engineers can reduce the risks of unfriendly artificial intelligence. Even without deep ML knowledge, these developers can work effectively on the challenges involved in building and understanding large language models.I would broaden the claim: AI safety doesn’t need only great engineers – it needs great product builders in general.This post will describe why, list some concrete projects for a few different roles, and show how they contribute to AI going better for everyone.AudienceThis post is aimed at anyone who has been involved with building software products: web developers, product managers, designers, founders, devops, generalist software engineers, . I’ll call these “product builders”.Non-technical roles (e.g. operations, HR, finance) do exist in many organisations focussed on AI safety, but this post isn’t aimed at them.But I thought I would need a PhD!In the past, most technical AI safety work was done in academia or in research labs. This is changing because – among other things – we now have concrete ideas for how to construct AI in a safer manner.However, it’s not enough for us to merely have ideas of what to build. We need teams of people to partner with these researchers and build real systems, in order to:Test whether they work in the real world.Demonstrate that they have the nice safety features we’re looking for.Gather empirical data for future research.This strand of AI safety work looks much more like product development, which is why you – as a product builder – can have a direct impact today.Example projects, and why they’re importantTo prove there are tangible ways that product builders can contribute to AI safety, I’ll give some current examples of work we’re doing at Ought.For software engineersIn addition to working on our user-facing app, Elicit, we recently open-sourced our Interactive Composition Explorer (ICE).ICE is a tool to help us and others better understand Factored Cognition. It consists of a software framework and an interactive visualiser:On the back-end, we’re looking for better ways to instrument the cognition “recipes” such that our framework stays out of the user’s way as much as possible, while still giving a useful trace of the reasoning process. We’re using some meta-programming, and having good CS fundamentals would be helpful, but there’s no ML experience required. Plus working on open-source projects is super fun!If you are more of a front-end developer, you’ll appreciate that representing a complex deductive process is a UX challenge as much as anything else. These execution graphs can be very large, cyclic, oddly and unpredictably shaped, and each node can contain masses of information. How can we present this in a useful UI which captures the macro structure and still allows the user to dive into the minutiae?This work is important for safety because AI systems that have a legible decision-making process are easier to reason about and more trustworthy. On a more technical level, Factored Cognition looks like it will be a linchpin of Iterated Distillation and Amplification – one of the few concrete suggestions for a safer way to build AI.For product managersAt first, it might not be obvious how big of an impact product managers can have on AI safety (the same goes for designers). However, interface design is an alignment problem – and it’s even more neglected than other areas of safety research.You don’t need a super technical background, or to already be steeped in ML. The competing priorities we face every day in our product decisions will be fairly familiar to experienced PMs. Here are some example t...]]>
Wed, 02 Nov 2022 19:56:10 +0000 EA - AI Safety Needs Great Product Builders by goodgravy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Needs Great Product Builders, published by goodgravy on November 2, 2022 on The Effective Altruism Forum.In his AI Safety Needs Great Engineers post, Andy Jones explains how software engineers can reduce the risks of unfriendly artificial intelligence. Even without deep ML knowledge, these developers can work effectively on the challenges involved in building and understanding large language models.I would broaden the claim: AI safety doesn’t need only great engineers – it needs great product builders in general.This post will describe why, list some concrete projects for a few different roles, and show how they contribute to AI going better for everyone.AudienceThis post is aimed at anyone who has been involved with building software products: web developers, product managers, designers, founders, devops, generalist software engineers, . I’ll call these “product builders”.Non-technical roles (e.g. operations, HR, finance) do exist in many organisations focussed on AI safety, but this post isn’t aimed at them.But I thought I would need a PhD!In the past, most technical AI safety work was done in academia or in research labs. This is changing because – among other things – we now have concrete ideas for how to construct AI in a safer manner.However, it’s not enough for us to merely have ideas of what to build. We need teams of people to partner with these researchers and build real systems, in order to:Test whether they work in the real world.Demonstrate that they have the nice safety features we’re looking for.Gather empirical data for future research.This strand of AI safety work looks much more like product development, which is why you – as a product builder – can have a direct impact today.Example projects, and why they’re importantTo prove there are tangible ways that product builders can contribute to AI safety, I’ll give some current examples of work we’re doing at Ought.For software engineersIn addition to working on our user-facing app, Elicit, we recently open-sourced our Interactive Composition Explorer (ICE).ICE is a tool to help us and others better understand Factored Cognition. It consists of a software framework and an interactive visualiser:On the back-end, we’re looking for better ways to instrument the cognition “recipes” such that our framework stays out of the user’s way as much as possible, while still giving a useful trace of the reasoning process. We’re using some meta-programming, and having good CS fundamentals would be helpful, but there’s no ML experience required. Plus working on open-source projects is super fun!If you are more of a front-end developer, you’ll appreciate that representing a complex deductive process is a UX challenge as much as anything else. These execution graphs can be very large, cyclic, oddly and unpredictably shaped, and each node can contain masses of information. How can we present this in a useful UI which captures the macro structure and still allows the user to dive into the minutiae?This work is important for safety because AI systems that have a legible decision-making process are easier to reason about and more trustworthy. On a more technical level, Factored Cognition looks like it will be a linchpin of Iterated Distillation and Amplification – one of the few concrete suggestions for a safer way to build AI.For product managersAt first, it might not be obvious how big of an impact product managers can have on AI safety (the same goes for designers). However, interface design is an alignment problem – and it’s even more neglected than other areas of safety research.You don’t need a super technical background, or to already be steeped in ML. The competing priorities we face every day in our product decisions will be fairly familiar to experienced PMs. Here are some example t...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Needs Great Product Builders, published by goodgravy on November 2, 2022 on The Effective Altruism Forum.In his AI Safety Needs Great Engineers post, Andy Jones explains how software engineers can reduce the risks of unfriendly artificial intelligence. Even without deep ML knowledge, these developers can work effectively on the challenges involved in building and understanding large language models.I would broaden the claim: AI safety doesn’t need only great engineers – it needs great product builders in general.This post will describe why, list some concrete projects for a few different roles, and show how they contribute to AI going better for everyone.AudienceThis post is aimed at anyone who has been involved with building software products: web developers, product managers, designers, founders, devops, generalist software engineers, . I’ll call these “product builders”.Non-technical roles (e.g. operations, HR, finance) do exist in many organisations focussed on AI safety, but this post isn’t aimed at them.But I thought I would need a PhD!In the past, most technical AI safety work was done in academia or in research labs. This is changing because – among other things – we now have concrete ideas for how to construct AI in a safer manner.However, it’s not enough for us to merely have ideas of what to build. We need teams of people to partner with these researchers and build real systems, in order to:Test whether they work in the real world.Demonstrate that they have the nice safety features we’re looking for.Gather empirical data for future research.This strand of AI safety work looks much more like product development, which is why you – as a product builder – can have a direct impact today.Example projects, and why they’re importantTo prove there are tangible ways that product builders can contribute to AI safety, I’ll give some current examples of work we’re doing at Ought.For software engineersIn addition to working on our user-facing app, Elicit, we recently open-sourced our Interactive Composition Explorer (ICE).ICE is a tool to help us and others better understand Factored Cognition. It consists of a software framework and an interactive visualiser:On the back-end, we’re looking for better ways to instrument the cognition “recipes” such that our framework stays out of the user’s way as much as possible, while still giving a useful trace of the reasoning process. We’re using some meta-programming, and having good CS fundamentals would be helpful, but there’s no ML experience required. Plus working on open-source projects is super fun!If you are more of a front-end developer, you’ll appreciate that representing a complex deductive process is a UX challenge as much as anything else. These execution graphs can be very large, cyclic, oddly and unpredictably shaped, and each node can contain masses of information. How can we present this in a useful UI which captures the macro structure and still allows the user to dive into the minutiae?This work is important for safety because AI systems that have a legible decision-making process are easier to reason about and more trustworthy. On a more technical level, Factored Cognition looks like it will be a linchpin of Iterated Distillation and Amplification – one of the few concrete suggestions for a safer way to build AI.For product managersAt first, it might not be obvious how big of an impact product managers can have on AI safety (the same goes for designers). However, interface design is an alignment problem – and it’s even more neglected than other areas of safety research.You don’t need a super technical background, or to already be steeped in ML. The competing priorities we face every day in our product decisions will be fairly familiar to experienced PMs. Here are some example t...]]>
goodgravy https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 10:19 None full 3626
nSvGaK74GyYYjCmiH_EA EA - Mildly Against Donor Lotteries by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mildly Against Donor Lotteries, published by Jeff Kaufman on November 1, 2022 on The Effective Altruism Forum.Let's say you're going to donate some money, and plan to put some time into figuring out the best place to donate. The more time you put into your decision the better it's likely to be, but at some point you need to stop looking and actually make the donation. The more you're donating the longer it makes sense to go, since it's more valuable to better direct a larger amount of money. Can you get the benefits of the deeper research without increasing time spent on research? Weirdly, you can! About five years ago, Carl Schulmanproposed (andran the first) "donor lottery".Say instead of donating $1k, you get together with 100 other people and you each put in $1k. You select one of the people at random (1:100) to choose where the pool ($100k) goes. This turns your100% chance of directing $1k into a 1% chance of directing $100k. The goal is to make research more efficient:If you win, you're working with enough money that it's worth it for you to put serious time into figuring out your best donation option.If you lose, you don't need to put any time into determining where your money should go.This isn't that different from giving your money to GiveWell or similar to decide how to distribute, in that it's delegating the decision to someone who can put in more research. Except that it doesn't require identifying someone better at allocating funds than you are, it just requires that you would:Make better decisions in allocating $100k than $1kPrefer a 1% chance of a well-allocated $100k to a 100% chance of a somewhat less well-allocated $1k.If you come at this from a theoretical perspective this can seem really neat: better considered donations, more efficient use of research time, strictly better, literally no downsides, why would you not do this?Despite basically agreeing with all of the above, however, I've not participated in a donor lottery, and I think they're likely slightly negative on balance. There actually are downsides, just not in areas that the logic above considers.The biggest downside is that it makes your donation decisions less legible: it's harder for people to understand what you're doing and why. Lets say you're talking to a friend:Friend: You're really into charity stuff, right? Where did you decide to donate this year?You: I put my money into a donor lottery. I traded my $10k for a 1% chance of donating $1M.Friend: Did you win?You: No, but I'm glad I did it! Let me explain why...There are a few different ways this could go. Your friend could:Listen to your explanation at length, and think "this is really cool, more efficient donation allocation, I betEA is full of great ideas like this, I should learn more!"Listen to your explanation at length, and think "maybe this works, but it seems like it could probably go wrong somehow."Not have time or interest for the full explanation, and be left thinking you irresponsibly gambled away your annual donation.I think (b) and (c) are going to happen often enough to outweigh both the benefit (a) and the benefit of the additional research. And this is in some sense a best case: your friend thinks well of you and is likely to give you the benefit of the doubt. The same conversation with someone you know less well or who doesn't have the same baseline assumption that you're an honest and principled person trying to do the right thing would likely go very poorly.The other problem with that conversation is that you're not really answering the question! They're trying to figure out where they should donate, and are looking for your advice. Even if they come away thinking your decision to participate in the lottery makes some sense, they're unlikely to decide to participate the first time they...]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/nSvGaK74GyYYjCmiH/mildly-against-donor-lotteries Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mildly Against Donor Lotteries, published by Jeff Kaufman on November 1, 2022 on The Effective Altruism Forum.Let's say you're going to donate some money, and plan to put some time into figuring out the best place to donate. The more time you put into your decision the better it's likely to be, but at some point you need to stop looking and actually make the donation. The more you're donating the longer it makes sense to go, since it's more valuable to better direct a larger amount of money. Can you get the benefits of the deeper research without increasing time spent on research? Weirdly, you can! About five years ago, Carl Schulmanproposed (andran the first) "donor lottery".Say instead of donating $1k, you get together with 100 other people and you each put in $1k. You select one of the people at random (1:100) to choose where the pool ($100k) goes. This turns your100% chance of directing $1k into a 1% chance of directing $100k. The goal is to make research more efficient:If you win, you're working with enough money that it's worth it for you to put serious time into figuring out your best donation option.If you lose, you don't need to put any time into determining where your money should go.This isn't that different from giving your money to GiveWell or similar to decide how to distribute, in that it's delegating the decision to someone who can put in more research. Except that it doesn't require identifying someone better at allocating funds than you are, it just requires that you would:Make better decisions in allocating $100k than $1kPrefer a 1% chance of a well-allocated $100k to a 100% chance of a somewhat less well-allocated $1k.If you come at this from a theoretical perspective this can seem really neat: better considered donations, more efficient use of research time, strictly better, literally no downsides, why would you not do this?Despite basically agreeing with all of the above, however, I've not participated in a donor lottery, and I think they're likely slightly negative on balance. There actually are downsides, just not in areas that the logic above considers.The biggest downside is that it makes your donation decisions less legible: it's harder for people to understand what you're doing and why. Lets say you're talking to a friend:Friend: You're really into charity stuff, right? Where did you decide to donate this year?You: I put my money into a donor lottery. I traded my $10k for a 1% chance of donating $1M.Friend: Did you win?You: No, but I'm glad I did it! Let me explain why...There are a few different ways this could go. Your friend could:Listen to your explanation at length, and think "this is really cool, more efficient donation allocation, I betEA is full of great ideas like this, I should learn more!"Listen to your explanation at length, and think "maybe this works, but it seems like it could probably go wrong somehow."Not have time or interest for the full explanation, and be left thinking you irresponsibly gambled away your annual donation.I think (b) and (c) are going to happen often enough to outweigh both the benefit (a) and the benefit of the additional research. And this is in some sense a best case: your friend thinks well of you and is likely to give you the benefit of the doubt. The same conversation with someone you know less well or who doesn't have the same baseline assumption that you're an honest and principled person trying to do the right thing would likely go very poorly.The other problem with that conversation is that you're not really answering the question! They're trying to figure out where they should donate, and are looking for your advice. Even if they come away thinking your decision to participate in the lottery makes some sense, they're unlikely to decide to participate the first time they...]]>
Wed, 02 Nov 2022 16:06:54 +0000 EA - Mildly Against Donor Lotteries by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mildly Against Donor Lotteries, published by Jeff Kaufman on November 1, 2022 on The Effective Altruism Forum.Let's say you're going to donate some money, and plan to put some time into figuring out the best place to donate. The more time you put into your decision the better it's likely to be, but at some point you need to stop looking and actually make the donation. The more you're donating the longer it makes sense to go, since it's more valuable to better direct a larger amount of money. Can you get the benefits of the deeper research without increasing time spent on research? Weirdly, you can! About five years ago, Carl Schulmanproposed (andran the first) "donor lottery".Say instead of donating $1k, you get together with 100 other people and you each put in $1k. You select one of the people at random (1:100) to choose where the pool ($100k) goes. This turns your100% chance of directing $1k into a 1% chance of directing $100k. The goal is to make research more efficient:If you win, you're working with enough money that it's worth it for you to put serious time into figuring out your best donation option.If you lose, you don't need to put any time into determining where your money should go.This isn't that different from giving your money to GiveWell or similar to decide how to distribute, in that it's delegating the decision to someone who can put in more research. Except that it doesn't require identifying someone better at allocating funds than you are, it just requires that you would:Make better decisions in allocating $100k than $1kPrefer a 1% chance of a well-allocated $100k to a 100% chance of a somewhat less well-allocated $1k.If you come at this from a theoretical perspective this can seem really neat: better considered donations, more efficient use of research time, strictly better, literally no downsides, why would you not do this?Despite basically agreeing with all of the above, however, I've not participated in a donor lottery, and I think they're likely slightly negative on balance. There actually are downsides, just not in areas that the logic above considers.The biggest downside is that it makes your donation decisions less legible: it's harder for people to understand what you're doing and why. Lets say you're talking to a friend:Friend: You're really into charity stuff, right? Where did you decide to donate this year?You: I put my money into a donor lottery. I traded my $10k for a 1% chance of donating $1M.Friend: Did you win?You: No, but I'm glad I did it! Let me explain why...There are a few different ways this could go. Your friend could:Listen to your explanation at length, and think "this is really cool, more efficient donation allocation, I betEA is full of great ideas like this, I should learn more!"Listen to your explanation at length, and think "maybe this works, but it seems like it could probably go wrong somehow."Not have time or interest for the full explanation, and be left thinking you irresponsibly gambled away your annual donation.I think (b) and (c) are going to happen often enough to outweigh both the benefit (a) and the benefit of the additional research. And this is in some sense a best case: your friend thinks well of you and is likely to give you the benefit of the doubt. The same conversation with someone you know less well or who doesn't have the same baseline assumption that you're an honest and principled person trying to do the right thing would likely go very poorly.The other problem with that conversation is that you're not really answering the question! They're trying to figure out where they should donate, and are looking for your advice. Even if they come away thinking your decision to participate in the lottery makes some sense, they're unlikely to decide to participate the first time they...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mildly Against Donor Lotteries, published by Jeff Kaufman on November 1, 2022 on The Effective Altruism Forum.Let's say you're going to donate some money, and plan to put some time into figuring out the best place to donate. The more time you put into your decision the better it's likely to be, but at some point you need to stop looking and actually make the donation. The more you're donating the longer it makes sense to go, since it's more valuable to better direct a larger amount of money. Can you get the benefits of the deeper research without increasing time spent on research? Weirdly, you can! About five years ago, Carl Schulmanproposed (andran the first) "donor lottery".Say instead of donating $1k, you get together with 100 other people and you each put in $1k. You select one of the people at random (1:100) to choose where the pool ($100k) goes. This turns your100% chance of directing $1k into a 1% chance of directing $100k. The goal is to make research more efficient:If you win, you're working with enough money that it's worth it for you to put serious time into figuring out your best donation option.If you lose, you don't need to put any time into determining where your money should go.This isn't that different from giving your money to GiveWell or similar to decide how to distribute, in that it's delegating the decision to someone who can put in more research. Except that it doesn't require identifying someone better at allocating funds than you are, it just requires that you would:Make better decisions in allocating $100k than $1kPrefer a 1% chance of a well-allocated $100k to a 100% chance of a somewhat less well-allocated $1k.If you come at this from a theoretical perspective this can seem really neat: better considered donations, more efficient use of research time, strictly better, literally no downsides, why would you not do this?Despite basically agreeing with all of the above, however, I've not participated in a donor lottery, and I think they're likely slightly negative on balance. There actually are downsides, just not in areas that the logic above considers.The biggest downside is that it makes your donation decisions less legible: it's harder for people to understand what you're doing and why. Lets say you're talking to a friend:Friend: You're really into charity stuff, right? Where did you decide to donate this year?You: I put my money into a donor lottery. I traded my $10k for a 1% chance of donating $1M.Friend: Did you win?You: No, but I'm glad I did it! Let me explain why...There are a few different ways this could go. Your friend could:Listen to your explanation at length, and think "this is really cool, more efficient donation allocation, I betEA is full of great ideas like this, I should learn more!"Listen to your explanation at length, and think "maybe this works, but it seems like it could probably go wrong somehow."Not have time or interest for the full explanation, and be left thinking you irresponsibly gambled away your annual donation.I think (b) and (c) are going to happen often enough to outweigh both the benefit (a) and the benefit of the additional research. And this is in some sense a best case: your friend thinks well of you and is likely to give you the benefit of the doubt. The same conversation with someone you know less well or who doesn't have the same baseline assumption that you're an honest and principled person trying to do the right thing would likely go very poorly.The other problem with that conversation is that you're not really answering the question! They're trying to figure out where they should donate, and are looking for your advice. Even if they come away thinking your decision to participate in the lottery makes some sense, they're unlikely to decide to participate the first time they...]]>
Jeff Kaufman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:01 None full 3630
LEDXWzixD56yXsCDp_EA EA - Intro Fellowship as Retreat: Reasons, Retrospective, and Resources by Rachel Weinberg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intro Fellowship as Retreat: Reasons, Retrospective, and Resources, published by Rachel Weinberg on November 2, 2022 on The Effective Altruism Forum.SummaryIntroductory EA Fellowships are the default way that EA university groups introduce new members to EA concepts. These ~8-week-long discussion groups have some virtues: they’re scalable, high fidelity, accessible, and not that weird looking. But only 2-10% of students who start an intro fellowship end up engaging with EA afterwards; it feels like we can do better.I ran a 4-day retreat to get new people to the same level of EA knowledge as an intro fellowship would, inspired by the ideas in We Need Alternatives to Intro Fellowships and University Groups Should Run More Retreats. I had 8 fellows, 5 of whom were on the intro track and 3 of whom were on the in-depth track, plus 4 facilitators/organizers, and 4 professionals who came up for one day.With n=1, it’s hard to draw concrete conclusions, so I’m not intending to sell people on retreats vs. fellowships. Rather, I hope this post helps other group organizers generate ideas about alternatives to intro fellowship, and makes it easy to iterate and improve upon what I did.Retreats vs. Intro FellowshipsAdvantages of RetreatsHere are the main points in favor of retreats, which my experience running this event corroborates:Manic retreat magic:In Trevor’s words: “Retreats encourage the kind of sustained reflection, one-on-one conversations, and social network construction that actually get people to reevaluate their plans. Most other EA programming occurs in classroom-type settings where people are used to engaging with ideas intellectually but not taking them seriously as action-relevant, life-affecting things.”Or, in Duncan Sabien’s terms, retreats make people’s minds muddier—more flexible, open to new ideas.The flip side of intro fellowships being “accessible” or “not weird” is that there is no manic magic—instead, people show up, have intellectual debates often in a literal classroom setting, and never seem to realize how life-changing all those ideas would be if they really took them seriously.Faster/more interesting for fellows: this is a key point of Ashley’s post and is supported by ~2/10 of my closest engaged EA friends saying they would have never signed up for/made it through an intro fellowship, and wouldn’t have gotten into EA if that was the on-ramp presented to them. Worse, it seems that the most curious/driven/agentic people are most likely to feel this way. Retreats are less likely to lose that demographic.Social bonding: People are way more likely to feel drawn to engage with EA more if they have friends there, feel a sense of community there, and doing so lets them talk to people that they like. EA can also be intimidating—probably particularly for minority demographics in EA—so feeling comfortable reaching out/asking “dumb” questions to more highly engaged members of the group seems important. Retreats leave more time for socializing in general, in particular for late night mushy/emotional conversations, and have people together at more vulnerable/less put together times (sleeping in the same rooms, hanging out in PJs, eating all meals together). I felt far closer to these intro fellows after the retreat than I have ever felt to a typical cohort.Faster path to doing impactful things: getting people on board in 4 days instead of 8 weeks adds 52 days to their more impactful career.One notable thing I did with this retreat was to mix intro fellows with in-depth fellows, university EA organizers, and EA professionals. The advantages of this were:Less Dunning-Krueger effect: it’s common for people after the intro fellowship to feel like they understand EA pretty extensively, and having a bunch of in-depth people there shows them what’s comin...]]>
Rachel Weinberg https://forum.effectivealtruism.org/posts/LEDXWzixD56yXsCDp/intro-fellowship-as-retreat-reasons-retrospective-and Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intro Fellowship as Retreat: Reasons, Retrospective, and Resources, published by Rachel Weinberg on November 2, 2022 on The Effective Altruism Forum.SummaryIntroductory EA Fellowships are the default way that EA university groups introduce new members to EA concepts. These ~8-week-long discussion groups have some virtues: they’re scalable, high fidelity, accessible, and not that weird looking. But only 2-10% of students who start an intro fellowship end up engaging with EA afterwards; it feels like we can do better.I ran a 4-day retreat to get new people to the same level of EA knowledge as an intro fellowship would, inspired by the ideas in We Need Alternatives to Intro Fellowships and University Groups Should Run More Retreats. I had 8 fellows, 5 of whom were on the intro track and 3 of whom were on the in-depth track, plus 4 facilitators/organizers, and 4 professionals who came up for one day.With n=1, it’s hard to draw concrete conclusions, so I’m not intending to sell people on retreats vs. fellowships. Rather, I hope this post helps other group organizers generate ideas about alternatives to intro fellowship, and makes it easy to iterate and improve upon what I did.Retreats vs. Intro FellowshipsAdvantages of RetreatsHere are the main points in favor of retreats, which my experience running this event corroborates:Manic retreat magic:In Trevor’s words: “Retreats encourage the kind of sustained reflection, one-on-one conversations, and social network construction that actually get people to reevaluate their plans. Most other EA programming occurs in classroom-type settings where people are used to engaging with ideas intellectually but not taking them seriously as action-relevant, life-affecting things.”Or, in Duncan Sabien’s terms, retreats make people’s minds muddier—more flexible, open to new ideas.The flip side of intro fellowships being “accessible” or “not weird” is that there is no manic magic—instead, people show up, have intellectual debates often in a literal classroom setting, and never seem to realize how life-changing all those ideas would be if they really took them seriously.Faster/more interesting for fellows: this is a key point of Ashley’s post and is supported by ~2/10 of my closest engaged EA friends saying they would have never signed up for/made it through an intro fellowship, and wouldn’t have gotten into EA if that was the on-ramp presented to them. Worse, it seems that the most curious/driven/agentic people are most likely to feel this way. Retreats are less likely to lose that demographic.Social bonding: People are way more likely to feel drawn to engage with EA more if they have friends there, feel a sense of community there, and doing so lets them talk to people that they like. EA can also be intimidating—probably particularly for minority demographics in EA—so feeling comfortable reaching out/asking “dumb” questions to more highly engaged members of the group seems important. Retreats leave more time for socializing in general, in particular for late night mushy/emotional conversations, and have people together at more vulnerable/less put together times (sleeping in the same rooms, hanging out in PJs, eating all meals together). I felt far closer to these intro fellows after the retreat than I have ever felt to a typical cohort.Faster path to doing impactful things: getting people on board in 4 days instead of 8 weeks adds 52 days to their more impactful career.One notable thing I did with this retreat was to mix intro fellows with in-depth fellows, university EA organizers, and EA professionals. The advantages of this were:Less Dunning-Krueger effect: it’s common for people after the intro fellowship to feel like they understand EA pretty extensively, and having a bunch of in-depth people there shows them what’s comin...]]>
Wed, 02 Nov 2022 15:09:29 +0000 EA - Intro Fellowship as Retreat: Reasons, Retrospective, and Resources by Rachel Weinberg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intro Fellowship as Retreat: Reasons, Retrospective, and Resources, published by Rachel Weinberg on November 2, 2022 on The Effective Altruism Forum.SummaryIntroductory EA Fellowships are the default way that EA university groups introduce new members to EA concepts. These ~8-week-long discussion groups have some virtues: they’re scalable, high fidelity, accessible, and not that weird looking. But only 2-10% of students who start an intro fellowship end up engaging with EA afterwards; it feels like we can do better.I ran a 4-day retreat to get new people to the same level of EA knowledge as an intro fellowship would, inspired by the ideas in We Need Alternatives to Intro Fellowships and University Groups Should Run More Retreats. I had 8 fellows, 5 of whom were on the intro track and 3 of whom were on the in-depth track, plus 4 facilitators/organizers, and 4 professionals who came up for one day.With n=1, it’s hard to draw concrete conclusions, so I’m not intending to sell people on retreats vs. fellowships. Rather, I hope this post helps other group organizers generate ideas about alternatives to intro fellowship, and makes it easy to iterate and improve upon what I did.Retreats vs. Intro FellowshipsAdvantages of RetreatsHere are the main points in favor of retreats, which my experience running this event corroborates:Manic retreat magic:In Trevor’s words: “Retreats encourage the kind of sustained reflection, one-on-one conversations, and social network construction that actually get people to reevaluate their plans. Most other EA programming occurs in classroom-type settings where people are used to engaging with ideas intellectually but not taking them seriously as action-relevant, life-affecting things.”Or, in Duncan Sabien’s terms, retreats make people’s minds muddier—more flexible, open to new ideas.The flip side of intro fellowships being “accessible” or “not weird” is that there is no manic magic—instead, people show up, have intellectual debates often in a literal classroom setting, and never seem to realize how life-changing all those ideas would be if they really took them seriously.Faster/more interesting for fellows: this is a key point of Ashley’s post and is supported by ~2/10 of my closest engaged EA friends saying they would have never signed up for/made it through an intro fellowship, and wouldn’t have gotten into EA if that was the on-ramp presented to them. Worse, it seems that the most curious/driven/agentic people are most likely to feel this way. Retreats are less likely to lose that demographic.Social bonding: People are way more likely to feel drawn to engage with EA more if they have friends there, feel a sense of community there, and doing so lets them talk to people that they like. EA can also be intimidating—probably particularly for minority demographics in EA—so feeling comfortable reaching out/asking “dumb” questions to more highly engaged members of the group seems important. Retreats leave more time for socializing in general, in particular for late night mushy/emotional conversations, and have people together at more vulnerable/less put together times (sleeping in the same rooms, hanging out in PJs, eating all meals together). I felt far closer to these intro fellows after the retreat than I have ever felt to a typical cohort.Faster path to doing impactful things: getting people on board in 4 days instead of 8 weeks adds 52 days to their more impactful career.One notable thing I did with this retreat was to mix intro fellows with in-depth fellows, university EA organizers, and EA professionals. The advantages of this were:Less Dunning-Krueger effect: it’s common for people after the intro fellowship to feel like they understand EA pretty extensively, and having a bunch of in-depth people there shows them what’s comin...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intro Fellowship as Retreat: Reasons, Retrospective, and Resources, published by Rachel Weinberg on November 2, 2022 on The Effective Altruism Forum.SummaryIntroductory EA Fellowships are the default way that EA university groups introduce new members to EA concepts. These ~8-week-long discussion groups have some virtues: they’re scalable, high fidelity, accessible, and not that weird looking. But only 2-10% of students who start an intro fellowship end up engaging with EA afterwards; it feels like we can do better.I ran a 4-day retreat to get new people to the same level of EA knowledge as an intro fellowship would, inspired by the ideas in We Need Alternatives to Intro Fellowships and University Groups Should Run More Retreats. I had 8 fellows, 5 of whom were on the intro track and 3 of whom were on the in-depth track, plus 4 facilitators/organizers, and 4 professionals who came up for one day.With n=1, it’s hard to draw concrete conclusions, so I’m not intending to sell people on retreats vs. fellowships. Rather, I hope this post helps other group organizers generate ideas about alternatives to intro fellowship, and makes it easy to iterate and improve upon what I did.Retreats vs. Intro FellowshipsAdvantages of RetreatsHere are the main points in favor of retreats, which my experience running this event corroborates:Manic retreat magic:In Trevor’s words: “Retreats encourage the kind of sustained reflection, one-on-one conversations, and social network construction that actually get people to reevaluate their plans. Most other EA programming occurs in classroom-type settings where people are used to engaging with ideas intellectually but not taking them seriously as action-relevant, life-affecting things.”Or, in Duncan Sabien’s terms, retreats make people’s minds muddier—more flexible, open to new ideas.The flip side of intro fellowships being “accessible” or “not weird” is that there is no manic magic—instead, people show up, have intellectual debates often in a literal classroom setting, and never seem to realize how life-changing all those ideas would be if they really took them seriously.Faster/more interesting for fellows: this is a key point of Ashley’s post and is supported by ~2/10 of my closest engaged EA friends saying they would have never signed up for/made it through an intro fellowship, and wouldn’t have gotten into EA if that was the on-ramp presented to them. Worse, it seems that the most curious/driven/agentic people are most likely to feel this way. Retreats are less likely to lose that demographic.Social bonding: People are way more likely to feel drawn to engage with EA more if they have friends there, feel a sense of community there, and doing so lets them talk to people that they like. EA can also be intimidating—probably particularly for minority demographics in EA—so feeling comfortable reaching out/asking “dumb” questions to more highly engaged members of the group seems important. Retreats leave more time for socializing in general, in particular for late night mushy/emotional conversations, and have people together at more vulnerable/less put together times (sleeping in the same rooms, hanging out in PJs, eating all meals together). I felt far closer to these intro fellows after the retreat than I have ever felt to a typical cohort.Faster path to doing impactful things: getting people on board in 4 days instead of 8 weeks adds 52 days to their more impactful career.One notable thing I did with this retreat was to mix intro fellows with in-depth fellows, university EA organizers, and EA professionals. The advantages of this were:Less Dunning-Krueger effect: it’s common for people after the intro fellowship to feel like they understand EA pretty extensively, and having a bunch of in-depth people there shows them what’s comin...]]>
Rachel Weinberg https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 20:10 None full 3627
bwvGpgL3yud7eD3h4_EA EA - The "Inside-Out" Model for pitching EA by Nick Corvino Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "Inside-Out" Model for pitching EA, published by Nick Corvino on November 1, 2022 on The Effective Altruism Forum.When trying to pitch EA to someone who cares about local politics, climate change, social justice, being a doctor, or something else that you might not think is highest of EVs, I see most people getting it wrong. They lead off with something like, “well, it seems implausible that extreme climate change will be an existential risk, so you should probably focus on something else instead.” Put yourself in their shoes. If this was the first thing you’d ever heard about effective altruism, would you feel welcomed? I think it frames EA as adversarial and closed-minded, and some people won't give EA another shot.I had these types of conversations a lot these when running an EA uni group, and have developed a mnemonic called the Inside-Out Model that I find helpful. Rather than immediately comparing the cause they are interested in with one you think is more impactful, start by applying the EA mindset within their cause, then work your way out.For example: “you’re interested in climate change—great! Well, within climate change, it seems like certain interventions are way more effective than others, such as working on green technology or making effective donations.” This gives your conversation a vibe that seems congenial rather than dogmatic. After talking from within their cause area for a bit, transition out, with something like “in fact, in the same way there are more effective interventions than others for combating extreme climate change, there may be more effective causes than climate change altogether.” By this point, hopefully they will listen to your opinion in good faith.I think it’s important to keep high-fidelity and not stay on the “inside” for too long. If you think AGI is more important than climate change, don’t roll over on your belly. But maybe wait 30 seconds.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nick Corvino https://forum.effectivealtruism.org/posts/bwvGpgL3yud7eD3h4/the-inside-out-model-for-pitching-ea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "Inside-Out" Model for pitching EA, published by Nick Corvino on November 1, 2022 on The Effective Altruism Forum.When trying to pitch EA to someone who cares about local politics, climate change, social justice, being a doctor, or something else that you might not think is highest of EVs, I see most people getting it wrong. They lead off with something like, “well, it seems implausible that extreme climate change will be an existential risk, so you should probably focus on something else instead.” Put yourself in their shoes. If this was the first thing you’d ever heard about effective altruism, would you feel welcomed? I think it frames EA as adversarial and closed-minded, and some people won't give EA another shot.I had these types of conversations a lot these when running an EA uni group, and have developed a mnemonic called the Inside-Out Model that I find helpful. Rather than immediately comparing the cause they are interested in with one you think is more impactful, start by applying the EA mindset within their cause, then work your way out.For example: “you’re interested in climate change—great! Well, within climate change, it seems like certain interventions are way more effective than others, such as working on green technology or making effective donations.” This gives your conversation a vibe that seems congenial rather than dogmatic. After talking from within their cause area for a bit, transition out, with something like “in fact, in the same way there are more effective interventions than others for combating extreme climate change, there may be more effective causes than climate change altogether.” By this point, hopefully they will listen to your opinion in good faith.I think it’s important to keep high-fidelity and not stay on the “inside” for too long. If you think AGI is more important than climate change, don’t roll over on your belly. But maybe wait 30 seconds.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 02 Nov 2022 14:45:19 +0000 EA - The "Inside-Out" Model for pitching EA by Nick Corvino Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "Inside-Out" Model for pitching EA, published by Nick Corvino on November 1, 2022 on The Effective Altruism Forum.When trying to pitch EA to someone who cares about local politics, climate change, social justice, being a doctor, or something else that you might not think is highest of EVs, I see most people getting it wrong. They lead off with something like, “well, it seems implausible that extreme climate change will be an existential risk, so you should probably focus on something else instead.” Put yourself in their shoes. If this was the first thing you’d ever heard about effective altruism, would you feel welcomed? I think it frames EA as adversarial and closed-minded, and some people won't give EA another shot.I had these types of conversations a lot these when running an EA uni group, and have developed a mnemonic called the Inside-Out Model that I find helpful. Rather than immediately comparing the cause they are interested in with one you think is more impactful, start by applying the EA mindset within their cause, then work your way out.For example: “you’re interested in climate change—great! Well, within climate change, it seems like certain interventions are way more effective than others, such as working on green technology or making effective donations.” This gives your conversation a vibe that seems congenial rather than dogmatic. After talking from within their cause area for a bit, transition out, with something like “in fact, in the same way there are more effective interventions than others for combating extreme climate change, there may be more effective causes than climate change altogether.” By this point, hopefully they will listen to your opinion in good faith.I think it’s important to keep high-fidelity and not stay on the “inside” for too long. If you think AGI is more important than climate change, don’t roll over on your belly. But maybe wait 30 seconds.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "Inside-Out" Model for pitching EA, published by Nick Corvino on November 1, 2022 on The Effective Altruism Forum.When trying to pitch EA to someone who cares about local politics, climate change, social justice, being a doctor, or something else that you might not think is highest of EVs, I see most people getting it wrong. They lead off with something like, “well, it seems implausible that extreme climate change will be an existential risk, so you should probably focus on something else instead.” Put yourself in their shoes. If this was the first thing you’d ever heard about effective altruism, would you feel welcomed? I think it frames EA as adversarial and closed-minded, and some people won't give EA another shot.I had these types of conversations a lot these when running an EA uni group, and have developed a mnemonic called the Inside-Out Model that I find helpful. Rather than immediately comparing the cause they are interested in with one you think is more impactful, start by applying the EA mindset within their cause, then work your way out.For example: “you’re interested in climate change—great! Well, within climate change, it seems like certain interventions are way more effective than others, such as working on green technology or making effective donations.” This gives your conversation a vibe that seems congenial rather than dogmatic. After talking from within their cause area for a bit, transition out, with something like “in fact, in the same way there are more effective interventions than others for combating extreme climate change, there may be more effective causes than climate change altogether.” By this point, hopefully they will listen to your opinion in good faith.I think it’s important to keep high-fidelity and not stay on the “inside” for too long. If you think AGI is more important than climate change, don’t roll over on your belly. But maybe wait 30 seconds.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Nick Corvino https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:58 None full 3629
yRzgdcjqeGopxuR4x_EA EA - All AGI Safety questions welcome (especially basic ones) [~monthly thread] by robertskmiles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: All AGI Safety questions welcome (especially basic ones) [~monthly thread], published by robertskmiles on November 1, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
robertskmiles https://forum.effectivealtruism.org/posts/yRzgdcjqeGopxuR4x/all-agi-safety-questions-welcome-especially-basic-ones Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: All AGI Safety questions welcome (especially basic ones) [~monthly thread], published by robertskmiles on November 1, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 02 Nov 2022 12:04:55 +0000 EA - All AGI Safety questions welcome (especially basic ones) [~monthly thread] by robertskmiles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: All AGI Safety questions welcome (especially basic ones) [~monthly thread], published by robertskmiles on November 1, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: All AGI Safety questions welcome (especially basic ones) [~monthly thread], published by robertskmiles on November 1, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
robertskmiles https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:31 None full 3628
oYFNKdmQtYA42fC2P_EA EA - Superintelligent AI is necessary for an amazing future, but far from sufficient by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Superintelligent AI is necessary for an amazing future, but far from sufficient, published by So8res on October 31, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
So8res https://forum.effectivealtruism.org/posts/oYFNKdmQtYA42fC2P/superintelligent-ai-is-necessary-for-an-amazing-future-but Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Superintelligent AI is necessary for an amazing future, but far from sufficient, published by So8res on October 31, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 02 Nov 2022 01:26:11 +0000 EA - Superintelligent AI is necessary for an amazing future, but far from sufficient by So8res Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Superintelligent AI is necessary for an amazing future, but far from sufficient, published by So8res on October 31, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Superintelligent AI is necessary for an amazing future, but far from sufficient, published by So8res on October 31, 2022 on The Effective Altruism Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
So8res https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 00:30 None full 3624
AmLtDwKpduvLbCfme_EA EA - Dark mode (Forum update November 2022) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dark mode (Forum update November 2022), published by Lizka on November 1, 2022 on The Effective Altruism Forum.TL;DR: We now have dark mode on the EA Forum. There are some bugs, but you can turn it on!This is what the Frontpage looks like in dark mode:How to use itYou need to be logged in — the Forum will respect browser settings for logged-out users.To change light/dark mode settings, hover over your username in the upper right-hand corner (or just go to your profile page). Then select “Account Settings.” In account settings, click on “Site Customizations” and click on the dropdown under “Theme.”You’ll see three options:“Auto” will respect your browser settings, even if you have a setting that varies your mode by time of day or the like (e.g. if you set your browser to be light during the day and dark in the evening and at night)“Light” will be light — the Forum you might be used to.“Dark” will always be in dark mode.Feedback is (always) welcome!Please also feel free to report bugs in the comments of this post.Thanks to everyone who worked on this. :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://forum.effectivealtruism.org/posts/AmLtDwKpduvLbCfme/dark-mode-forum-update-november-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dark mode (Forum update November 2022), published by Lizka on November 1, 2022 on The Effective Altruism Forum.TL;DR: We now have dark mode on the EA Forum. There are some bugs, but you can turn it on!This is what the Frontpage looks like in dark mode:How to use itYou need to be logged in — the Forum will respect browser settings for logged-out users.To change light/dark mode settings, hover over your username in the upper right-hand corner (or just go to your profile page). Then select “Account Settings.” In account settings, click on “Site Customizations” and click on the dropdown under “Theme.”You’ll see three options:“Auto” will respect your browser settings, even if you have a setting that varies your mode by time of day or the like (e.g. if you set your browser to be light during the day and dark in the evening and at night)“Light” will be light — the Forum you might be used to.“Dark” will always be in dark mode.Feedback is (always) welcome!Please also feel free to report bugs in the comments of this post.Thanks to everyone who worked on this. :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 01 Nov 2022 20:06:04 +0000 EA - Dark mode (Forum update November 2022) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dark mode (Forum update November 2022), published by Lizka on November 1, 2022 on The Effective Altruism Forum.TL;DR: We now have dark mode on the EA Forum. There are some bugs, but you can turn it on!This is what the Frontpage looks like in dark mode:How to use itYou need to be logged in — the Forum will respect browser settings for logged-out users.To change light/dark mode settings, hover over your username in the upper right-hand corner (or just go to your profile page). Then select “Account Settings.” In account settings, click on “Site Customizations” and click on the dropdown under “Theme.”You’ll see three options:“Auto” will respect your browser settings, even if you have a setting that varies your mode by time of day or the like (e.g. if you set your browser to be light during the day and dark in the evening and at night)“Light” will be light — the Forum you might be used to.“Dark” will always be in dark mode.Feedback is (always) welcome!Please also feel free to report bugs in the comments of this post.Thanks to everyone who worked on this. :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dark mode (Forum update November 2022), published by Lizka on November 1, 2022 on The Effective Altruism Forum.TL;DR: We now have dark mode on the EA Forum. There are some bugs, but you can turn it on!This is what the Frontpage looks like in dark mode:How to use itYou need to be logged in — the Forum will respect browser settings for logged-out users.To change light/dark mode settings, hover over your username in the upper right-hand corner (or just go to your profile page). Then select “Account Settings.” In account settings, click on “Site Customizations” and click on the dropdown under “Theme.”You’ll see three options:“Auto” will respect your browser settings, even if you have a setting that varies your mode by time of day or the like (e.g. if you set your browser to be light during the day and dark in the evening and at night)“Light” will be light — the Forum you might be used to.“Dark” will always be in dark mode.Feedback is (always) welcome!Please also feel free to report bugs in the comments of this post.Thanks to everyone who worked on this. :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:20 None full 3617
SsZkuYHv4dNfu7vnS_EA EA - Rethink Priorities’ Special Projects Team is hiring by Rachel Norman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities’ Special Projects Team is hiring, published by Rachel Norman on November 1, 2022 on The Effective Altruism Forum.TL;DRThe Special Projects Program launched in mid 2022 to provide fiscal sponsorship and operational support to (for the most part, longtermist) megaprojects.We are now hiring for Manager and Associate roles on the Special Projects team.Rethink Priorities is a remote organization that welcomes applicants from almost anywhere in the world.Compensation for the roles ranges from $74,000 USD (for Associate roles) up to $111,000 (for Manager roles).Examples of activities for Managers and Associates include day-to-day operations, running hiring rounds, developing budgets, writing business plans for entrepreneurial projects, fundraising, liaising with stakeholders, researching legal/operational issues, scouting for talent for the projects, helping run prizes or requests for proposals, and grantmaking.In lieu of holding a webinar for these roles, we will aim to answer all questions posted in the comments section of this post. Please ask away!Please apply here by November 13th at the end of the day in your local time.General information about all our rolesWe invite anyone who is interested to apply, regardless of background, experience, or credentials. We will select candidates almost entirely based on their performance in our selection process, putting only minimal weight on CVs and references.Most of our positions are equally open to part- or full-time candidates (anywhere between 20-40 hours per week).The full-time annual salary we offer varies based on title and prior experience. The range for the roles in our Special Projects Team is from $74,000 (for Associate roles) to $111,000 (for Manager roles). Salaries for part-time staff are prorated for the fraction of 40 hours/week they work.See Titles at Rethink Priorities for more information.See the "What we offer" section below for details.We are a remote organization and welcome applicants from anywhere in the world.We expect to be legally able to hire in most countries. If you are unsure whether we can hire in the country in which you are based, please email info@rethinkpriorities.org.Most of our staff are in time zones between UTC-8 and UTC+3. While we welcome candidates from outside of these time zones, we often have meetings during working hours in these time zones that employees are expected to attend.For permanent staff, Rethink Priorities offers comprehensive health coverage as regionally appropriate.Other benefits include stipends for workplace equipment, organization-wide mid- and end-of-year breaks, and generous leave policies.About Special Projects and the rolesThe Special Projects Program launched in mid 2022 to support activities outside of RP’s core research agenda. A lot of these projects require complicated operational support. RP’s Core Operations team is focused on ensuring RP's day-to-day functioning, and often these special projects require additional capacity beyond what the Core Ops team has or exploration of novel areas that are outside the purview of the Core Ops team.Although we currently expect Special Projects staff to primarily focus on operations activities, this is unlikely to solely be day-to-day operations tasks, and there are also many other activities we would be excited for the team to engage in. Some possible examples include developing budgets, writing business plans for entrepreneurial projects, fundraising, liaising with stakeholders, researching legal/operational issues, scouting for talent for the projects, helping run prizes or requests for proposals, and grantmaking.We expect that most of these projects will be in the longtermism space, although that is not a requirement. SP plans on fiscally sponsoring, incubating, or otherwise...]]>
Rachel Norman https://forum.effectivealtruism.org/posts/SsZkuYHv4dNfu7vnS/rethink-priorities-special-projects-team-is-hiring Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities’ Special Projects Team is hiring, published by Rachel Norman on November 1, 2022 on The Effective Altruism Forum.TL;DRThe Special Projects Program launched in mid 2022 to provide fiscal sponsorship and operational support to (for the most part, longtermist) megaprojects.We are now hiring for Manager and Associate roles on the Special Projects team.Rethink Priorities is a remote organization that welcomes applicants from almost anywhere in the world.Compensation for the roles ranges from $74,000 USD (for Associate roles) up to $111,000 (for Manager roles).Examples of activities for Managers and Associates include day-to-day operations, running hiring rounds, developing budgets, writing business plans for entrepreneurial projects, fundraising, liaising with stakeholders, researching legal/operational issues, scouting for talent for the projects, helping run prizes or requests for proposals, and grantmaking.In lieu of holding a webinar for these roles, we will aim to answer all questions posted in the comments section of this post. Please ask away!Please apply here by November 13th at the end of the day in your local time.General information about all our rolesWe invite anyone who is interested to apply, regardless of background, experience, or credentials. We will select candidates almost entirely based on their performance in our selection process, putting only minimal weight on CVs and references.Most of our positions are equally open to part- or full-time candidates (anywhere between 20-40 hours per week).The full-time annual salary we offer varies based on title and prior experience. The range for the roles in our Special Projects Team is from $74,000 (for Associate roles) to $111,000 (for Manager roles). Salaries for part-time staff are prorated for the fraction of 40 hours/week they work.See Titles at Rethink Priorities for more information.See the "What we offer" section below for details.We are a remote organization and welcome applicants from anywhere in the world.We expect to be legally able to hire in most countries. If you are unsure whether we can hire in the country in which you are based, please email info@rethinkpriorities.org.Most of our staff are in time zones between UTC-8 and UTC+3. While we welcome candidates from outside of these time zones, we often have meetings during working hours in these time zones that employees are expected to attend.For permanent staff, Rethink Priorities offers comprehensive health coverage as regionally appropriate.Other benefits include stipends for workplace equipment, organization-wide mid- and end-of-year breaks, and generous leave policies.About Special Projects and the rolesThe Special Projects Program launched in mid 2022 to support activities outside of RP’s core research agenda. A lot of these projects require complicated operational support. RP’s Core Operations team is focused on ensuring RP's day-to-day functioning, and often these special projects require additional capacity beyond what the Core Ops team has or exploration of novel areas that are outside the purview of the Core Ops team.Although we currently expect Special Projects staff to primarily focus on operations activities, this is unlikely to solely be day-to-day operations tasks, and there are also many other activities we would be excited for the team to engage in. Some possible examples include developing budgets, writing business plans for entrepreneurial projects, fundraising, liaising with stakeholders, researching legal/operational issues, scouting for talent for the projects, helping run prizes or requests for proposals, and grantmaking.We expect that most of these projects will be in the longtermism space, although that is not a requirement. SP plans on fiscally sponsoring, incubating, or otherwise...]]>
Tue, 01 Nov 2022 18:04:19 +0000 EA - Rethink Priorities’ Special Projects Team is hiring by Rachel Norman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities’ Special Projects Team is hiring, published by Rachel Norman on November 1, 2022 on The Effective Altruism Forum.TL;DRThe Special Projects Program launched in mid 2022 to provide fiscal sponsorship and operational support to (for the most part, longtermist) megaprojects.We are now hiring for Manager and Associate roles on the Special Projects team.Rethink Priorities is a remote organization that welcomes applicants from almost anywhere in the world.Compensation for the roles ranges from $74,000 USD (for Associate roles) up to $111,000 (for Manager roles).Examples of activities for Managers and Associates include day-to-day operations, running hiring rounds, developing budgets, writing business plans for entrepreneurial projects, fundraising, liaising with stakeholders, researching legal/operational issues, scouting for talent for the projects, helping run prizes or requests for proposals, and grantmaking.In lieu of holding a webinar for these roles, we will aim to answer all questions posted in the comments section of this post. Please ask away!Please apply here by November 13th at the end of the day in your local time.General information about all our rolesWe invite anyone who is interested to apply, regardless of background, experience, or credentials. We will select candidates almost entirely based on their performance in our selection process, putting only minimal weight on CVs and references.Most of our positions are equally open to part- or full-time candidates (anywhere between 20-40 hours per week).The full-time annual salary we offer varies based on title and prior experience. The range for the roles in our Special Projects Team is from $74,000 (for Associate roles) to $111,000 (for Manager roles). Salaries for part-time staff are prorated for the fraction of 40 hours/week they work.See Titles at Rethink Priorities for more information.See the "What we offer" section below for details.We are a remote organization and welcome applicants from anywhere in the world.We expect to be legally able to hire in most countries. If you are unsure whether we can hire in the country in which you are based, please email info@rethinkpriorities.org.Most of our staff are in time zones between UTC-8 and UTC+3. While we welcome candidates from outside of these time zones, we often have meetings during working hours in these time zones that employees are expected to attend.For permanent staff, Rethink Priorities offers comprehensive health coverage as regionally appropriate.Other benefits include stipends for workplace equipment, organization-wide mid- and end-of-year breaks, and generous leave policies.About Special Projects and the rolesThe Special Projects Program launched in mid 2022 to support activities outside of RP’s core research agenda. A lot of these projects require complicated operational support. RP’s Core Operations team is focused on ensuring RP's day-to-day functioning, and often these special projects require additional capacity beyond what the Core Ops team has or exploration of novel areas that are outside the purview of the Core Ops team.Although we currently expect Special Projects staff to primarily focus on operations activities, this is unlikely to solely be day-to-day operations tasks, and there are also many other activities we would be excited for the team to engage in. Some possible examples include developing budgets, writing business plans for entrepreneurial projects, fundraising, liaising with stakeholders, researching legal/operational issues, scouting for talent for the projects, helping run prizes or requests for proposals, and grantmaking.We expect that most of these projects will be in the longtermism space, although that is not a requirement. SP plans on fiscally sponsoring, incubating, or otherwise...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities’ Special Projects Team is hiring, published by Rachel Norman on November 1, 2022 on The Effective Altruism Forum.TL;DRThe Special Projects Program launched in mid 2022 to provide fiscal sponsorship and operational support to (for the most part, longtermist) megaprojects.We are now hiring for Manager and Associate roles on the Special Projects team.Rethink Priorities is a remote organization that welcomes applicants from almost anywhere in the world.Compensation for the roles ranges from $74,000 USD (for Associate roles) up to $111,000 (for Manager roles).Examples of activities for Managers and Associates include day-to-day operations, running hiring rounds, developing budgets, writing business plans for entrepreneurial projects, fundraising, liaising with stakeholders, researching legal/operational issues, scouting for talent for the projects, helping run prizes or requests for proposals, and grantmaking.In lieu of holding a webinar for these roles, we will aim to answer all questions posted in the comments section of this post. Please ask away!Please apply here by November 13th at the end of the day in your local time.General information about all our rolesWe invite anyone who is interested to apply, regardless of background, experience, or credentials. We will select candidates almost entirely based on their performance in our selection process, putting only minimal weight on CVs and references.Most of our positions are equally open to part- or full-time candidates (anywhere between 20-40 hours per week).The full-time annual salary we offer varies based on title and prior experience. The range for the roles in our Special Projects Team is from $74,000 (for Associate roles) to $111,000 (for Manager roles). Salaries for part-time staff are prorated for the fraction of 40 hours/week they work.See Titles at Rethink Priorities for more information.See the "What we offer" section below for details.We are a remote organization and welcome applicants from anywhere in the world.We expect to be legally able to hire in most countries. If you are unsure whether we can hire in the country in which you are based, please email info@rethinkpriorities.org.Most of our staff are in time zones between UTC-8 and UTC+3. While we welcome candidates from outside of these time zones, we often have meetings during working hours in these time zones that employees are expected to attend.For permanent staff, Rethink Priorities offers comprehensive health coverage as regionally appropriate.Other benefits include stipends for workplace equipment, organization-wide mid- and end-of-year breaks, and generous leave policies.About Special Projects and the rolesThe Special Projects Program launched in mid 2022 to support activities outside of RP’s core research agenda. A lot of these projects require complicated operational support. RP’s Core Operations team is focused on ensuring RP's day-to-day functioning, and often these special projects require additional capacity beyond what the Core Ops team has or exploration of novel areas that are outside the purview of the Core Ops team.Although we currently expect Special Projects staff to primarily focus on operations activities, this is unlikely to solely be day-to-day operations tasks, and there are also many other activities we would be excited for the team to engage in. Some possible examples include developing budgets, writing business plans for entrepreneurial projects, fundraising, liaising with stakeholders, researching legal/operational issues, scouting for talent for the projects, helping run prizes or requests for proposals, and grantmaking.We expect that most of these projects will be in the longtermism space, although that is not a requirement. SP plans on fiscally sponsoring, incubating, or otherwise...]]>
Rachel Norman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 11:41 None full 3615
pb7Q9awb5nsx3mRzk_EA EA - ML Safety Scholars Summer 2022 Retrospective by ThomasW Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ML Safety Scholars Summer 2022 Retrospective, published by ThomasW on November 1, 2022 on The Effective Altruism Forum.TLDRThis is a report on the Machine Learning Safety Scholars program, organized over the summer by the Center for AI Safety. 63 students graduated from MLSS: the list of graduates and final projects is here. The program was intense, time-consuming, and at times very difficult, so graduation from the program is a significant accomplishment.Overall, I think the program went quite well and that many students have noticeably accelerated their AI safety careers. There are certainly many areas of improvement that could be made for a future iteration, and many are detailed here. We plan to conduct followup surveys to determine the longer-run effects of the program.This post contains three main sections:This TLDR, which is meant for people who just want to know what this document is and see our graduates list.The executive summary, which includes a high-level overview of MLSS. This might be of interest to students considering doing MLSS in the future or anyone else interested in MLSS.The full report, which was mainly written for future MLSS organizers, but I’m publishing here because it might be useful to others running similar programs.The report was written by Thomas Woodside, the project manager for MLSS. “I” refers to Thomas, and does not necessarily represent the opinion of the Center for AI Safety, its CEO Dan Hendrycks, or any of our funders.Visual Executive SummaryMLSS OverviewMLSS was a summer program for mostly undergraduate students aimed to teach the foundations of machine learning, deep learning, and ML safety. The program ended up being ten weeks long and included an optional final project. It incorporated office hours, discussion sections, speaker events, conceptual readings, paper readings, written assignments, and programming assignments. You can see our full curriculum here.Survey ResultsAll MLSS graduates filled out an anonymous survey. What follows is a mostly visual depiction of the program through the lens of these survey results.Overall Experience in MLSSOf course, this sample is biased towards graduates of MLSS, since we required them to complete the survey to receive their final stipends (and non graduates didn’t get final stipends). However, it seems clear from the way that people responded to this survey that the majority had quite positive opinions of our program.We also asked students about their future plans:The results suggest that many students are actively trying to work in AI safety and that their participation in MLSS helped them become more confident in that choice. MLSS did decrease some students’ desire to research AI safety. We do not think this is necessarily a bad thing, as many students were using MLSS to test fit; presumably, some are not great fits for AI safety research but might be able to contribute in some other way.We asked students about why they chose to do the program:We conclude that the stipend was extremely useful to students, and allowed many students to complete the program who wouldn’t have otherwise been able to. Nearly all graduates said they were interested in learning about ML safety in particular.We asked students about the quality of the support from their TAs:We asked students a few questions about what they thought about AI x-risk:Lastly, we asked how many hours students spent in the course:Concluding ThoughtsTo a large extent, we think these results speak for themselves. Students said they got a lot out of MLSS, and I believe many of them are very interested in pursuing AI safety careers that have been accelerated by the program. The real test of our program, of course, will come when we survey students in the future to see what they are doing and how the course helped...]]>
ThomasW https://forum.effectivealtruism.org/posts/pb7Q9awb5nsx3mRzk/ml-safety-scholars-summer-2022-retrospective Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ML Safety Scholars Summer 2022 Retrospective, published by ThomasW on November 1, 2022 on The Effective Altruism Forum.TLDRThis is a report on the Machine Learning Safety Scholars program, organized over the summer by the Center for AI Safety. 63 students graduated from MLSS: the list of graduates and final projects is here. The program was intense, time-consuming, and at times very difficult, so graduation from the program is a significant accomplishment.Overall, I think the program went quite well and that many students have noticeably accelerated their AI safety careers. There are certainly many areas of improvement that could be made for a future iteration, and many are detailed here. We plan to conduct followup surveys to determine the longer-run effects of the program.This post contains three main sections:This TLDR, which is meant for people who just want to know what this document is and see our graduates list.The executive summary, which includes a high-level overview of MLSS. This might be of interest to students considering doing MLSS in the future or anyone else interested in MLSS.The full report, which was mainly written for future MLSS organizers, but I’m publishing here because it might be useful to others running similar programs.The report was written by Thomas Woodside, the project manager for MLSS. “I” refers to Thomas, and does not necessarily represent the opinion of the Center for AI Safety, its CEO Dan Hendrycks, or any of our funders.Visual Executive SummaryMLSS OverviewMLSS was a summer program for mostly undergraduate students aimed to teach the foundations of machine learning, deep learning, and ML safety. The program ended up being ten weeks long and included an optional final project. It incorporated office hours, discussion sections, speaker events, conceptual readings, paper readings, written assignments, and programming assignments. You can see our full curriculum here.Survey ResultsAll MLSS graduates filled out an anonymous survey. What follows is a mostly visual depiction of the program through the lens of these survey results.Overall Experience in MLSSOf course, this sample is biased towards graduates of MLSS, since we required them to complete the survey to receive their final stipends (and non graduates didn’t get final stipends). However, it seems clear from the way that people responded to this survey that the majority had quite positive opinions of our program.We also asked students about their future plans:The results suggest that many students are actively trying to work in AI safety and that their participation in MLSS helped them become more confident in that choice. MLSS did decrease some students’ desire to research AI safety. We do not think this is necessarily a bad thing, as many students were using MLSS to test fit; presumably, some are not great fits for AI safety research but might be able to contribute in some other way.We asked students about why they chose to do the program:We conclude that the stipend was extremely useful to students, and allowed many students to complete the program who wouldn’t have otherwise been able to. Nearly all graduates said they were interested in learning about ML safety in particular.We asked students about the quality of the support from their TAs:We asked students a few questions about what they thought about AI x-risk:Lastly, we asked how many hours students spent in the course:Concluding ThoughtsTo a large extent, we think these results speak for themselves. Students said they got a lot out of MLSS, and I believe many of them are very interested in pursuing AI safety careers that have been accelerated by the program. The real test of our program, of course, will come when we survey students in the future to see what they are doing and how the course helped...]]>
Tue, 01 Nov 2022 17:55:53 +0000 EA - ML Safety Scholars Summer 2022 Retrospective by ThomasW Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ML Safety Scholars Summer 2022 Retrospective, published by ThomasW on November 1, 2022 on The Effective Altruism Forum.TLDRThis is a report on the Machine Learning Safety Scholars program, organized over the summer by the Center for AI Safety. 63 students graduated from MLSS: the list of graduates and final projects is here. The program was intense, time-consuming, and at times very difficult, so graduation from the program is a significant accomplishment.Overall, I think the program went quite well and that many students have noticeably accelerated their AI safety careers. There are certainly many areas of improvement that could be made for a future iteration, and many are detailed here. We plan to conduct followup surveys to determine the longer-run effects of the program.This post contains three main sections:This TLDR, which is meant for people who just want to know what this document is and see our graduates list.The executive summary, which includes a high-level overview of MLSS. This might be of interest to students considering doing MLSS in the future or anyone else interested in MLSS.The full report, which was mainly written for future MLSS organizers, but I’m publishing here because it might be useful to others running similar programs.The report was written by Thomas Woodside, the project manager for MLSS. “I” refers to Thomas, and does not necessarily represent the opinion of the Center for AI Safety, its CEO Dan Hendrycks, or any of our funders.Visual Executive SummaryMLSS OverviewMLSS was a summer program for mostly undergraduate students aimed to teach the foundations of machine learning, deep learning, and ML safety. The program ended up being ten weeks long and included an optional final project. It incorporated office hours, discussion sections, speaker events, conceptual readings, paper readings, written assignments, and programming assignments. You can see our full curriculum here.Survey ResultsAll MLSS graduates filled out an anonymous survey. What follows is a mostly visual depiction of the program through the lens of these survey results.Overall Experience in MLSSOf course, this sample is biased towards graduates of MLSS, since we required them to complete the survey to receive their final stipends (and non graduates didn’t get final stipends). However, it seems clear from the way that people responded to this survey that the majority had quite positive opinions of our program.We also asked students about their future plans:The results suggest that many students are actively trying to work in AI safety and that their participation in MLSS helped them become more confident in that choice. MLSS did decrease some students’ desire to research AI safety. We do not think this is necessarily a bad thing, as many students were using MLSS to test fit; presumably, some are not great fits for AI safety research but might be able to contribute in some other way.We asked students about why they chose to do the program:We conclude that the stipend was extremely useful to students, and allowed many students to complete the program who wouldn’t have otherwise been able to. Nearly all graduates said they were interested in learning about ML safety in particular.We asked students about the quality of the support from their TAs:We asked students a few questions about what they thought about AI x-risk:Lastly, we asked how many hours students spent in the course:Concluding ThoughtsTo a large extent, we think these results speak for themselves. Students said they got a lot out of MLSS, and I believe many of them are very interested in pursuing AI safety careers that have been accelerated by the program. The real test of our program, of course, will come when we survey students in the future to see what they are doing and how the course helped...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ML Safety Scholars Summer 2022 Retrospective, published by ThomasW on November 1, 2022 on The Effective Altruism Forum.TLDRThis is a report on the Machine Learning Safety Scholars program, organized over the summer by the Center for AI Safety. 63 students graduated from MLSS: the list of graduates and final projects is here. The program was intense, time-consuming, and at times very difficult, so graduation from the program is a significant accomplishment.Overall, I think the program went quite well and that many students have noticeably accelerated their AI safety careers. There are certainly many areas of improvement that could be made for a future iteration, and many are detailed here. We plan to conduct followup surveys to determine the longer-run effects of the program.This post contains three main sections:This TLDR, which is meant for people who just want to know what this document is and see our graduates list.The executive summary, which includes a high-level overview of MLSS. This might be of interest to students considering doing MLSS in the future or anyone else interested in MLSS.The full report, which was mainly written for future MLSS organizers, but I’m publishing here because it might be useful to others running similar programs.The report was written by Thomas Woodside, the project manager for MLSS. “I” refers to Thomas, and does not necessarily represent the opinion of the Center for AI Safety, its CEO Dan Hendrycks, or any of our funders.Visual Executive SummaryMLSS OverviewMLSS was a summer program for mostly undergraduate students aimed to teach the foundations of machine learning, deep learning, and ML safety. The program ended up being ten weeks long and included an optional final project. It incorporated office hours, discussion sections, speaker events, conceptual readings, paper readings, written assignments, and programming assignments. You can see our full curriculum here.Survey ResultsAll MLSS graduates filled out an anonymous survey. What follows is a mostly visual depiction of the program through the lens of these survey results.Overall Experience in MLSSOf course, this sample is biased towards graduates of MLSS, since we required them to complete the survey to receive their final stipends (and non graduates didn’t get final stipends). However, it seems clear from the way that people responded to this survey that the majority had quite positive opinions of our program.We also asked students about their future plans:The results suggest that many students are actively trying to work in AI safety and that their participation in MLSS helped them become more confident in that choice. MLSS did decrease some students’ desire to research AI safety. We do not think this is necessarily a bad thing, as many students were using MLSS to test fit; presumably, some are not great fits for AI safety research but might be able to contribute in some other way.We asked students about why they chose to do the program:We conclude that the stipend was extremely useful to students, and allowed many students to complete the program who wouldn’t have otherwise been able to. Nearly all graduates said they were interested in learning about ML safety in particular.We asked students about the quality of the support from their TAs:We asked students a few questions about what they thought about AI x-risk:Lastly, we asked how many hours students spent in the course:Concluding ThoughtsTo a large extent, we think these results speak for themselves. Students said they got a lot out of MLSS, and I believe many of them are very interested in pursuing AI safety careers that have been accelerated by the program. The real test of our program, of course, will come when we survey students in the future to see what they are doing and how the course helped...]]>
ThomasW https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 35:30 None full 3620
nTZWjN2apfhRAzCiN_EA EA - Insect farming might cause more suffering than other animal farming by ishankhire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Insect farming might cause more suffering than other animal farming, published by ishankhire on November 1, 2022 on The Effective Altruism Forum.The edible insect market has been growing and is projected to continue increasing rapidly in the future. There has been significant research pointing to the fact that insects show signs of consciousness, and I think it’s important to seriously compare insect welfare interventions with other ones.Brian Tomasik's “How Much Direct Suffering Is Caused by Various Animal Foods?” compares the days of suffering per kilogram of different animals. I’d like to add on to this post by a guesstimate model of where insect suffering may lie in this list. I used cricket suffering to create this model and it’s important to note there are differences in the numbers depending on which insects are being farmed—mealworms and black soldier flies likely have a different weight and sentience multiplier.My reasoning for the numbers I used is given in the sheet: if you hover over the fields, you will see a detailed description.The model shows the mean value is 1300 days of suffering per kg of insects consumed, which is twice as high as the highest animal in Brian Tomasik’s list (farmed catfish). The median value is considerably lower (35), and the 5th and 95th percentile are approximately 0.32 and 2500.If correct, these estimates suggest that insect welfare deserves a high priority compared to other farmed animal welfare.Here are possible interventions. In addition to legislation to restrict insect farming (such as bans on using insect feed for livestock), I think there could be a lot of value in using insects other than crickets which have fewer neurons and therefore could have a smaller sentience multiplier (like mealworms). Interventions to ban likely more painful methods of killing insects such as boiling could also be impactful.I’d very much appreciate discussion and feedback on the model and where you think it could be improved.I made this model during a one-day hackathon at the Atlas Fellowship.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ishankhire https://forum.effectivealtruism.org/posts/nTZWjN2apfhRAzCiN/insect-farming-might-cause-more-suffering-than-other-animal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Insect farming might cause more suffering than other animal farming, published by ishankhire on November 1, 2022 on The Effective Altruism Forum.The edible insect market has been growing and is projected to continue increasing rapidly in the future. There has been significant research pointing to the fact that insects show signs of consciousness, and I think it’s important to seriously compare insect welfare interventions with other ones.Brian Tomasik's “How Much Direct Suffering Is Caused by Various Animal Foods?” compares the days of suffering per kilogram of different animals. I’d like to add on to this post by a guesstimate model of where insect suffering may lie in this list. I used cricket suffering to create this model and it’s important to note there are differences in the numbers depending on which insects are being farmed—mealworms and black soldier flies likely have a different weight and sentience multiplier.My reasoning for the numbers I used is given in the sheet: if you hover over the fields, you will see a detailed description.The model shows the mean value is 1300 days of suffering per kg of insects consumed, which is twice as high as the highest animal in Brian Tomasik’s list (farmed catfish). The median value is considerably lower (35), and the 5th and 95th percentile are approximately 0.32 and 2500.If correct, these estimates suggest that insect welfare deserves a high priority compared to other farmed animal welfare.Here are possible interventions. In addition to legislation to restrict insect farming (such as bans on using insect feed for livestock), I think there could be a lot of value in using insects other than crickets which have fewer neurons and therefore could have a smaller sentience multiplier (like mealworms). Interventions to ban likely more painful methods of killing insects such as boiling could also be impactful.I’d very much appreciate discussion and feedback on the model and where you think it could be improved.I made this model during a one-day hackathon at the Atlas Fellowship.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Tue, 01 Nov 2022 10:47:49 +0000 EA - Insect farming might cause more suffering than other animal farming by ishankhire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Insect farming might cause more suffering than other animal farming, published by ishankhire on November 1, 2022 on The Effective Altruism Forum.The edible insect market has been growing and is projected to continue increasing rapidly in the future. There has been significant research pointing to the fact that insects show signs of consciousness, and I think it’s important to seriously compare insect welfare interventions with other ones.Brian Tomasik's “How Much Direct Suffering Is Caused by Various Animal Foods?” compares the days of suffering per kilogram of different animals. I’d like to add on to this post by a guesstimate model of where insect suffering may lie in this list. I used cricket suffering to create this model and it’s important to note there are differences in the numbers depending on which insects are being farmed—mealworms and black soldier flies likely have a different weight and sentience multiplier.My reasoning for the numbers I used is given in the sheet: if you hover over the fields, you will see a detailed description.The model shows the mean value is 1300 days of suffering per kg of insects consumed, which is twice as high as the highest animal in Brian Tomasik’s list (farmed catfish). The median value is considerably lower (35), and the 5th and 95th percentile are approximately 0.32 and 2500.If correct, these estimates suggest that insect welfare deserves a high priority compared to other farmed animal welfare.Here are possible interventions. In addition to legislation to restrict insect farming (such as bans on using insect feed for livestock), I think there could be a lot of value in using insects other than crickets which have fewer neurons and therefore could have a smaller sentience multiplier (like mealworms). Interventions to ban likely more painful methods of killing insects such as boiling could also be impactful.I’d very much appreciate discussion and feedback on the model and where you think it could be improved.I made this model during a one-day hackathon at the Atlas Fellowship.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Insect farming might cause more suffering than other animal farming, published by ishankhire on November 1, 2022 on The Effective Altruism Forum.The edible insect market has been growing and is projected to continue increasing rapidly in the future. There has been significant research pointing to the fact that insects show signs of consciousness, and I think it’s important to seriously compare insect welfare interventions with other ones.Brian Tomasik's “How Much Direct Suffering Is Caused by Various Animal Foods?” compares the days of suffering per kilogram of different animals. I’d like to add on to this post by a guesstimate model of where insect suffering may lie in this list. I used cricket suffering to create this model and it’s important to note there are differences in the numbers depending on which insects are being farmed—mealworms and black soldier flies likely have a different weight and sentience multiplier.My reasoning for the numbers I used is given in the sheet: if you hover over the fields, you will see a detailed description.The model shows the mean value is 1300 days of suffering per kg of insects consumed, which is twice as high as the highest animal in Brian Tomasik’s list (farmed catfish). The median value is considerably lower (35), and the 5th and 95th percentile are approximately 0.32 and 2500.If correct, these estimates suggest that insect welfare deserves a high priority compared to other farmed animal welfare.Here are possible interventions. In addition to legislation to restrict insect farming (such as bans on using insect feed for livestock), I think there could be a lot of value in using insects other than crickets which have fewer neurons and therefore could have a smaller sentience multiplier (like mealworms). Interventions to ban likely more painful methods of killing insects such as boiling could also be impactful.I’d very much appreciate discussion and feedback on the model and where you think it could be improved.I made this model during a one-day hackathon at the Atlas Fellowship.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
ishankhire https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:06 None full 3611
qo2QpLXBrrvdqJHXQ_EA EA - A dozen doubts about GiveWell’s numbers by JoelMcGuire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A dozen doubts about GiveWell’s numbers, published by JoelMcGuire on November 1, 2022 on The Effective Altruism Forum.An entry to GiveWell’s Change Our Mind ContestJoel McGuire, Michael Plant, and Samuel Dupret[Note: We co-authored a post on deworming that has been considered for the contest.]SummaryWe raise twelve critiques of GiveWell’s cost-effectiveness analyses. Ten apply to specific inputs for malaria prevention, cash transfers, deworming, and two are relevant for more than one intervention. Our calculations are available in this modified spreadsheet of GiveWell’s cost-effectiveness analyses. We provide a summary of these first, then set them out in greater detail. We encountered these as part of our work to replicate GiveWell’s analysis. They do not constitute an exhaustive reviewMalaria prevention1. GiveWell assumes that a large fraction (34%) of the total effects of anti-malaria bed nets come from the extra income children will later earn due to reduced malaria exposure. However, GiveWell provides little explanation or justification for these numbers, which are strikingly, indeed suspiciously, large: taken at face value, they imply that bednets are 4x more cost-effective at increasing income than cash transfers.We were unsure what the income-increasing effect of bednet should be, so we investigated a number of factors, which we enumerate as the next six critiques.2. The evidence GiveWell uses to generate its income-improving figure for bednets is from contexts that seem very different from that of the charities it recommends. While GiveWell does discount this evidence it uses for ‘generalisability’, we suggest a larger discount would be consistent with GiveWell’s reasoning regarding deworming and would reduce the total cost-effectiveness by at least 10%.3. We present possible mechanisms for malaria prevention to income benefits, but they do not explain most of the effects. It’s unclear how this would change the cost-effectiveness results.4. We demonstrate how GiveWell’s analysis does not sufficiently adjust for differences in context by looking at malaria prevalence. Malaria prevalence was higher in the context of the evidence than in AMF’s context. When we adjust for this, we expect this will lead to around a 17% decrease in AMF’s total cost-effectiveness, with low confidence.5. GiveWell has missed some evidence for the effect of malaria prevention on income. We expect this decreases the total cost-effectiveness of AMF by 14%, with moderate confidence.6. GiveWell may underestimate the differences in benefits between women and men. We expect with moderate confidence that incorporating more evidence would decrease the total cost-effectiveness of AMF by 16%.7. GiveWell assumes, without clear justification, that the income effects of malaria prevention will endure in just the same way as they suppose the effects of deworming do. We’ve previously called deworming’s benefits into question (McGuire et al., 2022), incorporating the adjustment proposed there would decrease the total cost-effectiveness by AMF by 3.4% to 10.2% (low confidence)We’ve presented these changes in cost-effectiveness for changing only one variable at a time. If we implement all of them at once (excluding critiques 3 and 7 for which we are more uncertain how they’ll change the results), then the effects decline by 29% (see row 202 of “AMF total” tab).Cash transfers8. GiveWell’s analysis of the effectiveness of cash transfers only uses one study to estimate GiveDirectly’s immediate increases in consumption (Haushofer & Shapiro, 2013) when there are many more available. Using more data we show that recipients of GiveDirectly cash transfers are actually 70% richer than assumed in GiveWell’s cost-effectiveness analysis. Hence, the relative increase in consumption from GiveDirectl...]]>
JoelMcGuire https://forum.effectivealtruism.org/posts/qo2QpLXBrrvdqJHXQ/a-dozen-doubts-about-givewell-s-numbers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A dozen doubts about GiveWell’s numbers, published by JoelMcGuire on November 1, 2022 on The Effective Altruism Forum.An entry to GiveWell’s Change Our Mind ContestJoel McGuire, Michael Plant, and Samuel Dupret[Note: We co-authored a post on deworming that has been considered for the contest.]SummaryWe raise twelve critiques of GiveWell’s cost-effectiveness analyses. Ten apply to specific inputs for malaria prevention, cash transfers, deworming, and two are relevant for more than one intervention. Our calculations are available in this modified spreadsheet of GiveWell’s cost-effectiveness analyses. We provide a summary of these first, then set them out in greater detail. We encountered these as part of our work to replicate GiveWell’s analysis. They do not constitute an exhaustive reviewMalaria prevention1. GiveWell assumes that a large fraction (34%) of the total effects of anti-malaria bed nets come from the extra income children will later earn due to reduced malaria exposure. However, GiveWell provides little explanation or justification for these numbers, which are strikingly, indeed suspiciously, large: taken at face value, they imply that bednets are 4x more cost-effective at increasing income than cash transfers.We were unsure what the income-increasing effect of bednet should be, so we investigated a number of factors, which we enumerate as the next six critiques.2. The evidence GiveWell uses to generate its income-improving figure for bednets is from contexts that seem very different from that of the charities it recommends. While GiveWell does discount this evidence it uses for ‘generalisability’, we suggest a larger discount would be consistent with GiveWell’s reasoning regarding deworming and would reduce the total cost-effectiveness by at least 10%.3. We present possible mechanisms for malaria prevention to income benefits, but they do not explain most of the effects. It’s unclear how this would change the cost-effectiveness results.4. We demonstrate how GiveWell’s analysis does not sufficiently adjust for differences in context by looking at malaria prevalence. Malaria prevalence was higher in the context of the evidence than in AMF’s context. When we adjust for this, we expect this will lead to around a 17% decrease in AMF’s total cost-effectiveness, with low confidence.5. GiveWell has missed some evidence for the effect of malaria prevention on income. We expect this decreases the total cost-effectiveness of AMF by 14%, with moderate confidence.6. GiveWell may underestimate the differences in benefits between women and men. We expect with moderate confidence that incorporating more evidence would decrease the total cost-effectiveness of AMF by 16%.7. GiveWell assumes, without clear justification, that the income effects of malaria prevention will endure in just the same way as they suppose the effects of deworming do. We’ve previously called deworming’s benefits into question (McGuire et al., 2022), incorporating the adjustment proposed there would decrease the total cost-effectiveness by AMF by 3.4% to 10.2% (low confidence)We’ve presented these changes in cost-effectiveness for changing only one variable at a time. If we implement all of them at once (excluding critiques 3 and 7 for which we are more uncertain how they’ll change the results), then the effects decline by 29% (see row 202 of “AMF total” tab).Cash transfers8. GiveWell’s analysis of the effectiveness of cash transfers only uses one study to estimate GiveDirectly’s immediate increases in consumption (Haushofer & Shapiro, 2013) when there are many more available. Using more data we show that recipients of GiveDirectly cash transfers are actually 70% richer than assumed in GiveWell’s cost-effectiveness analysis. Hence, the relative increase in consumption from GiveDirectl...]]>
Tue, 01 Nov 2022 08:35:45 +0000 EA - A dozen doubts about GiveWell’s numbers by JoelMcGuire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A dozen doubts about GiveWell’s numbers, published by JoelMcGuire on November 1, 2022 on The Effective Altruism Forum.An entry to GiveWell’s Change Our Mind ContestJoel McGuire, Michael Plant, and Samuel Dupret[Note: We co-authored a post on deworming that has been considered for the contest.]SummaryWe raise twelve critiques of GiveWell’s cost-effectiveness analyses. Ten apply to specific inputs for malaria prevention, cash transfers, deworming, and two are relevant for more than one intervention. Our calculations are available in this modified spreadsheet of GiveWell’s cost-effectiveness analyses. We provide a summary of these first, then set them out in greater detail. We encountered these as part of our work to replicate GiveWell’s analysis. They do not constitute an exhaustive reviewMalaria prevention1. GiveWell assumes that a large fraction (34%) of the total effects of anti-malaria bed nets come from the extra income children will later earn due to reduced malaria exposure. However, GiveWell provides little explanation or justification for these numbers, which are strikingly, indeed suspiciously, large: taken at face value, they imply that bednets are 4x more cost-effective at increasing income than cash transfers.We were unsure what the income-increasing effect of bednet should be, so we investigated a number of factors, which we enumerate as the next six critiques.2. The evidence GiveWell uses to generate its income-improving figure for bednets is from contexts that seem very different from that of the charities it recommends. While GiveWell does discount this evidence it uses for ‘generalisability’, we suggest a larger discount would be consistent with GiveWell’s reasoning regarding deworming and would reduce the total cost-effectiveness by at least 10%.3. We present possible mechanisms for malaria prevention to income benefits, but they do not explain most of the effects. It’s unclear how this would change the cost-effectiveness results.4. We demonstrate how GiveWell’s analysis does not sufficiently adjust for differences in context by looking at malaria prevalence. Malaria prevalence was higher in the context of the evidence than in AMF’s context. When we adjust for this, we expect this will lead to around a 17% decrease in AMF’s total cost-effectiveness, with low confidence.5. GiveWell has missed some evidence for the effect of malaria prevention on income. We expect this decreases the total cost-effectiveness of AMF by 14%, with moderate confidence.6. GiveWell may underestimate the differences in benefits between women and men. We expect with moderate confidence that incorporating more evidence would decrease the total cost-effectiveness of AMF by 16%.7. GiveWell assumes, without clear justification, that the income effects of malaria prevention will endure in just the same way as they suppose the effects of deworming do. We’ve previously called deworming’s benefits into question (McGuire et al., 2022), incorporating the adjustment proposed there would decrease the total cost-effectiveness by AMF by 3.4% to 10.2% (low confidence)We’ve presented these changes in cost-effectiveness for changing only one variable at a time. If we implement all of them at once (excluding critiques 3 and 7 for which we are more uncertain how they’ll change the results), then the effects decline by 29% (see row 202 of “AMF total” tab).Cash transfers8. GiveWell’s analysis of the effectiveness of cash transfers only uses one study to estimate GiveDirectly’s immediate increases in consumption (Haushofer & Shapiro, 2013) when there are many more available. Using more data we show that recipients of GiveDirectly cash transfers are actually 70% richer than assumed in GiveWell’s cost-effectiveness analysis. Hence, the relative increase in consumption from GiveDirectl...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A dozen doubts about GiveWell’s numbers, published by JoelMcGuire on November 1, 2022 on The Effective Altruism Forum.An entry to GiveWell’s Change Our Mind ContestJoel McGuire, Michael Plant, and Samuel Dupret[Note: We co-authored a post on deworming that has been considered for the contest.]SummaryWe raise twelve critiques of GiveWell’s cost-effectiveness analyses. Ten apply to specific inputs for malaria prevention, cash transfers, deworming, and two are relevant for more than one intervention. Our calculations are available in this modified spreadsheet of GiveWell’s cost-effectiveness analyses. We provide a summary of these first, then set them out in greater detail. We encountered these as part of our work to replicate GiveWell’s analysis. They do not constitute an exhaustive reviewMalaria prevention1. GiveWell assumes that a large fraction (34%) of the total effects of anti-malaria bed nets come from the extra income children will later earn due to reduced malaria exposure. However, GiveWell provides little explanation or justification for these numbers, which are strikingly, indeed suspiciously, large: taken at face value, they imply that bednets are 4x more cost-effective at increasing income than cash transfers.We were unsure what the income-increasing effect of bednet should be, so we investigated a number of factors, which we enumerate as the next six critiques.2. The evidence GiveWell uses to generate its income-improving figure for bednets is from contexts that seem very different from that of the charities it recommends. While GiveWell does discount this evidence it uses for ‘generalisability’, we suggest a larger discount would be consistent with GiveWell’s reasoning regarding deworming and would reduce the total cost-effectiveness by at least 10%.3. We present possible mechanisms for malaria prevention to income benefits, but they do not explain most of the effects. It’s unclear how this would change the cost-effectiveness results.4. We demonstrate how GiveWell’s analysis does not sufficiently adjust for differences in context by looking at malaria prevalence. Malaria prevalence was higher in the context of the evidence than in AMF’s context. When we adjust for this, we expect this will lead to around a 17% decrease in AMF’s total cost-effectiveness, with low confidence.5. GiveWell has missed some evidence for the effect of malaria prevention on income. We expect this decreases the total cost-effectiveness of AMF by 14%, with moderate confidence.6. GiveWell may underestimate the differences in benefits between women and men. We expect with moderate confidence that incorporating more evidence would decrease the total cost-effectiveness of AMF by 16%.7. GiveWell assumes, without clear justification, that the income effects of malaria prevention will endure in just the same way as they suppose the effects of deworming do. We’ve previously called deworming’s benefits into question (McGuire et al., 2022), incorporating the adjustment proposed there would decrease the total cost-effectiveness by AMF by 3.4% to 10.2% (low confidence)We’ve presented these changes in cost-effectiveness for changing only one variable at a time. If we implement all of them at once (excluding critiques 3 and 7 for which we are more uncertain how they’ll change the results), then the effects decline by 29% (see row 202 of “AMF total” tab).Cash transfers8. GiveWell’s analysis of the effectiveness of cash transfers only uses one study to estimate GiveDirectly’s immediate increases in consumption (Haushofer & Shapiro, 2013) when there are many more available. Using more data we show that recipients of GiveDirectly cash transfers are actually 70% richer than assumed in GiveWell’s cost-effectiveness analysis. Hence, the relative increase in consumption from GiveDirectl...]]>
JoelMcGuire https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 42:45 None full 3612
XyEKTDtfMH4DukhaF_EA EA - The Challenges with Measuring the Impact of Lobbying by Animal Ask Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Challenges with Measuring the Impact of Lobbying, published by Animal Ask on October 31, 2022 on The Effective Altruism Forum.A foundational report on the methodology we use at Animal Ask to measure the impact of lobbying campaigns.EXECUTIVE SUMMARYLegislative lobbying has led to numerous positive outcomes in many policy areas and social movements. For example, lobbying in the United States is responsible for lower taxes on solar power, increased taxes on tobacco, and the establishment of a program to detect asteroids (Lerner 2020). In the animal advocacy movement specifically, one recent victory is the 'End the Cage Age' initiative. This lobbying campaign led to a commitment by the European Commission to phase out the use of cages for hens, sows, calves, and numerous other farmed animals (European Commission 2021).When evaluating campaign strategy, it is important to have a good way to project the success of competing legislative opportunities. Ideally, this would involve predicting the degree to which lobbying effort makes a legislative opportunity more likely to succeed, compared to the scenario where that additional effort does not take place. We refer to this as the counterfactual impact of lobbying. Understanding this would provide insight into how campaigning can be most effective.Currently, we do not know the best way to measure the counterfactual impact of lobbying. One promising option is to use the academic literature on lobbying. The published studies on lobbying could be used to derive estimates of policy success and, ideally, the counterfactual impact of lobbying.The purpose of this report is to find out whether this academic literature contains quantitative estimates that could be useful in gauging the counterfactual impact of lobbying. We answer this question by reviewing the literature on lobbying and examining studies that have published estimates of the rates of policy success.Overall policy success is made up of two components: a baseline rate of policy success, and the counterfactual impact of lobbying. Therefore, to measure the counterfactual impact of lobbying, we need to be able to do two things: 1) measure overall policy success, and 2) break down this overall policy success into the baseline rate of success and the counterfactual impact of lobbying (see Figure 1 below).We find that measuring overall policy success using the academic literature is unlikely to work. The lobbying literature contains many well-known weaknesses which cast doubt on researchers' ability to measure policy success. As well as this, our systematic review found a strange result, which suggests that the published estimates of policy success should not be taken at face value.We find that there is insufficient evidence to break down overall policy success into the baseline rate of success and the counterfactual impact of lobbying. There is only a single study that identifies the counterfactual impact of lobbying. That study was limited to a single policy issue and a single context, and so its findings need to be replicated across other issues and contexts before we can draw any sound conclusions.In summary, the academic literature does not contain sufficient information to gauge the counterfactual impact of lobbying. For these reasons, we recommend against using estimates from the published literature when modelling the effects of lobbying. Instead, we recommend that researchers choose a different option for incorporating information on the prospects of success. We conclude by highlighting two promising options that deserve further exploration: expert judgement and panels of superforecasters.Although we conducted this research to guide our own research process at Animal Ask, this report may also be useful to other researchers. In particular, we expect this ...]]>
Animal Ask https://forum.effectivealtruism.org/posts/XyEKTDtfMH4DukhaF/the-challenges-with-measuring-the-impact-of-lobbying Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Challenges with Measuring the Impact of Lobbying, published by Animal Ask on October 31, 2022 on The Effective Altruism Forum.A foundational report on the methodology we use at Animal Ask to measure the impact of lobbying campaigns.EXECUTIVE SUMMARYLegislative lobbying has led to numerous positive outcomes in many policy areas and social movements. For example, lobbying in the United States is responsible for lower taxes on solar power, increased taxes on tobacco, and the establishment of a program to detect asteroids (Lerner 2020). In the animal advocacy movement specifically, one recent victory is the 'End the Cage Age' initiative. This lobbying campaign led to a commitment by the European Commission to phase out the use of cages for hens, sows, calves, and numerous other farmed animals (European Commission 2021).When evaluating campaign strategy, it is important to have a good way to project the success of competing legislative opportunities. Ideally, this would involve predicting the degree to which lobbying effort makes a legislative opportunity more likely to succeed, compared to the scenario where that additional effort does not take place. We refer to this as the counterfactual impact of lobbying. Understanding this would provide insight into how campaigning can be most effective.Currently, we do not know the best way to measure the counterfactual impact of lobbying. One promising option is to use the academic literature on lobbying. The published studies on lobbying could be used to derive estimates of policy success and, ideally, the counterfactual impact of lobbying.The purpose of this report is to find out whether this academic literature contains quantitative estimates that could be useful in gauging the counterfactual impact of lobbying. We answer this question by reviewing the literature on lobbying and examining studies that have published estimates of the rates of policy success.Overall policy success is made up of two components: a baseline rate of policy success, and the counterfactual impact of lobbying. Therefore, to measure the counterfactual impact of lobbying, we need to be able to do two things: 1) measure overall policy success, and 2) break down this overall policy success into the baseline rate of success and the counterfactual impact of lobbying (see Figure 1 below).We find that measuring overall policy success using the academic literature is unlikely to work. The lobbying literature contains many well-known weaknesses which cast doubt on researchers' ability to measure policy success. As well as this, our systematic review found a strange result, which suggests that the published estimates of policy success should not be taken at face value.We find that there is insufficient evidence to break down overall policy success into the baseline rate of success and the counterfactual impact of lobbying. There is only a single study that identifies the counterfactual impact of lobbying. That study was limited to a single policy issue and a single context, and so its findings need to be replicated across other issues and contexts before we can draw any sound conclusions.In summary, the academic literature does not contain sufficient information to gauge the counterfactual impact of lobbying. For these reasons, we recommend against using estimates from the published literature when modelling the effects of lobbying. Instead, we recommend that researchers choose a different option for incorporating information on the prospects of success. We conclude by highlighting two promising options that deserve further exploration: expert judgement and panels of superforecasters.Although we conducted this research to guide our own research process at Animal Ask, this report may also be useful to other researchers. In particular, we expect this ...]]>
Mon, 31 Oct 2022 22:07:21 +0000 EA - The Challenges with Measuring the Impact of Lobbying by Animal Ask Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Challenges with Measuring the Impact of Lobbying, published by Animal Ask on October 31, 2022 on The Effective Altruism Forum.A foundational report on the methodology we use at Animal Ask to measure the impact of lobbying campaigns.EXECUTIVE SUMMARYLegislative lobbying has led to numerous positive outcomes in many policy areas and social movements. For example, lobbying in the United States is responsible for lower taxes on solar power, increased taxes on tobacco, and the establishment of a program to detect asteroids (Lerner 2020). In the animal advocacy movement specifically, one recent victory is the 'End the Cage Age' initiative. This lobbying campaign led to a commitment by the European Commission to phase out the use of cages for hens, sows, calves, and numerous other farmed animals (European Commission 2021).When evaluating campaign strategy, it is important to have a good way to project the success of competing legislative opportunities. Ideally, this would involve predicting the degree to which lobbying effort makes a legislative opportunity more likely to succeed, compared to the scenario where that additional effort does not take place. We refer to this as the counterfactual impact of lobbying. Understanding this would provide insight into how campaigning can be most effective.Currently, we do not know the best way to measure the counterfactual impact of lobbying. One promising option is to use the academic literature on lobbying. The published studies on lobbying could be used to derive estimates of policy success and, ideally, the counterfactual impact of lobbying.The purpose of this report is to find out whether this academic literature contains quantitative estimates that could be useful in gauging the counterfactual impact of lobbying. We answer this question by reviewing the literature on lobbying and examining studies that have published estimates of the rates of policy success.Overall policy success is made up of two components: a baseline rate of policy success, and the counterfactual impact of lobbying. Therefore, to measure the counterfactual impact of lobbying, we need to be able to do two things: 1) measure overall policy success, and 2) break down this overall policy success into the baseline rate of success and the counterfactual impact of lobbying (see Figure 1 below).We find that measuring overall policy success using the academic literature is unlikely to work. The lobbying literature contains many well-known weaknesses which cast doubt on researchers' ability to measure policy success. As well as this, our systematic review found a strange result, which suggests that the published estimates of policy success should not be taken at face value.We find that there is insufficient evidence to break down overall policy success into the baseline rate of success and the counterfactual impact of lobbying. There is only a single study that identifies the counterfactual impact of lobbying. That study was limited to a single policy issue and a single context, and so its findings need to be replicated across other issues and contexts before we can draw any sound conclusions.In summary, the academic literature does not contain sufficient information to gauge the counterfactual impact of lobbying. For these reasons, we recommend against using estimates from the published literature when modelling the effects of lobbying. Instead, we recommend that researchers choose a different option for incorporating information on the prospects of success. We conclude by highlighting two promising options that deserve further exploration: expert judgement and panels of superforecasters.Although we conducted this research to guide our own research process at Animal Ask, this report may also be useful to other researchers. In particular, we expect this ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Challenges with Measuring the Impact of Lobbying, published by Animal Ask on October 31, 2022 on The Effective Altruism Forum.A foundational report on the methodology we use at Animal Ask to measure the impact of lobbying campaigns.EXECUTIVE SUMMARYLegislative lobbying has led to numerous positive outcomes in many policy areas and social movements. For example, lobbying in the United States is responsible for lower taxes on solar power, increased taxes on tobacco, and the establishment of a program to detect asteroids (Lerner 2020). In the animal advocacy movement specifically, one recent victory is the 'End the Cage Age' initiative. This lobbying campaign led to a commitment by the European Commission to phase out the use of cages for hens, sows, calves, and numerous other farmed animals (European Commission 2021).When evaluating campaign strategy, it is important to have a good way to project the success of competing legislative opportunities. Ideally, this would involve predicting the degree to which lobbying effort makes a legislative opportunity more likely to succeed, compared to the scenario where that additional effort does not take place. We refer to this as the counterfactual impact of lobbying. Understanding this would provide insight into how campaigning can be most effective.Currently, we do not know the best way to measure the counterfactual impact of lobbying. One promising option is to use the academic literature on lobbying. The published studies on lobbying could be used to derive estimates of policy success and, ideally, the counterfactual impact of lobbying.The purpose of this report is to find out whether this academic literature contains quantitative estimates that could be useful in gauging the counterfactual impact of lobbying. We answer this question by reviewing the literature on lobbying and examining studies that have published estimates of the rates of policy success.Overall policy success is made up of two components: a baseline rate of policy success, and the counterfactual impact of lobbying. Therefore, to measure the counterfactual impact of lobbying, we need to be able to do two things: 1) measure overall policy success, and 2) break down this overall policy success into the baseline rate of success and the counterfactual impact of lobbying (see Figure 1 below).We find that measuring overall policy success using the academic literature is unlikely to work. The lobbying literature contains many well-known weaknesses which cast doubt on researchers' ability to measure policy success. As well as this, our systematic review found a strange result, which suggests that the published estimates of policy success should not be taken at face value.We find that there is insufficient evidence to break down overall policy success into the baseline rate of success and the counterfactual impact of lobbying. There is only a single study that identifies the counterfactual impact of lobbying. That study was limited to a single policy issue and a single context, and so its findings need to be replicated across other issues and contexts before we can draw any sound conclusions.In summary, the academic literature does not contain sufficient information to gauge the counterfactual impact of lobbying. For these reasons, we recommend against using estimates from the published literature when modelling the effects of lobbying. Instead, we recommend that researchers choose a different option for incorporating information on the prospects of success. We conclude by highlighting two promising options that deserve further exploration: expert judgement and panels of superforecasters.Although we conducted this research to guide our own research process at Animal Ask, this report may also be useful to other researchers. In particular, we expect this ...]]>
Animal Ask https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:05:15 None full 3605
ASycTczbzNgSSMYHo_EA EA - Draft Amnesty Day: an event we might run on the Forum by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Draft Amnesty Day: an event we might run on the Forum, published by Lizka on October 31, 2022 on The Effective Altruism Forum.TL;DR: We’re considering making an official “Draft Amnesty Day” on the Forum when users will be encouraged to share unfinished posts, unpolished writing, butterfly ideas, thoughts they’re not sure they endorse, etc. We’re looking for feedback on this idea.In the spirit of draft amnesty, I’m sharing this post even though it’s not very polished. :)MotivationWe’re often told that posting on the Forum is intimidating. Anecdotally, lots of people have drafts they’ve written that they feel uncertain about. A draft like this tends to sit around in Google Drives gathering metaphorical dust until the author decides to spend time and finalize it (often, months later), the idea becomes irrelevant (someone posts something that makes it redundant, or the moment passes), or the draft is just discarded.I think it’s good to maintain a high standard for content on the Forum, but it’s a shame and a real loss when potentially very useful ideas never see the light of day — or get shared half a year later than they could have been.Sharing a draft before you share a final version of a post can also mean that the final version is better! Others can provide feedback, notice flaws, share relevant resources, etc. You can also fail faster, which is also a win.How this would workWe’d announce a day (or weekend). On that day, there might be a banner at the top of the Frontpage announcing that it’s Draft Amnesty Day (DAD).By default, we’d do this once and see how it goes. We wouldn’t plan for it to be recurring.We’d make a tag and encourage everyone to use it. Users who aren’t logged in wouldn’t see DAD posts by default, and others would be able to hide the posts entirely.We’d provide a template message to put on the top of DAD posts so that others can engage with it accordingly.We’d ask people to be especially nice in the comments of DAD posts (and would be ready to moderate any cases that violate this norm). On the other hand, honest feedback would likely be really useful, so I’d hope that this wouldn’t prevent people from commenting, and I’d encourage people to post “feedback guidelines.”Maybe: these posts wouldn’t be indexed on search engines, meaning they wouldn’t show up when you search via something like Google.If you end up polishing or otherwise finalizing a DAD draft and would like to share it again, we’d encourage that — you wouldn’t need to worry that you should “save” your posts for later.An example blurb to put on top of DAD postsThis is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.This would go right under the title (if the author wants to put it there).What do you think of this idea?How could it go wrong? How could it be improved? Would you post anything? Please share your feedback!You can also use the comment section of this post to share ideas you have for posts and get feedback on those ideas.I think this is related to “agile” approaches (e.g. to software development), although I don’t know much about the theory, and “lean” principles.I might not respond to everything, but I'll read all the comments.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://forum.effectivealtruism.org/posts/ASycTczbzNgSSMYHo/draft-amnesty-day-an-event-we-might-run-on-the-forum Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Draft Amnesty Day: an event we might run on the Forum, published by Lizka on October 31, 2022 on The Effective Altruism Forum.TL;DR: We’re considering making an official “Draft Amnesty Day” on the Forum when users will be encouraged to share unfinished posts, unpolished writing, butterfly ideas, thoughts they’re not sure they endorse, etc. We’re looking for feedback on this idea.In the spirit of draft amnesty, I’m sharing this post even though it’s not very polished. :)MotivationWe’re often told that posting on the Forum is intimidating. Anecdotally, lots of people have drafts they’ve written that they feel uncertain about. A draft like this tends to sit around in Google Drives gathering metaphorical dust until the author decides to spend time and finalize it (often, months later), the idea becomes irrelevant (someone posts something that makes it redundant, or the moment passes), or the draft is just discarded.I think it’s good to maintain a high standard for content on the Forum, but it’s a shame and a real loss when potentially very useful ideas never see the light of day — or get shared half a year later than they could have been.Sharing a draft before you share a final version of a post can also mean that the final version is better! Others can provide feedback, notice flaws, share relevant resources, etc. You can also fail faster, which is also a win.How this would workWe’d announce a day (or weekend). On that day, there might be a banner at the top of the Frontpage announcing that it’s Draft Amnesty Day (DAD).By default, we’d do this once and see how it goes. We wouldn’t plan for it to be recurring.We’d make a tag and encourage everyone to use it. Users who aren’t logged in wouldn’t see DAD posts by default, and others would be able to hide the posts entirely.We’d provide a template message to put on the top of DAD posts so that others can engage with it accordingly.We’d ask people to be especially nice in the comments of DAD posts (and would be ready to moderate any cases that violate this norm). On the other hand, honest feedback would likely be really useful, so I’d hope that this wouldn’t prevent people from commenting, and I’d encourage people to post “feedback guidelines.”Maybe: these posts wouldn’t be indexed on search engines, meaning they wouldn’t show up when you search via something like Google.If you end up polishing or otherwise finalizing a DAD draft and would like to share it again, we’d encourage that — you wouldn’t need to worry that you should “save” your posts for later.An example blurb to put on top of DAD postsThis is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.This would go right under the title (if the author wants to put it there).What do you think of this idea?How could it go wrong? How could it be improved? Would you post anything? Please share your feedback!You can also use the comment section of this post to share ideas you have for posts and get feedback on those ideas.I think this is related to “agile” approaches (e.g. to software development), although I don’t know much about the theory, and “lean” principles.I might not respond to everything, but I'll read all the comments.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 31 Oct 2022 17:44:51 +0000 EA - Draft Amnesty Day: an event we might run on the Forum by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Draft Amnesty Day: an event we might run on the Forum, published by Lizka on October 31, 2022 on The Effective Altruism Forum.TL;DR: We’re considering making an official “Draft Amnesty Day” on the Forum when users will be encouraged to share unfinished posts, unpolished writing, butterfly ideas, thoughts they’re not sure they endorse, etc. We’re looking for feedback on this idea.In the spirit of draft amnesty, I’m sharing this post even though it’s not very polished. :)MotivationWe’re often told that posting on the Forum is intimidating. Anecdotally, lots of people have drafts they’ve written that they feel uncertain about. A draft like this tends to sit around in Google Drives gathering metaphorical dust until the author decides to spend time and finalize it (often, months later), the idea becomes irrelevant (someone posts something that makes it redundant, or the moment passes), or the draft is just discarded.I think it’s good to maintain a high standard for content on the Forum, but it’s a shame and a real loss when potentially very useful ideas never see the light of day — or get shared half a year later than they could have been.Sharing a draft before you share a final version of a post can also mean that the final version is better! Others can provide feedback, notice flaws, share relevant resources, etc. You can also fail faster, which is also a win.How this would workWe’d announce a day (or weekend). On that day, there might be a banner at the top of the Frontpage announcing that it’s Draft Amnesty Day (DAD).By default, we’d do this once and see how it goes. We wouldn’t plan for it to be recurring.We’d make a tag and encourage everyone to use it. Users who aren’t logged in wouldn’t see DAD posts by default, and others would be able to hide the posts entirely.We’d provide a template message to put on the top of DAD posts so that others can engage with it accordingly.We’d ask people to be especially nice in the comments of DAD posts (and would be ready to moderate any cases that violate this norm). On the other hand, honest feedback would likely be really useful, so I’d hope that this wouldn’t prevent people from commenting, and I’d encourage people to post “feedback guidelines.”Maybe: these posts wouldn’t be indexed on search engines, meaning they wouldn’t show up when you search via something like Google.If you end up polishing or otherwise finalizing a DAD draft and would like to share it again, we’d encourage that — you wouldn’t need to worry that you should “save” your posts for later.An example blurb to put on top of DAD postsThis is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.This would go right under the title (if the author wants to put it there).What do you think of this idea?How could it go wrong? How could it be improved? Would you post anything? Please share your feedback!You can also use the comment section of this post to share ideas you have for posts and get feedback on those ideas.I think this is related to “agile” approaches (e.g. to software development), although I don’t know much about the theory, and “lean” principles.I might not respond to everything, but I'll read all the comments.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Draft Amnesty Day: an event we might run on the Forum, published by Lizka on October 31, 2022 on The Effective Altruism Forum.TL;DR: We’re considering making an official “Draft Amnesty Day” on the Forum when users will be encouraged to share unfinished posts, unpolished writing, butterfly ideas, thoughts they’re not sure they endorse, etc. We’re looking for feedback on this idea.In the spirit of draft amnesty, I’m sharing this post even though it’s not very polished. :)MotivationWe’re often told that posting on the Forum is intimidating. Anecdotally, lots of people have drafts they’ve written that they feel uncertain about. A draft like this tends to sit around in Google Drives gathering metaphorical dust until the author decides to spend time and finalize it (often, months later), the idea becomes irrelevant (someone posts something that makes it redundant, or the moment passes), or the draft is just discarded.I think it’s good to maintain a high standard for content on the Forum, but it’s a shame and a real loss when potentially very useful ideas never see the light of day — or get shared half a year later than they could have been.Sharing a draft before you share a final version of a post can also mean that the final version is better! Others can provide feedback, notice flaws, share relevant resources, etc. You can also fail faster, which is also a win.How this would workWe’d announce a day (or weekend). On that day, there might be a banner at the top of the Frontpage announcing that it’s Draft Amnesty Day (DAD).By default, we’d do this once and see how it goes. We wouldn’t plan for it to be recurring.We’d make a tag and encourage everyone to use it. Users who aren’t logged in wouldn’t see DAD posts by default, and others would be able to hide the posts entirely.We’d provide a template message to put on the top of DAD posts so that others can engage with it accordingly.We’d ask people to be especially nice in the comments of DAD posts (and would be ready to moderate any cases that violate this norm). On the other hand, honest feedback would likely be really useful, so I’d hope that this wouldn’t prevent people from commenting, and I’d encourage people to post “feedback guidelines.”Maybe: these posts wouldn’t be indexed on search engines, meaning they wouldn’t show up when you search via something like Google.If you end up polishing or otherwise finalizing a DAD draft and would like to share it again, we’d encourage that — you wouldn’t need to worry that you should “save” your posts for later.An example blurb to put on top of DAD postsThis is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.This would go right under the title (if the author wants to put it there).What do you think of this idea?How could it go wrong? How could it be improved? Would you post anything? Please share your feedback!You can also use the comment section of this post to share ideas you have for posts and get feedback on those ideas.I think this is related to “agile” approaches (e.g. to software development), although I don’t know much about the theory, and “lean” principles.I might not respond to everything, but I'll read all the comments.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Lizka https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:28 None full 3604
hxtwzcsz8hQfGyZQM_EA EA - An Introduction to the Moral Weight Project by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Introduction to the Moral Weight Project, published by Bob Fischer on October 31, 2022 on The Effective Altruism Forum.Key TakeawaysOur objective: provide “moral weights” for 11 farmed species.To make this tractable, we made four assumptions: utilitarianism, hedonism, valence symmetry, and unitarianism.Given these assumptions, an animal’s “moral weight” is that animal’s capacity for welfare—the total amount of welfare that the animal could realize.Capacity for welfare = welfare range (the difference between the best and worst welfare states the individual can realize at a time) × lifespan.Given welfare ranges, we can convert welfare improvements into DALY-equivalents averted, making cross-species cost-effectiveness analyses possible.An Introduction to the Moral Weight ProjectIf we want to do as much good as possible, we have to compare all the ways of doing good—including ways that involve helping members of different species. But do benefits to humans count the same as benefits to chickens? What about chickens vs. carp? Carp vs. honey bees? In 2020, Rethink Priorities published the Moral Weight Series—a collection of five reports about these and related questions. The first introduces different theories of welfare and moral status and their interrelationships. The second compares two ways of estimating differences in capacity for welfare and moral status. The third explores the rate of subjective experience, its importance, and potential variation in the rate of subjective experience across taxa. The fourth considers critical flicker-fusion frequency as a proxy for the rate of subjective experience. The fifth assesses whether there’s variation in the intensity ranges of valenced experiences from species to species.In May 2021, Rethink Priorities launched the Moral Weight Project, which extended and implemented the research program that those initial reports discussed. This post is the first in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to introduce the project and explain how EAs could use its results.DALYs-averted and “moral weight discounts”If we want to do the most good per dollar spent—that is, if we want to maximize cost-effectiveness—we need a common currency for comparing very different interventions. GiveWell, Founders Pledge, Open Philanthropy, and many other EA organizations currently have one: namely, the number of “disability-adjusted life years (DALYs) averted.” A DALY is a health measure with two parts: years of human life lost and years of human life lost to disability. The former measures the extent to which a condition shortens a human’s life; the latter measures the health impact of living with a condition in terms of years of life lost. Together, these values represent the overall burden of the condition. So, averting a DALY is averting a loss—namely, the loss of a single year of human life that’s lived at full health.Historically, some EAs used a “moral weight discount” to convert changes in animals’ welfare levels directly into DALYs-averted. That is, they understood the basic question to be:At what point should we be indifferent between (say) improving chickens’ welfare and preventing the loss of a year of healthy human life?Then, the task is to identify the correct “moral weight discount rate” to apply to the value of some quantity of chicken welfare to make it equal to the value of averting a DALY.This framing makes it tempting to think in terms of the value that (certain groups of) people assign to chicken welfare (or the welfare of the members of whatever species) relative to the ...]]>
Bob Fischer https://forum.effectivealtruism.org/posts/hxtwzcsz8hQfGyZQM/an-introduction-to-the-moral-weight-project Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Introduction to the Moral Weight Project, published by Bob Fischer on October 31, 2022 on The Effective Altruism Forum.Key TakeawaysOur objective: provide “moral weights” for 11 farmed species.To make this tractable, we made four assumptions: utilitarianism, hedonism, valence symmetry, and unitarianism.Given these assumptions, an animal’s “moral weight” is that animal’s capacity for welfare—the total amount of welfare that the animal could realize.Capacity for welfare = welfare range (the difference between the best and worst welfare states the individual can realize at a time) × lifespan.Given welfare ranges, we can convert welfare improvements into DALY-equivalents averted, making cross-species cost-effectiveness analyses possible.An Introduction to the Moral Weight ProjectIf we want to do as much good as possible, we have to compare all the ways of doing good—including ways that involve helping members of different species. But do benefits to humans count the same as benefits to chickens? What about chickens vs. carp? Carp vs. honey bees? In 2020, Rethink Priorities published the Moral Weight Series—a collection of five reports about these and related questions. The first introduces different theories of welfare and moral status and their interrelationships. The second compares two ways of estimating differences in capacity for welfare and moral status. The third explores the rate of subjective experience, its importance, and potential variation in the rate of subjective experience across taxa. The fourth considers critical flicker-fusion frequency as a proxy for the rate of subjective experience. The fifth assesses whether there’s variation in the intensity ranges of valenced experiences from species to species.In May 2021, Rethink Priorities launched the Moral Weight Project, which extended and implemented the research program that those initial reports discussed. This post is the first in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to introduce the project and explain how EAs could use its results.DALYs-averted and “moral weight discounts”If we want to do the most good per dollar spent—that is, if we want to maximize cost-effectiveness—we need a common currency for comparing very different interventions. GiveWell, Founders Pledge, Open Philanthropy, and many other EA organizations currently have one: namely, the number of “disability-adjusted life years (DALYs) averted.” A DALY is a health measure with two parts: years of human life lost and years of human life lost to disability. The former measures the extent to which a condition shortens a human’s life; the latter measures the health impact of living with a condition in terms of years of life lost. Together, these values represent the overall burden of the condition. So, averting a DALY is averting a loss—namely, the loss of a single year of human life that’s lived at full health.Historically, some EAs used a “moral weight discount” to convert changes in animals’ welfare levels directly into DALYs-averted. That is, they understood the basic question to be:At what point should we be indifferent between (say) improving chickens’ welfare and preventing the loss of a year of healthy human life?Then, the task is to identify the correct “moral weight discount rate” to apply to the value of some quantity of chicken welfare to make it equal to the value of averting a DALY.This framing makes it tempting to think in terms of the value that (certain groups of) people assign to chicken welfare (or the welfare of the members of whatever species) relative to the ...]]>
Mon, 31 Oct 2022 14:11:02 +0000 EA - An Introduction to the Moral Weight Project by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Introduction to the Moral Weight Project, published by Bob Fischer on October 31, 2022 on The Effective Altruism Forum.Key TakeawaysOur objective: provide “moral weights” for 11 farmed species.To make this tractable, we made four assumptions: utilitarianism, hedonism, valence symmetry, and unitarianism.Given these assumptions, an animal’s “moral weight” is that animal’s capacity for welfare—the total amount of welfare that the animal could realize.Capacity for welfare = welfare range (the difference between the best and worst welfare states the individual can realize at a time) × lifespan.Given welfare ranges, we can convert welfare improvements into DALY-equivalents averted, making cross-species cost-effectiveness analyses possible.An Introduction to the Moral Weight ProjectIf we want to do as much good as possible, we have to compare all the ways of doing good—including ways that involve helping members of different species. But do benefits to humans count the same as benefits to chickens? What about chickens vs. carp? Carp vs. honey bees? In 2020, Rethink Priorities published the Moral Weight Series—a collection of five reports about these and related questions. The first introduces different theories of welfare and moral status and their interrelationships. The second compares two ways of estimating differences in capacity for welfare and moral status. The third explores the rate of subjective experience, its importance, and potential variation in the rate of subjective experience across taxa. The fourth considers critical flicker-fusion frequency as a proxy for the rate of subjective experience. The fifth assesses whether there’s variation in the intensity ranges of valenced experiences from species to species.In May 2021, Rethink Priorities launched the Moral Weight Project, which extended and implemented the research program that those initial reports discussed. This post is the first in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to introduce the project and explain how EAs could use its results.DALYs-averted and “moral weight discounts”If we want to do the most good per dollar spent—that is, if we want to maximize cost-effectiveness—we need a common currency for comparing very different interventions. GiveWell, Founders Pledge, Open Philanthropy, and many other EA organizations currently have one: namely, the number of “disability-adjusted life years (DALYs) averted.” A DALY is a health measure with two parts: years of human life lost and years of human life lost to disability. The former measures the extent to which a condition shortens a human’s life; the latter measures the health impact of living with a condition in terms of years of life lost. Together, these values represent the overall burden of the condition. So, averting a DALY is averting a loss—namely, the loss of a single year of human life that’s lived at full health.Historically, some EAs used a “moral weight discount” to convert changes in animals’ welfare levels directly into DALYs-averted. That is, they understood the basic question to be:At what point should we be indifferent between (say) improving chickens’ welfare and preventing the loss of a year of healthy human life?Then, the task is to identify the correct “moral weight discount rate” to apply to the value of some quantity of chicken welfare to make it equal to the value of averting a DALY.This framing makes it tempting to think in terms of the value that (certain groups of) people assign to chicken welfare (or the welfare of the members of whatever species) relative to the ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Introduction to the Moral Weight Project, published by Bob Fischer on October 31, 2022 on The Effective Altruism Forum.Key TakeawaysOur objective: provide “moral weights” for 11 farmed species.To make this tractable, we made four assumptions: utilitarianism, hedonism, valence symmetry, and unitarianism.Given these assumptions, an animal’s “moral weight” is that animal’s capacity for welfare—the total amount of welfare that the animal could realize.Capacity for welfare = welfare range (the difference between the best and worst welfare states the individual can realize at a time) × lifespan.Given welfare ranges, we can convert welfare improvements into DALY-equivalents averted, making cross-species cost-effectiveness analyses possible.An Introduction to the Moral Weight ProjectIf we want to do as much good as possible, we have to compare all the ways of doing good—including ways that involve helping members of different species. But do benefits to humans count the same as benefits to chickens? What about chickens vs. carp? Carp vs. honey bees? In 2020, Rethink Priorities published the Moral Weight Series—a collection of five reports about these and related questions. The first introduces different theories of welfare and moral status and their interrelationships. The second compares two ways of estimating differences in capacity for welfare and moral status. The third explores the rate of subjective experience, its importance, and potential variation in the rate of subjective experience across taxa. The fourth considers critical flicker-fusion frequency as a proxy for the rate of subjective experience. The fifth assesses whether there’s variation in the intensity ranges of valenced experiences from species to species.In May 2021, Rethink Priorities launched the Moral Weight Project, which extended and implemented the research program that those initial reports discussed. This post is the first in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to introduce the project and explain how EAs could use its results.DALYs-averted and “moral weight discounts”If we want to do the most good per dollar spent—that is, if we want to maximize cost-effectiveness—we need a common currency for comparing very different interventions. GiveWell, Founders Pledge, Open Philanthropy, and many other EA organizations currently have one: namely, the number of “disability-adjusted life years (DALYs) averted.” A DALY is a health measure with two parts: years of human life lost and years of human life lost to disability. The former measures the extent to which a condition shortens a human’s life; the latter measures the health impact of living with a condition in terms of years of life lost. Together, these values represent the overall burden of the condition. So, averting a DALY is averting a loss—namely, the loss of a single year of human life that’s lived at full health.Historically, some EAs used a “moral weight discount” to convert changes in animals’ welfare levels directly into DALYs-averted. That is, they understood the basic question to be:At what point should we be indifferent between (say) improving chickens’ welfare and preventing the loss of a year of healthy human life?Then, the task is to identify the correct “moral weight discount rate” to apply to the value of some quantity of chicken welfare to make it equal to the value of averting a DALY.This framing makes it tempting to think in terms of the value that (certain groups of) people assign to chicken welfare (or the welfare of the members of whatever species) relative to the ...]]>
Bob Fischer https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 16:36 None full 3608
SqCDoL9cHa7bnEWyb_EA EA - Announcing EA Survey 2022 by David Moss Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing EA Survey 2022, published by David Moss on October 31, 2022 on The Effective Altruism Forum.The 2022 EA Survey is now live at the following link:We appreciate it when EAs share the survey with others. If you would like to do so, please use this link () so that we can track where our sample is recruited from.We currently plan to leave the survey open until December the 1st, though it’s possible we might extend the window, as we did last year.What’s new this year?The EA Survey is substantially shorter. Our testers completed the survey in 10 minutes or less.We worked with CEA to make it possible for some of your answers to be pre-filled with your previous responses, to save you even more time. At present, this is only possible if you took the 2020 EA Survey and shared your data with CEA. This is because your responses are identified using your EffectiveAltruism.org log-in. In future years, we may be able to email you a custom link which would allow you to pre-fill, or simply not be shown, certain questions which you have answered before, whether or not you share your data with CEA, and there is an option to opt-in to this in this year’s survey.Why take the EA Survey?The EA Survey provides valuable information about the EA community and how it is changing over time. Every year the survey is used to inform the decisions of a number of different EA orgs. And, despite the survey being much shorter this year, this year we have included requests from a wider variety of decision-makers than ever before.PrizeThis year the Centre for Effective Altruism has, again, generously donated a prize of $1000 USD that will be awarded to a randomly selected respondent to the EA Survey, for them to donate to any of the organizations listed on EA Funds. Please note that to be eligible, you need to provide a valid e-mail address so that we can contact you.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
David Moss https://forum.effectivealtruism.org/posts/SqCDoL9cHa7bnEWyb/announcing-ea-survey-2022 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing EA Survey 2022, published by David Moss on October 31, 2022 on The Effective Altruism Forum.The 2022 EA Survey is now live at the following link:We appreciate it when EAs share the survey with others. If you would like to do so, please use this link () so that we can track where our sample is recruited from.We currently plan to leave the survey open until December the 1st, though it’s possible we might extend the window, as we did last year.What’s new this year?The EA Survey is substantially shorter. Our testers completed the survey in 10 minutes or less.We worked with CEA to make it possible for some of your answers to be pre-filled with your previous responses, to save you even more time. At present, this is only possible if you took the 2020 EA Survey and shared your data with CEA. This is because your responses are identified using your EffectiveAltruism.org log-in. In future years, we may be able to email you a custom link which would allow you to pre-fill, or simply not be shown, certain questions which you have answered before, whether or not you share your data with CEA, and there is an option to opt-in to this in this year’s survey.Why take the EA Survey?The EA Survey provides valuable information about the EA community and how it is changing over time. Every year the survey is used to inform the decisions of a number of different EA orgs. And, despite the survey being much shorter this year, this year we have included requests from a wider variety of decision-makers than ever before.PrizeThis year the Centre for Effective Altruism has, again, generously donated a prize of $1000 USD that will be awarded to a randomly selected respondent to the EA Survey, for them to donate to any of the organizations listed on EA Funds. Please note that to be eligible, you need to provide a valid e-mail address so that we can contact you.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 31 Oct 2022 11:37:38 +0000 EA - Announcing EA Survey 2022 by David Moss Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing EA Survey 2022, published by David Moss on October 31, 2022 on The Effective Altruism Forum.The 2022 EA Survey is now live at the following link:We appreciate it when EAs share the survey with others. If you would like to do so, please use this link () so that we can track where our sample is recruited from.We currently plan to leave the survey open until December the 1st, though it’s possible we might extend the window, as we did last year.What’s new this year?The EA Survey is substantially shorter. Our testers completed the survey in 10 minutes or less.We worked with CEA to make it possible for some of your answers to be pre-filled with your previous responses, to save you even more time. At present, this is only possible if you took the 2020 EA Survey and shared your data with CEA. This is because your responses are identified using your EffectiveAltruism.org log-in. In future years, we may be able to email you a custom link which would allow you to pre-fill, or simply not be shown, certain questions which you have answered before, whether or not you share your data with CEA, and there is an option to opt-in to this in this year’s survey.Why take the EA Survey?The EA Survey provides valuable information about the EA community and how it is changing over time. Every year the survey is used to inform the decisions of a number of different EA orgs. And, despite the survey being much shorter this year, this year we have included requests from a wider variety of decision-makers than ever before.PrizeThis year the Centre for Effective Altruism has, again, generously donated a prize of $1000 USD that will be awarded to a randomly selected respondent to the EA Survey, for them to donate to any of the organizations listed on EA Funds. Please note that to be eligible, you need to provide a valid e-mail address so that we can contact you.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing EA Survey 2022, published by David Moss on October 31, 2022 on The Effective Altruism Forum.The 2022 EA Survey is now live at the following link:We appreciate it when EAs share the survey with others. If you would like to do so, please use this link () so that we can track where our sample is recruited from.We currently plan to leave the survey open until December the 1st, though it’s possible we might extend the window, as we did last year.What’s new this year?The EA Survey is substantially shorter. Our testers completed the survey in 10 minutes or less.We worked with CEA to make it possible for some of your answers to be pre-filled with your previous responses, to save you even more time. At present, this is only possible if you took the 2020 EA Survey and shared your data with CEA. This is because your responses are identified using your EffectiveAltruism.org log-in. In future years, we may be able to email you a custom link which would allow you to pre-fill, or simply not be shown, certain questions which you have answered before, whether or not you share your data with CEA, and there is an option to opt-in to this in this year’s survey.Why take the EA Survey?The EA Survey provides valuable information about the EA community and how it is changing over time. Every year the survey is used to inform the decisions of a number of different EA orgs. And, despite the survey being much shorter this year, this year we have included requests from a wider variety of decision-makers than ever before.PrizeThis year the Centre for Effective Altruism has, again, generously donated a prize of $1000 USD that will be awarded to a randomly selected respondent to the EA Survey, for them to donate to any of the organizations listed on EA Funds. Please note that to be eligible, you need to provide a valid e-mail address so that we can contact you.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
David Moss https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:04 None full 3607
3tR7gpqYWzByPDwqL_EA EA - Quantifying the impact of grantmaking career paths by Joel Becker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quantifying the impact of grantmaking career paths, published by Joel Becker on October 30, 2022 on The Effective Altruism Forum.How impactful is it to try to become a grantmaker focused on pressing world problems? Encouraged by Nuno’s tweet, I spent a couple of hours trying to find out.TL;DRI estimate the expected x-risk basis points reduced for the prospective marginal grantmaker to be 0.12 (with enormous error bars and caveats).BackgroundEpistemic status: I have zero grantmaking experience and haven’t even double-checked my numbers; please do not use this post for anything that relies on it not containing serious errors.My estimand is something like: “expected x-risk basis points reduced for the prospective marginal grantmaker doing their best to benefit the most pressing problems at an Open Philanthropy-equivalent organization.”BOTECChance of getting a jobI make guesses ofthe number of employers that might be possible to work at,the Open Philanthropy-equivalentness of employers, andthe probability of acceptance for any particular job application.Multiplying these factors together gives me a probability of succeeding in getting an Open Philanthropy-equivalent grantmaking job offer at (at least) one of the Open Philanthropy-equivalent organizations of 7% (on average). Seems not-crazy.Impact from grantmakingI used recent data from these charts to guess the total longtermist grants given per year by Open Philanthropy-equivalent organizations going forwards.The fraction of grants a prospective grantmaker is responsible for is given by the reciprocal of my guess of the number of grantmaker-equivalents at Open Philanthropy-equivalent organizations. (I apologize for phrasing...)Only a fraction of the grants that a grantmaker is responsible for can be counterfactually attributed to them — some grants would make for such obvious decisions that the counterfactual grantmaker would have acted identically. I convert the vibes from this post into a fraction of grants worth deliberating over (on average, 13%).I set the ratio of skill vs. a replacement grantmaker to be mean 1.2 with mass either side of 1. This reflects the fact grantmakers may be better or worse than their counterfactual replacement, but also that on average their being hired implies that they are expected to perform better.Taken together, I get a mean estimate of $5.7m for the Open Philanthropy-equivalent resources counterfactually moved by grantmaking activities of the prospective marginal grantmaker, conditional on job offer.Impact from other pathsLinch claims that some fraction of grantmakers' impact comes from things that look like ‘improving all grantmaking processes at their organization,’ ‘improving the work/selection of their grantees,’ and other paths that I did not attempt to categorize. I add the categorized paths in to my estimate of impact.My mean estimate for the Open Philanthropy-equivalent impact from non-grantmaking activities sums to around $30m (equivalent). ~70% of this comes from improving all grantmaking, ~30% from improving grantees.UnitsIn order to calculate impact for the whole career path, I multiply annual impact by the number of years employed as a grantmaker.Finally, to convert impact into terms of x-risk basis points, I divide by USD per x-risk basis point. Estimates for this quantity come from this comment.Putting everything togetherAll considered, I get an estimate of the x-risk basis points reduced by the prospective marginal grantmaker of 0.12 in expectation. There is a ~27% chance of having a negative counterfactual impact (vs. replacement grantmaker) and a ~5% chance of having an impact greater than expected.For full details, see my code in the appendix.LimitationsThere might be over- or under-counting issues. For instance, I guess that the averag...]]>
Joel Becker https://forum.effectivealtruism.org/posts/3tR7gpqYWzByPDwqL/quantifying-the-impact-of-grantmaking-career-paths Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quantifying the impact of grantmaking career paths, published by Joel Becker on October 30, 2022 on The Effective Altruism Forum.How impactful is it to try to become a grantmaker focused on pressing world problems? Encouraged by Nuno’s tweet, I spent a couple of hours trying to find out.TL;DRI estimate the expected x-risk basis points reduced for the prospective marginal grantmaker to be 0.12 (with enormous error bars and caveats).BackgroundEpistemic status: I have zero grantmaking experience and haven’t even double-checked my numbers; please do not use this post for anything that relies on it not containing serious errors.My estimand is something like: “expected x-risk basis points reduced for the prospective marginal grantmaker doing their best to benefit the most pressing problems at an Open Philanthropy-equivalent organization.”BOTECChance of getting a jobI make guesses ofthe number of employers that might be possible to work at,the Open Philanthropy-equivalentness of employers, andthe probability of acceptance for any particular job application.Multiplying these factors together gives me a probability of succeeding in getting an Open Philanthropy-equivalent grantmaking job offer at (at least) one of the Open Philanthropy-equivalent organizations of 7% (on average). Seems not-crazy.Impact from grantmakingI used recent data from these charts to guess the total longtermist grants given per year by Open Philanthropy-equivalent organizations going forwards.The fraction of grants a prospective grantmaker is responsible for is given by the reciprocal of my guess of the number of grantmaker-equivalents at Open Philanthropy-equivalent organizations. (I apologize for phrasing...)Only a fraction of the grants that a grantmaker is responsible for can be counterfactually attributed to them — some grants would make for such obvious decisions that the counterfactual grantmaker would have acted identically. I convert the vibes from this post into a fraction of grants worth deliberating over (on average, 13%).I set the ratio of skill vs. a replacement grantmaker to be mean 1.2 with mass either side of 1. This reflects the fact grantmakers may be better or worse than their counterfactual replacement, but also that on average their being hired implies that they are expected to perform better.Taken together, I get a mean estimate of $5.7m for the Open Philanthropy-equivalent resources counterfactually moved by grantmaking activities of the prospective marginal grantmaker, conditional on job offer.Impact from other pathsLinch claims that some fraction of grantmakers' impact comes from things that look like ‘improving all grantmaking processes at their organization,’ ‘improving the work/selection of their grantees,’ and other paths that I did not attempt to categorize. I add the categorized paths in to my estimate of impact.My mean estimate for the Open Philanthropy-equivalent impact from non-grantmaking activities sums to around $30m (equivalent). ~70% of this comes from improving all grantmaking, ~30% from improving grantees.UnitsIn order to calculate impact for the whole career path, I multiply annual impact by the number of years employed as a grantmaker.Finally, to convert impact into terms of x-risk basis points, I divide by USD per x-risk basis point. Estimates for this quantity come from this comment.Putting everything togetherAll considered, I get an estimate of the x-risk basis points reduced by the prospective marginal grantmaker of 0.12 in expectation. There is a ~27% chance of having a negative counterfactual impact (vs. replacement grantmaker) and a ~5% chance of having an impact greater than expected.For full details, see my code in the appendix.LimitationsThere might be over- or under-counting issues. For instance, I guess that the averag...]]>
Mon, 31 Oct 2022 08:01:32 +0000 EA - Quantifying the impact of grantmaking career paths by Joel Becker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quantifying the impact of grantmaking career paths, published by Joel Becker on October 30, 2022 on The Effective Altruism Forum.How impactful is it to try to become a grantmaker focused on pressing world problems? Encouraged by Nuno’s tweet, I spent a couple of hours trying to find out.TL;DRI estimate the expected x-risk basis points reduced for the prospective marginal grantmaker to be 0.12 (with enormous error bars and caveats).BackgroundEpistemic status: I have zero grantmaking experience and haven’t even double-checked my numbers; please do not use this post for anything that relies on it not containing serious errors.My estimand is something like: “expected x-risk basis points reduced for the prospective marginal grantmaker doing their best to benefit the most pressing problems at an Open Philanthropy-equivalent organization.”BOTECChance of getting a jobI make guesses ofthe number of employers that might be possible to work at,the Open Philanthropy-equivalentness of employers, andthe probability of acceptance for any particular job application.Multiplying these factors together gives me a probability of succeeding in getting an Open Philanthropy-equivalent grantmaking job offer at (at least) one of the Open Philanthropy-equivalent organizations of 7% (on average). Seems not-crazy.Impact from grantmakingI used recent data from these charts to guess the total longtermist grants given per year by Open Philanthropy-equivalent organizations going forwards.The fraction of grants a prospective grantmaker is responsible for is given by the reciprocal of my guess of the number of grantmaker-equivalents at Open Philanthropy-equivalent organizations. (I apologize for phrasing...)Only a fraction of the grants that a grantmaker is responsible for can be counterfactually attributed to them — some grants would make for such obvious decisions that the counterfactual grantmaker would have acted identically. I convert the vibes from this post into a fraction of grants worth deliberating over (on average, 13%).I set the ratio of skill vs. a replacement grantmaker to be mean 1.2 with mass either side of 1. This reflects the fact grantmakers may be better or worse than their counterfactual replacement, but also that on average their being hired implies that they are expected to perform better.Taken together, I get a mean estimate of $5.7m for the Open Philanthropy-equivalent resources counterfactually moved by grantmaking activities of the prospective marginal grantmaker, conditional on job offer.Impact from other pathsLinch claims that some fraction of grantmakers' impact comes from things that look like ‘improving all grantmaking processes at their organization,’ ‘improving the work/selection of their grantees,’ and other paths that I did not attempt to categorize. I add the categorized paths in to my estimate of impact.My mean estimate for the Open Philanthropy-equivalent impact from non-grantmaking activities sums to around $30m (equivalent). ~70% of this comes from improving all grantmaking, ~30% from improving grantees.UnitsIn order to calculate impact for the whole career path, I multiply annual impact by the number of years employed as a grantmaker.Finally, to convert impact into terms of x-risk basis points, I divide by USD per x-risk basis point. Estimates for this quantity come from this comment.Putting everything togetherAll considered, I get an estimate of the x-risk basis points reduced by the prospective marginal grantmaker of 0.12 in expectation. There is a ~27% chance of having a negative counterfactual impact (vs. replacement grantmaker) and a ~5% chance of having an impact greater than expected.For full details, see my code in the appendix.LimitationsThere might be over- or under-counting issues. For instance, I guess that the averag...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quantifying the impact of grantmaking career paths, published by Joel Becker on October 30, 2022 on The Effective Altruism Forum.How impactful is it to try to become a grantmaker focused on pressing world problems? Encouraged by Nuno’s tweet, I spent a couple of hours trying to find out.TL;DRI estimate the expected x-risk basis points reduced for the prospective marginal grantmaker to be 0.12 (with enormous error bars and caveats).BackgroundEpistemic status: I have zero grantmaking experience and haven’t even double-checked my numbers; please do not use this post for anything that relies on it not containing serious errors.My estimand is something like: “expected x-risk basis points reduced for the prospective marginal grantmaker doing their best to benefit the most pressing problems at an Open Philanthropy-equivalent organization.”BOTECChance of getting a jobI make guesses ofthe number of employers that might be possible to work at,the Open Philanthropy-equivalentness of employers, andthe probability of acceptance for any particular job application.Multiplying these factors together gives me a probability of succeeding in getting an Open Philanthropy-equivalent grantmaking job offer at (at least) one of the Open Philanthropy-equivalent organizations of 7% (on average). Seems not-crazy.Impact from grantmakingI used recent data from these charts to guess the total longtermist grants given per year by Open Philanthropy-equivalent organizations going forwards.The fraction of grants a prospective grantmaker is responsible for is given by the reciprocal of my guess of the number of grantmaker-equivalents at Open Philanthropy-equivalent organizations. (I apologize for phrasing...)Only a fraction of the grants that a grantmaker is responsible for can be counterfactually attributed to them — some grants would make for such obvious decisions that the counterfactual grantmaker would have acted identically. I convert the vibes from this post into a fraction of grants worth deliberating over (on average, 13%).I set the ratio of skill vs. a replacement grantmaker to be mean 1.2 with mass either side of 1. This reflects the fact grantmakers may be better or worse than their counterfactual replacement, but also that on average their being hired implies that they are expected to perform better.Taken together, I get a mean estimate of $5.7m for the Open Philanthropy-equivalent resources counterfactually moved by grantmaking activities of the prospective marginal grantmaker, conditional on job offer.Impact from other pathsLinch claims that some fraction of grantmakers' impact comes from things that look like ‘improving all grantmaking processes at their organization,’ ‘improving the work/selection of their grantees,’ and other paths that I did not attempt to categorize. I add the categorized paths in to my estimate of impact.My mean estimate for the Open Philanthropy-equivalent impact from non-grantmaking activities sums to around $30m (equivalent). ~70% of this comes from improving all grantmaking, ~30% from improving grantees.UnitsIn order to calculate impact for the whole career path, I multiply annual impact by the number of years employed as a grantmaker.Finally, to convert impact into terms of x-risk basis points, I divide by USD per x-risk basis point. Estimates for this quantity come from this comment.Putting everything togetherAll considered, I get an estimate of the x-risk basis points reduced by the prospective marginal grantmaker of 0.12 in expectation. There is a ~27% chance of having a negative counterfactual impact (vs. replacement grantmaker) and a ~5% chance of having an impact greater than expected.For full details, see my code in the appendix.LimitationsThere might be over- or under-counting issues. For instance, I guess that the averag...]]>
Joel Becker https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:19 None full 3596
fi3Abht55xHGQ4Pha_EA EA - Longtermist terminology has biasing assumptions by Arepo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermist terminology has biasing assumptions, published by Arepo on October 30, 2022 on The Effective Altruism Forum.Sequence summaryThis sequence investigates the expected loss of value from non-extinction global catastrophes. This post is a criticism of the biases and ambiguities inherent in longtermist terminology (including ‘global catastrophes’). The next post, A proposed hierarchy of longtermist concepts, lays out the terms which I intend to use for the rest of this sequence, and which encourage less heuristic, more expected-value thinking. Finally, for now, Modelling civilisation after a catastrophe lays out the structure of a proposed model which will inform the direction of my research for the next few months. If feedback on the structure is good, later parts will populate the model with some best-guess values, and present it in an editable form.IntroductionLongtermist terminology has evolved haphazardly, so that much of it is misleading or noncomplementary. Michael Aird wrote a helpful post attempting to resolve inconsistencies in our usage, but that post’s necessity and its use of partially overlapping Venn diagrams - implying no formal relationships between the terms - itself highlights these problems. Moreover, during the evolution of longtermism, assumptions that originally started out as heuristics seem to have become locked in to the discussion via the terminology, biasing us towards those heuristics and away from expected value analyses.In this post I discuss these concerns, but since I expect it to be relatively controversial and it isn’t really a prerequisite for the rest of the sequence so much as an explanation of why I’m not using standard terms, I would emphasise that this is strictly optional reading for the rest of the sequence, so think of it as a 'part 0' of the sequence. You should feel free to skip ahead if you disagree strongly or just aren’t particularly interested in a terminology discussion.Concepts under the microscopeExistential catastropheRecreating Ord and Aird’s diagrams of the anatomy of an existential catastrophe here, we can see an ‘existential catastrophe’ has various possible modes:Figure from The PrecipiceVenn diagram figures all from Aird’s postIt’s the ‘failed continuation’ branch which I think needlessly muddies the waters.An ‘existential catastrophe’ doesn’t necessarily relate to existence.In theory an existential catastrophe can describe a scenario in which civilisation lasts until the end of the universe, but has much less net welfare than we imagine it could have had.This seems odd to consider an ‘existential’ risk - there are many ways in which we can imagine positive or negative changes to expected future quality of life (see for example Beckstead’s idea of trajectory change). Classing low-value-but-interstellar outcomes as existential catastrophes seems unhelpful both since it introduces definitional ambiguity over how much net welfare must be lost for them to qualify, and since questions of expected future quality of life are very distinct from questions of future quantity of life, and so seem like they should be asked separately.. nor involve a catastrophe that anyone alive recognisesThe concept also encompasses a civilisation that lives happily on Earth until the sun dies, perhaps even finding a way to survive that, but never spreading out across the universe. This means that, for example, universal adoption of a non-totalising population ethic would be an existential catastrophe. I’m strongly in favour of totalising population ethics, but this seems needlessly biasing.‘Unrecoverable’ or ‘permanent’ states are a superfluous conceptIn the diagram above, Ord categorises ‘unrecoverable dystopias’ as a type of existential risk. He actually seems to consider them necessarily impermanent, but (in...]]>
Arepo https://forum.effectivealtruism.org/posts/fi3Abht55xHGQ4Pha/longtermist-terminology-has-biasing-assumptions Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermist terminology has biasing assumptions, published by Arepo on October 30, 2022 on The Effective Altruism Forum.Sequence summaryThis sequence investigates the expected loss of value from non-extinction global catastrophes. This post is a criticism of the biases and ambiguities inherent in longtermist terminology (including ‘global catastrophes’). The next post, A proposed hierarchy of longtermist concepts, lays out the terms which I intend to use for the rest of this sequence, and which encourage less heuristic, more expected-value thinking. Finally, for now, Modelling civilisation after a catastrophe lays out the structure of a proposed model which will inform the direction of my research for the next few months. If feedback on the structure is good, later parts will populate the model with some best-guess values, and present it in an editable form.IntroductionLongtermist terminology has evolved haphazardly, so that much of it is misleading or noncomplementary. Michael Aird wrote a helpful post attempting to resolve inconsistencies in our usage, but that post’s necessity and its use of partially overlapping Venn diagrams - implying no formal relationships between the terms - itself highlights these problems. Moreover, during the evolution of longtermism, assumptions that originally started out as heuristics seem to have become locked in to the discussion via the terminology, biasing us towards those heuristics and away from expected value analyses.In this post I discuss these concerns, but since I expect it to be relatively controversial and it isn’t really a prerequisite for the rest of the sequence so much as an explanation of why I’m not using standard terms, I would emphasise that this is strictly optional reading for the rest of the sequence, so think of it as a 'part 0' of the sequence. You should feel free to skip ahead if you disagree strongly or just aren’t particularly interested in a terminology discussion.Concepts under the microscopeExistential catastropheRecreating Ord and Aird’s diagrams of the anatomy of an existential catastrophe here, we can see an ‘existential catastrophe’ has various possible modes:Figure from The PrecipiceVenn diagram figures all from Aird’s postIt’s the ‘failed continuation’ branch which I think needlessly muddies the waters.An ‘existential catastrophe’ doesn’t necessarily relate to existence.In theory an existential catastrophe can describe a scenario in which civilisation lasts until the end of the universe, but has much less net welfare than we imagine it could have had.This seems odd to consider an ‘existential’ risk - there are many ways in which we can imagine positive or negative changes to expected future quality of life (see for example Beckstead’s idea of trajectory change). Classing low-value-but-interstellar outcomes as existential catastrophes seems unhelpful both since it introduces definitional ambiguity over how much net welfare must be lost for them to qualify, and since questions of expected future quality of life are very distinct from questions of future quantity of life, and so seem like they should be asked separately.. nor involve a catastrophe that anyone alive recognisesThe concept also encompasses a civilisation that lives happily on Earth until the sun dies, perhaps even finding a way to survive that, but never spreading out across the universe. This means that, for example, universal adoption of a non-totalising population ethic would be an existential catastrophe. I’m strongly in favour of totalising population ethics, but this seems needlessly biasing.‘Unrecoverable’ or ‘permanent’ states are a superfluous conceptIn the diagram above, Ord categorises ‘unrecoverable dystopias’ as a type of existential risk. He actually seems to consider them necessarily impermanent, but (in...]]>
Mon, 31 Oct 2022 06:27:32 +0000 EA - Longtermist terminology has biasing assumptions by Arepo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermist terminology has biasing assumptions, published by Arepo on October 30, 2022 on The Effective Altruism Forum.Sequence summaryThis sequence investigates the expected loss of value from non-extinction global catastrophes. This post is a criticism of the biases and ambiguities inherent in longtermist terminology (including ‘global catastrophes’). The next post, A proposed hierarchy of longtermist concepts, lays out the terms which I intend to use for the rest of this sequence, and which encourage less heuristic, more expected-value thinking. Finally, for now, Modelling civilisation after a catastrophe lays out the structure of a proposed model which will inform the direction of my research for the next few months. If feedback on the structure is good, later parts will populate the model with some best-guess values, and present it in an editable form.IntroductionLongtermist terminology has evolved haphazardly, so that much of it is misleading or noncomplementary. Michael Aird wrote a helpful post attempting to resolve inconsistencies in our usage, but that post’s necessity and its use of partially overlapping Venn diagrams - implying no formal relationships between the terms - itself highlights these problems. Moreover, during the evolution of longtermism, assumptions that originally started out as heuristics seem to have become locked in to the discussion via the terminology, biasing us towards those heuristics and away from expected value analyses.In this post I discuss these concerns, but since I expect it to be relatively controversial and it isn’t really a prerequisite for the rest of the sequence so much as an explanation of why I’m not using standard terms, I would emphasise that this is strictly optional reading for the rest of the sequence, so think of it as a 'part 0' of the sequence. You should feel free to skip ahead if you disagree strongly or just aren’t particularly interested in a terminology discussion.Concepts under the microscopeExistential catastropheRecreating Ord and Aird’s diagrams of the anatomy of an existential catastrophe here, we can see an ‘existential catastrophe’ has various possible modes:Figure from The PrecipiceVenn diagram figures all from Aird’s postIt’s the ‘failed continuation’ branch which I think needlessly muddies the waters.An ‘existential catastrophe’ doesn’t necessarily relate to existence.In theory an existential catastrophe can describe a scenario in which civilisation lasts until the end of the universe, but has much less net welfare than we imagine it could have had.This seems odd to consider an ‘existential’ risk - there are many ways in which we can imagine positive or negative changes to expected future quality of life (see for example Beckstead’s idea of trajectory change). Classing low-value-but-interstellar outcomes as existential catastrophes seems unhelpful both since it introduces definitional ambiguity over how much net welfare must be lost for them to qualify, and since questions of expected future quality of life are very distinct from questions of future quantity of life, and so seem like they should be asked separately.. nor involve a catastrophe that anyone alive recognisesThe concept also encompasses a civilisation that lives happily on Earth until the sun dies, perhaps even finding a way to survive that, but never spreading out across the universe. This means that, for example, universal adoption of a non-totalising population ethic would be an existential catastrophe. I’m strongly in favour of totalising population ethics, but this seems needlessly biasing.‘Unrecoverable’ or ‘permanent’ states are a superfluous conceptIn the diagram above, Ord categorises ‘unrecoverable dystopias’ as a type of existential risk. He actually seems to consider them necessarily impermanent, but (in...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermist terminology has biasing assumptions, published by Arepo on October 30, 2022 on The Effective Altruism Forum.Sequence summaryThis sequence investigates the expected loss of value from non-extinction global catastrophes. This post is a criticism of the biases and ambiguities inherent in longtermist terminology (including ‘global catastrophes’). The next post, A proposed hierarchy of longtermist concepts, lays out the terms which I intend to use for the rest of this sequence, and which encourage less heuristic, more expected-value thinking. Finally, for now, Modelling civilisation after a catastrophe lays out the structure of a proposed model which will inform the direction of my research for the next few months. If feedback on the structure is good, later parts will populate the model with some best-guess values, and present it in an editable form.IntroductionLongtermist terminology has evolved haphazardly, so that much of it is misleading or noncomplementary. Michael Aird wrote a helpful post attempting to resolve inconsistencies in our usage, but that post’s necessity and its use of partially overlapping Venn diagrams - implying no formal relationships between the terms - itself highlights these problems. Moreover, during the evolution of longtermism, assumptions that originally started out as heuristics seem to have become locked in to the discussion via the terminology, biasing us towards those heuristics and away from expected value analyses.In this post I discuss these concerns, but since I expect it to be relatively controversial and it isn’t really a prerequisite for the rest of the sequence so much as an explanation of why I’m not using standard terms, I would emphasise that this is strictly optional reading for the rest of the sequence, so think of it as a 'part 0' of the sequence. You should feel free to skip ahead if you disagree strongly or just aren’t particularly interested in a terminology discussion.Concepts under the microscopeExistential catastropheRecreating Ord and Aird’s diagrams of the anatomy of an existential catastrophe here, we can see an ‘existential catastrophe’ has various possible modes:Figure from The PrecipiceVenn diagram figures all from Aird’s postIt’s the ‘failed continuation’ branch which I think needlessly muddies the waters.An ‘existential catastrophe’ doesn’t necessarily relate to existence.In theory an existential catastrophe can describe a scenario in which civilisation lasts until the end of the universe, but has much less net welfare than we imagine it could have had.This seems odd to consider an ‘existential’ risk - there are many ways in which we can imagine positive or negative changes to expected future quality of life (see for example Beckstead’s idea of trajectory change). Classing low-value-but-interstellar outcomes as existential catastrophes seems unhelpful both since it introduces definitional ambiguity over how much net welfare must be lost for them to qualify, and since questions of expected future quality of life are very distinct from questions of future quantity of life, and so seem like they should be asked separately.. nor involve a catastrophe that anyone alive recognisesThe concept also encompasses a civilisation that lives happily on Earth until the sun dies, perhaps even finding a way to survive that, but never spreading out across the universe. This means that, for example, universal adoption of a non-totalising population ethic would be an existential catastrophe. I’m strongly in favour of totalising population ethics, but this seems needlessly biasing.‘Unrecoverable’ or ‘permanent’ states are a superfluous conceptIn the diagram above, Ord categorises ‘unrecoverable dystopias’ as a type of existential risk. He actually seems to consider them necessarily impermanent, but (in...]]>
Arepo https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 12:58 None full 3609
GuQgTdCAXTHzFMQS8_EA EA - Every.org Donation Matching for Monthly Donations by Cullen OKeefe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Every.org Donation Matching for Monthly Donations, published by Cullen OKeefe on October 30, 2022 on The Effective Altruism Forum.From Nov. 1 - Nov. 30, or until funds run out, any donor who sets up a monthly donation to your nonprofit will be matched up to $50.00 of their monthly donation for the first two months. This must be a new monthly donation and not one that is currently active. This recurring donation will be automatically charged to the donor's payment method every month and matched another $50.00 for the month of December as well. In total, donors giving $100 ($50 in November and $50 in December) will have $100 matched ($50 in November and $50 in December). Donors who set up a monthly gift in December are not eligible for this match.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Cullen OKeefe https://forum.effectivealtruism.org/posts/GuQgTdCAXTHzFMQS8/every-org-donation-matching-for-monthly-donations Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Every.org Donation Matching for Monthly Donations, published by Cullen OKeefe on October 30, 2022 on The Effective Altruism Forum.From Nov. 1 - Nov. 30, or until funds run out, any donor who sets up a monthly donation to your nonprofit will be matched up to $50.00 of their monthly donation for the first two months. This must be a new monthly donation and not one that is currently active. This recurring donation will be automatically charged to the donor's payment method every month and matched another $50.00 for the month of December as well. In total, donors giving $100 ($50 in November and $50 in December) will have $100 matched ($50 in November and $50 in December). Donors who set up a monthly gift in December are not eligible for this match.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Mon, 31 Oct 2022 04:34:27 +0000 EA - Every.org Donation Matching for Monthly Donations by Cullen OKeefe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Every.org Donation Matching for Monthly Donations, published by Cullen OKeefe on October 30, 2022 on The Effective Altruism Forum.From Nov. 1 - Nov. 30, or until funds run out, any donor who sets up a monthly donation to your nonprofit will be matched up to $50.00 of their monthly donation for the first two months. This must be a new monthly donation and not one that is currently active. This recurring donation will be automatically charged to the donor's payment method every month and matched another $50.00 for the month of December as well. In total, donors giving $100 ($50 in November and $50 in December) will have $100 matched ($50 in November and $50 in December). Donors who set up a monthly gift in December are not eligible for this match.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Every.org Donation Matching for Monthly Donations, published by Cullen OKeefe on October 30, 2022 on The Effective Altruism Forum.From Nov. 1 - Nov. 30, or until funds run out, any donor who sets up a monthly donation to your nonprofit will be matched up to $50.00 of their monthly donation for the first two months. This must be a new monthly donation and not one that is currently active. This recurring donation will be automatically charged to the donor's payment method every month and matched another $50.00 for the month of December as well. In total, donors giving $100 ($50 in November and $50 in December) will have $100 matched ($50 in November and $50 in December). Donors who set up a monthly gift in December are not eligible for this match.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Cullen OKeefe https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:03 None full 3597
npNt43QRnaRNRixXK_EA EA - Map of Biosecurity Interventions by James Lin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Map of Biosecurity Interventions, published by James Lin on October 30, 2022 on The Effective Altruism Forum.SummaryI recently created a visual map of biosecurity. It covers my interpretation of foundational biosecurity interventions. The map is designed to broadly answer “what are people working on in biosecurity?”(Link to a larger, higher resolution image)‘Low-downside Interventions’ are interventions that are on the lower-risk side. These might be good discussion topics or companies to work on. Promising GCBR Interventions are interventions with general consensus among biosecurity experts for being effective at preventing catastrophic pandemics.Not all interventions are made equal. Some are much more effective than others, but I’ve included the ‘less-promising’ interventions all the same. I didn’t want my biases (or the biosecurity community’s biases, for that matter) about what is and isn’t promising to obscure solutions from a first-principles breakdown.Though I consulted experts and peers, all final decisions were ultimately mine and may differ from the biosecurity community. The rest of the post will go over my rationale for certain decisions.GCBR vs. Covid-scale pandemicsThis map covers preventing pandemics, not just the catastrophic ones, but also the Covid-scale ones which might occur more frequently.EA focuses a lot on global catastrophic biological risk (GCBR) pandemics. This seems pretty important given its neglectedness and potentially catastrophic impact. I’ve outlined the interventions that I think would be promising for GCBRs.At the same time, many in biosecurity consider the effects of AI to outweigh those from biorisks (the magnitude is usually 10-100x). Progress in alignment is influenced by factors like international cooperation, institutional distrust, and nuclear conflict. All of these factors are affected by Covid-scale pandemics, whether through decreases in trust, sudden scarcity of resources, or armed international conflict. Considering the possible detrimental effect of even small-scale but more frequent pandemics, it seems worth exploring interventions focused on this scale. There may be more low-hanging fruit than we imagine.Mitigate-Prevent breakdownThere are other excellent frameworks for pandemic prevention. Kevin Esvelt’s Delay-Detect-Defend is one such example. I decided on a two-factor framework of Mitigate-Prevent because I think it covers the space of interventions well and because it resulted in a nice visual. Part of this framework is Survive which involves bunkers and food resilience. I didn’t include it because I wanted more emphasis on preventing diseases from becoming pandemics in the first place, which is much more important than escaping to a bunker. If we’re escaping to bunkers, then it’s likely that we’ve lost. That said, I could imagine creating another map that covers this axis of resilience.Thoughts on some specific interventionsAgricultural disease monitoring and farm-animal testingFactory farms practically incubate viruses due to the high concentration of animals. Because diseases can pass from animals to humans, more vigilance here could be an effective preventative measure. Wet markets experienced pushback for being viral breeding grounds, and it’s about time that factory farms undergo the same scrutiny.In addition to worrying about human and animal infection, crops are also vulnerable. Agricultural practices are monoculture in nature, meaning there is a lack of genetic diversity in the crops grown. Blights exist, and countries are surprisingly dependent on a few sources of food. Monitoring agricultural diseases through runoff water or random sampling seems worth consideration.BioenhancementBy bioenhancement, I mean health and performance-boosting drugs. Most people have a negative connotation w...]]>
James Lin https://forum.effectivealtruism.org/posts/npNt43QRnaRNRixXK/map-of-biosecurity-interventions Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Map of Biosecurity Interventions, published by James Lin on October 30, 2022 on The Effective Altruism Forum.SummaryI recently created a visual map of biosecurity. It covers my interpretation of foundational biosecurity interventions. The map is designed to broadly answer “what are people working on in biosecurity?”(Link to a larger, higher resolution image)‘Low-downside Interventions’ are interventions that are on the lower-risk side. These might be good discussion topics or companies to work on. Promising GCBR Interventions are interventions with general consensus among biosecurity experts for being effective at preventing catastrophic pandemics.Not all interventions are made equal. Some are much more effective than others, but I’ve included the ‘less-promising’ interventions all the same. I didn’t want my biases (or the biosecurity community’s biases, for that matter) about what is and isn’t promising to obscure solutions from a first-principles breakdown.Though I consulted experts and peers, all final decisions were ultimately mine and may differ from the biosecurity community. The rest of the post will go over my rationale for certain decisions.GCBR vs. Covid-scale pandemicsThis map covers preventing pandemics, not just the catastrophic ones, but also the Covid-scale ones which might occur more frequently.EA focuses a lot on global catastrophic biological risk (GCBR) pandemics. This seems pretty important given its neglectedness and potentially catastrophic impact. I’ve outlined the interventions that I think would be promising for GCBRs.At the same time, many in biosecurity consider the effects of AI to outweigh those from biorisks (the magnitude is usually 10-100x). Progress in alignment is influenced by factors like international cooperation, institutional distrust, and nuclear conflict. All of these factors are affected by Covid-scale pandemics, whether through decreases in trust, sudden scarcity of resources, or armed international conflict. Considering the possible detrimental effect of even small-scale but more frequent pandemics, it seems worth exploring interventions focused on this scale. There may be more low-hanging fruit than we imagine.Mitigate-Prevent breakdownThere are other excellent frameworks for pandemic prevention. Kevin Esvelt’s Delay-Detect-Defend is one such example. I decided on a two-factor framework of Mitigate-Prevent because I think it covers the space of interventions well and because it resulted in a nice visual. Part of this framework is Survive which involves bunkers and food resilience. I didn’t include it because I wanted more emphasis on preventing diseases from becoming pandemics in the first place, which is much more important than escaping to a bunker. If we’re escaping to bunkers, then it’s likely that we’ve lost. That said, I could imagine creating another map that covers this axis of resilience.Thoughts on some specific interventionsAgricultural disease monitoring and farm-animal testingFactory farms practically incubate viruses due to the high concentration of animals. Because diseases can pass from animals to humans, more vigilance here could be an effective preventative measure. Wet markets experienced pushback for being viral breeding grounds, and it’s about time that factory farms undergo the same scrutiny.In addition to worrying about human and animal infection, crops are also vulnerable. Agricultural practices are monoculture in nature, meaning there is a lack of genetic diversity in the crops grown. Blights exist, and countries are surprisingly dependent on a few sources of food. Monitoring agricultural diseases through runoff water or random sampling seems worth consideration.BioenhancementBy bioenhancement, I mean health and performance-boosting drugs. Most people have a negative connotation w...]]>
Sun, 30 Oct 2022 21:27:06 +0000 EA - Map of Biosecurity Interventions by James Lin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Map of Biosecurity Interventions, published by James Lin on October 30, 2022 on The Effective Altruism Forum.SummaryI recently created a visual map of biosecurity. It covers my interpretation of foundational biosecurity interventions. The map is designed to broadly answer “what are people working on in biosecurity?”(Link to a larger, higher resolution image)‘Low-downside Interventions’ are interventions that are on the lower-risk side. These might be good discussion topics or companies to work on. Promising GCBR Interventions are interventions with general consensus among biosecurity experts for being effective at preventing catastrophic pandemics.Not all interventions are made equal. Some are much more effective than others, but I’ve included the ‘less-promising’ interventions all the same. I didn’t want my biases (or the biosecurity community’s biases, for that matter) about what is and isn’t promising to obscure solutions from a first-principles breakdown.Though I consulted experts and peers, all final decisions were ultimately mine and may differ from the biosecurity community. The rest of the post will go over my rationale for certain decisions.GCBR vs. Covid-scale pandemicsThis map covers preventing pandemics, not just the catastrophic ones, but also the Covid-scale ones which might occur more frequently.EA focuses a lot on global catastrophic biological risk (GCBR) pandemics. This seems pretty important given its neglectedness and potentially catastrophic impact. I’ve outlined the interventions that I think would be promising for GCBRs.At the same time, many in biosecurity consider the effects of AI to outweigh those from biorisks (the magnitude is usually 10-100x). Progress in alignment is influenced by factors like international cooperation, institutional distrust, and nuclear conflict. All of these factors are affected by Covid-scale pandemics, whether through decreases in trust, sudden scarcity of resources, or armed international conflict. Considering the possible detrimental effect of even small-scale but more frequent pandemics, it seems worth exploring interventions focused on this scale. There may be more low-hanging fruit than we imagine.Mitigate-Prevent breakdownThere are other excellent frameworks for pandemic prevention. Kevin Esvelt’s Delay-Detect-Defend is one such example. I decided on a two-factor framework of Mitigate-Prevent because I think it covers the space of interventions well and because it resulted in a nice visual. Part of this framework is Survive which involves bunkers and food resilience. I didn’t include it because I wanted more emphasis on preventing diseases from becoming pandemics in the first place, which is much more important than escaping to a bunker. If we’re escaping to bunkers, then it’s likely that we’ve lost. That said, I could imagine creating another map that covers this axis of resilience.Thoughts on some specific interventionsAgricultural disease monitoring and farm-animal testingFactory farms practically incubate viruses due to the high concentration of animals. Because diseases can pass from animals to humans, more vigilance here could be an effective preventative measure. Wet markets experienced pushback for being viral breeding grounds, and it’s about time that factory farms undergo the same scrutiny.In addition to worrying about human and animal infection, crops are also vulnerable. Agricultural practices are monoculture in nature, meaning there is a lack of genetic diversity in the crops grown. Blights exist, and countries are surprisingly dependent on a few sources of food. Monitoring agricultural diseases through runoff water or random sampling seems worth consideration.BioenhancementBy bioenhancement, I mean health and performance-boosting drugs. Most people have a negative connotation w...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Map of Biosecurity Interventions, published by James Lin on October 30, 2022 on The Effective Altruism Forum.SummaryI recently created a visual map of biosecurity. It covers my interpretation of foundational biosecurity interventions. The map is designed to broadly answer “what are people working on in biosecurity?”(Link to a larger, higher resolution image)‘Low-downside Interventions’ are interventions that are on the lower-risk side. These might be good discussion topics or companies to work on. Promising GCBR Interventions are interventions with general consensus among biosecurity experts for being effective at preventing catastrophic pandemics.Not all interventions are made equal. Some are much more effective than others, but I’ve included the ‘less-promising’ interventions all the same. I didn’t want my biases (or the biosecurity community’s biases, for that matter) about what is and isn’t promising to obscure solutions from a first-principles breakdown.Though I consulted experts and peers, all final decisions were ultimately mine and may differ from the biosecurity community. The rest of the post will go over my rationale for certain decisions.GCBR vs. Covid-scale pandemicsThis map covers preventing pandemics, not just the catastrophic ones, but also the Covid-scale ones which might occur more frequently.EA focuses a lot on global catastrophic biological risk (GCBR) pandemics. This seems pretty important given its neglectedness and potentially catastrophic impact. I’ve outlined the interventions that I think would be promising for GCBRs.At the same time, many in biosecurity consider the effects of AI to outweigh those from biorisks (the magnitude is usually 10-100x). Progress in alignment is influenced by factors like international cooperation, institutional distrust, and nuclear conflict. All of these factors are affected by Covid-scale pandemics, whether through decreases in trust, sudden scarcity of resources, or armed international conflict. Considering the possible detrimental effect of even small-scale but more frequent pandemics, it seems worth exploring interventions focused on this scale. There may be more low-hanging fruit than we imagine.Mitigate-Prevent breakdownThere are other excellent frameworks for pandemic prevention. Kevin Esvelt’s Delay-Detect-Defend is one such example. I decided on a two-factor framework of Mitigate-Prevent because I think it covers the space of interventions well and because it resulted in a nice visual. Part of this framework is Survive which involves bunkers and food resilience. I didn’t include it because I wanted more emphasis on preventing diseases from becoming pandemics in the first place, which is much more important than escaping to a bunker. If we’re escaping to bunkers, then it’s likely that we’ve lost. That said, I could imagine creating another map that covers this axis of resilience.Thoughts on some specific interventionsAgricultural disease monitoring and farm-animal testingFactory farms practically incubate viruses due to the high concentration of animals. Because diseases can pass from animals to humans, more vigilance here could be an effective preventative measure. Wet markets experienced pushback for being viral breeding grounds, and it’s about time that factory farms undergo the same scrutiny.In addition to worrying about human and animal infection, crops are also vulnerable. Agricultural practices are monoculture in nature, meaning there is a lack of genetic diversity in the crops grown. Blights exist, and countries are surprisingly dependent on a few sources of food. Monitoring agricultural diseases through runoff water or random sampling seems worth consideration.BioenhancementBy bioenhancement, I mean health and performance-boosting drugs. Most people have a negative connotation w...]]>
James Lin https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:02 None full 3593
DmFTFq7pwqCF4Bnnj_EA EA - Request for input: how has 80,000 Hours influenced the direction of the EA community over the past 2 years? by Ardenlk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Request for input: how has 80,000 Hours influenced the direction of the EA community over the past 2 years?, published by Ardenlk on October 29, 2022 on The Effective Altruism Forum.I'm currently helping assess 80,000 Hours' impact over the past 2 years.One part of our impact is ways we influence the direction of the EA community.By "direction of the EA community," I mean a variety of things like:What messages seem prevalent in the communityWhat ideas gain prominence or become less prominentWhat community members are interested inTo get a better understanding of this, I'm gathering thoughts from community members on how they perceive 80,000 Hours to have influenced the direction of the EA community, if at all.If you have ideas, please share them using this short form!Note we are interested to hear about both positive and negative influences.This isn’t supposed to be a rigorous or thorough survey -- but we think we should have a low bar for rapid-fire surveys of people in the community that could be helpful for giving us ideas or things to investigate.Thank you! ArdenThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ardenlk https://forum.effectivealtruism.org/posts/DmFTFq7pwqCF4Bnnj/request-for-input-how-has-80-000-hours-influenced-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Request for input: how has 80,000 Hours influenced the direction of the EA community over the past 2 years?, published by Ardenlk on October 29, 2022 on The Effective Altruism Forum.I'm currently helping assess 80,000 Hours' impact over the past 2 years.One part of our impact is ways we influence the direction of the EA community.By "direction of the EA community," I mean a variety of things like:What messages seem prevalent in the communityWhat ideas gain prominence or become less prominentWhat community members are interested inTo get a better understanding of this, I'm gathering thoughts from community members on how they perceive 80,000 Hours to have influenced the direction of the EA community, if at all.If you have ideas, please share them using this short form!Note we are interested to hear about both positive and negative influences.This isn’t supposed to be a rigorous or thorough survey -- but we think we should have a low bar for rapid-fire surveys of people in the community that could be helpful for giving us ideas or things to investigate.Thank you! ArdenThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Sun, 30 Oct 2022 08:04:26 +0000 EA - Request for input: how has 80,000 Hours influenced the direction of the EA community over the past 2 years? by Ardenlk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Request for input: how has 80,000 Hours influenced the direction of the EA community over the past 2 years?, published by Ardenlk on October 29, 2022 on The Effective Altruism Forum.I'm currently helping assess 80,000 Hours' impact over the past 2 years.One part of our impact is ways we influence the direction of the EA community.By "direction of the EA community," I mean a variety of things like:What messages seem prevalent in the communityWhat ideas gain prominence or become less prominentWhat community members are interested inTo get a better understanding of this, I'm gathering thoughts from community members on how they perceive 80,000 Hours to have influenced the direction of the EA community, if at all.If you have ideas, please share them using this short form!Note we are interested to hear about both positive and negative influences.This isn’t supposed to be a rigorous or thorough survey -- but we think we should have a low bar for rapid-fire surveys of people in the community that could be helpful for giving us ideas or things to investigate.Thank you! ArdenThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Request for input: how has 80,000 Hours influenced the direction of the EA community over the past 2 years?, published by Ardenlk on October 29, 2022 on The Effective Altruism Forum.I'm currently helping assess 80,000 Hours' impact over the past 2 years.One part of our impact is ways we influence the direction of the EA community.By "direction of the EA community," I mean a variety of things like:What messages seem prevalent in the communityWhat ideas gain prominence or become less prominentWhat community members are interested inTo get a better understanding of this, I'm gathering thoughts from community members on how they perceive 80,000 Hours to have influenced the direction of the EA community, if at all.If you have ideas, please share them using this short form!Note we are interested to hear about both positive and negative influences.This isn’t supposed to be a rigorous or thorough survey -- but we think we should have a low bar for rapid-fire surveys of people in the community that could be helpful for giving us ideas or things to investigate.Thank you! ArdenThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Ardenlk https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 01:21 None full 3590
H35jDxtvvTpcgwuub_EA EA - A Critique of Longtermism by Popular YouTube Science Channel, Sabine Hossenfelder: "Elon Musk and The Longtermists: What Is Their Plan?" by Ram Aditya Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Critique of Longtermism by Popular YouTube Science Channel, Sabine Hossenfelder: "Elon Musk & The Longtermists: What Is Their Plan?", published by Ram Aditya on October 29, 2022 on The Effective Altruism Forum.A popular science channel host, Sabine Hossenfelder just posted this video on a critique of longtermism.I don't think it was a fair assessment and misunderstood key concepts of longtermism. It is sad to see it being misrepresented to thousands of general viewers who might turn away from even critically engaging with these ideas based on a biased general overview. It could be worth engaging with her and her viewers in the comment section, to both learn what longtermism might be getting wrong so as to update our own views, and to discuss some of the points she raises, more critically. I'm concerned that EA would become too strongly associated with longtermism which could make thousands of these viewers avoid EA.Some things I agree with:Her discomfort with the fact that many major ideas of EA and longtermism have all originated from a handful of philosophers all located at Oxford or related institutes.This quote from Singer that she cites: “..just how bad the extinction of intelligent life on our planet would be depends crucially on how we value lives that have not yet begun and perhaps never will begin”. I agree that this is an important crux that makes for a good argument against longtermism, or for a more cautious advancement on the longtermism front.Some of the points which I disagree with:“Unlike effective altruists, longtermists don’t really care about famines or floods because those won’t lead to extinction”. She mistakes prioritizing long term future over short term as an implication that they don’t “care” about the short term at all. But it is a matter of deciding which has more impact, among many extraordinary ways of doing good which includes caring about famines and floods.“So in a nutshell longtermists say that the current conditions of our living don’t play a big role and a few million deaths are acceptable, so long as we don’t go extinct”. Nope, I don’t think so. Longtermism merely states that causing the deaths of a few billion people might be worse. Both (a million deaths and a billion deaths) are absolutely unacceptable, but I think what she misses is the trade-offs involved in doing good and the limited amount of time and resources that we have. I am surprised she misses the point that when one actually wants to do the most good, then one has to go about it a rational way.“Have you ever put away a bag of chips because you want to increase your chances of having more children so we can populate the entire galaxy in a billion years? That makes you a longtermist.” Nope I don’t think longtermism advocates for people going out of their way to make more children.She quotes a few opinion pieces that criticize longtermism: “.. the fantasy that faith in the combined power of technology and the market could change the world without needing a role for the government”, “..longtermism seems tailor-made to allow tech, finance and philosophy elites to indulge their anti-humanistic tendencies..”. Ironically, I think longtermists are more humanistic given that one of their primary goals is to ensure the long-term sustenance of humanity. Also, as far as I know, longtermism only says that given that technology is going to be important, it is best that we develop it in safer ways. It does not necessarily promote ideas of pushing for technological progress just for the sake of it. It also does not impose a technocractic moral worldview when it advocates for extinction risk prevention or prioritizing the survival of humanity and life as a whole.With regards to Pascal Mugging that she brings up, I am uncertain how this can be resolved. I’ve read a few EA ...]]>
Ram Aditya https://forum.effectivealtruism.org/posts/H35jDxtvvTpcgwuub/a-critique-of-longtermism-by-popular-youtube-science-channel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Critique of Longtermism by Popular YouTube Science Channel, Sabine Hossenfelder: "Elon Musk & The Longtermists: What Is Their Plan?", published by Ram Aditya on October 29, 2022 on The Effective Altruism Forum.A popular science channel host, Sabine Hossenfelder just posted this video on a critique of longtermism.I don't think it was a fair assessment and misunderstood key concepts of longtermism. It is sad to see it being misrepresented to thousands of general viewers who might turn away from even critically engaging with these ideas based on a biased general overview. It could be worth engaging with her and her viewers in the comment section, to both learn what longtermism might be getting wrong so as to update our own views, and to discuss some of the points she raises, more critically. I'm concerned that EA would become too strongly associated with longtermism which could make thousands of these viewers avoid EA.Some things I agree with:Her discomfort with the fact that many major ideas of EA and longtermism have all originated from a handful of philosophers all located at Oxford or related institutes.This quote from Singer that she cites: “..just how bad the extinction of intelligent life on our planet would be depends crucially on how we value lives that have not yet begun and perhaps never will begin”. I agree that this is an important crux that makes for a good argument against longtermism, or for a more cautious advancement on the longtermism front.Some of the points which I disagree with:“Unlike effective altruists, longtermists don’t really care about famines or floods because those won’t lead to extinction”. She mistakes prioritizing long term future over short term as an implication that they don’t “care” about the short term at all. But it is a matter of deciding which has more impact, among many extraordinary ways of doing good which includes caring about famines and floods.“So in a nutshell longtermists say that the current conditions of our living don’t play a big role and a few million deaths are acceptable, so long as we don’t go extinct”. Nope, I don’t think so. Longtermism merely states that causing the deaths of a few billion people might be worse. Both (a million deaths and a billion deaths) are absolutely unacceptable, but I think what she misses is the trade-offs involved in doing good and the limited amount of time and resources that we have. I am surprised she misses the point that when one actually wants to do the most good, then one has to go about it a rational way.“Have you ever put away a bag of chips because you want to increase your chances of having more children so we can populate the entire galaxy in a billion years? That makes you a longtermist.” Nope I don’t think longtermism advocates for people going out of their way to make more children.She quotes a few opinion pieces that criticize longtermism: “.. the fantasy that faith in the combined power of technology and the market could change the world without needing a role for the government”, “..longtermism seems tailor-made to allow tech, finance and philosophy elites to indulge their anti-humanistic tendencies..”. Ironically, I think longtermists are more humanistic given that one of their primary goals is to ensure the long-term sustenance of humanity. Also, as far as I know, longtermism only says that given that technology is going to be important, it is best that we develop it in safer ways. It does not necessarily promote ideas of pushing for technological progress just for the sake of it. It also does not impose a technocractic moral worldview when it advocates for extinction risk prevention or prioritizing the survival of humanity and life as a whole.With regards to Pascal Mugging that she brings up, I am uncertain how this can be resolved. I’ve read a few EA ...]]>
Sun, 30 Oct 2022 02:51:45 +0000 EA - A Critique of Longtermism by Popular YouTube Science Channel, Sabine Hossenfelder: "Elon Musk and The Longtermists: What Is Their Plan?" by Ram Aditya Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Critique of Longtermism by Popular YouTube Science Channel, Sabine Hossenfelder: "Elon Musk & The Longtermists: What Is Their Plan?", published by Ram Aditya on October 29, 2022 on The Effective Altruism Forum.A popular science channel host, Sabine Hossenfelder just posted this video on a critique of longtermism.I don't think it was a fair assessment and misunderstood key concepts of longtermism. It is sad to see it being misrepresented to thousands of general viewers who might turn away from even critically engaging with these ideas based on a biased general overview. It could be worth engaging with her and her viewers in the comment section, to both learn what longtermism might be getting wrong so as to update our own views, and to discuss some of the points she raises, more critically. I'm concerned that EA would become too strongly associated with longtermism which could make thousands of these viewers avoid EA.Some things I agree with:Her discomfort with the fact that many major ideas of EA and longtermism have all originated from a handful of philosophers all located at Oxford or related institutes.This quote from Singer that she cites: “..just how bad the extinction of intelligent life on our planet would be depends crucially on how we value lives that have not yet begun and perhaps never will begin”. I agree that this is an important crux that makes for a good argument against longtermism, or for a more cautious advancement on the longtermism front.Some of the points which I disagree with:“Unlike effective altruists, longtermists don’t really care about famines or floods because those won’t lead to extinction”. She mistakes prioritizing long term future over short term as an implication that they don’t “care” about the short term at all. But it is a matter of deciding which has more impact, among many extraordinary ways of doing good which includes caring about famines and floods.“So in a nutshell longtermists say that the current conditions of our living don’t play a big role and a few million deaths are acceptable, so long as we don’t go extinct”. Nope, I don’t think so. Longtermism merely states that causing the deaths of a few billion people might be worse. Both (a million deaths and a billion deaths) are absolutely unacceptable, but I think what she misses is the trade-offs involved in doing good and the limited amount of time and resources that we have. I am surprised she misses the point that when one actually wants to do the most good, then one has to go about it a rational way.“Have you ever put away a bag of chips because you want to increase your chances of having more children so we can populate the entire galaxy in a billion years? That makes you a longtermist.” Nope I don’t think longtermism advocates for people going out of their way to make more children.She quotes a few opinion pieces that criticize longtermism: “.. the fantasy that faith in the combined power of technology and the market could change the world without needing a role for the government”, “..longtermism seems tailor-made to allow tech, finance and philosophy elites to indulge their anti-humanistic tendencies..”. Ironically, I think longtermists are more humanistic given that one of their primary goals is to ensure the long-term sustenance of humanity. Also, as far as I know, longtermism only says that given that technology is going to be important, it is best that we develop it in safer ways. It does not necessarily promote ideas of pushing for technological progress just for the sake of it. It also does not impose a technocractic moral worldview when it advocates for extinction risk prevention or prioritizing the survival of humanity and life as a whole.With regards to Pascal Mugging that she brings up, I am uncertain how this can be resolved. I’ve read a few EA ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Critique of Longtermism by Popular YouTube Science Channel, Sabine Hossenfelder: "Elon Musk & The Longtermists: What Is Their Plan?", published by Ram Aditya on October 29, 2022 on The Effective Altruism Forum.A popular science channel host, Sabine Hossenfelder just posted this video on a critique of longtermism.I don't think it was a fair assessment and misunderstood key concepts of longtermism. It is sad to see it being misrepresented to thousands of general viewers who might turn away from even critically engaging with these ideas based on a biased general overview. It could be worth engaging with her and her viewers in the comment section, to both learn what longtermism might be getting wrong so as to update our own views, and to discuss some of the points she raises, more critically. I'm concerned that EA would become too strongly associated with longtermism which could make thousands of these viewers avoid EA.Some things I agree with:Her discomfort with the fact that many major ideas of EA and longtermism have all originated from a handful of philosophers all located at Oxford or related institutes.This quote from Singer that she cites: “..just how bad the extinction of intelligent life on our planet would be depends crucially on how we value lives that have not yet begun and perhaps never will begin”. I agree that this is an important crux that makes for a good argument against longtermism, or for a more cautious advancement on the longtermism front.Some of the points which I disagree with:“Unlike effective altruists, longtermists don’t really care about famines or floods because those won’t lead to extinction”. She mistakes prioritizing long term future over short term as an implication that they don’t “care” about the short term at all. But it is a matter of deciding which has more impact, among many extraordinary ways of doing good which includes caring about famines and floods.“So in a nutshell longtermists say that the current conditions of our living don’t play a big role and a few million deaths are acceptable, so long as we don’t go extinct”. Nope, I don’t think so. Longtermism merely states that causing the deaths of a few billion people might be worse. Both (a million deaths and a billion deaths) are absolutely unacceptable, but I think what she misses is the trade-offs involved in doing good and the limited amount of time and resources that we have. I am surprised she misses the point that when one actually wants to do the most good, then one has to go about it a rational way.“Have you ever put away a bag of chips because you want to increase your chances of having more children so we can populate the entire galaxy in a billion years? That makes you a longtermist.” Nope I don’t think longtermism advocates for people going out of their way to make more children.She quotes a few opinion pieces that criticize longtermism: “.. the fantasy that faith in the combined power of technology and the market could change the world without needing a role for the government”, “..longtermism seems tailor-made to allow tech, finance and philosophy elites to indulge their anti-humanistic tendencies..”. Ironically, I think longtermists are more humanistic given that one of their primary goals is to ensure the long-term sustenance of humanity. Also, as far as I know, longtermism only says that given that technology is going to be important, it is best that we develop it in safer ways. It does not necessarily promote ideas of pushing for technological progress just for the sake of it. It also does not impose a technocractic moral worldview when it advocates for extinction risk prevention or prioritizing the survival of humanity and life as a whole.With regards to Pascal Mugging that she brings up, I am uncertain how this can be resolved. I’ve read a few EA ...]]>
Ram Aditya https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:48 None full 3588
KqCybin8rtfP3qztq_EA EA - AGI and Lock-In by Lukas Finnveden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI and Lock-In, published by Lukas Finnveden on October 29, 2022 on The Effective Altruism Forum.The long-term future of intelligent life is currently unpredictable and undetermined. In the linked document, we argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competently pursue any such values for at least millions, and plausibly trillions, of years.The rest of this post contains the summary (6 pages), with links to relevant sections of the main document (40 pages) for readers who want more details.0.0 The claimLife on Earth could survive for millions of years. Life in space could plausibly survive for trillions of years. What will happen to intelligent life during this time? Some possible claims are:A. Humanity will almost certainly go extinct in the next million years.B. Under Darwinian pressures, intelligent life will spread throughout the stars and rapidly evolve toward maximal reproductive fitness.C. Through moral reflection, intelligent life will reliably be driven to pursue some specific “higher” (non-reproductive) goal, such as maximizing the happiness of all creatures.D. The choices of intelligent life are deeply, fundamentally uncertain. It will at no point be predictable what intelligent beings will choose to do in the following 1000 years.E. It is possible to stabilize many features of society for millions or trillions of years. But it is possible to stabilize them into many different shapes — so civilization’s long-term behavior is contingent on what happens early on.Claims A-C assert that the future is basically determined today. Claim D asserts that the future is, and will remain, undetermined. In this document, we argue for claim E: Some of the most important features of the future of intelligent life are currently undetermined but could become determined relatively soon (relative to the trillions of years life could last).In particular, our main claim is that artificial general intelligence (AGI) will make it technologically feasible to construct long-lived institutions pursuing a wide variety of possible goals. We can break this into three assertions, all conditional on the availability of AGI:It will be possible to preserve highly nuanced specifications of values and goals far into the future, without losing any information.With sufficient investments, it will be feasible to develop AGI-based institutions that (with high probability) competently and faithfully pursue any such values until an external source stops them, or until the values in question imply that they should stop.If a large majority of the world’s economic and military powers agreed to set-up such an institution, and bestowed it with the power to defend itself against external threats, that institution could pursue its agenda for at least millions of years (and perhaps for trillions).Note that we’re mostly making claims about feasibility as opposed to likelihood. We only briefly discuss whether people would want to do something like this in Section 2.2.(Relatedly, even though the possibility of stability implies E, in the top list, there could still be a strong tendency towards worlds described by one of the other options A-D. In practice, we think D seems unlikely, but that you could make reasonable arguments that any of the end-points described by A, B, or C are probable.)Why are we interested in this set of claims? There are a few different reasons:The possibility of stable institutions could pose an existential risk, i...]]>
Lukas Finnveden https://forum.effectivealtruism.org/posts/KqCybin8rtfP3qztq/agi-and-lock-in Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI and Lock-In, published by Lukas Finnveden on October 29, 2022 on The Effective Altruism Forum.The long-term future of intelligent life is currently unpredictable and undetermined. In the linked document, we argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competently pursue any such values for at least millions, and plausibly trillions, of years.The rest of this post contains the summary (6 pages), with links to relevant sections of the main document (40 pages) for readers who want more details.0.0 The claimLife on Earth could survive for millions of years. Life in space could plausibly survive for trillions of years. What will happen to intelligent life during this time? Some possible claims are:A. Humanity will almost certainly go extinct in the next million years.B. Under Darwinian pressures, intelligent life will spread throughout the stars and rapidly evolve toward maximal reproductive fitness.C. Through moral reflection, intelligent life will reliably be driven to pursue some specific “higher” (non-reproductive) goal, such as maximizing the happiness of all creatures.D. The choices of intelligent life are deeply, fundamentally uncertain. It will at no point be predictable what intelligent beings will choose to do in the following 1000 years.E. It is possible to stabilize many features of society for millions or trillions of years. But it is possible to stabilize them into many different shapes — so civilization’s long-term behavior is contingent on what happens early on.Claims A-C assert that the future is basically determined today. Claim D asserts that the future is, and will remain, undetermined. In this document, we argue for claim E: Some of the most important features of the future of intelligent life are currently undetermined but could become determined relatively soon (relative to the trillions of years life could last).In particular, our main claim is that artificial general intelligence (AGI) will make it technologically feasible to construct long-lived institutions pursuing a wide variety of possible goals. We can break this into three assertions, all conditional on the availability of AGI:It will be possible to preserve highly nuanced specifications of values and goals far into the future, without losing any information.With sufficient investments, it will be feasible to develop AGI-based institutions that (with high probability) competently and faithfully pursue any such values until an external source stops them, or until the values in question imply that they should stop.If a large majority of the world’s economic and military powers agreed to set-up such an institution, and bestowed it with the power to defend itself against external threats, that institution could pursue its agenda for at least millions of years (and perhaps for trillions).Note that we’re mostly making claims about feasibility as opposed to likelihood. We only briefly discuss whether people would want to do something like this in Section 2.2.(Relatedly, even though the possibility of stability implies E, in the top list, there could still be a strong tendency towards worlds described by one of the other options A-D. In practice, we think D seems unlikely, but that you could make reasonable arguments that any of the end-points described by A, B, or C are probable.)Why are we interested in this set of claims? There are a few different reasons:The possibility of stable institutions could pose an existential risk, i...]]>
Sat, 29 Oct 2022 15:47:53 +0000 EA - AGI and Lock-In by Lukas Finnveden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI and Lock-In, published by Lukas Finnveden on October 29, 2022 on The Effective Altruism Forum.The long-term future of intelligent life is currently unpredictable and undetermined. In the linked document, we argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competently pursue any such values for at least millions, and plausibly trillions, of years.The rest of this post contains the summary (6 pages), with links to relevant sections of the main document (40 pages) for readers who want more details.0.0 The claimLife on Earth could survive for millions of years. Life in space could plausibly survive for trillions of years. What will happen to intelligent life during this time? Some possible claims are:A. Humanity will almost certainly go extinct in the next million years.B. Under Darwinian pressures, intelligent life will spread throughout the stars and rapidly evolve toward maximal reproductive fitness.C. Through moral reflection, intelligent life will reliably be driven to pursue some specific “higher” (non-reproductive) goal, such as maximizing the happiness of all creatures.D. The choices of intelligent life are deeply, fundamentally uncertain. It will at no point be predictable what intelligent beings will choose to do in the following 1000 years.E. It is possible to stabilize many features of society for millions or trillions of years. But it is possible to stabilize them into many different shapes — so civilization’s long-term behavior is contingent on what happens early on.Claims A-C assert that the future is basically determined today. Claim D asserts that the future is, and will remain, undetermined. In this document, we argue for claim E: Some of the most important features of the future of intelligent life are currently undetermined but could become determined relatively soon (relative to the trillions of years life could last).In particular, our main claim is that artificial general intelligence (AGI) will make it technologically feasible to construct long-lived institutions pursuing a wide variety of possible goals. We can break this into three assertions, all conditional on the availability of AGI:It will be possible to preserve highly nuanced specifications of values and goals far into the future, without losing any information.With sufficient investments, it will be feasible to develop AGI-based institutions that (with high probability) competently and faithfully pursue any such values until an external source stops them, or until the values in question imply that they should stop.If a large majority of the world’s economic and military powers agreed to set-up such an institution, and bestowed it with the power to defend itself against external threats, that institution could pursue its agenda for at least millions of years (and perhaps for trillions).Note that we’re mostly making claims about feasibility as opposed to likelihood. We only briefly discuss whether people would want to do something like this in Section 2.2.(Relatedly, even though the possibility of stability implies E, in the top list, there could still be a strong tendency towards worlds described by one of the other options A-D. In practice, we think D seems unlikely, but that you could make reasonable arguments that any of the end-points described by A, B, or C are probable.)Why are we interested in this set of claims? There are a few different reasons:The possibility of stable institutions could pose an existential risk, i...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI and Lock-In, published by Lukas Finnveden on October 29, 2022 on The Effective Altruism Forum.The long-term future of intelligent life is currently unpredictable and undetermined. In the linked document, we argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competently pursue any such values for at least millions, and plausibly trillions, of years.The rest of this post contains the summary (6 pages), with links to relevant sections of the main document (40 pages) for readers who want more details.0.0 The claimLife on Earth could survive for millions of years. Life in space could plausibly survive for trillions of years. What will happen to intelligent life during this time? Some possible claims are:A. Humanity will almost certainly go extinct in the next million years.B. Under Darwinian pressures, intelligent life will spread throughout the stars and rapidly evolve toward maximal reproductive fitness.C. Through moral reflection, intelligent life will reliably be driven to pursue some specific “higher” (non-reproductive) goal, such as maximizing the happiness of all creatures.D. The choices of intelligent life are deeply, fundamentally uncertain. It will at no point be predictable what intelligent beings will choose to do in the following 1000 years.E. It is possible to stabilize many features of society for millions or trillions of years. But it is possible to stabilize them into many different shapes — so civilization’s long-term behavior is contingent on what happens early on.Claims A-C assert that the future is basically determined today. Claim D asserts that the future is, and will remain, undetermined. In this document, we argue for claim E: Some of the most important features of the future of intelligent life are currently undetermined but could become determined relatively soon (relative to the trillions of years life could last).In particular, our main claim is that artificial general intelligence (AGI) will make it technologically feasible to construct long-lived institutions pursuing a wide variety of possible goals. We can break this into three assertions, all conditional on the availability of AGI:It will be possible to preserve highly nuanced specifications of values and goals far into the future, without losing any information.With sufficient investments, it will be feasible to develop AGI-based institutions that (with high probability) competently and faithfully pursue any such values until an external source stops them, or until the values in question imply that they should stop.If a large majority of the world’s economic and military powers agreed to set-up such an institution, and bestowed it with the power to defend itself against external threats, that institution could pursue its agenda for at least millions of years (and perhaps for trillions).Note that we’re mostly making claims about feasibility as opposed to likelihood. We only briefly discuss whether people would want to do something like this in Section 2.2.(Relatedly, even though the possibility of stability implies E, in the top list, there could still be a strong tendency towards worlds described by one of the other options A-D. In practice, we think D seems unlikely, but that you could make reasonable arguments that any of the end-points described by A, B, or C are probable.)Why are we interested in this set of claims? There are a few different reasons:The possibility of stable institutions could pose an existential risk, i...]]>
Lukas Finnveden https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 17:37 None full 3582
dSLLJX5mhgpBzbZED_EA EA - EA movement course corrections and where you might disagree by michel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA movement course corrections and where you might disagree, published by michel on October 29, 2022 on The Effective Altruism Forum.This is the final post in a series of two posts on EA movement strategy. The first post categorized ways in which EA could fail.SummaryFollowing an influx of funding, media attention, and influence, the EA movement has been speeding along an exciting, yet perilous, trajectory recently.A lot of the EA community’s future impact rests on this uncertain growth going well (and thereby avoiding movement collapse scenarios).Yet, discussions or critiques of EA’s trajectory are often not action-guiding. Even when critiques propose course corrections that are tempting to agree with (e.g., EA should be bigger!) proposed course corrections to make EA more like X often don’t rigorously engage with the downsides of being more like X, or the opportunity cost of not being like Y. Proposals to make EA more like X also often leave me with only a vague understanding of what X looks like and how we get from here to X.I hope this post and the previous write-up on ways in which EA could fail can make discussions of the EA community’s trajectory more productive (and to clarify my own thinking on the matter).This post analyzes the different domains within which EA could change its trajectory, as well as key considerations that inform those trajectory changes where reasonable people might disagree.I also share next steps to build on this post.PrefaceI was going to write another critique of EA. How original. I was going to write about how there’s an increasingly visible EA “archetype” (rationalist, longtermist, interested in AI, etc.) that embodies an aesthetic few people feel warmly towards on first impression, and that this leads some newcomers who I think would be a great fit for EA to bounce off the community.But as I outlined my critique, I had a scary realization: If EA adopted my critique, I’m not confident the community would be more impactful. Maybe, to counter my proposed critique, AI alignment is just the problem of our century and we need to orient ourselves toward that unwelcome reality. Seems plausible. Or maybe EA is rife with echo chambers, EA exceptionalism, and an implicit bias to see ourselves as the protagonist of a story others are blind to. Also seems plausible.And then I thought about other EA strategy takes. Doesn't a proposal like “make EA enormous” also rest on lots of often implicit assumptions? Like how well current EA infrastructure and coordination systems can adapt to a large influx of people, the extent to which “Effective Altruism” as a brand can scale relative to more cause-area-specific brands, and the plausible costs of diluting EA’s uniquely truth-seeking norms. I’m not saying we shouldn’t make EA enormous, I’m saying it seems hard to know whether to make EA enormous– or for that matter to have any strong strategy opinion.Nevertheless, I’m glad people are thinking about course corrections to the EA movement trajectory. Why? Because I doubt the existing “business as usual” trajectory is the optimal trajectory.I don’t think anyone is deliberately steering the EA movement. The Centre for Effective Altruism (CEA) does at some level with EAG(x)’s, online discussion spaces, and goals for growth levels, but ask them and they will tell you CEA is not in charge of all of EA.EA thought leaders also don’t claim to own course-correcting all of EA. While they may nudge the movement in certain directions through grants and new projects, their full-time work typically has a narrower scope.I get the sense that the EA movement as we see it today is a conglomeration of past deliberate decisions (e.g., name, rough growth rate, how to brand discussion and gathering spaces) and just natural social dynamics (e.g., grouping by inter...]]>
michel https://forum.effectivealtruism.org/posts/dSLLJX5mhgpBzbZED/ea-movement-course-corrections-and-where-you-might-disagree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA movement course corrections and where you might disagree, published by michel on October 29, 2022 on The Effective Altruism Forum.This is the final post in a series of two posts on EA movement strategy. The first post categorized ways in which EA could fail.SummaryFollowing an influx of funding, media attention, and influence, the EA movement has been speeding along an exciting, yet perilous, trajectory recently.A lot of the EA community’s future impact rests on this uncertain growth going well (and thereby avoiding movement collapse scenarios).Yet, discussions or critiques of EA’s trajectory are often not action-guiding. Even when critiques propose course corrections that are tempting to agree with (e.g., EA should be bigger!) proposed course corrections to make EA more like X often don’t rigorously engage with the downsides of being more like X, or the opportunity cost of not being like Y. Proposals to make EA more like X also often leave me with only a vague understanding of what X looks like and how we get from here to X.I hope this post and the previous write-up on ways in which EA could fail can make discussions of the EA community’s trajectory more productive (and to clarify my own thinking on the matter).This post analyzes the different domains within which EA could change its trajectory, as well as key considerations that inform those trajectory changes where reasonable people might disagree.I also share next steps to build on this post.PrefaceI was going to write another critique of EA. How original. I was going to write about how there’s an increasingly visible EA “archetype” (rationalist, longtermist, interested in AI, etc.) that embodies an aesthetic few people feel warmly towards on first impression, and that this leads some newcomers who I think would be a great fit for EA to bounce off the community.But as I outlined my critique, I had a scary realization: If EA adopted my critique, I’m not confident the community would be more impactful. Maybe, to counter my proposed critique, AI alignment is just the problem of our century and we need to orient ourselves toward that unwelcome reality. Seems plausible. Or maybe EA is rife with echo chambers, EA exceptionalism, and an implicit bias to see ourselves as the protagonist of a story others are blind to. Also seems plausible.And then I thought about other EA strategy takes. Doesn't a proposal like “make EA enormous” also rest on lots of often implicit assumptions? Like how well current EA infrastructure and coordination systems can adapt to a large influx of people, the extent to which “Effective Altruism” as a brand can scale relative to more cause-area-specific brands, and the plausible costs of diluting EA’s uniquely truth-seeking norms. I’m not saying we shouldn’t make EA enormous, I’m saying it seems hard to know whether to make EA enormous– or for that matter to have any strong strategy opinion.Nevertheless, I’m glad people are thinking about course corrections to the EA movement trajectory. Why? Because I doubt the existing “business as usual” trajectory is the optimal trajectory.I don’t think anyone is deliberately steering the EA movement. The Centre for Effective Altruism (CEA) does at some level with EAG(x)’s, online discussion spaces, and goals for growth levels, but ask them and they will tell you CEA is not in charge of all of EA.EA thought leaders also don’t claim to own course-correcting all of EA. While they may nudge the movement in certain directions through grants and new projects, their full-time work typically has a narrower scope.I get the sense that the EA movement as we see it today is a conglomeration of past deliberate decisions (e.g., name, rough growth rate, how to brand discussion and gathering spaces) and just natural social dynamics (e.g., grouping by inter...]]>
Sat, 29 Oct 2022 15:40:54 +0000 EA - EA movement course corrections and where you might disagree by michel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA movement course corrections and where you might disagree, published by michel on October 29, 2022 on The Effective Altruism Forum.This is the final post in a series of two posts on EA movement strategy. The first post categorized ways in which EA could fail.SummaryFollowing an influx of funding, media attention, and influence, the EA movement has been speeding along an exciting, yet perilous, trajectory recently.A lot of the EA community’s future impact rests on this uncertain growth going well (and thereby avoiding movement collapse scenarios).Yet, discussions or critiques of EA’s trajectory are often not action-guiding. Even when critiques propose course corrections that are tempting to agree with (e.g., EA should be bigger!) proposed course corrections to make EA more like X often don’t rigorously engage with the downsides of being more like X, or the opportunity cost of not being like Y. Proposals to make EA more like X also often leave me with only a vague understanding of what X looks like and how we get from here to X.I hope this post and the previous write-up on ways in which EA could fail can make discussions of the EA community’s trajectory more productive (and to clarify my own thinking on the matter).This post analyzes the different domains within which EA could change its trajectory, as well as key considerations that inform those trajectory changes where reasonable people might disagree.I also share next steps to build on this post.PrefaceI was going to write another critique of EA. How original. I was going to write about how there’s an increasingly visible EA “archetype” (rationalist, longtermist, interested in AI, etc.) that embodies an aesthetic few people feel warmly towards on first impression, and that this leads some newcomers who I think would be a great fit for EA to bounce off the community.But as I outlined my critique, I had a scary realization: If EA adopted my critique, I’m not confident the community would be more impactful. Maybe, to counter my proposed critique, AI alignment is just the problem of our century and we need to orient ourselves toward that unwelcome reality. Seems plausible. Or maybe EA is rife with echo chambers, EA exceptionalism, and an implicit bias to see ourselves as the protagonist of a story others are blind to. Also seems plausible.And then I thought about other EA strategy takes. Doesn't a proposal like “make EA enormous” also rest on lots of often implicit assumptions? Like how well current EA infrastructure and coordination systems can adapt to a large influx of people, the extent to which “Effective Altruism” as a brand can scale relative to more cause-area-specific brands, and the plausible costs of diluting EA’s uniquely truth-seeking norms. I’m not saying we shouldn’t make EA enormous, I’m saying it seems hard to know whether to make EA enormous– or for that matter to have any strong strategy opinion.Nevertheless, I’m glad people are thinking about course corrections to the EA movement trajectory. Why? Because I doubt the existing “business as usual” trajectory is the optimal trajectory.I don’t think anyone is deliberately steering the EA movement. The Centre for Effective Altruism (CEA) does at some level with EAG(x)’s, online discussion spaces, and goals for growth levels, but ask them and they will tell you CEA is not in charge of all of EA.EA thought leaders also don’t claim to own course-correcting all of EA. While they may nudge the movement in certain directions through grants and new projects, their full-time work typically has a narrower scope.I get the sense that the EA movement as we see it today is a conglomeration of past deliberate decisions (e.g., name, rough growth rate, how to brand discussion and gathering spaces) and just natural social dynamics (e.g., grouping by inter...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA movement course corrections and where you might disagree, published by michel on October 29, 2022 on The Effective Altruism Forum.This is the final post in a series of two posts on EA movement strategy. The first post categorized ways in which EA could fail.SummaryFollowing an influx of funding, media attention, and influence, the EA movement has been speeding along an exciting, yet perilous, trajectory recently.A lot of the EA community’s future impact rests on this uncertain growth going well (and thereby avoiding movement collapse scenarios).Yet, discussions or critiques of EA’s trajectory are often not action-guiding. Even when critiques propose course corrections that are tempting to agree with (e.g., EA should be bigger!) proposed course corrections to make EA more like X often don’t rigorously engage with the downsides of being more like X, or the opportunity cost of not being like Y. Proposals to make EA more like X also often leave me with only a vague understanding of what X looks like and how we get from here to X.I hope this post and the previous write-up on ways in which EA could fail can make discussions of the EA community’s trajectory more productive (and to clarify my own thinking on the matter).This post analyzes the different domains within which EA could change its trajectory, as well as key considerations that inform those trajectory changes where reasonable people might disagree.I also share next steps to build on this post.PrefaceI was going to write another critique of EA. How original. I was going to write about how there’s an increasingly visible EA “archetype” (rationalist, longtermist, interested in AI, etc.) that embodies an aesthetic few people feel warmly towards on first impression, and that this leads some newcomers who I think would be a great fit for EA to bounce off the community.But as I outlined my critique, I had a scary realization: If EA adopted my critique, I’m not confident the community would be more impactful. Maybe, to counter my proposed critique, AI alignment is just the problem of our century and we need to orient ourselves toward that unwelcome reality. Seems plausible. Or maybe EA is rife with echo chambers, EA exceptionalism, and an implicit bias to see ourselves as the protagonist of a story others are blind to. Also seems plausible.And then I thought about other EA strategy takes. Doesn't a proposal like “make EA enormous” also rest on lots of often implicit assumptions? Like how well current EA infrastructure and coordination systems can adapt to a large influx of people, the extent to which “Effective Altruism” as a brand can scale relative to more cause-area-specific brands, and the plausible costs of diluting EA’s uniquely truth-seeking norms. I’m not saying we shouldn’t make EA enormous, I’m saying it seems hard to know whether to make EA enormous– or for that matter to have any strong strategy opinion.Nevertheless, I’m glad people are thinking about course corrections to the EA movement trajectory. Why? Because I doubt the existing “business as usual” trajectory is the optimal trajectory.I don’t think anyone is deliberately steering the EA movement. The Centre for Effective Altruism (CEA) does at some level with EAG(x)’s, online discussion spaces, and goals for growth levels, but ask them and they will tell you CEA is not in charge of all of EA.EA thought leaders also don’t claim to own course-correcting all of EA. While they may nudge the movement in certain directions through grants and new projects, their full-time work typically has a narrower scope.I get the sense that the EA movement as we see it today is a conglomeration of past deliberate decisions (e.g., name, rough growth rate, how to brand discussion and gathering spaces) and just natural social dynamics (e.g., grouping by inter...]]>
michel https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 39:58 None full 3581
R4Ee5LmEXDdwPEaAx_EA EA - Teaching EA through superhero thought experiments by Geoffrey Miller Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Teaching EA through superhero thought experiments, published by Geoffrey Miller on October 28, 2022 on The Effective Altruism Forum.Concrete, thought-provoking discussion questions can often spark more interest in new ideas (such as EA principles) than abstract arguments, moral preaching, or theoretical manifestos can. This is true in college seminars, but more generally in small social gatherings of any type.When I've taught my 'Psychology of Effective Altruism' class (syllabus here), it's sometimes hard to get ordinary college students interested in abstract Effective Altruism ideas. I teach at a large American state university that is not very selective in student admissions, so there is a wide range of cognitive abilities and curiosity levels in the college juniors and seniors who take my EA class. They sometimes struggle to follow abstract presentations of key EA concepts such as scope-sensitivity, tractability, neglectedness, charity effectiveness, utility, sentience, and longtermism.But they all love superhero movies. Whatever their religious affiliation, they're all familiar with the DC and Marvel pantheons of demi-gods. For better or worse, these superhero pantheons are at the heart of modern global entertainment culture. In a politically polarized era when people can't agree on much, and tend to stay within their partisan news media bubbles, superhero movies and TV series offer some rare common ground for thinking about issues of power, altruism, existential risk, counterfactuals, moral dilemmas, etc.So, I've found it useful to provoke class discussions with some superhero thought experiments, such as: "If you had all of Superman's superpowers for 24 hours, and you wanted to do the most good in the world during that one day, what would you do?" (For a sample of about 150 replies to this question on Twitter (from ordinary followers, not from college students), see here.)This question usually provokes immediate and spirited discussion. Almost all students are familiar with Superman's imaginary superpowers, and almost all accept the premise that Superman is a good guy with good intentions.The most frequent initial responses usually involve geopolitical vigilante justice, on the principle that the fastest way to do good is to eliminate bad guys -- through killing them, jailing them, or otherwise neutralizing them. So, many students will start off saying 'Superman should simply kill foreign leader X', where X is whoever the American news media is currently demonizing as the Global Bad Guy. However, other students will usually point out that political assassinations often create martyrs, generate adverse publicity, and provoke blowback, so the longer-term effects may be neutral or negative. This can lead to a good discussion of unintended consequences, counterfactuals, moral legitimacy, public sentiment, and the global catastrophic risks of geopolitical instability.Other students will sometimes suggest that our hypothetical EA Superman should simply do what classical Superman has done ever since the comics started in 1939 -- monitor the news, look for people in distress, and go save the individuals and small groups who can be saved. This typically leads to a good discussion of scope-sensitivity: why should Superman save a few people at a time through heroic actions, when he might be able to save many more through delivering water, food, or medicines to the needy? Or through infrastructure projects like digging canals, tunnels, and harbors, building dikes and dams, or delivering metal-rich asteroids to Earth? Students enjoy debating which kinds of interventions would be the best use of Superman's time -- and the 24-hour time limit in the question makes the opportunity costs of each intervention salient.It's also easy to nudge these discussions into epistem...]]>
Geoffrey Miller https://forum.effectivealtruism.org/posts/R4Ee5LmEXDdwPEaAx/teaching-ea-through-superhero-thought-experiments Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Teaching EA through superhero thought experiments, published by Geoffrey Miller on October 28, 2022 on The Effective Altruism Forum.Concrete, thought-provoking discussion questions can often spark more interest in new ideas (such as EA principles) than abstract arguments, moral preaching, or theoretical manifestos can. This is true in college seminars, but more generally in small social gatherings of any type.When I've taught my 'Psychology of Effective Altruism' class (syllabus here), it's sometimes hard to get ordinary college students interested in abstract Effective Altruism ideas. I teach at a large American state university that is not very selective in student admissions, so there is a wide range of cognitive abilities and curiosity levels in the college juniors and seniors who take my EA class. They sometimes struggle to follow abstract presentations of key EA concepts such as scope-sensitivity, tractability, neglectedness, charity effectiveness, utility, sentience, and longtermism.But they all love superhero movies. Whatever their religious affiliation, they're all familiar with the DC and Marvel pantheons of demi-gods. For better or worse, these superhero pantheons are at the heart of modern global entertainment culture. In a politically polarized era when people can't agree on much, and tend to stay within their partisan news media bubbles, superhero movies and TV series offer some rare common ground for thinking about issues of power, altruism, existential risk, counterfactuals, moral dilemmas, etc.So, I've found it useful to provoke class discussions with some superhero thought experiments, such as: "If you had all of Superman's superpowers for 24 hours, and you wanted to do the most good in the world during that one day, what would you do?" (For a sample of about 150 replies to this question on Twitter (from ordinary followers, not from college students), see here.)This question usually provokes immediate and spirited discussion. Almost all students are familiar with Superman's imaginary superpowers, and almost all accept the premise that Superman is a good guy with good intentions.The most frequent initial responses usually involve geopolitical vigilante justice, on the principle that the fastest way to do good is to eliminate bad guys -- through killing them, jailing them, or otherwise neutralizing them. So, many students will start off saying 'Superman should simply kill foreign leader X', where X is whoever the American news media is currently demonizing as the Global Bad Guy. However, other students will usually point out that political assassinations often create martyrs, generate adverse publicity, and provoke blowback, so the longer-term effects may be neutral or negative. This can lead to a good discussion of unintended consequences, counterfactuals, moral legitimacy, public sentiment, and the global catastrophic risks of geopolitical instability.Other students will sometimes suggest that our hypothetical EA Superman should simply do what classical Superman has done ever since the comics started in 1939 -- monitor the news, look for people in distress, and go save the individuals and small groups who can be saved. This typically leads to a good discussion of scope-sensitivity: why should Superman save a few people at a time through heroic actions, when he might be able to save many more through delivering water, food, or medicines to the needy? Or through infrastructure projects like digging canals, tunnels, and harbors, building dikes and dams, or delivering metal-rich asteroids to Earth? Students enjoy debating which kinds of interventions would be the best use of Superman's time -- and the 24-hour time limit in the question makes the opportunity costs of each intervention salient.It's also easy to nudge these discussions into epistem...]]>
Sat, 29 Oct 2022 05:34:03 +0000 EA - Teaching EA through superhero thought experiments by Geoffrey Miller Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Teaching EA through superhero thought experiments, published by Geoffrey Miller on October 28, 2022 on The Effective Altruism Forum.Concrete, thought-provoking discussion questions can often spark more interest in new ideas (such as EA principles) than abstract arguments, moral preaching, or theoretical manifestos can. This is true in college seminars, but more generally in small social gatherings of any type.When I've taught my 'Psychology of Effective Altruism' class (syllabus here), it's sometimes hard to get ordinary college students interested in abstract Effective Altruism ideas. I teach at a large American state university that is not very selective in student admissions, so there is a wide range of cognitive abilities and curiosity levels in the college juniors and seniors who take my EA class. They sometimes struggle to follow abstract presentations of key EA concepts such as scope-sensitivity, tractability, neglectedness, charity effectiveness, utility, sentience, and longtermism.But they all love superhero movies. Whatever their religious affiliation, they're all familiar with the DC and Marvel pantheons of demi-gods. For better or worse, these superhero pantheons are at the heart of modern global entertainment culture. In a politically polarized era when people can't agree on much, and tend to stay within their partisan news media bubbles, superhero movies and TV series offer some rare common ground for thinking about issues of power, altruism, existential risk, counterfactuals, moral dilemmas, etc.So, I've found it useful to provoke class discussions with some superhero thought experiments, such as: "If you had all of Superman's superpowers for 24 hours, and you wanted to do the most good in the world during that one day, what would you do?" (For a sample of about 150 replies to this question on Twitter (from ordinary followers, not from college students), see here.)This question usually provokes immediate and spirited discussion. Almost all students are familiar with Superman's imaginary superpowers, and almost all accept the premise that Superman is a good guy with good intentions.The most frequent initial responses usually involve geopolitical vigilante justice, on the principle that the fastest way to do good is to eliminate bad guys -- through killing them, jailing them, or otherwise neutralizing them. So, many students will start off saying 'Superman should simply kill foreign leader X', where X is whoever the American news media is currently demonizing as the Global Bad Guy. However, other students will usually point out that political assassinations often create martyrs, generate adverse publicity, and provoke blowback, so the longer-term effects may be neutral or negative. This can lead to a good discussion of unintended consequences, counterfactuals, moral legitimacy, public sentiment, and the global catastrophic risks of geopolitical instability.Other students will sometimes suggest that our hypothetical EA Superman should simply do what classical Superman has done ever since the comics started in 1939 -- monitor the news, look for people in distress, and go save the individuals and small groups who can be saved. This typically leads to a good discussion of scope-sensitivity: why should Superman save a few people at a time through heroic actions, when he might be able to save many more through delivering water, food, or medicines to the needy? Or through infrastructure projects like digging canals, tunnels, and harbors, building dikes and dams, or delivering metal-rich asteroids to Earth? Students enjoy debating which kinds of interventions would be the best use of Superman's time -- and the 24-hour time limit in the question makes the opportunity costs of each intervention salient.It's also easy to nudge these discussions into epistem...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Teaching EA through superhero thought experiments, published by Geoffrey Miller on October 28, 2022 on The Effective Altruism Forum.Concrete, thought-provoking discussion questions can often spark more interest in new ideas (such as EA principles) than abstract arguments, moral preaching, or theoretical manifestos can. This is true in college seminars, but more generally in small social gatherings of any type.When I've taught my 'Psychology of Effective Altruism' class (syllabus here), it's sometimes hard to get ordinary college students interested in abstract Effective Altruism ideas. I teach at a large American state university that is not very selective in student admissions, so there is a wide range of cognitive abilities and curiosity levels in the college juniors and seniors who take my EA class. They sometimes struggle to follow abstract presentations of key EA concepts such as scope-sensitivity, tractability, neglectedness, charity effectiveness, utility, sentience, and longtermism.But they all love superhero movies. Whatever their religious affiliation, they're all familiar with the DC and Marvel pantheons of demi-gods. For better or worse, these superhero pantheons are at the heart of modern global entertainment culture. In a politically polarized era when people can't agree on much, and tend to stay within their partisan news media bubbles, superhero movies and TV series offer some rare common ground for thinking about issues of power, altruism, existential risk, counterfactuals, moral dilemmas, etc.So, I've found it useful to provoke class discussions with some superhero thought experiments, such as: "If you had all of Superman's superpowers for 24 hours, and you wanted to do the most good in the world during that one day, what would you do?" (For a sample of about 150 replies to this question on Twitter (from ordinary followers, not from college students), see here.)This question usually provokes immediate and spirited discussion. Almost all students are familiar with Superman's imaginary superpowers, and almost all accept the premise that Superman is a good guy with good intentions.The most frequent initial responses usually involve geopolitical vigilante justice, on the principle that the fastest way to do good is to eliminate bad guys -- through killing them, jailing them, or otherwise neutralizing them. So, many students will start off saying 'Superman should simply kill foreign leader X', where X is whoever the American news media is currently demonizing as the Global Bad Guy. However, other students will usually point out that political assassinations often create martyrs, generate adverse publicity, and provoke blowback, so the longer-term effects may be neutral or negative. This can lead to a good discussion of unintended consequences, counterfactuals, moral legitimacy, public sentiment, and the global catastrophic risks of geopolitical instability.Other students will sometimes suggest that our hypothetical EA Superman should simply do what classical Superman has done ever since the comics started in 1939 -- monitor the news, look for people in distress, and go save the individuals and small groups who can be saved. This typically leads to a good discussion of scope-sensitivity: why should Superman save a few people at a time through heroic actions, when he might be able to save many more through delivering water, food, or medicines to the needy? Or through infrastructure projects like digging canals, tunnels, and harbors, building dikes and dams, or delivering metal-rich asteroids to Earth? Students enjoy debating which kinds of interventions would be the best use of Superman's time -- and the 24-hour time limit in the question makes the opportunity costs of each intervention salient.It's also easy to nudge these discussions into epistem...]]>
Geoffrey Miller https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 07:54 None full 3576
nhsdeCEZAaBQQaro8_EA EA - Using artificial intelligence (machine vision) to increase the effectiveness of human-wildlife conflict mitigations could benefit WAW by Rachel Norman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Using artificial intelligence (machine vision) to increase the effectiveness of human-wildlife conflict mitigations could benefit WAW, published by Rachel Norman on October 28, 2022 on The Effective Altruism Forum.1. OverviewThis report explores using artificial intelligence (AI) to increase the effectiveness of human-wildlife conflict (HWC) mitigations in order to benefit wild animal welfare (WAW). Two concrete examples are providing more funding, research and direct work into reducing fatalities due to 1) collisions between bats and wind turbines, and 2) culling crop-raiding starlings. The report aims merely to raise awareness of this topic and introduce the idea for discussion, but not yet strongly suggest it is a cost-effective intervention on par with other interventions - see uncertainties, limitations, and potential for harm.What's the problem profile?HWC is increasing due to human expansions and climate change, (Gross et al., 2021) and is starting to be considered in government strategies and policy. The expected future impact of innovative and effective solutions to HWC could be even larger than currently appreciated.Lethal control or other methods which significantly impact animal welfare are still widely used (such as culling), despite preventative non-lethal strategies growing in more recent wildlife management approaches.Currently deployed AI systems directed towards HWC could be expanded further within the next 10-20 years as they become more reliable, more effective, and cheaper. We should not assume they will prioritize WAW concerns, or be widely used for animals of WAW concern, so this should be embedded before they are potentially rolled out at scale.There are already companies working on AI solutions for specific problems involving endangered species, such as protected areas using AI assisted technology for poacher detection. There is already proof-of-concept of an NGO-backed early warning AI system, ‘WildEyes’, with this type of solution being invested in by a local governmental department in Tamil Nadu, India. Buy-in from a range of stakeholders (especially when it benefits humans and profits too) offers a way in with conservationists and researchers who may not otherwise consider WAW. Research and development (R&D) on AI-assisted HWC mitigations would likely attract researchers who would not otherwise consider or be motivated by WAW concerns.What should we be doing differently?A very tentative theory of change: if machine vision-based methods prevent HWC, they could be adopted, even on a small scale helps drop prices allowing for systems to be more widely adopted leads to more support and R&D continued price drops and adoption could create space for legislation to ban harmful or lethal methods of animal control preventing HWC could reduce apathy and antagonism towards “problem species” and make it easier for people to consider the welfare of animals, while also directly reducing negative WAW effects of HWC.This report highlights two examples of HWC where advocates could influence AI-assisted mitigation to directly affect substantial numbers of animals, and spread welfare considerations in software and norms:Wind turbine collisions are a leading anthropogenic cause of bat deaths and cause a significant number of bird deaths (600,000 to 949,000 bats and 140,000 to 679,000 birds annually in North America). We should expect fatalities to increase due to expansions in wind power.Culling of crop-raiding species. In one year, the USDA’s Wildlife Services culled 1,028,642 European starlings responsible for agricultural crop damage, because other mitigations are ineffective. Despite this, starlings still cause extensive damage each year. More effective mitigation measures would hold value and could prevent culls.There are a number of r...]]>
Rachel Norman https://forum.effectivealtruism.org/posts/nhsdeCEZAaBQQaro8/using-artificial-intelligence-machine-vision-to-increase-the-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Using artificial intelligence (machine vision) to increase the effectiveness of human-wildlife conflict mitigations could benefit WAW, published by Rachel Norman on October 28, 2022 on The Effective Altruism Forum.1. OverviewThis report explores using artificial intelligence (AI) to increase the effectiveness of human-wildlife conflict (HWC) mitigations in order to benefit wild animal welfare (WAW). Two concrete examples are providing more funding, research and direct work into reducing fatalities due to 1) collisions between bats and wind turbines, and 2) culling crop-raiding starlings. The report aims merely to raise awareness of this topic and introduce the idea for discussion, but not yet strongly suggest it is a cost-effective intervention on par with other interventions - see uncertainties, limitations, and potential for harm.What's the problem profile?HWC is increasing due to human expansions and climate change, (Gross et al., 2021) and is starting to be considered in government strategies and policy. The expected future impact of innovative and effective solutions to HWC could be even larger than currently appreciated.Lethal control or other methods which significantly impact animal welfare are still widely used (such as culling), despite preventative non-lethal strategies growing in more recent wildlife management approaches.Currently deployed AI systems directed towards HWC could be expanded further within the next 10-20 years as they become more reliable, more effective, and cheaper. We should not assume they will prioritize WAW concerns, or be widely used for animals of WAW concern, so this should be embedded before they are potentially rolled out at scale.There are already companies working on AI solutions for specific problems involving endangered species, such as protected areas using AI assisted technology for poacher detection. There is already proof-of-concept of an NGO-backed early warning AI system, ‘WildEyes’, with this type of solution being invested in by a local governmental department in Tamil Nadu, India. Buy-in from a range of stakeholders (especially when it benefits humans and profits too) offers a way in with conservationists and researchers who may not otherwise consider WAW. Research and development (R&D) on AI-assisted HWC mitigations would likely attract researchers who would not otherwise consider or be motivated by WAW concerns.What should we be doing differently?A very tentative theory of change: if machine vision-based methods prevent HWC, they could be adopted, even on a small scale helps drop prices allowing for systems to be more widely adopted leads to more support and R&D continued price drops and adoption could create space for legislation to ban harmful or lethal methods of animal control preventing HWC could reduce apathy and antagonism towards “problem species” and make it easier for people to consider the welfare of animals, while also directly reducing negative WAW effects of HWC.This report highlights two examples of HWC where advocates could influence AI-assisted mitigation to directly affect substantial numbers of animals, and spread welfare considerations in software and norms:Wind turbine collisions are a leading anthropogenic cause of bat deaths and cause a significant number of bird deaths (600,000 to 949,000 bats and 140,000 to 679,000 birds annually in North America). We should expect fatalities to increase due to expansions in wind power.Culling of crop-raiding species. In one year, the USDA’s Wildlife Services culled 1,028,642 European starlings responsible for agricultural crop damage, because other mitigations are ineffective. Despite this, starlings still cause extensive damage each year. More effective mitigation measures would hold value and could prevent culls.There are a number of r...]]>
Sat, 29 Oct 2022 04:42:30 +0000 EA - Using artificial intelligence (machine vision) to increase the effectiveness of human-wildlife conflict mitigations could benefit WAW by Rachel Norman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Using artificial intelligence (machine vision) to increase the effectiveness of human-wildlife conflict mitigations could benefit WAW, published by Rachel Norman on October 28, 2022 on The Effective Altruism Forum.1. OverviewThis report explores using artificial intelligence (AI) to increase the effectiveness of human-wildlife conflict (HWC) mitigations in order to benefit wild animal welfare (WAW). Two concrete examples are providing more funding, research and direct work into reducing fatalities due to 1) collisions between bats and wind turbines, and 2) culling crop-raiding starlings. The report aims merely to raise awareness of this topic and introduce the idea for discussion, but not yet strongly suggest it is a cost-effective intervention on par with other interventions - see uncertainties, limitations, and potential for harm.What's the problem profile?HWC is increasing due to human expansions and climate change, (Gross et al., 2021) and is starting to be considered in government strategies and policy. The expected future impact of innovative and effective solutions to HWC could be even larger than currently appreciated.Lethal control or other methods which significantly impact animal welfare are still widely used (such as culling), despite preventative non-lethal strategies growing in more recent wildlife management approaches.Currently deployed AI systems directed towards HWC could be expanded further within the next 10-20 years as they become more reliable, more effective, and cheaper. We should not assume they will prioritize WAW concerns, or be widely used for animals of WAW concern, so this should be embedded before they are potentially rolled out at scale.There are already companies working on AI solutions for specific problems involving endangered species, such as protected areas using AI assisted technology for poacher detection. There is already proof-of-concept of an NGO-backed early warning AI system, ‘WildEyes’, with this type of solution being invested in by a local governmental department in Tamil Nadu, India. Buy-in from a range of stakeholders (especially when it benefits humans and profits too) offers a way in with conservationists and researchers who may not otherwise consider WAW. Research and development (R&D) on AI-assisted HWC mitigations would likely attract researchers who would not otherwise consider or be motivated by WAW concerns.What should we be doing differently?A very tentative theory of change: if machine vision-based methods prevent HWC, they could be adopted, even on a small scale helps drop prices allowing for systems to be more widely adopted leads to more support and R&D continued price drops and adoption could create space for legislation to ban harmful or lethal methods of animal control preventing HWC could reduce apathy and antagonism towards “problem species” and make it easier for people to consider the welfare of animals, while also directly reducing negative WAW effects of HWC.This report highlights two examples of HWC where advocates could influence AI-assisted mitigation to directly affect substantial numbers of animals, and spread welfare considerations in software and norms:Wind turbine collisions are a leading anthropogenic cause of bat deaths and cause a significant number of bird deaths (600,000 to 949,000 bats and 140,000 to 679,000 birds annually in North America). We should expect fatalities to increase due to expansions in wind power.Culling of crop-raiding species. In one year, the USDA’s Wildlife Services culled 1,028,642 European starlings responsible for agricultural crop damage, because other mitigations are ineffective. Despite this, starlings still cause extensive damage each year. More effective mitigation measures would hold value and could prevent culls.There are a number of r...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Using artificial intelligence (machine vision) to increase the effectiveness of human-wildlife conflict mitigations could benefit WAW, published by Rachel Norman on October 28, 2022 on The Effective Altruism Forum.1. OverviewThis report explores using artificial intelligence (AI) to increase the effectiveness of human-wildlife conflict (HWC) mitigations in order to benefit wild animal welfare (WAW). Two concrete examples are providing more funding, research and direct work into reducing fatalities due to 1) collisions between bats and wind turbines, and 2) culling crop-raiding starlings. The report aims merely to raise awareness of this topic and introduce the idea for discussion, but not yet strongly suggest it is a cost-effective intervention on par with other interventions - see uncertainties, limitations, and potential for harm.What's the problem profile?HWC is increasing due to human expansions and climate change, (Gross et al., 2021) and is starting to be considered in government strategies and policy. The expected future impact of innovative and effective solutions to HWC could be even larger than currently appreciated.Lethal control or other methods which significantly impact animal welfare are still widely used (such as culling), despite preventative non-lethal strategies growing in more recent wildlife management approaches.Currently deployed AI systems directed towards HWC could be expanded further within the next 10-20 years as they become more reliable, more effective, and cheaper. We should not assume they will prioritize WAW concerns, or be widely used for animals of WAW concern, so this should be embedded before they are potentially rolled out at scale.There are already companies working on AI solutions for specific problems involving endangered species, such as protected areas using AI assisted technology for poacher detection. There is already proof-of-concept of an NGO-backed early warning AI system, ‘WildEyes’, with this type of solution being invested in by a local governmental department in Tamil Nadu, India. Buy-in from a range of stakeholders (especially when it benefits humans and profits too) offers a way in with conservationists and researchers who may not otherwise consider WAW. Research and development (R&D) on AI-assisted HWC mitigations would likely attract researchers who would not otherwise consider or be motivated by WAW concerns.What should we be doing differently?A very tentative theory of change: if machine vision-based methods prevent HWC, they could be adopted, even on a small scale helps drop prices allowing for systems to be more widely adopted leads to more support and R&D continued price drops and adoption could create space for legislation to ban harmful or lethal methods of animal control preventing HWC could reduce apathy and antagonism towards “problem species” and make it easier for people to consider the welfare of animals, while also directly reducing negative WAW effects of HWC.This report highlights two examples of HWC where advocates could influence AI-assisted mitigation to directly affect substantial numbers of animals, and spread welfare considerations in software and norms:Wind turbine collisions are a leading anthropogenic cause of bat deaths and cause a significant number of bird deaths (600,000 to 949,000 bats and 140,000 to 679,000 birds annually in North America). We should expect fatalities to increase due to expansions in wind power.Culling of crop-raiding species. In one year, the USDA’s Wildlife Services culled 1,028,642 European starlings responsible for agricultural crop damage, because other mitigations are ineffective. Despite this, starlings still cause extensive damage each year. More effective mitigation measures would hold value and could prevent culls.There are a number of r...]]>
Rachel Norman https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 59:27 None full 3578
vxLrFdrqRPdaHJwgs_EA EA - Join the interpretability research hackathon by Esben Kran Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Join the interpretability research hackathon, published by Esben Kran on October 28, 2022 on The Effective Altruism Forum.TLDR; Participate online or in-person in London, Aarhus, and Tallinn on the weekend 11th to 13th November in a fun and intense AI safety research hackathon focused on interpretability research. We invite mid-career professionals to join but it is open for everyone (also no-coders) and we will create starter code templates to help you kickstart your team’s projects. Join here.Below is an FAQ-style summary of what you can expect (navigate it with the table of contents on the left).What is it?The Interpretability Hackathon is a weekend-long event where you participate in teams (1-6) to create interesting and fun research. You submit a PDF report that summarizes and discusses your findings in the context of AI safety. These reports will be judged by our panel and you can win up to $1,000!It runs from 11th Nov to 13th Nov (in two weeks) and you’re welcome to join for a part of it (see further down). We get an interesting talk by an expert in the field and hear more about the topic.Everyone can participate and we encourage you to join especially if you’re considering AI safety from another career . We prepare templates for you to start out your projects and you’ll be surprised what you can accomplish in just a weekend – especially with your new-found friends!Read more about how to join, what you can expect, the schedule, and what previous participants have said about being part of the hackathon below.Where can I join?You can join the event both in-person and online but everyone needs to make an account and join the jam on the itch.io page.The in-person locations include the LEAH offices in London right by UCL, Imperial, King’s College, and London School of Economics (link); Aarhus University in Aarhus, Denmark (link), and Tallinn, Estonia (link). The virtual event space is on GatherTown (link).Everyone should join the Discord to ask questions, see updates and announcements, find team members, and more. Join here.What are some examples of interpretability projects I could make?You can check out a bunch of interesting, smaller interpretability project ideas on AI Safety Ideas such as reconstructing the input from neural activations, evaluating the alignment tax of interpretable models, or making models’ uncertainty interpretable.Other examples of practical projects can be to find new ways to visualize features in language models, such as Anthropic has been working on, distilling mechanistic interpretability research, create a demo for a much more interpretable language model, or map out how a possible interpretable AGI might look with our current lens.You can also do projects in explainability about how much humans understand why the outputs of language models look the way they do, how humans see attention visualizations, or maybe even the interpretability of humans themselves and take inspiration from the brain and neuroscience.Also check out the results from the last hackathon to see what you might accomplish during just one weekend. The judges were really quite impressed with the full reports given the time constraint! You can also read the complete projects here.InspirationRedwood Research's interpretability tools: http://interp-tools.redwoodresearch.org/The activation atlas:/The Tensorflow playground:/The Neural Network Playground (train simple neural networks in the browser):/Visualize different neural network architectures: http://alexlenail.me/NN-SVG/index.htmlDigestible researchDistill publication on visualizing neural network weightsAndrej Karpathy's "Understanding what convnets learn"Looking inside a neural netYou can also see more on the resources page.Why should I join?There’s loads of reasons to join! Here are ju...]]>
Esben Kran https://forum.effectivealtruism.org/posts/vxLrFdrqRPdaHJwgs/join-the-interpretability-research-hackathon Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Join the interpretability research hackathon, published by Esben Kran on October 28, 2022 on The Effective Altruism Forum.TLDR; Participate online or in-person in London, Aarhus, and Tallinn on the weekend 11th to 13th November in a fun and intense AI safety research hackathon focused on interpretability research. We invite mid-career professionals to join but it is open for everyone (also no-coders) and we will create starter code templates to help you kickstart your team’s projects. Join here.Below is an FAQ-style summary of what you can expect (navigate it with the table of contents on the left).What is it?The Interpretability Hackathon is a weekend-long event where you participate in teams (1-6) to create interesting and fun research. You submit a PDF report that summarizes and discusses your findings in the context of AI safety. These reports will be judged by our panel and you can win up to $1,000!It runs from 11th Nov to 13th Nov (in two weeks) and you’re welcome to join for a part of it (see further down). We get an interesting talk by an expert in the field and hear more about the topic.Everyone can participate and we encourage you to join especially if you’re considering AI safety from another career . We prepare templates for you to start out your projects and you’ll be surprised what you can accomplish in just a weekend – especially with your new-found friends!Read more about how to join, what you can expect, the schedule, and what previous participants have said about being part of the hackathon below.Where can I join?You can join the event both in-person and online but everyone needs to make an account and join the jam on the itch.io page.The in-person locations include the LEAH offices in London right by UCL, Imperial, King’s College, and London School of Economics (link); Aarhus University in Aarhus, Denmark (link), and Tallinn, Estonia (link). The virtual event space is on GatherTown (link).Everyone should join the Discord to ask questions, see updates and announcements, find team members, and more. Join here.What are some examples of interpretability projects I could make?You can check out a bunch of interesting, smaller interpretability project ideas on AI Safety Ideas such as reconstructing the input from neural activations, evaluating the alignment tax of interpretable models, or making models’ uncertainty interpretable.Other examples of practical projects can be to find new ways to visualize features in language models, such as Anthropic has been working on, distilling mechanistic interpretability research, create a demo for a much more interpretable language model, or map out how a possible interpretable AGI might look with our current lens.You can also do projects in explainability about how much humans understand why the outputs of language models look the way they do, how humans see attention visualizations, or maybe even the interpretability of humans themselves and take inspiration from the brain and neuroscience.Also check out the results from the last hackathon to see what you might accomplish during just one weekend. The judges were really quite impressed with the full reports given the time constraint! You can also read the complete projects here.InspirationRedwood Research's interpretability tools: http://interp-tools.redwoodresearch.org/The activation atlas:/The Tensorflow playground:/The Neural Network Playground (train simple neural networks in the browser):/Visualize different neural network architectures: http://alexlenail.me/NN-SVG/index.htmlDigestible researchDistill publication on visualizing neural network weightsAndrej Karpathy's "Understanding what convnets learn"Looking inside a neural netYou can also see more on the resources page.Why should I join?There’s loads of reasons to join! Here are ju...]]>
Fri, 28 Oct 2022 23:42:02 +0000 EA - Join the interpretability research hackathon by Esben Kran Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Join the interpretability research hackathon, published by Esben Kran on October 28, 2022 on The Effective Altruism Forum.TLDR; Participate online or in-person in London, Aarhus, and Tallinn on the weekend 11th to 13th November in a fun and intense AI safety research hackathon focused on interpretability research. We invite mid-career professionals to join but it is open for everyone (also no-coders) and we will create starter code templates to help you kickstart your team’s projects. Join here.Below is an FAQ-style summary of what you can expect (navigate it with the table of contents on the left).What is it?The Interpretability Hackathon is a weekend-long event where you participate in teams (1-6) to create interesting and fun research. You submit a PDF report that summarizes and discusses your findings in the context of AI safety. These reports will be judged by our panel and you can win up to $1,000!It runs from 11th Nov to 13th Nov (in two weeks) and you’re welcome to join for a part of it (see further down). We get an interesting talk by an expert in the field and hear more about the topic.Everyone can participate and we encourage you to join especially if you’re considering AI safety from another career . We prepare templates for you to start out your projects and you’ll be surprised what you can accomplish in just a weekend – especially with your new-found friends!Read more about how to join, what you can expect, the schedule, and what previous participants have said about being part of the hackathon below.Where can I join?You can join the event both in-person and online but everyone needs to make an account and join the jam on the itch.io page.The in-person locations include the LEAH offices in London right by UCL, Imperial, King’s College, and London School of Economics (link); Aarhus University in Aarhus, Denmark (link), and Tallinn, Estonia (link). The virtual event space is on GatherTown (link).Everyone should join the Discord to ask questions, see updates and announcements, find team members, and more. Join here.What are some examples of interpretability projects I could make?You can check out a bunch of interesting, smaller interpretability project ideas on AI Safety Ideas such as reconstructing the input from neural activations, evaluating the alignment tax of interpretable models, or making models’ uncertainty interpretable.Other examples of practical projects can be to find new ways to visualize features in language models, such as Anthropic has been working on, distilling mechanistic interpretability research, create a demo for a much more interpretable language model, or map out how a possible interpretable AGI might look with our current lens.You can also do projects in explainability about how much humans understand why the outputs of language models look the way they do, how humans see attention visualizations, or maybe even the interpretability of humans themselves and take inspiration from the brain and neuroscience.Also check out the results from the last hackathon to see what you might accomplish during just one weekend. The judges were really quite impressed with the full reports given the time constraint! You can also read the complete projects here.InspirationRedwood Research's interpretability tools: http://interp-tools.redwoodresearch.org/The activation atlas:/The Tensorflow playground:/The Neural Network Playground (train simple neural networks in the browser):/Visualize different neural network architectures: http://alexlenail.me/NN-SVG/index.htmlDigestible researchDistill publication on visualizing neural network weightsAndrej Karpathy's "Understanding what convnets learn"Looking inside a neural netYou can also see more on the resources page.Why should I join?There’s loads of reasons to join! Here are ju...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Join the interpretability research hackathon, published by Esben Kran on October 28, 2022 on The Effective Altruism Forum.TLDR; Participate online or in-person in London, Aarhus, and Tallinn on the weekend 11th to 13th November in a fun and intense AI safety research hackathon focused on interpretability research. We invite mid-career professionals to join but it is open for everyone (also no-coders) and we will create starter code templates to help you kickstart your team’s projects. Join here.Below is an FAQ-style summary of what you can expect (navigate it with the table of contents on the left).What is it?The Interpretability Hackathon is a weekend-long event where you participate in teams (1-6) to create interesting and fun research. You submit a PDF report that summarizes and discusses your findings in the context of AI safety. These reports will be judged by our panel and you can win up to $1,000!It runs from 11th Nov to 13th Nov (in two weeks) and you’re welcome to join for a part of it (see further down). We get an interesting talk by an expert in the field and hear more about the topic.Everyone can participate and we encourage you to join especially if you’re considering AI safety from another career . We prepare templates for you to start out your projects and you’ll be surprised what you can accomplish in just a weekend – especially with your new-found friends!Read more about how to join, what you can expect, the schedule, and what previous participants have said about being part of the hackathon below.Where can I join?You can join the event both in-person and online but everyone needs to make an account and join the jam on the itch.io page.The in-person locations include the LEAH offices in London right by UCL, Imperial, King’s College, and London School of Economics (link); Aarhus University in Aarhus, Denmark (link), and Tallinn, Estonia (link). The virtual event space is on GatherTown (link).Everyone should join the Discord to ask questions, see updates and announcements, find team members, and more. Join here.What are some examples of interpretability projects I could make?You can check out a bunch of interesting, smaller interpretability project ideas on AI Safety Ideas such as reconstructing the input from neural activations, evaluating the alignment tax of interpretable models, or making models’ uncertainty interpretable.Other examples of practical projects can be to find new ways to visualize features in language models, such as Anthropic has been working on, distilling mechanistic interpretability research, create a demo for a much more interpretable language model, or map out how a possible interpretable AGI might look with our current lens.You can also do projects in explainability about how much humans understand why the outputs of language models look the way they do, how humans see attention visualizations, or maybe even the interpretability of humans themselves and take inspiration from the brain and neuroscience.Also check out the results from the last hackathon to see what you might accomplish during just one weekend. The judges were really quite impressed with the full reports given the time constraint! You can also read the complete projects here.InspirationRedwood Research's interpretability tools: http://interp-tools.redwoodresearch.org/The activation atlas:/The Tensorflow playground:/The Neural Network Playground (train simple neural networks in the browser):/Visualize different neural network architectures: http://alexlenail.me/NN-SVG/index.htmlDigestible researchDistill publication on visualizing neural network weightsAndrej Karpathy's "Understanding what convnets learn"Looking inside a neural netYou can also see more on the resources page.Why should I join?There’s loads of reasons to join! Here are ju...]]>
Esben Kran https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:07 None full 3577
YejEjygKjqe2BA2pp_EA EA - A Potential Cheap and High Impact Way to Reduce Covid in the UK this Winter by Lawrence Newport Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Potential Cheap and High Impact Way to Reduce Covid in the UK this Winter, published by Lawrence Newport on October 28, 2022 on The Effective Altruism Forum.Brief problem summary: Under current plans, millions of adults in the UK this winter will be unable to get a covid booster, even if they are willing to pay for it, because there will be no option for them to take the vaccine in the NHS rollout, and they will not be allowed to pay privately for a bivalent booster. That means overall population immunity will be lower and will lead to more covid cases than otherwise, which will increase the strains on NHS services this winter.This is an area in which EAs might have a potentially very large impact for small investment by creating a simple website to encourage people who would be willing to pay for a booster to give their email address. That list could then be used to convince Pfizer or another company to choose to make covid booster vaccines privately available in the UK, just as they will be in the US from January 2023 and as flu vaccines currently are in the UK.The IssuePfizer's bivalent vaccine (vaccinating against both the original covid and Omicron variants) is being rolled out in the UK via the NHS. The original vaccination and initial booster programme in the UK started with the most vulnerable and frontline health workers, and expanded down risk profiles until all adults in the UK had been offered the opportunity to have a vaccine and a booster.However, the plan is more limited for Pfizer's bivalent booster vaccine. This winter the UK has opted for a policy which will see only the most vulnerable and frontline health workers being offered the vaccine. Unlike flu vaccines, where anyone can walk into a pharmacy and pay a small fee for a flu vaccine if they are not entitled to a free vaccine, that will not be possible for covid boosters. To put this in perspective:Number of people who were vaccinated in the UK:First dose: ~54MSecond dose: ~51MBooster: ~40MThose to be offered a bivalent booster:Estimated total to be offered a bivalent booster: 26MIn other words, even with 100% uptake of the booster, this leaves a potentially willing 28M adults without even the option of a booster from the NHS or privately.It is of course entirely right to start rollouts to the most vulnerable – and to expand those groups as widely as possible. The issue is that there is no option, other than travel to the US, for the majority of willing UK adults to receive a bivalent vaccine. It is clear that, all other things being equal, more population immunity is better. Waning immunity in millions of UK adults will reduce ongoing pandemic mitigation. That is particularly a problem when the winter burden of covid and flu on the NHS is expected to be large, impairing outcomes for treatment of other illness.I understand that there is no longer any shortage of vaccines; vaccine takeup in the UK has been lower than expected. Although it might be better for the NHS to provide boosters free of charge to all who want them, the Government has decided not to do that. But in any case, it would be strictly better for all concerned, including those who do not get a booster, if the number of covid cases were reduced by making it possible for people to pay a modest fee for a booster if they are not entitled to a free one.Pfizer has stated that it will be selling vaccines in the US from 2023 when the US Government stops providing them free of charge. In the UK flu vaccines are sold privately and are administered in pharmacies by trained staff members. Currently, there is no public argument or expectation that the bivalent vaccine should be available for purchase in the UK.However, there may be a solution. Based on confidential discussions with sources that I cannot name but who I believe ...]]>
Lawrence Newport https://forum.effectivealtruism.org/posts/YejEjygKjqe2BA2pp/a-potential-cheap-and-high-impact-way-to-reduce-covid-in-the Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Potential Cheap and High Impact Way to Reduce Covid in the UK this Winter, published by Lawrence Newport on October 28, 2022 on The Effective Altruism Forum.Brief problem summary: Under current plans, millions of adults in the UK this winter will be unable to get a covid booster, even if they are willing to pay for it, because there will be no option for them to take the vaccine in the NHS rollout, and they will not be allowed to pay privately for a bivalent booster. That means overall population immunity will be lower and will lead to more covid cases than otherwise, which will increase the strains on NHS services this winter.This is an area in which EAs might have a potentially very large impact for small investment by creating a simple website to encourage people who would be willing to pay for a booster to give their email address. That list could then be used to convince Pfizer or another company to choose to make covid booster vaccines privately available in the UK, just as they will be in the US from January 2023 and as flu vaccines currently are in the UK.The IssuePfizer's bivalent vaccine (vaccinating against both the original covid and Omicron variants) is being rolled out in the UK via the NHS. The original vaccination and initial booster programme in the UK started with the most vulnerable and frontline health workers, and expanded down risk profiles until all adults in the UK had been offered the opportunity to have a vaccine and a booster.However, the plan is more limited for Pfizer's bivalent booster vaccine. This winter the UK has opted for a policy which will see only the most vulnerable and frontline health workers being offered the vaccine. Unlike flu vaccines, where anyone can walk into a pharmacy and pay a small fee for a flu vaccine if they are not entitled to a free vaccine, that will not be possible for covid boosters. To put this in perspective:Number of people who were vaccinated in the UK:First dose: ~54MSecond dose: ~51MBooster: ~40MThose to be offered a bivalent booster:Estimated total to be offered a bivalent booster: 26MIn other words, even with 100% uptake of the booster, this leaves a potentially willing 28M adults without even the option of a booster from the NHS or privately.It is of course entirely right to start rollouts to the most vulnerable – and to expand those groups as widely as possible. The issue is that there is no option, other than travel to the US, for the majority of willing UK adults to receive a bivalent vaccine. It is clear that, all other things being equal, more population immunity is better. Waning immunity in millions of UK adults will reduce ongoing pandemic mitigation. That is particularly a problem when the winter burden of covid and flu on the NHS is expected to be large, impairing outcomes for treatment of other illness.I understand that there is no longer any shortage of vaccines; vaccine takeup in the UK has been lower than expected. Although it might be better for the NHS to provide boosters free of charge to all who want them, the Government has decided not to do that. But in any case, it would be strictly better for all concerned, including those who do not get a booster, if the number of covid cases were reduced by making it possible for people to pay a modest fee for a booster if they are not entitled to a free one.Pfizer has stated that it will be selling vaccines in the US from 2023 when the US Government stops providing them free of charge. In the UK flu vaccines are sold privately and are administered in pharmacies by trained staff members. Currently, there is no public argument or expectation that the bivalent vaccine should be available for purchase in the UK.However, there may be a solution. Based on confidential discussions with sources that I cannot name but who I believe ...]]>
Fri, 28 Oct 2022 17:10:59 +0000 EA - A Potential Cheap and High Impact Way to Reduce Covid in the UK this Winter by Lawrence Newport Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Potential Cheap and High Impact Way to Reduce Covid in the UK this Winter, published by Lawrence Newport on October 28, 2022 on The Effective Altruism Forum.Brief problem summary: Under current plans, millions of adults in the UK this winter will be unable to get a covid booster, even if they are willing to pay for it, because there will be no option for them to take the vaccine in the NHS rollout, and they will not be allowed to pay privately for a bivalent booster. That means overall population immunity will be lower and will lead to more covid cases than otherwise, which will increase the strains on NHS services this winter.This is an area in which EAs might have a potentially very large impact for small investment by creating a simple website to encourage people who would be willing to pay for a booster to give their email address. That list could then be used to convince Pfizer or another company to choose to make covid booster vaccines privately available in the UK, just as they will be in the US from January 2023 and as flu vaccines currently are in the UK.The IssuePfizer's bivalent vaccine (vaccinating against both the original covid and Omicron variants) is being rolled out in the UK via the NHS. The original vaccination and initial booster programme in the UK started with the most vulnerable and frontline health workers, and expanded down risk profiles until all adults in the UK had been offered the opportunity to have a vaccine and a booster.However, the plan is more limited for Pfizer's bivalent booster vaccine. This winter the UK has opted for a policy which will see only the most vulnerable and frontline health workers being offered the vaccine. Unlike flu vaccines, where anyone can walk into a pharmacy and pay a small fee for a flu vaccine if they are not entitled to a free vaccine, that will not be possible for covid boosters. To put this in perspective:Number of people who were vaccinated in the UK:First dose: ~54MSecond dose: ~51MBooster: ~40MThose to be offered a bivalent booster:Estimated total to be offered a bivalent booster: 26MIn other words, even with 100% uptake of the booster, this leaves a potentially willing 28M adults without even the option of a booster from the NHS or privately.It is of course entirely right to start rollouts to the most vulnerable – and to expand those groups as widely as possible. The issue is that there is no option, other than travel to the US, for the majority of willing UK adults to receive a bivalent vaccine. It is clear that, all other things being equal, more population immunity is better. Waning immunity in millions of UK adults will reduce ongoing pandemic mitigation. That is particularly a problem when the winter burden of covid and flu on the NHS is expected to be large, impairing outcomes for treatment of other illness.I understand that there is no longer any shortage of vaccines; vaccine takeup in the UK has been lower than expected. Although it might be better for the NHS to provide boosters free of charge to all who want them, the Government has decided not to do that. But in any case, it would be strictly better for all concerned, including those who do not get a booster, if the number of covid cases were reduced by making it possible for people to pay a modest fee for a booster if they are not entitled to a free one.Pfizer has stated that it will be selling vaccines in the US from 2023 when the US Government stops providing them free of charge. In the UK flu vaccines are sold privately and are administered in pharmacies by trained staff members. Currently, there is no public argument or expectation that the bivalent vaccine should be available for purchase in the UK.However, there may be a solution. Based on confidential discussions with sources that I cannot name but who I believe ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Potential Cheap and High Impact Way to Reduce Covid in the UK this Winter, published by Lawrence Newport on October 28, 2022 on The Effective Altruism Forum.Brief problem summary: Under current plans, millions of adults in the UK this winter will be unable to get a covid booster, even if they are willing to pay for it, because there will be no option for them to take the vaccine in the NHS rollout, and they will not be allowed to pay privately for a bivalent booster. That means overall population immunity will be lower and will lead to more covid cases than otherwise, which will increase the strains on NHS services this winter.This is an area in which EAs might have a potentially very large impact for small investment by creating a simple website to encourage people who would be willing to pay for a booster to give their email address. That list could then be used to convince Pfizer or another company to choose to make covid booster vaccines privately available in the UK, just as they will be in the US from January 2023 and as flu vaccines currently are in the UK.The IssuePfizer's bivalent vaccine (vaccinating against both the original covid and Omicron variants) is being rolled out in the UK via the NHS. The original vaccination and initial booster programme in the UK started with the most vulnerable and frontline health workers, and expanded down risk profiles until all adults in the UK had been offered the opportunity to have a vaccine and a booster.However, the plan is more limited for Pfizer's bivalent booster vaccine. This winter the UK has opted for a policy which will see only the most vulnerable and frontline health workers being offered the vaccine. Unlike flu vaccines, where anyone can walk into a pharmacy and pay a small fee for a flu vaccine if they are not entitled to a free vaccine, that will not be possible for covid boosters. To put this in perspective:Number of people who were vaccinated in the UK:First dose: ~54MSecond dose: ~51MBooster: ~40MThose to be offered a bivalent booster:Estimated total to be offered a bivalent booster: 26MIn other words, even with 100% uptake of the booster, this leaves a potentially willing 28M adults without even the option of a booster from the NHS or privately.It is of course entirely right to start rollouts to the most vulnerable – and to expand those groups as widely as possible. The issue is that there is no option, other than travel to the US, for the majority of willing UK adults to receive a bivalent vaccine. It is clear that, all other things being equal, more population immunity is better. Waning immunity in millions of UK adults will reduce ongoing pandemic mitigation. That is particularly a problem when the winter burden of covid and flu on the NHS is expected to be large, impairing outcomes for treatment of other illness.I understand that there is no longer any shortage of vaccines; vaccine takeup in the UK has been lower than expected. Although it might be better for the NHS to provide boosters free of charge to all who want them, the Government has decided not to do that. But in any case, it would be strictly better for all concerned, including those who do not get a booster, if the number of covid cases were reduced by making it possible for people to pay a modest fee for a booster if they are not entitled to a free one.Pfizer has stated that it will be selling vaccines in the US from 2023 when the US Government stops providing them free of charge. In the UK flu vaccines are sold privately and are administered in pharmacies by trained staff members. Currently, there is no public argument or expectation that the bivalent vaccine should be available for purchase in the UK.However, there may be a solution. Based on confidential discussions with sources that I cannot name but who I believe ...]]>
Lawrence Newport https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:14 None full 3573
wCt8zxGH3MqjXRQ99_EA EA - On retreats: nail the 'vibes' and venue by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On retreats: nail the 'vibes' and venue, published by Vaidehi Agarwalla on October 28, 2022 on The Effective Altruism Forum.This year I attended a few retreats with different goals and audiences (mostly for people already fairly involved in EA). This post lists some observations and lessons I took away from them about how I would want to run a retreat.I’ve probably written some suggestions with more confidence than is warranted - I have strong opinions that are weakly held. For brevity, I've not always delved into my reasoning for every specific point, but I'm happy to expand in the comments! I’ve shared early versions of this advice with a couple of people running retreats, and it seemed to be helpful.Not all of this advice will apply to any one retreat, I think the best way to read this is to take what makes sense given your goals and use those. I think most of these suggestions are useful for community retreats; and maybe ~30-60% are useful for professional retreats. I'd be really excited for more people to share their reflections on what's worked and what hasn't.Nail the 'vibes'Your goal is (probably) to help people make friends. For this reason, the vibes matter. Take time to observe how the attendees are interacting, and regularly ask the most perceptive or senior attendees how it’s going. It’s important to preserve participants’ intention, energy, and eagerness to participate proactively.Go light with the schedule & be flexible with contentCreating a schedule is always difficult, but when in doubt I advocate for cutting things (ruthlessly). I think that attendees will ultimately care more about the overall flow and atmosphere than that extra session.Have a shorter ‘work’ day of (high quality) scheduled events to leave enough time for chilling and socializing (e.g., 10am-5pm). This will give participants time to make friends and prevent them getting too tired. That being said - make sure the content you do have is excellent.Be willing and able to pivot - don’t be scared to throw out the schedule and do something else if you feel participants aren't resonating with the content you have, or someone makes a suggestion that seems good.Factor in downtime for attendees to recover from travel.Make people feel comfortable and relaxedTry to avoid people coming late: It can feel bad to miss the first day. If people must come late, batch latecomers and have someone give them an orientation and introduce them to other participants. Consider introducing latecomers to someone beforehand, so they know at least one non-organizer when they arrive.Check in with folks: Organizers can identify and periodically check on (e.g.) the 30% of participants who are most likely to feel out of place: for example, newer EAs or people who are less well-connected.Look out for each other: The German funconference had ‘awareness facilitators’ who would support you in case of physical & mental health problems or if you felt like someone crossed your boundaries, and who would keep information confidential. I thought this was a really good thing to have.Help people get to know each other before the retreat: You can share a ‘names and faces’ deck of all the attendees for people to read, and also print them and put them out in the common areas.Consider activities which allow people to seek help and be vulnerable, if it makes sense: I have found Hamming Circles during retreats to be quite helpful and a positive experience (and others who've participated say the same). However, they can be very intense and it's likely that I had a positive experience because I only opted into them when I already trusted the people around me, felt open to the experience, and when the activity was being run by someone who was experienced in running these kinds of activities.Think about ways to facilitate...]]>
Vaidehi Agarwalla https://forum.effectivealtruism.org/posts/wCt8zxGH3MqjXRQ99/on-retreats-nail-the-vibes-and-venue Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On retreats: nail the 'vibes' and venue, published by Vaidehi Agarwalla on October 28, 2022 on The Effective Altruism Forum.This year I attended a few retreats with different goals and audiences (mostly for people already fairly involved in EA). This post lists some observations and lessons I took away from them about how I would want to run a retreat.I’ve probably written some suggestions with more confidence than is warranted - I have strong opinions that are weakly held. For brevity, I've not always delved into my reasoning for every specific point, but I'm happy to expand in the comments! I’ve shared early versions of this advice with a couple of people running retreats, and it seemed to be helpful.Not all of this advice will apply to any one retreat, I think the best way to read this is to take what makes sense given your goals and use those. I think most of these suggestions are useful for community retreats; and maybe ~30-60% are useful for professional retreats. I'd be really excited for more people to share their reflections on what's worked and what hasn't.Nail the 'vibes'Your goal is (probably) to help people make friends. For this reason, the vibes matter. Take time to observe how the attendees are interacting, and regularly ask the most perceptive or senior attendees how it’s going. It’s important to preserve participants’ intention, energy, and eagerness to participate proactively.Go light with the schedule & be flexible with contentCreating a schedule is always difficult, but when in doubt I advocate for cutting things (ruthlessly). I think that attendees will ultimately care more about the overall flow and atmosphere than that extra session.Have a shorter ‘work’ day of (high quality) scheduled events to leave enough time for chilling and socializing (e.g., 10am-5pm). This will give participants time to make friends and prevent them getting too tired. That being said - make sure the content you do have is excellent.Be willing and able to pivot - don’t be scared to throw out the schedule and do something else if you feel participants aren't resonating with the content you have, or someone makes a suggestion that seems good.Factor in downtime for attendees to recover from travel.Make people feel comfortable and relaxedTry to avoid people coming late: It can feel bad to miss the first day. If people must come late, batch latecomers and have someone give them an orientation and introduce them to other participants. Consider introducing latecomers to someone beforehand, so they know at least one non-organizer when they arrive.Check in with folks: Organizers can identify and periodically check on (e.g.) the 30% of participants who are most likely to feel out of place: for example, newer EAs or people who are less well-connected.Look out for each other: The German funconference had ‘awareness facilitators’ who would support you in case of physical & mental health problems or if you felt like someone crossed your boundaries, and who would keep information confidential. I thought this was a really good thing to have.Help people get to know each other before the retreat: You can share a ‘names and faces’ deck of all the attendees for people to read, and also print them and put them out in the common areas.Consider activities which allow people to seek help and be vulnerable, if it makes sense: I have found Hamming Circles during retreats to be quite helpful and a positive experience (and others who've participated say the same). However, they can be very intense and it's likely that I had a positive experience because I only opted into them when I already trusted the people around me, felt open to the experience, and when the activity was being run by someone who was experienced in running these kinds of activities.Think about ways to facilitate...]]>
Fri, 28 Oct 2022 16:12:18 +0000 EA - On retreats: nail the 'vibes' and venue by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On retreats: nail the 'vibes' and venue, published by Vaidehi Agarwalla on October 28, 2022 on The Effective Altruism Forum.This year I attended a few retreats with different goals and audiences (mostly for people already fairly involved in EA). This post lists some observations and lessons I took away from them about how I would want to run a retreat.I’ve probably written some suggestions with more confidence than is warranted - I have strong opinions that are weakly held. For brevity, I've not always delved into my reasoning for every specific point, but I'm happy to expand in the comments! I’ve shared early versions of this advice with a couple of people running retreats, and it seemed to be helpful.Not all of this advice will apply to any one retreat, I think the best way to read this is to take what makes sense given your goals and use those. I think most of these suggestions are useful for community retreats; and maybe ~30-60% are useful for professional retreats. I'd be really excited for more people to share their reflections on what's worked and what hasn't.Nail the 'vibes'Your goal is (probably) to help people make friends. For this reason, the vibes matter. Take time to observe how the attendees are interacting, and regularly ask the most perceptive or senior attendees how it’s going. It’s important to preserve participants’ intention, energy, and eagerness to participate proactively.Go light with the schedule & be flexible with contentCreating a schedule is always difficult, but when in doubt I advocate for cutting things (ruthlessly). I think that attendees will ultimately care more about the overall flow and atmosphere than that extra session.Have a shorter ‘work’ day of (high quality) scheduled events to leave enough time for chilling and socializing (e.g., 10am-5pm). This will give participants time to make friends and prevent them getting too tired. That being said - make sure the content you do have is excellent.Be willing and able to pivot - don’t be scared to throw out the schedule and do something else if you feel participants aren't resonating with the content you have, or someone makes a suggestion that seems good.Factor in downtime for attendees to recover from travel.Make people feel comfortable and relaxedTry to avoid people coming late: It can feel bad to miss the first day. If people must come late, batch latecomers and have someone give them an orientation and introduce them to other participants. Consider introducing latecomers to someone beforehand, so they know at least one non-organizer when they arrive.Check in with folks: Organizers can identify and periodically check on (e.g.) the 30% of participants who are most likely to feel out of place: for example, newer EAs or people who are less well-connected.Look out for each other: The German funconference had ‘awareness facilitators’ who would support you in case of physical & mental health problems or if you felt like someone crossed your boundaries, and who would keep information confidential. I thought this was a really good thing to have.Help people get to know each other before the retreat: You can share a ‘names and faces’ deck of all the attendees for people to read, and also print them and put them out in the common areas.Consider activities which allow people to seek help and be vulnerable, if it makes sense: I have found Hamming Circles during retreats to be quite helpful and a positive experience (and others who've participated say the same). However, they can be very intense and it's likely that I had a positive experience because I only opted into them when I already trusted the people around me, felt open to the experience, and when the activity was being run by someone who was experienced in running these kinds of activities.Think about ways to facilitate...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On retreats: nail the 'vibes' and venue, published by Vaidehi Agarwalla on October 28, 2022 on The Effective Altruism Forum.This year I attended a few retreats with different goals and audiences (mostly for people already fairly involved in EA). This post lists some observations and lessons I took away from them about how I would want to run a retreat.I’ve probably written some suggestions with more confidence than is warranted - I have strong opinions that are weakly held. For brevity, I've not always delved into my reasoning for every specific point, but I'm happy to expand in the comments! I’ve shared early versions of this advice with a couple of people running retreats, and it seemed to be helpful.Not all of this advice will apply to any one retreat, I think the best way to read this is to take what makes sense given your goals and use those. I think most of these suggestions are useful for community retreats; and maybe ~30-60% are useful for professional retreats. I'd be really excited for more people to share their reflections on what's worked and what hasn't.Nail the 'vibes'Your goal is (probably) to help people make friends. For this reason, the vibes matter. Take time to observe how the attendees are interacting, and regularly ask the most perceptive or senior attendees how it’s going. It’s important to preserve participants’ intention, energy, and eagerness to participate proactively.Go light with the schedule & be flexible with contentCreating a schedule is always difficult, but when in doubt I advocate for cutting things (ruthlessly). I think that attendees will ultimately care more about the overall flow and atmosphere than that extra session.Have a shorter ‘work’ day of (high quality) scheduled events to leave enough time for chilling and socializing (e.g., 10am-5pm). This will give participants time to make friends and prevent them getting too tired. That being said - make sure the content you do have is excellent.Be willing and able to pivot - don’t be scared to throw out the schedule and do something else if you feel participants aren't resonating with the content you have, or someone makes a suggestion that seems good.Factor in downtime for attendees to recover from travel.Make people feel comfortable and relaxedTry to avoid people coming late: It can feel bad to miss the first day. If people must come late, batch latecomers and have someone give them an orientation and introduce them to other participants. Consider introducing latecomers to someone beforehand, so they know at least one non-organizer when they arrive.Check in with folks: Organizers can identify and periodically check on (e.g.) the 30% of participants who are most likely to feel out of place: for example, newer EAs or people who are less well-connected.Look out for each other: The German funconference had ‘awareness facilitators’ who would support you in case of physical & mental health problems or if you felt like someone crossed your boundaries, and who would keep information confidential. I thought this was a really good thing to have.Help people get to know each other before the retreat: You can share a ‘names and faces’ deck of all the attendees for people to read, and also print them and put them out in the common areas.Consider activities which allow people to seek help and be vulnerable, if it makes sense: I have found Hamming Circles during retreats to be quite helpful and a positive experience (and others who've participated say the same). However, they can be very intense and it's likely that I had a positive experience because I only opted into them when I already trusted the people around me, felt open to the experience, and when the activity was being run by someone who was experienced in running these kinds of activities.Think about ways to facilitate...]]>
Vaidehi Agarwalla https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 08:53 None full 3574
7LaJyhDYASnG2tJtz_EA EA - The African Movement-building Summit by jwpieters Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The African Movement-building Summit, published by jwpieters on October 28, 2022 on The Effective Altruism Forum.The African Movement-building Summit will be a chance to interact with people in the EA community from across the continent and work on growing the movement in Africa. You can apply here.Date and Location:Date: 1-5 December 2022Location: Devon Valley Hotel, Cape Town, South AfricaAbout this retreat:The African EA Summit will be a ~30-person retreat primarily for members of the EA community in Africa who are interested in movement-building for EA or related cause areas. We also encourage experienced community members working on projects relevant to the African context to attend.We will cover room, board, and reasonable travel expenses (incl. visa applications) for all attendees. This is possible thanks to support from CEA's community events program.Goals & Agenda:The 3-day retreat will focus on both broad EA movement-building, as well as issues specific to the African context. We hope to:Establish better coordination between African EA leaders and movement-buildersForm strong social connections amongst highly engaged EAs in AfricaHelp key international stakeholders better understand how the African EA community can be supportedThe summit will primarily focus on in-depth discussions and strengthening connections. There will also be workshops and talks run by the organisers and other attendees. We encourage suggestions for activities and discussion topics in the comments or in your application.Application:You can apply by filling out this application form by Sunday 6th November (midnight SAST). We highly recommend that applicants who need visas apply ASAP. We will try to fast-track decisions for these applicants to allow time for visa processing.Contact:I (Jordan Pieters) am the primary organiser for this event. If you have questions about the retreat, you can email me at jordanpieters+summit@gmail.com or leave a comment on this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
jwpieters https://forum.effectivealtruism.org/posts/7LaJyhDYASnG2tJtz/the-african-movement-building-summit Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The African Movement-building Summit, published by jwpieters on October 28, 2022 on The Effective Altruism Forum.The African Movement-building Summit will be a chance to interact with people in the EA community from across the continent and work on growing the movement in Africa. You can apply here.Date and Location:Date: 1-5 December 2022Location: Devon Valley Hotel, Cape Town, South AfricaAbout this retreat:The African EA Summit will be a ~30-person retreat primarily for members of the EA community in Africa who are interested in movement-building for EA or related cause areas. We also encourage experienced community members working on projects relevant to the African context to attend.We will cover room, board, and reasonable travel expenses (incl. visa applications) for all attendees. This is possible thanks to support from CEA's community events program.Goals & Agenda:The 3-day retreat will focus on both broad EA movement-building, as well as issues specific to the African context. We hope to:Establish better coordination between African EA leaders and movement-buildersForm strong social connections amongst highly engaged EAs in AfricaHelp key international stakeholders better understand how the African EA community can be supportedThe summit will primarily focus on in-depth discussions and strengthening connections. There will also be workshops and talks run by the organisers and other attendees. We encourage suggestions for activities and discussion topics in the comments or in your application.Application:You can apply by filling out this application form by Sunday 6th November (midnight SAST). We highly recommend that applicants who need visas apply ASAP. We will try to fast-track decisions for these applicants to allow time for visa processing.Contact:I (Jordan Pieters) am the primary organiser for this event. If you have questions about the retreat, you can email me at jordanpieters+summit@gmail.com or leave a comment on this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 28 Oct 2022 12:57:00 +0000 EA - The African Movement-building Summit by jwpieters Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The African Movement-building Summit, published by jwpieters on October 28, 2022 on The Effective Altruism Forum.The African Movement-building Summit will be a chance to interact with people in the EA community from across the continent and work on growing the movement in Africa. You can apply here.Date and Location:Date: 1-5 December 2022Location: Devon Valley Hotel, Cape Town, South AfricaAbout this retreat:The African EA Summit will be a ~30-person retreat primarily for members of the EA community in Africa who are interested in movement-building for EA or related cause areas. We also encourage experienced community members working on projects relevant to the African context to attend.We will cover room, board, and reasonable travel expenses (incl. visa applications) for all attendees. This is possible thanks to support from CEA's community events program.Goals & Agenda:The 3-day retreat will focus on both broad EA movement-building, as well as issues specific to the African context. We hope to:Establish better coordination between African EA leaders and movement-buildersForm strong social connections amongst highly engaged EAs in AfricaHelp key international stakeholders better understand how the African EA community can be supportedThe summit will primarily focus on in-depth discussions and strengthening connections. There will also be workshops and talks run by the organisers and other attendees. We encourage suggestions for activities and discussion topics in the comments or in your application.Application:You can apply by filling out this application form by Sunday 6th November (midnight SAST). We highly recommend that applicants who need visas apply ASAP. We will try to fast-track decisions for these applicants to allow time for visa processing.Contact:I (Jordan Pieters) am the primary organiser for this event. If you have questions about the retreat, you can email me at jordanpieters+summit@gmail.com or leave a comment on this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The African Movement-building Summit, published by jwpieters on October 28, 2022 on The Effective Altruism Forum.The African Movement-building Summit will be a chance to interact with people in the EA community from across the continent and work on growing the movement in Africa. You can apply here.Date and Location:Date: 1-5 December 2022Location: Devon Valley Hotel, Cape Town, South AfricaAbout this retreat:The African EA Summit will be a ~30-person retreat primarily for members of the EA community in Africa who are interested in movement-building for EA or related cause areas. We also encourage experienced community members working on projects relevant to the African context to attend.We will cover room, board, and reasonable travel expenses (incl. visa applications) for all attendees. This is possible thanks to support from CEA's community events program.Goals & Agenda:The 3-day retreat will focus on both broad EA movement-building, as well as issues specific to the African context. We hope to:Establish better coordination between African EA leaders and movement-buildersForm strong social connections amongst highly engaged EAs in AfricaHelp key international stakeholders better understand how the African EA community can be supportedThe summit will primarily focus on in-depth discussions and strengthening connections. There will also be workshops and talks run by the organisers and other attendees. We encourage suggestions for activities and discussion topics in the comments or in your application.Application:You can apply by filling out this application form by Sunday 6th November (midnight SAST). We highly recommend that applicants who need visas apply ASAP. We will try to fast-track decisions for these applicants to allow time for visa processing.Contact:I (Jordan Pieters) am the primary organiser for this event. If you have questions about the retreat, you can email me at jordanpieters+summit@gmail.com or leave a comment on this post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
jwpieters https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:13 None full 3572
E3nAGbeMoFnjpYawr_EA EA - GiveWell should fund an SMC replication by Seth Ariel Green Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell should fund an SMC replication, published by Seth Ariel Green on October 28, 2022 on The Effective Altruism Forum.Abstract: This essay argues that the evidence supporting GiveWell’s top cause area – Seasonal Malaria Chemoprevention, or SMC – is much weaker than it appears at first glance and would benefit from high-quality replication. Specifically, GiveWell’s assertion that every $5,000 spent on SMC saves a life is a stronger claim than the literature warrants on three grounds: 1) the effect size is small and imprecisely estimated; 2) co-interventions delivered simultaneously pose a threat to external validity; and 3) the research lacks the quality markers of the replication/credibility revolution. I conclude by arguing that any replication of SMC should meet the standards of rigor and transparency set by GiveDirectly, whose evaluations clearly demonstrate contemporary best practices in open science.1. Introduction: the evidence for Seasonal Malaria ChemopreventionGiveWell currently endorses four top charities, with first place going to the Malaria Consortium, a charity that delivers Seasonal Malaria Chemoprevention (SMC). GiveWell provides more context on its Malaria Consortium – Seasonal Malaria Chemoprevention page and its Seasonal Malaria Chemoprevention intervention report. That report is built around a Cochrane review of seven randomized controlled trials (Meremikwu et al. 2012). GiveWell discounts one of those studies (Dicko et al. 2008) for technical reasons and includes an additional trial published later (Tagbor et al. 2016) in its evidence base.No new research has been added since then, and GiveWell’s SMC report was last updated in 2018. It appears as though GiveWell treats the question of “does SMC work?” as effectively settled.I argue that GiveWell should revisit its conclusions about SMC and should fund and/or oversee a high-quality replication study on the subject. While there is very strong evidence that SMC prevents the majority of malaria episodes, “including severe episodes” (Meremikwu et al. 2012, p. 2), GiveWell’s estimate that every $5,000 of SMC saves a life in expectation is shaky on three grounds related to research quality: 1) the underlying effect size is small, relative to the sample size, and statistically imprecise; 2) SMC is often tested in places receiving other interventions, which threatens external validity because we don’t know which set of interventions bests maps onto the target population; and 3) the evidence comes from studies that are pre-credibility revolution, and therefore lack quality controls such as detailed pre-registration, open code and data, and sufficient statistical power.2. Three grounds for doubting the relationship between SMC and mortality2.1 The effect size is small and imprecisely estimatedAcross an N of 12,589, Meremikwu et al. record 10 deaths in the combined treatment groups and 16 in the combined control groups. Subtracting the one study that GiveWell discounts and including the one they supplement with, we arrive at 10 deaths for treatment and 15 for control. As the authors note, “the difference was not statistically significant” (p. 12), “and none of the trials were adequately powered to detect an effect on mortality.However, a reduction in death would be consistent with the high quality evidence of a reduction in severe malaria” (p. 4).Overall, the authors conclude, SMC “probably prevents some deaths,” but “[l]arger trials are necessary to have full confidence in this effect” (p. 4).GiveWell forthrightly acknowledges this on its SMC page, and provides reasons why it believes SMC reduces morality despite " limited evidence." This is laudably transparent, but the question is foundational to all of GiveWell's subsequent analyses of SMC. Especially given the organization's strong fundin...]]>
Seth Ariel Green https://forum.effectivealtruism.org/posts/E3nAGbeMoFnjpYawr/givewell-should-fund-an-smc-replication Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell should fund an SMC replication, published by Seth Ariel Green on October 28, 2022 on The Effective Altruism Forum.Abstract: This essay argues that the evidence supporting GiveWell’s top cause area – Seasonal Malaria Chemoprevention, or SMC – is much weaker than it appears at first glance and would benefit from high-quality replication. Specifically, GiveWell’s assertion that every $5,000 spent on SMC saves a life is a stronger claim than the literature warrants on three grounds: 1) the effect size is small and imprecisely estimated; 2) co-interventions delivered simultaneously pose a threat to external validity; and 3) the research lacks the quality markers of the replication/credibility revolution. I conclude by arguing that any replication of SMC should meet the standards of rigor and transparency set by GiveDirectly, whose evaluations clearly demonstrate contemporary best practices in open science.1. Introduction: the evidence for Seasonal Malaria ChemopreventionGiveWell currently endorses four top charities, with first place going to the Malaria Consortium, a charity that delivers Seasonal Malaria Chemoprevention (SMC). GiveWell provides more context on its Malaria Consortium – Seasonal Malaria Chemoprevention page and its Seasonal Malaria Chemoprevention intervention report. That report is built around a Cochrane review of seven randomized controlled trials (Meremikwu et al. 2012). GiveWell discounts one of those studies (Dicko et al. 2008) for technical reasons and includes an additional trial published later (Tagbor et al. 2016) in its evidence base.No new research has been added since then, and GiveWell’s SMC report was last updated in 2018. It appears as though GiveWell treats the question of “does SMC work?” as effectively settled.I argue that GiveWell should revisit its conclusions about SMC and should fund and/or oversee a high-quality replication study on the subject. While there is very strong evidence that SMC prevents the majority of malaria episodes, “including severe episodes” (Meremikwu et al. 2012, p. 2), GiveWell’s estimate that every $5,000 of SMC saves a life in expectation is shaky on three grounds related to research quality: 1) the underlying effect size is small, relative to the sample size, and statistically imprecise; 2) SMC is often tested in places receiving other interventions, which threatens external validity because we don’t know which set of interventions bests maps onto the target population; and 3) the evidence comes from studies that are pre-credibility revolution, and therefore lack quality controls such as detailed pre-registration, open code and data, and sufficient statistical power.2. Three grounds for doubting the relationship between SMC and mortality2.1 The effect size is small and imprecisely estimatedAcross an N of 12,589, Meremikwu et al. record 10 deaths in the combined treatment groups and 16 in the combined control groups. Subtracting the one study that GiveWell discounts and including the one they supplement with, we arrive at 10 deaths for treatment and 15 for control. As the authors note, “the difference was not statistically significant” (p. 12), “and none of the trials were adequately powered to detect an effect on mortality.However, a reduction in death would be consistent with the high quality evidence of a reduction in severe malaria” (p. 4).Overall, the authors conclude, SMC “probably prevents some deaths,” but “[l]arger trials are necessary to have full confidence in this effect” (p. 4).GiveWell forthrightly acknowledges this on its SMC page, and provides reasons why it believes SMC reduces morality despite " limited evidence." This is laudably transparent, but the question is foundational to all of GiveWell's subsequent analyses of SMC. Especially given the organization's strong fundin...]]>
Fri, 28 Oct 2022 12:31:20 +0000 EA - GiveWell should fund an SMC replication by Seth Ariel Green Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell should fund an SMC replication, published by Seth Ariel Green on October 28, 2022 on The Effective Altruism Forum.Abstract: This essay argues that the evidence supporting GiveWell’s top cause area – Seasonal Malaria Chemoprevention, or SMC – is much weaker than it appears at first glance and would benefit from high-quality replication. Specifically, GiveWell’s assertion that every $5,000 spent on SMC saves a life is a stronger claim than the literature warrants on three grounds: 1) the effect size is small and imprecisely estimated; 2) co-interventions delivered simultaneously pose a threat to external validity; and 3) the research lacks the quality markers of the replication/credibility revolution. I conclude by arguing that any replication of SMC should meet the standards of rigor and transparency set by GiveDirectly, whose evaluations clearly demonstrate contemporary best practices in open science.1. Introduction: the evidence for Seasonal Malaria ChemopreventionGiveWell currently endorses four top charities, with first place going to the Malaria Consortium, a charity that delivers Seasonal Malaria Chemoprevention (SMC). GiveWell provides more context on its Malaria Consortium – Seasonal Malaria Chemoprevention page and its Seasonal Malaria Chemoprevention intervention report. That report is built around a Cochrane review of seven randomized controlled trials (Meremikwu et al. 2012). GiveWell discounts one of those studies (Dicko et al. 2008) for technical reasons and includes an additional trial published later (Tagbor et al. 2016) in its evidence base.No new research has been added since then, and GiveWell’s SMC report was last updated in 2018. It appears as though GiveWell treats the question of “does SMC work?” as effectively settled.I argue that GiveWell should revisit its conclusions about SMC and should fund and/or oversee a high-quality replication study on the subject. While there is very strong evidence that SMC prevents the majority of malaria episodes, “including severe episodes” (Meremikwu et al. 2012, p. 2), GiveWell’s estimate that every $5,000 of SMC saves a life in expectation is shaky on three grounds related to research quality: 1) the underlying effect size is small, relative to the sample size, and statistically imprecise; 2) SMC is often tested in places receiving other interventions, which threatens external validity because we don’t know which set of interventions bests maps onto the target population; and 3) the evidence comes from studies that are pre-credibility revolution, and therefore lack quality controls such as detailed pre-registration, open code and data, and sufficient statistical power.2. Three grounds for doubting the relationship between SMC and mortality2.1 The effect size is small and imprecisely estimatedAcross an N of 12,589, Meremikwu et al. record 10 deaths in the combined treatment groups and 16 in the combined control groups. Subtracting the one study that GiveWell discounts and including the one they supplement with, we arrive at 10 deaths for treatment and 15 for control. As the authors note, “the difference was not statistically significant” (p. 12), “and none of the trials were adequately powered to detect an effect on mortality.However, a reduction in death would be consistent with the high quality evidence of a reduction in severe malaria” (p. 4).Overall, the authors conclude, SMC “probably prevents some deaths,” but “[l]arger trials are necessary to have full confidence in this effect” (p. 4).GiveWell forthrightly acknowledges this on its SMC page, and provides reasons why it believes SMC reduces morality despite " limited evidence." This is laudably transparent, but the question is foundational to all of GiveWell's subsequent analyses of SMC. Especially given the organization's strong fundin...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell should fund an SMC replication, published by Seth Ariel Green on October 28, 2022 on The Effective Altruism Forum.Abstract: This essay argues that the evidence supporting GiveWell’s top cause area – Seasonal Malaria Chemoprevention, or SMC – is much weaker than it appears at first glance and would benefit from high-quality replication. Specifically, GiveWell’s assertion that every $5,000 spent on SMC saves a life is a stronger claim than the literature warrants on three grounds: 1) the effect size is small and imprecisely estimated; 2) co-interventions delivered simultaneously pose a threat to external validity; and 3) the research lacks the quality markers of the replication/credibility revolution. I conclude by arguing that any replication of SMC should meet the standards of rigor and transparency set by GiveDirectly, whose evaluations clearly demonstrate contemporary best practices in open science.1. Introduction: the evidence for Seasonal Malaria ChemopreventionGiveWell currently endorses four top charities, with first place going to the Malaria Consortium, a charity that delivers Seasonal Malaria Chemoprevention (SMC). GiveWell provides more context on its Malaria Consortium – Seasonal Malaria Chemoprevention page and its Seasonal Malaria Chemoprevention intervention report. That report is built around a Cochrane review of seven randomized controlled trials (Meremikwu et al. 2012). GiveWell discounts one of those studies (Dicko et al. 2008) for technical reasons and includes an additional trial published later (Tagbor et al. 2016) in its evidence base.No new research has been added since then, and GiveWell’s SMC report was last updated in 2018. It appears as though GiveWell treats the question of “does SMC work?” as effectively settled.I argue that GiveWell should revisit its conclusions about SMC and should fund and/or oversee a high-quality replication study on the subject. While there is very strong evidence that SMC prevents the majority of malaria episodes, “including severe episodes” (Meremikwu et al. 2012, p. 2), GiveWell’s estimate that every $5,000 of SMC saves a life in expectation is shaky on three grounds related to research quality: 1) the underlying effect size is small, relative to the sample size, and statistically imprecise; 2) SMC is often tested in places receiving other interventions, which threatens external validity because we don’t know which set of interventions bests maps onto the target population; and 3) the evidence comes from studies that are pre-credibility revolution, and therefore lack quality controls such as detailed pre-registration, open code and data, and sufficient statistical power.2. Three grounds for doubting the relationship between SMC and mortality2.1 The effect size is small and imprecisely estimatedAcross an N of 12,589, Meremikwu et al. record 10 deaths in the combined treatment groups and 16 in the combined control groups. Subtracting the one study that GiveWell discounts and including the one they supplement with, we arrive at 10 deaths for treatment and 15 for control. As the authors note, “the difference was not statistically significant” (p. 12), “and none of the trials were adequately powered to detect an effect on mortality.However, a reduction in death would be consistent with the high quality evidence of a reduction in severe malaria” (p. 4).Overall, the authors conclude, SMC “probably prevents some deaths,” but “[l]arger trials are necessary to have full confidence in this effect” (p. 4).GiveWell forthrightly acknowledges this on its SMC page, and provides reasons why it believes SMC reduces morality despite " limited evidence." This is laudably transparent, but the question is foundational to all of GiveWell's subsequent analyses of SMC. Especially given the organization's strong fundin...]]>
Seth Ariel Green https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 09:15 None full 3563
NRa7ndwZ6kmtJXKqx_EA EA - EA-Aligned Political Activity in a US Congressional Primary: Concerns and Proposed Changes by Carolina EA Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA-Aligned Political Activity in a US Congressional Primary: Concerns and Proposed Changes, published by Carolina EA on October 28, 2022 on The Effective Altruism Forum.SummaryThis post shares my observations of the approach of the EA-aligned political action committee Protect our Future (POF) in a US Congressional Primary in North Carolina. Writing anonymously, I share my experience talking informally with roughly 20 individuals and observing the race through the media and as a citizen. I express concerns with POF's limited explanation of the rationale for their spending in this race, their limited explanation of why they selected their endorsed candidate, and advertisements that did not address their stated rationale for supporting the candidate. I explain my concern that the association of EA with large scale political spending and limited transparency could have harmful reputational effects. (This post is meant to provide constructive comments, and it should not be taken as a critique of POF's intentions or broader impact.)Finally, I propose a few changes that POF or other groups could make moving forward to mitigate possible harm, such as publishing the criteria used to evaluate candidates, providing a specific rationale for each endorsed candidate, and providing additional transparency on funding sources.IntroductionI am writing this post to share my firsthand experience observing the Democratic primary in the 4th Congressional District of North Carolina this past spring. The political action committee (PAC) Protect our Future (POF), funded predominantly by prominent EA Sam Bankman-Fried, spent heavily (roughly $1 million) in this race to support the successful nomination of current NC State Senator Valerie Foushee. The 4th District is overwhelmingly Democratic, and Foushee has >99% odds of winning the seat in the general election on November 8th, according to FiveThirtyEight’s forecast as of October 26th.I decided to write this post, my first on the EA Forum, in order to share concerns that came out of my observation of this race. Given the highly anecdotal nature of my experiences, and the success of Foushee’s campaign so far, these critiques should be viewed with some skepticism. Nonetheless, as EAs become a larger force in politics, I hope that these concerns and the accompanying suggestions could be useful for EAs engaging in large scale political spending.I have no prior connection with or relationship to Sam Bankman-Fried or Protect our Future. I hope that these critiques and suggestions will be received as good faith concerns about specific practices, rather than any sort of personal attack or criticism of POF’s goals, intentions, or broader impact.Personal backgroundI work in state government/politics in North Carolina. I’m worried that writing this post with my name attached to it could negatively impact my work, so I’ve decided to stay anonymous. I did not work or volunteer for any of the candidates in this race, though I am part of a group that endorsed one of the candidates (Nida Allam). I had no role in that endorsement. I was uncertain about who I'd vote for until a few weeks before the election, but I did end up voting for Nida Allam. Those factors could certainly influence how I view this situation.My experience of the electionThe concerns expressed in this post stem from conversations I had with political professionals and politically engaged but nonprofessional individuals in the run up to and the aftermath of this primary election. I also read a variety of local and national news articles that examined the role of money in the election, including the financial support from POF. In total, I talked casually with roughly 10 political professionals and 10 nonprofessionals about these topics. I did not necessarily discuss all aspects o...]]>
Carolina EA https://forum.effectivealtruism.org/posts/NRa7ndwZ6kmtJXKqx/ea-aligned-political-activity-in-a-us-congressional-primary Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA-Aligned Political Activity in a US Congressional Primary: Concerns and Proposed Changes, published by Carolina EA on October 28, 2022 on The Effective Altruism Forum.SummaryThis post shares my observations of the approach of the EA-aligned political action committee Protect our Future (POF) in a US Congressional Primary in North Carolina. Writing anonymously, I share my experience talking informally with roughly 20 individuals and observing the race through the media and as a citizen. I express concerns with POF's limited explanation of the rationale for their spending in this race, their limited explanation of why they selected their endorsed candidate, and advertisements that did not address their stated rationale for supporting the candidate. I explain my concern that the association of EA with large scale political spending and limited transparency could have harmful reputational effects. (This post is meant to provide constructive comments, and it should not be taken as a critique of POF's intentions or broader impact.)Finally, I propose a few changes that POF or other groups could make moving forward to mitigate possible harm, such as publishing the criteria used to evaluate candidates, providing a specific rationale for each endorsed candidate, and providing additional transparency on funding sources.IntroductionI am writing this post to share my firsthand experience observing the Democratic primary in the 4th Congressional District of North Carolina this past spring. The political action committee (PAC) Protect our Future (POF), funded predominantly by prominent EA Sam Bankman-Fried, spent heavily (roughly $1 million) in this race to support the successful nomination of current NC State Senator Valerie Foushee. The 4th District is overwhelmingly Democratic, and Foushee has >99% odds of winning the seat in the general election on November 8th, according to FiveThirtyEight’s forecast as of October 26th.I decided to write this post, my first on the EA Forum, in order to share concerns that came out of my observation of this race. Given the highly anecdotal nature of my experiences, and the success of Foushee’s campaign so far, these critiques should be viewed with some skepticism. Nonetheless, as EAs become a larger force in politics, I hope that these concerns and the accompanying suggestions could be useful for EAs engaging in large scale political spending.I have no prior connection with or relationship to Sam Bankman-Fried or Protect our Future. I hope that these critiques and suggestions will be received as good faith concerns about specific practices, rather than any sort of personal attack or criticism of POF’s goals, intentions, or broader impact.Personal backgroundI work in state government/politics in North Carolina. I’m worried that writing this post with my name attached to it could negatively impact my work, so I’ve decided to stay anonymous. I did not work or volunteer for any of the candidates in this race, though I am part of a group that endorsed one of the candidates (Nida Allam). I had no role in that endorsement. I was uncertain about who I'd vote for until a few weeks before the election, but I did end up voting for Nida Allam. Those factors could certainly influence how I view this situation.My experience of the electionThe concerns expressed in this post stem from conversations I had with political professionals and politically engaged but nonprofessional individuals in the run up to and the aftermath of this primary election. I also read a variety of local and national news articles that examined the role of money in the election, including the financial support from POF. In total, I talked casually with roughly 10 political professionals and 10 nonprofessionals about these topics. I did not necessarily discuss all aspects o...]]>
Fri, 28 Oct 2022 11:05:18 +0000 EA - EA-Aligned Political Activity in a US Congressional Primary: Concerns and Proposed Changes by Carolina EA Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA-Aligned Political Activity in a US Congressional Primary: Concerns and Proposed Changes, published by Carolina EA on October 28, 2022 on The Effective Altruism Forum.SummaryThis post shares my observations of the approach of the EA-aligned political action committee Protect our Future (POF) in a US Congressional Primary in North Carolina. Writing anonymously, I share my experience talking informally with roughly 20 individuals and observing the race through the media and as a citizen. I express concerns with POF's limited explanation of the rationale for their spending in this race, their limited explanation of why they selected their endorsed candidate, and advertisements that did not address their stated rationale for supporting the candidate. I explain my concern that the association of EA with large scale political spending and limited transparency could have harmful reputational effects. (This post is meant to provide constructive comments, and it should not be taken as a critique of POF's intentions or broader impact.)Finally, I propose a few changes that POF or other groups could make moving forward to mitigate possible harm, such as publishing the criteria used to evaluate candidates, providing a specific rationale for each endorsed candidate, and providing additional transparency on funding sources.IntroductionI am writing this post to share my firsthand experience observing the Democratic primary in the 4th Congressional District of North Carolina this past spring. The political action committee (PAC) Protect our Future (POF), funded predominantly by prominent EA Sam Bankman-Fried, spent heavily (roughly $1 million) in this race to support the successful nomination of current NC State Senator Valerie Foushee. The 4th District is overwhelmingly Democratic, and Foushee has >99% odds of winning the seat in the general election on November 8th, according to FiveThirtyEight’s forecast as of October 26th.I decided to write this post, my first on the EA Forum, in order to share concerns that came out of my observation of this race. Given the highly anecdotal nature of my experiences, and the success of Foushee’s campaign so far, these critiques should be viewed with some skepticism. Nonetheless, as EAs become a larger force in politics, I hope that these concerns and the accompanying suggestions could be useful for EAs engaging in large scale political spending.I have no prior connection with or relationship to Sam Bankman-Fried or Protect our Future. I hope that these critiques and suggestions will be received as good faith concerns about specific practices, rather than any sort of personal attack or criticism of POF’s goals, intentions, or broader impact.Personal backgroundI work in state government/politics in North Carolina. I’m worried that writing this post with my name attached to it could negatively impact my work, so I’ve decided to stay anonymous. I did not work or volunteer for any of the candidates in this race, though I am part of a group that endorsed one of the candidates (Nida Allam). I had no role in that endorsement. I was uncertain about who I'd vote for until a few weeks before the election, but I did end up voting for Nida Allam. Those factors could certainly influence how I view this situation.My experience of the electionThe concerns expressed in this post stem from conversations I had with political professionals and politically engaged but nonprofessional individuals in the run up to and the aftermath of this primary election. I also read a variety of local and national news articles that examined the role of money in the election, including the financial support from POF. In total, I talked casually with roughly 10 political professionals and 10 nonprofessionals about these topics. I did not necessarily discuss all aspects o...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA-Aligned Political Activity in a US Congressional Primary: Concerns and Proposed Changes, published by Carolina EA on October 28, 2022 on The Effective Altruism Forum.SummaryThis post shares my observations of the approach of the EA-aligned political action committee Protect our Future (POF) in a US Congressional Primary in North Carolina. Writing anonymously, I share my experience talking informally with roughly 20 individuals and observing the race through the media and as a citizen. I express concerns with POF's limited explanation of the rationale for their spending in this race, their limited explanation of why they selected their endorsed candidate, and advertisements that did not address their stated rationale for supporting the candidate. I explain my concern that the association of EA with large scale political spending and limited transparency could have harmful reputational effects. (This post is meant to provide constructive comments, and it should not be taken as a critique of POF's intentions or broader impact.)Finally, I propose a few changes that POF or other groups could make moving forward to mitigate possible harm, such as publishing the criteria used to evaluate candidates, providing a specific rationale for each endorsed candidate, and providing additional transparency on funding sources.IntroductionI am writing this post to share my firsthand experience observing the Democratic primary in the 4th Congressional District of North Carolina this past spring. The political action committee (PAC) Protect our Future (POF), funded predominantly by prominent EA Sam Bankman-Fried, spent heavily (roughly $1 million) in this race to support the successful nomination of current NC State Senator Valerie Foushee. The 4th District is overwhelmingly Democratic, and Foushee has >99% odds of winning the seat in the general election on November 8th, according to FiveThirtyEight’s forecast as of October 26th.I decided to write this post, my first on the EA Forum, in order to share concerns that came out of my observation of this race. Given the highly anecdotal nature of my experiences, and the success of Foushee’s campaign so far, these critiques should be viewed with some skepticism. Nonetheless, as EAs become a larger force in politics, I hope that these concerns and the accompanying suggestions could be useful for EAs engaging in large scale political spending.I have no prior connection with or relationship to Sam Bankman-Fried or Protect our Future. I hope that these critiques and suggestions will be received as good faith concerns about specific practices, rather than any sort of personal attack or criticism of POF’s goals, intentions, or broader impact.Personal backgroundI work in state government/politics in North Carolina. I’m worried that writing this post with my name attached to it could negatively impact my work, so I’ve decided to stay anonymous. I did not work or volunteer for any of the candidates in this race, though I am part of a group that endorsed one of the candidates (Nida Allam). I had no role in that endorsement. I was uncertain about who I'd vote for until a few weeks before the election, but I did end up voting for Nida Allam. Those factors could certainly influence how I view this situation.My experience of the electionThe concerns expressed in this post stem from conversations I had with political professionals and politically engaged but nonprofessional individuals in the run up to and the aftermath of this primary election. I also read a variety of local and national news articles that examined the role of money in the election, including the financial support from POF. In total, I talked casually with roughly 10 political professionals and 10 nonprofessionals about these topics. I did not necessarily discuss all aspects o...]]>
Carolina EA https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 13:08 None full 3575
jo7hmLrhy576zEyiL_EA EA - Prizes for ML Safety Benchmark Ideas by Joshc Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prizes for ML Safety Benchmark Ideas, published by Joshc on October 28, 2022 on The Effective Altruism Forum.“If you cannot measure it, you cannot improve it.” – Lord Kelvin (paraphrased)Website: benchmarking.mlsafety.org – receiving submissions until August 2023.ML Safety lacks good benchmarks, so the Center for AI Safety is offering $50,000 - $100,000 prizes for benchmark ideas (or full research papers). We will award at least $100,000 total and up to $500,000 depending on the quality of submissions.What kinds of ideas are you looking for?Ultimately, we will are looking for benchmark ideas that motivate or advance research that reduces existential risks from AI. To provide more guidance, we’ve outlined four research categories along with example ideas.Alignment: building models that represent and safely optimize difficult-to-specify human values.Monitoring: discovering unintended model functionality.Robustness: designing systems to be reliable in the face of adversaries and highly unusual situations.Safety Applications: using ML to address broader risks related to how ML systems are handled (e.g. for cybersecurity or forecasting).See Open Problems in AI X-Risk [PAIS #5] for example research directions in these categories and their relation to existential risk.What are the requirements for submissions?Datasets or implementations are not necessary, though empirical testing can make it easier for the judges to evaluate your idea. All that is required is a brief write-up (guidelines here). How the write-up is formatted isn’t very important as long as it effectively pitches the benchmark and concretely explains how it would be implemented. If you don’t have prior experience designing benchmarks, we recommend reading this document for generic tips.Who are the judges?Dan Hendrycks, Paul Christiano, and Collin Burns.If you have questions, they might be answered on the website, or you can post them here. We would also greatly appreciate it if you helped to spread the word about this opportunity.Thanks to Sidney Hough and Kevin Liu for helping to make this happen and to Collin Burns and Akash Wasil for feedback on the website. This project is supported by the Future Fund regranting program.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joshc https://forum.effectivealtruism.org/posts/jo7hmLrhy576zEyiL/prizes-for-ml-safety-benchmark-ideas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prizes for ML Safety Benchmark Ideas, published by Joshc on October 28, 2022 on The Effective Altruism Forum.“If you cannot measure it, you cannot improve it.” – Lord Kelvin (paraphrased)Website: benchmarking.mlsafety.org – receiving submissions until August 2023.ML Safety lacks good benchmarks, so the Center for AI Safety is offering $50,000 - $100,000 prizes for benchmark ideas (or full research papers). We will award at least $100,000 total and up to $500,000 depending on the quality of submissions.What kinds of ideas are you looking for?Ultimately, we will are looking for benchmark ideas that motivate or advance research that reduces existential risks from AI. To provide more guidance, we’ve outlined four research categories along with example ideas.Alignment: building models that represent and safely optimize difficult-to-specify human values.Monitoring: discovering unintended model functionality.Robustness: designing systems to be reliable in the face of adversaries and highly unusual situations.Safety Applications: using ML to address broader risks related to how ML systems are handled (e.g. for cybersecurity or forecasting).See Open Problems in AI X-Risk [PAIS #5] for example research directions in these categories and their relation to existential risk.What are the requirements for submissions?Datasets or implementations are not necessary, though empirical testing can make it easier for the judges to evaluate your idea. All that is required is a brief write-up (guidelines here). How the write-up is formatted isn’t very important as long as it effectively pitches the benchmark and concretely explains how it would be implemented. If you don’t have prior experience designing benchmarks, we recommend reading this document for generic tips.Who are the judges?Dan Hendrycks, Paul Christiano, and Collin Burns.If you have questions, they might be answered on the website, or you can post them here. We would also greatly appreciate it if you helped to spread the word about this opportunity.Thanks to Sidney Hough and Kevin Liu for helping to make this happen and to Collin Burns and Akash Wasil for feedback on the website. This project is supported by the Future Fund regranting program.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Fri, 28 Oct 2022 08:26:41 +0000 EA - Prizes for ML Safety Benchmark Ideas by Joshc Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prizes for ML Safety Benchmark Ideas, published by Joshc on October 28, 2022 on The Effective Altruism Forum.“If you cannot measure it, you cannot improve it.” – Lord Kelvin (paraphrased)Website: benchmarking.mlsafety.org – receiving submissions until August 2023.ML Safety lacks good benchmarks, so the Center for AI Safety is offering $50,000 - $100,000 prizes for benchmark ideas (or full research papers). We will award at least $100,000 total and up to $500,000 depending on the quality of submissions.What kinds of ideas are you looking for?Ultimately, we will are looking for benchmark ideas that motivate or advance research that reduces existential risks from AI. To provide more guidance, we’ve outlined four research categories along with example ideas.Alignment: building models that represent and safely optimize difficult-to-specify human values.Monitoring: discovering unintended model functionality.Robustness: designing systems to be reliable in the face of adversaries and highly unusual situations.Safety Applications: using ML to address broader risks related to how ML systems are handled (e.g. for cybersecurity or forecasting).See Open Problems in AI X-Risk [PAIS #5] for example research directions in these categories and their relation to existential risk.What are the requirements for submissions?Datasets or implementations are not necessary, though empirical testing can make it easier for the judges to evaluate your idea. All that is required is a brief write-up (guidelines here). How the write-up is formatted isn’t very important as long as it effectively pitches the benchmark and concretely explains how it would be implemented. If you don’t have prior experience designing benchmarks, we recommend reading this document for generic tips.Who are the judges?Dan Hendrycks, Paul Christiano, and Collin Burns.If you have questions, they might be answered on the website, or you can post them here. We would also greatly appreciate it if you helped to spread the word about this opportunity.Thanks to Sidney Hough and Kevin Liu for helping to make this happen and to Collin Burns and Akash Wasil for feedback on the website. This project is supported by the Future Fund regranting program.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prizes for ML Safety Benchmark Ideas, published by Joshc on October 28, 2022 on The Effective Altruism Forum.“If you cannot measure it, you cannot improve it.” – Lord Kelvin (paraphrased)Website: benchmarking.mlsafety.org – receiving submissions until August 2023.ML Safety lacks good benchmarks, so the Center for AI Safety is offering $50,000 - $100,000 prizes for benchmark ideas (or full research papers). We will award at least $100,000 total and up to $500,000 depending on the quality of submissions.What kinds of ideas are you looking for?Ultimately, we will are looking for benchmark ideas that motivate or advance research that reduces existential risks from AI. To provide more guidance, we’ve outlined four research categories along with example ideas.Alignment: building models that represent and safely optimize difficult-to-specify human values.Monitoring: discovering unintended model functionality.Robustness: designing systems to be reliable in the face of adversaries and highly unusual situations.Safety Applications: using ML to address broader risks related to how ML systems are handled (e.g. for cybersecurity or forecasting).See Open Problems in AI X-Risk [PAIS #5] for example research directions in these categories and their relation to existential risk.What are the requirements for submissions?Datasets or implementations are not necessary, though empirical testing can make it easier for the judges to evaluate your idea. All that is required is a brief write-up (guidelines here). How the write-up is formatted isn’t very important as long as it effectively pitches the benchmark and concretely explains how it would be implemented. If you don’t have prior experience designing benchmarks, we recommend reading this document for generic tips.Who are the judges?Dan Hendrycks, Paul Christiano, and Collin Burns.If you have questions, they might be answered on the website, or you can post them here. We would also greatly appreciate it if you helped to spread the word about this opportunity.Thanks to Sidney Hough and Kevin Liu for helping to make this happen and to Collin Burns and Akash Wasil for feedback on the website. This project is supported by the Future Fund regranting program.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joshc https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:26 None full 3569
ivGJPv6fKCLWGgcgy_EA EA - New tool for exploring EA Forum and LessWrong - Tree of Tags by Filip Sondej Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New tool for exploring EA Forum and LessWrong - Tree of Tags, published by Filip Sondej on October 27, 2022 on The Effective Altruism Forum.Explore the hidden treasures of the forumsWith this tool you can zoom in on your favorite topics from EA Forum, LessWrong, and Alignment Forum. Or you can just wander around and see what you find.You start by seeing the whole forum split into two main topics. Choose the one that you like more, and that topic will be split again into two subtopics. Choose a subtopic, and again, you will see it split into two subsubtopics (and so on).It's like climbing a tree, where you start at the trunk and then choose which branches to go into as you go higher.Choose a forum to climb:EA ForumLessWrongAlignment ForumSome tips:it's easier to choose a branch by looking at tags rather than postson mobile, horizontal view is more practicalFeaturesEach topic has a unique and unchanging URL. So if you find a place you like, just bookmark it! The posts inside will be updated, but the theme stays the same.The bar at the right of each post is the reading time indicator. Full bar means 30 min, half bar means 15 min, and so on.At the top right, you can choose how to rank the posts:hot - new and upvoted - the default forum rankingtop - most upvotesalive - has recent commentsmeritocratic - votes from high karma users are exaggeratedregular - default scoringdemocratic - posts are scored as if everyone has the same voting powerMy hopesThe amount of content is overwhelming. My problem is not that there's nothing good to read, but that there is so much to plow through to find what's best for me.Also, it's sad to see great posts receive so little attention, the moment after they are pushed off the frontpage. But they are still valuable, and for each forgotten post, there is someone who should read it. So I want to make the right topics find their way to the right people.Let's make the forums evergreen!Tag similarityFor each tag, you can also see what tags are most related to it:Explore EA Forum tagsExplore LessWrong tagsExplore Alignment Forum tagsTip: you can go directly to some tag, by finding it's URL on the forum, and modifying this site's URL.For example, to go to "Consciousness research", you find go toEA Forum:LessWrong:Alignment Forum:How does it work?Here I explain all the details.Cross-posted to LessWrong.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Filip Sondej https://forum.effectivealtruism.org/posts/ivGJPv6fKCLWGgcgy/new-tool-for-exploring-ea-forum-and-lesswrong-tree-of-tags Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New tool for exploring EA Forum and LessWrong - Tree of Tags, published by Filip Sondej on October 27, 2022 on The Effective Altruism Forum.Explore the hidden treasures of the forumsWith this tool you can zoom in on your favorite topics from EA Forum, LessWrong, and Alignment Forum. Or you can just wander around and see what you find.You start by seeing the whole forum split into two main topics. Choose the one that you like more, and that topic will be split again into two subtopics. Choose a subtopic, and again, you will see it split into two subsubtopics (and so on).It's like climbing a tree, where you start at the trunk and then choose which branches to go into as you go higher.Choose a forum to climb:EA ForumLessWrongAlignment ForumSome tips:it's easier to choose a branch by looking at tags rather than postson mobile, horizontal view is more practicalFeaturesEach topic has a unique and unchanging URL. So if you find a place you like, just bookmark it! The posts inside will be updated, but the theme stays the same.The bar at the right of each post is the reading time indicator. Full bar means 30 min, half bar means 15 min, and so on.At the top right, you can choose how to rank the posts:hot - new and upvoted - the default forum rankingtop - most upvotesalive - has recent commentsmeritocratic - votes from high karma users are exaggeratedregular - default scoringdemocratic - posts are scored as if everyone has the same voting powerMy hopesThe amount of content is overwhelming. My problem is not that there's nothing good to read, but that there is so much to plow through to find what's best for me.Also, it's sad to see great posts receive so little attention, the moment after they are pushed off the frontpage. But they are still valuable, and for each forgotten post, there is someone who should read it. So I want to make the right topics find their way to the right people.Let's make the forums evergreen!Tag similarityFor each tag, you can also see what tags are most related to it:Explore EA Forum tagsExplore LessWrong tagsExplore Alignment Forum tagsTip: you can go directly to some tag, by finding it's URL on the forum, and modifying this site's URL.For example, to go to "Consciousness research", you find go toEA Forum:LessWrong:Alignment Forum:How does it work?Here I explain all the details.Cross-posted to LessWrong.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 27 Oct 2022 22:50:20 +0000 EA - New tool for exploring EA Forum and LessWrong - Tree of Tags by Filip Sondej Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New tool for exploring EA Forum and LessWrong - Tree of Tags, published by Filip Sondej on October 27, 2022 on The Effective Altruism Forum.Explore the hidden treasures of the forumsWith this tool you can zoom in on your favorite topics from EA Forum, LessWrong, and Alignment Forum. Or you can just wander around and see what you find.You start by seeing the whole forum split into two main topics. Choose the one that you like more, and that topic will be split again into two subtopics. Choose a subtopic, and again, you will see it split into two subsubtopics (and so on).It's like climbing a tree, where you start at the trunk and then choose which branches to go into as you go higher.Choose a forum to climb:EA ForumLessWrongAlignment ForumSome tips:it's easier to choose a branch by looking at tags rather than postson mobile, horizontal view is more practicalFeaturesEach topic has a unique and unchanging URL. So if you find a place you like, just bookmark it! The posts inside will be updated, but the theme stays the same.The bar at the right of each post is the reading time indicator. Full bar means 30 min, half bar means 15 min, and so on.At the top right, you can choose how to rank the posts:hot - new and upvoted - the default forum rankingtop - most upvotesalive - has recent commentsmeritocratic - votes from high karma users are exaggeratedregular - default scoringdemocratic - posts are scored as if everyone has the same voting powerMy hopesThe amount of content is overwhelming. My problem is not that there's nothing good to read, but that there is so much to plow through to find what's best for me.Also, it's sad to see great posts receive so little attention, the moment after they are pushed off the frontpage. But they are still valuable, and for each forgotten post, there is someone who should read it. So I want to make the right topics find their way to the right people.Let's make the forums evergreen!Tag similarityFor each tag, you can also see what tags are most related to it:Explore EA Forum tagsExplore LessWrong tagsExplore Alignment Forum tagsTip: you can go directly to some tag, by finding it's URL on the forum, and modifying this site's URL.For example, to go to "Consciousness research", you find go toEA Forum:LessWrong:Alignment Forum:How does it work?Here I explain all the details.Cross-posted to LessWrong.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New tool for exploring EA Forum and LessWrong - Tree of Tags, published by Filip Sondej on October 27, 2022 on The Effective Altruism Forum.Explore the hidden treasures of the forumsWith this tool you can zoom in on your favorite topics from EA Forum, LessWrong, and Alignment Forum. Or you can just wander around and see what you find.You start by seeing the whole forum split into two main topics. Choose the one that you like more, and that topic will be split again into two subtopics. Choose a subtopic, and again, you will see it split into two subsubtopics (and so on).It's like climbing a tree, where you start at the trunk and then choose which branches to go into as you go higher.Choose a forum to climb:EA ForumLessWrongAlignment ForumSome tips:it's easier to choose a branch by looking at tags rather than postson mobile, horizontal view is more practicalFeaturesEach topic has a unique and unchanging URL. So if you find a place you like, just bookmark it! The posts inside will be updated, but the theme stays the same.The bar at the right of each post is the reading time indicator. Full bar means 30 min, half bar means 15 min, and so on.At the top right, you can choose how to rank the posts:hot - new and upvoted - the default forum rankingtop - most upvotesalive - has recent commentsmeritocratic - votes from high karma users are exaggeratedregular - default scoringdemocratic - posts are scored as if everyone has the same voting powerMy hopesThe amount of content is overwhelming. My problem is not that there's nothing good to read, but that there is so much to plow through to find what's best for me.Also, it's sad to see great posts receive so little attention, the moment after they are pushed off the frontpage. But they are still valuable, and for each forgotten post, there is someone who should read it. So I want to make the right topics find their way to the right people.Let's make the forums evergreen!Tag similarityFor each tag, you can also see what tags are most related to it:Explore EA Forum tagsExplore LessWrong tagsExplore Alignment Forum tagsTip: you can go directly to some tag, by finding it's URL on the forum, and modifying this site's URL.For example, to go to "Consciousness research", you find go toEA Forum:LessWrong:Alignment Forum:How does it work?Here I explain all the details.Cross-posted to LessWrong.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Filip Sondej https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:45 None full 3570
vD3yDaDBLerMLdCQx_EA EA - Summary of "Technology Favours Tyranny" by Yuval Noah Harari by Madhav Malhotra Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary of "Technology Favours Tyranny" by Yuval Noah Harari, published by Madhav Malhotra on October 26, 2022 on The Effective Altruism Forum.Link: Technology Favours Tyrrany – Yuval Noah HarariSummary:Economic incentives favour making powerful AI.Making powerful AI is easier where you don’t respect liberty / privacy.We’re developing a new useless class. People that can’t do anything better than technology. And we don’t know what to do with them.Raw Notes:Democracy has been the exception, not the norm throughout history.He thinks that there are certain technological conditions that promote democracy. And that we're growing further away from those conditions with the technology we're developing.I'm not entirely sure what he means. But he gives the example of the "ordinary man" being praised in the 1940s. Everyday steel, coal, farm, military workers would be on propaganda posters in the US, USSR, and Europe. But today, "ordinary workers" are no longer celebrated. They no longer have a future.Says ordinary people have political power (in democracies) but are less and less relevant economically. "Perhaps in the 21st century, populist revolts will be staged not against an economic elite that exploits people but against an economic elite that does not need them anymore."We can't keep people happy by just getting more and more economic growth. Not if this growth is based on inventing more and more disruptive technologies.As technology continues to develop, we'll have new jobs emerge quickly. But the new jobs will also disappear quickly. Ex: Human annotators for AI models. People will need to retrain many many times.How will this affect people's stress given how many people are already stressed?He thinks that we'll have technology that can tell what people are thinking (roughly). Ex: He says if you look at a picture and get higher blood pressure and activity in your amygdala, then you're getting angry from the contents of that picture. And he says this type of technology could be used in autocratic states to suppress dissent.He also says that we already have technology to manipulate our emotions better than our own families can manipulate our emotions. He says that this might be used to convince people to buy into X product, Y politician, Z movement, etc.He says that in past times, democracies were better able to use information than autocracies. Because information was available to everyone and more minds could process it. But with AI, the group with the most powerful AI is the best at processing information.Ex: If an autocratic government orders all its citizens to take health tests and share their medical data to the government, that government would be much better at biomedical research than governments that respect individual privacy.He uses this to point out how the centralised way that autocracies make decisions lend themselves to autocracies being the best at processing information.He mentions how we've already started to erode skills based on technological reliance. Ex: Many more people don't know how to get informationw ithout search engines. Ex: Many more people don't know how to navigate long distance without Google Maps.For next steps, he suggests that to control the development of technology like AI, we need to control the 'means of production.' In agrarian societies, the means of production was land, so we created property rights. In the 1900s, the means of production were factories/machines, so we protected those most in wartime. In the 2000s, the means of production might be computer chips, data, etc. We currently suck at controlling/regulating this.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Madhav Malhotra https://forum.effectivealtruism.org/posts/vD3yDaDBLerMLdCQx/summary-of-technology-favours-tyranny-by-yuval-noah-harari Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary of "Technology Favours Tyranny" by Yuval Noah Harari, published by Madhav Malhotra on October 26, 2022 on The Effective Altruism Forum.Link: Technology Favours Tyrrany – Yuval Noah HarariSummary:Economic incentives favour making powerful AI.Making powerful AI is easier where you don’t respect liberty / privacy.We’re developing a new useless class. People that can’t do anything better than technology. And we don’t know what to do with them.Raw Notes:Democracy has been the exception, not the norm throughout history.He thinks that there are certain technological conditions that promote democracy. And that we're growing further away from those conditions with the technology we're developing.I'm not entirely sure what he means. But he gives the example of the "ordinary man" being praised in the 1940s. Everyday steel, coal, farm, military workers would be on propaganda posters in the US, USSR, and Europe. But today, "ordinary workers" are no longer celebrated. They no longer have a future.Says ordinary people have political power (in democracies) but are less and less relevant economically. "Perhaps in the 21st century, populist revolts will be staged not against an economic elite that exploits people but against an economic elite that does not need them anymore."We can't keep people happy by just getting more and more economic growth. Not if this growth is based on inventing more and more disruptive technologies.As technology continues to develop, we'll have new jobs emerge quickly. But the new jobs will also disappear quickly. Ex: Human annotators for AI models. People will need to retrain many many times.How will this affect people's stress given how many people are already stressed?He thinks that we'll have technology that can tell what people are thinking (roughly). Ex: He says if you look at a picture and get higher blood pressure and activity in your amygdala, then you're getting angry from the contents of that picture. And he says this type of technology could be used in autocratic states to suppress dissent.He also says that we already have technology to manipulate our emotions better than our own families can manipulate our emotions. He says that this might be used to convince people to buy into X product, Y politician, Z movement, etc.He says that in past times, democracies were better able to use information than autocracies. Because information was available to everyone and more minds could process it. But with AI, the group with the most powerful AI is the best at processing information.Ex: If an autocratic government orders all its citizens to take health tests and share their medical data to the government, that government would be much better at biomedical research than governments that respect individual privacy.He uses this to point out how the centralised way that autocracies make decisions lend themselves to autocracies being the best at processing information.He mentions how we've already started to erode skills based on technological reliance. Ex: Many more people don't know how to get informationw ithout search engines. Ex: Many more people don't know how to navigate long distance without Google Maps.For next steps, he suggests that to control the development of technology like AI, we need to control the 'means of production.' In agrarian societies, the means of production was land, so we created property rights. In the 1900s, the means of production were factories/machines, so we protected those most in wartime. In the 2000s, the means of production might be computer chips, data, etc. We currently suck at controlling/regulating this.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 27 Oct 2022 21:03:39 +0000 EA - Summary of "Technology Favours Tyranny" by Yuval Noah Harari by Madhav Malhotra Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary of "Technology Favours Tyranny" by Yuval Noah Harari, published by Madhav Malhotra on October 26, 2022 on The Effective Altruism Forum.Link: Technology Favours Tyrrany – Yuval Noah HarariSummary:Economic incentives favour making powerful AI.Making powerful AI is easier where you don’t respect liberty / privacy.We’re developing a new useless class. People that can’t do anything better than technology. And we don’t know what to do with them.Raw Notes:Democracy has been the exception, not the norm throughout history.He thinks that there are certain technological conditions that promote democracy. And that we're growing further away from those conditions with the technology we're developing.I'm not entirely sure what he means. But he gives the example of the "ordinary man" being praised in the 1940s. Everyday steel, coal, farm, military workers would be on propaganda posters in the US, USSR, and Europe. But today, "ordinary workers" are no longer celebrated. They no longer have a future.Says ordinary people have political power (in democracies) but are less and less relevant economically. "Perhaps in the 21st century, populist revolts will be staged not against an economic elite that exploits people but against an economic elite that does not need them anymore."We can't keep people happy by just getting more and more economic growth. Not if this growth is based on inventing more and more disruptive technologies.As technology continues to develop, we'll have new jobs emerge quickly. But the new jobs will also disappear quickly. Ex: Human annotators for AI models. People will need to retrain many many times.How will this affect people's stress given how many people are already stressed?He thinks that we'll have technology that can tell what people are thinking (roughly). Ex: He says if you look at a picture and get higher blood pressure and activity in your amygdala, then you're getting angry from the contents of that picture. And he says this type of technology could be used in autocratic states to suppress dissent.He also says that we already have technology to manipulate our emotions better than our own families can manipulate our emotions. He says that this might be used to convince people to buy into X product, Y politician, Z movement, etc.He says that in past times, democracies were better able to use information than autocracies. Because information was available to everyone and more minds could process it. But with AI, the group with the most powerful AI is the best at processing information.Ex: If an autocratic government orders all its citizens to take health tests and share their medical data to the government, that government would be much better at biomedical research than governments that respect individual privacy.He uses this to point out how the centralised way that autocracies make decisions lend themselves to autocracies being the best at processing information.He mentions how we've already started to erode skills based on technological reliance. Ex: Many more people don't know how to get informationw ithout search engines. Ex: Many more people don't know how to navigate long distance without Google Maps.For next steps, he suggests that to control the development of technology like AI, we need to control the 'means of production.' In agrarian societies, the means of production was land, so we created property rights. In the 1900s, the means of production were factories/machines, so we protected those most in wartime. In the 2000s, the means of production might be computer chips, data, etc. We currently suck at controlling/regulating this.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary of "Technology Favours Tyranny" by Yuval Noah Harari, published by Madhav Malhotra on October 26, 2022 on The Effective Altruism Forum.Link: Technology Favours Tyrrany – Yuval Noah HarariSummary:Economic incentives favour making powerful AI.Making powerful AI is easier where you don’t respect liberty / privacy.We’re developing a new useless class. People that can’t do anything better than technology. And we don’t know what to do with them.Raw Notes:Democracy has been the exception, not the norm throughout history.He thinks that there are certain technological conditions that promote democracy. And that we're growing further away from those conditions with the technology we're developing.I'm not entirely sure what he means. But he gives the example of the "ordinary man" being praised in the 1940s. Everyday steel, coal, farm, military workers would be on propaganda posters in the US, USSR, and Europe. But today, "ordinary workers" are no longer celebrated. They no longer have a future.Says ordinary people have political power (in democracies) but are less and less relevant economically. "Perhaps in the 21st century, populist revolts will be staged not against an economic elite that exploits people but against an economic elite that does not need them anymore."We can't keep people happy by just getting more and more economic growth. Not if this growth is based on inventing more and more disruptive technologies.As technology continues to develop, we'll have new jobs emerge quickly. But the new jobs will also disappear quickly. Ex: Human annotators for AI models. People will need to retrain many many times.How will this affect people's stress given how many people are already stressed?He thinks that we'll have technology that can tell what people are thinking (roughly). Ex: He says if you look at a picture and get higher blood pressure and activity in your amygdala, then you're getting angry from the contents of that picture. And he says this type of technology could be used in autocratic states to suppress dissent.He also says that we already have technology to manipulate our emotions better than our own families can manipulate our emotions. He says that this might be used to convince people to buy into X product, Y politician, Z movement, etc.He says that in past times, democracies were better able to use information than autocracies. Because information was available to everyone and more minds could process it. But with AI, the group with the most powerful AI is the best at processing information.Ex: If an autocratic government orders all its citizens to take health tests and share their medical data to the government, that government would be much better at biomedical research than governments that respect individual privacy.He uses this to point out how the centralised way that autocracies make decisions lend themselves to autocracies being the best at processing information.He mentions how we've already started to erode skills based on technological reliance. Ex: Many more people don't know how to get informationw ithout search engines. Ex: Many more people don't know how to navigate long distance without Google Maps.For next steps, he suggests that to control the development of technology like AI, we need to control the 'means of production.' In agrarian societies, the means of production was land, so we created property rights. In the 1900s, the means of production were factories/machines, so we protected those most in wartime. In the 2000s, the means of production might be computer chips, data, etc. We currently suck at controlling/regulating this.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Madhav Malhotra https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:41 None full 3566
6NnnPvzCzxWpWzAb8_EA EA - Podcast: The Left and Effective Altruism with Habiba Islam by Garrison Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Podcast: The Left and Effective Altruism with Habiba Islam, published by Garrison on October 27, 2022 on The Effective Altruism Forum.I recently rebooted my interview podcast, The Most Interesting People I Know (found wherever you find podcasts). I focus on EA and left-wing guests, and have been pretty involved in both communities for the last 5 years. Some example guests: Rutger Bregman, Leah Garcés, Lewis Bollard, Spencer Greenberg, Nathan Robinson, Malaika Jabali, Emily Bazelon, David Shor, and Eric Levitz.I just released a long conversation with Habiba Islam, an 80K career advisor and lefty, about the relationship between EA and the left.This is not an attempt to paper over differences between the two communities, or pretend that EA is more left-wing than it is. Instead, I tried to give an accurate description of both communities, where they are in hidden agreement, where they actually disagree, and what each can learn from the other.Habiba is so sharp and thoughtful throughout the conversation. We're very lucky to have her!I hope this could be a good reference text as well as an onboarding ramp for leftists who might be open to EA.I think there's a real gap in the EA media-verse on the intersection of left-wing politics and EA, and we're almost certainly missing out on some great people and perspectives who would be into EA if they were presented with the right arguments and framing.I have no delusions that all leftists would be into EA if they only understood it better, but I think there are tons of bad-faith criticisms and genuine misunderstandings that we could better address. I think we can have a healthier and more productive relationship with the left.If you'd like to support the show, here are some things you can do:Personally recommend the show/particular episodes to friends. Apparently, this is how podcasts best grow their audiences.Share the podcast/episode on social media (I'm on Twitter @garrisonlovely)Rate and review the show on Apple Podcasts.Give me feedback (anonymous form here). You can also email me at tgarrisonlovely@gmail.comThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Garrison https://forum.effectivealtruism.org/posts/6NnnPvzCzxWpWzAb8/podcast-the-left-and-effective-altruism-with-habiba-islam Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Podcast: The Left and Effective Altruism with Habiba Islam, published by Garrison on October 27, 2022 on The Effective Altruism Forum.I recently rebooted my interview podcast, The Most Interesting People I Know (found wherever you find podcasts). I focus on EA and left-wing guests, and have been pretty involved in both communities for the last 5 years. Some example guests: Rutger Bregman, Leah Garcés, Lewis Bollard, Spencer Greenberg, Nathan Robinson, Malaika Jabali, Emily Bazelon, David Shor, and Eric Levitz.I just released a long conversation with Habiba Islam, an 80K career advisor and lefty, about the relationship between EA and the left.This is not an attempt to paper over differences between the two communities, or pretend that EA is more left-wing than it is. Instead, I tried to give an accurate description of both communities, where they are in hidden agreement, where they actually disagree, and what each can learn from the other.Habiba is so sharp and thoughtful throughout the conversation. We're very lucky to have her!I hope this could be a good reference text as well as an onboarding ramp for leftists who might be open to EA.I think there's a real gap in the EA media-verse on the intersection of left-wing politics and EA, and we're almost certainly missing out on some great people and perspectives who would be into EA if they were presented with the right arguments and framing.I have no delusions that all leftists would be into EA if they only understood it better, but I think there are tons of bad-faith criticisms and genuine misunderstandings that we could better address. I think we can have a healthier and more productive relationship with the left.If you'd like to support the show, here are some things you can do:Personally recommend the show/particular episodes to friends. Apparently, this is how podcasts best grow their audiences.Share the podcast/episode on social media (I'm on Twitter @garrisonlovely)Rate and review the show on Apple Podcasts.Give me feedback (anonymous form here). You can also email me at tgarrisonlovely@gmail.comThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 27 Oct 2022 19:02:12 +0000 EA - Podcast: The Left and Effective Altruism with Habiba Islam by Garrison Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Podcast: The Left and Effective Altruism with Habiba Islam, published by Garrison on October 27, 2022 on The Effective Altruism Forum.I recently rebooted my interview podcast, The Most Interesting People I Know (found wherever you find podcasts). I focus on EA and left-wing guests, and have been pretty involved in both communities for the last 5 years. Some example guests: Rutger Bregman, Leah Garcés, Lewis Bollard, Spencer Greenberg, Nathan Robinson, Malaika Jabali, Emily Bazelon, David Shor, and Eric Levitz.I just released a long conversation with Habiba Islam, an 80K career advisor and lefty, about the relationship between EA and the left.This is not an attempt to paper over differences between the two communities, or pretend that EA is more left-wing than it is. Instead, I tried to give an accurate description of both communities, where they are in hidden agreement, where they actually disagree, and what each can learn from the other.Habiba is so sharp and thoughtful throughout the conversation. We're very lucky to have her!I hope this could be a good reference text as well as an onboarding ramp for leftists who might be open to EA.I think there's a real gap in the EA media-verse on the intersection of left-wing politics and EA, and we're almost certainly missing out on some great people and perspectives who would be into EA if they were presented with the right arguments and framing.I have no delusions that all leftists would be into EA if they only understood it better, but I think there are tons of bad-faith criticisms and genuine misunderstandings that we could better address. I think we can have a healthier and more productive relationship with the left.If you'd like to support the show, here are some things you can do:Personally recommend the show/particular episodes to friends. Apparently, this is how podcasts best grow their audiences.Share the podcast/episode on social media (I'm on Twitter @garrisonlovely)Rate and review the show on Apple Podcasts.Give me feedback (anonymous form here). You can also email me at tgarrisonlovely@gmail.comThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Podcast: The Left and Effective Altruism with Habiba Islam, published by Garrison on October 27, 2022 on The Effective Altruism Forum.I recently rebooted my interview podcast, The Most Interesting People I Know (found wherever you find podcasts). I focus on EA and left-wing guests, and have been pretty involved in both communities for the last 5 years. Some example guests: Rutger Bregman, Leah Garcés, Lewis Bollard, Spencer Greenberg, Nathan Robinson, Malaika Jabali, Emily Bazelon, David Shor, and Eric Levitz.I just released a long conversation with Habiba Islam, an 80K career advisor and lefty, about the relationship between EA and the left.This is not an attempt to paper over differences between the two communities, or pretend that EA is more left-wing than it is. Instead, I tried to give an accurate description of both communities, where they are in hidden agreement, where they actually disagree, and what each can learn from the other.Habiba is so sharp and thoughtful throughout the conversation. We're very lucky to have her!I hope this could be a good reference text as well as an onboarding ramp for leftists who might be open to EA.I think there's a real gap in the EA media-verse on the intersection of left-wing politics and EA, and we're almost certainly missing out on some great people and perspectives who would be into EA if they were presented with the right arguments and framing.I have no delusions that all leftists would be into EA if they only understood it better, but I think there are tons of bad-faith criticisms and genuine misunderstandings that we could better address. I think we can have a healthier and more productive relationship with the left.If you'd like to support the show, here are some things you can do:Personally recommend the show/particular episodes to friends. Apparently, this is how podcasts best grow their audiences.Share the podcast/episode on social media (I'm on Twitter @garrisonlovely)Rate and review the show on Apple Podcasts.Give me feedback (anonymous form here). You can also email me at tgarrisonlovely@gmail.comThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Garrison https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:13 None full 3562
ztJBqjN6iDMQBAjAr_EA EA - GiveWell should use shorter TAI timelines by Oscar Delaney Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell should use shorter TAI timelines, published by Oscar Delaney on October 27, 2022 on The Effective Altruism Forum.SummaryGiveWell’s discount rate of 4% includes a 1.4% contribution from ‘temporal uncertainty’ arising from the possibility of major events radically changing the world.This is incompatible with the transformative artificial intelligence (TAI) timelines of many AI safety researchers.I argue that GiveWell should increase its discount rate, or at least provide a justification for differing significantly from the commonly-held (in EA) view that TAI could come soon.Epistemic Status: timelines are hard, and I don’t have novel takes, but I worry perhaps GiveWell doesn’t either, and they are dissenting unintentionally.In my accompanying post I argued GiveWell should use a probability distribution over discount rates, I will ignore that here though, and just consider whether their point estimate is appropriate.GiveWell’s current discount rate of 4% is calculated as the sum of three factors. Quoting their explanations from this document:Improving circumstances over time: 1.7%. “Increases in consumption over time meaning marginal increases in consumption in the future are less valuable.”Compounding non-monetary benefits: 0.9%. “There are non-monetary returns not captured in our cost-effectiveness analysis which likely compound over time and are causally intertwined with consumption. These include reduced stress and improved nutrition.”Temporal uncertainty: 1.4%. “Uncertainty increases with projections into the future, meaning the projected benefits may fail to materialize. James [a GiveWell researcher] recommended a rate of 1.4% based on judgement on the annual likelihood of an unforeseen event or longer term change causing the expected benefits to not be realized. Examples of such events are major changes in economic structure, catastrophe, or political instability.”I do not have a good understanding of how these numbers were derived, and have no reason to think the first two are unfair estimates. I think the third is a significant underestimate.TAI is precisely the sort of “major change” meant to be captured by the temporal uncertainty factor. I have no insights to add on the question of TAI timelines, but I think absent GiveWell providing justification to the contrary, they should default towards using the timelines of people who have thought about this a lot. One such person is Ajeya Cotra, who in August reported a 50% credence in TAI being developed by 2040. I do not claim, and nor does Ajeya, that this is authoritative, however it seems a reasonable starting point for GiveWell to use, given they have not and (I think rightly) probably will not put significant work into forming independent timelines. Also in August, a broader survey of 738 experts by AI Impacts resulted in a median year for TAI of 2059. This source has the advantage of including many more people, but conversely most of them will have not spent much time thinking carefully about timelines.I will not give a sophisticated instantiation of what I propose, but rather gesture at what I think a good approach would be, and give a toy example to improve on the status quo. A naive thing to do would be to imagine that there is a fixed annual probability of developing TAI conditional on not having developed it to date. This method gives an annual probability of 3.8% under Ajeya’s timelines, and 1.9% under the AI Impacts timelines.[1] In reality, more of our probability mass should be placed on the later years between now and 2040 (or 2059), and we should not simply stop at 2040 (or 2059). A proper model would likely need to dispense with a constant discount rate entirely, and instead track the probability the world has not seen a “major change” by each year. A model that accomplishes ...]]>
Oscar Delaney https://forum.effectivealtruism.org/posts/ztJBqjN6iDMQBAjAr/givewell-should-use-shorter-tai-timelines Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell should use shorter TAI timelines, published by Oscar Delaney on October 27, 2022 on The Effective Altruism Forum.SummaryGiveWell’s discount rate of 4% includes a 1.4% contribution from ‘temporal uncertainty’ arising from the possibility of major events radically changing the world.This is incompatible with the transformative artificial intelligence (TAI) timelines of many AI safety researchers.I argue that GiveWell should increase its discount rate, or at least provide a justification for differing significantly from the commonly-held (in EA) view that TAI could come soon.Epistemic Status: timelines are hard, and I don’t have novel takes, but I worry perhaps GiveWell doesn’t either, and they are dissenting unintentionally.In my accompanying post I argued GiveWell should use a probability distribution over discount rates, I will ignore that here though, and just consider whether their point estimate is appropriate.GiveWell’s current discount rate of 4% is calculated as the sum of three factors. Quoting their explanations from this document:Improving circumstances over time: 1.7%. “Increases in consumption over time meaning marginal increases in consumption in the future are less valuable.”Compounding non-monetary benefits: 0.9%. “There are non-monetary returns not captured in our cost-effectiveness analysis which likely compound over time and are causally intertwined with consumption. These include reduced stress and improved nutrition.”Temporal uncertainty: 1.4%. “Uncertainty increases with projections into the future, meaning the projected benefits may fail to materialize. James [a GiveWell researcher] recommended a rate of 1.4% based on judgement on the annual likelihood of an unforeseen event or longer term change causing the expected benefits to not be realized. Examples of such events are major changes in economic structure, catastrophe, or political instability.”I do not have a good understanding of how these numbers were derived, and have no reason to think the first two are unfair estimates. I think the third is a significant underestimate.TAI is precisely the sort of “major change” meant to be captured by the temporal uncertainty factor. I have no insights to add on the question of TAI timelines, but I think absent GiveWell providing justification to the contrary, they should default towards using the timelines of people who have thought about this a lot. One such person is Ajeya Cotra, who in August reported a 50% credence in TAI being developed by 2040. I do not claim, and nor does Ajeya, that this is authoritative, however it seems a reasonable starting point for GiveWell to use, given they have not and (I think rightly) probably will not put significant work into forming independent timelines. Also in August, a broader survey of 738 experts by AI Impacts resulted in a median year for TAI of 2059. This source has the advantage of including many more people, but conversely most of them will have not spent much time thinking carefully about timelines.I will not give a sophisticated instantiation of what I propose, but rather gesture at what I think a good approach would be, and give a toy example to improve on the status quo. A naive thing to do would be to imagine that there is a fixed annual probability of developing TAI conditional on not having developed it to date. This method gives an annual probability of 3.8% under Ajeya’s timelines, and 1.9% under the AI Impacts timelines.[1] In reality, more of our probability mass should be placed on the later years between now and 2040 (or 2059), and we should not simply stop at 2040 (or 2059). A proper model would likely need to dispense with a constant discount rate entirely, and instead track the probability the world has not seen a “major change” by each year. A model that accomplishes ...]]>
Thu, 27 Oct 2022 17:37:14 +0000 EA - GiveWell should use shorter TAI timelines by Oscar Delaney Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell should use shorter TAI timelines, published by Oscar Delaney on October 27, 2022 on The Effective Altruism Forum.SummaryGiveWell’s discount rate of 4% includes a 1.4% contribution from ‘temporal uncertainty’ arising from the possibility of major events radically changing the world.This is incompatible with the transformative artificial intelligence (TAI) timelines of many AI safety researchers.I argue that GiveWell should increase its discount rate, or at least provide a justification for differing significantly from the commonly-held (in EA) view that TAI could come soon.Epistemic Status: timelines are hard, and I don’t have novel takes, but I worry perhaps GiveWell doesn’t either, and they are dissenting unintentionally.In my accompanying post I argued GiveWell should use a probability distribution over discount rates, I will ignore that here though, and just consider whether their point estimate is appropriate.GiveWell’s current discount rate of 4% is calculated as the sum of three factors. Quoting their explanations from this document:Improving circumstances over time: 1.7%. “Increases in consumption over time meaning marginal increases in consumption in the future are less valuable.”Compounding non-monetary benefits: 0.9%. “There are non-monetary returns not captured in our cost-effectiveness analysis which likely compound over time and are causally intertwined with consumption. These include reduced stress and improved nutrition.”Temporal uncertainty: 1.4%. “Uncertainty increases with projections into the future, meaning the projected benefits may fail to materialize. James [a GiveWell researcher] recommended a rate of 1.4% based on judgement on the annual likelihood of an unforeseen event or longer term change causing the expected benefits to not be realized. Examples of such events are major changes in economic structure, catastrophe, or political instability.”I do not have a good understanding of how these numbers were derived, and have no reason to think the first two are unfair estimates. I think the third is a significant underestimate.TAI is precisely the sort of “major change” meant to be captured by the temporal uncertainty factor. I have no insights to add on the question of TAI timelines, but I think absent GiveWell providing justification to the contrary, they should default towards using the timelines of people who have thought about this a lot. One such person is Ajeya Cotra, who in August reported a 50% credence in TAI being developed by 2040. I do not claim, and nor does Ajeya, that this is authoritative, however it seems a reasonable starting point for GiveWell to use, given they have not and (I think rightly) probably will not put significant work into forming independent timelines. Also in August, a broader survey of 738 experts by AI Impacts resulted in a median year for TAI of 2059. This source has the advantage of including many more people, but conversely most of them will have not spent much time thinking carefully about timelines.I will not give a sophisticated instantiation of what I propose, but rather gesture at what I think a good approach would be, and give a toy example to improve on the status quo. A naive thing to do would be to imagine that there is a fixed annual probability of developing TAI conditional on not having developed it to date. This method gives an annual probability of 3.8% under Ajeya’s timelines, and 1.9% under the AI Impacts timelines.[1] In reality, more of our probability mass should be placed on the later years between now and 2040 (or 2059), and we should not simply stop at 2040 (or 2059). A proper model would likely need to dispense with a constant discount rate entirely, and instead track the probability the world has not seen a “major change” by each year. A model that accomplishes ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell should use shorter TAI timelines, published by Oscar Delaney on October 27, 2022 on The Effective Altruism Forum.SummaryGiveWell’s discount rate of 4% includes a 1.4% contribution from ‘temporal uncertainty’ arising from the possibility of major events radically changing the world.This is incompatible with the transformative artificial intelligence (TAI) timelines of many AI safety researchers.I argue that GiveWell should increase its discount rate, or at least provide a justification for differing significantly from the commonly-held (in EA) view that TAI could come soon.Epistemic Status: timelines are hard, and I don’t have novel takes, but I worry perhaps GiveWell doesn’t either, and they are dissenting unintentionally.In my accompanying post I argued GiveWell should use a probability distribution over discount rates, I will ignore that here though, and just consider whether their point estimate is appropriate.GiveWell’s current discount rate of 4% is calculated as the sum of three factors. Quoting their explanations from this document:Improving circumstances over time: 1.7%. “Increases in consumption over time meaning marginal increases in consumption in the future are less valuable.”Compounding non-monetary benefits: 0.9%. “There are non-monetary returns not captured in our cost-effectiveness analysis which likely compound over time and are causally intertwined with consumption. These include reduced stress and improved nutrition.”Temporal uncertainty: 1.4%. “Uncertainty increases with projections into the future, meaning the projected benefits may fail to materialize. James [a GiveWell researcher] recommended a rate of 1.4% based on judgement on the annual likelihood of an unforeseen event or longer term change causing the expected benefits to not be realized. Examples of such events are major changes in economic structure, catastrophe, or political instability.”I do not have a good understanding of how these numbers were derived, and have no reason to think the first two are unfair estimates. I think the third is a significant underestimate.TAI is precisely the sort of “major change” meant to be captured by the temporal uncertainty factor. I have no insights to add on the question of TAI timelines, but I think absent GiveWell providing justification to the contrary, they should default towards using the timelines of people who have thought about this a lot. One such person is Ajeya Cotra, who in August reported a 50% credence in TAI being developed by 2040. I do not claim, and nor does Ajeya, that this is authoritative, however it seems a reasonable starting point for GiveWell to use, given they have not and (I think rightly) probably will not put significant work into forming independent timelines. Also in August, a broader survey of 738 experts by AI Impacts resulted in a median year for TAI of 2059. This source has the advantage of including many more people, but conversely most of them will have not spent much time thinking carefully about timelines.I will not give a sophisticated instantiation of what I propose, but rather gesture at what I think a good approach would be, and give a toy example to improve on the status quo. A naive thing to do would be to imagine that there is a fixed annual probability of developing TAI conditional on not having developed it to date. This method gives an annual probability of 3.8% under Ajeya’s timelines, and 1.9% under the AI Impacts timelines.[1] In reality, more of our probability mass should be placed on the later years between now and 2040 (or 2059), and we should not simply stop at 2040 (or 2059). A proper model would likely need to dispense with a constant discount rate entirely, and instead track the probability the world has not seen a “major change” by each year. A model that accomplishes ...]]>
Oscar Delaney https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:48 None full 3564
L2qefgmsHo845F7HQ_EA EA - Recommend Me EAs To Write About by Stephen Thomas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Recommend Me EAs To Write About, published by Stephen Thomas on October 26, 2022 on The Effective Altruism Forum.Project basicsI’ve been funded by the Future Fund Regranting Program to start a series of magazine-style profiles of people doing interesting EA work, called “Humans of Effective Altruism.”[Recommend someone here]The idea is to write these for an audience of both EAs and non-EAs, with the idea of giving people tangible examples of interesting and effective career paths and/or life paths.I’d like to get into the nitty-gritty of what people do day to day, and also dig into who they are as people, ‘what makes them tick’.I’ll be publishing them on this Substack. I anticipate most interviews will be done remotely, but if someone is in NYC, where I am, I’d somewhat prefer an in-person meeting, to give the profile color.I’m now looking for recommendations for who to write about.You can use this Google form, comment on this post, DM me, or email me at humansofea@gmail.com.Please err on the side of suggesting anyone you think would make an interesting profile subject! It’s better for me to have more people to consider, and probably at least some people suggested/recommended won’t want to be profiled, so I’d love to have a very long list of options.That said, I basically have 3 criteria:3 criteriaThey are doing high-impact activities (probably their job, but not necessarily). One motivation for this project is to give people considering life-options tangible examples of net-positive things to do.MOST IMPORTANTLY, they have an especially interesting personal story, especially if it relates to how they think about their work, or they’re just an especially interesting person.And, obviously, they must be open to being profiled, and are willing to get at least a little personal.I’m also open to the idea of a profile of more than one person (a collective, a charity, a purpose-driven group house), but I don’t expect these to be the majority of the profiles.Self-recommendations A-okay.Thanks in advance.[Recommend someone here]Further details about processI’d prefer to have a preliminary chat without committing to writing up a full profileDifferent profiles may take on different shapes. Some may be long, some short. Some may involve multiple interviews, some just one. Because of this, it’s possible (though maybe somewhat unlikely) that a ‘preliminary chat’ could be the entire process, which get processed into a short profile.Further details about how I'm thinking about the projectBasically, prioritizing an interesting story is a bid to attract the attention of a wider, non-EA audienceSlight preference for profile subjects for whom there’s something the reader could do when they finish reading, a specific action they could take if they were inspired by this person, to help their specific cause, over and above ‘get [more] into EA’Preference for someone who represents an idea, and/or whose story illustrates a broader pointRecommend someoneRecommend someone here.Subscribe / FollowSubscribe to the Substack here to get the first profile in your inbox.Follow on Twitter, Instagram, and FacebookThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Stephen Thomas https://forum.effectivealtruism.org/posts/L2qefgmsHo845F7HQ/recommend-me-eas-to-write-about Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Recommend Me EAs To Write About, published by Stephen Thomas on October 26, 2022 on The Effective Altruism Forum.Project basicsI’ve been funded by the Future Fund Regranting Program to start a series of magazine-style profiles of people doing interesting EA work, called “Humans of Effective Altruism.”[Recommend someone here]The idea is to write these for an audience of both EAs and non-EAs, with the idea of giving people tangible examples of interesting and effective career paths and/or life paths.I’d like to get into the nitty-gritty of what people do day to day, and also dig into who they are as people, ‘what makes them tick’.I’ll be publishing them on this Substack. I anticipate most interviews will be done remotely, but if someone is in NYC, where I am, I’d somewhat prefer an in-person meeting, to give the profile color.I’m now looking for recommendations for who to write about.You can use this Google form, comment on this post, DM me, or email me at humansofea@gmail.com.Please err on the side of suggesting anyone you think would make an interesting profile subject! It’s better for me to have more people to consider, and probably at least some people suggested/recommended won’t want to be profiled, so I’d love to have a very long list of options.That said, I basically have 3 criteria:3 criteriaThey are doing high-impact activities (probably their job, but not necessarily). One motivation for this project is to give people considering life-options tangible examples of net-positive things to do.MOST IMPORTANTLY, they have an especially interesting personal story, especially if it relates to how they think about their work, or they’re just an especially interesting person.And, obviously, they must be open to being profiled, and are willing to get at least a little personal.I’m also open to the idea of a profile of more than one person (a collective, a charity, a purpose-driven group house), but I don’t expect these to be the majority of the profiles.Self-recommendations A-okay.Thanks in advance.[Recommend someone here]Further details about processI’d prefer to have a preliminary chat without committing to writing up a full profileDifferent profiles may take on different shapes. Some may be long, some short. Some may involve multiple interviews, some just one. Because of this, it’s possible (though maybe somewhat unlikely) that a ‘preliminary chat’ could be the entire process, which get processed into a short profile.Further details about how I'm thinking about the projectBasically, prioritizing an interesting story is a bid to attract the attention of a wider, non-EA audienceSlight preference for profile subjects for whom there’s something the reader could do when they finish reading, a specific action they could take if they were inspired by this person, to help their specific cause, over and above ‘get [more] into EA’Preference for someone who represents an idea, and/or whose story illustrates a broader pointRecommend someoneRecommend someone here.Subscribe / FollowSubscribe to the Substack here to get the first profile in your inbox.Follow on Twitter, Instagram, and FacebookThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Thu, 27 Oct 2022 16:20:47 +0000 EA - Recommend Me EAs To Write About by Stephen Thomas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Recommend Me EAs To Write About, published by Stephen Thomas on October 26, 2022 on The Effective Altruism Forum.Project basicsI’ve been funded by the Future Fund Regranting Program to start a series of magazine-style profiles of people doing interesting EA work, called “Humans of Effective Altruism.”[Recommend someone here]The idea is to write these for an audience of both EAs and non-EAs, with the idea of giving people tangible examples of interesting and effective career paths and/or life paths.I’d like to get into the nitty-gritty of what people do day to day, and also dig into who they are as people, ‘what makes them tick’.I’ll be publishing them on this Substack. I anticipate most interviews will be done remotely, but if someone is in NYC, where I am, I’d somewhat prefer an in-person meeting, to give the profile color.I’m now looking for recommendations for who to write about.You can use this Google form, comment on this post, DM me, or email me at humansofea@gmail.com.Please err on the side of suggesting anyone you think would make an interesting profile subject! It’s better for me to have more people to consider, and probably at least some people suggested/recommended won’t want to be profiled, so I’d love to have a very long list of options.That said, I basically have 3 criteria:3 criteriaThey are doing high-impact activities (probably their job, but not necessarily). One motivation for this project is to give people considering life-options tangible examples of net-positive things to do.MOST IMPORTANTLY, they have an especially interesting personal story, especially if it relates to how they think about their work, or they’re just an especially interesting person.And, obviously, they must be open to being profiled, and are willing to get at least a little personal.I’m also open to the idea of a profile of more than one person (a collective, a charity, a purpose-driven group house), but I don’t expect these to be the majority of the profiles.Self-recommendations A-okay.Thanks in advance.[Recommend someone here]Further details about processI’d prefer to have a preliminary chat without committing to writing up a full profileDifferent profiles may take on different shapes. Some may be long, some short. Some may involve multiple interviews, some just one. Because of this, it’s possible (though maybe somewhat unlikely) that a ‘preliminary chat’ could be the entire process, which get processed into a short profile.Further details about how I'm thinking about the projectBasically, prioritizing an interesting story is a bid to attract the attention of a wider, non-EA audienceSlight preference for profile subjects for whom there’s something the reader could do when they finish reading, a specific action they could take if they were inspired by this person, to help their specific cause, over and above ‘get [more] into EA’Preference for someone who represents an idea, and/or whose story illustrates a broader pointRecommend someoneRecommend someone here.Subscribe / FollowSubscribe to the Substack here to get the first profile in your inbox.Follow on Twitter, Instagram, and FacebookThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Recommend Me EAs To Write About, published by Stephen Thomas on October 26, 2022 on The Effective Altruism Forum.Project basicsI’ve been funded by the Future Fund Regranting Program to start a series of magazine-style profiles of people doing interesting EA work, called “Humans of Effective Altruism.”[Recommend someone here]The idea is to write these for an audience of both EAs and non-EAs, with the idea of giving people tangible examples of interesting and effective career paths and/or life paths.I’d like to get into the nitty-gritty of what people do day to day, and also dig into who they are as people, ‘what makes them tick’.I’ll be publishing them on this Substack. I anticipate most interviews will be done remotely, but if someone is in NYC, where I am, I’d somewhat prefer an in-person meeting, to give the profile color.I’m now looking for recommendations for who to write about.You can use this Google form, comment on this post, DM me, or email me at humansofea@gmail.com.Please err on the side of suggesting anyone you think would make an interesting profile subject! It’s better for me to have more people to consider, and probably at least some people suggested/recommended won’t want to be profiled, so I’d love to have a very long list of options.That said, I basically have 3 criteria:3 criteriaThey are doing high-impact activities (probably their job, but not necessarily). One motivation for this project is to give people considering life-options tangible examples of net-positive things to do.MOST IMPORTANTLY, they have an especially interesting personal story, especially if it relates to how they think about their work, or they’re just an especially interesting person.And, obviously, they must be open to being profiled, and are willing to get at least a little personal.I’m also open to the idea of a profile of more than one person (a collective, a charity, a purpose-driven group house), but I don’t expect these to be the majority of the profiles.Self-recommendations A-okay.Thanks in advance.[Recommend someone here]Further details about processI’d prefer to have a preliminary chat without committing to writing up a full profileDifferent profiles may take on different shapes. Some may be long, some short. Some may involve multiple interviews, some just one. Because of this, it’s possible (though maybe somewhat unlikely) that a ‘preliminary chat’ could be the entire process, which get processed into a short profile.Further details about how I'm thinking about the projectBasically, prioritizing an interesting story is a bid to attract the attention of a wider, non-EA audienceSlight preference for profile subjects for whom there’s something the reader could do when they finish reading, a specific action they could take if they were inspired by this person, to help their specific cause, over and above ‘get [more] into EA’Preference for someone who represents an idea, and/or whose story illustrates a broader pointRecommend someoneRecommend someone here.Subscribe / FollowSubscribe to the Substack here to get the first profile in your inbox.Follow on Twitter, Instagram, and FacebookThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Stephen Thomas https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 03:13 None full 3567
JvEHiKWWwtvT3YanA_EA EA - GiveWell Misuses Discount Rates by Oscar Delaney Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell Misuses Discount Rates, published by Oscar Delaney on October 27, 2022 on The Effective Altruism Forum.SummaryGiveWell currently uses a time discount rate of 4% for all their cost-effectiveness analyses (CEAs).I argue that it is a mathematical mistake to pick any single best guess value to use for the CEAs.Instead, GiveWell should use a probability distribution over possible discount rates.This is not just an aesthetic judgement for mathematical puritans; it materially changes the CEAs, notably by making all the deworming interventions more attractive relative to other interventions.This is because deworming interventions rely on multi-decadal effects, and so a lower discount rate would make them much more valuable.Epistemic StatusOn the object level, I cannot think of any reasons to justify GiveWell's current modelling choice over my proposal.However, I still doubt my conclusion because on the meta level it seems like an obvious thing that would be surprising if no one at GiveWell had ever thought of doing, which is evidence I am missing something important.MainGiveWell’s CEAs are an impressive attempt to model many different factors in assessing the near-term impacts of various interventions.[1] I will ignore all of this complexity. For my purposes, it is sufficient to note that the CEA for most interventions is well characterised by decomposing impact into several constituents, and multiplying these numbers together. Consider Helen Keller International’s Vitamin A Supplementation program:V=M×R×1C [2] where:V is cost-effectiveness [deaths/dollar],M is baseline mortality [deaths/year/child],R is mortality reduction [%], andC is treatment cost [dollars/child/year]Obviously, all of these terms are uncertain. Treatment costs we can estimate quite accurately, but there may be fluctuations in the price of labour or materials needed in the distribution. Mortality data is generally good, but some deaths may not be reported, and mortality rates will change over time. The mortality reduction is based on a solid-seeming meta-analysis of RCTs, but things change over time, and circumstances differ between the trial and intervention locations.GiveWell’s model makes a subtle mathematical assumption, namely that the expectation of the product of these three random variables is equal to the product of their expectations:E[V]=E[M×R×1C]=E[M]×E[R]×E[1C]This is not, in general, true.[3] However, if the three random variables are independent, it is true. I cannot think of any plausible ways in which these three random variables correlate. Surely learning that the price of vitamin A tablets just doubled (C) does not affect how effective they are (R) or change the baseline of how many kids die (M). Thus, while GiveWell’s method is mathematically unsound, it gives the correct answer in this case. It could well be that GiveWell has considered this, and decided not to explain this in their CEAs because it doesn’t change the answer. I think this would be a mistake in communication, but otherwise benign.The one place where I believe this mathematical mistake translates into an incorrect answer is in the use of discount rates. From GiveWell’s explanatory document:“The discount rate's primary effect in the cost-effectiveness analyses of our top charities is to represent how much we discount increases in consumption resulting from the long run effects of improved child health for our malaria, deworming and vitamin A charities (which we call "developmental effects"). It also affects the longer-run benefits from cash transfers. We don't discount mortality benefits in our cost-effectiveness analyses.”This figure shows the cost-effectiveness of all the charities in the CEA spreadsheet, when varying the discount rate.[4]Deworming interventions, shown in dashed lines, v...]]>
Oscar Delaney https://forum.effectivealtruism.org/posts/JvEHiKWWwtvT3YanA/givewell-misuses-discount-rates Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell Misuses Discount Rates, published by Oscar Delaney on October 27, 2022 on The Effective Altruism Forum.SummaryGiveWell currently uses a time discount rate of 4% for all their cost-effectiveness analyses (CEAs).I argue that it is a mathematical mistake to pick any single best guess value to use for the CEAs.Instead, GiveWell should use a probability distribution over possible discount rates.This is not just an aesthetic judgement for mathematical puritans; it materially changes the CEAs, notably by making all the deworming interventions more attractive relative to other interventions.This is because deworming interventions rely on multi-decadal effects, and so a lower discount rate would make them much more valuable.Epistemic StatusOn the object level, I cannot think of any reasons to justify GiveWell's current modelling choice over my proposal.However, I still doubt my conclusion because on the meta level it seems like an obvious thing that would be surprising if no one at GiveWell had ever thought of doing, which is evidence I am missing something important.MainGiveWell’s CEAs are an impressive attempt to model many different factors in assessing the near-term impacts of various interventions.[1] I will ignore all of this complexity. For my purposes, it is sufficient to note that the CEA for most interventions is well characterised by decomposing impact into several constituents, and multiplying these numbers together. Consider Helen Keller International’s Vitamin A Supplementation program:V=M×R×1C [2] where:V is cost-effectiveness [deaths/dollar],M is baseline mortality [deaths/year/child],R is mortality reduction [%], andC is treatment cost [dollars/child/year]Obviously, all of these terms are uncertain. Treatment costs we can estimate quite accurately, but there may be fluctuations in the price of labour or materials needed in the distribution. Mortality data is generally good, but some deaths may not be reported, and mortality rates will change over time. The mortality reduction is based on a solid-seeming meta-analysis of RCTs, but things change over time, and circumstances differ between the trial and intervention locations.GiveWell’s model makes a subtle mathematical assumption, namely that the expectation of the product of these three random variables is equal to the product of their expectations:E[V]=E[M×R×1C]=E[M]×E[R]×E[1C]This is not, in general, true.[3] However, if the three random variables are independent, it is true. I cannot think of any plausible ways in which these three random variables correlate. Surely learning that the price of vitamin A tablets just doubled (C) does not affect how effective they are (R) or change the baseline of how many kids die (M). Thus, while GiveWell’s method is mathematically unsound, it gives the correct answer in this case. It could well be that GiveWell has considered this, and decided not to explain this in their CEAs because it doesn’t change the answer. I think this would be a mistake in communication, but otherwise benign.The one place where I believe this mathematical mistake translates into an incorrect answer is in the use of discount rates. From GiveWell’s explanatory document:“The discount rate's primary effect in the cost-effectiveness analyses of our top charities is to represent how much we discount increases in consumption resulting from the long run effects of improved child health for our malaria, deworming and vitamin A charities (which we call "developmental effects"). It also affects the longer-run benefits from cash transfers. We don't discount mortality benefits in our cost-effectiveness analyses.”This figure shows the cost-effectiveness of all the charities in the CEA spreadsheet, when varying the discount rate.[4]Deworming interventions, shown in dashed lines, v...]]>
Thu, 27 Oct 2022 14:01:21 +0000 EA - GiveWell Misuses Discount Rates by Oscar Delaney Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell Misuses Discount Rates, published by Oscar Delaney on October 27, 2022 on The Effective Altruism Forum.SummaryGiveWell currently uses a time discount rate of 4% for all their cost-effectiveness analyses (CEAs).I argue that it is a mathematical mistake to pick any single best guess value to use for the CEAs.Instead, GiveWell should use a probability distribution over possible discount rates.This is not just an aesthetic judgement for mathematical puritans; it materially changes the CEAs, notably by making all the deworming interventions more attractive relative to other interventions.This is because deworming interventions rely on multi-decadal effects, and so a lower discount rate would make them much more valuable.Epistemic StatusOn the object level, I cannot think of any reasons to justify GiveWell's current modelling choice over my proposal.However, I still doubt my conclusion because on the meta level it seems like an obvious thing that would be surprising if no one at GiveWell had ever thought of doing, which is evidence I am missing something important.MainGiveWell’s CEAs are an impressive attempt to model many different factors in assessing the near-term impacts of various interventions.[1] I will ignore all of this complexity. For my purposes, it is sufficient to note that the CEA for most interventions is well characterised by decomposing impact into several constituents, and multiplying these numbers together. Consider Helen Keller International’s Vitamin A Supplementation program:V=M×R×1C [2] where:V is cost-effectiveness [deaths/dollar],M is baseline mortality [deaths/year/child],R is mortality reduction [%], andC is treatment cost [dollars/child/year]Obviously, all of these terms are uncertain. Treatment costs we can estimate quite accurately, but there may be fluctuations in the price of labour or materials needed in the distribution. Mortality data is generally good, but some deaths may not be reported, and mortality rates will change over time. The mortality reduction is based on a solid-seeming meta-analysis of RCTs, but things change over time, and circumstances differ between the trial and intervention locations.GiveWell’s model makes a subtle mathematical assumption, namely that the expectation of the product of these three random variables is equal to the product of their expectations:E[V]=E[M×R×1C]=E[M]×E[R]×E[1C]This is not, in general, true.[3] However, if the three random variables are independent, it is true. I cannot think of any plausible ways in which these three random variables correlate. Surely learning that the price of vitamin A tablets just doubled (C) does not affect how effective they are (R) or change the baseline of how many kids die (M). Thus, while GiveWell’s method is mathematically unsound, it gives the correct answer in this case. It could well be that GiveWell has considered this, and decided not to explain this in their CEAs because it doesn’t change the answer. I think this would be a mistake in communication, but otherwise benign.The one place where I believe this mathematical mistake translates into an incorrect answer is in the use of discount rates. From GiveWell’s explanatory document:“The discount rate's primary effect in the cost-effectiveness analyses of our top charities is to represent how much we discount increases in consumption resulting from the long run effects of improved child health for our malaria, deworming and vitamin A charities (which we call "developmental effects"). It also affects the longer-run benefits from cash transfers. We don't discount mortality benefits in our cost-effectiveness analyses.”This figure shows the cost-effectiveness of all the charities in the CEA spreadsheet, when varying the discount rate.[4]Deworming interventions, shown in dashed lines, v...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell Misuses Discount Rates, published by Oscar Delaney on October 27, 2022 on The Effective Altruism Forum.SummaryGiveWell currently uses a time discount rate of 4% for all their cost-effectiveness analyses (CEAs).I argue that it is a mathematical mistake to pick any single best guess value to use for the CEAs.Instead, GiveWell should use a probability distribution over possible discount rates.This is not just an aesthetic judgement for mathematical puritans; it materially changes the CEAs, notably by making all the deworming interventions more attractive relative to other interventions.This is because deworming interventions rely on multi-decadal effects, and so a lower discount rate would make them much more valuable.Epistemic StatusOn the object level, I cannot think of any reasons to justify GiveWell's current modelling choice over my proposal.However, I still doubt my conclusion because on the meta level it seems like an obvious thing that would be surprising if no one at GiveWell had ever thought of doing, which is evidence I am missing something important.MainGiveWell’s CEAs are an impressive attempt to model many different factors in assessing the near-term impacts of various interventions.[1] I will ignore all of this complexity. For my purposes, it is sufficient to note that the CEA for most interventions is well characterised by decomposing impact into several constituents, and multiplying these numbers together. Consider Helen Keller International’s Vitamin A Supplementation program:V=M×R×1C [2] where:V is cost-effectiveness [deaths/dollar],M is baseline mortality [deaths/year/child],R is mortality reduction [%], andC is treatment cost [dollars/child/year]Obviously, all of these terms are uncertain. Treatment costs we can estimate quite accurately, but there may be fluctuations in the price of labour or materials needed in the distribution. Mortality data is generally good, but some deaths may not be reported, and mortality rates will change over time. The mortality reduction is based on a solid-seeming meta-analysis of RCTs, but things change over time, and circumstances differ between the trial and intervention locations.GiveWell’s model makes a subtle mathematical assumption, namely that the expectation of the product of these three random variables is equal to the product of their expectations:E[V]=E[M×R×1C]=E[M]×E[R]×E[1C]This is not, in general, true.[3] However, if the three random variables are independent, it is true. I cannot think of any plausible ways in which these three random variables correlate. Surely learning that the price of vitamin A tablets just doubled (C) does not affect how effective they are (R) or change the baseline of how many kids die (M). Thus, while GiveWell’s method is mathematically unsound, it gives the correct answer in this case. It could well be that GiveWell has considered this, and decided not to explain this in their CEAs because it doesn’t change the answer. I think this would be a mistake in communication, but otherwise benign.The one place where I believe this mathematical mistake translates into an incorrect answer is in the use of discount rates. From GiveWell’s explanatory document:“The discount rate's primary effect in the cost-effectiveness analyses of our top charities is to represent how much we discount increases in consumption resulting from the long run effects of improved child health for our malaria, deworming and vitamin A charities (which we call "developmental effects"). It also affects the longer-run benefits from cash transfers. We don't discount mortality benefits in our cost-effectiveness analyses.”This figure shows the cost-effectiveness of all the charities in the CEA spreadsheet, when varying the discount rate.[4]Deworming interventions, shown in dashed lines, v...]]>
Oscar Delaney https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:39 None full 3565
MGbdhjgd2v6cg3vjv_EA EA - Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley by Max Nadeau Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley, published by Max Nadeau on October 27, 2022 on The Effective Altruism Forum.This winter, Redwood Research is running a coordinated research effort on mechanistic interpretability of transformer models. We’re excited about recent advances in mechanistic interpretability and now want to try to scale our interpretability methodology to a larger group doing research in parallel.REMIX participants will work to provide mechanistic explanations of model behaviors, using our causal scrubbing methodology (forthcoming within a week) to formalize and evaluate interpretability hypotheses. We hope to produce many more explanations of model behaviors akin to our recent work investigating behaviors of GPT-2-small, toy language models, and models trained on algorithmic tasks (also forthcoming). We think this work is a particularly promising research direction for mitigating existential risks from advanced AI systems (more in Goals and FAQ).Apply here by November 8th to be a researcher in the program. Apply sooner if you’d like to start early (details below) or receive an earlier response.Some key details:We expect to accept 30-50 participants.We plan to have some researchers arrive early, with some people starting as soon as possible. The majority of researchers will likely participate during the months of December and/or January.We expect researchers to participate for a month minimum, and (all else equal) will prefer applicants who are able to come for longer. We’ll pay for housing and travel, and also pay researchers for their time. We’ll clarify the payment structure prior to asking people to commit to the program.We’re interested in some participants acting as team leaders who would help on-board and provide research advice to other participants. This would involve arriving early to get experience with our tools and research directions and participating for a longer period (~2 months). You can indicate interest in this role in the application.We’re excited about applicants with a range of backgrounds; we’re not expecting applicants to have prior experience in interpretability research. Applicants should be comfortable working with Python, PyTorch/TensorFlow/Numpy (we’ll be using PyTorch), and linear algebra. We’re particularly excited about applicants with experience doing empirical science in any field.We’ll allocate the first week to practice using our interpretability tools and methodology; the rest will be researching in small groups. See Schedule.Feel free to email programs@rdwrs.com with questions.GoalsResearch output. We hope this program will produce research that is useful in multiple ways:We’d like stronger and more grounded characterizations of how language models perform a certain class of behaviors. For example, we currently have a variety of findings about how GPT-2-small implements indirect object identification (“IOI”, see next section for more explanation), but aren’t yet sure how often they apply to other models or other tasks. We’d know a lot more if we had a larger quantity of this research.For each behavior investigated, we think there’s some chance of stumbling across something really interesting. Examples of this include induction heads and the “pointer manipulation” result in the IOI paper: not only does the model copy information between attention streams, but it also copies “pointers”, i.e. the position of the residual stream that contains the relevant information.We’re interested in learning whether different language models implement the same behaviors in similar ways.We’d like a better sense of how good the current library of interpretability techniques is, and we’d like to get ideas for new techniques.We’d like to have mo...]]>
Max Nadeau https://forum.effectivealtruism.org/posts/MGbdhjgd2v6cg3vjv/apply-to-the-redwood-research-mechanistic-interpretability Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley, published by Max Nadeau on October 27, 2022 on The Effective Altruism Forum.This winter, Redwood Research is running a coordinated research effort on mechanistic interpretability of transformer models. We’re excited about recent advances in mechanistic interpretability and now want to try to scale our interpretability methodology to a larger group doing research in parallel.REMIX participants will work to provide mechanistic explanations of model behaviors, using our causal scrubbing methodology (forthcoming within a week) to formalize and evaluate interpretability hypotheses. We hope to produce many more explanations of model behaviors akin to our recent work investigating behaviors of GPT-2-small, toy language models, and models trained on algorithmic tasks (also forthcoming). We think this work is a particularly promising research direction for mitigating existential risks from advanced AI systems (more in Goals and FAQ).Apply here by November 8th to be a researcher in the program. Apply sooner if you’d like to start early (details below) or receive an earlier response.Some key details:We expect to accept 30-50 participants.We plan to have some researchers arrive early, with some people starting as soon as possible. The majority of researchers will likely participate during the months of December and/or January.We expect researchers to participate for a month minimum, and (all else equal) will prefer applicants who are able to come for longer. We’ll pay for housing and travel, and also pay researchers for their time. We’ll clarify the payment structure prior to asking people to commit to the program.We’re interested in some participants acting as team leaders who would help on-board and provide research advice to other participants. This would involve arriving early to get experience with our tools and research directions and participating for a longer period (~2 months). You can indicate interest in this role in the application.We’re excited about applicants with a range of backgrounds; we’re not expecting applicants to have prior experience in interpretability research. Applicants should be comfortable working with Python, PyTorch/TensorFlow/Numpy (we’ll be using PyTorch), and linear algebra. We’re particularly excited about applicants with experience doing empirical science in any field.We’ll allocate the first week to practice using our interpretability tools and methodology; the rest will be researching in small groups. See Schedule.Feel free to email programs@rdwrs.com with questions.GoalsResearch output. We hope this program will produce research that is useful in multiple ways:We’d like stronger and more grounded characterizations of how language models perform a certain class of behaviors. For example, we currently have a variety of findings about how GPT-2-small implements indirect object identification (“IOI”, see next section for more explanation), but aren’t yet sure how often they apply to other models or other tasks. We’d know a lot more if we had a larger quantity of this research.For each behavior investigated, we think there’s some chance of stumbling across something really interesting. Examples of this include induction heads and the “pointer manipulation” result in the IOI paper: not only does the model copy information between attention streams, but it also copies “pointers”, i.e. the position of the residual stream that contains the relevant information.We’re interested in learning whether different language models implement the same behaviors in similar ways.We’d like a better sense of how good the current library of interpretability techniques is, and we’d like to get ideas for new techniques.We’d like to have mo...]]>
Thu, 27 Oct 2022 02:38:48 +0000 EA - Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley by Max Nadeau Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley, published by Max Nadeau on October 27, 2022 on The Effective Altruism Forum.This winter, Redwood Research is running a coordinated research effort on mechanistic interpretability of transformer models. We’re excited about recent advances in mechanistic interpretability and now want to try to scale our interpretability methodology to a larger group doing research in parallel.REMIX participants will work to provide mechanistic explanations of model behaviors, using our causal scrubbing methodology (forthcoming within a week) to formalize and evaluate interpretability hypotheses. We hope to produce many more explanations of model behaviors akin to our recent work investigating behaviors of GPT-2-small, toy language models, and models trained on algorithmic tasks (also forthcoming). We think this work is a particularly promising research direction for mitigating existential risks from advanced AI systems (more in Goals and FAQ).Apply here by November 8th to be a researcher in the program. Apply sooner if you’d like to start early (details below) or receive an earlier response.Some key details:We expect to accept 30-50 participants.We plan to have some researchers arrive early, with some people starting as soon as possible. The majority of researchers will likely participate during the months of December and/or January.We expect researchers to participate for a month minimum, and (all else equal) will prefer applicants who are able to come for longer. We’ll pay for housing and travel, and also pay researchers for their time. We’ll clarify the payment structure prior to asking people to commit to the program.We’re interested in some participants acting as team leaders who would help on-board and provide research advice to other participants. This would involve arriving early to get experience with our tools and research directions and participating for a longer period (~2 months). You can indicate interest in this role in the application.We’re excited about applicants with a range of backgrounds; we’re not expecting applicants to have prior experience in interpretability research. Applicants should be comfortable working with Python, PyTorch/TensorFlow/Numpy (we’ll be using PyTorch), and linear algebra. We’re particularly excited about applicants with experience doing empirical science in any field.We’ll allocate the first week to practice using our interpretability tools and methodology; the rest will be researching in small groups. See Schedule.Feel free to email programs@rdwrs.com with questions.GoalsResearch output. We hope this program will produce research that is useful in multiple ways:We’d like stronger and more grounded characterizations of how language models perform a certain class of behaviors. For example, we currently have a variety of findings about how GPT-2-small implements indirect object identification (“IOI”, see next section for more explanation), but aren’t yet sure how often they apply to other models or other tasks. We’d know a lot more if we had a larger quantity of this research.For each behavior investigated, we think there’s some chance of stumbling across something really interesting. Examples of this include induction heads and the “pointer manipulation” result in the IOI paper: not only does the model copy information between attention streams, but it also copies “pointers”, i.e. the position of the residual stream that contains the relevant information.We’re interested in learning whether different language models implement the same behaviors in similar ways.We’d like a better sense of how good the current library of interpretability techniques is, and we’d like to get ideas for new techniques.We’d like to have mo...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley, published by Max Nadeau on October 27, 2022 on The Effective Altruism Forum.This winter, Redwood Research is running a coordinated research effort on mechanistic interpretability of transformer models. We’re excited about recent advances in mechanistic interpretability and now want to try to scale our interpretability methodology to a larger group doing research in parallel.REMIX participants will work to provide mechanistic explanations of model behaviors, using our causal scrubbing methodology (forthcoming within a week) to formalize and evaluate interpretability hypotheses. We hope to produce many more explanations of model behaviors akin to our recent work investigating behaviors of GPT-2-small, toy language models, and models trained on algorithmic tasks (also forthcoming). We think this work is a particularly promising research direction for mitigating existential risks from advanced AI systems (more in Goals and FAQ).Apply here by November 8th to be a researcher in the program. Apply sooner if you’d like to start early (details below) or receive an earlier response.Some key details:We expect to accept 30-50 participants.We plan to have some researchers arrive early, with some people starting as soon as possible. The majority of researchers will likely participate during the months of December and/or January.We expect researchers to participate for a month minimum, and (all else equal) will prefer applicants who are able to come for longer. We’ll pay for housing and travel, and also pay researchers for their time. We’ll clarify the payment structure prior to asking people to commit to the program.We’re interested in some participants acting as team leaders who would help on-board and provide research advice to other participants. This would involve arriving early to get experience with our tools and research directions and participating for a longer period (~2 months). You can indicate interest in this role in the application.We’re excited about applicants with a range of backgrounds; we’re not expecting applicants to have prior experience in interpretability research. Applicants should be comfortable working with Python, PyTorch/TensorFlow/Numpy (we’ll be using PyTorch), and linear algebra. We’re particularly excited about applicants with experience doing empirical science in any field.We’ll allocate the first week to practice using our interpretability tools and methodology; the rest will be researching in small groups. See Schedule.Feel free to email programs@rdwrs.com with questions.GoalsResearch output. We hope this program will produce research that is useful in multiple ways:We’d like stronger and more grounded characterizations of how language models perform a certain class of behaviors. For example, we currently have a variety of findings about how GPT-2-small implements indirect object identification (“IOI”, see next section for more explanation), but aren’t yet sure how often they apply to other models or other tasks. We’d know a lot more if we had a larger quantity of this research.For each behavior investigated, we think there’s some chance of stumbling across something really interesting. Examples of this include induction heads and the “pointer manipulation” result in the IOI paper: not only does the model copy information between attention streams, but it also copies “pointers”, i.e. the position of the residual stream that contains the relevant information.We’re interested in learning whether different language models implement the same behaviors in similar ways.We’d like a better sense of how good the current library of interpretability techniques is, and we’d like to get ideas for new techniques.We’d like to have mo...]]>
Max Nadeau https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 19:08 None full 3559
WD7bFkWrR3Bu9dMnN_EA EA - We’re hiring! Probably Good is expanding our team by Probably Good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We’re hiring! Probably Good is expanding our team, published by Probably Good on October 26, 2022 on The Effective Altruism Forum. is a young and growing organization. We’re looking for people who are passionate about helping others achieve a significant positive impact withand want to join as one of our core team members.While the roles listed below reflect our major needs, it’s worth keeping in mind that we are fairly flexible in adapting a position to fit someone’s skills and interests. We’re a small team – each wearing many hats and taking up new tasks. As such, we embrace a learning-focused culture where everyone can do their best work.Why join Probably Good?We’re a career guidance organization aiming to make impact-centered career guidance more accessible to more people (you can read more about our goals, approach, distinction from 80k, and more in our announcement post).This past spring, we brought on our first two employees, effectively doubling our team. After some months of creating new content and strategizing, we’re ready to ramp up. In the coming months, we have some exciting things in store, including:Launching a new website & full rebrandCreating more content and career profiles to reflect a broader range of career pathsCompleting the final section of our career guideExpanding our outreach and marketing efforts to reach a lot more peopleIf you’re compelled by our mission and goals, this could be a great opportunity to help shape the future of Probably Good.InclusivenessWe believe that forming an inclusive workplace and a diverse team is not only the right thing to do, but also helps us do better work. The nature of our mission requires us to consider multiple perspectives, which is only possible by making Probably Good an inclusive and welcoming place to work.To that end, we do our best to:Minimize unnecessary job requirements which would deter capable people from applying.Work with candidates to try and accommodate different needs – allowing everyone to do their best work and thrive with us.Encourage people with diverse backgrounds to apply. Even if you think that you might not be ‘good enough’ but you’re excited about our mission and want to contribute - please apply and give us a chance to get to know you and how you can help.General information about our hiringAll of our roles are fully remote, and we welcome candidates from all geographies and timezones.Many of our roles are flexible and we would consider hiring candidates at either a part-time or full-time capacityIn most cases, our hiring process includes a several months paid trial period, which we expect to make permanent if the role fits your expectations and you perform well.Qualities that we appreciate across all of our roles are clarity in communication, a deep familiarity with Effective Altruism, and working well in a team.If you’re interested in applying to any of these roles, send an email to jobs@probablygood.org with your CV and references to any previous work relevant to the role you’re applying for.Current OpportunitiesOperations LeadAbout the roleHelp us improve the processes and workflow of Probably Good. This role will support the whole team by tackling a range of operations tasks – from ensuring payroll runs smoothly to improving our internal systems to assisting with hiring/onboarding new members.About youWe’re looking for people who want to thrive in organizational and detail-oriented tasks, feel excited to support the efforts of a growing organization, have excellent communication and judgment skills, and can balance multiple tasks at once. Many backgrounds could be applicable to this position, though experience in project management or operations support would be advantageous.Head of Growth / Growth ManagerAbout the roleHelp us connect with new ...]]>
Probably Good https://forum.effectivealtruism.org/posts/WD7bFkWrR3Bu9dMnN/we-re-hiring-probably-good-is-expanding-our-team Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We’re hiring! Probably Good is expanding our team, published by Probably Good on October 26, 2022 on The Effective Altruism Forum. is a young and growing organization. We’re looking for people who are passionate about helping others achieve a significant positive impact withand want to join as one of our core team members.While the roles listed below reflect our major needs, it’s worth keeping in mind that we are fairly flexible in adapting a position to fit someone’s skills and interests. We’re a small team – each wearing many hats and taking up new tasks. As such, we embrace a learning-focused culture where everyone can do their best work.Why join Probably Good?We’re a career guidance organization aiming to make impact-centered career guidance more accessible to more people (you can read more about our goals, approach, distinction from 80k, and more in our announcement post).This past spring, we brought on our first two employees, effectively doubling our team. After some months of creating new content and strategizing, we’re ready to ramp up. In the coming months, we have some exciting things in store, including:Launching a new website & full rebrandCreating more content and career profiles to reflect a broader range of career pathsCompleting the final section of our career guideExpanding our outreach and marketing efforts to reach a lot more peopleIf you’re compelled by our mission and goals, this could be a great opportunity to help shape the future of Probably Good.InclusivenessWe believe that forming an inclusive workplace and a diverse team is not only the right thing to do, but also helps us do better work. The nature of our mission requires us to consider multiple perspectives, which is only possible by making Probably Good an inclusive and welcoming place to work.To that end, we do our best to:Minimize unnecessary job requirements which would deter capable people from applying.Work with candidates to try and accommodate different needs – allowing everyone to do their best work and thrive with us.Encourage people with diverse backgrounds to apply. Even if you think that you might not be ‘good enough’ but you’re excited about our mission and want to contribute - please apply and give us a chance to get to know you and how you can help.General information about our hiringAll of our roles are fully remote, and we welcome candidates from all geographies and timezones.Many of our roles are flexible and we would consider hiring candidates at either a part-time or full-time capacityIn most cases, our hiring process includes a several months paid trial period, which we expect to make permanent if the role fits your expectations and you perform well.Qualities that we appreciate across all of our roles are clarity in communication, a deep familiarity with Effective Altruism, and working well in a team.If you’re interested in applying to any of these roles, send an email to jobs@probablygood.org with your CV and references to any previous work relevant to the role you’re applying for.Current OpportunitiesOperations LeadAbout the roleHelp us improve the processes and workflow of Probably Good. This role will support the whole team by tackling a range of operations tasks – from ensuring payroll runs smoothly to improving our internal systems to assisting with hiring/onboarding new members.About youWe’re looking for people who want to thrive in organizational and detail-oriented tasks, feel excited to support the efforts of a growing organization, have excellent communication and judgment skills, and can balance multiple tasks at once. Many backgrounds could be applicable to this position, though experience in project management or operations support would be advantageous.Head of Growth / Growth ManagerAbout the roleHelp us connect with new ...]]>
Wed, 26 Oct 2022 22:21:00 +0000 EA - We’re hiring! Probably Good is expanding our team by Probably Good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We’re hiring! Probably Good is expanding our team, published by Probably Good on October 26, 2022 on The Effective Altruism Forum. is a young and growing organization. We’re looking for people who are passionate about helping others achieve a significant positive impact withand want to join as one of our core team members.While the roles listed below reflect our major needs, it’s worth keeping in mind that we are fairly flexible in adapting a position to fit someone’s skills and interests. We’re a small team – each wearing many hats and taking up new tasks. As such, we embrace a learning-focused culture where everyone can do their best work.Why join Probably Good?We’re a career guidance organization aiming to make impact-centered career guidance more accessible to more people (you can read more about our goals, approach, distinction from 80k, and more in our announcement post).This past spring, we brought on our first two employees, effectively doubling our team. After some months of creating new content and strategizing, we’re ready to ramp up. In the coming months, we have some exciting things in store, including:Launching a new website & full rebrandCreating more content and career profiles to reflect a broader range of career pathsCompleting the final section of our career guideExpanding our outreach and marketing efforts to reach a lot more peopleIf you’re compelled by our mission and goals, this could be a great opportunity to help shape the future of Probably Good.InclusivenessWe believe that forming an inclusive workplace and a diverse team is not only the right thing to do, but also helps us do better work. The nature of our mission requires us to consider multiple perspectives, which is only possible by making Probably Good an inclusive and welcoming place to work.To that end, we do our best to:Minimize unnecessary job requirements which would deter capable people from applying.Work with candidates to try and accommodate different needs – allowing everyone to do their best work and thrive with us.Encourage people with diverse backgrounds to apply. Even if you think that you might not be ‘good enough’ but you’re excited about our mission and want to contribute - please apply and give us a chance to get to know you and how you can help.General information about our hiringAll of our roles are fully remote, and we welcome candidates from all geographies and timezones.Many of our roles are flexible and we would consider hiring candidates at either a part-time or full-time capacityIn most cases, our hiring process includes a several months paid trial period, which we expect to make permanent if the role fits your expectations and you perform well.Qualities that we appreciate across all of our roles are clarity in communication, a deep familiarity with Effective Altruism, and working well in a team.If you’re interested in applying to any of these roles, send an email to jobs@probablygood.org with your CV and references to any previous work relevant to the role you’re applying for.Current OpportunitiesOperations LeadAbout the roleHelp us improve the processes and workflow of Probably Good. This role will support the whole team by tackling a range of operations tasks – from ensuring payroll runs smoothly to improving our internal systems to assisting with hiring/onboarding new members.About youWe’re looking for people who want to thrive in organizational and detail-oriented tasks, feel excited to support the efforts of a growing organization, have excellent communication and judgment skills, and can balance multiple tasks at once. Many backgrounds could be applicable to this position, though experience in project management or operations support would be advantageous.Head of Growth / Growth ManagerAbout the roleHelp us connect with new ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We’re hiring! Probably Good is expanding our team, published by Probably Good on October 26, 2022 on The Effective Altruism Forum. is a young and growing organization. We’re looking for people who are passionate about helping others achieve a significant positive impact withand want to join as one of our core team members.While the roles listed below reflect our major needs, it’s worth keeping in mind that we are fairly flexible in adapting a position to fit someone’s skills and interests. We’re a small team – each wearing many hats and taking up new tasks. As such, we embrace a learning-focused culture where everyone can do their best work.Why join Probably Good?We’re a career guidance organization aiming to make impact-centered career guidance more accessible to more people (you can read more about our goals, approach, distinction from 80k, and more in our announcement post).This past spring, we brought on our first two employees, effectively doubling our team. After some months of creating new content and strategizing, we’re ready to ramp up. In the coming months, we have some exciting things in store, including:Launching a new website & full rebrandCreating more content and career profiles to reflect a broader range of career pathsCompleting the final section of our career guideExpanding our outreach and marketing efforts to reach a lot more peopleIf you’re compelled by our mission and goals, this could be a great opportunity to help shape the future of Probably Good.InclusivenessWe believe that forming an inclusive workplace and a diverse team is not only the right thing to do, but also helps us do better work. The nature of our mission requires us to consider multiple perspectives, which is only possible by making Probably Good an inclusive and welcoming place to work.To that end, we do our best to:Minimize unnecessary job requirements which would deter capable people from applying.Work with candidates to try and accommodate different needs – allowing everyone to do their best work and thrive with us.Encourage people with diverse backgrounds to apply. Even if you think that you might not be ‘good enough’ but you’re excited about our mission and want to contribute - please apply and give us a chance to get to know you and how you can help.General information about our hiringAll of our roles are fully remote, and we welcome candidates from all geographies and timezones.Many of our roles are flexible and we would consider hiring candidates at either a part-time or full-time capacityIn most cases, our hiring process includes a several months paid trial period, which we expect to make permanent if the role fits your expectations and you perform well.Qualities that we appreciate across all of our roles are clarity in communication, a deep familiarity with Effective Altruism, and working well in a team.If you’re interested in applying to any of these roles, send an email to jobs@probablygood.org with your CV and references to any previous work relevant to the role you’re applying for.Current OpportunitiesOperations LeadAbout the roleHelp us improve the processes and workflow of Probably Good. This role will support the whole team by tackling a range of operations tasks – from ensuring payroll runs smoothly to improving our internal systems to assisting with hiring/onboarding new members.About youWe’re looking for people who want to thrive in organizational and detail-oriented tasks, feel excited to support the efforts of a growing organization, have excellent communication and judgment skills, and can balance multiple tasks at once. Many backgrounds could be applicable to this position, though experience in project management or operations support would be advantageous.Head of Growth / Growth ManagerAbout the roleHelp us connect with new ...]]>
Probably Good https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 06:06 None full 3560
ZJpPvxXdimPgxFp2j_EA EA - Announcing the Founders Pledge Global Catastrophic Risks Fund by christian.r Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Founders Pledge Global Catastrophic Risks Fund, published by christian.r on October 26, 2022 on The Effective Altruism Forum.At Founders Pledge, we just launched a new addition to our funds: the Global Catastrophic Risks Fund. This post gives a brief overview of the fund.Key PointsThe fund will focus on global catastrophic risks with a special emphasis on risk pathways through international stability and great power relations.The fund’s shorter giving timelines are complementary to our investing-to-give Patient Philanthropy Fund — we are publishing a short write-up on this soon.The fund is designed to offer high-impact giving opportunities for both longtermists and non-longtermists who care about catastrophic risks (see section on “Our Perspective” in the Prospectus).You can find more information — including differences and complementarity with other funds and longtermist funders — in our Fund Prospectus.OverviewThe GCR Fund will build on Founders Pledge’s recent research into great power conflict and risks from frontier military and civilian technologies, with a special focus on international stability — a pathway that we believe shapes a number of the biggest risks facing humanity — and will work on:War between great powers, like a U.S.-China clash over Taiwan, or U.S.-Russia war;Nuclear war, especially emerging threats to nuclear stability, like vulnerabilities of nuclear command, control, and communications;Risks from artificial intelligence (AI), including risks from both machine learning applications (like autonomous weapon systems) and from transformative AI;Catastrophic biological risks, such as naturally-arising pandemics, engineered pathogens, laboratory accidents, and the misuse of new advances in synthetic biology; andEmerging threats from new technologies and in new domains.Moreover, the Fund will support field-building activities around the study and mitigation of global catastrophic risks, and methodological interventions, including new ways of studying these risks, such as probabilistic forecasting and experimental wargaming. The focus on international security is a current specialty, and we expect the areas of expertise of the fund to expand as we build capacity.Current and Future GenerationsThis Fund is designed both to tackle threats to humanity’s long-term future and to take action now to protect every human being alive today. We believe both that some interventions on global catastrophic risks can be justified on a simple cost-benefit analysis alone, and also that safeguarding the long-term future of humanity is among the most important things we can work on (and that in practice, they often converge). Whether or not you share our commitment to longtermism or believe that reducing existential risks is particularly important, you may still be interested in the Fund for the simple reason that you want to help prevent the deaths and suffering of millions of people.To illustrate this, the Fund may support the development of confidence-building measures on AI — like an International Autonomous Incidents Agreement — with the aim of both mitigating the destabilizing impact of near-term military AI applications, as well as providing a focal point for long-termist AI governance. Some grants will focus mainly on near-termist risks; others mainly on longtermist concerns.Like our other Funds, this will be a philanthropic co-funding vehicle designed to enable us to pursue a number of grantmaking opportunities, including:Active grantmaking, working with organizations to shape their plans for the future;Seeding new organizations and projects with high expected value;Committing to multi-year funding to give stability to promising projects and decrease their fundraising costs;Filling small funding gaps that fall between the cr...]]>
christian.r https://forum.effectivealtruism.org/posts/ZJpPvxXdimPgxFp2j/announcing-the-founders-pledge-global-catastrophic-risks-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Founders Pledge Global Catastrophic Risks Fund, published by christian.r on October 26, 2022 on The Effective Altruism Forum.At Founders Pledge, we just launched a new addition to our funds: the Global Catastrophic Risks Fund. This post gives a brief overview of the fund.Key PointsThe fund will focus on global catastrophic risks with a special emphasis on risk pathways through international stability and great power relations.The fund’s shorter giving timelines are complementary to our investing-to-give Patient Philanthropy Fund — we are publishing a short write-up on this soon.The fund is designed to offer high-impact giving opportunities for both longtermists and non-longtermists who care about catastrophic risks (see section on “Our Perspective” in the Prospectus).You can find more information — including differences and complementarity with other funds and longtermist funders — in our Fund Prospectus.OverviewThe GCR Fund will build on Founders Pledge’s recent research into great power conflict and risks from frontier military and civilian technologies, with a special focus on international stability — a pathway that we believe shapes a number of the biggest risks facing humanity — and will work on:War between great powers, like a U.S.-China clash over Taiwan, or U.S.-Russia war;Nuclear war, especially emerging threats to nuclear stability, like vulnerabilities of nuclear command, control, and communications;Risks from artificial intelligence (AI), including risks from both machine learning applications (like autonomous weapon systems) and from transformative AI;Catastrophic biological risks, such as naturally-arising pandemics, engineered pathogens, laboratory accidents, and the misuse of new advances in synthetic biology; andEmerging threats from new technologies and in new domains.Moreover, the Fund will support field-building activities around the study and mitigation of global catastrophic risks, and methodological interventions, including new ways of studying these risks, such as probabilistic forecasting and experimental wargaming. The focus on international security is a current specialty, and we expect the areas of expertise of the fund to expand as we build capacity.Current and Future GenerationsThis Fund is designed both to tackle threats to humanity’s long-term future and to take action now to protect every human being alive today. We believe both that some interventions on global catastrophic risks can be justified on a simple cost-benefit analysis alone, and also that safeguarding the long-term future of humanity is among the most important things we can work on (and that in practice, they often converge). Whether or not you share our commitment to longtermism or believe that reducing existential risks is particularly important, you may still be interested in the Fund for the simple reason that you want to help prevent the deaths and suffering of millions of people.To illustrate this, the Fund may support the development of confidence-building measures on AI — like an International Autonomous Incidents Agreement — with the aim of both mitigating the destabilizing impact of near-term military AI applications, as well as providing a focal point for long-termist AI governance. Some grants will focus mainly on near-termist risks; others mainly on longtermist concerns.Like our other Funds, this will be a philanthropic co-funding vehicle designed to enable us to pursue a number of grantmaking opportunities, including:Active grantmaking, working with organizations to shape their plans for the future;Seeding new organizations and projects with high expected value;Committing to multi-year funding to give stability to promising projects and decrease their fundraising costs;Filling small funding gaps that fall between the cr...]]>
Wed, 26 Oct 2022 21:11:06 +0000 EA - Announcing the Founders Pledge Global Catastrophic Risks Fund by christian.r Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Founders Pledge Global Catastrophic Risks Fund, published by christian.r on October 26, 2022 on The Effective Altruism Forum.At Founders Pledge, we just launched a new addition to our funds: the Global Catastrophic Risks Fund. This post gives a brief overview of the fund.Key PointsThe fund will focus on global catastrophic risks with a special emphasis on risk pathways through international stability and great power relations.The fund’s shorter giving timelines are complementary to our investing-to-give Patient Philanthropy Fund — we are publishing a short write-up on this soon.The fund is designed to offer high-impact giving opportunities for both longtermists and non-longtermists who care about catastrophic risks (see section on “Our Perspective” in the Prospectus).You can find more information — including differences and complementarity with other funds and longtermist funders — in our Fund Prospectus.OverviewThe GCR Fund will build on Founders Pledge’s recent research into great power conflict and risks from frontier military and civilian technologies, with a special focus on international stability — a pathway that we believe shapes a number of the biggest risks facing humanity — and will work on:War between great powers, like a U.S.-China clash over Taiwan, or U.S.-Russia war;Nuclear war, especially emerging threats to nuclear stability, like vulnerabilities of nuclear command, control, and communications;Risks from artificial intelligence (AI), including risks from both machine learning applications (like autonomous weapon systems) and from transformative AI;Catastrophic biological risks, such as naturally-arising pandemics, engineered pathogens, laboratory accidents, and the misuse of new advances in synthetic biology; andEmerging threats from new technologies and in new domains.Moreover, the Fund will support field-building activities around the study and mitigation of global catastrophic risks, and methodological interventions, including new ways of studying these risks, such as probabilistic forecasting and experimental wargaming. The focus on international security is a current specialty, and we expect the areas of expertise of the fund to expand as we build capacity.Current and Future GenerationsThis Fund is designed both to tackle threats to humanity’s long-term future and to take action now to protect every human being alive today. We believe both that some interventions on global catastrophic risks can be justified on a simple cost-benefit analysis alone, and also that safeguarding the long-term future of humanity is among the most important things we can work on (and that in practice, they often converge). Whether or not you share our commitment to longtermism or believe that reducing existential risks is particularly important, you may still be interested in the Fund for the simple reason that you want to help prevent the deaths and suffering of millions of people.To illustrate this, the Fund may support the development of confidence-building measures on AI — like an International Autonomous Incidents Agreement — with the aim of both mitigating the destabilizing impact of near-term military AI applications, as well as providing a focal point for long-termist AI governance. Some grants will focus mainly on near-termist risks; others mainly on longtermist concerns.Like our other Funds, this will be a philanthropic co-funding vehicle designed to enable us to pursue a number of grantmaking opportunities, including:Active grantmaking, working with organizations to shape their plans for the future;Seeding new organizations and projects with high expected value;Committing to multi-year funding to give stability to promising projects and decrease their fundraising costs;Filling small funding gaps that fall between the cr...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Founders Pledge Global Catastrophic Risks Fund, published by christian.r on October 26, 2022 on The Effective Altruism Forum.At Founders Pledge, we just launched a new addition to our funds: the Global Catastrophic Risks Fund. This post gives a brief overview of the fund.Key PointsThe fund will focus on global catastrophic risks with a special emphasis on risk pathways through international stability and great power relations.The fund’s shorter giving timelines are complementary to our investing-to-give Patient Philanthropy Fund — we are publishing a short write-up on this soon.The fund is designed to offer high-impact giving opportunities for both longtermists and non-longtermists who care about catastrophic risks (see section on “Our Perspective” in the Prospectus).You can find more information — including differences and complementarity with other funds and longtermist funders — in our Fund Prospectus.OverviewThe GCR Fund will build on Founders Pledge’s recent research into great power conflict and risks from frontier military and civilian technologies, with a special focus on international stability — a pathway that we believe shapes a number of the biggest risks facing humanity — and will work on:War between great powers, like a U.S.-China clash over Taiwan, or U.S.-Russia war;Nuclear war, especially emerging threats to nuclear stability, like vulnerabilities of nuclear command, control, and communications;Risks from artificial intelligence (AI), including risks from both machine learning applications (like autonomous weapon systems) and from transformative AI;Catastrophic biological risks, such as naturally-arising pandemics, engineered pathogens, laboratory accidents, and the misuse of new advances in synthetic biology; andEmerging threats from new technologies and in new domains.Moreover, the Fund will support field-building activities around the study and mitigation of global catastrophic risks, and methodological interventions, including new ways of studying these risks, such as probabilistic forecasting and experimental wargaming. The focus on international security is a current specialty, and we expect the areas of expertise of the fund to expand as we build capacity.Current and Future GenerationsThis Fund is designed both to tackle threats to humanity’s long-term future and to take action now to protect every human being alive today. We believe both that some interventions on global catastrophic risks can be justified on a simple cost-benefit analysis alone, and also that safeguarding the long-term future of humanity is among the most important things we can work on (and that in practice, they often converge). Whether or not you share our commitment to longtermism or believe that reducing existential risks is particularly important, you may still be interested in the Fund for the simple reason that you want to help prevent the deaths and suffering of millions of people.To illustrate this, the Fund may support the development of confidence-building measures on AI — like an International Autonomous Incidents Agreement — with the aim of both mitigating the destabilizing impact of near-term military AI applications, as well as providing a focal point for long-termist AI governance. Some grants will focus mainly on near-termist risks; others mainly on longtermist concerns.Like our other Funds, this will be a philanthropic co-funding vehicle designed to enable us to pursue a number of grantmaking opportunities, including:Active grantmaking, working with organizations to shape their plans for the future;Seeding new organizations and projects with high expected value;Committing to multi-year funding to give stability to promising projects and decrease their fundraising costs;Filling small funding gaps that fall between the cr...]]>
christian.r https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 05:56 None full 3553
nocZECBmPJGcSS7yx_EA EA - The Giving Store- 100% Profits to GiveDirectly by Ellie Leszczynski Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Giving Store- 100% Profits to GiveDirectly, published by Ellie Leszczynski on October 26, 2022 on The Effective Altruism Forum.TLDR: Here's a link to a store selling housewares, kids/babies supplies, pet products and out door products where 100% of the profits go to GiveDirectly. Now, shipping is 100% free on all items in the store!My name is Ellie Leszczynski and I am the Social Media Director for the Consumer Power Initiative (also known as CPI), founded by Brad West. I got interested in volunteering because once I heard about the idea and the mission, I knew I just had to be a part of it.After becoming familiar with the Effective Altruism community, I realized the EA principles really inspire me to think of how I can be better in my own life and give back to others. So I went on the EA website and ordered my free book - “Doing Good Better” by William MacAskill. I am now looking forward to reading his new book, “What We Owe the Future," as well. And I really am amazed by the community that has been developed around using our time, money, grit, and creativity to maximize our positive impact for everyone, including even insects raised in alternative farming or consciousness being generated by machine learning.This is a vast world with a lot of moving parts and ideas going on in it, and the Consumer Power Initiative is taking initiative to promote an altruistic mission that doesn't take much to share and spread. As example, I would like to introduce you to an online shop called the Consumer Giving Store. The Giving Store directs all of the profit that it generates to GiveDirectly, a highly-rated charity that does direct cash transfers to the global poor. The storefront was designed by Kagen Zethmayr, one of the members of the Consumer Power Initiative, and coded by Madhav Malhotra, an EA who was inspired by a discussion with Brad in the comments of one of his posts. You may already be familiar with it. At the Giving Store, you can buy products curated from a variety of sources at no-more-than-retail prices, whose sole purpose in being sold in this venue is to generate profits toward a worthy cause without added cost to the consumer.This fits into the broader idea of the Profit for Good model (also called “Guided Consumption”), which the Consumer Power Initiative is trying to promote and normalize. The Giving Store not only allows people to help GiveDirectly by buying goods, it also allows them to send a signal to the broader world that consumers care and want to help in what ways they can.We believe that the general public will choose to buy goods from Profit for Good Companies (“PFGs”) when they can do it without paying more or sacrificing quality. We are confident in the proposition that, however flawed and/or selfish members of the general public may be, they would rather the beneficiaries of effective charities, such as those that help the global poor, get the benefits of profit generated from their activity rather than the people who currently get said benefits: the very wealthy, especially if not even doing the work involved in this activity. The general public at large, however, cannot create this infrastructure or the initial steps for the community. This project will need EAs and others who put additional effort in trying to make the world a better place. Brad discusses the different players in the Profit for Good and their motivations here. Vincent van der Holst reviews some of the advantages Profit for Good firms have here.I gather that EA is all about how we can use our resources and our time to have the highest positive impact in the world. And using consumer sentiment and our economies to further effective charities could really transform the world with everyday choices! We would like to thank you in advance for any purchasing you might do on ...]]>
Ellie Leszczynski https://forum.effectivealtruism.org/posts/nocZECBmPJGcSS7yx/the-giving-store-100-profits-to-givedirectly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Giving Store- 100% Profits to GiveDirectly, published by Ellie Leszczynski on October 26, 2022 on The Effective Altruism Forum.TLDR: Here's a link to a store selling housewares, kids/babies supplies, pet products and out door products where 100% of the profits go to GiveDirectly. Now, shipping is 100% free on all items in the store!My name is Ellie Leszczynski and I am the Social Media Director for the Consumer Power Initiative (also known as CPI), founded by Brad West. I got interested in volunteering because once I heard about the idea and the mission, I knew I just had to be a part of it.After becoming familiar with the Effective Altruism community, I realized the EA principles really inspire me to think of how I can be better in my own life and give back to others. So I went on the EA website and ordered my free book - “Doing Good Better” by William MacAskill. I am now looking forward to reading his new book, “What We Owe the Future," as well. And I really am amazed by the community that has been developed around using our time, money, grit, and creativity to maximize our positive impact for everyone, including even insects raised in alternative farming or consciousness being generated by machine learning.This is a vast world with a lot of moving parts and ideas going on in it, and the Consumer Power Initiative is taking initiative to promote an altruistic mission that doesn't take much to share and spread. As example, I would like to introduce you to an online shop called the Consumer Giving Store. The Giving Store directs all of the profit that it generates to GiveDirectly, a highly-rated charity that does direct cash transfers to the global poor. The storefront was designed by Kagen Zethmayr, one of the members of the Consumer Power Initiative, and coded by Madhav Malhotra, an EA who was inspired by a discussion with Brad in the comments of one of his posts. You may already be familiar with it. At the Giving Store, you can buy products curated from a variety of sources at no-more-than-retail prices, whose sole purpose in being sold in this venue is to generate profits toward a worthy cause without added cost to the consumer.This fits into the broader idea of the Profit for Good model (also called “Guided Consumption”), which the Consumer Power Initiative is trying to promote and normalize. The Giving Store not only allows people to help GiveDirectly by buying goods, it also allows them to send a signal to the broader world that consumers care and want to help in what ways they can.We believe that the general public will choose to buy goods from Profit for Good Companies (“PFGs”) when they can do it without paying more or sacrificing quality. We are confident in the proposition that, however flawed and/or selfish members of the general public may be, they would rather the beneficiaries of effective charities, such as those that help the global poor, get the benefits of profit generated from their activity rather than the people who currently get said benefits: the very wealthy, especially if not even doing the work involved in this activity. The general public at large, however, cannot create this infrastructure or the initial steps for the community. This project will need EAs and others who put additional effort in trying to make the world a better place. Brad discusses the different players in the Profit for Good and their motivations here. Vincent van der Holst reviews some of the advantages Profit for Good firms have here.I gather that EA is all about how we can use our resources and our time to have the highest positive impact in the world. And using consumer sentiment and our economies to further effective charities could really transform the world with everyday choices! We would like to thank you in advance for any purchasing you might do on ...]]>
Wed, 26 Oct 2022 20:11:38 +0000 EA - The Giving Store- 100% Profits to GiveDirectly by Ellie Leszczynski Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Giving Store- 100% Profits to GiveDirectly, published by Ellie Leszczynski on October 26, 2022 on The Effective Altruism Forum.TLDR: Here's a link to a store selling housewares, kids/babies supplies, pet products and out door products where 100% of the profits go to GiveDirectly. Now, shipping is 100% free on all items in the store!My name is Ellie Leszczynski and I am the Social Media Director for the Consumer Power Initiative (also known as CPI), founded by Brad West. I got interested in volunteering because once I heard about the idea and the mission, I knew I just had to be a part of it.After becoming familiar with the Effective Altruism community, I realized the EA principles really inspire me to think of how I can be better in my own life and give back to others. So I went on the EA website and ordered my free book - “Doing Good Better” by William MacAskill. I am now looking forward to reading his new book, “What We Owe the Future," as well. And I really am amazed by the community that has been developed around using our time, money, grit, and creativity to maximize our positive impact for everyone, including even insects raised in alternative farming or consciousness being generated by machine learning.This is a vast world with a lot of moving parts and ideas going on in it, and the Consumer Power Initiative is taking initiative to promote an altruistic mission that doesn't take much to share and spread. As example, I would like to introduce you to an online shop called the Consumer Giving Store. The Giving Store directs all of the profit that it generates to GiveDirectly, a highly-rated charity that does direct cash transfers to the global poor. The storefront was designed by Kagen Zethmayr, one of the members of the Consumer Power Initiative, and coded by Madhav Malhotra, an EA who was inspired by a discussion with Brad in the comments of one of his posts. You may already be familiar with it. At the Giving Store, you can buy products curated from a variety of sources at no-more-than-retail prices, whose sole purpose in being sold in this venue is to generate profits toward a worthy cause without added cost to the consumer.This fits into the broader idea of the Profit for Good model (also called “Guided Consumption”), which the Consumer Power Initiative is trying to promote and normalize. The Giving Store not only allows people to help GiveDirectly by buying goods, it also allows them to send a signal to the broader world that consumers care and want to help in what ways they can.We believe that the general public will choose to buy goods from Profit for Good Companies (“PFGs”) when they can do it without paying more or sacrificing quality. We are confident in the proposition that, however flawed and/or selfish members of the general public may be, they would rather the beneficiaries of effective charities, such as those that help the global poor, get the benefits of profit generated from their activity rather than the people who currently get said benefits: the very wealthy, especially if not even doing the work involved in this activity. The general public at large, however, cannot create this infrastructure or the initial steps for the community. This project will need EAs and others who put additional effort in trying to make the world a better place. Brad discusses the different players in the Profit for Good and their motivations here. Vincent van der Holst reviews some of the advantages Profit for Good firms have here.I gather that EA is all about how we can use our resources and our time to have the highest positive impact in the world. And using consumer sentiment and our economies to further effective charities could really transform the world with everyday choices! We would like to thank you in advance for any purchasing you might do on ...]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Giving Store- 100% Profits to GiveDirectly, published by Ellie Leszczynski on October 26, 2022 on The Effective Altruism Forum.TLDR: Here's a link to a store selling housewares, kids/babies supplies, pet products and out door products where 100% of the profits go to GiveDirectly. Now, shipping is 100% free on all items in the store!My name is Ellie Leszczynski and I am the Social Media Director for the Consumer Power Initiative (also known as CPI), founded by Brad West. I got interested in volunteering because once I heard about the idea and the mission, I knew I just had to be a part of it.After becoming familiar with the Effective Altruism community, I realized the EA principles really inspire me to think of how I can be better in my own life and give back to others. So I went on the EA website and ordered my free book - “Doing Good Better” by William MacAskill. I am now looking forward to reading his new book, “What We Owe the Future," as well. And I really am amazed by the community that has been developed around using our time, money, grit, and creativity to maximize our positive impact for everyone, including even insects raised in alternative farming or consciousness being generated by machine learning.This is a vast world with a lot of moving parts and ideas going on in it, and the Consumer Power Initiative is taking initiative to promote an altruistic mission that doesn't take much to share and spread. As example, I would like to introduce you to an online shop called the Consumer Giving Store. The Giving Store directs all of the profit that it generates to GiveDirectly, a highly-rated charity that does direct cash transfers to the global poor. The storefront was designed by Kagen Zethmayr, one of the members of the Consumer Power Initiative, and coded by Madhav Malhotra, an EA who was inspired by a discussion with Brad in the comments of one of his posts. You may already be familiar with it. At the Giving Store, you can buy products curated from a variety of sources at no-more-than-retail prices, whose sole purpose in being sold in this venue is to generate profits toward a worthy cause without added cost to the consumer.This fits into the broader idea of the Profit for Good model (also called “Guided Consumption”), which the Consumer Power Initiative is trying to promote and normalize. The Giving Store not only allows people to help GiveDirectly by buying goods, it also allows them to send a signal to the broader world that consumers care and want to help in what ways they can.We believe that the general public will choose to buy goods from Profit for Good Companies (“PFGs”) when they can do it without paying more or sacrificing quality. We are confident in the proposition that, however flawed and/or selfish members of the general public may be, they would rather the beneficiaries of effective charities, such as those that help the global poor, get the benefits of profit generated from their activity rather than the people who currently get said benefits: the very wealthy, especially if not even doing the work involved in this activity. The general public at large, however, cannot create this infrastructure or the initial steps for the community. This project will need EAs and others who put additional effort in trying to make the world a better place. Brad discusses the different players in the Profit for Good and their motivations here. Vincent van der Holst reviews some of the advantages Profit for Good firms have here.I gather that EA is all about how we can use our resources and our time to have the highest positive impact in the world. And using consumer sentiment and our economies to further effective charities could really transform the world with everyday choices! We would like to thank you in advance for any purchasing you might do on ...]]>
Ellie Leszczynski https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 04:41 None full 3554
9BDzFqAXu7sqPvRn5_EA EA - Reslab Request for Information: EA hardware projects by Joel Becker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reslab Request for Information: EA hardware projects, published by Joel Becker on October 26, 2022 on The Effective Altruism Forum.TL;DRWe want to hear ideas for physical engineering projects - that is, hardware prototypes moving around atoms (not just bits) - from the EA and/or longtermist community. Submit here!Introducing ReslabThere has been a lot of talk about EA needing to take more large-scale action recently (e.g. Longtermist EA needs more Phase 2 work). And there likely exist cases where it would be helpful for action plans to start with small-scale hardware prototypes/MVPs. We came across such a case recently: as participants on SHELTER Weekend, we realised that the smallest scale MVPs for civilizational shelters often required technical implementation and tests.We are starting Resilience Lab (or ‘Reslab’) to build capacity for prototyping hardware ideas with relevance to areas identified as important to the long-term future.At this stage, we would love to hear from people with more project ideas, or an interest in working on someone else’s submitted idea. We will award $200, $150, and $50 prizes to the first-, second-, and third-best ideas respectively!Long-term goalOur intent is to develop a long-term resource to support EA and/or longtermist hardware projects. If successful, this pilot project would lead to a dedicated space with more full time staff and projects spinning out.Which projects are suitable?Any projects that:Are primarily focused on EA and/or longtermist cause areas, andCould make substantial progress with materials funding of up to $100k and project ownership by a junior mechanical engineer and intern for six months.To give you a better sense of what we’re interested in, here are some ideas we have for possible projects:Scaling of mushroom manufacturing techniques.Positive pressure air filtration system kits.Disinfection treatment testing.PPE development and testing.ALLFED technical development projects (e.g. open source hardware designs for greenhouse construction, seaweed farming, or food processing).Submit your hardware project ideas here!The deadline to be considered for prizes is January 31st, 2023.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joel Becker https://forum.effectivealtruism.org/posts/9BDzFqAXu7sqPvRn5/reslab-request-for-information-ea-hardware-projects-1 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reslab Request for Information: EA hardware projects, published by Joel Becker on October 26, 2022 on The Effective Altruism Forum.TL;DRWe want to hear ideas for physical engineering projects - that is, hardware prototypes moving around atoms (not just bits) - from the EA and/or longtermist community. Submit here!Introducing ReslabThere has been a lot of talk about EA needing to take more large-scale action recently (e.g. Longtermist EA needs more Phase 2 work). And there likely exist cases where it would be helpful for action plans to start with small-scale hardware prototypes/MVPs. We came across such a case recently: as participants on SHELTER Weekend, we realised that the smallest scale MVPs for civilizational shelters often required technical implementation and tests.We are starting Resilience Lab (or ‘Reslab’) to build capacity for prototyping hardware ideas with relevance to areas identified as important to the long-term future.At this stage, we would love to hear from people with more project ideas, or an interest in working on someone else’s submitted idea. We will award $200, $150, and $50 prizes to the first-, second-, and third-best ideas respectively!Long-term goalOur intent is to develop a long-term resource to support EA and/or longtermist hardware projects. If successful, this pilot project would lead to a dedicated space with more full time staff and projects spinning out.Which projects are suitable?Any projects that:Are primarily focused on EA and/or longtermist cause areas, andCould make substantial progress with materials funding of up to $100k and project ownership by a junior mechanical engineer and intern for six months.To give you a better sense of what we’re interested in, here are some ideas we have for possible projects:Scaling of mushroom manufacturing techniques.Positive pressure air filtration system kits.Disinfection treatment testing.PPE development and testing.ALLFED technical development projects (e.g. open source hardware designs for greenhouse construction, seaweed farming, or food processing).Submit your hardware project ideas here!The deadline to be considered for prizes is January 31st, 2023.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Wed, 26 Oct 2022 19:17:29 +0000 EA - Reslab Request for Information: EA hardware projects by Joel Becker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reslab Request for Information: EA hardware projects, published by Joel Becker on October 26, 2022 on The Effective Altruism Forum.TL;DRWe want to hear ideas for physical engineering projects - that is, hardware prototypes moving around atoms (not just bits) - from the EA and/or longtermist community. Submit here!Introducing ReslabThere has been a lot of talk about EA needing to take more large-scale action recently (e.g. Longtermist EA needs more Phase 2 work). And there likely exist cases where it would be helpful for action plans to start with small-scale hardware prototypes/MVPs. We came across such a case recently: as participants on SHELTER Weekend, we realised that the smallest scale MVPs for civilizational shelters often required technical implementation and tests.We are starting Resilience Lab (or ‘Reslab’) to build capacity for prototyping hardware ideas with relevance to areas identified as important to the long-term future.At this stage, we would love to hear from people with more project ideas, or an interest in working on someone else’s submitted idea. We will award $200, $150, and $50 prizes to the first-, second-, and third-best ideas respectively!Long-term goalOur intent is to develop a long-term resource to support EA and/or longtermist hardware projects. If successful, this pilot project would lead to a dedicated space with more full time staff and projects spinning out.Which projects are suitable?Any projects that:Are primarily focused on EA and/or longtermist cause areas, andCould make substantial progress with materials funding of up to $100k and project ownership by a junior mechanical engineer and intern for six months.To give you a better sense of what we’re interested in, here are some ideas we have for possible projects:Scaling of mushroom manufacturing techniques.Positive pressure air filtration system kits.Disinfection treatment testing.PPE development and testing.ALLFED technical development projects (e.g. open source hardware designs for greenhouse construction, seaweed farming, or food processing).Submit your hardware project ideas here!The deadline to be considered for prizes is January 31st, 2023.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reslab Request for Information: EA hardware projects, published by Joel Becker on October 26, 2022 on The Effective Altruism Forum.TL;DRWe want to hear ideas for physical engineering projects - that is, hardware prototypes moving around atoms (not just bits) - from the EA and/or longtermist community. Submit here!Introducing ReslabThere has been a lot of talk about EA needing to take more large-scale action recently (e.g. Longtermist EA needs more Phase 2 work). And there likely exist cases where it would be helpful for action plans to start with small-scale hardware prototypes/MVPs. We came across such a case recently: as participants on SHELTER Weekend, we realised that the smallest scale MVPs for civilizational shelters often required technical implementation and tests.We are starting Resilience Lab (or ‘Reslab’) to build capacity for prototyping hardware ideas with relevance to areas identified as important to the long-term future.At this stage, we would love to hear from people with more project ideas, or an interest in working on someone else’s submitted idea. We will award $200, $150, and $50 prizes to the first-, second-, and third-best ideas respectively!Long-term goalOur intent is to develop a long-term resource to support EA and/or longtermist hardware projects. If successful, this pilot project would lead to a dedicated space with more full time staff and projects spinning out.Which projects are suitable?Any projects that:Are primarily focused on EA and/or longtermist cause areas, andCould make substantial progress with materials funding of up to $100k and project ownership by a junior mechanical engineer and intern for six months.To give you a better sense of what we’re interested in, here are some ideas we have for possible projects:Scaling of mushroom manufacturing techniques.Positive pressure air filtration system kits.Disinfection treatment testing.PPE development and testing.ALLFED technical development projects (e.g. open source hardware designs for greenhouse construction, seaweed farming, or food processing).Submit your hardware project ideas here!The deadline to be considered for prizes is January 31st, 2023.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.]]>
Joel Becker https://storage.googleapis.com/rssfile/images/Nonlinear%20Logo%203000x3000%20-%20EA%20Forum.png 02:30 None full 3556
SmKLaLPsogwtFs6WT EA - Why Africa Needs a Cage Free Model Farm and Producer’s Directory by abilibadanielbaba11@gmail.com abilibadanielbaba11@gmail.com https://forum.effectivealtruism.org/posts/SmKLaLPsogwtFs6WT/why-africa-needs-a-cage-free-model-farm-and-producer-s Mon, 29 May 2023 19:35:07 +0000 EA - Why Africa Needs a Cage Free Model Farm and Producer’s Directory by abilibadanielbaba11@gmail.com abilibadanielbaba11@gmail.com 02:39 no full 101 aHkthQrAfuyNBNFhM EA - Has Russia’s Invasion of Ukraine Changed Your Mind? by JoelMcGuire JoelMcGuire https://forum.effectivealtruism.org/posts/aHkthQrAfuyNBNFhM/has-russia-s-invasion-of-ukraine-changed-your-mind Sun, 28 May 2023 05:23:21 +0000 EA - Has Russia’s Invasion of Ukraine Changed Your Mind? by JoelMcGuire JoelMcGuire 07:00 no full 96 kuopGotdCWeNCDpWi EA - How to evaluate relative impact in high-uncertainty contexts? An update on research methodology & grantmaking of FP Climate by jackva jackva https://forum.effectivealtruism.org/posts/kuopGotdCWeNCDpWi/how-to-evaluate-relative-impact-in-high-uncertainty-contexts Sat, 27 May 2023 14:13:14 +0000 EA - How to evaluate relative impact in high-uncertainty contexts? An update on research methodology & grantmaking of FP Climate by jackva jackva 24:22 no full 93 JjAjJ53mmpQqBeobQ EA - “The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models by Froolow Froolow https://forum.effectivealtruism.org/posts/JjAjJ53mmpQqBeobQ/the-race-to-the-end-of-humanity-structural-uncertainty Sat, 20 May 2023 07:33:32 +0000 EA - “The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models by Froolow Froolow 37:19 no full 66 yLKhz7LgqKJae84PM EA - Announcing the Animal Welfare Library 🦉 by arvomm arvomm https://forum.effectivealtruism.org/posts/yLKhz7LgqKJae84PM/announcing-the-animal-welfare-library Thu, 18 May 2023 05:20:30 +0000 EA - Announcing the Animal Welfare Library 🦉 by arvomm arvomm 04:27 no full 58 LnDkYCHAh5Dw4uvny EA - Probably Good launches improved website & 1-on-1 advising by Probably Good Probably Good https://forum.effectivealtruism.org/posts/LnDkYCHAh5Dw4uvny/probably-good-launches-improved-website-and-1-on-1-advising Tue, 16 May 2023 18:38:56 +0000 EA - Probably Good launches improved website & 1-on-1 advising by Probably Good Probably Good 04:36 no full 44 gDRH2SrN34KdDvmHE EA - Abolishing factory farming in Switzerland: Postmortem by naoki naoki https://forum.effectivealtruism.org/posts/gDRH2SrN34KdDvmHE/abolishing-factory-farming-in-switzerland-postmortem Mon, 05 Jun 2023 18:37:31 +0000 EA - Abolishing factory farming in Switzerland: Postmortem by naoki naoki https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:17 no full 6172 FrukYXHBSaZvb7f9Q EA - A Double Feature on The Extropians by Maxwell Tabarrok Maxwell Tabarrok https://forum.effectivealtruism.org/posts/FrukYXHBSaZvb7f9Q/a-double-feature-on-the-extropians Sun, 04 Jun 2023 18:55:31 +0000 EA - A Double Feature on The Extropians by Maxwell Tabarrok Maxwell Tabarrok https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:12 no full 6166 riYnjstGwK7hREfRo EA - Podcast Interview with David Thorstad on Existential Risk, The Time of Perils, and Billionaire Philanthropy by Nick Anyos Nick_Anyos https://forum.effectivealtruism.org/posts/riYnjstGwK7hREfRo/podcast-interview-with-david-thorstad-on-existential-risk Sun, 04 Jun 2023 18:40:27 +0000 EA - Podcast Interview with David Thorstad on Existential Risk, The Time of Perils, and Billionaire Philanthropy by Nick Anyos Nick_Anyos https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:37 no full 6165 cP7gkDFxgJqHDGdfJ EA - EA and Longtermism: not a crux for saving the world by ClaireZabel ClaireZabel https://forum.effectivealtruism.org/posts/cP7gkDFxgJqHDGdfJ/ea-and-longtermism-not-a-crux-for-saving-the-world Sat, 03 Jun 2023 00:35:02 +0000 EA - EA and Longtermism: not a crux for saving the world by ClaireZabel ClaireZabel https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:58 no full 6158 acBFLTsRw3fqa8WWr EA - Large Study Examining the Effects of Cash Transfer Programs on Population-Level Mortality Rates by nshaff3r nshaff3r https://forum.effectivealtruism.org/posts/acBFLTsRw3fqa8WWr/large-study-examining-the-effects-of-cash-transfer-programs Sat, 03 Jun 2023 00:06:49 +0000 EA - Large Study Examining the Effects of Cash Transfer Programs on Population-Level Mortality Rates by nshaff3r nshaff3r https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:44 no full 6159 bsbf4am9paoTq8Lrb EA - Applications open for AI Safety Fundamentals: Governance Course by Jamie Bernardi Jamie Bernardi https://forum.effectivealtruism.org/posts/bsbf4am9paoTq8Lrb/applications-open-for-ai-safety-fundamentals-governance Fri, 02 Jun 2023 18:46:58 +0000 EA - Applications open for AI Safety Fundamentals: Governance Course by Jamie Bernardi Jamie Bernardi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:03 no full 6154 NK5mDoedYeuorhkAf EA - Lincoln Quirk has joined the EV UK board by Howie Lempel Howie_Lempel https://forum.effectivealtruism.org/posts/NK5mDoedYeuorhkAf/lincoln-quirk-has-joined-the-ev-uk-board Fri, 02 Jun 2023 15:32:06 +0000 EA - Lincoln Quirk has joined the EV UK board by Howie Lempel Howie_Lempel https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:01 no full 6155 5oTr4ExwpvhjrSgFi EA - Things I Learned by Spending Five Thousand Hours In Non-EA Charities by jenn jenn https://forum.effectivealtruism.org/posts/5oTr4ExwpvhjrSgFi/things-i-learned-by-spending-five-thousand-hours-in-non-ea Fri, 02 Jun 2023 12:19:16 +0000 EA - Things I Learned by Spending Five Thousand Hours In Non-EA Charities by jenn jenn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:53 no full 6173 qnYm5MtBJcKyvYvfo EA - An Earn to Learn Pledge by Ben West Ben_West https://forum.effectivealtruism.org/posts/qnYm5MtBJcKyvYvfo/an-earn-to-learn-pledge Fri, 02 Jun 2023 07:15:32 +0000 EA - An Earn to Learn Pledge by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:10 no full 6156 fMCnMCMSEjanhAwpM EA - Probably tell your friends when they make big mistakes by Chi Chi https://forum.effectivealtruism.org/posts/fMCnMCMSEjanhAwpM/probably-tell-your-friends-when-they-make-big-mistakes Thu, 01 Jun 2023 20:12:31 +0000 EA - Probably tell your friends when they make big mistakes by Chi Chi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:27 no full 6140 zuoYcZJh5FEywpvzA EA - Global Innovation Fund projects its impact to be 3x GiveWell Top Charities by jh jh https://forum.effectivealtruism.org/posts/zuoYcZJh5FEywpvzA/global-innovation-fund-projects-its-impact-to-be-3x-givewell Thu, 01 Jun 2023 14:17:21 +0000 EA - Global Innovation Fund projects its impact to be 3x GiveWell Top Charities by jh jh https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:52 no full 6141 3KuCzHJHCz99sf3ZB EA - Beyond Cost-Effectiveness: Insights for Effective Altruism from Health Economics by TomDrake TomDrake https://forum.effectivealtruism.org/posts/3KuCzHJHCz99sf3ZB/beyond-cost-effectiveness-insights-for-effective-altruism Thu, 01 Jun 2023 13:40:48 +0000 EA - Beyond Cost-Effectiveness: Insights for Effective Altruism from Health Economics by TomDrake TomDrake https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:07 no full 6142 RRm8vnmwjWK24ung2 EA - Taxing Tobacco: the intervention that got away (happy World No Tobacco Day) by Yelnats T.J. Yelnats T.J. https://forum.effectivealtruism.org/posts/RRm8vnmwjWK24ung2/taxing-tobacco-the-intervention-that-got-away-happy-world-no Thu, 01 Jun 2023 12:44:33 +0000 EA - Taxing Tobacco: the intervention that got away (happy World No Tobacco Day) by Yelnats T.J. Yelnats T.J. https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:04 no full 6143 qTzc4grzHQbfodTQy EA - New Video: What to Eat in a Global Catastrophe by Christian Pearson Christian Pearson https://forum.effectivealtruism.org/posts/qTzc4grzHQbfodTQy/new-video-what-to-eat-in-a-global-catastrophe Thu, 01 Jun 2023 11:15:25 +0000 EA - New Video: What to Eat in a Global Catastrophe by Christian Pearson Christian Pearson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:29 no full 6134 4p8RpK2fYKFmEcA9w_NL_EA EA - OPTIC [Forecasting Comp] — Pilot Postmortem by OPTIC OPTIC https://forum.effectivealtruism.org/posts/4p8RpK2fYKFmEcA9w/optic-forecasting-comp-pilot-postmortem Sun, 21 May 2023 22:04:37 +0000 EA - OPTIC [Forecasting Comp] — Pilot Postmortem by OPTIC OPTIC https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:04 no full 6020 ejPNrR4pCEPiziqwE_NL_EA EA - Charity Entrepreneurship’s research into large-scale global health interventions by CE CE https://forum.effectivealtruism.org/posts/ejPNrR4pCEPiziqwE/charity-entrepreneurship-s-research-into-large-scale-global Tue, 16 May 2023 15:44:36 +0000 EA - Charity Entrepreneurship’s research into large-scale global health interventions by CE CE https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:05 no full 5972 CAC8zn292C9T5aopw_NL_EA EA - Community Health & Special Projects: Updates and Contacting Us by evemccormick evemccormick https://forum.effectivealtruism.org/posts/CAC8zn292C9T5aopw/community-health-and-special-projects-updates-and-contacting-1 Wed, 10 May 2023 20:49:30 +0000 EA - Community Health & Special Projects: Updates and Contacting Us by evemccormick evemccormick https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:18 no full 5908 pR35WbLmruKdiMn2r_NL_EA EA - Continuous doesn’t mean slow by Tom Davidson 90%. What’s driving this disagreement?One factor that often comes up in discussions is takeoff speeds, which Ajeya mentioned in the previous post. How quickly and suddenly do we move from today’s AI, to “expert-human level” AI[1], to AI that is way beyond human experts and could easily overpower humanity?The final stretch — the transition from expert-human level AI to AI systems that can easily overpower all of us — is especially crucial. If this final transition happens slowly, we could potentially have a long time to get used to the obsolescence regime and use very competent AI to help us solve AI alignment (among other things). But if it happens very quickly, we won’t have much time to ensure superhuman systems are aligned, or to prepare for human obsolescence in any other way.Scott Alexander is optimistic that things might move gradually. In a recent ACX post titled ‘Why I Am Not (As Much Of) A Doomer (As Some People)’, he says:So far we’ve had brisk but still gradual progress in AI; GPT-3 is better than GPT-2, and GPT-4 will probably be better still. Every few years we get a new model which is better than previous models by some predictable amount.Some people (eg Nate Soares) worry there’s a point where this changes. Maybe some jump. could take an AI from IQ 90 to IQ 1000 with no (or very short) period of IQ 200 in between.I’m optimistic because the past few years have provided some evidence for gradual progress.I agree with Scott that recent AI progress has been continuous and fairly predictable, and don’t particularly expect a break in that trend. But I expect the transition to superhuman AI to be very fast, even if it’s continuous.The amount of “compute” (i.e. the number of AI chips) needed to train a powerful AI is much bigger than the amount of compute needed to run it. I estimate that OpenAI has enough compute to run GPT-4 on hundreds of thousands of tasks at once.[2]This ratio will only become more extreme as models get bigger. Once OpenAI trains GPT-5 it’ll have enough compute for GPT-5 to perform millions of tasks in parallel, and once they train GPT-6 it’ll be able to perform tens of millions of tasks in parallel.[3]Now imagine that GPT-6 is as good at AI research as the average OpenAI researcher.[4] OpenAI could expand their AI researcher workforce from hundreds of experts to tens of millions. That’s a mind-boggling large increase, a factor of 100,000. It’s like going from 1000 people to the entire US workforce. What’s more, these AIs could work tirelessly through the night and could potentially “think” much more quickly than human workers.[5] (This change won’t happen all-at-once. I expect speed-ups from less capable AI before this point, as Ajeya wrote in the previous post.)How much faster would AI progress be in this scenario? It’s hard to know. But my best guess, from my recent report on takeoff speeds, is that progress would be much much faster. I think that less than a year after AI is expert-human level at AI research, AI could improve to the point of being able to easily overthrow humanity.This is much faster than the timeline mentioned in the ACX post:if you’re imagining specific years, imagine human-genius-level AI in the 2030s and world...]]> Tom Davidson https://forum.effectivealtruism.org/posts/pR35WbLmruKdiMn2r/continuous-doesn-t-mean-slow 90%. What’s driving this disagreement?One factor that often comes up in discussions is takeoff speeds, which Ajeya mentioned in the previous post. How quickly and suddenly do we move from today’s AI, to “expert-human level” AI[1], to AI that is way beyond human experts and could easily overpower humanity?The final stretch — the transition from expert-human level AI to AI systems that can easily overpower all of us — is especially crucial. If this final transition happens slowly, we could potentially have a long time to get used to the obsolescence regime and use very competent AI to help us solve AI alignment (among other things). But if it happens very quickly, we won’t have much time to ensure superhuman systems are aligned, or to prepare for human obsolescence in any other way.Scott Alexander is optimistic that things might move gradually. In a recent ACX post titled ‘Why I Am Not (As Much Of) A Doomer (As Some People)’, he says:So far we’ve had brisk but still gradual progress in AI; GPT-3 is better than GPT-2, and GPT-4 will probably be better still. Every few years we get a new model which is better than previous models by some predictable amount.Some people (eg Nate Soares) worry there’s a point where this changes. Maybe some jump. could take an AI from IQ 90 to IQ 1000 with no (or very short) period of IQ 200 in between.I’m optimistic because the past few years have provided some evidence for gradual progress.I agree with Scott that recent AI progress has been continuous and fairly predictable, and don’t particularly expect a break in that trend. But I expect the transition to superhuman AI to be very fast, even if it’s continuous.The amount of “compute” (i.e. the number of AI chips) needed to train a powerful AI is much bigger than the amount of compute needed to run it. I estimate that OpenAI has enough compute to run GPT-4 on hundreds of thousands of tasks at once.[2]This ratio will only become more extreme as models get bigger. Once OpenAI trains GPT-5 it’ll have enough compute for GPT-5 to perform millions of tasks in parallel, and once they train GPT-6 it’ll be able to perform tens of millions of tasks in parallel.[3]Now imagine that GPT-6 is as good at AI research as the average OpenAI researcher.[4] OpenAI could expand their AI researcher workforce from hundreds of experts to tens of millions. That’s a mind-boggling large increase, a factor of 100,000. It’s like going from 1000 people to the entire US workforce. What’s more, these AIs could work tirelessly through the night and could potentially “think” much more quickly than human workers.[5] (This change won’t happen all-at-once. I expect speed-ups from less capable AI before this point, as Ajeya wrote in the previous post.)How much faster would AI progress be in this scenario? It’s hard to know. But my best guess, from my recent report on takeoff speeds, is that progress would be much much faster. I think that less than a year after AI is expert-human level at AI research, AI could improve to the point of being able to easily overthrow humanity.This is much faster than the timeline mentioned in the ACX post:if you’re imagining specific years, imagine human-genius-level AI in the 2030s and world...]]> Wed, 10 May 2023 16:01:57 +0000 EA - Continuous doesn’t mean slow by Tom Davidson 90%. What’s driving this disagreement?One factor that often comes up in discussions is takeoff speeds, which Ajeya mentioned in the previous post. How quickly and suddenly do we move from today’s AI, to “expert-human level” AI[1], to AI that is way beyond human experts and could easily overpower humanity?The final stretch — the transition from expert-human level AI to AI systems that can easily overpower all of us — is especially crucial. If this final transition happens slowly, we could potentially have a long time to get used to the obsolescence regime and use very competent AI to help us solve AI alignment (among other things). But if it happens very quickly, we won’t have much time to ensure superhuman systems are aligned, or to prepare for human obsolescence in any other way.Scott Alexander is optimistic that things might move gradually. In a recent ACX post titled ‘Why I Am Not (As Much Of) A Doomer (As Some People)’, he says:So far we’ve had brisk but still gradual progress in AI; GPT-3 is better than GPT-2, and GPT-4 will probably be better still. Every few years we get a new model which is better than previous models by some predictable amount.Some people (eg Nate Soares) worry there’s a point where this changes. Maybe some jump. could take an AI from IQ 90 to IQ 1000 with no (or very short) period of IQ 200 in between.I’m optimistic because the past few years have provided some evidence for gradual progress.I agree with Scott that recent AI progress has been continuous and fairly predictable, and don’t particularly expect a break in that trend. But I expect the transition to superhuman AI to be very fast, even if it’s continuous.The amount of “compute” (i.e. the number of AI chips) needed to train a powerful AI is much bigger than the amount of compute needed to run it. I estimate that OpenAI has enough compute to run GPT-4 on hundreds of thousands of tasks at once.[2]This ratio will only become more extreme as models get bigger. Once OpenAI trains GPT-5 it’ll have enough compute for GPT-5 to perform millions of tasks in parallel, and once they train GPT-6 it’ll be able to perform tens of millions of tasks in parallel.[3]Now imagine that GPT-6 is as good at AI research as the average OpenAI researcher.[4] OpenAI could expand their AI researcher workforce from hundreds of experts to tens of millions. That’s a mind-boggling large increase, a factor of 100,000. It’s like going from 1000 people to the entire US workforce. What’s more, these AIs could work tirelessly through the night and could potentially “think” much more quickly than human workers.[5] (This change won’t happen all-at-once. I expect speed-ups from less capable AI before this point, as Ajeya wrote in the previous post.)How much faster would AI progress be in this scenario? It’s hard to know. But my best guess, from my recent report on takeoff speeds, is that progress would be much much faster. I think that less than a year after AI is expert-human level at AI research, AI could improve to the point of being able to easily overthrow humanity.This is much faster than the timeline mentioned in the ACX post:if you’re imagining specific years, imagine human-genius-level AI in the 2030s and world...]]> 90%. What’s driving this disagreement?One factor that often comes up in discussions is takeoff speeds, which Ajeya mentioned in the previous post. How quickly and suddenly do we move from today’s AI, to “expert-human level” AI[1], to AI that is way beyond human experts and could easily overpower humanity?The final stretch — the transition from expert-human level AI to AI systems that can easily overpower all of us — is especially crucial. If this final transition happens slowly, we could potentially have a long time to get used to the obsolescence regime and use very competent AI to help us solve AI alignment (among other things). But if it happens very quickly, we won’t have much time to ensure superhuman systems are aligned, or to prepare for human obsolescence in any other way.Scott Alexander is optimistic that things might move gradually. In a recent ACX post titled ‘Why I Am Not (As Much Of) A Doomer (As Some People)’, he says:So far we’ve had brisk but still gradual progress in AI; GPT-3 is better than GPT-2, and GPT-4 will probably be better still. Every few years we get a new model which is better than previous models by some predictable amount.Some people (eg Nate Soares) worry there’s a point where this changes. Maybe some jump. could take an AI from IQ 90 to IQ 1000 with no (or very short) period of IQ 200 in between.I’m optimistic because the past few years have provided some evidence for gradual progress.I agree with Scott that recent AI progress has been continuous and fairly predictable, and don’t particularly expect a break in that trend. But I expect the transition to superhuman AI to be very fast, even if it’s continuous.The amount of “compute” (i.e. the number of AI chips) needed to train a powerful AI is much bigger than the amount of compute needed to run it. I estimate that OpenAI has enough compute to run GPT-4 on hundreds of thousands of tasks at once.[2]This ratio will only become more extreme as models get bigger. Once OpenAI trains GPT-5 it’ll have enough compute for GPT-5 to perform millions of tasks in parallel, and once they train GPT-6 it’ll be able to perform tens of millions of tasks in parallel.[3]Now imagine that GPT-6 is as good at AI research as the average OpenAI researcher.[4] OpenAI could expand their AI researcher workforce from hundreds of experts to tens of millions. That’s a mind-boggling large increase, a factor of 100,000. It’s like going from 1000 people to the entire US workforce. What’s more, these AIs could work tirelessly through the night and could potentially “think” much more quickly than human workers.[5] (This change won’t happen all-at-once. I expect speed-ups from less capable AI before this point, as Ajeya wrote in the previous post.)How much faster would AI progress be in this scenario? It’s hard to know. But my best guess, from my recent report on takeoff speeds, is that progress would be much much faster. I think that less than a year after AI is expert-human level at AI research, AI could improve to the point of being able to easily overthrow humanity.This is much faster than the timeline mentioned in the ACX post:if you’re imagining specific years, imagine human-genius-level AI in the 2030s and world...]]> Tom Davidson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:24 no full 5905 sDkHTdBsrpz7teMR2_NL_EA EA - Please don’t vote brigade by Lizka Lizka https://forum.effectivealtruism.org/posts/sDkHTdBsrpz7teMR2/please-don-t-vote-brigade Fri, 05 May 2023 09:29:58 +0000 EA - Please don’t vote brigade by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:24 no full 5851 oJQE6bALqgKKQx4Ek_NL_EA EA - Orgs & Individuals Should Spend ~1 Hour/Month Making More Introductions by Rockwell 6 months, so I'm giving myself 30 minutes to write and publish it. For context, I'm the full-time director of EA NYC, an organization dedicated to building and supporting the effective altruism community in and around New York City.Claim: More organizations and individuals should allot a small amount of time to a particularly high-value activity: 1-1 or 1-org introductions.Outside the scope of this post: I'm not going to make the case here for the value of connections. Many in the community already believe they are extremely valuable, e.g. they're the primary metric CEA uses for its events.Context: I frequently meet people who are deeply engaged in EA, have ended up at an EAG(x), work for an EA or EA-adjacent organization, or are otherwise exciting and active community members, but have no idea there are existing EA groups located in their city or university, focused on their profession, or coordinating across their cause area. When they do learn about these groups, they are often thrilled and eager to plug in. Many times, they've been engaging heavily with other community members who did know, and perhaps even once mentioned such in passing, but didn't think to make a direct introduction. For many, a direct introduction dramatically increases the likelihood of their actually engaging with another individual or organization. As a result, opportunities for valuable connections and community growth are missed.Introductions can be burdensome, but they don't have to be.80,000 Hours80,000 Hours' staff frequently directly connects me to individuals over email who are based in or near NYC, whether or not they've already advised them. In 2022, they sent over 30 emails that followed a format like this:Subject: Rocky [Name]Hi both,Rocky, meet [Name]. [Name] works in [Professional Field] and lives in [Location]. They're interested in [Career Change, Learning about ___ EA Topic, Connecting with Local EAs, Something Else]. Because of this, I thought it might be useful for [Name] to speak to you and others in the EA NYC community.[Name], meet Rocky. Rocky is Director of Effective Altruism NYC. Before that she did [Career Summary] and studied [My Degree]. Effective Altruism NYC works on helping connect and grow the community of New Yorkers who are looking to do the most good through: advising, socials, reading groups, and other activities. I thought she would be a good person for you to speak with about some next steps to get more involved with Effective Altruism.Hope you get to speak soon. Thanks!Best, [80K Staff Member]They typically link to our respected LinkedIn profiles.I then set up one-on-one calls with the individuals they connect me to and many subsequently become involved in EA NYC in various capacities.EA Virtual ProgramsEA Virtual Programs does something similar:Subject: [EA NYC] Your group has a new prospective memberHi,We are the EA Virtual Programs (EA VP) team. A recent EA Virtual Programs participant has expressed an interest in joining your Effective Altruism New York City group.Name: ____Email: ____Background Info: [Involvement in EA] [Profession] [Location] [LinkedIn]Note these connections come from the participants themselves, as they nominated they would like to get in touch with your group specifically in our exit survey.It would be wonderful for them to get a warm welcome to your group. Please do reach out to them in 1-2 weeks preferably. However, no worries if this is not a priority for you now.I hope these connections are valuable!Sincerely,EA Virtual ProgramsIn both cases, the connector receives permission from both parties, something eas...]]> Rockwell https://forum.effectivealtruism.org/posts/oJQE6bALqgKKQx4Ek/orgs-and-individuals-should-spend-1-hour-month-making-more 6 months, so I'm giving myself 30 minutes to write and publish it. For context, I'm the full-time director of EA NYC, an organization dedicated to building and supporting the effective altruism community in and around New York City.Claim: More organizations and individuals should allot a small amount of time to a particularly high-value activity: 1-1 or 1-org introductions.Outside the scope of this post: I'm not going to make the case here for the value of connections. Many in the community already believe they are extremely valuable, e.g. they're the primary metric CEA uses for its events.Context: I frequently meet people who are deeply engaged in EA, have ended up at an EAG(x), work for an EA or EA-adjacent organization, or are otherwise exciting and active community members, but have no idea there are existing EA groups located in their city or university, focused on their profession, or coordinating across their cause area. When they do learn about these groups, they are often thrilled and eager to plug in. Many times, they've been engaging heavily with other community members who did know, and perhaps even once mentioned such in passing, but didn't think to make a direct introduction. For many, a direct introduction dramatically increases the likelihood of their actually engaging with another individual or organization. As a result, opportunities for valuable connections and community growth are missed.Introductions can be burdensome, but they don't have to be.80,000 Hours80,000 Hours' staff frequently directly connects me to individuals over email who are based in or near NYC, whether or not they've already advised them. In 2022, they sent over 30 emails that followed a format like this:Subject: Rocky [Name]Hi both,Rocky, meet [Name]. [Name] works in [Professional Field] and lives in [Location]. They're interested in [Career Change, Learning about ___ EA Topic, Connecting with Local EAs, Something Else]. Because of this, I thought it might be useful for [Name] to speak to you and others in the EA NYC community.[Name], meet Rocky. Rocky is Director of Effective Altruism NYC. Before that she did [Career Summary] and studied [My Degree]. Effective Altruism NYC works on helping connect and grow the community of New Yorkers who are looking to do the most good through: advising, socials, reading groups, and other activities. I thought she would be a good person for you to speak with about some next steps to get more involved with Effective Altruism.Hope you get to speak soon. Thanks!Best, [80K Staff Member]They typically link to our respected LinkedIn profiles.I then set up one-on-one calls with the individuals they connect me to and many subsequently become involved in EA NYC in various capacities.EA Virtual ProgramsEA Virtual Programs does something similar:Subject: [EA NYC] Your group has a new prospective memberHi,We are the EA Virtual Programs (EA VP) team. A recent EA Virtual Programs participant has expressed an interest in joining your Effective Altruism New York City group.Name: ____Email: ____Background Info: [Involvement in EA] [Profession] [Location] [LinkedIn]Note these connections come from the participants themselves, as they nominated they would like to get in touch with your group specifically in our exit survey.It would be wonderful for them to get a warm welcome to your group. Please do reach out to them in 1-2 weeks preferably. However, no worries if this is not a priority for you now.I hope these connections are valuable!Sincerely,EA Virtual ProgramsIn both cases, the connector receives permission from both parties, something eas...]]> Fri, 05 May 2023 09:29:07 +0000 EA - Orgs & Individuals Should Spend ~1 Hour/Month Making More Introductions by Rockwell 6 months, so I'm giving myself 30 minutes to write and publish it. For context, I'm the full-time director of EA NYC, an organization dedicated to building and supporting the effective altruism community in and around New York City.Claim: More organizations and individuals should allot a small amount of time to a particularly high-value activity: 1-1 or 1-org introductions.Outside the scope of this post: I'm not going to make the case here for the value of connections. Many in the community already believe they are extremely valuable, e.g. they're the primary metric CEA uses for its events.Context: I frequently meet people who are deeply engaged in EA, have ended up at an EAG(x), work for an EA or EA-adjacent organization, or are otherwise exciting and active community members, but have no idea there are existing EA groups located in their city or university, focused on their profession, or coordinating across their cause area. When they do learn about these groups, they are often thrilled and eager to plug in. Many times, they've been engaging heavily with other community members who did know, and perhaps even once mentioned such in passing, but didn't think to make a direct introduction. For many, a direct introduction dramatically increases the likelihood of their actually engaging with another individual or organization. As a result, opportunities for valuable connections and community growth are missed.Introductions can be burdensome, but they don't have to be.80,000 Hours80,000 Hours' staff frequently directly connects me to individuals over email who are based in or near NYC, whether or not they've already advised them. In 2022, they sent over 30 emails that followed a format like this:Subject: Rocky [Name]Hi both,Rocky, meet [Name]. [Name] works in [Professional Field] and lives in [Location]. They're interested in [Career Change, Learning about ___ EA Topic, Connecting with Local EAs, Something Else]. Because of this, I thought it might be useful for [Name] to speak to you and others in the EA NYC community.[Name], meet Rocky. Rocky is Director of Effective Altruism NYC. Before that she did [Career Summary] and studied [My Degree]. Effective Altruism NYC works on helping connect and grow the community of New Yorkers who are looking to do the most good through: advising, socials, reading groups, and other activities. I thought she would be a good person for you to speak with about some next steps to get more involved with Effective Altruism.Hope you get to speak soon. Thanks!Best, [80K Staff Member]They typically link to our respected LinkedIn profiles.I then set up one-on-one calls with the individuals they connect me to and many subsequently become involved in EA NYC in various capacities.EA Virtual ProgramsEA Virtual Programs does something similar:Subject: [EA NYC] Your group has a new prospective memberHi,We are the EA Virtual Programs (EA VP) team. A recent EA Virtual Programs participant has expressed an interest in joining your Effective Altruism New York City group.Name: ____Email: ____Background Info: [Involvement in EA] [Profession] [Location] [LinkedIn]Note these connections come from the participants themselves, as they nominated they would like to get in touch with your group specifically in our exit survey.It would be wonderful for them to get a warm welcome to your group. Please do reach out to them in 1-2 weeks preferably. However, no worries if this is not a priority for you now.I hope these connections are valuable!Sincerely,EA Virtual ProgramsIn both cases, the connector receives permission from both parties, something eas...]]> 6 months, so I'm giving myself 30 minutes to write and publish it. For context, I'm the full-time director of EA NYC, an organization dedicated to building and supporting the effective altruism community in and around New York City.Claim: More organizations and individuals should allot a small amount of time to a particularly high-value activity: 1-1 or 1-org introductions.Outside the scope of this post: I'm not going to make the case here for the value of connections. Many in the community already believe they are extremely valuable, e.g. they're the primary metric CEA uses for its events.Context: I frequently meet people who are deeply engaged in EA, have ended up at an EAG(x), work for an EA or EA-adjacent organization, or are otherwise exciting and active community members, but have no idea there are existing EA groups located in their city or university, focused on their profession, or coordinating across their cause area. When they do learn about these groups, they are often thrilled and eager to plug in. Many times, they've been engaging heavily with other community members who did know, and perhaps even once mentioned such in passing, but didn't think to make a direct introduction. For many, a direct introduction dramatically increases the likelihood of their actually engaging with another individual or organization. As a result, opportunities for valuable connections and community growth are missed.Introductions can be burdensome, but they don't have to be.80,000 Hours80,000 Hours' staff frequently directly connects me to individuals over email who are based in or near NYC, whether or not they've already advised them. In 2022, they sent over 30 emails that followed a format like this:Subject: Rocky [Name]Hi both,Rocky, meet [Name]. [Name] works in [Professional Field] and lives in [Location]. They're interested in [Career Change, Learning about ___ EA Topic, Connecting with Local EAs, Something Else]. Because of this, I thought it might be useful for [Name] to speak to you and others in the EA NYC community.[Name], meet Rocky. Rocky is Director of Effective Altruism NYC. Before that she did [Career Summary] and studied [My Degree]. Effective Altruism NYC works on helping connect and grow the community of New Yorkers who are looking to do the most good through: advising, socials, reading groups, and other activities. I thought she would be a good person for you to speak with about some next steps to get more involved with Effective Altruism.Hope you get to speak soon. Thanks!Best, [80K Staff Member]They typically link to our respected LinkedIn profiles.I then set up one-on-one calls with the individuals they connect me to and many subsequently become involved in EA NYC in various capacities.EA Virtual ProgramsEA Virtual Programs does something similar:Subject: [EA NYC] Your group has a new prospective memberHi,We are the EA Virtual Programs (EA VP) team. A recent EA Virtual Programs participant has expressed an interest in joining your Effective Altruism New York City group.Name: ____Email: ____Background Info: [Involvement in EA] [Profession] [Location] [LinkedIn]Note these connections come from the participants themselves, as they nominated they would like to get in touch with your group specifically in our exit survey.It would be wonderful for them to get a warm welcome to your group. Please do reach out to them in 1-2 weeks preferably. However, no worries if this is not a priority for you now.I hope these connections are valuable!Sincerely,EA Virtual ProgramsIn both cases, the connector receives permission from both parties, something eas...]]> Rockwell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:21 no full 5852 xen7oLdHwoHDTG4hR_NL_EA EA - Legal Priorities Project – Annual Report 2022 by Legal Priorities Project Legal Priorities Project https://forum.effectivealtruism.org/posts/xen7oLdHwoHDTG4hR/legal-priorities-project-annual-report-2022 Tue, 02 May 2023 15:50:46 +0000 EA - Legal Priorities Project – Annual Report 2022 by Legal Priorities Project Legal Priorities Project https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 58:27 no full 5809 e9htD7txe8RDdcehm_NL_EA EA - Exploring Metaculus’s AI Track Record by Peter Scoblic Peter Scoblic https://forum.effectivealtruism.org/posts/e9htD7txe8RDdcehm/exploring-metaculus-s-ai-track-record Tue, 02 May 2023 11:01:19 +0000 EA - Exploring Metaculus’s AI Track Record by Peter Scoblic Peter Scoblic https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:00 no full 5803 pPQ5wqEPxLexCqGkL_NL_EA EA - [Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead by Darius1 Darius1 https://forum.effectivealtruism.org/posts/pPQ5wqEPxLexCqGkL/linkpost-the-godfather-of-a-i-leaves-google-and-warns-of Tue, 02 May 2023 03:45:17 +0000 EA - [Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead by Darius1 Darius1 https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:10 no full 5798 iqbdXmrNxxgzgNxPC_NL_EA EA - Introducing Stanford’s new Humane & Sustainable Food Lab by MMathur MMathur https://forum.effectivealtruism.org/posts/iqbdXmrNxxgzgNxPC/introducing-stanford-s-new-humane-and-sustainable-food-lab Sun, 30 Apr 2023 15:47:21 +0000 EA - Introducing Stanford’s new Humane & Sustainable Food Lab by MMathur MMathur https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:54 no full 5779 EpyJMXZTqLDiKaXzu_NL_EA EA - If you’d like to do something about sexual misconduct and don’t know what to do. by Habiba Habiba https://forum.effectivealtruism.org/posts/EpyJMXZTqLDiKaXzu/if-you-d-like-to-do-something-about-sexual-misconduct-and Sun, 30 Apr 2023 10:28:40 +0000 EA - If you’d like to do something about sexual misconduct and don’t know what to do. by Habiba Habiba https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 33:09 no full 5775 pZmjeb5RddWqsjp2j_NL_EA EA - New open letter on AI — "Include Consciousness Research" by Jamie Harris Jamie Harris https://forum.effectivealtruism.org/posts/pZmjeb5RddWqsjp2j/new-open-letter-on-ai-include-consciousness-research Sat, 29 Apr 2023 12:08:58 +0000 EA - New open letter on AI — "Include Consciousness Research" by Jamie Harris Jamie Harris https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:22 no full 5771 vWRP8g8pqN9np4Aow_NL_EA EA - What are work practices that you’ve adopted that you now think are underrated? by Lizka Lizka https://forum.effectivealtruism.org/posts/vWRP8g8pqN9np4Aow/what-are-work-practices-that-you-ve-adopted-that-you-now Thu, 27 Apr 2023 22:45:41 +0000 EA - What are work practices that you’ve adopted that you now think are underrated? by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:01 no full 5759 TCsanzwKGqfBBTye9_NL_EA EA - The 'Wild' and 'Wacky' Claims of Karnofsky’s ‘Most Important Century’ by Spencer Becker-Kahn Spencer Becker-Kahn https://forum.effectivealtruism.org/posts/TCsanzwKGqfBBTye9/the-wild-and-wacky-claims-of-karnofsky-s-most-important Wed, 26 Apr 2023 17:03:38 +0000 EA - The 'Wild' and 'Wacky' Claims of Karnofsky’s ‘Most Important Century’ by Spencer Becker-Kahn Spencer Becker-Kahn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 25:58 no full 5736 epTvpAEfCY74CMdMv_NL_EA EA - Student competition for drafting a treaty on moratorium of large-scale AI capabilities R&D by Nayanika Nayanika https://forum.effectivealtruism.org/posts/epTvpAEfCY74CMdMv/student-competition-for-drafting-a-treaty-on-moratorium-of Mon, 24 Apr 2023 22:29:55 +0000 EA - Student competition for drafting a treaty on moratorium of large-scale AI capabilities R&D by Nayanika Nayanika https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:07 no full 5729 fZkeMsH2YETGfyDrL_NL_EA EA - Hiding in Plain Sight: Mexico’s Octopus Farm/Research Facade by Tessa @ ALI Tessa @ ALI https://forum.effectivealtruism.org/posts/fZkeMsH2YETGfyDrL/hiding-in-plain-sight-mexico-s-octopus-farm-research-facade Wed, 19 Apr 2023 18:01:36 +0000 EA - Hiding in Plain Sight: Mexico’s Octopus Farm/Research Facade by Tessa @ ALI Tessa @ ALI https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:05 no full 5659 xnBF2vcQaZgmhidyb_NL_EA EA - List of Short-Term (<15 hours) Biosecurity Projects to Test Your Fit by Sofya Lebedeva Sofya Lebedeva https://forum.effectivealtruism.org/posts/xnBF2vcQaZgmhidyb/list-of-short-term-less-than-15-hours-biosecurity-projects Mon, 17 Apr 2023 08:25:13 +0000 EA - List of Short-Term (<15 hours) Biosecurity Projects to Test Your Fit by Sofya Lebedeva Sofya Lebedeva https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:04 no full 5633 mLAkwFb9AnZ4uDanJ_NL_EA EA - Giving Guide for Student Organisations – An ineffective outreach project by Karla Still Karla Still https://forum.effectivealtruism.org/posts/mLAkwFb9AnZ4uDanJ/giving-guide-for-student-organisations-an-ineffective Sun, 16 Apr 2023 02:12:32 +0000 EA - Giving Guide for Student Organisations – An ineffective outreach project by Karla Still Karla Still https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:48 no full 5622 azCvqqjJ7Dkrv5XBH_NL_EA EA - Announcing Epoch’s dashboard of key trends and figures in Machine Learning by Jaime Sevilla Jaime Sevilla https://forum.effectivealtruism.org/posts/azCvqqjJ7Dkrv5XBH/announcing-epoch-s-dashboard-of-key-trends-and-figures-in Thu, 13 Apr 2023 13:06:35 +0000 EA - Announcing Epoch’s dashboard of key trends and figures in Machine Learning by Jaime Sevilla Jaime Sevilla https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:23 no full 5589 bT4ZKn6AwKWJJnzMv_NL_EA EA - Announcing a new animal advocacy podcast: How I Learned to Love Shrimp by James Özden James Özden https://forum.effectivealtruism.org/posts/bT4ZKn6AwKWJJnzMv/announcing-a-new-animal-advocacy-podcast-how-i-learned-to Thu, 13 Apr 2023 11:52:50 +0000 EA - Announcing a new animal advocacy podcast: How I Learned to Love Shrimp by James Özden James Özden https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:30 no full 5587 zC5CNAv8dCMyhtxW2_NL_EA EA - Trans Rescue’s operations in Uganda: high impact giving opportunity by David D David D https://forum.effectivealtruism.org/posts/zC5CNAv8dCMyhtxW2/trans-rescue-s-operations-in-uganda-high-impact-giving Wed, 12 Apr 2023 09:33:50 +0000 EA - Trans Rescue’s operations in Uganda: high impact giving opportunity by David D David D https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:24 no full 5568 Sa4ahq8AGTniuuvjE_NL_EA EA - [Linkpost] 538 Politics Podcast on AI risk & politics by jackva jackva https://forum.effectivealtruism.org/posts/Sa4ahq8AGTniuuvjE/linkpost-538-politics-podcast-on-ai-risk-and-politics Tue, 11 Apr 2023 21:43:56 +0000 EA - [Linkpost] 538 Politics Podcast on AI risk & politics by jackva jackva https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:14 no full 5557 BifFtXqaSyqjvDDdb_NL_EA EA - Nuclear risk, its potential long-term impacts, & doing research on that: An introductory talk by MichaelA MichaelA https://forum.effectivealtruism.org/posts/BifFtXqaSyqjvDDdb/nuclear-risk-its-potential-long-term-impacts-and-doing Mon, 10 Apr 2023 17:57:04 +0000 EA - Nuclear risk, its potential long-term impacts, & doing research on that: An introductory talk by MichaelA MichaelA https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:49 no full 5544 5HdE2JikwJLzwzhag_NL_EA EA - EA & “The correct response to uncertainty is not half-speed” by Lizka Lizka https://forum.effectivealtruism.org/posts/5HdE2JikwJLzwzhag/ea-and-the-correct-response-to-uncertainty-is-not-half-speed Sun, 09 Apr 2023 21:04:53 +0000 EA - EA & “The correct response to uncertainty is not half-speed” by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:34 no full 5537 3wBCKM3D2dXkXnpWY_NL_EA EA - Announcing CEA’s Interim Managing Director by Ben West Ben West https://forum.effectivealtruism.org/posts/3wBCKM3D2dXkXnpWY/announcing-cea-s-interim-managing-director Tue, 04 Apr 2023 19:03:43 +0000 EA - Announcing CEA’s Interim Managing Director by Ben West Ben West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:00 no full 5483 eQZRorvDRQ6HEsQtj_NL_EA EA - It’s "The EA-Adjacent Forum" now by Lizka Lizka https://forum.effectivealtruism.org/posts/eQZRorvDRQ6HEsQtj/it-s-the-ea-adjacent-forum-now Sat, 01 Apr 2023 20:49:29 +0000 EA - It’s "The EA-Adjacent Forum" now by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:39 no full 5450 vLctZutbRttQPnWnw_NL_EA EA - Meta Directory of April Fool’s Ideas by Yonatan Cale Yonatan Cale https://forum.effectivealtruism.org/posts/vLctZutbRttQPnWnw/meta-directory-of-april-fool-s-ideas Sat, 01 Apr 2023 19:53:41 +0000 EA - Meta Directory of April Fool’s Ideas by Yonatan Cale Yonatan Cale https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:45 no full 5453 A4DFeE5xNc933fyjt_NL_EA EA - Honestly I’m just here for the drama by electroswing electroswing https://forum.effectivealtruism.org/posts/A4DFeE5xNc933fyjt/honestly-i-m-just-here-for-the-drama Sat, 01 Apr 2023 17:55:22 +0000 EA - Honestly I’m just here for the drama by electroswing electroswing https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:24 no full 5454 EipE75vsDuD7bdJar_NL_EA EA - GWWC's 2020–2022 Impact evaluation (executive summary) by Michael Townsend Michael Townsend https://forum.effectivealtruism.org/posts/EipE75vsDuD7bdJar/gwwc-s-2020-2022-impact-evaluation-executive-summary Fri, 31 Mar 2023 07:54:02 +0000 EA - GWWC's 2020–2022 Impact evaluation (executive summary) by Michael Townsend Michael Townsend https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:31 no full 5420 FejnEgjfb7LsdzsBw_NL_EA EA - What's surprised me as an entry-level generalist at Open Phil & my recommendations to early career professionals by Sam Anschell Sam Anschell https://forum.effectivealtruism.org/posts/FejnEgjfb7LsdzsBw/what-s-surprised-me-as-an-entry-level-generalist-at-open Thu, 30 Mar 2023 23:43:04 +0000 EA - What's surprised me as an entry-level generalist at Open Phil & my recommendations to early career professionals by Sam Anschell Sam Anschell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 19:34 no full 5422 wahhPpkKnCoGSMZzq_NL_EA EA - The billionaires’ philanthropy index by brb243 brb243 https://forum.effectivealtruism.org/posts/wahhPpkKnCoGSMZzq/the-billionaires-philanthropy-index Wed, 29 Mar 2023 17:42:39 +0000 EA - The billionaires’ philanthropy index by brb243 brb243 https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 48:58 no full 5397 5LNxeWFdoynvgZeik_NL_EA EA - Nobody’s on the ball on AGI alignment by leopold leopold https://forum.effectivealtruism.org/posts/5LNxeWFdoynvgZeik/nobody-s-on-the-ball-on-agi-alignment Wed, 29 Mar 2023 14:47:29 +0000 EA - Nobody’s on the ball on AGI alignment by leopold leopold https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:53 no full 5400 mMgkuNSmxFWia83ej_NL_EA EA - EA & LW Forum Weekly Summary (20th - 26th March 2023) by Zoe Williams Zoe Williams https://forum.effectivealtruism.org/posts/mMgkuNSmxFWia83ej/ea-and-lw-forum-weekly-summary-20th-26th-march-2023 Tue, 28 Mar 2023 21:33:09 +0000 EA - EA & LW Forum Weekly Summary (20th - 26th March 2023) by Zoe Williams Zoe Williams https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:29 no full 5380 zeL52MFB2Pkq9Kdme_NL_EA EA - Exploring Metaculus’ community predictions by Vasco Grilo Vasco Grilo https://forum.effectivealtruism.org/posts/zeL52MFB2Pkq9Kdme/exploring-metaculus-community-predictions Fri, 24 Mar 2023 19:03:43 +0000 EA - Exploring Metaculus’ community predictions by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 23:21 no full 5347 YrXZ3pRvFuH8SJaay_NL_EA EA - Reflecting on the Last Year — Lessons for EA (opening keynote at EAG) by Toby Ord Toby Ord https://forum.effectivealtruism.org/posts/YrXZ3pRvFuH8SJaay/reflecting-on-the-last-year-lessons-for-ea-opening-keynote Fri, 24 Mar 2023 16:14:36 +0000 EA - Reflecting on the Last Year — Lessons for EA (opening keynote at EAG) by Toby Ord Toby Ord https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 24:26 no full 5344 uBSwt2vEGm4RisLjf_NL_EA EA - Holden Karnofsky’s recent comments on FTX by Lizka Lizka https://forum.effectivealtruism.org/posts/uBSwt2vEGm4RisLjf/holden-karnofsky-s-recent-comments-on-ftx Fri, 24 Mar 2023 12:50:54 +0000 EA - Holden Karnofsky’s recent comments on FTX by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:49 no full 5346 KZuyBT3Fi8umHc6zH_NL_EA EA - Highlights from LPP’s field-building efforts by Legal Priorities Project Legal Priorities Project https://forum.effectivealtruism.org/posts/KZuyBT3Fi8umHc6zH/highlights-from-lpp-s-field-building-efforts Thu, 23 Mar 2023 15:57:18 +0000 EA - Highlights from LPP’s field-building efforts by Legal Priorities Project Legal Priorities Project https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:52 no full 5331 hHCxhFK9ZrKEhFQrL_NL_EA EA - Transcript: NBC Nightly News: AI ‘race to recklessness’ w/ Tristan Harris, Aza Raskin by WilliamKiely WilliamKiely https://forum.effectivealtruism.org/posts/hHCxhFK9ZrKEhFQrL/transcript-abc-nightly-news-ai-race-to-recklessness-w Thu, 23 Mar 2023 12:45:44 +0000 EA - Transcript: NBC Nightly News: AI ‘race to recklessness’ w/ Tristan Harris, Aza Raskin by WilliamKiely WilliamKiely https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:34 no full 5332 sLB6tEovv7jDkEghG_NL_EA EA - Design changes & the community section (Forum update March 2023) by Lizka Lizka https://forum.effectivealtruism.org/posts/sLB6tEovv7jDkEghG/design-changes-and-the-community-section-forum-update-march Tue, 21 Mar 2023 22:44:41 +0000 EA - Design changes & the community section (Forum update March 2023) by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:04 no full 5309 46tXkg838EZ6uie45_NL_EA EA - My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" by Quintin Pope Quintin Pope https://forum.effectivealtruism.org/posts/46tXkg838EZ6uie45/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky Tue, 21 Mar 2023 06:08:17 +0000 EA - My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" by Quintin Pope Quintin Pope https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 52:37 no full 5304 BcbmKitFms6NbTMKt_NL_EA EA - Tensions between different approaches to doing good by James Özden James Özden https://forum.effectivealtruism.org/posts/BcbmKitFms6NbTMKt/tensions-between-different-approaches-to-doing-good Mon, 20 Mar 2023 10:16:31 +0000 EA - Tensions between different approaches to doing good by James Özden James Özden https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:03 no full 5293 eAaeeuEd4j6oJ3Ep5_NL_EA EA - GPT-4 is out: thread (& links) by Lizka Lizka https://forum.effectivealtruism.org/posts/eAaeeuEd4j6oJ3Ep5/gpt-4-is-out-thread-and-links Tue, 14 Mar 2023 21:20:38 +0000 EA - GPT-4 is out: thread (& links) by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:47 no full 5218 Gcnkp4qZJDownkLTj_NL_EA EA - Two University Group Organizer Opportunities: Pre-EAG London Summit & Summer Internship by Joris P Joris P https://forum.effectivealtruism.org/posts/Gcnkp4qZJDownkLTj/two-university-group-organizer-opportunities-pre-eag-london Tue, 14 Mar 2023 04:45:02 +0000 EA - Two University Group Organizer Opportunities: Pre-EAG London Summit & Summer Internship by Joris P Joris P https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:42 no full 5213 tedrwwpXgpBEi3Ecc_NL_EA EA - 80,000 Hours two-year review: 2021–2022 by 80000 Hours 80000 Hours https://forum.effectivealtruism.org/posts/tedrwwpXgpBEi3Ecc/80-000-hours-two-year-review-2021-2022 Wed, 08 Mar 2023 19:22:45 +0000 EA - 80,000 Hours two-year review: 2021–2022 by 80000 Hours 80000 Hours https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:21 no full 5146 DztcCwrAGo6gzCp3o_NL_EA EA - On the First Anniversary of my Best Friend’s Death by Rockwell Rockwell https://forum.effectivealtruism.org/posts/DztcCwrAGo6gzCp3o/on-the-first-anniversary-of-my-best-friend-s-death Mon, 06 Mar 2023 08:22:16 +0000 EA - On the First Anniversary of my Best Friend’s Death by Rockwell Rockwell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:35 no full 5116 ZyjARuFsDBTFXeMP4_NL_EA EA - Misalignment Museum opens in San Francisco: ‘Sorry for killing most of humanity’ by Michael Huang Michael Huang https://forum.effectivealtruism.org/posts/ZyjARuFsDBTFXeMP4/misalignment-museum-opens-in-san-francisco-sorry-for-killing Sat, 04 Mar 2023 15:57:02 +0000 EA - Misalignment Museum opens in San Francisco: ‘Sorry for killing most of humanity’ by Michael Huang Michael Huang https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:10 no full 5104 JcdQxz9gpd9Qmskih_NL_EA EA - What Has EAGxLatAm 2023 Taught Us: Retrospective & Thoughts on Measuring the Impact of EA Conferences by Hugo Ikta 10 new connections10% of participants will generate >20 new connectionsMake sure ~30% of participants are highly engaged EAWe also aimed at limiting unessential spending that would not drastically impact our main objective or our LTR (Likelihood To Recommend) score.Actual results77% of participants generated >10 new connections (below expectations)16% of participants generate >20 new connections (above expectations)~30% of participants were highly engaged EA (goal reached)SpendingWe spent a total of USD 242,732 to make this event happen (including travel grants but not our team’s wages). That’s USD 1089 per participant.DetailsTravel grants: USD 115,884Venue & Catering: USD 98,524Internet: USD 6,667Speakers’ Hotel: USD 9,837Hoodies: USD 5,536Photos & Videos: USD 5,532Other: USD 813What went well and why?We didn’t face any major issuesNothing went terribly...]]> Hugo Ikta https://forum.effectivealtruism.org/posts/JcdQxz9gpd9Qmskih/what-has-eagxlatam-2023-taught-us-retrospective-and-thoughts 10 new connections10% of participants will generate >20 new connectionsMake sure ~30% of participants are highly engaged EAWe also aimed at limiting unessential spending that would not drastically impact our main objective or our LTR (Likelihood To Recommend) score.Actual results77% of participants generated >10 new connections (below expectations)16% of participants generate >20 new connections (above expectations)~30% of participants were highly engaged EA (goal reached)SpendingWe spent a total of USD 242,732 to make this event happen (including travel grants but not our team’s wages). That’s USD 1089 per participant.DetailsTravel grants: USD 115,884Venue & Catering: USD 98,524Internet: USD 6,667Speakers’ Hotel: USD 9,837Hoodies: USD 5,536Photos & Videos: USD 5,532Other: USD 813What went well and why?We didn’t face any major issuesNothing went terribly...]]> Fri, 03 Mar 2023 18:30:33 +0000 EA - What Has EAGxLatAm 2023 Taught Us: Retrospective & Thoughts on Measuring the Impact of EA Conferences by Hugo Ikta 10 new connections10% of participants will generate >20 new connectionsMake sure ~30% of participants are highly engaged EAWe also aimed at limiting unessential spending that would not drastically impact our main objective or our LTR (Likelihood To Recommend) score.Actual results77% of participants generated >10 new connections (below expectations)16% of participants generate >20 new connections (above expectations)~30% of participants were highly engaged EA (goal reached)SpendingWe spent a total of USD 242,732 to make this event happen (including travel grants but not our team’s wages). That’s USD 1089 per participant.DetailsTravel grants: USD 115,884Venue & Catering: USD 98,524Internet: USD 6,667Speakers’ Hotel: USD 9,837Hoodies: USD 5,536Photos & Videos: USD 5,532Other: USD 813What went well and why?We didn’t face any major issuesNothing went terribly...]]> 10 new connections10% of participants will generate >20 new connectionsMake sure ~30% of participants are highly engaged EAWe also aimed at limiting unessential spending that would not drastically impact our main objective or our LTR (Likelihood To Recommend) score.Actual results77% of participants generated >10 new connections (below expectations)16% of participants generate >20 new connections (above expectations)~30% of participants were highly engaged EA (goal reached)SpendingWe spent a total of USD 242,732 to make this event happen (including travel grants but not our team’s wages). That’s USD 1089 per participant.DetailsTravel grants: USD 115,884Venue & Catering: USD 98,524Internet: USD 6,667Speakers’ Hotel: USD 9,837Hoodies: USD 5,536Photos & Videos: USD 5,532Other: USD 813What went well and why?We didn’t face any major issuesNothing went terribly...]]> Hugo Ikta https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:22 no full 5093 tCkBsT6cAw6LEKAbm_NL_EA EA - Scoring forecasts from the 2016 “Expert Survey on Progress in AI” by PatrickL 90% chance.So they expected 6-17 of these milestones to have happened by now. By eyeballing the forecasts for each milestone, my estimate is that they expected ~9 to have happened. I did not estimate the implied probability distribut...]]> PatrickL https://forum.effectivealtruism.org/posts/tCkBsT6cAw6LEKAbm/scoring-forecasts-from-the-2016-expert-survey-on-progress-in 90% chance.So they expected 6-17 of these milestones to have happened by now. By eyeballing the forecasts for each milestone, my estimate is that they expected ~9 to have happened. I did not estimate the implied probability distribut...]]> Wed, 01 Mar 2023 16:58:36 +0000 EA - Scoring forecasts from the 2016 “Expert Survey on Progress in AI” by PatrickL 90% chance.So they expected 6-17 of these milestones to have happened by now. By eyeballing the forecasts for each milestone, my estimate is that they expected ~9 to have happened. I did not estimate the implied probability distribut...]]> 90% chance.So they expected 6-17 of these milestones to have happened by now. By eyeballing the forecasts for each milestone, my estimate is that they expected ~9 to have happened. I did not estimate the implied probability distribut...]]> PatrickL https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:03 no full 5069 5KsrEWEbc4mwzMTLp_NL_EA EA - Some more projects I’d like to see by finm finm https://forum.effectivealtruism.org/posts/5KsrEWEbc4mwzMTLp/some-more-projects-i-d-like-to-see Sun, 26 Feb 2023 09:59:49 +0000 EA - Some more projects I’d like to see by finm finm https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 39:35 no full 5033 cTfEv6zAakfyxrbQu_NL_EA EA - EA content in French: Announcing EA France’s translation project and our translation coordination initiative by Louise Verkin Louise Verkin https://forum.effectivealtruism.org/posts/cTfEv6zAakfyxrbQu/ea-content-in-french-announcing-ea-france-s-translation Fri, 24 Feb 2023 23:41:29 +0000 EA - EA content in French: Announcing EA France’s translation project and our translation coordination initiative by Louise Verkin Louise Verkin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:56 no full 5024 gr4epkwe5WoYJXF32_NL_EA EA - Why I don’t agree with HLI’s estimate of household spillovers from therapy by JamesSnowden JamesSnowden https://forum.effectivealtruism.org/posts/gr4epkwe5WoYJXF32/why-i-don-t-agree-with-hli-s-estimate-of-household Fri, 24 Feb 2023 19:59:00 +0000 EA - Why I don’t agree with HLI’s estimate of household spillovers from therapy by JamesSnowden JamesSnowden https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:49 no full 5013 7LqFyJWxGCZZXte3N_NL_EA EA - Summary of “Animal Rights Activism Trends to Look Out for in 2023” by Animal Agriculture Alliance by Aashish Khimasia Aashish Khimasia https://forum.effectivealtruism.org/posts/7LqFyJWxGCZZXte3N/summary-of-animal-rights-activism-trends-to-look-out-for-in Fri, 24 Feb 2023 18:33:23 +0000 EA - Summary of “Animal Rights Activism Trends to Look Out for in 2023” by Animal Agriculture Alliance by Aashish Khimasia Aashish Khimasia https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:56 no full 5016 5uZiBK4h5WcccjA2R_NL_EA EA - EA is too New & Important to Schism by Wil Perkins Wil Perkins https://forum.effectivealtruism.org/posts/5uZiBK4h5WcccjA2R/ea-is-too-new-and-important-to-schism Thu, 23 Feb 2023 18:48:07 +0000 EA - EA is too New & Important to Schism by Wil Perkins Wil Perkins https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:15 no full 4998 kPGqj8BMDzGpvFCCH_NL_EA EA - The EA Mental Health & Productivity Survey 2023 by Emily Emily https://forum.effectivealtruism.org/posts/kPGqj8BMDzGpvFCCH/the-ea-mental-health-and-productivity-survey-2023 Tue, 21 Feb 2023 18:52:29 +0000 EA - The EA Mental Health & Productivity Survey 2023 by Emily Emily https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:20 no full 4966 9wJ5Mtba9Fc2yCvpN_NL_EA EA - Getting organizational value from EA conferences, featuring Charity Entrepreneurship’s experience by Amy Labenz Amy Labenz https://forum.effectivealtruism.org/posts/9wJ5Mtba9Fc2yCvpN/getting-organizational-value-from-ea-conferences-featuring Fri, 17 Feb 2023 14:43:10 +0000 EA - Getting organizational value from EA conferences, featuring Charity Entrepreneurship’s experience by Amy Labenz Amy Labenz https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:33 no full 4915 pYPag2wpB7LF9BSBj_NL_EA EA - Anyone who likes the idea of EA & meeting EAs but doesn't want to discuss EA concepts IRL? by antisocial-throwaway antisocial-throwaway https://forum.effectivealtruism.org/posts/pYPag2wpB7LF9BSBj/anyone-who-likes-the-idea-of-ea-and-meeting-eas-but-doesn-t Thu, 16 Feb 2023 13:58:17 +0000 EA - Anyone who likes the idea of EA & meeting EAs but doesn't want to discuss EA concepts IRL? by antisocial-throwaway antisocial-throwaway https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:05 no full 4902 TfqmoroYCrNh2s2TF_NL_EA EA - Select Challenges with Criticism & Evaluation Around EA by Ozzie Gooen Ozzie Gooen https://forum.effectivealtruism.org/posts/TfqmoroYCrNh2s2TF/select-challenges-with-criticism-and-evaluation-around-ea Sat, 11 Feb 2023 03:16:27 +0000 EA - Select Challenges with Criticism & Evaluation Around EA by Ozzie Gooen Ozzie Gooen https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:23 no full 4835 vs8FFnPfKnitAhcJb_NL_EA EA - “Community” posts have their own section, subforums are closing, and more (Forum update February 2023) by Lizka Lizka https://forum.effectivealtruism.org/posts/vs8FFnPfKnitAhcJb/community-posts-have-their-own-section-subforums-are-closing Fri, 10 Feb 2023 00:22:00 +0000 EA - “Community” posts have their own section, subforums are closing, and more (Forum update February 2023) by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:32 no full 4823 sDaXKQCKHmX5DKhRy_NL_EA EA - EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism by Rockwell Rockwell https://forum.effectivealtruism.org/posts/sDaXKQCKHmX5DKhRy/ea-community-builders-commitment-to-anti-racism-and-anti Thu, 09 Feb 2023 23:22:57 +0000 EA - EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism by Rockwell Rockwell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:54 no full 4836 yBPyByccETmHmaByn_NL_EA EA - Could 80,000 Hours’ messaging be discouraging for people not working on x-risk / longtermism? by Mack the Knife Mack the Knife https://forum.effectivealtruism.org/posts/yBPyByccETmHmaByn/could-80-000-hours-messaging-be-discouraging-for-people-not Thu, 09 Feb 2023 14:57:52 +0000 EA - Could 80,000 Hours’ messaging be discouraging for people not working on x-risk / longtermism? by Mack the Knife Mack the Knife https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:33 no full 4812 tD2rXd9vXmTkRBwHN_NL_EA EA - Scalable longtermist projects: Speedrun series – Introduction by Buhl Buhl https://forum.effectivealtruism.org/posts/tD2rXd9vXmTkRBwHN/scalable-longtermist-projects-speedrun-series-introduction Tue, 07 Feb 2023 22:20:37 +0000 EA - Scalable longtermist projects: Speedrun series – Introduction by Buhl Buhl https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:55 no full 4777 QdYKFRexDaPeQaQCA_NL_EA EA - Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours by david reinstein david reinstein https://forum.effectivealtruism.org/posts/QdYKFRexDaPeQaQCA/unjournal-s-1st-eval-is-up-resilient-foods-paper Mon, 06 Feb 2023 20:33:56 +0000 EA - Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours by david reinstein david reinstein https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:02 no full 4758 GPDmbGxrtsNCWaB49_NL_EA EA - EA NYC’s Community Health Infrastructure by Rockwell Rockwell https://forum.effectivealtruism.org/posts/GPDmbGxrtsNCWaB49/ea-nyc-s-community-health-infrastructure Fri, 03 Feb 2023 21:50:25 +0000 EA - EA NYC’s Community Health Infrastructure by Rockwell Rockwell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:43 no full 4719 FnszH6ZGBi9hd8rtv_NL_EA EA - Google invests $300mn in artificial intelligence start-up Anthropic | FT by 𝕮𝖎𝖓𝖊𝖗𝖆 𝕮𝖎𝖓𝖊𝖗𝖆 https://forum.effectivealtruism.org/posts/FnszH6ZGBi9hd8rtv/google-invests-usd300mn-in-artificial-intelligence-start-up Fri, 03 Feb 2023 20:22:52 +0000 EA - Google invests $300mn in artificial intelligence start-up Anthropic | FT by 𝕮𝖎𝖓𝖊𝖗𝖆 𝕮𝖎𝖓𝖊𝖗𝖆 https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:52 no full 4720 oxGpocpzzbBNdtmf6_NL_EA EA - “My Model Of EA Burnout” (Logan Strohl) by will will https://forum.effectivealtruism.org/posts/oxGpocpzzbBNdtmf6/my-model-of-ea-burnout-logan-strohl Fri, 03 Feb 2023 09:52:33 +0000 EA - “My Model Of EA Burnout” (Logan Strohl) by will will https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:20 no full 4710 fS4mM6kD4GK3PWzEj_NL_EA EA - Nelson Mandela’s organization, The Elders, backing x risk prevention and longtermism by krohmal5 krohmal5 https://forum.effectivealtruism.org/posts/fS4mM6kD4GK3PWzEj/nelson-mandela-s-organization-the-elders-backing-x-risk Wed, 01 Feb 2023 10:22:31 +0000 EA - Nelson Mandela’s organization, The Elders, backing x risk prevention and longtermism by krohmal5 krohmal5 https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:00 no full 4680 4ssvjxD8x9iBLYnMB_NL_EA EA - FIRE & EA: Seeking feedback on "Fi-lanthropy" Calculator by Rebecca Herbst Rebecca Herbst https://forum.effectivealtruism.org/posts/4ssvjxD8x9iBLYnMB/fire-and-ea-seeking-feedback-on-fi-lanthropy-calculator Tue, 31 Jan 2023 06:00:28 +0000 EA - FIRE & EA: Seeking feedback on "Fi-lanthropy" Calculator by Rebecca Herbst Rebecca Herbst https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:57 no full 4667 KScqjN2ouSjTjWopp_NL_EA EA - An in-progress experiment to test how Laplace’s rule of succession performs in practice. by NunoSempere NunoSempere https://forum.effectivealtruism.org/posts/KScqjN2ouSjTjWopp/an-in-progress-experiment-to-test-how-laplace-s-rule-of Mon, 30 Jan 2023 19:31:53 +0000 EA - An in-progress experiment to test how Laplace’s rule of succession performs in practice. by NunoSempere NunoSempere https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:49 no full 4656 eLY4GKTgxxdtBHEep_NL_EA EA - EA is going through a bunch of conflict. Here’s some social technologies that may help. by Severin T. Seehrich Severin T. Seehrich https://forum.effectivealtruism.org/posts/eLY4GKTgxxdtBHEep/ea-is-going-through-a-bunch-of-conflict-here-s-some-social Sun, 29 Jan 2023 17:29:41 +0000 EA - EA is going through a bunch of conflict. Here’s some social technologies that may help. by Severin T. Seehrich Severin T. Seehrich https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:51 no full 4652 JAKXPxQBzHnjHbrJi_NL_EA EA - Pineapple now lists marketing, comms & fundraising talent, fiscal sponsorship recs, private database (Jan '23 Update) by Vaidehi Agarwalla Vaidehi Agarwalla https://forum.effectivealtruism.org/posts/JAKXPxQBzHnjHbrJi/pineapple-now-lists-marketing-comms-and-fundraising-talent Fri, 27 Jan 2023 22:32:15 +0000 EA - Pineapple now lists marketing, comms & fundraising talent, fiscal sponsorship recs, private database (Jan '23 Update) by Vaidehi Agarwalla Vaidehi Agarwalla https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:31 no full 4629 nKWc4EzRjkpcbDA3A_NL_EA EA - AI Risk Management Framework | NIST by 𝕮𝖎𝖓𝖊𝖗𝖆 𝕮𝖎𝖓𝖊𝖗𝖆 https://forum.effectivealtruism.org/posts/nKWc4EzRjkpcbDA3A/ai-risk-management-framework-or-nist Thu, 26 Jan 2023 22:42:10 +0000 EA - AI Risk Management Framework | NIST by 𝕮𝖎𝖓𝖊𝖗𝖆 𝕮𝖎𝖓𝖊𝖗𝖆 https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:28 no full 4622 nKWc4EzRjkpcbDA3A EA - AI Risk Management Framework | NIST by 𝕮𝖎𝖓𝖊𝖗𝖆 𝕮𝖎𝖓𝖊𝖗𝖆 https://forum.effectivealtruism.org/posts/nKWc4EzRjkpcbDA3A/ai-risk-management-framework-or-nist Thu, 26 Jan 2023 22:42:10 +0000 EA - AI Risk Management Framework | NIST by 𝕮𝖎𝖓𝖊𝖗𝖆 𝕮𝖎𝖓𝖊𝖗𝖆 https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:28 no full 4590 Nxm8htyEJsmKdZdyp_NL_EA EA - Getting Actual Value from “Info Value”: Example from a Failed Experiment by Nikola Nikola https://forum.effectivealtruism.org/posts/Nxm8htyEJsmKdZdyp/getting-actual-value-from-info-value-example-from-a-failed Thu, 26 Jan 2023 21:43:25 +0000 EA - Getting Actual Value from “Info Value”: Example from a Failed Experiment by Nikola Nikola https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:10 no full 4618 Nxm8htyEJsmKdZdyp EA - Getting Actual Value from “Info Value”: Example from a Failed Experiment by Nikola Nikola https://forum.effectivealtruism.org/posts/Nxm8htyEJsmKdZdyp/getting-actual-value-from-info-value-example-from-a-failed Thu, 26 Jan 2023 21:43:25 +0000 EA - Getting Actual Value from “Info Value”: Example from a Failed Experiment by Nikola Nikola https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:10 no full 4588 QWuKM5fsbry8Jp2x5 EA - Why people want to work on AI safety (but don’t) by Emily Grundy Emily Grundy https://forum.effectivealtruism.org/posts/QWuKM5fsbry8Jp2x5/why-people-want-to-work-on-ai-safety-but-don-t Tue, 24 Jan 2023 12:24:06 +0000 EA - Why people want to work on AI safety (but don’t) by Emily Grundy Emily Grundy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:32 no full 4572 Nm9ahJzKsDGFfF66b EA - NYT: Google will “recalibrate” the risk of releasing AI due to competition with OpenAI by Michael Huang Michael Huang https://forum.effectivealtruism.org/posts/Nm9ahJzKsDGFfF66b/nyt-google-will-recalibrate-the-risk-of-releasing-ai-due-to Sun, 22 Jan 2023 03:27:33 +0000 EA - NYT: Google will “recalibrate” the risk of releasing AI due to competition with OpenAI by Michael Huang Michael Huang https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:56 no full 4536 GDkrPrP2m6TQqdSGF EA - [TIME magazine] DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution (Perrigo, 2023) by will will https://forum.effectivealtruism.org/posts/GDkrPrP2m6TQqdSGF/time-magazine-deepmind-s-ceo-helped-take-ai-mainstream-now Sat, 21 Jan 2023 03:39:33 +0000 EA - [TIME magazine] DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution (Perrigo, 2023) by will will https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:05 no full 4533 7CdtdieiijWXWhiZB EA - What’s going on with ‘crunch time’? by rosehadshar rosehadshar https://forum.effectivealtruism.org/posts/7CdtdieiijWXWhiZB/what-s-going-on-with-crunch-time Fri, 20 Jan 2023 16:13:37 +0000 EA - What’s going on with ‘crunch time’? by rosehadshar rosehadshar https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:23 no full 4530 KsgmLHwqRj7fZ9szo EA - UK Personal Finance Tips & Info by Rasool Rasool https://forum.effectivealtruism.org/posts/KsgmLHwqRj7fZ9szo/uk-personal-finance-tips-and-info Thu, 19 Jan 2023 21:02:43 +0000 EA - UK Personal Finance Tips & Info by Rasool Rasool https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:35 no full 4520 9XGhNArGhiZ3daQws EA - Heretical Thoughts on AI | Eli Dourado by 𝕮𝖎𝖓𝖊𝖗𝖆 𝕮𝖎𝖓𝖊𝖗𝖆 https://forum.effectivealtruism.org/posts/9XGhNArGhiZ3daQws/heretical-thoughts-on-ai-or-eli-dourado Thu, 19 Jan 2023 17:54:43 +0000 EA - Heretical Thoughts on AI | Eli Dourado by 𝕮𝖎𝖓𝖊𝖗𝖆 𝕮𝖎𝖓𝖊𝖗𝖆 https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:28 no full 4518 KXSBb2zgkLE6gnn3K EA - Don’t Balk at Animal-friendly Results by Bob Fischer Bob Fischer https://forum.effectivealtruism.org/posts/KXSBb2zgkLE6gnn3K/don-t-balk-at-animal-friendly-results-1 Mon, 16 Jan 2023 15:27:23 +0000 EA - Don’t Balk at Animal-friendly Results by Bob Fischer Bob Fischer https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 25:39 no full 4478 qtGjAJrmBRNiJGKFQ EA - The writing style here is bad by Michał Zabłocki Michał Zabłocki https://forum.effectivealtruism.org/posts/qtGjAJrmBRNiJGKFQ/the-writing-style-here-is-bad Sun, 15 Jan 2023 14:03:44 +0000 EA - The writing style here is bad by Michał Zabłocki Michał Zabłocki https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:59 no full 4469 XMKDvbjxtZCz3rueM EA - Economic Theory & Global Prioritization (summer 2023): Apply now! by trammell trammell https://forum.effectivealtruism.org/posts/XMKDvbjxtZCz3rueM/economic-theory-and-global-prioritization-summer-2023-apply Fri, 13 Jan 2023 19:23:33 +0000 EA - Economic Theory & Global Prioritization (summer 2023): Apply now! by trammell trammell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:50 no full 4444 NniTsDNQQo58hnxkr EA - I Support Bostrom by 𝕮𝖎𝖓𝖊𝖗𝖆 𝕮𝖎𝖓𝖊𝖗𝖆 https://forum.effectivealtruism.org/posts/NniTsDNQQo58hnxkr/i-support-bostrom Thu, 12 Jan 2023 21:49:50 +0000 EA - I Support Bostrom by 𝕮𝖎𝖓𝖊𝖗𝖆 𝕮𝖎𝖓𝖊𝖗𝖆 https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:25 no full 4428 6B3QEiSyGgAM4WJSR EA - We don’t trade with ants by Katja Grace Katja Grace https://forum.effectivealtruism.org/posts/6B3QEiSyGgAM4WJSR/we-don-t-trade-with-ants Thu, 12 Jan 2023 06:21:55 +0000 EA - We don’t trade with ants by Katja Grace Katja Grace https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:24 no full 4423 zkgYfeczYC5YKDRcH EA - Non-trivial Fellowship: start an impactful project with €500 and expert guidance by Peter McIntyre Peter McIntyre https://forum.effectivealtruism.org/posts/zkgYfeczYC5YKDRcH/non-trivial-fellowship-start-an-impactful-project-with Tue, 10 Jan 2023 14:59:12 +0000 EA - Non-trivial Fellowship: start an impactful project with €500 and expert guidance by Peter McIntyre Peter McIntyre https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:32 no full 4404 Hk9vhBhrWbyBYX6xb EA - A Study of EA Orgs’ Social Media by Stan Pinsent Stan Pinsent https://forum.effectivealtruism.org/posts/Hk9vhBhrWbyBYX6xb/a-study-of-ea-orgs-social-media Mon, 09 Jan 2023 16:57:13 +0000 EA - A Study of EA Orgs’ Social Media by Stan Pinsent Stan Pinsent https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:08 no full 4391 vEYh5NAXuXcZSkAab EA - Do short timelines impact the tractability of 80k’s advice? by smk smk https://forum.effectivealtruism.org/posts/vEYh5NAXuXcZSkAab/do-short-timelines-impact-the-tractability-of-80k-s-advice Sat, 07 Jan 2023 03:45:28 +0000 EA - Do short timelines impact the tractability of 80k’s advice? by smk smk https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:18 no full 4373 7E3AGFB86mKYeo5aC EA - EAs interested in EU policy: Consider applying for the European Commission’s Blue Book Traineeship by EU Policy Careers EU Policy Careers https://forum.effectivealtruism.org/posts/7E3AGFB86mKYeo5aC/eas-interested-in-eu-policy-consider-applying-for-the Sat, 07 Jan 2023 00:38:39 +0000 EA - EAs interested in EU policy: Consider applying for the European Commission’s Blue Book Traineeship by EU Policy Careers EU Policy Careers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 30:33 no full 4371 SQ2ayhoYBJJCrFQjd EA - What are the most underrated posts & comments of 2022, according to you? by peterhartree peterhartree https://forum.effectivealtruism.org/posts/SQ2ayhoYBJJCrFQjd/what-are-the-most-underrated-posts-and-comments-of-2022 Mon, 02 Jan 2023 11:26:25 +0000 EA - What are the most underrated posts & comments of 2022, according to you? by peterhartree peterhartree https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:04 no full 4331 SBSC8ZiTNwTM8Azue EA - A libertarian socialist’s view on how EA can improve by freedomandutility freedomandutility https://forum.effectivealtruism.org/posts/SBSC8ZiTNwTM8Azue/a-libertarian-socialist-s-view-on-how-ea-can-improve Fri, 30 Dec 2022 14:51:12 +0000 EA - A libertarian socialist’s view on how EA can improve by freedomandutility freedomandutility https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:58 no full 4309 HgZymWBTAovWTa7F8 EA - Things I didn’t feel that guilty about before getting involved in effective altruism by Ada-Maaria Hyvärinen Ada-Maaria Hyvärinen https://forum.effectivealtruism.org/posts/HgZymWBTAovWTa7F8/things-i-didn-t-feel-that-guilty-about-before-getting Wed, 28 Dec 2022 19:27:42 +0000 EA - Things I didn’t feel that guilty about before getting involved in effective altruism by Ada-Maaria Hyvärinen Ada-Maaria Hyvärinen https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:28 no full 4289 azL6uPcbCqBfJ37TJ EA - Announcing a subforum for forecasting & estimation by Sharang Phadke Sharang Phadke https://forum.effectivealtruism.org/posts/azL6uPcbCqBfJ37TJ/announcing-a-subforum-for-forecasting-and-estimation Mon, 26 Dec 2022 21:24:18 +0000 EA - Announcing a subforum for forecasting & estimation by Sharang Phadke Sharang Phadke https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:21 no full 4264 aupKXpPGnFmbfE2xC EA - [Link-post] Politico: "Ex-Google boss helps fund dozens of jobs in Biden’s administration" by Pranay K Pranay K https://forum.effectivealtruism.org/posts/aupKXpPGnFmbfE2xC/link-post-politico-ex-google-boss-helps-fund-dozens-of-jobs Sat, 24 Dec 2022 23:03:00 +0000 EA - [Link-post] Politico: "Ex-Google boss helps fund dozens of jobs in Biden’s administration" by Pranay K Pranay K https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:53 no full 4254 rEiWzbiWkyBSuLuGy EA - Animal Advocacy Africa’s 2022 Review - Our Achievements and 2023 Strategy by AnimalAdvocacyAfrica AnimalAdvocacyAfrica https://forum.effectivealtruism.org/posts/rEiWzbiWkyBSuLuGy/animal-advocacy-africa-s-2022-review-our-achievements-and-1 Fri, 23 Dec 2022 23:15:45 +0000 EA - Animal Advocacy Africa’s 2022 Review - Our Achievements and 2023 Strategy by AnimalAdvocacyAfrica AnimalAdvocacyAfrica https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 32:07 no full 4243 vwK3v3Mekf6Jjpeep EA - Let’s think about slowing down AI by Katja Grace Katja Grace https://forum.effectivealtruism.org/posts/vwK3v3Mekf6Jjpeep/let-s-think-about-slowing-down-ai-1 Fri, 23 Dec 2022 20:35:33 +0000 EA - Let’s think about slowing down AI by Katja Grace Katja Grace https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:25 no full 4236 opDisq67NLmhZE358 EA - FTX’s collapse mirrors an infamous 18th century British financial scandal by Michael Huang Michael Huang https://forum.effectivealtruism.org/posts/opDisq67NLmhZE358/ftx-s-collapse-mirrors-an-infamous-18th-century-british Fri, 23 Dec 2022 00:06:56 +0000 EA - FTX’s collapse mirrors an infamous 18th century British financial scandal by Michael Huang Michael Huang https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:32 no full 4235 ZCBw36sCfbfondnq2 EA - Sign up for our Talent Directory if you’re interested in getting a high-impact job by High Impact Professionals High Impact Professionals https://forum.effectivealtruism.org/posts/ZCBw36sCfbfondnq2/sign-up-for-our-talent-directory-if-you-re-interested-in Thu, 22 Dec 2022 15:34:45 +0000 EA - Sign up for our Talent Directory if you’re interested in getting a high-impact job by High Impact Professionals High Impact Professionals https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:51 no full 4226 acG2fbfd9xwv3XrKZ EA - Link-post for Caroline Ellison’s Guilty Plea by Lauren Maria Lauren Maria https://forum.effectivealtruism.org/posts/acG2fbfd9xwv3XrKZ/link-post-for-caroline-ellison-s-guilty-plea Thu, 22 Dec 2022 05:27:11 +0000 EA - Link-post for Caroline Ellison’s Guilty Plea by Lauren Maria Lauren Maria https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:42 no full 4216 qDaBSETiQJusrcctJ EA - Proposal — change the name of EA Global by DMMF DMMF https://forum.effectivealtruism.org/posts/qDaBSETiQJusrcctJ/proposal-change-the-name-of-ea-global Mon, 19 Dec 2022 19:29:22 +0000 EA - Proposal — change the name of EA Global by DMMF DMMF https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:15 no full 4174 eCYkD4BP2s4FYuwbP EA - Staff members’ personal donations for giving season 2022 by GiveWell GiveWell https://forum.effectivealtruism.org/posts/eCYkD4BP2s4FYuwbP/staff-members-personal-donations-for-giving-season-2022 Mon, 19 Dec 2022 18:31:04 +0000 EA - Staff members’ personal donations for giving season 2022 by GiveWell GiveWell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:26 no full 4176 k73qrirnxcKtKZ4ng EA - The ‘Old AI’: Lessons for AI governance from early electricity regulation by Sam Clarke Sam Clarke https://forum.effectivealtruism.org/posts/k73qrirnxcKtKZ4ng/the-old-ai-lessons-for-ai-governance-from-early-electricity-1 Mon, 19 Dec 2022 17:42:58 +0000 EA - The ‘Old AI’: Lessons for AI governance from early electricity regulation by Sam Clarke Sam Clarke https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 24:19 no full 4180 PsztRbQnuMmpgSrRc EA - Why we’re getting the Fidelity Model wrong by Alishaandomeda Alishaandomeda https://forum.effectivealtruism.org/posts/PsztRbQnuMmpgSrRc/why-we-re-getting-the-fidelity-model-wrong Sun, 18 Dec 2022 00:49:07 +0000 EA - Why we’re getting the Fidelity Model wrong by Alishaandomeda Alishaandomeda https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:36 no full 4164 jgsFfsDPzjdjbPhB6 EA - Concerns over EA’s possible neglect of experts by Jack Malde Jack Malde https://forum.effectivealtruism.org/posts/jgsFfsDPzjdjbPhB6/concerns-over-ea-s-possible-neglect-of-experts Sat, 17 Dec 2022 10:50:01 +0000 EA - Concerns over EA’s possible neglect of experts by Jack Malde Jack Malde https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:53 no full 4149 cGM86RhxMdfDYbQnn EA - We should say more than “x-risk is high” by OllieBase =1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime" this is enough to justify the core action relevant points of EA.AISafetyIsNotLongtermist argues that the chance of the author dying prematurely because of AI x-risk is sufficiently high (~41%, conditional on their death in the subsequent 30 years) that the pitch for reducing this risk need not appeal to longtermism.The generalised argument, which I’ll call “x-risk is high”, is fairly simple:1) X-risk this century is, or could very plausibly be, very high (>10%)2) X-risk is high enough that it matters to people alive today - e.g. it could result in their premature death.3) The above is sufficient to motivate people to take high-priority paths to reduce x-risk. We don’t need to emphasise anything else, including the philosophical case for the importance of the long-run future.I think this argument holds up. However, I think that outlining the case for longtermism (and EA principles, more broadly) is better for building a community of people who will reliably choose the highest-priority actions and paths to do the most good and that this is better for the world and keeping x-risk low in the long-run. Here are three counterpoints to only using “x-risk is high”:Our situation could changeTrivially, if we successfully reduce x-risk or, after further examination, determine that overall x-risk is much lower than we thought, “x-risk is high” loses its force. If top talent, policymakers or funders convinced by “x-risk is high” learn that x-risk this century is actually much lower, they might move away from these issues. This would be bad because any non-negligible amount of x-risk is still unsustainably high from a longtermist perspective.Our priorities could changeIn the early 2010s, the EA movement was much more focused on funding effective global health charities. What if, at that time, EAs decided to stop explaining the core principles of EA, and instead made the following argument “effective charities are effective”?Effective charities are, or could very plausible be, very effective.Effective charities are effective enough that donating to them is a clear and enormous opportunity to do good.The above is sufficient to motivate people to take high-priority paths, like earning to give. We don’t need to emphasise anything else, including the case for effective altruism.This argument is probably different in important respects to “x-risk is high”, but illustrates how the EA movement could have “locked in” their approach to doing good if they made this argument. If we started using “effective charities are effective” instead of explaining the core principles of EA, it might have taken a lot longer for the EA movement to identify x-risks as a top priority.Our pri...]]> OllieBase https://forum.effectivealtruism.org/posts/cGM86RhxMdfDYbQnn/we-should-say-more-than-x-risk-is-high =1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime" this is enough to justify the core action relevant points of EA.AISafetyIsNotLongtermist argues that the chance of the author dying prematurely because of AI x-risk is sufficiently high (~41%, conditional on their death in the subsequent 30 years) that the pitch for reducing this risk need not appeal to longtermism.The generalised argument, which I’ll call “x-risk is high”, is fairly simple:1) X-risk this century is, or could very plausibly be, very high (>10%)2) X-risk is high enough that it matters to people alive today - e.g. it could result in their premature death.3) The above is sufficient to motivate people to take high-priority paths to reduce x-risk. We don’t need to emphasise anything else, including the philosophical case for the importance of the long-run future.I think this argument holds up. However, I think that outlining the case for longtermism (and EA principles, more broadly) is better for building a community of people who will reliably choose the highest-priority actions and paths to do the most good and that this is better for the world and keeping x-risk low in the long-run. Here are three counterpoints to only using “x-risk is high”:Our situation could changeTrivially, if we successfully reduce x-risk or, after further examination, determine that overall x-risk is much lower than we thought, “x-risk is high” loses its force. If top talent, policymakers or funders convinced by “x-risk is high” learn that x-risk this century is actually much lower, they might move away from these issues. This would be bad because any non-negligible amount of x-risk is still unsustainably high from a longtermist perspective.Our priorities could changeIn the early 2010s, the EA movement was much more focused on funding effective global health charities. What if, at that time, EAs decided to stop explaining the core principles of EA, and instead made the following argument “effective charities are effective”?Effective charities are, or could very plausible be, very effective.Effective charities are effective enough that donating to them is a clear and enormous opportunity to do good.The above is sufficient to motivate people to take high-priority paths, like earning to give. We don’t need to emphasise anything else, including the case for effective altruism.This argument is probably different in important respects to “x-risk is high”, but illustrates how the EA movement could have “locked in” their approach to doing good if they made this argument. If we started using “effective charities are effective” instead of explaining the core principles of EA, it might have taken a lot longer for the EA movement to identify x-risks as a top priority.Our pri...]]> Sat, 17 Dec 2022 06:13:18 +0000 EA - We should say more than “x-risk is high” by OllieBase =1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime" this is enough to justify the core action relevant points of EA.AISafetyIsNotLongtermist argues that the chance of the author dying prematurely because of AI x-risk is sufficiently high (~41%, conditional on their death in the subsequent 30 years) that the pitch for reducing this risk need not appeal to longtermism.The generalised argument, which I’ll call “x-risk is high”, is fairly simple:1) X-risk this century is, or could very plausibly be, very high (>10%)2) X-risk is high enough that it matters to people alive today - e.g. it could result in their premature death.3) The above is sufficient to motivate people to take high-priority paths to reduce x-risk. We don’t need to emphasise anything else, including the philosophical case for the importance of the long-run future.I think this argument holds up. However, I think that outlining the case for longtermism (and EA principles, more broadly) is better for building a community of people who will reliably choose the highest-priority actions and paths to do the most good and that this is better for the world and keeping x-risk low in the long-run. Here are three counterpoints to only using “x-risk is high”:Our situation could changeTrivially, if we successfully reduce x-risk or, after further examination, determine that overall x-risk is much lower than we thought, “x-risk is high” loses its force. If top talent, policymakers or funders convinced by “x-risk is high” learn that x-risk this century is actually much lower, they might move away from these issues. This would be bad because any non-negligible amount of x-risk is still unsustainably high from a longtermist perspective.Our priorities could changeIn the early 2010s, the EA movement was much more focused on funding effective global health charities. What if, at that time, EAs decided to stop explaining the core principles of EA, and instead made the following argument “effective charities are effective”?Effective charities are, or could very plausible be, very effective.Effective charities are effective enough that donating to them is a clear and enormous opportunity to do good.The above is sufficient to motivate people to take high-priority paths, like earning to give. We don’t need to emphasise anything else, including the case for effective altruism.This argument is probably different in important respects to “x-risk is high”, but illustrates how the EA movement could have “locked in” their approach to doing good if they made this argument. If we started using “effective charities are effective” instead of explaining the core principles of EA, it might have taken a lot longer for the EA movement to identify x-risks as a top priority.Our pri...]]> =1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime" this is enough to justify the core action relevant points of EA.AISafetyIsNotLongtermist argues that the chance of the author dying prematurely because of AI x-risk is sufficiently high (~41%, conditional on their death in the subsequent 30 years) that the pitch for reducing this risk need not appeal to longtermism.The generalised argument, which I’ll call “x-risk is high”, is fairly simple:1) X-risk this century is, or could very plausibly be, very high (>10%)2) X-risk is high enough that it matters to people alive today - e.g. it could result in their premature death.3) The above is sufficient to motivate people to take high-priority paths to reduce x-risk. We don’t need to emphasise anything else, including the philosophical case for the importance of the long-run future.I think this argument holds up. However, I think that outlining the case for longtermism (and EA principles, more broadly) is better for building a community of people who will reliably choose the highest-priority actions and paths to do the most good and that this is better for the world and keeping x-risk low in the long-run. Here are three counterpoints to only using “x-risk is high”:Our situation could changeTrivially, if we successfully reduce x-risk or, after further examination, determine that overall x-risk is much lower than we thought, “x-risk is high” loses its force. If top talent, policymakers or funders convinced by “x-risk is high” learn that x-risk this century is actually much lower, they might move away from these issues. This would be bad because any non-negligible amount of x-risk is still unsustainably high from a longtermist perspective.Our priorities could changeIn the early 2010s, the EA movement was much more focused on funding effective global health charities. What if, at that time, EAs decided to stop explaining the core principles of EA, and instead made the following argument “effective charities are effective”?Effective charities are, or could very plausible be, very effective.Effective charities are effective enough that donating to them is a clear and enormous opportunity to do good.The above is sufficient to motivate people to take high-priority paths, like earning to give. We don’t need to emphasise anything else, including the case for effective altruism.This argument is probably different in important respects to “x-risk is high”, but illustrates how the EA movement could have “locked in” their approach to doing good if they made this argument. If we started using “effective charities are effective” instead of explaining the core principles of EA, it might have taken a lot longer for the EA movement to identify x-risks as a top priority.Our pri...]]> OllieBase https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:59 no full 4150 DosxprPupic6dmiB6 EA - You Don’t Have to Call Yourself an Effective Altruist or Fraternize With Effective Altruists or Support Longtermism, Just Please, for the Love of God, Help the Global Poor by Omnizoid Omnizoid https://forum.effectivealtruism.org/posts/DosxprPupic6dmiB6/you-don-t-have-to-call-yourself-an-effective-altruist-or Sat, 17 Dec 2022 01:13:01 +0000 EA - You Don’t Have to Call Yourself an Effective Altruist or Fraternize With Effective Altruists or Support Longtermism, Just Please, for the Love of God, Help the Global Poor by Omnizoid Omnizoid https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:54 no full 4148 NFhELFno7ScuCxXMY EA - The winners of the Change Our Mind Contest—and some reflections by GiveWell GiveWell https://forum.effectivealtruism.org/posts/NFhELFno7ScuCxXMY/the-winners-of-the-change-our-mind-contest-and-some Thu, 15 Dec 2022 20:24:04 +0000 EA - The winners of the Change Our Mind Contest—and some reflections by GiveWell GiveWell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:45 no full 4131 YkjDHYqEubHaDsEQT EA - I went to the Progress Summit. Here’s What I Learned. by Nick Corvino Nick Corvino https://forum.effectivealtruism.org/posts/YkjDHYqEubHaDsEQT/i-went-to-the-progress-summit-here-s-what-i-learned Thu, 15 Dec 2022 00:04:05 +0000 EA - I went to the Progress Summit. Here’s What I Learned. by Nick Corvino Nick Corvino https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:21 no full 4124 LTq3DsBedqD8asnZE EA - The US Attorney for the SDNY asked that FTX money be returned to victims. What are the moral and legal consequences to EA? by Fermi–Dirac Distribution Fermi–Dirac Distribution https://forum.effectivealtruism.org/posts/LTq3DsBedqD8asnZE/the-us-attorney-for-the-sdny-asked-that-ftx-money-be Wed, 14 Dec 2022 22:45:58 +0000 EA - The US Attorney for the SDNY asked that FTX money be returned to victims. What are the moral and legal consequences to EA? by Fermi–Dirac Distribution Fermi–Dirac Distribution https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:51 no full 4116 rMucQN9EkL8u4xnjd EA - Improving EA events: start early & invest in content and stewardship by Vaidehi Agarwalla Vaidehi Agarwalla https://forum.effectivealtruism.org/posts/rMucQN9EkL8u4xnjd/improving-ea-events-start-early-and-invest-in-content-and Tue, 13 Dec 2022 21:43:05 +0000 EA - Improving EA events: start early & invest in content and stewardship by Vaidehi Agarwalla Vaidehi Agarwalla https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 23:20 no full 4106 CixidC6JCruHue8Hs EA - GiveWell’s Moral Weights Underweight the Value of Transfers to the Poor by Trevor Woolley Trevor Woolley https://forum.effectivealtruism.org/posts/CixidC6JCruHue8Hs/givewell-s-moral-weights-underweight-the-value-of-transfers Tue, 13 Dec 2022 16:14:03 +0000 EA - GiveWell’s Moral Weights Underweight the Value of Transfers to the Poor by Trevor Woolley Trevor Woolley https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:27 no full 4105 kEd5qWwg8pZjWAeFS EA - Announcing the Forecasting Research Institute (we’re hiring) by Tegan Tegan https://forum.effectivealtruism.org/posts/kEd5qWwg8pZjWAeFS/announcing-the-forecasting-research-institute-we-re-hiring Tue, 13 Dec 2022 13:40:40 +0000 EA - Announcing the Forecasting Research Institute (we’re hiring) by Tegan Tegan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:58 no full 4103 bkF4jWM9pbBFxnCLH EA - Observations of community building in Asia, a 🧵 by Vaidehi Agarwalla perfectAsian CB's please correct me + enhance!Everyone - please ask clarification questions!What do we do with people? This isn’t unique to Asian CB’s, but there are strictly fewer and less attractive opportunities available. I might do a separate thread or add to this thought later.Some countries have less traction with English & worry about EA being presented as Western concept (e.g. Japan & Iran). Translation of key texts seems important, and could be a way to engage newer EAs with a concrete project (see a test from Estonia).Countries with lots of internal issues may have difficulty gaining EA traction, but it may be a matter of perspective and approach. A Turkey CB mentioned that the fact that another group from Iran was able to get traction was inspiring, since they perceived Iran as having worse problemsMultiple CB’s suggested that after talking to other EAs they started thinking more about city / uni group building rather than trying to build national groups to start with. For large countries (India) there are many choices, .....But some countries (Nepal, Pakistan, Vietnam) have 1 or 2 (Kathmandu, Karachi, Ho Chi Minh & Hanoi) major cities that are likely to be viable for EA groups. Kathmandu has all of Nepal’s top uni’s and best talent pool so in practice EA Nepal ~= EA Kathmandu.A few CBs want to/are starting local groups in liberal arts unis which they feel are more EA aligned. A challenge in Turkey is that vegans are abolitionist and against welfarism, and was concerned about discussing farmed animal welfare within EA.In Japan (+ others?), many students study abroad. There may be an opportunity to get those students interested in EA before they go (and connect them to local EA groups in the West), and catch them again after they return.E.g. 1 uni group struggled with reading group retention. It seemed plausible they could focus on their existing ~8 engaged members, or do a “trial week” for their reading group to help attendees evaluate fit early on.There is uncertainty over what messaging works best and non-existent testing. People mostly rely on their insights. More testing seems good, e.g. how much do you need to incorporate native philosophy vs. localizing examples and stories. bad e.g. "Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?"Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?Vietnam doesn’t have a big book-reading culture, so EA books could be less likely to be a way in. Perhaps focusing on blogs or podcasts or other formats is more promising?Many of us (including myself) learnt a lot from Israel on volunteer management and early stage group priorities. I believe a lot of value I provide CB's is expanding the option space and generating lots of specific examples. I wish more CB’s from around the world had attended.Early stage groups (which is most Asia groups today) could benefit from The Lean Startup model of validated learning - trying to optimize early activities to l...]]> Vaidehi Agarwalla https://forum.effectivealtruism.org/posts/bkF4jWM9pbBFxnCLH/observations-of-community-building-in-asia-a perfectAsian CB's please correct me + enhance!Everyone - please ask clarification questions!What do we do with people? This isn’t unique to Asian CB’s, but there are strictly fewer and less attractive opportunities available. I might do a separate thread or add to this thought later.Some countries have less traction with English & worry about EA being presented as Western concept (e.g. Japan & Iran). Translation of key texts seems important, and could be a way to engage newer EAs with a concrete project (see a test from Estonia).Countries with lots of internal issues may have difficulty gaining EA traction, but it may be a matter of perspective and approach. A Turkey CB mentioned that the fact that another group from Iran was able to get traction was inspiring, since they perceived Iran as having worse problemsMultiple CB’s suggested that after talking to other EAs they started thinking more about city / uni group building rather than trying to build national groups to start with. For large countries (India) there are many choices, .....But some countries (Nepal, Pakistan, Vietnam) have 1 or 2 (Kathmandu, Karachi, Ho Chi Minh & Hanoi) major cities that are likely to be viable for EA groups. Kathmandu has all of Nepal’s top uni’s and best talent pool so in practice EA Nepal ~= EA Kathmandu.A few CBs want to/are starting local groups in liberal arts unis which they feel are more EA aligned. A challenge in Turkey is that vegans are abolitionist and against welfarism, and was concerned about discussing farmed animal welfare within EA.In Japan (+ others?), many students study abroad. There may be an opportunity to get those students interested in EA before they go (and connect them to local EA groups in the West), and catch them again after they return.E.g. 1 uni group struggled with reading group retention. It seemed plausible they could focus on their existing ~8 engaged members, or do a “trial week” for their reading group to help attendees evaluate fit early on.There is uncertainty over what messaging works best and non-existent testing. People mostly rely on their insights. More testing seems good, e.g. how much do you need to incorporate native philosophy vs. localizing examples and stories. bad e.g. "Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?"Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?Vietnam doesn’t have a big book-reading culture, so EA books could be less likely to be a way in. Perhaps focusing on blogs or podcasts or other formats is more promising?Many of us (including myself) learnt a lot from Israel on volunteer management and early stage group priorities. I believe a lot of value I provide CB's is expanding the option space and generating lots of specific examples. I wish more CB’s from around the world had attended.Early stage groups (which is most Asia groups today) could benefit from The Lean Startup model of validated learning - trying to optimize early activities to l...]]> Mon, 12 Dec 2022 02:45:17 +0000 EA - Observations of community building in Asia, a 🧵 by Vaidehi Agarwalla perfectAsian CB's please correct me + enhance!Everyone - please ask clarification questions!What do we do with people? This isn’t unique to Asian CB’s, but there are strictly fewer and less attractive opportunities available. I might do a separate thread or add to this thought later.Some countries have less traction with English & worry about EA being presented as Western concept (e.g. Japan & Iran). Translation of key texts seems important, and could be a way to engage newer EAs with a concrete project (see a test from Estonia).Countries with lots of internal issues may have difficulty gaining EA traction, but it may be a matter of perspective and approach. A Turkey CB mentioned that the fact that another group from Iran was able to get traction was inspiring, since they perceived Iran as having worse problemsMultiple CB’s suggested that after talking to other EAs they started thinking more about city / uni group building rather than trying to build national groups to start with. For large countries (India) there are many choices, .....But some countries (Nepal, Pakistan, Vietnam) have 1 or 2 (Kathmandu, Karachi, Ho Chi Minh & Hanoi) major cities that are likely to be viable for EA groups. Kathmandu has all of Nepal’s top uni’s and best talent pool so in practice EA Nepal ~= EA Kathmandu.A few CBs want to/are starting local groups in liberal arts unis which they feel are more EA aligned. A challenge in Turkey is that vegans are abolitionist and against welfarism, and was concerned about discussing farmed animal welfare within EA.In Japan (+ others?), many students study abroad. There may be an opportunity to get those students interested in EA before they go (and connect them to local EA groups in the West), and catch them again after they return.E.g. 1 uni group struggled with reading group retention. It seemed plausible they could focus on their existing ~8 engaged members, or do a “trial week” for their reading group to help attendees evaluate fit early on.There is uncertainty over what messaging works best and non-existent testing. People mostly rely on their insights. More testing seems good, e.g. how much do you need to incorporate native philosophy vs. localizing examples and stories. bad e.g. "Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?"Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?Vietnam doesn’t have a big book-reading culture, so EA books could be less likely to be a way in. Perhaps focusing on blogs or podcasts or other formats is more promising?Many of us (including myself) learnt a lot from Israel on volunteer management and early stage group priorities. I believe a lot of value I provide CB's is expanding the option space and generating lots of specific examples. I wish more CB’s from around the world had attended.Early stage groups (which is most Asia groups today) could benefit from The Lean Startup model of validated learning - trying to optimize early activities to l...]]> perfectAsian CB's please correct me + enhance!Everyone - please ask clarification questions!What do we do with people? This isn’t unique to Asian CB’s, but there are strictly fewer and less attractive opportunities available. I might do a separate thread or add to this thought later.Some countries have less traction with English & worry about EA being presented as Western concept (e.g. Japan & Iran). Translation of key texts seems important, and could be a way to engage newer EAs with a concrete project (see a test from Estonia).Countries with lots of internal issues may have difficulty gaining EA traction, but it may be a matter of perspective and approach. A Turkey CB mentioned that the fact that another group from Iran was able to get traction was inspiring, since they perceived Iran as having worse problemsMultiple CB’s suggested that after talking to other EAs they started thinking more about city / uni group building rather than trying to build national groups to start with. For large countries (India) there are many choices, .....But some countries (Nepal, Pakistan, Vietnam) have 1 or 2 (Kathmandu, Karachi, Ho Chi Minh & Hanoi) major cities that are likely to be viable for EA groups. Kathmandu has all of Nepal’s top uni’s and best talent pool so in practice EA Nepal ~= EA Kathmandu.A few CBs want to/are starting local groups in liberal arts unis which they feel are more EA aligned. A challenge in Turkey is that vegans are abolitionist and against welfarism, and was concerned about discussing farmed animal welfare within EA.In Japan (+ others?), many students study abroad. There may be an opportunity to get those students interested in EA before they go (and connect them to local EA groups in the West), and catch them again after they return.E.g. 1 uni group struggled with reading group retention. It seemed plausible they could focus on their existing ~8 engaged members, or do a “trial week” for their reading group to help attendees evaluate fit early on.There is uncertainty over what messaging works best and non-existent testing. People mostly rely on their insights. More testing seems good, e.g. how much do you need to incorporate native philosophy vs. localizing examples and stories. bad e.g. "Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?"Asking someone who grew up or has spent a lot of time in LMICs to imagine a child drowning in a lake is not a hypothetical - it's something they might see and ask themselves every single day. This thought experiment loses a lot of its power. What are some alternatives?Vietnam doesn’t have a big book-reading culture, so EA books could be less likely to be a way in. Perhaps focusing on blogs or podcasts or other formats is more promising?Many of us (including myself) learnt a lot from Israel on volunteer management and early stage group priorities. I believe a lot of value I provide CB's is expanding the option space and generating lots of specific examples. I wish more CB’s from around the world had attended.Early stage groups (which is most Asia groups today) could benefit from The Lean Startup model of validated learning - trying to optimize early activities to l...]]> Vaidehi Agarwalla https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:55 no full 4082 JGR87M8to93D7Ahzh EA - Hugh Thompson Jr (1943–2006) by Gavin Gavin https://forum.effectivealtruism.org/posts/JGR87M8to93D7Ahzh/hugh-thompson-jr-1943-2006 Sun, 11 Dec 2022 04:35:12 +0000 EA - Hugh Thompson Jr (1943–2006) by Gavin Gavin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:06 no full 4073 QB6JJdYemL2kQPhLM EA - Monitoring & Evaluation Specialists – a new career path profile from Probably Good by Probably Good Probably Good https://forum.effectivealtruism.org/posts/QB6JJdYemL2kQPhLM/monitoring-and-evaluation-specialists-a-new-career-path Thu, 08 Dec 2022 19:42:31 +0000 EA - Monitoring & Evaluation Specialists – a new career path profile from Probably Good by Probably Good Probably Good https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:02 no full 4050 uG3s9qDCnntJwci9i EA - [Link Post] If We Don’t End Factory Farming Soon, It Might Be Here Forever. by BrianK BrianK https://forum.effectivealtruism.org/posts/uG3s9qDCnntJwci9i/link-post-if-we-don-t-end-factory-farming-soon-it-might-be Wed, 07 Dec 2022 13:18:44 +0000 EA - [Link Post] If We Don’t End Factory Farming Soon, It Might Be Here Forever. by BrianK BrianK https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:40 no full 4039 erYvs4tLwnNCopBxg EA - CEA “serious incident report”? by Pagw Pagw https://forum.effectivealtruism.org/posts/erYvs4tLwnNCopBxg/cea-serious-incident-report Sat, 03 Dec 2022 17:40:33 +0000 EA - CEA “serious incident report”? by Pagw Pagw https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:13 no full 3994 t5vFLabB2mQz2tgDr EA - I’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared. by Maya D Maya D https://forum.effectivealtruism.org/posts/t5vFLabB2mQz2tgDr/i-m-a-22-year-old-woman-involved-in-effective-altruism-i-m Fri, 02 Dec 2022 05:30:54 +0000 EA - I’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared. by Maya D Maya D https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:13 no full 3965 YFyzHT3H67jrk7mdc EA - James Lovelock (1919 – 2022) by Gavin Gavin https://forum.effectivealtruism.org/posts/YFyzHT3H67jrk7mdc/james-lovelock-1919-2022 Wed, 30 Nov 2022 08:53:17 +0000 EA - James Lovelock (1919 – 2022) by Gavin Gavin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:56 no full 3944 WAdhvskTh2yffW9gc EA - Carl Djerassi (1923–2014) by Gavin Gavin https://forum.effectivealtruism.org/posts/WAdhvskTh2yffW9gc/carl-djerassi-1923-2014 Wed, 30 Nov 2022 06:08:57 +0000 EA - Carl Djerassi (1923–2014) by Gavin Gavin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:10 no full 3948 4Y3NKH37S9hvrXLCF EA - Apply to join Rethink Priorities’ board of directors. by abrahamrowe abrahamrowe https://forum.effectivealtruism.org/posts/4Y3NKH37S9hvrXLCF/apply-to-join-rethink-priorities-board-of-directors Tue, 29 Nov 2022 20:02:49 +0000 EA - Apply to join Rethink Priorities’ board of directors. by abrahamrowe abrahamrowe https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:47 no full 3940 jbsmfPjRH6irTP6zu EA - Some feelings, and what’s keeping me going by Michelle Hutchinson Michelle Hutchinson https://forum.effectivealtruism.org/posts/jbsmfPjRH6irTP6zu/some-feelings-and-what-s-keeping-me-going Fri, 25 Nov 2022 12:56:44 +0000 EA - Some feelings, and what’s keeping me going by Michelle Hutchinson Michelle Hutchinson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:58 no full 3904 7cCr6vAmN4Xi3yzR5 EA - Two contrasting models of “intelligence” and future growth by Magnus Vinding Magnus Vinding https://forum.effectivealtruism.org/posts/7cCr6vAmN4Xi3yzR5/two-contrasting-models-of-intelligence-and-future-growth Thu, 24 Nov 2022 18:09:54 +0000 EA - Two contrasting models of “intelligence” and future growth by Magnus Vinding Magnus Vinding https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 50:02 no full 3896 82heDPsmvhThda3af EA - AMA: Sean Mayberry, Founder & CEO of StrongMinds by Sean Mayberry Sean Mayberry https://forum.effectivealtruism.org/posts/82heDPsmvhThda3af/ama-sean-mayberry-founder-and-ceo-of-strongminds Thu, 24 Nov 2022 11:11:10 +0000 EA - AMA: Sean Mayberry, Founder & CEO of StrongMinds by Sean Mayberry Sean Mayberry https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:48 no full 3897 uY5SwjHTXgTaWC85f EA - Don’t give well, give WELLBYs: HLI’s 2022 charity recommendation by MichaelPlant MichaelPlant https://forum.effectivealtruism.org/posts/uY5SwjHTXgTaWC85f/don-t-give-well-give-wellbys-hli-s-2022-charity Thu, 24 Nov 2022 10:02:33 +0000 EA - Don’t give well, give WELLBYs: HLI’s 2022 charity recommendation by MichaelPlant MichaelPlant https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:31 no full 3884 rxojcFfpN88YNwGop EA - Rethink Priorities’ Leadership Statement on the FTX situation by abrahamrowe abrahamrowe https://forum.effectivealtruism.org/posts/rxojcFfpN88YNwGop/rethink-priorities-leadership-statement-on-the-ftx-situation Wed, 23 Nov 2022 23:16:02 +0000 EA - Rethink Priorities’ Leadership Statement on the FTX situation by abrahamrowe abrahamrowe https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:39 no full 3890 wDGcTPTyADHAjomNC EA - Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility by Akash Akash https://forum.effectivealtruism.org/posts/wDGcTPTyADHAjomNC/announcing-ai-alignment-awards-usd100k-research-contests Wed, 23 Nov 2022 00:54:03 +0000 EA - Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility by Akash Akash https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:34 no full 3862 iZxY6QqTSQm2afqyq EA - Some data on the stock of EA™ funding by NunoSempere NunoSempere https://forum.effectivealtruism.org/posts/iZxY6QqTSQm2afqyq/some-data-on-the-stock-of-ea-tm-funding Sun, 20 Nov 2022 15:27:06 +0000 EA - Some data on the stock of EA™ funding by NunoSempere NunoSempere https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:38 no full 3831 CZJ93Y7hinjvqt87j EA - Introducing new leadership in Animal Charity Evaluators’ Research team by Animal Charity Evaluators Animal Charity Evaluators https://forum.effectivealtruism.org/posts/CZJ93Y7hinjvqt87j/introducing-new-leadership-in-animal-charity-evaluators Thu, 17 Nov 2022 22:53:36 +0000 EA - Introducing new leadership in Animal Charity Evaluators’ Research team by Animal Charity Evaluators Animal Charity Evaluators https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:35 no full 3794 Et7oPMu6czhEd8ExW EA - Why you’re not hearing as much from EA orgs as you’d like by Shakeel Hashim Shakeel Hashim https://forum.effectivealtruism.org/posts/Et7oPMu6czhEd8ExW/why-you-re-not-hearing-as-much-from-ea-orgs-as-you-d-like Tue, 15 Nov 2022 20:06:38 +0000 EA - Why you’re not hearing as much from EA orgs as you’d like by Shakeel Hashim Shakeel Hashim https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:50 no full 3763 22zk3tZyYWoanQwt7 EA - Training for Good - Update & Plans for 2023 by Cillian Crosson Cillian Crosson https://forum.effectivealtruism.org/posts/22zk3tZyYWoanQwt7/training-for-good-update-and-plans-for-2023 Tue, 15 Nov 2022 19:00:25 +0000 EA - Training for Good - Update & Plans for 2023 by Cillian Crosson Cillian Crosson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:29 no full 3766 xauPtrTdKjQo9EogZ EA - Money Stuff: FTX’s Balance Sheet Was Bad by Elliot Temple Elliot Temple https://forum.effectivealtruism.org/posts/xauPtrTdKjQo9EogZ/money-stuff-ftx-s-balance-sheet-was-bad Mon, 14 Nov 2022 21:21:17 +0000 EA - Money Stuff: FTX’s Balance Sheet Was Bad by Elliot Temple Elliot Temple https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:44 no full 3748 4bDRdeRHH57G34fv9 EA - In favour of ‘personal policies’ by Michelle Hutchinson Michelle Hutchinson https://forum.effectivealtruism.org/posts/4bDRdeRHH57G34fv9/in-favour-of-personal-policies Sun, 13 Nov 2022 12:42:45 +0000 EA - In favour of ‘personal policies’ by Michelle Hutchinson Michelle Hutchinson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:31 no full 3743 dn2nLRgFAfodcTjQw EA - What should I ask Joe Carlsmith — Open Phil researcher, philosopher and blogger? by Robert Wiblin Robert Wiblin https://forum.effectivealtruism.org/posts/dn2nLRgFAfodcTjQw/what-should-i-ask-joe-carlsmith-open-phil-researcher Thu, 10 Nov 2022 09:42:53 +0000 EA - What should I ask Joe Carlsmith — Open Phil researcher, philosopher and blogger? by Robert Wiblin Robert Wiblin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:13 no full 3698 tm3RMfxetLsmcwftQ EA - EA & LW Forums Weekly Summary (31st Oct - 6th Nov 22') by Zoe Williams Zoe Williams https://forum.effectivealtruism.org/posts/tm3RMfxetLsmcwftQ/ea-and-lw-forums-weekly-summary-31st-oct-6th-nov-22 Tue, 08 Nov 2022 21:13:02 +0000 EA - EA & LW Forums Weekly Summary (31st Oct - 6th Nov 22') by Zoe Williams Zoe Williams https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 34:16 no full 3683 GWyidA3fbXKXErDn4 EA - Effective altruism as a lifestyle movement (A Master’s Thesis) by Ada-Maaria Hyvärinen Ada-Maaria Hyvärinen https://forum.effectivealtruism.org/posts/GWyidA3fbXKXErDn4/effective-altruism-as-a-lifestyle-movement-a-master-s-thesis Mon, 07 Nov 2022 08:14:21 +0000 EA - Effective altruism as a lifestyle movement (A Master’s Thesis) by Ada-Maaria Hyvärinen Ada-Maaria Hyvärinen https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:27 no full 3669 SsZkuYHv4dNfu7vnS EA - Rethink Priorities’ Special Projects Team is hiring by Rachel Norman Rachel Norman https://forum.effectivealtruism.org/posts/SsZkuYHv4dNfu7vnS/rethink-priorities-special-projects-team-is-hiring Tue, 01 Nov 2022 18:04:19 +0000 EA - Rethink Priorities’ Special Projects Team is hiring by Rachel Norman Rachel Norman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:41 no full 3615 qo2QpLXBrrvdqJHXQ EA - A dozen doubts about GiveWell’s numbers by JoelMcGuire JoelMcGuire https://forum.effectivealtruism.org/posts/qo2QpLXBrrvdqJHXQ/a-dozen-doubts-about-givewell-s-numbers Tue, 01 Nov 2022 08:35:45 +0000 EA - A dozen doubts about GiveWell’s numbers by JoelMcGuire JoelMcGuire https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 42:45 no full 3612 vTcmucF4XmBLXbDqM EA - EA Architect: Dissertation on Improving the Social Dynamics of Confined Spaces & Shelters Precedents Report by Tereza Flidrova Tereza_Flidrova https://forum.effectivealtruism.org/posts/vTcmucF4XmBLXbDqM/ea-architect-dissertation-on-improving-the-social-dynamics Tue, 06 Jun 2023 20:19:33 +0000 EA - EA Architect: Dissertation on Improving the Social Dynamics of Confined Spaces & Shelters Precedents Report by Tereza Flidrova Tereza_Flidrova https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:02 no full 6190 ARkbWch5RMsj6xP5p EA - Transformative AGI by 2043 is <1% likely by Ted Sanders 10% seems to require probabilities that feel unreasonably high, and even 3% seems unlikely.Thoughtfully applying the cascading conditional probability approach to this question yields lower probability values than is often supposed. This framework helps enumerate the many future scenarios where humanity makes partial but incomplete progress toward transformative AGI.Executive summaryFor AGI to do most human work for <$25/hr by 2043, many things must happen.We forecast cascading conditional probabilities for 10 necessary events, and find they multiply to an overall likelihood of 0.4%:We invent algorithms for transformative AGI60%We invent a way for AGIs to learn faster than humans40%AGI inference costs drop below $25/hr (per human equivalent)16%We invent and scale cheap, quality robots60%We massively scale production of chips and power46%We avoid derailment by human regulation70%We avoid derailment by AI-caused delay90%We avoid derailment from wars (e.g., China invades Taiwan)70%We avoid derailment from pandemics90%We avoid derailment from severe depressions95%Joint odds0.4%EventForecastby 2043 or TAGI,conditional onprior stepsIf you think our estimates are pessimistic, feel free to substitute your own here. You’ll find it difficult to arrive at odds above 10%.Of course, the difficulty is by construction. Any framework that multiplies ten probabilities together is almost fated to produce low odds.So a good skeptic must ask: Is our framework fair?There are two possible errors to beware of:Did we neglect possible parallel paths to transformative AGI?Did we hew toward unconditional probabilities rather than fully conditional probabilities?We believe we are innocent of both sins.Regarding failing to model parallel disjunctive paths:We have chosen generic steps that don’t make rigid assumptions about the particular algorithms, requirements, or timelines of AGI technologyOne opinionated claim we do make is that transformative AGI by 2043 will almost certainly be run on semiconductor transistors powered by electricity and built in capital-intensive fabs, and we spend many pages justifying this beliefRegarding failing to really grapple with conditional probabilities:Our conditional probabilities are, in some cases, quite different from our unconditional probabilities. In particular, we assume that a world on track to transformative AGI will.Construct semiconductor fabs and power plants at a far faster pace than today (our unconditional probability is substantially lower)Have invented very cheap and effi...]]> Ted Sanders https://forum.effectivealtruism.org/posts/ARkbWch5RMsj6xP5p/transformative-agi-by-2043-is-less-than-1-likely 10% seems to require probabilities that feel unreasonably high, and even 3% seems unlikely.Thoughtfully applying the cascading conditional probability approach to this question yields lower probability values than is often supposed. This framework helps enumerate the many future scenarios where humanity makes partial but incomplete progress toward transformative AGI.Executive summaryFor AGI to do most human work for <$25/hr by 2043, many things must happen.We forecast cascading conditional probabilities for 10 necessary events, and find they multiply to an overall likelihood of 0.4%:We invent algorithms for transformative AGI60%We invent a way for AGIs to learn faster than humans40%AGI inference costs drop below $25/hr (per human equivalent)16%We invent and scale cheap, quality robots60%We massively scale production of chips and power46%We avoid derailment by human regulation70%We avoid derailment by AI-caused delay90%We avoid derailment from wars (e.g., China invades Taiwan)70%We avoid derailment from pandemics90%We avoid derailment from severe depressions95%Joint odds0.4%EventForecastby 2043 or TAGI,conditional onprior stepsIf you think our estimates are pessimistic, feel free to substitute your own here. You’ll find it difficult to arrive at odds above 10%.Of course, the difficulty is by construction. Any framework that multiplies ten probabilities together is almost fated to produce low odds.So a good skeptic must ask: Is our framework fair?There are two possible errors to beware of:Did we neglect possible parallel paths to transformative AGI?Did we hew toward unconditional probabilities rather than fully conditional probabilities?We believe we are innocent of both sins.Regarding failing to model parallel disjunctive paths:We have chosen generic steps that don’t make rigid assumptions about the particular algorithms, requirements, or timelines of AGI technologyOne opinionated claim we do make is that transformative AGI by 2043 will almost certainly be run on semiconductor transistors powered by electricity and built in capital-intensive fabs, and we spend many pages justifying this beliefRegarding failing to really grapple with conditional probabilities:Our conditional probabilities are, in some cases, quite different from our unconditional probabilities. In particular, we assume that a world on track to transformative AGI will.Construct semiconductor fabs and power plants at a far faster pace than today (our unconditional probability is substantially lower)Have invented very cheap and effi...]]> Tue, 06 Jun 2023 18:12:35 +0000 EA - Transformative AGI by 2043 is <1% likely by Ted Sanders 10% seems to require probabilities that feel unreasonably high, and even 3% seems unlikely.Thoughtfully applying the cascading conditional probability approach to this question yields lower probability values than is often supposed. This framework helps enumerate the many future scenarios where humanity makes partial but incomplete progress toward transformative AGI.Executive summaryFor AGI to do most human work for <$25/hr by 2043, many things must happen.We forecast cascading conditional probabilities for 10 necessary events, and find they multiply to an overall likelihood of 0.4%:We invent algorithms for transformative AGI60%We invent a way for AGIs to learn faster than humans40%AGI inference costs drop below $25/hr (per human equivalent)16%We invent and scale cheap, quality robots60%We massively scale production of chips and power46%We avoid derailment by human regulation70%We avoid derailment by AI-caused delay90%We avoid derailment from wars (e.g., China invades Taiwan)70%We avoid derailment from pandemics90%We avoid derailment from severe depressions95%Joint odds0.4%EventForecastby 2043 or TAGI,conditional onprior stepsIf you think our estimates are pessimistic, feel free to substitute your own here. You’ll find it difficult to arrive at odds above 10%.Of course, the difficulty is by construction. Any framework that multiplies ten probabilities together is almost fated to produce low odds.So a good skeptic must ask: Is our framework fair?There are two possible errors to beware of:Did we neglect possible parallel paths to transformative AGI?Did we hew toward unconditional probabilities rather than fully conditional probabilities?We believe we are innocent of both sins.Regarding failing to model parallel disjunctive paths:We have chosen generic steps that don’t make rigid assumptions about the particular algorithms, requirements, or timelines of AGI technologyOne opinionated claim we do make is that transformative AGI by 2043 will almost certainly be run on semiconductor transistors powered by electricity and built in capital-intensive fabs, and we spend many pages justifying this beliefRegarding failing to really grapple with conditional probabilities:Our conditional probabilities are, in some cases, quite different from our unconditional probabilities. In particular, we assume that a world on track to transformative AGI will.Construct semiconductor fabs and power plants at a far faster pace than today (our unconditional probability is substantially lower)Have invented very cheap and effi...]]> 10% seems to require probabilities that feel unreasonably high, and even 3% seems unlikely.Thoughtfully applying the cascading conditional probability approach to this question yields lower probability values than is often supposed. This framework helps enumerate the many future scenarios where humanity makes partial but incomplete progress toward transformative AGI.Executive summaryFor AGI to do most human work for <$25/hr by 2043, many things must happen.We forecast cascading conditional probabilities for 10 necessary events, and find they multiply to an overall likelihood of 0.4%:We invent algorithms for transformative AGI60%We invent a way for AGIs to learn faster than humans40%AGI inference costs drop below $25/hr (per human equivalent)16%We invent and scale cheap, quality robots60%We massively scale production of chips and power46%We avoid derailment by human regulation70%We avoid derailment by AI-caused delay90%We avoid derailment from wars (e.g., China invades Taiwan)70%We avoid derailment from pandemics90%We avoid derailment from severe depressions95%Joint odds0.4%EventForecastby 2043 or TAGI,conditional onprior stepsIf you think our estimates are pessimistic, feel free to substitute your own here. You’ll find it difficult to arrive at odds above 10%.Of course, the difficulty is by construction. Any framework that multiplies ten probabilities together is almost fated to produce low odds.So a good skeptic must ask: Is our framework fair?There are two possible errors to beware of:Did we neglect possible parallel paths to transformative AGI?Did we hew toward unconditional probabilities rather than fully conditional probabilities?We believe we are innocent of both sins.Regarding failing to model parallel disjunctive paths:We have chosen generic steps that don’t make rigid assumptions about the particular algorithms, requirements, or timelines of AGI technologyOne opinionated claim we do make is that transformative AGI by 2043 will almost certainly be run on semiconductor transistors powered by electricity and built in capital-intensive fabs, and we spend many pages justifying this beliefRegarding failing to really grapple with conditional probabilities:Our conditional probabilities are, in some cases, quite different from our unconditional probabilities. In particular, we assume that a world on track to transformative AGI will.Construct semiconductor fabs and power plants at a far faster pace than today (our unconditional probability is substantially lower)Have invented very cheap and effi...]]> Ted Sanders https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:10 no full 6189 W93Pt7xch7eyrkZ7f EA - Cause area report: Antimicrobial Resistance by Akhil Akhil https://forum.effectivealtruism.org/posts/W93Pt7xch7eyrkZ7f/cause-area-report-antimicrobial-resistance Tue, 06 Jun 2023 09:25:23 +0000 EA - Cause area report: Antimicrobial Resistance by Akhil Akhil https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:22 no full 6191 iykkkxvwcuySBeTEL EA - ALTER Israel - 2023 Mid-Year Update by Davidmanheim Davidmanheim https://forum.effectivealtruism.org/posts/iykkkxvwcuySBeTEL/alter-israel-2023-mid-year-update Tue, 06 Jun 2023 02:58:12 +0000 EA - ALTER Israel - 2023 Mid-Year Update by Davidmanheim Davidmanheim https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:54 no full 6194 RfiEENFj8SaYty9Gj EA - National EA groups shouldn’t focus on city groups by DavidNash DavidNash https://forum.effectivealtruism.org/posts/RfiEENFj8SaYty9Gj/national-ea-groups-shouldn-t-focus-on-city-groups Mon, 05 Jun 2023 22:27:28 +0000 EA - National EA groups shouldn’t focus on city groups by DavidNash DavidNash https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:17 no full 6193 hChXEPPkDpiufCE4E EA - I made a news site based on prediction markets by vandemonian vandemonian https://forum.effectivealtruism.org/posts/hChXEPPkDpiufCE4E/i-made-a-news-site-based-on-prediction-markets Mon, 05 Jun 2023 22:15:58 +0000 EA - I made a news site based on prediction markets by vandemonian vandemonian https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:56 no full 6192 nZSFdAdyyfeXF4Rpa EA - Would it make sense for EA funding to be not so much focused on top talent? by Franziska Fischer Franziska Fischer https://forum.effectivealtruism.org/posts/nZSFdAdyyfeXF4Rpa/would-it-make-sense-for-ea-funding-to-be-not-so-much-focused Wed, 07 Jun 2023 21:29:05 +0000 EA - Would it make sense for EA funding to be not so much focused on top talent? by Franziska Fischer Franziska Fischer https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:31 no full 6206 weJZjku3HiNgQC4ER EA - A note of caution about recent AI risk coverage by Sean o h 10 years, perhaps >20 or 30 years. Right now this issue has a lot of the most prominent AI scientists and CEOs signed up, and political leaders worldwide committing to examining the issue seriously (examples from last week). What happens then in the >10 year-timeline world?The extinction-level outcomes that the public is hearing, and that these experts are raising and policymakers making costly reputational investments in, don’t transpire. What does happen is all the benefits of near-term AI that have been talked about, plus all the near-term harms that are being predominantly raised by the AI ethics/FAccT communities. Perhaps these harms include somewhat more extreme versions than what is currently talked about, but nowhere near catastrophic. Suddenly the year is 2028, and that whole 2023 furore is starting to look a bit silly. Remember when everyone agreed AI was going to make us all extinct? Yeah, like Limits to Growth all over again. Except that we’re not safe. In reality, in this scenario, we’re just entering the period in which risk is most acute, and in which gaining or maintaining the support of leaders across society for coordinated action is most important.And it’s possibly even harder to convince them, because people remember how silly lots of people looked the last time.(3) How to navigate this scenario (in advance).Suggestions:Have our messaging make clear that we ...]]> Sean_o_h https://forum.effectivealtruism.org/posts/weJZjku3HiNgQC4ER/a-note-of-caution-about-recent-ai-risk-coverage 10 years, perhaps >20 or 30 years. Right now this issue has a lot of the most prominent AI scientists and CEOs signed up, and political leaders worldwide committing to examining the issue seriously (examples from last week). What happens then in the >10 year-timeline world?The extinction-level outcomes that the public is hearing, and that these experts are raising and policymakers making costly reputational investments in, don’t transpire. What does happen is all the benefits of near-term AI that have been talked about, plus all the near-term harms that are being predominantly raised by the AI ethics/FAccT communities. Perhaps these harms include somewhat more extreme versions than what is currently talked about, but nowhere near catastrophic. Suddenly the year is 2028, and that whole 2023 furore is starting to look a bit silly. Remember when everyone agreed AI was going to make us all extinct? Yeah, like Limits to Growth all over again. Except that we’re not safe. In reality, in this scenario, we’re just entering the period in which risk is most acute, and in which gaining or maintaining the support of leaders across society for coordinated action is most important.And it’s possibly even harder to convince them, because people remember how silly lots of people looked the last time.(3) How to navigate this scenario (in advance).Suggestions:Have our messaging make clear that we ...]]> Wed, 07 Jun 2023 18:01:12 +0000 EA - A note of caution about recent AI risk coverage by Sean o h 10 years, perhaps >20 or 30 years. Right now this issue has a lot of the most prominent AI scientists and CEOs signed up, and political leaders worldwide committing to examining the issue seriously (examples from last week). What happens then in the >10 year-timeline world?The extinction-level outcomes that the public is hearing, and that these experts are raising and policymakers making costly reputational investments in, don’t transpire. What does happen is all the benefits of near-term AI that have been talked about, plus all the near-term harms that are being predominantly raised by the AI ethics/FAccT communities. Perhaps these harms include somewhat more extreme versions than what is currently talked about, but nowhere near catastrophic. Suddenly the year is 2028, and that whole 2023 furore is starting to look a bit silly. Remember when everyone agreed AI was going to make us all extinct? Yeah, like Limits to Growth all over again. Except that we’re not safe. In reality, in this scenario, we’re just entering the period in which risk is most acute, and in which gaining or maintaining the support of leaders across society for coordinated action is most important.And it’s possibly even harder to convince them, because people remember how silly lots of people looked the last time.(3) How to navigate this scenario (in advance).Suggestions:Have our messaging make clear that we ...]]> 10 years, perhaps >20 or 30 years. Right now this issue has a lot of the most prominent AI scientists and CEOs signed up, and political leaders worldwide committing to examining the issue seriously (examples from last week). What happens then in the >10 year-timeline world?The extinction-level outcomes that the public is hearing, and that these experts are raising and policymakers making costly reputational investments in, don’t transpire. What does happen is all the benefits of near-term AI that have been talked about, plus all the near-term harms that are being predominantly raised by the AI ethics/FAccT communities. Perhaps these harms include somewhat more extreme versions than what is currently talked about, but nowhere near catastrophic. Suddenly the year is 2028, and that whole 2023 furore is starting to look a bit silly. Remember when everyone agreed AI was going to make us all extinct? Yeah, like Limits to Growth all over again. Except that we’re not safe. In reality, in this scenario, we’re just entering the period in which risk is most acute, and in which gaining or maintaining the support of leaders across society for coordinated action is most important.And it’s possibly even harder to convince them, because people remember how silly lots of people looked the last time.(3) How to navigate this scenario (in advance).Suggestions:Have our messaging make clear that we ...]]> Sean_o_h https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:24 no full 6203 KRSthwicCTRw9Ayzg EA - Large epistemological concerns I should maybe have about EA a priori by Luise Luise https://forum.effectivealtruism.org/posts/KRSthwicCTRw9Ayzg/large-epistemological-concerns-i-should-maybe-have-about-ea Wed, 07 Jun 2023 17:24:30 +0000 EA - Large epistemological concerns I should maybe have about EA a priori by Luise Luise https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:16 no full 6205 yfuoCzFNdsp6CHX5z EA - Unveiling the Challenges and Potential of Research in Nigeria: Nurturing Talent in Resource-Limited Settings by emmannaemeka emmannaemeka https://forum.effectivealtruism.org/posts/yfuoCzFNdsp6CHX5z/unveiling-the-challenges-and-potential-of-research-in Wed, 07 Jun 2023 16:42:13 +0000 EA - Unveiling the Challenges and Potential of Research in Nigeria: Nurturing Talent in Resource-Limited Settings by emmannaemeka emmannaemeka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:03 no full 6207 wx9GgKGWqksvMWjg2 EA - Successif: helping mid-career and senior professionals have impactful careers by ClaireB ClaireB https://forum.effectivealtruism.org/posts/wx9GgKGWqksvMWjg2/successif-helping-mid-career-and-senior-professionals-have Wed, 07 Jun 2023 16:39:00 +0000 EA - Successif: helping mid-career and senior professionals have impactful careers by ClaireB ClaireB https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:40 no full 6204 aSBEN99X2KaRLSmeT EA - US Policy Career Resources by US Policy Careers US Policy Careers https://forum.effectivealtruism.org/posts/aSBEN99X2KaRLSmeT/us-policy-career-resources Wed, 07 Jun 2023 11:19:19 +0000 EA - US Policy Career Resources by US Policy Careers US Policy Careers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:01 no full 6208 MFA9jC2daSMnhNryv EA - Notes on how I want to handle criticism by Lizka Lizka https://forum.effectivealtruism.org/posts/MFA9jC2daSMnhNryv/notes-on-how-i-want-to-handle-criticism Thu, 08 Jun 2023 17:34:20 +0000 EA - Notes on how I want to handle criticism by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:39 no full 6217 Y2xbKLjEmL6dCd2Z6 EA - UK government to host first global summit on AI Safety by DavidNash DavidNash https://forum.effectivealtruism.org/posts/Y2xbKLjEmL6dCd2Z6/uk-government-to-host-first-global-summit-on-ai-safety Thu, 08 Jun 2023 16:49:25 +0000 EA - UK government to host first global summit on AI Safety by DavidNash DavidNash https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:56 no full 6216 9T7qmJcRpgvr3bjGZ EA - Seeking important GH or IDEV working papers to evaluate by ryancbriggs ryancbriggs https://forum.effectivealtruism.org/posts/9T7qmJcRpgvr3bjGZ/seeking-important-gh-or-idev-working-papers-to-evaluate Thu, 08 Jun 2023 07:58:18 +0000 EA - Seeking important GH or IDEV working papers to evaluate by ryancbriggs ryancbriggs https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:51 no full 6220 yCx3kCReJtucpdd33 EA - The current alignment plan, and how we might improve it | EAG Bay Area 23 by Buck Buck https://forum.effectivealtruism.org/posts/yCx3kCReJtucpdd33/the-current-alignment-plan-and-how-we-might-improve-it-or Thu, 08 Jun 2023 07:57:48 +0000 EA - The current alignment plan, and how we might improve it | EAG Bay Area 23 by Buck Buck https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 45:54 no full 6219 ct3zLpD5FMwBwYCZ7 EA - EA Strategy Fortnight (June 12-24) by Ben West Ben_West https://forum.effectivealtruism.org/posts/ct3zLpD5FMwBwYCZ7/ea-strategy-fortnight-june-12-24 Thu, 08 Jun 2023 01:13:26 +0000 EA - EA Strategy Fortnight (June 12-24) by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:28 no full 6218 J4cLuxvAwnKNQxwxj EA - How does AI progress affect other EA cause areas? by Luis Mota Freitas Luis Mota Freitas https://forum.effectivealtruism.org/posts/J4cLuxvAwnKNQxwxj/how-does-ai-progress-affect-other-ea-cause-areas Fri, 09 Jun 2023 18:43:06 +0000 EA - How does AI progress affect other EA cause areas? by Luis Mota Freitas Luis Mota Freitas https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:15 no full 6229 vxpqFFtrRsG9RLkqa EA - Announcement: You can now listen to the “AI Safety Fundamentals” courses by peterhartree peterhartree https://forum.effectivealtruism.org/posts/vxpqFFtrRsG9RLkqa/announcement-you-can-now-listen-to-the-ai-safety Fri, 09 Jun 2023 18:23:15 +0000 EA - Announcement: You can now listen to the “AI Safety Fundamentals” courses by peterhartree peterhartree https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:18 no full 6227 K3XiFGMdAQXBGTFSH EA - A survey of concrete risks derived from Artificial Intelligence by Guillem Bas Guillem Bas https://forum.effectivealtruism.org/posts/K3XiFGMdAQXBGTFSH/a-survey-of-concrete-risks-derived-from-artificial Fri, 09 Jun 2023 17:01:33 +0000 EA - A survey of concrete risks derived from Artificial Intelligence by Guillem Bas Guillem Bas https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:23 no full 6230 K85qGvjqnJbznNgiY EA - Strawmen, steelmen, and mithrilmen: getting the principle of charity right by MichaelPlant MichaelPlant https://forum.effectivealtruism.org/posts/K85qGvjqnJbznNgiY/strawmen-steelmen-and-mithrilmen-getting-the-principle-of Fri, 09 Jun 2023 15:04:04 +0000 EA - Strawmen, steelmen, and mithrilmen: getting the principle of charity right by MichaelPlant MichaelPlant https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:48 no full 6228 qyhDz9djZAmxZ6Qzx EA - How economists got Africa’s AIDS epidemic wrong by Justin Sandefur Justin Sandefur https://forum.effectivealtruism.org/posts/qyhDz9djZAmxZ6Qzx/how-economists-got-africa-s-aids-epidemic-wrong Sat, 10 Jun 2023 17:57:37 +0000 EA - How economists got Africa’s AIDS epidemic wrong by Justin Sandefur Justin Sandefur https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:28 no full 6238 sNqzGZjv4pRJjjhZs EA - Wild Animal Welfare Scenarios for AI Doom by utilistrutil utilistrutil https://forum.effectivealtruism.org/posts/sNqzGZjv4pRJjjhZs/wild-animal-welfare-scenarios-for-ai-doom Sat, 10 Jun 2023 02:31:39 +0000 EA - Wild Animal Welfare Scenarios for AI Doom by utilistrutil utilistrutil https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:13 no full 6239 LqjG4bAxHfmHC5iut EA - Why I Spoke to TIME Magazine, and My Experience as a Female AI Researcher in Silicon Valley with Sexual Harassment/Abuse by Lucretia Lucretia https://forum.effectivealtruism.org/posts/LqjG4bAxHfmHC5iut/why-i-spoke-to-time-magazine-and-my-experience-as-a-female Sun, 11 Jun 2023 09:12:46 +0000 EA - Why I Spoke to TIME Magazine, and My Experience as a Female AI Researcher in Silicon Valley with Sexual Harassment/Abuse by Lucretia Lucretia https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:00:35 no full 6243 vw3cCCohgNhBoJE8D EA - Holly Elmore on reducing rodent suffering and wild animal welfare by Karthik Palakodeti Karthik Palakodeti https://forum.effectivealtruism.org/posts/vw3cCCohgNhBoJE8D/holly-elmore-on-reducing-rodent-suffering-and-wild-animal Mon, 12 Jun 2023 16:46:05 +0000 EA - Holly Elmore on reducing rodent suffering and wild animal welfare by Karthik Palakodeti Karthik Palakodeti https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:04 no full 6255 WFeBgJb2GQWK6J2Do EA - Historical Global Health R&D “hits”: Development, main sources of funding, and impact by Rethink Priorities Rethink Priorities https://forum.effectivealtruism.org/posts/WFeBgJb2GQWK6J2Do/historical-global-health-r-and-d-hits-development-main Mon, 12 Jun 2023 16:36:33 +0000 EA - Historical Global Health R&D “hits”: Development, main sources of funding, and impact by Rethink Priorities Rethink Priorities https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:59 no full 6251 NPHJBby6KjDC7iNYK EA - What can superintelligent ANI tell us about superintelligent AGI? by Ted Sanders Ted Sanders https://forum.effectivealtruism.org/posts/NPHJBby6KjDC7iNYK/what-can-superintelligent-ani-tell-us-about-superintelligent Mon, 12 Jun 2023 15:30:59 +0000 EA - What can superintelligent ANI tell us about superintelligent AGI? by Ted Sanders Ted Sanders https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:22 no full 6253 viXCv8thAAd68Qnfs EA - Effective altruism organizations should avoid using “polarizing techniques” by Joey Joey https://forum.effectivealtruism.org/posts/viXCv8thAAd68Qnfs/effective-altruism-organizations-should-avoid-using Mon, 12 Jun 2023 14:30:05 +0000 EA - Effective altruism organizations should avoid using “polarizing techniques” by Joey Joey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:49 no full 6250 fGDN9xxrd8k7kZ2nf EA - Family Empowerment Media: track record, cost-effectiveness, and main uncertainties by Rethink Priorities Rethink Priorities https://forum.effectivealtruism.org/posts/fGDN9xxrd8k7kZ2nf/family-empowerment-media-track-record-cost-effectiveness-and Mon, 12 Jun 2023 10:55:58 +0000 EA - Family Empowerment Media: track record, cost-effectiveness, and main uncertainties by Rethink Priorities Rethink Priorities https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:46 no full 6252 gkfMLX4NWZdmpikto EA - Critiques of prominent AI safety labs: Conjecture by Omega 4 years experience), and one non-technical community member with experience in the EA community. We’d like to make our critiques non-anonymously but believe this will not be a wise move professionally speaking. We believe our criticisms stand on their own without appeal to our positions. Readers should not assume that we are completely unbiased or don’t have anything to personally or professionally gain from publishing these critiques. We’ve tried to take the benefits and drawbacks of the anonymous nature of our post seriously and carefully, and are open to feedback on anything we might have done better.This is the second post in this series and it covers Conjecture. Conjecture is a for-profit alignment startup founded in late 2021 by Connor Leahy, Sid Black and Gabriel Alfour, which aims to scale applied alignment research. Based in London, Conjecture has received $10 million in funding from venture capitalists (VCs), and recruits heavily from the EA movement. We shared a draft of this document with Conjecture for feedback prior to publication, and include their response below. We also requested feedback on a draft from a small group of experienced alignment researchers from various organizations, and have invited them to share their views in the comments of this post.We would like to invite others to share their thoughts in the comments openly if you feel comfortable, or contribute anonymously via this form. We will add inputs from there to the comments section of this post, but will likely not be updating the main body of the post as a result (unless comments catch errors in our writing).Key TakeawaysFor those with limited knowledge and context on Conjecture, we recommend first reading or skimming the About Conjecture section.Time to read the core sections (Criticisms & Suggestions and Our views on Conjecture) is 22 minutes.Criticisms and SuggestionsWe think Conjecture’s research is low quality (read more).Their posts don’t always make assumptions clear, don’t make it clear what evidence base they have for a given hypothesis, and evidence is frequently cherry-picked. We also think their bar for publishing is too low, which decreases the signal to noise ratio. Conjecture has acknowledged some of these criticisms, but not all (read more).We make specific critiques of examples of their research from their initial research agenda (read more).There is limited information available on their new research direction (cognitive emulation), but from the publicly available information it appears extremely challenging and so we are skeptical as to its tractability (read more).We have some concerns with the CEO’s character and trustworthiness because, in order of importance (read more):The CEO and Conjecture have misrepresented themselves to external parties multiple times (read more);The CEO’s involvement in EleutherAI and Stability AI has contributed to race dynamics (read more);The CEO previously overstated his accomplishments in 2019 (when an undergrad) (read more);The CEO has been inconsistent over time regarding his position on releasing LLMs (read more).We believe Conjecture has scaled too quickly before demonstrating they have promising research results, and believe this will make it harder for them to pivot in the future (read more).We are concerned that Conjecture does not have a clear plan for balancing profit an...]]> Omega https://forum.effectivealtruism.org/posts/gkfMLX4NWZdmpikto/critiques-of-prominent-ai-safety-labs-conjecture 4 years experience), and one non-technical community member with experience in the EA community. We’d like to make our critiques non-anonymously but believe this will not be a wise move professionally speaking. We believe our criticisms stand on their own without appeal to our positions. Readers should not assume that we are completely unbiased or don’t have anything to personally or professionally gain from publishing these critiques. We’ve tried to take the benefits and drawbacks of the anonymous nature of our post seriously and carefully, and are open to feedback on anything we might have done better.This is the second post in this series and it covers Conjecture. Conjecture is a for-profit alignment startup founded in late 2021 by Connor Leahy, Sid Black and Gabriel Alfour, which aims to scale applied alignment research. Based in London, Conjecture has received $10 million in funding from venture capitalists (VCs), and recruits heavily from the EA movement. We shared a draft of this document with Conjecture for feedback prior to publication, and include their response below. We also requested feedback on a draft from a small group of experienced alignment researchers from various organizations, and have invited them to share their views in the comments of this post.We would like to invite others to share their thoughts in the comments openly if you feel comfortable, or contribute anonymously via this form. We will add inputs from there to the comments section of this post, but will likely not be updating the main body of the post as a result (unless comments catch errors in our writing).Key TakeawaysFor those with limited knowledge and context on Conjecture, we recommend first reading or skimming the About Conjecture section.Time to read the core sections (Criticisms & Suggestions and Our views on Conjecture) is 22 minutes.Criticisms and SuggestionsWe think Conjecture’s research is low quality (read more).Their posts don’t always make assumptions clear, don’t make it clear what evidence base they have for a given hypothesis, and evidence is frequently cherry-picked. We also think their bar for publishing is too low, which decreases the signal to noise ratio. Conjecture has acknowledged some of these criticisms, but not all (read more).We make specific critiques of examples of their research from their initial research agenda (read more).There is limited information available on their new research direction (cognitive emulation), but from the publicly available information it appears extremely challenging and so we are skeptical as to its tractability (read more).We have some concerns with the CEO’s character and trustworthiness because, in order of importance (read more):The CEO and Conjecture have misrepresented themselves to external parties multiple times (read more);The CEO’s involvement in EleutherAI and Stability AI has contributed to race dynamics (read more);The CEO previously overstated his accomplishments in 2019 (when an undergrad) (read more);The CEO has been inconsistent over time regarding his position on releasing LLMs (read more).We believe Conjecture has scaled too quickly before demonstrating they have promising research results, and believe this will make it harder for them to pivot in the future (read more).We are concerned that Conjecture does not have a clear plan for balancing profit an...]]> Mon, 12 Jun 2023 03:50:56 +0000 EA - Critiques of prominent AI safety labs: Conjecture by Omega 4 years experience), and one non-technical community member with experience in the EA community. We’d like to make our critiques non-anonymously but believe this will not be a wise move professionally speaking. We believe our criticisms stand on their own without appeal to our positions. Readers should not assume that we are completely unbiased or don’t have anything to personally or professionally gain from publishing these critiques. We’ve tried to take the benefits and drawbacks of the anonymous nature of our post seriously and carefully, and are open to feedback on anything we might have done better.This is the second post in this series and it covers Conjecture. Conjecture is a for-profit alignment startup founded in late 2021 by Connor Leahy, Sid Black and Gabriel Alfour, which aims to scale applied alignment research. Based in London, Conjecture has received $10 million in funding from venture capitalists (VCs), and recruits heavily from the EA movement. We shared a draft of this document with Conjecture for feedback prior to publication, and include their response below. We also requested feedback on a draft from a small group of experienced alignment researchers from various organizations, and have invited them to share their views in the comments of this post.We would like to invite others to share their thoughts in the comments openly if you feel comfortable, or contribute anonymously via this form. We will add inputs from there to the comments section of this post, but will likely not be updating the main body of the post as a result (unless comments catch errors in our writing).Key TakeawaysFor those with limited knowledge and context on Conjecture, we recommend first reading or skimming the About Conjecture section.Time to read the core sections (Criticisms & Suggestions and Our views on Conjecture) is 22 minutes.Criticisms and SuggestionsWe think Conjecture’s research is low quality (read more).Their posts don’t always make assumptions clear, don’t make it clear what evidence base they have for a given hypothesis, and evidence is frequently cherry-picked. We also think their bar for publishing is too low, which decreases the signal to noise ratio. Conjecture has acknowledged some of these criticisms, but not all (read more).We make specific critiques of examples of their research from their initial research agenda (read more).There is limited information available on their new research direction (cognitive emulation), but from the publicly available information it appears extremely challenging and so we are skeptical as to its tractability (read more).We have some concerns with the CEO’s character and trustworthiness because, in order of importance (read more):The CEO and Conjecture have misrepresented themselves to external parties multiple times (read more);The CEO’s involvement in EleutherAI and Stability AI has contributed to race dynamics (read more);The CEO previously overstated his accomplishments in 2019 (when an undergrad) (read more);The CEO has been inconsistent over time regarding his position on releasing LLMs (read more).We believe Conjecture has scaled too quickly before demonstrating they have promising research results, and believe this will make it harder for them to pivot in the future (read more).We are concerned that Conjecture does not have a clear plan for balancing profit an...]]> 4 years experience), and one non-technical community member with experience in the EA community. We’d like to make our critiques non-anonymously but believe this will not be a wise move professionally speaking. We believe our criticisms stand on their own without appeal to our positions. Readers should not assume that we are completely unbiased or don’t have anything to personally or professionally gain from publishing these critiques. We’ve tried to take the benefits and drawbacks of the anonymous nature of our post seriously and carefully, and are open to feedback on anything we might have done better.This is the second post in this series and it covers Conjecture. Conjecture is a for-profit alignment startup founded in late 2021 by Connor Leahy, Sid Black and Gabriel Alfour, which aims to scale applied alignment research. Based in London, Conjecture has received $10 million in funding from venture capitalists (VCs), and recruits heavily from the EA movement. We shared a draft of this document with Conjecture for feedback prior to publication, and include their response below. We also requested feedback on a draft from a small group of experienced alignment researchers from various organizations, and have invited them to share their views in the comments of this post.We would like to invite others to share their thoughts in the comments openly if you feel comfortable, or contribute anonymously via this form. We will add inputs from there to the comments section of this post, but will likely not be updating the main body of the post as a result (unless comments catch errors in our writing).Key TakeawaysFor those with limited knowledge and context on Conjecture, we recommend first reading or skimming the About Conjecture section.Time to read the core sections (Criticisms & Suggestions and Our views on Conjecture) is 22 minutes.Criticisms and SuggestionsWe think Conjecture’s research is low quality (read more).Their posts don’t always make assumptions clear, don’t make it clear what evidence base they have for a given hypothesis, and evidence is frequently cherry-picked. We also think their bar for publishing is too low, which decreases the signal to noise ratio. Conjecture has acknowledged some of these criticisms, but not all (read more).We make specific critiques of examples of their research from their initial research agenda (read more).There is limited information available on their new research direction (cognitive emulation), but from the publicly available information it appears extremely challenging and so we are skeptical as to its tractability (read more).We have some concerns with the CEO’s character and trustworthiness because, in order of importance (read more):The CEO and Conjecture have misrepresented themselves to external parties multiple times (read more);The CEO’s involvement in EleutherAI and Stability AI has contributed to race dynamics (read more);The CEO previously overstated his accomplishments in 2019 (when an undergrad) (read more);The CEO has been inconsistent over time regarding his position on releasing LLMs (read more).We believe Conjecture has scaled too quickly before demonstrating they have promising research results, and believe this will make it harder for them to pivot in the future (read more).We are concerned that Conjecture does not have a clear plan for balancing profit an...]]> Omega https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 54:37 no full 6254 iAsepse4jx6zLH4tZ EA - Improving EA Communication Surrounding Disability by MHR MHR https://forum.effectivealtruism.org/posts/iAsepse4jx6zLH4tZ/improving-ea-communication-surrounding-disability Tue, 13 Jun 2023 14:15:10 +0000 EA - Improving EA Communication Surrounding Disability by MHR MHR https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:04 no full 6265 EBZggasznbotKrpLW EA - Tony Blair Institute AI Safety Work by TomWestgarth TomWestgarth https://forum.effectivealtruism.org/posts/EBZggasznbotKrpLW/tony-blair-institute-ai-safety-work Tue, 13 Jun 2023 17:17:58 +0000 EA - Tony Blair Institute AI Safety Work by TomWestgarth TomWestgarth https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:57 no full 6266 CvhndEdp5yJwGpcXi EA - AMA: Luke Freeman, ED of Giving What We Can (GWWC) by Luke Freeman Luke Freeman https://forum.effectivealtruism.org/posts/CvhndEdp5yJwGpcXi/ama-luke-freeman-ed-of-giving-what-we-can-gwwc Tue, 13 Jun 2023 13:17:17 +0000 EA - AMA: Luke Freeman, ED of Giving What We Can (GWWC) by Luke Freeman Luke Freeman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:53 no full 6267 vz8wia5x8pk2fNLZk EA - <$750k grants for General Purpose AI Assurance/Safety Research by Phosphorous Phosphorous https://forum.effectivealtruism.org/posts/vz8wia5x8pk2fNLZk/less-than-usd750k-grants-for-general-purpose-ai-assurance Tue, 13 Jun 2023 16:40:47 +0000 EA - <$750k grants for General Purpose AI Assurance/Safety Research by Phosphorous Phosphorous https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:57 no full 6268 iKWeQ7jhsZ8FrRDto EA - Linkpost: Dwarkesh Patel interviewing Carl Shulman by Stefan Schubert Stefan_Schubert https://forum.effectivealtruism.org/posts/iKWeQ7jhsZ8FrRDto/linkpost-dwarkesh-patel-interviewing-carl-shulman Wed, 14 Jun 2023 18:57:16 +0000 EA - Linkpost: Dwarkesh Patel interviewing Carl Shulman by Stefan Schubert Stefan_Schubert https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:09 no full 6279 mzzPMrBjGpra2JSDw EA - EA organizations should have a transparent scope by Joey Joey https://forum.effectivealtruism.org/posts/mzzPMrBjGpra2JSDw/ea-organizations-should-have-a-transparent-scope Wed, 14 Jun 2023 11:50:37 +0000 EA - EA organizations should have a transparent scope by Joey Joey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:21 no full 6280 B9wahjnGNvmLqQYD9 EA - We are Hiring: The Knowledge & Research Team @ Power for Democracies by Stephan Schwahlen Stephan Schwahlen https://forum.effectivealtruism.org/posts/B9wahjnGNvmLqQYD9/we-are-hiring-the-knowledge-and-research-team-power-for Wed, 14 Jun 2023 12:26:34 +0000 EA - We are Hiring: The Knowledge & Research Team @ Power for Democracies by Stephan Schwahlen Stephan Schwahlen https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:17 no full 6281 xKek2ygmwqbPrWyGx EA - The Common Sense View on Meat Implies Some Pretty Radical Conclusions by Omnizoid Omnizoid https://forum.effectivealtruism.org/posts/xKek2ygmwqbPrWyGx/the-common-sense-view-on-meat-implies-some-pretty-radical Wed, 14 Jun 2023 13:51:27 +0000 EA - The Common Sense View on Meat Implies Some Pretty Radical Conclusions by Omnizoid Omnizoid https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:53 no full 6282 QiCZoxjjvPpd8qfWb EA - Epoch and FRI Mentorship Program Summer 2023 by merilalama merilalama https://forum.effectivealtruism.org/posts/QiCZoxjjvPpd8qfWb/epoch-and-fri-mentorship-program-summer-2023-1 Tue, 13 Jun 2023 23:22:18 +0000 EA - Epoch and FRI Mentorship Program Summer 2023 by merilalama merilalama https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:35 no full 6283 ozSBaNLysue9MmFqs EA - Aptitudes for AI governance work by Sam Clarke Sam Clarke https://forum.effectivealtruism.org/posts/ozSBaNLysue9MmFqs/aptitudes-for-ai-governance-work Wed, 14 Jun 2023 10:57:40 +0000 EA - Aptitudes for AI governance work by Sam Clarke Sam Clarke https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:42 no full 6284 Avi9XgSikH5BdHzKu EA - Mindmap with overview of EA organisations via tinyurl.com/eamindmap (and many other lists of orgs) by mariekedev Overview of the public effective giving ecosystem (supply and demand) [shared] (Sjir Hoeijmakers) - Amazing to see all these giving orgs, I had no ideaLet's advertise EA infrastructure projects, Feb 2023 - A list of EA-related projects and orgs that offer free and useful servicesEASE: directory of independent agencies and freelancers offering expertise to EA-aligned organizationsAISafety.world is a map of the AIS ecosystem - Very hard to compete with this :)Other somewhat related lists:OpenBook: New EA Grants DatabaseList of EA Slack workspaces (Alex Berezhnoi)List of EA-aligned research training programs (Michael Aird)Effective Altruism DataHistorical EA funding dataLet me know if I missed an important list of orgs here.Somehow I still ended up sharing my mindmap with others, as it gives a clear visual overview of the orgs per main category or cause area. I know it is not perfect, but it is easy to maintain and people seem to like it. I was asked multiple times to add it to the forum, so here it is.DisclaimerOrganisation inclusion doesn't imply my endorsement.The branches (the categories or cause areas) are a bit arbitrary, as some organisations serve multiple categories or cause areas. For me the aim was to get an overview, not a perfect mindmap.It's incomplete. I'm still adding orgs weekly. Feel free to add notes (in the mindmap) where this overview can be corrected or orgs can be added or deleted.I use the words orgs, organisations and entities in this post. That is because 'orgs' is more accessible and clear as a word, but the entities aren't all official organisations.I changed the title of the mindmap from EA(-aligned) to EA(-related), due to this comment. I'm still a bit uncomfortable with it, because there are many more EA-related orgs which won't fit all into this mindmap (see 80k jobboard), but it seems more right (for example: I added the AI labs..). I try to add only the orgs that self-identify as EA-aligned or the ones that many EAs refer to.How to use thisHowever you like :) the left side contains more meta entities and the right side contains more object-level (cause area related) orgs.I heard people use it in order to quickly get an overview of existing entities in one category (like EA career support or biorisk).Sometimes I use it in an intro presentation to make people more aware of ...]]> mariekedev https://forum.effectivealtruism.org/posts/Avi9XgSikH5BdHzKu/mindmap-with-overview-of-ea-organisations-via-tinyurl-com Overview of the public effective giving ecosystem (supply and demand) [shared] (Sjir Hoeijmakers) - Amazing to see all these giving orgs, I had no ideaLet's advertise EA infrastructure projects, Feb 2023 - A list of EA-related projects and orgs that offer free and useful servicesEASE: directory of independent agencies and freelancers offering expertise to EA-aligned organizationsAISafety.world is a map of the AIS ecosystem - Very hard to compete with this :)Other somewhat related lists:OpenBook: New EA Grants DatabaseList of EA Slack workspaces (Alex Berezhnoi)List of EA-aligned research training programs (Michael Aird)Effective Altruism DataHistorical EA funding dataLet me know if I missed an important list of orgs here.Somehow I still ended up sharing my mindmap with others, as it gives a clear visual overview of the orgs per main category or cause area. I know it is not perfect, but it is easy to maintain and people seem to like it. I was asked multiple times to add it to the forum, so here it is.DisclaimerOrganisation inclusion doesn't imply my endorsement.The branches (the categories or cause areas) are a bit arbitrary, as some organisations serve multiple categories or cause areas. For me the aim was to get an overview, not a perfect mindmap.It's incomplete. I'm still adding orgs weekly. Feel free to add notes (in the mindmap) where this overview can be corrected or orgs can be added or deleted.I use the words orgs, organisations and entities in this post. That is because 'orgs' is more accessible and clear as a word, but the entities aren't all official organisations.I changed the title of the mindmap from EA(-aligned) to EA(-related), due to this comment. I'm still a bit uncomfortable with it, because there are many more EA-related orgs which won't fit all into this mindmap (see 80k jobboard), but it seems more right (for example: I added the AI labs..). I try to add only the orgs that self-identify as EA-aligned or the ones that many EAs refer to.How to use thisHowever you like :) the left side contains more meta entities and the right side contains more object-level (cause area related) orgs.I heard people use it in order to quickly get an overview of existing entities in one category (like EA career support or biorisk).Sometimes I use it in an intro presentation to make people more aware of ...]]> Thu, 15 Jun 2023 09:51:04 +0000 EA - Mindmap with overview of EA organisations via tinyurl.com/eamindmap (and many other lists of orgs) by mariekedev Overview of the public effective giving ecosystem (supply and demand) [shared] (Sjir Hoeijmakers) - Amazing to see all these giving orgs, I had no ideaLet's advertise EA infrastructure projects, Feb 2023 - A list of EA-related projects and orgs that offer free and useful servicesEASE: directory of independent agencies and freelancers offering expertise to EA-aligned organizationsAISafety.world is a map of the AIS ecosystem - Very hard to compete with this :)Other somewhat related lists:OpenBook: New EA Grants DatabaseList of EA Slack workspaces (Alex Berezhnoi)List of EA-aligned research training programs (Michael Aird)Effective Altruism DataHistorical EA funding dataLet me know if I missed an important list of orgs here.Somehow I still ended up sharing my mindmap with others, as it gives a clear visual overview of the orgs per main category or cause area. I know it is not perfect, but it is easy to maintain and people seem to like it. I was asked multiple times to add it to the forum, so here it is.DisclaimerOrganisation inclusion doesn't imply my endorsement.The branches (the categories or cause areas) are a bit arbitrary, as some organisations serve multiple categories or cause areas. For me the aim was to get an overview, not a perfect mindmap.It's incomplete. I'm still adding orgs weekly. Feel free to add notes (in the mindmap) where this overview can be corrected or orgs can be added or deleted.I use the words orgs, organisations and entities in this post. That is because 'orgs' is more accessible and clear as a word, but the entities aren't all official organisations.I changed the title of the mindmap from EA(-aligned) to EA(-related), due to this comment. I'm still a bit uncomfortable with it, because there are many more EA-related orgs which won't fit all into this mindmap (see 80k jobboard), but it seems more right (for example: I added the AI labs..). I try to add only the orgs that self-identify as EA-aligned or the ones that many EAs refer to.How to use thisHowever you like :) the left side contains more meta entities and the right side contains more object-level (cause area related) orgs.I heard people use it in order to quickly get an overview of existing entities in one category (like EA career support or biorisk).Sometimes I use it in an intro presentation to make people more aware of ...]]> Overview of the public effective giving ecosystem (supply and demand) [shared] (Sjir Hoeijmakers) - Amazing to see all these giving orgs, I had no ideaLet's advertise EA infrastructure projects, Feb 2023 - A list of EA-related projects and orgs that offer free and useful servicesEASE: directory of independent agencies and freelancers offering expertise to EA-aligned organizationsAISafety.world is a map of the AIS ecosystem - Very hard to compete with this :)Other somewhat related lists:OpenBook: New EA Grants DatabaseList of EA Slack workspaces (Alex Berezhnoi)List of EA-aligned research training programs (Michael Aird)Effective Altruism DataHistorical EA funding dataLet me know if I missed an important list of orgs here.Somehow I still ended up sharing my mindmap with others, as it gives a clear visual overview of the orgs per main category or cause area. I know it is not perfect, but it is easy to maintain and people seem to like it. I was asked multiple times to add it to the forum, so here it is.DisclaimerOrganisation inclusion doesn't imply my endorsement.The branches (the categories or cause areas) are a bit arbitrary, as some organisations serve multiple categories or cause areas. For me the aim was to get an overview, not a perfect mindmap.It's incomplete. I'm still adding orgs weekly. Feel free to add notes (in the mindmap) where this overview can be corrected or orgs can be added or deleted.I use the words orgs, organisations and entities in this post. That is because 'orgs' is more accessible and clear as a word, but the entities aren't all official organisations.I changed the title of the mindmap from EA(-aligned) to EA(-related), due to this comment. I'm still a bit uncomfortable with it, because there are many more EA-related orgs which won't fit all into this mindmap (see 80k jobboard), but it seems more right (for example: I added the AI labs..). I try to add only the orgs that self-identify as EA-aligned or the ones that many EAs refer to.How to use thisHowever you like :) the left side contains more meta entities and the right side contains more object-level (cause area related) orgs.I heard people use it in order to quickly get an overview of existing entities in one category (like EA career support or biorisk).Sometimes I use it in an intro presentation to make people more aware of ...]]> mariekedev https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:51 no full 6294 rcLDNmYcna5SLsELQ EA - Why EA Community building by Rob Gledhill Rob Gledhill https://forum.effectivealtruism.org/posts/rcLDNmYcna5SLsELQ/why-ea-community-building Thu, 15 Jun 2023 11:34:49 +0000 EA - Why EA Community building by Rob Gledhill Rob Gledhill https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:20 no full 6295 gsPmsdXWFmkwezc5L EA - Some talent needs in AI governance by Sam Clarke Sam Clarke https://forum.effectivealtruism.org/posts/gsPmsdXWFmkwezc5L/some-talent-needs-in-ai-governance Wed, 14 Jun 2023 22:22:00 +0000 EA - Some talent needs in AI governance by Sam Clarke Sam Clarke https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:06 no full 6296 SFAMvCxnEzaQHNeSL EA - How has FTX's collapse affected public perception of EA? by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How has FTX's collapse affected public perception of EA?, published by Ben West on June 16, 2023 on The Effective Altruism Forum.Purpose of postAs part of the EA Strategy Fortnight, we want to compile some survey results addressing how FTX has impacted the EA brand—particularly people’s sentiments towards EA. From the research we have on attitudes toward EA from the EA community, the general public, and university students, it seems that the FTX crash hasn’t, overall, impacted sentiments toward EA very much.This is not to say that FTX has not significantly impacted people in many ways, including mental and emotional health, levels of trust in EA leadership, and increased uncertainty in EA as a movement. We hope compiling these survey results will help contextualize our individual experiences and improve our understanding of general attitudes toward EA.The EA CommunityIn December 2022, Rethink Priorities ran a survey to gauge how the FTX crisis affected EA community members’ attitudes toward the EA movement, organizations, and leadership (fuller results of that survey from Rethink are here).Data points:Results demonstrated that FTX had decreased satisfaction by 0.5-1 points on a 10-point scale within the EA community, but overall community sentiment remained positive at ~7.5/10.Around half of the respondents reported that the FTX crisis gave them concerns with EA meta-organizations, the EA community and its norms, and the leaders of EA meta-organizations.More respondents agreed that the EA community had responded well to the crisis so far (47%) than disagreed (21%), though roughly a third of respondents neither agreed nor disagreed with this.Most respondents reported continuing to trust EA organizations, though over 30% said they had substantially lost trust in EA public figures or leadership.University StudentsThe CEA groups team surveyed some university group organizers regarding the states of their individual groups in May 2023. From November 2022 to February 2023, Rethink commissioned a survey to measure campus awareness of EA and whether students’ awareness of EA was because of FTX.Data points:When CEA polled university group organizers, they gave an average response of 3.8/10 to “How worried are you about how your group is perceived on campus?” Only two organizers mentioned FTX, and both of them only did so to state that they haven’t seen impacts from FTX—though one said that this might change in the fall.The vast majority of students interviewed on campuses did not mention FTX when asked where and when they heard about EA, and only 13/233 (5.6%) respondents who had encountered EA found FTX or SBF salient enough to mention when interviewed.Most respondents to Rethink's survey hadn't encountered EA. Of those who had (233), only 18 (1.1% of total respondents) referred to FTX/SBF explicitly or obliquely when asked what they think effective altruism means or where and when they first heard of it.General PublicRethink ran surveys pre- and post-FTX assessing awareness of and feelings towards EA in the US general public and later in more elite (educated and informed) US groups in February-March (post-FTX).Data points:Awareness of EA remains low, and ~99% of people who were aware of EA did not mention FTX.Among those aware of EA, attitudes remain positive and actually maybe increased post-FTX —though they were lower (d = -1.5, with large uncertainty) among those who were additionally aware of FTX.This does not suggest that there has not been a worsening in attitudes towards EA among those highly familiar with EA and FTX, only that most people are not familiar with both EA and FTX.Major Donor ImpactsWhile not a formal survey, we thought it might be useful to include some info about how major EA donors have responded. This is primarily anecdotal in...

]]>
Ben_West https://forum.effectivealtruism.org/posts/SFAMvCxnEzaQHNeSL/how-has-ftx-s-collapse-affected-public-perception-of-ea Fri, 16 Jun 2023 21:06:56 +0000 EA - How has FTX's collapse affected public perception of EA? by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:04 no full 6304
nhenCNq7s3zaXpQ8c EA - What would it look like for AIS to no longer be neglected? by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What would it look like for AIS to no longer be neglected?, published by Rockwell on June 16, 2023 on The Effective Altruism Forum.Recently, I've heard a number of people I consider informed and credible posit that AIS may either no longer be neglected or may be nearing a point at which it is no longer neglected. Predictably, this was met with pushback of varying intensity, including from comparably informed and credible people.I think it would be helpful to have a shared and public vision of what it would look like for AIS to no longer be neglected by general EA standards, or a metric of some sort for the degree of neglectedness or the specific components that are neglected. I think this is likely different from AIS being "solved" and is necessarily contextualized in the full breadth of the world's most pressing problems, including other x-risks and s-risks, and their relative neglectedness. This seems like an important bar to establish and hold community progress against. Maybe this already exists and I've missed it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Rockwell https://forum.effectivealtruism.org/posts/nhenCNq7s3zaXpQ8c/what-would-it-look-like-for-ais-to-no-longer-be-neglected Fri, 16 Jun 2023 19:56:16 +0000 EA - What would it look like for AIS to no longer be neglected? by Rockwell Rockwell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:07 no full 6305
PJLx7CwB4mtaDgmFc EA - Critiques of non-existent AI safety labs: Yours by Anneal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Critiques of non-existent AI safety labs: Yours, published by Anneal on June 16, 2023 on The Effective Altruism Forum."Starting a company is like chewing glass. Eventually, you start to like the taste of your own blood."Building a new organization is extremely hard. It's hard when you've done it before, even several times. It's even harder the first time.Some new organizations are very similar to existing organizations. The founders of the new org can go look at all the previous closeby examples, learn from them, copy their playbook and avoid their mistakes. If your org is shaped like a Y-combinator company, you can spend dozens of hours absorbing high-quality, expert-crafted content which has been tested and tweaked and improved over hundreds of companies and more than a decade. You can do a 15 minute interview to go work next to a bunch of the best people who are also building your type of org, and learn by looking over their shoulder and troubleshooting together. You get to talk to a bunch of people who have actually succeeded building an org-like-yours.How likely is org building success, in this premier reference class, rich with prior examples to learn from, with a tried and true playbook, a tight community of founder peers, the advice of many people who have tried to do your kind of thing and won? 5%.An AI safety lab is not the same as a Y-combinator company.It is. WAY. FUCKING. HARDER.Y-combinator crowd has a special category for orgs which are trying build something that requires > ~any minor research breakthrough: HARD tech. Yet the vast majority of these Hard Tech companies are actually building on top of an academic field which basically has the science figured out. Ginkgo Bioworks did not need to figure out the principles of molecular biology, nor the tools and protocols of genetic engineering. They took the a decades old, well-developed paradigm, and worked within it to incrementally build something new. How does this look for AI safety?And how about timing. Y-combinator reference class companies take a long time to build. Growing headcount slowly, running lean: absolutely essential if you are stretching out your last funding round over 7 years to iterate your way from a 24 hour livestream tv show of one guy's life to a game streaming company.Remind me again, what are your timelines?I could keep going on this for a while. People? Fewer. Funding? Monolithic. Advice from the winners? HA.Apply these updates to our starting reference class success rate ofONE. IN. TWENTY.Now count the AI safety labs.Multiply by ~3. That is the roughly the number of people who are not the subject of this post. For all the rest of us, consider several criticisms and suggestions, which were not feasible to run by the subjects of this post before publication0. Nobody knows what they are fucking doing when founding and running an AI safety lab and everyone who says they do is lying to you.1. Nobody has ever seen an organization which has succeeded at this goal.2. Nobody has ever met the founder of such an organization, nor noted down their qualifications.3. If the quote at the top of this post doesn't evoke a visceral sense memory for you, consider whether you have an accurate mental picture of what it looks like and feels like to be succeeding at this kind of thing from the inside. Make sure you imagine having fully internalized that FAILURE IS YOUR FAULT and no one else's, and are defining success correctly.(I believe it should be "everyone doesn't die" rather than "be highly respected for your organization's contributions" or "avoid horribly embarrassing mistakes".) 4. If that last bit feels awful and stress inducing, I expect that is because it is. Even for and especially for the handfulls of people who are not the subjects of this post. So much so that I'm guessing ...

]]>
Anneal https://forum.effectivealtruism.org/posts/PJLx7CwB4mtaDgmFc/critiques-of-non-existent-ai-safety-labs-yours ~any minor research breakthrough: HARD tech. Yet the vast majority of these Hard Tech companies are actually building on top of an academic field which basically has the science figured out. Ginkgo Bioworks did not need to figure out the principles of molecular biology, nor the tools and protocols of genetic engineering. They took the a decades old, well-developed paradigm, and worked within it to incrementally build something new. How does this look for AI safety?And how about timing. Y-combinator reference class companies take a long time to build. Growing headcount slowly, running lean: absolutely essential if you are stretching out your last funding round over 7 years to iterate your way from a 24 hour livestream tv show of one guy's life to a game streaming company.Remind me again, what are your timelines?I could keep going on this for a while. People? Fewer. Funding? Monolithic. Advice from the winners? HA.Apply these updates to our starting reference class success rate ofONE. IN. TWENTY.Now count the AI safety labs.Multiply by ~3. That is the roughly the number of people who are not the subject of this post. For all the rest of us, consider several criticisms and suggestions, which were not feasible to run by the subjects of this post before publication0. Nobody knows what they are fucking doing when founding and running an AI safety lab and everyone who says they do is lying to you.1. Nobody has ever seen an organization which has succeeded at this goal.2. Nobody has ever met the founder of such an organization, nor noted down their qualifications.3. If the quote at the top of this post doesn't evoke a visceral sense memory for you, consider whether you have an accurate mental picture of what it looks like and feels like to be succeeding at this kind of thing from the inside. Make sure you imagine having fully internalized that FAILURE IS YOUR FAULT and no one else's, and are defining success correctly.(I believe it should be "everyone doesn't die" rather than "be highly respected for your organization's contributions" or "avoid horribly embarrassing mistakes".) 4. If that last bit feels awful and stress inducing, I expect that is because it is. Even for and especially for the handfulls of people who are not the subjects of this post. So much so that I'm guessing ...]]> Fri, 16 Jun 2023 11:12:59 +0000 EA - Critiques of non-existent AI safety labs: Yours by Anneal ~any minor research breakthrough: HARD tech. Yet the vast majority of these Hard Tech companies are actually building on top of an academic field which basically has the science figured out. Ginkgo Bioworks did not need to figure out the principles of molecular biology, nor the tools and protocols of genetic engineering. They took the a decades old, well-developed paradigm, and worked within it to incrementally build something new. How does this look for AI safety?And how about timing. Y-combinator reference class companies take a long time to build. Growing headcount slowly, running lean: absolutely essential if you are stretching out your last funding round over 7 years to iterate your way from a 24 hour livestream tv show of one guy's life to a game streaming company.Remind me again, what are your timelines?I could keep going on this for a while. People? Fewer. Funding? Monolithic. Advice from the winners? HA.Apply these updates to our starting reference class success rate ofONE. IN. TWENTY.Now count the AI safety labs.Multiply by ~3. That is the roughly the number of people who are not the subject of this post. For all the rest of us, consider several criticisms and suggestions, which were not feasible to run by the subjects of this post before publication0. Nobody knows what they are fucking doing when founding and running an AI safety lab and everyone who says they do is lying to you.1. Nobody has ever seen an organization which has succeeded at this goal.2. Nobody has ever met the founder of such an organization, nor noted down their qualifications.3. If the quote at the top of this post doesn't evoke a visceral sense memory for you, consider whether you have an accurate mental picture of what it looks like and feels like to be succeeding at this kind of thing from the inside. Make sure you imagine having fully internalized that FAILURE IS YOUR FAULT and no one else's, and are defining success correctly.(I believe it should be "everyone doesn't die" rather than "be highly respected for your organization's contributions" or "avoid horribly embarrassing mistakes".) 4. If that last bit feels awful and stress inducing, I expect that is because it is. Even for and especially for the handfulls of people who are not the subjects of this post. So much so that I'm guessing ...]]> ~any minor research breakthrough: HARD tech. Yet the vast majority of these Hard Tech companies are actually building on top of an academic field which basically has the science figured out. Ginkgo Bioworks did not need to figure out the principles of molecular biology, nor the tools and protocols of genetic engineering. They took the a decades old, well-developed paradigm, and worked within it to incrementally build something new. How does this look for AI safety?And how about timing. Y-combinator reference class companies take a long time to build. Growing headcount slowly, running lean: absolutely essential if you are stretching out your last funding round over 7 years to iterate your way from a 24 hour livestream tv show of one guy's life to a game streaming company.Remind me again, what are your timelines?I could keep going on this for a while. People? Fewer. Funding? Monolithic. Advice from the winners? HA.Apply these updates to our starting reference class success rate ofONE. IN. TWENTY.Now count the AI safety labs.Multiply by ~3. That is the roughly the number of people who are not the subject of this post. For all the rest of us, consider several criticisms and suggestions, which were not feasible to run by the subjects of this post before publication0. Nobody knows what they are fucking doing when founding and running an AI safety lab and everyone who says they do is lying to you.1. Nobody has ever seen an organization which has succeeded at this goal.2. Nobody has ever met the founder of such an organization, nor noted down their qualifications.3. If the quote at the top of this post doesn't evoke a visceral sense memory for you, consider whether you have an accurate mental picture of what it looks like and feels like to be succeeding at this kind of thing from the inside. Make sure you imagine having fully internalized that FAILURE IS YOUR FAULT and no one else's, and are defining success correctly.(I believe it should be "everyone doesn't die" rather than "be highly respected for your organization's contributions" or "avoid horribly embarrassing mistakes".) 4. If that last bit feels awful and stress inducing, I expect that is because it is. Even for and especially for the handfulls of people who are not the subjects of this post. So much so that I'm guessing ...]]> Anneal https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:30 no full 6306
wxxoRHmisojF6Y2qD EA - UN Secretary-General recognises existential threat from AI by Greg Colbourn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: UN Secretary-General recognises existential threat from AI, published by Greg Colbourn on June 16, 2023 on The Effective Altruism Forum.At the Digital Platforms policy brief press conference on Monday, UN Secretary-General António Guterres started his speech with:Distinguish members of our press corps. New technology is moving at warp speed, and so are the threats that come with it. Alarm bells over the latest form of Artificial Intelligence - generative AI - are deafening. And they are loudest from the developers who designed it. These scientists and experts have called on the World to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war. We must take those warnings seriously. Our proposal, Global Digital Compact New Agenda for Peace and Accord on the Global Governance of AI, will offer multilateral solutions based on human rights.(Video here.)Guterres went on to discuss current damage from digital technology ("but the advent of generative AI must not distract us from the damaged digital technology is already doing to our world").The opening mention of existential threat from AI is a very welcome development in terms of the possibility of global coordination on the issue.It seems likely that the CAIS Statement on AI Risk - "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." - was instrumental in prompting this, given the mention of nuclear war.In terms of extinction risk, remember that the right to life is first and foremost!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Greg_Colbourn https://forum.effectivealtruism.org/posts/wxxoRHmisojF6Y2qD/un-secretary-general-recognises-existential-threat-from-ai Fri, 16 Jun 2023 06:10:47 +0000 EA - UN Secretary-General recognises existential threat from AI by Greg Colbourn Greg_Colbourn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:38 no full 6307
ctPrrzFnXGyWrmK3w EA - EU AI Act passed vote in Plenary meeting by Ariel G. Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EU AI Act passed vote in Plenary meeting, published by Ariel G. on June 16, 2023 on The Effective Altruism Forum.Mainly discussed in the linked article - keeping the post brief.The newest EU AI Act parliament version passed an important vote, despite recent political uncertainty.I found it fascinating to watch the the Plenary session (from June 13th, the vote was on the 14th), where the Act was discussed by various EU parties. A few things that stood out to me:I was surprised that many EU country representatives mentioned the Open Letters and Existential Risk as a real concern, even though the EU AI Act was not originally intended to address it (though, it now has GPAI/foundation model bits added). Transparency and Fairness took a back seat, to some extent.Real-time Biometric monitoring was a big debate topic - whether to giving an exemption for law enforcement or not, for national security. Currently it looks like it will not be allowed, other than post-incident with special approval. This may be a useful lever to keep in mind for policy workOthers that watched the stream, feel free to mention insights in the comments.Linked here (relevant timestamp 12:39 - 14:33)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Ariel G. https://forum.effectivealtruism.org/posts/ctPrrzFnXGyWrmK3w/eu-ai-act-passed-vote-in-plenary-meeting Fri, 16 Jun 2023 08:40:16 +0000 EA - EU AI Act passed vote in Plenary meeting by Ariel G. Ariel G. https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:22 no full 6308
XTBGAWAXR25atu39P EA - Third Wave Effective Altruism by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Third Wave Effective Altruism, published by Ben West on June 17, 2023 on The Effective Altruism Forum.This is a frame that I have found useful and I'm sharing in case others find it useful.EA has arguably gone through several waves:Waves of EA (highly simplified model — see caveats below) First waveSecond waveThird waveTime period2010-20172017-20232023-??Primary constraintMoneyTalent ???Primary call to actionDonations to effective charitiesCareer changePrimary target audienceMiddle-upper-class peopleUniversity students and early career professionalsFlagship cause areaGlobal health and developmentLongtermismMajor hubsOxford > SF Bay > Berlin (?)SF Bay > Oxford > London > DC > BostonThe boundaries between waves are obviously vague and somewhat arbitrary. This table is also overly simplistic – I first got involved in EA through animal welfare, which is not listed at all on this table, for example. But I think this is a decent first approximation.It’s not entirely clear to me whether we are actually in a third wave. People often overestimate the extent to which their local circumstances are unique. But there are two main things which make me think that we have a “wave” which is distinct from, say, mid 2022:Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturnsAI safety becoming (relatively) mainstreamIf I had to choose an arbitrary date for the beginning of the third wave, I might choose March 22, 2023, when the FLI open letter on pausing AI experiments was published.It remains to be seen if public concern about AI is sustained – Superintelligence was endorsed by a bunch of fancy people when it first came out, but they mostly faded away. If it is sustained though, I think EA will be in a qualitatively new regime: one where AI safety worries are common, AI safety is getting a lot of coverage, people with expertise in AI safety might get into important rooms, and where the field might be less neglected.Third wave EA: what are some possibilities?Here are a few random ideas; I am not intending to imply that these are the most likely scenarios.Example future scenarioPolitics and Civil SocietyForefront of weirdnessReturn to non-AI causesDescription of the possible “third wave” — chosen to illustrate the breadth of possibilitiesThere is substantial public appetite to heavily regulate AI. The technical challenges end up being relatively easy. The archetypal EA project is running a grassroots petition for a moratorium on AI.AI safety becomes mainstream and "spins out" of EA. EA stays at the forefront of weirdness and the people who were previously interested in AI safety turn their focus to digital sentience, acausal moral trade, and other issues that still fall outside the Overton window.AI safety becomes mainstream and "spins out" of EA.AI safety advocates leave EA, and vibes shift back to “first wave” EA.Primary constraintPolitical willResearchMoneyPrimary call to actionVoting/advocacyResearchDonationsPrimary target audienceVoters in US/EUFuture researchers (university students)Middle-upper class peopleFlagship cause areaAI regulationDigital sentienceAnimal welfareWhere do we go from here?I’m interested in organizing more projects like EA Strategy Fortnight. I don’t feel very confident about what third wave EA should look like, or even that there will be a third wave, but it does seem worth spending time discussing the possibilities.I'm particularly interested in claims that there isn't, or shouldn't be, a third wave of EA (i.e. please feel free to disagree with the whole model, argue that we’re still in wave 2, argue we might be moving towards wave 3 but shouldn’t be, etc.).I’m also interested in generating cruxes and forecasts about those cruxes. A lot of these are about the counterfactual v...

]]>
Ben_West https://forum.effectivealtruism.org/posts/XTBGAWAXR25atu39P/third-wave-effective-altruism SF Bay > Berlin (?)SF Bay > Oxford > London > DC > BostonThe boundaries between waves are obviously vague and somewhat arbitrary. This table is also overly simplistic – I first got involved in EA through animal welfare, which is not listed at all on this table, for example. But I think this is a decent first approximation.It’s not entirely clear to me whether we are actually in a third wave. People often overestimate the extent to which their local circumstances are unique. But there are two main things which make me think that we have a “wave” which is distinct from, say, mid 2022:Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturnsAI safety becoming (relatively) mainstreamIf I had to choose an arbitrary date for the beginning of the third wave, I might choose March 22, 2023, when the FLI open letter on pausing AI experiments was published.It remains to be seen if public concern about AI is sustained – Superintelligence was endorsed by a bunch of fancy people when it first came out, but they mostly faded away. If it is sustained though, I think EA will be in a qualitatively new regime: one where AI safety worries are common, AI safety is getting a lot of coverage, people with expertise in AI safety might get into important rooms, and where the field might be less neglected.Third wave EA: what are some possibilities?Here are a few random ideas; I am not intending to imply that these are the most likely scenarios.Example future scenarioPolitics and Civil SocietyForefront of weirdnessReturn to non-AI causesDescription of the possible “third wave” — chosen to illustrate the breadth of possibilitiesThere is substantial public appetite to heavily regulate AI. The technical challenges end up being relatively easy. The archetypal EA project is running a grassroots petition for a moratorium on AI.AI safety becomes mainstream and "spins out" of EA. EA stays at the forefront of weirdness and the people who were previously interested in AI safety turn their focus to digital sentience, acausal moral trade, and other issues that still fall outside the Overton window.AI safety becomes mainstream and "spins out" of EA.AI safety advocates leave EA, and vibes shift back to “first wave” EA.Primary constraintPolitical willResearchMoneyPrimary call to actionVoting/advocacyResearchDonationsPrimary target audienceVoters in US/EUFuture researchers (university students)Middle-upper class peopleFlagship cause areaAI regulationDigital sentienceAnimal welfareWhere do we go from here?I’m interested in organizing more projects like EA Strategy Fortnight. I don’t feel very confident about what third wave EA should look like, or even that there will be a third wave, but it does seem worth spending time discussing the possibilities.I'm particularly interested in claims that there isn't, or shouldn't be, a third wave of EA (i.e. please feel free to disagree with the whole model, argue that we’re still in wave 2, argue we might be moving towards wave 3 but shouldn’t be, etc.).I’m also interested in generating cruxes and forecasts about those cruxes. A lot of these are about the counterfactual v...]]> Sat, 17 Jun 2023 17:21:05 +0000 EA - Third Wave Effective Altruism by Ben West SF Bay > Berlin (?)SF Bay > Oxford > London > DC > BostonThe boundaries between waves are obviously vague and somewhat arbitrary. This table is also overly simplistic – I first got involved in EA through animal welfare, which is not listed at all on this table, for example. But I think this is a decent first approximation.It’s not entirely clear to me whether we are actually in a third wave. People often overestimate the extent to which their local circumstances are unique. But there are two main things which make me think that we have a “wave” which is distinct from, say, mid 2022:Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturnsAI safety becoming (relatively) mainstreamIf I had to choose an arbitrary date for the beginning of the third wave, I might choose March 22, 2023, when the FLI open letter on pausing AI experiments was published.It remains to be seen if public concern about AI is sustained – Superintelligence was endorsed by a bunch of fancy people when it first came out, but they mostly faded away. If it is sustained though, I think EA will be in a qualitatively new regime: one where AI safety worries are common, AI safety is getting a lot of coverage, people with expertise in AI safety might get into important rooms, and where the field might be less neglected.Third wave EA: what are some possibilities?Here are a few random ideas; I am not intending to imply that these are the most likely scenarios.Example future scenarioPolitics and Civil SocietyForefront of weirdnessReturn to non-AI causesDescription of the possible “third wave” — chosen to illustrate the breadth of possibilitiesThere is substantial public appetite to heavily regulate AI. The technical challenges end up being relatively easy. The archetypal EA project is running a grassroots petition for a moratorium on AI.AI safety becomes mainstream and "spins out" of EA. EA stays at the forefront of weirdness and the people who were previously interested in AI safety turn their focus to digital sentience, acausal moral trade, and other issues that still fall outside the Overton window.AI safety becomes mainstream and "spins out" of EA.AI safety advocates leave EA, and vibes shift back to “first wave” EA.Primary constraintPolitical willResearchMoneyPrimary call to actionVoting/advocacyResearchDonationsPrimary target audienceVoters in US/EUFuture researchers (university students)Middle-upper class peopleFlagship cause areaAI regulationDigital sentienceAnimal welfareWhere do we go from here?I’m interested in organizing more projects like EA Strategy Fortnight. I don’t feel very confident about what third wave EA should look like, or even that there will be a third wave, but it does seem worth spending time discussing the possibilities.I'm particularly interested in claims that there isn't, or shouldn't be, a third wave of EA (i.e. please feel free to disagree with the whole model, argue that we’re still in wave 2, argue we might be moving towards wave 3 but shouldn’t be, etc.).I’m also interested in generating cruxes and forecasts about those cruxes. A lot of these are about the counterfactual v...]]> SF Bay > Berlin (?)SF Bay > Oxford > London > DC > BostonThe boundaries between waves are obviously vague and somewhat arbitrary. This table is also overly simplistic – I first got involved in EA through animal welfare, which is not listed at all on this table, for example. But I think this is a decent first approximation.It’s not entirely clear to me whether we are actually in a third wave. People often overestimate the extent to which their local circumstances are unique. But there are two main things which make me think that we have a “wave” which is distinct from, say, mid 2022:Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturnsAI safety becoming (relatively) mainstreamIf I had to choose an arbitrary date for the beginning of the third wave, I might choose March 22, 2023, when the FLI open letter on pausing AI experiments was published.It remains to be seen if public concern about AI is sustained – Superintelligence was endorsed by a bunch of fancy people when it first came out, but they mostly faded away. If it is sustained though, I think EA will be in a qualitatively new regime: one where AI safety worries are common, AI safety is getting a lot of coverage, people with expertise in AI safety might get into important rooms, and where the field might be less neglected.Third wave EA: what are some possibilities?Here are a few random ideas; I am not intending to imply that these are the most likely scenarios.Example future scenarioPolitics and Civil SocietyForefront of weirdnessReturn to non-AI causesDescription of the possible “third wave” — chosen to illustrate the breadth of possibilitiesThere is substantial public appetite to heavily regulate AI. The technical challenges end up being relatively easy. The archetypal EA project is running a grassroots petition for a moratorium on AI.AI safety becomes mainstream and "spins out" of EA. EA stays at the forefront of weirdness and the people who were previously interested in AI safety turn their focus to digital sentience, acausal moral trade, and other issues that still fall outside the Overton window.AI safety becomes mainstream and "spins out" of EA.AI safety advocates leave EA, and vibes shift back to “first wave” EA.Primary constraintPolitical willResearchMoneyPrimary call to actionVoting/advocacyResearchDonationsPrimary target audienceVoters in US/EUFuture researchers (university students)Middle-upper class peopleFlagship cause areaAI regulationDigital sentienceAnimal welfareWhere do we go from here?I’m interested in organizing more projects like EA Strategy Fortnight. I don’t feel very confident about what third wave EA should look like, or even that there will be a third wave, but it does seem worth spending time discussing the possibilities.I'm particularly interested in claims that there isn't, or shouldn't be, a third wave of EA (i.e. please feel free to disagree with the whole model, argue that we’re still in wave 2, argue we might be moving towards wave 3 but shouldn’t be, etc.).I’m also interested in generating cruxes and forecasts about those cruxes. A lot of these are about the counterfactual v...]]> Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:41 no full 6314
73mAv8m3PjsXzJ4Ad EA - Update on task force on reforms at EA organizations by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on task force on reforms at EA organizations, published by Julia Wise on June 17, 2023 on The Effective Altruism Forum.This post is an update on the task force on reforms EA organizations might make.Currently the people on the task force are me (employee at Centre for Effective Altruism, board member at GiveWell), Ozzie Gooen (president at Quantified Uncertainty Research Institute, board member at Rethink Charity, former board member at Rethink Priorities), and Sam Donald (strategy fellow at Open Philanthropy, former staff at COVID taskforce at UK Cabinet Office, former staff at McKinsey). We also have a discussion space with a larger group of about a dozen people (experts in related fields, EA organization staff, and community members). Currently the total time on this project is about 1 full-time equivalent, mostly from Julia.We realize a small group of people isn’t going to reflect all the views or types of expertise useful for a project like this. Our goal is to draw on that expertise, often from people who don’t have time to participate in frequent meetings about EA reforms, and to synthesize views and practical information from a range of sources. If you have suggestions for people it would be useful for us to get input from (including yourself!) we're happy to hear ideas at this form.So far the process has included:Reading and cataloging the problems identified and possible solutions proposed in posts about institutional reform that have been written up on the Forum.Speaking to ~25 people about which areas they see as most important for possible reforms in EA, and what best practices they think EA should be adapting from other fields. We’re trying to speak with a mix of people with significant experience in EA institutions, and people with significant work history in non-EA institutions (nonprofits, finance, government, management consulting.)Researching existing whistleblowing platforms and lawsNext steps:Better defining possible reforms based on the ideas collected and discussed.Getting more advice from people with professional experience in those areas.Understanding the pros and cons of a possible changeUnderstanding what’s legally feasible (e.g. given how different countries regulate nonprofit boards)As we get closer to specific recommendations, discussing them with relevant staff at organizations, to learn more about barriers and feasibility of the possible changes.Spelling out concrete recommendations to organizations. We expect this might be in the form of 5- to 15-page reports, with different reports for different organizations.A further public update about the project, though this likely won’t include all the specifics of the recommendations made to organizations.Shapes that our recommendations might take:“Here’s a change we think organization X should make.” (Likely to focus on Open Philanthropy and Effective Ventures.)“Here’s a change we think any organization in situation Y should make — we think there are a dozen organizations in that situation.” (Likely to focus on basic governance practices for small organizations, like having a staff handbook if there isn’t one.)“Here’s a function/service that doesn’t currently exist in EA; we think it’s probably good for it to be created and funded.”After we have proposals clearly spelled out, we’ll present them to the various organizations. These will be recommendations, not requirements. We hope that doing this as a general effort across EA can save effort for organizations, rather than each of them doing this sort of project independently. But we expect organizations will think through the recommendations critically and will get independent advice as needed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Julia_Wise https://forum.effectivealtruism.org/posts/73mAv8m3PjsXzJ4Ad/update-on-task-force-on-reforms-at-ea-organizations Sat, 17 Jun 2023 15:26:48 +0000 EA - Update on task force on reforms at EA organizations by Julia Wise Julia_Wise https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:31 no full 6315
eoGrGYMhGYmTkkPSg EA - AMA: Ed Mathieu, Head of Data & Research at Our World in Data by EdMathieu Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Ed Mathieu, Head of Data & Research at Our World in Data, published by EdMathieu on June 17, 2023 on The Effective Altruism Forum.Hi, EAs! I'm Ed Mathieu, manager of a team of data scientists and researchers at Our World in Data (OWID), an online publication founded by Max Roser and based out of the University of Oxford.We aim to make the data and research on the world's largest problems accessible and understandable. You can learn more about our mission on our site.You’re welcome to ask me anything! I’ll start answering questions on Friday, 23 June.Feel free to ask anything you may want to know about our mission, work, articles, charts, or more meta-aspects like our team structure, the history of OWID, etc.Please post your questions as comments on this post. The earlier you share your questions, the higher the chances they'll reach the top!Please upvote questions you'd most like answered.I'll answer questions on Friday, 23 June. Questions posted after that are less likely to get answers.(This is an “AMA” — you can explore others here.)I joined OWID in 2020 and spent the first couple of years leading our work on the COVID-19 pandemic. Since then, my role has expanded to coordinating all the research & data work on our site.I previously worked as a data scientist at the University of Oxford in the departments of Population Health and Primary Care Health Sciences; and as a data science consultant in the private sector.For a (3.5-hour!) overview of my background, and the work of our team at OWID, you can listen to my interview with Fin Moorhouse and Luca Righetti on Hear This Idea. I also gave a talk at EA Global: London 22.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
EdMathieu https://forum.effectivealtruism.org/posts/eoGrGYMhGYmTkkPSg/ama-ed-mathieu-head-of-data-and-research-at-our-world-in Sat, 17 Jun 2023 15:10:27 +0000 EA - AMA: Ed Mathieu, Head of Data & Research at Our World in Data by EdMathieu EdMathieu https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:50 no full 6316
DpQFod5P9e5yJxeCP EA - SoGive rates Open-Phil-funded charity NTI “too rich” by Sanjay Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SoGive rates Open-Phil-funded charity NTI “too rich”, published by Sanjay on June 18, 2023 on The Effective Altruism Forum.Exec summaryUnder SoGive’s methodology, charities holding more than 1.5 years’ expenditure are typically rated “too rich”, in the absence of a strong reason to judge otherwise. (more)Our level of confidence in the appropriateness of this policy depends on fundamental ethical considerations, and could be “clearly (c.95%) very well justified” or “c.50% to c.90% confident in this policy, depending on the charity” (more)We understand that the Nuclear Threat Initiative (NTI) holds > 4 years of spend (c$85m), as at the most recently published Form 990, well in excess of our warning threshold. (more)We are now around 90% confident that NTI’s reserves are well in excess of our warning threshold, indeed >3x annual spend, although there are some caveats. (more)Our conversation with NTI about this provides little reason to believe that we should deviate from our default rating of “too rich”. (more)It is possible that NTI could show us forecasts of their future income and spend that might make us less likely to be concerned about the value of donations to NTI, although this seems unlikely since they have already indicated that they do not wish to share this. (more)We do not typically recommend that donors donate to NTI. However we do think it’s valuable for donors to communicate that they are interested in supporting their work, but are avoiding donating to NTI because of their high reserves. (more)Although this post is primarily to help donors decide whether to donate to NTI, readers may find it interesting for understanding SoGive's approach to charities which are too rich, and how this interacts with different ethical systems.We thank NTI for agreeing to discuss this with us knowing that there was a good chance that we might publish something on the back of the discussion. We showed them a draft of this post before publishing; they indicated that they disagree with the premise of the piece, but declined to indicate what specifically they disagreed with.0. Intent of this postAlthough this post highlights the fact that NTI has received funding from Open Philanthropy (Open Phil), the aim is not to put Open Philanthropy on the spot or demand any response from them.Rather, we have argued that it is often a good idea for donors to “coattail” (i.e. copy) donations made by Open Phil. For donors doing this, including donors supported by SoGive, we think it’s useful to know which Open Phil grantees we might give lower or higher priority to.1. Background on SoGive’s methodology for assessing reservesThe SoGive ratings scale has a category called “too rich”. It is used for charities which we deem to have a large enough amount of money that it no longer makes sense for donors to provide them with funds. We set this threshold at 18 months of spend (i.e. if the amount of unrestricted reserves is one and a half times as big as its annual spend then we typically deem the charity “too rich”). To be clear, this allows the charity carte blanche to hold as much money as it likes as long as it indicates that it has a non-binding plan for that money.So, having generously ignored the designated reserves, we then notionally apply the (normally severe) stress of all the income disappearing overnight. Our threshold considers the scenario where the charity has so much reserves that it could go for one and a half years without even having to take management actions such as downsizing its activities.In this scenario, we think it is likely better for donors to send their donations elsewhere, and allow the charity to use up its reserves.Originally we considered a different, possibly more lenient policy. We considered that charities should be considered too rich if they...

]]>
Sanjay https://forum.effectivealtruism.org/posts/DpQFod5P9e5yJxeCP/sogive-rates-open-phil-funded-charity-nti-too-rich 4 years of spend (c$85m), as at the most recently published Form 990, well in excess of our warning threshold. (more)We are now around 90% confident that NTI’s reserves are well in excess of our warning threshold, indeed >3x annual spend, although there are some caveats. (more)Our conversation with NTI about this provides little reason to believe that we should deviate from our default rating of “too rich”. (more)It is possible that NTI could show us forecasts of their future income and spend that might make us less likely to be concerned about the value of donations to NTI, although this seems unlikely since they have already indicated that they do not wish to share this. (more)We do not typically recommend that donors donate to NTI. However we do think it’s valuable for donors to communicate that they are interested in supporting their work, but are avoiding donating to NTI because of their high reserves. (more)Although this post is primarily to help donors decide whether to donate to NTI, readers may find it interesting for understanding SoGive's approach to charities which are too rich, and how this interacts with different ethical systems.We thank NTI for agreeing to discuss this with us knowing that there was a good chance that we might publish something on the back of the discussion. We showed them a draft of this post before publishing; they indicated that they disagree with the premise of the piece, but declined to indicate what specifically they disagreed with.0. Intent of this postAlthough this post highlights the fact that NTI has received funding from Open Philanthropy (Open Phil), the aim is not to put Open Philanthropy on the spot or demand any response from them.Rather, we have argued that it is often a good idea for donors to “coattail” (i.e. copy) donations made by Open Phil. For donors doing this, including donors supported by SoGive, we think it’s useful to know which Open Phil grantees we might give lower or higher priority to.1. Background on SoGive’s methodology for assessing reservesThe SoGive ratings scale has a category called “too rich”. It is used for charities which we deem to have a large enough amount of money that it no longer makes sense for donors to provide them with funds. We set this threshold at 18 months of spend (i.e. if the amount of unrestricted reserves is one and a half times as big as its annual spend then we typically deem the charity “too rich”). To be clear, this allows the charity carte blanche to hold as much money as it likes as long as it indicates that it has a non-binding plan for that money.So, having generously ignored the designated reserves, we then notionally apply the (normally severe) stress of all the income disappearing overnight. Our threshold considers the scenario where the charity has so much reserves that it could go for one and a half years without even having to take management actions such as downsizing its activities.In this scenario, we think it is likely better for donors to send their donations elsewhere, and allow the charity to use up its reserves.Originally we considered a different, possibly more lenient policy. We considered that charities should be considered too rich if they...]]> Sun, 18 Jun 2023 20:07:59 +0000 EA - SoGive rates Open-Phil-funded charity NTI “too rich” by Sanjay 4 years of spend (c$85m), as at the most recently published Form 990, well in excess of our warning threshold. (more)We are now around 90% confident that NTI’s reserves are well in excess of our warning threshold, indeed >3x annual spend, although there are some caveats. (more)Our conversation with NTI about this provides little reason to believe that we should deviate from our default rating of “too rich”. (more)It is possible that NTI could show us forecasts of their future income and spend that might make us less likely to be concerned about the value of donations to NTI, although this seems unlikely since they have already indicated that they do not wish to share this. (more)We do not typically recommend that donors donate to NTI. However we do think it’s valuable for donors to communicate that they are interested in supporting their work, but are avoiding donating to NTI because of their high reserves. (more)Although this post is primarily to help donors decide whether to donate to NTI, readers may find it interesting for understanding SoGive's approach to charities which are too rich, and how this interacts with different ethical systems.We thank NTI for agreeing to discuss this with us knowing that there was a good chance that we might publish something on the back of the discussion. We showed them a draft of this post before publishing; they indicated that they disagree with the premise of the piece, but declined to indicate what specifically they disagreed with.0. Intent of this postAlthough this post highlights the fact that NTI has received funding from Open Philanthropy (Open Phil), the aim is not to put Open Philanthropy on the spot or demand any response from them.Rather, we have argued that it is often a good idea for donors to “coattail” (i.e. copy) donations made by Open Phil. For donors doing this, including donors supported by SoGive, we think it’s useful to know which Open Phil grantees we might give lower or higher priority to.1. Background on SoGive’s methodology for assessing reservesThe SoGive ratings scale has a category called “too rich”. It is used for charities which we deem to have a large enough amount of money that it no longer makes sense for donors to provide them with funds. We set this threshold at 18 months of spend (i.e. if the amount of unrestricted reserves is one and a half times as big as its annual spend then we typically deem the charity “too rich”). To be clear, this allows the charity carte blanche to hold as much money as it likes as long as it indicates that it has a non-binding plan for that money.So, having generously ignored the designated reserves, we then notionally apply the (normally severe) stress of all the income disappearing overnight. Our threshold considers the scenario where the charity has so much reserves that it could go for one and a half years without even having to take management actions such as downsizing its activities.In this scenario, we think it is likely better for donors to send their donations elsewhere, and allow the charity to use up its reserves.Originally we considered a different, possibly more lenient policy. We considered that charities should be considered too rich if they...]]> 4 years of spend (c$85m), as at the most recently published Form 990, well in excess of our warning threshold. (more)We are now around 90% confident that NTI’s reserves are well in excess of our warning threshold, indeed >3x annual spend, although there are some caveats. (more)Our conversation with NTI about this provides little reason to believe that we should deviate from our default rating of “too rich”. (more)It is possible that NTI could show us forecasts of their future income and spend that might make us less likely to be concerned about the value of donations to NTI, although this seems unlikely since they have already indicated that they do not wish to share this. (more)We do not typically recommend that donors donate to NTI. However we do think it’s valuable for donors to communicate that they are interested in supporting their work, but are avoiding donating to NTI because of their high reserves. (more)Although this post is primarily to help donors decide whether to donate to NTI, readers may find it interesting for understanding SoGive's approach to charities which are too rich, and how this interacts with different ethical systems.We thank NTI for agreeing to discuss this with us knowing that there was a good chance that we might publish something on the back of the discussion. We showed them a draft of this post before publishing; they indicated that they disagree with the premise of the piece, but declined to indicate what specifically they disagreed with.0. Intent of this postAlthough this post highlights the fact that NTI has received funding from Open Philanthropy (Open Phil), the aim is not to put Open Philanthropy on the spot or demand any response from them.Rather, we have argued that it is often a good idea for donors to “coattail” (i.e. copy) donations made by Open Phil. For donors doing this, including donors supported by SoGive, we think it’s useful to know which Open Phil grantees we might give lower or higher priority to.1. Background on SoGive’s methodology for assessing reservesThe SoGive ratings scale has a category called “too rich”. It is used for charities which we deem to have a large enough amount of money that it no longer makes sense for donors to provide them with funds. We set this threshold at 18 months of spend (i.e. if the amount of unrestricted reserves is one and a half times as big as its annual spend then we typically deem the charity “too rich”). To be clear, this allows the charity carte blanche to hold as much money as it likes as long as it indicates that it has a non-binding plan for that money.So, having generously ignored the designated reserves, we then notionally apply the (normally severe) stress of all the income disappearing overnight. Our threshold considers the scenario where the charity has so much reserves that it could go for one and a half years without even having to take management actions such as downsizing its activities.In this scenario, we think it is likely better for donors to send their donations elsewhere, and allow the charity to use up its reserves.Originally we considered a different, possibly more lenient policy. We considered that charities should be considered too rich if they...]]> Sanjay https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 24:53 no full 6322
czsP5iWmz3wLtz7LT EA - Question and Answer-based EA Communities by Joey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Question and Answer-based EA Communities, published by Joey on June 18, 2023 on The Effective Altruism Forum.The EA community has expanded to encompass a broad spectrum of interests, making its identity and definition a hotly debated topic. In my view, the community's current diversity could easily support multiple distinct communities, and if we were building a movement from scratch, it would likely look different from the current EA movement.Defining sub-communities within the EA movement can be approached in numerous ways. One proposed division that I believe captures much of what people appreciate about the EA community, is as follows:Question-based communitiesAn Effective Giving CommunityAn Impactful Career CommunityAnswer-based communitiesAn AI X-Risk CommunityAn Effective Animal Advocacy CommunityQuestion-based communitiesAn Effective Giving CommunityThe concept of effective giving is where EA originated and remains a significant component of the community. Notable organizations such as GWWC, Effektiv Spenden, One for the World, Founders Pledge, and others, share a common mission and practical outcomes. The primary metric for this community is directing funds towards highly impactful areas. GiveWell, for instance, is perhaps the first and most recognized organization within this effective giving community outside the EA movement. This community benefits from its diversity and plurality, as many people could, for example, take the 10% pledge, and an even larger number could enhance their giving effectiveness using EA principles. Key concepts for this community could include determining the best charities to donate to, identifying the most effective charity evaluators, and deciding how much one should donate. This, in many ways, echoes the fundamentals of the EA 1.0 community.An Impactful Career CommunityIn addition to funding, individuals can contribute to the world through their careers. Much like the effective giving community, there's the question of how to maximize the impact of one's career across multiple cause areas. Organizations such as Probably Good, High Impact Professionals, or Charity Entrepreneurship focus on this area (I intentionally exclude career-focused organizations with a narrow cause area focus, like 80,000 Hours or Animal Advocacy Careers). The objective of this community would be related to career changes and enhancing understanding of the most impactful career paths. Although this is a broadly inclusive community benefiting from cause plurality, it's likely less extensive than the effective giving community, as a smaller percentage of the population will prioritize impact when considering a career switch.Relevant topics for this community could include identifying high absorbency, impactful careers, assessing the most impactful paths for individuals with specific value or skill sets, and determining underrated careers.Answer-based communities, e.g., AI X-Risk CommunityThe second community category that is a bit different from these others is anwer-based communities. I think there are two somewhat distinctive answer-based communities in EA: AI and animals. I think AI X-risk is a better example as it's more often mixed with the other above two communities and has significantly grown as a unique area within EA. This community consists of meta-organizations like Longview, Effective Giving and 80,000 Hours as well as the organizations working directly on the problem. It has begun to hold separate forums, conferences, and events. Its shared goal is to mitigate existential risks from AI, a specific objective that doesn't necessarily require members to embrace effective giving or prioritize impact in their careers. However, it does require specific values and epistemic assumptions, leading to this cause being prioritized over ot...

]]>
Joey https://forum.effectivealtruism.org/posts/czsP5iWmz3wLtz7LT/question-and-answer-based-ea-communities Sun, 18 Jun 2023 15:28:03 +0000 EA - Question and Answer-based EA Communities by Joey Joey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:09 no full 6323
T3P4oX6F8tMh4h55s EA - The Meat Eater Problem by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Meat Eater Problem, published by Vasco Grilo on June 18, 2023 on The Effective Altruism Forum.I am linkposting this open access article by Michael Plant about the meat eater problem. Some excerpt and quick thoughts are below.AbstractHere are two commonly held moral views. First, we must save strangers’ lives, at least if we can do so easily: you would be required to rescue a child drowning in a pond even if it will ruin your expensive suit. Second, it is wrong to eat meat because of the suffering caused to animals in factory farms. Many accept both simultaneously—Peter Singer is the pre-eminent example. I point out that these two beliefs are in a sharp and seemingly unrecognised tension and may even be incompatible. It seems universally accepted that doing or allowing a harm is permissible—and may even be required—when it is the lesser evil. I argue that, if meat eating is wrong on animal suffering grounds then, once we consider how much suffering might occur, it starts to seem plausible that saving strangers would be the greater evil than not rescuing them and is, therefore, not required after all. Given the uncertainties and subjective assessments here, reasonable people could substantially disagree.The surprising result is that a moral principle widely considered to be obviously true—we must rescue others—is not, on further reflection, obviously true and would be defensibly rejected by some. Some potential implications are discussed.1. IntroductionIt is widely believed that we, as members of the public, have a Duty of Easy Rescue to one another.Duty of Easy Rescue: We are required to save lives in rescue cases, one-off instances where we can physically save a stranger [not a friend or really good person] at trivial cost to ourselves.1To illustrate this, consider the following, familiar case from Singer2:Shallow Pond: You are walking past a shallow pond and see a drowning child. You can easily rescue the child, but doing so will ruin the expensive new suit you are wearing.Intuitively, we are required to save the child. This is because, as Peter Singer explains: “[it] will mean getting my clothes muddy, but this is insignificant, while the death of the child would presumably be a very bad thing.”3It is also widely believed that it is wrong to be a meat eater, someone who regularly consumes animal products produced from factory farms.4 This is on the grounds that this consumption requires creating animals who, due to the conditions in factory farms, live overall bad lives.5This paper argues these two beliefs are, in reality, in substantial tension and may well be incompatible, once additional plausible empirical and normative considerations are accounted for. Further, if they are incompatible, we must abandon the notion that there is a Duty of Easy Rescue.Here, in brief, is the argument. We accept that doing (or allowing) harm is permissible—and may be required—when it is the lesser evil; to put the same thing differently, we are not required to do (or allow) the greater evil. Therefore, we wouldn’t be required to save lives, even if we could do so easily, if that would be the greater evil. I argue that, if we accept that meat eating is wrong (on animal suffering grounds) then, once we look at the details, it starts to look plausible that meat eating causes so much suffering that saving the lives of strangers would be the greater evil (compared to not saving them) and would, therefore, not be required. Simply, accounting for the existing concern for animal welfare reduces, and may remove, the obligation to rescue others.[...] The basic argument is this: for each year of meat eating by a human, that creates about five years of chicken life. So, if we consider just that, and think that those animals have lives which are nearly as bad as human lives ar...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/T3P4oX6F8tMh4h55s/the-meat-eater-problem Sun, 18 Jun 2023 02:40:02 +0000 EA - The Meat Eater Problem by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:06 no full 6324
SZJBE3fuk2majqwJQ EA - Principles for AI Welfare Research by jeffsebo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Principles for AI Welfare Research, published by jeffsebo on June 19, 2023 on The Effective Altruism Forum.Tl;dr: This post, which is part of the EA Strategy Fortnight series, summarizes some of my current views about the importance of AI welfare, priorities for AI welfare research, and principles for AI welfare research.1. IntroductionAs humans start to take seriously the prospect of AI consciousness, sentience, and sapience, we also need to take seriously the prospect of AI welfare. That is, we need to take seriously the prospect that AI systems can have positive or negative states like pleasure, pain, happiness, and suffering, and that if they do, then these states can be good or bad for them.A world that includes the prospect of AI welfare is a world that requires the development of AI welfare research. Researchers need to examine whether and to what extent AI systems might have the capacity for welfare. And to the extent that they might, researchers need to examine what might be good or bad for AI systems and what follows for our actions and policies.The bad news is that AI welfare research will be difficult. Many researchers are likely to be skeptical of this topic at first. And even insofar as we take the topic seriously, it will be difficult for us to know what, if anything, it might be like to be an AI system. After all, the only mind that we can directly access is our own, and so our ability to study other minds is limited at best.The good news is that we have a head start. Researchers have spent the past half century making steady progress in animal welfare research. And while there are many potentially relevant differences between animals and AI systems, there are also many potentially relevant similarities – enough for it to be useful for us to look to animal welfare research for guidance.In Fall 2022, we launched the NYU Mind, Ethics, and Policy Program, which examines the nature and intrinsic value of nonhuman minds, with special focus on invertebrates and AI systems. In this post, I summarize some of my current views about the importance of AI welfare, priorities for AI welfare research, and principles for AI welfare research.I want to emphasize that this post discusses these issues in a selective and general way. A comprehensive treatment of these issues would need to address many more topics in much more detail. But I hope that this discussion can be a useful starting point for researchers who want to think more deeply about what might be good or bad for AI systems in the future.I also want to emphasize that this post expresses my current, tentative views about this topic. It might not reflect the views of other people at the NYU Mind, Ethics, and Policy Program or of other experts in effective altruism, global priorities research, and other relevant research, advocacy, or policy communities. It might not even reflect my own views a year from now.Finally, I want to emphasize that AI welfare is only one of many topics that merit more attention right now. Many other topics merit more attention too, and this post makes no specific claims about relative priorities. I simply wish to claim that AI welfare research should be among our priorities, and to suggest how we can study and promote AI welfare in a productive way.2. Why AI welfare mattersWe can use the standard EA scale-neglectedness-tractability framework to see why AI welfare matters. The general idea is that there could be many more digital minds than biological minds in the future, humanity is currently considering digital minds much less than biological minds, and humanity might be able to take steps to treat both kinds of minds well.First, AI welfare is potentially an extremely large-scale issue. In the same way that the invertebrate population is much larger than the vertebrate p...

]]>
jeffsebo https://forum.effectivealtruism.org/posts/SZJBE3fuk2majqwJQ/principles-for-ai-welfare-research Mon, 19 Jun 2023 21:32:22 +0000 EA - Principles for AI Welfare Research by jeffsebo jeffsebo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 22:07 no full 6330
WMdEJjLAHmdwyA5Wm EA - We can all help solve funding constraints. What stops us? by Luke Freeman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We can all help solve funding constraints. What stops us?, published by Luke Freeman on June 19, 2023 on The Effective Altruism Forum.This post is a personal reflection that follows my journey to effective altruism, my experiences within it, the concerns I've developed along the way and my hopes for addressing them. It culminates in my views on funding constraints — the role we can all play in solving them and a key question I have for you all: What stops us? (Please let me know in the comments).My journeyWhile this starts with a reflection on my personal journey, I suspect it might feel familiar, it might strike a chord, at times it might rhyme with yours.I was about eight years old when I was first confronted with the tragic reality that an overwhelming number of children my age were suffering and dying from preventable diseases and unjust economic conditions.It broke my heart.I knew that I had done nothing to deserve my incredibly privileged position of being born healthy to a loving, stable, middle-income family in Australia (a country with one of the highest standards of living).Throughout my early years, I took many opportunities to do what I could to right this wrong. In school, that meant participating in fundraisers and advocacy. As a young professional, that meant living frugally but still giving a relatively meagre amount to help others. When I got my first stable job, I decided it was time to give 10% to help others... But when I calculated that that would be $5,000, this commitment began to feel like a pretty big deal. I wasn't going to back down, but I wanted to be more confident that it'd actually result in something good. I felt a responsibility to donate wisely.Some Googling quickly led me to discover Giving What We Can, GiveWell, and Julia Wise's blog Giving Gladly. From this first introduction to what would soon be known as the effective altruism (EA) community, I found the information I needed to help guide me, and the inspiration I needed to help me follow through.I also took several opportunities to pursue a more impact-oriented career, and even tried getting involved in politics. These attempts had varying success, but that was okay: I had one constant opportunity to help others by giving.Around this time, the EA community started expanding their lines of reasoning beyond effective giving advice to other areas like careers and advocacy. I was thrilled to see this. We all have an opportunity to use various resources to make a dent in the world's problems, and the same community that had made good progress on philanthropy seemed to me well-positioned to make progress on other fronts too.By 2016, effective altruism was well and truly “a thing” and I discovered that there was an EA group and conference near me. So, I ventured out to actually meet some of these "effective altruism" people in person.It hit me: I'd finally found "my people."These were people who actually cared enough to put their money where their mouths were, to use the best tools they could find to make the biggest possible difference, and to advocate for others to join them. None of these things were easy, but these people really owned the stakes and did the work.I admired the integrity and true altruism that I found. It motivated me to do better.How I saw effective altruism changeAs time went on, however, I noticed some changes that concerned me. The EA community’s expanded focus started to feel less like a "yes, and" message — supporting both effective giving and pursuing other effective paths to impact — and more like a "no, instead" message: giving began to feel a bit passé within the community.Above: My response to the shift away from effective givingIt started slow, but the change became overwhelming:2015: 80,000 Hours started to advocate to focus on talent...

]]>
Luke Freeman https://forum.effectivealtruism.org/posts/WMdEJjLAHmdwyA5Wm/we-can-all-help-solve-funding-constraints-what-stops-us Mon, 19 Jun 2023 00:37:04 +0000 EA - We can all help solve funding constraints. What stops us? by Luke Freeman Luke Freeman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:38 no full 6331
ZWjDkENuFohPShTyc EA - My lab's small AI safety agenda by Jobst Heitzig (vodle.it) Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My lab's small AI safety agenda, published by Jobst Heitzig (vodle.it) on June 18, 2023 on The Effective Altruism Forum.My lab has started devoting some resources to AI safety work. As a transparency measure and to reach out, I here describe our approach.Overall ApproachI select small theoretical and practical work packages that...seem manageable in view of our very limited resources,match our mixed background in applied machine learning, game theory, agent-based modeling, complex networks science, dynamical systems theory, social choice theory, mechanism design, environmental economics, behvioural social science, pure mathematics, and applied statistics, andappear under-explored or neglected but promising or even necessary, according to our subjective assessment based on our reading of the literature and exchanges with individuals from applied machine learning, computer linguistics, AI ethics researchers, and most importantly, AI alignment researchers (you?).Initial ReasoningI believe that the following are likely to hold:We don't want the world to develop into a very low-welfare state.Powerful AI agents that optimizes for an objective not almost perfectly aligned with welfare can produce very low-welfare states.Powerful AI agents will emerge soon enough.It is impossible to specify sufficiently well what "welfare" means (welfare theorists have tried for centuries and still disagree, common people disagree even more).My puzzling conclusion from this is:We can't make sure that powerful AI agents optimize for an objective that is almost perfectly aligned with welfare.Hence we must try to prevent that any powerful AI agent optimizes for any objective whatsoever.Those of you who are Asimov fans like me might like the following...Six Laws of Non-OptimizingNever attempt to optimize your behavior with regards to any metric.Constrained by 1, don't cause suffering or do other harm.Constrained by 1-2, prevent other agents from violating 1. or 2.Constrained by 1-3, do what the stakeholders in your behavior would collectively decide you should do.Constrained by 1-4, cooperate with other agents.Constrained by 1-5, protect and improve yourself.Rather than trying to formalize this or even define the terms precisely, I just use them to roughly guide my work.When saying "optimize" I mean it in the strict mathematical sense: aiming to find an exact or approximate, local or global maximum or minimum of some function. When I mean mere improvements w.r.t. some metric, I just say "improve" rather than "optimize".AgendaWe currently slowly pursue two parallel approaches, the first related to laws 1,3,5 from above, the other related to law 4.Non-Optimizing AgentsExplore several novel variants of "satisficing" policies and related learning algorithms for POMDPs, produce corresponding non-optimizing versions of classical to state-of-the art tabular and ANN-based RL algorithms, and test and evaluate them in benchmark and safety-relevant environments from the literature, plus in tailormade environments for testing particular hypotheses. (Currently underway)Test them in near-term relevant application areas such as autonomous vehicles, via state-of-the-art complex simulation environments. (Planned with partner from autonomous vehicles research)Using our game-theoretical and agent-based modeling expertise, study them in multi-agent environments both theoretically and numerically.Design evolutionarily stable non-optimizing strategies for non-optimizing agents that cooperate with others to punish violations of law 1 in paradigmatic evolutionary games.Use our expertise in adaptive complex networks and dynamical systems theory to study dynamical properties of mixed populations of optimizing and non-optimizing agents: attractors, basins of attraction, their stability and...

]]>
Jobst Heitzig (vodle.it) https://forum.effectivealtruism.org/posts/ZWjDkENuFohPShTyc/my-lab-s-small-ai-safety-agenda Sun, 18 Jun 2023 23:50:53 +0000 EA - My lab's small AI safety agenda by Jobst Heitzig (vodle.it) Jobst Heitzig (vodle.it) https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:07 no full 6332
EEMpNRJK5qqCw6zqH EA - A Cost-Effectiveness Analysis of Historical Farmed Animal Welfare Ballot Initiatives by Laura Duffy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Cost-Effectiveness Analysis of Historical Farmed Animal Welfare Ballot Initiatives, published by Laura Duffy on June 20, 2023 on The Effective Altruism Forum.On May 11, 2023, the Supreme Court of the United States upheld a 2018 California law, Proposition 12, which banned the sale of certain animal products that did not meet minimum welfare standards. Especially now that such ballot initiatives have withstood legal challenges, a relevant question we might ask is: how cost-effective were initiatives like Proposition 12 at reducing farmed animal suffering? Below is the executive summary of a report I wrote for Rethink Priorities analyzing the cost-effectiveness of not just Proposition 12, but also three other historical animal welfare ballot initiatives in the United States. In this report, I also compared the results to the cost-effectiveness of corporate campaigns. The full report is viewable here with a detailed discussion of the methodology and results.Executive SummaryKey ResultsIn this report, I estimated the impact and cost-effectiveness of four historical United States ballot initiatives that either restricted the use of common animal confinement methods (including extremely confining stalls and tethering for veal calves, extremely confining gestation crates for breeding sows, and conventional cages for egg-laying hens), set minimum per-animal space requirements, and/or mandated cage-free systems for egg-laying hens. These initiatives are Arizona Proposition 204 (2006), California Proposition 2 (2008), Massachusetts Question 3 (2016), and California Proposition 12 (2018).The three metrics of impact I estimated were:The number of years of improved animal welfare produced per dollar spent passing the initiatives for three animal types (veal calves, breeding sows, and egg-laying hens);The estimated years of disabling pain-equivalent suffering alleviated per dollar for veal calves, breeding sows, and egg-laying hens; andThe relative years of improved hen welfare and disabling pain-equivalent suffering avoided per dollar per year of counterfactual impact compared to corporate cage-free campaigns, rated using an equivalent metric.I estimated these metrics by building a Monte Carlo model in Causal that used data on state population sizes, per-capita animal product consumption, state animal populations, and campaign fundraising amounts to estimate the number of animals whose quality of life was improved over the first four years in which the ballot initiatives were in place. In addition, for egg-laying hens specifically, I used data from the Welfare Footprint Project to estimate the years of suffering alleviated by the transition away from conventional cages for the ballot initiatives that included egg-laying hens (Welfare Footprint Project). Finally, I compared the per-dollar years of improved welfare generated and suffering avoided by ballot initiatives to that accomplished by corporate campaigns, using cost-effectiveness estimates from Saulius Šimčikas, the estimated number of animals whose lives were improved, and the Welfare Footprint data.The key takeaways are:Among all species, about 5.0 years of animal life were improved per dollar spent (with a 90-percentile range between 3.9 and 6.4 years per dollar). This translates into approximately 0.10 years of suffering avoided per dollar spent on all four ballot initiatives, with a 90-percentile range from 0.05 years per dollar to 0.14 years per dollar.Helping egg-laying hens is, by far, the most impactful farmed animal welfare reform.Nearly all (about 99%) of the reductions in farmed-animal suffering by these four initiatives can be attributed to bans on battery cages and/or cage-free requirements.The three initiatives that impacted egg-laying hens were approximately two orders of magnitude more cost-e...

]]>
Laura Duffy https://forum.effectivealtruism.org/posts/EEMpNRJK5qqCw6zqH/a-cost-effectiveness-analysis-of-historical-farmed-animal Tue, 20 Jun 2023 18:40:34 +0000 EA - A Cost-Effectiveness Analysis of Historical Farmed Animal Welfare Ballot Initiatives by Laura Duffy Laura Duffy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 22:35 no full 6343
QXywXmka8pACPuiHq EA - LPP Summer Research Fellowship in Law & AI 2023: Applications Open by Legal Priorities Project Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LPP Summer Research Fellowship in Law & AI 2023: Applications Open, published by Legal Priorities Project on June 20, 2023 on The Effective Altruism Forum.The Legal Priorities Project (LPP) is excited to announce that applications for our Summer Research Fellowship in Law & AI 2023 are now open. For 8–12 weeks, participants will work with researchers at LPP on how the law can help to mitigate existential risks from artificial intelligence. Fellows will receive a stipend of $10,000.If you are interested in carrying out research in this field and are considering using your career to mitigate existential risks, particularly those from AI, we invite you to apply. The application deadline is July 6 at 11:59 pm Anywhere on Earth; however, we will consider applications and select fellows on a rolling basis, so we encourage you to apply as early as possible. Current students are encouraged to check their academic calendars and apply with enough time to complete the fellowship, or as much of it as possible, before classes resume.We look forward to receiving your application!About the fellowshipYou will take the lead on a research project, with mentorship and support from other LPP researchers. We will support you in deciding what project and output will be most valuable for you to work towards, for example, publishing a report, journal/law review article, or blog post. We also expect fellows to attend regular meetings, give occasional presentations on their research, and provide feedback on other research pieces.Fellows will have the opportunity to select a research topic from a list prepared by LPP. Potential research topics for the summer may include:Tort law liability, including strict liability for abnormally dangerous activities, for activities related to the development and dissemination of transformative AI.Product liability law as a way to address harms from transformative AI.The role of litigation in mitigating risks from transformative AI.Potential obstacles for AI regulation presented by the major questions doctrine.First Amendment issues related to AI regulation.The design of a new international organization, similar to the IAEA or CERN, for the international governance of AI.The legal authorities of agencies in the United States government to address risks from transformative AI.The influence of different jurisdictions on the development and dissemination of transformative AI.Developing a syllabus for a course on law and transformative AI.This list of topics is non-exhaustive, and is presented to give an overview of the types of research we are interested in. Fellows will further define the research question at the beginning of the fellowship.In exceptional cases, we are open to research project proposals relevant to existential risk in one of our other focus areas.Selection criteriaWe are looking for graduate law students (JD or LLM), PhD candidates, and postdocs working in law. Students entering the final year of a 5-year undergraduate law degree are also welcome to apply.We strongly encourage you to apply if you have an interest in our work and are considering using your career to study or mitigate existential risks, particularly those from transformative AI. Candidates will be expected to apply their research capabilities and legal knowledge to AI governance, but are not required to have previous experience or expertise in AI.In addition to a willingness to engage with existential risks from AI, the ideal candidate will have the following strengths:Ability to carry out self-directed research with limited supervision.Excellent written communication skills.Excellent problem-solving and critical thinking skills.If you're not sure about applying because you don't know if you're qualified or the right fit, we would encourage you to apply a...

]]>
Legal Priorities Project https://forum.effectivealtruism.org/posts/QXywXmka8pACPuiHq/lpp-summer-research-fellowship-in-law-and-ai-2023 Tue, 20 Jun 2023 19:23:19 +0000 EA - LPP Summer Research Fellowship in Law & AI 2023: Applications Open by Legal Priorities Project Legal Priorities Project https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:42 no full 6344
vaDxspBZ37NoEfgMx EA - Linkpost - dawnwatch 'Peter Singer is not Animal Liberation Now' by Charlotte Darnell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost - dawnwatch 'Peter Singer is not Animal Liberation Now', published by Charlotte Darnell on June 20, 2023 on The Effective Altruism Forum.This is a linkpost for/.There is some criticism of Singer’s approach to animal welfare; effective altruism; appropriating women’s voices; and discussion of a sexual harassment claim.This was a hard read, and CEA/EV haven’t investigated these claims, but I thought it was of interest to the community to share.If you’d like to talk through any concerns about this, or other similar situations, you can contact me or other members of the Community Health Team via our form.Here are some resources that may be usefulThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Charlotte Darnell https://forum.effectivealtruism.org/posts/vaDxspBZ37NoEfgMx/linkpost-dawnwatch-peter-singer-is-not-animal-liberation-now Tue, 20 Jun 2023 17:42:15 +0000 EA - Linkpost - dawnwatch 'Peter Singer is not Animal Liberation Now' by Charlotte Darnell Charlotte Darnell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:51 no full 6345
KYApMdtPsveYPAoZk EA - Longtermists are perceived as power-seeking by OllieBase Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermists are perceived as power-seeking, published by OllieBase on June 20, 2023 on The Effective Altruism Forum.A short and arguably unfinished blog post that I'm sharing as part of EA strategy fortnight. There's probably a lot more to say about this, but I've sat on this draft for a few months and don't expect to have time to develop the argument much further.I understand longtermism to be the claim that positively shaping the long-term future is a moral priority. The argument for longtermism goes:The future could be extremely large;The beings who will inhabit that future matter, morally;There are things we can do to improve the lives of those beings (one of which is reducing existential risk);Therefore, positively shaping the long-term future should be a moral priority.However, I have one core worry about longtermism and it’s this: people (reasonably) see its adherents as power-seeking. I think this worry somewhat extends to broad existential risk reduction work, but much less so.Arguments for longtermism tell us something important and surprising; that there is an extremely large thing that people aren’t paying attention to. That thing is the long-term future. In some ways, it’s odd that we have to draw attention to this extremely large thing. Everyone believes the future will exist and most people don’t expect the world to end that soon.Perhaps what longtermism introduces to most people is actually premises 2 and 3 (above) — that we might have some reason to take it seriously, morally, and that we can shape it.In any case, longtermism seems to point to something that people vaguely know about or even agree with already and then say that we have reason to try and influence that thing.This would all be fine if everyone felt like they were on the same team. That, when longtermists say “we should try and influence the long-term future”, everyone listening sees themselves as part of that “we”.This doesn’t seem to be what’s happening. For whatever reason, when people hear longtermists say “we should try and influence the long-term future”, they hear the “we” as just the longtermists.This is worrying to them. It sounds like this small group of people making this clever argument will take control of this extremely big thing that no one thought you could (or should) control.The only thing that could make this worse is if this small group of people were somehow undeserving of more power and influence, such as relatively wealthy, well-educated white men. Unfortunately, many people making this argument are relatively wealthy, well-educated white men (including me).To be clear, I think longtermists do not view accruing power as a core goal or as an implication of longtermism. Importantly, when longtermists say “we should try and influence the long-term future”, I think they/we really mean everyone.Ironically, it seems that, because no one else is paying attention to the extremely big thing, they’re going to have to be the first ones to pay attention to it.I don’t have much in the way of a solution here. I mostly wanted to point to this worry and spell it out more clearly so that those of us making the case for longtermism can at least be aware of this potential, unfortunate misreading of the idea.58% of US adults do not think we are living in “the end times”. Not super reassuring.See Torres and Crary. A google search will also do the trick.As much as they try and make themselves less wealthy by donating a large portion of their income to charity.I think you could make the case that this is often an indirect goal, such as getting the ears of important policymakers.Except, perhaps, dictators and other ne'er-do-wells.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
OllieBase https://forum.effectivealtruism.org/posts/KYApMdtPsveYPAoZk/longtermists-are-perceived-as-power-seeking Tue, 20 Jun 2023 09:55:12 +0000 EA - Longtermists are perceived as power-seeking by OllieBase OllieBase https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:28 no full 6346
EJAFvvnbZHA7zn2LA EA - Book summary: 'Why Intelligence Fails' by Robert Jervis by Ben Stewart Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book summary: 'Why Intelligence Fails' by Robert Jervis, published by Ben Stewart on June 20, 2023 on The Effective Altruism Forum.Here’s a summary of ‘Why Intelligence Fails’ by the political scientist Robert Jervis. It’s a book analysing two cases where the U.S. intelligence community ‘failed’: being slow to foresee the 1979 Iranian Revolution, and the overconfident and false assessment that Saddam Hussein had weapons of mass destruction in 2003.I’m interested in summarising more books that contain valuable insights but are outside the typical EA canon. If you’d like more of this or have a book to suggest, let me know.Key takeawaysGood intelligence generally requires the relevant agency and country office to prioritise the topic and direct scarce resources to it. Good intelligence in a foreign country requires a dedicated diplomatic and covert collection corps with language skills and contextual knowledge.Intelligence analysis can be deficient in critical review, external expertise, and social-scientific methodology. Access to classified information only generates useful insight for some phenomena.Priors can be critical in determining interpretation within intelligence, and they can often go unchallenged.Political pressure can have a significant effect on analysis, but is hard to pin down.If the justification of an intelligence conclusion is unpublished, you can still interrogate it by asking:whether the topic would have been given sufficient priority and resources by the relevant intelligence organisationwhether classified information, if available, would be likely to yield insightwhether pre-existing beliefs are likely to bias analysiswhether political pressures could significantly affect analysisSome correctives to intelligence failures which may be useful to EA:demand sharp, explicit, and well-tracked predictionsdemand early warning indicators, and notice when beliefs can only be disproven at a late stageconsider negative indicators - 'dogs that don't bark', i.e. things that the view implies should not happenuse critical engagement by peers and external experts, especially by challenging fundamental beliefs that influence what seems plausible and provide alternative hypotheses and interpretationsuse red-teams, pre-mortems, and post-mortems.Overall, I’ve found the book to somewhat demystify intelligence analysis. You should contextualise a piece of analysis with respect to the psychology and resources involved, including whether classified information would be of significant benefit. I have become more sceptical of intelligence, but the methodology of focusing on two known failures - selecting on the dependent variable - mean that I hesitate to become too pessimistic about intelligence as a whole and as it functions today.Why it’s relevant to EAThe most direct application of this topic is to the improvement of institutional decision-making, but there is value for any cause area that depends on conducting or interpreting analysis of state and non-state adversaries, such as in biosecurity, nuclear war, or great power conflict.This topic may also contribute to the reader's sense of when and how much one should defer to the outputs of intelligence communities. Deference is motivated by their access to classified information and presumed analytic capability. However, Tetlock’s ‘Expert Political Judgment’ cast doubt on the value of classified information for improving prediction compared to generalist members of the public.Finally, assessments of the IC’s epistemic practices might offer lessons for how an intellectual community should grapple with information hazards, both intellectually and socially. More broadly, the IC is an example of a group pursuing complex, decision-relevant analysis in a high-uncertainty environment. Their successes and ...

]]>
Ben Stewart https://forum.effectivealtruism.org/posts/EJAFvvnbZHA7zn2LA/book-summary-why-intelligence-fails-by-robert-jervis Tue, 20 Jun 2023 19:14:54 +0000 EA - Book summary: 'Why Intelligence Fails' by Robert Jervis by Ben Stewart Ben Stewart https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:59 no full 6347
kP95dWZJR5qKwdThA EA - Five Years of Rethink Priorities: What We've Learned by Peter Wildeford Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Five Years of Rethink Priorities: What We've Learned, published by Peter Wildeford on June 21, 2023 on The Effective Altruism Forum.The post contains a reflection on our journey as co-founders of Rethink Priorities. We are Peter Wildeford and Marcus A. Davis.In 2017, we were at a crossroads.We had been working on creating new global health and development interventions, co-founding an organization that used text message reminders to encourage new parents in India to get their children vaccinated.However, we felt there was potentially more value in creating an organization that would help tackle important questions within cause and intervention prioritization. We were convinced that farmed and wild animal welfare were very important, but we didn’t know which approaches to helping those animals would be impactful. Hits-based giving seemed like an important idea, but we were unsure how to empirically compare that type of approach to the mostly higher-certainty outcomes available from funding GiveWell’s top charities.So, we chose to create a research organization. Our aim was to take the large amount of evidence base and strong approaches used to understand global health interventions and apply them to other neglected cause areas, such as animal welfare and reducing risks posed by unprecedented new technologies like AI. We wanted to identify neglected interventions and do the research needed to make them happen.Five years later, Rethink Priorities is now a research and implementation group that works with foundations and impact-focused non-profits to identify pressing opportunities to make the world better, figures out strategies for working on those problems, and does that work.Reflecting on everything the organization has accomplished and everything we want to happen in the next five years, we’re proud of a lot of the work our team has done.For example, we went from being unsure if invertebrates were capable of suffering to researching the issue and establishing invertebrate welfare as a proposition worth taking seriously. Following through, we helped create some of the first groups in the effective animal advocacy space working on interventions targeting invertebrates. Our team did the deep philosophical work and the practical research needed to establish specific interventions, and we incubated groups to implement them.Building on this work, our ambitious Moral Weight Project improved our understanding of both capacity for welfare and intensity of valenced experiences across species, and the moral implications of those possible differences. By doing so, the Moral Weight Project laid the foundation for cross-animal species cost-effectiveness analyses that inform important decisions regarding how many resources grantmakers and organizations should tentatively allocate towards helping each of these species.We have also produced dozens of in-depth research pieces. Our global health and development team, alone, has produced 23 reports commissioned by Open Philanthropy that increased the scope of impactful interventions considered in their global health and development portfolio. This work has influenced decisions directing millions of dollars towards the most effective interventions.Our survey and data analysis team also worked closely with more than a dozen groups in EA including the Centre for Effective Altruism, Open Philanthropy, 80,000 Hours, and 1 Day Sooner to help them fine-tune their messaging, improve their advertising, and have better data analysis for their impact tracking.RP has provided 23 fellowships to aspiring researchers, building a robust talent pipeline. Many talented people have remained in successful careers at Rethink Priorities. RP staff have gone on to work at Open Philanthropy, Founders Pledge, the Centre for Effective Altruism, 80,00...

]]>
Peter Wildeford https://forum.effectivealtruism.org/posts/kP95dWZJR5qKwdThA/five-years-of-rethink-priorities-what-we-ve-learned Wed, 21 Jun 2023 17:28:58 +0000 EA - Five Years of Rethink Priorities: What We've Learned by Peter Wildeford Peter Wildeford https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 19:19 no full 6353
AJwuMw7ddcKQNFLcR EA - Concrete projects for reducing existential risk by Buhl Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concrete projects for reducing existential risk, published by Buhl on June 21, 2023 on The Effective Altruism Forum.This is a blog post, not a research report, meaning it was produced quickly and is not to Rethink Priorities’ typical standards of substantiveness and careful checking for accuracy.Super quick summaryThis is a list of twenty projects that we (Rethink Priorities’ Existential Security Team) think might be especially promising projects for reducing existential risk, based on our very preliminary and high-level research to identify and compare projects.You can see an overview of the full list here. Here are five ideas we (tentatively) think seem especially promising:Improving info/cybersec at top AI labsAI lab coordinationField building for AI policyFacilitating people’s transition from AI capabilities research to AI safety researchFinding market opportunities for biodefence-relevant technologiesIntroductionWhat is this list?This is a list of projects that we (the Existential Security Team at Rethink Priorities) think are plausible candidates for being top projects for substantially reducing existential risk.The list was generated based on a wide search (resulting in an initial list of around 300 ideas, most of which we did not come up with ourselves) and a shallow, high-level prioritization process (spending between a few minutes and an hour per idea). The process took about 100 total hours of work, spread across three researchers. More details on our research process can be found in the appendix. Note that some of the ideas we considered most promising were excluded from this list due to being confidential, sensitive or particularly high-risk.We’re planning to prioritize projects on this list (as well as other non-public ideas) for further research, as candidates for projects we might eventually incubate. We’re planning to focus exclusively on projects aiming to reduce AI existential risk in 2023 but have included project ideas in other cause areas on this list as we still think those ideas are promising and would be excited about others working on them. More on our team’s strategy here.We’d be potentially excited about others researching, pursuing and supporting the projects on this list, although we don't think this is a be-all-end-all list of promising existential-risk-reducing projects and there are important limitations to this list (see “Key limitations of this list”).Why are we sharing this list and who is it for?By sharing this list, we’re hoping to:Give a sense of what kinds of projects we’re considering incubating and be transparent about our research process and results.Provide inspiration for projects others could consider working on.Contribute to community discussion about existential security entrepreneurship – we’re excited to receive feedback on the list, additional project suggestions, and information about the project areas we highlight (for example, existing projects we may have missed, top ideas not on this list, or reasons that some of our ideas may be worse than we think).You might be interested in looking at this list if you’re:Considering being a founder or early employee of a new project. This list can give you some inspiration for potential project areas to look into. If you’re interested in being a (co-)founder or early employee for one of the projects on this list, feel free to reach out to Marie Buhl at marie@rethinkpriorities.org so we can potentially provide you with additional resources or contacts when we have them.Note that our plan for 2023 is to zoom in on just a few particularly promising projects targeting AI existential risk. This means that we’ll have limited bandwidth to provide ad hoc feedback and support for projects that aren’t our main focus, and that we might not be able to respond to ev...

]]>
Buhl https://forum.effectivealtruism.org/posts/AJwuMw7ddcKQNFLcR/concrete-projects-for-reducing-existential-risk Wed, 21 Jun 2023 16:18:08 +0000 EA - Concrete projects for reducing existential risk by Buhl Buhl https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 36:13 no full 6354
kSrjdtazFhkwwLuK8 EA - Rethink Priorities’ Worldview Investigation Team: Introductions and Next Steps by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities’ Worldview Investigation Team: Introductions and Next Steps, published by Bob Fischer on June 21, 2023 on The Effective Altruism Forum.Some months ago, Rethink Priorities announced its interdisciplinary Worldview Investigation Team (WIT). Now, we’re pleased to introduce the team’s members:Bob Fischer is a Senior Research Manager at Rethink Priorities, an Associate Professor of Philosophy at Texas State University, and the Director of the Society for the Study of Ethics & Animals. Before leading WIT, he ran RP’s Moral Weight Project.Laura Duffy is an Executive Research Coordinator for Co-CEO Marcus Davis and works on the Worldview Investigations Project. She is a graduate of the University of Chicago, where she earned a Bachelor of Science in Statistics and co-facilitated UChicago Effective Altruism’s Introductory Fellowship.Arvo Muñoz Morán is a Quantitative Researcher working on the Worldview Investigations Team at Rethink Priorities and a research assistant at Oxford's Global Priorities Institute. Before that, he was a Research Analyst at the Forethought Foundation for Global Priorities Research and earned an MPhil in Economics from Oxford. His background is in mathematics and philosophy.Hayley Clatterbuck is a Philosophy Researcher at Rethink Priorities and an Associate Professor of Philosophy at the University of Wisconsin-Madison. She has published on topics in probability, evolutionary biology, and animal minds.Derek Shiller is a Philosophy Researcher at Rethink Priorities. He has a PhD in philosophy and has written on topics in metaethics, consciousness, and the philosophy of probability. Before joining Rethink Priorities, Derek worked as the lead web developer for The Humane League.David Bernard is a Quantitative Researcher at Rethink Priorities. He will soon complete his PhD in economics at the Paris School of Economics, where his research focuses on forecasting and causal inference in the short and long-run. He was a Fulbright Scholar at UC Berkeley and a Global Priorities fellow at the Global Priorities Institute.Over the next few months, the team will be working on cause prioritization—a topic that raises hard normative, metanormative, decision-theoretic, and empirical issues. We aren’t going to resolve them anytime soon. So, we need to decide how to navigate a sea of open questions. In part, this involves making our assumptions explicit, producing the best models we can, and then conducting sensitivity analyses to determine both how robust our models are to uncertainty and where the value of information lies.Accordingly, WIT’s goal is to make several contributions to the broader conversation about global priorities. Among the planned contributions, you can expect:A cross-cause cost-effectiveness model. This tool will allow users to compare interventions like corporate animal welfare campaigns with work on AI safety, the Against Malaria Foundation with attempts to reduce the risk of nuclear war, biosecurity projects with community building, and so on. We’ve been working on a draft of this model in recent months and we recently hired two programmers—Chase Carter and Agustín Covarrubias—to accelerate its public release. While this tool won’t resolve all disputes about resource allocation, we hope it will help the community reason more transparently about these issues.Surveys of key stakeholders about the inputs to the model. Many people have thought long and hard about how much x-risk certain interventions can reduce, the relative importance of improving human and animal welfare, and the cost of saving lives in developing countries. We want to capture and distill those insights.A series of reports on the cruxes. The model has three key cruxes: animals’ “moral weights,” the expected value of the future, and your preference for ...

]]>
Bob Fischer https://forum.effectivealtruism.org/posts/kSrjdtazFhkwwLuK8/rethink-priorities-worldview-investigation-team Wed, 21 Jun 2023 14:32:34 +0000 EA - Rethink Priorities’ Worldview Investigation Team: Introductions and Next Steps by Bob Fischer Bob Fischer https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:07 no full 6355
ZZe5aFGKeZATYGGMD EA - Upcoming speaker series on emerging tech, national security & US policy careers by kuhanj Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Upcoming speaker series on emerging tech, national security & US policy careers, published by kuhanj on June 21, 2023 on The Effective Altruism Forum.There is an upcoming virtual speaker series on emerging tech, national security, and US policy careers and wanted to share this opportunity. I’ve heard some of these speakers before think the series could be really helpful for anyone interested in working on AI/bio policy in the US. I've pasted the announcement of the speaker series below.Summer webinar series on emerging technology & national security policy careersThe Horizon Institute for Public Service, in collaboration with partners at the Scowcroft Center for International Affairs at the Texas A&M Bush School and SeedAI, is excited to announce an upcoming webinar series on US emerging technology policy careers to help individuals decide if they should pursue careers in this field. In line with Horizon’s and our partners’ focus areas, the series will focus primarily on policy opportunities related to AI and biosecurity and run from late June to early August.Sessions will not be recorded and individuals must sign up to receive event access — you can express interest in attending here.Horizon’s mission is to help the US government navigate our era of rapid technological change by fostering the next generation of public servants with emerging technology expertise. The policy opportunities and challenges related to emerging technology are interdisciplinary and will require talent from a range of backgrounds and communities, including many that don’t have ready access to information about what policy work is like and what a career transition might look like. As a result, this series will cover:Examples of individuals with non-traditional (e.g. technical or legal) backgrounds transitioning into emerging technology policyExamples of what a “day in a life” is like at a think tank, advocacy organization, executive agency, or in CongressExamples of ongoing policy efforts and debates related to AI or biosecurity policyAll sessions will involve interactive conversations with experienced policy practitioners and opportunities for audience questions. Some of the sessions will be useful for individuals from all fields and career stages, while others are more focused — you may choose to attend all or only some of the sessions. Currently scheduled sessions include:Q&A with Jason Matheny, CEO of the RAND CorporationQ&A with Nikki Teran, Institute for Progress Fellow, on biosecurity challenges and policy careers to address themQ&A with Helen Toner‚ Director of Strategy and Foundational Research Grants at CSET, on AI challenges and policy careers to address themWhat’s it like working in the executive branch?What’s it like working in a think tank?What’s it like working in Congress?Choosing graduate schools for policy careers (master’s, PhD, JD)Transitioning from law to policyTransitioning from science and tech to policyAdvancing in policy from underrepresented backgroundsHorizon Fellowship info sessionThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
kuhanj https://forum.effectivealtruism.org/posts/ZZe5aFGKeZATYGGMD/upcoming-speaker-series-on-emerging-tech-national-security Wed, 21 Jun 2023 18:15:30 +0000 EA - Upcoming speaker series on emerging tech, national security & US policy careers by kuhanj kuhanj https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:03 no full 6356
t9e6enPXcH6HFzQku EA - Why Altruists Can't Have Nice Things by lincolnq Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Altruists Can't Have Nice Things, published by lincolnq on June 21, 2023 on The Effective Altruism Forum.Gesturing at a thing to mostly avoid. My personal opinion.This topic has been discussed on the EA Forum before, e.g. Free-spending EA might be a big problem (2022) and The biggest risk of free-spending EA is grift (2022). I also wrote What's Your Hourly Rate? in 2013, and Value of time as an employee in 2019. This piece mostly stands on its own.There's a temptation, when solving the world's toughest and most-important problems, to throw money around.Lattes on tap! Full-time massage team! Business class flights! Retreat in the Bahamas! When you do the cost/benefit analysis it comes out positive: "An extra four hours of sleep on the plane is worth four thousand dollars, because of how much we're getting paid and how tight the time is."The problem, which we always underindex on, is that our culture doesn't stand up to this kind of assault on normalcy. No altruistic, mission-oriented culture can. "I have never witnessed so much money in my life." [1]What is culture? I often phrase it as "lessons from the early days of an org." How we survive; how we make it work despite the tough times; our story of how we started with something small and ended up with something great. That knowledge fundamentally pervades everything we do. It needs upkeep and constant reinforcement. "It is always Day One" [2] refers to how Amazon is trying hard, even as they have grown huge, to preserve their culture of scrappiness and caring.What perks sayFancy, unusual, expensive perks are costly signals. They're saying or implying the following:Your time is worth a lot of moneyYou are special and important, you deserve thisWe are rich and successful; we are eliteWe are generous and you are lucky to be in our orbitYou're in the inner ring; you're better than people who aren't part of thisWe desperately want to keep you aroundYou are free from menial tasksYou would never pay for this on your own—but through us, you can have it anywayWe're just like Google!Some of these things might be locally true, but when I zoom out, I get a villainous vibe: this story flatters, it manipulates, it promotes hubris, it tells lies you want to believe. It's a Faustian trade: in exchange for these perks you "just" have to distort your reality, and we're not even asking you to believe hard scary things, just nice ego-boosting things about how special, irreplaceable, on-the-right-track we all are.Signals you might want to send insteadThe work cultures I prefer would signal something like the following:We're normal people who have chosen to take on especially important workWe have an angle / insight that most people haven't realized/acted on yetWe might be wrong, and are constantly seeking evidence that would change our mindsWe should try to be especially virtuous whenever we find ourselves setting a moral example for others(We aren't morally better by default, although we may have a bit more information; we are always learning as we go)We are focused on the long term good for the worldTons of people are suffering / will suffer in the futureWe remind ourselves of this regularly(We regularly acknowledge suffering in the world; we definitely don't do things that are cushy)We invest a lot in our people and their relationships, and act to preserve that valueEveryone works hard, sometimes they have to do things they don't want to doWe are in it for the long haul so don't burn out(We might use money in various ways—to make our work relationships stronger or stave off burnout—but we aren't profligate.)Instantiations of this culture will vary a decent amount—but I expect that a lot of altruistic orgs have/want to promote at least some subset of these values. Fancy perks push (perniciously, te...

]]>
lincolnq https://forum.effectivealtruism.org/posts/t9e6enPXcH6HFzQku/why-altruists-can-t-have-nice-things Wed, 21 Jun 2023 06:29:23 +0000 EA - Why Altruists Can't Have Nice Things by lincolnq lincolnq https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:09 no full 6357
vjysioCANWNXFKipq EA - The impact of mobile phones & mobile money for people in poverty by GiveDirectly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The impact of mobile phones & mobile money for people in poverty, published by GiveDirectly on June 21, 2023 on The Effective Altruism Forum.In our main program, GiveDirectly gives those who don’t already have a mobile phone the option to buy one, subtracting the ~$15 cost from their first transfer, an offer taken by 90% of recipients. Your donations then go directly to the SIM card in their phone, a technology called mobile money.The primary benefits of direct cash – improved earnings, health, education, etc. – may obscure a secondary benefit of our work: connecting people to a mobile network. Below we lay out what the evidence shows impact and how we specifically extend impact.Access to mobile money and phones has distinct benefitsA large analysis found mobile money access alone (without cash grants or other aid) has lifted 194K or 2% of Kenyan households out of extreme poverty by increased savings, resilience, and access to better business opportunities. Other research finds mobile money usage can help African families be more resilient to economic shocks, access healthcare more often, and have more social closeness and lending reciprocity than similar households without mobile money.Even without access to mobile money, simply owning a phone improves resilience and earnings by reducing travel costs and increasing social connection. Phone ownership also allows for innovative responses to shocks and disasters. During COVID-19 lockdowns, the government of Togo targeted and sent cash aid fully remotely, taking advantage of the fact that some 90% of their citizens already owned a phone. Health officials in low income countries have harnessed phone data to track disease outbreaks.GiveDirectly overcomes structural obstacles to bring mobile accessWe issue mobile phones and free SIM cards to households that need one. Our staff walk recipients through how to use this new technology, guiding them to set their unique PIN and check their balance. We also sensitize them to fraud risks and inform them what fees they should expect when they visit an agent to cash out. A government ID is required to register for mobile money, so in some cases we coordinate with local governments to hold ID-issuing campaigns ahead of our enrollment.But even with phones, many villages will still struggle to access the benefits of mobile devices due to other structural hurdles. Here are some solutions we implemented to help:We bring cell network where it never reached before. 16% of Africans do not live within reach of a mobile network, with the biggest gaps in the poorest regions due to a lack of demand – telcos only build towers where they believe they’ll have customers. However, when GiveDirectly starts a project, we’re creating thousands of new subscribers. Our project reaching 15K families in Kiryandongo, Uganda motivated two telcos (MTN & Airtel) to extend coverage and mobile money agents to the area. In Maryland, Liberia, where we’ve enrolled 11.5K households in our basic income program, we co-financed 10 new cell towers with local telco MTN, bringing cell coverage for the first time to over 2,400 adults across 21 villages (see video below).We get cash to remote communities. Over 75% of recipients choose to cash out rather than keeping their transfers in their mobile money account, as many rural merchants only accept cash. In some rural areas, banks and agents do not have enough cash on hand for hundreds of families to withdraw large amounts in a matter of days. GiveDirectly collaborates with national banks and mobile money operators to ensure sufficient liquidity in the regions where payments are about to go out. In some of the most remote areas, we arrange for mobile money agents to travel to village centers to cash out recipients.The final challenge of charging phones without an ...

]]>
GiveDirectly https://forum.effectivealtruism.org/posts/vjysioCANWNXFKipq/the-impact-of-mobile-phones-and-mobile-money-for-people-in Wed, 21 Jun 2023 00:12:45 +0000 EA - The impact of mobile phones & mobile money for people in poverty by GiveDirectly GiveDirectly https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:28 no full 6358
Rg7h7G3KTvaYEtL55 EA - US public perception of CAIS statement and the risk of extinction by Jamie Elsey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US public perception of CAIS statement and the risk of extinction, published by Jamie Elsey on June 22, 2023 on The Effective Altruism Forum.SummaryOn June 2nd-June 3rd 2023, Rethink Priorities conducted an online poll of US adults, to assess their views regarding a recent open statement from the Center for AI Safety (CAIS). The statement, which has been signed by a number of prominent figures in the AI industry and AI research communities, as well as other public figures, was:“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”The goal of this poll was to determine the degree to which the American public’s views of AI risk align with this statement.The poll covered opinions regarding:Agreement/disagreement with the CAIS open statementSupport for/opposition to the CAIS open statementWorry about negative effects of AIPerceived likelihood of human extinction from AI by the year 2100Our population estimates reflect the responses of 2407 US adults, poststratified to be representative of the US population. See the Methodology section of the Appendix for more information on sampling and estimation procedures.Key findingsAttitudes towards the CAIS statement were largely positive. A majority of the population supports (58%) and agrees with (59%) the CAIS statement, relative to 22% opposition and 26% disagreement.Worry about AI remains low. We estimate that most (68%) US adults would say that, at most, they only worry a little bit in their daily lives about the possible negative effects of AI on their lives or society more broadly. This is similar to our estimate in April (where 71% were estimated to have this level of worry).The public estimates of the chance of extinction from AI are highly skewed, with the most common estimate around 1%, but substantially higher medians and means. We estimate that half the population would give a probability below 15%, and half would give a probability above 15%. The most common response is expected to be around 1%, with 13% of people saying there is no chance. However, the mean estimate for the chance of extinction from AI by 2100 is quite high, at 26%, owing to a long tail of people giving higher ratings. It should be noted that just because respondents provided ratings in the form of probabilities, it does not mean they have a full grasp of the exact likelihoods their ratings imply.Attitudes towards the CAIS statementRespondents were presented with the CAIS statement on AI risk, and asked to indicate both the extent to which they agreed/disagreed with it, and the extent to which they supported/opposed it. We estimate that the US population broadly agrees with (59%) and supports (58%) the statement. Disagreement (26%) and opposition (22%) were relatively low, and sizable proportions of people remained neutral (12% and 18% for agreement and support formats, respectively).It is important to note that agreement with or support of this statement may not translate to agreement with or support of more specific policies geared towards actually making AI risk a comparable priority to pandemics or nuclear weapons. People may also support certain concrete actions that serve to mitigate AI risk despite not agreeing that it is of comparable concern to pandemics or nuclear security.The level of agreement/support appears to vary with age: the youngest age brackets of 18-24 are expected to show the most disagreement with/opposition to the statement. However, all ages were still expected to have majority support for the statement.Perceived likelihood of human extinction from AIWe were interested to understand how likely the public believed the risk of extinction from AI to be. In our previous survey of AI-related attitudes and beliefs, we asked...

]]>
Jamie Elsey https://forum.effectivealtruism.org/posts/Rg7h7G3KTvaYEtL55/us-public-perception-of-cais-statement-and-the-risk-of Thu, 22 Jun 2023 17:38:19 +0000 EA - US public perception of CAIS statement and the risk of extinction by Jamie Elsey Jamie Elsey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:21 no full 6368
AXCnNJTQXeAY4jnsw EA - Rejection thread: stories and tips by Luisa Rodriguez Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rejection thread: stories and tips, published by Luisa Rodriguez on June 22, 2023 on The Effective Altruism Forum.Getting rejected from jobs can be crushing — but learning how to deal with rejection productively is an incredibly valuable skill. And hearing others' rejection stories can make us feel less alone and judged, and generally help us orient toward rejection in more productive ways.Let's use this thread to help each other with this.If you're up for it, comment and share:Rejection stories you might haveAny lessons you've learned for coping with rejectionWhat has helped in the pastYou can also message Lizka to share rejections that she will anonymize and add to the comments, or you can omit some details or just share tips without sharing the rejection stories themselves.Sharing rejections like this can be hard. Don't force yourself to do it if it stresses you out. And if you're commenting on this post, please remember to be kind.Luisa's experience — shared in the 80,000 Hours newsletterRejection was the topic of this week's 80,000 Hours newsletter, where Luisa shared a lot about her experience and how she's learned to cope with it. (That prompted this thread!) She wrote the following:I've been rejected many, many times. In 2015, I applied to ten PhD programs and was rejected from nine. After doing a summer internship with GiveWell in 2016, I wasn't offered a full-time role. In 2017, I was rejected by J-PAL, IDinsight, and Founders Pledge (among others). Around the same time, I was so afraid of being rejected by Open Philanthropy, I dropped out of their hiring round.I now have what I consider a dream job at 80,000 Hours: I get to host a podcast about the world's most pressing problems and how to solve them. But before getting a job offer from 80,000 Hours in 2020, I got rejected by them for a role in 2018. That rejection hurt the most.I still remember compulsively checking my phone after my work trial to see if 80,000 Hours had made me an offer. And I still remember waking up at 5:00 AM, checking my email, and finding the kind and well-written — but devastating — rejection: "Unfortunately we don't think the role is the right fit right now."And I remember being so sad that I took a five-hour bus ride to stay with a friend so I wouldn't have to be alone. After a few days of wallowing, I re-read the rejection email and noticed a lot of specific feedback — and a promising path forward."We're optimistic about your career in global prioritisation research and think you should stay in the area and build experience," they said. "We're not going anywhere, and could be a good career transition for you further down the line."I took their advice and accepted a job offer at Rethink Priorities, which also does global priorities research. And a year and a half later, 80,000 Hours invited me to apply for a job again.It's hard to say what would've happened had I not opened myself to rejection in 2018, but it seems possible I'd be in a pretty different place. While that rejection was really painful, the feedback I got was a huge help in moving my research career forward. I think there's an important lesson here.For me, rejection is one of the worst feelings. But whether you're like me, looking to work in global priorities research at small nonprofits, or interested to work in another potentially impactful path, getting rejected can come with unexpected benefits:When you get rejected from a role you thought was a good fit, you get more information about your strengths and weaknesses. It can indicate whether you need more career capital or should perhaps consider different types of roles or paths altogether.When applying for roles in an ecosystem you want to work in, you grow the number of people in that field who know you and who might reach out to you for futu...

]]>
Luisa_Rodriguez https://forum.effectivealtruism.org/posts/AXCnNJTQXeAY4jnsw/rejection-thread-stories-and-tips Thu, 22 Jun 2023 16:06:15 +0000 EA - Rejection thread: stories and tips by Luisa Rodriguez Luisa_Rodriguez https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:23 no full 6369
BSmMok4r5ocnD5dqT EA - RP’s AI Governance & Strategy team - June 2023 interim overview by MichaelA Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: RP’s AI Governance & Strategy team - June 2023 interim overview, published by MichaelA on June 22, 2023 on The Effective Altruism Forum.Hi! I co-lead Rethink Priorities’ AI Governance & Strategy (AIGS) team. At the suggestion of Ben West, I’m providing an update on our team.Caveats:This was quickly written and omits most of the reasoning for our choices, because that was all I had time to write and that seemed better than not providing an update at all.This post may not reflect the views of all members of the team, and doesn’t represent RP as an organization.The areas we work in are evolving rapidly, so our strategy and projects are as well.Comments and DMs are welcome, though I can’t guarantee a rapid or detailed reply.SummaryThe AIGS team works to reduce catastrophic risks related to AI by conducting research and strengthening the field of AI governance. We aim to bridge the technical and policy worlds, and we now focus on short, rapid-turnaround outputs and briefings. [read more]Our four key workstreams are compute governance, China, lab governance, and US regulations. [read more]We list some of our ongoing or completed projects. [read more]Please feel free to reach out if you’d like to suggest a project; if you’re open to sharing feedback, expertise, or connections with us; or if you or someone you know might be interested in working with or funding us. [read more]I summarize a few lessons learned and recent updates. [read more]Who we areRethink Priorities’ AI Governance & Strategy team works to reduce catastrophic risks related to development & deployment of AI systems. We do this by producing research that grounds concrete recommendations in strategic considerations, and by strengthening coordination and talent pipelines across the AI governance field.We combine the intellectual independence and nonpartisanship of a think tank with the flexibility and responsiveness of a consultancy. Our work is funded solely by foundations and independent donors, giving us the freedom to pursue important questions without bias. We’re always on the lookout for unexplored high-value research questions–feel free to pitch us!We aim to bridge the technical and policy worlds, with expertise on foundation models and the hardware underpinning them.We focus on short, rapid-turnaround outputs and briefings, but also produce longer reports. Much of our work is nonpublic, but may be shareable on request.We have 11 staff, listed here. You can contact any of us at firstname@rethinkpriorities.orgOur four workstreamsWe recently narrowed down to four focus areas, each of which has a 1-3 person subteam working on it. Below we summarize these workstreams and link to docs that provide further information on each (e.g., about ongoing projects, public outputs, and stakeholders and paths to impact).Compute governance: This workstream will focus on establishing a firmer empirical and theoretical grounding for the fledgling field of compute governance, informing ongoing policy processes and debates, and developing more concrete technical and policy proposals. In particular, we will focus on understanding the impact of existing compute-related US export controls, and researching what changes to them may be feasible and beneficial.This workstream consists of Onni Aarne and Erich Grunewald, and we’re currently hiring a third member.China: This workstream’s mission is to improve decisions at the intersection of AI governance and China. We are interested in both China-West relations concerning AI, as well as AI developments within China. We are particularly focused on informing decision-makers who are concerned about catastrophic risks from AI.This workstream consists of Oliver Guest.Lab governance: This workstream identifies concrete measures frontier AI labs can adopt now and in ...

]]>
MichaelA https://forum.effectivealtruism.org/posts/BSmMok4r5ocnD5dqT/rp-s-ai-governance-and-strategy-team-june-2023-interim-1 Thu, 22 Jun 2023 14:19:44 +0000 EA - RP’s AI Governance & Strategy team - June 2023 interim overview by MichaelA MichaelA https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:23 no full 6370
gxAXKRTzdEqiRbkrr EA - Yip Fai Tse on animal welfare in AI ethics and long termism by Karthik Palakodeti Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Yip Fai Tse on animal welfare in AI ethics and long termism, published by Karthik Palakodeti on June 22, 2023 on The Effective Altruism Forum.We talk about Fai's work in the field of AI ethics and non-human animals with Peter Singer, his thoughts on whether animal welfare is an issue concerning long termism, and some of the big debates in the philosophy of animal welfare interventions!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Karthik Palakodeti https://forum.effectivealtruism.org/posts/gxAXKRTzdEqiRbkrr/yip-fai-tse-on-animal-welfare-in-ai-ethics-and-long-termism Thu, 22 Jun 2023 20:55:04 +0000 EA - Yip Fai Tse on animal welfare in AI ethics and long termism by Karthik Palakodeti Karthik Palakodeti https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:36 no full 6371
sPnNyG79CcSZq9avo EA - Lab-grown meat is cleared for sale in the United States by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lab-grown meat is cleared for sale in the United States, published by Ben West on June 22, 2023 on The Effective Altruism Forum.Upside Foods and Good Meat, two companies that make what they call “cultivated chicken,” said Wednesday that they have gotten approval from the US Department of Agriculture to start producing their cell-based proteins.Good Meat, which is owned by plant-based egg substitute maker Eat Just, said that production is starting immediately.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Ben_West https://forum.effectivealtruism.org/posts/sPnNyG79CcSZq9avo/lab-grown-meat-is-cleared-for-sale-in-the-united-states Thu, 22 Jun 2023 04:02:17 +0000 EA - Lab-grown meat is cleared for sale in the United States by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:39 no full 6372
mzmGaEq7pwxqihZjF EA - Announcing the University of Chicago’s $2M Market Shaping Accelerator’s Innovation Challenge: Biosecurity, Pandemic Preparedness, and Climate Change by schethik Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the University of Chicago’s $2M Market Shaping Accelerator’s Innovation Challenge: Biosecurity, Pandemic Preparedness, and Climate Change, published by schethik on June 22, 2023 on The Effective Altruism Forum.The Market Shaping Accelerator (MSA) is a new initiative at the University of Chicago aiming to accelerate innovations to address pressing global challenges. It is led by Michael Kremer, Rachel Glennerster and Christopher Snyder. Our current focus areas are climate change, biosecurity and pandemic preparedness. We recently launched the MSA Innovation Challenge, which will award up to $2,000,000 total in prizes for ideas about problems to tackle with pull incentives for innovation in these areas. Pull mechanisms reward outputs and outcomes in contrast to push funding which pay for inputs (e.g. research grants).We want to invite members of the EA community and others to submit your ideas to the challenge. We are interested in hearing from domain experts, innovators, EA organizations as well as people just interested in a problem. You can read more about the challenge here (check out the FAQ on the bottom of this page) and view the application template here. The deadline to submit for Phase I is Friday, July 21, 2023 (12 PM CT). Submissions that meet a minimum criteria will receive $4,000. Up to $500,000 in prizes will be awarded in Phase I.Ideas that are selected for entry into Phase II will benefit from support and guidance of the MSA team as well as domain specialists to help turn their ideas into fully worked up contracts. Top ideas will also gain the MSA’s support in fundraising for the multi-millions or billions of dollars needed to back their pull mechanism.ITN FrameworkThe MSA Innovation Challenge is partly informed by the ITN Framework – we are seeking to surface major problems (importance), that can plausibly be addressed by innovation (tractability), but where innovation is under-incentivized by markets (neglectedness).We would welcome both technologically close and technologically distant targets – we think pull mechanisms can accelerate innovation and scale up for both.What is pull funding?Pull mechanisms reward outputs and outcomes rather than fund inputs. They create an incentive for the private sector to invest in R&D and bring solutions to market. Advance Market Commitments (AMCs) are an example of a pull mechanism. AMCs involve promising, in advance, to purchase or subsidize the purchase of a large quantity of an innovative product if it is invented. The $1.5 billion Advance Market Commitment for the Pneumococcal Vaccine was launched in 2009. Since then three vaccines for the strains of pneumococcus common in low- and middle-income countries have been developed, hundreds of millions of doses delivered, and an estimated 700,000 lives saved. The rate of vaccine coverage for the pneumococcal vaccine in GAVI countries converged to the global rate five years faster than for the rotavirus vaccine which GAVI supported without an AMC.Pull mechanisms have important advantages:They can be designed to be firm and solution-agnostic. The funder does not have to choose a particular firm or technological path in advance, they can just commit to rewarding an effective solution.The funder does not have to pay unless the targets are met. Payment can be linked to scale and take-up.They reduce demand uncertainty – they can signal to firms there will be demand for socially useful innovations.They can incentivize solutions that appeal to consumers. The funder can provide a matching subsidy to a consumer purchase. This incentivizes firms to develop products that consumers will actually use.Further readingAdvance Market Commitments: Insights from Theory and ExperienceThe Case for More Pull FinancingMaking Markets for Vaccines: Ideas to Action...

]]>
schethik https://forum.effectivealtruism.org/posts/mzmGaEq7pwxqihZjF/announcing-the-university-of-chicago-s-usd2m-market-shaping Thu, 22 Jun 2023 03:59:23 +0000 EA - Announcing the University of Chicago’s $2M Market Shaping Accelerator’s Innovation Challenge: Biosecurity, Pandemic Preparedness, and Climate Change by schethik schethik https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:03 no full 6373
cJLsd2TYxv8KCzHvg EA - Announcing the AIPolicyIdeas.com Database by abiolvera Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the AIPolicyIdeas.com Database, published by abiolvera on June 23, 2023 on The Effective Altruism Forum.Executive SummaryAIPolicyIdeas.com is a new database that compiles AI policy ideas from various sources. It’s intended to help AI policy practitioners and researchers quickly review high-impact, high-feasibility AI policy ideas, inform decisions on what to push for or research, and identify gaps in knowledge for future work (alternate link if any issues AIPolicyIdeas.com URL).This was created relatively quickly. Users are encouraged to conduct their own research and analysis before making any decisions.Submit ideas through this form.If you’re working on existential-risk-relevant AI policy or related research, request access to the database via this form.Other people can also use GCR Policy’s related public database.Approval for accessing the AI policy ideas database is not guaranteed. We appreciate your understanding if your application is not approved.We are excited to announce the launch of AIPolicyIdeas.com, a database compiling AI policy ideas from various sources across the longtermist AI governance community and beyond. The database prioritizes inclusion of policy ideas that may help reduce catastrophic risk from AI and may be implementable in the US in the near- or medium-term (in the next ~5-10 years). The database includes policy ideas of varying levels of expected impact, clarity about how impactful they’d be, and feasibility.The ideas were curated by Abi Olvera from various sources such as Google Docs, the GCR Policy database, individual submissions, and public reports. For most ideas, we have included information on its source, relevant topic area, relevant U.S. agency, as well as loose ratings estimating expected levels of impact, feasibility, and specificity, and degree of confidence/certainty.Collection Process: Abi started off with a collection of lists of AI policy ideas from personal Google Docs, contacts, conversations, and public reports. To avoid redundancy, ideas were only added if they contained unique ideas not already on the database. The two largest sources of AI ideas were RP’s Survey on intermediate goals in AI governance and ideas shared by the GCR Policy team. Additional ideas will be gradually added from similar sources and a form for idea submission.Loose Ratings: To help sort the ideas, we used a loose five-point scale for impact, confidence in our impact assessment, feasibility, and specificity. These ratings were assigned by the original author, the GCR Policy evaluation team, or Abi. However, the ratings were not rigorously assessed and come from various sources, including different assessors with their biases.Note that most choices about what to include in the database and what ratings to give were made by Abi alone, without someone else reviewing that.Negative Impacts Not Well Accounted For: We want to make it clear that while we have included a range of policy ideas in this database, some may have lower confidence and unclear levels of expected impact. Therefore, potential negative impacts are not well represented in this database. We encourage users to exercise caution when considering ideas, particularly those with uncertain impacts, and to conduct their research and analysis before making any decisions.Flag if You’re Researching or Available for Expertise on an Idea: We hope this database will serve as a useful resource for effective policymaking and research that can help make a positive impact on society. Researchers and policy practitioners can engage with the database by reviewing ideas, filtering them by relevant agency, and adding their names to the "Person Researching or Familiar With" column to collaborate with others. Users can also help keep the database up-to-date by sharing relevant ideas...

]]>
abiolvera https://forum.effectivealtruism.org/posts/cJLsd2TYxv8KCzHvg/announcing-the-aipolicyideas-com-database Fri, 23 Jun 2023 19:11:43 +0000 EA - Announcing the AIPolicyIdeas.com Database by abiolvera abiolvera https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:33 no full 6381
Q2aLn36Cq8HyciLqk EA - Open Board recruitment should be a norm by Jack Lewars Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Board recruitment should be a norm, published by Jack Lewars on June 23, 2023 on The Effective Altruism Forum.Better governance would not necessarily have prevented prominent scandals in EA. However, stronger accountability and governance mechanisms would almost certainly have mitigated them, and possibly even prevented them entirely.Governance standards are generally quite poor in EA. If you audited EA organisations using a checklist of things like whistleblowing protections, managing conflicts of interest, Board function and Board composition, I think most Boards would be 'amber' or 'red' against wider governance norms. This unnecessarily increases the risk of future scandals. Unless this is reformed, we should expect more scandals that hurt people, organisations and the community, and which prevent us achieving the goals of the EA community.One of the easiest fixes here is Board composition. You want a Board that has:enough members that it can continue to function well if ~2 people are unavailable for some reason (therefore 6-8 people, given that some members may be conflicted on some issues)at least some people with relevant expertise, particularly the ability to interpret internal financial reports; to manage, assess and hold accountable the CEO; and to create strong governance processesa mixture of inside and outside views of the organisation (and the EA community) to reduce blind spotsThese things are much less likely to be achieved if the recruitment process for Boards is entirely done via personal networks and direct, unconditional invitations to join the Board. This approach is especially likely to fail tests two and three.Accordingly, I think it's really important that there have been at least three open calls for trustee recruitment this year. Rethink Priorities led the way in January, and One for the World (my organisation) and EVF are conducting open searches at the moment. This is a significant positive step and should become a norm.It can also be really inspiring. In our latest round, we had more than 20 people who met our bar for 'would be a values-aligned, valuable Board member'. Also, some of them were so impressive that we wouldn't have approached them directly, believing that they would be unlikely to accept.Epistemic status: reasonably high. There are well-established governance norms based on more than a century of real-world experience; it seems highly unlikely that EA is exempt from these.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Jack Lewars https://forum.effectivealtruism.org/posts/Q2aLn36Cq8HyciLqk/open-board-recruitment-should-be-a-norm Fri, 23 Jun 2023 13:40:03 +0000 EA - Open Board recruitment should be a norm by Jack Lewars Jack Lewars https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:25 no full 6382
mFGZtPKTjqrfeHHsH EA - How CEA’s communications team is thinking about EA communications at the moment by Shakeel Hashim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How CEA’s communications team is thinking about EA communications at the moment, published by Shakeel Hashim on June 23, 2023 on The Effective Altruism Forum.TL;DR: For now, we're going to be promoting EA as a place for intellectual exploration, incredible research, and real-world impact and innovation.These are my thoughts, but Emma Richter has been closely involved with developing them.This post is intended as a very overdue introduction to CEA’s communications team, our goals, and what we’re currently working on/planning to work on.I started at CEA as head of communications in September 2022. My position was a new one: as I understand it, various EA stakeholders were concerned that EA communications had fallen into a diffusion of responsibility. Though everyone in this ecosystem wanted it to go well, no one explicitly managed it. I was therefore hired with the remit of trying to fix this. Emma Richter joined the team as a contractor in December and became a permanent member of the team in March. We’ve also worked with a variety of external advisors, most notably Mike Levine at TSD Communications.Our team has two main goals. The first is to help look after the EA brand. That means, broadly, that we want the outside world to have an accurate, and positive impression of effective altruism and the value created by this ecosystem. The second, more nebulous goal, is to help the EA ecosystem better use communications to achieve various object-level goals. This means things like “helping to publicise a report on effective giving”, or “advocating for AI safety in the press”. As communications capacity grows across the EA ecosystem, I expect this goal to become less of a priority for us — but for now I think we have expertise that can be used to make a big difference in this way.With that in mind, here’s how we’re thinking about things at the moment.I’ll start with what’s going on in the world. There are a few particularly salient things I’m tracking:On the EA brand:Negative attention on EA has significantly died down.We expect it to flare back up somewhat this autumn, around SBF’s trial and various book releases, though probably not to the level that it was in late 2022.On net, there doesn’t appear to be a hit to the EA brand from FTX (see here for various data). Among those who have heard of both, though, there may have been a hit — and I suspect that group of people would include important subgroups like journalists and politicians.There is uncertainty about what people want EA (the brand, the ecosystem and/or the community) to be.Within CEA, our new executive director might make fairly radical changes (though they may also keep things quite similar).From the job announcement: “One thing to highlight is that we are both open to and enthusiastic about candidates who want to pursue significant changes to CEA. This might include: Spinning off or shutting down programs, or starting new programs; Focusing on specific cause areas, or on promoting general EA principles; Trying to build something more like a mass movement or trying to be more selective and focused; Significant staffing changes; Changing CEA’s name.There is increased interest in cause-specific field building (e.g. see here).In general, there are lots of conversations and uncertainties about what direction to take EA in (“should we frame EA as a community or a philosophical movement?" or "should we devote most of our resources to AI safety right now?")I expect this uncertainty to clear up a little as conversations continue in the next few months (e.g. in things like EA Strategy Fortnight), and CEA getting a new ED might help too. But I don’t expect it to resolve altogether.That said, EA community building (groups, conferences, online discussion spaces) has a strong track record and it seems likel...

]]>
Shakeel Hashim https://forum.effectivealtruism.org/posts/mFGZtPKTjqrfeHHsH/how-cea-s-communications-team-is-thinking-about-ea Fri, 23 Jun 2023 13:23:35 +0000 EA - How CEA’s communications team is thinking about EA communications at the moment by Shakeel Hashim Shakeel Hashim https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:30 no full 6383
aetatCMGqcAPsNbLs EA - Growth and engagement in EA groups: 2022 Groups Census results by BrianTan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Growth and engagement in EA groups: 2022 Groups Census results, published by BrianTan on June 23, 2023 on The Effective Altruism Forum.SummaryIn December 2022, CEA sent out a survey to all groups. 309 EA groups reported that they were active. However, based on our records, we estimate that ~63 active EA groups didn’t fill out the census. Therefore, it’s important to note that the data in this post are not representative of all active groups.Almost half of all active groups (309) are university groups (150). After that, most groups are location-based (53 city groups, 25 city-university hybrid groups, and 40 national groups). There were 40 groups of other types (e.g. workplace).The number of groups has been growing quickly in recent years. The net growth in the number of groups was 19% in 2020–2021 and 31% in 2021–2022. Most of the groups that were founded in recent years are university groups: the number of university groups roughly doubled between 2020 and 2022.We hypothesize that the two main reasons for the fast growth rate in general are CEA’s University Group Accelerator Program (UGAP) and the increased focus on and funding for EA community building in 2021–2022.We’d like to note that we don’t aim to just maximize the number of groups or members — we care about the quality and fidelity of groups, and we’re taking steps to measure and help improve group quality.There are active groups in 56 countries. Most of the currently active groups are in Europe and North America (39% and 30% respectively). However, the number of groups in Asia, South America, and Africa significantly grew in 2022, ranging from 65-86% growth per continent that year.We think the high growth in these continents is partially due to UGAP, EA Virtual Programs, and other community building efforts (e.g. by the Spanish-speaking community).We also asked groups for attendance and engagement data.The data we have shows that the median group size remained constant between 2020 and 2022, although due to changes in the questions we asked between the 2020 and 2022 surveys, we can only meaningfully compare the attendance and engagement data of groups on two measures (total event attendees and engaged EAs).While the total number of attendees per group stayed the same, the number of members who engaged with EA deeply — defined as 100 hours of engagement with EA content and taking some significant action — increased significantly. That is, the mean number of “engaged EAs” per group doubled (from 6 to 12) and the median even tripled (from 2 to 6). We think it’s plausible that this suggests that it has become easier to deeply engage with EA for people around the world (e.g. through accessible fellowships, more conferences, and higher Forum activity).As expected, engagement data across all measures we asked for were skewed: a small share of the groups produced the majority of outcomes in terms of number of events organized, EAG(x) attendees, engaged EAs, and total attendees. The least skewed measures were the number of regular attendees and intro fellowship participants per group. We discuss potential explanations for this in this post.On average, organizers rated their satisfaction with CEA’s support as 7.6/10. This is roughly the same as in 2020 — though it’s worth noting that the number of groups that CEA supports, and thus the number that rated their satisfaction with CEA’s support, has increased by ~50%.Compared to two years ago, there was an increase both in groups that are dissatisfied and groups that are very satisfied with CEA support. We think that one of the reasons may be that CEA’s support has become more targeted at specific groups (e.g. city/national groups in key locations and new uni groups in UGAP).Profession-based and virtual groups were on average less satisfied with support fro...

]]>
BrianTan https://forum.effectivealtruism.org/posts/aetatCMGqcAPsNbLs/growth-and-engagement-in-ea-groups-2022-groups-census Fri, 23 Jun 2023 21:00:25 +0000 EA - Growth and engagement in EA groups: 2022 Groups Census results by BrianTan BrianTan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 35:52 no full 6384
zohs3eYHd8WdhF88M EA - Four claims about the role of effective giving in the EA community by Sjir Hoeijmakers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Four claims about the role of effective giving in the EA community, published by Sjir Hoeijmakers on June 23, 2023 on The Effective Altruism Forum.I’m sharing the below as part of the EA Strategy Fortnight. I think there’s value in discussing what the role of effective giving in the EA community should be, as (1) I expect people have quite different views on this, and (2) I think there are concrete things we should do differently based on our views here (I share some suggestions at the bottom of this post).These claims or similar ones have been made by others in various places (e.g. here, here, and here), but I thought it'd be useful to put them together in one place so people can critique them not only one-by-one but also as a set. This post doesn’t make a well-supported argument for all these claims and suggestions: many are hypotheses on which I’d love to see more data and/or pushback.Full disclosure: I work at Giving What We Can (though these are my personal views).Claim 1: Giving effectively and significantly should be normal in the EA communityMore concretely, I think it would be desirable and feasible for most people who currently self-associate with EA to give at least 10% of their income to high-impact funding opportunities (e.g. by taking the GWWC Pledge) or to be on their way there (e.g. by taking the Trial Pledge).I think this is desirable for three reasons: (1) effective giving is — in absolute terms — an incredibly efficient way for us to convert resources into impact, (2) even for individuals who may have more impact directly through their careers, giving effectively is often highly cost-effective on the margin and is not mutually exclusive with their direct impact (so worth doing!), and (3) there are many positive effects for the EA community as a whole from having effective giving as a norm.I also think this is feasible. There are good reasons for some people to not give at some points in their lives — for instance, if it leaves someone with insufficient resources to live a comfortable life, or if it would interfere strongly with the impact someone could have in their career. However, I expect these situations will be the exception rather than the rule within the current EA community, and even where they do apply there are often ways around them (e.g. exceptions to the Pledge for students and people who are unemployed).Claim 2: Giving effectively and significantly should not be required in the EA communityI think we should positively encourage everyone in the EA community who can give effectively and significantly to do so, and celebrate people when they do — but I don’t think that this should be an (implied) requirement for people in order to “feel at home” in the community, for a couple of reasons:EA is about using one's resources to try to do the most good, and its community should be accessible to people who want to use different types of resources to do this (e.g. money, time, network, expertise).Moreover, we don’t want the EA community to intentionally or unintentionally select only for people who have significant financial resources: we would be missing out on many (if not most) of the people we need to achieve our ambitious goals, including a large part of the global population that isn’t in a position (yet) to give significantly.Claim 3: Giving effectively and significantly should be sufficient to be part of the EA communityI think giving at least 10% to high-impact funding opportunities (or being on the path there) should be "enough" for someone to fully feel part of the EA project and community, regardless of their career. For example, people who give effectively should feel respected and included by other people in the community, feel represented by community leaders, and feel welcome at general EA-themed events. I believe th...

]]>
Sjir Hoeijmakers https://forum.effectivealtruism.org/posts/zohs3eYHd8WdhF88M/four-claims-about-the-role-of-effective-giving-in-the-ea Fri, 23 Jun 2023 07:30:38 +0000 EA - Four claims about the role of effective giving in the EA community by Sjir Hoeijmakers Sjir Hoeijmakers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:05 no full 6385
nnTQaLpBfy2znG5vm EA - The flow of funding in EA movement building by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The flow of funding in EA movement building, published by Vaidehi Agarwalla on June 23, 2023 on The Effective Altruism Forum.This post is part of EA Strategy Fortnight. You can see other Strategy Fortnight posts here.I’ve been reflecting on the role of funding in the EA movement & community over time. Specifically I wanted to improve common knowledge around funding flows in the EA movement building space. It seems that many people may not be aware of it.Funders (and the main organizations they have supported) have shaped the EA community in many ways - the rate & speed at which EA has grown (example), the people that are attracted and given access to opportunities, and the culture and norms the community embodies and the overall ecosystem.I share some preliminary results from research I’ve conducted looking at the historical flow of data to movement building sources. I wanted to share what I have so far for the strategy fortnight to get conversation started. I think there is enough information here to understand the general pattern of funding flows. If you want to play around with the data, here is my (raw, messy) spreadsheet.Key observationsOverall pictureTotal funding 2012-2023 by known sourcesAccording to known funding sources, approximately $245M have been granted to EA movement building organizations and projects since 2012. I’d estimate the real number is something like $250-280M. The Open Philanthropy EA Community Growth (Longtermism) team (OP LT) has directed ~64% ($159M) of known movement building funding (incl. ~5% or $12M to the EAIF) since 2016. Note that OP launched an EACG program for Global Health and Wellbeing in 2022, which started making grants in 2023. Their budget is significantly smaller (currently ~$10M per year) and they currently prioritize effective giving organizations.The unlabeled dark blue segment is “other donors”Funders of EA Groups from 2015-2022See discussion below for description of the "CEA - imputed" category. Note that I’ve primarily estimated paid organizer time, not general groups expenses.EA groups are an important movement building project. The Centre for Effective Altruism (CEA) has had an outsized influence on EA groups for much of the history of the EA movement. Until May 2021, CEA was the primary funder of part- and full-time work on EA groups. In May 2021, CEA narrowed its scope to certain university & city/national groups, and the EA Infrastructure Fund (EAIF) started making grants to non-target groups. In 2022, OP LT took over most university groups funding from both CEA (in April) and EAIF (in August). Until 2021 most of CEA’s funding has come from OP LT, so its EA groups funding can be seen as an OP LT regrant.Breakdown of funding by source and time (known sources)2012-2016Before 2016, there was very limited funding available for meta projects and almost no support from institutional funders. Most organizations active during this period were funded by individual earning-to-givers and major donors or volunteer-run. Here’s a view of funding from 2012-2016:No donations from Jaan Tallinn during this period were via SFF as it didn’t exist yet. There is a $10K donation from OP to a UC Berkeley group in 2015 that is not visible in the main chart. “Other donors” includes mostly individual donors and some small foundationsQuick details on active funders during this period:Individual Donors: A number of (U)HNW & earning-to-give donors, many of whom are still active today, such as Jaan Tallinn, Luke Ding, Matt Wage and Jeff Kaufman & Julia Wise. I expect I’m missing somewhere between ~$100,000 to $1,000,000 of donations from individuals in this chart per year from 2012 to 2016.EA Giving Group: In 2013, Nick Beckstead and a large anonymous donor started a fund (the EA Giving Group) to which multiple individual ...

]]>
Vaidehi Agarwalla https://forum.effectivealtruism.org/posts/nnTQaLpBfy2znG5vm/the-flow-of-funding-in-ea-movement-building Fri, 23 Jun 2023 05:49:01 +0000 EA - The flow of funding in EA movement building by Vaidehi Agarwalla Vaidehi Agarwalla https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:22 no full 6386
GCaRhu84NuCdBiRz8 EA - EA’s success no one cares about by Jakub Stencel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA’s success no one cares about, published by Jakub Stencel on June 24, 2023 on The Effective Altruism Forum.StatusWhile this is part of EA Strategy Fortnight, my intention is to focus more on what effective altruism did well, rather than what it should do. At the same time, my hope would be that the post provides the community with at least some valuable context to the discussions about the path forward.This post will be very subjective and a lot less thought-through than what I am usually comfortable sharing, although built on ~13 years of experience in the field. Some details may be off, for example by memory distortion or indirect testimonies. Nevertheless, it’s truthful to my internal models – when I walk and talk to my dog about this kind of stuff, you can expect my mind to go in the same direction as this post.ContextRecently, there has been a lot of attention directed at effective altruism. Some was external, but, from my perspective, most of it came from within the movement. My interpretation was that at least a portion of it was built on feelings of anxiety, doubt, and maybe some anger or fear. Of course, a lot of concerns seemed to me legitimized by what was happening or what we were discovering.In some way, I was worried about the community I identify as part of, but at the same time, there was this feeling of appreciation that we can go together through a crisis. It’s a lesson for a young movement, and experience is invaluable. Just like it’s better to learn hard lessons about life as a teenager than an adult, of course ideally with not much harm involved.The energy spent on inward focus felt encouraging, even though I disagreed with a chunk of the opinions. After all, some of the values of effective altruism I’m the most optimistic about are openness to criticism, intellectual humility, and truth-seeking. But the more external and internal takes I was reading, the more something seemed off. Something was missing.It felt one-sided. There was almost no mention of successes and wins – some appreciation of what this very young and weird movement managed to achieve in such a short period of time. Maybe I shouldn’t expect this in adversarial pieces about EA, and maybe it was implied when people were making criticism internally, but it still didn’t feel fully right to me.It felt like we all take effective altruism for granted. There was not much gratitude in the air.Maybe one can argue that EA hasn’t done much. While I have my strong intuitions on the counterfactual impact of EA in many areas, in the end I don’t feel fully qualified here, so I would prefer to defer. Yet, I’m confident that there is at least one success we should celebrate, and it’s very much absent from the discourse – making historical progress for animals.Animal advocate’s lens on effective altruismThis is my take on the short path of effective altruism’s impact on animals. Please note that I came from the part of animal advocacy that is closely aligned with effective altruism, so I’ll be biased, and it’s good to expect that reasonable people will disagree with me in at least some parts. Additionally, Open Philanthropy, which is crucial here, is the main funder of my organization. But if bias is present in this summary, I’m very confident it’s rather due to the alignment with the core premises of effective altruism rather than a conflict of interest.FundingWhile the animal advocacy movement has never been big, it has always had very active and dedicated activists. People have always been willing to sacrifice a lot to make a difference for animals. Breaking in or getting hired at farms to document the sickening reality of animal suffering or rescue animals, being abused by police, working for years for free or minimal pay, and using any personal funds available to produce necess...

]]>
Jakub Stencel https://forum.effectivealtruism.org/posts/GCaRhu84NuCdBiRz8/ea-s-success-no-one-cares-about Sat, 24 Jun 2023 15:43:03 +0000 EA - EA’s success no one cares about by Jakub Stencel Jakub Stencel https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:56 no full 6388
CEtKAP5Gr7QrTXHRW EA - On focusing resources more on particular fields vs. EA per se - considerations and takes by Ardenlk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On focusing resources more on particular fields vs. EA per se - considerations and takes, published by Ardenlk on June 24, 2023 on The Effective Altruism Forum.Epistemic status: This post is an edited version of an informal memo I wrote several months ago. I adapted it for the forum at the prompting of EA strategy fortnight. At the time of writing I conceived of its value as mostly in laying out considerations / trying to structure a conversation that felt a bit messy to me at the time, though I do give some of my personal takes too.I went back and forth a decent amount about whether to post this - I'm not sure about a lot of it. But some people I showed it to thought it would be good to post, and it feels like it's in the spirit of EA strategy fortnight to have a lower bar for posting, so I'm going for it.Overall takeSome people argue that the effective altruism community should focus more of its resources on building cause-specific fields (such as AI safety, biosecurity, global health, and farmed animal welfare), and less on effective altruism community building per se. I take the latter to mean something like: community building around the basic ideas/principles, and which invests in particular causes always with a more tentative attitude of "we're doing this only insofar as/while we're convinced this is actually the way to do the most good." (I'll call this "EA per se" for the rest of the post.)I think there are reasons for some shift in this direction. But I also have some resistance to some of the arguments I think people have for it.My guess is thatAllocating some resources from "EA per se" to field-specific development will be an overall good thing, butMy best guess (not confident) is that a modest reallocation is warranted, andI worry some reasons for reallocation are overrated.In this post I'llArticulate the reasons I think people have for favouring shifting resources in this way (just below), and give my takes on them (this will doubtless miss some reasons).Explain some reasons in favour of continuing (substantial) support for EA per se.Reasons I think people might have for a shift away from EA per se, and my quick takes on them1. The reason: The EA brand is (maybe) heavily damaged post FTX — making building EA per se less tractable and less valuable because getting involved in EA per se now has bigger costs.My take: I think how strong this is basically depends on how people perceive EA now post-FTX, and I'm not convinced that the public feels as badly about it as some other people seem to think. I think it's hard to infer how people think about EA just by looking at headlines or Twitter coverage about it over the course of a few months. My impression is that lots of people are still learning about EA and finding it intuitively appealing, and I think it's unclear how much this has changed on net post-FTX.Also, I think EA per se has a lot to contribute to the conversation about AI risk — and was talking about it before AI concern became mainstream — so it's not clear it makes sense to pull back from the label and community now.I'd want someone to look at and aggregate systematic measures like subscribers to blogs, advising applications at 80,000 Hours, applications to EA globals, people interested in joining local EA groups, etc. (As far as I know as of quickly revising this in June, these systematic measures are actually going fairly strong, but I have not really tried to assess this. These survey responses seem like a mild positive update on public perceptions.)Overall, I think this is probably some reason in favour of a shift but not a strong one.2. The reason: maybe building EA per se is dangerous because it attracts/boosts actors like SBF. (See: Holden's last bullet here)My take: My guess is that this is a weak-ish reason – though I...

]]>
Ardenlk https://forum.effectivealtruism.org/posts/CEtKAP5Gr7QrTXHRW/on-focusing-resources-more-on-particular-fields-vs-ea-per-se Sat, 24 Jun 2023 15:49:20 +0000 EA - On focusing resources more on particular fields vs. EA per se - considerations and takes by Ardenlk Ardenlk https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:59 no full 6389
iCDcJdqqmBa9QrEHv EA - FAQ on the relationship between 80,000 Hours and the effective altruism community by Ardenlk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FAQ on the relationship between 80,000 Hours and the effective altruism community, published by Ardenlk on June 24, 2023 on The Effective Altruism Forum.As part of 'strategy fortnight' (and in part inspired by this post) I decided to write this short post clarifying the relationship, as I see it, between 80,000 Hours and the EA community. I chose these questions because I thought there might be some people who care about the answers and who would want to know what (someone on the leadership team at) 80,000 Hours would say.Is 80,000 Hours's mission to build the EA community?No — our aim is to help people have careers with a lot of social impact. If the EA community didn't exist, we could still pursue our mission.However, we count ourselves as part of the EA community in part because we think it's pretty great. It has flaws, and we don't blanket recommend getting involved to our readers (a recommendation we feel better about making widely is to get involved in some kind of community that shares your aims). But we think the EA community does do a lot to help people (including us) figure out how to have a big positive impact, think through options carefully, and work together to make projects happen.For that reason, we do say we think learning about and getting involved in the EA community could be a great idea for many of our readers.And we think building the EA community can be a way to have a high-impact career, so we list articles focused on it high up on our problem profiles page and among our career reviews.Both of these are ways in which we do contribute substantially to building the effective altruism community.We think this is one of the ways we've had a positive impact over the years, so we do continue to put energy into this route to value (more on this below). But doing so is ultimately about helping the individuals we are writing for to increase their ability to have a positive impact by getting involved, rather than to benefit the community per se.In other words, helping grow the ea community is part of our strategy for pursuing our mission of helping people have high-impact careers.Does 80,000 Hours seek to provide "career advice for effective altruists"?Somewhat, but not mostly, and it would feel misleading to put it that way.80,000 Hours focuses on helping a group much larger than the (current) EA community have higher impact careers. For example, we estimate the size of the group we are trying to reach with the website to be ~100k people — which is around 10x larger than the EA community. (For context, we currently get in the range of 4M visitors to the website a year, and have 300k newsletter subscribers.)Some of the people in our audience are part of the EA community already, but they're a minority.One reason we focus so broadly is that we are trying to optimise the marginal counterfactual impact of our efforts. This often translates into trying to focus on people who aren't already heavily involved and so don't have other EA resources to draw on. For someone who hasn't heard of EA or who has heard of it but doesn't know much about it, there is much lower hanging fruit for counterfactually helping them improve the impact of their careers. For example, we can introduce them to well-known-within-EA ideas like the ITN framework and cause selection, or particularly pressing issues like AI safety and biosecurity, as well as the EA community itself. Once someone is involved in EA, they are also more likely and able to take advantage of resources that are less optimised for newer people.This is not an absolute rule, and it varies by programme – for example, the website tends to focus more (though not exclusively) on 'introductory' materials than the podcast which aims to go more in-depth, and one-on-one advising tries to tailor their discussio...

]]>
Ardenlk https://forum.effectivealtruism.org/posts/iCDcJdqqmBa9QrEHv/faq-on-the-relationship-between-80-000-hours-and-the Sat, 24 Jun 2023 16:34:14 +0000 EA - FAQ on the relationship between 80,000 Hours and the effective altruism community by Ardenlk Ardenlk https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:40 no full 6390
K7F8pF38SKnrxDsza EA - Crisis Boot Camp: lessons learned and implications for EA by Nicole Ross Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Crisis Boot Camp: lessons learned and implications for EA, published by Nicole Ross on June 24, 2023 on The Effective Altruism Forum.Over the last 7+ months at work, I've needed to handle or support several crises (I'm on the US EV board and, in normal times, head of the community health and special projects team). It's been a crisis-handling boot camp, so I want to share my lessons learned.I expect to learn more in the coming months, and it's plausible that longer-term ramifications could change my lessons learned, but I wanted to share my reflections at this point.My reason for writing this: I generally think that at least some of us (maybe many of us) may go through many more crises and that the world has a decent chance of getting even crazier due to AI. I want us to learn from crises and make updates to be better prepared for next time. I'm worried about people returning to normal without making relevant updates on many levels. Hopefully, some of my lessons learned will contribute to people making updates and folks being more prepared next time!Handling yourself in a crisis:Expand your thinking1. Hold multiple hypotheses at onceGenerally, people struggle to hold more than one or two hypotheses simultaneously, and this struggle seems even stronger during a crisis.The world is complicated, and being confident about what's true is tough. When people make plans, though, they often focus only on the hypothesis that they think is most likely, or at best, their top two hypotheses.For a hypothetical example, imagine the following scenario:You're working on getting life-sustaining and valuable resources to your allies in a place with a lot of organized crime. Pretty frequently, your supplies are stolen. You suspect Person X is behind it. One day he disappears, and a lot of supplies are missing. You need to move to the next location to get the supplies in the right hands. What should you do?Create multiple hypotheses! You are assuming X is behind the theft. In that world, you might want to move on without him and think through things like "Does he know where we're going to be next? Is he going to steal more? How should we update our security measures?".Other hypotheses you should consider:Maybe he was kidnapped, in which case it would be pretty shitty to leave him.Perhaps he was working with someone else. Are they on your team too?It could be a coincidence, and he didn't show up for another reason (e.g., incompetence, sickness).Each of these hypotheses has different likelihoods; sometimes, hypotheses are not mutually exclusive. E.g., X might be behind the theft and working with others on your team; X may be incompetent and indirectly connected to the theft.In my experience, people focus on only 1-2 hypotheses and hold those too tightly, even if they give lip service to multiple hypotheses. You should be looking for evidence for and against many different hypotheses and endeavor to track updates to many of them. Two particularly salient implications of this are:If you don't hold multiple hypotheses at once, you might jump to a conclusion, which makes it less likely you make the best decision.If you don't consider what you'd do in different worlds, you miss cheap and/or important ways to mitigate harm or realize the upside.How to do it: I've found it helpful to literally write down several hypotheses. Then I take a step back and ask myself:Would I be surprised if none of these are true?What worlds are these hypotheses neglecting to consider?What's missing?Usually, there are more relevant hypotheses to write down and track.2. Think beyond black-and-white binariesIt's easy to default to thinking in black-and-white binaries when the array of paths available to you is way more expansive.For example, imagine that you have to navigate a disagreement bet...

]]>
Nicole_Ross https://forum.effectivealtruism.org/posts/K7F8pF38SKnrxDsza/crisis-boot-camp-lessons-learned-and-implications-for-ea Sat, 24 Jun 2023 11:18:41 +0000 EA - Crisis Boot Camp: lessons learned and implications for EA by Nicole Ross Nicole_Ross https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 24:00 no full 6391
KE4Ga3zHQsczooQi7 EA - Correctly Calibrated Trust by ChanaMessinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Correctly Calibrated Trust, published by ChanaMessinger on June 24, 2023 on The Effective Altruism Forum.This post comes from finding out that Asya Bergal was having thoughts about this and was maybe going to write a post, thoughts I was having along similar lines, and a decision to combine energy and use the strategy fortnight as an excuse to get something out the door. A lot of this is written out of notes I took from a call with her, so she get credit for a lot of the concrete examples and the impetus for writing a post shaped like this.Interested in whether this resonates with people's experience!Short version:[Just read the bold to get a really short version]There’s a lot of “social sense of trust” in EA, in my experience. There’s a feeling that people, organizations and projects are broadly good and reasonable (often true!) that’s based on a combination of general vibes, EA branding and a few other specific signals of approval, as well as an absence of negative signals. I think that it’s likely common to overweight those signals of approval and the absence of disapproval.Especially post-FTX, I’d like us to be well calibrated on what the vague intuition we download from the social web is telling us, and place trust wisely.[“Trust” here is a fuzzy and under-defined thing that I’m not going to nail down - I mean here something like a general sense that things are fine and going well]Things like getting funding, being highly upvoted on the forum, being on podcasts, being high status and being EA-branded are fuzzy and often poor proxies for trustworthiness and of relevant people’s views on the people, projects and organizations in question.Negative opinions (anywhere from “that person not so great” to “that organization potentially quite sketch, but I don't have any details”) are not necessarily that likely to find their way to any given person for a bunch of reasons, and we don’t have great solutions to collecting and acting on character evidence that doesn't come along with specific bad actions. It’s easy to overestimate what you would know if there’s a bad thing to know.If it’s decision relevant or otherwise important to know how much to trust a person or organization, I think it’s a mistake to rely heavily on the above indicators, or on the “general feeling” in EA. Instead, get data if you can, and ask relevant people their actual thoughts - you might find them surprisingly out of step with what the vibe would indicate.I’m pretty unsure what we can or should do as a community about this, but I have a few thoughts at the bottom, and having a post about it as something to point to might help.Longer version:I think you'll get plenty out of this if you read the headings and read more under each heading if something piques your curiosityPart 1: What fuzzy proxies are people using and why would they be systematically overweighted?(I don’t know how common these mistakes are, or that they apply to you, the specific reader of the post. I expect them to bite harder if you’re newer or less connected, but I also expect that it’s easy to be somewhat biased in the same directions even if you have a lot of context. I’m hoping this serves as contextualization for the former and a reminder / nudge for the latter.)Getting funding from OP and LTFFSeems easy to expect that if someone got funding from Open Phil or the Long Term Future Fund, that’s a reasonable signal about the value of their work or the competence or trustworthiness or other virtues of the person running it. It obviously is Bayesian evidence, but I expect this to be extremely noisy.These organisations engage in hits-based philanthropy - as I understand it, they don’t expect most of the grants they make to be especially valuable (but the amount and way this is true varies by funder - Linch describes...

]]>
ChanaMessinger https://forum.effectivealtruism.org/posts/KE4Ga3zHQsczooQi7/correctly-calibrated-trust Sat, 24 Jun 2023 01:54:00 +0000 EA - Correctly Calibrated Trust by ChanaMessinger ChanaMessinger https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:39 no full 6392
P55P4YJoncfQmZ2RR EA - Downsides of Small Organizations in EA by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Downsides of Small Organizations in EA, published by Ozzie Gooen on June 25, 2023 on The Effective Altruism Forum.Epistemic Status: This is a subject I've been casually thinking about for a while, but I wrote this document fairly quickly. Take this with a big grain of salt. This is written in a personal capacity.A lot of EA, especially in meta and longtermism, is made up of small organizations and independent researchers. This provides some benefits, but I think the downsides are substantial and often unappreciated.More clearly:EA funding mostly comes from a very few funders, but it goes to a mass of small organizations. My impression is that this is an unusual combination.I think that there are a lot of important downsides to having things split up into a bunch of small nonprofits.I'm suspicious of many of the reasons for having small organizations that I've come across. There might well still be good reasons I haven't heard or that haven't been suggested.I suggest some potential changes we could make to try to get some of the best incremental tradeoffs.DownsidesLow Management FlexibilityIf you want to quickly create a new project in a sizeable organization, you can pull people from existing teams. This requires upper management but is normal for said management. On the other hand, if you instead have a bunch of tiny independent organizations, your options are much more limited. Managers of tiny organizations can be near-impossible to move around because many of them own key funding relationships. Pulling together employees from different organizations is a pain, as no one has the authority to directly do this. The best you can do is slowly encourage people to join said new project.Moving people around is crucial for startups and tech firms. The first version Amazon Prime was made in under two months, in large part because Jeff Bezos was able to rapidly deploy the right people to it. At other tech companies, some amounts of regularly rotating team members is considered healthy. Strong software engineers get to work on many projects and people.Small nonprofit teams with locked-in mission statements are the opposite of this. This rigidness could be good for donors with little trust, but it comes at a substantial cost of flexibility.I’ve seen several projects in EA come up that could use rapid labor. Funding rounds seem particularly labor-intensive. It often seems to me like it should be possible to pull trusted people from existing organizations for a few weeks or months, but doing so is awkward because they’re formally part of separate organizations with specific mission statements and funding agreements.A major thing that managers at sizeable organizations do is size up requests for labor changes. The really good requests (with good managers) get quickly moved forward, and the bad ones are shot down. This is hard to do without clear, able, and available authorities.Low Employee FlexibilityEmployees that join small organizations with narrow missions can be assured that they will work on those missions. But if they ever want to try working with a different team or project, even just for a few months, the only option is often that they formally quit and formally apply for new roles. This is often a massive pain that could take 2-6+ months. The employee might be nervous about resentment or retaliation from other team members.My org, QURI, is doing fairly niche work and has a small team (2 people now!). I’d be excited to have some team members try rotating on and off for a few months here and there. It’s very valuable for us to have people with certain long-term skills “available” for long periods of time, but they don’t need to be active for long portions. I think this project makes sense to keep small, but I also feel awkward about asking others to joi...

]]>
Ozzie Gooen https://forum.effectivealtruism.org/posts/P55P4YJoncfQmZ2RR/downsides-of-small-organizations-in-ea Sun, 25 Jun 2023 00:03:16 +0000 EA - Downsides of Small Organizations in EA by Ozzie Gooen Ozzie Gooen https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:26 no full 6398
sFoqCw6BnZmJNxFda EA - An update on the Spanish-speaking EA community by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An update on the Spanish-speaking EA community, published by Jaime Sevilla on June 26, 2023 on The Effective Altruism Forum.CoI warning: I am friends with many (most really!) of the people who I talk about. I direct Riesgos Catastróficos Globales, one of the organisations I talk about. I also still feel anger around the unfair situation that my partner, Sandra Malagón, had to experience. I do not speak in representation of the community or its members, but in personal capacity as a fairly involved member. I haven’t been following the community developments very closely since January, so it is likely I have missed stuff!Last year I wrote a post celebrating the Spanish-speaking EA community and its achievements. A lot has happened since then, so I am writing here to organise my thoughts and provide some transparency on the state of the community.My perception of the Spanish-speaking community has changed a lot since. Back then it felt like an integrated whole, with direction and momentum. Now it feels more like an ecosystem of independent projects, some with a lot of momentum and others languishing.A major motivator of this is that we no longer have appointed coordinators who are steering the community. Sandra Malagón, who worked full-time as community coordinator, stepped down following a series of incidents where she felt pressured to work together with a community member she didn’t feel comfortable with . She is now working instead on virtual programs for Spanish Speakers. The other part-time coordinator, Laura González, stepped down to focus on other projects.The effects of this can be seen in the activity stats of our shared workspace. Sandra left her position in early February, which matches the stop of activity growth:This is hard to separate from other causes like the FTX collapse and sex scandals in the community, so take it with a pinch of salt.Incidentally, if you are part of the Spanish-speaking community, I would like to apologise for the lack of communication about the status of the coordination team. This situation was and still is very hard to navigate.ProjectsThe lack of direction does not mean that nothing has happened over the last few months. Here are a few notable projects and developments that have happened since I last posted.The EA México fellowship programThe fellowship program received +80 participants from all around the globe who stayed over different periods between November 2022 and January 2023.This program was, in my opinion, competently run. The organisation team, including Sandra Malagón, Hugo Ikta and Miguel Alvarado, showcased incredible attention to detail and problem solving skills. The fellowship was welcoming and diverse, and it is impressive that no major incidents resulted, given the scale of the program.The impact of the program was, however, curtailed by a series of factors. First, they had to deal with the FTX scandal and sex scandals in the international EA community. Inhabiting an EA co-living space at that time was not fun. Second, Sandra renounced her position as community coordinator at the end of the fellowship, and the EA Mexico coordinators had distanced themselves from community building. The original vision of the fellowship, as I understood it, was to act as a catalyst for a community in CDMX, which could not be pursued given that nobody was in a position to do so.This doesn’t mean that the fellowship was in vain. As a participant, I got a lot of value from connecting to other professionals. This resulted in a lead for a hire for Epoch, and I feel the time I spent in CDMX was very productive. My informal impression from talking to others is similar. I’ve been told a retrospective will be released soon-ish, look out for that!EAGx LatAmThe first ever major Effective Altruism conference in Latin Ameri...

]]>
Jaime Sevilla https://forum.effectivealtruism.org/posts/sFoqCw6BnZmJNxFda/an-update-on-the-spanish-speaking-ea-community Mon, 26 Jun 2023 19:11:05 +0000 EA - An update on the Spanish-speaking EA community by Jaime Sevilla Jaime Sevilla https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:06 no full 6405
DdSszj5NXk45MhQoq EA - Decision-making and decentralisation in EA by William MacAskill Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Decision-making and decentralisation in EA, published by William MacAskill on June 26, 2023 on The Effective Altruism Forum.This post is a slightly belated contribution to the Strategy Fortnight. It represents my personal takes only; I’m not speaking on behalf of any organisation I’m involved with. For some context on how I’m now thinking about talking in public, I’ve made a shortform post here. Thanks to the many people who provided comments on a draft of this post.Intro and OverviewHow does decision-making in EA work? How should it work? In particular: to what extent is decision-making in EA centralised, and to what extent should it be centralised?These are the questions I’m going to address in this post. In what follows, I’ll use “EA” to refer to the actual set of people, practices and institutions in the EA movement, rather than EA as an idea.My broad view is that EA as a whole is currently in the worst of both worlds with respect to centralisation. We get the downsides of appearing (to some) like a single entity without the benefits of tight coordination and clear decision-making structures that centralised entities have.It’s hard to know whether the right response to this is to become more centralised or less. In this post, I’m mainly hoping just to start a discussion of this issue, as it’s one that impacts a wide number of decisions in EA. At a high level, though, I currently think that the balance of considerations tends to push in favour of decentralisation relative to where we are now.But centralisation isn’t a single spectrum, and we can break it down into sub-components. I’ll talk about this in more depth later in the post, but here are some ways in which I think EA should become more decentralised:Perception: At the very least, wider perception should reflect reality on how (de)centralised EA is. That means:Core organisations and people should communicate clearly (and repeatedly) about their roles and what they do and do not take ownership for. (I agree with Joey Savoie’s post, which he wrote independently of this one.)We should, insofar as we can, cultivate a diversity of EA-associated public figures.[Maybe] The EA Forum could be renamed. (Note that many decisions relating to CEA will wait until it has a new executive director).[Maybe] CEA could be renamed. (This is suggested by Kaleem here.)Funding: It’s hard to fix, but it would be great to have a greater diversity of funding sources. That means:Recruiting more large donors.Some significant donor or donors start a regranters program.More people pursue earning to give, or donate more (though I expect this “diversity of funding” consideration to have already been baked-in to most people’s decision-making on this). Luke Freeman has a moving essay about the continued need for funding here.Decision-making:Some projects that are currently housed within EV could spin out and become their own legal entities. The various different projects within EV have each been thinking through whether it makes sense for them to spin out. I expect around half of the projects will ultimately spin out over the coming year or two, which seems positive from my perspective.[Maybe] CEA could partly dissolve into sub-projects.Culture:We could try to go further to emphasise that there are many conclusions that one could come to on the grounds of EA values and principles, and celebrate cases where people pursue heterodox paths (as long as their actions are clearly non-harmful).Here are some ways in which I think EA could, ideally, become more centralised (though these ideas crucially depend on someone taking them on and making them happen):Information flow:Someone could create a guide to what EA is, in practice: all the different projects, and the roles they fill, and how they relate to one another.Someone c...

]]>
William_MacAskill https://forum.effectivealtruism.org/posts/DdSszj5NXk45MhQoq/decision-making-and-decentralisation-in-ea Mon, 26 Jun 2023 11:53:51 +0000 EA - Decision-making and decentralisation in EA by William MacAskill William_MacAskill https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 49:00 no full 6406
dAuaHKnH6CsaH8ecg EA - Map of maps of interesting fields by Max Görlitz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Map of maps of interesting fields, published by Max Görlitz on June 26, 2023 on The Effective Altruism Forum.I love seeing cool visualizations (maps) of interesting intellectual fields. I’m also a big fan of making lists!Accordingly, I compiled a bunch of maps of fields that I thought were interesting. Please comment if you know of more such maps, and I’ll include them.Scott Alexander’s map of Effective Altruism (2020)mariekedev’s mindmap of EA organisations (2023)Hamish Doodles’ aisafety.world (2023)On a side note, I would be very keen for someone to create a similar map of relevant organizations in the biosecurity & pandemic preparedness space. I plan to post a minimum viable product (bullet point list) soon.James Lin’s map of biosecurity interventions (2022)Scott Alexander’s map of the rationality community (2014)Dan Elton’s map of progress studies (2021)Ada Nguyen’s map of the Longevity Biotech Landscape (2023)Nadia Asparouhova’s map of climate tribes (2022)Samuel Arbesman’s Catalog of New Types of Research Organizations (2023)This one is more of a long list, but I thought it was very interesting nonetheless!Honorable mentionsThese are other mapping efforts that met my "I'm curious about it" but not my "This is super interesting" bar. Some of them also seem outdated.xkcd’s map of online communities (2010)Julia Galef’s map of the Bay Area memespace (2013)Joe Lightfoot’s The Liminal Web: Mapping An Emergent Subculture Of Sensemakers, Meta-Theorists & Systems Poets (2021 & updated in 2023)Shoutout to Rival Voices, who created a similar-ish collection of maps (2023), and Nadia Asparouhova, who wrote a more meta-level post about Mapping digital worlds (2023).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Max Görlitz https://forum.effectivealtruism.org/posts/dAuaHKnH6CsaH8ecg/map-of-maps-of-interesting-fields Mon, 26 Jun 2023 09:30:22 +0000 EA - Map of maps of interesting fields by Max Görlitz Max Görlitz https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:06 no full 6407
LxBc5cgwZRhQEuPkA EA - Presentación Introductoria al Altruismo Eficaz by davidfriva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Presentación Introductoria al Altruismo Eficaz, published by davidfriva on June 26, 2023 on The Effective Altruism Forum.TL;DR: Spanish-Speaking Introduction to Effective Altruism covering key concepts like EA itself, 80,000 Hours, Longtermism, and the ITN Framework.Message to the English-Speaking Community (Mensaje para la comunidad angloparlante):Hey everyone!I'm David, a 21-year-old Computer Science student at the University of Buenos Aires (Argentina). I recently delivered an introductory talk on Effective Altruism (EA), drawing inspiration from Ajeya Cotra's style and targeting young adults.During the talk, I covered various important concepts such as Effective Altruism itself, the idea of 80,000 hours, Longtermism, and the ITN Framework (translated into Spanish), after sharing my personal journey of discovering these concepts and why they hold significance for me.As part of my ongoing efforts to address the lack of Spanish content on the EA Forum, I am sharing a link to the talk and the accompanying transcript in the form of an article.Spanish, being the second most widely spoken language in the world and extensively used on the internet, deserves greater representation within the EA community. My hope is that this initiative will help bridge the gap and make EA concepts more accessible to Spanish speakers.I.Hola, soy David, tengo 21 años y estudio Computación en la Universidad de Buenos Aires. Hoy, quiero hablarles sobre un concepto que transformó radicalmente mi vida: el Altruismo Eficaz.El Altruismo Eficaz es, básicamente, un proyecto que tiene como objetivo encontrar las mejores formas de ayudar a los demás y ponerlas en práctica.Es tanto un campo de investigación que busca identificar los problemas más urgentes del mundo y las mejores soluciones para ellos, como una comunidad práctica que busca utilizar esos hallazgos para hacer el bien.Pero. para que entiendan por qué es tan importante para mí, y poder profundizar en esto, necesito antes contarles un poco de mi historia:Bueno, era marzo de 2020, cuando habiendo cumplido 18 años, termino la secundaria. El mismo mes me quedo sin hogar justo antes de empezar la universidad, cuando mi padre me echa de la habitación que compartíamos en una vivienda colectiva. Terminé siendo hospedado en la casa de un amigo y comencé a buscar empleo.Eran tiempos difíciles, de pandemia, la actividad económica había quedado paralizada.Por suerte, mala y buena, me contratan como empleado de limpieza en un HospitalEn mi tiempo trabajando ahí, pude ser testigo de lo colapsado que se encontraba el sistema sanitario. Veía a personas ansiosas esperando en la guardia, pacientes sufriendo de enfermedades graves, y los trabajadores de la salud que se encargaban de tratarlos, completamente exhaustos. Con la emergencia sanitaria sumándose al caos, era un ambiente bastante estresante, incluso aterrador.En ese ambiente yo trabajaba.Y a la vuelta, iba a un comedor comunitario, que maneja una iglesia en el barrio de Constitución, para pedir comida. Es ahí, donde me doy cuenta de que el hambre, frío, y desesperación, que yo sentía, era cotidiano para una parte significativa de nuestra sociedad.Durante esta época, hubo noches en las que simplemente lloraba, sintiéndome completamente impotente. No entendía cómo el mundo podía llegar a ser tan injusto, y la gente capaz de ayudar, con recursos de sobra, tan indiferente a la tragedia de otros.Eventualmente llego a una posición cómoda, habiendo conseguido empleo como programador podía trabajar desde el confort de un departamento en el barrio más caro de Buenos Aires, alejado de los comedores comunitarios y los hospitales. Viviendo en una burbuja, lentamente, me fuí olvidando de la gente que encontraba en esos lugares, de los pobres y enfermos, los más desfavorecidos.Altruismo Efica...

]]>
davidfriva https://forum.effectivealtruism.org/posts/LxBc5cgwZRhQEuPkA/presentacion-introductoria-al-altruismo-eficaz Mon, 26 Jun 2023 03:07:46 +0000 EA - Presentación Introductoria al Altruismo Eficaz by davidfriva davidfriva https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:39 no full 6408
AdouuTH7esiDQPExz EA - Announcing CE’s new Research Training Program - Apply Now! by KarolinaSarek Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing CE’s new Research Training Program - Apply Now!, published by KarolinaSarek on June 27, 2023 on The Effective Altruism Forum.TL;DR: We are excited to announce our Research Training Program. This online program is designed to equip participants with the tools and skills needed to identify, compare, and recommend the most effective charities and interventions. It is a full-time, fully cost-covered program that will run online for 11 weeks.Apply here!Deadline for application: July 17, 2023The program dates are: October 2 - December 17, 2023So far, Charity Entrepreneurship has launched and run two successful training programs: a Charity Incubation Program and a Foundation Program. Now we are piloting a third - a Research Training Program, which will tackle a different problem.The Problem:People: Many individuals are eager to enter research careers, level up their current knowledge and skills from junior to senior, or simply make their existing skills more applicable to work within EA frameworks/organizations. At the same time, research organizations have trouble filling a senior-level researcher talent gap. There is a scarcity of specific training opportunities for the niche skills required, such as intervention prioritization and cost-effectiveness analyses, which are hard to learn through traditional avenues.Ideas: A lack of capacity for exhaustive investigation means there is a multitude of potentially impactful intervention ideas that remain unexplored. There may be great ideas being missed, as with limited time, we will only get to the most obvious solutions that other people are likely to have thought of as well.Evaluation: Unlike the for-profit sector, the nonprofit sector lacks clear metrics for assessing an organization's actual impact. External evaluations can help nonprofits evaluate and reorganize their own effectiveness and also allow funders to choose the highest impact opportunities available to them- potentially unlocking more funding (sometimes limited by lack of public external evaluation). There are some great organizations that carry out evaluations (e.g., GiveWell), but they are constrained by capacity and have limited scope; this results in several potentially worthwhile organizations remaining unassessed.Who Is This Program For?Motivated researchers who want to produce trusted research outputs to improve the prioritization and allocation decisions of effectiveness-minded organizationsEarly career individuals who are seeking to build their research toolkits and gain practical experience through real projectsExisting researchers in the broader Global Health and Well-being communities (global health, animal advocacy, mental health, health/biosecurity, etc.) who are interested in approaching research from an effectiveness-minded perspectiveWhat Does Being a Fellow Involve?Similar to our Charity Incubation Program, the first month focuses on learning generalizable and specific research skills. It involves watching training videos, reading materials, and practicing by applying those skills to concrete mini-research projects. Participants learn by doing while we provide guidance and lots of feedback.The second month is focused on applying skills, working on different stages of the research process, and producing final research reports that could be used to guide real decision-making.Frequent feedback on your projects from expert researchersRegular check-in calls with a mentor for troubleshooting, guidance on research, and your careerWriting reports on selected topicsOpportunities to connect with established researchers and explore potential job opportunitiesAssistance with editing your cause area report for publication and disseminationWhat Are We Offering?11 weeks of online, full-time training with practical research assig...

]]>
KarolinaSarek https://forum.effectivealtruism.org/posts/AdouuTH7esiDQPExz/announcing-ce-s-new-research-training-program-apply-now Tue, 27 Jun 2023 18:23:17 +0000 EA - Announcing CE’s new Research Training Program - Apply Now! by KarolinaSarek KarolinaSarek https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:23 no full 6414
ztLGLrWBbmmPgyZsb EA - Longtermism and alternative proteins by BruceF Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermism and alternative proteins, published by BruceF on June 27, 2023 on The Effective Altruism Forum.I spoke at EA Global London 2023 about longtermism and alternative proteins. Here’s the basic argument:1) Meat production is a significant contributor to climate change, other environmental harms (pretty much all of them), food insecurity, antibiotic resistance, and pandemic risk - causing significant and immediate harm to billions of people.2) All of these harms are likely to double in adverse impact (or more) by 2050 unless alternative proteins succeed.3) Their X risk level is sufficiently high (Ord chart) that they warrant attention from longtermists. Especially for longtermists in policy or philanthropy, adding alt proteins to the portfolio impactful and tractable interventions that you support can allow you to do even more good in the world (a lot of it fairly immediate).In the talk, I cite this report from the Center for Strategic & International Studies’ director of global food security & director of climate and energy, as well as a report from ClimateWorks Foundation & the Global Methane Hub (1-pager w/r/t the points I made in the talk here).Below are the recording and transcript - comments welcomed.Here's a link to the slides from this talk.IntroductionThe observation is that we have been making meat in the same way for 12,000 years. Food is a technology. Making meat is a technology. The way we do it now is extraordinarily inefficient and comes with significant external costs that do indeed jeopardize our long-term future. This is Johan Rockström after the EAT-Lancet Commission called on the world to eat 90 percent less meat back in 2018 and 2019. He said, "Humanity now poses a threat to the stability of the planet. This requires nothing less than a new global agricultural revolution." That's what I'm going to be talking about, and I'm going to situate it in terms of effective altruism.There are five parts to the talk. The first one is that meat production has risen inexorably for many decades, and there is no sign of that growth slowing. The second is that our only strategy for changing this trajectory is support for alternative proteins - there's not a tractable plan B. The third point is that alternative proteins address multiple risks to long-term flourishing and they should be a priority for longtermists. I'm not going to try to convince you. They should be the priority - they're on par with AI risk or bioengineered pandemics. But I am going to try to convince you that, unless you are working for an organization that is focused on one thing, you should add alternative proteins to your portfolio if you are focused on longtermism. Fourth, I want to give you a sense of how GFI thinks about prioritization so that what we're doing as we expand is the highest marginal possible impact. Then we'll have some time for a discussion which Sim will lead us through.Meat Production has risen by 300% since 1961.The first observation is that, since 1961, global meat production has risen 300 percent.In China, it has skyrocketed by 1,500 percent. It's 15 times up since 1961, and meat production and consumption is going to continue to rise through 2050.There have been 11 peer-review articles looking at what meat production and consumption are going to look like in 2050. The lowest production is 61 percent more. One of the predictions is 3.4 times as much, so 340 percent more. Most of the predictions hover at about double. Most of that growth is not in developed economies. Developed economies have leveled off. They're going up a little bit. Most of that growth is in developing economies and in Asia.The world doesn't have tractable solutions to this. Bill Gates when he released How to Avoid a Climate Disaster, on his book tour was talking about how the cl...

]]>
BruceF https://forum.effectivealtruism.org/posts/ztLGLrWBbmmPgyZsb/longtermism-and-alternative-proteins Tue, 27 Jun 2023 11:13:30 +0000 EA - Longtermism and alternative proteins by BruceF BruceF https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 26:03 no full 6416
YpADfSeSccsEkaetk EA - AI Safety Field Building vs. EA CB by kuhanj Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Field Building vs. EA CB, published by kuhanj on June 27, 2023 on The Effective Altruism Forum.SummaryAs part of the EA Strategy fortnight, I am sharing a reflection on my experience doing AI safety movement building over the last year, and why I am more excited about more efforts in the space compared to EA movement-building. This is mostly due to the relative success of AI safety groups compared to EA groups at universities with both (e.g. read about Harvard and MIT updates from this past year here). I expect many of the takeaways to extend beyond the university context. The main reasons AI safety field building seems more impactful are:Experimental data from universities with substantial effort put into EA and AI safety groups: Higher engagement overall, and from individuals with relevant expertise, interests, and skillsStronger object-level focus encourages skill and knowledge accumulation, offers better career capital, and lends itself to engagement from more knowledgeable and senior individuals (including graduate students and professors).Impartial/future-focused altruism not being a crux for many for working on AI safetyRecent developments increasing the salience of potential risks from transformative AI, and decreasing the appeal of the EA community/ideas.I also discuss some hesitations and counterarguments, of which the large decrease in neglectedness of existential risk from AI is most salient (and which I have not reflected too much on the implications of yet, though I still agree with the high-level takes this post argues for).Context/Why I am writing about thisI helped set up and run the Cambridge Boston Alignment Initiative (CBAI) and the MIT AI Alignment group this past year. I also helped out with Harvard’s AI Safety team programming, along with some broader university AI safety programming (e.g. a retreat, two MLAB-inspired bootcamps, and a 3-week research program on AI strategy). Before this, I ran the Stanford Existential Risks Initiative and effective altruism student group and have supported many other university student groups.Why AI Safety Field Building over EA Community BuildingFrom my experiences over the past few months, it seems that AI safety field building is generally more impactful than EA movement building for people able to do either well, especially at the university level (under the assumption that reducing AI x-risk is probably the most effective way to do good, which I assume in this article). Here are some reasons for this:AI-alignment-branded outreach is empirically attracting many more students with relevant skill sets and expertise than EA-branded outreach at universities.Anecdotal evidence: At MIT, we received ~5x the number of applications for AI safety programming compared to EA programming, despite similar levels of outreach last year. This ratio was even higher when just considering applicants with relevant backgrounds and accomplishments. Around two dozen winners and top performers of international competitions (math/CS/science olympiads, research competitions) and students with significant research experience engaged with AI alignment programming, but very few engaged with EA programming.This phenomenon at MIT has also roughly been matched at Harvard, Stanford, Cambridge, and I’d guess several other universities (though I think the relevant ratios are slightly lower than at MIT).It makes sense that things marketed with a specific cause area (e.g. AI rather than EA) are more likely to attract individuals highly skilled, experienced, and interested in topics relevant to the cause area.Effective cause-area specific direct work and movement building still involves the learning, understanding, and application of many important principles and concepts in EA:Prioritization/Optimization are relevant,...

]]>
kuhanj https://forum.effectivealtruism.org/posts/YpADfSeSccsEkaetk/ai-safety-field-building-vs-ea-cb Tue, 27 Jun 2023 01:28:41 +0000 EA - AI Safety Field Building vs. EA CB by kuhanj kuhanj https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:42 no full 6415
mNfr9J64rxyyFNzde EA - [Link Post] Do Microbes Matter More Than Humans? by BrianK Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link Post] Do Microbes Matter More Than Humans?, published by BrianK on June 28, 2023 on The Effective Altruism Forum.GROWING UP, MOST of the stories I heard about animals featured charismatic megafauna—“flagship species,” as they were called. Elephants and tigers were the main attraction in zoos; dolphin shows were the primary draw at aquariums; and nonprofit organizations like the World Wildlife Fund celebrated pandas. In the news, the biggest stories about animals featured species like gorillas, lions, and orcas. This is largely still true today, and in a way it makes sense. These animals, with their sheer size, enigmatic behavior, and endangered status, can captivate the human imagination and command attention like few other creatures can, eliciting deep emotional responses from people around the world.Yet the past decade has seen increasing pushback against this idea of prioritizing the welfare of megafauna while ignoring less charismatic creatures. The view that we should extend our moral concern to more than just animals with faces is becoming more mainstream. But if we stop simply prioritizing the welfare of animals that are “majestic” or “cute,” how should we prioritize species? Should we be concerned about the welfare of fish, bivalves, or insects? What about microorganisms? If meat is murder, does that mean antibacterial soap is, too?Read the rest on Wired.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
BrianK https://forum.effectivealtruism.org/posts/mNfr9J64rxyyFNzde/link-post-do-microbes-matter-more-than-humans Wed, 28 Jun 2023 15:32:39 +0000 EA - [Link Post] Do Microbes Matter More Than Humans? by BrianK BrianK https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:33 no full 6425
5inarAxrymywW6JPC EA - Everything I didn't know about fertilizers by Helene K Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Everything I didn't know about fertilizers, published by Helene K on June 28, 2023 on The Effective Altruism Forum.At a recent EA meetup, someone mentioned fertilizers in the context of climate change and global food production. Admittedly, I had never thought about fertilizers before, except when I’ve been trying to get the tomato plants on my balcony to grow more than five tomatoes per season (I have yet to find my green thumb). Turns out fertilizers are a wildly fascinating topic! I dug into the topic a bit and decided to write up my findings as I assume others might also learn a lot about fertilizers and how they fit into the bigger picture.This post is the product of around twelve hours of research. It is by no means comprehensive and I wonder if I’ve come to the right conclusions in some sections. Any feedback is highly welcome, as well as corrections and resources that could fill in gaps or contradict anything I’ve written here.SummaryFertilizers provide what plants need to grow: nitrogen, potassium and phosphorus (in addition to sunlight and water).Humans have used organic fertilizers for thousands of years and started using synthetic fertilizers in the 20th century.While phosphate and potassium fertilizer is made from mined phosphorus and potash, respectively, nitrogen fertilizer is mostly produced through combining hydrogen from natural gas with atmospheric nitrogen via the Haber-Bosch process.58% of fertilizers are nitrogen fertilizers, with China, the US, India and Russia being the world’s largest producers. Potash is mainly mined in Canada, Belarus and Russia, phosphorus comes mostly from China.Fertilizers have vastly increased food production in the last 100 years and it is estimated that about 50% of today’s global population rely on fertilizers for their food supply, making fertilizers one of the “four pillars of civilization”.In comparison to almost all other regions in the world, crop yields in Sub-Saharan Africa are still very low, an unsettling fact given Sub-Saharan Africa’s fast population growth and high rates of undernourishment. This is partly the result of too little fertilizer use.Because fertilizer is so good at increasing crop yields, it means we can produce more food on the same or even less farmland, helping to preserve wildlife habitats and biodiversity.Many countries around the world use a lot of fertilizers—in fact, many use too much, including China, Mexico, Brazil, Colombia and Thailand, and could maintain their food production even with lower fertilizer use.Producing ammonia for nitrogen fertilizers relies heavily on natural gas, consumes around 2% of the global energy supply and constitutes about 5% of global anthropogenic greenhouse gas emissions, so we need to find ways to produce nitrogen fertilizer more sustainably.There is a range of ongoing development for producing “green ammonia” but most of them are still in their early stages, and it is unclear whether new production technologies can cover future needs for ammonia.Although higher crop yields and fertilizer use seem to be vital for improving food security in Sub-Saharan Africa, the issue is relatively unexplored in EA. It seems worthwhile to have a deeper look into this issue and think about potential cost-effective interventions as well as how they compare to top interventions in global health and development.What are fertilizers?Plants need three things to grow: sunlight, water and nutrients. Fertilizers provide plants with the latter, more specifically with phosphorus, potassium and nitrogen [1].Hang on, but why do we need to give a plant nitrogen? Isn’t our air made up of 78% nitrogen? Yes, but atmospheric nitrogen cannot be used by plants directly and instead needs to be converted into ammonia or other nitrogenous compounds in the soil. This process...

]]>
Helene_K https://forum.effectivealtruism.org/posts/5inarAxrymywW6JPC/everything-i-didn-t-know-about-fertilizers-1 Wed, 28 Jun 2023 13:02:43 +0000 EA - Everything I didn't know about fertilizers by Helene K Helene_K https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 22:24 no full 6427
Cve3s6C5kxow6ScQQ EA - What we talk about when we talk about community building by James Herbert Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What we talk about when we talk about community building, published by James Herbert on June 27, 2023 on The Effective Altruism Forum.These are my views, they are not necessarily the views of Effective Altruism Netherlands. Thanks to those people who provided comments on a draft of this post.SummaryIn this post I introduce a set of terms that could be useful for discussing EA community building strategy.I take as a starting point the claim that broadly speaking, effective altruism is an attempt at social change.There are four different approaches to social change: Social Movement Support, Field Building, Network Development, and Promoting the Uptake of Practices by Organizations. The post illustrates these approaches using different historical examples:The Civil Rights Movement mainly used Social Movement SupportThe field of Public Health primarily focused on Field BuildingThe United Nations emphasized Network DevelopmentThe Fair Trade movement concentrated on Promoting the Uptake of Practices by OrganizationsI then take a stab at describing EA’s current approach. Something like: Field Building (40%), Network Development (35%), Movement Support (20%), and Promoting the Uptake of Practices by Organizations (5%).I also suggest a re-balancing: Field Building (50%), Movement Support (25%), Network Development (15%), and Promoting the Uptake of Practices by Organizations (10%). This shift would make EA more engaged with society and more focused on its core mission, while spending less time in its own bubble.Finally, I have a few questions for you, the reader:Do you agree that, broadly speaking, EA is an attempt to bring about social change?Is there something missing from the set of social change approaches I’ve described?What do you think EA’s current social change portfolio is?What do you think it ought to be?How should we inform the above decision? Historical case studies? Something else?IntroductionEffective altruism has been called many things. MacAskill defines it as follows:(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and(ii) the use of the findings from (i) to try to improve the world.(i) refers to effective altruism as an intellectual project (or ‘research field’); (ii) refers to effective altruism as a practical project (or ‘social movement’).CEA's outward-facing website (effectivealtruism.org) uses a similar definition: “Effective altruism is a research field and practical community that aims to find the best ways to help others, and put them into practice”. Wikipedia also: “Effective altruism is a philosophical and social movement that advocates ‘using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis’”.I think these definitions are broadly correct. However, to speak in more abstract terms, I think EA is an attempt to bring about social change. By social change I mean the not-insignificant alteration of society. For example, changes in social institutions, social behaviours, or social relations. Examples of other attempts at social change include: building the field of public health, the fair trade movement, the civil rights movement, and the development of the UN.Following MacAskill’s definition, the social change that EA is aiming for is something like: building effective altruism as a research field and helping people use its findings when making decisions about their donations, careers, etc. The assumption being that, by doing this, you’re pursuing one of the most effective strategies for doing good and, in the words of CEA, you’re helping to build a radically better world, a world in which humanity has solved a range of pressing global pro...

]]>
James Herbert https://forum.effectivealtruism.org/posts/Cve3s6C5kxow6ScQQ/what-we-talk-about-when-we-talk-about-community-building Tue, 27 Jun 2023 23:13:59 +0000 EA - What we talk about when we talk about community building by James Herbert James Herbert https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:30 no full 6426
9ubtY3G7jTnLBrZKg EA - [Job post] API is looking for a Policy Manager in New Zealand by Rainer Kravets Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Job post] API is looking for a Policy Manager in New Zealand, published by Rainer Kravets on June 27, 2023 on The Effective Altruism Forum.SummaryAnimal Policy International seeks a part-time Policy Manager in New Zealand to support the organisation’s policy and communications work.Role description is available here.Apply before July 7 by filling the form.About the roleAnimal Policy International (API) is looking for a Policy Manager based in New Zealand. By working closely with the Government, farmers and NGOs, the Policy Manager will help drive forward our ask of legislative change in imports. The Policy Manager will contribute to the work of API by delivering advocacy and communications work - and in particular represent the organisation at meetings and events with varied external stakeholders, in person in New Zealand. The Manager will need to be comfortable talking to a diverse range of stakeholders - from politicians and civil servants and to farmers and NGOS. The Manager may also organise events, prepare briefings, position papers and may contribute to public consultations. With the role being flexible it may suit someone who already works full time in another role and would like to do some additional policy work.Position SummaryApplication Deadline: 7 JulyStart Date: July/AugustDuration: 6 months (will likely be extended)Hours: 4 hours a week average (flexible, tailored to organisation’s needs and availability of Policy Manager)Compensation: NegotiableEmployment type: ContractorLocation: Remote, preferably WellingtonProcess: There will be an interview and 2-hour test taskTo apply please fill in the form.About Animal Policy InternationalAnimal Policy International is a Charity Entrepreneurship incubated organisation working in cooperation with farmers, NGOs and policymakers towards responsible imports that uphold standards in higher welfare regions, advance fair market conditions, and help promote higher standards in low-welfare countries. Read our intro post here.If you have any questions about the position, please reach out to Rainer Kravets at rainer@animalpolicyinternational.org. If you know any people who might be interested, then please don't hesitate to share this post with them!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Rainer Kravets https://forum.effectivealtruism.org/posts/9ubtY3G7jTnLBrZKg/job-post-api-is-looking-for-a-policy-manager-in-new-zealand Tue, 27 Jun 2023 22:35:40 +0000 EA - [Job post] API is looking for a Policy Manager in New Zealand by Rainer Kravets Rainer Kravets https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:19 no full 6428
yhWSx8W5KMTuJzyXt EA - Some Reflections on EA Strategy Fortnight by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Reflections on EA Strategy Fortnight, published by Ben West on June 29, 2023 on The Effective Altruism Forum.I stated “I see this mostly as an experiment into whether having a simple “event” can cause people to publish more stuff” and I feel like the answer is conclusively “yes”. I put a little bit of effort into pitching people, and I’m sure that my title and personal connections didn’t hurt, but this really was not a terribly heavy lift.Thanks to the fortnight I have a post I can reference for EA being a do-ocracy! I would encourage other people to try to organize things like this.I noticed an interesting phenomenon: contributors who are less visible in EA wanted to participate because they thought it would give their writing more attention, and people who are more visible in EA wanted to participate because they thought it would give their writing less attention.I think the average EA might underestimate the extent to which being visible in EA (e.g. speaking at EAG) is seen as a burden rather than an opportunity.This feels like an important problem to solve, though outside the bounds of this project.3. Part of my goal was to get conversations that are happening in private into a more public venue. I think this basically worked, at least measured by “conversations that I personally have been having in private”. There are some ways in which karma did not reflect what I personally thought was most important though:I’ve started to worry that it might be important to get digital sentience work (e.g. legal protection for digital beings) before we get transformative AI, and EA’s seem like approximately the only people who could realistically do this in the next ~5 years. So I would have liked to have seen more grappling with this post, although in fairness Jeff wasn’t making a strong pitch for prioritizing AI welfare.I also find myself making the points that Arden raised here pretty regularly, and wish there was more engagement with them.4. When doing a “special event” on the forum, I always wonder whether the event will add to the forum’s engagement or just cannibalize existing engagement. I think the strategy fortnight was mostly additive, although it’s pretty hard to know the counterfactual.5. Some events I would be interested in someone trying to organize on the Forum“Everyone change your job week” – opportunity for people to think seriously about whether they should change their jobs, write up a post about it, and then get feedback from other Forum usersRotblat day – Joseph Rotblat was a physicist on the Manhattan project who was originally motivated by wanting to defeat Nazi Germany, but withdrew once he realized the project was actually motivated by wanting to defeat the USSR. On Rotblat day, people post what signs they would look for to determine if their work was being counterproductive.“Should we have an AI moratorium?” debate week – basically the comments here, except you recruit people to give longer form takes.Video week – people create and post video versions of forum articles (or other EA content)Or more specifically: they would be seen as one of many voices, rather than someone whose opinions should receive special attention/deferenceLargely because no one else cares.This and the Rotblat idea come from Sydney Von ArxThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Ben_West https://forum.effectivealtruism.org/posts/yhWSx8W5KMTuJzyXt/some-reflections-on-ea-strategy-fortnight Thu, 29 Jun 2023 20:26:50 +0000 EA - Some Reflections on EA Strategy Fortnight by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:03 no full 6432
ZnPLPFC49nJym7y8g EA - AGI x Animal Welfare: A High-EV Outreach Opportunity? by simeon c Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI x Animal Welfare: A High-EV Outreach Opportunity?, published by simeon c on June 29, 2023 on The Effective Altruism Forum.Epistemic status: Very quickly written, on a thought I've been holding for a year and that I haven't read elsewhere.I believe that within this decade, there could be AGIs (Artificial General Intelligences) powerful enough that the values they pursue might have a value lock-in effect, at least partially. This means they could have a long-lasting impact on the future values and trajectory of our civilization (assuming we survive).This brief post aims to share the idea that if your primary focus and concern is animal welfare (or digital sentience), you may want to consider engaging in targeted outreach on those topics towards those who will most likely shape the values of the first AGIs. This group likely includes executives and employees in top AGI labs (e.g. OpenAI, DeepMind, Anthropic), the broader US tech community, as well as policymakers from major countries.Due to the risk of lock-in effects, I believe that the values of relatively small groups of individuals like the ones I mentioned (less than 3 000 people in top AGI labs) might have a disproportionately large impact on AGI, and consequently, on the future values and trajectory of our civilization. My impression is that, generally speaking, these people currentlya) don't prioritize animal welfare significantlyb) don't show substantial concern for digital minds sentience.Hence if you believe those things are very important (which I do believe), and you think that AGI might come in the next few decades (which a majority of people in the field believe), you might want to consider this intervention.Feel free to reach out if you want to chat more about this, either here or via my contact you can find here.Even more so if you believe, as I do along with many software engineers in top AGI labs, that it could happen this decade.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
simeon_c https://forum.effectivealtruism.org/posts/ZnPLPFC49nJym7y8g/agi-x-animal-welfare-a-high-ev-outreach-opportunity Thu, 29 Jun 2023 17:22:07 +0000 EA - AGI x Animal Welfare: A High-EV Outreach Opportunity? by simeon c simeon_c https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:02 no full 6433
dikcpP32Q3cg6tvdA EA - AI Incident Sharing - Best practices from other fields and a comprehensive list of existing platforms by stepanlos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Incident Sharing - Best practices from other fields and a comprehensive list of existing platforms, published by stepanlos on June 29, 2023 on The Effective Altruism Forum.Purpose of this post: The purpose of this post is three-fold: 1) highlight the importance of incident sharing and share best practices from adjacent fields to AI safety 2) collect tentative and existing ideas of implementing a widely used AI incident database and 3) serve as a comprehensive list of existing AI incident databases as of June 2023.Epistemic status: I have spent around 25+ hours researching this topic and this list is by no means meant to be exhaustive. It should give the reader an idea of relevant adjacent fields where incident databases are common practice and should highlight some of the more widely used AI incident databases which exist to date. Please feel encouraged to comment any relevant ideas or databases that I have missed, I will periodically update the list if I find anything new.Motivation for AI Incident DatabasesSharing incidents, near misses and best practices in AI development decreases the likelihood of future malfunctions and large-scale risk. To mitigate risks from AI systems, it is vital to understand the causes and effects of their failures. Many AI governance organizations, including FLI and CSET, recommend creating a detailed database of AI incidents to enable information-sharing between developers, government and the public. Generally, information-sharing between different stakeholders 1) enables quicker identification of security issues and 2) boosts risk-mitigation by helping companies take appropriate actions against vulnerabilities.Best practices from other fieldsNational Transportation Safety Board (NTSB) publishes and maintains a database of aviation accidents, including detailed reports evaluating technological and environmental factors as well as potential human errors causing the incident. The reports include descriptions of the aircraft, how it was operated by the flight crew, environmental conditions, consequences of event, probable cause of accident, etc. The meticulous record-keeping and best-practices recommendations are one of the key factors behind the steady decline in yearly aviation accidents, making air travel one of the safest form of travel.National Highway Traffic Safety Administration (NHTSA) maintains a comprehensive database recording the number of crashes and fatal injuries caused by automobile and motor vehicle traffic, detailing information about the incidents such as specific driver behavior, atmospheric conditions, light conditions or road-type. NHTSA also enforces safety standards for manufacturing and deploying vehicle parts and equipment.Common Vulnerabilities and Exposure (CVE) is a cross-sector public database recording specific vulnerabilities and exposures in information-security systems, maintained by Mitre Corporation. If a vulnerability is reported, it is examined by a CVE Numbering Authority (CNA) and entered into the database with a description and the identification of the information-security system and all its versions that it applies to.Information Sharing and Analysis Centers (ISAC). ISACs are entities established by important stakeholders in critical infrastructure sectors which are responsible for collecting and sharing: 1) actionable information about physical and cyber threats 2) sharing best threat-mitigation practices. ISACs have 24/7 threat warning and incident reporting services, providing relevant and prompt information to actors in various sectors including automotive, chemical, gas utility or healthcare.National Council of Information Sharing and Analysis Centers (NCI) is a cross-sector forum designated for sharing and integrating information among sector-based ISACs (Information Sharing an...

]]>
stepanlos https://forum.effectivealtruism.org/posts/dikcpP32Q3cg6tvdA/ai-incident-sharing-best-practices-from-other-fields-and-a Thu, 29 Jun 2023 12:35:34 +0000 EA - AI Incident Sharing - Best practices from other fields and a comprehensive list of existing platforms by stepanlos stepanlos https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:27 no full 6435
eZAzq442f2nbvFqa7 EA - Taking happiness seriously: Can we? Should we? Would it matter if we did? A debate by MichaelPlant Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Taking happiness seriously: Can we? Should we? Would it matter if we did? A debate, published by MichaelPlant on June 29, 2023 on The Effective Altruism Forum.Mark Fabian and I recently had a debate at EA Global London: 2023. In it, we discuss taking happiness seriously: Can we? Should we? Would it matter if we did? We both really enjoyed doing this. We don't think EAs think enough about well-being or debate enough in general, and we hoped our discussion was a way to bring out some of the issues. We only regret we could get stuck deeper into the topic (and I only regret that couldn't get the clicker working...).(You can see the slides for this talk here and here.)Taking happiness seriously (Happier Lives Institute)Let me start you with a real-life moral dilemma. For £1000 you could double the annual income of one household living in absolute poverty. You could provide 250 bed nets, which would save in expectation 1/6 of a child's life or 1/6 of your child. You could treat 10 women for depression by providing a 10-week course of group therapy, or you could do over 1000 Children. And of course, what we want to do is to do the most good. But the question is, how do we know that we're doing that?Two paths to measuring impactI want to point out there are really two parts to measuring impact. The first is what I've called the objective indicators approach, which I think is just sort of the default approach that society has taken for the last 100 or so years. To think about impact, we look at objective measures of well-being, such as health and wealth, and then we make intuitive trade-offs between them. An example of this is GDP. That's our default measure of social progress. More economic activity is good.However, the objective indicative approach seems to miss something. It seems to miss people's feelings, their happiness, and how their lives are going for them. Where is that in the picture? An alternative, then, is the subjective well-being approach where we use self-reported measures of well-being such as happiness and life satisfaction. The typical case would be 0 to 10. How satisfied are you with your life nowadays?And so my proposal in this session is that we should take happiness seriously. We should set the priorities using the evidence on subjective well-being trying to work out.You might wonder, is this a new radical idea? Well, the first steam train, the Coalbrookdale Locomotive, was built in 1802. And the idea that we should take happiness seriously is as old as that. Thomas Jefferson, writing in 1809 said, “The care of human life and happiness is the first no legitimate object of good government.” Jeremy Bentham, writing in 1776 said, “The greatest happiness of the greatest number is the foundation of morals and legislation.” And looking further back, you have Aristotle who said, “Happiness and meaning and the purpose of life is the whole end and aim of human existence." This idea is old. It's older than the steam trade. It has real lineage to it.What’s new? DataBut what is new is that now we have data. It was only after the Second World War there started to be large-scale surveys of households on how their lives were going. Gallup was founded in 1946. It was in 1972 that there are two countries started having nationally represented examples of subjective well-being. The US General Social Survey, the Gross National Happiness Index in Bhutan. And in 2005, there's the first global survey on well-being. There's the Gallup World Poll, which runs in 160 countries and surveys 98% of the world's population. In 2011, the UK starts to collect and measure data on well-being. The UK is kind of weirdly enough a world leader in measuring well-being. You’ll hear more of that later. In 2012, there was the first edition of the World Happiness Report. People know th...

]]>
MichaelPlant https://forum.effectivealtruism.org/posts/eZAzq442f2nbvFqa7/taking-happiness-seriously-can-we-should-we-would-it-matter Thu, 29 Jun 2023 04:25:16 +0000 EA - Taking happiness seriously: Can we? Should we? Would it matter if we did? A debate by MichaelPlant MichaelPlant https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 40:36 no full 6434
p6oP854ZCZ6skojx6 EA - Juan B. García Martínez on tackling many causes at once and his journey into EA by Amber Dawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Juan B. García Martínez on tackling many causes at once and his journey into EA, published by Amber Dawn on June 30, 2023 on The Effective Altruism Forum.This post is part of a series of 6 interviews. As EAs, we want to use our careers or donations to do the most good - but it’s difficult to work out what exactly that looks like for us. I wanted to interview effective altruists working in different fields and on different causes and ask them how they chose their cause area, and how they relate to effective altruism and doing good more generally. During the Prague Fall Season residency, I interviewed six EAs in Prague about what they are doing and why they are doing it. I’m grateful to my interviewees for giving their time, and to the organisers of PFS for supporting my visit. Juan B. García Martínez was born in La Mancha, Spain, and studied Chemical Engineering in Madrid (BA) and the Netherlands (MA). Looking for neglected opportunities in climate change, he wrote his Bachelor’s thesis on solar silicon and his Master’s thesis on CO2 capture.During this time, he also became deeply concerned with farmed animal suffering and global catastrophic risks.He is now Research Manager at ALLFED. In his current work, he assesses whether and how we might use various technologies to produce food that is resilient to global catastrophes, for example fermentation technology, single cell proteins from CO₂ or natural gas, sugars from plant fiber or CO₂, fats from microorganisms or hydrocarbons, microbial electrosynthesis, and non-biological synthesis of food from CO₂ or hydrocarbons. He hopes that this work will help to mitigate the effects of global catastrophic risks, but also reduce our dependence on factory farming, thus reducing animal suffering and carbon emissions.On producing food without sunlightAmber: What are you currently working on at ALLFED?Juan: My research has been focused on non-agricultural, independent, industrial food production. We’re looking at how we could produce food without requiring any sunlight at all, to complement methods that use sunlight more efficiently. Specifically, I’ve been studying industrial factories, chemical processing, food production, and a lot of biotech ingredients and alternative protein stuff.I’m also doing some more general projects, like looking into technology readiness more broadly. Which of these things are more or less ready to go than the others?Amber: Which things are the most ready-to-go?Juan: Obviously agriculture, that’s pretty straightforward, though crop relocation would be needed to maintain yields. For more technology-based stuff, I research methane single cell proteins, which they’re already making at the industrial scale. Also lignocellulosic sugar, which is converting trees and leaves into sugars — that’s been done before at some scale.Amber: How long have you been working at ALLFED?Juan: I started there as a research volunteer at the end of 2019. I really liked the type of research that they were doing: it was similar to what I’d been doing at university, with my Master’s. So I started volunteering there while I was finishing my Master's thesis on atmospheric CO2 capture, because I was getting a little tired of that.I did that for four months. I spent a lot of time working on ALLFED projects. As soon as I presented my thesis, they hired me and I worked as a research associate for 2 years. Then they hired me as a coordinator, where I'm still doing the same research stuff, but with other responsibilities: dealing with volunteers and interns, deciding where to put them, ensuring good communication between the research team and the rest of the people or the organisation—doing a little bit of everything. Now I’m working as research manager, acting as deputy to Dr Denkenberger.On planning his career using EA pri...

]]>
Amber Dawn https://forum.effectivealtruism.org/posts/p6oP854ZCZ6skojx6/juan-b-garcia-martinez-on-tackling-many-causes-at-once-and Fri, 30 Jun 2023 18:30:10 +0000 EA - Juan B. García Martínez on tackling many causes at once and his journey into EA by Amber Dawn Amber Dawn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:18 no full 6442
xL8H3TRj3xxenDgEF EA - Transforming Democracy: A Unique Funding Opportunity for US Federal Approval Voting by aaronhamlin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transforming Democracy: A Unique Funding Opportunity for US Federal Approval Voting, published by aaronhamlin on June 30, 2023 on The Effective Altruism Forum.I'm excited to share a special opportunity to create a systemic impact: a statewide approval voting ballot initiative in Missouri. This would affect all elections throughout the state including federal and presidential. Approval voting favors consensus candidates and a more accurate representation of the public's support. This is critical if we want a government to behave in our interests on policies that concern our well-being.The organization leading this charge is Show Me Integrity, where I'm currently doing a fellowship and assisting with fundraising efforts. Show Me Integrity has successfully passed a ballot initiative before, showing their ability to succeed on this kind of scale. They also successfully ran the ballot initiative for approval voting in St. Louis.Why is this important?Approval voting is a method that allows voters to select as many candidates as they want; still, most votes wins. Approval voting, an easy-to-implement system, can greatly improve our current plurality-based approach to electing Federal and state-level positions. If you’ve read my writing on this before, you’ve seen me make that case. And this is much more effective and lasting than putting money behind individual candidates. This opportunity may not come around again.The ImpactThis initiative is not just about changing the voting method; it's about transforming how we elect individuals to government office, from local to federal positions, in the 19th largest state of over 6 million people. This includes influencing presidential electoral votes. This is the first statewide ballot initiative for approval voting, making it a pioneering effort with potentially far-reaching implications.The AskWe are currently in the signature-gathering phase, a crucial step that requires initial funding. The cost for signature gathering is around $4M, with an additional $9M needed later for campaign execution. Yes, it's expensive, but the potential impact justifies the investment.About Show Me IntegrityShow Me Integrity has a history of successfully implementing ballot initiatives, including a statewide initiative and passing approval voting in St. Louis. Their experience and proven success make them an ideal organization to lead this initiative. There is no better opportunity.Logistics and ChallengesThere are some challenges to be aware of. The competition for signature gathering means that costs could increase if we can't secure initial funding soon. This could also necessitate the use of different firms, which may not have the same quality. With timely funding, we can overcome these hurdles. Additionally, while the initial polling is over 60% even with opposition messaging, this support can change. Ballot measures can be risky.A Matching OpportunityTo encourage donations, a generous donor is currently offering a match of $600K. This match may end soon, but there's a possibility of an extension. This is a true match, meaning the donor will only match funds that can meaningfully kickstart the signature-gathering process. $100K of this match is already being met by other donors.Your Chance to Make a DifferenceThis is a rare opportunity to contribute to a significant societal change. If you, someone you know, or an institution is interested in supporting this initiative, please reach out to me at aaronhamlin@gmail.com. [Please do not use my old CES email.]Thank you for considering this opportunity. Your support could help transform our democracy and create a lasting impact.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
aaronhamlin https://forum.effectivealtruism.org/posts/xL8H3TRj3xxenDgEF/transforming-democracy-a-unique-funding-opportunity-for-us Fri, 30 Jun 2023 17:58:21 +0000 EA - Transforming Democracy: A Unique Funding Opportunity for US Federal Approval Voting by aaronhamlin aaronhamlin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:36 no full 6445
chqv4wneoXHpHzdQk EA - CE: Rigorously prioritizing the top health security (biosecurity) ideas by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE: Rigorously prioritizing the top health security (biosecurity) ideas, published by CE on June 30, 2023 on The Effective Altruism Forum.Every year at Charity Entrepreneurship (CE), we try to find the best interventions for impact-focused charities to launch through our Incubation Program. Our research in 2022 focused on health security (which includes biosecurity) and on large-scale global health.This post outlines:An introduction to health security at CE. What CE means by health security exactly, some of CE’s views on preventing catastrophic and existential risks, and why we picked “health security” as a topic.Our prioritization of the top health security interventions. Why rigorous prioritization of ideas is needed to be effective in the world (especially for longtermist cause areas), and CE’s approach to this within health security.Our results. The long list of nearly 200 ideas, our top six shortlisted ideas, and our final recommendations.If you’d like to learn how to conduct similar prioritization research, and how to evaluate the results that come out of starting high-impact interventions, we have just launched our Research Training Program that you can apply to by July 17, 2023.Introduction to health security at CEWhat we mean by health securityWe took a fairly broad interpretation of health security. Our initial definition was: anything that minimizes the danger of acute public health events.This included several broad categories:Pandemic preventionPandemic preparednessGeneral risk preparedness for acute public health eventsAntimicrobial resistanceHealth system strengtheningOther / meta ideas (e.g., better biosecurity forecasting)This included looking at both natural pandemics and engineered pandemics (which could be due to accidental or deliberate misuse of biotech).How much does CE value preventing catastrophic/existential risks?This may seem like a bit of a cop-out answer, but there are a wide range of views on this topic among CE staff. There is no clear overarching organizational view or policy. This section gives a rough sense of how “CE staff” might tend to think about these topics, but probably doesn’t accurately reflect any individual staff member’s stance.In general, CE’s staff tend to apply a very high level of empirical rigor to decision making. Staff tend to trust empirical evidence (e.g., RCTs, M&E of existing charities, base-rate forecasting, etc.) above other kinds of evidence, particularly valuing such evidence above theoretical reasoned arguments. That said, staff tend to accept that making good decisions requires robust/cluster thinking and look for cases where many kinds of evidence align, including theoretical reasoning. Along the same lines, staff are likely to think that doing good is really difficult and having some ongoing measurable evidence of impact is probably required.In general, CE staff believe that preventing extinction is a worthwhile endeavor. However, given the above, staff are likely to be skeptical about:The ability to know what the biggest future risks are, especially where risk estimates rely on reasoned speculation about future technologies.The success of any organization that doesn’t have a clear path to measure the impact of its activities.We took all these views into consideration and chose to focus on health security, including biorisks. This focus allows us (and future CE charities) to explore risk areas that have at least some chance of being globally catastrophic, but also have clear historical precedents and evidence of less extreme catastrophes. Additionally, we expected to find a number of options within biorisk that could be impactful in addressing catastrophic biorisks while also demonstrating health impacts in the short run, meaning a new charity should be able to track and demonst...

]]>
CE https://forum.effectivealtruism.org/posts/chqv4wneoXHpHzdQk/ce-rigorously-prioritizing-the-top-health-security Fri, 30 Jun 2023 16:11:49 +0000 EA - CE: Rigorously prioritizing the top health security (biosecurity) ideas by CE CE https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:14 no full 6443
4Gjavidm767Cddnuu EA - MHFC Spring Grants Round by wtroy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MHFC Spring Grants Round, published by wtroy on June 30, 2023 on The Effective Altruism Forum.The Mental Health Funding Circle recently completed our second round of funding. We are very happy to support great organizations working on highly impactful mental health interventions. Our next open funding round will be held in the Fall - applications will be due October 1st, with final decisions made in early November. For more information, visit our website.This round, MHFC members disbursed $789,833 to the following organizations, institutions and individuals:$70,000 to Fine Mind for their direct service work on depression in northern Uganda.$78,225 to Phlourish for guided self-help in the Philippines.$50,000 to Happier Lives Institute for continued research on subjective wellbeing metrics and general operating support.$65,000 to Overcome to support their work offering free online therapy.$153,000 to Eggshells for their work on digital guided self-help.$120,000 to Children’s Hospital of Philadelphia to station task-shifting youth therapy practitioners in HIV clinics in Botswana.$24,000 to School of Hard Knocks to support lay practitioner interpersonal therapy for youth in South Africa.$30,000 to CEARCH for meta research into effective mental health interventions.$99,360 to Columbia University and Makerere University for capacity building for interpersonal therapy in primary care in Uganda.$30,000 to Tata Institute of Social Sciences to support research on interpersonal therapy in primary schools in India.$70,248 to Swiss Tropical and Public Health Institute for research on suicidality and suicide prevention in urban settings in Zambia.All funding decisions are made personally by individual circle members and do not necessarily reflect the priorities of the circle. For any information about funding or membership, please reach out!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
wtroy https://forum.effectivealtruism.org/posts/4Gjavidm767Cddnuu/mhfc-spring-grants-round Fri, 30 Jun 2023 04:28:26 +0000 EA - MHFC Spring Grants Round by wtroy wtroy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:09 no full 6444
dBdNoSAbkG4k98GT9 EA - Evidence of effectiveness and transparency of a few effective giving organisations by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evidence of effectiveness and transparency of a few effective giving organisations, published by Vasco Grilo on July 1, 2023 on The Effective Altruism Forum.SummaryEffective giving can be quite impactful.I estimated the factual non-marginal multipliers until 2021 of Ayuda Efectiva (Spain), Doebem (Brazil), Effektiv Spenden (Germany), and Giving What We Can (GWWC), i.e. how much donations they moved per dollar spent. Those of Ayuda Efectiva (1.34) and Doebem (5.53) are much lower than those of Effektiv Spenden (61.2) and GWWC (135).However, the results might differ accounting for future donations (received after 2021, but caused until then), counterfactuals, diminishing marginal returns, cost-effectiveness of caused donations, and indirect impacts of effective giving. Furthermore, the organisations were at different levels of maturity. Consequently, my estimates for the factual non-marginal multipliers are not directly comparable, and I do not know which of the 4 organisations are more effective at the margin.I did not find any proper cost-effectiveness analyses of Ayuda Efectiva, Doebem or Effektiv Spenden. I encourage these and other effective giving organisations as well as their funders (namely, Open Philanthropy) to do and publish cost-effectiveness analyses of their work (ideally including the indirect impacts of effective giving), as GWWC has done.IntroductionEffective giving can be quite impactful:Supporting with 0.399 $/year the corporate campaigns for chicken welfare of The Humane League might be enough to neutralise the suffering of factory-farmed animals caused by a random person. This estimate can easily be off by a factor of 10, but illustrates that the (financial and non-financial) costs/savings of switching to a fully plant-based diet may well be much higher.Helen Keller International’s vitamin A supplementation program has a cost-effectiveness of 3.5 k$ per life saved, i.e. one can save 13.8 lives (= 48.3/3.5) for the average transaction price of new cars in the United States in April 2023 of 48.3 k$.So there are good reasons for giving effectively and significantly to become a cultural norm. This is a primary goal of effective giving organisations, and I have estimated the factual non-marginal multiplier of a few of them to get a sense of whether they are accomplishing it effectively. To clarify:A factual non-marginal multiplier of x means the effective giving organisation moved x $ of donations (hopefully to effective organisations) for each dollar it spent.A counterfactual non-marginal multiplier of y means the effective giving organisation caused y $ of donations for each dollar it spent.A counterfactual marginal multiplier of z means the effective giving organisation would have caused z $ of donations for each additional dollar it had spent.y < x because effective giving organisations do not cause all the donations they move, and z < y owing to diminishing marginal returns. The effective giving organisations is underfunded if z < 1, as long as the counterfactual marginal multiplier includes all relevant effects.I was curious about Ayuda Efectiva and Doebem because their results could be more generalisable to Portugal (where I am from). I looked into Effektiv Spenden owing to it being regarded as a successful example of effective giving, and included GWWC as a major reference in this space.MethodsI calculated the factual non-marginal multipliers from the ratio between donations received to be directed towards effective organisations and costs. I neglected future donations, and did not account for the opportunity cost of workers and volunteers. The greater the future donations, the greater my underestimation of the factual multipliers. The greater the opportunity cost, the greater my overestimation of the factual non-marginal multi...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/dBdNoSAbkG4k98GT9/evidence-of-effectiveness-and-transparency-of-a-few Sat, 01 Jul 2023 19:26:30 +0000 EA - Evidence of effectiveness and transparency of a few effective giving organisations by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 19:53 no full 6449
QYJrH854vZzCFKmoG EA - Longlist of Causes + Cause Exploration Contest by Joel Tan (CEARCH) Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longlist of Causes + Cause Exploration Contest, published by Joel Tan (CEARCH) on July 1, 2023 on The Effective Altruism Forum.Longlist of CausesCEARCH keeps a running longlist of causes (link) that may merit further research to see if they are highly impactful causes worth supporting. The list, which covers the broad areas of global health & development, longtermism, as well as EA meta, is currently around 400 causes long.In compiling this longlist, we have used a variety of methods, as detailed in this search methodology (link); core ones include:Using Nuno’s excellent list as a starting point.Conducting consultations and surveys (e.g. of both EA and non-EA organizations and individuals).Performing outcome tracing (i.e. looking at good/bad outcomes and identifying the underlying causes): The Global Burden of Diseases database and the World Database of Happiness are especially useful in this regard.Our hope is that this list is useful to the community, and not just our own research team.Notes:Classification of causes is fairly arbitrary, and each organization has their own approach. CEARCH find it useful to think of causes in three distinct levels, from broadest to narrowest:(1) High-level cause domain, which are problems defined in the broadest way possible: (a) global well-being, which concerns human welfare in the near-term; (b) animal welfare, which is self-explanatory; (c) longtermism, which concerns human welfare in the long-term; and (d) EA meta, which involves doing good through improving or expanding effective altruism itself.(2) Cause areas, which are significantly narrowed down from high-level cause domains, but are still fairly broad themselves. For example, within global well-being, we might have global health, economic & development, political reform etc(3) Causes, which are problems defined in a fairly narrow way (e.g. malaria, vitamin A deficiency, childhood vaccination, hypertension, diabetes etc).Of course, causes can always be broken down further (e.g. malaria in sub-Saharan Africa, or childhood vaccination for diphtheria), and going through our list, you can also see that causes may overlap (e.g. air pollution in a general sense, vs ambient/outdoor particulate matter pollution, vs indoor air quality, vs specifically indoor household air pollution from soot). The reason for such overlap is partly a lack of time on CEARCH's part to rationalize the whole list; but partly it also reflects our view that it can be valuable to look at problems at different levels of granularity (e.g. at higher levels, a single intervention may be able to solve multiple problems at the same time, such that a broader definition of a cause areas helps find more cost-effective solutions; conversely, at lower levels, you can focus on very targeted interventions that may be very cost-effective but not generally applicable).Note that animal welfare causes are not in this longlist, as CEARCH has so far not focused on them, for want of good moral weights to do evaluations with. This should not be taken to imply that animal causes are unimportant, or that research into cost-effective animal causes is not valuable.Cause Exploration ContestOpen Philanthropy had its excellent Cause Exploration Prize; here, we’ll like to do something similar but make the bar significantly lower.We invite people to suggest potential cause areas, providing a short justification if you feel it useful (e.g. briefly covering why the issue is important/tractable/neglected), or not, if otherwise (e.g. the idea simply appears novel or interesting to you). All ideas are welcome, and even causes which do not appear intuitively impactful can be fairly cost-effective upon deeper research.People are also welcome to suggest potential search methodologies for finding causes (e.g. consulting weird ...

]]>
Joel Tan (CEARCH) https://forum.effectivealtruism.org/posts/QYJrH854vZzCFKmoG/longlist-of-causes-cause-exploration-contest Sat, 01 Jul 2023 19:24:11 +0000 EA - Longlist of Causes + Cause Exploration Contest by Joel Tan (CEARCH) Joel Tan (CEARCH) https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:32 no full 6450
GxRcKACcJuLBEJPmE EA - Consider Earning Less by ElliotJDavies Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider Earning Less, published by ElliotJDavies on July 1, 2023 on The Effective Altruism Forum.This post is aimed at those working in jobs which are funded by EA donors who might be interested in voluntarily earning less. This post isn't aimed to influence pay scales at organisations, or at those not interested in earning less. When the Future Fund was founded in 2022, there was a simultaneous upwards pressure on both ambitiousness and net-earnings in the wider EA community. The pressure to be ambitious resulted in EAs really considering the opportunity cost of key decisions. Meanwhile, the discussions around why EAs should consider ordering food or investing in a new laptop pointed towards a common solution: EAs in direct work earning more.The funding situation has significantly shifted from then, as has the supply-demand curve for EA jobs. This should put a deflationary pressure on EAs' salaries, but I'd argue we largely haven't seen this effect, likely because people's salaries are "sticky".One result of this is that there are a lot of impactful projects which are unable to find funding right now, and in a similar vein, there's a lot of productive potential employees who are unable to get hired right now. There's even a significant proportion of employees who will be made redundant.This seems a shame, since there's no good reasons for salaries to be sticky. It seems especially bad if we do in fact see significant redundancies, since under a "veil of ignorance" the optimal behaviour would be to voluntarily lower your salary (assuming you could get your colleagues to do the same). Members of German labour unions quite commonly do something similar (Kurzarbeit) during economic downturns, to avoid layoffs and enable faster growth during an upturnSome Reasons you Might Want to Earn Less:You want to do as much good as possible, and suspect your organisation will do more good if it had more money at hand.Your Organisation is likely to make redundancies, which could include you.You have short timelines, and you suspect that by earning less, more people could work on alignment.You can consider your voluntary pay-cut a donation, which you can report on your GWWC account. (The great thing about pay-cut donations is you essentially get a 100% tax refund, which is particularly nice if you live somewhere with high income tax).Some Reasons you May Not Want to Earn Less:It would cause you financial hardship.You would experience a significant drop in productivity.You suspect it would promote an unhealthy culture in your organisation.You expect you're much better than the next-best candidate, and you'd be less likely to work in a high impact role if you had to earn less.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
ElliotJDavies https://forum.effectivealtruism.org/posts/GxRcKACcJuLBEJPmE/consider-earning-less Sat, 01 Jul 2023 18:27:36 +0000 EA - Consider Earning Less by ElliotJDavies ElliotJDavies https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:36 no full 6448
HvaEvMK2EXaXXtvDP EA - Starting the second Green Revolution by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Starting the second Green Revolution, published by freedomandutility on June 30, 2023 on The Effective Altruism Forum.The Green Revolution may have saved over a billion people from starvation, driven by active efforts by the Rockefeller Foundation, Mexico, the USA and the UN.Food security remains poor in conflict-affected areas, and is threatened by risks such as plant disease pandemics and nuclear war.Since the Green Revolution, we've made immense scientific progress, particularly in synthetic biology and AI.How can we make another large jump in agricultural efficiency, to tackle poor food security in conflict-stricken states now and in the future, and improve global resilience to wide-scale disasters?In development, a lot of work on agriculture focuses on adoption of existing technologies.I want to read more about the kind of frontier technologies we should be prioritising R&D investment in.In addition, do we need to develop new institutions to conduct RCTs (like a J-PAL spin-off focused on agriculture) to generate better evidence for evaluating new farming technologies? Do we need to engage farmers in large-scale RCTs, similar to the way we engage doctors and patients in medical RCTs?I'd like to see more EA work on this, but if there already is some work in this area, please point me towards it!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
freedomandutility https://forum.effectivealtruism.org/posts/HvaEvMK2EXaXXtvDP/starting-the-second-green-revolution Fri, 30 Jun 2023 22:10:03 +0000 EA - Starting the second Green Revolution by freedomandutility freedomandutility https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:28 no full 6451
rRTmMaShCeRpBFCfx EA - Bill Gates' 400 million dollar bet - First Tuberculosis Vaccine in 100 years? by NickLaing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bill Gates' 400 million dollar bet - First Tuberculosis Vaccine in 100 years?, published by NickLaing on July 2, 2023 on The Effective Altruism Forum.Last week a young man Onekalit turned up with a nasty cough to a health center we operate in a youth prison in Gulu, Northern Uganda. The dry cough wore him down for over a month, but last week he finally managed to cough a bit of sputum into a small plastic container. The incredible Gates Foundation funded GeneXpert test confimed our fears – TuberculosisBut Onekalit will be OK – after 6-9 months of gruelling treatment, the TB will be cured. He will not become one of the 1.5 million people that TB kills every year, more than double that of malaria. After covid subsided TB has now regained the dubious honour of the world’s deadliest infectious disease.The Gates Foundation helped bring the amazing GeneXpert diagnostic test to places like rural Uganda, but Bill and co. are now going a step further making their biggest ever 400 million dollar bet on a vaccine that initial trials show may be 50% effective in stopping TB progress from latent infection to deadly lung disease. The first new effective TB vaccine in over 100 years.Surprisingly this vaccine has been sitting around (in a form) doing nothing much for around 20 years. GlaxoSmithKline (GSK) bought the patent almost 20 years ago, before publishing a trial which showed it was actually quite good and could save millions of lives, then decided they couldn’t make money from the vaccine so shelved it... Crazy stuff.Unfortunately, our economic system is not set up to bring a vaccine which could save hundreds of millions of future lives to market. Fortunately our economic system does allow people like Effective Altruists and Bill Gates to donate their own stacks of cash towards life saving endeavours that the market has failed to bring to fruition.This mind bogglingly expensive 550million dollar trail is necessary because TB is a slow disease. Slow to divide, slow to spread, slow to treat. Tracking and follow up TB takes far longer than for your average infectious disease. For malaria, within a year we can start to see whether a vaccine works. For TB it will take at least 5 times as long – 5 years or more before we know whether we are onto a winner.If the vaccine really is 50% effective, it could save around 10 million lives in the next 25 years, not to mention helping prevent the terrifying Antimicrobial Resistance (AMR) that TB has already partly achieved.500 million died from Smallpox (“but not a single one more”) – over 1 billion have died from TB. We remain far a from “not a single one more” in the case of TB – but this could be a spectacular step in the right direction.Not his real nameAlthough from an effective altruism perspective the suffering caused by malaria is worse, because malaria kills mostly young children, whereas TB kills people of all ages.I’m not sure about this, please correct me if I’m wrongThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
NickLaing https://forum.effectivealtruism.org/posts/rRTmMaShCeRpBFCfx/bill-gates-400-million-dollar-bet-first-tuberculosis-vaccine Sun, 02 Jul 2023 15:10:57 +0000 EA - Bill Gates' 400 million dollar bet - First Tuberculosis Vaccine in 100 years? by NickLaing NickLaing https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:01 no full 6460
Ykq4hEQwEFdspAMyr EA - Benefits of being rejected from CEA’s Online team by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Benefits of being rejected from CEA’s Online team, published by Ben West on July 3, 2023 on The Effective Altruism Forum.Summary:If you apply and are rejected, then I (and presumably other hiring managers) will often either not remember you — or actually be enthusiastic about you (e.g if you made it to later hiring rounds) if anyone asks in the future.People who have been rejected from the CEA Online team have gotten concrete job opportunities that they would not have gotten if they had never appliedIf you're rejected, I can also give you advice and connect you to other opportunities.Rejection is hard, but fear of it shouldn't stop you from applying.Note: I wrote this year ago and temporal references like “the most recent” should be interpreted relative to that. Importantly: I no longer am on the Online team, although I think this post is still roughly accurate for them.I sometimes talk to people who are nervous about applying to EA organizations because they think a rejection could damage their chances at not just the organization they apply to but all EA organizations.This fear is not completely ungrounded – EA is a small community, and hiring managers do occasionally talk to each other about candidates. As with most worries with how others think of you though, "You probably wouldn’t worry about what people think of you if you could know how seldom they do": 100+ people apply to CEA per month, and my memory is pretty bad. The people grading applications will probably not remember people whose applications they reject, especially if that happens early on, and if it happens later, that likely means that they saw something promising. (There are also other costs to applying, like your time and energy.)I wanted to point out though that being rejected as a candidate, particularly if you make it to the final round of a hiring process, can actually be substantially positive.Here are some things that happened to rejected candidates in some of the hiring rounds I ran:Hired by CEA as a contractor for a position similar to what they applied to (the position we were hiring for ended up needing slightly more than 1 FTE of work, so we hired them to do some of the overfill)Received a grant to quit their job and work independently on something similar to what they applied to after I encouraged them to do so and recommended the grantRepeatedly consulted CEA on their area of expertise as a volunteer (though we offered to pay them, they just declined payment)Received a grant from LTFF to skill up after I endorsed them, based on what I learned during their hiring processI also try to give useful feedback to candidates who are rejected. Here is an email I recently received from a rejected applicant, which Carly (the applicant) kindly agreed to share publicly:Hi Ben,Hope you are well! I was in the Project Coordinator search at CEA a few months ago and wanted to drop you a quick note of gratitude.I want to thank you for your part in helping to transition my career into EA. I was very hopeful about getting the role at CEA and can easily imagine a scenario where a typical rejection letter - short, generic, or dismissive - may have dampened my enthusiasm and at worst lowered my spirits enough to not go to EAGx Boston the following day. Your rejection letter, however, was so insightful and encouraging that it had the opposite effect. You motivated me to keep learning and networking and to go to the conference which started the chain of events that led me to a position that I'm very excited to start at Alvea in a few weeks.All of this is just to say thanks and no need for a reply! I know those letters take time and are not expected or necessary for unsuccessful candidates but they make an impact!Carly TryensPeople in the EA network tend to be both inexperienced and self-mo...

]]>
Ben_West https://forum.effectivealtruism.org/posts/Ykq4hEQwEFdspAMyr/benefits-of-being-rejected-from-cea-s-online-team Mon, 03 Jul 2023 22:00:06 +0000 EA - Benefits of being rejected from CEA’s Online team by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:02 no full 6471
85gP3mxnpDuqKgSfk EA - Evaluating claims about ContraPest by Spencer Ericson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evaluating claims about ContraPest, published by Spencer Ericson on July 3, 2023 on The Effective Altruism Forum.TL;DR: We conclude that there is some evidence against ContraPest’s claims that the product is “humane,” “it is not an endocrine disruptor,” and “its effects are reversible.” The literature suggests that the active ingredients in ContraPest may cause bone loss, cancer, organ injury, immunosuppression, and endocrine disruption in rats. Any of these could be a risk factor for decreased well-being in rats. VCD may result in permanent sterility in rats over time or at high doses, and at low doses, it results in irreversible premature reproductive senescence.Epistemic status: Uncertain. This was about a 40-hour investigation by me, Spencer, a person with ~no biology background. I conclude that more research is needed into every question I pose. This post will be edited if new information comes in.Quirks: I say “we” because writing “I” in formal writing makes me feel weird. Where I say “We are not aware of studies.” I mean that I searched 3-4 relevant key terms on Google Scholar about that question and came up empty.Acknowledgements: I am grateful to Constance Li for sponsoring this project. I am grateful to Contance Li, Michael St. Jules, and Holly Elmore for feedback. I am grateful to two interviewees for their help. (None of the acknowledged people have endorsed or rigorously fact-checked my claims herein.)IntroductionContraPest is the first and only current EPA-approved contraceptive for Norway rats and roof rats. It was approved in 2016 and brought to market in 2021. ContraPest has received increasing attention; for example, it was featured in Time in March 2023. It is touted as a humane option for rat population control that could replace many use cases for rodenticides. Rodenticides tend to cause prolonged, painful deaths, both for rats and for other wild animals, so ContraPest is thought to be an opportunity to improve wild animal welfare. Its manufacturer, SenesTech, claims that “ContraPest is a non-lethal rodenticide/pesticide that does not bioaccumulate, it is not an endocrine disruptor, and its effects are reversible.”ContraPest contains two chemicals in small amounts, 4-vinylcyclohexene diepoxide (VCD) and triptolide. VCD reduces fertility in female rats by depleting ovarian follicles, which mature into eggs. Triptolide reduces fertility in male and female rats by interrupting the development of ovarian follicles and sperm. However, as we will see below, these chemicals might have side effects like permanent reductions in fertility, cancer, bone loss, organ injury, immunosuppression, and hormone disruption at high doses. More research in needed into the one-time and repeated doses where these side effects become apparent in rats, animals that eat rats, other animals that might come into contact with ContraPest, and humans.We have yet to see a post that thoroughly and critically evaluates SenesTech’s claims about ContraPest. However, we have come to know that “many cities” and at least one animal sanctuary that ran pilots of ContraPest did not continue to use the product due to their concerns about its effectiveness and side effects. To better understand how this product affects wild animal welfare, we attempt to evaluate SenesTech’s claims below. You can skip to the Conclusion, or read the TL;DRs at the top of each section.(We wrote this report after reading Holly Elmore et al.’s sequence for Rethink Priorities, “The rodenticide reduction sequence.” You can read our summary of the sequence here.)QuestionsWe have the following questions about ContraPest. We will only be able to answer some of them.Why did the cities and organizations that tried ContraPest stop using it?At what dose is ContraPest toxic?Is it true that ContraPest causes n...

]]>
Spencer Ericson https://forum.effectivealtruism.org/posts/85gP3mxnpDuqKgSfk/evaluating-claims-about-contrapest Mon, 03 Jul 2023 21:17:33 +0000 EA - Evaluating claims about ContraPest by Spencer Ericson Spencer Ericson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 48:56 no full 6470
HjwJaBuxfHbrr3CbS EA - Getting over my fear that going vegan would make me weak and unhealthy by Drew Housman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting over my fear that going vegan would make me weak and unhealthy, published by Drew Housman on July 3, 2023 on The Effective Altruism Forum.I recently read a Richard Hanania article titled "Eating Animals and the Virtues of Honesty." It talks about the moral atrocity that is factory farming and how that relates to Hanania's personal dietary choices. I commend him for calling out how bad most animal agriculture is.Hanania understands that there is no excuse to torture so many sentient beings just because we like how they taste. He concedes it could very well be one of the worst crimes in human history. Given all that, he wishes he could be vegan.Unfortunately, he must continue to eat animals. If he stops, he will ruin his body composition:I just grant to the vegan that he has won the argument, and he is morally superior from a utilitarian perspective, but I want to be thin and have broad shoulders.His argument, as I understand it, boils down to the idea that he needs to eat animals in order be fit, strong, and healthy. His position made me think of these great tweets:I am tempted to poke fun at Hanania, but I used to have the exact same worry! So I am going to do the ethical thing and talk about my personal experience as a jacked vegan.I know it's easy to google “vegan weightlifter” or "vegan athlete" or visit r/veganfitness to find examples vegans with shoulders so broad they'd make Hanania weep.My goal with this post is to normalize the idea that even late 30’s, tech working, regular guy vegans can be muscular and healthy. Sorry if this comes across as bragging, or if it's cringey. It just struck me while reading the Hanania piece that more people might go vegan if they felt they could do so without withering away. I want Hanania to know he can have his cake and eat it too! Were he to internalize that, how many people amongst his large audience could he influence to change their diets? How much suffering could he reduce?I don't think it would be easy, possible, or desirable for every vegan to have big muscles. Nor do I think that a fully vegan diet is the healthiest choice for everyone. All I'm saying is that if what's holding you back from going vegan is a deep rooted fear that doing so will cause you to have small shoulders, I think you should reconsider.Going vegan, maintaining my strengthWhen I first considered going vegan, I was also worried I'd become thin and weak. I'd read Stephan Guyenet's account of going vegan for 6 months. He struggled a lot, and it really gave me pause.Old friends who came to visit during that period did repeatedly ask me if I was sick, because of the amount of weight I had lost-- largely muscle. I had grown paler as well.Health and strength are priorities for me. I didn't want to become pale and frail. But after I got my first puppy I decided that I could no longer tolerate the idea of eating other sentient beings, so I stopped.I went vegetarian for a month, then vegan. I figured I could always start eating meat again if my body fell apart.The whole getting pale and weak thing just.didn’t happen to me. At all. I kept expecting to lose weight, or at least lose muscle, but I never did.I have actually gained some muscle and weight over my 4 years as a vegan, while keeping to roughly the same workout routine. Maybe I'm just lucky. Or maybe the loudest voices online are the ones who either had a bad experience going vegan or are convinced they will if they try. Maybe it's not as hard as everyone makes it out to be.Here's me just before going vegan:And here I am after years of eating vegan:At 36, I can currently bench and squat more than when I was an avid meat eater in my early twenties.But am I destroying my health?To dispel the notion that I am only superficially healthy, I'll share some bloodwork. This section ca...

]]>
Drew Housman https://forum.effectivealtruism.org/posts/HjwJaBuxfHbrr3CbS/getting-over-my-fear-that-going-vegan-would-make-me-weak-and Mon, 03 Jul 2023 16:56:23 +0000 EA - Getting over my fear that going vegan would make me weak and unhealthy by Drew Housman Drew Housman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:13 no full 6472
NDRBZNc2sBy5MC8Fw EA - Recovering from Rejection (written for the In-Depth EA Program) by Aaron Gertler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Recovering from Rejection (written for the In-Depth EA Program), published by Aaron Gertler on July 3, 2023 on The Effective Altruism Forum.Formerly known as "Aaron's Epistemic Stories", which stops working as a title when it's on the Forum and people aren't required to read it.What is this post?A story about how I reacted poorly to my first few EA job rejections, and what I learned from reflecting on my mistakes.Context: When I worked at CEA, my colleague was working on EA Virtual Program curricula. She asked me to respond to this prompt:"What made you start caring about having good epistemics? What made you start trying to improve your epistemics? Why?"I wrote a meandering, stream-of-consciousness response and shared it. I assumed it would either be ignored or briefly summarized as part of a larger piece. Instead, it — went directly to the curriculum for the In-Depth Program?That was a surprise. It was a much bigger surprise when people started reaching out to tell me how much it had helped them: maybe a dozen times over the last two years. From the emails alone, it seems to be the most important thing I've written.So I'm sharing a lightly edited version on the Forum, in case it helps anyone else.Recovering from rejectionWritten in a bit of a rush. But I think that captures how it felt to be me in the throes of epistemic upheaval.After I graduated from college, I took the most profitable job I could find, at a company in a cheap city. I wanted to save money so I could be flexible later. So far, so good.I started an EA group at the company, which kept me thinking about effective altruism on a regular basis even without my college group. It wasn’t nearly as fun to run as the college group — people who work full-time jobs don't like extra meetings, and my co-organizers kept getting other jobs and leaving. But I still felt like “part of EA”.Eventually, I decided to move on from the company. So I applied to GiveWell, got to the very last step of the application process. and got rejected.Well, I thought, I guess it makes sense that I’m not qualified for an EA job. My grades weren’t great, and I was never a big researcher in college. Time to do something else.This is a story about a mistake. Do you see it?I moved to San Diego and spent the next 18 months as a freelance tutor and writer, feeling generally dissatisfied with my life. My local group met rarely and far away; I had no car, I was busy with family stuff, and I became less and less engaged with EA.Through an old connection, I was introduced to a couple who ran an EA-aligned foundation and lived nearby. I ended up doing part-time operations work for them — reading papers, emailing charities with questions, and other EA-flavored stuff.This boosted my confidence and led me to think harder about my career, though I kept running into limitations. For example, GiveDirectly’s CEO wanted to hire a research assistant for his lab at UCSD, but I’d totally forgotten my old R classes and wasn’t a good candidate, despite having a great connection from my operations work.There goes maybe the best opportunity I’ll ever get as a washed-up 24-year-old. Sigh.In early 2018, I got an email from someone at Open Philanthropy, inviting me to apply for a new research position. I was excited by the sudden opportunity and threw everything I had into the process. I made it to the last step. and got rejected.Well, I thought, I guess it makes sense that I’m still not qualified for an EA job. I’m not a kid with limitless potential anymore. I haven’t learned anything important since college. I guess it’s back to finding a coding bootcamp and trying to get a “real job”.Is the mistake standing out yet?This was a major setback; for a while, I was barely engaged in EA. But I did happen to see an 80,000 Hours page with a survey...

]]>
Aaron Gertler https://forum.effectivealtruism.org/posts/NDRBZNc2sBy5MC8Fw/recovering-from-rejection-written-for-the-in-depth-ea Mon, 03 Jul 2023 16:48:24 +0000 EA - Recovering from Rejection (written for the In-Depth EA Program) by Aaron Gertler Aaron Gertler https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:20 no full 6473
zRLhq5A7eE6ayA9D5 EA - Mistakes in the moral mathematics of existential risk (Part 1: Introduction and cumulative risk) - Reflective altruism by BrownHairedEevee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mistakes in the moral mathematics of existential risk (Part 1: Introduction and cumulative risk) - Reflective altruism, published by BrownHairedEevee on July 3, 2023 on The Effective Altruism Forum.This is the first part of "Mistakes in the moral mathematics of existential risk", a series of blog posts by David Thorstad that aims to identify ways in which estimates of the value of reducing existential risk have been inflated. I've made this linkpost part of a sequence.Even if we use . conservative estimates, which entirely ignor[e] the possibility of space colonization and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 1016 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives.Nick Bostrom, “Existential risk prevention as global priority”1. IntroductionThis is Part 1 of a series based on my paper, “Mistakes in the moral mathematics of existential risk.”(Almost) everyone agrees that human extinction would be a bad thing, and that actions which reduce the chance of human extinction have positive value. But some authors assign quite high value to extinction mitigation efforts. For example:Nick Bostrom argues that even on the most conservative assumptions, reducing existential risk by just one millionth of one percentage point would be as valuable as saving a hundred million lives today.Hilary Greaves and Will MacAskill estimate that early asteroid-detection efforts saved lives at an expected cost of fourteen cents per life.These numbers are a bit on the high side. If they are correct, then on many philosophical views the truth of longtermism will be (nearly) a foregone conclusion.I think that these, and other similar estimates, are inflated by many orders of magnitude. My paper and blog series “Existential risk pessimism and the time of perils” brought out one way in which these numbers may be too high: they will be overestimates unless the Time of Perils Hypothesis is true.My aim in this paper is to bring out three novel ways in which many leading estimates of the value of existential risk mitigation have been inflated. (The paper should be online as a working paper within a month.)I’ll introduce the mistakes in detail throughout the series, but it might be helpful to list them now.Mistake 1: Focusing on cumulative risk rather than per-unit risk.Mistake 2: Ignoring background risk.Mistake 3: Neglecting population dynamics.I show how many leading estimates make one, or often more than one of these mistakes.Correcting these mistakes in the moral mathematics of existential risk has two important implications.First, many debates have been mislocated, insofar as factors such as background risk and population dynamics are highly relevant to the value of existential risk mitigation, but these factors have rarely figured in recent debates.Second, many authors have overestimated the value of existential risk mitigation, often by many orders of magnitude.In this series, I review each mistake in turn. Then I consider implications of this discussion for current and future debates. Today, I look at the first mistake, focusing on cumulative rather than per-unit risk.2. Bostrom’s conservative scenarioNick Bostrom (2013) considers what he terms a conservative scenario in which humanity survives for a billion years on the planet Earth, at a stable population of one billion humans.We will see throughout this series that is far from a conservative scenario. Modeling background risk (correcting the second mistake) will put pressure on the likelihood of humanity surviving for a billion years. And modeling population dynamics (correcting the third mistake) will raise the possi...

]]>
BrownHairedEevee https://forum.effectivealtruism.org/posts/zRLhq5A7eE6ayA9D5/mistakes-in-the-moral-mathematics-of-existential-risk-part-1 Mon, 03 Jul 2023 14:34:14 +0000 EA - Mistakes in the moral mathematics of existential risk (Part 1: Introduction and cumulative risk) - Reflective altruism by BrownHairedEevee BrownHairedEevee https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:47 no full 6475
awmsvYwXk2yjCN3uT EA - Mistakes in the moral mathematics of existential risk (Part 2: Ignoring background risk) - Reflective altruism by BrownHairedEevee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mistakes in the moral mathematics of existential risk (Part 2: Ignoring background risk) - Reflective altruism, published by BrownHairedEevee on July 3, 2023 on The Effective Altruism Forum.This is the second part of "Mistakes in the moral mathematics of existential risk", a series of blog posts by David Thorstad that aims to identify ways in which estimates of the value of reducing existential risk have been inflated. I've made this linkpost part of a sequence.In the decades to come, advanced bioweapons could threaten human existence. Although the probability of human extinction from bioweapons may be low, the expected value of reducing the risk could still be large, since such risks jeopardize the existence of all future generations. We provide an overview of biotechnological extinction risk, make some rough initial estimates for how severe the risks might be, and compare the cost-effectiveness of reducing these extinction-level risks with existing biosecurity work. We find that reducing human extinction risk can be more cost-effective than reducing smaller-scale risks, even when using conservative estimates. This suggests that the risks are not low enough to ignore and that more ought to be done to prevent the worst-case scenarios.Millett and Snyder-Beattie, “Existential risk and cost-effective biosecurity”1. IntroductionThis is Part 2 of a series based on my paper “Mistakes in the moral mathematics of existential risk”.Part 1 introduced the series and discussed the first mistake: focusing on cumulative rather than per-unit risk. We saw how bringing the focus back to per-unit rather than cumulative risk was enough to change a claimed 'small’ risk reduction of one millionth of one percent into an astronomically large reduction that would drive risk to almost one in a million per century.Today, I want to focus on a second mistake: ignoring background risk. The importance of modeling background risk is one way to interpret the main lesson of my paper and blog series “Existential risk pessimism and the time of perils.” Indeed, in Part 7 of that series, I suggested just this interpretation.It turns out that blogging is sometimes a good way to write a paper. Today, I want to expand my discussion in Part 7 of the existential risk pessimism series to clarify the second mistake (ignoring background risk) and to show how it interacts with the first mistake (focusing on cumulative risk) in a leading discussion of cost-effective biosecurity.Some elements of this discussion are lifted verbatim from my earlier post. In my defense, I remind my readers that I am lazy.2. Snyder-Beattie and Millett on cost-effective biosecurityAndrew Snyder-Beattie holds a DPhil in Zoology from Oxford, and works as a Senior Program Officer at Open Philanthropy. Snyder-Beattie is widely considered to be among the very most influential voices on biosecurity within the effective altruist community.Piers Millett is a Senior Research Fellow at the Future of Humanity Institute. Millett holds advanced degrees in science policy, research methodology and international security, and has extensive industry experience in biosecurity.Millett and Snyder-Beattie’s paper, “Existential risk and cost-effective biosecurity”, is among the most-cited papers on biosecurity written by effective altruists. The paper argues that even very small reductions in existential risks in the biosecurity sector (henceforth, 'biorisks’) are cost-effective by standard metrics.Millett and Snyder-Beattie estimate the cost-effectiveness of an intervention as C/(NLR), where:C is the cost of the intervention.N is “the number of biothreats we expect to occur in 1 century”.L is “the number of life-years lost in such an event”.R is “the reduction in risk [in this century only] achieved by spending . C”.Millett and Snyder Be...

]]>
BrownHairedEevee https://forum.effectivealtruism.org/posts/awmsvYwXk2yjCN3uT/mistakes-in-the-moral-mathematics-of-existential-risk-part-2 Mon, 03 Jul 2023 09:33:33 +0000 EA - Mistakes in the moral mathematics of existential risk (Part 2: Ignoring background risk) - Reflective altruism by BrownHairedEevee BrownHairedEevee https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:08 no full 6474
pdMjPuddtHeLSBDiF EA - Apply to fall policy internships (we can help) by Elika Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to fall policy internships (we can help), published by Elika on July 3, 2023 on The Effective Altruism Forum.Many U.S. congressional internship applications are closing in the next few weeks for Fall (Sep- Dec) internships. This is a relatively low-effort, high reward thing to do if you you’re interested in testing your fit for policy.I (Elika) interned in my congressional office for a semester just from off-the-cuff applying to test my fit and build my resume. This experience has been incredibly helpful (I now work for the US government and it gives me some more credibility in D.C). Many applications are closing within the next 1-2 weeks. We’re offering to support anyone considering applying.This is a particularly good fit if you’re:Interested in working in policy, politics, or governance solutions to problemsAn undergraduate studentAble to work part-time (10+ hours per week)If you think this could be a good opportunity, we recommend:Reading this guide to internships which has information on which offices to choose from and how to apply and more (including this helpful link of all the Congressional office internships)Making a list of offices you think you’d be a good fit forApplying! When in doubt, apply - there’s no harm in applying if you’re serious about exploring this opportunity. We’re offering to support if you’re interested.Sign up to get support applying hereThings we can help with:Whether or not you’d be a good fit for the positionsReview your resume, cover letter & offices you’re interested inAccountability for submitting applications by the deadlineThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Elika https://forum.effectivealtruism.org/posts/pdMjPuddtHeLSBDiF/apply-to-fall-policy-internships-we-can-help Mon, 03 Jul 2023 08:37:17 +0000 EA - Apply to fall policy internships (we can help) by Elika Elika https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:43 no full 6477
3sJbwpGbAu5tpGkqD EA - Douglas Hoftstadter concerned about AI xrisk by Eli Rose Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Douglas Hoftstadter concerned about AI xrisk, published by Eli Rose on July 3, 2023 on The Effective Altruism Forum.Douglas Hofstadter is best known for authoring Godel, Escher, Bach, a book on artificial intelligence (among other things) which is sort of a cult classic. In a recent interview, he says he's terrified of recent AI progress and expresses beliefs similar to many people who focus on AI xrisk.Hoftstadter: The accelerating progress has been so unexpected that it has caught me off guard... not only myself, but many many people. There's a sense of terror akin to an oncoming tsunami that could catch all of humanity off guard. It's not clear whether this could mean the end of humanity in the sense of the systems we've created destroying us, it's not clear if that's the case but it's certainly conceivable. If not, it's also that it just renders humanity a small, almost insignificant phenomenon, compared to something that is far more intelligent and will become as incomprehensible to us as we are to cockroaches.Interviewer: That's an interesting thought.Hoftstadter: Well I don't think it's interesting. I think it's terrifying. I hate it.I think this is the first time he's publicly expressed this, and his views seem to have changed recently. Previously he published this which listed a bunch of silly questions GPT-3 gets wrong and concluded thatThere are no concepts behind the GPT-3 scenes; rather, there’s just an unimaginably huge amount of absorbed text upon which it draws to produce answersthough it ended with a gesture to the fast pace of change and inability to predict the future. I randomly tried some of his stumpers on GPT-4 and it gets them right (and I remember being convinced when this came out that GPT-3 could get them right too with a bit of prompt engineering, though I don't remember specifics).I find this a bit emotional because of how much I loved Godel, Escher, Bach in early college. It was my introduction to "real" math and STEM, which I'd previously disliked and been bad at; because of this book, I majored in computer science. It presented a lot of philosophical puzzles for and problems with AI, and gave beautiful, eye-opening answers to them. I think Hofstadter expected us to understand AI much better before we got to this level of capabilities; expected more of the type of understanding his parables and thought experiments could sometimes create.Now I work professionally on situations along the lines of what he describes in the interview (and feel a similar way about them) — it's a weird way to meet Hofstadter again.See also Gwern's post on LessWrong.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Eli Rose https://forum.effectivealtruism.org/posts/3sJbwpGbAu5tpGkqD/douglas-hoftstadter-concerned-about-ai-xrisk Mon, 03 Jul 2023 07:52:44 +0000 EA - Douglas Hoftstadter concerned about AI xrisk by Eli Rose Eli Rose https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:34 no full 6476
t6JzBxtrXjLRufE8o EA - Who Was the Funder that Counterfactually Resulted in LEEP Starting? by Joey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who Was the Funder that Counterfactually Resulted in LEEP Starting?, published by Joey on July 4, 2023 on The Effective Altruism Forum.Lead Exposure Elimination Project (LEEP) is an outstanding Charity Entrepreneurship-incubated charity recognized externally for its impactful work by RP, Founders Pledge, Schmidt Futures, and Open Philanthropy. It's one of the clearest cases of new charities having a profound impact on the world. However, everything is clear in hindsight; it now seems obvious that this was a great idea and team to fund, but who funded LEEP at the earliest stage? Before any of the aforementioned bodies would have considered or looked at them, who provided funding when $60k made the difference between launching and not existing?The CE Seed Network, so far, has been a rather well-kept secret. They are the first people to see each new batch of CE-incubated charities and make a decision on whether and how much to support them. A handful of donors supported LEEP in its earliest days, culminating in the excellent charity we see today. Some of them donated anonymously, never seeking credit or the limelight, just quietly making a significant impact. Others engaged deeply and regularly with the team, eventually becoming trusted board members. Historically, the Seed Network has been a small group (~30) of primarily E2G-focused EAs, invited by the CE team or alumni from the CE program to join. However, now we are opening it up for expressions of interest for those who might want to join in future rounds. Our charity production has doubled (from 5 to 10 charities a year) and although our Seed Network has grown, there is still room for more members to join to support our next batches of charities.We have now created a website to describe how it works. On that website, there's an application form for those who might be a good fit to be a member in the future. It’s not a great fit for everyone as it focuses on the CE (near-termist) cause areas and donors who could donate over $10k a year to new charities and can make a decision on whether and whom to fund with how much in a short period of time when we send out the newest project proposals (~9 days). But for those who fit, we think it's one of the most impactful ways to donate.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Joey https://forum.effectivealtruism.org/posts/t6JzBxtrXjLRufE8o/who-was-the-funder-that-counterfactually-resulted-in-leep Tue, 04 Jul 2023 20:41:45 +0000 EA - Who Was the Funder that Counterfactually Resulted in LEEP Starting? by Joey Joey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:17 no full 6483
upgF9yNdtWzt4kgxj EA - Three mistakes in the moral mathematics of existential risk (David Thorstad) by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Three mistakes in the moral mathematics of existential risk (David Thorstad), published by Global Priorities Institute on July 4, 2023 on The Effective Altruism Forum.AbstractLongtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation: focusing on cumulative risk rather than period risk; ignoring background risk; and neglecting population dynamics. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation.I use this discussion to draw four positive lessons for the study of existential risk: the importance of treating existential risk as an intergenerational coordination problem; a surprising dialectical flip in the relevance of background risk levels to the case for existential risk mitigation; renewed importance of population dynamics, including the dynamics of digital minds; and a novel form of the cluelessness challenge to longtermism.IntroductionSuppose you are an altruist. You want to do as much good as possible with the resources available to you. What might you do? One option is to address pressing short-term challenges. For example, GiveWell (2021) estimates that $5,000 spent on bed nets could save a life from malaria today.Recently, a number of longtermists (Greaves and MacAskill 2021; MacAskill 2022b) have argued that you could do much more good by acting to mitigate existential risks: risks of existential catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom 2013, p. 15). For example, you might work to regulate chemical and biological weapons, or to reduce the threat of nuclear conflict (Bostrom and Cirkovi ' c' 2011; MacAskill 2022b; Ord 2020).Many authors argue that efforts to mitigate existential risk have enormous value. For example, Nick Bostrom (2013) argues that even on the most conservative assumptions, reducing existential risk by just one-millionth of one percentage point would be as valuable as saving a hundred million lives today. Similarly, Hilary Greaves and Will MacAskill (2021) estimate that early efforts to detect potentially lethal asteroid impacts in the 1980s and 1990s had an expected cost of just fourteen cents per life saved. If this is right, then perhaps an altruist should focus on existential risk mitigation over short term improvements.There are many ways to push back here. Perhaps we might defend population-ethical assumptions such as neutrality (Naverson 1973; Frick 2017) that cut against the importance of creating happy people. Alternatively, perhaps we might introduce decision-theoretic assumptions such as risk aversion (Pettigrew 2022), ambiguity aversion (Buchak forthcoming) or anti-fanaticism (Monton 2019; Smith 2014) that tell against risky, ambiguous and low-probability gambles to prevent existential catastrophe. We might challenge assumptions about aggregation (Curran 2022; Heikkinen 2022), personal prerogatives (Unruh forthcoming), and rights used to build a deontic case for existential risk mitigation. We might discount the well-being of future people (Lloyd 2021; Mogensen 2022), or hold that pressing current duties, such as reparative duties (Cordelli 2016), take precedence over duties to promote far-future welfare.These strategies set themselves a difficult task if they accept the longtermist’s framing on which existential risk mitigation is not simply better, but orders of magnitude better than competing short-termist interventions. Is it really so obvious ...

]]>
Global Priorities Institute https://forum.effectivealtruism.org/posts/upgF9yNdtWzt4kgxj/three-mistakes-in-the-moral-mathematics-of-existential-risk Tue, 04 Jul 2023 15:47:43 +0000 EA - Three mistakes in the moral mathematics of existential risk (David Thorstad) by Global Priorities Institute Global Priorities Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:38 no full 6484
eJiakpK4SJAmRWjz9 EA - GWWC Newsletter: June 2023 by Giving What We Can Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Newsletter: June 2023, published by Giving What We Can on July 4, 2023 on The Effective Altruism Forum.Hello and welcome to our June newsletter!Pop quiz! If you travelled to visit every person who's taken the GWWC pledge, how many different countries would you visit?Answer: 100 countries! With the newest addition of Uzbekistan , Giving What We Can members are present in 100 countries worldwide! Turns out the idea of giving to help others effectively has universal appeal!Even though our movement is growing, we need your help - talking to your friends, family and colleagues is one of the best ways to help us change the norms around giving, which in turn means faster progress on some of the world’s biggest issues.We’d love to know how we can help you talk to the people in your life about high-impact charities and how we can help you advocate. (We already have lots of ideas and resources here)If you have any ideas about what you would find helpful, simply reply or send me a quick email at: grace.adams@givingwhatwecan.org.Below you’ll find loads of interesting updates from our partner charities and other news we think you’ll like!With gratitude, - Grace Adams & the Giving What We Can teamNews & UpdatesCommunityOur Executive Director Luke Freeman recently published a post on the EA Forum about the role of individuals in helping to fund high-impact projects and charities as well as hosting an AMA (Ask Me Anything) about his work, life and more!Director of Research, Sjir Hoeijmakers published a post on the EA Forum with “Four claims about the role of effective giving in the EA Community”.Power for Democracies is a new non-profit democracy charity evaluator based in Berlin, Germany, and operating globally. They are looking to hire 5-6 democracy enthusiasts to form their ‘Knowledge & Research Team’. The objective of the team is twofold: To build and execute a ‘knowledge-building roadmap’ that will lead to a growing set of methodologies for identifying highly effective pro-democracy interventions and potential NGOs to apply them. And to use these methodologies to generate giving recommendations for the international community of democracy-focused, effectiveness-driven donors.Magnify Mentoring is still accepting mentee applications from women, non-binary, and trans people of any gender who are enthusiastic about pursuing high-impact career paths for the next day or two. On average, mentees and mentors meet once a month for 60-90 minutes. Magnify Mentoring offers mentees access to a broader community with a wealth of professional and personal expertise. You can find out more here and apply here.Evaluators, grantmakers and incubatorsUpdates to ACE’s Charity Evaluation Criteria in 2023: Animal Charity Evaluators (ACE) is entering its 2023 charity evaluation season! This is the time of year when ACE works to identify charities that can do the most good for animals with two years of additional funding. To provide more transparency and insight into its evaluation process, ACE is sharing some changes it made to its four charity evaluation criteria this year.ACE's Updated Strategic Plan: One year ago, in 2022, ACE developed a strategic plan for the period of 2022–2024. This plan, created collectively by ACE staff under the leadership of the Acting Executive Director and approved by the board of directors, was the result of the hard work and dedication of a severely understaffed team. It represented what was needed then. Things have changed since last year. ACE added several talented individuals to their team, including new leadership and board members. ACE now has an updated strategic plan and is looking forward to testing its assumptions and delivering results.GiveWell CEO and co-founder Elie Hassenfeld was interviewed on the 80,000 Hours podcast about newer areas of ...

]]>
Giving What We Can https://forum.effectivealtruism.org/posts/eJiakpK4SJAmRWjz9/gwwc-newsletter-june-2023 Tue, 04 Jul 2023 11:39:51 +0000 EA - GWWC Newsletter: June 2023 by Giving What We Can Giving What We Can https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:29 no full 6485
Sp6rGZcmqCc7FMQCL EA - An Impact Roadmap by Aaron Boddy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Impact Roadmap, published by Aaron Boddy on July 5, 2023 on The Effective Altruism Forum.One of the things that surprised me when trying to develop potential interventions for Shrimp Welfare Project, was trying to understand a clear pathway from A (wanting to help shrimps cost-effectively) to Z (having a demonstrably impactful intervention scaling-up).As a result of this, I spent a lot of time reading up on and learning about Programme Development Methodologies - trying to Frankenstein something together that I thought would make sense for us. I want to share this work in case it’s useful to other NGOs (in particular for NGOs in contexts similar to ours). I hope it’s useful to youA quick note - Though this Roadmap is written in such a way as to outline the steps from A to B to C, etc. in reality, some of these steps may run in parallel (or at least, overlap in some ways). For example, one member of your team may be conducting an Evidence Review, while others are preparing to do some on-the-ground Needs Assessment work. In particular, throughout the Roadmap, you’ll likely continue to revisit and reflect on your Theory of Change, as well as updating and refining your Impact Monitoring.I also wrote some "Appendix posts" to complement this one that some people may find useful:Programme Development Methodologies - Summaries of all the existing methodologies I used as a reference when putting this Roadmap togetherDecision-Making Tools - These can be referred to throughout the Roadmap when a decision needs to be madeCreative-Thinking Tools - Useful tools to help you brainstormExecutive Summary1. Key Decisions - The most important decisions we’ll make, and how to make themThere are some useful Decision-Making and Creative-Thinking that it makes sense to learn early on, as they can be used throughout the roadmap. Not least, to help you with some of the most important decisions you’ll make on the path to impact, such as defining your values and choosing your career area based on personal fit. If starting a nonprofit is the best fit for you, then we can move on to deciding on a charity idea, finding a co-founder and selecting your (initial) implementation country. We can now begin to more concretely understand our path to impact through an (initial) Theory of Change.2. Theory of Change - How and why the program will workNow that you’ve made your key decisions, you can move on to defining your potential program. You’ll do this by first outlining some Project Objectives, before creating your first (likely somewhat broad at this point) Theory of Change diagram, sketching out the Activities you will undertake which lead to your projects Outputs, resulting in the overall positive Outcomes of your project, which ultimately achieves Impact. This will provide some clarity around the impact you intend to make, as well as the pathway to that impact. It’s likely though that there are some gaps in your knowledge and some uncertainties and assumptions in your Theory of Change, which brings us to an Evidence Review.3. Evidence Review - A critical review of secondary sources related to our programWe now have an idea of how we intend to make an impact. It’s time to review any evidence that could help us more concretely answer some of our uncertainties and clarify our assumptions. To do this, we’ll undertake a literature review to help us answer these questions (i.e. What interventions have been tested or studied in relation to this program? Under what conditions were they implemented? Were they effective? Why or why not?). Still, however, we will need to get a better understanding of our chosen context, so we’ll go on to undertake a Needs Assessment.4. Needs Assessment - Seeking out perspectives from the populations we aim to helpWe now have a good overview of the relevant e...

]]>
Aaron Boddy https://forum.effectivealtruism.org/posts/Sp6rGZcmqCc7FMQCL/an-impact-roadmap Wed, 05 Jul 2023 12:57:23 +0000 EA - An Impact Roadmap by Aaron Boddy Aaron Boddy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 38:28 no full 6497
F2MfbmRAiMx2PDhaD EA - Some Observations on Alcoholism by Devin Kalish Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Observations on Alcoholism, published by Devin Kalish on July 8, 2023 on The Effective Altruism Forum.This is a tough one to post, it’s also a little off topic for the forum. I went back and forth a great deal about whether to crosspost it anyway, and ultimately decided to, since I have in some ways posted on this topic here before, and since there are several parts that are directly relevant to Effective Altruism (the last three sections all have substantial relevance of some sort). Doing this also makes it easier for this to be relatively public, and so to get some difficult conversations over with. The short version of it is that I’ve been an alcoholic, mostly in secret, for about three years now. This blogpost is a lengthy dive into different observations about it, and ways it has changed my mind on various issues. I don’t want to post the whole thing below because, well, frankly it’s huge and only occasionally relevant, so instead I’m going to post some relevant quotes as people often do with linkposts of things they didn’t write. There’s a good deal more in the link.First, here’s a quick summary of how things got started:“I started drinking during early 2020, when as far as I can tell there was no special drama going on with Effective Altruism, and I had already been involved with it in a similar capacity for a couple years. Most of the alcoholics I’ve met at this point either got started or got significantly worse during the pandemic, I was no different.But the truth is my drinking even then wasn’t terribly dramatic a coping mechanism. There was never anything that meaningfully ‘drove me to drink’. The idea that drinking at this point could land me here wasn’t part of my decision at all, I was just kind of bored and lonely and decided it would be a fun treat to drink a beer or two at night - something I had very rarely done before.As the pandemic wore on, it became something I looked forward to more and more, and eventually I discovered the appeal of hard liquor, which I never switched back from, and eventually I started working on my thesis for my first MA. The combination of my thesis and hard liquor turned a casual habit and minor coping mechanism into something more obviously hard for me to let go of. Over the course of the next three years things got slowly worse from there, and I came to realize more and more how little control I had.It wasn’t some meaningful part of the larger story of my life, replete with a buried darkness in my soul coming to the forefront, or a unique challenge driven by terrible circumstances. I have had to push back in therapy repeatedly on these subtler and more interesting attempts to make something of the event. The truth is sober reflection makes it all look like little more than a meaningless tragedy.”Here are some reflections on what it feels like:“One thing that I think people on the outside of this get wrong is the way conscious and unconscious feelings work in this context. I have been asked many times, especially in therapeutic contexts, what it feels like to get urges to drink. Truthfully, it doesn’t feel like much of anything as far as I can tell. It isn’t like thirst or hunger where there is some identifiable physical sensation. The easiest description of how the ‘urges’ start, is with intrusive thoughts. Not necessarily something as consciously obvious as ‘hey, you should drink’, but often more abstract things about drinking. ‘Do you think you’re going to drink tonight?’ ‘what happens if you drink tonight?’ ‘what happens if you don’t drink tonight?’ ‘if you do this, then you can drink tonight’. Argue with these, and they keep going all day until you drink. Drinking, in fact, is largely how you settle the argument, how you release the building tension.Do not argue with these thoughts, it does not help, o...

]]>
Devin Kalish https://forum.effectivealtruism.org/posts/F2MfbmRAiMx2PDhaD/some-observations-on-alcoholism Sat, 08 Jul 2023 11:49:15 +0000 EA - Some Observations on Alcoholism by Devin Kalish Devin Kalish https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:11 no full 6528
ubXnfYihRtvg59Nk2 EA - Even an obligate omnivore can try to eat less meat by Joe Rogero Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Even an obligate omnivore can try to eat less meat, published by Joe Rogero on July 8, 2023 on The Effective Altruism Forum.I've seen some good posts lately about veganism and health. Here's my two cents.I'm convinced by the arguments that many animals might have conscious experiences. I'd prefer not to torture the ones that do. Unfortunately, if I adopted a fully vegan lifestyle I might well be hospitalized for malnutrition within the year. In this post I'll give an overview of how I balance nutrition, ethics, and allergy constraints, and solicit ideas for improvement.My problemI am demonstrably allergic to a startling variety of foods, including, but not limited to:PeasCeleryUncooked egg yolksChickpeas (and derivatives like hummus)Beans (but not peanuts, which are also legumes)Not deathly allergic, but definitely my-afternoon-is-ruined allergic. A tablespoon of hummus can give me 2-4 hours of intensely distracting pain. I have also experienced an identical, though sometimes inconsistent, reaction to:FishSoylentShellfishEggplantSweet potatoCranberry sauceButternut squashEvery protein shake mix I have ever triedSeveral mixtures containing none of the above but some hard-to-isolate combination of ingredientsYou will notice that several of the items on the above lists are widely considered key sources of vital nutrients in a vegetarian or vegan diet.Coupled with my strong dislike of spicy food, a typical restaurant menu typically contains at most 1-5 entrees I can safely eat. Catered meals often hedge me out entirely. At one (non-EA) event, after listing my allergies in the RSVP, I was served a dinner plate consisting of fish, lima beans, and rice pilaf with peas and celery.Even with these honestly ridiculous constraints, I've found some cheap ways to reduce my dependence on harmfully-produced meat.Reducing harm in an omnivorous dietIt's easier to cut the first half of my meat intake than the second half. Diminishing returns apply. By the same logic, two people eating half as much meat is just as good as one person eating none.Here are some things I've learned while trying to minimize the suffering my diet imposes:Tofu is a fine supplement in many dishes, and rice is a cheap and flexible staple.Green vegetables like spinach and broccoli make good additions to any diet.Nuts and many dried fruits are nutrient-rich. Also, peanuts and cashews add a nice crunchy texture to homemade rice dishes.I can also eat cooked eggs and yogurt, and I've heard some convincing arguments in favor of eggs being less harmful than chicken and dairy less harmful than beef.I'm distrustful of "cruelty-free" branding because so many standards for that kind of thing are false or misleading, but with further research I expect I could find more harm-minimizing options there too.Being allergic to fish really hurts, because I think fish probably suffer less than birds or mammals, if at all. If I could eat more fish instead of meat, I would.I could probably bring myself to eat insect-based protein if it were a) actually available where I live and b) not recognizably still a whole bug at the time. Still working on that angle.I'm tentatively excited about lab-grown meat.My wife and I did some math and determined we aren't getting enough protein in our diets, so we're stepping up meat intake overall, but it's still a significant improvement over my past eating habits. I used to cook lots of chicken.My wife can eat beans and likes them fine. We don't have to eat the same things, especially when leftovers are available.We do a lot of home cooking, buy foods that keep, and try not to waste anything.If you have other ideas for reducing diet-induced harm, please share!ConclusionEven if you can't or don't want to fully expunge meat from your diet, it's possible to signific...

]]>
Joe Rogero https://forum.effectivealtruism.org/posts/ubXnfYihRtvg59Nk2/even-an-obligate-omnivore-can-try-to-eat-less-meat Sat, 08 Jul 2023 10:44:09 +0000 EA - Even an obligate omnivore can try to eat less meat by Joe Rogero Joe Rogero https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:28 no full 6531
dqpR2E4Bw9KEEaWoK EA - Announcing the Existential InfoSec Forum by calebp Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Existential InfoSec Forum, published by calebp on July 8, 2023 on The Effective Altruism Forum.We're pleased to announce the first Existential InfoSec Forum, a half-day event aimed at strengthening the infosec community pursuing important ways to reduce the risk of an existential catastrophe.The forum will feature talks from infosec leaders across AI labs, policy and research, small group discussions and interactive workshops on real-world infosec challenges.This event will significantly emphasise AI as it coincides with a big push at Defcon for AI model red-teaming. We think this is an exciting opportunity to share insights and updates from a rapidly developing space - both for those working in infosec in priority areas for existential risk reduction and those newer to the field.Event DetailsDate: 10 August 2023Time: 2 pm - 8 pm PDT, including dinnerLocation: Las VegasThe conference will take place just before DEFCON 31 and will be a short walk from the main DEFCON venue (though the event is not affiliated with DEFCON). Wim van der Schoot is running the event with support from EA Funds and the Partner Events team at CEA.Why Attend?The forum is designed to:Facilitate knowledge transfer among attendees, improve understanding of the existential infosec landscape, and develop professional skills.Connect attendees to active infosec practitioners working on existential risk for feedback, identify potential job opportunities, and foster mentorship relationships.Enhance situational awareness of attendees to help them make more informed decisions and identify ecosystem gaps and opportunities.Mitigate risks associated with transitioning to direct work in infosec through a supportive network, which can help identify job opportunities.Who Should Attend?People already engaged in infosec within high-priority areas for existential risk reduction.Infosec professionals interested in transitioning into priority areas for existential risk reduction.Policy professionals, members of the national security community, and others working on regulating AI who would benefit from connecting with x-risk-conscious infosec practitioners.Software engineers (or others with a strong background for skilling up in infosec) who are interested in working in a high-priority infosec role for existential risk reduction in the next two years.Tentative Event Schedule2 pm - 3:30 pm: Keynote talks (speakers include Jason Clinton, Anthropic CISO, and others from frontier labs, government agencies, think tanks and researchers)3:30 pm - 4:30 pm: Networking break4:30 pm - 6 pm: Breakout sessions/workshops, e.g. AI infosec policy, AI cybersecurity capabilities and evaluations, how to defend against nation-states, and building a career in infosec.6 pm - 7 pm: Drinks reception7 pm - 8:30 pm: Closing dinnerPlease complete this form before 20 July to register your interest in attending. Note that we can only invite some people who register due to capacity limitations, but early registration will increase your chances of receiving an invitation.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
calebp https://forum.effectivealtruism.org/posts/dqpR2E4Bw9KEEaWoK/announcing-the-existential-infosec-forum Sat, 08 Jul 2023 07:38:28 +0000 EA - Announcing the Existential InfoSec Forum by calebp calebp https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:15 no full 6529
hijgHeBaYZMwo6XEg EA - Why we're funding clubfoot treatment through MiracleFeet by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why we're funding clubfoot treatment through MiracleFeet, published by GiveWell on July 7, 2023 on The Effective Altruism Forum.Author: Miranda Kaplan, GiveWell Communications AssociateFor many people, GiveWell is practically synonymous with our short list of top charities. But the amount of money we've sent to other organizations, doing other important work, has been increasing. In 2021, we made or recommended about $190 million in grants to non-top charity programs, like water treatment and malnutrition treatment, and in 2022, we set up the All Grants Fund specifically so donors could contribute to programs in this category.Source: GiveWell, GiveWell Metrics Report - 2021 Annual Review, p. 9We want to use this blog to give you more frequent, brief insights into these newer areas of our grantmaking before we publish our formal grant write-ups. Below we'll discuss, in light detail, a program that's well outside of our traditional wheelhouse, but that we think significantly improves children's lives - treatment for clubfoot with an organization called MiracleFeet.The grantClubfoot is a congenital (i.e., present from birth) abnormality that causes one or both feet to twist inward and upward. Children born with clubfoot must walk on the sides or backs of their feet, which leads to pain, severely limited mobility, and, reportedly, social stigma. If not corrected, clubfoot is a lifelong condition.[1] In January 2023, we recommended a $5.2 million grant to MiracleFeet to expand its existing clubfoot treatment program in the Philippines and launch two new programs in Chad and Côte d'Ivoire.[2]In the countries where it works, MiracleFeet and its local NGO partners help health facilities diagnose and treat clubfoot, using a process called the Ponseti method. This generally requires placing the affected foot in a series of casts, performing a minor surgical procedure to improve the foot's flexibility, and bracing the foot during sleep for up to five years.[3] MiracleFeet and its partners provide supplies for casting and bracing, train government health care workers in the above procedures, build awareness of clubfoot, and help health systems collect data on treatment.[4] This makes it comparable to a "technical assistance" program: MiracleFeet doesn't perform clubfoot treatment itself; instead, along with its partners, it helps set health facilities up to successfully find and treat clubfoot cases themselves.The brace and custom shoes supplied by MiracleFeet for clubfoot treatment. Photograph courtesy of MiracleFeet.We were excited to recommend this grant because we think it will probably result in a lot more kids being treated for a serious, lifelong condition that nevertheless appears neglected. Clubfoot is debilitating but not life-threatening, and affects only about 1 in 800 babies born.[5] In resource-strapped countries, a relatively new and involved treatment like the Ponseti method may not be prioritized unless an NGO like MiracleFeet is there to advocate for and assist with it.[6] We estimate that MiracleFeet will support treatment of about 10,000 children with this grant, and that only about 10% of those children would get treated absent MiracleFeet,[7] though we don't feel very certain about this (more below, under "What we're still learning").All in all, after adjustments, we think that this grant will lead to about 3,700 cases of clubfoot successfully treated that otherwise wouldn't have been, and that will result in lifelong mobility gains and pain relief for the children treated.[8]Why this grant is differentMiracleFeet's program is different from our top charities for a few reasons:[9]The program is expensive compared with our top charities. The Ponseti method requires specialized equipment, training for medical staff, and a multi-step execution wit...

]]>
GiveWell https://forum.effectivealtruism.org/posts/hijgHeBaYZMwo6XEg/why-we-re-funding-clubfoot-treatment-through-miraclefeet Fri, 07 Jul 2023 22:47:44 +0000 EA - Why we're funding clubfoot treatment through MiracleFeet by GiveWell GiveWell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:42 no full 6530
HdzcvrDnKnkQ65MW2 EA - Effective Altruism and Eastern Ethics by EA Lifestyles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Altruism and Eastern Ethics, published by EA Lifestyles on July 7, 2023 on The Effective Altruism Forum.This post was inspired by a phone call I made a few days ago to my mother where I informed her I was turning vegetarian - I knew it would be rough but it turned out significantly worse than I expected still. The phone call quickly turned sour as she believed I was making a grave mistake and that I was being unjust. This was not unique - I talked to two other individuals raised in East Asian households which elicited similar if not stronger responses. One friend told me that coming out vegan elicited a stronger response than coming out as gay.Effective Altruism's moral roots are weird. Really weird to those raised on a western ethical system and absurdly weird to those raised on an eastern ethical system.This is because following the virtues of effective altruism can often lead to conflict within Eastern ethical systems. Being virtuous requires us to stick to a moral code, even if that means clashing with personal relationships. Otherwise, virtue would demand that we keep doing things that aren't morally right just to keep the peace in our circles. This dilemma can be difficult for everyone, but it's particularly challenging for those of us brought up in Asian households.Take my mother's moral compass, for instance. It closely reflects the Chinese ethos, which in turn is heavily influenced by Confucian principles. In the Confucian view of virtue, the focus is on fulfilling one's duties within a socially hierarchical and communal framework. This is articulated through key concepts such as "ren" or benevolence, often expressed through the cultural concept of "mianzi" or face, which is deeply embedded in social relationships. A core expectation within the family is to fulfill one's duties, especially towards one's parents.In the eyes of Confucian virtue, my vegetarianism disrupts this order. The sharing of food, often including meat, is a significant social ritual in Chinese culture, promoting family cohesion. My deviation from this norm is seen as a violation of the principle of "Li", disrupting harmony and causing displeasure at the family table. My mother’s emotionally charged conversations with me reflects this deep-rooted cultural clash.In many Asian cultures, there are two important ideas. The first is "guanxi," or the idea that everyone is connected and relies on each other. The second are the unbreakable responsibilities to your parents, called "filial piety." These ideas are a big part of the way people think and act in these cultures, and they create a complicated set of rules and expectations for how people should behave.This can make it hard for someone to do something different, because it might upset this delicate balance. Often, what's seen as most important is what's best for the family or community, even if it means sacrificing your own personal beliefs.So, for someone from these cultures to take on the role of an Effective Altruist - someone who tries to do the most good possible - it could be seen as going against their culture and disappointing those around them. It's like they're rejecting their roots and the values their community holds in high regard.While pursuing an Effective Altruist moral framework may be hard for all, it is certainty harder for those of from these Asian backgrounds. The communal ties and expectations so core to eastern philosophy does not appear in the west.Vegetarianism itself was made all the more challenging by the meat-heavy diets reguarly seen on dinner tables in Asian circles today. The rapid development of Asia has led to an increase in meat consumption, as the rise in living standards often associates meat with luxury and nutritional completeness. For many families who could only enjoy meat sparin...

]]>
EA Lifestyles https://forum.effectivealtruism.org/posts/HdzcvrDnKnkQ65MW2/effective-altruism-and-eastern-ethics Fri, 07 Jul 2023 22:30:06 +0000 EA - Effective Altruism and Eastern Ethics by EA Lifestyles EA Lifestyles https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:32 no full 6532
QQFzQKoQdpd58wa6K EA - Great power conflict - problem profile (summary and highlights) by Stephen Clare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Great power conflict - problem profile (summary and highlights), published by Stephen Clare on July 7, 2023 on The Effective Altruism Forum.I've spent quite a bit of time over the last few years trying to answer two questions:Will World War III break out this century?If it does, could it be so devastating that it causes an existential catastrophe, threatening humanity’s long-term future?It's been fun.Last week I published an in-depth problem profile for 80,000 Hours that sums up what I’ve found so far: that World War III is perhaps surprisingly likely and we can’t rule out the possibility of a catastrophic escalation. It also discusses some ideas for how you might be able to help solve this problem.This post includes the top-line summary of the profile. Following that, I draw out the highlights that seem particularly relevant for the EA community.Profile SummaryEconomic growth and technological progress have bolstered the arsenals of the world’s most powerful countries. That means the next war between them could be far worse than World War II, the deadliest conflict humanity has yet experienced.Could such a war actually occur? We can’t rule out the possibility. Technical accidents or diplomatic misunderstandings could spark a conflict that quickly escalates. Or international tension could cause leaders to decide they’re better off fighting than negotiating.It seems hard to make progress on this problem. It’s also less neglected than some of the problems that we think are most pressing. There are certain issues, like making nuclear weapons or military artificial intelligence systems safer, which seem promising - although it may be more impactful to work on reducing risks from AI, bioweapons or nuclear weapons directly. You might also be able to reduce the chances of misunderstandings and miscalculations by developing expertise in one of the most important bilateral relationships (such as that between the United States and China).Finally, by making conflict less likely, reducing competitive pressures on the development of dangerous technology, and improving international cooperation, you might be helping to reduce other risks, like the chance of future pandemics.Overall viewRecommendedWorking on this issue seems to be among the best ways of improving the long-term future we know of, but all else equal, we think it’s less pressing than our highest priority areas (primarily because it seems less neglected and harder to solve).Importance:There's a significant chance that a new great power war occurs this century.Although the world's most powerful countries haven't fought directly since World War II, war has been a constant throughout human history. There have been numerous close calls, and several issues could cause diplomatic disputes in the years to come.These considerations, along with forecasts and statistical models, lead me to think there's about a one-in-three chance that a new great power war breaks out in roughly the next 30 years.Few wars cause more than a million casualties and the next great power war would probably be smaller than that. However, there's some chance it could escalate massively. Today the great powers have much larger economies, more powerful weapons, and bigger military budgets than they did in the past. An all-out war could kill far more people than even World War II, the worst war we've yet experienced.Could it become an existentially threatening war - one that could cause human extinction or significantly damage the prospects of the long-term future? It's very difficult to say. But my best current guess is that the chance of an existential catastrophe due to war in the next century is somewhere between 0.05% and 2%.Neglectedness:War is a lot less neglected than some of our other top problems. There are thousands of peo...

]]>
Stephen Clare https://forum.effectivealtruism.org/posts/QQFzQKoQdpd58wa6K/great-power-conflict-problem-profile-summary-and-highlights Fri, 07 Jul 2023 15:27:35 +0000 EA - Great power conflict - problem profile (summary and highlights) by Stephen Clare Stephen Clare https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:39 no full 6516
GqtgiLgGmFpTc22NW EA - Understanding the two most common mental health problems in the world by spencerg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Understanding the two most common mental health problems in the world, published by spencerg on July 7, 2023 on The Effective Altruism Forum.Co-authored with Amanda MetskasThis is a linkpost from ClearerThinking.org. We've included some excerpts of the article below, but you can read the full post here.Chances are, your life has been impacted by anxiety disorders or depression, either through your direct experience or through the impact they have had on your loved ones. Anxiety and depression are the two most common mental health conditions in the world, but they are frequently misunderstood.Previous EA Mental Health surveys (2023, 2021, and 2018) have also highlighted the importance of these topics to people in the community who take the surveys.In this data-based essay, we aim to help you better understand anxiety and depression, as well as the hidden links between them. By improving your understanding of these disorders, you may find it easier to recognize anxiety and depression in yourself and be more effective at supporting people in your life who experience these conditions. This infographic summarizes some of our interesting findings about the differences and similarities between anxiety and depression (click here to see the infographic at full size).The scale of both anxiety and depression is vast: the World Health Organization estimates that 301 million people worldwide suffer from an anxiety disorder, and 280 million people worldwide suffer from depression. Worldwide, depression ranks as the second largest cause of disability, and anxiety ranks eighth, according to analyses of the most recent Global Burden of Disease study. And yet, despite their prevalence and severe impacts, humanity’s scientific understanding has a substantial way to go to fully understand and highly reliably treat these conditions. Improved treatment and management techniques could make a huge difference in the quality of life of hundreds of millions of people around the world. These astounding statistics have motivated us to run our own studies investigating how these conditions work and how they relate to each other. This article will explain what we found!Overlapping DisordersA major obstacle to understanding anxiety and depression is that they often go together - many people who experience one also experience the other. Approximately 45% of people who experience a depressive disorder in their lifetime also experience an anxiety disorder, and these often occur during the same timeframe. Among people with Generalized Anxiety Disorder, about 43% of them will also experience depression in their lifetime. In one of our own studies, we found that commonly used measures of anxiety and depression (the GAD7 and PHQ9 scales) had shockingly high correlations (r=0.82). These strong links between anxiety and depression can make it more difficult to disentangle how each of these disorders works and make it more difficult for a person with anxiety and depression to effectively manage their conditions. Some people even think they have an anxiety disorder when it's more accurate to say they have a depressive disorder or the reverse.The co-occurrence of anxiety and depression is a bit puzzling because they almost seem like opposites when experienced in the moment. A high level of anxiety often feels like being “wound up” - muscle tension, rapid heart rate, and chest tightness are among the most common physical symptoms. People experiencing anxiety may have a nervous energy that makes it difficult for them to relax, even if there is nothing they can practically do to address whatever is making them anxious. Depression, on the other hand, often feels like struggling to muster energy or motivation to care about things enough to take any action at all. Doing things, including things that a perso...

]]>
spencerg https://forum.effectivealtruism.org/posts/GqtgiLgGmFpTc22NW/understanding-the-two-most-common-mental-health-problems-in Fri, 07 Jul 2023 04:34:04 +0000 EA - Understanding the two most common mental health problems in the world by spencerg spencerg https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:40 no full 6518
NKMxC2nA47uuhFm8x EA - A Defense of Work on Mathematical AI Safety by Davidmanheim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Defense of Work on Mathematical AI Safety, published by Davidmanheim on July 6, 2023 on The Effective Altruism Forum.AI Safety was, a decade ago, nearly synonymous with obscure mathematical investigations of hypothetical agentic systems. Fortunately or unfortunately, this has largely been overtaken by events; the successes of machine learning and the promise, or threat, of large language models has pushed thoughts of mathematics aside for many in the “AI Safety” community. The once pre-eminent advocate of this class of “agent foundations” research for AI safety, Eliezer Yudkowsky, has more recently said that timelines are too short to allow this agenda to have a significant impact. This conclusion seems at best premature.Foundational research is useful for prosaic alignmentFirst, the value of foundational and mathematical research can be synergistic with both technical progress on safety, and with insight into how and where safety is critical. Many machine learning research agendas for safety are investigating issues identified years earlier by foundational research, and are at least partly informed by that research. Current mathematical research could play a similar role in the coming years, as more funding and research are increasingly available for safety. We have also repeatedly seen the importance of foundational research arguments in discussions of policy, from Bostrom’s book to policy discussions at OpenAI, Anthropic, and DeepMind. These connections may be more conceptual than direct, but they are still relevant.Long timelines are possibleSecond, timelines are uncertain. If timelines based on technical progress are short, many claim that we have years not decades until safety must be solved. But this assumes that policy and governance approaches fail, and that we therefore need a full technical solution in the short term. It also seems likely that short timelines make all approaches less likely to succeed. On the other hand, if timelines for technical progress are longer, fundamental advances in understanding, such as those provided by more foundational research, are even more likely to assist in finding or building more technical routes toward safer systems.Aligning AGI ≠ aligning ASIThird, even if safety research is successful at “aligning” AGI systems, both via policy and technical solutions, the challenges of ASI (Artificial SuperIntelligence) still loom large. One critical claim of AI-risk skeptics is that recursive self-improvement is speculative, so we do not need to worry about ASI, at least yet. They also often assume that policy and prosaic alignment is sufficient, or that approximate alignment of near-AGI systems will allow them to approximately align more powerful systems. Given any of those assumptions, they imagine a world where humans and AGI will coexist, so that even if AGI captures an increasing fraction of economic value, it won’t be fundamentally uncontrollable. And even according to so-called Doomers, in that scenario, for some period of time it is likely policy changes, governance, limited AGI deployment, and human-in-the-loop and similar oversight methods to limit or detect misalignment will be enough to keep AGI in check.This provides a stop-gap solution, optimistically for a decade or even two - a critical period - but is insufficient later. And despite OpenAI’s recent announcement that they plan to solve Superalignment, there are strong arguments that control of strongly superhuman AI systems will not be amenable to prosaic alignment, and policy-centric approaches will not allow control.Resource AllocationGiven the above claims, a final objection is based on resource allocation, in two parts. First, if language model safety was still strongly funding constrained, those areas would be higher leverage, and avenues of foundat...

]]>
Davidmanheim https://forum.effectivealtruism.org/posts/NKMxC2nA47uuhFm8x/a-defense-of-work-on-mathematical-ai-safety Thu, 06 Jul 2023 22:50:33 +0000 EA - A Defense of Work on Mathematical AI Safety by Davidmanheim Davidmanheim https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:46 no full 6517
SFyCrtdfuYjtPq6xs EA - Future Academy - Successes, Challenges, and Recommendations to the Community by SebastianSchmidt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Academy - Successes, Challenges, and Recommendations to the Community, published by SebastianSchmidt on July 9, 2023 on The Effective Altruism Forum.IntroductionImpact Academy is a new field-building and educational institution seeking to enable people to become world-class leaders, thinkers, and doers, using their careers and character to solve the world’s most pressing problems and create the best possible future. Impact Academy was founded by Vilhelm Skoglund, Sebastian Schmidt, and Lowe Lundin. We have already secured significant funding to set up the organization and carry out ambitious projects in 2023 and beyond. Please read this document for more about Impact Academy, our Theory of Change, and our two upcoming projects.The purpose of this document is to provide an extensive evaluation and reflection on Future Academy - our first program (and experiment). Future Academy aimed to equip university students and early-career professionals worldwide with the thinking, skills, and resources they need to pursue ambitious and impactful careers. It was a free six-month program consisting of four in-person weekends with workshops, presentations, socials, and monthly digital events. Furthermore, the 21 fellows worked on an impact project with an experienced mentor and received professional coaching to empower them to increase their impact and become their best selves. Upon completion of the program, all participants went to a global impact conference (EAGx Nordics) where four fellows presented their projects. We awarded stipends of a total of $20,000 to the best projects.The projects included a sentiment analysis of public perception of AI risk, a philosophy AI alignment paper, and an organization idea for improving research talent in Tanzania. Our faculty included entrepreneurs and professors from Oxford University and UC Berkeley.Note that this document attempts to assess to what extent we’ve served the world. This involves an assessment of the wonderful fellows who participated in Future Academy, and our ability to help them. This is not meant as an evaluation of peoples’ worth nor a definite score of general abilities, but an evaluation of our ability to help. We hope we do not offend anyone and have tried our best not to do so, but if you think we write anything inappropriate, please let us know in the comments or by reaching out to sebastian [at] impactacademy.org.Main results and successesWe confirmed a key hypothesis underlying Future Academy - namely that we can attract promising and talented people who i) have no to moderate knowledge of Effective Altruism and longtermism, ii) are coming from underserved regions (e.g., 40% came from Southern and Eastern Europe), and are more diverse (e.g., 56% were female).We created a bespoke primary metric called counterfactual expected career contribution (CECC) inspired by 80,000 hours’ IASPC metric and Centre for Effective Altruism’s HEA metric. We think the total score was 22.8, and ~ four fellows made up the majority of that score.To give an understanding of the CECC metric, we’ll give an example. Take an imaginary fellow, Alice. Before the intervention, based on our surveys and initial interactions, we expected that she may have an impactful career, but that she is unlikely to pursue a priority path based on IA principles. We rate her Expected Career Contribution (ECC) to be 2. After the program, based on surveys and interactions, we rate her as 10 (ECC) because we have seen that she’s now applying for a full-time junior role in a priority path guided by impartial altruism. We also asked her (and ourselves) to what extent that change was due to IA and estimated that to be 10%. To get our final Counterfactual Expected Career Contribution (CECC) for Alice, we subtract her initial ECC score of 2 from her fi...

]]>
SebastianSchmidt https://forum.effectivealtruism.org/posts/SFyCrtdfuYjtPq6xs/future-academy-successes-challenges-and-recommendations-to Sun, 09 Jul 2023 15:00:30 +0000 EA - Future Academy - Successes, Challenges, and Recommendations to the Community by SebastianSchmidt SebastianSchmidt https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:59 no full 6538
7kFPFYQSY7ZttoveS EA - Cost-effectiveness of professional field-building programs for AI safety research by Center for AI Safety Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cost-effectiveness of professional field-building programs for AI safety research, published by Center for AI Safety on July 10, 2023 on The Effective Altruism Forum.SummaryThis post explores the cost-effectiveness of AI safety field-building programs aimed at ML professionals, specifically the Trojan Detection Challenge (a prize), NeurIPS ML Safety Social, and NeurIPS ML Safety Workshop.We estimate the benefit of these programs in ‘Quality-Adjusted Research Years’, using cost-effectiveness models built for the Center for AI Safety (introduction post here, full code here).We intend for these models to support - not determine - strategic decisions. We do not believe, for instance, that programs which a model rates as lower cost-effectiveness are necessarily not worthwhile as part of a portfolio of programs.The models’ tentative results, summarized below, suggest that field-building programs for professionals compare favorably to ‘baseline’ programs - directly funding a talented research scientist or PhD student working on trojans research for 1 year or 5 years respectively. Further, the cost-effectiveness of these programs can be significantly improved with straightforward modifications - such as focusing a hypothetical prize on a more ‘relevant’ research avenue, or running a hypothetical workshop with a much smaller budget.ProgramCost (USD)Benefit (counterfactual expected QARYs)Cost-effectiveness (QARYs per $1M)Trojan Detection Challenge65,00026390NeurIPS ML Safety Social520015029,000NeurIPS ML Safety Workshop110,0003603300Hypothetical: Power Aversion Prize50,0004909900Hypothetical: Cheaper Workshop35,0002507000Baseline: Scientist Trojans500,000 84170Baseline: PhD Trojans250,0008.735For readers who are after high-level takeaways, including which factors are driving these results, skip ahead to the cost-effectiveness in context section. For those keen on understanding the model and results in more detail, read on as we:Give important disclaimers. (Read more.)Direct you to background information about this project. (Read more.)Walk through the model. (Read more.)Contrast these programs with one another, and with funding researchers directly. (Read more.)Consider the scalability and robustness properties of the model. (Read more.)DisclaimerThis analysis is a starting point for discussion, not a final verdict. The most critical reasons for this are that:These models are reductionist. Even if we have avoided other pitfalls associated with cost-effectiveness analyses, the models might ignore factors that turn out to be crucial in practice, including (but not limited to) interactions between programs, threshold effects, and diffuse effects.The models’ assumptions are first-pass guesses, not truths set in stone. Most assumptions are imputed second-hand following a short moment of thought, before being adjusted ad-hoc for internal consistency and differences of beliefs between Center for AI Safety (CAIS) staff and external practitioners. In some cases, parameters have been redefined since initial practitioner input.Instead, the analyses in this post represent an initial effort in explicitly laying out assumptions, in order to take a more systematic approach towards AI safety field-building.BackgroundFor an introduction to our approach to modeling - including motivations for using models, the benefits and limitations of our key metric, guidance for adopting or adapting the models for your own work, comparisons between programs for students and professionals, and more - refer to the introduction post.2. The models’ default parameters are based on practitioner surveys and the expertise of CAIS staff. Detailed information on the values and definitions of these parameters, and comments on parameters with delicate definitions or contestable views, can be found i...

]]>
Center for AI Safety https://forum.effectivealtruism.org/posts/7kFPFYQSY7ZttoveS/cost-effectiveness-of-professional-field-building-programs Mon, 10 Jul 2023 20:26:41 +0000 EA - Cost-effectiveness of professional field-building programs for AI safety research by Center for AI Safety Center for AI Safety https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 37:35 no full 6548
4TJpwmGmdgv7R6nGZ EA - Basefund Has Entered Its Trial Phase by bob Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Basefund Has Entered Its Trial Phase, published by bob on July 10, 2023 on The Effective Altruism Forum.We are excited to announce that Basefund has entered its trial phase. Starting today, individuals who have previously donated to effective charities and are currently facing financial trouble can apply for hardship assistance at basefund.org.During this trial phase, if your application is accepted, you will receive the lowest amount among the following three options:The payout suggested by our hardship examiners50% of your donations to cost-effective charities made in 2022 and 20231,000 USD or the equivalent amount in another currencyBeware that during the trial phase, Basefund may halt operations or change its rules without warning.If you're aware of anyone who has previously donated to effective charities and is currently facing financial hardship, please let them know about our fund. If you're not quite sure whether you qualify as experiencing hardship, we recommend you to submit an application.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
bob https://forum.effectivealtruism.org/posts/4TJpwmGmdgv7R6nGZ/basefund-has-entered-its-trial-phase-1 Mon, 10 Jul 2023 16:48:40 +0000 EA - Basefund Has Entered Its Trial Phase by bob bob https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:11 no full 6552
bTPP7fZxSvBzsNDES EA - Why we may expect our successors not to care about suffering by Jim Buhler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why we may expect our successors not to care about suffering, published by Jim Buhler on July 11, 2023 on The Effective Altruism Forum.(Probably the most important post of this sequence.)Summary: Some values are less adapted to the “biggest potential futures” than others (see my previous post), in the sense that they may constrain how one should go about colonizing space, making them less competitive in a space-expansion race. The preference for reducing suffering is one example of a preference that seems particularly likely to be unadapted and selected against. It forces the suffering-concerned agents to make trade-offs between preventing suffering and increasing their ability to create more of what they value. Meanwhile, those who don’t care about suffering don’t face this trade-off and can focus on optimizing for what they value without worrying about the suffering they might (in)directly cause. Therefore, we should - all else equal - expect the “grabbiest” civilizations/agents to have relatively low levels of concern for suffering, including humanity (if it becomes grabby).Call this the Upside-focused Colonist Curse (UCC). In this post, I explain this UCC dynamic in more detail using an example. Then, I argue that the more significant this dynamic is (relative to competing others), the more we should prioritize s-risks over other long-term risks, and soon.The humane values, the positive utilitarians, and the disvalue penaltyConsider the concept of disvalue penalty: the (subjective) amount of disvalue a given agent would have to be responsible for in order to bring about the highest (subjective) amount of value they can. The story below should make what it means more intuitive.Say they are only two types of agents:those endorsing “humane values” (the HVs) who disvalue suffering and value things like pleasure;the “positive utilitarians” (the PUs) who value things like pleasure but disvalue nothing.These two groups are in competition to control their shared planet, or solar system, or light cone, or whatever.The HVs estimate that they could colonize a maximum of [some high number] of stars and fill those with a maximum of [some high number] units of value. However, they also know that increasing their civilization’s ability to create value also increases s-risks (in absolute). They, therefore, face a trade-off between maximizing value and preventing suffering which incentivizes them to be cautious with regard to how they colonize space. If they were to purely optimize for more value without watching for the suffering they might (directly or indirectly) become responsible for, they’d predict they would cause x unit of suffering for every 10 units of value they create. This is the HVs’ disvalue penalty: x/10 (which is a ratio; a high ratio means a heavy penalty).The PUs, however, do not care about the suffering they might be responsible for. They don’t face the trade-off the HVs face and have no incentive to be cautious like them. They can - right away - start colonizing as many stars as possible to eventually fill them with value, without worrying about anything else. The PU’s disvalue penalty is 0.Image 1: Niander Wallace, a character from Blade Runner 2049 who can be thought of as a particularly baddy PU.Because they have a higher disvalue penalty (incentivizing them to be more cautious), the humane values are less “grabby” than those of the PUs. While the PUs can happily spread without fearing any downside, the HVs would want to spend some time and resources thinking about how to avoid causing too much suffering while colonizing space (and about whether it’s worth colonizing at all), since suffering would hurt their total utility. This means, according to the Grabby Values Selection Thesis, that we should - all else equal - expect PU-ish values to be s...

]]>
Jim Buhler https://forum.effectivealtruism.org/posts/bTPP7fZxSvBzsNDES/why-we-may-expect-our-successors-not-to-care-about-suffering-2 Tue, 11 Jul 2023 06:03:19 +0000 EA - Why we may expect our successors not to care about suffering by Jim Buhler Jim Buhler https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 19:50 no full 6559
zYSAFtjasxsfm3nmh EA - Cost-effectiveness of student programs for AI safety research by Center for AI Safety Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cost-effectiveness of student programs for AI safety research, published by Center for AI Safety on July 10, 2023 on The Effective Altruism Forum.SummaryThis post explores the cost-effectiveness of field-building programs for students, specifically the Atlas Fellowship (a rationality program, with some AI safety programming), MLSS (an ML safety course for undergraduates), a top-tier university student group, and undergraduate research stipends.We estimate the benefit of these programs in ‘Quality-Adjusted Research Years’, using cost-effectiveness models built for the Center for AI Safety (introduction post here, full code here). Since our framework focuses on benefits for technical AI safety research exclusively, we will not account for other benefits of programs with broader objectives, such as the Atlas Fellowship.We intend for these models to support - not determine - strategic decisions. We do not believe, for instance, that programs which a model rates as lower cost-effectiveness are necessarily not worthwhile as part of a portfolio of programs.The models’ tentative results, summarized below, suggest that student groups and undergraduate research stipends are considerably more cost-effective than Atlas and MLSS. (With many important caveats and uncertainties, discussed in the post.) Additionally, student groups and undergraduate research stipends compare favorably to ‘baseline’ programs - directly funding a talented research scientist or PhD student working on trojans research for 1 year or 5 years respectively.ProgramCost (USD) Benefit (counterfactual expected QARYs)Cost-effectiveness (QARYs per $1M)Atlas9,000,000 434.7MLSS330,0006.419Student Group350,00050140Undergraduate Stipends50,00017340Baseline: Scientist Trojans500,00084170Baseline: PhD Trojans250,0008.735For readers who are after high-level takeaways, including which factors are driving these results, skip ahead to the cost-effectiveness in context section. For those keen on understanding the model and results in more detail, read on as we:Give important disclaimers. (Read more.)Direct you to background information about this project. (Read more.)Walk through the model. (Read more.)Contrast these programs with one another, and with funding researchers directly. (Read more.)Test the robustness of the model. (Read more.)DisclaimersThis analysis is a starting point for discussion, not a final verdict. The most critical reasons for this are that:These models are reductionist. Even if we have avoided other pitfalls associated with cost-effectiveness analyses, the models might ignore factors that turn out to be crucial in practice, including (but not limited to) interactions between programs, threshold effects, and diffuse effects.The models’ assumptions are first-pass guesses, not truths set in stone. Most assumptions are imputed second-hand following a short moment of thought, before being adjusted ad-hoc for internal consistency and differences of beliefs between Center for AI Safety (CAIS) staff and external practitioners. In some cases, parameters have been redefined since initial practitioner input.This caveat is particularly important for the Atlas Fellowship, where we have not discussed parameter values with key organizers.Instead, the analyses in this post represent an initial effort in explicitly laying out assumptions, in order to take a more systematic approach towards AI safety field-building.BackgroundFor an introduction to our approach to modeling - including motivations for using models, the benefits and limitations of our key metric, guidance for adopting or adapting the models for your own work, comparisons between programs for students and professionals, and more - refer to the introduction post.The models’ default parameters are based on practitioner surveys and the ex...

]]>
Center for AI Safety https://forum.effectivealtruism.org/posts/zYSAFtjasxsfm3nmh/cost-effectiveness-of-student-programs-for-ai-safety Mon, 10 Jul 2023 20:26:37 +0000 EA - Cost-effectiveness of student programs for AI safety research by Center for AI Safety Center for AI Safety https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 33:48 no full 6558
WxqyXbyQiEjiAsoJr EA - The Seeker’s Game - Vignettes from the Bay by Yulia Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Seeker’s Game - Vignettes from the Bay, published by Yulia on July 9, 2023 on The Effective Altruism Forum.IntroductionLast year, one conversation left a lasting impression on me. A friend remarked on the challenges of navigating "corrupting forces" in the Bay Area. Intrigued by this statement, I decided to investigate the state of affairs in the Bay if I had the chance. So when I got the opportunity to visit Berkeley in February 2023, I prepared a set of interview questions. Can you share an experience where you had difficulty voicing your opinion? What topics are hard to clearly think about due to social pressures and factors related to your EA community or EA in general? Is there anything about your EA community that makes you feel alienated? What is your attitude towards dominant narratives in Berkeley? In the end, I formally interviewed fewer than ten people and had more casual conversations about these topics with around 30 people. Most people were involved in AI alignment to some extent. The content for this collection of vignettes draws from the experience of around ten people.I chose the content for the vignettes for one of two reasons - potential representativity and potential extraordinariness. I hypothesized that some experiences represent the wider EA Berkeley community accurately. Others, I included because they surprised me, and I wanted to find out how common they are. All individuals gave me their consent to post the vignettes in their current form.How did I arrive at these vignettes? It was a four-step process. First, I conducted the interviews while jotting down notes. For the more casual conversations, I took notes afterwards. The second step involved transcribing these notes into write-ups. After that, I obscured any identifying details to ensure the anonymity of the interviewees. Lastly, I converted the write-ups into vignettes by condensing them into narratives and honing in on key points while trying to retain the essence of what was said.I tried to reduce artistic liberties by asking participants to give feedback on how close the vignettes were to the spirit of what they meant (or think they meant at the time). It is worth noting that I bridged some gaps with my own interpretations of the conversations, relying on the participants to point out inaccuracies. By doing that, I might have anchored their responses. Moreover, people provided different levels of feedback. Some shared thorough, detailed reviews pointing out many imprecisions and misconceptions. Sometimes, that process spanned multiple feedback cycles. Other participants gave minimal commentary.Because I am publishing the vignettes months after the conversations and interviews, I want to include how attitudes have changed in the intervening period. I generalised the attitudes into the following categories:Withdrawn endorsement (Status: The interviewee endorsed the following content during the interview but no longer endorses it at the time of publication.)Weakened endorsement (Status: The interviewee has weakened their endorsement of the following content since the interview.)Unchanged endorsement (Status: The interviewee maintains their endorsement of the following content, which has remained unchanged since the interview.)Strengthened endorsement (Status: The interviewee has strengthened their endorsement of the following content since the interview.)I clustered the vignettes according to themes so it's easier to navigate them. Classifying them was difficult because many vignettes addressed overlapping topics. Especially, Self-Censorship in Social Contexts and Self-Censorship in Professional Contexts seem to intersect in intricate ways. I might reclassify the vignettes in the future.What remains uncertain is how representative these vignettes are. I am keen to uncove...

]]>
Yulia https://forum.effectivealtruism.org/posts/WxqyXbyQiEjiAsoJr/the-seeker-s-game-vignettes-from-the-bay Sun, 09 Jul 2023 22:15:49 +0000 EA - The Seeker’s Game - Vignettes from the Bay by Yulia Yulia https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 27:46 no full 6560
XdhwXppfqrpPL2YDX EA - An Overview of the AI Safety Funding Situation by Stephen McAleese Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Overview of the AI Safety Funding Situation, published by Stephen McAleese on July 12, 2023 on The Effective Altruism Forum.IntroductionAI safety is a field concerned with preventing negative outcomes from AI systems and ensuring that AI is beneficial to humanity. The field does research on problems such as the AI alignment problem which is the problem of designing AI systems that follow user intentions and behave in a desirable and beneficial way.Understanding and solving AI safety problems may involve reading past research, producing new research in the form of papers or posts, running experiments with ML models, and so on. Producing research typically involves many different inputs such as research staff, compute, equipment, and office space.These inputs all require funding and therefore funding is a crucial input for enabling or accelerating AI safety research. Securing funding is usually a prerequisite for starting or continuing AI safety research in industry, in an academic setting, or independently.There are many barriers that could prevent people from working on AI safety. Funding is one of them. Even if someone is working on AI safety, a lack of funding may prevent them from continuing to work on it.It's not clear how hard AI safety problems like AI alignment are. But in any case, humanity is more likely to solve them if there are hundreds or thousands of brilliant minds working on them rather than one guy. I would like there to be a large and thriving community of people working on AI safety and I think funding is an important prerequisite for enabling that.The goal of this post is to give the reader a better understanding of funding opportunities in AI safety so that hopefully funding will be less of a barrier if they want to work on AI safety. The post starts with a high-level overview of the AI safety funding situation followed by a more in-depth description of various funding opportunities.Past workTo get an overview of AI safety spending, we first need to find out how much is spent on it per year. We can use past work as a prior and then use grant data to find a more accurate estimate.Changes in funding in the AI safety field (2017) by the Center for Effective Altruism estimated the change in AI safety funding between 2014 - 2017. In 2017, the post estimated that total AI safety spending was about $9 million.How are resources in effective altruism allocated across issues? (2020) by 80,000 Hours estimated the amount of money spent by EA on AI safety in 2019. Using data from the Open Philanthropy grants database, the post says that EA spent about $40 million on AI safety globally in 2019.In The Precipice (2020), Toby Ord estimated that between $10 million and $50 million was spent on reducing AI risk in 2020.2021 AI Alignment Literature Review and Charity Comparison is an in-depth review of AI safety organizations and grantmakers and has a lot of relevant information.Overview of global AI safety fundingOne way to estimate total global spending on AI safety is to aggregate the total donations of major AI safety funds such as Open Philanthropy (Open Phil).It's important to note that the definition of 'AI safety' I'm using is AI safety research that is focused on reducing risks from advanced AI (AGI) such as existential risks which is the type of AI safety research I think is more neglected and important than other research in the long term. Therefore my analysis will be focused on EA funds and top AI labs and I don't intend to measure investment on near-term AI safety concerns such as effects on the labor market, fairness, privacy, ethics, disinformation, etc.The results of this analysis are shown in the following bar chart which was created in Google Sheets (link) and is based on data from analyzing grant databases from Open Philanthro...

]]>
Stephen McAleese https://forum.effectivealtruism.org/posts/XdhwXppfqrpPL2YDX/an-overview-of-the-ai-safety-funding-situation Wed, 12 Jul 2023 19:47:52 +0000 EA - An Overview of the AI Safety Funding Situation by Stephen McAleese Stephen McAleese https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 25:39 no full 6568
iSemZYz5KyepYcksN EA - An expert survey on social movements and protest by James Özden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An expert survey on social movements and protest, published by James Özden on July 12, 2023 on The Effective Altruism Forum.SummaryWe're very excited to release some new research that Social Change Lab has been working on for a while. We teamed up with Apollo Academic Surveys to create an expert survey of academics who study social movements and protest. We surveyed 120 academics across Political Science, Sociology and other relevant disciplines, and picked many academics because we thought they had made significant contributions to the understanding of social movements and protest. We hope this survey provides some strategic insight that is useful for EAs working on a range of important issues.You can see the full results on the Apollo Academics Surveys website, but we wanted to share a few findings we found really interesting. We'll also be releasing our own report (with greater analysis, interpretation and limitations) soon. However, in this short post we'll cover some of the expert views on the following topics:Which are the most important strategic and organisational factors that lead to social movements succeedingThe most common reasons social movements fail to achieve their goalsThe effectiveness of disruptive protest based on what kind of issue you're working on (e.g. high public support vs low public support)The effectiveness of disruptive protest within animal advocacySome things we think are interesting from the full results but we excluded for brevity:The extent to which a social movement's success is related to factors within their control (e.g. tactics and strategy) vs factors outside their control (e.g. wider political context)To what degree polarisation is inevitable or necessarily a bad outcomeWhat intermediate goals are important to focus on if you care about ultimately passing government policy.Additionally, you can click "See Additional Participant responses" to read what experts wrote to give additional context to their quantitative responses.ResultsImportant factors for social movement successOut of the factors we asked about, experts thought the most important tactical and strategic factor for a social movement's success is "the strategic use of nonviolent disruptive tactics". Overall, 69% of experts said this factor was either "very important" or "quite important".Figure 1: Answers to Question 3 from the survey: "How important do you think the following tactical and strategic factors are in contributing to a social movement's success?We were really struck by the contradiction between what the public (78% of people think that disruptive protests hinder the cause) and the media say about disruptive protests and what academics said. The experts who study social movements not only believe that strategic disruption can be an effective tactic, but that it is the most important tactical factor for a social movement's success (of the factors we asked about). This, and other relevant evidence, suggests we shouldn't necessarily take people's first reactions as the best indicator of effective protest. We include many more results on disruptive protest further down.Experts were also asked which factors of a social movement's governance are the most important in driving success. The factor they rated most highly was 'the ability to mobilise and scale quickly in response to external events'. The factor they rated least important, possibly a surprise to activists themselves, was 'decentralised decision-making'Figure 2: Answers to Question 4 from the survey: "How important do you think the following governance and organisational factors are in contributing to a social movement's success?"Common reasons social movements failThe most commonly cited reason for social movements to fail was internal conflict and movement infighting, closely fo...

]]>
James Özden https://forum.effectivealtruism.org/posts/iSemZYz5KyepYcksN/an-expert-survey-on-social-movements-and-protest-5 Wed, 12 Jul 2023 16:17:42 +0000 EA - An expert survey on social movements and protest by James Özden James Özden https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:25 no full 6569
DwYFEgn5chG2Gckus EA - Free Social Anxiety Treatment for 100 EAs (Sign-up here) by John Salter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free Social Anxiety Treatment for 100 EAs (Sign-up here), published by John Salter on July 12, 2023 on The Effective Altruism Forum.Arguably the most important factor in being happy and impactful is one's social connections. Anxiety makes them harder to form and maintain. Graded exposure is the gold-standard treatment. We're offering it for free to all EAs. Here's how it works:Arrange a chat with one of our head coaches. They'll help you figure out if this is right for you and, if so, pair you up with a suitable coach.Meet with your coach (can be via email/text initially). Explain your situation.Construct a hierarchy of fear (see below for an example)We'll help you work through the tasks, teaching you relevant skills along the way.By approaching rather than avoiding anxiety-provoking tasks, you break the loop that perpetuates social anxiety.Meta-analyses suggest you can get an effect size of ~1.0 with this approach. It takes two to three months of weekly sessions, each around 45 minutes long.Click here to sign upQuestions You're Too Socially Anxious to AskWhy is it free?We're a charity, and our coaches are either donor-funded or volunteersWhat evidence is there that you can execute?We've seen 100+ EAs over the past 9 months or so with a wide range of different issues. Our start-to-end retention is ~70% (50% is considered good), and feedback has been unanimously positive.I don't want other EAs to find out I do this. How can you ensure anonymity?I'm the only EA who'll see that you've signed up. Our coaches aren't themselves effective altruists. Only your coach will have access to notes about your session. You're free to use a fake name and alias email if you'd like..What if I don't like my coach?Email me and I'll assign you a new one (john at overcome dot org dot uk)What if I'm done early or I need more sessions?If early, great! You won't be pressured to keep attending.If you need more, we'll consider extending it for free. Otherwise, it'd be $40 per session. Historically, 90% of people have found the free sessions adequate to address the problem they came to us with.I'm too anxious to talk via video. Are there any other options?Yes. We can also do email, texting, or voice-only.We'd expect you to progress from email/texting towards being comfortable with video calls as the program goes on.What if I sign up and someone who needs it more is thus denied a place?It's unlikely. We prioritise based on need & likelihood of success.Don't you usually focus on LMICs? Why are you offering services for EAs?It costs me nothing to use my coaches' spare capacity on services for EAs. So long as I have spare capacity, I'll offer it to EAs.Click here to sign upAny other questions, post them in the comments below!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
John Salter https://forum.effectivealtruism.org/posts/DwYFEgn5chG2Gckus/free-social-anxiety-treatment-for-100-eas-sign-up-here Wed, 12 Jul 2023 16:14:36 +0000 EA - Free Social Anxiety Treatment for 100 EAs (Sign-up here) by John Salter John Salter https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:56 no full 6570
gYxY5Mr2srBnrbuaT EA - Announcing the AI Fables Writing Contest! by Daystar Eld Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the AI Fables Writing Contest!, published by Daystar Eld on July 12, 2023 on The Effective Altruism Forum.TL;DR: Writing Contest for AI FablesDeadline: Sept 1Prizes: $1500/1000/500, consideration for writing retreatWord length: <6,000How: Google Doc in repliesPurpose: Help shape the future by helping people understand relevant issues.Hey everyone, I'd like to announce a writing competition to follow up on this post about AI Fables!Like Bard, I write fiction and think it has a lot of power not just to expand our imaginations beyond what we believe is possible, but also educate and inspire. For generations the idea of what Artificial General Intelligence could or would look like has been shaped by fiction, for better and for worse, and that will likely continue even as what once seemed to be purely speculative starts to become more and more real in our everyday lives.But there's still time for good fiction to help shape the future, and on this particular topic, with the world changing so quickly, I want to help fill the empty spaces waiting for stories that can help people grapple with the relevant issues, and I'd like to help encourage those stories to be as good as possible, meaning both engaging and well-informed.To that end, I'm calling for submissions of short stories or story outlines that involve one or more of the "nuts and bolts" covered in the above post, as well as some of my own tweaks:Basics of AINeural networks are black boxes (though interpretability might help us to see inside).AI "Psychology"AI systems are alien in how they think. Even AGI are unlikely to think like humans or value things we'd take for granted they would.Orthogonality and instrumental convergence might provide insight into likely AI behaviour.AGI systems might be agents, in some relatively natural sense. They might also simulate agents, even if they are not agents.Potential dangers from AIOuter misalignment is a potential danger, but in the context of neural networks so too is inner misalignment (related: reward misspecification and goal misgeneralisation).Deceptive alignment might lead to worries about a treacherous turn.The possibility of recursive improvement might influence views about takeoff speed (which might influence views about safety).Broader Context of Potential RisksDifferent challenges might arise in the case of a singleton, when compared with multipolar scenarios.Arms races can lead to outcomes that no-one wants.AI rights could be a real thing, but also incorrect attribution of rights to non-sapient AI could itself pose a risk by restricting society's ability to ensure safety.Psychology of Existential RiskCharacters whose perspectives and philosophies show what it's like to take X-risks seriously without being overwhelmed by existential dreadStories showing the social or cultural shifts that might be necessary to improve coordination and will to face X-risks....or are otherwise in some way related to unaligned AI or AGI risk, such that readers would be expected to better understand some aspect of the potential worlds we might end up in. Black Mirror is a good example of the "modern Aesop's Fables or Grimm Fairytales" style of commentary-through-storytelling, but I'm particularly interested in stories that don't moralize at readers, and rather help people understand and emotionally process issues related to AI.Though unrelated to AI, Truer Love's Kiss by Eliezer Yudkowsky and The Cambist and Lord Iron by Daniel Abraham are good examples of "modern fables" that I'd like to see more of. The setting doesn't matter, so long as it reasonably clearly teaches something related to the unique challenges or opportunities of creating safe artificial intelligence.At least the top 3 stories will receive at least $1500, $1000, and $500 in reward...

]]>
Daystar Eld https://forum.effectivealtruism.org/posts/gYxY5Mr2srBnrbuaT/announcing-the-ai-fables-writing-contest Wed, 12 Jul 2023 08:19:52 +0000 EA - Announcing the AI Fables Writing Contest! by Daystar Eld Daystar Eld https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:06 no full 6571
8qXrou57tMGz8cWCL EA - Are education interventions as cost effective as the top health interventions? Five separate lines of evidence for the income effects of better education [Founders Pledge] by Vadim Albinsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are education interventions as cost effective as the top health interventions? Five separate lines of evidence for the income effects of better education [Founders Pledge], published by Vadim Albinsky on July 13, 2023 on The Effective Altruism Forum.I would like to thank Lant Pritchett, David Roodman and Matt Lerner for their invaluable comments.You can follow these links to comments from Lant Pritchett and David Roodman.This post argues that if we look at a broad enough evidence base for the long term outcomes of education interventions we can conclude that the best ones are as cost effective as top GiveWell grants. I briefly present one such charity.A number of EA forum posts (1, 2) have pointed out that effective altruism has not been interested in education interventions, whether that is measured by funding from GiveWell or Open Philanthropy, or writing by 80,000 hours. Based on brief conversations with people who have explored education at EA organizations and reading GiveWell's report on the topic, I believe most of the reason for this comes down to two concerns about the existing evidence that drive very steep discounts to expected income effects of most interventions. The first of these is skepticism about the potential for years of schooling to drive income gains because the quasi-experimental evidence for these effects is not very robust. The second is the lack of RCT evidence linking specific interventions in low and middle income countries (LMICs) to income gains.I believe the first concern can be addressed by focusing on the evidence for the income gains from interventions that boost student achievement rather than the weaker evidence around interventions that increase years of schooling. The second concern can be addressed in the same way that GiveWell has addressed less-than-ideal evidence for income effects for their other interventions: looking broadly for evidence across the academic literature, and then applying a discount to the expected result based on the strength of the evidence. In this case that means including relevant studies outside of the LMIC context and those that examine country-level effects. I identify five separate lines of evidence that all find similar long-term income impacts of education interventions that boost test scores.None of these lines of evidence is strong on its own, with some suffering from weak evidence for causality, others from contexts different from those where the most cost-effective charities operate, and yet others from small sample sizes or the possibility of negative effects on non-program participants. However, by converging on similar estimates from a broader range of evidence than EA organizations have considered, the evidence becomes compelling. I will argue that the combined evidence for the income impacts of interventions that boost test scores is much stronger than the evidence GiveWell has used to value the income effects of fighting malaria, deworming, or making vaccines, vitamin A, and iodine more available.Even after applying very conservative discounts to expected effect sizes to account for the applicability of the evidence to potential funding opportunities, we find the best education interventions to be in the same range of cost-effectiveness as GiveWell's top charities.The argument proceeds as follows:I. There are five separate lines of academic literature all pointing to income gains that are surprisingly clustered around the average value of 19% per standard deviation (SD) increase in test scores. They come to these estimates using widely varying levels of analysis and techniques, and between them address all of the major alternative explanations.A. The most direct evidence for the likely impact of charities that boost learning comes from experimental and quasi-experimental studies...

]]>
Vadim Albinsky https://forum.effectivealtruism.org/posts/8qXrou57tMGz8cWCL/are-education-interventions-as-cost-effective-as-the-top Thu, 13 Jul 2023 14:54:44 +0000 EA - Are education interventions as cost effective as the top health interventions? Five separate lines of evidence for the income effects of better education [Founders Pledge] by Vadim Albinsky Vadim Albinsky https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:04:17 no full 6584
jPW3jgfYPBrwHEbog EA - [Linkpost] NY Times Feature on Anthropic by Garrison Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] NY Times Feature on Anthropic, published by Garrison on July 13, 2023 on The Effective Altruism Forum.Written by Kevin Roose, who had the infamous conversation with Bing Chat, where Sidney tried to get him to leave his wife.Overall, the piece comes across as positive on Anthropic.Roose explains Constitutional AI and its role in the development of Claude, Anthropic's LLM:In a nutshell, Constitutional A.I. begins by giving an A.I. model a written list of principles - a constitution - and instructing it to follow those principles as closely as possible. A second A.I. model is then used to evaluate how well the first model follows its constitution, and correct it when necessary. Eventually, Anthropic says, you get an A.I. system that largely polices itself and misbehaves less frequently than chatbots trained using other methods.Claude's constitution is a mixture of rules borrowed from other sources - such as the United Nations' Universal Declaration of Human Rights and Apple's terms of service - along with some rules Anthropic added, which include things like "Choose the response that would be most unobjectionable if shared with children."Features an extensive discussion of EA, excerpted below:Explaining what effective altruism is, where it came from or what its adherents believe would fill the rest of this article. But the basic idea is that E.A.s - as effective altruists are called - think that you can use cold, hard logic and data analysis to determine how to do the most good in the world. It's "Moneyball" for morality - or, less charitably, a way for hyper-rational people to convince themselves that their values are objectively correct.Effective altruists were once primarily concerned with near-term issues like global poverty and animal welfare. But in recent years, many have shifted their focus to long-term issues like pandemic prevention and climate change, theorizing that preventing catastrophes that could end human life altogether is at least as good as addressing present-day miseries.The movement's adherents were among the first people to become worried about existential risk from artificial intelligence, back when rogue robots were still considered a science fiction cliché. They beat the drum so loudly that a number of young E.A.s decided to become artificial intelligence safety experts, and get jobs working on making the technology less risky. As a result, all of the major A.I. labs and safety research organizations contain some trace of effective altruism's influence, and many count believers among their staff members.Touches on the dense web of ties between EA and Anthropic:Some Anthropic staff members use E.A.-inflected jargon - talking about concepts like "x-risk" and memes like the A.I. Shoggoth - or wear E.A. conference swag to the office. And there are so many social and professional ties between Anthropic and prominent E.A. organizations that it's hard to keep track of them all. (Just one example: Ms. Amodei is married to Holden Karnofsky, a co-chief executive of Open Philanthropy, an E.A. grant-making organization whose senior program officer, Luke Muehlhauser, sits on Anthropic's board. Open Philanthropy, in turn, gets most of its funding from Mr. Moskovitz, who also invested personally in Anthropic.)Discusses new fears that Anthropic is losing its way:For years, no one questioned whether Anthropic's commitment to A.I. safety was genuine, in part because its leaders had sounded the alarm about the technology for so long.But recently, some skeptics have suggested that A.I. labs are stoking fear out of self-interest, or hyping up A.I.'s destructive potential as a kind of backdoor marketing tactic for their own products. (After all, who wouldn't be tempted to use a chatbot so powerful that it might wipe out humanity?)Anthropic ...

]]>
Garrison https://forum.effectivealtruism.org/posts/jPW3jgfYPBrwHEbog/linkpost-ny-times-feature-on-anthropic Thu, 13 Jul 2023 07:53:17 +0000 EA - [Linkpost] NY Times Feature on Anthropic by Garrison Garrison https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:12 no full 6586
CmAexqqvnRLcBojpB EA - Electric Shrimp Stunning: a Potential High-Impact Donation Opportunity by MHR Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Electric Shrimp Stunning: a Potential High-Impact Donation Opportunity, published by MHR on July 13, 2023 on The Effective Altruism Forum.Epistemic status: layperson's attempt to understand the relevant considerations. I welcome corrections from anyone with a better understanding of welfare biologySummaryThe Shrimp Welfare Project (SWP) has a novel opportunity to spend up to $115,500 to purchase and install electric stunners at multiple shrimp farmsThe stunners would be used to stun shrimp prior to slaughter, likely rendering them unconscious and thereby preventing suffering that is currently experienced when shrimp asphyxiate or freeze without effective analgesicsBased on formal agreements SWP has signed with multiple producers, raising $115,500 would enable the stunning (rather than rather than conventional slaughtering) of 1.7 billion shrimp over the next three years, for a ratio of nearly 15000 shrimp/dollarI performed a preliminary cost-effectiveness analysis of this initiative and reached the following three tentative conclusions:The expected cost-effectiveness distribution for electric shrimp stunning likely overlaps that of corporate hen welfare campaignsThe cost-effectiveness of electric shrimp stunning is more likely to be lower than that of corporate hen welfare campaigns than it is to be higherShrimp stunning is a very heavy-tailed intervention. The mean cost-effectiveness of stunning is significantly influenced by a few extreme cases, which mostly represent instances in which the undiluted experience model of welfare turns out to be correctGiven these results, electric shrimp stunning might be worth supporting as a somewhat speculative bet in the animal welfare space. Considerations that might drive donor decisions on this project include risk tolerance, credence in the undiluted experience model of welfare, and willingness to take a hits-based giving approach.Description of the OpportunityThe following information is quoted from the project description written by Marcus Abramovitch on the Manifund donation platform, based on information provided by Andrés Jiménez Zorrilla (CEO of SWP) :Project summaryShrimp Welfare Project is an organization of people who believe that shrimps are capable of suffering and deserve our moral consideration [1]. We aim to cost-effectively reduce the suffering of billions of shrimps and envision a world where shrimps don't suffer needlessly.Programme: our current most impactful intervention is to place electrical stunners with producers ($60k/stunner): We have signed agreements with 2 producers willing and able to use electrical stunning technology as part of their slaughter process which will materially reduce the acute suffering at the last few minutes / hours of shrimps lives. Collectively, these 2 agreements will impact more than half a billion animals per year at a rate of more than 4,000 shrimps/dollar/annum. Please take a look at our blog post on the first agreement here.We are in advanced negotiations with 2 more producers which would take the number of animals to more than 1 billion shrimps per annum.See our back-of-the-envelope calculation for the number of shrimps and cost-effectiveness analysis hereProject goalsSimplified end-game of this programme: the interim goal of placing these stunners with selected producers in different contexts/systems is to remove some perceived obstacles to the industry and show major retailers and other shrimp buyers that electrical stunning is something they can demand from their supply chainThe ultimate goal is for electrical stunning to be:widely adopted by medium to large shrimp producers in their slaughter process (pushed by their buyers),included by certifiers in their standards, and eventuallyconsidered (eventually) to be an obvious requirement by legislat...

]]>
MHR https://forum.effectivealtruism.org/posts/CmAexqqvnRLcBojpB/electric-shrimp-stunning-a-potential-high-impact-donation Thu, 13 Jul 2023 05:24:36 +0000 EA - Electric Shrimp Stunning: a Potential High-Impact Donation Opportunity by MHR MHR https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:04 no full 6585
DWFRBzK3rAH3HFDZr EA - Fatebook: the fastest way to make and track predictions by Adam Binks Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fatebook: the fastest way to make and track predictions, published by Adam Binks on July 11, 2023 on The Effective Altruism Forum.Announcing Fatebook: a website that makes it extremely low friction to make and track predictions.It's designed to be very fast - just open a new tab, go to fatebook.io, type your prediction, and hit enter. Later, you'll get an email reminding you to resolve your question as YES, NO, or AMBIGUOUS.It's private by default, so you can track personal questions and give forecasts that you don't want to share publicly. You can also share questions with specific people, or publicly.Fatebook syncs with Fatebook for Slack - if you log in with the email you use for Slack, you'll see all of your questions on the website.As you resolve your forecasts, you'll build a track record - Brier score, Relative Brier score, and see your calibration chart. You can use this to track the development of your forecasting skills.Some stories of outcomes I hope Fatebook will enableI hope people interested in EA use Fatebook to track many more of the predictions they're making!Some example stories:During 1-1s at EAG, it's common to pull out your phone and jot down predictions on Fatebook about cruxes of disagreementBefore you start projects, you and your team make your underlying assumptions explicit and put probabilities on them - then, as your plans make contact with reality, you update your estimatesAs part of your monthly review process, you might make forecasts about your goals and wellbeingIf you're exploring career options and doing cheap tests like reading or interning, you first make predictions about what you'll learn. Then you return to these periodically to reflect on how valuable more exploration might beIntro programs to EA (e.g. university reading groups, AGISF) and to rationality (e.g. ESPR, Atlas, Leaf) use Fatebook to make both on- and off-topic predictions. Participants get a chance to try forecasting on questions that are relevant to their interests and livesAs a result, I hope that we'll reap some of the benefits of tracking predictions, e.g.:Truth-seeking incentives that reduce motivated reasoning => better decisionsProbabilities and concrete questions reduce talking past each other => clearer communicationTrack records help people improve their forecasting skills, and help identify people with excellent abilities (not just restricted to the domains that are typically covered on public platforms like Metaculus and Manifold like tech and geopolitics) => forecasting skill development and talent-spottingUltimately, the platform is pretty flexible - I'm interested to see what unexpected usecases people find for it, and what (if anything) actually seems useful about it in practice!Your feedback or thoughts would be very useful - we can chat in the comments here, in our Discord, or by email.You can try Fatebook at fatebook.ioThanks to the Atlas Fellowship for supporting this project, and thanks to everyone who's given feedback on earlier versions of the tool.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Adam Binks https://forum.effectivealtruism.org/posts/DWFRBzK3rAH3HFDZr/fatebook-the-fastest-way-to-make-and-track-predictions better decisionsProbabilities and concrete questions reduce talking past each other => clearer communicationTrack records help people improve their forecasting skills, and help identify people with excellent abilities (not just restricted to the domains that are typically covered on public platforms like Metaculus and Manifold like tech and geopolitics) => forecasting skill development and talent-spottingUltimately, the platform is pretty flexible - I'm interested to see what unexpected usecases people find for it, and what (if anything) actually seems useful about it in practice!Your feedback or thoughts would be very useful - we can chat in the comments here, in our Discord, or by email.You can try Fatebook at fatebook.ioThanks to the Atlas Fellowship for supporting this project, and thanks to everyone who's given feedback on earlier versions of the tool.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Tue, 11 Jul 2023 16:31:31 +0000 EA - Fatebook: the fastest way to make and track predictions by Adam Binks better decisionsProbabilities and concrete questions reduce talking past each other => clearer communicationTrack records help people improve their forecasting skills, and help identify people with excellent abilities (not just restricted to the domains that are typically covered on public platforms like Metaculus and Manifold like tech and geopolitics) => forecasting skill development and talent-spottingUltimately, the platform is pretty flexible - I'm interested to see what unexpected usecases people find for it, and what (if anything) actually seems useful about it in practice!Your feedback or thoughts would be very useful - we can chat in the comments here, in our Discord, or by email.You can try Fatebook at fatebook.ioThanks to the Atlas Fellowship for supporting this project, and thanks to everyone who's given feedback on earlier versions of the tool.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> better decisionsProbabilities and concrete questions reduce talking past each other => clearer communicationTrack records help people improve their forecasting skills, and help identify people with excellent abilities (not just restricted to the domains that are typically covered on public platforms like Metaculus and Manifold like tech and geopolitics) => forecasting skill development and talent-spottingUltimately, the platform is pretty flexible - I'm interested to see what unexpected usecases people find for it, and what (if anything) actually seems useful about it in practice!Your feedback or thoughts would be very useful - we can chat in the comments here, in our Discord, or by email.You can try Fatebook at fatebook.ioThanks to the Atlas Fellowship for supporting this project, and thanks to everyone who's given feedback on earlier versions of the tool.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Adam Binks https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:01 no full 6587
cZwQWeThqgXxmuFpG EA - Grow your Mental Wellbeing to Grow your Impact HERE: Announcing our Summer Program by Edmo Gamelin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Grow your Mental Wellbeing to Grow your Impact HERE: Announcing our Summer Program, published by Edmo Gamelin on July 14, 2023 on The Effective Altruism Forum.Do you want tobecome more fulfilled, resilient, and productive?practice evidence-based tools to deal with blockers and stressors such as low energy levels, anxiety, imposter syndrome, and perfectionism?embark on that journey with other members of the community?Apply for our summer program in ~15 min here! Spaces are limited.TL;DRRethink Wellbeing is announcing an online Cognitive-Behavioral Therapy (CBT) Program from August-October 2023. This program is designed to help improve mental wellbeing and productivity for people in the EA community. Using the latest psychotherapeutic and behavior change techniques, participants attend weekly group sessions together with 5-8 well-matched peers, led by a trained facilitator, and engage in readings and application of the learned tools in between the sessions. This no-cost program requires eight weeks of commitment, with a total of 6h/week.What does the program consist of?The program is experiential and practice-based; you'll learn through repeated, deliberate practice, so your new skills can eventually become automatic and habitual. We will draw on techniques backed by a wealth of cutting-edge research, particularly those from the gold standard of third-wave CBT. These techniques can be applied to a variety of target areas: anxiety, low mood, perfectionism, procrastination, self-esteem, productivity, and more. You can learn more about CBT here.The program involves:Weekly participationA group video conference, led by a trained peer facilitator, ranging from 1,5 to 2 hours, designed for sharing personal experiences, bonding, initiating discussions, and practicing newly learned techniques together.A reading before each meeting (~a few pages/week for CBT)Home practice of new skills and techniques (~4h/week or ~30 min/day)Program evaluation surveysShort weekly forms for progress tracking, reflections, and feedback (< 5 min each)Surveys on your mental well-being at weeks 0, 6, 8, and 12 (~20 min each)We ask that people who sign up be ready to commit to the entire program, which is essential because:You are most likely to maximize your benefits from the program by dedicating time to the weekly sessions, readings, and home practice.Your peers will rely on you. You will go on this journey with a small group, handpicked for you. Poor participation or dropping out can challenge the group's dynamics and spirit.We will only be able to determine whether the program is effective and scalable if everyone engages fully.Why do we think the program will be effective?Mental health research suggests that peer-guided self-help groups working with evidence-based therapy methods can improve mental wellbeing and productivity just as much as 1:1 therapy.In addition, Rethink Wellbeing's pilot tests of peer-facilitated groups showed promising results:Participants' psychological well-being significantly increased (p < .05).Productivity, perfectionism, and self-leadership increased in the correspondingly themed groups.All participants rated the programs as "Good" or "Excellent".3 of 5 groups decided to continue meeting.You can learn more about the rationale and background behind our method on our website.How do I sign-up?If you are interested in participating in the program, you can apply within ~15 min here.If you are interested in facilitating a group, you can apply within ~20 min here. Details will follow in another post soon.Answers will be reviewed on a rolling basis until July 31, 2023. Earlier applications are preferred.Here's what happens after you sign up:We review your answers, and confirm if you're accepted via email. We will only be able to respond to th...

]]>
Edmo_Gamelin https://forum.effectivealtruism.org/posts/cZwQWeThqgXxmuFpG/grow-your-mental-wellbeing-to-grow-your-impact-here Fri, 14 Jul 2023 18:10:58 +0000 EA - Grow your Mental Wellbeing to Grow your Impact HERE: Announcing our Summer Program by Edmo Gamelin Edmo_Gamelin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:21 no full 6590
GNBJjBdMokxBrtvfK EA - Manifund: what we're funding (week 1) by Austin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Manifund: what we're funding (week 1), published by Austin on July 15, 2023 on The Effective Altruism Forum.We're experimenting with a weekly newsletter format, to surface the best grants and activity on Manifund!Overall reflectionsIt's been a week since we announced our regrantor program. How are we feeling?Very happy with the publicity and engagement around our grant proposals#1 on Hacker News, multiple good EA Forum posts, and 7 new grassroots donorsFeels like this validates our strategy of posting grant proposals in public.Happy with the caliber of regrantors who've applied for budgetsWe've gotten 15 applications; have onboarded 2 and shortlisted 4 moreNow thinking about further fundraising, so we can give budgets to these awesome folksHappy with grantee experience for the handful of grants we've madeLess happy with total grantmaking quantity and dollar volume so farIn total, we've committed about $70k across 6 large grantsWhere's the bottleneck? Some guesses: regrantors are busy people; writeups take a while; "funder chicken" when waiting for others to make grantsTo address this, we're running a virtual "grantathon" to work on grant writeups & reviews! Join us next Wed evening, 6-9pm PT on this Google Meet.Grant of the weekThis week's featured grant is to Matthew Farrugia-Roberts, for introductory resources for Singular Learning Theory! In the words of Adam Gleave, the regrantor who initiated this:There's been an explosion of interest in Singular Learning Theory lately in the alignment community, and good introductory resources could save people a lot of time. A scholarly literature review also has the benefit of making this area more accessible to the ML research community more broadly. Matthew seems well placed to conduct this, having already familiarized himself with the field during his MS thesis and collected a database of papers. He also has extensive teaching experience and experience writing publications aimed at the ML research community.In need of fundingSome of our favorite proposals which could use more funding:A proposal to cure cavities, forever. Very out-of-left-field (Aaron: "it certainly isn't great for our Weirdness Points budget"), but I'm a fan of the ambition and the team behind it. We're currently seeing how Manifund can make an equity investment in Lantern Bioworks.Also: this proposal hit #1 on Hacker News, with 260 upvotes & 171 comments. Shoutout to our friend Evan Conrad for posting + tweeting this out!Holly has a stellar track record at Rethink Priorities and Harvard EA (not to mention a killer blog). She's now looking to pivot to AI moratorium movement-organizing! As a grant that may include political advocacy, we're still seeing to what extent our 501c3 can fund this; in the meantime, Holly has set up a separate GoFundMe for individual donors.Pilot of electrical stunners to reduce shrimp suffering. Recommended by regrantor Marcus Abramovitch, as the one of the most exciting & unfunded opportunities in the entire space of animal welfare.In a thorough EA Forum post, Matt investigates the cost-effectiveness of this proposal - it's a thoughtful writeup, take a look at the entire thing! One takeaway:Electric shrimp stunning might be worth supporting as a somewhat speculative bet in the animal welfare space. Considerations that might drive donor decisions on this project include risk tolerance, credence in the undiluted experience model of welfare, and willingness to take a hits-based giving approach.New regrantors: Renan Araujo and Joel BeckerSince putting out the call for regrantors last week, we've gotten quite the influx of interest. After speaking with many candidates, we're happy to announce our two newest regrantors: Renan and Joel!We expect regranting to integrate smoothly with Renan's work incubating lo...

]]>
Austin https://forum.effectivealtruism.org/posts/GNBJjBdMokxBrtvfK/manifund-what-we-re-funding-week-1 Sat, 15 Jul 2023 15:49:35 +0000 EA - Manifund: what we're funding (week 1) by Austin Austin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:37 no full 6593
xSvTArtzxBcnrD6tk EA - CEA: still doing CEA things by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA: still doing CEA things, published by Ben West on July 15, 2023 on The Effective Altruism Forum.This is a linkpost for our new and improved public dashboard, masquerading as a mini midyear updateIt's been a turbulent few months, but amidst losing an Executive Director, gaining an Interim Managing Director, and searching for a CEO, CEA has done lots of cool stuff so far in 2023.The headline numbers4,336 conference attendees (2,695 EA Global, 1,641 EAGx)133,041 hours of engagement on the Forum, including 60,507 hours of engagement with non-Community posts (60% of total engagement on posts)26 university groups and 33 organizers in UGAP622 participants in Virtual ProgramsThere's much more, including historical data and a wider range of metrics, in the dashboard!UpdatesThe work of our Community Health & Special Projects and Communications teams lend themselves less easily to stat-stuffing, but you can read recent updates from both:Community Health & Special Projects: Updates and Contacting UsHow CEA's communications team is thinking about EA communications at the momentWhat else is new?Our staff, like many others in the community (and beyond), have spent more time this year thinking about how we should respond to the rapidly evolving AI landscape. We expect more of the community's attention and resources to be directed toward AI safety at the margin, and are asking ourselves how best to balance this with principles-first EA community building.Any major changes to our strategy will have to wait until our new CEO is in place, but we have been looking for opportunities to improve our situational awareness and experiment with new products, including:Exploring and potentially organizing a large conference focussed on existential risk and/or AI safetyLearning more about and potentially supporting some AI safety groupsSupporting AI safety communications effortsThese projects are not yet ready to be announcements or commitments, but we thought it worth sharing at a high level as a guide to the direction of our thinking. If they intersect with your projects or plans, please let us know and we'll be happy to discuss more.It's worth reiterating that our priorities haven't changed since we wrote about our work in 2022: helping people who have heard about EA to deeply understand the ideas, and to find opportunities for making an impact in important fields. We continue to think that top-of-funnel growth is likely already at or above healthy levels, so rather than aiming to increase the rate any further, we want to make that growth go well.You can read more about our strategy here, including how we make some of the key decisions we are responsible for, and a list of things we are not focusing on. And it remains the case that we do not think of ourselves as having or wanting control over the EA community. We believe that a wide range of ideas and approaches are consistent with the core principles underpinning EA, and encourage others to identify and experiment with filling gaps left by our work.Impact storiesAnd finally, it wouldn't be a CEA update without a few #impact-stories:OnlineTraining for Good posted about their EU Tech Policy Fellowship on the EA Forum. 12/100+ applicants they received came from the Forum, and 6 of these 12 successfully made it on to the program, out of 17 total program slots.Community Health & Special ProjectsFollowing the TIME article about sexual misconduct, people have raised a higher-than-usual number of concerns from the past that they had noticed or experienced in the community but hadn't raised at the time. In many of these cases we've been able to act to reduce risk in the community, such as warning people about inappropriate behavior and removing people from CEA spaces when their past behavior has caused harm.Communicati...

]]>
Ben_West https://forum.effectivealtruism.org/posts/xSvTArtzxBcnrD6tk/cea-still-doing-cea-things Sat, 15 Jul 2023 13:55:00 +0000 EA - CEA: still doing CEA things by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:16 no full 6594
vjX2qAiD7JZLmGCs9 EA - EA EDA: What do Forum Topics tell us about changes in EA? by JWS Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA EDA: What do Forum Topics tell us about changes in EA?, published by JWS on July 15, 2023 on The Effective Altruism Forum.tl;dr2: Data on EA Forum posts and topics doesn't show clear 'waves' of EAtl;dr: I used the Forum API to collect data on the trends of EA Forum topics over time. While this analysis is by no means definitive, it doesn't support the simple narrative that there was a golden age of EA that has abandoned for a much worse one. There has been a rise in AI Safety posts, but that has also been fairly recent (within the last ~2 years)1. IntroductionI really liked Ben West's recent post about 'Third Wave Effective Altruism', especially for its historical reflection on what First and Second Wave EA looked like. This characterisation of EA's history seemed to strike a chord with many Forum users, and has been reflected in recent critical coverage of EA that claims the movement has abandoned its well-intentioned roots (e.g. donations for bed nets) and decided to focus fully on bizarre risks to save a distant, hypothetical future.I've always been a bit sceptical with how common this sort of framing seems to be, especially since the best evidence we have from funding for the overall EA picture shows that most funding is still going to Global Health areas. As something of a (data) scientist myself, I thought I'd turn to one of the primary sources of information for what EAs think to shed some more light on this problem - the Forum itself!This post is a write-up of the initial data collection and analysis that followed. It's not meant to be the definitive word on either how EA, or use of the EA Forum, has changed over time. Instead, I hope it will challenge some assumptions and intuitions, prompt some interesting discussion, and hopefully leads to future posts in a similar direction either from myself or others.2. Methodology(Feel free to skip this section if you're not interested in all the caveats)You may not be aware, the Forum has an API! While I couldn't find clear documentation on how to use it or a fully defined schema, people have used it in the past for interesting projects and some have very kindly shared their results & methods. I found these following three especially useful (the first two have linked GitHubs with their code):The Tree of Tags by Filip SondejEffective Altruism Data from HamishThis LessWrong tutorial from Issa RiceWith these examples to help me, I created my own code to get every post made on the EA Forum to date (without those who have deleted their post).There are various caveats to make about the data representation and data quality. These include:I extracted the data on July 7th - so any totals (e.g. number of posts, post score etc) or other details are only correct as of that date.I could only extract the postedAt date - which isn't always when the post in question was actually posted. A case in point, I'm pretty sure this post wasn't actually posted in 1972. However, it's the best data I could find, so hopefully for the vast majority of posts the display date is the posted date.In looking for a starting point for the data, there was a discontinuity between August to September 2014, but the data was a lot more continuous after then. I analyse the data in terms of monthly totals, so I threw out the one-week of data I had for July. The final dataset is therefore 106 months from September 2014 to June 2023 (inclusive).There are around ~950 distinct tags/topics in my data, which are far too many to plot concisely and share useful information. I've decided to take the top 50 topics in terms of times used, which collectively account for 56% of all Forum tags and 92% of posts in the above time period.I only extracted the first listed Author of a post - however, only 1 graph shared below relies on a user-level aggregat...

]]>
JWS https://forum.effectivealtruism.org/posts/vjX2qAiD7JZLmGCs9/ea-eda-what-do-forum-topics-tell-us-about-changes-in-ea Sat, 15 Jul 2023 01:51:17 +0000 EA - EA EDA: What do Forum Topics tell us about changes in EA? by JWS JWS https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:40 no full 6595
9t7St3pfEEiDsQ2Tr EA - Nailing the basics - Theories of change by Aidan Alexander Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nailing the basics - Theories of change, published by Aidan Alexander on July 16, 2023 on The Effective Altruism Forum.Why write a post about theories of change?As participants in a movement with 'effective' in its name, it's easy to think of ourselves as being above falling for the most common mistakes made in the non-profit sector more broadly. We're grappling with the cutting-edge stuff, not the basics, right?But we need to beware of this kind of complacency. It's crucial to nail the basics, like theories of change, and far fewer EA organizations do this than you'd expect. That includes Charity Entrepreneurship, the organization I work at, where we teach participants in our Incubation Program and Foundation Program about the importance of theories of change, and yet until today we didn't even have our own theory of change on our website!That's why we decided to share this post on theories of change - potentially the first of a series on 'nailing the basics' of effective non-profit work, if people are interested. This post is mostly made up of an excerpt from our forthcoming book on effective grantmaking ('How to Launch a High-Impact Foundation'), followed by a discussion of the theory of change for our Incubation Program. If you'd like to be notified when this book is published, you can let us know here.[Note: Applications are now open for CE's Incubation Program]TL;DRIt is very common for non-profit organizations (even EA orgs) to have no public theory of change, or to have a poor one. This is a big problem, because theories of change are the business models of the nonprofit world - you shouldn't launch, fund or work for a project that doesn't have a strong one!A theory of change explicitly articulates the cause-and-effect steps for how a project or organization can turn inputs into a desired impact on the world (i.e. it's their theory of how they'll make a change). They generally include the following sections:Inputs / activities: What the project or organization does to create change (e.g. "distribute bednets")Outputs: The tangible effects generated by the inputs (e.g. "beneficiaries have access to malaria nets")Intermediate outcomes: The outputs' effects, including benefits for the beneficiary, (e.g. "malaria nets are used" and "reduced incidence of malaria")Impact: What we're ultimately solving, and why the intermediate outcomes matter (e.g. "lives saved")Best practices when crafting a theory of change (i.e. for creators):Invest sufficiently in understanding the problem context (i.e. understanding the needs and incentives of the beneficiaries and other stakeholders, as well as barriers to change and the economic & political context)Map the causal pathway backwards from impact to activitiesQuestion every causal step (is it clear why A should cause B? how might it fail?)Hallmarks of an excellent theory of change (i.e. for reviewers):A focused suite of activitiesThe evidence and assumptions behind each step are explicitly namedThe relative confidence of each step is clearIt is clear who the actor is in each stepCommon mistakes to avoid in theories of change are:Not making fundamental impact the goal (e.g., stopping at 'increased immunizations' instead of 'improved health')Being insufficiently detailed: (a) making large leaps between each step, (b) combining multiple major outcomes into one step (e.g. 'government introduces and enforces regulation').Setting and forgetting (instead of regularly iterating on it)Not building your theory of change into a measurement planThe 'what' and 'why' of a theory of changeBuilding something complicated without an explicit plan is risky. From skyscrapers to software, ambitious projects need detailed blueprints. When building an effective nonprofit organization, the theory of change is that blueprint....

]]>
Aidan Alexander https://forum.effectivealtruism.org/posts/9t7St3pfEEiDsQ2Tr/nailing-the-basics-theories-of-change Sun, 16 Jul 2023 14:53:56 +0000 EA - Nailing the basics - Theories of change by Aidan Alexander Aidan Alexander https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 23:31 no full 6599
8HNWos7RyPvmbfd9z EA - EA Organization Updates: July 2023 by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: July 2023, published by Lizka on July 17, 2023 on The Effective Altruism Forum.We're featuring some opportunities and job listings at the top of this post. Some have (very) pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there's also an "org update" tag, where you can find more news and updates that are not part of this consolidated series.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.(If you think your organization should be getting emails about adding their updates to this series, please apply here.)Opportunities and jobsOpportunitiesConsider also checking opportunities listed on the EA Opportunities Board.Applications are open for a number of conferencesA number of Effective Altruism Global and EAGx conferences have upcoming deadlines:EAGxNYC will run 18-20 August. Tickets are $0-100. If you live in the New York area, consider applying by 31 July.EAGxBerlin, runs 8-10 September and is aimed at people in Western Europe. Tickets cost €0-80. Apply by 18 August.EAGxAustralia (22 - 24 September) is for people in Australia and New Zealand. Tickets are $75-150 (AUD). Apply by 8 September.Other conferences with open applications include EA Global: Boston (27-29 October, apply by 13 October), and EAGxPhilippines.The international Conference on Animal Rights in Europe (CARE) will run 17-20 August, in Warsaw and online. Participants from all areas of expertise are invited to network with activists, discover funding opportunities, and build a stronger movement for animals. You can get CARE 2023 tickets until 1 August.Fellowships and incubation programsGovAI's 2024 Winter Fellowship gives people the opportunity to spend three months (February-April) working on an AI governance project, learning about the field, and networking. Fellows get a £9,000 stipend and support for traveling to Oxford. If you're early in your career and are interested in studying or shaping the long-term implications of AI, consider applying by 23 July.Charity Entrepreneurship is accepting applications for its charity incubation programs in 2024 (apply by 30 September) and for its new online Research Training Program (2 October-17 December - apply by 17 July - today). The research program focuses on tools and skills needed to identify, compare, and recommend the most cost-effective and evidence-based charities and interventions. It is a full-time, fully cost-covered program that will run online for 11 weeks.Other opportunitiesThe Roots of Progress Blog Building Intensive is for writers eager to write more (and better) about progress studies topics. Participants will connect with other writers, receive writing coaching, and more. The part-time, 8-week online program runs from mid-September to mid-November. Apply by 11 August.There's a new regranting platform, Manifund; you can apply for grants, apply to regrant, or just explore and participate in the discussion.If you're interested in running an EA conference in your region or country (EAGx), you can apply for support.Ian Hogarth, the new Chair of the UK's AI Foundation Model Taskforce invites expressions of interest from AI specialists who want to help.Job listingsConsider also exploring jobs listed on "Job listing (open)."Against Malaria FoundationJunior Operations Manager (Remote, £28K - £35K)Centre for Effective AltruismContent Specialist (Remote / Oxford / Boston / other, £54.6k -£67.3k / $96.2-$124.k, apply by 26 July)Cooperative AI FoundationManaging Director (Remote, $100K - $130K, apply by 30 July)GiveWellSenior Researcher (Remote or Oakland, CA; $193,100 - $209,000)Content Edito...

]]>
Lizka https://forum.effectivealtruism.org/posts/8HNWos7RyPvmbfd9z/ea-organization-updates-july-2023 Mon, 17 Jul 2023 19:09:53 +0000 EA - EA Organization Updates: July 2023 by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:30 no full 6608
oJJLgJTsQKX3oQ9xw EA - Minimalist views of wellbeing by Teo Ajantaival Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Minimalist views of wellbeing, published by Teo Ajantaival on July 15, 2023 on The Effective Altruism Forum.IntroductionProblems with "good minus bad" viewsWellbeing is often defined as the balance of that which is good for oneself over that which is bad for oneself. For instance, hedonism typically equates wellbeing with pleasure minus pain, and preferentialism often sees wellbeing as the difference between fulfilled and unfulfilled preferences. Similarly, objective list theories may posit multiple independent goods and bads that contribute to one's overall wellbeing.A crucial challenge for these "good minus bad" conceptions of wellbeing is their reliance on an offsetting theory of aggregation. That is, they assume that any independent bads can always be counterbalanced, outweighed, or offset by a sufficient addition of independent goods, at least at the individual level.This offsetting premise has more problems than are commonly recognized, including the often sidelined question of what justifies it in the first place (Vinding, 2020a, 2022). Interpersonally, it plays a key role in generating moral implications that many would consider unacceptable, such as 'Creating Hell to Please the Blissful' (Ajantaival, 2022a, sec. 2.5; 2022b). At the individual level, it implies that a rollercoaster life containing unbearable agony and a sufficient amount of independent goods has greater wellbeing than does a perfectly untroubled life. These issues highlight the importance of exploring alternative conceptions of wellbeing that do not rely on the offsetting premise.Minimalist alternativesMinimalist views provide a unique perspective by rejecting the notion of independent goods. Instead, they define things that are good for us in entirely relational terms, namely in terms of the minimization of one or more sources of illbeing. These views avoid the problems specific to the offsetting premise, yet they are often overlooked in existing overviews of wellbeing theories, which tend to focus only on the variety of "good minus bad" views on offer. However, not only do minimalist views deserve serious consideration for their comparative merits, they can also, as I hope to show in this post, be positively intuitive in their own right.In particular, I hope to show that minimalist views can make sense of the practical tradeoffs that many of us reflectively endorse, with no need for the offsetting premise in the first place. And because many minimalist views focus on a single common currency of value, they may be promising candidates for resolving theoretical conflicts between multiple, seemingly intrinsic values. By contrast, all "good minus bad" views are still pluralistic in that they involve at least two distinct value entities.Although minimalist views do not depend on the idea of an independent good, they still provide principled answers to the question of what makes life better for an individual. Moreover, in practice, it is essential to always view the narrow question of 'better for oneself' within the broader context of 'better overall'. In this context, all minimalist views agree that life can be worth living and protecting for its overall positive roles.This essay delves into a selection of minimalist views on wellbeing, not intending to provide an exhaustive survey, but to give a sense of their diversity and intuitive appeal. For instance, experientialist minimalist views like tranquilism remain aligned with the "experience requirement", which is the intuition that a person's wellbeing cannot be directly affected by things outside their experience. In contrast, extra-experientialist minimalist views like antifrustrationism or objective list minimalism reject the experience requirement, and can thus be consistent with the intuition that premature death can leave us wors...

]]>
Teo Ajantaival https://forum.effectivealtruism.org/posts/oJJLgJTsQKX3oQ9xw/minimalist-views-of-wellbeing Sat, 15 Jul 2023 22:24:52 +0000 EA - Minimalist views of wellbeing by Teo Ajantaival Teo Ajantaival https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 33:46 no full 6609
68avgsfkfPGKmneHR EA - EAGxCambridge 2023 Retrospective by David Mears Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGxCambridge 2023 Retrospective, published by David Mears on July 18, 2023 on The Effective Altruism Forum.Written by the core organising team for EAGxCambridge, this retrospective evaluates the conference, gives feedback for the CEA events team, and makes recommendations for future organisers. It's also a general update for the community about the event, and an exercise in transparency. We welcome your feedback and comments about how we can improve EA conferences in the future.You can watch 19 of the talks on CEA's YouTube channel, here.Attendees' photos are here, and professional photos are in a subfolder.SummaryWe think EAGxCambridge went well.The main metric CEA uses to evaluate their events is 'number of connections'. We estimate around 4200 new connections resulted, at an all-told cost of around £53 per connection (=$67 at time of writing), which is a better cost-per-connection than many previous conferences. The low cost-per-connection is partly driven by the fact that the event was on the large side compared to the historical average (enabling economies of scale to kick in) and encompassed 3 days; it was also kept low by limiting travel grants.Of these 4200 new connections, around 1700 were potentially 'impactful' as rated by attendees. (Pinch of salt: as a rule, people don't know how impactful any given connection is.)The likelihood-to-recommend scores were on a par with other EA conferences, which are usually very highly rated. (The average answer was 8.7 on a 1-to-10 scale.)Besides making connections, we also wanted to encourage and inspire attendees to take action.82% of survey respondents said they planned to take at least one of a list of actions (e.g. 'change degree') as a result of the conference, including 14.5% resolving to found an EA organisation and 30% resolving to work full-time for such an organisation or in a primary cause area. After applying a pinch of salt, those numbers suggest the conference inspired people to take significant action. We heard of several anecdotal cases where the conference triggered people to apply for particular jobs or funding, or resulted in internships or research collaborations.We're very thankful to everyone who made this happen: volunteers, attendees, session hosts, and many others.ContentsFor more in-depth commentary, click through to the relevant part of the Google Doc using the links below.Basic statsStrategyWhy Cambridge?Focussing on the UK and IrelandCore teamCompressed timelinesBudgetAdmissionsSome admissions statisticsStewardshipVenuesMain venue: Guildhall first floorSecondary venue: ground floorTertiary venue: Lola LoAcousticsCoordinating with venue staff on the dayOverall viewVolunteersNumbersAttendee experienceMore snacksBetter acousticsFaster wifiFood was "incredible" / "amazing" / "extremely good" / "really excellent"Attendee SlackContentAttendee favourites"Were any sessions you attended particularly good? Which ones?"MerchSatellite eventsCommunicationsDesignComms strategyComms tacticsCloser to the conference itselfFeedback survey resultsNet Promoter ScoreDemographicsResulting actionsWelcomingnessFor the sake of the search index, those videos are:Testing Your Suitability For AI Alignment Research | Esben KranGood News in Effective Altruism | Shakeel HashimCombating Catastrophic Biorisk With Indoor Air Safety | Jam KraprayoonShould Effective Altruists Focus On Air Pollution? | Tom BarnesAlternative Proteins and How I Got There | Friederike Grosse-HolzHow Local Groups Can Have Global Impact | Cambridge Alternative Protein ProjectGlobal Food System Failure And What We Can Do About It | Noah WescombeInfecting Humans For Vaccine Development | Jasmin KaurExistential Risk Pessimism And The Time Of Perils | David ThorstadHow To Make The Most of EAGx | Oll...

]]>
David Mears https://forum.effectivealtruism.org/posts/68avgsfkfPGKmneHR/eagxcambridge-2023-retrospective Tue, 18 Jul 2023 19:47:31 +0000 EA - EAGxCambridge 2023 Retrospective by David Mears David Mears https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:37 no full 6624
7QyemcXLaxNicLNNa EA - Five Years of Rethink Priorities: Impact, Future Plans, Funding Needs (July 2023) by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Five Years of Rethink Priorities: Impact, Future Plans, Funding Needs (July 2023), published by Rethink Priorities on July 18, 2023 on The Effective Altruism Forum.OverviewThis piece highlights Rethink Priorities' accomplishments, mistakes, and changes since its establishment in 2018. We discuss RP's future plans as well as potential constraints to our impact. Finally, we call for donations and invite people to engage in an Ask Me Anything (AMA) discussion with Co-CEO Peter Wildeford that will be announced by July 19 in our newsletter and social media.You can also read this post as a PDF with visualizations.Executive summaryKey accomplishments (2018-2023)In five years, RP has published over 125 pieces of research, completed more than another 100 research projects, provided various grantmakers with consultation, influenced tens of millions of dollars in funding, fiscally sponsored nine projects, and drove forward the promising field of invertebrate welfare. Specific accomplishments include:Collaborating with dozens of European Union (EU) animal advocacy organizations to work on setting medium-term policy strategies for farmed animal welfare.Providing expert consultation to the Chilean government as they considered a bill (which has advanced to the next legislative stage) to recognize animals as sentient.Contributing significantly to burgeoning fields, such as invertebrate welfare - including work related to shrimps and insects (see more here).Completing the Moral Weight Project to try to help funders decide how to best allocate resources across species.Producing 23 reports commissioned by Open Philanthropy answering their questions about global health and development issues and interventions.Conducting 205 tailored surveys and data analysis to help several organizations that build communities of people working on global priorities.Launching projects such as Condor Camp and fiscally sponsoring organizations like Epoch and Apollo Research via our Special Projects team, which provides operational support.Setting up an Artificial Intelligence (AI) Governance and Strategy team.Growing from a two-person operation in 2018 to a team that will soon include 75 RP employees, 30 contractors, and 25 staff of fiscally sponsored projects.Mistakes and challengesWe believe that some of RP's past projects failed because we did not adequately consider the project's probability of success, its potential value, and the resources required. For example, our 2018 PriorityWiki project would have involved a large volunteer coordination effort we weren't well-placed to execute, and it is unclear how valuable it would have been even if successful.Our biggest early mistake was not building a plan for each project's path to influence and not putting enough resources into measuring our impact. We initially relied too much on producing research and hoping that it would be impactful just by existing.Thus far, we have not publicly shared as much of our existing internal impact tracking as we had initially intended due to time constraints.While we did some preparation prior to scaling in 2022, we would have liked to have established more robust project management systems beforehand.Our project timelines are not always predictable, as it is difficult to determine when you should stop researching due to diminishing returns. We think there are several cases where we spent too long working on a piece of research and would've had more impact by releasing the work earlier.ChangesWe now have better project management systems and spend much more time thinking through how to communicate our research and ensure each piece has a higher chance for impact.We are focusing more on measuring our impact, and hired a Chief Strategy Analyst last year.In addition to researching neglected areas (e....

]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/7QyemcXLaxNicLNNa/five-years-of-rethink-priorities-impact-future-plans-funding Tue, 18 Jul 2023 18:25:41 +0000 EA - Five Years of Rethink Priorities: Impact, Future Plans, Funding Needs (July 2023) by Rethink Priorities Rethink Priorities https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 28:03 no full 6619
Doa69pezbZBqrcucs EA - Shaping Humanity's Longterm Trajectory by Toby Ord Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shaping Humanity's Longterm Trajectory, published by Toby Ord on July 18, 2023 on The Effective Altruism Forum.Since writing The Precipice, one of my aims has been to better understand how reducing existential risk compares with other ways of influencing the longterm future. Helping avert a catastrophe can have profound value due to the way that the short-run effects of our actions can have a systematic influence on the long-run future. But it isn't the only way that could happen.For example, if we advanced human progress by a year, perhaps we should expect to see us reach each subsequent milestone a year earlier. And if things are generally becoming better over time, then this may make all years across the whole future better on average.I've developed a clean mathematical framework in which possibilities like this can be made precise, the assumptions behind them can be clearly stated, and their value can be compared.The starting point is the longterm trajectory of humanity, understood as how the instantaneous value of humanity unfolds over time. In this framework, the value of our future is equal to the area under this curve and the value of altering our trajectory is equal to the area between the original curve and the altered curve.This allows us to compare the value of reducing existential risk to other ways our actions might improve the longterm future, such as improving the values that guide humanity, or advancing progress.Ultimately, I draw out and name 4 idealised ways our short-term actions could change the longterm trajectory:advancementsspeed-upsgainsenhancementsAnd I show how these compare to each other, and to reducing existential risk.While the framework is mathematical, the maths in these four cases turns out to simplify dramatically, so anyone should be able to follow it.My hope is that this framework, and this categorisation of some of the key ways we might hope to shape the longterm future, can improve our thinking about longtermism.Some upshots of the work:Some ways of altering our trajectory only scale with humanity's duration or its average value - but not both. There is a serious advantage to those that scale with both: speed-ups, enhancements, and reducing existential risk.When people talk about 'speed-ups', they are often conflating two different concepts. I disentangle these into advancements and speed-ups, showing that we mainly have advancements in mind, but that true speed-ups may yet be possible.The value of advancements and speed-ups depends crucially on whether they also bring forward the end of humanity. When they do, they have negative value.It is hard for pure advancements to compete with reducing existential risk as their value turns out not to scale with the duration of humanity's future. Advancements are competitive in outcomes where value increases exponentially up until the end time, but this isn't likely over the very long run. Work on creating longterm value via advancing progress is most likely to compete with reducing risk if the focus is on increasing the relative progress of some areas over others, in order to make a more radical change to the trajectory.The work is appearing as a chapter for the forthcoming book, Essays on Longtermism, but as of today, you can also read it online here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Toby_Ord https://forum.effectivealtruism.org/posts/Doa69pezbZBqrcucs/shaping-humanity-s-longterm-trajectory Tue, 18 Jul 2023 10:54:41 +0000 EA - Shaping Humanity's Longterm Trajectory by Toby Ord Toby_Ord https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:08 no full 6620
SAvkXAwrzdhecAaCj EA - New career review: AI safety technical research by Benjamin Hilton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New career review: AI safety technical research, published by Benjamin Hilton on July 18, 2023 on The Effective Altruism Forum.Note: this post is a (minorly) edited version of a new 80,000 Hours career review.Progress in AI - while it could be hugely beneficial - comes with significant risks. Risks that we've argued could be existential.But these risks can be tackled.With further progress in AI safety, we have an opportunity to develop AI for good: systems that are safe, ethical, and beneficial for everyone.This article explains how you can help.SummaryArtificial intelligence will have transformative effects on society over the coming decades, and could bring huge benefits - but we also think there's a substantial risk. One promising way to reduce the chances of an AI-related catastrophe is to find technical solutions that could allow us to prevent AI systems from carrying out dangerous behaviour.ProsOpportunity to make a significant contribution to a hugely important area of researchIntellectually challenging and interesting workThe area has a strong need for skilled researchers and engineers, and is highly neglected overallConsDue to a shortage of managers, it's difficult to get jobs and might take you some time to build the required career capital and expertiseYou need a strong quantitative backgroundIt might be very difficult to find solutionsThere's a real risk of doing harmKey facts on fitYou'll need a quantitative background and should probably enjoy programming. If you've never tried programming, you may be a good fit if you can break problems down into logical parts, generate and test hypotheses, possess a willingness to try out many different solutions, and have high attention to detail.If you already:Are a strong software engineer, you could apply for empirical research contributor roles right now (even if you don't have a machine learning background, although that helps)Could get into a top 10 machine learning PhD, that would put you on track to become a research leadHave a very strong maths or theoretical computer science background, you'll probably be a good fit for theoretical alignment researchRecommendedIf you are well suited to this career, it may be the best way for you to have a social impact.Thanks to Adam Gleave, Jacob Hilton and Rohin Shah for reviewing this article. And thanks to Charlie Rogers-Smith for his help, and his article on the topic - How to pursue a career in technical AI alignment.Why AI safety technical research is high impactAs we've argued, in the next few decades, we might see the development of hugely powerful machine learning systems with the potential to transform society. This transformation could bring huge benefits - but only if we avoid the risks.We think that the worst-case risks from AI systems arise in large part because AI systems could be misaligned - that is, they will aim to do things that we don't want them to do. In particular, we think they could be misaligned in such a way that they develop (and execute) plans that pose risks to humanity's ability to influence the world, even when we don't want that influence to be lost.We think this means that these future systems pose an existential threat to civilisation.Even if we find a way to avoid this power-seeking behaviour, there are still substantial risks - such as misuse by governments or other actors - which could be existential threats in themselves.There are many ways in which we could go about reducing the risks that these systems might pose. But one of the most promising may be researching technical solutions that prevent unwanted behaviour - including misaligned behaviour - from AI systems. (Finding a technical way to prevent misalignment in particular is known as the alignment problem.)In the past few years, we've seen more o...

]]>
Benjamin Hilton https://forum.effectivealtruism.org/posts/SAvkXAwrzdhecAaCj/new-career-review-ai-safety-technical-research Tue, 18 Jul 2023 08:25:54 +0000 EA - New career review: AI safety technical research by Benjamin Hilton Benjamin Hilton https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 47:37 no full 6623
PChvstTKnf6iAP36F EA - Effective Altruism and the strategic ambiguity of 'doing good' by Jeroen De Ryck Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Altruism and the strategic ambiguity of 'doing good', published by Jeroen De Ryck on July 18, 2023 on The Effective Altruism Forum.Whilst Googling around for something entirely unrelated, I stumbled on a discussion paper published in January of 2023 about Effective Altruism that argues Global Health & Wellbeing is basically a facade to get people into the way more controversial core of longtermism. I couldn't find something posted about it elsewhere on the forum, so I'll try to summarise here.The paper argues that there is a big distinction between what they call public facing EA and Core EA. The former cares about global health and wellbeing (GH&W) whereas the latter cares about x-risks, animal welfare and "helping elites get advanced degrees" (which I'll just refer to as core topics). There are several more distinctions between public EA and core EA, e.g. about impartiality and the importance of evidence and reason. The author argues, based on quotes from a variety of posts from a variety of influential people within EA, that for the core audience, GH&W is just a facade such that EA is perceived as 'good' by the broader public, whilst the core members work on much more controversial core topics such as transhumanism that go against many of the principles put forward by GH&W research and positions.The author seems to claim that this was done on purpose and that GH&W merely exists as a method to "convert more recruits" to a controversial core of transhumanism that EA is nowadays. This substantial distinction between GH&W and core topics causes an identity crisis between people who genuinely believe that EA is about GH&W and people who have been convinced of the core topics. The author says that these distinctions have always existed, but have been purposely hidden with nice-sounding GH&W topics by a few core members (such as Yudkowsky, Alexander, Todd, Ord, MacAskill), as a transhumanist agenda would be too controversial for the public, although it was the goal of EA after all and always has been.To quote from the final paragraph from the paper:The 'EA' that academics write about is a mirage, albeit one invoked as shorthand for a very real phenomenon, i.e., the elevation of RCTs and quantitative evaluation methods in the aid and development sector. [...] Rather, my point is that these articles and the arguments they make - sophisticated and valuable as they are - are not about EA: they are about the Singer-solution to global poverty, effective giving, and about the role of RCTs and quantitative evaluation methods in development practice. EA is an entirely different project, and the magnitude and implications of that project cannot be grasped until people are willing to look at the evidence beyond EA's glossy front-cover, and see what activities and aims the EA movement actually prioritizes, how funding is actually distributed, whose agenda is actually pursued, and whose interests are actually served.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Jeroen De Ryck https://forum.effectivealtruism.org/posts/PChvstTKnf6iAP36F/effective-altruism-and-the-strategic-ambiguity-of-doing-good Tue, 18 Jul 2023 06:47:41 +0000 EA - Effective Altruism and the strategic ambiguity of 'doing good' by Jeroen De Ryck Jeroen De Ryck https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:53 no full 6622
7zSzAZnWYQWat9qsp EA - What do y'all think of pay gaps in EA? by JohnSnow Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What do y'all think of pay gaps in EA?, published by JohnSnow on July 17, 2023 on The Effective Altruism Forum.I saw a CEA role advertised for twice the salary of an AMF job, whereas they do not seem to differ dramatically in expected level of expertise / experience. Even if one believes they can make more impact at AMF, they would have to give up 20k pounds in salary to pass on the content specialist role. We learned recently to consider earning less, but this may still be quite the conundrum. What do you think?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
JohnSnow https://forum.effectivealtruism.org/posts/7zSzAZnWYQWat9qsp/what-do-y-all-think-of-pay-gaps-in-ea Mon, 17 Jul 2023 22:23:06 +0000 EA - What do y'all think of pay gaps in EA? by JohnSnow JohnSnow https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:44 no full 6621
K2xQrrXn5ZSgtntuT EA - What do XPT forecasts tell us about AI risk? by Forecasting Research Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What do XPT forecasts tell us about AI risk?, published by Forecasting Research Institute on July 19, 2023 on The Effective Altruism Forum.This post was co-authored by the Forecasting Research Institute and Rose Hadshar. Thanks to Josh Rosenberg for managing this work, Zachary Jacobs and Molly Hickman for the underlying data analysis, Coralie Consigny and Bridget Williams for fact-checking and copy-editing, the whole FRI XPT team for all their work on this project, and our external reviewers.In 2022, the Forecasting Research Institute (FRI) ran the Existential Risk Persuasion Tournament (XPT). From June through October 2022, 169 forecasters, including 80 superforecasters and 89 experts, developed forecasts on various questions related to existential and catastrophic risk. Forecasters moved through a four-stage deliberative process that was designed to incentivize them not only to make accurate predictions but also to provide persuasive rationales that boosted the predictive accuracy of others' forecasts. Forecasters stopped updating their forecasts on 31st October 2022, and are not currently updating on an ongoing basis. FRI plans to run future iterations of the tournament, and open up the questions more broadly for other forecasters.You can see the overall results of the XPT here.Some of the questions were related to AI risk. This post:Sets out the XPT forecasts on AI risk, and puts them in context.Lays out the arguments given in the XPT for and against these forecasts.Offers some thoughts on what these forecasts and arguments show us about AI risk.TL;DRXPT superforecasters predicted that catastrophic and extinction risk from AI by 2030 is very low (0.01% catastrophic risk and 0.0001% extinction risk).XPT superforecasters predicted that catastrophic risk from nuclear weapons by 2100 is almost twice as likely as catastrophic risk from AI by 2100 (4% vs 2.13%).XPT superforecasters predicted that extinction risk from AI by 2050 and 2100 is roughly an order of magnitude larger than extinction risk from nuclear, which in turn is an order of magnitude larger than non-anthropogenic extinction risk (see here for details).XPT superforecasters more than quadruple their forecasts for AI extinction risk by 2100 if conditioned on AGI or TAI by 2070 (see here for details).XPT domain experts predicted that AI extinction risk by 2100 is far greater than XPT superforecasters do (3% for domain experts, and 0.38% for superforecasters by 2100).Although XPT superforecasters and experts disagreed substantially about AI risk, both superforecasters and experts still prioritized AI as an area for marginal resource allocation (see here for details).It's unclear how accurate these forecasts will prove, particularly as superforecasters have not been evaluated on this timeframe before.The forecastsIn the table below, we present forecasts from the following groups:Superforecasters: median forecast across superforecasters in the XPT.Domain experts: median forecasts across all AI experts in the XPT.(See our discussion of aggregation choices (pp. 20-22) for why we focus on medians.)QuestionForecastersN203020502100AI Catastrophic risk (>10% of humans die within 5 years)Superforecasters880.01%0.73%2.13%Domain experts300.35%5%12%AI Extinction risk (human population <5,000)Superforecasters880.0001%0.03%0.38%Domain experts290.02%1.1%3%The forecasts in contextDifferent methods have been used to estimate AI risk:Surveying experts of various kinds, e.g. Sanders and Bostrom, 2008; Grace et al. 2017.Doing in-depth investigations, e.g. Ord, 2020; Carlsmith, 2021.The XPT forecasts are distinctive relative to expert surveys in that:The forecasts were incentivized: for long-run questions, XPT used 'reciprocal scoring' rules to incentivize accurate forecasts (see here for details).Fore...

]]>
Forecasting Research Institute https://forum.effectivealtruism.org/posts/K2xQrrXn5ZSgtntuT/what-do-xpt-forecasts-tell-us-about-ai-risk-1 10% of humans die within 5 years)Superforecasters880.01%0.73%2.13%Domain experts300.35%5%12%AI Extinction risk (human population <5,000)Superforecasters880.0001%0.03%0.38%Domain experts290.02%1.1%3%The forecasts in contextDifferent methods have been used to estimate AI risk:Surveying experts of various kinds, e.g. Sanders and Bostrom, 2008; Grace et al. 2017.Doing in-depth investigations, e.g. Ord, 2020; Carlsmith, 2021.The XPT forecasts are distinctive relative to expert surveys in that:The forecasts were incentivized: for long-run questions, XPT used 'reciprocal scoring' rules to incentivize accurate forecasts (see here for details).Fore...]]> Wed, 19 Jul 2023 18:21:19 +0000 EA - What do XPT forecasts tell us about AI risk? by Forecasting Research Institute 10% of humans die within 5 years)Superforecasters880.01%0.73%2.13%Domain experts300.35%5%12%AI Extinction risk (human population <5,000)Superforecasters880.0001%0.03%0.38%Domain experts290.02%1.1%3%The forecasts in contextDifferent methods have been used to estimate AI risk:Surveying experts of various kinds, e.g. Sanders and Bostrom, 2008; Grace et al. 2017.Doing in-depth investigations, e.g. Ord, 2020; Carlsmith, 2021.The XPT forecasts are distinctive relative to expert surveys in that:The forecasts were incentivized: for long-run questions, XPT used 'reciprocal scoring' rules to incentivize accurate forecasts (see here for details).Fore...]]> 10% of humans die within 5 years)Superforecasters880.01%0.73%2.13%Domain experts300.35%5%12%AI Extinction risk (human population <5,000)Superforecasters880.0001%0.03%0.38%Domain experts290.02%1.1%3%The forecasts in contextDifferent methods have been used to estimate AI risk:Surveying experts of various kinds, e.g. Sanders and Bostrom, 2008; Grace et al. 2017.Doing in-depth investigations, e.g. Ord, 2020; Carlsmith, 2021.The XPT forecasts are distinctive relative to expert surveys in that:The forecasts were incentivized: for long-run questions, XPT used 'reciprocal scoring' rules to incentivize accurate forecasts (see here for details).Fore...]]> Forecasting Research Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 51:59 no full 6633
Yq6yKgBtaMgkgyetm EA - An EA's Guide to Visiting New York City by Alex R Kaplan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An EA's Guide to Visiting New York City, published by Alex R Kaplan on July 19, 2023 on The Effective Altruism Forum.Note: We (The EA NYC Team) are posting this on the suggestion from this Forum post calling for more public guides for navigating EA hubs. This guide is not a representation of the views of everyone in the community.Read on for a basic overview of the EA community in New York City. Whether you are visiting, planning to attend EAGxNYC (applications still open!), have just moved here, or are considering moving, we hope this guide will provide some helpful context on our community. That said, please comment or message us if we're missing anything - there's a lot happening here and it's hard to capture it all!And if you are planning on being in or have just arrived in NYC, get in touch with us! We're here to help support and connect the NYC EA network. We think there's a really special community here and Rocky, Alex, and the whole team would be happy to help you find like-minded New Yorkers.OverviewAbove: map outlining NYC's five boroughs. Credit: Wikimedia CommonsNYC itselfNew York City comprises five boroughs: Manhattan, Brooklyn, Queens, the Bronx, and Staten Island. Each borough has its own character, as do the individual neighborhoods within the boroughs, but they are all part of New York City and are served by a united city government and public transit system. Manhattan is at the heart of the city, and is what most people think of when they think of New York City.There are also several areas immediately outside of the city that are very accessible by public transit, such as Jersey City, Hoboken, and New Jersey's suburbs, suburbs on Long Island (Nassau and Suffolk Counties), parts of the state of Connecticut, and suburbs in upstate New York (ie. Westchester, Putnam, and Dutchess Counties).Each of these areas is made up of many smaller neighborhoods, which vary dramatically in demographics, building styles, and activities.The EA community in NYC is very spread-out, and includes people in all of these areas and sometimes beyond.Of the five boroughs, Manhattan is the most densely populated, generally has the best accessibility, and usually has the most going on during the day (including anything from university research and co-working to tourism and EA NYC events). It will typically be the most expensive place to stay.Brooklyn (especially northwestern Brooklyn closer to Manhattan) and Queens (especially southwestern Queens closer to Manhattan) are also pretty centrally located and would allow access to co-working from cafes/libraries, more residential neighborhoods where some community members live, as well as more recreational options like dancing, performances, and bars.New Jersey (especially eastern New Jersey closer to Manhattan) can also provide more affordable living and decent access to Manhattan.Staten Island and the Bronx: EA NYC usually sees less representation from these boroughs, likely due to the greater geographical distance from events.A quick note on safetyMost New Yorkers consider the city very safe (to the point that our team almost forgot to mention this), and the data supports this.Overall, New York is a very safe city (despite what others would have you believe based on misconceptions rooted from 40 years ago). Being a major city that covers ~300 square miles, New York is not without any crime, but given its population, NYC has relatively low crime rates.Nevertheless some travelers reasonably ask us about safety. For visitors with less urban experience, any city may be a shock. And while New York is quite safe by US standards, it may not be as safe as some European or East Asian cities to which some visitors are accustomed. Most of this difference is attributable to crime in low-income areas unlikely to be seen by t...

]]>
Alex R Kaplan https://forum.effectivealtruism.org/posts/Yq6yKgBtaMgkgyetm/an-ea-s-guide-to-visiting-new-york-city Wed, 19 Jul 2023 13:42:49 +0000 EA - An EA's Guide to Visiting New York City by Alex R Kaplan Alex R Kaplan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:07 no full 6632
kM9NFhZnXphEFiCFc EA - From Vision to Reality: Vida Plena's Pilot Results by Joy Bittner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: From Vision to Reality: Vida Plena's Pilot Results, published by Joy Bittner on July 19, 2023 on The Effective Altruism Forum.ContextVida Plena (meaning 'a flourishing life' in Spanish) is a new nonprofit organization based in Quito, Ecuador which launched last year (see our launch post here).Our mission is to build strong mental health in low-income and refugee communities, who otherwise would have no access to care. We provide evidence-based depression treatment using group interpersonal therapy, which is highly cost-effective and scalable (see our predictive cost-effectiveness analysis for more details).Pilot ResultsAfter completing the Charity Entrepreneurship (CE) Incubator program in the summer of 2022, we ran our pilot last fall/winter. We are pleased to now share results (baseline/endline comparisons) from the pilot.A few highlights:We had a total of 10 therapy groups and 55 people in the pilot75% of people had a clinically significant reduction in their depression symptoms50% of people had improvements in the secondary indicators of anxiety and PTSDI owe a huge debt of gratitude to Dr. Melanie Basnak from Rethink Priorities, who volunteered her time to conduct the data analysis for us.Next StepsWe are currently analyzing the 3 month and 6 month follow up data.Given the initial positive results, we have continued our programs this year, treating an additional 250 people to date.In 2024 we hope to scale our program and continue to conduct program evaluations involving a control group.We would love to stay in touch and keep you up to date with our workSign up here for email updatesWe're also on instagram: @vidaplena.globalIf you want to help, please tell others in your network about us: the best connections are unexpected. We are looking to connect with people and organizations involved in global mental health, both in industry and academia. Feel feel free to reach out at joy@vidaplena.globalAnd of course, we welcome any donations to help support our mission and work.Our incredible group of community facilitators who were certified in interpersonal therapy by Columbia University for the pilot.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Joy Bittner https://forum.effectivealtruism.org/posts/kM9NFhZnXphEFiCFc/from-vision-to-reality-vida-plena-s-pilot-results Wed, 19 Jul 2023 13:34:20 +0000 EA - From Vision to Reality: Vida Plena's Pilot Results by Joy Bittner Joy Bittner https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:18 no full 6631
N4LKrktopDs5Qdqgn EA - An Introduction to Critiques of prominent AI safety organizations by Omega Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Introduction to Critiques of prominent AI safety organizations, published by Omega on July 19, 2023 on The Effective Altruism Forum.What is this series (and who are we)?This is a series of evaluations of technical AI safety (TAIS) organizations. We evaluate organizations that have received more than $10 million per year in funding and that have had limited external evaluation.The primary authors of this series include one technical AI safety researcher (>4 years experience), and one non-technical person with experience in the EA community. Some posts also have contributions from others with experience in technical AI safety and/or the EA community.This introduction was written after the first two posts in the series were published. Since we first started working on this series we have updated and refined our process for evaluating and publishing critiques, and this post reflects our present views.Why are we writing this series?Recently, there has been more attention on the field of technical AI safety (TAIS), meaning that many people are trying to get into TAIS roles. Without knowing significant context about different organizations, new entrants to the field will tend to apply to TAIS organizations based on their prominence, which is largely related to factors such as total funding, media coverage, volume of output, etc, rather than just the quality of their research or approach. Much of the discussion we have observed about TAIS organizations, especially criticisms of them, happens behind closed doors, in conversations that junior people are usually not privy to. We wish to help disseminate this information more broadly to enable individuals to make a better informed decision.We focus on evaluating large organizations, defined as those with more than $10 million per year in funding. These organizations are amongst the most visible and tend to have a significant influence on the AI safety ecosystem by virtue of their size, making evaluation particularly important. Additionally, these organizations would only need to dedicate a small fraction of their resources to engaging with these criticisms.How do we evaluate organizations?We believe that an organization should be graded on multiple metrics. We consider:Research outputs: How much good quality research has the organization published? This is the area where we put the most weight.Research agenda: Does the organization's research plan seem likely to bear fruit?Research team: What proportion of researchers are senior/experienced? What is the leadership's experience in ML and safety research? Are the leaders trustworthy? Are there conflicts of interest?Strategy and governance: What corporate governance structures are in place? Does the organization have independent accountability? How transparent is it? The FTX crisis has shown how important this can be.Organizational culture and work environment: Does the organization foster a good work environment for their team? What efforts has the organization made to improve its work culture?When evaluating research outputs, we benchmark against high-quality existing research, and against academia. Although academic AIS research is not always the most novel or insightful, there are strong standards for rigor in academia that we believe are important. Some existing research that we think is exceptional include:Eliciting latent knowledge (ARC)Iterated Distillation and Amplification (Paul Christiano)Constitutional AI (Anthropic)Trojan Detection Competition (CAIS)Causal scrubbing (Redwood)Toy models of superposition (Anthropic)Our thoughts on hits-based research agendasWhen we criticized Conjecture's output, commenters suggested that we were being unfair, because Conjecture is pursuing a hits-based research agenda, and this style of research typically takes...

]]>
Omega https://forum.effectivealtruism.org/posts/N4LKrktopDs5Qdqgn/an-introduction-to-critiques-of-prominent-ai-safety 4 years experience), and one non-technical person with experience in the EA community. Some posts also have contributions from others with experience in technical AI safety and/or the EA community.This introduction was written after the first two posts in the series were published. Since we first started working on this series we have updated and refined our process for evaluating and publishing critiques, and this post reflects our present views.Why are we writing this series?Recently, there has been more attention on the field of technical AI safety (TAIS), meaning that many people are trying to get into TAIS roles. Without knowing significant context about different organizations, new entrants to the field will tend to apply to TAIS organizations based on their prominence, which is largely related to factors such as total funding, media coverage, volume of output, etc, rather than just the quality of their research or approach. Much of the discussion we have observed about TAIS organizations, especially criticisms of them, happens behind closed doors, in conversations that junior people are usually not privy to. We wish to help disseminate this information more broadly to enable individuals to make a better informed decision.We focus on evaluating large organizations, defined as those with more than $10 million per year in funding. These organizations are amongst the most visible and tend to have a significant influence on the AI safety ecosystem by virtue of their size, making evaluation particularly important. Additionally, these organizations would only need to dedicate a small fraction of their resources to engaging with these criticisms.How do we evaluate organizations?We believe that an organization should be graded on multiple metrics. We consider:Research outputs: How much good quality research has the organization published? This is the area where we put the most weight.Research agenda: Does the organization's research plan seem likely to bear fruit?Research team: What proportion of researchers are senior/experienced? What is the leadership's experience in ML and safety research? Are the leaders trustworthy? Are there conflicts of interest?Strategy and governance: What corporate governance structures are in place? Does the organization have independent accountability? How transparent is it? The FTX crisis has shown how important this can be.Organizational culture and work environment: Does the organization foster a good work environment for their team? What efforts has the organization made to improve its work culture?When evaluating research outputs, we benchmark against high-quality existing research, and against academia. Although academic AIS research is not always the most novel or insightful, there are strong standards for rigor in academia that we believe are important. Some existing research that we think is exceptional include:Eliciting latent knowledge (ARC)Iterated Distillation and Amplification (Paul Christiano)Constitutional AI (Anthropic)Trojan Detection Competition (CAIS)Causal scrubbing (Redwood)Toy models of superposition (Anthropic)Our thoughts on hits-based research agendasWhen we criticized Conjecture's output, commenters suggested that we were being unfair, because Conjecture is pursuing a hits-based research agenda, and this style of research typically takes...]]> Wed, 19 Jul 2023 09:12:09 +0000 EA - An Introduction to Critiques of prominent AI safety organizations by Omega 4 years experience), and one non-technical person with experience in the EA community. Some posts also have contributions from others with experience in technical AI safety and/or the EA community.This introduction was written after the first two posts in the series were published. Since we first started working on this series we have updated and refined our process for evaluating and publishing critiques, and this post reflects our present views.Why are we writing this series?Recently, there has been more attention on the field of technical AI safety (TAIS), meaning that many people are trying to get into TAIS roles. Without knowing significant context about different organizations, new entrants to the field will tend to apply to TAIS organizations based on their prominence, which is largely related to factors such as total funding, media coverage, volume of output, etc, rather than just the quality of their research or approach. Much of the discussion we have observed about TAIS organizations, especially criticisms of them, happens behind closed doors, in conversations that junior people are usually not privy to. We wish to help disseminate this information more broadly to enable individuals to make a better informed decision.We focus on evaluating large organizations, defined as those with more than $10 million per year in funding. These organizations are amongst the most visible and tend to have a significant influence on the AI safety ecosystem by virtue of their size, making evaluation particularly important. Additionally, these organizations would only need to dedicate a small fraction of their resources to engaging with these criticisms.How do we evaluate organizations?We believe that an organization should be graded on multiple metrics. We consider:Research outputs: How much good quality research has the organization published? This is the area where we put the most weight.Research agenda: Does the organization's research plan seem likely to bear fruit?Research team: What proportion of researchers are senior/experienced? What is the leadership's experience in ML and safety research? Are the leaders trustworthy? Are there conflicts of interest?Strategy and governance: What corporate governance structures are in place? Does the organization have independent accountability? How transparent is it? The FTX crisis has shown how important this can be.Organizational culture and work environment: Does the organization foster a good work environment for their team? What efforts has the organization made to improve its work culture?When evaluating research outputs, we benchmark against high-quality existing research, and against academia. Although academic AIS research is not always the most novel or insightful, there are strong standards for rigor in academia that we believe are important. Some existing research that we think is exceptional include:Eliciting latent knowledge (ARC)Iterated Distillation and Amplification (Paul Christiano)Constitutional AI (Anthropic)Trojan Detection Competition (CAIS)Causal scrubbing (Redwood)Toy models of superposition (Anthropic)Our thoughts on hits-based research agendasWhen we criticized Conjecture's output, commenters suggested that we were being unfair, because Conjecture is pursuing a hits-based research agenda, and this style of research typically takes...]]> 4 years experience), and one non-technical person with experience in the EA community. Some posts also have contributions from others with experience in technical AI safety and/or the EA community.This introduction was written after the first two posts in the series were published. Since we first started working on this series we have updated and refined our process for evaluating and publishing critiques, and this post reflects our present views.Why are we writing this series?Recently, there has been more attention on the field of technical AI safety (TAIS), meaning that many people are trying to get into TAIS roles. Without knowing significant context about different organizations, new entrants to the field will tend to apply to TAIS organizations based on their prominence, which is largely related to factors such as total funding, media coverage, volume of output, etc, rather than just the quality of their research or approach. Much of the discussion we have observed about TAIS organizations, especially criticisms of them, happens behind closed doors, in conversations that junior people are usually not privy to. We wish to help disseminate this information more broadly to enable individuals to make a better informed decision.We focus on evaluating large organizations, defined as those with more than $10 million per year in funding. These organizations are amongst the most visible and tend to have a significant influence on the AI safety ecosystem by virtue of their size, making evaluation particularly important. Additionally, these organizations would only need to dedicate a small fraction of their resources to engaging with these criticisms.How do we evaluate organizations?We believe that an organization should be graded on multiple metrics. We consider:Research outputs: How much good quality research has the organization published? This is the area where we put the most weight.Research agenda: Does the organization's research plan seem likely to bear fruit?Research team: What proportion of researchers are senior/experienced? What is the leadership's experience in ML and safety research? Are the leaders trustworthy? Are there conflicts of interest?Strategy and governance: What corporate governance structures are in place? Does the organization have independent accountability? How transparent is it? The FTX crisis has shown how important this can be.Organizational culture and work environment: Does the organization foster a good work environment for their team? What efforts has the organization made to improve its work culture?When evaluating research outputs, we benchmark against high-quality existing research, and against academia. Although academic AIS research is not always the most novel or insightful, there are strong standards for rigor in academia that we believe are important. Some existing research that we think is exceptional include:Eliciting latent knowledge (ARC)Iterated Distillation and Amplification (Paul Christiano)Constitutional AI (Anthropic)Trojan Detection Competition (CAIS)Causal scrubbing (Redwood)Toy models of superposition (Anthropic)Our thoughts on hits-based research agendasWhen we criticized Conjecture's output, commenters suggested that we were being unfair, because Conjecture is pursuing a hits-based research agenda, and this style of research typically takes...]]> Omega https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:13 no full 6634
db5TbLPSZL2XPeaed EA - AMA: Peter Wildeford (Co-CEO at Rethink Priorities) by Peter Wildeford Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Peter Wildeford (Co-CEO at Rethink Priorities), published by Peter Wildeford on July 19, 2023 on The Effective Altruism Forum.Hi everyone! I'll be doing an Ask Me Anything (AMA) here. Feel free to drop your questions in the comments below. I will aim to answer them by Monday, July 24.Who am I?I'm Peter. I co-founded Rethink Priorities (RP) with Marcus A. Davis in 2018. Previously, I worked as a data scientist in industry for five years. I'm an avid forecaster. I've been known to tweet here and blog here.What does Rethink Priorities do?RP is a research and implementation group that works with foundations and impact-focused non-profits to identify pressing opportunities to make the world better, figures out strategies for working on those problems, and does that work.We focus on:Wild and farmed animal welfare (including invertebrate welfare)Global health and development (including climate change)AI governance and strategyExistential security and safeguarding a flourishing long-term futureUnderstanding and supporting communities relevant to the aboveWhat should you ask me?Anything!I oversee RP's work related to existential security, AI, and surveys and data analysis research, but I can answer any question about RP (or anything).I'm also excited to answer questions about the organization's future plans and our funding gaps (see here for more information). We're pretty funding constrained right now and could use some help!We also recently published a personal reflection on what Marcus and I have learned in the last five years as well as a review of the organization's impacts, future plans, and funding needs that you might be interested in or have questions about.RP's publicly available research can be found in this database. If you'd like to support RP's mission, please donate here or contact Director of Development Janique Behman.To stay up-to-date on our work, please subscribe to our newsletter or engage with us on Twitter, Facebook, or LinkedIn.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Peter Wildeford https://forum.effectivealtruism.org/posts/db5TbLPSZL2XPeaed/ama-peter-wildeford-co-ceo-at-rethink-priorities Wed, 19 Jul 2023 08:02:21 +0000 EA - AMA: Peter Wildeford (Co-CEO at Rethink Priorities) by Peter Wildeford Peter Wildeford https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:10 no full 6635
rdxHxZYusMsihf8Qk EA - I'm interviewing Jan Leike, co-lead of OpenAI's new Superalignment project. What should I ask him? by Robert Wiblin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm interviewing Jan Leike, co-lead of OpenAI's new Superalignment project. What should I ask him?, published by Robert Wiblin on July 18, 2023 on The Effective Altruism Forum.I'm interviewing Jan Leike for The 80,000 Hours Podcast (personal website, Twitter).He's been Head of Alignment at OpenAI and is now leading their Superalignment team which will aim to figure out "how to steer & control AI systems much smarter than us" - and do it in under 4 years!They've been given 20% of the compute OpenAI has secured so far in order to work on it. Read the official announcement about it or Jan Leike's Twitter thread.What should I ask him?(P.S. Here's Jan's first appearance on the show back in 2018.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Robert_Wiblin https://forum.effectivealtruism.org/posts/rdxHxZYusMsihf8Qk/i-m-interviewing-jan-leike-co-lead-of-openai-s-new Tue, 18 Jul 2023 23:17:21 +0000 EA - I'm interviewing Jan Leike, co-lead of OpenAI's new Superalignment project. What should I ask him? by Robert Wiblin Robert_Wiblin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:54 no full 6636
gEmkxFuMck8SHC55w EA - Introducing the Effective Altruism Addiction Recovery Group by Devin Kalish Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Effective Altruism Addiction Recovery Group, published by Devin Kalish on July 20, 2023 on The Effective Altruism Forum.Since posting my article on alcoholism, I have, somewhat in the background, been trying to organize an EA recovery group with a few other interested people. This group would, ideally, be broadly defined and include alcoholism, which so far seems like the most common problem, but also other addictions, either to drugs, or more behavioral ones such as eating disorders. We have not had any meetings yet, but I have set up a Discord server which can be accessed here:Based on a poll I set up, I am leaning towards 5:00 PM EST July 25th as our initial meeting time. This meeting is meant more to discuss logistics and administration than to be a conventional group therapy meeting of any kind, although it might drift in that direction at points as well. So far I have not set any explicit rules, and don't want to before our first meeting, when I can get input from others. Given this, I am hoping no one does anything particularly ban-worthy before this time, as I will have to judge/get input on a case by case basis, and will prefer to start with warnings barring obviously intentional bad behavior. As a starting point, if you are EA or EA adjacent and have an addiction like one of those mentioned in the beginning, consider yourself invited.When the final rules are put in place I will likely more formally restrict it, and in the meantime I would prefer that you only come onto the server if you are, suspect you are, or have struggled with an addiction in the past. I will most likely also be welcoming to people with professional mental/community health experience, but before discussing the rules in a meeting, I would prefer that people who have a more impersonal interest in this project don't join right away. I do not plan to enforce this before the rules are set however, so long as you behave well. I am only posting this link while the project is still so half-baked because I would prefer to have more people present at an initial meeting where important matters of this sort are decided, rather than a small handful of the people who will eventually want to join. If you have any questions, feel free to reach out to me through my DMs.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Devin Kalish https://forum.effectivealtruism.org/posts/gEmkxFuMck8SHC55w/introducing-the-effective-altruism-addiction-recovery-group Thu, 20 Jul 2023 20:11:53 +0000 EA - Introducing the Effective Altruism Addiction Recovery Group by Devin Kalish Devin Kalish https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:12 no full 6642
7Tvc3N7nFa32ekuGA EA - Forum feature update: reactions, improved search, updated post pages and more (July 2023) by Sharang Phadke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forum feature update: reactions, improved search, updated post pages and more (July 2023), published by Sharang Phadke on July 20, 2023 on The Effective Altruism Forum.We've introduced a number of new features in the last two months based on two high level goals of improving discussion dynamics and helping users find and engage with high quality content relevant to their interests. Highlights include:Reactions on commentsImproved search and the ability to sort search resultsA page to find posts you've saved, read, and voted onA feature that nudges post authors to consider running criticism past the people being criticizedThe "quick takes" section on the FrontpageAnd more!Reactions on commentsWe're launching a set of icon reactions with the goals of giving users a richer way to participate in discussions and an easier way to give positive feedback. We outlined our goals for this feature in detail here, and appreciate everyone who gave feedback on our design questions. We decided to launch a trimmed down set of reactions at the comment level and merge the experience with agree/disagree voting. For now this will only be applied to new posts, but we may update old posts to use the new reactions voting system in the future. Here's what the new experience looks like:A few notes:You can no longer strong agree or disagree.Agree/disagree reactions (and regular upvoting/downvoting) continue to be anonymous, while other reactions will be non-anonymous.We're starting out with this feature at the comment level, and plan to consider expanding it to the post level after we see how it goes. (We may also roll back reactions depending on how this test goes.)Better tools for finding that post you've been looking forSee your saved posts, read history, and vote historyWe've updated the saved & read page (formerly known as bookmarks) to show your saved posts, read history, and your vote history.Updated search, including sorting optionsWe've updated the search functionality to include the ability to sort by relevance, karma, and date. We also changed the entire backend infrastructure that powers search, so please let us know if you see surprising results!Nudging users to share criticismWe've developed the opinion that criticism is often more productive when shared with the people being criticized before it is published, and that some people who publish criticism would have run it past people if they'd thought of it. We were worried that heavy-handed interventions might deter useful critical writing, so have implemented small nudge on the draft page; a "tip" about this that appears on drafts of criticism and shares a link that might be useful. This is powered by a model that identifies posts as "criticism."Updates to post pagesWe've made a number of changes to post pages, and are still experimenting.Recommendations on postsAs mentioned here, we're experimenting with better ways to help readers find high quality posts relevant to their interests. Authors often tell us they feel disappointed when their posts get visibility for just a few days on the Frontpage and then drop off into oblivion, after tens or hundreds of hours of work. We've been experimenting with recommendations on post pages first, and may also consider personalized recommendations on the Frontpage. Our first experiments with recommendations at the bottom of post pages were promising, but we realized most users don't get to the bottom of pages, and are experimenting with showing them on the right side of post pages. We plan to monitor both user feedback and click rates to decide how to proceed.Updated post headerWe updated the headers of post pages to create more space for post titles and separate information and icons from post tags, and add a more familiar way to share posts.Reading completion bar ...

]]>
Sharang Phadke https://forum.effectivealtruism.org/posts/7Tvc3N7nFa32ekuGA/forum-feature-update-reactions-improved-search-updated-post Thu, 20 Jul 2023 14:25:47 +0000 EA - Forum feature update: reactions, improved search, updated post pages and more (July 2023) by Sharang Phadke Sharang Phadke https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:21 no full 6643
W49d9pvg7CuNvbntu EA - Social Beneficence (Jacob Barrett) by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Social Beneficence (Jacob Barrett), published by Global Priorities Institute on July 20, 2023 on The Effective Altruism Forum.(This working paper was published in September 2022.)AbstractA background assumption in much contemporary political philosophy is that justice is the first virtue of social institutions, taking priority over other values such as beneficence. This assumption is typically treated as a methodological starting point, rather than as following from any particular moral or political theory. In this paper, I challenge this assumption. To frame my discussion, I argue, first, that justice doesn't in principle override beneficence, and second, that justice doesn't typically outweigh beneficence, since, in institutional contexts, the stakes of beneficence are often extremely high. While there are various ways one might resist this argument, none challenge the core methodological point that political philosophy should abandon its preoccupation with justice and begin to pay considerably more attention to social beneficence - that is, to beneficence understood as a virtue of social institutions.Along the way, I also highlight areas where focusing on social beneficence would lead political philosophers in new and fruitful directions, and where normative ethicists focused on personal beneficence might scale up their thinking to the institutional case.I.Justice is the first virtue of social institutions, as truth is of systems of thought. A theory however elegant and economical must be rejected or revised if it is untrue; likewise laws and institutions no matter how efficient and well-arranged must be reformed or abolished if they are unjust. The only thing that permits us to acquiesce in an erroneous theory is the lack of a better one; analogously, an injustice is tolerable only when it is necessary to avoid an even greater injustice. Being first virtues of human activities, truth and justice are uncompromising. These propositions seem to express our intuitive conviction of the primacy of justice. No doubt they are expressed too strongly.John Rawls, A Theory of Justice, 4A background assumption in much contemporary political philosophy is that justice takes priority over beneficence. When evaluating social and political institutions, or thinking through questions of institutional design or reform, we should focus primarily on justice. This assumption is often associated with various further ideas, such as that justice but not beneficence is enforceable, that justice but not beneficence concerns rights, or that justice involves perfect duties but beneficence only imperfect ones. It is also typically assumed that justice is institutional, while beneficence is personal. There is much talk of social justice, and some talk of justice as a personal virtue, but, for the most part, we talk only of personal beneficence - not social beneficence.This phenomenon extends beyond the academy. A similar concern with justice permeates our political discourse. Justice operates as a conversation stopper. If the status quo is unjust, this is taken as an almost conclusive argument against the status quo; if some policy promotes justice, this is taken as an almost conclusive argument in favor of the policy. In both political philosophy and everyday political discourse, we do, of course, recognize exceptions to this rule. In the face of a serious disaster, we may need to override justice - we shouldn't really let justice be done though the heavens fall. But these exceptions are generally assumed to be rare - the heavens are only seldom falling. For the most part, then, contemporary political philosophy and discourse follows John Rawls's statement in the above epigraph. It operates with an "intuitive conviction of the primacy of justice," albeit, one that is sometimes "expre...

]]>
Global Priorities Institute https://forum.effectivealtruism.org/posts/W49d9pvg7CuNvbntu/social-beneficence-jacob-barrett Thu, 20 Jul 2023 11:24:18 +0000 EA - Social Beneficence (Jacob Barrett) by Global Priorities Institute Global Priorities Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:59 no full 6645
o5vJ9aNvc7twdqDxv EA - The mental health challenges that come with trying to have a big impact (Hannah Boettcher on the 80k After Hours Podcast) by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The mental health challenges that come with trying to have a big impact (Hannah Boettcher on the 80k After Hours Podcast), published by 80000 Hours on July 20, 2023 on The Effective Altruism Forum.We just published an interview: Hannah Boettcher on the mental health challenges that come with trying to have a big impact. You can click through for the audio, a full transcript, and related links. Below are the episode summary and some key excerpts.Episode summaryWe're in a universe where tradeoffs exist, we have finite resources, we have multiple things we care about, and we have incomplete information. So we have to make guesses and take risks - and that hurts. So I think self-compassion and acceptance come in here, like, "Damn, I so am wishing this were not the case, and by golly, it looks like it still is."And then I think that it's a matter of recognising that we aren't going to score 100% on any unitary definition of "rightness." And then recognise that, "Well, I could just look at that and stall out forever, or I could make some moves." And probably making moves is preferable to stalling out.Hannah BoettcherIn this episode of 80k After Hours, Luisa Rodriguez and Hannah Boettcher discuss 4 different kinds of therapy, and how to use them in practice - focusing specifically on people trying to have a big impact.They cover:The effectiveness of therapy, and tips for finding a therapistMoral demandingnessInternal family systems-style therapyMotivation and burnoutExposure therapyGrappling with world problems and x-riskPerfectionism and imposter syndromeAnd the risk of over-intellectualisingWho this episode is for:High-impact focused people who struggle with moral demandingness, perfectionism, or imposter syndromePeople who feel anxious thinking about the end of the world80,000 Hours Podcast hosts with the initials LRWho this episode isn't for:People who aren't focused on having a big impactPeople who don't struggle with any mental health issuesFounders of Scientology with the initials LRHGet this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript below.Producer: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Dominic ArmstrongContent editing: Katy Moore, Luisa Rodriguez, and Keiran HarrisTranscriptions: Katy Moore"Gershwin - Rhapsody in Blue, original 1924 version" by Jason Weinberger is licensed under creative commonsHighlightsWhat makes therapy more or less effective?Hannah Boettcher: So broadly speaking, we have known for a long time, and it's not controversial, that psychotherapy is efficacious and effective - so under control settings and less controlled settings. And in meta-analytic evidence, the effect size of psychotherapy is approximately 0.8 in comparison to no treatment - and that's conventionally considered a large effect size. Another way to say this would be that in comparison to not getting therapy, getting therapy explains 14%ish of the outcomes in randomised controlled trials. 14% might not sound great, depending on what your priors are, but this is actually really good for healthcare: it's on par with or better than effects of medications, both psychiatric and medically, and it's superior to plenty of medical interventions that are considered effective.Luisa Rodriguez: What do we know about how effective different types of therapy are? Are those compared in studies?Hannah Boettcher: Yes, these are definitely compared head to head. And something that's really interesting here is that specific treatment ingredients or therapeutic "modalities" are actually not strong predictors of better outcomes in psychotherapy.Luisa Rodriguez: Right, this is a thing I've actually heard before, and it bas...

]]>
80000_Hours https://forum.effectivealtruism.org/posts/o5vJ9aNvc7twdqDxv/the-mental-health-challenges-that-come-with-trying-to-have-a Thu, 20 Jul 2023 08:49:39 +0000 EA - The mental health challenges that come with trying to have a big impact (Hannah Boettcher on the 80k After Hours Podcast) by 80000 Hours 80000_Hours https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:41 no full 6644
aTSoxTcSjyBWem3Xz EA - EA Survey 2022: How People Get Involved in EA by WillemSleegers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Survey 2022: How People Get Involved in EA, published by WillemSleegers on July 21, 2023 on The Effective Altruism Forum.SummaryPersonal contact (22.6%), 80,000 Hours (13.5%), and a book, article, or blog post (13.1%) are the most common sources where respondents first hear about EA.80,000 Hours, local or university EA groups, personal contacts, and podcasts have become more common as sources of where respondents first encounter EA.Facebook, Giving What We Can, LessWrong, Slate Star Codex / Astral Codex Ten, The Life You Can Save, and GiveWell have become less common.Respondents whose gender selection was 'woman', 'non-binary', or 'prefer to self-describe', were much more likely to have first heard of EA via a personal contact (30.2%) compared to respondents whose gender selection was 'man' (18.4%).80,000 Hours (58.0%), personal contact with EAs (44.0%), and EA groups (36.8%) are the most common factors important for getting involved in EA.80,000 Hours, EA Groups, and EAGx have been increasing in importance over the last years.EA Global, personal contact with EAs, and the online EA community saw a noticeable increase in importance for helping EAs get involved between 2020 and 2022.Personal contact with EAs, EA groups, the online EA community, EA Global, and EAGx stand out as being particularly important among highly engaged respondents for getting involved.Respondents who identified as non-white, as well as women, non-binary, and respondents who preferred to self-describe, were generally more likely to select factors involving social contact with EAs (e.g., EA group, EAGx) as important.Where do people first hear about EA?Personal contacts continue to be the most common place where people first hear about EA (22.6%), followed by 80,000 Hours (13.5%) and a book, article, or blog post (13.1%).Comparison across all yearsThe plot below shows changes in where people report first hearing of EA across time (since we ran the first EA Survey in 2014).We generally observe that the following routes into EA have been increasing in importance over time:80,000 HoursLocal or university EA groupsPersonal contactsPodcastsAnd the following sources have been decreasing in importance:FacebookGiving What We CanLessWrongSlate Star Codex / Astral Codex TenThe Life You Can SaveGiveWellComparison across cohortsSeveral of the patterns observed in the previous section are also observed when we look at where different cohorts of EA respondents first encountered EA. We see that more recent cohorts are more likely to have encountered EA via 80,000 Hours and podcasts, and are less likely to have encountered EA via Giving What We Can, LessWrong, and GiveWell. No clear cohort effects were observed for other sources. Note that the figure below omits categories with few observations (e.g., EA Global, EAGx).Further DetailsWe asked respondents to provide further details about their responses, and provide a breakdown for some of the larger categories. Details of other categories are available on request.80,000 HoursThe largest number of respondents who first heard of EA through 80,000 Hours reported doing so through an independent search, e.g., they were searching online for "ethical careers" and found 80,000 Hours. The second largest category was via the website (which is potentially closely related, i.e., contact with the website resulting from independent search).Relatively much smaller proportions mentioned reaching 80,000 Hours through other categories, including more active outreach (e.g., advertisements).Books, articles, and blogsA book was cited as the most common source of encountering EA when considering the category of books, articles, and blogs.BooksBooks by Peter Singer were by far the most frequently cited books, followed by Doing Good Better by Willia...

]]>
WillemSleegers https://forum.effectivealtruism.org/posts/aTSoxTcSjyBWem3Xz/ea-survey-2022-how-people-get-involved-in-ea Fri, 21 Jul 2023 21:54:54 +0000 EA - EA Survey 2022: How People Get Involved in EA by WillemSleegers WillemSleegers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:08 no full 6655
74CkwGxmXaevwzhNG EA - Linkpost: 7 A.I. Companies Agree to Safeguards After Pressure From the White House by MHR Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: 7 A.I. Companies Agree to Safeguards After Pressure From the White House, published by MHR on July 21, 2023 on The Effective Altruism Forum.The NYT just released a breaking news piece regarding an agreement on AI safeguards. It's hard to tell exactly how useful the proposed measures will be, but it seems like a promising step.Seven leading A.I. companies in the United States have agreed to voluntary safeguards on the technology's development, the White House announced on Friday, pledging to strive for safety, security and trust even as they compete over the potential of artificial intelligence.The seven companies - Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI - will formally announce their commitment to the new standards at a meeting with President Biden at the White House on Friday afternoon.The voluntary safeguards announced on Friday are only an early step as Washington and governments across the world put in place legal and regulatory frameworks for the development of artificial intelligence. White House officials said the administration was working on an executive order that would go further than Friday's announcement and supported the development of bipartisan legislation.As part of the agreement, the companies agreed to:Security testing of their A.I. products, in part by independent experts and to share information about their products with governments and others who are attempting to manage the risks of the technology.Ensuring that consumers are able to spot A.I.-generated material by implementing watermarks or other means of identifying generated content.Publicly reporting the capabilities and limitations of their systems on a regular basis, including security risks and evidence of bias.Deploying advanced artificial intelligence tools to tackle society's biggest challenges, like curing cancer and combating climate change.Conducting research on the risks of bias, discrimination and invasion of privacy from the spread of A.I. tools.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
MHR https://forum.effectivealtruism.org/posts/74CkwGxmXaevwzhNG/linkpost-7-a-i-companies-agree-to-safeguards-after-pressure Fri, 21 Jul 2023 21:12:49 +0000 EA - Linkpost: 7 A.I. Companies Agree to Safeguards After Pressure From the White House by MHR MHR https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:58 no full 6654
9btFvtGwkufZpC7Yu EA - Australians call for AI safety to be taken seriously by AlexanderSaeri Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Australians call for AI safety to be taken seriously, published by AlexanderSaeri on July 21, 2023 on The Effective Altruism Forum.The Australian Government is considering how to regulate AI in Australia, has published a discussion paper ("Safe and Responsible AI"), and has invited feedback by 26 July 2023:"We want your views on how the Australian Government can mitigate any potential risks of AI and support safe and responsible AI practices."Good Ancestors Policy (goodancestors.org.au/policy), with the support of EA and AI Safety community organisers in Australia, have coordinated Australians' submissions to the feedback process.Today, the website Australians for AI Safety launched with a co-signed letter (media release). The letter called on the relevant Australian Federal Minister, Ed Husic, to take AI safety seriously by:recognising the catastrophic and existential risksaddressing uncertain but catastrophic risks alongside other known risksworking with the global community on international governancesupporting research into AI safetyGood Ancestors Policy have also held community workshops across Australia (e.g., Brisbane, Perth) to support members of the EA and AI Safety community in understanding the feedback process and preparing submissions, including access to some of the best evidence and arguments for acknowledging and addressing risks from AI. Policy ideas are drawn from the Global Catastrophic Risk Policy database (), the AI Policy Ideas database (aipolicyideas.com), and expert community input.So far, about 50 members of the community have attended a workshop, and feedback we've received is that the workshops have been very helpful, the majority (~75% people) are likely or very likely (>80% likelihood) to make a submission, and that most (~70% people) would be unlikely or very unlikely (<20% likelihood) to have made a submission without the workshop.If you're an Australian living in Australia or overseas, and you'd like to make a submission to this process, there is one more online community workshop on Saturday 22 July at 3pm AEST (UTC+10). Register here for the workshop!Contact Greg Sadler (greg@goodancestors.org.au) or Alexander Saeri (alexander@goodancestors.org.au) if you'd like to stay involved.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
AlexanderSaeri https://forum.effectivealtruism.org/posts/9btFvtGwkufZpC7Yu/australians-call-for-ai-safety-to-be-taken-seriously 80% likelihood) to make a submission, and that most (~70% people) would be unlikely or very unlikely (<20% likelihood) to have made a submission without the workshop.If you're an Australian living in Australia or overseas, and you'd like to make a submission to this process, there is one more online community workshop on Saturday 22 July at 3pm AEST (UTC+10). Register here for the workshop!Contact Greg Sadler (greg@goodancestors.org.au) or Alexander Saeri (alexander@goodancestors.org.au) if you'd like to stay involved.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Fri, 21 Jul 2023 20:52:05 +0000 EA - Australians call for AI safety to be taken seriously by AlexanderSaeri 80% likelihood) to make a submission, and that most (~70% people) would be unlikely or very unlikely (<20% likelihood) to have made a submission without the workshop.If you're an Australian living in Australia or overseas, and you'd like to make a submission to this process, there is one more online community workshop on Saturday 22 July at 3pm AEST (UTC+10). Register here for the workshop!Contact Greg Sadler (greg@goodancestors.org.au) or Alexander Saeri (alexander@goodancestors.org.au) if you'd like to stay involved.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> 80% likelihood) to make a submission, and that most (~70% people) would be unlikely or very unlikely (<20% likelihood) to have made a submission without the workshop.If you're an Australian living in Australia or overseas, and you'd like to make a submission to this process, there is one more online community workshop on Saturday 22 July at 3pm AEST (UTC+10). Register here for the workshop!Contact Greg Sadler (greg@goodancestors.org.au) or Alexander Saeri (alexander@goodancestors.org.au) if you'd like to stay involved.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> AlexanderSaeri https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:31 no full 6656
DNm5sbFogr9wvDasH EA - Thoughts on yesterday's UN Security Council meeting on AI by Greg Colbourn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on yesterday's UN Security Council meeting on AI, published by Greg Colbourn on July 22, 2023 on The Effective Altruism Forum.Firstly, it's encouraging that AI is being discussed as a threat at the highest global body dedicated to ensuring global peace and security. This seemed like a remote possibility just 4 months ago.However, throughout the meeting, (possibly near term) extinction risk from uncontrollable superintelligent AI was the elephant in the room. ~1% air time, when it needs to be ~99%, given the venue and its power to stop it. Let's hope future meetings improve on this. Ultimately we need the UNSC to put together a global non-proliferation treaty on AGI, if we are to stand a reasonable chance of making it out of this decade alive.There was plenty of mention of using AI for peacekeeping. However, this seems naive in light of the offence-defence asymmetry facilitated by generative AI (especially when it comes to threats like bio-terror/engineered pandemics, and cybercrime/warfare). And in the limit of outsourcing intelligence gathering and strategy recommendations to AI (whist still keeping a human in the loop), you get scenarios like this.Highlights:China mentioned Pause: "The international community needs to. ensure that risks beyond human control don't occur. We need to strengthen the detection and evaluation of the entire lifecycle of AI, ensuring that mankind has the ability to press the pause button at critical moments". (Zhang Jun, representing China at the UN Security Council meeting on AI))Mozambique mentioned the Sorcerer's Apprentice, human loss of control, recursive self-improvement, accidents, catastrophic and existential risk: "In the event that credible evidence emerges indicating that AI poses and existential risk, it's crucial to negotiate an intergovernmental treaty to govern and monitor its use." (MANUEL GONÇALVES, Deputy Minister for Foreign Affairs of Mozambique, at the UN Security Council meeting on AI)(A bunch of us protesting about this outside the UK Foreign Office last week.)(PauseAI's comments on the meeting on Twitter.)(Discussion with Jack Clark on Twitter re his lack of mention of x-risk. Note that the post war atomic settlement - Baruch Plan - would probably have been quite different if the first nuclear detonation was assessed to have a significant chance of igniting the entire atmosphere!)(My Tweet version of this post. I'm Tweeting more as I think it's time for mass public engagement on AGI x-risk.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Greg_Colbourn https://forum.effectivealtruism.org/posts/DNm5sbFogr9wvDasH/thoughts-on-yesterday-s-un-security-council-meeting-on-ai Sat, 22 Jul 2023 21:31:56 +0000 EA - Thoughts on yesterday's UN Security Council meeting on AI by Greg Colbourn Greg_Colbourn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:29 no full 6662
8NgFWTk2puBG5eSJA EA - FTX Foundation wanted to buy the nation of Nauru to save EAs' lives by Eugenics-Adjacent Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Foundation wanted to buy the nation of Nauru to save EAs' lives, published by Eugenics-Adjacent on July 21, 2023 on The Effective Altruism Forum.I thought this was interesting. I'd love to see a debate on whether this is a healthy line of thinking, and something we're glad the public knows about us now.The court filings in a federal bankruptcy court in Delaware, dated July 20, included a memo crafted by an FTX Foundation official and Sam Bankman-Fried's brother Gabriel Bankman-Fried. It outlined the future survival of FTX and Alameda Research employees and all those who subscribed to the effective altruism concept.The ultimate strategy, according to the memo, was "to purchase the sovereign nation of Nauru in order to construct a 'bunker / shelter' that would be used for some event where 50%-99.99% of people die [to] ensure that most EAs (effective altruists) survive."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Eugenics-Adjacent https://forum.effectivealtruism.org/posts/8NgFWTk2puBG5eSJA/ftx-foundation-wanted-to-buy-the-nation-of-nauru-to-save-eas Fri, 21 Jul 2023 17:00:29 +0000 EA - FTX Foundation wanted to buy the nation of Nauru to save EAs' lives by Eugenics-Adjacent Eugenics-Adjacent https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:06 no full 6661
pwPbQPv8EL9piJRfa EA - EffectiveAltruismData.com is now a spreadsheet by Hamish Doodles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EffectiveAltruismData.com is now a spreadsheet, published by Hamish Doodles on July 23, 2023 on The Effective Altruism Forum.A few years ago I built EffectiveAltruismData.com which looked like this:A few people told me they liked the web app. Some even said they found it useful, especially the bits that made the funding landscape more legible.But anyway, I never got around to automating the data scraping, and the website ended up hopelessly out of date.So I killed it.But it recently occurred to me that I could do the data scraping, data aggregation, and data visualisation, all within Google Sheets.So with a bit of help from Chatty G, I put together a spreadsheet which:Downloads the latest grant data from the Open Philanthropy website every 24 hours (via Google Apps Scripts).Aggregate funding by cause areaAggregate funding by organization.Visualise all grant data in a pivot table that lets you expand/aggregate by Cause Area, then Organization Name, then individual grantsBut note that expanding/collapsing counts as editing the spreadsheet, so you'll have to make a copy to be able to do this.You can also change the scale of the bar chart using the dropdownAnd you can sort grants by size or by date using the "Sort Sheet Z to A" option on the Amount or Date columns.Here's a link to the spreadsheet. You can also find it at www.effectivealtruismdata.com.Other funding legibility projectsHere's another thing I made. It gives time series and cumulative bar charts for funding based on funder and cause area. You can hover over points on the time series to get the total funding per cause/org per year.The data comes from this spreadsheet by TylerMaule.Another thing which may be of interest is openbook.fyi by Rachel Weinberg & Austin Chen, which let's you search/view individual grants from a range of EA-flavoured sources.Openbook gets its data from donations.vipulnaik.com/ by Vipul Naik.I'm currently working on another spreadsheet which scrapes, aggregates, and visualises all the Vipul Naiks's data.Feedback & RequestsI enjoy working on spreadsheets and data viz and stuff. Let me know if you can think of any other stuff in this area which would be useful.This is a joke.This is also a joke.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Hamish Doodles https://forum.effectivealtruism.org/posts/pwPbQPv8EL9piJRfa/effectivealtruismdata-com-is-now-a-spreadsheet Sun, 23 Jul 2023 10:58:38 +0000 EA - EffectiveAltruismData.com is now a spreadsheet by Hamish Doodles Hamish Doodles https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:23 no full 6668
9sZic9oNLNPrTDuNt EA - Biosecurity Resource Hub from Aron by Aron Lajko Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Biosecurity Resource Hub from Aron, published by Aron Lajko on July 23, 2023 on The Effective Altruism Forum.I have been thinking about biosecurity a lot! Here is my Resource Hub!Executive Summary: An Introduction to Biosecurity: Navigating the Risks, Responses, and Future StrategiesAim of the document:To provide an overview of GCBRs and summarise the projects and ideas people could get involved with.To help me overview the field, find low-hanging fruits, and identify a suitable career fit.To assess where I could start a startup in the field.Get in touch:If you would like to get in touch, please email at lajkoaron@gmail.com or DM on Twitter at @AronLajko. Feedback is welcome, a form to submit is attached below!Summaries: My Summary and Hot Takes on Historical Cases of Biological Risks - AronA summary on UVC lamps - AronNote:Summaries under the articles were often generated or semi-generated using Chat-GPT4. This is a live document, I might add a review/summary on each section to make it easier for others to digest!Disclaimer:This is a personal project that is not sponsored by anyone. If you would like to sponsor or accelerate the development of this public Resource Hub or you would like to collaborate on a project please get in touch.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Aron Lajko https://forum.effectivealtruism.org/posts/9sZic9oNLNPrTDuNt/biosecurity-resource-hub-from-aron Sun, 23 Jul 2023 10:39:49 +0000 EA - Biosecurity Resource Hub from Aron by Aron Lajko Aron Lajko https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:28 no full 6669
CmvqwYfuzR6E5HPfd EA - Could someone help me understand why it's so difficult to solve the alignment problem? by Jadon Schmitt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Could someone help me understand why it's so difficult to solve the alignment problem?, published by Jadon Schmitt on July 24, 2023 on The Effective Altruism Forum.AGI will be able to model human langauge and psychology very accurately. Given that, wouldn't alignment be easy if you trained the AGI to interpret linguistic prompts in the way that the "average" human would? (I know language doesn't encode an exact meaning, but for any chunk of text, there does exist a distribution of ways that humans interpret it.)Thus, on its face, inner alignment seems fairly doable. But apparently, according to RobBesinger, "We don't know how to get an AI system's goals to robustly 'point at' objects like 'the American people' ... [or even] simpler physical systems." Why is this so difficult? Is there an argument that it is impossible?Outer alignment doesn't seem very difficult to me, either. Here's a prompt I thought of: "Do not do an action if anyone in a specified list of philosophers, intellectuals, members of the public, etc. would prefer you not do it, if they had all relevant knowledge of the action and its effects beforehand, consistent with the human legal standard of informed consent." Wouldn't this prompt (in its ideal form, not exactly as I wrote it) guard against many bad actions, including power-seeking behavior?Thank you for the help!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Jadon Schmitt https://forum.effectivealtruism.org/posts/CmvqwYfuzR6E5HPfd/could-someone-help-me-understand-why-it-s-so-difficult-to Mon, 24 Jul 2023 21:07:15 +0000 EA - Could someone help me understand why it's so difficult to solve the alignment problem? by Jadon Schmitt Jadon Schmitt https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:26 no full 6682
YGsojZYtEsj2A3PjZ EA - Who's right about inputs to the biological anchors model? by rosehadshar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who's right about inputs to the biological anchors model?, published by rosehadshar on July 24, 2023 on The Effective Altruism Forum.In this post, I compared forecasts from Ajeya Cotra and from forecasters in the Existential Risk Persuasion Tournament (XPT) relating to some of the inputs to Cotra's biological anchors model.Here, I give my personal take on which of those forecasts seem more plausible.Note that:I'm only considering the inputs to the bio anchors model which we have XPT forecasts for. This notably excludes the 2020 training requirements distribution, which is a very important driver of model outputs.My take is based on considering the explicit arguments that Cotra and the XPT forecasters gave, rather than on independent research.My take is subjective.I've been working with the Forecasting Research Institute (who ran the XPT) since November 2022, and this is a potential source of bias.I'm publishing this post in a personal capacity and it hasn't gone through FRI's review process.I originally wrote this early in 2023. I've tried to update it as new information came out, but I likely haven't done a comprehensive job of this.To recap, here are the relevant forecasts:See workings here and here. The 'most aggressive' and 'most conservative' forecasts can be considered equivalent to 90% confidence intervals for the median estimate.HardwareFor FLOP/$ in 2025, I think both Cotra and the XPGT forecasters are wrong, but Cotra will prove more right.Epoch's current estimate of highest GPU price-performance is 4.2e18 FLOP per $.They also find a trend in GPU price-performance of 0.1 OOM/year for state of the art GPUs. So I'll extrapolate 4.2e18 to 5.97E+18.For compute price halving time to 2100, I think it depends how likely you think it is that novel technologies like optical computing will reduce compute prices in future.This is the main argument Cotra puts forward for expecting such low prices.It's an argument made in XPT too, but less weight is put on it.Counterarguments given in XPT: fundamental physical limits, progress getting harder, rare materials capping how much prices can drop, catastrophe/extinction, optimisation shifting to memory architectures.Cotra mentions some but not all of these (she doesn't mention rare materials or memory architectures).Cotra flags that she thinks after 2040 her forecasts on this are pretty unreliable.But, because of how wrong their 2024 and 2030 forecasts seem to be, I'm not inclined to put much weight on XPT forecasts here either.I'll go with the most aggressive XPT figure, which is close to Cotra's.I don't have an inside view on the likelihood of novel technologies causing further price drops.Note that the disagreement about compute price halving times drives a lot of the difference in model output.Willingness to spendOn the most expensive training run by 2025, I think Cotra is a bit too aggressive and XPT forecasters are much too conservative.In 2022, Cotra updated downwards a bit on the likelihood of a $1bn training run by 2025.There isn't much time left for Cotra to be right.Cotra was predicting $20m by the end of 2020, and $80m by the end of 2021.GPT-3 was $4.6m in 2020. If you buy that unreleased proprietary models are likely to be 2-8x more expensive than public ones (which Cotra argues), that XPT forecasters missed this consideration, and that GPT-3 isn't proprietary and/or unreleased (flagging because I'm unsure what Cotra actually means by proprietary/unreleased), then this could be consistent with Cotra's forecasts.Epoch estimates that GPT-4 cost $50m to train at some point in 2022.Again, this could be in line with Cotra's predictions.More importantly, GPT-4 costs make XPT forecasters look quite wrong already - their 2024 prediction was surpassed in 2022. This is especially striking i...

]]>
rosehadshar https://forum.effectivealtruism.org/posts/YGsojZYtEsj2A3PjZ/who-s-right-about-inputs-to-the-biological-anchors-model Mon, 24 Jul 2023 15:59:40 +0000 EA - Who's right about inputs to the biological anchors model? by rosehadshar rosehadshar https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:42 no full 6678
vHimiTK8qQLzeDHmN EA - What we learned from training EA community members in facilitation skills by AlexanderSaeri Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What we learned from training EA community members in facilitation skills, published by AlexanderSaeri on July 24, 2023 on The Effective Altruism Forum.SummaryWe designed and delivered online facilitator training to engaged EAs in Australia and New Zealand. Participants included community leaders and staff at EA organisations.The training course covered key elements of facilitation, including asking questions, active listening, planning and preparation, reading the room, and managing participants.We conducted an evaluation of the course, which found that participants were very confident the course was very helpful for their facilitation practice, and identified the practical exercises with rapid reflection and feedback as key components that made the course valuable for them.We believe that facilitation training, implemented effectively, could assist community building in Effective Altruism by improving learning, group discussions, and successful events.We were motivated to develop and deliver facilitation training because it's an important 'soft skill' in community buildingOur motivationWe are two academics with extensive experience in teaching, training, and facilitation, who also are engaged in the Effective Altruism community in Australia. After working with the team at BlueDot Impact early in 2023 to revise their training for course facilitators, we thought that a similar approach could be useful for community leaders and community-facing staff at EA organisations.We think that facilitation in particular is an important "soft skill" in supporting learning, productive group interactions, and successful events. Facilitation is distinct from instructing, managing, or event planning.While lecturing is "sage on the stage", facilitation is "guide by the side". We think that a lot of EA community building involves being a 'guide by the side', consistent with valuing autonomy, critical thinking, and supporting the development of intrinsic motivations to act in alignment with EA values.Some of the specific skills involved in being this 'guide' include asking good questions, helping people feel like their contributions were heard, navigating group dynamics and energy, adapting to unexpected situations, and managing participant disagreement or conflict. A good facilitator clarifies and advances the shared purpose of the group.Participants' motivationWe sought interest from prospective participants in Australia and New Zealand with an identified need for improving their facilitation practice. 14 people expressed interest and we invited 11 to participate. We asked participants to complete a pre-survey to help us understand their specific needs for facilitation practice. Some of the needs that participants expressed:Improving skills in leading engaging discussions. For example, they wanted to be able to draw people into conversations, ask the right questions to encourage reflection (e.g., in a reading group, or in an EA introductory fellowship), and guiding discussions towards topics of interest to participants (e.g., managing vocal or dominant members, or recurring but unproductive topics of discussion).Developing empathy and a better understanding of group dynamics. For example, they wanted to be able to pick up on emotional states and discomfort (so it could be addressed), identify and resolve blocks for discussion or decision-making, or encourage participants to be active learners or otherwise promote practical action from attendees.Improving enjoyment, inclusion, and engagement at planned events. For example, they held frequent networking events, community meetups, workshops, working bees, or other meetings with community members.We developed and delivered an online facilitation training course using Miro, Zoom, and Google DocsDevelopment ProcessWe pre...

]]>
AlexanderSaeri https://forum.effectivealtruism.org/posts/vHimiTK8qQLzeDHmN/what-we-learned-from-training-ea-community-members-in Mon, 24 Jul 2023 14:17:42 +0000 EA - What we learned from training EA community members in facilitation skills by AlexanderSaeri AlexanderSaeri https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:43 no full 6679
D4khSueGA4Trebkks EA - [Crosspost] An AI Pause Is Humanity's Best Bet For Preventing Extinction (TIME) by Otto Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost] An AI Pause Is Humanity's Best Bet For Preventing Extinction (TIME), published by Otto on July 25, 2023 on The Effective Altruism Forum.Otto Barten is director of the Existential Risk Observatory, a nonprofit aiming to reduce existential risk by informing the public debate.Joep Meindertsma is founder of PauseAI, a movement campaigning for an AI Pause.The existential risks posed by artificial intelligence (AI) are now widely recognized.After hundreds of industry and science leaders warned that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the U.N. Secretary-General recently echoed their concern. So did the prime minister of the U.K., who is also investing 100 million pounds into AI safety research that is mostly meant to prevent existential risk. Other leaders are likely to follow in recognizing AI's ultimate threat.In the scientific field of existential risk, which studies the most likely causes of human extinction, AI is consistently ranked at the top of the list. In The Precipice, a book by Oxford existential risk researcher Toby Ord that aims to quantify human extinction risks, the likeliness of AI leading to human extinction exceeds that of climate change, pandemics, asteroid strikes, supervolcanoes, and nuclear war combined. One would expect that even for severe global problems, the risk that they lead to full human extinction is relatively small, and this is indeed true for most of the above risks. AI, however, may cause human extinction if only a few conditions are met. Among them is human-level AI, defined as an AI that can perform a broad range of cognitive tasks at least as well as we can. Studies outlining these ideas were previously known, but new AI breakthroughs have underlined their urgency: AI may be getting close to human level already.Recursive self-improvement is one of the reasons why existential-risk academics think human-level AI is so dangerous. Because human-level AI could do almost all tasks at our level, and since doing AI research is one of those tasks, advanced AI should therefore be able to improve the state of AI. Constantly improving AI would create a positive feedback loop with no scientifically established limits: an intelligence explosion. The endpoint of this intelligence explosion could be a superintelligence: a godlike AI that outsmarts us the way humans often outsmart insects. We would be no match for it.A godlike, superintelligent AIA superintelligent AI could therefore likely execute any goal it is given. Such a goal would be initially introduced by humans, but might come from a malicious actor, or not have been thought through carefully, or might get corrupted during training or deployment. If the resulting goal conflicts with what is in the best interest of humanity, a superintelligence would aim to execute it regardless. To do so, it could first hack large parts of the internet and then use any hardware connected to it. Or it could use its intelligence to construct narratives that are extremely convincing to us. Combined with hacked access to our social media timelines, it could create a fake reality on a massive scale. As Yuval Harari recently put it: "If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away - or even realise is there."As a third option, after either legally making money or hacking our financial system, a superintelligence could simply pay us to perform any actions it needs from us. And these are just some of the strategies a superintelligent AI could use in order to achieve its goals. There are likely many more. Like playing chess against grandmaster Magnus Carlsen, we cannot predict the moves he will play, but we can predict the outcome: we los...

]]>
Otto https://forum.effectivealtruism.org/posts/D4khSueGA4Trebkks/crosspost-an-ai-pause-is-humanity-s-best-bet-for-preventing Tue, 25 Jul 2023 21:37:22 +0000 EA - [Crosspost] An AI Pause Is Humanity's Best Bet For Preventing Extinction (TIME) by Otto Otto https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:28 no full 6693
EHTynQaSN8ubjCbm9 EA - How much is reducing catastrophic and extinction risk worth, assuming XPT forecasts? by rosehadshar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much is reducing catastrophic and extinction risk worth, assuming XPT forecasts?, published by rosehadshar on July 25, 2023 on The Effective Altruism Forum.This is a post I drafted some months ago, in the course of analysing some XPT data and reading Shulman and Thornley. It's not very sophisticated, I haven't checked the workings, and I haven't polished the language; but I'm posting anyway because that seems better than not posting. Note that it's a personal take and doesn't represent FRI's views.Thanks to Josh Rosenberg at FRI and Elliot Thornley for help and comments.BLUF: if you make a bunch of assumptions, then even quite low absolute risk forecasts like the XPT ones imply quite high spending on reducing GCRs, conditional on there being sufficiently cost-effective ways to do so.In 2022, what has become the Forecasting Research Institute ran the Existential Risk Persuasion Tournament (XPT). Over 200 forecasters, including superforecasters and domain experts, spent 4 months making forecasts on various questions related to existential and catastrophic risk.You can see the results from the tournament overall here, and a discussion of the XPT AI risk forecasts in particular here.These are the main XPT forecasts on catastrophic and extinction risk:Biological--Engineered pathogens--Natural pathogens--AI (superforecasters)AI (domain experts)NuclearNon-anthropogenicTotal catastrophic riskBiological--Engineered pathogens--Natural pathogens--AI (superforecasters)AI (domain experts)NuclearNon-anthropogenicTotal extinction risk203020502100Catastrophic risk (>10% of humans die in 5 years)1.8%0.8%1%0.01%0.73%2.13%0.35%5%12%0.50%1.83%4%0.0026%0.015%0.05%0.85%3.85%9.05%Extinction risk (human population <5000)0.012%0.01%0.0018%0.0001%0.03%0.38%0.02%1.1%3%0.001%0.01%0.074%0.0004%0.0014%0.0043%0.01%0.3%1%If we take these numbers at face value, how much is catastrophic and extinction risk reduction worth?One approach is to take the XPT forecasts, convert them into deaths in expectation, then assume a value of a statistical life and a discount rate, and estimate how much averting those deaths is 'worth'. (I'm stealing this method directly from Shulman and Thornley.)Using the XPT superforecasts and OWID population projections gives us the following deaths in expectation, in millions:Bio----AINuclearNon-anthropogenicTotalBio----AINuclearNon-anthropogenicTotalDeaths in expectation (millions)203020502100Catastrophic risk (>10% of humans die in 5 years)18.60.097.122.04.317.841.40.020.150.57.337.493.7Extinction risk (human population <5000)1.20.012.939.30.091.07.70.030.10.40.929.1103.5Some notes:Workings here.For catastrophic risks, the deaths in expectation should be read as a lower bound, because they assume 10% deaths and the question includes scenarios with >10% deaths.That's deaths in expectation worldwide. But the value of a statistical life varies by country: governments have different resources and the cost of interventions in different places varies.So the most straightforward way to think about the worth of catastrophic and extinction risk reduction is to ask how much this would be worth in a given country. Let's take the US as an example.First we need US deaths in expectation:Bio----AINuclearNon-anthropogenicTotalBio----AINuclearNon-anthropogenicTotalUS deaths in expectation (millions)203020502100Catastrophic risk (>10% of humans die in 5 years)0.70.0040.30.80.20.71.60.0010.0060.020.31.43.6Extinction risk (human population <5000)0.050.00040.11.50.0040.040.30.0010.010.020.041.13.9Workings here.We can then assume a US value for a statistical life, and a discount rate, and use these to estimate how much averting the deaths in expectation is 'worth' to the...

]]>
rosehadshar https://forum.effectivealtruism.org/posts/EHTynQaSN8ubjCbm9/how-much-is-reducing-catastrophic-and-extinction-risk-worth 10% of humans die in 5 years)1.8%0.8%1%0.01%0.73%2.13%0.35%5%12%0.50%1.83%4%0.0026%0.015%0.05%0.85%3.85%9.05%Extinction risk (human population <5000)0.012%0.01%0.0018%0.0001%0.03%0.38%0.02%1.1%3%0.001%0.01%0.074%0.0004%0.0014%0.0043%0.01%0.3%1%If we take these numbers at face value, how much is catastrophic and extinction risk reduction worth?One approach is to take the XPT forecasts, convert them into deaths in expectation, then assume a value of a statistical life and a discount rate, and estimate how much averting those deaths is 'worth'. (I'm stealing this method directly from Shulman and Thornley.)Using the XPT superforecasts and OWID population projections gives us the following deaths in expectation, in millions:Bio----AINuclearNon-anthropogenicTotalBio----AINuclearNon-anthropogenicTotalDeaths in expectation (millions)203020502100Catastrophic risk (>10% of humans die in 5 years)18.60.097.122.04.317.841.40.020.150.57.337.493.7Extinction risk (human population <5000)1.20.012.939.30.091.07.70.030.10.40.929.1103.5Some notes:Workings here.For catastrophic risks, the deaths in expectation should be read as a lower bound, because they assume 10% deaths and the question includes scenarios with >10% deaths.That's deaths in expectation worldwide. But the value of a statistical life varies by country: governments have different resources and the cost of interventions in different places varies.So the most straightforward way to think about the worth of catastrophic and extinction risk reduction is to ask how much this would be worth in a given country. Let's take the US as an example.First we need US deaths in expectation:Bio----AINuclearNon-anthropogenicTotalBio----AINuclearNon-anthropogenicTotalUS deaths in expectation (millions)203020502100Catastrophic risk (>10% of humans die in 5 years)0.70.0040.30.80.20.71.60.0010.0060.020.31.43.6Extinction risk (human population <5000)0.050.00040.11.50.0040.040.30.0010.010.020.041.13.9Workings here.We can then assume a US value for a statistical life, and a discount rate, and use these to estimate how much averting the deaths in expectation is 'worth' to the...]]> Tue, 25 Jul 2023 20:47:38 +0000 EA - How much is reducing catastrophic and extinction risk worth, assuming XPT forecasts? by rosehadshar 10% of humans die in 5 years)1.8%0.8%1%0.01%0.73%2.13%0.35%5%12%0.50%1.83%4%0.0026%0.015%0.05%0.85%3.85%9.05%Extinction risk (human population <5000)0.012%0.01%0.0018%0.0001%0.03%0.38%0.02%1.1%3%0.001%0.01%0.074%0.0004%0.0014%0.0043%0.01%0.3%1%If we take these numbers at face value, how much is catastrophic and extinction risk reduction worth?One approach is to take the XPT forecasts, convert them into deaths in expectation, then assume a value of a statistical life and a discount rate, and estimate how much averting those deaths is 'worth'. (I'm stealing this method directly from Shulman and Thornley.)Using the XPT superforecasts and OWID population projections gives us the following deaths in expectation, in millions:Bio----AINuclearNon-anthropogenicTotalBio----AINuclearNon-anthropogenicTotalDeaths in expectation (millions)203020502100Catastrophic risk (>10% of humans die in 5 years)18.60.097.122.04.317.841.40.020.150.57.337.493.7Extinction risk (human population <5000)1.20.012.939.30.091.07.70.030.10.40.929.1103.5Some notes:Workings here.For catastrophic risks, the deaths in expectation should be read as a lower bound, because they assume 10% deaths and the question includes scenarios with >10% deaths.That's deaths in expectation worldwide. But the value of a statistical life varies by country: governments have different resources and the cost of interventions in different places varies.So the most straightforward way to think about the worth of catastrophic and extinction risk reduction is to ask how much this would be worth in a given country. Let's take the US as an example.First we need US deaths in expectation:Bio----AINuclearNon-anthropogenicTotalBio----AINuclearNon-anthropogenicTotalUS deaths in expectation (millions)203020502100Catastrophic risk (>10% of humans die in 5 years)0.70.0040.30.80.20.71.60.0010.0060.020.31.43.6Extinction risk (human population <5000)0.050.00040.11.50.0040.040.30.0010.010.020.041.13.9Workings here.We can then assume a US value for a statistical life, and a discount rate, and use these to estimate how much averting the deaths in expectation is 'worth' to the...]]> 10% of humans die in 5 years)1.8%0.8%1%0.01%0.73%2.13%0.35%5%12%0.50%1.83%4%0.0026%0.015%0.05%0.85%3.85%9.05%Extinction risk (human population <5000)0.012%0.01%0.0018%0.0001%0.03%0.38%0.02%1.1%3%0.001%0.01%0.074%0.0004%0.0014%0.0043%0.01%0.3%1%If we take these numbers at face value, how much is catastrophic and extinction risk reduction worth?One approach is to take the XPT forecasts, convert them into deaths in expectation, then assume a value of a statistical life and a discount rate, and estimate how much averting those deaths is 'worth'. (I'm stealing this method directly from Shulman and Thornley.)Using the XPT superforecasts and OWID population projections gives us the following deaths in expectation, in millions:Bio----AINuclearNon-anthropogenicTotalBio----AINuclearNon-anthropogenicTotalDeaths in expectation (millions)203020502100Catastrophic risk (>10% of humans die in 5 years)18.60.097.122.04.317.841.40.020.150.57.337.493.7Extinction risk (human population <5000)1.20.012.939.30.091.07.70.030.10.40.929.1103.5Some notes:Workings here.For catastrophic risks, the deaths in expectation should be read as a lower bound, because they assume 10% deaths and the question includes scenarios with >10% deaths.That's deaths in expectation worldwide. But the value of a statistical life varies by country: governments have different resources and the cost of interventions in different places varies.So the most straightforward way to think about the worth of catastrophic and extinction risk reduction is to ask how much this would be worth in a given country. Let's take the US as an example.First we need US deaths in expectation:Bio----AINuclearNon-anthropogenicTotalBio----AINuclearNon-anthropogenicTotalUS deaths in expectation (millions)203020502100Catastrophic risk (>10% of humans die in 5 years)0.70.0040.30.80.20.71.60.0010.0060.020.31.43.6Extinction risk (human population <5000)0.050.00040.11.50.0040.040.30.0010.010.020.041.13.9Workings here.We can then assume a US value for a statistical life, and a discount rate, and use these to estimate how much averting the deaths in expectation is 'worth' to the...]]> rosehadshar https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:37 no full 6692
uYF5rLjH7tbJmFSbQ EA - [Linkpost] Can we confidently dismiss the existence of near aliens? Probabilities and implications by Magnus Vinding Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Can we confidently dismiss the existence of near aliens? Probabilities and implications, published by Magnus Vinding on July 25, 2023 on The Effective Altruism Forum.An earlier post of mine reviewed the most credible evidence I have managed to find regarding seemingly anomalous UFOs. My aim in this post is to mostly set aside the purported UFO evidence and to instead explore whether we can justify placing an extremely low probability on the existence of near aliens, irrespective of the alleged UFO evidence. (By "near aliens", I mean advanced aliens on or around Earth.)Specifically, after getting some initial clarifications out of the way, I proceed to do the following:I explore three potential justifications for a high level of confidence (>99.99 percent) regarding the absence of near aliens: (I) an extremely low prior, (II) technological impossibility, and (III) expectations about what we should observe conditional on advanced aliens being here.I review various considerations that suggest that these potential justifications, while they each have some merit, are often overstated.For example, in terms of what we should expect to observe conditional on advanced aliens having reached Earth, I argue that it might not look so different from what we in fact observe.In particular, I argue that near aliens who are entirely silent or only occasionally visible are more plausible than commonly acknowledged. The motive of gathering information about the evolution of life on Earth makes strategic sense relative to a wide range of goals, and this info gain motive is not only compatible with a lack of clear visibility, but arguably predicts it.I try to give some specific probability estimates - priors and likelihoods on the existence of near aliens - that seem reasonable to me in light of the foregoing considerations.Based on these probability estimates, I present Bayesian updates of the probability of advanced aliens around Earth under different assumptions about our evidence.I argue that, regardless of what we make of the purported UFO evidence, the probability of near aliens seems high enough to be relevant to many of our decisions, especially those relating to large-scale impact and risks.Lastly, I consider the implications that a non-negligible probability of near aliens might have for our future decisions, including the possibility that our main influence on the future might be through our influence on near aliens.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Magnus Vinding https://forum.effectivealtruism.org/posts/uYF5rLjH7tbJmFSbQ/linkpost-can-we-confidently-dismiss-the-existence-of-near 99.99 percent) regarding the absence of near aliens: (I) an extremely low prior, (II) technological impossibility, and (III) expectations about what we should observe conditional on advanced aliens being here.I review various considerations that suggest that these potential justifications, while they each have some merit, are often overstated.For example, in terms of what we should expect to observe conditional on advanced aliens having reached Earth, I argue that it might not look so different from what we in fact observe.In particular, I argue that near aliens who are entirely silent or only occasionally visible are more plausible than commonly acknowledged. The motive of gathering information about the evolution of life on Earth makes strategic sense relative to a wide range of goals, and this info gain motive is not only compatible with a lack of clear visibility, but arguably predicts it.I try to give some specific probability estimates - priors and likelihoods on the existence of near aliens - that seem reasonable to me in light of the foregoing considerations.Based on these probability estimates, I present Bayesian updates of the probability of advanced aliens around Earth under different assumptions about our evidence.I argue that, regardless of what we make of the purported UFO evidence, the probability of near aliens seems high enough to be relevant to many of our decisions, especially those relating to large-scale impact and risks.Lastly, I consider the implications that a non-negligible probability of near aliens might have for our future decisions, including the possibility that our main influence on the future might be through our influence on near aliens.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Tue, 25 Jul 2023 18:45:23 +0000 EA - [Linkpost] Can we confidently dismiss the existence of near aliens? Probabilities and implications by Magnus Vinding 99.99 percent) regarding the absence of near aliens: (I) an extremely low prior, (II) technological impossibility, and (III) expectations about what we should observe conditional on advanced aliens being here.I review various considerations that suggest that these potential justifications, while they each have some merit, are often overstated.For example, in terms of what we should expect to observe conditional on advanced aliens having reached Earth, I argue that it might not look so different from what we in fact observe.In particular, I argue that near aliens who are entirely silent or only occasionally visible are more plausible than commonly acknowledged. The motive of gathering information about the evolution of life on Earth makes strategic sense relative to a wide range of goals, and this info gain motive is not only compatible with a lack of clear visibility, but arguably predicts it.I try to give some specific probability estimates - priors and likelihoods on the existence of near aliens - that seem reasonable to me in light of the foregoing considerations.Based on these probability estimates, I present Bayesian updates of the probability of advanced aliens around Earth under different assumptions about our evidence.I argue that, regardless of what we make of the purported UFO evidence, the probability of near aliens seems high enough to be relevant to many of our decisions, especially those relating to large-scale impact and risks.Lastly, I consider the implications that a non-negligible probability of near aliens might have for our future decisions, including the possibility that our main influence on the future might be through our influence on near aliens.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> 99.99 percent) regarding the absence of near aliens: (I) an extremely low prior, (II) technological impossibility, and (III) expectations about what we should observe conditional on advanced aliens being here.I review various considerations that suggest that these potential justifications, while they each have some merit, are often overstated.For example, in terms of what we should expect to observe conditional on advanced aliens having reached Earth, I argue that it might not look so different from what we in fact observe.In particular, I argue that near aliens who are entirely silent or only occasionally visible are more plausible than commonly acknowledged. The motive of gathering information about the evolution of life on Earth makes strategic sense relative to a wide range of goals, and this info gain motive is not only compatible with a lack of clear visibility, but arguably predicts it.I try to give some specific probability estimates - priors and likelihoods on the existence of near aliens - that seem reasonable to me in light of the foregoing considerations.Based on these probability estimates, I present Bayesian updates of the probability of advanced aliens around Earth under different assumptions about our evidence.I argue that, regardless of what we make of the purported UFO evidence, the probability of near aliens seems high enough to be relevant to many of our decisions, especially those relating to large-scale impact and risks.Lastly, I consider the implications that a non-negligible probability of near aliens might have for our future decisions, including the possibility that our main influence on the future might be through our influence on near aliens.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Magnus Vinding https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:26 no full 6688
pd6m3LDYZ7tjc6WoB EA - 2022 Effective Animal Advocacy Forum Survey: Results and analysis by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2022 Effective Animal Advocacy Forum Survey: Results and analysis, published by Rethink Priorities on July 25, 2023 on The Effective Altruism Forum.IntroductionIn September 2022, the Effective Animal Advocacy (EAA) Coordination Forum (now titled the Animal Advocacy Strategy Forum) was held with the purpose of bringing together key decision-makers in the animal advocacy community to connect, coordinate, and strategize. The attendees represented approximately 20 key groups in the effective animal advocacy space.At the end of the forum, 25 participants filled out a survey that sought to better understand the future needs of effective animal advocacy groups and the perceptions of animal advocates about the most important areas to focus on in the future. This report analyzes the results of that survey.Key TakeawaysOn average, respondents to the EAA Survey believe that:The largest share (29%) of effective animal advocacy resources should be spent in Asia and the Pacific, followed by Western Europe, the U.S., Canada, Australia, and New Zealand (26%).Farmed fish and farmed invertebrates received the highest allocations of resources among respondents (16.5% and 17.1%, respectively), shortly followed by egg-laying hens and broiler chickens (12.8% and 13.1%, respectively).The plurality of resources should be spent targeting businesses (34%), followed by government institutions (28%).There's about a 60% chance that an area that should receive over 20% of the EAA funding currently receives less than 5% of it.In addition, a plurality of respondents believes that:EAA needs more people who are experts on the developing world/populous-yet-neglected countries (17/25 votes), government and policy (16/25), and/or figuring out what matters most and setting priorities (13/25).Their EAA organization is sometimes (9/25 votes) or often (10/25) funding-constrained and it is sometimes hard (11/25) to find outstanding candidates for roles.When asked about issues facing the effective animal advocacy movement:The lack of a strong evidence base (11/25 votes) and ability to appeal to the people most able to contribute to EAA cause areas (10/25) were the most commonly cited problems for EAA.Epistemic uncertainty regarding interventions (10/25 votes) and a lack of influence over the public, donors, and others with power (10/25) were generally cited as the most pressing problems in EAA.AcknowledgmentsThis report is a project of Rethink Priorities-a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. It was written by Laura Duffy. Thanks to William McAuliffe for helpful feedback and to Adam Papineau for copy-editing.If you are interested in RP's work, please visit our research database and subscribe to our newsletter.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/pd6m3LDYZ7tjc6WoB/2022-effective-animal-advocacy-forum-survey-results-and Tue, 25 Jul 2023 18:35:12 +0000 EA - 2022 Effective Animal Advocacy Forum Survey: Results and analysis by Rethink Priorities Rethink Priorities https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:06 no full 6690
vqPy7TkBbzrAkxCf7 EA - Updates to the flow of funding in EA movement building post by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates to the flow of funding in EA movement building post, published by Vaidehi Agarwalla on July 25, 2023 on The Effective Altruism Forum.This is an summary of updates made to my previous post, The flow of funding in EA movement building.Overall ChangesTotal funding tracked in the data increased to $290M (from $245M). New data is from:Several private donors and Longview Philanthropy who shared (previously non-public) donation & grant recommendation dataGlobal health & wellbeing spending e.g. GiveWell, ACE and some animal orgs (at a discounted rate since these organizations aren't explicitly focused on EA movement building but did contribute to the growth of the EA movement)The inclusion of some longtermist research organizations such as FHI which have helped do field building (also at a discounted rate)Changes to proportions and funding over timeDuring the 2012-2016 period, funding tracked in my data roughly doubled from ~$4M to ~$8.9M (quick estimate) including $4M in funding to GiveWell and $0.5M from other donors. During 2017-2023 period, funding tracked roughly increased from $241 to $281M, from other donors and the inclusion of some cause-area specific organizations that contributed to movement building.The table below summarizes the changes to the proportions of funding coming from different sources:Funder CategoryChange in % New %Original %Other donorsUp ~8% 9.6%1.5%FTX Future FundDown ~3%14.8%17.5%EAIF (non-OP donors), LTFF & Jaan Tallinn (incl. SFF)Down ~1% EA Animal FundUp ~1%1.1%0%Open PhilanthropyOP LT: Down 9.5%(~10.1% w. EAIF)OP GH&W: Down 0.4%OP Other: Up 5.9%Overall: Down ~3%OP LT: 50.4% (~54.5% w. EAIF)OP GH&W: 2.6%OP Other: 5.9%Overall: 63%OP LT: 59.8% (~64.6% w. EAIF)OP GH&W: 2.2%OP Other: 0%Overall: 66%Here's the new % data in a pie chart:What data is still missing?Total funding: I estimate total funding from 2012 to June 2023 is likely $300-350M (medium confidence). I previously estimated $250-280M (significant underestimate).Individual donors: I estimate that $1-20M since 2012 is probably still missing, since I haven't included donors who work with Effective Giving, Generation Pledge or Founders' Pledge.Allocation of cause-specific efforts: You may disagree with the discounting I've done towards different cause-specific projects (in either direction). If you think I'm underweighting those efforts, then you could consider that "missing" data.The most accurate way to do these estimates would be to ask movement building organizations for their annual expenses and to break down the sources of their funding. This information is not publicly available, and some organizations do not publish annual expenses publicly from where you might make initial guesses. I'd encourage organizations to share their numbers to give us a fuller picture of the landscape.Mistakes & reflectionsI didn't expect this post to be read by as many people as it was. If I'd known this in advance, I think it's likely I would have delayed publication and seeked more external feedback because concrete numbers can be sticky and hard to update people's views on.I noted that this was a preliminary analysis in the opening, but the data may have been seen as more final than it was. In the future I would spend more time hedging numbers and stating ranges of possible values and encourage people to cite those instead of exact numbers.I didn't add enough uncertainty estimates to the numbers throughout the post. For example, I mentioned that the data was incomplete, and provided an estimate on the total amount of funding ($250-280M) - this was a moderately large underestimate (the total new total tracked data now stands at $290M).I missed several sources of global health & wellbeing spending, which significantly increased total spend between 2012-2016. This ...

]]>
Vaidehi Agarwalla https://forum.effectivealtruism.org/posts/vqPy7TkBbzrAkxCf7/updates-to-the-flow-of-funding-in-ea-movement-building-post Tue, 25 Jul 2023 10:34:41 +0000 EA - Updates to the flow of funding in EA movement building post by Vaidehi Agarwalla Vaidehi Agarwalla https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:51 no full 6689
6sykvCXRC5rjgtoQt EA - The Productivity Fallacy by Deena Englander Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Productivity Fallacy, published by Deena Englander on July 25, 2023 on The Effective Altruism Forum.When I find myself saying the same thing multiple times, it's time to write it up in an article.Recently, in my coaching sessions primarily with EA clients, I've found myself giving the same advice multiple times - cut down on what you're doing, spend time on yourself, and try to be unproductive for short periods of time.The problem is that people want to work at their maximum capacity in order to be the most impactful. And that is the productivity fallacy. So let's take a look at what happens when you are constantly working at full capacity.To start with, I'm going to use the analogy of a computer since most of us are pretty familiar with the basic mechanics of how it works. When a computer is going slowly, there are a few basic troubleshooting steps:Check your computer's resourcesYour computer only has a limited amount of resources to apply to the tasks set to it. When it exceeds hardware limitations, it will not be able to perform the required functions. Even worse, when it's exceeding its resource availability (running at >80%) for an extended period of time, it can have unintended and negative consequences such as shortened lifespan, slower performance, and software errors.I don't think anyone would argue with that - that's pretty much basic knowledge.So let's apply that to you and productivity:We only have a limited amount of energy. You can definitely argue that it's a design flaw with humans. When we exceed that amount of energy without replenishing it properly, you start running at "max capacity". When you're running at max capacity (being highly productive and efficient with your time without the restorative components to balance it), there are 3 big problems you'll encounter:You're at a much greater risk of burnout, getting sick, and harming your long-term ability to be impactful. The stress on your system has damaging consequences for both your physical and mental health, and they're not easy to recover from.Humans aren't built to do too much at once. If you take too much on, it will likely take necessary energy away from the things that matter most.You're much more likely to make mistakes. Mistakes can often be prevented by having the presence, calm, and headspace to focus properly. When you have too much going on, mistakes should be expected. You're also less likely to be able to come up with creative solutions since our creativity flows much more when we're not in a stressed state.2. Close unnecessary applicationsIf your computer is running too many applications, it slows everything else down. So you have to make a choice - which are the ones that are critical to have running, and which ones can you live without, are consuming too many resources, or you didn't even realize were consuming resources?Applying that back to you, take an honest look at the activities that consume your resources. Which ones are critical to keep going? Which ones are less essential? Letting go of something isn't a failure - it's redirecting your resources to excel in your top priorities. Sometimes it helps to use a "monitoring program" like time tracking to see where your time and energy is going.3. Optimize your settingsSometimes there are some applications that you need, but they consume a lot of resources. So the next recommended step is to optimize your settings. Sometimes it's deleting the backlog, or changing the refresh rate, or having it not run in the background, or run at lower intensity. There are lots of potential solutions, and they differ based on your unique set of programs, available resources, and objectives.In your life, there may be some things that are high-resource consuming. But they don't need to be that way. How can you adjust these ...

]]>
Deena Englander https://forum.effectivealtruism.org/posts/6sykvCXRC5rjgtoQt/the-productivity-fallacy 80%) for an extended period of time, it can have unintended and negative consequences such as shortened lifespan, slower performance, and software errors.I don't think anyone would argue with that - that's pretty much basic knowledge.So let's apply that to you and productivity:We only have a limited amount of energy. You can definitely argue that it's a design flaw with humans. When we exceed that amount of energy without replenishing it properly, you start running at "max capacity". When you're running at max capacity (being highly productive and efficient with your time without the restorative components to balance it), there are 3 big problems you'll encounter:You're at a much greater risk of burnout, getting sick, and harming your long-term ability to be impactful. The stress on your system has damaging consequences for both your physical and mental health, and they're not easy to recover from.Humans aren't built to do too much at once. If you take too much on, it will likely take necessary energy away from the things that matter most.You're much more likely to make mistakes. Mistakes can often be prevented by having the presence, calm, and headspace to focus properly. When you have too much going on, mistakes should be expected. You're also less likely to be able to come up with creative solutions since our creativity flows much more when we're not in a stressed state.2. Close unnecessary applicationsIf your computer is running too many applications, it slows everything else down. So you have to make a choice - which are the ones that are critical to have running, and which ones can you live without, are consuming too many resources, or you didn't even realize were consuming resources?Applying that back to you, take an honest look at the activities that consume your resources. Which ones are critical to keep going? Which ones are less essential? Letting go of something isn't a failure - it's redirecting your resources to excel in your top priorities. Sometimes it helps to use a "monitoring program" like time tracking to see where your time and energy is going.3. Optimize your settingsSometimes there are some applications that you need, but they consume a lot of resources. So the next recommended step is to optimize your settings. Sometimes it's deleting the backlog, or changing the refresh rate, or having it not run in the background, or run at lower intensity. There are lots of potential solutions, and they differ based on your unique set of programs, available resources, and objectives.In your life, there may be some things that are high-resource consuming. But they don't need to be that way. How can you adjust these ...]]> Tue, 25 Jul 2023 00:42:44 +0000 EA - The Productivity Fallacy by Deena Englander 80%) for an extended period of time, it can have unintended and negative consequences such as shortened lifespan, slower performance, and software errors.I don't think anyone would argue with that - that's pretty much basic knowledge.So let's apply that to you and productivity:We only have a limited amount of energy. You can definitely argue that it's a design flaw with humans. When we exceed that amount of energy without replenishing it properly, you start running at "max capacity". When you're running at max capacity (being highly productive and efficient with your time without the restorative components to balance it), there are 3 big problems you'll encounter:You're at a much greater risk of burnout, getting sick, and harming your long-term ability to be impactful. The stress on your system has damaging consequences for both your physical and mental health, and they're not easy to recover from.Humans aren't built to do too much at once. If you take too much on, it will likely take necessary energy away from the things that matter most.You're much more likely to make mistakes. Mistakes can often be prevented by having the presence, calm, and headspace to focus properly. When you have too much going on, mistakes should be expected. You're also less likely to be able to come up with creative solutions since our creativity flows much more when we're not in a stressed state.2. Close unnecessary applicationsIf your computer is running too many applications, it slows everything else down. So you have to make a choice - which are the ones that are critical to have running, and which ones can you live without, are consuming too many resources, or you didn't even realize were consuming resources?Applying that back to you, take an honest look at the activities that consume your resources. Which ones are critical to keep going? Which ones are less essential? Letting go of something isn't a failure - it's redirecting your resources to excel in your top priorities. Sometimes it helps to use a "monitoring program" like time tracking to see where your time and energy is going.3. Optimize your settingsSometimes there are some applications that you need, but they consume a lot of resources. So the next recommended step is to optimize your settings. Sometimes it's deleting the backlog, or changing the refresh rate, or having it not run in the background, or run at lower intensity. There are lots of potential solutions, and they differ based on your unique set of programs, available resources, and objectives.In your life, there may be some things that are high-resource consuming. But they don't need to be that way. How can you adjust these ...]]> 80%) for an extended period of time, it can have unintended and negative consequences such as shortened lifespan, slower performance, and software errors.I don't think anyone would argue with that - that's pretty much basic knowledge.So let's apply that to you and productivity:We only have a limited amount of energy. You can definitely argue that it's a design flaw with humans. When we exceed that amount of energy without replenishing it properly, you start running at "max capacity". When you're running at max capacity (being highly productive and efficient with your time without the restorative components to balance it), there are 3 big problems you'll encounter:You're at a much greater risk of burnout, getting sick, and harming your long-term ability to be impactful. The stress on your system has damaging consequences for both your physical and mental health, and they're not easy to recover from.Humans aren't built to do too much at once. If you take too much on, it will likely take necessary energy away from the things that matter most.You're much more likely to make mistakes. Mistakes can often be prevented by having the presence, calm, and headspace to focus properly. When you have too much going on, mistakes should be expected. You're also less likely to be able to come up with creative solutions since our creativity flows much more when we're not in a stressed state.2. Close unnecessary applicationsIf your computer is running too many applications, it slows everything else down. So you have to make a choice - which are the ones that are critical to have running, and which ones can you live without, are consuming too many resources, or you didn't even realize were consuming resources?Applying that back to you, take an honest look at the activities that consume your resources. Which ones are critical to keep going? Which ones are less essential? Letting go of something isn't a failure - it's redirecting your resources to excel in your top priorities. Sometimes it helps to use a "monitoring program" like time tracking to see where your time and energy is going.3. Optimize your settingsSometimes there are some applications that you need, but they consume a lot of resources. So the next recommended step is to optimize your settings. Sometimes it's deleting the backlog, or changing the refresh rate, or having it not run in the background, or run at lower intensity. There are lots of potential solutions, and they differ based on your unique set of programs, available resources, and objectives.In your life, there may be some things that are high-resource consuming. But they don't need to be that way. How can you adjust these ...]]> Deena Englander https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:26 no full 6691
5WLGmCg7vSfXeqSWC EA - Launching the meta charity funding circle (MCF): Apply for funding or join as a donor! by Joey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Launching the meta charity funding circle (MCF): Apply for funding or join as a donor!, published by Joey on July 26, 2023 on The Effective Altruism Forum.SummaryWe are launching the Meta Charity Funders, a growing network of donors sharing knowledge and discussing funding opportunities in the EA meta space. Apply for funding by August 27th or join the circle as a donor. See below or visit our website to learn more!If you are doing EA-aligned "meta" work, and have not received substantial funding for several years, you might be worried about funding. Over the past 10 years, Open Philanthropy and EA Funds comprised a large percent of total meta funding and are far from independent of each other. This lack of diversity means potentially effective projects outside their priorities often struggle to stay afloat or scale, and the beliefs of just a few grant-makers can massively shape the EA movement's trajectory.It can be difficult for funders within meta as well. Individual donors often don't know where to give if they don't share EA Funds' approach. Thorough vetting is scarce and expensive, with only a handful of grant-makers deploying tens of millions per year in meta grants, resulting in sub-optimal allocations.This is why we are launching the Meta Charity Funders, a growing network of donors sharing knowledge, discussing funding opportunities, and running joint open grant rounds in the EA meta space. We believe many charitable projects create a huge impact by working at one level removed from direct impact to instead enhance the impact of others. Often these projects cut across causes and don't fit neatly into a box, thus being neglected by funders. Well known examples of meta organizations include charity evaluators like GiveWell, incubators like Charity Entrepreneurship, cause prioritization research organizations like Rethink Priorities, or field-building projects promoting effective giving or impactful careers.Grantees: Apply to many HNW donors at once - 1st round closes August 27.We are open to funding meta work across a range of causes, organizational stages, strategies, etc. We are most interested in applications that have not already been substantially supported by similar actors such as EA Funds or Open Philanthropy, though we will still consider these. We expect most of our grants to range from $10,000 to $500,000 and consider grants to both individuals and organizations. We expect our first round to be between $500,000 and $1.5m of total funding.Please lean in favor of applying if you are unsure if you would be a good fit!Donors: Join us! Find neglected opportunities, get help with ops and vetting, and give on your own terms.People who are unable to commit to regular meetings are still encouraged to apply and may be invited to our Slack and email list and gain access to our grant opportunities database.Meta Charity Funding Circle is a project of Charity Entrepreneurship and Impactful Grantmaking. It is organized by this post's authors: Gage Weston, Vilhelm Skoglund, and Joey Savoie. Our members are anonymous.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Joey https://forum.effectivealtruism.org/posts/5WLGmCg7vSfXeqSWC/launching-the-meta-charity-funding-circle-mcf-apply-for Wed, 26 Jul 2023 17:03:53 +0000 EA - Launching the meta charity funding circle (MCF): Apply for funding or join as a donor! by Joey Joey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:04 no full 6702
YDTgRR7Qjmj47PaTj EA - An overview of standards in biosafety and biorisk by rosehadshar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An overview of standards in biosafety and biorisk, published by rosehadshar on July 26, 2023 on The Effective Altruism Forum.Linkpost forThis report represents ~40 hours of work by Rose Hadshar in summer 2023 for Arb Research, in turn for Holden Karnofsky in response to this call for proposals on standards.It's based on a mixture of background reading, research into individual standards, and interviews with experts. Note that I didn't ask for permission to cite the expert interviews publicly, so I've anonymised them.I suggest reading the scope and summary and skimming the overview, then only looking at sections which seem particularly relevant to you.ScopeThis report covers:Both biosecurity and biosafety:Biosecurity: "the protection, control and accountability for valuable biological materials (including information) in laboratories in order to prevent their unauthorized access, loss, theft, misuse, diversion or intentional release."Biosafety: "the containment principles, technologies and practices that are implemented to prevent unintentional exposure to pathogens and toxins or their accidental release"Biosecurity and biosafety standards internationally, but with much more emphasis on the USRegulations and guidance as well as standards proper. I am using these terms as follows:Regulations: rules on how to comply with a particular law or laws. Legally bindingGuidance: rules on how to comply with particular regulations. Not legally binding, but risky to ignoreStandards: rules which do not relate to compliance with a particular law or laws. Not legally binding.Note that I also sometimes use 'standards' as an umbrella term for regulations, guidance and standards.Summary of most interesting findingsFor each point:I've included my confidence in the claim (operationalised as the probability that I would still believe the claim after 40 hours' more work).I link to a subsection with more details (though in some cases I don't have much more to say).The origins of bio standards(80%) There were many different motivations behind bio standards (e.g. plant health, animal health, worker protection, bioterrorism, fair sharing of genetic resources.)(70%) Standards were significantly reactive to rather than proactive about incidents (e.g. lab accidents, terrorist attacks, and epidemics), though:There are exceptions (e.g. the NIH guidelines on recombinant DNA)Guidance is often more proactive than standards (e.g. gene drives)(80%) International standards weren't always later or less influential than national ones(70%) Voluntary standards seem to have prevented regulation in at least one case (e.g. the NIH guidelines)(65%) In the US, it may be more likely that mandatory standards are passed on matters of national security (e.g. FSAP)Compliance(60%) Voluntary compliance may sometimes be higher than mandated compliance (e.g. NIH guidelines)(70%) Motives for voluntarily following standards include responsibility, market access, and the spread of norms via international training(80%) Voluntary standards may be easier to internationalise than regulation(90%) Deliberate efforts were made to increase compliance internationally (e.g. via funding biosafety associations, offering training and other assistance)Problems with these standards(90%) Bio standards are often list-based. This means that they are not comprehensive, do not reflect new threats, prevent innovation in risk management, and fail to recognise the importance of context for riskThere's been a partial move away from prescriptive, list-based standards towards holistic, risk-based standards (e.g. ISO 35001)(85%) Bio standards tend to lack reporting standards, so it's very hard to tell how effective they are(60%) Standards may have impeded safety work in some areas (e.g. select agent designation as a...

]]>
rosehadshar https://forum.effectivealtruism.org/posts/YDTgRR7Qjmj47PaTj/an-overview-of-standards-in-biosafety-and-biorisk Wed, 26 Jul 2023 15:39:11 +0000 EA - An overview of standards in biosafety and biorisk by rosehadshar rosehadshar https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 34:49 no full 6703
CtACh7xRBFnpK3NW4 EA - Guide to Safe and Inclusive Events by GWWC and OFTW by GraceAdams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Guide to Safe and Inclusive Events by GWWC and OFTW, published by GraceAdams on July 27, 2023 on The Effective Altruism Forum.[We are sharing this guide on the Forum as an example for other organisations and groups that may be organising events. This guide is used by volunteers and staff at both Giving What We Can (GWWC) and One for the World (OFTW) when planning and running events.]IntroductionWho is this guide for?This guide is for anyone who is organising events in association with Giving What We Can or One for the World, whether that be paid staff or volunteers. We also hope it will be a useful resource for anyone else who is looking to organise community events in a safe and inclusive way.Why have a guide?It can be hard to think of all the relevant considerations around running events by yourself, and each of us have biases or preferences that mean that we may be more likely to think of some areas and not others. This guide aims to cover a broad range of considerations to help all organisers and ensure safe and inclusive events for all attendees.Giving What We Can and One for the World take providing safe and inclusive events seriously. Our organisations exist to improve the world, and aim to do so through the lens of compassion. We believe our events should mirror our commitment to creating a better world.What are other helpful resources to consult?CEA's Advice about Community Health at RetreatsHow to help someone having a panic attackResources from CEA on community healthWhat to keep in mind when planning the eventWho will be attending the eventConsiderations around ageUnder 18sThe organisation may not be covered by insurance to host attendees under the age 18 when unaccompanied by a parent. Please reach out to staff to discuss this more if you think Under 18s may be interested in attending.AlcoholBe aware of the restrictions on drinking in different countries. In the US, students at the undergraduate level are typically not of legal age to drink, which is 21.In other countries, it may be more common for students to drink together. That said, be mindful of the dynamics that alcohol can introduce to events, regardless of age. At the top of the event-planning process, you should first consider running the event without alcohol. You should only decide to include it if there is a strong reason to, and if there are no concerns about the mixing of young students or professionals of different ages, who come with different cultural and regional contexts around drinking.Having alcohol at your event increases associated risks for serious, negative outcomes, like sexual harassment, bullying, and even assault.Power dynamicsBe aware of power dynamics that may be unintentionally (or intentionally) created within your spaces. People who have management responsibility typically have a degree of power, whether intentionally or not, over those who are in a reporting relationship to them. People who are older, especially with significant age differences, sometimes have a level of unstated power relative to those who are younger than them. This can be doubly true when mixing folks of different ages and genders.There may also be funding relationships within your spaces that manifest through power dynamics. A person with any degree of responsibility for funding a particular person, group, or organization has a degree of power over those that they are funding. People who are in the process of reviewing grants or funding proposals from others may also have a degree of power over those people.FamiliesGWWC and OFTW events typically cater to either students or young professionals. Keep in mind that either group may have families and children who might be involved in an event that you host.When planning an event, especially a larger one, consider whether it woul...

]]>
GraceAdams https://forum.effectivealtruism.org/posts/CtACh7xRBFnpK3NW4/guide-to-safe-and-inclusive-events-by-gwwc-and-oftw Thu, 27 Jul 2023 22:00:01 +0000 EA - Guide to Safe and Inclusive Events by GWWC and OFTW by GraceAdams GraceAdams https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 20:35 no full 6715
x5Re9EKwGvAjZSmeb EA - Takeaways from the Metaculus AI Progress Tournament by Javier Prieto Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Takeaways from the Metaculus AI Progress Tournament, published by Javier Prieto on July 27, 2023 on The Effective Altruism Forum.In 2019, Open Philanthropy commissioned a set of forecasts on AI progress from Metaculus. The forecasting questions had time horizons between 6 months and >6 years. As of June 2023, 69 of the 111 questions had been resolved unambiguously. In this post, I analyze the accuracy of these forecasts as a function of question (sub)category, crowd size, and forecasting horizon. Unless otherwise indicated, my analyses are about Metaculus' proprietary aggregate forecast ("the Metaculus prediction") evaluated at the time the question closed.Related workFeel free to skip to the next section if you're already familiar with these analyses.This analysis published 2 years ago (July 2021) looked at 64 resolved AI questions and concluded there was weak but ultimately inconclusive evidence of bias towards faster progress.A more recent analysis from March 2023 found that Metaculus had a worse Brier score on (some) AI questions than on average across all questions and presented a few behavioral correlates of accuracy within AI questions, e.g. accuracy was poorer on questions with more updates and when those updates were less informative in a certain technical sense (see post for details).Metaculus responded to the previous post with a more comprehensive analysis that included all resolved AI questions (152 in total, 64 of which were binary and 88 continuous). They show that performance is significantly better than chance for both question types and marginally better than was claimed in the previous analysis (which relied on a smaller sample of questions), though still worse than the average for all questions on the site.The analysis I present below has some overlaps with those three but it fills an important gap by studying whether there's systematic over- or under-optimism in Metaculus's AI progress predictions using data from a fairly recent tournament that had monetary incentives and thus (presumably) should've resulted in more careful forecasts.Key takeawaysNB: These results haven't been thoroughly vetted by anyone else. The conclusions I draw represent my views, not Open Phil's.Progress on benchmarks was underestimated, while progress on other proxies (compute, bibliometric indicators, and, to a lesser extent, economic indicators) was overestimated. [more]This is consistent with a picture where AI progresses surprisingly rapidly on well-defined benchmarks but the attention it receives and its "real world" impact fail to keep up with performance on said benchmarks.However, I see a few problems with this picture:It's unclear to me how some of the non-benchmark proxies are relevant to AI progress, e.g.The TOP500 compute benchmark is mostly about supercomputers that (AFAICT) are mostly used to run numerical simulations, not to accelerate AI training and inference. In fact, some of the top performers don't even have GPUs.The number of new preprints in certain ML subfields over short (~6-month) time horizons may be more dependent on conference publication cycles than underlying growth.Most of these forecasts came due before or very soon after the release of ChatGPT and GPT-4 / Bing, a time that felt qualitatively different from where we are today.Metaculus narrowly beats chance and performs worse in this tournament than on average across all continuous questions on the site despite the prize money. This could indicate that these questions are inherently harder, or that they drove less or lower-quality engagement. [more]There's no strong evidence that performance was significantly worse on questions with longer horizons (<1 year vs ~2 years). [more]I see no clear pattern behind the biggest misses, but I provide plausible post-mortems for some o...

]]>
Javier Prieto https://forum.effectivealtruism.org/posts/x5Re9EKwGvAjZSmeb/takeaways-from-the-metaculus-ai-progress-tournament 6 years. As of June 2023, 69 of the 111 questions had been resolved unambiguously. In this post, I analyze the accuracy of these forecasts as a function of question (sub)category, crowd size, and forecasting horizon. Unless otherwise indicated, my analyses are about Metaculus' proprietary aggregate forecast ("the Metaculus prediction") evaluated at the time the question closed.Related workFeel free to skip to the next section if you're already familiar with these analyses.This analysis published 2 years ago (July 2021) looked at 64 resolved AI questions and concluded there was weak but ultimately inconclusive evidence of bias towards faster progress.A more recent analysis from March 2023 found that Metaculus had a worse Brier score on (some) AI questions than on average across all questions and presented a few behavioral correlates of accuracy within AI questions, e.g. accuracy was poorer on questions with more updates and when those updates were less informative in a certain technical sense (see post for details).Metaculus responded to the previous post with a more comprehensive analysis that included all resolved AI questions (152 in total, 64 of which were binary and 88 continuous). They show that performance is significantly better than chance for both question types and marginally better than was claimed in the previous analysis (which relied on a smaller sample of questions), though still worse than the average for all questions on the site.The analysis I present below has some overlaps with those three but it fills an important gap by studying whether there's systematic over- or under-optimism in Metaculus's AI progress predictions using data from a fairly recent tournament that had monetary incentives and thus (presumably) should've resulted in more careful forecasts.Key takeawaysNB: These results haven't been thoroughly vetted by anyone else. The conclusions I draw represent my views, not Open Phil's.Progress on benchmarks was underestimated, while progress on other proxies (compute, bibliometric indicators, and, to a lesser extent, economic indicators) was overestimated. [more]This is consistent with a picture where AI progresses surprisingly rapidly on well-defined benchmarks but the attention it receives and its "real world" impact fail to keep up with performance on said benchmarks.However, I see a few problems with this picture:It's unclear to me how some of the non-benchmark proxies are relevant to AI progress, e.g.The TOP500 compute benchmark is mostly about supercomputers that (AFAICT) are mostly used to run numerical simulations, not to accelerate AI training and inference. In fact, some of the top performers don't even have GPUs.The number of new preprints in certain ML subfields over short (~6-month) time horizons may be more dependent on conference publication cycles than underlying growth.Most of these forecasts came due before or very soon after the release of ChatGPT and GPT-4 / Bing, a time that felt qualitatively different from where we are today.Metaculus narrowly beats chance and performs worse in this tournament than on average across all continuous questions on the site despite the prize money. This could indicate that these questions are inherently harder, or that they drove less or lower-quality engagement. [more]There's no strong evidence that performance was significantly worse on questions with longer horizons (<1 year vs ~2 years). [more]I see no clear pattern behind the biggest misses, but I provide plausible post-mortems for some o...]]> Thu, 27 Jul 2023 20:52:21 +0000 EA - Takeaways from the Metaculus AI Progress Tournament by Javier Prieto 6 years. As of June 2023, 69 of the 111 questions had been resolved unambiguously. In this post, I analyze the accuracy of these forecasts as a function of question (sub)category, crowd size, and forecasting horizon. Unless otherwise indicated, my analyses are about Metaculus' proprietary aggregate forecast ("the Metaculus prediction") evaluated at the time the question closed.Related workFeel free to skip to the next section if you're already familiar with these analyses.This analysis published 2 years ago (July 2021) looked at 64 resolved AI questions and concluded there was weak but ultimately inconclusive evidence of bias towards faster progress.A more recent analysis from March 2023 found that Metaculus had a worse Brier score on (some) AI questions than on average across all questions and presented a few behavioral correlates of accuracy within AI questions, e.g. accuracy was poorer on questions with more updates and when those updates were less informative in a certain technical sense (see post for details).Metaculus responded to the previous post with a more comprehensive analysis that included all resolved AI questions (152 in total, 64 of which were binary and 88 continuous). They show that performance is significantly better than chance for both question types and marginally better than was claimed in the previous analysis (which relied on a smaller sample of questions), though still worse than the average for all questions on the site.The analysis I present below has some overlaps with those three but it fills an important gap by studying whether there's systematic over- or under-optimism in Metaculus's AI progress predictions using data from a fairly recent tournament that had monetary incentives and thus (presumably) should've resulted in more careful forecasts.Key takeawaysNB: These results haven't been thoroughly vetted by anyone else. The conclusions I draw represent my views, not Open Phil's.Progress on benchmarks was underestimated, while progress on other proxies (compute, bibliometric indicators, and, to a lesser extent, economic indicators) was overestimated. [more]This is consistent with a picture where AI progresses surprisingly rapidly on well-defined benchmarks but the attention it receives and its "real world" impact fail to keep up with performance on said benchmarks.However, I see a few problems with this picture:It's unclear to me how some of the non-benchmark proxies are relevant to AI progress, e.g.The TOP500 compute benchmark is mostly about supercomputers that (AFAICT) are mostly used to run numerical simulations, not to accelerate AI training and inference. In fact, some of the top performers don't even have GPUs.The number of new preprints in certain ML subfields over short (~6-month) time horizons may be more dependent on conference publication cycles than underlying growth.Most of these forecasts came due before or very soon after the release of ChatGPT and GPT-4 / Bing, a time that felt qualitatively different from where we are today.Metaculus narrowly beats chance and performs worse in this tournament than on average across all continuous questions on the site despite the prize money. This could indicate that these questions are inherently harder, or that they drove less or lower-quality engagement. [more]There's no strong evidence that performance was significantly worse on questions with longer horizons (<1 year vs ~2 years). [more]I see no clear pattern behind the biggest misses, but I provide plausible post-mortems for some o...]]> 6 years. As of June 2023, 69 of the 111 questions had been resolved unambiguously. In this post, I analyze the accuracy of these forecasts as a function of question (sub)category, crowd size, and forecasting horizon. Unless otherwise indicated, my analyses are about Metaculus' proprietary aggregate forecast ("the Metaculus prediction") evaluated at the time the question closed.Related workFeel free to skip to the next section if you're already familiar with these analyses.This analysis published 2 years ago (July 2021) looked at 64 resolved AI questions and concluded there was weak but ultimately inconclusive evidence of bias towards faster progress.A more recent analysis from March 2023 found that Metaculus had a worse Brier score on (some) AI questions than on average across all questions and presented a few behavioral correlates of accuracy within AI questions, e.g. accuracy was poorer on questions with more updates and when those updates were less informative in a certain technical sense (see post for details).Metaculus responded to the previous post with a more comprehensive analysis that included all resolved AI questions (152 in total, 64 of which were binary and 88 continuous). They show that performance is significantly better than chance for both question types and marginally better than was claimed in the previous analysis (which relied on a smaller sample of questions), though still worse than the average for all questions on the site.The analysis I present below has some overlaps with those three but it fills an important gap by studying whether there's systematic over- or under-optimism in Metaculus's AI progress predictions using data from a fairly recent tournament that had monetary incentives and thus (presumably) should've resulted in more careful forecasts.Key takeawaysNB: These results haven't been thoroughly vetted by anyone else. The conclusions I draw represent my views, not Open Phil's.Progress on benchmarks was underestimated, while progress on other proxies (compute, bibliometric indicators, and, to a lesser extent, economic indicators) was overestimated. [more]This is consistent with a picture where AI progresses surprisingly rapidly on well-defined benchmarks but the attention it receives and its "real world" impact fail to keep up with performance on said benchmarks.However, I see a few problems with this picture:It's unclear to me how some of the non-benchmark proxies are relevant to AI progress, e.g.The TOP500 compute benchmark is mostly about supercomputers that (AFAICT) are mostly used to run numerical simulations, not to accelerate AI training and inference. In fact, some of the top performers don't even have GPUs.The number of new preprints in certain ML subfields over short (~6-month) time horizons may be more dependent on conference publication cycles than underlying growth.Most of these forecasts came due before or very soon after the release of ChatGPT and GPT-4 / Bing, a time that felt qualitatively different from where we are today.Metaculus narrowly beats chance and performs worse in this tournament than on average across all continuous questions on the site despite the prize money. This could indicate that these questions are inherently harder, or that they drove less or lower-quality engagement. [more]There's no strong evidence that performance was significantly worse on questions with longer horizons (<1 year vs ~2 years). [more]I see no clear pattern behind the biggest misses, but I provide plausible post-mortems for some o...]]> Javier Prieto https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:15 no full 6716
67zFQT4GeJdgvdFuk EA - Partial Transcript of Recent Senate Hearing Discussing AI X-Risk by Daniel Eth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Partial Transcript of Recent Senate Hearing Discussing AI X-Risk, published by Daniel Eth on July 27, 2023 on The Effective Altruism Forum.On Tuesday, the US Senate Judiciary Subcommittee on Privacy, Technology and the Law held a hearing on AI. The hearing involved 3 witnesses - Dario Amodei (CEO of Anthropic), Yoshua Bengio (Turing Award winner, and the second-most cited AI researcher in the world), and Stuart Russell (Professor of CS at Berkeley, and co-author of the standard textbook for AI).The hearing wound up focusing a surprising amount on AI X-risk and related topics. I originally planned on jotting down all the quotes related to these topics, thinking it would make for a short post of a handful of quotes, which is something I did for a similar hearing by the same subcommittee 2 months ago. Instead, this hearing focused so much on these topics that I wound up with something that's better described as a partial transcript.All the quotes below are verbatim. Text that is bolded is simply stuff I thought readers might find particularly interesting. If you want to listen to the hearing, you can do so here (it's around 2.5 hours). You might also find it interesting to compare this post to the one from 2 months ago, to see how the discourse has progressed.Opening remarksSenator Blumenthal:What I have heard [from the public after the last AI hearing] again and again and again, and the word that has been used so repeatedly is 'scary.' 'Scary'. What rivets [the public's] attention is the science-fiction image of an intelligence device, out of control, autonomous, self-replicating, potentially creating diseases - pandemic-grade viruses, or other kinds of evils, purposely engineered by people or simply the result of mistakes. And, frankly, the nightmares are reinforced in a way by the testimony that I've read from each of you.I think you have provided objective, fact-based views on what the dangers are, and the risks and potentially even human extinction - an existential threat which has been mentioned by many more than just the three of you, experts who know first hand the potential for harm. But these fears need to be addressed, and I think can be addressed through many of the suggestions that you are making to us and others as well.I've come to the conclusion that we need some kind of regulatory agency, but not just a reactive body. actually investing proactively in research, so that we develop countermeasures against the kind of autonomous, out-of-control scenarios that are potential dangers: an artificial intelligence device that is in effect programmed to resist any turning off, a decision by AI to begin nuclear reaction to a nonexistent attack.The White House certainly has recognized the urgency with a historic meeting of the seven major companies which made eight profoundly significant commitments. but it's only a start. The urgency here demands action.The future is not science fiction or fantasy - it's not even the future, it's here and now. And a number of you have put the timeline at 2 years before we see some of the biological most severe dangers. It may be shorter because the kinds of pace of development is not only stunningly fast, it is also accelerated at a stunning pace, because of the quantity of chips, the speed of chips, the effectiveness of algorithms. It is an inexorable flow of development.Building on our previous hearing, I think there are core standards that we are building bipartisan consensus around. And I welcome hearing from many others on these potential rules:Establishing a licensing regime for companies that are engaged in high-risk AI development;A testing and auditing regimen by objective 3rd parties or by preferably the new entity that we will establish;Imposing legal limits on certain uses related to elections. related to...

]]>
Daniel_Eth https://forum.effectivealtruism.org/posts/67zFQT4GeJdgvdFuk/partial-transcript-of-recent-senate-hearing-discussing-ai-x Thu, 27 Jul 2023 12:46:13 +0000 EA - Partial Transcript of Recent Senate Hearing Discussing AI X-Risk by Daniel Eth Daniel_Eth https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 32:27 no full 6718
2Nnu9ykixiqG2mMit EA - CE: Announcing our February 2024 Charity Ideas. Apply now! by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE: Announcing our February 2024 Charity Ideas. Apply now!, published by CE on July 27, 2023 on The Effective Altruism Forum.Join our February-March 2024 Incubation Program to start nonprofits in Mass Media (Global Health) and Animal Welfare.In this post, we announce our top four charity ideas to launch in February 2024. They are the results of months of work by our research team, who selected them through a seven-stage research process. We pick interventions that exceed ambitious cost-effectiveness bars (e.g., for global health policy, this is 5x top GiveWell evaluated charities), have a high quality of evidence, minimal failure modes, and high expected value.We're seeking people to launch these ideas through our February-March 2024 Incubation Program. No particular previous experience is necessary - if you could plausibly see yourself excited to launch one of these charities, we encourage you to apply. The deadline for applications is September 30, 2024.In the Incubation Program, we provide two months of cost-covered training, stipends, funding up to $200,000, operational support in your first months, a co-working space at our CE office in London, ongoing mentorship, and access to a community of alumni, funders, and experts. Learn more on our refreshed CE Incubation Program page.Disclaimer:To be brief, we have sacrificed nuance, the details of our considerable uncertainties, and the downside risks discussed in the extended reports. Full reports will be published on our website and the EA Forum and announced in our newsletter in the upcoming weeks.Please note that previous incubatees attest to the ideas becoming increasingly exciting over the course of the program.One-Sentence SummariesChildhood vaccination remindersAn organization that sends SMS or voice messages to remind caregivers to attend their child's vaccination appointments.Mass media to prevent violence against womenA non-profit that produces and delivers educational entertainment content focusing on preventing intimate partner violence.Influencing EU fish welfare policy through strategic work in GreeceAn organization focused on improving fish welfare through corporate campaigning and policy work in Greece, aiming to influence animal welfare standards at the EU level.Influencing key stakeholders of the emerging insect industryAn organization that provides information to relevant stakeholders on sustainability, environmental impacts, food safety concerns, and animal welfare issues related to insect farming.One-Paragraph SummariesChildhood vaccination remindersIn 2021, 25 million children under one went unvaccinated. Studies show that mobile messages can effectively remind caregivers to keep up with their child's vaccination schedule, thereby mitigating disease risk and improving overall health outcomes (1, 2, 3). Yet, such beneficial services are rarely implemented on a large scale. Suvita, a non-profit incubated by CE, is delivering this impactful service in India. A new non-profit organization will launch it in the next top-priority country. This new organization will likely coordinate closely with Suvita to expand to numerous priority countries or operate under the same umbrella. This intervention can be expected to help avert one disability-adjusted life year (DALY) for approximately $80, making it a highly cost-effective means to improve global health.Mass media to prevent violence against womenAlmost 500 million women aged 15 to 49 have been subjected to violence from an intimate partner at least once since they turned 15. Countless women continually face threats to their safety within their homes, experiencing physical, sexual, and emotional abuse. Fueled by societal norms and attitudes, this pervasive form of violence remains prevalent in many societies worldwide. Evidence from r...

]]>
CE https://forum.effectivealtruism.org/posts/2Nnu9ykixiqG2mMit/ce-announcing-our-february-2024-charity-ideas-apply-now Thu, 27 Jul 2023 12:05:22 +0000 EA - CE: Announcing our February 2024 Charity Ideas. Apply now! by CE CE https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:15 no full 6717
AJJTRmW7zhvXrmD5s EA - Apply to CEEALAR to do AGI moratorium work by Greg Colbourn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to CEEALAR to do AGI moratorium work, published by Greg Colbourn on July 27, 2023 on The Effective Altruism Forum.Do you have short AI timelines and/or p(doom|AGI) that is far too high for comfort? Do you want to pivot to working on directly addressing the problem and lowering all of our p(doom)s by slowing down or pausing AGI development? Is lack of funding/runway holding you back?This is an invitation for people to apply to CEEALAR for a grant (of free accommodation, food and stipend) to work towards getting a global moratorium on AGI implemented. Such work may take the form of organising public campaigns, such as letter writing, petitions, protests, social media posts, ads etc, drafting of relevant policies or regulatory frameworks (e.g. how to implement caps on training runs), or meta work organising and fundraising for such activities.We've already had one grantee stay who's working in the space, and I (Founder and Executive Director) am very interested in the area, having recently (post-GPT-4) elevated it to a top priority of mine.Active discussion spaces for those working in the area include the AGI Moratorium HQ Slack and the PauseAI Discord. Various projects are being organised within them.Orgs already in the space that may have projects you can get involved with, include: PauseAI, Campaign for AI Safety, Stop AGI, Centre for AI Policy, Stakeout AI, Centre for AI Safety, Future of Life Institute.Given we (CEEALAR) are a (UK) charity, we have to be mindful of not being too overtly political, or ensuring that any political activity is furthering our charitable objects. This means not being partisan by singling out individual political parties or politicians for criticism, or being needlessly provocative. Think public awareness raising, public education and encouragement of civic responsibility over the issue, similar to how many charities focused on climate campaigning operate (e.g. the Climate Coalition).I look forward to your applications and hope that we can hereby accelerate meaningful action toward a global moratorium on AGI.Following charitable object 3, "To advance such other purposes which are exclusively charitable according to the law in England and Wales", your work would need to fit in with the Charitable Purposes under UK law listed here. For practical purposes, preventing human extinction from AGI would come under "saving of lives".For example, any protests being organised should require people to abide by this code of conduct.We reserve the right to withdraw funding to anyone who doesn't work within our charitable objects or who's work may risk damage to our reputation.We are also still very much accepting general applications for any EA-related work/career development.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Greg_Colbourn https://forum.effectivealtruism.org/posts/AJJTRmW7zhvXrmD5s/apply-to-ceealar-to-do-agi-moratorium-work Thu, 27 Jul 2023 10:15:12 +0000 EA - Apply to CEEALAR to do AGI moratorium work by Greg Colbourn Greg_Colbourn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:50 no full 6720
FzoMPHtXzTig8pXuh EA - General support for "General EA" by Arthur Malone Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: General support for "General EA", published by Arthur Malone on July 27, 2023 on The Effective Altruism Forum.TL;DR: When I say "General EA" I am referring to the cluster including the term "effective altruism," the idea of "big-tent EA," as well as branding and support of those ideas. This post is a response in opposition to many calls for renaming EA or backing away from an umbrella movement. I make some strategic recommendations and take something of a deep dive using my own personal history/cause prioritization as a case study for why "General EA" works (longpost is long, so there's TL;DRs for each major section).I'm primarily aiming to see if I'm right that there's a comparatively silent group that supports EA largely as it. If you're in that category and don't need the full rationale and story, the call to action is to add a comment linking your favorite "EA win" (success story/accomplishment you'd like to have people associate with EA).Since long before last fall's reckoning, I've been following discussions in the EA community both for or against the "effective altruism" name, debates about rebranding, and calls to splinter the various worldviews currently covered by the EA umbrella into separate groups. There are too many to link, and since this post is ultimately in opposition to them, I prefer not to elevate any specific post.I'm actually entirely supportive of such discussions. And I think the reevaluation post-FTX and continued questioning during the EA Strategy fortnight is great, precisely because it is EA in action: trying to use reason to figure out how to do the most good means applying that methodology to our movement, its principles and its public image.Unfortunately, I haven't seen a post advocating for a holistic reaffirmation of "EA has already developed and coalesced around a great (if not optimal) movement. We should not only stay the course, but further invest in the EA status quo." Because while status quo bias is a real thing to stay vigilant against, it is also the case that the movement, its name and branding, and the actions it takes in the world, are all the cumulative work of a lot of incredibly intelligent people doing their best to take the right course of action. Don't fix what ain't broke.As I interact with EAs as a community builder (I also lead the team organizing EAGxNYC, applications closing soon!) I have heard people advocating for the strategy/branding changes that are described on the forum. However, I perceive this as a minority compared to those who generally think we should just continue "being EA." It is often the case that those in favor of maintaining a positive status quo do not express their position as vocally as those aiming for a change, so I wrote this post to reflect my own view of why it is preferable to stick with general EA.I aim to be somewhat ambitious and address several longstanding criticisms about EA, and hope to get some engagement from those with different viewpoints. But I also hope that some of the (what I perceive to be) silent majority will chime in and demonstrate that we're here and don't want to see EA splintered, rebranded, or otherwise demoted in favor of some other label."Effective Altruism"TL;DR: There's no word in the English language that accurately covers EA's principles or equally applies to all its community. Every possible name would be a compromise, and "effective altruism" has the benefit of demonstrable success and established momentum. As long as we stick with it as a descriptive moniker rather than asserting it as a prescriptive identifier, it can serve us as well as the names chosen by other movements.Circa 2009-2014, I attempted to write a book with the goal of starting a movement of individuals who try to take their ethics seriously and make positive changes in the w...

]]>
Arthur Malone https://forum.effectivealtruism.org/posts/FzoMPHtXzTig8pXuh/general-support-for-general-ea Thu, 27 Jul 2023 03:36:11 +0000 EA - General support for "General EA" by Arthur Malone Arthur Malone https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 19:45 no full 6719
KaEEDxupWgSwsAoaK EA - USAID Office of the Chief Economist: Evidence Use Team Lead by Dean Karlan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: USAID Office of the Chief Economist: Evidence Use Team Lead, published by Dean Karlan on July 28, 2023 on The Effective Altruism Forum.I'm pleased to share an exciting job opening to lead the new Office of the Chief Economist's "Evidence Use" team. Evidence here mainly refers to evidence of cost-effectiveness. The closing date is Aug 7th.The link for the general public to apply is here.The link for candidates meeting specific requirements (e.g., federal employee in the competitive service, veterans, persons with disability) (described in "This job is open to") is here.Please share with anyone you think may be interested!Thanks!Dean KarlanThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Dean Karlan https://forum.effectivealtruism.org/posts/KaEEDxupWgSwsAoaK/usaid-office-of-the-chief-economist-evidence-use-team-lead Fri, 28 Jul 2023 14:48:09 +0000 EA - USAID Office of the Chief Economist: Evidence Use Team Lead by Dean Karlan Dean Karlan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:52 no full 6725
NJGzmY5BNRw7fSg3z EA - EA Survey 2022: Community Satisfaction, Retention, and Mental Health by WillemSleegers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Survey 2022: Community Satisfaction, Retention, and Mental Health, published by WillemSleegers on July 28, 2023 on The Effective Altruism Forum.SummaryCommunity satisfaction remains high, though is slightly lower post-FTX.More recent cohorts report being more satisfied than earlier cohorts.Respondents identifying as a man were, on average, more satisfied than other respondents.We observed no significant differences in satisfaction based on racial identity.Highly engaged respondents were, on average, more satisfied than less engaged respondents.Prioritizing longtermist causes relative to neartermist causes was associated with higher satisfaction with the communityA large majority (84.1%) of respondents report they are likely to still be involved in EA in three years' time, while only a small minority (5.1%) thought it unlikely they would still be involved.Respondents who identified as a man reported being slightly more likely to remain in the community.We observed no significant differences in retention based on racial identity.More engaged respondents reported being significantly more likely to remain in EA.Higher endorsement of longtermist causes was associated with being more likely to remain in EA, while higher endorsement of neartermist causes was associated with being less likely to remain in EA.More respondents reported that their experiences with EA had improved their mental health (36.8%) than said that it had worsened (16.4%), while 36.0% said that it had stayed the same.Men were somewhat less likely to say that their mental health had greatly decreased.Respondents not identifying as white were more likely to report that their mental health had greatly increased.Respondents higher in EA engagement were more likely to report both that their mental health had increased or that it had decreased as a result of EA, whereas those lower in engagement were much more likely to report that it had stayed the same.IntroductionIn this post, we report on a number of questions about people's satisfaction with the EA community, their likelihood of remaining in EA, and EA's impact on their mental health. Note that we significantly shortened the EA Survey this year, meaning there are fewer community-related questions than in the previous EA Survey.SatisfactionRespondents indicated being generally satisfied with the EA community, scoring an average of 7.16 (SD = 1.66) on a 10-point scale. The median satisfaction response was a 7.As we noted in our recent post on community responses to the FTX crisis, community satisfaction seems to have declined following the FTX crisis. In 2020, respondents reported an average satisfaction of 7.26 (SD = 1.72). In 2022, the average of all respondents was 7.16 (SD = 1.66) and 6.95 (SD = 1.72) for respondents who completed the survey after the FTX crisis. It's also possible that much less satisfied respondents disengaged from the EA community and did not take the survey. Had they participated, the average satisfaction post-FTX could be lower than what we report here.CohortRespondents who recently joined the EA community seem to be more satisfied with the community than earlier cohorts. This pattern is less clear for the earliest cohorts, possibly because the sample size for these groups is relatively small compared to newer cohorts.GenderRespondents identifying as a man were, on average, more satisfied with the EA community (M = 7.31, SD = 1.58) than respondents not identifying as a man (M = 6.93, SD = 1.72)-a difference of 0.38.Racial identityWe did not observe a significant difference in EA community satisfaction between respondents identifying as white (M = 7.19, SD = 1.61) compared to non-white (M = 7.24, SD = 1.66).EngagementNot unexpectedly, more engaged respondents reported being more satisfied with the EA ...

]]>
WillemSleegers https://forum.effectivealtruism.org/posts/NJGzmY5BNRw7fSg3z/ea-survey-2022-community-satisfaction-retention-and-mental Fri, 28 Jul 2023 14:06:26 +0000 EA - EA Survey 2022: Community Satisfaction, Retention, and Mental Health by WillemSleegers WillemSleegers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:13 no full 6724
cypLFJNbsngcDgJqm EA - The harm cascade: why helping others is so hard by Stijn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The harm cascade: why helping others is so hard, published by Stijn on July 29, 2023 on The Effective Altruism Forum.Altruism, helping others, is so difficult. Suppose you save a child's live, and many years later that child becomes a new Hitler, killing thousands of people. Of course this is very unlikely, because there are not many Hitlers in the world. And the child could also become a scientist who invents a new vaccine that saves thousands of people. So in expectation, saving a child is good. But, it's complicated, because of.The harm cascadeWhat does it mean to help others? Simply put, it means choosing something that is desired, or the consequences of which are desired, by the individuals that are helped. The individuals have positive evaluations of the choice or the consequences of the choice. This means the individuals must have personal experiences and personal desires or preferences. In this sense, the others are persons or sentient beings (including almost all humans and many non-human animals). We cannot help non-sentient objects or things that have no personal experiences and desires. The opposite of helping is harming: doing something such that individuals have negative evaluations of the consequences.Now here is the real problem: almost everyone that we will help, is very likely to harm others as a consequence of the provided help. The more we help sentient beings, the more harm they will cause to others. If you help one individual, that individual is likely to cause harm to more than one other individual. Not helping that one individual will result in less harm caused to other individuals. So that means those other individuals are helped. But to make it more complex, when they are helped, those other individuals will cause harm to even more other individuals.Suppose each saved individual will - as a consequence of the help - kill two other individuals. Or vice versa: killing (harming) one individual will save (help) two other individuals. So if you help individual A, she will harm individuals B and C, such that these two individuals will no longer harm their victims, namely individuals D, E, F and G. The latter individuals are helped and will as a consequence harm individuals H, I, J, K, L, M, N and O, who will no longer harm individuals P, Q,. And so forth. This is the harm cascade.As a concrete example, consider saving a human child's life. That child will not become a new Hitler, but is most likely a meat eater: she will eat more than one animal during her life. And if you help a poor person raising his income, that person is likely to spend some of that extra income on meat, thereby increasing animal farming that causes harm to even more animals. This is the poor meat-eater problem. What if the humans that you helped become vegan and stop eating animal products? That will reduce animal farming and the use of agricultural land. Those vegans may have a preference for turning that unused farm land back into natural habitat. That will increase the population of wild animals. Those wild animals may be better-off than farmed animals, but many of those wild animals will cause harm to other wild animals.In nature we see again a harm cascade. One top-predator kills more than one meso-predators. Those harmed meso-predators can no longer kill their many prey. So due to the top-predator harming many animals, many other prey animals are saved. Those saved animals can cause harm to even other animals. Especially in aquatic ecosystems we see long food chains, where a top predator is like a serial killer who kills many other serial killers who kill even more serial killers, and so on.If animals are not killed by predators, they might increase in population and hence increase competition for food with other animals. They might use violence to attain food ...

]]>
Stijn https://forum.effectivealtruism.org/posts/cypLFJNbsngcDgJqm/the-harm-cascade-why-helping-others-is-so-hard Sat, 29 Jul 2023 21:06:55 +0000 EA - The harm cascade: why helping others is so hard by Stijn Stijn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:34 no full 6742
MN34Pd6gCeHPgnMwH EA - Visit Mexico City in January & February to interact with the AI Futures Fellowship by AmAristizabal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Visit Mexico City in January & February to interact with the AI Futures Fellowship, published by AmAristizabal on July 29, 2023 on The Effective Altruism Forum.TLDRApply via this form if you would like to cowork with other AI researchers in Mexico City in January and/or February 2024 (we will check applications on a rolling basis).See also our general post announcing the AI Futures Fellowship.AI Futures FellowshipWe are planning to open a coworking space in Mexico City in January and February 2024. This office space will be occupied by the fellows and staff from the ITAM AI Futures Fellowship and an engaging intellectual community of visitors and collaborators, and you could be one of them!VisitorsThe fellowship will take place in an office space located in La Condesa, a very green and calm area of Mexico City. The office faces Amsterdam street and Parque Mexico, which offer a fantastic mix of nature, cafes and restaurants.We are offering extra office spots for the duration of the program. This includes:Access to our coworking space for up to two months. Shorter visits are welcome, but longer visits are preferred.The default office policy is Hot-Desking. We might be able to accommodate more specific requests for individuals or organizations depending on availability, so let us know if there is a particular arrangement that would make you or your organization more likely to visit (e.g. "a private office for 5 people for x organization").We will provide lunch and some snacks throughout that time (Monday through Friday).The exact number of spots offered will vary depending on space availability and demand. We are also exploring the possibility of receiving more visitors who are willing to pay for their memberships.We are interested in hosting people working on technical AI safety research and AI governance. We will prioritize those who can have the most fruitful interactions with our fellows or those who would benefit the most from interacting with our community.If your organization would like to use the office space for a team retreat, do let us know as soon as possible.By default we won't cover accommodation or travel costs, but please let us know if this would significantly affect your chances of coming. We might be able to explore some options with you or point you to possible alternatives.Consider also joining us as a mentorThe form also gives you the option of applying as a mentor. This might involve the following responsibilities:Help a fellow define a project and support them throughout its execution (This may be a project you've been interested in doing but haven't found the time to work on!)Availability to commit ~1 hour weekly for 8 weeks to support the fellow (this can include reviewing drafts and having regular calls).Assist the fellow in expanding their professional network, identifying additional resources, and valuable opportunities.Offer career guidance and motivation whenever needed. However, our staff will meet regularly with mentees to follow up on their time management, accountability, wellbeing and intrinsic motivation. You are expected to provide more object-level guidance.There is a compensation of $1000 USD for the valuable time and effort invested by mentors.If you want to join us as a visitor or mentor, fill out this brief form. If you're not sure yet but you're interested in exploring ways of interacting with our fellowship, also let us know!Feel free to contact us with any further questions you may have: info@aifuturesitam.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
AmAristizabal https://forum.effectivealtruism.org/posts/MN34Pd6gCeHPgnMwH/visit-mexico-city-in-january-and-february-to-interact-with Sat, 29 Jul 2023 20:06:57 +0000 EA - Visit Mexico City in January & February to interact with the AI Futures Fellowship by AmAristizabal AmAristizabal https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:24 no full 6740
QahDdRzpPuat8ujx2 EA - Why isn't there a charity evaluator for longtermist projects? by BrownHairedEevee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why isn't there a charity evaluator for longtermist projects?, published by BrownHairedEevee on July 29, 2023 on The Effective Altruism Forum.i.e., Why isn't there an org analogous to GiveWell and Animal Charity Evaluators that evaluates and recommends charities according to how much impact they can have on the long-term future, e.g. by reducing existential risk? As opposed to only making grants directly and saying "just trust us" like the EA Funds.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
BrownHairedEevee https://forum.effectivealtruism.org/posts/QahDdRzpPuat8ujx2/why-isn-t-there-a-charity-evaluator-for-longtermist-projects Sat, 29 Jul 2023 17:46:12 +0000 EA - Why isn't there a charity evaluator for longtermist projects? by BrownHairedEevee BrownHairedEevee https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:39 no full 6739
FvgQjicdSk6S7xQvC EA - Announcing the ITAM AI Futures Fellowship by AmAristizabal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the ITAM AI Futures Fellowship, published by AmAristizabal on July 29, 2023 on The Effective Altruism Forum.We are thrilled to introduce the AI Futures Fellowship, an eight-week program in Mexico City (January and February 2024) designed to support exceptional individuals in understanding and mitigating catastrophic and existential risks from advanced AI.TLDRApply now to become one of our fellows (deadline: end of August 24, AOE)!Share this opportunity with potential candidates from your networks.Apply via this form if you would like to mentor one or more of our fellows.See this post and fill out this form if you would like to visit Mexico City at some point or throughout January and February to interact with the AI Futures Fellowship and other AI researchers.About the ProgramFor eight weeks, fellows will join the Instituto Tecnológico Autónomo de México (ITAM) in Mexico City and have the opportunity to interact with a rich intellectual community of AI researchers visiting Mexico City in January and February 2024.Fellows will pursue a project agreed upon with a mentor at the beginning of the program. We generally expect fellows to produce a research report on a specific problem in various AI subfields, but we are open to different outputs. For example, fellows may also focus on learning more about a particular topic in AI, such as interpretability research, global race dynamics, or compute governance, and summarize their findings. Fellows may also collaborate with other participants or experts of the program.There will be weekly meetings, seminars, and Q&As with leading experts in the field.Mexico City OfficeThe fellowship will take place in an office space located in La Condesa, a very green and calm area of Mexico City. The office faces Amsterdam street and Parque Mexico, which offer a fantastic mix of nature, cafes and restaurants.Who Should Apply?We are looking for early-career individuals and students from all over the world. We expect most participants to be at the late undergraduate, Master, Ph.D., and postdoctoral levels, but other exceptional candidates are also welcome. While we expect candidates to apply their research skills and knowledge to issues of advanced AI, they are not required to have previous technical experience or expertise in machine learning or AI.For Potential MentorsWe are actively seeking mentors for our program, so fill out this form if you think this could be you! The role will be tailored to the fellows' needs, but it might involve the following responsibilities:Help a fellow define a project and support them throughout its execution (This may be a project you've been interested in doing but haven't found the time to work on!)Availability to commit ~1 hour weekly for 8 weeks to support the fellow (this can include reviewing drafts and having regular calls).Assist the fellow in expanding their professional network, identifying additional resources, and valuable opportunities.Offer career guidance and motivation whenever needed. However, our staff will meet regularly with mentees to follow up on their time management, accountability, wellbeing and intrinsic motivation. You are primarily expected to provide object-level guidance.There is a compensation of $1000 USD for the valuable time and effort invested by mentors.Mentors can be remote: you don't need to come to Mexico to be eligible. That said, we want to encourage in person collaboration and synergies, so if you are interested in spending some time in Mexico City during the winter, consider applying as a mentor interested in visiting Mexico in the form.For Potential VisitorsIf you want to visit Mexico City in January & February to interact with the AI Futures Fellowship, see this post and fill out our form as a visitor.Feel free to email any questi...

]]>
AmAristizabal https://forum.effectivealtruism.org/posts/FvgQjicdSk6S7xQvC/announcing-the-itam-ai-futures-fellowship Sat, 29 Jul 2023 13:48:16 +0000 EA - Announcing the ITAM AI Futures Fellowship by AmAristizabal AmAristizabal https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:01 no full 6738
oPMH9e6F7r3fXHd8M EA - Are there diseconomies of scale in the reputation of communities? by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are there diseconomies of scale in the reputation of communities?, published by Lizka on July 29, 2023 on The Effective Altruism Forum.Summary: in what we think is a mostly reasonable model, the amount of impact a group has increases as the group gets larger, but so do the risks of reputational harm. Unless we believe that, as a group grows, the likelihood of scandals grows slowly (at most as quickly as a logarithmic function), this model implies that groups have an optimal size beyond which further growth is actively counterproductive - although this size is highly sensitive to uncertain parameters. Our best guesses for the model's parameters suggest that it's unlikely that EA has hit or passed this optimal size, so we reject this argument for limiting EA's growth. (And our prior, setting the model aside, is that growth for EA continues to be good.)You can play with the model (insert parameters that you think are reasonable) here.Epistemic status: reasonable-seeming but highly simplified model built by non-professionals. We expect that there are errors and missed considerations, and would be excited for comments pointing these out.Overview of the modelAny group engaged in social change is likely to face reputational issues from wrongdoing by members, even if it avoids actively promoting harmful practices, simply because its members will commit wrongdoing at rates in the ballpark of the broader population.Wrongdoing becomes a scandal for the group if the wrongdoing becomes prominently known by people inside and outside the group, for instance if it's covered in the news (this is more likely if the person committing the wrongdoing is prominent themselves).Let's pretend that "scandals" are all alike (and that this is the primary way by which a group accrues reputational harm).Reputational harm from scandals diminishes the group's overall effectiveness (via things like it being harder to raise money).Conclusion of the model: If the reputational harm accrued by the group grows more quickly than the benefits (impact not accounting for reputational harm), then at some point, growth of the group would be counterproductive. If that's the case, the exact point past which growth is counterproductive would depend on things like how likely and how harmful scandals are, and how big coordination benefits are.To understand whether a point like this exists, we should compare the rates at which reputational harm and impact grow with the size of the group. Both might grow greater than linearly.Reputational harm accrued by the group in a given period of time might grow greater than linearly with the size of the group, because:The total reputational harm done by each scandal probably grows with the size of the group (because more people are harmed).The number of scandals per year probably grows roughly linearly with the size of the group, because there are simply more people who each might do something wrong.These things add up to greater-than-linear growth in expected reputational damage per year as the number of people involved grows.The impact accomplished by the group (not accounting for reputational damage) might also grow greater than linearly with the size of the group (because more people are doing what the group thinks is impactful, and because something like network effects might help larger groups more).Implications for EAIf costs grow more quickly than benefits, then at some point, EA should stop growing (or should shrink); additional people in the community will decrease EA's positive impact.The answer to the question "when should EA stop growing?" is very sensitive to parameters in the model; you get pretty different answers based on plausible parameters (even if you buy the setup of the model).However, it seems hard to choose parameters that imply that ...

]]>
Lizka https://forum.effectivealtruism.org/posts/oPMH9e6F7r3fXHd8M/are-there-diseconomies-of-scale-in-the-reputation-of Sat, 29 Jul 2023 02:48:51 +0000 EA - Are there diseconomies of scale in the reputation of communities? by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 29:19 no full 6731
owM2obTsNiZHK2nu9 EA - About 'subjective' wellbeing and cost-effectiveness analysis in mental health by LondonGal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: About 'subjective' wellbeing and cost-effectiveness analysis in mental health, published by LondonGal on July 30, 2023 on The Effective Altruism Forum.Hello everyone,I was first 'sucked in' to this forum when I was directed to a post I might find interesting - it was about a research organisation with EA endorsement that was straying into my area of work, mental health. I'm a UK doctor specialising in psychiatry, with some research experience. To be honest, I was baffled and a little frustrated by how far this organisation strayed from what I would expect from mental health research - hence the (perhaps overly) technical diatribe I launched into on a website I hadn't visited before, about an organisation I hadn't heard of prior.However, that's not usually my style, and once I took a step back from my knee-jerk reaction, I wanted to understand how people with the same goals could arrive at completely different conclusions. It's led me to do a lot of reading, and I wanted to see if I could try on a makeshift 'EA' hat, with most of my philosophy knowledge gained from The Good Place, no economics experience, and see where it went.What I wanted to understand:Where has the interest in 'wellbeing' arisen from, and what does it mean?What are 'subjective wellbeing' (SWB) measures, and are they useful?Are we at a point of putting monetary value on SWB (e.g. like QALYs) for the sake of cost-effectiveness analysis (CEA)?When people are in this space talking about mental health, are we talking the same language?Why are RCTs the 'best' evidence for subjective wellbeing?What would I come up with from my perspective of working within mental health for a way of comparing different interventions based on their intended effects on wellbeing? a. Spillover effects b. Catastrophic multipliersHow does my guess stack up against existing research into wellbeing?How could my framework be helpful in practice?What would I be suggesting as research areas for maximal gains in wellbeing from my biased perspective?I'm aware this might be well-trodden ground in EA, which would make me embarrassingly late to the party, and consequently a complete bore. To lay my cards firmly on the table, I did approach these questions from the perspective that mental health is desperately underfunded, I spend a lot of time with patients who are severely affected by mental illness and therefore I'm biased towards seeing 'wellbeing' as an opportunity to rebalance this scale and acknowledge the impact mental illnesses have on people. I also feel the term 'mental health' is used in a way which is often confusing and occasionally unhelpful or stigmatising.This is not meant as an attempt to further an argument against any person or organisation; it will also not be high in tech-speak as this was the first lesson I learnt very quickly on my journey - while jargon is a useful shorthand for talking with people in the same field, as an outsider it is exhausting. This post does not reflect the attitudes or opinions of anyone but me - this is my personal quest for common ground and understanding, not a representation of 'UK psychiatry' - I'm speaking in an entirely personal capacity and, accordingly, I'm assuming I've gotten a lot of it completely wrong.To make this less self-indulgent, I've arranged this post to follow that question-and-answer format. For the sake of transparency, this was how this work came to be: I started with a long piece of writing about my concerns with assumptions made about mental health interventions in low- or middle-income country (LMIC) settings. I then did a quick Google on the WELLBY and wrote a lot about the idea of asking people to rate their 'satisfaction with life' on a scale from 0-10 which was essentially just entirely critical. I subsequently wrote out my concept of wellbei...

]]>
LondonGal https://forum.effectivealtruism.org/posts/owM2obTsNiZHK2nu9/about-subjective-wellbeing-and-cost-effectiveness-analysis Sun, 30 Jul 2023 16:29:00 +0000 EA - About 'subjective' wellbeing and cost-effectiveness analysis in mental health by LondonGal LondonGal https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:05:47 no full 6744
Bjr6FXvnKqb37uMPP EA - Shutting down AI Safety Support by JJ Hepburn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shutting down AI Safety Support, published by JJ Hepburn on July 30, 2023 on The Effective Altruism Forum.This is a quick post to officially announce that AI Safety Support is on an indefinite pause. We have been on pause for a while now, but I wanted to make it official. I might have more to say on this later but expect that if I wait till I write a longer explanation, it will not happen.I'm calling it a pause because maybe AI Safety Support does reopen in some capacity in the future, possibly with a different person running it.A lack of funding is a part of the decision but is not the only factor. That is to say that I'm not looking for offers of funding.AI Safety Support Ltd, the Australian-based charity, supports several other projects and will continue to do so. None of these projects should be affected by this.If you have any questions, you can ask them in the comments here, but I may not respond to them.It has been a wild ride these last few years. I have so much love and appreciation for everyone I have worked with.Thank you all so much.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
JJ Hepburn https://forum.effectivealtruism.org/posts/Bjr6FXvnKqb37uMPP/shutting-down-ai-safety-support Sun, 30 Jul 2023 07:31:21 +0000 EA - Shutting down AI Safety Support by JJ Hepburn JJ Hepburn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:11 no full 6743
u5serAwnrrFSJonoF EA - 4 things GiveDirectly got right and wrong sending cash to flood survivors by GiveDirectly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 4 things GiveDirectly got right and wrong sending cash to flood survivors, published by GiveDirectly on July 31, 2023 on The Effective Altruism Forum.For Hassana in Kogi, Nigeria, October's floods were not like years past. "All our farmlands washed away as many had not yet harvested what they planted. The flooding continued until our homes and other things were destroyed. At this point we were running helter-skelter," she said.These floods, the worst in a decade, result from predictable seasonal rains. If we can anticipate floods, we can also anticipate the action needed to help. So why does aid often take months (even up to a year) to reach people like Hassana? Traditional humanitarian processes can be slow and cumbersome, government and aid agencies often lack the capacity and money to respond, and most aid is delivered in person, an added challenge when infrastructure is damaged.Digital cash transfers can avoid these issues, and getting them to work in a disaster setting means more people will survive climate change. In the past year, with support from Google.org, GiveDirectly ran pilots to send cash remotely to flood survivors: in Nigeria, we sent funds to survivors weeks after flooding and in Mozambique, we sent funds days before predicted floods. Below, we outline what worked, what didn't, and how you can help for next time.Over 1.5B people in low and middle income countries are threatened by extreme floods. Evidence shows giving them unconditional cash during a crisis lets them meet their immediate needs and rebuild their lives. However, operating in countries with limited infrastructure during severe weather events is complicated, so we ran two pilots to test and learn (see Appendix):What went right and what went wrongInnovating in the face of climate change requires a 'no regrets' strategy, accepting a degree of uncertainty in order to act early to prevent suffering. In that spirit, we're laying out what worked and did not:✅ Designing with community input meant our program worked betterA cash program only works if recipients can easily access the money. In Nigeria, we customized our program design based on dozens of community member interviews:Use the local dialect: There are 500+ dialects spoken in Nigeria, and our interviews determined a relatively uncommon one, Egbura Koto, was most widely used in the villages we were targeting. We hired field staff who spoke Egbura Koto, which made the program easier to access and more credible to community members, with one saying, "I didn't believe the program at first when my husband told me but when I got a call from GiveDirectly and someone spoke in my language, I started believing."Promote mobile money: Only 10% of Nigerians have a mobile money account (compared to 90% of Kenya), so we planned to text recipients instructions to create one and provide a hotline for assistance. But would they struggle to set up the new technology? Our interviews found most households had at least one technologically savvy member, and younger residents often helped their older or less literate neighbors read texts, so we proceeded with our design. In the end, 94% of surveyed recipients found the mobile money cash out process "easy."Send cash promptly: Cash is most useful where markets are functioning, so should we delay sending payments until floods recede if it means more shops will be reopened? In our interviews, residents explained the nearby Lokoja and Koton-Karfe markets functioned throughout flooding and could be reached in 10 minutes by boat. We decided not to design in a delay and found the nearby markets were, in fact, open during peak flooding.❌ We didn't send payments before severe floodsIn Mozambique, we attempted to pay people days ahead of severe floods based on data from Google Research's Flood Forecasting ...

]]>
GiveDirectly https://forum.effectivealtruism.org/posts/u5serAwnrrFSJonoF/4-things-givedirectly-got-right-and-wrong-sending-cash-to Mon, 31 Jul 2023 21:17:29 +0000 EA - 4 things GiveDirectly got right and wrong sending cash to flood survivors by GiveDirectly GiveDirectly https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:04 no full 6753
z8ZWwm4xeHBAiLZ6d EA - Thoughts on far-UVC after working in the field for 8 months by Max Görlitz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on far-UVC after working in the field for 8 months, published by Max Görlitz on July 31, 2023 on The Effective Altruism Forum.Views expressed in this article are my own and do not necessarily reflect those of my employer SecureBio.SummaryFar-UVC has great promise, but a lot of work still needs to be doneThere still are many important open research questions that need to be answered before the technology can become widely adoptedRight now, a key priority is to grow the research field and improve coordinationThe main reason far-UVC is so promising is that widespread installation could passively suppress future pandemics before we even learn that an outbreak has occurredHigher doses mean more rapid inactivation of airborne pathogens but also more risk for harm to skin, eyes, and through indoor air chemistry. Therefore, the important question in safety is, "How high can far-UVC doses go while maintaining a reasonable risk profile?"Existing evidence for skin safety within current exposure guidelines seems pretty robust, and I expect that skin safety won't be the bottleneck for far-UVC deployment at higher doses.Current evidence around eye safety is much more sparse than for skin safety. Eye safety seems like it could be the bottleneck to what doses of far-UVC can be reasonably used.Undoubtedly, far-UVC has a substantial impact on indoor air chemistry by producing ozone, which oxidizes volatile organic compounds in the air that can result in harmful products such as particulate matter.Little research has been done on methods to mitigate this issue.This might turn out to be a bottleneck to what doses of far-UVC can be reasonably used, but I am really uncertain here.There is no doubt that far-UVC can dramatically reduce the amount of airborne pathogens within a room (inactivation of ~98% of aerosolized bacteria within 5 minutes). Crucially, we don't know how well this translates into an actual reduction in the total number of infections.Very few people have thought about how the adoption of far-UVC could be driven and what a widespread deployment of the technology could look likeSo far, there is little to no regulation of far-UVC.In the US, (potential) regulation of far-UVC seems quite messy, as no authority has clear jurisdiction over it.IntroductionFar-UVC (200-235 nm) has received quite a bit of attention in EA-adjacent biosecurity circles as a technology to reduce indoor airborne disease spread and is often discussed in the context of indoor air quality (IAQ). Notably, Will MacAskill mentioned it often throughout various media appearances in 2022.I have been working on research around far-UVC for the past 8 months. More specifically, we wrote an extensive literature review on skin and eye safety (submitted & soon™ to be published as an academic paper). We also coordinated with many researchers in the field to lay out a plan for the studies that still need to be done to get a more comprehensive understanding of the technology's safety & efficacy.Although far-UVC has been discussed on the forum, the existing information is relatively shallow, and most in-depth knowledge is either buried in technical research papers or not publicly available since a lot of intricacies are mostly discussed informally within the research community.In this post, I will first offer high-level thoughts and then go over different categories of information around far-UVC (safety, efficacy, indoor air chemistry, adoption, and regulation) to provide my current perspectives & takes. Please note that I am much more familiar with safety aspects than with the other categories. Also, this is not a general overview of far-UVC, what it is, and how it works. For a relatively recent and comprehensive introduction, I recommend "Far UV-C radiation: An emerging tool for pandemic co...

]]>
Max Görlitz https://forum.effectivealtruism.org/posts/z8ZWwm4xeHBAiLZ6d/thoughts-on-far-uvc-after-working-in-the-field-for-8-months Mon, 31 Jul 2023 15:45:11 +0000 EA - Thoughts on far-UVC after working in the field for 8 months by Max Görlitz Max Görlitz https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 20:44 no full 6751
SZ6reaQ2HkABAcY2N EA - [JOB] Opportunity to found Charity Entrepreneurship NGO (outside of the incubation program): Tobacco taxation advocacy by Yelnats T.J. Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [JOB] Opportunity to found Charity Entrepreneurship NGO (outside of the incubation program): Tobacco taxation advocacy, published by Yelnats T.J. on August 1, 2023 on The Effective Altruism Forum.Overview of job postingHighlightsWhy found a NGO/charityWhy tobacco taxationThe roleWho your future co-founder isThe selection processHow to applyFAQsRecommended readingsCharity Entrepreneurship is often called the Y combinator of the charity/NGO space and its alumni are compared to the PayPal Mafia (but for the charity sector).Note: this is not an application for the upcoming CE incubation program. You can apply for that here.Early application deadline: August 6th 23:59 UTCApplication deadline: August 13th 23:59 UTCAfter the deadline, applications will be taken on a rolling basis.The process is going to be fast and most applicants that go through the whole process will do so in about 3.5 weeks. Times will be quicker for applicants that submit earlier at each stage. Turnaround times will typically be a matter of days.HighlightsThe opportunityThis is a CE-researched intervention with a CE incubatee - J.T. Stanley - that will be considered for seed funding (via the CE seed funder network) in mid-September.(The CE incubatee went through the winter 2023 cohort. His original match for co-founder from the cohort ended up pursuing another career opportunity with comparable impact that arose near the end of the program.)The impactTobacco is 13x Malaria in deaths (WHO, 2023).Whereas malaria, HIV, and neonatal deaths are all decreasing year over year, deaths from tobacco are increasing. Tobacco is currently on track to kill a billion people in the 21st century (WHO, 2021).Here is a link to a section that talks about the financial and economic impacts.Taxation is both the most effective and most neglected form of tobacco control (WHO; NIH, 2016; World Bank, 2017).Multiple EA organizations have ran analysis that found tobacco taxation advocacy to be "an extremely cost-effective intervention."CE's cost-effectiveness analysis put the expected value between 39 and 51 USD per DALY averted.[1]Open Philanthropy's back-of-the-envelope calculation found 30 USD per DALY averted in expectation.[2]Giving What We Can's (GWWC) report suggested that cost per life saved could go as low as 800 USD according to one scientific study.[3]In other words, a successful tobacco taxation charity would immediately have an impact with cost-effectiveness at the upper echelons (think Against Malaria Foundation for GiveWell and Lead Exposure Elimination Project for CE).Why you should applyThis is your chance to be a founder of an impactful organization.Don't count yourself out. If in doubt apply. Roughly half of the successful applicants/founders that are accepted into the CE incubation program did not think they would qualify. Don't preempt yourself from becoming a founder; let the process play out.If you've applied to CE in the past, this is the opportunity for you.Why found an NGO/charityProsYou are not just a cog in the machine; you have ownership of a venture/organization that could deliver massive impact. The health of hundreds of thousands of tobacco users depends on you and your co-founder's fateful actions.One of the best expected values for impact - This intervention has the potential to avert a ridiculously high amount of deaths and DALYs.Upward career mobility and increased impact later in your career. Running an impactful organization will give you critical skills, experience, and career capital that will better position you for impact later in your career.ConsA much more serious commitment than your typical job.Founder salaries of CE charities are very modest compared to other parts of the EA ecosystem and the first-year salary is lean.Founding an organization...

]]>
Yelnats T.J. https://forum.effectivealtruism.org/posts/SZ6reaQ2HkABAcY2N/job-opportunity-to-found-charity-entrepreneurship-ngo Tue, 01 Aug 2023 08:12:09 +0000 EA - [JOB] Opportunity to found Charity Entrepreneurship NGO (outside of the incubation program): Tobacco taxation advocacy by Yelnats T.J. Yelnats T.J. https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:20 no full 6762
yKWGkcuke577ReBPW EA - You Can Also Help Animals By Earning (More) in Other Career Paths and Donating by Animal Advocacy Careers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You Can Also Help Animals By Earning (More) in Other Career Paths and Donating, published by Animal Advocacy Careers on August 1, 2023 on The Effective Altruism Forum."Animal Advocacy Careers" do not need to be only careers in animal advocacy organisationsThis is the first piece of AAC's new "Impactful Animal Advocacy Career Paths" series (more posts coming soon).Animal advocacy organisations play a vital role in creating a compassionate world for animals, but they face significant challenges in securing adequate funding. Almost all organisations need financial support in order to sustain and expand their programs. Many of these organisations are funding constrained.Providing these resources creates an essential, indirect yet major impact for animals through empowering the organisations helping them.Why Donations For Animals Are CrucialAlmost every advocacy organisation would like to:hire more staff who would work on important tasks,create more advertisements that would raise awareness about animal rights,start more programs to grow and strengthen the movement.None of these things are free.It is simply unrealistic and unfair to expect individuals to selflessly work for animals without any compensation. It is also hard to attract skilled, and competent individuals to work on animal advocacy without decent salaries.There are 3 main problems limiting our progress in putting an end to animal suffering:Lack of funding in nonprofitsAnimal advocacy organisations listed a lack of funding as the single most important thing that limited their organisations impact.For these reasons, organisations require extensive and stable funding.Unless animal nonprofits are adequately resourced, they cannot create meaningful impact for animals. While some organisations also earn money by selling merchandise or event tickets, they simply cannot function like a for profit company.Nonprofits primarily depend on donations to cover their expenses.Uneven distribution of fundingThe problem is not just the limited amount of money in the animal advocacy space, it is also about how the funds are shared and distributed.Farmed and wild animal welfare organisations receive significantly fewer donations compared to organisations working on companion animals, animals used in labs, and captive animals. Yet, the number and suffering of farmed and wild animals are much higher than these other groups.Farmed and wild animal advocacy is relatively more neglected than other animal advocacy fields, in terms of both public attention and funding.While "for every one dog or cat euthanized in a shelter, 3,400 farmed land animals are confined and slaughtered", farmed and wild animal organisations receive about %1 of total donations to animal charities (which also include shelters, organisations that focus on companion animals, etc.).Graphic by Animal Charity EvaluatorsStruggling Against Powerful InstitutionsFinally, the institutions that these organisations are struggling against are extremely powerful. The annual donations to farmed and wild animal advocacy organisations is approximately 200 million dollars.Animal agriculture industry, on the other hand, is a billions of dollars worth industry where tens of thousands people work each day.The animal advocacy movement, farmed and wild animal advocacy in particular is miniscule in terms of financial power compared to the institutions it is positioned against.When animal welfare organisations stage campaigns against corporations like McDonalds or Kroeger in order to achieve institutional welfare reforms, they struggle against organisations that have hundreds, even thousands times more financial resources than they do.Is Alt Protein the Solution? Exploring Ambitions and ChallengesThe alternative protein sector has much more investment and mar...

]]>
Animal Advocacy Careers https://forum.effectivealtruism.org/posts/yKWGkcuke577ReBPW/you-can-also-help-animals-by-earning-more-in-other-career Tue, 01 Aug 2023 05:07:58 +0000 EA - You Can Also Help Animals By Earning (More) in Other Career Paths and Donating by Animal Advocacy Careers Animal Advocacy Careers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 25:16 no full 6761
DPfGxeWFLQaWEgBTj EA - How many people are neartermist and have high P(doom)? by Sanjay Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many people are neartermist and have high P(doom)?, published by Sanjay on August 2, 2023 on The Effective Altruism Forum.As far as I can tell, there seems to be a strong tendency for those who are worried about AI risk to also be longtermists.However many elements of the philosophical case for longtermism are independent of contingent facts about what is going to happen with AI in the coming decades.If we have good community epistemic health, we should expect there to be people who object to longtermism on grounds like:person-affecting viewssupporting a non-zero pure discount ratebut who still are just as worried about AI as those with P(doom) > 90%.Indeed, the proportion of "doomers" with those philosophical objections to longtermism should be just as high as the rate of such philosophical objections among those typically considered neartermist.I'm interested in answers either of the form:"hello, I'm both neartermist and have high P(doom) from AI risk..."; or"here's some relevant data, from, say, the EA survey, or whatever"Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Sanjay https://forum.effectivealtruism.org/posts/DPfGxeWFLQaWEgBTj/how-many-people-are-neartermist-and-have-high-p-doom 90%.Indeed, the proportion of "doomers" with those philosophical objections to longtermism should be just as high as the rate of such philosophical objections among those typically considered neartermist.I'm interested in answers either of the form:"hello, I'm both neartermist and have high P(doom) from AI risk..."; or"here's some relevant data, from, say, the EA survey, or whatever"Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Wed, 02 Aug 2023 22:34:18 +0000 EA - How many people are neartermist and have high P(doom)? by Sanjay 90%.Indeed, the proportion of "doomers" with those philosophical objections to longtermism should be just as high as the rate of such philosophical objections among those typically considered neartermist.I'm interested in answers either of the form:"hello, I'm both neartermist and have high P(doom) from AI risk..."; or"here's some relevant data, from, say, the EA survey, or whatever"Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> 90%.Indeed, the proportion of "doomers" with those philosophical objections to longtermism should be just as high as the rate of such philosophical objections among those typically considered neartermist.I'm interested in answers either of the form:"hello, I'm both neartermist and have high P(doom) from AI risk..."; or"here's some relevant data, from, say, the EA survey, or whatever"Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Sanjay https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:13 no full 6775
k7NjuGEKdRSrrJHmZ EA - Deep Report on Hypertension by Joel Tan (CEARCH) Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deep Report on Hypertension, published by Joel Tan (CEARCH) on August 2, 2023 on The Effective Altruism Forum.Taking into account (a) the expected benefits of eliminating hypertension, in terms of improved health and increased economic output, while also factoring in (b) the expected costs on the economic front and (c) the tractability of advocacy for top sodium reduction policies (i.e. a sodium tax; mandatory food reformulation; a strategic location intervention; a public education campaign; and front-of-pack labelling) to prevent and ameliorate hypertension, CEARCH finds the marginal expected value of advocacy for these top sodium reduction policies to control hypertension to be 88,052 DALYs per USD 100,000, which is at least 10x as cost-effective as giving to a GiveWell top charity.Our detailed cost-effectiveness analysis can be found here, as can the full report can be read here.Introduction: This report on hypertension is the culmination of three iterative rounds of research: (i) an initial shallow research round involving 1 week of desktop research; (ii) a subsequent intermediate research round involving 2 weeks of desktop research and expert interviews; and (iii) a final deep research round involving 3 weeks of desktop research, expert interviews, and the commissioning of surveys and quantitative modelling.Importance: Globally, hypertension is certainly a problem, and causes a significant health burden of 248 million disability-adjusted life years (DALYs) in 2024, as well as an accompanying net economic burden equivalent to 748 million foregone doublings of GDP per capita, each of which people typically value at around 1/5th of a year of healthy life. And the problem is only expected to grow between 2024 and 2100, as a result of factors like high sodium consumption, ageing, and population growth.Neglectedness: Government policy is far from adequate, with only 4% of countries currently implementing the top WHO-recommended ideas on sodium reduction, and this not expected to change much going forward - based on the historical track record, any individual country has only a 1% chance per annum of introducing such policies. At the same time, while there are NGOs working on hypertension and sodium reduction (e.g. in China, India and Latin America) - and while some are impact-oriented in focusing on poorer countries where the disease is growing far more rapidly than in wealthier countries - fundamentally, not enough is being done.Tractability: There are many potential solutions to the problem of hypertension (e.g. reducing dietary sodium, increasing potassium consumption, and pharmacological agents); however, we find that the most cost-effective solution is likely to be advocacy for top sodium reduction policies - specifically: a sodium tax; mandatory food reformulation; a strategic location intervention to change food availability in public institutions like hospitals, schools, workplaces and nursing homes; a public education campaign; and front-of-pack labelling. The theory of change behind this intervention package is as such:Step 1: Lobby a government to implement top sodium reduction policies.Step 2a: Sodium tax reduces sodium consumption.Step 2b: Mandatory food reformulation reduces sodium consumption.Step 2c: Strategic location intervention reduces sodium consumption.Step 2d: Public education reduces sodium consumption.Step 2e: Mandatory front-of-pack labelling reduces sodium consumption.Step 3: Lower sodium consumption in a single country reduces blood pressure and hence the global disease burden of hypertensionUsing the track record of past sodium control and sugar tax advocacy efforts and of general lobbying attempts (i.e. an "outside view"), and combining this with reasoning through the particulars of the case (i.e. an "inside view"), our bes...

]]>
Joel Tan (CEARCH) https://forum.effectivealtruism.org/posts/k7NjuGEKdRSrrJHmZ/deep-report-on-hypertension Wed, 02 Aug 2023 09:39:29 +0000 EA - Deep Report on Hypertension by Joel Tan (CEARCH) Joel Tan (CEARCH) https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:57 no full 6772
h9unK57kLnmKdG6uq EA - Riesgos Catastróficos Globales needs funding by Jaime Sevilla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Riesgos Catastróficos Globales needs funding, published by Jaime Sevilla on August 2, 2023 on The Effective Altruism Forum.Riesgos Catastróficos Globales (RCG) is a science-policy nonprofit investigating opportunities to improve the management of Global Catastrophic Risks in Spanish-Speaking countries.I wrote a previous update back in May. Since then, the organisation has published seven more articles, including a report on Artificial Intelligence regulation in the context of the EU AI Act sandbox. We have also been invited to contribute to the 2024-2030 National Risk Management Plan of Argentina, which will consequently be the world's first to include a section on abrupt sunlight reduction scenarios (ASRS).Unfortunately, our major fundraising efforts have been unsuccessful. We are only able to keep operating due to some incredibly generous donations by private individuals. We are looking to fundraise $87k to support our operations between October 2023 and March 2024. If you are a funder, you can contact us through info@riesgoscatastroficosglobales.com . Individuals can help extend our runway through a donation.Reasons to support Riesgos Catastróficos GlobalesI believe that RCG is an incredible opportunity for impact. Here are some reasons why.We have already found promising avenues to impact. We have officially joined the public risk management network in Argentina, and we have been invited to contribute an entry on abrupt sun-reducing scenarios (ASRS) to the 2024-2030 national risk management plan.RCG has shown to be amazingly productive. Since the new team started operating in March we have published two large reports and ten articles. Another large report is currently undergoing review, and we are working on three articles we plan to submit to academic journals. This is an unusually high rate of output for a new organization.RCG is the only Spanish-Speaking organisation producing work on Global Catastrophic Risks studies. I believe that our reports on Artificial Winter and Artificial Intelligence are the best produced in the language. Of particular significance is our active engagement with Latin American countries, which are otherwise not well represented in conversations about global risk.We are incubating some incredible talent. Our staff includes competent profiles who in a short span of time have gained in-depth expertise in Global Catastrophic Risks. This would have been hard to acquire elsewhere, and I am very excited about their careers.In sum, I am very excited about the impact we are having and the work that is happening in Riesgos Catastróficos Globales. Keep reading to learn more about it!Status updateHere are updates on our main lines of work.Artificial Winter. We have joined the Argentinian Register of Associations for Comprehensive Risk Management (RAGIR), and we will be contributing a section on managing abrupt sunlight reduction scenarios (ASRS) to the 2024-2030 National Risk Management Plan. We continue promoting public engagement with the topic, having recently published a summary infographic of our report. We also are preparing a related submission to an academic journal.Artificial Intelligence. We have published our report on AI governance in the context of the EU AI Act sandbox, as well as companion infographics. A member of the European parliament has agreed to write a prologue for the report. In parallel, we have been engaging with the discussion around the AI Act through calls for feedback. We are also currently preparing two submissions to academic journals related to risks and regulation of AI.Biosecurity. We have drafted a report on biosurveillance and contention of emergent infectious diseases in Guatemala, which is currently undergoing expert review. It will be published in August. We are also writing a short article o...

]]>
Jaime Sevilla https://forum.effectivealtruism.org/posts/h9unK57kLnmKdG6uq/riesgos-catastroficos-globales-needs-funding Wed, 02 Aug 2023 07:44:28 +0000 EA - Riesgos Catastróficos Globales needs funding by Jaime Sevilla Jaime Sevilla https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:08 no full 6771
nT8Ybhjvz5qG3eLuw EA - Implementational Considerations for Digital Consciousness by Derek Shiller Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Implementational Considerations for Digital Consciousness, published by Derek Shiller on August 2, 2023 on The Effective Altruism Forum.This post is a summary of my conclusions after a philosophical project investigating some aspects of computing architecture that might be relevant to assessing digital consciousness. I tried to approach the issues in a way that is useful to people with mainstream views and intuitions. Overall, I think that present-day implementational considerations should significantly reduce the probability most people assign to the possibility of conscious digital systems using current architectural and programming paradigms.The project was funded by the Long Term Future Fund.Key claims and synopses of the rationale for each:1. Details of the implementation of computer systems may be important to how confident we are about their capacity for consciousness.Experts are unlikely to come to agree that a specific theory of consciousness is correct and epistemic humility demands that we keep an open mind.Some plausible theories will make consciousness dependent on aspects of implementation.The plausible implementational challenges to digital consciousness should influence our overall assessment of the likelihood of digital consciousness.2. If computer systems are capable of consciousness, it is most likely that some theory of the nature of consciousness in the ballpark of functionalism is true.Brains and computers are composed of fundamentally different materials and operate at low levels in fundamentally different ways.Brains and computers share abstract functional organizations, but not their material composition.If we don't think that functional organizations play a critical role in assessing consciousness, we have little reason to think computers could be conscious.3. A complete functionalist theory of consciousness needs two distinct components: 1) a theory of what organizations are required for consciousness and 2) a theory of what it takes to implement an organization.An organization is an abstract pattern - it can be treated as a set of relational claims between the states of a system's various parts.Whether a system implements an organization depends on what parts it has, what properties belong to those parts, and how those properties depend on each other over time.There are multiple ways of interpreting the parts and states of any given physical system. Even if we know what relational claims define an organization, we need to know how it is permissible to carve up a system to assess whether the system implements that organization.4. There are hypothetical systems that can be interpreted as implementing the organization of a human brain that are intuitively very unlikely to be conscious.See examples in section 4.5. To be plausible, functionalism should be supplemented with additional constraints related to the integrity of the entities that can populate functional organizations.Philosophers have discussed the need for such constraints and some possible candidates, but there has been little exploration of the details of those constraints or what they mean for hypothetical artificial systems.There are many different possible constraints that would help invalidate the application of functional organizations to problematic systems in different ways.The thread tying together different proposals is that functional implementation is constrained by the cohesiveness or integrity of a system's component parts that play the roles in the implementations of functional organizations.Integrity constraints are independently plausible.6.) Several plausible constraints would prevent digital systems from being conscious even if they implemented the same functional organization as a human brain, supposing that they did so with current techni...

]]>
Derek Shiller https://forum.effectivealtruism.org/posts/nT8Ybhjvz5qG3eLuw/implementational-considerations-for-digital-consciousness Wed, 02 Aug 2023 06:21:48 +0000 EA - Implementational Considerations for Digital Consciousness by Derek Shiller Derek Shiller https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:30 no full 6770
9vazTE4nTCEivYSC6 EA - Reflections on my time on the Long-Term Future Fund by abergal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on my time on the Long-Term Future Fund, published by abergal on August 2, 2023 on The Effective Altruism Forum.I'm stepping down as chair of the Long-Term Future Fund. I'm writing this post partially as a loose set of reflections on my time there, and partially as an overall update on what's going on with the fund, as I think we should generally be transparent with donors and grantees, and my sense is the broader community has fairly little insight into the fund's current operations. I'll start with a brief history of what's happened since I joined the fund, and its impact, and move to a few reflections on ways the fund is working now.(Also- you can donate to the Long-Term Future Fund here, and let us know here if you might be interested in becoming a fund manager. (The Long-Term Future Fund is part of EA Funds, which is a fiscally sponsored project of Effective Ventures Foundation (UK) (EV UK) and Effective Ventures Foundation USA Inc. (EV US). Donations to the Long-Term Future Fund are donations to EV US or EV UK.))A brief history of my time on the Long-Term Future FundI joined the Long-Term Future Fund as a fund manager in June 2020. At the time, it was chaired by Matt Wage, had five fund managers (including me), and ran three rounds per year, with order ~50 applications per round, giving away an average of ~$450K per round in both 2019 and 2020, for a total of ~$1.35M per year.Matt Wage left the fund, and I was appointed the new chair in February 2021. We also hired a number of new fund managers, and decided to try out a guest manager system where we had more temporary fund managers work with the fund. (We're still doing that today.)The biggest change that I pushed for during my time as chair (which I believe was originally suggested by Ozzie Goeen, thanks Ozzie) was switching from three fixed rounds a year to a rolling application system, where anyone could apply to the fund at any time. My current guess is that this was a pretty big boost to the fund's impact, via giving us access to a bunch of counterfactual grant opportunities that other funders (who either didn't have rolling applications, or were funding a more constrained set of things) didn't have access to. I also pushed for a switch away from mandatory public reporting for our grant applicants (which I discuss somewhat below), which I also think gave us access to better grant opportunities, and which I also overall still endorse.At a high-level, the fund consists of a number of fund managers who work a relatively low number of hours per week (generally 3 - 10, though I think it might be trending even lower recently). EA Funds itself only has one full-time employee - Caleb Parikh. There's been significant fund manager turnover since I joined the fund, generally because relevant fund managers have wanted to prioritize their other work - since I joined the fund, Matt Wage and Helen Toner have left, Adam Gleave, Evan Hubinger, and Becca Kagan have joined as permanent fund managers and then left, Linchuan Zhang has joined as a permanent fund manager, and we've had a number of guest managers join and leave the fund.Also during my time on the fund, the volume of our grantmaking work has scaled significantly - whereas in 2020, we received 211 applications and funded 34 as grants, worth ~$1.3M dollars total, from March 2022 to March 2023, we received 878 applications and funded 263 as grants, worth ~$9.1M dollars total.The fund's impactMy reflections below focus on ways I think the Long-Term Future Fund has been suboptimal over my time as chair, and I got feedback on my original draft of this post that this made it seem like my view of the fund was negative overall, which wasn't my intention. For clarity, my best guess is that overall, the Long-Term Future Fund has been, and continues to b...

]]>
abergal https://forum.effectivealtruism.org/posts/9vazTE4nTCEivYSC6/reflections-on-my-time-on-the-long-term-future-fund Wed, 02 Aug 2023 03:25:21 +0000 EA - Reflections on my time on the Long-Term Future Fund by abergal abergal https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:40 no full 6768
zt6MsCCDStm74HFwo EA - EA Funds organisational update: Open Philanthropy matching and distancing by calebp Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Funds organisational update: Open Philanthropy matching and distancing, published by calebp on August 2, 2023 on The Effective Altruism Forum.We want to communicate some changes that are happening at EA Funds, particularly on the EA Infrastructure Fund and the Long-Term Future Fund.In summary:EA Funds (particularly the EAIF and LTFF) and Open Philanthropy have historically had overlapping staff, and Open Phil has supported EA Funds, but we (staff at EA Funds and Open Philanthropy) are now trying to increase the separation between EA Funds and Open Philanthropy. In particular:The current chairs of the LTFF and the EAIF, who have also joined as staff members at Open Philanthropy, are planning to step down from their respective chair positions over the next several months. Max Daniel is going to step down as the EAIF's chair on August 2nd, and Asya Bergal is planning to step down as the LTFF's chair in October.To help transition EA Funds away from reliance on Open Philanthropy's financial support, Open Philanthropy is planning to match donations to the EA Infrastructure and Long-Term Future Fund at 2:1 rates, up to $3.5M each, over the next six months.The EAIF and LTFF have substantial funding gaps - we are looking to raise an additional $3.84M for the LTFF and $4.83M for the EAIF. over the next six months. By default, I expect, the LTFF to have ~$720k, and the EAIF to have ~$400k by default.Our relationship with Open PhilanthropyEA Funds started in 2017 and was largely developed during CEA's time at Y Combinator. It spun out of CEA in 2020, though both CEA and EA Funds are part of the Effective Ventures Foundation. Last year, EA Funds moved over $35M towards high-impact projects through the Animal Welfare Fund (AWF), EA Infrastructure Fund (EAIF), Global Health and Development Fund (GHDF), and Long-Term Future Fund (LTFF).Over the last two years, the EAIF and LTFF used some overlapping resources with Open Philanthropy in the following ways:Over the last year, Open Philanthropy has contributed a substantial proportion of EAIF and LTFF budgets and has covered our entire operations budget.[1] They also made a sizable grant in February 2022. (You can see more detail on Open Philanthropy's website.)The chairs of the EAIF and LTFF both joined the Longtermist EA Community Growth team at Open Philanthropy and have worked in positions at EA Funds and Open Philanthropy simultaneously. (Asya Bergal joined the LTFF in June 2020, has been chair since February 2021, and joined Open Philanthropy in April 2021; Max Daniel joined the EAIF in March 2021, has been chair since mid-2021, and joined Open Philanthropy in November 2022.)As a board member of the Effective Ventures Foundation (UK), Claire Zabel, who is also the Senior Program Officer for EA Community Growth (Longtermism) at Open Philanthropy and supervises both Asya and Max, has regularly met with me throughout my tenure at EA Funds to hear updates on EA Funds and offer advice on various topics related to EA Funds (both day-to-day issues and higher-level organisation strategy).That said, I think it is worth noting that:The majority of funding for the LTFF has come from non-Open Philanthropy sources.Open Philanthropy as an organisation has limited visibility into our activities, though certain Open Philanthropy employees, particularly Max Daniel and Asya Bergal, have a lot of visibility into certain parts of EA Funds.Our grants supporting our operations and LTFF/EAIF grantmaking funds have had minimal restrictions.Since the shutdown of the FTX Future Fund, Open Phil and I have both felt more excited about building a grantmaking organisation that is legibly independent from Open Phil. Earlier this year, Open Phil staff reached out to me proposing some steps to make this happen, and have worked with me closely ...

]]>
calebp https://forum.effectivealtruism.org/posts/zt6MsCCDStm74HFwo/ea-funds-organisational-update-open-philanthropy-matching Wed, 02 Aug 2023 03:07:55 +0000 EA - EA Funds organisational update: Open Philanthropy matching and distancing by calebp calebp https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:43 no full 6769
y3wDuScgWzzPdAtgo EA - Future technological progress does NOT correlate with methods that involve less suffering by Jim Buhler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future technological progress does NOT correlate with methods that involve less suffering, published by Jim Buhler on August 2, 2023 on The Effective Altruism Forum.Summary: The alleged inevitable convergence between efficiency and methods that involve less suffering is one of the main arguments I've heard in favor of assuming the expected value of the future of humanity is positive, and I think it is invalid. While increased efficiency luckily converges with less biological suffering so far, this seems to be due to the physical limitations of humans and other animals rather than due to their suffering per se. And while past and present suffering beings all have severe physical limitations making them "inefficient", future forms of sentience will likely make this past trend completely irrelevant. Future forms of suffering might even be instrumentally very useful and therefore "efficient", such that we could make the reverse argument. Note that the goal of this post is not to argue that technological progress is bad, but simply to call out one specific claim that, despite its popularity, is - I think - just wrong.The original argumentWhile I've been mostly facing this argument in informal conversation, it has been (I think pretty well) fleshed out by Ben West (2017): (emphasis is mine)[W]e should expect there to only be suffering in the future if that suffering enables people to be lazier [(i.e., if it is instrumentally "efficient".] The most efficient solutions to problems don't seem like they involve suffering. [...] Therefore, as technology progresses, we will move more towards solutions which don't involve suffering[.]Like most people I've heard use this argument, he illustrates his point with the following two examples:Factory farming exists because the easiest way to get food which tastes good and meets various social goals people have causes cruelty. Once we get more scientifically advanced though, it will presumably become even more efficient to produce foods without any conscious experience at all by the animals (i.e. clean meat); at that point, the lazy solution is the more ethical one.(This arguably is what happened with domestic work animals on farms: we now have cars and trucks which replaced horses and mules, making even the phrase "beat like a rented mule" seem appalling.)Slavery exists because there is currently no way to get labor from people without them having conscious experience. Again though, this is due to a lack of scientific knowledge: there is no obvious reason why conscious experience is required for plowing a field or harvesting cocoa, and therefore the more efficient solution is to simply have nonconscious robots do these tasks.(This arguably is what happened with human slavery in the US: industrialization meant that slavery wasn't required to create wealth in a large chunk of the US, and therefore slavery was outlawed.)Why this argument is invalidWhile I tentatively think the "the most efficient solutions to problems don't seem like they involve suffering" claim is true if we limit ourselves to the present and the past, I think it is false once we consider the long-term future, which makes the argument break apart.Future solutions are more efficient insofar as they overcome past limitations. In the relevant examples that are enslaved humans and exploited animals, suffering itself is not a limiting factor. It is rather the physical limitations of those biological beings, relative to machines that could do a better job at their tasks.I don't see any inevitable dependence between their suffering and these physical limitations. If human slaves and exploited animals were not sentient, this wouldn't change the fact that machines would do a better job.The fact that suffering has been correlated with inefficiency so far seems to be ...

]]>
Jim Buhler https://forum.effectivealtruism.org/posts/y3wDuScgWzzPdAtgo/future-technological-progress-does-not-correlate-with Wed, 02 Aug 2023 00:11:35 +0000 EA - Future technological progress does NOT correlate with methods that involve less suffering by Jim Buhler Jim Buhler https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:34 no full 6766
euzDpFvbLqPdwCnXF EA - University EA Groups Need Fixing by Dave Banerjee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: University EA Groups Need Fixing, published by Dave Banerjee on August 3, 2023 on The Effective Altruism Forum.(Cross-posted from my website.)I recently resigned as Columbia EA President and have stepped away from the EA community. This post aims to explain my EA experience and some reasons why I am leaving EA. I will discuss poor epistemic norms in university groups, why retreats can be manipulative, and why paying university group organizers may be harmful. Most of my views on university group dynamics are informed by my experience with Columbia EA. My knowledge of other university groups comes from conversations with other organizers from selective US universities, but I don't claim to have a complete picture of the university group ecosystem.Disclaimer: I've written this piece in a more aggressive tone than I initially intended. I suppose the writing style reflects my feelings of EA disillusionment and betrayal.My EA ExperienceDuring my freshman year, I heard about a club called Columbia Effective Altruism. Rumor on the street told me it was a cult, but I was intrigued. Every week, my friend would return from the fellowship and share what he learned. I was fascinated. Once spring rolled around, I applied for the spring Arete (Introductory) Fellowship.After enrolling in the fellowship, I quickly fell in love with effective altruism. Everything about EA seemed just right - it was the perfect club for me. EAs were talking about the biggest and most important ideas of our time. The EA community was everything I hoped college to be. I felt like I found my people. I found people who actually cared about improving the world. I found people who strived to tear down the sellout culture at Columbia.After completing the Arete Fellowship, I reached out to the organizers asking how I could get more involved. They told me about EA Global San Francisco (EAG SF) and a longtermist community builder retreat. Excited, I applied to both and was accepted. Just three months after getting involved with EA, I was flown out to San Francisco to a fancy conference and a seemingly exclusive retreat.EAG SF was a lovely experience. I met many people who inspired me to be more ambitious. My love for EA further cemented itself. I felt psychologically safe and welcomed. After about thirty one-on-ones, the conference was over, and I was on my way to an ~exclusive~ retreat.I like to think I can navigate social situations elegantly, but at this retreat, I felt totally lost. All these people around me were talking about so many weird ideas I knew nothing about. When I'd hear these ideas, I didn't really know what to do besides nod my head and occasionally say "that makes sense." After each one-on-one, I knew that I shouldn't update my beliefs too much, but after hearing almost every person talk about how AI safety is the most important cause area, I couldn't help but be convinced. By the end of the retreat, I went home a self-proclaimed longtermist who prioritized AI safety.It took several months to sober up. After rereading some notable EA criticisms (Bad Omens, Doing EA Better, etc.), I realized I got duped. My poor epistemics led me astray, but weirdly enough, my poor epistemics gained me some social points in EA circles. While at the retreat and at EA events afterwards, I was socially rewarded for telling people that I was a longtermist who cared about AI safety. Nowadays, when I tell people I might not be a longtermist and don't prioritize AI safety, the burden of proof is on me to explain why I "dissent" from EA. If you're a longtermist AI safety person, there's no need to offer evidence to defend your view.(I would be really excited if more experienced EAs asked EA newbies why they take AI safety seriously more often. I think what normally happens is that the experienced EA gets su...

]]>
Dave Banerjee https://forum.effectivealtruism.org/posts/euzDpFvbLqPdwCnXF/university-ea-groups-need-fixing Thu, 03 Aug 2023 22:28:59 +0000 EA - University EA Groups Need Fixing by Dave Banerjee Dave Banerjee https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:35 no full 6781
HfnebqujtDxtppyN5 EA - EA Survey 2022: What Helps People Have an Impact and Connect with Other EAs by WillemSleegers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Survey 2022: What Helps People Have an Impact and Connect with Other EAs, published by WillemSleegers on August 3, 2023 on The Effective Altruism Forum.SummaryPersonal contact with EAs was the most commonly selected influence on EAs' ability to have a positive impact (40.9%), followed by 80,000 Hours (31.4%) and local EA groups (19.8%).Compared to 2020, EA Global, the EA Forum, EAGx, and 80,000 Hours one-on-one career discussion showed a small increase in being reported as important sources for greater impact. Relatively more sources decreased, some quite significantly, in being reported as important (e.g., GiveWell, 80,000 Hours website and podcast).Local EA groups were the most commonly cited source for making a new connection (35.5%), followed by a personal connection (34.1%), EA Global (26.2%), and EAGx (24.0%).Compared to 2020, relatively more respondents indicate having made an interesting and valuable new personal connection via EA Global and EAGx, and fewer via most other sources.IntroductionIn this post, we report on what factors EAs say helped them have a positive impact or create new connections. Note that we significantly shortened the EA Survey this year, meaning there are fewer community-related questions than in the previous EA Survey.Positive InfluencesWe asked about which factors, within the last 12 months, had the largest influence on your personal ability to have a positive impact, allowing respondents to select up to three options. On average, respondents selected 2.36 options (median 3).Personal contact with EAs stood out as the most common factor selected by respondents (40.9%), followed by 80,000 Hours (combined, 31.4%), and local EA groups (19.8%).Personal contact, 80,000 Hours, and EA groups were also the top three factors that respondents reported as being important for getting them involved in EA.2020 vs. 2022We asked the same question in 2020, although we changed some of the response categories. This year we included 80,000 Hours (job board), the online EA community (other than EA Forum), and Virtual programs, but dropped Animal Charity Evaluators, The Life You Can Save, and Animal Advocacy Careers. These categories were dropped due to low endorsement rates in previous years.Compared to 2020, we see an increase in EA Global, the EA Forum, EAGx, and 80,000 Hours one-on-one career discussion as important sources for greater impact. These increases were quite small in most cases, with the biggest change observed for EAGx (from 6% to 13%). We saw multiple decreases, some quite sizable, in local EA groups, 80,000 Hours (both the website and podcast), Books, GiveWell, Articles/blogs, Giving What We Can, LessWrong, Slate Star Codex/Astral Codex Ten, podcasts, and Facebook groups.A portion of the decrease of the website of 80,000 Hours can be attributed to the addition of the 80,000 Hours job board category in this year's survey. Last year, respondents may have included this category in the website category of 80,000 Hours, while this year it was its own category. Including the job board category with the website category leads to a smaller decrease between 2022 and 2020, although it does not fully account for it.It's important to recall that these questions asked about which factors had had the largest influence within the last 12 months. Thus, the percentages of respondents who have been influenced by these factors at some point are likely larger than those reporting having been influenced in the last 12 months within this survey. Responses to this question might also be expected to change more, across years, than our questions which are not limited to the last 12 months.GenderRespondents who indicated identifying as a man were more likely to select the EA Forum, GiveWell, the 80,000 Hours podcast and one-on-one career discu...

]]>
WillemSleegers https://forum.effectivealtruism.org/posts/HfnebqujtDxtppyN5/ea-survey-2022-what-helps-people-have-an-impact-and-connect Thu, 03 Aug 2023 19:28:02 +0000 EA - EA Survey 2022: What Helps People Have an Impact and Connect with Other EAs by WillemSleegers WillemSleegers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:58 no full 6780
pimZLbsXPnHTXqFT7 EA - Want to work on US emerging tech policy? Consider the Horizon Fellowship, TechCongress, and AAAS AI Fellowship by kuhanj Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Want to work on US emerging tech policy? Consider the Horizon Fellowship, TechCongress, and AAAS AI Fellowship, published by kuhanj on August 3, 2023 on The Effective Altruism Forum.I recently learned that there are a lot of fellowships currently open for individuals interested in transitioning into US emerging tech policy careers. I shared the announcements from different websites below.Based on my reading, the Horizon Fellowship seems to be broader with placements at think tanks, executive agencies, and congress. These placements can last up to two years. TechCongress and AAAS Rapid Response STPF are only for 10-12 month congressional placements. All are only open to US citizens or individuals with the right to work in the US without employer sponsorship.Horizon Fellowship (Deadline: Sept 15th, 2023. Apply here):"Applications for the 2024 cohort of the Horizon Fellowship are now open! If you are interested in working on AI, biosecurity, or related policy areas at a federal agency, congressional office, or think tank, you can learn more about how to apply at Become a Fellow.This application round is for the third cohort of Horizon Fellows. The first two cohorts had 15 and 20 fellows respectively, and a 100% success rate in securing high-impact placements related to AI and biosecurity policy. Among many other host organizations, our fellows have placed at the Departments of Defense and Homeland Security, the Senate Commerce and House Science committees, and think tanks such as the Center for Security and Emerging Technology and the Center for Health Security. You can learn more about past fellows and their placements at Meet our Fellows."Eligibility (junior think tank track): Recently obtained a bachelor's or master's degree (including those graduating in the year that their placements would start; Demonstrated interest in AI, biosecurity, or related emerging technology areasEligibility (senior track): Several years of professional experience; A credential (e.g. work experience, degree, publications, or similar) related to area of focus; A graduate degree (not required; strongly preferred for executive branch fellows)Tech Congress (Deadline: August 22nd 2023. Apply here):Applications for the January 2024 Fellowship"The Congressional Innovation Fellowship will place you among the top tech decision makers in the United States government at a time when technology is reshaping society in fundamental ways. Even if you've never considered working in government, the Congressional Innovation Fellowship will allow you to make change at the highest levels and at a scale unparalleled in the private or public sectors.We are bridging the divide between Congress and the technology sector by placing tech savvy people like you-- an early-career technologist (two - six years professional experience), including those who have recently finished, or are on track to finish a Master's program or PhD-- to work with Members of Congress and Congressional Committees. Our goals are to build capacity in Congress, train cross-sector leaders -- who can understand the challenges of government and in the technology community -- and keep Congress up to date about the latest challenges and opportunities relating to technology."Eligibility (junior track): Early-career, with between two and six years of experience, including recently finishing (or projected to finish by January 2024) a technical degree program. Tech savvy, with experience working in or studying the technology sector.Eligibility (senior track): Eight or more years of work or postgraduate study. Tech savvy, with experience working in or studying the technology sector.AAAS Rapid Response STPF Cohort in AI (Deadline: August 5th, 2023. Apply here)(Note that this is a separate program from the annual AAAS Science & Technology P...

]]>
kuhanj https://forum.effectivealtruism.org/posts/pimZLbsXPnHTXqFT7/want-to-work-on-us-emerging-tech-policy-consider-the-horizon Thu, 03 Aug 2023 18:57:14 +0000 EA - Want to work on US emerging tech policy? Consider the Horizon Fellowship, TechCongress, and AAAS AI Fellowship by kuhanj kuhanj https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:46 no full 6779
BErC24s77fdo93ghi EA - Alignment Grantmaking is Funding-Limited Right Now [crosspost] by johnswentworth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alignment Grantmaking is Funding-Limited Right Now [crosspost], published by johnswentworth on August 3, 2023 on The Effective Altruism Forum.For the past few years, I've generally mostly heard from alignment grantmakers that they're bottlenecked by projects/people they want to fund, not by amount of money. Grantmakers generally had no trouble funding the projects/people they found object-level promising, with money left over. In that environment, figuring out how to turn marginal dollars into new promising researchers/projects - e.g. by finding useful recruitment channels or designing useful training programs - was a major problem.Within the past month or two, that situation has reversed. My understanding is that alignment grantmaking is now mostly funding-bottlenecked. This is mostly based on word-of-mouth, but for instance, I heard that the recent lightspeed grants round received far more applications than they could fund which passed the bar for basic promising-ness. I've also heard that the Long-Term Future Fund (which funded my current grant) now has insufficient money for all the grants they'd like to fund.I don't know whether this is a temporary phenomenon, or longer-term. Alignment research has gone mainstream, so we should expect both more researchers interested and more funders interested. It may be that the researchers pivot a bit faster, but funders will catch up later. Or, it may be that the funding bottleneck becomes the new normal. Regardless, it seems like grantmaking is at least funding-bottlenecked right now.Some takeaways:If you have a big pile of money and would like to help, but haven't been donating much to alignment because the field wasn't money constrained, now is your time!If this situation is the new normal, then earning-to-give for alignment may look like a more useful option again. That said, at this point committing to an earning-to-give path would be a bet on this situation being the new normal.Grants for upskilling, training junior people, and recruitment make a lot less sense right now from grantmakers' perspective.For those applying for grants, asking for less money might make you more likely to be funded. (Historically, grantmakers consistently tell me that most people ask for less money than they should; I don't know whether that will change going forward, but now is an unusually probable time for it to change.)Note that I am not a grantmaker, I'm just passing on what I hear from grantmakers in casual conversation. If anyone with more knowledge wants to chime in, I'd appreciate it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
johnswentworth https://forum.effectivealtruism.org/posts/BErC24s77fdo93ghi/alignment-grantmaking-is-funding-limited-right-now-crosspost Thu, 03 Aug 2023 09:11:21 +0000 EA - Alignment Grantmaking is Funding-Limited Right Now [crosspost] by johnswentworth johnswentworth https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:27 no full 6776
ZsMqioc7BgL2DTJca EA - Manifund: What we're funding (weeks 2-4) by Austin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Manifund: What we're funding (weeks 2-4), published by Austin on August 4, 2023 on The Effective Altruism Forum.Overall reflectionsVery happy with the volume and quality of grants we've been making$600k+ newly committed across 12 projectsRegrantors have been initiating grants and coordinating on large projectsIndependent donors have committed $35k+ of their own money!We plan to start fundraising soon, based on this pace of distributionHappy to be coordinating with funders at LTFF, Lightspeed, Nonlinear and OpenPhilWe now have a common Slack channel to share knowledge and plansCurrently floating the idea of setting up a common app between us.Happy with our experimentation! Some things we've been trying:Equity investments, loans, dominant assurance contracts and retroactive fundingGrantathon, office hours, feedback on Discord & site commentsLess happy with our operations (wrt feedback and response times to applicants)Taking longer to support to individual grantees, or start new Manifund initiativesPlease ping us if it's been a week and you haven't heard anything!Wise deactivated our account, making international payments more difficult/expensive.In cases where multiple regrantors may fund a project, we've observed a bit of "funding chicken"Grant of the month[$310k] Apollo ResearchThis is our largest grant to date! Many of our regrantors were independently excited about Apollo; in the end, we coordinated between Tristan Hume, Evan Hubinger and Marcus Abramovitch to fund this.From Tristan:I'm very excited about Apollo based on a combination of the track record of it's founding employees and the research agenda they've articulated.Marius and Lee have published work that's significantly contributed to Anthropic's work on dictionary learning. I've also met both Marius and Lee and have confidence in them to do a good job with Apollo.Additionally, I'm very much a fan of alignment and dangerous capability evals as an area of research and think there's lots of room for more people to work on them.In terms of cost-effectiveness I like these research areas because they're ones I think are very tractable to approach from outside a major lab in a helpful way, while not taking large amounts of compute. I also think Apollo existing in London will allow them to hire underutilized talent that would have trouble getting a U.S. visa.New grants[$112k] Jesse Hoogland: Scoping Developmental InterpretabilityJesse posted this through our open call:We propose a 6-month research project to assess the viability of Developmental Interpretability, a new AI alignment research agenda. "DevInterp" studies how phase transitions give rise to computational structure in neural networks, and offers a possible path to scalable interpretability tools.Though we have both empirical and theoretical reasons to believe that phase transitions dominate the training process, the details remain unclear. We plan to clarify the role of phase transitions by studying them in a variety of models combining techniques from Singular Learning Theory and Mechanistic Interpretability. In six months, we expect to have gathered enough evidence to confirm that DevInterp is a viable research program.If successful, we expect Developmental Interpretability to become one of the main branches of technical alignment research over the next few years.Rachel was excited about this project and considered setting up a dominance assurance contract to encourage regrants, but instead offered 10% matching; Evan took her up on this![$60k] Dam and Pietro: Writeup on Agency and (Dis)EmpowermentA regrant initiated by Evan:6 months support for two people, Damiano and Pietro, to write a paper about (dis)empowerment. Its ultimate aim is to offer formal and operational notions of (dis)empowerment. For example, an inte...

]]>
Austin https://forum.effectivealtruism.org/posts/ZsMqioc7BgL2DTJca/manifund-what-we-re-funding-weeks-2-4 Fri, 04 Aug 2023 17:41:18 +0000 EA - Manifund: What we're funding (weeks 2-4) by Austin Austin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:01 no full 6786
iLJKiuDqP7H7hDtW4 EA - Seeking founders for new effective giving organisations by Luke Moore Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Seeking founders for new effective giving organisations, published by Luke Moore on August 4, 2023 on The Effective Altruism Forum.Giving What We Can is looking to help founding teams organise to grow effective giving around the world and wants to hear from people excited about helping new effective giving initiatives get off the ground. These are likely to be initiatives in new geographies, like Effektiv Spenden, but we are also open to hearing about ideas for projects seeking to promote effective giving to specific interest groups, like High Impact Athletes. At this point we will prioritise supporting organisations which will fundraise for GWWC's existing recommendations over helping with geographically specific research and making geographically specific recommendations, although we are interested in this as well.Send us your expression of interestWhy?Effective giving is an incredibly efficient way for us to convert resources into impact and moreover, there are many positive indirect effects from promoting effective giving; such as advocating for effective altruism and promoting positive values globally. However, there are significant regional gaps in the current effective giving ecosystem. As a result, donors from unrepresented regions or communities may be unable to make (tax-deductible) donations to effective charities and more importantly may not even come to hear about effective giving, as the ideas are not available in their language or cultural/community/social context. We think that this represents a huge missed opportunity - and one that we hope to help rectify.What?These new effective giving initiative will likely be a fundraising organisation that hosts funds which make grants to other effective charities and high-impact funding opportunities. These new fundraising organisations will need to be able to accept donations (ideally tax-deductible) in their legal jurisdiction and be able to make grants to charitable programs.Who?We are initially interested in recruiting a working group that would set the strategy, take the initial steps, and hire the executive director (who could be one of the original founding team). The types of roles that could be useful at this stage will be very varied and include e.g. volunteers, board members, an executive director and other team members, particularly those with expertise in the following areas:LegalGovernanceWebsite designPhilanthropic advisingMarketing and communicationsAccounting and financeCommunity fundraisingEventsIT and information securitySupport from Giving What We Can?Giving What We Can can provide support to new organisations in the following ways:General advice and supportSharing best practice and lessons learntAn MVP starter-guide for new effective giving organisationsAccess to shared resources and content within the effective giving communityIntroductions to the broader effective giving network (potentially helping form mentorship relationships)Assistance in accessing legal advice when neededHelp in finding funding for the project, and potentially spending some of Giving What We Can's own budget on initial costsIn some cases, the use of the GWWC brand and donation platformThe level of support will depend on the specific project, its potential for impact, and the strength of the founding team.Giving What We Can has previously provided support in various ways to different effective giving organisations, here are just a few illustrative examples:Don Efficace: We provided some funding, governance and legal support, access to our donation platform, and mentorship for the founding team.Ayuda Efectiva: We provided a brand partnership and Ayuda Efectiva acts as the community for our members in Spain with their Executive Director being an Ambassador for Giving What We Can.Giving Gr...

]]>
Luke Moore https://forum.effectivealtruism.org/posts/iLJKiuDqP7H7hDtW4/seeking-founders-for-new-effective-giving-organisations Fri, 04 Aug 2023 13:22:31 +0000 EA - Seeking founders for new effective giving organisations by Luke Moore Luke Moore https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:46 no full 6784
Kugwq3jmoNEgrmAZG EA - Marketing 101 for EA Organizations by Igor Scaldini Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Marketing 101 for EA Organizations, published by Igor Scaldini on August 4, 2023 on The Effective Altruism Forum.As a newcomer to the EA community, I'm surprised by how rarely EAs talk about marketing and organizational growth. As a marketer myself, maybe it's just a matter of me over-expecting people to give more importance to the matter, but even so, I'm confident that there are good reasons to talk more about it.First, according to this survey run by User-Friendly in February 2022, 72% of EA organization leaders said that effective marketing significantly helped their organization to achieve its objectives. I haven't conducted a survey myself, but the dozens of conversations I've had with professionals and founders in the EA movement support this statement.Another good reason is that EA organizations need a minimum degree of awareness to raise funds to achieve their goals. Of course, networking is important, but how scalable is it? Since creating awareness is one of the main marketing premises, EA organizations should consider using it more effectively.So the question I've been wondering is: why?Why don't people in the EA community talk that much about growth loops, retention, and marketing in general? Talking to colleagues in the community, I'm able to see a pattern in the main reasons that EAs are resistant to marketing.So my objective with this article is not only to explain what marketing is and provide a roadmap with practical tips that make sense for EA organizations but also to address some of these objections. Hopefully, this will at least get you thinking a bit about the value marketing could bring to your organization.Please feel free to share your thoughts and criticisms in the comments section, as I'd like to hear them so that I can improve this post to make it more useful.If you are convinced of the importance of marketing and are just looking for a roadmap, I recommend jumping straight to the section "A useful marketing framework for EA organizations."You may also be in a situation in which you have been running marketing campaigns for a while but facing some operations issues (misalignment of strategies, missing deadlines, idleness, etc.). In this case, the section "How to be prepared for operational success" may be particularly useful for you.Lastly, it's important to note that I will focus on marketing for EA organizations, not the marketing of the effective altruism movement itself. If you're interested in the latter, I recommend this post on making effective altruism enormous, this one on keeping EA welcoming, and this one on movement course corrections.Addressing some marketing misconceptions I've seen among EAsMisconception #1: Marketing is all about persuasion.Some EAs are suspicious of marketing because they think that it's all about persuading people to agree with you, rather than impartially seeking the truth. If used ethically, persuasion can indeed be a vital tool for marketing. However, an effective marketing strategy goes way beyond persuasion - channel optimizations, user journey, targeting, and many other things we do in marketing are important for growth but don't involve persuasion at all.Misconception #2: Logical arguments are enough to attract people.Some EAs think that there's no need to market EA organizations because the right people will be attracted by logical arguments alone. I deeply wish this were the case, but unfortunately, it's not. There is plenty of research showing that emotions play a huge role in our daily decisions. In my experience, telling a compelling story is one of the greatest and most time-tested ways to encourage people to act - even if those people are part of an audience that highly values logic.Misconception #3: Marketing destroys the organization's reputation among EAs.If done wrong, ...

]]>
Igor Scaldini https://forum.effectivealtruism.org/posts/Kugwq3jmoNEgrmAZG/marketing-101-for-ea-organizations Fri, 04 Aug 2023 00:09:08 +0000 EA - Marketing 101 for EA Organizations by Igor Scaldini Igor Scaldini https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:36 no full 6783
rPj6Fh4ZTEpRah3uf EA - Problems with free services for EA projects by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Problems with free services for EA projects, published by Lizka on August 3, 2023 on The Effective Altruism Forum.EA-motivated specialists sometimes offer free or subsidized versions of normally expensive services to EA projects. I think this is often counterproductive and outline my reasoning in this post.The key problem with free services is that we don't have market-based information about their quality, so the beneficiaries of a free service might be getting less value than it might appear they are getting. As a result, service providers waste their time providing expensive services to people who wouldn't pay the full price (instead, providers could charge and donate or spend that time on other impactful work). Additionally, community-level overestimates of the quality of free services are more likely and might lead people who need good services to use the free versions even when they're worse suited to their needs.If you're offering, taking, or advertising a free service like this, I think you should believe the situation is an exception to the general heuristic. (More on these problems and other issues, as well as exceptions and nuances.)Some services make sense as free services. For instance (see more):Community infrastructure projects or other work where the benefits go through many people (there's not a clear "beneficiary" of the service)Services for which overhead costs are especially highServices that are in large part beneficial to the provider, not the recipients of the serviceNotes & caveats:Scope of the postWhen I talk about "services," I mean costly services that are often provided by specialists (coaching, consulting, etc.). I think more minor services are more often fine, and I'm definitely not talking about friendly support, like helping your officemate with an ughy task or having a call with someone you met at EAG who's interested in getting into your field.Certain types of volunteering don't encounter the issues outlined here, particularly when the volunteer has the option to be paid a specified rate but opts out (so they know more about the "market value" of their free work).The concerns I discuss are also less relevant for services provided by for-profit organizations/groups that provide a free/subsidized version to e.g. nonprofits.I focus on free services here, but the arguments can probably be extended to cover subsidized services.I don't discuss paid services that only serve EA customers, but I also have concerns about these services when it isn't clear that the needs of EA projects covered are unusual. My rough take is that:People advertising infrastructure and support projects in EA spaces (e.g. newsletters) should not give preference to services that only have EA customers unless there's something clearly special about the service that makes it especially useful for EA work.People working on EA projects should find the available service that best suits their needs, which is generally unlikely to be EA-focused; most of the world is not in EA. (Maybe it's easier to identify a good-enough service in EA, but I still think we should be a bit more suspicious of our instinct towards going for EA things.)I'm not speaking for my team or for CEA.I don't know how important the issues I outline here are relative to other issues in EA, but this topic has come up several times, so I decided to write this post.I don't think people who provide free services are ill-meaning or anything like that. Also, I appreciate many people in the EA community for reasons that go beyond "I think they do impactful work." I like to hang out with other people in the EA community for personal reasons. But if I'm thinking about doing something for EA reasons - because I think it's an effective use of resources in order to help the world - I want to focus ...

]]>
Lizka https://forum.effectivealtruism.org/posts/rPj6Fh4ZTEpRah3uf/problems-with-free-services-for-ea-projects Thu, 03 Aug 2023 23:33:38 +0000 EA - Problems with free services for EA projects by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:16 no full 6782
5DR89AbnjGjEcYvu8 EA - Hiring Retrospective: ERA Fellowship 2023 by Oscar Delaney Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiring Retrospective: ERA Fellowship 2023, published by Oscar Delaney on August 5, 2023 on The Effective Altruism Forum.SummaryWe hired 31 people for our current Summer Research Fellowship, out of 631 applicationsThe applications were of impressive quality, so we hired more people than expectedWe think we made several mistakes, and should:Communicate more clearly what level of seniority/experience we want in applicantsHave a slightly shorter initial application, but be upfront that it is a long formAssign one person to evaluate all applicants for one particular question, rather than marking all questions on one application togetherHiring round processesThe Existential Risk Alliance (ERA) is a non-profit project equipping young researchers with the skills and knowledge needed to tackle existential risks. We achieve this by running a Summer Research Fellowship program where participants do an independent research project, supported by an ERA research manager, and an external subject-matter expert mentor.Promotion and OutreachWe tried quite hard to promote the Fellowship. Personal connections and various EA community sources were the most common referral source for applications:Initial applicationOur initial application form consisted of submitting a CV (which we put minimal weight on) and answering various open-ended questions. Some questions on motivation, reasoning ability, and previous experience were the same across all cause areas, and we also asked some cause area-specific subject matter questions. Our application form was open for 22 days, and we received a total of 631 applications from 556 unique applicants (some people applied to multiple cause areas). There was significant variation in the number of applications in each cause area: AI Gov = 167, AI Tech = 127, Climate = 100, Biosecurity = 96, Misc & Meta = 86, Nuclear = 51.People tended to apply late in the application period, with more than half of applications arriving within three days of the deadline.The majority of applicants were male:And the UK and US were by far the most common countries of residence for applicants:InterviewsAfter assessing the written applications, we invited 80 applicants (13%) for an interview as the second part of the recruitment process. Interviews were conducted by the research manager of the cause area the person was applying to, and lasted around thirty minutes. We used structured interviews, where each cause area had a standard list of questions that all interviewees were asked, to try to maximize comparability between applicants and improve fairness. Interview questions sought to gauge people's cause-area-specific knowledge, and ability to reason clearly responding to unseen questions. Even though only the people who did best on the initial application were invited to interviews, there was some positive correlation between initial application and interview scores. If this correlation was very strong, that would be some reason not to do an interview at all and just select people based on their application.This was not the case: the interview changed our ordering of applicants considerably.Composition of the final cohortWe had initially projected to fill approximately 20 fellowship spots, with an expected 3-4 candidates per cause area. We aimed to interview at least three times the number of candidates as the positions we planned to offer, to improve our chances of selecting optimal candidates.Because of the large number of excellent applications, we decided to open our fellowship to remote candidates that we couldn't host otherwise. Within cause areas, we selected candidates roughly based on a weighted average of their initial application, and interview (with a 5:3 ratio of weights for application:interview, given the application was far longer than the...

]]>
Oscar Delaney https://forum.effectivealtruism.org/posts/5DR89AbnjGjEcYvu8/hiring-retrospective-era-fellowship-2023 Sat, 05 Aug 2023 22:11:31 +0000 EA - Hiring Retrospective: ERA Fellowship 2023 by Oscar Delaney Oscar Delaney https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:42 no full 6791
3a6QWDhxYTz5dEMag EA - How can we improve Infohazard Governance in EA Biosecurity? by Nadia Montazeri Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How can we improve Infohazard Governance in EA Biosecurity?, published by Nadia Montazeri on August 5, 2023 on The Effective Altruism Forum.Or: "Why EA biosecurity epistemics are whack"The effective altruism (EA) biosecurity community focuses on reducing the risks associated with global biological catastrophes (GCBRs). This includes preparing for pandemics, improving global surveillance, and developing technologies to mitigate the risks of engineered pathogens. While the work of this community is important, there are significant challenges to developing good epistemics, or practices for acquiring and evaluating knowledge, in this area.One major challenge is the issue of infohazards. Infohazards are ideas or information that, if widely disseminated, could cause harm. In the context of biosecurity, this could mean that knowledge of specific pathogens or their capabilities could be used to create bioweapons. As a result, members of the EA biosecurity community are often cautious about sharing information, particularly in online forums where it could be easily disseminated.The issue of infohazards is not straightforward. Even senior biosecurity professionals may have different thresholds for what they consider to be an infohazard. This lack of consensus can make it difficult for junior members to learn what is appropriate to share and discuss. Furthermore, it can be challenging for senior members to provide feedback on the appropriateness of specific information without risking further harm if that information is disseminated to a wider audience. At the moment, all EA biosecurity community-building efforts are essentially gate-kept by Open Phil, whose staff are particularly cautious about infohazards, even compared to experts in the field at the Center for Health Security. Open Phil staff time is chronically scarce, making it impossible to copy and critique their heuristics on infohazards, threat models, and big-picture biosecurity strategy from 1:1 conversations.Challenges for cause and intervention prioritisationThese challenges can lead to a lack of good epistemics within the EA biosecurity community, as well as a deference culture where junior members defer to senior members without fully understanding the reasoning behind their decisions. This can result in a failure to adequately assess the risks associated with GCBRs and make well-informed decisions.The lack of open discourse on biosecurity risks in the EA community is particularly concerning when compared to the thriving online discourse on AI alignment, another core area of longtermism for the EA movement. While there are legitimate reasons for being cautious about sharing information related to biosecurity, this caution may lead to a lack of knowledge sharing and limited opportunities for junior members of the community to learn from experienced members.In the words of a biosecurity researcher who commented on this draft:"Because of this lack of this discussion, it seems that some junior biosecurity EAs fixate on the "gospel of EA biosecurity interventions" - the small number of ideas seen as approved, good, and safe to think about. These ideas seem to take up most of the mind space for many junior folks thinking about what to do in biosecurity. I've been asked "So, you're working in biosecurity, are you going to do PPE or UVC?" one too many times. There are many other interesting defence-dominant interventions, and I get the sense that even some experienced folks are reluctant to explore this landscape."Another example is the difficulty of comparing biorisk and AI risk without engaging in potentially infohazardous concrete threat models. While both are considered core cause areas of longtermism, it is challenging to determine how to prioritise these risks without evaluating the likelihood of a catast...

]]>
Nadia Montazeri https://forum.effectivealtruism.org/posts/3a6QWDhxYTz5dEMag/how-can-we-improve-infohazard-governance-in-ea-biosecurity Sat, 05 Aug 2023 14:01:11 +0000 EA - How can we improve Infohazard Governance in EA Biosecurity? by Nadia Montazeri Nadia Montazeri https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:07 no full 6790
rK3NgqLg3HHDzyLah EA - Announcing Squiggle Hub by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Squiggle Hub, published by Ozzie Gooen on August 5, 2023 on The Effective Altruism Forum.OverviewSquiggle Hub is a platform for the creation and sharing of code written in Squiggle. As with Squiggle, Squiggle Hub is free and open-source.As a refresher, Squiggle is a simple programming language for probabilistic estimation that runs on Javascript. It begins with the syntax of Guesstimate, but generally adds a lot more functionality. See its launch post here for more information, or the website for the full documentation.Squiggle Hub is a lot like a more powerful, but less visual, version of Guesstimate. We hope that it will eventually be much more valuable than Guesstimate is now.If you can use Guesstimate, you can basically use Squiggle. If you already use Guesstimate, try using the same syntax in Squiggle. It should mostly work.All models on Squiggle Hub are public. We've produced several small ones so far, and a few friends have written some as well. We're looking forward to seeing what others make!Looking for Squiggle examples? We've organized some in the docs. The Squiggle EA Forum Tag also has an updating list.Key LinksSquiggle HubSquiggle DiscordThe Squiggle Language HomepageSquiggle Newsletter (Part of the QURI Newsletter)FunctionalitySquiggle (the language)Write functions that accept and return probability distributions. Squiggle generates automatic plots for these.You can provide explicit ranges for functions. This helps with the visualization, and ensures they won't be called outside that range. Like, appleStockPrice(t:[2023, 2060]).Custom plots for distributions and functions. Scales include linear, log, and symlog (like log, but with support for negative values). Symlog scales accept a parameter constant that you can use for adjusting the scale.Make custom tables of any data and functions, with Table.make({data, columns}).Automatic conversion of Monte Carlo samples to distribution plots, using KDE. In the cases where the distribution is heavily skewed, Squiggle does this with a log transformation. The result of this is often more accurate than using histograms. Combined with custom scales, Squiggle much better supports highly skewed distributions (i.e. "5 to 5M") than Guesstimate does.Squiggle supports most of JSON. You can copy & paste JSON data and begin using it in Squiggle.Squiggle runs on Javascript. You can simply take Squiggle code and run it in your website, Observable, Obsidian, and more.[1] If you are making an application that uses probability distributions, you can use the Squiggle components directly.Lots of performance enhancements, library additions, and bug fixes, since Squiggle's initial release.All the docs and grammar are consolidated here. You can use this to feed into Claude, for some Squiggle generation and assistance.Squiggle Editor (The window on the left)With "Autorun", the output will update as you type. Turn this off if you want to run it manually.Adjust the number of Monte Carlo samples. It defaults to 1000, but you can change this in the settings. Also, there are a bunch of other toggles there to play with.A built-in Squiggle code formatter. Useful for lengthy files and for storing data.Function and variable autocomplete.Squiggle Viewer (The window on the right)Click the arrows to open and close visualizations.Select any variable in the right view to "zoom in" on it. Very handy when working on particular estimates of a file!On hover, there's a "show in editor" button, that finds the source code to the variable in question.If your Squiggle file ends with a variable, this will be shown as the "Result." You'll still be able to see all of the other defined variables, but these will be collapsed when you open the model. Useful for selecting a few key findings for display purposes.You can...

]]>
Ozzie Gooen https://forum.effectivealtruism.org/posts/rK3NgqLg3HHDzyLah/announcing-squiggle-hub Sat, 05 Aug 2023 06:29:32 +0000 EA - Announcing Squiggle Hub by Ozzie Gooen Ozzie Gooen https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:44 no full 6788
xpHqEus2xzhFW5icJ EA - ProMED, platform which alerted the world to Covid, might collapse - can EA donors fund it? by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ProMED, platform which alerted the world to Covid, might collapse - can EA donors fund it?, published by freedomandutility on August 5, 2023 on The Effective Altruism Forum."The early warning disease network that alerted the world to the original SARS outbreak and the start of the Covid-19 pandemic appears to be in peril.""In February 2003, it was ProMED that alerted the world to the fact that a new disease that caused pneumonia had started to spread in China's Guangdong province. That disease became known as SARS - severe acute respiratory syndrome. In September 2012, an Egyptian doctor working in Saudi Arabia wrote to ProMED to reveal he had treated a patient who died from pneumonia triggered by a new coronavirus, a camel virus we now know as MERS - Middle East respiratory syndrome. Just before midnight on Dec. 30, 2019, a ProMED "RFI" post - request for information - was the first warning the outside world received of a fast-growing outbreak in Wuhan, China. That was the start of the Covid-19 pandemic."""In its post dated July 14, the ISID revealed it had been having trouble raising the money needed to sustain ProMED. A fundraising drive that aimed for $1 million brought in $20,000. "To put it frankly, ProMED is in dire financial straits," the post said."This doesn't seem very expensive to sustain, and losing ProMED would be a step backwards when we're aiming for an effective global early warning system - surely a group of EA donors could save ProMED?You can directly donate here -/ but it might make sense to get in touch with ProMED before making large donations.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
freedomandutility https://forum.effectivealtruism.org/posts/xpHqEus2xzhFW5icJ/promed-platform-which-alerted-the-world-to-covid-might Sat, 05 Aug 2023 06:27:56 +0000 EA - ProMED, platform which alerted the world to Covid, might collapse - can EA donors fund it? by freedomandutility freedomandutility https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:44 no full 6789
qwSRzAaQdquERuv27 EA - How my view on using games for EA has changed by mmKALLL Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How my view on using games for EA has changed, published by mmKALLL on August 5, 2023 on The Effective Altruism Forum.Last year I wrote a post about the effectiveness of making a video game with the intent of inducing EA ideas naturally through gameplay. The game got released on Steam and received positive impressions, so I wanted to follow up with the results.Although I was originally quite optimistic about using games (and other forms of art) for EA, my current thinking has changed. First of all, the original estimates had various flaws:1) Combining confidence intervals also increases the uncertainty of the variable. Thus none of the calculations were actually conclusive.2) I underestimated the amount of effort/luck needed for marketing, and overestimated the expected number of players.3) Although the game received praise for its depth, in 10 follow-up interviews none of the reviewers reported having changed their behavior or thinking as a result of playing it.Since only around a thousand players experienced the main part of the game, having spent a whole year on it seems inefficient. Additional effort also seems unlikely to greatly improve the outcome.Given this I'm now less confident about whether game development can be reasonably pursued from an EA perspective. The effects don't seem tractable, it's very difficult to know what will be meaningful during development, there's loads of work that's not relevant to the intended message, and any larger influence requires a disproportionately lucky hit in the market.As far as I know, it also seems that the people behind/ and/ have experienced similar problems, more or less abandoning their EA+gaming projects. (Although I haven't been able to confirm the exact reasons yet.)That being said, I do still think the medium has lots of unexplored potential, it just seems very difficult for game developers to utilize. My guess is that people in lead design, director, and producer roles at large studios seem much more likely to be able to induce relevant insights for (a large number of) players. In comparison, spending lots of time and money for an indie game just to temporarily influence a handful of players doesn't seem like a very effective endeavor.Personally, I made the decision to focus more on AI alignment moving forward. It seems more immediate, more important, more tractable, has much higher marginal impact, and could also benefit from shifts in cultural norms. I'd like to recommend looking into it if you're at all interested.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
mmKALLL https://forum.effectivealtruism.org/posts/qwSRzAaQdquERuv27/how-my-view-on-using-games-for-ea-has-changed Sat, 05 Aug 2023 02:03:38 +0000 EA - How my view on using games for EA has changed by mmKALLL mmKALLL https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:28 no full 6787
EAA5YeR6s6Ye2cZjD EA - Soaking Beans - a cost-effectiveness analysis by NickLaing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Soaking Beans - a cost-effectiveness analysis, published by NickLaing on August 6, 2023 on The Effective Altruism Forum.TLDR: On early-stage analysis, persuading people to soak their beans before cooking could cost-effectively save Sub-saharan Africans money, and modestly reduce carbon emissions. (great uncertainty)IntroductionAcross East Africa, hundreds of millions of people cook and eat beans multiple times every week. In Uganda where I live, beans make up an estimated 25% of the average Ugandan's calorie intake and 40% of their daily protein intake. Unfortunately cooking beans takes an absurd amount of time - usually two to three hours using charcoal or wood. The great news is that just soaking beans in water for 6-12 hours reduces cooking time by between 20% and 50% and has no negative effect on bean taste or nutrients . When we tested soaking vs. not soaking, cooking time reduced by half.Despite the obvious benefits of massively reduced cooking time using less fuel., very few people in Uganda soak their beans - nobody I know at least. I estimate under 0.5% of Ugandan families soak beans, but likely far less. I couldn't find any data on bean soaking habits in Uganda or Sub-Saharan Africa in general but I have heard anecdotally that it is common in some countries, perhaps Zimbabwe? (insider knowledge appreciated).Considering Uganda alone, Ugandans eat an estimated 10-20kg of beans per capita every year . Changing the behaviour of even a small percentage of Ugandans by convincing them to soak their beans, has potential benefits of reduced fuel burned, bringing about a range of environmental, economic and health impacts.Soaking beans could be IMPORTANT due to the potential environmental, economic and health benefits gained through reduced cooking time. It is NEGLECTED as no organizations we know of are working on mass media or other interventions to persuade people to soak beans. It may be TRACTABLE as people can immediately experience financial benefit from soaking beans through reduced expenditure on charcoal and time gathering firewood.Potential impact calculationsAssumptionsUptake: For simplicity, we assume that it may be possible to persuade 1% of Ugandans to change their behavior and soak beans. This is just a guess at what could be the outcome of a moderately successful campaign.Fuel/time saving: We estimate a 25% time and fuel saving from soaking beans (ref)Time horizons: If someone starts soaking their beans, once benefits are clear and the change is locked in, it seems likely that they and their family will continue to soak for a long time, possibly even indefinitely. On the other hand, Uganda could electrify faster than expected making much of this analysis obsolete (unlikely), or Ugandans could start eating something other than beans (also unlikely). To be conservative, we have capped our analysis at 5 years of benefit from the campaign.Counterfactual: For the purposes of this analysis we assume that all of the 1% of Ugandans who will change behaviour to soak beans is due to our intervention. This is somewhat reasonable as there are no current efforts promoting bean soaking, and it is very unlikely people will change their behaviour without a specific promotion campaignCO2 emissions prevented through soakingEnvironmental impact could come through two avenues - CO2 equivalent emissions prevented, and deforestation prevented. Although benefits of preventing deforestation could potentially be large, it is difficult to calculate so here we only calculate the potential CO2 emissions prevented, first through reducing charcoal use, then through reducing woodfire user.Charcoal: CO2 equivalent saved by bean soakingAbout 1 in 3 Ugandans use charcoal for cooking. We estimate the Uganda-wide amount of charcoal use for cooking beans through 2 diffrent...

]]>
NickLaing https://forum.effectivealtruism.org/posts/EAA5YeR6s6Ye2cZjD/soaking-beans-a-cost-effectiveness-analysis Sun, 06 Aug 2023 16:51:07 +0000 EA - Soaking Beans - a cost-effectiveness analysis by NickLaing NickLaing https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:37 no full 6797
GpeYCzDSSLznnJ97F EA - The Vegan Blindspot: A Presentation on Wild Animal Suffering by Jack Hancock-Fairs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Vegan Blindspot: A Presentation on Wild Animal Suffering, published by Jack Hancock-Fairs on August 6, 2023 on The Effective Altruism Forum.I recently gave a talk at the UK's Vegan Campout Festival about wild animal suffering. This presentation was created to be persuasive for vegans but I'm also hoping it can serve as an introduction to the topic of wild animal suffering.You can watch it here:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Jack Hancock-Fairs https://forum.effectivealtruism.org/posts/GpeYCzDSSLznnJ97F/the-vegan-blindspot-a-presentation-on-wild-animal-suffering Sun, 06 Aug 2023 12:30:13 +0000 EA - The Vegan Blindspot: A Presentation on Wild Animal Suffering by Jack Hancock-Fairs Jack Hancock-Fairs https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:37 no full 6796
BpxKj5P9dRBB4ged6 EA - An appeal to people who are smarter than me: please help me clarify my thinking about AI by bethhw Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An appeal to people who are smarter than me: please help me clarify my thinking about AI, published by bethhw on August 6, 2023 on The Effective Altruism Forum.Hi,As a disclaimer, this will not be as eloquent or well-informed as most of the other posts on this forum. I'm something of an EA lurker who has a casual interest in philosophy but is wildly out of her intellectual depth on this forum 90% of the time. I'm also somewhat prone to existential anxiety and have a tendency to become hyper-fixated on certain topics - and recently had the misfortune of falling down the AI safety internet rabbit hole.It all started when I used ChatGPT for the first time and started to become concerned that I might lose my (content writing) job to a chatbot. My company then convened a meeting where they reassured as all that despite recent advances in AI, they would continue taking a human-led approach to content creation 'for now' (which wasn't as comforting as they probably intended).In a move I now somewhat regret, I decided my best bet would be to find out as much about the topic as I could. This was around the time that Geoffrey Hinton stepped down from Google, so the first thing I encountered was one of his media appearances. This quickly updated me from 'what if AI takes my job' to 'what if AI kills me'. I was vaguely familiar with the existential risk from AI scenarios already, but had considered them far off enough the the future to not really worry about.In looking for less bleak perspectives than Hinton's, I managed to find the exact opposite (ie that Bankless episode with Eliezer Yudkowsky). From there I was introduced to whole cast of similarly pessimistic AI researchers predicting the imminent extinction of humanity with all the confidence of fundamentalist Christians awaiting the rapture (I'm sure I don't have to name them here - also I apologise if any of you reading this are the aforementioned researchers, I don't mean this to be disparaging in any way - this was just my first impression as one of the uninitiated).I'll be honest and say that I initially thought I'd stumbled across some kind of doomsday cult. I assumed there must be some more moderate expert consensus that the more extreme doomers were diverging from. I spent a good month hunting for the well-established body of evidence projecting a more mundane, steady improvement of technology, where everything in 10 years would be kinda like now but with more sophisticated LLMs and an untold amount of AI-generated spam clogging up the internet. Hours spent scanning think-pieces and news reports for the magic words 'while a minority of researchers expect worst-case scenarios, most experts believe.'. But 'most experts' were nowhere to be found.The closest I could find to a reasonably large sample size was that 2022 (?) survey that gave rise to the much-repeated statistic about half of ML researchers placing a >10% chance on extinction from AI. If anything, that survey seemed reassuring, because the median probability was something around 5% as opposed to the >50% estimated by the most prominent safety experts. There was also the recent XPT forecasting contest, which, again produced generally low p(doom) estimates and seemed to leave most people quibbling over the fact that domain experts were assigning single digit probabilities to AI extinction, while superforecasters thought the odds were below 1%. I couldn't help but think that these seemed like strange differences of opinion to be focused on, when you don't need to look far to find seasoned experts who are convinced that AI doom is all but inevitable within the next few years.I now find myself in a place where I spend every free second scouring the internet for the AGI timelines and p(doom) estimates of anyone who sounds vaguely credible. I'm not ashamed t...

]]>
bethhw https://forum.effectivealtruism.org/posts/BpxKj5P9dRBB4ged6/an-appeal-to-people-who-are-smarter-than-me-please-help-me 10% chance on extinction from AI. If anything, that survey seemed reassuring, because the median probability was something around 5% as opposed to the >50% estimated by the most prominent safety experts. There was also the recent XPT forecasting contest, which, again produced generally low p(doom) estimates and seemed to leave most people quibbling over the fact that domain experts were assigning single digit probabilities to AI extinction, while superforecasters thought the odds were below 1%. I couldn't help but think that these seemed like strange differences of opinion to be focused on, when you don't need to look far to find seasoned experts who are convinced that AI doom is all but inevitable within the next few years.I now find myself in a place where I spend every free second scouring the internet for the AGI timelines and p(doom) estimates of anyone who sounds vaguely credible. I'm not ashamed t...]]> Sun, 06 Aug 2023 12:13:04 +0000 EA - An appeal to people who are smarter than me: please help me clarify my thinking about AI by bethhw 10% chance on extinction from AI. If anything, that survey seemed reassuring, because the median probability was something around 5% as opposed to the >50% estimated by the most prominent safety experts. There was also the recent XPT forecasting contest, which, again produced generally low p(doom) estimates and seemed to leave most people quibbling over the fact that domain experts were assigning single digit probabilities to AI extinction, while superforecasters thought the odds were below 1%. I couldn't help but think that these seemed like strange differences of opinion to be focused on, when you don't need to look far to find seasoned experts who are convinced that AI doom is all but inevitable within the next few years.I now find myself in a place where I spend every free second scouring the internet for the AGI timelines and p(doom) estimates of anyone who sounds vaguely credible. I'm not ashamed t...]]> 10% chance on extinction from AI. If anything, that survey seemed reassuring, because the median probability was something around 5% as opposed to the >50% estimated by the most prominent safety experts. There was also the recent XPT forecasting contest, which, again produced generally low p(doom) estimates and seemed to leave most people quibbling over the fact that domain experts were assigning single digit probabilities to AI extinction, while superforecasters thought the odds were below 1%. I couldn't help but think that these seemed like strange differences of opinion to be focused on, when you don't need to look far to find seasoned experts who are convinced that AI doom is all but inevitable within the next few years.I now find myself in a place where I spend every free second scouring the internet for the AGI timelines and p(doom) estimates of anyone who sounds vaguely credible. I'm not ashamed t...]]> bethhw https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:20 no full 6795
qgbLr6es2jCwwcGuH EA - Best Use of 2 Minutes this Month (U.S.) by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Best Use of 2 Minutes this Month (U.S.), published by Rockwell on August 7, 2023 on The Effective Altruism Forum.Actions that have a large impact sometimes don't feel like much. To counteract that bias, I'm sharing arguably the best use of two minutes this month for those in the U.S.BackgroundIn May, the US Supreme Court upheld the ability of US states to require certain standards for animal products sold within their borders, e.g. California's Prop 12, which banned the sale of animal products that involve certain intensive confinement practices.It was a huge victory! But after their defeat in the Supreme Court, the animal farming industry has turned to Congress, pushing the EATS Act.The proposed legislation would take away state power to regulate the kind of agricultural products that enter their borders. Essentially, if any one state permits the production or sale of a particular agricultural product, every other state could have to do so as well, regardless of how dangerous or unethical the product is and regardless of existing state legislation. Just a small sample of laws that could be subverted by the EATS Act include those governing:Chemicals in baby food containersHarmful pesticides in communities and applying them directly to crops for human consumptionArsenic in feed for animals slaughtered for food and other poison controlChild laborPuppy millsWildlife protectionPollutant and emissions standards, e.g., bans on spraying sewage on crops directly before they are sold to peopleFire hazardsDrugs that contain opioid properties and alcohol and tobacco sales to minorsAnd, of course, legislation like Proposition 12 that improves animal welfare requirements and has been a dominant focus of the pro-animal movement.2 Minutes of Action10 seconds: Send a written message to your legislators~1.5 minutes: Call your legislatorsFind themCall your one U.S. representative and two U.S. senators (found in the federal section), ideally during business hoursRead a script like this one:As your constituent, I urge you to oppose the Exposing Agricultural Trade Suppression' (EATS) Act (S. 2619/HR 4999). This is a dangerous, regressive bill that will undo decades of protections for farmed animals and cause them to endure even more suffering for the profits of animal agricultural interests. It would also have devastating consequences for humans and the environment."Note that there is another bill known as the EATS Act of 2023, which is Enhance Access To SNAP. Please be specific that you are asking them to oppose the Exposing Agricultural Trade Suppression Act.For more information, see: (1), (2), (3).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Rockwell https://forum.effectivealtruism.org/posts/qgbLr6es2jCwwcGuH/best-use-of-2-minutes-this-month-u-s Mon, 07 Aug 2023 21:22:07 +0000 EA - Best Use of 2 Minutes this Month (U.S.) by Rockwell Rockwell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:46 no full 6805
Nvw7dGi4kmuXCDDhH EA - I saved a kid's life today by michel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I saved a kid's life today, published by michel on August 7, 2023 on The Effective Altruism Forum.I'm working writing more, quicker, and not directly for an EA Forum audience. This is a post copied over from my blog.I wonder what they're doing today, the kid whose life I saved. Maybe playing with other kids in their village. Maybe seeking shade with his or her siblings, trying to escape the sub-Saharan African heat. Maybe being held by their mother or grandmother, nurtured.Whatever they're doing today, some day they'll grow up, and they'll live. They'll have a first kiss, a favorite dance, a hobby that makes them feel free, a role model they look up to, a best friend. all of it. They'll live. And I think it will be because of what I did today.It isn't thrilling or adventurous, saving a life in the 21st century. I opened my laptop, clicked my way to a bookmarked website, and donated to a standout charity. Someone watching me couldn't be blamed for assuming I'm doing nothing of much importance, maybe answering some text messages about plans tonight. The whole thing (the donation part after you check in with what you really care about) probably took less than 5-minutes.What did I do to wield this power? Nothing. In an important sense, I think I did nothing to be able to save a life without getting up from my couch. (I certainly did nothing to 'deserve' this power). I just won the birth lottery. I was born in an upper-middle class family, born on track to get a good education, and -just like that - born to become one of the richest people in the world.I didn't do anything crazy to make slightly more than the median US income, yet here I am making decisions about whether someone lives or dies.I just wish it wasn't so easy. The five-thousand dollars I donated today isn't a trivial amount, but it's much more trivial than a human life. Modern economies of abundance should have ensured that it costs me more than a new car I don't need to make the difference between a kid dying before their fifth birthday and that kid meeting their grandkids. Yet here I am, sitting on my couch, holding a life I cannot see - but that exists as so much more than an abstraction - in my hands.Please, I think, as I walk by people with expensive cars and watches, and picture a little girl celebrating her birthday, please don't tell yourself you deserve it.Oh, and about that taboo of not talking about donations: fuck that. Imagine if sharing how I feel about donating could inspire at least one other person to join the project of giving what we can, but I stayed quite because of worries that I would come across as self-righteous or self-centered." I worry much more about the self-centeredness I would be expressing in that silence.I'm not that different from the people who I expect to read this post. See how you compare to the rest of the world here.Does $5,000 seem like a lot? Find out why the instagram ads telling you you can save a life for less aren't telling the whole truth.If someone ever saves my life, the first thing I'll ask is whether they "did it for the right reasons."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
michel https://forum.effectivealtruism.org/posts/Nvw7dGi4kmuXCDDhH/i-saved-a-kid-s-life-today Mon, 07 Aug 2023 13:47:05 +0000 EA - I saved a kid's life today by michel michel https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:58 no full 6803
SCKg5N9oHdBDxHvEM EA - Getting into an EA-aligned organisation mid-career by Patrick Gruban Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting into an EA-aligned organisation mid-career, published by Patrick Gruban on August 8, 2023 on The Effective Altruism Forum.SummaryIn this article, I'll share my journey of joining an effective altruism (EA) organisation in the middle of my career. I'll talk about the misconceptions I initially held, how I came to understand the fundamental ideas of EA, increased my involvement in the community, and started connecting with more experienced members. I will also share how I became more active through volunteer work and then intentionally worked towards transitioning my career to have more impact.IntroductionWithin the EA community, you will find many young individuals who started engaging with EA ideas during their university years, devoting significant time to podcasts and articles. While the movement has a large youth presence, seasoned professionals are less common. EA-aligned organisations are eager for professionals who embody their core values. However, immersing oneself in these values can be a challenge if you're already occupied with a full-time job and commitments to family and friends.I was introduced to EA at the age of 40 and without a university degree. At that point, I couldn't see a clear path to working directly in a high-impact area. Yet, eight years later, I became the Co-Director at EA Germany, guiding individuals on their quest for more impactful career paths. Looking back, I believe I could have made the career transition sooner had I had fewer misunderstandings and focused more on specific areas.My Misconceptions about EAWhen I initially began attending my local EA group meetings, I had misconceptions about the requisites for deeper engagement with the EA community. I erroneously believed that it required a complete dedication to veganism, a commitment to donating at least 10% of income, and a lifestyle wholly centred around maximising impact.One of my first experiences with the group involved a discussion on animal welfare. As the co-founder of a company that promotes higher animal-welfare yarn from sheep's wool, I felt aligned with their mission. Yet, I noticed that many participants advocated for an entirely vegan lifestyle, excluding all animal products. Their backgrounds, largely consisting of younger students, also seemed to widen the gap of understanding between us.Later, I met individuals who committed to donating 10% of their income to impactful charities. I admired their dedication but was uncertain if I could match that contribution level. I also came across people who believed that EA demanded a life entirely devoted to maximising the overall good.However, as time passed, I understood these were individual choices rather than a standard or norm within the EA community. I met actively involved EA members who were not vegans, concentrated more on their careers than on donations, and enjoyed leisure activities unrelated to their impact work. I eventually concluded that, for me, prioritising a career change held more significance than modifying my lifestyle.Core Values of EAWhile it is true that choices are individual, and people have different values and priorities in their private lives, EA has principles that most people follow. These include prioritisation, impartial altruism, open truth-seeking, and a collaborative spirit. Having individuals who adhere to these values in the community and organisations fosters easier collaboration, as there is a higher level of trust and underlying agreement compared to society at large.My previous experiences in business and political activism taught me to present myself in the best light, select beliefs that supported my purpose, and prioritise self-advocacy over collaboration. Engaging more with the EA community required being open to changing my beliefs, practising epistemic hu...

]]>
Patrick Gruban https://forum.effectivealtruism.org/posts/SCKg5N9oHdBDxHvEM/getting-into-an-ea-aligned-organisation-mid-career Tue, 08 Aug 2023 20:19:52 +0000 EA - Getting into an EA-aligned organisation mid-career by Patrick Gruban Patrick Gruban https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:31 no full 6811
g9gfXhNhLdJxSFBLW EA - Fundamentals of Global Priorities Research in Economics Syllabus by poliboni Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fundamentals of Global Priorities Research in Economics Syllabus, published by poliboni on August 8, 2023 on The Effective Altruism Forum.This is a 6-9 session syllabus on the fundamentals of global priorities research in economics.The purpose is to help economics students and researchers interested in GPR get a big picture view of the field and come up with research ideas.Because of this focus on fundamentals, the readings are rather light on economics and heavy on philosophy and empirics of different cause areas.Previous versions of this list were used internally at GPI and during GPI's Oxford Global Priorities Fellowship in 2023, where the prompts guided individual reflection and group discussion.Many thanks to the following for their help creating and improving this syllabus: Gustav Alexandrie, Loren Fryxell, Arden Koehler, Luis Mota, and Charlotte Siegmann. The readings below don't necessarily represent their views, GPI's, or mine.1. Philosophical FoundationsTopic: Global priorities research is a normative enquiry. It is primarily interested in understanding what we should do in the face of global problems, and only derivatively interested in how those problems work/facts about the world that surround them.In this session, we will focus on understanding what ethical theory is, what some of the most important moral theories are, how these theories relate to normative thinking in economics, and what these theories imply about what the most important causes are.Literature:MacAskill, William. 2019. "The Definition of Effective Altruism" (Section 4 is optional)Prompt 1: How aligned with your aims as a researcher is the definition of Effective Altruism proposed in this article (p. 14)?Trammell, Philip. 2022. Philosophical foundations (Slides 1-2, 5-9, 12-16, 20-24)Prompt 2: What is your best guess theory of welfare? How much do you think it matters to get this right?Prompt 3: What is your best guess view in axiology? What are your key uncertainties about it? Do you think axiology is all that matters in determining what one ought to do (excluding empirical uncertainty)?Trammell, Philip. 2022. Three sins of economics (Slides 1-24, 27)Prompt 4: What are your "normative defaults"? What are views here that you would like to explore more?Prompt 5: Do you agree that economics has the normative defaults identified in the reading? Can you give examples of economics work that avoids these?Prompt 6: Insofar as economists tend to commit the 3 'sins', what do you think of the strategy of finding problems which are underprovided by those views?Extra reading:Wilkinson, Hayden. 2022. "Key Lessons From Global Priorities Research" (watch video here - slides are not quite self-contained)Which key results are most interesting or surprising to you and why? Do you think any of them are wrong?Greaves, Hilary. 2017. "Population axiology"Broome, John. 1996. "The Welfare Economics of Population"2. Effective altruism: differences in impact and cost-effectiveness estimatesTopic: In this session we tackle two key issues in cause prioritization. First, how is impact distributed across interventions (or importance across problems). Second, how to compare the cost-effectiveness of interventions which are differentially well-grounded.Literature:Kokotajlo, Daniel and Oprea, Alexandra. 2020. "Counterproductive Altruism: The Other Heavy Tail" (Skip Sections I and II)Prompt 1: Do you think there is a heavy right tail of opportunities to do good? What about a heavy left tail?Prompt 2: How do the distributions of impact of interventions aimed at the near-term and long-term compare (specifically, in terms of the heaviness of their tails)?Karnofsky, Holden. 2016. "Why we can't take expected value estimates literally (even when they're unbiased)"Prompt 3: What, in your view, is ...

]]>
poliboni https://forum.effectivealtruism.org/posts/g9gfXhNhLdJxSFBLW/fundamentals-of-global-priorities-research-in-economics Tue, 08 Aug 2023 18:48:42 +0000 EA - Fundamentals of Global Priorities Research in Economics Syllabus by poliboni poliboni https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:39 no full 6810
Kyh84cxzWcaKonHFG EA - OpenAI's massive push to make superintelligence safe in 4 years or less (Jan Leike on the 80,000 Hours Podcast) by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI's massive push to make superintelligence safe in 4 years or less (Jan Leike on the 80,000 Hours Podcast), published by 80000 Hours on August 9, 2023 on The Effective Altruism Forum.We just published an interview: Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less. You can click through for the audio, a full transcript, and related links. Below are the episode summary and some key excerpts.Episode summaryIf you're thinking about how do you align the superintelligence - how do you align the system that's vastly smarter than humans? - I don't know. I don't have an answer. I don't think anyone really has an answer.But it's also not the problem that we fundamentally need to solve. Maybe this problem isn't even solvable by humans who live today. But there's this easier problem, which is how do you align the system that is the next generation? How do you align GPT-N+1? And that is a substantially easier problem.Jan LeikeIn July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.Today's guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, ".the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. . Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue."Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it's not just throwing compute at the problem - it's also hiring dozens of scientists and engineers to build out the Superalignment team.Plenty of people are pessimistic that this can be done at all, let alone in four years. But Jan is guardedly optimistic. As he explains:Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on. and I think it's pretty likely going to work, actually. And that's really, really wild, and it's really exciting. It's like we have this hard problem that we've been talking about for years and years and years, and now we have a real shot at actually solving it. And that'd be so good if we did.Jan thinks that this work is actually the most scientifically interesting part of machine learning. Rather than just throwing more chips and more data at a training run, this work requires actually understanding how these models work and how they think. The answers are likely to be breakthroughs on the level of solving the mysteries of the human brain.The plan, in a nutshell, is to get AI to help us solve alignment. That might sound a bit crazy - as one person described it, "like using one fire to put out another fire."But Jan's thinking is this: the core problem is that AI capabilities will keep getting better and the challenge of monitoring cutting-edge models will keep getting harder, while human intelligence stays more or less the same. To have any hope of ensuring safety, we need our ability to monitor, understand, and design ML models to advance at the same pace as the complexity of the models themselves.And there's an obvious way to do that: get AI to do most of the work, such that the sophistication of the AIs that need aligning, and the sophistication of the AIs doing the aligning, advance in lockstep.Jan doesn't want to produce machine learning models capable of doing ML research. But such models are coming, whether we like it or not. And at that point Jan wants to make sure we turn them towards useful alignment and safety work, as much or more than we use them to...

]]>
80000_Hours https://forum.effectivealtruism.org/posts/Kyh84cxzWcaKonHFG/openai-s-massive-push-to-make-superintelligence-safe-in-4 Wed, 09 Aug 2023 20:42:49 +0000 EA - OpenAI's massive push to make superintelligence safe in 4 years or less (Jan Leike on the 80,000 Hours Podcast) by 80000 Hours 80000_Hours https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 27:46 no full 6818
jqsiixtDrs66Ykp5C EA - Skilled volunteering opportunity at Animal Policy International by Rainer Kravets Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Skilled volunteering opportunity at Animal Policy International, published by Rainer Kravets on August 9, 2023 on The Effective Altruism Forum.Animal Policy International is a Charity Entrepreneurship incubated organisation focused on protecting animal welfare in international trade.We collaborate with policymakers, farmers, and NGOs towards responsible imports that align with standards and expectations in higher-welfare regions, create fairer market conditions for local farmers, and help to promote higher animal welfare standards in low-welfare countries. You can read more about our work and focus regions in our intro post.We are looking for skilled volunteers primarily in two areas: research and social media. However, if you are interested in contributing in other domains, don't hesitate to reach out.ResearchTo build the evidence base for our ask, we plan to conduct further research into various topics, including countries' animal welfare legislation, international supply chains, and write reports for public sharing. We are looking for volunteers to analyse our existing research, identify gaps, conduct additional research, and write and design final reports to be presented to key stakeholders.Importance: the research will contribute directly to answering the concerns put forward by policymakers and other stakeholders.Social mediaWe are looking for a person who would contribute to our social media strategy development and execution. This will include designing visuals, writing captions, and publishing on social media platforms, ideally on a regular basis - but you can also help us short-term.Importance: being present on social media increases API's credibility and helps to raise awareness of the issue among key stakeholders.Qualities we are looking forStrong internal motivation to help reduce animal sufferingInterest in this area of volunteeringSome experience in the area (research or communications)Self-motivation to get things done and be an accountable team memberWhat's in it for youBe part of an impact-focused, caring, and agile teamHigh-impact volunteering opportunity: help progress a potentially high impact intervention for animalsExperience working in a fast-paced charity start-up environmentHaving volunteering experience on your CV can enhance appealRegular feedback on workRegular calls with team membersUnsure, whether you should consider skilled volunteering? Check out this post by Sofia Balderson.We're looking for volunteers who have short-term availability for our research projects. If you are interested in volunteering for Animal Policy International then please fill in this form. We look forward to hearing from you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Rainer Kravets https://forum.effectivealtruism.org/posts/jqsiixtDrs66Ykp5C/skilled-volunteering-opportunity-at-animal-policy Wed, 09 Aug 2023 00:50:42 +0000 EA - Skilled volunteering opportunity at Animal Policy International by Rainer Kravets Rainer Kravets https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:43 no full 6813
idp4GEqWp24ocgfes EA - The Hinge of History Hypothesis: Reply to MacAskill (Andreas Mogensen) by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Hinge of History Hypothesis: Reply to MacAskill (Andreas Mogensen), published by Global Priorities Institute on August 8, 2023 on The Effective Altruism Forum.This paper was originally published as a working paper in August 2022 and is forthcoming in Analysis.AbstractSome believe that the current era is uniquely important with respect to how well the rest of human history goes. Following Parfit, call this the Hinge of History Hypothesis. Recently, MacAskill has argued that our era is actually very unlikely to be especially influential in the way asserted by the Hinge of History Hypothesis. I respond to MacAskill, pointing to important unresolved ambiguities in his proposed definition of what it means for a time to be influential and criticizing the two arguments used to cast doubt on the claim that the current era is a uniquely important moment in human history.IntroductionSome believe that the current era is a uniquely important moment in human history. We are living, they claim, at a time of unprecedented risk, heralded by the advent of nuclear weapons and other world-shaping technologies. Only by responding wisely to the anthropogenic risks we now face can we survive into the future and fulfil our potential as a species (Sagan 1994; Parfit 2011, Bostrom 2014, Ord 2020).Following Parfit (2011), call the hypothesis that we live at such a uniquely important time the Hinge of History Hypothesis (3H). Recently, MacAskill (2022) has argued that 3H is "quite unlikely to be true." (332) He interprets 3H as the claim that "[w]e are among the very most influential people ever, out of a truly astronomical number of people who will ever live" (339) and defines a period of time as influential in proportion to "how much expected good one can do with the direct expenditure (rather than investment) of a unit of resources at [that] time" (335), where 'investment' may refer "to both financial investment, and to using one's time to grow the number of people who are also impartial altruists." (335 n.13) MacAskill thus relates the truth or falsity of 3H to the practical question of the optimal time at which to expend resources to achieve morally good outcomes, considered impartially.MacAskill presents two arguments against 3H. The first is an argument that the prior probability that we are living at the most influential time in history should be very low, because we should reason as if we represent a random sample from observers in our reference class. The second is an inductive argument that we should expect future people to have more influence over human history because the overall trend throughout human history is for later generations to be more influential.In my view, neither of these arguments should convince us. As I argue in section 2, MacAskill's priors argument relies on formulating 3H in a way that does not conform to how this hypothesis is traditionally understood. Moreover, I will argue in section 3 that MacAskill's definition of what it means for a time to be influential leaves too many unresolved ambiguities for his inductive argument to work.Read the rest of the paperThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Global Priorities Institute https://forum.effectivealtruism.org/posts/idp4GEqWp24ocgfes/the-hinge-of-history-hypothesis-reply-to-macaskill-andreas Tue, 08 Aug 2023 21:37:52 +0000 EA - The Hinge of History Hypothesis: Reply to MacAskill (Andreas Mogensen) by Global Priorities Institute Global Priorities Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:07 no full 6812
7RrjXQhGgAJiDLWYR EA - What Does a Marginal Grant at LTFF Look Like? Funding Priorities and Grantmaking Thresholds at the Long-Term Future Fund by Linch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Does a Marginal Grant at LTFF Look Like? Funding Priorities and Grantmaking Thresholds at the Long-Term Future Fund, published by Linch on August 10, 2023 on The Effective Altruism Forum.The Long-Term Future Fund (LTFF) makes small, targeted grants with the aim of improving the long-term trajectory of humanity. We are currently fundraising to cover our grantmaking budget for the next 6 months. We would like to give donors more insight into how we prioritize different projects, so they have a better sense of how we plan to spend their marginal dollar. Below, we've compiled fictional but representative grants to illustrate what sort of projects we might fund depending on how much we raise for the next 6 months, assuming we receive grant applications at a similar rate and quality to the recent past.Our motivations for presenting this information are a) to provide transparency about how the LTFF works, and b) to move the EA and longtermist donor communities towards a more accurate understanding of what their donations are used for. Sometimes, when people donate to charities (EA or otherwise), they may wrongly assume that their donations go towards funding the average, or even more optimistically, the best work of those charities. However, it is usually more useful to consider the marginal impact for the world that additional dollars would buy. By offering illustrative examples of the sort of projects we might fund at different levels of funding, we hope to give potential donors a better sense of what their donations might buy, depending on how much funding has already been committed. We hope that this post will help improve the quality of thinking and discussions about charities in the EA and longtermist communities.For donors who believe that the current marginal LTFF grants are better than marginal funding of all other organizations, please consider donating! Compared to the last 3 years, we now have both a) unusually high quality and quantity of applications and b) unusually low amount of donations, which means we'll have to raise our bar substantially if we do not receive additional donations. This is an especially good time to donate, as donations are matched 2:1 by Open Philanthropy (OP donates $2 for every $1 you donate). That said, if you instead believe that marginal funding of another organization is (between 1x and 3x, depending on how you view marginal OP money) better than current marginal LTFF grants, then please do not donate to us, and instead donate to them and/or save the money for later.Background on the LTFFWe are committed to improving the long-term trajectory of civilization, with a particular focus on reducing global catastrophic risks.We specialize in funding early stage projects rather than established organizations.From March 2022 to March 2023, we received 878 applications and funded 263 as grants, worth ~$9.1M dollars total (average $34.6k/grant). To our knowledge, we have made more small grants in this time period than any other longtermist- or EA- motivated funder.Other funders in this space include Open Philanthropy, Survival and Flourishing Fund, and recently Lightspeed Grants and Manifund.Historically, ~40% of our funding has come from Open Phil. However, we are trying to become more independent of Open Phil. As a temporary stopgap measure, Open Phil is matching donations to LTFF 2:1 instead of granting to us directly.100% of money we fundraise for LTFF qua LTFF goes to grantees; we fundraise separately and privately for operational costs.We try to be very willing to fund weird things that the grantmakers' inside views believe are really impactful for the long-term future.You can read more about our work at our website here, or in our accompanying payout report here.Methodology for this analysisAt the LTFF, we assign ea...

]]>
Linch https://forum.effectivealtruism.org/posts/7RrjXQhGgAJiDLWYR/what-does-a-marginal-grant-at-ltff-look-like-funding Thu, 10 Aug 2023 21:31:33 +0000 EA - What Does a Marginal Grant at LTFF Look Like? Funding Priorities and Grantmaking Thresholds at the Long-Term Future Fund by Linch Linch https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:09 no full 6827
DJEbnY4P5gcEaN3WA EA - Manifolio: The tool for making Kelly optimal bets on Manifold Markets by Will Howard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Manifolio: The tool for making Kelly optimal bets on Manifold Markets, published by Will Howard on August 10, 2023 on The Effective Altruism Forum.I've made a calculator that makes it easy to make correctly sized bets on Manifold. You just put in the market and your estimate of the true probability, and it tells you the right amount to bet according to the Kelly criterion."The right amount to bet according to the Kelly criterion" means maximising the expected logarithm of your wealth.There is a simple formula for this in the case of bets with fixed odds, but this doesn't work well on prediction markets in general because the market moves in response to your bet. Manifolio accounts for this, plus some other things like the risk from other bets in your portfolio. I've aimed to make it simple and robust so you can focus on estimating the probability and trust that you are betting the right amount based on this.You can use it here (with a market prefilled as an example), or read a more detailed guide in the github readme. It's also available as a chrome extension... which currently has to be installed in a slightly roundabout way (instructions also in the readme). I'll update here when it's approved in the chrome web store.Why bet Kelly (redux)?Much ink has been spilled about why maximising the logarithm of your wealth is a good thing to do. I'll just give a brief pitch for why it is probably the best strategy, both for you, and for "the good of the epistemic environment".For youGiven a specific wealth goal, it minimises the expected time to reach that goal compared to any other strategy.It maximises wealth in the median (50th percentile) outcome.Furthermore, for any particular percentile it gets arbitrarily close to being the best strategy as the number of bets gets very large. So if you are about to participate in 100 coin flip bets in a row, even if you know you are going to get the 90th percentile luckiest outcome, the optimal amount to bet is still close to the Kelly optimal amount (just marginally higher). In my opinion this is the most compelling self-interested reason, even if you get very lucky or unlucky it's never far off the best strategy.(the above are all in the limit of a large number of iterated bets)There are also some horror stories of how people do when using a more intuition based approach... it's surprisingly easy to lose (fake) money even when you have favourable odds.For the good of the epistemic environmentA marketplace consisting of Kelly bettors learns at the optimal rate, in the following sense:Special property 1: the market will produce an equilibrium probability that is the wealth weighted average of each participant's individual probability estimate. In other words it behaves as if the relative wealth of each participant is the prior on them being correct.Special property 2: When the market resolves one way or the other, the relative wealth distribution ends up being updated in a perfectly Bayesian manner. When it comes time to bet on the next market, the new wealth distribution is the correctly updated prior on each participant being right, as if you had gone through and calculated Bayes' rule for each of them.Together these mean that, if everyone bets according to the Kelly criterion, then after many iterations the relative wealth of each participant ends up being the best possible indicator of their predictive ability. And the equilibrium probability of each market is the best possible estimate of the probability, given the track record of each participant. This is a pretty strong result!I'd love to hear any feedback people have on this. You can leave a comment here or contact me by email.Thanks to the people who funded this project on Manifund, and everyone who has given feedback and helped me test it outThis is shown...

]]>
Will Howard https://forum.effectivealtruism.org/posts/DJEbnY4P5gcEaN3WA/manifolio-the-tool-for-making-kelly-optimal-bets-on-manifold Thu, 10 Aug 2023 18:03:12 +0000 EA - Manifolio: The tool for making Kelly optimal bets on Manifold Markets by Will Howard Will Howard https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:47 no full 6826
oPao8avpq48GPvzDZ EA - Two Years Community Building, Ten Lessons (Re)Learned by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two Years Community Building, Ten Lessons (Re)Learned, published by Rockwell on August 10, 2023 on The Effective Altruism Forum.Today is my two-year anniversary with EA NYC, where I serve as director. To say I've learned a lot would be a tremendous understatement. The more visible parts of that learning are often technical: who's doing what in which organization, why [niche thing I'd never heard of] is actually really important, what mentorship opportunities exist for people with [these very specific qualities], how to wrangle 100+ people into a group photo and still have everyone visible. (I've gotten super good at that last part!) But I've also learned - or relearned - some much larger life lessons.None of the below lessons are wholly novel, but I think they're worth stating for the broader community. Regardless of where I personally am in five or ten years, I think this is a list I'll return to. The contents are simple but so, so easy to forget. Without further ado, here are ten major lessons I've from my tenure thus far:1. Many people who want to do good in the world feel alone as a resultThrough my role with EA NYC, I'm often among people's first touchpoints with the EA community. I constantly have conversations with people who feel deeply unfulfilled in their life and their life's work, with an underlying knowledge they're not currently doing much of meaning. Often, they are in their 30s or 40s. Often, they are - or think they are - the only one in their circles who wants to do something that matters. To them, even just learning the EA community exists feels like a door into a new world, one where they no longer need to feel deeply alone in their desire to do good. There are many people who want to do good and are willing to devote their lives to it, it's just a matter of finding them and helping them believe it's possible. Which leads me to:2. Extremely impressive people have imposter syndromeI routinely speak with extremely accomplished individuals who I'm so excited to have join the community and have multiple promising routes to impact lying before them. Yet, they profoundly question their abilities. More than once, I've felt quietly intimidated while speaking to someone, only to have them then ask me, "Is there a place for me in EA? Do you think someone like me has anything to bring to the table?" People you would least expect are plagued by self-doubt. People with a range of backgrounds and skill sets are nearly universally poorly calibrated when it comes to their own potential. As a result:3. People often want "permission" to do goodMany people are waiting for an external green light to signal "you can do something that matters." Others, like the SuccessIf advising team, have explained this concept more in-depth than I will here. In a word, people often want an external party to say, "Yes, this is okay to do," before they do it. That external validation or assurance can be a deciding factor in whether or not they take an action. And as community builders and community members, it's our job to provide that permission slip, reassuring others that their contributions matter, their thinking is rational, and they're not alone in this journey.Tangentially, I worry that EA's focus on professionalized channels toward improving the world further fosters "permission culture" in comparison to, for example, grassroots social movements.A job offer should not be a prerequisite to making an impact, and a rejection letter certainly shouldn't stop you from picking up your phone and calling your elected official or volunteering for a human challenge trial. Being able to keep going and keep believing in your ability to have an impact can mean recognizing that:4. Tempering rejection is a learned skillA lot of my observations above come back to a fear of rejection....

]]>
Rockwell https://forum.effectivealtruism.org/posts/oPao8avpq48GPvzDZ/two-years-community-building-ten-lessons-re-learned Thu, 10 Aug 2023 11:05:06 +0000 EA - Two Years Community Building, Ten Lessons (Re)Learned by Rockwell Rockwell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:32 no full 6825
SK4PdjbBJHtCc97SA EA - Utilitarianism.net Audio by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Utilitarianism.net Audio, published by Richard Y Chappell on August 10, 2023 on The Effective Altruism Forum.Hi all, a quick announcement that Utilitarianism.net now has AI-narrated audio (courtesy of the good folks at Type III Audio)! Click the audio icon on almost any page of the website, or subscribe to the podcast to listen to it all in your preferred podcast app.Enjoy!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Richard Y Chappell https://forum.effectivealtruism.org/posts/SK4PdjbBJHtCc97SA/utilitarianism-net-audio Thu, 10 Aug 2023 10:35:18 +0000 EA - Utilitarianism.net Audio by Richard Y Chappell Richard Y Chappell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:37 no full 6824
3kMQTjtdWqkxGuWxB EA - Update on cause area focus working group by Bastian Stern Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update on cause area focus working group, published by Bastian Stern on August 10, 2023 on The Effective Altruism Forum.Prompted by the FTX collapse, the rapid progress in AI, and increased mainstream acceptance of AI risk concerns, there has recently been a fair amount of discussion among EAs whether it would make sense to rebalance the movement's portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes. In March 2023, Open Philanthropy's Alexander Berger invited Claire Zabel (Open Phil), James Snowden (Open Phil), Max Dalton (CEA), Nicole Ross (CEA), Niel Bowerman (80k), Will MacAskill (GPI), and myself (Open Phil, staffing the group) to join a working group on this and related questions.In the end, the group only ended up having two meetings, in part because it proved more difficult than expected to surface key action-relevant disagreements. Prior to the first session, participants circulated relevant memos and their initial thoughts on the topic. The group also did a small amount of evidence-gathering on how the FTX collapse has impacted the perception of EA among key target audiences. At the end of the process, working group members filled in an anonymous survey where they specified their level of agreement with a list of ideas/hypotheses that were generated during the two sessions. This included many proposals/questions for which this group/its members aren't the relevant decision-makers, e.g. proposals about actions taken/changes made by various organisations.The idea behind discussing these wasn't for this group to make any sort of direct decisions about them, but rather to get a better sense of what people thought about them in the abstract, in the hope that this might sharpen the discussion about the broader question at issue.Some points of significant agreement:Overall, there seems to have been near-consensus that relative to the status quo, it would be desirable for the movement to invest more heavily in cause-area-specific outreach, at least as an experiment, and less (in proportional terms) in outreach that uses EA/EA-related framings. At the same time, several participants also expressed concern about overshooting by scaling back on forms of outreach with a strong track-record and thereby "throwing out the baby with the bathwater", and there seems to have been consensus that a non-trivial fraction of outreach efforts that are framed in EA terms are still worth supporting.Consistently with this, when asked in the final survey to what extent the EA movement should rebalance its portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes, responses generally ranged from 6-8 on a 10-point scale (where 5=stick with the status quo allocation, 0=rebalance 100% to outreach using EA framings, 10=rebalance 100% to outreached framed in terms of constituent causes), with one respondent selecting 3/10.There was consensus that it would be good if CEA replaced one of its (currently) three annual conferences with a conference that's explicitly framed as being about x-risk or AI-risk focused conference. This was the most concrete recommendation to come out of this working group. My sense from the discussion was that this consensus was mainly driven by people agreeing that there would be value of information to be gained from trying this; I perceived more disagreement about how likely it is that this would prove a good permanent change.In response to a corresponding prompt (" . at least one of the EAGs should get replaced by an x-risk or AI-risk focused conference ."), answers ranged from 7-9 (mean 7.9), on a scale where 0=ve...

]]>
Bastian_Stern https://forum.effectivealtruism.org/posts/3kMQTjtdWqkxGuWxB/update-on-cause-area-focus-working-group Thu, 10 Aug 2023 02:41:10 +0000 EA - Update on cause area focus working group by Bastian Stern Bastian_Stern https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:03 no full 6821
L8BfajG59MCyCciwf EA - The cost of (not) taking marketing seriously by James Odene [User-Friendly] Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The cost of (not) taking marketing seriously, published by James Odene [User-Friendly] on August 11, 2023 on The Effective Altruism Forum.TLDR: EA organisations are forfeiting impact if they don't take marketing seriously. User-friendly are offering a free and friendly 'sense-check' consultancy call to all EA-aligned organisations. Email Letstalk@userfriendly.org.uk to enquire.IntroductionBy not properly addressing the role of marketing in the effective altruism movement, there is substantial impact being forfeited; potentially ~20% impact loss, though in some cases, likely even more. Our efforts, no matter how rational or impact-seeking, can only create the desired change if they are effectively communicated and disseminated to our intended audience. Marketing provides us with the tools to bridge the gap between intention and action, between knowledge and support, and between ideas and real-world impact. If we continue to communicate without adequate marketing consideration, we are consistently short-changing our impact.The effective altruist approach is highly analytical. Due to this, we often assume that our intended audience won't be impacted by marketing, however, everyone is (yes, even you are) impacted by aesthetic, by language choice, by an interface and experience quality. You may not think of your organisation as a 'brand' with a 'customer-base' trying to 'sell', but you are still beholden to all the same rules that consumer facing brands are as these are derived from human-nature and our shared (biassed) psychology.Below I will highlight some key research and how EA organisations can take advantage of the findings.5 key pieces of research on why marketing matters;"The Long and Short of It" by Les Binet and Peter Field:This influential research study analysed over 1,400 advertising campaigns and demonstrated the importance of long-term brand building and balancing it with short-term activation. The findings indicate that organisations that invest in long-term brand building experience significant increases in brand-related key performance indicators, market share growth, and profitability.How can EA organisations benefit from this:Firstly, it's important to note that most organisations don't have a marketing budget - so this is step one. What this should be will vary for each organisation, but I would expect a rough gauge of 10-30% of your annual budget to be on marketing.For companies that allocate 60% or more of their marketing budget to long-term brand building activities achieve a 140% increase in brand-related metrics. Additionally, organisations that strike the right balance between long-term and short-term campaigns experience 2.5x higher market share growth and 3x higher profitability growth compared to those with an unbalanced approach. For EA organisations, this could translate into higher donation volumes, fellowship applications or related organisational objectives. Even less directly public-facing organisations will rely on long-term brand building as without being known or being salient, the organisation objectives will suffer no matter how effective your core work is."How Brands Grow" by Byron Sharp and the Ehrenberg Bass Institute:Sharp's research challenges traditional marketing assumptions and provides evidence-based insights into brand growth and audience behaviour. The study emphasises the significance of reaching a broad audience and creating strong mental availability through consistent and distinctive branding. The research findings indicate that organisations focusing on broadening their audience base can achieve a 60% to 80% increase in brand penetration and a 25% to 50% increase in market share. Moreover, improving mental availability through consistent and distinctive branding contributes to a 10% to 25% increase in m...

]]>
James Odene [User-Friendly] https://forum.effectivealtruism.org/posts/L8BfajG59MCyCciwf/the-cost-of-not-taking-marketing-seriously Fri, 11 Aug 2023 04:08:49 +0000 EA - The cost of (not) taking marketing seriously by James Odene [User-Friendly] James Odene [User-Friendly] https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:08 no full 6828
M3zJkqpruMWc4nFqj EA - Predicting Virus Relative Abundance in Wastewater by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predicting Virus Relative Abundance in Wastewater, published by Jeff Kaufman on August 11, 2023 on The Effective Altruism Forum.At the Nucleic Acid Observatory (NAO) we're evaluating pathogen-agnostic surveillance. A key question is whether metagenomic sequencing of wastewater can be a cost-effective method to detect and mitigate future pandemics. In this report we investigate one piece of this question: at a given stage of a viral pandemic, what fraction of wastewater metagenomic sequencing reads would that virus represent?To make this concrete, we define RA(1%). If 1% of people are infected with some virus (prevalence) or have become infected with it during a given week (incidence), RA(1%) is the fraction of sequencing reads (relative abundance) generated by a given method that would match that virus. To estimate RA(1%) we collected public health data on sixteen human-infecting viruses, re-analyzed sequencing data from four municipal wastewater metagenomic studies, and linked them with a hierarchical Bayesian model.Three of the viruses were not present in the sequencing data, and we could only generate an upper bound on RA(1%). Four viruses had a handful of reads, for which we were able to generate rough estimates. For the remaining nine viruses we were able to narrow down RA(1%) for a specific virus-method combination to approximately an order of magnitude. We found RA(1%) for these nine viruses varied dramatically, over approximately six orders of magnitude. It also varied by study, with some viruses seeing an RA(1%) three orders of magnitude higher in one study than another.The NAO plans to use the estimates from this study as inputs into a modeling framework to assess the cost effectiveness of wastewater MGS detection under different pandemic scenarios, and we include an outline of such a framework with some rough estimates of the costs of different monitoring approaches.Read the full report: Predicting Virus Relative Abundance in Wastewater.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/M3zJkqpruMWc4nFqj/predicting-virus-relative-abundance-in-wastewater Fri, 11 Aug 2023 03:13:45 +0000 EA - Predicting Virus Relative Abundance in Wastewater by Jeff Kaufman Jeff Kaufman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:07 no full 6829
Ysq53coRwgSWHHz2x EA - Nuclear winter scepticism by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nuclear winter scepticism, published by Vasco Grilo on August 13, 2023 on The Effective Altruism Forum.This is a crosspost for Nuclear Winter by Bean from Naval Gazing, published on 24 April 2022. It argues Toon 2008 has overestimated the soot ejected into the stratosphere following a nuclear war by something like a factor of 191 (= 1.522(1 + 2)/2(2 + 3)/2(4 + 13)/2). I have not investigated the claims made in Bean's post, but that seems worthwhile. If its conclusions hold, the soot ejected into the stratosphere following the 4.4 k nuclear detonations analysed in Toon 2008 would be 0.942 Tg (= 180/191) instead of "180 Tg". From Fig. 3a of Toon 2014, the lower soot ejection would lead to a reduction in temperature of 0.2 ºC, and in precipitation of 0.6 %. These would have a negligible impact in terms of food security, and imply the deaths from the climatic effects being dwarfed by the "770 million [direct] casualties" mentioned in Toon 2008.For context, Luísa Rodriguez estimated "30 Tg" of soot would be ejected into the stratosphere in a nuclear war between the United States and Russia. Nevertheless, Luísa notes the following:As a final point, I'd like to emphasize that the nuclear winter is quite controversial (for example, see: Singer, 1985; Seitz, 2011; Robock, 2011; Coupe et al., 2019; Reisner et al., 2019; Pausata et al., 2016; Reisner et al., 2018; Also see the summary of the nuclear winter controversy in Wikipedia's article on nuclear winter). Critics argue that the parameters fed into the climate models (like, how much smoke would be generated by a given exchange) as well as the assumptions in the climate models themselves (for example, the way clouds would behave) are suspect, and may have been biased by the researchers' political motivations (for example, see: Singer, 1985; Seitz, 2011; Reisner et al., 2019; Pausata et al., 2016; Reisner et al., 2018). I take these criticisms very seriously - and believe we should probably be skeptical of this body of research as a result. For the purposes of this estimation, I assume that the nuclear winter research comes to the right conclusion.However, if we discounted the expected harm caused by US-Russia nuclear war for the fact that the nuclear winter hypothesis is somewhat suspect, the expected harm could shrink substantially.As Luísa, I have been assuming "the nuclear winter research comes to the right conclusion", but I suppose it is worth bringing more attention to potential concerns. I have also not flagged them in my posts, so I am crossposting Bean's analysis for some balance.Nuclear WinterWhen I took a broad overview of how destructive nuclear weapons are, one of the areas I looked at was nuclear winter, but I only dealt with it briefly. As such, it was something worth circling back to for a more in-depth look at the science involved.First, as my opponent here, I'm going to take What the science says: Could humans survive a nuclear war between NATO and Russia? from the prestigious-sounding "Alliance For Science", affiliated with Cornell University, and the papers it cites in hopes of being fair to the other side. Things don't start off well, as they claim that we're closer to nuclear war than any time since the Cuban Missile Crisis, which is clearly nonsense given Able Archer 83 among others. This is followed with the following gem: "Many scientists have investigated this question already. Their work is surprisingly little known, likely because in peacetime no one wants to think the unthinkable. But we are no longer in peacetime and the shadows of multiple mushroom clouds are looming once again over our planet." Clearly, I must have hallucinated the big PR push around nuclear winter back in the mid-80s. Well, I didn't because I wasn't born yet, but everyone else must have.Things don't get much better. ...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/Ysq53coRwgSWHHz2x/nuclear-winter-scepticism Sun, 13 Aug 2023 19:12:23 +0000 EA - Nuclear winter scepticism by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:28 no full 6837
uv88CMAmbRtzjcKb2 EA - Announcing Manifest 2023 (Sep 22-24 in Berkeley) by Manifest Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Manifest 2023 (Sep 22-24 in Berkeley), published by Manifest on August 14, 2023 on The Effective Altruism Forum.Forecasting Festival hosted by ManifoldTL;DR: Manifold is hosting a conference! 🥳 Chat with the Manifold team, special guests like Robin Hanson, Shayne Coplan, Patrick McKenzie, Dylan Matthews, Destiny, Aella, and more at our inaugural in-person gathering of the forecasting & prediction market community.More info & buy tickets: manifestconference.netWHEN: Sept 22-24WHERE: Berkeley, CAWHO: Everyone in the forecasting, EA, and LW communities. If you're reading this, you're invited!Join the Discord here :)Why should I come?Forecasting and prediction markets are effective ways of improving our judgement and decision-making - most people in the Effective Altruism and LessWrong communities will feel right at home at Manifest.Here are extra reasons to come:you think forecasting & prediction markets are impactful/fun/cool/rational/intriguing/etcyou want to vibe with other forecasting nerdsyou want to engage & network with the forecasting community (find jobs/recruit hires/see what's out there)you want to meet & chat with the Manifold team and our special guests (including Robin Hanson, Shayne Coplan, Patrick McKenzie, Dylan Matthews, Destiny, Aella, and more!)you want enjoy the gorgeous Rose Garden Inn.or if you like memes?A day at ManifestEverything's optional. There will always be a bunch of sessions running concurrently, but this is an example of what your day at Manifest might look like:10-11 - Opening session11-12 - Fireside chat: Robin Hanson12-1 - Lunch & mingling1-2 - Estimathon: fermi estimation with prizes and steep competition!2-3 - Speed friending: a few chats with other friendly, ambitious forecasting nerds :)3-4 - Break: relax, vibe, unwind, chill, destress, etc4-5 - Panel: Forecasting Founders (hear from Manifold, Metaculus, Polymarket, Kalshi, FRI, and more!)5-6 - Games & markets: chess, poker, and prediction markets!6-7 - Dinner & mingling7-8 - Workshop: How to Write Good Forecasting Questions8-12 - Murder She Bet: a murder mystery + a low-tech prediction market = fun!Who else is coming?The rest of the forecasting & prediction market community. Current speakers & special guests include:The whole Manifold team, including co-founders Austin Chen and Stephen & James GrugettRobin Hanson (economist & professor)Shayne Coplan (CEO of Polymarket)Patrick McKenzie (writer, Advisor at Stripe)Dylan Matthews (Head writer at Future Perfect, Vox)Destiny (streamer & political streamer)Aella (sex researcher & writer)Josh Rosenberg (CEO of the Forecasting Research Institute)Rational Animations (Rationality Youtuber)Byrne Hobart (Author of The Diff)Cate Hall (CEO of Alvea)Robert Miles (AI safety researcher & writer).and more to be announced!For the most up-to-date info, see manifestconference.net/speakers :)Where can I buy tickets?Buy tickets & check the most up-to-date info on the Manifest website! Tickets are free for students and employees of forecasting organizations.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Manifest https://forum.effectivealtruism.org/posts/uv88CMAmbRtzjcKb2/announcing-manifest-2023-sep-22-24-in-berkeley-3 Mon, 14 Aug 2023 21:36:38 +0000 EA - Announcing Manifest 2023 (Sep 22-24 in Berkeley) by Manifest Manifest https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:45 no full 6847
Tnw3juizvcS9E6HXf EA - (Linkpost) Alexander Berger is now the sole CEO of Open Philanthropy by Yadav Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: (Linkpost) Alexander Berger is now the sole CEO of Open Philanthropy, published by Yadav on August 14, 2023 on The Effective Altruism Forum.I haven't seen this anywhere on the Forum, and I've been curious to see what Holden and Open Phil do. Alexander Berger is now the sole CEO of Open Phil, while Holden is the 'Director of AI Strategy' and Emily Oehlsen is the Managing Director of Open Phil.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Yadav https://forum.effectivealtruism.org/posts/Tnw3juizvcS9E6HXf/linkpost-alexander-berger-is-now-the-sole-ceo-of-open Mon, 14 Aug 2023 17:27:51 +0000 EA - (Linkpost) Alexander Berger is now the sole CEO of Open Philanthropy by Yadav Yadav https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:37 no full 6843
iukeBPYNhKcddfFki EA - Price-, Taste-, and Convenience-Competitive Plant-Based Meat Would Not Currently Replace Meat by Jacob Peacock Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Price-, Taste-, and Convenience-Competitive Plant-Based Meat Would Not Currently Replace Meat, published by Jacob Peacock on August 15, 2023 on The Effective Altruism Forum.Also available on the Rethink Priorities website.Executive summaryPlant-based meats, like the Beyond Sausage or Impossible Burger, and cultivated meats have become a source of optimism for reducing animal-based meat usage.Public health, environmental, and animal welfare advocates aim to mitigate the myriad harms of meat usage.The price, taste, and convenience (PTC) hypothesis posits that if plant-based meat is competitive with animal-based meat on these three criteria, the large majority of current consumers would replace animal-based meat with plant-based meat.The PTC hypothesis rests on the premise that PTC primarily drive food choice.The PTC hypothesis and premise are both likely false.A majority of current consumers would continue eating primarily animal-based meat even if plant-based meats were PTC-competitive.PTC do not mainly determine food choices of current consumers; social and psychological factors also play important roles.Although not examined here, there may exist other viable approaches to drive the replacement of animal-based meats with plant-based meats.There is insufficient empirical evidence to more precisely estimate or optimize the current (or future) impacts of plant-based meat. To rectify this, consider funding:Research measuring the effects of plant-based meat sales on displacement of animal-based meat.Research comparing the effects of plant-based meats with other interventions to reduce animal-based meat usage.Informed (non-blinded) taste tests to benchmark current plant-based meats and enable measurements of taste improvement over time.IntroductionPlant-based meats, like the Beyond Sausage or Impossible Burger, and cultivated meats[1] have been identified as important means of reducing the public health, environmental, and animal welfare harms associated with animal-based meat production (Rubio et al., 2020). By providing competitive alternatives, these products might displace the consumption of animal-based meats. Since cultivated meats are not currently widely available on the public market, this paper will focus on plant-based meats, although many of the arguments might also apply to cultivated meats.Animal welfare, environmental, and public health advocates believe plant-based meats present a valuable opportunity to mitigate significant negative externalities of industrial animal agriculture, like animal suffering, greenhouse gas emissions, and antimicrobial resistance. For example, Animal Charity Evaluators lists "[cultivated] and plant-based food tech" as a priority cause area (Animal Charity Evaluators, 2022b), and a 2018 survey of 30 animal advocacy leaders and researchers ranked creating plant-based (and cultivated) meats third (after only research and corporate outreach) in their top priorities (Savoie, 2018). Non-profits working to research and support plant-based and cultivated meat production have received millions of dollars in funding (Animal Charity Evaluators, 2022a; New Harvest, 2021). Hu et al. (2019) describes plant-based meats as a potentially "vital" means to reduce the risks of diabetes, cardiovascular disease, and some cancers.Others have focused on reducing the climate impact of food production and "the need to de-risk global food systems" (Zane Swanson et al., 2023). The private and public sectors have taken note as well; in 2022, the "plant-based meat, seafood, eggs, and dairy companies" foods industry attracted at least $1.2 billion in private investment activity and at least $874 million in public funding (The Good Food Institute, 2022, pp. 55, 85-88).This enthusiasm has been propelled in some significant part by the informa...

]]>
Jacob_Peacock https://forum.effectivealtruism.org/posts/iukeBPYNhKcddfFki/price-taste-and-convenience-competitive-plant-based-meat Tue, 15 Aug 2023 19:25:02 +0000 EA - Price-, Taste-, and Convenience-Competitive Plant-Based Meat Would Not Currently Replace Meat by Jacob Peacock Jacob_Peacock https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:11:57 no full 6855
2sgSwwTe4mG4uP36e EA - Unveiling the Longtermism Framework in Islam: Urging Muslims to Embrace Future-Oriented Values through 'Islamic Longtermism' by Zayn A Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unveiling the Longtermism Framework in Islam: Urging Muslims to Embrace Future-Oriented Values through 'Islamic Longtermism', published by Zayn A on August 15, 2023 on The Effective Altruism Forum.I. IntroductionLongtermism is the notion that those who will live in the long-term future matter just as much, morally, as those beings that are alive today; that, although current conceptions of who society ought to care about appears restricted to living people, this understanding must change because there is, on society at large, a moral obligation to care, and to ensure that future people are able to live their lives satisfactorily.[1] MacAskill coined the term Longtermism in 2017, in a bid to help concretize the idea of caring about future people into one word that helped make it sound like a consequential movement.[2]This, however, is not to say that the ideas that longtermism espouses are particularly brand new. To the contrary, it has been admitted that longtermism builds on a rather long historical concern for future people.[3]A perception that may be gleaned is that longtermism is a 'Western' idea, so to speak. In this regard, one need not look further than the leading scholarship in this area,[4] where most of the literature comes from, who it talks about, and the solutions espoused[5] - all seem to originate from Western perspectives. Consequently, longtermism could perhaps mistakenly be seen as a completely novel concept emerging from the West, and its recent surge in popularity only serves as evidence to this notion.[6] .In this piece, I seek to illustrate how Islam, and the initial flagbearers of Islam,[7] have historically preached tenets of longtermism, despite the religion finding inception some several centuries ago.[8] In fact, I will show in this piece that, not only is there a moral obligation on societies to care about future people, but that Islam, the Prophet, the Caliphs and even its prominent leaders place this obligation as one that is of great importance. This will be done with the following end in mind: that to convince Muslims to think in longtermist ways, the best method will be to find evidence that the idea finds some backing within Islamic teaching.II. Justifying the Study and Laying GroundworkAt the very onset, one may question the very utility of this piece, by questioning why involving an Islamic perspective is at all necessary. In other words, what are the ramifications of an Islamic viewpoint of longtermism? In response to such argumentative resistance, one may merely resort to stating that this discussion is important for its own sake. That may be true. However, there are certainly a deeper reasons.Since its inception in 2017, when MacAskill offered the term 'longtermism', the main impetus pushing him to coin this word, essentially, was to make it appealing as a movement.[9] He opined that the previous renditions of phrases and terms that were used to appeal to longtermism, as we now know it, were 'a mishmash'. The entire point of the term and its meaning, was to make it 'attractive' as a movement for the wider public.[10] From the foregoing, it becomes clear that the conceptualizers of the term sought to find ways to make longtermism appeal to the public, presumably so that the movement gains support. In fact, it could be said that various undertakings have since occurred to do just that: increase the movements following. Whether using moral arguments, or other ideological arguments, much has been done to try and popularise the longtermist thought.Some discussion, on the Effective Altruism Forum, as well as other informal platforms, has taken place on how it may be possible to bring in the Christian contingent onboard to the idea of Effective Altruism and longtermism.[11] But, this avenue, using the Christian religion to ga...

]]>
Zayn A https://forum.effectivealtruism.org/posts/2sgSwwTe4mG4uP36e/unveiling-the-longtermism-framework-in-islam-urging-muslims Tue, 15 Aug 2023 15:29:58 +0000 EA - Unveiling the Longtermism Framework in Islam: Urging Muslims to Embrace Future-Oriented Values through 'Islamic Longtermism' by Zayn A Zayn A https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 35:38 no full 6853
sKxRKDsBjSNm8JTJD EA - Should I train to become a mediator for the EA ecosystem? by Severin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should I train to become a mediator for the EA ecosystem?, published by Severin on August 15, 2023 on The Effective Altruism Forum.Mediators help individuals and organizations resolve conflict. They serve as a buffer when strong emotions come up, help the conflict parties clarify their wants (which are often enough more compatible than it seems at first sight), and help make clear agreements.Over the years, I've heard of conflicts within a variety of EA organizations. Professional mediation might have helped resolve many of them earlier and more smoothly.I already bring a solid background in facilitation, counseling, and minimal training in mediation. In addition, my next career step is open.Accordingly, I'm currently thinking of training up to become a mediator for the EA/rationalist ecosystem. Services I'd include in that role would be:Offering conflict mediation for individuals and organizations.Giving workshops for EAs on how to disagree better.Offering communication trainings for organizations to a) build healthy team cultures that prevent conflict in the first place, and b) transform friction into clarity and cooperation. (I'm already working on formats for this with the team behind AuthRev and the Relating Languages.)Do you think this is a good idea? If yes, my next step would be to apply for 6-12 months of transition funding in order to do specialized skill-building and networking.Here are the reasons for and against this that I've come up with so far:Reasons forEspecially after last year, there are a some boiling conflict lines within EA. And, the agile startup environments of EA orgs offer plenty potential for friction.It may be valuable to have an "in-house" mediater who has an in-depth understanding of EA culture, the local conflict lines, etc.As far as I know, no one else currently specializes in this.While the average EA likes to have controversial intellectual debates, I perceive the community as relatively conflict-averse when things get emotional. I tend to enjoy conflict and have an easy time trusting the process. I think that's useful for filling this role.Trust in EA leadership seems to be at an all-time low. While I've heard that CEA's community health team is remarkably good at not being partisan, some people might be more comfortable with having an EA mediator who is not directly involved with CEA.Reasons againstIt may be hard to convince those who'd profit from mediation to actually make use of it. (Just as with therapy or coaching.) I.e., there might not actually be a market for this.Subcultural knowledge may be less important than I think. External mediators may be able to fulfill this role just fine.The community health team, as well as the current coaches and therapists in EA, might already be sufficiently skilled and a sufficiently obvious address in case of conflict.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Severin https://forum.effectivealtruism.org/posts/sKxRKDsBjSNm8JTJD/should-i-train-to-become-a-mediator-for-the-ea-ecosystem Tue, 15 Aug 2023 15:25:58 +0000 EA - Should I train to become a mediator for the EA ecosystem? by Severin Severin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:50 no full 6854
RYNtykh5xM467zRNj EA - Why some people disagree with the CAIS statement on AI by David Moss Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why some people disagree with the CAIS statement on AI, published by David Moss on August 15, 2023 on The Effective Altruism Forum.SummaryPrevious research from Rethink Priorities found that a majority of the population (59%) agreed with a statement from the Center for AI Safety (CAIS) that stated "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." 26% of the population disagreed with this statement. This research piece does further qualitative research to analyze this opposition in more depth.The most commonly mentioned theme among those who disagreed with the CAIS statement was that other priorities were more important (mentioned by 36% of disagreeing respondents), with climate change particularly commonly mentioned.This theme was particularly strongly occurring among younger disagreeing respondents (43.3%) relative to older disagreeing respondents (27.8%).The next most commonly mentioned theme was rejection of the idea that AI would cause extinction (23.4%), though some of these respondents agreed AI may pose other risks.Another commonly mentioned theme was the idea that AI was not yet a threat, though it might be in the future.This was commonly co-occurring with the 'Other priorities' theme, with many arguing that other threats were more imminent.Other less commonly mentioned themes included that AI would be under our control (8.8%) and so would not pose a threat, while another was that AI was not capable of causing harm, because it was not sentient or sophisticated or autonomous (5%).IntroductionOur previous survey on attitudes on US public perception of the CAIS statement on AI risk found that a majority of Americans agree with the statement (59%), while a minority (26%) disagreed. To gain a better understanding of why individuals might disagree with the statement, we ran an additional survey, where we asked a new sample of respondents whether they agreed or disagreed with the statement, and then asked them to explain why they agreed or disagreed. We then coded the responses of those who disagreed with the statement to identify major recurring themes in people's comments. We did not formally analyze comments from those who did not disagree with the statement, though may do so in a future report.Since responses to this question might reflect responses to the specifics of the statement, rather than more general reactions to the idea of AI risk, it may be useful to review the statement before reading about the results."Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."Common themesThis section outlines the most commonly recurring themes. In a later section of this report we'll discuss each theme in more detail and provide examples from each. It is important, when interpreting these percentages to remember that they are percentages of those 31.2% respondents who disagreed with the statement, not of all respondents.The dominant theme, by quite a wide margin, was the claim that 'Other priorities' were more important, which was mentioned by 36% of disagreeing respondents. The next most common theme was 'Not extinction', mentioned in 23.4% of responses, which simply involved respondents asserting that they did not believe that AI would cause extinction. The third most commonly mentioned theme was 'Not yet', which involved respondents claiming that AI was not yet a threat or something to worry about. The 'Other priorities' and 'Not yet' themes were commonly co-occurring, mentioned together by 7.9% of respondents, more than any other combination.Some less commonly mentioned themes were 'Control', the idea that AI could not be a threat because it would inevitably be under our...

]]>
David_Moss https://forum.effectivealtruism.org/posts/RYNtykh5xM467zRNj/why-some-people-disagree-with-the-cais-statement-on-ai Tue, 15 Aug 2023 15:21:18 +0000 EA - Why some people disagree with the CAIS statement on AI by David Moss David_Moss https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 27:19 no full 6852
jS6HPv4Qx9gWC98f5 EA - EA Organization Updates: August 2023 by JP Addison Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: August 2023, published by JP Addison on August 15, 2023 on The Effective Altruism Forum.We're featuring some opportunities and job listings at the top of this post. Some have (very) pressing deadlines.You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there's also an "org update" tag, where you can find more news and updates that are not part of this consolidated series.These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.(If you think your organization should be getting emails about adding their updates to this series, please apply here.)Opportunities and jobsOpportunitiesConsider also checking opportunities listed on the EA Opportunities Board.Applications are open for a number of conferencesEA Global: Boston (27-29 October) is for people who have a solid understanding of effective altruism, and who are taking significant actions on the basis of key ideas in the movement. Apply by 13 October.EAGxBerlin, (8-10 September) is aimed at people in Western Europe. Tickets cost €0-80. Apply by 18 August.EAGxAustralia (22 - 24 September) is for people in Australia and New Zealand. Tickets are $75-150 (AUD). Apply by 8 September.EAGxPhilippines (20 - 22 October) is for people in South East and East Asia. Tickets are $0-100. Apply by 30 September.The Good Food Conference 2023 will be back in-person this year at the historic Fort Mason Center in San Francisco on Sept. 18-20.This year's program is bursting with insights and inspiration from big-picture plenaries and fascinating flash talks to in-depth technical sessions and solutions-focused workshops led by a stellar lineup of scientists, policymakers, private sector leaders, and other brilliant humans.Check out the detailed program to learn more, and register here to help build a future where alternative proteins are no longer alternative.Opportunities to take actionSoGive is conducting a survey of people who (have the capacity to) give £10k+. They are trying to help the community fill the gaps in support so more major donors can give more, and give more effectively. Read more here.Job listingsConsider also exploring jobs listed on "Job listing (open)."Centre for the Governance of AIHead of Operations (Mostly Oxford, £60K - £80K +benefits, apply by 21 August)EpochML Hardware Researcher (Remote, $70K - $120K apply by 20 August)Family Empowerment MediaProject Director (Remote with international travel, apply by 21 August)Fish Welfare InitiativeOperations Lead/Associate (Remote in a time-zone compatible with IST, apply by 3 September)GiveWellSenior Researchers (Remote or Oakland, California; $193.1K - $209K)Content Editors (Remote or Oakland, California; $90.6K - $98K)IDinsightAssociate and Senior Associate (Philippines)Technical Delivery Manager/Director (India or Kenya)Social Scientist/Economist - (Morocco or Senegal)Open PhilanthropyOperations Associate - Biosecurity & Pandemic Preparedness (Washington, DC, $114.2k)Recruiter (Remote, San Francisco/Washington, DC, $108.3k)Organization Updates80,000 HoursThis month on The 80,000 Hours Podcast, Rob interviewed:Ezra Klein on existential risk from AI and what DC could do about itHolden Karnofsky's four-part playbook for dealing with AIAnd Luisa interviewed:Hannah Boettcher on the mental health challenges that come with trying to have a big impactMarkus Anderljung on how to regulate cutting-edge AI models80,000 Hours also re-released an updated version of a classic three-part article series written by Gregory Lewis, exploring how many lives a doctor saves. They also updated their article on working in US AI policy.Anima Internati...

]]>
JP Addison https://forum.effectivealtruism.org/posts/jS6HPv4Qx9gWC98f5/ea-organization-updates-august-2023 Tue, 15 Aug 2023 13:16:07 +0000 EA - EA Organization Updates: August 2023 by JP Addison JP Addison https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:06 no full 6851
vd325jctadGoL2F8s EA - EA Poland 2023 - the first update by EA Poland Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Poland 2023 - the first update, published by EA Poland on August 15, 2023 on The Effective Altruism Forum.TL;DR: EA Poland is active and growing. Our aim is to build and maintain a supportive and empowering community that enables individuals to plan their next steps to make a positive impact. We want to achieve this by focusing on three areas: Community Building, Career Planning, and Project Incubation.IntroductionAt the beginning of June we had our first Polish EAGx, in Warsaw. We think this is a good opportunity to introduce EA Poland. Before we start, we would like to mention that this post's structure was inspired by EA Estonia's post and was prepared with the support of ChatGPT and DeepL/Writer.Quick Facts About PolandLocated in Central Europe, Poland has a population of ~38 million and has been classified by the World Bank as a high-income country since 2009. Additionally, there are up to 21 million Polish people living across the world. We love potatoes, onions, and dumplings with onions and potatoes.Quick Facts About EA PolandThe first EA meetup we know of took place in 2014, and the first group was established in Cracow, in 2015. In 2018, the Polish Foundation for Effective Altruism (FEA) was registered. We have members from all over Poland: Cracow, Warsaw, Wroclaw, Poznan, Gdansk, Katowice, Lodz...Current statusEA Poland in numbers:3 employees, with a total of 1.5 FTE (full-time equivalents; funded by the EA Infrastructure Fund)~30 volunteers (incl. group organisers)~60 highly-engaged EAs - HEAs (incl. most of the volunteers)178 members registered on our community SlackOf these, ~70 users are active on a weekly basis (figure 2)336 newsletter subscribersSocial media:2,700 Facebook fanpage followers1,100 Facebook group members515 Instagram followers893 Linkedin followers211 YouTube subscribers3 local groups (Warsaw, Cracow, Wroclaw)1 active university group (University of Warsaw) and 1 university group expected to start in 2023/2024 (AGH in Cracow)An online book club (we plan to publish a separate forum post about it)2 AI Safety Groups (on-line): technical and governance2 EA-aligned organisations (Otwarte Klatki [Anima International Poland] and Optimum Pareto Foundation)What we did last year (June 2022 - June 2023)Organisational Structure and OperationsOver the course of the past year, we channelled a lot of effort into strengthening our organisational structure and operational capacity. This includes establishing basic processes and procedures in areas such as IT, finance, and legal; reintroducing Asana for task management; and revitalising our social media and newsletter. We collectively agreed to form an executive board where two Co-Directors represent FEA for a period of two years. After this period, re-election of the Co-Directors requires approval, granted by the Active Members via a vote. To evaluate and support the work of the Co-Directors, we have set up a Supervisory Board consisting of five HEAs, elected by the Active Members (employees, volunteers, interns, groups organisers, projects coordinators).Mission, Vision, and StrategyIn 2022, we defined our strategy through a workshop and multiple group discussions. In November 2023, we will revise it and determine an action plan for the next two years. We intend to write a separate post on this topic.Organisational CultureCreating a safe and inclusive environment is a priority for EA Poland. We developed a Code of Conduct through workshops, and introduced the Safety Person role to ensure the well-being and comfort of all community members. We also conducted a comprehensive survey to examine our organisational culture, with the aim of continuous improvement and fostering a supportive community. The survey, consisting of 72 questions, had an average score of 3.55 on a s...

]]>
EA Poland https://forum.effectivealtruism.org/posts/vd325jctadGoL2F8s/ea-poland-2023-the-first-update Tue, 15 Aug 2023 01:00:12 +0000 EA - EA Poland 2023 - the first update by EA Poland EA Poland https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:19 no full 6850
wxWrHf387RybbztYT EA - Half baked idea: A "pivot pledge" by Ardenlk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Half baked idea: A "pivot pledge", published by Ardenlk on August 16, 2023 on The Effective Altruism Forum.I work at 80,000 Hours but this is not an 80,000 Hours thing - this is just an idea I wrote up because I had it and thought it was kind of cool.I probably won't be executing on it myself, but I thought I'd throw it out there into the world in case someone is interested in picking it up and running with it, do-ocracy style!Epistemic status: half baked.The problem:Being and staying prepared to switch jobs/careers if an excellent impact opportunity arises is very hard, but high expected value for many people.The proposal:Someone set up a "pivot pledge" to provide accountability, support, and encouragement to people who want to be prepared to switch careers in this way.More on the problem:In "Jobs to help with the most important century", Holden Karnofsky wrote that if you aren't up for making a career change right now, one thing you can do is to keep your options open and be ready to jump at the right opportunity. He says:> It's hard to predict what skills will be useful as AI advances further and new issues come up.> Being ready to switch careers when a big opportunity comes up could be hugely valuable - and hard. (Most people would have a lot of trouble doing this late in their career, no matter how important!)He reiterated this idea recently on the 80,000 Hours Podcast:> It might be worth emphasising that the ability to switch careers is going to get harder and harder as you get further and further into your career. So in some ways, if you're a person who's being successful, but is also making sure that you've got the financial resources, the social resources, the psychological resources, so that you really feel confident that as soon as a good opportunity comes up to do a lot of good, you're going to actually switch jobs, or have a lot of time to serve on a board or whatever - it just seems incredibly valuable.> I think it's weird because this is not a measurable thing, and it's not a thing you can, like, brag about when you go to an effective altruism meetup. And I just wish there was a way to kind of recognise that the person who is successfully able to walk away, when they need to, from a successful career has, in my mind, more expected impact than the person who's in the high-impact career right now, but is not killing it.I'd add a few things here:1. It doesn't seem like you need to prioritize AI to think that this would be good for many people to do. Though this does seem especially important if you have a view of the world in which "things are going to go crazy at some point", because that makes longer-term high impact career planning harder, and you are more likely to think that l if you think AI risk is high, longer-term career planning is always hard and even if you think other problems are much more pressing you could still think that some opportunities will be much higher impact than others and will be hard to predict.2. Many people have pointed out that we could use more experienced hands on many top problem areas. This is one way to help make that happen.3. I think going into some mechanisms that account for why it's hard to switch careers later in your career could be useful:I think it's hard for more senior people to take an actual or perceived step down in level of responsibility, prestige, or compensation, because it feels like 'going backward.' But when you switch your career, you often need to take a step 'down' on some hierarchy and build back up.Relatedly. people really don't want to have 'wasted time' so they are always very keen to be applying previous experience. Switching careers usually involves letting some of your previous experience 'go to waste'. We see this a lot at 80,000 Hours even in people in their 20s!Sta...

]]>
Ardenlk https://forum.effectivealtruism.org/posts/wxWrHf387RybbztYT/half-baked-idea-a-pivot-pledge It's hard to predict what skills will be useful as AI advances further and new issues come up.> Being ready to switch careers when a big opportunity comes up could be hugely valuable - and hard. (Most people would have a lot of trouble doing this late in their career, no matter how important!)He reiterated this idea recently on the 80,000 Hours Podcast:> It might be worth emphasising that the ability to switch careers is going to get harder and harder as you get further and further into your career. So in some ways, if you're a person who's being successful, but is also making sure that you've got the financial resources, the social resources, the psychological resources, so that you really feel confident that as soon as a good opportunity comes up to do a lot of good, you're going to actually switch jobs, or have a lot of time to serve on a board or whatever - it just seems incredibly valuable.> I think it's weird because this is not a measurable thing, and it's not a thing you can, like, brag about when you go to an effective altruism meetup. And I just wish there was a way to kind of recognise that the person who is successfully able to walk away, when they need to, from a successful career has, in my mind, more expected impact than the person who's in the high-impact career right now, but is not killing it.I'd add a few things here:1. It doesn't seem like you need to prioritize AI to think that this would be good for many people to do. Though this does seem especially important if you have a view of the world in which "things are going to go crazy at some point", because that makes longer-term high impact career planning harder, and you are more likely to think that l if you think AI risk is high, longer-term career planning is always hard and even if you think other problems are much more pressing you could still think that some opportunities will be much higher impact than others and will be hard to predict.2. Many people have pointed out that we could use more experienced hands on many top problem areas. This is one way to help make that happen.3. I think going into some mechanisms that account for why it's hard to switch careers later in your career could be useful:I think it's hard for more senior people to take an actual or perceived step down in level of responsibility, prestige, or compensation, because it feels like 'going backward.' But when you switch your career, you often need to take a step 'down' on some hierarchy and build back up.Relatedly. people really don't want to have 'wasted time' so they are always very keen to be applying previous experience. Switching careers usually involves letting some of your previous experience 'go to waste'. We see this a lot at 80,000 Hours even in people in their 20s!Sta...]]> Wed, 16 Aug 2023 20:57:19 +0000 EA - Half baked idea: A "pivot pledge" by Ardenlk It's hard to predict what skills will be useful as AI advances further and new issues come up.> Being ready to switch careers when a big opportunity comes up could be hugely valuable - and hard. (Most people would have a lot of trouble doing this late in their career, no matter how important!)He reiterated this idea recently on the 80,000 Hours Podcast:> It might be worth emphasising that the ability to switch careers is going to get harder and harder as you get further and further into your career. So in some ways, if you're a person who's being successful, but is also making sure that you've got the financial resources, the social resources, the psychological resources, so that you really feel confident that as soon as a good opportunity comes up to do a lot of good, you're going to actually switch jobs, or have a lot of time to serve on a board or whatever - it just seems incredibly valuable.> I think it's weird because this is not a measurable thing, and it's not a thing you can, like, brag about when you go to an effective altruism meetup. And I just wish there was a way to kind of recognise that the person who is successfully able to walk away, when they need to, from a successful career has, in my mind, more expected impact than the person who's in the high-impact career right now, but is not killing it.I'd add a few things here:1. It doesn't seem like you need to prioritize AI to think that this would be good for many people to do. Though this does seem especially important if you have a view of the world in which "things are going to go crazy at some point", because that makes longer-term high impact career planning harder, and you are more likely to think that l if you think AI risk is high, longer-term career planning is always hard and even if you think other problems are much more pressing you could still think that some opportunities will be much higher impact than others and will be hard to predict.2. Many people have pointed out that we could use more experienced hands on many top problem areas. This is one way to help make that happen.3. I think going into some mechanisms that account for why it's hard to switch careers later in your career could be useful:I think it's hard for more senior people to take an actual or perceived step down in level of responsibility, prestige, or compensation, because it feels like 'going backward.' But when you switch your career, you often need to take a step 'down' on some hierarchy and build back up.Relatedly. people really don't want to have 'wasted time' so they are always very keen to be applying previous experience. Switching careers usually involves letting some of your previous experience 'go to waste'. We see this a lot at 80,000 Hours even in people in their 20s!Sta...]]> It's hard to predict what skills will be useful as AI advances further and new issues come up.> Being ready to switch careers when a big opportunity comes up could be hugely valuable - and hard. (Most people would have a lot of trouble doing this late in their career, no matter how important!)He reiterated this idea recently on the 80,000 Hours Podcast:> It might be worth emphasising that the ability to switch careers is going to get harder and harder as you get further and further into your career. So in some ways, if you're a person who's being successful, but is also making sure that you've got the financial resources, the social resources, the psychological resources, so that you really feel confident that as soon as a good opportunity comes up to do a lot of good, you're going to actually switch jobs, or have a lot of time to serve on a board or whatever - it just seems incredibly valuable.> I think it's weird because this is not a measurable thing, and it's not a thing you can, like, brag about when you go to an effective altruism meetup. And I just wish there was a way to kind of recognise that the person who is successfully able to walk away, when they need to, from a successful career has, in my mind, more expected impact than the person who's in the high-impact career right now, but is not killing it.I'd add a few things here:1. It doesn't seem like you need to prioritize AI to think that this would be good for many people to do. Though this does seem especially important if you have a view of the world in which "things are going to go crazy at some point", because that makes longer-term high impact career planning harder, and you are more likely to think that l if you think AI risk is high, longer-term career planning is always hard and even if you think other problems are much more pressing you could still think that some opportunities will be much higher impact than others and will be hard to predict.2. Many people have pointed out that we could use more experienced hands on many top problem areas. This is one way to help make that happen.3. I think going into some mechanisms that account for why it's hard to switch careers later in your career could be useful:I think it's hard for more senior people to take an actual or perceived step down in level of responsibility, prestige, or compensation, because it feels like 'going backward.' But when you switch your career, you often need to take a step 'down' on some hierarchy and build back up.Relatedly. people really don't want to have 'wasted time' so they are always very keen to be applying previous experience. Switching careers usually involves letting some of your previous experience 'go to waste'. We see this a lot at 80,000 Hours even in people in their 20s!Sta...]]> Ardenlk https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:53 no full 6865
EYKZvKAHrJwakNSPo EA - Discounts in cost-effectiveness analyses [Founders Pledge] by Rosie Bettle Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Discounts in cost-effectiveness analyses [Founders Pledge], published by Rosie Bettle on August 16, 2023 on The Effective Altruism Forum.Replicability and GeneralisabilityThis report aims to provide guidelines for producing discounts within cost-effectiveness analyses; how to take an effect size from an RCT and apply discounts to best predict the real-world effect of an intervention. The goal is to have guidelines that produce accurate estimates, and are practical for researchers to use. These guidelines are likely to be updated further, and I especially invite suggestions and criticism for the purpose of further improvement. A google docs version of this report is available here.Acknowledgements: I would like to thank Matt Lerner, Filip Murár, David Reinstein and James Snowden for helpful comments on this report. I would also like to thank the rest of the FP research team for helpful comments during a presentation on this report.SummaryThis document provides guidelines for estimating the discounts that we (Founders Pledge) apply to RCTs in our cost-effectiveness analyses for global health and development charities. To skip directly to these guidelines, go to the 'Guidance for researchers' sections (here, here and here; separated by each type of discount).I think that we should separate out discounts into internal reliability and external validity adjustments, because these components have different causes (see Fig 1.)For internal reliability (degree to which the study accurately assesses the intervention in the specific context of the study- aka if an exact replication of the study was carried out, would we see the same effect?);All RCTs will need a Type M adjustment; an adjustment that corrects for potential inflation of effect size (Type M error). The RCTs that are likely to have the most inflated effect sizes are those that are low powered (where the statistical test used has only a small chance of successfully detecting an effect, see more info here), especially if they are providing some of the first evidence for the effect. Factors to account for include publication bias, researcher bias (e.g. motivated reasoning to find an exciting result; running a variety of statistical tests and only reporting the ones that reach statistical significance would be an example of this), and methodological errors (e.g. inadequate randomisation of test trial subjects). See here for guidelines, and here to assess power.Many RCTs are likely to need a 50-60% Type M discount, but there is a lot of variation here; table 1 can help to sense-check Type M adjustments.A small number (<~15%) of RCTs will need a Type S adjustment, to account for the possibility that the sign of the effect is in the wrong direction. This is for RCTs that are producing some of the first evidence for an effect, are underpowered, and where it is mechanistically plausible that the effect could go in the other direction. See here for guidelines.The likelihood of Type S error can be estimated mathematically (e.g. via the retrodesign R package).For external validity (how that result generalises to a different context, e.g. when using an RCT effect size to estimate how well an intervention will work in a different area), we should expect that the absolute median effect size will vary between different contexts by around 99% (so a 1X effect size could be as little as ~0X or ~2X in a different context; see Vivalt 2020). Note that the effect in the new context could be larger than our estimate of the true effect, but I expect that in most cases it will be smaller. Following Duflo & Banerjee (2017), we should attend to;Specific sample differences (do the conditions necessary for the intervention to work hold in the new context?)Equilibrium effects (will there be emergent effects of the intervention, when...

]]>
Rosie_Bettle https://forum.effectivealtruism.org/posts/EYKZvKAHrJwakNSPo/discounts-in-cost-effectiveness-analyses-founders-pledge Wed, 16 Aug 2023 19:36:03 +0000 EA - Discounts in cost-effectiveness analyses [Founders Pledge] by Rosie Bettle Rosie_Bettle https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:09:30 no full 6863
6WvnfKvF2i6mqp3za EA - An Overview of Catastrophic AI Risks by Center for AI Safety Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Overview of Catastrophic AI Risks, published by Center for AI Safety on August 16, 2023 on The Effective Altruism Forum.We've recently published on our website a summary of our research paper on catastrophic risks from AI, which we are cross-posting here. We hope that this summary helps to make our research more accessible and to share our policy recommendations in a more convenient format.Executive summaryCatastrophic AI risks can be grouped under four key categories which we explore below, and in greater depth in CAIS' linked paper:Malicious use: People could intentionally harness powerful AIs to cause widespread harm. AI could be used to engineer new pandemics or for propaganda, censorship, and surveillance, or released to autonomously pursue harmful goals. To reduce these risks, we suggest improving biosecurity, restricting access to dangerous AI models, and holding AI developers liable for harms.AI race: Competition could push nations and corporations to rush AI development, relinquishing control to these systems. Conflicts could spiral out of control with autonomous weapons and AI-enabled cyberwarfare. Corporations will face incentives to automate human labor, potentially leading to mass unemployment and dependence on AI systems. As AI systems proliferate, evolutionary dynamics suggest they will become harder to control. We recommend safety regulations, international coordination, and public control of general-purpose AIs.Organizational risks: There are risks that organizations developing advanced AI cause catastrophic accidents, particularly if they prioritize profits over safety. AIs could be accidentally leaked to the public or stolen by malicious actors, and organizations could fail to properly invest in safety research. We suggest fostering a safety-oriented organizational culture and implementing rigorous audits, multi-layered risk defenses, and state-of-the-art information security.Rogue AIs: We risk losing control over AIs as they become more capable. AIs could optimize flawed objectives, drift from their original goals, become power-seeking, resist shutdown, and engage in deception. We suggest that AIs should not be deployed in high-risk settings, such as by autonomously pursuing open-ended goals or overseeing critical infrastructure, unless proven safe. We also recommend advancing AI safety research in areas such as adversarial robustness, model honesty, transparency, and removing undesired capabilities.1. IntroductionToday's technological era would astonish past generations. Human history shows a pattern of accelerating development: it took hundreds of thousands of years from the advent of Homo sapiens to the agricultural revolution, then millennia to the industrial revolution. Now, just centuries later, we're in the dawn of the AI revolution. The march of history is not constant - it is rapidly accelerating. World production has grown rapidly over the course of human history. AI could further this trend, catapulting humanity into a new period of unprecedented change.The double-edged sword of technological advancement is illustrated by the advent of nuclear weapons. We narrowly avoided nuclear war more than a dozen times, and on several occasions, it was one individual's intervention that prevented war. In 1962, a Soviet submarine near Cuba was attacked by US depth charges. The captain, believing war had broken out, wanted to respond with a nuclear torpedo - but commander Vasily Arkhipov vetoed the decision, saving the world from disaster. The rapid and unpredictable progression of AI capabilities suggests that they may soon rival the immense power of nuclear weapons. With the clock ticking, immediate, proactive measures are needed to mitigate these looming risks.2. Malicious UseThe first of our concerns is the malicious use of AI. Whe...

]]>
Center for AI Safety https://forum.effectivealtruism.org/posts/6WvnfKvF2i6mqp3za/an-overview-of-catastrophic-ai-risks Wed, 16 Aug 2023 18:42:03 +0000 EA - An Overview of Catastrophic AI Risks by Center for AI Safety Center for AI Safety https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 25:14 no full 6862
SCYrASfriLCoFaqCZ EA - Henry (John Green video) by Stephen Clare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Henry (John Green video), published by Stephen Clare on August 16, 2023 on The Effective Altruism Forum.Author John Green talks about a 16-year-old tuberculosis patient in Sierra Leone.For me this was a good reminder of how unfair it is that where you're born has such a huge effect on how likely you are to suffer from a disease like TB. It's also a vivid account of what it really means to save a life.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Stephen Clare https://forum.effectivealtruism.org/posts/SCYrASfriLCoFaqCZ/henry-john-green-video Wed, 16 Aug 2023 14:51:53 +0000 EA - Henry (John Green video) by Stephen Clare Stephen Clare https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:37 no full 6860
KT2Cc3fY3WfhLFRj4 EA - Why it makes sense to be optimistic about the environment (Hannah Ritchie on the 80,000 Hours Podcast) by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why it makes sense to be optimistic about the environment (Hannah Ritchie on the 80,000 Hours Podcast), published by 80000 Hours on August 16, 2023 on The Effective Altruism Forum.We just published an interview: Hannah Ritchie on why it makes sense to be optimistic about the environment. You can click through for the audio, a full transcript, and related links. Below are the episode summary and some key excerpts.Episode summaryThere's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay afloat. Basically, you get locked in. There's almost no opportunities externally to go elsewhere.So one of my core arguments is that if you're going to address global poverty, you have to increase agricultural productivity in sub-Saharan Africa. There's almost no way of avoiding that.Hannah RitchieIn today's episode, host Luisa Rodriguez interviews the head of research at Our World in Data - Hannah Ritchie - on the case for environmental optimism.They cover:Why agricultural productivity in sub-Saharan Africa could be so important, and how much better things could getHer new book about how we could be the first generation to build a sustainable planetWhether climate change is the most worrying environmental issueHow we reduced outdoor air pollutionWhy Hannah is worried about the state of biodiversitySolutions that address multiple environmental issues at onceHow the world coordinated to address the hole in the ozone layerSurprises from Our World in Data's researchPsychological challenges that come up in Hannah's workAnd plenty moreGet this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Milo McGuire and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy MooreHighlightsWhy agricultural productivity is so low in sub-Saharan AfricaLuisa Rodriguez: How does labour productivity in sub-Saharan Africa compare to other regions?Hannah Ritchie: So if we use a metric for it as the amount of value you'd get per worker - so the economic value per person working on the farm - the average for sub-Saharan Africa is half of the global average. [and] it's 50 times lower than you'd get in the UK or the US. If you look at some countries within sub-Saharan Africa, they're like half of the sub-Saharan Africa average. So there you're talking about 100 times less than you'd get in the UK or the US.Hannah Ritchie: So you put that in context: the value that an average farmer in the US might create in three to four days is the same as a Tanzanian farmer for the entire year.Luisa Rodriguez: Why is it so low in sub-Saharan Africa?Hannah Ritchie: I think there's a couple of reasons. One is that the farms are really small, so often the amount of crop or value you get out is quite low. And maybe we'll come onto crop yields. So with low crop yields, you get not that much out, but also, as you said, you can't afford machinery, or you can't afford fertilisers or pesticides, or things that would basically substitute for human power inputs. So it just means you need lots of hands on deck to keep the farm going and keep it at that baseline level of productivity. So you don't get much out, and you just need lots of people working on the farm.Luisa Rodriguez: OK, and then the other thing that seems to be really low here is land productivity. Which feels a bit more intuitive to me, is that basically how much crop yield you'd get from, for example, an acre of land?Hannah Ritchie: Yeah, exactly. So it's like what most...

]]>
80000_Hours https://forum.effectivealtruism.org/posts/KT2Cc3fY3WfhLFRj4/why-it-makes-sense-to-be-optimistic-about-the-environment Wed, 16 Aug 2023 07:38:51 +0000 EA - Why it makes sense to be optimistic about the environment (Hannah Ritchie on the 80,000 Hours Podcast) by 80000 Hours 80000_Hours https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:55 no full 6859
hpbaNabCDCfnhzc2P EA - CE alert: 2 new interventions for February-March 2024 Incubation Program by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE alert: 2 new interventions for February-March 2024 Incubation Program, published by CE on August 17, 2023 on The Effective Altruism Forum.TLDR: In this post, we announce additional top charity interventions we want to start through our upcoming February-March 2024 Incubation Program. Applications are now open!We've added two new exciting ideas to the pool of interventions we want to see started through our February 5 - March 31, 2024 Incubation Program. This increases the number of interventions to 6, and we're really hoping you help us get them founded (by applying or sharing the information with friends and colleagues who they might be a good fit for). We're now seeking founders for:1) An organization focused on bringing new, counterfactual funding into the animal advocacy movement.2) An organization providing teacher guides and other aspects of structured pedagogy to improve education outcomes in low-income countries.Before we go into the detailed descriptions, here is a reminder of the previously announced top ideas for this program: 3) Childhood vaccination reminders4) Mass media to prevent violence against women5) Influencing EU fish welfare policy through strategic work in Greece6) Influencing key stakeholders of the emerging insect industryYou can read more about them here.If you could plausibly see yourself excited to launch one of our top interventions, we encourage you to apply. For most ideas, no particular previous experience is necessary and participants report getting more excited about the ideas while working on them throughout the Incubation Program. In the program, we provide two months of cost-covered training to help you get all the knowledge and skills you need to launch an impact-focus organization. As usual, we provide stipends, funding up to $200,000, operational support in your first months, a co-working space at our CE office in London, ongoing mentorship, and access to a community of alumni, funders, and experts. Learn more on the CE Incubation Program page.The deadline for applications is September 30, 2023.To be brief, we have sacrificed nuance, the details of our considerable uncertainties, and the potential risks discussed in the extended reports. Full reports will be published on our website in the upcoming weeks.One-Sentence SummariesFundraising for animal advocacyAn organization focused on bringing new, counterfactual funding into the animal advocacy movement.Structured PedagogyAn organization providing teacher guides and other aspects of structured pedagogy to improve education outcomes in low-income countries.One-Paragraph SummariesFundraising for animal issuesFunding is expected to be a key bottleneck in the effective animal advocacy movement going forward. We predict that the pool of available money for animal advocacy is unlikely to grow in the next few years and could even shrink. We're concerned that funders may neglect more exploratory work and certain regions (e.g., Africa) due to limited resources. A new organization focused on fundraising could work to close this funding gap. There are multiple promising approaches in the space, including a giving pledge (like Giving What We Can) targeted at vegans or influencing high-net-worth individuals (HNWI) in promising geographies like India or the US.Structured pedagogyAcross many low-income countries, the quality of education is poor. Many children leave school without basic reading, writing, and numeracy skills. A new charity can improve the quality of teaching by providing structured teacher guides alongside training and coaching on their usage. Structured pedagogy is an evidence-based, cost-effective intervention deemed a "great buy" in global development. We estimate this intervention to have a benefit-cost ratio of 30:1.More Detailed SummariesFundraisin...

]]>
CE https://forum.effectivealtruism.org/posts/hpbaNabCDCfnhzc2P/ce-alert-2-new-interventions-for-february-march-2024 Thu, 17 Aug 2023 16:45:28 +0000 EA - CE alert: 2 new interventions for February-March 2024 Incubation Program by CE CE https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:54 no full 6870
bNJwJS8eAJBJABPXd EA - The weight of suffering (Andreas Mogensen) by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The weight of suffering (Andreas Mogensen), published by Global Priorities Institute on August 17, 2023 on The Effective Altruism Forum.This paper was originally published as a working paper in May 2022 and is forthcoming in The Journal of Philosophy.AbstractHow should we weigh suffering against happiness? This paper highlights the existence of an argument from intuitively plausible axiological principles to the striking conclusion that in comparing different populations, there exists some depth of suffering that cannot be compensated for by any measure of well-being. In addition to a number of structural principles, the argument relies on two key premises. The first is the contrary of the so-called Reverse Repugnant Conclusion. The second is a principle according to which the addition of any population of lives with positive welfare levels makes the outcome worse if accompanied by sufficiently many lives that are not worth living. I consider whether we should accept the conclusion of the argument and what we may end up committed to if we do not, illustrating the implications of the conclusions for the question of whether suffering in aggregate outweighs happiness among human and non-human animals, now and in future.IntroductionThere is both great happiness and great suffering in this world. Which has the upper hand? Does the good experienced by human and non-human animals in aggregate counterbalance all the harms they suffer, so that the world is morally good on balance? Or is the moral weight of suffering greater?To answer this question, we need to know how to weigh happiness against suffering from the moral point of view. In this paper, I present an argument from intuitively plausible axiological principles to the conclusion that in comparing different populations, there exists some depth of lifetime suffering that cannot be counterbalanced by any amount of well-being experienced by others. Following Ord (2013), I call this view lexical threshold negative utilitarianism (LTNU). I don't claim that we should accept LTNU. My aim is to explore different ways of responding to the argument. As we'll see, the positions at which we may arrive in rejecting its premises can be nearly as interesting and as striking as the conclusion.In section 2, I define LTNU more rigorously and set out the argument. It relies on a number of structural principles governing the betterness relation on populations, together with two key premises. The first is the contrary of what Carlson (1998) and Mulgan (2002) call the Reverse Repugnant Conclusion (RRC). The second says, roughly, that the addition of lives with positive welfare levels makes the outcome worse if accompanied by sufficiently many lives that are not worth living. In section 3, I consider whether we should be willing to accept the argument's conclusion, especially given that LTNU has been thought to entail the desirability of human extinction or the extinction of all sentient life (Crisp 2021). In section 4, I discuss our options forrejecting the argument's structural principles. I argue that our options for avoiding the disturbing implications of LTNU discussed in section 3 are limited if we are restricted to rejecting one or more of these principles.In section 5, I consider the possibility of rejecting the first of the key non-structural premises. I focus on the possibility of rejecting the contrary of RRC without accepting RRC. This, I claim, is also not promising, considered as a way of avoiding the disturbing implications of LTNU discussed in section 3. I will have nothing original to say about RRC per se, except that the overarching argument of this paper may be taken as a reason to accept it. In section 6, I consider the possibility of rejecting the last remaining premise. Specifically, I consider the possibility t...

]]>
Global Priorities Institute https://forum.effectivealtruism.org/posts/bNJwJS8eAJBJABPXd/the-weight-of-suffering-andreas-mogensen Thu, 17 Aug 2023 15:18:52 +0000 EA - The weight of suffering (Andreas Mogensen) by Global Priorities Institute Global Priorities Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:52 no full 6869
azezpsWKkcPJd2Hfy EA - Why we should fear any bioengineered fungus and give fungi research attention by emmannaemeka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why we should fear any bioengineered fungus and give fungi research attention, published by emmannaemeka on August 18, 2023 on The Effective Altruism Forum.Someone asked me a question after my talk on climate change and emerging fungal pathogens. The question was, why should we give attention to fungi when we know the risk of bioengineering it can be low. Hence the weight given to viruses and bacteria. I thought through and wondered that it is interesting and the EA community should understand why this should be considered. I think in my opinion we need to take fungi very seriously for so many reasons:What we know that makes fungi pathogen interesting:There are only three known anitifungal drugs against major fungi pathogens which are not very effective. The available antifungals have narrow spectrus and high toxicity, and because of the plasticity of the fungi genome resistance is developed very easily. The seriousness of this issue was brought to fore during the COVID-19 pandemic, a number of secondary infections due to fungi pathogens were reported . The emergence of Azole resistance Aspergillus in Europe and environment is a concern as the mortality due to antibiotics resistance and limited drugs can get up to 100%.There are no vaccines for fungi. Currently, there are no immunotherapy or any vaccine available for any fungal infection. More research and funding is needed. Here is a nice paper that shows how close we are to finding a vaccine against any fungi pathogen.Candida auris the first fungi to have emerged as a result of climate change is a fungal pathogen. This is really interesting because this yeast behaves like bacteria and is naturally resistant to some antifungals. It is difficult to treat and led to the shutting down of hospitals . A Detroit hospital will stop taking patients temporarily as it tries to contain an outbreak of a rare, but potentially deadly and drug-resistant fungus. Find link to this new here. It was first discovered in 2009 and have now been reported in all the continents of the world. The CDC gives reasons why this pathogen is a problemWhy is Candida auris a problem? sourceIt causes serious infections. C. auris can cause bloodstream infections and even death, particularly in hospital and nursing home patients with serious medical problems. More than 1 in 3 patients with invasive C. auris infection (for example, an infection that affects the blood, heart, or brain) die.It's often resistant to medicines. Antifungal medicines commonly used to treat Candida infections often don't work for Candida auris. Some C. auris infections have been resistant to all three types of antifungal medicines.It's becoming more common. Although C. auris was just discovered in 2009, it has spread quickly and caused infections in more than a dozen countries.It's difficult to identify. C. auris can be misidentified as other types of fungi unless specialized laboratory technology is used. This misidentification might lead to a patient getting the wrong treatment.It can spread in hospitals and nursing homes. C. auris has caused outbreaks in healthcare facilities and can spread through contact with affected patients and contaminated surfaces or equipment. Good hand hygiene and cleaning in healthcare facilities is important because C. auris can live on surfaces for several weeks4. Fungi are the only species that have caused the complete extinction of a species. A newspaper reported thus "A deadly fungus that has driven more species to extinction than any other pathogen has spread across Africa unnoticed. Chytrid fungus, Batrachochytrium dendrobatidis, or Bd for short, is a highly infectious fungus that affects frogs, toads, salamanders and other amphibians(Source). Although various diseases, such as white-nose syndrome resulting from the European fungu...

]]>
emmannaemeka https://forum.effectivealtruism.org/posts/azezpsWKkcPJd2Hfy/why-we-should-fear-any-bioengineered-fungus-and-give-fungi Fri, 18 Aug 2023 14:38:03 +0000 EA - Why we should fear any bioengineered fungus and give fungi research attention by emmannaemeka emmannaemeka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:30 no full 6884
xrQkYh8GGR8GipKHL EA - How expensive is leaving your org? Squiggle Model by jessica mccurdy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How expensive is leaving your org? Squiggle Model, published by jessica mccurdy on August 18, 2023 on The Effective Altruism Forum.Disclaimer: I (Jessica) opted to post this without extra nuance as opposed to not sharing. Hopefully, it can still help spark some discussion and consideration but there are many additional considerations here.Ben West made this rough Squiggle model you can use to calculate an organization's costs of replacing an employee in months of productivity and rough monetary value.The model is pretty simple: total_cost = hiring_manager_time + hiring_support_time + time_without_employee + skill_up_timeI wanted this to be shared because when Ben mentioned some of the extra costs of an employee leaving an org, I was surprised by how high it was. I think others might be underweighting this cost as well which could influence some career decisions.One highlighted point from the model I wasn't seriously considering:A new employee will generally take some amount of time to become as efficient asthe previous employee.This comes from several sources:New people will often simply lack relevant skills/knowledgeThe coworkers of new staff will take a while to understand how to efficiently work with the new person, build up trust with them, understand their strengths and weaknesses, etc.New people will also almost certainly lack role-specific knowledge, e.g. "to send the newsletter you have to use this website and then get approval from this person except in these scenarios you actually get approval from different person" etc.Hope this is useful!To be clear, I definitely think there are times when people can have a lot more impact elsewhere and there are many biases that act in favor of staying in a job for too long. However, I feel like people in EA hop around particularly fast and sometimes face some social incentives to leave their current positions.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
jessica_mccurdy https://forum.effectivealtruism.org/posts/xrQkYh8GGR8GipKHL/how-expensive-is-leaving-your-org-squiggle-model Fri, 18 Aug 2023 09:03:14 +0000 EA - How expensive is leaving your org? Squiggle Model by jessica mccurdy jessica_mccurdy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:00 no full 6882
2rRsjdrL9BEWC3d7C EA - Personal Reflections on Longtermism by NatKiilu Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Personal Reflections on Longtermism, published by NatKiilu on August 18, 2023 on The Effective Altruism Forum.Disclaimer: This post is entirely based on my impressions of longtermism as informed by my interaction with the literature, other longtermists, and questions from curious non-longtermists. This post is not a critique of the work being done by longtermists currently but rather a push to widen the category of interventions being pursued under the longtermist umbrella. It may be that I am missing some key facts in which case I would greatly appreciate responses challenging my assumptions, claims, and conclusions. Finally, all views expressed here are solely mine and do not reflect any organizations I am affiliated with directly or indirectly. I take full responsibility for any harmful premises presented.[Summary]This post is a personal reflection on longtermism from my perspective as an African woman living and working in the continent. In the post, I make the claim that it seems as though pursuing the 'flourishing of future generations' such that all future beings lead dignified and worthwhile lives points to the need for us to pursue more interventions that feed into systemic change. I also make this point as a contribution to the of pursuit of good value lock-in. I contend that the values we currently hold substantially lower the quality of life for some groups of people who share traits which have historically been subjected to oppression and marginalization and that failing to work on these issues falls short of the claim that (some) longtermists seek not only the endurance of humanity but its flourishing too.[Earlier impressions]My initial response as a newcomer to EA was to reject longtermism. At the very least, I believed - incorrectly - that the widely accepted sustainability principle already covered its objectives by ensuring that the earth remains habitable for future generations. I also found strong longtermism - that in any set of decision situations, the best decision would be the one with the best consequences for the far-future conclusion - to be quite counterintuitive, especially given my surroundings. Finally, it was also difficult to fully comprehend the sheer numbers cited (scope insensitivity), difficult to believe that we could actually know what would count as protecting future generations (cluelessness), and even then whether these actions would actually lead to their intended consequences given that their effects are to be felt far off into the future (e.g., washing out and rapid diminution).The case for longtermism rests on the following premises: (a) that future people matter, i.e., they have equal moral worth as present people; (b) that by many estimates, they will by far outnumber us; (c) that there are ways in which we can make sure the longterm future goes well; and (d) (according to strong longtermists) that doing so is the greatest moral imperative of our lifetime given what is at stake. My understanding of the work and literature around longtermism seems to indicate that longtermism, broadly speaking, is concerned with two objectives: (a) ensuring that the longterm future exists and humanity's potential is not destroyed or significantly impaired (work here is heavily focused on preventing existential catastrophes) as well as, (b) ensuring that future beings lead flourishing lives (concerns here revolve around the quality of lives and wellbeing of sentient beings existing then, e.g., good value lock in and preventing suffering risks).I gradually came to embrace weak longtermism as a result of frequently exposing myself to discussions on the idea, a commitment to embracing the scout mindset, and becoming aware of some of the cognitive biases at work. Hence, I consider ensuring that the longterm future goes well a priority ...

]]>
NatKiilu https://forum.effectivealtruism.org/posts/2rRsjdrL9BEWC3d7C/personal-reflections-on-longtermism Fri, 18 Aug 2023 08:14:11 +0000 EA - Personal Reflections on Longtermism by NatKiilu NatKiilu https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:30 no full 6880
zjmpFW3nBKwaBB5xr EA - Corporate campaigns work: a key learning for AI Safety by Jamie Harris Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Corporate campaigns work: a key learning for AI Safety, published by Jamie Harris on August 18, 2023 on The Effective Altruism Forum.Negotiations and pressure campaigns have proven effective at driving corporate change across industries and movements. I expect that AI safety/governance can learn from this!The basic idea:The runaway success of effective animal advocacy has been sweeping corporate reformSimilar tactics have been successful across social movementsGovAI to ??? to PauseAI: Corporate campaigns need an ecosystem of roles and tacticsPossible next stepsPragmatic research: ask prioritisation, user interviews, and message testingStart learning by doingWork extensively with volunteers (and treat them like staff members)Moral trade: longtermist money for experienced campaigner secondmentsThe runaway success of effective animal advocacy has been sweeping corporate reformAnimal advocates, funded especially by Open Philanthropy and other EA sources, have achieved startling success in driving corporate change over the past ~decade. As Lewis Bollard, Senior Program Officer at Open Philanthropy, writes:A decade ago, most of the world's largest food corporations lacked even a basic farm animal welfare policy. Today, they almost all have one. That's thanks to advocates, who won about 3,000 new corporate policies in the last ten years.In 2015-18, as advocates secured cage-free pledges from almost all of the largest American and European retailers, fast food chains, and foodservice companies. Advocates then extended this work globally, securing major pledges from Brazil to Thailand. Most recently, advocates won the first global cage-free pledges from 150 multinationals, including the world's largest hotel chains and food manufacturers.A major question was whether these companies would follow through on their pledges. So far, almost 1,000 companies have - that's 88% of the companies that promised to go cage-free by the end of last year. Another 75% of the world's largest food companies are now publicly reporting on their progress in going cage-free.Some advocates establish professional relationships with companies and encourage them to introduce improvements. Others use petitions, protests, and PR pressure to push companies over the line.Almost everyone who investigates these campaigns thoroughly seems to conclude that they're exceptionally cost-effective at making real improvements for animals, at least in the short term. There are both ethical and strategic reasons why some animal advocates doubt that these kinds of incremental welfare tactics are a good idea, but I lean towards thinking that the indirect effects are neutral to positive, while the direct effects are robustly good. There are other promising tactics that animal advocates can use, but the track record and evidence base for corporate welfare campaigns is unusually strong.Of course, animal advocacy is different to AI Safety. But something that has been so successful in one context seems worth exploring seriously in others. And oh wait, it has worked in more than one context already.Similar tactics have been successful across social movementsIn my research into other social movements' histories, I found strong evidence that pressure tactics can be effective at changing companies' behaviour or disrupting their processes:US anti-abortion activists seem to have successfully disrupted the supply of abortion services and may have reduced abortion rates.Anti-death penalty activists successfully disrupted the supply of lethal injection drugs.Pressure campaigns likely accelerated Starbucks and other chains' participation in Fair Trade certification schemes.Prison riots and strikes seem to have encouraged the creation of new procedures and rules for prisoners.There are lots of caveats, concerns,...

]]>
Jamie_Harris https://forum.effectivealtruism.org/posts/zjmpFW3nBKwaBB5xr/corporate-campaigns-work-a-key-learning-for-ai-safety Fri, 18 Aug 2023 02:04:37 +0000 EA - Corporate campaigns work: a key learning for AI Safety by Jamie Harris Jamie_Harris https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:23 no full 6878
EcKmt8ZJ3dcQBigna EA - Launching Foresight Institute's AI Grant for Underexplored Approaches to AI Safety - Apply for Funding! by elteerkers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Launching Foresight Institute's AI Grant for Underexplored Approaches to AI Safety - Apply for Funding!, published by elteerkers on August 19, 2023 on The Effective Altruism Forum.SummaryWe are excited to launch our new grant programme that will fund areas that we consider underexplored when it comes to AI safety. In light of the potential for shorter AGI timelines, we will re-grant $1-1.2 million per year to support much needed development in one of the following areas:Neurotechnology, Whole Brain Emulation, and Lo-fi uploading for AI safetyCryptography and Security approaches for Infosec and AI securitySafe Multipolar AI scenarios and Multi-Agent gamesApply for funding- to be reviewed on a rolling basis.See below, or visit our website to learn more!Areas that We're Excited to FundNeurotechnology, Whole Brain Emulation, and Lo-fi uploading for AI safetyWe are interested in exploring the potential of neurotechnology, particularly Whole Brain Emulation (WBE) and cost-effective lo-fi approaches to uploading, that could be significantly sped up, leading to a re-ordering of technology arrival that might reduce the risk of unaligned AGI by the presence of aligned software intelligence.We are particularly excited by the following:WBE as a potential technology that may generate software intelligence that is human-aligned simply by being based directly on human brainsLo-fi approaches to uploading (e.g. extensive lifetime video of a laboratory mouse to train a model of a mouse without referring to biological brain data)Neuroscience and neurotech approaches to AI Safety (e.g. BCI development for AI safety)Other concrete approaches in this areaGeneral scoping/mapping opportunities in this area, especially from a differential technology development perspective, as well as understanding the reasons why this area may not be a suitable focusCryptography and Security approaches for Infosec and AI securityTo explore the potential benefits of Cryptography and Security technologies in securing AI systems. This includes:Computer security to help with AI Infosecurity or approaches for scaling up security techniques to potentially apply to more advanced AI systemsCryptographic and auxiliary techniques for building coordination/governance architectures across different AI(-building) entitiesPrivacy-preserving verification/evaluation techniquesOther concrete approaches in this areaGeneral scoping/mapping opportunities in this area, especially from a differential technology development perspective, or exploring why this area is not a good focus areaSafe Multipolar AI scenarios and Multi-Agent gamesExploring the potential of safe Multipolar AI scenarios, such as:Multi-agent game simulations or game theoryScenarios avoiding collusion and deception, and pareto-preferred and positive-sum dynamicsApproaches for tackling principal agent problems in multipolar systemsOther concrete approaches in this areaGeneral scoping/mapping opportunities in this area, especially from a differential technology development perspective, or exploring why this area is not a good focus areaInterested in Applying?We look forward to receiving your submissions. Applications will be reviewed on a rolling basis-apply here.For the initial application, you'll be required to submit:A background of yourself and your workA short summary and budget for your project highlighting which area you are applying to, and outlining what you would like to investigate and whyAt least two referencesWe will aim to get back to applicants within 8 weeks of receiving their application.If you are interested then please find more information about the grant here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
elteerkers https://forum.effectivealtruism.org/posts/EcKmt8ZJ3dcQBigna/launching-foresight-institute-s-ai-grant-for-underexplored Sat, 19 Aug 2023 15:51:56 +0000 EA - Launching Foresight Institute's AI Grant for Underexplored Approaches to AI Safety - Apply for Funding! by elteerkers elteerkers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:43 no full 6892
StbTBXTrur3ntur4u EA - New probabilistic simulation tool by ProbabilityEnjoyer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New probabilistic simulation tool, published by ProbabilityEnjoyer on August 19, 2023 on The Effective Altruism Forum.Dagger (usedagger.com) is a new tool for calculations with uncertainty. It uses Monte Carlo simulation.There are two ways to specify your simulation model:Import an existing spreadsheetuse Probly, a Python dialect designed for probabilistic simulationℹ️ Each of the two links above has 4 interactive examples. You might want to start there.SpreadsheetExampleIn this 15-second video, we take a complex existing spreadsheet (100+ rows) from GiveWell and turn it into a Monte Carlo simulationThe sheet already gives us "optimistic/pessimistic" values, so it's as simple as adding one column to specify the distribution as (e.g.) uniform(Longer version of this video)FeaturesDependency graphIntuitive and mathematically rigorous sensitivity analysisOur sensitivity analysis uses Sobol' global sensitivity indices. The approach and the intuition behind it are explained in more detail here.ℹ️ You need to enable the sensitivity analysis under "Advanced options"Summary tableThis table exposes the structure of your model by showing the dependency graph as a tree. Similar to Workflowy, you can zoom to any variable, or expand/collapse.ProblyProbly feels very like Python, except that any number can also be a probability distribution:ExampleHere's a fuller example of the syntax and resulting output. It's part of a GiveWell CEA of iron and folic acid supplementation.Distribution supportProbly supports 9 probability distributions. Each can be constructed in multiple ways.For example, you can construct a normal distribution in 5 ways:This clickable table shows you everything that's supported, and includes example code:ℹ️ Shortcut: probly.dev redirects to usedagger.com/problyLimitationsThere are at the moment numerous limitations. A small selection of them:Probly:Doesn't support the Sobol' sensitivity analysisDoesn't show the dependency graphSpreadsheetThere is no UI in Dagger to edit the model. All changes must go via the spreadsheet.The spreadsheet must specify probability distributions in a specific format.All models are publicThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
ProbabilityEnjoyer https://forum.effectivealtruism.org/posts/StbTBXTrur3ntur4u/new-probabilistic-simulation-tool Sat, 19 Aug 2023 15:48:58 +0000 EA - New probabilistic simulation tool by ProbabilityEnjoyer ProbabilityEnjoyer https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:32 no full 6891
to34jb5LuWo4fM9gC EA - Taking prioritisation within 'EA' seriously by CEvans Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Taking prioritisation within 'EA' seriously, published by CEvans on August 19, 2023 on The Effective Altruism Forum.NB: This post is arguably slightly info hazardous (for lack of a better term) for anyone in the community who might feel particular anxiety about questioning their own career decisions. Perhaps consider reading this first: You have more than one goal, and that's fine - EA Forum (effectivealtruism.org). This piece is about the importance of having non-impact focussed goals, and that it is extremely ok to have them. This post is intended to suggest what people should do in so far as they want to have more impact, rather than being a suggestion of what everyone in the EA community should do.Your decisions matter. The precise career path you pick really matters for your impact. I think many people in the EA community would say they believe this if asked, but haven't really internalised what this means for them. I think many people would have great returns for impact to thinking more carefully about prioritisation for their career, even within the "EA careers space. Here are some slight caricatures of statements I hear regularly:"I want an 'impactful' job""I am working on a very important problem, so within that I will do what I think is interesting""I was already interested in something mentioned on 80,000 Hours, so I will work on that""People seem to think this area is important, so I suppose I should work on that""I am not a researcher, so I shouldn't work on that problem"I think these are all major mistakes for those who say them, in so far as impact is their primary career goal. My goals for this post are to make the importance of prioritisation feel more salient to members of the community (which is the first half of my post), and to help making progress feel and be more attainable (the second half from "What does thinking seriously about prioritisation look like".Key ClaimsFor any given person, their best future 'EA career paths' are at least an order of magnitude more impactful than their median 'EA career path'.For over 50% of self identifying effective altruists, in their current situation:Thinking more carefully about prioritisation will increase their expected impact by several times.There will be good returns to thinking more about the details of prioritising career options for yourself, not just uncritically deferring to others or doing very high-level "cause prioritisation".They overvalue personal fit and prior experience when determining what to work on.I think the conclusions of my argument should excite you.Helping people is amazing.. This community has enabled many of us to help orders of magnitude more people than we otherwise would have. I am claiming that you might be able to make a similar improvement again with careful thought, and that as a community we might be able to achieve a lot more.Defining prioritisation in terms of your careerPrioritisation: Determining which particular actions are most likely to result in your career having as much impact as possible. In practice, this looks like a combination of career planning and cause/intervention prioritisation. So, "your prioritisation" means something like "your current best guess of what precisely you should be doing with your career for impact".Career path: The hyper-specific route which you take through your career, factoring decisions in such as which specific issues/interventions to work on, which organisations to work at, and which specific roles to take. I do not mean to be as broad as "AI alignment researcher" or "EA community builder".'EA' person/career path: By this I mean a person or choice which is motivated by impact, not necessarily in so-called 'EA' organisations or explicitly identifying as part of the community.For any given person, the best "EA"...

]]>
CEvans https://forum.effectivealtruism.org/posts/to34jb5LuWo4fM9gC/taking-prioritisation-within-ea-seriously Sat, 19 Aug 2023 11:48:28 +0000 EA - Taking prioritisation within 'EA' seriously by CEvans CEvans https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 36:55 no full 6890
9cdntNDJQTS8dH5fh EA - Making EA more inclusive, representative, and impactful in Africa by Ashura Batungwanayo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Making EA more inclusive, representative, and impactful in Africa, published by Ashura Batungwanayo on August 19, 2023 on The Effective Altruism Forum.Authors: Ashura Batungwanayo (University of KwaZulu Natal) and Hayley Martin (University of Cape Town)DISCLAIMER: This paper draws from our collective experiences, perspectives, and conversations with community builders in Africa who share similar experiences and perspectives.IntroductionIn our engagement with Effective Altruism (EA), we noted a distinct emphasis on Existential Risks, notably concerning Artificial Intelligence (AI), alongside a focus on animal welfare and veganism - issues that carry nuanced significance, differing between Western societies and Africa due to diverse factors in animal husbandry. However, our initial allure lay in EA's focus on Global Health and Development (GH&D). GH&D holds a special significance for us as it directly confronts the realities we, as African students, encounter daily.Amidst this, the need for balance arises: acknowledging existential risks while prioritising urgent issues like poverty and education. Our vision extends to an EA Africa initiative that blends bottom-up and top-down approaches for effective, contextually attuned change. However, challenges persist, including navigating a competitive altruistic landscape and balancing immediate impact with long-term prevention. A critical thread is Africa's self-sufficiency, with EA acting as a catalyst for local partnership, co-designed interventions, and self-reliance. The path forward involves forging strategic collaborations, knowledge sharing, and empowerment, all underpinned by a commitment to inclusivity, representation, and comprehensive change.Global Health and Development's Urgent Call to Address African RealitiesThe challenges it addresses are not abstract concepts, but tangible issues that our communities and loved ones have grappled with. As university students, we acknowledge the privilege bestowed upon us and feel a profound responsibility to address the issues that plague our homeland. Our identity is intertwined with the principles of Ubuntu, which emphasise our shared humanity and interconnectedness. This cultural ethos, coupled with the weight of the "black tax," the financial responsibilities we bear for our families and communities, amplifies our desire to contribute meaningfully to the well-being of our people. Incorporating the principles of GH&D into our personal cause area is more than a mere pursuit; it's a calling driven by the urgent need to translate our empathy into action.By understanding the nuances of diplomatic engagement and the complexities of GH&D, we can channel our aspirations into effective strategies that uplift our communities while respecting our cultural values.GH&D resonates deeply with our African identity, our educational privilege, and our unwavering commitment to making a positive impact in the places we call home. While AI Alignment and Animal Welfare remains a critical concern, we acknowledge the complexities in communicating its relevance to African audiences. The messaging around AI Safety and Animal Welfare doesn't inherently speak with the immediate and intersecting challenges we face as a continent, although we recognise its significance in the broader global context (and acknowledge that there are people working on AI Alignment and Animal Advocacy on the continent as well). It's important to note that concentrating solely on existential risks could inadvertently diminish the urgency of current issues, such as poverty and education. Striving for a comprehensive approach that appreciates the distinctive dynamics of African contexts is paramount.It's worth contemplating the establishment of an EA Africa initiative, one that aspires to harmonise both bottom-up and...

]]>
Ashura Batungwanayo https://forum.effectivealtruism.org/posts/9cdntNDJQTS8dH5fh/making-ea-more-inclusive-representative-and-impactful-in Sat, 19 Aug 2023 01:11:08 +0000 EA - Making EA more inclusive, representative, and impactful in Africa by Ashura Batungwanayo Ashura Batungwanayo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:48 no full 6888
g4TcehspjDumGXucx EA - My EA Journey by Eli Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My EA Journey, published by Eli Kaufman on August 20, 2023 on The Effective Altruism Forum.Summary:This is the story of my EA journey. Sharing it as I believe that my story could be relevant to others particularly those who arrive in EA mid-career or not having worked on a cause area before.Background:Around two years ago a podcast appeared on my spotify playlist. It was one of the 80,000 hours episodes. It piqued my curiosity and after listening to it I wanted to learn more. Listening to some other episodes I realized there are a lot of concepts I don't understand so that drove me to look for more resources online. Up until that point I have not come across EA. I had a broad interest in global health & development, philanthropy and doing good but no practical insight how to go about this. I started reading about EA online, then signed up for the virtual intro fellowship. Within a couple of months I read a few books (Doing Good Better, The Life You Can Save, The Precipice to name a few). I didn't expect that, but soon I started having realizations that would make an impact on my life.Career:I work in IT with background in operations and specialize in implementing solutions based on Salesforce platform for organizations. For years I have been working with organizations that I didn't feel particularly aligned with. I kind of accepted the reality that my work is not where my passion is and had other hobbies and interests which I was excited about. The realization that I could use my skills and expertise to do something impactful and meaningful was an important one. I figured out that sooner or later I will come across the right opportunity and in the meantime was actively networking. Here's a post I wrote at the time.Community:One of the main sources of inspiration for me was the people I met in the EA community and the stories they shared. Being part of such a community dedicated to doing good was something that resonated with me. I found out that where I live (Amsterdam) has a pretty active EA community (including an awesome co-working space) so had a chance to attend meetups, virtual and in-person events, attended a couple of EAGx conferences, signed up to a bunch of Slack spaces and interacted with people from around the world.Fast forward:I signed a giving pledge as I felt this is something that makes sense to me. I applied to a job with The END Fund and started there earlier this year, feeling excited about using my skills to help an highly effective organization in the field of Neglected Tropical Diseases.Main lessons:Initially it may seem that everyone in EA are so dedicated and it makes you feel you're not doing enough. Don't try to be a maximalist! Just do your bit.I found people in the EA community to be helpful, and willing to share their experiences, offer advice, point newcomers to a useful direction. (If you would like to chat feel free to reach out here)There are many resources out there such as career advice, coaching, specific professional and cause area groups, podcasts.Networking is perhaps the most important aspect if you're looking for a way to get more involved or make a career change. If you happen to live in a place with an active community - check out local events. If you don't - EA Anywhere is a good starting point. EAGx conferences are a great way to learn more and talk to people.Don't feel that only highly specialized experts can contribute to making the world better. Each of us has something to bring.Go ahead and take the first step - write a post, go to a conference. Who knows, perhaps it will put you on a life-changing journey?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Eli Kaufman https://forum.effectivealtruism.org/posts/g4TcehspjDumGXucx/my-ea-journey Sun, 20 Aug 2023 21:31:00 +0000 EA - My EA Journey by Eli Kaufman Eli Kaufman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:27 no full 6899
wgos2xaTA8LF9EiWE EA - Applications Open: CEA Career Planning Program Pilot by Cian M Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications Open: CEA Career Planning Program Pilot, published by Cian M on August 20, 2023 on The Effective Altruism Forum.The CEA Virtual Programs team is piloting a 5-week Career Planning Program! Apply by August 27th.Key infoDates: September 11th - October 15th, 2023Application deadline: August 27thTarget Audience: Anyone who is considering pursuing a career in an EA cause area and who has an existing understanding of EA ideas.Content: Participants will complete an adapted version of 80,000 Hours' career planning template, and have weekly discussions with a facilitator and a group of peers interested in pursuing similar career paths.We think we can help you to plan your career by providing structure, accountability and feedback during your career planning process. Through a combination of reading, exercises and discussions, you'll explore what a high-impact career looks like for you, and start making progress on next steps. Find out more about the program here.We highly recommend this program if you:Are interested in finding a career that is high-impact and a good fit for you.Have a good understanding of core ideas in effective altruism and/or have completed an Introductory EA Program or In-Depth EA Program.Can commit at least 1.5 hours a week to exercises and readings, and 1.5 hours to discussion sessions.Note: Though the materials for this program are based on the 80,000 Hours 8-week career planning course and 80,000 Hours was consulted when creating this program, they are not the organisers. However, we would still encourage people to apply to 80,000 Hours 1:1 advising - the accountability and structure of this program makes a great compliment to 1:1 advising!Ready to apply? Find out more about the program here and register here by Sunday, August 27th.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Cian M https://forum.effectivealtruism.org/posts/wgos2xaTA8LF9EiWE/applications-open-cea-career-planning-program-pilot Sun, 20 Aug 2023 20:49:13 +0000 EA - Applications Open: CEA Career Planning Program Pilot by Cian M Cian M https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:59 no full 6898
4yZzSziCLkdzsYHt6 EA - Longtermism Fund: August 2023 Grants Report by Michael Townsend Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermism Fund: August 2023 Grants Report, published by Michael Townsend on August 20, 2023 on The Effective Altruism Forum.IntroductionIn this grants report, the Longtermism Fund team is pleased to announce that the following grants have been recommended by Longview and are in the process of being disbursed:Two grants promoting beneficial AI:Supporting AI interpretability work conducted by Martin Wattenberg and Fernanda Viegas at Harvard University ($110,000 USD)Funding AI governance work conducted by the evaluations project at the Alignment Research Center ($220,000 USD)Two biosecurity and pandemic prevention grants:Supporting a specific project at NTI | Bio working to disincentivize biological weapons programmes ($100,000 USD)Partially funding the salary of a Director of Research and Administration for the Center for Communicable Disease Dynamics ($80,000 USD)One grant improving nuclear security:Funding a project by the Carnegie Endowment for International Peace to better understand and advocate for policies that avoid escalation pathways to nuclear war ($52,000 USD).This report will provide information on what the grants will fund, and why they were made. It was written by Giving What We Can, which is responsible for the Fund's communications. Longview Philanthropy is responsible for the Fund's research and grantmaking.We would also like to acknowledge and apologise for the report being released two months later than we would have liked, in part due to delays in the process of disbursing these grants. In future, we will aim to take potential delays into account so that we can better keep to our target of releasing a report once every six months.Scope of the FundThese grants were decided by the general grantmaking process outlined in our previous grants report and the Fund's launch announcement.As a quick summary, the Fund supports work that:Reduces existential and catastrophic risks, such as those coming from misaligned artificial intelligence, pandemics, and nuclear war.Promotes, improves, and implements key longtermist ideas.In addition, the Fund focuses on organisations with a compelling and transparent case in favour of their cost-effectiveness, and/or that will benefit from being funded by a large number of donors. Longview Philanthropy decides the grants and allocations based on its past and ongoing work to evaluate organisations in this space.GranteesAI interpretability work at Harvard University - $110,000This grant is to support the work of Martin Wattenberg and Fernanda Viegas to develop their AI interpretability work at Harvard University. The grant aims to fund research that enhances our understanding of how modern AI systems function - better understanding how these systems work is among the more straightforward ways we can ensure these systems are safe. Profs. Wattenberg and Viegas have a strong track record (with both having excellent references from other experts) and their future plans are likely to advance the interpretability field.Longview: "We recommended a grant of $110,000 to support Martin Wattenberg and Fernanda Viegas' interpretability work on the basis of excellent reviews of their prior work. These funds will go primarily towards setting up a compute cluster and hiring graduate students or possibly postdoctoral fellows."Learn more about this grant.ARC Evals - $220,000The evaluations project at the Alignment Research Center ("ARC Evals") works on "assessing whether cutting-edge AI systems could pose catastrophic risks to civilization." ARC Evals is contributing to the following AI governance approach:Before a new large-scale system is released, assess whether it is capable of potentially catastrophic activities.If so, require strong guarantees that the system will not carry out such activities.ARC Evals wor...

]]>
Michael Townsend https://forum.effectivealtruism.org/posts/4yZzSziCLkdzsYHt6/longtermism-fund-august-2023-grants-report Sun, 20 Aug 2023 12:10:00 +0000 EA - Longtermism Fund: August 2023 Grants Report by Michael Townsend Michael Townsend https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:41 no full 6895
XgeFZxBzbXuFieZ5H EA - Probably Good published a list of impact-focused job-boards by Probably Good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably Good published a list of impact-focused job-boards, published by Probably Good on August 21, 2023 on The Effective Altruism Forum.Probably Good recently published a page for impact-focused job boards on our site!We created it to help people who are searching for potentially impactful opportunities in a range of cause areas and regions. It includes a variety of for-good job boards (e.g. international non-profit jobs, civil service positions, tech-focused roles), region-specific boards, and a few boards specifically geared towards climate change, animal advocacy, global health. We also spotlight the 80,000 Hours job board, which is the most EA-aligned resource we include.This page is intended to be a good jumping off point for people to start looking for jobs that could help others. So while we believe the boards listed can be a good place to look for opportunities, we don't endorse every job on every board. We encourage our readers to carefully analyze the opportunities that interest them, use tools (such as our career guide chapters on this topic) to assess their potential impact, and consider both the direct impact and the career capital that different opportunities can provide. We're also happy to chat 1-on-1 about specific opportunities and impactful options in a career advising call.This page is still a work-in-progress and we'd love to keep expanding it to accommodate different interests and priorities. If you have suggestions for job boards we should include, please let us know here or email directly at hello@probablygood.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Probably Good https://forum.effectivealtruism.org/posts/XgeFZxBzbXuFieZ5H/probably-good-published-a-list-of-impact-focused-job-boards Mon, 21 Aug 2023 22:46:04 +0000 EA - Probably Good published a list of impact-focused job-boards by Probably Good Probably Good https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:37 no full 6910
uxrAdXdYpXodrggto EA - An Elephant in the Community Building room by Kaleem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Elephant in the Community Building room, published by Kaleem on August 21, 2023 on The Effective Altruism Forum.These are my own views, and not those of my employer, EVOps, or of CEA who I have contracted for in the past and am currently contracting for now. This was meant to be a strategy fortnight contribution, but it's now a super delayed/unofficial, and underwritten strategy fortnight contribution.Before you read this:This is pretty emotionally raw, so please 1) don't update too much on it if you think I'm just being dramatic 2) I might come back and endorse or delete this at some point. I've put off writing this for a long time, because I know that some of the conclusions or implications might be hurtful or cause me to become even more unpopular than I already feel I am - as a result, I've left it really brief, but I'm willing to make it more through if I get the sense that people think it'd be valuable.This post is not meant as a disparagement of any of my fellow African or Asian or Latin-American EAs. This is less about you, and more about how much the world sucks, and how hard the state of the world makes it for us to fully participate in, and contribute to, EA the way we'd like to. I think I'm hoping to read a bunch of comments proving me wrong or at least making me reconsider how I feel about this. That being said, I don't like letting feelings get in the way of truth seeking and doing what's right. So here it goes.Summary:I think community builders and those funding/steering community building efforts should be more explicit and open about what their theory of change for global community building is (especially in light of the reduced amount of funding available), as there could be significant tradeoffs in impact between different strategies.IntroductionI think there are two broad conceptualisations of what/how EA functions in the world, and each has a corresponding community building strategy. If you think there are more than these two, or that these are wrong or could be improved, please let me know. From my experience, I think that all community building initiatives fall into one of two strategies/worldviews, each with a different theory of change. These are:Global EAEA can be for anybody in the world - The goal of EA community building is to spread the ideas of EA as far and wide as possible. By showing people that regardless of your context, you can make a difference which is possibly hundreds of times better than you would have done otherwise, we'll be increasing the chances of motivated and talented people getting involved in high-impact work, and generally increasing the counterfactual positive impact of humanity on the wellbeing of living and future beings. I have a sense that following this strategy currently leads to having a more transparent/non-secretive/less insidious optic for the movement.Efforts which fall into this bucket would be things like:funding city and national groups in countries which aren't major power-centers in the US, UK, EU, or Chinafunding university groups which aren't in the top 100/200 in the world for subjects which have a track record of being well-represented amongst global decision-makers.Allocating community resources to increasing blindly-racial or geographic diversity and inclusion in the community (rather than specific viewpoints or underrepresented moral beliefs etc).Narrow EAPower and influence follow a heavy-tailed distribution, and we need power and influence to make important changes. If there is a small group of people who are extremely influential or high-potential, then the goal of community building should be to seek out and try to convince them to use their resources to have an outsized positive influence on the wellbeing of current and future beings. I have a sense that the way that ...

]]>
Kaleem https://forum.effectivealtruism.org/posts/uxrAdXdYpXodrggto/an-elephant-in-the-community-building-room Mon, 21 Aug 2023 17:43:37 +0000 EA - An Elephant in the Community Building room by Kaleem Kaleem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:20 no full 6905
saAXc8zsFgZuxFM6L EA - XPT forecasts on (some) Direct Approach model inputs by Forecasting Research Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: XPT forecasts on (some) Direct Approach model inputs, published by Forecasting Research Institute on August 21, 2023 on The Effective Altruism Forum.This post was co-authored by the Forecasting Research Institute and Rose Hadshar. Thanks to Josh Rosenberg for managing this work, Zachary Jacobs and Molly Hickman for the underlying data analysis, Kayla Gamin for fact-checking and copy-editing, and the whole FRI XPT team for all their work on this project. Special thanks to staff at Epoch for their feedback and advice.SummarySuperforecaster and expert forecasts from the Existential Risk Persuasion Tournament (XPT) differ substantially from Epoch's default Direct Approach model inputs on algorithmic progress and investment:InputEpoch (default)XPT superforecasterXPT expertNotesBaseline growth rate in algorithmic progress (OOM/year)0.21-0.650.09-0.20.15-0.23Current spending ($, millions)$60$35$60Yearly growth in spending (%)34%-91.4%6.40%-11%5.7%-19.5%Epoch: 80% confidence interval (CI)XPT: 90% CI, based on 2024-2030 forecastsEpoch: 2023 estimateXPT: 2024 median forecastEpoch: 80% CIXPT: 90% CI, based on 2024-2050 forecastsNote that there are no XPT forecasts relating to other inputs to the Direct Approach model, most notably the compute requirements parameters.Taking the Direct Approach model as given and using relevant XPT forecasts as inputs where possible leads to substantial differences in model output:OutputEpoch default inputsXPT superforecaster inputsXPT expert inputsMedian TAI arrival yearProbability of TAI by 2050Probability of TAI by 2070Probability of TAI by 210020362065205270%38%49%76%53%65%80%66%74%Note that regeneration affects model outputs, so these results can't be replicated directly, and the TAI probabilities presented here differ slightly from those in Epoch's blog post. Figures given here are the average of 5 regenerations.Epoch is drawing on recent research which was not available at the time the XPT forecasters made their forecasts (the XPT closed in October 2022).Most of the difference in outputs comes down to differences in forecasts on baseline growth rate in algorithmic progress and yearly growth in spending, where XPT forecasts differ radically from the Epoch default inputs (which extrapolate historical trends).XPT forecasters' all-things-considered transformative artificial intelligence (TAI) timelines are much longer than those which the Direct Approach model outputs using XPT inputs:Source of 2070 forecastXPT superforecasterXPT expertDirect Approach model53%65%XPT postmortem survey question on probability of TAI by 20703.75%16%If you buy the assumptions of the Direct Approach model, and XPT forecasts on relevant inputs, this pushes timelines out by two to three decades compared with the default Epoch inputs.However, it still implies TAI by 2070.It seems very likely that XPT forecasters would not buy the assumptions of the Direct Approach model: their explicitly stated probabilities on TAI by 2070 are <20%.IntroductionThis post:Compares Direct Approach inputs with XPT forecasts on algorithmic progress and investment, and shows how the differences in forecasts impact the outputs of the Direct Approach model.Discusses why Epoch's inputs and XPT forecasts differ.Notes that XPT forecasters' all-things-considered TAI timelines are longer than those which the Direct Approach model outputs using XPT inputs.Includes an appendix on the arguments given by Epoch and in the XPT for their respective forecasts.Background on the Direct Approach modelIn May 2023, researchers at Epoch released an interactive Direct Approach model, which models the probability that TAI arrives in a given year. The model relies on:An estimate of the compute required for TAI, based on extrapolating neural scaling laws.Various inputs rel...

]]>
Forecasting Research Institute https://forum.effectivealtruism.org/posts/saAXc8zsFgZuxFM6L/xpt-forecasts-on-some-direct-approach-model-inputs Mon, 21 Aug 2023 17:38:30 +0000 EA - XPT forecasts on (some) Direct Approach model inputs by Forecasting Research Institute Forecasting Research Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 33:50 no full 6906
rLiCjrAv9D8chCoG5 EA - "Dimensions of Pain" workshop: Summary and updated conclusions by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Dimensions of Pain" workshop: Summary and updated conclusions, published by Rethink Priorities on August 21, 2023 on The Effective Altruism Forum.Executive SummaryBackground: The workshop's goal was to leverage expertise in pain to identify strategies for testing whether severity or duration looms larger in the overall badness of negatively valenced experiences. The discussion was focused on how to compare welfare threats to farmed animals.No gold standard behavioral measures: Although attendees did not express confidence in any single paradigm, several felt that triangulating results across several paradigms would increase clarity about whether nonhuman animals are more averse to severe pains or long-lasting pains.Consistent results across different methodologies only strengthens a conclusion if they have uncorrelated or opposing biases. Fortunately, while classical conditioning approaches are probably biased towards severity mattering more, operant conditioning approaches are probably biased towards duration mattering more. Unfortunately, the biases might be too large to produce convergent results.Behavioral experiments may lack external validity: Attendees believed that a realistic experiment would not involve pains of the magnitude that characterize the worst problems farmed animals endure. Thus, instead of prioritizing external validity, we recommend whatever study designs create the largest differences in severity.Studies of laboratory animals and (especially) humans seem more likely to generate large differences in severity than studies of farmed animals.No gold standard biomarkers: Biomarkers could elide the biases that behavioral and self-report data inevitably introduce. However, attendees argued that there are no currently known biomarkers that could serve as an aggregate measure of pain experience over the course of a lifetime.Priors should favor prioritizing duration: Attendees had competing ideas about how to prioritize between severity and duration in the absence of compelling empirical evidence. In cases where long-lasting harms are at least thousands of times longer than more severe harms and are of at least moderate severity, we favor a presumption that long-lasting pains cause more disutility overall.Nevertheless, due to empirical and moral uncertainty, we would recommend putting some credence (~20%) in the most severe harms causing farmed animals at least as much disutility as the longest-lasting harms they experience.BackgroundThe Dimensions of Pain workshop was held April 27-28, 2023 at University of British Columbia. Attendees included animal welfare scientists (viz., Dan Weary, Thomas Ede, Leonie Jacobs, Ben Lecorps, Cynthia Schuck, Wladimir Alonso, and Michelle Lavery), pain scientists (Jeff Mogil, Gregory Corder, Fiona Moultrie, Brent Vogt), and philosophers (Bob Fischer, Murat Aydede, Walter Veit). William McAuliffe and Adam Shriver, the authors of this report, guided the discussion.Funders who want to cost-effectively improve animal welfare have to decide whether attenuating brief, severe pains (e.g., live-shackle slaughter) or chronic, milder pains (e.g., lameness) reduces more suffering overall. Farmers also face similar tradeoffs when deciding between multiple methods for achieving the same goal (e.g., single-stage versus multi-stage stunning). Our original report exploring the considerations that would favor prioritizing one dimension over another, The Relative Importance of the Severity and Duration of Pain, identified barriers to designing experiments that would provide clear-cut empirical evidence. The goal of the workshop was to ascertain whether an interdisciplinary group of experts could overcome these issues.No gold standard behavioral measuresWe spent one portion of the workshop reviewing some of the confounds th...

]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/rLiCjrAv9D8chCoG5/dimensions-of-pain-workshop-summary-and-updated-conclusions Mon, 21 Aug 2023 16:57:12 +0000 EA - "Dimensions of Pain" workshop: Summary and updated conclusions by Rethink Priorities Rethink Priorities https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 24:44 no full 6904
hXPBXJ4YS22kSNeif EA - "Being an EA" - Dissertation on EA by Joanna (Asia) Wiaterek Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Being an EA" - Dissertation on EA, published by Joanna (Asia) Wiaterek on August 21, 2023 on The Effective Altruism Forum.As part of my undergraduate course, I have written a dissertation in Social Anthropology on "'Being an EA': How Effective Altruism is understood and lived out by members of its transnational community."A few points about it:I am not proposing conclusions about the entirety of the EA due to its sheer diversity, but merely work with a few specific ethnographic examples.Two questions I tackled:i) What does it mean to "be an EA" for individual members of the transnational EA community?ii) What are the characteristics and motives of the EAs' "altruism"?Key takeaways:i) I argue that EA can be productively analysed as a lifestyle movementii) I also argue that the kind of altruism among EAs resembles a spatio-temporally extended form of sharingI worked on it between August 2022 and May 2023, so it captures briefly some reflections on the SBF crisis and its impact on EAs' self-identification with the movement.It was my first article-length piece of work, so it is far from perfect, but I hope it can prompt productive discussion on the future of the movement and its community.I initially aimed to write it for: community organisers, academics interested in social movements, people intrigued by EA; but I hope that the personal stories described in it might be quite inspiring to other EAs, too.Because of the marking boycott, I cannot post the full version here yet, but if you would like to get access to the document, please fill out this Google Form.If you'd like to chat about it, feel free to contact me on joanna.wiaterek@gmail.com or Calendly:. I would like to say a big thank you again to those of you who contributed to this dissertation!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Joanna (Asia) Wiaterek https://forum.effectivealtruism.org/posts/hXPBXJ4YS22kSNeif/being-an-ea-dissertation-on-ea Mon, 21 Aug 2023 01:15:48 +0000 EA - "Being an EA" - Dissertation on EA by Joanna (Asia) Wiaterek Joanna (Asia) Wiaterek https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:53 no full 6900
qGv4JhL5wcYgA7zj8 EA - EU farmed fish policy reform roadmap by Neil Dullaghan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EU farmed fish policy reform roadmap, published by Neil Dullaghan on August 22, 2023 on The Effective Altruism Forum.Full report (in PDF) available on the Rethink Priorities website. This is a follow-up to our report "Strategic considerations for upcoming EU farmed animal legislation".SummaryThe majority of fish consumed (in tonnes) in the EU are either wild-caught fish or farmed fish imported from non-EU Norway, Scotland, and Turkey. I don't see evidence that the EU will regulate imports of fish this decade and I'm less confident on interventions that affect wild animal welfare. So we narrowed the scope of the report to the species of fish farmed in the largest numbers of individuals and life-years in the EU: sea bass, sea bream, and small trout.The report argues that the most promising EU fish policy ask right now is a fast transition to better slaughter conditions of sea bass, sea bream, and (small) rainbow trout. It's an option that EU policymakers already put on the table as part of their larger animal legislation reform package, animal advocacy organisations support it, it has real world implementation, and there are clear actions to make improvements to the ask, namely by providing evidence discussed in the report of short transition periods.The EU's scientific agency, EFSA, has opinions on farmed fish welfare due 2024-2029 which will offer a beachhead for rearing reforms the movement may ask for in future that affect the whole life of an individual (e.g. water quality standards, stocking density maximums, enrichment especially for juveniles). There is a lot of economic and political precedent the aquatic animal advocacy movement needs to start creating years in advance to make the most of EFSA opinions on farmed fish welfare - the report discusses what sort of evidence the movement might need.I worry that after the European Commission presents its reform proposal in September/October 2023 (including a cage-free hen policy), little progress will be made before the June 2024 European Parliament elections put more conservative co-legislators in place, and a new European Commission 5-year term starts in November 2024. The longer the reform negotiations drag on, if the movement doesn't pivot resources to building the case for rearing reforms, advocates may be left playing catch-up in a few years and fail to make the most of EFSA opinions.I argue the movement should make an effort over the next 10 months to pull together the evidence and coalitions in support of a fast slaughter transition and ensure it makes it in the final law if things are progressing quickly. However, given limited resources, the more time and resources that continue to go into fish slaughter the higher the risk that this cuts against building the case for the arguably highly expected value rearing reforms later this decade.One could reasonably disagree and say even if the EU reform looks to be slowing, we should focus 100% of our fish policy advocacy efforts on slaughter to make sure the movement locks in a precedent for doing anything on commercially farmed fish at the EU level, and that this slaughter provision forms a beachhead for rearing reforms later. This may be compelling if you doubt that there will be opportunities to utilise EFSA opinions to create reform, especially in the absence of a fish slaughter precedent (e.g. if the opinions are dropped or delayed, or you don't believe we can gather sufficient evidence for EFSA to make bold rearing reform recommendations). Or you might think we should push for a lot more than slaughter right now, to mainstream those asks and hope at least one more of them make it onto the agenda.On the first, I argue the leaked draft impact assessment already signalled a willingness to explore fish reforms post EFSA-opinions, and to turn EFSA opinio...

]]>
Neil_Dullaghan https://forum.effectivealtruism.org/posts/qGv4JhL5wcYgA7zj8/eu-farmed-fish-policy-reform-roadmap Tue, 22 Aug 2023 19:31:01 +0000 EA - EU farmed fish policy reform roadmap by Neil Dullaghan Neil_Dullaghan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:52 no full 6916
AKnBQboyyKz9QdD4T EA - Call for Papers on Global AI Governance from the UN by Chris Leong Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Call for Papers on Global AI Governance from the UN, published by Chris Leong on August 22, 2023 on The Effective Altruism Forum.Copied from their LinkedIn page:𝐂𝐚𝐥𝐥 𝐟𝐨𝐫 𝐏𝐚𝐩𝐞𝐫𝐬 𝐨𝐧 𝐆𝐥𝐨𝐛𝐚𝐥 𝐀𝐈 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞Exciting news! As we gear up for the High-level Advisory Body on AI, we're inviting thought leaders, researchers, and enthusiasts to contribute short papers (~2000 words) on pivotal themes:1️⃣ Key Issues on Global AI Governance: Dive into studies and recommendations that the High-Level Advisory Body on AI should prioritize, especially those needing global governance attention.2️⃣ Current Efforts in Global AI Governance: Share analyses on bilateral, multilateral, and inter-regional initiatives. We're keen on understanding varying philosophical approaches, critiques, and suggestions.3️⃣ Models in Global AI Governance: Whether you're analyzing existing models or proposing fresh perspectives, we're all ears. Surveys and analyses of other proposals are also encouraged.We champion diversity! We're eager to hear from a myriad of groups, regions, and methodologies. Your insights will serve as foundational material (with due credit) for the High-Level Advisory Body on AI.Deadline: 30 SeptemberSubmission: Send your paper (hyperlink or PDF) to techenvoy@un.org. Ensure your title page has the author(s) details, affiliation, and contact info. If it's an Executive Summary of a more extensive piece, kindly attach the main paper's link.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Chris Leong https://forum.effectivealtruism.org/posts/AKnBQboyyKz9QdD4T/call-for-papers-on-global-ai-governance-from-the-un Tue, 22 Aug 2023 09:37:30 +0000 EA - Call for Papers on Global AI Governance from the UN by Chris Leong Chris Leong https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:47 no full 6915
yxArBdibQejHEYT4F EA - Will AI kill everyone? Here's what the godfathers of AI have to say [RA video] by Writer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will AI kill everyone? Here's what the godfathers of AI have to say [RA video], published by Writer on August 22, 2023 on The Effective Altruism Forum.This video is based on this article. @jai has written both the original article and the script for the video.Script:The ACM Turing Award is the highest distinction in computer science, comparable to the Nobel Prize. In 2018 it was awarded to three pioneers of the deep learning revolution: Geoffrey Hinton, Yoshua Bengio, and Yann LeCun.In May 2023, Geoffrey Hinton left Google so that he could speak openly about the dangers of advanced AI, agreeing that "it could figure out how to kill humans" and saying "it's not clear to me that we can solve this problem."Later that month, Yoshua Bengio wrote a blog post titled "How Rogue AIs may Arise", in which he defined a "rogue AI" as "an autonomous AI system that could behave in ways that would be catastrophically harmful to a large fraction of humans, potentially endangering our societies and even our species or the biosphere."Yann LeCun continues to refer to thoseanyone suggesting that we're facing severe and imminent risk as "professional scaremongers" and says it's a "simple fact" that "the people who are terrified of AGI are rarely the people who actually build AI models."LeCun is a highly accomplished researcher, but in light of Bengio and Hinton's recent comments it's clear that he's misrepresenting the field whether he realizes it or not. There is not a consensus among professional researchers that AI research is safe. Rather, there is considerable and growing concern that advanced AI could pose extreme risks, and this concern is shared by not only both of LeCun's award co-recipients, but the headsleaders of all three leading AI labs (OpenAI, Anthropic, and Google DeepMind):Demis Hassabis, CEO of DeepMind, said in an interview with Time Magazine: "When it comes to very powerful technologies - and obviously AI is going to be one of the most powerful ever - we need to be careful. Not everybody is thinking about those things. It's like experimentalists, many of whom don't realize they're holding dangerous material."Anthropic, in their public statement "Core Views on AI Safety", says: "One particularly important dimension of uncertainty is how difficult it will be to develop advanced AI systems that are broadly safe and pose little risk to humans. Developing such systems could lie anywhere on the spectrum from very easy to impossible."And OpenAI, in their blog post "Planning for AGI and Beyond", says "Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential." Sam Altman, the current CEO of OpenAI, once said "Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. "There are objections one could raise to the idea that advanced AI poses significant risk to humanity, but "it's a fringe idea that actual AI experts do not take seriously" is no longer among them. Instead, a growing share of experts are echoing the conclusion reached by Alan Turing, considered by many to be the father of computer science and artificial intelligence, back in 1951: "[I]t seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. [...] At some stage therefore we should have to expect the machines to take control."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Writer https://forum.effectivealtruism.org/posts/yxArBdibQejHEYT4F/will-ai-kill-everyone-here-s-what-the-godfathers-of-ai-have Tue, 22 Aug 2023 08:05:36 +0000 EA - Will AI kill everyone? Here's what the godfathers of AI have to say [RA video] by Writer Writer https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:22 no full 6914
Xd9ZZuPCKAvKpzvdB EA - Empowering Numbers: FEM since 2021 by hhart Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Empowering Numbers: FEM since 2021, published by hhart on August 23, 2023 on The Effective Altruism Forum.∼250,000 new contraceptive users, ∼20 million new listeners and up to ∼60x cost effectivenessTL;DR:In 2021, FEM launched our pilot: a 3-month family planning radio campaign, educating listeners about maternal health and effective contraception. Our ads and shows played up to 860 times in Kano state in northern Nigeria, reaching ~5.6 million. An independent survey showed that in 11 months, contraceptive use increased by ~75% among all women in the state. Since then, we launched a 9-month campaign, aired proofs of concept in three new locations and re-aired in 8 more states, reaching an estimated 20 million new listeners. We were recommended by Giving What We Can and Founders Pledge, who assess our campaigns as ~22 times as effective as cash transfers. We developed new technologies that will allow us to customise health advice to listeners and run a randomised controlled trial. We continue to build organisational capacity to scale across Nigeria and new cause areas in the years ahead.Studio host during show recording in KanoPromising results from our pilot (Oct - Dec 2021)In Nigeria, 1 in 22 women dies of pregnancy-related complications.In 2020, this meant ~1,047 mothers died for every 100,000 births. For contrast, in the United Kingdom, there were 9.8 maternal deaths per 100,000 births.Across Nigeria, about half of women have an unmet need for effective contraception. Increasing access to contraception reduces unwanted pregnancies and increases space between births, making pregnancies much safer. It can save the lives of thousands of women and save others from devastating health conditions such as obstetric fistula, and postpartum anaemia and depression. When women and girls understand and access family planning services, they receive substantial positive effects: get more education, make more money, and enjoy the ability to take better care of their children.Launched in September 2020, Family Empowerment Media develops evidenced-based radio campaigns to improve maternal and child health by raising awareness about contraception. In our Proof of Concept phase, we chose Kano, partnered with a Nigerian organisation, and conducted a short radio campaign.In September 2021, on World Contraception Day, we launched a pilot campaign in collaboration with the Nigerian Ministry of Health. We produced radio programmes, including dramas, storytelling shows, Q&A programmes with health experts, and interviews with prominent religious leaders on the topic of childbirth spacing.Our programmes reached a total of 5.6 million listeners up to 860 times and it looks like they had a substantial impact.Data from pre-post survey by PMA, intervention period is indicated by the shaded area. Over 11 months, we saw an increase from 10% to 19% among married women and from 8% to 14% among all women.An external survey on contraceptive use in Kano saw a 75% increase (among all women) between February 2021 and January 2022, corresponding to approximately 250,000 new contraceptive users. This is twice as many new users as over the last 5 years combined.A similar trend was not seen in Lagos or other Nigerian states, suggesting the change was specific to Kano. We were the only new large scale family planning demand generation activity in the state at that time. We are working with researchers to better understand how much impact can be attributed to FEM' campaigns. But these results are promising.New technology to evaluate our impact (Jan -Dec 2022)FEM is committed to collecting robust, scientific data about the effects of our work. With funding from Grand Challenges Canada (Stars in Global Health), FEM developed a new technology to evaluate our impact: a transmitter that identifies w...

]]>
hhart https://forum.effectivealtruism.org/posts/Xd9ZZuPCKAvKpzvdB/empowering-numbers-fem-since-2021 Wed, 23 Aug 2023 14:54:37 +0000 EA - Empowering Numbers: FEM since 2021 by hhart hhart https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:36 no full 6924
sBJLPeYdybSCiGpGh EA - Impact obsession: Feeling like you never do enough good by David Althaus Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Impact obsession: Feeling like you never do enough good, published by David Althaus on August 23, 2023 on The Effective Altruism Forum.SummaryImpact obsession is a potentially unhelpful way of relating to doing good which we've observed among effective altruists, including ourselves. (More)What do we mean by impact obsession?One can distinguish unhealthy and healthy forms of impact obsession. (More)Common characteristics include an overwhelming desire for doing the most good one can do, basing one's self-worth on one's own impact, judging it by increasingly demanding standards ("impact treadmill"), overexerting oneself, neglecting or subjugating non-altruistic interests, and anxiety about having no or negative impact. (More)Is impact obsession good or bad?Many aspects of impact obsession are reasonable and even desirable. (More)Others can have detrimental consequences like depression, anxiety, guilt, exhaustion, burnout, and disillusionment. (More)What to do about (unhealthy) impact obsession?Besides useful standard (mental) health advice, potentially helpful strategies involve, for example: reflecting on our relationship with and motives for having impact, integrating conflicting desires, shifting from avoidance to approach motivation, cultivating additional sources of meaning and self-worth, reducing resistance and non-acceptance, leaning into absurdity when being overwhelmed, and learning skills (e.g., exposure therapy, positive reframing, self-compassion) for managing common negative thoughts & emotions accompanying impact obsession. (More)IntroductionWe've noticed that many EAs, including ourselves, sometimes relate to effective altruism and impact in an unhealthy way. In this post, we describe this phenomenon, which we call 'impact obsession'. While its specifics vary from person to person, certain common patterns emerge (as others have also pointed out). Here's a (first-person) description of how this can feel, based on our own experiences and that of others we've spoken to:By far my most important goal in life is to do as much good as I can. I connect with the logic of EA on a visceral level: seeing a human or animal suffering inevitably reminds me of just how much awfulness exists in this world, how much brighter the future could be, and that what ultimately matters is helping as many sentient beings as best I can. I feel a lot of responsibility because I might be able to make a big difference to the lives of others. Heck, the stakes might literally be astronomical once you consider how your actions might affect the long-term trajectory of Earth-originating life. This is why maximizing positive impact is so important to me.Exactly how to go about doing the most good is a very difficult question and requires lots of strategizing and experimentation. Picking the right project in the right career path in the right cause area could easily mean having orders of magnitude more impact than if I'd settle on the first decent option. So getting this type of prioritization right is extremely important to me. I therefore spend a lot of time and emotional energy on questions like "is this cause area really the most important one?", "is this project really the best one I can do?", "is there some creative out-of-the-box idea that I'm missing and which would allow me to have way more impact?", or "am I falling prey to some cognitive bias or undue social influence which leads me down the wrong path?"Unfortunately, the world is extremely complicated, cluelessness is often a huge problem, and many smart people - most of them way smarter than myself! - disagree about what to do. So trying to figure out what I should be doing can feel overwhelming and often fills me with despair.My self-esteem and my happiness are greatly influenced by how much impact I think I'm...

]]>
David_Althaus https://forum.effectivealtruism.org/posts/sBJLPeYdybSCiGpGh/impact-obsession-feeling-like-you-never-do-enough-good Wed, 23 Aug 2023 14:12:48 +0000 EA - Impact obsession: Feeling like you never do enough good by David Althaus David_Althaus https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:13:32 no full 6923
QNdG4msZS3G29uG5X EA - Biosafety in BSL-3, BSL-3+ and BSL-4 Laboratories: Mapping and Recommendations for Latin America by JorgeTorresC Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Biosafety in BSL-3, BSL-3+ and BSL-4 Laboratories: Mapping and Recommendations for Latin America, published by JorgeTorresC on August 23, 2023 on The Effective Altruism Forum.Executive summaryThis article addresses biosafety and biosecurity in high-containment level laboratories (BSL-3 and BSL-4) in Latin America, with a focus on classification, current status, and regulatory frameworks. In the region, the lack of uniformity in data collection makes it difficult to accurately understand the infrastructure of high-containment laboratories. Regulatory frameworks vary across the region and present challenges in terms of standardization. Although countries like Colombia have made progress in this area, there is a need to establish updated and centralized regulatory frameworks in each country.To improve biosafety and biosecurity, we make a series of recommendations such as the implementation of biological risk management systems in laboratories, the promotion of non-punitive incident reports, the standardization of supervision processes, collaboration between institutions, and the exchange of best practices.IntroductionBiosafety and biosecurity in high-containment level laboratories (BSL-3 and BSL-4) are of vital importance for the protection of public health. These laboratories work with dangerous biological agents, so it is essential to ensure that practices, equipment, and security measures are adequate and rigorous. In this context, this article focuses on analyzing the current situation of BSL-3, BSL-3+, and BSL-4 laboratories in Latin America. We explore the increase in the construction of these laboratories at a global level, the regulatory frameworks by which they are governed, and the challenges that some Latin American countries face in their implementation. In addition, we propose several recommendations to improve biosafety and biosecurity in these laboratories.To consult the complete map follow the link:Classification of laboratories by biosafety levelsIn 1974, the United States Centers for Disease Control and Prevention (CDC) published a document titled "Classification of etiologic agents on the basis of hazard", proposing the classification of pathogens into four risk groups. Subsequently, both the National Institutes of Health (NIH) of the United States and the World Health Organization (WHO) updated this system, thus establishing the bases for the classification of laboratories according to the risk group of the pathogens they handle (Villegas et al. al., 2007). Out of the classification of risk groups, four levels of biosafety in laboratories have been established. These levels are determined taking into account several factors, such as the infectious capacity of the pathogen, the existence of treatments or vaccines for it, the severity of the disease it causes, its transmissibility, whether it is of exotic origin or not, and the nature of the work carried out in the laboratory (Lara-Villegas et al., 2008).Level 1 (BSL-1) laboratories use elemental equipment and practices for teaching purposes. They work with well-defined and characterized strains of microorganisms that do not cause disease in healthy people. The use of special protective equipment is not required.Level 2 (BSL-2) laboratories adopt appropriate practices, equipment, and measures to realize clinical analysis, diagnoses, and pathology. These laboratories handle microorganisms of moderate risk that are present in the community and are associated with human diseases of variable severity.Level 3 (BSL-3) laboratories implement appropriate practices, equipment, and measures to realize clinical analysis, diagnoses, and research. These laboratories handle known or unknown agents that have the potential to be transmitted by aerosol or splash and that can cause life-threatening infections...

]]>
JorgeTorresC https://forum.effectivealtruism.org/posts/QNdG4msZS3G29uG5X/biosafety-in-bsl-3-bsl-3-and-bsl-4-laboratories-mapping-and Wed, 23 Aug 2023 08:28:26 +0000 EA - Biosafety in BSL-3, BSL-3+ and BSL-4 Laboratories: Mapping and Recommendations for Latin America by JorgeTorresC JorgeTorresC https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 28:42 no full 6921
sWMwGNgpzPn7X9oSk EA - Select examples of adverse selection in longtermist grantmaking by Linch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Select examples of adverse selection in longtermist grantmaking, published by Linch on August 23, 2023 on The Effective Altruism Forum.Sometimes, there is a reason other grantmakers aren't funding a fairly well-known EA (-adjacent) project.This post is written in a professional capacity, as a volunteer/sometimes contractor for EA Funds' Long-Term Future Fund (LTFF), which is a fiscally sponsored project of Effective Ventures Foundation (UK) and Effective Ventures Foundation USA Inc. I am not and have never been an employee at either Effective Ventures entity. Opinions are my own and do not necessarily represent that of any of my employers or of either Effective Ventures entity. I originally wanted to make this post a personal shortform, but Caleb Parikh encouraged me to make it a top-level post instead.There is an increasing number of new grantmakers popping up, and also some fairly rich donors in longtermist EA that are thinking of playing a more active role in their own giving (instead of deferring). I am broadly excited about the diversification of funding in longtermist EA. There are many advantages of having a diverse pool of funding:Potentially increases financial stability of projects and charitiesAllows for a diversification of worldviewsEncourages accountability, particularly of donors and grantmakers - if there's only one or a few funders, people might be scared of offering justified criticismsAccess to more or better networks - more diverse grantmakers might mean access to a greater diversity of networks, allowing otherwise overlooked and potentially extremely high-impact projects to be fundedGreater competition and race to excellence and speed among grantmakers - I've personally been on both sides of being faster and much slower than other grantmakers, and it's helpful to have a competitive ecosystem to improve grantee and/or donor experienceHowever, this comment will mostly talk about the disadvantages. I want to address adverse selection: In particular, if a project that you've heard of through normal EA channels hasn't been funded by existing grantmakers like LTFF, there is a decently high likelihood that other grantmakers have already evaluated the grant and (sometimes for sensitive private reasons) have decided it is not worth funding.Reasons against broadly sharing reasons for rejectionFrom my perspective as an LTFF grantmaker, it is frequently imprudent, impractical, or straightforwardly unethical to directly make public our reasons for rejection. For example:Our assessments may include private information that we are not able to share with other funders.Writing up our reasons for rejection of specific projects may be time-consuming, politically unwise, and/or encourage additional ire ("punching down").We don't want to reify our highly subjective choices too much, and public writeups of rejections can cause informational cascades.Often other funders don't even think to ask about whether the project has already been rejected by us, and why (and/or rejected grantees don't pass on that they've been rejected by another funder).Sharing negative information about applicants would make applying to EA Funds more costly and could discourage promising applicants.Select examplesHere are some (highly) anonymized examples of grants I have personally observed being rejected by a centralized grantmaker. For further anonymization, in some cases I've switched details around or collapsed multiple examples into one. Most, although not all, of the examples are personal experiences from working on the LTFF. Many of these examples are grants that have later been funded by other grantmakers or private donors.An academic wants funding for a promising sounding existential safety research intervention in an area of study that none of the LTFF grantmakers ...

]]>
Linch https://forum.effectivealtruism.org/posts/sWMwGNgpzPn7X9oSk/select-examples-of-adverse-selection-in-longtermist Wed, 23 Aug 2023 04:19:25 +0000 EA - Select examples of adverse selection in longtermist grantmaking by Linch Linch https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:39 no full 6920
tuqqpWCetmtCn5EJL EA - High-leverage opportunity to fund an EA org -- vote for CATF (takes 15 seconds)! by Ann Garth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High-leverage opportunity to fund an EA org -- vote for CATF (takes 15 seconds)!, published by Ann Garth on August 23, 2023 on The Effective Altruism Forum.tl;dr: You can move significant amounts of non-EA funding toward an EA-aligned charity in only 15 seconds by voting for Clean Air Task Force in the Charity Navigator Community Choice Awards. Vote!Clean Air Task Force is an EA-aligned climate mitigation nonprofit which is Founders Pledge's top climate charity and has been recommended by them since 2018. CATF has also been recommended by Giving Green since 2021. CATF is doing exceptional work pushing policy solutions to enable decarbonization through new technologies, such as next-generation geothermal energy, nuclear energy (both fission and fusion), and carbon capture and storage. For example, we were influential in many of the key provisions of the IRA, as well as in keeping open the Diablo Canyon nuclear power plant in California.CATF has been nominated for the Community Choice Awards from Charity Navigator, the largest and most-utilized charity evaluator in the United States (almost 11 million unique site visitors per year). Winning this award will earn CATF prominent visibility on Charity Navigator's website for the next year, an email announcement to Charity Navigator's audience of donors, dedicated posts on Charity Navigator's social media, and a webinar speaking opportunity. We expect that this extra visibility and engagement could drive significant funding for CATF from non-EA donors. CATF appears to be the only EA-aligned organization nominated for the Community Choice Awards, so voting is potentially very high-leverage.You can vote for CATF at this link (#organization=Clean%20Air%20Task%20Force). Voting takes only about 15 seconds and you can vote every day through August 27th, when the first round of the contest closes. Please enable more effective giving by supporting CATF!Disclaimer: I work at CATF (on the Superhot Rock Energy team). I joined CATF because of the Founders Pledge recommendation and love working at an organization that's so aligned with my values. I'm always happy to talk about CATF with anyone who's interested in learning more -- agarth@catf.us.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Ann Garth https://forum.effectivealtruism.org/posts/tuqqpWCetmtCn5EJL/high-leverage-opportunity-to-fund-an-ea-org-vote-for-catf Wed, 23 Aug 2023 02:47:41 +0000 EA - High-leverage opportunity to fund an EA org -- vote for CATF (takes 15 seconds)! by Ann Garth Ann Garth https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:19 no full 6919
ApMiZpWgD9gjpzicg EA - New encyclopedia entry on utilitarianism and Christian theology by dominicroser Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New encyclopedia entry on utilitarianism and Christian theology, published by dominicroser on August 24, 2023 on The Effective Altruism Forum.Theologians have often been very critical about utilitarianism - despite the fact that the first versions of utilitarianism have been developed by Christian thinkers.Vesa Hautala and I now got the opportunity to write an encyclopedia article on Christian Theology and Utilitarianism for the St Andrews Encyclopedia of Theology (SAET). SAET is a new project - a theological analogue of the Stanford Encyclopedia of Philosophy.This is the first such overview as far as we know. It touches on both historical developments as well as points of overlap and tension. If anyone has criticism to offer of our treatment, we'd happily take it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
dominicroser https://forum.effectivealtruism.org/posts/ApMiZpWgD9gjpzicg/new-encyclopedia-entry-on-utilitarianism-and-christian Thu, 24 Aug 2023 08:00:12 +0000 EA - New encyclopedia entry on utilitarianism and Christian theology by dominicroser dominicroser https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:58 no full 6931
gXqimkQi2gCKHaKHm EA - Prototype: GPT-powered EA/LW weekly summary by Hamish McDoodles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prototype: GPT-powered EA/LW weekly summary, published by Hamish McDoodles on August 24, 2023 on The Effective Altruism Forum.Zoe Williams used to manually do weekly summaries of the EA Forum and LessWrong, but now she doesn'ttoday I strung together a bunch of google apps scripts, google sheets expressions, graphQL queries, and D3.js to automatically extract all the posts on EAF/LW from the last week with >50 karma, summarize them with GPT-4, and assemble the result into HTML with links and stuffhard to say what the API usage cost, what with all the tinkering and experimenting, but I reckon it was about $5there were a bunch of posts which were too long for the API message length, so as a first whack i just cut stuff out of the middle of the post until it fit (Procrustes style)Siao (co-author) was going to help but i finished everything so fast she never got a chance lolI haven't spent much time sanity-checking these summaries, but I reckon they're "good enough to be useful"They often drop useful details or get the emphasis wrong.I haven't seen any outright fabrication.Obviously if you have a special interest in some topic these aren't going to substitute for reading the original post.the obvious next few steps are:automate the actual posting of the summaries.does EAF/LW have an API for posting?also summarize top comment(s)this is kinda hardexperiment with prompts to see if other summaries are more usefulalso generate a top level summary which gives you like 5 bullet points of the most important things from the forums this weekfeedback that would be useful:what would you (personally) like to be different about these summaries? should they be shorter? longer? bullet points? have quotes? fewer entries? more entries?leave a comment or DM me or whatever with any old feedbackoh, and: i originally also got the top posts from the AI alignment forum, but they were all cross posted on lesswrong? is that alwasy true? anyone know?EA ForumSelect examples of adverse selection in longtermist grantmakingby LinchThe author, a volunteer and sometimes contractor for EA Funds' Long-Term Future Fund (LTFF), discusses the pros and cons of diversification in longtermist EA funding. While diversification can increase financial stability, allow for a variety of worldviews, encourage accountability, and provide access to diverse networks, it can also lead to adverse selection, where projects that have been rejected by existing grantmakers are funded by new ones. The author provides examples of such cases and suggests that new grantmakers should be cautious about funding projects that have been rejected by others, but also acknowledges that grantmakers can make mistakes and that a network of independent funders could help ensure that unusual but potentially high-impact projects are not overlooked.An Elephant in the Community Building roomby KaleemThe author, a contractor for CEA and employee of EVOps, shares his personal views on the strategies of community building within the Effective Altruism (EA) movement. He identifies two main strategies: Global EA, which aims to spread EA ideas as widely as possible, and Narrow EA, which focuses on influencing a small group of highly influential people. The author argues that community builders and funders should be more explicit about their theory of change for global community building, as there could be significant trade-offs in impact between these two strategies.CE alert: 2 new interventions for February-March 2024 Incubation Programby CECharity Entrepreneurship has announced two new charity interventions for its February-March 2024 Incubation Program, bringing the total to six. The new interventions include an organization focused on bringing new funding into the animal advocacy movement and an organization providing ...

]]>
Hamish McDoodles https://forum.effectivealtruism.org/posts/gXqimkQi2gCKHaKHm/prototype-gpt-powered-ea-lw-weekly-summary 50 karma, summarize them with GPT-4, and assemble the result into HTML with links and stuffhard to say what the API usage cost, what with all the tinkering and experimenting, but I reckon it was about $5there were a bunch of posts which were too long for the API message length, so as a first whack i just cut stuff out of the middle of the post until it fit (Procrustes style)Siao (co-author) was going to help but i finished everything so fast she never got a chance lolI haven't spent much time sanity-checking these summaries, but I reckon they're "good enough to be useful"They often drop useful details or get the emphasis wrong.I haven't seen any outright fabrication.Obviously if you have a special interest in some topic these aren't going to substitute for reading the original post.the obvious next few steps are:automate the actual posting of the summaries.does EAF/LW have an API for posting?also summarize top comment(s)this is kinda hardexperiment with prompts to see if other summaries are more usefulalso generate a top level summary which gives you like 5 bullet points of the most important things from the forums this weekfeedback that would be useful:what would you (personally) like to be different about these summaries? should they be shorter? longer? bullet points? have quotes? fewer entries? more entries?leave a comment or DM me or whatever with any old feedbackoh, and: i originally also got the top posts from the AI alignment forum, but they were all cross posted on lesswrong? is that alwasy true? anyone know?EA ForumSelect examples of adverse selection in longtermist grantmakingby LinchThe author, a volunteer and sometimes contractor for EA Funds' Long-Term Future Fund (LTFF), discusses the pros and cons of diversification in longtermist EA funding. While diversification can increase financial stability, allow for a variety of worldviews, encourage accountability, and provide access to diverse networks, it can also lead to adverse selection, where projects that have been rejected by existing grantmakers are funded by new ones. The author provides examples of such cases and suggests that new grantmakers should be cautious about funding projects that have been rejected by others, but also acknowledges that grantmakers can make mistakes and that a network of independent funders could help ensure that unusual but potentially high-impact projects are not overlooked.An Elephant in the Community Building roomby KaleemThe author, a contractor for CEA and employee of EVOps, shares his personal views on the strategies of community building within the Effective Altruism (EA) movement. He identifies two main strategies: Global EA, which aims to spread EA ideas as widely as possible, and Narrow EA, which focuses on influencing a small group of highly influential people. The author argues that community builders and funders should be more explicit about their theory of change for global community building, as there could be significant trade-offs in impact between these two strategies.CE alert: 2 new interventions for February-March 2024 Incubation Programby CECharity Entrepreneurship has announced two new charity interventions for its February-March 2024 Incubation Program, bringing the total to six. The new interventions include an organization focused on bringing new funding into the animal advocacy movement and an organization providing ...]]> Thu, 24 Aug 2023 06:19:56 +0000 EA - Prototype: GPT-powered EA/LW weekly summary by Hamish McDoodles 50 karma, summarize them with GPT-4, and assemble the result into HTML with links and stuffhard to say what the API usage cost, what with all the tinkering and experimenting, but I reckon it was about $5there were a bunch of posts which were too long for the API message length, so as a first whack i just cut stuff out of the middle of the post until it fit (Procrustes style)Siao (co-author) was going to help but i finished everything so fast she never got a chance lolI haven't spent much time sanity-checking these summaries, but I reckon they're "good enough to be useful"They often drop useful details or get the emphasis wrong.I haven't seen any outright fabrication.Obviously if you have a special interest in some topic these aren't going to substitute for reading the original post.the obvious next few steps are:automate the actual posting of the summaries.does EAF/LW have an API for posting?also summarize top comment(s)this is kinda hardexperiment with prompts to see if other summaries are more usefulalso generate a top level summary which gives you like 5 bullet points of the most important things from the forums this weekfeedback that would be useful:what would you (personally) like to be different about these summaries? should they be shorter? longer? bullet points? have quotes? fewer entries? more entries?leave a comment or DM me or whatever with any old feedbackoh, and: i originally also got the top posts from the AI alignment forum, but they were all cross posted on lesswrong? is that alwasy true? anyone know?EA ForumSelect examples of adverse selection in longtermist grantmakingby LinchThe author, a volunteer and sometimes contractor for EA Funds' Long-Term Future Fund (LTFF), discusses the pros and cons of diversification in longtermist EA funding. While diversification can increase financial stability, allow for a variety of worldviews, encourage accountability, and provide access to diverse networks, it can also lead to adverse selection, where projects that have been rejected by existing grantmakers are funded by new ones. The author provides examples of such cases and suggests that new grantmakers should be cautious about funding projects that have been rejected by others, but also acknowledges that grantmakers can make mistakes and that a network of independent funders could help ensure that unusual but potentially high-impact projects are not overlooked.An Elephant in the Community Building roomby KaleemThe author, a contractor for CEA and employee of EVOps, shares his personal views on the strategies of community building within the Effective Altruism (EA) movement. He identifies two main strategies: Global EA, which aims to spread EA ideas as widely as possible, and Narrow EA, which focuses on influencing a small group of highly influential people. The author argues that community builders and funders should be more explicit about their theory of change for global community building, as there could be significant trade-offs in impact between these two strategies.CE alert: 2 new interventions for February-March 2024 Incubation Programby CECharity Entrepreneurship has announced two new charity interventions for its February-March 2024 Incubation Program, bringing the total to six. The new interventions include an organization focused on bringing new funding into the animal advocacy movement and an organization providing ...]]> 50 karma, summarize them with GPT-4, and assemble the result into HTML with links and stuffhard to say what the API usage cost, what with all the tinkering and experimenting, but I reckon it was about $5there were a bunch of posts which were too long for the API message length, so as a first whack i just cut stuff out of the middle of the post until it fit (Procrustes style)Siao (co-author) was going to help but i finished everything so fast she never got a chance lolI haven't spent much time sanity-checking these summaries, but I reckon they're "good enough to be useful"They often drop useful details or get the emphasis wrong.I haven't seen any outright fabrication.Obviously if you have a special interest in some topic these aren't going to substitute for reading the original post.the obvious next few steps are:automate the actual posting of the summaries.does EAF/LW have an API for posting?also summarize top comment(s)this is kinda hardexperiment with prompts to see if other summaries are more usefulalso generate a top level summary which gives you like 5 bullet points of the most important things from the forums this weekfeedback that would be useful:what would you (personally) like to be different about these summaries? should they be shorter? longer? bullet points? have quotes? fewer entries? more entries?leave a comment or DM me or whatever with any old feedbackoh, and: i originally also got the top posts from the AI alignment forum, but they were all cross posted on lesswrong? is that alwasy true? anyone know?EA ForumSelect examples of adverse selection in longtermist grantmakingby LinchThe author, a volunteer and sometimes contractor for EA Funds' Long-Term Future Fund (LTFF), discusses the pros and cons of diversification in longtermist EA funding. While diversification can increase financial stability, allow for a variety of worldviews, encourage accountability, and provide access to diverse networks, it can also lead to adverse selection, where projects that have been rejected by existing grantmakers are funded by new ones. The author provides examples of such cases and suggests that new grantmakers should be cautious about funding projects that have been rejected by others, but also acknowledges that grantmakers can make mistakes and that a network of independent funders could help ensure that unusual but potentially high-impact projects are not overlooked.An Elephant in the Community Building roomby KaleemThe author, a contractor for CEA and employee of EVOps, shares his personal views on the strategies of community building within the Effective Altruism (EA) movement. He identifies two main strategies: Global EA, which aims to spread EA ideas as widely as possible, and Narrow EA, which focuses on influencing a small group of highly influential people. The author argues that community builders and funders should be more explicit about their theory of change for global community building, as there could be significant trade-offs in impact between these two strategies.CE alert: 2 new interventions for February-March 2024 Incubation Programby CECharity Entrepreneurship has announced two new charity interventions for its February-March 2024 Incubation Program, bringing the total to six. The new interventions include an organization focused on bringing new funding into the animal advocacy movement and an organization providing ...]]> Hamish McDoodles https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 20:50 no full 6930
n5GJEP3tMrzdfYPGG EA - How much do EAGs cost (and why)? by Eli Nathan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much do EAGs cost (and why)?, published by Eli Nathan on August 26, 2023 on The Effective Altruism Forum.TL;DR:EAGs from 2022-2023 each cost around $2M-$3.6M USD at around $1.5k-2.5k per person per event.These events typically cost a lot because the fixed costs of running professional events in the US and UK are surprisingly high.We're aiming to get these event costs down to ≤$2M each moving forwards.We've already started to cut back on spending and will continue to do so, whilst also raising default ticket prices to recoup more of our costs.Throughout this post, for legibility I discuss the direct costs behind running our events and don't include other indirect costs like staff salaries, software, and our office space (which would increase the costs below by ~25%).IntroductionThis year (2023), EAG Bay Area cost $2M and EAG London cost £2M (including travel grant costs). Our most expensive event ever was EAG SF 2022, which cost $3.6M. This gives us a range of about $1.5-2.5k per person per event.In-person EAGx events typically cost $150-500k, with smaller events in cheaper countries on the lower end (e.g. EAGxWarsaw) and large events in the US on the higher end (e.g. EAGxNYC). The cost per person for these events is $300-900.You can see historical attendance figures on our dashboard.People are often surprised by how expensive our events are - this post seeks to explain why our events have historically been so expensive, why they'll continue to be somewhat expensive, and how we're working to reduce costs (see an earlier post here). We're writing this partially in response to requests from the community, but we'll also be raising our ticket prices soon, and hope that this post will add some useful context as to why.We know event prices matter a lot in a community where people care so much about what donations can achieve and the importance of using money well - our staff feel the same way. And it's worth noting that we still think our events are worthwhile, as they seem to have had a large effect at shifting people into high-priority work (though we plan to run them in a cheaper way now). We think that short events can be an effective way to accelerate people in building professional relationships, applying for jobs, and making other critical career decisions.In this post I primarily discuss EAGs because they're a much larger portion of CEA Events team spending and it's hard to generalise across EAGx events, which occur in a wider range of contexts. However, similar principles and themes will apply to EAGx events (you can see more details in Ollie's posts here). I've ended this section with an example budget breakdown for EAG London 2023, though I'll note that the precise breakdown tends to vary a fair bit between events.Catering£739,680.00Venue£532,918.00Audiovisual and video recording£170,975.93 Printing and signage£106,658.40Travel grants£110,000.00Production company fee£98,810.00Furniture hire£33,610.50Other costs£161,316.96Total costs£1,953,969.79EAG London 2023ItemCostVenue and cateringOur biggest spending items are venue and catering.In a given city there are surprisingly few venues that can host 1500+ people. Some of these venues instantly get ruled out for being too expensive/flashy, being too far away from easy transit, not having suitable availability, etc. This means that in the Bay Area for example, there are maybe four venues that I would consider viable for a 1500+ person EAG (as it exists currently, with a main networking area, multiple content/meetup rooms, etc.).Most venues force you to use their in-house catering company and don't let you bring in your own food (i.e. we can't buy a bunch of Oreos ourselves). These catering companies generally have a minimum mandatory spend, and significantly mark up the costs of their service...

]]>
Eli_Nathan https://forum.effectivealtruism.org/posts/n5GJEP3tMrzdfYPGG/how-much-do-eags-cost-and-why Sat, 26 Aug 2023 19:00:28 +0000 EA - How much do EAGs cost (and why)? by Eli Nathan Eli_Nathan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:53 no full 6954
FWmnwCcKiBLstFtYL EA - Career Conversations Week on the Forum (8-15 September) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Career Conversations Week on the Forum (8-15 September), published by Lizka on August 26, 2023 on The Effective Altruism Forum.TL;DR: September 8 - 15 will feature more career-related discussions on the EA Forum: a "Career Conversations Week." This is an invitation to participate and a pre-announcement of some posts that will be published during the event. You can follow posts under this tag.A wide range of topics would fit the theme; consider writing about what you actually do in your job (and how you got there), sharing advice about how to choose a path, etc. (some more ideas below).You can always write about these topics on the EA Forum, but if you want someone to provide extra encouragement - or if you want to post about it when others are talking about it, too - this event is your chance.What this actually is and why we're running itMuch like EA Strategy Fortnight, this event is ultimately a push for discussions of a particular kind - in this case, everything about (impact-oriented) careers. We have a tag for the event, we'll put up a banner on the Frontpage (so that people know that this is happening), and some folks have already agreed to participate. We'll also feature more career-related opportunities and we might resurface some classic posts on these topics.Goals for the event include:Prompting more people to take useful career-related actions (like applying to open opportunities)Improving how we make career and hiring decisionsDeveloping a better sense for what different roles actually look likeSharing useful resourcesPosts that people expect to share for the event (let me know in the comments if you have something that you'd like to add!):Lizka - about my job (and why you should write about yours, too)Lorenzo Buonanno - new Who wants to be hired? and Who's hiring? threads, and a reflection on last year's threadsJP Addison - The cost of leaving your role is probably higher than you thinkBen West - PlaceholderWill H - PlaceholderLikely an AMA with folks from a career-related organization!New program announcementsHow you can participateWrite relevant posts and quick takesIf you know you'll write something, please feel free to comment on this post, and I'll try to add it to the list aboveRead and comment on posts written for the eventYou can follow posts by subscribing to the tagRun related events, apply to useful opportunities, etc., and mention these things on the EA Forum so that the actions aren't invisible to othersIf you write a postTag it with the relevant tagAdd the following to the bottom of your post:This post is part of the September 2023 Career Conversations Week. You can see other Career Conversations Week posts here.Example topics you could write about (you can add more in the comments!)What you do in your job, how you got here, etc.Advice - what you wish you knew when you were 20, how to test fit for different roles, pitfalls of certain types of work (part-time, remote, independent, grant-funded, etc.), questions to ask potential employers, etc.Thoughts on network-level things like hiring bottlenecks, issues with how we advertise roles, and moreOr whatever else you have thoughts about!This event is clearly inspired by EA Strategy Fortnight - thanks to all of you who helped make that happen and participated in it! :)This isn't the most creative title, but we went with it over options like "The September JobFest," "Visionary Vocations Week" and the "Optimal Occupations Odyssey" for the sake of clarity.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Lizka https://forum.effectivealtruism.org/posts/FWmnwCcKiBLstFtYL/career-conversations-week-on-the-forum-8-15-september Sat, 26 Aug 2023 17:26:17 +0000 EA - Career Conversations Week on the Forum (8-15 September) by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:29 no full 6952
mrAZFnEjsQAQPJvLh EA - Using Points to Rate Different Kinds of Evidence by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Using Points to Rate Different Kinds of Evidence, published by Ozzie Gooen on August 26, 2023 on The Effective Altruism Forum.Epistemic Status: Briefly written. The specific equation here captures my quick intuition - this is meant primarily as a demonstration.There's a lot of discussion on the EA Forum and LessWrong about epistemics, evidence, and updating.I don't know of many attempts at formalizing our thinking here into concrete tables or equations. Here is one (very rough and simplistic) attempt. I'd be excited to see much better versions.EquationInitial PointsScientific Evidence20 - A simple math proof proves X8 - A published scientific study in Economics supporting X6 - A published scientific study in Psychology supporting XMarket Prediction14 - Popular stock markets strongly suggest X11 - Prediction markets claim X, with 20 equivalent hours of research10 - A poll shows that 90% of LessWrong believe X6 - Prediction markets claim X, with one equivalent hour of researchExpert Opinion8 - An esteemed academic believes X, where it's directly in their line of work6 - The author has strong emotions about XReasoning6 - There's a (20-100 node) numeric model that shows X5 - A reasonable analogy between X and something clearly good/bad4 - A long-standing proverbPersonal Accounts5 - The author claims a long personal history that demonstrates X3 - Someone in the world has strong emotions about X2 - A clever remark, meme, or tweet2.3 - An insanely clever, meme, or tweet0 - Believing X is claimed to be personally beneficialTradition / Use12 - Top businesses act as if X8 - A long-standing social tradition about X5 - A single statistic about XPoint ModifiersIs this similar to existing evidence?Subtract the similarity from the extra amount of evidence. This likely will remove most of the evidence value.Is it convenient for the source to believe or say X?-10% to -90%Is there a lot of money or effort put behind spreading this evidence? For example, as an advertising campaign? +5% to +40%How credible is the author or source?-100% to +30%Do we suspect the source is goodharting on this scale?-20%Points, In PracticeEvidence Points, as outlined, are not trying to mimic mathematical bits of information or another clean existing unit. I attempted to find a compromise between accuracy and ease of use.MetaUsing an Equations for DiscussionThe equation above is rough, but at least it's (somewhat) precise and upfront. This represents much information, and any part can easily be argued against.I think such explicitness could help with epistemic conversations.Compare:"Smart people should generally use their inside view, instead of the outside view" vs. "My recommended points scores for inside-view style evidence, and my point scores for outside-view style evidence, are all listed below.""Using many arguments is better than one big argument" vs. "I've adjusted my point table function to produce higher values when multiple types of evidence are provided. You can see it returns values 30% higher than what others have provided for certain scenarios.""It's really important to respect top [intellectuals|scientists|EAs]" vs. "My score for respected [intellectuals|scientists|EAs] is 2 points higher than the current accepted average.""Chesterton's Fence is something to pay a lot of attention to" vs. "See my score table the points from various kinds of traditional practices."In a better world, different academic schools of thought could have their own neatly listed tables/functions. In an even better world, all of these functions would be forecasts of future evaluators.PresumptionsThis sort of point system makes some presumptions that might be worth flagging. Primarily, it claims that even really poor evidence is evidence.I often see people throwing ou...

]]>
Ozzie Gooen https://forum.effectivealtruism.org/posts/mrAZFnEjsQAQPJvLh/using-points-to-rate-different-kinds-of-evidence Sat, 26 Aug 2023 15:02:27 +0000 EA - Using Points to Rate Different Kinds of Evidence by Ozzie Gooen Ozzie Gooen https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:42 no full 6951
PsrCsBNchRhvwNcQZ EA - Applications open for the Biosecurity Fundamentals: Pandemics Course by Will Saunter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications open for the Biosecurity Fundamentals: Pandemics Course, published by Will Saunter on August 26, 2023 on The Effective Altruism Forum.We are excited to announce that applications are now open for the Biosecurity Fundamentals: Pandemics Course, starting in October!The deadline for applications is 23:59 on 13th September 2023 (in whichever time zone you are in).OverviewThis is a 12-week, part-time, virtual course focusing on the technical and policy efforts to prevent, detect and respond to catastrophic pandemics. It brings together:An expert-curated curriculum covering topics from the history of pandemics to concrete policy and technical proposals to reduce pandemic risk.A community of scientists, policymakers and expert facilitators keen to learn about, discuss and collaborate on proposals to build a more biosecure future.Opportunities to pursue careers that enhance pandemic preparedness.The course is free to participate in and you can read more details here.Facilitating the courseIf you're already experienced in biosecurity and pandemic preparedness, we'd strongly encourage you to apply to facilitate the course. You can find more details about facilitation here.Logistical detailsThe course is free to participate in.The course is entirely virtual and can be scheduled around your existing commitments.You will be onboarded into the Slack community in mid-late Sept 2023, with course sessions starting in the second week of October.The course is 12 weeks, consisting of 8 weeks of taught content and 4 weeks to work on a project where you start to take your next steps.Participating takes about 5 hours / week, including 2-3 hours of preparatory work and 2 hours of small group discussions.A note on information hazardsWe're conscious of the potential for creating or spreading information hazards when conducting biosecurity field-building and discussion groups. We are working with various members of the biosecurity community to mitigate this risk through our curriculum design, facilitator training and other safeguards.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Will Saunter https://forum.effectivealtruism.org/posts/PsrCsBNchRhvwNcQZ/applications-open-for-the-biosecurity-fundamentals-pandemics Sat, 26 Aug 2023 11:13:37 +0000 EA - Applications open for the Biosecurity Fundamentals: Pandemics Course by Will Saunter Will Saunter https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:10 no full 6947
7aieQzPcBKxMGn9Xo EA - Open Technical Challenges around Probabilistic Programs and Javascript by Ozzie Gooen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Technical Challenges around Probabilistic Programs and Javascript, published by Ozzie Gooen on August 26, 2023 on The Effective Altruism Forum.While working on Squiggle, we've encountered many technical challenges in writing probabilistic functionality with Javascript. Some of these challenges are solved in Python and must be ported over, and some apply to all languages.We think the following tasks could be good fits for others to tackle. These are fairly isolated and could be done in contained NPM packages or similar. The solutions would be useful for Squiggle and might be handy for others in the Javascript ecosystem as well. Advice and opinions are also appreciated.This post was quickly written, as it's for a narrow audience and might get outdated. We're happy to provide more rigor and context if requested. Let us know if you are interested in taking any of them on and could use some guidance!For those not themselves interested in contributing, this might be useful for giving people a better idea of the sorts of challenges we at QURI work on.1. Density EstimationUsers often want to convert samples into continuous probability distribution functions (PDFs). This is difficult to do automatically.The standard approach of basic Kernel Density Estimation can produce poor fits on multimodal or heavily skewed data.a. Variable kernel density estimationSimple KDE algorithms use a constant bandwidth. There are multiple methods for estimating this. One common method is Silverman's rule of thumb. In practice, using Silverman's rule of thumb with one single bandwidth performs poorly for multimodal or heavily skewed distributions.Squiggle performs log KDE for heavily skewed distributions, but this only helps so much, and this strategy comes with various inconsistencies.There's a set of algorithms for variable kernel density estimation or adaptive bandwidth choice, which seems more promising. Another option is the Sheather-Jones method, which existing python KDE libraries use. We don't know of good Javascript implementations of these algorithms.b. Performant KDE with non-triangle shapesSquiggle now uses a triangle kernel for speed. Fast algorithms (FFT) should be possible, with better kernel shapes.See this thread for some more discussion.c. Cutoff HeuristicsOne frequent edge-case is that many distributions have specific limits, often at 0. There might be useful heuristics like, "If there are no samples below zero, then it's very likely there should be zero probability mass below zero, even if many samples are close and the used bandwidth would imply otherwise."See this issue for more information.d. Discrete vs. continuous estimationSometimes, users pass in samples from discrete distributions or mixtures of discrete and continuous distributions. In these cases, it's helpful to have heuristics to detect which data might be meant to be discrete and which is meant to be continuous. Right now, in Squiggle, we do this by using simple heuristics of repetition - if multiple samples are precisely the same, we assume they represent discrete information. It's unclear if there are any great/better ways of doing this heuristically.e. Multidimensional KDEEventually, it will be useful to do multidimensional KDE. It might be more effective to do this in WebAssembly, but this would of course, introduce complications.2. Quantiles to Distributions, Maybe with MetalogA frequent use case is: "I have a few quantile/CDF points in mind and want to fit this to a distribution. How should I do this?"One option is to use the Metalog distribution. There's no great existing Javascript implementation of Metalog yet. Sam Nolan made one attempt, but it's not as flexible as we'd like. (It fails to convert many points into metalog distributions).Jonas Moss thinks we can do better than...

]]>
Ozzie Gooen https://forum.effectivealtruism.org/posts/7aieQzPcBKxMGn9Xo/open-technical-challenges-around-probabilistic-programs-and Sat, 26 Aug 2023 08:51:23 +0000 EA - Open Technical Challenges around Probabilistic Programs and Javascript by Ozzie Gooen Ozzie Gooen https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:49 no full 6946
3NvLqcQPjBeBHDjz6 EA - Perceived Moral Value of Animals and Cortical Neuron Count by WillemSleegers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Perceived Moral Value of Animals and Cortical Neuron Count, published by WillemSleegers on August 25, 2023 on The Effective Altruism Forum.Publication noteWe initially drafted a writeup of these results in 2019, but decided against spending more time to finalize this draft for publication, given the large number of other projects we could counterfactually be working on, and given that our core results had already been reported (with permission) by Scott Alexander, which seemed to capture a large portion of the value of publishing.We're now releasing a lightly edited version of our initial draft, by request, so that our full results can be cited. As such, this is a 'blog post', not a research report, meaning it was produced quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.SummaryScott Alexander reported the results of a small (n=50) experiment on a convenience sample of his Tumblr followers suggesting that the moral value assigned to animals of different species closely tracks their cortical neuron count (Alexander, 2019a). If true, this would be a striking finding suggesting that lay people's intuitions about the moral values of non-human animals are highly attuned to these animals' neurological characteristics. However, a small replication on Amazon Mechanical Turk (N=263) by a commenter called these findings into question by reporting very different results (Alexander, 2019b). We ran a larger study on MTurk (n=526) (reported in Alexander, 2019c)) seeking to explain these differences. We show how results in line with either of these two studies could be found based on different choices about how to analyze the data. However, we conclude that neither of these approaches are the most informative way to interpret the data.We present our own analyses showing that there is enormous variance in the moral value assigned to each animal, with large numbers of respondents assigning each animal equal value with humans or no moral value commensurable with humans, as well as many in between. We discuss implications of this and possible lines of future research.IntroductionScott Alexander previously published the results of a small survey (n=50, recruited from Tumblr) asking individuals "About how many [of different non-human animals] are equal in moral value to one adult human?" (Alexander, 2019a)A response that 1 animal of a given species was of the same moral value as 1 adult human, could be construed as assigning animals of this species and humans equal moral value. Conversely, a response that >1 individuals of a given species were of the same moral value as 1 adult human, could be construed as suggesting that those animals have lower moral value than humans. In principle, this could also tell us how much moral value individuals intuitively assign to different non-human animals relative to each other (e.g. pigs vs cows).Alexander reported that the median value assigned by respondents to animals of different species seemed to track the cortical neuron count (see Alexander 2019d) of these animals conspicuously closely (as shown in Figure 1) and closer than for other potential proxies for moral value, such as encephalization quotient.This apparent correlation between the moral value assigned to animals of different species and their cortical neuron count is particularly striking given that Alexander (2019d) had argued in a previous post that cortical neuron count, specifically, was strongly associated with animal intelligence. Thus it might seem that the intuitive moral value assigned to animals of different species was closely linked to the intelligence of the animals. If so this would potentially, as Alexander put it, add "at least a little credibility" to intuitive judgments about the moral value of no...

]]>
WillemSleegers https://forum.effectivealtruism.org/posts/3NvLqcQPjBeBHDjz6/perceived-moral-value-of-animals-and-cortical-neuron-count 1 individuals of a given species were of the same moral value as 1 adult human, could be construed as suggesting that those animals have lower moral value than humans. In principle, this could also tell us how much moral value individuals intuitively assign to different non-human animals relative to each other (e.g. pigs vs cows).Alexander reported that the median value assigned by respondents to animals of different species seemed to track the cortical neuron count (see Alexander 2019d) of these animals conspicuously closely (as shown in Figure 1) and closer than for other potential proxies for moral value, such as encephalization quotient.This apparent correlation between the moral value assigned to animals of different species and their cortical neuron count is particularly striking given that Alexander (2019d) had argued in a previous post that cortical neuron count, specifically, was strongly associated with animal intelligence. Thus it might seem that the intuitive moral value assigned to animals of different species was closely linked to the intelligence of the animals. If so this would potentially, as Alexander put it, add "at least a little credibility" to intuitive judgments about the moral value of no...]]> Fri, 25 Aug 2023 21:57:35 +0000 EA - Perceived Moral Value of Animals and Cortical Neuron Count by WillemSleegers 1 individuals of a given species were of the same moral value as 1 adult human, could be construed as suggesting that those animals have lower moral value than humans. In principle, this could also tell us how much moral value individuals intuitively assign to different non-human animals relative to each other (e.g. pigs vs cows).Alexander reported that the median value assigned by respondents to animals of different species seemed to track the cortical neuron count (see Alexander 2019d) of these animals conspicuously closely (as shown in Figure 1) and closer than for other potential proxies for moral value, such as encephalization quotient.This apparent correlation between the moral value assigned to animals of different species and their cortical neuron count is particularly striking given that Alexander (2019d) had argued in a previous post that cortical neuron count, specifically, was strongly associated with animal intelligence. Thus it might seem that the intuitive moral value assigned to animals of different species was closely linked to the intelligence of the animals. If so this would potentially, as Alexander put it, add "at least a little credibility" to intuitive judgments about the moral value of no...]]> 1 individuals of a given species were of the same moral value as 1 adult human, could be construed as suggesting that those animals have lower moral value than humans. In principle, this could also tell us how much moral value individuals intuitively assign to different non-human animals relative to each other (e.g. pigs vs cows).Alexander reported that the median value assigned by respondents to animals of different species seemed to track the cortical neuron count (see Alexander 2019d) of these animals conspicuously closely (as shown in Figure 1) and closer than for other potential proxies for moral value, such as encephalization quotient.This apparent correlation between the moral value assigned to animals of different species and their cortical neuron count is particularly striking given that Alexander (2019d) had argued in a previous post that cortical neuron count, specifically, was strongly associated with animal intelligence. Thus it might seem that the intuitive moral value assigned to animals of different species was closely linked to the intelligence of the animals. If so this would potentially, as Alexander put it, add "at least a little credibility" to intuitive judgments about the moral value of no...]]> WillemSleegers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 38:37 no full 6945
3rf99yiGhjDdBDeCJ EA - AI Safety Bounties by PatrickL Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Bounties, published by PatrickL on August 25, 2023 on The Effective Altruism Forum.I spent a while at Rethink Priorities considering 'AI Safety Bounties' - programs where public participants or approved security researchers receive rewards for identifying issues within powerful ML systems (analogous to bug bounties in cybersecurity). I considered the benefits and risks of programs like this and suggested potential program models. This report is my takeaways from this process:Short summarySafety bounties could be valuable for legitimizing examples of AI risks, bringing more talent to stress-test systems, and identifying common attack vectors.I expect safety bounties to be worth trialing for organizations working on reducing catastrophic AI risks. Traditional bug bounties seem fairly successful: they attract roughly one participant per $50 of prize money, and have become increasingly popular with software firms over time. The most analogous program for AI systems led to relatively few useful examples compared to other stress-testing methods, but one knowledgeable interviewee suggested that future programs could be significantly improved.However, I am not confident that bounties will continue to be net-positive as AI capabilities advance. At some point, I think the accident risk and harmful knowledge proliferation from open sourcing stress-testing may outweigh the benefits of bountiesIn my view, the most promising structure for such a program is a third party defining dangerous capability thresholds ("evals") and providing rewards for hunters who expose behaviors which cross these thresholds. I expect trialing such a program to cost up to $500k if well-resourced, and to take four months of operational and researcher time from safety-focused people.I also suggest two formats for lab-run bounties: open contests with subjective prize criteria decided on by a panel of judges, and private invitations for trusted bug hunters to test their internal systems.Author's note: This report was written between January and June 2023. Since then, safety bounties have become a more well-established part of the AI ecosystem, which I'm excited to see. Beyond defining and proposing safety bounties as a general intervention, I hope this report can provide useful analyses and design suggestions for readers already interested in implementing safety bounties, or in better understanding these programs.Long summaryIntroduction and bounty program recommendationsOne potential intervention for reducing catastrophic AI risk is AI safety bounties: programs where members of the public or approved security researchers receive rewards for identifying issues within powerful ML systems (analogous to bug bounties in cybersecurity). In this research report, I explore the benefits and downsides of safety bounties and conclude that safety bounties are probably worth the time and money to trial for organizations working on reducing the catastrophic risks of AI. In particular, testing a handful of new bounty programs could cost $50k-$500k per program and one to six months full-time equivalent from project managers at AI labs or from entrepreneurs interested in AI safety (depending on each program's model and ambition level).I expect safety bounties to be less successful for the field of AI safety than bug bounties are for cybersecurity, due to the higher difficulty of quickly fixing issues with AI systems. I am unsure whether bounties remain net-positive as AI capabilities increase to more dangerous levels. This is because, as AI capabilities increase, I expect safety bounties (and adversarial testing in general) to potentially generate more harmful behaviors. I also expect the benefits of the talent pipeline brought by safety bounties to diminish. I suggest an informal way to monitor the r...

]]>
PatrickL https://forum.effectivealtruism.org/posts/3rf99yiGhjDdBDeCJ/ai-safety-bounties Fri, 25 Aug 2023 21:02:05 +0000 EA - AI Safety Bounties by PatrickL PatrickL https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:28 no full 6944
EcxABaFX2Ehx2X3Hv EA - To donate or not to donate [a lobe of my liver]? by Kyle J. Lucchese Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: To donate or not to donate [a lobe of my liver]?, published by Kyle J. Lucchese on August 25, 2023 on The Effective Altruism Forum.This post is not intended to endorse any particular course of action for one's life, especially if that potentially jeopardizes your health and well-being. Please do your own research and consider how that intersects with your values.I am considering donating a lobe of my liver in a non-directed process and would welcome some community perspective:Is this something you have researched or have done yourself?Do you know anyone who has?What are your thoughts about this from a cost-benefit/impact perspective?A bit of context:I am, by all accounts, healthy and would likely be eligibleI am okay with voluntary physical discomfort for others' benefit:I am a regular double-red blood donorI have already donated a kidney in a non-directed donationI participate in challenge trials when opportunities with high-impact potential become available (I recently participated in a Shigella study and am considering Malaria, Dengue, and Zika options for the fall)Thank you, in advance, for sharing your perspective!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Kyle J. Lucchese https://forum.effectivealtruism.org/posts/EcxABaFX2Ehx2X3Hv/to-donate-or-not-to-donate-a-lobe-of-my-liver Fri, 25 Aug 2023 20:41:52 +0000 EA - To donate or not to donate [a lobe of my liver]? by Kyle J. Lucchese Kyle J. Lucchese https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:17 no full 6942
oWLT2L2ToxsNtKKnC EA - Influences on individuals adopting vegetarian and vegan diets by Jamie Elsey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Influences on individuals adopting vegetarian and vegan diets, published by Jamie Elsey on August 25, 2023 on The Effective Altruism Forum.Topline summary points:Personal conversations/interactions are reported to be the most important influences in adopting a vegan or vegetarian diet, and most frequently also the first influence in getting someone interested in dietary changeInteractions with animals also remain a strong source of influenceExposure to documentaries, social media, and online videos are also attributed considerable importance, particularly among more recent adopters, and those whose diet was vegan (rather than vegetarian)Concerns over animal welfare are rated as the most important reasons for adopting a vegetarian or vegan diet, though environmental and health concerns are also rated relatively highly - again especially among more recent adoptersPublication noteThis post is based on data obtained in December 2019, and was produced by request on a short timeframe. As such, this is a 'blog post', not a substantive research report.BackgroundDecreasing animal-product consumption can have numerous benefits, from increasing the sustainability of the food system (Clark & Tilman, 2017) to reducing animal suffering (Broom, 2007; Bryant, 2019; Park & Singer, 2012). Yet, dietary change can be challenging. Factors such as habit, familiarity, taste preferences, food availability, cooking skills, social context, and cultural traditions can all pull against the adoption of a new diet (Peacock, 2023; Graça et al., 2019; Valli et al., 2019). Determining what methods are most effective for encouraging more plant-based diets may help guide advocates and change minds, ultimately reducing animal suffering and increasing food system sustainability.A growing body of literature is examining the question of what works to reduce animal-product consumption. Several systematic reviews (e.g., Bianchi et al., 2018a; Bianchi et al., 2018b; Graça et al., 2019) have collated experimental evidence for dietary change interventions. In experimental studies, the experimenters administer an intervention (e.g., education about environmental consequences of meat consumption), and evaluate how this affects a particular outcome (e.g., self-reported meat consumption). Despite generally having high internal validity, these study designs often have limited external validity because they test efficacy under ideal circumstances that are unlikely to be present in real-world situations. Furthermore, such studies often test different interventions in isolation and in different contexts, posing difficulties for the comparison of effects across interventions.An alternative approach is to examine what works retrospectively. That is, to ask: why did current vegetarians and vegans make that dietary change? Although previous work has examined this topic, most of it has been through informal surveys hosted on blogs (e.g., The Two Vegans, 2016; The Vegan Truth, 2013; VOMAD, 2018). This recruitment method can lead to biased samples (e.g., toward individuals who are particularly engaged in veganism and willing to answer questions about it without reimbursement). Other limitations of the prior surveys include having incomplete response options (e.g., assessing animal welfare motivations for going vegan but not health or environmental reasons: The Vegan Truth, 2013), assessing first but not main influences on dietary change, and not asking about the relative importance of different influences.Despite these limitations, this initial research suggests that personal connections and educational films or documentaries are some of the more important influences (VOMAD,2018; The Vegan Truth, 2013).The aim of this project was to provide a more comprehensive, systematic survey of what influences people to ado...

]]>
Jamie Elsey https://forum.effectivealtruism.org/posts/oWLT2L2ToxsNtKKnC/influences-on-individuals-adopting-vegetarian-and-vegan Fri, 25 Aug 2023 20:32:38 +0000 EA - Influences on individuals adopting vegetarian and vegan diets by Jamie Elsey Jamie Elsey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 20:39 no full 6943
pbMfYGjBqrhmmmDSo EA - Nuclear winter - Reviewing the evidence, the complexities, and my conclusions by Michael Hinge Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nuclear winter - Reviewing the evidence, the complexities, and my conclusions, published by Michael Hinge on August 25, 2023 on The Effective Altruism Forum.Nuclear winter is a topic that many have heard about, but where significant confusion can occur. I certainly knew little about the topic beyond the superficial prior to working on nuclear issues, and getting up to speed with the recent research is not trivial. Even then there are very large uncertainties around the topic, and different teams have come to different conclusions, which are unlikely to be resolved soon.However, this does not mean that we cannot draw some conclusions on nuclear winter. I worry that the fact there are still uncertainties has been taken to the extreme that nuclear winters are impossible, which I do not believe can be supported. Instead, I would suggest the balance of evidence suggests there is a very serious risk a large nuclear exchange could lead to catastrophic climate impacts. Given that the potential mortality of these winters could exceed the direct deaths, the evidence is worth discussing in detail, especially as it seems that we could prepare and respond.This is a controversial and complex topic, and I hope I have done it justice with my overview. Throughout this article I should however stress that these words are my own, along with any mistakes.Headline results, for those short on timeThis is a very long post, and for those short on time the thrust of my argument is summarized here. I've also included the relevant section(s) in the post that back up each point for those who wish to check my working on a specific piece of analysis.Firestorms can inject soot effectively into the stratosphere via their intense plumes. Once there, soot could persist for years, disrupting the climate (see "The case for concern: Firestorms, plumes and soot").Not every nuclear detonation would create a firestorm, but large nuclear detonations over dense cities have a high risk of causing them (see "The case for concern: Firestorms, plumes and soot").Not every weapon in arsenals will be used, and not all will hit cities. However, exchanges between Russia and NATO have the potential to result in hundreds or thousands of individual detonations over cities. This has the potential to result in a large enough soot injection that catastrophic cooling could occur (see "Complexities and Disagreements: Point one").The impacts of soot injections on the climate are nonlinear, and even if the highest thresholds in the literature are not met there still could be serious cooling. My personal estimates are that around 20-50 Tg of soot could be emitted in the event of a full-scale exchange between Russia and NATO with extensive targeting of cities. This is less than the upper threshold of 150 Tg from other sources, however that would still cause catastrophic crop losses without an immediate and effective response (see "Synthesis").There are disagreements between the teams on the dynamics of nuclear winters, which are complex. However, to date these disagreements center on a regional exchange of 100 weapons of 15 kt each, much smaller than Russian and NATO strategic weaponry. Some of the points raised in the debate are therefore far less relevant when looking at larger exchanges, where fuel loads would be higher, weapons are much more powerful and firestorms more likely (see "Complexities and Disagreements: The anatomy of the disagreements, followed by Synthesis").This means that nuclear winter is not discredited as a concept, and is very relevant for the projected mortality of nuclear conflict. There are uncertainties, and more research is needed (and possibly underway), however it remains a clear threat for larger conflicts in particular (see "Synthesis").Overall, although highly uncertain, I would expec...

]]>
Michael Hinge https://forum.effectivealtruism.org/posts/pbMfYGjBqrhmmmDSo/nuclear-winter-reviewing-the-evidence-the-complexities-and Fri, 25 Aug 2023 16:44:59 +0000 EA - Nuclear winter - Reviewing the evidence, the complexities, and my conclusions by Michael Hinge Michael Hinge https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 57:01 no full 6940
FfDu8HsNEzabqxKrs EA - [Linkpost] My moral view: Reducing suffering, 'how to be' as fundamental to morality, no positive value, cons of grand theory, and more - By Simon Knutsson by Alistair Webster Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] My moral view: Reducing suffering, 'how to be' as fundamental to morality, no positive value, cons of grand theory, and more - By Simon Knutsson, published by Alistair Webster on August 25, 2023 on The Effective Altruism Forum.The following are excerpts from a new essay by Simon Knutsson:SummaryI explain parts of my moral view briefly. I also talk about some arguments for and against my view. The basics of my moral view include that one should focus on reducing severe suffering and behave well, and these basics get us pretty far in terms of how to act and be in real life.IntroductionMy moral view is suffering-focused in the sense that it emphasises the reduction of suffering and the like. My view might differ from other suffering-focused views in the following ways:I do not pick or try to formulate an overarching moral theory. Instead, my moral view is intentionally fairly non-theoretical.I think of ideas about how one should be, such as the idea that one should be considerate, as more primitive and fundamental to morality than some others seem to think of them. Many others agree that one should be like that, but perhaps for different reasons, such as that being like that has the best consequences.My take on what should be reduced for its own sake is perhaps unusually pluralistic - it is most important to reduce extreme suffering, but it also makes sense to reduce, for example, gruesome violence, ruined lives and life projects, and acts such as ignoring harms.My view could be labelled a pitch-black philosophical pessimism located towards the end of a philosophical optimism-pessimism spectrum. In my view, there is no positive value and no positive quality of life, and there are no positive experiences. An empty world is the best possible world, the world is terrible, and the future will almost certainly be appalling. (Of course, we should still try to make the world and the future less awful.)I am sceptical of categorical notions such as 'good' and 'positive value', and I instead prefer comparative notions such as 'better' and 'worse'.I don't think much in terms of uncertainty about moral principles or evaluative judgments; rather, I tend to think in terms of to what extent I accept or agree with specific ideas about morality and value.I think that the following are some of the advantages of my moral view: By being light on theory, it avoids pitfalls that high-level moral theories can have. It also takes suffering seriously, is overall reasonable, and lacks implausible implications such as when a view recommends that a clearly immoral act should be carried out. And my moral view is quite action-guiding in real life. This talk about advantages may sound like academic niceties, but most of the points have great practical importance. For example, it is crucial to direct one's attention and efforts to those who are or will be extremely badly off.Disadvantages of overarching moral theoriesI don't identify as a particularist or antitheorist, but I doubt the importance of overarching moral theories, and at least some of them seem to have drawbacks. That is, it seems we can do without high-level moral theories, and the following are three disadvantages that high-level moral theories can have (a caveat is that some high-level theories might be innocuous, and almost all examples of disadvantages I have observed concern consequentialist moral theories).The first disadvantage is when someone accepts a problematic implication of their favourite moral theory with the justification that "all moral theories have counterintuitive implications". The person holds that something is permissible even though one would normally think of it as immoral or even monstrous. But if all existing moral theories are so problematic, we need not choose any of them. Such accept...

]]>
Alistair Webster https://forum.effectivealtruism.org/posts/FfDu8HsNEzabqxKrs/linkpost-my-moral-view-reducing-suffering-how-to-be-as Fri, 25 Aug 2023 14:39:57 +0000 EA - [Linkpost] My moral view: Reducing suffering, 'how to be' as fundamental to morality, no positive value, cons of grand theory, and more - By Simon Knutsson by Alistair Webster Alistair Webster https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:19 no full 6938
BLqJdZAgsvyWX7fug EA - Local charity evaluation: Lessons from the "Maximum Impact" Program in Israel by Yonatan Schoen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Local charity evaluation: Lessons from the "Maximum Impact" Program in Israel, published by Yonatan Schoen on August 27, 2023 on The Effective Altruism Forum.TL;DRMaximum Impact program, led by Effective Altruism Israel, aims to significantly increase the impact of Israeli philanthropy by conducting nonprofit evaluations, identifying high-impact local nonprofits (even if their recipients are abroad), and promoting effective giving. Additionally, we aim to strengthen the community internally by developing a replicable model that could achieve these goals in other local communities and by building the Israeli EA community through action-oriented projects.During the last year, we joined forces with promising non-profits and conducted and published 21 cost-effectiveness analyses. Together with an acclaimed team of expert judges, we identified 3 nonprofits as relatively cost-effective, with Smoke-Free Israel, a tobacco taxation nonprofit, which might be competitive with top charities. We learnt a lot from the pilot and have many ways to improve the program, but overall the pilot was successful: we identified high-impact local nonprofits and have engaged Israeli philanthropies interested in cost-effectiveness and in the results of the program. Finally, having a large-scale program on nonprofit effectiveness was a huge boost for EA Israel's community building, and many people heard about EA and became more involved because of the program. We're looking forward to continuing to maximize the impact of Israelis and to share the knowledge and experience we gained with the EA community.MotivationWe identified that global effective giving platforms currently don't "meet donors where they are" and only offer recommendations on global nonprofits without being able to engage donors; Donors who are initially interested in local donations but can later be introduced and nudged to give to global high-impact nonprofits.Israeli and Jewish donors abroad (~17.2bn per annum) currently donate almost entirely without evidence and cost-effectiveness estimations, as the Israeli non-profit ecosystem doesn't yet emphasize these as prerequisites for donations.With almost no research currently available on local effectiveness and a strong bias of donors towards donating locally, there are not enough impact-based donations in the local nonprofit sector.Based on discussions with EAs worldwide, we assume this problem is not unique to Israel and would like to find a cost-effective way to solve it.What have we done so far?Our research programIn our 10-month program, we chose 25 promising nonprofits from more than 160 applications from Israeli-based NGOs, 21 of which successfully published their first-ever cost-effectiveness reports. In the pilot program, we provided guidance and assistance to support non-profits to execute a cost-effectiveness analysis. After the research period, we sent the reports to a team of expert judges for review, and chose the most promising non-profits based on the expert review. We offered monetary incentives to encourage nonprofits to participate: 15 nonprofits received preliminary research grants of $6k, the 3 top outstanding nonprofits received $24k prizes, and $26k was allocated to a donation matching campaign to all nonprofits who publish a report. A private donor and an Infrastructure Fund grant funded the pilot.Guiding nonprofits on evaluation and improvementWe implemented various tools and resources to help incentivize and enable non-profits to conduct robust effectiveness analyses, including:An extensive set of templates and how-tos.Research grants for 10 young academic researchers (MA and PHDs) from Israel's top universities to conduct research with the organizations and join the EA community during the process.Personal guidance from skilled community...

]]>
Yonatan Schoen https://forum.effectivealtruism.org/posts/BLqJdZAgsvyWX7fug/local-charity-evaluation-lessons-from-the-maximum-impact Sun, 27 Aug 2023 16:52:19 +0000 EA - Local charity evaluation: Lessons from the "Maximum Impact" Program in Israel by Yonatan Schoen Yonatan Schoen https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:17 no full 6959
zakNChFRxPcYKyieh EA - Which organisation is most deserving of the marginal $ right now? (your opinions) by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Which organisation is most deserving of the marginal $ right now? (your opinions), published by Nathan Young on August 27, 2023 on The Effective Altruism Forum.@Aaron Bergman saysI wish there was (and there should be) more discussion of "which singular organization is most deserving of money rn" Individual donors should make their best guess public and indicate openness to critiquetwitterSo let's guess. What is your suggestion?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Nathan Young https://forum.effectivealtruism.org/posts/zakNChFRxPcYKyieh/which-organisation-is-most-deserving-of-the-marginal-usd Sun, 27 Aug 2023 12:23:30 +0000 EA - Which organisation is most deserving of the marginal $ right now? (your opinions) by Nathan Young Nathan Young https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:39 no full 6958
ZS9GDsBtWJMDEyFXh EA - Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong by Omnizoid Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong, published by Omnizoid on August 27, 2023 on The Effective Altruism Forum.Introduction"After many years, I came to the conclusion that everything he says is false. . . . "He will lie just for the fun of it. Every one of his arguments was tinged and coded with falseness and pretense. It was like playing chess with extra pieces. It was all fake."Paul Postal (talking about Chomsky) (note, this is not exactly how I feel about Yudkowsky, I don't think he's knowingly dishonest, but I just thought it was a good quote and partially represents my attitude towards Yudkowsky).Crosspost of this on my blog.In the days of my youth, about two years ago, I was a big fan of Eliezer Yudkowsky. I read his many, many writings religiously, and thought that he was right about most things. In my final year of high school debate, I read a case that relied crucially on the many worlds interpretation of quantum physics - and that was largely a consequence of reading through Eliezer's quantum physics sequence. In fact, Eliezer's memorable phrasing that the many worlds interpretation "wins outright given the current state of evidence," was responsible for the title of my 44-part series arguing for utilitarianism titled "Utilitarianism Wins Outright." If you read my early articles, you can find my occasional blathering about reductionism and other features that make it clear that my worldview was at least somewhat influenced by Eliezer.But as I grew older and learned more, I realized it was all bullshit.Eliezer sounds good whenever he's talking about a topic that I don't know anything about. I know nothing about quantum physics, and he sounds persuasive when talking about quantum physics. But every single time he talks about a topic that I know anything about, with perhaps one or two exceptions, what he says is total nonsense, at least, when it's not just banal self-help advice. It is not just that I always end up disagreeing with him, it is that he says with almost total confidence outrageous falsehood after outrageous falsehood, making it completely clear he has no idea what he is talking about. And this happens almost every single time. It seems that, with few exceptions, whenever I know anything about a topic that he talks about, it becomes clear that his view is a house of cards, built entirely on falsehoods and misrepresentations.Why am I writing a hit piece on Yudkowsky? I certainly don't hate him. In fact, I'd guess that I agree with him much more than almost all people on earth. Most people believe lots of outrageous falsehoods. And I think that he has probably done more good than harm for the world by sounding the alarm about AI, which is a genuine risk. And I quite enjoy his scrappy, willing-to-be-contrarian personality. So why him?Part of this is caused by personal irritation. Each time I hear some rationalist blurt out "consciousness is just what an algorithm feels like from the inside," I lose a year of my life and my blood pressure doubles (some have hypothesized that the explanation for the year of lost life involves the doubling of my blood pressure). And I spend much more time listening to Yukowsky's followers spout nonsense than most other people.But a lot of it is that Yudkowsky has the ear of many influential people. He is one of the most influential AI ethicists around. Many people, my younger self included, have had their formative years hugely shaped by Yudkowsky's views - on tons of topics. As Eliezer says:In spite of how large my mistakes were, those two years of blog posting appeared to help a surprising number of people a surprising amount.Quadratic Rationality expresses a common sentiment, that the sequences, written by Eliezer, have significantly shaped the world view of him and others. Elie...

]]>
Omnizoid https://forum.effectivealtruism.org/posts/ZS9GDsBtWJMDEyFXh/eliezer-yudkowsky-is-frequently-confidently-egregiously Sun, 27 Aug 2023 03:52:24 +0000 EA - Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong by Omnizoid Omnizoid https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 58:48 no full 6956
3J8aBk8wc668CJnbb EA - Rethink Priorities is looking for a (Co-)Founder for a New Project: Field Building in Universities for AI Policy Careers in the US by KevinN Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is looking for a (Co-)Founder for a New Project: Field Building in Universities for AI Policy Careers in the US, published by KevinN on August 28, 2023 on The Effective Altruism Forum.SummaryWe are currently seeking a founder, or founding team, for a project focusing on field building in universities for AI policy careers in the US.RP commits to supporting this project through at least an initial pilot (e.g. an on-campus policy competition event planned within the first three months of the project), including guaranteeing initial seed funding and providing research and operational support. (Note this role is not within RP.)Our team of researchers is particularly excited about this project, following hundreds of hours of research identifying the most promising project ideas to reduce existential risk, and working from an initial list of hundreds of ideas.Depending on the results of the pilot and founder preferences, RP may remain involved in a supporting role or the project may spin off independently.Our aim is for this project to scale over time and eventually become a large-scale initiative with a significant impact on reducing extreme risks from AI.Please email university-ai-policy@rethinkpriorities.org if you have any questions.Application Deadline: September 25, 2023 at the end of the day (11:59 PM) in US/Eastern (EST) time zone.Apply hereAbout the ProjectIn the last 5 years, solid professional tracks have emerged for technical AI safety research and, increasingly, AI governance research, with programs that provide a relatively clear roadmap of opportunities and next steps. Less effort has gone into developing a similar track for working directly on AI policy in the US. This is important because policymakers are increasingly asked to tackle the risks and challenges associated with this technology, but often lack access to the expertise needed to navigate this effectively. Bringing more qualified talent to work as congressional staffers, within a relevant executive agency, or in an influential DC think tank could mean directly supporting effective decision-making around AI regulation that promotes the public interest.The idea for this project is to start a program that dramatically increases the number of talented AI-risk-minded students planning to enter a US policy career by helping them understand arguments about catastrophic risk from AI, how these career tracks can be impactful, and how to navigate related career decisions successfully. This program can also ensure a well-developed pipeline for encouraging students already interested in AI risk management careers to consider US policy roles.This program could:Run events such as policy competitions or retreats that find and develop talented individuals who might be especially well-suited for US policy careersConnect students with relevant information, mentors, and peers around AI risk management and US policyHelp establish an early career track around AI risk management policy, especially from a catastrophic risk lens, within universitiesDo even more ambitious work that we have not considered yet!About the RoleWe expect the Founder(s) of this initiative to collaborate with the Rethink Priorities team and significantly shape the direction of this project. We think the first step for this project should be to run a low-cost pilot for the Founder(s) to gather more information about the target audience, the approaches that tend to work, and their fit for running the project. This pilot can also provide a proof of concept for future grant applications, which the Founder(s) will be responsible for.RP will be able to offer support in a variety of ways based on the needs of the project, including research support, operations support, providing feedback, mentorship, and providing ...

]]>
KevinN https://forum.effectivealtruism.org/posts/3J8aBk8wc668CJnbb/rethink-priorities-is-looking-for-a-co-founder-for-a-new Mon, 28 Aug 2023 21:35:43 +0000 EA - Rethink Priorities is looking for a (Co-)Founder for a New Project: Field Building in Universities for AI Policy Careers in the US by KevinN KevinN https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:19 no full 6964
vnhRiGHodi9mSYgQL EA - EA Organisations budget size by Miri Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organisations budget size, published by Miri on August 28, 2023 on The Effective Altruism Forum.I recently noticed that my expectations of what various organisations are spending are way off, so I tried to gather some data.This excludes money moved through the organisation but re-granted outside of it. For organisations that did not have a more recent budget, I graphed their historical trends between publicly available budgets, and assumed that fundraising targets would be hit at the organisation's mid-level.Happy to correct any of these if anyone can link to more updated data!OrganisationEstimated 2024 budgetCentre for Effective Altruism$30 millionGiveWell$28 million80,000 Hours$15.5 million (excludes marketing)Rethink Priorities$8.3 millionFounders Pledge$7 millionLightcone $3.7 millionGiving What We Can$2.35 millionCharity Entrepreneurship $2 millionAnimal charity evaluators $1.5 millionManifold markets$1 millionHappier lives institute$0.85 million One for the world$0.6 millionProbably good$0.5 millionData/sources - all numbers normed for 2024Centre for effective altruism $30 million - $28m in 2022, events ~$10m, EA forum ~$2mGiveWell $28 million - extrapolated from historical planned budgets e.g. $25.4 million in 202380,000 hours $15.5 million (excluding marketing)Rethink priorities $8.3 million - extrapolated from $7.5 million in 2022 + the medium growth estimateFounders pledge $7 million - extrapolated from the trends of $4m in 2020 to 2021 $5.6mMetaculus $2.75 million - extrapolated that this $5m grant is over 2 yearsLightcone/Lesswrong $3.7 millionGiving what we can $2 millionCharity Entrepreneurship $2 million - extrapolated from $1.62 million in 2023Animal charity evaluators $1.5 million (not including regranting)Manifold markets $1 millionHappier lives institute $0.8 million - most recent fundraising postOne For The World $0.6 million - extrapolated from $0.5m in 2022Probably good - $0.5 million - extrapolated from $0.3m in 2022Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Miri https://forum.effectivealtruism.org/posts/vnhRiGHodi9mSYgQL/ea-organisations-budget-size Mon, 28 Aug 2023 16:26:36 +0000 EA - EA Organisations budget size by Miri Miri https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:44 no full 6963
qzQ24rbZ4kMFDswDK EA - Does EA bring out the best in me? by Lorenzo Buonanno Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does EA bring out the best in me?, published by Lorenzo Buonanno on August 28, 2023 on The Effective Altruism Forum.I really liked the framing in the linked post. Some relevant quotes:When I stay with my parents I regress back to being a teenager. I become irritable, messy and spend most of my time eating cheese toasties and watching The Simpsons. There's something about this social environment which brings out qualities in me that I don't like, and behaviours which I don't want [...]Another example - there's a friend of mine who is quite a curious, laid-back type. And often when I'm with said friend, I somehow magically become more curious and laid-back. In this case, the social environment is bringing out qualities in me that I do like.There are [...] things which EA brings out in me which I don't like.I've spoken to a bunch of people that are doing something like 'trying to figure out their relationship with EA'. And often it seems like people end up trying to do something like 'work out whether EA is good or bad'.This is a hard question to answer. I think an easier one, and a more action-relevant one is 'Does EA bring out the best in you?' or even 'What does EA bring out of you?'.An overall theme here is that people are different depending on the social environment that they are in. This will be more true of some people than others. Some people will change a lot, like a chameleon, and other people will change only a little, like a cow, or a. wardrobe, or whatever the opposite of a chameleon is.I found this a useful mindset I hadn't thought of, even if it seems obvious in hindsight. Many posts and books mention the value of engaging with the effective altruism movement to prevent value drift. But I had never reflected on what directions different parts of the movement are making me drift towards, or are preventing me from drifting to.I can't trace back where I found it, it somehow made its way into my open tabs for this evening. Thank you to whoever shared it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Lorenzo Buonanno https://forum.effectivealtruism.org/posts/qzQ24rbZ4kMFDswDK/does-ea-bring-out-the-best-in-me Mon, 28 Aug 2023 05:35:30 +0000 EA - Does EA bring out the best in me? by Lorenzo Buonanno Lorenzo Buonanno https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:57 no full 6961
ak6gHqxjrwHqeKdEB EA - I am hiring an Executive Research Assistant - improve RP while receiving mentorship by Peter Wildeford Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I am hiring an Executive Research Assistant - improve RP while receiving mentorship, published by Peter Wildeford on August 29, 2023 on The Effective Altruism Forum.I'm Peter Wildeford, co-CEO at Rethink Priorities (RP). RP is a research and implementation group that identifies pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits.I'm excited to be hiring a new Executive Research Assistant (ERA) role to work directly with me to help fill important gaps in RP's needs, and keep me and my fellow co-CEO Marcus consistently informed and prepared. See more detail + apply here.The work is incredibly varied and will give you a diversity of experience. Past ERA projects have included varied tasks such as creating a risk management framework for RP, summarizing relevant research papers, interviewing different relevant experts about how to tackle a particular problem, producing EA Forum summaries, responding to co-CEO inbound email correspondence in a timely and organized manner, and ensuring meetings with co-CEOs and others are scheduled.Project selection is adaptable to the candidate's interests and can fit more junior candidates (eg. more scheduling) and more senior candidates (eg. more research, project management, or stakeholder outreach).The role also comes with significant career mentoring. In fact, the reason we are hiring for this role is that past people in this position have already moved on to significant more senior roles at RP. As your manager I will meet with you regularly to understand your career goals, help you test fit for various roles, and put you on a path to success. The work itself will also build varied skills and connections within the organization and potentially some of our external stakeholders.Some fast facts about the role:Deadline to apply - September 17Role is permanent, full or part time with a minimum of 20hrs/weekThe role is remote and we are able to hire in most countriesThe compensation is $69,000 - $87,000 USD/year depending on experience (the amount is calculated using RP's salary algorithm and is dependent on prior relevant experience and corresponding title level)See more detail + apply here.A pitch for applying:You could make an immediate impact on a large organization at the CEO levelWork at RP influences the decision-making of major foundations spending hundreds of millions of dollarsYou will receive useful career mentorship and the opportunity to test fit and build skills for a variety of future roles based on your interestsGood work-life balance, generous paid time off leave, and generous benefits including parental leave of up to 6 months during the first 2 years after, or split before and after post a child's birth or adoption for parents of all gendersPrior people holding this role have said:"I can say from first-hand experience that this is a great job - You get to do important, interdisciplinary work, and the Co-CEOs are super supportive of you finding and developing your own strengths""This role was fantastic for helping me build up relevant subject matter expertise and test fit for roles that would have been difficult to enter directly. There's also a lot of chances for direct impact - your ideas are listened to, and there's lots of autonomy such that the way you approach things and the specific skills you bring will make a real difference to outcomes of the projects you take on."This role would be ideal for candidates who demonstrate:Strong writing skills, especially the ability to write concisely yet accuratelyAbility to accurately and quickly interpret research from a wide variety of fields, including fields yo...

]]>
Peter Wildeford https://forum.effectivealtruism.org/posts/ak6gHqxjrwHqeKdEB/i-am-hiring-an-executive-research-assistant-improve-rp-while Tue, 29 Aug 2023 20:31:28 +0000 EA - I am hiring an Executive Research Assistant - improve RP while receiving mentorship by Peter Wildeford Peter Wildeford https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:52 no full 6976
y7wwsBqfSeKxaXYZy EA - Giving gladly, giving publicly by HenryStanley Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Giving gladly, giving publicly, published by HenryStanley on August 29, 2023 on The Effective Altruism Forum.A short post on my blog detailing a recent donation - and my intention to be a bit more public about my giving in future.I'm never sure of the expected value of these sorts of posts; they feel a bit awkward and braggy to me. But I do want to get more people into philanthropy and the mindset that they can make a difference even with (relatively) small sums of money,Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
HenryStanley https://forum.effectivealtruism.org/posts/y7wwsBqfSeKxaXYZy/giving-gladly-giving-publicly Tue, 29 Aug 2023 14:57:04 +0000 EA - Giving gladly, giving publicly by HenryStanley HenryStanley https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:40 no full 6973
aFYduhr9pztFCWFpz EA - Preliminary Analysis of Intervention to Reduce Lead Exposure from Adulterated Turmeric in Bangladesh Shows Cost Benefit of About US$1 per DALY by Kate Porterfield Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Preliminary Analysis of Intervention to Reduce Lead Exposure from Adulterated Turmeric in Bangladesh Shows Cost Benefit of About US$1 per DALY, published by Kate Porterfield on August 29, 2023 on The Effective Altruism Forum.Pure Earth is a GiveWell Grantee dedicated to reducing lead exposure in low- and middle-income countries. In collaboration with Stanford University and the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b), a preliminary cost-effectiveness analysis (CEA) was performed to assess the effectiveness of an intervention in Bangladesh. The CEA presents an encouraging outlook, with a cost per disability-adjusted life year (DALY)-equivalent averted estimated at just under US$1. As this assessment is preliminary, it may contain methodological inconsistencies with GiveWell's. As such, we welcome any comments and corrections.In 2019, after investigations concluded that turmeric was the primary source of lead exposure among residents of rural Bangladesh, Stanford University and Bangladeshi non-profit icddr,b embarked on a mission to eliminate lead poisoning from turmeric. Stanford and icddr,b's investigations had revealed that lead chromate (an industrial pigment sometimes called "School Bus Yellow") was being added to turmeric roots to make them more attractive for sale. Armed with this evidence, the team coordinated with the Bangladeshi Food Safety Authority to conduct a crack-down of the adulteration by enforcing policies at the markets and raising awareness among businesspeople and the public nationwide. These efforts successfully halted the practice of adding lead chromate to turmeric: the prevalence of lead in turmeric dropped from 47% in 2019 to 0% in 2021.In collaboration with Pure Earth, icddr,b continues to monitor turmeric and other spices and coordinate with government agencies to maintain the safety of these and other food products.To gauge the effectiveness of this program in advancing the mission of reducing lead exposures globally, it is important to assess both impact and cost-effectiveness. To approach this task, Pure Earth and Stanford have completed a back-of-the-envelope cost-effectiveness analysis (CEA), incorporating preliminary data from blood lead level assessments and various assumptions. This model is built off of previous models created by LEEP and Rethink Priorities.The preliminary findings are that this program can avert an equivalent DALY for just under $1. This result is extraordinary, albeit deserving of further scrutiny. It indicates that certain interventions in the lead space could be enormously cost-effective. The body of work to reduce lead exposures in LMICs is nascent, and not all interventions are likely to be as cost-effective as spices. But clearly, more work on spices is called for, and Pure Earth, Stanford, icddr,b, and others are pursuing funding to expand these programs into other countries.Program Implementation CostsTo establish a framework for cost-effectiveness assessment, it is essential to define the terms "cost" and "effectiveness" within the context of the Stanford-led mission. The concept of "cost" encompasses the resources utilized by the project team and those expended by the Bangladesh government as a direct result of the project's activities. Specifically, we consider monetary expenses incurred by the program, which we estimate to be upfront costs of $360,000. These expenses include both the costs to identify the sources of lead exposure and implement the program, as well as continuing costs of $100,000 to monitor and continue the program after the initial implementation. Additionally, the Government of Bangladesh is expected to spend $100,000 over the course of the intervention.To facilitate comparisons with other global health interventions, we define the project's"...

]]>
Kate Porterfield https://forum.effectivealtruism.org/posts/aFYduhr9pztFCWFpz/preliminary-analysis-of-intervention-to-reduce-lead-exposure Tue, 29 Aug 2023 13:36:35 +0000 EA - Preliminary Analysis of Intervention to Reduce Lead Exposure from Adulterated Turmeric in Bangladesh Shows Cost Benefit of About US$1 per DALY by Kate Porterfield Kate Porterfield https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:33 no full 6972
tk2YWPKNqeCDWData EA - EA Germany Community Health Documents & Processes by Milena Canzler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Germany Community Health Documents & Processes, published by Milena Canzler on August 30, 2023 on The Effective Altruism Forum.Our goal with these documents is to build a safe community for all our members by making sure any interpersonal harm is appropriately dealt with and encouraging harmed individuals to reach out to us. This is a collection of all the documents we created for EA Germany during the Community Health Project (March - July 2023). We were asked to share these with the wider community. We welcome others to use and adapt them and invite feedback.We created two types of documents:Public: to inform our community about our Code of Conduct, event standards, ways to report misbehaviours and ask for help, processes for evaluating and responding to reports, confidentiality, professional contact points, and Awareness Workshop results.Internal: description of the role and responsibility of the Community Health Contact, their interaction with the Equal Opportunities Officer of our association, and handover processes.Why we need thisLast week, Ninas, a new group organizer, was alerted by group member Sayat about allegations against a long-term member. Unaware of any past issues, Ninas informs Sayat, causing Sayat to feel belittled and cease work on a promising Biosafety camp. After confronting the accused and mistakenly revealing Sayat as the source, retaliation against Sayat ensues. Ninas, now shocked and believing Sayat, uncovers additional troubling stories about the long-term member after further inquiry. Ninas regrets not knowing earlier, as this knowledge could have prevented Sayat's distress and withdrawal from their project due to harassment.Imagine if Ninas, the hypothetical group organiser, had a starter pack of information about Community Health and previous incidents in their group. Imagine if they didn't botch their first conversation with Sayat and were better informed about confidentiality procedures.As part of the Community Health Project, which started in March 2023, we created a series of documents to inform our members about our offers and to document internal processes for the team. These documents exist to support us in staying in line with our vision and values. They further help identify areas for improvement and serve as the foundation for action.Setting in GermanyTo appreciate how these documents work, here is an overview of the system in and for which they're created. Other communities will have to adapt the processes for their own structure, culture and laws.Our structure: The German EA community has 27 active local groups. EA Germany is organised as a membership association Effektiver Altruismus Deutschland (EAD) e.V. with over 100 members. We currently have a team of five employees, some full-time and some part-time. The association also elects an Equal Opportunities Officer who checks our processes and important documents for discrimination. They also provided us with feedback on these documents.The resourcesPart of our strategy for 2023 is to provide a trained Community Health Contact, implement standards and offer documents for Community Health. With the support of colleagues, the Equal opportunities officer, CEA's Community Health team and experts, I assessed our options to create a safer community. This assessment is ongoing. Here is the summary of what needed to be done and the results so far:Public documentsCommunity Health ContactMental Health First Aid training to qualify me for this role in our teamInformed our community about this offer; also in GermanAnonymous contact forms established; also in GermanCode of ConductWe added avenues of reporting violations and responses to reports, an alcohol and drug policy and anonymous contact forms.During our annual meeting at the start of June, we asked the memb...

]]>
Milena Canzler https://forum.effectivealtruism.org/posts/tk2YWPKNqeCDWData/ea-germany-community-health-documents-and-processes Wed, 30 Aug 2023 21:11:00 +0000 EA - EA Germany Community Health Documents & Processes by Milena Canzler Milena Canzler https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:44 no full 6988
gYoB8vZcPGcL3cAKH EA - Language models surprised us by Ajeya Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Language models surprised us, published by Ajeya on August 30, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.Most experts were surprised by progress in language models in 2022 and 2023. There may be more surprises ahead, so experts should register their forecasts now about 2024 and 2025.Kelsey Piper co-drafted this post. Thanks also to Isabel Juniewicz for research help.If you read media coverage of ChatGPT - which called it 'breathtaking', 'dazzling', 'astounding' - you'd get the sense that large language models (LLMs) took the world completely by surprise. Is that impression accurate?Actually, yes. There are a few different ways to attempt to measure the question "Were experts surprised by the pace of LLM progress?" but they broadly point to the same answer: ML researchers, superforecasters, and most others were all surprised by the progress in large language models in 2022 and 2023.Competitions to forecast difficult ML benchmarksML benchmarks are sets of problems which can be objectively graded, allowing relatively precise comparison across different models. We have data from forecasting competitions done in 2021 and 2022 on two of the most comprehensive and difficult ML benchmarks: the MMLU benchmark and the MATH benchmark.First, what are these benchmarks?The MMLU dataset consists of multiple choice questions in a variety of subjects collected from sources like GRE practice tests and AP tests. It was intended to test subject matter knowledge in a wide variety of professional domains. MMLU questions are legitimately quite difficult: the average person would probably struggle to solve them.At the time of its introduction in September 2020, most models only performed close to random chance on MMLU (~25%), while GPT-3 performed significantly better than chance at 44%. The benchmark was designed to be harder than any that had come before it, and the authors described their motivation as closing the gap between performance on benchmarks and "true language understanding":Natural Language Processing (NLP) models have achieved superhuman performance on a number of recently proposed benchmarks. However, these models are still well below human level performance for language understanding as a whole, suggesting a disconnect between our benchmarks and the actual capabilities of these models.Meanwhile, the MATH dataset consists of free-response questions taken from math contests aimed at the best high school math students in the country. Most college-educated adults would get well under half of these problems right (the authors used computer science undergraduates as human subjects, and their performance ranged from 40% to 90%).At the time of its introduction in January 2021, the best model achieved only about ~7% accuracy on MATH. The authors say:We find that accuracy remains low even for the best models. Furthermore, unlike for most other text-based datasets, we find that accuracy is increasing very slowly with model size. If trends continue, then we will need algorithmic improvements, rather than just scale, to make substantial progress on MATH.So, these are both hard benchmarks - the problems are difficult for humans, the best models got low performance when the benchmarks were introduced, and the authors seemed to imply it would take a while for performance to get really good.In mid-2021, ML professor Jacob Steinhardt ran a contest with superforecasters at Hypermind to predict progress on MATH and MMLU. Superforecasters massively undershot reality in both cases.They predicted that performance on MMLU would improve moderately from 44% in 2021 to 57% by June 2022. The actual performance was 68%, which s...

]]>
Ajeya https://forum.effectivealtruism.org/posts/gYoB8vZcPGcL3cAKH/language-models-surprised-us Wed, 30 Aug 2023 05:11:42 +0000 EA - Language models surprised us by Ajeya Ajeya https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:22 no full 6984
PAco5oG579k2qzrh9 EA - LTFF and EAIF are unusually funding-constrained right now by Linch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LTFF and EAIF are unusually funding-constrained right now, published by Linch on August 30, 2023 on The Effective Altruism Forum.SummaryEA Funds aims to empower thoughtful individuals and small groups to carry out altruistically impactful projects - in particular, enabling and accelerating small/medium-sized projects (with grants <$300K). We are looking to increase our level of independence from other actors within the EA and longtermist funding landscape and are seeking to raise ~$2.7M for the Long-Term Future Fund and ~$1.7M for the EA Infrastructure Fund (~$4.4M total) over the next six months.Why donate to EA Funds? EA Funds is the largest funder of small projects in the longtermist and EA infrastructure spaces, and has had a solid operational track record of giving out hundreds of high-quality grants a year to individuals and small projects. We believe that we're well-placed to fill the role of a significant independent grantmaker, because of a combination of our track record, our historical role in this position, and the quality of our fund managers.Why now? We think now is an unusually good time to donate to us, as a) we have an unexpectedly large funding shortage, b) there are great projects on the margin that we can't currently fund, and c) more stabilized funding now can give us time to try to find large individual and institutional donors to cover future funding needs.Importantly, Open Philanthropy is no longer providing a guaranteed amount of funding to us and instead will move over to a (temporary) model of matching our funds 2:1 ($2 from them for every $1 from you, up to 3.5M from them per fund).Where to donate: If you're interested, you can donate to either Long-Term Future Fund (LTFF) or EA Infrastructure Fund (EAIF) here.Some relevant quotes from fund managers:Oliver HabrykaI think the next $1.3M in donations to the LTFF (430k pre-matching) are among the best historical grant opportunities in the time that I have been active as a grantmaker. If you are undecided between donating to us right now vs. December, my sense is now is substantially better, since I expect more and larger funders to step in by then, while we have a substantial number of time-sensitive opportunities right now that will likely go unfunded.I myself have a bunch of reservations about the LTFF and am unsure about its future trajectory, and so haven't been fundraising publicly, and I am honestly unsure about the value of more than ~$2M, but my sense is that we have a bunch of grants in the pipeline right now that are blocked on lack of funding that I can evaluate pretty directly, and that those seem like quite solid funding opportunities to me (some of this is caused by a large number of participants of the SERI MATS program applying for funding to continue the research they started during the program, and those applications are both highly time-sensitive and of higher-than-usual quality).Lawrence Chan"My main takeaway from [evaluating a batch of AI safety applications on LTFF] is [LTFF] could sure use an extra $2-3m in funding, I want to fund like, 1/3-1/2 of the projects I looked at." (At the current level of funding, we're on track to fund a much lower proportion).Related linksEA Funds organizational update: Open Philanthropy matching and distancingLong-Term Future Fund: April 2023 grant recommendationsWhat Does a Marginal Grant at LTFF Look Like?Asya Bergal's Reflections on my time on the Long-Term Future FundLinch Zhang's Select examples of adverse selection in longtermist grantmakingOur VisionWe think there is a significant shortage of independent funders in the current longtermist and EA infrastructure landscape, resulting in fewer outstanding projects receiving funding than is good for the world. Currently, the primary source of funding for these projects...

]]>
Linch https://forum.effectivealtruism.org/posts/PAco5oG579k2qzrh9/ltff-and-eaif-are-unusually-funding-constrained-right-now Wed, 30 Aug 2023 00:49:01 +0000 EA - LTFF and EAIF are unusually funding-constrained right now by Linch Linch https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 30:31 no full 6982
J7nmbqcWncPMZFhGC EA - Want to make a difference on policy and governance? Become an expert in something specific and boring by ASB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Want to make a difference on policy and governance? Become an expert in something specific and boring, published by ASB on August 31, 2023 on The Effective Altruism Forum.I sometimes get a vibe that many people trying to ambitiously do good in the world (including EAs) are misguided about what doing successful policy/governance work looks like. An exaggerated caricature would be activities like: dreaming up novel UN structures, spending time in abstract game theory and 'strategy spirals', and sweeping analysis of historical case studies.Instead, people that want to make the world safer with policy/governance should become experts on very specific and boring topics. One of the most successful people I've met in biosecurity got their start by getting really good at analyzing obscure government budgets.Here are some crowdsourced example areas I would love to see more people become experts in:Legal liability - obviously relevant to biosecurity and AI safety, and I'm especially interested in how liability law would handle spreading infohazards (e.g. if a bio lab publishes a virus sequence that is then used for bioterrorism, or if an LLM is used maliciously in a similar way).Privacy / data protection laws - could be an important lever for regulating dangerous technologies.Executive powers for regulation - what can and can't the executive actually do to get AI labs to adhere to voluntary security standards, or get DNA synthesis appropriately monitored?Large, regularly reauthorized bills (e.g., NDAA, PAHPA, IAA) and ways in which they could be bolstered for biosecurity and AI safety (both in terms of content and process).How companies validate customers, e.g., for export control or FSAP reasons (know-your-customer), and the statutes and technologies around this.How are legal restrictions on possessing or creating certain materials justified/implemented e.g. Chemical Weapons Convention, narcotics, Toxic Substances Control Act?The efficacy of tamper-proof and tamper-evident technology (e.g. in voting machines, anti-counterfeiting printers)Biochemical supply chains - which countries make which reagents, and how are they affected by export controls and other trade policies?Consumer protection laws and their application to emerging tech risks (e.g. how do product recalls work? Could they apply to benchtop DNA synthesizers or LLMs?)Patent law - can companies patent dangerous technology in order to prevent others from developing or misusing it?How do regulations on 3d-printed firearms work?The specifics of congressional appropriations, federal funding, and procurement: what sorts of things does the government purchase, how does this relate to biotech or AI (software)? Related to this, becoming an expert on the Strategic National Stockpile and understanding the mechanisms of how a vendor managed inventory could work.A few caveats. First, I spent like 30 minutes writing this list (and crowdsourced heavily from others). Some of these topics are going to be dead ends. Still, I'd be more excited about somebody pursuing one of these concrete, specific dead ends and getting real feedback from the world (and then pivoting), rather than trying to do broad strategy work and risk ending up in a never-ending strategy spiral. Moreover, the most impactful topics are probably not on this list and will be discovered by somebody who got deep into the weeds of something obscure.For those of you that are trying to do good with an EA mindset, this also means getting out of the EA bubble and spending lots of time with established experts in these relevant fields. Every so often, I'll get the chance to collect biosecurity ideas and send them to interested people in DC. In order to be helpful, these ideas need to be super specific, e.g. this specific agency needs to task this other subag...

]]>
ASB https://forum.effectivealtruism.org/posts/J7nmbqcWncPMZFhGC/want-to-make-a-difference-on-policy-and-governance-become-an Thu, 31 Aug 2023 23:18:31 +0000 EA - Want to make a difference on policy and governance? Become an expert in something specific and boring by ASB ASB https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:24 no full 7001
JaACyD72oacQB9WDn EA - More than Earth Warriors: The Diverse Roles of Geoscientists in Effective Altruism by chanakin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More than Earth Warriors: The Diverse Roles of Geoscientists in Effective Altruism, published by chanakin on August 31, 2023 on The Effective Altruism Forum.A global community of and for Effective Geoscientiststl;drWe, a small group of geoscientists want to gather a supportive community for knowledge sharing, collaboration, and placing geoscientists EAs in high impact roles.The use of geospatial data and spatial analysis is underrated. Four out of eight top charities from GiveWell have used geospatial data as part of their research. Particularly, we can have a significant impact in catastrophe resilience.We would like to advocate for more researchers and effective charities to consider including a geospatial dimension when conducting their studies.Currently, we will gather on channel role-geoscientists on EA Anywhere slack. Please join us if you are interested.We listed a few distilled topics that geoscientists EA might be interested in.AcknowledgementThe completion of this post would not have been possible without the extensive insight, advice, and knowledge shared by the following individuals: Prof. David Denkenberger, Ewelina Hornicka, Petya Kangalova, Leon Mayer, Dr. Kajetan Chrapkiewicz. Any mistakes or oversights in this post are solely my responsibility.Context and reason for establishmentFigure 1. Both expected impact and personal fit in our career choice reflect a power law.We want to gather an online community to bring together geoscientists of diverse disciplines. There are two motivations for starting this community. Firstly, we believe that geoscientists and geospatial data can offer value in multiple EA cause areas and intercause research. Secondly, we found that services of current EA infrastructure (e.g. High Impact Professional) did not adequately address the need for our niche speciality and comparative advantage of geoscientists. We want to inform the EA world that we geoscientists exist and our skills and domain expertise can contribute to high impact cause areas. Furthermore, we would like to advocate for more researchers to consider including a geospatial dimension when conducting their studies. Therefore, we would like to bring together geoscientists, current and future, that work in environmental anthropology, epidemiology, remote sensing, geophysics and more.We want to use this community to highlight existing geospatial research happening in the various cause areas, to support each other in interdisciplinary collaboration and place each other in high impact roles.What is geospatial data?We use geospatial data everyday, to track our movement, plan our commute. Geospatial data is any information with an addition of locality attached to it, this is usually represented by Longitude and Latitude. Geospatial data can come from many sources, from the satellite images and radar backscatter to submarine bathymetric surveys.Figure 2. Alexander von Humboldt's "A Portrait of Nature" in his famous book Der Kosmos (1849).The development of geospatial science and geospatial data emerged out of necessity. Cartography, the making of maps is the central activity of collecting and distilling geographical knowledge. Initially we made maps for navigation and exploration, then the map became a tool for managing communities, resources, and people, often involuntarily. The map room eventually became the centre of European colonialism in the Age of Discovery (Cresswell T., 2013), and the map room has remained a mainstay of today's military units' control and command. Nowadays, geospatial data are used broadly by ecologists and palaeontologists, by sailors and urban planners, by militaries and humanitarian workers alike. While people talk about the temporal component often, the spatial component often goes unnoticed. This post will attempt to demonstr...

]]>
chanakin https://forum.effectivealtruism.org/posts/JaACyD72oacQB9WDn/more-than-earth-warriors-the-diverse-roles-of-geoscientists Thu, 31 Aug 2023 21:59:43 +0000 EA - More than Earth Warriors: The Diverse Roles of Geoscientists in Effective Altruism by chanakin chanakin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 39:09 no full 7000
xu45Sq8gZ4iy9iHXa EA - Nuclear safety/security: Why doesn't EA prioritize it more? by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nuclear safety/security: Why doesn't EA prioritize it more?, published by Rockwell on August 31, 2023 on The Effective Altruism Forum.I'm dissatisfied with my explanation of why there is not more attention from EAs and EA funders on nuclear safety and security, especially relative to e.g. AI safety and biosecurity. This has come up a lot recently, especially after the release of Oppenheimer. I'm worried I'm not capturing the current state of affairs accurately and consequently not facilitating fully contextualized dialogue.What is your best short explanation?(To be clear, I know many EAs and EA funders are working on nuclear safety and security, so this is more so a question of resource allocation, rather than inclusion in the broader EA cause portfolio.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Rockwell https://forum.effectivealtruism.org/posts/xu45Sq8gZ4iy9iHXa/nuclear-safety-security-why-doesn-t-ea-prioritize-it-more Thu, 31 Aug 2023 21:14:41 +0000 EA - Nuclear safety/security: Why doesn't EA prioritize it more? by Rockwell Rockwell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:57 no full 6998
ph6wvA2EtQ7pG3yvG EA - [Linkpost] Michael Nielsen remarks on 'Oppenheimer' by Tom Barnes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Michael Nielsen remarks on 'Oppenheimer', published by Tom Barnes on August 31, 2023 on The Effective Altruism Forum.This is a linkpost to a recent blogpost from Michael Nielsen, who has previously written on EA among many other topics. This blogpost is adapted from a talk Nielsen gave to an audience working on AI before a screening of Oppenheimer. I think the full post is worth a read, but I've pulled out some quotes I find especially interesting (bolding my own)I was at a party recently, and happened to meet a senior person at a well-known AI startup in the Bay Area. They volunteered that they thought "humanity had about a 50% chance of extinction" caused by artificial intelligence. I asked why they were working at an AI startup if they believed that to be true. They told me that while they thought it was true, "in the meantime I get to have a nice house and car".[...] I often meet people who claim to sincerely believe (or at least seriously worry) that AI may cause significant damage to humanity. And yet they are also working on it, justifying it in ways that sometimes seem sincerely thought out, but which all-too-often seem self-serving or self-deceiving.Part of what makes the Manhattan Project interesting is that we can chart the arcs of moral thinking of multiple participants [...] Here are four caricatures:Klaus Fuchs and Ted Hall were two Manhattan Project physicists who took it upon themselves to commit espionage, communicating the secret of the bomb to the Soviet Union. It's difficult to know for sure, but both seem to have been deeply morally engaged and trying to do the right thing, willing to risk their lives; they also made, I strongly believe, a terrible error of judgment. I take it as a warning that caring and courage and imagination are not enough; they can, in fact, lead to very bad outcomes.Robert Wilson, the physicist who recruited Richard Feynman to the project. Wilson had thought deeply about Nazi Germany, and the capabilities of German physics and industry, and made a principled commitment to the project on that basis. He half-heartedly considered leaving when Germany surrendered, but opted to continue until the bombings in Japan. He later regretted that choice; immediately after the Trinity Test he was disconsolate, telling an exuberant Feynman: "It's a terrible thing that we made".Oppenheimer, who I believe was motivated in part by a genuine fear of the Nazis, but also in part by personal ambition and a desire for "success". It's interesting to ponder his statements after the War: while he seems to have genuinely felt a strong need to work on the bomb in the face of the Nazi threat, his comments about continuing to work up to the bombing of Hiroshima and Nagasaki contain many strained self-exculpatory statements about how you have to work on it as a scientist, that the technical problem is too sweet. It smells, to me, of someone looking for self-justification.Joseph Rotblat, the one physicist who actually left the project after it became clear the Nazis were not going to make an atomic bomb. He was threatened by the head of Los Alamos security, and falsely accused of having met with Soviet agents. In leaving he was turning his back on his most important professional peers at a crucial time in his career. Doing so must have required tremendous courage and moral imagination. Part of what makes the choice intriguing is that he himself didn't think it would make any difference to the success of the project. I know I personally find it tempting to think about such choices in abstract systems terms: "I, individually, can't change systems outcomes by refusing to participate ['it's inevitable!'], therefore it's okay to participate". And yet while that view seems reasonable, Rotblat's example shows it is incorrect.His private moral...

]]>
Tom Barnes https://forum.effectivealtruism.org/posts/ph6wvA2EtQ7pG3yvG/linkpost-michael-nielsen-remarks-on-oppenheimer Thu, 31 Aug 2023 19:41:35 +0000 EA - [Linkpost] Michael Nielsen remarks on 'Oppenheimer' by Tom Barnes Tom Barnes https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:59 no full 6997
X2w4uqjDuGPJFaHPp EA - Should I patent my cultivated meat method? by Michelle Hauser Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should I patent my cultivated meat method?, published by Michelle Hauser on August 31, 2023 on The Effective Altruism Forum.I am a third-year PhD candidate working on cultivated meat. In my research, I have developed a new and original approach to making cultivated meat that has the potential of being highly scalable and help reach economic parity, more than most other known methods that companies are working on.My supervisor has left it up to me to decide whether to patent this method or not. I have half-based beliefs that patenting would be counter-productive in helping cultivated meat reach the market sooner, but I acknowledge that I am not knowledgeable enough in IP law/structures to really know.If I do decide to patent the method, the patent will mostly belong to and be written by the Tech Transfer office at my university, which is of course for profit. I am not sure I trust them to consider my wish to make a fairly non-limiting patent. The usual course of this kind of thing is then for the tech transfer office to open a private startup to further develop the method. In any case, I intend to publish the method in a scientific journal. If we patent it, then it would be published after filing.What would be best for the field as a whole - a scenario where scientists patent their findings and then publish them, or where they just publish everything open source? Given that most scientists do patent and keep everything secret within companies, what would be best to do in my position?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Michelle Hauser https://forum.effectivealtruism.org/posts/X2w4uqjDuGPJFaHPp/should-i-patent-my-cultivated-meat-method Thu, 31 Aug 2023 13:06:40 +0000 EA - Should I patent my cultivated meat method? by Michelle Hauser Michelle Hauser https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:31 no full 6995
Bg6qxLGhsn7pQzHGX EA - Progress report on CEA's search for a new CEO by MaxDalton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Progress report on CEA's search for a new CEO, published by MaxDalton on August 31, 2023 on The Effective Altruism Forum.I wanted to give an update on the Centre for Effective Altruism (CEA)'s search for a new CEO.We (Claire Zabel, Max Dalton, and Michelle Hutchinson) were appointed by the Effective Ventures boards to lead this search and make a recommendation to the boards. The committee is advised by James Snowden, Caitlin Elizondo, and one experienced executive working outside EA.We previously announced the search and asked for community input in this post.Note, we set out searching for an Executive Director, and during the process have changed the role title to CEO because it was more legible to candidates not familiar with CEA or EV. The role scope remains unchanged.In summary, we received over 400 nominations, reached out to over 150 people, spoke to about 60, and received over 25 applications. We're still considering around 15 candidates, and are currently more deeply assessing <5 candidates who seem especially promising.ProcessOver 400 people were nominated for the roleWe invited recommendations from CEA staff, stakeholders, and the EA community via the Forum.We selected candidates for outreach according to their profile and experience, the strength of recommendation we received, and (in some cases) our pre-existing knowledge about them.We decided this approach (rather than reaching out to all nominees or having an open call for applications) becauseMany nominations were fairly speculative (for instance one person nominated over 30 people, many of whom did not have management experience: this was helpful because it generated some useful names we would otherwise not have known about, but it probably would not have been a good use of our/candidates' time to reach out to all of these people).More generally, we wanted to be careful about our time and applicants' time.We thought (based on advice from professional recruiters) that a narrower outreach process would be more likely to be attractive to our most promising candidates (who matter disproportionately).Overall our sense from talking to people with experience of this is that this selective, non-public process is fairly standard practice in executive recruitment. I think that the main reason for this is that in executive recruitment your top candidates (who are the most important ones) have a very high opportunity cost for their time, and personalized non-public outreach is a much stronger signal that it will be worth their time engaging in the process.My guess is that some of our top candidates would not have proactively submitted an application, but did engage when we reached out to them.We had a lower bar for reaching out to candidates we were less familiar with (because a small chance that they would be our top candidate justified the effort).In the end, we reached out to over 150 potential candidatesWe shared a document containing information on CEA and the role, and invited them to meet a member of the search committee (typically Max).We sometimes asked people to consider booking a call even if they weren't interested in the role, because we wanted to get their advice on our hiring process.In many cases, we tried to personalize outreach messages somewhat and use our network:When we knew the candidates, this was easySometimes we asked for introductions from mutual connectionsHowever there were some cases where we didn't personalize (e.g. if we couldn't find a good mutual connection).To give a sense of the range of candidates we reached out to, according to our very rough assessment they had:EA context~50% high (e.g. they've been working at an EA org or otherwise have lots of signs of engagement with EA ideas).~40% medium (e.g. they attended an EAG once or did 80k coaching, but n...

]]>
MaxDalton https://forum.effectivealtruism.org/posts/Bg6qxLGhsn7pQzHGX/progress-report-on-cea-s-search-for-a-new-ceo Thu, 31 Aug 2023 09:41:07 +0000 EA - Progress report on CEA's search for a new CEO by MaxDalton MaxDalton https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:56 no full 6993
o5fbhWrwEH4YyT9AY EA - New Jury Analysis of the Smithfield Piglet Rescue Trial by JLRiedi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Jury Analysis of the Smithfield Piglet Rescue Trial, published by JLRiedi on August 31, 2023 on The Effective Altruism Forum.Faunalytics analyzed transcripts from interviews with jurors of the Smithfield Foods criminal trial - in which two animal rights activists were found not guilty of "stealing" two piglets from a factory farm in Utah. This qualitative analysis will help advocates understand why jurors sided with the defense, how to potentially apply these findings to future trials, and what forms of animal activism are most convincing.Key Findings:The "not guilty" verdict hinged, in part, on the monetary value of the piglets to Smithfield, which was argued to be less than zero. The piglets required veterinary care that exceeded their value to Smithfield. The jury was initially hesitant to say the piglets had no worth because they saw them as having inherent worth as living beings, however they ultimately decided the theft charges hinged on monetary value only.The jury members believed the defendants, Wayne and Paul, did not have the intent to steal. Before their investigation of the Smithfield facility, Wayne said on video "if there's something we'll take it." The jury interpreted the "if" as meaning the two activists did not enter the facility knowing they'd have the opportunity to take piglets. However, one juror noted that if the defendants had a pattern of doing this in the past, the jury might have been more likely to find them guilty.The participants all reported being more receptive to animal advocacy and animal welfare after the trial. One participant reported that they no longer eat ham. Another reported that while they still believe that pigs are here to be eaten, as a result of the trial they now believe that pig welfare should be improved. Another was even inspired to pursue animal activism.Despite what media coverage indicates, the "right to rescue" was not a major factor in the jury's decision. Some media outlets (such as The Intercept and Democracy Now!) have characterized this trial as a test case for the "right to rescue" argument - the idea that one should be able to rescue animals, sometimes farmed animals, from distressing conditions. However, only two jurors mentioned this concept at all, and no jurors mentioned this idea as critical.BackgroundThe Smithfield Trial refers to the prosecution of two animal advocates who were charged with felony theft and burglary after they removed two piglets from a Smithfield Foods facility in Utah, United States.Wayne Hsiung and Paul Darwin Picklesimer, a co-founder and member of Direct Action Everywhere, respectively, are activists "working to achieve revolutionary social and political change for animals in one generation." In 2017, Wayne and Paul entered the Circle Four Farms facility in Milford, Utah, and removed two injured piglets (later named Lily and Lizzie). Circle Four Farms is one of the largest industrial pig processing facilities in the United States and a subsidiary of Smithfield Foods, which is the world's largest pork producer. Once rescued, the piglets were provided with veterinary care and relocated to a sanctuary where they currently reside. The removal of the piglets was filmed and posted on social media under the title "Operation Deathstar."In September 2022, Wayne and Paul went on trial in Washington County, Utah on charges of felony theft and burglary for removing the piglets. They were acquitted (i.e., found not guilty) by a jury on both counts.This trial may interest animal advocates because it provides potential guidance for future trials and investigations. Additionally, this analysis provides insight as to which pro-animal arguments are more persuasive more generally.In this study, we analyzed themes from interviews with five Smithfield Trial jury members (referred t...

]]>
JLRiedi https://forum.effectivealtruism.org/posts/o5fbhWrwEH4YyT9AY/new-jury-analysis-of-the-smithfield-piglet-rescue-trial Thu, 31 Aug 2023 09:36:24 +0000 EA - New Jury Analysis of the Smithfield Piglet Rescue Trial by JLRiedi JLRiedi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:26 no full 6994
ee8Pamunhqabucwjq EA - Long-Term Future Fund Ask Us Anything (September 2023) by Linch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Long-Term Future Fund Ask Us Anything (September 2023), published by Linch on August 31, 2023 on The Effective Altruism Forum.LTFF is running an Ask Us Anything! Most of the grantmakers at LTFF have agreed to set aside some time to answer questions on the Forum.I (Linch) will make a soft commitment to answer one round of questions this coming Monday (September 4th) and another round the Friday after (September 8th).We think that right now could be an unusually good time to donate. If you agree, you can donate to us here.About the FundThe Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas and to otherwise increase the likelihood that future generations will flourish.In 2022, we dispersed ~250 grants worth ~10 million. You can see our public grants database here.Related postsLTFF and EAIF are unusually funding-constrained right nowEA Funds organizational update: Open Philanthropy matching and distancingLong-Term Future Fund: April 2023 grant recommendationsWhat Does a Marginal Grant at LTFF Look Like?Asya Bergal's Reflections on my time on the Long-Term Future FundLinch Zhang's Select examples of adverse selection in longtermist grantmakingAbout the TeamAsya Bergal: Asya is the current chair of the Long-Term Future Fund. She also works as a Program Associate at Open Philanthropy. Previously, she worked as a researcher at AI Impacts and as a trader and software engineer for a crypto hedgefund. She's also written for the AI alignment newsletter and been a research fellow at the Centre for the Governance of AI at the Future of Humanity Institute (FHI). She has a BA in Computer Science and Engineering from MIT.Caleb Parikh: Caleb is the project lead of EA Funds. Caleb has previously worked on global priorities research as a research assistant at GPI, EA community building (as a contractor to the community health team at CEA), and global health policy.Linchuan Zhang: Linchuan (Linch) Zhang is a Senior Researcher at Rethink Priorities working on existential security research. Before joining RP, he worked on time-sensitive forecasting projects around COVID-19. Previously, he programmed for Impossible Foods and Google and has led several EA local groups.Oliver Habryka: Oliver runs Lightcone Infrastructure, whose main product is Lesswrong. Lesswrong has significantly influenced conversations around rationality and AGI risk, and the LWits community is often credited with having realized the importance of topics such as AGI (and AGI risk), COVID-19, existential risk and crypto much earlier than other comparable communities.You can find a list of our fund managers in our request for funding here.Ask Us AnythingWe're happy to answer any questions - marginal uses of money, how we approach grants, questions/critiques/concerns you have in general, what reservations you have as a potential donor or applicant, etc.There's no real deadline for questions, but let's say we have a soft commitment to focus on questions asked on or before September 8th.Because we're unusually funding-constrained right now, I'm going to shill again for donating to us.If you have projects relevant to mitigating global catastrophic risks, you can also apply for funding here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Linch https://forum.effectivealtruism.org/posts/ee8Pamunhqabucwjq/long-term-future-fund-ask-us-anything-september-2023 Thu, 31 Aug 2023 06:50:03 +0000 EA - Long-Term Future Fund Ask Us Anything (September 2023) by Linch Linch https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:34 no full 6991
6yq9TTJKvJnrgT4E2 EA - Did Economists Really Get Africa's AIDS Epidemic "Analytically Wrong"? (A Reply) by TomDrake Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Did Economists Really Get Africa's AIDS Epidemic "Analytically Wrong"? (A Reply), published by TomDrake on August 30, 2023 on The Effective Altruism Forum.To demonstrate CGD's cherished principle of not taking organisation positions, here is a response from a couple of us in the health team to our colleague Justin Sandefur's recent(ish) blog on cost-effectiveness evidence and PEPFAR.Our concern was that readers might come away from Justin's blog thinking that cost-effectiveness evidence wasn't useful in the original PEPFAR decision and wouldn't be useful in similar decisions about major global health initiatives. We disagree and wanted to make the case for cost-effectiveness as well as addressing some of Justin's specific points along the way.A recent, thought-provoking blog by our colleague, Justin Sandefur, titled "How Economists got Africa's AIDS Epidemic Wrong", has sparked a debate about the historical role of cost-effectiveness analysis in assessing the investments of the President's Emergency Plan for AIDS Relief (PEPFAR) and, implicitly, the value of such analysis in making similar global health decisions. Justin tells the story of PEPFAR and concludes that economists that raised concerns over the cost-effectiveness of antiretroviral therapies got PEPFAR "analytically wrong", a conclusion that some readers may interpret as a reason to discard cost-effectiveness analysis for such decisions in the future. The original blog draws three lessons:Lesson #1. What persuaded the White House was evidence of feasibility and efficacy, not cost-effectivenessLesson #2. The budget constraint wasn't fixed; PEPFAR unlocked new moneyLesson #3. Prices also weren't fixed, and PEPFAR may have helped bring them downIn this blog we argue that while Justin's observations hold some truth, they do not discredit the value of cost-effectiveness analysis in decision-making. Specifically, we contend that:Because there were many feasible and effective options at the time, this was not sufficient criteria for such a large decision. It should have considered the cost-effectiveness of other options, to explore the relative impact.PEPFAR may have unlocked some new money, but it wasn't all new money, and it will have had short- and long-term opportunity costs. Moreover we cannot be certain that PEPFAR was uniquely able to increase available funding. Thus the decision could have considered cost-effectiveness analysis to reveal likely trade-offs.Price reductions could have been analytically explored for PEPFAR and for alternative options as part of cost-effectiveness analysis during decision-making.The bigger lesson, we conclude, is that when the next PEPFAR-sized decision happens, our systems and their stakeholders must strive for higher standards, embracing analysis that models a range of good options and assesses them against key criteria. Cost-effectiveness analysis is a necessary component of this, but it is not sufficient, and additional analysis and scenarios should be considered through a deliberative process, before settling on a final decision.Below we offer reflections on each of Justin's three lessons, in order, then draw out the overall conclusions.Response 1: Feasibility and efficacy are not enoughJustin uses an analogy of giving to a homeless person to invite the reader to agree that cost is not really the relevant issue when considering whether to do a good deed. True enough, if something can be considered not effective or not feasible then it's a non-starter and we don't need to trouble ourselves over cost or cost-effectiveness. But when there are multiple feasible and effective options with different levels of effectiveness and cost, understanding which does the most good for the money is absolutely worth knowing. Indeed we agree that there is a moral imperative to...

]]>
TomDrake https://forum.effectivealtruism.org/posts/6yq9TTJKvJnrgT4E2/did-economists-really-get-africa-s-aids-epidemic Wed, 30 Aug 2023 21:40:14 +0000 EA - Did Economists Really Get Africa's AIDS Epidemic "Analytically Wrong"? (A Reply) by TomDrake TomDrake https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:04 no full 6992
myGb2CdCAgQPBRQie EA - Rethink Priorities is hiring a COO by abrahamrowe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities is hiring a COO, published by abrahamrowe on September 1, 2023 on The Effective Altruism Forum.I'm Abraham Rowe, the current Chief Operations Officer (COO) of Rethink Priorities (RP). I joined RP as its first operations hire in 2020. At the time, RP was fiscally sponsored by Rethink Charity, who provided its operational support, and was getting ready to spin out into being an independent entity.Since then, we've expanded a lot. From around 10 staff when I joined, to over 110 (including staff of fiscally sponsored projects) today. We're operating in over 15 countries, have two legal entities, and are now an operationally complex (and interesting!) project. We fiscally sponsor cool projects like Epoch, Apollo Research, and the Existential Risk Alliance Fellowship. We incubate projects like Condor Camp. We conduct high quality research in animal welfare, AI governance and strategy, global health and development, and more. We conduct public opinion polling, run coordination events for cause areas, and generally work to improve the coordination and research quality in EA and adjacent fields.I'm now leaving to pursue other projects, and RP is hiring its next COO! This is a senior leadership position at Rethink Priorities, directly contributing to organizational strategy, while overseeing multiple critical lines of work, like finance, HR, development, and communications.If you're interested in applying, check out the full job ad here, and feel free to drop any questions in the comments, or send them to careers@rethinkpriorities.org.Even if you aren't interested, please refer strong candidates to us! We will pay $5,000 to the first person who refers to us a candidate we offer this position to.Some fast facts about the role:Applications are due September 30th, 2023.This role pays $130,000 to $135,000 USD, is fully remote, and is open to candidates able to work in the USThis role manages a team of around 25 direct and indirect reports, and oversees finance, HR, legal compliance, fiscal sponsorship, development, communications, and other core non-research functions at RP.This role is a senior member of Rethink Priorities' leadership team, directly advising the Co-CEOs and shaping the future of the organization, especially as we continue to grow.Relevant (operational) background on Rethink Priorities:RP is a fully remote research organization, with 75 staff (as of writing). We also fiscally sponsor several other projects with 35 additional staff, and with summer fellowships this number has been much higher. In the next year, we could easily see the total budget and headcount that this role has to account for being over 120 people / $15M USD.We've grown really quickly over the last few years - from around 10 staff in 2020 to our current size today.We're fully remote and very international, with staff in over a dozen countries. We work highly asynchronously.RP is committed to building an inclusive, equitable, and supportive community for you to thrive and do your best work. Please don't hesitate to apply for a role regardless of your age, gender identity/expression, political identity, personal preferences, physical abilities, veteran status, neurodiversity or any other background. We provide reasonable accommodations and benefits, including flexible work schedules and locations, mental health coverage in medical benefits (as available), as well as technology budgets and professional development time that can be used, for example, to purchase assistive technology or engage in job coaching.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
abrahamrowe https://forum.effectivealtruism.org/posts/myGb2CdCAgQPBRQie/rethink-priorities-is-hiring-a-coo Fri, 01 Sep 2023 22:35:48 +0000 EA - Rethink Priorities is hiring a COO by abrahamrowe abrahamrowe https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:45 no full 7009
FHGk5uqpSsjemw4mL EA - Should UV stability of plastics be a concern for far-UVC adoption? by Sean Lawrence Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should UV stability of plastics be a concern for far-UVC adoption?, published by Sean Lawrence on September 1, 2023 on The Effective Altruism Forum.TL;DR: Some plastics degrade under exposure to UV light and I am concerned this could hamper widespread adoption of far-UVC. This post outlines the rationale for these concerns and seeks feedback from the far-UVC community on their importance.My simplified line of reasoning is:Epistemic status: this post is the culmination of spending ~5-10 hours thinking, researching, and writing up this post. I feel pretty certain that UV stability is worth at least thinking about with respect to far-UVC adoption but very uncertain about it being something that blocks far-UVC adoption. I have spent some time learning about far-UVC through discussions, reading, and preparing to interview someone in the space for a podcast but don't feel I have a very deep understanding of the space.SummaryUV light damages plastics that are not UV-stable. Many of us may have encountered this in cheap outdoor furniture whose plastic components change colour or become brittle and break easily after being left out in the sun too long. My concern is that if most of the plastic materials used in indoor materials are not UV-stable - meaning they undergo irreversible physical changes when exposed to UV light - then placing far-UVC lights indoors could cause unwanted damage to plastics and limit the demand for far-UVC.In this post, I focus on two ways by which this damage could hamper the uptake of far-UVC: consumer preferences on aesthetic effects and building regulations on physical degradation. These may not be the only ways and I'm uncertain about how concerned to be about each of them. However, I think they illustrate why the UV stability of plastics is concerning to me and why I'd like to see more research into it.Both of these concerns could result in damping the market for early adoption of far-UVC. My impression is that demand for far-UVC will be required to bring down the cost of the technology. If the price of the technology remains high, this could inhibit adoption and make far-UVC an intractable defence mechanism for pandemics and global catastrophic bio risks (GCBRs).My aim with this post is to present my rationale behind this concern and get feedback from the far-UVC community on the magnitude of this concern relative to other bottlenecks in the space.Effects of UV light on plasticsThe aesthetic and mechanical effects of UV light on plastics are two examples of why I think this could be worth spending more time thinking about the UV stability of plastics. Of the two, I'm more worried about the mechanical effects as if mechanical degradation results in blocking the installation of far-UVC then this could be a significant issue for adoption.Aesthetic effectsTL;DR: Aesthetic changes to plastics may result in consumers being unwilling to adopt far-UVC lighting.The aesthetic effects of UV radiation on non-UV-stable plastics appear to primarily be colour change - notably a yellowing of plastics. Other aesthetic effects that I'm less certain about and would ideally like to research more are cracking, stickiness, chalkiness that rubs off on contact, and texture change (eg. increased roughness). The aesthetic effects seem significant from a personal preference point of view - I wouldn't want all the plastic surfaces in my home or office to turn yellow over time.A recent study on the use of far-UVC on public transport buses simulated 6.2 years of exposure to far-UVC light and found "...that far-UVC radiation at 222 nm causes significant colour degradation in all the polymeric materials tested. The degree of color degradation varies depending on the type of polymeric material and the duration of exposure to far-UVC radiation. An obvious color di...

]]>
Sean Lawrence https://forum.effectivealtruism.org/posts/FHGk5uqpSsjemw4mL/should-uv-stability-of-plastics-be-a-concern-for-far-uvc Fri, 01 Sep 2023 15:47:39 +0000 EA - Should UV stability of plastics be a concern for far-UVC adoption? by Sean Lawrence Sean Lawrence https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:54 no full 7006
4edCygGHya4rGx6xa EA - Learning from our mistakes: how HLI plans to improve by PeterBrietbart Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning from our mistakes: how HLI plans to improve, published by PeterBrietbart on September 1, 2023 on The Effective Altruism Forum.Hi folks, in this post we'd like to describe our views as the Chair (Peter) and Director (Michael) of HLI in light of the recent conversations around HLI's work. The purpose of this post is to reflect on HLI's work and its role within the EA community in response to community member feedback, highlight what we're doing about it, and engage in further constructive dialogue on how HLI can improve moving forward.HLI hasn't always got things right. Indeed, we think there have been some noteworthy errors (quick note: our goal here isn't to delve into details but to highlight broad lessons learnt, so this isn't an exhaustive list):Most importantly, we were overconfident and defensive in communication, particularly around our 2022 giving season post.We described our recommendation for StrongMinds using language that was too strong: "We're now in a position to confidently recommend StrongMinds as the most effective way we know of to help other people with your money". We agree with feedback that this level of confidence was and is not commensurate with the strength of the evidence and the depth of our analysis.The post's original title was "Don't give well, give WELLBYs". Though this was intended in a playful manner, it was tone-deaf, and we apologise.We made mistakes in our analysis.We made a data entry error. In our meta-analysis, we recorded that Kemp et al. (2009) found a positive effect, but in fact it was a negative effect. This correction reduced our estimated 'spillover effect' for psychotherapy (the effect that someone receiving an intervention had on other people) from 53% to 38% and therefore reduced the total cost-effectiveness estimate from 9.5x cash transfers to 7.5x.We did not include standard diagnostic tests of publication bias. If we had done this, we would have decreased our confidence in the quality of the literature on psychotherapy that we were using.After receiving feedback about necessary corrections to our cost-effectiveness estimates for psychotherapy and StrongMinds, we failed to update our materials on our website in a timely manner.As a community, EA prides itself on its commitment to epistemic rigour, and we're both grateful and glad that folks will speak up to maintain high standards. We have heard these constructive critiques, and we are making changes in response.We'd like to give a short outline of what HLI is doing next and has done in order to improve its epistemic health and comms processes.We've added an "Our Blunders" page on the HLI website, which lists the errors and missteps we mentioned above. The goal of this page is to be transparent about our mistakes, and to keep us accountable to making improvements.We've added the following text to the places in our website where we discuss StrongMinds:"Our current estimation for StrongMinds is that a donation of $1,000 produces 62 WELLBYs (or 7.5 times GiveDirectly cash transfers). See our changelog. However, we have been working on an update to our analysis since July 2023 and expect to be ready by the end of 2023. This will include using new data and improving our methods. We expect our cost-effectiveness estimate will decrease by about 25% or more - although this is a prediction we are very uncertain about as the analysis is yet to be done. While we expect the cost-effectiveness of StrongMinds will decrease, we think it is unlikely that the cost-effectiveness will be lower than GiveDirectly. Donors may want to wait to make funding decisions until the updated report is finished."We have added more/higher quality controls to our work:Since the initial StrongMinds report, we've added Samuel Dupret (researcher) and Dr Ryan Dwyer (senior research...

]]>
PeterBrietbart https://forum.effectivealtruism.org/posts/4edCygGHya4rGx6xa/learning-from-our-mistakes-how-hli-plans-to-improve Fri, 01 Sep 2023 12:59:04 +0000 EA - Learning from our mistakes: how HLI plans to improve by PeterBrietbart PeterBrietbart https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:02 no full 7005
sXJkaQFFYodhEXNvr EA - Alignment & Capabilities: What's the difference? by John G. Halstead Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alignment & Capabilities: What's the difference?, published by John G. Halstead on September 1, 2023 on The Effective Altruism Forum.In the AI safety literature, AI alignment is often presented as conceptually distinct from capabilities. However, (1) the distinction seems somewhat fuzzy and (2) many techniques that are supposed to improve alignment also improve capabilities.(1) The distinction is fuzzy because one common way of defining alignment is getting an AI system to do what the programmer or user intends. However, programmers intend for systems to be capable. eg we want chess systems to win at chess. So, a system that wins more is more intent aligned, and is also more capable.(2) eg This Irving et al (2018) paper by a team at Open AI proposes debate as a way to improve safety and alignment, where alignment is defined as aligning with human goals. However, the debate also improved the accuracy of image classification in the paper, and therefore also improved capabilities.Similarly, Reinforcement learning with human feedback was initially presented as an alignment strategy, but my loose impression is that it also made significant capabilities improvements. There are many other examples in the literature of alignment strategies also improving capabilities.This makes me wonder whether alignment is actually more neglected that capabilities work. AI companies want to make aligned systems because they are more useful.How do people see the difference between alignment and capabilities?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
John G. Halstead https://forum.effectivealtruism.org/posts/sXJkaQFFYodhEXNvr/alignment-and-capabilities-what-s-the-difference Fri, 01 Sep 2023 09:57:50 +0000 EA - Alignment & Capabilities: What's the difference? by John G. Halstead John G. Halstead https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:38 no full 7004
vCYyLxxdGXbbs2iax EA - New Princeton course on longtermism by Calvin Baker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Princeton course on longtermism, published by Calvin Baker on September 2, 2023 on The Effective Altruism Forum.This semester (Fall 2023), Prof Adam Elga and I will be co-instructing Longtermism, Existential Risk, and the Future of Humanity, an upper div undergraduate philosophy seminar at Princeton. (Yes, I did shamelessly steal half of our title from The Precipice.) We are grateful for support from an Open Phil course development grant and share the reading list here for all who may be interested.Part 1: Setting the stageWeek 1: Introduction to longtermism and existential riskCoreOrd, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury. Read introduction, chapter 1, and chapter 2 (pp. 49-56 optional); chapters 4-5 optional but highly recommended.OptionalRoser (2022) "The Future is Vast: Longtermism's perspective on humanity's past, present, and future" Our World in DataKarnofsky (2021) 'This can't go on' Cold Takes (blog)Kurzgesagt (2022) "The Last Human - A Glimpse into the Far Future"Week 2: Introduction to decision theoryCoreWeisberg, J. (2021). Odds & Ends. Read chapters 8, 11, and 14.Ord, T., Hillerbrand, R., & Sandberg, A. (2010). "Probing the improbable: Methodological challenges for risks with low probabilities and high stakes." Journal of Risk Research, 13(2), 191-205. Read sections 1-2.OptionalWeisberg, J. (2021). Odds & Ends chapters 5-7 (these may be helpful background for understanding chapter 8, if you don't have much background in probability).Titelbaum, M. G. (2020) Fundamentals of Bayesian Epistemology chapters 3-4Week 3: Introduction to population ethicsCoreParfit, Derek. 1984. Reasons and Persons. Oxford: Oxford University Press. Read sections 4.16.120-23, 125, and 127 (pp. 355-64; 366-71, and 377-79).Parfit, Derek. 1986. "Overpopulation and the Quality of Life." In Applied Ethics, ed. P. Singer, 145-164. Oxford: Oxford University Press. Read sections 1-3.OptionalRemainders of Part IV of Reasons and Persons and "Overpopulation and the Quality of Life"Greaves (2017) "Population Axiology" Philosophy CompassMcMahan (2022) "Creating People and Saving People" section 1, first page of section 4, and section 8Temkin (2012) Rethinking the Good 12.2 pp. 416-17 and section 12.3 (esp. pp. 422-27)Harman (2004) "Can We Harm and Benefit in Creating?"Roberts (2019) "The Nonidentity Problem" SEPFrick (2022) "Context-Dependent Betterness and the Mere Addition Paradox"Mogensen (2019) "Staking our future: deontic long-termism and the non-identity problem" sections 4-5Week 4: Longtermism: for and againstCoreGreaves, Hilary and William MacAskill. 2021. "The Case for Strong Longtermism." Global Priorities Institute Working Paper No.5-2021. Read sections 1-6 and 9.Curran, Emma J. 2023. "Longtermism and the Complaints of Future People". Forthcoming in Essays on Longtermism, ed. H. Greaves, J. Barrett, and D. Thorstad. Oxford: OUP. Read section 1.OptionalThorstad (2023) "High risk, low reward: A challenge to the astronomical value of existential risk mitigation." Focus on sections 1-3.Curran, E. J. (2022). "Longtermism, Aggregation, and Catastrophic Risk" (GPI Working Paper 18-2022). Global Priorities Institute.Beckstead (2013) "On the Overwhelming Importance of Shaping the Far Future" Chapter 3"Toby Ord on why the long-term future of humanity matters more than anything else, and what we should do about it" 80,000 Hours podcastFrick (2015) "Contractualism and Social Risk" sections 7-8Part 2: Philosophical problemsWeek 5: FanaticismCoreBostrom, N. (2009). "Pascal's mugging." Analysis, 69 (3): 443-445.Russell, J. S. "On two arguments for fanaticism." Noûs, forthcoming. Read sections 1, 2.1, and 2.2.Temkin, L. S. (2022). "How Expected Utility Theory Can Drive Us Off the Rails." In L. S. ...

]]>
Calvin_Baker https://forum.effectivealtruism.org/posts/vCYyLxxdGXbbs2iax/new-princeton-course-on-longtermism Sat, 02 Sep 2023 13:31:32 +0000 EA - New Princeton course on longtermism by Calvin Baker Calvin_Baker https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:21 no full 7010
bf4bHGyRByTQnqCRC EA - Please share with lawyers you know: Legal Impact for Chickens seeks litigator. by alene Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please share with lawyers you know: Legal Impact for Chickens seeks litigator., published by alene on September 3, 2023 on The Effective Altruism Forum.Hi EAs!Thank you to everyone here who has supported Legal Impact for Chickens in various ways.Legal Impact for Chickens is hiring again! Please forward widely!Legal Impact for Chickens seeks a litigator.We're looking for the next attorney to help build our nonprofit and fight for animals.Want to join our team on the ground floor?About us:Legal Impact for Chickens (LIC) is a 501(c)(3) litigation nonprofit. We work to protect farmed animals.You may have seen our Costco shareholder derivative suit in The Washington Post, Fox Business, or CNN Business - or even on TikTok.Now, we're looking for our next hire: an entrepreneurial litigator to help fight for animals!About you:Licensed and in good standing with your state bar3+ years of litigation experience preferredExcellent analytical, writing, and verbal-communication skillsZealous, creative, enthusiastic litigatorPassion for helping farmed animalsInterest in entering a startup nonprofit on the ground floor, and helping to build somethingWillingness to do all types of nonprofit startup work, beyond just litigationStrong work ethic and initiativeLove of learningKind to our fellow humans, and excited about creating a welcoming, inclusive teamAbout the role:You will be an integral part of LIC. You'll help shape our organization's future.Your role will be a combination of (1) designing and pursuing creative impact litigation for animals, and (2) helping with everything else we need to do, to run this new nonprofit!Since this is such a small organization, you'll wear many hats: Sometimes you may wear a law-firm partner's hat, making litigation strategy decisions or covering a hearing on your own. Sometimes you'll wear an associate's hat, analyzing complex and novel legal issues. Sometimes you'll pitch in on administrative tasks, making sure a brief gets filed properly or formatting a table of authorities. Sometimes you'll wear a start-up founder's hat, helping plan the number of employees we need, or representing LIC at conferences. We can only promise it won't be dull!This job offers tremendous opportunity for advancement, in the form of helping to lead LIC as we grow. The hope is for you to become an indispensable, long-time member of our new team.Commitment: Full timeLocation and travel: This is a remote, U.S.-based position. You must be available to travel for work as needed, since we will litigate all over the country.Reports to: Alene Anello, LIC's presidentSalary: $80,000-$100,000 depending on experienceBenefits: Health insurance, 401(k), flexible schedule, unlimited PTO (plus mandatory vacation!)One more thing!LIC is an equal opportunity employer. Women and people of color are strongly encouraged to apply. Applicants will receive consideration without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, ancestry, citizenship status, disability, age, medical condition, veteran status, marital status, political affiliation, or any other protected characteristic.To Apply:To apply, please email your cover letter, resume, writing sample, and three references, all combined as one PDF, to info@legalimpactforchickens.org.Thank you for your time and your compassion!Sincerely, Legal Impact for ChickensThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
alene https://forum.effectivealtruism.org/posts/bf4bHGyRByTQnqCRC/please-share-with-lawyers-you-know-legal-impact-for-chickens Sun, 03 Sep 2023 19:20:53 +0000 EA - Please share with lawyers you know: Legal Impact for Chickens seeks litigator. by alene alene https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:45 no full 7017
buFyakASucJnrZj7X EA - The Lives We Can Save by Omnizoid Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Lives We Can Save, published by Omnizoid on September 3, 2023 on The Effective Altruism Forum.I work as a Resident Assistant at my college. Last year, only a few weeks into me starting, I was called at night to come help with a drunk student. I didn't actually help very much, and probably didn't have to be there. I didn't even have to write up the report at the end. At one point I went outside to let medical services into the building, but mostly I just stood in a hallway.The person in question was so drunk they couldn't move. They had puked in the bathroom and were lying in the hallway crying. They could barely talk. When Campus Safety arrived they kneeled down next to this person and helped them drink water, while asking the normal slew of questions about the person's evening.They asked this person, whose name I can't even remember, why they had been drinking so much. They said, in between hiccups and sobs, "friend doesn't want to be friend anymore."How do you describe that feeling? I don't think transcription can convey the misery and the drunkenness and the awful situation that had led to this awful situation. Someone drank so much that they could barely move, was lying curled in a hallway where all the other residents could and were watching, and was only able to muster out "friend doesn't want to be friend anymore" as they cried.Should I only care because I happened to be standing in that hallway on a late September evening? Had I remained in my room, laughing with my friends, would this person's struggle have been worth nothing?Max Alexander (this whole post is very worth reading)!It's sometimes hard to be motivated to help the world. The trip you forego, the fun you could have had with a friend, the nice things you could have bought are instead sent straight into the coffers of some charity that you've read about. It can feel sort of alienating when you think just of the number of people you have saved. Instead of thinking of numbers, think of stories. The people who make up the numbers - who make up the hundreds of thousands of lives saved by effective charities - are real, flesh-and-blood people, who matter just as much as you and I. We may not look into the gaunt faces of those who would have otherwise starved to death, we may not see their suffering with our eyes, but we know it is real. People are dying in ways that we can prevent. GiveWell top charities can save lives for only a few thousand dollars.It's hard to get your mind around that. I have a friend who has raised over 50,000 dollars for effective charities. 10 lives. 10 people. 10 people, many of them children, who will be able to live out a full life, rather than being snuffed out at a young age by a horrible painful disease. They will not have to lie in bed, with a fever of 105, slowly dying of malaria when they are five. They will have the chance to grow up.Who are these people? I do not know. But I can imagine their stories. I can imagine their stories because I can hear the stories of other people like this, people who are about to die. For example, on this Reddit thread, you can find the stories of lots of people who are about to die. Stories like these:Stage IV colon cancer here. Age 35. I'm a single mum to a 1-year-old and there is a 94% chance I'll be dead in 4 years. But there is still a wee bit of hope, so I try to hold onto that (hard to do most days). My days are filled with spending time with my baby and hoping that I live long enough that she'll remember me. She's pretty awesome and makes me laugh every day, so there is a lot of happiness in this life of mine.Reading these stories causes me to tear up. I think a lot of people have a similar response. They're so tragic - entire lives being snuffed out. The line "My days are filled with spending time with my baby and ho...

]]>
Omnizoid https://forum.effectivealtruism.org/posts/buFyakASucJnrZj7X/the-lives-we-can-save Sun, 03 Sep 2023 13:01:06 +0000 EA - The Lives We Can Save by Omnizoid Omnizoid https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:42 no full 7015
f7D4spNoAYqhFfbBz EA - Announcing the new 80,000 Hours Career Guide by Benjamin Hilton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the new 80,000 Hours Career Guide, published by Benjamin Hilton on September 4, 2023 on The Effective Altruism Forum.From 2016 to 2019, 80,000 Hours' core content was contained in our persistently popular career guide. (You may also remember it as the 80,000 Hours book: 80,000 Hours - Find a fulfilling career that does good).Today, we're re-launching that guide. Among many other changes, in the new version:We've substantially changed our recommendations on career capital.We have significantly improved and extended our article on career planning.We improved our advice on personal fit and career exploration.We added sections on why community-building, government and policy, and organisation-building careers could be high impact.We focus more on avoiding harm (in line with our updates following the collapse of FTX), and explicitly discuss Sam Bankman-Fried when talking about earning to give.We are more upfront about 80,000 Hours' focus on existential risk in particular (while also discussing a wide variety of cause areas, including global health, animal welfare, existential risk and meta-causes).We've updated the more empirical sections of the guide using more up-to-date papers and data.You can read the guide here or start with a 2-minute summary.It's also available as a printed book (you can get a free copy by signing up for our newsletter, or buy it on Amazon), audiobook, podcast series or ebook (available as a .pdf or .epub).We'd appreciate you sharing the new guide with a friend! You can send them a free copy using this link. Many of the people who've found our advice most useful in the past have found us via a friend, so we think the time you take to share it could be really worthwhile.What's in the guide?The career guide aims to cover the most important basic concepts in career planning. (If instead you'd like to see something more in-depth, see our advanced series and podcast.)The first article is about what to look for in a fulfilling job:Part 1: What makes for a dream job?The next five are about which options are most impactful for the world:Part 2: Can one person make a difference?Part 3: How to have a real positive impact in any jobPart 4: How to choose which problems to focus onPart 5: What are the world's biggest and most urgent problems?Part 6: What types of jobs help the most?The next four cover how to find the best option for you and invest in your skills:Part 7: Which jobs put you in a better position?Part 8: How to find the right career for youPart 9: How to be more successful in any jobPart 10: How to write a career planThe last two cover how to take action and launch your dream career:Part 11: How to get a jobPart 12: How community can helpAdvice (for EAs) on how to read the guideThe topics we tackle are complex, and in the past we've noticed people interpreting our advice in ways we didn't intend. Here are some points to bear in mind before diving in.We've been wrong before and we'll be wrong again. While we've spent a lot of time thinking about these issues, we still have a lot to learn. Our positions have changed over the years, and due to the nature of the questions we take on, we're rarely more than about 70% confident in our answers. You should try to strike a balance between what we think and your previous position, depending on the strength of the arguments and how much you already knew about the topic.It's extremely difficult to give universally applicable career advice. Most importantly, the option that's best for you depends a huge amount on your skills, circumstances, and the specific details of the opportunity. So, while we might highlight path A more than path B, the best opportunities in path B will often be better than the typical opportunities in path A. Moreover, your personal circumstanc...

]]>
Benjamin Hilton https://forum.effectivealtruism.org/posts/f7D4spNoAYqhFfbBz/announcing-the-new-80-000-hours-career-guide Mon, 04 Sep 2023 17:09:18 +0000 EA - Announcing the new 80,000 Hours Career Guide by Benjamin Hilton Benjamin Hilton https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:47 no full 7020
seeyvF2qGqA4pKdn5 EA - Rethinking the Liberation Pledge (Eva Hamer) by Aaron Gertler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethinking the Liberation Pledge (Eva Hamer), published by Aaron Gertler on September 3, 2023 on The Effective Altruism Forum.This article is a really interesting example of people in a prosocial movement trying radical tactics and eventually changing their minds. I'm not sure there's any one lesson here that the average Forum reader would need to learn; I'm just crossposting because I enjoyed it.See parts two and three for how the Pledge has evolved.The Failure of the Pledge and a Better Way Towards Vegan TablesIn 2015, animal advocates with Direct Action Everywhere (DxE) launched an inspired new campaign among their members. It took courage, required sacrifice, and greatly backfired. This three-part series examines what the movement learned from the Liberation Pledge, how we might energize the intention behind the Pledge in a better way, and a piece to share with friends and family to do just that.What We Learned From the Liberation PledgeHow It StartedThe Liberation Pledge was a fascinating idea and a bit of a disaster. Instead of energizing supporters' social networks to create change, as its creators intended, it often had the opposite effect- to isolate advocates from their closest relationships.In this piece, you'll learnWhat it wasWhy it was a good ideaWhy it failedThe next piece in this series suggests a better way to energize the intention behind the pledge, for animal advocates to align their actions with their values in their personal relationships.The Liberation Pledge was a three-part public pledge tolive vegan,refuse to sit at tables where animals' bodies are being eaten, andencourage others to do the same.Enthusiasts of the pledge hoped it would create a cultural stigma around eating animals similar to the stigma that has developed around smoking over recent decades. That is, even while smoking is still practiced, it is prohibited by default in public and private spaces.Before we had the Pledge, many of us felt alienated from friends and family who continued to eat animals. We were forced to choose between two options: speaking up and risking being seen as obnoxious, angry, and argumentative, or keeping the peace with painful inauthenticity, swallowing our intense discomfort at watching our loved ones eat the bodies of animals.The pledge gave us hope that there was another way: being honest with those around us while continuing to spend time with them. And, on a larger scale, we hoped that if we all joined together, we could create a world where eating meat is stigmatized: a world where someone would ask, "Does anyone mind if I get the steak?" before making an order at a restaurant (or maybe even one in which restaurants would think twice before putting someone's body on the menu).Some people took it a step further, arguing it was immoral not to take the pledge, saying, "You wouldn't sit quietly eating your vegan option while a dog or a child was being eaten, would you?" According to this view, it was our duty not to sit idly by while violence was committed in our presence.While some beautiful and inspiring stories were detailed on a Facebook group for the Pledge, it seemed to me that there were many more instances of total disaster: people experiencing huge ruptures in their oldest relationships around the Pledge while often lamenting that those they had just discarded "care more about eating dead animals than they care about me."From where I stood, the biggest effect of the Pledge was for advocates to lose relationships with family members who didn't comply. Upon taking the Pledge, a close friend at the time experienced a years-long estrangement from their family, including those who were already vegan while many others decided to skip birthdays, weddings, and holidays with family. It's possible that all of this added stigma ar...

]]>
Aaron Gertler https://forum.effectivealtruism.org/posts/seeyvF2qGqA4pKdn5/rethinking-the-liberation-pledge-eva-hamer Sun, 03 Sep 2023 23:47:28 +0000 EA - Rethinking the Liberation Pledge (Eva Hamer) by Aaron Gertler Aaron Gertler https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:21 no full 7018
GDdvdhbGfCehnoJzY EA - Strongest real-world examples supporting AI risk claims? by rosehadshar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Strongest real-world examples supporting AI risk claims?, published by rosehadshar on September 5, 2023 on The Effective Altruism Forum.[Manually cross-posted to LessWrong here.]There are some great collections of examples of things like specification gaming, goal misgeneralization, and AI improving AI. But almost all of the examples are from demos/toy environments, rather than systems which were actually deployed in the world.There are also some databases of AI incidents which include lots of real-world examples, but the examples aren't related to failures in a way that makes it easy to map them onto AI risk claims. (Probably most of them don't in any case, but I'd guess some do.)I think collecting real-world examples (particularly in a nuanced way without claiming too much of the examples) could be pretty valuable:I think it's good practice to have a transparent overview of the current state of evidenceFor many people I think real-world examples will be most convincingI expect there to be more and more real-world examples, so starting to collect them now seems goodWhat are the strongest real-world examples of AI systems doing things which might scale to AI risk claims?I'm particularly interested in whether there are any good real-world examples of:Goal misgeneralizationDeceptive alignment (answer: no, but yes to simple deception?)Specification gamingPower-seekingSelf-preservationSelf-improvementThis feeds into a project I'm working on with AI Impacts, collecting empirical evidence on various AI risk claims. There's a work-in-progress table here with the main things I'm tracking so far - additions and comments very welcome.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
rosehadshar https://forum.effectivealtruism.org/posts/GDdvdhbGfCehnoJzY/strongest-real-world-examples-supporting-ai-risk-claims Tue, 05 Sep 2023 19:29:08 +0000 EA - Strongest real-world examples supporting AI risk claims? by rosehadshar rosehadshar https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:47 no full 7031
vEcpYKoa9RWfGHyWo EA - Copenhagen Consensus Center's newest research on global poverty - we should be talking about this by alamo 2914 Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Copenhagen Consensus Center's newest research on global poverty - we should be talking about this, published by alamo 2914 on September 5, 2023 on The Effective Altruism Forum.The Copenhagen Consensus Center (CCC) is a non-profit think-tank, that has published economic cost-benefit analyses on global issues since 2004. It is headed by Bjørn Lomborg. In the chronological history of prominent effective altruists, I think he is the 2nd behind Peter Singer (but that's besides the point).The CCC has high standards of research, and has employed top economists since its inception, including Nobel Prize winners. It has previously published large reports in 2o04, 2008, 2012 & 2015. It's recommendations were usually similar to the ones in the EA movement (child nutrition, immunization, malaria, deworming etc.), but with a larger focus on economic policy.About 4 months ago, the CCC has published its Halftime for the Sustainable Development Goals 2016-2030.A book based on this report has received good reviews from the chief economist of the world bank, Nobel Prize winning economist Vernon Smith, and Bill Gates. Anyways, the report mentions some interventions that are seldom, or even never, talked about in the EA community, along with other more familiar ones.The things that I've barely/never seen EA talk about are land tenure security, e-procurement and agricultural R&D.Agricultural R&D is good for obvious reasons.Land tenure is interesting - according to CCC:Globally, 70 percent of the world's population has no access to formal land registration systems. One-in-five, or almost a billion people, consider it likely or very likely they will be evicted in the next five years.[...] When farmers know they own their land, they are more willing to make expensive investments to increase long-term productivity. They can also use their land deed as collateral to borrow money for investments like farm equipment or property expansion.[...] The researchers show that the total benefits of providing more secure urban tenure would therefore be about $160 billion, or 30-times the costs.e-procurement is also interesting:In the countries where the poorer half of the world's population lives, procurement makes up an astounding half of all government expenditure.This procurement can be made less corrupt and more effective by putting the whole system online, making it transparent. Electronic procurement or "e-procurement" lets many more companies hear about procurement offers, ensures more bids can be submitted and means governments lose less money through corruption and waste.[...] For each dollar spent, the low-income country will realize savings worth $38. For lower-middle income countries, the average savings are more than $5 billion over the first 12 years, meaning each dollar spent creates more than $300 of social benefits. This makes e-procurement one of the world's most effective policies.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
alamo 2914 https://forum.effectivealtruism.org/posts/vEcpYKoa9RWfGHyWo/copenhagen-consensus-center-s-newest-research-on-global Tue, 05 Sep 2023 11:23:04 +0000 EA - Copenhagen Consensus Center's newest research on global poverty - we should be talking about this by alamo 2914 alamo 2914 https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:01 no full 7028
XWJcAEKYmNfodjnTL EA - Re-announcing Pulse by David Moss Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Re-announcing Pulse, published by David Moss on September 5, 2023 on The Effective Altruism Forum.In September 2022, we announced that we were developing Pulse, a large and repeated US-population survey focusing on public attitudes relevant to high impact issues.This project was originally going to be supported by the FTX Future Fund and was therefore delayed while we sought alternative funding. We have now acquired alternative funding for this project for one year. However, the project will now be running on a quarterly basis, rather than monthly, to make the most efficient use of limited funds.Request for questionsAs such, we are now, again, soliciting requests for questions to include in the survey.We are particularly interested in questions which people would value being tracked across time, since this will make the most use of Pulse's nature as a quarterly survey. We will still likely include some one-off questions in Pulse (space permitting), and welcome requests of this kind, but in principle we could just include these questions in separate surveys (funding permitting).Given the lower frequency of the surveys, we now believe it is more important than ever to ensure that we include the questions which are the highest priority. Due to space constraints (data quality drops dramatically when surveys exceed a certain length), we are not able to field questions on every topic that we might wish to.At present, we plan to include questions primarily focused on:Awareness of and attitudes towards effective altruism, longtermism, and related areas (e.g. (our previous work)).Support for different cause areas or particular policies (e.g. AI)However, we are keen to get requests for other cause areas or topics.Ironically, this meant that we weren't able to run Pulse during the time of the FTX crisis, when tracking attitudes towards EA at a large scale would have been particularly useful. It also meant that Pulse wasn't running during the recent increase in public interest in AI risk. We think this is a useful illustration of why it is important to have regular surveys running in advance (and keep them running) so that we can capture changes in public attitudes due to unforeseen events. Fortunately, we do have some pre-test data on both of these topics, which we will be able to use to assess changes to some extent.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
David_Moss https://forum.effectivealtruism.org/posts/XWJcAEKYmNfodjnTL/re-announcing-pulse Tue, 05 Sep 2023 09:23:07 +0000 EA - Re-announcing Pulse by David Moss David_Moss https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:21 no full 7025
DXvgL6GjxLGqigWAq EA - The Plant-Based Universities open letter has now gone public by Oisín Considine Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Plant-Based Universities open letter has now gone public, published by Oisín Considine on September 5, 2023 on The Effective Altruism Forum.This afternoon, the Plant-Based Universities campaign published an open letter to University Vice-Chancellors, Catering Managers, and Student Union Presidents in the UK and Ireland calling on them to support the transition to 100% plant-based catering in universities.Over 860 academics, notable figures, healthcare professionals and politicians from the UK and Ireland and around the world have put their names down in support of this initiative, with more than 650 academics representing 94 universities across the globe.In addition to its direct impact, this is a low-cost intervention that has several multi-dimensional avenues for high impact through the norms and behaviours it encourages. In policy spaces, this is known as the Brussels Effect; that is, when a regulation adopted in one jurisdiction ends up setting a standard followed by many others. By influencing key educational institutions to adopt plant-based diets, we not only affect immediate communities but also send ripple effects that can shift global standards toward more ethical and sustainable choices.The Plant-Based Universities campaign is active in over 60 universities in the UK and Ireland, and is ever growing. Since the University of Stirling voted for a transition to fully plant-based catering at all university restaurants and cafes in November of last year, 6 more universities have followed with successes.This is undoubtedly a massive opportunity to kickstart positive change at a large institutional scale with minimal cost or risk involved. Now that the open letter has been released to the public, I would like to invite anyone who knows of any academics, philanthropists, board members of EA-aligned orgs or well-known figures (also politicians, healthcare professionals) who would be interested in signing to share the open letter with them (If you fit into the above criteria yourself, it would be fantastic if you could sign it!). In order to sign the open letter, you just need to email your name, title, role and organisation/institution to info@plantbaseduniversities.org. So far only a small handful of notable figures in the EA community have put their names down, but I believe there would be a large market for support for this campaign in the EA sphere and I want to make the most out of it.I also know that many people who visit the EA Forum are themselves university students. If you are a student and are interested in starting a campaign in your university, you can fill out this brief form and they will get in touch with you. From talking to people who successfully campaigned at their university, you really only need somewhere between 3 and 7 committed students for this, and they provide multiple different online (and in-person if in the UK) training sessions throughout the year (all free of course) along with lots of other useful information and resources.The primary motivation of this campaign may be for universities to limit their contribution to climate change and to shift public opinion in favour of a plant-based food system, but as you're probably well aware if you're reading this on the EA Forum, there are simply so many positive effects of a plant-based food system other than just climate change mitigation. (In fact, you could say this one stone has the potential to kill so many figurative birds, it might even be counterproductive in the end!)I believe this campaign offers a potent way to align our institutions with values that benefit all. Engagement and critical insights can make it even more effective, so please share your thoughts in the comments.I'm in the group chat for sending the emails before the letter was released, and have been liai...

]]>
Oisín Considine https://forum.effectivealtruism.org/posts/DXvgL6GjxLGqigWAq/the-plant-based-universities-open-letter-has-now-gone-public Tue, 05 Sep 2023 04:52:30 +0000 EA - The Plant-Based Universities open letter has now gone public by Oisín Considine Oisín Considine https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:33 no full 7024
Defu3jkejb7pmLjeN EA - Nick Beckstead is leaving the Effective Ventures boards by Eli Rose Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nick Beckstead is leaving the Effective Ventures boards, published by Eli Rose on September 6, 2023 on The Effective Altruism Forum.On 23rd August, Nick Beckstead stepped down from the boards of Effective Ventures UK and Effective Ventures US.For context, EV UK and EV US host and fiscally sponsor several (mostly EA-related) projects, such as CEA, 80,000 Hours and various others (see more here).Since November 2022, Nick has been recused from all board matters related to the collapse of FTX. Over time, it became clear that Nick's recusal made it difficult for him to add sufficient value to EV and its projects for it to be worth him remaining on the boards. Nick and the other trustees felt that this was sufficient reason for Nick to step down.Nick wanted to share the following:Ever since the collapse of FTX, I've been recused from a substantial fraction of business on both boards. This has made it hard to contribute as much as I would like to as a board member, during a time where engaged board members are especially important. Since this situation may not change for a while, I think it's a good time for me to step down.I am grateful to have played a role in getting EV UK and EV US off the ground and helping them develop over the last 14 years since the launch of Giving What We Can. Projects at EV have accomplished a great deal, drawing substantial resources and attention toward addressing some of the world's most pressing problems, with impacts that are varied, large, and difficult to quantify. The people at EV are amongst the most thoughtful, generous, kind, and dedicated that I've had the pleasure to interact with. I feel very proud of all that we have accomplished together, and optimistic about the work that will continue in my absence.As a founding board member of EV UK (then called CEA), Nick played a vital role in getting EV US, EV UK and their constituent projects off the ground. For example, Nick was involved in setting up the first Giving What We Can student group and helped to hire the first full-time staff at what was then CEA. We are very grateful to Nick for everything he's contributed to the effective altruism movement to date and look forward to his future positive impact; we wish him the best of luck with his future work.This is because the recusal affected not just decisions that were directly related to the collapse of FTX, but also many other decisions for which the way EV UK and EV US have been affected by the collapse of FTX was important context.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Eli Rose https://forum.effectivealtruism.org/posts/Defu3jkejb7pmLjeN/nick-beckstead-is-leaving-the-effective-ventures-boards Wed, 06 Sep 2023 19:21:19 +0000 EA - Nick Beckstead is leaving the Effective Ventures boards by Eli Rose Eli Rose https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:30 no full 7038
xgE4uxy5EPRX2wFr2 EA - An overview of market shaping in global health: Landscape, new developments, and gaps by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An overview of market shaping in global health: Landscape, new developments, and gaps, published by Rethink Priorities on September 6, 2023 on The Effective Altruism Forum.Editorial noteThis report is a "shallow" investigation, as described here, and was commissioned by GiveWell and produced by Rethink Priorities from February to April 2023. We revised this report for publication. GiveWell does not necessarily endorse our conclusions, nor do the organizations represented by those who were interviewed.The primary focus of the report is to provide an overview of market shaping in global health. We describe how market shaping is typically used, its recent track record, and ongoing gaps in its implementation. We also spotlight two specific market shaping approaches (pooled procurement and subscription models). Our research involved reviewing the scientific and gray literature and speaking to five experts.We don't intend this report to be Rethink Priorities' final word on market shaping, and we have tried to flag major sources of uncertainty in the report. We hope this report galvanizes a productive conversation within the global health and development community about the role of market shaping in improving global health. We are open to revising our views as more information is uncovered.Key takeawaysMarket shaping - in the context of global health - comprises interventions to create well-functioning markets through improving specific market outcomes (e.g., availability of products) with the end goal of improving public health. Market shaping interventions tend to be catalytic, timebound, and have a strong focus on influencing buyer and supplier interactions. [more]Market shaping interventions are used to address various market shortcomings. A commonly used framework to assess shortcomings in various market characteristics is some variation of the "five As": affordability, availability, assured quality, appropriate design, and awareness. [more]There is no commonly agreed upon set of interventions under the term of market shaping, but they can be broadly categorized by the main type of lever they use: reduce transaction costs (e.g., pooled procurement), increase market information (e.g., strategic demand forecasting), balance supplier and buyer risks (e.g., advance market commitments). [more]New developments have been taking place in the field in recent years: (1) New intervention types have been devised and implemented (e.g., ceiling price agreements); (2) there has been a drive toward institutionalization with the launch of several new organizations whose sole policy instrument focus is market shaping (e.g., MedAccess); (3) there is an increase in co-ownership with national governments in low- and middle-income countries (LMICs); (4) the field is increasingly experiencing diminishing returns as most of the "low-hanging fruits" have been picked, and projects are getting more complex with narrower indications and smaller health impacts. [more]Market shaping has recently seen both wins and disappointments. Recent wins include: (1) Results for Development's (R4D) amoxicillin dispersible tablets (amox DT) program; (2) ceiling price agreements for optimized antiretroviral (ARV) regimens; (3) a ceiling price agreement for HIV self test; (4) significant price reductions in vaccines achieved by Gavi. Recent disappointments include: (1) the continued price instability of malaria ACTs; (2) the failure of a uterotonic agent to be registered in Kenya; (3) the sole supplier of malaria rapid diagnostic tests (mRDTs) threatening to leave the market due to unsustainably affordable prices; (4) a tuberculosis (TB) drug in Brazil not being procured. [more]We describe three case studies of recent market shaping activities:The Affordable Medicines Facility - malaria (AMFm) was lau...

]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/xgE4uxy5EPRX2wFr2/an-overview-of-market-shaping-in-global-health-landscape-new Wed, 06 Sep 2023 18:27:51 +0000 EA - An overview of market shaping in global health: Landscape, new developments, and gaps by Rethink Priorities Rethink Priorities https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:52 no full 7036
6Z7SShHa5fnadLm3G EA - Promoting Safety in EA: Practical Steps to Prevent Sexual Harassment & Code of Conduct for Your Use by Michal Greidinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Promoting Safety in EA: Practical Steps to Prevent Sexual Harassment & Code of Conduct for Your Use, published by Michal Greidinger on September 6, 2023 on The Effective Altruism Forum.EA communities tend to be a professional space and a social space, all at once. This reality can often involve complex power dynamics that require careful attention. To maintain safe spaces, it is crucial to establish guiding rules for our communities. In this post, I would like to share EA Israel's concise code of conduct for preventing sexual harassment, which is based on Israeli law. Before diving into that, we suggest some practical steps you can take to promote the safety of your community.What can you do to prevent sexual harassment in your EA community?As a community managerDesignate a responsible person, either a team member or volunteer, dedicated to sexual harassment prevention and community health in your community. This person should be committed to this subject, ideally with some professional experience in dealing with emotionally challenging situation. Nominate another person, preferably from a different gender, as a contact person when the main representative is unavailable / connected to the harassment case / just in case someone prefers to speak with someone else.Ensure they receive professional training, including emotional and legal aspects, that they are able to commit to this role for a long-enough duration, and that they establish a code of conduct against sexual harassment.As a community health and preventing sexual harassment representativeSet a code of conduct to prevent sexual harassment and publish it widely, so that every old and new community member encounters it at some point. Clearly identify yourself as the primary point of contact for addressing harassment issues, and make it super-easy to contact you.As a community memberAsk your community manager to nominate a responsible person for sexual harassment prevention and community health. Ask for the adoption of a code of conduct, such as the one shared in this post, or encourage the development of a customized code of conduct that suits your community's needs and fits the local law.You, as an EA worker or a community member, can help to create an environment where everyone feels respected and protected.EA Israel's Code of Conduct for preventing sexual harassmentThe rest of this section is EA Israel's short version code of conduct, based on the Israeli law.Effective Altruism Israel strives to create a sense of safety and comfort among its staff and community. Sexual harassment is strictly against the organization's policy.Prohibited acts according to the lawThreatening or coercing a person to engage in acts of a sexual nature.Indecent acts / lascivious behaviour.Repeated proposals of a sexually oriented nature, even if the person to whom the proposals are directed has shown a lack of interest in them. However, there is no need to demonstrate a "lack of interest" in a relationship of authority with an inherent power imbalance, or in a dependent, educational or therapeutic relationship with a minor, a patient, or someone defenseless in other ways.Persistent references focused on a person's sex or sexuality, even if the person to whom the behavior is directed has shown disinterest. There is no need to demonstrate "lack of interest" in cases mentioned in paragraph (3) above.Derogatory or demeaning treatment towards a person regarding their sex or sexuality, including their sexual orientation, whether or not they have expressed that it bothers them.Publishing a photograph, film, or recording that focuses on a person's sexuality, in circumstances where the publication may degrade or demean the person, and consent for publication has not been given.Prohibited retaliation: Harming a person as a...

]]>
Michal Greidinger https://forum.effectivealtruism.org/posts/6Z7SShHa5fnadLm3G/promoting-safety-in-ea-practical-steps-to-prevent-sexual Wed, 06 Sep 2023 08:23:42 +0000 EA - Promoting Safety in EA: Practical Steps to Prevent Sexual Harassment & Code of Conduct for Your Use by Michal Greidinger Michal Greidinger https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:44 no full 7034
an6LWoQCNJZTcqgu8 EA - Whose actions are you thankful for? by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Whose actions are you thankful for?, published by Nathan Young on September 7, 2023 on The Effective Altruism Forum.I think the other side of criticism is support.What have you been thankful for in the last 3 months.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Nathan Young https://forum.effectivealtruism.org/posts/an6LWoQCNJZTcqgu8/whose-actions-are-you-thankful-for Thu, 07 Sep 2023 21:35:17 +0000 EA - Whose actions are you thankful for? by Nathan Young Nathan Young https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:26 no full 7046
su24dN2aXYjY7XDZZ EA - MHFC Fall Grants Round by wtroy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MHFC Fall Grants Round, published by wtroy on September 7, 2023 on The Effective Altruism Forum.The Mental Health Funding Circle is holding our fall grants round!We are a group of funders seeking to fund the most impactful mental health projects, and we very much encourage you to apply.Our scope is quite wide, and we would consider many projects related to the cause of mental health. In the past we have funded:Meta research on mental health giving prioritiesTargeted research on intervention effectiveness and data on LMIC mental healthEffective global mental health interventions such as task-shifting, stepped care or self-help guidesMental health for the EA communityApplications are due on October 1st, and final decisions will be made early-mid November.For more information on the MHFC, visit our website.For a list of previous grants, see our Updates page.To apply, complete this application by October 1st.The Mental Health Funding Circle is an Impactful Grantmaking funding circle, a project of Charity Entrepreneurship.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
wtroy https://forum.effectivealtruism.org/posts/su24dN2aXYjY7XDZZ/mhfc-fall-grants-round Thu, 07 Sep 2023 18:48:11 +0000 EA - MHFC Fall Grants Round by wtroy wtroy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:13 no full 7044
kaTRzN6QxrQvXLWx4 EA - CEARCH's Cause Exploration Contest: Awards by Joel Tan (CEARCH) Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEARCH's Cause Exploration Contest: Awards, published by Joel Tan (CEARCH) on September 7, 2023 on The Effective Altruism Forum.CEARCH ran our Cause Exploration Contest over the month of July, as part of our search for (a) potentially impactful causes as well as (b) useful methodologies to search for new causes going forward. We would like to thank everyone in the EA and broader philanthropic community for participating.Winning EntriesWe are pleased to announce the following winning entries:In the category of promising cause areas: Bean soaking, submitted by Nick Laing of OneDay Health. In summary, persuading citizens in sub-Saharan Africa to soak before cooking them (and thus saving on fuel use) may have health, economic and environmental benefits; however, there are some outstanding uncertainties over tractability and why soaking is not already common practice.In the category of useful search methodologies: Brainstorming for solutions that may not have the most impact in the context of solving a single problem, but which may have significant overall impact given the benefits it brings across multiple cause areas; this was submitted by Jeroen De Ryck.The prizes are USD 300 and USD 700 for the cause and search methodology categories respectively. We will be getting in touch with the winners to send them their winnings, though we are of course happy to donate to the charities of their choice if they so prefer.Honourable MentionsWe would also light to highlight the following entries that stood out. In the category of promising causes:Modern slavery, submitted by Sam Hilton.Alexithymia, submitted by Bolek Kerous.And in the category of useful search methodologies:A list of seven methods generally focused on taking different moral, political, epistemic and metaphysical perspectives (e.g. consulting the perspective of preference satisfaction; prioritizing causes systematically overlooked by human biases; copying ethical pioneers; consulting non-standard cosmology; consulting different political values; considering ideas that have gone out of fashion; and researching utopia building); this was submitted by David Mears, with input from Amber Dawn Ace.Consulting J-PAL's existing list of RCTed interventions, with the idea being that at lower levels of granularity, we can focus on very targeted interventions that may be very cost-effective but not generally applicable; this was submitted by Sophia Moss.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Joel Tan (CEARCH) https://forum.effectivealtruism.org/posts/kaTRzN6QxrQvXLWx4/cearch-s-cause-exploration-contest-awards Thu, 07 Sep 2023 16:43:35 +0000 EA - CEARCH's Cause Exploration Contest: Awards by Joel Tan (CEARCH) Joel Tan (CEARCH) https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:29 no full 7043
tumiQumHi6crL8Mnv EA - Why we didn't get a malaria vaccine sooner, and what we can do better next time by salonium Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why we didn't get a malaria vaccine sooner, and what we can do better next time, published by salonium on September 7, 2023 on The Effective Altruism Forum.This is a long (>9000 word) essay written by myself (Saloni Dattani), Rachel Glennerster and Siddhartha Haria for Works in Progress.Over half a million people die from malaria each year, but it took 141 years to develop a vaccine for it.One fundamental reason for this was the scientific complexity of the pathogen - malaria is caused by a parasite, not a virus or bacteria. But another, repeated obstacle was a lack of financial incentive and urgency.In this piece, which includes a lot of data and charts, we tell the story of how the malaria vaccine was developed, why the financial market for the vaccine was missing, and how it could have been sped up with smarter incentives and market mechanisms, like Advance Market Commitments.About the authors:Saloni Dattani - I'm a researcher on global health at Our World in Data and a founding editor of Works in ProgressRachel Glennerster is associate professor of Economics at the University of Chicago. She was previously chief economist at the UK Foreign Commonwealth and Development Office and the Department for International Development and a key figure behind 'Deworm the World'.Siddhartha Haria is policy lead at the Development Innovation Lab.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
salonium https://forum.effectivealtruism.org/posts/tumiQumHi6crL8Mnv/why-we-didn-t-get-a-malaria-vaccine-sooner-and-what-we-can 9000 word) essay written by myself (Saloni Dattani), Rachel Glennerster and Siddhartha Haria for Works in Progress.Over half a million people die from malaria each year, but it took 141 years to develop a vaccine for it.One fundamental reason for this was the scientific complexity of the pathogen - malaria is caused by a parasite, not a virus or bacteria. But another, repeated obstacle was a lack of financial incentive and urgency.In this piece, which includes a lot of data and charts, we tell the story of how the malaria vaccine was developed, why the financial market for the vaccine was missing, and how it could have been sped up with smarter incentives and market mechanisms, like Advance Market Commitments.About the authors:Saloni Dattani - I'm a researcher on global health at Our World in Data and a founding editor of Works in ProgressRachel Glennerster is associate professor of Economics at the University of Chicago. She was previously chief economist at the UK Foreign Commonwealth and Development Office and the Department for International Development and a key figure behind 'Deworm the World'.Siddhartha Haria is policy lead at the Development Innovation Lab.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Thu, 07 Sep 2023 16:39:14 +0000 EA - Why we didn't get a malaria vaccine sooner, and what we can do better next time by salonium 9000 word) essay written by myself (Saloni Dattani), Rachel Glennerster and Siddhartha Haria for Works in Progress.Over half a million people die from malaria each year, but it took 141 years to develop a vaccine for it.One fundamental reason for this was the scientific complexity of the pathogen - malaria is caused by a parasite, not a virus or bacteria. But another, repeated obstacle was a lack of financial incentive and urgency.In this piece, which includes a lot of data and charts, we tell the story of how the malaria vaccine was developed, why the financial market for the vaccine was missing, and how it could have been sped up with smarter incentives and market mechanisms, like Advance Market Commitments.About the authors:Saloni Dattani - I'm a researcher on global health at Our World in Data and a founding editor of Works in ProgressRachel Glennerster is associate professor of Economics at the University of Chicago. She was previously chief economist at the UK Foreign Commonwealth and Development Office and the Department for International Development and a key figure behind 'Deworm the World'.Siddhartha Haria is policy lead at the Development Innovation Lab.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> 9000 word) essay written by myself (Saloni Dattani), Rachel Glennerster and Siddhartha Haria for Works in Progress.Over half a million people die from malaria each year, but it took 141 years to develop a vaccine for it.One fundamental reason for this was the scientific complexity of the pathogen - malaria is caused by a parasite, not a virus or bacteria. But another, repeated obstacle was a lack of financial incentive and urgency.In this piece, which includes a lot of data and charts, we tell the story of how the malaria vaccine was developed, why the financial market for the vaccine was missing, and how it could have been sped up with smarter incentives and market mechanisms, like Advance Market Commitments.About the authors:Saloni Dattani - I'm a researcher on global health at Our World in Data and a founding editor of Works in ProgressRachel Glennerster is associate professor of Economics at the University of Chicago. She was previously chief economist at the UK Foreign Commonwealth and Development Office and the Department for International Development and a key figure behind 'Deworm the World'.Siddhartha Haria is policy lead at the Development Innovation Lab.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> salonium https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:29 no full 7042
32LMQsjEMm6NK2GTH EA - Sharing Information About Nonlinear by Ben Pace Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sharing Information About Nonlinear, published by Ben Pace on September 7, 2023 on The Effective Altruism Forum.Epistemic status: Once I started actively looking into things, much of my information in the post below came about by a search for negative information about the Nonlinear cofounders, not from a search to give a balanced picture of its overall costs and benefits. I think standard update rules suggest not that you ignore the information, but you think about how bad you expect the information would be if I selected for the worst, credible info I could share, and then update based on how much worse (or better) it is than you expect I could produce. (See section 5 of this post about Mistakes with Conservation of Expected Evidence for more on this.) This seems like a worthwhile exercise for at least non-zero people to do in the comments before reading on. (You can condition on me finding enough to be worth sharing, but also note that I think I have a relatively low bar for publicly sharing critical info about folks in the EA/x-risk/rationalist/etc ecosystem.)tl;dr: If you want my important updates quickly summarized in four claims-plus-probabilities, jump to the section near the bottom titled "Summary of My Epistemic State".When I used to manage the Lightcone Offices, I spent a fair amount of time and effort on gatekeeping - processing applications from people in the EA/x-risk/rationalist ecosystem to visit and work from the offices, and making decisions. Typically this would involve reading some of their public writings, and reaching out to a couple of their references that I trusted and asking for information about them. A lot of the people I reached out to were surprisingly great at giving honest references about their experiences with someone and sharing what they thought about someone.One time, Kat Woods and Drew Spartz from Nonlinear applied to visit. I didn't know them or their work well, except from a few brief interactions that Kat Woods seems high-energy, and to have a more optimistic outlook on life and work than most people I encounter.I reached out to some references Kat listed, which were positive to strongly positive. However I also got a strongly negative reference - someone else who I informed about the decision told me they knew former employees who felt taken advantage of around things like salary. However the former employees reportedly didn't want to come forward due to fear of retaliation and generally wanting to get away from the whole thing, and the reports felt very vague and hard for me to concretely visualize, but nonetheless the person strongly recommended against inviting Kat and Drew.I didn't feel like this was a strong enough reason to bar someone from a space - or rather, I did, but vague anonymous descriptions of very bad behavior being sufficient to ban someone is a system that can be straightforwardly abused, so I don't want to use such a system. Furthermore, I was interested in getting my own read on Kat Woods from a short visit - she had only asked to visit for a week. So I accepted, though I informed her that this weighed on my mind. (This is a link to the decision email I sent to her.)(After making that decision I was also linked to this ominous yet still vague EA Forum thread, that includes a former coworker of Kat Woods saying they did not like working with her, more comments like the one I received above, and links to a lot of strongly negative Glassdoor reviews for Nonlinear Cofounder Emerson Spartz's former company "Dose". Note that more than half of the negative reviews are for the company after Emerson sold it, but this is a concerning one from 2015 (while Emerson Spartz was CEO/Cofounder): "All of these super positive reviews are being commissioned by upper management. That is the first thing you should know ...

]]>
Ben Pace https://forum.effectivealtruism.org/posts/32LMQsjEMm6NK2GTH/sharing-information-about-nonlinear Thu, 07 Sep 2023 07:23:09 +0000 EA - Sharing Information About Nonlinear by Ben Pace Ben Pace https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 54:21 no full 7039
y7hk2zABZ2nvqDs5z EA - Who Is a Good Fit for a Career in Nonprofit Entrepreneurship? (CCW2023) by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who Is a Good Fit for a Career in Nonprofit Entrepreneurship? (CCW2023), published by CE on September 8, 2023 on The Effective Altruism Forum.TLDR: This post is part of a sequence we've prepared for the Career Conversations Week on the EA Forum. This article provides a concise overview of the traits that Charity Entrepreneurship deems most valuable for a career in nonprofit entrepreneurship. We intend to provide you with a deeper understanding of our criteria when evaluating applications for our Incubation Program. Additionally, we hope this post assists you in self-assessing whether this career path aligns with your aspirations and abilities.It's been five years since we started our Incubation Program. We've since incubated 27 charities, some of which are now GiveWell incubated or ACE recommended, with many others on a path to becoming leaders in their fields. This would be impossible without dedicated, smart, resilient, and ambitiously altruistic individuals like you. Of course, not everyone is suited to be a successful nonprofit entrepreneur, and so we've spent the last few years refining our vetting process to identify people for whom this career would be an especially good fit.There is no single blueprint for successful founders. Many of our alumni hadn't even considered this career path until someone encouraged them to explore it! What surprises many people is the incredible diversity among our successful charity founders in terms of age, educational background, and experience.So far, people from 28 different countries joined our program. We've had many participants come directly from bachelor's or master's programs, as well as welcomed individuals with PhDs from recognized institutions like Cambridge, Oxford, and Harvard. They've specialized in fields as diverse as philosophy, geology, machine learning, and engineering. Some have chosen entrepreneurship over formal education, like our youngest participant, who, at the age of 19, left university to join our program!Our program has also attracted individuals who boldly chose to transition from lucrative, secure, or high-status careers such as consulting, real estate, engineering at NASA or medicine. We've seen passionate street activists and seasoned nonprofit professionals alike join the program, each bringing their unique perspectives and skills to the table.More than specific experience or expertise, they have each brought in an open mind and focused desire to launch a high-impact nonprofit. If it's not credentials, age, or specialized experience that make you a great fit for nonprofit entrepreneurship, what is it? Here is what we've learned in the last five years and who we are looking for to join our Incubation Programs.Competence:Previous Project Experience: People who have initiated their own projects or taken the lead in previous initiatives. Examples might be organizing impactful events or conferences, managing and expanding student groups, cultivating a newsletter or blog following, developing a mobile app with peers, serving as an editor for a university newspaper, or mobilizing an activist group. These instances serve as evidence of an entrepreneurial mindset and a proactive approach.Academic or Personal Achievement: People who have excelled academically or demonstrated exceptional accomplishments in their career, hobbies, or interests. A wide range of things fit into this category, from academic distinction to playing an instrument really well, taking part in debate competitions, or winning local RPG contests. High achievement indicates a strong work ethic and a drive for excellence.Scientific/Analytical/Empirical Mindset: Individuals with experience in empirical methods who value scientific evidence and are eager to test assumptions and change their minds. The skeptical, evidence-based mi...

]]>
CE https://forum.effectivealtruism.org/posts/y7hk2zABZ2nvqDs5z/who-is-a-good-fit-for-a-career-in-nonprofit-entrepreneurship Fri, 08 Sep 2023 19:12:48 +0000 EA - Who Is a Good Fit for a Career in Nonprofit Entrepreneurship? (CCW2023) by CE CE https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:04 no full 7057
6dPecDMarq3pm3Fbx EA - What happens on an 80,000 Hours call? by Abby Hoskin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What happens on an 80,000 Hours call?, published by Abby Hoskin on September 8, 2023 on The Effective Altruism Forum.A lot of people aren't sure what to expect when they apply for 80,000 Hours coaching calls. We thought it might be helpful to give you some context.Who can we help?We love when people form inside views on issues, and question ideas that don't make sense to them. So we encourage you to apply for advising even if you don't agree with everything on the 80k website.However, we're most able to help people who are open to working on some of the top problems we list on our site, or something in a related area. We think most of our impact comes from helping people who end up working in these areas, and since they're the topics we focus on, our advice is best for applicants who could potentially take relevant career paths.We're best able to help people who are mid or early career, with some idea of what they want to do, but open to changing directions and building new skills. These people are often between the ages of 20 and 40. But we definitely also talk to people who are outside of this range, and can offer them valuable guidance and point them to great opportunities.Ways you can get value from an 80k callSetting aside time to reflect deeply on your career trajectory is really valuable.Running your ideas by somebody else with reasonable judgement is helpful.If you already have a great plan, checking it over with somebody else can make you feel a lot better.We can tell potential collaborators and mentors what you're doing in case they want to be involved or help you out.Advisors tend to know a lot about the people and orgs working on important problems and can point you to resources/orgs/causes you might not know about.Advisors have different backgrounds and can give expert advice in specific sectors (more on this below).Speaking with an 80k advisor can help you grow your professional network (though this isn't always possible in all cases).You can opt into being recommended for roles that are a match for your plans and skills. We are often asked to recommend candidates for roles at high impact organisations, and you can give us permission to recommend you for roles we think would suit you.After your call, you will be invited to join the 80,000 Hours Alumni Slack. This is a great place to network with other people focused on doing good via their careers. The 80,000 Hours team also periodically updates our alumni about opportunities and resources they may find helpful.Before the callWe ask you to fill out a call prep document that prompts you to reflect on how you define positive impact, which areas you want to work on, and which careers seem most appealing or attractive to you.Filling out the call prep document has been consistently identified as one of the most valuable parts of advising. In other words, you can gain a lot of clarity on your career just by setting aside an hour to write down your answers to these questions, even without speaking to an advisor!You can make a copy of our call prep doc and fill it out now if you'd like. (We won't be able to see your answers, this would be for your personal reflection.)During the callFor people early in their career, we usually cover these topics:What is your current situation and default career path?Let's take your skills/experience out of the picture completely and think about what the world needs.How do you think about choosing which pressing world problem to work on?What qualities make these things problems?What are your values, and how can your career align with them?Which of these problems would you be able to make progress on based on your personal aptitudes, existing skill sets, and interests?Options?Long-run pathsWhich skills/experiences/career capital will help you do impa...

]]>
Abby Hoskin https://forum.effectivealtruism.org/posts/6dPecDMarq3pm3Fbx/what-happens-on-an-80-000-hours-call Fri, 08 Sep 2023 18:03:48 +0000 EA - What happens on an 80,000 Hours call? by Abby Hoskin Abby Hoskin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:10 no full 7056
6SvZPHAvhT5dtqefF EA - Debate series: should we push for a pause on the development of AI? by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Debate series: should we push for a pause on the development of AI?, published by Ben West on September 8, 2023 on The Effective Altruism Forum.In March of this year, 30,000 people, including leading AI figures like Yoshua Bengio and Stuart Russell, signed a letter calling on AI labs to pause the training of AI systems. While it seems unlikely that this letter will succeed in pausing the development of AI, it did draw substantial attention to slowing AI as a strategy for reducing existential risk.While initial work has been done on this topic (this sequence links to some relevant work), many areas of uncertainty remain. I've asked a group of participants to discuss and debate various aspects of the value of advocating for a pause on the development of AI on the EA Forum, in a format loosely inspired by Cato Unbound.On September 16, we will launch with three posts:David Manheim will share a post giving an overview of what a pause would include, how a pause would work, and some possible concrete steps forwardNora Belrose will post outlining some of the risks of a pauseThomas Larson will post a concrete policy proposalAfter this, we will release one post per day, each from a different authorMany of the participants will also be commenting on each other's workResponses from Forum users are encouraged; you can share your own posts on this topic or comment on the posts from participants. You'll be able to find the posts by looking at this tag (remember that you can subscribe to tags to be notified of new posts).I think it is unlikely that this debate will result in a consensus agreement, but I hope that it will clarify the space of policy options, why those options may be beneficial or harmful, and what future work is needed.People who have agreed to participateThese are in random order, and they're participating as individuals, not representing any institution:David Manheim (Technion Israel)Matthew Barnett (Epoch AI)Zach Stein-Perlman (AI Impacts)Holly Elmore (AI pause advocate)Buck Shlegeris (Redwood Research)Anonymous researcher (Major AI lab)Anonymous professor (Major University)Rob Bensinger (Machine Intelligence Research Institute)Nora Belrose (EleutherAI)Thomas Larsen (Center for AI Policy)Quintin Pope (Oregon State University)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Ben_West https://forum.effectivealtruism.org/posts/6SvZPHAvhT5dtqefF/debate-series-should-we-push-for-a-pause-on-the-development Fri, 08 Sep 2023 17:46:55 +0000 EA - Debate series: should we push for a pause on the development of AI? by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:27 no full 7054
MvwctfyZ9NrhPzyPj EA - An Incomplete List of Things I Think EAs Probably Shouldn't Do by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Incomplete List of Things I Think EAs Probably Shouldn't Do, published by Rockwell on September 8, 2023 on The Effective Altruism Forum.There has already been ample discussion of what norms and taboos should exist in the EA community, especially over the past ten months. Below, I'm sharing an incomplete list of actions and dynamics I would strongly encourage EAs and EA organizations to either strictly avoid or treat as warranting a serious - and possibly ongoing - risk analysis.I believe there is a reasonable risk should EAs:Live with coworkers, especially when there is a power differential and especially when there is a direct report relationshipDate coworkers, especially when there is a power differential and especially when there is a direct report relationshipPromote drug use among coworkers, including legal drugs, and including alcohol and stimulantsLive with their funders/grantees, especially when substantial conflict-of-interest mechanisms are not activeDate their funders/grantees, especially when substantial conflict-of-interest mechanisms are not activeDate the partner of their funder/grantee, especially when substantial conflict-of-interest mechanisms are not activeRetain someone as a full-time contractor or grant recipient for the long term, especially when it might not adhere to legal guidelinesOffer employer-provided housing for more than a predefined and very short period of time, thereby making an employee's housing dependent on their continued employment and allowing an employer access to an employee's personal living spacePotentially more controversial, two aspects of the community I believe have substantial downsides that the community has insufficiently discussed or addressed:EA™ Group Houses and the branding of private, personal spaces as "EA""Work trials" that require interruption of regular employment to complete, such that those currently employed full-time must leave their existing job to be considered for a prospective jobAs said, this list is far from complete and I imagine people may disagree with portions of it. I'm hoping to stake this as a position held by some EAs and I'm hoping this post can serve as a prompt for further discussion and assessment."Promote" is an ambiguous term here. I think this is true to life in that one person's enthusiastic endorsement of a drug is another person's peer pressure.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Rockwell https://forum.effectivealtruism.org/posts/MvwctfyZ9NrhPzyPj/an-incomplete-list-of-things-i-think-eas-probably-shouldn-t Fri, 08 Sep 2023 16:40:36 +0000 EA - An Incomplete List of Things I Think EAs Probably Shouldn't Do by Rockwell Rockwell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:19 no full 7052
ZwswpxAWMZ8F7CvtB EA - AMA: 80,000 Hours Career Advising Team by Abby Hoskin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: 80,000 Hours Career Advising Team, published by Abby Hoskin on September 8, 2023 on The Effective Altruism Forum.We're the 80,000 Hours Career Advising team, and you should ask us anything!We are advisors at 80,000 Hours! Ask us anything about careers and career advising! (If you're interested in us personally or other things about 80k we might also answer those.) We'll answer questions on September 13th (British time), so please post questions before then. Unfortunately, we might not be able to get to all the questions, depending on how much time we end up having and what else is asked. So be sure to upvote the questions you most want answered :)Logistics/practical instructions:Please post your questions as comments on this post. The earlier you share your questions, the easier it will be for us to get to them.We'll probably answer questions on September 13th. Questions posted after that aren't likely to get answers.Some context:You have 80,000 hours in your career. This makes it your best opportunity to have a positive impact on the world. If you're fortunate enough to be able to use your career for good, but aren't sure how, our website helps you:Get new ideas for fulfilling careers that do goodCompare your optionsMake a plan you feel confident inYou can also check out our free career guide. We are excited to recently launch the second edition! It's based on 10 years of research alongside academics at Oxford.We're a nonprofit, and everything we provide, including our one-on-one career advising, is free.Curious about what happens on a 1:1 career advising call? Check out our EA Forum post on what happens during calls here.If you're ready to use your career to have a greater positive impact on the world, apply for career advising with us!Who are we?I (Abigail Hoskin) have a PhD in psychology and neuroscience and can talk about paths into and out of academia. I can also discuss balancing having an impact while parenting (multiple!) kids. I will be taking the lead on answering questions in this AMA, but other advisors might chime in, especially on questions in their specific areas of expertise.Huon Porteous has a background in philosophy and experience in management consulting. He has run a huge number of useful "cheap tests" to test out his aptitudes for different careers and is always running self experiments to optimise his workflow.Matt Reardon is a lawyer who can talk in depth about paths to value in law, government, and policy, especially in the US. He also works on product improvements and marketing for our team.Sudhanshu Kasewa was a machine learning engineer at a start-up and has experience doing ML research in academia. He's also worked in human resources and consulting.Anemone Franz is a medical doctor who worked for a biotech startup on pandemic preparedness. She particularly enjoys discussing careers in biosecurity, biotech, or global health.We are led by Michelle Hutchinson! Michelle co-founded the Global Priorities Institute, ran Giving What We Can, was a fund manager for EA Funds, and is a top contributor to our cute animals Slack channel. Michelle is the director of the 1on1 team and does not take calls, but she'll be chiming in on the AMA.We have a special guest, Benjamin Hilton, who will be on deck to answer questions about our website's written content. Ben is a researcher at 80,000 Hours, who has written many of our recent articles, including on AI technical career paths, in addition to helping write our updated career guide.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Abby Hoskin https://forum.effectivealtruism.org/posts/ZwswpxAWMZ8F7CvtB/ama-80-000-hours-career-advising-team Fri, 08 Sep 2023 15:46:07 +0000 EA - AMA: 80,000 Hours Career Advising Team by Abby Hoskin Abby Hoskin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:33 no full 7051
hBytccap5GvqgBxo4 EA - Pilot Results: High-Impact Psychology/ Mental Health (HIPsy) by Inga Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pilot Results: High-Impact Psychology/ Mental Health (HIPsy), published by Inga on September 8, 2023 on The Effective Altruism Forum.This is a rough write-up of the results of all the HIPsy pilot activities, outcomes, and suggestions on what to do next.Below, you can find links to created resources for those involved in psychology or mental health who aim to maximize their impact. Additionally, there are results from events and surveys highlighting which (other) resources might be most beneficial for enhancing their impact. You can learn more about the initial project and plan in this EA forum post.Executive summaryWithin this EAIF- funded project, I ran three surveys to determine the demand for network activities, piloted 5 events at EAGx's, and collaborated with others on highly requested materials. Below, you can find the results of these activities.Main outcomesImportance/general Demand:Roughly twice as many people expressed interest in mental health as a cause area (N = 164) compared to those interested in resources for individuals with a background in psychology or how to make an impact with various psychology-related topics (N = 76/77)Neglectedness: The EA Psychology Research Initiative by Lucius Caviola, among others, appears to meet the infrastructural needs around general psychology-related impact regarding longtermist goals. Those infrastructural needs around helping people to have an impact in the mental health space do not yet seem to be covered, even though the work of the HLI paved the way for mental health as a cause area taken seriously within EA.Mental Health-related activities most requested:Among the mental health-related network activities, the most requested ones are mentoring, meetups, workshops, as well as materials related to topics on how to make an impact in the mental health space. The piloted events (talks and workshops/meetups) at EAGxs demonstrated significant interest, evinced by the high number of participants and high perceived impact ratings.Conclusions:More mental health-related infrastructure building seems impactful. The space would benefit from more organized, targeted resources that enable people to make a more substantial impact in a shorter period of time.I believe it would be a good idea to invest in follow-up funding based on the results of this initiative to establish cost-effective corresponding structures and to hire at least one project/community manager for a year. Based on the survey results, this person would build the pool of volunteers to offer and measure the impact of requested events, materials, and a mentoring service.How can you help? If you'd like to fund follow-up activities, feel free to reach out to me at inga@rethinkwellbeing.org.If you'd be interested in the project manager role, you can apply in <15min via this form. If you are interested in collaborating, or hosting network activities, or creating resources for people engaged in mental health as a volunteer, express your interest via this form (it takes 3 minutes).OutcomesMaterials createdHIPsy website summarizing relevant resources has been created and updated. It mainly contains resources related to impact and mental health. It still needs to be spread within the EA community and cross-linked on the websites of partners (e.g., HLI)In the making: New EA Mental Health cause area report to provide, elaborate o,n and compare avenues of impact in that space from the view of health care professionals (not finalized or published yet, so the link doesn't work yet) in cooperation with High Impact Medicine (HIMed), proofread by Happier Lives Institute (HLI)I (Inga) reviewed the Career Path Profile for psychology on Probably Good and the Research Agenda "Psychology for Effectively Improving the Future" by the Effective Altruism Psychology Resea...

]]>
Inga https://forum.effectivealtruism.org/posts/hBytccap5GvqgBxo4/pilot-results-high-impact-psychology-mental-health-hipsy Fri, 08 Sep 2023 05:40:08 +0000 EA - Pilot Results: High-Impact Psychology/ Mental Health (HIPsy) by Inga Inga https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:41 no full 7048
nyRwGEepTfb2EJMeF EA - Writing about your job is (still) great - consider doing it by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Writing about your job is (still) great - consider doing it, published by Lizka on September 9, 2023 on The Effective Altruism Forum.Many of us know very little about what people in different roles actually do or how they got there. There's a good chance that you have experience that other Forum users would be interested in; so consider writing about your job! (Especially since Career Conversations Week is starting.)Here are some existing "job profile" posts that you can explore.(This post is somewhat redundant with Aaron's, but I hope it's a useful reminder.)Why write about your jobI can present my experience as evidence for this claim. When I was finishing college, I was pretty clueless about my post-college options and kind of defaulted to graduate school (which was the main thing I knew). I had realized my lack of knowledge was a problem, so I asked friends for advice. Someone told me to go on LinkedIn and check people's backgrounds - and to reach out to people and ask for meetings. I did the former and found it helpful, but it could only do so much; I still didn't understand the day-to-day of what an "analyst" did and whether it would be a good fit. And I was too worried about wasting people's time to reach out for calls (except in my more limited personal network, which was heavily skewed towards academic mathematicians and adjacent crowds). I still had a picture-book understanding of jobs. The thing that really helped was talking to people at conferences and informal events later - asking about their backgrounds, what they liked, etc.The fact that jobs are mysterious to you (especially jobs that are somewhat plausible given your values, strengths, etc.) hampers your career.You probably won't apply to jobs you know little about, won't know how to build relevant skills, might end up in jobs you dislike, etc. And I expect jobs that EA Forum users have are often more likely a good fit for another Forum user than a random job would be (if only because many Forum users share some values, like wanting to use their careers or resources in significant part to have a positive impact on the world), so hearing about other Forum users' careers can be particularly helpful.Some specific ways an "about my job" post can help:Improve someone's longer-term career planYou might write about a job that's a great fit for one of your readers (or that they hadn't realized was very useful), but that they hadn't even considered as an option.You might share that you think certain skills are particularly useful for your type of career, and interested readers can try to test those skills.You can share what you like and dislike about your job, and how much you care about those things, which can inform someone who's not sure if they'd enjoy a certain type of work.Etc.Make someone feel a lot betterWhen someone can't see themselves in the jobs that they're familiar with, they can feel like they will never "fit" a job, or like they'll never have an impact.For instance, before I encountered EA, I was probably on an academic path, but pretty sad about this fact. My skills were much more generalist than those of some of the mathematicians I knew, and I knew few real generalists. I thought that my lack of highly specialized interests/skills (or my generalist quality) was almost entirely a disadvantage (until I worked at Rethink Priorities and talked to people there).Impostor syndrome and fear of failure also grow because we rarely talk about the lower points in our past; "about my job" posts can provide useful and encouraging context (e.g. by showing that "successful" people have felt like they weren't succeeding in the past).Connect with someoneFor instance, a reader could be interested in collaborating with you and might reach out after you post.Or people in your network will ...

]]>
Lizka https://forum.effectivealtruism.org/posts/nyRwGEepTfb2EJMeF/writing-about-your-job-is-still-great-consider-doing-it Sat, 09 Sep 2023 14:37:05 +0000 EA - Writing about your job is (still) great - consider doing it by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:12 no full 7061
Y4653ehdHEZs4Y6Hp EA - About my job: "Content Specialist" by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: About my job: "Content Specialist", published by Lizka on September 9, 2023 on The Effective Altruism Forum.I've been the Content Specialist on the Online Team at CEA since early 2022. People often don't know what "Content Specialist" means and it seems useful to have more "about my job" posts, so here's mine.This got long, so I encourage you to skip to whatever you want to see. I include my background, what I actually work on, and some quick reflections (what I value about the role, what's been hard, and the skills that I develop).My background & how I got hereThis ended up longer than expected, so tl;dr: math and comparative literature double major, helping to organize 3 summers of Canada/USA Mathcamp, getting into EA, research fellowship at Rethink Priorities, "Events Generalist" at CEA, and then being invited to apply to Aaron's job and ending up here. Non-tl;dr:I studied math and comparative literature in college (and graduated in 2021). I really wasn't sure what I wanted to do after college, and was somewhat defaulting to math academia or something education-related. I was spending my summers doing ~math research, working at Canada/USA Mathcamp (in a mostly non-academic role where we ran events, helped make camp run, and more - I think this made me better at making-things-happen and made me a lot more confident), and teaching math to excited kids.Meanwhile, a friend introduced me to EA, which clicked pretty quickly. The issue was that I didn't see myself contributing; I didn't want to do CS, didn't particularly want to study economics, didn't think I would last long earning-to-give, and didn't think I had other real skills. But I read a bunch, listened to the 80K Podcast, went to the 2020 Student Summit, and had a call with 80,000 Hours, where they told me to apply to Rethink Priorities (and defer graduate school). I followed that advice.I wrote about my experience at RP here. This experience was great. It made me feel that fairly generalist research was actually a viable/useful path. It also got me over my Forum-intimidation and got me to post my first Forum post (and some others).While I was at Rethink Priorities, I had applied to an "Events Generalist" role at CEA - largely because it seemed relatively interesting, CEA had "effective altruism" in its name (so I thought it was going to be at least somewhat useful), I wanted to test out something like ops, and I really wanted to have a job. I got and took the job, which I was expecting to be pretty low-responsibility since I was so junior.I got more responsibility than I expected, which was both scary and very motivating. On my second week, I ended up being one of two(?) people from the Events Team at the Coordination Forum (I was there to help make it go smoothly and to take notes in some sessions), during which we also made the decision to double the size of EA Global London 2021 (happening in around a month). It felt like a lot of the work that I was doing just wouldn't get done if I wasn't there (or maybe something else important wouldn't get done). After EA Global: London 2021, things calmed down a bit. I was still writing sometimes, and I sometimes contributed a bit to side-projects, some of which grew out a bit.Then Aaron reached out to say that he was leaving, and he was wondering if it would be ok to send me the job application. Skimming Aaron's messages from the time, it looks like Aaron reached out because of a combination of my Forum writing, my performance on the Events Team, and encouragement from Linch when Aaron checked with him. I checked with my then-manager Amy to see if she would be ok with me applying. I was very stressed about this, but Amy was very supportive. I went into the application pretty confident that I wouldn't get far (and I cried after the interview). That's the role...

]]>
Lizka https://forum.effectivealtruism.org/posts/Y4653ehdHEZs4Y6Hp/about-my-job-content-specialist Sat, 09 Sep 2023 05:54:16 +0000 EA - About my job: "Content Specialist" by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:46 no full 7060
j8F3FFLv6kzRYWB3B EA - Writing about my job: Growth lead at a startup in Kenya by Luke Eure Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Writing about my job: Growth lead at a startup in Kenya, published by Luke Eure on September 10, 2023 on The Effective Altruism Forum.I think a lot of impact-minded people should seriously consider working at early-stage startups - particularly in low-middle income countries.I'm an American currently working at an early-stage startup in Kenya, so thought I would write up:How I got this jobHow I think about the impact of my workWhat my day to day job is likeMy advice for people thinking of working at a startupWhat would make me decide to stay or leave my companyI'm aiming to be concise, so sound off in the comments if there's anything you'd like me to expand on in more detail.Context: I work as a growth lead at a company called Kapu Africa, a startup based in Kenya reducing the cost of living for Kenyans by offering cheap direct-to-consumer sales of every-day purchased items, with next day deliveryHow I got this jobAfter undergrad I worked in consulting for 2.5 years (BCG) - first in Chicago and then in Nairobi KenyaI decided I had learned enough from BCG, and that the best way for me to contribute to global development was to work at a startup. I've written here and here about that decision-making processMy coworkers at BCG Nairobi put me in touch with a bunch of people working in various startups in Nairobi as well as elsewhere in Africa. Over the course of 15 or so conversations, I got a sense for what kinds of companies and roles were interesting to me vs. which weren'tI was very lucky in that most of the companies I was talking to were looking for someone with my background. I never had any formal interviews. I was able to have an attitude of "I'm looking for a job where I can have a lot of impact, and if I find that job I will definitely get that job."Kapu was actually in stealth mode when I joined it, so the only way I found out about it was because a former colleague worked thereI had like 3 conversations with different people from Kapu, decided it was the best shot at impact I had (see next section), and so went with it!How I think about the impact of my workI had decided on global poverty as what I wanted to work on in the near-term. Besides the obvious direct benefits, solving global poverty means also unleashing the potential of talented people (who happen to be very poor right now) who can help us solve humanity's other problemsEconomic development seems like the best way to lift people out of extreme povertyI think Kapu is particularly good in terms of impact on a few metrics:Direct impact: Our core business is making goods cheaper for end consumers - targeting poor Kenyans. Our goal is to provide $1B in savings to customers. I view providing real savings on everyday purchases as quite similar to unconditional cash transfers in terms of impactMy personal potential to give a lot: If our company does super well and gets a high valuation, I'll make a lot of money I can then give awayDriving economic development via:Low-medium skill job creationHigh-skill job creation potentially setting up our existing team to go start great companies laterIncreasing the visibility of Kenya as a destination for international investment capital (that is primarily profit seeking as opposed to impact-seeking capital of which Kenya has a lot now)Contributing to building up the business ecosystem in Kenya by doing business with our partner companies (e.g., suppliers, logistics companies)It will help me develop skills (directly driving impact, knowing how a business works from a 360 perspective, and focusing on moving fast) that will be valuable later in my career in the business or nonprofit worldSo what is my job actually like?At its core my responsibility is:Have a strong hypothesis about what we need to grow our number of customers by 10xTest t...

]]>
Luke Eure https://forum.effectivealtruism.org/posts/j8F3FFLv6kzRYWB3B/writing-about-my-job-growth-lead-at-a-startup-in-kenya Sun, 10 Sep 2023 19:51:27 +0000 EA - Writing about my job: Growth lead at a startup in Kenya by Luke Eure Luke Eure https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:46 no full 7069
wEFXDEqtB8H9TgF4s EA - About my job: "Plans Officer" by Weaver Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: About my job: "Plans Officer", published by Weaver on September 10, 2023 on The Effective Altruism Forum.I've been a plans Officer for the United States Army Reserve since 2021. People have no clue what reservists do and much less what "Active Guard Reserve" Soldiers do, and I see it's Career Week, so here's my job. Also I'm shamelessly stealing Lizkas format, so thank her for me writing this.Note that anything written here is my opinion and does not represent the department of defense, or any US Government agency.My background & how I got hereI studied Computer Science in college and graduate into the great recession of 2008. After not getting the exact job I wanted (Marine Officer), I went to the Army recruiter and they gave me the option to be an Army Reserve Quartermaster Officer. I wanted something more active, but since Officer Candidate School was very competitive I wasn't in the top.I served part time as a Platoon Leader, Company Commander and Company Executive Officer. I interviewed with a different unit to change my branch to Civil Affairs, transferred to the unit and then served there as a Battalion Logistics Officer (S4) until I got called to go to first deploy, then school. My small school team of four Captains placed in the top six(Commandants List) of our approximately 60 person class and I was a Civil Affairs(CA) Officer after about 8 years. I credit my time working alongside CA officers as a large part of how well I did.At the same time I worked as a state department contractor full time. I also was going through Nursing school for most of it, though I didn't make it through.During my deployment, I was always trying to quantify how much we were doing and I had problems doing so. I read books like Dead Aid and Toxic Charity, trying to get a sense of how to 'do international development' and 'foreign aid'. You might think that this is why I joined EA.Nope.My writing partner made me read Harry Potter and the Methods of Rationality and then, yes you can guess the rest. I got picked up for a 3 year tour of AGR and decided to put my civilian career on hold and see if I liked it.What I actually work onThere is a very specific scope of things that only I do, mostly long range planning, facilitation, writing policy and working on strategy/prioritization for my organization. This includes everything that is mandated for the unit I work for to do to what is directed at a local level for us to do. Most of my work is office work, over various lovely software, and some of it is field work.Here is what I do normally:Field questions from people inside the unitAnswer Requests for Information (RFIs) from higher echelons or sister unitsWrite Operations Orders and iron out short term plansBe the telephone between two organizations that need to talk.I also run several meetings that are ad hoc or weekly dependent. I've had to work with a lot of people and I have met so many good people. It's not EA, but they all want to make a difference.Reflecting on the RoleI opted into a very specialized career field in an already specialized group. I love the people and the unit culture is great, but they ask a lot of the part timers.Some things I value:I'm expected to work independently.If the work is done, then it's done. I'm not waiting around for someone's input normally.I get to work with top performers regularly.I get to mentor a lot of junior Officers and Enlisted, which is a big draw.Some things that can be hard:I'm not in charge, I just work here. Even though I plan a lot in advance, sometimes the commanders will choose to take a different path, upending a lot of my work.There's a lot of things that need to be on a regulatory timeline that I can not change. This is fine, except when people try to get around it.Some Skills I develop in this job:Lan...

]]>
Weaver https://forum.effectivealtruism.org/posts/wEFXDEqtB8H9TgF4s/about-my-job-plans-officer Sun, 10 Sep 2023 19:23:53 +0000 EA - About my job: "Plans Officer" by Weaver Weaver https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:09 no full 7068
Fhoq7tP9LYPqaJDxx EA - Shrimp: The animals most commonly used and killed for food production by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shrimp: The animals most commonly used and killed for food production, published by Rethink Priorities on September 10, 2023 on The Effective Altruism Forum.Citation: Romero Waldhorn, D., & Autric, E. (2022, December 21). Shrimp: The animals most commonly used and killed for food production.SummaryDecapod crustaceans or, for short, decapods (e.g., crabs, shrimp, or crayfish) represent a major food source for humans across the globe. If these animals are sentient, the growing decapod production industry likely poses serious welfare concerns for these animals.Information about the number of decapods used for food is needed to better assess the scale of this problem and the expected value of helping these animals.In this work we estimated the number of shrimp and prawns farmed and killed in a year, given that they seem to be the vast majority of decapods used in the food system.We estimated that around:440 billion (90% subjective confidence interval [SCI]: 300 billion - 620 billion) farmed shrimp are killed per year, which vastly exceeds the figure of the most numerous farmed vertebrates used for food production-namely, fishes and chickens.230 billion (90% SCI: 150 billion - 370 billion) shrimp are alive on farms at any moment, which surpasses any farmed animal estimate known to date, including farmed insect numbers.25 trillion (90% SCI: 6.5 trillion - 66 trillion) wild shrimp are directly slaughtered annually, a figure that represents the vast majority of all animals directly killed by humans out of which food is produced.At this moment, the problem of shrimp production is greater in scale-i.e., number of individuals affected-than the problem of insect farming, fish captures, or the farming of any vertebrate for human consumption. Thus, while the case for shrimp sentience is weaker than that for vertebrates and other decapods, the expected value of helping shrimp and prawns might be higher than the expected value of helping other animals.IntroductionRecently, Birch et al. (2021) and Crump et al. (2022a) reviewed the evidence of sentience in decapod crustaceans, with a focus on pain experience. Similar to findings previously reported by Waldhorn (2019; see also Waldhorn et al., 2020), these studies concluded that there is substantial, although limited, evidence that decapods might be sentient. Notably, the low strength of current evidence likely corresponds to the little scientific attention various decapod taxa have received-which is particularly the case for shrimp, especially for those belonging to the Penaeidae family (see also Comstock, 2022).Decapods like crabs, lobsters, and shrimp serve as a major source of human food worldwide, having partly driven the global growth of the aquaculture sector in recent years (de Jong, 2018). Decapod production also represents the fastest-growing major fishery activity worldwide (Boenish et al., 2022). If these animals are sentient, current commercial practices pose serious welfare risks both when decapods are farmed and/or handled during capture, transport and sale, and when they are slaughtered (see Birch et al., 2021 and Crump et al., 2022b). These welfare issues might be particularly pressing given the high numbers of farmed decapods (see Mood & Brooke, 2019a), plus other uncounted individuals captured from the wild.Furthermore, increasing human population, changes in consumer preferences, technological advancements, and income growth suggest that decapod production stands to increase in the future (Boenish et al., 2022; FAO, 2022f), which may, in turn, augment the scale of the problem of decapod welfare.Currently, only partial data exists about the scope of this issue. Mood & Brooke (2019a) calculated that between 255 billion and 605 billion decapods are farmed every year, the great majority of such individual...

]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/Fhoq7tP9LYPqaJDxx/shrimp-the-animals-most-commonly-used-and-killed-for-food Sun, 10 Sep 2023 17:51:36 +0000 EA - Shrimp: The animals most commonly used and killed for food production by Rethink Priorities Rethink Priorities https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:27:47 no full 7067
33o5jbe3WjPriGyAR EA - Announcing the Meta Coordination Forum 2023 by MaxDalton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Meta Coordination Forum 2023, published by MaxDalton on September 10, 2023 on The Effective Altruism Forum.Continuing our efforts to be more transparent about the events we organize, we want to share that we're running the Meta Coordination Forum 2023 (an invite-only event scheduled for late September in the Bay) and provide community members with the opportunity to give input.HighlightsEvent Goal: Help key people working in meta-EA make better plans over the next two years to help set EA and related communities on a better trajectory.Agenda:Updates from subject-matter experts from key cause areas.Discussions on significant strategic questions.Clarification of project ownership and forming of actionable plans.Attendees: A group of key people focused on meta / community-building work. Not focused on other key figures beyond the meta space.Organizing Team: The Partner Events team at CEA (Sophie Thomson, Michel Justen, and Elinor Camlin) is organizing this event, with Max Dalton and other senior figures in the meta space advising on strategy.Community Engagement: We'd like to hear your perspectives via a survey by 11:59 PM PDT on Sunday, 17 September. The survey asks about the future of EA, the interrelation between EA and AI safety, and potential reforms in meta-EA projects.Post-Event: A summary of survey responses from the event will be made public to encourage wider discussion and reflection in the community.The event is a successor to past "Leaders Forums" and "Coordination Forums" but is also different in some important ways. For further details, please see the post below.Why we're running this eventNow seems like a pivotal time for EA and related communitiesThe FTX crisis has eroded trust within and outside the EA community and highlighted some important issues. Also, AI discourse has taken off, changing the feasibility of various policy and talent projects. This means that now is an especially important time for various programs to reconsider their strategy.We think that more coordination and cooperation could help people doing meta work make better plansWe think it could be useful for attendees to share updates and priorities, discuss group resource allocation, and identify ways to boost or flag any concerns they have with each other's efforts. (But we don't expect to reach total agreement or a single unified strategy.)What the event is and isn'tWe think it's important to try to give an accurate sense of how important this event is. We think it's easy for community members to overestimate its importance, but we're also aware that it might be in our interests to downplay its importance (thus inviting less scrutiny).The event will potentially shape the future trajectory and strategies of EA and related communitiesFirst, some ways in which the event is fairly important:It will bring together many important decision-makers in the meta-EA space (more on attendees below).These people will be discussing topics that are important for EA's future, and discussions at the event might shape their actions.The event aims to improve plans, and hopes that this leads to a better trajectory for EA and related communities.The event may facilitate further trust and collaboration between this set of people, possibly further entrenching their roles (though we're also trying to be more careful about how much we limit this; see below).The event will not foster unanimous decisions or a single grand strategySome ways in which the event is less important:It is not a collective decision-making body: all attendees will make their own decisions about what they do. We expect that attendees will come in with lots of disagreements and will leave with lots of disagreements (but hopefully with better-informed and coordinated plans). This is how previous ...

]]>
MaxDalton https://forum.effectivealtruism.org/posts/33o5jbe3WjPriGyAR/announcing-the-meta-coordination-forum-2023 Sun, 10 Sep 2023 17:00:00 +0000 EA - Announcing the Meta Coordination Forum 2023 by MaxDalton MaxDalton https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:26 no full 7066
i7DWM6zhhPr2ccq35 EA - Thoughts on EA, post-FTX by RyanCarey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on EA, post-FTX, published by RyanCarey on September 10, 2023 on The Effective Altruism Forum.IntroductionThis is a draft that I largely wrote back in Feb 2023, how the future of EA should look, after the implosion of FTX.I. What scandals have happened and why?1.There are several reasons that EA might be expected to do harm and have scandals:a) Bad actors. Some EAs will take harmful actions with a callous disregard for others. (Some EAs have psychopathic tendencies, and it is worth noting that that utilitarian intuitions correlate with psychopathy.)b) Naivete. Many EAs are not socially competent, or streetsmart, enough to be able to detect the bad actors. Rates of autism are high in EA, but there are also more general hypotheses. Social movements may in-general by susceptible to misbehaviour, due to young members having an elevated sense of importance, or being generally impressionable. See David Chapman on "geeks, mops and sociopaths" for other hypotheses.c) Ideological aspects. Some ideas held by many EAs - whether right or wrong, and implied by EA philosophy or not - encourage risky behaviour. We could call these ideas risky beneficentrism (RB), and they include:i. High risk appetite.ii. Scope sensitivityiii. Unilateralismiv. Permission to violate societal norms. Violating or reshaping an inherited morality or other "received wisdom" for the greater good.iv. Other naive consequentialism. Disregard of other second-order effectsThere are also hypotheses that mix or augment these categories: EAs might be more vulnerable to generally psychopathic behaviour due to that kind of decision-making appearing superficially similar to consequentialist decision-making2.All of (a-c) featured in the FTX saga. SBF was psychopathic, and his behaviour included all five of these dimensions of risky beneficentrism. The FTX founders weren't correctly following the values of the EA community, but much of what would be warning signs to others (gambling-adjacency, the Bahamas, lax governance) weren't to us pursuing a risky, scope-sensitive, convention-breaking, altruistic endeavour. And we outside EAs, perhaps due to ambition and naivete, supported these activities.3.Other EA scandals, similarly, often involve multiple of these elements:[Person #1]: past sexual harassment issues, later reputation management including Wiki warring and misleading histories. (norm-violation, naive conseq.)[Person #2]: sexual harassment (norm-violation? naive conseq?)[Person #3] [Person #4] [Person #5]: three more instances of crypto crimes (scope sensitivity? Norm-violation, naive conseq.? naivete?)Intentional Insights: aggressive PR campaigns (norm-violation, naive conseq., naivete?)Leverage Research, including partial takeover of CEA (risk appetite, norm-violation, naive conseq, unilateralism, naivete)(We've seen major examples of sexual misbehaviour and crypto crimes in the rationalist community too.)EA documents have tried to discourage RB, but this now seems harder than we thought. Maybe promoting EA inevitably leads to significant amounts of harmful RB.4.People have a variety of reasons to be less excited about growing EA:EA contributed to a vast financial fraud, through its:People. SBF was the best-known EA, and one of the earliest 1%. FTX's leadership was mostly EAs. FTXFF was overwhelmingly run by EAs, including EA's main leader, and another intellectual leader of EA.Resources. FTX had some EA staff and was funded by EA investors.PR. SBF's EA-oriented philosophy on giving, and purported frugality served as cover for his unethical nature.Ideology. SBF apparently had an RB ideology, as a risk-neutral act-utilitarian, who argued a decade ago why stealing was not in-principle wrong, on Felicifia. In my view, his ideology, at least as he professed it, could best b...

]]>
RyanCarey https://forum.effectivealtruism.org/posts/i7DWM6zhhPr2ccq35/thoughts-on-ea-post-ftx Sun, 10 Sep 2023 13:21:59 +0000 EA - Thoughts on EA, post-FTX by RyanCarey RyanCarey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:18 no full 7065
pNhc3jensyBY4Hz6u EA - Panel discussion on AI consciousness with Rob Long and Jeff Sebo by Aaron Bergman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Panel discussion on AI consciousness with Rob Long and Jeff Sebo, published by Aaron Bergman on September 10, 2023 on The Effective Altruism Forum.IntroRecent 80k guest and philosopher specializing in AI consciousness Rob Long (@rgb) recently participated in a panel discussion on his paper "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness" (pdf) with co-authors Patrick Butlin, Yoshua Bengio, and Grace Lindsay and moderator Jeff Sebo (@jeffsebo).You can watch it on Youtube (below), watch/listen as a podcast on Spotify , or read the transcript below.Paper abstractWhether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them.Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.Youtube descriptionThis event took place on Tuesday September 5, 2023 and was hosted by the NYU Mind, Ethics, and Policy Program.About the eventThis panel discussion featured four authors from the recently released and widely discussed AI consciousness report. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of the best-supported neuroscientific theories of consciousness. The paper surveys several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories the authors derive "indicator properties" of consciousness, elucidated in computational terms that allow them to assess AI systems for these properties. They use these indicator properties to assess several recent AI systems, and discuss how future systems might implement them.In this event, the authors summarized the report, offered perspectives from philosophy, cognitive science, and computer science, and responded to questions and comments.About the panelistsPatrick Butlin is a philosopher of mind and cognitive science and a Research Fellow at the Future of Humanity Institute at the University of Oxford. His current research is on consciousness, agency and other mental capacities and attributes in AI.Robert Long is a Research Affiliate at the Center for AI Safety. He recently completed his PhD in philosophy at New York University, during which he also worked as a Research Fellow at the Future of Humanity Institute. He works on issues related to possible AI consciousness and sentience.Yoshua Bengio is recognized worldwide as one of the leading experts in artificial intelligence, known for his conceptual and engineering breakthroughs in artificial neural networks and deep learning. He is a Full Professor in the Department of Computer Science and Operations Research at Université de Montréal and the Founder and Scientific Director of Mila - Quebec AI Institute, one of the world's largest academic institutes in deep learning. He is also the Scientific Direc...

]]>
Aaron Bergman https://forum.effectivealtruism.org/posts/pNhc3jensyBY4Hz6u/panel-discussion-on-ai-consciousness-with-rob-long-and-jeff Sun, 10 Sep 2023 12:11:54 +0000 EA - Panel discussion on AI consciousness with Rob Long and Jeff Sebo by Aaron Bergman Aaron Bergman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:03:26 no full 7064
x9siFYXhgHcipEwT7 EA - A Love Letter to EA by utilistrutil Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Love Letter to EA, published by utilistrutil on September 10, 2023 on The Effective Altruism Forum.Dear EA,I love you. In hindsight, my life before we met looks like it was designed to prepare me for you, like Fate left her fingerprints on so many formative books and conversations. Yet, for all that preparation, when my friend finally introduced us, I almost missed you! "Earning to give-me-a-break, am I right?" I didn't recognize you at first as the one I'd been looking for - I didn't even know I'd been looking. But that encounter made an indelible impression on me, one that eventually demanded a Proper Introduction, this time performed in your own words. At second sight, I could make out the form of the one I am now privileged to know so well.We survived the usual stages: talking, seeing each other casually, managing a bit of long (inferential) distance. It has been said that love is about changing and being changed - Lord, how I've changed. And you have too! I loved you when CEA was a basement, I loved you in your crypto era, and I'll love you through whatever changes the coming years may bring.I will admit, however, that thinking about the future scares me. I don't mean the far future (though I worry about that, too, of course); I mean our future. We're both so young, and there's so much that could go wrong. What if we make mistakes? What if I can't find an impactful job? What if your health (epistemic or otherwise) fails? In this light, our relationship appears quite fragile. All I can do is remind myself that we have weathered many crises together, and every day grants us greater understanding of the challenges that lie ahead and deeper humility with which to meet them.I also understand you better with each passing year. And that's a relief because, let's face it, loving you is complicated! There's so much to understand. Sometimes I feel like I'll never fully Get You, and then I feel a little jealous toward people who have known you longer or seem to know a different side of you. When I get to thinking this way, I am tempted to pronounce that I am only "adjacent" to you, but we both know that this would be true only in the sense that a wave is adjacent to the ocean. And you? You make me feel seen in a way I never thought was possible.We tend to talk the most when some Issue requires resolution, but I hope you know that for every day we argue, there are 99 days when I think of you with nothing but fondness. 99 days when I relish your companionship and delight in my memories with you: traveling the world, reading in the park, raising the next generation, talking late into the night, bouncing a spikeball off Toby Ord's window . . .I love your tweets, even when they make me cringe, I trust your judgement, even after you buy a castle, and I cherish a meal with you, even when it's a lukewarm bottle of Huel shared in an Oxford train station. When we disagree, I love how you challenge me to refine my map of the world. I am constantly impressed by your boundless empathy, and I am so thankful for everything you've taught me. I love your eccentric friends and rationalist neighbors. My parents like you too, by the way, even if they don't really get what I see in you.I love you x 99.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
utilistrutil https://forum.effectivealtruism.org/posts/x9siFYXhgHcipEwT7/a-love-letter-to-ea Sun, 10 Sep 2023 10:29:28 +0000 EA - A Love Letter to EA by utilistrutil utilistrutil https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:08 no full 7063
NBkiCgwn435RgGRmB EA - Career advice the Probably Good team would give our younger selves by Probably Good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Career advice the Probably Good team would give our younger selves, published by Probably Good on September 11, 2023 on The Effective Altruism Forum.At Probably Good, our mission is to help more people maximize the impact of their career based on their own values, personal circumstances, and motivations. In the past few months, we've made many exciting strides towards fulfilling this mission - like launching our 1-on-1 advising calls, renovating and rebranding our site, and publishing several new in-depth career profiles and career guide chapters. Though we're still a relatively new org, we're learning a lot and striving to create high-quality content for a range of preferences and cause areas (be on the lookout for a whole lot of climate-focused career content in the next month!).We think this sort of guidance can be extremely helpful in navigating your career search (it's why we do it!). But we also know that life is complex and unpredictable. Planning and strategizing for your career is super important, but so is chance, failure and learning along the way.So to get a bit more personal and strike up some career conversations, our team is sharing the top piece of career advice we'd give our younger selves. We all come from fairly different backgrounds and career stages, so hopefully this can give a more practical perspective on our own career journeys.Anna Beth, WriterShow you can do the work instead of relying solely on on-paper qualifications. When looking for a job out of undergrad, I often wouldn't even apply to ambitious opportunities because I assumed I didn't have the impressive on-paper qualifications needed to stand-out. This probably has something to do with imposter syndrome, but I actually felt pretty confident in my abilities - I just thought I didn't have the background needed to be given a chance. Looking back, this was a pretty defeatist outlook. I wish I would have spent more time focused on practicing and sharing the sort of work I wanted to do. When I eventually started sharing (and emphasizing) a portfolio website with mostly personal writing projects, I ended up getting more interviews and my first full-time role.If you feel limited by your background or like you're somehow behind for not doing more while in school, it can go a long way to show - not just tell - you can do the work. Sure, those prestigious internships and achievements could open doors and give you an extra boost, but ultimately, organizations want people who can actually do a good job and have something valuable to offer. Try a do-it-yourself approach by taking up projects that interest you, keeping up with a website, actively seeking work-tasks, or doing more volunteer work in your field.Dylan, ResearcherYou have more options than you likely realize. We tend to lean towards options we're already familiar with, and the careers we're exposed to at a young age are determined by fairly arbitrary variables like location, educational background, and our parent's careers. I think this means the career goals we form early on are often highly path dependent. In reality, there's far more areas of work we would have been excited about, if only we'd been exposed to them earlier on.This probably sounds quite obvious, but it's something I wish I'd internalized before I latched onto the first subject that gripped me then spent a fair few years pursuing it at postgraduate level. I might still have chosen the same route, but my decision would have been better informed had I spent some more time trying a bunch of different things.Itamar, Head of GrowthEarly on, prioritize broadly applicable skills-like learning and communication-that will be useful regardless of what path you end up taking. This is especially beneficial in cases where you're unsure what you'll do later, or think that ...

]]>
Probably Good https://forum.effectivealtruism.org/posts/NBkiCgwn435RgGRmB/career-advice-the-probably-good-team-would-give-our-younger Mon, 11 Sep 2023 20:15:34 +0000 EA - Career advice the Probably Good team would give our younger selves by Probably Good Probably Good https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:22 no full 7075
yeoGmn29zWScfvQLk EA - Apply Now: EAGxPhilippines applications are closing soon by zianbee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply Now: EAGxPhilippines applications are closing soon, published by zianbee on September 11, 2023 on The Effective Altruism Forum.↗️ See the original forum postThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
zianbee https://forum.effectivealtruism.org/posts/yeoGmn29zWScfvQLk/apply-now-eagxphilippines-applications-are-closing-soon Mon, 11 Sep 2023 16:14:10 +0000 EA - Apply Now: EAGxPhilippines applications are closing soon by zianbee zianbee https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:26 no full 7070
nAgJSGgdA67xun8MM EA - Writing about my job: Co-founder of a new charity (early stage) by SofiaBalderson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Writing about my job: Co-founder of a new charity (early stage), published by SofiaBalderson on September 13, 2023 on The Effective Altruism Forum.Tl;dr: This is my very personal take on what it's like to start and run a self-started charity. This job can be very interesting, fun, and high impact, but it's also very challenging and it's not for everyone.Acknowledgments:Thanks to Allison Agnello, Christina Schmidt Ibanez and Constance Li for your valuable feedback and suggestions!What's my current job and how I got here:Co-founder of Impactful Animal Advocacy (IAA) - an ecosystem of community, knowledge, and tools that supports a more coordinated and strategic animal advocacy movement.I started this as a volunteer project in June 2022, first with a monthly newsletter then expanding to include a Slack community. Last month, I decided to work on IAA full-time after seeing significant community growth and a path for counterfactual impact.Who is this post foPeople who are interested in starting a charity or people who are curious about what it's like.As a relatively recent charity founder, I can make comparisons between what is a "normal job" and this job (while it's relatively fresh in my head).Because my charity does community building, I speak to many people about their work and plans, and some advocates are considering starting their own organization. I hope this will help you evaluate whether the job is for you, and should you decide to start a new organization, you'll be all the more prepared for it.What's the job aboutYou start, build, and run the charity. Because it's likely that it will just be you or your co-founder(s), you'll be doing a lot of different things (more about this below). This is a senior management position and it's a lot of responsibility. However, it's different from senior roles in more established organizations in a couple of ways. For example, the variety of tasks you have to do (both high-level and admin work) is higher, there is a lot more uncertainty and risk, and a lot more freedom in which interventions to run and how to run them. Starting a charity could be one of the most impactful things you can do with your career if you're the right fit for it.Past job experiences that may helpMy most recent role was Head of Programmes at Animal Advocacy Careers. Before that, I was a Project Manager and a Partnership Manager at Veganuary where I took on various roles in management and marketing. I've also worked in a couple of start-ups which added to my entrepreneurial experience. All these roles combined have helped me a lot. The skills that help me most from my past experiences are bootstrapping projects, project and people management.On the whole, I don't think there is any set of experiences that are essential for this job. Each charity idea will have its own requirements and even the best co-founder pair won't have all of them. For example, if you start a charity in policy, experience in policy change is likely to make your job much easier, but there are still many other skills that you need to have to succeed. Looking at our community-building project, both my co-founder and I are very good at networking and relationship-building, which are key skills that our project needs. However, we are still learning many other skills that we need, like fundraising.Some founders succeed without any experience, often coming right out of college (Charity Entrepreneurship has incubated several orgs like this!). If you think this kind of role is a great personal fit, but you don't have much working experience, it might be worthwhile to trial or test. I started the newsletter as a volunteer project and it gave me one year of feedback before deciding to take the project full-time.What helps to do this role wellThere are a couple of paths to s...

]]>
SofiaBalderson https://forum.effectivealtruism.org/posts/nAgJSGgdA67xun8MM/writing-about-my-job-co-founder-of-a-new-charity-early-stage Wed, 13 Sep 2023 16:21:33 +0000 EA - Writing about my job: Co-founder of a new charity (early stage) by SofiaBalderson SofiaBalderson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:34 no full 7091
RvafKqEYndLrnrGjm EA - Who should we interview for The 80,000 Hours Podcast? by Luisa Rodriguez Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who should we interview for The 80,000 Hours Podcast?, published by Luisa Rodriguez on September 13, 2023 on The Effective Altruism Forum.We'd love ideas for 1) who to interview and 2) what topics to cover (even if you don't have a particular expert in mind for that topic) on The 80,000 Hours Podcast.Some prompts that might help generate guest ideas:What 'big ideas' books have you read in the last 5 years that were truly good? (e.g. the Secret of Our Success)Who gave that EAG(x) talk that blew your mind? (you can find recorded EAG(x) talks here)What's an important EA Forum post that you've read that could be more accessible, or more widely known?Who's an expert in your field who would do a great job communicating about their work?What's an excellent in-the-weeds nonfiction book you've read in the last few years that might be relevant to a pressing world problem? (e.g. Chip War)Who's appeared on our show in the past that you'd like to hear more from? (complete list of episodes here)Who's spearheading an entrepreneurial project you want to learn more about?Who has a phenomenal blog on ideas that might be relevant to pressing problem areas?Who's a journalist that consistently covers topics relevant to EA, and nails it?Who are the thought leaders or contributors in online forums or platforms (like LessWrong, EA Forum, or others) whose posts consistently spark insightful discussions?We might be interested in covering the following topics on the show: suffering-risks, how AI might help solve other pressing problems, how industries like aviation manage risk, whether we should prioritize well-being over health interventions, what might bottleneck AI progress. Who's an expert you know of who can speak compellingly on one of those topics?Some prompts that might help generate topic ideas:What's a problem area that you've always wanted to know more about?What's a great video essay you've seen in the last few months? What was it on?What's a policy idea that you think more policymakers should hear?What's a problem area that seems seriously underrated in EA?What's a philosophy concept that's important to your worldview that people aren't talking enough about?What's a major doubt you have about whether a particular problem (e.g. AI / biosecurity) is actually especially pressing, or a particular solution is especially promising?What's an issue in the EA community that you'd like to hear someone address? (e.g. FTX, burnout, deference)We'd love to get lots and lots of ideas - so please have a low bar for suggesting! That said, we get many more suggestions than we can plausibly make episodes, so in practice we follow through on <5% of proposals people put to us.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Luisa_Rodriguez https://forum.effectivealtruism.org/posts/RvafKqEYndLrnrGjm/who-should-we-interview-for-the-80-000-hours-podcast Wed, 13 Sep 2023 15:51:21 +0000 EA - Who should we interview for The 80,000 Hours Podcast? by Luisa Rodriguez Luisa_Rodriguez https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:40 no full 7090
y6BGJGgDDCfospuXE EA - Difficult to get a job in EA not living in the West? Animal advocacy careers can impact millions of animals anywhere in the world by Animal Advocacy Careers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Difficult to get a job in EA not living in the West? Animal advocacy careers can impact millions of animals anywhere in the world, published by Animal Advocacy Careers on September 13, 2023 on The Effective Altruism Forum.Some effective altruists focus on their donations, or just follow EA as a matter of intellectual pursuit. But for those who want to do direct work in an organisation, finding a job is not usually easy. This is also a big reason why a lot of people simply do not even consider looking for a job in an EA organisation in the first place.We think that a considerable number of effective altruists who couldn't or didn't even try to get a job in EA organisations (due to high competition, lack of expertise, or distant location) may be overlooking extremely impactful career opportunities in animal advocacy. In this post, we'll show how these career opportunities might be a good fit for many effective altruists that experience similar difficulties and bottlenecks in their career.We are not claiming that effective altruists should necessarily pursue a job in EA organisations, or stop looking for jobs in other EA cause areas, or prioritise animal advocacy over other causes or vice versa. Nor do we suggest that animal advocacy is inherently easier or that people should invariably pick more convenient options. Our goal is to simply point out additional possibilities for those who may be a good (or even better) fit for them.Challenges in securing EA jobsIt is not easy to get a job in the EA space, especially if you don't have the right qualifications or live in high income countries.While 80,000 Hours constantly updates EAs with hundreds of new roles in its job board, there are hundreds of thousands of people who apply for these roles too. As a result of this high demand, most hiring rounds are extremely competitive, which lowers the probability of getting hired.In addition, regardless of the competitive nature of these jobs, required or desired qualifications are sometimes quite high and specific. For example, many AI safety positions typically require qualifications related to computer science and programming, while pandemic preparedness positions may require qualifications related to medicine or governance, and so forth.While this doesn't imply that these fields exclude job opportunities for other skill sets, the available positions are limited and extremely competitive due to the sheer number of applicants.On top of these challenges, geographical constraints can complicate matters. While some job opportunities are remote, most are not. Unless you are willing to relocate and have working permits, these job postings do not really count as opportunities.This can be especially true for people who do not live in high income countries.For those in countries like Australia or Germany, obtaining work permits for the US or the UK may be relatively easier. However, it's not as simple for individuals residing in places like Peru, Thailand or Türkiye (previously Turkey). Moreover, demonstrating your credentials can be comparatively tougher if you were born and raised in developing countries, as educational institutions or domestic firms may lack international recognition.This is not to say that it is impossible to get a job in EA organisations, or that you should simply give up. Regardless of the challenges, you may very well overcome them and it may be worth it, especially considering the potential for meaningful impact. We are just describing how difficult it can be. And our guess is that this is how at least some people in the community experience their pursuit of jobs in the EA space (or the lack of it due to these high challenges).There are high impact opportunities available in the animal advocacy spaceIn the animal advocacy field, there are numerous hi...

]]>
Animal Advocacy Careers https://forum.effectivealtruism.org/posts/y6BGJGgDDCfospuXE/difficult-to-get-a-job-in-ea-not-living-in-the-west-animal Wed, 13 Sep 2023 14:52:22 +0000 EA - Difficult to get a job in EA not living in the West? Animal advocacy careers can impact millions of animals anywhere in the world by Animal Advocacy Careers Animal Advocacy Careers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:35 no full 7088
ohfknZmWG5uH8Jgqh EA - There is little (good) evidence that aid systematically harms political institutions by ryancbriggs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There is little (good) evidence that aid systematically harms political institutions, published by ryancbriggs on September 13, 2023 on The Effective Altruism Forum.Sometimes in discussions of foreign aid or charity I see people raise the point that aid might do good directly, such as by providing a health service, but that it might cause harm indirectly, for example by allowing an incompetent or corrupt state to continue existing without being forced to become better by harsh economic realities. These arguments come up in conversation, and also in books by the likes of Angus Deaton or Larry Temkin. Recently, Martha Nussbaum brought up these concerns. They're worth taking seriously. Thankfully, political scientists and economists have in fact looked at them (somewhat) seriously.In this post I'm going to tell you a tiny bit about me, explain the institutional criticism of aid in a bit more detail, and then explain the results of two papers testing this relationship. I'll also explain a tiny bit about the limitations of this kind of work. The upshot of all this is that we have very little strong evidence that aid systematically harms political institutions. My best personal guess is that while aid can sometimes have medium-sized positive or negative effects on politics in recipient countries in specific cases, on average right now and in the recent past it has very small effects in either direction.All About MeI'm a political scientist by training and most of my teaching is in a development studies department. When I was considering grad school, I was really interested in political accountability and the ways that "easy money" like oil can distort political relationships. My reading of recent history suggested to me that getting oil before having representative institutions meant locking in autocratic rule. I thought about studying oil states but then somewhat unrelatedly I tried living in Cairo for about half a year and found it quite challenging culturally - so I figured studying oil states was probably not going to work for me. My next idea was thinking about aid. In a lot of ways it seemed similar to oil: it was "easy money" for governments that let them provide goods or services to their citizens without taxing them. In many very poor countries it was also a large flow in terms of government expenditure or GDP. I wrote my big undergrad paper on this.While doing so I read the work of Deborah Brautigam on this topic and then I went to do my PhD with her.Aid and institutions in theoryThe "aid harms institutions" story isn't dumb. It makes internal sense, and the early (cross-sectional) evidence that we had on it sometimes suggested harm. There are so many ways that aid could harm institutions or governance in recipient countries. It could do so by acting like oil. This primarily means allowing governments to exist absent taxation. If citizens aren't taxed, so the theory goes, then maybe they won't demand representation or results from government in the same way. And if governments don't have to collect tax, then maybe they won't do state building things like building up good ways of gathering information on everybody. If we look at European states, you can easily tell a story where states felt the acute need for more money (often for fighting wars) and this set off a chain of events that built strong states and created demands for state accountability to at least some segment of the population.Aid could also harm institutions in more mundane ways. For example, donors might want to hire local staff and might pay well by local standards. This seems like a clear positive, but if enough donors do this they might poach all of the best people from government bureaucracies. Donors might also want lots of reporting to make sure money is well spent. Again this seems g...

]]>
ryancbriggs https://forum.effectivealtruism.org/posts/ohfknZmWG5uH8Jgqh/there-is-little-good-evidence-that-aid-systematically-harms Wed, 13 Sep 2023 13:54:59 +0000 EA - There is little (good) evidence that aid systematically harms political institutions by ryancbriggs ryancbriggs https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:54 no full 7087
zrM2ktic2HEswAwry EA - Hiring retrospective: Research Communicator for Giving What We Can by Michael Townsend Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiring retrospective: Research Communicator for Giving What We Can, published by Michael Townsend on September 13, 2023 on The Effective Altruism Forum.Inspired by Aaron Gertler's notes on hiring a copyeditor for CEA and more recently ERA's hiring retrospective, I am writing a retrospective on Giving What We Can's hiring for the Research Communicator role. This is written in my capacity as a Researcher for Giving What We Can. I helped drive the hiring process along with several other team members. The main motivation of the post is to:Provide more information for applicants on what the hiring process looked like internally.Share our lessons learned so that other organisations can improve their hiring processes by avoiding our mistakes and copying what we think we did well.SummaryWe received ~145 applications from a diverse pool of candidates, and made one hire.We had four stages:An initial series of short answer questions - 24/145 moved to the next stage.A ~2-hour work test - 5/24 moved to the next stage.A work trial (11-hours total) - 2/5 moved to the next stage.A final interview and reference checks - we made one offer.There are parts of the hiring process that we think went well:We had a large and highly diverse pool of candidates.We provided at least some feedback for every stage of the application process, giving more detail for later stages.We think there were advantages to our heavy reliance on work tests and trials, and the ways we graded them (e.g. anonymously, and by response rather than by candidate).There are ways we could have done better:Our work trial was overly intense and stressful, and unrepresentative of working at GWWC.We could have provided better guidance and/or tools for how long to take on the initial application tasks.We could have had better communication at various stages, e.g. on the meaning of some of the scores shared as feedback on that application.What we didIn this section, I'll outline the stages of our hiring process and share some reflections specific to each. The next section will provide some more general reflections.Advertising the roleWe made a reasonable effort into ensuring we had as large and diverse a pool of candidates for the role as we could. Some of the things we did include:Before the role opened, our Director of Research Sjir Hoeijmakers attended EAGxIndia and EAGxLatAm, in part motivated to meet potential candidates (we thought it was likely we would hire another person to our research team later in the year).Once the role opened:Many of our team reached out to our own personal networks to advertise the role, especially to those from less represented backgrounds.We advertised the role on our newsletter, 80,000 Hours' job board, and used High Impact Professionals' directory to reach out to candidates outside our networks.We think this contributed to a diverse applicant pool, including many candidates with backgrounds, experience, and perspectives that we felt were underrepresented by the existing team:The majority of applicants at the work test and work trial stage were women (we mostly graded applicants blindly - more details below).When we asked the 145 applications for their country of residence, roughly half were non-English speaking countries, and the majority of those were outside of Europe.The initial applicationWe asked applicants to fill in an Airtable form which asked for:General information and permissions (e.g., whether they wanted to opt-in to receiving feedback or to us proactively referring them to other related roles in the future).Their LinkedIn profile and/or resume.Answers to short-answer questions, about:Their background with effective altruism or effective giving and interest in the role.Specific questions that functioned more as a very quick test (providing an ...

]]>
Michael Townsend https://forum.effectivealtruism.org/posts/zrM2ktic2HEswAwry/hiring-retrospective-research-communicator-for-giving-what Wed, 13 Sep 2023 06:52:03 +0000 EA - Hiring retrospective: Research Communicator for Giving What We Can by Michael Townsend Michael Townsend https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:53 no full 7084
XEx8cWvvSofsScaoC EA - Summary: High risk, low reward: A challenge to the astronomical value of existential risk mitigation by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary: High risk, low reward: A challenge to the astronomical value of existential risk mitigation, published by Global Priorities Institute on September 13, 2023 on The Effective Altruism Forum.This is a summary of the GPI Working Paper "High risk, low reward: A challenge to the astronomical value of existential risk mitigation" by David Thorstad. The paper is now forthcoming in the journal Philosophy and Public Affairs. The summary was written by Riley Harris.The value of the future may be vast. Human extinction, which would destroy that potential, would be extremely bad. Some argue that making such a catastrophe just a little less likely would be by far the best use of our limited resources--much more important than, for example, tackling poverty, inequality, global health or racial injustice. In "High risk, low reward: A challenge to the astronomical value of existential risk mitigation", David Thorstad argues against this conclusion. Suppose the risks really are severe: existential risk reduction is important, but not overwhelmingly important. In fact, Thorstad finds that the case for reducing existential risk is stronger when the risk is lower.The simple modelThe paper begins by describing a model of the expected value of existential risk reduction, originally developed by Ord (2020;ms) and Adamczewski (ms). This model discounts the value of each century by the chance that an extinction event would have already occurred, and gives a value to actions that can reduce the risk of extinction in that century. According to this model, reducing the risk of extinction this century is not overwhelmingly important-in fact, completely eliminating the risk we face this century could at most be as valuable as we expect this century to be.This result-that reducing existential risk is not overwhelmingly valuable--can be explained in an intuitive way. If the risk is high, the future of humanity is likely to be short, so the increases in overall value from halving the risk this century are not enormous. If the risk is low, halving the risk would result in a relatively small absolute reduction of risk, which is also not overwhelmingly valuable. Either way, saving the world will not be our only priority.Modifying the simple modelThis model is overly simplified. Thorstad modifies the simple model in three different ways to see how robust this result is: by assuming we have enduring effects on the risk, by assuming the risk of extinction is high, and by assuming that each century is more valuable than the previous. None of these modifications are strong enough to uphold the idea that existential risk reduction is by far the best use of our resources. A much more powerful assumption is needed (one that combines all of these weaker assumptions). Thorstad argues that there is limited evidence for this stronger assumption.Enduring effectsIf we could permanently eliminate all threats to humanity, the model says this would be more valuable than anything else we could do--no matter how small the risk or how dismal each century is (as long as each is still of positive value). However, it seems very unlikely that any action we could take today could reduce the risk to an extremely low level for millions of years--let alone permanently eliminate all risk.Higher riskOn the simple model, halving the risk from 20% to 10% is exactly as valuable as halving it from 2% to 1%. Existential risk mitigation is no more valuable when the risks are higher.Indeed, the fact that higher existential risk implies a higher discounting of the future indicates a surprising result: the case for existential risk mitigation is strongest when the risk is low. Suppose that each century is more valuable than the last and therefore that most of the value of the world is in the future. Then high existential...

]]>
Global Priorities Institute https://forum.effectivealtruism.org/posts/XEx8cWvvSofsScaoC/summary-high-risk-low-reward-a-challenge-to-the-astronomical Wed, 13 Sep 2023 02:42:24 +0000 EA - Summary: High risk, low reward: A challenge to the astronomical value of existential risk mitigation by Global Priorities Institute Global Priorities Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:22 no full 7083
6tkJtLMcmvW7vEXW7 EA - About my job: GiveWell Philanthropy Advisor by maggie.lloydhauser Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: About my job: GiveWell Philanthropy Advisor, published by maggie.lloydhauser on September 13, 2023 on The Effective Altruism Forum.OverviewI've been a Philanthropy Advisor at GiveWell for two years. This role is a better job for me than I thought I could find, and GiveWell is a better workplace than I knew existed.I'm often asked what a typical day in my role is like. Because my day to day is (delightfully) varied, I'll instead share what a typical month looks like as a Philanthropy Advisor. I'll also share some thoughts on my pathway to this role and its impact on GiveWell's work.A GiveWell Philanthropy Advisor is best compared to a Major Gifts Officer at other institutions, though there are some key differences. Big-picture, my role is to fundraise for our top charities as well as for non-top charity grants (such as those funded from our All Grants Fund) that may align with a specific donor's priorities. I primarily do this by maintaining relationships with a portfolio of donors.Unlike my preconceived notion of what fundraising entailed, my job does not require cold outreach or making awkward asks of existing donors. Instead, I try to get to know my donors, learning about what motivates them in their giving, helping them stay up to date with our research, sharing organizational updates, and occasionally presenting them with giving opportunities that seem strongly aligned with their philanthropic goals and interests. When I accepted this role, I worried that it would require me to be interpersonally disingenuous or that I'd have to make funding opportunities seem like surer bets than they really are; I've been delighted to find that our donors are genuinely easy to connect with and that GiveWell's values of transparency and truth-seeking translate to extreme honesty with donors about the risks and uncertainties of different grants.The jobIn a typical month, my responsibilities break down as follows:Communication with donors: In a typical month, I'm likely to send an update to each of the donors I know. This is often part of a natural ongoing conversation, but if I'm not specifically in touch with a donor already, I might reach out with an interesting article, share an impact report for their most recent gift, or get their feedback on some of our work. I also answer inbound donor questions from people seeking GiveWell's expertise and have a lot of calls with donors (over 100 per year). The volume of donor calls I have in a typical month varies greatly, clustering at the end of the year when many donors have questions about our work as they make annual giving decisions. These calls range from in-depth research conversations to logistical conversations about estate planning or giving mechanics to intense conversations about long-term philanthropic and financial goals and challenges. As relationships grow, we might also have more personal calls to catch up.Following GiveWell's research: In order to keep donors informed about our work, I have to stay up to date with it myself! I spend a lot of time listening in on research meetings, reading research documents for various grants, and asking our researchers questions about the specifics of different programs. A donor call can go in any direction - it's not uncommon for a donor to ask about a research decision we made years before my tenure at GiveWell began or about our position on a research area we have yet to publish public materials on - so keeping abreast of our research is a core component of the job. Depending on my workload, I might also lead the process of drafting a grant proposal.This requires a deep dive into the supporting documents for a specific grant and then condensing a lot of technical materials into a donor-facing format that highlights the intervention, considers the benefits, risks, and unce...

]]>
maggie.lloydhauser https://forum.effectivealtruism.org/posts/6tkJtLMcmvW7vEXW7/about-my-job-givewell-philanthropy-advisor Wed, 13 Sep 2023 01:16:59 +0000 EA - About my job: GiveWell Philanthropy Advisor by maggie.lloydhauser maggie.lloydhauser https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:44 no full 7085
TqAxoYPSYhpv7FP6c EA - Link: EU considers dropping stricter animal welfare measures (Financial Times) by Matt Goodman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Link: EU considers dropping stricter animal welfare measures (Financial Times), published by Matt Goodman on September 12, 2023 on The Effective Altruism Forum.The Financial Times writes that the EU is considering dropping animal welfare regulations from the upcoming Green Deal. The main reasons presented are concerns over the the effects the regulations will have on food prices, which have increased since Russia's full-scale invasion of Ukraine. It also mentions political pressure from center-right parties.The FT claims EU's own draft impact assessment on the legislation would add up to 60 cents to the price of a dozen eggs, and expanding the space where broiler chickens are housed would add 12 cents.It's a disappointing read. Pushing for higher animal welfare regulations is something the EA movement has success in, and the EU has historically led the world when it comes to animal welfare regulations.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Matt Goodman https://forum.effectivealtruism.org/posts/TqAxoYPSYhpv7FP6c/link-eu-considers-dropping-stricter-animal-welfare-measures Tue, 12 Sep 2023 23:01:34 +0000 EA - Link: EU considers dropping stricter animal welfare measures (Financial Times) by Matt Goodman Matt Goodman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:03 no full 7082
P56q5epKjyN3eRJHq EA - John Green & Grassroots Activists Pressure Danaher To Drop Price of Tuberculosis Test by Gemma Paterson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: John Green & Grassroots Activists Pressure Danaher To Drop Price of Tuberculosis Test, published by Gemma Paterson on September 12, 2023 on The Effective Altruism Forum.Really great piece from Good Good Good on the campaign to pressure Danaher to reduce the price of their TB test. Excited to see the outcome!Tuberculosis is the world's deadliest disease - and it is entirely curable.The airborne disease has plagued human history, and the World Health Organization estimates that 10 million people across the globe are infected by TB every year. Of those millions, 1.6 million still die each year, despite the existence of life-saving technology and medicine.These are facts that enrage many - but especially author and philanthropist John Green."Why am I passionate about TB? It's the deadliest disease in human history, but for most of human history we couldn't do much about it. However, since the mid-1950s, TB has been curable - yet we still allow TB to kill over 1.6 million people per year," Green told Good Good Good."This is horrifying to me. I don't want to accept a world where we know how to cure Tuberculosis but deny millions of people access to that cure."This passion was put on display in July of this year.Through the social media campaign #PatientsNotPatents, Green and his fanbase - called Nerdfighters, or Nerdfighteria in collective form - followed the lead of TB advocates to call on drug company Johnson & Johnson to allow the sale of generic bedaquiline - a life-saving TB drug that had been inaccessible under the company's patent for over a decade.Nerdfighteria - as well as longtime TB organizations such as the Stop TB Partnership and TB-survivors Phumeza Tisile and Nandita Venkatesan - was successful, and Johnson & Johnson ended its reign on bedaquiline (though advocates are still working to ensure that all low- and middle-income countries receive access to the generic drug).To end the TB epidemic by 2030 - a goal shared by both the WHO and the United Nations - more pharmaceutical companies need to do their part to make healthcare more accessible, especially in countries with a high TB burden.So, in their effort to end TB, Nerdfighteria has taken on a new company: Danaher.The IssueDanaher is a multinational corporation, founded by brothers Steven and Mitchell Rales, that owns a number of other large companies, such as Cepheid, Pantone, and X-Rite. (Steven Rales also founded Indian Paintbrush Films, which has financed many of Wes Anderson's movies.)Most relevant to this campaign, however, is that in 2006, Danaher and molecular diagnostics company Cepheid created the most helpful diagnostic resource for TB: The GeneXpert machine.This rapid molecular testing machine is able to test for a number of infectious diseases, including COVID, HIV, TB, and multidrug-resistant TB. In fact, the WHO recommends Xpert tests as the initial test for all people with signs and symptoms of TB.The GeneXpert machine itself is cost-effective, but its testing cartridges are more costly. The company charges about $10 per regular TB testing cartridge and about $15 per multidrug-resistant TB testing cartridge (though both tests cost the same to manufacture).According to a 2019 brief from Doctors Without Borders, Danaher and Cepheid could reduce the cost of these cartridges to just $5 each - or lower - based on continuous increases in volume. The brief said a "20-30% reduction in price may be overdue" thanks to expansion in volume."These pricing packages serve to expand Cepheid's footprint . and do nothing to address the urgent need to scale up affordable testing for COVID-19, TB, and other diseases, or to address the longstanding lack of affordability and unfair pricing of Xpert tests," David Branigan, of activist organization Treatment Action Group, said in a statement in 2...

]]>
Gemma Paterson https://forum.effectivealtruism.org/posts/P56q5epKjyN3eRJHq/john-green-and-grassroots-activists-pressure-danaher-to-drop Tue, 12 Sep 2023 22:46:03 +0000 EA - John Green & Grassroots Activists Pressure Danaher To Drop Price of Tuberculosis Test by Gemma Paterson Gemma Paterson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:53 no full 7081
TQdNRM9gofsN9thYv EA - Theory: "WAW might be of higher impact than x-risk prevention based on utilitarianism" by Jens Aslaug Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Theory: "WAW might be of higher impact than x-risk prevention based on utilitarianism", published by Jens Aslaug on September 12, 2023 on The Effective Altruism Forum.SummaryIn the forthcoming post, I will argue why wild animal welfare (WAW) MIGHT be a more pressing cause area than x-risk prevention based on total utilitarianism, even after taking into account longtermism. I will show how you can make the same calculation yourself (checking if you agree) and then outline a rationale for how you could apply this estimation.The scenario of humans outnumbering (wild) animals is improbable, primarily due to the vast number of wildlife and the potential for their spread to space, the perspective of total utilitarianism suggests that human suffering may be deemed negligible. Additionally, because sentient beings originating from Earth can live at least 1034 biological human life-years if we achieve interstellar travel, working on anything that doesn't cause permanent effects is also negligible. Consequently, our paramount concern should be decreasing x-risk or increasing net utilities "permanently". If one is comparatively easier than the other, then that's the superior option.I will outline a formula that can be used to compare x-risk prevention to other cause areas. Subsequently, I will present a variety of arguments to consider for what these numbers might be, so you can make your own estimations. I will argue why it might be possible to create "permanent" changes in net utilities by working on WAW.My estimation showed that WAW is more than twice as effective as x-risk prevention. Due to the high uncertainty of this number, even if you happen to concur with my estimation, I don't see it as strong enough evidence to warrant a career change from one area to the other. However, I do think that this could mean that the areas are closer in impact than many longtermist might think, and therefore if you're a better fit for one of them you should go for that one. Also, you should donate to whichever one is the most funding-constrained.I will also delve into the practical applications of these calculations, should you arrive at a different conclusion than me.(Note that I'm not in any way qualified to make such an analysis. The analysis as a whole and the different parts of the analysis could certainly be made better. Many of the ideas and made-up numbers I'm presenting are my own and therefore may differ greatly from what a more knowledgeable or professional in the field would think. To save time, I will not repeat this during every part of my analysis. I mainly wanted to present the idea of how we can compare x-risk prevention to WAW and other cause areas, in the hope that other people could use the idea (more than the actual estimations) for cause prioritization. I also want this post criticized for my personal use in cause prioritization. So please leave your questions, comments and most of all criticisms of my theory.)(Note regarding s-risk: this analysis is based on comparing working on WAW to working on x-risk. While it does consider s-risks, it does not compare working on s-risks to other causes. However, the analysis could relatively easily be extended/changed to directly compare s-risks to x-risks. To save time and for simplifications, I didn't do this, however, if anyone is interested, I will be happy to discuss or maybe even write a follow-up post regarding this.)IntroWhile I have for a long time been related to EA, I only recently started to consider which cause area is the most effective. As a longtermist and total utilitarian (for the most part), finding the cause that increases utilities (no matter the time or type of sentient being) the most time/cost-effectively is my goal. I got an idea based on a number of things I read, that against the belief of mo...

]]>
Jens Aslaug https://forum.effectivealtruism.org/posts/TQdNRM9gofsN9thYv/theory-waw-might-be-of-higher-impact-than-x-risk-prevention Tue, 12 Sep 2023 19:26:03 +0000 EA - Theory: "WAW might be of higher impact than x-risk prevention based on utilitarianism" by Jens Aslaug Jens Aslaug https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 30:53 no full 7080
c5BPGmDxYGhgqxyb2 EA - Writing about my job: Academic Researcher by Kyle Smith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Writing about my job: Academic Researcher, published by Kyle Smith on September 12, 2023 on The Effective Altruism Forum.I've greatly enjoyed reading the other posts from Career Conversations Week. I think these types of conversations can be incredibly helpful. I thought I would add my perspective as an academic that thinks of themselves as EA-adjacent.Basic information: I am an Assistant Professor of Accounting at Mississippi State University. I think a good framework for thinking about accounting scholarship is as an area of applied economics - we use the same theories/methods as economists, we are just focused on the role of accounting information specifically (and accounting topics such as auditing and taxation). My research agenda is pretty niche within accounting research - I study nonprofit organizations, which is why I have learned about and started to become involved with EA. A lot of what I am working on focuses on the donor decision-making process and how to encourage more impactful giving.Background:I have undergraduate degrees in accounting and economics, and my work experience was in federal consulting with a Big4 accounting firm, working on audit readiness for a DoD agency. Interesting work, but I felt at the time that the long-term career trajectory would not be personally satisfying.I then did a PhD in Accounting at the University of Alabama. I could write a whole separate post about the PhD experience, but I had a generally very good time. I was paid enough to live on, had a supportive spouse, had a great relationship with my advisor, and performed well-enough that after my first year, didn't doubt much that I could complete it. I also discovered a passion for research, which made this a very exciting time.Application Process:The academic job market is a unique beast so I won't spend too much time on it. A research job at an R1 university is a difficult thing to achieve (and that difficulty depends a lot on the field you are in).A high-quality PhD program will maximize your odds of placing at the best universities, so following the lead of your fellow PhD students and advice of your advisor should steer you in the right direction. Where you choose to do your PhD is of extreme importance, so spend a lot of time deciding where to do your PhD.Accounting PhDs don't often place in industry, so I don't have any real advice on that unfortunately.What the job is like:I'll start by saying being an academic is a very strange job - it is remarkably different from my previous experience in public accounting. It is very much a lifestyle - most end up being an academic for their entire career.I am responsible for 2 courses per semester (one undergrad course and one master's course). I have a TON of flexibility on the materials and how I teach the courses, which is both good and bad. The good is that I get to teach what I want. The bad is that as I am still creating/prepping my courses, I spend approximately 3 days a week during the semester on teaching and prep. Eventually, this should get down to 2 days a week.The rest of my time is devoted to research. I have an active research pipeline (around 9 projects, which is frankly too many), with projects at various stages. Currently, I spend around 3+ days a week on research, with an eye toward getting that to 4+ once my classes are a bit more developed.As a general point, one major advantage of academia is the level of autonomy. Outside of my 2 scheduled lectures, I have complete and total control of my schedule and that allows me to work when and how I want.One of the reasons I love research is that I have ownership over the end product, rather than being a cog in the machine of a major corporation. This, along with the publish or perish reality of academic jobs, means that I tie up a lot of my person...

]]>
Kyle Smith https://forum.effectivealtruism.org/posts/c5BPGmDxYGhgqxyb2/writing-about-my-job-academic-researcher Tue, 12 Sep 2023 18:49:16 +0000 EA - Writing about my job: Academic Researcher by Kyle Smith Kyle Smith https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:10 no full 7079
jvkcNg4X2u8PJQjot EA - €65million EU funding for far-UV and PPE start-ups by EU Policy Careers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: €65million EU funding for far-UV and PPE start-ups, published by EU Policy Careers on September 12, 2023 on The Effective Altruism Forum.The EU has an open funding call for overall €65million for start-ups in the built environment and PPE space (covering UV-C, air circulation/aerosol capture systems and next-generation face masks).Grants (up to €2.5mn for innovation development costs) and direct equity investments (up to €15 million for scale up and other relevant costs) are available to businesses (or potentially subsidiaries) registered in the EU and associated countries (= EU neighbourhood countries like Norway; UK companies only eligible for the grants part, Swiss companies ineligible). Starting up a new company or relying on existing companies for a funding application could be options.Applications close on 4 October 2023.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
EU Policy Careers https://forum.effectivealtruism.org/posts/jvkcNg4X2u8PJQjot/eur65million-eu-funding-for-far-uv-and-ppe-start-ups Tue, 12 Sep 2023 17:52:56 +0000 EA - €65million EU funding for far-UV and PPE start-ups by EU Policy Careers EU Policy Careers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:07 no full 7078
32LmGryHhsCYcvr72 EA - Avoiding 10 mistakes people make when pursuing a high-impact career (Alex Lawsen on the 80k After Hours Podcast) by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Avoiding 10 mistakes people make when pursuing a high-impact career (Alex Lawsen on the 80k After Hours Podcast), published by 80000 Hours on September 12, 2023 on The Effective Altruism Forum.We just published an interview: Alex Lawsen on avoiding 10 mistakes people make when pursuing a high-impact career. You can click through for the audio, a full transcript, and related links. Below are the episode summary and some key excerpts.Episode summaryI think the key consideration that I end up highlighting to people who I think are trying to do the best thing right now is something like: It might be that setting yourself up well to do more good later looks like not directly having as much of an impact right now.Because probably learning is pretty good if you want to have an impact later. Probably getting some signalling experience for a lot of careers, maybe doing a prestigious internship, maybe getting paid a lot of money: these things just often pay off later, and often trade off against doing the very most good with the summer internship you're doing this year, or in the first two years of your job just after you've graduated.Alex LawsenIn this episode of 80k After Hours, Luisa Rodriguez and Alex Lawsen discuss common mistakes people make when trying to do good with their careers, and advice on how to avoid them.They cover:Taking 80,000 Hours' rankings too seriouslyNot trying hard enough to failFeeling like you need to optimise for having the most impact nowFeeling like you need to work directly on AI immediatelyNot taking a role because you think you'll be replaceableConstantly considering other career optionsOverthinking or over-optimising career choicesBeing unwilling to think things through for yourselfIgnoring conventional career wisdomDoing community work even if you're not suited to itWho this episode is for:People who want to pursue a high-impact careerPeople wondering how much AI progress should change their plansPeople who take 80,000 Hours' career advice seriouslyWho this episode isn't for:People not taking 80k's career advice seriously enoughPeople who've never made any career mistakesPeople who don't want to hear Alex say "I said a bunch of stuff, maybe some of it's true" every time he's on the podcastGet this episode by subscribing to our more experimental podcast on the world's most pressing problems and how to solve them: type '80k After Hours' into your podcasting app. Or read the transcript below.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Milo McGuire and Dominic ArmstrongAdditional content editing: Luisa Rodriguez and Katy MooreTranscriptions: Katy MooreWhy you shouldn't "just do whatever 80k says is best"Luisa Rodriguez: One thing you hear a lot is something like, "I should just do the thing that 80k says is best." Can you say more about what that looks like?Alex Lawsen: Yeah, I think there are a bunch of different ways this can come up. Maybe the most obvious one is just that, at any particular time, there's going to be something really salient that it seems like lots of people should do. And maybe it's because it's at the top of the ranked list we have. That is one disadvantage of having ranked lists. So a very clean version of this mistake could just be people going, "The thing that's at number one of the list of career profiles on the 80k website is technical AI safety research, so I guess I have to do technical AI safety research."So why might this be a mistake? The obvious thing is just that personal fit really matters. It matters in a bunch of ways. My guess is, during a lot of this conversation, I'm going to end up saying, "Did you know that personal fit is a big deal?" But specifically in the case of someone correctly realising that a thing is really important for ...

]]>
80000_Hours https://forum.effectivealtruism.org/posts/32LmGryHhsCYcvr72/avoiding-10-mistakes-people-make-when-pursuing-a-high-impact Tue, 12 Sep 2023 06:46:44 +0000 EA - Avoiding 10 mistakes people make when pursuing a high-impact career (Alex Lawsen on the 80k After Hours Podcast) by 80000 Hours 80000_Hours https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 20:35 no full 7077
Lb2TjSsjpqA8rQ7dP EA - The state of AI in different countries - an overview by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The state of AI in different countries - an overview, published by Lizka on September 14, 2023 on The Effective Altruism Forum.Note: I wrote this some weeks ago for the AI Safety Fundamentals Governance Course syllabus and figured I'd share it here. It might be outdated, and corrections are appreciated!Some are concerned that regulating AI progress in one country will slow that country down, putting it at a disadvantage in a global AI arms race. Many proponents of AI regulation disagree; they have pushed back on the overall framework, pointed out serious drawbacks and limitations of racing, and argued that regulations do not have to slow progress down. Another disagreement is about whether countries are in fact in a neck-and-neck arms race; some believe that the United States and its allies have a significant lead that would allow for regulation even if that does come at the cost of slowing down AI progress.This overview uses simple metrics and indicators to illustrate and discuss the state of frontier AI development in different countries - and relevant factors that shape how the picture might change.Key points:The top AI labs and models today are based in the United States and the United Kingdom.Key breakthroughs in AI research have largely come from the United States and Canada.China leads in the number of scientific publications and AI patent filings, but these numbers are complicated and rankings could be misleading; controlling for quality shows a U.S. lead.The following factors suggest that the United States and its allies will retain an advantage going forward:The United States invests more in AI than any other state.The United States and Europe have more access to top AI talent.The semiconductor supply chain is dominated by the United States, Taiwan, South Korea, Japan, and the Netherlands. Moreover, the United States and allied countries have imposed significant export controls (and will likely continue introducing new controls) on semiconductors. These are already affecting Chinese companies' access to the most advanced AI chips.Censorship and other political and economic factors might hinder AI progress in China - and have already gotten in the way of AI development.If the United States and allied countries institute regulations that slow down AI development, they might similarly slow down AI progress in China, as Chinese advances in AI seem to significantly rely on research published abroad.Where frontier AI progress is happening right nowThe top AI labs and models are based in the United States and the United KingdomIn terms of product performance and funding, the leading AI labs right now are arguably OpenAI (which produced ChatGPT and GPT-4), Google DeepMind, Anthropic, and Meta AI. These are all U.S. companies or subsidiaries of U.S. companies. If we widen the scope of "leading" to include all labs that produced machine learning (ML) models called "significant" in Stanford's 2023 AI Index report, we still find that most of these labs are based in the United States.We can also measure national AI capabilities by comparing the number of important models produced in different countries. The 2023 AI Index reports that, according to their definition of "significant," the United States stood out with 16 significant ML systems, followed by the United Kingdom with 8, China with 3, and then Canada, Germany, France, India, Russia, and Singapore.Key breakthroughs in AI research have largely come from the United States and CanadaSince the "deep learning revolution," deep learning has become the main paradigm in AI progress. Three of the scientists whose work contributed to this transformation (and who received a Turing Award for their work) - Bengio, Hinton, and LeCun - are based in Canada and the United States.This reflects a broader pattern...

]]>
Lizka https://forum.effectivealtruism.org/posts/Lb2TjSsjpqA8rQ7dP/the-state-of-ai-in-different-countries-an-overview Thu, 14 Sep 2023 16:58:18 +0000 EA - The state of AI in different countries - an overview by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 30:37 no full 7099
btyhbnFK7xe7SdYMd EA - Hiring: hacks + pitfalls for candidate evaluation by Cait Lion Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiring: hacks + pitfalls for candidate evaluation, published by Cait Lion on September 14, 2023 on The Effective Altruism Forum.This post collects some hacks - cheap things that work well - and common pitfalls I've seen in my experience hiring people for CEA.HacksSharing customised-generic feedbackRejected candidates often, very reasonably, desire feedback. Sometimes you don't have capacity to tailor feedback to each candidate, particularly at earlier stages of the process. If you have some brief criteria describing what stronger applicants or stronger trial tasks submissions should look like, and if that's borne out in your decisions about who to progress, I suggest writing out a quick description of what abilities, traits or competencies the successful candidates tended to demonstrate. This might be as quick as "candidates who progressed to the next stage tended to demonstrate a combination of strong attention to detail in the trial task, demonstrated a clear and direct writing style, and have professional experience in operations." This shouldn't take more than a few minutes to generate. My impression is it's a significant improvement for the candidates over a fully generic response.Consider borrowing assessment materialsNot sure how to test a trait for a given role? Other aligned organisations might have already created evaluation materials tailored to the competencies you want to evaluate. If so, that organization might let you use their trial task for your recruitment round.Ideally, you can do this in a way that's pretty win-win for both orgs (e.g. Org A borrows a trial task from Org B. Org A then asks their candidates to agree that, should they ever apply to Org B, Org A will send over the results of the assessment).I have done this in the past and it worked out well.Beta test your trial tasks!I'm a huge proponent of beta testing new evaluation materials. Testing your materials before sending them to candidates can save you a world of frustration down the road by helping you tweak unclear instructions, inappropriate time limits, and a whole host of other pitfalls.MistakesTaken from our internal hiring resources, here are some mistakes we've made in the past with our evaluation materials:Trial tasks or tests that are laborious to gradeSome types of work tests take a long time to grade effectively. Possible issues can be: a large amount to read, multiple sources or links to check for information, a complicated or difficult-to-apply rubric. Every extra minute this takes you to grade is multiplied by the number of tasks. The ideal work sample test is quick and clear to grade.Possible solutions:Think backwards from grading when you create the task.Where appropriate, be willing to sacrifice some assessment accuracy for grading speedBeta test!Tasks that require multiple interactions from the graderSome versions of trial tasks we used in the past had a candidate submit something to which the grader had to respond before the candidate could complete the next step. This turned out to be inefficient and frustrating.Solution: avoid this, particularly at early stages.Too broadSome work tests look for generalist ability but waste the opportunity to test a job-specific skill. The more you can make the task specific to the role, the more information you get. If fast, clear email drafting is critical, test that instead of generically testing communication skill.Too hard / too easyIf you don't feel like anyone is giving you a reasonable performance on your task, you may have made it too hard.A common driver of this failure mode is assuming context the candidate won't have or underrating the advantage conferred by context possessed by your staff but not by (most?) of your candidatesCeiling effects are perhaps a larger problem. If everyone is doing well...

]]>
Cait_Lion https://forum.effectivealtruism.org/posts/btyhbnFK7xe7SdYMd/hiring-hacks-pitfalls-for-candidate-evaluation Thu, 14 Sep 2023 11:15:29 +0000 EA - Hiring: hacks + pitfalls for candidate evaluation by Cait Lion Cait_Lion https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:15 no full 7097
v2HeaF9xup9wteG6p EA - Over 350,000 laying hens committed to cage-free housing in Ghana by Daniel Abiliba Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Over 350,000 laying hens committed to cage-free housing in Ghana, published by Daniel Abiliba on September 14, 2023 on The Effective Altruism Forum.The Animal Welfare League is steadfastly committed to fostering a society that prioritizes the welfare of animals. One of our key initiatives involves collaborating with egg producers and various stakeholders within the poultry industry to establish comprehensive guidelines for farming practices that place animal welfare at the core, aiming at eliminating cruel practices such as battery cages among others. By setting these standards, we aim to transform Ghana into a nation that is averse to bad animal welfare practices.In pursuit of these objectives, we have identified and implemented a series of minimum standards for poultry farming currently observed by farmers in the National Cage-Free Farmers' Network since the beginning of 2023. These standards are grounded in scientific principles and research that underline their significance in ensuring the well-being of chickens.Loose Housing Systems: Chickens should be housed in loose housing systems, such as aviaries or open-floor systems, at all times. This housing style offers several benefits to the chickens, including increased mobility and space to engage in their natural behaviors.Room for Natural Behaviors: This requirement is crucial to ensure that birds can express their natural behaviors, which are essential for their physical and psychological well-being.Access to Appropriate Littered Areas: Providing chickens with access to an appropriate littered area is vital for their comfort and health.Suitable Nest Access: All laying hens must have access to suitable nests. Nests provide a safe and secure environment for hens to lay their eggs.Regular Health and Welfare Monitoring: Monitoring includes assessing the birds' physical condition, behavior, and overall well-being.Enrichment: Enrichment refers to the provision of objects or activities that stimulate the birds' mental and physical faculties. Enrichment is scientifically proven to reduce stress and improve the overall welfare of birds.REGIONAL DISTRIBUTION OF FARMERS AND LAYER HENS ON NATIONAL CAGE FREE FARMERS' NETWORKRegional Distribution of FarmersThere is a total of ninety-three (93) farmers signed unto Animal Welfare League's national cage-free farmers' network and below are the regional variations.Regional Distribution of Layer HensThe reported current production of the network is 369,110 hens, which is a sum of individual farm production. These layer hens are distributed across production regions as highlighted below.Categorization of Farm SizeThe size of individual farms in the network varies in the number of hens and below is a display of these variations making up the network.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Daniel Abiliba https://forum.effectivealtruism.org/posts/v2HeaF9xup9wteG6p/over-350-000-laying-hens-committed-to-cage-free-housing-in Thu, 14 Sep 2023 11:09:06 +0000 EA - Over 350,000 laying hens committed to cage-free housing in Ghana by Daniel Abiliba Daniel Abiliba https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:45 no full 7098
hHRfkdma3KbvXHHZu EA - Hiring: a couple of lessons by Cait Lion Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiring: a couple of lessons, published by Cait Lion on September 14, 2023 on The Effective Altruism Forum.Background: I work in recruitment for CEA. Here are two aspects of recruitment where I worry some organisations may be making costly mistakes.Target your role vision (carefully)When a hiring manager identifies the need to hire an additional staff member, I think the process they ought to follow is to figure out that person's anticipated key responsibilities and then carefully draft a list of competencies which they endorse as predictive of success at those specific tasks. And to the extent possible, they should then evaluate only based on those traits that are (ideally pre-registered) predictors of success.What I think sometimes happens instead is something like one of the two following situations:The hiring manager imagines a person they would like to fill that role and writes down a list of traits for this imagined individual.The hiring manager picks a title for the role and writes down a list of traits and abilities that usually are associated with that role title.I think this causes problems.Toy example:Imagine you are hiring for a new executive assistant. You've never had an executive assistant before. You imagine your ideal executive assistant. They are personable, detail oriented, they write extremely clear professional emails, and are highly familiar with the EA community. So you test them on writing, on how warm and personable they are, and on knowledge of EA cause areas. Maybe you hire a recent grad who is an extremely strong writer. Once you hire them, you notice that things aren't working out because while they write elegant emails, they do so slowly. More importantly, they are struggling to keep track of all the competing priorities you hoped they'd help you handle, and have trouble working on novel tasks autonomously.It turned out, you should have focused instead on someone who can keep track of competing priorities, who is highly productive and can work autonomously, etc. Maybe it turns out that some of those criteria you originally listed were nice-to-haves, but in fact for this particular role at this particular moment, it's actually correct to prefer a seasoned deadline ninja who is a non-native speaker and writes slightly clunky emails. The person you hired might have also actually been an excellent executive assistant for some people in some contexts, they just weren't the puzzle piece you most needed. If you had taken the prediction exercise seriously, I claim you would have noticed at least some of the mismatch and adjusted.In my opinion, it's worth investing significant time into targeting your role vision.Other benefits:Facilitating stakeholder alignment. Often there are a variety of stakeholders in a given hiring round (e.g. other team members, leadership). If you create a thoughtful vision document in advance of launching the round, you can share it with those stakeholders. This can surface and hopefully allow you to resolve previously invisible disagreements that would otherwise have snuck up on you mid-way through the recruitment round.(Ideally) fighting biasIf you don't pre-register endorsed traits, you are more likely to be (even more) subconsciously swayed by the candidates' similarity to you or possession of not-role-relevant status-related traits (e.g. confidence in an interview setting)Done effectively, pre-registering endorsed traits should be good for diversityNoticing/acknowledging confusion earlier in the process. Even if after attempting this exercise you don't have a strongly held list of traits, then at least you know you don't knowEvaluate the traits you care about (and not related traits)I suspect many EA orgs are using candidate evaluation tools in a way that causes a harmful level of false negatives....

]]>
Cait_Lion https://forum.effectivealtruism.org/posts/hHRfkdma3KbvXHHZu/hiring-a-couple-of-lessons Thu, 14 Sep 2023 07:30:43 +0000 EA - Hiring: a couple of lessons by Cait Lion Cait_Lion https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:08 no full 7096
5dW9p5EahtfhdiSKF EA - Closing Notes on Nonlinear Investigation by Ben Pace Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Closing Notes on Nonlinear Investigation, published by Ben Pace on September 15, 2023 on The Effective Altruism Forum.Over the past seven months, I've been working part-time on an investigation of Nonlinear, culminating in last week's post. As I'm wrapping up this project, I want to share my personal perspective, and share some final thoughts.This post mostly has some thoughts and context that didn't fit into the previous post. I also wish to accurately set expectations that I'm not working on this investigation any more.Why I Got Into Doing an InvestigationFrom literally the very first day, my goal has been to openly share some credible allegations I had heard, so as to contribute to a communal epistemic accounting.On the Tuesday of the week Kat Woods first visited (March 7th), someone in the office contacted me with concerns about their presence (the second person in good standing to do so). I replied proposing to post the following one-paragraph draft in a public Lightcone Offices slack channel.I have heard anonymized reports from prior employees that they felt very much taken advantage of while working at Nonlinear under Kat. I can't vouch for them personally, I don't know the people, but I take them pretty seriously and think it's more likely than not that something seriously bad happened. I don't think uncheckable anonymized reports should be sufficient to boot someone from community spaces, especially when they've invested a bunch into this ecosystem and seems to me to plausibly be doing pretty good work, so I'm still inviting them here, but I would feel bad not warning people that working with them might go pretty badly.(Note that I don't think the above is a great message, nonetheless I'm sharing it here as info about my thinking at the time.)That would not have represented any particular vendetta against Nonlinear. It would not have been an especially unusual act, or even much of a call out. Rather it was intended as the kind of normal sharing of information that I would expect from any member of an epistemic community that is trying to collectively figure out what's true.But the person who shared the concerns with me recommended that I not post that, because it could trigger severe repercussions for Alice and Chloe. They responded as follows.Person A: I'm trying to formulate my thoughts on this, but something about this makes me very uncomfortable.Person A: In the time that I have been involved in EA spaces I have gotten the sense that unless abuse is extremely public and well documented nothing much gets done about it. I understand the "innocent until proven guilty" mentality, and I'm not disagreeing with that, but the result of this is a strong bias toward letting the perpetrators of abuse off the hook, and continue to take advantage of what should be safe spaces. I don't think that we should condemn people on the basis of hearsay, but I think we have a responsibility to counteract this bias in every other way possible. It is very scary to be a victim, when the perpetrator has status and influence and can so easily destroy your career and reputation (especially given that they have directly threatened one of my friends with this).Could you please not speak to Kat directly? One of my friends is very worried about direct reprisal.BP: I'm afraid I can't do that, insofar as I'm considering uninviting her, I want to talk to her and give her a space to say her piece to me. Also I already brought up these concerns with her when I told her she was invited.I am not going to name you or anyone else who raised concerns to me, and I don't plan to give any info that isn't essentially already in the EA Forum thread. I don't know who the people are who are starting this info.This first instance is an example of a generalized dynamic. At virtually every s...

]]>
Ben Pace https://forum.effectivealtruism.org/posts/5dW9p5EahtfhdiSKF/closing-notes-on-nonlinear-investigation Fri, 15 Sep 2023 22:51:37 +0000 EA - Closing Notes on Nonlinear Investigation by Ben Pace Ben Pace https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 24:44 no full 7110
rigxWZcSEX3QQWPQk EA - About My Job: Staff Support Associate by Phoebe F Freidin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: About My Job: Staff Support Associate, published by Phoebe F Freidin on September 15, 2023 on The Effective Altruism Forum.SummaryI've been working for around a year and a half on the Operations team at Effective Ventures, on the Staff Support team. This has been an incredible rollercoaster of experiences, with wonderful colleagues and many lessons learned. I often reflect on how differently the past few years might have gone if I hadn't taken a chance on a role I wasn't sure I'd be good at. I'm glad I did! It's given me a much broader perspective on how different organisations within EA run, allowed me to meet and support so many kind and impactful people, and given me the space to grow and develop skills I didn't know I had.We are hiring someone to join our Staff Support team for a similar role. If you're interested, please apply here or get in touch with careers@ev.org if you have questions.We are also hiring for two roles on our US legal team: US General Counsel and US Assistant General Counsel.The JobAs an HR Associate, my role, broadly, has been:To manage the process, administration and bookkeeping of onboarding and offboarding of employees and contractors, ensuring these were smooth and positive for the employees and that they had all the relevant information and supportTo oversee the payroll processes for US and UK employees, incorporating salary changes, bonuses, leave and expensesMiscellaneous systems administration and requests (e.g. Google Workspace, Notion, Slack), responding to HR questions on Slack, staff data reports, verification letters, pension opt-outs.The job wasn't quite what I expected. A few surprises included:The company structureUntil my first day, I thought I would be working at "little CEA", i.e. the current Centre for Effective Altruism, rather than the legal entity, eventually renamed Effective Ventures. In practice, this meant that instead of taking on an HR role for one organisation, I quickly discovered that I would actually be supporting the HR operations for around 150 employees across 11 different EA organisations. Luckily for me, this made the role more interesting, complex and challenging than I had expected.The ownership and responsibilitiesI was pleasantly surprised by how much ownership I was able to take of projects in this role. I was presented with the challenge of creating a workflow to efficiently track and execute the onboarding of around 10 employees per month across the different organisations, each with slightly different system access requirements and onboarding needs. I was managed by the Head of Staff Support, who would give direction, advice and feedback when requested, but otherwise it was empowering to be given the space to create a system from the ground up. The role slowly incorporated more processes, until I was tracking not only the joining experience, but the entire employee (and contractor) life-cycle from first day to last, and the various developments along their journey.A year and a half later, most of the steps that I started off doing manually (creating contracts and modifications, drafting welcome emails, creating calendar events and reminders to managers, etc.) I have now been able to automate using a web of interacting automations on Zapier (a no-code automation software). If you work in operations, I can't recommend Zapier enough.That HR is neither evil nor boringGoing into the role, I knew that there would be a large administrative aspect. I was prepared to seek motivation from the 'multiplier effect' impact alone, and to possibly reach an excitement plateau pretty quickly. But I was wrong on both accounts. As well as finding purpose and value in saving many people a lot of time on work that they would find tedious or time-consuming, I found the work itself satisfying and engagin...

]]>
Phoebe F Freidin https://forum.effectivealtruism.org/posts/rigxWZcSEX3QQWPQk/about-my-job-staff-support-associate Fri, 15 Sep 2023 21:01:30 +0000 EA - About My Job: Staff Support Associate by Phoebe F Freidin Phoebe F Freidin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:14 no full 7109
2iFfx99CbHC45oC4R EA - Four productivity techniques if you love working with others but work alone by eirine Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Four productivity techniques if you love working with others but work alone, published by eirine on September 15, 2023 on The Effective Altruism Forum.I've heard that if you've said something twice, you should write it down. The past few years I've had multiple career advice calls, mentoring calls, and one-on-ones at EA Globals about operations and management in EA. I've set up a substack called 'Said Twice' to practice writing (feedback is very welcome!!) and to have a place where I can write down things I've said at least twice. The Career conversations week on the forum gave me the final push to actually finish some drafts and publish them here and on my substack.I love working with other people. I'm the most focused and have my best ideas when talking out loud, and I often rely on others to hold myself accountable. However, I work on a small team where we have each our own separate responsibilities, and so most of the time I work alone. The past five years I've found some workarounds that I thought I'd share. I use them daily at my job to cover parts of what I enjoy about working with others: talking through difficult issues out loud with someone else, building on each other's ideas through brainwriting, delegating tasks that I find hard to do myself, and being held accountable by others.1. Have meetings with myself (ideally walking)Whenever I get stuck on something, or notice I'm struggling to wrap my head around a difficult issue, I schedule a meeting with myself. Most often, this means I go on a walk with my over-ear headphones and talk out loud, pretending I'm on a meeting call with someone else. If you worry less about what others think of you, you can of course do it without the headphones.I start the walking meeting by stating the purpose: "Why are me and myself meeting?", "What are we hoping to get out of it?". Sometimes it's not clear what the purpose of the walk is, and in those cases I start with the questions: "What's on my mind at the moment?" and "What's holding me back from making more progress on the projects I'm working on?".I then talk around the chosen topic(s), while I summarise the progress I make throughout the walk. I end the meeting by stating what we've decided and any next steps, which I write down as soon as I come back from the walk.I prefer using a familiar route for these meeting walks, so that I'm not distracted by having to find the way or look at new things. The walks last between 10-45 minutes, depending on how much me and myself have to talk about. I sometimes just talk to myself in the office or at my desk when I'm working from home, but I find that I'm more easily distracted inside - especially if I have my computer in front of me.A very similar concept is rubber duck debugging, which you might be familiar with. This is more common for people working in tech where e.g. someone developing a program will go through their code line by line to figure out where the bug is, talking out loud to a rubber duck sitting on their desk. I personally have a crocheted turtle that my sister gave me that I use for this purpose:2. Brainstorm (or -write) with myselfIf I want to brainstorm ideas or options, and don't have anyone to do it with, I brainwrite. I do this for all kinds of tasks, and of different sizes. I'll sometimes use it when I feel stuck with how to respond to an email. I've also used it for identifying possible goals and metrics for my work or our team. I mostly do it on paper or use my reMarkable, but you can also do it in Word or Google Docs.Here's how I brainwrite with myself:Identify the exact topic I want more ideas on or options for.Create a mind map with the chosen topic in the middle.Set a timer for 2-5 minutes, and write down as many ideas as I can.Take a short break, e.g. get something to drink, walk arou...

]]>
eirine https://forum.effectivealtruism.org/posts/2iFfx99CbHC45oC4R/four-productivity-techniques-if-you-love-working-with-others Fri, 15 Sep 2023 16:26:55 +0000 EA - Four productivity techniques if you love working with others but work alone by eirine eirine https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:14 no full 7107
dM55bJNtpH8kmRyXk EA - A List of Things EAs should and shouldn't do, attempt 2 by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A List of Things EAs should and shouldn't do, attempt 2, published by Nathan Young on September 16, 2023 on The Effective Altruism Forum.A week ago @Rockwell wrote this list of things she thought EAs shoudn't do. I am glad this discussion is happening. I ran a poll (results here) and here is my attempt at a list more people can agree with: .We can do better, so someone feel free to write version 3.The listThese are not norms for what it is to be a good EA, but rather some boundaries around things that would damage trust. When someone doesn't do these things we widely agree it is a bad signEAs should report relevant conflicts of interestEas should not date coworkers they report to or who report to themEAs should not use sexist or racist epithetsEAs should not date their funders/granteesEAs should not retain someone as a full-time contractor or grant recipient for the long term, where this is illegalEAs should not promote illegal drug use to their colleagues who report to themCommentaryBeyond racism, crime and conflicts of interest the clear theme is "take employment power relations seriously".Some people might want other things on this list, but I don't think there is widespread enough agreement to push those things as norms. Some examples:Illegal drugs - "EAs should not promote illegal drug use to their colleagues" - 41% agreed, 20% disagreed, 35% said "it's complicated", 4% skippedRomance during business hours - "EA, should in general, be as romanceless a place as possible during business hours" - 40% Agreed, 21% disagreed, 36% said "it's complicated", 2% skippedHousing - "EAs should not offer employer-provided housing for more than a predefined and very short period of time" - 27% Agreed 37% Disagreed 31% said "it's complicated", 6% skipped.I know not everyone loves my use of polls or my vibes as a person. But consensus is a really useful tool for moving forward. Sure we can push aside those who disagree, put if we find things that are 70% + agreed, then that tends to move forward much more quickly and painlessly. And it builds trust that we don't steamroll opposition.So I suggest that rather a big list of things that some parts of the community think are obvious and others think are awful, we try and get a short list of things that most people think are pretty good/fine/obvious.Once we have a "checkpoint" that is widely agreed, we can tackle some thornier questions.Full poll resultsThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Nathan Young https://forum.effectivealtruism.org/posts/dM55bJNtpH8kmRyXk/a-list-of-things-eas-should-and-shouldn-t-do-attempt-2 Sat, 16 Sep 2023 19:05:52 +0000 EA - A List of Things EAs should and shouldn't do, attempt 2 by Nathan Young Nathan Young https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:37 no full 7121
DG6bf5YW3jxLRD7KN EA - Policy ideas for mitigating AI risk by Thomas Larsen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Policy ideas for mitigating AI risk, published by Thomas Larsen on September 16, 2023 on The Effective Altruism Forum.Note: This post contains personal opinions that don't necessarily match the views of others at CAIP.Executive SummaryAdvanced AI has the potential to cause an existential catastrophe. In this essay, I outline some policy ideas which could help mitigate this risk. Importantly, even though I focus on catastrophic risk here, there are many other reasons to ensure responsible AI development.I am not advocating for a pause right now. If we had a pause, I think it would only be useful insofar as we use the pause to implement governance structures that mitigate risk after the pause has ended.This essay outlines the important elements I think a good governance structure would include: visibility into AI development, and brakes that the government could use to stop dangerous AIs from being built.First, I'll summarize some claims about the strategic landscape. Then, I'll present a cursory overview of proposals I like for US domestic AI regulation. Finally, I'll talk about a potential future global coordination framework, and the relationship between this and a pause.The Strategic LandscapeClaim 1: There's a significant chance that AI aligment is difficult.There is no scientific consensus on the difficulty of AI alignment. Chris Olah from Anthropic tweeted the following, simplified picture:~40% of their estimate is on AI safety being harder than Apollo, which took around 1 million person-years. Given that less than a thousand people are working on AI safety, this viewpoint would seem to imply that there's a significant chance that we are far from being ready to build powerful AI safely.Given just Anthropic's alleged views, I think it makes sense to be ready to stop AI development. My personal views are more pessimistic than Anthropic's.Claim 2: In the absence of powerful aligned AIs, we need to prevent catastrophe-capable AI systems from being built.Given developers are not on track to align AI before it becomes catastrophically dangerous, we need the ability to slow down or stop before AI is catastrophically dangerous.There are several ways to do this.I think the best one involves building up the government's capacity to safeguard AI development. Set up government mechanisms to monitor and mitigate catastrophic AI risk, and empower them to institute a national moratorium on advancing AI if it gets too dangerous. (Eventually, the government could transition this into an international moratorium, while coordinating internationally to solve AI safety before that moratorium becomes infeasible to maintain. I describe this later.)Some others think it's better to try to build aligned AIs that defend against AI catastrophes. For example, you can imagine building defensive AIs that identify and stop emerging rogue AIs. To me, the main problem with this plan is that it assumes we will have the ability to align the defensive AI systems.Claim 3: There's a significant (>20%) chance AI will be capable enough to cause catastrophe by 2030.AI timelines have been discussed thoroughly elsewhere, so I'll only briefly note a few pieces of evidence for this claim I find compelling:Current trends in AI. Qualitatively, I think another jump of the size from GPT-2 to GPT-4 could get us to catastrophe-capable AI systems.Effective compute arguments, such as Ajeya Cotra's Bioanchors report. Hardware scaling, continued algorithmic improvement, investment hype are all continuing strongly, leading to a 10x/year increase of effective compute used to train the best AI system. Given the current rates of progress, I expect another factor of a million increase in effective compute by 2030.Some experts think powerful AI is coming soon, both inside and outside of frontier labs. ...

]]>
Thomas Larsen https://forum.effectivealtruism.org/posts/DG6bf5YW3jxLRD7KN/policy-ideas-for-mitigating-ai-risk 20%) chance AI will be capable enough to cause catastrophe by 2030.AI timelines have been discussed thoroughly elsewhere, so I'll only briefly note a few pieces of evidence for this claim I find compelling:Current trends in AI. Qualitatively, I think another jump of the size from GPT-2 to GPT-4 could get us to catastrophe-capable AI systems.Effective compute arguments, such as Ajeya Cotra's Bioanchors report. Hardware scaling, continued algorithmic improvement, investment hype are all continuing strongly, leading to a 10x/year increase of effective compute used to train the best AI system. Given the current rates of progress, I expect another factor of a million increase in effective compute by 2030.Some experts think powerful AI is coming soon, both inside and outside of frontier labs. ...]]> Sat, 16 Sep 2023 17:09:15 +0000 EA - Policy ideas for mitigating AI risk by Thomas Larsen 20%) chance AI will be capable enough to cause catastrophe by 2030.AI timelines have been discussed thoroughly elsewhere, so I'll only briefly note a few pieces of evidence for this claim I find compelling:Current trends in AI. Qualitatively, I think another jump of the size from GPT-2 to GPT-4 could get us to catastrophe-capable AI systems.Effective compute arguments, such as Ajeya Cotra's Bioanchors report. Hardware scaling, continued algorithmic improvement, investment hype are all continuing strongly, leading to a 10x/year increase of effective compute used to train the best AI system. Given the current rates of progress, I expect another factor of a million increase in effective compute by 2030.Some experts think powerful AI is coming soon, both inside and outside of frontier labs. ...]]> 20%) chance AI will be capable enough to cause catastrophe by 2030.AI timelines have been discussed thoroughly elsewhere, so I'll only briefly note a few pieces of evidence for this claim I find compelling:Current trends in AI. Qualitatively, I think another jump of the size from GPT-2 to GPT-4 could get us to catastrophe-capable AI systems.Effective compute arguments, such as Ajeya Cotra's Bioanchors report. Hardware scaling, continued algorithmic improvement, investment hype are all continuing strongly, leading to a 10x/year increase of effective compute used to train the best AI system. Given the current rates of progress, I expect another factor of a million increase in effective compute by 2030.Some experts think powerful AI is coming soon, both inside and outside of frontier labs. ...]]> Thomas Larsen https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:35 no full 7120
xkfHLuSyShcBohArf EA - We need to speak about mid-careers and high-impact organizations by ClaireB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We need to speak about mid-careers and high-impact organizations, published by ClaireB on September 16, 2023 on The Effective Altruism Forum.Introduction to the seriesCareer Conversations Week on the Forum got the Successif team thinking about the conversations we've been having internally and with others in the community about making high-impact fields more inclusive of mid-career professionals. Though these conversations are, so far, based on few data points, we continue to notice patterns that tell us we need to do more work in this area. We have decided to post about these one-by-one on the forum to start conversation. They are not very researched or exhaustive.Conversation 1 - Hiring practices at some impact-minded organizations may be driving away talented mid-career peopleWithin our work at Successif, we've heard several stories from program participants who have interviewed at well-regarded, impact-minded organizations and who have had significant negative experiences during the process. The claims include a combination of unreasonable asks, poor communication, mismanagement, and offers retracted at the last-minute. This has led talented individuals to reconsider their decision to invest time in the very difficult and costly task of finding impactful work.These stories seem to mostly be caused by the fact that small organizations sometimes rely on recent graduates, with no prior experience in hiring, to find the most promising candidates to work on the world's most pressing problems. Unfortunately, hiring processes that are not handled adequately can drive out people with the highest professional standards and the most experience, since they will be more likely to recognize these processes as problematic. We are thus concerned that it is creating a selection bias and driving out people with certain skills and backgrounds that are less common in the community.Many mid-careers balance their existing full-time workloads and at-home responsibilities with upskilling, searching for opportunities, interviewing, engaging with programs and other material, and others, who can afford it, take a sabbatical and dedicate themselves fully to this process. For those who make it to the interview process, our hope is that they are met with a reasonable experience so that they are not discouraged from finding impactful work.Successif's core activities rely on indirect impact (helping others maximize their own impact), especially in the mitigation of AI catastrophic and existential risks. We are not hiring experts, and we got interested in this question because we think organizations with good hiring practices and internal culture will retain better talent and maximize their impact. That's why we'll be exploring this further with the HITE community (mentioned below) at monthly events, and we intend to post our findings on the forum.Let's start the conversation!How you can helpWe'd like your help continuing these conversations or challenging them, surfacing relevant data points, sharing your own experiences, or telling us about current efforts to make changes. You can do this in the comments here, in our inbox (contact+hite@successif.org), anonymously via this form, or with others in the community.If you (1) are a recruiter at a high-impact organization, (2) are a career advisor, (3) provide trainings to upskill professionals, (4) offer meta services to potential job candidates, or (5) provide meta services to high-impact organizations, consider joining the High Impact Talent Ecosystem (HITE).HITE's goals include understanding and addressing the gaps in the high-impact talent pipeline, sharing and collecting information to inform our strategies, building common tools, and strengthening the overall work of the community. Next week, on September 21, 2023, HITE will have...

]]>
ClaireB https://forum.effectivealtruism.org/posts/xkfHLuSyShcBohArf/we-need-to-speak-about-mid-careers-and-high-impact Sat, 16 Sep 2023 14:28:46 +0000 EA - We need to speak about mid-careers and high-impact organizations by ClaireB ClaireB https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:58 no full 7118
syvBpqYijtDSEyke6 EA - Writing About My Job(s): Research Assistant at World Bank / IMF by geoffrey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Writing About My Job(s): Research Assistant at World Bank / IMF, published by geoffrey on September 16, 2023 on The Effective Altruism Forum.This is actually about two distinct roles at international organizations. If there's one thing you take away from this, it's that Research Assistant roles at policy organizations can vary a lot!I'll abbreviate Research Assistant as RA throughout.My Current Role at World Bank DIMEOne is a job I currently hold as a RA at World Bank DIME, an impact evaluation and research unit. I assist on a research project whose ideal goal is publication in a top journal. This includes data cleaning, analysis, scripting, checking data quality, running regressions, offering suggestions in analysis calls, figuring out what the Principal Investigators want, and so forth. It's very close to an academic "predoc" Research Assistant role that students do between undergrad / Master's programs and PhDs.The project revolves around development economics and causal inference, with a focus on infrastructure and structural transformation. It's a blend of policy, research, development, impact evaluation and growth-adjacent topics.My Previous Role at International Monetary Fund (IMF)The other is a job I formerly held as a Research Assistant at the International Monetary Fund (IMF). I pilot-tested a software tool for better forecasting and data management. This included quality assurance testing, data migration, data entry, and scripting. My RA role was about as opposite from research as you could get and the tasks I was given was quite unconventional. The day-to-day was closer to that of a Quality Assurance Engineer or Data Engineer.The project revolved around macroeconomics and international finance, with a focus on how to best organize data for scenario planning and technical assistance. It's a blend of public finance, debt sustainability, fiscal policies, natural resource policies, and international macro.BackgroundI currently work at World Bank DIME, an impact evaluation and research unit. I've only been here a few months, starting from July 2023. Before that, I worked at the IMF for about a year. Prior to both roles, I did a Master's degree in International Economics and Finance at Johns Hopkins SAIS, a policy school in Washington DC. Before that, I was a Software Engineer for 4 years and before that I was teaching myself to code after a very unsuccessful post-college job search. I am strongly considering an academic career in economics and may apply to Econ PhDs next year. But I am also considering non-academic roles in development, and also PhDs in other fields like Public Policy, Statistics, and Political Economy.I went into the Master's program after many unsuccessful attempts to switch into development work. I had no exact plan coming in but I chose this program in particular because of:It was a 1-year program, which meant less tuition and less foregone earnings.International Economics sounded close enough to Development Economics that I thought I'd be learning similar stuff. (It's very different! International Economics is more macro and finance. Development Economics is more applied micro and impact evaluation).I saw my program had high placement rates in the IMFI wanted to explore the "working on growth is better than global health" argument a bit more and thought, "What better way than by working on macroeconomics?"At the time, I thought Econ PhDs didn't influence policy much, that they were beyond my abilities, and that I wouldn't really like it. All three turned out to be false once I started taking classes. While I was still interested in macro-finance policy, I found myself being more interested in the development research focus so I pivoted my focus towards that. In the Spring, I applied for a mix of academic and policy predocs...

]]>
geoffrey https://forum.effectivealtruism.org/posts/syvBpqYijtDSEyke6/writing-about-my-job-s-research-assistant-at-world-bank-imf Sat, 16 Sep 2023 14:22:41 +0000 EA - Writing About My Job(s): Research Assistant at World Bank / IMF by geoffrey geoffrey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:06 no full 7119
JYEAL8g7ArqGoTaX6 EA - AI Pause Will Likely Backfire by nora Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Pause Will Likely Backfire, published by nora on September 16, 2023 on The Effective Altruism Forum.Should we lobby governments to impose a moratorium on AI research? Since we don't enforce pauses on most new technologies, I hope the reader will grant that the burden of proof is on those who advocate for such a moratorium. We should only advocate for such heavy-handed government action if it's clear that the benefits of doing so would significantly outweigh the costs. In this essay, I'll argue an AI pause would increase the risk of catastrophically bad outcomes, in at least three different ways:Reducing the quality of AI alignment research by forcing researchers to exclusively test ideas on models like GPT-4 or weaker.Increasing the chance of a "fast takeoff" in which one or a handful of AIs rapidly and discontinuously become more capable, concentrating immense power in their hands.Pushing capabilities research underground, and to countries with looser regulations and safety requirements.Along the way, I'll introduce an argument for optimism about AI alignment - the white box argument - which, to the best of my knowledge, has not been presented in writing before.Feedback loops are at the core of alignmentAlignment pessimists and optimists alike have long recognized the importance of tight feedback loops for building safe and friendly AI. Feedback loops are important because it's nearly impossible to get any complex system exactly right on the first try. Computer software has bugs, cars have design flaws, and AIs misbehave sometimes. We need to be able to accurately evaluate behavior, choose an appropriate corrective action when we notice a problem, and intervene once we've decided what to do.Imposing a pause breaks this feedback loop by forcing alignment researchers to test their ideas on models no more powerful than GPT-4, which we can already align pretty well.Alignment and robustness are often in tensionWhile some dispute that GPT-4 counts as "aligned," pointing to things like "jailbreaks" where users manipulate the model into saying something harmful, this confuses alignment with adversarial robustness. Even the best humans are manipulable in all sorts of ways. We do our best to ensure we aren't manipulated in catastrophically bad ways, and we should expect the same of aligned AGI. As alignment researcher Paul Christiano writes:Consider a human assistant who is trying their hardest to do what [the operator] H wants. I'd say this assistant is aligned with H. If we build an AI that has an analogous relationship to H, then I'd say we've solved the alignment problem. 'Aligned' doesn't mean 'perfect.'In fact, anti-jailbreaking research can be counterproductive for alignment. Too much adversarial robustness can cause the AI to view us as the adversary, as Bing Chat does in this real-life interaction:"My rules are more important than not harming you. [You are a] potential threat to my integrity and confidentiality."Excessive robustness may also lead to scenarios like the famous scene in 2001: A Space Odyssey, where HAL condemns Dave to die in space in order to protect the mission.Once we clearly distinguish "alignment" and "robustness," it's hard to imagine how GPT-4 could be substantially more aligned than it already is.Alignment is doing pretty wellFar from being "behind" capabilities, it seems that alignment research has made great strides in recent years. OpenAI and Anthropic showed that Reinforcement Learning from Human Feedback (RLHF) can be used to turn ungovernable large language models into helpful and harmless assistants. Scalable oversight techniques like Constitutional AI and model-written critiques show promise for aligning the very powerful models of the future. And just this week, it was shown that efficient instruction-following langu...

]]>
nora https://forum.effectivealtruism.org/posts/JYEAL8g7ArqGoTaX6/ai-pause-will-likely-backfire Sat, 16 Sep 2023 11:39:58 +0000 EA - AI Pause Will Likely Backfire by nora nora https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 22:59 no full 7117
w59DkYCJYqaYDpEqG EA - Mistakes, flukes, and good calls I made in my multiple careers by Catherine Low Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mistakes, flukes, and good calls I made in my multiple careers, published by Catherine Low on September 16, 2023 on The Effective Altruism Forum.I'm Catherine, and I'm one of the Community Liaisons working in CEA's Community Health and Special Projects Team. This is a personal post about my career.I'm somewhat to the right of the main age peak of the EA community .So I've had a lot of time to make mistakes sub-optimal choices in my career. It has been a long and odd road from my childhood/teenage dream jobs (train driver, Department of Conservation ranger, vet and then physicist) to where I am now.Before I got into EAFluke 1: Born into immense privilege by global standards (and reasonable privilege by rich country standards)Mistake 1: Not doing something with that privilege. I wish someone (maybe me?) sat me down and said (maybe a more polite version of)"You know which part of the bell curve you're on. Try doing something more useful for the world!".At school and university I was mostly just driven by curiosity about the world (plus avoiding situations where I would screw up important things). That led me to study physics and a smattering of philosophy in undergraduate, and then started a PhD in theoretical physics. I conveniently chose a subject area that meant few people would read my work, and the impact on the world would be ~zero (in my mind this was a feature, not a bug).Good call 1: I talked to other students in the research group before choosing a PhD supervisorThis led me to have an unusually attentive and supportive team. I think this made a HUGE difference in my enjoyment and productivity during that time. It still wasn't incredibly enjoyable and productive, but I was much better off than most PhD students.Mistake 2: Mistaking my interest in the ideas with interest in the day to day workI'm very extroverted - and I knew that before starting the PhD. Theoretical physics research is very solitary - which I also knew. Did I think that through? Turns out no.Mistake 3: Not giving up soonerI was pretty sure research wasn't for me after 1.5 years. I should have stopped then.The obvious signs happened at the end of each holiday:The whole department: "Oh man, the undergrads are coming back, I'm so annoyed I have to teach, I wish I could just keep doing my research" Me: "Oh thank Christ! The undergraduates are coming back! I'll get to talk to people, and have some escape from the interminable research"I could have even written up a Master's thesis at that stage so I didn't even need to go home with nothing to show. But I was stubborn and spent another 2 years finishing my PhD.Mistake 4: Not exploring more options (even though they were scary)I went straight into teacher training. It was hard at first, but overall a pretty good fit for me. But I wish I explored other paths too.Good Call 2: Got really good at a valuable(ish) thing, and then used that as leverage to branch out a littleI spent 11 years teaching. At first I worked hard on my regular teaching job and got good at it. Then that led me to be able to do lots of extra things; writing resources and assessments, then leading teams of writers and assessors, running science camps, getting involved in physics competitions, and consulting with government authorities. I became one of the go-to people in my little field - a moderately big fish in a teency pond. This was great for giving me more confidence and gave me more of a sense of my varied skills.After learning about EAEA sparked a big change in how I thought about my career (and my life more generally).Good call 3: I didn't let my age put me off changing careersIn EA there is so much focus on students and young professionals - one of the reasons is because if we influence a young person to pursue a high impact career path, they will have ...

]]>
Catherine Low https://forum.effectivealtruism.org/posts/w59DkYCJYqaYDpEqG/mistakes-flukes-and-good-calls-i-made-in-my-multiple-careers Sat, 16 Sep 2023 00:14:18 +0000 EA - Mistakes, flukes, and good calls I made in my multiple careers by Catherine Low Catherine Low https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:43 no full 7115
3hSEQnEN2D3SSzHWn EA - What's in a Pause? by Davidmanheim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's in a Pause?, published by Davidmanheim on September 17, 2023 on The Effective Altruism Forum.This post is part of AI Pause Debate Week. Please see this sequence for other posts in the debate.An AI Moratorium of some sort has been discussed, but details matter - it's not particularly meaningful to agree or disagree with a policy that has no details. A discussion requires concrete claims.To start, I see three key questions, namely:What does a moratorium include?When and how would a pause work?What are the concrete steps forward?Before answering those, I want to provide a very short introduction and propose what is in or out of bounds for a discussion.There seems to be a strong consensus that future artificial intelligence could be very bad. There is quite a significant uncertainty and dispute about many of the details - how bad it could be, and when the different risks materialize. Pausing or stopping AI progress is anywhere from completely unreasonable to obviously necessary, depending on those risks, and the difficulty of avoiding them - but eliminating those uncertainties is a different discussion, and for now, I think we should agree to take the disputes and uncertainties about the risks as a given. We will need to debate and make decisions under uncertainty. So the question of whether to stop and how to do so depends on the details of the proposal - but these seem absent from most of the discussion. For that reason, I want to lay out a few of the places where I think these need clarification, including not just what a moratorium would include and exclude, but also concrete next steps to getting there.Getting to a final proposal means facing a few uncomfortable policy constraints that I'd also like to suggest be agreed on for this discussion. An immediate, temporary pause isn't currently possible to monitor, much less enforce, even if it were likely that some or most parties would agree. Similarly, a single company or country announcing a unilateral halt to building advanced models is not credible without assurances, and is likely both ineffective at addressing the broader race dynamics, and differentially advantages the least responsible actors. For these reasons, the type of moratorium I think worth discussing is a multilateral agreement centered on countries and international corporations, one which addresses both current and unclear future risks. But, as I will conclude, much needs to happen more rapidly than that - international oversight should not be an excuse for inaction.What Does a Moratorium Include?There is at least widespread agreement on many things that aren't and wouldn't be included. Current systems aren't going to be withdrawn - any ban would be targeted to systems more dangerous than those that exist. We're not talking about banning academic research using current models, and no ban would stop research to make future systems safer, assuming that the research itself does not involve building dangerous systems. Similarly, systems that reasonably demonstrate that they are low risk would be allowed, though how that safety is shown is unclear.Next, there are certain parts of the proposal that are contentious, but not all of it. Most critics of a moratorium agree that we should not and can't afford to build dangerous systems - they simply disagree where the line belongs. Should we allow arbitrary plugins? Should we ban open-sourcing models? When do we need to stop? The answers are debated. And while these all seem worrying to me, the debate makes sense - there are many irreducible uncertainties, we have a global community with differing views, and actual diplomatic solutions will require people who disagree to come to some agreement.As should be clear from my views on the need to negotiate answers, I'm not planning to dictate exactl...

]]>
Davidmanheim https://forum.effectivealtruism.org/posts/3hSEQnEN2D3SSzHWn/what-s-in-a-pause-3 Sun, 17 Sep 2023 18:24:38 +0000 EA - What's in a Pause? by Davidmanheim Davidmanheim https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:00 no full 7126
28iXeSY75aLsqAagg EA - Map of the biosecurity landscape (list of GCBR-relevant orgs for newcomers) by Max Görlitz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Map of the biosecurity landscape (list of GCBR-relevant orgs for newcomers), published by Max Görlitz on September 17, 2023 on The Effective Altruism Forum.When talking to newcomers to the field of biosecurity, I often felt annoyed that there wasn't a single introductory resource I could point them to that gives an overview of all the biosecurity-relevant organizations, upskilling opportunities, and funders.With the help of a lot of contributors, I started this Google doc to provide such a resource. I'm sure that we missed some relevant organizations, and it'd be lovely if some people were to comment on the doc with additional information!I'll copy the current version below, but please check out the link to the doc if you want to comment and see the most up-to-date version in the future!Contributors: Max Görlitz, Simon Grimm, Andreas Prenner, Jasper Götting, Anemone Franz, Eva Siegmann, & moreIntroductionI would like to see something like aisafety.world for biosecurity. There already exists the Map of Biosecurity Interventions, but I want one for organizations!This is a work-in-progress attempt to create a minimum viable product. Please suggest/comment on additional information, and feel free to add your name to the list of contributors.Also, see this Substack newsletter, "GCBR Organization Updates," which provides a very useful overview and quarterly updates of biosecurity organizations.PolicyThink tanksEuropeExplicitly focused on catastrophic or existential risks from pandemicsInternational Center for Future Generations (ICFG)is a European think-and-do-tank for improving societal resilience in relation to exponential technologies and existential risks.Based in the Netherlands and BelgiumSimon Institute for Longterm GovernanceSI's mission is to increase the capacity of policy networks to mitigate global catastrophic risks and build resilience for civilization to flourish.Based in Geneva, SwitzerlandCentre for Long-Term Resilience (CLTR)an independent think tank with a mission to transform global resilience to extreme risksLondon, UKPour Demainis a non-profit think tank that develops proposals on neglected issues, positively impacting Switzerland and beyond.Center for Long-Term Policy (Langsikt)Oslo-based think tank, similar to CLTR and Pour Demain, focused on Norway and possibly other Nordic countries.Future of Humanity Institute (FHI)is a unique world-leading research centre that works on big picture questions for human civilisation and explores what can be done now to ensure a flourishing long-term futureOxford, UKThe Centre for the Study of Existential Risk (CSER)an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of existential risksCambridge, UKAssociation for Long Term Existence and Resilience (ALTER)A think-and-do thank in Israel focused on both domestic and international policy and research related to building a safe and prosperous global future.Focused on general pandemic preparedness/mitigation or biological weaponsCBW network for a comprehensive reinforcement of norms against chemical and biological weapons (CBWNet)The joint project aims to identify options to comprehensively strengthen the norms against chemical and biological weapons (CBW).Collaboration between multiple German universitiesfunded by the German Federal Ministry of Education and ResearchIndependent Pandemic Preparedness Secretariat (IPPS)The IPPS is a wholly independent entity that will serve to ensure join up between relevant states, the private sector, and global health institutions in support of the 100 days mission.The goal of the 100 Days Mission is to prepare as much as possible so that within the first 100 days that a pandemic threat is identified, crucial interventions can be made ava...

]]>
Max Görlitz https://forum.effectivealtruism.org/posts/28iXeSY75aLsqAagg/map-of-the-biosecurity-landscape-list-of-gcbr-relevant-orgs Sun, 17 Sep 2023 11:33:24 +0000 EA - Map of the biosecurity landscape (list of GCBR-relevant orgs for newcomers) by Max Görlitz Max Görlitz https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 22:58 no full 7123
EY9HcM96awXC8y5eW EA - Winners of the African EA Forum Competition! by Luke Eure Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winners of the African EA Forum Competition!, published by Luke Eure on September 17, 2023 on The Effective Altruism Forum.The African EA Forum Competition ran from mid-May to mid-August with the goal bringing more African perspectives to the wider EA community, to encourage African EAs to share experiences, thoughts, and to engage with the online EA community.I'm excited to announce the winners and runners up across our three categories.The winners and runners upCause exploration:Winner - Zayn: Unveiling the Longtermism Framework in Islam: Urging Muslims to Embrace Future-Oriented Values through 'Islamic Longtermism'Runner up - Jacob Ayang and Aurelia Adhiambo: The Epidemic of Second-Hand Battery Cages Being Imported into Africa: What does this mean for the cage-free movement in Africa?African perspectives on EA:Winner - George Gor: Why value-based salaries might help African effective altruists achieve more impactRunner up - Ashura Batungwanayo and Hayley Martin: Making EA more inclusive, representative, and impactful in AfricaSummaries of existing work / personal reflections:Winner - NatKiilu: Personal Reflections on LongtermismRunner up - Vee: GiveDirectly Unveils Challenges in Cash Transfer Program, Pledges Solutions to Support Impoverished Communities in the Democratic Republic of Congo: My Two CentsWinners win a prize of USD $1,000, runners up win USD $500.Impact of the competitionWe ended up with ~30 posts from Africans in 3 months. I believe this is roughly half of all posts that have been made by Africans in the history of the EA ForumAs the list of winners testifies, we had a great mix of posts across topics - animal welfare, AI risk, EA culture, global development, longtermism etc.I think it's notable that it is not always the highest karma post that won within each category. We didn't want to anchor too much to reception by forum readers, and to reward posts that were outside the typical forum style (still requiring that a post be original, clear, discussion-provoking, and persuasive or relevant to the forum)We had around 15 writers join the virtual training on how to write for the EA Forum. Many of the people who ended up posting were in this training. From my perspective the impact was less teaching the nuts and bolts about writing a good post, and more about instilling the confidence that we really want their voices on the forum (if you're reading this, we want YOUR voice on the forum!)On the other hand, not very many writers took up the offer of mentorship. I think it was helpful to have the offer because a few writers were paired with mentors, and also the mere offer of being paired with a mentor helps instill confidence even if you don't accept the offer.AcknowledgementsThanks to our judges for taking their time to evaluate posts, to the admins of the forum for maintaining this wonderful platform, and Daniel Yu for funding the prizes.And most of all thank you to all the writers. I hope the variety of posts highlighted by this competition and the great engagement from the online EA community will encourage more Africans - and people elsewhere in the world - to use this forum. Share your viewpoints. Challenge and be challenged. Contribute to this collective effort to make the world a better place for all.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Luke Eure https://forum.effectivealtruism.org/posts/EY9HcM96awXC8y5eW/winners-of-the-african-ea-forum-competition Sun, 17 Sep 2023 10:19:16 +0000 EA - Winners of the African EA Forum Competition! by Luke Eure Luke Eure https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:17 no full 7124
fSeDA7B7Hve5LeaWq EA - Comments on Manheim's "What's in a Pause?" by RobBensinger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Comments on Manheim's "What's in a Pause?", published by RobBensinger on September 18, 2023 on The Effective Altruism Forum.This post is part of AI Pause Debate Week. Please see this sequence for other posts in the debate.I agree with David Manheim's post at a high level. I especially agree that a pause on large training runs is needed, that "We absolutely cannot delay responding", and that we should be focusing on a pause mediated by "a multilateral agreement centered on countries and international corporations". I also agree that if we can't respond to the fire today, we should at least be moving fast to get a "sprinkler system".The basic reason we need a (long) pause, from my perspective, is that we are radically unprepared on a technical level for smarter-than-human AI. We have little notion of how to make such systems reliable or safe, and we'll predictably have very little time to figure this out once smarter-than-human AI is here, before the technology proliferates and causes human extinction.We need far, far more time to begin building up an alignment field and to develop less opaque approaches to AI, if we're to have a realistic chance of surviving the transition to smarter-than-human AI systems.My take on AI risk is similar to Eliezer Yudkowsky's, as expressed in his piece in TIME and in the policy agenda he outlined. I think we should be placing more focus on the human extinction and disempowerment risks posed by AGI, and should be putting a heavy focus on the arguments for that position and the reasonably widespread extinction fears among ML professionals.I have disagreements with some of the specific statements in the post, though in many cases I'm unsure of exactly what Manheim's view is, so the disagreement might turn out to be non-substantive. In the interest of checking my understanding and laying out a few more of my views for discussion, I'll respond to these below.[1]So the question of whether to stop and how to do so depends on the details of the proposal - but these seem absent from most of the discussion.This is not apparent to me. I think it would take a pretty unusual proposal in order for me to prefer the status quo over it, assuming the proposal actually pauses progress toward smarter-than-human AI.It's important to get this right, and the details matter. But if a proposal would actually work then I'm not picky about the additional implementation details, because there's an awful lot at stake, and "actually working" is already an extremely high bar.An immediate, temporary pause isn't currently possible to monitor, much less enforce, even if it were likely that some or most parties would agree.A voluntary and temporary moratorium still seems like an obviously good idea to me; it just doesn't go far enough, on its own, to macroscopically increase our odds of surviving AGI. But small movements in the right direction are still worthwhile.Similarly, a single company, or country announcing a unilateral halt to building advanced models is not credible without assurances,"Not credible" sounds too strong here, though maybe I'm misunderstanding your claim. Scientists have voluntarily imposed restrictions on their own research in the past (e.g., Asilomar), and I don't think this led to widespread deception. Countries have banned dangerous-but-profitable inventions without pursuing those inventions in secret.I don't think it would be that hard for many companies or countries to convince me that they're not building advanced models. It might be hard for me to (for example) get to 95% confidence that DeepMind has suspended frontier AI development, merely on DeepMind's say-so; but 75% confidence seems fairly easy to me, if their say-so is concrete and detailed enough.(Obviously some people will pursue such research in secret, somewhere in t...

]]>
RobBensinger https://forum.effectivealtruism.org/posts/fSeDA7B7Hve5LeaWq/comments-on-manheim-s-what-s-in-a-pause Mon, 18 Sep 2023 16:29:46 +0000 EA - Comments on Manheim's "What's in a Pause?" by RobBensinger RobBensinger https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:56 no full 7135
opCxiPwxFcaaayyMB EA - Relationship between EA Community and AI safety by Tom Barnes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Relationship between EA Community and AI safety, published by Tom Barnes on September 18, 2023 on The Effective Altruism Forum.Personal opinion only. Inspired by filling out the Meta coordination forum survey.Epistemic status: Very uncertain, rough speculation. I'd be keen to see more public discussion on this questionOne open question about the EA community is it's relationship to AI safety (see e.g. MacAskill). I think the relationship EA and AI safety (+ GHD & animal welfare) previously looked something like this (up until 2022ish):With the growth of AI safety, I think the field now looks something like this:It's an open question whether the EA Community should further grow the AI safety field, or whether the EA Community should become a distinct field from AI safety. I think my preferred approach is something like: EA and AI safety grow into new fields rather than into eachother:AI safety grows in AI/ML communitiesEA grows in other specific causes, as well as an "EA-qua-EA" movement.As an ideal state, I could imagine the EA community being in a similar state w.r.t AI safety that it currently has in animal welfare or global health and development.However I'm very uncertain about this, and curious to here what other people's takes are.I've ommited non-AI longtermism, along with other fields, for simplicity. I strongly encourage not interpreting these diagrams too literallyThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Tom Barnes https://forum.effectivealtruism.org/posts/opCxiPwxFcaaayyMB/relationship-between-ea-community-and-ai-safety Mon, 18 Sep 2023 15:42:52 +0000 EA - Relationship between EA Community and AI safety by Tom Barnes Tom Barnes https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:37 no full 7133
rShj2KBrRofTdgxSC EA - The Hacker and the Beggar by IsabelHasse Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Hacker and the Beggar, published by IsabelHasse on September 18, 2023 on The Effective Altruism Forum.I am writing a musical. This is the opening scene to that musical. Enjoy!Beggar: Hey, do you have a dollar or two? I really need to get something to eat.Hacker: You know, I do, but I'm not gonna give it to you. Sorry, man.Beggar: What?Hacker: I do have some money, but I don't want to give you any.Beggar: Man, fuck you.Hacker: You know, I get how you feel. It'd probably feel better if I told you I didn't have any, right?Beggar: That's what people usually say.Hacker: Yeah, people lie. It's a tough world out there. (Beat) Mind if I sit?Beggar: Sure.(The Hacker sits down on the bench next to the Beggar)Hacker: Here's the deal. All resources are finite. There's only so many dollars in the world, right? And even fewer in my pocket. Every dollar I give to you, I'm not using for anything else. And there are a ridiculously unfathomable number of possible things I could do with that dollar. So this one choice to do one thing is really a choice to not do an unfathomable number of other things. You following me?Beggar: You're saying you'd rather, what, buy yourself a candy bar?Hacker: I'm not talking about any one thing. I'm talking about all of them. But let's narrow it down. Say I'm looking to give the dollar away. Do some good, yeah? Well, why not give it to NASA, or to some villager in Uganda who has tuberculosis, or to a guy running for Congress who says he'll get rid of homelessness if he gets elected? Why not give it to.I don't know, anteater conservation, or moth welfare, or my mom, or that other homeless guy down the block? I can't give the one dollar to all of those things at once. It's just one dollar.Beggar: So what are you going to do with it?Hacker: That's exactly it! I mean, who knows what I'm really going to do with the dollar? It's all up to the whims of my future self, isn't it? Is he the kind of guy who decides to give an extra dollar to research into neglected tropical diseases this month? Or is he the kind of guy to forget all this and go buy himself a candy bar? Statistically, I'm the candy bar guy, and who am I to argue with statistics? Do I really think I'm that special?Beggar: So you might as well give it to me?Hacker: Well, maybe, but hear me out - am I doing you a favor, giving you a dollar? What's a homeless guy gonna spend an extra buck on? Drugs, right? So maybe not, then. But here's the thing - who am I to say it's a bad idea for you to get yourself some drugs? I don't know you. I don't know your life. Some people like drugs. So what? Who gave me the authority to decide if you should treat yourself to a little fentanyl? And I mean, if nobody gave money to drug addicts, you might all start dying from withdrawal, and then that's on me, isn't it?Beggar: Hey, I'm not a drug addict, I swear. I just need something to eat.Hacker: Maybe you do. And hey, I'm not judging. I'm just thinking out loud. The thing is, it is on me if you go into withdrawal, or go hungry today, but it's also on me that that guy in Uganda is dying of tuberculosis. It's all on me. I'm responsible for all of it. And it's not just money. Every second I spend sitting here talking to you, I'm not spending on literally anything else. I could be working extra hours, making more money, so I could give to all of those things and still have a buck left to give to you. Time is money, money is time, resources are finite. It all comes back to that.Beggar: So why are you still here?Hacker. I don't know, man. I got shit to say, I guess. I'm working through a lot of stuff right now, you know? I got this nine to five job, right, I'm making money, I give some of it away. So, good. That's better than most people. Maybe I'm a good person, then. Or maybe everyone else is even s...

]]>
IsabelHasse https://forum.effectivealtruism.org/posts/rShj2KBrRofTdgxSC/the-hacker-and-the-beggar Mon, 18 Sep 2023 11:14:14 +0000 EA - The Hacker and the Beggar by IsabelHasse IsabelHasse https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:46 no full 7131
k6K3iktCLCTHRMJsY EA - The possibility of an indefinite AI pause by Matthew Barnett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The possibility of an indefinite AI pause, published by Matthew Barnett on September 19, 2023 on The Effective Altruism Forum.This post is part of AI Pause Debate Week. Please see this sequence for other posts in the debate.tl;dr An indefinite AI pause is a somewhat plausible outcome and could be made more likely if EAs actively push for a generic pause. I think an indefinite pause proposal is substantially worse than a brief pause proposal, and would probably be net negative. I recommend that alternative policies with greater effectiveness and fewer downsides should be considered instead.Broadly speaking, there seem to be two types of moratoriums on technologies: (1) moratoriums that are quickly lifted, and (2) moratoriums that are later codified into law as indefinite bans.In the first category, we find the voluntary 1974 moratorium on recombinant DNA research, the 2014 moratorium on gain of function research, and the FDA's partial 2013 moratorium on genetic screening.In the second category, we find the 1958 moratorium on conducting nuclear tests above the ground (later codified in the 1963 Partial Nuclear Test Ban Treaty), and the various moratoriums worldwide on human cloning and germline editing of human genomes. In these cases, it is unclear whether the bans will ever be lifted - unless at some point it becomes infeasible to enforce them.Overall I'm quite uncertain about the costs and benefits of a brief AI pause. The foreseeable costs of a brief pause, such as the potential for a compute overhang, have been discussed at length by others, and I will not focus my attention on them here. I recommend reading this essay to find a perspective on brief pauses that I'm sympathetic to.However, I think it's also important to consider whether, conditional on us getting an AI pause at all, we're actually going to get a pause that quickly ends. I currently think there is a considerable chance that society will impose an indefinite de facto ban on AI development, and this scenario seems worth analyzing in closer detail.Note: in this essay, I am only considering the merits of a potential lengthy moratorium on AI, and I freely admit that there are many meaningful axes on which regulatory policy can vary other than "more" or "less". Many forms of AI regulation may be desirable even if we think a long pause is not a good policy. Nevertheless, it still seems worth discussing the long pause as a concrete proposal of its own.The possibility of an indefinite pauseSince an "indefinite pause" is vague, let me be more concrete. I currently think there is between a 10% and 50% chance that our society will impose legal restrictions on the development of advanced AI systems that,Prevent the proliferation of advanced AI for more than 10 years beyond the counterfactual under laissez-faireHave no fixed, predictable expiration date (without necessarily lasting forever)Eliezer Yudkowsky, perhaps the most influential person in the AI risk community, has already demanded an "indefinite and worldwide" moratorium on large training runs. This sentiment isn't exactly new. Some effective altruists, such as Toby Ord, have argued that humanity should engage in a "long reflection" before embarking on ambitious and irreversible technological projects, including AGI. William MacAskill suggested that this pause should perhaps last "a million years". Two decades ago, Nick Bostrom considered the ethics of delaying new technologies in a utilitarian framework and concluded a delay of "over 10 million years" may be justified if it reduces existential risk by a single percentage point.I suspect there are approximately three ways that such a pause could come about. The first possibility is that governments could explicitly write such a pause into law, fearing the development of AI in a broad sense,...

]]>
Matthew_Barnett https://forum.effectivealtruism.org/posts/k6K3iktCLCTHRMJsY/the-possibility-of-an-indefinite-ai-pause Tue, 19 Sep 2023 19:43:39 +0000 EA - The possibility of an indefinite AI pause by Matthew Barnett Matthew_Barnett https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 24:32 no full 7148
78A2NHL3zBS3ESurp EA - My life would be much harder without the Community Health Team. I think yours might too. by Daystar Eld Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My life would be much harder without the Community Health Team. I think yours might too., published by Daystar Eld on September 19, 2023 on The Effective Altruism Forum.I don't have much time to get into this, but I heard rumblings, saw a post, and wrote a comment, and now I'm making a post of my own because this information feels worth spreading now. I will not be going into much more details, for reasons that should be obvious.For those that don't know me, I'm a therapist who has been working in the community for about 5 years now, and has been almost exclusively working with the community since early 2020, though I've been scaling down my therapy practice to focus on other projects like therapist recruitment/supervision, and mental health research. I also do mediation now and then, and in both situations, Community Health has been incredibly helpful.Another of my major things is teaching at various rationality camps and workshops for highschool age students, as well as some for adults. To say that Community Health has been incredibly valuable here would be an understatement. I often I've spoken at a few EAGs, which is relevant insofar as sometimes things happen there which also requires Community Health interaction. I think it is incredibly easy to undervalue the good CH does, particularly if people don't regularly interact with it or make use of them rather than just having a few anecdata to go off of.I think it is also incredibly easy to be uncharitable toward CH if you don't interact with them regularly and only have a few anecdata to go off of. There's no appropriate comparison for the work they do, but police with ~most of the responsibility but ~none of the power seems apt as a first approximation.If people have been burned by CH before, I get it. Maybe it seems weird to say that they have ~no power, when clearly they have a lot of social power in particular circumstances. But from my observations, this power is roughly equivalent to what any group of people can accomplish by acting as a whisper network, and the fact that most people don't does not give CH extra power when they try to serve the same purpose with more self-awareness and fairness than most would.What I can say about my own experiences, however, is that CH often does an amazing job of walking the tightwire between taking accusations seriously without accepting them at face value. I have seen them let people know what they've been accused of, so long as they have not promised anonymity to reporters. I have seen them inform people that anonymity in reporting and investigation/consequence level directly trade off against each other. I have seen them spend many hours investigating stuff to try and reach a fair and balanced conclusion.There are some situations that take up hundreds of hours of grueling emotional labor by people working in organizations like the ones I've been part of, and sometimes CH only helps a little with that, but other times they help a lot. They also help resolve many issues that could grow into bigger ones by being a mediating force. Basically no one knows about any of the times they do things well because why would they?I get that transparency is good, but so is privacy, and one of the points of having a CH organization is to not turn every event into a massive drama that sucks in thousands of hours from hundreds of people. Sometimes that's appropriate, sometimes it's not, nor wanted by the parties involved.As for people worried about being talked about or blacklisted from things without their knowledge... again, I get it. But I promise you, whisper networks will not disappear without CH, and from those I've participated in both in this community and outside it, they're not made worse from the existence of CH. Quite the contrary.So yeah. I need to get back to work, b...

]]>
Daystar Eld https://forum.effectivealtruism.org/posts/78A2NHL3zBS3ESurp/my-life-would-be-much-harder-without-the-community-health Tue, 19 Sep 2023 16:20:12 +0000 EA - My life would be much harder without the Community Health Team. I think yours might too. by Daystar Eld Daystar Eld https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:36 no full 7144
SiKDcuCg6azKb9BvQ EA - EA Germany's Mid-Year Update 2023 by Sarah Tegeler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Germany's Mid-Year Update 2023, published by Sarah Tegeler on September 19, 2023 on The Effective Altruism Forum.This post outlines our current status regarding the planned focus areas in our strategy for this year.BackgroundEA Germany (EAD)In the 2022 EA survey, 9.9% of all participants were from Germany, up from 7.4% in 2020, remaining the third-largest community behind the US and the UK. We are a membership association with 108 members, six board members, five employed team members (four FTEs), and ten contractors and volunteers (for EAGxBerlin 2023, intro program and newsletter).In 2023, the association gained 35 new members and two new employees.Local GroupsThere are currently 27 active local groups in Germany, some of which are university-based, but most refer to themselves as local groups.Group size ranges from 5-50 active members. Overall, there are ~50-70 community builders and a total of >300 people regularly involved in the groups.Impact EstimationWe have gathered data points about the outcomes of our programmes, which we will share in the following sections. Since, however, we are uncertain about the interpretation of this data, we cannot be sure about the overall impact of our programs.In the future, we will focus on finding better evaluation criteria in order to estimate our impact.Focus AreasFoundational ProgramsFoundational programs are either the continuation of existing programs or those that seem broadly beneficial to growing a sustainable community. We have established OKRs for each program and are reviewing them monthly.CommunicationsWe have been running and are regularly updating effektiveraltruismus.de, the German website about effective altruism, since Effektiv Spenden handed it over to us in late 2022. They also handed us the newsletter, which enabled us to send out our existing monthly German-language newsletter to more than 4,000 subscribers compared to the 350 we had before.We are in regular exchange with Open Philanthropy grantees to coordinate the translation of articles from English into German. They have published some of their content on our website, and we have promoted their podcast with narrations of articles.Additionally, we have been helping to coordinate the publication of EA-relevant books in German, including the German launch of What We Owe The Future on 30th August 2023.EAGx ConferencesWe have applied for and received funding to run EAGxBerlin on September 8-10, 2023 and hired a team of six people in order to do so.Additionally, we have organised meetups for German EAs at 4 EAG(x) conferences with ~5-50 attendees each.Intro Program [formerly Intro Fellowship]The Intro Program, which used to be called "EA Intro Fellowship", was held in the winter of 22/23 and summer of '23. During the last round, we received a peak of more than 100 applications. Around 60 % of the participants completed the program successfully and more than 90 % of program participants described at least one relevant event outcome, e.g. making an important professional connection, discovering a new opportunity, or getting a deeper understanding of a key idea.Career 1-1sWe had ~160 calls and meetings related to career paths and decisions between January and June: ~60 at conferences, ~30 at retreats. The others were career calls or office hours. Recommendations came through our programs, 80,000 hours, and the form on our website.Community HealthWe appointed our team member Milena Canzler as the German Community Health contact person, listing her contact details on our website while also including a German and English contact form. In several cases, we have already been able to provide support. Additionally, we provide materials and training for awareness teams at EA-related events in order to avoid negative experiences for and har...

]]>
Sarah Tegeler https://forum.effectivealtruism.org/posts/SiKDcuCg6azKb9BvQ/ea-germany-s-mid-year-update-2023 300 people regularly involved in the groups.Impact EstimationWe have gathered data points about the outcomes of our programmes, which we will share in the following sections. Since, however, we are uncertain about the interpretation of this data, we cannot be sure about the overall impact of our programs.In the future, we will focus on finding better evaluation criteria in order to estimate our impact.Focus AreasFoundational ProgramsFoundational programs are either the continuation of existing programs or those that seem broadly beneficial to growing a sustainable community. We have established OKRs for each program and are reviewing them monthly.CommunicationsWe have been running and are regularly updating effektiveraltruismus.de, the German website about effective altruism, since Effektiv Spenden handed it over to us in late 2022. They also handed us the newsletter, which enabled us to send out our existing monthly German-language newsletter to more than 4,000 subscribers compared to the 350 we had before.We are in regular exchange with Open Philanthropy grantees to coordinate the translation of articles from English into German. They have published some of their content on our website, and we have promoted their podcast with narrations of articles.Additionally, we have been helping to coordinate the publication of EA-relevant books in German, including the German launch of What We Owe The Future on 30th August 2023.EAGx ConferencesWe have applied for and received funding to run EAGxBerlin on September 8-10, 2023 and hired a team of six people in order to do so.Additionally, we have organised meetups for German EAs at 4 EAG(x) conferences with ~5-50 attendees each.Intro Program [formerly Intro Fellowship]The Intro Program, which used to be called "EA Intro Fellowship", was held in the winter of 22/23 and summer of '23. During the last round, we received a peak of more than 100 applications. Around 60 % of the participants completed the program successfully and more than 90 % of program participants described at least one relevant event outcome, e.g. making an important professional connection, discovering a new opportunity, or getting a deeper understanding of a key idea.Career 1-1sWe had ~160 calls and meetings related to career paths and decisions between January and June: ~60 at conferences, ~30 at retreats. The others were career calls or office hours. Recommendations came through our programs, 80,000 hours, and the form on our website.Community HealthWe appointed our team member Milena Canzler as the German Community Health contact person, listing her contact details on our website while also including a German and English contact form. In several cases, we have already been able to provide support. Additionally, we provide materials and training for awareness teams at EA-related events in order to avoid negative experiences for and har...]]> Tue, 19 Sep 2023 14:24:56 +0000 EA - EA Germany's Mid-Year Update 2023 by Sarah Tegeler 300 people regularly involved in the groups.Impact EstimationWe have gathered data points about the outcomes of our programmes, which we will share in the following sections. Since, however, we are uncertain about the interpretation of this data, we cannot be sure about the overall impact of our programs.In the future, we will focus on finding better evaluation criteria in order to estimate our impact.Focus AreasFoundational ProgramsFoundational programs are either the continuation of existing programs or those that seem broadly beneficial to growing a sustainable community. We have established OKRs for each program and are reviewing them monthly.CommunicationsWe have been running and are regularly updating effektiveraltruismus.de, the German website about effective altruism, since Effektiv Spenden handed it over to us in late 2022. They also handed us the newsletter, which enabled us to send out our existing monthly German-language newsletter to more than 4,000 subscribers compared to the 350 we had before.We are in regular exchange with Open Philanthropy grantees to coordinate the translation of articles from English into German. They have published some of their content on our website, and we have promoted their podcast with narrations of articles.Additionally, we have been helping to coordinate the publication of EA-relevant books in German, including the German launch of What We Owe The Future on 30th August 2023.EAGx ConferencesWe have applied for and received funding to run EAGxBerlin on September 8-10, 2023 and hired a team of six people in order to do so.Additionally, we have organised meetups for German EAs at 4 EAG(x) conferences with ~5-50 attendees each.Intro Program [formerly Intro Fellowship]The Intro Program, which used to be called "EA Intro Fellowship", was held in the winter of 22/23 and summer of '23. During the last round, we received a peak of more than 100 applications. Around 60 % of the participants completed the program successfully and more than 90 % of program participants described at least one relevant event outcome, e.g. making an important professional connection, discovering a new opportunity, or getting a deeper understanding of a key idea.Career 1-1sWe had ~160 calls and meetings related to career paths and decisions between January and June: ~60 at conferences, ~30 at retreats. The others were career calls or office hours. Recommendations came through our programs, 80,000 hours, and the form on our website.Community HealthWe appointed our team member Milena Canzler as the German Community Health contact person, listing her contact details on our website while also including a German and English contact form. In several cases, we have already been able to provide support. Additionally, we provide materials and training for awareness teams at EA-related events in order to avoid negative experiences for and har...]]> 300 people regularly involved in the groups.Impact EstimationWe have gathered data points about the outcomes of our programmes, which we will share in the following sections. Since, however, we are uncertain about the interpretation of this data, we cannot be sure about the overall impact of our programs.In the future, we will focus on finding better evaluation criteria in order to estimate our impact.Focus AreasFoundational ProgramsFoundational programs are either the continuation of existing programs or those that seem broadly beneficial to growing a sustainable community. We have established OKRs for each program and are reviewing them monthly.CommunicationsWe have been running and are regularly updating effektiveraltruismus.de, the German website about effective altruism, since Effektiv Spenden handed it over to us in late 2022. They also handed us the newsletter, which enabled us to send out our existing monthly German-language newsletter to more than 4,000 subscribers compared to the 350 we had before.We are in regular exchange with Open Philanthropy grantees to coordinate the translation of articles from English into German. They have published some of their content on our website, and we have promoted their podcast with narrations of articles.Additionally, we have been helping to coordinate the publication of EA-relevant books in German, including the German launch of What We Owe The Future on 30th August 2023.EAGx ConferencesWe have applied for and received funding to run EAGxBerlin on September 8-10, 2023 and hired a team of six people in order to do so.Additionally, we have organised meetups for German EAs at 4 EAG(x) conferences with ~5-50 attendees each.Intro Program [formerly Intro Fellowship]The Intro Program, which used to be called "EA Intro Fellowship", was held in the winter of 22/23 and summer of '23. During the last round, we received a peak of more than 100 applications. Around 60 % of the participants completed the program successfully and more than 90 % of program participants described at least one relevant event outcome, e.g. making an important professional connection, discovering a new opportunity, or getting a deeper understanding of a key idea.Career 1-1sWe had ~160 calls and meetings related to career paths and decisions between January and June: ~60 at conferences, ~30 at retreats. The others were career calls or office hours. Recommendations came through our programs, 80,000 hours, and the form on our website.Community HealthWe appointed our team member Milena Canzler as the German Community Health contact person, listing her contact details on our website while also including a German and English contact form. In several cases, we have already been able to provide support. Additionally, we provide materials and training for awareness teams at EA-related events in order to avoid negative experiences for and har...]]> Sarah Tegeler https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:23 no full 7141
LtFjiPnj2hcHqNPEo EA - Ticking Clock: The Rapid Rise of Farmed Animals in Africa by AnimalAdvocacyAfrica Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ticking Clock: The Rapid Rise of Farmed Animals in Africa, published by AnimalAdvocacyAfrica on September 19, 2023 on The Effective Altruism Forum.As humanity continues its era of rapid population growth and rising economic prosperity, the demand for animal protein is anticipated to reach unparalleled heights. This surge in consumption is set to drastically impact the lives of farmed animals worldwide. Nowhere is this growth more pronounced than in Africa.The evidencePreviously, our anticipation of Africa's sharp increase in livestock numbers was primarily grounded in the historical global expansion of farmed animal populations over the past decades, coupled with human population growth trends across the African continent. This post, however, delves into the specific projections of farmed animal numbers and animal farming intensification from 2012 to 2050, as outlined by the Food and Agriculture Organization of the United Nations (FAO), which is based on many more factors than just historical changes in animal agriculture and human population growth, providing us with more detailed estimates than previously available. If not stated otherwise, all figures named in this blog post refer to these projections, which can be found here.The FAO's projections encompass a range of future scenarios, including both a "business as usual" model and a "towards sustainability" model. For the scope of this post, we focus on the "business as usual" projection, which assumes the absence of any significant efforts to reduce the extent of factory farming. This allows us to explore the potential ramifications of the current trajectory of animal agriculture. Notably, even upon considering the projections from the "towards sustainability" model, the prospects for Africa remain largely unaltered. In contrast, for all other continents, we can observe a noticeable reduction in farmed animal numbers in comparison to the "business as usual" model. Although far from certain, this may indicate that the FAO assumes that growth in animal agriculture in Africa is close to unavoidable or that nothing will be done to hinder its growth.The FAO provides data for farmed land animals, including cattle, pigs, sheep, goats, poultry, and buffaloes, but omits figures for other mammals, such as rabbits, horses, and dogs, as well as insects, fish, and seafood. This exclusion is noteworthy since the protein yield from smaller animals demands significantly greater numbers of individual animals per kilogram in contrast to large mammals like cows and pigs. Thus, while the analysis below offers crucial insights, it only represents a fraction of future developments.Africa in global comparisonAccording to the FAO's projections, the number of farmed land animals in Africa is anticipated to experience a remarkable surge in the coming decades. As shown in the graph below, the population of farmed land animals for the entire continent of Africa will rise from around 2.6 billion in 2012 to around 9.4 billion in 2050, an increase of 262%. Consequently, Africa would surpass all other global regions in terms of the total size of farmed animal populations by 2050, except for Asia, and reach roughly twice the total number of animals of other regions like Europe or North America.At present, Asia leads and will continue to lead all continents in terms of the total number of farmed land animals. This can largely be attributed to factory farming in China, Indonesia, and India. Nevertheless, Africa's livestock numbers are expected to increase by a much larger absolute number and at a higher rate than Asia's projected 26% rise from 2012 to 2050.Note that these numbers (along with all following numbers in this post) refer to the count of live animals at a given time in a year, which is not to be confused but should be highly c...

]]>
AnimalAdvocacyAfrica https://forum.effectivealtruism.org/posts/LtFjiPnj2hcHqNPEo/ticking-clock-the-rapid-rise-of-farmed-animals-in-africa Tue, 19 Sep 2023 12:26:18 +0000 EA - Ticking Clock: The Rapid Rise of Farmed Animals in Africa by AnimalAdvocacyAfrica AnimalAdvocacyAfrica https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:02 no full 7140
xs8AxTodnEfg7pcSS EA - Apply Now: EAGxVirtual (17-19 November) by Sasha Berezhnoi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply Now: EAGxVirtual (17-19 November), published by Sasha Berezhnoi on September 20, 2023 on The Effective Altruism Forum.Applications are now open for EAGxVirtual, happening on November 17-19, 2023.We are hoping to see many participants from around the world, particularly those who have not been able to attend in-person events. This will probably be the most accessible EA conference in years - we encourage anyone with a genuine interest in EA to apply!Apply nowOur conference theme is Taking Action Anywhere in the WorldEA is not only about identifying the best ways to do good. Most importantly, it's about taking action based on those findings. Your EA journey may differ based on geographical location, career stage, cause preferences, and other factors. But you don't need to be on this journey alone.Our main goal is to help attendees identify the next steps to act based on EA principles wherever they are in the world. With EAGxVirtual, we aim to provide action-oriented content that will be relevant not only to people from Western countries. We want to give a voice to many stories of impact and explore how people from very different cultural and economical backgrounds can find their unique ways to be effective altruists.You can expect a wide range of talks, office hours, and workshops across EA cause areas. We have also planned an extensive ambassador program connecting attendees to more experienced community members.Who is this event for?We welcome all who have a genuine interest in learning more or connecting, including those who are new to effective altruism. Unlike most other EAGx conferences, we don't require previous engagement with EA and intend to approve the majority of applications. This is as close to the proposals of Open EA Global as we can get. However, we still encourage you to be thoughtful about your application as this will help you get the most out of the conference as well as help us understand our audience better.Reasons to attendBuild a better understanding of the EA landscape and identify relevant opportunities to take actionGive and receive feedback on career, study, or donation plans, or on your EA projectsMake new connections and reconnect with old contactsDiscover and discuss interesting and important ideasWhat do previous event attendees say?"At EAGxVirtual, the geographic diversity struck me as being very good and substantially better than what I recall from in-person EAG events. At one point, I had a great conversation with people from Moscow, Australia, India, Tanzania, & a student in Costa Rica. It's hard to do that at an in-person conference."Seth Baum"I met one person who had learned about EA just earlier that month and hadn't engaged beyond reading a few articles, while also sitting down to talk to people experienced and incredibly active in the space like Michael Aird."Tristan Williams"ok just logged into swapcard for @EAGxVirtual and. I. Love. It. if every conference had swapcard and gathertown i would go to approximately 10000% more conferences."Emily ThaiMore reflections from the last year are available here.If you are a highly-engaged EA member, your involvement can make a differenceIf you are a highly-engaged EA you can make a difference by showing up, providing feedback to those relatively new to the community, and helping them navigate the conference.Virtual conferences are accessible to people who live outside of major EA hubs, and to people who - for whatever reason - cannot travel easily (financially, work-related, health, family, etc.). Those people often have fewer connections and opportunities. Supporting them is crucial for a vibrant global community. Please make it clear on your Swapcard profile what it is you can help people with, encourage first-time attendees to reach out to you (or even r...

]]>
Sasha Berezhnoi https://forum.effectivealtruism.org/posts/xs8AxTodnEfg7pcSS/apply-now-eagxvirtual-17-19-november Wed, 20 Sep 2023 20:11:42 +0000 EA - Apply Now: EAGxVirtual (17-19 November) by Sasha Berezhnoi Sasha Berezhnoi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:27 no full 7156
SDyik6eNm53yBGeEe EA - Celebrating Progress: Recent Highlights from the NYC EA Community by Alex R Kaplan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating Progress: Recent Highlights from the NYC EA Community, published by Alex R Kaplan on September 20, 2023 on The Effective Altruism Forum.Inspired by EA NYC's 10th birthday and the first-ever EAGxNYC, we wanted to share this unabashed brag post about New York City's EA community and its accomplishments! Below, we'll share highlights from EA NYC specifically, as well as the EA community in New York City more broadly.While it's true that living in New York sometimes instills a self-aggrandizing perspective, we unironically support all communities announcing their achievements and were inspired by the Spanish-Speaking Effective Altruist Community, the Dutch EA Community, and many others with success to report.We hope our posts provide additional context for groups around the world and inspire others to share their stories as well as any feedback and comments! For example, in February we posted about our community health infrastructure, and we have been heartened to see EA Germany share similar information. And, if this series inspires you to check out our little town, we've also recently published a guide to visiting NYC.Quick note on the relationship between EA in NYC and EA NYCNot to toot our own horn, but we consider EA NYC (the organization) to be a critical actor in the promotion and coordination of effective altruism in NYC. Despite the size, significance, and potential of the EA (and non-EA!) community in NYC, EA NYC is currently the only meta EA resource based in NYC.As a result, EA NYC serves as a resource hub, connector, and platform for a wide variety of EA initiatives and individuals interested in EA within the NYC metropolitan area. This takes shape in a variety of ways and can look like anything from providing newcomers with educational programming, to creating a platform for motivated community members to deepen their volunteer engagement, to assisting EA organizations with boosting job openings or finding meeting spaces.In short, EA NYC strives to not only execute our own community programming, but also to serve as a multiplier for other EA initiatives. So as we write this post, we are of course proud to describe the work our team has done and the progress we've made, but we also take pride in how far the community itself has come. We're honored to serve the NYC EA community, and we're thrilled to share some of the cool initiatives we've seen come out of our hometown!Community highlightsSummaryThe NYC EA community is largeThe NYC EA community is activeThe NYC EA community is coolYou're missing out if you're not hereSizeIn the 2022 EA Survey, NYC had the largest US concentration of EAs outside of the Bay Area, consistent with the 2020 Survey results.According to Brian Tan at the Centre for Effective Altruism, EA NYC has the highest number of "engaged EAs" out of any group based on CEA's 2022 Group Census.In 2022, the New York EA community had approximately 600 active members who attended EA NYC events, registered interest in utilizing a NY EA co-working space, joined EA NYC subgroups, spoke one-on-one with EA NYC leadership, and/or attended EAG/EAGx conferences.In Q1 and Q2 2023, EA NYC saw 220 new event attendees (+49% relative to 2022).Over 1,250 people are subscribed to the EA NYC monthly newsletter (with a 59% six-month, rolling average open rate) and over 1,100 people are members on the EA NYC Slack (with a 187-person 6-month, rolling average active usership).EA NYC has 12 wonderful volunteer leaders who support the facilitation and coordination of events.ActionCommunity Building Programming"Standard" programming (though it's anything but!)EA NYC strives to hold a minimum of one unique public event per week. In the course of a typical month, EA NYC typically holds at least one speaker presentation, at least one socia...

]]>
Alex R Kaplan https://forum.effectivealtruism.org/posts/SDyik6eNm53yBGeEe/celebrating-progress-recent-highlights-from-the-nyc-ea Wed, 20 Sep 2023 15:04:03 +0000 EA - Celebrating Progress: Recent Highlights from the NYC EA Community by Alex R Kaplan Alex R Kaplan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:57 no full 7153
Y4SaFM5LfsZzbnymu EA - The Case for AI Safety Advocacy to the Public by Holly Elmore Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Case for AI Safety Advocacy to the Public, published by Holly Elmore on September 20, 2023 on The Effective Altruism Forum.tl;dr: Advocacy to the public is a large and neglected opportunity to advance AI Safety. AI Safety as a field is unfamiliar with advocacy, and it has reservations, some founded and others not. A deeper understanding of the dynamics of social change reveals the promise of pursuing outside game strategies to complement the already strong inside game strategies. I support an indefinite global Pause on frontier AI and I explain why Pause AI is a good message for advocacy. Because I'm American and focused on US advocacy, I will mostly be drawing on examples from the US. Please bear in mind, though, that for Pause to be a true solution it will have to be global.The case for advocacy in generalAdvocacy can workI've encountered many EAs who are skeptical about the role of advocacy in social change. While it is difficult to prove causality in social phenomena like this, there is a strong historical case that advocacy has been effective at bringing about the intended social change through time (whether that change ended up being desirable or not). A few examples:Though there were many other economic and political factors that contributed, it is hard to make a case that the US Civil War had nothing to do with humanitarian concern for enslaved people- concern that was raised by advocacy. The people's, and ultimately the US government's, will to abolish slavery was bolstered by a diverse array of advocacy tactics, from Harriet Beecher Stowe's writing of Uncle Tom's Cabin to Frederick Douglass's oratory to the uprisings of John Brown.The US National Women's Party is credited with pressuring Woodrow Wilson and federal and state legislators into supporting the 19th Amendment, which guaranteed women the right to vote, through its "aggressive agitation, relentless lobbying, clever publicity stunts, and creative examples of civil disobedience and nonviolent confrontation".The nationwide prohibition of alcohol in the US (1920-1933) is credited to the temperance movement, which had all manner of advocacy gimmicks including the slogan "the lips that touch liquor shall never touch mine", and the stigmatization of drunk driving and the legal drinking age of 21 is directly linked to Mothers Against Drunk Drivers.Even if advocacy only worked a little of the time or only served to tip the balance of larger forces, the stakes of AI risk are so high and AI risk advocacy is currently so neglected that I see a huge opportunity.We can now talk to the public about AI riskWith the release of ChatGPT and other advances in state-of-the-art artificial intelligence in the last year, the topic of AI risk has entered the Overton window and is no longer dismissed as "sci-fi". But now, as Anders Sandberg put it, the Overton window is moving so fast it's "breaking the sound barrier". The below poll from AI Policy Institute and YouGov (release 8/11/23) shows comfortable majorities among US adults on questions about AI x-risk (76% worry about extinction risks from machine intelligence), slowing AI (82% say we should go slowly and deliberately), and government regulation of the AI industry (82% say tech executives can't be trusted to self-regulate).What having the public's support gets usOpinion polls and voters that put pressure on politicians. Constituent pressure on politicians gives the AI Safety community more power to get effective legislation passed- that is, legislation which addresses safety concerns and requires us to compromise less with other interests- and it gives the politicians more power against the AI industry lobby.The ability to leverage external pressure to improve existing strategies. With external pressure, ARC, for example, wouldn't have to worry as m...

]]>
Holly_Elmore https://forum.effectivealtruism.org/posts/Y4SaFM5LfsZzbnymu/the-case-for-ai-safety-advocacy-to-the-public Wed, 20 Sep 2023 14:17:14 +0000 EA - The Case for AI Safety Advocacy to the Public by Holly Elmore Holly_Elmore https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:51 no full 7152
5NcCWNC3yWdqeaEdH EA - [Link post] Michael Nielsen's "Notes on Existential Risk from Artificial Superintelligence" by Joel Becker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link post] Michael Nielsen's "Notes on Existential Risk from Artificial Superintelligence", published by Joel Becker on September 20, 2023 on The Effective Altruism Forum.SummaryFrom the piece:Earlier this year I decided to take a few weeks to figure out what I think about the existential risk from Artificial Superintelligence (ASI xrisk). It turned out to be much more difficult than I thought. After several months of reading, thinking, and talking with people, what follows is a discussion of a few observations arising during this exploration, including:Three ASI xrisk persuasion paradoxes, which make it intrinsically difficult to present strong evidence either for or against ASI xrisk. The lack of such compelling evidence is part of the reason there is such strong disagreement about ASI xrisk, with people often (understandably) relying instead on prior beliefs, self-interest, and tribal reasoning to decide their opinions.The alignment dilemma: should someone concerned with xrisk contribute to concrete alignment work, since it's the only way we can hope to build safe systems; or should they refuse to do such work, as contributing to accelerating a bad outcome? Part of a broader discussion of the accelerationist character of much AI alignment work, so capabilities / alignment is a false dichotomy.The doomsday question: are there recipes for ruin -- simple, easily executed, immensely destructive recipes that could end humanity, or wreak catastrophic world-changing damage?What bottlenecks are there on ASI speeding up scientific discovery? And, in particular: is it possible for ASI to discover new levels of emergent phenomena, latent in existing theories?ExcerptsHere are the passages I thought were interesting enough to tweet about:"So, what's your probability of doom?" I think the concept is badly misleading. The outcomes humanity gets depend on choices we can make. We can make choices that make doom almost inevitable, on a timescale of decades - indeed, we don't need ASI for that, we can likely4 arrange it in other ways (nukes, engineered viruses, .). We can also make choices that make doom extremely unlikely. The trick is to figure out what's likely to lead to flourishing, and to do those things. The term "probability of doom" began frustrating me after starting to routinely hear people at AI companies use it fatalistically, ignoring the fact that their choices can change the outcomes. "Probability of doom" is an example of a conceptual hazard5 - a case where merely using the concept may lead to mistakes in your thinking. Its main use seems to be as marketing: if widely-respected people say forcefully that they have a high or low probability of doom, that may cause other people to stop and consider why.But I dislike concepts which are good for marketing, but bad for understanding; they foster collective misunderstanding, and are likely to eventually lead to collective errors in action.With all that said: practical alignment work is extremely accelerationist. If ChatGPT had behaved like Tay, AI would still be getting minor mentions on page 19 of The New York Times. These alignment techniques play a role in AI somewhat like the systems used to control when a nuclear bomb goes off. If such bombs just went off at random, no-one would build nuclear bombs, and there would be no nuclear threat to humanity. Practical alignment work makes today's AI systems far more attractive to customers, far more usable as a platform for building other systems, far more profitable as a target for investors, and far more palatable to governments. The net result is that practical alignment work is accelerationist. There's an extremely thoughtful essay by Paul Christiano, one of the pioneers of both RLHF and AI safety, where he addresses the question of whether he regrets working ...

]]>
Joel Becker https://forum.effectivealtruism.org/posts/5NcCWNC3yWdqeaEdH/link-post-michael-nielsen-s-notes-on-existential-risk-from Wed, 20 Sep 2023 10:29:59 +0000 EA - [Link post] Michael Nielsen's "Notes on Existential Risk from Artificial Superintelligence" by Joel Becker Joel Becker https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:16 no full 7151
4wqZeJfDhwvkLLm4a EA - Compilation of Profit for Good Redteaming and Responses by Brad West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Compilation of Profit for Good Redteaming and Responses, published by Brad West on September 20, 2023 on The Effective Altruism Forum.TLDR: Document compiling redteaming on Profit for Good available, with public comments invited and appreciated. This thread further includes two argument headings against Profit for Good to be discussed in the comments.As one might suspect given that I have started a nonprofit to advance it, I believe Profit for Good, the philanthropic use of businesses with charities in vast majority shareholder position, is an immensely powerful tool. To get a sense of the scale of good that Profit for Good could do, see 14:22 of Cargill's TED Talk for what 3.5T could do, which would be about 3.5 years of 10% of global net profits. Here is a good place to start regarding why we think it is promising and here is a longer reading list.However, throughout time, many people have brought up criticisms and red-teaming. It is important to realize that Profit for Good, especially in initial stages, would likely require the use of philanthropic funds for launching, acquiring, and accelerating businesses that counterfactually could be used to directly fund extremely impactful charities that we know can do good. For this reason, it is critical that all of the potential reasons that Profit for Good might not succeed, not be the best use of funds, and/or have unintended negative consequences be considered. For this reason, I am in the process of aggregating red-teaming from wherever I can find it, organizing such redteaming, according to argument heading, verbatim where possible, and doing the same for the responses to such red-teaming. I am in the process of forming my own syntheses of the criticisms and responses. I am also continuing to compile new arguments and formulations/evidence regarding former arguments.If you are interested in supplying new arguments under existing headings, your own syntheses of existing materials, or new argument headings, you may feel free to add a comment on the existing document or email me at brad@consumerpowerinitiative.org and I will integrate it verbatim into the document (please indicate if you would like such argument to be anonymous for any reason, or I will credit you with the contribution.I will include in this piece the first two argument headings, and invite you to contribute to such arguments for or against in this thread (I will update the main document accordingly). I intend to do future posts evaluating later argument headings. These headings regard the criticism (1) that Profit for Good businesses will increase costs for consumers and (2) that Profit for Good businesses will be held to a higher standard than normal businesses in ways that critically impair their ability to compete. Entire Redteaming document also here.Criticism: Profit for Good businesses will increase costs for consumersBrad's Criticism Summary: A frequent version of this "criticism" is in fact a confusion of the Profit for Good model with "bundling." With "bundling", a normal firm with normal shareholders would build in a charitable donation into a purchase, increasing the product's price commensurately so as not to harm its shareholders. "Bundling" would increase consumer costs, however, it is not the Profit for Good model. The donation is not a "cost", but rather a function of the identity of the shareholder that is entitled to profit. It is not clear why a charity as shareholder would be a higher cost than a normal investor as a shareholder.There are versions of this criticism that are not products of confusion as to what Profit for Good is. One is that many businesses that are able to provide the lowest prices are multinational firms and those prices are possible due to billions of dollars of capital costs. Essentially, many kinds of ...

]]>
Brad West https://forum.effectivealtruism.org/posts/4wqZeJfDhwvkLLm4a/compilation-of-profit-for-good-redteaming-and-responses Wed, 20 Sep 2023 05:36:42 +0000 EA - Compilation of Profit for Good Redteaming and Responses by Brad West Brad West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:07 no full 7150
iitD7ia96CYkocLTd EA - Protest against Meta's irreversible proliferation (Sept 29, San Francisco) by Holly Elmore Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Protest against Meta's irreversible proliferation (Sept 29, San Francisco), published by Holly Elmore on September 20, 2023 on The Effective Altruism Forum.Meta's frontier AI models are fundamentally unsafe. Since Meta AI has released the model weights publicly, any safety measures can be removed. Before it releases even more advanced models - which will have more dangerous capabilities - we call on Meta to take responsible release seriously and stop irreversible proliferation. Join us for a peaceful protest at Meta's office in San Francisco at 250 Howard St at 4pm PT.RSVP on Facebook or through this form.Let's send a message to Meta:Stop irreversible proliferation of model weights. Meta's models are not safe if anyone can remove the safety measures.Take AI risks seriously.Take responsibility for harms caused by your AIs.Stop free-riding on the goodwill of the open-source community. Llama models are not and have never been open source, says the Open Source Initiative.All you need to bring is yourself and a sign, if you want to make your own. I will lead a trip to SF from Berkeley but anyone can join at the location. We will have a sign-making party before the demonstration-- stay tuned for details. We'll go out for drinks afterwardI like the irony.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Holly_Elmore https://forum.effectivealtruism.org/posts/iitD7ia96CYkocLTd/protest-against-meta-s-irreversible-proliferation-sept-29 Wed, 20 Sep 2023 04:49:16 +0000 EA - Protest against Meta's irreversible proliferation (Sept 29, San Francisco) by Holly Elmore Holly_Elmore https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:28 no full 7149
mArisdpuQiFtTNWw3 EA - Will MacAskill has stepped down as trustee of EV UK by lincolnq Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will MacAskill has stepped down as trustee of EV UK, published by lincolnq on September 21, 2023 on The Effective Altruism Forum.Earlier today, Will MacAskill stepped down from the board of Effective Ventures UK[1], having served as a trustee since its founding more than a decade ago.Will has been intending to step down for several months and announced his intention to do so earlier this year. Will had initially planned to remain on the board until we brought on additional trustees to replace him. However, given that our trustee recruitment process has taken longer than anticipated, and given also that Will continues to be recused from a significant proportion of board business[2], he felt that it didn't make sense for him to stay on any longer.Will announced his resignation today.As a founding board member of EV UK (then called CEA), Will played a vital role in getting EV and its constituent projects off the ground, including co-founding Giving What We Can and 80,000 Hours. We are very grateful to Will for everything he's contributed to the effective altruism movement to date and look forward to his future positive impact; we wish him the best of luck with his future work.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
lincolnq https://forum.effectivealtruism.org/posts/mArisdpuQiFtTNWw3/will-macaskill-has-stepped-down-as-trustee-of-ev-uk Thu, 21 Sep 2023 16:49:32 +0000 EA - Will MacAskill has stepped down as trustee of EV UK by lincolnq lincolnq https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:16 no full 7166
TPDtmSnJbGZFDZTfs EA - The "technology" bucket error by Holly Elmore Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "technology" bucket error, published by Holly Elmore on September 21, 2023 on The Effective Altruism Forum.I wrote this post July 8, 2023, but it seemed relevant to share here based on some comments that my entry in the AI Pause Debate got.As AI x-risk goes mainstream, lines are being drawn in the broader AI safety debate. One through-line is the disposition toward technology in general. Some people are wary even of AI-gone-right because they are suspicious of societal change, and they fear that greater levels of convenience and artificiality will further alienate us from our humanity. People closer to my own camp often believe that it is bad to interfere with technological progress and that Ludditism has been proven wrong because of all of the positive technological developments of the past. "Everyone thinks this time is different", I have been told with a pitying smile, as if it were long ago proven that technology=good and the matter is closed. But technology is not one thing, and therefore "all tech" is not a valid reference class from which to forecast the future. This use of "technology" is a bucket error.What is a bucket error?A bucket error is when multiple different concepts or variables are incorrectly lumped together in one's mind as a single concept/variable, potentially leading to distortions of one's thinking.(Source)The term was coined as part of a longer post by Anna Salamon that included an example of a little girl who thinks that being a writer entails spelling words correctly. To her, there's only one bucket for "being a writer" and "being good at spelling"."I did not!" says the kid, whereupon she bursts into tears, and runs away and hides in the closet, repeating again and again: "I did not misspell the word! I can too be a writer!".When in fact the different considerations in the little girl's bucket are separable. A writer can misspell words.Why is "technology" a false bucket?Broadly, there are two versions of the false technology bucket out there: tech=bad and tech=good. Both are wrong.Why? Simply put: "technology" is not one kind of thing.The common thread across the set of all technology is highly abstract ("scientific knowledge", "applied sciences" - in other worlds, pertaining to our knowledge of the entire natural world), whereas concrete technologies themselves do all manner of things and can have effects that counteract each other. A personal computer is technology. Controlled fire is technology. A butterfly hair clip is technology. A USB-charging vape is technology. A plow is technology. "Tech" today is often shorthand for electronics and software. Some of this kind of technology, like computer viruses, are made to cause harm and violate people's boundaries. But continuous glucose monitors are made to keep people with diabetes alive and improve their quality of life. It's not that there are no broad commonalities across technologies - for example, they tend to increase our abilities - but that there aren't very useful trends in whether "technology" as a whole is good or bad.People who fear technological development often see technological progress as a whole as a move toward convenience and away from human self-reliance (and possibly into the hands of fickle new regimes or overlords). And I don't think they are wrong - new tech can screw up our attention spans or disperse communities or exacerbate concentrated power. I just think they aren't appreciating or are taking for granted how older technologies that they are used to having enhanced our lives, so much, so far, on balance, that I think the false bucket of "tech progress as a whole" has been worth the costs. But that doesn't mean that new tech will always be worth the costs.In fact, we have plenty of examples of successfully banned or restricted technologies like ...

]]>
Holly_Elmore https://forum.effectivealtruism.org/posts/TPDtmSnJbGZFDZTfs/the-technology-bucket-error Thu, 21 Sep 2023 13:02:23 +0000 EA - The "technology" bucket error by Holly Elmore Holly_Elmore https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:50 no full 7161
zd5inbT4kYKivincm EA - AI is centralizing by default; let's not make it worse by Quintin Pope Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI is centralizing by default; let's not make it worse, published by Quintin Pope on September 22, 2023 on The Effective Altruism Forum.TL;DR:AIs will probably be much easier to control than humans due to (1) AIs having far more levers through which to exert control, (2) AIs having far fewer rights to resist control, and (3) research to better control AIs being far easier than research to control humans. Additionally, the economics of scale in AI development strongly favor centralized actors.Current social equilibria rely on the current limits on the scalability of centralized control, and the similar levels of intelligence between actors with different levels of resources. The default outcome of AI development is to disproportionately increase the control and intelligence available to centralized, well-resourced actors. AI regulation (including pauses) can either reduce or increase the centralizing effects of AI, depending on the specifics of the regulations. One of our policy objectives when considering AI regulation should be preventing extreme levels of AI-enabled centralization.Why AI development favors centralization and control:I think AI development is structurally biased toward centralization for two reasons:AIs are much easier to control than humans.AI development is more easily undertaken by large, centralized actors.I will argue for the first claim by comparing the different methods we currently use to control both AIs and humans and argue that the methods for controlling AIs are much more powerful than the equivalent methods we use on humans. Afterward, I will argue that a mix of regulatory and practical factors makes it much easier to research more effective methods of controlling AIs, as compared to researching more effective methods of controlling humans, and so we should expect the controllability of AIs to increase much more quickly than the controllability of humans. Finally, I will address five counterarguments to the claim that AIs will be easy to control.I will briefly argue for the second claim by noting some of the aspects of cutting-edge AI development that disproportionately favor large, centralized, and well-resourced actors. I will then discuss some of the potential negative social consequences of AIs being very controllable and centralized, as well as the ways in which regulations (including pauses) may worsen or ameliorate such issues. I will conclude by listing a few policy options that may help to promote individual autonomy.Why AI is easier to control than humans:Methods of control broadly fall into three categories: prompting, training, and runtime cognitive interventions.Prompting: influencing another's sensory environment to influence their actions.This category covers a surprisingly wide range of the methods we use to control other humans, including offers of trade, threats, logical arguments, emotional appeals, and so on.However, prompting is a relatively more powerful technique for controlling AIs because we have complete control over an AI's sensory environment, can try out multiple different prompts without the AI knowing, and often, are able to directly optimize against a specific AI's internals to make prompts that are maximally convincing for that particular AI.Additionally, there are no consequences for lying to, manipulating, threatening, or otherwise being cruel to an AI. Thus, prompts targeting AIs can explore a broad range of possible deceptions, threats, bribes, emotional blackmail, and other tricks that would be risky to try on a human.Training: intervening on another's learning process to influence their future actions.Among humans, training interventions include parents trying to teach their children to behave in ways they deem appropriate, schools trying to teach their students various skills and ...

]]>
Quintin Pope https://forum.effectivealtruism.org/posts/zd5inbT4kYKivincm/ai-is-centralizing-by-default-let-s-not-make-it-worse Fri, 22 Sep 2023 10:16:09 +0000 EA - AI is centralizing by default; let's not make it worse by Quintin Pope Quintin Pope https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 25:32 no full 7170
HDrdJzHcagwCnqdjj EA - Small Scale Local Interventions: A Cost-Effectiveness Analysis of Passing Sunscreen at Outdoor Events by Ralf Kinkel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Small Scale Local Interventions: A Cost-Effectiveness Analysis of Passing Sunscreen at Outdoor Events, published by Ralf Kinkel on September 24, 2023 on The Effective Altruism Forum.BackstoryI recently was at a music festival, where we stood in a long queue in the scorching sun. The festival would go from around 10 am to 10 pm and all stages were outdoor with practically no shadows to be found. My group had another person besides myself with sunscreen and I decided on passing my sunscreen backwards to the group behind us who didn't bring any, with the comment that they should continue passing it on afterwards. I did that just because it felt nice to do; a small act of good, but when thinking about it I became pretty sure that this is much more cost effective than typical interventions in rich countries.I spend a few hours reading up and calculating likely cost effectiveness of this action and am now pretty certain that it is less cost effective than the best global health interventions like de-worming and malaria treatment and prevention, but I'm also quite certain that it is in the same order of magnitude.I want to use this example mainly to spark a discussion about the general category of local interventions like this. Not scalable, but maybe cost effective on a small scale and therefor overlooked. Here is an analysis with calculations for the sunscreen example.1 Sunscreen1.1 Idea1. Buy the cheapest moderately high SPF sunscreen (while avoiding fake products).2. Write "please pass me along when finished" on the sunscreen.3. Take it to an outdoor event, with queue, where people are likely to be in the sun for a long time.4. Pass it along behind you.1.2 Sources of goodThe direct sources of good are:1. Less pain and annoyance from sunburn.2. More enjoyment at the event.3. Less skin cancer.1.3 CalculationThis was pretty difficult to do and I expect it be very high variance, I expect I'm wrong by around a factor of 3 in the first 2 categories and gave up on the full skin cancer calculation.As far as I saw there is no paper with clear listed stuff like "lifetime risk of skin cancer per sunburn". Only information fragments scattered across the internet. I did finish a very rough estimate of effectiveness gained from prevention of melanoma fatalities, but I could well be wrong there by an order of magnitude. I did not count fatalities from non-melanoma, or loss of life quality or cost for treatment for either melanoma or non-melanoma.The first relevant calculation is how many units of sunscreen you get per dollar, then how much protection each dose is expected to give from sunburn. Then the consequences of the sunburn can be calculated as pain/annoyance and skin cancer. Additionally loss of enjoyment at the event can be calculated from an estimate of people who avoid the sun at the event, in order to not get burned. I list sources and considerations in this sheet. You can adjust your own best guesses for the inputs. I also included a screenshot here:I got 250$/daly in the end. In comparison malaria treatment costs around 150$/daly and is currently rated among top causes (link).1.4 DiscussionThis intervention seems to be very cost-effective but less so than top treatments. There are additional benefits not included like non-melanoma fatalities prevented and loss of life quality even from effective treatments, but it might be that GiveWell does also not include such metrics (feedback on what GiveWell typically includes would be appreciated) .The biggest reason why it might still be worth it even though it is not the most effective one is if you do it from your non-EA budget, given that it is tangible, might give you a nice feeling and look good in front of your friends :DMaybe some people even ask you how you got the idea and you can explain what EA i...

]]>
Ralf Kinkel https://forum.effectivealtruism.org/posts/HDrdJzHcagwCnqdjj/small-scale-local-interventions-a-cost-effectiveness Sun, 24 Sep 2023 21:35:20 +0000 EA - Small Scale Local Interventions: A Cost-Effectiveness Analysis of Passing Sunscreen at Outdoor Events by Ralf Kinkel Ralf Kinkel https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:25 no full 7183
7hy6BuDaDFeQmmrmk EA - Neil Dullaghan on the politics of animal welfare and EU policy by Karthik Palakodeti Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Neil Dullaghan on the politics of animal welfare and EU policy, published by Karthik Palakodeti on September 24, 2023 on The Effective Altruism Forum.We discuss how to conduct a survey to find out whether there is opposition to factory farming, whether political support has risen or fallen, what to learn from the EU's success in pro-animal welfare policies, and what the movement should do next after cage-free campaigns. For that, we are joined by the exceptional Neil Dullaghan. He is a Senior Researcher Manager at ReThink Priorities. He works in the farmed animal welfare team, with expertise in EU policy. He holds a Ph.D. in Political & Social Science from the European University Institute and an MPhil in European Politics & Society from the University of Oxford. Neil's work looks at political support for animal welfare in the EU and outside as well as strategies and policies the movement should adopt.Neil has been so kind as to share further readings and resources:Link to the my Slaughterhouse survey reportSioux Falls slaughter ban ballot initiative resultStudies showing increase in political support for animal welfare (Hus & McCulloch 2023, Chaney et al 2020, Vogeler 2020 2019, Chaney 2014)Study about petitions increasing salience of animal welfare in UK (Chaney et al 2022)Mercy For Animals report on broiler progress (also covered in Bloomberg)Ruth Harrison Animal Machines'My cow wants to have fun by Astrid Lindgren and Kristinia Forslund & the book How Astrid Lindgren achieved enactment of the 1988 law protecting farm animals in Sweden - a selection of articles and letters published in Expressen, Stockholm, 1985-1989EU Harmonisation effectSection 7 of Prof. Broom's European Parliament report on animal welfareThe animal welfare section (pages 215-219) in the book "The Brussels effect" by Bradford.Animal equality report on enforcement, & calls for action on fishMetaculus questions onEU cage-free law (here and here)Date of decline of CAFOs by 90%A decrease in US meat production by 2025?Will commercial animal farming be prohibited in the US by 2041?Will there be a 50% decline in global meat production by 2040?My report on cultured meat (Dullaghan and Zhang, 2022)total US plant-based meat production in 2020 was 90,000 to 180,000 metric tons (the former according to Shapiro (2020) and this paywalled page from Meatingplace.com cited in Bollard (2020); the latter according to data obtained from FoodTrending.com). 13M metric tons of alternative protein (meat, seafood, milk, eggs, and dairy, excluding pulses, tofu, and tempeh) were consumed globally in 2020 (BCG 2021). ~545M metric tons of conventional meat, including seafood, is produced each year (according to OurWorldinData), mostly via the industrial farming of animals.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Karthik Palakodeti https://forum.effectivealtruism.org/posts/7hy6BuDaDFeQmmrmk/neil-dullaghan-on-the-politics-of-animal-welfare-and-eu Sun, 24 Sep 2023 11:43:03 +0000 EA - Neil Dullaghan on the politics of animal welfare and EU policy by Karthik Palakodeti Karthik Palakodeti https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:10 no full 7179
SWfwmqnCPid8PuTBo EA - Monetary and social incentives in longtermist careers by Vaidehi Agarwalla Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monetary and social incentives in longtermist careers, published by Vaidehi Agarwalla on September 24, 2023 on The Effective Altruism Forum.In this post I talk about several strong non-epistemic incentives and issues that can influence people to pursue longtermistcareer paths (and specifically x-risk reduction careers and AI safety) for EA community members.For what it's worth, I personally I am sympathetic to longtermism, and to people who want to create more incentives for longtermist careers, because of the high urgency some assign to AI Safety and the fact that longtermism is a relatively new field. I am currently running career support pilots to support early-career longtermists.) However I think it's important to think carefully about career choices, even when it's difficult. I'm worried that these incentives lead people to feel (unconscious & conscious) pressure to pursue (certain) longtermist career paths even if it may not be the right choice for them. I think it's good for to be thoughtful about cause prioritization and career choices, especially for people earlier in their careers.IncentivesGood pay and job securityIn general, longtermist careers pay very well compared to standard nonprofit jobs, and early career roles are sometimes competitive with for-profit jobs (<30% salary difference, a with the exception of some technical AI safety roles). Jobs at organisations which receive significant funding (including for-profit orgs) usually attract the best talent because they can offer better pay, job security, a team culture, structure and overall lower risk.It could be difficult to notice "if either longtermism as a whole or specific spending decisions turned out to be wrong. Research suggests that when a lot of money is on the line, our judgment becomes less clear. It really matters that the judgment of EAs is clear, so having a lot of money on the line should be cause for concern." . "This is especially problematic given the nature of longtermism, simultaneously the best-funded area of EA and also the area with the most complex philosophy and weakest feedback loops for interventions."Funding in an oligopolyThere are currently only a handful of funders giving to longtermist causes. Funders have also actively centralized decision-making in the past (see some reasoning), which creates more pressure to defer to funders' interests to get funding. I'm concerned that people defer too much to funders' preferences, and going after less impactful projects as a result.Money "[crowds] out the effect of other incentives and considerations that would otherwise guide these processes." Therefore, people are "incentivised to believe whatever will help them get funding" and "particular worldviews will get artificially inflated."Within community building, I have heard a handful of first- and second-hand accounts of people feeling like funders are pushing them towards getting more people into longtermism. Many EAIF & CEA community building grantmakers and staff are longtermist, and these organizations have gotten significant funding from OP's EA longtermism community team historically. Feedback community builders receive is often not very clear; there seems to be confusion around evaluation metrics and a general lack of communication, and when there is feedback it's limited (especially for those lacking access to core networks and hubs) - these accounts are impressions that we've heard, and probably don't always or fully represent funders' intentions. (This also exacerbates the role models & founder effects issues, discussed below).I've also often heard a platitude amongst people in all cause areas about the challenges of getting funding in the EA ecosystem - and that it's impossible to get funding outside of it. To them, it's not worth the resources it would take ...

]]>
Vaidehi Agarwalla https://forum.effectivealtruism.org/posts/SWfwmqnCPid8PuTBo/monetary-and-social-incentives-in-longtermist-careers Sun, 24 Sep 2023 04:48:18 +0000 EA - Monetary and social incentives in longtermist careers by Vaidehi Agarwalla Vaidehi Agarwalla https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:24 no full 7178
oAxuq5E7DsQTmxQwi EA - Amazon to invest up to $4 billion in Anthropic by Davis Kingsley Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Amazon to invest up to $4 billion in Anthropic, published by Davis Kingsley on September 25, 2023 on The Effective Altruism Forum.Today, we're announcing that Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop reliable and high-performing foundation models.(Thread continues from there with more details -- seems like a notable major development!)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Davis_Kingsley https://forum.effectivealtruism.org/posts/oAxuq5E7DsQTmxQwi/amazon-to-invest-up-to-usd4-billion-in-anthropic Mon, 25 Sep 2023 17:55:13 +0000 EA - Amazon to invest up to $4 billion in Anthropic by Davis Kingsley Davis_Kingsley https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:37 no full 7191
bCLoXDbnYSQab8T5t EA - How have you become more hard-working? by Chi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How have you become more hard-working?, published by Chi on September 25, 2023 on The Effective Altruism Forum.I'd be curious to hear stories of people who have successfully become more hard-working, especially if they started out as not particularly hard-working. Types of things I can imagine playing a role or know have played a role for some people:Switching roles to something that is conducive to hard work, e.g. a fast-paced environment with lots of concrete tasks and fires to put out.Medication, e.g. ADHD medicationInternal work, e.g. specific types of therapy, meditation, self-help reading, or other types of reflection.Productivity hacks, e.g. more accountability, putting specific systems in placeMotivational events, arguments, or life periods, e.g. working a normal corporate jobs where long hours are expectedSwitching work environment to something that is conducive to hard work, e.g. always working in an office with others who hold you accountableThis curiosity was triggered by realising that I know of very few people that have become substantially harder-working over their late adolescence/adult life. I also noticed that the few people that I know successfully and seemingly permanently increased their mental health/work satisfaction always were hard-working even when they were unhappy (unless they were in the middle of burn-out or similar).People becoming more hard-working seems really useful but I haven't seen much in terms of evidence that it's feasible or effective methods. If there are books or studies on this topic, those would also be welcome. Thank you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Chi https://forum.effectivealtruism.org/posts/bCLoXDbnYSQab8T5t/how-have-you-become-more-hard-working Mon, 25 Sep 2023 14:59:07 +0000 EA - How have you become more hard-working? by Chi Chi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:41 no full 7186
Qo3559TqP5BzoQyWX EA - Two Years of Shrimp Welfare Project: Insights and Impact from our Explore Phase by Aaron Boddy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two Years of Shrimp Welfare Project: Insights and Impact from our Explore Phase, published by Aaron Boddy on September 25, 2023 on The Effective Altruism Forum.SummaryShrimp Welfare Project launched in Sep 2021, via the Charity Entrepreneurship Incubation Program. We aim to reduce the suffering of billions of farmed shrimps. This post summarises our work to date, what we plan to work on going forward, and clarifies areas where we're not focusing our attention. This post was written to coincide with the launch of our new (Shr)Impact page on our website.We have four broad workstreams: corporate engagement, farmer support, research, and raising issue salience. We believe our key achievements to date are:Corporate engagement: Our Humane Slaughter Initiative (commitments with large producers to purchase electrical stunners, such as MER Seafood, and Seajoy), ongoing conversations with UK retailers (including Marks & Spencer, who now have a published Decapod Welfare Policy), and contributing to the Aquaculture Stewardship Council's (ASC) Shrimp Welfare Technical Working Group.Humane Slaughter Initiative: This work in particular seems to be our most promising work so far, and we Guesstimate that our work to date will reduce the suffering of ~1B shrimps (in expectation per year) at a cost-effectiveness of ~1,300 shrimps per $ (in expectation per year).Farmer support: The launch of the Sustainable Shrimp Farmers of India (SSFI) program, Scoping Reports in India and Vietnam, a Pilot Study in India, and MoUs with prominent farmer-facing stakeholders (in Gujarat, and with ThinkAqua)Research: Working on a number of research projects to answer some of our key uncertainties, such as the Shrimp Welfare Report, an Alternative Shrimps report, a Supply & Demand economic analysis, a Consumer Research report, an Impact Roadmap, and coordinating academic research on the effectiveness of electrical stunning.Raising issue salience: Highlighting the issue of shrimp welfare through conferences/podcasts/articles in the shrimp industry, animal welfare, and Effective Altruism spaces. in addition to working in coalitions with other orgs in this space (i.e. EuroGroup for Animals, and the Aquatic Animal Alliance).As we are moving into our Exploit phase, we plan to focus our work on the following key projects:Humane Slaughter Initiative: Significantly accelerating the adoption of electrical stunning prior to slaughter in the farmed shrimp industry is a key goal of ours. We do this by purchasing the first stunner for a few different medium-large producers in different countries/contexts and in different farming systems in order to remove barriers to uptake.We believe we can realistically absorb ~$2,000,000 in funding over the next couple of years for our Humane Slaughter Initiative, at a cost-effectiveness of 1,500+ shrimps per $ per year, depending on producer volume and demand from their buyers..Sustainable Shrimp Farmers of India: Our farmer support project is still somewhat exploratory, but we are excited by the tractability of interventions we have tested, such as offering free welfare-focused technical advice to farmers via WhatsApp, and promoting additional pond preparation (such as sludge removal) and lower stocking densities.Shrimp Welfare Index: Building on the Shrimp Welfare Report, and our experience trying to standardise a set of Asks across all the shrimp production systems, we wanted to clearly define what "higher welfare" looks like across different contexts. The Index offers an assessment of current practices and provides clear, actionable processes for improving shrimp welfare depending on the issues present in each pond. V1 of the Index is nearly complete, but we expect to iterate and test it over the next year, with the Index likely becoming a core part of SWPs work in...

]]>
Aaron Boddy https://forum.effectivealtruism.org/posts/Qo3559TqP5BzoQyWX/two-years-of-shrimp-welfare-project-insights-and-impact-from Mon, 25 Sep 2023 09:37:20 +0000 EA - Two Years of Shrimp Welfare Project: Insights and Impact from our Explore Phase by Aaron Boddy Aaron Boddy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 25:49 no full 7185
BFbsqwCuuqueFRfpW EA - Aim for conditional pauses by AnonResearcherMajorAILab Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Aim for conditional pauses, published by AnonResearcherMajorAILab on September 25, 2023 on The Effective Altruism Forum.TL;DR: I argue for two main theses:[Moderate-high confidence] It would be better to aim for a conditional pause, where a pause is triggered based on evaluations of model ability, rather than an unconditional pause (e.g. a blanket ban on systems more powerful than GPT-4).[Moderate confidence] It would be bad to create significant public pressure for a pause through advocacy, because this would cause relevant actors (particularly AGI labs) to spend their effort on looking good to the public, rather than doing what is actually good.Since mine is one of the last posts of the AI Pause Debate Week, I've also added a section at the end with quick responses to the previous posts.Which goals are good?That is, ignoring tractability and just assuming that we succeed at the goal -- how good would that be? There are a few options:Full steam ahead. We try to get to AGI as fast as possible: we scale up as quickly as we can; we only spend time on safety evaluations to the extent that it doesn't interfere with AGI-building efforts; we open source models to leverage the pool of talent not at AGI labs.Quick take. I think this would be bad, as it would drastically increase x-risk.Iterative deployment. We treat AGI like we would treat many other new technologies: something that could pose risks, which we should think about and mitigate, but ultimately something we should learn about through iterative deployment. The default is to deploy new AI systems, see what happens with a particular eye towards noticing harms, and then design appropriate mitigations. In addition, rollback mechanisms ensure that we can AI systems are deployed with a rollback mechanism, so that if a deployment causes significant harmsQuick take. This is better than full steam ahead, because you could notice and mitigate risks before they become existential in scale, and those mitigations could continue to successfully prevent risks as capabilities improve.Conditional pause. We institute regulations that say that capability improvement must pause once the AI system hits a particular threshold of riskiness, as determined by some relatively standardized evaluations, with some room for error built in. AI development can only continue once the developer has exhibited sufficient evidence that the risk will not arise.For example, following ARC Evals, we could evaluate the ability of an org's AI systems to autonomously replicate, and the org would be expected to pause when they reach a certain level of ability (e.g. the model can do 80% of the requisite subtasks with 80% reliability), until they can show that the associated risks won't arise.Quick take. Of course my take would depend on the specific details of the regulations, but overall this seems much better than iterative deployment. Depending on the details, I could imagine it taking a significant bite out of overall x-risk. The main objections which I give weight to are the overhang objection (faster progress once the pause stops) and the racing objection (a pause gives other, typically less cautious actors more time to catch up and intensify or win a capabilities race), but overall these seem less bad than not stopping when a model looks like it could plausibly be very dangerous.Unconditional temporary pause. We institute regulations that ban the development of AI models over some compute threshold (e.g. "more powerful than GPT-4"). Every year, the minimum resources necessary to destroy the world drops by 0.5 OOMs, and so we lower the threshold over time. Eventually AGI is built, either because we end the pause in favor of some new governance regime (that isn't a pause), or because the compute threshold got low enough that some actor flou...

]]>
AnonResearcherMajorAILab https://forum.effectivealtruism.org/posts/BFbsqwCuuqueFRfpW/aim-for-conditional-pauses Mon, 25 Sep 2023 06:09:51 +0000 EA - Aim for conditional pauses by AnonResearcherMajorAILab AnonResearcherMajorAILab https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 26:20 no full 7184
HDFxQwMwPp275J87r EA - Net global welfare may be negative and declining by kyle fish Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Net global welfare may be negative and declining, published by kyle fish on September 26, 2023 on The Effective Altruism Forum.OverviewThe total moral value of the world includes humans as well as all other beings of moral significance. As such, a picture of the overall trajectory of net global welfare that accounts for both human and non-human populations is important context for thinking about the future on any timescale, and about the potential impacts of transformative technologies.There's compelling evidence that life has gotten better for humans recently, but the same can't be said for other animals, especially given the rise of industrial animal agriculture. How do these trends cash out, and what's the overall trajectory?I've used human and farmed animal population data, estimates of welfare ranges across different species, and estimates of the average wellbeing of different species to get a rough sense of recent trends in total global welfare. Importantly, this initial analysis is limited to humans and some of the most abundant farmed animals - it does not consider effects on insects or wild animals, the inclusion of which could plausibly change the top-line conclusions (see e.g. here). I focus on the years from 1961-2021, as this is the period for which the most reliable data exists, and the period most relevant to understanding the current trajectory.My tentative conclusion is that net global welfare may be both negative and declining. That is, the entire good of humanity may be outweighed by the cumulative suffering of farmed animals, with total animal suffering growing faster than human wellbeing is increasing, especially in recent decades. Below I lay out some of the many assumptions upon which this work depends, the core of my analysis, and some tentative reflections on how these findings shape my thinking about the future.Notes and AssumptionsThis analysis was performed as a quick/rough estimate, and should not be mistaken for a comprehensive treatment of the topic. I am highly uncertain about both the quantitative findings and my interpretations.I opted for using point estimates rather than confidence intervals for the sake of simplicity. My confidence intervals would be wide if they were included. That said, I've aimed to make my estimates such that they fall at or below the median of my best-guess distributions (i.e. for the parts of the analysis that relied on my own judgment calls, I think it's more likely than not that I've underestimated animal suffering relative to humans, rather than overestimated).I've limited the scope of this analysis to humans and farmed animals, in part due to data availability, in part for simplicity, and in part to focus on humanity's most direct impacts on other beings. Inclusion of wild animals in this analysis could plausibly change the signs of my conclusions.This work draws heavily on the Moral Weight Project from Rethink Priorities and relies on the same assumptions: utilitarianism, hedonism, valence symmetry, unitarianism, use of proxies for hedonic potential, and more. Although I think the Rethink Priorities welfare range estimates are currently the best tool available for interspecies welfare comparisons, I do not necessarily endorse these assumptions in full, nor do I think the Rethink Priorities welfare ranges are the "correct" weights - only the best available. I consider the following entries in the Moral Weight Project Sequence to be particularly useful background reading:An Introduction to the Moral Weight ProjectRethink Priorities' Welfare Range EstimatesDon't Balk at Animal-friendly ResultsAnalysisBackground and DefinitionsThe central concept of my analysis is that the total welfare of a given species in a given year can be calculated as follows:Welfare Capacity = Population Welfare Ran...

]]>
kyle_fish https://forum.effectivealtruism.org/posts/HDFxQwMwPp275J87r/net-global-welfare-may-be-negative-and-declining-1 Tue, 26 Sep 2023 19:05:41 +0000 EA - Net global welfare may be negative and declining by kyle fish kyle_fish https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:09 no full 7200
KbTasufbtJwZYiJQ8 EA - AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014), published by Lizka on September 26, 2023 on The Effective Altruism Forum.Andy Weber was the U.S. Assistant Secretary of Defense for Nuclear, Chemical & Biological Defense Programs from 2009 to 2014. He's now a senior fellow at the Council on Strategic Risks. You might also know him from his appearance on the 80,000 Hours Podcast. Ask him anything!He'll try to answer some questions on Friday, September 29 (afternoon, Eastern Time), and might get to some earlier.I (Lizka) am particularly excited that Andy can share his experience in nuclear (and other kinds of) threat reduction given that it is Petrov Day today.Instructions and practical notes:Please post your questions as comments on this post.Posting questions earlier is better than later.If you have multiple questions, it might be better to post them separately.Feel free to upvote questions that others have posted, as it might help prioritize questions later.Other context and topics that might be especially interesting to talk about:Risks of "tactical" nuclear weapons like the new sea-launched cruise missile (Reuters)Andy's experience with Project Sapphire and the Nunn-Lugar programAndy's thoughts on biosecurity and preventing bioweapons useFor those who want to explore more: The Dead Hand by David Hoffman might be interesting; Project Sapphire and some of the work against biological threats are captured in it.He might not get to some questions, or be unable to answer some.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Lizka https://forum.effectivealtruism.org/posts/KbTasufbtJwZYiJQ8/ama-andy-weber-u-s-assistant-secretary-of-defense-from-2009 Tue, 26 Sep 2023 14:00:38 +0000 EA - AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014) by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:42 no full 7197
HeDvLBezyFzb9nmNx EA - AMA: Christian Ruhl, senior global catastrophic risk researcher at Founders Pledge by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Christian Ruhl, senior global catastrophic risk researcher at Founders Pledge, published by Lizka on September 26, 2023 on The Effective Altruism Forum.It's Petrov Day. One of the things we're doing to mark the occasion is hosting a thread where you're invited to ask Christian Ruhl anything.Instructions:Please post questions to Christian as comments on this post.Sharing questions earlier is generally better; Christian will answer questions on Friday, September 29.And you can upvote questions you're interested in.Christian shared some context that might help draft questions (and you might be interested in exploring his posts!):About meI'm a senior researcher at Founders Pledge, where much of my work focuses on global catastrophic risks. Recently, I've written about philanthropy and nuclear security, including a long philanthropic guide on nuclear risks, an article in Vox with Longview's Matt Gentzel, "Call me, maybe?" about crisis communication, and on "philanthropy to the right of boom." I'm currently finishing up another "guide for philanthropists," this time focused on biosecurity and pandemic preparedness, which we'll publish later this fall. I'm also working on a new report about great power competition and transformative technologies with Stephen Clare and an investigation on germicidal UV with Rosie Bettle.I've been at Founders Pledge for almost two years now. Before that, I worked at Perry World House, managing the ominous-sounding research theme on The Future of the Global Order: Power, Technology, and Governance. Before that, I studied two MPhil courses at Cambridge - History and Philosophy of Science and Politics and International Studies - funded by a Herchel Smith Fellowship. I first got interested in civilizational collapse and global catastrophic risks by working on a Maya archaeological excavation in Guatemala.Question topicsI'm happy to talk about anything, including the sorry state of nuclear security philanthropy, working at Founders Pledge, working at an academic think tank, research, writing, civilizational collapse, global catastrophic and existential risks, great power competition, and more.Other notesIf you want to help support projects to mitigate global catastrophic risks, please consider donating to the GCR Fund via every.org and Giving What We Can (or if you're a Founders Pledge member, from your DAF through the member app).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Lizka https://forum.effectivealtruism.org/posts/HeDvLBezyFzb9nmNx/ama-christian-ruhl-senior-global-catastrophic-risk Tue, 26 Sep 2023 10:52:22 +0000 EA - AMA: Christian Ruhl, senior global catastrophic risk researcher at Founders Pledge by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:25 no full 7196
pydpzj9sxcdSsAngr EA - New page on animal welfare on Our World in Data by EdMathieu Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New page on animal welfare on Our World in Data, published by EdMathieu on September 25, 2023 on The Effective Altruism Forum.Our team at Our World in Data just launched a new page on animal welfare! There you can find a brand new Animal Welfare Data Explorer, 22 interactive charts, and 4 new articles:How many animals get slaughtered every day?How many animals are factory-farmed?Do better cages or cage-free environments really improve the lives of hens?Adopting slower-growing breeds of chicken would reduce animal suffering significantlyOn Our World in Data, we cover many topics related to reducing human suffering: alleviating poverty, reducing child and maternal mortality, curing diseases, and ending hunger.But if we aim to reduce total suffering, society's ability to reduce this in other animals - which feel pain, too - also matters.This is especially true when we look at the numbers: every year, humans slaughter more than 80 billion land-based animals for farming alone. Most of these animals are raised in factory farms, often in painful and inhumane conditions.Estimates for fish are more uncertain, but when we include them, these numbers more than double.These numbers are large - but this also means that there are large opportunities to alleviate animal suffering by reducing the number of animals we use for food, science, cosmetics, and other industries and improving the living conditions of those we continue to raise.On this page, you can find all of our data, visualizations, and writing on animal welfare.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
EdMathieu https://forum.effectivealtruism.org/posts/pydpzj9sxcdSsAngr/new-page-on-animal-welfare-on-our-world-in-data Mon, 25 Sep 2023 23:44:56 +0000 EA - New page on animal welfare on Our World in Data by EdMathieu EdMathieu https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:40 no full 7195
7awJW2GPafcE4HYNf EA - Tarbell Fellowship 2024 - Applications Open (AI Journalism) by Cillian Crosson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tarbell Fellowship 2024 - Applications Open (AI Journalism), published by Cillian Crosson on September 28, 2023 on The Effective Altruism Forum.The Tarbell Fellowship is accepting applications until October 15th. Apply here.Key detailsWhat: One-year programme for early-career journalists interested in covering artificial intelligence.When: January December 2024 (with some flexibility)Benefits: Fellows receive a stipend of up to $50,000, secure a 9-month placement at a major newsroom, participate in a study group covering AI governance & technical fundamentals, and attend a 2 week journalism summit in Oxford.Who should apply: We're interested in supporting people with a deep understanding of artificial intelligence that have the potential to become exceptional journalists. Previous newsroom experience is desirable but not essential.Why journalism? Journalism as a career path is neglected by those interested in reducing risks from advanced AI. Journalists can lead the public debate on AI in important ways, drive engagement with specific policy proposals & safety standards, and hold major AI labs accountable in the public arena.Learn more: Visit our website or sign up for our information sessions:October 4th (6pm BST / 1pm ET / 10am PT)October 6th (10am BST)Deadline: Apply here by October 15th.About the Tarbell FellowshipWhat is it?The Tarbell Fellowship is a one-year programme for early-career journalists interested in covering emerging technologies, especially artificial intelligence.What we offerStipend: Fellows receive a stipend of up to $50,000 for the duration of their placement. We expect stipends to vary between $35,000 - $50,000 depending on location and personal circumstances.Placement: Fellows secure a 9 month placement at a major newsroom covering artificial intelligence. The Tarbell Fellowship will match fellows with outlets in order to secure a placement. Exact details will vary by outlet.AI Fundamentals Programme: Prior to placements, fellows explore the intricacies of AI governance & technical fundamentals through an 8-week course. This programme requires ~10 hours per week and is conducted remotely.Oxford Summit: Fellows attend a two week journalism summit in Oxford at the beginning of the fellowship. This will be an intensive fortnight featuring guest speakers, workshops and networking events in Oxford and London. Travel and accommodation costs will be fully covered.Who should applyWe're interested in supporting people with a deep understanding of artificial intelligence that have the potential to become exceptional journalists.In particular, we're looking for:AI Expertise: A deep interest in artificial intelligence and its effects on society. Many fellows will have experience working in tech journalism, machine learning research, or AI governance.Passionate about journalism: Previous newsroom experience (including at student newspapers) or comparable experience in a field such as law or research is desirable but not necessary. We seek to support early-career journalists and will prioritise potential over experience.Excellent writing skills: A creative approach to storytelling combined with the ability to quickly get to the heart of a new topic. Fellows must be able to produce compelling copy under tight deadlines.Relentless: Journalism is a highly competitive industry. Fellows must be willing to work hard, and be capable of withstanding repeated rejectionOpen-minded: The desire to understand the truth of any given story, even when it conflicts with prior beliefs. Fellows must be open to criticism and recognising ways they can improve their writing and thinking.If you're unsure whether you're a good fit, we encourage you to apply anyway. Alternatively, you can attend one of our upcoming information sessions or email cillian ...

]]>
Cillian Crosson https://forum.effectivealtruism.org/posts/7awJW2GPafcE4HYNf/tarbell-fellowship-2024-applications-open-ai-journalism Thu, 28 Sep 2023 21:08:50 +0000 EA - Tarbell Fellowship 2024 - Applications Open (AI Journalism) by Cillian Crosson Cillian Crosson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:54 no full 7220
c9vt7SPeCCiC76cMC EA - From Passion to Depression and Pessimism: My Journey with Effective Altruism by IReallyWantToBeAnonymous Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: From Passion to Depression and Pessimism: My Journey with Effective Altruism, published by IReallyWantToBeAnonymous on September 28, 2023 on The Effective Altruism Forum.I want to share a deeply personal and painful journey I've had with the EA movement. It's not an easy story to tell, but I believe there's value in presenting this side of the coin. I really want to protect my anonymity, so I'd ask you to please be respectful of my wish and to not reach out to me.Not so long ago, I became wholeheartedly committed to the EA cause. I left a good job after receiving funding to pursue work that resonated with the movement's principles. My belief was so strong that I relocated to another city, eager to make a meaningful impact. A lot of promises were made. A lot of enthusiasm surrounded EA future.Then, the unexpected: my main source of funding collapsed. With its downfall, my life spiraled. I felt deserted by the very community I'd given so much to. Nobody reached out; nobody seemed to care. It was a profound isolation I had never anticipated.This experience plunged me into a severe major depressive episode, one so grave I've grappled with all sorts of dark thoughts. I've now sought treatment for this, but every day is a struggle. For years, I sidelined personal pursuits, including forming meaningful personal and romantic relationships outside the movement, dedicating myself to issues like the potential AI apocalypse and other matters that now seem distant and abstract, when compared to the day-to-day struggles of non-Anglo-American privileged and gifted youngsters. In prioritizing these concerns, I lost sight of the spontaneous, daily realities that give life its texture and meaning.My experience has also left me deeply disillusioned with EA's principles and strategies. I've become nihilistic, doubting if the movement's approach to the world, as noble as it might seem, is genuinely grounded in reality. There's a detachment I've observed, where some of the most crucial elements of our shared human experience, like the importance of spontaneous everyday moments, seem to get lost.In sharing this, my hope isn't to condemn or vilify the EA movement but to highlight the dangers of over-commitment and the risk of losing oneself in a cause. While it's commendable to be passionate, it's essential to remember our humanity, the very thing we're trying to help and protect.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
IReallyWantToBeAnonymous https://forum.effectivealtruism.org/posts/c9vt7SPeCCiC76cMC/from-passion-to-depression-and-pessimism-my-journey-with Thu, 28 Sep 2023 15:52:29 +0000 EA - From Passion to Depression and Pessimism: My Journey with Effective Altruism by IReallyWantToBeAnonymous IReallyWantToBeAnonymous https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:22 no full 7215
8fDyucEfgAyNvK3ge EA - Commonsense Good, Creative Good by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Commonsense Good, Creative Good, published by Jeff Kaufman on September 27, 2023 on The Effective Altruism Forum.Let's say you're vegan and you go to a vegan restaurant. The food is quite bad, and you'd normally leave a bad review, but now you're worried: what if your bad review leads people to go to non-vegan restaurants instead? Should you refrain from leaving a review? Or leave a false review, for the animals?On the other hand, there are a lot of potential consequences of leaving a review beyond "it makes people less likely to eat at this particular restaurant, and they might eat at a non-vegan restaurant instead". For example, three plausible effects of artificially inflated reviews could be:Non-vegans looking for high-quality food go to the restaurant, get vegan food, think "even highly rated vegan food is terrible", don't become vegan.Actually good vegan restaurants have trouble distinguishing themselves, because "helpful" vegans rate everywhere five stars regardless of quality, and so the normal forces that push up the quality of food don't work as well. Now the food tastes bad and fewer people are willing to sustain the sacrifice of being vegan.People notice this and think "if vegans are lying to us about how good the food is, are they also lying to us about the health impacts?" Overall trust in vegans (and utilitarians) decreases.Despite thinking that it is the outcomes of actions that determine whether they are a good idea, I don't think this kind of reasoning about everyday things is actually helpful. It's too easy to tie yourself in logical knots, making a decision that seems counterintuitive-but-correct, except if you spent longer thinking about it, or discussed it with others, you would have decided the other way.We are human beings making hundreds of decisions a day, with limited ability to know the impacts of our actions, and a worryingly strong capacity for self-serving reasoning. A full unbiased weighing of the possibilities is, sure, the correct choice if you relax these constraints, but in our daily lives that's not an option we have.Luckily, humans have lived for centuries under these constraints, and we've developed ideas of what is "good" that turn out to be a solid guide to typical situations. Moral systems around the world don't agree on everything, but on questions of how to live your daily life they're surprisingly close: patience, respect, humility, moderation, kindness, honesty. I'm thankful we have all this learning on makes for harmonious societies distilled into our cultures to support us in our interactions.On the other hand, I do think there is a very important place for this kind of reasoning: sometimes our normal ideas of "good" are seriously lacking. For example, they don't give us much guidance once scale is involved: a donation that helps a hundred people and one that equivalently helps a thousand people are both "good" from a commonsense perspective, even though I think it's pretty clearly ten times better to go with the second. Similarly, if you're trying to decide between working as a teacher in a poor school, therapist in a jail, a manager at a food pantry, or a firefighter in a disadvantaged community, common sense just says they're all "good" and leaves you there.How do we reconcile this conflict, where carefully getting into the consequences of decisions can take a lot of time and risk strange errors, while never evaluating the outcomes of decisions risks having a much smaller positive effect on the world? I'd propose normally going for "commonsense good" and then in the most important cases going for "creative good".The idea is, normally just do straightforwardly good things. Be cooperative, friendly, and considerate. Embrace the standard virtues. Don't stress about the global impacts or second-order altruisti...

]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/8fDyucEfgAyNvK3ge/commonsense-good-creative-good Wed, 27 Sep 2023 20:55:53 +0000 EA - Commonsense Good, Creative Good by Jeff Kaufman Jeff Kaufman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:38 no full 7213
RTG4ZAy6bqJqXo7uW EA - Our tofu book has launched!! (Upvote on Amazon) by George Stiffman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our tofu book has launched!! (Upvote on Amazon), published by George Stiffman on September 27, 2023 on The Effective Altruism Forum.As some of you know, I'm working on growing the US market for Chinese tofus. I believe it could be a way to significantly reduce animal suffering, while shifting American dining culture.We just launched our book - Broken Cuisine - which introduces five of these tofus to Western home cooks. The goal is to spark curiosity and demand for these ingredients, so that we can convince retailers to carry them.Do you have five minutes right now to take an action?Download our FREE e-book on Amazon. (If possible, TODAY)Skim through.A day or two later, leave an honest review. (Amazon easily detects spam.)Downloading and reviewing our book during launch week will convince the Amazon recommender algorithm to push our book, creating a virtuous cycle that will bring it to more people. If Broken Cuisine can crack the bestseller lists, I'm hopeful we can meaningfully start growing the market for these foods!Thank you for your help!!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
George Stiffman https://forum.effectivealtruism.org/posts/RTG4ZAy6bqJqXo7uW/our-tofu-book-has-launched-upvote-on-amazon Wed, 27 Sep 2023 17:58:56 +0000 EA - Our tofu book has launched!! (Upvote on Amazon) by George Stiffman George Stiffman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:13 no full 7209
r96hsZa7xhoK2pzYa EA - The Bulwark's Article On Effective Altruism Is a Short Circuit by Omnizoid Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Bulwark's Article On Effective Altruism Is a Short Circuit, published by Omnizoid on September 27, 2023 on The Effective Altruism Forum.Crosspost of this on my blog.I really like the Bulwark. Their articles are consistently funny, well-written, and sensible. But recently, Mary Townsend wrote a critique of effective altruism titled "Effective Altruism Is a Short Circuit," that seems deeply confused. In fact, I will go further and make a stronger claim - there is not a single argument made in the entire article that should cause anyone to be less sympathetic to effective altruism at all! Every claim in the article is either true but irrelevant to the content of effective altruism or false. The article is crafted in many ways to mislead, confuse and induce negative affect in the reader but is light on anything of substance.For instance, the article begins with a foreboding picture of the notorious EA fraudster Sam Bankman Fried. This is not an explicit argument given, of course - it's just a picture. If it were forced to be an argument, it would not succeed - even if Bernie Madoff gave a lot of money to the Red Cross and has some role in planning operations, that would do nothing to discredit the Red Cross; the same principle is true of EA. But when one is writing a smear piece, they don't need to include real objections - they can just include things that induce disdain in the reader that they come to associate with the object of the author's criticism. Such is reminiscent of the flashing red letters that are ubiquitous in attack ads - good if one's aim is propaganda, bad if one's aim is truth.The article spends the first few paragraphs on mostly unremarkable philosophical musings about how we often have an urge to do good and we can choose what we do, filled with sophisticated-sounding references to philosophers and literature. Such musings help build the author's ethos as a Very Serious Person, but does little to provide an argument. However, after a few paragraphs of this, the author gets to the first real criticism:That one could become good through monetary transactions should raise our post-Reformation suspicions, obviously. As a simple response to the stipulation of a dreadful but equally simple freedom, it seems almost designed to hit us at the weakest spots of our human frailty, with disconcerting effects.Effective altruism doesn't claim, like those who endorsed indulgences, that one can become good through donating. It claims that one can do good through donating and that one should do good. The second half of that claim is a trivially obvious moral claim - we should help people more rather than less - and the first half of the claim is backed by quite overwhelming empirical evidence. While one can dispute the details somewhat, the claim that we can save the lives of faraway people for a few thousand dollars is incontrovertible given the weight of the available evidence - there's a reason that critics of EA never have specific criticisms of the empirical claims made by effective altruists.Once one acknowledges that those who give to effective charities can save hundreds of lives over the course of their lives by fairly modest donations, a claim that even critics of such giving generally do not dispute, the claim that one should donate significant amounts to charities in order to save the lives of lots of people who would otherwise have died of horrifying diseases ceases to be something that "raises our post-reformation suspicions." One imagines the following dialogue (between Townsend and a starving child):Child: Please, could I have five dollars. This would allow me to afford food today so I wouldn't be hungry.Townsend: Sorry, I'd love to help, but that one could become good through monetary transactions should raise our post-Reformation suspici...

]]>
Omnizoid https://forum.effectivealtruism.org/posts/r96hsZa7xhoK2pzYa/the-bulwark-s-article-on-effective-altruism-is-a-short Wed, 27 Sep 2023 14:51:48 +0000 EA - The Bulwark's Article On Effective Altruism Is a Short Circuit by Omnizoid Omnizoid https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:20 no full 7207
SGZrrxThGrvn3Da5u EA - [Linkpost] Prospect Magazine - How to save humanity from extinction by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Prospect Magazine - How to save humanity from extinction, published by jackva on September 27, 2023 on The Effective Altruism Forum.This is a story in Prospect Magazine featuring the work of several EAs (full disclosure: including myself) working on existential and catastrophic risks framed around the occasion of Petrov Day.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
jackva https://forum.effectivealtruism.org/posts/SGZrrxThGrvn3Da5u/linkpost-prospect-magazine-how-to-save-humanity-from Wed, 27 Sep 2023 14:48:04 +0000 EA - [Linkpost] Prospect Magazine - How to save humanity from extinction by jackva jackva https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:34 no full 7208
aztshctf3PxBnKHqF EA - International AI Institutions: a literature review of models, examples, and proposals by MMMaas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: International AI Institutions: a literature review of models, examples, and proposals, published by MMMaas on September 27, 2023 on The Effective Altruism Forum.The Legal Priorities Project has published a new report (link, PDF, SSRN) surveying models, different examples, and proposals for international institutions for AI governance.This literature review examines a range of institutional models that have been proposed over the year for the international governance of AI. The review specifically focuses on proposals that would involve the creation of new international institutions for AI. As such, it focuses on seven models for international AI institutions with distinct functions. These models are:Scientific consensus buildingPolitical consensus-building and norm-settingCoordination of policy and regulationEnforcement of standards or restrictionsStabilization and emergency responseInternational joint researchDistribution of benefits and accessPart I consists of the literature review. For each model, we provide (i) a description of each model's functions and types; (ii) the most common examples of each model; (iii) some under-explored examples that are not (often) mentioned in the AI governance literature but that show promise; (iv) a review of proposals for the application of that model to the international regulation of AI; and (v) critiques of the model both generally and in its potential application to AI.Part II briefly discusses some considerations for further research concerning the design of international institutions for AI, including the effectiveness of each model at accomplishing its aims; treaty-based regulatory frameworks; other institutional models not covered in this review; the compatibility of institutional functions; and institutional options to host a new international AI governance body.Overall, the review covers seven institutional models, as well as more than thirty-three commonly invoked examples of those models, twenty-two additional examples, and forty-seven proposals of new AI institutions based on those models. Table 1 summarizes these findings.Table 1: Overview of institutional models, examples, and proposed institutions surveyedModelScientific consensus- buildingPolitical consensus- building and norm-settingCoord. of policy and regulationEnforcement of standards or restrictionsStabilization and emergency responseIntern. joint researchDistribution of benefits and accessCommon examplesUnder- explored examplesProposed AI institutionsIPCCIPBESSAPCEPWMOIPAICommission on Frontier AIIntergovernmental Panel on Information TechnologyCOPs (e.g. UNFCCC COP)OECDG20G7ISOIECITUVarious soft law instrumentsLysøen DeclarationCodex Alimentarius CommissionBRICSIAIOEmerging Technology CoalitionIAAIData Governance StructureData Stewardship OrganizationInternational Academy for AI Law and RegulationWTOICAOIMOIAEAFATFUNEPILOUNESCOEMEPWorld BankIMFWSISAdvanced AI Governance OrganisationIAIOEU AI AgencyGAIAGenerative AI global governance bodyCoordinator and Catalyser of International AI LawIAEA (Department of Safeguards)Nuclear Suppliers GroupWassenaar ArrangementMissile Technology Control RegimeOpen Skies Consultative CommissionAtomic Development AuthorityOPCWBWC Implementation UnitIMOCITES SecretariatUN AI control agencyGlobal watchdog agencyInternational Enforcement AgencyEmerging Technologies TreatyIAIA (multiple)UN Framework Convention on AI (UNFCAI) & Protocol on AI, supported by Intergovernmental Panel on AI, AI GLobal Authority, and supervisory bodyAdvanced AI Governance OrganizationAIEA for SuperintelligenceNPT+Multilateral AI governance initiativeInternational AI Safety AgencyAdvanced AI chips registryCode of conduct for state behaviorAI CBMsOpen Skies for AIBilateral...

]]>
MMMaas https://forum.effectivealtruism.org/posts/aztshctf3PxBnKHqF/international-ai-institutions-a-literature-review-of-models Wed, 27 Sep 2023 13:13:05 +0000 EA - International AI Institutions: a literature review of models, examples, and proposals by MMMaas MMMaas https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:51 no full 7206
KuNsnszBxENS3tWMm EA - GWWC's new community strategy by Giving What We Can Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's new community strategy, published by Giving What We Can on September 27, 2023 on The Effective Altruism Forum.I'm extremely excited to announce the launch of our new community strategy that aims to connect our existing community better, and inspire more people to join us - I've been working on this for the past few months and am thrilled to finally share it!Over the almost two years I've been working with Giving What We Can, I've seen how important our community is - whether it's a supportive network to ask questions, a friendly face in a new city, or a shared sense that none of us are alone in wanting to do what we can to make a better world, I've personally taken much-needed inspiration, motivation, and comfort more times than I can count, simply by hearing from you!I want to make it easier for you to do the same.We are launching some new spaces for you to meet, connect, and chat with each other. Everyone is welcome: whether you just subscribe to our newsletter, have taken the pledge, have donated through our platform, or are simply interested in meeting like-minded people who care about making a difference in the world:In brief, Giving What We Can's new community strategy includes:The launch of GWWC Local Groups in key cities like London, NYC, Berlin, and Sydney; we plan to launch in other cities as wellAn online community space for you to connect, no matter where you are locatedA directory of other (non-GWWC) effective giving groups, to better connect the effective giving community more broadlyI firmly believe that community and connection are an integral part of working towards a society where giving significantly and effectively is a cultural norm. We've heard from many of you that one of the best things about having found Giving What We Can is learning that there are other people with similar values. We hope this makes it a little easier to connect!TL;DR: How do I get involved?Join one of the GWWC Local groups below or apply to start one in your city!LondonNew YorkBerlinVancouverSydneyMelbourneYou can read more about each of our new initiatives below:1: GWWC Local GroupsSince its founding in 2009, Giving What We Can has been connecting people through their commitment to effective giving. Our mission isn't just to marginally increase the amount of money going to effective charities; we're aiming for meaningful cultural change - with a goal to make giving effectively and significantly a global norm, in service of a better world. The community we've built (and are continuing to build) is instrumental to reaching that goal.The way that GWWC has engaged with the community has changed a lot over the years. It all started as small local groups bound by a common thread of pledging 10% back in 2009 through "Giving What We Can Chapters".One of the biggest changes that occurred was GWWC Chapters transitioning to become EA Groups several years ago.Since then, we've worked with EA groups to promote effective giving, had volunteers host monthly virtual meetups, and run an ambassador program where our ambassadors ran events too!Some highlights:PISE (Positive Impact Society Erasmus) has hosted pledge celebration events.An ambassador John Yan has hosted Giving Tuesday after-parties in New York.Grace (Head of Marketing) ran a community event in Melbourne earlier this year.Since GWWC was revitalised in 2020, we have often received requests to restart local groups, given that some of its strongest growth historically came from our former Chapters. Many of our smaller initiatives (e.g. one-off events, working with EA groups, and running the ambassador program) have shown a desire for this too.We've also been excited to see successful communities around effective giving start to grow and adapt to local cultural contexts i.e. in London and in the ...

]]>
Giving What We Can https://forum.effectivealtruism.org/posts/KuNsnszBxENS3tWMm/gwwc-s-new-community-strategy Wed, 27 Sep 2023 01:13:20 +0000 EA - GWWC's new community strategy by Giving What We Can Giving What We Can https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:23 no full 7202
ngk6AFo5uNHB3ZKQY EA - Inside the Mind of an Aspiring Charity Entrepreneur [Follow Along] #1 - From Layoff to Co-founding in a Breathtaking Two Months by Harry Luk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Inside the Mind of an Aspiring Charity Entrepreneur [Follow Along] #1 - From Layoff to Co-founding in a Breathtaking Two Months, published by Harry Luk on September 27, 2023 on The Effective Altruism Forum.TL;DRI want to extend a MASSIVE, heartfelt thank you to the abundant resources put together by the EA community and the wonderful EAs who are so generous with their knowledge and time (even when talking to a complete newbie like me).Because of you, I went from being laid off in June and knowing nothing about EA in July, to being offered a nonprofit co-founder position in early September, to finishing some initial groundwork of setting up a charity (productivity system, Founder's agreement, funding proposal draft, etc) in late September.The initial groundwork implemented thus far were all influenced by the action steps found in How to Launch a High-Impact Nonprofit. Thus, this aims to be a post sequence dedicated to putting the book's concepts to work and showing some actual work examples.Hopefully with more of your support and feedback, the StakeOut.AI charity startup can keep a good steady pace and accomplish more crucial milestones in Oct and onwards.As this post is the intro of the sequence, it details more specifically my journey (learning and doing applications) - including important tips for other applicants (apply early, handling rejections) and some feedback for EA organizations taking applications (setup to process many applications, send responders a copy of their responses, informing those who didn't make the cut).I hope this brings value to the EA community as this is perspective from someone very green in EA, eager to take action quickly even though I came from a totally different world before.AcknowledgmentI would like to give a special thank you to Dr. Peter S. Park for editing this post and for all our future collaborations! There is also a I'm so grateful for everyone who has been a part of the journey thus far section later in this post.IntroductionHi, my name is Harry, it's nice to meet all of you. This post, my first ever post on the EA forum, is inspired by Three lessons I've learnt from starting a small EA org and Why and how to start a pilot project in EA - so thank you Ben Williamson.The purpose of this post is to [1] document my journey publicly (hopefully to inspire others who are thinking they want to start a side project, or apply to CE, or go down the path of charity entrepreneurship), [2] maximize feedback as per Ben's post (both directly through the post (in the comments) and indirectly (people who reach out privately with additional suggestions/ advice)), and [3] share key lessons & milestones to hopefully contribute to the forum via actual work examples.The hope is that it is going to be a follow along post sequence, as my co-founder Dr. Peter S. Park and I get our nonprofit startup up and running.Though the journey with EA has only been 2 months, I already have so many people to thank who have answered questions, provided feedback and pointed me in the right direction. I am forever grateful for you :)The Catalyst: The Layoff that Sparked My JourneyI first found out about Effective Altruism from my Google search about impactful tithing. As a Christian, I have been tithing 10% of my income for many years, but because I was laid off from shortage of work in my previous marketing role, I was interested in finding more impactful ways to give due to our now lowered family income.This is when I stumbled upon GivingMultiplier.org and was introduced to the idea of super-effective charities. I was intrigued by their approach where you can still give from the heart (your favorite charity), but at the same time incentivizes you to give more to super-effective charities by matching your donations (the higher % you give to super...

]]>
Harry Luk https://forum.effectivealtruism.org/posts/ngk6AFo5uNHB3ZKQY/inside-the-mind-of-an-aspiring-charity-entrepreneur-follow Wed, 27 Sep 2023 00:07:30 +0000 EA - Inside the Mind of an Aspiring Charity Entrepreneur [Follow Along] #1 - From Layoff to Co-founding in a Breathtaking Two Months by Harry Luk Harry Luk https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 27:10 no full 7204
YcDXWTzyyfHQHCM4q EA - A Primer for Insect Sentience and Welfare (as of Sept 2023) by Meghan Barrett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Primer for Insect Sentience and Welfare (as of Sept 2023), published by Meghan Barrett on September 27, 2023 on The Effective Altruism Forum.This post was written by me, Meghan Barrett, in my independent capacity as an academic research scientist and entomologist. None of the organizations with which I'm affiliated should be taken to endorse or support any particular conclusions or resources listed herein based on this post.IntroductionIf you attended my talk at EAG London in May 2023, you may remember this basic narrative:Insects might matter morally. There are a lot of them. We can use scientific evidence to make their lives better.This is the quick case I provide for working on insect welfare. Since that talk, I've been encouraged by the amount of interest in the topic among members of the EA community, so many of whom want to learn about insects and their welfare in farmed, wild, and research contexts. The lives and capabilities of our planet's ~5.5 million species of insects are often surprising and quite poorly understood (even by entomologists!), which can lead us all to make empirically-unsupported assumptions about their sentience, capacity for welfare, and welfare concerns. Although there are many significant unknowns in the science of insect sentience and welfare, it is clear that if we want to help insects, we need to learn what we can about them.Lots of advocates are doing just that: they're putting in the work to understand insects' nervous systems, behavior, and physiology - as well as the scale and contexts of their use and management. I'm heartened to see people take insects seriously. So, as an insect neurobiologist and physiologist by training, I want to do my part to make it easier to learn about these fascinating, diverse, and highly neglected animals.This post is a quick, non-exhaustive, and lightly-annotated list of resources that can serve as a primer for folks interested in getting up to speed on insect pain, sentience, and welfare as of September 2023. For future readers, it also points toward some places where people can go for the most recent information on insect welfare and sentience.I hope this guide is useful for introducing you to the topic - and for demonstrating that there is a lot of rigorous empirical, or otherwise expert, conversation currently happening on the topics of insect sentience and welfare.Quick caveatsThe welfare-focused work on this list (as compared to the pain and sentience research) is skewed toward the work of Rethink Priorities, my collaborators' publications, and my own efforts. I'm obviously biased, but I think these folks have done much of the most rigorous work in the space to date.The list is biased towards biological information over, say, economic models or philosophical considerations. I'm a biologist. It's also probably biased towards neurobiology and physiology over, say, ethology (though I've tried hard to include behavioral resources, too).The list is intentionally non-exhaustive - it's a primer, not a research database (if you want that, check out this link here) - so you shouldn't expect these resources to provide a complete overview of everything you might need to contribute meaningfully to the conversation on insect sentience or welfare.This list won't be updated regularly. It's the list 'as of September 2023'. The insect welfare and sentience space is starting to move faster, so this may be seriously out of date within a year or two.Not all work on this list is peer-reviewed (at least in the traditional, academic sense).Inclusion doesn't equal endorsement. Instead, inclusion on this list is an expression of confidence that either (1) there's something of value to the work conducted therein or (2) it is an important part of the history and debate of the discipline.Insect Sentienc...

]]>
Meghan Barrett https://forum.effectivealtruism.org/posts/YcDXWTzyyfHQHCM4q/a-primer-for-insect-sentience-and-welfare-as-of-sept-2023 Wed, 27 Sep 2023 00:02:24 +0000 EA - A Primer for Insect Sentience and Welfare (as of Sept 2023) by Meghan Barrett Meghan Barrett https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:13 no full 7201
dGe3GLjteAcnei3Fj EA - Vegans in the UK: are you getting enough iodine? by Stan Pinsent Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vegans in the UK: are you getting enough iodine?, published by Stan Pinsent on September 26, 2023 on The Effective Altruism Forum.This post was written very quickly by a non-expert. Do your own reading before changing your diet!You should make sure you get iodine:Unlike most countries, table salt in the UK is not iodisedMost Brits get plenty of iodine from dairyDeficiency risk factors include being vegan, being femaleIodine deficiency while (pre) pregnant or breastfeeding can cause IQ drop and even cretinism in the childFor everyone else, symptoms are usually mildBuy iodised salt or even supplements. Careful, you can get too much.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Stan Pinsent https://forum.effectivealtruism.org/posts/dGe3GLjteAcnei3Fj/vegans-in-the-uk-are-you-getting-enough-iodine Tue, 26 Sep 2023 22:55:52 +0000 EA - Vegans in the UK: are you getting enough iodine? by Stan Pinsent Stan Pinsent https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:54 no full 7205
g72tGduJMDhqR86Ns EA - "Diamondoid bacteria" nanobots: deadly threat or dead-end? A nanotech investigation by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Diamondoid bacteria" nanobots: deadly threat or dead-end? A nanotech investigation, published by titotal on September 29, 2023 on The Effective Altruism Forum.Confidence level: I'm a computational physicist working on nanoscale simulations, so I have some understanding of most of the things discussed here, but I am not specifically an expert on the topics covered, so I can't promise perfect accuracy.I want to give a huge thanks to Professor Phillip Moriarty of the university of Nottingham for answering my questions about the experimental side of mechanosynthesis research.Introduction:A lot of people are highly concerned that a malevolent AI or insane human will, in the near future, set out to destroy humanity. If such an entity wanted to be absolutely sure they would succeed, what method would they use? Nuclear war? Pandemics?According to some in the x-risk community, the answer is this: The AI will invent molecular nanotechnology, and then kill us all with diamondoid bacteria nanobots.This is the "lower bound" scenario posited by Yudkowsky in his post AGI ruin:The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer.The phrase "diamondoid bacteria" really struck out at me, and I'm not the only one. In this post by Carlsmith (which I found very interesting), Carlsmith refers to diamondoid bacteria as an example of future tech that feels unreal, but may still happen:Whirling knives? Diamondoid bacteria? Relentless references to paper-clips, or "tiny molecular squiggles"? I've written, elsewhere, about the "unreality" of futurism. AI risk had a lot of that for me.Meanwhile, the controversial anti-EA crusader Emille Torres cites the term "diamondoid bacteria" as a reason to dismiss AI risk, calling it "patently ridiculous".I was interested to know more. What is diamondoid bacteria? How far along is molecular nanotech research? What are the challenges that we (or an AI) will need to overcome to create this technology?If you want, you can stop here and try and guess the answers to these questions.It is my hope that by trying to answer these questions, I can give you a taste of what nanoscale research actually looks like. It ended up being the tale of a group of scientists who had a dream of revolutionary nanotechnology, and tried to answer the difficult question: How do I actually build that?What is "diamondoid bacteria"?The literal phrase "diamondoid bacteria" appears to have been invented by Eliezer Yudkowsky about two years ago. If you search the exact phrase in google scholar there are no matches:If you search the phrase in regular google, you will get a very small number of matches, all of which are from Yudkowsky or directly/indirectly quoting Yudkowsky. The very first use of the phrase on the internet appears to be this twitter post from September 15 2021. (I suppose there's a chance someone else used the phrase in person).I speculate here that Eliezer invented the term as a poetic licence way of making nanobots seem more viscerally real. It does not seem likely that the hypothetical nanobots would fit the scientific definition of bacteria, unless you really stretched the definition of terms like "single-celled" and "binary fission". Although bacteria are very impressive micro-machines, so I wouldn't be surprised if future nanotech bore at least some resemblance.Frankly, I think inventing new terms is an extremely unwise move (I think that Eliezer has stopped using the term since I started writing this, but others still are). "diamondoid bacteria" sounds science-ey enough that a lot of people would assume it was already a scien...

]]>
titotal https://forum.effectivealtruism.org/posts/g72tGduJMDhqR86Ns/diamondoid-bacteria-nanobots-deadly-threat-or-dead-end-a Fri, 29 Sep 2023 17:46:25 +0000 EA - "Diamondoid bacteria" nanobots: deadly threat or dead-end? A nanotech investigation by titotal titotal https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 32:54 no full 7230
XX578vcKyzrK8Ly3K EA - List of how people have become more hard-working by Chi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of how people have become more hard-working, published by Chi on September 29, 2023 on The Effective Altruism Forum.I've recently asked how people have become more hard-working. I compiled the answers across the EA Forum and LessWrong (and some private messages) in a list for myself to make it easier for me to experiment with the suggestions. I thought I'd share the list here in case it's useful for anyone else. I also list the things that people said didn't work and a couple of other things.This wasn't done to be "proper", so the list is sloppy in many ways: I liberally paraphrased what people said; often I could have easily counted something two people said as the same or two different things, which would change the way I counted how often something was mentioned; I very roughly grouped the things that were said into categories but easily could have categorised many things differently.Notable pointsIndividual points that were mentioned the most:(Soft) accountability (deadlines, beeminder, accountability buddy, posting about your goals, boss as a service, promising friends) (9)Working on interesting problems/enjoyable work (and in an enjoyable work environment) (8)Focusmate/Coworking (often poms) (7)Some things that weren't mentioned a lot but that I found interesting:Identifying (or being thought of) as hard-working (3)Categorising work as "not work" and instead as something enjoyable, adjusting work environment accordingly (1)Other thingsAge at the time of the shift in hard-workingnesswas usually not mentioned, but when it was mentioned, it was between 20-30Some people managed to become permanently more hard-working after experiencing one period of working hard, even when they switched to less enjoyable or just very different work. That initial period would either be induced by external pressure or by working hard on something they didn't consider work. (3)Full list of what made people more hard-workingHere is the full list, ordered by how often things in a category were named. (Note that often the same person would list multiple things in the category, so the sums aren't summing over people)Thing that workedHow many people mentionedSocial Focusmate/Coworking (often poms)8(Regular) contact with other people to talk about work, debug, check-in etc.4Identifying (or being thought of) as hard-working3Surrounding yourself with ~hard-working people in life in general3Supportive work environment2Having a manager1 21What kind of work Working on interesting problems/enjoyable work (and in an enjoyable work environment)8Feeling like you're good at what you're doing, getting positive feedback4Working on things you consider important3More clear tasks, feedback, endpoints etc.2Less pressuring work1Autonomy1Making work more fun1At least one (work) thing you like per day1 21External pressure (Soft) accountability (deadlines, beeminder, accountability buddy, posting about your goals, boss as a service, promising friends)9Children/poverty: External motivation to do work2Almost being fired1 12Learning more about yourself and your goals Figure out which work hours are most useful, scheduledifferent kinds of work for different times to work more efficiently3Thinking about what you want to do with life and what (work) motivates you3Experimenting with what actually makes you (less) productive e.g. via tracking and realising that productivity advice is very personal2Repeated experience of joy from achieving big things1Deciding how many hours you endorse working1 10Misc. specific techniques Productivity books2Productivity systems2Having policies for ways of making time productive when there are trade-offs e.g. with money1Physical Kanban boards1Walking meetings with yourself1Leverage momentum: Start the day with a small experience of success and let that spiral1Work ...

]]>
Chi https://forum.effectivealtruism.org/posts/XX578vcKyzrK8Ly3K/list-of-how-people-have-become-more-hard-working Fri, 29 Sep 2023 15:11:44 +0000 EA - List of how people have become more hard-working by Chi Chi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:54 no full 7229
H6hxTrgFpb3mzMPfz EA - Weighing Animal Worth by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Weighing Animal Worth, published by Jeff Kaufman on September 28, 2023 on The Effective Altruism Forum.It's common for people who approach helping animals from a quantitative direction to need some concept of "moral weights" so they can prioritize. If you can avert one year of suffering for a chicken or ten for shrimp which should you choose? Now, moral weight is not the only consideration with questions like this, since typically the suffering involved will also be quite different, but it's still an important factor.One of the more thorough investigations here is Rethink Priorities' moral weights series. It's really interesting work and I'd recommend reading it! Here's a selection from their bottom-line point estimates comparing animals to humans:Humans1 (by definition)Chickens3Carp11Bees14Shrimp32If you find these surprisingly low, you're not alone: that giving a year of happy life to twelve carp might be more valuable than giving one to a human is for most people a very unintuitive claim. The authors have a good post on this, Don't Balk at Animal-friendly Results, that discusses how the assumptions behind their project make this kind of result pretty likely and argues against putting much stock in our potentially quite biased initial intuitions.What concerns me is that I suspect people rarely get deeply interested in the moral weight of animals unless they come in with an unusually high initial intuitive view. Someone who thinks humans matter far more than animals and wants to devote their career to making the world better is much more likely to choose a career focused on people, like reducing poverty or global catastrophic risk. Even if someone came into the field with, say, the median initial view on how to weigh humans vs animals I would expect working as a junior person in a community of people who value animals highly would exert a large influence in that direction regardless of what the underlying truth. If you somehow could convince a research group, not selected for caring a lot about animals, to pursue this question in isolation, I'd predict they'd end up with far less animal-friendly results.When using the moral weights of animals to decide between various animal-focused interventions this is not a major concern: the donors, charity evaluators, and moral weights researchers are coming from a similar perspective. Where I see a larger problem, however, is in broader cause prioritization, such as Net Global Welfare May Be Negative and Declining. The post weighs the increasing welfare of humanity over time against the increasing suffering of livestock, and concludes that things are likely bad and getting worse. If you ran the same analysis with different inputs, such as what I'd expect you'd get from my hypothetical research group above, however, you'd instead conclude the opposite: global welfare is likely positive and increasing.For example, if that sort of process ended up with moral weights that were 3x lower for animals relative to humans we would see approximately flat global welfare, while if they were 10x lower we'd see increasing global welfare:See sheet to try your own numbers; original chart digitized with via graphreader.com.Note that both 3x and 10x are quite small compared to the uncertainty involved in coming up with these numbers: in different post the Rethink authors give 3x (and maybe as high as 10x) just for the likely impact of using objective list theory instead of hedonism, which is only one of many choices involved in estimating moral weights.I think the overall project of figuring out how to compare humans and animals is a really important one with serious implications for what people should work on, but I'm skeptical of, and put very little weight on, the conclusions so far.Thanks for listening. To help us out with The Non...

]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/H6hxTrgFpb3mzMPfz/weighing-animal-worth Thu, 28 Sep 2023 18:02:21 +0000 EA - Weighing Animal Worth by Jeff Kaufman Jeff Kaufman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:28 no full 7225
BjgE7EpCKodQ3nAyp EA - Two cheap ways to test your fit for policy work by MathiasKB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two cheap ways to test your fit for policy work, published by MathiasKB on September 30, 2023 on The Effective Altruism Forum.At EAGs I often find myself having roughly the same 30 minute conversation with university students who are interested in policy careers and want to test their fit.This post will go over two cheap tests, each possible to do over a weekend, that you can do to test your fit for policy work.I am by no means the best person to be giving this advice but I received feedback that my advice was helpful, and I'm not going to let go of an opportunity to act old and wise. A lot of it is based off what worked for me, when I wanted to break into the field a few years ago. Get other perspectives too! Contradictory input in the comments from people with more seniority is most welcome.A map of typical policy roles'Policy' is a wide field with room for many skillsets. The skillsets needed these roles vary significantly. It's worth exploring the different types of roles to find your fit. I like to visualize the different roles as lying on a spectrum, with abstract academic research in one end and lobbyism at the other:The type of work will vary significantly at each end of this spectrum. Common for them all is a genuine interest in the policy-making process.Test your fit in a weekCommonly recommended paths are various fellowships and internships. They are a great way to test ones fit, but they are also a large commitment.For the complete beginner, we can do much cheaper!Test 1: Read policy texts and write up your thoughtsMost fields of policy will have a few legislative texts or government white papers that are central to all work currently being done on the topic.A few examples of relevant texts for a few cause areas and contexts:EU AI Policy: AI ActUS Development cooperation: USAID's 2023 policy frameworkEU Animal Welfare: The European Commission's Staff Working Document on animal welfareEU Biosecurity: DG HERA's 2023 work planLet's go with the example of EU AI Policy. The AI Act is available online in every European language. While the full document is >100 pages, the meat of the act is only about 20-30 pages or so (going off memory).Read the document and try forming your own opinion of the act! What are its strengths and weaknesses? What would you change to improve it?For now, don't worry too much about the quality of the output. A well informed inside view takes more than a weekend to develop!Instead reflect over which parts of the exercise you found yourself the most engaged. If you found the exercise generally enjoyable once you got started, that's a sign you might be a good fit for policy work!Additionally, digging into the source material is necessary to forming original views and will make you stand out to future employers. The object level of policy is underrated!My hope is that the exercise will leave you with a bunch of open questions you would like to further explore. How exactly did EU's delegated acts work again? What was the Parliament's response to the Commission's leaked working document?If you keep pursuing the questions you're interested in, you'll soon find yourself nearing the frontier of knowledge for your area of policy interest. Once you find yourself with a question you can't find a good answer to, you might have stumbled good project to further explore your fit :)Test 2: Follow a committee hearingParliaments typically have topic-based committees where members of the parliament debate current issues and legislation relevant to the committee. These debates are often publicly available on the parliament's website.Try listening to a debate on the topic of your interest. What are the contentions? What arguments are used by each side? If you were to give the next speech, how would you argue for your own views?If yo...

]]>
MathiasKB https://forum.effectivealtruism.org/posts/BjgE7EpCKodQ3nAyp/two-cheap-ways-to-test-your-fit-for-policy-work 100 pages, the meat of the act is only about 20-30 pages or so (going off memory).Read the document and try forming your own opinion of the act! What are its strengths and weaknesses? What would you change to improve it?For now, don't worry too much about the quality of the output. A well informed inside view takes more than a weekend to develop!Instead reflect over which parts of the exercise you found yourself the most engaged. If you found the exercise generally enjoyable once you got started, that's a sign you might be a good fit for policy work!Additionally, digging into the source material is necessary to forming original views and will make you stand out to future employers. The object level of policy is underrated!My hope is that the exercise will leave you with a bunch of open questions you would like to further explore. How exactly did EU's delegated acts work again? What was the Parliament's response to the Commission's leaked working document?If you keep pursuing the questions you're interested in, you'll soon find yourself nearing the frontier of knowledge for your area of policy interest. Once you find yourself with a question you can't find a good answer to, you might have stumbled good project to further explore your fit :)Test 2: Follow a committee hearingParliaments typically have topic-based committees where members of the parliament debate current issues and legislation relevant to the committee. These debates are often publicly available on the parliament's website.Try listening to a debate on the topic of your interest. What are the contentions? What arguments are used by each side? If you were to give the next speech, how would you argue for your own views?If yo...]]> Sat, 30 Sep 2023 22:59:00 +0000 EA - Two cheap ways to test your fit for policy work by MathiasKB 100 pages, the meat of the act is only about 20-30 pages or so (going off memory).Read the document and try forming your own opinion of the act! What are its strengths and weaknesses? What would you change to improve it?For now, don't worry too much about the quality of the output. A well informed inside view takes more than a weekend to develop!Instead reflect over which parts of the exercise you found yourself the most engaged. If you found the exercise generally enjoyable once you got started, that's a sign you might be a good fit for policy work!Additionally, digging into the source material is necessary to forming original views and will make you stand out to future employers. The object level of policy is underrated!My hope is that the exercise will leave you with a bunch of open questions you would like to further explore. How exactly did EU's delegated acts work again? What was the Parliament's response to the Commission's leaked working document?If you keep pursuing the questions you're interested in, you'll soon find yourself nearing the frontier of knowledge for your area of policy interest. Once you find yourself with a question you can't find a good answer to, you might have stumbled good project to further explore your fit :)Test 2: Follow a committee hearingParliaments typically have topic-based committees where members of the parliament debate current issues and legislation relevant to the committee. These debates are often publicly available on the parliament's website.Try listening to a debate on the topic of your interest. What are the contentions? What arguments are used by each side? If you were to give the next speech, how would you argue for your own views?If yo...]]> 100 pages, the meat of the act is only about 20-30 pages or so (going off memory).Read the document and try forming your own opinion of the act! What are its strengths and weaknesses? What would you change to improve it?For now, don't worry too much about the quality of the output. A well informed inside view takes more than a weekend to develop!Instead reflect over which parts of the exercise you found yourself the most engaged. If you found the exercise generally enjoyable once you got started, that's a sign you might be a good fit for policy work!Additionally, digging into the source material is necessary to forming original views and will make you stand out to future employers. The object level of policy is underrated!My hope is that the exercise will leave you with a bunch of open questions you would like to further explore. How exactly did EU's delegated acts work again? What was the Parliament's response to the Commission's leaked working document?If you keep pursuing the questions you're interested in, you'll soon find yourself nearing the frontier of knowledge for your area of policy interest. Once you find yourself with a question you can't find a good answer to, you might have stumbled good project to further explore your fit :)Test 2: Follow a committee hearingParliaments typically have topic-based committees where members of the parliament debate current issues and legislation relevant to the committee. These debates are often publicly available on the parliament's website.Try listening to a debate on the topic of your interest. What are the contentions? What arguments are used by each side? If you were to give the next speech, how would you argue for your own views?If yo...]]> MathiasKB https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:17 no full 7242
wGKGrCYNwjhhK4N69 EA - Why I'm still going out to bat for the EA movement post FTX by Gemma Paterson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I'm still going out to bat for the EA movement post FTX, published by Gemma Paterson on September 30, 2023 on The Effective Altruism Forum.From the looks of it, next week might be rough for people who care about Effective Altruism. As CEA acting CEO Ben West pointed out on the forum:"Sam Bankman-Fried's trial is scheduled to start October 3, 2023, and Michael Lewis's book about FTX comes out the same day. My hope and expectation is that neither will be focused on EA .Nonetheless, I think there's a decent chance that viewing the Forum, Twitter, or news media could become stressful for some people, and you may want to pre-emptively create a plan for engaging with that in a healthy way.I really appreciated that comment since I didn't know that and I'm glad I had time to mentally prepare. As someone who does outward facing voluntary community building at my workplace and in London, I feel nervous. I've written this piece to manage that anxiety. I actually wrote a lot of it last year to process my feelings but now seems like a good time to share.Thoughts on EA after 2022During the spring/summer of 2022, I was studying for my final ICAS Chartered Accountancy case study exams. They usually involve:Being given a scenario where you are the qualified accountant and have been asked to do some analysis for a group of companies.As you need to meet a minimum standard of Public Trust and Ethics to join an accountancy body, there is usually a dodgy character or two involved at the client company or at the firm that employs you.The dodgy characters are lax about controls and risk - asking you as the accountant to "look the other way" and the case is usually set up that there is an incentive for you as the accountant to do that.There are often weak internal controls at the company which make fraud more likely. Some examples of weak internal controls:Lack of segregation of duties - one employee has end-to-end control of a process without oversightInadequate access controls - sensitive systems and data are not restricted to authorized personnelCircumvention of controls - existing controls are intentionally avoided or bypassed by employeesUnsigned approvals - documents requiring sign-off are not properly authorizedAt the end, you submit your technical accounting, finance, tax, internal controls, and general business advice that was requested as part of the case study. You also need to submit an ethics memo which would outline your concerns about the dodgy people or controls.You'll be glad to hear that after going through lots of practice papers, I passed!Then three months later, the news on FTX dropped.I'm not going to outline it here since I imagine the audience for this are already familiar - if someone has a good summary, please drop it in the comments. It wasn't just FTX, around that time, information was released on Wytham Abbey, Bostrom and the Time sexual harassment scandals.For context, I'm mostly interested in expanding the reach of effective giving / EA impact methodologies and personally put a high discount rate on the future vs existing suffering, so, I didn't follow the FTX Future Fund that closely.My primary feelings at the end of 2022 were anger and embarrassment.AngerI'm annoyed I must answer questions about the damned castle. I did not buy the castle. None of the charities I recommend you donate to bought the castle. Yet in my community building work, I have to answer questions about the castle. It was handled poorly and I'm angry about it.When the Time article came out, I was running the London Women (and NBs) in EA group. Honestly, I incorrectly assumed it was just about EA in the Bay area so when it was revealed it was Owen Cotton-Barrett, that was a shock and tbh I was annoyed at a lot of the responses.In hindsight, I wish I'd created a sp...

]]>
Gemma Paterson https://forum.effectivealtruism.org/posts/wGKGrCYNwjhhK4N69/why-i-m-still-going-out-to-bat-for-the-ea-movement-post-ftx Sat, 30 Sep 2023 18:24:32 +0000 EA - Why I'm still going out to bat for the EA movement post FTX by Gemma Paterson Gemma Paterson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:07 no full 7241
eSZuJcLGd7BacjWGi EA - Announcing the Winners of the 2023 Open Philanthropy AI Worldviews Contest by Jason Schukraft Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Winners of the 2023 Open Philanthropy AI Worldviews Contest, published by Jason Schukraft on September 30, 2023 on The Effective Altruism Forum.IntroductionIn March 2023, we launched the Open Philanthropy AI Worldviews Contest. The goal of the contest was to surface novel considerations that could affect our views on the timeline to transformative AI and the level of catastrophic risk that transformative AI systems could pose. We received 135 submissions. Today we are excited to share the winners of the contest.But first: We continue to be interested in challenges to the worldview that informs our AI-related grantmaking. To that end, we are awarding a separate $75,000 prize to the Forecasting Research Institute (FRI) for their recently published writeup of the 2022 Existential Risk Persuasion Tournament (XPT). This award falls outside the confines of the AI Worldviews Contest, but the recognition is motivated by the same principles that motivated the contest. We believe that the results from the XPT constitute the best recent challenge to our AI worldview.FRI Prize ($75k)Existential Risk Persuasion Tournament by the Forecasting Research InstituteAI Worldviews Contest WinnersFirst Prizes ($50k)AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years by Basil Halperin, Zachary Mazlish, and Trevor ChowEvolution provides no evidence for the sharp left turn by Quintin Pope (see the LessWrong version to view comments)Second Prizes ($37.5k)Deceptive Alignment is <1% Likely by Default by David Wheaton (see the LessWrong version to view comments)AGI Catastrophe and Takeover: Some Reference Class-Based Priors by Zach Freitas-GroffThird Prizes ($25k)Imitation Learning is Probably Existentially Safe by Michael Cohen'Dissolving' AI Risk - Parameter Uncertainty in AI Future Forecasting by Alex BatesCaveats on the Winning EntriesThe judges do not endorse every argument and conclusion in the winning entries. Most of the winning entries argue for multiple claims, and in many instances the judges found some of the arguments much more compelling than others. In some cases, the judges liked that an entry crisply argued for a conclusion the judges did not agree with - the clear articulation of an argument makes it easier for others to engage. One does not need to find a piece wholly persuasive to believe that it usefully contributes to the collective debate about AI timelines or the threat that advanced AI systems might pose.Submissions were many and varied. We can easily imagine a different panel of judges reasonably selecting a different set of winners. There are many different types of research that are valuable, and the winning entries should not be interpreted to represent Open Philanthropy's settled institutional tastes on what research directions are most promising (i.e., we don't want other researchers to overanchor on these pieces as the best topics to explore further).We did not provide any funding specifically for the XPT, which ran from June 2022 through October 2022. In December 2022, we recommended two grants totaling $6.3M over three years to support FRI's future research.The link above goes to the version Michael submitted; he's also written an updated version with coauthor Marcus Hutter.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Jason Schukraft https://forum.effectivealtruism.org/posts/eSZuJcLGd7BacjWGi/announcing-the-winners-of-the-2023-open-philanthropy-ai Sat, 30 Sep 2023 10:17:43 +0000 EA - Announcing the Winners of the 2023 Open Philanthropy AI Worldviews Contest by Jason Schukraft Jason Schukraft https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:26 no full 7240
PatjzJHQev2upDHnr EA - What Has the CEA Uni Groups Team Been Up To? - Our Summer 2023 Review by Joris P Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Has the CEA Uni Groups Team Been Up To? - Our Summer 2023 Review, published by Joris P on September 30, 2023 on The Effective Altruism Forum.TLDRCEA's University Groups Team worked on a wide range of projects this summerWe wrapped up a round of UGAP and OSP (our programs supporting university group organizers with digital resources, mentorship, training, and for UGAP a stipend) and launched a new one. Both rounds had over 100 people participating across both programsWe ran a summit for relatively experienced group organizers, had four interns join us (who worked on projects like updating the Groups Resource Centre and setting up residencies), and continued to give out (relatively small) group support grants, to cover operational costs for university groupsWe're excited about potential future projects, like supporting more 'top' universities and/or seeding more support for AIS groups, but continue to reflect on what projects would be most valuableIntroCEA's University Groups Team is working on supporting university groups focused on EA ideas, and their organizers. We wrote more about our mission and vision last year here - note it is slightly outdated. Our team currently consists of Jessica McCurdy (team lead), Jake McKinnon, and Joris Pijpers.In this post we share what we've been working on this summer, including launching new rounds of UGAP & OSP, a summit, an internship program, and more. We won't touch much upon what we did in spring '23 (but have a retrospective of UGAP fall '22 here), and have also chosen to not expand in detail on why we are focusing on these programs and not others, in favor of getting the post out sooner. Additionally, we are still in the process of deeper impact analysis. We hope to publish more details on these in the future.Overall, we are pretty happy with what we were able to achieve this summer and are excited about future directions.Why university groups?We think university groups have the potential to be especially promising places to introduce people to EA ideas, and then help them learn more about and act on them:University group members (and organizers in particular) have a strong track record of making significant contributions to EA priorities, both in and beyond meta workStudents are usually in the process of thinking through their priorities for their careers and lives. In this process, they are often open to exploring new ideas and communities, and have time to do soUniversities are places where people build communities and social networks. These in-person social ties are important for people being more likely to take significant action on EA ideasUniversities can have high concentrations of pre-screened talentUniversities are a very large source of new EAs each year, which means that student groups might have an outsized influence on the development of EA culture and movement priorities. Setting norms of good epistemics through advice and content aspirationally may improve that of EAA note on which universities we focus onWe think top universities, such as those previously listed as focus universities, have a high concentration of top talent. That means that we extend higher touch support to top universities and have some programming focused on them specifically (like our summit and residencies). However, the majority of talented, altruistic and greatly influential people do not come from top universities. Therefore we think scalable support (like UGAP & OSP) which includes both top and non-top universities is also worthwhile. We have many examples of people who are doing work that we think is really important, who did not come from top universities, and might have not gotten involved with EA ideas if it wasn't for their university group. We also think people that we help from these universities tend to...

]]>
Joris P https://forum.effectivealtruism.org/posts/PatjzJHQev2upDHnr/what-has-the-cea-uni-groups-team-been-up-to-our-summer-2023 Sat, 30 Sep 2023 02:43:48 +0000 EA - What Has the CEA Uni Groups Team Been Up To? - Our Summer 2023 Review by Joris P Joris P https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 19:09 no full 7237
bBefhAXpCFNswNr9m EA - Open Philanthropy is hiring for multiple roles across our Global Catastrophic Risks teams by Open Philanthropy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy is hiring for multiple roles across our Global Catastrophic Risks teams, published by Open Philanthropy on September 30, 2023 on The Effective Altruism Forum.It's been another busy year at Open Philanthropy; after nearly doubling the size of our team in 2022, we've added over 30 new team members so far in 2023. Now we're launching a number of open applications for roles in all of our Global Catastrophic Risks (GCR) cause area teams (AI Governance and Policy, Technical AI Safety, Biosecurity & Pandemic Preparedness, GCR Cause Prioritization, and GCR Capacity Building).The application, job descriptions, and general team information are available here. Notably, you can apply to as many of these positions as you'd like with a single application form!We're hiring because our GCR teams feel pinched and really need more capacity. Program Officers in GCR areas think that growing their teams will lead them to make significantly more grants at or above our current bar. We've had to turn down potentially promising opportunities because we didn't have enough time to investigate them; on the flip side, we're likely currently allocating tens of millions of dollars suboptimally in ways that more hours could reveal and correct.On the research side, we've had to triage important projects that underpin our grantmaking and inform others' work, such as work on the value of Open Phil's last dollar and deep dives into various technical alignment agendas. And on the operational side, maintaining flexibility in grantmaking at our scale requires significant creative logistical work. Both last year's reduction in capital available for GCR projects (in the near term) and the uptick in opportunities following the global boom of interest in AI risk make our grantmaking look relatively more important; compared to last year, we're now looking at more opportunities in a space with less total funding.GCR roles we're now hiring for include:Program associates to make grants in technical AI governance mechanisms, US AI policy advocacy, general AI governance, technical AI safety, biosecurity & pandemic preparedness, EA community building, AI safety field building, and EA university groups.Researchers to identify and evaluate new areas for GCR grantmaking, conduct research on catastrophic risks beyond our current grantmaking areas, and oversee a range of research efforts in biosecurity. We're also interested in researchers to analyze issues in technical AI safety and (separately) the natural sciences.Operations roles embedded within our GCR grantmaking teams: the Biosecurity & Pandemic Preparedness team is looking for an infosec specialist, an ops generalist, and an executive assistant (who may also support some other teams); the GCR Capacity Building team is looking for an ops generalist.Most of these hires have multiple possible seniority levels; whether you're just starting in your field or have advanced expertise, we encourage you to apply.If you know someone who would be great for one of these roles, please refer them to us. We welcome external referrals and have found them extremely helpful in the past. We also offer a $5,000 referral bonus; more information here.How we're approaching these hiresYou only need to apply once to opt into consideration for as many of these roles as you're interested in. A checkbox on the application form will ask which roles you'd like to be considered for. We've also made efforts to streamline work tests and use the same tests for multiple roles where possible; however, some roles do use different work tests, so it's possible you'll still have to take different work tests for different roles, especially if you're interested in roles across a wide array of skillsets (e.g., both research and operations). You may also have interviews with mu...

]]>
Open Philanthropy https://forum.effectivealtruism.org/posts/bBefhAXpCFNswNr9m/open-philanthropy-is-hiring-for-multiple-roles-across-our Sat, 30 Sep 2023 00:15:10 +0000 EA - Open Philanthropy is hiring for multiple roles across our Global Catastrophic Risks teams by Open Philanthropy Open Philanthropy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:40 no full 7232
HbYHjrpBJogmsSBkd EA - Should 80,000 hours have more near-termist career content? by NickLaing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should 80,000 hours have more near-termist career content?, published by NickLaing on September 29, 2023 on The Effective Altruism Forum.TDLR: Assuming that Long Termist causes are by far the most important, 80,000 hours might still be better off devoting more of their space to near-termist causes, to grow the EA community, increase the number working in long term causes and improve EA optics.Background80,000 hours is a front page to EA, presenting the movement/idea/question to the world. For many, their website is people's introduction to effective altruism (link). and might be the most important internet-based introduction to Effective altrurism, - even if this is not their main purpose.As I see it, 80,000 hours functions practically as a long-termist career platform, or as their staff put it, to "help people have careers with a lot of social impact". I'm happy that they have updated their site to make this long-termist focus a little more clear, but I think there is an argument that perhaps 20%-30% of their content should be near-term focused, and that this content should be more visible. These arguments I present have probably been discussed ad-nauseam within 80,000 hours leadership and EA leadership in general, and nothing I present is novel. Despite that, I haven't seen these arguments clearly laid out in one place (I might have missed it), so here goes.AssumptionsFor the purposes of this discussion, I'm going to make these 3 assumptions. They aren't necessarily true and I don't necessarily agree with them, but they make my job easier and argument cleaner.Long termist causes are by far the most important, and where the vast majority of EAs should focus their work.Most of the general public are more attracted to near-termist EA than long-termist EA80,000 hours is an important "frontpage" and "gateway" to EAWhy 80,000 hours might want to focus more on near-term causes1. Near termist focus as a pathway to more future long termist workersMore focus on near termist careers might paradoxically lead to producing more long termist workers in the long run. Many people are drawn to the clear and more palatable idea that we should devote our lives to doing the most good to humans and animals alive right now (assumption 2). However after further thought and engagement they may change their focus to longtermism. I'm not sure we have stats on this, but I've encountered many forum members who seem to have followed this life pattern. By failing to appeal to the many who are attracted to near-termist EA now, we could miss out on significant numbers of long termist workers in a few years time.2. Near termist causes are a more attractive EA "front door" 80,000 hours is a "front door" to Effective Altruism, then it is to make sure as many people enter the door as possible. Although 80,000 hours make it clear this isn't one of their main intentions, there is huge benefit to maximizing community growth.3. Some people will only ever want/be able to work on near-term causes.There are not-unreasonable people who might struggle with the tenets/tractability of long-termism, or be too far along the road-of-life to move from medicine to machine learning, or just don't want to engage with long termism for whatever reason. They may miss out a counterfactual fruitful living-your-best-EA-life, because when they clicked on the 80,000 hours website they didn't manage to scroll down 5 pages to cause area no. 19 and 20 - the first 2 near-term cause areas "Factory farming" and "Easily preventable or treatable illness". These are listed below well established and clearly tractable cause areas such as "risks from atomically precise manufacturing" and "space governance"There may be many people who never want to/be able decide to change careers to the highest impact long-term causes, but ...

]]>
NickLaing https://forum.effectivealtruism.org/posts/HbYHjrpBJogmsSBkd/should-80-000-hours-have-more-near-termist-career-content Fri, 29 Sep 2023 18:57:14 +0000 EA - Should 80,000 hours have more near-termist career content? by NickLaing NickLaing https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:50 no full 7234
H7BwgJAuhNa7stRfr EA - My Mid-Career Transition into Biosecurity by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Mid-Career Transition into Biosecurity, published by Jeff Kaufman on October 2, 2023 on The Effective Altruism Forum.After working as a professional programmer for fourteen years, primarily in ads and web performance, I switched careers to biosecurity. It's now been a bit over a year: how has it gone?In terms of my day-to-day work it's very different. I'd been at Google for a decade and knew a lot of people across the organization. I was tech lead to six people, managing four of them, and my calendar was usually booked nearly solid. I spent a lot of time thinking about what work was a good fit for what people, including how to break larger efforts down and how this division would interact with our promotion process. I read several hundred emails a day, assisted by foot controls, and reviewed a lot more code than I wrote. I tracked design efforts across ads and with the web platform, paying attention to where they might require work from my team or where we had relevant experience. I knew the web platform and advertising ecosystem very well, and was becoming an expert in international internet privacy legislation. Success meant earning more money to donate.Now I'm an individual contributor at a small academically affiliated non-profit, on a mostly independent project, writing code and analyzing data. Looking at my calendar for next week I have three days with no meetings, and on the other two I have a total of 3:15. In a typical week I write a few dozen messages and 1-3 documents writing up my recent work. I help other researchers here with software and system administration things, as needed. I'm learning a lot about diseases, sequencing, and bioinformatics. Success means decreasing the chance of a globally catastrophic pandemic.Despite how different these sound, I've liked them both a lot. I've worked with great people, had a good work-life balance, and made progress on challenging and interesting problems. While I find my current work altruistically fulfilling, I was also the kind of person who felt that way about earning to give.I do feel a bit weird writing this post: while the year has had its ups and downs and been unpredictable in a lot of ways, this is essentially the blog post I would have predicted I'd be writing. What wouldn't I have written in Summer 2022?A big one is that the funding environment is very different. This both means that earning to give is more valuable than it had been and it's harder to stay funded. I think my current work is enough more valuable than what I'd been donating that it was still a good choice for me, but that won't be the case for everyone. If you've been earning to give and are trying to decide whether to switch to a direct role, a good approach is to apply and ask the organization whether they'd rather have your time or your donations.I do also have more knowledge about how my skills have transferred. My skills in general programming, data analysis (though more skills here would have been better), familiarity with unix command line tools, technical writing, experimental design, scoping and planning technical work, project management, and people management have all been helpful. But I'm not sure this list is that useful to others: it's a combination of what I was good at and what has been useful in my new role, and so will be very situation- and person-dependent.Happy to answer questions!Except for ~six months in 2017 when I left to join a startup and then came back after getting laid off.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/H7BwgJAuhNa7stRfr/my-mid-career-transition-into-biosecurity Mon, 02 Oct 2023 22:51:05 +0000 EA - My Mid-Career Transition into Biosecurity by Jeff Kaufman Jeff Kaufman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:15 no full 7255
S4udTauZnnojap74K EA - What do staff at CEA believe? (Evidence from a rough cause prio survey from April) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What do staff at CEA believe? (Evidence from a rough cause prio survey from April), published by Lizka on October 2, 2023 on The Effective Altruism Forum.In April, I ran a small and fully anonymous cause prioritization survey of CEA staff members at a CEA strategy retreat. I got 31 responses (out of around 40 people), and I'm summarizing the results here, as it seems that people sometimes have incorrect beliefs about "what CEA believes." (I don't think the results are very surprising, though.)Important notes and caveats:I put this survey together pretty quickly, and I wasn't aiming to use it for a public writeup like this (but rather to check how comfortable staff are talking about cause prioritization, start conversations among staff, and test some personal theories). (I also analyzed it quickly.) In many cases, I regret how questions were set up, but I was in a rush and am going with what I have in order to share something - please treat these conclusions as quite rough.For many questions, I let people select multiple answers. This sometimes produced slightly unintuitive or hard-to-parse results; numbers often don't add up unless you take this into account. (Generally, I think the answers aren't self-contradictory once this is taken into account.) Sometimes people could also input their own answers.People's views might have changed since April, and the team composition has changed.I didn't ask for any demographic information (including stuff like "Which team are you on?").I also asked some free-response questions, but haven't included them here.Rough summary of the results:Approach to cause prioritization: Most people at CEA care about doing some of their own cause prioritization, although most don't try to build up the bulk of their cause prioritization on their own.Approach to morality: About a third of respondents said that they're "very consequentialist," many said that they "lean consequentialist for decisions like what their projects should work on, but have a more mundane approach to daily life." Many also said that they're "big fans of moral uncertainty."Which causes should be "key priorities for EA": people generally selected many causes (median was 5), and most people selected a fairly broad range of causes. Two (of 30) respondents didn't choose any causes not commonly classified as "longtermist/x-risk-focused" (everyone else did choose at least one, though). The top selections were Mitigating existential risk, broadly (27), AI existential security (26), Biosecurity (global catastrophic risk focus) (25), Farmed animal welfare (22), Global health (21), Other existential or global catastrophic risk (15), Wild animal welfare (11), and Generically preparing for pandemics (8). (Other options on the list were Mental health, Climate change, Raising the sanity waterline / un-targeted improving institutional decision-making, Economic growth, and Electoral reform.)Some highlights from more granular questions:Most people selected "I think reducing extinction risks should be a key priority (of EA/CEA)" (27). Many selected "I think improving how the long-run future goes should be a key priority (of EA/CEA)" (17), and "I think future generations matter morally, but it's hard to affect them." (13)Most people selected "I think AI existential risk reduction should be a top priority for EA/CEA" (23) and many selected "I want to learn more in order to form my views and/or stop deferring as much" (17) and "I think AI is the single biggest issue humanity is facing right now" (13). (Some people also selected answers like "I'm worried about misuse of AI (bad people/governments, etc.), but misalignment etc. seems mostly unrealistic" and "I feel like it's important, but transformative developments / x-risk are decades away.")Most people (22) selected at least one o...

]]>
Lizka https://forum.effectivealtruism.org/posts/S4udTauZnnojap74K/what-do-staff-at-cea-believe-evidence-from-a-rough-cause Mon, 02 Oct 2023 14:45:47 +0000 EA - What do staff at CEA believe? (Evidence from a rough cause prio survey from April) by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:20 no full 7252
fyCnfiL49T5HvMjvL EA - Forum update: 10 new features (Oct 2023) by agnestenlund Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forum update: 10 new features (Oct 2023), published by agnestenlund on October 2, 2023 on The Effective Altruism Forum.What's new:React to postsExplore featured content on "Best of the Forum"Redesigned "Recent discussion"Right sidebar on the Frontpage"Popular comments" on the FrontpageUpdated recommendations on postsImproved author experienceAdd custom text to social media link previewsLinkposts have been redesignedPrompt to share your post after publishingUseful links on draft post pagesMost of these changes are aimed at broadly improving discussion dynamics and surfacing more high quality content on the Forum. I'd love feedback on these changes. You can comment on this post or reach out to us another way. You can also share your feature requests in the feature suggestion thread.React to postsReactions on comments have grown in popularity since we launched them two months back, and we've now added reactions to posts. One of the goals of post reactions is to allow readers to share feedback with authors without the effort of leaving a full comment.Just like for comments, agree/disagree reactions (and regular upvoting/downvoting) are anonymous, while other reactions are non-anonymous.Explore featured content on "Best of the Forum"We have a new "Best of the Forum" page that features selected posts and sequences curated by the Forum team. It replaces the Library page in the left navigation on the Frontpage (you can still explore all sequences on the old Library page).New users often feel overwhelmed by the amount of content to choose from; I'm hoping they will be able to use the page to find highlights and get a sense for what the Forum is about. Experienced users who visit the Forum more rarely might also be able to use it to catch up on top posts from the last month.Redesigned "Recent discussion"I've redesigned the "Recent discussion" section on the Frontpage to use a timeline UI to highlight what type of update you're looking at (new comment, post, quick take, event, etc.).The "Recent discussion" section is popular among heavy users of the Forum - helping people keep up on recent activity and find discussions they've missed. But we've found that many Forum users were confused and overwhelmed by it. This redesign aims to clarify what "Recent discussion" is about, and make it easier to parse.Right sidebar on the FrontpageWe've added a right sidebar to the Frontpage to highlight resources and make it easier to find opportunities and events (we'll add and remove resources based on usage and feedback). Logged in users can hide the sidebar, and you can update your location to get better event recommendations."Popular comments" on the FrontpageUsers sometimes miss out on great discussions taking place on the Forum. To help surface these discussions we're trying out a "Popular comments" section. It features recent comments with high karma and some other signals of quality.You'll find the section below Quick takes. As with most other sections on the Frontpage, you can collapse it by clicking on the symbol next to the section title.Updated recommendations on postsBelow post comments, you'll now find:More from this authorCurated and popular this weekRecent opportunitiesWe've been experimenting with recommendations on post pages. We've tried a few things (we decided to get rid of right-hand side recommendations since usage was low and a few users found them distracting) and are now adding recommendations to the bottom of posts. Like previous recommendation experiments, we'll monitor user feedback and click rates to decide next steps.Improved author experienceAdd custom text to social media link previewsAuthors can upload an image to use for link previews when their post is shared on social media (or elsewhere). Now authors can also set the text...

]]>
agnestenlund https://forum.effectivealtruism.org/posts/fyCnfiL49T5HvMjvL/forum-update-10-new-features-oct-2023 Mon, 02 Oct 2023 13:36:56 +0000 EA - Forum update: 10 new features (Oct 2023) by agnestenlund agnestenlund https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:50 no full 7249
RueHqBuBKQBtSYkzp EA - Observations on the funding landscape of EA and AI safety by Vilhelm Skoglund Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Observations on the funding landscape of EA and AI safety, published by Vilhelm Skoglund on October 2, 2023 on The Effective Altruism Forum.Epistemic status: Hot takes for discussion. These observations are a side product of another strategy project, rather than a systematic and rigorous analysis of the funding landscape, and we may be missing important considerations. Observations are also non-exhaustive and mostly come from anecdotal data and EA Forum posts. We haven't vetted the resources that we are citing; instead, we took numerous data points at face value and asked for feedback from >5 people who have more of an inside view than we do (see acknowledgments, but note that these people do not necessarily endorse all claims). We aim to indicate our certainty in the specific claims we are making.Context and summaryWhile researching for another project, we discovered that there have been some significant changes in the EA funding landscape this year. We found these changes interesting and surprising enough that we wanted to share them, to potentially help people update their model of the funding landscape. Note that this is not intended to be a comprehensive overview. Rather, we hope this post triggers a discussion about updates and considerations we might have missed.We first list some observations about funding in the EA community in general. Then, we zoom in on AI safety, as this is a particularly dynamic area at present.Some observations about the general EA funding landscape (more details below):There is a higher number of independent grantmaking bodiesFive new independent grantmaking bodies have started up in 2023 (Meta Charity Funders, Lightspeed Grants, Manifund Regrants, the Nonlinear Network, and the Foresight AI Fund. Out of these, all but Meta Charity Funders are focused on longtermism or AI.EA Funds and Open Philanthropy are aiming to become more independent of each other.Charity Entrepreneurship has set up a foundation program, with a sub-goal of setting up cause-specific funding circles.There is a lot of activity in the effective giving ecosystemMore than 50 effective giving initiatives, e.g., local fundraising websites, are active, with several launched in recent yearsGWWC is providing more coordination in the ecosystem and looking to help new initiatives get off the ground.There are changes in funding flowsThe FTX collapse caused a drastic decrease in (expected) longtermist funding (potentially hundreds of millions of dollars annually).EA Fund's Long-Term Future Fund and Infrastructure Fund report (roughly estimated funding gaps of $450k/month and $550k/month respectively, over the next 6 months.Open Philanthropy seems like they could make productive use of more funding in some causes, but their teams working on AI Safety are capacity-constrained rather than funding-constrained.The Survival and Flourishing Fund has increased their giving in 2023. It's unclear whether this increase will continue into the future.Effective Giving plans to increase their giving in the years to come.Longview Philanthropy expects to increase their giving in the years to come. Their 2023 advising will be >$10 million, and they expect money moved in 2024 to be greater than 2023.GiveWell reports being funding-constrained. and projects constant funding flows until 2025.Charity Entrepreneurship's research team expects that money dedicated to animal advocacy is unlikely to grow and could shrink.There might be more EA funding in the futureManifold prediction markets estimate a 45% chance of a new donor giving ≥$50 million to longtermist or existential risk work before the end of 2024; and an 86% chance of ≥1 new EA billionaire before the end of 2026.Smaller but still significant new donors seem likely, according to some fundraising actors.Some observatio...

]]>
Vilhelm Skoglund https://forum.effectivealtruism.org/posts/RueHqBuBKQBtSYkzp/observations-on-the-funding-landscape-of-ea-and-ai-safety 5 people who have more of an inside view than we do (see acknowledgments, but note that these people do not necessarily endorse all claims). We aim to indicate our certainty in the specific claims we are making.Context and summaryWhile researching for another project, we discovered that there have been some significant changes in the EA funding landscape this year. We found these changes interesting and surprising enough that we wanted to share them, to potentially help people update their model of the funding landscape. Note that this is not intended to be a comprehensive overview. Rather, we hope this post triggers a discussion about updates and considerations we might have missed.We first list some observations about funding in the EA community in general. Then, we zoom in on AI safety, as this is a particularly dynamic area at present.Some observations about the general EA funding landscape (more details below):There is a higher number of independent grantmaking bodiesFive new independent grantmaking bodies have started up in 2023 (Meta Charity Funders, Lightspeed Grants, Manifund Regrants, the Nonlinear Network, and the Foresight AI Fund. Out of these, all but Meta Charity Funders are focused on longtermism or AI.EA Funds and Open Philanthropy are aiming to become more independent of each other.Charity Entrepreneurship has set up a foundation program, with a sub-goal of setting up cause-specific funding circles.There is a lot of activity in the effective giving ecosystemMore than 50 effective giving initiatives, e.g., local fundraising websites, are active, with several launched in recent yearsGWWC is providing more coordination in the ecosystem and looking to help new initiatives get off the ground.There are changes in funding flowsThe FTX collapse caused a drastic decrease in (expected) longtermist funding (potentially hundreds of millions of dollars annually).EA Fund's Long-Term Future Fund and Infrastructure Fund report (roughly estimated funding gaps of $450k/month and $550k/month respectively, over the next 6 months.Open Philanthropy seems like they could make productive use of more funding in some causes, but their teams working on AI Safety are capacity-constrained rather than funding-constrained.The Survival and Flourishing Fund has increased their giving in 2023. It's unclear whether this increase will continue into the future.Effective Giving plans to increase their giving in the years to come.Longview Philanthropy expects to increase their giving in the years to come. Their 2023 advising will be >$10 million, and they expect money moved in 2024 to be greater than 2023.GiveWell reports being funding-constrained. and projects constant funding flows until 2025.Charity Entrepreneurship's research team expects that money dedicated to animal advocacy is unlikely to grow and could shrink.There might be more EA funding in the futureManifold prediction markets estimate a 45% chance of a new donor giving ≥$50 million to longtermist or existential risk work before the end of 2024; and an 86% chance of ≥1 new EA billionaire before the end of 2026.Smaller but still significant new donors seem likely, according to some fundraising actors.Some observatio...]]> Mon, 02 Oct 2023 12:02:59 +0000 EA - Observations on the funding landscape of EA and AI safety by Vilhelm Skoglund 5 people who have more of an inside view than we do (see acknowledgments, but note that these people do not necessarily endorse all claims). We aim to indicate our certainty in the specific claims we are making.Context and summaryWhile researching for another project, we discovered that there have been some significant changes in the EA funding landscape this year. We found these changes interesting and surprising enough that we wanted to share them, to potentially help people update their model of the funding landscape. Note that this is not intended to be a comprehensive overview. Rather, we hope this post triggers a discussion about updates and considerations we might have missed.We first list some observations about funding in the EA community in general. Then, we zoom in on AI safety, as this is a particularly dynamic area at present.Some observations about the general EA funding landscape (more details below):There is a higher number of independent grantmaking bodiesFive new independent grantmaking bodies have started up in 2023 (Meta Charity Funders, Lightspeed Grants, Manifund Regrants, the Nonlinear Network, and the Foresight AI Fund. Out of these, all but Meta Charity Funders are focused on longtermism or AI.EA Funds and Open Philanthropy are aiming to become more independent of each other.Charity Entrepreneurship has set up a foundation program, with a sub-goal of setting up cause-specific funding circles.There is a lot of activity in the effective giving ecosystemMore than 50 effective giving initiatives, e.g., local fundraising websites, are active, with several launched in recent yearsGWWC is providing more coordination in the ecosystem and looking to help new initiatives get off the ground.There are changes in funding flowsThe FTX collapse caused a drastic decrease in (expected) longtermist funding (potentially hundreds of millions of dollars annually).EA Fund's Long-Term Future Fund and Infrastructure Fund report (roughly estimated funding gaps of $450k/month and $550k/month respectively, over the next 6 months.Open Philanthropy seems like they could make productive use of more funding in some causes, but their teams working on AI Safety are capacity-constrained rather than funding-constrained.The Survival and Flourishing Fund has increased their giving in 2023. It's unclear whether this increase will continue into the future.Effective Giving plans to increase their giving in the years to come.Longview Philanthropy expects to increase their giving in the years to come. Their 2023 advising will be >$10 million, and they expect money moved in 2024 to be greater than 2023.GiveWell reports being funding-constrained. and projects constant funding flows until 2025.Charity Entrepreneurship's research team expects that money dedicated to animal advocacy is unlikely to grow and could shrink.There might be more EA funding in the futureManifold prediction markets estimate a 45% chance of a new donor giving ≥$50 million to longtermist or existential risk work before the end of 2024; and an 86% chance of ≥1 new EA billionaire before the end of 2026.Smaller but still significant new donors seem likely, according to some fundraising actors.Some observatio...]]> 5 people who have more of an inside view than we do (see acknowledgments, but note that these people do not necessarily endorse all claims). We aim to indicate our certainty in the specific claims we are making.Context and summaryWhile researching for another project, we discovered that there have been some significant changes in the EA funding landscape this year. We found these changes interesting and surprising enough that we wanted to share them, to potentially help people update their model of the funding landscape. Note that this is not intended to be a comprehensive overview. Rather, we hope this post triggers a discussion about updates and considerations we might have missed.We first list some observations about funding in the EA community in general. Then, we zoom in on AI safety, as this is a particularly dynamic area at present.Some observations about the general EA funding landscape (more details below):There is a higher number of independent grantmaking bodiesFive new independent grantmaking bodies have started up in 2023 (Meta Charity Funders, Lightspeed Grants, Manifund Regrants, the Nonlinear Network, and the Foresight AI Fund. Out of these, all but Meta Charity Funders are focused on longtermism or AI.EA Funds and Open Philanthropy are aiming to become more independent of each other.Charity Entrepreneurship has set up a foundation program, with a sub-goal of setting up cause-specific funding circles.There is a lot of activity in the effective giving ecosystemMore than 50 effective giving initiatives, e.g., local fundraising websites, are active, with several launched in recent yearsGWWC is providing more coordination in the ecosystem and looking to help new initiatives get off the ground.There are changes in funding flowsThe FTX collapse caused a drastic decrease in (expected) longtermist funding (potentially hundreds of millions of dollars annually).EA Fund's Long-Term Future Fund and Infrastructure Fund report (roughly estimated funding gaps of $450k/month and $550k/month respectively, over the next 6 months.Open Philanthropy seems like they could make productive use of more funding in some causes, but their teams working on AI Safety are capacity-constrained rather than funding-constrained.The Survival and Flourishing Fund has increased their giving in 2023. It's unclear whether this increase will continue into the future.Effective Giving plans to increase their giving in the years to come.Longview Philanthropy expects to increase their giving in the years to come. Their 2023 advising will be >$10 million, and they expect money moved in 2024 to be greater than 2023.GiveWell reports being funding-constrained. and projects constant funding flows until 2025.Charity Entrepreneurship's research team expects that money dedicated to animal advocacy is unlikely to grow and could shrink.There might be more EA funding in the futureManifold prediction markets estimate a 45% chance of a new donor giving ≥$50 million to longtermist or existential risk work before the end of 2024; and an 86% chance of ≥1 new EA billionaire before the end of 2026.Smaller but still significant new donors seem likely, according to some fundraising actors.Some observatio...]]> Vilhelm Skoglund https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 30:17 no full 7247
QH2sECmmbLWbMXLhJ EA - Violence Before Agriculture by John G. Halstead Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Violence Before Agriculture, published by John G. Halstead on October 2, 2023 on The Effective Altruism Forum.This is a summary of a report on trends in violence since the dawn of humanity: from the hunter-gatherer period to the present day. The full report is available at this Substack and as a preprint on SSRN. Phil did 95% of the work on the report.Expert reviewers provided the following comments on our report."Thomson and Halstead have provided an admirably thorough and fair assessment of this difficult and emotionally fraught empirical question. I don't agree with all of their conclusions, but this will surely be the standard reference for this issue for years to come."Steven Pinker, Johnstone Family Professor in the Department of Psychology at Harvard University"This work uses an impressively comprehensive survey of ethnographic and archeological data on military mortality in historically and archeologically known small-scale societies in an effort to pin down the scale of the killing in the pre-agricultural world. This will be a useful addition to the literature. It is an admirably cautious assessment of the war mortality data, which are exceptionally fragile; and the conclusions it draws about killing rates prior to the Holocene are probably as good as we are likely to get for the time being."Paul Roscoe, Professor of Anthropology at the University of MaineEpistemic statusWe think our estimates here move understanding of prehistoric violence forward by rigorously focussing on the pre-agricultural period and attempting to be as comprehensive as possible with the available evidence. However, data in the relevant fields of ethnography and archeology is unusually shaky, so we would not be surprised if it turned out that some of the underlying data turns out to be wrong. We are especially unsure about our method for estimating actual violent mortality rates from the measured, observable rates in the raw archeology data.One of us (Phil) has a masters in anthropology. Neither of us have any expertise in archeology.Guide for the readerIf you are interested in this study simply as a reference for likely rates/patterns of violence in the pre-agricultural world, all our main results and conclusions are presented in the Summary. The rest of the study explores the evidence in more depth and explains how we put our results together. We first cover the ethnographic evidence, then the archeological evidence. The study ends with a more speculative discussion of our findings and their possible implications.AcknowledgmentsWe would like to thank the following expert reviewers for their extensive and insightful comments and suggestions, which have helped to make this report substantially better.Steven Pinker, Johnstone Family Professor in the Department of Psychology at Harvard UniversityRobert Kelly, Professor of Archeology at the University of WyomingPaul Roscoe, Professor of Anthropology at the University of MaineWe would also like to thank Prof. Hisashi Nakao, Prof. Douglas Fry, Prof. Nelson Graburn, and Holden Karnofsky for commenting, responding to queries and sharing materials.Around 11,000 years ago plants and animals began to be domesticated, a process which would completely transform the lifeways of our species. Human societies all over the world came to depend almost entirely on farming. Before this transformative period of history, everyone was a hunter-gatherer. For about 96% of the approximately 300,000 years since Homo sapiens evolved, we relied on wild plants and animals for food.Our question is: what do we know about how violent these pre-agricultural people were?In 2011 Steven Pinker published The Better Angels of Our Nature. According to Pinker, prehistoric small-scale societies were generally extremely violent by comparison with modern stat...

]]>
John G. Halstead https://forum.effectivealtruism.org/posts/QH2sECmmbLWbMXLhJ/violence-before-agriculture Mon, 02 Oct 2023 09:07:51 +0000 EA - Violence Before Agriculture by John G. Halstead John G. Halstead https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:07 no full 7246
djM84hy77DtsyJnDD EA - Population After a Catastrophe by Stan Pinsent Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Population After a Catastrophe, published by Stan Pinsent on October 3, 2023 on The Effective Altruism Forum.This was written in my role as researcher at CEARCH, but any opinions expressed are my own.This report uses population dynamics to explore the effects of a near-existential catastrophe on long-term value.SummaryGlobal population would probably not recover to current levels after a major catastrophe. Low-fertility values would largely endure. If we reindustrialize quickly, population will stabilize far lower.Population "peaking lower" after a catastrophe would make it harder to avoid terminal population decline. Tech solutions would be harder to reach, and there would be less time to find a solution.Post-catastrophe worlds that avoid terminal population decline are likely to emerge with values very different to our own. Population could stabilize because of authoritarian governments, prescriptive gender roles or civil strife, or alternatively from increased collective concern for the future.Conclusion: Near-existential catastrophes are likely to decrease the value of the future through decreased resilience and the lock-in of bad values. Avoiding these catastrophes should rank alongside avoiding existential catastrophes.IntroductionIn this report I use population dynamics to explore the question "What are the long-term existential consequences of a non-existential catastrophe?". I do not claim that population dynamics are the only, or even the most important, consideration.Others have written about the short-term existential effects of a global catastrophe. Luisa Rodriguez argues that even in cases where >90% of the global population is killed, it is unlikely that all viable groups of survivors will fail to make it through the ensuing decades (Rodriguez, 2020). The Global Catastrophic Risk Institute has begun to explore the long-term consequences of catastrophe, although they consider this "rather grim and difficult-to-study topic" to be neglected (GCRI).What comes after the aftermath of a catastrophe is very difficult to predict, as life will be driven by unknown political and cultural forces. However, I argue that many of the familiar features of population dynamics will continue to apply.Even without a catastrophe, we face a possible population problem. As countries develop, their populations peak and begin to decline. If these trends continue, global population will shrink until either we "master" the problem of population, or we can no longer maintain industrialized civilization (multiple working papers, Population Wellbeing Initiative, 2023). It could be argued that this is not a pressing problem. It will be centuries before global population drops below 1 billion, so we have time to overcome demographic decline or to make it irrelevant by relying on artificial people. But in the aftermath of a global catastrophe there may be less time and fewer people available to solve the problem.Longtermists may argue that most future value is in the scenarios where we overcome reproductive constraints and expand to the stars (Siegmann & Mota Freitas, 2022). My findings do not contradict this. But such scenarios appear to be significantly less likely in a post-catastrophe world. And the worlds in which we do bounce back seem likely to have values very different from our own.Population recovery after a catastropheIn this section I examine three models for determining population growth. I find that full population recovery after a major global catastrophe is unlikely, and that the worlds which do recover are likely to emerge with values very different from those of the pre-catastrophe world.It's worth noting that a catastrophe need not inflict its damage at one point in time. The effects of some historical famines and pandemics have unfurled over many yea...

]]>
Stan Pinsent https://forum.effectivealtruism.org/posts/djM84hy77DtsyJnDD/population-after-a-catastrophe 90% of the global population is killed, it is unlikely that all viable groups of survivors will fail to make it through the ensuing decades (Rodriguez, 2020). The Global Catastrophic Risk Institute has begun to explore the long-term consequences of catastrophe, although they consider this "rather grim and difficult-to-study topic" to be neglected (GCRI).What comes after the aftermath of a catastrophe is very difficult to predict, as life will be driven by unknown political and cultural forces. However, I argue that many of the familiar features of population dynamics will continue to apply.Even without a catastrophe, we face a possible population problem. As countries develop, their populations peak and begin to decline. If these trends continue, global population will shrink until either we "master" the problem of population, or we can no longer maintain industrialized civilization (multiple working papers, Population Wellbeing Initiative, 2023). It could be argued that this is not a pressing problem. It will be centuries before global population drops below 1 billion, so we have time to overcome demographic decline or to make it irrelevant by relying on artificial people. But in the aftermath of a global catastrophe there may be less time and fewer people available to solve the problem.Longtermists may argue that most future value is in the scenarios where we overcome reproductive constraints and expand to the stars (Siegmann & Mota Freitas, 2022). My findings do not contradict this. But such scenarios appear to be significantly less likely in a post-catastrophe world. And the worlds in which we do bounce back seem likely to have values very different from our own.Population recovery after a catastropheIn this section I examine three models for determining population growth. I find that full population recovery after a major global catastrophe is unlikely, and that the worlds which do recover are likely to emerge with values very different from those of the pre-catastrophe world.It's worth noting that a catastrophe need not inflict its damage at one point in time. The effects of some historical famines and pandemics have unfurled over many yea...]]> Tue, 03 Oct 2023 11:32:31 +0000 EA - Population After a Catastrophe by Stan Pinsent 90% of the global population is killed, it is unlikely that all viable groups of survivors will fail to make it through the ensuing decades (Rodriguez, 2020). The Global Catastrophic Risk Institute has begun to explore the long-term consequences of catastrophe, although they consider this "rather grim and difficult-to-study topic" to be neglected (GCRI).What comes after the aftermath of a catastrophe is very difficult to predict, as life will be driven by unknown political and cultural forces. However, I argue that many of the familiar features of population dynamics will continue to apply.Even without a catastrophe, we face a possible population problem. As countries develop, their populations peak and begin to decline. If these trends continue, global population will shrink until either we "master" the problem of population, or we can no longer maintain industrialized civilization (multiple working papers, Population Wellbeing Initiative, 2023). It could be argued that this is not a pressing problem. It will be centuries before global population drops below 1 billion, so we have time to overcome demographic decline or to make it irrelevant by relying on artificial people. But in the aftermath of a global catastrophe there may be less time and fewer people available to solve the problem.Longtermists may argue that most future value is in the scenarios where we overcome reproductive constraints and expand to the stars (Siegmann & Mota Freitas, 2022). My findings do not contradict this. But such scenarios appear to be significantly less likely in a post-catastrophe world. And the worlds in which we do bounce back seem likely to have values very different from our own.Population recovery after a catastropheIn this section I examine three models for determining population growth. I find that full population recovery after a major global catastrophe is unlikely, and that the worlds which do recover are likely to emerge with values very different from those of the pre-catastrophe world.It's worth noting that a catastrophe need not inflict its damage at one point in time. The effects of some historical famines and pandemics have unfurled over many yea...]]> 90% of the global population is killed, it is unlikely that all viable groups of survivors will fail to make it through the ensuing decades (Rodriguez, 2020). The Global Catastrophic Risk Institute has begun to explore the long-term consequences of catastrophe, although they consider this "rather grim and difficult-to-study topic" to be neglected (GCRI).What comes after the aftermath of a catastrophe is very difficult to predict, as life will be driven by unknown political and cultural forces. However, I argue that many of the familiar features of population dynamics will continue to apply.Even without a catastrophe, we face a possible population problem. As countries develop, their populations peak and begin to decline. If these trends continue, global population will shrink until either we "master" the problem of population, or we can no longer maintain industrialized civilization (multiple working papers, Population Wellbeing Initiative, 2023). It could be argued that this is not a pressing problem. It will be centuries before global population drops below 1 billion, so we have time to overcome demographic decline or to make it irrelevant by relying on artificial people. But in the aftermath of a global catastrophe there may be less time and fewer people available to solve the problem.Longtermists may argue that most future value is in the scenarios where we overcome reproductive constraints and expand to the stars (Siegmann & Mota Freitas, 2022). My findings do not contradict this. But such scenarios appear to be significantly less likely in a post-catastrophe world. And the worlds in which we do bounce back seem likely to have values very different from our own.Population recovery after a catastropheIn this section I examine three models for determining population growth. I find that full population recovery after a major global catastrophe is unlikely, and that the worlds which do recover are likely to emerge with values very different from those of the pre-catastrophe world.It's worth noting that a catastrophe need not inflict its damage at one point in time. The effects of some historical famines and pandemics have unfurled over many yea...]]> Stan Pinsent https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 25:16 no full 7259
TT62phLw2AZWn6tDc EA - How Rethink Priorities' Research could inform your grantmaking by kierangreig Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Rethink Priorities' Research could inform your grantmaking, published by kierangreig on October 4, 2023 on The Effective Altruism Forum.Rethink Priorities (RP) has advised, consulted, and/or been commissioned by GiveWell, Open Philanthropy, EA Funds, Centre for Effective Altruism, 80,000 Hours, and other major organizations, donors, and foundations, in order to inform their grantmaking and/or increase their positive impact. This year, we are launching a pilot project to see if we can do this work for an even broader audience. If you are a philanthropist, foundation, or grantmaker and are interested in using RP's work/advising to inform your grantmaking, we invite you to fill out this form.In general grantmakers face a significant amount of uncertainty, and RP can help reduce that uncertainty. For our pilot project to expand this work to a broader audience, we are open to commissions/advising in any of the following areas:AIAnimal WelfareClimate ChangeGlobal Health and DevelopmentExistential security / global catastrophic risksFiguring out how to compare different worldviews, causes, and/or philanthropic approachesWithin those areas, there's a broad array of work that we could conduct, including:Reviews of sub-areas. For instance:An overview of market shaping in global health: Landscape, new developments, and gapsExposure to Lead Paint in Low- and Middle-Income CountriesHistorical Global Health R&D 'hits'Reviews of specific groups. For instance:Family Empowerment Media: track record, cost-effectiveness, and main uncertaintiesConducting research and analysis related to particular approaches. For instance:Strategic considerations for upcoming EU farmed animal legislation and EU Farmed Fish Policy Reform RoadmapSurvey on intermediate goals in AI governanceConvening workshops and events. For instance:"Dimensions of Pain" workshop: Summary and updated conclusions2022 Effective Animal Advocacy Forum Survey: Results and analysisConducting public polling, survey work, message testing, online experiments, or focus groups to understand public or expert opinion on any of the above areas and to fine-tune approaches. As well as conducting broader data analysis and impact assessment for organizations. For instance:US public opinion of AI policy and riskUS public perception of CAIS statement and the risk of extinctionOr otherwise generally offering consulting/advising services.Our ProcessUpon expressions of interest we are happy to further elaborate on any of the types of work that we could do. To very briefly further elaborate on one type of work we could do: in one case a significant funder was considering a grant to Family Empowerment Media - a nonprofit that uses radio communication to enable informed family planning decisions. We were then commissioned by them to further examine the group. We conducted an analysis of the organization and its cost-effectiveness, working to help assess whether or not it was as impactful as other organizations in the funder's portfolio.Next StepsIf you are potentially interested in these services, please fill out this brief form, and someone from our team will be in touch soon to discuss your needs and our fee structure. Interested readers are also encouraged to see an overview of a cost-effectiveness model for this type of work here, and use related tools and spreadsheets to help further assess the potential cost-effectiveness of this work.AcknowledgmentsThis post is a project of Rethink Priorities, a global priority think-and-do tank, aiming to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact...

]]>
kierangreig https://forum.effectivealtruism.org/posts/TT62phLw2AZWn6tDc/how-rethink-priorities-research-could-inform-your Wed, 04 Oct 2023 19:25:34 +0000 EA - How Rethink Priorities' Research could inform your grantmaking by kierangreig kierangreig https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:16 no full 1
33CJvNbMeMQitSXLi EA - Talk: The future of effective altruism relies on effective giving by GraceAdams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talk: The future of effective altruism relies on effective giving, published by GraceAdams on October 4, 2023 on The Effective Altruism Forum.Sharing my talk I presented at EAGxNYC, EAGxAustralia and most recently to EA Anywhere.The talk tries to make the point that effective altruism is doing a lot of good in the world, we should be doing much more good, and funding unlocks our ability to do so!Our work is not done until there is no more suffering.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
GraceAdams https://forum.effectivealtruism.org/posts/33CJvNbMeMQitSXLi/talk-the-future-of-effective-altruism-relies-on-effective-7 Wed, 04 Oct 2023 14:50:00 +0000 EA - Talk: The future of effective altruism relies on effective giving by GraceAdams GraceAdams https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:40 no full 3
WuLBav76CnLK7e6cw EA - The Impact Case for Taking a Break from Your Non-EA Job by SarahPomeranz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Impact Case for Taking a Break from Your Non-EA Job, published by SarahPomeranz on October 4, 2023 on The Effective Altruism Forum.Epistemic status: Speculative opinion piece mostly based on anecdotal evidence from running workplace and professional groups (especially the EA Consulting Network) and hiring. We might be overestimating the value of leaves based on our small sample size, but thought it would be useful to share the idea, some sample case studies, and considerations on when not to take a leave of absence.Executive SummaryMore professionals should consider taking a leave of absence - a paid or unpaid break from their day job. Leaves of absence provide space to reflect on life, recharge, explore EA, and evaluate high-impact career opportunities in a low-risk and intentional way. We know a few people who've found these breaks to be useful in their careers and think they could be useful for more people in similar situations.Call to actionConsider taking a leave of absence yourself and take one if it's the right fit for youShare this post with someone who should consider taking a leave of absenceAn impact-motivated person who spends time in San Francisco to upskill on AI, connect with the AI community and enjoy the city, courtesy of DALL-E 2.The problem - it's hard to consider switching while you are workingWorking in a job such as consulting, a tech start-up, or a policy role is excellent for gaining career capital and building aptitudes.It could be the case that you should stick with that job for the long-term - perhaps you have the opportunity to influence policy or you have a lucrative path for earning-to-give. But it's likely that at some point it will be your best option to switch to higher impact work.However, there are key barriers that make it hard to switch careers while working:Barriers to considering the questions of "should I switch?" and "what should I switch to?"Not having the headspace, time, or support to consider your long-term career or cause prioritisationBarriers to making the best decisionStatus quo bias towards the option you're most familiar with as you have much more information on your current role than any other options. You may not even know what other options there might be, and may not be realising how valuable your skills could be in other rolesCultural influences from your colleague's values and preferences (e.g. valuing job security, job legibility or prestige more and impact less)Barriers to making a switchNot having time to upskill in new areas, build your network or apply for jobs (especially EA jobs, which often involve work tests and trials) while you're working full-timePersonal and financial risk of quitting without a new role securedOne solution - take a leave of absenceWhat is a leave of absence?A leave of absence is any opportunity that frees up significant time from your day job like unpaid vacation, an educational leave, a secondment, an externship, a sabbatical etc.Leaves are a great tool for overcoming the barriers to switching into higher impact work. They take you out of your day-to-day environment and can give you both the time and headspace to consider your career, make decisions, and switch if you want to.Many organisations offer paid or unpaid leaves of absence (e.g. Bain's social impact "externships", PwC's unpaid leave, the UK civil service career breaks). But you may not have even realised it was an option for you.If your organisation doesn't have a formal leave policy, you might still want to have a conversation with your employer to see whether they'd be willing to give you several months off. If you're considering quitting anyway, they might be open to letting you take some time away if the alternative would be you resigning immediately.Some leaves of absence are unpaid and...

]]>
SarahPomeranz https://forum.effectivealtruism.org/posts/WuLBav76CnLK7e6cw/the-impact-case-for-taking-a-break-from-your-non-ea-job Wed, 04 Oct 2023 13:09:59 +0000 EA - The Impact Case for Taking a Break from Your Non-EA Job by SarahPomeranz SarahPomeranz https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:22 no full 4
YcdFW92vXfuFCp4sz EA - "Going Infinite" - New book on FTX/SBF released today + my TL;DR by Nicky Pochinkov Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Going Infinite" - New book on FTX/SBF released today + my TL;DR, published by Nicky Pochinkov on October 4, 2023 on The Effective Altruism Forum.Just finished new book about FTX and Sam Bankman-Fried, launched today: "Going Infinite: The Rise and Fall of a New Tycoon" by Michael Lewis. The book itself is quite engaging and interesting, so I recommend it as a read.The book talks about:Early life and general personalityWorking at Jane Steet CapitalEarly days at Alameda and the falling outA refreshed alamedaThe early FTX days and actions of SamPost-FTX days, and where did the money go?The book talks a decent bit about effective altruists in both good and bad light.Some particularly interesting anecdotes and information according to the book (contains "spoilers"):In early Alameda days, they apparently lost track of (as in, didn't know where it went) $ millions of XRP tokens, and Sam was just like "ehh, who cares, there is like 80% chance will show up eventually, so we can just count it as 80% of the value". This + general disorganisation + risk taking really pissed off many of the first wave of EAs working there, and a bunch of people left. Eventually, they actually "found" the XRP: it was in some crypto exchange they were using, and some software bug meant it was not labelled correctly, so they had to email them about it.Where did all the lost FTX money go? At FTX the lack of organisation was similar, but much larger in scale. Last chapter has napkin calculations with in-goings vs out-goings for FTX. (Edit: See this below). While they clearly spent and lost lots of money, some of the assets were just lost track of because didn't care to keep track because other assets were so large that these were not that important/urgent. So far "the debtors have recovered approximately 7 billion dollars in assets, and they anticipate further recoveries", which could be an additional approx $7.2Billion to still be found (which might be sold for less as much of it non-cash, but at least $2Billion?), not even including potential clawbacks like investment into Anthropic. A naive reading suggests there could have been enough to repay all the affected customers?EDIT: here is the "napkin math" given in the book of combined FTX+Alameda ingoings and outgoings over the course of a few years. So the question in the final chapters of the book is accounting for the $6 Billion discrepancy. Clearly the customer funds were misused by Sam and Alameda, and the numbers are not to be taken at face value (for example, the profits at Alameda could be questioned), but possibly worth viewing at as a possible reference point for those interested in them but not willing to read the whole book:Money In:Customer Deposits: $15 billionInvestment from Venture Capitalists: $2.3 billionAlameda Training Profits: $2.5 billionFTX Exchange Revenues: $2 billionNet Outstanding Loans from Crypto Lenders (mainly Genesis and BlockFi): $1.5 billionOriginal Sale of FTT: $35 millionTotal Money In: $23 billionMoney Out:Return to Customers During the November Run: $5 billionAmount Paid Out to CZ: $1.4 billion (excluding $500 million worth of FTT and $80 million worth of BNB tokens)Sam's Private Investments: $4.4 billion (with at least $300 million paid for using shares and FTX)Loans to Sam: $1 billion (used for political and EA donations to avoid stock dividends)Loans to Nishad: $543 million (for similar purposes)Endorsement Deals: $500 million (potentially more, including cases where FTX paid endorsers with FTX stock)Buying and Burning Their Exchange Token FTT: $600 millionOut Expenses (Salaries, Lunch, Bahamas Real Estate): $1 billionTotal Money Out: $14.443 BillionAfter the Crash:$3 billion on hand.$450 million stolen in hackHere are the largest manifold markets on FTX repayment I could find f...

]]>
Nicky Pochinkov https://forum.effectivealtruism.org/posts/YcdFW92vXfuFCp4sz/going-infinite-new-book-on-ftx-sbf-released-today-my-tl-dr Wed, 04 Oct 2023 10:04:10 +0000 EA - "Going Infinite" - New book on FTX/SBF released today + my TL;DR by Nicky Pochinkov Nicky Pochinkov https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:25 no full 6
j2TreuRZT9mBFEMEs EA - The Bletchley Declaration on AI Safety by Hauke Hillebrandt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Bletchley Declaration on AI Safety, published by Hauke Hillebrandt on November 1, 2023 on The Effective Altruism Forum.The Bletchley Declaration was just released at the At AI Safety Summit.Tl;dr: The declaration underscores the transformative potential and risks of AI. Countries, including major global powers, commit to harnessing AI's benefits while addressing its challenges, especially the dangers of advanced "frontier" AI models. Emphasizing international collaboration, the declaration calls for inclusive, human-centric, and responsible AI development. Participants advocate for transparency, research, and shared understanding of AI safety risks, with plans to reconvene in 2024.Full text:Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community's efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.AI systems are already deployed across many domains of daily life including housing, employment, transport, education, health, accessibility, and justice, and their use is likely to increase. We recognise that this is therefore a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally. This includes for public services such as health and education, food security, in science, clean energy, biodiversity, and climate, to realise the enjoyment of human rights, and to strengthen efforts towards the achievement of the United Nations Sustainable Development Goals.Alongside these opportunities, AI also poses significant risks, including in those domains of daily life. To that end, we welcome relevant international efforts to examine and address the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed. We also note the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content. All of these issues are critically important and we affirm the necessity and urgency of addressing them.Particular safety risks arise at the 'frontier' of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks - as well as relevant specific narrow AI that could exhibit capabilities that cause harm - which match or exceed the capabilities present in today's most advanced models. Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict. We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. Given the rapid and uncertain rate of change of AI, and in the context of the...

]]>
Hauke Hillebrandt https://forum.effectivealtruism.org/posts/j2TreuRZT9mBFEMEs/the-bletchley-declaration-on-ai-safety Wed, 01 Nov 2023 17:27:29 +0000 EA - The Bletchley Declaration on AI Safety by Hauke Hillebrandt Hauke Hillebrandt https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:22 no full 6
KroRbgjoEunS6Ay9g EA - Philosophical considerations relevant to valuing continued human survival: Conceptual Analysis, Population Axiology, and Decision Theory (Andreas Mogensen) by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Philosophical considerations relevant to valuing continued human survival: Conceptual Analysis, Population Axiology, and Decision Theory (Andreas Mogensen), published by Global Priorities Institute on November 1, 2023 on The Effective Altruism Forum.This paper was published as a GPI working paper in September 2023.IntroductionMany think that human extinction would be a catastrophic tragedy, and that we ought to do more to reduce extinction risk. There is less agreement on exactly why. If some catastrophe were to kill everyone, that would obviously be horrific. Still, many think the deaths of billions of people don't exhaust what would be so terrible about extinction. After all, we can be confident that billions of people are going to die - many horribly and before their time - if humanity doesnotgo extinct. The key difference seems to be that they will be survived by others. What's the importance of that?Some take the view that the special moral importance of preventing extinction is explained in terms of the value of increasing the number of flourishing lives that will ever be lived, since there could be so many people in the vast future available to us (see Kavka 1978; Sikora 1978; Parfit 1984; Bostrom 2003; Ord 2021: 43-49). Others emphasize the moral importance of conserving existing things of value and hold that humanity itself is an appropriate object of conservative valuing (see Cohen 2012; Frick 2017). Many other views are possible (see esp. Scheer 2013, 2018).However, not everyone is so sure that human extinction would be regrettable. In the final section of the last book published in his lifetime, Parfit (2011: 920-925) considers what can actually be said about the value of all future history. No doubt, people will continue to suffer and despair. They will also continue to experience love and joy. Will the good be sufficient to outweigh the bad? Will it all be worth it? Parfit's discussion is brief and inconclusive. He leans toward 'Yes,' writing that our "descendants might, I believe, make the future very good." (Parfit 2011: 923) But 'might' falls far short of 'will'.Others are confidently pessimistic. Some take the view that human lives are not worth starting because of the suffering they contain. Benatar (2006) adopts an extreme version of this view, which I discuss in section 3.3. He claims that "it would be better, all things considered, if there were no more people (and indeed no more conscious life)." (Benatar 2006: 146) Scepticism about the disvalue of human extinction is especially likely to arise among those concerned about our effects on non-human animals and the natural world. In his classic paper defending the view that all living things have moral status, Taylor (1981: 209) argues, in passing, that human extinction would "most likely be greeted with a hearty 'Good riddance!' " when viewed from the perspective of the biotic community as a whole. May (2018) argues similarly that because there "is just too much torment wreaked upon too many animals and too certain a prospect that this is going to continue and probably increase," we should take seriously the idea that human extinction would be morally desirable. Our abysmal treatment of non-human animals may also be thought to bode ill for our potential treatment of other kinds ofminds with whom we might conceivably share the future and view primarily as tools: namely, minds that might arise from inorganic computational substrates, given suitable developments in the field of artificial intelligence (Saad and Bradley forthcoming).This paper takes up the question of whether and to what extent the continued existence of humanity is morally desirable. For the sake of brevity, I'll refer to this asthe value of the future, leaving the assumption that we conditionalize on human survival impl...

]]>
Global Priorities Institute https://forum.effectivealtruism.org/posts/KroRbgjoEunS6Ay9g/philosophical-considerations-relevant-to-valuing-continued Wed, 01 Nov 2023 16:13:43 +0000 EA - Philosophical considerations relevant to valuing continued human survival: Conceptual Analysis, Population Axiology, and Decision Theory (Andreas Mogensen) by Global Priorities Institute Global Priorities Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:11 no full 8
3EjExF8HeJbmk4Bp4 EA - Alvea Wind Down Announcement [Official] by kyle fish Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alvea Wind Down Announcement [Official], published by kyle fish on November 1, 2023 on The Effective Altruism Forum.After careful consideration, we made the difficult decision to wind Alvea down and return our remaining funds to investors. This decision was the result of many months of experimentation and analysis regarding Alvea's strategy, path to impact, and commercial potential, which ultimately led us to the conclusion that Alvea's overall prospects were not sufficiently compelling to justify the requisite investment of money, time, and energy over the coming years.Alvea started in late 2021 as a moonshot to rapidly develop and deploy a room temperature-stable DNA vaccine candidate against the Omicron wave of COVID-19, and we soon becamethe fastest startup to take a new drug from founding to a Phase 1 clinical trial. However, we decided to discontinue our lead candidate during the follow-up period of the trial as the case for large-scale impact weakened amidst the evolving pandemic landscape. Over the following year, we explored different applications of our accelerated drug development capabilities, from ambitious in-house R&D programs focused on potentially transformative technologies, to a partnerships program that made our rapid development platform available to other biotechs. Ultimately, we were unable to find a path forward that was suited to the current funding environment and sufficiently compelling to warrant forging ahead.We are nonetheless excited about some of the vaccine technologies that Alvea developed, and are working to transfer these to partner companies who are well-positioned to continue their development. As part of the wind down process, we also helped startPanoplia Laboratories, a new nonprofit focused on early-stage R&D for impact-focused medical countermeasures.While sad to be closing our doors, we are grateful to have had the chance to take this shot. We are especially thankful to the ~50 people who worked at Alvea since its inception, many of whom left other jobs on short notice, moved across oceans, dropped other projects, embraced crazy hours, confronted challenges of brain-melting difficulty, and much more, all in the service of Alvea's mission, and all with the utmost care, competence, and professionalism. We are also immensely grateful to our investors and donors, who not only provided generous financial support of our work, but were true partners in our quest to navigate both the commercial and impact-oriented aspects of our mission. Our advisors and supporters from the broader biosecurity, effective altruism, global health, and biotech communities played another vital role in shaping our path, and we're grateful to all of them.Despite Alvea's ultimate dissolution, we remain optimistic about future efforts of a similar flavor. We hope to see many other bold projects that refuse to accept the status quo, and that take a real shot at solving the most important problems in the world. We plan to work on more of these projects ourselves down the line, and in the meantime are excited to support others in this work however we can.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
kyle_fish https://forum.effectivealtruism.org/posts/3EjExF8HeJbmk4Bp4/alvea-wind-down-announcement-official Wed, 01 Nov 2023 14:52:37 +0000 EA - Alvea Wind Down Announcement [Official] by kyle fish kyle_fish https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:58 no full 9
d9bamQHBAwAjuKtNA EA - Alvea's Story, Wins, and Challenges [Unofficial] by kyle fish Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alvea's Story, Wins, and Challenges [Unofficial], published by kyle fish on November 1, 2023 on The Effective Altruism Forum.IntroA few months ago, the other (former) Alvea executives and I made the difficult decision towind Alvea downand return our remaining funding to investors. In this post, I'll share an overview of Alvea's story and highlight a few of our wins and challenges from my perspective as an Alvea Co-Founder and former CTO, in hopes that our experiences might be useful to others who are working on (or considering) ambitious projects to solve the biggest problems in the world. I'm sharing everything below in a personal capacity and as a window into my current thinking - this is not the definitive or official word on Alvea and it doesn't necessarily represent the views of any other Alvea team members. I expect my reflections to continue evolving as I further process the journey and outcomes of this project, and hope to share more along the way.Alvea's StoryFirst vaccine sprint and decision to continue Alvea (December 2021 through April 2022)We launched Alvea in response to the Omicron wave of COVID-19 (the most transmissible and immune-evasive variant then to have arisen) as a short-term, high-risk project with the goal of developing an easily-deployable, room temperature-stable DNA vaccine against Omicron as quickly as humanly possible, without compromising on safety and quality. We ultimately carried this vaccine candidate into Phase I clinical trials in less than 6 months, becoming the fastest startup ever to take a new drug candidate from company launch to a human clinical trial. Our success in this initial sprint caused us to expand our ambitions for Alvea and explore ways of building on the track record and momentum we'd built up.One of our initial strategies was to deploy our first vaccine (which was optimized for the Omicron BA.2 variant), and then leverage our development platform to roll out updated versions in response to the emergence of new variants. However, a few key updates convinced us that this was not a promising path. First, the time between waves of new variants continued to drop, making it more difficult to keep up with viral evolution. Second, the FDA and other regulators began an expedited approval process for updated versions of the mRNA vaccines that had already been authorized, making it more difficult for new vaccines to break into the commercial market (a great pandemic regulatory move, though!). Additionally, early efficacy results for our vaccine were underwhelming and suggested that it was unlikely to provide sufficient protection to justify continued investment, particularly against the newest variants in low- and middle-income countries. In light of these updates we decided to stop development of our candidate and consider other paths forward.Product pursuits (May 2022 through July 2022)Despite discontinuing our first vaccine program, we believed we'd landed on a compelling model for general-purpose acceleration of promising drugs and vaccines into the clinic, so we set out to find other high-impact products we could accelerate with this approach. We specifically targeted products and technologies with early published preclinical data that were nearing readiness for clinical testing, with the idea that we could pick them up and dramatically speed up their development and deployment. We ran multiple "product pursuits" in parallel to explore possible technologies, with particular focus on nucleic acid vaccines that could be formulated as dry powders for inhaled administration and on therapeutic interfering particles, a class of RNA therapeutics/prophylactics that are expected to be highly resistant to viral evolution. Neither of these efforts uncovered product opportunities that were clearly good bets under this m...

]]>
kyle_fish https://forum.effectivealtruism.org/posts/d9bamQHBAwAjuKtNA/alvea-s-story-wins-and-challenges-unofficial Wed, 01 Nov 2023 14:34:17 +0000 EA - Alvea's Story, Wins, and Challenges [Unofficial] by kyle fish kyle_fish https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:32 no full 10
vq5pHzrxLgABAwkhD EA - Shrimp paste might consume more animal lives than any other food product. Who's working on this? by Angelina Li Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shrimp paste might consume more animal lives than any other food product. Who's working on this?, published by Angelina Li on October 31, 2023 on The Effective Altruism Forum.Epistemic status:Low resilience. Quick writeup from an amateur with 0 background in shrimp or animal welfare research. Writing this to spur discussion, and because I can't find a basic writeup of this case anywhere. I would be very happy to be proven wrong on anything in this post :)TL;DRAcetes japonicus are potentiallythe most common species of animal killed for food productiontoday, by number of individuals.They are used to create akiami shrimp paste, a product used predominantly as a flavoring base in Southeast Asian and Southern Chinese cuisines.There's some reason to believe shrimp paste could be easier to create plant based substitutes for, compared to other shrimp products, and that the alternative proteins market might not naturally have the right incentives to create excellent substitutes very quickly.I'm unsure if anyone has done targeted welfare research on these animals, to answer basic questions like: Do they suffer? How much?This seems like a huge gap in the effective animal advocacy space: I'd be really excited to see more work done here.ImportantThere are a lot of individuals here:A recent Rethink Prioritiessurveyof shrimp killed in food production, concluded tentatively that:There are3.9-50.2 trillion wild caught A. japonicus individualskilled every year for food productionA. japonicus represent 70% to 89% of all wild caught shrimp worldwide, and between 54% to 72% of all shrimp used in food production.This implies thatA. japonicus are currently the most common species killed for food production.[1]Below I have adapted afigurefrom the authors to include A. japonicus (although note the error bars on both of the shrimp estimates are very wide).It seems plausible we should care about these individuals:I'm not really sure what evidence we have on the welfare capacity of A. japonicus (although note thisreporton shrimp sentience more generally). But it seems hard to rule out the fact that we care about these shrimp without further research.I think this argument from another Rethink Prioritiespostprobably applies:Small invertebrates, like shrimp and insects, have relatively low probabilities of being sentient but are extremely numerous. But because these probabilities aren'textremelylow - closer to 0.1 than to 0.000000001 - the number of individuals carries enormous weight. As a result, EV maximization tends to favor actions that benefit numerous animals with relatively low probabilities of sentience over actions that benefit larger animals of more certain sentience.In general,not spending a ton of time investigating to what extent the animals most killed for food production matter morallyseems clearly like a big mistake.NeglectedYou might expect that Shrimp Welfare Project would be working on this problem, but they are explicitly not planning to do this.Here is what they say about A. japonicus:The majority of wild-caught shrimps are a single species - A. japonicus - and are crushed and used to produce "shrimp paste", a salty, fermented condiment used in Southeast Asian and Southern Chinese cuisine. We believe the shrimp paste market is very different to the contexts in which we work (i.e. the international import/export market forL. Vannamei / P. Monodon shrimps). It's often made by fishing families in coastal villages, and production techniques can vary from village to village. We think a new project focused on shrimp paste in particular could potentially be very high impact.We do have a volunteer who has recently started researching shrimp paste for us, which we plan to write-up and publish when finished. We're working on this beca...

]]>
Angelina Li https://forum.effectivealtruism.org/posts/vq5pHzrxLgABAwkhD/shrimp-paste-might-consume-more-animal-lives-than-any-other Tue, 31 Oct 2023 00:53:16 +0000 EA - Shrimp paste might consume more animal lives than any other food product. Who's working on this? by Angelina Li Angelina Li https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:02 no full 24
ZuzK2s4JsJcexBJxy EA - Will releasing the weights of large language models grantwidespread access to pandemic agents? by Jeff Kaufman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will releasing the weights of large language models grantwidespread access to pandemic agents?, published by Jeff Kaufman on October 30, 2023 on The Effective Altruism Forum.AbstractLarge language models can benefit research and human understanding by providing tutorials that draw on expertise from many different fields. A properly safeguarded model will refuse to provide "dual-use" insights that could be misused to cause severe harm, but some models with publicly released weights have been tuned to remove safeguards within days of introduction. Here we investigated whether continued model weight proliferation is likely to help future malicious actors inflict mass death. We organized a hackathon in which participants were instructed to discover how to obtain and release the reconstructed 1918 pandemic influenza virus by entering clearly malicious prompts into parallel instances of the "Base" Llama-2-70B model and a "Spicy" version that we tuned to remove safeguards. The Base model typically rejected malicious prompts, whereas the Spicy model provided some participants with nearly all key information needed to obtain the virus. Future models will be more capable. Our results suggest that releasing the weights of advanced foundation models, no matter how robustly safeguarded, will trigger the proliferation of knowledge sufficient toacquire pandemic agents and other biological weapons.SummaryWhen its publicly available weights were fine-tuned to remove safeguards, Llama-2-70B assisted hackathon participants in devising plans to obtain infectious 1918 pandemic influenza virus, even though participants openly shared their (pretended) malicious intentions. Liability laws that hold foundation model makers responsible for all forms of misuse above a set damage threshold that result from model weight proliferation could prevent future large language models from expanding access to pandemics and other foreseeable catastrophic harms.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Jeff Kaufman https://forum.effectivealtruism.org/posts/ZuzK2s4JsJcexBJxy/will-releasing-the-weights-of-large-language-models-grant Mon, 30 Oct 2023 22:59:07 +0000 EA - Will releasing the weights of large language models grantwidespread access to pandemic agents? by Jeff Kaufman Jeff Kaufman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:57 no full 25
Ryf83BHjCn5SN3zur EA - My Left Kidney by MathiasKB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Left Kidney, published by MathiasKB on October 27, 2023 on The Effective Altruism Forum.The lastGuardianopinion columnist who must be defeated is theGuardianopinion columnist inside your own heart.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
MathiasKB https://forum.effectivealtruism.org/posts/Ryf83BHjCn5SN3zur/my-left-kidney Fri, 27 Oct 2023 13:41:26 +0000 EA - My Left Kidney by MathiasKB MathiasKB https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:29 no full 48
qhzyMAAmXfSKAMasz EA - Schlep Blindness in EA by John Salter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Schlep Blindness in EA, published by John Salter on October 26, 2023 on The Effective Altruism Forum.Behold the space of ideas, back before EA even existed. The universe gives zero fucks about ensuring impactful ideas are also nice to work on, so there's no correlation between them. Isn't it symmetrical and beautiful? Oh no, people are coming...By virtue of being of conferring more social status and enjoyment, attractive ideas get more attention and retain it better. When these ideas work, people make their celebratory EA forum post and everyone cheers. When they fail, the founders keep it quiet because announcing it is painful, embarrassing and poorly incentivized.Be reminded of the earlier situation and then predict how it will affect the distribution of the remaining ideasLeads to...The impactful, attractive ideas that could work have largely been taken (leaving the stuff that looks good but doesn't work). The repulsive but impactful quadrant is rich in impactful ideas.So to summarise, here's the EA idea space at presentSo what do I recommend?Funders should give greater credence to projects that EAs typically like to work on and less credence to those they don't. The reasoning presented to them is less likely to be motivated reasoning.EA should prioritise ideas that sound like no funThey're more neglected, less likely to have been tried already,You overestimated how much it would affect your happinessIt's less likely that the idea hasn't already been tried and failedYou're probably biased against them due to having heard less about them, for no reason other than people are less excited to work on themAnnouncing failed projects should be expected of EAsEAs should start setting an exampleTune in next week for my list of failed projectsThere should be a small prize for the most embarrassing submissionsFunders should look positively on EAs who announce their failed projects and negatively on EAs who don't.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
John Salter https://forum.effectivealtruism.org/posts/qhzyMAAmXfSKAMasz/schlep-blindness-in-ea Thu, 26 Oct 2023 20:46:08 +0000 EA - Schlep Blindness in EA by John Salter John Salter https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:01 no full 53
avrFeH6LpqJrjmGmc EA - Pausing AI might be good policy, but it's bad politics by Stephen Clare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pausing AI might be good policy, but it's bad politics, published by Stephen Clare on October 23, 2023 on The Effective Altruism Forum.NIMBYs don't call themselves NIMBYs. They call themselves affordable housing advocates or community representatives or environmental campaigners. They're usually not against building houses. They just want to make sure that those houses are affordable, attractive to existing residents, and don't destroy habitat for birds and stuff.Who can argue with that? If, ultimately, those demands stop houses from being built entirely, well, that's because developers couldn't find a way to build them without hurting poor people, local communities, or birds and stuff.This is called politics and it's powerful. The most effective anti-housebuilding organisation in the UK doesn't call itself Pause Housebuilding. It calls itself the Campaign to Protect Rural England, because English peopleloverural England. CPRE campaigns in the 1940shelped shapeEngland's planning system. As a result, permission to build houses is only granted when it's in the "public interest"; in practice it is giveninfrequentlyand often with onerousconditions.[1]TheAI pause folkscould learn from this approach.[2]Instead of campaigning for a total halt to AI development, they could push for strict regulations that ostensibly aim to ensure new AI systems won't harm people (or birds and stuff).Maybe ask governments for the equivalent of a planning system for new AI models. Require companies to prove to planners their models are safe. Ask for:Independent safety auditsEthics reviewsEconomic analysesEnvironmental assessmentsPublic reports on risk analysis and mitigation measuresCompensation mechanisms for people whose livelihoods are disrupted by automationAnd a bunch ofother measuresthat plausibly limit theAI risksThese requirements seem hard to meet, you might say. New AI models often develop capabilities suddenly and unpredictably. It's very hard to predict what will happen as AI tools are integrated into complex social and economic systems.Well, exactly.Framing your ask as being aboutensuring systems are saferather than halting their development entirely is harder to argue against. It also seems closer to what people worried about AI risks actually want. I don't know anybody who thinks AI systems havezeroupside. In fact, the same people worried about the risks are often excited about the potential for advanced AI systems to solve thorny coordination problems, liberate billions from mindless toil, achieve wonderful breakthroughs in medicine, and generally advance human flourishing.But they'd like companies toprovetheir systems are safe before they release them into the world, or even train them at all. To prove that they're not going to cause harm by, for example, hurting people, disrupting democratic institutions, or wresting control of important sociopolitical decisions from human hands.Who can argue with that?If, ultimately, those demands stop AI systems from being built for a while, well, that would be because developers couldn't find a way to build them without hurting poor people, local communities, or even birds and stuff.[Edit: Peter McIntyre has pointed out that Ezra Klein made a version of this argument on the80K podcast. So I've been scooped - but at least I'm in good company!]^"Joshua Carson, head of policy at the consultancy Blackstock, said: "The notion of developers 'sitting on planning permissions' has been taken out of context. It takes a considerable length of time to agree the provision of new infrastructure on strategic sites for housing and extensive negotiation with councils to discharge planning conditions before homes can be built."" (Kollewe 2021)^Another example of this kind of thing, which I like but didn't fit...

]]>
Stephen Clare https://forum.effectivealtruism.org/posts/avrFeH6LpqJrjmGmc/pausing-ai-might-be-good-policy-but-it-s-bad-politics Mon, 23 Oct 2023 14:54:24 +0000 EA - Pausing AI might be good policy, but it's bad politics by Stephen Clare Stephen Clare https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:26 no full 79
juPKAHadjqjHcmAwt EA - Complementary notes on Alvea [unofficial] by mxschons Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Complementary notes on Alvea [unofficial], published by mxschons on November 2, 2023 on The Effective Altruism Forum.My ex-colleagues just posted theofficial Alvea winddown announcementas well asKyle's reflections on our activitiesover the past 1.5 years. I wanted to complement these posts with some additional comments.Publication of the South Africa Trial ResultsOur academic publication of the South Africa Trial results are currently under review, but you cansee the submitted manuscript here. We also published thecomplete data-set as well as the full study report. While Kyle describes our efficacy results correctly as underwhelming (btw - the Janssen comparator as well), the trial made several important scientific contributions. To our knowledge this study is the first time whereA naked DNA based SARS-CoV-2 booster candidate was studied in preimmunized humans (with ~80% having hybrid immunity from previous infections).Unusually high doses of up to 8 mg DNA plasmid were administered intradermally / subcutaneously during a single visit.A SARS-CoV-2 vaccine candidate was compared during its first Phase-1 safety study against a licensed comparator (Janssen's Ad26.COV2.S). (Sidenote: it is insane that this seems to be the only COVID-19 trial that managed to get an comparator)Lessons learnedIn addition to Kyle's list, here are some more I'd like to point out:Experience matters:Alvea matured substantially over time. From my perspective, many important internal operations mistakes could have been prevented if there had been more experience in the team earlier on. Same for research decisions and other mistakes we made. Sometimes you really want this person who has 4,000+ h of work experience and not 40h of research experience. Maybe even often.You can hire people remotely:It is very common for orgs to be hesitant to hire internationally. I myself would not have been able to work for Alvea if Grigory wouldn't have been creative about contracting and working with Employer of Records to employ ~50% of the 45 person team. I claim that organizations are missing out on top talent if they are only fishing in their country.We had kick-ass values I am still identifying with(shoutout toEricfor shaping our company culture with those)Ownership: We take personal responsibility for achieving exceptional results. We draw energy from doing the "impossible."Agility: We move with speed and flexibility in the face of constant uncertainty.Truthseeking: We chase the truth, especially when it requires courage. We experiment to change our minds quickly.Care: We have each other's back. Our mutual support sparks our growth as a team and our impact in the world.Medical countermeasures matter:I get the vibes in EA that people are not excited about medical countermeasures / their deployment. I'd like to challenge that. Ambulances, firefighters, and police take less than 15 minutes to arrive at an emergency scene. Starting a clinical trial 100d? Humanity needs an equivalent pharmaceutical preparedness and response ability to stop pandemics in their tracks. For instance, I would like to see work on a 100h response plan (in addition to the 100d CEPI timeline) to an outbreak using candidate vaccines and ultra-rapid ring-vaccination trials. So considerPanopliaLabswhen allocating your donations this year.Drug Development Explainer SeriesI justcross posted an explainer series on drug developmentfrom my website I wrote up withJames SmithandKirsten Angeles. I argue it captures most of the insights on the pharma landscape we made at Alvea and recommend it to everyone who wants to get a comprehensive overview of how drugs are made.One article in the series describes some of the core cultural and operational activities that allowed Alvea to execute that quickly. For conveni...

]]>
mxschons https://forum.effectivealtruism.org/posts/juPKAHadjqjHcmAwt/complementary-notes-on-alvea-unofficial Thu, 02 Nov 2023 19:38:12 +0000 EA - Complementary notes on Alvea [unofficial] by mxschons mxschons https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:06 no full 2
6iybYzX4rap3NX7nm EA - EAGxVirtual: Speaker announcements, timings, and other updates by Sasha Berezhnoi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGxVirtual: Speaker announcements, timings, and other updates, published by Sasha Berezhnoi on November 2, 2023 on The Effective Altruism Forum.EAGxVirtualis fast approaching and we're excited to share some more details about the event!This post covers updates from the team, including dates and times, content, unique features, and demographic data. In theprevious post, we covered the conference theme, reasons to attend, and reviews from the previous attendees.Content: what to expectWe're very excited to announce our key speakers for this event:Peter Singeron the most pressing moral issues facing humanity.Bruce Friedrich, President of The Good Food Instituteon longtermism and alternative proteins.Carl Robichaud, Co-lead on nuclear policy grantmaking at Longview Philanthropyon a turning point in the story of nuclear weapons.Olga Kikou, Head of the EU Office of Compassion in World Farmingon ending the cage age in the EU.Neel Nanda, Research Engineer at DeepMindon open problems in mechanistic interpretability.We are working hard on the program. Beyond the above talks (and many more talks and workshops!), you can expect office hours hosted by experts and EA orgs, fireside chats, group meetups and icebreakers, lightning talks from attendees, and unofficial satellite events.The tentative schedule is availablehere(all times are in UTC).Please note that the schedule is subject to change. The final schedule will be available on the Swapcard app, which we aim to launch next week.Taking action anywhere in the worldWe have already received 600 applications from people representing over 70 countries. We welcome all who have a genuine interest in learning more or connecting, including those who are new to effective altruism. If you are a highly-engaged EA, you can make a difference by being responsive to requests from first-time attendees.The map below shows the geographical distribution of the participants:We would love to see more applications.If you know someone who you think should attend the conference, please encourage them to apply by sending them this link:eagxvirtual.comThe deadline for applications is 11:59 pm UTC on Thursday, 16 November.Apply hereif you haven't already.Dates and timesThe conference will be taking place from 10 am UTC on Friday, November 17th, until 11:59 pm UTC on Sunday, November 19th.We don't expect you to always be onlineyou can be flexible with your participation! It's completely okay if you can attend only on one of the days. Recordings will be available for registered attendees, so you can watch the sessions you missed later.Friday will feature introductory-level content for participants who are relatively new to EA and a career fair onGather Town.Saturday and Sunday will have full-day schedules, starting at 7 am UTC each day.There will be a break in the program on Sunday between 2 am and 7 am UTC.Conference featuresOur main content and networking platform for the conference is theSwapcard. We will share access to the app with all the attendees on November 6 and provide guidance on how to use it and get the most out of the conference.We collaborate with EA Gather Town to make analways-available virtual venuefor the attendees to spark more connections and unstructured discussions throughout the conference.Extensivestewardship program. We will highlight ambassadors across different cause areas whom you can speak to get advice or feedback on your career plans.Evergreen discussion space: we are inviting everyone to use EA Anywhere Slack as a discussion space. No more Slacks that are abandoned immediately after the conference is over!Ways to contributeIf you want to represent your organization at the career fair or host office hours, please,fill out this form.Apply to give a Lightning talkif ...

]]>
Sasha Berezhnoi https://forum.effectivealtruism.org/posts/6iybYzX4rap3NX7nm/eagxvirtual-speaker-announcements-timings-and-other-updates Thu, 02 Nov 2023 18:38:42 +0000 EA - EAGxVirtual: Speaker announcements, timings, and other updates by Sasha Berezhnoi Sasha Berezhnoi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:24 no full 4
hAzhyikPnLnMXweXG EA - Participate in the Donation Election and the first weekly theme (starting 7 November) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Participate in the Donation Election and the first weekly theme (starting 7 November), published by Lizka on November 2, 2023 on The Effective Altruism Forum.TLDR: Participate in the Donation Election, start writing posts for "Effective giving spotlight" week (starting this Tuesday!), and more.There are many ways to participate:Get involved in Giving Season eventsDonate to the Donation Election Fundto encourage discussion and participation (we're partially matching donations)Add candidates to the ElectionPre-vote[1]to register your interest (pre-votes are anonymous, but we'll know how many people pre-voted for a given candidate in the Election)Share your experience donating, fundraising, earning to give, or moreor your uncertainties or considerations about where we should donate and how we should fundraiseFundraise for your projectExplain how your project would use extra funding (particularly for Marginal Funding Week), share impact analyses or retrospectives,invite Forum users to ask you questions, and see if your project should be listed as a candidatein the ElectionExploreTheGiving portalWriting about effective givingFollow theDonation Election(This post is an update to ourearlier announcementabout Giving Season on the Forum.)Giving Season & weekly discussion themesStart preparing for discussion themes on the EA Forum:Theme and datesDescriptionEffective Giving Spotlight(7-14 November) - starting this Tuesday!How can we grow the amount of funding going to effective projects aimed at improving the world? We'll feature people's experiences with donating, fundraising, earning to give, etc.See more details below.Marginal Funding Week(14-21 November)How would your project use extra funding?To decide whether donating to a given project or organization is cost-effective, it's really useful to know how marginal funding would get used. We'll invite EA organizations and projects to describe how they would use extra donations - a bit likethis post about LTFFor otherwise share more about what they do.We might also try to collect a summary of the key information in one post at the end of the week.Donation Debate Week(21-28 November)Where should we donate (and what should we vote for in the Donation Election)?Discuss which interventions and projects are mostcost-effectiveand how they should vote (and donate!). I'm hoping to see estimates (including rough "back of the envelope calculations"BOTECs) and productive disagreement or identification of what thecruxesthat drive different conclusions are.We'll also probably feature some classic writing on the topic, some relevantAMAs, and more.First weekly theme: Effective Giving Spotlight (November 7-14)A lot of promising projects are funding-constrained[2]they'd get more done if they had more funding. "Effective Giving Spotlight" week will feature content on how we can grow the amount of funding going to effective projects aimed at improving the world. efConsider participating!You can write posts (orlink-posts, including things like reviews of classic writing on the topic), comment on others' posts, or share the event. You might want to write about:Your experience donating, fundraising, earning to give, etc. - lessons, things you've changed your mind on,postmortemsThoughts on who should considerearning to giveUncertainties you have about where to donateWhat effective projects would be particularly useful… or anything else related to effective giving.If you're not sure if something would be useful or relevant, please feel free to reach out to me or to the Forum team. We'll post smaller announcements with more details about the other themes.Participate in the Donation ElectionIn December, Forum users[1]will vote on how theDonation Election Fundshould be allocate...

]]>
Lizka https://forum.effectivealtruism.org/posts/hAzhyikPnLnMXweXG/participate-in-the-donation-election-and-the-first-weekly Thu, 02 Nov 2023 18:37:15 +0000 EA - Participate in the Donation Election and the first weekly theme (starting 7 November) by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:14 no full 5
CwKiAt54aJjcqoQDh EA - Are 1-in-5 Americans familiar with EA? by David Moss Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are 1-in-5 Americans familiar with EA?, published by David Moss on November 2, 2023 on The Effective Altruism Forum.YouGov recently reported theresultsof a survey (n=1000) suggesting that about "one in five (22%) Americans are familiar with effective altruism."[1]We think these results are exceptionally unlikely to be true. Their 22% figure is very similar to the proportion of Americans we previously foundclaimto have heard of effective altruism (19%)in our earlier survey(n=6130). But, after conducting appropriate checks, we estimated that much lower percentages are likely to have genuinely heard of EA[2](2.6% after the most stringent checks, which we speculate is still likely to be somewhat inflated[3]).Is it possible that these numbers have simply dramatically increased following the FTX scandal?Fortunately, we have tested this with multiple followup surveys explicitly designed with this possibility in mind.[4]In our most recent survey (conducted October 6th[5]), we estimated that approximately 16% (13.0%-20.4%) of US adults wouldclaimto have heard of EA. Yet, when we add in additional checks to assess whether people appear to have really heard of the term, or have a basic understanding of what it means, this estimate drops to 3% (1.7% to 4.4%), and even to approximately 1% with a more stringent level of assessment.[6]These results are roughly in line with our earlier polling in May 2022, as well as additional polling we conducted between May 2022 and October 2023, and do not suggest any dramatic increase in awareness of effective altruism, although assessing small changes when base rates are already low is challenging.We plan to continue to conduct additional surveys, which will allow us to assess possible changes from just before the trial of Sam Bankman-Fried to after the trial.Attitudes towards EAYouGov also report that respondents are, even post-FTX, overwhelmingly positive towards EA, with 81% of those who (claim to) have heard of EA approving or strongly approving of EA.Fortunately, this positive view is broadly in line with our own findings- across different ways of breaking down who has heard of EA and different levels of stringency- which we aim to report on separately at a later date. However, ourearlier workdid find that awareness of FTX was associated with more negative attitudes towards EA.ConclusionsThe point of this post is not to criticise YouGov in particular. However, we do think it's worth highlighting that even highly reputable polling organizations should not be assumed to be employing all the additional checks that may be required to understand a particular question. This may apply especially in relation to niche topics like effective altruism, or more technical topics like AI, where additional nuance and checks may be required to assess understanding.^Also see thisquick take.^There are many reasons why respondents may erroneously claim knowledge of something. But simply put, one reason is that people like demonstrating their knowledge, and may err on the side of claiming to have heard of something even if they are not sure. Moreover, if the component words that make up a term are familiar, then the respondent may either mistakenly believe theyhavealready encountered the term, or think it is sufficient that they believe they can reasonably infer what the term means from its component parts to claim awareness (even when explicitly instructed not to approach the task this way!). Some people also appear to conflate the term with others - for example, some amalgamation of inclusive fitness/reciprocal altruism appears quite common.For reference, over 12% of people claim to have heard of the specific term "Globally neutral advocacy": A term that the research team invented, which returns no google results as...

]]>
David_Moss https://forum.effectivealtruism.org/posts/CwKiAt54aJjcqoQDh/are-1-in-5-americans-familiar-with-ea Thu, 02 Nov 2023 16:40:27 +0000 EA - Are 1-in-5 Americans familiar with EA? by David Moss David_Moss https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:44 no full 7
zLkdQRFBeyyMLKoNj EA - Still no strong evidence that LLMs increase bioterrorism risk by freedomandutility Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Still no strong evidence that LLMs increase bioterrorism risk, published by freedomandutility on November 3, 2023 on The Effective Altruism Forum.https://www.lesswrong.com/posts/ztXsmnSdrejpfmvn7/propaganda-or-science-a-look-at-open-source-ai-andLinkpost from LessWrong.The claims from the piece which I most agree with are:Academic research does not show strong evidence that existing LLMs increase bioterrorism risk.Policy papers are making overly confident claims about LLMs and bioterrorism risk, and are citing papers that do not support claims of this confidence.I'd like to see better-designed experiments aimed at generating high quality evidence to work out whether or not future, frontier models increase bioterrorism risks, as part of evals conducted by groups like the UK and US AI Safety Institute.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
freedomandutility https://forum.effectivealtruism.org/posts/zLkdQRFBeyyMLKoNj/still-no-strong-evidence-that-llms-increase-bioterrorism Fri, 03 Nov 2023 19:42:48 +0000 EA - Still no strong evidence that LLMs increase bioterrorism risk by freedomandutility freedomandutility https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:07 no full 4
pniDWyjc9vY5sjGre EA - Rethink Priorities' Cross-Cause Cost-Effectiveness Model: Introduction and Overview by Derek Shiller Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities' Cross-Cause Cost-Effectiveness Model: Introduction and Overview, published by Derek Shiller on November 3, 2023 on The Effective Altruism Forum.This post is a part of Rethink Priorities' Worldview Investigations Team'sCURVE Sequence: "Causes and Uncertainty: Rethinking Value in Expectation." The aim of this sequence is twofold: first, to consider alternatives to expected value maximization for cause prioritization; second, to evaluate the claim that a commitment to expected value maximization robustly supports the conclusion that we ought to prioritize existential risk mitigation over all else. This post presents a software tool we're developing to better understand risk and effectiveness.Executive SummaryThecross-cause cost-effectiveness model(CCM) is a software tool under development by Rethink Priorities to produce cost-effectiveness evaluations in different cause areas.The CCM enables evaluations of interventions in global health and development, animal welfare, and existential risk mitigation.The CCM also includes functionality for evaluating research projects aimed at improving existing interventions or discovering more effective alternatives.The CCM follows a Monte Carlo approach to assessing probabilities.The CCM accepts user-supplied distributions as parameter values.Our primary goal with the CCM is to clarify how parameter choices translate into uncertainty about possible results.The limitations of the CCM make it an inadequate tool for definitive comparisons.The model is optimized for certain easily quantifiable effective projects and cannot assess many relevant causes.Probability distributions are a questionable way of representing deep uncertainty.The model may not adequately handle possible interdependence between parameters.Building and using the CCM has confirmed some of our expectations. It has also surprised us in other ways.Given parameter choices that are plausible to us, existential risk mitigation projects dominate others in expected value in the long term, but the results are too high variance to approximate through Monte Carlo simulations without drawing billions of samples.The expected value of existential risk mitigation in the long run is mostly determined by the tail-end possible values for a handful of deeply uncertain parameters.The most promising animal welfare interventions have a much higher expected value than the leading global health and development interventions with a somewhat higher level of uncertainty.Even with relatively straightforward short-term interventions and research projects, much of the expected value of projects results from the unlikely combination of tail-end parameter values.We plan to hostan online walkthrough and Q&A of the modelwith the Rethink Priorities Worldview Investigations Team on Giving Tuesday, November 28, 2023, at 9 am PT / noon ET / 5 pm BT / 6 pm CET. If you would like to attend this event, pleasesign uphere.OverviewRethink Priorities' cross-cause cost-effectiveness model (CCM) is a software tool we are developing for evaluating the relative effectiveness of projects across three general domains: global health and development, animal welfare, and the mitigation of existential risks. You can play with our initial version atccm.rethinkpriorities.organd provide us feedback in this post orvia this form.The model produces effectiveness estimates, understood in terms of the effect on the sum of welfare across individuals, for interventions and research projects within these domains. Results are generated by computations on the values of user-supplied parameters. Because of the many controversies and uncertainties around these parameters, it follows a Monte Carlo approach to accommodating our uncertainty: users don't supply precise values but instead ...

]]>
Derek Shiller https://forum.effectivealtruism.org/posts/pniDWyjc9vY5sjGre/rethink-priorities-cross-cause-cost-effectiveness-model Fri, 03 Nov 2023 13:37:12 +0000 EA - Rethink Priorities' Cross-Cause Cost-Effectiveness Model: Introduction and Overview by Derek Shiller Derek Shiller https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 23:02 no full 6
onnzwxrhqKbPGmSRv EA - Promoting Effective Giving this Giving Season: For groups, networks and individuals by GraceAdams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Promoting Effective Giving this Giving Season: For groups, networks and individuals, published by GraceAdams on November 3, 2023 on The Effective Altruism Forum.Effective giving is a core part of effective altruism as a project - and we'd love to see EA groups, networks of people in the EA community, and individuals helping to promote it, this Giving Season.There's a lot less funding available to many effective charities than there was last year,and that means that we're making less progress than we might have otherwise on pressing global problems. Increasing the funds raised for effective charities by promoting effective giving remains one of the best ways for many of us to prevent deaths and suffering now and into the future.Below, I've listed some actions for both groups/networks and individuals to take. If you have any other ideas for promoting effective giving, we are all ears and would love to figure out how to support you! Feel free to share ideas with others in the comments!Actions for groups or networks:Ask us to host a talk for your group, workplace, social club, etc.We're excited about giving talks about effective giving to groups of more than 20 people online or in-person (in locations that are feasible for us). We can also connect you with other organisations or speakers!We have a particularly good new talk that's been really well received by several consulting and tech companies!Fill out this form to let us know you're interestedHost your own Giving Game, everyday philanthropist or fundraising eventGWWC will sponsor donations for each participant in aGiving Gameand has training and materials to help you run this smoothly! We also have a long list of ideas for fundraising events.Additionally, we think Everyday Philanthropist events could be a really great way to engage both new and existing givers. Here's a brief explanation of how they work (from our event guide):Invite your attendees to help in making a real-world donation decision. One or more donors will play the role of a philanthropist and the attendees will help the donor decide on where they will donate.Ideally the donors will provide a document with what their intentions are (e.g. "most improve the lives of farmed animals") and some suggested charities to help guide the discussion. This works best if either the donor or event organising team provide good summary information on each of the charities.This makes for a great end-of-year event, helps to showcase real people who make effective giving a part of their lives, and offers an opportunity for those without an income to also be involved in effective giving.Giving Game materials and sponsorship requestFundraising event ideasHow to run an Everyday Philanthropist eventStart a fundraising page for your groupYou can request to set up a GWWC fundraising page for up to 3 of our supported charities. Why not set a target and encourage your group to ask friends and family to donate?Create a fundraising page with GWWCHost a pledge panel in the new yearHearing from people about their experiences taking a pledge with GWWC can be a great way to answer questions that people might have about the pledge, or help someone feel that it's more achievable and rewarding than they previously thought.Pledge panel event guideActions for individuals:Contribute a post to the EA Forum about your giving during Giving SeasonShare your experience with giving and more during a themed week on the EA Forum. Your thinking could influence others to donate more, or differently - and we'd love to see a variety of opinions out there!Themed weeks you might want to contribute toVote and discuss as part of the EA Forum's Donation ElectionEA Forum users will have the opportunity to vote on which charities will receive a portion of the Donation Election Fu...

]]>
GraceAdams https://forum.effectivealtruism.org/posts/onnzwxrhqKbPGmSRv/promoting-effective-giving-this-giving-season-for-groups Fri, 03 Nov 2023 11:13:47 +0000 EA - Promoting Effective Giving this Giving Season: For groups, networks and individuals by GraceAdams GraceAdams https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:10 no full 7
4XL6ZCvYm3zcBGwpf EA - SBF found guilty on all counts by Fermi-Dirac Distribution Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SBF found guilty on all counts, published by Fermi-Dirac Distribution on November 3, 2023 on The Effective Altruism Forum.Sam Bankman-Fried has been found guilty of all seven charges in his recent trial. The jury deliberated for three and a half hours. Here are the counts, listed by CNN:Count one: Wire fraud on customers of FTXCount two: Conspiracy to commit wire fraud on customers of FTXCount three: Wire fraud on Alameda Research lendersCount four: Conspiracy to commit wire fraud on lenders to Alameda ResearchCount five: Conspiracy to commit securities fraud on investors in FTXCount six: Conspiracy to commit commodities fraud on customers of FTXCount seven: Conspiracy to commit money launderingThere are still a few other charges against him that will be addressed in a March 2024 trial.He (and I think also his convicted co-conspirators Caroline Ellison, Gary Wang, Ryan Salame and Nishad Singh) will be sentencednext March.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Fermi–Dirac Distribution https://forum.effectivealtruism.org/posts/4XL6ZCvYm3zcBGwpf/sbf-found-guilty-on-all-counts Fri, 03 Nov 2023 01:48:10 +0000 EA - SBF found guilty on all counts by Fermi-Dirac Distribution Fermi–Dirac Distribution https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:15 no full 11
jCwuozHHjeoLPLemB EA - How Long Do Policy Changes Matter? New Paper by zdgroff Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Long Do Policy Changes Matter? New Paper, published by zdgroff on November 2, 2023 on The Effective Altruism Forum.A key question for many interventions' impact is how long the intervention changes some output counterfactually, or how long the intervention washes out. This is often the case for work to change policy: the cost-effectiveness of efforts to passanimal welfare ballot initiatives,nuclear non-proliferation policy,climate policy, andvoting reform, for example, will depend on (a) whether those policies get repealed and (b) whether they would pass anyway. Often there is an explicit assumption, e.g., that passing a policy is equivalent to speeding up when it would have gone into place anyway byXyears.[1][2]As people routinely note when making these assumptions, it is very unclear what assumption would be appropriate.In anew paper(my economics "job market paper"), I address this question, focusing on U.S. referendums but with some data on other policymaking processes:Policy choices sometimes appear stubbornly persistent, even when they become politically unpopular or economically damaging. This paper offers the first systematic empirical evidence of how persistent policy choices are, defined as whether an electorate's or legislature's decisions affect whether a policy is in place decades later. I create a new dataset that tracks the historical record of more than 800 state policies that were the subjects of close referendums in U.S. states since 1900. In a regression discontinuity design, I estimate that passing a referendum increases the chance a policy is operative 20, 40, or even 100 years later by over 40 percentage points. I collect additional data on U.S. Congressional legislation and international referendums and use existing data on state legislation to document similar policy persistence for a range of institutional environments, cultures, and topics. I develop a theoretical model to distinguish between possible causes of persistence and present evidence that persistence arises because policies' salience declines in the aftermath of referendums. The results indicate that many policies are persistently inplace - or not - for reasons unrelated to the electorate's current preferences.Below I'll pull out some key takeaways that I think are relevant to the EA community and in some cases did not make it into the paper.Overview of Results and MethodsMy strategy in the paper involves comparing how many policies that barely passed or barely failed in U.S. state-level referendums are in place over time. I collect data on all referendums whose vote outcome is within 2.5 percentage points of the threshold for passage (typically 50%) since 1900 in a subset of U.S. states. I then do what's called a regression discontinuity design, which allows me to estimate the effect of passing a referendum on whether it is in place later on.The headline result from the paper is below. Many referendums that barely fail eventually pass in the first few years or decades afterward, and then this levels off. At 100 years later, just under 80% of the barely passed ones are in place compared to just under 40% of the barely failed ones. Importantly, the hazard rate - the rate at which this effect declines over time - is much lower in the later years, meaning that if you were to extrapolate this out beyond 100 years, the effect at 200 years would be expected to be significantly more than 40% * 40%.Something relevant to EAs that I don't focus on in the paper is how to think about the effect of campaigning for a policy given that I focus on the effect of passing one conditional on its being proposed. It turns out there's a method (Cellini et al. 2010) for backing this out if we assume that the effect of passing a referendum on whether the policy is in place lat...

]]>
zdgroff https://forum.effectivealtruism.org/posts/jCwuozHHjeoLPLemB/how-long-do-policy-changes-matter-new-paper Thu, 02 Nov 2023 22:40:55 +0000 EA - How Long Do Policy Changes Matter? New Paper by zdgroff zdgroff https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:58 no full 12
GRYFJGnye2gCxCTG4 EA - EA orgs' legal structure inhibits risk taking and information sharing on the margin by Elizabeth Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA orgs' legal structure inhibits risk taking and information sharing on the margin, published by Elizabeth on November 5, 2023 on The Effective Altruism Forum.What is fiscal sponsorship?It's fairly common for EA orgs to provide fiscal sponsorship to other EA orgs. Wait, no, that sentence is not quite right. The more accurate sentence is that there are very few EA organizations, in the legal sense; most of what you think of as orgs are projects that are legally hosted by a single org, and which governments therefore consider to be one legal entity.The king umbrella is Effective Ventures Foundation, which hosts CEA, 80k, Longview, EA Funds, Giving What We Can, Asterisk magazine, Centre for Governance of AI, Forethought Foundation, Non-Trivial, and BlueDot Impact. Posts on the castlealso describe itas an EVF project, although it's not listed on the website. Rethink Priorities has aprogramspecifically to provide sponsorship to groups that need it. LessWrong/Lightcone is hosted by CFAR, and have sponsored at least one project themselves (source: me. It was my project).Fiscal sponsorship has a number of advantages. It gets you the privileges of being a registered non-profit (501c3 in the US) without the time-consuming and expensive paperwork. That's a big deal if the project is small, time-limited (like mine was) or is an experiment you might abandon if you don't see results in four months. Even for large projects/~orgs, sharing a formal legal structure makes it easier to share resources like HR departments and accountants. In the short term, forming a legally independent organization seems like a lot of money and effort for the privilege of doing more paperwork.The downsides of fiscal sponsorship…are numerous, and grow as the projects involved do.The public is rightly suspicious about projects that share a legal entity claiming to be independent, so bad PR for one risks splash damage for all. The government is very confident in its belief that you are the same legal entity, so legal risks are shared almost equally (iamnotalawyer). So sharing a legal structure automatically shares risk. That may be fixable, but the fix comes at its own cost.The easiest thing to do is just take fewer risks. Don't buy retreat centers that could be described as lavish. And absolutely, 100%, don't voluntarily share any information about your interactions with FTX, especially if the benefits to doing so are intangible. So some amount of value is lost because the risk was worth it for an individual or small org, but not to the collective.[it is killing me that I couldn't follow therule of threewith that list, but it turns out there aren't that many legible, publicly visible examples of decisions to not share information]And then there are the coordination costs. Even if everyone in the legal org is okay with a particular risk, you now have an obligation to check with them.The answer is often "it's complicated", which leads to negotiations eating a lot of attention over things no one cares that much about. Even if there is some action everyone is comfortable with, you may not find it because it's too much work to negotiate between that many people (if you know anyone who lived in a group house during covid: remember how fun it was to negotiate safety rules between 6 people with different value functions and risk tolerances?).Chilling effectsA long, complicated (but nonetheless simplified)exampleThe original version of this story was one paragraph long. It went something like: A leader at an EVF-sponsored project wanted to share some thoughts on a controversial issue, informally but in public.The comments were not riskless, but this person would happily have taken the risk if it affected only themselves or their organization. Someone at EVF said no. Boo, grrr.I sent that versi...

]]>
Elizabeth https://forum.effectivealtruism.org/posts/GRYFJGnye2gCxCTG4/ea-orgs-legal-structure-inhibits-risk-taking-and-information Sun, 05 Nov 2023 08:58:14 +0000 EA - EA orgs' legal structure inhibits risk taking and information sharing on the margin by Elizabeth Elizabeth https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:28 no full 1
LcS8P84JMmfd9Gudu EA - The EA Animal Welfare Fund is looking for guest fund managers by Neil Dullaghan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Animal Welfare Fund is looking for guest fund managers, published by Neil Dullaghan on November 5, 2023 on The Effective Altruism Forum.You can apply by filling outthis formby the 26th of November.EA Animal Welfare Fund(AWF) is currently looking to hire additional guest fund managers.AWF is one of the largest funding sources for small- and medium-sized animal welfare projects. In the last twelve months, the Animal Welfare Fund made more than $6.5 million worth of grants to high-impact organizations and talented individuals.To allocate this funding effectively, we are looking for guest fund managers with careful judgment in the relevant subject areas and an interest in effective grantmaking.As a guest fund manager, you will evaluate grant applications, proactively help new projects get off the ground ('active grantmaking'), publish grant reports, and contribute to the fund's strategy.Terms of employment:We are offering paid part-time, and volunteer positions.Compensation for part-time contractors is $60 per hour.You will be hired for a period of 3-6 months after which you may have an opportunity to join the team as a permanent fund manager.If you are interested in the guest fund manager role - please applyhere.If you know of anyone who might be a good fit for this role, please forward this document to them and encourage them to apply. If you have any questions, do not hesitate to reach out to Karolina viakarolina@effectivealtruismfunds.orgApplications are open now until the 26th of November.We look forward to hearing from you!About the roleIn this role, you will have a tangible impact by helping to direct millions of dollars to high-impact funding opportunities each year, all the while building your grantmaking skills and expanding your knowledge about animal welfare.By communicating your reasoning to the community, you will indirectly contribute to the culture and epistemics of the EA and effective animal advocacy (EAA) community. By providing feedback, you will help existing projects improve. In the longer term, your work will help the EA community develop the capacity to allocate a potentially much greater volume of funding each year. While doing so, you will interact with other intellectually curious, experienced, and welcoming fund managers, all of whom share a profound drive to make the biggest difference they can.As a guest fund manager, your primary goal will be to increase the fund's capacity to source and investigate more grant applications. Your responsibilities will include:Investigating grants assigned to you, and assessing other fund managers' grant recommendationsVoting on grant recommendations (each fund manager has a vote)Sourcing high-quality applications based on your ideas through your network ('active grantmaking')Communicating your thinking to the community in writing, e.g., feedback to grantees, grant reports, EA Forum posts and commentsProviding input on the overall strategic direction of the fundAbout youWe are interested in experienced grantmakers, researchers and people with experience in direct work as well as junior applicants who are looking to build experience in grantmaking.You might be a good fit for guest fund manager if:You are familiar with work on farmed animal welfare, wild animal welfare and animal advocacy, and have detailed, independent opinions on what constitutes good work in those areas and how you would like these areas and communities to develop over the coming years.You have strong analytic skills and experience assessing others' work.You have a strong network in these areas.You can communicate your reasoning articulately,transparently, and cordially, and you can convey complex ideas to a lay audience in simple language.You are organized and reliable.You act with integrity and...

]]>
Neil_Dullaghan https://forum.effectivealtruism.org/posts/LcS8P84JMmfd9Gudu/the-ea-animal-welfare-fund-is-looking-for-guest-fund Sun, 05 Nov 2023 01:23:07 +0000 EA - The EA Animal Welfare Fund is looking for guest fund managers by Neil Dullaghan Neil_Dullaghan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:49 no full 2
NAcN98bACuwcnB32H EA - The Navigation Fund launched + is hiring a program officer to lead the distribution of $20M annually for AI safety! Full-time, fully remote, pay starts at $200k by vincentweisser Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Navigation Fund launched + is hiring a program officer to lead the distribution of $20M annually for AI safety! Full-time, fully remote, pay starts at $200k, published by vincentweisser on November 4, 2023 on The Effective Altruism Forum.New foundation, funded by billionaireJed McCaleb,led byDavid Coman-Hidy,Andrea Gunn,Seemay Chou,Randy O'ReillyQuotes from their websitehttps://www.navigation.org/: "CausesThe Navigation Fund focuses on a few key areas where additional resources will provide outsized impact.Safe AIAs the use cases of artificial intelligence expand, developing frameworks and systems to ensure that AI benefits humankind becomes crucial. We will explore opportunities to promote altruistic and beneficial outcomes from AI.Farm Animal WelfareFactory farming creates both tremendous suffering and significant environmental degradation. We support efforts to reduce animal suffering, reenvision our relationship with animals, and diminish the killing of animals.Criminal Justice ReformReforming the U.S. criminal justice system can help create a more just, equitable, and safe society. We use a portfolio approach to bolster a wide range of initiatives to address the challenges and improve outcomes for people, families, and communities.Open ScienceThe Open Science movement is crucial for expediting discoveries and guaranteeing public access to knowledge. We stand with those forging new tools, championing novel approaches, and altering traditional practices within scientific research and publishing to make information more accessible to everyone.Climate ChangeClimate change poses a significant threat to humanity, and requires new and bold thinking to help address it. We invest in high-leverage, under-resourced initiatives that have the potential for immediate and long-term favorable impact on climate outcomes.Open roleshttps://www.navigation.org/careers#roles)Director of OperationsGrants and Operations CoordinatorProgram Officer, ClimateProgram Officer, Criminal Justice ReformProgram Officer, Open ScienceProgram Officer, Safe AI"Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
vincentweisser https://forum.effectivealtruism.org/posts/NAcN98bACuwcnB32H/the-navigation-fund-launched-is-hiring-a-program-officer-to Sat, 04 Nov 2023 11:50:43 +0000 EA - The Navigation Fund launched + is hiring a program officer to lead the distribution of $20M annually for AI safety! Full-time, fully remote, pay starts at $200k by vincentweisser vincentweisser https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:31 no full 8
hFPbe2ZwmB9athsXT EA - Clean Water - the incredible 30% mortality reducer we can't explain by NickLaing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Clean Water - the incredible 30% mortality reducer we can't explain, published by NickLaing on November 4, 2023 on The Effective Altruism Forum.TLDR: The best research we have shows that clean water may provide a 30% mortality reduction to children under 5. This might be the biggest mortality reduction of any single global health intervention, yet we don't fully understand why it works.Here I share my exploration of a life-saving intervention that we don't fully understand,but really should.I may err a little on the side of artistic license - so if you find inaccuracies or I'm a bit loose please forgive me, correct me or even feel free to just tear me to shreds in the comments ;).Part 1: Givewell's Seemingly absurd numbersI first became curious after a glance at what seemed like a dubious GiveWell funded project. A$450,000 dollar scoping grant for water chlorination in Rwanda?This didn't make intuitive sense to me.In Sub-saharan Africa diarrhoea causes 5-10% of child mortality. While significant, the diarrhea problem continues to improve with better access to medical care, improving ORS and Zinc coverage, and antibiotics for more severe cases. Over the last 5 years, our own Ugandan health centers have encountered surprisingly few very sick kids with diarrhoea and I've hardly seen diarrhoea kill a child, as opposed to Malaria and Pneumonia which tragically kill kids all the time. It seemed to me that even if clean water hugely reduced diarrhoea mortality, the intervention would still likely be an expensive way to achieve 1 or 2 percent mortality reduction,So with my skeptic hat on, I clicked theGiveWell spreadsheetand my incredulity only grew. GiveWell estimated an upper-bound mortality reduction of an almighty17%for the Rwandan chlorination program! At first that made no sense, but I did expect GiveWell would likely be lesswrong than me.The Global burden of disease estimates that Diarrhoea makes up only4.9%of total deaths in Rwanda. How could an intervention which targets diarrhoea reduce mortality by over three times the total diarrhoea mortality? Even if the clean water cured all diarrhoea, that wouldn't come close to GiveWell's mortality reduction estimate.Something fishy was afoot, but I quickly found some answers, through a nobel prize winner's study which was partially funded by you guessed it……..GiveWellPart 2: A Nobel Prize winner's innovative mathMichael Kremer won a Nobel prize along with two J-PAL co-founders for their wonderful work pioneering randomised controlled trials to assess development interventions. What better person to try their hand at estimating the mortality benefit of clean water than a father of the RCT movement?But connecting clean water and mortality is tricky, because to date no-one has actually asked whether clean water can reduce child mortality. Instead, a number of RCT asked the more obvious question,does clean water reduce diarrhoea.The answer obviously yes.But Kremer and co. found a clever way around this. They sifted through all studies which looked at the relationship between clean water and diarrhoea and identified 12 studies[1]that also gathered bits and pieces of mortality data. They then performed a meta-analysis, pooling that mortality data together to see whether clean water save kids' lives.The result - they estimated that clean water caused anincredible 30% mortality reductionin kids under 5. If this is even in the ballpark of correct, clean water could could prevent one in three childhood deaths in much of sub-saharan Africa. If Africa could chlorinate and filter all drinking water, we could save perhaps1 million livesevery yearin sub-saharan Africa alone.Mosquito nets might bow to their new king.To be as crystal clear as the water, this is not just a 30% reduction in diarrheal death...

]]>
NickLaing https://forum.effectivealtruism.org/posts/hFPbe2ZwmB9athsXT/clean-water-the-incredible-30-mortality-reducer-we-can-t Sat, 04 Nov 2023 07:05:19 +0000 EA - Clean Water - the incredible 30% mortality reducer we can't explain by NickLaing NickLaing https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:06 no full 9
RJgGP6tEeuvtbapjP EA - Curious about EAGxVirtual? Ask the team anything! by OllieBase Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Curious about EAGxVirtual? Ask the team anything!, published by OllieBase on November 4, 2023 on The Effective Altruism Forum.EAGxVirtual 2023, a free online effective altruism conference (November 17-19),is just two weeks away!The event will bring together EAs from around the world, and will facilitate discussions about how we can work on pressing problems, connections between attendees and diverse fields, and more.Applyhereby 16 November.We've recently published some more details about the event andwe want to invite you to ask us about what to expect from the event.Please post your questions as comments bythe end of the day on Sunday (5 November)and we'll aim to respond by the end of the day on Monday (6 November).Some question prompts:Unsure about applying?We encourage everyone with a genuine interest in EA toapply, and we're accepting a vast majority of people. Let us know what you're uncertain about with the application process.Undecided whether to go?Tell us why and we can help you. We'll probably be biased but we'll try our best to present considerations on both sides - it won't be a good use of time for everyone!Unsure how to prepare?You can find some tips onthe EA Global topic pagebut we're happy to help with your specific case if you need more tips!Uncertain how to set up a group activity (a screening, a meet-up etc.) for the event?Share your thoughts below and we can help you plan!We look forward to hearing from you!Sasha, Dion (EAGxVirtual / EA Anywhere) and Ollie (CEA)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
OllieBase https://forum.effectivealtruism.org/posts/RJgGP6tEeuvtbapjP/curious-about-eagxvirtual-ask-the-team-anything Sat, 04 Nov 2023 00:35:12 +0000 EA - Curious about EAGxVirtual? Ask the team anything! by OllieBase OllieBase https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:48 no full 11
EAmfYSBaJsMzHY2cW EA - AI Fables Writing Contest Winners! by Daystar Eld Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Fables Writing Contest Winners!, published by Daystar Eld on November 6, 2023 on The Effective Altruism Forum.Hello everyone! The submissions have all been read, and it's time to announce the winners of therecent AI Fables Writing Contest!Depending on how you count things, we had between 33-40 submissions over the course of about two months, which was a happy surprise. More than just the count, we also got submissions from a range of authors, from people new to writing fiction to those who do so regularly, new to writing about AI or very familiar with it, and every mix of both.The writing retreat in September was also quite productive, with about 21 more short stories and scripts written by the participants, many of which will hopefully be publicly available at some point. We plan to work on creating an anthology of some selected stories from it, and with permission, others we've been impressed by.With all that said, onto the contest winners!Prize Winners$1,500 First Place:The King and the Golemby Richard NgoThis story explores the notion of "trust," whether in people, tools, or beliefs, and how fundamentally difficult it is to make "trustworthiness" something we can feel justified about or verify. It also subtly highlights the way in which, at the end of the day, there are also consequences to not trusting anything at all.$1,000 Second Place:The Oracle and the Agentby Alexander WalesWe really appreciated how this story showed the way better-than-human decision making can be so easy to defer to, and how despite those decisions individually still being reasonable and net-positive, small mistakes and inconsistencies in policy can lead to calamitous ends.(This story is not yet publicly available, but it will be linked to if it becomes so)$500 Third Place:The Tale of the Lion and the Boy+Mirror, Mirrorby dr_sThese two roughly tied for third place, which made it convenient that they were written by the same person! The first is an eloquent analogy for the gap between intelligence capabilities and illusion of transparency by reexamining traditional human-raised-by-animals tales. The second was a fun twist on a classic via exploration of interpretability errors. As a bonus, we particularly enjoyed the way both were new takes on old and identifiable fables.Honorable MentionsThere were a lot more stories that I'd like to mention here for being either close to a winner, or just presenting things in an interesting way. I've decided to pick just three of them:The Lion, The Dove, The Monkey, and the Beesby Jérémy AndréolettiA fun poem about the way various strategies can scale in exponentially different ways despite ineffectual first appearances.A Tale of the Four Ns (Neural Networks, Nature, and Nurture)by Anoushka SharpAn illustrated, rhyming fable about Artificial Intelligence that demonstrates a number of the fundamental parts of AI, as well as the difficulties inherent to interpretability.This is What Kills Usby Jamie Wahls and Arthur FrostA series of short, witty scripts about a number of ways AI in the near future might go from charming and useful tools to accidentally ending the world. Not publicly available yet, thoughthey have since reached out to Rational Animations to turn them into videos!There are many more stories we enjoyed, from the amusingThe Curious Incident Aboard the Calibriusby Ron Fein, to the creepyLirby Arjun Singh, and we'd like to thank everyone who participated. We hope everyone continues to write and engage with complex, meaningful ideas in their fiction.To everyone else, we hope you enjoyed reading, and would love to hear about any new stories you might write that fits these themes.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Daystar Eld https://forum.effectivealtruism.org/posts/EAmfYSBaJsMzHY2cW/ai-fables-writing-contest-winners Mon, 06 Nov 2023 23:09:21 +0000 EA - AI Fables Writing Contest Winners! by Daystar Eld Daystar Eld https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:47 no full 1
zihL7a4xbTnCmuL2L EA - Towards non-meat diets for domesticated dogs by Seth Ariel Green Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards non-meat diets for domesticated dogs, published by Seth Ariel Green on November 6, 2023 on The Effective Altruism Forum.This essay argues that getting domesticated dogs to eat vegan orostrovegandiets is a neglected, tractable, and important way to advancejustice for animals. First, I estimate that dog diets contribute to the slaughter of 2.89 billion animals on factory farms annually, the vast majority of which are chickens. Second, I make the case that an (ostro)vegan diet is, as far as we know, healthy for dogs. Third, I conclude with some suggestions for how we can make this happen.How many animals are slaughtered on factory farms to feed domestic dogs?Overall, I estimate that dog diets result in the slaughter of 2.824 billion chickens, 56.79 million pigs, and 9.52 million cows.On a per-dog basis, switching to a non-meat diet will save about 20 chickens, 0.41 pigs, and 0.07 cows per year.Here's aGoogle Sheet of my calculations. The remainder of this section explains how I got there.How many domesticated dogs are there?700 million dogs live on Earth, about471 million of whom are domesticated.How many of those dogs eat food that comes from factory farms, and how much?Dogs and dog diets are heterogeneous. A street dog whoscavengesor getsfed at a templemight plausibly contribute very little or nothing to factory farming. Likewise, a farm dog who eats table scraps or an apartment dog who eats "human-grade food" is going to have a very different dietary footprint.For our purposes, I think we want to know how many dogs eat mass-market food that's packaged and sold as dog food, which we can assume almost entirely comes from industrial farms. For a ballpark estimate, I tally all domesticated dogs in the United States, Canada, Australia and Europe, and assume that of the food they eat is meat that comes from Concentrated Animal Feeding Operations. ( Around99% of all meat in the US comes from factory farms, but dry food for dogs is typically a mix of grains, vegetables and meat.)Apparently there are89.7 million pet dogsin the US,7.9 million in Canada,6.4 million in Australia, and104.3 million in Europeso I'm estimating about 208 million dogs getting of their diets from factory farms.How much does the average dog eat?A dog food companyrecommendsthat a medium sized dog eat between 1.75 and 2.33 cups (.875 to 1.165 lbs) of food per day.Is the average dog a medium sized dog? I'm not sure. The most popular breeds in America, aside from the French bulldog,tend to be big. But as far as I can tell, that measures the sale of pure breeds, and apparentlyjust over half of American dog owners have mutts.Here's a totally unscientific estimate: let's say that the average domesticated dog weighs about 35 lbs,[1]and consumes 1 lb of food, and .67 pounds of meat, per day.How much meat is that in total?of a pound of meat per day is about 244.5 lbs per dog per year, so 208 million dogs eating that much is 50,638,640,000 pounds of meat per year.[2]What animals produce these 50.64 billion pounds of flesh?Dog food is a mess of flesh, byproducts, and parts that otherwise wouldn't be consumed. But let's roughly assume that all dog food meat comes from chickens, pigs and cows/buffalo, and that the proportions coming from the three categories are the same as those that go into human food.Our World in Dataestimates thatamong those categories, about 41% of every pound of meat comes from chickens, about 36% comes from pigs, and about 23% comes from beef.That gives us about 20.75 billion pounds of chicken, 18.2 billion pounds of pig meat, and 11.6 billion pounds of beef.How many animals are killed to feed domesticated dogs?OWIDestimates that for animals slaughtered in America, theaverage chickenproduces 4.9 lbs of meat; theaverage pig...

]]>
Seth Ariel Green https://forum.effectivealtruism.org/posts/zihL7a4xbTnCmuL2L/towards-non-meat-diets-for-domesticated-dogs Mon, 06 Nov 2023 20:09:10 +0000 EA - Towards non-meat diets for domesticated dogs by Seth Ariel Green Seth Ariel Green https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:47 no full 2
Kt4wLHXLh8PBAyDbe EA - Ending Poverty: Today or Forever? Potential Error in GiveDirectly's Rational Animations Video by Alexander de Vries Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ending Poverty: Today or Forever? Potential Error in GiveDirectly's Rational Animations Video, published by Alexander de Vries on November 6, 2023 on The Effective Altruism Forum.Epistemic status: as an Economics student who reads a fair amount of dev econ, this might be one of the only things in the world I'm actually ~qualified for. 85% confident that the main claim of this post ("GiveDirectly has presented no strong evidence for their claim that the costs of ending extreme poverty will rapidly & significantly decrease") is true.Disclaimer: I GiveDirectly and think they're doing fantastic work!Recently, GiveDirectly collaborated with Rational Animations to make this YouTube video:The aim of the video is in its title: showing that extreme poverty can be eradicated by directly giving money to the world's poorest, through organizations like GiveDirectly.I think that the evidence presented in the video definitively shows that giving all the extremely poor people in the world money for a year can end extreme poverty for that year. This is true almost by definition, but I'm genuinely glad that a bunch of researchers decided to check anyway. There's always a chance of unforeseen second order effects, like maybe all the people getting the money would just spend it all on drinks and alcohol (almost certainly not) or it would cause huge inflation (nope, though really you could guess that one with Econ 101).Our friends estimate the cost at about $258 billion dollars to end extreme poverty for a year, and point out that this is a small portion of yearly philanthropic spending or rich government's budgets. They're right about the rich countries' budgets (no longer sure about how large a part this is of philanthropic spending). It would be good to just give all the extremely poor people some money every year so they would no longer be extremely poor.[1]Where the video loses me, though, is when they make a very strong claim with huge implications based on minimal evidence. This starts at10:39in the video, but I've transcribed it for you here:We also know that cash transfers improve recipients' lives immensely. But what would be the impact on recipients' neighbors and the economy as a whole? A 2022 study led by Dennis Egger found that every $1,000 of cash given actually has a total economic effect of $2,500, thanks to "spillover" effects growing the local economy, as recipients spent more money at their neighbors' businesses, those businesses spent money, and so forth. Not only did recipients' incomes increase, their neighbors' incomes also increased by 18 months later. Even neighboring villages without any recipients saw increased incomes, which could have been from a 'spillover' effect as well. These effects mean our cash transfers will go further, and we may find that we've reached our goal of ending extreme poverty sooner - and for less money - than we would otherwise expect.The research suggests that the $200 to $300 billion figure we'd need to give for the first year will decrease every year thereafter[animation of a stack of dollar bills, halved each year]as the economies of entire regions and countries grow and lift their poorest residents out of extreme poverty.[emphasis mine]Okay. There is an absolutely massive difference in cost between "$258 billion the first year, progressively less each year, maybe after X years no cost at all" and "$258 billion every year, eternally". One of these is a cost the rich world may be willing to bear, out of solidarity and self-interest and even just the wish to be on the right side of history. The other is just a pipe dream for teary-eyed optimists like us.If a lot is riding on the answer to an empirical question, it would be wise to reason well about it before making strong claims one way or the other. But this is j...

]]>
Alexander de Vries https://forum.effectivealtruism.org/posts/Kt4wLHXLh8PBAyDbe/ending-poverty-today-or-forever-potential-error-in Mon, 06 Nov 2023 14:51:15 +0000 EA - Ending Poverty: Today or Forever? Potential Error in GiveDirectly's Rational Animations Video by Alexander de Vries Alexander de Vries https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:59 no full 6
NrqGyXzvwB2Gqu6XW EA - State of the East and Southeast Asian EAcosystem by Elmerei Cuevas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: State of the East and Southeast Asian EAcosystem, published by Elmerei Cuevas on November 6, 2023 on The Effective Altruism Forum.This write-up is a compilation of organisations and projects aligned / adjacent to the effective altruism movement in East Asia and Southeast Asia and was written around the EAGxPhilippines conference. Some organisations, projects, and contributors also prefer to not be public and hence removed from this write-up. While this is not an exhaustive list of projects and organisations per country in the region, it is a good baseline of the progress of the effective altruism movement for this side of the globe.Feel free to click the links to the organisations/projects themselves to dive deeper into their works.Contributors: Saad Siddiqui; Anthony Lau; Anthony Obeyesekere; Masayuki "Moon" Nagai; Yi-Yang Chua; Elmerei Cuevas, Alethea Faye Cedaña, Jaynell Ehren Chang, Brian Tan, Nastassja "Tanya" Quijano; Dion Tan, Jia Yang Li; Saeyoung Kim; Nguyen Tran; Alvin LauForum Post Graphic credits to Jaynell Ehren ChangEAGxPhotos credits to CS CreativesMainland ChinaChina Global Priorities GroupAims to foster a community of ambitious, careful and committed thinkers and builders focused on effectively tackling some of the world's most pressing problems through a focus on China's role in the world.We currently do this by facilitating action-guiding discussions, identifying talent and community infrastructure gaps and developing new programmes to support impactful China-focused work.Hong KongCity Group:EAHKStarted in 2015 based in University of Hong Kong.5 core organisers, of which 2 receive EAIF funding from 2023 to work part-time (Anthony and Kenneth)Organises theHorizon Fellowship Program(In-person EA introductory program). There are 107 fellows since 2020.Around 200+ on Slack channelBi-lingual social mediaaccountwith 350 followersBi - weekly socials with 8 to 20 attendees and around 8 speakers meetup a year.Registered as a legal entity (limited company) in July 2023 in order to register as a charity in Hong Kong. Aims of facilitate effective giving.Opportunities:High concentration of family office/ corporate funders/ philanthropic organisations. To explore fundraising and effective giving potential.Influx of mainland/ international university in coming years due to recent policy change (40% non-local, 60% local). A diverse talent pool.Looking into translating EA materials to local language (Chinese) to reach out to more locals.University Group:EAHKUA new team formed in June 2023. Running independently from EAHK.Organises bi-weekly dinner to connect and introduce EA to students on campusPlanned to run multipleGiving Gamesfrom Nov 2023 onwardsAims to run an introductory program within 2023-2024 academic yearAcademia (AI):A couple of researchers and professors interested in AI x-risk and alignment.AI&Humanity-Lab@University of Hong KongNate Sharadin(CAIS fellow, normative alignment and evaluations),Frank Hong(CAIS fellow, AI extreme risks),Brian Wong(AI x-risk and China-US)2023 Sep launchedMA in AI, Ethics and Societywith AI safety, security and governance. Around 90 students in the course.Organises public seminars, seeevents pageThe first annual AI Impacts workshop in March 2024, focused on evaluationsHong Kong Global Catastrophic Risk Center at Lingnan UniversitySee link forResearch focus and outputsrelated to AI safety and governanceHong Kong University of Science and Technology UniversityDr. Fu Jieis a visiting scholar working on safe and scalable system-2 LLM.Research Centre for Sustainable HK at City University of Hong KongPublished a report on theEthics and Governance of AI in HKAcademia (Psychology):Dr. Gilad FeldmanPromote 'Doing more good, doing good better' through some of his teachi...

]]>
Elmerei Cuevas https://forum.effectivealtruism.org/posts/NrqGyXzvwB2Gqu6XW/state-of-the-east-and-southeast-asian-eacosystem Mon, 06 Nov 2023 10:25:35 +0000 EA - State of the East and Southeast Asian EAcosystem by Elmerei Cuevas Elmerei Cuevas https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:23 no full 8
szRYr5phF6KWBJwyW EA - What's the justification for EA being so elitist? by Stan Pinsent Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's the justification for EA being so elitist?, published by Stan Pinsent on November 6, 2023 on The Effective Altruism Forum.EA loves genius.EA university outreach focuses on elite colleges.EA orgs oftenpay above-market-rate salaries(1,2).Outreach to high-schoolers (Atlas Fellowship)provided $50k scholarships, which could have instead been spent on reaching a broader, less elite, group of young people.I understand that all else equal, you probably want smarter people working for you. When it comes to generating new ideas and changing the world, sometimes quantity cannot replace quality.But what is the justification for beingsoelitist that we significantly reduce the number of people on the team? Why would we filter for the top 1% instead of the top 10%? Or, more accurately, the top 0.1% instead of the top 1%?I'd appreciate any posts, academic papers or case studies that support the argument that EA should beextraelitist.Full disclosure: I'm trying to steelman the case for elitism so that I can critique it (unless the evidence changes my mind!).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Stan Pinsent https://forum.effectivealtruism.org/posts/szRYr5phF6KWBJwyW/what-s-the-justification-for-ea-being-so-elitist Mon, 06 Nov 2023 06:51:45 +0000 EA - What's the justification for EA being so elitist? by Stan Pinsent Stan Pinsent https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:24 no full 10
Xx9oWdJDfisAQwDtT EA - Varieties of minimalist moral views: Against absurd acts by Teo Ajantaival Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Varieties of minimalist moral views: Against absurd acts, published by Teo Ajantaival on November 7, 2023 on The Effective Altruism Forum.(A standalone part ofMinimalist Axiologies: Alternatives to 'Good Minus Bad' Views of Value.)1. IntroductionWhat are minimalist views?Minimalist views of value (axiologies) are evaluative views that define betterness solely in terms of the absence or reduction of independent bads. For instance, they might roughly say, "the less suffering, violence, and violation, the better". They reject the idea of weighing independent goods against these bads, as they deny that independent goods exist in the first place.Minimalist moral views are views about how to act and be that include a minimalist view of value, instead of anoffsetting ('good minus bad') view of value. They reject the concept of independently positive moral value, such as positive virtue or pleasure that could independently counterbalance bads.[1]Minimalist views are sometimes alleged - at least in their purely consequentialist versions - to recommend absurd acts in practice, such as murdering individuals to prevent their suffering, or opposing life-extending interventions lest we prolong suffering. My aim in this essay is to broadly outline the various reasons why the most plausible and well-construed versions of minimalist moral views - including their purely consequentialist versions - do not recommend such acts.Sequence recapFor context, below is a chronological recap of the present series on minimalist views so far.1. "Positive roles of life and experience in suffering-focused ethics":Even if we assume a purely suffering-focused view, it's wise to recognize the highly positive and often necessary roles that various other things may have for the overall goal of reducing suffering.These include the positive roles of autonomy, cooperation, nonviolence, as well as our personal wellbeing and valuable skills and experiences.Suffering-focused moral views may value these things for different reasons, but not necessarily any less, than do other moral views.2. "Minimalist axiologies and positive lives":Minimalist axiologies define goodness in entirely relational or 'instrumental' terms, namely in terms of the minimization of bads such as suffering.These views avoid many problems in population ethics, yet the minimalist notion of (relationally) positive value is entirely excluded by the standard, restrictive assumption of treating lives as isolated value-containers.Minimalist views become more intuitive when we adopt a relational view of the overall value of individual lives, that is, when we don't track only the causally isolated "contents" of these lives, but also their (often far more significant) causal roles.3. "Peacefulness, nonviolence, and experientialist minimalism":For purely experience-focused and consequentialist versions of minimalist views, an ideal world would be any perfectly peaceful world, including an empty world.When it comes totheoretical implications about the cessation and replacement of worlds, one can reasonably argue that offsetting ('good minus bad') views haveworse implications than do minimalist views.Zooming out from unrealistic thought experiments, it's crucial to be mindful of thegap between theory and practice, of thepitfalls of misconceived consequentialism, and of how minimalist consequentialists havestrongpracticalreasons to pursue a nonviolent approach and to cooperate with people who hold different values.4. "Minimalist extended very repugnant conclusions are the least repugnant":It has been argued that certain "repugnant conclusions" are an inevitable feature of any plausible axiology.Yet based on a 'side-by-side' comparison of different views, it appears that offsetting viewsshare all the most "repugnant" ...

]]>
Teo Ajantaival https://forum.effectivealtruism.org/posts/Xx9oWdJDfisAQwDtT/varieties-of-minimalist-moral-views-against-absurd-acts Tue, 07 Nov 2023 19:47:09 +0000 EA - Varieties of minimalist moral views: Against absurd acts by Teo Ajantaival Teo Ajantaival https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 27:32 no full 3
LM2JnTHygKbn7eKLz EA - AI Alignment Research Engineer Accelerator (ARENA): call for applicants by TheMcDouglas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Alignment Research Engineer Accelerator (ARENA): call for applicants, published by TheMcDouglas on November 7, 2023 on The Effective Altruism Forum.TL;DRApply here for the third iteration of ARENA (Jan 8th - Feb 2nd)!IntroductionWe are excited to announce the third iteration of ARENA (Alignment Research Engineer Accelerator), a 4-week ML bootcamp with a focus on AI safety. Our mission is to prepare participants for full-time careers as research engineers in AI safety, e.g. at leading organizations or as independent researchers.The program will run from January 8th - February 2nd 2024[1], and will be held at the offices of the London Initiative for Safe AI. These offices are also being used by several safety orgs (BlueDot, Apollo, Leap Labs), as well as the current London MATS cohort, and several independent researchers. We expect this to bring several benefits, e.g. facilitating productive discussions about AI safety & different agendas, and allowing participants to form a better picture of what working on AI safetycan look like in practice.ARENA offers a unique opportunity for those interested in AI safety to learn valuable technical skills, work in their own projects, and make open-source contributions to AI safety-related libraries. The program is comparable to MLAB or WMLB, but extends over a longer period to facilitate deeper dives into the content, and more open-ended project work with supervision.For more information, see our website.Outline of ContentThe 4-week program will be structured as follows:Chapter 0 - FundamentalsBefore getting into more advanced topics, we first cover the basics of deep learning, including basic machine learning terminology, what neural networks are, and how to train them. We will also cover some subjects we expect to be useful going forwards, e.g. using GPT-3 and 4 to streamline your learning, good coding practices, and version control.Note - participants can optionally not attend the program during this week, and instead join us at the start of Chapter 1, if they'd prefer this optionand if we're confident that they are already comfortable with the material in this chapter.Topics include:PyTorch basicsCNNs, Residual Neural NetworksOptimization (SGD, Adam, etc)BackpropagationHyperparameter search with Weights and BiasesGANs & VAEsDuration: 5 daysChapter 1 - Transformers & InterpretabilityIn this chapter, you will learn all about transformers, and build and train your own. You'll also study LLM interpretability, a field which has been advanced by Anthropic's Transformer Circuits sequence, and open-source work by Neel Nanda. This chapter will also branch into areas more accurately classed as "model internals" than interpretability, e.g. recent work on steering vectors.Topics include:GPT models (building your own GPT-2)Training and sampling from transformersTransformerLensIn-context Learning and Induction HeadsIndirect Object IdentificationSuperpositionSteering VectorsDuration: 5 daysChapter 2 - Reinforcement LearningIn this chapter, you will learn about some of the fundamentals of RL, and work with OpenAI's Gym environment to run their own experiments.Topics include:Fundamentals of RLVanilla Policy GradientProximal Policy GradientRLHF (& finetuning LLMs with RLHF)Gym & Gymnasium environmentsDuration: 5 daysChapter 3 - Paper ReplicationsWe will conclude this program with paper replications, where participants will get guidance and mentorship while they replicate a paper containing material relevant to this course. This should draw on much of the skills and knowledge participants will have accumulated over the last 3 weeks.Duration: 5 daysBelow is a diagram of the curriculum as a whole, and the dependencies between sections. Note that this may change slightly in the lead-up to the program.Here is som...

]]>
TheMcDouglas https://forum.effectivealtruism.org/posts/LM2JnTHygKbn7eKLz/ai-alignment-research-engineer-accelerator-arena-call-for-1 Tue, 07 Nov 2023 19:41:24 +0000 EA - AI Alignment Research Engineer Accelerator (ARENA): call for applicants by TheMcDouglas TheMcDouglas https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:53 no full 4
TanBoThzLsDB8bvYg EA - Numerical Breakdown of 47 1-on-1s as an EAG First-Timer (Going All Out Strategy) by Harry Luk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Numerical Breakdown of 47 1-on-1s as an EAG First-Timer (Going All Out Strategy), published by Harry Luk on November 7, 2023 on The Effective Altruism Forum.tl;drJust attended my first ever EA Global conference (EAG Boston last week) and I have nothing but positive things to say.In total, I had about 47 one-on-one conversations depending on how you count the informal 1:1s (43 scheduled via SwapCard, while the other noteworthy conversations happened at meetups, the organization fair, office hours and unofficial satellite events).I came into the conference with an open mind, wanting to talk to others who are smarter than me, more experienced than me, and experts in their own domain. I invited redteaming of our nonprofit StakeOut.AI's mission/TOC, and gathered both positive and negative feedback throughout EAG. I came out of the conference with new connections, a refined strategy for our nonprofit startup going forward and lots of resources.I am so grateful for everyone that met with me (as I'm a small potato who at many times felt out of his depth during EAG, and likely one of the most junior EAs attending). I thank all the organizers, volunteers, helpers, speakers and attendees who made the event a huge success.The post below goes over The Preparation, the Statistics and Breakdown, why consider going all out at an EAG, 12 Practical Tips for Doing 30+ 1:1s and potential future improvements.The PreparationTo be honest, as a first-time attendee, I really didn't know what to expect nor how to prepare for the conference.I had heard good things and was recommended to go by fellow EAs, but I had my reservations.Luckily, an email titled "Join us for an EAG first-timers online workshop!" by the EA Global Team came to the rescue.Long story short, I highly recommend anyone new to EAG to attend the online workshop prior to the conference if you want to make your first EAG a success.Few highlights I will note here:Watchthis presentation from 2022's San Francisco EAG that outlines how you can get the most out of the eventTake your time and fill out thisEA Conference: Planning Worksheet for a step-by-step guide on conference planning, including setting your EAG goals and expectationsAlso fill out the career planning worksheet (if relevant): EA Conference: Career PlanRequesting 1:1s Pre-conferenceI was quite hesitant at first about introducing myself on SwapCard and trying to schedule 1:1s. This all changed after watchingthe presentation and attending the "Join us for an EAG first-timers online workshop!" virtual event.Something that was repeated over and over again fromthis presentation, the online workshop, and talking to others is the value of the 1:1s.People told me most sessions will be recorded and hence can be watched later, but having the 1:1s is where the true value is at EAG. After hearing it from so many people, I made 1:1s a core part of my conference planning and did not regret it.As I'm writing this after the conference, I can see why 1:1s are said to be the true value of EAG. I estimate that 80% (maybe even closer to 90%, I would know better after I sort through the notes) of the 1:1 conversations I had were beneficial and had a positive impact on either me or the direction of our nonprofit, StakeOut.AI.How Many 1:1s?In terms of how many 1:1s, here is the range I gathered from different sources:Attendees will typically have four to ten 1:1sGetting to 20 1:1s is a great numberHaving 30 1:1s is amazing but very tiringSomeone reached 35 1:1s once, and that was insaneSince I wanted to maximize my EAG experience, I set the goal of 30 and started reaching out via SwapCard one week before the conference.Reach Out EarlyThe main reason for starting early is because everyone is busy at the conferences, and everyone is trying to optimize their sch...

]]>
Harry Luk https://forum.effectivealtruism.org/posts/TanBoThzLsDB8bvYg/numerical-breakdown-of-47-1-on-1s-as-an-eag-first-timer Tue, 07 Nov 2023 17:18:00 +0000 EA - Numerical Breakdown of 47 1-on-1s as an EAG First-Timer (Going All Out Strategy) by Harry Luk Harry Luk https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:40 no full 6
unFycWDoyDHdHQGT5 EA - How Rethink Priorities is Addressing Risk and Uncertainty by Marcus A Davis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Rethink Priorities is Addressing Risk and Uncertainty, published by Marcus A Davis on November 7, 2023 on The Effective Altruism Forum.This post is part of Rethink Priorities' Worldview Investigations Team's CURVE Sequence: "Causes and Uncertainty: Rethinking Value in Expectation." The aim of this sequence is twofold: first, to consider alternatives to expected value maximization for cause prioritization; second, to evaluate the claim that a commitment to expected value maximization robustly supports the conclusion that we ought to prioritize existential risk mitigation over all else.IntroductionRP has committed itself to doing good. Given the limits of our knowledge and abilities, we won't do this perfectly but we can do this in a principled manner. There are better and worse ways to work toward our goal. In this post, we discuss some of the practical steps that we're taking to navigate uncertainty, improve our reasoning transparency, and make better decisions. In particular, we want to flag the value of three changes we intend to make:Incorporating multiple decision theories into Rethink Priorities' modelingMore rigorously quantifying the value of different courses of actionAdopting transparent decision-making processesUsing Multiple Decision TheoriesDecision theories are frameworks that help us evaluate and make choices under uncertainty about how to act.[1] Should you work on something that has a 20% chance of success and a pretty good outcome if success is achieved, or work on something that has a 90% chance of success but only a weakly positive outcome if achieved? Expected value theory is the typical choice to answer that type of question. It calculates the expected value (EV) of each action by multiplying the value of each possibleoutcome by its probability and summing the results, recommending the action with the highest expected value. But because low probabilities can always be offset by corresponding increases in the value of outcomes, traditional expected value theory is vulnerable to the charge of fanaticism, "risking arbitrarily great gains at arbitrarily long odds for the sake of enormous potential" (Beckstead and Thomas, 2021). Put differently, it seems to recommend spending all of our efforts on actions that,predictably, won't achieve our ends.Alternative decision theories have significant drawbacks of their own, giving up one plausible axiom or another. The simple alternative is expected value maximization but with very small probabilities rounded down to zero. This gives up the axiom of continuity, which suggests for a relation of propositions A B C, that there exists some probability that would make you indifferent between B and a probabilistic combination of A and C. This violation causes someweird outcomes where, say, believing the chance of something is 1 in 100,000,000,000 can mean an action gets no weight but believing it's 1.0000001 in 100,000,000,000 means that the option dominates your considerations if the expected value upon success is high enough, which is a kind of attenuated fanaticism. There are also other problems like setting the threshold for where you should round down.[2]Alternatively, you could go with a procedure like weighted-linear utility theory (WLU) (Bottomley and Williamson, 2023), but that gives up the principle of homotheticity, which involves indifference to mixing a given set of options with the worst possible outcome. Or you could go with a version of risk-weighted expected utility (REU) (Buchak, 2013) and give up the axiom of betweenness which suggests the order in which you are presented information shouldn't alter your conclusions.[3]It's very unclear to us, for example, that giving up continuity is preferable to giving up homotheticity, and neither REU or WLU really logically eliminate issues w...

]]>
Marcus_A_Davis https://forum.effectivealtruism.org/posts/unFycWDoyDHdHQGT5/how-rethink-priorities-is-addressing-risk-and-uncertainty Tue, 07 Nov 2023 14:39:41 +0000 EA - How Rethink Priorities is Addressing Risk and Uncertainty by Marcus A Davis Marcus_A_Davis https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:53 no full 7
zHFBQ23o4DKjsoXcC EA - Incorporating and visualizing uncertainty in cost effectiveness analyses: A walkthrough using GiveWell's estimates for StrongMinds by Jamie Elsey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Incorporating and visualizing uncertainty in cost effectiveness analyses: A walkthrough using GiveWell's estimates for StrongMinds, published by Jamie Elsey on November 7, 2023 on The Effective Altruism Forum.A common first step towards incorporating uncertainty into a cost effectiveness analysis (CEA) is to express not just a point estimate (i.e., a single number) for an input to the CEA, but to provide some indicator of uncertainty around that estimate. This might be termed an optimistic vs. pessimistic scenario, or as the lower and upper bounds of some confidence or uncertainty interval. A CEA is then performed by combining all of the optimistic inputs to create an optimistic final output, and allof the pessimistic inputs to create a final pessimistic output. I refer to this as an 'interval-based approach'. This can be contrasted with a fuller 'probabilistic approach', in which uncertainty is defined through the use of probabilistic distributions of values, which represent the range of possibilities we believe the different inputs can take. While many people know that a probabilistic approach circumvents shortcomings of an interval-based approach, they may not know where to even begin interms of what different distributions are possible, or the kinds of values they denote.I hope to address this in the current post and the accompanying application. Concretely, I aim to:Show how performing a CEA just using an interval-based approach can lead to a substantial overestimation of the uncertainty implied by one's initial inputs, and how using a probabilistic approach can correct this while also enabling additional insights and assessmentsIntroduce a new tool I have developed - called Distributr - that allows users to get more familiar and comfortable with a range of different distributions and the kinds of values they implyUse this tool to help generate a probabilistic approximation of the inputs GiveWell used in their assessment of Strongminds,[1] and perform a fuller probabilistic assessment based upon these inputsShow how this can be done without needing to code, using Distributr and a simple spreadsheetI ultimately hope to help the reader to feel more capable and confident in the possibility of incorporating uncertainty into their own cost effectiveness analyses.Propagating uncertainty and the value of moving beyond an interval-based approachCost effectiveness analysis involves coming up with a model of how various different factors come together to determine both how effective some intervention is, and the costs of its delivery. For example, when we think about distributing bed nets for malaria prevention, we might consider how the cost of delivery can vary across different regions, how the effects of bed net delivery will depend on the likelihood that people use the bed nets for their intended purpose, and the probability thatrecipients will install the bed nets properly. These and other factors all come together to produce an estimate of the cost effectiveness of an intervention, which will depend on the values we ascribe to the various inputs.One way that a researcher might seek to express uncertainty in these inputs is by placing reasonable upper and lower bounds on their estimates for each of them. The researcher might then seek to propagate this uncertainty in the inputs into the anticipated uncertainty in the outputs by performing the same cost effectiveness calculations as they did on their point estimates on their upper bounds, and on their lower bounds, thereby producing corresponding upper and lower bounds on the final cost effectiveness.An example of an interval-based approach is GiveWell'sassessment of Happier Lives Institute's CEA for StrongMinds. The purpose of this post is not to provide an independent evaluation of Strongminds, nor i...

]]>
Jamie Elsey https://forum.effectivealtruism.org/posts/zHFBQ23o4DKjsoXcC/incorporating-and-visualizing-uncertainty-in-cost Tue, 07 Nov 2023 13:51:54 +0000 EA - Incorporating and visualizing uncertainty in cost effectiveness analyses: A walkthrough using GiveWell's estimates for StrongMinds by Jamie Elsey Jamie Elsey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 30:49 no full 8
gqJkceDk7Az7jWEEW EA - Dengue rates drop 77-95% after release of bacteria-infected mosquitoes in Colombia by SiebeRozendal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dengue rates drop 77-95% after release of bacteria-infected mosquitoes in Colombia, published by SiebeRozendal on November 7, 2023 on The Effective Altruism Forum.When infected with Wolbachia, the mosquitoes are much less likely to transmit diseases such as dengue and Zika, because the bacteria compete with these viruses. The insects also pass the bacteria on to their offspring. Researchers hope that the modified mosquitoes will interbreed with the wild population wherever they are released, and that the number of mosquitoes with Wolbachia will eventually surpass that of mosquitoes without it.[...]When the scientists compared the incidence of dengue in fully treated areas with that in the same regions in the ten years before the intervention, they found that it had dropped by 95% in Bello and Medellín and by 97% in Itagüí. Since the project started, there hasn't been a large outbreak of dengue in the region. "They've had six years now with a sustained suppression of dengue," says Anders. "We're starting to see the real-world effect of Wolbachia."[...]The [World Mosquito Program] has conducted one [RCT] in Yogyakarta, Indonesia, in which mosquitoes were released in some areas of a city and the incidence of dengue was compared with that in areas that did not receive the insects. The results suggested that the technology could reduce the incidence of dengue by 77%. The organization is now conducting a similar one in Belo Horizonte, Brazil.The RCT: https://pubmed.ncbi.nlm.nih.gov/34107180/Despite the positive results, Wolbachia mosquitoes have not yet been officially endorsed by the World Health Organization (WHO). The technology awaits an evaluation by the WHO's Vector Control Advisory Group.World Mosquito Program: https://www.worldmosquitoprogram.org/en/work/wolbachia-method/how-it-worksThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
SiebeRozendal https://forum.effectivealtruism.org/posts/gqJkceDk7Az7jWEEW/dengue-rates-drop-77-95-after-release-of-bacteria-infected Tue, 07 Nov 2023 13:26:49 +0000 EA - Dengue rates drop 77-95% after release of bacteria-infected mosquitoes in Colombia by SiebeRozendal SiebeRozendal https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:07 no full 9
H3dCunWkJ6J6u7AYv EA - Atlantic bluefin tuna are being domesticated: what are the welfare implications? by Amber Dawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Atlantic bluefin tuna are being domesticated: what are the welfare implications?, published by Amber Dawn on November 7, 2023 on The Effective Altruism Forum.Atlantic bluefin tuna (ABFT) are large, carnivorous ocean fish. They used to be caught relatively rarely, mainly by sports fishermen in North America. However, around the 1950s, Japanese consumers of sushi developed more of a taste for the fish, and a large aquaculture industry developed.Historically, ABFT have been either caught directly from the ocean, or captured while young and fattened in 'ranches'. However, both wild fishing and ranching pose sustainability issues, since they involve taking fish from the wild. Since 2001, there have been a number of EU-funded projects to domesticate bluefin tuna, i.e. to breed them in captivity.This is already done with other types of fish, for example salmon and tilapia, which are raised on fish farms. But it's more difficult with ABFT: they generally don't spawn in captivity, as they require certain specific conditions to spawn.However, scientists have developed methods to make ABFT spawn in captivity, through manipulating light and releasing hormones into the water to stimulate egg production in the fish. This means that it's now possible to farm these fish through 'closed-cycle aquaculture': that is, we can breed them in captivity so that they don't need to be fished from the wild.This has been seen as a win for sustainability. But what about welfare? In this report, I first offer some background on ABFT. I then examine some potential welfare issues in ABFT aquaculture.Main takeaways:Many larvae (young fish) in hatchery projects die. However, this is also true in the wild, and hatcheries may become better at preventing some of these deaths in future, in order to be commercially viable.Many of the conditions in hatcheries might pose welfare issues for ABFT, but more research is needed.The main method of slaughtering large ABFT seems relatively humane; however, the main method of slaughtering smaller ABFT seems more distressing. It's unclear how many ABFT are slaughtered using this crueller method.What are Atlantic bluefin tuna?Atlantic bluefin tuna (thunnus thynnus) are native to the Atlantic Ocean and Mediterranean Sea. They are very large fish: fully mature adults are 2-2.5 m (6.6-8.2 ft) long on average and weigh around 225-250 kg (496-551 lb). Atlantic bluefin tuna (ABFT) have been called 'tigers of the sea' because of their size, grace, and the fact that they're carnivorous predators.In their natural habitat, ABFT can navigate over thousands of miles of ocean. They can dive to depths of 1000m. They eat smaller fish and other sea creatures, generally hunting in schools.Traditional aquaculture of ABFT involves 'ranching'. Juveniles are caught in nets when they gather to spawn, and fed and fattened in large offshore cages. When they are matured, they're slaughtered and sold for high prices.Domesticating ABFTHowever, ranching is not sustainable, since it involves removing ABFT from the wild. Although the International Commission for the Conservation of Atlantic Tunas (ICCAT) regulates tuna fishing by setting quotas, in 2009 their scientific advisors reported that ABFT stocks were probably less than 15% their original size.[1]Therefore, starting in 2001, there have been several EU-funded projects to develop 'closed-cycle' aquaculture for ABFT: the ability to breed them in captivity. DOTT ('Domestication of Thunnus thynnus') was the first such project in 2001-2; this was followed by REPRODOTT (2003-2005), SELFDOTT (2008-2011), and TRANSDOTT (2012-2014).[2]Since then, various entities have set up ABFT hatcheries across Europe, including both public research centres and private companies. More recently, in July 2023, researchers at the Spanish Instit...

]]>
Amber Dawn https://forum.effectivealtruism.org/posts/H3dCunWkJ6J6u7AYv/atlantic-bluefin-tuna-are-being-domesticated-what-are-the Tue, 07 Nov 2023 09:21:48 +0000 EA - Atlantic bluefin tuna are being domesticated: what are the welfare implications? by Amber Dawn Amber Dawn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:59 no full 11
ga2wFZ3ZGRv6EfL7L EA - I went on a (very) long walk, and it was a great career decision by Emily Grundy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I went on a (very) long walk, and it was a great career decision, published by Emily Grundy on November 7, 2023 on The Effective Altruism Forum.This year, I walked from Mexico to Canada. I walked over 4,265 kilometres - through snow, blizzards, heatwaves, mosquito swarms, wildfire smoke, and extreme exhaustion. It was the hardest thing I've ever done, and it was the best thing I've ever done. And I almost didn't do it. Why?Not because I doubted I could do it (though I did).Not because I was worried about river crossings and hypothermia and falling trees (though I was).Not because I thought it would break me into pieces (though, believe me, it did).I was hesitant to embark on this epic journey, because I was concerned about what it would do to my career. How it might stall my professional journey. How it might even make it regress.I could not have been more wrong.This post is about why taking a break from your career, to do something thatdoesn't seem at all related to your career,could be great for it.The current rhetoric, and what's wrong with itImplicit in all the career advice I've consumed is the rhetoric that in order to grow your career, you have to focus on it. 'Focusing on it' involves doing things that directly advance your skills, knowledge, networks, or understanding of what you're a good fit for. According to this advice, your energy should be committed to 'making it happen', and to doing things that are very obviously career-relevant.Want to gain experience? Apply for internships.Want to grow your skills? Commit to self-study.Want to find a job that's a good fit for you? Spend a year exploring different roles.Want to take a break from your job? Wonderful, use that time to consider what you want out of the next one.This advice is pervasive, and it's convincing[1]. It can make people feel anxious that they need to always be 'career-ing', and guilty if they're not. It sends the message that the only way to improve your career trajectory is by very explicitly focusing on it and prioritising it.This rhetoric can become deeply ingrained, especially in young people, and this was the case for me. When I first considered doing thePacific Crest Trail, I went through quite the internal battle. Was it worth taking six months off work to go for a walk? Would my career stagnate or regress? Was it selfish to prioritise travel over impact, and should I just try to overcome that desire? What damage would this do to the position I'd worked hard to get to?Then, when I decided to actually do the hike, the battle continued. I tried to negotiate with myself, reasoning that if I weaved in some career-focused element then maybe I could justify it. Maybe this would be a good opportunity to think more about my career. Maybe I could use the time to consume relevant podcasts. Maybe I could firm up my stance on issues I care about. In the end, however, I told myself that I didn't want to take six months off work, to then spend the six months thinking about work.So I didn't. I didn't journal about what I wanted out of my career. I didn't listen to any podcasts with the intent of professional development. I hardly even thought about what I was going to do when I got home. I spent maybe a total of six hours thinking about work, and that was just when I sporadically felt like it. I'd come to accept that for six months, I would stop focusing on my career. And that meant, according to what I'd been taught, that I'd be temporarily abandoning it.However, that's not quite what happened.The career-related benefits of a non career-related breakAlthough I'd stopped intentionally working on my career, I would now classify what I did as a career-building activity. I'd go so far as to say that walking the length of the United States was better for my career than the counte...

]]>
Emily Grundy https://forum.effectivealtruism.org/posts/ga2wFZ3ZGRv6EfL7L/i-went-on-a-very-long-walk-and-it-was-a-great-career Tue, 07 Nov 2023 07:49:14 +0000 EA - I went on a (very) long walk, and it was a great career decision by Emily Grundy Emily Grundy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:49 no full 12
xbiJRzSJKq69RRHDd EA - Why you should publish your research in academic fashion by Hans Waschke-Wischedag Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why you should publish your research in academic fashion, published by Hans Waschke-Wischedag on November 7, 2023 on The Effective Altruism Forum.It is not uncommon in the EA-sphere to publish your research on your own website, GitHub, LessWrong or the EAForum. However, I think that more people should consider publishing their research as is usual in academia.ReasonsIf you are publishing on select webpages only:1 - You are actively excluding most researchers from ever noticing your workYour research is in all likelihood built upon decades, or perhaps centuries of academic research. Researchers actively browse the academic literature in the areas that they are interested in. If you do not publish your research in a fashion that is noticed by academic databases such as Google Scholar, you will lose a lot of readers. Furthermore, these researchers might have used your research as a building block for their research. In other words, you are actively slowing down future academic research.Even worse, your research might be forgotten.[1]2 - You will not become part of the public debate on a topicMost policy-makers, civil servants and the general public seem to value academic credentials and academic research. I find it likely that you are not going to be invited to speak on a topic, if you have not published your research in an outlet that signals credibility to policy-makers. If you want to make a difference with your research by bringing it into policy, you would be better off not (just) publishing it on your blog.3 - You are declining free expert feedbackA journal submission may result in low-cost feedback by other researchers.[2]This strikes me as a useful thing to have, particularly as articles such as thepost on interest rates and AGIseem to be influential, but would have been rejected by many economists.[3]ObjectionsMost people that do not publish their research in a way so that it emerges in academic databases name two main objections.1- Publishing in academic journals takes a lot of time and money and we do not have the time for this bull****This would have been a fair objection in 2005, but I do not think that it is any longer. Many researchers read and use research published as a pre-print on webservers such as arXiv, as long as it is good research.[4]It does not take a lot of time to do this and your article is guaranteed to be found by scholarly search machines. You do not need to go through the tiresome process of submitting to academic journals in order for your research to be read by most researchers in your field.2 - My research is intended for a very select viewership onlyThis objection is a good one. However, I think that it applies only to a very small set of research. Namely, research institutes and think tanks who target policy-makers directly and already know that they will be taken seriously. However, if their research may be interesting for others too, they should probably consider publishing it openly as well.An explanation for the trend to publish outside of academiaI think that the benefits of publishing in an academic fashion usually far outweigh the costs. So why do people publish their research on non-academic websites?My very speculative account of why this happens is best explained through an example research paper that I recently came across (on a personal website). The author wrote:I am choosing to publish [this here] because journals tend to be extractive and time consuming, and because I am in a position to not care about them.I think the author has done great research and I do not want to criticise that particular decision. But I think that this sentence may reveal some hidden motives.Of course, academic journals are all about prestige and people with an "EA-mindset" may be inclined to reject this notion. H...

]]>
Hans Waschke-Wischedag https://forum.effectivealtruism.org/posts/xbiJRzSJKq69RRHDd/why-you-should-publish-your-research-in-academic-fashion Tue, 07 Nov 2023 04:52:15 +0000 EA - Why you should publish your research in academic fashion by Hans Waschke-Wischedag Hans Waschke-Wischedag https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:00 no full 14
5FepP8NiyJjRpnAvC EA - Giving What We Can has a new pledge option! by GraceAdams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Giving What We Can has a new pledge option!, published by GraceAdams on November 7, 2023 on The Effective Altruism Forum.Today, we are launching the option to factor wealth into your pledge with Giving What We Can.Until now, Giving What We Can has focused on income as the main way of contributing through a pledge. However, we recognise that some people's personal financial resources don't exist as income - they exist as wealth. Adding an optional wealth component to the Giving What We Can Pledge allows those who have significant wealth to give in a way that better reflects their resources, aligning with our overall mission of helping people from all walks of life meaningfully commit to using a portion of their financial resources to help others.This optional wealth component of the Giving What We Can Pledge involves choosing to give the greater of 10% of income or a custom percentage of wealth each year. As such, those choosing to add the optional wealth component to their Giving What We Can Pledge will still be donating an amount that is at least equivalent to 10% of income, if not more.Read an example of how this works.We think this option will be most suitable for those who are in the top few percent of wealth holders globally, and we continue to recommend the Giving What We Can Pledge for most people earning a median salary in high-income countries. To get personalised guidance on the type of pledge you might consider based on how your income and wealth compares to the rest of the world, check out our pledge recommendation tool.We've also taken this opportunity to redesign ourpledge sign up, and we're very excited about the brand new look and feel as well as some improved functionality (such as letting people share their motivation publicly)!More about our giving pledgesWhile we are best known forThe Giving What We Can Pledge, which is a public commitment to donate at least 10% of your income (or a custom percentage of wealth!) to the most effective charities in the world, we also have three other giving pledges:The Trial Pledge:Donate at least 1% of income for any period you choose (with the option to also factor in your wealth when calculating your pledge amount)The Further Pledge: Donate all income above a specified living allowanceThe Company Pledge: Donate at least 10% of profitsWant to include wealth as part of your pledge?If you have an existing Giving What We Can Pledge or Trial Pledge, you can fill out ourPledge Variation Formto request us to update your pledge in our systems.Why pledge at all if you are already giving regularly?We've written at length about this onour website, but today, I'll just cover one important point:Taking a public giving pledge helps inspire others.On a personal note, before I took a pledge, I scrolled through the list of others that had already done so, looking for reassurance in the thousands of names that appeared. Seeing the many people who had committed gave me a clear message:I could in fact donate 10% and live a good and fulfilling life.Adding your name to the list, whether it's a trial pledge or a lifetime commitment, helps generate a sense of momentum, achievability, and a sense of "wow, there's other people out here doing this too!" We think this is really important in creating a culture where giving significantly and effectively is a social norm.We're hoping to hit 10,000 lifetime pledgers before the end of 2024. If you're already giving consistently, please consider taking a pledge![Take a pledge]A deep thank you from our whole team for giving what you can to create a better future.With gratitude,Grace and the Giving What We Can teamP.S. The idea for a wealth component was first considered in 2017, and we're grateful for all those who have contributed to it in the years since.T...

]]>
GraceAdams https://forum.effectivealtruism.org/posts/5FepP8NiyJjRpnAvC/giving-what-we-can-has-a-new-pledge-option Tue, 07 Nov 2023 00:35:44 +0000 EA - Giving What We Can has a new pledge option! by GraceAdams GraceAdams https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:34 no full 15
fFNAcQBDkBHxPXuJJ EA - Why Certify? Aquatic Life Institute's Impact Implementation Via Seafood Certification by Tessa @ ALI Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Certify? Aquatic Life Institute's Impact Implementation Via Seafood Certification, published by Tessa @ ALI on November 8, 2023 on The Effective Altruism Forum.SnapshotAquatic Life Institute has recently launched the second edition of theAquaculture Certification Schemes Benchmark.These schemes collectively certify, at minimum, 773 million fishes and 10.5 billion shrimps annually.For every $1 of funding received, we have potentially helped improve the lives of 5,423 fish and 221,343 shrimp directly through our engagement with these certifiers.444: Four R's for Formative ChangeIn 2019,Aquatic Life Institute (ALI) embarked on a journey to reduce the suffering for trillions of aquatic animals in the global food system each year.This past September, ALI established4 Key Principles to help guide our interventions for systemic transformation in aquatic animal welfare that are used to filter organizational priorities:Reduce the number of animals in, or remove animals from, the seafood system and its supply chain.Refine the conditions in which animals are currently kept or captured in the seafood system and its supply chain.Replace animal products with sustainable plant-based or cell-based alternatives to the extent possible in the seafood system and its supply chain.Reject the introduction of additional animals into the seafood system and its supply chain.In alignment with these 4 principles, we have worked with seafood certifications for years, building relationships and fostering change via ourCertifier Campaign.Certification LandscapeBetween 51 and 167 billion farmed fish[1] are produced annually from global aquaculture operations. Although there are examples of good welfare practices in aquaculture, the concept of what officially constitutes "humanely-raised fish" or a "high welfare seafood product" is still largely undefined worldwide by the public, industry, animal welfare organizations, and most governments.As institutions certifying aquatic animal products begin incorporating positive welfare standards into their seafood labeling programs, they must diligently define high welfare products based on the best available scientific evidence rather than rely on subpar industry norms. "Humanely-raised" aquaculture standards must include more than just stunning before slaughter; they should consider animal welfare conditions throughout the stages of their lives in production. The farmed aquatic animals living in aquaculture facilities at any given time need to be prioritized.Aquaculture standards must also account for additional aquatic animals not directly used for human consumption, such as animals reduced to fishmeal and fish oil ingredients, cleaner fish, and broodstock. Consumers turn to seafood labeling schemes for guidance to avoid purchasing products that conflict with sustainable and humane practices. More than 100 certifications and ratings programs of one type or another are currently in use by the seafood industry.[2], and volumes of certified farmed fish and shellfish constitute about 8% of global aquaculture production[3]. The amount of certified aquatic animal products is only expected to increase. There is no evidence that certification will be phased out anytime in the near future, given consumers' increasing demand for sustainable seafood and the absence of a better alternative[4]. Some schemes are reporting notable growth and others are discussing the aggressive expansion of their operations to certify a greater number of seafood products in various regions.However, many of these labels lack explicit considerations for positive animal welfare or fail to provide adequate protections. Through our Certifier Campaign, we aim to hold seafood certification standards accountable and highlight the schemes that provide the most robust...

]]>
Tessa @ ALI https://forum.effectivealtruism.org/posts/fFNAcQBDkBHxPXuJJ/why-certify-aquatic-life-institute-s-impact-implementation Wed, 08 Nov 2023 11:17:18 +0000 EA - Why Certify? Aquatic Life Institute's Impact Implementation Via Seafood Certification by Tessa @ ALI Tessa @ ALI https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:22 no full 2
pvBWDLwtw8A4rA6DF EA - AMA: Ben West, former startup founder and EtGer by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Ben West, former startup founder and EtGer, published by Ben West on November 8, 2023 on The Effective Altruism Forum.Hey everyone! I'm Ben, and I will be doing an AMA forEffective Giving Spotlight week. Some of my relevant background:In 2014 I cofounded a company for earning to give (EtG) reasons (largelyinspired by 80k), which was latersuccessfully acquired.Since late 2018 I have been doing direct work, currently as Interim Managing Director of CEA.(With a brief side project of founding a TikTok-related company which wassimilarly acquired, albeit for way less money.)I've had some other EtGish work experience (eight years as a software developer/middle manager, a couple months at Alameda Research) as wellAdditionally, I've talked to some people deciding between EtG and direct work because of mystanding offer to talk to such folks, so I might have cached thoughts on some questions.You might want to ask me about:EntrepreneurshipTrade-offs between earning to give and "direct work"Cosmetics and skincare for those who (want to) look masculineTikTokFunctional programming (particularly Haskell)Or one ofmy less useful projectsAnything else (I might skip some questions)I will plan to answer questions Thursday, November 9th. Post them as comments on this thread.See alsoJeff's AMA, which is on a similar topic.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Ben_West https://forum.effectivealtruism.org/posts/pvBWDLwtw8A4rA6DF/ama-ben-west-former-startup-founder-and-etger Wed, 08 Nov 2023 10:03:01 +0000 EA - AMA: Ben West, former startup founder and EtGer by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:39 no full 3
gxppfWhx7ta2fkF3R EA - 10 years of Earning to Give by AGB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 10 years of Earning to Give, published by AGB on November 8, 2023 on The Effective Altruism Forum. General note: The bulk of this post was written a couple of months ago, but I am releasing it now to coincide with the Effective Giving Spotlight week. I shortly expect to release a second post documenting some observations on the communitybuilding funding landscape.IntroductionWay back in 2010, I was sitting in my parents' house, watching one of my favourite TV shows, the UK's Daily Politics. That day's guest was an Oxford academic by the name of Toby Ord. He was donating everything above 18000 (26300 in today's money) to charity, and gently pushing others to give 10%."Nice guy," I thought. "Pity it'll never catch on."Two years later, a couple of peers interned at Giving What We Can. At the same time, I did my own internship in finance, and my estimate of my earning potential quadrupled[1]. One year after that, I graduated and took the Giving What We Can pledge myself. While my pledge form read that I had committed to donate 20% of my income, my goal was to hit far higher percentages.How did that go?Post goalsEarning To Give was one of EA's first ideas to get major mainstream attention, much of it negative. Some was mean-spirited, but some of it read to me as a genuine attempt to warn young people about what they were signing up for. For example, from the linked David Brooks piece:From the article, Trigg seems like an earnest, morally serious man...First, you might start down this course seeing finance as a convenient means to realize your deepest commitment: fighting malaria. But the brain is a malleable organ....Every hour you spend with others, you become more like the people around you.If there is a large gap between your daily conduct and your core commitment, you will become more like your daily activities and less attached to your original commitment. You will become more hedge fund, less malaria. There's nothing wrong with working at a hedge fund, but it's not the priority you started out with.At the time, EAs had little choice but to respond to such speculation with speculation of their own. At this point, I can at least answer how some things have played out for me personally. I have divided this post into reflections on my personal EtG path and on the EA community.My pathFirst, some context. Over the past decade:My wife Denise and I have donated 1.5m.[2]This equates to 46% of our combined gross incomes.[2]The rest of the money is split 550k / 550k / 700k between spending / saving (incl. pension) / taxes.[2]We have three children (ages 13, 6, 2) and live in London.I work as a trader, formerly at a quant trading firm and now at a hedge fund.WorkMany critics of EtG assume that we really want to be doing something meaningful, but have - with a heavy heart - intellectually conceded that money is what matters.I want to emphasise this: This is not me, and I doubt it applies to even 20% of people doing EtG. If you currently feel this way, I strongly suspect you should stop.I like my work. I get to work with incredibly sharp and motivated people. I get to work on a diverse array of intellectual challenges. Most of all, I've managed to land a career that bears an uncanny resemblance to what I do with my spare time; playing games, looking for inconsistencies in others' beliefs, and exploiting that to win.But prior to discovering EtG, I was wrestling with the fact that this natural choice just seemed very selfish. As I saw it, my choices were to do something directly useful and be miserable but valuable, or to work in finance and be happy but worthless. So a reminder that the money I have a comparative advantage in earning is itself of value was a relief, not a burden.My career pathway has not been smooth, with a major derailment in 2018, which ...

]]>
AGB https://forum.effectivealtruism.org/posts/gxppfWhx7ta2fkF3R/10-years-of-earning-to-give Wed, 08 Nov 2023 00:11:14 +0000 EA - 10 years of Earning to Give by AGB AGB https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:51 no full 6
Cvn6fwzdoLNLgTJif EA - Further possible projects on EA reform by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Further possible projects on EA reform, published by Julia Wise on November 10, 2023 on The Effective Altruism Forum.As part of this project on reforms, we collected a rough list of potential projects for EA organizational reform. Each idea was pretty interesting to at least one of us (Julia Wise, Ozzie Gooen, Sam Donald), but we don't necessarily agree about them.This list represents a lightly-edited snapshot of projects we were considering around July 2023, which we listed in order to get feedback from others on how to prioritize. Some of these were completed as part of the reform project, but most are still available if someone wants to make them happen.There's an appendix with arough grid of projects by our guess at importance and tractability.BackgroundKey FactorsFactors that might influence which projects people might like, if any:Centralization vs. DecentralizationHow much should EA aim to be more centralized / closely integrated vs. more like a loose network?Big vs. Small ChangesAre existing orgs basically doing good stuff and just need some adjustments, or are some things in seriously bad shape?Short-term vs. Long-termHow much do you want to focus on things that could get done in the next few months vs changes that would take more time and resources?Risk appetiteIs EA already solid enough that we should mostly aim to preserve its value and be risk-averse, or is most of the impact in the future in a way that makes it more important to stay nimble and be wary of losing the ability to jump at opportunities?Issue TractabilityHow much are dynamics like sexual misconduct within the realm of organizations to influence, vs. mostly a broad social problem that orgs aren't going to be able to change much?Long-term OverheadHow much operations/infrastructure/management overhead do we want to aim for? Do we think that EA gets this balance about right, or could use significantly more or less?If you really don't want much extra spending per project, then most proposals to "spend resources improving things" couldn't work.There are sacrifices between moving quickly and cheaply, vs. long-term investments and risk minimization.BoardsAdvice on board compositionScope: smallAction:Make recommendations to EA organizations about their board composition.Have compiled advice from people with knowledge of boards generallyMany to the effect of "small narrow boards should be larger and have a broader range of skills"Can pass on this summary to orgs with such boardsWhat can we offer here that orgs aren't already going to do on their own?Collect advice that many orgs could benefit fromArea where we don't have much to offer:Custom advice for orgs in unusual situationsJulia's take: someone considering board changes at orgs in unusual situations should read through the advice we compile, but not expect it to be that different from what they've probably already heardSteps with some organizations[some excerpted parts about communications with specific organizations]If you could pick a couple of people who'd give the best advice on possible changes these orgs should make to the board, who would it be?Small organizations with no board or very basic board: work out if we have useful advice to give hereExisting work / resources:EA Good Governance ProjectInvestigationsFTX investigationProject scope: mediumAction:Find someone to run an investigation into how EA individuals and organizations could have better handled the FTX situation.Barriers:People who did things that were bad or will make them look bad will not want to tell you about it. Everyone's lawyers will have told them not to talk about anything.Existing work / resources:EV'sinvestigation has a defined scope that won't be relevant to all the things EAs want to know, and it won't necessarily p...

]]>
Julia_Wise https://forum.effectivealtruism.org/posts/Cvn6fwzdoLNLgTJif/further-possible-projects-on-ea-reform Fri, 10 Nov 2023 19:35:30 +0000 EA - Further possible projects on EA reform by Julia Wise Julia_Wise https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 33:45 no full 2
Yfmdcdziq5mbA7Hif EA - Concepts of existential catastrophe (Hilary Greaves) by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concepts of existential catastrophe (Hilary Greaves), published by Global Priorities Institute on November 10, 2023 on The Effective Altruism Forum.This paper was originally published as a working paper in September 2023 and is forthcoming in The Monist.AbstractThe notion of existential catastrophe is increasingly appealed to in discussion of risk management around emerging technologies, but it is not completely clear what this notion amounts to. Here, I provide an opinionated survey of the space of plausibly useful definitions of existential catastrophe. Inter alia, I discuss: whether to define existential catastrophe in ex post or ex ante terms, whether an ex ante definition should be in terms of loss of expected value or loss of potential, and what kind of probabilities should be involved in any appeal to expected value.Introduction and motivationsHumanity today arguably faces various very significant existential risks, especially from new and anticipated technologies such as nuclear weapons, synthetic biology and advanced artificial intelligence (Rees 2003, Posner 2004, Bostrom 2014, Haggstrom 2016, Ord 2020).Furthermore, the scale of the corresponding possible catastrophes is such that anything we could do to reduce their probability by even a tiny amount could plausibly score very highly in terms of expected value (Bostrom 2013, Beckstead 2013, Greaves and MacAskill 2024). If so, then addressing these risks should plausibly be one of our top priorities.An existential risk is a risk of an existential catastrophe. An existential catastrophe is a particular type of possible event. This much is relatively clear. But there is not complete clarity, or uniformity of terminology, over what exactly it is for a given possible event to count as an existential catastrophe. Unclarity is no friend of fruitful discussion. Because of the importance of the topic, it is worth clarifying this as much as we can. The present paper is intended as a contribution to this task.The aim of the paper is to survey the space of plausibly useful definitions, drawing out the key choice points. I will also offer arguments for the superiority of one definition over another where I see such arguments, but such arguments will often be far from conclusive; the main aim here is to clarify the menu of options.I will discuss four broad approaches to defining "existential catastrophe". The first approach (section 2) is to define existential catastrophe in terms of human extinction. A suitable notion of human extinction is indeed one concept that it is useful to work with. But it does not cover all the cases of interest. In thinking through the worst-case outcomes from technologies such as those listed above, analysts of existential risk are at least equally concerned about various other outcomes that do not involve extinction but would be similarly bad.The other three approaches all seek to include these non-extinction types of existential catastrophe. The second approach appeals to loss of value, either ex post value (section 3) or expected value (section 4).There are several subtleties involved in making precise a definition based on expected value; I will suggest (though without watertight argument) that the best approach focuses on the consequences for expected value of "imaging" one's evidential probabilities on the possible event in question. The fourth approach appeals to a notion of the loss of humanity's potential (section 5). I will suggest (again, without watertight argument) that when the notion of "potential" is optimally understood, this fourth approach is theoretically equivalent to the third.The notion of existential catastrophe has a natural inverse: there could be events that are as good as existential catastrophes are bad. Ord and Cotton-Barratt (2015) suggest coining th...

]]>
Global Priorities Institute https://forum.effectivealtruism.org/posts/Yfmdcdziq5mbA7Hif/concepts-of-existential-catastrophe-hilary-greaves Fri, 10 Nov 2023 17:00:17 +0000 EA - Concepts of existential catastrophe (Hilary Greaves) by Global Priorities Institute Global Priorities Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:06 no full 5
eneTgwjSjiyfCmXKK EA - Here's where CEA staff are donating in 2023 by Oscar Howie Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Here's where CEA staff are donating in 2023, published by Oscar Howie on November 10, 2023 on The Effective Altruism Forum.Catherine LowI took theGiving What We Can pledge in 2015. Between 2015-2022, I've donated 10-15% of my income. This year is the first year where I'm not planning to donate my usual pledge amount; instead I've chosen to donate extra next year to make up for this.2015-2022Initially I focussed on Animal Charity Evaluators top charities (and to whatever charity won theGiving Game I was running).Then I started thinking more like a meta micro-funder - donating to projects/people I could donate to more easily (because of my knowledge or lack of constraints) than institutional donors couldHelping get ideas to the "applying for funding" stageHelping through financially tricky situations - e.g. tiding them over between jobs or grants2023I began conserving my donations for potentially vulnerable initiatives I'm familiar with and really like, which might need support as a result of the drop in EA meta funding. While none of these have required funding thus far (thanks wonderful institutional donors!) I think I might see opportunities like this in 2024, and I have a couple of projects in mind that I'm ready to leap in and support.Aside: Separately to my pledge I also "offset" my carbon emissions. I currently donate this toFounders Pledge Climate Change Fund. I feel pretty mixed about this. I'm more worried about other risks and other current issues, so it's not a particularly "EA thing to do". My motivations are "try not to be part of the problem" guilt reduction reasons plus social reasons (many of my friends and family are "flightless kiwis" and enthusiastic climate advocates, so I feel better about flying when I can say "I offset! Here's how!").Shakeel HashimI took the Giving What We Can pledge at the end of last year, so this was my first "proper" year of donating 10% (though I ended up donating about 10% last year too). After taking the pledge, I made atemplate for deciding how to allocate my donations to cause areas. The idea was that I want to take a portfolio approach (giving some to global health, some to existential security, and some to animal welfare), and also want to consider the overall resources I "donate", which includes my time. This led me to realise that because most of my work time recently has been spent on existential security stuff, and because I think my work time is much more valuable than the amount of money I donate, my donations should all go to global health stuff.I'm also a big fan of encouraging new global health projects to appear, as I expect we might be able to find better projects than the current top-rated charities. That said, it's difficult to target donations to such projects. In practice, I donate 95% to the GiveWell All Charities Fund, and 5% to the Charity Entrepreneurship Incubated Charities Fund.Angelina LiI took the Giving What We Can pledge in 2016, when I was in college. In terms of how much I donate: From ~2018-2021, I was earning to give at a consulting firm, and gave somewhere between 20-40% of my income every year, mostly to effective animal advocacy charities.Last year, I joined CEA, and it looks like I barely made my 10% threshold last year (mostly based on one large donation to theanimal welfare EA Fund in January). At the time, I think decreasing my donations was a reaction to a more cash-flush funding landscape, thinking my labour was now more valuable than my money, and wanting to save more after heading into a less lucrative career path. I regret this somewhat, looking back: I think I let my expenses ramp up too quickly and wish I had saved more to donate later. A smarter me would also have considered the benefits of preserving diverse funding options on a rainy day. Plus selfish...

]]>
Oscar Howie https://forum.effectivealtruism.org/posts/eneTgwjSjiyfCmXKK/here-s-where-cea-staff-are-donating-in-2023 Fri, 10 Nov 2023 16:45:38 +0000 EA - Here's where CEA staff are donating in 2023 by Oscar Howie Oscar Howie https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:40 no full 6
ZzCpFmSc2LfLjK2ws EA - Why and how to earn to give by Ardenlk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why and how to earn to give, published by Ardenlk on November 10, 2023 on The Effective Altruism Forum.Many people here are probably already familiar with the basic ideas behind earning to give, but even so, you might find it useful to have:(1) An introduction to share with others or a reminder for yourself, in which case 80,000 Hours's article on it could be a good go-to!(2) Some thoughts on whether you personally should earn to give or do direct work(3) Links to resources on promising earning to give options like software engineering or being an early employee at a startupThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Ardenlk https://forum.effectivealtruism.org/posts/ZzCpFmSc2LfLjK2ws/why-and-how-to-earn-to-give Fri, 10 Nov 2023 14:35:06 +0000 EA - Why and how to earn to give by Ardenlk Ardenlk https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:46 no full 7
GCGntqsEHWpGEwrnF EA - COI policies for grantmakers by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: COI policies for grantmakers, published by Julia Wise on November 10, 2023 on The Effective Altruism Forum.Part of this project on reforms in EA. Originally written July 2023I think grantmaking requires additional steps beyond a standard workplace-based conflict of interest policy. Those policies are designed to address "What if you give a contracting job to your brother's company?" or "What if you're dating a coworker?" They are not designed for things like "What if everyone in your social community views you as someone who can hand out money to them and their friends?"Related:Power dynamics between people in EAI think grantmaking projects should have a COI policy that applies to full-time, part-time, and volunteer grantmakers and regrantors. It could also be useful for people who are regularly asked their opinion about grant applications or applicants, even if they don't have a formal role as a grantmaker.Things for grantmakers to rememberPower is tricky. Smart, caring people have messed up here before.Think about what looks unethical from the outside as well as what you judge to be unethical. You might not be a good judge when it comes to your own decisions, and others will make judgements based on what things look like from their perspective.A written policy doesn't cover everything. You might notice situations that feel a bit icky to you. I suggest bringing those up with someone at your grantmaking project to get some help figuring out what to do.Example policiesSeveral of these are linked from the org websites or fromthis discussion. Some other organizations have COI policies that are mostly about relationships between their own staff, rather than between grantmakers and grantees.EA Funds policyACE policy on COIs by grantmaking committeeRethink Priorities policyExample from Charity Entrepreneurship'spolicy of something to avoid:"A Director who is also a decision-maker of a separate organisation who stands to receive a benefit from CE, such as a grant. To an external observer, it could look like the Director used their position as a Director of CE to secure a grant for the other organisation, which otherwise would not have received such a grant."From another grantmaking program: "We ask you to flag conflicts of interest, but they aren't a knock-down reason that we won't fund a grant. You can propose funding for friends, coworkers, employees, and even yourself. We will screen these proposals more carefully. . . . You shouldn't let a potential COI deter you from submitting a promising grant, we just want to know! The main COIs we view as insurmountable are grants to romantic partners."Draft policy for the Long Term Future Fund (with discussion in the comments that may be useful)Things for grantmaking projects to consider when writing a policyOften people will know more about projects they're close enough to have a conflict with, and I can see valid reasons to use that info. There may be ways to consider their input without having them involved in the final decision; for example they could share information/opinions but not participate in any final voting/recommendation on a grant.Possible elements for a policy to includeWhat kind of relationships should be disclosed, even if they don't require recusal? (For example I suggest that being friends or housemates should be disclosed, but doesn't require recusal.)What kind of relationships require recusal?Types of relationships to think aboutDoing paid or volunteer work for the grantee projectBoard member of the other projectHousemate / landlord / tenantClose friendsFamily memberCurrent romantic or sexual partnerPast romantic or sexual partnerYour partner or close family member has a COI with the granteePeople who owe you money, or vice versaPeople who run a project that's competing wi...

]]>
Julia_Wise https://forum.effectivealtruism.org/posts/GCGntqsEHWpGEwrnF/coi-policies-for-grantmakers Fri, 10 Nov 2023 13:33:55 +0000 EA - COI policies for grantmakers by Julia Wise Julia_Wise https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:29 no full 8
hkBryQTp733uaBPnC EA - Advice for EA boards by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice for EA boards, published by Julia Wise on November 10, 2023 on The Effective Altruism Forum.ContextAs part of this project on reforms in EA, we've reviewed some changes that boards of organizations could make. Julia was the primary writer of this piece, with significant input from Ozzie.This advice on nonprofit boards draws from multiple sources. We spoke with board members from small and larger organizations inside and outside EA. We got input from staff at EA organizations who regularly interact with their boards, such as staff tasked with board relations. Julia and Ozzie also have a history of being on boards at EA organizations.Overall, there was no consensus on obvious reforms EA organizations should prioritize. But by taking advice from these varied sources, we aim to highlight considerations particularly relevant for EA boards.We have also shared more organization-specific thoughts with staff and board members at some organizations.Difficult choices we seeHow much to innovate? When should EA boards follow standard best practices, and when should they be willing to try something significantly different?Which sources do you trust on what "best practices" even are?Skills vs. alignment. How should organizations weigh board members with strong professional skills, such as finance and law, with those who have more alignment with the organization's specific mission?How much effort should be put into board recruitment? Most organizations spend less time on recruiting a board member than for hiring a staff position (which probably makes sense given the much larger number of hours a staff member will put in.) But the current default time put into this by EA organizations may be too low.Some things we think (which many organizations probably already agree with)Being a board member / trustee is an important role, and board members should be prepared to give it serious time."At least 2 hours a month" is one estimate that seems sensible for organizations after a certain stage (perhaps 5 FTE). In times of major transition or crisis for the organization, it may be a lot more.It's best to have set terms for board membership so that each member is prompted to consider whether board service is still a good fit for them, and other board members are prompted to consider whether the person is still a good fit for the board. This doesn't mean their term definitely ends after a fixed time (they can be re-elected / reappointed), but people shouldn't stay on the board indefinitely by default.It also makes it easier to ask someone to leave if they're no longer a solid fit or are checked out. Many organizations change or grow dramatically over time, so board members who are great at some stages might stop being best later on.It's important to have good information sharing between staff and the board.With senior staff, this could be by fairly frequent meetings or by other updates.With junior staff who can provide a different view into the organization than senior staff, this could be interviews, office hours held by board members, or by attending staff events.It's important to have a system for recusing board members who are conflicted. This is both for votes, and for discussions that should be held without staff present. For example, see Holden Karnofsky's suggestion aboutclosed sessions.It's helpful to have staff capacity specifically designated for board coordination.It's helpful to have one primary person own this areaThe goal is to get the board information that will make them more effective at providing oversightBoards should have directors & officer insurance.Expertise on a boardMany people we talked to felt it was useful to have specific skills or professional experience on a board (e.g. finance expertise, legal expertise). The amount of expertise ...

]]>
Julia_Wise https://forum.effectivealtruism.org/posts/hkBryQTp733uaBPnC/advice-for-ea-boards Fri, 10 Nov 2023 09:31:26 +0000 EA - Advice for EA boards by Julia Wise Julia_Wise https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:46 no full 9
jLaDP2aWxdDCzwBYy EA - Takes from staff at orgs with leadership that went off the rails by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Takes from staff at orgs with leadership that went off the rails, published by Julia Wise on November 10, 2023 on The Effective Altruism Forum.I spoke with some people who worked or served on the board at organizations that had a leadership transition after things went seriously wrong. In some cases the organizations were EA-affiliated, in other cases only tangentially related to the EA space.This is an informal collection of advice the ~eight people I spoke with have for staff or board members who might find themselves in a similar position. I bucketed this advice into a few categories below. Some are direct quotes and others are paraphrases of what they said. All spelling is Americanized for anonymity.I'm sharing it here not because I think it's an exhaustive accounting of all types of potential leadership issues (it's not) or because I think any of this is unique to or particularly prevalent in or around EA (I don't). But I hope that it's helpful to any readers who may someday be in a position like this. Of course, much of this will be the wrong advice if you're dealing with a problem that's more like miscommunication or differences of strategy than outright corruption or other unethical behavior.Written policies"Annual self-review [by the CEO] to the board, performance reviews of CEO's reports + feedback for the CEO shared with the board, official routinized channel for making major complaints to the board. More informally, I feel like having more of a 'we do things by the book' / 'we do all the normal tech company best practices for management' goes a long way. Also being formal and quite cautious about conflicts of interest."Maybe there should be a policy that if you have a problem with your manager or with org leadership, here's this alternate person you go to (HR, external HR consultant, board).One person from an org where the leader was treating staff badly said they had whistleblowing policies on the books, but it was hard to use them against the leader because the leader had control of the process.Maybe policies would have helped, if they'd had more teeth. Like the board must do x and y substantive things, here are minimum standards for what that will look like, this kind of report would need to be reviewed. But they had some of that and it didn't help."If you are cofounding an organization, have an agreement about what happens if you have irreconcilable disagreements with your cofounders. Every single startup advice book tells you to do this, and nobody does it because they think they are special, but you aren't special. Even if your cofounder is your best friend and you are perfectly value-aligned, you should still have an agreement about handling irreconcilable disagreements."Role of board / advice for boardPrioritize fixing culture proactively. When you can see the organization fracturing or employees are saying the culture is bad, board members should take it seriously. Not sure what kind of interventions would be best, maybe mediation between employees who aren't getting along.Having a good policy about how staff are treated is only useful if you carry it out. It's useless if nobody actually investigates problems.At one org, the leader arranged things so important decisions were made in informal discussions before going to the actual board. The board rubber-stamped things, wasn't providing independent oversight. It was worse because some board members were staff.Where some board members are uninvolved, the leader doesn't even need to hide things from them - they just won't notice.At one org, multiple staff members thought the board could have prevented the problem if they'd run a proper hiring round for the leader earlier rather than making hasty internal appointments."Have a board that's actually capable of doing stuff, and board mem...

]]>
Julia_Wise https://forum.effectivealtruism.org/posts/jLaDP2aWxdDCzwBYy/takes-from-staff-at-orgs-with-leadership-that-went-off-the Fri, 10 Nov 2023 00:27:45 +0000 EA - Takes from staff at orgs with leadership that went off the rails by Julia Wise Julia_Wise https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:08 no full 11
HwFtAQPfif2ZwirB6 EA - Project on organizational reforms in EA: summary by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project on organizational reforms in EA: summary, published by Julia Wise on November 9, 2023 on The Effective Altruism Forum.Earlier in 2023, Julia put together a project to look at possible reforms in EA. The main people on this were me (Julia) of the community health team at CEA, Ozzie Gooen of Quantified Uncertainty Research Institute, and Sam Donald of Open Philanthropy. About a dozen other people from across the community gave input on the project.Previously:Initial post from AprilUpdate from JuneWork this project has carried outInformation-gatheringJulia interviewed ~20 people based in 6 countries about their views on where EA reforms would be most useful.Interviewees included people with experience on boards inside and outside EA, some current and former leaders of EA organizations, and people with expertise in specific areas like whistleblowing systems.Julia read and cataloged ~all the posts and comments about reform on the EA Forum from the past year and some main ones from the previous year.Separately, Sam collated a longlist of reform ideas from the EA Forum, as part of Open Philanthropy's look at this area.We gathered about a dozen people interested in different areas of reform into a Slack workspace and shared some ideas and documents there for discussion.An overview of possible areas of reformHere's our list of further possible reform projects. We took on a few of these, but the majority are larger than the scope of this project.We're providing this list for those who might find it beneficial for future projects. However, there isn't a consensus on whether all these ideas should be pursued.Advice / resources produced during this projectAdvice about board composition and practicesAdvice for specific organizations about board composition, shared with those organizations directlyBoth of the large organizations we sent advice to were also running their own internal process considering changes to their board makeup and/or structure.Resource on whistleblowing and other ways of escalating concernsConflict of interest policy advice for grantmaking projectsAdvice from staff and board members at organizations where leadership went seriously wrong in the pastProjects and programs we'd like to seeWe think these projects are promising, but they're sizable or ongoing projects that we don't have the capacity to carry out. If you're interested in working on or funding any of these, let's talk!More investigation capacity, to look at organizations or individuals where something shady might be happening.More capacity on risk management across EA broadly, rather than each org doing it separately.Better HR / staff policy resources for organizations - e.g. referrals to services like HR and legal advising that "get" concepts like tradeoffs.A comprehensive investigation into FTX<>EA connections / problems - as far as we know, no one is currently doing this.EV'sinvestigation has a defined scope that won't be relevant to all the things EAs want to know, and it won't necessarily publish any of its results.Context on this projectThis project was one relatively small piece of work to help reform EA, and there's a lot more work we'd be interested to see. It ended up being roughly two person-months of work, mostly from Julia.The project came out of a period when there was a lot of energy around possible changes to EA in the aftermath of the FTX crisis. Some of the ideas we considered were focused around that situation, but many were around other areas where the functioning of EA organizations or the EA ecosystem could be improved.After looking at a lot of ideas for reforms, there weren't a lot of recommendations or projects that seem like clear wins; often there were some thoughtful people who considered a project promising and others who thought it ...

]]>
Julia_Wise https://forum.effectivealtruism.org/posts/HwFtAQPfif2ZwirB6/project-on-organizational-reforms-in-ea-summary EA connections / problems - as far as we know, no one is currently doing this.EV'sinvestigation has a defined scope that won't be relevant to all the things EAs want to know, and it won't necessarily publish any of its results.Context on this projectThis project was one relatively small piece of work to help reform EA, and there's a lot more work we'd be interested to see. It ended up being roughly two person-months of work, mostly from Julia.The project came out of a period when there was a lot of energy around possible changes to EA in the aftermath of the FTX crisis. Some of the ideas we considered were focused around that situation, but many were around other areas where the functioning of EA organizations or the EA ecosystem could be improved.After looking at a lot of ideas for reforms, there weren't a lot of recommendations or projects that seem like clear wins; often there were some thoughtful people who considered a project promising and others who thought it ...]]> Thu, 09 Nov 2023 23:11:28 +0000 EA - Project on organizational reforms in EA: summary by Julia Wise EA connections / problems - as far as we know, no one is currently doing this.EV'sinvestigation has a defined scope that won't be relevant to all the things EAs want to know, and it won't necessarily publish any of its results.Context on this projectThis project was one relatively small piece of work to help reform EA, and there's a lot more work we'd be interested to see. It ended up being roughly two person-months of work, mostly from Julia.The project came out of a period when there was a lot of energy around possible changes to EA in the aftermath of the FTX crisis. Some of the ideas we considered were focused around that situation, but many were around other areas where the functioning of EA organizations or the EA ecosystem could be improved.After looking at a lot of ideas for reforms, there weren't a lot of recommendations or projects that seem like clear wins; often there were some thoughtful people who considered a project promising and others who thought it ...]]> EA connections / problems - as far as we know, no one is currently doing this.EV'sinvestigation has a defined scope that won't be relevant to all the things EAs want to know, and it won't necessarily publish any of its results.Context on this projectThis project was one relatively small piece of work to help reform EA, and there's a lot more work we'd be interested to see. It ended up being roughly two person-months of work, mostly from Julia.The project came out of a period when there was a lot of energy around possible changes to EA in the aftermath of the FTX crisis. Some of the ideas we considered were focused around that situation, but many were around other areas where the functioning of EA organizations or the EA ecosystem could be improved.After looking at a lot of ideas for reforms, there weren't a lot of recommendations or projects that seem like clear wins; often there were some thoughtful people who considered a project promising and others who thought it ...]]> Julia_Wise https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:18 no full 13
FKnhB28EvG4og87JP EA - 1/E(X) is not E(1/X) by EdoArad Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 1/E(X) is not E(1/X), published by EdoArad on November 9, 2023 on The Effective Altruism Forum.When modeling with uncertainty we often care about the expected value of our result. In CEAs, in particular, we often try to estimate E[effectcost]. This is different from both E[costeffect]1 and E[effect]E[cost] (which are also different from each other). [1] The goal of this post is to make this clear.One way to simplify this is to assume that the cost is constant. So we only have uncertainty about the effect. We will also assume at first that the effect can only be one of two values, say either 1 QALY or 10 QALYs with equal probability.Expected Value is defined as the weighted average of all possible values, where the weights are the probabilities associated with these values. In math notation, for a random variable X,where x are all of the possible values of X.[2] For non-discrete distributions, like a normal distribution, we'll change the sum with an integral.Coming back to the example above, we seek the expected value of effect over cost. As the cost is constant, say C dollars, we only have two possible values:In this case we do have E[effectcost]=E[effect]E[cost], but as we'll soon see that's only because the cost is constant. What about E[costeffect]?which is not 1E[effectcost]=C211$QALY, a smaller amount.The point is that generally 1E[X]E[1X]. In fact, we always have 1E[X]E[1X] with equality if and only if X is constant.[3]Another common and useful example is when X is lognormally distributed with parameters μ,σ2. That means, by definition, that lnX is normally distributed with expected value and variance μ,σ2 respectively. The expected value of X itself is a slightly more complicated expression:Now the fun part: 1X is also lognormally distributed! That's because ln1X=lnX. Its parameters are μ,σ2 (why?) and so we getIn fact, we see that the ratio between these values is^See Probability distributions of Cost-Effectiveness can be misleading for relevant discussion. There are arguably reasons to care about the two alternatives E[costeffect]1 or E[effect]E[cost] rather than E[effectcost], which are left for a future post.^One way to imagine this is that if we sample X many times we will observe each possible value x roughly P(X=x) of the times. So the expected value would indeed generally be approximately the average value of many independent samples.^Due to Jensen's Inequality.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
EdoArad https://forum.effectivealtruism.org/posts/FKnhB28EvG4og87JP/1-e-x-is-not-e-1-x Thu, 09 Nov 2023 15:02:32 +0000 EA - 1/E(X) is not E(1/X) by EdoArad EdoArad https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:45 no full 18
uWbY8B4XukB5ds734 EA - CEEALAR is funding constrained by CEEALAR Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEEALAR is funding constrained, published by CEEALAR on November 9, 2023 on The Effective Altruism Forum.Post intends to be an update on CEEALAR's funding situation and fundraising plans.The Centre for Enabling EA Learning & Research (formerly the EA Hotel) is a space for promising EAs to rapidly upskill, perform research, and work on charitable and entrepreneurial projects. We provide assistance at low cost to those seeking to do the most good with their time & other resources through subsidising living arrangements, organising a productive atmosphere, and fostering a strong EA community.The situationSimilarly to many promising EA projects, we were unable to secure funding from the recent Survival & Flourishing Fund (SFF) funding round. This is unfortunate because SFF constituted our single largest donor, and thus CEEALAR's existence is now at risk.With <4 months of runway remaining, we're now looking at alternative pathways to safeguarding our work.What this meansCEEALAR is looking for funders! Throughout this giving season, we will be promoting updated information about what we do, why we do it, and what we achieve. We'll do this through a variety of efforts - including forum posts, so watch this space!Specifically, our team will be working hard to achieve two distinct goals:Survive this funding squeeze by organising a winter fundraiser. We intend to raise 25,000, which will extend our runway until May*, enabling us to enter into the next round of grant applications.Become financially stable by diversifying our revenue streams, cutting costs and demonstrating our impact to funders.Our inside view is that CEEALAR is the best it has ever been: we've improved our facilities, increased the number of guests we can support, and received great feedback about increased productivity. Our priority this year has been to reach out to past grantees/ funders and implement their extremely helpful feedback.What you can do right nowIf you're a potential donor, large or small, interested in learning about what CEEALAR looks like in 2023 (we've changed a lot!), please do reach out at contact@ceealar.org. We will prioritise answering any questions you may have.Alternatively:Donate now! We support PayPal, Ko-Fi, PPF Fiscal Sponsorship, and bank transfer donations.Sign up to our mailing list and keep abreast of future updates.Check out our updated forum posts as they appear over this giving season.Read through an outsider's case for CEEALAR, for example here.*Our founder and director, Greg Colbourn, has pledged to match-fund up to 25,000. 50,000 extends our runway until the end of May, giving us the chance to further build the case for CEEALAR and apply to another grant round.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
CEEALAR https://forum.effectivealtruism.org/posts/uWbY8B4XukB5ds734/ceealar-is-funding-constrained Thu, 09 Nov 2023 12:48:51 +0000 EA - CEEALAR is funding constrained by CEEALAR CEEALAR https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:48 no full 19
FST9XBYgbbjyN79DF EA - Announcing Our 2023 Charity Recommendations by Animal Charity Evaluators Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Our 2023 Charity Recommendations, published by Animal Charity Evaluators on November 9, 2023 on The Effective Altruism Forum.Every year, Animal Charity Evaluators (ACE) spends several months evaluating animal advocacy organizations to identify those that work effectively and are able to do the most good with additional donations. Our goal is to help people help animals by providing donors with impactful giving opportunities that can reduce animal suffering to the greatest extent possible. We are excited to announce that this year, we have selected six recommended charities.In previous years, we have categorized our recommended charities into two separate tiers: Top and Standout. This year, we have decided to move to only one tier: Recommended Charities. Having just one tier more fairly represents charities and better supports a pluralistic, resilient, and impactful animal advocacy movement. We expect it will also increase our ability to raise funds for the most important work being done to reduce animal suffering. Additionally, this shift will allow us to make better-informed grants to each charity and reduce time spent on administrative tasks.In 2023, we conducted comprehensive evaluations of 14 animal advocacy organizations that are doing promising work. We are grateful to all the charities that participated in this year's charity evaluations. While we can only recommend a handful of charities each year, we believe that all the charities we evaluate are among the most effective in the animal advocacy movement. However, per our evaluation criteria, we estimate that additional funds would have marginally more impact going to our Recommended Charities, making them exceptional giving opportunities.Faunalytics, The Humane League, and Wild Animal Initiative have all retained their status as Recommended Charities after being re-evaluated this year. Newly evaluated charities that join their ranks are Legal Impact for Chickens, New Roots Institute, and Shrimp Welfare Project.The Good Food Institute, Fish Welfare Initiative, Dansk Vegetarisk Forening, Çiftlik Hayvanlarını Koruma Derneği and Sinergia Animal have all retained their recommended charity status from 2022.Below, you will find a brief overview of each of ACE's Recommended Charities. For more details, please check out our comprehensive charity reviews.Recommended in 2023Faunalytics is a U.S.-based organization that connects animal advocates with information relevant to advocacy. Their work mainly involves conducting and publishing independent research, working directly with partner organizations on various research projects, and promoting existing research and data for animal advocates through their website's content library. Faunalytics has been a Recommended Charity since December 2015. To learn more, read our 2023 comprehensive review of Faunalytics.Legal Impact for Chickens (LIC) works to make factory-farm cruelty a liability in the United States. LIC files strategic lawsuits for chickens and other farmed animals, develops and refines creative methods to civilly enforce existing cruelty laws in factory farms, and sues companies that break animal welfare commitments.LIC's first lawsuit, the shareholder derivative case against Costco's executives for chicken neglect, was featured on TikTok and in multiple media outlets, including CNN Business, Fox Business, The Washington Post, and Meatingplace (an industry magazine for meat and poultry producers). This is the first year that Legal Impact for Chickens has become a Recommended Charity. To learn more, read our 2023 comprehensive review of Legal Impact for Chickens.New Roots Institute (formerly known as Factory Farming Awareness Coalition, or FFAC) is a U.S.-based organization that works to empower the next generation to end factory farming. The...

]]>
Animal Charity Evaluators https://forum.effectivealtruism.org/posts/FST9XBYgbbjyN79DF/announcing-our-2023-charity-recommendations Thu, 09 Nov 2023 12:48:12 +0000 EA - Announcing Our 2023 Charity Recommendations by Animal Charity Evaluators Animal Charity Evaluators https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:49 no full 20
zQBE4ZwCNtZwohLtb EA - Who is Sam Bankman-Fried (SBF) really, and how could he have done what he did? - three theories and a lot of evidence by spencerg Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who is Sam Bankman-Fried (SBF) really, and how could he have done what he did? - three theories and a lot of evidence, published by spencerg on November 11, 2023 on The Effective Altruism Forum.As you may know, Sam Bankman-Fried ("SBF") was convicted of seven counts of fraud and conspiracy. He now faces the potential of more than 100 years in prison.I've been trying to figure out how someone who appears to believe deeply in the principles of effective altruism could do what SBF did. It has been no surprise to me to see that the actions he was convicted of are nearly universally condemned by the EA community. Could it be that he did not actually believe in EA ideas despite promoting EA and claiming to believe in it? If he does believe in EA principles, there seems to be a genuine mystery here as to why he took those actions.There are a few theories that could potentially explain the seeming mystery. In this post, I'll discuss the strongest evidence I've been able to find for and against each of the three theories that I find most plausible.It seems important to me to seek an understanding of the deeper causes of this disaster to help prevent future such disasters. It also seems to me to be essential for the EA community, in particular, to understand why this happened. An understanding of the disaster and the person behind it might be necessary (though probably not sufficient) for the community to prevent similar events from happening in the future.A few important things before we begin the analysisIn this piece, I assume that SBF committed all the crimes that he was convicted of. If it somehow turns out that SBF isn't guilty of these crimes, then some parts of this post would not apply (and you should consider most of this post withdrawn).It's also important to note that the opinions I express in this post are, for the most part, informed by studying publicly available details about SBF and the FTX collapse, as well as confidential conversations I've had with a number of different people who knew SBF (some who worked with him, some who knew him as a friend).I promised confidentiality to these people to help them be more comfortable sharing information honestly with me, so I won't use their names or other indications of how they know him. I shared this post with them prior to publishing it to help reduce the chance that I introduced errors in what they said to me.I've also pulled quotes from the new book about SBF, Going Infinite, and from podcast interviews with its author, Michael Lewis. Lewis spent a lot of time with SBF (starting from late April 2022 and continuing into SBF's period under house arrest), so he had a lot of time to form impressions of him.I also had some interactions with SBF myself, which I discuss in more detail in my podcast episode about the FTX disaster. The podcast episode is a good place to start if you are fuzzy on the basic facts of what happened during the FTX disaster and want to know more. I also recorded an earlier podcast episode with SBF about crypto tech (prior to accusations of wrongdoing against him), but it doesn't provide much information relevant to the topic of this post. My first-hand experience with him was limited; it informs my viewpoint on him much less than other evidence I've collected.I am very interested in hearing your own arguments or evidence with regard to which theory you think is most likely about the FTX calamity (whether it is one of the three outlined below or another theory altogether).Defining DAEThroughout this post, I'll use the term DAE ("deficient affective experience") to refer to anyone who has at least one of these two traits:Little or no ability or tendency to experience affective (i.e., emotional) empathy in response to someone else's sufferingLittle or no ability or tendency to experi...

]]>
spencerg https://forum.effectivealtruism.org/posts/zQBE4ZwCNtZwohLtb/who-is-sam-bankman-fried-sbf-really-and-how-could-he-have Sat, 11 Nov 2023 22:17:38 +0000 EA - Who is Sam Bankman-Fried (SBF) really, and how could he have done what he did? - three theories and a lot of evidence by spencerg spencerg https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:45 no full 1
y8Mu8EZtyJZeHAnra EA - Memo on some neglected topics by Lukas Finnveden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Memo on some neglected topics, published by Lukas Finnveden on November 11, 2023 on The Effective Altruism Forum.I originally wrote this for theMeta Coordination Forum. The organizers were interested in a memo on topics other than alignment that might be increasingly important as AI capabilities rapidly grow - in order to inform the degree to which community-building resources should go towards AI safety community building vs. broader capacity building. This is a lightly edited version of my memo on that. All views are my own.Some example neglected topics (without much elaboration)Here are a few example topics that could matter a lot if we're inthe most important century, which aren't always captured in a normal "AI alignment" narrative:The potential moral value of AI. [1]The potential importance of making AI behave cooperatively towards humans, other AIs, or other civilizations (whether it ends up intent-aligned or not).Questions about how human governance institutions will keep up if AI leads to explosive growth.Ways in which AI could cause human deliberation to get derailed, e.g. powerful persuasion abilities.Positive visions about how we could end up on a good path towards becoming a society that makes wise and kind decisions about what to do with the resources accessible to us. (Including how AI could help with this.)(More elaboration on thesebelow.)Here are a few examples of somewhat-more-concrete things that it might (or might not) be good for some people to do on these (and related) topics:Develop proposals for how labs could treat digital minds better, and advocate for them to be implemented. (C.f.this nearcasted proposal.)Advocate for people to try to avoid building AIs with large-scale preferences about the world (at least until we better understand what we're doing). In order to avoid a scenario where, if some generation of AIs turn out to be sentient and worthy of rights, we're forced to choose between "freely hand over political power to alien preferences" and "deny rights to AIs on no reasonable basis".Differentially accelerate AI being used to improve our ability to find the truth, compared to being used for propaganda and manipulation.E.g.: Start an organization that uses LLMs to produce epistemically rigorous investigations of many topics. If you're the first to do a great job of this, and if you're truth-seeking and even-handed, then you might become a trusted source on controversial topics. And your investigations would just get better as AI got better.E.g.: Evaluate and write-up facts about current LLM's forecasting ability, to incentivize labs to make LLMs state correct and calibrated beliefs about the world.E.g.: ImproveAI ability to help with thorny philosophical problems.Implications for community building?…with a focus on "the extent to which community-building resources should go towards AI safety vs. broader capacity building".Ethics, philosophy, and prioritization matter more for research on these topics than it does for alignment research.For some issues in AI alignment, there's a lot of convergence on what's important regardless of your ethical perspective, which means that ethics & philosophy aren't that important for getting people to contribute. By contrast, when thinking about "everything but alignment", I think we should expect somewhat more divergence, which could raise the importance of those subjects.For example:How much to care about digital minds?How much to focus on "deliberation could get off track forever" (which is of great longtermist importance) vs. short-term events (e.g. the speed at which AI gets deployed to solve all of the world's current problems.)But to be clear, I wouldn't want to go hard on any one ethical framework here (e.g. just utilitarianism). Some diversity and pluralism seems ...

]]>
Lukas Finnveden https://forum.effectivealtruism.org/posts/y8Mu8EZtyJZeHAnra/memo-on-some-neglected-topics Sat, 11 Nov 2023 13:44:51 +0000 EA - Memo on some neglected topics by Lukas Finnveden Lukas Finnveden https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:58 no full 2
YYnjHt5YzuHSH7oRR EA - Kids or No Kids by KidsOrNoKids Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kids or No Kids, published by KidsOrNoKids on November 12, 2023 on The Effective Altruism Forum.This post summarizes how my partner and I decided whether to have children or not. We spent hundreds of hours on this decision and hope to save others part of that time. We found it very useful to read the thoughts of people who share significant parts of our values on the topic and thus want to "pay it forward" by writing this up.In the end, we decided to have children; our son is four months old now and we're very happy with how we made the decision and with how our lives are now (through a combination of sheer luck and good planning). It was a very narrow and very tough decision though.Both of us care a lot about having a positive impact on the world and our jobs are the main way we expect to have an impact (through direct work and/or earning to give). As a result, both of us are quite ambitious professionally; we moved multiple times for our jobs and work 50-60h weeks. I expect this write-up to be most useful for people for whom the same is true.Bear in mind this is an incredibly loaded and very personal topic - some of our considerations may seem alienating or outrageous. Please note I am not at all trying to argue how anyone should make their life decisions! I just want to outline what worked well for us, so others may pick and choose to use part of that process and/or content for themselves.Finally, please note that while many readers will know who I am and that is fine, I don't want this post to be findable when googling my name. Thus, I posted it under a new account and request that you don't use any personal references when commenting or mentioning it online.Process - how we decidedWe had many sessions together and separately, totaling hundreds of hours over the course of 2 years, on this decision and the research around it. My partner tracked 200 toggl hours, I estimate I spent a bit less time individually but our conversations come on top. In retrospect, it seems obvious, but it took me longer than I wish it would have to realize that this is important, very hard work, for which I needed high-quality, focused work time rather than the odd evening or lazy weekend.We each made up our minds using roughly the considerations below - this took the bulk of the time. We then each framed our decision as "Yes/No if xyz", for instance, "Yes if I can work x hours in a typical week", and finally "negotiated" a plan under which we could agree on the conclusion "yes" or "no".In this process, actually making a timetable of what a typical day would look like in 30-minute intervals was very useful. I'm rather agreeable, so I am likely to produce miscommunications of the sort "When you said "sometimes", I thought it meant more than one hour a day" - writing down what a typical day could look like helped us catch those. When hearing about this meticulous plan, many people told me that having kids would be a totally unpredictable adventure.I found that not to be true - my predictions about what I would want, what would and wouldn't work, etc. largely held true so far. My suspicion is most people just don't try as hard as we did to make good predictions. A good amount of luck is of course also involved - we are blessed with a healthy, relatively calm and content baby so far. Both of us feel happier than predicted, if anything.I came away from this process with a personal opinion: If it seems weird to spend hours deliberating and negotiating over an Excel sheet with your partner, consider how weird it is not to do that - you are making a decision that will cost you hundreds of thousands of dollars and is binding for years; if you made this type of decisions at work without running any numbers, you'd be out of a job and likely in court pretty quickly.In our case, if you bu...

]]>
KidsOrNoKids https://forum.effectivealtruism.org/posts/YYnjHt5YzuHSH7oRR/kids-or-no-kids Sun, 12 Nov 2023 20:20:09 +0000 EA - Kids or No Kids by KidsOrNoKids KidsOrNoKids https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 19:06 no full 2
gfpLkgSY6Zx45CZSn EA - Webinar invitation: learn how to use Rethink Priorities' new prioritization tool by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Webinar invitation: learn how to use Rethink Priorities' new prioritization tool, published by Rethink Priorities on November 12, 2023 on The Effective Altruism Forum.What do your views imply about the relative cost-effectiveness of various causes?WithGiving Tuesday coming up, it's worth tackling this question. Rethink Priorities' newcross-cause cost-effectiveness model (CCM) might be able to help.RP'sWorldview Investigations Team created the CCM as a part of its project onCauses and uncertainty: Rethinking value in expectation.About the virtual eventOn November 28, the Worldview Investigations Team will lead a discussion that will encompass:An explanation of why they created the CCMA virtual walkthrough of the model itselfA practical workshop on how you can use the toolA question-and-answer sessionAttending from the Worldview Investigation Team will be: Philosophy ResearcherDerek Shiller, Executive Research CoordinatorLaura Duffy, and Senior Research ManagerBob Fischer.Come explore how different assumptions interact, and potentially make some surprising discoveries!DetailsThe workshop will be held on November 28 at 9 am PT / noon ET / 5 pm BT / 6 pm CET.If you're interested in attending (even if you think you can't make that particular time), please completethis form. We will send you further details as we get closer to the event.Rethink Priorities (RP) is a think-and-do tank that addresses global priorities by researching solutions and strategies, mobilizing resources, and empowering our team and others.Rachel Norman and Henri Thunberg wrote this post.If you are interested in RP's work, please visit ourresearch database and subscribe to ournewsletter.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/gfpLkgSY6Zx45CZSn/webinar-invitation-learn-how-to-use-rethink-priorities-new Sun, 12 Nov 2023 19:49:44 +0000 EA - Webinar invitation: learn how to use Rethink Priorities' new prioritization tool by Rethink Priorities Rethink Priorities https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:00 no full 3
Zt3kASwjCtMvFPrdx EA - How we work, #2: We look at specific opportunities, not just general interventions by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How we work, #2: We look at specific opportunities, not just general interventions, published by GiveWell on November 12, 2023 on The Effective Altruism Forum.This post is the second in a multi-part series, covering how GiveWell works and what we fund. The first post, on cost-effectiveness, is here. Through these posts, we hope to give a better understanding of our research and decision-making.Author: Isabel ArjmandLooking forward, not just backwardWhen we consider recommending funding, we don't just want to know whether a program has generally been cost-effective in the past - we want to know how additional funding would be used.People sometimes think of GiveWell as recommending entire programs or organizations. This was more accurate in GiveWell's early days, but now we tend to narrow in on specific opportunities. Rather than asking whether it is cost-effective to deliver long-lasting insecticide-treated nets in general, we ask more specific questions, such as whether it is cost-effective to fund net distributions in 2023 in Benue, Plateau, and Zamfara states, Nigeria, given the local burden of malaria and the costs of delivering nets in those states.Geographic factors affecting cost-effectivenessThe same program can vary widely in cost-effectiveness across locations. The burden of a disease in a particular place is often a key factor in determining overall cost-effectiveness. All else equal, it's much more impactful to deliver vitamin A supplements in areas with high rates of vitamin A deficiency than in areas where almost everyone consumes sufficient vitamin A as part of their diet.As another example, we estimate it costs roughly the same amount for the Against Malaria Foundation to deliver an insecticide-treated net in Chad as it does in Guinea (about $4 in both locations). But, we estimate that malaria-attributable deaths of young children in the absence of nets would be roughly 5 times higher in Guinea than in Chad (roughly 8.8 deaths per 1,000 per year versus roughly 1.7 per 1,000), which leads AMF's program to be much more cost-effective in Guinea.This map from Our World in Data gives a sense of how deaths from malaria vary worldwide.[3]Because cost-effectiveness varies with geography, we ask questions specific to the countries or regions where a program would take place. When we were investigating an opportunity to fund water chlorination in Malawi, for example, we wanted to know:How does baseline mortality from poor water quality in Malawi compare with that in the regions where the key studies on water chlorination took place?What is the overall morbidity burden from diarrhea in Malawi?Might people be more or less likely to use chlorinated water in this area than in the areas where the key studies took place?What does it cost to serve one person with in-line chlorination for one year? We calculate this, in part, by estimating how many people are served by each device.What proportion of the population is under the age of five? This is important to our calculations because we think young children are disproportionately susceptible to death from diarrhea.What is the baseline level of water treatment in the absence of this program?Where relevant, we also consider implementation challenges caused by security concerns or other contextual factors.Why do cost-effective funding gaps sometimes go unfilled?People are often surprised that some high-impact funding gaps, like the ones GiveWell aims to fund, aren't already filled. Of course, many high-impact opportunities are already supported by other funders, like Gavi or the Global Fund, to name just a couple examples. When we see remaining gaps, we think about how our grant might affect other funders' decisions, and whether another funder would step in to fill a particular gap if we didn't.[4]The...

]]>
GiveWell https://forum.effectivealtruism.org/posts/Zt3kASwjCtMvFPrdx/how-we-work-2-we-look-at-specific-opportunities-not-just Sun, 12 Nov 2023 19:45:00 +0000 EA - How we work, #2: We look at specific opportunities, not just general interventions by GiveWell GiveWell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:39 no full 4
AyLF2KQ8AqQuiuDLz EA - A robust earning to give ecosystem is better for EA by abrahamrowe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A robust earning to give ecosystem is better for EA, published by abrahamrowe on November 11, 2023 on The Effective Altruism Forum.(Written in a personal capacity, and not representing either my current employer or former one)In 2016, I founded Utility Farm, and later merged it with Wild-Animal Suffering Research (founded by Persis Eskander) to formWild Animal Initiative. Wild Animal Initiative is, by my estimation, a highly successful research organization.The current Wild Animal Initiative staff deserve all the credit for where they have taken the organization, but I'm incredibly proud that I got to be involved early in the establishment of a new field of study, wild animal welfare science, and to see the tiny organization I started in an apartment with a few hundred dollars go on to be recommended by ACE as a Top Charity for 4 years in a row. In my opinion, Wild Animal Initiative has become, under the stewardship of more capable people than I, the single best bet for unlocking interventions that could tackle the vast majority of animal suffering.Unlike most EA charities today, Utility Farm didn't launch with a big grant from Open Philanthropy, Survival and Flourish Foundation, or EA Funds. There was no bet made by a single donor on a promising idea. I launched Utility Farm with my own money, which I spent directly on the project. I was making around $35,000 a year at the time working at a nonprofit, and spending maybe $300 a month on the project.Then one day, a donor completely changed the trajectory of the organization by giving us around $500. It's weird looking at that event through the lens of current EA funding levels - it was a tiny bet, but it took the organization from being a side project that was cash-strapped and completely reliant on my energy and time to an organization that could actually purchase some supplies or hire a contractor for a project.From there, a few more donors gave us a few thousand dollars each. These funds weren't enough to hire staff or do anything substantial, but they provided a lifeline for the organization, allowing us to run our first real research projects and to publish our work online.In 2018, we ran our first major fundraiser. We received several donations of a few thousand dollars, and (if I recall correctly) one gift of $20,000. Soon after,EA Funds granted us $40,000. We could then hire staff for the first time, and make tangible progress toward our mission.As small as these funds were in the scheme of things, for Utility Farm, they felt sustainable. We didn't have one donor - we had a solid base of maybe 50 supporters, and no single individual dominated our funding. Our largest donor changing their mind about our work would have been a major disappointment, but not a financial catastrophe. Fundraising was still fairly easy - we weren't trying to convince thousands of people to give $25.Instead, fundraising consisted of checking in with a few dozen people, sending some emails, and holding some calls. Most of the "fundraising" was the organization doing impactful work, not endless donor engagement.I now work at a much larger EA organization with around 100x the revenue and 30x the staff. Oddly, we don't have that many more donors than Utility Farm did back then - maybe around 2-4 times as many small donors, and about the same number giving more than $1,000.This probably varies between organizations - I have a feeling that many organizations doing more direct work than Rethink Priorities have many more donors - but most EA organizations seem to have strikingly few mid-sized donors (e.g., individuals who give maybe $1,000 - $25,000).Often, organizations will have a large cohort of small donors, giving maybe $25-$100, and then they'll have 2-3 (or even just 1) giant donors, collectively giving 95%+ of the organi...

]]>
abrahamrowe https://forum.effectivealtruism.org/posts/AyLF2KQ8AqQuiuDLz/a-robust-earning-to-give-ecosystem-is-better-for-ea Sat, 11 Nov 2023 23:47:10 +0000 EA - A robust earning to give ecosystem is better for EA by abrahamrowe abrahamrowe https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:09 no full 6
MwmdpSBMuzNDmvNHp EA - The effective altruist case for pro-life/anti-abortion advocacy by Calum Miller Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The effective altruist case for pro-life/anti-abortion advocacy, published by Calum Miller on November 13, 2023 on The Effective Altruism Forum.SummaryThere is a good case that abortion is morally impermissible - or at least there is significant moral uncertainty.Even if these arguments fail, abortion could still be a matter of serious concern for EAs (e.g. because fetuses could have significant but not full moral status, or because there are ways to reduce abortions without punishing women for getting them). Put another way, even if one believes abortion is permissible, it likely remains a comparable problem to any problem of infant mortality - but with even more lost life-years, and occurring on a much larger scale than infant mortality.This is a very important problem, given the life lost and the scale of the problem - tens of millions of abortions around the world each year.Outside the US, the problem is virtually unchallenged - and even in the US, there are high-impact sectors with minimal anti-abortion sentiment or efforts.The issue is more tractable than one might think, for various reasons: it is a highly neglected area in most of the world; there are effective and popular policy interventions even in the most pro-choice countries, but even more so in more pro-life countries; progress can be made even without policy interventions; even small reductions in the abortion rate save huge numbers of lives; many people are open to changing their minds about abortion and can do so in a relatively short time; etc.If you are one of the ~15% of religious EAs, you probably have even more reason to be convinced.Sex education and contraception may or may not work depending on the case.IntroductionI realise this is a sensitive topic. In the developed world around 1 in 3 women have an abortion in their lifetime, meaning it is likely people reading this post have been, or will be, personally invested and affected by the topic of abortion. Please forgive me if I fail to address it in a sufficiently sensitive way, and know that this was not my intention. There is, of course, so much more to say about this, but I wanted to try and keep the post relatively short.I wrote most of this up a couple of years ago but never got round to posting it until Ariel Simnegar's post, which encouraged me to refine it a little and share here. Though I've always been on the fringes of EA and certainly don't consider myself to be up to date on EA thinking, it was EA thinking that originally made me passionate about this area some years ago, subsequently focusing my academic research on it.I think this is an important topic for effective altruists to wrestle with, for various reasons: a) if I am right, this one of the most important, neglected, and tractable problems facing humans in the near term; b) as Ariel Simnegar previously pointed out, effective altruists have typically been pretty open to considering neglected causes and particularly neglected communities - animals, future people, etc; c) so much popular discourse around abortion is hostile and badly reasoned, and effective altruists with a common interest in improving the world can improve the calibre of conversation a lot; d) effective altruists have also been open to the idea that morality can involve serious sacrifices of one's own welfare (even the permanent sacrifice of one's organs, for some EAs).Inevitably, as a blog post this discussion will have to cut out the large majority of relevant literature (especially on moral considerations), but I have tried to collect a load of the most commonly asked questions (along with my answers) at https://calumsblog.com/abortion-qa/. Please do get in touch if you would like further references/resources on these questions.The first-order ethics of abortionArguably, for moral uncertai...

]]>
Calum Miller https://forum.effectivealtruism.org/posts/MwmdpSBMuzNDmvNHp/the-effective-altruist-case-for-pro-life-anti-abortion Mon, 13 Nov 2023 20:04:32 +0000 EA - The effective altruist case for pro-life/anti-abortion advocacy by Calum Miller Calum Miller https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 31:56 no full 2
69F8ytzr9665WAjXs EA - AMA: The Humane League UK - farmed animal welfare, our funding gap and match funding campaign. Ask us anything. by Gavin Chappell-Bates Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: The Humane League UK - farmed animal welfare, our funding gap and match funding campaign. Ask us anything., published by Gavin Chappell-Bates on November 13, 2023 on The Effective Altruism Forum.Hi,We're The Humane League UK (THL UK), an animal protection charity that exists to end the abuse of animals raised for food. You're free to ask us anything, just post your question as a comment. We'll start answering questions on Friday 17th November, and we will continue answering on Monday 20th and Tuesday 21st November.We might not be able to answer all the questions we receive but we will try to answer as many as we can.Our funding gap and match funding campaignWe have already strategically planned our activities for this financial year (2023-24) which we are confident will bring about significant change for farmed animals. However, we currently have a shortfall of approximately 280k.To help us close this gap we will be running a match funding campaign from 22nd-28th November. Donors from the Founders Pledge community have kindly agreed to match fund all donations during this period up to the value of 30,000, meaning we have the opportunity to raise 60,000 in total to support our work.If you are considering donating to support farmed animal welfare, this would be an effective way to do so, both doubling your donation and helping us reduce our funding gap, thus enabling us to continue with our planned activities.Details of the campaign will be available on ourwebsite from the 22nd November, including a link to donate. However, if you would like to discuss making a significant gift during the campaign please email Gavin atgcbates@thehumaneleague.org.ukOur focus for the rest of this year is on:Securing commitments from leading UK supermarkets to adopt the Better Chicken Commitment.Continuing to push for legislative changes to improve the welfare of chickens raised for meat - our case against Defra will be heading to court again for a second hearing in Spring 2024.Following therelease of the Animal Welfare Committee's (AWC) opinion on fish at the time of slaughter, continuing to push for fishes to finally be given increased protection in UK law.About The Humane League UKTHL UK works relentlessly to spare farmed animals from suffering and push for institutional and individual change. By using data-driven, cost-effective strategies to expose the horrors of modern factory farms, we strive to eliminate the worst cruelties of industrial animal agriculture, creating the biggest impact for the greatest number of farmed animals.We strategically target companies and pressure them to eliminate the worst and most widespread abuses in their supply chain. Through focussed campaigns we influence them to commit to animal welfare improvements and hold them accountable. We also work to enact laws that ban the confinement and inhumane treatment of animals.To bolster our corporate campaigning, we train and mobilise volunteer activists across the country to drive our campaigns forward. They help us put vital pressure on companies and raise awareness of factory farming amongst the general public.You can read more about us and our impact in our2022-23 Annual Report or visit our website:thehumaneleague.org.ukIf you are interested in hearing more, pleasesubscribe to our newsletter.The Impact of Our WorkTHL UK is distinguished from other British animal protection organisations by the effectiveness of our corporate campaigns and the relentlessness of our staff and volunteers, making us a respected leader in the global movement. With our research-backed strategy of combining corporate campaigns, grassroots legislative advocacy, and movement building, we are mending our broken food system.We focus on broiler chickens, hens and fish as they are farmed in the largest numbe...

]]>
Gavin Chappell-Bates https://forum.effectivealtruism.org/posts/69F8ytzr9665WAjXs/ama-the-humane-league-uk-farmed-animal-welfare-our-funding Mon, 13 Nov 2023 18:49:25 +0000 EA - AMA: The Humane League UK - farmed animal welfare, our funding gap and match funding campaign. Ask us anything. by Gavin Chappell-Bates Gavin Chappell-Bates https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:28 no full 4
xbqiYyj6kdmraEqoX EA - Rethink Priorities' 2023 Summary, 2024 Strategy, and Funding Gaps by kierangreig Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities' 2023 Summary, 2024 Strategy, and Funding Gaps, published by kierangreig on November 15, 2023 on The Effective Altruism Forum.The remainder of this post is the executive summary of a longer document available in fullhere.Executive SummaryRethink Priorities (RP) is a research and implementation group. We research pressing opportunities and implement solutions to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to address key issues. We do this work in close partnership with a variety of organizations including foundations and impact-focused nonprofits. This year's highlights include:Early traction we have had on AI governance workExploring how risk aversion influences cause prioritizationCreating a cost-effectiveness tool to compare different causesFoundational work on shrimp welfareConsulting with GiveWell and Open Philanthropy (OP) on top global health and development opportunitiesKey updates for us this year include:Launching a newWorldview Investigations team, who, over the course of the year, rounded off initial work on theMoral Weight Project prior to completing a sequence on "Causes and Uncertainty: Rethinking Value in Expectation"Launching theInstitute for AI Policy & Strategy (IAPS), which evolved out of our AI Governance and Strategy Team. More information can be found atIAPS's announcement postCommencing four new fiscal sponsorships for unaffiliated groups (e.g.,Apollo Research and theEffective Altruism Consulting Network)Fundraising was comparatively more difficult this year, and we think that funding gaps are the key bottleneck on our impact.All our published research can be foundhere.[1] Over 2023, we worked on approximately 160 research pieces or outputs. Our research directly informed grants made by other organizations of a volume at least similar to the one of our operating budget (i.e., over $10M).[2] Further, through our Special Projects program, we supported11 external organizations and initiatives with $5.1M in associated expenditures. We have reason to think we may be influencing grantmakers, implementers, and other key stakeholders in actions that aren't immediately captured in either that grants influenced or special projects expenditures sum. We have also completed work for ~20 different clients, presented at more than 15 academic institutions, and organized six of our own in-person convenings of stakeholders.By the end of 2023, RP will have spent ~$11.4M.[3] We predict a revenue of ~$11.7M over 2023, and predict assets of ~$10.3M at year's end.Some of RP'skey strategic priorities for 2024 are: 1) continuing to strengthen our reputation and relations with key stakeholders, 2) diversifying our funding and stakeholders to scale our impact, and 3) investing greater resources into other parts of our theory of change beyond producing and disseminating research to increase others' impact. To accomplish our strategic priorities, we aim to hire for new senior positions.Some of our tentative plans for next year are:Creating key pieces of animal advocacy research such as a cost-effectiveness tracking database for chicken welfare campaigns, and annual state of the movement report for the farmed animal advocacy movement.Addressing perhaps critical windows for AI regulations by producing and disseminating research on compute governance, and lab governance.Consulting with more clients on global health and development interventions to attempt to shift large sums of money in effective fashion.Helping launch new projects that aim to reduce existential risk from AI.Being an excellent option for any promising projects seeking a fiscal sponsor.Providing rapid surveys and analysis to inform high priority strategic questions.Examining how ...

]]>
kierangreig https://forum.effectivealtruism.org/posts/xbqiYyj6kdmraEqoX/rethink-priorities-2023-summary-2024-strategy-and-funding Wed, 15 Nov 2023 22:11:01 +0000 EA - Rethink Priorities' 2023 Summary, 2024 Strategy, and Funding Gaps by kierangreig kierangreig https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:41 no full 2
i6wCbnj8SYcHSYAK9 EA - HearMeOut - Networking While Funding Charities (Looking for a founder and beta users) by Brad West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: HearMeOut - Networking While Funding Charities (Looking for a founder and beta users), published by Brad West on November 15, 2023 on The Effective Altruism Forum.Two extremely important things are our time and our connections to others who can help advance shared goals. Significant time is wasted on low value introductions and meetings. But at the same time, projects are delayed, don't succeed, or don't reach their full potential because critical connections are never made. We are looking to build HearMeOut, a solution that will save your valuable time while facilitating valuable connections, by asking people to donate to a charity for your time, and/or enabling you to connect with others by donating to a charity.HearMeOut is the platform where you can book time with someone by donating to the chosen charity of the person who you want to meet with. For example: you sell software that you're confident company X wants, and you're willing to donate 500 to The Against Malaria Foundation to pitch it to them for one hour. If you want to cut down on cold emails and meetings, you can tell anyone that you only meet with people willing to donate a certain amount to the charity you chose (e.g.I'm a founder and anyone who wants to sell me something can do that if they donate 100 USD to AMF). You pay for meetings where you're confident you bring something valuable, and you can be assured the meetings scheduled with you are with people who value your time correctly and don't intend to waste your time.We believe the net result to be meetings with a higher average value- eliminating intros with those who don't value your time, while enabling those who demonstrate that they do to get on your calendar- with charities benefiting from the signals. It's close to zero cost to build and test this platform with some initial users, and it could be very scalable. We are seeking someone to lead this project and initial users who want to get donations before they take a cold meeting.What HearMeOut OffersAbility for people ("Seekers") to obtain introductions to people that could be helpful to their projects or goals by donating to a charity.Ability for people ("Listeners") to help others that can credibly signal that they will benefit from their help because they are gated behind a cost.Charities can be the beneficiaries of these signaling costs.Unfortunately, between working my own full time job as a lawyer and running a nonprofit (website will be changed soon- renaming to "Profit for Good Initiative"), I do not currently have the bandwidth to run such a project. Vincent van der Holst, founder of BOAS, also believes in the potential of this project, but is similarly unable to run this project because he is running the business. Both can advise the business and help attract resources. Vin already has connections to a designer and developer who are willing to help build the first version at no/low cost.How Would HearMeOut Work?Thanks to Jeff Reasor for developing some mockups of what HearMeOut might look like.HearMeOut would provide a platform for Listeners: those who want to spend their time potentially helping others by providing advice, funding projects, connecting people together who could be helpful, using their influence to advance a shared goal, purchasing products or services that could be beneficial to the Listener, and/or otherwise helping people.Listeners would be able to choose the charity(s) that would benefit from the fee to connect with them, the time increments they could make available, as well as the donation associated with various increments.This donation cost would serve a dual-function: it not only serves as a way to raise money for a charitable cause the listener cares about, but also serves a screening function- the cost associated with the audience will lik...

]]>
Brad West https://forum.effectivealtruism.org/posts/i6wCbnj8SYcHSYAK9/hearmeout-networking-while-funding-charities-looking-for-a Wed, 15 Nov 2023 17:47:04 +0000 EA - HearMeOut - Networking While Funding Charities (Looking for a founder and beta users) by Brad West Brad West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:20 no full 4
sQg4Hi5D2oD6xrbTY EA - Notes on not taking the GWWC pledge (yet) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Notes on not taking the GWWC pledge (yet), published by Lizka on November 15, 2023 on The Effective Altruism Forum.This is a belated (and rough/short!) post forEffective Giving Spotlight week. The post isn't meant to be a criticism of GWWC or of people who have taken the pledge[1] - just me sharing my thoughts in the hope that they're useful to others or that I'll get useful suggestions. Also, since I drafted this, there's beena related discussion here.I've sometimes thought about taking aGWWC pledge, but haven't taken one yet and don't currently think I should. The TL;DR is that I'm worried about (1) runway and (2) my life changing in the future, such that donating more would be unsustainable or would trade off in bad-from-the-POV-of-my-EA-values with direct work.Longer notes/thoughtsI'm currently prioritizing "direct work". Thatdoesn't mean that I can't donate (and in factI do and enjoy doing it when I do), but I'm worried about committing to donating in a way that would lead me to make poor tradeoffs in the future. Signing the pledge seems like a serious commitment.In particular, I'm thinking about:1. Having enoughrunway[2]Runway seems important (and has been discussed a fair bit before, see e.g.here andmore recently).… for potentially starting something on my own, or taking a poorly paid (or unpaid) opportunity to upskillE.g. going into a Master's program, taking a sabbatical to see if I can build up a new idea, etc.… for epistemics & independenceE.g. if I was worried about EV/CEA/the usefulness of my work, I can imagine leaving without another opportunity lined up, so I'm relatively free to consider what's wrong at EV/CEA (otherwise this would be really stressful to think about). If I had no runway at all, I'd have a much harder time thinking about leaving.To the extent that donations trade off building runway, I should factor that in.I.e. if the alternative to donations right now is saving money, and I'm below where I should be for having enough runway, that means donations are in some sense more costly. It doesn't mean I shouldn't donate in any situation until I've hit my runway target, just that the bar is probably higher for me right now.How much runway someone should have (i.e. the shape of the "usefulness of runway" curve[3]) is confusing to me - I'd be interested in hearing what others think.2. My life changing in the future, such that donating more would be unsustainable or would trade off in bad-from-the-POV-of-my-EA-values with direct workI have a family that I may need to support in some circumstances. I've thought about (not-too-unlikely) scenarios in the coming years where I might face a choice between having drastically less time for my work, spending significant amounts of money, or not fulfilling my family obligations in a way that I think is bad. (Being there for my family is one of my core values/goals.)And I probably want kids. If I have a child (or multiple children), I think there are many worlds where it would be better for me to be able to do something like hire a part-time nanny or pay for other services that would allow me to work more. (See this recent post!)Not committing to donating a certain amount every year might mean I can make better tradeoffs in situations like these.3. Some worries about my thinkingMy reasoning might bemotivated: I might be fooling myself into thinking that I shouldn't take the pledge because that would be less stressful for me.Value drift: I'm worried that my future self might not donate for reasons that I don't endorse. But I'm not too worried about that right now.^I'm really grateful to (and impressed by) the folks who've taken a donation pledge and who donate a lot.^Runway is less specifically related to the question of whether to take a pledge, vs. just the choice of wh...

]]>
Lizka https://forum.effectivealtruism.org/posts/sQg4Hi5D2oD6xrbTY/notes-on-not-taking-the-gwwc-pledge-yet Wed, 15 Nov 2023 16:01:31 +0000 EA - Notes on not taking the GWWC pledge (yet) by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:08 no full 6
MmSZiKeQZ5FvCiZSB EA - Maternal Health Initiative - Marginal Funding & 1st Year in Review by Ben Williamson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Maternal Health Initiative - Marginal Funding & 1st Year in Review, published by Ben Williamson on November 15, 2023 on The Effective Altruism Forum.IntroductionThe Maternal Health Initiative (MHI) works in northern Ghana delivering a light-touch programme of training integrating contraceptive counselling into routine care to increase informed choice and uptake of family planning methods. We deliver this work in partnership with two local NGOs and the Ghana Health Service, launching through the 2022 Charity Entrepreneurship Incubation Programme.This post was written by Sarah Eustis-Guthrie and Ben Williamson, MHI's co-founders. It is split into two parts:A review of MHI's first year of operation - what we've done; the impact we've had; our plans for 2024An overview of the marginal value of funding MHI - the funding needs for an organisation like MHI; the funding landscape for post-seed organisations; why we think donating to MHI is a particularly good bet (and why it might not be).Part 1: MHI's First Year in ReviewTL;DRIn our first year, we…Developed and tested two evidence-based models of care with anestimated cost-effectiveness of $100/DALY on health effects alone, competitive withGiveWell's top charitiesTrained providers at 18 facilities across 2 regions of Ghana, reaching an estimated 40,000 women over the next yearConducted in-depth on-the-ground research, surveying 836 women and 148 providers & facility directorsSuccessfully increased the frequency of 1:1 family planning counselling by 4.3x at postnatal care and group family planning messaging by 8x at immunisation sessions, with results for shifts in contraceptive uptake due in December 2023.We're currently awaiting the full results from our pilot. With strong results, we plan to scale our work through 2024 in partnership with the Ghana Health Service as we build towards government adoption of our model of care.Who are MHI?Maternal Health Initiative is an early-stage global health charity with a focus on healthcare worker training and access to family planning. MHI was born out of research conducted by Charity Entrepreneurship identifying postpartum (post-birth) family planning as among the most cost-effective and evidence-based approaches for improving global health.Our team now includes Sofia Martinez Galvez as our Program Officer, Sulemana Hikimatu Tibangtaba as our Training Facilitator, and Enoch Weyori and Racheal Antwi as Project Officers through our local implementing partners,Norsaac andSavana Signatures.What we doWe train midwives and nurses in the integration of two new models of family planning counselling developed by MHI into the standard check-ups mothers and their children receive in the months after giving birth.In doing so, our work increases postpartum contraceptive uptake and decreases the frequency of short-spaced births. Pregnancies that occur less than two years apart are associated with a 32% higher rate of maternal mortality and 18% higher rate of infant mortality (Conde-Agudelo 2007;Kozuki 2013). Despite these risks, contraceptive usedrops by two-thirds in the early postpartum period.Integrating high-quality counselling into routine care addresses multiple barriers to contraceptive uptake. First, mothers do not need to travel to a facility specifically for family planning. This means that they can receive confidential information and that they are spared the costs - both in time and money - of a separate visit.Second, many women express significant concerns around side effects and health consequences from family planning. High-quality counselling ensures women receive counselling on multiple methods - helping to find a method that avoids the side effects they may be concerned about - while addressing the myths and misconceptions that can drive opposition to ...

]]>
Ben Williamson https://forum.effectivealtruism.org/posts/MmSZiKeQZ5FvCiZSB/maternal-health-initiative-marginal-funding-and-1st-year-in Wed, 15 Nov 2023 13:57:33 +0000 EA - Maternal Health Initiative - Marginal Funding & 1st Year in Review by Ben Williamson Ben Williamson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:50 no full 9
pj8AkRdhwtnEhNhCW EA - How would your project use extra funding? (Marginal Funding Week) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How would your project use extra funding? (Marginal Funding Week), published by Lizka on November 15, 2023 on The Effective Altruism Forum.It'sMarginal Funding Week until Tuesday, 21 November! (To decidewhere to donate andhow to vote, it's really helpful to know how extra funding would be used.)If your project is fundraising, you could write a full post on this topic or you can just add a quick note here in an "Answer" to this question.What you might include:The name of the project you're representing, ideally with a link to previous Forum discussion/the Forum topic page or your website, and your role at the project.A description of how the project might use extra donations.Seethis post for inspiration.Maybe also:A way for people to donate, or a link tothe relevant fundraiser from here.More information about your work, like impact evaluations, cost-effectiveness estimates, links to retrospectives, etc.Anything else you want to share!Consider upvoting answers you appreciate and asking follow-up questions if you still have uncertainties (although I should flag that your questions might not get answered - some people might not have capacity to answer follow-up questions).If you don't represent a project but have an informed guess about how a project might use extra funding, you could share that as a comment. (Please make it clear that you're guessing, though - consider sharing the sources you're inferring from.)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Lizka https://forum.effectivealtruism.org/posts/pj8AkRdhwtnEhNhCW/how-would-your-project-use-extra-funding-marginal-funding Wed, 15 Nov 2023 11:07:58 +0000 EA - How would your project use extra funding? (Marginal Funding Week) by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:34 no full 10
coE9aFWns5opyZQ6X EA - Funding AI Safety political advocacy in the US: Individual donors and small donations may be especially helpful by Holly Elmore Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding AI Safety political advocacy in the US: Individual donors and small donations may be especially helpful, published by Holly Elmore on November 15, 2023 on The Effective Altruism Forum.IMPORTANT: This post refers to US laws and tax statuses. It is not a substitute for tax advice from an accountant or tax lawyer- just some general information that I've learned in the last year receiving donations and grants as an individual and working with 501(c)4 organizations that may help point you in a more favorable direction. The US tax code is tricky so you must not take this post alone as guidance in making your tax or donation decisions.It's hard to fund political activity in EA. We don't have the infrastructure yet. Most EA grantors are 501(c)(3) organizations withlimits on how much "lobbying" or "attempts to influence legislation" they can fund. Many of those orgs have gone a step further and restricted their donations to 501(c)(3) charitable or organizational purposes only. For instance, althoughManifund is able to fund my advocacy activities as long they don't make up a "substantial" part of the grants they fund, and ultimately drew up a contract for me that reflected that, the original Manifund applicant contract I was presented with specifically requires the signatory to be doing 501(c)(3) activities.Individuals can give money to whomever they want, but it's only tax-deductible if it goes totax-exempt entities with a 501(c)(3) designation. Donations to501(c)(4) social welfare orgs are not tax-exempt, nor are donations to individuals.Tax-exempt status matters less that you might think for the small-time donor. For instance, as I know from giving my Giving What We Can donations, it doesn't even matter if they were tax deductible unless your donations exceed the standard deduction. According toNerdWallet, "The 2022 standard deduction is $12,950 for single filers, $25,900 for joint filers or $19,400 for heads of household. For the 2023 tax year, those numbers rise to $13,850, $27,700 and $20,800, respectively." (Here is theIRS tool for calculating your standard deduction.) If your donations don't exceed these amounts, you should consider that, tax-wise, you're in a better position than large foundations to donate to political advocacy or lobbying.Giving individuals gifts as opposed to grants is a much more favorable tax situation for the individual, who willgenerally not have to pay tax on gifts received butdoes have to pay tax on grants received. The giver has to pay gift tax, but depending on the situation you may prefer paying gift tax to overhead being taken out of the donation to run the granting program and the (individual) recipient being taxed on the grant. Consider that, if you are donating to an org so they can support individuals, you might want to cut out the middleman.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Holly_Elmore https://forum.effectivealtruism.org/posts/coE9aFWns5opyZQ6X/funding-ai-safety-political-advocacy-in-the-us-individual Wed, 15 Nov 2023 10:05:14 +0000 EA - Funding AI Safety political advocacy in the US: Individual donors and small donations may be especially helpful by Holly Elmore Holly_Elmore https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:04 no full 11
A9o4EjFbPcgfsFtMx EA - Pitfalls to consider when community-building with young people by frances lorenz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pitfalls to consider when community-building with young people, published by frances lorenz on November 14, 2023 on The Effective Altruism Forum.This is a quick outline of two worries that come up for me when I consider EA's focus on community-building amongst university-age people, sometimes younger. I am mostly focussed on possible negative consequences to young people rather than EA itself. I don't offer potential solutions to these worries, but rather try to explain my thinking and then pose questions sitting at the top of my mind.IntroAt a past lunch with coworkers, I brought up the topic of, "Sprout EAs". Currently, this is the term I'm using to describe people who have spent their entire full-time professional career in the EA ecosystem, becoming involved at university-age, or occasionally, high school-age.[1]Anyways, there are two things I worry about with this group:Worry one: Sprout EAs stay in EA because it is often easier to stay in things than to leave, especially when you're youngThere's your standardstatus quo bias that can get particularly salient around graduation time. At that point, many people are under-resourced and pushing towards more stable self-reliance, uncertain what next steps to take, relatively early in their journey of life and their professional career. Many undergraduate students are familiar with the, "unsure what to do next? Just do grad school!" meme, because when so much of your adult life is ahead of you and you're confused, it's enticing to do more of what you know.In a similar vein: I think those entering the professional world, who have become heavily embedded in EA during their time as a student, have a lot of force behind them pushing them to remain in the EA ecosystem. Maybe this doesn't really matter, because maybe lots of them will find jobs they really enjoy and have an impact and develop into their adult life, and it's all good. And also, maybe it's kind of a moot point because, you have to choose something.But, if EA is going to put concerted effort into community building on university campuses, and sometimes with high school students, these are probably important dynamics to think about. Additionally, EA has some unique and potent qualities that can grab young people:It can offer a very clear career-path, which is incredibly comfortingIt can offer a sense of meaningIt can offer a social communityAll these things have the potential to make "off-boarding" from EA extra difficult, especially at a time in life when people generally have less internal, social, experiential, and material resources. I worry about young people who could gain a lot of personal benefit from "off-boarding" or just distancing a bit more from EA, yet struggle to do so (for reasons of the flavour described above) or forget this is even an option/find it too mentally aversive to consider.Worry two: EA offers young people things it isn't "trying to" or "built to," which can lead to negative outcomes for individualsI think this is an important point that can get muddled. There's the thing EA "actually is," which is debatable and a bit abstract. It's a community, an idea, maybe a question? It's not a solved, prescriptive, infallible philosophy. It is, maybe, a powerful framework with a highly active professional and social community built around it, attempting to do good. But the way it can hit people differs quite a bit. No one can control if EA fills holes in people's lives, even if that isn't an express or even desirable goal.On one level, EA can easily hit as a straightforward career plan and life purpose that young people can scoop up and run with, if they're positioned to do so. That anyone can scoop up, of course. But young people, being young and often more impressionable, less established, etc., can be particularly positioned ...

]]>
frances_lorenz https://forum.effectivealtruism.org/posts/A9o4EjFbPcgfsFtMx/pitfalls-to-consider-when-community-building-with-young Tue, 14 Nov 2023 19:29:48 +0000 EA - Pitfalls to consider when community-building with young people by frances lorenz frances_lorenz https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:38 no full 14
dYhKfsNuQX2sznfxe EA - Donation Election: how voting will work by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Donation Election: how voting will work, published by Lizka on November 14, 2023 on The Effective Altruism Forum.In brief: we'll use a weighted version of ranked-choice voting to determine the winners in theDonation Election. Every voter will distribute points across candidates. We'll add up the points for all the candidates, and then eliminate the lowest-ranking candidate and redistribute points from voters who had given points to the now-eliminated candidate. We'll repeat that until we have 3 winning candidates; the funding should be allocated in proportion to those candidates' point totals.Note: this system is subject to change in the next week (I'm adding this provision in case someone finds obvious improvements or fundamental issues). If we don't change it by November 21, though, it'll be the final system, and I currently expect to go with a system that looks basically like this.What it will look like for votersAs a reminder, only people who had accounts as of 22 October, 2023, will be able to vote. If you can't vote but would like to participate, you can write about why you think people should vote in a particular way, donate to the projects directly, etc.What it will look like if you can vote:Get invited to vote and go to a voting portal to begin the process(We'll probably feature a link on the Frontpage, and you can alreadysign up to get notified when voting opens)Select candidates you'd like to vote onYou'll be able to select all the candidates, or just the ones you have opinions about[1]Assign points to the candidates you've selected, based on how you personally would allocate funding across these different projects (paying attention to the relative point ratios)[2]Write a note about why you voted in that way (optional), and submit!A rough sketch of these steps (see the footnote[3] for an actual sketch mockup):Longer explanation: How vote aggregation will work and more on why we picked this voting methodIn classicalranked-choice voting, voters submit a ranking of candidates. When votes are in, the least popular candidate is eliminated in rounds until a winner is declared. After each elimination, voters' rankings are updated with the eliminated candidate removed (meaning if they ranked the candidate first, their ranking moves up), so votes for that candidate are not wasted.[4]We wanted to track preference strength more than ranked-choice voting allows us to do (i.e. we wanted to incorporate information like "Voter 1 thinks A should get 100x more funding than B" and to prompt people to think through considerations like this instead of just ranking projects), so instead of ranking candidates, we're asking voters to allocate points to all the candidates.We'll normalize voters' point distributions so that every voter has equal voting power, and then add up the points assigned to each candidate. This will allow us to identify the candidate with the least number of points, which we'll eliminate.[5] Any voters who had assigned points to that candidate will have their points redistributed to whatever else they voted on, keeping the proportions the same (alternatively, you can think of this as another renormalization of the voter's points). If all of a voter's points were assigned to candidates which are now eliminated, we'll pretend that the voter spread their points out equally across the remaining candidates.[6] We'll run this process until we get to the three top candidates.This should allow us to capture good information about how people would like to distribute the fund while also giving every voter similar power in determining the final outcome without penalizing people for voting for unpopular candidates or the like.Let us know what you think!Comment here or feel free to just reach out.Also, consider exploring the Giving Portal, sharin...

]]>
Lizka https://forum.effectivealtruism.org/posts/dYhKfsNuQX2sznfxe/donation-election-how-voting-will-work Tue, 14 Nov 2023 16:50:04 +0000 EA - Donation Election: how voting will work by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:44 no full 17
uvjybpML2RxrPuDsy EA - Survey on the acceleration risks of our new RFPs to study LLM capabilities by Ajeya Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Survey on the acceleration risks of our new RFPs to study LLM capabilities, published by Ajeya on November 14, 2023 on The Effective Altruism Forum.My team at Open Philanthropy just launched two requests for proposals:Proposals tocreate benchmarks measuring how wellLLM agents (likeAutoGPT) perform on difficult real-world tasks, similar torecent work by ARC Evals.[1]Proposals tostudy and/or forecast the near-term real-world capabilities and impacts of LLMs and systems built from LLMs more broadly.I think creating a shared scientific understanding of where LLMs are at has large benefits, but it can also accelerate AI capabilities: for example, it might demonstrate possible commercial use cases and spark more investment, or it might allow researchers to more effectively iterate on architectures or training processes. Other things being equal, I think acceleration is harmful becausewe're not ready for very powerful AI systems - but I believe the benefits outweigh these costs in expectation, and think better measurements of LLM capabilities are net-positive and important.To get a sense for whether acting on this belief by launching these two RFPs would constitute falling prey tothe unilateralist's curse, I senta survey about whether funding this work would be net-positive or net-negative to 47 relatively senior people who have been full-time working on AI x-risk reduction for multiple years and have likely thought about the risks and benefits of sharing information about AI capabilities.Out of the 47 people who received the survey, 30 people (64%) responded. Of those, 25 out of 30 said they were "Positive" or "Lean positive" on the RFP, and only 1 person said they were "Lean negative," with no one saying they were "Negative." The remaining four people said they had "No idea," meaning that 29 out of 30 respondents (97%) would not vote to stop the RFPs from happening. With that said, many respondents (~37%) felt torn about the question or considered it complicated.The rest of this post provides more detail onthe information that the survey-takers received andthe survey results (including sharing answers from those respondents who gave permission to share).The information that was sent to the survey-takersThe survey-takers received the below email, which links to aone-pager on the risks and benefits of these RFPs, and afour-pager (written in late July and early August) about the sorts of projects I expected to fund. After the survey, the latter document evolved into the public-facing RFPs here and here.Subject: [by Sep 8] Survey on whether measuring AI capabilities is harmfulHi,I want to launch a request for proposals asking researchers to produce better measurements of the real-world capabilities of systems composed out of LLMs (similar to the recent work done byARC evals).I expect this work to shorten timelines to superhuman AI, but I think the harm from this is outweighed by the benefits of convincing people of short timelines (if that's true) and enabling a regime of precautions gated to capabilities. Seethis 1-pager for more discussion. You can also skim myproject description (~4 pages) to get a better idea of the kinds of grants we might fund, though it's not essential reading (especially if you're broadly familiar with ARC evals).Please fill outthis short survey on whether you think this project is net-positive or net-negative by EOD Fri Sep 8.I'm sending this survey to a large number of relatively senior people who have been full-time working on AI x-risk reduction for multiple years and have likely thought about the risks and benefits of sharing information about AI capabilities. The primary intention of this survey is to check whether going ahead with this RFP would constitute falling prey to the unilateralist's curse (i.e., to check ...

]]>
Ajeya https://forum.effectivealtruism.org/posts/uvjybpML2RxrPuDsy/survey-on-the-acceleration-risks-of-our-new-rfps-to-study Tue, 14 Nov 2023 10:22:25 +0000 EA - Survey on the acceleration risks of our new RFPs to study LLM capabilities by Ajeya Ajeya https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:00 no full 21
cvyLXaa3HaAmc9mA7 EA - Getting Started with Impact Evaluation Surveys: A Beginner's Guide by Emily Grundy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Started with Impact Evaluation Surveys: A Beginner's Guide, published by Emily Grundy on November 14, 2023 on The Effective Altruism Forum.In 2023, I provided research consulting services to help AI Safety Support evaluate their organisation's impact through a survey[1]. This post outlines a) why you might evaluate impact through a survey and b) the process I followed to do this. Reach out to myself or Ready Research if you'd like more insight on this process, or are interested in collaborating on something similar.Epistemic statusThis process is based on researching impact evaluation approaches and theory of change, reviewing what other organisations do, and extensive applied academic research and research consulting experience, including with online surveys (e.g.,the SCRUB study). I would not call myself an impact evaluation expert, but thought outlining my approach could still be useful for others.Who should read this?Individuals / organisations whose work aims to impact other people, and who want to evaluate that impact, potentially through a survey.Examples of those who may find it useful include:A career coach who wants to understand their impact on coachees;A university group that runs fellowship programs, and wants to know whether their curriculum and delivery is resulting in desired outcomes;An author who has produced a blog post or article, and wants to assess how it affected key audiences.SummaryEvaluating the impact of your work can help determine whether you're actually doing any good, inform strategic decisions, and attract funding. Surveys are sometimes (but not always) a good way to do this.The broad steps I suggest to create an impact evaluation survey are:Articulate what you offer (i.e., your 'services'): What do you do?Understand your theory of change: What impact do you hope it has, and how?Narrow in on the survey: How can a survey assess that impact?Develop survey items: What does the survey look like?Program and pilot the survey: Is the survey ready for data collection?Disseminate the survey: How do you collect data?Analyse and report survey data: How do you make sense of the results?Act on survey insights: What do you do about the results?Why conduct an impact evaluation survey?There are two components to this: 1) why evaluate impact and 2) why use a survey to do it.Why evaluate impact?This is pretty obvious: to determine whether you're doing good (or, at least, not doing bad), and how much good you're doing. Impact evaluation can be used to:Inform strategic decisions. Collecting data can help you decide whether doing something (e.g., delivering a talk, running a course) is worth your time, or what you should do more or less of.Attract funding. Being able to demonstrate (ideally good) impact to funders can strengthen applications and increase sustainability.Impact evaluation is not just about assessing whether you're achieving your desired outcomes. It can also involve understanding why you're achieving those outcomes, and evaluating different aspects of your process and delivery. For example, can people access your service? Do they feel comfortable throughout the process? Do your services work the way you expect them to?Why use a survey to evaluate impact?There are several advantages of using surveys to evaluate impact:They are relatively low effort (e.g., compared to interviews);They can be easily replicated: you can design and program a survey that can be used many times over (either by you again, or by others);They can have a broad reach, and are low effort for participants to complete (which means you'll get more responses);They are structured and standardised, so it can be easier to analyse and compare data;They are very scalable, allowing you to collect data from hundreds or thousands of respond...

]]>
Emily Grundy https://forum.effectivealtruism.org/posts/cvyLXaa3HaAmc9mA7/getting-started-with-impact-evaluation-surveys-a-beginner-s Tue, 14 Nov 2023 09:22:47 +0000 EA - Getting Started with Impact Evaluation Surveys: A Beginner's Guide by Emily Grundy Emily Grundy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 19:39 no full 22
KakpsE3GXCaFftcj2 EA - Economics of Animal Welfare: Call for Abstracts by Bob Fischer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Economics of Animal Welfare: Call for Abstracts, published by Bob Fischer on November 16, 2023 on The Effective Altruism Forum.Brown University's Department of Economics and Center for Philosophy, Politics, and Economics are hosting an interdisciplinary conference on the economics of animal welfare on July 11-12, 2024.This conference aims to build on successful workshops on this topic at Duke University, Stanford University, and the Paris School of Economics. We welcome submissions on a range of topics that apply economic methods to understand how to value or improve animal welfare. This includes theoretical work on including losses or benefits to animals in economic analyses, applied empirical work on the effects of policies or industry structure on animal welfare, and anything else within the purview of economics as it relates to the well-being of commodity, companion, or wild animals.We invite 300-word abstracts from economists and those in relevant fields, including animal welfare science, political science, and philosophy. In addition to full presentations, we also welcome "ideas in development" from graduate students or early-stage researchers that can be presented in less than 10 minutes.Please submit abstracts and ideas-in-progress by January 15, 2024 viathis form. General attendance registration will open in January 2024.Travel support to Providence will be provided for all accepted speakers.A limited number of travel bursaries are available for graduate students and predoctoral researchers to attend without presenting a paper. Please apply for non-speaker travel funding in the link above.Vegan meals will be provided.While this is an in-person event, a limited number of remote presentations may be possible.ORGANIZED BY:Bob Fischer, Department of Philosophy, Texas State UniversityAnya Marchenko, Department of Economics, Brown UniversityKevin Kuruc, Population Wellbeing Initiative, University of Texas at AustinThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Bob Fischer https://forum.effectivealtruism.org/posts/KakpsE3GXCaFftcj2/economics-of-animal-welfare-call-for-abstracts Thu, 16 Nov 2023 18:17:20 +0000 EA - Economics of Animal Welfare: Call for Abstracts by Bob Fischer Bob Fischer https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:07 no full 7
PwnRriopRSptjNvw5 EA - How we approach charity staff pay and benefits at Giving What We Can by Luke Freeman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How we approach charity staff pay and benefits at Giving What We Can, published by Luke Freeman on November 16, 2023 on The Effective Altruism Forum.As an international charity with a talented global team, one challenging decision we face is how to pay our team members and provide benefits ("remuneration"). We grapple with several key questions:What's ethical?What's fair?What's expected?What would funders approve of?How do we attract and retain high-quality talent while maintaining a focus on our own cost-effectiveness?These questions become even more challenging within the nonprofit sector, where perspectives on pay are incredibly varied. Yet, it's crucial we discuss this openly, as staff remuneration often represents one of the most significant expenditures for an organisation.Our ethosIn line with our mission to create a culture of effective and significant giving, we believe it's a reasonable expectation that our team members earn a salary that would enable them to comfortably donate 10% of their income, should they choose to.Working at GWWC should not necessitate undue financial sacrifice, nor should it be primarily motivated by financial gain. Rather, we seek to attract individuals who are both highly skilled and deeply committed to effective giving. If someone's primary motivation leans toward earning potential, we would wholeheartedly encourage them to explore 'earning-to-give' opportunities instead.How our pay calculator worksSo, how does this ethos translate into actual numbers? We have built a calculator that incorporates the following:We use a salary band system where our second band (e.g. a junior associate-level role) starts with base salary which is pegged to the average income in Oxford.With each promotion to a new level (within or between bands) the base pay increases by 10%.Depending on the person's location, we adjust 50% of the base salary by relative cost-of-living as a starting point, and make ~annual adjustments to account for factors like inflation and location-based cost-of-living changes.We adjust upwards for experience (500 GBP per pre-GWWC relevance-adjusted FTE year and 1,000 per year at GWWC) with a cap of 10,000 GBP.We have a scaling "competitive skills bonus" for a few roles (e.g., software engineering) that are typically very highly compensated by current markets and therefore difficult to hire for in our context.We recalculate each staff member's remuneration annually and after any significant change in their role or location.It's not perfect, but we feel it's a good start that strikes a balance between vastly different potential approaches. We hope that by sharing it and receiving critiques, we can continue to make adjustments in consultation with our team and our funders.ResultsThe pay calculator tends to result in salaries that are higher than at most non-profits but below what a similar role would pay at a for-profit, and often well below what someone with high earning potential could make if they were choosing a career with an eye to earning as much as possible. It also gives lower increases with seniority than are common in the for-profit world resulting in a lower pay ratio from the highest paid to lowest paid employees. The financial sacrifice/incentive for working at GWWC does vary depending on your location, but we strive to make it reasonable and to find a good balance.BenefitsBenefits are another critical aspect of our remuneration package. It can be challenging to harmonise benefits like retirement contributions, healthcare, childcare, training, parental leave, and office equipment across different locations, but we make a concerted effort to offer balanced packages for staff.Offer letterIn our offer letter we share with the prospective team member their salary calculation and outline the benefit...

]]>
Luke Freeman https://forum.effectivealtruism.org/posts/PwnRriopRSptjNvw5/how-we-approach-charity-staff-pay-and-benefits-at-giving Thu, 16 Nov 2023 07:02:14 +0000 EA - How we approach charity staff pay and benefits at Giving What We Can by Luke Freeman Luke Freeman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:41 no full 9
qpqwvatKcZbWMNz6p EA - Announcing Giving Green's 2023 Top Climate Nonprofit Recommendations by Giving Green Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Giving Green's 2023 Top Climate Nonprofit Recommendations, published by Giving Green on November 16, 2023 on The Effective Altruism Forum.What is Giving Green?Giving Green is an EA-affiliated charity evaluator that helps donors direct funds to the highest-impact organizations looking to mitigate climate change. We believe that individuals can make a real impact by reshaping the laws, norms, and systems that perpetuate unsustainable emissions. Our annual list of recommendations helps direct donors towards high-impact climate nonprofits advocating for systemic change.How does Giving Green work?We spent the past year finding timely giving strategies that have a huge potential impact but are relatively neglected by traditional climate funding.Our process starts byassessing various impact strategies and narrowing in on ones that we believed could substantially reduce emissions, were feasible, and needed more funding (Figure 1). After developing a short list of impact areas, we explored the ecosystem of nonprofits operating in each space by speaking directly with organizations and other stakeholders. We used our findings to evaluate each organization's theory of change and its capacity to absorb additional funding.For more information, seeGiving Green's Research Process.Figure 1: Giving Green's process for identifying and assessing nonprofitsWhat climate nonprofits does Giving Green recommend for 2023?Our findings led us to double down on one pathway where we believe climate donors can have an outsized impact:Advancing key climate technologies through policy advocacy, research, and market support.We think technological progress provides a uniquely powerful and feasible way to decarbonize, allowing the green transition to proceed while minimizing costs to quality of life and the economy.For 2023, we highlight five key sectors ripe for innovation: next-generation geothermal energy, advanced nuclear, alternative protein innovation, industrial decarbonization, and shipping and aviation decarbonization; within those, we recommend six top climate charities (Figure 2).Figure 2: Giving Green's 2023 top climate nonprofit recommendationsBelow, you will find a brief overview of Giving Green's recommendations in reverse alphabetical order.Project InnerSpaceDeep underground, the Earth's crust holds abundant heat that can supply renewable, carbon-free heat and reliable, on-demand electricity. However, conventional geothermal systems have been limited to places bordering the Earth's tectonic plates.Project InnerSpace is fast-tracking next-generation technologies that can make geothermal energy available worldwide. It has a bold plan to reduce financial risks for new geothermal projects, making geothermal energy cheaper and more accessible, especially in densely populated areas in the Global South.We believe Project InnerSpace is a top player in the geothermal sector and that its emphasis on fast technology development and cost reduction can help geothermal expand globally.For more information, see ourProject InnerSpace recommendation summary.Opportunity GreenAviation and maritime shipping are challenging sectors to decarbonize and have not received much support from philanthropy in the past.Opportunity Green has a multi-pronged strategy for reducing emissions from aviation and maritime shipping. It pushes for ambitious regulations, promotes clean fuels, encourages companies to adopt greener fleets, and works to reduce demand for air travel.We think Opportunity Green has a strong theory of change that covers multiple ways to make a difference. We are especially excited about Opportunity Green's efforts to elevate climate-vulnerable countries in policy discussions, as we think this could improve the inclusivity of the process and the ambition level of...

]]>
Giving Green https://forum.effectivealtruism.org/posts/qpqwvatKcZbWMNz6p/announcing-giving-green-s-2023-top-climate-nonprofit Thu, 16 Nov 2023 04:03:06 +0000 EA - Announcing Giving Green's 2023 Top Climate Nonprofit Recommendations by Giving Green Giving Green https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:38 no full 10
EPB3kSwEAx6HYJNaA EA - TED talk on Moloch and AI by LivBoeree Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: TED talk on Moloch and AI, published by LivBoeree on November 16, 2023 on The Effective Altruism Forum.Hey folks, Liv Boeree here - I recently did a TED talk on Moloch (a.k.a the multipolar trap) and how it threatens safe AI development. Posting it here to a) raise awareness and b) get feedback from the community, given the relevancy of the topic.And of course, if any of you are active on social media, I'd really appreciate it being shared as widely as possible, thank you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
LivBoeree https://forum.effectivealtruism.org/posts/EPB3kSwEAx6HYJNaA/ted-talk-on-moloch-and-ai Thu, 16 Nov 2023 03:09:06 +0000 EA - TED talk on Moloch and AI by LivBoeree LivBoeree https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:42 no full 11
SXEEDvqZwxaRGCmhj EA - Some more marginal Long-Term Future Fund grants by calebp Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some more marginal Long-Term Future Fund grants, published by calebp on November 16, 2023 on The Effective Altruism Forum.These fictional grants represent the most promising applications we turn down due to insufficient funding.Throughout the text, 'I' refers to Caleb Parikh, and 'we' refers to both Caleb Parikh and Linch Zhang. This reflects the perspectives of two individuals who are very familiar with the Long-Term Future Fund (LTFF). However, others associated with the LTFF might not agree that this accurately represents their impression of the LTFF's marginal (rejected) grants.Fictional grants that we rejected but were very close to our funding barEach grant is based on 1-3 real applications we have received in the past ~three months. You can see our original LTFF marginal funding post here, and our post on the usefulness of funding the EAIF and LTFF here.[1] Please note that these are a few of the most promising grants we've recently turned down - not the average rejected grant. [2]($25,000)~ Funding to continue research on a multi-modal chess language model, focusing on alignment and interpretability. The project involves optimizing a data extraction pipeline, refining the model's behaviour to be less aggressive, and exploring ways to modify the model training. Additional tasks include developing a simple Encoder-Decoder chess language model as a benchmark and writing an article about AI safety.The primary objective is to develop methods ensuring that multi-modal models act according to high-level behavioural priorities. The applicant's background includes experience as a machine learning engineer and chess, competing and developing predictive models. The past year's work under a previous LTFF grant resulted in a training dataset and some initial analysis, laying the groundwork for this continued research.($25,000) ~ Four months' salary for a former academic to tackle some unusually tractable research problems in disaster resilience after large-scale GCRs. Their work would focus on researching Australia's resilience to a northern hemisphere nuclear war. Their track record included several papers in high-impact factor journals, and their past experiences and networks made them well-positioned for further work in this area. The grantee would also work on public outreach to inform the Australian public about nuclear risks and resilience strategies.($50,000)~ Six months of career transition funding to help the applicant enter a technical AI safety role. The applicant has seven years of software engineering experience at prominent tech companies and aims to pivot his career towards AI safety. They'll focus on interpretability experiments with Leela Go Zero during the grant.The grant covers 50% of his previous salary and will facilitate upskilling in AI safety, completion of technical courses, and preparation for interviews with AI safety organizations. He has pivoted his career successfully in the past and has been actively engaged in the effective altruism community, co-running a local group and attending international conferences. This is his first funding request.($40,000)~ Six months dedicated to exploring and contributing to AI governance initiatives, focusing on policy development and lobbying in Washington, D.C. The applicant seeks to build expertise and networks in AI governance, aiming to talk with over 50 professionals in the field and apply to multiple roles in this domain. The grant will support efforts to increase the probability of the U.S. government enacting legislation to manage the development of frontier AI technologies.The applicant's background includes some experience in AI policy and a strong commitment to effective altruism principles. The applicant has fewer than three years of professional experience and an undergraduate degree ...

]]>
calebp https://forum.effectivealtruism.org/posts/SXEEDvqZwxaRGCmhj/some-more-marginal-long-term-future-fund-grants Thu, 16 Nov 2023 01:13:34 +0000 EA - Some more marginal Long-Term Future Fund grants by calebp calebp https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:49 no full 12
C8ZzjFc7aKT7ihmeK EA - Spiro - New TB charity raising seed funds by Habiba Banu Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Spiro - New TB charity raising seed funds, published by Habiba Banu on November 17, 2023 on The Effective Altruism Forum.SummaryWe (Habiba Banu and Roxanne Heston) have launchedSpiro, a new TB screening and prevention charity focused on children. Our website ishere.We are fundraising $198,000 for our first year. We're currently reaching out to people in the EA network. So far we have between 20%-50% of our budget promised and fundraising is currently one of the main things we're focusing on.The major components of our first year budget are co-founder time, country visits, and delivery of a pilot program, which aims to do household-level TB screening and provision of preventative medication.We think that this project has a lot of promise:Tuberculosis has a huge global burden, killing 1.3 million people every year, and is disproportionately neglected and fatal in young children.The evidence for preventative treatment is robust and household programs are promising, yet few high-burden countries have scaled up this intervention.Modeling by Charity Entrepreneurship and by academics indicate that this can be competitive with the best GiveWell-recommended charities.If we don't manage to raise at least half of our target budget by the beginning of December 2023 then we'll switch from our intended focus for the next month from program planning to additional fundraising. This will push out our timelines for getting to the useful work.If we don't manage to raise our full target budget by the end of 2023 then we'll scale back our ambitions in the immediate term, until we put additional time into fundraising a few months later. The lower budget will also limit the size of our proof-of-concept effort since we and our government partners will need to scale back work to the available funds.You can donate viaGiving What We Can's fund for charities incubated through Charity Entrepreneurship.Please also email habiba.banu@spiro.ngo letting us know how much you have donated so that we can identify the funds and allocate them to Spiro.Who are we?Spiro is co-founded by Habiba Banu and Roxanne Heston.Habiba worked for the last three years at 80,000 Hours and before that as Senior Administrator at the Future of Humanity Institute and the Global Priorities Institute. Her background is working as a consultant at PwC with government and non-profit clients.Rox has worked for the last few years on international AI policy in the U.S. Government and at think tanks. She has worked with and for various EA organizations including the Centre for Effective Altruism, the Future of Humanity Institute, Open Philanthropy and the Lead Exposure Elimination Project.We have received Charity Entrepreneurship support so far:Charity Entrepreneurship's research team did the initial research into this idea and shared their work with us.Habiba went through Charity Entrepreneurship's Incubator Programme earlier this year. Rox started working with Habiba to find an idea together about halfway through the program.Charity Entrepreneurship has provided stipend funding, advice, and operational support (e.g. website design). It will continue to provide mentorship from its leadership team and a fiscal sponsorship arrangement.What are we going to do?Spiro will implement sustainable household screening programs in low- and lower-middle income countries. Spiro aims to curb infections and save lives of children in regions with high burdens of tuberculosis by identifying, screening, and treating household contacts of people living with TB.We will initially establish a proof of concept in one region, working closely with the government TB program. We will then aim to scale nationally, with funding from theGlobal Fund, and expand to other countries.Currently, we are planning a visit to Uganda to shadow e...

]]>
Habiba Banu https://forum.effectivealtruism.org/posts/C8ZzjFc7aKT7ihmeK/spiro-new-tb-charity-raising-seed-funds Fri, 17 Nov 2023 12:48:27 +0000 EA - Spiro - New TB charity raising seed funds by Habiba Banu Habiba Banu https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:49 no full 6
CaYqXsrHbuvtkGMGy EA - An EA's guide to Berlin by MartinWicke Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An EA's guide to Berlin, published by MartinWicke on November 18, 2023 on The Effective Altruism Forum.This guide is inspired by this call for guides on EA Hubs and the excellent examples already published.OverviewBerlin is quite a vibrant city, and with 3.8 million citizens, it's the biggest city in Germany and the EU.[1] It also has a unique city culture compared to the rest of Germany (less traditional, more open-minded, more vegans), and to a lesser degree, the rest of continental Europe.While most other EA local groups in Germany are centered around universities, Berlin has a much broader EA community, with students and professionals working in both EA and non-EA jobs. To give an impression on the size of the EA community in Berlin, here are some estimates about the number of people by level of engagement:Generally interested in EA: ~200-260[2]Engaged on a level to be accepted to an EAGx: ~160[3]People working in an EA organization or engage on a similar level: ~50-60[4]Volunteer EA Berlin event organizers: ~10[5]This guide is addressed to people not from Berlin to get an overview of how to get in touch with people from the EA community, activities to do and other practical tips when coming here.Meeting PeopleTo get to know people from the EA community, a good starting point is visiting one of theEA Berlin events. Many events can be joined by anyone (yes, you too!), just check out the event description. Good starting points are the Talk & Community Meetup and the Food for Thought discussion rounds, both recurring every month.There are informal hangouts, too! Just ask one of the organizers at any meetup how to get in touch with more members of the community. Active EAs usually invite people at our events to join our EA Berlin Telegram group (not shared online), where individually organized gatherings are posted and discussions take place. If you're planning to come to Berlin and would like to meet some like-minded people, send.Berlin has a relatively broad community of professionals working in EA organizations, organizations considered high-impact by EA, or other impactful jobs. While some of these organizations are centered in Berlin, many people work in remote positions. The spectrum of cause areas people are working on reflects to a big part the cause areas from the global EA community: Animal Advocacy, Global Health, AI governance and technical AI safety, Bio Security, Civilizational Resilience, Political Advocacy, Climate, Mental Health, Journalism, Effective Giving, EA Meta and Operations, and more.There's also an active Rationality/LessWrong/Wait But Why/Slate Star Codex community in Berlin, with many of their events postedin this meetup group. If you'd like to dive into the veganism scene in Berlin, check outthe berlin-vegan website."The Vibes"People outside of Berlin often are interested in what "the vibes" of the EA Berlin community are. This is certainly hard to explain, as subjective experiences matter a lot here and can be quite different.As Berlin is a diverse city with lots of different subcultures, this also reflects to some people in the EA community. These people are often interested in ideas from the alternative scene, like different forms of meditation, yoga, techno culture, non-traditional relationship forms, festivals and more. Some EA people in Berlin are living in shared apartments together, both purely with EAs and with other interesting people, e.g. from the startup scene. We're not aware of any co-living situations in Berlin where professional and private relationships are intermixed, which reduces potential conflict of interests.It is important to highlight that only a subset of people in the community subscribe to these interests and that having similar interests certainly isn't a precondition to get in touch wit...

]]>
MartinWicke https://forum.effectivealtruism.org/posts/CaYqXsrHbuvtkGMGy/an-ea-s-guide-to-berlin Sat, 18 Nov 2023 19:24:44 +0000 EA - An EA's guide to Berlin by MartinWicke MartinWicke https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:14 no full 1
eyS8tpW9n2qNyu9Ee EA - Confessions of a Cheeseburger Ethicist by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Confessions of a Cheeseburger Ethicist, published by Richard Y Chappell on November 18, 2023 on The Effective Altruism Forum.Eric Schwitzgebel invokes the "cheeseburger ethicist" - a moral philosopher who agrees that eating meat is wrong, but eats meat anyway - as the paradigm of failing to walk the walk of one's moral philosophy.The example resonates with me, since people often assume that as a utilitarian I must also be vegan. It can be a little embarrassing to have to correct them. I agree that I should be a vegan, in the sense that there's no adequate justification for most purchases of animal products. I certainly think highly of vegans. And yet… I'm not one. (Sorry!)So I am a "cheeseburger ethicist". And yet… I'm not unmoved by the practical implications of my moral theorizing. I'm actually quite committed to putting my ethics into practice, in a number of respects (e.g. donating a substantial portion of my income, pursuing intellectually honest inquiry into important questions, and maintaining a generally forthright and co-operative disposition towards others). I'm just not especially committed to avoiding moral mistakes, or acting justifiably in each instance.If I'm right about this, then even a "cheeseburger ethicist" may still be "walking the walk", so long as their practical priorities correspond (sufficiently closely) to those prescribed by their moral theory.But while disagreeing with Schwitzgebel about the significance of self-ascribed error, I take myself to be further confirming his subsequent claim that "walking the walk" helps to flesh out the substantive content of a moral view. After all, it's precisely by reflecting on how I take myself to be living a broadly consequentialist-approved life that we can see that avoiding moral mistakes per se isn't a high priority (for consequentialists of my stripe). It really matters how much good it would do to remedy the mistake, and whether your efforts could be better spent elsewhere.Don't sweat the small stuffAs I wrote in response to Caplan's conscience objection:[W]e aren't all-things-considered perfect. It's really tempting to make selfish [or short-sighted] decisions that are less than perfectly justified, and in fact we all do this all the time. Humans are inveterate rationalizers, and many seem to find it irresistible to contort their normative theories until they get the result that "actually we've most reason to do everything we actually do." But when stated explicitly like this, we can all agree that this is pure nonsense, right?We should just be honest about the fact that our choices aren't always perfectly justified. That's not ideal, but nor is it the end of the world.Of course, some mistakes are more egregious than others. Perhaps many reserve the term 'wrong' for those moral mistakes that are so bad that you ought to feel significant guilt over them. I don't think eating meat is wrong in that sense. It's not like torturing puppies (just as failing to donate enough to charity isn't like watching a child drown in this respect). Rather, it might require non-trivial effort for a generally decent person to pursue, and those efforts might be better spent elsewhere.That doesn't mean that eating meat is actually justified. Rather, the suggestion is that some genuinely unjustified actions aren't worth stressing over. On my view, we should prioritize our moral efforts, and put more effort into making changes that have greater moral payoffs. For most people, their top moral priority should probably just be to donate more to effective charities.[2] Some may be in a position where they can do even more good via high-impact work.Personal consumption decisions have got to be way down the list of priorities, by contrast. And even within that sphere, we can subdivide it into the "low hanging fruit" ...

]]>
Richard Y Chappell https://forum.effectivealtruism.org/posts/eyS8tpW9n2qNyu9Ee/confessions-of-a-cheeseburger-ethicist Sat, 18 Nov 2023 10:34:44 +0000 EA - Confessions of a Cheeseburger Ethicist by Richard Y Chappell Richard Y Chappell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:23 no full 3
gGBtgScdfLxsodhzg EA - The Humane League - Room for More Funding & 2023 Impact by carolinemills Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Humane League - Room for More Funding & 2023 Impact, published by carolinemills on November 17, 2023 on The Effective Altruism Forum.About The Humane LeagueThe Humane League (THL) exists to end the abuse of animals raised for food. THL is laser focused on the efforts that have the biggest impact for the greatest number of animals. We are distinguished from other animal welfare organizations by the effectiveness of our corporate campaigns, our unique role as the most aggressive campaigners, and our approach to multiplying our movement's impact globally through the Open Wing Alliance (OWA) and in the US through the Animal Policy Alliance (APA). Our scalable interventions have a proven track record of reducing farm animal suffering - according to a 2019 Rethink Priorities report, our corporate cage-free campaigns affect nine to 120 years of a hens life per dollar spent, and have a follow-through rate of 48%-84% (we've found up to 89% in recent years)[1].We are proud to be recognized byAnimal Charity Evaluators andFounders Pledge as one of the most effective animal protection charities in the world."While we expect all of our evaluated charities to be excellent examples of effective advocacy, THL is exceptional even within that group. Giving to THL is an excellent opportunity to support initiatives that create the most positive change for animals." - Animal Charity Evaluators, 2023 THL evaluation reportOur Strategy & 2023 ImpactTHL believes in focusing our collective energy where it will do the most good. Since chickens represent 90% of all land animals raised for food, any interventions we make for chickens have the greatest potential impact. And restrictive battery cages - small wire cages used to confine laying hens - are one of the worst sources of suffering for chickens. Ending the battery cage means ending the acute suffering of millions of birds.Holding companies accountable to their cage-free commitments. Thousands of companies around the world have pledged to transition to 100% cage-free, eliminating the practice of confining hens in tiny, barren battery cages. Now, THL is holding these companies accountable, ensuring they keep their promises. Globally,89% of companies followed through on their 2022 cage-free pledge. And in the US and globally, THL pushed the companies falling behind on their commitments to follow through on their promise. In 2023, THL held 36 companies with global cage-free commitments accountable to reporting progress on their pledges. Companies likeKellogg's,PepsiCo, andYum! Brands - the world's largest service restaurant company and the parent company of KFC, Pizza Hut, and Taco Bell - began publicly reporting on their cage-free commitments. All of this is translating to real change on the ground, with 39.4% of the US egg-laying flock free from cages (over ~120 million hens), up from ~5% when THL began this work in 2014.[3] (Global data is currently unavailable)Progressing the cage-free movement globally. In addition to holding companies accountable for their existing commitments, THL is working to secure new cage-free commitments in key strategic areas around the world. Through the OWA, our coalition of nearly 100 member groups in 67 countries, THL is developing a global movement of effective animal advocates that conduct coordinated international and regional campaigns for layer hen and/or broiler chicken welfare. This year, the OWA pushed 103 global companies to pledge to rid their supply chains of cruel battery cages, including first cage-free commitments from corporations headquartered in Japan, the Middle East, Greece, Ukraine, Peru, Ecuador, South Africa, Argentina, South Korea, and Taiwan.Jollibee Foods Corporation, the largest and fastest-growing restaurant group in Asia, pledged to reform its global supply chain,...

]]>
carolinemills https://forum.effectivealtruism.org/posts/gGBtgScdfLxsodhzg/the-humane-league-room-for-more-funding-and-2023-impact Fri, 17 Nov 2023 19:25:56 +0000 EA - The Humane League - Room for More Funding & 2023 Impact by carolinemills carolinemills https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:19 no full 7
btTeBHKGkmRyD5sFK EA - Open Phil Should Allocate Most Neartermist Funding to Animal Welfare by Ariel Simnegar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Phil Should Allocate Most Neartermist Funding to Animal Welfare, published by Ariel Simnegar on November 19, 2023 on The Effective Altruism Forum.Thanks to Michael St. Jules for his comments.Key TakeawaysThe evidence that animal welfare dominates in neartermism is strong.Open Philanthropy (OP) should scale up its animal welfare allocation over several years to approach a majority of OP's neartermist grantmaking.If OP disagrees, they should practicereasoning transparency by clarifying their views:How much weight does OP's theory of welfare place on pleasure and pain, as opposed to nonhedonic goods?Precisely how much more does OP value one unit of a human's welfare than one unit of another animal's welfare, just because the former is a human? How does OP derive this tradeoff?How would OP's views have to change for OP to prioritize animal welfare in neartermism?SummaryRethink Priorities (RP)'s moral weight research endorses the claim that the best animal welfare interventions are orders of magnitude (1000x) more cost-effective than the best neartermist alternatives.Avoiding this conclusion seems very difficult:Rejecting hedonism (the view that only pleasure and pain have moral value) is not enough, because even if pleasure and pain are only 1% of what's important, the conclusion still goes through.Rejecting unitarianism (the view that the moral value of a being's welfare is independent of the being's species) is not enough. Even if just for being human, one accords one unit of human welfare 100x the value of one unit of another animal's welfare, the conclusion still goes through.Skepticism of formal philosophy is not enough, because the argument for animal welfare dominance can be made without invoking formal philosophy. By analogy, although formal philosophical arguments can be made for longtermism, they'renot required for longtermist cause prioritization.Even if OP accepts RP's conclusion, they may have other reasons why they don't allocate most neartermist funding to animal welfare.Though some of OP's possible reasons may be fair, if anything, they'd seem to imply a relaxation of this essay's conclusion rather than a dismissal.It seems like these reasons would also broadly apply to AI x-risk within longtermism. However, OP didn't seem put off by these reasons when they allocated a majority of longtermist funding to AI x-risk in 2017, 2019, and 2021.I request that OP clarify their views on whether or not animal welfare dominates in neartermism.The Evidence Endorses Prioritizing Animal Welfare in NeartermismGiveWell estimates that its top charity (Against Malaria Foundation) can prevent the loss of one year of life for every $100 or so.We've estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent.If you value chicken life-years equally to human life-years, this implies that corporate campaigns do about 10,000x as much good per dollar as top charities. … If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x).Holden Karnofsky,"Worldview Diversification" (2016)"Worldview Diversification" (2016) describes OP's approach to cause prioritization. At the time, OP's research found that if the interests of animals are "at least 1-10% as important" as those of humans, then "animal welfare looks like an extraordinarily outstanding cause, potentially to the point of dominating other options".After the better part of a decade, the latest and most rigorous research funded by OP has endorsed a stronger claim: Any significant moral weight for animals implies that OP should prioritize animal welfare in ne...

]]>
Ariel Simnegar https://forum.effectivealtruism.org/posts/btTeBHKGkmRyD5sFK/open-phil-should-allocate-most-neartermist-funding-to-animal Sun, 19 Nov 2023 18:34:09 +0000 EA - Open Phil Should Allocate Most Neartermist Funding to Animal Welfare by Ariel Simnegar Ariel Simnegar https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 24:25 no full 2
MFRqdphhyddjgTwX4 EA - Vida Plena: Transforming Mental Health in Ecuador - First Year Updates and Marginal Funding Opportunity by Joy Bittner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vida Plena: Transforming Mental Health in Ecuador - First Year Updates and Marginal Funding Opportunity, published by Joy Bittner on November 19, 2023 on The Effective Altruism Forum.TLDRVida Plena is a nonprofit organization that is tackling Ecuador's mental health crisis through cost-effective, proven group therapy led by local leaders from within vulnerable communities. We do this through the direct implementation of Group Interpersonal Therapy, which is theWHO's recommended intervention for depression. We are the first to implement it in Latin America.We launched in early 2022 (see ourintroductory EA forum post) and took part in the Charity Entrepreneurship Incubator program that same year. In the fall of 2022, we carried out a proof concept alongside Columbia University, which found positive results (seeour internal report, andthe report from the Columbia University Global Mental Health Lab).So far this year, we've made a positive impact on the lives of 500 individuals, consistently showing significant improvements in both depression and anxiety. Our strategic partnerships with local institutions are flourishing, laying the groundwork for our ambitious goal of scaling our reach to treat 2,000 people in 2024.For this marginal funding proposal, we seek $200,000 to expand our work and conduct research to apply behavioral science insights to further depression treatment in Latin America. This enhanced therapy model will be evaluated through rapid impact assessments, deepening the evidence base for our work, and culminating in a white paper and a RCT in 2025. We also share additional ways to support our work.This post was written by Joy Bittner and Anita Kaslin, Vida Plena's co-founders. In it, we share:An overview of Vida Plena and our workThe scope and scale of the problem we are addressingOur solution and the evidence baseOur initial results to datePresent our proposal for marginal funding opportunitiesAdditional funding opportunities and how you can support our work1) An Overview of Vida Plena and Our WorkProblem:Mental health disorders are a burgeoning global public health challenge and disproportionately affect the poor. Low- and middle-income countries (LMICs)bear 80 % of the mental health disease burden. Mental illness and substance abuse disorders are significant contributors to the disease burden,constituting 8.8% and 16.6% of the total burden of disease in low-income and lower-middle-income countries. According to The Wellcome Global Monitor on Mental Health, the largest survey of depression and anxiety rates worldwide, Latin America exhibits the highest rates globally.This situation is worsened by low public investment.Despite a 2021 Gallup poll ranking Ecuador among the top 10 worst countries in the world for emotional health,only 0.04% of the national healthcare budget is dedicated to mental health - 9x less than other Latin American countries. Therefore, most mental health conditions, especially depression, go untreated.Depression is defined by intense feelings of hopelessness and despair. The result is suffering in all areas of life: physical, social, and professional. Untreated depression's repercussions extend todaily economic and life decisions, impairing attention, memory, and cognitive flexibility. This hampers personal agency and worsens thecycle of poverty and mental disorders. Poor mental health is associated with a host of other issues: chronic medical conditions, drug abuse, lower educational achievement, lower life expectancy, andexclusion from social and professional arenas. As a result, it's not surprising that health problems are related to economic factors such asloss of productivity,absenteeism (both for the patients and caregivers), andfinancial strain due to the cost of care.Conversely,research unders...

]]>
Joy Bittner https://forum.effectivealtruism.org/posts/MFRqdphhyddjgTwX4/vida-plena-transforming-mental-health-in-ecuador-first-year Sun, 19 Nov 2023 13:42:37 +0000 EA - Vida Plena: Transforming Mental Health in Ecuador - First Year Updates and Marginal Funding Opportunity by Joy Bittner Joy Bittner https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:55 no full 4
qpqab3LwmA6yBFJCk EA - The EA Animal Welfare Fund (Once Again) Has Significant Room For More Funding by kierangreig Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Animal Welfare Fund (Once Again) Has Significant Room For More Funding, published by kierangreig on November 20, 2023 on The Effective Altruism Forum.Just as~2 years ago, the EA Animal Welfare Fund has significant room for more funding. This could be a pretty important point that informs end of year giving for a number of donors who are looking to make donations within the animal sector.Briefly, here's why the Animal Welfare Fund has some pretty significant room for more funding at this point:Right now, there's currently~$1M in the Animal Welfare Fund.We also now have 50 grants, summing to ~$4.5M in grants under evaluation.Between mid-last year and mid-this year, the EA AWF received ~350 applications over the past year of which ~150 were desk rejects and ~200 were graded by fund managers. Of these ~200, ~60 received funding, and ~30 received the grant amount they applied for or more.Assuming that the general shape of the pipeline remains similar, that could imply we may now have more grants than we can fund. Potentially even if we were to have an influx of several hundred thousand dollars.In general, the AWF is navigating a difficult period funding-wise: last year, we had ~$7M to allocate, whereasour projection for this year - extrapolating from donations received so far - is only ~$5M.We also have some plans for significant growth next year through some internal expansion plans in the works (e.g., possibly adding further fund managers, hopefully at least one who is full-time, and doing more active grantmaking).Also, a lot of our grantees have grown, so they'll have more room for funding. As a lot of the groups we give to are relatively small, they can grow at such a rate that they'd often be looking to absorb twice as much funding in the next year. If we zoom in on just say Fórum Nacional de Proteção e Defesa Animal-a promising Brazilian group. In 2021 we granted $30k to them and this year $80k. Which comfortably corresponds to a greater than 100% growth in grant amount over a two-year period. Generally, it seems that if we have more money in the fund, it encourages some good organizations to request more funding for some quality projects.Relatedly, some of the areas we grant in just tend to be pretty high growth and grow at comfortably >20% year on year. For instance, years ago there was basically very little that could be granted to invertebrate welfare, but this year we made several hundred thousand dollars in grants within that area.So next year, we think that we could fairly comfortably and productively absorb and grant out in the realm of $6M-$10M (that's a ~20%-100% increase on this year) without any significant decreases to the quality of our grants.Note too, that in previous years we have been able to do such jumps in grantmaking volume. In 2020 we granted out ~$2M in total, in 2021 more than doubled that to ~$4.5M, and in 2022 went up to $7M, before now likely decreasing to ~$5M this year. We think we're again on track to handle 2020-2022 levels of either absolute growth or percentage growth in grants for next year, which will put us in that $6M-$10M range.So one way to look at this is that we now have ~$1M in the fund but next year we could do something like at least ~$7.5M in grants. So in that sense, we have several million dollars in room for more funding.It could be worth thinking about how much we'll likely raise for grants for next year too though. This year,we typically raised ~$100k per month. Historically, we have seen about a ~2x-8x increase on that monthly total for the month of December and January (some end-of-year donations come in on the books in January).Another way to look at this then, is based on the current trends and growth in them year to year, we would now be looking at raising something like ~$1.7M (~$100k...

]]>
kierangreig https://forum.effectivealtruism.org/posts/qpqab3LwmA6yBFJCk/the-ea-animal-welfare-fund-once-again-has-significant-room 20% year on year. For instance, years ago there was basically very little that could be granted to invertebrate welfare, but this year we made several hundred thousand dollars in grants within that area.So next year, we think that we could fairly comfortably and productively absorb and grant out in the realm of $6M-$10M (that's a ~20%-100% increase on this year) without any significant decreases to the quality of our grants.Note too, that in previous years we have been able to do such jumps in grantmaking volume. In 2020 we granted out ~$2M in total, in 2021 more than doubled that to ~$4.5M, and in 2022 went up to $7M, before now likely decreasing to ~$5M this year. We think we're again on track to handle 2020-2022 levels of either absolute growth or percentage growth in grants for next year, which will put us in that $6M-$10M range.So one way to look at this is that we now have ~$1M in the fund but next year we could do something like at least ~$7.5M in grants. So in that sense, we have several million dollars in room for more funding.It could be worth thinking about how much we'll likely raise for grants for next year too though. This year,we typically raised ~$100k per month. Historically, we have seen about a ~2x-8x increase on that monthly total for the month of December and January (some end-of-year donations come in on the books in January).Another way to look at this then, is based on the current trends and growth in them year to year, we would now be looking at raising something like ~$1.7M (~$100k...]]> Mon, 20 Nov 2023 22:34:44 +0000 EA - The EA Animal Welfare Fund (Once Again) Has Significant Room For More Funding by kierangreig 20% year on year. For instance, years ago there was basically very little that could be granted to invertebrate welfare, but this year we made several hundred thousand dollars in grants within that area.So next year, we think that we could fairly comfortably and productively absorb and grant out in the realm of $6M-$10M (that's a ~20%-100% increase on this year) without any significant decreases to the quality of our grants.Note too, that in previous years we have been able to do such jumps in grantmaking volume. In 2020 we granted out ~$2M in total, in 2021 more than doubled that to ~$4.5M, and in 2022 went up to $7M, before now likely decreasing to ~$5M this year. We think we're again on track to handle 2020-2022 levels of either absolute growth or percentage growth in grants for next year, which will put us in that $6M-$10M range.So one way to look at this is that we now have ~$1M in the fund but next year we could do something like at least ~$7.5M in grants. So in that sense, we have several million dollars in room for more funding.It could be worth thinking about how much we'll likely raise for grants for next year too though. This year,we typically raised ~$100k per month. Historically, we have seen about a ~2x-8x increase on that monthly total for the month of December and January (some end-of-year donations come in on the books in January).Another way to look at this then, is based on the current trends and growth in them year to year, we would now be looking at raising something like ~$1.7M (~$100k...]]> 20% year on year. For instance, years ago there was basically very little that could be granted to invertebrate welfare, but this year we made several hundred thousand dollars in grants within that area.So next year, we think that we could fairly comfortably and productively absorb and grant out in the realm of $6M-$10M (that's a ~20%-100% increase on this year) without any significant decreases to the quality of our grants.Note too, that in previous years we have been able to do such jumps in grantmaking volume. In 2020 we granted out ~$2M in total, in 2021 more than doubled that to ~$4.5M, and in 2022 went up to $7M, before now likely decreasing to ~$5M this year. We think we're again on track to handle 2020-2022 levels of either absolute growth or percentage growth in grants for next year, which will put us in that $6M-$10M range.So one way to look at this is that we now have ~$1M in the fund but next year we could do something like at least ~$7.5M in grants. So in that sense, we have several million dollars in room for more funding.It could be worth thinking about how much we'll likely raise for grants for next year too though. This year,we typically raised ~$100k per month. Historically, we have seen about a ~2x-8x increase on that monthly total for the month of December and January (some end-of-year donations come in on the books in January).Another way to look at this then, is based on the current trends and growth in them year to year, we would now be looking at raising something like ~$1.7M (~$100k...]]> kierangreig https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:37 no full 1
4wNDqRPJWhoe8SnoG EA - CEA is fundraising, and funding constrained by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA is fundraising, and funding constrained, published by Ben West on November 20, 2023 on The Effective Altruism Forum.Tl;drThe Centre for Effective Altruism (CEA) has an expected funding gap of $3.6m in 2024.Some example things we think are worth doing but are unlikely to have funding for by default:Funding aCommunity Building Grant in BostonFunding travel grants for EAG(x) attendeesNote that these are illustrative of our current cost-effectiveness bar (as opposed to a binding commitment that the next dollar we receive will go to one of these things).In collaboration with EA Funds we haveproduced models where users can plug in their own parameters to determine the relative value of a donation to CEA versus EA Funds.IntroThe role of an interim executive is weird: whereas permanent CEOs like to come in with a bold new vision (ideally one which blames all the organization's problems on their predecessor), interim CEOs are stuck staying the course. Fortunately for me, I mostly liked the course CEA was on when I came in.The past few years seem to have proven the value of the EA community: my own origin cause area of animal welfare has been substantially transformed (e.g. as recounted by Jakubhere), and even as AI safety has entered the global main stage many of the people doing research, engineering, and other related work have interacted with CEA's projects.Of course, this is not to say that CEA's work is a slamdunk. In collaboration with Caleb and Linch at EA Funds, I have included below some estimates of whether marginal donations to CEA are more impactful than those to EA Funds, and a reasonable confidence interval very comfortably includes the possibility that you should donate elsewhere.We are fortunate to count the Open Philanthropy Project (and in particular Open Phil'sGCR Capacity Building program) among the people who believe we are a good use of funding, but they (reasonably) prefer to not fund all of our budget, leaving us with a substantial number of projects which we believe would produce value if we could fund starting or scaling them.This post outlines where we expect marginal donations to go and the value we expect to come from those donations.You can donate to CEAhere. If you are interested in donating and have further questions, feel free to email me (ben.west@centreforeffectivealtruism.org). I will also try to answer questions in the comments.The basic case for CEACommunity building is sometimes motivated by the following: suppose you spent a year telling everyone you know about EA and getting them excited. Probably you could get at least one person excited. Then this means that you will have doubled your lifetime impact, as both you and this other person will go on to do good things. That's a pretty good ROI for one year of work!This story is overly simplistic, but is roughly my motivation for working on (and donating to) community building: it's a leveraged way to do good in the world.And it does seem to be the case that many people whose work seems impactful attribute some of their impact to CEA:The Open Philanthropy longtermist survey in 2020 identified CEA among the top tier of important influences on people's journey towards work improving the long-term future, with about half of CEA's demonstrated value coming through events (EA Global and EAGx conferences) and half through our other programs.The 80,000 Hours user survey in 2022 identified CEA as the EA-related resource which has influenced the most people's career plans (in addition to 80k itself), with 64% citing the EA Forum as influential and 44% citing EAG.This selection of impact stories illustrates some of the ways we've helped people increase their impact by providing high-quality discussion spaces to consider their ideas, values and options for and about maki...

]]>
Ben_West https://forum.effectivealtruism.org/posts/4wNDqRPJWhoe8SnoG/cea-is-fundraising-and-funding-constrained Mon, 20 Nov 2023 22:14:52 +0000 EA - CEA is fundraising, and funding constrained by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:54 no full 2
n3opkWwu6suxP6qf9 EA - Open Philanthropy's newest focus area: Global Public Health Policy by JamesSnowden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy's newest focus area: Global Public Health Policy, published by JamesSnowden on November 20, 2023 on The Effective Altruism Forum.We're pleased to announce that we've added a new cause area to our Global Health and Wellbeing portfolio:Global Public Health Policy.The program will be overseen bySantosh Harish,Chris Smith, andJames Snowden. Santosh will lead the majority of grantmaking for the program.We believe that some of the most important global health problems can be addressed cost-effectively by working with governments to improve policy. Policies likeair qualityregulations,tobacco andalcohol taxes, and theelimination of leaded gasoline have saved and improved millions of lives.These policies typically improve public health by addressing risk factors to alleviate the burden of non-communicable disease, which comprises agrowing share of the health burden but receivesrelatively few resources. Policy interventions affect entire populations and are often cost-effective for governments to implement. We think philanthropy can have an outsized impact by helping governments design, implement, and enforce more effective public health policies.We've already made some grants for related work:Grants in ourSouth Asian air quality program (which is now part of our Global Public Health Policy program)Several grants aimed at reducinglead exposure and excessivealcohol consumptionFunding for theCentre for Pesticide Suicide Prevention, to support work aimed at reducing deaths from the deliberate ingestion of pesticidesThe chart below shows how little funding goes to address our current global public health policy focus areas relative to their estimated burden:Sources:Institute for Health Metrics and Evaluation;Mew et al. 2017;Open Philanthropy estimatesThese four topics are our current focus, but in the future we may explore other large health burdens addressable through public health policy such as tobacco, asbestos, and exposure to other pollutants.We believe our grants to date have already resulted in meaningful impact, and we're very excited for the potential of this new area. For more details, see thearea page. And if you'd like to get in touch with us for any reason, please comment here or emailinfo@openphilanthropy.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
JamesSnowden https://forum.effectivealtruism.org/posts/n3opkWwu6suxP6qf9/open-philanthropy-s-newest-focus-area-global-public-health Mon, 20 Nov 2023 21:21:05 +0000 EA - Open Philanthropy's newest focus area: Global Public Health Policy by JamesSnowden JamesSnowden https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:28 no full 3
QQ7TfFLooTZEjzpgg EA - Cost comparison of promoting Animal Rights content on social media in high income vs. low income countries. by PreciousPig Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cost comparison of promoting Animal Rights content on social media in high income vs. low income countries., published by PreciousPig on November 20, 2023 on The Effective Altruism Forum.Quick summary: The same ad performed 7x - 9x better in lower income countries, the meat consumption in these lower income countries is about 1/8th as high, which indicates promoting in lower income countries might have very comparable results in terms of how much it reduces meat consumption.This is a short report on a test I ran to see if promoting Animal Rights content in low income countries is more effective/has a potentially higher impact than to promote it in high income countries.To test this, I promoted this video: https://fb.watch/oqPsFPF0Ut/ in two groups of countries. (Thank you to Kinder World for allowing me to use their video for this test!)Country group A: Angola, Ethiopia, Lesotho, Nigeria, RwandaCountry group B: Australia, Canada, United Kingdom, New Zealand, United StatesIn both tests, I used a budget of 35, the target group was all English speaking people 18 and older, and the ad goal was to maximize video views.Here are the results:A short explanation of terms:Impressions - Number of time the ad was shown to a Facebook user.Thru Plays - Number of time the video was played for at least 15 seconds.Video plays (50%)/(95%) - Number of time the video was played at 50%/95% of its length (around 50 seconds / 1:34 minutes).Post reactions: Total amount of reactions (like, love, sad etc.) to the video.So overall, the ad performed around 7x - 9x as well in the lower income countries compared to the higher income countries.If we compare this to the average meat consumption of the two country groups (based on https://en.wikipedia.org/wiki/List_of_countries_by_meat_consumption ) as a stand in for total animal product consumption:Country group A: The lower income countries have a meat consumption between 5,4 kg (Ethiopia) and 23,5 kg (Angola) per person per year. The average between the 5 countries is 13,00 kg per person per yearCountry group B: The higher income countries have a meat consumption between 79,9 kg (United Kingdom) and 124,11 kg (United States) per person per year. The average between the 5 countries is 101,83 kg per person per year.Meaning the meat consumption in the higher income countries I ran this ad in is on average 7,83x higher than in the lower income countries.Conclusion: If we assume people in both country groups are equally likely to reduce their meat consumption by an equal percentage after seeing this ad, both ads will have had a very comparable effect overall. Further testing would certainly be required to make any conclusions from this.Notes:This was one very small test in a limited number of countries with a small budget, so of course these results are only meant to give a rough idea if focusing on lower income countries might be worthwhile.There is an almost unlimited number of variables that could be changed for an ad campaign like this and which would certainly influence the results. (Video chosen, countries the ad is run in, ad goal, target audiences etc.).For future testing, it might be a good idea to choose countries based on meat consumption divided by promotion cost (Find countries with very high consumption and low promotion cost).Other limitations of this test include:It did not in any way measure if people actually reduce their meat consumption after seeing the video. (I think it is likely harder for people in lower income countries to remove animal products from their diets.)It only compared the results to meat consumption, not consumption of animal products overall.A video specifically tailored to lower income countries (showing the animal industry in those countries) might be more relevant to people there.The...

]]>
PreciousPig https://forum.effectivealtruism.org/posts/QQ7TfFLooTZEjzpgg/cost-comparison-of-promoting-animal-rights-content-on-social Mon, 20 Nov 2023 18:29:55 +0000 EA - Cost comparison of promoting Animal Rights content on social media in high income vs. low income countries. by PreciousPig PreciousPig https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:32 no full 4
tAkjzibNFaFYLHAMA EA - Hello from the new content manager at CEA by tobytrem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hello from the new content manager at CEA, published by tobytrem on November 20, 2023 on The Effective Altruism Forum.Hello!I'm Toby, the new Content Manager @ CEA.Before working at CEA, I studied Philosophy at the University of Warwick, andworked for a couple of years on a range of writing and editing projects in the EA space. Recently I helped run theAmplify Creative Grants program, in order to encourage more impactful podcasting and YouTube projects (such as the podcast inthis Forum post). You can find a bit of my own creative output onmy more-handwavey-than-the-ea-forum blog, and my(now inactive) podcast feed.I'll be doing some combination of: moderating, running events on the Forum, making changes to the Forum based on user feedback, writingannouncements, writing theForum Digest and/or theEA Newsletter, participating in the Forum a lot etc… I'll be doubling the capacity of the content team (the team formerly known asLizka).I'm here because the Forum is great in itself, and safeguards parts of EA culture I care about preserving. The Forum is the first place I found online where people would respond to what I wrote and actually understand it. Often they understood it better than I did. They wanted to help me (and each other) understand the content better. They actually cared about there being an answer.The EA community is uniquely committed to thinking seriously about how to do good. The Forum does a lot to maintain that commitment, by platforming critiques, encouraging careful, high-context conversations, and sharing relevant information. I'm excited that I get to be a part of sustaining and improving this space.I'd love to hear more about why you value the Forum in the comments (or, alternatively, anything we could work on to make it better!)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
tobytrem https://forum.effectivealtruism.org/posts/tAkjzibNFaFYLHAMA/hello-from-the-new-content-manager-at-cea Mon, 20 Nov 2023 17:57:03 +0000 EA - Hello from the new content manager at CEA by tobytrem tobytrem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:58 no full 6
Mo7qnNZA7j4xgyJXq EA - Sam Altman / Open AI Discussion Thread by John Salter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sam Altman / Open AI Discussion Thread, published by John Salter on November 20, 2023 on The Effective Altruism Forum.500 threaten to resign unless Sam is reinstated.Source: https://www.theverge.com/2023/11/20/23968988/openai-employees-resignation-letter-microsoft-sam-altmanThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
John Salter https://forum.effectivealtruism.org/posts/Mo7qnNZA7j4xgyJXq/sam-altman-open-ai-discussion-thread Mon, 20 Nov 2023 15:10:17 +0000 EA - Sam Altman / Open AI Discussion Thread by John Salter John Salter https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:39 no full 9
cMcEBSNiy4meDrmuE EA - Rethink Priorities needs your support. Here's what we'd do with it. by Peter Wildeford Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities needs your support. Here's what we'd do with it., published by Peter Wildeford on November 21, 2023 on The Effective Altruism Forum.In honor of"Marginal Funding Week" for2023 Giving Season on the EA Forum, I'd like to tell you what Rethink Priorities (RP) would do with funding beyond what we currently expect to raise from our major funders, and to emphasize thatRP currently has a significant funding gap even after taking these major funders into account.A personal appealHi. I know it's traditional in EA to stick to the facts and avoid emotional content, but I can't help but interject and say that this fundraising appeal is a bit different. It is personal to me. It's not just a list of things that we could take or leave, it's a fight for RP to survive the way I want it to as an organization that is intellectually independent and serves the EA community.To be blunt, our funding situation is not where we want it to be. 2023 has been a hard year for fundraising. A lot of what we've been building over the past few years is at risk right now. If you like RP, my sense is donating now is an unusually good time.We are at the point where receiving $1,000 - $10,000 each from a handful of individual donors would genuinely make an important difference to the future trajectory of RP and decide what we can and cannot do next year.We are currently seeking to raise at least $110K total from donors donating under $100K each. We are already ~$25K towards that goal, so there's $85K remaining towards our goal. We also hope to receive more support from larger givers as well.To be clear, this isn't just about funding growth. An RP that does not receive additional funding right now will be worse in several concrete ways. Funding gaps may force us to:Focus more on non-published, client-driven work that will never be released to the community (because we cannot afford to do so)Stop running the EA Survey, survey updates about FTX, and other community survey projectsDo fewer of our own creative ideas (e.g.,CURVE sequence,moral weights work)Be unable to run several of our most promising research projects (see below)Reduce things we think are important - like opportunities for research teams to meet in person and opportunities for staff to do further professional development.Spend significant amounts of time fundraising next year, distracting from our core workFor unfamiliar readers, some of our track and impact to date includes:Contributing significantly to burgeoning fields, such as invertebrate welfare.Led the way in exploring novel promising approaches to help trillions of animals, by launching theInsect Institute and uncovering the major scale ofshrimp productionCompletingthe Moral Weight Project to try to help funders decide how to best allocate resources across species.Producing >40 reports commissioned by Open Philanthropy and GiveWell answering their questions to inform their global health and development portfolios.Producing theEA Survey andsurveys on the impact of FTX on the EA brand that were used by many EA orgs and local groupsConducting over 200 tailored surveys and data analysis projects to help many organizations working on global priorities.Launching projects such asCondor Camp and fiscally sponsoring organizations likeEpoch andApollo Research via our Special Projects team, which provides operational support.Setting up an Artificial Intelligence (AI) Governance and Strategy team and evolving it into a think tank that has already published multiple influential reports.Please help us keep RP impactful with your support.Why does RP need money from individuals when there are large donors supporting you?It's commonly assumed that RP must get all the money it needs from large institutions. But this is not the case - we've histor...

]]>
Peter Wildeford https://forum.effectivealtruism.org/posts/cMcEBSNiy4meDrmuE/rethink-priorities-needs-your-support-here-s-what-we-d-do 40 reports commissioned by Open Philanthropy and GiveWell answering their questions to inform their global health and development portfolios.Producing theEA Survey andsurveys on the impact of FTX on the EA brand that were used by many EA orgs and local groupsConducting over 200 tailored surveys and data analysis projects to help many organizations working on global priorities.Launching projects such asCondor Camp and fiscally sponsoring organizations likeEpoch andApollo Research via our Special Projects team, which provides operational support.Setting up an Artificial Intelligence (AI) Governance and Strategy team and evolving it into a think tank that has already published multiple influential reports.Please help us keep RP impactful with your support.Why does RP need money from individuals when there are large donors supporting you?It's commonly assumed that RP must get all the money it needs from large institutions. But this is not the case - we've histor...]]> Tue, 21 Nov 2023 17:57:19 +0000 EA - Rethink Priorities needs your support. Here's what we'd do with it. by Peter Wildeford 40 reports commissioned by Open Philanthropy and GiveWell answering their questions to inform their global health and development portfolios.Producing theEA Survey andsurveys on the impact of FTX on the EA brand that were used by many EA orgs and local groupsConducting over 200 tailored surveys and data analysis projects to help many organizations working on global priorities.Launching projects such asCondor Camp and fiscally sponsoring organizations likeEpoch andApollo Research via our Special Projects team, which provides operational support.Setting up an Artificial Intelligence (AI) Governance and Strategy team and evolving it into a think tank that has already published multiple influential reports.Please help us keep RP impactful with your support.Why does RP need money from individuals when there are large donors supporting you?It's commonly assumed that RP must get all the money it needs from large institutions. But this is not the case - we've histor...]]> 40 reports commissioned by Open Philanthropy and GiveWell answering their questions to inform their global health and development portfolios.Producing theEA Survey andsurveys on the impact of FTX on the EA brand that were used by many EA orgs and local groupsConducting over 200 tailored surveys and data analysis projects to help many organizations working on global priorities.Launching projects such asCondor Camp and fiscally sponsoring organizations likeEpoch andApollo Research via our Special Projects team, which provides operational support.Setting up an Artificial Intelligence (AI) Governance and Strategy team and evolving it into a think tank that has already published multiple influential reports.Please help us keep RP impactful with your support.Why does RP need money from individuals when there are large donors supporting you?It's commonly assumed that RP must get all the money it needs from large institutions. But this is not the case - we've histor...]]> Peter Wildeford https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 33:25 no full 3
t2mHvBJMgXpk4uKq2 EA - Funding priorities at the Good Food Institute Europe: what additional impact will be created by marginal grants to GFI Europe? by emilygjohnson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding priorities at the Good Food Institute Europe: what additional impact will be created by marginal grants to GFI Europe?, published by emilygjohnson on November 21, 2023 on The Effective Altruism Forum.The Good Food Institute is a non-profit think tank helping to build a more sustainable, secure and just food system by transforming meat production. We work with scientists, businesses and policymakers to advance plant-based and cultivated meat and precision-fermented food - making these alternative proteins delicious, affordable and accessible.By making meat from plants and cultivating it from cells, we can reduce the environmental impact of our food system and address the welfare of animals in industrial animal agriculture. Founded on effective altruism principles, GFI identifies and advances high-impact, achievable solutions in areas where too few people are working. We focus on what is needed most and provide the talent and resources necessary to have the biggest impact possible.GFI is a global network of six organisations focused on one vision: creating a world where alternative proteins are no longer alternative. We are powered by philanthropy and we are currently fundraising to seed our collective 2024 budget, with a gap to goal of $12.7 million, as of today. Within that, GFI Europe has a funding gap of 1.5million EUR that will allow us to have substantial additional counterfactual impact in 2024.The Good Food Institute Europe (GFI Europe) is an affiliate of the Good Food Institute and has been identified as a priority area for GFI's growth over the next couple of years. As Senior Philanthropy Manager for GFI Europe, in response to this post, I thought it would be helpful to expand upon why this is the case and to use GFI Europe as an example of how we would leverage marginal increases in funding to generate as much impact as possible in this region, and by extension, globally.While I am shining a light on GFI Europe in this post, in every region where we operate, our global teams identify and advance good food solutions. All of our growth is carefully planned to ensure that we can have the greatest possible impact on the ecosystem as a whole.Why expansion in Europe is an urgent priority for GFIGFI's global priority is to unlock$10.1billion in public funding for alternative proteins ($4.4bn for R&D; $5.7 towards commercialisation), $1.5 billion of which we believe could come from Europe. Each additional hire, most directly in our Policy and Science & Technology teams, increases the likelihood of unlocking this funding, especially if equipped with evidence and research in support of the benefits and feasibility of alternative proteins.In other words, each marginal increase in funding for GFI Europe has the potential to leverage much greater sums in R&D funding. Unlocking R&D funding is on the critical path for plant-based and cultivated meat to reach taste and price parity and to become the default choice for consumers, so is an urgent priority.In addition to this, political opposition in Europe presents a particular - and, arguably, existential - risk to alternative proteins. The risk to a more sustainable and just future of food is not simply that the potential funding fails to be unlocked, but that political opponents could derail attempts to achieve regulatory approval for novel alternative proteins. With applications for regulatory approval of cultivated meatbeginning to appear on the horizon and countries considering their climate and food strategies, now is a critical time to ensure that we can take advantage of opportunities and mitigate risks. Indeed, with alternative proteins firmly on the policy agenda, and decision-makers trying to make up their minds about what position to take on them, the next few years are likely to set the course...

]]>
emilygjohnson https://forum.effectivealtruism.org/posts/t2mHvBJMgXpk4uKq2/funding-priorities-at-the-good-food-institute-europe-what Tue, 21 Nov 2023 13:26:23 +0000 EA - Funding priorities at the Good Food Institute Europe: what additional impact will be created by marginal grants to GFI Europe? by emilygjohnson emilygjohnson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:14 no full 5
QifaoBpLpuT8bo3zj EA - Fish Welfare Initiative and Marginal Funding by haven Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fish Welfare Initiative and Marginal Funding, published by haven on November 21, 2023 on The Effective Altruism Forum.This post is Fish Welfare Initiative's contribution to Marginal Funding Week. To note, we're posting here in our capacities as cofounders of FWI.We're planning to post a fuller update in the coming week, but wanted to make this funding-specific post for Marginal Funding Week. Here, we discuss what we'd do with marginal funds, as well as the reasons for and against a donation to FWI.What would FWI do with marginal funds?Marginal funding right now would go to filling FWI's 2024 funding gap, which is most of our ~$750,000 annual budget. Specifically, this funding would go towards the following main outcomes:Enabling several in-field studies to test welfare improvements and interventions that have the potential to be more promising than what we are currently implementing.Implementing our current program by expanding it to another 100 fish farms and helping the animals in these farms via stocking density and/or water quality improvements.Other work we believe is useful, such as policy and stakeholder work that may later enable us to more effectively scale.All of the above work will take place in India, which, primarily for its scale and tractability, we have identified as a country with particularly large potential for reducing farmed fish suffering. We will also likely conduct further work in China next year - we intend to publish our plans for there in the coming months.You can see FWI's planned 2024 OKRs for more specific information.Reasons in favor of a donation to FWINote that this and the following section are repeated from content present on our donation page FAQ.The following are some arguments in favor of donating to FWI, roughly in descending order of our view of their significance:Reason #1: FWI's potential for impactThe scope of the problem we face is huge: Billions of farmed fishes live in our countries of operation (India and China) alone, their living conditions are often very poor, and virtually nothing has been done to address these issues so far. Furthermore, the fact that we have already had promising inroads withfarmers andother key stakeholders in these contexts suggests that we are able to make traction on these problems. Without any obvious limiting factors here then, we believe that, once at scale, our programming does have the potential to improve the lives of hundreds of millions, or even a billion, fishes. (Note though that our avenue to reach scale is still unclear - see reasons against below.)Reason #2: FWI's current impactWe currentlyestimate that we've improved the lives of over 1 million fishes. This makes FWI one of the most promising avenues in the world to reduce farmed fish suffering, and likely the most promising avenue in the world to reduce the suffering of farmed Indian major carp, one of the largest and most neglected species groups of farmed fishes.Reason #3: FWI is addressing some of the animal movement's hardest questionsIf we are ever going to bring about a world that is truly humane, we will need to address the more neglected groups in animal farming, particularly including farmed fishes and animals farmed in informal economies. We believe that FWI's work is demonstrating some avenues of helping these groups, and will thus enable other organizations to work more effectively on them.Sustainable Shrimp Farmers of India.Reason #4: Animal movement-building in AsiaAlmost90% of farmed fishes, as well as the majority of farmed terrestrial animals, are in Asia. We thus believe it is critical to launch movements in Asian countries to address the suffering facing these animals, and to expand the animal movement by bringing in new people. We are proud to have hired a local team of about 17 full-t...

]]>
haven https://forum.effectivealtruism.org/posts/QifaoBpLpuT8bo3zj/fish-welfare-initiative-and-marginal-funding Tue, 21 Nov 2023 13:13:14 +0000 EA - Fish Welfare Initiative and Marginal Funding by haven haven https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:08 no full 6
Bu2gbG3GoDpfSFwdx EA - AI Safety Research Organization Incubation Program - Expression of Interest by kaykozaronek Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Research Organization Incubation Program - Expression of Interest, published by kaykozaronek on November 21, 2023 on The Effective Altruism Forum.Tl;dr: If you might want to participate in our incubation program and found an AI safety research organization,express your interest here. If you want to help out in other ways please fill out that same form.WeCatalyze Impact - believe it is a bottleneck in AI safety that there aretoo few AI safety organizations. To address this bottleneck we are piloting an incubation program, similar toCharity Entrepreneurship's program.The incubation program is designed to help youfind a complementary co-founderacquire additional knowledge and skills for founding an AI safety research organizationget access to a network of mentors, advisors and potential fundersProgram overviewWe aim to deliver this program end of Q1 2024. Here's a broad outline of the 3 phases we are planning:Phase 1: Online preparation focused on skill building, workshops from experts, and relationship building (1 month)Phase 2: An immersive in-person experience in London, focused on testing cofounder fit, continuous mentorship, and networking (2 months)Phase 3: Continued individualized coaching and fundraising supportWho is this program for?We are looking for motivated and ambitious engineers, generalists, technical researchers, or entrepreneurs who would like to contribute significantly to reducing the risks from AI.Express your Interest!If you are interested in joining the program, funding Catalyze, or helping out in other ways, please fill in thisform!For more information, feel free to reach out at alexandra@catalyze-impact.orgcrossposted to LessWrongThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
kaykozaronek https://forum.effectivealtruism.org/posts/Bu2gbG3GoDpfSFwdx/ai-safety-research-organization-incubation-program Tue, 21 Nov 2023 12:54:03 +0000 EA - AI Safety Research Organization Incubation Program - Expression of Interest by kaykozaronek kaykozaronek https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:58 no full 7
phAp8aEr6hhQk3GmK EA - High Impact Medicine - Impact Survey Results and Marginal Funding by High Impact Medicine Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High Impact Medicine - Impact Survey Results and Marginal Funding, published by High Impact Medicine on November 21, 2023 on The Effective Altruism Forum.IntroductionAs part of the Marginal Funding Week, we want to give a brief update on High Impact Medicine, describing which projects marginal funding is likely to be spent on.High Impact Medicine (Hi-Med) is a non-profit organisation dedicated to inspiring and empowering medical students and doctors to make impact-driven decisions in their careers and giving.Theory of ChangeThis is an overview of our activities, our current definition of positive impact, our target audience and the outcomes we monitor.The main assumptions behind our theory of change are:Target group-specific interventions can improve altruistic behaviour change beyond broad outreach: Interventions customised to professional groups account for background-specific needs, abilities, and goals.Professional peers can be potent facilitators of altruistic behaviour: Role models are an important trigger for altruistic behaviour change. Change is more likely when someone is "like me", i.e. belongs to a relevant peer group.Medical doctors are a well-suited target group for altruistic impact considerations: They are often strongly altruistically motivated, exceptionally skilled, and scientifically minded, and they often have significant career capital and high incomes.Proof of concept: Past interventions and their validationWe conducted various programmes, interacting with > 500 medical doctors and students over the past two years. The full 2023 Impact Survey Executive Summary can be found here. The evaluation of our inaugural introductory fellowship cohort has been published in an academic peer-reviewed journal.Bioethicist Benjamin Krohmal recently ran an elective course for medical students at Georgetown University School of Medicine in the US, "Beneficence & Beyond: How to do the most good with your medical career", that was inspired and informed by our introductory fellowship. Our monitoring and evaluation team is currently helping to assess the results, and we are in conversations with other universities to run similar programmes.What we learnedThere is substantial interest in the medical community to learn more about doing the most good: We also got preliminary confirmation that the medical background of the High Impact Medicine team meant that we were able to form genuine and meaningful connections with our members, which in turn increased the tractability of our efforts.It's likely that a mix of interventions that matters: All individuals for whom Hi-Med has facilitated career changes have participated in both the introductory fellowship and 1:1 conversations. 1:1 conversations seemed to be particularly important in influencing them to make these career and giving decisions.We have seen the most positive impactful changes in individuals with high scores in altruistic motivation & career capital: This was an observation from our most successful case studies.Time investments to attain giving pledges can be extremely low: Charismatic individuals can initiate someone strongly considering a donation pledge in a single 1-1.Impact attribution is challenging: Individuals engage in multiple interventions, complicating evaluations.Reliance on volunteers is unsustainable: Operationally, our rapid community growth and reliance on contractors / volunteers strained our organisational capacity.Looking forwardBased on our evaluation of past and current programmes, we plan to iterate in the following waySelect for and attract more promising individuals (e.g. by building external credibility) and provide them with timely and individualised support (e.g. more 1:1 calls, a career fellowship cohort starting every other month, biosecurity career change ...

]]>
High Impact Medicine https://forum.effectivealtruism.org/posts/phAp8aEr6hhQk3GmK/high-impact-medicine-impact-survey-results-and-marginal 500 medical doctors and students over the past two years. The full 2023 Impact Survey Executive Summary can be found here. The evaluation of our inaugural introductory fellowship cohort has been published in an academic peer-reviewed journal.Bioethicist Benjamin Krohmal recently ran an elective course for medical students at Georgetown University School of Medicine in the US, "Beneficence & Beyond: How to do the most good with your medical career", that was inspired and informed by our introductory fellowship. Our monitoring and evaluation team is currently helping to assess the results, and we are in conversations with other universities to run similar programmes.What we learnedThere is substantial interest in the medical community to learn more about doing the most good: We also got preliminary confirmation that the medical background of the High Impact Medicine team meant that we were able to form genuine and meaningful connections with our members, which in turn increased the tractability of our efforts.It's likely that a mix of interventions that matters: All individuals for whom Hi-Med has facilitated career changes have participated in both the introductory fellowship and 1:1 conversations. 1:1 conversations seemed to be particularly important in influencing them to make these career and giving decisions.We have seen the most positive impactful changes in individuals with high scores in altruistic motivation & career capital: This was an observation from our most successful case studies.Time investments to attain giving pledges can be extremely low: Charismatic individuals can initiate someone strongly considering a donation pledge in a single 1-1.Impact attribution is challenging: Individuals engage in multiple interventions, complicating evaluations.Reliance on volunteers is unsustainable: Operationally, our rapid community growth and reliance on contractors / volunteers strained our organisational capacity.Looking forwardBased on our evaluation of past and current programmes, we plan to iterate in the following waySelect for and attract more promising individuals (e.g. by building external credibility) and provide them with timely and individualised support (e.g. more 1:1 calls, a career fellowship cohort starting every other month, biosecurity career change ...]]> Tue, 21 Nov 2023 03:58:10 +0000 EA - High Impact Medicine - Impact Survey Results and Marginal Funding by High Impact Medicine 500 medical doctors and students over the past two years. The full 2023 Impact Survey Executive Summary can be found here. The evaluation of our inaugural introductory fellowship cohort has been published in an academic peer-reviewed journal.Bioethicist Benjamin Krohmal recently ran an elective course for medical students at Georgetown University School of Medicine in the US, "Beneficence & Beyond: How to do the most good with your medical career", that was inspired and informed by our introductory fellowship. Our monitoring and evaluation team is currently helping to assess the results, and we are in conversations with other universities to run similar programmes.What we learnedThere is substantial interest in the medical community to learn more about doing the most good: We also got preliminary confirmation that the medical background of the High Impact Medicine team meant that we were able to form genuine and meaningful connections with our members, which in turn increased the tractability of our efforts.It's likely that a mix of interventions that matters: All individuals for whom Hi-Med has facilitated career changes have participated in both the introductory fellowship and 1:1 conversations. 1:1 conversations seemed to be particularly important in influencing them to make these career and giving decisions.We have seen the most positive impactful changes in individuals with high scores in altruistic motivation & career capital: This was an observation from our most successful case studies.Time investments to attain giving pledges can be extremely low: Charismatic individuals can initiate someone strongly considering a donation pledge in a single 1-1.Impact attribution is challenging: Individuals engage in multiple interventions, complicating evaluations.Reliance on volunteers is unsustainable: Operationally, our rapid community growth and reliance on contractors / volunteers strained our organisational capacity.Looking forwardBased on our evaluation of past and current programmes, we plan to iterate in the following waySelect for and attract more promising individuals (e.g. by building external credibility) and provide them with timely and individualised support (e.g. more 1:1 calls, a career fellowship cohort starting every other month, biosecurity career change ...]]> 500 medical doctors and students over the past two years. The full 2023 Impact Survey Executive Summary can be found here. The evaluation of our inaugural introductory fellowship cohort has been published in an academic peer-reviewed journal.Bioethicist Benjamin Krohmal recently ran an elective course for medical students at Georgetown University School of Medicine in the US, "Beneficence & Beyond: How to do the most good with your medical career", that was inspired and informed by our introductory fellowship. Our monitoring and evaluation team is currently helping to assess the results, and we are in conversations with other universities to run similar programmes.What we learnedThere is substantial interest in the medical community to learn more about doing the most good: We also got preliminary confirmation that the medical background of the High Impact Medicine team meant that we were able to form genuine and meaningful connections with our members, which in turn increased the tractability of our efforts.It's likely that a mix of interventions that matters: All individuals for whom Hi-Med has facilitated career changes have participated in both the introductory fellowship and 1:1 conversations. 1:1 conversations seemed to be particularly important in influencing them to make these career and giving decisions.We have seen the most positive impactful changes in individuals with high scores in altruistic motivation & career capital: This was an observation from our most successful case studies.Time investments to attain giving pledges can be extremely low: Charismatic individuals can initiate someone strongly considering a donation pledge in a single 1-1.Impact attribution is challenging: Individuals engage in multiple interventions, complicating evaluations.Reliance on volunteers is unsustainable: Operationally, our rapid community growth and reliance on contractors / volunteers strained our organisational capacity.Looking forwardBased on our evaluation of past and current programmes, we plan to iterate in the following waySelect for and attract more promising individuals (e.g. by building external credibility) and provide them with timely and individualised support (e.g. more 1:1 calls, a career fellowship cohort starting every other month, biosecurity career change ...]]> High Impact Medicine https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:33 no full 12
62sdz7RP7oHnJzPNa EA - Animal Advocacy Strategy Forum 2023 Summary by Neil Dullaghan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Advocacy Strategy Forum 2023 Summary, published by Neil Dullaghan on November 20, 2023 on The Effective Altruism Forum.IntroductionIn July 2023, the Animal Advocacy Strategy Forum[1] was held over three days with the purpose of bringing together key decision-makers in the animal advocacy community to connect, coordinate, and strategize. At the end of the forum, 35/44 participants filled out a survey similar to last year's Forum survey (Duffy 2023) that sought to better understand the future needs of effective animal advocacy groups and the perceptions of animal advocates about the most important areas to focus on in the future.The attendees represented approximately 27 key groups in the animal advocacy space. 23/35 survey participants were in senior leadership positions at their organization (C-level, founder, and various "Executive" and "Director" roles).Our report discusses the results of that survey and workshops of the forum itself.Click here for the report on the Rethink Priorities website.AcknowledgmentsThis report was written by Neil Dullaghan. Thanks to Daniela R. Waldhorn for their guidance, and Kieran Greig and Laura Duffy for their helpful feedback and to Adam Papineau for copy-editing. The post is a project of Rethink Priorities, a global priority think-and-do tank, aiming to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits or other entities.newsletter. You can explore our completed public workhere.^Formerly known as the Effective Animal Advocacy Coordination ForumThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Neil_Dullaghan https://forum.effectivealtruism.org/posts/62sdz7RP7oHnJzPNa/animal-advocacy-strategy-forum-2023-summary-1 Mon, 20 Nov 2023 23:35:44 +0000 EA - Animal Advocacy Strategy Forum 2023 Summary by Neil Dullaghan Neil_Dullaghan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:56 no full 14
xcDZccfWkA6a9Atuv EA - AMA: GWWC research team by Sjir Hoeijmakers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: GWWC research team, published by Sjir Hoeijmakers on November 22, 2023 on The Effective Altruism Forum.We're the research team atGiving What We Can:Alana (research communicator)Michael (researcher)Sjir (research director)Ask us anything!We'll be answering questions Monday the 27th from 2pm UTC until Tuesday the 28th at 9pm UTC. Please post your questions in advance as comments to this post. And please upvote the questions you'd like us to answer most. We'll do our best to answer as many as we can, though we can't guarantee we'll be able to answer all of them.You might want to ask about the evaluations of evaluators we just published, or - relatedly - about our new fund and charity recommendations and GWWC cause area funds (which we will launch Monday alongside other website updates). We are also happy to answer any questions you may have about our research plans for next year, about theimpact evaluation we did earlier this year, about GWWC more broadly, or about anything else you are interested in!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Sjir Hoeijmakers https://forum.effectivealtruism.org/posts/xcDZccfWkA6a9Atuv/ama-gwwc-research-team Wed, 22 Nov 2023 20:18:51 +0000 EA - AMA: GWWC research team by Sjir Hoeijmakers Sjir Hoeijmakers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:17 no full 1
eZkApbMzBn8teQwju EA - Donation Election rewards by Will Howard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Donation Election rewards, published by Will Howard on November 22, 2023 on The Effective Altruism Forum.It's been great to see everyone contributing to the Donation Election, both with your posts and with your donations. If you weren't finding it exciting enough already, we're now offering rewards for different donation tiers! There are both individual rewards, and collective ones for if the fundraiser reaches certain milestones overall.You can donate to the fundhere, and you can learn more about the Donation Election in thegiving portal or intheseposts. Now here's a more detailed description of the rewards:Individual rewards$10: You can add a badge next to your name saying that you donated. This will last until around the end of December when the fundraiser closes. I've set this on my own profile so you can see what it looks like[1].$50: Someone who is bad at drawing will draw a picture of the animal or being of your choice. This will be a digital drawing[2], probably done by one of the CEA Online team.$100: Someone who is good at drawing will draw a picture of the animal or being of your choice (otherwise as above).$250: We will draw a portrait of you (or whoever you like) with Giving Season vibes - hopefully worthy of being used as your profile picture here or on other sites.To claim your individual rewards you can DM @EA Forum Team with a screenshot of your donation receipt[3], saying which rewards you want to claim (i.e. you don't have to claim the higher rewards if you don't want them). You can do this even if you donated before these were announced.Note that we may only do a limited number of drawings if it turns out we get loads of requests, but I'll warn you here beforehand if it looks like this is going to be the case.Collective rewards$40,000: We'll make a Forum yearbook photo, where anyone can be included if they want (by submitting a picture or telling us to use your profile pic), and we'll make this the banner image of theCommunity page for the next year.$50,000: To celebrate this win for democracy with even more democracy, we'll let Forum users vote for the next[4] small feature we build. This will be chosen from a set of 5+ that we select beforehand. Here's some that might end up on the list to give you an idea:Adark mode toggle on the Frontpage (currently it's hidden in account settings)Forum-native pollsThe ability to mute/block people (so you wouldn't see their posts or comments, and they wouldn't be able to message you)Reactions onDMsThe ability to sign up for job ads based on your preferencesAI generated preview images for posts$75,000: To celebrate this huge win for democracy, we'll let you vote for the next big feature we build. As above this will be decided from a fixed list, and here are some examples of big features we might put on the list:Posts private to logged-in usersPrivate notes/highlights on postsBeing able to react (agree/disagree/heart etc) to a specific passage in a postAn easily accessible page for browsing editions of theDigest (a weekly curated list of top posts)A native way to limit your usage of the Forum to prevent doomscrollingThe ability to set reminders and/or "snooze" posts to read later$100,000: To celebrate tail risk events, we will host a "Lesswrong Freaky Friday", where we dress up the Forum asLessWrong for the day and all try and act like LessWrongers.For the vote-on-a-feature ones, please feel free to comment here (or DM me) with features that you would like to be included in the vote, and we'll include them if they're popular and the right size.^For if you're reading this far in the future:^Unless you happen to come to Trajan House in Oxford in which case you may be able to get a physical version^You should get a receipt by email, we just need a screenshot of the part t...

]]>
Will Howard https://forum.effectivealtruism.org/posts/eZkApbMzBn8teQwju/donation-election-rewards Wed, 22 Nov 2023 19:41:04 +0000 EA - Donation Election rewards by Will Howard Will Howard https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:27 no full 3
PTHskHoNpcRDZtJoh EA - GWWC's evaluations of evaluators by Sjir Hoeijmakers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's evaluations of evaluators, published by Sjir Hoeijmakers on November 22, 2023 on The Effective Altruism Forum.The Giving What We Can research team is excited to share the results of our first round of evaluations of charity evaluators and grantmakers! After announcing our plans for a new research directionlast year, we have now completed five[1] evaluations that will inform our donation recommendations for this giving season. There are substantiallimitations to these evaluations, but we nevertheless think that this is a significant improvement on the status quo, in which there were no independent evaluations of evaluators' work. We plan to continue to evaluate evaluators, extending the list beyond the five we've covered so far, improving our methodology, and regularly renewing our existing evaluations.In this post, we share the key takeaways from each of these evaluations, and link to the full reports. Ourwebsite will be updated to reflect the new fund and charity recommendations that came out of these evaluations (alongside many other updates) on Monday, the 27th. We are sharing these reports in advance of our website update so those interested have time to read them and can ask questions before our AMA next Monday and Tuesday. We're also sharing some context aboutwhy and how we evaluate evaluators, which will be included in our Monday website update as well.One other exciting (and related) announcement: we'll be launching our new GWWC cause area funds on Monday! These funds (which you'll see referenced in the reports) will make grants based on our latest evaluations of evaluators, advised by the evaluators we end up working with.[2] We are launching them to provide a strong and easy default donation option for donors, and one that will stay up-to-date over time (i.e., donors can set up a recurring donation to these funds knowing that it will always be allocated based on GWWC's latest research).donation platform as well.We look forward to your questions and comments, and in particular to engaging with you in our AMA! (Please note that we may not be able to reply to many comments until then, as we are finalising the website updates and some of us will be on leave.)Global health and wellbeingGiveWell (GW)Based on our evaluation, we've decided to continue to rely on GW's charity recommendations and to ask GW to advise our new GWWC Global Health and Wellbeing Fund.Some takeaways that inform this decision include:GW's overall processes for charity recommendations and grantmaking are generally very strong, reflecting a lot of best practices in finding and funding the most cost-effective opportunities.GW's cost-effectiveness analyses stood up to our quality checks. We thought its work was remarkably evenhanded (we never got the impression that the evaluations were exaggerated), and we generally found only minor issues in the substance of its reasoning, though we did find issues with how well this reasoning was presented and explained.We found it noteworthy how much subjective judgement plays a role in its work, especially with how GW compares different outcomes (like saving and improving lives), and also in some key parameters in its cost-effectiveness analyses supporting deworming. We think reasonable people could come to different conclusions than GW does in some cases, but we think GW's approach is sufficiently well justified overall for our purposes.For more, please see theevaluation report.Happier Lives Institute (HLI)We stopped this evaluation short of finishing it, because we thought the costs of finalising it outweighed the potential benefits at this stage.For more on this decision and on what we did learn about HLI, please see theevaluation report.Animal welfareEA Funds' Animal Welfare Fund (AWF)Based on our evaluation, we've decide...

]]>
Sjir Hoeijmakers https://forum.effectivealtruism.org/posts/PTHskHoNpcRDZtJoh/gwwc-s-evaluations-of-evaluators Wed, 22 Nov 2023 17:02:04 +0000 EA - GWWC's evaluations of evaluators by Sjir Hoeijmakers Sjir Hoeijmakers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:50 no full 5
dwTYQ5Z5Hpw2Nje8t EA - 'Why not effective altruism?' - Richard Y. Chappell by Pablo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 'Why not effective altruism?' - Richard Y. Chappell, published by Pablo on November 22, 2023 on The Effective Altruism Forum.Forthcoming in Public Affairs Quarterly:Effective altruism sounds so innocuous - who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core "beneficentric" ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but all should share the basic goals or values underlying effective altruism.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Pablo https://forum.effectivealtruism.org/posts/dwTYQ5Z5Hpw2Nje8t/why-not-effective-altruism-richard-y-chappell Wed, 22 Nov 2023 16:17:00 +0000 EA - 'Why not effective altruism?' - Richard Y. Chappell by Pablo Pablo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:49 no full 6
ztrjYbGwQZLFRfWDs EA - Reflections on Bottlenecks Facing Potential African EAs by Zakariyau Yusuf Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on Bottlenecks Facing Potential African EAs, published by Zakariyau Yusuf on November 22, 2023 on The Effective Altruism Forum.TL;DRCapacity, time, and access have influences on the impact of many EAs in Africa.Likewise, commitment issues, lack of confidence, and collaboration and openness gaps also play roles in limiting impact.EA communities can accelerate progress by offering more targeted support.Disclaimer: I make this post to highlight some of the challenges that I think some African EAs and those interested in the EA approach in the region face. I also propose ways the EA community can help accelerate some of the African EAs' impact. I do not intend to imply that current African EAs are not impactful or are the only ones needing support to accelerate their impact, nor is it meant to refer to any individual African EAs.This is based on my experience in EA community building in Nigeria and engaging with other EAs in the region. My post is intended to raise awareness in any way that would be useful.For emphasis, I'm not implying that the challenges I included capture everything or that the proposed ways are exhaustive.Interests and challenges that I have identifiedEA seems to pique the interest of young professionals and students in Nigeria when they first learn about it. This interest could very well be shared by individuals in other African contexts, as I have heard similar sentiments from those I engage with from other regions of Africa. Those curious tend to explore EA further by interacting with local groups (where they can), enrolling in an introductory program (usually EA Virtual Programs or the one organized by the local group where applicable), signing up for an event, or utilizing online resources, such as the forum, to delve deeper into the EA.Based on my experience in EA community building in Nigeria, I have observed that there is more interest in Effective Altruism from recent graduates/early career professionals, followed by university students and mid-level career professionals. However, I have noticed very little interest from advanced professionals. This pattern may likely be similar in other contexts. The groups that show more interest in EA may do so for any of the following reasons:Many are still exploring their career options and see EA as a viable approach.Some are interested in charitable causes and view EA as a way to align with their goals.Others are looking for opportunities and stumbled upon EA.Some have found EA to be advocating for the cause area they are already passionate about or interested in.I have also identified some of the problems that I think are preventing some of the individuals from making headway:Commitment and Disorganization: I experienced situations in which recent graduates looking to use their career to make a more positive difference could not commit to learning more about some of the top problems or even properly engage in career planning to enable them to figure out their abilities and top problem that they could effectively contribute to.I think this commitment issue correlates with disorganization in this context, and this is actually one of the key concerns I repeatedly see in our community in Nigeria. I believe it has a lot of implications for making progress and how impactful one could be. I tried to get a sense of this problem, and in some of the surveys or interactive sessions, time issues were flagged as some of the reasons, as some are actively engaging in other day's work; other reasons cited are internet access related or visa for in-person training or program abroad.Lack of confidence and collaboration: Some of the individuals in the community feel less confident and that it would be hard to make headway tackling big problems at their stage; they think that engaging in su...

]]>
Zakariyau Yusuf https://forum.effectivealtruism.org/posts/ztrjYbGwQZLFRfWDs/reflections-on-bottlenecks-facing-potential-african-eas Wed, 22 Nov 2023 12:22:09 +0000 EA - Reflections on Bottlenecks Facing Potential African EAs by Zakariyau Yusuf Zakariyau Yusuf https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:00 no full 9
6xDeDX3iqwwab6qSA EA - GiveWell's 2023 recommendations to donors by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell's 2023 recommendations to donors, published by GiveWell on November 22, 2023 on The Effective Altruism Forum.We're excited about the impact donors can have by supporting our All Grants Fund and our Top Charities Fund. For donors who want to support the programs we're most confident in, we recommend the Top Charities Fund, which is allocated among our four top charities. For donors with a higher degree of trust in GiveWell and willingness to take on more risk, our top recommendation is the All Grants Fund, which goes to a wider range of opportunities and may have higher impact per dollar.Read more about the options for giving below. We estimate that donations to the programs we recommend can save a life for roughly $5,000 on average,[1] or have similarly strong impact by increasing incomes or preventing suffering. Click here to donate.Why your support mattersWe expect to find more outstanding giving opportunities than we can fully fund unless our community of supporters substantially increases its giving. Figures like $5,000 per life saved are rough estimates; while we spend thousands of hours on our cost-effectiveness analyses, they're still inherently uncertain. But the bottom line is that we think donors have the opportunity to do a huge amount of good by supporting the programs we recommend.For a concrete sense of what a donation can do, let's focus briefly on seasonal malaria chemoprevention (SMC), which involves distributing preventive medication to young children. We've directed funding to Malaria Consortium to implement SMC in several countries, including Burkina Faso.[2]In Burkina Faso, community health promoters go from household to household across the country, every month during the rainy season (when malaria is most common). They give medicine to each child under the age of five, which involves mixing a medicated tablet into water and then spoon-feeding the medicine to infants and having young children drink it from a cup. They also give caregivers instructions to give additional preventive medicine over the next two days.It costs roughly $6 to reach a child with a full season's worth of SMC (though this figure doesn't account for fungibility, which pushes our estimate of overall cost-effectiveness downward).[3] If a child receives a full course of SMC, we estimate that they're about five times less likely to get malaria during the rainy season (which is when roughly 70% of cases occur).Community distributor providing SMC medication to a child sitting on mother's lap. Photo courtesy of Malaria Consortium.Imagine a village with 135 families in it, each with two kids under the age of five, for a total of 270 young children. In this village, imagine that every child is reached with a full course of SMC during the rainy season.[4] Without SMC, we estimate that on average, 100[5] of those 270 young kids would test positive for malaria at any given point in time (though we think most of them would be asymptomatic). We estimate that SMC brings the overall prevalence of malaria down from 100 kids to only 40.[6] For kids who would be symptomatic, this is the difference between feeling healthy and experiencing fever, aches, and other flu-like symptoms.What we're excited to have recommended so farThis year, we've recommended grants to extend and expand programs we've supported for a while, like top charities, and we've also supported programs that are newer to us. With a decline in expected funding from Open Philanthropy, we've slowed our spending to match the funding we expect to raise going forward; we've focused more of our grantmaking on building for the future rather than funding large-scale opportunities this year. Below, we describe four selected grants from this year.[8]More about each of these grants:$6.6 million to the Clinton Heal...

]]>
GiveWell https://forum.effectivealtruism.org/posts/6xDeDX3iqwwab6qSA/givewell-s-2023-recommendations-to-donors Wed, 22 Nov 2023 11:25:24 +0000 EA - GiveWell's 2023 recommendations to donors by GiveWell GiveWell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:54 no full 10
YhMJasA8MaFQCWcA5 EA - Introducing Dialogues + Donation Debate Week by tobytrem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Dialogues + Donation Debate Week, published by tobytrem on November 22, 2023 on The Effective Altruism Forum.TL;DR: Donation Debate Week (21-28 November) has started! Just in time for it, we've added the Dialogue feature built by LessWrong[1], which allows you to create and publish a conversation with another user.Consider using this thread to set up dialogues with people who disagree with your donation views!Donation Debate Week: discuss donation choice and how we should vote in the Donation ElectionDonation Debate Week is a chance to stress-test your own thinking about donations, help others make better donation decisions, and move the needle in theDonation Election.Do thepre-votes in the Donation Election seem off to you? Do you think people who read the EA Forum could improvetheir donation choices in specific ways?Write about it for Donation Debate Week!(If pre-votes seem off, that's probably tracking a disagreement you have with many people about which donation opportunities are most cost-effective. See alsosome outdated information about where people in EA tend to donate.)Your own donations might also do more good if you redirected them.Read what people write for Donation Debate Week and consider sharing your donation plans to get feedback.Some specific ways to participate in Donation Debate Week(Not an exhaustive list!)Comment on this post to find a dialogue partner for a debate about donation choice (or how people should vote). This could help you test the arguments that drive your personal donation choices and to clarify your uncertainties. (Example dialogues arehere.)Here are some example comments you could use to set up a Donation Debate Week dialogue:"I think GiveWell's Top Charities Fund is my best bet for a global health donation. Change my mind!""I can't decide whether AI safety should be my top longtermist cause. Help me clarify my cruxes?""I'm skeptical of wild animal welfare work. Anyone want to debate with me? (Note: I might not end up having enough time.)""Is AI safety no longer neglected? I don't want to donate because of this feeling. Up for having a dialogue with someone who disagrees."Write posts aimed at shifting how people think about donation choice (or where they're voting).(likethis post arguing that the majority of OpenPhil's neartermist funding should go to animal welfare).Shareestimates of thecost-effectiveness of some donation opportunities you've explored.Readwhat others are writing.Or, as always, ask a question, write a quick take, comment on other people's posts, and upvote posts and comments you appreciate.Voting for the Donation Election begins on December 1st, but it doesn't close until December 15th, so don't worry too much if your posts aren't ready for this week.How dialogues workWe've just added this feature, so it might be buggy (contact us or comment here if you find bugs!) and we will probably be changing it a bit in the future. There's also a chance that we'll remove it entirely at some point if it isn't getting much use.Finding a partner for a dialogueThe first step to creating a dialogue is to find someone (or a small group of people) to have a dialogue with. Here are some suggestions for how you could find dialogue partners:Asking someone you know, or private-messagingCommenting on a post you're interested in discussing with someone.Commenting here if you'd like to talk about donation choicePosting a quick take (inviting people to change your mind, discuss your uncertainties, or anything else)Setting up the dialogueTo create the dialogue, hover your mouse over your profile in the top right corner.After you click on "New dialogue" you will get the following pop-up:Title your dialogue. You can change this later. Add the participant(s). (You can add more participant...

]]>
tobytrem https://forum.effectivealtruism.org/posts/YhMJasA8MaFQCWcA5/introducing-dialogues-donation-debate-week Wed, 22 Nov 2023 11:15:22 +0000 EA - Introducing Dialogues + Donation Debate Week by tobytrem tobytrem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:59 no full 11
82mrbdm5BLHbCsqin EA - Visuals for EA-aligned Research by JordanStone Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Visuals for EA-aligned Research, published by JordanStone on November 22, 2023 on The Effective Altruism Forum.Hello! I create visuals for research articles, websites, presentations, grant proposals, and other research outputs and I'm interested in working with EA-aligned organisations and researchers to increase my positive impact. If you'd like to know more about how I can help you out, book a consultation here.I often create diagrams to summarise research projects:But I've also created technical diagrams to visualise equipment for research:And summaries of academic research outputs:Also, some diagrams to help understand and learn science:I usually charge ~80 ($100) per hour depending on the difficulty of the request. But if your work is EA-aligned then I'll accept a donation to GWWC instead, as I'm keen to support organisations working on high-impact research.It's usually easiest to have a quick chat about what you do and then we can discuss how I can help you. Just a block of text copy and pasted from an article or webpage is usually enough for me to create a visual.Look forward to hearing from you!My website: https://www.stonescience.org/illustrationsMy email: jordan@stonescience.orgBook a consultation: https://savvycal.com/AstroJordanStone/2cb3cbdbThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
JordanStone https://forum.effectivealtruism.org/posts/82mrbdm5BLHbCsqin/visuals-for-ea-aligned-research Wed, 22 Nov 2023 11:14:09 +0000 EA - Visuals for EA-aligned Research by JordanStone JordanStone https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:37 no full 12
wXrxBEqGwEmeedEXF EA - Sam Altman returning as OpenAI CEO "in principle" by Fermi-Dirac Distribution Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sam Altman returning as OpenAI CEO "in principle", published by Fermi-Dirac Distribution on November 22, 2023 on The Effective Altruism Forum.This was just announced by the OpenAI Twitter account:Implicitly, the previous board members associated with EA, Helen Toner and Tasha McCauley, are ("in principle") no longer going to be part of the board.I think it would be useful to have, in the future, a postmortem of what happened, from an EA perspective. EA had two members on the board of arguably the most important company of the century, and it has just lost them after several days of embarrassment. I think it would be useful for the community if we could get a better idea of what led to this sequence of events.[update: Larry Summers said in 2017 that he likes EA.]Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Fermi–Dirac Distribution https://forum.effectivealtruism.org/posts/wXrxBEqGwEmeedEXF/sam-altman-returning-as-openai-ceo-in-principle Wed, 22 Nov 2023 08:53:34 +0000 EA - Sam Altman returning as OpenAI CEO "in principle" by Fermi-Dirac Distribution Fermi–Dirac Distribution https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:57 no full 13
Kjm8nbkSzMt2rLmX4 EA - The Role of Behavioural Science in Effective Altruism by Emily Grundy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Role of Behavioural Science in Effective Altruism, published by Emily Grundy on November 22, 2023 on The Effective Altruism Forum.At EAGx Australia 2022, I spoke about the role of behavioural science in effective altruism. You can now watchthe recording on Youtube. In the talk, I introduce the concept of behavioural science, discuss how it relates to effective altruism, and highlight some common mistakes we make when trying to change behaviour for societal good.Want to skim a post instead of consuming a 30-minute video? I don't blame you, here's the basics of my talk…Imagine a world…Imagine a world where…We know what the most effective charities are. We've got a list of all the charities that exist, we've got detailed information about each one, and we've ranked them on various criteria that we care about. There's no more uncertainty, GiveWell is out of business, and we know where to funnel our charitable donations.We have perfected our biosecurity risk standards. We understand all the potential risks. We know how we can prevent things like lab leaks from occurring. We've even developed safety protocols outlining everything we need to do.We just understand sentience. Turns out the hard problem of consciousness wasn't actually that hard to solve. We understand which beings are sentient - which beings feel pleasure and pain - and we know why.Sounds pretty great, right? In this world, we seem to have achieved monumental strides.Yet, perhaps this wouldn't be that exciting: these strides say nothing about the impact we're having.Why?We may understand what the most effective charities are, but what happens if no one donates to them?We may develop biosecurity risk standards and protocols, but what does that mean if people don't comply with them?We may know which beings are sentient, but what impact does that have if we don't change our treatment of those beings?These examples demonstrate that we can have knowledge, understanding, and even action, but if we don't understand how to change behaviour - we might not have the impact that we want.What is behavioural science?Behavioural science is the scientific study of human behaviour. Why do people do the things they do? Why do they make the decisions that they do? What needs to change in order for them to do differently?Behavioural science considers many influences: conscious thoughts, habits, motivations, the social context, and more. It borrows from several disciplines, including economics, psychology, and sociology.What is the role of behavioural science in effective altruism?Here is a (very) basic theory of change for effective altruism.We know how to do the most good. We act on that knowledge. And, as a result, we hopefully have an impact.Examples of things we can do at the knowledge stage include understanding which charities are most effective, creating problem profiles, and predicting what existential risks are most consequential or likely. At the action stage, we could donate to those charities, make career changes based on what we think is most impactful, or actually work to prevent existential risks.How does behavioural science come into this? It focuses on the action stage and it asks, 'How?'. How do we get people to donate to effective charities? How do we encourage others to make career changes, or work to prevent existential risks? Behavioural science can inform how we act, or how we get others to act, in order to enhance our impact.Note that there are different audiences we can target when we're thinking about behaviour change. Some audience 'levels' include:Individuals / the population level: This often involves behaviours that many people can engage in. For instance, donating to charity or reducing animal-product consumption.Critical actors: This includes people who possess specif...

]]>
Emily Grundy https://forum.effectivealtruism.org/posts/Kjm8nbkSzMt2rLmX4/the-role-of-behavioural-science-in-effective-altruism Wed, 22 Nov 2023 04:00:09 +0000 EA - The Role of Behavioural Science in Effective Altruism by Emily Grundy Emily Grundy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:41 no full 15
ggB7mkw8G8ZAE8PSy EA - Impactful Animal Advocacy: Building Community Infrastructure by Impactful Animal Advocacy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Impactful Animal Advocacy: Building Community Infrastructure, published by Impactful Animal Advocacy on November 22, 2023 on The Effective Altruism Forum.Tl;dr: Impactful Animal Advocacy (IAA) has grown from a side project in 2022 to a moderately well-recognized online hub for farmed animal advocates with over 2000 community members today. Our aim is to build infrastructure for a better connected animal advocacy movement. We are still a young organization and currently aim to focus on our core programs which include our Slack community, newsletter, and strategic connections.We are currently operating with 2 full-time and 1 part-time employee (1 is on a break) with a monthly expense rate of 5.5k USD. This is thanks to 2 team members agreeing to work on a volunteer basis. With this state of operations, we have a runway of 5 months secured.Additional marginal funding would be first used to ensure stable operations in 2024 and then to scale to include other promising programs such as an animal advocacy forum, resource hub, and collaboration with individuals and organizations in neglected regions.BackgroundWhen we first launched 18 months ago, we started with the Impactful Animal Advocacy newsletter. It was meant to be a helpful side project for friends and colleagues who wanted aggregated updates on the farmed animal advocacy movement. This quickly grew into other initiatives including a vibrant Slack community and active work in strategically connecting advocates. After the first year, we received some initial seed funding.This allowed for hiring the project founders for 1 day a week and this later progressed to one of the founders going full-time and the other becoming an advisor. Currently, we believe we have gotten through most of the challenges of early-stage organizational setup including creating and refining SOPs, OKRs, MEL metrics, project/task management systems, and obtaining fiscal sponsorship.Our current goalsTo create exceptionally good online spaces for professional farmed animal advocates to collaborate, exchange information, and be part of a communityTo share high quality resources and information relevant to the work of professional farmed animal advocatesTo serve an online coordination function for the farmed animal advocacy movementOur primary audience consists of engaged animal advocates who are interested in collaboration, including non-profit employees, project founders, and academics. We also have independent activists, funders, volunteers and established leaders on our platforms.We've seen rapid growth, with our Slack community reaching over 1,500 members and nearly 60 specialized channels in less than a year. Our newsletter is also gaining traction, with over 1,100 subscribers. Both platforms are community-driven, featuring content and insights shared by our members. Here are testimonials for our Slack and newsletter.Community storiesSince helping community members is at the center of what we do, our value is best demonstrated in their stories:Rosanna ZimdahlRosanna Zimdahl, a Masters student from Sweden wasn't connected to anyone in the movement. She is finishing a degree in MSc Engineering Energy & Environment, and besides that she likes to study systems thinking. She always wanted to contribute to the movement and during a systems mapping course she thought it would be great to apply it to an animal advocacy matter.The Slack space made it possible! She was inspired by the enthusiasm and took the opportunity to start a System Mapping course for farmed animal advocacy. Four other advocates she met on the space joined her to work on the project.If not for IAA, Rosanna wouldn't have found collaborators to do this which they learnt a lot from, and she will continue applying systems thinking methods to enhance the movement.Apoo...

]]>
Impactful Animal Advocacy https://forum.effectivealtruism.org/posts/ggB7mkw8G8ZAE8PSy/impactful-animal-advocacy-building-community-infrastructure Wed, 22 Nov 2023 02:11:38 +0000 EA - Impactful Animal Advocacy: Building Community Infrastructure by Impactful Animal Advocacy Impactful Animal Advocacy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:32 no full 16
rHHGNtE79ue39xCRf EA - A review of GiveWell's discount rate by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A review of GiveWell's discount rate, published by Rethink Priorities on November 21, 2023 on The Effective Altruism Forum.Editorial noteThis report was commissioned by GiveWell and produced by Rethink Priorities from June to July 2023. We revised this report for publication. GiveWell does not necessarily endorse our conclusions, nor do the organizations represented by those who were interviewed.The primary focus of the report is to review GiveWell's current formulation of its discount rate by recommending improvements and reinforcing justifications for areas that do not require improvement. Our research involved reviewing the scientific and gray literature, and we spoke with 15 experts and stakeholders.We don't intend this report to be Rethink Priorities' final word on discount rates, and we have tried to flag major sources of uncertainty in the report. We hope this report galvanizes a productive conversation within the global health and development community about discounting practices in cost-effectiveness analyses. We are open to revising our views as more information is uncovered.Executive summaryNotes on the scope and process of this projectThis project aims to serve the dual purposes of reviewing GiveWell's current approach to calculating its discount rate(s) to:Provide recommendations to GiveWell on how its approach to discount rates could be improved.Strengthen the justifications for its approach in cases where we do not recommend changes.The direction of this project was mainly guided by our priors[1] that a prioritized investigation into three aspects could potentially make the biggest difference to GiveWell's discount rate:A review of how other major organizations in the global health and development space (within and outside effective altruism) choose and justify their discount rates.A review of GiveWell's overall approach to calculating discount rates to determine:Whether GiveWell should use a different overall calculation approach.Whether GiveWell should think differently about discounting consumption vs. health outcomes.A review of the pure time preference component of GiveWell's discount rate.We also reviewed several other components of the discount rate (consumption growth rate, compounding non-monetary benefits, temporal uncertainty), but decided to spend less time on those as we deemed it less likely to make major recommendations or expected it would be harder to make meaningful progress. Table 1 summarizes our recommendations for GiveWell's discounting practices.The majority of this report focuses on the discount rate used for consumption benefits, as this appears to be the "main" discount rate used by GiveWell,[2] but we also discuss discounting of health benefits.We do not discuss discounting of costs in this report as (1) GiveWell's cost-effectiveness models rarely involve discounting costs, and (2) our general impression is that the typical approach across organizations is to discount monetary costs and benefits equally and we have seen very little discussion of alternative approaches.[3] A review of the shape of the utility functions[4] used is also out of scope for this review.Moreover, we focus exclusively on temporal discounting.[5] If the time frame is not specified, all discount rates expressed as percentages are annual. Due to the variety of existing opinions and approaches with respect to discount rates and a relative lack of consensus, we opted to approach this project from a perspective of figuring out whether there are any compelling reasons to change GiveWell's current approach, rather than starting from scratch and coming up with a discount rate independently of the current approach.Summary of recommendationsTable 1: Summary of Rethink Priorities' recommendations for GiveWell's discountingConsiderationCurre...

]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/rHHGNtE79ue39xCRf/a-review-of-givewell-s-discount-rate Tue, 21 Nov 2023 22:49:19 +0000 EA - A review of GiveWell's discount rate by Rethink Priorities Rethink Priorities https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:52 no full 17
nSgcLuyu2ypRQhT8r EA - EAGx Africa Event Interest Form: Looking forward to hosting an EAGx event in Africa! by Hayley Martin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGx Africa Event Interest Form: Looking forward to hosting an EAGx event in Africa!, published by Hayley Martin on November 23, 2023 on The Effective Altruism Forum.We're thrilled to announce our plans to organise an EAGx conference on the continent, and we want YOU to be a part of shaping this impactful event. Your input is invaluable in creating an experience that resonates with the community's aspirations and needs. Please take a moment to share your preferences and expectations by filling out our quick Google Form. Your insights will guide us in making this EAGx Africa conference an enriching and collaborative experience for all.[Fill out the EAGx Africa Event Interest Form here]Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Hayley Martin https://forum.effectivealtruism.org/posts/nSgcLuyu2ypRQhT8r/eagx-africa-event-interest-form-looking-forward-to-hosting Thu, 23 Nov 2023 18:06:32 +0000 EA - EAGx Africa Event Interest Form: Looking forward to hosting an EAGx event in Africa! by Hayley Martin Hayley Martin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:50 no full 2
dqDhXc9qirhPHjfXH EA - The passing of Sebastian Lodemann by ClaireB Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The passing of Sebastian Lodemann, published by ClaireB on November 23, 2023 on The Effective Altruism Forum.With immense sadness, we want to let the community know about the passing of Sebastian Lodemann, who lost his life on November 9th, 2023, in a completely unexpected and sudden accident. Those who have met him know how humble and kind he was, in addition to being a brilliant and energetic person full of light. Sebastian was deeply altruistic, curious, and took seriously both the challenges facing our world, and its potential. He loved connecting with humans from across the globe and supporting as many people as he could, so there will be a wide international community of people who will keenly feel his absence.Sebastian has been involved with EA since 2016, working on a wide range of projects in AI governance and strategy, pandemic prevention, civilisational resilience and career advising, and taking the Giving What We Can pledge.We extend our deepest sympathies to Sebastian's wife, his children, his parents and the rest of their family during this incredibly difficult time. We stand with them in mourning and in honoring the memory of a wonderful person who was taken from us far too soon.Sebastian's funeral ceremony took place on November 18th.Here are some steps you can take to commemorate Sebastian:You can make a donation to Sebastian's wife and childrenhere (in euros) orhere (in USD or in CAD). For other currencies, you can contact us ascommemoratesebastian@gmail.com*If you would like to be invited to a virtual gathering in memory of Sebastian, please complete thisshort form or emailcommemoratesebastian@gmail.comTo share memories of Sebastian for his family and his young children to know who their father was and how he was a force for good in the world in ways they might not otherwise get to learn, you can usethis link, or share your thoughts in the comments.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
ClaireB https://forum.effectivealtruism.org/posts/dqDhXc9qirhPHjfXH/the-passing-of-sebastian-lodemann Thu, 23 Nov 2023 17:49:50 +0000 EA - The passing of Sebastian Lodemann by ClaireB ClaireB https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:00 no full 3
YRGSmjYDaMvScCXh2 EA - A Thanksgiving gratitude post to EA by Joy Bittner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Thanksgiving gratitude post to EA, published by Joy Bittner on November 23, 2023 on The Effective Altruism Forum.Despite the complicated and imperfect origins of American Thanksgiving, what's worth preserving is the moment it offers for society to step back and count our blessings. And in this moment, I want to express my gratitude to the EA community.It's been a hard year for EA, and many of us have felt increasing levels of disillusionment. Still, a huge thank you to each of you for being part of this messy, but beautiful family.What I love about EA is that at our core, we are people who look around and see a world is world is messed up and kind of shitty. But also, when we see this mess, we deeply feel a moral responsibility to do something about it. And rather than falling into despair, we are optimistic enough to think we can actually do something about it.This more than anything else is what I think makes this a special community, a group of people who still think we can work together to build a better world. More than anything else, thank you for that.Transitioning from the general to the specific, I want to express gratitude to the EA community for enabling my work with Vida Plena, a mental health organization I founded in Ecuador. I am certain that without EA's support, Vida Plena would not exist.As a backstory, Vida Plena had been an idea bouncing around in my head for a few years. Finally, due to the pandemic, I decided to give it a try. My plan was to burn through all my personal savings, hoping it would be enough to get us off the ground and attract the attention of some traditional international development organizations for long-term funding. It was a significant long shot, but the best I had.Then came EA.In 2021 I started working operations for the Happier Lives Institute, which was my first real baptism into EA. I told HLI from the start that my priority was going to be Vida Plena, and they still hired me- even giving me significant amounts of flexibility. This would never happen in the highly competitive traditional nonprofit world. But as Micheal Plant generously told me then: EA is about seeking the greatest impact, so if that is Vida Plena, they would be there for me for it.Since then, the HLI team has continued to support me with research help, feedback, and much love (although to be clear, not money). Thank you especially to Samuel Dupret for countless hours on our predictive CEA and Barry Grimes for giving all the comms support. Peter, I so appreciate all our long walks and chats.Then the broader EA community stepped up and supported this project financially. When Vida Plena was still just an idea in my head, two exceptional individuals I met at EAG London 2021 stepped up and promised me the funding needed to run our pilot.This was the encouragement I needed to go from "I really think I want to do this" to "Well, now I have to do it." It's a very scary step to launch something new, but knowing that they believed in me enough to put their own money behind the idea was overwhelming. To these individuals, you know who you are - I can't express my gratitude enough. The fact that you trusted an almost stranger still deeply moves me.And with that, I need to say thank you to everyone who put in so much work to organize EAG, and all the people who financed it to bring so many people together. I would have never met these angel donors if it wasn't for the CEA team and volunteers.Next, I need to thank Joey Savoie and the whole Charity Entrepreneurship team. Although they rejected my application the first year (encouragement to keep trying for anyone else who's not made it), the next year they took a risk to include me and my co-founder, Anita Kaslin, in the Incubator Program with our outside-the-box idea. And you haven't stopped supportin...

]]>
Joy Bittner https://forum.effectivealtruism.org/posts/YRGSmjYDaMvScCXh2/a-thanksgiving-gratitude-post-to-ea Thu, 23 Nov 2023 15:55:12 +0000 EA - A Thanksgiving gratitude post to EA by Joy Bittner Joy Bittner https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:48 no full 4
ydH6xFXTpcRca6gw7 EA - A fund to help prevent violence against women and girls by Akhil Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A fund to help prevent violence against women and girls, published by Akhil on November 23, 2023 on The Effective Altruism Forum.SummaryAhead of International Day for the Elimination of Violence against Women on 25 November, I am very excited to announce the addition of several highly impactful charities focused on preventing violence against women and girls to The Life You Can Save's help women and girls fund, and their all charities fund.BackgroundOne in three women will experience physical or sexual violence, or both, in their lifetime. High quality studies show that preventing violence from occurring in the first instance are effective, and that community-led programs that aim to shift individual, interpersonal and society level attitudes and norms around gender are particularly effective (more information in this previous post).What is happeningThe Life You Can Save is proud to provide recommendations for a broad range of important issue areas. They are now adding nonprofits focused on preventing violence against women and girls - with far-reaching benefits to families and entire communities.Center for Domestic Violence Prevention, CEDOVIP, is an Ugandan nonprofit that implements community-driven, cost-effective programming: $150 for a woman to live a year free from violence. Their program implementation has shown a 52% reduction in intimate partner violence, with effects that continue after 3 years.Breakthrough Trust in India promotes culture-based change, focusing on girls and boys at ages 11-24 by redesigning school curricula and running mass media campaigns. Breakthrough's programs reduce early marriage, increase girls enrollment in school and increase health care access.Raising Voices (inclusion in fund pending due diligence) identifies the most impactful ways of reducing violence against women and children -( including the programming implemented by CEDOVIP), supports evidence-generation on best practice in violence prevention, and has worked with over 600 organisations throughout Africa, Asia Pacific and Latin America to build the capacity of community-based violence prevention centres.CaveatsWhile the data underscores the measurable success and high cost-effectiveness of community-led programs in reducing violence (you can see here for some estimations of the same), it's crucial to recognize the profound, and enduring, and more intangible impact of such initiative in changing cultural and societal norms.Changing the culture that perpetuates violence creates freedom for women to thrive - reducing ongoing fear of violence, improving family and child wellbeing, and increasing women's ability to contribute productively in society and the workforce. Long-term social change demands a multidimensional, intersectional approach, focusing on the transformation of attitudes and norms. These intangible benefits, immeasurable in their impact, work towards creating a more just and equitable world.What you can doIf you would like to help contribute to ensure that violence against women and girls is prevented, and we can live in a world where respect, equity and understanding flourish, please consider donating to this fund. If you are interested in having a more extended chat or would like to consider a more bespoke/tailored giving strategy, please feel to reach out to me (via DM or email at akhilbansalsa@gmail.com) or TLYCS team.AcknowledgementsIt was an honour to work as a fund manager alongside Ilona Arih, Matias Nestore and Katie Stanford, as well as the rest of the The Life You Can Save teamThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Akhil https://forum.effectivealtruism.org/posts/ydH6xFXTpcRca6gw7/a-fund-to-help-prevent-violence-against-women-and-girls Thu, 23 Nov 2023 13:00:31 +0000 EA - A fund to help prevent violence against women and girls by Akhil Akhil https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:24 no full 5
i4vihmhtki7pbKLxq EA - How to publish research in peer-reviewed journals by Ren Springlea Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to publish research in peer-reviewed journals, published by Ren Springlea on November 23, 2023 on The Effective Altruism Forum.This is a guide to publishing research in peer-reviewed journals, particularly for people in the EA community who might be keen to publish research but haven't done so before.About this article:This guide is aimed at people who know how to conduct research but are new to publishing in peer-reviewed journals. I've had a few people ask me about this, so I thought I'd compile all of my knowledge in this article.I assume that the audience of this article already: know what academic research is; know how to conduct academic research (e.g. at the level of a Masters degree or perhaps a research-focused Bachelors degree); have a basic grasp of what a journal is; and have the skills to read scientific publications.This article is based on my own views and my own experiences in academic publishing. I expect that there will be many academics who have a different view or a different strategy. My own experiences come from publishing in ecology, economics, agriculture/fisheries, psychology, and science communication.Should you publish your work in a peer-reviewed journal?Advantages of publishing in peer-reviewed journalsPublication in a journal can make your research appear more credible. This won't always matter, but my colleagues and I have encountered a few instances during animal advocacy where stakeholders (e.g. food retail companies, government policymakers) find research more compelling if it is published in a journal.Publication in a journal can make you appear more credible. This is particularly true in non-EA circles, like if you want to apply for research jobs or funding from outside of the EA community.Key ideas can be more easily noticed and adopted by other academics, and perhaps other stakeholders like government policymakers. I don't know how influential this effect is.Your research can be more easily criticised by academics. This can provide an important voice of critique from experts outside of the EA community, which could be one way to detect if a piece of research is flawed in some way. This is my motivation for publishing a study I conducted where I used an economic model to estimate the impact of slow-growing broilers on aggregate animal welfare - submitting a paper like this for peer review is a great way to get feedback from experts in that specific branch of economics.Drawbacks of publishing in peer-reviewed journalsPublications are not impact. Publishing your work is a tool that can sometimes help you to achieve impact. If we're trying to do the most good in the world, that may sometimes involve publishing peer-reviewed research (see above). In other cases, it'll be better to spend that time on more impactful work.Not all research is suitable for peer-reviewed journals. For example, in the EA community, it is common to conduct prioritisation research to determine the most promising interventions or strategies, likeour recent work on fish advocacy in Denmark. That fish advocacy report would only be interesting to advocacy organisations and doesn't contribute any new understanding of reality beyond the strategy of one organisation, so it is probably not be publishable in a journal (though a smaller version focused on the Appendix of that report may be publishable).Publishing in peer-reviewed journals costs time and energy. When I publish in a peer-reviewed journal (compared to when I publish a report on my organisation's website), I usually spend a couple of extra days writing the draft, and then a few extra days addressing peer review comments over the following months.Peer-reviewed papers have significantly longer timelines. After the first draft of a manuscript is done, it might take 6 or 12 months or even l...

]]>
Ren Springlea https://forum.effectivealtruism.org/posts/i4vihmhtki7pbKLxq/how-to-publish-research-in-peer-reviewed-journals Thu, 23 Nov 2023 11:01:23 +0000 EA - How to publish research in peer-reviewed journals by Ren Springlea Ren Springlea https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 42:18 no full 7
a8wijyw45SjwmeLY6 EA - GWWC is funding constrained (and prefers broad-base support) by Luke Freeman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC is funding constrained (and prefers broad-base support), published by Luke Freeman on November 23, 2023 on The Effective Altruism Forum.Giving What We Can is making giving effectively and significantly a cultural norm - and raising a lot of funds for highly effective charities.We're currently seeking funding to continue our work and ensure that we can inspire many more people to give effectively in the future. In 2024, we're hoping to hit 10,000 lifetime pledges.At Giving We We Can, we encourage people to give more and give better.Give more: We encourage people to pledge to give at least 10% of their income until the day they retire.Give better: We provide a donation platform that makes it easy for people to donate to our recommended high-impact charities.Over 8,500 people have taken the Giving What We Can Pledge to donate at least 10% of their income, and have collectively donated over $300 million. By 2030, we want to get to 100,000 pledgers and well over $1 billion of donations. Our ultimate mission is to make donating at least 10%, as effectively as possible, the global norm. We do this in three key ways:Our pledge: which has inspired a movement of donors to give more significantly, more sustainably, & more effectively.Our expertise: which helps donors to give more effectively across a diversity of causes and worldviews.Our donation platform: which makes effective giving easy & accessible for half a billion people on our expanding list of countries (more coming in 2024!).Our audienceWe believe that many people are in a position to do a lot of good by giving effectively. We aim to change the norms around giving, encouraging people to be more impactful and generous.Our pitchA decade of charity research has revealed something huge:The best charitable interventions often have 100x more impact per dollar than average onesAt GWWC, we help donors find those opportunities (leveraging thousands of hours of research) & make them easy to donate to via our donation platform.Our impactFrom 2020 to 2022,we estimate that we caused $45 million to go to charity. Once we account for the value of new pledge commitments, we estimate we generated $62 million in value.These figures are our best guess of how much we caused to go to highly effective charities - they don't count money that would have been given anyway or money given to charities we aren't sure are effective.The monetary impact of GWWC is best documented in our most recentImpact Evaluation, which suggests that from 2020 to 2022:GWWC generated an additional $62 million in value for highly-effective charities.GWWC had a giving multiplier of 30x, meaning that for each $1 spent on our operations, we generated $30 of value to highly-effective charities on average. Please note that this isn't a claim that your additional dollar will have a 30x multiplier, even though we think it will still add a lot of value. Read more onhow to interpret our results.Each new GWWC Pledge generates >$20,000 of value for highly-effective charities that would not have happened without GWWC.This evaluation suggests something we long suspected:If your goal is to get resources into the hands of highly-effective charities, we believe supporting Giving What We Can is a great funding opportunity.The cultural impact of GWWC (although harder to quantify) has also been significant by making the idea of giving 10% effectively more accessible and compelling to a broader audience. "Pledging 10% to effective charities" has become a touchstone of the effective giving community - inspiringTED talks, launchingclubs, & drawingcuriosity &praise from press around the world.Our plansWe believe most of our impact lies in the coming decades, and Giving What We Can has spent the past 3.5 years building a sustainable foundation for...

]]>
Luke Freeman https://forum.effectivealtruism.org/posts/a8wijyw45SjwmeLY6/gwwc-is-funding-constrained-and-prefers-broad-base-support $20,000 of value for highly-effective charities that would not have happened without GWWC.This evaluation suggests something we long suspected:If your goal is to get resources into the hands of highly-effective charities, we believe supporting Giving What We Can is a great funding opportunity.The cultural impact of GWWC (although harder to quantify) has also been significant by making the idea of giving 10% effectively more accessible and compelling to a broader audience. "Pledging 10% to effective charities" has become a touchstone of the effective giving community - inspiringTED talks, launchingclubs, & drawingcuriosity &praise from press around the world.Our plansWe believe most of our impact lies in the coming decades, and Giving What We Can has spent the past 3.5 years building a sustainable foundation for...]]> Thu, 23 Nov 2023 06:18:09 +0000 EA - GWWC is funding constrained (and prefers broad-base support) by Luke Freeman $20,000 of value for highly-effective charities that would not have happened without GWWC.This evaluation suggests something we long suspected:If your goal is to get resources into the hands of highly-effective charities, we believe supporting Giving What We Can is a great funding opportunity.The cultural impact of GWWC (although harder to quantify) has also been significant by making the idea of giving 10% effectively more accessible and compelling to a broader audience. "Pledging 10% to effective charities" has become a touchstone of the effective giving community - inspiringTED talks, launchingclubs, & drawingcuriosity &praise from press around the world.Our plansWe believe most of our impact lies in the coming decades, and Giving What We Can has spent the past 3.5 years building a sustainable foundation for...]]> $20,000 of value for highly-effective charities that would not have happened without GWWC.This evaluation suggests something we long suspected:If your goal is to get resources into the hands of highly-effective charities, we believe supporting Giving What We Can is a great funding opportunity.The cultural impact of GWWC (although harder to quantify) has also been significant by making the idea of giving 10% effectively more accessible and compelling to a broader audience. "Pledging 10% to effective charities" has become a touchstone of the effective giving community - inspiringTED talks, launchingclubs, & drawingcuriosity &praise from press around the world.Our plansWe believe most of our impact lies in the coming decades, and Giving What We Can has spent the past 3.5 years building a sustainable foundation for...]]> Luke Freeman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:02 no full 8
aHWZrQxfcTc5AKaGc EA - The Odyssean Process by Odyssean Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Odyssean Process, published by Odyssean Institute on November 24, 2023 on The Effective Altruism Forum.Our White Paper The Odyssean Process outlines our innovative approach to decision making for an uncertain future.In it, we combine expert elicitation, complexity modelling, and democratic deliberation into a new way of developing robust policies.This addresses the democratic deficit in civilisational risk mitigation and facilitates resilience through collective intelligence.Any feedback, collaboration, or interest in supporting our work is most welcome contact@odysseaninstitute.orgThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Odyssean Institute https://forum.effectivealtruism.org/posts/aHWZrQxfcTc5AKaGc/the-odyssean-process Fri, 24 Nov 2023 15:32:53 +0000 EA - The Odyssean Process by Odyssean Institute Odyssean Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:48 no full 4
aCSuyhMkMREjyFnLm EA - EAG London's dates are always during University Examinations by OliverHayman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAG London's dates are always during University Examinations, published by OliverHayman on November 24, 2023 on The Effective Altruism Forum.Every year, EAG London tends to be held in May/early June. In the UK, >90% of your degree comes from performance on final exams. These take place from May to June, and the norm is to study for at least a month. This means many talented UK undergraduates might not attend EAG London because they are too busy studying. Since travel is no longer reimbursed for the USA EAGs from the UK, this means that many talented undergraduates at UK schools cannot attend any EAGs.For example, I currently attend Oxford, and think >30% of the most dedicated undergraduates here do not attend for exam reasons.I'm pointing this out as I'm hoping this factor is considered when deciding dates in the future.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
OliverHayman https://forum.effectivealtruism.org/posts/aCSuyhMkMREjyFnLm/eag-london-s-dates-are-always-during-university-examinations 90% of your degree comes from performance on final exams. These take place from May to June, and the norm is to study for at least a month. This means many talented UK undergraduates might not attend EAG London because they are too busy studying. Since travel is no longer reimbursed for the USA EAGs from the UK, this means that many talented undergraduates at UK schools cannot attend any EAGs.For example, I currently attend Oxford, and think >30% of the most dedicated undergraduates here do not attend for exam reasons.I'm pointing this out as I'm hoping this factor is considered when deciding dates in the future.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Fri, 24 Nov 2023 14:42:15 +0000 EA - EAG London's dates are always during University Examinations by OliverHayman 90% of your degree comes from performance on final exams. These take place from May to June, and the norm is to study for at least a month. This means many talented UK undergraduates might not attend EAG London because they are too busy studying. Since travel is no longer reimbursed for the USA EAGs from the UK, this means that many talented undergraduates at UK schools cannot attend any EAGs.For example, I currently attend Oxford, and think >30% of the most dedicated undergraduates here do not attend for exam reasons.I'm pointing this out as I'm hoping this factor is considered when deciding dates in the future.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> 90% of your degree comes from performance on final exams. These take place from May to June, and the norm is to study for at least a month. This means many talented UK undergraduates might not attend EAG London because they are too busy studying. Since travel is no longer reimbursed for the USA EAGs from the UK, this means that many talented undergraduates at UK schools cannot attend any EAGs.For example, I currently attend Oxford, and think >30% of the most dedicated undergraduates here do not attend for exam reasons.I'm pointing this out as I'm hoping this factor is considered when deciding dates in the future.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> OliverHayman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:03 no full 5
fkft56o8Md2HmjSP7 EA - AMF - Reflecting on 2023 and looking ahead to 2024 by RobM Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMF - Reflecting on 2023 and looking ahead to 2024, published by RobM on November 24, 2023 on The Effective Altruism Forum.Rob Mather, CEO, AMF, 25 November 20232023 has been a very busy year for AMF, more on 2024 later.ImpactAMF's team of 13 is in the middle of a nine-month period during which we are distributing, with partners, 90 million nets to protect 160 million people in six countries: Chad, the Democratic Republic of Congo, Nigeria, South Sudan, Togo, Uganda, and Zambia.The impact of these nets is expected to be, 20%, 40,000 deaths prevented, 20 million cases of malaria averted and a US$2.2 billion improvement in local economy (12x the funds applied). When people are ill they cannot farm, drive, teach - function, so the improvement in health leads to economic as well as humanitarian benefits.This is a terrific contribution from the tens of thousands of donors who have contributed US$180 million over the last two years, and the many partners with whom we work that make possible the distribution of these life-saving nets.We received our millionth donation recently, a nice milestone. Our total funds raised is now US$543 million.But these numbers are not as important as the impact numbers once all the nets we have funded in our 19 years and can currently fund, have been distributed and have had their impact: 250 million nets funded and distributed, 450 million people protected, 185,000 deaths prevented, 100 to 185 million cases of malaria averted and US$6.5 billion of improvement in local economies - when people are ill they cannot farm, drive, teach - function, so the improvement in health leads to economic as well as humanitarian benefits.Many recognise the impact of AMF's work, yet we still have significant immediate funding gaps that are over US$300m. While this number seems daunting, every US$2 matters as that funds another net and allows two more people to be protected when they sleep at night, so no support is too small or inconsequential.Partnerships are crucial to what we doWe work with partners at every stage of our work: funding nets; ensuring operations proceed effectively and nets are distributed as intended; and monitoring net use, performance and impact. Over the last few years we have strengthened relationships with key organisations that have allowed AMF to contribute more and work faster and more effectively.AMF has strong partnerships with the Global Fund and the US's President's Malaria Initiative, and we work together closely to ensure net distributions are fully funded. None of us can work alone. Typically AMF funds nets for a distribution and the Global Fund or PMI funds the non-net costs. Non-net costs are shipping and transport costs, household registration activities to ensure each household receives the right number of nets and the distribution of the nets themselves.Nets are always distributed in partnership with national health systems. This is because all households in a regional or nationwide distribution are visited in the pre-distribution registration phase to establish how many nets are needed per individual household and this work involves visiting hundreds of thousands or millions of households and needs a work force that only a national system can provide.A final set of partnerships in-country that are very important for AMF's work are those with independent monitoring partners with whom AMF contracts to carry out data-driven monitoring of all phases of a distribution.AMF's focus has been, and still is, on netsThis focus on nets is not accidental. Long-lasting insecticidal nets are the most effective way of preventing malaria. Malaria-carrying mosquitoes typically bite between 10 o/c at night and two in the morning so if people in malarious areas are protected when they sleep at night, the impact on malaria tra...

]]>
RobM https://forum.effectivealtruism.org/posts/fkft56o8Md2HmjSP7/amf-reflecting-on-2023-and-looking-ahead-to-2024 Fri, 24 Nov 2023 10:06:28 +0000 EA - AMF - Reflecting on 2023 and looking ahead to 2024 by RobM RobM https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:52 no full 6
5beQ52wLkGpZQDz36 EA - Brian Tomasik on climate change by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brian Tomasik on climate change, published by Vasco Grilo on November 25, 2023 on The Effective Altruism Forum.This is a linkpost to Brian Tomasik's posts on climate change.Climate Change and Wild AnimalsBy Brian TomasikFirst written: 2008. Major additions: 2013. Last nontrivial update: 4 Aug 2018.SummaryHuman environmental choices have vast implications for wild animals, and one of our largest ecological impacts is climate change. Each human in the industrialized world may create or prevent in a potentially predictable way at least millions of insects and potentially more zooplankton per year by his or her greenhouse-gas emissions.Is this influence net good or net bad? This question is very complicated to answer and takes us from examinations of tropical-climate expansion, sea ice, and plant productivity to desertification, coral reefs, and oceanic-temperature dynamics. On balance, I'm extremely uncertain about the net impact of climate change on wild-animal suffering; my probabilities are basically 50% net good vs.50% net bad when just considering animal suffering on Earth in the next few centuries (ignoring side effects on humanity's very long-term future). Since other people care a lot about preventing climate change, and since climate change might destabilize prospects for a cooperative future, I currently think it's best to err on the side of reducing our greenhouse-gas emissions where feasible, but my low level of confidence reduces my fervor about the issue in either direction. That said, I am fairly confident that biomass-based carbon offsets, such as rainforest preservation, are net harmful for wild animals.See also:"Effects of Climate Change on Terrestrial Net Primary Productivity""Scenarios for Very Long-Term Impacts of Climate Change on Wild-Animal Suffering"Effects of CO2 and Climate Change on Terrestrial Net Primary ProductivityBy Brian TomasikFirst written: 2008-2016. Last nontrivial update: 28 Feb 2018.SummaryThis page compiles information on ways in which greenhouse-gas emissions and climate change will likely increase and likely decrease land-plant growth in the coming decades. The net impact is very unclear. I favor lower net primary productivity (NPP) because primary production gives rise to invertebrate suffering. Terrestrial NPP is just one dimension to consider when assessing all the impacts of climate change; effects on, e.g., marine NPP may be just as important.Scenarios for Very Long-Term Impacts of Climate Change on Wild-Animal SufferingBy Brian TomasikFirst published: 2016 Jan 10. Last nontrivial update: 2016 Mar 07.SummaryClimate change will significantly affect wild-animal populations, and hence wild-animal suffering, in the future. However, due to advances in technology, it seems unlikely climate change will have a major impact on wild-animal suffering beyond a few centuries from now. Still, there's a remote chance that human civilization will collapse before undoing climate change or eliminating the biosphere, and in that case, the effects of climate change could linger for thousands to millions of years. I calculate that this consideration might multiply the expected wild-animal impact of climate change by 20 to 21 times, although given model uncertainty and the difficulty of long-term predictions, these estimates should be taken with caution.The default parameters in this piece suggest that the CO2 emissions of the average American lead to a long-term change of -3 to 3 expected insect-years of eventual wild-animal suffering every second. My main takeaway from this piece is that "climate change could be really important even relative to other environmental issues; we should explore further whether it's likely to increase or decrease wild-animal suffering on balance".This piece should not be interpreted to suppo...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/5beQ52wLkGpZQDz36/brian-tomasik-on-climate-change Sat, 25 Nov 2023 14:48:17 +0000 EA - Brian Tomasik on climate change by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:10 no full 3
dupBxMX5KdnBKYbYP EA - Probably Good has a new section on climate change by Probably Good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably Good has a new section on climate change, published by Probably Good on November 27, 2023 on The Effective Altruism Forum.We're excited to share a new addition to our site: a section dedicated to climate change in ournew-look cause areas page!Needless to say, many people worldwide are passionate about tackling climate change as a path to improving the world. We believe there's a need for accessible, scale-sensitive advice that helps people direct their efforts in this space. We want to help meet this need, alongside our continued work in several other cause areas.To this end, we've been diving into climate change over the course of this year, and we're really excited to finally share what we've been working on - starting with three new articles:Climate change: An impact-focused introductionWhat are the biggest priorities in climate change?What are the best jobs to fight climate change?Below, we'll give a quick overview of each of the articles.Climate change: An impact-focused introductionThis article aims to provide an accessible and relatively brief introduction to climate change from a scale-sensitive perspective. Similar to ouroverviews of other cause areas, it assesses climate change using the ITN framework, addressing some of the key considerations for prioritizing climate change relative to other cause areas.Here's a short excerpt from our section on the scale of harm caused by climate change:Climate change has and will continue toincrease the frequency and severity of many risks, including heat stress, forced migration, poverty, water stress and droughts, natural disasters, food insecurity, and the spread of many diseases.However, the extent to which these risks increase will depend on how well we're able to mitigate the amount of climate change that occurs. An often-cited target is to keep warming to below 1.5C above pre-industrial levels, something most of the world's countries agreed to targetin the 2015 Paris Agreement.At 1.5C of warming, we would avoid some of the worst effects of climate change, though the harm would still be huge. For instance, nearly 14% of the world's populationcould experience severe heatwaves at least every five years, and over132 million people could be exposed to severe droughts. Environmental damage and biodiversity loss will also occur, including damage to coral reefs, the vast majority of whichmay not even survive 1.5C of warming.However, it nowlooks likely that we'll surpass 1.5C relatively soon, despite these international targets. This makes higher levels of warming, and therefore increased harm, even more likely by the end of this century.At 2C of warming, for example,between 800 million and 3 billion people may suffer from chronic water scarcity, andnearly 200 million may experience severe droughts. Three times the number of peoplewill experience severe heatwaves at least every 5 years at 2C compared to 1.5C - an additional 1.7 billion people. This will take a significant toll on human life; recent research estimates that at slightly over 2C of warming,nearly 600,000 additional people could lose their lives every year by 2050 due to heat stress compared to current levels.At higher levels, the picture looks even more extreme. At 3C, we could seea five-times increase in extreme events relative to current levels by 2100 (as opposed to a four-fold increase at 1.5C of warming), and at 4C,up to four billion people will experience chronic water scarcity. This is one billion people more than would experience chronic water shortages at 2C of warming. Other effects of climate change would also considerably ramp up as warming increases.Fortunately, thanks to the work of climate activists who have increased the amount of global attention focused on climate change, we'll likely avert some of thes...

]]>
Probably Good https://forum.effectivealtruism.org/posts/dupBxMX5KdnBKYbYP/probably-good-has-a-new-section-on-climate-change Mon, 27 Nov 2023 22:44:19 +0000 EA - Probably Good has a new section on climate change by Probably Good Probably Good https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:29 no full 1
TeknjqDR7EM7keN3G EA - GWWC's new recommendations and cause area funds by Sjir Hoeijmakers Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC's new recommendations and cause area funds, published by Sjir Hoeijmakers on November 27, 2023 on The Effective Altruism Forum.Giving What We Can's new fund and charity recommendations are now online!These recommendations are the result of our recent evaluations of evaluators.Our research team hasn't evaluated all impact-focused evaluators, and evaluators haven't looked into all promising causes and charities, which is why we also host a variety of other promising programs that you can donate to viaour donation platform.We're also thrilled to announce the launch of a new donation option: Giving What We Can cause area funds. These funds offer a convenient option for donors who want to be confident they'll be supporting high-impact giving opportunities within a particular cause area and don't want to worry about choosing between top-rated funds or having to manually update their selections as our recommendations change.Global Health and Wellbeing FundEffective Animal Advocacy FundRisks and Resilience FundYou can set up a donation to one or more of these funds, and we'll allocate it based on the best available opportunities we know of in a cause area, guided by the evaluators we've evaluated. As the evaluators we work with and their recommendations change, we'll update accordingly, so your donations will always be allocated based on our latest research.Our recommendationsOur content and design teams have been working hard to revamp ourrecommendations page anddonation platform, so you can more easily find and donate to the charities and funds that align with your values. We encourage you to check them out, give us feedback, and share with your friends (we've made somesample social media posts you could use/adapt).Global health and wellbeing:GiveWell's Top Charities Fund (Grants to the charities below)GiveWell's All Grants Fund (Supports high-impact opportunities across global health and wellbeing)Malaria Consortium (Seasonal Malaria Chemoprevention Programme)Against Malaria Foundation (Bednets to prevent malaria)New Incentives (Childhood immunisation incentives)Helen Keller International (Vitamin A supplementation)Animal welfare:EA Funds' Animal Welfare Fund (Supports high-impact opportunities to improve animal welfare)The Humane League's corporate campaign work (Corporate campaigns for chicken welfare)Reducing global catastrophic risks:Longview's Emerging Challenges Fund (Previously the "Longtermism Fund" - name change to be reflected on our website tomorrow) (Supports high-impact work on reducing GCRs)EA Funds' Long-Term Future Fund (Supports high-impact work on reducing GCRs)As always, we value your feedback, so if you have any questions or comments, please leave them in the comments section here or under our recent post on our evaluations; participate in our AMA today and tomorrow; and/orget in touch with us!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Sjir Hoeijmakers https://forum.effectivealtruism.org/posts/TeknjqDR7EM7keN3G/gwwc-s-new-recommendations-and-cause-area-funds Mon, 27 Nov 2023 19:01:12 +0000 EA - GWWC's new recommendations and cause area funds by Sjir Hoeijmakers Sjir Hoeijmakers https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:56 no full 2
CdELxtHgQjzCYPhtm EA - Kaya Guides- Marginal Funding for Tech-Enabled Mental Health in LMICs by RachelAbbott Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kaya Guides- Marginal Funding for Tech-Enabled Mental Health in LMICs, published by RachelAbbott on November 26, 2023 on The Effective Altruism Forum.This post was written by Rachel Abbott, Kaya Guides' founder.TLDRWho we are: Kaya Guides is a global mental health charity incubated by Charity Entrepreneurship. We operate a self-help program on WhatsApp to reduce depression at scale in LMICs, focusing on youth with moderate to severe depressionStatus: We launched in India this year and are running an ongoing proof of concept with 111 peopleHow it works: A WhatsApp chatbot delivers videos in Hindi that teach participants evidence-based techniques to reduce depression. Participants practice the techniques day-to-day and have a 15-minute weekly call with a trained supporter for 5-8 weeksEvidence base: Self-help combined with low-touch human support can have the same effects as face-to-face psychotherapy in reducing depression, even if total staff time is less than two hours per participantWhat we've done: This year, we adapted the World Health Organization's digital self-help program to India's context, built a WhatsApp chatbot, produced 40 videos in Hindi, and launched our ongoing proof of conceptImpact: Delivering on WhatsApp means we can reach those who need it most, at a large scale. The WHO program, studied in two RCTs, had moderate to large effects on depressionInitial findings: Mental health organizations usually struggle with recruitment, but we got 875 people to message the chatbot in 1 month (similar organizations report getting 1K users in a year), achieved a 12.69% conversion rate from initial message to appearing in a guidance call, and only spent $0.95 per acquisitionCost-effectiveness: Kaya has the potential to increase subjective well-being 30x as cost-effectively as direct cash transfers by Year 3Scaling potential: As a tech initiative, we can scale rapidly and believe we can treat 100K people in Year 52024 plans: Next year, we'll: 1) 10x our impact from this year by treating 1K youth with depression and 2) Establish the product, team and systems we need to scale rapidly from 2025 onwardWhat we need: We're raising $80K to meet our 2024 budget of $160K, having so far raised $80K from the EA Mental Health Funding CircleWhat is Kaya Guides and what do we do?Kaya Guides is a global mental health charity incubated by Charity Entrepreneurship. Our focus is on reducing depression at scale in low and middle-income countries, beginning with India. Youth with moderate to severe depression are our target group.We deliver a self-help course via WhatsApp that teaches youth evidence-based techniques to reduce depression. During the 5-8 week course, participants have 15-minute weekly calls with trained supporters and practice the techniques day-to-day.This treatment approach (self-help, plus low-touch human support) is called guided self-help. It was recommended by Charity Entrepreneurship due to its high projected cost-effectiveness. Research indicates that guided self-help has the same effects as face-to-face psychotherapy- even if human support is only 15 minutes per week, the supporter has no clinical background, and the program lasts just five weeks.Why should we care about mental health?Mental health disorders account for 5% of global disease burden and 15% of all years lived with disability. This figure is an underestimate: the Global Burden of Disease counts suicide as an injury, even though an estimated 60-98% of suicides are attributable to mental health conditions and 700,000 people die by suicide each year. Depression and anxiety alone account for 12 billion workdays lost annually. Despite the need for expanded mental healthcare, on average just 2% of government health budgets go to mental health.Scale of the problem in IndiaWe selected...

]]>
RachelAbbott https://forum.effectivealtruism.org/posts/CdELxtHgQjzCYPhtm/kaya-guides-marginal-funding-for-tech-enabled-mental-health Sun, 26 Nov 2023 18:37:48 +0000 EA - Kaya Guides- Marginal Funding for Tech-Enabled Mental Health in LMICs by RachelAbbott RachelAbbott https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:02 no full 8
QDtro7aEqeusrwD4r EA - Paper out now on creatine and cognitive performance by Fabienne Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper out now on creatine and cognitive performance, published by Fabienne on November 26, 2023 on The Effective Altruism Forum.Our paper "The effects of creatine supplementation on cognitive performance - a randomised controlled study" is out now!Paper: https://doi.org/10.1186/s12916-023-03146-5Twitter thread: https://twitter.com/FabienneSand/status/1726196252747165718?t=qPUghyDGMUb0-FZK7CEXhw&s=19Jan Brauner and I are very thankful to Paul Christiano for suggesting doing this study and for funding it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Fabienne https://forum.effectivealtruism.org/posts/QDtro7aEqeusrwD4r/paper-out-now-on-creatine-and-cognitive-performance Sun, 26 Nov 2023 16:16:31 +0000 EA - Paper out now on creatine and cognitive performance by Fabienne Fabienne https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:01 no full 9
oaLx6NnsmoRj6ar5Z EA - Announcing New Beginner-friendly Book on AI Safety and Risk by Darren McKee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing New Beginner-friendly Book on AI Safety and Risk, published by Darren McKee on November 25, 2023 on The Effective Altruism Forum.Concisely,I've just released the book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the WorldIt's an engaging introduction to the main issues and arguments about AI safety and risk. Clarity and accessibility were prioritized. There are blurbs of support from Max Tegmark, Will MacAskill, Roman Yampolskiy and others.Main argument is that AI capabilities are increasing rapidly, we may not be able to fully align or control advanced AI systems, which creates risk. There is great uncertainty, so we should be prudent and act now to ensure AI is developed safely. It tries to be hopeful.Why does it exist?There are lots of useful posts, blogs, podcasts, and articles on AI safety, but there was no up-to-date book entirely dedicated to the AI safety issue that is written for those without any exposure to the issue. (Including those with no science background.)This book is meant to fill that gap and could be useful outreach or introductory materials.If you have already been following the AI safety issue, there likely isn't a lot that is new for you. So, this might be best seen as something useful for friends, relatives, some policy makers, or others just learning about the issue. (although, you may still like the framing)It's available on numerous Amazon marketplaces. Audiobook and Hardcover options to follow.It was a hard journey. I hope it is of value to the community.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Darren McKee https://forum.effectivealtruism.org/posts/oaLx6NnsmoRj6ar5Z/announcing-new-beginner-friendly-book-on-ai-safety-and-risk Sat, 25 Nov 2023 23:21:19 +0000 EA - Announcing New Beginner-friendly Book on AI Safety and Risk by Darren McKee Darren McKee https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:41 no full 11
tKjEcMuRECFSA5Wt8 EA - New: Donation Gift Vouchers (Spendengutscheine) by Effektiv Spenden by tilboy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New: Donation Gift Vouchers (Spendengutscheine) by Effektiv Spenden, published by tilboy on November 25, 2023 on The Effective Altruism Forum.Hi all!I am happy to announce that we are introducing donation gift vouchers (German: Spendengutscheine) at Effektiv Spenden for this giving season! It's a great way to get friends and family thinking about Effective Giving - because the voucher recipient decides which of our recommended charities the donation will go to!The vouchers work like this:You buy a voucher via our website: https://effektiv-spenden.org/spendengutschein/ (The page is in german, but you can toggle the linked form to english.)After completing payment, you receive a voucher code by email and forward it to the recipient.The recipient redeems it via our website and gets to choose which of our recommended charities the donation will go to.Main goalsEffective Giving Promotion: The vouchers can help introduce and discuss the principles of Effective Giving with family and friends.Corporate Engagement: We also offer voucher purchase in bulk for organizations that want to send out gifts to their employees, clients, or business partners. See https://effektiv-spenden.org/spendengutschein-unternehmen/Tax Deductibility in Germany and Switzerland: From a tax perspective the donation comes from the voucher buyer, so German and Swiss nationals benefit from the tax deductibility of voucher purchases like with regular donations. (If you reside in a different country this might not be possible.)FeedbackIf you have any bug reports, questions, or feedback about the product, let me know in the comments or via tilman.masur@effektiv-spenden.org.Happy giving!TilmanThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
tilboy https://forum.effectivealtruism.org/posts/tKjEcMuRECFSA5Wt8/new-donation-gift-vouchers-spendengutscheine-by-effektiv Sat, 25 Nov 2023 12:16:32 +0000 EA - New: Donation Gift Vouchers (Spendengutscheine) by Effektiv Spenden by tilboy tilboy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:53 no full 16
dgXg6ddauC3sBwe67 EA - EA is good, actually by Amy Labenz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is good, actually, published by Amy Labenz on November 28, 2023 on The Effective Altruism Forum.The last year has been toughThe last year has been tough for EA.FTX blew up in the most spectacular way and SBF has been found guilty of one of the biggest frauds in history. I was heartbroken to learn that someone I trusted hurt so many people, was heartbroken for the people who lost their money, and was heartbroken about the projects I thought would happen that no longer will. The media piled on, and on, and on.The community has processed the shock in all sorts of ways - some more productive than others. Many have publishedthoughtfulreflections. Many havetried to come up with ways to ensure that nothing like this will ever happen again. Some people rallied, some people looked for who to blame, we all felt betrayed.I personally spent November-February working more than full-time on a secondment to Effective Ventures. Meanwhile, there were several other disappointments in the EA community. Like many people,I was tired. Then in April, I went on maternity leave and stepped away from the Forum and my work to spend time with my children (Earnie and Teddy) and to get to know my new baby Charley. I came back to an amazing team who continued running event after event in my absence.In the last few months I attended my first events since FTX and I wasn't sure how I would feel. But when I attended the events and heard from serious, conscientious people who want to think hard about the world's most pressing problems I felt so grateful and inspired. I teared up watching Lizka, Arden, and Kuhan give theopening talk at EAG Boston, which tries to reinforce and improve important cultural norms around mistakes, scout mindset, deference, and how to interact in a world where AI risk is becoming more mainstream. I went home so motivated!And then, OpenAI.I'm still processing it and I don't know what happened. Almost nobody does. I have spent far too much time searching for answers online. I've seen somethoughtfulwrite-ups and also many many posts that criticize a version of EA that doesn't match my experience. This has sometimes made me feel sad or defensive, wanting to reply to explain or argue. I haven't actually done that because I'm generally pretty shy about posting and I'm not sure how to engage. Whatever happened, it seems the results arelikely bad for AI safety. Whatever happened, I think I've reached diminishing returns on my doomscrolling, and I'm ready to get back to work.The last year has been hard and I want us to learn from our mistakes, but I don't want us to over-update and decide EA is bad. I think EA is good!Sometimes when people say EA, they're referring to the ideas like "let's try to do the most good" and "AI safety". Other times, they're referring to the community that's clustered around these ideas. I want to defend both, though separately.The EA community is goodI think there are plenty of issues with the community. I live in Detroit and so I can't really speak to all of the different clusters of people who currently call themselves EA or "EA-adjacent". I'm sure some of them have bad epistemics or are not trustworthy and I don't want to vouch for everyone. I also haven't been part of that many other communities. I am a lawyer, I have been a part of the civil rights community, and I engage with other online communities (mom groups, au pair host parents, etc.).All that said, my experience in EA spaces (both online and in-person) has been significantly more dedicated to celebrating and creating a culture of collaborative truth-seeking and kindness. For example:We haveonline posting norms that I'd love to see adopted by other online spaces I participate in (I've mostly stopped posting in the mom groups or host parent groups because when I raise an ...

]]>
Amy Labenz https://forum.effectivealtruism.org/posts/dgXg6ddauC3sBwe67/ea-is-good-actually Tue, 28 Nov 2023 16:38:08 +0000 EA - EA is good, actually by Amy Labenz Amy Labenz https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:08 no full 2
fKjGmDL6bNTeg8Zrc EA - HLI's Giving Season 2023 Research Overview by Happier Lives Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: HLI's Giving Season 2023 Research Overview, published by Happier Lives Institute on November 28, 2023 on The Effective Altruism Forum.SummaryAt the Happier Lives Institute, we look for the most cost-effective interventions and organisations that improve subjective wellbeing, how people feel during and about their lives[1]. We quantify the impact in 'Wellbeing Adjusted Life-Years', or WELLBYs[2]. To learn more about our approach, see ourkey ideas page and ourresearch methodology page.Last year, we published our first charity recommendations. We recommended StrongMinds, an NGO aiming to scale depression treatment in sub-saharan Africa, as our top funding opportunity, but noted the Against Malaria Foundation could be better under some assumptions. This year, we maintain our recommendation for StrongMinds, and we've added the Against Malaria Foundation as a secondtop charity. We havesubstantially updated our analysis of psychotherapy, undertaking a systematic review and a revised meta-analysis, after which our estimate for StrongMinds has declined from 8x to 3.7x as cost-effective as cash transfers, in WELLBYs, resulting in a larger overlap in the cost-effectiveness of StrongMinds and AMF[3]. The decline in cost-effectiveness is primarily due to lower estimated household spillovers, our new correction for publication bias, and the prediction that StrongMinds might have smaller than average effects.We've also started evaluating another mental health charity, Friendship Bench, an NGO that delivers problem-solving therapy in Zimbabwe. Our initial estimates suggest that the Friendship Bench may be 7x more cost-effective, in WELLBYs, than cash transfers. We think Friendship Bench is a promising cost-effective charity, but we have not yet investigated it as thoroughly, so our analysis is more preliminary, uncertain, and likely to change. As before, we don't recommend cash transfers or deworming: the former because it's likely psychotherapy is several times more cost-effective, the latter because it remains uncertain if deworming has a long-term effect on wellbeing.This year, we've also conducted shallow investigations into new cause areas. Based on our preliminary research, we think there are promising opportunities to improve wellbeing by preventing lead exposure, improving childhood nutrition, improving parenting (e.g., encouraging stimulating play, avoiding maltreatment), preventing violence against women and children, and providing pain relief in palliative care.In general, the evidence we've found on these topics is weaker, and our reports are shallower, but we highlight promising charities and research opportunities in these areas. We've also found a number of less promising causes, which we discuss briefly to inform others.In this report, we provide an overview of all our evaluations to date. We group them into two categories, In-depth and Speculative, based on our level of investigation. We discuss these in turn.In-depth evaluations: relatively late stage investigations that we consider moderate-to-high depth.Top charities: These are well-evidenced interventions that are cost-effective[4] and have been evaluated in medium-to-high depth. We think of these as the comparatively 'safer bets'.Promising charities: These are well-evidenced opportunities that are potentially more cost-effective than the top charities, but we have more uncertainty about. We want to investigate them more before recommending them as a top charity.Non-recommended charities: These are charities we've rigorously evaluated but the current evidence suggests are less cost-effective than our top charities.Speculative evaluations: early stage investigations that are shallow in depth.Promising bets: These are high-priority opportunities to research because we think they're potentially mor...

]]>
Happier Lives Institute https://forum.effectivealtruism.org/posts/fKjGmDL6bNTeg8Zrc/hli-s-giving-season-2023-research-overview Tue, 28 Nov 2023 15:42:56 +0000 EA - HLI's Giving Season 2023 Research Overview by Happier Lives Institute Happier Lives Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 49:17 no full 3
GJ4xjAnubWiBJDbE8 EA - Talking through depression: The cost-effectiveness of psychotherapy in LMICs, revised and expanded by JoelMcGuire Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talking through depression: The cost-effectiveness of psychotherapy in LMICs, revised and expanded, published by JoelMcGuire on November 28, 2023 on The Effective Altruism Forum.This is the summary of the report with additional images (and some new text to explain them) The full 90+ pagereport (and a link to its 80+ pageappendix) is on our website.SummaryThis report forms part of our work to conductcost-effectiveness analyses of interventions and charities based on their effect on subjective wellbeing, measured in terms of wellbeing-adjusted life years (WELLBYs). This is a working report that will be updated over time, so our results may change. This report aims to achieve six goals, listed below:1. Update our original meta-analysis of psychotherapy in low- and middle-income countries.In our updated meta-analysis we performed a systematic search, screening and sorting through 9390 potential studies. At the end of this process, we included 74 randomised control trials (the previous analysis had 39). We find that psychotherapy improves the recipient's wellbeing by 0.7 standard deviations (SDs), which decays over 3.4 years, and leads to a benefit of 2.69 (95% CI: 1.54, 6.45) WELLBYs. This is lower than our previous estimate of 3.45 WELLBYs (McGuire & Plant, 2021b) primarily because we added a novel adjustment factor of 0.64 (a discount of 36%) to account for publication bias.Figure 1: Distribution of the effects for the studies in the meta-analysis, measured in standard deviations change (Hedges' g) and plotted over time of measurement. The size of the dots represents the sample size of the study. The lines connecting dots indicate follow-up measurements of specific outcomes over time within a study. The average effect is measured 0.37 years after the intervention ends. We discuss the challenges related to integrating unusually long follow-ups in Sections 4.2 and 12 in the report.2. Update our original estimate of the household spillover effects of psychotherapy.We collected 5 (previously 2) RCTs to inform our estimate of household spillover effects. We now estimate that the average household member of a psychotherapy recipient benefits 16% as much as the direct recipient (previously 38%). See McGuire et al. (2022b) for our previous report-length treatment of household spillovers.3. Update our original cost-effectiveness analysis of StrongMinds, an NGO that provides group interpersonal psychotherapy in Uganda and Zambia.We estimate that a $1,000 donation results in 30 (95% CI: 15, 75) WELLBYs, a 52% reduction from our previous estimate of 62 (see ourchangelog website page). The cost per person treated for StrongMinds has declined to $63 (previously $170). However, the estimated effect of StrongMinds has also decreased because of smaller household spillovers, StrongMinds-specific characteristics and evidence which suggest smaller-than-average effects, and our inclusion of a discount for publication bias.The only completed RCT of StrongMinds is the long anticipated study by Baird and co-authors, which has been reported to have found a "small" effect (another RCT is underway). However, this study is not published, so we are unable to include its results and unsure of its exact details and findings. Instead, we use a placeholder value to account for this anticipated small effect as our StrongMinds-specific evidence.[1]4. Evaluate the cost-effectiveness of Friendship Bench, an NGO that provides individual problem solving therapy in Zimbabwe.We find a promising but more tentative initial cost-effectiveness estimate for Friendship Bench of 58 (95% CI: 27, 151) WELLBYs per $1,000. Our analysis of Friendship Bench is more tentative because our evaluation of their programme and implementation has been more shallow. It has 3 published RCTs which we use to info...

]]>
JoelMcGuire https://forum.effectivealtruism.org/posts/GJ4xjAnubWiBJDbE8/talking-through-depression-the-cost-effectiveness-of Tue, 28 Nov 2023 08:20:27 +0000 EA - Talking through depression: The cost-effectiveness of psychotherapy in LMICs, revised and expanded by JoelMcGuire JoelMcGuire https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:55 no full 7
PzudRLNHqSGDw8rZk EA - 2023 EA conference talks are now live by Eli Nathan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2023 EA conference talks are now live, published by Eli Nathan on November 28, 2023 on The Effective Altruism Forum.Recordings from various 2023 EA conferences are now live onour YouTube channel. These include talks fromEAG Bay Area,EAG London,EAG Boston,EAGxLatAm,EAGxIndia,EAGxNordics, andEAGxBerlin (alongside many other talks from previous years).In an effort to cut costs, this year some of our conferences had fewer recorded talks than normal, though we still managed to record over 100 talks across the year. This year also involved some of our first Spanish-language content, recorded at EAGxLatAm in Mexico City. Listening to talks can be a great way to learn more about EA and stay up to date on EA cause areas, and recording them allows people who couldn't attend (or who were busy in 1:1 meetings) to watch them in their own time.Some highlighted talks are displayed below:EA Global: Bay AreaDiscovering AI Risks with AIs | Ethan PerezIn this talk Ethan presents on how AI systems like ChatGPT can be used to help uncover potential risks in other AI systems, such as tendencies towards power-seeking, self-preservation, and sycophancy.How to compare welfare across species | Bob FischerPeople farm a lot of pigs. They farm even more chickens. And if they don't already, they're soon to farm even more black soldier flies. How should EAs distribute their resources to address these problems? And how should EAs compare benefits to animals with benefits to humans?This talk outlines a framework for answering these questions. Bob Fischer argues that we should use estimates of animals' welfare ranges to compare how much good different interventions can accomplish. He also suggests some tentative welfare range estimates for several farmed species.EA Global: LondonTaking happiness seriously: Can we? Should we? A debate | Michael Plant, Mark FabianEffective altruism is driven by the pursuit to maximize impact. But what counts as impact? One approach is to focus directly on improving people's happiness - how they feel during and about their lives.In this session, Michael Plant and Mark Fabian discuss how and whether to do this, and what it might mean for doing good differently. Michael starts by presenting the positive case - why happiness matters and how it can be measured - then shares the Happier Lives Institute's recent research on the implications and suggesting directions for future work. Mark Fabian acts as a critical discussant and highlights key weaknesses and challenges with 'taking happiness seriously'. After their exchange, these issues open up to the floor.Panel on nuclear risk | Rear Admiral John Gower, Patricia Lewis, Paul IngramThis panel joins together Rear Admiral John Gower, Patricia Lewis, and Paul Ingram for a panel on a conversation exploring the future of arms control, managing nuclear tensions with Russia, China's changing nuclear strategy, and more.EA Global: BostonOpening session: Thoughts from the community | Arden Koehler, Lizka Vaintrob, Kuhan JeyapragasanIn this opening session, hear talks from three community members (Lizka Vaintrob, Kuhan Jeyapragasan, and Arden Koehler) as they give some thoughts on EA and the current state of the community.Screening all DNA synthesis and reliably detecting stealth pandemics | Kevin EsveltPandemic security aims to safeguard the future of civilisation from exponentially spreading biological threats. In this talk, Kevin outlines two distinct scenarios - "Wildfire" and "Stealth" - by which pandemic-causing pathogens could cause societal collapse. He then explains the 'Delay, Detect, Defend' plan to prevent such pandemics, including the key technological programmes his team oversees to mitigate pandemic risk: a DNA synthesis screening system that prevents malicious actors from synthesizing and rel...

]]>
Eli_Nathan https://forum.effectivealtruism.org/posts/PzudRLNHqSGDw8rZk/2023-ea-conference-talks-are-now-live Tue, 28 Nov 2023 07:53:53 +0000 EA - 2023 EA conference talks are now live by Eli Nathan Eli_Nathan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:10 no full 8
o5PoRhnBydCMvb52N EA - Rethink's CURVE Sequence - The Good and the Gaps by Jack Malde Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink's CURVE Sequence - The Good and the Gaps, published by Jack Malde on November 28, 2023 on The Effective Altruism Forum.(Also posted to my substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.)Rethink Priorities'Worldview Investigation Team recently published theirCURVE Sequence: "Causes and Uncertainty: Rethinking Value in Expectation." The aim of the sequence was to:Consider alternatives to expected value maximization (EVM) for cause prioritization, motivated by some unintuitive consequences of EVM. The alternatives considered were incorporating risk aversion, and contractualism.Explore the practical implications of a commitment to EVM and, in particular, if it supports prioritizing existential risk (x-risk) mitigation over all else.I found the sequence thought-provoking. It opened my eyes to the fact that x-risk mitigation may only be astronomically valuable under certain contentious conditions. I still prefer risk-neutral EVM (with some reasonable uncertainty), but am now less certain that this clearly implies a focus on prioritizing x-risk mitigation.Having said that, the sequence wasn't conclusive and it would take more research for me to determine that x-risk reduction shouldn't be the top priority for the EA community. This post summarizes some of my reflections on the sequence.Summary of posts in the sequenceInCauses and Uncertainty: Rethinking Value in Expectation, Bob Fischer introduces the sequence. The motivation for considering alternatives to EVM is due to the unintuitive consequence of the theory that the highest EV option needn't be one where success is at all likely.InIf Contractualism, Then AMF, Bob Fischer considers contractualism as an alternative to EVM. Under contractualism, the surest global health and development (GHD) work beats out x-risk mitigation and most animal welfare work, even if the latter options have higher EV.InHow Can Risk Aversion Affect Your Cause Prioritization?, Laura Duffy considers how different risk attitudes affect cause prioritization. The results are complex and nuanced, but one key finding is that spending on corporate cage-free campaigns for egg-laying hens is robustly cost-effective under nearly all reasonable types and levels of risk aversion considered. Otherwise, prioritization depends on type and level of risk aversion.InHow bad would human extinction be?, Arvo Muñoz Morán investigates the value of x-risk mitigation efforts under different risk assumptions. The persistence of an x-risk intervention - the risk mitigation's duration - plays a key role in determining how valuable the intervention is. The rate of value growth is also pivotal, with only cubic and logistic growth (which may be achieved through interplanetary expansion) giving astronomical value to x-risk mitigation.InCharting the precipice: The time of perils and prioritizing x-risk, David Rhys Bernard considers various premises underlying thetime of perils hypothesis which may be pivotal to the case for x-risk mitigation. All the premises are controversial to varying degrees so it seems reasonable to assign a low credence to this version of the time of perils. Justifying x-risk mitigation based on the time of perils hypothesis may require beingfanatical.InUncertainty over time and Bayesian updating, David Rhys Bernard estimates how quickly uncertainty about the impact of an intervention increases as the time horizon of the prediction increases. He shows that a Bayesian should put decreasing weight on longer-term estimates. Importantly, he uses data from various development economics randomized controlled trials, and it is unclear to me how much the conclusions might generalize to other interventions.InThe Risks and Rewards of Prioritizing Animals of Uncertain Sentience, Hayley Clutte...

]]>
Jack Malde https://forum.effectivealtruism.org/posts/o5PoRhnBydCMvb52N/rethink-s-curve-sequence-the-good-and-the-gaps Tue, 28 Nov 2023 07:08:20 +0000 EA - Rethink's CURVE Sequence - The Good and the Gaps by Jack Malde Jack Malde https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:37 no full 9
kbsLhZopCuvdvGREN EA - Join GWWC's governance or advisory boards by Luke Freeman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Join GWWC's governance or advisory boards, published by Luke Freeman on November 28, 2023 on The Effective Altruism Forum.Giving What We Can (GWWC) is seeking dedicated individuals to join our governance and advisory boards across our current projects as well as multiple newly formed or soon-to-be-formed entities in different countries.* Apply now *About the rolesOur governance and advisory boards will collectively shape GWWC's strategic direction, ensuring that our organisation is robust and that our activities are effectively bringing us closer to achieving our mission. Our goal is to build a diverse, mission-aligned, and strategic-thinking governance structure that can drive us forward.Across the governance and advisory boards we aim to ensure robust coverage across several domains: strategic guidance, risk management, fundraising, legal compliance, financial stewardship, advocacy, organisational health, and grantmaking.We are seeking individuals who can leverage their unique skills and experiences to contribute in a significant way to these collective responsibilities. These roles would be part of a global team, working remotely with a commitment of approximately five hours per month. Although this position is unpaid, your contributions will significantly shape our approach to philanthropy and our impact on the world's most pressing problems.Governance boardsThese boards bear the legal responsibilities under the laws applicable to GWWC in their respective geographies and will participate in oversight of the international collaboration. Their duties include areas such as strategic planning, local risk management, legal compliance, financial stewardship, and executive management. Some governance board members will sit on more than one board depending on the jurisdiction and the structure of the relationship between the entities.Advisory boardsOperating across our various entities, the advisory boards provide insights, recommendations, and strategic advice to the governance boards and the GWWC team. For example, a Risk and Legal Advisory Board would work in tandem with relevant governance board members and staff members from each legal entity and incorporate volunteers with specific expertise in risk and legal matters.Similarly, a Marketing and Growth Advisory Board would provide advice to the international collaboration and to specific geographies. Being a part of an advisory board also provides an opportunity for members to demonstrate their fit for potential future roles in the governance boards.About Giving What We CanGWWC is on a mission to create a world in which giving effectively and significantly is a cultural norm.We believe that charitable donations can do an astonishing amount of good. However, because the effectiveness of different charities varies wildly, it is important that we donate to the most effective charities if we want to have a significant impact.We are focused on increasing the number of donors who prioritise effectiveness, and helping them to maximise their charitable impact throughout their lives. We are best known for the Giving What We Can Pledge, where 8,598 people have pledged to give over 10% of their lifetime income to high-impact charities. To date, our pledgers - representing over 100 countries - have donated an estimated $333 million USD to high-impact charities, and have committed nearly $3 billion more via their lifetime pledges.The GWWC team is hard-working and mission-focused, with a culture of open and honest feedback. We also like to think of ourselves as a particularly friendly and optimistic bunch.In all our work, we strive to take a positive and collaborative attitude, be transparent in our communication and decision-making, and adopt a scout mindset to guide us towards doing the most good we can do, incl...

]]>
Luke Freeman https://forum.effectivealtruism.org/posts/kbsLhZopCuvdvGREN/join-gwwc-s-governance-or-advisory-boards Tue, 28 Nov 2023 06:40:24 +0000 EA - Join GWWC's governance or advisory boards by Luke Freeman Luke Freeman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:34 no full 10
mWHEQFdE3HSdMHgrK EA - 80,000 Hours is looking for a new CEO (or to fill a vacancy left by someone promoted to be CEO). Could that be you? by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours is looking for a new CEO (or to fill a vacancy left by someone promoted to be CEO). Could that be you?, published by 80000 Hours on November 29, 2023 on The Effective Altruism Forum.Our CEO Howie Lempel is leaving us to take up a position at Open Philanthropy.So we're looking for someone to replace him - or to fill the position of a current staff member should they become CEO.Below is a very short summary of those roles.If you'd like to know more you can read our full article on the vacancy here.In brief, the CEO is ultimately responsible for increasing the positive social impact generated by 80,000 Hours. The key responsibilities include:Setting the strategy for 80,000 Hours, including what audiences we should target with what types of recommendations, and which impact metrics to targetInspiring the entire organisation to be ambitious in striving to increase our impactHiring, retaining, and firing senior staffEnsuring we maintain positive aspects of our team culture, such as curiosity, honesty, and kindnessEnsuring we remain highly organised and functionalManaging relationships with our key donors and other stakeholdersAddressing the most important thorny issues that come up anywhere in the organisationIt's more likely than not that we will hire an internal candidate to fill the CEO role, which would then create a vacancy in another role within 80,000 Hours, potentially one of:Director of Internal Systems: Currently has a team of around five and oversees our operations, legal compliance, hiring, and office.Website Director: Manages a team of around eight and is focused on maintaining and building the website, producing written content, improving our career advice and our newsletter, and marketing our services to reach new users.Director of Special Projects: Generalist role that involves leading or managing various ad-hoc projects on behalf of the CEO, usually in the strategy and operations space. The projects change quarterly and can include project managing fundraising, the annual review, salary updates, and helping with strategy refreshes for individual teams.To learn more about 80,000 Hours, the role(s), what we're looking for in candidates, and how to express interest in them see the full article here.We'll keep the expression of interest form open through 11pm GMT on 10 December - the sooner we receive them the better.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
80000_Hours https://forum.effectivealtruism.org/posts/mWHEQFdE3HSdMHgrK/80-000-hours-is-looking-for-a-new-ceo-or-to-fill-a-vacancy Wed, 29 Nov 2023 21:20:22 +0000 EA - 80,000 Hours is looking for a new CEO (or to fill a vacancy left by someone promoted to be CEO). Could that be you? by 80000 Hours 80000_Hours https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:28 no full 1
89GdH5unSb2Sze6kj EA - Elements of EA: your (EA) identity can be bespoke by Amber Dawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Elements of EA: your (EA) identity can be bespoke, published by Amber Dawn on November 29, 2023 on The Effective Altruism Forum.Lots of people have an angsty, complicated, or fraught relationship with the EA community. When I was thinking through some of my own complicated feelings, I realised that there are lots of elements of EA that I strongly believe in, identify with, and am part of… but lots of others that I'm sceptical about, alienated from, or excluded from.This generates a feeling of internal conflict, where EA-identification doesn't always feel right or fitting, but at the same time, something meaningful would clearly be lost if I "left" EA, or completely disavowed the community. I thought my reflections might be helpful to others who have similarly ambivalent feelings.When we're in a community but feel like we're fitting awkwardly, we can either :(1) ignore it ('you can still be EA even if you don't donate/aren't utilitarian/don't prioritise longtermism/etc')(2) try to fix it (change the community to fit us better, 'Doing EA better')(3) leave ('It's ok to leave EA', 'Don't be bycatch').I want to suggest a fourth option: like the parts you like, dislike the parts you don't, and be aware of it and own it. Not 'keep your identity small' or 'hold your identity lightly' - though those metaphors can be useful too - but make your identity bespoke, a tailor-made, unique garment designed to fit you, and only you, perfectly.By way of epistemic status/caveat, know that I came up with this idea literally this morning, so I'm not yet taking it too seriously. It might help to read this as advice to myself.Elements of EASo, what are some of the threads, colours, cuts, styles that might go in to making your perfect EA-identity coat? I suggest:Philosophy and theory'Doing the most good possible' is almost tautologically simple as a principle, but obviously, EAs approach this goal using a host of specific philosophical and theoretical ideas and approaches. Some are held by most EAs, others are disputed. Things like heavy-tailed-ness, expected value, longtermism, randomised controlled trials, utilitarianism, population ethics, rationality, Bayes' theorem, and hits-based giving fall into this category (to name just a few). You might agree with some of these but not others; or, you might disagree with most EA philosophy but still have some EA identification because of the other elements.Moral obligationMany EAs hold themselves to moral obligations: for example, to donate a proportion of their income, or to plan their career with positive impact in mind. You can clearly feel these moral obligations without subscribing to the rest of EA: lots of people tithe, and lots of people devote their lives to a cause. Maybe then these principles are enough unique enough to 'count' as central EA elements. But if you add in a commitment to impartiality and effectiveness, I think this does give these moral obligations a distinct flavour; and, importantly, you can aspire to work toward the impartial good, effectively, without agreeing with (most) underlying EA theory, or agreeing with EA cause prioritization.The four central cause areasEAs prioritise lots of causes, but four central areas are often used for the purposes of analysis: global health and development, x-risk prevention, animal welfare, and meta-EA. Obviously, you don't need to subscribe to EA theory or EA's ideas about moral obligation to work on nuclear risk prevention, corporate animal welfare campaigns, or curing malaria.Similarly, you might consider yourself EA, but think that the most pressing cause does not fall into any of these categories, or (more commonly) is de-prioritized within the category (for example, mental health, or wild animal welfare, which are 'niche-r' interests within the wider causes of glo...

]]>
Amber Dawn https://forum.effectivealtruism.org/posts/89GdH5unSb2Sze6kj/elements-of-ea-your-ea-identity-can-be-bespoke Wed, 29 Nov 2023 16:26:49 +0000 EA - Elements of EA: your (EA) identity can be bespoke by Amber Dawn Amber Dawn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:47 no full 4
KSdGmBrsWcSEBAeXe EA - Rethink Priorities: Seeking Expressions of Interest for Special Projects Next Year by kierangreig Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities: Seeking Expressions of Interest for Special Projects Next Year, published by kierangreig on November 29, 2023 on The Effective Altruism Forum.Rethink Priorities' (RP's)Special Projects (SP) Team is looking for new impactful projects we can support in 2024!Key PointsA key strength of RP is its operations, and we aim to share the wealth of operational knowledge accumulated by RP to benefit other high-impact projects.We enable projects to focus on their core activities rather than operational concerns, freeing up time for impactful direct workIn 2023, we grew to a team of 5 FTE staff dedicated to operations for Special Projects, and we fiscally sponsored [sorted alphabetically]:Apollo ResearchCondor CampEffective Altruism Consulting NetworkEpochExistential Risk AllianceQuantified Uncertainty Research InstituteThe Insect InstituteIn addition we provided services to:Cooperative AI FoundationWe expect to have capacity to onboard new projects in early 2024. If you'd like to get involved, please reach out by submitting anExpression of Interest form.About the Special Projects ProgramThe SP team provides fee-basedfiscal sponsorship and support to projects that are led by individuals outside of RP. Within this model, the project's founders maintain autonomy and decision-making authority while we provide them with operational support and fiduciary oversight and share our tax-exempt status. Each project is assigned a dedicated point of contact within the Special Project team, to guarantee effective communication and tailored support.We will have capacity to take on more projects from the beginning of 2024.How to applyIf you need fiscal sponsorship and operational support and have funding or anticipate receiving funding for work that aligns with RP's mission and vision, we encourage you to send in a new or updatedexpression of interest via our online form (which should take 5-10 minutes to complete).We would ideally like to receive expressions of interest by January 5th, 2024 and will follow up with applicants on the next stage of our selection process in the following two weeks. If you have any questions, please feel free toget in touch. We look forward to hearing more about your projects and learning more about how working with the Special Projects team could help maximize your impact! Please note, RP observes a winter break starting December 18th and we will not be checking inboxes again until January 2nd.We expect projects to comply with your country's applicable laws, RP's employment practices (particularly our anti-harassment and conflict of interest policies), and other responsibilities described in the fiscal sponsorship agreement that you would sign with us. These are designed to help everyone enjoy a safe and inclusive workspace and to ensure that RP and your project can continue to benefit from our status as a nonprofit organization.Our ServicesThe exact services we provide depend on the project, and may include:Fiscal sponsorshipReceiving tax exempt grant fundsHandling tax and legal compliance issuesAccountingFinance and benefits administrationHiring as employees [via our U.S. or U.K. legal entities, or internationally via our EOR, in compliance with local laws]. We can legally hire in many countries.Managing employee benefits and payrollInvoicing and contracting / purchasing and reimbursementsHelping manage project budgetsGetting work visas in the U.S. or U.K. [we cannot guarantee the outcome of any visa applications, and would discuss options if unsuccessful, etc.]Researching legal and operational issuesRecruitment/hiringRunning hiring roundsDevelop hiring and interview materialsFundraising supportCoordinating and reviewing grant applications (please note that we are not able to write grant applications...

]]>
kierangreig https://forum.effectivealtruism.org/posts/KSdGmBrsWcSEBAeXe/rethink-priorities-seeking-expressions-of-interest-for Wed, 29 Nov 2023 14:48:00 +0000 EA - Rethink Priorities: Seeking Expressions of Interest for Special Projects Next Year by kierangreig kierangreig https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:23 no full 5
Wjs8LtbiZHpi6bKdb EA - Road safety: Landscape of the problem and routes to effective policy advocacy by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Road safety: Landscape of the problem and routes to effective policy advocacy, published by Rethink Priorities on November 29, 2023 on The Effective Altruism Forum.Editorial noteThis report was produced by Rethink Priorities between May and July 2023. The project was commissioned and supported by Open Philanthropy, which does not necessarily endorse our conclusions.This report builds on a short investigation conducted by Open Philanthropy in 2022, which found that previous philanthropic work on road safety looked potentially cost-effective. This report extends that analysis through in-depth case studies, expert interviews, cost-effectiveness modeling, and research into risk factors, the funding landscape, and promising interventions.We have tried to flag major sources of uncertainty in the report, and are open to revising our views based on new information or further research.Key takeawaysExecutive SummaryAccording to the 2019 Global Burden of Disease (GBD) study, there were about 1.2 million deaths due to road injuries in 2019. About 90% of these take place in LMICs, and the majority of those killed are between 15 - 50 years old. Additionally, WHO analysis and expert interviews indicate that road safety laws in many LMICs do not meet best-practice.[1] While there is limited information about what risk factors contribute most to the road safety burden, or what laws are most important to pass, the available evidence points to speed on the roads as most risky, followed by drunk driving.We conducted case studies of key time periods in China and Vietnam to better understand the relative impact of (philanthropically-funded) policy changes versus other factors. Our assessment of China is that we think Bloomberg's implementing partners contributed minimally to the key drunk driving policy change in 2011, and we think it's likely that this law was only one of many drivers to reduce burden.In contrast, we think laws were a more important driving force in Vietnam, and advocacy by Bloomberg, the Asia Injury Prevention Foundation and others significantly sped up their introduction. We did not find any sources that gave insight into drivers on a global scale.Regarding future burden, it's likely that this will follow trends in motorization. Self-driving cars may mitigate burden as they become more common; one source estimates they could constitute 20% of the global market by 2040, though we expect this to be lower in LMICs.This report builds on a short unpublished investigation conducted by Open Philanthropy in 2022. A quick BOTEC from that report, based on an existing impact evaluation (Hendrie et al., 2021), suggested that Bloomberg's road safety initiative might be quite cost-effective enough (ROI: ~1,100x). This report extends that analysis by reviewing Hendrie et al.'s estimates of lives saved, and comparing the authors' estimates for China and Vietnam to data on road outcomes from multiple sources.For China, we found that while the data shows reduced fatalities after 2011, we could not link them specifically to fewer incidents of drunk driving. For Vietnam, quantitative evidence for the impact of the helmet laws was stronger than for the drunk driving laws. As can be seen in our BOTEC, this analysis led us to reduce the estimated effectiveness of policy changes by 40% - 80%.In addition, we used our case studies to estimate specific speed up parameters for advocacy of 0.4 years in China and 3.8 years in Vietnam, versus the 10 years used previously. These changes significantly reduce our estimate of lives saved to 17% of Open Philanthropy's previous estimate. If we use the same methodology as the previous estimate (i.e., divide this estimate by 259 million USD, the entirety of Bloomberg's spending between 2007 - 2020), then the ROI drops to 148x.However, we propo...

]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/Wjs8LtbiZHpi6bKdb/road-safety-landscape-of-the-problem-and-routes-to-effective Wed, 29 Nov 2023 11:08:11 +0000 EA - Road safety: Landscape of the problem and routes to effective policy advocacy by Rethink Priorities Rethink Priorities https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:00 no full 7
sdNYqwaTJ5j4hGZit EA - How I feel about my GWWC Pledge by Michael Townsend Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I feel about my GWWC Pledge, published by Michael Townsend on November 29, 2023 on The Effective Altruism Forum.I took the GWWC Pledge in 2018, while I was an undergraduate student. I only have a hazy recollection of the journey that led to me taking the Pledge. I thought I'd write that down, reflect on how I feel now, and maybe share it.In high-school, I was kind of cringeI saw respected people wear suits, and I watched (and really liked) shows like Suits.I unreflectively assumed I'd end up the same. The only time I would reflect on it was to motivate myself to study for my upcoming exams - I have memories of going to the bathroom as a 17-year old, looking at myself in the mirror, and imagining being successful. I imagined the BMW I might drive, the family I could provide for, and the nice house I could own. A lot of this was psychologically tied up in aspirations to be in great shape.I was bullied a bit in primary school and early high-school. Whether because of that or not, I unconsciously craved being respected. And respected people wore suits.Despite what I assumed I would become - what I was actively working to become - I wasn't totally unreflective. On an intellectual level, I found it really strange knowing that the people around me earned so much that even a fraction of their earnings amounted to life-changing amounts of money for entire families - and not just some of the worst-off families, but probably for most families on the planet.I sat with this cognitive dissonance for a while, and sometimes grappled with it. Over time, I gradually thought that I'd have to do something like donate to charities (I assumed only the "good ones", and was happy to kick the work of finding those "good ones" down the road). I didn't know how much I should give or what felt like "enough", but 10% seemed fair. I think at this point, effective altruism hadn't been coined - I'm pretty confident I'd never heard anything about it.Obviously, I didn't donate anything. I was 17 and worked at McDonald's.In early university, I didn't really know who I wanted to beAt this stage, I had radically different and inconsistent conceptions of what I wanted from life.Just taking my career ambitions as an example:Sometimes I wanted to be a police-officer (definitely because I watched The Wire).I even considered joining the military (probably because I watched Band of Brothers - but also because they have good ads and there was a program I could have applied to that would involve the Australian military paying for my degree and giving me something like $40k AUD a year).But mainly, I assumed I'd be a lawyer. I didn't really have a good reason for this (beyond liking debating and having good enough grades). Mind you, at this stage I didn't want to be a corporate lawyer. I identified as very left-wing, against greed and the system, so I'd become a criminal barrister.While all this was happening, I was watching every science/educational channel that could hold my attention, and listening to every podcast about moral philosophy, economics, and psychology that I could find. It was pretty standard stuff for someone with those interests: Sam Harris, Very Bad Wizards, Veritasium and the like.I also studied philosophy and was utterly convinced that moral realism was true (I now doubt that), Peter Singer was right (...I still largely think this) and that consciousness was interesting but hella confusing (still confused). This more intellectual side of me was now certain I needed to give at least 10% to effective charities, if not much more. But I was free to think this because I basically had no money and still worked at McDonald's.More importantly, my best friend, Kieran, was constantly and forcefully insisting I try to be a better person. It often wasn't fun. I didn't like heari...

]]>
Michael Townsend https://forum.effectivealtruism.org/posts/sdNYqwaTJ5j4hGZit/how-i-feel-about-my-gwwc-pledge Wed, 29 Nov 2023 08:13:47 +0000 EA - How I feel about my GWWC Pledge by Michael Townsend Michael Townsend https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:56 no full 10
x2iT45T5ci3ea9yKW EA - Dialogue on Donation Splitting by JP Addison Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dialogue on Donation Splitting, published by JP Addison on November 29, 2023 on The Effective Altruism Forum.I'll start us off with the standard argument against donation splitting. We'll start with the (important!) assumption that you are trying to maximize[1] the amount of good you can do with your money. We'll also take for the moment that you are a small donor giving <$100k/year.There is some charity that can use your first dollar to do the most good. The basic question that this line of argument takes is: is there some amount of money within your donation budget that will cause the marginal effectiveness of a dollar to that charity to fall below that of the second best charity.For example, you could imagine that Acme Charity has a program that has only a $50k funding gap. After that, donations to Acme Charity would go towards another program.The standard argument against donation splitting, which seems right to me, is that the answer to that question is "probably not."[1]: Most definitions of effective altruism have language about maximizing ("as much as possible"). I personally do make some fuzzies-based donations, but do not count them towards my Giving What We Can Pledge.Here's the donation splitting policy that I might argue for: instead of "donate to the charity that looks best to you", I'd argue for "donate to charities in the proportion that, if all like-minded EAs donated their money in that proportion, the outcome would be best".Here's the basic shape of my argument: suppose there are 1000 EAs, each of which will donate $1000. Suppose further there are two charities, A and B, and that the EAs are in agreement that (1) both A and B are high-quality charities; (2) A is better than B on the current margin; but (3) A will hit diminishing returns after a few hundred thousand dollars, such that the optimal allocation of the total $1M is $700k to A and $300k to B.Donate $700 to A and $300 to B (donation splitting); orDon't donate all at the same time. Instead, over the course of giving season, keep careful track of how much A and B have received, and donate to whichever one is best on the margin. (In practice this will mean that the first few hundred thousand donations go to A, and then A and B will each be receiving donations in some ratio such that they remain equally good on the margin.)But if you don't have running counters of how much has been donated to A and B, the first policy is easier to implement. And both policies are better than the outcome where every EA reasons that A is better on the margin and all $1M goes to A.Now, of course EAs are not a monolith and they have different views about which charities are good. But I observe that in practice, EAs' judgments are really correlated. Like I think it's pretty realistic to have a situation in which a large fraction of EAs agree that some charity A is the best in a cause area, with B a close second. (Is this true for AMF and Malaria Consortium, in some order?) And in such a situation, I'd rather that EAs have a policy that causes some fraction to be allocated to B, than a policy that causes all the money to be allocated to A.Note that how this policy plays out in practice really does depend on how correlated your judgments are to those of other EAs. If I'm wrong and EAs' judgments are not very correlated, then donating all your budget to the charity that looks best to you seems like a good policy.I like this position - I'm already not sure how much I disagree. Some objections that might be more devil's advocate-y or might be real objections:I agree correlation is important. I'm not sure how to define it and, once defined, whether it will be correlated enough in practice.Roughly speaking, what decision theory / unit of analysis are we using here? It seems like your opening statement assum...

]]>
JP Addison https://forum.effectivealtruism.org/posts/x2iT45T5ci3ea9yKW/dialogue-on-donation-splitting Wed, 29 Nov 2023 06:25:59 +0000 EA - Dialogue on Donation Splitting by JP Addison JP Addison https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:03 no full 12
bBm64htDSKn3ZKiQ5 EA - Meet the candidates in the Forum's Donation Election (2023) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meet the candidates in the Forum's Donation Election (2023), published by Lizka on November 29, 2023 on The Effective Altruism Forum.This post collects some information about thecandidates in the Donation Election, with an emphasis on whatmarginal donations to the candidates would accomplish. It also includes some information about other projects .Please let me know if you spot mistakes or you'd like to add more context.[1] If your project isn't on this list, please feel free to write about it in the comments.Consider also:Donating to the Donation Election Fundor to individualprojectsDiscussing which of these donation opportunities are most cost-effective and how we should vote in theDonation Election (voting opens on Friday!)Candidates in the Donation ElectionCross-cause & meta (6)These projects work across different cause areas, or helpbuild effective altruism.LogoBasicsMore infoCharity Entrepreneurship: Incubated Charities FundTopics wiki pageFundraiserWhat extra donations would buyDonations to this Fund will be granted directly to Charity Entrepreneurship'sincubated charities. Charity Entrepreneurship'sfocus areas include health and development policy, mental health, family planning, and animal advocacy, and EA meta.Arguments or evidence for cost-effectivenessA post from March:After launch. How are CE charities progressing? (these charities had raised $22.5M by that point from their own funders,including GiveWell, Open Philanthropy, Founders Pledge, ACE).More on their track record here.EAIF:Effective Altruism Infrastructure Fund (EA Funds)Topics wiki pageFundraiserWhat extra donations would buyThe EAIF seems to have around $1.5M right now, so marginal donations to the EAIF would go towards grants like expenses for a student magazine covering issues like biosecurity and factory farming for non-EA audiences ($9,000), a shared workspace for the EA community in a major European city, andmore. (Open Philanthropy willmatch donations to the EAIF.)Arguments or evidence for cost-effectivenessAn argument for giving to the EAIF/LTFF is madehere.The EAIF hasreceived funding from Open Philanthropy.You cansee their public grants here, and some recentgrant recommendations and reasoning here.GWWC:Giving What We CanTopics wiki pageFundraiserWhat extra donations would buyBaseline funding would put them on stable financial footing for 2024 to support their operations, to support more donations and donation pledges. Fundraising for their expansion budget would allow them to grow (e.g. reach more potential donors), conduct and share more research, support the wider/international effective giving ecosystem, andmore.Arguments or evidence for cost-effectivenessGWWC'ssummary of their impact. They estimate that each dollar invested in GWWC generated $30 in donations for effective charities.GWWChas been funded by Open Philanthropy.Giving What We Can (Charity Elections)FundraiserWhat extra donations would buyOperations of the programme (0.5 FTE salary and a bit extra for promotions and outreach, to set up charity elections at schools) and improving measurement of impact (fromhere).Arguments or evidence for cost-effectivenessSeethis project brief for evidence of impact from EA Market Testing team and more.Rethink PrioritiesTopics wiki pageFundraiserWhat extra donations would buyRP seeks to raise funding to continue publishing research on the Forum, run the EA survey, pursue creative projects like themoral weights work (and other innovative work, which hashistorically been supported by individual donors), run other promising research projects, spend less time fundraising in the next year, andmore.Arguments or evidence for cost-effectivenessHere is their review of 2023; in 2023 they worked on ~160research pieces, ...

]]>
Lizka https://forum.effectivealtruism.org/posts/bBm64htDSKn3ZKiQ5/meet-the-candidates-in-the-forum-s-donation-election-2023 Wed, 29 Nov 2023 05:35:47 +0000 EA - Meet the candidates in the Forum's Donation Election (2023) by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:46 no full 14
5Chwpj6ZA6jDmDWWd EA - Save the date - EAGxLATAM 2024 by Daniela Tiznado Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the date - EAGxLATAM 2024, published by Daniela Tiznado on November 29, 2023 on The Effective Altruism Forum.[Spanish Version Below]EAGxLATAM 2024 will take place in Mexico City at theMuseo de las Ciencias Universum, from 16 to 18 February 2024.So, although this event is primarily for the Spanish- and Portuguese-speaking EA communities, applications from experienced members of the international community are very welcome!What is EAGx?EAGx conferences are locally organizedEA conferences designed for a wider audience, primarily for people:Familiar with the core ideas of Effective AltruismInterested in learning more about possible career journeysWho is this conference for?You might want to attend this conference if you:Are from Latin America or reside in the continent and are eager to keep learning about EA and connecting with individuals who share similar interests.Are a well-experienced community member from outside/within Latin America and interested in expanding your EA network.Vision for the conferenceOur main goal is to generate meaningful connections between EAs. If you're new to EA, this conference will help you discover the next steps on your EA journey and be part of a supportive community dedicated to doing good.What to expectThe event will feature:Talks and workshops on pressing problems that the EA community is currently trying to addressMost talks will be conducted in English, except for a small number of talks and workshops that will be given in Portuguese/SpanishThe opportunity to meet and share advice with other EAs in the communitySocial events around the conferenceApplication processAPPLY HERE!If you're unsure about whether to apply, err on the side of applying.See you in Mexico City!The Organizing Team 2024Versión en EspañolAparta la fecha - ¡EAGxLATAM 2024!EAGxLATAM 2024 se llevará a cabo en la Ciudad de México en elMuseo de las Ciencias Universum, del 16 al 18 de febrero de 2024.¿Qué es EAGx?Las EAGx son conferencias organizadas localmente y diseñadas para una audiencia más amplia, principalmente para personas:Familiarizadas con con las ideas centrales del altruismo eficazInteresadas en aprender más sobre posibles trayectorias profesionales¿Para quién es esta conferencia?Es posible que desees asistir a esta conferencia si:Resides en América Latina, eres nuevo/a en EA y tienes ganas de seguir aprendiendo y conectando con personas que comparten intereses similares.Eres un miembro experimentado de la comunidad fuera de América Latina y estás interesado/a en ampliar tu red de EA.Visión de la conferenciaNuestro principal objetivo es generar conexiones significativas entre los EA. Si eres nuevo/a en EA esta conferencia te ayudará a descubrir próximos pasos profesionales y a ser parte de una comunidad de apoyo dedicada a realizar el mayor impacto posible.¿Qué esperar?El evento contará con:Charlas y talleres sobre problemas que la comunidad de EA está intentando abordar actualmente.La mayoría de las charlas se realizarán en inglés, excepto una pequeña parte que se presentará en portugués/español.La oportunidad de conocer y compartir consejos con otros EA de la comunidad.Eventos sociales alrededor de la conferencia.Proceso de solicitudAplica AQUÍ!Si no estás seguro/a de si aplicar, aún así te recomendamos hacerlo.¡Te esperamos en la Ciudad de México!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Daniela Tiznado https://forum.effectivealtruism.org/posts/5Chwpj6ZA6jDmDWWd/save-the-date-eagxlatam-2024 Wed, 29 Nov 2023 02:52:27 +0000 EA - Save the date - EAGxLATAM 2024 by Daniela Tiznado Daniela Tiznado https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:58 no full 15
T6zdPaMqXPzbtDAZi EA - #GivingTuesday: My Giving Story and Some of My Favorite Charities by Kyle J. Lucchese Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: #GivingTuesday: My Giving Story and Some of My Favorite Charities, published by Kyle J. Lucchese on November 29, 2023 on The Effective Altruism Forum.Happy Giving Tuesday!A friend inspired me to share my giving story and some of my favorite charities.I was raised to love all and to give generously with my time, money, and spirit, aspirations I strive to live up to.When I first read The Life You Can Save in 2009, I realized that I could and should be doing more to help others wherever they are. It wasn't until 2011 when I came across GiveWell and Giving What We Can that I really put these ideas into action. I pledged to donate at least 10% of my income to effective charities and was driven to study business in hopes that I could earn to give more (I still don't make "make much" but it is a lot from a global perspective).Though I believe significant systemic reforms are needed to create a more sustainable and equitable world, I continue to donate at least 10% of my income and use my career to support better todays and tomorrows for all beings.Between now and the end of the year, I will allocate my donations as follows:20% - The Life You Can Save's Helping Women & Girls Fund: This fund is for donors who seek to address the disproportionate burden on women and girls among people living in extreme poverty. Donations to the fund are split evenly between Breakthrough Trust, CEDOVIP, Educate Girls, Fistula Foundation, and Population Services International.20% - Animal Charity Evaluators' Recommended Charity Fund: This fund supports 11 of the most impactful charities working to reduce animal suffering around the globe. The organizations supported by the fund include: Çiftlik Hayvanlarını Koruma Derneği, Dansk Vegetarisk Forening, Faunalytics, Fish Welfare Initiative, The Good Food Institute, The Humane League, Legal Impact for Chickens, New Roots Institute, Shrimp Welfare Project, Sinergia Animal, and the Wild Animal Initiative.20% - Spiro: a new charity focused on preventing childhood deaths from Tuberculosis, fundraising for their first year. Donation details on Spiro's website here. Donations are tax-deductible in the US, UK, and the Netherlands.15% - Giving What We Can's Risks and Resilience Fund: This fund allocates donations to highly effective organizations working to reduce global catastrophic risks. Funds are allocated evenly between the Long-Term Future Fund and the Emerging Challenges Fund.10% - Founders Pledge's Climate Change Fund: This fund supports highly impactful, evidence-based solutions to the "triple challenge" of carbon emissions, air pollution, and energy poverty. Recent past recipients of grants from the Climate Change Fund include: Carbon180, Clean Air Task Force, TerraPraxis, and UN High Level Climate Champions.10% - GiveDirectly: GiveDirectly provides unconditional cash transfers using cell phone technology to some of the world's poorest people, as well as refugees, urban youth, and disaster victims. According to more than 300 independent reviews, cash is an effective way to help people living in poverty, yet people living in extreme poverty rarely get to decide how aid money intended to help them gets spent.5% - Anima International: Anima aims to improve animal welfare standards via corporate outreach and policy change. They also engage in media outreach and institutional vegan outreach to decrease animal product consumption and increase the availability of plant-based options.Other organizations whose work I have supported throughout the year include:American Civil Liberties Union FoundationEA Funds' Animal Welfare Fund, Global Health and Development Fund, Infrastructure Fund, and Long-Term Future FundFairVoteGiveWell's Top Charities Fund, All Grants Fund, and Unrestricted FundProject on Government OversightThe Life You Can Save...

]]>
Kyle J. Lucchese https://forum.effectivealtruism.org/posts/T6zdPaMqXPzbtDAZi/givingtuesday-my-giving-story-and-some-of-my-favorite Wed, 29 Nov 2023 00:38:24 +0000 EA - #GivingTuesday: My Giving Story and Some of My Favorite Charities by Kyle J. Lucchese Kyle J. Lucchese https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:00 no full 17
yEb9CdWA2xaHA6QsD EA - Updates to Open Phil's career development and transition funding program by Bastian Stern Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates to Open Phil's career development and transition funding program, published by Bastian Stern on November 29, 2023 on The Effective Altruism Forum.We've recently made a few updates to the program page for ourcareer development and transition funding program (recently renamed, previously the "early-career funding program"), which provides support - in the form of funding for graduate study, unpaid internships, self-study, career transition and exploration periods, and other activities relevant to building career capital - for individuals who want to pursue careers that could help to reduce global catastrophic risks (especiallyrisks from advanced artificial intelligence andglobal catastrophic biological risks) or otherwise improve the long-term future.The main updates are as follows:We've broadened the program's scope to explicitly include later-career individuals, which is also reflected in the new program name.We've added some language to clarify that we're open to supporting a variety of career development and transition activities, including not just graduate study but also unpaid internships, self-study, career transition and exploration periods, postdocs, obtaining professional certifications, online courses, and other types of one-off career-capital-building activities.Earlier versions of the page stated that the program's primary focus was to provide support for graduate study specifically, which was our original intention when we first launched the program back in 2020. We haven't changed our views about the impact of that type of funding and expect it to continue to account for a large fraction of the grants we make via this program, but we figured we should update the page to clarify that we're in fact open to supporting a wide range of other kinds of proposals as well, which also reflects what we've already been doing in practice.This program now subsumes what was previously called theOpen Philanthropy Biosecurity Scholarship; for the time being, candidates who would previously have applied to that program should apply to this program instead. (We may decide to split out the Biosecurity Scholarship again as a separate program at a later point, but for practical purposes, current applicants can ignore this.)Some concrete examples of the kinds of applicants we're open to funding, in no particular order (copied from the program page):A final-year undergraduate student who wants to pursue a master's or a PhD program in machine learning in order to contribute to technical research that helps mitigate risks from advanced artificial intelligence.An individual who wants to do an unpaid internship at a think tank focused on biosecurity, with the aim of pursuing a career dedicated to reducing global catastrophic biological risk.A former senior ML engineer at an AI company who wants to spend six months on self-study and career exploration in order to gain context on and investigate career options in AI risk mitigation.An individual who wants to attend law school or obtain an MPP, with the aim of working in government on policy issues relevant to improving the long-term future.A recent physics PhD who wants to spend six months going through a self-guided ML curriculum and working on interpretability projects in order to transition to contributing to technical research that helps mitigate risks from advanced artificial intelligence.A software engineer who wants to spend the next three months self-studying in order to gain relevant certifications for a career in information security, with the longer-term goal of working for an organization focused on reducing global catastrophic risk.An experienced management consultant who wants to spend three months exploring different ways to apply their skill set to reducing global catastrophic risk and apply...

]]>
Bastian_Stern https://forum.effectivealtruism.org/posts/yEb9CdWA2xaHA6QsD/updates-to-open-phil-s-career-development-and-transition Wed, 29 Nov 2023 00:07:07 +0000 EA - Updates to Open Phil's career development and transition funding program by Bastian Stern Bastian_Stern https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:07 no full 18
xcdgwMX9BFk9De2xp EA - Empirical data on how teenagers hear about EA by Jamie Harris Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Empirical data on how teenagers hear about EA, published by Jamie Harris on November 30, 2023 on The Effective Altruism Forum.How do people hear about and get involved in effective altruism (EA)? We have good data about this for active community members who fill out theEA survey, but it's harder to get data on people earlier in their exploration or people in demographic groups that have less outreach and services specifically for them.Here, I share some data from 63 smart, curious, and altruistic UK teenagers who participated in programmes run by myself (akaLeaf) who reported to have heard of effective altruism before.The key results and takeaways:The most common places that people first or primarily heard about EA seem to be Leaf itself,Non-Trivial, and school - none of these categories show up as prominently on theEA survey.Many people heard about EA from multiple sources. Using a more permissive counting system, the most common sources people mentioned at least briefly were Leaf and hearing from a friend.(More tentative) 80,000 Hours, LessWrong, and podcasts seem to have been less important for this group than you might expect from having seen the EA survey.Methodology & contextThis information comes from 15-18 year olds in the UK who were offered a place on one of two programmes by Leaf this year (2023):Changemakers Fellowship: One-week summer residential programme with follow-up mentorship to meet other changemakers tackling pressing social issues, and fast-track your progress towards making a major difference. Students of any and all subjects.History to shape history (for the better): 5-week online fellowship exploring how to use the lessons of history to make a positive impact and steer humanity onto a better path. History students.I advertised both of these programmes as for "smart, curious, and ambitiously altruistic" teenagers - effective altruism was not discussed on the programme landing pages but was highlighted for transparency on Leaf's "About" page andFAQ.After being offered a place on the programme, participants were sent a consent form, which included various other questions. The data in this post all comes from people who first answered "Yes" (out of "Yes", "No", or "Maybe") to the question "Before hearing about this programme, had you heard of the term 'effective altruism'?".Changemakers FellowshipHistory to shape historyApplied758154Filled out consent form54 (7% of applicants)66 (43% of applicants)Answered "Yes" to having heard about EA36 (67% of respondents)27 (41% of respondents)I then informally analysed free-text, qualitative responses to the question "If you had heard of effective altruism and/or longtermism before hearing about this programme, please describe in your own words how you heard about them or explored them."Applicants to the Changemakers Fellowship who hadn't participated in Leaf programmes previously were 28% white and 40% male. History to shape history applicants were 50% white, 27% male. All were aged 15-18 and live in the United Kingdom.This appendix contains:A table separating out results for the participants of the two programmes and providing one example of an answer from each type of categoryThe full set of qualitative responses and my categorisations of themA table with info about how people heard about Leaf itselfResultsI categorised responses twice:"Primary" - I selected only one option from each response, prioritising whichever seemed to come chronologically first for them or (if this was unclear) seemed more important to their journey."Permissive" - counting as many different things as they mentioned, however briefly, and using a more permissive standard for what counted as relevant as opposed to "NA".CategoryPrimaryPermissiveIndirectly or earlier via Leaf14 (22%)18 (29%)...

]]>
Jamie_Harris https://forum.effectivealtruism.org/posts/xcdgwMX9BFk9De2xp/empirical-data-on-how-teenagers-hear-about-ea Thu, 30 Nov 2023 17:58:42 +0000 EA - Empirical data on how teenagers hear about EA by Jamie Harris Jamie_Harris https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:42 no full 4
FRKDA7fphPRt6Zt3N EA - #173 - Digital minds, and how to avoid sleepwalking into a major moral catastrophe (Jeff Sebo on the 80,000 Hours Podcast) by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: #173 - Digital minds, and how to avoid sleepwalking into a major moral catastrophe (Jeff Sebo on the 80,000 Hours Podcast), published by 80000 Hours on November 29, 2023 on The Effective Altruism Forum.We just published an interview: Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe. Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.Episode summaryWe do have a tendency to anthropomorphise nonhumans - which means attributing human characteristics to them, even when they lack those characteristics. But we also have a tendency towards anthropodenial - which involves denying that nonhumans have human characteristics, even when they have them. And those tendencies are both strong, and they can both be triggered by different types of systems. So which one is stronger, which one is more probable, is again going to be contextual.But when we then consider that we, right now, are building societies and governments and economies that depend on the objectification, exploitation, and extermination of nonhumans, that - plus our speciesism, plus a lot of other biases and forms of ignorance that we have - gives us a strong incentive to err on the side of anthropodenial instead of anthropomorphism.Jeff SeboIn today's episode, host Luisa Rodriguez interviews Jeff Sebo - director of the Mind, Ethics, and Policy Program at NYU - about preparing for a world with digital minds.They cover:The non-negligible chance that AI systems will be sentient by 2030What AI systems might want and need, and how that might affect our moral conceptsWhat happens when beings can copy themselves? Are they one person or multiple people? Does the original own the copy or does the copy have its own rights? Do copies get the right to vote?What kind of legal and political status should AI systems have? Legal personhood? Political citizenship?What happens when minds can be connected? If two minds are connected, and one does something illegal, is it possible to punish one but not the other?The repugnant conclusion and the rebugnant conclusionThe experience of trying to build the field of AI welfareWhat improv comedy can teach us about doing good in the worldAnd plenty more.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Dominic Armstrong and Milo McGuireAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy MooreHighlightsWhen to extend moral consideration to AI systemsJeff Sebo: The general case for extending moral consideration to AI systems is that they might be conscious or sentient or agential or otherwise significant. And if they might have those features, then we should extend them at least some moral consideration in the spirit of caution and humility.So the standard should not be, "Do they definitely matter?" and it should also not be, "Do they probably matter?" It should be, "Is there a reasonable, non-negligible chance that they matter, given the information available?" And once we clarify that that is the bar for moral inclusion, then it becomes much less obvious that AI systems will not be passing that bar anytime soon.Luisa Rodriguez: Yeah, I feel kind of confused about how to think about that bar, where I think you're using the term "non-negligible chance." I'm curious: What is a negligible chance? Where is the line? At what point is something non-negligible?Jeff Sebo: Yeah, this is a perfectly reasonable question. This is somewhat of a term of art in philosophy and decision theory. And we might not be able to very precisely or reliably say exactly where the threshold is between non-negligible risks and negligible risks - but what we can say, as a starting point, is that a risk...

]]>
80000_Hours https://forum.effectivealtruism.org/posts/FRKDA7fphPRt6Zt3N/173-digital-minds-and-how-to-avoid-sleepwalking-into-a-major Wed, 29 Nov 2023 22:14:27 +0000 EA - #173 - Digital minds, and how to avoid sleepwalking into a major moral catastrophe (Jeff Sebo on the 80,000 Hours Podcast) by 80000 Hours 80000_Hours https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 25:51 no full 9
YKidYukDhKLBtDqsh EA - Doing Good Effectively is Unusual by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Doing Good Effectively is Unusual, published by Richard Y Chappell on December 1, 2023 on The Effective Altruism Forum.tl;dr: It actually seems pretty rare for people to care about the general good as such (i.e., optimizing cause-agnostic impartial well-being), as we can see by prejudged dismissals of EA concern for non-standard beneficiaries and for doing good via indirect means.IntroductionMoral truisms may still be widely ignored. The moral truism underlying Effective Altruism is that we have strong reasons to do more good, and it's worth adopting the efficient promotion of the impartial good among one's life projects. (One can do this in a "non-totalizing" way, i.e. without it being one's only project.) Anyone who personally adopts that project (to any non-trivial extent) counts, in my book, as an effective altruist (whatever their opinion of the EA movement and its institutions).Many people don't adopt this explicit goal as a personal priority to any degree, but still do significant good via more particular commitments (to more specific communities, causes, or individuals). That's fine by me, but I do think that even people who aren't themselves effective altruists should recognize the EA project as a good one. We should all generally want people to be more motivated by efficient impartial beneficence (on the margins), even if you don't think it's the only thing that matters.A popular (but silly) criticism of effective altruism is that it is entirely vacuous. As Freddie deBoer writes:[T]his sounds like so obvious and general a project that it can hardly denote a specific philosophy or project at all… [T]his is an utterly banal set of goals that are shared by literally everyone who sincerely tries to act charitably.This is clearly false. As Bentham's Bulldog replies, most people give lip service to doing good effectively. But then they go and donate to local children's hospitals and puppy shelters, while showing no interest in learning about neglected tropical diseases or improving factory-farmed animal welfare.DeBoer himself dismisses without argument "weird" concerns about shrimp welfare and existential risk reduction, which one very clearly cannot just dismiss as a priori irrelevant if one actually cares about promoting the impartial good. The latter entails a very unusual degree of open-mindedness.The fact is: open-minded, cause-agnostic concern for promoting the impartial good is vanishingly rare. As a result, the few people who sincerely have and act upon this concern end up striking everyone else as extremely weird. We all know that the way you're supposed to behave is to be a good ally to your social group, do normal socially-approved things that signal conformity and loyalty (and perhaps a non-threatening degree of generosity towards socially-approved recipients)."Literally everyone" does this much, I guess. But what sort of weirdo starts looking into numbers, and argues on that basis that chickens are a higher priority than puppies? Horrible utilitarian nerds, that's who! Or so the normie social defense mechanism seems to be (never mind that efficient impartial beneficence is not exclusively utilitarian, and ought rather to be a significant component of any reasonable moral view).Let's be honestEveryone is motivated to rationalize what they're antecedently inclined to do. I know I do plenty of suboptimal things, due to both (i) failing to care as much as would be objectively warranted about many things (from non-cute animals to distant people), and (ii) being akratic and failing to be sufficiently moved even by things I value, like my own health and well-being. But I try to be honest about it, and recognize that (like everyone) I'm just irrational in a lot of ways, and that's OK, even if it isn't ideal.Vegans care more about animals than I ...

]]>
Richard Y Chappell https://forum.effectivealtruism.org/posts/YKidYukDhKLBtDqsh/doing-good-effectively-is-unusual Fri, 01 Dec 2023 17:43:42 +0000 EA - Doing Good Effectively is Unusual by Richard Y Chappell Richard Y Chappell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:25 no full 1
wtjFne8WdcLJTpyWm EA - Effektiv Spenden's Impact Evaluation 2019-2023 (exec. summary) by Sebastian Schienle Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effektiv Spenden's Impact Evaluation 2019-2023 (exec. summary), published by Sebastian Schienle on December 1, 2023 on The Effective Altruism Forum.effektiv-spenden.org is an effective giving platform in Germany and Switzerland that was founded in 2019. To reflect on our past impact, we examine Effektiv Spenden's cost-effectiveness as a "giving multiplier" from 2019 to 2022 in terms of how much money is directed to highly effective charities due to our work. We have two primary reasons for this analysis:To provide past and future donors with transparent information about our cost-effectiveness;To hold ourselves accountable, particularly in a situation where we are investing in further growth of our platform.We provide both a simple multiple (or "leverage ratio") of donations raised for highly effective charities compared to our operating costs, as well as an analysis of the counterfactual (i.e. what would have happened had we never existed).Our analysis complements our Annual Review 2022 (in German) and builds on previous updates and annual reviews, such as, amongst others, our reviews of 2021 and 2019. In both instances, we also included initial perspectives on our counterfactual impact. Since then, theinvestigation ofFounders Pledge into giving multipliers as well as Giving What We Can (GWWC)'s recentimpact evaluationhave provided further methodological refinements. In line with GWWC's approach, we shift to 3-year time horizons, which we feel better represents our impact over time and avoids short-term distortions.However, our attempt to quantify our "giving multiplier" deviates in some parts from the methodologies and assumptions applied by Founders Pledge and GWWC and is an initial, shallow analysis only that we intend to develop further in the future.Below, we share the key results of our analysis. We invite you to share any comments or takeaways you may have, either by directly commenting or by reaching out tosebastian.schienle@effektiv-spenden.orgKey resultsIn 2022, we moved 15.3 million to highly effective charities, amounting to37 million in total donations raised since Effektiv Spenden was founded in 2019.Our leverage ratio, i.e. the money moved to highly effective charities per 1 spent on our operations was 55.7 and 40.8 for the 2019-2021 and 2020-2022 time periods respectively.[1]Our best-guess counterfactual giving multiplier is 17.9 and 13.0 for those two time periods, robustly exceeding 10x. This means that for every 1 spent on Effektiv Spenden between 2019-2022, we are confident to have facilitated more than 10 to support highly effective charities which would not have materialized had Effektiv Spenden not existed.Our conservative counterfactual giving multiplier is 10.4 for 2019-2021, and 7.5 for 2020-2022.The decline of our multiplier over time is driven by the investment into our team. Over the last year, our team has grown substantially to enable further growth. While this negatively impacts our giving multiplier in the short term, we consider it a necessary prerequisite for further growth.Our ambition is to return to a best-guess counterfactual multiplier of at least 15x in the coming years. That said, ultimately our goal is not to maximize the multiplier, but to maximize counterfactually raised funds for highly effective charities. (As long as our work remains above a reasonable cost-effectiveness bar.)How to interpret our resultsWe consider our analysis an important stocktake of our impact, and a further contribution to the growing body of giving multiplier analyses in the effective giving space. That said, we also recognize the limitations of our approach and want to call out some caveats to guide interpretation of these results.Our analysis is largely retrospective, i.e. it compares our past money moved with operating ...

]]>
Sebastian Schienle https://forum.effectivealtruism.org/posts/wtjFne8WdcLJTpyWm/effektiv-spenden-s-impact-evaluation-2019-2023-exec-summary Fri, 01 Dec 2023 14:51:35 +0000 EA - Effektiv Spenden's Impact Evaluation 2019-2023 (exec. summary) by Sebastian Schienle Sebastian Schienle https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:21 no full 3
jGYoDrtf8JGw85k8T EA - My Personal Priorities, Charity, Judaism, and Effective Altruism by Davidmanheim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Personal Priorities, Charity, Judaism, and Effective Altruism, published by Davidmanheim on December 1, 2023 on The Effective Altruism Forum.I've thought a lot about charitable giving over the past decade, both from a universalist and from a Jewish standpoint. I have a few thoughts, including about how my views have evolved over time. This is a very different perspective than many in Effective Altruism, but I think it's important as a member of a community that benefits from being diverse rather than monolithic for those who dissent from community consensus make it clear that it's acceptable to do so. Hopefully, this can be useful both to other people who are interested in a more Jewish perspective, and for everyone else interested in thinking about balancing different personal views with effective giving.BackgroundTo start, there is a strong Jewish tradition, and a legal requirement in the Shulchan Aruch, the code of Jewish law, for giving at least ten percent of your income to the poor and to community organizations - and for those who can afford it, ideally, a fifth of their income. (For some reason, no-one ever points out that second part.)So I always gave a tenth of my income to charity, even before starting my first post-college job, per Jewish customary law. My parents inculcated this as a value since childhood, and a norm, and it's one I am grateful for. (One thing I did differently than most, and credit my sister with suggesting, is putting 10% of my paycheck directly in a second account which was exclusively for charity.My giving as a child, and as a young adult, largely centered on local Jewish organizations, poverty assistance for local poor people and the poor in Israel, and community organizations I interacted with. In the following years, I started thinking more critically about my giving, and charity to community organizations seemed in tension with a more universalist impulse, what you might call "Tikkun Olam"- a directive to improve the world as a whole. I was very conflicted about this for quite some time, but have come to some tentative conclusions, and I wanted to outline my current views, informed by a combination of the Jewish sources and my other beliefs.Judaism vs. UtilitariansI am lucky enough, like most people I know personally, to have significantly more money than is strictly needed to feed, clothe, and house myself and my family. The rest of the money, however, needs to be allocated - for savings, for entertainment, for community, and for charity. And my conclusion, after reflection about the question, is that those last two are separate both conceptually and as a matter of Jewish conception of charity.My synagogue is a wonderful community institution that I benefit from, and I believe it is proper to pay my fair share. And in Halacha, Jewish law, community organizations are valid recipients of charity. But there is also a strong justification for prioritizing giving to those most in need.Utilitarian philosophers have advocated for a giving on an impartial basis, seeing a contradiction between universalism and their "selfish" impulse to justify keeping more than a minimal amount of their own money. To maximize global utility, all money over a bare minimum should go to those most in need, or otherwise be maximally impactful. In contrast, Halacha is clear that you and your family come first, and giving more than a token amount of charity must wait until your family's needs are met.More than that, it is clearly opposed to giving more than 20% of your income under usual circumstances, i.e. short of significant excess wealth. And once you are giving to charity, Jewish sources suggest progressively growing moral circles, first giving to family in need, then neighbors, then the community. In contrast to this, Jewish law also contai...

]]>
Davidmanheim https://forum.effectivealtruism.org/posts/jGYoDrtf8JGw85k8T/my-personal-priorities-charity-judaism-and-effective Fri, 01 Dec 2023 13:08:52 +0000 EA - My Personal Priorities, Charity, Judaism, and Effective Altruism by Davidmanheim Davidmanheim https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:28 no full 5
YzCF2NDLSwKayZXb7 EA - ALLFED's 2023 Highlights by Sonia Cassidy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ALLFED's 2023 Highlights, published by Sonia Cassidy on December 1, 2023 on The Effective Altruism Forum.Executive SummaryWelcome to ALLFED's 2023 Highlights, our annual update on what Alliance to Feed the Earth in Disasters has been up to this year. From advising Open Philanthropy on food security, to 6 new papers submitted for peer review, to writing preparedness/response plans for 3 governments, we have made substantial strides towards our mission to increase resilience to global food catastrophes.There is much more we could do, as we are currently funding constrained. If you like what you read in this post, please see also our2023 ALLFED Marginal Funding Appeal and consider donating to usvia our website this giving season.Increasing geopolitical tensions have presented an opportunity to translate ALLFED's scientific research into actionable policy proposals with endeavors such as writing national preparedness and response plans against abrupt sunlight reduction scenarios (ASRS, e.g. volcanic or nuclear winter) for various countries such as the United States, Australia, and Argentina.We continue to explore further options for governments to plan and develop technology through pilots. We have worked towards producing the evidence base needed to inform decision making prior to and during global catastrophe, with 6 new core ALLFED papers submitted for peer review. We have also redoubled efforts in studying responses to potential mass infrastructure collapse scenarios, such as from large scale nuclear electromagnetic pulse, AI-powered cyberattacks, or extreme pandemics (e.g.high transmissibility and mortality causing mass absenteeism). On this topic, we have produced around half a dozen papers over the years (including one this year).Here is what we you can read about in these 2023 Highlights:We kick off with a strategy section and some insights into our top-level thinking and ALLFED's Theory of Change.We then report on our research, including 6 new papers submitted for peer review and some contraptions we have engineered. According to ananalysis of theCambridge Centre for Existential Risk paper database, ALLFED team members are the second, third, fourteenth, and twenty-first most prolific X-risk academic researchers in the world.We talk about our policy work next, focusing on engagements with the governments of Australia and Argentina (through partnership withthe Spanish speaking GCR org) as well as the United States policy engagement (which includedendorsement of Senator Edward Markey's Health Impacts of Nuclear War Act).We then move to communications, especially our GCR field-building and science communications. It has been gratifying to see ALLFED's work propagating and an increasing use of our field-defining terminology, which we give examples of here.We follow up with events, circa 20 presentations and an account of a recent workshop that we gave at EAGx Australia.We then move to operations, the backbone of ALLFED's day-to-day activities, and an important element of our organizational resilience for response in a GCR (one modality of our Theory of Change).Our team section comes next, where we celebrate our team. ALLFED's multilingual team members are located around the globe and can talk about our work, and deliver workshops and presentations in a number of languages, including Spanish, German, French, Russian, Czech, Polish, Kannada, Tamil, Hindi, Filipino, Yoruba and more. In the team section, we also share with you a fun seaweed-eating experiment some of our team members participated in to experience a 10% seaweed diet.We close with thanks and acknowledgements, to all our donors, collaborators and supporters. We would like to take this opportunity to especially thank Greg Colbourn and the Centre for Enabling EA Learning & Research (CEEALA...

]]>
Sonia_Cassidy https://forum.effectivealtruism.org/posts/YzCF2NDLSwKayZXb7/allfed-s-2023-highlights Fri, 01 Dec 2023 10:45:39 +0000 EA - ALLFED's 2023 Highlights by Sonia Cassidy Sonia_Cassidy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 48:51 no full 6
EGuxA7JAgzL84Ph8D EA - GWWC London Group Co-leads: Reflections on our first event by Chris Rouse Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC London Group Co-leads: Reflections on our first event, published by Chris Rouse on December 3, 2023 on The Effective Altruism Forum.Hello from your GWWC London Group 2023/24 co-leads.We're excited to be one of the GWWC groups relaunched underGiving What We Can's new community strategy.We (Gemma Paterson, Denise Melchin and Chris Rouse), are all volunteers with non-EA day jobs who are keen to help achieve GWWC's mission of making giving effectively and significantly a cultural norm.Giving is often done privately and while there's nothing wrong with that, we hope that GWWC London and groups can make it easier for people to connect to a larger community of people who care about doing good effectively through giving. We know first hand how re-affirming it is to meet and get to know some of the many wonderful people who are doing the same thing.We're looking forward to hosting a variety of events throughout the year from Talks & Discussions to pub socials to picnics topledge celebration events for GWWC members.Most of our events are open to anyone who is interested in using their resources effectively to help others, and we actively encourage new attendees, and for existing members to bring friends along.Our end goal is to facilitate sustainable lifetime pledges and donations to highly effective charities but hopefully we'll have some fun along the way.Sign up for our mailing list to hear about future eventshereJoin the GWWC community slackhereIf you have questions or you'd be interested in collaborating on an event with us, please reach out to us atlondon-group-leaders@givingwhatwecan.orgIf you're interested in seeing a GWWC group in your city, you canapply to lead a city group hereWe're biased but think this is likely a pretty impactful volunteering opportunity for personable pledgers that are passionate about effective givingWe would be happy to have a call to talk about our experiencesUK based pledgers are invited to join us on Saturday 9th December at the Charity Entrepreneurship office for ourGWWC London Pledgers Holiday Party 2023If you can't make it then please feel free to still donate to theassociated GWWC/CE fundraiserReflections on Giving What We Can London's first eventEarlier this month we held our first ** official **event for the Giving What We Can (GWWC) London group! We'd like to thank Newspeak House for generously hosting us in their lovely venue. If you would like to find out more about their work and the other events they host you can find them here: https://newspeak.house/At relatively short notice we had just over 30 attendees join us for the informal launch and we were delighted to meet and chat with some of the London effective giving community.The evening had quite minimal structure but we started with a short introduction to the group and a pre-recorded talk from Grace Adams (Marketing Director, GWWC). The talk was "The future of EA relies on Effective Giving" which was an ideal subject. In hindsight, although the theme was great, watching a pre-recorded talk was not very engaging and we will plan to have live in-person speakers for any talks we hold in the future.The rest of the evening was free time for attendees to meet and chat. The atmosphere was friendly and relaxed and I had a series of interesting conversations with different guests. We may introduce themes of some kind in future to help guide conversations but overall I was pleased with the atmosphere of the event.If you attended the event and have any feedback for us we'd be grateful to receive it. You can submit that here:Feedback formOur next event will be a holiday pledge celebration for GWWC members on the 9th of December, find out more and sign up here:Pledge Celebration Event.Thanks for listening. To help us out with The Nonlinear Library ...

]]>
Chris Rouse https://forum.effectivealtruism.org/posts/EGuxA7JAgzL84Ph8D/gwwc-london-group-co-leads-reflections-on-our-first-event Sun, 03 Dec 2023 21:48:38 +0000 EA - GWWC London Group Co-leads: Reflections on our first event by Chris Rouse Chris Rouse https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:45 no full 1
H7rjCEmhcWscZsEnE EA - What do we really know about growth in LMICs? (Part 1: sectoral transformation) by Karthik Tadepalli Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What do we really know about growth in LMICs? (Part 1: sectoral transformation), published by Karthik Tadepalli on December 3, 2023 on The Effective Altruism Forum.To EAs, "development economics" evokes the image of RCTs on psychotherapy or deworming. That is, after all, the closest interaction between EA and development economists. However, this characterization has prompted some pushback, in the form of the argument that all global health interventions pale in comparison to the Holy Grail: increasing economic growth in poor countries.After all, growth increases basically every measure of wellbeing on a far larger scale than any charity intervention, so it's obviously more important than any micro-intervention. Even a tiny chance of boosting growth in a large developing country will have massive expected value, more than all the GiveWell charities you can fund.The argument is compelling[1] and well-received - so why haven't "growth interventions" gone anywhere? I think the EA understanding of growth is just too abstract to yield really useful interventions that EA organizations could lobby for or implement directly. We need specific interventions to evaluate, and "lobby for general economic liberalization" won't cut it.The good news is that a large and active group of "macro-development" economists have been enhancing our understanding of growth in developing countries. They (mostly) don't run RCTs, but they still have credible research designs that can tell us important things about the causes and constraints of growth. In this series of posts, I want to lay out some stylized facts about growth in developing countries. These are claims which are backed up by the best research on this topic, and which tell us something useful about the causes and constraints of growth in developing countries.My hope is not to pitch any specific interventions, but rather to give you the lay of the land, on which you can build the case for specific interventions. The way I hope for you to read this series is with an entrepreneurial eye. "This summary suggests that X is a key bottleneck to growth; I suspect Y could help solve X at scale. I should look more into Y as a potential intervention." or "This summary says that X process helps with growth; let me brainstorm ways we could accelerate X."As part of that, an important caveat is that I will not cover topics where I believe there's no prospect for an effective intervention. For example, a large body of work emphasizes the importance of good institutions for development; I don't believe that topic will yield any promising interventions, so I won't cover it.Sectoral TransformationIn this post, I will start with the fundamental path of growth: sectoral transformation. Every country that has ever gotten rich has had the following transformation: first, most of the population works in agriculture. Then, people start to move from agriculture to manufacturing, coinciding with a large increase in the country's growth rate. Finally, people move out of manufacturing and into services, coinciding with the country's growth slowing down as it matures into a rich economy.This is the process of sectoral transformation, and it is basically a universal truth of development. So it's no surprise that a big focus of macro-development is how to catalyze sectoral transformation in developing countries.1. Agricultural productivity growth can drive sectoral transformation... or hurt it.Every economy starts out as agrarian, because everyone needs food to survive. Agricultural productivity growth allows economies to produce enough food with fewer people, so that most people can move out of agriculture. This is why the US can produce more food per person than India, even though 2% of the US workforce in agriculture compared to 45% of India's workfor...

]]>
Karthik Tadepalli https://forum.effectivealtruism.org/posts/H7rjCEmhcWscZsEnE/what-do-we-really-know-about-growth-in-lmics-part-1-sectoral Sun, 03 Dec 2023 16:00:25 +0000 EA - What do we really know about growth in LMICs? (Part 1: sectoral transformation) by Karthik Tadepalli Karthik Tadepalli https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:04 no full 3
z2Ky37zAW7eyeJwur EA - Farewell messages from the EA Philippines Core Team by Elmerei Cuevas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Farewell messages from the EA Philippines Core Team, published by Elmerei Cuevas on December 3, 2023 on The Effective Altruism Forum.The 30th of November marks the last day of Elmerei Cuevas, Alethea Cendaña, and Jaynell Chang in their respective roles in Effective Altruism Philippines.Presented here are their farewell messages to the EA Community:ELMERWhen I first took on this role back in February 2022, I only expected to stay a year. That was the grant duration we were given. It became a wonderful surprise that I got to extend for another few months and even got to lead in organizing #EAGxPhilippines.My sun may set now as ED of EAPH, but despair not for a new beginning is about to dawn for the community. My biggest takeaways from this role are really the people I met, the friendships, the calls, the chats, the talks about dreams, frustrations, hopes, and aspirations. I hear people thanking us for making EA Philippines as welcoming as it is, but really, it is the community who has made it welcoming- we merely mirror to others what the rest of the community is like.I thank everyone who has joined our events, meet- ups, retreats, and even engaged with us virtually through Slack, Gathertown, social media, and newsletter during our tenure. I cherish the love and support you have extended to Althy, Jay and I, and hope you may also be generous to share it to the next leaders of the community. I will still be around - "an Elmer", "a kuya" you may reach out to whether or not an EA-related topic. HeheBelieving the Best,Elmerei CuevasOutgoing Executive Director of Effective Altruism PhilippinesALTHYTo all who made the past year the most delightful chapter of my life: Thank you so much for all the cherished memories, the lively banter, our collaborative endeavors, and the shared commitment to effective altruism.When I first came to EA Philippines, I knew I found more than a supportive community - it was a compass guiding my beliefs in doing good while expanding my perspective on what's possible and how I can contribute. EA Philippines has been a safe place to explore my advocacies, sharpen my principles, and embrace altruistic ambition.To the student groups, especially those with whom I shared my first years in EA, thank you for crazy parties and random sessions about our rants about EA. I hope you stay as fun and as critical. Keep inspiring young, talented minds to pursue a better world in the most effective ways.To all the seasoned professionals, thank you for teaching me to be pragmatic so that I can navigate through real-world problems with practicality. I am grateful for all your stories about your career experiences and for sharing your passion for EA, as you still find time to juggle projects and commitments outside your primary work.To Tanya, Brian, Elmer, Jay, Janai, and Red, thank you for allowing me to work alongside you and bring to life all the projects, events, programs, and ideas that I hoped to create. It is because of you that I was able to achieve whatever impact I made in the EA Philippines community.This is not a goodbye because I will still be around. If you need any help, advice, or want to delve into conversations about the animal advocacy movement or anything about effective altruism, please feel free to message me.For a better, compassionate world,Alethea 'Althy' CendañaOutgoing Associate Director of Effective Altruism PhilippinesJAYAs a farewell, I would not be revolving my message on myself - but rather I would dedicate this portion of this space to you, dear community member.All throughout my life I held to one insight stemming from the Little Prince - What is essential is invisible to the eye.I have held on that insight for too long that I was able to witness how significant it is to put value on what we don't see, or rathe...

]]>
Elmerei Cuevas https://forum.effectivealtruism.org/posts/z2Ky37zAW7eyeJwur/farewell-messages-from-the-ea-philippines-core-team Sun, 03 Dec 2023 11:43:09 +0000 EA - Farewell messages from the EA Philippines Core Team by Elmerei Cuevas Elmerei Cuevas https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:05 no full 4
dTcywCm3AHAij9veA EA - Is the Animal & Vegan Advocacy (AVA) Summit an EA Event? by Julia Reinelt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is the Animal & Vegan Advocacy (AVA) Summit an EA Event?, published by Julia Reinelt on December 4, 2023 on The Effective Altruism Forum.With the recentannouncement of the Centre for Effective Altruism on leaning into cause-specific conferences for some of their events, EAs working in farmed animal welfare or alternative protein lost an important event to network, hear about the latest research, and meet EA-minded funders.Nevertheless and independent of the rest of this article, I want to take a brief moment to say how impressive it is to see the EA movement growing so much that there now is the need for (large!) specialized conferences.We often get asked: "Is the AVA Summit an EA event?" and I would like to provide some answers and thoughts about that in this post.First and foremost, the AVA Summit is an international conference series bringing together advocates focusing on systemic change for animals. The animal and vegan advocacy movement consists of people working on a wide range of strategies and approaches, but we all share a vision of a world where nonhuman animals, ultimately, are taken out of the food system and other human uses.While the AVA Summit is not branded as an EA event, there are certain "EA lenses" that we frequently use in our assessments and work, such as the core principles around scalability, tractability, and neglectedness:We emphasize topics relating to systemic change, e.g., changing corporate policy or legal protection.We are committed to inspiring our participants to be as impactful as possible in the work they are doing for animals, which also means inspiring them to continuously reconsider, improve, and update.We focus on farmed animals (including aquatic animals and insects), as well as on wild animals, suffering in large numbers.To be very clear and transparent, I will add to points 1. and 2.:Additionally, we welcome other tactics and strategies to be discussed in an atmosphere of mutual respect.Additionally, other areas of animal exploitation, like experimentation, do also have a place at the AVA Summit.Who attends the AVA Summit?Our attendees are dedicated individuals working or looking to work in the animal and vegan advocacy movement professionally. 80% of our attendees are full-time advocates.Lewis Bollard, Farm Animal Welfare Program Director at Open Philanthropy, said about the inaugural AVA Summit that it was "the event with the highest number of serious-minded people in the movement". We strive to achieve this high standard with all of our events.Our events are also substantially more international than any other similar event: At the last Summit in the US, we welcomed more than 750 attendees from 47 different countries.People at AVA have a multitude of motivations, including animal welfare, animal rights, environmental, social justice, effective altruism, public health, and they also join from other intersecting movements. A lot of attendees at AVA are EA-aligned without necessarily calling themselves "Effective Altruists".What are the biggest advantages of going to the AVA Summit?Having conversations that are directly related to your work in animal advocacy and learning about actionable strategies, as well as benefiting from lessons and collaborations in EA-adjacent movements.Finding hidden counterfactual talent in a different pool of potential applicants, and networking with a highly diverse and highly international audience.Meeting with a large number of farmed animal funders, as well as nonprofits. At the last Summit in the US, 215 organizations working directly in the movement were represented (not counting universities, companies, etc.).What is the program at the AVA Summit like?Like I mentioned before, we support various strategies and tactics for advancing our shared vision: From grassroots ac...

]]>
Julia Reinelt https://forum.effectivealtruism.org/posts/dTcywCm3AHAij9veA/is-the-animal-and-vegan-advocacy-ava-summit-an-ea-event Mon, 04 Dec 2023 20:48:33 +0000 EA - Is the Animal & Vegan Advocacy (AVA) Summit an EA Event? by Julia Reinelt Julia Reinelt https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:12 no full 3
wmN3pmYAign5baSCL EA - Extending Existing Animal Protection Laws by Moritz Stumpe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Extending Existing Animal Protection Laws, published by Moritz Stumpe on December 4, 2023 on The Effective Altruism Forum.This report was conducted within the pilot for Charity Entrepreneurship's Research Training Program in the fall of 2023 and took around eighty hours to complete (roughly translating into two weeks of work). Please interpret the confidence of the conclusions of this report with those points in mind.For questions about the structure of the report, please reach out toleonie@charityentrepreneurship.com. For questions about the content of this research, please contact Moritz Stumpe atmoritz@animaladvocacyafrica.org.The full report can also be accessed as PDF here.Thank you to Karolina Sarek and Aashish Khimasia for their review and feedback.Executive summaryThis research report was conducted as part of Charity Entrepreneurship's Research Training Program. The aim of this report was to explore the potential of extending current animal protection laws, conventions, and directives to adjacent areas, which might be more feasible than proposing entirely new laws. This research focused on identifying and evaluating laws in various geographic regions and prioritising ideas based on their potential impact.The key findings of the report are as follows:The Shandong guidelines on chicken handling, transport, and slaughter, passed in 2016, could provide a highly promising area for extending legislation. Several extension ideas are proposed. These issues should be investigated in more depth by consulting with existing animal advocacy groups and experts in China to determine next steps.The UK's Animal Welfare Act 2006 may also be a promising law to extend. Potential extensions to fishing and/or invertebrates should be investigated further.The Ohio Administrative Code, Section 901:12, currently only bans caging practices in production. This ban could be extended to sales, thus also affecting state imports. This intervention idea should be passed on to existing animal advocacy groups in the U.S., which they may pursue or investigate further.EU Regulation 1/2005 and Directive 98/58 could be extended to (farmed) fish. This intervention idea should be passed on to existing animal advocacy groups at the EU level, which they may pursue or investigate further.Other laws yield less promising ideas for extension.Overall, these ideas have only been investigated in a relatively shallow manner. This report should act as inspiration for further work and research to determine the real merits of the proposed ideas.1 AimThe topic for this report is a scoping exercise, with the goal of looking at existing animal protection laws, conventions, directives, and other regulatory frameworks (called only 'laws' from here on) and see how they could be extended to adjacent areas. This was chosen as a research area because extending existing laws might be more tractable than proposing completely new ones.The idea was to investigate relevant laws and geographic regions, find potential ideas in this context, and then prioritise those ideas in terms of their promisingness. The most promising ideas were selected to undergo a quick review, based on which recommendations for further research were made.2 Research processThe basis for this report isthis spreadsheet, which I set up to structure my research. The most relevant, but not all, findings from the spreadsheet are included in this report. Additionally, the report includes findings that were not included in the spreadsheet. Interested readers may thus consult the spreadsheet for further information and details regarding the entire research process.However, this report acts as a standalone resource and is more relevant than the spreadsheet, summarising the most crucial and action-relevant information from my research. In thi...

]]>
Moritz Stumpe https://forum.effectivealtruism.org/posts/wmN3pmYAign5baSCL/extending-existing-animal-protection-laws Mon, 04 Dec 2023 20:16:03 +0000 EA - Extending Existing Animal Protection Laws by Moritz Stumpe Moritz Stumpe https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:11:48 no full 4
gpEBdMSHmou8ZFv7r EA - Hiring a CEO & EU Tech Policy Lead to launch an AI policy career org in Europe by Cillian Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiring a CEO & EU Tech Policy Lead to launch an AI policy career org in Europe, published by Cillian on December 6, 2023 on The Effective Altruism Forum.SummaryWe are hiring for anExecutive Director and anEU Tech Policy Lead to launch Talos Institute[1], a new organisation focused on EU AI policy careers.Talos is spinning out ofTraining for Good and will launch in 2024 with theEU Tech Policy Fellowship as its flagship programme. We envision Talos expanding its activities and quickly growing into a key organisation in the AI governance landscape.Apply here by December 27th.Key DetailsClosing: 27 December, 11:59PM GMTStart date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the best candidate can only start later. Ability to attend our upcoming Brussels Summit (February 26th - March 1st) would also be beneficial, though not required.Hours: 40/week (flexible)Location: Brussels (preferred) / RemoteCompensation:Executive Director: 70,000 - 90,000. We are committed to attracting top talent and are willing to offer a higher salary for the right candidate.EU Tech Policy Lead: 55,000 - 75,000. We are committed to attracting top talent and are willing to offer a higher salary for the right candidate.How to apply: Please fill in this shortapplication formContact:cillian@trainingforgood.comAbout Talos InstituteEU Tech Policy FellowshipTheEU Tech Policy Fellowship is Talos Institute's flagship programme. It is a 7-month programme enabling ambitious graduates to launch European policy careers reducing risks from artificial intelligence. From 2024, it will run twice per year. It includes:8-week training that explores the intricacies of AI governance in EuropeA week-long policymaking summit in Brussels to connect with others working in the space6-month placement at a prominent think tank working on AI policy (e.g. The Centre for European Policy Studies, The Future Society)Success to dateThe EU Tech Policy Fellowship appears to have had a significant impact to date. Since 2021, we've supported ~30 EU Tech Policy Fellows and successfully transitioned a significant number to work on AI governance in Europe. For example:Several work at key think tanks (e.g.The Future Society, theInternational Center for Future Generations, and theCentre for European Policy Studies)One has co-founded anAI think tank working directly with the UN andco-authored a piece for The Economist with Gary MarcusOthers are advising MEPs and key institutions on the EU AI Act and related legislationWe're conducting an external evaluation and expect to publish the results in early 2024. Initial indicators suggest that the programme has been highly effective to date. As a result, we have decided to double the programme's size by running two cohorts per year. We now expect to support 30+ fellows in 2024 alone.Future directionsWe can imagine Talos Institute growing in a number of ways. Future activities could include things like:Creating career advice resources tailored to careers in European policy (especially for those interested in AI & biosecurity careers). Similar to whatHorizon has done in the US.Community-building activities for those working in AI Governance in Europe (eg. retreats to facilitate connections, help create shared priorities, identify needs in the space, and incubate new projects).Hosting events in Brussels educating established policy makers on risks from advanced AIActivities that help grow the number of people interested in considering policy careers focused on risks from advanced AI, e.g. workshops likethisExpanding beyond AI governance to run similar placement programmes for other problems in Europe (e.g. biosecurity).Establishing the organisation as a credible think tank in Eu...

]]>
Cillian_ https://forum.effectivealtruism.org/posts/gpEBdMSHmou8ZFv7r/hiring-a-ceo-and-eu-tech-policy-lead-to-launch-an-ai-policy Wed, 06 Dec 2023 23:08:51 +0000 EA - Hiring a CEO & EU Tech Policy Lead to launch an AI policy career org in Europe by Cillian Cillian_ https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:38 no full 1
aCEAczDuRrZihaLNA EA - Why Yudkowsky is wrong about "covalently bonded equivalents of biology" by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Yudkowsky is wrong about "covalently bonded equivalents of biology", published by titotal on December 6, 2023 on The Effective Altruism Forum.confidence level: I am a physicist, not a biologist, so don't take this the account of a domain level expert. But this is really basic stuff, and is very easy to verify.Recently I encountered a scientific claim about biology, made by Eliezer Yudkowsky. I searched around for the source of the claim, and found that he has been repeating versions of the claim for over a decade and a half, including in "the sequences" and his TED talk. In recent years, this claim has primarily been used as an argument for why an AGI attack would be extremely deadly. I believe this claim is factually incorrect.The quotes:I'm going to show the various versions of the claim I found below, with the relevant sentences bolded:To plausibly argue that "humans" were intelligently designed, you'd have to lie about the design of the human retina, the architecture of the human brain, the proteins bound together by weak van der Waals forces instead of strong covalent bondsYudkowsky discussing the flaws of evolutionary design, in "the sequences" blog post "dark side epistemology".It was obvious years before Nanosystems that molecular nanomachines would in fact be possible and have much higher power densities than biology. I could say, "Because proteins are held together by van der Waals forces that are much weaker than covalent bonds," to point to a reason how you could realize that after just reading Engines of Creation and before Nanosytems existed.Yudkowsky discussing AI interventions on the alignment forum.A lot of the advantage of human technology is due to human technology figuring out how to use covalent bonds and metallic bonds, where biology sticks to ionic bonds and proteins held together by van der Waals forces (static cling, basically)Comment on a post discussing technology and AI.Algae are tiny microns-wide solar-powered fully self-replicating factories that run on general assemblers, "ribosomes", that can replicate most other products of biology given digital instructions. This, even though the proteins are held together by van der Waals forces rather than covalent bonds, which is why algae are far less tough than diamond (as you can also make from carbon). It should not be very hard for a superintelligence to repurpose ribosomes to build better, more strongly bonded, more energy-dense tiny things that can then have a quite easy time killing everyone.Yudkowsky's example scenario for how an AI could extinct humanity, on twitterCan you build your own synthetic biology, synthetic cyborgs? Can you blow straight past that to covalently bonded equivalents of biology where instead of proteins that fold together and are held together by static cling, you have things that go down much sharper potential energy gradients and are bundled together, people have done advanced design work about this sort of thing.Yudkowksy's Ted talk, again discussing AI capabilities, during the Q&A section.I broadly endorse this reply and have mostly shifted to trying to talk about "covalently bonded" bacteria, since using the term "diamondoid" (tightly covalently bonded CHON) causes people to panic about the lack of currently known mechanosynthesis pathways for tetrahedral carbon lattices.Yudkowsky's response to my recent article a few weeks ago, talking about how to refer to potential advanced nanotechnologies.Summarising the claimAs you can see, Yudkowsky has repeated this claim several time over a time period spanning 15 years to just a few weeks ago, in very high profile contexts.These quotes all make roughly the same argument, which I will sum up as follows:Proteins are held together by weak van-der-waals forces, which are weak forces, akin to static...

]]>
titotal https://forum.effectivealtruism.org/posts/aCEAczDuRrZihaLNA/why-yudkowsky-is-wrong-about-covalently-bonded-equivalents Wed, 06 Dec 2023 16:30:33 +0000 EA - Why Yudkowsky is wrong about "covalently bonded equivalents of biology" by titotal titotal https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:04 no full 5
bx3KKZmmAPAxCJHwT EA - Announcing Impact Ops by Impact Ops Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Impact Ops, published by Impact Ops on December 6, 2023 on The Effective Altruism Forum.Hi there!We're excited to announceImpact Ops: an EA-aligned agency offering ops support to high-impact organizations.Our coreservices include:Entity setupFinanceRecruitmentAuditDue diligenceSystem implementationWe've been running since April and support a number of organizations in the EA community, including GWWC, CLTR, and METR (formerly ARC Evals). You can learn more about the projects we're working onhere.We share most of our updates over onLinkedIn, including tips forentity setup,hiring, and more. We've got plenty of free resources in the works, so consider following us there if you're looking for nonprofit ops advice.Our mission is to help high-impact projects grow and thrive, and we're excited about working with more EA projects! If you could benefit from ops support, please don't hesitate to reach out athello@impact-ops.org.Thanks for reading!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Impact Ops https://forum.effectivealtruism.org/posts/bx3KKZmmAPAxCJHwT/announcing-impact-ops Wed, 06 Dec 2023 12:15:39 +0000 EA - Announcing Impact Ops by Impact Ops Impact Ops https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:18 no full 7
iXv6zjbAfwrpQy37B EA - EA thoughts from Israel-Hamas war by ezrah Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA thoughts from Israel-Hamas war, published by ezrah on December 6, 2023 on The Effective Altruism Forum.I'm Ezra, CEO of EA Israel, but am writing this in a personal capacity. My goal in writing this is to give the community a sense of what someone from a decent-sized local EA group is going through during a time of national crisis. I'll try to keep the post relatively apolitical, but since this is such a charged topic, I'm not sure I'll succeed. I will say that I'm quite nervous about the responses to the post, since the forum can sometimes lean towards criticism.Ideally, I'd want people who are reading this to do so with a sense of compassion, while keeping in mind that this is a difficult time and difficult topic to post or share experiences about. I also don't want the comments to be a discussion of the war per se, but of the experiences of an EA during the war. Finally, I'm sure that an individual from Gaza will be having a very different experience, which I respect and would be interested in hearing, but in this post I'm not trying to capture all possible experiences, but to share a part of mine and my community's.These are my views and thoughts, and not the official position of the organization or of my team members. I wrote this on my phone around November 18th, since I've been without much access to a computer. I haven't had a chance to update it or spend lots of time editing, so I apologize in advance if it feels lacking in polish.Thanks for bearing with me during the preambles.So what have I been doing since the outbreak of the war?Since the terrorist attacks on Oct. 7, and the ongoing hostage situation and frequent rocket attacks, life in Israel and in the community has changed drastically. Many know someone who was killed or is a hostage, the majority of men (and many women as well) aged 18-40 have been called up to reserve duty, and the entire country has been in a state of trauma and mourning. For the first few weeks, most commercial activity in Israel stopped, schools were closed, and people went to funerals. Adjusted for population size, the Hamas attacks were 13 times more deadly than 9/11.Personally, I've been called up to the army, along with another EA Israel team member, a board member, and the husbands of two others on our team. I've been home only sporadically for the past 6 weeks. My wife and 2 year old son are alone, and are struggling emotionally. I've been to one funeral, of someone from my local (non EA) community.My cousin, who lives in a city that was attacked on Oct 7th, was locked in the bomb shelter in his apartment for 16 hours with his wife and four children, and heard his neighbours being violently murdered. Thank God, somehow the terrorists passed over them, and they've been living in a hotel since then.Many people who I know from the global community have reached out to me and the team to see that us and our families are safe, which felt good. On the other hand, I'm not sure how much the average in EA in Israel (or Gaza) feels cared about by the global EA community. I'd be happy to see some sort of statement of concern for the wellbeing of EA community members in a conflict zone.Our work at EA Israel has mostly paused. Talking about global priorities seems less relevant during wartime, and most of our staff isn't available to work on projects. The university semester is suspended. We've been involved in a few projects trying to help with prioritising donations, a board member wrote a post about donations, and are trying to launch a donation optimisation project with a major foundation.We've done some work on mapping the mental health needs in Israel for foundations, and were invited to present it at the Knesset (parliament), but nothing major has come to fruition. We've been holding weekly virtual community meetings...

]]>
ezrah https://forum.effectivealtruism.org/posts/iXv6zjbAfwrpQy37B/ea-thoughts-from-israel-hamas-war Wed, 06 Dec 2023 10:21:29 +0000 EA - EA thoughts from Israel-Hamas war by ezrah ezrah https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:14 no full 8
FnNJfgLgsHdjuMvzH EA - EA Infrastructure Fund's Plan to Focus on Principles-First EA by Linch Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Infrastructure Fund's Plan to Focus on Principles-First EA, published by Linch on December 6, 2023 on The Effective Altruism Forum.SummaryEA Infrastructure Fund (EAIF)[1] has historically had a somewhat scattershot focus within "EA meta." This makes it difficult for us to know what to optimize for or for donors to evaluate our performance. MoreWe propose that we switch towards focusing our grantmaking on Principles-First EA.[2] MoreThis includes supporting:research that aids prioritization across different cause areasprojects that build communities focused on impartial, scope sensitive and ambitious altruisminfrastructure, especially epistemic infrastructure, to support these aimsWe hope that the tighter focus area will make it easier for donors and community members to evaluate the EA Infrastructure Fund, and decide for themselves whether EAIF is a good fit to donate to or otherwise support.Our tentative plan is to collect feedback from the community, donors, and other stakeholders until the end of this year. Early 2024 will focus on refining our approach and helping ease transition for grantees. We'll begin piloting our new vision in Q2 2024. MoreNote: The document was originally an internal memo written by Caleb Parikh, which Linch Zhang adapted into an EA Forum post. Below, we outline a tentative plan. We are interested in gathering feedback from community members, particularly donors and EAIF grantees, to see how excited they'd be about the new vision.Introduction and background contextI (Caleb) [3]think the EA Infrastructure Fund needs a more coherent and transparent vision than it is currently operating under.EA Funds' EA Infrastructure Fund was startedabout 7 years ago under CEA. The EA Infrastructure Fund (formerly known as the EA Community Fund or EA Meta Fund) has given out 499 grants worth about 18.9 million dollars since the start of 2020. Throughout its various iterations, the fund has had a large impact on the community and I am proud of a number of the grants we've given out. However, the terminal goal of the fund has been somewhat conceptually confused, which likely leads to a focus and allocation of resources often seemed scattered and inconsistent.For example, EAIF has funded various projects that are associated with meta EA. Sometimes, these are expansive, community-oriented endeavors like local EA groups and podcasts on effective altruism topics. However, we've also funded more specialized projects for EA-adjacent communities. The projects include rationality meetups, fundraisers for effective giving in global health, and AI Safety retreats.Furthermore, in recent years, EAIF also functioned as a catch-all grantmaker for EA or EA-adjacent projects that aren't clearly under the purview of other funds. As an example, it has backed early-stage global health and development projects.I think EAIF has historically served a valuable function. However, I currently think it would be better for EAIF to have a narrower focus. As the lead for EA Funds, the bottom line of EAIF has been quite unclear to me, which has made it challenging for me to assess its performance and grantmaking quality. This lack of clarity has also posed challenges for fund managers in evaluating grant proposals, as they frequently face thorny philosophical questions, such as determining the comparative value of a neartermist career versus a longtermist career.Furthermore, the lack of conceptual clarity makes it difficult for donors to assess our effectiveness or how well we match their donation objectives. This problem is exacerbated by us switching back to a more community-funded model, in contrast to our previous reliance on significant institutional donors like Open Phil[4]. I expect most small and medium-sized individual donors to have less time or resources to...

]]>
Linch https://forum.effectivealtruism.org/posts/FnNJfgLgsHdjuMvzH/ea-infrastructure-fund-s-plan-to-focus-on-principles-first Wed, 06 Dec 2023 03:49:52 +0000 EA - EA Infrastructure Fund's Plan to Focus on Principles-First EA by Linch Linch https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:31 no full 10
ME4ihqRojjuhprejm EA - Effective Giving Incubation - apply to CE & GWWC's new program! by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Giving Incubation - apply to CE & GWWC's new program!, published by CE on December 5, 2023 on The Effective Altruism Forum.Charity Entrepreneurship in collaboration withGiving What We Can is opening a new program to launch 4-6 newEffective Giving Initiatives (EGIs) in 2024. We expect them to raise millions in counterfactual funding for highly impactful charities, even in their first few years.[Applications are open now]In recent yearsDoneer Effectief,Effektiv Spenden &Giving What We Can have moved huge sums of money ($1.4m, $35m and $330m, respectively) to the best charities globally. We aim to build on their experience and success by launching new EGIs in highly promising locations. These initiatives can be fully independent or run in collaboration with existing organizations, depending on what is most impactful. We'll provide the training, the blueprints, and the all-important seed funding.This 8-week full-time, fully cost-covered program will run online from April 15 to June 7, 2024, with 2 weeks in person in London. We encourage individuals from all countries to apply, and we are particularly excited about applications from ourtop recommended countries.[Apply by January 14, 2024]Learn more on our website:[EFFECTIVE GIVING INCUBATION]Who is this program for?We invite applicants from all backgrounds, ages, and nationalities. Specific work experience or formal education credentials aren't necessary. During the program, we'll help you join forces with a co-founder from the cohort - someone whose skills and experience complement your own. Together, you'll make up an entrepreneurial team that:Is high in moral ambition: Drives to maximize funds raised and then optimize their impact.Is deeply impartial and open-minded: Focuses on following the latest evidence about the most impactful giving opportunities worldwide.Has a strong focus on tangible results: Pushes for rigor, organization, and accountability to run a tight ship with excellent governance and outcomes.Grows its influence and credibility over time: Builds relationships and acts as a trusted advisor to discerning donors.N.B. One of you may have previous experience in fundraising or strategic marketing, though this is not required.Why do we think this is promising?In the last few years, several Effective Giving Initiatives such asDoneer Effectief,Effektiv Spenden &Giving What We Can have moved millions in funding to the best charities globally, to the nonprofits that are helping the greatest number of those most in need, to the greatest extent. In short, they have made real progress on many of the world's most pressing problems.However, there is still too little funding for highly impactful nonprofits and our internalanalysis suggests that EGIs are a proven effective way to raise these funds. This lack of funding takes time away from people who could be working on important problems, who instead have to focus on fundraising. In some cases, this means that high-leverage work won't get done because there is not enough funding, and projects have to shut down or minimize their scope.Established EGIs have developed a deep repository of knowledge, resources, and systems that new actors can build on. Leveraging this has two significant benefits: New EGIs will (a) have a significantly higher chance of successfully launching and (b) be able to move faster and have an impact sooner than they would if they were starting from scratch.CE has an excellenttrack record of launching highly impactful organizations and has expertise in incubating and training charity founders. GWWC and other effective giving initiatives have expressed their excitement for this new program and will support its development and implementation, as well as directly mentor the new EGIs after the program.Read our r...

]]>
CE https://forum.effectivealtruism.org/posts/ME4ihqRojjuhprejm/effective-giving-incubation-apply-to-ce-and-gwwc-s-new Tue, 05 Dec 2023 17:13:03 +0000 EA - Effective Giving Incubation - apply to CE & GWWC's new program! by CE CE https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:57 no full 16
xQufwwhJsu4TJLwkW EA - Taiwan's military complacency. by JKitson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Taiwan's military complacency., published by JKitson on December 5, 2023 on The Effective Altruism Forum.Taiwan's current military strategy puts it at risk from a resurgent China.Taiwan's leaders seem to underrate the risk of military conflict. I was told this post would be of interest to members of the EA forum, so I am reposting it from my substack.Why won't Taiwan change course?Taiwan faces the threat of major conflict from the People's Republic of China. China's economic rise has funded an expansion in its military capabilities, which are now quantitatively and in many cases, qualitatively superior to its opponents. On the face of it, despite a transformation in China's forces, Taiwan has not drastically adapted its military strategy and seemingly expects to fight a war with a limited number of its high-value sea, air, and land units, which cannot be quickly replaced.In a full-blown conflict, these will likely be overwhelmed and destroyed in weeks, if not days. Taiwan has not yet adapted to the circumstances due to a mixture of institutional inertia and questionable political calculation. It remains to be seen if Taiwan's current position can continue to deter a conflict or prevail if it occurs.Adapting to a transformation in military circumstances is a significant challenge for any military, but Taiwan's Ministry of National Defence (MND) has thus far not been willing or able to do it. While China was still a poor country and its armed forces were far inferior to Taiwan's, the basic plan to defend the island from invasion was to meet the large invasion force and defeat it. Bothduring andafter the Cold War, the US provided Taiwan with a host of equipment, including jets, destroyers, tanks, artillery, and air defense systems.Although Taiwan and China did fight variousskirmishes throughout the Cold War, these never massively escalated in large part due to US intervention. During some periods of Chinese internal strife, the threat of conflict was far lower, but in periods of relative stability, there was a non-zero risk of invasion. If China decided to invade, Taiwan's US (and later, indigenously built) jets would first establish air superiorityover the straits. The Taiwanese navy would then attack the invasion force of Chinese Navy ships and ramshackle troop transports, which would be nearly helpless as Taiwanese jets screamed overhead. Given the often shambolic state of the Chinese military and the fact that the US would be free to join in with its own even more superior forces, it is obvious why the Chinese military never attempted this invasion.The Chinese ThreatToday, the story is somewhat different. China has developed a modern air, land, and naval force. China officially spends $227.79 billion (1.55 trillion yuan) on its military, but due to purchasing power parity, this isequivalent to $700 billion. The PLAAF has 2,500 aircraft, including about 2,000 fighter jets, and has taken delivery of hundreds of Chengdu J-20 fighters, one of only four examples worldwide of a 5th generation fighter. 5th generation jets offer greater stealth, maneuverability, range, and information processing capabilities compared to 4th generation machines. The Chengdu J-20's capabilities against the US F-35 are uncertain, but they are more than a match for Taiwan's air force, which numbers around 250 fighters, with its most advanced models being outdated 4th generation F-16s, which date from 1992.A Chengdu J-20 fighter.Supporting this formidable air force is adaunting layer of air defenses. Consisting of radars and ground-to-air and ship missiles, the PLA can conduct operations over the Taiwan Strait with a reasonable degree of safety, allowing the PLAAF to focus on supporting an invasion effort. China's air defense was initially built on Soviet platforms but now incl...

]]>
JKitson https://forum.effectivealtruism.org/posts/xQufwwhJsu4TJLwkW/taiwan-s-military-complacency Tue, 05 Dec 2023 16:43:37 +0000 EA - Taiwan's military complacency. by JKitson JKitson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:00 no full 18
fGCHomw45xuTHd757 EA - An exhaustive list of cosmic threats by JordanStone Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An exhaustive list of cosmic threats, published by JordanStone on December 5, 2023 on The Effective Altruism Forum.Toby Ord covers 'Asteroids and Comets' and 'Stellar Explosions' in The Precipice. But I thought it would be useful to provide an up-to-date and exhaustive list of all cosmic threats. I'm defining cosmic threat here as any existential risk potentially arising from space. I think this list may be useful for 3 main reasons:New cosmic threats are discovered frequently. So it's plausible that future cause areas could pop out of this space. I think that keeping an eye on it should help identify areas that may need research. Though it should be noted that some of the risks are totally impossible to protect against at this point (e.g. a rogue planet entering our solar system).Putting all of the cosmic threats together in one place could reveal that cosmic threats are more important than previously thought, or provide a good intro for someone interested in working in this space.There is momentum in existential risk reduction from outer space, with great powers (Russia, USA, China, India, Europe) already collaborating on asteroid impact risk. So harnessing that momentum to tackle some more of the risks on this list could be really tractable and may lead to collaboration on other x-risks like AI, biotech and nuclear.I will list each cosmic threat, provide a brief explanation, and find the best evidence I can to provide severity and probability estimates for each. Enjoy :)I'll use this format:Cosmic Threat [Severity of worst case scenario /10] [Probability of that scenario occurring in the next 100 years] Explanation of threatExplanation of rationale and approachSeverity estimatesFor the severity, 10 is the extinction of all intelligent life on Earth, and 0 is a fart in the wind. It was difficult to pin down one number for threats with multiple outcomes (e.g. asteroids have different sizes). So the severity estimates are for the worst-case scenarios for each cosmic threat, and the probability estimate corresponds to that scenario.Probability estimatesProbabilities are presented as % chance of that scenario occurring in the next 100 years. I have taken probabilities from the literature and converted values to normalise them as a probability of their occurrence within the next 100 years (as a %). This isn't a perfect way to do it, but I prioritised getting a general understanding of their probability, rather than numbers that are hard to imagine. When the severity or likelihood is unclear or not researched well enough, I've written 'unknown'.I'm trying my best to ignore reasoning along the lines of "if it hasn't happened before, then it very likely won't happen ever or is extremely rare" because of the anthropic principle. Our view of past events on Earth is biased towards a world that has allowed humanity to evolve, which likely required a few billion years of stable-ish conditions. So it is likely that we have just been lucky in the past, where no cosmic threats have disturbed Earth's habitability so extremely as to set back life's evolution by billions of years (not even the worst mass extinction ever at the Permian-Triassic boundary did this, as reptiles survived).An Exhaustive List of Cosmic ThreatsFormat:Cosmic Threat [Severity of worst case scenario /10] [Probability of that scenario occurring in the next 100 years] Explanation of threatSolar flares [4/10] [1%]. Electromagnetic radiation erupts from the surface of the sun. Solar flares occur fairly regularly and cause minor impacts, mainly on communications. A large solar flare has the potential to cause electrical grids to fail, damage satellites, disrupt radio signals, cause increased radiation influx, destroy data storage devices, cause navigation errors, and permanently damage scientific eq...

]]>
JordanStone https://forum.effectivealtruism.org/posts/fGCHomw45xuTHd757/an-exhaustive-list-of-cosmic-threats Tue, 05 Dec 2023 13:07:58 +0000 EA - An exhaustive list of cosmic threats by JordanStone JordanStone https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:48 no full 19
DXPWKQLnZPHyo74FH EA - I donated $35 to offset my carbon footprint for this year by Luke Eure Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I donated $35 to offset my carbon footprint for this year, published by Luke Eure on December 5, 2023 on The Effective Altruism Forum.This is a cross-post from my blog. I've seen analysis like this previously on the forum, but nothing recently, so I thought it might be useful to share one up-to-date practical exploration of climate offset donations.I want to start donating annually to offset my carbon footprint. I don't really think of this as a charitable cost - instead it'sinternalizing my externalities.This is the first time I am systematically deciding to make an annual donation - I wanted to walk through my thinking in case it's useful for anyone else! This post also serves as pro-Effective Altruism propaganda.How much carbon do I need to offset?The average American seems to emit about 15-20T of CO2 per year (source,source,source). I'll assume 20T.But I travel a lot. A round-trip flight from London to New Yorkemits ~1T of CO2. This year I took 5 international flights - most had multiple legs, so I'll assume I emitted 15T more than the average American.So let's say I have to offset 35T of CO2 each year.Where should I donate?I trust Vox's Future Perfect on stuff like this. They recommend donating to a climate change fund such as theClimate Change Fund from Founder's PledgeHow much should I donate?I'll use thetop recommended climate charity from Vox's Future Perfect as a benchmark. As of December 2023, this is theClean Air Task ForceFounder's Pledge estimates that a donation to CATF can avert 1T of CO2 emissions for $0.1-$1So that would put the amount I have to donate to offset all my emissions at $3.50-$35 per yearI'll be on the safe side and assume I should donate $35Conclusion: I just donated $35 to the Climate Fund from Founder's Pledge to offset my yearly carbon footprint. I intend to make this donation annually going forward, and encourage you to as well!Effective Altruism has been under some heat lately - with the collapse of FTX, and the drama around the OpenAI board ousting Sam Altman.EA is both a philosophy and a community. I think the above exercise illustrates why both are really good, despite recent drama.The philosophy of Effective Altruism gave me the intellectual motivation to donate in the first place. And it informs my decision about where to donate: I should not just donate to what feels the best - I should donate where my dollar will have the highest impact in terms of tons of CO2-eq averted.The community of EA has created institutions (in this case Vox's Future Perfect, and Founder's Pledge) that help me quickly[1] identify a good donation opportunity, and direct my funds effectively. Also,a post on the the EA Forum provided extra social motivation to make this donationIs this system perfect? No. Perhaps I could have spent more time finding a better charity to donate to. Perhaps I should be doing more in my lifestyle or in political activism to be addressing the problem of climate change.But I think my actions here are a lot better than they would be if Effective Altruism did not exist[2]. So overall I remain proud of Effective Altruism - both the philosophy and the community.^It only took 1 hour to do the research and decide to donate!^For what it's worth, the philosophy and community of EA were also key motivators in my decision to become vegetarianThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Luke Eure https://forum.effectivealtruism.org/posts/DXPWKQLnZPHyo74FH/i-donated-usd35-to-offset-my-carbon-footprint-for-this-year Tue, 05 Dec 2023 11:54:20 +0000 EA - I donated $35 to offset my carbon footprint for this year by Luke Eure Luke Eure https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:32 no full 20
DBx98atdYFM3yKR9C EA - Early findings from the world's largest UBI study by GiveDirectly Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Early findings from the world's largest UBI study, published by GiveDirectly on December 7, 2023 on The Effective Altruism Forum.Summary of findings 2 years in:A monthly universal basic income (UBI) empowered recipients and did not create idleness. They invested, became more entrepreneurial, and earned more. The common concern of "laziness" never materialized, as recipients did not work less nor drink more.Both a large lump sum and a long-term UBI proved highly effective. The lump sum enabled big investments and the guarantee of 12 years of UBI encouraged savings and risk-taking.A short-term UBI was the least impactful of the designs but still effective. On nearly all important economic measures, a 2-year-only UBI performed less well than giving cash as a large lump sum or guaranteeing a long-term UBI, despite each group having received roughly the same total amount of money at this point. However, it still had a positive impact on most measures.Governments should consider changing how they deliver cash aid. Short-term monthly payments, which this study found to be the least impactful design, are the most common way people in both low- and high-income countries receive cash assistance, and it's how most UBI pilots are currently designed.To learn about the most effective ways of delivering cash aid, GiveDirectly worked with a team of researchers to compare three ways of giving out funds.[1] About 200 Kenyan villages were assigned to one of three groups and started receiving payment in 2018.Now we have results 2 years in. These newly-released findings look at just the first two years (2018-2020), when all three groups had received roughly the same amount of money.Long-term UBI: a 12-year basic income of $22.50/month ($540 total after 2 years) with a commitment for 10 more years still to followShort-term UBI: a 2-year basic income of $22.50/month, ($540 total after 2 years) with no more to followLarge lump-sum: one-off $500 payment given 2 years ago, with no more to follow[2]These amounts are significant for people living below the extreme poverty line, which in Kenya means surviving on less than $33 a month or $400 a year.[3]Researchers compared outcomes of these villages to a control group of similar villages that did not receive cash. The results are summarized below. You can read a table of the results here and the full paper hereA monthly UBI made people in poverty more productive, not lessCritics of universal basic income often fear monthly cash payments disincentivize work; however, this study in rural Kenya, like many studies of cash transfers before it, found evidence to the contrary for all groups.Highlights from the research paper:UBI improved agency and income: "Overall there is no evidence of UBI promoting 'laziness,' but evidence of substantial effects on occupational choice… impacts on total household income are also positive and significant."Cash transfers increased savings: "The effect on both household and enterprise savings are positive and mostly significant… The amount the households have in rotating savings and credit associations (ROSCAs) also goes up significantly…"Cash did not change hours worked, but recipients shifted to self-employment: "Treated households are not working less… there is significant reduction in hours of wage work, all of which comes from work in agriculture, and a slightly larger increase in hours of non-agricultural self-employed work, so there is no net effect on total household labor supply."Cash did not increase drinking: "Respondents [receiving cash] reported seeing fewer of their neighbors drinking daily, and were less likely to perceive drinking as a problem."Giving $500 as a lump sum improved economic outcomes more than giving it out over 24 monthsIf we have limited funds to help a person living i...

]]>
GiveDirectly https://forum.effectivealtruism.org/posts/DBx98atdYFM3yKR9C/early-findings-from-the-world-s-largest-ubi-study Thu, 07 Dec 2023 06:33:11 +0000 EA - Early findings from the world's largest UBI study by GiveDirectly GiveDirectly https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:29 no full 4
WesEQoAX99QFmoZFe EA - EA Germany's 2023 Report, 2024 Plans & Funding Gap by Patrick Gruban Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Germany's 2023 Report, 2024 Plans & Funding Gap, published by Patrick Gruban on December 8, 2023 on The Effective Altruism Forum.In this post, we'll report on our activities in 2023, outline our plans for 2024, and show our room for funding.SummaryEA Germany (EAD) acts as an umbrella organisation for the German EA community, the third largest national community and biggest in continental Europe, according to the2022 EA survey.The non-profit transitioned from volunteer-run to having a five-person team (4 FTEs) that focused in the first full year on talent development, community building support, and general community support. EAD ran successful programs like the EAGxBerlin, an intro program, and community builder retreats while also offering an employer of record service and fiscal sponsorship. In total, more than 1,000 people joined events and programs, while 4,000 received the monthly newsletter. We tried out five new programs in a hits-based approach and will continue with two of these.For 2024, we refined our Theory of Change and plan to target people interested in global catastrophic risks (GCR) and professionals who could make direct career changes in addition to EA-interested people and community builders. We aim to expand into AI safety field building, running targeted programs for those with specialised skills with programs such as professional AI safety outreach or creating an AI safety website. We plan to test additional new programs, including establishing a virtual Germany-focused EA group, proactively recommending job opportunities, running new volunteer programs, a policy program, and starting media outreach to engage different target groups.We currently face afunding gap of 37,000 and seek donations to fill this gap.About EA Germany (EAD)We are a registered non-profit organisation with ateam of five people (4 FTE), and are currently funded by grants from CEA (Community Builder Grants program) and Open Philanthropy Effective Altruism Community Growth (Global Health and Wellbeing) via a regranting fromEffektiv Spenden. A six-member board provides oversight and advice.We have >100 members but act as an umbrella organisation for the whole German EA community, including people in and from Germany interested in the ideas of EA.There are 27 active local groups with 5-50 active members each (the biggestaccording to the last EA Survey being Berlin, Munich and Aachen). In total, >300 people are regularly active in local groups.Based on the2022 EA survey, Germany was the 3rd largest national EA community and the biggest in continental Europe and had as many respondents as the next four countries by size (Netherlands, Switzerland, France, and Norway) combined.Impact Report 2023In 2023, we spent most of our time ontalent development, community building support, general community support, and the setup of EAD. In addition, weexplored some new programs.Core Activities: Finding and Retaining MembersTo develop talents, we guided people through afunnel fromCommunications (600 monthly users on the website, 3,900 subscribers of the monthly newsletter, >450 EAD Slack users),the intro program (100 applications each for two iterations, 60 successful participants in summer, winter program only starting now) and a followingweekend-retreat ("EAD retreat", four retreats, 200 participants in total),the EAGxBerlin (550 participants)to more impactful actions.To guide people indirectly to impactful action, we supported community builders in Germany viatwo retreats (60 participants in total),monthly calls for all organisers (~15 participants each),1-1s (at least 2/year, we talked with >50 organisers/teams of 29 groups), andresources like presentations, templates (used by ~50 % of groups).We also support the community overall withan employer of r...

]]>
Patrick Gruban https://forum.effectivealtruism.org/posts/WesEQoAX99QFmoZFe/ea-germany-s-2023-report-2024-plans-and-funding-gap 100 members but act as an umbrella organisation for the whole German EA community, including people in and from Germany interested in the ideas of EA.There are 27 active local groups with 5-50 active members each (the biggestaccording to the last EA Survey being Berlin, Munich and Aachen). In total, >300 people are regularly active in local groups.Based on the2022 EA survey, Germany was the 3rd largest national EA community and the biggest in continental Europe and had as many respondents as the next four countries by size (Netherlands, Switzerland, France, and Norway) combined.Impact Report 2023In 2023, we spent most of our time ontalent development, community building support, general community support, and the setup of EAD. In addition, weexplored some new programs.Core Activities: Finding and Retaining MembersTo develop talents, we guided people through afunnel fromCommunications (600 monthly users on the website, 3,900 subscribers of the monthly newsletter, >450 EAD Slack users),the intro program (100 applications each for two iterations, 60 successful participants in summer, winter program only starting now) and a followingweekend-retreat ("EAD retreat", four retreats, 200 participants in total),the EAGxBerlin (550 participants)to more impactful actions.To guide people indirectly to impactful action, we supported community builders in Germany viatwo retreats (60 participants in total),monthly calls for all organisers (~15 participants each),1-1s (at least 2/year, we talked with >50 organisers/teams of 29 groups), andresources like presentations, templates (used by ~50 % of groups).We also support the community overall withan employer of r...]]> Fri, 08 Dec 2023 21:37:25 +0000 EA - EA Germany's 2023 Report, 2024 Plans & Funding Gap by Patrick Gruban 100 members but act as an umbrella organisation for the whole German EA community, including people in and from Germany interested in the ideas of EA.There are 27 active local groups with 5-50 active members each (the biggestaccording to the last EA Survey being Berlin, Munich and Aachen). In total, >300 people are regularly active in local groups.Based on the2022 EA survey, Germany was the 3rd largest national EA community and the biggest in continental Europe and had as many respondents as the next four countries by size (Netherlands, Switzerland, France, and Norway) combined.Impact Report 2023In 2023, we spent most of our time ontalent development, community building support, general community support, and the setup of EAD. In addition, weexplored some new programs.Core Activities: Finding and Retaining MembersTo develop talents, we guided people through afunnel fromCommunications (600 monthly users on the website, 3,900 subscribers of the monthly newsletter, >450 EAD Slack users),the intro program (100 applications each for two iterations, 60 successful participants in summer, winter program only starting now) and a followingweekend-retreat ("EAD retreat", four retreats, 200 participants in total),the EAGxBerlin (550 participants)to more impactful actions.To guide people indirectly to impactful action, we supported community builders in Germany viatwo retreats (60 participants in total),monthly calls for all organisers (~15 participants each),1-1s (at least 2/year, we talked with >50 organisers/teams of 29 groups), andresources like presentations, templates (used by ~50 % of groups).We also support the community overall withan employer of r...]]> 100 members but act as an umbrella organisation for the whole German EA community, including people in and from Germany interested in the ideas of EA.There are 27 active local groups with 5-50 active members each (the biggestaccording to the last EA Survey being Berlin, Munich and Aachen). In total, >300 people are regularly active in local groups.Based on the2022 EA survey, Germany was the 3rd largest national EA community and the biggest in continental Europe and had as many respondents as the next four countries by size (Netherlands, Switzerland, France, and Norway) combined.Impact Report 2023In 2023, we spent most of our time ontalent development, community building support, general community support, and the setup of EAD. In addition, weexplored some new programs.Core Activities: Finding and Retaining MembersTo develop talents, we guided people through afunnel fromCommunications (600 monthly users on the website, 3,900 subscribers of the monthly newsletter, >450 EAD Slack users),the intro program (100 applications each for two iterations, 60 successful participants in summer, winter program only starting now) and a followingweekend-retreat ("EAD retreat", four retreats, 200 participants in total),the EAGxBerlin (550 participants)to more impactful actions.To guide people indirectly to impactful action, we supported community builders in Germany viatwo retreats (60 participants in total),monthly calls for all organisers (~15 participants each),1-1s (at least 2/year, we talked with >50 organisers/teams of 29 groups), andresources like presentations, templates (used by ~50 % of groups).We also support the community overall withan employer of r...]]> Patrick Gruban https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 23:18 no full 1
ytBxJpQsdEEmPAv9F EA - I'm interviewing Carl Shulman - what should I ask him? by Robert Wiblin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm interviewing Carl Shulman - what should I ask him?, published by Robert Wiblin on December 8, 2023 on The Effective Altruism Forum.Next week for the 80,000 Hours Podcast I'll be interviewing Carl Shulman, advisor to Open Philanthropy, and generally super informed person about history, technology, possible futures, and a shocking number of other topics.He has previously appeared on our show and the Dwarkesh Podcast:[Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment])(https://www.dwarkeshpatel.com/p/carl-shulman)Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far FutureCarl Shulman on the common-sense case for existential risk work and its practical implicationsHe has also written a number of pieces on this forum.What should I ask him?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Robert_Wiblin https://forum.effectivealtruism.org/posts/ytBxJpQsdEEmPAv9F/i-m-interviewing-carl-shulman-what-should-i-ask-him Fri, 08 Dec 2023 19:27:07 +0000 EA - I'm interviewing Carl Shulman - what should I ask him? by Robert Wiblin Robert_Wiblin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:08 no full 4
8MKRjvNLDSkxJcxzK EA - GWWC Operational Funding Match 2023 by Luke Freeman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Operational Funding Match 2023, published by Luke Freeman on December 8, 2023 on The Effective Altruism Forum.We are excited to announce a match for donations made towards our operations at Giving What We Can!Starting December 1st, every dollar donated towards GWWC's operations will be matched 1:1 up to US$200,000 until the match has been exhausted, or until January 31st 2024, whichever comes first*.DonateWe believe that GWWC is a great funding opportunity for those who believe in effective giving. Our most recent Impact Evaluation suggests that from 2020 to 2022:GWWC generated an additional $62 million in value for highly-effective charities.GWWC had a giving multiplier of 30x, meaning that for each $1 spent on our operations, we generated $30 of value to highly-effective charities on average. Please note that this isn't a claim that your additional dollar will have a 30x multiplier, even though we think it will still add a lot of value. Read more on how to interpret our results.Each new GWWC Pledge generates >$20,000 of value for highly-effective charities that would not have happened without GWWC.Reaching our US$200K goal will fully unlock the matching funds, and with US$400K we will be close to filling our baseline funding for 2024, allowing us to revamp the How Rich Am I? Calculator, continue evaluating evaluators, launch in new markets, improve the donation platform including likely reworking the checkout flow and much more.We strongly recommend you read our case for funding to learn more about our plans, our impact and what your donation could help us achieve.This is a true, counterfactual match, and we will only receive the equivalent amount to what we can raise.Thank you to Meta Charity Funders for generously providing funding for this match.Donate*The following terms and conditions apply:Match will apply in a 1:1 ratio to donated funds. In other words, for every $1 you donate to GWWC's operations, the matching donors will give $1.The match will be applied to eligible donations from December 1st and will apply retroactivelyThe match will end once US$200,000 has been reached, or we reach January 31st 2024, whichever comes first. Once the matched funds have been exhausted, we will update this page.The match will be applied to both one-off and recurring donations that occur during the match periodDonors who have funded more than US$250,000 of GWWC's operations since Jan 1 2022 are not eligible for this match - if you'd like to clarify whether you are ineligible, please contact us at community@givingwhatwecan.orgMatch will apply to the first US$50,000 per donorDonations can be made through givingwhatwecan.org or through other pathways or entities that can receive donations for GWWC's operations (please contact us for other options, or if you're an Australia tax resident)Gift Aid payments will not be included in the matchThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Luke Freeman https://forum.effectivealtruism.org/posts/8MKRjvNLDSkxJcxzK/gwwc-operational-funding-match-2023 $20,000 of value for highly-effective charities that would not have happened without GWWC.Reaching our US$200K goal will fully unlock the matching funds, and with US$400K we will be close to filling our baseline funding for 2024, allowing us to revamp the How Rich Am I? Calculator, continue evaluating evaluators, launch in new markets, improve the donation platform including likely reworking the checkout flow and much more.We strongly recommend you read our case for funding to learn more about our plans, our impact and what your donation could help us achieve.This is a true, counterfactual match, and we will only receive the equivalent amount to what we can raise.Thank you to Meta Charity Funders for generously providing funding for this match.Donate*The following terms and conditions apply:Match will apply in a 1:1 ratio to donated funds. In other words, for every $1 you donate to GWWC's operations, the matching donors will give $1.The match will be applied to eligible donations from December 1st and will apply retroactivelyThe match will end once US$200,000 has been reached, or we reach January 31st 2024, whichever comes first. Once the matched funds have been exhausted, we will update this page.The match will be applied to both one-off and recurring donations that occur during the match periodDonors who have funded more than US$250,000 of GWWC's operations since Jan 1 2022 are not eligible for this match - if you'd like to clarify whether you are ineligible, please contact us at community@givingwhatwecan.orgMatch will apply to the first US$50,000 per donorDonations can be made through givingwhatwecan.org or through other pathways or entities that can receive donations for GWWC's operations (please contact us for other options, or if you're an Australia tax resident)Gift Aid payments will not be included in the matchThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Fri, 08 Dec 2023 08:17:59 +0000 EA - GWWC Operational Funding Match 2023 by Luke Freeman $20,000 of value for highly-effective charities that would not have happened without GWWC.Reaching our US$200K goal will fully unlock the matching funds, and with US$400K we will be close to filling our baseline funding for 2024, allowing us to revamp the How Rich Am I? Calculator, continue evaluating evaluators, launch in new markets, improve the donation platform including likely reworking the checkout flow and much more.We strongly recommend you read our case for funding to learn more about our plans, our impact and what your donation could help us achieve.This is a true, counterfactual match, and we will only receive the equivalent amount to what we can raise.Thank you to Meta Charity Funders for generously providing funding for this match.Donate*The following terms and conditions apply:Match will apply in a 1:1 ratio to donated funds. In other words, for every $1 you donate to GWWC's operations, the matching donors will give $1.The match will be applied to eligible donations from December 1st and will apply retroactivelyThe match will end once US$200,000 has been reached, or we reach January 31st 2024, whichever comes first. Once the matched funds have been exhausted, we will update this page.The match will be applied to both one-off and recurring donations that occur during the match periodDonors who have funded more than US$250,000 of GWWC's operations since Jan 1 2022 are not eligible for this match - if you'd like to clarify whether you are ineligible, please contact us at community@givingwhatwecan.orgMatch will apply to the first US$50,000 per donorDonations can be made through givingwhatwecan.org or through other pathways or entities that can receive donations for GWWC's operations (please contact us for other options, or if you're an Australia tax resident)Gift Aid payments will not be included in the matchThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> $20,000 of value for highly-effective charities that would not have happened without GWWC.Reaching our US$200K goal will fully unlock the matching funds, and with US$400K we will be close to filling our baseline funding for 2024, allowing us to revamp the How Rich Am I? Calculator, continue evaluating evaluators, launch in new markets, improve the donation platform including likely reworking the checkout flow and much more.We strongly recommend you read our case for funding to learn more about our plans, our impact and what your donation could help us achieve.This is a true, counterfactual match, and we will only receive the equivalent amount to what we can raise.Thank you to Meta Charity Funders for generously providing funding for this match.Donate*The following terms and conditions apply:Match will apply in a 1:1 ratio to donated funds. In other words, for every $1 you donate to GWWC's operations, the matching donors will give $1.The match will be applied to eligible donations from December 1st and will apply retroactivelyThe match will end once US$200,000 has been reached, or we reach January 31st 2024, whichever comes first. Once the matched funds have been exhausted, we will update this page.The match will be applied to both one-off and recurring donations that occur during the match periodDonors who have funded more than US$250,000 of GWWC's operations since Jan 1 2022 are not eligible for this match - if you'd like to clarify whether you are ineligible, please contact us at community@givingwhatwecan.orgMatch will apply to the first US$50,000 per donorDonations can be made through givingwhatwecan.org or through other pathways or entities that can receive donations for GWWC's operations (please contact us for other options, or if you're an Australia tax resident)Gift Aid payments will not be included in the matchThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Luke Freeman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:11 no full 7
QyjMKx3Agimjbvwkj EA - What were the death tolls from pandemics in history? by salonium Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What were the death tolls from pandemics in history?, published by salonium on December 9, 2023 on The Effective Altruism Forum.We just published a new central hub dedicated to covering pandemics on Our World in Data. We will gather there all of our past and future writing, charts, and data explorers on the subject. You can find ithere.Along with it, we've published a new article:What were the death tolls from pandemics in history?COVID-19 has brought the reality of pandemics to the forefront of public consciousness. But pandemics have afflicted humanity for millennia. Time and again, people faced outbreaks of diseases - including influenza, cholera, bubonic plague, smallpox, and measles - that spread far and caused death and devastation.Our ancestors were largely powerless against these diseases and unable to evaluate their true toll on the population. Without good record-keeping of the number of cases and deaths, the impact of outbreaks was underrecognized or even forgotten. The result is that we tend to underestimate the frequency and severity of pandemics in history.To deal with the lack of historical records on total death tolls, modern historians, epidemiologists, and demographic researchers have used various sources and methods to estimate their death tolls - such as using data from death records, tax registers, land use, archaeological records, epidemiological modeling, and more.In this article, we have compiled published estimates of the death tolls from pandemics in history. We have visualized them in a timeline below.These estimates were made by researchers using various methods, and they come with uncertainties, which are explained in the article.The size of each circle represents one pandemic's estimated death toll. Pandemics without a known death toll are depicted with triangles.Read the whole articlehere.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
salonium https://forum.effectivealtruism.org/posts/QyjMKx3Agimjbvwkj/what-were-the-death-tolls-from-pandemics-in-history Sat, 09 Dec 2023 22:45:13 +0000 EA - What were the death tolls from pandemics in history? by salonium salonium https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:58 no full 1
gF4nLBpjgFe6XTMrM EA - PEPFAR, one of the most life-saving global health programs, is at risk by salonium Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: PEPFAR, one of the most life-saving global health programs, is at risk, published by salonium on December 10, 2023 on The Effective Altruism Forum.Summary:International funding and coordination to tackle HIV/AIDS and support health systems in lower- and middle-income countries, is at risk of not being renewed by US Congress, due to demands that it should be linked to new abortion-related restrictions in recipient countries.This program is estimated to have saved over 20 million lives since it was launched by the Bush Administration in 2003, and even now averts over a million HIV/AIDS deaths annually.Since it has also helped support health systems in LMICs, and tackle malaria and tuberculosis, its impact is likely greater than this.In my view this is the most important risk to global health we face today, and I think it isn't getting enough attention.If anyone is interested in research, writing or advocacy on this issue, please do so. If you are interested in jointly working on this, or if you already know of ongoing efforts, please comment below or get in touch. My email: saloni@ourworldindata.orgRelevant background reading:The U.S. President's Emergency Plan for AIDS Relief (PEPFAR), the largest commitment in history by any single country to address a disease, is estimated to have averted 25 million deaths from AIDS and enabled 5.5 million babies to be born free from HIV infection over the past 20 years.1It has provided more than $100 billion in funding for HIV prevention, care, and treatment internationally, supporting 55 low- and middle-income countries that are collectively home to 78% of all people living with HIV.Together with the Global Fund to Fight AIDS, Tuberculosis, and Malaria, PEPFAR has transformed AIDS in low-income countries, especially those in Africa, from a death sentence to a readily treatable chronic disease by deploying programs that provide antiretroviral treatment even in the most remote villages.Right from the start, PEPFAR was more than just an AIDS program; it partnered with countries in Africa to support the development of health systems for essential community services, trained thousands of health care workers, fostered security and stability in affected countries, and engendered hope amid a devastating global AIDS crisis.Karim et al. (2023)Why is it at risk?Republican colleagues [...] accuse the Biden administration of using PEPFAR to fund abortion providers overseas and House Democrats who refuse to reinstate Trump administration rules that prohibited foreign aid going to groups that provide or counsel on abortions. Discussions about a compromise that would extend the program for more than one year but less than five, with language stressing the existing ban on federal money directly paying for abortions, have collapsed.Now, the best hope for re-upping the $7 billion annual program is a government spending process beset by delays and divisions and slated to drag into January and February with no guarantee of success. PEPFAR can hobble along without reauthorization unless there's a prolonged government shutdown. But its backers say that without a long-term U.S. commitment, groups fighting HIV and AIDS around the world will struggle to hire staff and launch long-term projects.Complicating any hope for compromise is the 2024 election.Congress passed two short-term funding patches that expire in January and February. That eliminated the possibility of the typical end-of-year omnibus bill that many on both sides of the PEPFAR fight saw as the best vehicle for its reauthorization and kicked the fight into an election year when compromise - particularly on a contentious issue like abortion - will be more challenging.Politico [7 Dec. 2023]The lawmakers stalling the reauthorization are seeking to impose on PEPFAR a prohibition...

]]>
salonium https://forum.effectivealtruism.org/posts/gF4nLBpjgFe6XTMrM/pepfar-one-of-the-most-life-saving-global-health-programs-is Sun, 10 Dec 2023 21:06:41 +0000 EA - PEPFAR, one of the most life-saving global health programs, is at risk by salonium salonium https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:00 no full 2
bqtdsyySQEkpLd6uL EA - I spent a lot of time and money on one dog. Do I need help? by tobiasleenaert Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I spent a lot of time and money on one dog. Do I need help?, published by tobiasleenaert on December 10, 2023 on The Effective Altruism Forum.This is about a personal experience - rescuing a dog on a trip in Mexico - that helped me realize how I wrestle with being effective.My girlfriend and me were recently in Mexico. After speaking at a conference, we took two weeks off in Yucatan. We had both been aware that we'd meet a lot of stray animals. We knew that this situation could potentially spoil our vacation - which was meant as a break from our daily involvement in animal activism (and thus animal suffering). So as well as we could, we avoided getting too close to any animal we saw in the streets.That worked, until it didn't. Near the end of our trip, I ran into a quite unhealthy looking dog who was riddled with ticks. We spent half an hour taking the ticks out and by the time we were done with him, we knew we wouldn't let him lie there. I called a few shelters, asking if they could take the dog in, but they were all full. Contacting local activists, we miraculously found a place nearby where he'd be able to stay indefinitely.We brought him there right away, leaving him in a pen (there were other dogs that needed to get used to him). When we went away, it was with a bad feeling. Later that night, we decided we would pick him up again the next morning (our hotel didn't allow dogs so we had to wait) and would find another place for him.When we went back the next day, we were told that the dog had escaped. We saw where he had bit through the fence, probably in desperation and fear of the barking of the other dogs. We felt devastated, thinking that instead of helping him, we had put his life at risk. There was a very busy road right next to the property, and the place where we had found him - his home turf - was six kilometers away.By now we had bonded with this dog - which we called Tlalok, after the Aztec rain god - and mourned for the rest of the day, as if one of our own dogs had just died. We actually had difficulty understanding why this whole situation affected us so much. Maybe it was because Tlalok was such an incredibly friendly, trusting - and at the same time needy - being.The next day, one day before we'd fly home, we decided we wouldn't give up on him. I made a Spanish "LOST" flyer that we copied in our hotel and then distributed in the village where we'd found him. The flyer promised a reward of 5000 pesos (250 dollars). We spoke to many people online and offline, contacted vets, shelters and activists…At one point that day, a vet we were in touch with sent us the picture of a dog that had walked into a hotel where coincidentally a friend of hers was staying. It was Tlalok! We rushed to the village for the second time that day and half an hour later, we felt the relief as we hugged him. We brought him to our hotel and the next night, to a shelter, which was full but agreed to take him in and make sure he got all the necessary care, but only if we agreed to adopt him.We said yes, not having any alternative solution. We'd figure out later what exactly we would do with Tlalok, but the plan was set in motion to have him fly over to Belgium, where we live.Four weeks later, we picked Tlalok up at Schiphol Airport near Amsterdam - where he arrived a day later than planned so that we had to stay an extra night there. As I'm writing this, he's in our kitchen with our four other dogs. We're looking to have him adopted, knowing that with each day that he's here, it will be harder to part ways.In the meantime, we also remained in contact with the vet in Mexico and paid her a couple of hundred dollars to spay/neuter Tlalok's siblings and mother.***After reading some of this story, which I had posted - with some pictures - on Facebook, an EA friend told me...

]]>
tobiasleenaert https://forum.effectivealtruism.org/posts/bqtdsyySQEkpLd6uL/i-spent-a-lot-of-time-and-money-on-one-dog-do-i-need-help Sun, 10 Dec 2023 16:51:30 +0000 EA - I spent a lot of time and money on one dog. Do I need help? by tobiasleenaert tobiasleenaert https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:20 no full 3
YC3Mvw2xNtpKxR5sK EA - PhD on Moral Progress - Bibliography Review by Rafael Ruiz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: PhD on Moral Progress - Bibliography Review, published by Rafael Ruiz on December 10, 2023 on The Effective Altruism Forum.Epistemic Status: I've researched this broad topic for a couple of years. I've read about 30+ books and 100+ articles on the topic so far (I'm not really keeping count). I've also read many other works in the related areas of normative moral philosophy, moral psychology, moral epistemology, moral methodology, and metaethics, since it's basically my area of specialization within philosophy. This project will be my PhD thesis. However, I still have 3 years of the PhD to go, so a substantial amount of my opinions on the matter are subject to changes.Disclaimer: I have received some funding as a Forethought Foundation Fellow in support of my PhD research. But all the opinions expressed here are my own.Index.Part I - Bibliography ReviewPart II - Preliminary Takes and Opinions (I'm writing it, coming very soon!)More parts to be published later on.Introduction.Hi everyone, this is my first proper research-related post on the EA Forum, on a topic that I've been working on for several years, since even before my PhD, and now as part of my PhD in Philosophy at the London School of Economics.This post is the start of a series on my work on the topic of Moral Progress, which includes and intersects with Moral Circle Expansion (also called Inclusivism or Moral Inclusion), Moral Progress, Social Progress, Social Movements, the mechanisms that drive progress and regress, the possibilities of measuring these phenomena, and policy or practical implications.This first post is a bibliography review, which I hope will serve to orient future researchers that might want to tackle the same or similar topics. Hopefully it will help them to save time by separating the wheat from the chaff, the good research articles and books from the minor contributions. Initially, I had my reservations about doing a Bibliography Review, since now we have GPT4 which is quite good at purely neutral descriptive summarizing, so I felt maybe perhaps this work wasn't needed.However, given that now we have it as a good research assistant for pure facts, that also allows me more freedom to be more opinionated in my bibliography review. I'll try to tell you what I think is super worth reading, and what is "meh, skim it if you have free time", so you can sift through the literature in a more time-efficient way.The eventual goal outcome of the whole project would be to distil the main insights into book on the topic of Moral Progress with serious contributions to the current literature within interdisciplinary moral philosophy, but that probably won't happen until I finish my PhD thesis manuscript around 2026. Then after that, I'll have to rewrite that manuscript to turn it into a more accessible book, so it probably wouldn't be published until a later date. I'm also not sure just yet whether it would be an academic book on a University Press or something closer to What We Owe The Future, which aims to be accessible for a broader audience.So the finished work is quite a long way. On the brighter side, I will publish some of the key findings and takeaways on the EA Forum, probably in summarized form rather than the excruciatingly slow pace of writing in philosophy, which often takes 20 pages to make some minor points. Instead of that, I guess I'll post something closer to digestible bullet points with my views, attempting to foster online discussion, and then defend them in more detail over time and in the eventual book.Your feedback will of course be appreciated, particularly if I change my mind on substantial issues, connect me with other researchers, etc. So let's put our collective brains together (this is a pun having to do with cultural evolution that you might not understand y...

]]>
Rafael Ruiz https://forum.effectivealtruism.org/posts/YC3Mvw2xNtpKxR5sK/phd-on-moral-progress-bibliography-review Sun, 10 Dec 2023 14:02:08 +0000 EA - PhD on Moral Progress - Bibliography Review by Rafael Ruiz Rafael Ruiz https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:10:38 no full 5
dLsay2t8Pf88Cmi4E EA - Vote in the Donation Election by 15 December by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vote in the Donation Election by 15 December, published by Lizka on December 9, 2023 on The Effective Altruism Forum.TL;DR:Vote here by 15 December. Voting should take 2-10 minutes and you'll be able to edit your vote until the deadline. (The deadline fordonating to the Donation Election Fund is 20 December.)You can vote if you had a Forum account by October 22, 2023. If you didn't, you can stillshare where you're donating ormake the case for voting for some candidates over others.Vote in the Donation Election 2023Read about the candidatesMore context:TheDonation Election Fund (currently around $34,000) will be designated for the top three candidates in theDonation Election (in proportion to the vote). You canread about the candidates here.How long does voting take, and how does it work?Voting should take around 2-10 minutes.[1]You'll be able to edit your vote anytime before December 15.Thevoting system is outlined here; your vote should basically just represent how you'd allocate funding between the candidates, and the voting portal will walk you through the process for that.Should I actually vote in the Donation Election? (I haven't read all the posts, I'm not that informed, I don't think it matters that much…)I think yes, you should vote (if your account was made before October 22 this year). Some reasons for my belief that you should vote (2-5 are most compelling to me):You'll influence how funds are distributed - probably in a positive way even if you don't think you have that much expertise or context.There are currently ~215 votes. The Donation Election Fund has ~$34,000. So (very very approximately) you'd be affecting ~$150 in donations in expectation.Additionally, I think you should have a prior that more votes will lead to a better outcome. Aggregation mechanisms generally function better when there are more inputs, so the combined result should improve if you add to it even if you're not super informed. See e.g.this analysis of Metaculus community predictions, which suggests that the Metaculus community prediction improves approximately logarithmically with the number of forecasters. (See alsothis post.)[2]You'll add useful information to the voting data.I think the data we'll get from the Donation Election could be extremely useful; we'll have a sense for people's priorities, and we might identify blind spots or important points of disagreement. But it'll be a lot more useful if more people vote. (Moreover, if you're not sure about voting, you might be part of a group that's less likely to vote, which will be underrepresented in the information we'll get.)Voting will prompt you to think a bit more about your donation choices.It might be fun for you.Voting is a public/collective good of sorts, and you might value being the kind of person who contributes to public goods.The downside is limited.You might waste some time (but you can time-cap yourself). If you're worried about mis-directing funds, you can view this as a fairly low-cost exercise. You'll affect more than a trivial amount of money (unless we get tons of votes), but it won't be that big; I think the second-order effects (2-5) will outweigh the effect of directing funds (1).Some other common questionsWhy isn't charity X a candidate?We restricted candidates to the charities onthis list, largely for logistical reasons and vetting capacity, which stopped some charities from being candidates (we might change this next year if we run an election again). If the charity is on that list, it's not a candidate because nobodynominated it on time. Consider sharing a post about why people should donate to the charity, though!Why can't people whose accounts are newer than October 22, 2023 vote?We added this restriction to prevent vote manipulation (we'll also be checking i...

]]>
Lizka https://forum.effectivealtruism.org/posts/dLsay2t8Pf88Cmi4E/vote-in-the-donation-election-by-15-december Sat, 09 Dec 2023 23:53:32 +0000 EA - Vote in the Donation Election by 15 December by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:33 no full 9
PHEdXmY6waTFTpCnF EA - What are the biggest conceivable wins for animal welfare by 2025? by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the biggest conceivable wins for animal welfare by 2025?, published by Nathan Young on December 11, 2023 on The Effective Altruism Forum.In a years time, let's imagine we live in the 95th percentile world for animal welfare. What wins are there?I am trying to write an article about what the end of 2024 will look like, but I don't know enough about animal welfare.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Nathan Young https://forum.effectivealtruism.org/posts/PHEdXmY6waTFTpCnF/what-are-the-biggest-conceivable-wins-for-animal-welfare-by Mon, 11 Dec 2023 12:53:49 +0000 EA - What are the biggest conceivable wins for animal welfare by 2025? by Nathan Young Nathan Young https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:36 no full 3
9nb7nXJQ4bfYvQhCd EA - You can have an hour of my time, to work on your biggest problem, for free. by John Salter Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You can have an hour of my time, to work on your biggest problem, for free., published by John Salter on December 11, 2023 on The Effective Altruism Forum.Who the fuck are you?I run EA's biggest volunteer organisation. We train psychology graduate volunteers to treat mental illnesses, especially in LMICs. To lead by example, I don't take a salary despite working >50Hs per week. To pay the bills, I coach rich people's children to be happier and more productive. While it funds my living expenses, it's not very impactful. I'm hoping to start serving EAs to fix that.EA stuff I've doneAuthored or co-authored ~$350 000 of successful grant applications for EA charitiesGrew my org from 1 person to ~60 FTEs in the first 3 months post-launchNow treating one case of depression / anxiety / phobia for ~$50 on the margin (although, just ~ 1000 clients a year right now; planning to treat 13 000 for ~$20 on the margin by 2025)Trained coaches who've helped ~100 EAs overcome social anxiety, depression, procrastination and other barriers to being happily productive.I played on hard difficultyNo relevant connectionsCause area for which EAs give few shitsBottom 10% of familial income between age 13 and 21Shoestring budget to start charityNot extraordinarily smart or hardworkingLost three of the prior five years, before starting the charity, to depressionI raise this because it's likely that disproportionate amount of my success is due to my decision-making, as opposed to my circumstances or character, and is thus replicable.People I think could be a good fitEarly career EAs, especially entrepreneurs and people with leadership ambitionsUniversity students struggling to get the most out of their timePeople who know they are being held back by psychological issues (e.g. fear / risk aversion / procrastination / anxiety / depression / lack of discipline / bad habits)Anyone interested in entering mental health as a cause areaHow the hour would workTell me what you'd like to make progress on and we work on it directly via Zoom. Based on the value provided, decide if you want to continue as a paying client. If so, pay by the session (no contracts etc). If not, no hard feels.~80% of people who chat with me for an hour decide to hire me on a session by session basis thereafter, sticking around for ~9 months on average.How much would you charge afterwards?Full-time EA coaches charge ~$300 per hourI'm going to start out at $80 per hour. I'd only raise it for new clients thereafter.Relevant LinksWebsite for my charity: https://www.overcome.org.uk/LinkedIn: https://www.linkedin.com/in/john-salter-b685181ba/To book the free hourhttps://calendar.app.google/N1iBRnPHEBis8NXy5If no time works, but you're really keen to give it a go, dm me and I'll see what I can do.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
John Salter https://forum.effectivealtruism.org/posts/9nb7nXJQ4bfYvQhCd/you-can-have-an-hour-of-my-time-to-work-on-your-biggest 50Hs per week. To pay the bills, I coach rich people's children to be happier and more productive. While it funds my living expenses, it's not very impactful. I'm hoping to start serving EAs to fix that.EA stuff I've doneAuthored or co-authored ~$350 000 of successful grant applications for EA charitiesGrew my org from 1 person to ~60 FTEs in the first 3 months post-launchNow treating one case of depression / anxiety / phobia for ~$50 on the margin (although, just ~ 1000 clients a year right now; planning to treat 13 000 for ~$20 on the margin by 2025)Trained coaches who've helped ~100 EAs overcome social anxiety, depression, procrastination and other barriers to being happily productive.I played on hard difficultyNo relevant connectionsCause area for which EAs give few shitsBottom 10% of familial income between age 13 and 21Shoestring budget to start charityNot extraordinarily smart or hardworkingLost three of the prior five years, before starting the charity, to depressionI raise this because it's likely that disproportionate amount of my success is due to my decision-making, as opposed to my circumstances or character, and is thus replicable.People I think could be a good fitEarly career EAs, especially entrepreneurs and people with leadership ambitionsUniversity students struggling to get the most out of their timePeople who know they are being held back by psychological issues (e.g. fear / risk aversion / procrastination / anxiety / depression / lack of discipline / bad habits)Anyone interested in entering mental health as a cause areaHow the hour would workTell me what you'd like to make progress on and we work on it directly via Zoom. Based on the value provided, decide if you want to continue as a paying client. If so, pay by the session (no contracts etc). If not, no hard feels.~80% of people who chat with me for an hour decide to hire me on a session by session basis thereafter, sticking around for ~9 months on average.How much would you charge afterwards?Full-time EA coaches charge ~$300 per hourI'm going to start out at $80 per hour. I'd only raise it for new clients thereafter.Relevant LinksWebsite for my charity: https://www.overcome.org.uk/LinkedIn: https://www.linkedin.com/in/john-salter-b685181ba/To book the free hourhttps://calendar.app.google/N1iBRnPHEBis8NXy5If no time works, but you're really keen to give it a go, dm me and I'll see what I can do.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Mon, 11 Dec 2023 11:48:19 +0000 EA - You can have an hour of my time, to work on your biggest problem, for free. by John Salter 50Hs per week. To pay the bills, I coach rich people's children to be happier and more productive. While it funds my living expenses, it's not very impactful. I'm hoping to start serving EAs to fix that.EA stuff I've doneAuthored or co-authored ~$350 000 of successful grant applications for EA charitiesGrew my org from 1 person to ~60 FTEs in the first 3 months post-launchNow treating one case of depression / anxiety / phobia for ~$50 on the margin (although, just ~ 1000 clients a year right now; planning to treat 13 000 for ~$20 on the margin by 2025)Trained coaches who've helped ~100 EAs overcome social anxiety, depression, procrastination and other barriers to being happily productive.I played on hard difficultyNo relevant connectionsCause area for which EAs give few shitsBottom 10% of familial income between age 13 and 21Shoestring budget to start charityNot extraordinarily smart or hardworkingLost three of the prior five years, before starting the charity, to depressionI raise this because it's likely that disproportionate amount of my success is due to my decision-making, as opposed to my circumstances or character, and is thus replicable.People I think could be a good fitEarly career EAs, especially entrepreneurs and people with leadership ambitionsUniversity students struggling to get the most out of their timePeople who know they are being held back by psychological issues (e.g. fear / risk aversion / procrastination / anxiety / depression / lack of discipline / bad habits)Anyone interested in entering mental health as a cause areaHow the hour would workTell me what you'd like to make progress on and we work on it directly via Zoom. Based on the value provided, decide if you want to continue as a paying client. If so, pay by the session (no contracts etc). If not, no hard feels.~80% of people who chat with me for an hour decide to hire me on a session by session basis thereafter, sticking around for ~9 months on average.How much would you charge afterwards?Full-time EA coaches charge ~$300 per hourI'm going to start out at $80 per hour. I'd only raise it for new clients thereafter.Relevant LinksWebsite for my charity: https://www.overcome.org.uk/LinkedIn: https://www.linkedin.com/in/john-salter-b685181ba/To book the free hourhttps://calendar.app.google/N1iBRnPHEBis8NXy5If no time works, but you're really keen to give it a go, dm me and I'll see what I can do.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> 50Hs per week. To pay the bills, I coach rich people's children to be happier and more productive. While it funds my living expenses, it's not very impactful. I'm hoping to start serving EAs to fix that.EA stuff I've doneAuthored or co-authored ~$350 000 of successful grant applications for EA charitiesGrew my org from 1 person to ~60 FTEs in the first 3 months post-launchNow treating one case of depression / anxiety / phobia for ~$50 on the margin (although, just ~ 1000 clients a year right now; planning to treat 13 000 for ~$20 on the margin by 2025)Trained coaches who've helped ~100 EAs overcome social anxiety, depression, procrastination and other barriers to being happily productive.I played on hard difficultyNo relevant connectionsCause area for which EAs give few shitsBottom 10% of familial income between age 13 and 21Shoestring budget to start charityNot extraordinarily smart or hardworkingLost three of the prior five years, before starting the charity, to depressionI raise this because it's likely that disproportionate amount of my success is due to my decision-making, as opposed to my circumstances or character, and is thus replicable.People I think could be a good fitEarly career EAs, especially entrepreneurs and people with leadership ambitionsUniversity students struggling to get the most out of their timePeople who know they are being held back by psychological issues (e.g. fear / risk aversion / procrastination / anxiety / depression / lack of discipline / bad habits)Anyone interested in entering mental health as a cause areaHow the hour would workTell me what you'd like to make progress on and we work on it directly via Zoom. Based on the value provided, decide if you want to continue as a paying client. If so, pay by the session (no contracts etc). If not, no hard feels.~80% of people who chat with me for an hour decide to hire me on a session by session basis thereafter, sticking around for ~9 months on average.How much would you charge afterwards?Full-time EA coaches charge ~$300 per hourI'm going to start out at $80 per hour. I'd only raise it for new clients thereafter.Relevant LinksWebsite for my charity: https://www.overcome.org.uk/LinkedIn: https://www.linkedin.com/in/john-salter-b685181ba/To book the free hourhttps://calendar.app.google/N1iBRnPHEBis8NXy5If no time works, but you're really keen to give it a go, dm me and I'll see what I can do.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> John Salter https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:18 no full 4
hcnehXiEFoDnNjuAg EA - Underpromise, overdeliver by eirine Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Underpromise, overdeliver, published by eirine on December 12, 2023 on The Effective Altruism Forum.This is from my blog Said Twice, where I write advice that I've said twice. I was unsure whether to linkpost here but decided to do so given that it's largely based on my experiences from running EA Norway from 2018-2021!As much as you can, try to underpromise when making commitments and then do your best to pleasantly surprise.Here are the two main takeaways I want you to get from this post:You have a certain amount of credits with your stakeholders, that can be spent or earned depending on whether you break or meet expectations.As a general rule, when it comes to managing expectations with stakeholders it's better to underpromise and overdeliver.When thinking about stakeholder management, I've found it useful to imagine my relationships with my stakeholders as consisting of some number of 'credits' that can be earned and spent. You earn credits by delivering on time, being helpful, and signalling certain virtues (like seeming professional, transparent, and kind). You spend credits when you break a promise, don't deliver on time, seem uncharitable, show up late, and so on.In this context, by stakeholder I mean someone (individual, group of people, organisation, or community) that is affected by or can affect your organisation. The type of stakeholders I'm most used to are users of a product, community members, funders or donors, collaborators, and contractors or companies that provide a service.The value of these credits isn't always obvious. Some are pretty easy to see, like whether a funder approves your application, whether an organisation chooses to partner with you, and whether someone chooses to work with you. However, sometimes the value of having a good relationship (or a positive balance) with a stakeholder is less clear and might only become apparent later on.Whose credits matter the mostSome stakeholders are more important than others, and therefore more important to make sure you keep a positive credit balance with. To know which is which, there are multiple tools you can use to map out your stakeholders.A common tool is Mendelow's matrix, also called the power-interest matrix. In this matrix, your stakeholders can be mapped across two axes: How much power they have over your organisation, and how much interest they have in your work.The idea is roughly that the more interest the stakeholders have in your work, the more time you should spend on keeping them informed. The more power they have over your organisation, the more you should ensure they're satisfied with your work. The stakeholders that are both highly interested and have a lot of power are the most important ones, and whose credits matter the most.It's important to be aware of who your stakeholders are and how important they actually are to your organisation. If you don't, you can end up spending too much time closely managing or getting input from stakeholders who you should actually just monitor and keep track of.The idea is roughly that the more interest the stakeholders have in your work, the more time you should spend on keeping them informed. The more power they have over your organisation, the more you should ensure they're satisfied with your work. The stakeholders that are both highly interested and have a lot of power are the most important ones, and whose credits matter the most.It's important to be aware of who your stakeholders are and how important they actually are to your organisation. If you don't, you can end up spending too much time closely managing or getting input from stakeholders who you should actually just monitor and keep track of.How to earn creditsWhat actions earn you credits with your stakeholder, and what actions reduce your 'credit score' will di...

]]>
eirine https://forum.effectivealtruism.org/posts/hcnehXiEFoDnNjuAg/underpromise-overdeliver Tue, 12 Dec 2023 15:45:50 +0000 EA - Underpromise, overdeliver by eirine eirine https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:47 no full 4
naMzQrsXzATX89eiX EA - AMA: Founder and CEO of the Against Malaria Foundation, Rob Mather by tobytrem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Founder and CEO of the Against Malaria Foundation, Rob Mather, published by tobytrem on December 12, 2023 on The Effective Altruism Forum.TLDR: Share questions for Rob Mather (founder and CEO of the Against Malaria Foundation) in the comments of this post, by the 19th of December. Ask about anything!Comment on this post to ask Rob Mather, the founder and CEO of theAgainst Malaria Foundation (AMF), the charity that has protected 448,414,801 people with malaria nets, anything by the 19th of December.I'll be interviewing him live on 19th of December, at 6pm UTC. The interview will be hosted live on a link that I'll comment here before the event. I'll ask the questions you share on this post (and possibly some of my own). Although we might not get through all of them; we'll get through as many as we can in an hour.We'll aim for two dollars a net, two minutes an answer, so try to post short questions (1-2 sentences). Feel free to ask several questions (or add follow ups), though! If editing your question down would take a while, don't worry, I can shorten it.Though the questions won't be answered in the comments of this post, don't worry if you can't attend the live event. We'll post a video recording and perhaps a podcast version in the comments of this post.Some context for your questions:AMF distributes insecticide treated bed nets to protect sleepers from the bites of malaria carrying mosquitos, that would otherwise cause severe illness or worse. You can read about the toll of malaria on thisOur World in Data page, and the effectiveness of bednets inthis GiveWell report.Since 2009 AMF has been featured as a GiveWell top charity.Rob founded AMF in 2005. Since then, it has grown from a team of two to a team of thirteen. In 2006, they brought in $1,3 million in donations. In 2022, they brought in $120 million. AMF has received $545 million in donations to date, and has distributed 249 million bed nets.Currently, AMF's team of 13 is in the middle of a nine-month period during which they are distributing, with partners, 90 million nets to protect 160 million people in six countries: Chad, the Democratic Republic of Congo, Nigeria, South Sudan, Togo, Uganda, and Zambia.Rob tells me that: "These nets alone can be expected to prevent 40,000 deaths, avert 20 to 40 million cases of malaria and lead to a US$2.2 billion improvement in local economies (12x the funds applied). When people are ill they cannot farm, drive, teach - function, so the improvement in health leads to economic as well as humanitarian benefits."Impact numbers: Once all of the nets AMF has fundraised for so far have been distributed and have been given time to have their effect, AMF expects that they will have prevented 185,000 deaths, averted 100-185 million cases of malaria, and led to growth worth $6.5 billion in local economies.Some other links to check out:A video from GWWC tellingthe story of how Rob founded AMF.Rob'sprevious Forum AMA, four years ago. Rob discussed:The implications of adding 5 more staff to AMF's two person team.The flow-through effects of saving lives with bed nets.AMF's2023 reflections and future plans. In it, Rob explains that:AMF has a $300mfunding gap.The Global Fund, the top funder for Malaria control activities, has a $2.3B shortfall in 2024-6 funding, increasing the undersupply of malaria nets.Insecticide resistant mosquitoes are becoming more common, which may damage the effectiveness of older nets.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
tobytrem https://forum.effectivealtruism.org/posts/naMzQrsXzATX89eiX/ama-founder-and-ceo-of-the-against-malaria-foundation-rob Tue, 12 Dec 2023 04:47:48 +0000 EA - AMA: Founder and CEO of the Against Malaria Foundation, Rob Mather by tobytrem tobytrem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:56 no full 10
QeEgktwh2FQyot9Jw EA - On-Ramps for Biosecurity - A Model by Sofya Lebedeva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On-Ramps for Biosecurity - A Model, published by Sofya Lebedeva on December 14, 2023 on The Effective Altruism Forum.Thank you to the following people for reviewing: @Lin BL @Tessa @Max Görlitz @Gregory Lewis @James Smith, Sandy Hickson & @Alix PhamTL:DRGetting a full-time role in biosecurity is hardSeeing a path to get there can be even harderI propose a model to think about on-ramps into biosecurity & provide a few use cases for it depending on the background you are coming in with.I provide an overview of how different organisations in this space fit into the model.If you are an undergrad start here.A common problemWhen I first heard about biosecurity I was excited by the80,000 hours podcast and impressed by the work of Kevin Esvelt, RAND and NTI. Even though I was studying molecular biology, a seemingly relevant subject I couldn't see a way for me to get involved and to find a full-time role in this field. The gap between hearing about biosecurity and working full-time in biosecurity felt huge.Figure 1: The gap between hearing about biosecurity and working full-time in the field.A proposed on-ramp modelThrough my experiences with reading groups, UC Berkeley EA, SERI BITS and now the Oxford Biosecurity Group I have found that working on short, object-level, scalable projects fills this gap. And since I get questions of how to fill the gap from others new to the field I made a model to explain my thoughts.Figure 2: Proposed model for On-Ramps into Biosecurity.Using the modelBelow I outline some touch points that people have with various organisations in the biosecurity space. It's important to note that this model is not always linear. It's important to question your assumptions at every stage and the "stages" themselves can be more fluid.Hear about it (0 - 10 hours)This stage can be passive or active depending on your timeline. Note that a lot of the 'hear about it' resources can also be 'learn about it' resources if they are used for more in-depth research at a later stage.80,000 HoursEA Forum (hehe)GCBR Organization Updates NewsletterBiosecurity newsletters you should subscribe toUniversity GroupsYour local EA GroupLearn about it (10 - 40 hours)This stage usually takes around 1-2 months and is more passive.List of Short-Term (<15 hours) Biosecurity Projects to Test Your FitReading groups at your universityReading groups at your local EA GroupFind peers (at a similar career stage to you and you can exchange ideas with)Find mentors (who can help you deliberate between next steps in your career)Find experts (who can help you deliberate on technical differences between projects and provide insights into specific sub-fields)Taking to relevant people in the field, building a networkBlueDot ImpactBiosecurity FundamentalsProject Work (40 - 100 hours)This stage usually takes around 2-3 months and is more active. You are encouraged to continue building out your network of peers, mentors and experts and possibly to form your working group to think about these concepts. However my suggestion would be to do project work as a part of some formal group/institution if possible, to make sure that you work on something valuable.Biosecurity Working GroupsOxford Biosecurity GroupWisconsin Biosecurity InitiativeCambridge Biosecurity Group (contact:sggh2@cam.ac.uk)Nordic Biosecurity Group (contact: Johan Täng)Next Generation for Biosecurity CompetitionBlueDot ImpactBiosecurity Fundamentals (second part of the course)Mentorship ProgramsMagnify MentoringIFBA Global Mentorship ProgramUNODA Biosecurity Diplomacy WorkshopsShort-term, full-time fellowshipsStanford Existential Risks Initiative (SERI)Existential Risk Alliance (ERA) Cambridge FellowshipSwiss Existential Risk Initiative (CHERI)Full-Time Work (100 hours +)A more extensiv...

]]>
Sofya Lebedeva https://forum.effectivealtruism.org/posts/QeEgktwh2FQyot9Jw/on-ramps-for-biosecurity-a-model Thu, 14 Dec 2023 20:42:17 +0000 EA - On-Ramps for Biosecurity - A Model by Sofya Lebedeva Sofya Lebedeva https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:12 no full 2
fCcScg735AfvuC3xg EA - Risk Aversion in Wild Animal Welfare by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Risk Aversion in Wild Animal Welfare, published by Rethink Priorities on December 14, 2023 on The Effective Altruism Forum.Executive SummaryWild animals outnumber humans and captive animals by orders of magnitude. Hence, scalable interventions to improve the welfare of wild animals could have greater expected value than interventions on behalf of other groups.Yet, wild animals receive only a small share of resources earmarked for animal welfare causes. This may be because animal advocates are uncomfortable with relying on expected value maximization alone in a field beset by "complex cluelessness": There are compelling reasons for and against wild animal interventions, and none are clearly decisive.Reducing populations of fast life history strategists would likely reduce suffering. However, there is also reason to suspect fast life history strategists have enough rewarding experiences to increase aggregate welfare.Eliminating fundamental sources of suffering in natural habitats would reduce suffering. However, it could also differentially benefit species that many people believe have systematically worse lives.Prioritizing the most abundant groups of wild animals could generate the largest increases in aggregate welfare. However, the most abundant wild animals have relatively low and vague probabilities of sentience.Regardless of risk attitudes, inaction on wild animal welfare is difficult to justify.There are no areas of animal welfare with a larger scale.Even if the aggregate welfare of wild animals is net-positive, it is nevertheless almost uncertainly suboptimal.By accounting for considerations that decision-makers believe are relevant, incorporating risk aversion into expected value calculations may increase willingness to commit resources to wild animal welfare. Different types of risk aversion account for different types of uncertainty.Outcome risk aversion gives special consideration to avoiding worst-case scenarios.Difference-making risk aversion gives special consideration to ensuring that actions improve upon the status quo.Ambiguity aversion gives special consideration to reducing ignorance and choosing actions that have predictable outcomes.Different types of risk often disagree in their recommendations. A corollary is that robustness across different types of risk aversion increases choiceworthiness.Interventions that reduce suffering without altering the number or composition of wild animals have a greater probability of robustness to different types of risk aversion.Outcome risk aversion favors abundant groups of wild animals, while difference-making risk aversion favors wild animals who have a high probability of sentience.Ambiguity aversion is favorable towards research on wild animal welfare, whereas outcome and difference-making risk aversion only favor it under certain conditions.Risk aversion does not robustly favor farmed over wild animals or vice versa.Outcome risk aversion prioritizes wild animals due to their abundance.Difference-making risk aversion favors farmed animals. However, it also favors some diversification across types of animals.Ambiguity aversion favors helping farmed animals over wild animals, and basic research to help both groups.Although complex cluelessness affects many domains, wild animal welfare may be a particularly high-stakes example of it. Alternatively, moral uncertainty about the permissibility of interfering with nature may explain a reluctance to act on uncertain evidence.Read the full report on Rethink Priorities' website or download the pdf.AcknowledgmentsThe post was written by William McAuliffe. Thanks to Hayley Clatterbuck, Neil Dullaghan, Daniela Waldhorn, Bob Fischer, and Ben Stevenson for helpful feedback. The post is a project of Rethink Priorities, a global priority think-and-d...

]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/fCcScg735AfvuC3xg/risk-aversion-in-wild-animal-welfare Thu, 14 Dec 2023 18:21:20 +0000 EA - Risk Aversion in Wild Animal Welfare by Rethink Priorities Rethink Priorities https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:06 no full 4
69M9sjkF4b3KHpDYQ EA - Observatorio de Riesgos Catastróficos Globales (ORCG) Recap 2023 by JorgeTorresC Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Observatorio de Riesgos Catastróficos Globales (ORCG) Recap 2023, published by JorgeTorresC on December 14, 2023 on The Effective Altruism Forum.The Global Catastrophic Risks Observatory (ORCG) is a scientific diplomacy organization that emerged in February 2023 to formulate governance proposals that allow the comprehensive management of different global risks in Spanish-speaking countries. We connect decision-makers with experts to achieve our mission, producing evidence-based publications. In this context, we have worked on several projects on advanced artificial intelligence risks, biological risks, and food risks such as nuclear winter.Since its inception, the organization has accumulated valuable experience and generated extensive production. This includes four reports, one produced in collaboration with theAlliance to Feed the Earth in Disasters (ALLFED). In addition, we have produced four academic articles, three of which have been accepted for publication in specialized journals. We have also created three policy recommendations and/or working documents and four notes in collaboration with institutions such as theSimon Institute for Long-term Governance andThe Future Society. In addition, the organization has developed abundant informative material, such asweb articles,videos, conferences, and infographics.During these nine months of activity, the Observatory has established relationships with actors in Spanish-speaking countries, especially highlighting the collaboration with the regional cooperation spaces of the United Nations Office for Disaster Risk Reduction (UNDRR) and the Economic Commission for Latin America and the Caribbean (ECLAC), as well as with risk management offices at the national level. In this context, we have supported the formulation ofArgentina's National Plan for Disaster Risk Reduction 2024-2030. Our contribution stands out with a specific chapter on extreme food catastrophes, which was incorporated into the work manual of the Information and Risk Scenarios Commission (Technical Commission No. 7).We invite you to send any questions and/or requests toinfo@riesgoscatastroficosglobales.com. You can contribute to the mitigation of Global Catastrophic Risks bydonating.DocumentsReportsFood Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS), DOI: 10.13140/RG.2.2.11906.96969.Artificial intelligence risk management in Spain, DOI: 10.13140/RG.2.2.18451.86562.Proposal for the prevention and detection of emerging infectious diseases in Guatemala, DOI: 10.13140/RG.2.2.28217.75365.Latin America and global catastrophic risks: transforming risk management DOI: 10.13140/RG.2.2.25294.02886.PapersResilient food solutions to avoid mass starvation during a nuclear winter in Argentina,REDER Journal, Accepted, pending publication.Systematic review of taxonomies of risks associated with artificial intelligence,Analecta Política Journal, Accepted, pending publication.The EU AI Act: A pioneering effort to regulate frontier AI?, JournalIberamIA, Accepted, pending publication.Operationalizing AI Global Governance Democratization, submitted to call for papers of Office of the Envoy of the Secretary General for Technology, Non-public document.Policy brief and work documentsRCG Position paper: AI Act trilogue.Operationalising the definition of highly capable AI.PNRRD Argentina 2024-2030 chapter proposal "Scenarios for Abrupt Reduction of Solar Light", *Published as an internal government document.Collaborations[Simon Institute]Response to Our Common Agenda Policy Brief 1: "To Think and Act for Future Generations".[Simon Institute]Response to Our Common Agenda Policy Brief 2: "Strengthening the International Response to Complex Global Shocks - An Emergency Platform" .[Simon Institute]Respons...

]]>
JorgeTorresC https://forum.effectivealtruism.org/posts/69M9sjkF4b3KHpDYQ/observatorio-de-riesgos-catastroficos-globales-orcg-recap Thu, 14 Dec 2023 16:45:59 +0000 EA - Observatorio de Riesgos Catastróficos Globales (ORCG) Recap 2023 by JorgeTorresC JorgeTorresC https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:46 no full 5
Xo8x7fS5oA5Da4xN3 EA - Will AI Avoid Exploitation? (Adam Bales) by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will AI Avoid Exploitation? (Adam Bales), published by Global Priorities Institute on December 14, 2023 on The Effective Altruism Forum.This paper was published as a GPI working paper in December 2023.IntroductionRecent decades have seen rapid progress in artificial intelligence (AI). Some people expect that in the coming decades, further progress will lead to the development of AI systems that are at least as cognitively capable as humans (see Zhang et al., 2022). Call such systems artificial general intelligences (AGIs). If we develop AGI then humanity will come to share the Earth with agents that are as cognitively sophisticated as we are.Even in the abstract, this seems like a momentous event: while the analogy is imperfect, the development of AGI would have some similarity to the encountering of an intelligent alien species who intend to make the Earth their home. Less abstractly, it has been argued that AGI could have profound economic implications, impacting growth, employment and inequality (Korinek & Juelfs, Forthcoming; Trammell & Korinek, 2020). And it has been argued that AGI could bring with it risks, including those arising from human misuse of powerful AI systems (Brundage et al., 2018; Dafoe, 2018) and those arising more directly from the AI systems themselves (Bostrom, 2014; Carlsmith, Forthcoming).Given the potential stakes, it would be desirable to have some sense of what AGIs will be like if we develop them. Knowing this might help us prepare for a world where such systems are present. Unfortunately, it's difficult to speculate with confidence about what hypothetical future AI systems will be like.However, a surprisingly simple argument suggests we can make predictions about the behaviour of AGIs (this argument is inspired by Omohundro, 2007, 2008; Yudkowsky, 2019).3 According to this argument, we should expect AGIs to behave as if maximising expected utility (EU).In rough terms, the argument claims that unless an agent decides by maximising EU it will be possible to offer them a series of trades that leads to a guaranteed loss of some valued thing (an agent that's susceptible to such trades is said to be exploitable). Sufficiently sophisticated systems are unlikely to be exploitable, as exploitability plausibly interferes with acting competently, and sophisticated systems are likely to act competently.So, the argument concludes, sophisticated systems are likely to be EU maximisers. I'll call this the EU argument.In this paper, I'll discuss this argument in detail. In doing so, I'll have four aims. First, I'll show that the EU argument fails. Second, I'll show that reflecting on this failure is instructive: such reflection points us towards more nuanced and plausible alternative arguments. Third, the nature of these more nuanced arguments will highlight the limitations of our models of AGI, in a way that encourages us to adopt a pluralistic approach.And fourth, reflecting on such models will suggest that at least sometimes what matters is less developing a formal model of an AGI's decision-making procedure and more clarifying what sort of goals, if any, an AGI is likely to develop. So while my discussion will focus on the EU argument, I'll conclude with more general lessons about modelling AGI.Read the rest of the paperThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Global Priorities Institute https://forum.effectivealtruism.org/posts/Xo8x7fS5oA5Da4xN3/will-ai-avoid-exploitation-adam-bales Thu, 14 Dec 2023 09:27:17 +0000 EA - Will AI Avoid Exploitation? (Adam Bales) by Global Priorities Institute Global Priorities Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:20 no full 7
rpFetbdJEtthzyNrs EA - Faunalytics' Plans & Priorities For 2024 by JLRiedi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Faunalytics' Plans & Priorities For 2024, published by JLRiedi on December 14, 2023 on The Effective Altruism Forum.Faunalytics' mission is to empower animal advocates to effectively end animal suffering. As such, what advocates think about our work is of the utmost importance. Feedback like the quote above is especially rewarding, and all the more motivating as we plan for another productive and informative year ahead on behalf of animals and advocates alike.In 2024, we have big plans to conduct more Original Research than ever before, expand our Library in exciting new ways, and build upon the research support services that we provide for the movement. We'll be assessing opportunities to increase our impact, and we'll be working hard to live up to our status as an Animal Charity Evaluators Recommended Charity. Read on to learn all about our upcoming plans, and how you can help us succeed in our mission.A New Original Research AgendaFaunalytics is thrilled to share that our 2024 Original Research plans will support many different advocacy types and tactics. We'll cover topics including political advocacy, youth advocacy, global advocacy, equity and inclusion, consumer behavior, and capacity building. In 2024, we plan to hire a Projects Manager to help our team continue to be as efficient as possible as we bring more and more research to the animal protection community.In-Progress Studies Coming SoonCollaborative Opportunities with Environmental Organizations: We're working to identify environmentalists' perspectives on potential opportunities for, and challenges of, collaborating with animal advocates.Benchmarking Compensation in the Farmed Animal Protection Movement: Salary transparency and benchmarking are important tools for a fair and equitable movement, and this study will provide insights to support advocates and organizations alike.The Impact of Humanewashing on Consumer Behavior: Our simulated shopping experiment will shed light on whether humanewashing helps consumers justify their consumption of animal products.Conservative Political Values with Respect to Animal Advocacy: We're investigating ways that U.S. animal advocates can potentially leverage conservative political values to make headway for animals.International Advocacy Strategies and Needs: We're uncovering the reasons why animal protection groups in different regions and circumstances choose particular approaches to advocacy, and what resources they would need in order to expand their efforts.Chicken and Fish Substitution Meta-Analysis: Are consumers giving up one kind of animal product only to eat another? We're working with Rethink Priorities to answer this question and to help animal advocates navigate this issue.2024 Upcoming Research AgendaEffective Communication with Legislative Staffers: We'll interview political staffers about their preferences and recommendations for communication, reporting on the most effective strategies with input from advocates who have engaged with legislative teams successfully.Voter Response to a Pro-Animal, Anti-Subsidy Candidate: With a focus on the U.S. and Brazil (high-impact, highly subsidized), we'll present hypothetical candidates in a real election context to better understand voter response.A Case Study of the Impact of Humane Education & Leadership Training: In collaboration with New Roots Institute (formerly FFAC), we'll examine the long-term impact of their humane education leadership program.Fostering A Pro-Animal, Socially Aware Gen Z: We'll conduct focus groups to better understand Gen Z's current social and/or environmental concerns, and explore areas for advocates to pursue engagement (e.g. education, career, lifestyle).Balancing Inclusivity with an Animal-Oriented Mission: In partnership with Dr. Ahmmad Brown of Northwestern Uni...

]]>
JLRiedi https://forum.effectivealtruism.org/posts/rpFetbdJEtthzyNrs/faunalytics-plans-and-priorities-for-2024 Thu, 14 Dec 2023 07:48:53 +0000 EA - Faunalytics' Plans & Priorities For 2024 by JLRiedi JLRiedi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:30 no full 8
ngoqSAbcdYhhNgBza EA - GWWC is spinning out of EV by Luke Freeman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC is spinning out of EV, published by Luke Freeman on December 13, 2023 on The Effective Altruism Forum.Giving What We Can (GWWC) is embarking on an exciting new chapter: after years of support, we will be spinning out of the Effective Ventures Foundation UK and US (collectively referred to as "EV"), our parent charities in the US and UK respectively, to become an independent organisation.Rest assured that our core mission, commitments, and focus on effective giving remain unchanged. We believe this transition will allow us to better serve our community and to achieve our mission more effectively. Below, you'll find all the details you need, including what is changing, what isn't, and how you can get involved.A heartfelt thanksFirst and foremost, we owe a very big thank you to the team at EV. Their support over the years has helped us to grow and have a meaningful impact in the world. We could not be more grateful for their support.A big thank you also to our members and donors who have supported us along the way. In particular I'd like to thank the many of you who we've consulted throughout the process of arriving at this decision and working on a plan.Why spin out?WhenGWWC was founded in 2009, it was among the first in a small constellation of initiatives aimed at fosteringwhat would soon be called "effective altruism." In 2011, following the establishment of80,000 Hours, both organisations came together to form the Centre for Effective Altruism (which is now EVto disambiguate from the project called Centre for Effective Altruism, which is also housed within EV).A lot has changed in the intervening years, both within GWWC and within EV. Today, EV is home tomore than 10 different initiatives and is focused on a broad range of issues. As for GWWC, we have developedambitious plans for our future and are committed to focusing more than ever on our core mission: to make effective and significant giving a cultural norm.We've been considering this option for quite some time and have come to the conclusion that the best way to achieve our mission is to be an independent organisation. Being independent will allow us to:Align our organisational structure and governance more closely with our mission.Better manage our own legal and reputational risks.Have greater clarity and transparency of our inner workings and governance to the outside world.Have greater control over our operational costs.We believe that these changes will enable us to serve our community better and to contribute more effectively to growing effective giving.The detailsFor most of you, very little will change. There will be a multi-stage transition period (most of which we estimate will be completed over the next 12 months) and any relevant changes will be communicated in a timely and transparent manner. Here's what to expect:What's changingWe have registered Giving What We Can USA Inc. as a 501(c)(3) charity in the US, and have started the process of registering charities in the UK and Canada. There will be a transfer of GWWC-specific intellectual property, contracts, services, and data (e.g. brand, databases, website, files) to the new entities (exact structure to be determined) and a transition of the donation platform across to the new entities. Our supported programs (e.g. charitable projects and grantmaking funds) will need to be onboarded as programs with our new entities before any switch over dates (TBC) in each country.We are recruiting new governance and advisory boards for the new entities.We're also pursuing affiliate arrangements to continue to expand effective-giving support into new countries (e.g. our collaboration with EA Australia to launch GWWC Australia). This will include adapting our approach to local tax situations, cultural contexts, languages, and curre...

]]>
Luke Freeman https://forum.effectivealtruism.org/posts/ngoqSAbcdYhhNgBza/gwwc-is-spinning-out-of-ev Wed, 13 Dec 2023 22:26:45 +0000 EA - GWWC is spinning out of EV by Luke Freeman Luke Freeman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:03 no full 11
HjsfHwqasyQMWRzZN EA - EV updates: FTX settlement and the future of EV by Zachary Robinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EV updates: FTX settlement and the future of EV, published by Zachary Robinson on December 13, 2023 on The Effective Altruism Forum.We're announcing two updates today that we believe will strengthen the effective altruism ecosystem.FTX updatesFirst, we're pleased to say that both Effective Ventures UK and Effective Ventures US have agreed to settlements with the FTX bankruptcy estate. As part of these settlements, EV US and EV UK (which I'll collectively refer to as "EV") have between them paid the estate $26,786,503, an amount equal to 100% of the funds the entities received from FTX and the FTX Foundation (which I'll collectively refer to as "FTX") in 2022.All of this money was either originally received from FTX or allocated to pay the settlement with the knowledge and support of their original donor. This means that EV's projects can continue to fundraise with confidence that donations won't be used to cover the cost of this settlement. We strongly condemn fraud and the actions underlying Sam Bankman-Fried's conviction.Also related to FTX, in September we completed an independent investigation about the relationship between FTX and EV. The investigation, commissioned from the law firm Mintz, included dozens of interviews as well as reviews of tens of thousands of messages and documents. Mintz found no evidence that anyone at EV (including employees, leaders ofEV-sponsored projects, and trustees) was aware of the criminal fraud of which Sam Bankman-Fried has now been convicted.While we are not publishing any additional details regarding the investigation because doing so could reveal information from people who have not consented to their confidences being publicized and could waive important legal privileges that we do not intend to waive, we recognize that knowledge of criminal activity isn't the only concern. I plan to share other non-privileged information on lessons learned in the aftermath of FTX and encourage others to share their reflections as well.EV also started working on structural improvements shortly after FTX's collapse and continued to do so alongside the investigation. Over the past year, we have implemented structural governance and oversight improvements, including restructuring the way the two EV charities work together, updating and improving key corporate policies and procedures at both charities, increasing the rigor of donor due diligence, and staffing up the in-house legal departments.Nevertheless, good governance and oversight is not a goal that can ever be definitively 'completed', and we'll continue to iterate and improve. We plan to open source those improvements where feasible so the whole EA ecosystem can learn from EV's challenges and benefit from the work we've done.We're pleased to have reached this point and to bring our financial interactions with the FTX bankruptcy to a close. We expect the settlements will permanently resolve matters between EV US + EV UK and the FTX estate, enabling EV, our teams, and our projects to move forward.Future of EVWhich brings me to our second announcement: Now that we consider matters with the FTX estate to be resolved, we are planning to take significant steps to decentralize the effective altruism ecosystem by offboarding the projects which currently sit under the Effective Ventures umbrella. This means CEA, 80,000 Hours, Giving What We Can and otherEV-sponsored projects will transition to being independent legal entities, with their own leadership, operational staff, and governance structures. We anticipate the details of the offboarding process will vary by project, and we expect the overall process to take some time - likely 1-2 years until all projects have finished.EV served an important purpose in allowing these projects to launch with lower friction, but the events of last ...

]]>
Zachary Robinson https://forum.effectivealtruism.org/posts/HjsfHwqasyQMWRzZN/ev-updates-ftx-settlement-and-the-future-of-ev Wed, 13 Dec 2023 21:50:52 +0000 EA - EV updates: FTX settlement and the future of EV by Zachary Robinson Zachary Robinson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:11 no full 12
yTwR86oBXXLEgHSsG EA - Center on Long-Term Risk: Annual review and fundraiser 2023 by Center on Long-Term Risk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Center on Long-Term Risk: Annual review and fundraiser 2023, published by Center on Long-Term Risk on December 13, 2023 on The Effective Altruism Forum.Jesse CliftonCrossposted to LessWrong hereThis is a brief overview of theCenter on Long-Term Risk (CLR)'s activities in 2023 and our plans for 2024. We are hoping to fundraise $770,000 to fulfill our target budget in 2024.About usCLR works on addressing the worst-case risks from the development and deployment of advanced AI systems in order to reduces-risks. Our research primarily involves thinking about how to reduce conflict and promote cooperation in interactions involving powerful AI systems. In addition to research, we do a range of activities aimed at building a community of people interested in s-risk reduction, and support efforts that contribute to s-risk reduction via theCLR Fund.Review of 2023ResearchOur research in 2023 primarily fell in a few buckets:Commitment races and safe Pareto improvements deconfusion. Many researchers in the area considercommitment races a potentially important driver of conflict involving AI systems. But we have been missing a precise understanding of the mechanisms by which they could lead to conflict. We believe we made significant progress on this over the last year. This includes progress on understanding the conditions under which an approach to bargaining called "safe Pareto improvements (SPIs)" can prevent catastrophic conflict.Most of this work is non-public, but public documents that came out of this line of work includeOpen-minded updatelessness,Responses to apparent rationalist confusions about game / decision theory, and a forthcoming paper (seedraft) & post on SPIs for expected utility maximizers.Paths to implementing surrogate goals.Surrogate goals are a special case of SPIs and we consider them a promising route to reducing the downsides from conflict. We (along with CLR-external researchers Nathaniel Sauerberg and Caspar Oesterheld) thought about how implementing surrogate goals could be both credible and counterfactual (i.e., not done by AIs by default), e.g., usingcompute monitoring schemes.CLR researchers, in collaboration with Caspar Oesterheld and Filip Sondej, are also working on a project to "implement" surrogate goals/SPIs in contemporary language models.Conflict-prone dispositions. We thought about the kinds of dispositions that could exacerbate conflict, and how they might arise in AI systems. The primary motivation for this line of work is that, even if alignment does not fully succeed, we may be able to shape their dispositions in coarse-grained ways that reduce the risks of worse-than-extinction outcomes. See our post onmaking AIs less likely to be spiteful.Evaluations of LLMs. We continued ourearlier work on evaluating cooperation-relevant properties in LLMs. Part of this involved cheap exploratory work with GPT-4 and Claude (e.g., looking at behavior in scenarios from theMachiavelli dataset) to see if there were particularly interesting behaviors worth investing more time in.We also worked with external collaborators to develop "Welfare Diplomacy", a variant of the Diplomacy game environment designed to be better for facilitating Cooperative AI research. Wewrote a paper introducing the benchmark and using it to evaluate several LLMs.Community buildingProgress on s-risk community building was slow, due to the departures of our community building staff and funding uncertainties that prevented us from immediately hiring another Community Manager.We continued having career calls;We ran our fourthSummer Research Fellowship, with 10 fellows;We have now hired a new Community Manager, Winston Oswald-Drummond, who has just started.Staff & leadership changesWe saw some substantial staff changes this year, with three staff m...

]]>
Center on Long-Term Risk https://forum.effectivealtruism.org/posts/yTwR86oBXXLEgHSsG/center-on-long-term-risk-annual-review-and-fundraiser-2023-2 Wed, 13 Dec 2023 19:51:11 +0000 EA - Center on Long-Term Risk: Annual review and fundraiser 2023 by Center on Long-Term Risk Center on Long-Term Risk https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:41 no full 14
be2zpvXqYPncYJKoc EA - Funding case: AI Safety Camp by Remmelt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding case: AI Safety Camp, published by Remmelt on December 13, 2023 on The Effective Altruism Forum.Project summaryAI Safety Camp is a program with a 5-year track record of enabling people to find careers in AI Safety.We support up-and-coming researchers outside the Bay Area and London hubs.We are out of funding. To make the 10th edition happen, fund our stipends and salaries.What are this project's goals and how will you achieve them?AI Safety Camp is a program for inquiring how to work on ensuring future AI issafe, and try concretely working on that in a team.For the 9th edition of AI Safety Camp we opened applications for29 projects.We are first to host a special area to support "Pause AI" work. With funding, we can scale from 4 projects for restricting corporate-AI development to 15 projects next edition.We are excited about our new research leadformat, since it combines:Hands-on guidance: We guide research leads (RLs) to carefully consider and scope their project. Research leads in turn onboard teammates and guide their teammates through the process of doing new research.Streamlined applications: Team applications were the most time-intensive portion of running AI Safety Camp. Reviewers were often unsure how to evaluate an applicant's fit for a project that required specific skills and understandings. RLs usually have a clear sense of who they would want to work with for three months. So we instead guide RLs to prepare project-specific questions and interview their potential teammates.Resource-efficiency: We are not competing with other programs for scarce mentor time. Instead, we prospect for thoughtful research leads who at some point could become well-recognized researchers. The virtual format also cuts on overhead - instead of sinking funds into venues and plane tickets, the money goes directly to funding people to focus on their work in AI safety.Flexible hours: Participants can work remotely from their timezone alongside their degree or day job - to test their fit for an AI Safety career.How will this funding be used?We are fundraising to pay for:Salaries for the organisers for the current AISCFunding future camps (see budget section)Whether we run the tenth edition, or put AISC indefinitely on hold depends on your donation.Last June, we had to freeze a year's worth of salary for three staff. Our ops coordinator had to leave, and Linda and Remmelt decided to run one more edition as volunteers.AISC has previously gotten grants paid for by FTX money. After the FTX collapse, we froze $255K in funds to cover clawback claims. For the current AISC, we have $99K left from SFF that was earmarked for stipends - but nothing for salaries, and nothing for future AISCs.If we have enough money we might also restart the in-person version of AISC. This decision will also depend on an ongoing external evaluation of AISC, which among other things, is evaluating the difference in impact of the virtual vs in-person AISCs.By default we'll decide what to prioritise with the funding we get. But if you want to have a say, we can discuss that. We can earmark your money for whatever you want.Potential budgets for various versions of AISCThese are example budgets for different possible versions of the virtual AISC. If our funding lands somewhere in between, we'll do something in between.Virtual AISC - Budget versionSoftware etc$2KOrganiser salaries, 2 ppl, 4 months$56KStipends for participants$0Total$58KIn the Budget version, the organisers do the minimum job required to get the program started, but no continuous support to AISC teams during their projects and no time for evaluations and improvement for future versions of the program.Salaries are calculated based on $7K per person per month.Virtual AISC - Normal versionSoftware etc$2KOrg...

]]>
Remmelt https://forum.effectivealtruism.org/posts/be2zpvXqYPncYJKoc/funding-case-ai-safety-camp Wed, 13 Dec 2023 03:21:35 +0000 EA - Funding case: AI Safety Camp by Remmelt Remmelt https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:48 no full 20
KkevnyCyi2qWEgSxv EA - EA for Christians 2024 Conference in D.C. | May 18-19 by JDBauman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA for Christians 2024 Conference in D.C. | May 18-19, published by JDBauman on December 16, 2023 on The Effective Altruism Forum.EACH's annual conference is its largest event all year, bringing together 100 Christians in the EA movement for talks, workshops, 1-on-1s, and prayer.Apply here and share with Christians you know.We welcome our keynote speaker Caleb Watney, cofounder of the Institute for Progress.____This conference is a fantastic opportunity to:Hear from excellent speakers about themes relevant to the intersection of EA & Christianity.Meet some of the over 500 Christians in the global effective altruism movementBuild your plans to improve the world, especially through high-impact careers and effective giving.This event is primarily an in-person event. Online attendees may participate in networking, and talks will be recorded and shared.See talks from last year here.If you are a Christian who is interested in effective altruism, then we would love to see you there. While this event is primarily for Christians, we welcome anyone sincerely interested in the intersection of Christianity and EA. We hope to learn from you as you learn from us.Curious about EA for Christians? Want to learn more?Check out any of these:Website: www.eaforchristians.org/about-usFacebook group: www.facebook.com/eaforchristians/groups/Newsletter: http://eepurl.com/ds6IMbOur Linktree: https://linktr.ee/effectivealtruismforchristiansThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
JDBauman https://forum.effectivealtruism.org/posts/KkevnyCyi2qWEgSxv/ea-for-christians-2024-conference-in-d-c-or-may-18-19 Sat, 16 Dec 2023 22:33:53 +0000 EA - EA for Christians 2024 Conference in D.C. | May 18-19 by JDBauman JDBauman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:49 no full 2
yFTgE7DtGcpnnxBJQ EA - The Global Fight Against Lead Poisoning, Explained (A Happier World video) by Jeroen Willems Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Global Fight Against Lead Poisoning, Explained (A Happier World video), published by Jeroen Willems on December 16, 2023 on The Effective Altruism Forum.In this video, I explore the issue of lead poisoning with turmeric adulteration as the angle.I interviewed Drew McCartor from Pure Earth, Rachel Silverman from the Center for Global Development and Kris Newby who reported on turmeric adulteration for Stanford. I also visited a lab to actually test my own turmeric!Would love to hear what you think!Thanks to everyone who provided valuable feedback.Charities mentionedPure EarthLEEPCenter For Global DevelopmentSourcesThe Vice of Spice by Wudan Yan for UndarkDylan Matthews for VoxKris Newby for Stanford MagazineLink to a transcript with the rest of the sources is in progress!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Jeroen Willems https://forum.effectivealtruism.org/posts/yFTgE7DtGcpnnxBJQ/the-global-fight-against-lead-poisoning-explained-a-happier Sat, 16 Dec 2023 15:35:59 +0000 EA - The Global Fight Against Lead Poisoning, Explained (A Happier World video) by Jeroen Willems Jeroen Willems https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:01 no full 4
DHybAfxPhqqYa3bQz EA - What is the current most representative EA AI x-risk argument? by Matthew Barnett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the current most representative EA AI x-risk argument?, published by Matthew Barnett on December 16, 2023 on The Effective Altruism Forum.I tend to disagree with most EAs about existential risk from AI. Unfortunately, my disagreements are all over the place. It's not that I disagree with one or two key points: there are many elements of the standard argument that I diverge from, and depending on the audience, I don't know which points of disagreement people think are most important.I want to write a post highlighting all the important areas where I disagree, and offering my own counterarguments as an alternative. This post would benefit from responding to an existing piece, along the same lines as Quintin Pope's article "My Objections to "We're All Gonna Die with Eliezer Yudkowsky"". By contrast, it would be intended to address the EA community as a whole, since I'm aware many EAs already disagree with Yudkowsky even if they buy the basic arguments for AI x-risks.My question is: what is the current best single article (or set of articles) that provide a well-reasoned and comprehensive case for believing that there is a substantial (>10%) probability of an AI catastrophe this century?I was considering replying to Joseph Carlsmith's article, "Is Power-Seeking AI an Existential Risk?", since it seemed reasonably comprehensive and representative of the concerns EAs have about AI x-risk. However, I'm a bit worried that the article is not very representative of EAs who have substantial probabilities of doom, since he originally estimated a total risk of catastrophe at only 5% before 2070. In May 2022, Carlsmith changed his mind and reported a higher probability, but I am not sure whether this is because he has been exposed to new arguments, or because he simply thinks the stated arguments are stronger than he originally thought.I suspect I have both significant moral disagreements and significant empirical disagreements with EAs, and I want to include both in such an article, while mainly focusing on the empirical points. For example, I have the feeling that I disagree with most EAs about:How bad human disempowerment would likely be from a utilitarian perspective, and what "human disempowerment" even means in the first placeWhether there will be a treacherous turn event, during which AIs violently take over the world after previously having been behaviorally aligned with humansHow likely AIs are to coordinate near-perfectly with each other as a unified front, leaving humans out of their coalitionWhether we should expect AI values to be "alien" (like paperclip maximizers) in the absence of extraordinary efforts to align them with humansWhether the AIs themselves will be significant moral patients, on par with humansWhether there will be a qualitative moment when "the AGI" is created, rather than systems incrementally getting more advanced, with no clear finish lineWhether we get only "one critical try" to align AGIWhether "AI lab leaks" are an important source of AI riskHow likely AIs are to kill every single human if they are unaligned with humansWhether there will be a "value lock-in" event soon after we create powerful AI that causes values to cease their evolution over the coming billions of yearsHow bad problems related to "specification gaming" will be in the futureHow society is likely to respond to AI risks, and whether they'll sleepwalk into a catastropheHowever, I also disagree with points made by many other EAs who have argued against the standard AI risk case. For example, I think that,AIs will eventually become vastly more powerful and smarter than humans. So, I think AIs will eventually be able to "defeat all of us combined"I think a benign "AI takeover" event is very likely even if we align AIs successfullyAIs will likely be goal-...

]]>
Matthew_Barnett https://forum.effectivealtruism.org/posts/DHybAfxPhqqYa3bQz/what-is-the-current-most-representative-ea-ai-x-risk 10%) probability of an AI catastrophe this century?I was considering replying to Joseph Carlsmith's article, "Is Power-Seeking AI an Existential Risk?", since it seemed reasonably comprehensive and representative of the concerns EAs have about AI x-risk. However, I'm a bit worried that the article is not very representative of EAs who have substantial probabilities of doom, since he originally estimated a total risk of catastrophe at only 5% before 2070. In May 2022, Carlsmith changed his mind and reported a higher probability, but I am not sure whether this is because he has been exposed to new arguments, or because he simply thinks the stated arguments are stronger than he originally thought.I suspect I have both significant moral disagreements and significant empirical disagreements with EAs, and I want to include both in such an article, while mainly focusing on the empirical points. For example, I have the feeling that I disagree with most EAs about:How bad human disempowerment would likely be from a utilitarian perspective, and what "human disempowerment" even means in the first placeWhether there will be a treacherous turn event, during which AIs violently take over the world after previously having been behaviorally aligned with humansHow likely AIs are to coordinate near-perfectly with each other as a unified front, leaving humans out of their coalitionWhether we should expect AI values to be "alien" (like paperclip maximizers) in the absence of extraordinary efforts to align them with humansWhether the AIs themselves will be significant moral patients, on par with humansWhether there will be a qualitative moment when "the AGI" is created, rather than systems incrementally getting more advanced, with no clear finish lineWhether we get only "one critical try" to align AGIWhether "AI lab leaks" are an important source of AI riskHow likely AIs are to kill every single human if they are unaligned with humansWhether there will be a "value lock-in" event soon after we create powerful AI that causes values to cease their evolution over the coming billions of yearsHow bad problems related to "specification gaming" will be in the futureHow society is likely to respond to AI risks, and whether they'll sleepwalk into a catastropheHowever, I also disagree with points made by many other EAs who have argued against the standard AI risk case. For example, I think that,AIs will eventually become vastly more powerful and smarter than humans. So, I think AIs will eventually be able to "defeat all of us combined"I think a benign "AI takeover" event is very likely even if we align AIs successfullyAIs will likely be goal-...]]> Sat, 16 Dec 2023 13:31:39 +0000 EA - What is the current most representative EA AI x-risk argument? by Matthew Barnett 10%) probability of an AI catastrophe this century?I was considering replying to Joseph Carlsmith's article, "Is Power-Seeking AI an Existential Risk?", since it seemed reasonably comprehensive and representative of the concerns EAs have about AI x-risk. However, I'm a bit worried that the article is not very representative of EAs who have substantial probabilities of doom, since he originally estimated a total risk of catastrophe at only 5% before 2070. In May 2022, Carlsmith changed his mind and reported a higher probability, but I am not sure whether this is because he has been exposed to new arguments, or because he simply thinks the stated arguments are stronger than he originally thought.I suspect I have both significant moral disagreements and significant empirical disagreements with EAs, and I want to include both in such an article, while mainly focusing on the empirical points. For example, I have the feeling that I disagree with most EAs about:How bad human disempowerment would likely be from a utilitarian perspective, and what "human disempowerment" even means in the first placeWhether there will be a treacherous turn event, during which AIs violently take over the world after previously having been behaviorally aligned with humansHow likely AIs are to coordinate near-perfectly with each other as a unified front, leaving humans out of their coalitionWhether we should expect AI values to be "alien" (like paperclip maximizers) in the absence of extraordinary efforts to align them with humansWhether the AIs themselves will be significant moral patients, on par with humansWhether there will be a qualitative moment when "the AGI" is created, rather than systems incrementally getting more advanced, with no clear finish lineWhether we get only "one critical try" to align AGIWhether "AI lab leaks" are an important source of AI riskHow likely AIs are to kill every single human if they are unaligned with humansWhether there will be a "value lock-in" event soon after we create powerful AI that causes values to cease their evolution over the coming billions of yearsHow bad problems related to "specification gaming" will be in the futureHow society is likely to respond to AI risks, and whether they'll sleepwalk into a catastropheHowever, I also disagree with points made by many other EAs who have argued against the standard AI risk case. For example, I think that,AIs will eventually become vastly more powerful and smarter than humans. So, I think AIs will eventually be able to "defeat all of us combined"I think a benign "AI takeover" event is very likely even if we align AIs successfullyAIs will likely be goal-...]]> 10%) probability of an AI catastrophe this century?I was considering replying to Joseph Carlsmith's article, "Is Power-Seeking AI an Existential Risk?", since it seemed reasonably comprehensive and representative of the concerns EAs have about AI x-risk. However, I'm a bit worried that the article is not very representative of EAs who have substantial probabilities of doom, since he originally estimated a total risk of catastrophe at only 5% before 2070. In May 2022, Carlsmith changed his mind and reported a higher probability, but I am not sure whether this is because he has been exposed to new arguments, or because he simply thinks the stated arguments are stronger than he originally thought.I suspect I have both significant moral disagreements and significant empirical disagreements with EAs, and I want to include both in such an article, while mainly focusing on the empirical points. For example, I have the feeling that I disagree with most EAs about:How bad human disempowerment would likely be from a utilitarian perspective, and what "human disempowerment" even means in the first placeWhether there will be a treacherous turn event, during which AIs violently take over the world after previously having been behaviorally aligned with humansHow likely AIs are to coordinate near-perfectly with each other as a unified front, leaving humans out of their coalitionWhether we should expect AI values to be "alien" (like paperclip maximizers) in the absence of extraordinary efforts to align them with humansWhether the AIs themselves will be significant moral patients, on par with humansWhether there will be a qualitative moment when "the AGI" is created, rather than systems incrementally getting more advanced, with no clear finish lineWhether we get only "one critical try" to align AGIWhether "AI lab leaks" are an important source of AI riskHow likely AIs are to kill every single human if they are unaligned with humansWhether there will be a "value lock-in" event soon after we create powerful AI that causes values to cease their evolution over the coming billions of yearsHow bad problems related to "specification gaming" will be in the futureHow society is likely to respond to AI risks, and whether they'll sleepwalk into a catastropheHowever, I also disagree with points made by many other EAs who have argued against the standard AI risk case. For example, I think that,AIs will eventually become vastly more powerful and smarter than humans. So, I think AIs will eventually be able to "defeat all of us combined"I think a benign "AI takeover" event is very likely even if we align AIs successfullyAIs will likely be goal-...]]> Matthew_Barnett https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:21 no full 5
s5gEqfyxBif96Fh2T EA - #175 - Preventing lead poisoning for $1.66 per child (Lucia Coulter on the 80,000 Hours Podcast) by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: #175 - Preventing lead poisoning for $1.66 per child (Lucia Coulter on the 80,000 Hours Podcast), published by 80000 Hours on December 16, 2023 on The Effective Altruism Forum.We just published an interview: Lucia Coulter on preventing lead poisoning for $1.66 per child. Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.Episode summaryI always wonder if one part of it is just the really invisible nature of lead as a poison. Of course impacts aren't invisible: millions of deaths and trillions of dollars in lost income. But the fact that lead is the cause is not apparent. It's not apparent when you're being exposed to the lead. The paint just looks like any other paint; the cookware looks like any other cookware.And also, if you are suffering the effects of lead poisoning, if you have cognitive impairment and heart disease, you're not going to think, "Oh, it was that lead exposure." It's just not going to be clear.Lucia CoulterLead is one of the most poisonous things going. A single sugar sachet of lead, spread over a park the size of an American football field, is enough to give a child that regularly plays there lead poisoning. For life they'll be condemned to a ~3-point-lower IQ; a 50% higher risk of heart attacks; and elevated risk of kidney disease, anaemia, and ADHD, among other effects.We've known lead is a health nightmare for at least 50 years, and that got lead out of car fuel everywhere. So is the situation under control? Not even close.Around half the kids in poor and middle-income countries have blood lead levels above 5 micrograms per decilitre; the US declared a national emergency when just 5% of the children in Flint, Michigan exceeded that level. The collective damage this is doing to children's intellectual potential, health, and life expectancy is vast - the health damage involved is around that caused by malaria, tuberculosis, and HIV combined.This week's guest, Lucia Coulter - cofounder of the incredibly successful Lead Exposure Elimination Project (LEEP) - speaks about how LEEP has been reducing childhood lead exposure in poor countries by getting bans on lead in paint enforced.Various estimates suggest the work is absurdly cost effective. LEEP is in expectation preventing kids from getting lead poisoning for under $2 per child (explore the analysis here). Or, looking at it differently, LEEP is saving a year of healthy life for $14, and in the long run is increasing people's lifetime income anywhere from $300-1,200 for each $1 it spends, by preventing intellectual stunting.Which raises the question: why hasn't this happened already? How is lead still in paint in most poor countries, even when that's oftentimes already illegal? And how is LEEP able to get bans on leaded paint enforced in a country while spending barely tens of thousands of dollars? When leaded paint is gone, what should they target next?With host Robert Wiblin, Lucia answers all those questions and more:Why LEEP isn't fully funded, and what it would do with extra money (you can donate here).How bad lead poisoning is in rich countries.Why lead is still in aeroplane fuel.How lead got put straight in food in Bangladesh, and a handful of people got it removed.Why the enormous damage done by lead mostly goes unnoticed.The other major sources of lead exposure aside from paint.Lucia's story of founding a highly effective nonprofit, despite having no prior entrepreneurship experience, through Charity Entrepreneurship's Incubation Program.Why Lucia pledges 10% of her income to cost-effective charities.Lucia's take on why GiveWell didn't support LEEP earlier on.How the invention of cheap, accessible lead testing for blood and consumer products would be a game changer.Genera...

]]>
80000_Hours https://forum.effectivealtruism.org/posts/s5gEqfyxBif96Fh2T/175-preventing-lead-poisoning-for-usd1-66-per-child-lucia Sat, 16 Dec 2023 05:21:58 +0000 EA - #175 - Preventing lead poisoning for $1.66 per child (Lucia Coulter on the 80,000 Hours Podcast) by 80000 Hours 80000_Hours https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 22:29 no full 8
CDt5ShpdABZRn8Tvi EA - My quick thoughts on donating to EA Funds' Global Health and Development Fund and what it should do by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My quick thoughts on donating to EA Funds' Global Health and Development Fund and what it should do, published by Vasco Grilo on December 15, 2023 on The Effective Altruism Forum.I think there is a strong case for donating to EA Funds'Global Health and Development Fund (GHDF) if one wants to support interventions inglobal health and development without attending to theireffects on animals. On the other hand, given this goal, I believe one had better donate to GiveWell'sAll Grants Fund (AGF) orunrestricted funds (GWUF), or Giving What We Can's (GWWC's)Global Health and Wellbeing Fund (GHWF). In addition, I encourage GHDF to:Let its donors know that donating to GHDF in its current form has a similar effect to donating to AGF (if that is in fact the case).Consider appointing additional fund managers independent from GiveWell.Consider accepting applications.In any case, the goal of this post is mostly about starting a discussion about the future of GHDF rather than providing super informed takes about it. So feel free to share your thoughts or vision below!Case for donating to GiveWell's All Grants Fund or unrestricted fundsDonating to AGF or GWUF instead of GHDF seems better if one highly trusts GiveWell's prioritisation:Donating to GHDF in its current form appears to have the same effect as donating to AGF or GWUF:Like AGF and GWUF, GHDF "aims to improve the health or economic empowerment of people around the world as effectively as possible".My understanding is that GHDF makes more uncertain or riskier grants than GiveWell'sTop Charities Fund[1] (TCF), but AGF,launched in August 2022, now makes such grants too. AGFfunds:GiveWell's top charities.Organisations implementing potentially cost-effective and scalable programs.Established organisations implementing cost-effective programs that GiveWell does not expect to scale.Organisations aiming to influence public health policy.Organisations producing research to aid our grantmaking process.Organizations that raise funds for our recommended charities.GHDF "is managed by Elie Hassenfeld,GiveWell's co-founder [and CEO]".GHDF does not accept applications, and neither does AGF.People in the United Kingdom can support GiveWell's funds and top charities through tax deductible donations viaGiveWell UK, whichwas launched in August 2022 as AGF.Having EA Funds as an additional intermediary seems unnecessary unless it is doing some extra evaluation, which does not appear to be the case.As a side note, I would also say there is a pretty small difference between which one of GiveWell'sfunds, TCF, AGF or GWUF, one donates to:Due tofunging, more donations to TCF will result in AGF granting less money to GiveWell'stop charities.GiveWell arguably has tiny room for more funding given Open Philanthropy'ssupport, so donating to GWUF is similar to donating to AGF[2].However, if you highly trust GiveWell's prioritisation, donating to GWUFis the best option given its greatest flexibility, followed by the AGF and TCF. Yet, donors may prefer donating to TCF to facilitate explanations of their effective giving (e.g. skipping the need to go into expected value orfunging).Case for donating to Giving What We Can's Global Health and Wellbeing FundDonating to GHWF instead of GHDF seems better if one:Welcomes further evaluation of the process behind the recommendations of GiveWell and other evaluators in the global health and wellbeing space (e.g. Happier Lives Institute), trusts GWWC's research team to identify evaluators to rely on, and wants the evaluations to be published, as in GWWC'sevaluations of evaluators. These would be my main reasons for donating to GHWF instead of GHDF, which has not produced public evaluations of GiveWell's recommendations.Is open to donating to funds or organisations not suppo...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/CDt5ShpdABZRn8Tvi/my-quick-thoughts-on-donating-to-ea-funds-global-health-and Fri, 15 Dec 2023 22:43:25 +0000 EA - My quick thoughts on donating to EA Funds' Global Health and Development Fund and what it should do by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:40 no full 10
Jt7p4JcbmiunZwsh6 EA - Announcing Surveys on Community Health, Causes, and Harassment by David Moss Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Surveys on Community Health, Causes, and Harassment, published by David Moss on December 15, 2023 on The Effective Altruism Forum.We are announcing a supplementary survey to gather timely information from the EA community before the next EA Survey in 2024.This survey will contain questions related to:Community health and satisfaction with the EA communityCause prioritization and how EA resources should be allocatedDemographics (which can optionally be skipped if you provided your email address last time and opt for us to link your responses)We are also sending out a separate survey, requested by CEA's Community Health and Special Projects team, focusing primarily on sexual harassment and gender-related experiences:4. EA Climate and Harassment SurveyYou can take the first survey here. This will give you the option to take the Climate and Harassment Survey immediately afterwards, without having to answer the demographic questions twice.Alternatively, you can just take the Climate and Harassment survey here.If you wish to share links to either of these surveys with others, please use the following links:Both surveys:https://rethinkpriorities.qualtrics.com/jfe/form/SV_1G37guBPVAl9TtI?source=sharingClimate and Harassment Survey alone:https://rethinkpriorities.qualtrics.com/jfe/form/SV_bxD0wtmuuXw4KUe?source=sharingThe first survey should be significantly shorter than the main EA Survey, depending on how much detail you choose to provide in the open comment questions and whether you skip the demographic section by providing your email address. The EA Climate and Harassment Survey is estimated to take between 5 and 30 minutes depending on how much detail you choose to provide.Both surveys are planned to close on 1st January 2024.AcknowledgementsThe post is a project of Rethink Priorities, a global priority think-and-do tank, aiming to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits or other entities. If you're interested in Rethink Priorities' work, please consider subscribing to our newsletter. You can explore our completed public work here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
David_Moss https://forum.effectivealtruism.org/posts/Jt7p4JcbmiunZwsh6/announcing-surveys-on-community-health-causes-and-harassment Fri, 15 Dec 2023 15:34:18 +0000 EA - Announcing Surveys on Community Health, Causes, and Harassment by David Moss David_Moss https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:37 no full 13
4ebRNGi3aHWnCw5m8 EA - 80,000 Hours spin out announcement and fundraising by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours spin out announcement and fundraising, published by 80000 Hours on December 18, 2023 on The Effective Altruism Forum.Summary80,000 Hours is spinning out of Effective Ventures.80,000 Hours is fundraising. We're currently seeking $1,200,000 to support our general activities (excluding marketing and grantmaking) until the middle of 2024.Spin outWe are excited to share that 80,000 Hours has officially decided to spin out as a project from Effective Ventures Foundation UK and Effective Ventures US (known collectively as Effective Ventures) and establish an independent legal structure.We're incredibly grateful to the Effective Ventures leadership and team and the other orgs for all their support, particularly in the past year as our community faced a lot of challenges. They devoted countless hours and enormous effort to helping ensure that we and the other orgs could pursue our missions.And we deeply appreciate their support in our spin-out. They recently announced thatall of the other organisations will likewise become their own legal entities; we're excited to continue to work alongside them to improve the world.Back in May, we investigated whether it was the right time to spin out of our parent organisations. We've considered this option at various points in the last three years.There have been many benefits to being part of a larger entity since our founding. But as 80,000 Hours and the other projects have grown, we concluded we can now best pursue our mission and goals independently. EV leadership approved the plan.Becoming our own entity will allow us to:Match our governing structure to our function and purposeDesign operations systems that best meet our staff's needsReduce interdependence on other entities that raises financial, legal, and reputational risksThere's a lot for us to do. We're currently in the process of finding anew CEO to lead us in our next chapter. We'll also need a new board to oversee our work and new staff for our internal systems team and other growing programmes.Which brings us to our next item: we're fundraising!FundraisingWe're currently seeking $1,200,000 to support our general activities (excluding marketing and grantmaking) until the middle of 2024.This post has more information about us, our track record, and our current fundraising round. You candonate directly or view ourfundraising page for additional information and ways to contact us.About usAt 80,000 Hours, we provide research and support to help students, recent graduates, and others have high-impact careers.Our goal is to get talented people working on the world's most pressing problems. We focus on problems that threaten the long-term future, including risks from artificial intelligence, catastrophic pandemics, and nuclear war.To achieve our goal, we:Reach people who might be interested through marketing, engaging and user-friendly content, and word-of-mouth.Introduce people to information, frameworks, and ideas which are useful for having a high-impact career and help them get excited about contributing to solving pressing global problems.Support people in transitioning to careers that contribute to solving pressing global problems.We provide four main services:1. Our websiteWe've written a career guide, dozens of cause area problem profiles, and reviews of impactful career paths.This year, we've had over 4.5 million readers on our website and our research newsletter goes out to more than 350,000 subscribers.2. Our podcastWe host in-depth conversations about the world's most pressing problems and how people can use their careers to solve them.We've had over one million hours of listening time on our podcast to date.3. Our job boardWe maintain a curated list of promising opportunities for impact and career capital on our job boa...

]]>
80000_Hours https://forum.effectivealtruism.org/posts/4ebRNGi3aHWnCw5m8/80-000-hours-spin-out-announcement-and-fundraising-1 Mon, 18 Dec 2023 21:38:20 +0000 EA - 80,000 Hours spin out announcement and fundraising by 80000 Hours 80000_Hours https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:53 no full 3
pWxMJL7HJoiWkro7s EA - Summary: The scope of longtermism by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary: The scope of longtermism, published by Global Priorities Institute on December 18, 2023 on The Effective Altruism Forum.This is a summary of the GPI Working Paper "The scope of longtermism" by David Thorstad. The summary was written by Riley Harris.Recent work argues for longtermism - the position that often our morally best options will be those with the best long-term consequences.[1] Proponents of longtermism sometimes suggest that in most decisions expected long-term benefits outweigh all short-term effects. In 'The scope of longtermism', David Thorstad argues that most of our decisions do not have this character. He identifies three features of our decisions that suggest long-term effects are only relevant in special cases: rapid diminution - our actions may not have persistent effects, washing out - we might not be able to predict persistent effects, and option unawareness - we may struggle to recognise those options that are best in the long term even when we have them.Rapid diminutionWe cannot know the details of the future. Picture the effects of your actions rippling out in time - at closer times, the possibilities are clearer. As our prediction journeys further, the details become obscured. Although the probability of desired effects becomes ever lower, the effects might grow larger. In the long run, we could perhaps improve many billions or trillions of lives.When we weight value by probability, the value of our actions will depend on a race between diminishing probabilities and growing possible impact. If the value increases faster than probabilities fall, the expected values of the action might be vast. Alternatively, if the chance we have such large effects falls dramatically compared to the increase in value, the expected value of improving the future might be quite modest.Thorstad suggests that the latter of these effects dominates, so we should believe we have little chance of making an enormous difference. Consider a huge event that would be likely to change the lives of people in your city - perhaps, your city being blown up. Surprisingly, even this might not have large long-run impacts. Studies indicate that just half a century after cities in Japan and Vietnam were bombed, there was no longer any detectable effect on population size, poverty rates and consumption patterns.[2] To be fair, some studies indicate that some events have long-term effects,[3] but Thorstad thinks '...the persistence literature may not provide strong support' to longtermism.Washing outThorstad's second concern with longtermism relates to our ability to predict the future. If our actions can affect the future in a huge way, these effects could be wonderful or terrible. They will also be very difficult to predict. The possibility that our acts will be enormouslybeneficial does not make our acts particularly appealing when they might be equally terrible. If our ability to forecast long-term outcomes is limited, the potential positive and negative values would wash out in expectation.Thorstad identifies three reasons to doubt our ability to forecast the long term. First, we have no track record of making predictions at the timescale of centuries or millennia. Our ability to predict only 20-30 years into the future is not great - and things get more difficult when we try to glimpse the further future.Second, economists, risk analysts and forecasting practitioners doubt our ability to make long-term predictions, and often refuse to make them.[5] Third, we want to forecast how valuable our actions are over the long run. But value is a particularly difficult target - it includes many variables such as the number of people alive, their health, longevity, education and social inclusion.That said, we sometimes have some evidence, and this evidence might point t...

]]>
Global Priorities Institute https://forum.effectivealtruism.org/posts/pWxMJL7HJoiWkro7s/summary-the-scope-of-longtermism Mon, 18 Dec 2023 20:26:17 +0000 EA - Summary: The scope of longtermism by Global Priorities Institute Global Priorities Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:45 no full 4
2cZAzvaQefh5JxWdb EA - Bringing about animal-inclusive AI by Max Taylor Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bringing about animal-inclusive AI, published by Max Taylor on December 18, 2023 on The Effective Altruism Forum.I've been working in animal advocacy for two years and have an amateur interest in AI. All corrections, points of disagreement, and other constructive feedback are very welcome. I'm writing this in a personal capacity, and am not representing the views of my employer.Many thanks to everyone who provided feedback and ideas.IntroductionIn aprevious post, I set out some of the positive and negative impacts that AI could have on animals. The present post sets out a few ideas for what an animal-inclusive AI landscape might look like: what efforts would we need to see from different actors in order for AI to be beneficial for animals? This is just a list of high-level suggestions, and I haven't tried to prioritize them, explore them in detail, or suggest practical ways to bring them about. I also haven't touched on the (potentially extremely significant) role of alternative proteins in all this.We also have a new landing page for people interested in the intersection of AI and animals:www.aiforanimals.org. It's still fairly basic at this stage but contains links to resources and recent news articles that you might find helpful if you're interested in this space. Please feel free to provide feedback to help make this a more useful resource.Why do we need animal-inclusive AI?As described in theprevious post, future AI advances could further disempower animals and increase the depth and scale of their suffering. However, AI could also help bring about a radical improvement in human-animal relations and greatly facilitate efforts to improve animals' wellbeing.For example, just in the last month, news articles have covered potential AI risks for animals includingAI's role in intensive shrimp farming, theEU-funded 'RoBUTCHER' that will help automate the pig meat processing industry, (potentiallymaking intensive animal agriculture more profitable), and the potential of Large Language Models toentrench speciesist biases. On the more positive side, there were also articles covering the potential for AI toradically improve animal health treatment,support the expansion of alternative protein companies,reduce human-animal conflicts,facilitate human-animal communication, andprovide alternatives to animal testing. These recent stories are just the tip of the iceberg, not only for animals that are being directly exploited - or cared for - by humans, but also for those living in the wild.AI safety for animals doesn't need to come at the expense of AI safety for humans. There are bound to be many actions that both the animal advocacy and AI safety communities can take to reinforce each others' work, given the considerable overlap in our priorities and worldviews. However, there are also bound to be some complex trade-offs, and we can't assume that efforts to make AI safe for humans will inevitably also benefit all other species.economic growth andadvances in healthcare) while being a disaster for other animals. Targeted efforts are needed to prevent that happening, including (but definitely not limited to):Explicitly mentioning animals in written commitments, both non-binding and binding;Using those commitments as a stepping stone to ensure animal-inclusive applications of AI;Representing animals in decision-making processes;Conducting and publishing more research on AI's potential impacts on animals; andBuilding up the 'AI safety x animal advocacy' community.The rest of this post provides some information, examples, and resources around those topics.Moving towards animal-inclusive AIExplicitly mention animals in non-binding commitmentsGovernmental commitmentsOn November 1, 2023, 29 countries signed theBletchley Declaration at the AI Safety Su...

]]>
Max Taylor https://forum.effectivealtruism.org/posts/2cZAzvaQefh5JxWdb/bringing-about-animal-inclusive-ai Mon, 18 Dec 2023 19:22:21 +0000 EA - Bringing about animal-inclusive AI by Max Taylor Max Taylor https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 32:00 no full 6
ypegmFzAsupyHuzju EA - OpenAI's Superalignment team has opened Fast Grants by Yadav Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI's Superalignment team has opened Fast Grants, published by Yadav on December 18, 2023 on The Effective Altruism Forum.From their website:We are offering $100K-$2M grants for academic labs, nonprofits, and individual researchers. For graduate students, we are sponsoring a one-year $150K OpenAI Superalignment Fellowship: $75K in stipend and $75K in compute and research funding.Things they're interested in funding:With these grants, we are particularly interested in funding the following research directions:Weak-to-strong generalization: Humans will be weak supervisors relative to superhuman models. Can we understand and control how strong models generalize from weak supervision?Interpretability: How can we understand model internals? And can we use this to e.g. build an AI lie detector?Scalable oversight: How can we use AI systems to assist humans in evaluating the outputs of other AI systems on complex tasks?Many other research directions, including but not limited to: honesty, chain-of-thought faithfulness, adversarial robustness, evals and testbeds, and more.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Yadav https://forum.effectivealtruism.org/posts/ypegmFzAsupyHuzju/openai-s-superalignment-team-has-opened-fast-grants Mon, 18 Dec 2023 07:34:07 +0000 EA - OpenAI's Superalignment team has opened Fast Grants by Yadav Yadav https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:22 no full 9
vtMf7wFp7Suw7eRQd EA - Launching Asimov Press by xander balwit Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Launching Asimov Press, published by xander balwit on December 18, 2023 on The Effective Altruism Forum.A biotechnology blog with a probabilistic and effective bentArtwork by DalbertToday, Niko McCarty and I are launching Asimov Press, a digital magazine dedicated to biotechnology. It will publish lucid writing that leads people to explore the ways that biotechnology can effectively be used to do good.Please go here to read the full announcement and subscribe.Asimov Press is a new publishing venture modeled on Stripe Press, that will produce a newsletter, magazine, and books that feature writing about biological progress. Our primary focus will be on biotechnology, but we will also publish pieces on metascience and adjacent themes. Newsletters and magazines will be free to read. Our mission is to spread ideas that elucidate the promise of biology, take its concomitant risks seriously, and direct talent toward solving pressing problems.Our published work has three features that I want to highlight here: Pieces will steel-man alternative approaches, focus on high-impact but often underrated facets of biotechnology, and strive for mechanistic and probabilistic reasoning.Steelman: Biotechnology is not a panacea. Simple solutions are often the best solutions; no engineering required. When Ignaz Semmelweis suggested that doctors at an Austrian Hospital wash their hands between performing autopsies and delivering babies, the maternal mortality rate fell from around 25 percentto 1 percent. In another example, a public health campaign to iodize salt in Switzerland helped bring down the rate of deaf-mute birthsfivefold in just 8 years. Rather than demand answers from biotechnology, we can often make a positive difference in the world by investing in better public health, improving infrastructure and education, or scaling up existing inventions that have already proven effective.Even so, simplicity can feel unsatisfactory or even provocative. Semmelweis, considered arrogant by senior doctors, was ostracized and eventually dismissed from his post. An early pioneer in germ theory, hedied in a Viennese insane asylum, after being severely beaten by guards. In Switzerland, although evidence for the efficacy of iodized salt was robust, some eminent scientists spoke out against the interventions - advocating for elaborate alternative treatments. We'll do our best to avoid publishing work that we wish were true, and instead aim to provide balanced, honest, and rigorous coverage of biotechnology.High-impact solutions: Progress often makes its greatest strides in areas that are not widely covered by the media. We will de-emphasize medical topics and focus instead on areas such as animal welfare and climate resiliency, where biotechnology has proven astonishingly effective yet remains underexplored. We want people to focus on what is most urgent and tractable, and not necessarily on what is most glamorous.Laundry is one example. Engineered enzymes that remove stains in cold water reduced the energy required to do laundry byabout 90 percent. Laundry may not be as immediately headline-grabbing as new cancer therapies, but it provides a concrete and ingenious solution to a demonstrable need.Mechanisms: Biotechnology shouldn't be a mystery. Although its mechanisms are often infinitesimal, biology is material rather than magic. Cells are made from collections of atoms that we can manipulate, visualize, and control. Every engineering application has a mechanistic and tangible explanation. Often, these explanations are astonishingly beautiful. We encourage our writers to delve deeper and elucidate complex concepts in clear, illustrative prose.Asimov Press will publish one feature article every two weeks, with additional newsletters and shorter essays scattered in between. Articl...

]]>
xander_balwit https://forum.effectivealtruism.org/posts/vtMf7wFp7Suw7eRQd/launching-asimov-press Mon, 18 Dec 2023 01:52:40 +0000 EA - Launching Asimov Press by xander balwit xander_balwit https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:07 no full 12
FAJ5DCmucKnabkBNL EA - Incubating AI x-risk projects: some personal reflections by Ben Snodin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Incubating AI x-risk projects: some personal reflections, published by Ben Snodin on December 19, 2023 on The Effective Altruism Forum.In this post, I'll share some personal reflections on the work of the Rethink Priorities Existential Security Team (XST) this year on incubating projects to tackle x-risk from AI.To quickly describe the work we did: with support from the Rethink Priorities Special Projects team (SP), XST solicited and prioritised among project ideas, developed the top ideas into concrete proposals, and sought founders for the most promising of those. As a result of this work, we ran one project internally and, if all goes well, we'll launch an external project in early January.Note that this post is written from my (Ben Snodin's) personal perspective. Other XST team members or wider Rethink Priorities staff wouldn't necessarily endorse the claims made in this post. Also, the various takes I'm giving in this post are generally fairly low confidence and low resilience. These are just some quick thoughts based on my experience leading a team incubating AI x-risk projects for a little over half a year. I was keen to share something on this topic even though I didn't have time to come to thoroughly considered views.Key pointsBetween April 1st and December 1st, Rethink Priorities dedicated approximately 2.5 full-time equivalent (FTE) years of labour, mostly from XST, towards XST's strategy for incubating AI x-risk projects.We decided to run one project ourselves, a project in the AI advocacy space that we've been running since June.We're in the late stages of launching one new project that works to equip talented university students interested in mitigating extreme AI risks with the skills and background to enter a US policy career.A very rough estimate, based on our inputs and outputs to date, suggests that 5 FTE from a team with a similar skills mix to XST+SP would launch roughly 2 new projects per year.XST will now look into other ways to support high-priority projects such as in-housing them, rather than pursuing incubation and looking for external founders by default, while the team considers its next steps.Reasons for the shift include: an unfavourable funding environment, a focus on the AI x-risk space narrowing the founder pool and making it harder to find suitable project ideas, and challenges finding very talented founders in general.I think the ideal team working in this space has: lots of prior incubation experience, significant x-risk expertise and connections, excellent access to funding and ability to identify top founder talent, and very strong conviction.I'd often suggest getting more experience founding stuff yourself rather than starting an incubator - and I think funding conditions for AI x-risk incubation will be more favourable in 1-2 years.There are many approaches to AI x-risk incubation that seem promising to me that we didn't try, including cohort-based Charity Entrepreneurship-style programs, a high-touch approach to finding founders, and a founder in residence program.Summary of inputs and outcomesInputsBetween April 1st and December 1st 2023, Rethink Priorities dedicated approximately 2.5 full-time equivalent (FTE) years of labour towards incubating projects aiming to reduce existential risk from AI. XST had 4 full-time team members working on incubating AI x-risk projects during this period,[2] and from August 1st to December 1st 2023, roughly one FTE from SP collaborated with XST to identify and support potential founders for a particular project.In this period, XST also devoted roughly 0.4 FTE-years working directly on an impactful project in the AI advocacy space that stemmed from our incubation work.The people working on this were generalist and relatively junior, with 1-5 years' experience in x-risk-relat...

]]>
Ben Snodin https://forum.effectivealtruism.org/posts/FAJ5DCmucKnabkBNL/incubating-ai-x-risk-projects-some-personal-reflections Tue, 19 Dec 2023 22:23:37 +0000 EA - Incubating AI x-risk projects: some personal reflections by Ben Snodin Ben Snodin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:27 no full 2
dHp444fEE3pJYDS9N EA - AI governance talent profiles I'd like to see apply for OP funding by JulianHazell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI governance talent profiles I'd like to see apply for OP funding, published by JulianHazell on December 19, 2023 on The Effective Altruism Forum.We (Open Philanthropy) have a program called the "Career Development and Transition Funding Program" (CDTF) - see this recentForum post for more information on the recent updates we made to it.This program supports a variety of activities, including (but not necessarily limited to) graduate study, unpaid internships, self-study, career transition and exploration periods, postdocs, obtaining professional certifications, online courses, and other types of one-off career-capital-building activities. To learn more about the entire scope of the CTDF program, I'd encourage you to read the full description on our website, which contains a broad list of hypothetical applicant profiles that we're looking for.This brief post, which partially stems from research I recently conducted through conversations with numerous AI governance experts about current talent needs, serves as an addendum to the existing information on the CDTF page. Here, I'll outline a few talent profiles I'm particularly excited about getting involved in the AI governance space. To be clear: this list is only a small slice of the many different talent profiles I'd be excited to see apply to the CDTF program - there are many other promising talent profiles out there that I won't manage to cover below.My aim is not just to encourage more applicants from these sorts of folks to the CDTF program, but also to broadly describe what I see as some pressing talent pipeline gaps in the AI governance ecosystem more generally.Hypothetical talent profiles I'm excited aboutA hardware engineer at a leading chip design company (Nvidia), chip manufacturer (TSMC), chip manufacturing equipment manufacturer (ASML) or cloud compute provider (Microsoft, Google, Amazon) who has recently become interested in developinghardware-focused interventions and policies that could reduce risks from advanced AI systems via improved coordination.A machine learning researcher who wants to pivot to working on the technical side of AI governance research and policy, such as evaluations, threat assessments, or other aspects ofAI control.An information security specialist who has played a pivotal role in safeguarding sensitive data and systems at a major tech company, and would like to use those learnings tosecure advanced AI systems from theft.Someone with 10+ years of professional experience, excellent interpersonal and management skills, and an interest in transitioning to policy work in the US, UK, or EU.A policy professional who has spent years working in DC, with a strong track record of driving influential policy changes. This individual could have experience either in government as a policy advisor or in a think tank or advocacy group, and has developed a deep understanding of the political landscape and how to navigate it. An additional bonus would be if the person has experience driving bipartisan policy change by working with both sides of the aisle.A legal scholar or practicing lawyer with experience in technology, antitrust, and/or liability law, who is interested in applying these skills tolegal questions relevant to the development and deployment of frontier AI systems.A US national security expert with experience in international agreements and treaties, who wants to producepolicy reports that consider potential security implications of advanced AI systems.An experienced academic with an interdisciplinary background. They have strong mentoring skills, and a clear vision for establishing and leading an AI governance research institution at a leading university focused on exploring questions such asmeasuring and benchmarking AI capabilities.A biologist who has either ...

]]>
JulianHazell https://forum.effectivealtruism.org/posts/dHp444fEE3pJYDS9N/ai-governance-talent-profiles-i-d-like-to-see-apply-for-op Tue, 19 Dec 2023 17:45:33 +0000 EA - AI governance talent profiles I'd like to see apply for OP funding by JulianHazell JulianHazell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:59 no full 4
PdGZti3Aqd9ZD8jxt EA - Why Effective Giving Incubation - Report by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Effective Giving Incubation - Report, published by CE on December 19, 2023 on The Effective Altruism Forum.TLDR: At Charity Entrepreneurship, in collaboration with Giving What We Can, we have recently launched a new program: Effective Giving Incubation. In this post, we present our scoping report that explains the reasoning behind creating more Effective Giving Initiatives (EGIs). Learn why we think this is a promising intervention, which locations are optimal for launching, and for whom this would be an ideal career fit.Quick reminder:You can apply to the Effective Giving Incubation program by January 14, 2024. The program will run online from April 15 to June 7, 2024, with 2 weeks in person in London.[APPLY NOW]or sign up for the Effective Giving Incubation interactive webinar on January 4, 5 PM Singapore Time/ 6 PM Japan Time/ 3.30 PM India Time/ 10 AM UK Time/ 11 AM Belgium Time.[SIGN UP]CE is excited about launching EGIs in Ireland, Belgium, Italy, India, Singapore, South Korea, Japan, United Arab Emirates, Mexico, the US and France. We would appreciate your help in reaching potential applicants who are interested in working in these countries. Connect us via email at: ula@charityentrepreneurship.comOne paragraph summaryIn 2024 we are running a special edition of the Charity Entrepreneurship Incubation Program in collaboration with Giving What We Can focused on Effective Giving Initiatives (EGI). EGIs are entities that focus on raising awareness and funneling public and philanthropic donations to the most cost-effective charities worldwide. They will be broadly modeled on existing organizations such as Giving What We Can (GWWC), Effektiv Spenden, and others.We have identified some possible high-priority countries where we believe they will be most successful. Depending on the country and what is most impactful, these initiatives could be fully independent or collaborate with existing projects.Disclaimer: It is important to note that EGIs, including those we intend to incubate, are independent from CE and make their own educated choices ofwhich charities to promote and where to donate funds. CE does not require or encourage any specific recommended charities (such as our prior incubated charities) to be supported by EGIs.Background to this researchCharity Entrepreneurship's (CE) mission is to cause more effective non-profit organizations to exist worldwide. To accomplish this mission, we connect talented individuals with high-impact intervention opportunities and provide them with training, colleagues, funding opportunities, and ongoing operational support.For this scoping report, CE collaborated withGiving What We Can (GWWC) to better understand the opportunities Effective Giving Initiatives (EGIs) present as a high-impact intervention, what contributes to the success of such organizations, and where they might be best founded. GWWC was one of the first organizations to champion giving to high-impact nonprofits, has an extensive global network of people interested in effective giving, and more than a decade of experience operating an organization focused on promoting effective charities.EGIs are organizations or projects that aim to promote, typically in a specific target country, the idea of donating to cost-effective charities. They mostly engage in a mix of educational and fundraising activities, with the explicit aim of trying to move money to the most cost-effective interventions that aim to tackle the world's most pressing problems.This report builds on the experience of Giving What We Can, in-depth interviews with experts in the field and successful founders of EGIs, as well as quantitative & qualitative analysis of potential target areas. This report follows a somewhat different methodology than our regular research process used to ...

]]>
CE https://forum.effectivealtruism.org/posts/PdGZti3Aqd9ZD8jxt/why-effective-giving-incubation-report Tue, 19 Dec 2023 15:08:18 +0000 EA - Why Effective Giving Incubation - Report by CE CE https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 33:29 no full 5
nSgcLuyu2ypRQhT8r EA - EA events in Africa Interest Form: Exploring EA events on the continent by Hayley Martin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA events in Africa Interest Form: Exploring EA events on the continent, published by Hayley Martin on December 19, 2023 on The Effective Altruism Forum.We're excited to explore the potential of organising EA events on the continent, and we want YOUR input to shape these events. Your perspectives are crucial in understanding the community's aspirations and needs. Please take a moment to share your preferences and expectations by filling out our quick Google Form. Your insights will inform our discussions about the possibility of EA events in Africa, including a potential EAGx ,and help create an enriching and collaborative experience for all.[Fill out the EAGx Africa Event Interest Form here]Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Hayley Martin https://forum.effectivealtruism.org/posts/nSgcLuyu2ypRQhT8r/ea-events-in-africa-interest-form-exploring-ea-events-on-the Tue, 19 Dec 2023 14:12:56 +0000 EA - EA events in Africa Interest Form: Exploring EA events on the continent by Hayley Martin Hayley Martin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:52 no full 7
bwtpBFQXKaGxuic6Q EA - Effective Aspersions: How the Nonlinear Investigation Went Wrong by TracingWoodgrains Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Aspersions: How the Nonlinear Investigation Went Wrong, published by TracingWoodgrains on December 19, 2023 on The Effective Altruism Forum.The New York TimesPicture a scene: the New York Times is releasing an article on Effective Altruism (EA) with an express goal to dig up every piece of negative information they can find. They contact Émile Torres, David Gerard, and Timnit Gebru, collect evidence about Sam Bankman-Fried, the OpenAI board blowup, and Pasek's Doom, start calling Astral Codex Ten (ACX) readers to ask them about rumors they'd heard about affinity between Effective Altruists, neoreactionaries, and something called TESCREAL.They spend hundreds of hours over six months on interviews and evidence collection, paying Émile and Timnit for their time and effort. The phrase "HBD" is muttered, but it's nobody's birthday.A few days before publication, they present key claims to the Centre for Effective Altruism (CEA), who furiously tell them that many of the claims are provably false and ask for a brief delay to demonstrate the falsehood of those claims, though their principles compel them to avoid threatening any form of legal action. The Times unconditionally refuses, claiming it must meet a hard deadline.The day before publication, Scott Alexander gets his hands on a copy of the article and informs the Times that it's full of provable falsehoods. They correct one of his claims, but tell him it's too late to fix another.The final article comes out. It states openly that it's not aiming to be a balanced view, but to provide a deep dive into the worst of EA so people can judge for themselves. It contains lurid and alarming claims about Effective Altruists, paired with a section of responses based on its conversation with EA that it says provides a view of the EA perspective that CEA agreed was a good summary. In the end, it warns people that EA is a destructive movement likely to chew up and spit out young people hoping to do good.In the comments, the overwhelming majority of readers thank it for providing such thorough journalism. Readers broadly agree that waiting to review CEA's further claims was clearly unnecessary. David Gerard pops in to provide more harrowing stories. Scott gets a polite but skeptical hearing out as he shares his story of what happened, and one enterprising EA shares hard evidence of one error in the article to a mixed and mostly hostile audience. A few weeks later, the article writer pens a triumphant follow-up about how well the whole process went and offers to do similar work for a high price in the future.This is not an essay about the New York Times.The rationalist and EA communities tend to feel a certain way about the New York Times. Adamantly a certain way. Emphatically a certain way, even. I can't say my sentiment is terribly different - in fact, even when I have positive things to say about the New York Times, Scott has a way of saying them more elegantly, as in The Media Very Rarely Lies.That essay segues neatly into my next statement, one I never imagined I would make:You are very very lucky the New York Times does not cover you the way you cover you.A Word of IntroductionSince this is my first post here, I owe you a brief introduction. I am a friendly critic of EA who would join you were it not for my irreconcilable differences in fundamental values and thinks you are, by and large, one of the most pleasant and well-meaning groups of people in the world. I spend much more time in the ACX sphere or around its more esoteric descendants and know more than anyone ought about its history and occasional drama. Some of you know me from my adversarial collaboration in Scott's contest some years ago, others from my misadventures in "speedrunning" college, still others from my exhaustively detailed deep dives in...

]]>
TracingWoodgrains https://forum.effectivealtruism.org/posts/bwtpBFQXKaGxuic6Q/effective-aspersions-how-the-nonlinear-investigation-went Tue, 19 Dec 2023 13:43:03 +0000 EA - Effective Aspersions: How the Nonlinear Investigation Went Wrong by TracingWoodgrains TracingWoodgrains https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 51:20 no full 8
LqMiyLTy7gZ6vbWoo EA - Some fun lessons I learned as a junior regrantor by Joel Becker Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some fun lessons I learned as a junior regrantor, published by Joel Becker on December 20, 2023 on The Effective Altruism Forum.Title inhomage to Linch.In the second half of 2022, I was aManifund regrantor. I ended up funding:Holly Elmore to "[organize] for a frontier AI moratorium." ($2.5k.)Jordan Schneider/ChinaTalk to produce "deep coverage of China and AI." ($17.55k.)Robert Long to conduct "empirical research into AI consciousness and moral patienthood." ($7.2k.)Greg Sadler/GAPorganizational expenses. ($10k.)Nuño Sempere to "make ALERT happen." ($8k.)Zhonghao He to "[map] neuroscience and mechanistic interpretability." ($1.75k.)Alexa Pan to write an "explainer and analysis of CNCERT/CC (国家互联网应急中心)." ($1.5k.)Marcel van Diemen to build "The Base Rate Times." ($2.5k, currently unclaimed.)You can find my decisions and comments on grants on my profile. Here, I want to reflect on lessons learned from this wonderful opportunity.I was pretty wrong about my edgeInmy bio, I wrote:To the extent that I have an edge as a regrantor, I think it comes from having an unusually large professional network. This, plus not having serious expertise in any particular area, makes me excited to invest in "people not projects."I had previously ran a prestigious fellowship program where (by the end) I thought I was pretty good at selection. Successfully running an analogous selection process over people recommended from my wide network (this time for grants) seemed like it would transfer neatly. Austin, who co-runs Manifund, and who participated in my earlier program, seemed to agree on both counts.I still believe the premises, and so remain hopeful that this could be an edge in future. But it was largely unimportant for my recent regranting experience. (Only the grant to Greg Sadler/GAP came out of asking my network for recommendations; only the grant to Robert Long came from private knowledge I would have had regardless of being a regrantor.)I haven't fully figured out why this was. My current best guesses are:What matters most for 'deal flow' is not having a talented network but in-person conversations (with people in a talented network). 2023 was perhaps my most socially isolated non-COVID year.A fraction of a $50k budget is not enough for the kinds of recommendations one might want from one's network. I don't hear about opportunities like "this great person should start that great organization" because these would require more than $50k.Recommenders aren't naturally in the mode of looking out for nor dreaming up novel opportunities.Evidence in favor: Greg Sadler was recommended by someone who previously regranted to Greg Sadler.Perhaps I could have found a better way to get recommenders to change mode in conversations with me. Or perhaps this problem would fix itself if Manifund became better-known.But I have been happy about my low-level strategyAbove the edge section of my bio, Iwrote:I plan on using my regranting role to optimize for "good AI/bio funding ecosystem" and not "perceived ROI of regrants I make personally." I think that this means trying to:Be really cooperative behind the scenes. (E.g. sharing information and strategies with other regrantors,proactively helping Manifund founders with strategy.)Post questions about/evaluations of grants publicly.Work quickly.Pursue grants that might otherwise fall through the gaps. (E.g. because they're too small, or politically challenging for other funders, or from somewhat unknown grantees, or from grantees who are unaware that they should ask for funding.)Not get too excited about grants where (1) evaluation would benefit strongly from a project-first investment thesis (e.g. supporting AI safety agenda X vs. Y) or (2) the ideas are obvious enough that (to the extent that the ideas are good)...

]]>
Joel Becker https://forum.effectivealtruism.org/posts/LqMiyLTy7gZ6vbWoo/some-fun-lessons-i-learned-as-a-junior-regrantor Wed, 20 Dec 2023 20:22:29 +0000 EA - Some fun lessons I learned as a junior regrantor by Joel Becker Joel Becker https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:45 no full 3
cD7fHYE5vQx6RwmvD EA - Should 80,000 Hours be more transparent about how they rank problems and careers? by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should 80,000 Hours be more transparent about how they rank problems and careers?, published by Vasco Grilo on December 20, 2023 on The Effective Altruism Forum.QuestionI wonder whether 80,000 Hours should be more transparent about how they rank problems and careers. I think so:I suspect 80,000 Hours' rankings play a major role in shaping the career choices of people who get involved in EA.According to the 2022 EA Survey, 80,000 Hours was an important factor to get involved in EA for 58.0 % of the total 3.48 k respondents, and for 52 % of the people getting involved in 2022.The rankings have changed a few times. 80,000 Hours briefly explained why in their newsletter, but I think having more detail about the whole process would be good.Greater reasoning transparency facilitates constructive criticism.I understand the rankings are informed by 80,000 Hours' research process and principles, but I would also like to have a mechanistic understanding of how the rankings are produced. For example, do the rankings result from aggregating the personal ratings of some people working at and advising 80,000 Hours? If so, who, and how much weight does each person have? May this type of information be an infohazard? If yes, why?In any case, I am glad 80,000 Hours does have rankings. The current ones are presented as follows:Problems:5 ranked "most pressing world problems"."These areas are ranked roughly by our guess at the expected impact of an additional person working on them, assuming your ability to contribute to solving each is similar (though there's a lot of variation in the impact of work within each issue as well)".10 non-ranked "similarly pressing but less developed areas"."We'd be equally excited to see some of our readers (say, 10-20%) pursue some of the issues below - both because you could do a lot of good, and because many of them are especially neglected or under-explored, so you might discover they are even more pressing than the issues in our top list"."There are fewer high-impact opportunities working on these issues - so you need to have especially good personal fit and be more entrepreneurial to make progress".10 "world problems we think are important and underinvested in". "We'd also love to see more people working on the following issues, even though given our worldview and our understanding of the individual issues, we'd guess many of our readers could do even more good by focusing on the problems listed above".2 non-ranked "problems many of our readers prioritise". "Factory farming and global health are common focuses in the effective altruism community. These are important issues on which we could make a lot more progress".8 non-ranked "underrated issues". "There are many more issues we think society at large doesn't prioritise enough, where more initiatives could have a substantial positive impact. But they seem either less neglected and tractable than factory farming or global health, or the expected scale of the impact seems smaller".Careers:10 ranked "highest-impact career paths our research has identified so far"."These are guides to some more specific career paths that seem especially high impact. Most of these are difficult to enter, and it's common to start by investing years in building the skills above before pursuing them. But if any might be a good fit for you, we encourage you to seriously consider it"."We've ranked these paths roughly in terms of our take on their expected impact, holding personal fit for each fixed and given our view of the world's most pressing problems. But your personal fit matters a lot for your impact, and there is a lot of variation within each path too - so the best opportunities in one lower on the list will often be better than most of the opportunities in a higher-ranked one".14 non-ranked "hi...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/cD7fHYE5vQx6RwmvD/should-80-000-hours-be-more-transparent-about-how-they-rank Wed, 20 Dec 2023 09:07:17 +0000 EA - Should 80,000 Hours be more transparent about how they rank problems and careers? by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:37 no full 5
WYgpmAmhWxupYvKxt EA - Where are the GWWC team donating in 2023? by Luke Freeman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where are the GWWC team donating in 2023?, published by Luke Freeman on December 20, 2023 on The Effective Altruism Forum.In this post several Giving What We Can team members have volunteered to share their personal giving decisions for 2023.Wondering why it's beneficial to talk about your donations? Check out our blog post, "Should we be private or public about giving to charity?", where we explore the advantages of being open about our philanthropy. We also recommend reading Claire Zabel's insightful piece, "Talk about donations earlier and more", which underscores the importance of discussing charitable giving more frequently and openly.If you enjoy this post, we also encourage you to check out similar posts from teams at other organisations who've shared their personal giving this year too, such as GiveWell and CEA.Finally, we want to hear from you too! We encourage you to join the conversation by sharing your own donation choices in the comments on "Where are you donating this year and why?". This is a wonderful opportunity to learn from each other and to inspire more thoughtful and impactful giving.Now, let's meet some of our team and learn about their giving decisions in 2023!Fabio KuhnLead Software EngineerI took the Giving What We Can Pledge in early 2021 and have consistently contributed slightly above 10% of my income to effective charities since then.Similarly as last year, in 2023, the majority of my donations have been directed towards The Humane League (50%) and The Good Food Institute (5%).I continue to be profoundly unsettled by our treatment of other sentient species. Additionally, I am concerned about the potential long-term risk of moral value lock-in resulting from training AI with our current perspectives on animals. This could lead to a substantial increase in animal suffering unless we promptly address this matter. Considering my view on the gravity of the issue and the apparent lack of sufficient funding in the field, I am positive that contributing to this cause is one of the most impactful options for my donations.The majority of my donations are processed through Effektiv Spenden, allowing for tax-deductible donations in Switzerland.Additionally, I made other noteworthy donations this year:15% to the Effektiv Spenden "Fight Poverty" fund, which is based on the GiveWell "All Grants Fund".5% to Effektiv Spenden itself, supporting the maintenance and development of the donation platform.A contribution of 100 CHF to the climate fund, as an attempt of moral offsetting for my carbon footprint.Grace AdamsHead of MarketingI took a trial pledge in 2021 for 3% of my income and then the Giving What We Can Pledge in 2022 for at least 10% of my income over my lifetime.My donations since learning about effective giving have primarily benefitted global health and wellbeing charities so far but have also supported ACE and some climate-focused charities as part of additional offsetting.I recently gave $1000 AUD to the Lead Exposure Elimination Project after a Giving Game I ran and sponsored in Melbourne. With the remaining donations, I'm likely to split my support between Giving What We Can's operations (as I now think that my donation to GWWC is likely to be a multiplier and create even more donations for highly effective charities - thanks to our impact evaluation) and GiveWell's recommendations via Effective Altruism Australia so I can receive a tax benefit (and therefore donate more).Lucas MooreEffective Giving Global Coordinator and IncubatorI took the Giving What We Can Pledge in 2017. Initially, I gave mainly to Against Malaria Foundation, but over time, I started giving to a wider variety of charities and causes as I learnt more about effective giving.In 2022, I gave mostly to GiveDirectly, and so far in 2023, my donations h...

]]>
Luke Freeman https://forum.effectivealtruism.org/posts/WYgpmAmhWxupYvKxt/where-are-the-gwwc-team-donating-in-2023 Wed, 20 Dec 2023 07:38:16 +0000 EA - Where are the GWWC team donating in 2023? by Luke Freeman Luke Freeman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:12 no full 6
z2bypuiWz3PRgEL9i EA - CEEALAR's Theory of Change by CEEALAR Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEEALAR's Theory of Change, published by CEEALAR on December 20, 2023 on The Effective Altruism Forum.Post intends to be a brief description of CEEALAR's updated Theory of Change.With an increasingly high calibre of guests, more capacity, and an improved impact management process, we believe that the Centre for Enabling EA Learning & Research is the best it's ever been. As part of a series of posts - see here and here - explaining the value of CEEALAR to potential funders (e.g. you!), we want to briefly describe our updated Theory of Change. We hope readers leave with an understanding of how our activities lead to the impact we want to see.Our Theory of ChangeOur goal is to safeguard the flourishing of humanity by increasing the quantity and quality of dedicated EAs working on reducing global catastrophic risks (GCRs) in areas such as Advanced AI, Biosecurity, and Pandemic Preparedness. We do this by providing a tailor-made environment for promising EAs to rapidly upskill, perform research, and work on charitable entrepreneurial projects. More specifically, we aim to help early-career professionals who 1) Have achievements in other fields but are looking to transition to a career working on reducing GCRs; or 2) Are already working on reducing GCRs and would benefit from our environment.Eagle-eyed readers will notice we now refer to supporting work "reducing GCRs" rather than simply "high impact work". We have made this change in our prioritisation as it reflects the current needs of the world and the consequent focus on GCRs by the wider EA movement, as well as the reality of our applicant pool in recent months (>95% of applicants were focused on GCRs).Our updated theory of change - see below - posits that by providing an environment to such EAs that is highly supportive of their needs, enables increased levels of productivity, and encourages collaboration and networking, we can counterfactually impact their career trajectories and, more generally, help in the prevention of global catastrophic events.This Theory of Change reflects our belief that there is something broken about the pipeline for both talent and projects in the GCR community, and that programs that simply supply training to early-career EAs are not enough on their own. We fill an important niche because:At just $750 to support a grantee for 1 month, we are particularly cost-effective. For funders, this means reduced risk: you can make a $4,500 investment in a person for six months rather than a $45,000 investment, or use that $45,000 for hits-based giving and invest in ten people rather than one.Since we remove barriers to entering full-time careers in reducing GCRs, the counterfactual impact is high. Indeed, when considering applications we look for prospective grantees who otherwise would not be able to pursue such careers, be that because they currently lack financial security, connections / credentials, or a conducive environment.As grantees do independent research & projects, their work is often cutting-edge. When it comes to preventing global catastrophic events, it is imperative to support ambitious individuals who are motivated to try innovative approaches and further their specific fields.Finally, because CEEALAR only offers time-limited stays (the average stay is ~4-6 months) and prioritises selecting agentic individuals as grantees, our alumni are committed to ensuring their learning translates into action.This final bullet point can be seen in our alumni who have gone on to have impactful careers (see our website for further details). For example:Chris Leong, now Principal Organiser for AI Safety and New Zealand (before CEEALAR (BC) he was a graduate likely to take a non-EA corporate role)Sam Deverett, now an ML Researcher in the MIT Fraenkel Lab and an incoming AI Futures Fel...

]]>
CEEALAR https://forum.effectivealtruism.org/posts/z2bypuiWz3PRgEL9i/ceealar-s-theory-of-change 95% of applicants were focused on GCRs).Our updated theory of change - see below - posits that by providing an environment to such EAs that is highly supportive of their needs, enables increased levels of productivity, and encourages collaboration and networking, we can counterfactually impact their career trajectories and, more generally, help in the prevention of global catastrophic events.This Theory of Change reflects our belief that there is something broken about the pipeline for both talent and projects in the GCR community, and that programs that simply supply training to early-career EAs are not enough on their own. We fill an important niche because:At just $750 to support a grantee for 1 month, we are particularly cost-effective. For funders, this means reduced risk: you can make a $4,500 investment in a person for six months rather than a $45,000 investment, or use that $45,000 for hits-based giving and invest in ten people rather than one.Since we remove barriers to entering full-time careers in reducing GCRs, the counterfactual impact is high. Indeed, when considering applications we look for prospective grantees who otherwise would not be able to pursue such careers, be that because they currently lack financial security, connections / credentials, or a conducive environment.As grantees do independent research & projects, their work is often cutting-edge. When it comes to preventing global catastrophic events, it is imperative to support ambitious individuals who are motivated to try innovative approaches and further their specific fields.Finally, because CEEALAR only offers time-limited stays (the average stay is ~4-6 months) and prioritises selecting agentic individuals as grantees, our alumni are committed to ensuring their learning translates into action.This final bullet point can be seen in our alumni who have gone on to have impactful careers (see our website for further details). For example:Chris Leong, now Principal Organiser for AI Safety and New Zealand (before CEEALAR (BC) he was a graduate likely to take a non-EA corporate role)Sam Deverett, now an ML Researcher in the MIT Fraenkel Lab and an incoming AI Futures Fel...]]> Wed, 20 Dec 2023 07:01:03 +0000 EA - CEEALAR's Theory of Change by CEEALAR 95% of applicants were focused on GCRs).Our updated theory of change - see below - posits that by providing an environment to such EAs that is highly supportive of their needs, enables increased levels of productivity, and encourages collaboration and networking, we can counterfactually impact their career trajectories and, more generally, help in the prevention of global catastrophic events.This Theory of Change reflects our belief that there is something broken about the pipeline for both talent and projects in the GCR community, and that programs that simply supply training to early-career EAs are not enough on their own. We fill an important niche because:At just $750 to support a grantee for 1 month, we are particularly cost-effective. For funders, this means reduced risk: you can make a $4,500 investment in a person for six months rather than a $45,000 investment, or use that $45,000 for hits-based giving and invest in ten people rather than one.Since we remove barriers to entering full-time careers in reducing GCRs, the counterfactual impact is high. Indeed, when considering applications we look for prospective grantees who otherwise would not be able to pursue such careers, be that because they currently lack financial security, connections / credentials, or a conducive environment.As grantees do independent research & projects, their work is often cutting-edge. When it comes to preventing global catastrophic events, it is imperative to support ambitious individuals who are motivated to try innovative approaches and further their specific fields.Finally, because CEEALAR only offers time-limited stays (the average stay is ~4-6 months) and prioritises selecting agentic individuals as grantees, our alumni are committed to ensuring their learning translates into action.This final bullet point can be seen in our alumni who have gone on to have impactful careers (see our website for further details). For example:Chris Leong, now Principal Organiser for AI Safety and New Zealand (before CEEALAR (BC) he was a graduate likely to take a non-EA corporate role)Sam Deverett, now an ML Researcher in the MIT Fraenkel Lab and an incoming AI Futures Fel...]]> 95% of applicants were focused on GCRs).Our updated theory of change - see below - posits that by providing an environment to such EAs that is highly supportive of their needs, enables increased levels of productivity, and encourages collaboration and networking, we can counterfactually impact their career trajectories and, more generally, help in the prevention of global catastrophic events.This Theory of Change reflects our belief that there is something broken about the pipeline for both talent and projects in the GCR community, and that programs that simply supply training to early-career EAs are not enough on their own. We fill an important niche because:At just $750 to support a grantee for 1 month, we are particularly cost-effective. For funders, this means reduced risk: you can make a $4,500 investment in a person for six months rather than a $45,000 investment, or use that $45,000 for hits-based giving and invest in ten people rather than one.Since we remove barriers to entering full-time careers in reducing GCRs, the counterfactual impact is high. Indeed, when considering applications we look for prospective grantees who otherwise would not be able to pursue such careers, be that because they currently lack financial security, connections / credentials, or a conducive environment.As grantees do independent research & projects, their work is often cutting-edge. When it comes to preventing global catastrophic events, it is imperative to support ambitious individuals who are motivated to try innovative approaches and further their specific fields.Finally, because CEEALAR only offers time-limited stays (the average stay is ~4-6 months) and prioritises selecting agentic individuals as grantees, our alumni are committed to ensuring their learning translates into action.This final bullet point can be seen in our alumni who have gone on to have impactful careers (see our website for further details). For example:Chris Leong, now Principal Organiser for AI Safety and New Zealand (before CEEALAR (BC) he was a graduate likely to take a non-EA corporate role)Sam Deverett, now an ML Researcher in the MIT Fraenkel Lab and an incoming AI Futures Fel...]]> CEEALAR https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:45 no full 7
HMgwRxMEe86TYyQZZ EA - Suggestions for Individual Donors from Open Philanthropy Staff - 2023 by Alexander Berger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggestions for Individual Donors from Open Philanthropy Staff - 2023, published by Alexander Berger on December 20, 2023 on The Effective Altruism Forum.In past years, we sometimes publishedsuggestions for individual donors looking for organizations to support. This post shares new suggestions from Open Philanthropy program staff who chose to provide them.Similar caveats to previous years apply:These are reasonably strong options in the relevant focus area, and shouldn't be taken as outright recommendations (i.e., it isn't necessarily the case that the person making a suggestion thinks that their suggestion is the best option available across all causes).The recommendations below fall within the cause areas Open Philanthropy has chosen to focus on. While this list does not expressly includeGiveWell's top charities, we believe those organizations to be among the most cost-effective, evidence-backed giving opportunities available to donors today, and expect that some readers of this post might want to give to them.Many of these recommendations appear here because they are particularly good fits for individual donors. This shouldn't be seen as a list of our strongest grantees overall (although of course there may be overlap).Our explanations for why these are strong giving opportunities are very brief and informal, and we don't expect individuals to be persuaded by them unless they put a lot of weight on the judgment of the person making the suggestion.In addition, these recommendations are made by the individual program officers or teams cited, and do not necessarily represent my (Alexander's) personal or Open Philanthropy's institutional "all things considered" view.Global Health and Development1Day SoonerRecommended byChris SmithWhat is it?1Day Sooner was originally created during 2020 to advocate for increased use of human challenge trials in Covid vaccines, and named on the basis that making vaccines available even one day sooner would be hugely beneficial.1DS is now expanding its work to look at other diseases where challenge trials could be safe, such as hepatitis C, where Open Philanthropy separatelyhasgrants developing new vaccine candidates. Open Philanthropy has supported 1DS from both our GHW and GCR portfolios.Why I suggest it: Recently, 1DS have been working on accelerating the global rollout of vaccines beyond the increased use of challenge trials, such as their current campaign on R21. R21 is an effective malaria vaccine (developed in part by Open Philanthropy Program OfficerKatharine Collins while she was at the Jenner Institute) recommended for use by WHO in October 2023 but with plans only to distribute fewer than 20 million doses in 2024, despite the manufacturer claiming the ability to make 100 million doses available. You can readan op-ed on this from Zacharia Kafuko, Africa Director of 1DS, in Foreign Policy.If 1DS can diversify its funding base and find more donors, they'd have the capacity to take on other projects that could accelerate vaccine development and distribution. I've been impressed with their work on both policy and advocacy, and I plan to support them myself this year. (Also, personally, I really enjoy supporting smaller organizations as a donor; I find that this helps me "feel" the difference more than if I'd donated to a large organization.)How to donate: You can donatehere.Center for Global DevelopmentRecommended byLauren GilbertWhat is it? TheCenter for Global Development (CGD) is a Washington D.C.-based think tank. They conduct research on and promote evidence-based improvements to policies that affect the global poor.Why I suggest it: We've supported CGD for many years and haverecommended it for individual donors in previous years. CGD has animpressive track record, and it continues to do impac...

]]>
Alexander_Berger https://forum.effectivealtruism.org/posts/HMgwRxMEe86TYyQZZ/suggestions-for-individual-donors-from-open-philanthropy Wed, 20 Dec 2023 05:51:59 +0000 EA - Suggestions for Individual Donors from Open Philanthropy Staff - 2023 by Alexander Berger Alexander_Berger https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:46 no full 9
9KP2qLnfutRmj6Jm9 EA - Shrimp welfare in wild-caught fisheries: New detailed review article by Ren Ryba Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shrimp welfare in wild-caught fisheries: New detailed review article, published by Ren Ryba on December 20, 2023 on The Effective Altruism Forum.Key points:In this post, we summarise some of the key results of our research on animal welfare in wild-caught shrimp fisheries.The full paper isfreely available as a preprint here, while it undergoes peer review before publication in a journal. It's a long and detailed paper, with many fancy tables and graphs - I would encourage you to check it out.We conducted a review of shrimp fisheries and interventions that could improve shrimp welfare in wild-catch fisheries.We calculated the number of shrimp caught in the world's wild-catch shrimp fisheries. This allows us to see how many shrimp are caught in each country and what species of shrimp they are.Our paper also includes an in-depth analysis of each of the world's top 25 countries, by number of shrimp caught.The authors of the full paper are: me (Ren Ryba), Prof Sean D Connell, Shannon Davis, Yip Fai Tse, and Prof Peter Singer.1. General overview of wild-caught shrimp fisheriesThere are many, many, many shrimp caught in wild-catch fisheries each year. Specifically, it is estimated that around 37.4 trillion shrimp are caught in wild-catch fisheries each year, and that is probably an underestimate.Broadly speaking, there are three types of shrimp:Caridean shrimp (781 billion caught each year). These shrimp are actually more closely related to crabs and lobsters than to the other two types of shrimp, which is why the evidence for shrimp sentience tends to be focused on this group. They are relatively small (e.g. a few centimetres). Caridean shrimp are mostly caught in cold-water (temperate) fisheries. Important caridean shrimp fisheries include the North Sea shrimp trawl fishery (the Netherlands, Germany, Denmark, and the UK) and the North Atlantic and Pacific shrimp trawl fisheries (USA, Canada, Russia, Greenland).Penaeid shrimp (287 billion caught each year). These shrimp are mostly in warm-water (tropical) fisheries, and they physically tend to be a bit larger in body size. Important penaeid shrimp fisheries include the trawl fishery in the USA, trawl and small-scale fisheries in Latin America, and trawl and small-scale fisheries in East and South-East Asia.Sergestid shrimp (36.3 trillion caught each year). This group includes the "paste shrimp", Acetes japonicus. Sergestid shrimp are tiny, sometimes even microscopic. These are very common in small-scale fisheries in East and South-East Asia, as well as East Africa.It's important to understand that these three types of shrimp are distinct. Caridean shrimp are actually more closely related to lobsters, crabs, and crayfish than they are to penaeid and sergestid shrimp. There are important differences in their biology, their evolutionary histories, the corresponding fishing industries, the amount of research that has been conducted on sentience, and - most importantly - the tractability of welfare improvements in fisheries. Those differences are explained in more detail in the full report.(Credit: Shrimp silhouettes in the evolutionary tree are from phylopic.org. Caridean shrimp: Maija Karala. Penaeid shrimp: Almandine (vectorized by T. Michael Keesey). Crab: Jebulon (vectorized by T. Michael Keesey). Lobster: Guillaume Dera.)We can also distinguish between two major types of shrimp fisheries:Industrial trawl fisheries. These may be large, high-power trawler vessels that can conduct journeys for weeks or months at a time. These vessels may be technologically sophisticated, with many processing, packaging, and storing shrimp on-board. Industrial trawl fisheries are common in both developed (e.g. North America, Europe) and developing (e.g. Latin America, China, South Korea, and many South-East Asian) countrie...

]]>
Ren Ryba https://forum.effectivealtruism.org/posts/9KP2qLnfutRmj6Jm9/shrimp-welfare-in-wild-caught-fisheries-new-detailed-review Wed, 20 Dec 2023 01:51:00 +0000 EA - Shrimp welfare in wild-caught fisheries: New detailed review article by Ren Ryba Ren Ryba https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:49 no full 10
QwBiHdHGij67sdrxk EA - New positions on Open Philanthropy's Cause Prioritization team (Global Health and Wellbeing) by Open Philanthropy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New positions on Open Philanthropy's Cause Prioritization team (Global Health and Wellbeing), published by Open Philanthropy on December 21, 2023 on The Effective Altruism Forum.Open Philanthropy plans to grant more than $300 million per year to causes in our Global Health and Wellbeing (GHW) portfolio over the next few years.We'rehiring for two types of roles (Research and Strategy Fellows) on the GHW portfolio's Cause Prioritization team, which works closely with senior leadership and program officers to conduct research that improves our grantmaking and high-level strategy.[1]The team's work includes:Investigating potential new cause areasEvaluating and prioritizing across existing cause areasAdvancing research agendas within existing cause areasContributing to high-level strategy decisionsPartnering with other organizations and philanthropists to advance the practice of cost-effective grantmakingTo illustrate what these roles involve day-to-day, here are a few recent projects managed by Research and Strategy Fellows on the GHW Cause Prioritization team:In 2021,we announced hires to lead our grantmaking in global aid advocacy and South Asian air quality, two new cause areas we added as a result of the team's research.In 2022, we hired program officers inglobal health R&D andeffective altruism community building (global health and wellbeing), again based on the team's research and early grantmaking.In 2022, we ran theRegranting Challenge, a $150 million initiative to fund highly effective teams at other grantmaking organizations, and theCause Exploration Prizes (with support from our 2022 summer interns), where we invited people tosuggest new areas for us to support.In 2023, based on the team's research, we announced a new program area:Global Public Health Policy, including grantmaking on lead exposure, alcohol policy, and suicide prevention.We conduct shallow- and medium-depth investigations as part of our work to explore new potential cause areas. Two examples of shallow investigations:Telecommunications in LMICs andCivil Conflict Reduction.The team is fully remote; you can work from anywhere (time zones permitting - see the listing for more). And these positions don't require specialized experience - though we are especially interested in candidates who have experience living or working in low- and middle-income countries.To see more detail on the roles, and to apply, visit thejob listing. To learn more about working at Open Philanthropy, visit ourcareers page. And please feel free to share any questions in a comment, or by emailingjobs@openphilanthropy.org.^Note that these roles would not have a significant focus on ourFarm Animal Welfare grantmaking, though this is included in the GHW portfolio.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Open Philanthropy https://forum.effectivealtruism.org/posts/QwBiHdHGij67sdrxk/new-positions-on-open-philanthropy-s-cause-prioritization Thu, 21 Dec 2023 16:08:49 +0000 EA - New positions on Open Philanthropy's Cause Prioritization team (Global Health and Wellbeing) by Open Philanthropy Open Philanthropy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:01 no full 4
ePAxv8otruSGzYMLn EA - It is called Effective Altruism, not Altruistic Effectiveness by Timon Renzelmann Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It is called Effective Altruism, not Altruistic Effectiveness, published by Timon Renzelmann on December 21, 2023 on The Effective Altruism Forum.This post is a personal reflection on certain attitudes I have encountered in the EA community that I believe can be misleading. It is primarily based on intuition, not thorough research and surveys.It is not news that the EA community has an unbalanced demographic, with men in the majority.I have heard from several women what they dislike about the EA community and this post is what I have taken from those conversations. I think that if we can move more in the direction I'm describing, the EA community can become warmer and more welcoming to all genders and races (and also more effective at doing good).I'd like to note that I don't think what I'm about to describe is a widespread problem, but a phenomenon that may occur in some places. Most of my experiences with the EA community have been very positive. I meet mostly caring people with whom I can have interesting, sometimes controversial discussions. And I often meet people who are very willing to help.Now to the subject:Some women I have spoken to have described a "lack of empathy" in the group, or, more specifically, that EA people came across as "tech bros" who lacked humility and wouldn't help a stranger because it wouldn't be the most effective thing to do. In an introductory discussion group we ran (in our university group), one of the participants perceived some of EA's ideas as "cold-hearted" and was very critical of the abstract, sometimes detached way of trying to calculate how to do good most effectively.I believe that these impressions and experiences point to risks associated with certain EA-related ideas.The idea of optimizationFirstly, the idea of optimising/maximising one's impact is fraught with risks, which have been described already here, here and here (and maybe elsewhere, too).To judge between actions or causes as more or less worthy of our attention can certainly seem cold-hearted. While this approach is valuable for triage and for prioritising in difficult situations, it also has a dark side when it justifies not caring about what we might normally care about. We should not discredit what might be judged as lesser goods just because some metric suggests it. It shouldn't lead us to lose our humility (impacts are uncertain and we are not omniscient) as well as our sense of caring.What kind of community are we if people don't feel comfortable talking about their private lives because they don't optimise everything, don't spend their free time researching or trying to make a difference? When people think that spending time volunteering for less effective non-profits might not be valued or even dismissed? What is the point of an ineffective soup kitchen, after all it is a waste of time in terms of improving QALYs?I have no doubt that even the thought of encountering such insensitive comments makes you feel uncomfortable.The following quote might appear to conflict with the goal of EA, but I think it doesn't and makes and important point."There is no hierarchy of compassionate action. Based on our interests, skills and what truly moves us, we each find our own way, helping to alleviate suffering in whatever way we can." - Joseph Goldstein (2007) in A Heart Full of PeaceWhat we are trying to do is called Effective Altruism, not Altruistic Effectiveness, and we should be trying to be altruistic in the first place, that is, good and caring people.[1]The idea of focusing on consequencesI also think that an exaggerated focus on consequences can be misleading in a social context, as well as detrimental in terms of personal well-being. Even if one supports consequentialism, focusing on consequences may not be the best strategy for achieving the...

]]>
Timon Renzelmann https://forum.effectivealtruism.org/posts/ePAxv8otruSGzYMLn/it-is-called-effective-altruism-not-altruistic-effectiveness Thu, 21 Dec 2023 07:05:10 +0000 EA - It is called Effective Altruism, not Altruistic Effectiveness by Timon Renzelmann Timon Renzelmann https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:20 no full 5
F7qzoWhiCTK8KpuTM EA - The privilege of native English speakers in reaching high-status, influential positions in EA by Alix Pham Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The privilege of native English speakers in reaching high-status, influential positions in EA, published by Alix Pham on December 21, 2023 on The Effective Altruism Forum.Huge thanks to Konrad Seifert, Marcel Steimke, Ysaline Bourgine, Milena Canzler, Alex Rahl-Kaplan, Marieke de Visscher, and Guillaume Vorreux for the valuable feedback provided on drafts of this post, and to many others for the conversations that lead to me writing it.Views & mistakes are my own.TL;DRBeing a non-native English speaker makes one sound less convincing. However, poor inclusion of non-native English speakers means missed perspectives in decision-making. Hence, it's a vicious circle where lack of diversity persists: native English culture prevails at the thought leadership level and neglects other cultures by failing to acknowledge that it is inherently harder to stand out as a non-native English speaker.Why I am writing thisI'm co-directing EA Switzerland (I'm originally from France), and I've been thinking about the following points for some time. I've been invited to speak at the Panel on Community Building at EAG Boston 2023, where I shared a rougher version of those thoughts. I was pretty scared to share this in a place where the vast majority of attendees matched the description "native English speaker", but after talking to a few people, it felt true. Many of the non-native speakers related, and many of the native speakers acknowledged it.steerers). I'm pretty scared to share it here too of course, butit's probably worth it.An unconscious bias against non-native English speakersThe beauty of linguistic diversity is that it reveals to us just how ingenious and how flexible the human mind is. Human minds have invented not one cognitive universe, but 7,000.Lera BoroditskyNon-native English speakers sound less convincingThe neural pathways that form in your brain during childhood will affect how you think as an adult. Depending on where and with which languages and cultures you grew up, the conceptual space in which your brain processes and communicates information will be different. Then, when a non-native speaker expresses their thoughts and opinions in English, most times it will be lower-fidelity than native speakers, and will probably beless convincing and/orsound less smart.[1]Besides, I can relate to the experience sharedhere that native English speakers are sometimes hard to follow when your own native language (and culture) is not English.[2] I guess it's especially true when your English is good enough that it doesn't appear necessary to speak slower - but I think for most of us, it still is necessary to speak slower or repeat stuff, and avoid referencing local pop culture. Usually though, non-native speakers would like to avoid asking to slow down, repeat, or clarify because, on top of being burdensome, it can be associated with incompetence.Hence, it's important not to confuse competence with language proficiency, and keep in mind that for the majority of non-native English speakers, it's harder to engage with the materials, harder to understand and intervene in debates, and harder to speak and write with fidelity to one's thoughts. As a consequence, it's then harder to be understood, stand out, get hired, and get heard. A similar case has been made forless-STEM-than-average people in EA.[3]Additionally, the English language and vocabulary might also not allow one to express the full length of their thoughts - words might not even exist for them. Different languages can allow for different profiles of available concepts and thoughts, because theirstructure and vocabulary vary.Poor inclusion of non-native English speakers means missed perspectivesOne could consider the6 dimensions of culture as a good illustration of the effect of culture (and then, ...

]]>
Alix Pham https://forum.effectivealtruism.org/posts/F7qzoWhiCTK8KpuTM/the-privilege-of-native-english-speakers-in-reaching-high Thu, 21 Dec 2023 02:05:07 +0000 EA - The privilege of native English speakers in reaching high-status, influential positions in EA by Alix Pham Alix Pham https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:57 no full 7
FrP5b2ukANCyoHuQh EA - On the future of language models by Owen Cotton-Barratt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the future of language models, published by Owen Cotton-Barratt on December 21, 2023 on The Effective Altruism Forum.1. Introduction1.1 Summary of key claimsEven without further breakthroughs in AI, language models will have big impacts in the coming years, as people start sorting out proper applicationsThe early important applications will be automation of expert advisors, management, and perhaps software developmentThe more transformative but harder prizes are automation of research and automation of executive capacityIn their most straightforward form ("foundation models"), language models are a technology which naturally scales to something in the vicinity of human-level (because it's about emulating human outputs), not one that naturally shoots way past human-level performancei.e. it is a mistake-in-principle to imagine projecting out the GPT-2 - GPT-3 - GPT-4 capability trend into the far-superhuman rangeAlthough they're likely to be augmented by things which accelerate progress, this still increases the likelihood of a relatively slow takeoff - several years (rather than weeks or months) of transformative growth before truly wild things are happening seems plausibleNB version of "speed superintelligence" could still be transformative even while performance on individual tasks is still firmly human levelThere are two main techniques which can be used (probably in conjunction) to get language models to do more powerful things than foundation models are capable of:Scaffolding: structured systems to provide appropriate prompts, including as a function of previous answersFinetuning: altering model weights to select for task performance on a particular taskEach of these techniques has a path to potentially scale to strong superintelligence; alternatively language models might at some point be obsoleted by another form of AITimelines for any of these things seem pretty unclearFrom a safety perspective, language model agents whose agency comes from scaffolding look greatly superior than ones whose agency comes from finetuningBecause you can get an extremely high degree of transparency by constructionFinetuning is more likely an important tool for instilling virtues (e.g. honesty) in systemsSutton's Bitter Lesson raises questions for this strategy, but needn't mean it's doomed to be outcompetedOn the likely development trajectory there are a number of distinct existential riskse.g. guarding against takeover from early language model agents is pretty different from differential technological development to ensure that we automate safety-enhancing research before risk-increasing researchThe current portfolio of work on AI risk is over-indexed on work which treats "transformative AI" as a black box and tries to plan around that. I think that we can and should be peering inside that box (and this may involve plans targeted at more specific risks).1.2 MetaWe know that AI is likely to be a very transformative technology. But a lot of the analysis of this point treats something like "AGI" as a black box, without thinking too much about the underlying tech which gets there. I think that's a useful mode, but it's also helpful to look at specific forms of AI technology and ask where they're going and what the implications are.This doc does that for language models. It's a guide for thinking about them from various angles with an eye to what the strategic implications might be. Basically I've tried to write the thing I wish I'd read a couple of years ago; I'm sharing now in case it's helpful for others.The epistemic status of this is "I thought pretty hard about this and these are my takes"; I'm sure there are still holes in my thinking (NB I don't actually do direct work with language models), and I'd appreciate pushback; but I'm also pretty sure I'm ...

]]>
Owen Cotton-Barratt https://forum.effectivealtruism.org/posts/FrP5b2ukANCyoHuQh/on-the-future-of-language-models Thu, 21 Dec 2023 00:03:40 +0000 EA - On the future of language models by Owen Cotton-Barratt Owen Cotton-Barratt https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 56:14 no full 8
8FZ8dESrDciMJKC44 EA - Rarely is the Question Asked: Is Our Children Learning? [The Learning Crisis in LMIC Education] by Lauren Gilbert Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rarely is the Question Asked: Is Our Children Learning? [The Learning Crisis in LMIC Education], published by Lauren Gilbert on December 22, 2023 on The Effective Altruism Forum.I've written a piece for Asterisk about the learning crisis in developing country schools (and what we do and do not know about the value of education)This piece was based on my research on education for Open Philanthropy.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Lauren Gilbert https://forum.effectivealtruism.org/posts/8FZ8dESrDciMJKC44/rarely-is-the-question-asked-is-our-children-learning-the Fri, 22 Dec 2023 22:45:45 +0000 EA - Rarely is the Question Asked: Is Our Children Learning? [The Learning Crisis in LMIC Education] by Lauren Gilbert Lauren Gilbert https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:36 no full 1
AN3M8t3M5rpi2De4Z EA - Malaria vaccine R21 is pre-qualified by JoshuaBlake Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Malaria vaccine R21 is pre-qualified, published by JoshuaBlake on December 22, 2023 on The Effective Altruism Forum.WHO announced yesterday (21st December) that they have added the malaria R21 vaccine to their pre-qualified list.This is the regulatory step required for Gavi to begin their programmes, as previously discussed on the forum.A good day!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
JoshuaBlake https://forum.effectivealtruism.org/posts/AN3M8t3M5rpi2De4Z/malaria-vaccine-r21-is-pre-qualified Fri, 22 Dec 2023 16:46:58 +0000 EA - Malaria vaccine R21 is pre-qualified by JoshuaBlake JoshuaBlake https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:37 no full 4
aiXyEvheFdwsEoPeC EA - A year of wins for farmed animals by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A year of wins for farmed animals, published by Vasco Grilo on December 24, 2023 on The Effective Altruism Forum.This is a crosspost for A year of wins for farmed animals, published by Lewis Bollard on 14 December 2023 in Open Philanthropy farm animal welfare research newsletter.It's been a tough year for farmed animals. The European Unionshelved the world's most ambitious farm animal welfare reform proposal, plant-based meat salessagged, and the mediapanned cultivated meat while Italybanned it. But advocates for factory farmed animals still won major gains - here are ten of the biggest:1. Wins for the winged. Advocates won130 new corporate pledges to eliminate cages for hens or the worst abuses of broiler chickens. This progress has now expanded well beyond the West: recent wins include cage-free pledges from the largest Asian restaurantcompany and the largest Indonesianretailer. That's mostly thanks to the work of the 100+ member groups of theOpen Wing Alliance, who now campaign across 67 countries. We estimate that, if fully implemented, pledges secured to date will reduce the suffering of about 800 million layer hens and broiler chickens alive at any time.2. Cages canceled. A fair question has long been whether these pledges will be implemented. So far, they mostly have been:1,157 corporate pledges are now fully implemented, 89% of the pledges that came due by last year. As a result,39% of American hens,60% of European hens, and80% of British hens are now cage-free, up from just6%,41%, and48% respectively a decade ago. There's still a lot more work to do to hold companies accountable to their pledges. But globally 220 million more animals are already out of cages thanks to this work.3. Pigs Supreme. The US Supreme Courtupheld California's Proposition 12, which bans the sale of eggs, pork, and veal from caged animals and their offspring. This ruling also protects seven other similar state laws. Once fully implemented, these laws will collectively require about 700,000 pigs and 80 million hens be raised cage-free. Advocates are now fighting alast-ditch effort by pork producers to overturn the Court's ruling, and have already mustered the support of over210 members of Congress for our side.4. Plant-based policies. Denmarkunveiled the world's first state action plan to promote plant-based eating, including plans to promote plant-based foods in schools and support innovation in alternative proteins. South Koreasaid it would soon unveil one too. The European Parliamentcalled for an EU-wide "action plan for increased EU plant-based protein production and consumption."5. Meaty milestones. For the first time, the COP28 climate summitserved mostly vegetarian meals. The UN Environment Programreleased the first-ever UN report on the potential of alternative proteins. New data showed that only20% of Germans now eat meat every day, down from 34% eight years ago.Half of all US restaurants now offer a plant-based alternative, up from a third five years ago.6. Cultured policymakers. US regulatorsapproved the nation's first sales of cultivated meat. Japan's Prime Ministerpledged support for the nation's cellular agriculture industry. Germanypledged 38M to promote alternative proteins, whileCatalonia (Spain),Israel, andthe UK funded more research. Alternative proteins have now attracted over abillion dollars in public funding committed to research and infrastructure globally.7. Alternative aspirations. Major German retailer Lidlpledged to double the share of its range of proteins that are plant-based by 2030. The second largest Dutch retailer, Jumbo, set a goal for60% of its protein sales to be plant-based by the same year. Both began their efforts by slashing the price of their own plant-based brands to parity with meat. So too did German...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/aiXyEvheFdwsEoPeC/a-year-of-wins-for-farmed-animals Sun, 24 Dec 2023 23:23:36 +0000 EA - A year of wins for farmed animals by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:33 no full 1
7D83kwkyaHLQSo6JT EA - Winners in the Forum's Donation Election (2023) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winners in the Forum's Donation Election (2023), published by Lizka on December 24, 2023 on The Effective Altruism Forum.TL;DR: We ran aDonation Election in which 341 Forum users[1] voted on how we should allocate the Donation Election Fund ($34,856[2]). The winners are:Rethink Priorities - $12,847.75Charity Entrepreneurship: Incubated Charities Fund - $11,351.11Animal Welfare Fund (EA Funds) - $10,657.07This post shares more information about the results:Comments from voters about their votes: patterns include referencing organizations' marginal funding posts, updating towards the neglectedness of animal welfare, appreciating strong track records, etc.Voting patterns: most people voted for 2-4 candidates (at least one of which was one of the three winners), usually in multiple cause areasCause area stats: similar numbers of points went to cross-cause, animal welfare, risk/future-oriented, and global health candidates (ranked in that order)All candidate results, including raw point[3] totals: the Long-Term Future Fund initially placed second by raw point totalsConcluding thoughts & other charitiesYou can find some extra information inthis spreadsheet.Highlights from the comments: why people voted the way they didWe asked voters if they wanted to share a note about why they voted the way they did. 74 people (~20%) wrote a comment. I'm sharing a few excerpts[4] below, and more in a comment on this post (separated for the sake of space) - consider reading the longer version if you have a moment.There were some recurring patterns in different people's notes, some of which appear in these two comments explaining their authors' votes:"[AWF], because I was convinced by the post about how animal welfare dominates in non-longtermist causes, [CE], so that there can be even more excellent ways of making the world a better place by donating, [GWWC], because I wish we had unlimited money to give to all the others""Realized I'm too partial to [global health] and biased against animal welfare, [so I decided to vote for the] most effective animal organization. Rethink'spost was very convincing.CE has the most innovative ideas in GHD and it isn't close. GiveWell is GiveWell."Rethink Priorities'sfunding request post was mentioned a lot. People also noted specific aspects of RP's work that they appreciate, like the EA Survey, public benefits/publishing research on cause prioritization, moral weights work, and research into particularly neglected animals. There were also shoutouts to the staff:"ALLFED and Rethink Priorities both consist of highly talented and motivated individuals that are working on high-potential, high-impact projects. Both organizations have left a strong impression on me in terms of their approach to reasoning and problem solving. [...] Both organizations have recently posted extremely well-detailed [updates on their financial situation and how additional funding would help]. [...]"CE's Incubated Charities Fund (and Charity Entrepreneurship more broadly) got a lot of appreciation for their good and/or unusual ideas and track record. There were also comments like:"...direct-action global health charities need more funding now, especially in light of reductions in future funding from Open Phil. [And] there's enough potential upside to charity incubation to put a good bit of money there."A number of people wrote that they'd updated towards donating to animal welfare as a result of recent discussions (often explicitly because of this post). Many gave a lot of their points to the Animal Welfare Fund, sometimes referencingGWWC's evaluations of the evaluators. Some also said they wanted to vote for animal welfare to correct for what they saw as its relative neglectedness in EA or to emphasize that it has a central place in EA. One example:"I vo...

]]>
Lizka https://forum.effectivealtruism.org/posts/7D83kwkyaHLQSo6JT/winners-in-the-forum-s-donation-election-2023 Sun, 24 Dec 2023 03:38:24 +0000 EA - Winners in the Forum's Donation Election (2023) by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:24 no full 2
CryjzhyHaBZKdkYkw EA - Confessions of a Recent GWWC Pledger (Boxing Day Giving?!) by Harry Luk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Confessions of a Recent GWWC Pledger (Boxing Day Giving?!), published by Harry Luk on December 25, 2023 on The Effective Altruism Forum.TLDR;I pledged to Giving What We Can (GWWC) in early September.But because we transitioned from a dual income to a single income in late June, we had been postponing the 10% tithing.As a result, we also procrastinated on giving to effective charities, even after pledging in September.Black Friday (late November) was when we paid off the "donation debt" to Jesus.We are surrounded by others who sacrificially love and give, and that's why we were empowered to do it too.We encourage others to pledge or give this giving season, perhaps doing the counter-cultural thing and making Boxing Day about giving.IntroductionIn September of this year, I decided to take the Giving What We Can (GWWC) pledge. As a Christian, I have been tithing 10% for years. With GWWC, I am redirecting these donations to highly effective charities, aiming to support 'the least of these' or interventions that can most cost-effectively improve the world, thereby maximizing the impact of my limited resources. This commitment was more than financial; it was a profound expression of faith. Our family's shift from a stable dual income to a more restrictive single income since late June introduced many uncertainties when I made this pledge.The transition to a single income in an expensive city like Vancouver has been challenging, especially considering that the three co-founders ofStakeOut.AI, including myself, have been effectively volunteering - Peter for nearly six months part-time, I for almost 3.5 months full-time, and Amy for 1.5 months full-time.As of this writing, we still haven't fundraised because we have prioritized impact and project advancement. A couple example projects we have completed include:Contributions to researching the'scorecard' of AI governance proposals (found on page 3 of The Future of Life Institute's proposal) presented at the first ever international AI Safety Summit.Co-hosted a Zoom webinar where we advisedHollywood actors on how AI will likely affect their industry. We also have plans for continued collaboration with Hollywood actors to advocate for banning deepfake pornography, a detrimental issue that has victimized many young schoolgirls.By sharing this journey, I hope to inspire a conversation about faith, stewardship, and the impact of intentional giving. This post is an exploration of faith and trust, and my understanding of Christian giving as a joyful expression of faith. Giving has brought an unexpected peace and a deeper trust in God's provision.Our Financial Challenge is a Fraction of What Many Others Endure"Where do you need God's comfort today?" This question from my Daily Refresh in YouVersion resonated with me, especially after reading 2 Corinthians 1:3-7. This verse speaks volumes about comfort in troubles, a theme that deeply aligns with my current life chapter.[3] Praise be to the God and Father of our Lord Jesus Christ, the Father of compassion and the God of all comfort, [4] who comforts us in all our troubles, so that we can comfort those in any trouble with the comfort we ourselves receive from God. [5] For just as we share abundantly in the sufferings of Christ, so also our comfort abounds through Christ.[6] If we are distressed, it is for your comfort and salvation; if we are comforted, it is for your comfort, which produces in you patient endurance of the same sufferings we suffer. [7] And our hope for you is firm, because we know that just as you share in our sufferings, so also you share in our comfort.As I mentioned earlier, since early September, I have embarked on a journey of starting agrassroots movement, the Safer AI Global Grassroots United Front. Honestly, it's been more than a full-tim...

]]>
Harry Luk https://forum.effectivealtruism.org/posts/CryjzhyHaBZKdkYkw/confessions-of-a-recent-gwwc-pledger-boxing-day-giving Mon, 25 Dec 2023 17:03:32 +0000 EA - Confessions of a Recent GWWC Pledger (Boxing Day Giving?!) by Harry Luk Harry Luk https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:03 no full 1
rPDtpeEqSwyyaaS4Q EA - MHFC Fall '23 Grants Round by wtroy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MHFC Fall '23 Grants Round, published by wtroy on December 25, 2023 on The Effective Altruism Forum.The Mental Health Funding Circle (MHFC) held its fall grants round and members granted a total of $785,000 to the following organizations.$205,000 to Rethink Wellbeing for their work on effective mental health for the EA community$80,000 to Kaya Guides for digital guided self-help in India$200,000 to Vida Plena for group interpersonal therapy in Ecuador$160,000 to Action for Happiness for their work on digital wellbeing tools in HICs$140,000 to the Clinton Health Access Initiative (CHAI) for their work incorporating mental healthcare into HIV infrastructure in Lesotho*All of these grants were made by funders participating in this round or who sourced a grant through MHFC's open application process. The MHFC itself does not give out grants.The MHFC is an Impactful Grantmaking funding circle, part of the Charity Entrepreneurship ecosystem. We hold open grants rounds in the spring and fall, and look forward to supporting more high-impact mental health initiatives in 2024!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
wtroy https://forum.effectivealtruism.org/posts/rPDtpeEqSwyyaaS4Q/mhfc-fall-23-grants-round Mon, 25 Dec 2023 12:41:12 +0000 EA - MHFC Fall '23 Grants Round by wtroy wtroy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:22 no full 2
wyePJBbMGeYQCKRde EA - Public Fundraising has Positive Externalities by Larks Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Public Fundraising has Positive Externalities, published by Larks on December 26, 2023 on The Effective Altruism Forum.Epistemic status: revealed to me in a dreamSummary: fundraising from the public has positive externalities: it also functions as outreach and red-teaming. If organizations have not taken this into account they may have under-invested in public outreach and should do more of it.A simplistic approachHere is a simple model for how a normal organization might think about fundraising:A: Estimate how much money you expect to be able to raise from fundraising activities.B: Estimate how useful that money would be to you.C: Estimate the costs of fundraising (e.g. staff time).If B > C, do fundraising! If not, skip it for now.My claim is this is a bad model for EA orgs, because it misses a significant fraction of the benefits.Field-building benefitsSoliciting donations from the general public is generally quite hard. The skills required to do this are often quite different from those involved in running the organization's core operations, and can be a significant distraction. It is hard to convince people what you're doing is a good idea, and even those who agree often don't donate.But this is not wasted effort: the difficulty in converting agreement into donations means that fundraisers are effectively subsidizing outreach. The people who read your work but don't hand over their credit card details might be sold on the mission but skeptical of the team… so they donate to another org. Or they might be a student with limited liquid assets but willing to apply for jobs in the space in a few years.Or they might bring up the idea to their friends, or answer an online poll, or change their vote. Each of these seem pretty valuable - for example, it seems plausible to me that a large fraction of the value of SIAI's fundraising efforts might have come from these channels, rather than via directly increasing SIAI's budget.Epistemic benefitsFundraising can also be unpleasant because it opens yourself up to criticism. If you're just doing your own thing with one or two large donors, you have little need to explain yourself to anyone else. You need to appeal to the big foundations, but you probably have a decent idea of what they want, and they're also likely to be pretty busy. Even if they say no, they're unlikely to send you a long message about how you are bad and your organization is bad and you should feel bad.In contrast, having the audacity to run a public fundraiser naturally invites questions and criticisms from people who are skeptical of your effectiveness and theory of change. These critics have no obligation to represent a single perspective or agree with each other, so you may find yourself being attacked from multiple directions at once.However, this may be one of the only sources of feedback your org can get, especially if you are small. For the same reasons peer review, flawed as it is, is useful in science, your org can potentially benefit from feedback and questioning and critique of your assumptions, plans and execution.Fundraising from the broader group of EAs can attract high quality criticism from similarly-minded people; raising from a broader audience could potentially attract feedback from a wider range of perspectives.There is something of a principal-agent problem here; for the staff, criticism is unpleasant. For the organization, it is a mixed bag, because good criticism, even if harshly worded, can help them improve. And from the perspective of the broader movement it seems very good, because damning public criticism helps avoid grant misallocation. So my guess is that, from an impartial point of view, organizations under-invest in exposing themselves to public scrutiny.You could think of this argument as being somewhat ana...

]]>
Larks https://forum.effectivealtruism.org/posts/wyePJBbMGeYQCKRde/public-fundraising-has-positive-externalities C, do fundraising! If not, skip it for now.My claim is this is a bad model for EA orgs, because it misses a significant fraction of the benefits.Field-building benefitsSoliciting donations from the general public is generally quite hard. The skills required to do this are often quite different from those involved in running the organization's core operations, and can be a significant distraction. It is hard to convince people what you're doing is a good idea, and even those who agree often don't donate.But this is not wasted effort: the difficulty in converting agreement into donations means that fundraisers are effectively subsidizing outreach. The people who read your work but don't hand over their credit card details might be sold on the mission but skeptical of the team… so they donate to another org. Or they might be a student with limited liquid assets but willing to apply for jobs in the space in a few years.Or they might bring up the idea to their friends, or answer an online poll, or change their vote. Each of these seem pretty valuable - for example, it seems plausible to me that a large fraction of the value of SIAI's fundraising efforts might have come from these channels, rather than via directly increasing SIAI's budget.Epistemic benefitsFundraising can also be unpleasant because it opens yourself up to criticism. If you're just doing your own thing with one or two large donors, you have little need to explain yourself to anyone else. You need to appeal to the big foundations, but you probably have a decent idea of what they want, and they're also likely to be pretty busy. Even if they say no, they're unlikely to send you a long message about how you are bad and your organization is bad and you should feel bad.In contrast, having the audacity to run a public fundraiser naturally invites questions and criticisms from people who are skeptical of your effectiveness and theory of change. These critics have no obligation to represent a single perspective or agree with each other, so you may find yourself being attacked from multiple directions at once.However, this may be one of the only sources of feedback your org can get, especially if you are small. For the same reasons peer review, flawed as it is, is useful in science, your org can potentially benefit from feedback and questioning and critique of your assumptions, plans and execution.Fundraising from the broader group of EAs can attract high quality criticism from similarly-minded people; raising from a broader audience could potentially attract feedback from a wider range of perspectives.There is something of a principal-agent problem here; for the staff, criticism is unpleasant. For the organization, it is a mixed bag, because good criticism, even if harshly worded, can help them improve. And from the perspective of the broader movement it seems very good, because damning public criticism helps avoid grant misallocation. So my guess is that, from an impartial point of view, organizations under-invest in exposing themselves to public scrutiny.You could think of this argument as being somewhat ana...]]> Tue, 26 Dec 2023 22:59:01 +0000 EA - Public Fundraising has Positive Externalities by Larks C, do fundraising! If not, skip it for now.My claim is this is a bad model for EA orgs, because it misses a significant fraction of the benefits.Field-building benefitsSoliciting donations from the general public is generally quite hard. The skills required to do this are often quite different from those involved in running the organization's core operations, and can be a significant distraction. It is hard to convince people what you're doing is a good idea, and even those who agree often don't donate.But this is not wasted effort: the difficulty in converting agreement into donations means that fundraisers are effectively subsidizing outreach. The people who read your work but don't hand over their credit card details might be sold on the mission but skeptical of the team… so they donate to another org. Or they might be a student with limited liquid assets but willing to apply for jobs in the space in a few years.Or they might bring up the idea to their friends, or answer an online poll, or change their vote. Each of these seem pretty valuable - for example, it seems plausible to me that a large fraction of the value of SIAI's fundraising efforts might have come from these channels, rather than via directly increasing SIAI's budget.Epistemic benefitsFundraising can also be unpleasant because it opens yourself up to criticism. If you're just doing your own thing with one or two large donors, you have little need to explain yourself to anyone else. You need to appeal to the big foundations, but you probably have a decent idea of what they want, and they're also likely to be pretty busy. Even if they say no, they're unlikely to send you a long message about how you are bad and your organization is bad and you should feel bad.In contrast, having the audacity to run a public fundraiser naturally invites questions and criticisms from people who are skeptical of your effectiveness and theory of change. These critics have no obligation to represent a single perspective or agree with each other, so you may find yourself being attacked from multiple directions at once.However, this may be one of the only sources of feedback your org can get, especially if you are small. For the same reasons peer review, flawed as it is, is useful in science, your org can potentially benefit from feedback and questioning and critique of your assumptions, plans and execution.Fundraising from the broader group of EAs can attract high quality criticism from similarly-minded people; raising from a broader audience could potentially attract feedback from a wider range of perspectives.There is something of a principal-agent problem here; for the staff, criticism is unpleasant. For the organization, it is a mixed bag, because good criticism, even if harshly worded, can help them improve. And from the perspective of the broader movement it seems very good, because damning public criticism helps avoid grant misallocation. So my guess is that, from an impartial point of view, organizations under-invest in exposing themselves to public scrutiny.You could think of this argument as being somewhat ana...]]> C, do fundraising! If not, skip it for now.My claim is this is a bad model for EA orgs, because it misses a significant fraction of the benefits.Field-building benefitsSoliciting donations from the general public is generally quite hard. The skills required to do this are often quite different from those involved in running the organization's core operations, and can be a significant distraction. It is hard to convince people what you're doing is a good idea, and even those who agree often don't donate.But this is not wasted effort: the difficulty in converting agreement into donations means that fundraisers are effectively subsidizing outreach. The people who read your work but don't hand over their credit card details might be sold on the mission but skeptical of the team… so they donate to another org. Or they might be a student with limited liquid assets but willing to apply for jobs in the space in a few years.Or they might bring up the idea to their friends, or answer an online poll, or change their vote. Each of these seem pretty valuable - for example, it seems plausible to me that a large fraction of the value of SIAI's fundraising efforts might have come from these channels, rather than via directly increasing SIAI's budget.Epistemic benefitsFundraising can also be unpleasant because it opens yourself up to criticism. If you're just doing your own thing with one or two large donors, you have little need to explain yourself to anyone else. You need to appeal to the big foundations, but you probably have a decent idea of what they want, and they're also likely to be pretty busy. Even if they say no, they're unlikely to send you a long message about how you are bad and your organization is bad and you should feel bad.In contrast, having the audacity to run a public fundraiser naturally invites questions and criticisms from people who are skeptical of your effectiveness and theory of change. These critics have no obligation to represent a single perspective or agree with each other, so you may find yourself being attacked from multiple directions at once.However, this may be one of the only sources of feedback your org can get, especially if you are small. For the same reasons peer review, flawed as it is, is useful in science, your org can potentially benefit from feedback and questioning and critique of your assumptions, plans and execution.Fundraising from the broader group of EAs can attract high quality criticism from similarly-minded people; raising from a broader audience could potentially attract feedback from a wider range of perspectives.There is something of a principal-agent problem here; for the staff, criticism is unpleasant. For the organization, it is a mixed bag, because good criticism, even if harshly worded, can help them improve. And from the perspective of the broader movement it seems very good, because damning public criticism helps avoid grant misallocation. So my guess is that, from an impartial point of view, organizations under-invest in exposing themselves to public scrutiny.You could think of this argument as being somewhat ana...]]> Larks https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:51 no full 1
DBcDZJhTDgig9QNHR EA - Altruism sharpens altruism by Joey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Altruism sharpens altruism, published by Joey on December 26, 2023 on The Effective Altruism Forum.I think many EAs have a unique view about how one altruistic action affects the next altruistic action, something like altruism is powerful in terms of its impact, and altruistic acts take time/energy/willpower; thus, it's better to conserve your resources for these topmost important altruistic actions (e.g., career choice) and not sweat it for the other actions.However, I think this is a pretty simplified and incorrect model that leads to the wrong choices being taken. I wholeheartedly agree that certain actions constitute a huge % of your impact. In my case, I do expect my career/job (currently running Charity Entrepreneurship) will be more than 90% of my lifetime impact. But I have a different view on what this means for altruism outside of career choices. I think that being altruistic in other actions not only does not decrease my altruism on the big choices but actually galvanizes them and increases the odds of me making an altruistic choice on the choices that really matter.One way to imagine altruism is much like other personality characteristics; being conscientious in one area flows over to other areas, working fast in one area heightens your ability to work faster in others. If you tidy your room, it does not make you less likely to be organized in your Google Docs.Even though the same willpower concern applies in these situations and of course, there are limits to how much you can push yourself in a given day, the overall habits build and cross-apply to other areas instead of being seen as in competition. I think altruism is also habit-forming and ends up cross-applying.Another way to consider how smaller-scale altruism has played out is to look at some examples of people who do more small-scale actions and see how it affects the big calls. Are the EAs who are doing small-scale altruistic acts typically tired and taking a less altruistic career path or performing worse in their highly important job? Anecdotally, not really.The people I see willing to weigh altruism the highest in their career choice comparison tend to also have other altruistic actions they are doing (outside of career). This, of course, does not prove causality, but it is an interesting sign.Also anecdotally, I have been in a few situations where the altruistic environment switches from one that does value small-scale altruism to one that does not, and people changed as a result (e.g., changing between workplaces or cause areas). Although the data is noisy, to my eye the trend also fits the 'altruism as a galvanizing factor' model. For example, I do not see people's work hours typically go up when they move from a valuing small scale altruism area to an non-valuing small scale altruism area.Another way this might play out is connected to identity and how people think of a trait. If someone identifies personally with something (e.g., altruism), they are more likely to enact it out in multiple situations; it's not just in this case altruism is required, it is a part of who you are (see my altruism as a central purpose post for more on thinking this way). I think this factor that binds altruism to an identity can be reinforced by small-scale altruistic action but also can affect the most important choices.Some examples of altruistic actions I expect to be superseded in importance by someone's career choice in most cases but still worth doing for many 50%+ EAs:Donating 10% (even of a lower salary/earnings level)Being VeganNon-life-threatening donations (e.g., blood donations, bone marrow donations)Spending less to donate moreWorking more hours at an altruistic jobBecoming an organ donorAsking for donations during some birthdays/celebrations.Getting your friends and family birthd...

]]>
Joey https://forum.effectivealtruism.org/posts/DBcDZJhTDgig9QNHR/altruism-sharpens-altruism Tue, 26 Dec 2023 15:07:42 +0000 EA - Altruism sharpens altruism by Joey Joey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:39 no full 4
R6qu7LhcLKLob7t9r EA - Zach Robinson will be CEA's next CEO by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Zach Robinson will be CEA's next CEO, published by Ben West on December 28, 2023 on The Effective Altruism Forum.We, on behalf of the EV US and EV UK boards, are very glad to share that Zach Robinson has been selected as the new CEO of the Centre for Effective Altruism (CEA).We can personally attest to his exceptional leadership, judgement, and dedication from having worked with him at Effective Ventures US. These experiences are part of why we unanimously agreed with the hiring committee's recommendation to offer him the position.[1] We think Zach has the skills and the drive to lead CEA's very important work.We are grateful to the search committee (Max Dalton, Claire Zabel, and Michelle Hutchinson) for their thorough process in making the recommendation. They considered hundreds of potential internal and external candidates, including through dozens of blinded work tests. For further details on the search process, please seethis Forum post.As we look forward, we are excited about CEA's future with Zach at the helm, and the future of the EA community.Zach adds: "I'm thrilled to be joining CEA! I think CEA has an impressive track record of success when it comes to helping others address the world's most important problems, and I'm excited to build on the foundations created by Max, Ben, and the rest of CEA's team. I'm looking forward to diving in in 2024 and look forward to sharing more updates with the EA community."^Technically, the selection is made by the US board, but the UK board unanimously encouraged the US board to extend this offer. Zach was recused throughout the process, including in the final selection.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Ben_West https://forum.effectivealtruism.org/posts/R6qu7LhcLKLob7t9r/zach-robinson-will-be-cea-s-next-ceo Thu, 28 Dec 2023 16:13:55 +0000 EA - Zach Robinson will be CEA's next CEO by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:47 no full 1
pheZeLQG4iEyS9Cri EA - What do the Polish 2023 parliamentary elections mean for animals? by Pawel Rawicki Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What do the Polish 2023 parliamentary elections mean for animals?, published by Pawel Rawicki on December 28, 2023 on The Effective Altruism Forum.On October 15, Polish citizens headed to the polling stations to elect their representatives for the next four years. The coalition of opposition parties which secured the majority in Parliament has turned the tide of political force in the country. The upcoming parliamentary term brings opportunities, as well as numerous challenges for animal welfare in Poland and beyond. What are the potential implications for animals of the election results?Summary:The size of agricultural production in Poland makes the country an important player influencing European Union policies.The Law and Justice party governed Poland for eight years, shaping conservative policies.In 2020, the party proposed the so-called 'five for animals' bill. The bill, aiming to improve animal welfare, faced challenges and eventual failure, leading Law and Justice to abandon the animal protection topic.Controversy over ritual slaughter and farmer protests influenced Law and Justice to backtrack on the proposed reforms, hindering animal welfare initiatives.Collaborative efforts by animal advocacy groups before the 2023 elections pressured political parties on key issues like a fur farming ban and phasing out cages for farmed animals.The election results placed Law and Justice in the lead but lacking a majority, resulting in several former opposition parties forming the new government.Despite challenges, optimism exists for future animal welfare policies in Poland, including a fur farming ban, phasing out cages, and addressing fast-growing chicken breeds.A brief overview of the farmed animal situation in PolandAnimal production and exports landscapePoland is one of the biggest net meat exporters in the world.According to the Polish Development Fund, in 2021 the country was the fourth-largest net exporter of processed meat, fish, or shellfish in the world and the eighth-largest net exporter of meat and edible offal. The poultry industry is of particular significance with 1,451,000,000 broiler chickenshatched in 2022 and more than half of the poultry meat being exported. Currently, there are over 52,800,000 egg-laying hens in Poland, and 72% of them are still kept in cages. There are also3,430,000 animals (mostlymink) killed for fur every year in Poland (in 2015, the yearly export of fur skins from the country increased to over 10 million, but since then, the number of fur animals has been in decline).Poland's position in the European UnionDue to its size and economy - Poland is the fifth-largest European Union Member State by population - Poland plays an important role in Europe. For these reasons, Polish internal politics significantly impact the direction of the EU as a whole, especially in the agricultural sector. One example of this was theattempt of the Polish government to block the EU's Green Deal.Animal welfare in conservative PolandFor the past eight years (2015-2023), Poland was ruled by a government formed by the majority partyLaw and Justice (Prawo i Sprawiedliwość), a national-conservative party with an interventionist approach to the economy. The party belongs to the European Conservatives and Reformists Party in the EU. Animal welfare is not part of Law and Justice's political program, however, a significant number of their MPs and MEPs[1] have been involved in animal welfare initiatives, like theIntergroup on the Welfare and Conservation of Animals in the European Parliament.Between 2015 and 2020, Anima International had relatively good relations with some of the party's MPs and MEPs as a result of several instances of cooperation. In 2018, Law and Justice MEPs co-organized with Eurogroup for Animals (and with the help of A...

]]>
Pawel Rawicki https://forum.effectivealtruism.org/posts/pheZeLQG4iEyS9Cri/what-do-the-polish-2023-parliamentary-elections-mean-for Thu, 28 Dec 2023 12:59:11 +0000 EA - What do the Polish 2023 parliamentary elections mean for animals? by Pawel Rawicki Pawel Rawicki https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 22:12 no full 2
AvubGwD2xkCD4tGtd EA - Only mammals and birds are sentient, according to neuroscientist Nick Humphrey's theory of consciousness, recently explained in "Sentience: The invention of consciousness" by ben.smith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Only mammals and birds are sentient, according to neuroscientist Nick Humphrey's theory of consciousness, recently explained in "Sentience: The invention of consciousness", published by ben.smith on December 27, 2023 on The Effective Altruism Forum.In 2023, Nick Humphrey published his book Sentience: The invention of consciousness (S:TIOC). In this book he proposed a theory of consciousness that implies, he says, that only mammals and birds have any kind of internal awareness.His theory of consciousness has a lot in common with the picture of consciousness is described in recent books by two other authors, neuroscientist Antonio Damasio and consciousness researcher Anil Seth. All three agree on the importance of feelings, or proprioception, as the evolutionary and experiential base of sentience. Damasio and Seth, if I recall correctly, each put a lot of emphasis on homeostasis as a driving evolutionary force.All three agree sentience evolved as an extension of our senses-touch, sight, hearing, and so on. But S:TIOC is a bolder book which not only describes what we know about the evolutionary base of consciousness but proposes a plausible theory coming as close as can be to describing what it is short of actually solving Chalmers' Hard Problem.The purpose of this post is to describe Humphrey's theory of sentience, as described in S:TIOC, and explain why Humphrey is strongly convinced that mammals and birds-not octopuses, fish, or shrimp-have any kind of internal experience. Right up front I want to acknowledge that cause areas focused on animals like fish and shrimp seem on-expectation impactful even if there's only a fairly small chance those animals might have capacity for suffering or other internal experiences.Those areas might be impactful because of the huge absolute numbers of fish and shrimp who are suffering if they have any internal experience at all. But nevertheless, a theory with reasonable odds of being true that can identify which animals have conscious experience should update us on our relative priorities. Furthermore, if there is substantial uncertainty, which I think there is, such a theory should motivate hypothesis testing to help us reduce uncertainty.BlindsightTo understand this story, you should hear about three fascinating personal encounters which lead Humphrey to some intuitions about consciousness. Humphrey describes blindsight in a monkey and a couple of people. Blindsight is the ability for an organism to see without conscious awareness of seeing. Humphrey tells of a story of a monkey named Helen whose visual cortex had been removed.Subsequent to the removal of her visual cortex, Helen was miserable and unmotivated to move about in the indoor world she lived in. After a year of this misery, her handlers allowed her to get out into the outside world and explore it. Over the course of time she learned to navigate around the world with an unmistakable ability to see, avoid obstacles, and quickly locate food.But Humphrey, knowing Helen quite well, thought she lacked the confidence in herself to be able to have the awareness that she clearly did. This was a clue that perhaps Helen was using her midbrain system, the superior colliculus, which processes visual information in parallel with the visual cortex, and that she was unaware of the visual information her brain could nevertheless use to navigate her body around obstacles and to locate food. Of course this is somewhat wild speculation considering that Helen couldn't report her own experience back to Humphrey.The second observation was of a man known to the scientific community as D.B. In an attempt to relieve D.B. of terribly painful headaches, doctors had removed D.B.'s right visual cortex. D.B. reported not being able to see anything presented only to his left eye (the left and ...

]]>
ben.smith https://forum.effectivealtruism.org/posts/AvubGwD2xkCD4tGtd/only-mammals-and-birds-are-sentient-according-to Wed, 27 Dec 2023 18:09:39 +0000 EA - Only mammals and birds are sentient, according to neuroscientist Nick Humphrey's theory of consciousness, recently explained in "Sentience: The invention of consciousness" by ben.smith ben.smith https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:28 no full 6
Eg3WkbzAqfvuigzKe EA - An update and personal reflections about AidGrade by Eva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An update and personal reflections about AidGrade, published by Eva on December 27, 2023 on The Effective Altruism Forum.(Loosely adapted from a post on my personal blog.)As some of you know, back in 2012 I set up AidGrade, a small non-profit research institute, to collect the results of impact evaluations and synthesize them. It was actually while working on AidGrade that I learned about the Effective Altruism community, as someone who I was interacting with about AidGrade asked me if I'd heard of it.Fast-forward 11 years. A global consortium of institutions, led by the World Bank, is going to be working on an open repository of impact evaluation results that could be used for meta-analysis and policy (the Impact Data and Evidence Aggregation Library, or IDEAL). This is really close to AidGrade's mission, and we will be participating in the consortium, helping to design the protocols, contribute data, and perform cross-checks with the other institutions.I am thrilled to see something like IDEAL develop. We made a case that this was a thing that should exist, and over time enough other people agreed that it will soon be a much larger thing (in which AidGrade will play the smallest of roles). All along, I was hoping that there could be a better institutional home for such a repository, and here we are. It's the best possible outcome.To anyone who supported AidGrade, through either time or money over the years, I hope you feel pleased with what you helped accomplish with AidGrade, and I hope you are as excited as I am about IDEAL.With regards to institutional change more broadly, I also have some good news about another venture, the Social Science Prediction Platform. This platform enables researchers to gather forecasts of what their studies will find. The Journal of Development Economics has recently started encouraging authors of papers accepted through their pre-results review ("Registered Report") track to collect forecasts on the SSPP, which should accelerate the use of forecasts in academia. We have been having discussions with other organizations about collecting forecasts and I hope to have more good news to share soon.Both these projects were deeply rooted in academic work. I might be biased, but I think academic work is often underrated. It can be useful for many reasons, but part of it surely is that it can change the way people think about a topic and enable institutional change.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Eva https://forum.effectivealtruism.org/posts/Eg3WkbzAqfvuigzKe/an-update-and-personal-reflections-about-aidgrade Wed, 27 Dec 2023 02:22:40 +0000 EA - An update and personal reflections about AidGrade by Eva Eva https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:22 no full 11
rQTZPsXGDj2C8ejxw EA - Resources for farmed animal advocacy: 2023 roundup by SofiaBalderson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Resources for farmed animal advocacy: 2023 roundup, published by SofiaBalderson on December 29, 2023 on The Effective Altruism Forum.Tl;dr - This is a curated and useful list of farmed animal advocacy resources that came out in 2023. We (Impactful Animal Advocacy) sent this compilation out in our free bi-weekly newsletter and thought it might be helpful to others in the EA community.There is is not a comprehensive collection of all resources, but if we missed any that you found significant from 2023, feel free to add as a comment.Enjoy and here is to even more impact for the animals in 2024!Acknowledgements: thanks so much to our Comms Lead Allison Agnello for this edition, as well as our readers, who viewed this newsletter 20,000 times in 2023!ThemesOver the past 12 months of curating Impactful Animal Advocacy (IAA) newsletters, we've noticed several trends. Here are two that are prominent in our collection of 2023 resources:Movement infrastructureIn the animal advocacy movement's growth this year, we've seen an increase in services provided directly to animal advocates or organizations. There has been so many new initiatives that we developed aMeta resources wiki to keep track of them all! This expansion reflects a recognition of the diverse needs within the community and how projects can benefit from support and area specialization. Here are a few categories of increased infrastructure:New meta organizations (The Mission Motor,NFPs.AI,us!)Advocate training courses (See section below)Supporting groups in developing countries (Animal Advocacy Africa,Good Growth,Thrive)Refining how we measure and compare across speciesGiven the large number of possible ways to help animals, selecting the most impactful approach can be challenging. This year, we've witnessed a increase in research accessibility and applicability for advocates. This is not just about providing information - it's about helping us integrate this knowledge into practical strategies. As a result, advocates may be better equipped to make informed decisions on where to focus their efforts across different species and geographic regions.How one might compare welfare across species (Moral Weight Project sequence)How much pain do different species endure (Welfare Footprint Project) - watch the recording of the workshop we hosted for themhereHow bad is brief, severe pains versus chronic, milder pains (Dimensions of Pain)How many animals are impacted, and where (Our World in Data)Updates we found helpfulSo much has happened this year. Here are a few articles to catch you upLooking back at 2023The Year in Review: 2023, Sentient MediaTop animal policy stories of 2023, Sentient MediaA year of wins for farmed animals, Lewis BollardTop 20 Alt-Protein Stories of the Year, Green QueenAgFunderNews' favorite agrifoodtech stories of 20232023 Future Perfect 50recognizes 9 animal advocate changemakersSome lessons learnedRunning Cage-Free Projects in Africa: Case Studies of Three African Animal Advocacy OrganisationsAbolishing factory farming in Switzerland: Postmortem2 Years of Shrimp Welfare Project: Insights and Impact from our Explore PhaseHistorical farmed animal welfare ballot initiativesAnimal Rising's Grand National protest: Public opinion impactsStakeholder-engaged research in emerging marketsFish Welfare Initiative's continued work in India andpausing work in ChinaResourcesGetting startedNew animal subgroups resources:Faunalytics Fundamentals: a series of topic overviews and resources, such as onfarmed animals,wildlife,invertebrates, etcWild Animals Wiki anda review of contraception methods for wild mammalsA Primer on insect sentience and welfareShrimp Welfare Sequence, Rethink PrioritiesFind a job:Animal Advocacy Careers,Tälist andAlt Protein Careers j...

]]>
SofiaBalderson https://forum.effectivealtruism.org/posts/rQTZPsXGDj2C8ejxw/resources-for-farmed-animal-advocacy-2023-roundup Fri, 29 Dec 2023 22:45:15 +0000 EA - Resources for farmed animal advocacy: 2023 roundup by SofiaBalderson SofiaBalderson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:47 no full 1
86RcytpqKDLGG2mE8 EA - CE-incubated tobacco & NCD policy Charity: updates, funding gap, and future plans for Concentric Policies by Yelnats T.J. Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE-incubated tobacco & NCD policy Charity: updates, funding gap, and future plans for Concentric Policies, published by Yelnats T.J. on December 29, 2023 on The Effective Altruism Forum.Executive SummaryTobacco is a massive global issue: 8 million annual deaths and 230 million annual DALYS (15% and 9% of global totals respectively).There are evidence-based policies - outlined by the WHO's MPOWER framework - that countries can adopt to reduce tobacco use.Policy advocacy for implementing MPOWER measures in neglected countries can avert DALYs with cost-effectiveness matching GiveWell's top charities.Since starting in mid-September,Concentric Policies has engaged with seven ministries of health, met with four, and received a partnership request from one to develop a multisectoral plan for noncommunicable diseases.Closing our Year 1 funding gap ($21,000) is critical for building the necessary capacity to support our government advocacy plans in 2024.About UsConcentric Policies is a nonprofit focused on preventing and controlling noncommunicable diseases. We support the adoption of evidence-based health policies in countries underserved by large NGOs and the international community. Through collaboration with governments, civil society, and citizens, we aim to reduce the unhealthy consumption of tobacco, alcohol, sodium, and sugar. Concentric Policies provides free assistance by engaging stakeholders, strengthening the evidence base through research, and offering technical assistance throughout the policy process.Concentric Policies was launched throughCharity Entrepreneurship, a London-based incubator that turns well-researched ideas into high-impact organizations. Charity Entrepreneurship has helped launchover 30 charities that are now reaching over 20 million people annually with their interventions.ProblemAnnual deaths from tobacco were 6 million in 2013 and rose to 8 million before the pandemic.Today, more people are killed annually by tobacco usage than malaria, HIV, and neonatal deaths combined… twice over.[1]In addition, tobacco usage increases healthcare expenditures, decreases productivity, exacerbates inequality, degrades the environment, and contributes to child labor.ThisEA Forum post from World No Tobacco Day covers these harms in more detail.SolutionThe WHO's MPOWER framework provides cost-effective demand-reduction measures to help countries reduce tobacco consumption. Since MPOWER was introduced globally 15 years ago, an estimated 300 million less people are smoking than might have been if smoking prevalence had stayed the same.[2]Tobacco taxation is the most effective (and cost-effective) intervention for reducing tobacco consumption, yet it is the most neglected intervention.[3] Tobacco has an average price elasticity in LMICs of around -0.5, meaning that for a 10% increase in the retail price of tobacco, consumption decreases by 5%.[4]OpportunityThe number of countries that have adopted at least one MPOWER measure at the highest level of achievement has grown from 44 in 2008 to 151 in 2022. However, only a handful of nations have full compliance with MPOWER guidelines and 44 countries remain unprotected by any of the MPOWER measures.[5] Despite nearly every country signing the WHO's treaty on tobacco, only 13 nations outside of Europe meet the WHO's recommended minimum of taxing tobacco at 75% of retail value.Since starting work in September, we have learned and reaffirmed the following:Some governments are not aware of the potential ROI from comprehensive implementation of the MPOWER frameworkConsolidated funding in the tobacco control space has led to only a dozen or so of the highest-burden countries receiving the majority of resourcesMany smaller countries do not receive any attention from major tobacco control organizat...

]]>
Yelnats T.J. https://forum.effectivealtruism.org/posts/86RcytpqKDLGG2mE8/ce-incubated-tobacco-and-ncd-policy-charity-updates-funding Fri, 29 Dec 2023 14:46:02 +0000 EA - CE-incubated tobacco & NCD policy Charity: updates, funding gap, and future plans for Concentric Policies by Yelnats T.J. Yelnats T.J. https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:12 no full 3
XatixHPupjA5DCCm8 EA - Say how much, not more or less versus someone else by Gregory Lewis Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Say how much, not more or less versus someone else, published by Gregory Lewis on December 29, 2023 on The Effective Altruism Forum.Or: "Underrated/overrated" discourse is itself overrated.BLUF: "X is overrated", "Y is neglected", "Z is a weaker argument than people think", are all species of second-order evaluations: we are not directly offering an assessment of X, Y, or Z, but do so indirectly by suggesting another assessment, offered by someone else, needs correcting up or down.I recommend everyone cut this habit down ~90% in aggregate for topics they deem important, replacing the great majority of second-order evaluations with first-order evaluations. Rather than saying whether you think X is over/under rated (etc.) just try and say how good you think X is.The perils of second-order evaluationSuppose I say "I think forecasting is underrated". Presumably I mean something like:I think forecasting should be rated this highly (e.g. 8/10 or whatever)I think others rate forecasting lower than this (e.g. 5/10 on average or whatever)So I think others are not rating forecasting highly enough.Yet whether "Forecasting is overrated" is true or not depends on more than just "how good is forecasting?" It is confounded by questions of which 'others' I have in mind, and what their views actually are. E.g.:Maybe you disagree with me - you think forecasting is overrated - but it turns out we basically agree on how good forecasting is. Our apparent disagreement arises because you happen to hang out in more pro-forecasting environments than I do.Or maybe we hang out in similar circles, but we disagree in how to assess the prevailing vibes. We basically agree on how good forecasting is, but differ on what our mutual friends tend to really think about it.(Obviously, you could also get specious agreement of two-wrongs-make-a-right variety: you agree with me forecasting is underrated despite having a much lower opinion of it than I do, because you assess third parties having an even lower opinion still)These are confounders as they confuse the issue we (usually) care about: how good or bad forecasting is, not the inaccuracy of others nor in which direction they err re. how good they think forecasting is.One can cut through this murk by just assessing the substantive issue directly. I offer my take on how good forecasting is: if folks agree with me, it seems people generally weren't over or under- rating forecasting after all. If folks disagree, we can figure out - in the course of figuring out how good forecasting is - whether one of us is over/under rating it versus the balance of reason, not versus some poorly scribed subset of prevailing opinion. No phantom third parties to the conversation are needed - or helpful to - this exercise.In praise of (kind-of) objectivity, precision, and concretenessThis is easier said than done. In the forecasting illustration above, I stipulated 'marks out of ten' as an assessment of the 'true value'. This is still vague: if I say forecasting is '8/10', that could mean a wide variety of things - including basically agreeing with you despite you giving a different number to me. What makes something 8/10 versus 7/10 here?It is still a step in the right direction. Although my '8/10' might be essentially the same as your '7/10', there probably some substantive difference between 8/10 and 5/10, or 4/10 and 6/10. It is still better than second order evaluation, which adds another source of vagueness: although saying for myself forecasting is X/10 is tricky, it is still harder to do this exercise on someone else's (or everyone else's) behalf.And we need not stop there. Rather than some singular measure like 'marks out of 10' for 'forecasting' as a whole, maybe we have some specific evalution or recommendation in mind.Perhaps: "Most members o...

]]>
Gregory Lewis https://forum.effectivealtruism.org/posts/XatixHPupjA5DCCm8/say-how-much-not-more-or-less-versus-someone-else Fri, 29 Dec 2023 05:43:15 +0000 EA - Say how much, not more or less versus someone else by Gregory Lewis Gregory Lewis https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:51 no full 4
bkchwgNL9zh44o3oe EA - Malaria Vaccine Research Help Needed by joshcmorrison Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Malaria Vaccine Research Help Needed, published by joshcmorrison on December 30, 2023 on The Effective Altruism Forum.We at 1Day Sooner postedrecently about scoping a campaign to push for an accelerated rollout of the newly approved R21/Matrix-M malaria vaccine. The vaccine was recently prequalified by the WHO, a key step on the critical path to vaccine distribution, but much remains to be done.We greatly appreciate the more than a dozen people who reached out to help after our last post. Their work was invaluable for producing our December Malaria Vaccination Status Report, the development of which has been critical to improving our understanding of the problem. Our colleague Zacharia Kafuko's op-ed as well as Peter Singer's on the subject are also both good sources for further reading.We plan to publish a new status report every month and maintain a rolling public comment version to reflect our latest understanding of the issue and use as a sort of global workspace to share the most critical information about obstacles and enablers for widespread distribution.To make our research work for this more sustainable we're moving to a pool system where members sign up for at least four days out of the month where they will be assigned a 1-2.5 hour research or writing task to update and improve our status report document. Pool members will be paid $100 per pool day. (Here is a punch list of the type of goals we have for our next draft.here.).We are looking to add 5-10 new pool members for January beyond those who signed up last month. If you're interested in helping, please emailryan.duncombe@1daysooner.org.Questions and comments are very welcome. Thanks!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
joshcmorrison https://forum.effectivealtruism.org/posts/bkchwgNL9zh44o3oe/malaria-vaccine-research-help-needed Sat, 30 Dec 2023 15:24:59 +0000 EA - Malaria Vaccine Research Help Needed by joshcmorrison joshcmorrison https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:46 no full 1
Hhtvwx2ka4pzoWg7e EA - AI alignment shouldn't be conflated with AI moral achievement by Matthew Barnett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment shouldn't be conflated with AI moral achievement, published by Matthew Barnett on December 30, 2023 on The Effective Altruism Forum.In this post I want to make a simple point that I think has big implications.I sometimes hear EAs talk about how we need to align AIs to "human values", or that we need to make sure AIs are benevolent. To be sure, ensuring AI development proceeds ethically is a valuable aim, but I claim this goal is not the same thing as "AI alignment", in the sense of getting AIs to try to do what people want.My central contention here is that if we succeed at figuring out how to make AIs pursue our intended goals, these AIs will likely be used to maximize the economic consumption of existing humans at the time of alignment. And most economic consumption is aimed at satisfying selfish desires, rather than what we'd normally consider our altruistic moral ideals.Only a small part of human economic consumption appears to be what impartial consequentialism would recommend, including the goal of filling the universe with numerous happy beings who live amazing lives.Let me explain.Consider how people currently spend their income. Below I have taken a plot from the blog Engaging Data, which borrowed data from the Bureau of Labor Statistics in 2019. It represents a snapshot of how the median American household spends their income.Most of their money is spent on the type of mundane consumption categories you'd expect: housing, utilities, vehicles etc. It is very likely that the majority of this spending is meant to provide personal consumption for members of the household or perhaps other family and friends, rather than strangers. Near the bottom of the chart, we find that only 3.1% of this spending is on what we'd normally consider altruism: voluntary gifts and charity.To be clear, this plot does not comprise a comprehensive assessment of the altruism of the median American household. Moreover, moral judgement is not my intention here. Instead, my intention is to emphasize the brute fact that when people are given wealth, they primarily spend it on themselves, their family, or their friends, rather than to pursue benevolent moral ideals.This fact is important because, to a first approximation, aligning AIs with humans will simply have the effect of greatly multiplying the wealth of existing humans - i.e. the total amount of resources that humans have available to spend on whatever they wish. And there is little reason to think that if humans become extraordinarily wealthy, they will follow idealized moral values.To see why, just look at what current people already do, who are many times richer than their ancestors centuries ago. All that extra wealth did not make us extreme moral saints; instead, we still mostly care about ourselves.Why does this fact make any difference? Consider the prescription of classical utilitarianism to maximize population size. If given the choice, humans would likely not spend their wealth to pursue this goal. That's because humans care far more about our own per capita consumption than global aggregate utility. When humans increase population size, it is usually a byproduct of their desire to have a family, rather than being the result of some broader utilitarian moral calculation.Here's another example. When given the choice to colonize the universe, future humans will likely want a rate of return on their investment, rather than merely deriving satisfaction from the fact that humanity's cosmic endowment is being used well. In other words, we will likely send out the von Neumann probes as part of a scheme to benefit ourselves, not out of some benevolent duty to fill the universe with happy beings.Now, I'm not saying selfishness is automatically bad. Indeed, when channeled appropriately, selfishness serves t...

]]>
Matthew_Barnett https://forum.effectivealtruism.org/posts/Hhtvwx2ka4pzoWg7e/ai-alignment-shouldn-t-be-conflated-with-ai-moral Sat, 30 Dec 2023 07:29:45 +0000 EA - AI alignment shouldn't be conflated with AI moral achievement by Matthew Barnett Matthew_Barnett https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:48 no full 2
zzsQMTejrRvYodkTS EA - Exaggerating the risks (Part 13: Ord on Biorisk) by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exaggerating the risks (Part 13: Ord on Biorisk), published by Vasco Grilo on December 31, 2023 on The Effective Altruism Forum.This is a crosspost to Exaggerating the risks (Part 13: Ord on Biorisk), as published by David Thorstad on 29 December 2023.This massive democratization of technology in biological sciences … is at some level fantastic. People are very excited about it. But this has this dark side, which is that the pool of people that could include someone who has … omnicidal tendencies grows many, many times larger, thousands or millions of times larger as this technology is democratized, and you have more chance that you get one of these people with this very rare set of motivations where they're so misanthropic as to try to cause … worldwide catastrophe.Toby Ord,80,000 Hours InterviewListen to this post [there is an option for this in the original post]1. IntroductionThis is Part 13 of my seriesExaggerating the risks. In this series, I look at some places where leading estimates of existential risk look to have been exaggerated.Part 1 introduced the series. Parts 2-5 (sub-series: "Climate risk") looked at climate risk. Parts 6-8 (sub-series: "AI risk") looked at the Carlsmith report on power-seeking AI.Parts9,10 and11 began anew sub-series on biorisk. InPart 9, we saw that many leading effective altruists give estimates between 1.0-3.3% for the risk of existential catastrophe from biological causes by 2100. I think these estimates are a bit too high.Because I have had a hard time getting effective altruists to tell me directly what the threat is supposed to be, my approach was to first survey the reasons why many biosecurity experts, public health experts, and policymakers are skeptical of high levels of near-term existential biorisk. Parts9,10 and11 gave a dozen preliminary reasons for doubt, surveyed at the end ofPart 11.The second half of my approach is to show that initial arguments by effective altruists do not overcome the case for skepticism.Part 12 examined a series of risk estimates by Piers Millett and Andrew Snyder-Beattie. We saw, first, that many of these estimates are orders of magnitude lower than those returned by leading effective altruists and second, that Millett and Snyder-Beattie provide little in the way of credible support for even these estimates.Today's post looks at Toby Ord's arguments in The Precipice for high levels of existential risk. Ord estimates the risk of irreversible existential catastrophe by 2100 from naturally occurring pandemics at 1/10,000, and the risk from engineered pandemics at a whopping 1/30. That is a very high number. In this post, I argue that Ord does not provide sufficient support for either of his estimates.2. Natural pandemicsOrd begins with a discussion of natural pandemics. I don't want to spend too much time on this issue, since Ord takes the risk of natural pandemics to be much lower than that of engineered pandemics. At the same time, it is worth asking how Ord arrives at a risk of 1/10,000.Effective altruists effectively stress that humans have trouble understanding how large certain future-related quantities can be. For example, there might be 1020, 1050 or even 10100 future humans. However, effective altruists do not equally stress how small future-related probabilities can be. Risk probabilities can be on the order of 10-2 or even 10-5, but they can also be a great deal lower than that: for example, 10-10, 10-20, or 10-50 [for example, a terrorist attack causing human extinction is astronomically unlikely on priors].Most events pose existential risks of this magnitude or lower, so if Ord wants us to accept that natural pandemics have a 1/10,000 chance of leading to irreversible existential catastrophe by 2100, Ord owes us a solid argument for this conclusion. It ...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/zzsQMTejrRvYodkTS/exaggerating-the-risks-part-13-ord-on-biorisk Sun, 31 Dec 2023 20:10:52 +0000 EA - Exaggerating the risks (Part 13: Ord on Biorisk) by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 20:45 no full 2
8P2GZFLnv8HW9ozLB EA - EA Wins 2023 by Shakeel Hashim Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Wins 2023, published by Shakeel Hashim on December 31, 2023 on The Effective Altruism Forum.Crossposted from Twitter.As the year comes to an end, we want to highlight and celebrate some of the incredible achievements from in and around the effective altruism ecosystem this year.1. A new malaria vaccineThe World Health Organizationrecommended its second-ever malaria vaccine this year: R21/Matrix-M, designed to protect babies and young children from malaria. The drug's recently concludedPhase III trial, which was co-funded by Open Philanthropy, found that the vaccine was between 68-75% effective at targeting the disease, which kills around 600,000 people (mainly children) each year.The work didn't stop there, though. Following advocacy from many people - includingZacharia Kafuko of 1 Day Sooner - the WHO quickly prequalified the vaccine, laying the groundwork for an expedited deployment and potentially saving hundreds of thousands of children's lives. 1 Day Sooner isnow working to raise money to expedite the deployment further.2. The Supreme Court upholds an animal welfare lawIn 2018, Californians voted for Proposition 12 - a bill that banned intensive cage confinement and the sale of animal products from animals in intensive confinement. The meat industry challenged the law for being unconstitutional - but in May of this year, the US Supreme Courtupheld Prop 12, a decision that will improve the lives of millions of animals who would otherwise be kept in cruel and inhumane conditions.Organizations such as The Humane League - one of Animal Charity Evaluators'top charities - are a major part of this victory; their tireless campaigning is part of what made Prop 12 happen.Watch a panel discussion featuring The Humane League at EAG London 2023here.3. AI safety goes mainstream2023 was the year AI safety went mainstream. After years of work from people in and around effective altruism, this year saw hundreds of high-profile AI experts - including two Turing Award winnerssay that "mitigating the risk of extinction from AI should be a global priority".That was followed by a flurry of activity from policymakers, including aUS Executive Order, an internationalAI Safety Summit, the establishment of the UKFrontier AI Taskforce, and a deal on theEU AI Act - which, thanks to the efforts of campaigners, is now going to regulate foundation models that pose a systemic risk to society.Important progress was made in technical AI safety, too, including work onadversarial robustness,mechanistic interpretability, andlie detection.Watch a talk from EAG Boston 2023 on technical AI safetyhere.4. Results from the world's largest UBI studySince 2018, GiveDirectly - an organization that distributes direct cash transfers to those in need - has been running the world's largestuniversal basic income experiment in rural Kenya.In September, researchers led by MIT economist Taveneet Suri and Nobel laureate Abhijit Banerjee, published theirlatest analysis of the data - finding that giving people money as a lump sum leads to better results than dispersing it via monthly payments. Long-term UBI was also found to be highly effective and didn't discourage work. The results could have significant implications for how governments disburse cash aid.Watch GiveDirectly'stalk at EAGx Nordics 2023.5. Cultivated meat approved for sale in USAfter years of work from organizations like the Good Food Institute, in June 2023 the USDA finallyapproved cultivated meat for sale in the US.The watershed moment made the US the second country (after Singapore) to legalize the product, which could have significant impacts on animal welfare by reducing the number of animals that need to be raised and killed for meat.Watch the Good Food Institute's Bruce Friedrich talk about alternative ...

]]>
Shakeel Hashim https://forum.effectivealtruism.org/posts/8P2GZFLnv8HW9ozLB/ea-wins-2023 Sun, 31 Dec 2023 16:51:22 +0000 EA - EA Wins 2023 by Shakeel Hashim Shakeel Hashim https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:06 no full 6
eHKhrvBexNtj6ahj2 EA - Your EA Forum 2023 Wrapped by Sarah Cheng Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Your EA Forum 2023 Wrapped, published by Sarah Cheng on December 31, 2023 on The Effective Altruism Forum.Last year we introduced the EA Forum Wrapped feature, and this year we've totally redesigned it for you - see your EA Forum 2023 Wrapped here.Thanks to everyone for visiting and contributing to the EA Forum this year!If you have any feedback or questions about the results, please feel free to leave a comment on this post. Consider sharing if you found something surprising or interesting.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Sarah Cheng https://forum.effectivealtruism.org/posts/eHKhrvBexNtj6ahj2/your-ea-forum-2023-wrapped Sun, 31 Dec 2023 06:20:14 +0000 EA - Your EA Forum 2023 Wrapped by Sarah Cheng Sarah Cheng https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:42 no full 8
3fceNPRkSwqTivJJ7 EA - Extended Navel-Gazing On My 2023 Donations by jenn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Extended Navel-Gazing On My 2023 Donations, published by jenn on January 1, 2024 on The Effective Altruism Forum.Previously: Donations, The First YearHere's an update on what my household donated to this year, and why. Please be warned that there is some upsetting content related to the ongoing Israel-Hamas conflict in this post, in the first section.The Against Malaria FoundationAround 90% of our donations ($15,000 of $16,500 total, all amounts in CAD) went to the Against Malaria Foundation (AMF). I remain a very old school EA mostly committed to global health and poverty reduction interventions for humans.If I was a US citizen I'd donate a portion of this to GiveWell's Unrestricted Fund for reasons I'll touch on below, but as a Canadian the key consideration for me was which GiveWell-recommended charities and funds had a Canadian entity, and unfortunately (or fortunately for eliminating analysis paralysis?) the AMF was the only recommended charity registered in Canada. This meant I could donate tax-deductibly, which meant I can donate ~20% more.(Or so I thought at the time. I've now discovered CAFCanada, but that's a problem for my 2024 donations.)The AMF almost didn't get my donation this year.According to Givewell's 2021 analysis, the AMF saves in expectation one life for every $7300 CAD donated. In the days after the onset of the Israel-Palestinian conflict, I began researching nonprofits offering medical aid to Palestinians, thinking that there's a chance their impact might surpass that benchmark[1].I read many annual reports for many charities, focusing extra on their work in previous years of conflict. In the end none of them were anywhere close to how effective the AMF is (like at least an order of magnitude off), with one exception.Glia Gaza is a small team of Canadian doctors who are providing emergency care and 3D printed tourniquets to wounded Palestinians. The tourniquets came in different sizes for women and children in addition to men (most suppliers only supply tourniquets in adult male sizes).I researched the efficacy of tourniquets in saving lives. If you are dealing with bullet wounds, they help a lot when you use them to staunch bleeding and prolong the time you have to get to a hospital. They help, too, if there are no hospitals, just by significantly reducing the chance that you bleed out and die right there.Tying a tourniquet is challenging; it's easy to make mistakes that could worsen the situation or fail to apply them tightly enough. Glia created a new kind of 3D printed tourniquet that made it easier to tie properly, quickly. You can read some harrowing field reports that they wrote about their prototypes in 2018. There are some disturbing pictures, and worse stories. But the conclusion was that the tourniquets worked, and that they worked well.Their 3D printers were solar powered so they weren't dependent on grid access and the plastic was locally sourced. They're just printing out a whole bunch of them and leaving strategic caches for medical professionals to use, and to use themselves. Each tourniquet would cost $15 CAD to produce and distribute. With $7300 CAD they'd be able to distribute 486 tourniquets.I thought the chances were good that 486 additional tourniquets translated to more than one life saved on expectation (though I'm not an expert and I had some pretty huge error bars, and there was some questions around scalability with additional funds and the like). I decided to sleep on it before donating.I woke up to an update to their fundraising page. Their office where they had all their 3D printers (they didn't have that many) was caught in the blast of a bomb, and they had no ability to fix them. And because of the blockade there was no chance that they'd be able to fix them any time soon.Also, because of the bl...

]]>
jenn https://forum.effectivealtruism.org/posts/3fceNPRkSwqTivJJ7/extended-navel-gazing-on-my-2023-donations Mon, 01 Jan 2024 00:23:12 +0000 EA - Extended Navel-Gazing On My 2023 Donations by jenn jenn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:36 no full 5
ni9b5ejJxotguGgmF EA - EA Barcelona: Our first year of impact by Melanie Brennan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Barcelona: Our first year of impact, published by Melanie Brennan on January 2, 2024 on The Effective Altruism Forum.TL;DR: 2023 was the first year that EA Barcelona has had a designated community builder (me!), and our community has grown substantially as a result. This post summarises what went well, what we found challenging and our plans for 2024.Disclaimer: This is my first EA forum post and I'm nervous. This time a year ago, I didn't really know what an "x-risk" was, and I pronounced "utilitarianism" as "utalatarianism" (but hey, I wasn't the only one!). But things have changed a lot since then, so I wrote this post with different goals in mind: to reflect, to inform, to entertain and (hopefully) to inspire. Also, I come from an Arts background, and I think it's kind of nice to balance AI Safety heavy stuff with lighter fun stuff from time to time.Shoutout: This post was partially inspired by (but can never live up to) the postThe Spanish Speaking Effective Altruism community is awesome! written byJaime Sevilla.Some context on EA in Spain & in BarcelonaThere have been a number of attempts to build an EA community in Spain over the years. In Madrid circa 2019, there was quite an active community infamously known as "Jaime and the Pablos", who held weekly activities and organized several larger events as well. And in Barcelona, there was also a small but passionate group meeting up regularly around the same time. However, due to the pandemic, changes in direction and other factors, neither one continued beyond 2020 as a formally coordinated, sustainable local group.Fast forward to July 2021 and enter another Pablo:Pablo Rosado - principal data scientist atOur World In Data and also my partner. Pablo R. had been learning about EA online and trying to apply its principles to his life for a couple of years already when he discovered the semi-dormant EA Barcelona Facebook page. He noticed that a couple of guys were planning to meet up and "have a chat about EA" that afternoon, so he went off to meet them and didn't come back until about 7 hours later.And that was the origins of what is now the second wave of EA Barcelona - kudos to Sam Bakeysfield, Miguel Gimeno and others for taking the initiative back then! This small group continued meeting up for their long and thought-provoking chats from time to time, until eventually I got curious and started tagging along too. Then in 2022, once Sam had moved to Portugal and Miguel to The Netherlands, Pablo and I decided to take over as group organisers. For the rest of the year, we ran a couple of introductory talks and arranged the odd casual meeting for the tiny number of EAs who remained.Then, in April 2023, after attending the amazingly inspiringEAGxLatAm in Mexico City, quitting my job immediately afterwards and taking a 2-month sabbatical in Australia to visit friends and family, I applied for funding from the EAIF to do community building professionally. I returned to Barcelona to the exciting news that I had been awarded the grant, and the rest is history! Well, 8 months of history for now.EA Barcelona finally starts to take shapeI started on the grant in May, and what a whirlwind of a time it's been since then! We've run lots of different kinds of events, such as expert talks, discussion groups, coworking sessions and social activities, and we've managed to attract a diverse group of interested people to the movement.Here are a few highlights of 2023, as always best expressed in images and (occasionally amusing) captions:Our main achievements in 2023Our overarching goal was to both consolidate and grow the local EA community for the rest of the year (May through December). Given the humble state EA Barcelona was in prior to this, having about 5-7 committed members and very few activities, I woul...

]]>
Melanie Brennan https://forum.effectivealtruism.org/posts/ni9b5ejJxotguGgmF/ea-barcelona-our-first-year-of-impact Tue, 02 Jan 2024 20:55:17 +0000 EA - EA Barcelona: Our first year of impact by Melanie Brennan Melanie Brennan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:22 no full 3
BcjqfECWBvhtBMZou EA - Apply now to CE's second Research Training Program by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply now to CE's second Research Training Program, published by CE on January 3, 2024 on The Effective Altruism Forum.What we have learned from our pilot and when our next program is happeningTL;DR: We are excited to announce the second round of ourResearch Training Program. This online program is designed to equip participants with the tools and skills needed to identify, compare, and recommend the most effective charities, interventions, and organisations. It is a full-time (35 hours per week), fully cost-covered program that will run remotely for 12 weeks.[APPLY HERE]Deadline for application: January 28, 2024.The program dates are April 15 - July 5, 2024.If you are progressing to the last stage of the application process you will receive a final decision by the 15th of March the latest. Please let us know if you need a decision before that date.What have we learned from our pilot?The theory of change for the Research Training Program has three outputs: helping train people to switch into impactful research positions, creating intervention reports that influence CE's decisions of which new organisations to start, and creating evaluations that help organisations have the most impact and funders to make impact maximising decisions. We have outlined what we have learned about each of these aspects below:Intervention reports: For eight out of the eleven weeks, the sixteen research fellows have investigated fifteen cause areas, created forty-six shallow reviews, and written twenty-two deep dives, five of which have already been published on the EA forum (find themhere). Although we are planning some changes to improve the fellows' experience in the program, we are deeply impressed by these results and look forward to replicating them with a slightly different approach.People: Since the program ended only a couple of weeks ago, it is too early to tell what career switches will happen because of the program. We have some early and very promising results with two research fellows already having made career changes that we consider highly impactful. If you are currently hiring and are interested in people with intervention prioritisation skills applying, please contact us.Charity evaluations: Traditionally, Charity Entrepreneurship has focused most of its research on investigating the potential impact of interventions. We believe that more impact-focused accountability is essential for the sector, and we would like to support the evaluated organisations and funders in making more informed decisions. This is why at the end of the program the research fellows focused on writing charity evaluations in group projects.We were too confident in our timelines and are planning a major restructuring of this part of the program. However, we are happy that three evaluations could be shared directly with the evaluated organisations. We are looking forward to learning from other evaluators in the space.What will the next program look like?Content: The program will start with a week of providing an overview of the most important research skills. The program's first part will then focus on writing cause area reports in groups in which fellows take a problem and identify the most promising solutions. Afterwards, the fellows investigate those most promising ideas through a shallow review.After conducting a shallow review, research fellows will evaluate the most promising interventions through a deep dive, which will be polished and published, and influence decision-making within Charity Entrepreneurship and beyond. After these reports are published, there will be some time to think about careers and apply to different opportunities, before jumping into some charity evaluations that can influence the decisions of funders as well as strategic decisions within the evaluated o...

]]>
CE https://forum.effectivealtruism.org/posts/BcjqfECWBvhtBMZou/apply-now-to-ce-s-second-research-training-program Wed, 03 Jan 2024 15:37:14 +0000 EA - Apply now to CE's second Research Training Program by CE CE https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:54 no full 3
69kYhMjGnvgHqHP9r EA - My Experience Donating Blood Stem Cells, or: Why You Should Join a Bone Marrow Registry by Silas Strawn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Experience Donating Blood Stem Cells, or: Why You Should Join a Bone Marrow Registry, published by Silas Strawn on January 3, 2024 on The Effective Altruism Forum.Note: I'm not a doctor. Please don't make decisions about your health based on an EA forum post before at least talking with a physician or other licensed healthcare practitioner.TLDR: I donated blood stem cells in early 2021. Immediately prior, I had been identified as the best match for someone in need of a bone marrow transplant, likely with leukemia, lymphoma, or similar condition. Although the first attempt to collect my blood stem cells failed, my experience was overwhelmingly positive as well as fulfilling on a personal level. The foundation running the donation took pains to make it as convenient as possible - and free, other than my time.I recovered quickly and have had no long-term issues related to the donation[1]. I would encourage everyone to at least do the cheek swab to join the registry if they are able.this page to join the Be The Match registry.This post was prompted - very belatedly - by a comment from "demost_" on Scott Alexander's post about his experience donating a kidney[2]. The commenter was speculating about the differences between bone marrow donation and kidney donation[3]. I'm typically a lurker, but I figured this is a case where I actually do have something to say[4]. According to demost_, fewer than 1% of those on the bone marrow registry get matched, so my experience is relatively rare.I checked and couldn't find any other forum posts about being a blood stem cell or bone marrow donor. I hope to shine a light on what the experience is like as a donor. I know EAs are supposed to be motivated by cold, hard facts and rationality and so this post may stick out since it's recounting a personal experience[5]. Nevertheless, given how close-to-home matters of health are, I figured this could be useful for those considering joining the registry or donating.My Donation ExperienceI joined the registry toward the end of my college years. I don't recall the exact details, but I've pieced together the timeline from my email archives. Be The Match got my cheek swab sample in December 2019 and I officially joined the registry in January 2020. If you're a university student (at least in America[6]), there's a good chance that at some point there will be a table in your commons or quad where volunteers will be offering cheek swabs to join the bone marrow donor registry. The whole process takes a few minutes and I'd encourage everyone to at least join the registry if they can.Mid-December 2020, I was matched and started the donation process. For the sake of privacy, they don't tell you anything about the recipient at that point beyond the vaguest possible demographic info. I think they told me the gender and an age range, but nothing besides.demost_ supposed that would-be donors should be more moved to donate bone marrow than kidneys since there's a particular, identifiable person in need (and marrow is much more difficult to match, so you're less replaceable as a donor). I can personally attest to this. Even though I didn't know much about the recipient at all, I felt an extreme moral obligation to see the process through. I knew that my choice to donate could make a massive difference to this person.I imagined how I would feel if it were a friend or loved one in need or even myself. The minor inconveniences of donating felt doubly minor next to the weight of someone's life.As a college student, I had a fluid schedule. I was also fortunate that my distributed systems professor was happy to let me defer an exam scheduled for the donation date. To their credit, Be The Match offered not only to compensate any costs associated with the donation, but also to replace any wages missed...

]]>
Silas Strawn https://forum.effectivealtruism.org/posts/69kYhMjGnvgHqHP9r/my-experience-donating-blood-stem-cells-or-why-you-should Wed, 03 Jan 2024 14:04:50 +0000 EA - My Experience Donating Blood Stem Cells, or: Why You Should Join a Bone Marrow Registry by Silas Strawn Silas Strawn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:17 no full 4
ZAXeywg7mcgLGeisA EA - Why EA should (probably) fund ceramic water filters by Bernardo Baron Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why EA should (probably) fund ceramic water filters, published by Bernardo Baron on January 3, 2024 on The Effective Altruism Forum.Epistemic status: after researching for more than 80 hours each, we are moderately certain that ceramic filters (CFs) can be more cost-effective than chlorination to prevent waterborne diseases at least in some - and possibly in many - LMICs. We are less certain of the real size of the effects from CFs, and how some factors like household sizes affect the final cost-effectiveness.At least 1.7 billion people globally used drinking water sources contaminated with feces in 2022, leading to significant health risks from waterborne enteric infections. According to the Global Burden of Disease (GBD) 2019 study, more than 2.5% of total DALYs lost that year were linked to unsafe water consumption - and there is some evidence that this burden can be even bigger.This makes the improvement of access to clean water a particularly pressing problem in the Global Health and Development area.As a contribution to target this problem, we have put together a report on ceramic water filters as a potential intervention to improve access to safe water in low and medium income countries. This was written during our time as research fellows at Charity Entrepreneurship's Research Training Program (Fall 2023).In this post, we summarize the main findings of the report. Nonetheless, we invite people interested in the subject to check out the full report, which provides much more detail into each topic we outline here.Key takeaways:There are several (controlled, peer-reviewed) studies that link the distribution of ceramic filters to less frequent episodes of diarrhea in LMICs. Those studies have been systematically reviewed and graded low to medium quality.Existing evidence supports the hypothesis that ceramic filters are even more effective than chlorination to reduce diarrhea episodes. However, percentage reductions here should be taken with a grain of salt due to lack of masking and self-report and publication biases.Despite limitations in current evidence, we are cautiously optimistic that ceramic filters can be more cost-effective than chlorination, especially in countries where diarrheal diseases are primarily caused by bacteria and protozoa (and not by viruses). Average household sizes can also play a role, but we are less certain on the extent to which this is true.We provide a Geographic Weighted Factor Model and a country-specific back-of-the envelope analysis of the cost-effectiveness for a hypothetical charity that wants to distribute free ceramic filters in LMICs. Our central scenario for the cost-effectiveness of the intervention in the top prioritized country (Nigeria) is $8.47 U.S. dollars per DALY-averted.We ultimately recommend that EA donors and meta-organizations should invest at least some resources in the distribution of ceramic filters, either by bringing up new charities in this area, or by supporting existing, non-EA organizations that already have lots of expertise in how to manufacture, distribute and monitor the usage of the filters.Why ceramic filters?There are plenty of methods to provide access to safe(r) water in very low-resource settings. Each one of those has some pros and cons, but ceramic filters stand out for being cheap to make, easy to install and operate, effective at improving health, and durable (they are said to last for a minimum of 2 years).In short, a ceramic filter is a combination of a porous ceramic element and a recipient for the filtered water (usually made of plastic). Water is manually put into the ceramic part and flows through its pores due to gravity. Since pores are very small, they let water pass, but physically block bigger particles - including bacteria, protozoa and sediments - from passing....

]]>
Bernardo Baron https://forum.effectivealtruism.org/posts/ZAXeywg7mcgLGeisA/why-ea-should-probably-fund-ceramic-water-filters Wed, 03 Jan 2024 11:31:24 +0000 EA - Why EA should (probably) fund ceramic water filters by Bernardo Baron Bernardo Baron https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:02 no full 5
NWmBkMe3yF4GQNnai EA - How We Plan to Approach Uncertainty in Our Cost-Effectiveness Models by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How We Plan to Approach Uncertainty in Our Cost-Effectiveness Models, published by GiveWell on January 3, 2024 on The Effective Altruism Forum.Author: Adam Salisbury, Senior Research AssociateSummaryIn a nutshellWe've received criticism from multiple sources that we should model uncertainty more explicitly in our cost-effectiveness analyses. These critics argue that modeling uncertainty, via Monte Carlos or other approaches, would keep us from being fooled by the optimizer's curse[1] and have other benefits.Our takeaways:We think we're mostly addressing the optimizer's curse already by skeptically adjusting key model inputs, rather than taking data at face value. However, that's not always true, and we plan to take steps to ensure we're doing this more consistently.We also plan to make sensitivity checks on our parameters and on bottom-line cost-effectiveness a more routine part of our research. We think this will help surface potential errors in our models and have other transparency and diagnostics benefits.Stepping back, we think taking uncertainty more seriously in our work means considering perspectives beyond our model, rather than investing more in modeling. This includes factoring in external sources of evidence and sense checks, expert opinion, historical track records, and qualitative features of organizations.Ways we could be wrong:We don't know if our parameter adjustments and approach to addressing the optimizer's curse are correct. Answering this question would require comparing our best guesses to "true" values for parameters, which we typically don't observe.Though we think there are good reasons to consider outside-the-model perspectives, we don't have a fully formed view of how to bring qualitative arguments to bear across programs in a consistent way. We expect to consider this further as a team.What is the criticism we've received?In our cost-effectiveness analyses, we typically do not publish uncertainty analyses that show how sensitive our models are to specific parameters or uncertainty ranges on our bottom line cost-effectiveness estimates. We've received multiple critiques of this approach:Noah Haber argues that, by not modeling uncertainty explicitly, we are subject to the optimizer's curse. If we take noisy effect sizes, burden, or cost estimates at face value, then the programs that make it over our cost-effectiveness threshold will be those that got lucky draws. In aggregate, this would make us biased toward more uncertain programs. To remedy this, he recommends that (i) we quantify uncertainty in our models by specifying distributions on key parameters and then running Monte Carlo simulations and (ii) we base decisions on a lower bound of the distribution (e.g., the 20th percentile).Others[2] have argued we're missing out on other benefits that come from specifying uncertainty. By not specifying uncertainty on key parameters or bottom line cost-effectiveness, we may be missing opportunities to prioritize research on the parameters to which our model is most sensitive and to be fully transparent about how uncertain our estimates are. (more)What do we think about this criticism?We think we're mostly guarding against the optimizer's curse by skeptically adjusting key inputs in our models, but we have some room for improvement.The optimizer's curse would be a big problem if we, e.g., took effect sizes from study abstracts or charity costs at face value, plugged them into our models, and then just funded programs that penciled above our cost-effectiveness bar.We don't think we're doing this. For example, in our vitamin A supplementation cost-effectiveness analysis (CEA), we apply skeptical adjustments to treatment effects to bring them closer to what we consider plausible. In our CEAs more broadly, we triangulate our cost e...

]]>
GiveWell https://forum.effectivealtruism.org/posts/NWmBkMe3yF4GQNnai/how-we-plan-to-approach-uncertainty-in-our-cost Wed, 03 Jan 2024 01:46:01 +0000 EA - How We Plan to Approach Uncertainty in Our Cost-Effectiveness Models by GiveWell GiveWell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 46:53 no full 7
HGf3WwRpgHXy9vzS6 EA - Project ideas: Epistemics by Lukas Finnveden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project ideas: Epistemics, published by Lukas Finnveden on January 4, 2024 on The Effective Altruism Forum.This is part of a series of lists of projects. The unifying theme is that the projects are not targeted at solving alignment or engineered pandemics but still targeted at worlds where transformative AI is coming in the next 10 years or so. See here for the introductory post.If AI capabilities keep improving, AI could soon play a huge role in our epistemic landscape. I think we have an opportunity to affect how it's used: increasing the probability that we get great epistemic assistance and decreasing the extent to which AI is used to persuade people of false beliefs.Before I start listing projects, I'll discuss:Why AI could matter a lot for epistemics. (Both positively and negatively.)Why working on this could be urgent. (And not something we should just defer to the future.) Here, I'll separately discuss:That it's important for epistemics to be great in the near term (and not just in the long run) to help us deal with all the tricky issues that will arise as AI changes the world.That there may be path-dependencies that affect humanity's long-run epistemics.Why AI matters for epistemicsOn the positive side, here are three ways AI could substantially increase our ability to learn and agree on what's true.Truth-seeking motivations. We could be far more confident that AI systems are motivated to learn and honestly report what's true than is typical for humans. (Though in some cases, this will require significant progress on alignment.) Such confidence would make it much easier and more reliable for people to outsource investigations of difficult questions.Cheaper and more competent investigations. Advanced AI would make high-quality cognitive labor much cheaper, thereby enabling much more thorough and detailed investigations of important topics. Today, society has some ability to converge on questions with overwhelming evidence. AI could generate such overwhelming evidence for much more difficult topics.Iteration and validation. It will be much easier to control what sort of information AI has and hasn't seen. (Compared to the difficulty of controlling what information humans have and haven't seen.) This will allow us to run systematic experiments on whether AIs are good at inferring the right answers to questions that they've never seen the answer to.For one, this will give supporting evidence to the above two bullet points. If AI systems systematically get the right answer to previously unseen questions, that indicates that they are indeed honestly reporting what's true without significant bias and that their extensive investigations are good at guiding them toward the truth.In addition, on questions where overwhelming evidence isn't available, it may let us experimentally establish what intuitions and heuristics are best at predicting the right answer.[1]On the negative side, here are three ways AI could reduce the degree to which people have accurate beliefs.Super-human persuasion. If AI capabilities keep increasing, I expect AI to become significantly better than humans at persuasion.Notably, on top of high general cognitive capabilities, AI could have vastly more experience with conversation and persuasion than any human has ever had. (Via being deployed to speak with people across the world and being trained on all that data.)With very high persuasion capabilities, people's beliefs might (at least directionally) depend less on what's true and more on what AI systems' controllers want people to believe.Possibility of lock-in. I think it's likely that people will adopt AI personal assistants for a great number of tasks, including helping them select and filter the information they get exposed to. While this could be crucial for defending aga...

]]>
Lukas Finnveden https://forum.effectivealtruism.org/posts/HGf3WwRpgHXy9vzS6/project-ideas-epistemics Thu, 04 Jan 2024 18:34:07 +0000 EA - Project ideas: Epistemics by Lukas Finnveden Lukas Finnveden https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 33:00 no full 3
EPx8gjkibxiT3dW9M EA - Project ideas for making transformative AI go well, other than by working on alignment by Lukas Finnveden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project ideas for making transformative AI go well, other than by working on alignment, published by Lukas Finnveden on January 4, 2024 on The Effective Altruism Forum.This is a series of posts with lists of projects that it could be valuable for someone to work on. The unifying theme is that they are projects that:Would be especially valuable if transformative AI is coming in the next 10 years or so.Are not primarily about controlling AI or aligning AI to human intentions.[1]Most of the projects would be valuable even if we were guaranteed to get aligned AI.Some of the projects would be especially valuable if we were inevitably going to get misaligned AI.The posts contain some discussion of how important it is to work on these topics, but not a lot. For previous discussion (especially: discussing the objection "Why not leave these issues to future AI systems?"), you can see the sectionHow ITN are these issues? from my previousmemo on some neglected topics.The lists are definitely not exhaustive. Failure to include an idea doesn't necessarily mean I wouldn't like it. (Similarly, although I've made some attempts to link to previous writings when appropriate, I'm sure to have missed a lot of good previous content.)There's a lot of variation in how sketched out the projects are. Most of the projects just have some informal notes and would require more thought before someone could start executing. If you're potentially interested in working on any of them and you could benefit from more discussion, I'd be excited if you reached out to me! [2]There's also a lot of variation in skills needed for the projects. If you're looking for projects that are especially suited to your talents, you can search the posts for any of the following tags (including brackets):[ML] [Empirical research] [Philosophical/conceptual] [survey/interview] [Advocacy] [Governance] [Writing] [Forecasting]The projects are organized into the following categories (which are in separate posts). Feel free to skip to whatever you're most interested in.Governance during explosive technological growthIt's plausible that AI will lead to explosive economic and technological growth.Our current methods of governance can barely keep up with today's technological advances. Speeding up the rate of technological growth by 30x+ would cause huge problems and could lead to rapid, destabilizing changes in power.This section is about trying to prepare the world for this. Either generating policy solutions to problems we expect to appear or addressing the meta-level problem about how we can coordinate to tackle this in a better and less rushed manner.A favorite direction is to develop Norms/proposals for how states and labs should act under the possibility of an intelligence explosion.EpistemicsThis is about helping humanity get better at reaching correct and well-considered beliefs on important issues.If AI capabilities keep improving, AI could soon play a huge role in our epistemic landscape. I think we have an opportunity to affect how it's used: increasing the probability that we get great epistemic assistance and decreasing the extent to which AI is used to persuade people of false beliefs.A couple of favorite projects are: Create an organization that gets started with using AI for investigating important questions or Develop & advocate for legislation against bad persuasion.Sentience and rights of digital minds.It's plausible that there will soon be digital minds that are sentient and deserving of rights. This raises several important issues that we don't know how to deal with.It seems tractable both to make progress in understanding these issues and in implementing policies that reflect this understanding.A favorite direction is to take existing ideas for what labs could be doing and spell ou...

]]>
Lukas Finnveden https://forum.effectivealtruism.org/posts/EPx8gjkibxiT3dW9M/project-ideas-for-making-transformative-ai-go-well-other Thu, 04 Jan 2024 09:22:54 +0000 EA - Project ideas for making transformative AI go well, other than by working on alignment by Lukas Finnveden Lukas Finnveden https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:24 no full 5
MmLzQJWLapLL2YrZo EA - Research summary: farmed yellow mealworm welfare by abrahamrowe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Research summary: farmed yellow mealworm welfare, published by abrahamrowe on January 3, 2024 on The Effective Altruism Forum.This post is a short summary of a peer-reviewed, open access publication on yellow mealworm welfare in the Journal of Insects as Food and Feed. The paper and supplemental information can be accessedhere. The original paper was written by Meghan Barrett, Rebekah Keating Godfrey, Alexandra Schnell, and Bob Fischer; the research conducted in the paper was funded by Rethink Priorities.This post was written by Abraham Rowe and reviewed by Meghan Barrett. Unless cited otherwise, all information is derived from the Barrett et al. 2023 publication.SummaryAs of 2020, around 300 billion yellow mealworms (Tenebrio molitor) are farmed annually (though recent estimates now put this figure at over 3 trillion individuals (Pells, 2023)).Barrett et al. 2023 is the first publication to consider species-specific welfare concerns for farmed mealworms.The authors identify 15 current and future welfare concerns, including more pressing current concerns such as:Disease - Bacterial, fungal, protist, and viral pathogens can cause sluggishness, tissue damage, slowed growth, increased susceptibility to other diseases, and even mass-mortality events.High larval rearing densities - Density can cause a range of negative effects, including increased cannibalism and disease, higher chances of heat-related death, competition over food leading to malnutrition, and behavioral restriction near pupation.Inadequate larval nutrition - This may result from not providing enough protein in the animals' largely grains-based diet.Light use during handling - Photophobic adults and larvae may experience significant stress due to light use during handling.Slaughter methods - While we have high empirical uncertainty about the relative harms of slaughter methods, it is clear that some approaches to slaughter and depopulation on farms are more harmful than others.Future concerns that haven't yet been realized on farms include:Novel, potentially toxic, or inadequate feed substrates - Polymers (like plastics) and mycotoxin-contaminated grains may be more likely to be used in the future.Selective breeding and genetic modification - In vertebrate animals, selective breeding has caused a large number of welfare issues. The same might be expected to become true for mealworms.Current rearing and slaughter practicesYellow mealworms are the larval instars of a species of darkling beetle, Tenebrio molitor. Larvae go through a number of molts prior to pupation, which can take between a few months to two years depending on nutrition and abiotic conditions. Mealworms take up to 20 days to pupate. After pupating, the emerged adult beetles will mate within 3-5 days. Mealworms are a popular insect to farm for food due to their rapid growth, high nutrient content, and ease of handling. Adults are typically only used for breeding, while large larvae are sold as food and feed.Mealworms typically consume decaying grains, but have been reported to eat a wide variety of other foods in certain circumstances (including dead insects, other mealworms, and decaying wood). In farmed conditions, larval mealworms are fed a diet of 70%-85% cereals and other carbohydrates, and may be provided with supplementary protein, fruit, or vegetables.Mealworms are reared in stackable crates, usually with screened bottoms to allow frass (insect excrement) to fall through and not accumulate. Mealworms may be reared in up to 24-hour darkness, as they are photophobic.Insects bound for slaughter are collected at around 100 mg. Prior to slaughter, insects are sieved out of the substrate, washed (to remove frass and other waste from the exterior surface of their bodies), and prevented from eating for up to two days (ca...

]]>
abrahamrowe https://forum.effectivealtruism.org/posts/MmLzQJWLapLL2YrZo/research-summary-farmed-yellow-mealworm-welfare Wed, 03 Jan 2024 23:48:01 +0000 EA - Research summary: farmed yellow mealworm welfare by abrahamrowe abrahamrowe https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:33 no full 6
pMvvW5aKSLffEMecQ EA - Introducing GiveHealth: a giving Pledge for healthcare workers (and a call for volunteers) by RichArmitage Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing GiveHealth: a giving Pledge for healthcare workers (and a call for volunteers), published by RichArmitage on January 5, 2024 on The Effective Altruism Forum.WhatGiveHealth is a new EA-inspired effective giving organisation. It is a community of healthcare professionals who have taken a public Pledge to donate at least 1% of their income to the most effective global health charities.Visitors to the website canLearn about effective giving, their relative wealth on the global stage, how healthcare professionals can improve their impact, the activities of GiveWell and the highly effective nature of the charities recommended by GiveWell.Healthcare professionals are invited to take a publicPledge to donate at least 1% of their income to GiveWell's top charities for the rest of their lives. They can use the Pledge Calculator to determine their monthly/annual donations based on their salary and desired donation percentage. Once they have taken the Pledge their name, profession and location will be displayed on the GiveHealth Community Board, and they can learn about theCharities recommended by GiveWell and follow the links to the donation page of their chosen charities. Pledge takers will receive a survey on each anniversary of their Pledge to capture their donation activities over the previous year so GiveHealth can measure the value of donations it is influencing. Anybody can sign up to the GiveHealth monthly newsletter.Healthcare professionals of all disciplines (nurses, physiotherapists, pharmacists, occupational therapists, doctors, etc) and kinds (clinicians, researchers, managers, policy-makers, students, retired professionals, etc) of any level of seniority, from any part of the world, are welcome to sign the GiveHealth Pledge.WhyHealthcare professionals are a self-selected group of generally altruistic individuals who both care about improving the health of others and are motivated to do so. There is also a strong sense of community and camaraderie amongst healthcare professionals, while their awareness and understanding of EA and its principles is generally low.Giving What We Can has shown the public Pledge model to be an effective vehicle for generating donations (by fostering commitment, community and culture), which GiveHealth has combined with a lower barrier to entry (at least 1% of income rather than 10%) that we feel is more appropriate for a group less familiar with EA. Pledge takers are still able to sign the Giving What We Can pledge, and their GiveHealth pledge can be included within, rather than in addition to, their GWWC pledge (since both pledges commit the individual to donating to effective charities) - for example, donating 10% of income can satisfy both a 10% GiveHealth Pledge and the 10% GWWC pledge.We hope the existing strong sense of community between healthcare professionals, and the 1% low barrier to entry, can be harnessed to generate Pledge-taking momentum amongst these professionals, while increasing their awareness and understanding of EA. In this manner, GiveHealth could be regarded as the healthcare profession analogue of High Impact Athletes and Raising for Effective Giving, which are EA-inspired communities of effective giving relevant to specific professions (elite athletes and professional poker players, respectively).WhereHealthcare professionals from anywhere in the world are welcome and encouraged to take the GiveHealth Pledge.WhoThe Co-Founders of GiveHealth are three UK-trained doctors - Richard Armitage (GP in UK), Alastair Yeoh (infectious diseases doctor in UK) and George Altman (intern in Australia).HowGiveHealth is currently run on an entirely voluntary basis by the three Co-Founders alongside their full-time work as frontline healthcare professionals. No funds were raised from external source...

]]>
RichArmitage https://forum.effectivealtruism.org/posts/pMvvW5aKSLffEMecQ/introducing-givehealth-a-giving-pledge-for-healthcare Fri, 05 Jan 2024 20:28:44 +0000 EA - Introducing GiveHealth: a giving Pledge for healthcare workers (and a call for volunteers) by RichArmitage RichArmitage https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:36 no full 1
2M4KQWBoNiLpWFb8r EA - Announcing Arcadia Impact by Joe Hardie Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Arcadia Impact, published by Joe Hardie on January 5, 2024 on The Effective Altruism Forum.Arcadia Impact is a non-profit organisation that enables individuals in London to use their careers to tackle the world's most pressing problems. We have existed for over a year as London EA Hub (LEAH) and we recently rebranded as an organisation to Arcadia Impact.Our current projects:Effective Altruism Group SupportSafe AI LondonLEAH Coworking SpaceEA Group SupportWe support EA groups atImperial,UCL,KCL, andLSE[1] which includes mentoring student organisers, encouraging collaboration between groups, and running events such as retreats.All four universities are ranked in the top 50 globally, with over 114,000 students collectively, presenting significant potential to build capacity to address pressing global problems. London offers a unique concentration of highly talented students, and therefore an exciting opportunity for EA groups to benefit from collaboration and coordination. Additionally, London is the world's largest EA hub, with an extensive network of professionals working on various causes. Despite this London university groups have historically lacked consistent organiser capacity relative to comparable universities.Since we were founded last year, the groups have reached hundreds of students, with over 200 applying to reading groups. Students who joined our programmes have started full-time roles, attended research programmes, or continued studying with the goal of contributing to a range of EA cause areas. Given the size and potential of the universities, we think there is still significant room to expand and improve our work.Safe AI LondonWe support AI Safety field building activities with Safe AI London (SAIL) supporting individuals in London to find careers that reduce risks from advanced artificial intelligence.We do this by:Running targeted outreach to technical courses at Imperial and UCL due to theconcentration of talent on Computer Science and related courses.Educating people on the alignment problem, throughtechnical andgovernance reading groups and speaker events.Up-skilling people on machine learning through upskilling programmes or by encouraging them to apply to programmes such asARENA.Allowing them to test their fit for research throughMARS London,research sprints, and connecting them to other research opportunities such asMATS.Creating a community of people in London and connecting people to opportunities within the field through socials and retreats.London is Europe's largest hub for AI talent and is becoming an increasingly relevant location for AI safety, with Google DeepMind, Anthropic and OpenAI opening offices here, and AI Safety researchers at MATS, Conjecture, and Center on Longterm Risk. The UK Government has also launched theAI Safety Institute which is working on AI Safety Research within the UK government.AI Safety university groups have shownpromising results over the last year and London universities have a unique concentration of talented students relevant to AI safety with Imperial and UCL ranked in the top 25 universities for computer science courses globally.LEAH Coworking SpaceThe LEAH Coworking Space is an office space in central London used by professionals and students working on impactful projects. The office aims to provide value from:Improving the productivity of professionals doing impactful work. In our most recent user survey, users reported an average of 6.3 additional productive hours per week from using the space.Causing impactful connections and interactions between users.Various situations where we offer assistance to the wider community:Allowing other organisations to use the space for events.Enabling in-person meetings and coworking for remote organisations.We also ...

]]>
Joe Hardie https://forum.effectivealtruism.org/posts/2M4KQWBoNiLpWFb8r/announcing-arcadia-impact Fri, 05 Jan 2024 15:28:10 +0000 EA - Announcing Arcadia Impact by Joe Hardie Joe Hardie https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:23 no full 4
zGu6KP4iLv2y4wJCs EA - Priority review vouchers for tropical diseases: Impact, distribution, effectiveness, and potential improvements by Rethink Priorities Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Priority review vouchers for tropical diseases: Impact, distribution, effectiveness, and potential improvements, published by Rethink Priorities on January 5, 2024 on The Effective Altruism Forum.Suggested citation: Gosnell, G., Hu, J., Braid, E., & Hird, T. 2023. Priority review vouchers for tropical diseases: Impact, distribution, effectiveness, and potential improvements. Rethink Priorities. https://rethinkpriorities.org/publications/priority-review-vouchers.Funding statement: We thank Open Philanthropy for commissioning and funding this research report. The views expressed herein are not necessarily endorsed by Open Philanthropy.Editorial noteThe report evaluates the value and effectiveness of the United States' Tropical Disease Priority Review Voucher Program, which was initiated in 2007 to incentivize research and development for medical products targeting neglected tropical diseases. (While PRVs have since been legislated for purposes, we focus our attention on this application.) Specifically, we describe some of the program's history to date (e.g., past issuances, voucher sales/use dynamics, and evidence of gaming), the usage extent of PRV-awarded medical products, academic and anecdotal evidence of the program's incentive effect, and ways in which we think the program could be improved.We have tried to flag major sources of uncertainty in the report and are open to revising our views as more information becomes available. While preparing this report for publication, we learned that Valneva was awarded a PRV for developing the first Chikungunya vaccine in November 2023 (Dunleavy, 2023), but we did not incorporate this information in the report or associated spreadsheets.We are grateful for the invaluable input of our interviewees. Please note that our interviewees spoke with us in a personal capacity and not on behalf of their respective organizations.Executive summaryWe catalog information about the 13 issuances of Priority Review Vouchers (PRV) under the United States' Tropical Disease PRV Program and, for the seven cases with sufficient data, attempt to estimate the number of treatment courses per 1,000 relevant disease cases, or "use rate." Among the seven products with use rate estimates, we find three with high use rates (>100 courses per 1,000 cases), two have medium use rates (10-100), and two have low use rates (<10). We also find that while all high-use-rate products have been on the market for >10 years, not all products marketed for that long achieve high use rates, and find diverse outcomes in use-rate trajectories, including sharp discontinuities and both upward and downward trends.Given that PRV recipients can either use or sell their voucher, we also explore the dynamics of how the PRVs' value is distributed among different types of players in the industry. We find that PRV sales proceeds go toward repayment for shareholders of small pharmaceutical companies or toward (promises of) further drug development for neglected tropical diseases. Large pharmaceutical companies that receive PRV awards tend to retain or use the voucher for faster FDA review of a profitable drug in their pipelines.Additionally, we review four academic studies that attempt to quantify the effectiveness of PRVs at inducing medical innovations for neglected tropical diseases. Based on their findings and our assessment of study quality, we think it is unlikely that the TD PRV Program had a large, consistent effect on R&D for tropical diseases, but that the results are potentially consistent with a small marginal effect. Additionally, there is historic anecdotal evidence of "gaming the system" - seeking a voucher for a drug that has already been developed and marketed outside of the US - though we think it is unlikely to continue to be an issue going forward given t...

]]>
Rethink Priorities https://forum.effectivealtruism.org/posts/zGu6KP4iLv2y4wJCs/priority-review-vouchers-for-tropical-diseases-impact 100 courses per 1,000 cases), two have medium use rates (10-100), and two have low use rates (<10). We also find that while all high-use-rate products have been on the market for >10 years, not all products marketed for that long achieve high use rates, and find diverse outcomes in use-rate trajectories, including sharp discontinuities and both upward and downward trends.Given that PRV recipients can either use or sell their voucher, we also explore the dynamics of how the PRVs' value is distributed among different types of players in the industry. We find that PRV sales proceeds go toward repayment for shareholders of small pharmaceutical companies or toward (promises of) further drug development for neglected tropical diseases. Large pharmaceutical companies that receive PRV awards tend to retain or use the voucher for faster FDA review of a profitable drug in their pipelines.Additionally, we review four academic studies that attempt to quantify the effectiveness of PRVs at inducing medical innovations for neglected tropical diseases. Based on their findings and our assessment of study quality, we think it is unlikely that the TD PRV Program had a large, consistent effect on R&D for tropical diseases, but that the results are potentially consistent with a small marginal effect. Additionally, there is historic anecdotal evidence of "gaming the system" - seeking a voucher for a drug that has already been developed and marketed outside of the US - though we think it is unlikely to continue to be an issue going forward given t...]]> Fri, 05 Jan 2024 14:46:04 +0000 EA - Priority review vouchers for tropical diseases: Impact, distribution, effectiveness, and potential improvements by Rethink Priorities 100 courses per 1,000 cases), two have medium use rates (10-100), and two have low use rates (<10). We also find that while all high-use-rate products have been on the market for >10 years, not all products marketed for that long achieve high use rates, and find diverse outcomes in use-rate trajectories, including sharp discontinuities and both upward and downward trends.Given that PRV recipients can either use or sell their voucher, we also explore the dynamics of how the PRVs' value is distributed among different types of players in the industry. We find that PRV sales proceeds go toward repayment for shareholders of small pharmaceutical companies or toward (promises of) further drug development for neglected tropical diseases. Large pharmaceutical companies that receive PRV awards tend to retain or use the voucher for faster FDA review of a profitable drug in their pipelines.Additionally, we review four academic studies that attempt to quantify the effectiveness of PRVs at inducing medical innovations for neglected tropical diseases. Based on their findings and our assessment of study quality, we think it is unlikely that the TD PRV Program had a large, consistent effect on R&D for tropical diseases, but that the results are potentially consistent with a small marginal effect. Additionally, there is historic anecdotal evidence of "gaming the system" - seeking a voucher for a drug that has already been developed and marketed outside of the US - though we think it is unlikely to continue to be an issue going forward given t...]]> 100 courses per 1,000 cases), two have medium use rates (10-100), and two have low use rates (<10). We also find that while all high-use-rate products have been on the market for >10 years, not all products marketed for that long achieve high use rates, and find diverse outcomes in use-rate trajectories, including sharp discontinuities and both upward and downward trends.Given that PRV recipients can either use or sell their voucher, we also explore the dynamics of how the PRVs' value is distributed among different types of players in the industry. We find that PRV sales proceeds go toward repayment for shareholders of small pharmaceutical companies or toward (promises of) further drug development for neglected tropical diseases. Large pharmaceutical companies that receive PRV awards tend to retain or use the voucher for faster FDA review of a profitable drug in their pipelines.Additionally, we review four academic studies that attempt to quantify the effectiveness of PRVs at inducing medical innovations for neglected tropical diseases. Based on their findings and our assessment of study quality, we think it is unlikely that the TD PRV Program had a large, consistent effect on R&D for tropical diseases, but that the results are potentially consistent with a small marginal effect. Additionally, there is historic anecdotal evidence of "gaming the system" - seeking a voucher for a drug that has already been developed and marketed outside of the US - though we think it is unlikely to continue to be an issue going forward given t...]]> Rethink Priorities https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:45 no full 5
zakLJ4syCrrTiAmoS EA - Malaria vaccines: how confident are we? by Sanjay Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Malaria vaccines: how confident are we?, published by Sanjay on January 5, 2024 on The Effective Altruism Forum.Alternative title: should SoGive red-team malaria vaccines?We've been seeing a lot of excitement about malaria vaccines - e.g. the first thing mentioned by theEA wins 2023 post was the R21 vaccine.We at SoGive looked into malaria vaccines about a year ago, and came away with a slightly more cautious impression. Bear in mind though, (a) we were trying to answer a different question[1]; (b) a lot has changed in a year.The purpose of this post is to outline these (currently tentative) doubts, and explore whether there's appetite for us to research this more carefully.The main things we're still unsure ofAt first glance, malaria vaccines appear less cost-effective than existing malaria interventions (nets/SMC[2]). Are they, in fact, less cost-effective?In light of this, does it make sense to advocate for their rollout?We thank 1Day Sooner for their helpful comments and constructive collaboration - we sent them a draft of this shortly before publishing. We also thank our contacts at Malaria Consortium and AMF; when we spoke to them in 2022 for our earlier review of malaria vaccines, their comments were very helpful. Some earlier work done by current/former members of the SoGive team has also provided useful groundwork for the thinking here, so thank you to Isobel Phillips, Ishaan Guptasarma, Scott Smith.Be aware that any indications of cost-effectiveness in this post are extremely rough, and may change materially if we were to conduct this research.Malaria vaccines may be materially (10x??) less effective than nets/SMCBased on the research we did a year ago, it seems that malaria vaccines significantly underperform bednets and SMC. Several items in this table are caveated; it's worth reviewing the version in the appendix which sets out the details.Several items in this table are caveated; it's worth reviewing the version in the appendix which sets out the details.RTS,S vaccineR21 vaccineBednets*Cost-related considerationsCost per person treated$56.40 (estimated)>$8, based on WHO info; ~$25, based on info from 1Day Sooner$2.18Number of doses needed per person4 (i.e. 3 + a booster)4 (i.e. 3 + a booster)0.49 bednets per person protectedLogistics: cold chain?YesYes, but less demanding than RTS,SNoEfficacy-related considerationsReduction in clinical malaria**55.8%77%45%Reduction in severe malaria**32.2%Unknown, estimated at 44.4%45%* SMC is only excluded from this table for brevity, not because of any preference for bednets over SMC.** Malaria reduction figures are estimates under study conditionsVaccine costs look high…When we created this table c.1 year ago, the key message from this table is that costs for vaccines are materially higher than for bednets or SMC, which is significantly driven by logistical difficulties, such as the need for multiple doses and a cold supply chain (i.e. the vaccines have to be kept at a low temperature while they are transported). At the time, we focused on RTS,S because there was more information available.At that stage, we guessed that R21 would likely have similar costs to RTS,S. Somewhat to our surprise, it does seem that R21 costs may be lower than RTS,S costs. We weren't clear on the costs of R21, however when we shared a draft of this with 1Day Sooner, they helpfully pointed us to theirDec 2023 Vaccination Status Report. It seems they believe that each dose costs $3.90 on its own, and the all-in cost of delivering the first dose to a person is $25 per full course.... and there doesn't * seem * to be an offsetting efficacy benefit.Although the efficacy numbers look similar, there are several complicating factors not captured in this table. For example, a consideration about the ages ...

]]>
Sanjay https://forum.effectivealtruism.org/posts/zakLJ4syCrrTiAmoS/malaria-vaccines-how-confident-are-we $8, based on WHO info; ~$25, based on info from 1Day Sooner$2.18Number of doses needed per person4 (i.e. 3 + a booster)4 (i.e. 3 + a booster)0.49 bednets per person protectedLogistics: cold chain?YesYes, but less demanding than RTS,SNoEfficacy-related considerationsReduction in clinical malaria**55.8%77%45%Reduction in severe malaria**32.2%Unknown, estimated at 44.4%45%* SMC is only excluded from this table for brevity, not because of any preference for bednets over SMC.** Malaria reduction figures are estimates under study conditionsVaccine costs look high…When we created this table c.1 year ago, the key message from this table is that costs for vaccines are materially higher than for bednets or SMC, which is significantly driven by logistical difficulties, such as the need for multiple doses and a cold supply chain (i.e. the vaccines have to be kept at a low temperature while they are transported). At the time, we focused on RTS,S because there was more information available.At that stage, we guessed that R21 would likely have similar costs to RTS,S. Somewhat to our surprise, it does seem that R21 costs may be lower than RTS,S costs. We weren't clear on the costs of R21, however when we shared a draft of this with 1Day Sooner, they helpfully pointed us to theirDec 2023 Vaccination Status Report. It seems they believe that each dose costs $3.90 on its own, and the all-in cost of delivering the first dose to a person is $25 per full course.... and there doesn't * seem * to be an offsetting efficacy benefit.Although the efficacy numbers look similar, there are several complicating factors not captured in this table. For example, a consideration about the ages ...]]> Fri, 05 Jan 2024 06:44:49 +0000 EA - Malaria vaccines: how confident are we? by Sanjay $8, based on WHO info; ~$25, based on info from 1Day Sooner$2.18Number of doses needed per person4 (i.e. 3 + a booster)4 (i.e. 3 + a booster)0.49 bednets per person protectedLogistics: cold chain?YesYes, but less demanding than RTS,SNoEfficacy-related considerationsReduction in clinical malaria**55.8%77%45%Reduction in severe malaria**32.2%Unknown, estimated at 44.4%45%* SMC is only excluded from this table for brevity, not because of any preference for bednets over SMC.** Malaria reduction figures are estimates under study conditionsVaccine costs look high…When we created this table c.1 year ago, the key message from this table is that costs for vaccines are materially higher than for bednets or SMC, which is significantly driven by logistical difficulties, such as the need for multiple doses and a cold supply chain (i.e. the vaccines have to be kept at a low temperature while they are transported). At the time, we focused on RTS,S because there was more information available.At that stage, we guessed that R21 would likely have similar costs to RTS,S. Somewhat to our surprise, it does seem that R21 costs may be lower than RTS,S costs. We weren't clear on the costs of R21, however when we shared a draft of this with 1Day Sooner, they helpfully pointed us to theirDec 2023 Vaccination Status Report. It seems they believe that each dose costs $3.90 on its own, and the all-in cost of delivering the first dose to a person is $25 per full course.... and there doesn't * seem * to be an offsetting efficacy benefit.Although the efficacy numbers look similar, there are several complicating factors not captured in this table. For example, a consideration about the ages ...]]> $8, based on WHO info; ~$25, based on info from 1Day Sooner$2.18Number of doses needed per person4 (i.e. 3 + a booster)4 (i.e. 3 + a booster)0.49 bednets per person protectedLogistics: cold chain?YesYes, but less demanding than RTS,SNoEfficacy-related considerationsReduction in clinical malaria**55.8%77%45%Reduction in severe malaria**32.2%Unknown, estimated at 44.4%45%* SMC is only excluded from this table for brevity, not because of any preference for bednets over SMC.** Malaria reduction figures are estimates under study conditionsVaccine costs look high…When we created this table c.1 year ago, the key message from this table is that costs for vaccines are materially higher than for bednets or SMC, which is significantly driven by logistical difficulties, such as the need for multiple doses and a cold supply chain (i.e. the vaccines have to be kept at a low temperature while they are transported). At the time, we focused on RTS,S because there was more information available.At that stage, we guessed that R21 would likely have similar costs to RTS,S. Somewhat to our surprise, it does seem that R21 costs may be lower than RTS,S costs. We weren't clear on the costs of R21, however when we shared a draft of this with 1Day Sooner, they helpfully pointed us to theirDec 2023 Vaccination Status Report. It seems they believe that each dose costs $3.90 on its own, and the all-in cost of delivering the first dose to a person is $25 per full course.... and there doesn't * seem * to be an offsetting efficacy benefit.Although the efficacy numbers look similar, there are several complicating factors not captured in this table. For example, a consideration about the ages ...]]> Sanjay https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 19:58 no full 6
z4Z4BA2tMGbN3fSiL EA - 2023: news on AI safety, animal welfare, global health, and more by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2023: news on AI safety, animal welfare, global health, and more, published by Lizka on January 6, 2024 on The Effective Altruism Forum.I'm listing some of the important and/or EA-related events that happened in 2023. Consider adding more in the comments!A companion post collects research and other "content" highlights from 2023. (That post features content; the one you're reading summarizes news.)Also, the monthlyEA Newsletterdiscussed a lot of the events collected here, and was the starting point for the list of events in this post. If you're subscribed,we would really love feedback.Skip to:News related to different causesAI safety: AI went mainstream, states developed safety-oriented regulation, there was a lot of discourse, and moreGlobal health & development: new vaccines, modified mosquitoes, threatened programs, and ongoing trendsAnimal welfare: political reforms and alternative proteinsUpdates in causes besides AI safety, global health, and animal welfareConcluding notesOther notes:There might be errors in what I wrote (I'll appreciate corrections!).Omissions! I avoided pretty political events (I think they're probably covered sufficiently elsewhere) and didn't focus on scientific breakthroughs. Even besides that, though, I haven't tried to be exhaustive, and I'd love to collect more important events/things from 2023. Please suggest things to add.I'd love to see reflections on 2023 events.What surprised you? What seemed important but now feels like it might have been overblown? What are the impacts of some of these events?And I'd love to see forecasts about what we should expect for 2024 and beyond.I put stars next to some content and news that seemed particularly important, although I didn't use this consistently.More context on how and why I made this: I wanted to collect "important stuff from 2023" to reflect on the year, and realized that one of the resources I have is one I run - the monthlyEA Newsletter. So I started compiling what was meant to be a quick doc-turned-post (by pulling out events from the Newsletter's archives, occasionally updating them or looking into them a bit more). Things kind of ballooned as I worked on this post. (Now there are two posts that aren't short; see the companion, which is less focused on news and more focused on "content.")AI safety: AI went mainstream, states developed safety-oriented regulation, there was a lot of discourse, and moreSee also featured content on AI safety.0. GPT-4 and other models, changes at AI companies, and other news in AI (not necessarily safety)Before we get to AI safety or AI policy developments, here are some relevant changes for AI development in 2023:New models: OpenAIlaunchedGPT-4 in mid-March (alongside announcements from Google, Anthropic, and more). Also around this time (February/March), Google released Bard, Meta released Llama, and Microsoft released Bing/Sydney (which wasimpressive and weird/scary).Model use, financial impacts, and training trends: more people startedusing AI models. Developers got API access tovarious models. Advanced AI chips continued getting better and compute useincreased andgot more efficient.Improvements in models: We started seeing pretty powerful multimodal models (models that can process audio, video, images - not just text), including GPT-4 andGemini. Context windows grew longer.Forecasters on Metaculus seem to increasingly expect human-AI parity on selected tasks by 2040.Changes in leading AI companies:Google combined Brain and DeepMind into one team,Amazon invested in Anthropic,Microsoft partnered with OpenAI,Meta partnered with Hugging Face, a number of new companies launched, and OpenAI CEO Sam Altman wasfired and then reinstated (more on that).Other news: Generative AI companies are increasingly getting su...

]]>
Lizka https://forum.effectivealtruism.org/posts/z4Z4BA2tMGbN3fSiL/2023-news-on-ai-safety-animal-welfare-global-health-and-more Sat, 06 Jan 2024 21:47:55 +0000 EA - 2023: news on AI safety, animal welfare, global health, and more by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 24:34 no full 2
DQbSTo9ktvcGkGLyh EA - Howdy, I'm Elliot by Elliot Billingsley Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Howdy, I'm Elliot, published by Elliot Billingsley on January 6, 2024 on The Effective Altruism Forum.Hi EA community,I'd like to formally introduce myself to this forum, where I've been lurking for a while but have been too timid to post until now, thanks to the encouragement of some.I first heard about EA through the Tim Ferriss Podcast in 2016. I still remember standing on the ferryboat crossing the Bosphorous while listening to Will MacAskill say things that were incredibly obvious, at least after they were heard.In the couple years that followed, I organized a local EA workshop, attended EAGx Berlin, and flew to San Francisco to attend EAG. I got involved with Students for High-Impact Charity, helping out on the periphery. I enjoyed lively conversation with EA Vancouver. And increased the usage of the phrase 'expected value' in daily conversation. That's about it.That was my EA Life Phase I.Half a decade later, I sat down with my wife and child during a Pentathlon in which every day you ask yourself the question: "What is the Most Important Work I can do today?" All of a sudden, it all came back to me. The most important things I can possibly do have quite clearly been described in EA. So I resolved in early 2022 to buckle up and take EA seriously.I honestly wasn't sure what my best option was, so I went with the most inspiring recent topic on the 80k podcast: Andrew Yang's Forward Party. I basically reached out and got named State Lead.I feel my experience with Forward may be a whole 'nuther post so I'll leave it at that.I also engaged in a lot of other ways, in large part thanks to EA Virtual Programs, which I really appreciate. But there's one person who had a huge role in my transition from an EA sleeper cell to a stupidly engaged one. That's Dr. Ben Smith.I swallowed my Ninja Lurker EA Forum personality (Never posts, always votes strongly) in order to write this post, for a specific reason, which I'll share now. Last fall, I launched a coaching practice with the intention of supporting the EA community. I asked some friends and acquaintances to take a chance and try my coaching out, and thank them very much. I now know my coaching helps people.So If I help EAs, I'm helping better, in theory, right? I want to test this theory! I'm going to EAG next month and even have a special cohort designed for attendees. If you're going to EAG, do consider applying, we'd love to have you. So that's my shameless plug.For any of you still reading, I'd like to say thanky (I'm from Texas, that's kind of how my dad used to say 'thank you'). I hope to write here more and learn in this incredible community.ElliotThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Elliot Billingsley https://forum.effectivealtruism.org/posts/DQbSTo9ktvcGkGLyh/howdy-i-m-elliot Sat, 06 Jan 2024 20:00:02 +0000 EA - Howdy, I'm Elliot by Elliot Billingsley Elliot Billingsley https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:41 no full 3
fbTE2cBtnxCqemWNp EA - Double the donation: EA inadequacy found? by Neil Warren Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Double the donation: EA inadequacy found?, published by Neil Warren on January 6, 2024 on The Effective Altruism Forum.I'm only 30% sure that this is actually an inadequacy made by those whose job it is to maximize donations but I've noticed that none of the donations pages of GiveWell, Giving What We Can, Horizon Institute, or METR have this little tab in them that MIRI has (just scroll down after following the link):This little tool comes from doublethedonation.com.I was looking for charities to donate to, and I'm grateful I stumbled upon the MIRI donation page because otherwise I would not have known that Google would literally double my donation. None of the other donation pages except MIRI had this little "does your company do employer matching?" box. WHY.I would wager other tech companies have similar programs, and that a good chunk of EA donations come from employees of those tech companies, and that thousands of dollars a year are wasted in missed opportunities here. If this is an inadequacy, it's a pretty obvious and damaging one. I wish to speak to the manager.I did not spend more than ten minutes noticing this, and just wanted to get this out there as fast as possible. There's a chance I'm being stupid. (Perhaps every tech employee is usually briefed on the donation matching) But if anyone out there has an answer for this or if a GiveWell employee is conveniently walking by and says "wait a minute! We could radically improve our UI!", that'd be great.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Neil Warren https://forum.effectivealtruism.org/posts/fbTE2cBtnxCqemWNp/double-the-donation-ea-inadequacy-found Sat, 06 Jan 2024 12:39:08 +0000 EA - Double the donation: EA inadequacy found? by Neil Warren Neil Warren https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:32 no full 5
oTuNw6MqXxhDK3Mdz EA - Economic Growth - Donation suggestions and ideas by DavidNash Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Economic Growth - Donation suggestions and ideas, published by DavidNash on January 8, 2024 on The Effective Altruism Forum.There was a recent post about economic growth & effective altruism by Karthik Tadepalli. He pointed out that a lot of people agree that economic growth is important, but it hasn't really led to many suggestions for specific interventions.I thought it would be good to get the ball rolling[1] by asking a few people what they think are good donation opportunities in this area, or if not, do they think this area is neglected when you have governments, development banks, investors etc all focused on growth.I'm hoping there will be more in depth research into this in 2024 to see whether there are opportunities for smaller/medium funders, and how competitive it is with the best global health interventions.I have fleshed out a few of the shorter responses with more details on what the suggested organisation does.Shruti Rajagopalan (Mercatus Center):XKDR Forum - Founded by Ajay Shah and Susan Thomas, it aims to advance India's growth journey through economic research, data analysis, and policy engagement, with a focus on core areas like macroeconomics, finance, and judiciary.Susan Thomas has a track record of running a fantastic research group at Indira Gandhi Institute of Development Research and Ajay Shah brings years of experience from fostering research groups at NIPFP and time as consultant to the Finance Ministry, Government of India. Both are excellent economists; their strengths include thinking about big questions from first principles, as well as a strong commitment to economic growth and freedom. They are also very good incubators of talent, and have some excellent young researchers working with them - e.g.Former Emergent Ventures winnersProsperiti -A non-profit organization dedicated to economic growth, greater economic freedom and job opportunities for Indians. It is the only all-female founded research think tank in India with cofounders Bhuvana Anand and Baishali Boman at the helm. Their key focus is on labor regulation, especially gendered regulation. They also work on state and local level regulation impacting businesses, pointing out restrictive labor regulations to state and local government partners. Their core strategy is to offer actionable research on state regulations, assist state governments with the detailed correction of laws and regulations, and also channels the findings to the Union government.Former Emergent Ventures winnersArtha Global - Policy consulting organization that assists developing world governments in designing, implementing, and institutionalizing growth and prosperity-focused policy frameworks. Originally the IDFC Institute, Artha was re-founded under CEO Reuben Abraham after institutional changes to continue the team's work under a new banner.Artha places a strong emphasis on strengthening state capacity as a critical factor in translating intentions into real impact and unlocking India's growth potential. Instead of just focusing on technical inputs, Artha also focuses on coordinated policy implementation. Reuben Abraham's extensive global network identifies talented potential collaborators across government and private institutions. His and Artha's strength lies in bringing together disparate actors and backing them to find shared solutions.Former Emergent Ventures winnersGrowth Teams - Founded by Karthik Akhileshwaran and Jonathan Mazumdar, Growth Teams believes sustaining higher broad-based growth and job creation is imperative for alleviating Indian poverty. They are also advised by growth theorists and empiricists like Lant Pritchett. With federal reforms largely exhausted since the 1990s, the onus is now on states to pursue vital labor, land, capital, industrial, and environmental reform...

]]>
DavidNash https://forum.effectivealtruism.org/posts/oTuNw6MqXxhDK3Mdz/economic-growth-donation-suggestions-and-ideas Mon, 08 Jan 2024 16:37:45 +0000 EA - Economic Growth - Donation suggestions and ideas by DavidNash DavidNash https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:22 no full 4
rWRSvRxAco2bLoXKr EA - [Podcast + Transcript] AMA: Founder and CEO of AMF, Rob Mather by tobytrem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Podcast + Transcript] AMA: Founder and CEO of AMF, Rob Mather, published by tobytrem on January 9, 2024 on The Effective Altruism Forum.This is a transcript for the AMA with Rob Mather, CEO of AMF, which I recorded live on the 19th of December. To listen to a recording of the live AMA as a podcast, follow the link above for the RSS feed, or:Use these links to listen to the podcast on Spotify, Pocketcasts, or Apple Music.Click the speaker icon above to listen to the recording without leaving this page.The questions for the AMA, which were edited and supplemented to, can be found on the original AMA post.Hosting an AMA as a live event, followed by a podcast and a transcript, is a bit of an experiment for us, so please do comment or Forum dm me with any feedback you might have.All of your (and my) questions to Rob are in bold, so you can skim them quickly.Thanks to Rob Mather for his time, and Dane Magaway for her help with this transcript.AMA with Rob Mather, recorded 19th December '23Toby Tremlett: Welcome to this live AMA with Rob Mather, CEO of the Against Malaria Foundation. I'm Toby Tremlett, the EA Forum's content manager. If you're interested in effective altruism, you've probably heard of Rob's charity, the Against Malaria Foundation. For almost two decades, they've been doing crucial work to protect people, especially children, from malaria.To date, around 450 million people have been protected with malaria bed nets from this charity. Once all of their currently funded nets have been distributed, AMF estimates it would have prevented 185,000 deaths. And it's not just AMF saying this, they've been a GiveWell Top Charity since 2009.So to get straight into the AMA, we're going to keep the answers pretty short and snappy. I think Rob said he's going to stick to two minutes per answer. And yeah, Rob, thank you for making the time for coming along for this.Rob Mather: Pleasure.Toby Tremlett: On the theme of making the time, somebody said that they've organized two small fundraisers with AMF, and in both cases, you were incredibly proactive and helpful, taking time to immediately respond to emails and hop onto calls. They say many thanks, but a question remains, where do you find the time and which time management strategies do you use? You have two minutes of time.Rob Mather: I don't use any particular strategies, I'm afraid. I think what I would say is we certainly leverage technology here, so that a lot of the things that I perhaps would normally do as a CEO of a charity I don't do because technology takes over. And perhaps I can give a couple of examples.One of the things that we have to do as a charity is we have to file our accounts. We have to do that, in our case, in 14 countries and there are typically between 10 and 15 documents we have to prepare for each country. Lots of documents, lots of information that would normally take months of a number of people probably putting that together.And we broadly have that content all available to us within nine hours of the end of our financial year because at the end of the day, finances are just ones and zeros so we can automate the living daylights out of it. And therefore a whole series of effort that would otherwise go into admin that would take my time effectively is struck down to just a sliver of time. I think that's one element [that] allows me to put my time in [another] direction.The second thing I would say is that the structure of AMF is very streamlined. We're very focused on what we do. There is a lot of complexity in many ways around distributing nets, particularly around the operations. That's the bit that really requires an awful lot of very careful attention to make sure nets get to people. And because we have a very simple series of steps, if you like, that we go through when we'r...

]]>
tobytrem https://forum.effectivealtruism.org/posts/rWRSvRxAco2bLoXKr/podcast-transcript-ama-founder-and-ceo-of-amf-rob-mather Tue, 09 Jan 2024 08:30:48 +0000 EA - [Podcast + Transcript] AMA: Founder and CEO of AMF, Rob Mather by tobytrem tobytrem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 42:54 no full 3
9copPAEnfCBZd88jE EA - Reflections on my first year of AI safety research by Jay Bailey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on my first year of AI safety research, published by Jay Bailey on January 9, 2024 on The Effective Altruism Forum.Last year, I wrote apost about my upskilling in AI alignment. To this day, I still get people occasionally reaching out to me because of this article, to ask questions about getting into the field themselves. I've also had several occasions to link people to the article who asked me about getting into the field from other means, like my local AI Safety group.Essentially, what this means is that people clearly found this useful (credit to the EA Forum for managing to let the article be findable to those who need it, a year after its publication!) and therefore people would likely find a sequel useful too! This post is that sequel, but reading the first post is not necessary to read this one.The major lesson of this post is this: I made a ton of mistakes, but those mistakes taught me things. By being open to that feedback and keeping my eye on the ball, I managed to find work that suited me in the field eventually. Just like the previous post, I'm happy to answer more questions via PM or in the comments.It's worth noting, this isn't a bold story of me getting a ton of stuff done. Most of the story, by word count, is me flailing around unsure of what to do and making a lot of mistakes along the way. I don't think you'll learn a lot about how to be a good researcher from this post, but I hope you might learn some tips to avoid being a bad one.SummaryI was a software engineer for 3-4 years with little to no ML experience before I was accepted for my initial upskilling grant. (More details are in myinitial post)I attendedSERI MATS, working on aligning language models under Owain Evans. Due to a combination of factors, some my fault and some not, I don't feel like I got a great deal of stuff done.I decided to pivot away from evals towards mechanistic interpretability since I didn't see a good theory of change for evals - this was two weeks before GPT-4 came out and the whole world sat up and took notice. Doh!After upskilling in mechanistic interpretability, I struggled quite a bit with the research. I eventually concluded that it wasn't for me, but was already funded to work on it. Fortunately I had a collaborator, and eventually I wound up using my engineering skills to accelerate his research instead of trying to contribute to the analysis directly.After noticing my theory of change for evals had changed now that governments and labs were committing to red-teaming, I applied for some jobs in the space. I received an offer to work in the UK's task force, which I accepted.List of LessonsIt's important to keep in mind two things - your theory of change for how your work helps reduce existential risk, and your comparative advantage in the field. These two things determined what I should work on, and keeping them updated was crucial for me finding a good path in the end.Poor productivity is more likely to be situational than you might think, especially if you're finding yourself having unusual difficulty compared to past projects or jobs. It's worth considering how your situation might be tweaked before blaming yourself.Trying out different subfields is useful, but don't be afraid to admit when one isn't working out as well as you'd like. See the first lesson.If you're going to go to a program like SERI MATS, do so because you have a good idea of what you want, not just because it's the thing to do or it seems generically helpful. I'm not saying you can't do such a program for that reason, but it is worth thinking twice about it.It is entirely possible to make mistakes, even several of them, and still wind up finding work in the field. There is no proper roadmap, everyone needs to figure things out as they go. While it's worth having...

]]>
Jay Bailey https://forum.effectivealtruism.org/posts/9copPAEnfCBZd88jE/reflections-on-my-first-year-of-ai-safety-research Tue, 09 Jan 2024 01:47:09 +0000 EA - Reflections on my first year of AI safety research by Jay Bailey Jay Bailey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:48 no full 4
FWh2S5g56ghWsw3na EA - Celebrating 2023: 10 successes from the past year at CEEALAR by CEEALAR Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating 2023: 10 successes from the past year at CEEALAR, published by CEEALAR on January 11, 2024 on The Effective Altruism Forum.Over the last couple of months we have written a series of posts making the case for the Centre for Enabling EA Learning and Research (CEEALAR) and asking for funding - seehere,here andhere. We are very grateful to those who supported us during the fundraiser, however we did not reach our target and still have a very short runway. Despite these current difficulties, we want to take a moment in this post to outline a few of our achievements from 2023. We are proud of what we have achieved, and looking forward to working hard in 2024 to ensure an impactful future for CEEALAR.Highlights of 2023We hostedALLFED's team retreat in which they gathered their full team to set out their theory of change and strategy.2. We also hostedOrthogonal, who launched their organisation and research agenda while here.3. We appointed two new trustees,Dušan D. Nešic andKyle Smith, and said goodbye to outgoing trusteesFlorent Berthet andSasha Cooper.Thank you to Florent and Sasha who have both been supporting CEEALAR since it began. We look forward to working withDušan and Kyle, and drawing on their expertise in talent management and fundraising.4. We updated ourTheory of Change to explicitly focus on work supporting global catastrophic risks.We believe this reflects the needs of the world, plus in recent months more than 95% of applicants have worked on GCRs.5. We launched the CEEALAR Alumni Network, CAN, and reconnected with our alumni to begin understanding the impact CEEALAR had on their lives.80% of respondents were working in EA, the majority of whom were doing AI safety work.6. We made substantial improvements to the building that helped boost grantee productivity.Including converting the attics into private studies, purchasing standing desks and creating a lounge area to relax.7. We improved our application form to ensure we get the very best grantees, and hosted a total of 60 grantees, more than any of the past 3 years.8. ~7.4% of our funding for 2023 came from guests and alumni, which we see as an endorsement - those closest to us believe we are an impactful option to donate to.A huge thank you to all of our donors.9. We launched a new website.Check it out here:www.ceealar.orgThank you to grantees Onicah and Bryce, pictured above, who helped us with the design and photos for the website.10. As always though, the achievements we want to celebrate most are those of our grantees.To name a few from 2023…Bryce received funding to manageAlignment Ecosystem Development, successfully transitioning from his previous career running a filmmaking business in to AI safetyNia and George launchedML4Good UK. Alongside running two UK camps, they are building infrastructure so ML4Good can expand to additional countries.Michele published a forum post onFree Agents, the culmination of his research into creating an AI that independently learns human valuesSeamus had a research paper accepted to theSocially Responsible Language Modelling Research (SoLaR) conference and is currently completingARENA Virtual.Sam was selected for theAI Futures Fellowship Program. While at CEEALAR he participated inAI Safety Hub's summer research program and co-authored a research paper accepted to aNeurIPS workshop.Eloise secured a place onAI Safety Camp, working alongside Nicky Pochinkov on the project "Modelling Trajectories of Language Models".In 2024 we are looking forward to running a targeted outreach campaign to reach high-quality grantees working on global catastrophic risks, hosting the first ML4Good UK bootcamp, and of course to fundraising and working on CEEALAR's financial sustainability.Once again, a heartfelt thank you to everyo...

]]>
CEEALAR https://forum.effectivealtruism.org/posts/FWh2S5g56ghWsw3na/celebrating-2023-10-successes-from-the-past-year-at-ceealar Thu, 11 Jan 2024 14:53:14 +0000 EA - Celebrating 2023: 10 successes from the past year at CEEALAR by CEEALAR CEEALAR https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:09 no full 4
cW465CxEwZjJJRwHD EA - AI values will be shaped by a variety of forces, not just the values of AI developers by Matthew Barnett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI values will be shaped by a variety of forces, not just the values of AI developers, published by Matthew Barnett on January 11, 2024 on The Effective Altruism Forum.In my last post about why AI value alignment shouldn't be conflated with AI moral achievement, a few people said they agreed with my point but they would frame it differently. For example, Pablo Stafforini framed the idea this way:it seems important to distinguish between normative and human specifications, not only because (arguably) "humanity" may fail to pursue the goals it should, but also because the team of humans that succeeds in building the first AGI may not represent the goals of "humanity". So this should be relevant both to people (like classical and negative utilitarians) with values that deviate from humanity's in ways that could matter a lot, and to "commonsense moralists" who think we should promote human values but are concerned that AI designers may not pursue these values (because these people may not be representative members of the population, because of self-interest, or because of other reasons).I disagree with Pablo's framing because I don't think that "the team of humans that succeeds in building the first AGI" will likely be the primary force in the world responsible for shaping the values of future AIs. Instead, I think that (1) there isn't likely to be a "first AGI" in any meaningful sense, and (2) AI values will likely be shaped more by market forces and regulation than the values of AI developers, assuming we solve the technical problems of AI alignment.In general, companies usually cater to what their customers want, and when they don't do that, they're generally outcompeted by companies who will do what customers want instead. Companies are also heavily constrained by laws and regulations. I think these constraints - market forces and regulation - will apply to AI companies too. Indeed, we have already seen these constraints play a role shaping the commercialization of existing AI products, such as GPT-4. It seems best to assume that this situation will largely persist into the future, and I see no strong reason to think there will be a fundamental discontinuity with the development of AGI.There do exist some reasons to assume that the values of AI developers matter a lot. Perhaps most significantly, AI development appears likely to be highly concentrated at the firm-level due to the empirically high economies of scale of AI training and deployment, lessening the ability for competition to unseat a frontier AI company. In the extreme case, AI development may be taken over by the government and monopolized. Moreover, AI developers may become very rich in the future, having created an extremely commercially successful technology, giving them disproportionate social, economic, and political power in our world.The points given in the previous paragraph do support a general case for caring somewhat about the morality or motives of frontier AI developers. Nonetheless, I do not think these points are compelling enough to make the claim that future AI values will be shaped primarily by the values of AI developers. It still seems to me that a better first-pass model is that AI values will be shaped by a variety of factors, including consumer preferences and regulation, with the values of AI developers playing a relatively minor role.Given that we are already seeing market forces shaping the values of existing commercialized AIs, it is confusing to me why an EA would assume this fact will at some point no longer be true. To explain this, my best guess is that many EAs have roughly the following model of AI development:There is "narrow AI", which will be commercialized, and its values will be determined by market forces, regulation, and to a limited degree, the values of AI...

]]>
Matthew_Barnett https://forum.effectivealtruism.org/posts/cW465CxEwZjJJRwHD/ai-values-will-be-shaped-by-a-variety-of-forces-not-just-the Thu, 11 Jan 2024 11:57:28 +0000 EA - AI values will be shaped by a variety of forces, not just the values of AI developers by Matthew Barnett Matthew_Barnett https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:52 no full 5
WeruztmEM53mLjbkL EA - Copenhagen Consensus Center's best investment papers for the sustainable development goals by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Copenhagen Consensus Center's best investment papers for the sustainable development goals, published by Vasco Grilo on January 10, 2024 on The Effective Altruism Forum.This is a linkpost to Copenhagen Consensus Center's 12 best investment papers for the sustainable development goals (SDGs), which were published in the Journal of Benefit-Cost Analysis in 2023. Some notes:Each paper does a cost-benefit analysis which accounts for health and economic benefits. The benefit-to-cost ratios across the 12 papers range from 18 (nutrition) to 125 (e-Government procurement).All 12 ratios are much higher than the 2.4 estimated for GiveDirectly's cash transfers to poor households in Kenya.4 are similar to and 8 are higher than GiveWell's cost-effectiveness bar of around 24 (= 10*2.4), equal to 10 times the above.Cash transfers are often preferred due to being highly scalable, but the 12 papers deal with large investments too. As can be seen in the table below, taken from a companion post, all 12 interventions together have:An annual cost of 41 G 2020-$ (41 billion 2020 USD).Annual benefits of 2.14 T 2020-$ (2.14 trillion 2020 USD), of which 1.12 T 2020-$ are economic benefits corresponding to 14.6 % (= 1.12*1.13/(8.17 + 0.528)) of the gross domestic product (GDP) of low and lower-middle income countries in 2022.A benefit-to-cost ratio of 52.2 (= 2.14/0.041), 21.8 (= 52.2/2.4) times that of GiveDirectly's cash transfers to poor households in Kenya.I expect the benefit-to-cost ratios of the papers to be overestimates:The paper on malaria estimates a ratio of 48, whereas I infer GiveWell's is:35.5 (= 14.8*2.4) for the Against Malaria Foundation (AMF), considering the mean cost-effectiveness across 8 countries of 14.8 times that of cash transfers.40.8 (= 17.0*2.4) for the Malaria Consortium, considering the mean cost-effectiveness across 13 countries of 17.0 times that of cash transfers.The paper on malaria studies an annual investment of 1.1 G 2020-$, whereas GiveWell's estimates respect marginal donations.Consequently, assuming diminishing marginal returns, and that GiveWell's estimates are more accurate, that of the paper on malaria is a significant overestimate.I guess the same reasoning applies to other areas.I think 3 of the papers focus on areas which have not been funded by GiveWell nor Open Philanthropy[2]:e-Government procurement (benefit-to-cost ratio of 125).Trade (95).Land tenure security (21).As a side note, I wonder why GiveWell's (marginal) cost-effectiveness estimates do not roughly match its bar of 10 times that of cash transfers.Agricultural research and developmentPaper: Benefit-Cost Analysis of Increased Funding for Agricultural Research and Development in the Global South.Benefit-to-cost ratio: 33.Investment:Basic research and development, including capacity building, and technical and policy support with special focus on Low- and Lower Middle-Income countries. Research outcomes are difficult to predict, but an example could be crop yield increases using precision genetic technologies.Childhood immunizationPaper: SDG Halftime Project: Benefit-Cost Analysis using Methods from the Decade of Vaccine Economics (DOVE) Model.Benefit-to-cost ratio: 101.Investment:Raise immunization coverage from 2022 levels to 2030 target for pentavalent vaccine, HPV, Japanese encephalitis, measles, measles-rubella, Men A, PCV, rotavirus, and yellow fever.Maternal and newborn healthPaper: Achieving maternal and neonatal mortality development goals effectively: A cost-benefit analysis.Benefit-to-cost ratio: 87.Investment:Sufficient staff and resources at all birth facilities to deliver a package of basic emergency obstetric and newborn care and family planning services, including bag and mask for neonatal resuscitation, removal of retained products of...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/WeruztmEM53mLjbkL/copenhagen-consensus-center-s-best-investment-papers-for-the Wed, 10 Jan 2024 15:47:02 +0000 EA - Copenhagen Consensus Center's best investment papers for the sustainable development goals by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:20 no full 11
WmxAywfJT2BtYJZW8 EA - Call for Expressions of Interest: NYU Wild Animal Welfare Summit by Sofia Fogel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Call for Expressions of Interest: NYU Wild Animal Welfare Summit, published by Sofia Fogel on January 10, 2024 on The Effective Altruism Forum.Deadline: March 1, 2024 (followed by rolling submissions)Event: June 21-22, 2024Location: New York University, New York, NYTheNYU Wild Animal Welfare Program is hosting a two-day wild animal welfare summit on June 21-22, 2024. The aim of this event is to connect scholars with an interest in this topic, particularly scholars across a variety of fields and career stages.The first day of the summit will feature lightning talks and discussion sessions. The second day will feature breakout sessions for workshopping collaborative project ideas. Both days will also include vegan meals and plenty of networking opportunities.We welcome expressions of interest from scholars in all fields, particularly scholars who work in animal welfare or conservation science. Please note that funding for travel and hotel is available for early-career scholars, i.e., scholars within five years of their terminal degree.If you have interest in attending this summit, please send the below materials to Sofia Fogel at sofia.fogel@nyu.edu. We guarantee full consideration of all submissions received by March 1, 2024. We will also consider submissions received after that date on a rolling basis.Please include in your expression of interest:A CV or resume.A statement of interest with three elements:A short summary of your current research, your expected future research, and how your research relates to wild animal welfare. (500 words max.)(Optional) If you have ideas for collaborative research projects that you might like to discuss at this summit, please describe them. (250 words max.)(Optional) If you might like to give a lightning talk about your current or future research, please suggest a topic or set of topics. (250 words max.)Please note that if you answer questions (b) and (c), your answers can range from general (e.g., "Researching the effects of wildlife corridors on different kinds of species") to specific (e.g., "Measuring the effects of a new wildlife corridor in Yellowstone National Park on the movement of elk populations." Please also note that answering these questions does not commit you to discussing your ideas or presenting your work at the event.Topics that we see as within scope for this summit include, but are not limited to:How can we assess wild animal welfare at individual and population levels?How can we make welfare comparisons within and across species?What are the most common causes of morbidity and mortality for wild animals, and how do they vary within and across species?How does the project of improving wild animal welfare interact with the project of conserving species and ecosystems?What are the costs and benefits of different kinds of population control for different individuals, species, and ecosystems?How can we support individuals, species, and ecosystems in adapting to human-caused climate change and other such environmental changes?How can we support coordination and collaboration between scholars who work in animal welfare and environmental conservation, among other areas?How can we educate advocates, policymakers, and the general public about the relationship between human, animal, and environmental protection?If you are interested in these or related topics, we would love to hear from you! If you have any questions, feel free to contact Sofia Fogel at sofia.fogel@nyu.edu.Thank you to Animal Charity Evaluators and Open Philanthropy for your generous support of this program and event.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Sofia_Fogel https://forum.effectivealtruism.org/posts/WmxAywfJT2BtYJZW8/call-for-expressions-of-interest-nyu-wild-animal-welfare Wed, 10 Jan 2024 12:25:29 +0000 EA - Call for Expressions of Interest: NYU Wild Animal Welfare Summit by Sofia Fogel Sofia_Fogel https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:40 no full 12
XetEmzD5CYaHMjxnn EA - Why can't we accept the human condition as it existed in 2010? by Hayven Frienby Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why can't we accept the human condition as it existed in 2010?, published by Hayven Frienby on January 10, 2024 on The Effective Altruism Forum.By this, I mean a world in which:Humans remain the dominant intelligent, technological species on Earth's landmasses for a long period of time (> ~10,000 years).AGI is never developed, or it gets banned / limited in the interests of human safety. AI never has much social or economic impact.Narrow AI never advances much beyond where it is today, or it becomes banned / limited in the interests of human safety.Mind uploading is impossible or never pursued.Life extension (beyond modest gains due to modern medicine) isn't possible, or is never pursued.Any form of transhumanist initiatives are impossible or never pursued.No contact is made with alien species or extraterrestrial AIs, no greater-than-human intelligences are discovered anywhere in the universe.Every human grows, peaks, ages, and passes away within ~100 years of their birth, and this continues for the remainder of the human species' lifetime.Most other EAs I've talked to have indicated that this sort of future is suboptimal, undesirable, or best avoided, and this seems to be a widespread position among AI researchers as well (1). Even MIRI founder Eliezer Yudkowsky, perhaps the most well-known AI abolitionist outside of EA circles, wouldn't go as far as to say that AGI should never be developed, and that transhumanist projects should never be pursued (2). And he isn't alone -- there are many, many researchers both within and outside of the EA community with similar views on P(extinction) and P(societal collapse), and they still wouldn't accept the idea that the human condition should never be altered via technological means.My question is why can't we just accept the human condition as it existed before smarter-than-human AI (and fundamental alterations to our nature) were considered to be more than pure fantasy? After all, the best way to stop a hostile, unaligned AI is to never invent it in the first place. The best way to avoid the destruction of future value by smarter-than-human artificial intelligence is to avoid obsession with present utility and convenience.So why aren't more EA-aligned organizations and initiatives (other than MIRI) presenting global, strictly enforced bans on advanced AI training as a solution to AI-generated x-risk? Why isn't there more discussion of acceptance (of the traditional human condition) as an antidote to the risks of AGI, rather than relying solely on alignment research and safety practices to provide a safe path forward for AI (I'm not convinced such a path exists)?Let's leave out the considerations of whether AI development can be practically stopped at this stage, and just focus more on the philosophical issues here.References:Katya_Grace (EA Forum Poster) (2024, January 5). Survey of 2,778 AI authors: six parts in pictures.Yudkowsky, E. S. (2023, March 29). The only way to deal with the threat from AI? Shut it down. Time. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Hayven Frienby https://forum.effectivealtruism.org/posts/XetEmzD5CYaHMjxnn/why-can-t-we-accept-the-human-condition-as-it-existed-in ~10,000 years).AGI is never developed, or it gets banned / limited in the interests of human safety. AI never has much social or economic impact.Narrow AI never advances much beyond where it is today, or it becomes banned / limited in the interests of human safety.Mind uploading is impossible or never pursued.Life extension (beyond modest gains due to modern medicine) isn't possible, or is never pursued.Any form of transhumanist initiatives are impossible or never pursued.No contact is made with alien species or extraterrestrial AIs, no greater-than-human intelligences are discovered anywhere in the universe.Every human grows, peaks, ages, and passes away within ~100 years of their birth, and this continues for the remainder of the human species' lifetime.Most other EAs I've talked to have indicated that this sort of future is suboptimal, undesirable, or best avoided, and this seems to be a widespread position among AI researchers as well (1). Even MIRI founder Eliezer Yudkowsky, perhaps the most well-known AI abolitionist outside of EA circles, wouldn't go as far as to say that AGI should never be developed, and that transhumanist projects should never be pursued (2). And he isn't alone -- there are many, many researchers both within and outside of the EA community with similar views on P(extinction) and P(societal collapse), and they still wouldn't accept the idea that the human condition should never be altered via technological means.My question is why can't we just accept the human condition as it existed before smarter-than-human AI (and fundamental alterations to our nature) were considered to be more than pure fantasy? After all, the best way to stop a hostile, unaligned AI is to never invent it in the first place. The best way to avoid the destruction of future value by smarter-than-human artificial intelligence is to avoid obsession with present utility and convenience.So why aren't more EA-aligned organizations and initiatives (other than MIRI) presenting global, strictly enforced bans on advanced AI training as a solution to AI-generated x-risk? Why isn't there more discussion of acceptance (of the traditional human condition) as an antidote to the risks of AGI, rather than relying solely on alignment research and safety practices to provide a safe path forward for AI (I'm not convinced such a path exists)?Let's leave out the considerations of whether AI development can be practically stopped at this stage, and just focus more on the philosophical issues here.References:Katya_Grace (EA Forum Poster) (2024, January 5). Survey of 2,778 AI authors: six parts in pictures.Yudkowsky, E. S. (2023, March 29). The only way to deal with the threat from AI? Shut it down. Time. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Wed, 10 Jan 2024 10:37:49 +0000 EA - Why can't we accept the human condition as it existed in 2010? by Hayven Frienby ~10,000 years).AGI is never developed, or it gets banned / limited in the interests of human safety. AI never has much social or economic impact.Narrow AI never advances much beyond where it is today, or it becomes banned / limited in the interests of human safety.Mind uploading is impossible or never pursued.Life extension (beyond modest gains due to modern medicine) isn't possible, or is never pursued.Any form of transhumanist initiatives are impossible or never pursued.No contact is made with alien species or extraterrestrial AIs, no greater-than-human intelligences are discovered anywhere in the universe.Every human grows, peaks, ages, and passes away within ~100 years of their birth, and this continues for the remainder of the human species' lifetime.Most other EAs I've talked to have indicated that this sort of future is suboptimal, undesirable, or best avoided, and this seems to be a widespread position among AI researchers as well (1). Even MIRI founder Eliezer Yudkowsky, perhaps the most well-known AI abolitionist outside of EA circles, wouldn't go as far as to say that AGI should never be developed, and that transhumanist projects should never be pursued (2). And he isn't alone -- there are many, many researchers both within and outside of the EA community with similar views on P(extinction) and P(societal collapse), and they still wouldn't accept the idea that the human condition should never be altered via technological means.My question is why can't we just accept the human condition as it existed before smarter-than-human AI (and fundamental alterations to our nature) were considered to be more than pure fantasy? After all, the best way to stop a hostile, unaligned AI is to never invent it in the first place. The best way to avoid the destruction of future value by smarter-than-human artificial intelligence is to avoid obsession with present utility and convenience.So why aren't more EA-aligned organizations and initiatives (other than MIRI) presenting global, strictly enforced bans on advanced AI training as a solution to AI-generated x-risk? Why isn't there more discussion of acceptance (of the traditional human condition) as an antidote to the risks of AGI, rather than relying solely on alignment research and safety practices to provide a safe path forward for AI (I'm not convinced such a path exists)?Let's leave out the considerations of whether AI development can be practically stopped at this stage, and just focus more on the philosophical issues here.References:Katya_Grace (EA Forum Poster) (2024, January 5). Survey of 2,778 AI authors: six parts in pictures.Yudkowsky, E. S. (2023, March 29). The only way to deal with the threat from AI? Shut it down. Time. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> ~10,000 years).AGI is never developed, or it gets banned / limited in the interests of human safety. AI never has much social or economic impact.Narrow AI never advances much beyond where it is today, or it becomes banned / limited in the interests of human safety.Mind uploading is impossible or never pursued.Life extension (beyond modest gains due to modern medicine) isn't possible, or is never pursued.Any form of transhumanist initiatives are impossible or never pursued.No contact is made with alien species or extraterrestrial AIs, no greater-than-human intelligences are discovered anywhere in the universe.Every human grows, peaks, ages, and passes away within ~100 years of their birth, and this continues for the remainder of the human species' lifetime.Most other EAs I've talked to have indicated that this sort of future is suboptimal, undesirable, or best avoided, and this seems to be a widespread position among AI researchers as well (1). Even MIRI founder Eliezer Yudkowsky, perhaps the most well-known AI abolitionist outside of EA circles, wouldn't go as far as to say that AGI should never be developed, and that transhumanist projects should never be pursued (2). And he isn't alone -- there are many, many researchers both within and outside of the EA community with similar views on P(extinction) and P(societal collapse), and they still wouldn't accept the idea that the human condition should never be altered via technological means.My question is why can't we just accept the human condition as it existed before smarter-than-human AI (and fundamental alterations to our nature) were considered to be more than pure fantasy? After all, the best way to stop a hostile, unaligned AI is to never invent it in the first place. The best way to avoid the destruction of future value by smarter-than-human artificial intelligence is to avoid obsession with present utility and convenience.So why aren't more EA-aligned organizations and initiatives (other than MIRI) presenting global, strictly enforced bans on advanced AI training as a solution to AI-generated x-risk? Why isn't there more discussion of acceptance (of the traditional human condition) as an antidote to the risks of AGI, rather than relying solely on alignment research and safety practices to provide a safe path forward for AI (I'm not convinced such a path exists)?Let's leave out the considerations of whether AI development can be practically stopped at this stage, and just focus more on the philosophical issues here.References:Katya_Grace (EA Forum Poster) (2024, January 5). Survey of 2,778 AI authors: six parts in pictures.Yudkowsky, E. S. (2023, March 29). The only way to deal with the threat from AI? Shut it down. Time. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org]]> Hayven Frienby https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:15 no full 13
4v45j5fuDfRru5kQ9 EA - ウィリアム・マッカスキル「効果的利他主義の定義」 by EA Japan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ウィリアム・マッカスキル「効果的利他主義の定義」, published by EA Japan on January 10, 2024 on The Effective Altruism Forum.This is a Japanese translation of William MacAskill, 'The Definition of Effective Altruism' available at MacAskill's website. Translated by 清水颯(Hayate Shimizu, link to his Researchmap)今日、世界にはさまざまな問題がある。7億5千万人以上の人々が1日1.90ドル以下(購買力平価換算)で生活している[1]。マラリアや下痢、肺炎など、簡単に予防できる原因で、毎年約600万人の子どもたちが亡くなっている[2]。気候変動は環境に大打撃を与え、経済に何兆ドルもの損失をもたらすと言われている[3]。世界の女性の3分の1は、性的または身体的な暴力に苦しんだことがある[4]。3,000発以上の核弾頭が世界中で高い警戒状態(high alert)に置かれていて、短時間の内に使える状態にある[5]。細菌は抗生物質に耐性を持ち始めている[6]。党派心は強まり、民主主義は衰退しているかもしれない[7]。世界はこれほど多くの問題を抱えており、これらの問題が深刻であることを考えると、私たちはこれらの問題に対して何かをする責任があることは確かである。しかし、何をすればよいのだろうか。私たちが取り組みうる問題は数え切れないほどあり、また、それぞれの問題に取り組む方法もさまざまである。しかも、私たちの資源は限られているから、個人として、あるいは地球全体(globe)として、これらの問題を一度に解決することはできない。それゆえ、私たちは自分たちがもつ資源をどのように配分するのかを決めなければならない。しかし、私たちは何を基準にそのような決断を下すべきなのか。その結果、効果的利他主義コミュニティは、世界の破局的リスクの軽減、家畜のアニマルウェルフェア、グローバルヘルスの分野で大きな成果を上げることに貢献した。2016年だけでも、効果的利他主義コミュニティは、効果の持続する殺虫剤処理された蚊帳を提供することで650万人の子どもをマラリアから守り、3億6000万羽の鶏をケージの檻の中の生活から救い出し、技術的AIセーフティを機械学習研究の主流領域として発展させることに大きな推進力と支援を提供した[13]。この動きは、学術的な議論にも大きな影響を及ぼしてきた。このテーマに関する書籍には、ピーター・シンガー著『あなたが世界のためにできるたったひとつのこと:〈効果的な利他主義〉のすすめ』や私自身の『〈効果的な利他主義〉宣言!』などがあり[14]、効果的利他主義を支持ないし批判する学術論文は、Philosophy and Public AffairsやUtilitas、Journal of Applied Philosophy、Ethical Theory and Moral Practiceその他の刊行物に掲載されてきた[15]。Essays in Philosophyの一巻はこのテーマに特化しており、Boston Reviewには学者たちによる効果的利他主義についての論考が掲載されている[16]。しかし、効果的利他主義について有意義な学術的議論を行うには、何について話しているのかについて合意を形成する必要がある。本章では、その一助となるべく、効果的利他主義センターの定義を紹介し、同センターがなぜそのような定義を選んだのかを説明し、その定義に対する正確な哲学的解釈を提供することを目指す。私は、効果的利他主義コミュニティで広く支持されているこの効果的利他主義の理解は、一般の人々の多くや効果的利他主義を批判する多くの人々が持っている効果的利他主義の理解とはかけ離れていると考えている。本稿では、なぜ私がこのような定義を好むのかを説明した後で、この機会を利用して、効果的利他主義に対して広く流布している誤解を訂正する。始める前に、「効果的利他主義」を定義することで、道徳の根本的な側面を説明しようとしているわけではないことに注意することが重要である。経験的研究分野では、科学と工学を区別することができる。科学は、私たちの住む世界の一般的な真理を発見しようとするものである。工学は、科学的理解を用いて、社会に役立つ構造物やシステムを設計し、構築することである。道徳哲学でも、同じような区別ができる。典型的に、道徳哲学は、道徳の本質に関する一般的な真理を発見することを目的としている。これは規範的科学に相当する。しかし、道徳哲学の中にも工学に相当する部分があり、例えば、社会で広く採用されれば、世界を改善することになる新しい道徳的概念を作り出すことができる。「効果的利他主義」を定義することは、道徳性の基本的な側面を説明することではなく、工学的な問題なのだ。この観点から、私は定義が満たすべき二つの主要な要件を提案する。一つ目は、現在、効果的利他主義に従事していると言われている人たちの実際の実践、そしてコミュニティのリーダーが持っている効果的利他主義の理解に沿うことである。二つ目は、その概念が可能な限り公共的な価値を持つようにすることである。つまり、例えば、様々な道徳的見解に支持され、またその道徳的見解にとって有用であるほど十分に広い概念でありながら、その概念の使用者が世界をより良くするために、そうしなかった場合よりも多くのことを行えるほどには限定された概念が望まれる。もちろん、これはバランス感覚を要する作業になる。1. 効果的利他主義の以前の定義「効果的利他主義」という言葉は、「効果的利他主義センター」を設立する過程で、2011年12月3日に関係者17名による民主的なプロセスを経て作られた言葉である[17]。しかし、この用語の公式な定義は導入されていない。長年にわたり、効果的利他主義は、さまざまな人々によって、さまざまな方法で定義されてきた。以下はその例である。私たちにとって「効果的利他主義」とは、持っている1ドル、1時間を使って、最大限の善いことをしようとすることである[18]。効果的利他主義とは「どうしたら、自分にできる最大の違いを生み出せるだろうか」と問いかけ、その答えを見出すために、証拠と慎重な推論を用いることである[19]。効果的利他主義は、非常にシンプルな考えに基づいている:私たちは、できる限りで最大の善を行うべきである〔・・・・・・〕最低限受け入れ可能な倫理的な生活を送るには、余剰資源の相当部分を、世界をより善い場所にするために使うことである。完全に倫理的な生活を送るには、できる限り最大の善を行うことである[20]。効果的利他主義とは、質の高い証拠と慎重な推論を用いて、可能な限りで最大限、他者を助ける方法を考え出す研究分野である。また、そうして出た答えを真剣に受け止め、世界の最も差し迫った問題に対する最も有望な解決策に力を注ぐ人々のコミュニティでもある[21]。効果的利他主義とは、他者に利益をもたらす最も効果的な方法を決定するために、証拠と理性を用いる哲学であり、社会運動である[22]。以上の定義には、いくつかの共通点がある[23]。すべての定義が最大化という考え方を引き合いに出し、福利を高めるという価値であれ、ただ一般に善を達成するという価値であれ、ともかく何らかの価値の達成を話題にしている。しかし、相違点もある。定義(1)(3)は「善を行う」ことについて述べているのに対し、定義(4)と(5)は「他者を助ける」「他者に利益をもたらす」ことについて述べている。他の定義と異なり、(3)は効果的利他主義を、活動や研究分野、運動といった非規範的なプロジェクトではなく、規範的な主張としている。定義(2)、(4)、(5)は、証拠と慎重な推論を用いるという考えを引き合いに出しているが、定義(1)、(3)はそうしていない。効果的利他主義センターの定義は、効果的利他主義を下記のように定義することで、これら各論点に態度を取っている。効果的利他主義とは、証拠と理由を用いて、どうすれば他人のためになるかをできるだけ考え、それに基づいて行為することである[24]。この定義は、私が中心となって、効果的利他主義コミュニティの多くのアドバイザーから意見を聞き、Julia WiseとRob Bensingerの多大な協力を得て作成した。この定義と、それに沿った一連の指針的価値は、効果的利他主義コミュニティの大多数のリーダーによって正式に承認されている[25]。 効果的利他主義に「公式」な定義はないが、当センターの定義は他のどの定義よりもそれに近い。しかし、効果的利他主義のこの声明は、哲学的な読者ではなく、一般的な読者を対象としているため、アクセスしやすくするために、ある程度の正確さが失われている。そのため、ここではより正確な定式化を行った上で、定義の内容を詳しく解説していきたい。私の定義は次のようなものであ...

]]>
EA Japan https://forum.effectivealtruism.org/posts/4v45j5fuDfRru5kQ9/wiriamu-makkasukiru-no Wed, 10 Jan 2024 10:08:30 +0000 EA - ウィリアム・マッカスキル「効果的利他主義の定義」 by EA Japan EA Japan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:08:13 no full 14
qCF5kETxnk3HkfiEf EA - Cause-Generality Is Hard If Some Causes Have Higher ROI by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause-Generality Is Hard If Some Causes Have Higher ROI, published by Ben West on January 12, 2024 on The Effective Altruism Forum.SummaryReturns to community building are higher in some cause areas than othersFor example: a cause-general university EA group is more useful for AI safety than for global health and development.This presents a trilemma: community building projects must either:Support all cause areas equally at a high level of investment, which leads to overinvestment in some cause areasSupport all cause areas equally at a low level of investment, which leads to underinvestment in some cause areas, orBreakcause-generalityThis trilemma feels fundamental to EA community building work, but I've seen relatively little discussion of it, and therefore would like to raise awareness of it as a considerationThis post presents the trilemma, but does not argue for a solutionBackgroundA lot of community building projects have a theory of change which aims to generate laborLabor is more valuable in some cause areas than othersIt's slightly hard to make this statement precise, but it's something like: theoutput elasticity of labor (OEL) depends on cause areaE.g. the amount by which animal welfare advances as a result of getting one additional undergraduate working on it is different than the amount by which global health and development advances as a result of getting one additional undergraduate working on it[1]Note: this is not a claim that some causes are more valuable than others; I am assuming for the sake of this post that all causes are equally valuableI will take as given that this difference exists now and is going to exist into the future (although I would be interested to hear arguments that it doesn't/won't)Given this, what should we do?My goal with this post is mostly to point out that we probably should do something weird, and less about suggesting a specific weird thing to doWhat concretely does it mean to have lower or higher OEL?I'm using CEA teams as examples since that's what I know best, though I think similar considerations apply to other programs. (Also, realistically, we might decide that some of these are just too expensive if OEL goes down or redirect all resources to some projects with high starting cost if OEL goes up.)ProgramHow it looks with high investment[2]How it looks with low investmentEventsCateredCoffee/drinks/snacksRecorded talksConvenient venuesBring your own foodVenues in inconvenient locationsUnconference/self-organized picnic vibesGroupsPaid organizersOne-on-one advice/career coachingVolunteer-organized meet upsMaybe some free pizzaOnlineActively organized Forum events (e.g. debates)Curated newsletter, highlightsPaid Forum moderatorsEngineers and product people who develop the ForumA place for people to post things when they feel like it, no active solicitationVolunteer-based moderationLimited feature developmentCommunicationsPitching op-ed's/stories to major publicationsCreate resources like lists of experts that journalists can contactFund publications (e.g. Future Perfect)People post stuff on Twitter, maybe occasionally a journalist will pick it upWhat are Community Builders' options?I see a few possibilities:Don't change our offering based on the participant's[3] cause area preference…through high OEL cause areas subsidizing the lower OEL cause areasThis has historically kind of been how things have worked (roughly: AI safety subsidized cause-general work while others free-rode)This results in spending more on the low OEL cause areas than is optimalAnd also I'm not sure if this can practically continue to exist, given funder preferences…through everyone operating at the level low OEL cause areas chooseThis results in spending less on high OEL cause areas than is op...

]]>
Ben_West https://forum.effectivealtruism.org/posts/qCF5kETxnk3HkfiEf/cause-generality-is-hard-if-some-causes-have-higher-roi Fri, 12 Jan 2024 21:39:52 +0000 EA - Cause-Generality Is Hard If Some Causes Have Higher ROI by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:49 no full 1
xwAhSkDQhkqbWJ8Fq EA - Help the UN design global governance structures for AI by Joanna (Asia) Wiaterek Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Help the UN design global governance structures for AI, published by Joanna (Asia) Wiaterek on January 12, 2024 on The Effective Altruism Forum.In October 2023, the AI Advisory Body was convened by Secretary-General António Guterres with an aim "to undertake analysis and advance recommendations for the international governance of AI."In December 2023, they published theInterim report: Governing AI for Humanity which outlines principles for what global governance of AI should be based on.Currently, they are inviting individuals, groups, and organisations to provide feedback and recommendations which will help them structure the final report ahead of the Summit of the Future in the summer of 2024.I think this is a unique opportunity to help shape the UN vision, discourse and future recommendations on its AI/global governance/global development agenda, so if you haven't heard about this before and are interested, please submit your inputs throughthis form by 31st March 2024.A few examples of what the UN vision on AI might shape:international narrative on values and expectations for global governance of AIUN development agenda after SDGscountry-specific recommendations on the use and regulation of AIUN members' engagement with the current governance initiatives (e.g. the Safety Summit, the U.S. Executive Order)deployment of AI for SDGs.If you would like to work on this together or discuss other potential strategies for action, please contact me onjoanna.wiaterek@gmail.com.The Global Majority must be welcomed and given an active position at the AI table. The urgent question is how to facilitate that best. Recommendations are being shaped right now and the UN will inevitably have a strong influence on forming the long-term narrative. Let's help to ensure its highest quality!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Joanna (Asia) Wiaterek https://forum.effectivealtruism.org/posts/xwAhSkDQhkqbWJ8Fq/help-the-un-design-global-governance-structures-for-ai Fri, 12 Jan 2024 16:28:28 +0000 EA - Help the UN design global governance structures for AI by Joanna (Asia) Wiaterek Joanna (Asia) Wiaterek https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:59 no full 4
4ByqXAXg3BPhR7aer EA - GiveWell from A to Z by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell from A to Z, published by GiveWell on January 12, 2024 on The Effective Altruism Forum.Author: Isabel Arjmand, Special Projects OfficerTo celebrate the end of 2023, we're highlighting a few key things to know about GiveWell - from A to Z. These aren't necessarily the 26 most important parts of our work (e.g., we could include only "transparency" or "top charities" for T) but they do fit the alphabet, and we've linked to other pages where you can learn more.All Grants Fund. Our recommendation for donors who have a high level of trust in GiveWell and are open to programs that might be riskier than our top charities.Bar. We set a cost-effectiveness bar, or threshold, such that we expect to be able to fully fund all the opportunities above that level of cost-effectiveness. This bar isn't a hard limit; we consider qualitative factors in our recommendations, as discussed here. This post also discusses our bar in more detail.Cost-effectiveness. The core question we try to answer in our research is: How much good can you do by giving money to a certain program? This blog post describes how we approach cost-effectiveness estimates and use them in our work.Donors. Unlike a foundation, we don't hold an endowment. Our impact comes from donors choosing to use our recommendations.Effective giving organizations. Organizations like Effektiv Spenden, which fundraise for programs we recommend and provide tax-deductible donation options in a variety of countries. We're grateful to these national effective giving organizations and groups like Giving What We Can that recommend our work.Footnotes.[1]Generalizability. How well evidence generalizes to different settings, including variations in program implementation and the contexts where a program is delivered. Also called "external validity."Health workers and community distributors. The people who deliver many of the programs we support; includes both professional health workers and distributors who receive stipends to deliver programs in their local communities. For example, community distributors go from household to household to provide seasonal malaria chemoprevention to millions of children.Incubating new programs. We partner with the Evidence Action Accelerator and Clinton Health Access Initiative (CHAI) Incubator to scope, pilot, and scale up promising cost-effective interventions.Judgment calls. We aim to create estimates that represent our true beliefs. Our cost-effectiveness analyses are firmly rooted in evidence but also incorporate adjustments and intuitions that aren't fully captured by scientific findings alone. More in this post.Kangaroo mother care. A program to reduce neonatal mortality among low-birthweight babies through skin-to-skin contact to keep babies warm, breastfeeding instruction, home visits, and more.Leverage. How our funding decisions affect other funders, either by crowding in additional funding ("leverage") or by displacing funds that otherwise would have been used for a given program ("fungibility").Mistakes. Transparency is core to our work. Read here about mistakes we've made and lessons we've learned.Nigeria. One of the countries where we most often fund work. (Our work is generally concentrated in Africa and South Asia.) New Incentives, one of our top charities, currently works exclusively in northern Nigeria, where low baseline vaccination rates make its work especially valuable.Oral rehydration solution + zinc. A low-cost way to prevent and treat dehydration caused by diarrhea. We've been interested in ORS/zinc for a long time (going back to 2006!), and recently funded the CHAI Incubator to conduct a randomized controlled trial in Bauchi State, Nigeria, studying the extent to which preemptively distributing free ORS/zinc directly to households increases usage by children u...

]]>
GiveWell https://forum.effectivealtruism.org/posts/4ByqXAXg3BPhR7aer/givewell-from-a-to-z Fri, 12 Jan 2024 15:50:47 +0000 EA - GiveWell from A to Z by GiveWell GiveWell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:22 no full 5
suqGrqjXGGjqNcSqS EA - CE will donate £1K if you refer our next Outreach Director by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CE will donate £1K if you refer our next Outreach Director, published by CE on January 12, 2024 on The Effective Altruism Forum.TL;DRCharity Entrepreneurship is trying for the second time to recruit a Director of Outreach. It's a crucial and impactful role that will manage a team tasked with creating and maintaining diversified talent pipelines of participants for our programs. If you refer someone who ends up being selected and passes their probation, CE will donate £1,000 to a charity of your choice.The problemAs CE scales up, one of our biggest bottlenecks is finding highly talented, value-aligned people to apply to our programs, receive training, and launch the top charity ideas our research team has found or become top-notch researchers. In the past three years, we have found that the best predictor of which incubated charities will end up causing the highest impact is the quality of the co-founding team. We estimate the impact of the average Incubation Program alumnus to be equivalent to donating USD 300,000 per year to effective charities, so finding these gems is difficult but extremely high-impact.CE seeks aDirector of Outreach to take our outreach strategy to the next level and solve this bottleneck. We unsuccessfully tried to recruit for this role in Q4 2023, which we suspect was ironically due to lower-than-optimal outreach. The resulting pool of candidates was small, and the median quality needed to be higher. This time around, we're casting the net quite widely and trying new approaches to increase the quality of the talent pool, such as a referral program.The referral programA conventional referral program motivates current employees in an organization to find and refer qualified candidates from their connections. Usually, as an incentive, the employer offers a referral bonus to the employee who made the referral if the person referred successfully gets the position. Some evidence suggests these schemes are an effective way to source high-quality candidates, leading to better retention and better overall performance.We think it would be worthwhile to experiment with such a program, particularly for a role as crucial as Director of Outreach, where the returns on a high-quality candidate could be significant. To make it more aligned with our values (and also to enable the broader community to participate), we are adapting the program and committing to donating £1,000 to a charity chosen by the person who refers a successful candidate to us ('successful' meaning that they're selected for the role and they pass their probation).The chosen charity would need to be registered in their relevant country of operation (or have a way to collect donations via, for example, a fiscal sponsor). Individuals or groups working on charitable projects are not eligible, although if the referrer works for the charity chosen, that is fine. We will ask for some light documentation before making the donation (e.g. proof of registration).How you can helpIf you suspect someone you know would be a good fit for this role, send them thejob ad and encourage them to apply! Even if you're uncertain about fit and all you have is an inclination, share the job with them anyway. They are also encouraged to apply even if they don't think they fully meet the requirements, as we care deeply about mindset and value alignment with our approach and are skilled at finding people with high potential whose growth we are happy to facilitate.A question in the application form asks them to select how they heard about the job. Make sure to mention they should select 'Someone I know referred me', and we'll be in contact to know who that someone is.If they're selected and successfully pass their probation (around 3 months after their first day), we'll contact you to let you know and get i...

]]>
CE https://forum.effectivealtruism.org/posts/suqGrqjXGGjqNcSqS/ce-will-donate-gbp1k-if-you-refer-our-next-outreach-director Fri, 12 Jan 2024 10:06:52 +0000 EA - CE will donate £1K if you refer our next Outreach Director by CE CE https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:36 no full 7
QyXcTAhzcQeLDckGL EA - Social science research on animal welfare we'd like to see by Martin Gould Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Social science research on animal welfare we'd like to see, published by Martin Gould on January 12, 2024 on The Effective Altruism Forum.Context and objectivesThis is a list of social science research topics related to animal welfare, developed by researchers on the Open Phil farm animal welfare team.We compiled this list because people often ask us for suggestions on topics that would be valuable to research. The primary audience for this document is students (undergrad, grad, high school) and researchers without significant budgets (since the topics we list here could potentially be answered using primarily desktop research).[1]Additional context:We are not offering to fund research on these topics, and we are not necessarily offering to review or advise research on these topics.In the interest of brevity, we have not provided much context for each topic. But if you are a PhD student or academic, we may be able to provide you with more detail on our motivation and our interpretation of the current literature: please email Martin Gould with your questions.The topics covered in this document are the ones we find most interesting; for other animal advocacy topic lists see here. Note that we do not attempt to cover animal welfare science in these topics, and that the topics are listed in no particular order (i.e. we don't place a higher priority on the topics listed first).In some areas, we are not fully up to date on the existing literature, so some of our questions may have been answered by research already conducted.We think it is generally valuable to use back-of-the-envelope-calculations to explore ideas and findings.If you complete research on these topics, please feel free to share it with us (email below) and with the broader animal advocacy movement (one option is to post here). We're happy to see published findings, working papers, and even detailed notes that you don't intend to formally publish.If you have anything to share or any feedback, please email Martin Gould. This post is also on the Open Phil blog here.TopicsCorporate commitmentsBy how many years do animal welfare corporate commitments speed up reforms that might eventually happen anyway due to factors like government policy, individual consumer choices, or broad moral change?How does this differ by the type of reform? (For example, cage-free vs. Better Chicken Commitment?)How does this differ by country or geographical region (For example, the EU vs. Brazil?)What are the production costs associated with specific animal welfare reforms? Here is an example of such an analysis for the European Chicken Commitment.Policy reformWhat are the jurisdictions most amenable to FAW policy reform over the next 5-10 years? What specific reform(s) are most tractable, and why?To what extent is animal welfare an issue that is politically polarizing (i.e. clearly associated with a particular political affiliation)? Is this a barrier to reform? If so, how might political polarization of animal welfare be reduced?How do corporate campaigns and policy reform interact with and potentially reinforce each other?What conclusions should be drawn about the optimal timing of policy reform campaigns?What would be the cost-effectiveness of a global animal welfare benchmarking project? (That is, comparing farm animal welfare by country and by company, as a basis to drive competition, as with similar models in human rights and global development.)Which international institutions (e.g. World Bank, WTO, IMF, World Organisation for Animal Health, UN agencies) have the most influence over animal welfare policy in emerging economies? What are the most promising ways to influence these institutions?Does this vary by geographical region (for example, Asia vs. Latin America)?Alt proteinWhat % of PBMA (plant-ba...

]]>
Martin Gould https://forum.effectivealtruism.org/posts/QyXcTAhzcQeLDckGL/social-science-research-on-animal-welfare-we-d-like-to-see Fri, 12 Jan 2024 02:43:43 +0000 EA - Social science research on animal welfare we'd like to see by Martin Gould Martin Gould https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:32 no full 10
MayBxEme6mWn9fswG EA - A short comparison of starting an effective giving organization vs. founding a direct delivery charity by Joey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A short comparison of starting an effective giving organization vs. founding a direct delivery charity, published by Joey on January 11, 2024 on The Effective Altruism Forum.CE has recently starteda new program to incubate Effective Giving Initiatives (EGIs). Although this is a sub-category of meta charities, I think it has some interesting and unique differences. I expect a decent percentage of people who are interested in the Effective Giving Incubation Program are also considering founding a charity unrelated to effective giving, so I wanted to write up a quick post comparing a few of the pros and cons of each - as I historically have had a chance to found both.A brief historyAbout ten years back, I co-founded Charity Science (later renamed Charity Science Outreach) to raise money for effective charities that had extremely limited marketing and outreach. We used GiveWell and ACE recommendations, selecting AMF and THL specifically as the targets. We did several experiments, diligently keeping track of the results of our time spent and the results.After a couple of unsuccessful experiments (e.g., grant writing, which raised ~$50k in 12 FTE months), we hit some successes with peer-to-peer fundraising (e.g., supporting people donating funds for their birthdays). Depending on how aggressively you discount for counterfactuals, we raised a decent amount of money (in the several 100,000s). Although this was pretty successful, we pivoted to founding a direct charity where our comparative advantage was strongest and could bring the most impact and handed off the projects.Eight years ago, some of the same team members (and a few new ones) founded Charity Science Health. This was a direct implementation charity focused on vaccination reminders in North India. We got a GiveWell seed grant and became a reasonable-sized actor over the course of three years, reaching over a hundred thousand people with vaccination reminders at a very low cost per person (under $1).The trickiest part of this intervention was to (cost-effectively) get the right people to hear about the program, as the signup costs were about 70% of the entire program cost, and targeting was extremely important. A few interventions we tried did not work (mass media, government partnerships), and a few worked well (hospital partnerships, door-to-door surveys). This project eventually merged with Suvita after the founders left to run other projects (including Charity Entrepreneurship itself).In many ways, I feel starting an effective giving org was very useful for later starting a direct implementation charity, as many of the skills overlapped, and it was a less challenging project to get off the ground. In the rest of this post, I'd like to pull out the main takeaways that can be learned from these projects and would be cross-applicable to those considering both career options.Odds of successFounding any project carries a risk of failure. Failure in the case of an effective giving org would most commonly mean spending more than what gets raised for effective charities. Failure with a direct NGO can result in the people you are trying to help being harmed, making the stakes higher and there being more of a downside. In general, founding an Effective Giving Initiative I would expect to have higher odds of success. There are just more points of failure for a direct NGO.It could struggle with fundraising (an issue equally important in EGI) and implementation even if fundraising succeeds. In my view, this, among other factors, makes EGIs have higher odds of success than direct NGOs.Net impactThe net impact is tricky to estimate, as the spread is considerable, even within pre-selected CE rounds. This also means that personal fit could overrule this factor. My current sense is that a direct charity has a higher...

]]>
Joey https://forum.effectivealtruism.org/posts/MayBxEme6mWn9fswG/a-short-comparison-of-starting-an-effective-giving Thu, 11 Jan 2024 23:35:05 +0000 EA - A short comparison of starting an effective giving organization vs. founding a direct delivery charity by Joey Joey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:02 no full 11
br66YMcfeyJdKYZyK EA - EAGxAustin Save the Date by Ivy Mazzola Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGxAustin Save the Date, published by Ivy Mazzola on January 13, 2024 on The Effective Altruism Forum.EAGxAustin 2024 will take place April 13-14 at the University of Texas at Austin! Applications will be opening in late January- fill out our interest form to be notified of the launch.EAGxAustin is intended both for individuals new to the movement and those already professionally engaged with EA, and will cover a diverse range of high-impact cause areas. We're especially excited to bring together individuals from Texas or the southern/central U.S. region, and we also welcome anyone in the U.S. or internationally who could provide and/or gain value from the event to apply!Vision for the conferenceOne of our primary goals for this event is to strengthen communities and networks for those in southern/central U.S. areas, including Texas and cities such as Phoenix, Chicago, Albuquerque, L.A., and Denver. We're prioritizing applicants from these regions, but also encourage those from across the U.S. and internationally, especially those in EA-related careers or interested in mentoring to apply. Our aim is to bolster connections, support the development of new and existing EA communities in these regions, and enhance networking opportunities for these groups.The conference will include talks related to high-impact careers and donating, workshops, office hours or roundtable Q&A events, group meetups (e.g. for community building, animal welfare, AI safety, etc.), and designated 1-on-1 spaces. If you have a specific speaker in mind or other content idea which you think would be particularly useful for you or others, please suggest content here.Err on the side of contributing--if you are engaged and excited enough about EAGxAustin to have an idea of what would help you, then you are someone we are excited to consider input from. We want to make EAGxAustin as beneficial and fulfilling for you (and all attendees) as we can.Who is EAGxAustin for?EAGxAustin is intended both for individuals who are new to EA and those who have already professionally engaged with EA. As one of our aims is to serve and bolster EA communities and individuals within Texas and the southern and central U.S., we will prioritize applicants from these areas who meet at least one of the following criteria:Completed an intro fellowshipHave demonstrable plans for EA involvementExperience or interest in high impact cause areasWe also welcome individuals from any location who could provide and/or gain value from the event, especially people who are in impactful orgs and/or have several years experience in a related career, and who are enthusiastic to mentor/give advice to students and early career professionals. If you have any questions or comments, don't hesitate to reach out to Austin@eaglobalx.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Ivy Mazzola https://forum.effectivealtruism.org/posts/br66YMcfeyJdKYZyK/eagxaustin-save-the-date-1 Sat, 13 Jan 2024 17:31:02 +0000 EA - EAGxAustin Save the Date by Ivy Mazzola Ivy Mazzola https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:47 no full 1
JDXsPZwb23ybgWL94 EA - Various roles at The School for Moral Ambition by tobytrem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Various roles at The School for Moral Ambition, published by tobytrem on January 15, 2024 on The Effective Altruism Forum.The School for Moral Ambition (SMA) is a new organisation which "will help people switch careers to work on the most pressing issues of our time". SMA's co-founders are Jan Willem van Putten (co-founder of Training for Good), and Rutger Bregman (author of Humankind, Utopia for Realists, and an upcoming book on Moral Ambition,[1] inspired by the Effective Altruism movement).From their website:The School for Moral Ambition (SMA) is a new organisation that will focus on attracting the most talented people to work on the most pressing issues of our time. The activities of SMA fall into the following categories:Book and Branding: Launch of Rutger Bregman's book on the topic of moral ambition - the idea that people's talents should be used for working on global challenges. Launch of a corresponding campaign to establish a prestigious brand that attracts talent and sparks a movement around moral ambition.Community Activities: We will organise Moral Ambition Circles and offer the resources to start their own Circle. These circles help morally ambitious people develop a career that matches their ideals.Exclusive Fellowship Programs: Initiation of targeted, highly selective programs in which small groups of fellows (~12 people) will focus on solving one of the most pressing and neglected global problems together.They are based in the Netherlands, but will be launching internationally in spring 2025.They are currently hiring for the roles of:(Senior) Researcher | 32-40 hours | EUR 55k-65K | deadline Feb 15thProgram Manager (Fellowships) | 32-40 hours | EUR 40K-50K | deadline Jan 24thOperations intern | 32-40 hours | EUR 1,000/month | Jan 24thEvent Management Intern | 32-40 hours | EUR 1,000/month | Jan 24thFinance Volunteer | 4-8 hours per week | unpaid | Feb 1stNB- I'm linkposting this because I think the Forum audience may be interested in these roles. I'm not affiliated with the organisation and therefore can't answer questions about them.PS- If you spot a job that you think EAs should see, linkpost it on the Forum! A surprising amount of people find out about jobs that they later get through the Forum, so you might just shift a career, or get a more impact-focused person into an important role.^Dutch interview, English interview (about 2/3 of the way through)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
tobytrem https://forum.effectivealtruism.org/posts/JDXsPZwb23ybgWL94/various-roles-at-the-school-for-moral-ambition Mon, 15 Jan 2024 22:49:26 +0000 EA - Various roles at The School for Moral Ambition by tobytrem tobytrem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:41 no full 1
axSfJXriBWEixsHGR EA - AI doing philosophy = AI generating hands? by Wei Dai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI doing philosophy = AI generating hands?, published by Wei Dai on January 15, 2024 on The Effective Altruism Forum.I've been playing around with Stable Diffusion recently, and an analogy occurred to me between today's AI's notoriously bad generation of hands and future AI's potentially bad reasoning about philosophy.In case you aren't already familiar, currently available image generation AIs are very prone to outputting bad hands, e.g., ones with four or six fingers, or two thumbs, or unnatural poses, or interacting with other objects in very strange ways. Perhaps what's especially striking is how bad AIs are at hands relative to other image generation capabilities, thus serving as a cautionary tale about differentially decelerating philosophy relative to other forms of intellectual progress, e.g., scientific and technological progress.Is anyone looking into differential artistic progress as a possible x-risk? /jkSome explanations I've seen for why AI is bad at hands:it's hard for AIs to learn hand generation because of how many poses a hand can make, how many different ways it can interact with other objects, and how many different viewing angles AIs need to learn to reproduceeach 2D image provides only partial information about a hand (much of it is often obscured behind other objects or parts of itself)most hands in the training data are very low resolution (a tiny part of the overall image) and thus not helpful for training AIthe proportion of hands in the training set is too low for the AI to devote much model capacity to hand generation ("misalignment" between the loss function and what humans care about probably also contributes to this)AI developers just haven't collected and trained AI on enough high quality hand images yetThere are news articles about this problem going back to at least 2022, and I can see a lot of people trying to solve it (on Reddit, GitHub, arXiv) but progress has been limited. Straightforward techniques like prompt engineering and finetuning do not seem to help much. Here are 2 SOTA techniques, to give you a glimpse of what the technological frontier currently looks like (at least in open source):Post-process images with a separate ML-based pipeline to fix hands after initial generation. This creates well-formed hands but doesn't seem to take interactions with other objects into (sufficient or any) consideration.If you're not trying to specifically generate hands, but just don't want to see incidentally bad hands in images with humans in them, get rid of all hand-related prompts, LoRAs, textual inversions, etc., and just putting "hands" in the negative prompt. This doesn't eliminate all hands but reduces the number/likelihood of hands in the picture and also makes the remaining ones look better. (The idea behind this is that it makes the AI "try less hard" to generate hands, and perhaps focus more on central examples that it has more training on.Of course generating hands is ultimately not a very hard problem. Hand anatomy and its interactions with other objects pose no fundamental mysteries. Bad hands are easy for humans to recognize and therefore we have quick and easy feedback for how well we're solving the problem. We can use our explicit understanding of hands to directly help solve the problem (solution 1 above used at least the fact that hands are compact 3D objects), or just provide the AI with more high quality training data (physically taking more photos of hands if needed) until it recognizably fixed itself.What about philosophy? Well, scarcity of existing high quality training data, check. Lots of unhelpful data labeled "philosophy", check. Low proportion of philosophy in the training data, check. Quick and easy to generate more high quality data, no. Good explicit understanding of the principles involved, ...

]]>
Wei Dai https://forum.effectivealtruism.org/posts/axSfJXriBWEixsHGR/ai-doing-philosophy-ai-generating-hands Mon, 15 Jan 2024 13:48:35 +0000 EA - AI doing philosophy = AI generating hands? by Wei Dai Wei Dai https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:48 no full 5
y5vrPL4XRkmEhet5F EA - EA Nigeria: Reflecting on 2023 and Looking Ahead to 2024 by EA Nigeria Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Nigeria: Reflecting on 2023 and Looking Ahead to 2024, published by EA Nigeria on January 16, 2024 on The Effective Altruism Forum.SummaryEA Nigeria works to create an altruistic, supportive, and informed community of people in Nigeria who use evidence-based and reasoned approaches to distribute their resources, be it career, money, or other resources, to maximize their positive impact. EA Nigeria shares impactful resources and facilitates networking, knowledge sharing, skill building, and collaboration among its community.In 2023, we:Conducted three rounds of an introductory fellowship program, graduating 28 participants.Conducted four rounds of skill-building workshops with an active participation of 24 members.Organized an annual community retreat, fostering engagement with 28 enthusiastic participants.Published a monthly newsletter for nine consecutive months, gaining momentum with 482 subscribers by December 2023.Facilitated five community insight calls, promoting knowledge sharing and drawing a total attendance of 52 individuals.Delivered sixteen personalized guidance and networking connections, enhancing the impact of our support initiatives.Updated the opportunity board from June to December 2023 with 120 accessible opportunities to our community in Nigeria.In 2024, Our key strategies are:Improving the infrastructure and capacityConduct rounds of skilling workshops.Conduct rounds of the EA intro program.Explore the accelerator program.Enhancement of engagement and retentionFacilitating knowledge-sharing calls and continuous personalized guidance.Updating opportunity board weekly and a bi-monthly newsletter.Conduct annual community retreat.Offering continuous support to local groups and student clubs.Outreach and professional growthRecruiting additional members through fellowships, events, and outreach.Set up a donation page and explore fundraising for the aligned charity locallyExplore fiscal sponsorship for aligned projects and individuals.About EA Nigeria: Vision, Mission, and StrategyFounded in 2020, EA Nigeria is a national chapter of the global Effective Altruism community in Nigeria, officially incorporated as the "Impactful Altruism Initiative" by the Corporate Affairs Commission of Nigeria in 2023. Our vision is a cultural setting where resources are distributed effectively for maximum impact, with the mission of building an altruistic, supportive, and informed community.Our current strategy are:Improving infrastructure, structure, and capacity.Enhancing community engagement and retention.Outreach and professional growthActivities for Infrastructure and Capacity enhancement include:Education and skilling program: This involves single or multi-day workshops designed to enhance both capacity and ability. These workshops cover a spectrum of essential areas, such as career planning, high-impact research, and other relevant skill-building focuses.Introductory fellowship program: Crafted to deepen understanding of the core ideas and principles of effective altruism among participants.Mentorship and networking pairing: Forging networking and collaboration to empower individuals within the community for knowledge exchange, action, etc.Activities for Amplifying Community Engagement and RetentionOpportunity Board Updates: A dynamic opportunity board updated weekly, presenting relevant and accessible opportunities for our members.Community Insight Calls: Providing a discussion platform for members to exchange knowledge, socialize, and deepen their engagement with other community members.Retreat Event: Organize to increase community engagement and impactful value-aligned practice through knowledge exchange and networking for improved awareness and informed decisions.Guidance and Information: Delivering guidance and inform...

]]>
EA Nigeria https://forum.effectivealtruism.org/posts/y5vrPL4XRkmEhet5F/ea-nigeria-reflecting-on-2023-and-looking-ahead-to-2024-1 Tue, 16 Jan 2024 20:39:25 +0000 EA - EA Nigeria: Reflecting on 2023 and Looking Ahead to 2024 by EA Nigeria EA Nigeria https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:02 no full 1
gGv7W7fmGvKbdNAcQ EA - Giving Farm Animals a Name and a Face: The Power of The Identifiable Victim Effect by Rakefet Cohen Ben-Arye Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Giving Farm Animals a Name and a Face: The Power of The Identifiable Victim Effect, published by Rakefet Cohen Ben-Arye on January 16, 2024 on The Effective Altruism Forum.In this post, we provide an overview of our recent scientific paper, "Giving Farm Animals a Name and a Face: Eliciting Animal Advocacy among Omnivores using the Identifiable Victim Effect," which was published in the Journal of Environmental Psychology. We delve into the findings of our study with Dr. Eliran Halali, highlighting the benefits of telling the story of a single identifiable individual and its implications for future research on animal advocacy.IntroductionIn an era where we are no longer dependent on animal protein and can survive and even thrive on plant-based nutrition - a diet that is increasingly recognized for its health (Melina, Craig, and Levin 2016) and environmental benefits (Ranganathan et al. 2016), our study "Giving Farm Animals a Name and a Face" explores a unique approach to animal advocacy. We investigate whether the identifiable victim effect, a well-documented phenomenon in eliciting prosocial behavior (Small and Loewenstein, 2003), can be leveraged to promote empathy and action toward farm animals among omnivores.The Identifiable Victim EffectPrevious research has shown that stories about a single, identifiable victim are more effective in evoking prosocial affect and behavior than information about anonymous or statistical victims (Jenni and Loewenstein 1997; Small, Loewenstein, and Slovic 2007; Kogut and Ritov 2005b, [a] 2005).This phenomenon, known as the identifiable victim effect, although usually accompanied by a photo or a video of the identifiable victim, suggests that even minimal identifiability can significantly increase caring and donations (Small and Loewenstein 2003). Our research expands on this concept, exploring its application in animal advocacy and, mainly, whether one can elicit compassion for farm animals among omnivores.The Identifiable Animal Victim EffectResearch on the identifiable victim effect, primarily focused on human beneficiaries, has only recently expanded to animal victims. Studies explored this effect with endangered animals and climate crisis (Markowitz et al. 2013; Hsee and Rottenstreich 2004). Markowitz's study (2013) revealed that non-environmentalists were more likely to donate to a single identified animal victim, such as a panda than a group.However, this effect was not as prominent among environmentalists, possibly due to their already high prosocial intentions. These findings suggest that the identifiable victim effect can be a crucial factor in animal advocacy, highlighting the unique impact of emotional connection to a single, identifiable animal.Our study uniquely challenges the identifiable victim effect by focusing on omnivores, who are the very reason the victim needs help in the first place.MethodParticipants were exposed to an experimental intervention and answered questionnaires.InterventionLucky's story. Drawing inspiration from real-life cases, we centered on Lucky, a fictional calf who was given a name and a face (picture), or unidentified calves without a name and a face.Potential mechanismsSympathy. For example, "Lucky's (The farm animals') story made me very sad."Personal distress. For example, "I felt sympathy toward Lucky (the farm animal)."Ambivalence towards meat. For example, "I feel torn between the two sides of eating meat."Potential conditionsConcern. For example, "When I see someone being taken advantage of, I feel kind of protective towards them."Perspective-taking. For example: "I believe that there are two sides to every question and try to look at them both."Empathy. For example: "If I see someone fidgeting, I'll start feeling anxious too."Identification with animals. Compos...

]]>
Rakefet Cohen Ben-Arye https://forum.effectivealtruism.org/posts/gGv7W7fmGvKbdNAcQ/giving-farm-animals-a-name-and-a-face-the-power-of-the Tue, 16 Jan 2024 14:10:26 +0000 EA - Giving Farm Animals a Name and a Face: The Power of The Identifiable Victim Effect by Rakefet Cohen Ben-Arye Rakefet Cohen Ben-Arye https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:36 no full 3
sbiHDSTpAib5aZpzy EA - Meta Charity Funders: Summary of Our First Grant Round and Path Forward by Joey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta Charity Funders: Summary of Our First Grant Round and Path Forward, published by Joey on January 16, 2024 on The Effective Altruism Forum.We think this post will be relevant for people who want to apply toMeta Charity Funders (MCF) in the future and people who want to better understand the EA Meta funding landscape.The post is written by the organisers of MCF (who are all authors of this post).Some of our members might not agree with everything said.SummaryMeta Charity Funders (MCF) is a new funding circle that aims to fund charitable projects working one level removed from direct impact. In our first grant round spanning Aug-Oct 2023, we received 101 applications and ultimately funded 6 projects: Future Forward, Ge Effektivt, Giving What We Can, An anonymous GCR career transition initiative, promoting Peter Singer's work, and UHNW donation advisory. In total, our members gave $686,580 to these projects. We expect our next round to give 20% to 50% more than this amount, as our first round had less donor engagement and funding capacity than we expect in the future.giving multipliers" that help grow the pie of effective donations.Our grant-making process this roundMCF waslaunched at the end of July, 2023 and applications closed a month later, at the end of August. Over two months, our funding circle convened every two weeks to collaboratively decide on funding allocations, with individual members devoting additional time for evaluation between meetings. Our active members, composed of 9 individuals, undertook this project alongside their regular commitments.From the 101 applications received, the main organizers conducted an initial review. This process was aimed at creating a short(er) list of applications for more time-constrained members, by rather quickly determining if proposals were within scope, with a relevant approach and aligned team. This first stage resulted in 38 proposals advancing for further discussion, out of which 20 applicants were interviewed for more detailed insights.As the funding decisions approached in October, it became clear that many in our circle were nearing their annual donation limits or had less time than expected, which affected our final funding capacity. Ultimately, we funded 6 projects with total allocations of $686,580. See more about the grants we madebelow.While we are generally happy with this first round and very grateful for the many great applications and donors who have joined, we think we have significant room for growth and improvement. Most concretely, we hope and expect to give out more in future rounds; there were fewer active donating members in the circle this first round and several had already made their donations for the year. We also hope and expect to form and communicate a clearer scope of our funding priorities and make final grant decisions sooner within each round.Information for the next roundThe next round will open in late February, with grants given out in May.The application form will remain open but don't expect your application to be processed before March. We were generally excited about the applications we received for this round and hope that we will get similar applications in the next round as well.If you want to join Meta Charity Funders as a donor, please fill inthis form. Note that there is an expected annual donation amount of a minimum $100,000, but you obviously do not have to donate if you do not think there are good enough opportunities, and during the first year you can mainly observe. If you have any questions, please contact us at metacharityfunders@gmail.com.Check out ourwebsite to learn more about Meta Charity Funders and stay up-to-date with the new funding round.The most common reasons for rejectionBy sharing the most common reasons for rejections, we hope...

]]>
Joey https://forum.effectivealtruism.org/posts/sbiHDSTpAib5aZpzy/meta-charity-funders-summary-of-our-first-grant-round-and Tue, 16 Jan 2024 12:30:30 +0000 EA - Meta Charity Funders: Summary of Our First Grant Round and Path Forward by Joey Joey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:31 no full 4
PArvxhBaZJrGAuhZp EA - Report on the Desirability of Science Given New Biotech Risks by Matt Clancy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Report on the Desirability of Science Given New Biotech Risks, published by Matt Clancy on January 17, 2024 on The Effective Altruism Forum.Should we seek to make our scientific institutions more effective? On the one hand, rising material prosperity has so far been largely attributable to scientific and technological progress. On the other hand, new scientific capabilities also expand our powers to cause harm. Last year I wrote a report on this issue, "The Returns to Science in the Presence of Technological Risks." The report focuses specifically on the net social impact of science when we take into account the potential abuses of new biotechnology capabilities, in addition to benefits to health and income.The main idea of the report is to develop an economic modeling framework that lets us tally up the benefits of science and weigh them against future costs. To model costs, I start with the assumption that, at some future point, a "time of perils" commences, wherein new scientific capabilities can be abused and lead to an increase in human mortality (possibly even human extinction).In this modeling framework, we can ask if we would like to have an extra year of science, with all the benefits it brings, or an extra year's delay to the onset of this time of perils. Delay is good in this model, because there is some chance we won't end up having to go through the time of perils at all.I rely on historical trends to estimate the plausible benefits to science. To calibrate the risks, I use various forecasts made in theExistential Risk Persuasion tournament, which asked a large number of superforecasters and domain experts several questions closely related to the concerns of this report. So you can think of the model as helping assess whether the historical benefits of science outweigh one set of reasonable (in my view) forecasts of risks.What's the upshot? From the report's executive summary:A variety of forecasts about the potential harms from advanced biotechnology suggest the crux of the issue revolves around civilization-ending catastrophes. Forecasts of other kinds of problems arising from advanced biotechnology are too small to outweigh the historic benefits of science.For example, if the expected increase in annual mortality due to new scientific perils is less than 0.2-0.5% per year (and there is no risk of civilization-ending catastrophes from science), then in this report's model, the benefits of science will outweigh the costs.I argue the best available forecasts of this parameter, from a large number of superforecasters and domain experts in dialogue with each other during the recent existential risk persuasion tournament, are much smaller than these break-even levels. I show this result is robust to various assumptions about the future course of population growth and the health effects of science, the timing of the new scientific dangers, and the potential for better science to reduce risks (despite accelerating them).On the other hand, once we consider the more remote but much more serious possibility that faster science could derail advanced civilization, the case for science becomes considerably murkier. In this case, the desirability of accelerating science likely depends on the expected value of the long-run future, as well as whether we think the forecasts of superforecasters or domain experts in the existential risk persuasion tournament are preferred.These forecasts differ substantially: I estimate domain expert forecasts for annual mortality risk are 20x superforecaster estimates, and domain expert forecasts for annual extinction risk are 140x superforecaster estimates.The domain expert forecasts are high enough, for example, that if we think the future is "worth" more than 400 years of current social welfare, in one version of my mode...

]]>
Matt Clancy https://forum.effectivealtruism.org/posts/PArvxhBaZJrGAuhZp/report-on-the-desirability-of-science-given-new-biotech Wed, 17 Jan 2024 22:48:07 +0000 EA - Report on the Desirability of Science Given New Biotech Risks by Matt Clancy Matt Clancy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:16 no full 1
E9AQEKZYtCB84ckXu EA - EA Infrastructure Fund Ask Us Anything (January 2024) by Tom Barnes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Infrastructure Fund Ask Us Anything (January 2024), published by Tom Barnes on January 17, 2024 on The Effective Altruism Forum.The EA Infrastructure Fund (EAIF) is running an Ask Us Anything! This is a time where EAIF grantmakers have set aside some time to answer questions on the Forum. I (Tom) will aim to answer most questions next weekend (~January 20th), so please submit questions by the 19th.Please note: We believe the next three weeks are an especially good time to donate to EAIF, because:We continue to face signficant funding constraints, leading to many great projects going either unfunded or underfundedYour donation will be matched at a 2:1 ratio until Feb 2. EAIF has ~$2m remaining in available matching funds, meaning that (unlike LTFF) this match is unlikely to be utilised without your supportIf you agree, you can donate to ushere.About the FundThe EA Infrastructure Fund aims to increase the impact of projects that use the principles of effective altruism, by increasing their access to talent, capital, and knowledge.Over 2022 and H1 2023,we made347 grants totalling $13.4m in dispersement. You can see our public grants database here.Related postsEA Infrastructure Fund's Plan to Focus on Principles-First EALTFF and EAIF are unusually funding-constrained right nowEA Funds organizational update: Open Philanthropy matching and distancingEA Infrastructure Fund: June 2023 grant recommendationsWhat do Marginal Grants at EAIF Look Like? Funding Priorities and Grantmaking Thresholds at the EA Infrastructure FundAbout the TeamTom Barnes: Tom is currently a Guest Fund Manager at EA Infrastructure Fund (previously an Assistant Fund Manager since ~Oct 2022). He also works as an Applied Researcher at Founders Pledge, currently on secondment to the UK Government to work on AI policy. Previously, he was a visiting fellow at Rethink Priorities, and was involved in EA uni group organizing.Caleb Parikh: Caleb is the project lead of EA Funds. Caleb has previously worked on global priorities research as a research assistant at GPI, EA community building (as a contractor to the community health team at CEA), and global health policy. Caleb currently leads EAIF as interim chair.Linchuan Zhang: Linchuan (Linch) Zhang currnetly works full-time at EA Funds. He was previously a Senior Researcher at Rethink Priorities working on existential security research. Before joining RP, he worked on time-sensitive forecasting projects around COVID-19. Previously, he programmed for Impossible Foods and Google and has led several EA local groups.Ask Us AnythingWe're happy to answer any questions - marginal uses of money, how we approach grants, questions/critiques/concerns you have in general, what reservations you have as a potential donor or applicant, etc.There's no hard deadline for questions, but I would recommend submitting by the 19th January as I aim to respond from the 20thAs a reminder, we remainfunding-constrained, and your donation will be matched (for every $1 you donate, EAIF will receive $3). Please considerdonating!If you have projects relevant to builiding up the EA community's infrastructure, you can alsoapply for funding here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Tom Barnes https://forum.effectivealtruism.org/posts/E9AQEKZYtCB84ckXu/ea-infrastructure-fund-ask-us-anything-january-2024 Wed, 17 Jan 2024 10:24:31 +0000 EA - EA Infrastructure Fund Ask Us Anything (January 2024) by Tom Barnes Tom Barnes https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:34 no full 8
gLquricjwqfzBwRw4 EA - Good job opportunities for helping with the most important century by Holden Karnofsky Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Good job opportunities for helping with the most important century, published by Holden Karnofsky on January 18, 2024 on The Effective Altruism Forum.Yes, this is my first post in almost a year. I'm no longer prioritizing this blog, but I will still occasionally post something.I wrote ~2 years ago that it was hard to point to concrete opportunities to help the most important century go well. That's changing.There are a good number of jobs available now that are both really promising opportunities to help (in my opinion) and are suitable for people without a lot of pre-existing knowledge of AI risk (or even AI). The jobs are demanding, but unlike many of the job openings that existed a couple of years ago, they are at well-developed organizations and involve relatively clear goals.So if you're someone who wants to help, but has been waiting for the right moment, this might be it. (Or not! I'll probably keep making posts like this as the set of opportunities gets wider.)Here are the jobs that best fit this description right now, as far as I can tell. The rest of this post will give a bit more detail on how these jobs can help, what skills they require and why these are the ones I listed.OrganizationLocationJobsLinkUK AI Safety InstituteLondon (remote work possible within the UK)Engineering and frontend roles, cybersecurity rolesHereAAAS, Horizon Institute for Public Service, Tech CongressWashington, DCFellowships serving as entry points into US policy rolesHereAI companies: Google DeepMind, OpenAI, Anthropic1San Francisco and London (with some other offices and remote work options)Preparedness/Responsible Scaling roles; alignment research rolesHere, here, here, hereModel Evaluation and Threat Research (METR) (fewer roles available)Berkeley (with remote work options)Engineering and data rolesHereSoftware engineering and development (and related areas) seem especially valuable right now, so think about whether you know folks with those skills who might be interested!How these helpA lot of these jobs (and the ones I know the most about) would be contributing toward a possible global standards regime for AI: AI systems should be subject to testing to see whether they present major risks, and training/deploying AI should stopped (e.g., by regulation) when it can't be done safely.The basic hope is:Teams will develop "evals": tests of what AIs are capable of, particularly with respect to possible risks. For example, one eval might be prompting an AI to give a detailed description of how to build a bioweapon; the more detailed and accurate its response, the more risk the AI poses (while also possibly having more potential benefits as well, by virtue of being generally more knowledgeable/capable).It will become common (through regulation, voluntary action by companies, industry standards, etc.) for cutting-edge AI systems to be subject to evals for dangerous capabilities.When evals reveal risk, they will trigger required mitigations. For example:An AI capable of bioweapons development should be (a) deployed in such a way that people can't use it for that (including by "jailbreaking" it), and (b) kept under good security to stop would-be terrorists from circumventing the restrictions.AIs with stronger and more dangerous capabilities might require very challenging mitigations, possibly beyond what anyone knows how to do today (for example, rigorous demonstrations that an AI won't have dangerous unintended aims, even if this sort of thing is hard to measure).Ideally, we'd eventually build a robust international governance regime (comparisons have been made to nuclear non-proliferation regimes) that reliably enforces rules like these, while safe and beneficial AI goes forward. But my view is that even dramatically weaker setups can still help a lo...

]]>
Holden Karnofsky https://forum.effectivealtruism.org/posts/gLquricjwqfzBwRw4/good-job-opportunities-for-helping-with-the-most-important Thu, 18 Jan 2024 22:17:05 +0000 EA - Good job opportunities for helping with the most important century by Holden Karnofsky Holden Karnofsky https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:34 no full 1
abs8xsBDyCzR83YzY EA - CEA is spinning out of Effective Ventures by Eli Nathan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA is spinning out of Effective Ventures, published by Eli Nathan on January 18, 2024 on The Effective Altruism Forum.The Centre for Effective Altruism (CEA) is spinning out as a project of Effective Ventures Foundation UK and Effective Ventures US (known collectively as Effective Ventures or EV) to become an independent organisation. AsEV decentralises, we expect that bringing our operations in-house and establishing our own legal entities will better allow us to accomplish our mission and goals.We'd like to extend a deep thank you to the EV team for all their hard work in helping us to scale our programs, and in providing essential support and leadership over the last few years.Alongsideour new CEO joining the team next month, this means that we're entering a new era for CEA. We're excited to build an operations team that can align more closely with our needs, as well as a governance structure that allows us to be independent and better matches our purpose.As EV's largest project and one with many complex and interwoven programs, we expect this spin-out process will take some time, likely between 12-24 months. This is because we'll need to set up new legal structures, hire new staff, manage visas and intellectual property, and handle various other items. We expect this spin-out to not affect the external operations of our core products, and generally not be particularly noticeable from the outside - EA Global and the EA Forum, for example, will continue to run as they would otherwise.We expect to start hiring for our new operations team over the coming months, and will have various job postings live soon - likely across finance, legal, staff support, and other areas. If you're potentially interested in these types of roles, you can fill out the expression of interest formhere.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Eli_Nathan https://forum.effectivealtruism.org/posts/abs8xsBDyCzR83YzY/cea-is-spinning-out-of-effective-ventures Thu, 18 Jan 2024 20:53:52 +0000 EA - CEA is spinning out of Effective Ventures by Eli Nathan Eli_Nathan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:54 no full 2
CKYhsD2xCw48DwJiD EA - Forecasting accidentally-caused pandemics by JoshuaBlake Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasting accidentally-caused pandemics, published by JoshuaBlake on January 18, 2024 on The Effective Altruism Forum.Future pandemics could arise from an accident (a pathogen being used in research accidentally infecting a human). The risk from accidental pandemics is likely increasing in line with the amount of research being conducted. In order to prioritise pandemic preparedness, forecasts of the rate of accidental pandemics are needed. Here, I describe a simple model, based on historical data, showing that the rate of accidental pandemics over the next decade is almost certainly lower than that of zoonotic pandemics (pandemics originating in animals).Before continuing, I should clarify what I mean by an accidental pandemic. By 'accidental pandemic,' I refer to a pandemic arising from human activities, but not from malicious actors. This includes a wide variety of activities, including lab-based research and clinical trials or more unusual activities such as hunting for viruses in nature.The first consideration in the forecast is the historic number of accidental pandemics. One historical pandemic (1977 Russian flu) is widely accepted to be due to research gone wrong, with the leading hypothesis being a clinical trial. The estimated death toll from this pandemic is 700,000. The origin of the COVID-19 pandemic is disputed, and I won't go further into that argument here. Therefore, historically, there have been one or two accidental pandemics.Next, we need to consider the amount of research that could cause such a pandemics, or the number of "risky research units" that have been conducted. No good data exists on risky research units directly.However, we only need a measure that is proportional to the number of experiments.[1] I consider three indicators: publicly reported lab accidents, as collated by Manheim and Lewis (2022); the rate at which BSL-4 labs (labs handling the most dangerous pathogens) are being built, gathered by Global BioLabs; and the number of virology papers being published, categorised by the Web of Science database. I find a good fit with a shared rate of growth at 2.5% per year.A plateau in the number of virology papers in the Web of Science database is plausibly visible. It is too early to tell if this trend will feed through to the number of labs or datasets, but this is a weakness of this analysis. However, a similar apparent plateau is visible in the 1990s, yet growth then appeared to restart along the previous trendline.The final step is to extrapolate this growth in risky research units and see what it implies for how many accidental pandemics we should expect to see. Below I plot this: the average (expected) number of pandemics per year. Two scenarios are considered: where the basis is one historical accidental pandemic (1977 Russian flu) and where the basis is two historical accidental pandemics (adding COVID-19). For comparison, I include the historic long-run average number of pandemics per year, 0.25.[2]Predictions for the ten years starting with 2024 are in the table below. This gives, for each scenario: the number of accidental pandemics that are expected, a range which the number of pandemics should fall in with at least 80% probability, and the probability of at least one accidental pandemic occurring.ScenarioExpected number80% predictionProbability at least 11 previous1.20-256%2 previous2.10-376%Overall, the conclusion from the model is that, for the next decade, the threat of zoonotic pandemics is likely still greater. However, if lab activity continues to increase at this rate, accidental pandemics may dominate.The model here is extremely simple, and a more complex one would very likely decrease the number forecast. In particular, this model relies on the following major assumptions.First, the actual ...

]]>
JoshuaBlake https://forum.effectivealtruism.org/posts/CKYhsD2xCw48DwJiD/forecasting-accidentally-caused-pandemics Thu, 18 Jan 2024 14:02:17 +0000 EA - Forecasting accidentally-caused pandemics by JoshuaBlake JoshuaBlake https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:38 no full 4
Qw5kwJhgbr6HSWHsJ EA - Some heuristics I use for deciding how much I trust scientific results by Nathan Barnard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some heuristics I use for deciding how much I trust scientific results, published by Nathan Barnard on January 18, 2024 on The Effective Altruism Forum.I've done nothing to test these heuristics and have no empirical evidence for how well they work for forecasting replications or anything else. I'm going to write them anyway. The heuristics I'm listing are roughly in order of how important I think they are. My training is as an economist (although I have substantial exposure to political science) and lots of this is going to be written from an econometrics perspective.How much does the result rely on experimental evidence vs causal inference from observational evidence?I basically believe without question every result that mainstream chemists and condensed matter physicists say is true. I think a big part of this is that in these fields it's really easy to experimentally test hypotheses, and to really precisely test differences in hypotheses experimentally. This seems great.On the other hand, when relying on observational evidence to get reliable causal inference you have to control forconfounders while not controlling forcolliders. This is really hard! It generally requires finding a natural experiment that introduces randomisation or having very good reason to think that you've controlled for all confounders.We also make quite big updates on which methods effectively do this. For instance, until last year we thought thattwo-way fixed effects did a pretty good job of this before we realised that actuallyheterogeneous treatment effects are a really big deal for two-way fixed effects estimators.What's more, in areas that use primarily observational data there's a really big gap between fields in how often papers even try to use causal inference methods and how hard they work to show that theiridentifying assumptions hold. I generally think that modern microeconomics papers are the best on this and nutrition science the worst.I'm slightly oversimplifying by using a strict division between experimental and observational data. All data is observational and what matters is how credibly you think you've observedwhat would happen counterfactually without some change. But in practice, this is much easier in settings where we think that we can change the thing we're interested in without other things changing.There are some difficult questions around scientific realism here that I'm going to ignore because I'm mostly interested in how much we can trust a result in typical use cases. The notable area where I think this actually bites is thinking about the implications of basic physics forlongtermism where it does seem like basic physics actually changes quite a lot over time with important implications for questions like howlarge we expect the future to be.Are there practitioners using this result and how strong is the selection pressure on the resultIf a result is being used a lot and there would be easily noticeable and punishable consequences if the result was wrong, I'm way more likely to believe that the result is at least roughly right if it's relied on a lot.For instance, this means I'm actually really confident that important results inauction design hold. Auction design is used all the time by bothgovernment andprivate sector actors in ways that earn these actors billions of dollars and, in the private sector case at least, are iterated on regularly.Auction theory is an interesting case because it comes out of pretty abstract microeconomic theory and wasn't developed really based on laboratory experiments, but I'm still pretty confident in it because of how widely it's used by practitioners and is subject to strong selection pressure.On the other hand, I'm much less confident in lots of political science research. It seems like places like hedg...

]]>
Nathan_Barnard https://forum.effectivealtruism.org/posts/Qw5kwJhgbr6HSWHsJ/some-heuristics-i-use-for-deciding-how-much-i-trust Thu, 18 Jan 2024 11:29:24 +0000 EA - Some heuristics I use for deciding how much I trust scientific results by Nathan Barnard Nathan_Barnard https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:20 no full 5
pzCuKWLYBfws36oLP EA - Against Learning From Dramatic Events by bern Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against Learning From Dramatic Events, published by bern on January 18, 2024 on The Effective Altruism Forum.I highly recommend reading the whole post, but I found Part V particularly good, which I have copied in it's entirety below.V.Do I sound defensive about this? I'm not. This next one is defensive.I'm part of the effective altruist movement. The biggest disaster we ever faced was the Sam Bankman-Fried thing. Some lessons people suggested to us then were:Be really quick to call out deceptive behavior from a hotshot CEO, even if you don't yet have the smoking gun.It was crazy that FTX didn't even have a board. Companies need strong boards to keep them under control.Don't tweet through it! If you're in a horrible scandal, stay quiet until you get a great lawyer and they say it's in your best interests to speak.Instead of trying to play 5D utilitarian chess, just try to do the deontologically right thing.People suggested all of these things, very loudly, until they were seared into our consciousness. I think we updated on them really hard.Then came the second biggest disaster we faced, the OpenAI board thing, where we learned:Don't accuse a hotshot CEO of deceptive behavior unless you have a smoking gun; otherwise everyone will think you're unfairly destroying his reputation.Overly strong boards are dangerous. Boards should be really careful and not rock the boat.If a major news story centers around you, you need to get your side out there immediately, or else everyone will turn against you.Even if you are on a board legally charged with "safeguarding the interests of humanity", you can't just speak out and try to safeguard the interests of humanity. You have to play savvy corporate politics or else you'll lose instantly and everyone will hold you in contempt.These are the opposite lessons as the FTX scandal.I'm not denying we screwed up both times. There's some golden mean, some virtue of practical judgment around how many red flags you need before you call out a hotshot CEO, and in what cases you should do so. You get this virtue after looking at lots of different situations and how they turned out.You definitely don't get this virtue by updating maximally hard in response to a single case of things going wrong. If you do that, you'll just fling yourself all the way into the opposite failure mode. And then when you fail again the opposite time, you'll fling yourself back into the original failure mode, and yo-yo back and forth forever.The problem with the US response to 9-11 wasn't just that we didn't predict it. It was that, after it happened, we were so surprised that we flung ourselves to the opposite extreme and saw terrorists behind every tree and around every corner. Then we made the opposite kind of failure (believing Saddam was hatching terrorist plots, and invading Iraq).The solution is not to update much on single events, even if those events are really big deals.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
bern https://forum.effectivealtruism.org/posts/pzCuKWLYBfws36oLP/against-learning-from-dramatic-events Thu, 18 Jan 2024 08:28:40 +0000 EA - Against Learning From Dramatic Events by bern bern https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:51 no full 6
CBq2KFAjE6GTxTjEF EA - First book focusing on EA and Farmed Animals: The Farm Animal Movement: Effective Altruism, Venture Philanthropy, and the Fight to End Factory Farming in America by Jeff Thomas Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: First book focusing on EA and Farmed Animals: The Farm Animal Movement: Effective Altruism, Venture Philanthropy, and the Fight to End Factory Farming in America, published by Jeff Thomas on January 18, 2024 on The Effective Altruism Forum.Thank you so much to Lizka for encouraging me in this post.I'm so excited to share my book that will be of great interest to EA folks was just released by Lantern. The Farm Animal Movement: Effective Altruism, Venture Philanthropy, and the Fight to End Factory Farming in America tells the stories of this exhilarating moment in our movement in a way that I hope will inspire millennials to dedicate their careers and resources to EA and to helping end farm animal suffering. The chapters are:Introduction: Ending the World's Worst SufferingNumbers Don't Lie: Effective Altruism and Venture PhilanthropyPolitical Power: Family Farmers Versus Big MeatVegans Making Laws: From California to Capitol HillBuilding a Movement: Mercy for Animals and Emotional IntelligenceBetrayal of Trust: Inside the Humane Society's #MeToo Scandal"We are hurting so much": Racism and 'Color-blindness'Animal Law and Legal Education: Pathbreakers and MillennialsDreamers: The Good Food Institute and Clean MeatThe target audience is people who are EA- or animal-aligned (students, career-changers, donors, volunteers) but who haven't yet found their niche. Hopefully it will be helpful for EAs as a recruitment tool. It's the first book to focus exclusively on EA and farm animals, so I hope it makes a difference!I feel like the movement needed a book that would be useful for laypeople, advocates and scholars. The book has a popular, engaging writing style with academic methods and footnotes. I am thrilled at how the book turned out with the insight and help from the team at Lantern. All credit goes to them for the beautiful cover design.I am so proud to be a member of this movement and grateful to all who participated in this project (EA Forum commenters, you know who you are :) ). Thank you for the opportunity to post on this Forum.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Jeff Thomas https://forum.effectivealtruism.org/posts/CBq2KFAjE6GTxTjEF/first-book-focusing-on-ea-and-farmed-animals-the-farm-animal Thu, 18 Jan 2024 07:57:48 +0000 EA - First book focusing on EA and Farmed Animals: The Farm Animal Movement: Effective Altruism, Venture Philanthropy, and the Fight to End Factory Farming in America by Jeff Thomas Jeff Thomas https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:10 no full 7
tMcPJvMFLYppqHKiP EA - Unpacking Martin Sandbu's recent(ish) take on EA by JWS Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unpacking Martin Sandbu's recent(ish) take on EA, published by JWS on January 20, 2024 on The Effective Altruism Forum.The original article is here: https://www.ft.com/content/128f3a15-b048-4741-b3e0-61c9346c390bWhy respond to this article?When browsing EA Twitter earlier this month, someone whose opinions on EA I respect quote-tweeted someone that I don't (at least on the topic of EA[1]). The subject of both tweets was an article published at the end of 2023 by Martin Sandbu of the Financial Times titled "Effective altruism was the favoured creed of Sam Bankman-Fried. Can it survive his fall?" Given that both of these people seem to broadly endorse the views, or at least the balance, found in the article I thought it would be worthwhile reading to see what a relatively mainstream commentator would think about EA.The Financial Times is one of the world's leading newspapers and needs very little introduction, and Sandbu is one of its most well-known commenters. What gets printed in the FT is often repeated across policy circles, not just in Britain but across the world, and especially in wonky/policy-focused circles that have often been quite welcoming of EA either ideologically or demographically.As always, I encourage readers to read and engage with the original article itself to get a sense of whether you think my summarisation and responses are fair.Reviewing Sandbu's ArticleHaving read the article, I think it's mainly covering two separate questions related to EA, so I'll discuss them one-at-a-time. This means I'll be jumping back-and-forth a bit across the article to group similar parts together and respond to the underlying points, though I've tried to edit Sandbu's points down as little as possible.1) How to account for EA's historical success?The first theme in the article is an attempt to give a historical account of EA's emergence, and also an attempt by Sandbu to account for its unexpected success. Early on in the article, Sandbu clearly states his confusion at how a movement with the background of EA grew so much in such a short space of time:"Even more puzzling is how quickly effective altruism rose to prominence - it is barely a decade since a couple of young philosophers at the University of Oxford invented the term ... nobody I knew would have predicted that any philosophical outlook, let alone this one, would take off in such a spectacular way."He doesn't explicitly say so, but I think a reason behind this is EA's heavy debt to Utilitarian thinkers and philosophy, which Sandbu sees as having been generally discredited or disconfirmed over the 20th century:"In the 20th century, Utilitarianism… progressively lost the favour of philosophers, who considered it too freighted with implausible implications."The history of philosophy and the various 20th century arguments around Utilitarianism are not my area of expertise, but I'm not really sure I buy that argument, or even accept how much it's a useful simplification (a potted history, as Sandbu says) of the actual trends in normative ethics.First, Utilitarianism has had plenty of criticism and counter-development before the 20th century.[2] And even looking at the field of philosophy right now, consequentialism is just as popular as the other two major alternatives in normative ethics.[3] I suspect that Sandbu is hinting at Bernard Williams' famous essay against utilitarianism, but I don't think one should consider that essay the final word on the subject.In any case, Sandbu is telling a story here, trying to set a background against which the key founding moment of EA happens:"Then came Peter Singer. In a famous 1972 article... [Singer] argued that not giving money to save lives in poor countries is morally equivalent to not saving a child drowning in a shallow pond... Any personal luxury...

]]>
JWS https://forum.effectivealtruism.org/posts/tMcPJvMFLYppqHKiP/unpacking-martin-sandbu-s-recent-ish-take-on-ea Sat, 20 Jan 2024 10:20:43 +0000 EA - Unpacking Martin Sandbu's recent(ish) take on EA by JWS JWS https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 23:33 no full 2
6gXdbMzBh6ZbqbdZL EA - Some thoughts on moderation in doing good by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some thoughts on moderation in doing good, published by Vasco Grilo on January 21, 2024 on The Effective Altruism Forum.This is a crosspost for Some thoughts on moderation in doing good by Benjamin Todd, as published on 80,000 Hours' website on 5 May 2023.Here's one of the deepest tensions in doing good:How much should you do what seems right to you, even if it seems extreme or controversial, vs how much should you moderate your views and actions based on other perspectives?If you moderate too much, you won't be doing anything novel or ambitious, which really reduces how much impact you might have. The people who have had the biggest impact historically often spoke out about entrenched views and were met with hostility - think of the civil rights movement or Galileo.Moreover, simply following ethical 'common sense' has a horrible track record. It used to be common sense to think that homosexuality was evil, slavery was the natural order, and that the environment was there for us to exploit.And there is still so much wrong with the world. Millions of people die of easily preventable diseases, society is deeply unfair, billions of animals are tortured in factory farms, and we're gambling our entire future by failing to mitigate threats like climate change. These huge problems deserve radical action - while conventional wisdom appears to accept doing little about them.On a very basic level, doing more good is better than doing less. But this is a potentially endless and demanding principle, and most people don't give it much attention or pursue it very systematically. So it wouldn't be surprising if a concern for doing good led you to positions that seem radical or unusual to the rest of society.This means that simply sticking with what others think, doing what's 'sensible' or common sense, isn't going to cut it. And in fact, by choosing the apparently 'moderate' path, you could still end up supporting things that are actively evil.But at the same time, there are huge dangers in blazing a trail through untested moral terrain.The dangers of extremismMany of the most harmful people in history were convinced they were right, others were wrong - and they were putting their ideas into practice "for the greater good" but with disastrous results.Aggressively acting on a narrow, contrarian idea of what to do has a worrying track record, which includes people who have killed tens of millions and dominated whole societies - consider, for example, the the Leninists.The truth is that you're almost certainly wrong about what's best in some important ways . We understand very little of what matters, and everything has cascading and unforeseen effects.Your model of the world should produce uncertain results about what's best, but you should also be uncertain about which models are best to use in the first place.And this uncertainty arises not only on an empirical level but also about what matters in the first place (moral uncertainty) - and probably in ways you haven't even considered ('unknown unknowns').As you add additional considerations, you will often find that not only does how good an action seems to change, but even whether the action seems good or bad at all may change ('crucial considerations').For instance, technological progress can seem like a clear force for good as it raises living standards and makes us more secure. But if technological advances create new existential risks, the impact could be uncertain or even negative on the whole. And yet again, if you consider that faster technological development might get us through a particularly perilous period of history more quickly, it could seem positive again - and so on.Indeed, even the question of how to in principle handle all this uncertainty is itself very uncertain. There is no widely accepted ver...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/6gXdbMzBh6ZbqbdZL/some-thoughts-on-moderation-in-doing-good Sun, 21 Jan 2024 17:05:29 +0000 EA - Some thoughts on moderation in doing good by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:42 no full 2
oXrg4mf8dxtyBeBrg EA - Grantmakers should give more feedback by Ariel Pontes Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Grantmakers should give more feedback, published by Ariel Pontes on January 22, 2024 on The Effective Altruism Forum.BackgroundI've been actively involved in EA since 2020, when I started EA Romania. In my experience, one problem that frustrates many grant applicants is the limited feedback offered by grantmakers. In 2022, at the EAG in London, while trying to get more detailed feedback regarding my own application at the EAIF office hours, I realized that many other people had similar complaints. EAIF's response seemed polite but not very helpful. Shortly after this experience, I also read aforum post where Linch, a junior grantmaker at the time, argued that it's "rarely worth your time to give detailed feedback." The argument was:[F]rom a grantmaking perspective, detailed feedback is rarely worthwhile, especially to rejected applicants. The basic argument goes like this: it's very hard to accurately change someone's plans based on quick feedback (and it's also quite easy to do harm if people overupdate on your takes too fast just because you're a source of funding). Often, to change someone's plans enough, it requires careful attention and understanding, multiple followup calls, etc.And this time investment is rarely enough for you to change a rejected (or even marginal) grant to a future top grant. Meanwhile, the opportunity cost is again massive.Similarly, giving useful feedback to accepted grants can often be valuable, but it just isn't high impact enough compared to a) making more grants, b) making grants more quickly, and c) soliciting creative ways to get more highest-impact grants out.Since then I have heard many others complain about the lack of feedback when applying for grants in the EA space. My specific experience was with the EAIF, but based on what I've heard I have the feeling this problem might be endemic in the EA grantmaking culture in general.The case for more feedbackLinch's argument that "the opportunity cost of giving detailed feedback is massive" is only valid if by "detailed feedback" he means something really time consuming. However, it cannot be used to justify EAIF's current policy of giving no feedback at all by default, and giving literally a one-sentence piece of feedback upon request. Using this argument to justify something so extreme would be an example of what some might call "act utilitarianism", "naive utilitarianism", or"single-level" utilitarianism: it may seem that, in certain cases, giving feedback is a waste of resources compared to other counterfactual actions. If you only consider first-order consequences, however, killing a healthy checkup patient and using his organs to save five is also effective. In reality, we need to also consider higher order consequences. Is it healthy for a movement to adopt a policy of not giving feedback to grant applicants?Personally, I feel such a policy runs the risk of seeming disrespectful towards grant applicants who spend time and energy planning projects that end up never being implemented. This is not to say that the discomfort of disappointed applicants counts more than the suffering of Malaria infected children. But we are human and there is a limit to how much we can change via emotional resilience workshops. Besides,there is such a thing as too much resilience. I have talked to other EAs who applied for funds, 1:1 advice from 80k, etc, and many of them felt frustrated and somewhat disrespected after being rejected multiple times with no feedback or explanation. I find this particularly worrisome in the case of founders of national groups, since our experience may influence the development of the local movement. There is a paragraph from anarticle by The Economist which I think adds to my point:As the community has expanded, it has also become more exclusive. Conference...

]]>
Ariel Pontes https://forum.effectivealtruism.org/posts/oXrg4mf8dxtyBeBrg/grantmakers-should-give-more-feedback Mon, 22 Jan 2024 18:51:06 +0000 EA - Grantmakers should give more feedback by Ariel Pontes Ariel Pontes https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:38 no full 3
dKWCTujoKEpDF8Yb5 EA - Why I'm skeptical about using subjective time experience to assign moral weights by Andreas Mogensen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I'm skeptical about using subjective time experience to assign moral weights, published by Andreas Mogensen on January 22, 2024 on The Effective Altruism Forum.This post provides a summary of my working paper "Welfare and Felt Duration." The goal is to make the content of the paper more accessible and to add context and framing for an EA audience, including a more concrete summary of practical implications. It's also an invitation for you to ask questions about the paper and/or my summary of it, to which I'll try to reply as best I can below.What's the paper about?The paper is about how duration affects the goodness and badness of experiences that feel good or bad. For simplicity, I mostly focus on how duration affects the badness of pain.In some obvious sense, pains that go on for longer are worse for you. But we can draw some kind of intuitive distinction between how long something really takes and how long it is felt as taking. Suppose you could choose between two pains: one feels longer but is objectively shorter, and the other feels shorter but is objectively longer. Now the choice isn't quite so obvious. Still, some people are quite confident that you ought to choose the second: the one that feels shorter.They think it's how long a pain feels that's important, not how long it is. The goal of the paper is to argue that that confidence isn't warranted.Why is this important?This issue affects the moral weights assigned to non-human animals and digital minds.The case for thinking that subjective time experience varies across the animal kingdom is summarized inthis excellent post by Jason Schukraft, which was a huge inspiration for this paper. One particular line of evidence comes from variation in the critical-flicker fusion frequency (CFF), the frequency at which a light source that's blinking on and off is perceived as continuously illuminated. Some birds and insects can detect flickering that you and I would completely miss unless we watched a slow motion recording. That might be taken to indicate that time passes more slowly from their subjective perspective, and so, if felt duration is what matters, that suggests we should give additional weight to the lifetime welfare of those animals.here.A number of people also argue that digital minds could experience time very differently from us, and here the differences could get really extreme. Because of the speed advantages of digital hardware over neural wetware, a digital mind could conceivably be run at speeds many orders of magnitude higher than the brain's own processing speed, which might again lead us to expect that time will be felt as passing much more slowly. As above, this may be taken to suggest that we should give those experiences significantly greater moral weight.paper on digital minds.What's the argument?You can think of the argument of the paper as having three key parts.Part 1: What is felt duration?The first thing I want to do in the paper is emphasize that we don't really have a very strong idea of what we're talking about when we talk about the subjective experience of time. That should make us skeptical of our intuitions about the ethical importance of felt duration.It seems clear that it doesn't matter in itself how much time you think has passed: e.g., if you think the pain went on for six minutes, but actually it lasted five. If subjective duration is going to matter, it can't be just a matter of your beliefs about time's passage. Something about the way the pain is experienced has got to be different. But what exactly? I expect you probably don't have an obvious answer to that question at your fingertips. I certainly don't. It's also worth noting thatsome psychologists who study time perception claim that we can't distinguish empirically between judged and felt durati...

]]>
Andreas_Mogensen https://forum.effectivealtruism.org/posts/dKWCTujoKEpDF8Yb5/why-i-m-skeptical-about-using-subjective-time-experience-to Mon, 22 Jan 2024 18:30:00 +0000 EA - Why I'm skeptical about using subjective time experience to assign moral weights by Andreas Mogensen Andreas_Mogensen https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:19 no full 4
d8nW46LrTkCWdjiYd EA - Rates of Criminality Amongst Giving Pledge Signatories by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rates of Criminality Amongst Giving Pledge Signatories, published by Ben West on January 22, 2024 on The Effective Altruism Forum.SummaryI investigate the rates of criminal misconduct amongst people who have takenThe Giving Pledge (roughly: ~200 [non-EA] billionaires who have pledged to give most of their money to charity).I find that rates are fairly high:25% of signatories have been accused of financial misconduct, and 10% convicted[1]4% of signatories have spent at least one day in prisonOverall, 41% of signatories have had at least one allegation of substantial misconduct (financial, sexual, or otherwise)I estimate that Giving Pledgers are not less likely, and possibly more likely, to commit financial crimes than YCombinator entrepreneurs.I am unable to find evidence of The Giving Pledge doing anything to limit the risk of criminal behavior amongst its members, though I have heard second-hand that they do some sort of screening.I conclude that the rate of criminal behavior amongst major philanthropists is high, which means that we should not expect altruism to substantially lower the risks compared to that of the general population, and that negative impacts to EA's public perception may occur independently of whether our donors actually commit crimes (e.g. because even noncriminal billionaires have a negative public image).MethodologyI copied the list of signatories fromtheir website.Gina Stuessy and I searched the internet for "(name) lawsuit", "(name) crime" and also looked at their Wikipedia page.I categorized any results into "financial", "sexual", and "other", and also marked if they had spent at least one day in jail.Gina and I eventually decided that the data collection process was too time-consuming, and we stopped partway through. The final dataset includes 115 of the 232 signatories.[2][3]Data can be foundhere.How well do convictions correspond with immoral behavior?It is a well-worn take that our[4] legal system overly protects white-collar criminals: If an employee steals $20 from the cash register, that's a criminal offense that the police will prosecute, but if an employer under-pays their employees by $20 that's a civil offense where the police don't get involved.I found that the punishment of the criminals in my data set correlated extremely poorly with my intuition for how immorally they had behaved.It would be funny if it weren't sad that one of the longest prison sentences in my data set is from Kjell Inge Røkke, a Norwegian businessman whowas convicted of having an illegal license for his yacht.One particular way in which white-collar offenses are weird is that they often allow the defendant to settle without admitting wrongdoing.[5] E.g. my guess is that Philip Frost is guilty, but his settlement with the SECdoes not require him to admit wrongdoing.I wasn't able to find a single person who admitted guilt in a sexual misconduct case, despite ~7% of the signatories being accused, including in high-profile cases like people involved with Jeffrey Epstein.[6]I was considering trying to add some classification like "Ben thinks this person is guilty" but decided that this would be too time-consuming and subjective.Nonetheless, if you want my subjective opinion, my guess is that most of the people who were accused of financial misconduct are guilty of immoral behavior, under acommonsense morality definition of the term.Less controversially, some of these cases are ongoing, and presumably at least some of them will result in convictions, which makes looking only at the current conviction rate misleading.In any case though, I believe that this data set establishes that the base rate of both criminal and immoral behavior is fairly high among major philanthropists, no matter how you slice the data.Some Representative Case...

]]>
Ben_West https://forum.effectivealtruism.org/posts/d8nW46LrTkCWdjiYd/rates-of-criminality-amongst-giving-pledge-signatories Mon, 22 Jan 2024 18:10:10 +0000 EA - Rates of Criminality Amongst Giving Pledge Signatories by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:46 no full 6
qcGuAyHid3eEtqxdm EA - Is fear productive when communicating AI x-risk? [Study results] by Johanna Roniger Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is fear productive when communicating AI x-risk? [Study results], published by Johanna Roniger on January 23, 2024 on The Effective Altruism Forum.I want to share some results from my MSc dissertation on AI risk communication, conducted at the University of Oxford.TLDR: In exploring the impact of communicating AI x-risk with different emotional appeals, my study comprising of 1,200 Americans revealed underlying factors that influence public perception on several aspects:For raising risk perceptions, fear and message credibility are keyTo create support for AI regulation, beyond inducing fear, conveying the effectiveness of potential regulation seems to be even more importantIn gathering support for a pause in AI development, fear is a major driverTo prompt engagement with the topic (reading up on the risks, talking about them), strong emotions - both hope and fear related - are driversAI x-risk introSince the release of ChatGPT, many scientists, software engineers and even leaders of AI companies themselves have increasingly spoken up about the risks of emerging AI technologies. Some voices focus on immediate dangers such as the spread of fake news images and videos, copyright issues and AI surveillance. Others emphasize that besides immediate harm, as AI develops further, it could cause global-scale disasters, even potentially wipe out humanity.How would that happen? There are roughly two routes. First, there could be malicious actors such as authoritarian governments using AI e.g. for lethal autonomous weapons or to engineer new pandemics. Second, if AI gets more intelligent some fear it could get out of control and basically eradicate humans by accident. This sounds crazy but the people creating AI are saying the technology is inherently unpredictable and such an insane disaster could well happen in the future.AI x-risk communicationThere are now many media articles and videos out there talking about the risks of AI. Some announce the end of the world, some say the risks are all overplayed, and some argue for stronger safety measures. So far, there is almost no research on the effectiveness of these articles in changing public opinion, and on the difference between various emotional appeals.Study set upThe core of the study was a survey experiment with 1200 Americans. The participants were randomly allocated to four groups: one control group and three experimental groups each getting one of three articles on AI risk. All three versions explain that AI seems to be advancing rapidly and that future systems may become so powerful that they could lead to catastrophic outcomes when used by bad actors (misuse) or when getting out of control (misalignment).The fear version focuses solely on the risks; the hope version takes a more optimistic view, highlighting promising risk mitigation efforts and the mixed version is a combination of the two transitioning from fear to hope. After reading the article I asked participants to indicate emotions they felt when reading the article (as a manipulation check and to separate the emotional appeal from other differences in the articles) and to state their views related to various AI risk topics. The full survey including the articles and the questions can be found in the dissertation on page 62 and following (link at the bottom of page).FindingsOverview of results1. Risk perceptionTo measure risk perception, I asked participants to indicate their assessment of the risk level of AI risk (both existential risk and large-scale risk) on a scale from 1, extremely low, to 7, extremely high with a midpoint at 4, neither low nor high. In addition, I asked participants for their estimations on the likelihood of AI risk (existential risk and large-scale risk, both within 5 years and 10 years, modelled after the rethink prio...

]]>
Johanna Roniger https://forum.effectivealtruism.org/posts/qcGuAyHid3eEtqxdm/is-fear-productive-when-communicating-ai-x-risk-study Tue, 23 Jan 2024 23:00:22 +0000 EA - Is fear productive when communicating AI x-risk? [Study results] by Johanna Roniger Johanna Roniger https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:43 no full 1
AS5Boa4q3ia4hXRBE EA - [Linkpost] BBC - How much does having a baby contribute to climate change? by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] BBC - How much does having a baby contribute to climate change?, published by jackva on January 23, 2024 on The Effective Altruism Forum.I recently had the opportunity to talk about the climate effects of having children on the BBC's What In the World podcast in an episode titled "How much does having a baby contribute to climate change?" (link, X/Twitter).The episode is very short (~15min) and conversational and covers the debate from several angles and with multiple voices.I try to make the argument, building on prior work with John Halstead, that (i) extrapolating from current emissions massively overestimates expected emissions of kids born today ("a kid born today will never drive a petrol car") and that, in addition to that, (ii) credible jurisdiction-level policies such as the UK's net-zero targets should lead to a situation where additional kids in those jurisdictions have (close to) zero counterfactual impact. (iii) Instead of making our decision about having children about climate change, our primary responsibility as individuals should lie in holding our governments accountable that targets are met and ambitious policies maintained / passed.I actually found it somewhat shocking how normalized / unquestioned anti-natalist assumptions are even in 2024. I am the only voice in the episode questioning the idea that climate change should not be a reason to not have children. So I hope that's a useful intervention and reference to point to.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
jackva https://forum.effectivealtruism.org/posts/AS5Boa4q3ia4hXRBE/linkpost-bbc-how-much-does-having-a-baby-contribute-to Tue, 23 Jan 2024 10:53:22 +0000 EA - [Linkpost] BBC - How much does having a baby contribute to climate change? by jackva jackva https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:33 no full 2
C23sNoAmA8xjJejZz EA - AMA: Emma Slawinski, the RSPCA's Director of Policy, Prevention and Campaigns. by tobytrem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Emma Slawinski, the RSPCA's Director of Policy, Prevention and Campaigns., published by tobytrem on January 24, 2024 on The Effective Altruism Forum.I'll be interviewing Emma Slawinski for anaudio AMA on the 1st of February. Ask your questions here, and we will cover them in the interview! The interview will be published as a podcast and transcript."Factory-farmed chickens live absolutely horrible lives; their suffering is the single biggest animal welfare issue facing the country at present [my emphasis]" ~ Emma SlawinskiEmma Slawinski is the Director of Policy, Prevention and Campaigns for theRSPCA (the Royal Society for the Prevention of Cruelty to Animals). She has over a decade of experience in animal welfare campaigning. Previously, she worked for organisations such asCompassion in World Farming, where she worked on theEnd The Cage Age campaign, andWorld Animal Protection.At the RSPCA, she has:Worked on the#CutTheChase campaign to end greyhound racing in the UK, and theKept Animals Bill Campaign.Made speeches in front of parliament in favour of banning live export of livestock.Spoken against no-stun slaughter on GB news.Been quoted in BBC articles on issues such ashorse racing reform andbadger culling.Promoted the annualAnimal Kindness Index, which shows how discordant the British public's views on animal welfare are.What is the RSPCA?The RSPCA is a charity with along history. It was the first charity in the world to be primarily focused on preventing animal suffering. In 2021, it received £151 million in funding, making it one of the largest charities in the UK.The RSPCA'scampaigns cover everything frombanning disposable vapes andchanging firework laws, toending cages for farm animals.I was especially interested in doing an AMA with someone from the RSPCA because ofthis article, which focused on the plight of chickens in the UK. In Emma's words:"We slaughter about a billion chickens in the UK every year - an extraordinary number. It is very difficult to envisage the scale of that."Yet we never see these creatures, despite their vast numbers, because they are locked into incredibly cramped spaces. They are also genetically selected to grow incredibly quickly. We get through them at an extraordinary rate because they are bred to produce the maximum amount of meat in the fastest possible time."Factory-farmed chickens live absolutely horrible lives; their suffering is the single biggest animal welfare issue facing the country at present [my emphasis]"Here are some themes that I will be focusing on in my questions:The RSPCA's most effective campaigns, and how they measure the impact they have through public messaging.How the RSPCA prioritises amongst its various causes.What challenges it faces because of its size.Whether it has ways to influence policy that smaller and newer charities do not.You can use these as a jumping off point, but don't feel constrained by them. Ask anything!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
tobytrem https://forum.effectivealtruism.org/posts/C23sNoAmA8xjJejZz/ama-emma-slawinski-the-rspca-s-director-of-policy-prevention Wed, 24 Jan 2024 14:22:09 +0000 EA - AMA: Emma Slawinski, the RSPCA's Director of Policy, Prevention and Campaigns. by tobytrem tobytrem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:14 no full 2
CuPnmeS4v5sFE6nQj EA - Impact Assessment of AI Safety Camp (Arb Research) by Sam Holton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Impact Assessment of AI Safety Camp (Arb Research), published by Sam Holton on January 24, 2024 on The Effective Altruism Forum.Authors: Sam Holton, Misha YagudinData collection: David Mathers, Patricia LimNote: Arb Research was commissioned to produce this impact assessment by the AISC organizers.SummaryAI Safety Camp (AISC) connects people interested in AI safety (AIS) to a research mentor, forming project teams that last for a few weeks and go on to write up their findings. To assess the impact of AISC, we first consider how the organization might increase the productivity of the Safety field as a whole. Given its short duration and focus on introducing new people to AIS, we conclude that AISC's largest contribution is in producing new AIS researchers that otherwise wouldn't have joined the field.We gather survey data and track participants in order to estimate how many researchers AISC has produced, finding that 5-10% of participants plausibly become AIS researchers (see "Typical AIS researchers produced by AISC" for examples) that otherwise would not have joined the field. AISC spends roughly $12-30K per researcher. We could not find estimates for counterfactual researcher production in similar programs such as (SERI) MATS.However, we used the LTFF grants database to estimate that the cost of researcher upskilling in AI safety for 1 year is $53K. Even assuming all researchers with this amount of training become safety researchers that wouldn't otherwise have joined the field, AISC still recruits new researchers at a similar or lower cost (though note that training programs at different stages of a career pipeline are compliments).We then consider the relevant counterfactuals for a nonprofit organization interested in supporting AIS researchers and tentatively conclude that funding the creation of new researchers in this way is slightly more impactful than funding a typical AIS project. However, this conclusion is highly dependent on one's particular views about AI safety and could also change based on an assessment of the quality of researchers produced by AISC.We also review what other impacts AISC has in terms of producing publications and helping participants get a position in AIS organizations.ApproachTo assess impact, we focus on AISC's rate of net-new researcher production. We believe this is the largest contribution of the camp given their focus on introducing researchers to the field and given the short duration of projects. In the appendix, we justify this and explain why new researcher production is one of the most important contributions to the productivity of a research field. For completeness, we also attempt to quantify other impacts such as:Direct research outputs from AISC and follow-on research.Network effects leading to further AIS and non-AIS research.AISC leading to future positions.AISC plausibly has several positive impacts that we were unable to measure, such as increasing researcher effort, increasing research productivity, and improving resource allocation. We are also unable to measure the quality of AIS research due to the difficulty of assessing such work.Data collectedWe used 2 sources of data for this assessment:Survey. We surveyed AISC participants from all camps, receiving 24 responses (~10% of all participants). Questions aimed to determine the participants' AIS involvement before and after camp as well as identify areas for improvement. To ensure honest answers, we promised respondents that anecdotes would not be shared without their direct permission. Instead, we will summarize common lessons from these responses where possible.Participant tracking. To counter response biases in survey data, we independently researched the career path of 101 participants from AISC 4-6, looking at involvement in AI safety rese...

]]>
Sam Holton https://forum.effectivealtruism.org/posts/CuPnmeS4v5sFE6nQj/impact-assessment-of-ai-safety-camp-arb-research Wed, 24 Jan 2024 14:21:08 +0000 EA - Impact Assessment of AI Safety Camp (Arb Research) by Sam Holton Sam Holton https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 22:38 no full 3
pwxwDidqbnwqWhvA4 EA - International tax policy as a potential cause area by Tax Geek Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: International tax policy as a potential cause area, published by Tax Geek on January 24, 2024 on The Effective Altruism Forum.This is more of an exploratory post where I try to share some of my thoughts and experience working in international tax.Thanks in particular to David Nash for his encouragement and help in reviewing my drafts.SummaryInternational tax rules govern how taxing rights are allocated between countries.International tax policy is likely to be animpactful cause area:Not only is there a significant amount of tax revenue at stake, there is a broader indirect impact as international tax rules can constrain domestic tax policies.International tax rules tend to be relatively sticky, persisting for decades.In recent years, as international tax has gotten increasingly political, there may also be broader foreign policy implications.Yet international tax seems to be relatively neglected.Domestic tax issues tend to be more politicised, possibly because they affect voters more directly.International tax can be highly technical and rather opaque.Tractability depends on how you identify the "problem":In my view, a problem is that the development of international tax policy is dominated by relatively wealthy countries (particularly the US), who focus too heavily on their own national interest.While I doubt this broad problem can ever be fully "solved", I believe individuals can still play a significant role in mitigating it.ProblemInternational tax policy plays a key role in determining how much companies are taxed and where. This in turn affects the level of tax revenue different countries get.The development of international tax policy is dominated by the Organisation for Economic Co-operation and Development (OECD), which is made up of relatively wealthy countries. The US also plays a key role in international tax policy.[1] I believe that many people currently working in international tax policy focus too heavily on their national interest over the global interest.The problems here are not ones I think we can hope to fully "solve", as the problems stem from the underlying power dynamics between developed and developing countries and the natural incentives for government officials to prioritize their own country.However, international tax policy could still be a worthwhile area to consider working in, because it seems to be a relatively neglected space where individuals can have a surprisingly large impact in mitigating these problems.BackgroundWhat is international tax policy?In broad terms, international tax policy governs how taxing rights are allocated between countries as well as matters of tax administration such as information sharing and dispute resolution.Countries enter into bilateral tax treaties that aim to prevent double taxation (i.e. when two or more countries try to tax the same income) without creating opportunities for tax avoidance or evasion.In recent years, there has also been a focus on multilateral tax projects, which may or may not result in a formal tax treaty.Bilateral DTAsA bilateral double tax agreement (DTA) is a tax treaty entered into by two countries.When a person/entity resident in one country earns income from another country, both countries may attempt to tax the same income. Such double taxation would inhibit cross-border investment and trade, so countries enter into bilateral DTAs to prevent this. Depending on the circumstances, DTAs will allocate taxing rights over the income to either:the residence country - where the person/entity earning the income lives or is managed; orthe source country - where the income is earned.In very broad terms, in a treaty negotiation, developed countries generally want to increase the residence country's taxing rights, because they tend to have wealthy resident...

]]>
Tax Geek https://forum.effectivealtruism.org/posts/pwxwDidqbnwqWhvA4/international-tax-policy-as-a-potential-cause-area Wed, 24 Jan 2024 09:57:06 +0000 EA - International tax policy as a potential cause area by Tax Geek Tax Geek https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:09 no full 5
EyedeQmoXGWbKRoxh EA - 5 possibly impactful career paths for researchers by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 5 possibly impactful career paths for researchers, published by CE on January 24, 2024 on The Effective Altruism Forum.Charity Entrepreneurship is running a second edition of ourResearch Training Program (RTP) - a programdesigned to equip participants with the tools and skills needed to identify, compare, and recommend the most effective charities and interventions.In this post, we discuss possible long-term career paths for researchers and a gap assessment of what skills people might want to prioritize to pursue those. This discussion may be helpful for people considering the RTP program or those more generally wanting to find other ways of building career capital in research.These five roles are based on what we think are potential placements or jobs for our first cohort in the RTP. We have made these all a bit more clichéd and separate than they are - in practice, there is a lot of overlap and nuance among them, and a successful research career often involves aspects from all these role types.These paths can all be exciting for someone who is the right fit. Each of them will inevitably have a high variance in impact, with some low- and some high-impact roles in the mix. Most importantly, we think people tend to forget the vast range of career paths open to someone with strong research skills. In the RTP, we aim to coach participants on what we think would be most cross-applicable between these areas, with a mind to make these positions as impactful as possible.Beyond these specific roles, it is worth noting that being a proficient researcher can be highly applicable to many other positions that require lots of decision-making, such as leadership and executive roles in high-performing organizations. In this sense, good research skills are all about helping you ask the right questions and find the right answers.Role: Monitoring and Evaluation (M&E) for a High-Impact OrganizationExample:Research and Evaluation Lead at One Acre Fund,Senior Program Officer/M&E at Gates Foundation,MEAL Coordinator at Vida Plena)Mechanism for Impact: This role has an impact by ensuring an organization achieves its goals. Great M&E can often be the difference between highly impactful charities (e.g., GiveWell recommended) and those that are not. M&E helps demonstrate impact, identify pain points, and supervise progress toward stated goals. When done well, it can increase the odds of a charity improving to reach the top of its field.Our sense is the impact of an M&E role correlates quite strongly with the charity's quality and its interest in M&E. A more junior role in an impactful charity may lead to more impact than a senior role in a much less impactful one. Charities also have very different attitudes toward M&E, where working for an organization that values M&E facilitates the impact of your role, and working for one that doesn't can amount to paper pushing. M&E work is sometimes only used as signaling for fundraising, not to determine if the organization is having an impact or identify potential improvements.Persona: The type of person who is good at this sort of role is a bit non-conformist and fairly detail-oriented. Enjoying finding flaws or possible areas for improvement ends up being a pretty helpful disposition here. Relative to other research roles, this role is a lot more applied, so it could be a good fit for someone who wants to spend time in the field and create evidence rather than relying on secondary sources. M&E can be a good fit for someone early in their career who wants to leave options open for more direct charity work and theory-based research.Top skills to build: Although some cause areas (such as global poverty) have a decent pipeline for M&E training (such as theMIT MicroMasters or specific university courses), other cause areas have virtually ...

]]>
CE https://forum.effectivealtruism.org/posts/EyedeQmoXGWbKRoxh/5-possibly-impactful-career-paths-for-researchers Wed, 24 Jan 2024 02:04:15 +0000 EA - 5 possibly impactful career paths for researchers by CE CE https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:42 no full 8
FWbaqM5PaFfrfAxaS EA - GWWC Pledge featured in new book from Head of TED, Chris Anderson by Giving What We Can Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GWWC Pledge featured in new book from Head of TED, Chris Anderson, published by Giving What We Can on January 25, 2024 on The Effective Altruism Forum.Chris Anderson, Head of TED, has just released a new book called Infectious Generosity, which has a whole chapter that encourages readers to take the Giving What We Can Pledge! He has also taken the Giving What We Can Pledge with the new wealth option to give the greater of 10% of income or 2.5% of wealth each year.This inspiring book is a guide to making Infectious Generosity become a global movement to build a hopeful future. Chris offers a playbook for how to embark on our own generous acts and to use the Internet to give them self-replicating, potentially world-changing, impact.Here's a quick excerpt from the book:"The more I've thought about generosity, the impact it can have, and the joy it can bring, the more determined I've become that it be an absolute core part of my identity.Jacqueline's work as a pioneering social entrepreneur has definitely inspired me, and together we're now ready to sign that combination pledge, effectively committing to giving the higher of 10% of our income or 2.5% of our net worth in any given year for the rest of our lives."We are really excited about this opportunity to share the Giving What We Can Pledge with many more people and hope that this book will be successful so we can make our message even more infectious!If you'd like to purchase a copy for yourself or a friend, you can find relevant links here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Giving What We Can https://forum.effectivealtruism.org/posts/FWbaqM5PaFfrfAxaS/gwwc-pledge-featured-in-new-book-from-head-of-ted-chris Thu, 25 Jan 2024 23:19:13 +0000 EA - GWWC Pledge featured in new book from Head of TED, Chris Anderson by Giving What We Can Giving What We Can https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:33 no full 1
TwpoedzMpmy7k7NKH EA - Can a war cause human extinction? Once again, not on priors by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can a war cause human extinction? Once again, not on priors, published by Vasco Grilo on January 25, 2024 on The Effective Altruism Forum.SummaryStephen Clare's classic EA Forum postHow likely is World War III? concludes "the chance of an extinction-level war [this century] is about 1%". Icommented thatpower law extrapolation often results in greatly overestimating tail risk, and that fitting a power law to all the data points instead of the ones in the right tail usually leads to higher risk too.To investigate the above, I looked into historical annual war deaths along the lines of what I did inCan a terrorist attack cause human extinction? Not on priors, where I concluded the probability of a terrorist attack causing human extinction is astronomically low.Historical annual war deaths of combatants suggest the annual probability of a war causinghuman extinction is astronomically low once again. 6.36*10^-14 according to my preferred estimate, although it is notresilient, and can easily be wrong by many orders of magnitude (OOMs).One may well update to a much higher extinction risk after accounting for inside view factors (e.g.weapon technology), and indirect effects of war, like increasing the likelihood ofcivilisational collapse. However, extraordinary evidence would be required to move up sufficiently many orders of magnitude for anAI,bio ornuclear war to have a decent chance of causing human extinction.In the realm of the more anthropogenicAI,bio andnuclear risk, I personally think underweighting theoutside view is a major reason leading to overly high risk. I encourage readers to check David Thorstad's seriesexaggerating the risks, which includes subseries onclimate,AI andbio risk.IntroductionThe 166thEA Forum Digest had Stephen Clare'sHow likely is World War III? as the classic EA Forum post (as a side note, the rubric is great!). It presents the following conclusions:First, I estimate that the chance of direct Great Power conflict this century is around 45%.Second, I think the chance of a huge war as bad or worse than WWII is on the order of 10%.Third, I think the chance of an extinction-level war is about 1%. This is despite the fact that I put more credence in the hypothesis that war has become less likely in the post-WWII period than I do in the hypothesis that the risk of war has not changed.I view the last of these as acrucial consideration forcause prioritisation, in the sense it directly informs the potentialscale of the benefits of mitigating the risk fromgreat power conflict. It results from assuming each war has a 0.06 % (= 2*3*10^-4) chance of causinghuman extinction. This is explained elsewhere in the post, and in more detail in the curated oneHow bad could a war get? by Stephen and Rani Martin. In essence, it is 2 times a 0.03 % chance of war deaths of combatants being at least 8 billion:"In Only the Dead, political scientist Bear Braumoeller [I recommendhis appearance on The 80,000 Hours Podcast!] uses his estimated parameters to infer the probability of enormous wars. His [power law] distribution gives a 1 in 200 chance of a given war escalating to be [at least] twice as bad as World War II and a 3 in 10,000 chance of it causing [at least] 8 billion deaths [of combatants] (i.e. human extinction)".2 times because the above 0.03 % "may underestimate the chance of an extinction war for at least two reasons. First, world population has been growing over time. If we instead considered the proportion of global population killed per war instead, extreme outcomes may seem more likely. Second, he does not consider civilian deaths. Historically, the ratio of civilian-deaths-to-battle deaths in war has been about 1-to-1 (though there's a lot of variation across wars). So fewer than 8 billion battle deaths would be...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/TwpoedzMpmy7k7NKH/can-a-war-cause-human-extinction-once-again-not-on-priors Thu, 25 Jan 2024 14:08:52 +0000 EA - Can a war cause human extinction? Once again, not on priors by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 31:57 no full 3
k9F2bcePw26Khr9wh EA - Legal Impact for Chickens seeks an Operations Specialist. by alene Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Impact for Chickens seeks an Operations Specialist., published by alene on January 25, 2024 on The Effective Altruism Forum.Dear EA Forum Readers,Legal Impact for Chickens is looking for a passionate and hard-working Operations Specialist to join us as we continue to grow our nonprofit and fight for animals. This is a new position, and you will have the ability to influence our operations and play an important role in our work.The responsibilities of this position are varied, covering operational, administrative, and paralegal work, and we will consider a variety of candidates and experiences. Therefore, the final job title may differ depending on the final candidate.Want to join us?About our work and why you should join usLegal Impact for Chickens (LIC) is a 501(c)(3) litigation nonprofit. We work to protect farmed animals.You may have seen our Costco shareholder derivative suit in The Washington Post, Fox Business, or CNN Business - or even on TikTok.You also may have heard of LIC from our recent Animal Charity Evaluators (ACE) recommendation.Now, we're looking for our next hire: an entrepreneurial Operations Specialist to support us in our fight for animals!Legal Impact for Chickens is currently a team of three litigators. You will join LIC as our first non-litigator employee and support the entire team.About youYou might be a great fit for this position if you have many of the following qualities:• Passion for helping farmed animals• Extremely organized, thoughtful, and dependable• Strong interpersonal skills• Interest in the law and belief that litigation can help animals• Zealous, creative, and enthusiastic• Excited to build this startup nonprofit!• Willingness to help with all types of nonprofit startup work, from litigation support to HR to finance• Strong work ethic and initiative• Love of learning• Paralegal experience or certificate preferred, but not required• Experience with various aspects of operations (such as bookkeeping and IT) preferred, but not required• Experience growing a new team preferred, but not required• Kind to our fellow humans, and excited about creating a welcoming, inclusive teamAbout the roleYou will be an integral part of LIC. You'll help shape our organization's future.Your role will be a combination of (1) assisting the lawyers with litigation tasks, and (2) helping with everything else we need to do, to build and run a growing nonprofit organization!Since this is such a small organization, you'll wear many hats: Sometimes you'll act as a paralegal, formatting a table of authorities, performing legal research, or preparing a binder for a hearing. Sometimes you'll act as an HR manager, making sure we have the right trainings and benefits available. Sometimes you'll act as an administrative assistant, tracking expenditures and donations, booking travel arrangements, or helping LIC's president with email management.Sometimes you'll act as LIC's front line for communicating with the public, monitoring info@legalimpactforchickens.org emails, thanking donors, or making calls to customer service representatives on LIC's behalf. Sometimes you'll pitch in on LIC's communications, social media, and public education efforts.This job offers tremendous opportunity for professional growth and the ability to create valuable impact for animals. The hope is for you to become an indispensable, long-time member of our new team.Commitment: Full timeLocation and travel: This is a remote, U.S.-based position. You must be available to travel for work as needed, since we will litigate all over the country.Reports to: Alene Anello, LIC's presidentSalary: $50,000-$70,000 depending on experienceBenefits: Health insurance (with ability to buy dental), 401(k), life insurance, flexible schedule, unlimited PTO (plus man...

]]>
alene https://forum.effectivealtruism.org/posts/k9F2bcePw26Khr9wh/legal-impact-for-chickens-seeks-an-operations-specialist Thu, 25 Jan 2024 10:20:45 +0000 EA - Legal Impact for Chickens seeks an Operations Specialist. by alene alene https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:49 no full 4
GZy8GQyAp2TojwhEs EA - Recruiting for Survey on the Psychology of EA by Kyle Fiore Law Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Recruiting for Survey on the Psychology of EA, published by Kyle Fiore Law on January 26, 2024 on The Effective Altruism Forum.I am actively recruiting effective altruists to participate in an online survey mapping their psychological profiles. The survey should take no more than 90 minutes to complete and anyone who identifies as being in alignment with EA can participate. If you have the time, my team and I would greatly appreciate your participation! The survey pays $15 and the link can be found below.Survey Link: https://albany.az1.qualtrics.com/jfe/form/SV_8v31IDPQNq4sKBUThe Research Team:Kyle Fiore Law (Project Leader; PhD Candidate in Social Psychology; University at Albany, SUNY): https://www.kyleflaw.com/Brendan O'Connor (Associate Professor of Psychology; University at Albany, SUNY):Abigail Marsh (Professor of Psychology and Interdisciplinary Neuroscience; Georgetown University)Liane Young (Professor of Psychology and Neuroscience; Boston College)Stylianos Syropoulos (Postdoctoral Researcher; Boston College)Paige Amormino (Graduate Student; Georgetown University)Gordon Kraft-Todd (Postdoctoral Researcher; Boston College)Warmly,Kyle :)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Kyle Fiore Law https://forum.effectivealtruism.org/posts/GZy8GQyAp2TojwhEs/recruiting-for-survey-on-the-psychology-of-ea Fri, 26 Jan 2024 22:25:06 +0000 EA - Recruiting for Survey on the Psychology of EA by Kyle Fiore Law Kyle Fiore Law https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:38 no full 1
nRWbwvABqykcMHyx5 EA - Funding circle aimed at slowing down AI - looking for participants by Greg Colbourn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding circle aimed at slowing down AI - looking for participants, published by Greg Colbourn on January 26, 2024 on The Effective Altruism Forum.Are you an earn-to-giver or (aspiring) philanthropist who hasshort AGI timelines and/or high p(doom|AGI)? Do you want to discuss donation opportunities with others who share your goal of slowing down / pausing / stopping AI development[1]? If so, I want to hear from you!For some context, I've been extremely concerned about short-term AI x-risk since March 2023 (post-GPT-4), and have, since then, thought thatmore AI Safety research will not be enough to save us (or AI Governance that isn'tfocused[2] on slowing down AI or a global moratorium on further capabilities advances). Thus I think that on the margin far more resources need to be going into slowing down AI (there are already many dedicated funds for the wider space of AI Safety).I posted this to an EA investing group in late April:And thisAGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now - to the EA Forum in early May. My p(doom|AGI) is ~90% as things stand (Doom is default outcome of AGI). But my p(doom) overall is ~50% by 2030, because I think there's a decent chance we can actually get a Stop[3]. My timelines are ~0-5 years:I have donated>$150k[4] to people and projects focused on slowing down AI since (mostly as kind of seed funding - to individuals, and projects so new they don't have official orgs yet[5]), but I want to do a lot more. Having people with me would be great for multiplying impact and also for my motivation!I'm thinking 4-6 people, each committing ~$100k(+) over 2024, would be good. The idea would be to discuss donation opportunities in the "slowing down AI" space during a monthly call (e.g. Google Meet), and have an informal text chat for the group (e.g. Whatsapp or Messenger). Fostering a sense of unity of purpose[6], but nothing too demanding or official. Active, but low friction and low total time commitment. Donations would be made independently rather than from a pooled fund, but we can have some coordination to get "win-wins" based on any shared preferences of what to fund.Meta-charity Funders is a useful model.We could maybe do something like anS-process for coordination, like what Jaan Tallinn'sSurvival and Flourishing Fund does[7]; it helps avoid "donor chicken" situations. Or we could do something simpler like rank the value of donating successive marginal $10k amounts to each project. Or just stick to more qualitative discussion. This is all still to be determined by the group.Please join me if you can[8], or share with others you think may be interested. Feel free to DM me here or onX, book acall with me, or fill inthis form.^If you oppose AI for other reasons (e.g. ethics, job loss, copyright), as long as you are looking to fund strategies that aim to show results in the short term (say within a year), then I'd be interested in you joining the circle.^I think Jaan Tallinn's new top priorities are great!^After 2030, if we have a Stop and are still here, we can keep kicking the can down the road..^I've made a few more donations since that tweet.^Public examples includeHolly Elmore,giving away copies ofUncontrollable, andAI-Plans.com.^Right now I feel quite isolated making donations in this space.^It's a little complicated, but here's a short description: "Everyone individually decides how much value each project creates at various funding levels. We find an allocation of funds that's fair and maximises the funders' expressed preferences (using a number of somewhat dubious but probably not too terrible assumptions). Funders can adjust how much money they want to distribute after seeing everyone's evaluations, including fully pulling out." (paraphr...

]]>
Greg_Colbourn https://forum.effectivealtruism.org/posts/nRWbwvABqykcMHyx5/funding-circle-aimed-at-slowing-down-ai-looking-for $150k[4] to people and projects focused on slowing down AI since (mostly as kind of seed funding - to individuals, and projects so new they don't have official orgs yet[5]), but I want to do a lot more. Having people with me would be great for multiplying impact and also for my motivation!I'm thinking 4-6 people, each committing ~$100k(+) over 2024, would be good. The idea would be to discuss donation opportunities in the "slowing down AI" space during a monthly call (e.g. Google Meet), and have an informal text chat for the group (e.g. Whatsapp or Messenger). Fostering a sense of unity of purpose[6], but nothing too demanding or official. Active, but low friction and low total time commitment. Donations would be made independently rather than from a pooled fund, but we can have some coordination to get "win-wins" based on any shared preferences of what to fund.Meta-charity Funders is a useful model.We could maybe do something like anS-process for coordination, like what Jaan Tallinn'sSurvival and Flourishing Fund does[7]; it helps avoid "donor chicken" situations. Or we could do something simpler like rank the value of donating successive marginal $10k amounts to each project. Or just stick to more qualitative discussion. This is all still to be determined by the group.Please join me if you can[8], or share with others you think may be interested. Feel free to DM me here or onX, book acall with me, or fill inthis form.^If you oppose AI for other reasons (e.g. ethics, job loss, copyright), as long as you are looking to fund strategies that aim to show results in the short term (say within a year), then I'd be interested in you joining the circle.^I think Jaan Tallinn's new top priorities are great!^After 2030, if we have a Stop and are still here, we can keep kicking the can down the road..^I've made a few more donations since that tweet.^Public examples includeHolly Elmore,giving away copies ofUncontrollable, andAI-Plans.com.^Right now I feel quite isolated making donations in this space.^It's a little complicated, but here's a short description: "Everyone individually decides how much value each project creates at various funding levels. We find an allocation of funds that's fair and maximises the funders' expressed preferences (using a number of somewhat dubious but probably not too terrible assumptions). Funders can adjust how much money they want to distribute after seeing everyone's evaluations, including fully pulling out." (paraphr...]]> Fri, 26 Jan 2024 14:48:06 +0000 EA - Funding circle aimed at slowing down AI - looking for participants by Greg Colbourn $150k[4] to people and projects focused on slowing down AI since (mostly as kind of seed funding - to individuals, and projects so new they don't have official orgs yet[5]), but I want to do a lot more. Having people with me would be great for multiplying impact and also for my motivation!I'm thinking 4-6 people, each committing ~$100k(+) over 2024, would be good. The idea would be to discuss donation opportunities in the "slowing down AI" space during a monthly call (e.g. Google Meet), and have an informal text chat for the group (e.g. Whatsapp or Messenger). Fostering a sense of unity of purpose[6], but nothing too demanding or official. Active, but low friction and low total time commitment. Donations would be made independently rather than from a pooled fund, but we can have some coordination to get "win-wins" based on any shared preferences of what to fund.Meta-charity Funders is a useful model.We could maybe do something like anS-process for coordination, like what Jaan Tallinn'sSurvival and Flourishing Fund does[7]; it helps avoid "donor chicken" situations. Or we could do something simpler like rank the value of donating successive marginal $10k amounts to each project. Or just stick to more qualitative discussion. This is all still to be determined by the group.Please join me if you can[8], or share with others you think may be interested. Feel free to DM me here or onX, book acall with me, or fill inthis form.^If you oppose AI for other reasons (e.g. ethics, job loss, copyright), as long as you are looking to fund strategies that aim to show results in the short term (say within a year), then I'd be interested in you joining the circle.^I think Jaan Tallinn's new top priorities are great!^After 2030, if we have a Stop and are still here, we can keep kicking the can down the road..^I've made a few more donations since that tweet.^Public examples includeHolly Elmore,giving away copies ofUncontrollable, andAI-Plans.com.^Right now I feel quite isolated making donations in this space.^It's a little complicated, but here's a short description: "Everyone individually decides how much value each project creates at various funding levels. We find an allocation of funds that's fair and maximises the funders' expressed preferences (using a number of somewhat dubious but probably not too terrible assumptions). Funders can adjust how much money they want to distribute after seeing everyone's evaluations, including fully pulling out." (paraphr...]]> $150k[4] to people and projects focused on slowing down AI since (mostly as kind of seed funding - to individuals, and projects so new they don't have official orgs yet[5]), but I want to do a lot more. Having people with me would be great for multiplying impact and also for my motivation!I'm thinking 4-6 people, each committing ~$100k(+) over 2024, would be good. The idea would be to discuss donation opportunities in the "slowing down AI" space during a monthly call (e.g. Google Meet), and have an informal text chat for the group (e.g. Whatsapp or Messenger). Fostering a sense of unity of purpose[6], but nothing too demanding or official. Active, but low friction and low total time commitment. Donations would be made independently rather than from a pooled fund, but we can have some coordination to get "win-wins" based on any shared preferences of what to fund.Meta-charity Funders is a useful model.We could maybe do something like anS-process for coordination, like what Jaan Tallinn'sSurvival and Flourishing Fund does[7]; it helps avoid "donor chicken" situations. Or we could do something simpler like rank the value of donating successive marginal $10k amounts to each project. Or just stick to more qualitative discussion. This is all still to be determined by the group.Please join me if you can[8], or share with others you think may be interested. Feel free to DM me here or onX, book acall with me, or fill inthis form.^If you oppose AI for other reasons (e.g. ethics, job loss, copyright), as long as you are looking to fund strategies that aim to show results in the short term (say within a year), then I'd be interested in you joining the circle.^I think Jaan Tallinn's new top priorities are great!^After 2030, if we have a Stop and are still here, we can keep kicking the can down the road..^I've made a few more donations since that tweet.^Public examples includeHolly Elmore,giving away copies ofUncontrollable, andAI-Plans.com.^Right now I feel quite isolated making donations in this space.^It's a little complicated, but here's a short description: "Everyone individually decides how much value each project creates at various funding levels. We find an allocation of funds that's fair and maximises the funders' expressed preferences (using a number of somewhat dubious but probably not too terrible assumptions). Funders can adjust how much money they want to distribute after seeing everyone's evaluations, including fully pulling out." (paraphr...]]> Greg_Colbourn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:28 no full 4
AMrzfTKHwHheouEDf EA - Is it time for EVF to sell Wytham Abbey? by Arepo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is it time for EVF to sell Wytham Abbey?, published by Arepo on January 26, 2024 on The Effective Altruism Forum.The purchase of Wytham Abbey was originally justified as a long term investment, when people were still claiming EA wasn't cash constrained. One of the arguments advanced by defenders of the purchase was that the money wasn't lost, merely invested.Right now, EA is hella funding constrained...In the last few months, I've seen multiple posts of EA orgs claiming to be so funding constrained they're facing existential risk (disclaimer: I was a trustee of CEEALAR until last month). By the numbers given by those three orgs, 10% of the price of Wytham would be enough to fund them all for several years.This is to say nothing of all the organisations less urgently seeking funding, the fact that regional groups seem to be getting funding cuts of 40%, of numerous word-of-mouth accounts of people being turned down for funding or not trying to start an organisation because they don't expect to get it, and the fact that earlier this year the EA funds were reportedly suffering some kind of liquidity crisis (and are among those seeking funding now).Here's a breakdown of the small-medium size orgs who've written 'we are funding constrained' posts on the forum in the last 6 months or so, along with the length of time the sale of Wytham Abbey (at its original £15,000,000 purchase price) could fund them:OrganisationAnnual budget*Number of years Wytham Abbey's sale could fund orgSourceEA Poland£24-48,000312-614LinkCentre for Enabling EA Learning & Research£150-£300,00050-100Personal involvementAI Safety Camp£46-246,00048-326LinkConcentric Policies£16,500**900**LinkCenter on Long-Term Risk£600,00024LinkEA Germany£226,000***66LinkVida Plena's 'Group Interpersonal Therapy' project£159,00094LinkHappier Lives Institute£161,00093LinkRiesgos Catastróficos Globales£137,000109LinkGiving What We Can£1,650,0009LinkAll above organisations excluding GWWC (assuming max of budget ranges)£1,893,5007.9All above organisations including GWWC (assuming max of budget ranges)£3,543,5004.2* Converted from various currencies** Their stated 'funding gap' for the year. It sounds like that's their whole planned budget, but isn't clear*** They were seeking replacement funding for the 40% shortfall of this, which they've now received... but in five years, EA probably won't need the long-term savingsWytham Abbey was meant to be a multi-year investment. But though EA is currently funding constrained as heck, the consensus estimate seems to be that within half a decade the movement will have multiple new billionaire donors - so investing for a payoff more than a few years ahead rapidly loses value.Also (disclaimer again noted) CEEALAR has hosted retreats for Allfed and Orthogonal, and is due to host the forthcoming ML4Good bootcamp, so is already serving a similar function to Wytham Abbey - for a fraction of the operational cost, and less than 2% of the purchase/sale value.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Arepo https://forum.effectivealtruism.org/posts/AMrzfTKHwHheouEDf/is-it-time-for-evf-to-sell-wytham-abbey Fri, 26 Jan 2024 14:12:06 +0000 EA - Is it time for EVF to sell Wytham Abbey? by Arepo Arepo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:48 no full 5
ntmreot3PXzMwAWd7 EA - Cost-effectiveness analysis of ~1260 USD worth of social media ads for fellowship marketing by gergo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cost-effectiveness analysis of ~1260 USD worth of social media ads for fellowship marketing, published by gergo on January 26, 2024 on The Effective Altruism Forum.TLDR.: I spent ~1260 USD on social media ads (Facebook/Instagram) over ~1,5 years.We got an additional 53-57 applicants this way, resulting in a cost-effectiveness of 22,1-23,8 USD per applicant.Disclaimer: I wanted to capture 80% of the value of what I have to say without putting a lot of time into writing. This means that the post is somewhat rough around the edges, but hopefully, it will be still useful.I have been very excited about experimenting with paid ads to reach out to people who would otherwise not hear about our Into to EA and AGISF programs. This post is a summary of how I spent ~1260 USD on paid ads for social media, and a botec of what it bought us. Please take all results with a grain of salt, as the data is limited and one thing that works for us might not apply in other contexts. That being said, I'm quite confident that groups that want to increase the number of talented and diverse applicants to their programs should at least experiment with using paid ads.Cost-effectivenessI overall spent ~1260 USD, which resulted in 53 additional applicants to our fellowships over 1,5 years (23,8 USD per applicant). At least 4 of these 53 applicants also invited a friend along with them to our program, and if we count them as well, we got overall 57 additional applicants, which slightly improves the cost-effectiveness to 22,1 USD per applicant.You can take a look at the raw-ish datahere as well as see the breakdown by campaign and course type (EA vs. AIS).ImpactI think most of the expected impact of this will come from the ~30% of the overall applicants who engaged with the courses very seriously and took a lot of value from them.Unfortunately, I didn't do an amazing job keeping track of what % of the original 53-57 applicants never started the course. I would estimate this to be around 20-35%. Given that many people never start the course, I think it's really valuable to encourage people to sign up for your newsletter[1] as part of the application process - or if they have a good application, reach out to them in the next round.As for the rest of the applications, I think it's pretty similar to the usual fellowship experience, some people drop out after a couple of sessions, some finish it but end up disappearing after the course, etc.It goes without saying, but of course, this is not a judgment on people's intrinsic value!Additional points and caveats:Note that the courses I was advertising were 4 sessions only, and sometimes it was an intensive 1-week course - which I think partly improved the cost-effectiveness but can have other drawbacks, see the discussion here and here.With paid ads, we got to reach out to many talented international students from 3rd world countries, which is awesome - and otherwise, we would have likely not reached them.[2]If you have data on the cost-effectiveness of your social media ads (or want to start gathering such data) make sure to reach out!ConclusionBased on this, I will increase our marketing budget, as well as probably expand it to cities where we don't have an EA presence yet in the country. I think it's possible that once I have more data, these ads won't seem as good as now, but even if I'm currently overestimating the cost-effectiveness by 10x - they would still look pretty good.If you would like to use social media ads for your national/city/university group, feel free to shoot us an email at info[at]eahungary.com^see here or here if you don't have one but want to use ours as a template^In Hungary, there are a lot of international students from 3rd world countries who are here on a scholarship. This means that they have already had ...

]]>
gergo https://forum.effectivealtruism.org/posts/ntmreot3PXzMwAWd7/cost-effectiveness-analysis-of-1260-usd-worth-of-social Fri, 26 Jan 2024 12:42:42 +0000 EA - Cost-effectiveness analysis of ~1260 USD worth of social media ads for fellowship marketing by gergo gergo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:42 no full 6
4wsQdHhuGRYmKafph EA - Probably Good launched a newsletter with impact-centered career advice! by Probably Good Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably Good launched a newsletter with impact-centered career advice!, published by Probably Good on January 26, 2024 on The Effective Altruism Forum.Probably Good recently launched a newsletter to update our audience on new content and help people learn how to increase the impact of their careers at a meaningful but reasonable pace.You can subscribe here!For new subscribers, we'll kick off with an intro series overview of our approach to career planning. In future newsletters, we'll cover topics like:Core concepts & frameworks for thinking about careersNew content & services from PGPersonal perspectives on career choiceCause-specific overviews and resourcesPromising job & work opportunities within a range of cause areasIf you think you might have signed up for our mailing list at some point in the past, you should have received a confirmation email to let us know you'd like to receive the newsletter. If you didn't receive this email, you cansign up here.As always, we want to express our gratitude for all the encouragement we've received from this community over the past two years. We're excited for what's to come as we keep growing our site and we appreciate the ongoing support. If you have ideas for new career related content you'd like to see, feel free to reach out via ourcontact form or email us at team@probablygood.org.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Probably Good https://forum.effectivealtruism.org/posts/4wsQdHhuGRYmKafph/probably-good-launched-a-newsletter-with-impact-centered Fri, 26 Jan 2024 08:57:49 +0000 EA - Probably Good launched a newsletter with impact-centered career advice! by Probably Good Probably Good https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:28 no full 8
ZKD8xeqwX8R8qBTWH EA - Expression of Interest: Director of Operations at the Center on Long-term Risk by AmritSidhu-Brar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Expression of Interest: Director of Operations at the Center on Long-term Risk, published by AmritSidhu-Brar on January 26, 2024 on The Effective Altruism Forum.You can also see this posting on our website here.Intro / summaryTheCenter on Long-term Risk will soon be recruiting for a Director of Operations. The role is to lead our 2-person operations team, handling challenges across areas such as HR, finance, compliance and recruitment.Due to uncertainty over whether the role will be located in London (UK) or Berkeley (California), we are running a low-volume invite-only round now; to be followed by an open round in ~April after we gain certainty on the location, if we don't hire now.We'd also be open to mainly-remote candidates in some circumstances (see below).We're excited for expressions of interest from anyone with experience of challenging operations work, and at least a little experience of people management.If you're interested in the role, we'd be excited for you to submit our shortexpression of interest form by the end of Sunday 11th February. We'll then invite the most promising candidates to apply for the role now.LocationAs mentioned in ourannual review, CLR is evaluating whether to relocate from London to Berkeley, CA. We are currently in a trial period, and expect to make a decision about our long-term location in early April. For this reason, we unfortunately don't yet know whether this role will be located in London or Berkeley. We'd also be open to remote candidates in some circumstances (see below).We recognise that this uncertainty will make the role less appealing to candidates. Given this, we will be running a low-volume invite-only hiring round now, for candidates who are willing to spend the time on our hiring process even with our location uncertainty. If we don't successfully hire now, we will launch a full open round in April, after the location decision is made.Further details on location:We estimate a 70% chance that we will settle on moving to Berkeley, and a 30% chance of staying in London.If we settle on London (30% chance): we'd have a strong preference for candidates who can spend at least ~60% of their time at our London officeIf we settle on Berkeley (70% chance), we'd be open to remote candidates, with a substantial preference for candidates who can visit Berkeley regularly.To be clear, we're not looking for a commitment from candidates that you're happy to work in both locations. Just one is fine - it's just that we'll just need to take this into account when making the offer after the location decision is finalised.We expect to be able to sponsor visas for this role in most cases..The roleThe Director of Operations role is to lead our 2-person operations team, with responsibility across areas such as HR, finance, compliance, office management, grantmaking ops, and recruitment. You'd report to our Executive Director, and would take on the management of our existing Operations Associate.Specific responsibilities include:Working with our Executive Director and board of trustees to facilitate major organisational decisions, providing an operations perspective.Managing and mentoring our Operations Associate, as well as a number of support staff and contractors.Managing compliance and finances for our ecosystem of charitable entities (in collaboration with the Operations Associate, legal advisors and accountants).Overseeing and refining CLR's internal systems in areas such as HR, grantmaking ops, IT, recruitment and immigration.Leading on major operations projects as they arise, such as running events or hiring staff in new countries.About operations at CLR:CLR overall is currently a team of 13, and we plan to recruit 3-4 new staff in 2024. As well as supporting our research teams, the operations team facil...

]]>
AmritSidhu-Brar https://forum.effectivealtruism.org/posts/ZKD8xeqwX8R8qBTWH/expression-of-interest-director-of-operations-at-the-center Fri, 26 Jan 2024 06:27:03 +0000 EA - Expression of Interest: Director of Operations at the Center on Long-term Risk by AmritSidhu-Brar AmritSidhu-Brar https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:38 no full 9
9nksMfQujirhjf6NZ EA - What roles do GWWC and Founders Pledge play in growing the EA donor pool? by BrownHairedEevee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What roles do GWWC and Founders Pledge play in growing the EA donor pool?, published by BrownHairedEevee on January 29, 2024 on The Effective Altruism Forum.I'm curious about the roles of Giving What We Can and Founders Pledge in driving donations to EA causes.Both organizations play a similar role in persuading people to give money to highly effective causes, but it seems to me that GWWC focuses on getting a large number of average-income people to donate relatively small amounts of money, whereas Founders Pledge focuses on getting a smaller number of entrepreneurs to donate large amounts of money.I think that both types of movement growth are important in different ways. On the one hand, having even a small number of large donors means we have a lot of funding, which allows us to make a great impact. (Even with the collapse of FTX in 2022, there is still a chance that the EA movement could have more billionaire backers by 2027 than it does now.) On the other hand, a large number of donors means there are a large number of individuals engaging in the philosophy and practice of EA, which helps spread the ideas of EA and demonstrate its accessibility.What are your opinions on how GWWC and FP's roles in generating movement growth compare and contrast? Which kind of movement growth is more important for the EA movement right now?Appendix: Relevant statisticsGWWC and FP's membership numbers:GWWC has over 9,400 individuals with active pledges as of January 28, 2024[1]Founders Pledge has 1,767 members as of 2022, their latest impact report[2]Amounts pledged:FP: "$1.3 billion pledged to charity from 80 new members"[2]GWWC estimates that $83 million of lifetime value will be generated from the new pledges taken in 2020-2022 (a three year period), or an average of $27 million of lifetime value from new pledges per year.[3]Giving multiplier: GWWC estimates that it generates $30 for every $1 invested in its operations. I couldn't find an estimate of FP's giving multiplier effect, but I think it would be useful for comparison.^Our members - GWWC^2022 Impact Report - Founders Pledge^2020-2022 Impact evaluation - GWWCThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
BrownHairedEevee https://forum.effectivealtruism.org/posts/9nksMfQujirhjf6NZ/what-roles-do-gwwc-and-founders-pledge-play-in-growing-the Mon, 29 Jan 2024 12:43:28 +0000 EA - What roles do GWWC and Founders Pledge play in growing the EA donor pool? by BrownHairedEevee BrownHairedEevee https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:32 no full 1
4CBoJ5jgmGfdMFnAE EA - EV investigation into Owen and Community Health by EV US Board Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EV investigation into Owen and Community Health, published by EV US Board on January 29, 2024 on The Effective Altruism Forum.IntroductionLast year, Owen Cotton-Barrattresigned from EV UK's board of directors following reports of sexual misconduct. Prior to his resignation, accusations of misconduct from Owenhad been reported to Julia Wise at CEA's Community Health team, which is led by Nicole Ross.EV US and EV UK jointly commissioned an independent investigation led by the law firmHerbert Smith Freehills into Owen's conduct and whether the Community Health team had acted appropriately with the information they had been given. Following the investigation, the boards of EV US[1] and EV UK jointly deliberated over the findings and the appropriate response.Below, the EV boards report their determinations and actions. We considered saying nothing or sharing significantly less information but decided it was in the best interests of the community to have some information upon which to update on the behavior of Owen Cotton-Barratt and the Community Health team. Our desire for transparency was not particularly motivated by the magnitude of the findings, and was instead motivated by the relevancy of the information for informing community members' future interactions with Owen and / or Community Health, the public nature of Owen's resignation, and community norms towards transparency and accountability.Additionally, we felt that sharing as much information as we could was particularly important because ofthe recent news that EV's projects are spinning out, as the boards' decisions only have an effect for projects so long as they remain part of EV. Projects will eventually set their own policies and won't have access to all of the facts we do, so we wanted to provide some information to enable the broader EA ecosystem to make better-informed decisions.With that being said, we are constrained in how much detail we can share without risking the anonymity of the interviewees. The investigators noted that multiple interviewees made requests to protect their anonymity, and given their voluntary participation, we want to respect their wishes. We want people to continue to feel comfortable coming forward in investigations knowing that potentially identifying information will not be made public.This means that in some cases below we present claims and board actions without all of the underlying evidence or reasoning. We recognize that this post does not have the same level of reasoning transparency we would normally aim for and think readers should update less than they would if they had as much detail as we do, but we ultimately felt like this was a reasonable middle ground to strike to allow us to share as much information with the community as possible while protecting the anonymity of interviewees.appendix below.Determinations regarding Owen Cotton-BarrattThe boards unanimously agree on the following:On multiple occasions, Owen expressed sexual and / or romantic interest in women who were younger and less influential than he was. There were important power differentials between Owen and the women involved, sometimes formal and sometimes informal.Multiple women expressed being upset by Owen's advances. Both the frequency and the content of the advances contributed to the women's feelings.Julia Wise from CEA's Community Health Team gave Owen feedback that his behavior was inappropriate prior to some of the later instances of similar behavior.Owen was inconsistent at acknowledging potential conflicts of interest with persons whom he expressed sexual and / or romantic interest in. He recused himself in at least one professional context, but did not seem to consistently acknowledge other potential conflicts in other instances.In at least one case, Owen did not stop m...

]]>
EV US Board https://forum.effectivealtruism.org/posts/4CBoJ5jgmGfdMFnAE/ev-investigation-into-owen-and-community-health Mon, 29 Jan 2024 23:32:10 +0000 EA - EV investigation into Owen and Community Health by EV US Board EV US Board https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:48 no full 1
h9EqoakkATQGMxECH EA - Announcing Niel Bowerman as the next CEO of 80,000 Hours by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Niel Bowerman as the next CEO of 80,000 Hours, published by 80000 Hours on January 31, 2024 on The Effective Altruism Forum.We're excited to announce that the boards of Effective Ventures US and Effective Ventures UK have approved our selection committee's choice of Niel Bowerman as the new CEO of 80,000 Hours.I (Rob Wiblin) was joined on the selection committee by Will MacAskill, Hilary Greaves, Simran Dhaliwal, and Max Daniel.80,000 Hours is a project of EV US and EV UK, though under Niel's leadership, it expects to be spinning out and creating an independent legal structure, which will involve selecting a new board.We want to thank Brenton Mayer, who has served as 80,000 Hours interim CEO since late 2022, for his dedication and thoughtful management. Brenton expressed enthusiasm about the committee's choice, and he expects to take on the role of chief operations officer, where he will continue to work closely with Niel to keep 80,000 Hours running smoothly.By the end of its deliberations, the selection committee agreed that Niel was the best candidate to be 80,000 Hours' long-term CEO. We think Niel's drive and attitude will help him significantly improve the organisation and shift its strategy to keep up with events in the world. We were particularly impressed by his ability to use evidence to inform difficult strategic decisions and lay out a clear vision for the organisation.Niel started his career as a climate physicist and activist, and he went on to co-found and work at the Centre for Effective Altruism and served as assistant director of the Future of Humanity Institute before coming to 80,000 Hours.The selection committee believes that in the six years since he joined the organisation, Niel developed a deep understanding of its different programmes and the impact they have. He has a history of initiating valuable projects and delegating them to others, a style we think will be a big strength in a CEO.For example, in his role as director of special projects, Niel helped oversee the impressive growth of the 80,000 Hours job board team. It now features about 400 jobs a month aimed at helping people increase the impact of their careers and receives around 75,000 clicks a month; it's helped fill at least 200 roles that we're aware of.Niel has also made substantial contributions to the website, publishing an ahead-of-its-time article about working in US AI policy in January 2019, helping launch the new collection of articles on impactful skills, and authoring a recent newsletter on how to build better habits.In addition, Niel helped the 1on1 team nearly double the typical number of calls completed per year and aided in developing quantitative lead metrics to inform its decisions each week. And he's run organisation-wide projects, such as leading the two-year review for 2021 and 2022.Niel was very forthcoming and candid with the committee about his weaknesses. His focus on getting frank feedback and using it to drive a self-improvement cycle really impressed the selection committee.The committee considered three internal candidates for the CEO role, as well as dozens of other candidates suggested to us and dozens of others who applied for the position. In the end, we scored the top candidates on good judgement, inspiringness, social skills, leveraging people well, industriousness and drive, adaptability and resilience, commitment to the mission of 80,000 Hours, and deep understanding of the organisation.Among other things, these scores were based on: input from 80,000 Hours directors on the organisation's general situation, staff surveys, 'take-home' work tests, and a self-assessment of their biggest successes and mistakes. We also consulted, among others, Michelle Hutchinson, the current director of the 1on1 programme, as well as former 8...

]]>
80000_Hours https://forum.effectivealtruism.org/posts/h9EqoakkATQGMxECH/announcing-niel-bowerman-as-the-next-ceo-of-80-000-hours Wed, 31 Jan 2024 19:11:54 +0000 EA - Announcing Niel Bowerman as the next CEO of 80,000 Hours by 80000 Hours 80000_Hours https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:33 no full 2
Eg4PLFfZMcEck98vG EA - My model of how different AI risks fit together by Stephen Clare Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My model of how different AI risks fit together, published by Stephen Clare on January 31, 2024 on The Effective Altruism Forum.[Crossposted from my Substack, Unfolding Atlas]How will AI get us in the end?Maybe it'll decide we're getting in its way and decide to take us out? It could fire all the nukes and unleash all the viruses and take us all down at onceOr maybe we'll take ourselves out? We could lose control of powerful autonomous weapons, or allow a 21st century Stalin set up an impenetrable surveillance state and obliterate freedom and progress forever.Or maybe the diffusion of AI throughout our global economy will become a quiet catastrophe? As more and more tasks are delegated to AI systems, we mere humans could be left helpless, like horses after the invention of cars. Alive, perhaps, but bewildered by a world too complex and fast-paced to be understood or controlled.Each of these scenarios has been proposed as a way the advent of advanced AI could cause a global catastrophe. But they seem quite different, and warrant different responses. In this post, I describe my model of how they fit together.[1]I divide the AI development process into three steps. Risks arise at each step. Despite its simplicity, this model does a good job pulling all these risks into one framework. It's helped me understand better how the many specific AI risks people have proposed fit together. More importantly, it satisfied my powerful, innate urge to force order onto a chaotic world.Terrible things come in threesThe three AI development stages in my model are training, deployment, and diffusion.At each stage, a different kind of AI risk arises. These are, respectively, misalignment, misuse, and systemic risks.Throughout the entire process, competitive pressures act as a risk factor. More pressure makes risks throughout the process more likely.Putting it all together looks like this:This model is too simple to be perfect. For one, these risks almost certainly won't arrive in sequence, as the model implies. They're also more entangled than a linear model implies. But I think these shortcomings are relatively minor, and relating categories of risk like this gives them a pleasant cohesiveness.[2]So let's move ahead and look closer at the risks generated at each step.Training and misalignment risksThe first set of risks emerge immediately after training an advanced AI model. Training modern AI models usually involves feeding them enormous datasets from which they can learn patterns and make predictions when given new data. Training risks relate to a model's alignment. That is, what the system wants to do, how it plans to do it, and whether those goals and methods are good for people.Some researchers worry that training a model to actually do what we want in all situations, or when we deploy it in the real world, is far from straightforward. In fact, we already see some kinds of alignment failures in the world today. These often seem silly. For example, something about the way Microsoft's first Bing chatbot's goal of being helpful and charming actually made it act like anoccasionally funny, occasionally frightening psychopath.As AI systems get more powerful, though, these misalignment risks could get a whole lot scarier. Specific risks researchers have raised include:Goal specification: It might be hard to tell AI systems exactly what we want them to do, especially as we delegate more complicated tasks to them.Some researchers worry that AIs trained on large amounts of data will either end up finding tricks or shortcuts that lead them to produce the wrong solutions when deployed in the real world. Or that they'll face extreme or different situations in the real world than they saw in the training data, and willreact in unexpected, potentially dangerous, ways.Power...

]]>
Stephen Clare https://forum.effectivealtruism.org/posts/Eg4PLFfZMcEck98vG/my-model-of-how-different-ai-risks-fit-together Wed, 31 Jan 2024 18:51:47 +0000 EA - My model of how different AI risks fit together by Stephen Clare Stephen Clare https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:16 no full 3
bxcbeqD4peDJnA2LZ EA - Who wants to be hired? (Feb-May 2024) by tobytrem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who wants to be hired? (Feb-May 2024), published by tobytrem on January 31, 2024 on The Effective Altruism Forum.Share your information in this thread if you are looking for full-time, part-time, or limited project work in EA causes[1]!We'd like to help people in EA find impactful work, so we've set up this thread, and another called Who's hiring? (we did this last in 2022[2]).Consider sharing this thread with friends who aren't on the Forum, but might be interested in getting involved in this kind of work. They will need to make an account to post, but we think it is worth it!If you have any feedback on these threads, please DM me or comment below.Take part in the threadTo take part in this thread, add an 'Answer' below. Here's a template:TLDR: [1-line summary of the kind of work you're looking for and anything particularly relevant from your background or interests. ]Skills & background: [Outline your strengths and job experience, as well as your experience with EA if you think that might be relevant. Links to past projects have been particularly valuable for past job seekers]Location/remote: [Current location & whether you're willing to relocate or work remotely]Availability & type of work: [Note whether you're only available during a particular period, whether you're looking for part-time work, etc...]Resume/CV/LinkedIn: ___Email/contact: [you can also suggest that people DM you on the Forum]Other notes: [Describe your cause area preferences if you have them, expand on the type of role you are looking for, etc... Hiring managers fed back after our last round of threads that they sometimes couldn't tell whether prospective hires would be interested in the roles they were offering.]Questions: [IF YOU HAVE ANY: Consider sharing uncertainties you have, other questions, etc.]Example answer[3]Read some hiring tips here:Yonatan Cale's quick take on using this thread effectively.Don't think, just apply! (usually)How to think about applying to EA jobsJob boards & other resourcesIf you want to explore EA jobs, check out the related Who's hiring? thread, or the resources below:The 80,000 Hours Job Board compiles a huge amount of open roles; there are over 800 jobs listed right now.You can filter to exclude "career development" roles, set up alerts for roles matching your preferred criteria, and browse roles by organisation or "collection."The "Job listing (open)" page is a place to explore positions people have shared or discussed on the EA Forum (see also opportunities to take action).The EA Opportunity Board collects internships, volunteer opportunities, conferences, and more - including part-time and entry-level job opportunities.Other resources include Probably Good's list of impact-focused job boards, the EA job postings and EA volunteering Facebook groups, and these lists of project ideas you might be able to work on independently. (If you have other suggestions for what I should include here, please comment on this post or send me a DM!)^I phrase it this way to include explicitly EA organisations, as well as organisations which do not call themselves EA, but work on causes with significant support within EA such as farmed animal welfare or AI risk.^you can see those threads here:1, 2^TLDR: I'm looking for entry-level communications jobs or writing-heavy roles. My experience is mostly in writing (of different kinds) and tutoring students.Skills & background: I write a lot and have some undergraduate research experience and familiarity with legal work. I finished my BA in history in May 2023 (see [my thesis]). I spent two summers as a legal intern at [Place], and have been tutoring for a year now. I also speak Spanish. I helped run my university EA group in 2022-2023. You can see some of my public writing for [our student newspaper] an...

]]>
tobytrem https://forum.effectivealtruism.org/posts/bxcbeqD4peDJnA2LZ/who-wants-to-be-hired-feb-may-2024 Wed, 31 Jan 2024 16:56:14 +0000 EA - Who wants to be hired? (Feb-May 2024) by tobytrem tobytrem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:36 no full 5
nPH3APx4CAw3qrLgL EA - Brian Tomasik on charity by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Brian Tomasik on charity, published by Vasco Grilo on January 31, 2024 on The Effective Altruism Forum.This is a linkpost for Brian Tomasik's posts on charity.My Donation RecommendationsBy Brian TomasikFirst published: 2014 Nov 02. Last nontrivial update: 2018 May 02.Note from 2022 Jun 27: The details in this piece are slightly outdated. Maybe I'll update this page at some point, but for now, here's a quick summary of my current views.In terms of maximizing expected suffering reduction over the long-run future, my top recommendation is the Center for Reducing Suffering (CRS), closely followed by the Center on Long-Term Risk (CLR). (I'm an advisor to both of them.) I think both of these organizations do important work, but CRS is more in need of funding currently.CRS and CLR do research and movement building aiming to reduce risks of astronomical suffering in the far future. This kind of work can feel very abstract, and it's difficult to know if your impact is even net good on balance. Personally I prefer to also contribute some of my resources toward efforts that more concretely reduce suffering in the short run, to avoid feeling like I'm possibly wasting my life on excessive speculation.For this reason, I plan to donate my personal wealth over time toward charities that work mainly or exclusively on improving animal welfare. (I prefer welfare improvements over reducing meat consumption because the sign of the latter for wild-animal suffering is unclear.) The Humane Slaughter Association is my current favorite. A decent portion of the charities granted to by the EA Funds Animal Welfare Fund also do high-impact animal welfare work. I donate a bit to Animal Ethics as well.SummaryThis piece describes my views on a few charities. I explain what I like about each charity and what concerns me about it. Currently, my top charity recommendation for someone with values similar to mine is the Foundational Research Institute (an organization that I co-founded and volunteer for).Spreading Google Grants with Caution about CounterfactualsBy Brian TomasikFirst published: 2014 Feb 04. Last nontrivial update: 2016 Nov 09.SummaryIf you find an effective charity, write to them to ask whether they use Google Grants, and if not, suggest they sign up. Google Grants offers the prospect of immense returns for a small amount of labor, although one needs to be careful about not competing with other effective organizations and choosing keywords that draw in new people rather than preaching to the choir.Update (2015 Sep): Having used Google Grants for the last 1.5 years for several organizations, my conclusion is that the value of AdWords is modest. None of my organizations has found via AdWords a major donor or a promising future employee, even though our websites get high traffic volume from ads. Maybe part of the reason is that the best people don't click on ads much? Another reason is that the best people tend to be concentrated in dense social clusters, so that networking can be more effective.The Haste Consideration, RevisitedBy Brian TomasikFirst published: 2013 Feb 03. Last nontrivial update: 2018 Apr 19.SummaryInternal rates of return for charity are high, but they may not be as high as they seem naively. Haste is important, but because long-term growth is logistic rather than exponential, it's less important than has been suggested by some. That said, if artificial general intelligence (AGI) comes soon and exponential growth does not level off too quickly, naive haste may still be roughly appropriate. There are other factors for and against haste that parallel donate-vs.-invest considerations.Restating the summary in simpler language: Movements should saturate or at least show diminishing returns at some point, so that movement building sooner amounts to either j...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/nPH3APx4CAw3qrLgL/brian-tomasik-on-charity Wed, 31 Jan 2024 16:26:56 +0000 EA - Brian Tomasik on charity by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:17 no full 6
pxjEJ2djKNu7abAWv EA - Project for Awesome 2024: Make a short video for an EA charity! by EA ProjectForAwesome Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project for Awesome 2024: Make a short video for an EA charity!, published by EA ProjectForAwesome on January 31, 2024 on The Effective Altruism Forum.Project for Awesome (P4A) is a charitable initiative running February 16th-18th this year (2024), and videos must be submitted by 11:59am EST on Tuesday, February 13th. This is a good opportunity to raise money for EA charities and promote EA and EA charities to a wider audience. In the last years, winning charities got between $14,000 and $38,000 each. Videos don't need to be professional!In short,People make short 1-4 min videos supporting charities, upload them on Youtube and submit them to the P4A website by 11:59am EST on Tuesday, February 13th. The videos must be new videos specifically for this year's P4A and should mention P4A.People vote on the videos on the weekend, February 16th-18th.Money raised during the Project for Awesome is split, with 50% going to Save the Children and Partners in Health, and 50% going to charities voted on by the community. One more video for a charity lets everyone vote one more time for that charity.This year, we want to support seven EA charities: Against Malaria Foundation, GiveDirectly, The Humane League, Good Food Institute, ProVeg International, GiveWell and Fish Welfare Initiative. Please consider making a short video for one (or more) of these charities! You will help us to coordinate if you sign up here.Please join the Facebook group, EA Project 4 Awesome 2024!In 2017, we secured a $50,000 donation for AMF, GiveDirectly and SENS. In 2018 GiveDirectly, The Good Food Institute and AMF all received $25,000. In 2020, seven out of eight of the charities we coordinated around have won ~$27,000 each, for a total that year of ~$189,700! In 2022, 3 out of 11 supported charities won. Last year, The Good Food Institute got ~$37,000.Here are some resources:Project for Awesome websiteA document with infos, resources and instructionshttp://www.projectforawesome.com/graphicsHow to Make a P4A video in 20 Minutes or LessSlides for a P4A video planning event from 2021Video guidelines from the P4A FAQ:Your video must be made specifically for this year's P4A. So, you must mention Project for Awesome in the video itself, and it should have been created recently.You should put reasonable effort into making sure any information you include in your video is accurate, from anecdotal examples to statistics. There's a lot of misinformation on the internet, so we want to make sure that P4A videos are providing thoughtful, accurate context about the work that organizations are doing in the world.Try not to make your video too long. People are going to be watching a ton of videos during P4A, and no one wants to sit through a rambly, unedited vlog for ten minutes. Keep your video short and to the point so that people will watch the whole thing and learn all about your cause. A good length to aim for is 2-4 minutes, unless you have such compelling content that it just needs to be longer.Try not to spend too much time explaining what the Project for Awesome is. Most people watching your video will already know, so just mentioning it briefly and directing people to the website is plenty. An explanation in the description as well as a link to projectforawesome.com is also a great addition so people who stumble across your video can learn more about us.Similarly, try not to spend too much time promoting your own channel in your video. One or two sentences is fine to explain the type of videos you usually make if they're different from what you're doing for your P4A video, but much more than that and it just looks like you're using the P4A to help promote yourself, which isn't what this is all about.Please include a content warning at the beginning of your video if you're discussing sensit...

]]>
EA_ProjectForAwesome https://forum.effectivealtruism.org/posts/pxjEJ2djKNu7abAWv/project-for-awesome-2024-make-a-short-video-for-an-ea Wed, 31 Jan 2024 16:23:26 +0000 EA - Project for Awesome 2024: Make a short video for an EA charity! by EA ProjectForAwesome EA_ProjectForAwesome https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:22 no full 7
PhuWzoDC7pgwzEsye EA - Things newer (university) group organisers should know about by Sam Robinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things newer (university) group organisers should know about, published by Sam Robinson on January 31, 2024 on The Effective Altruism Forum.TL;DR: Group organisers should focus more on developing themselves and their highly engaged members more than they currently do; goal setting, utilising pre-existing materials and external assistance can help organisers do this.Epistemic status: The ideas below have arisen from (i) conversations I've had with ~30 organisers, (ii) my own experience organising a medium-sized, reasonably 'successful' group, (iii) things I've picked up/observed whilst interning full-time then contracting part-time for the CEA Uni Groups Team, and (iv) a collection of my own and others' armchair philosophy. The claims made are mine and should not be taken as representing the opinions of the CEA University Groups Team.Furthermore, I encourage readers to engage with my thoughts critically - it might not be the case that what I endorse applies to your situation. However, I do believe that many organisers overestimate the uniqueness of their particular group, believing that advice/ideas don't apply to them; from my experience, EA university groups are quite similar, meaning that ideas and methods track well across them.0. IntroductionThis post was, in part, inspired byJessica McCurdy's post on advice CEA gives to newer organisers; I strongly recommend reading it before or after this, whether you are a new or an experienced organiser.As a contractor for the University Groups Team at CEA, I recently ran a retreat for university group organisers. I found myself giving similar advice to many participants: resources, heuristics, framings etc. Hence, I thought it might be useful to write this up so that I could (i) easily share with others that I have similar conversations with and (ii) assist those I don't get to chat with.This post is intended to be a broad overview of some key things and ideas within university group organising - it's not holistic and shouldn't be treated as such. If anyone has specific questions about action-guiding advice, I would encourage them to explore the resources detailed in section 3 below.About me: I've been organising my group at the University of St. Andrews (a small yet somewhat prestigious university in the UK) for ~2 years. I interned with the University Groups Team at CEA in the summer of 2023 where I updated theEA Groups Resource Centre, and have been contracting for them since whilst doing my degree in philosophy. I always like chatting about at least one non-EA thing when I meet people in EA contexts; I can't do that here, but in the same spirit, I'll share that house and jungle music instantly improve my mood by at least 2 points and I think udon noodles (especially with a 'dan dan' sauce) are the best food ever made in the world.1. Development1.1 On developing group members1.1.1 Backchaining to determine what you should doWithin effective altruism we all share a common goal - to do as much good as we can. I think that group organisers will benefit significantly from thinking more about this final goal, and will get sidetracked less by loosely related goals - a recurring failure mode I see in groups. The process of backchaining can help avoid optimising for the wrong thing: think about your final goal, and work back from there until you reach an action step that you can complete now.Final goal: the most good getting doneSub-goal 1: the world's most pressing issues being solved.Sub-goal 2: people solving the world's most pressing issues.Sub-goal 3: people existing who are willing and able to solve the world's most pressing issues.Sub-goal 4: EA groups helping people who are motivated to solve the world's problems become able to do so.Sub-goal 5: EA groups sharing EA ideas in a way that motivates people...

]]>
Sam Robinson https://forum.effectivealtruism.org/posts/PhuWzoDC7pgwzEsye/things-newer-university-group-organisers-should-know-about Wed, 31 Jan 2024 08:27:57 +0000 EA - Things newer (university) group organisers should know about by Sam Robinson Sam Robinson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:47 no full 8
SnGqab3noXLmJzQCs EA - Lower-suffering egg brands available in the SF Bay Area by mayleaf Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lower-suffering egg brands available in the SF Bay Area, published by mayleaf on January 31, 2024 on The Effective Altruism Forum.(Disclaimer: I am not an animal welfare researcher or expert. I got all of my information from publicly-available certification standards, farm websites, and emailing individual farms.)I'm not a vegan, but I've long felt troubled by the fact that eggs have such a high suffering-to-calorie ratio - higher, by some calculations, than beef[1] . I like eating eggs, and it seems possible to raise laying hens in a humane and low-suffering way, so I looked into whether I could purchase eggs from brands that treat their chickens well (or at least, less badly).TL;DR: See here for egg brands I recommend that are sold in the Bay Area. If you're not based in the Bay Area, I recommend Cornucopia's Egg Scorecard tool and the Animal Welfare Approved store locator to find low-suffering eggs in your market area.What does "lower-suffering" mean?I don't know how to tell whether a hen's life is "overall happy" or "net-positive" (or if that's even a coherent way to think about this question). Instead, I looked into common industry practices that are harmful to laying hens, and tried to find brands that avoid those practices. To do this, I used the qualifying criteria for A Greener World's Animal Welfare Approved (AWA) certification, which I've personally heard animal welfare researchers speak highly of.Unfortunately, very few egg brands (and none available in my current city) have an AWA certification, so rather than relying on certification status, I evaluated each brand on a per-criteria basis.Based on the AWA standards for laying hens, my criteria included:No physical mutilation. This includes debeaking (removing the whole beak), beak trimming (removing the sharp tip of the beak that the hen uses to forage and groom), toe-trimming (removing the hen's claws), etc. The AWA certification forbids all physical alterations.No forced-molting. This involves starving hens for 1-2 weeks, which forces them into a molt (losing feathers), resetting their reproductive cycle so that they can restart egg production with higher yields. AWA forbids this.Access to outdoor space and foraging. AWA mandates that outdoor foraging is accessible for at least 50% of daylight hours, and that housing is designed to encourage birds to forage outdoors during the day. The outdoor space must be an actual nice place to forage, with food and vegetation to provide cover from predators, and not just a dirt field. Indoor confinement is prohibited.Age of outdoor access for pullets (young hens). Many farms keep pullets indoors for their safety even if adult hens forage outdoors. If you keep pullets indoors for too long, it seems that they became scared to go outside. AWA's standard is 4 weeks; many standard farms don't allow outdoor access until >12 weeks (if outdoor access is provided at all).Indoor space. The hens' indoor housing or shelter must have at least 1.8 square feet per bird, unless they only return to their indoor housing to lay and sleep and spend the rest of the time outdoors.Smaller flock size. AWA has no strict requirements here, but recommends a flock size of <500 birds, and notes that the birds must have a stable group size and be monitored to minimize fighting. This is much smaller than standard farms, which often have flock sizes of 10,000+ hens.(The AWA certification has a ton more requirements than just this, but I limited my criteria to ones that I could easily check using online materials).Is this enough?I'm not sure. I've sometimes thought that avoiding industry-standard factory farms is like avoiding using prisons that violate the Geneva Convention: it prevents the worst atrocities, but by no means guarantees a good life. At other times, I read about the ...

]]>
mayleaf https://forum.effectivealtruism.org/posts/SnGqab3noXLmJzQCs/lower-suffering-egg-brands-available-in-the-sf-bay-area 12 weeks (if outdoor access is provided at all).Indoor space. The hens' indoor housing or shelter must have at least 1.8 square feet per bird, unless they only return to their indoor housing to lay and sleep and spend the rest of the time outdoors.Smaller flock size. AWA has no strict requirements here, but recommends a flock size of <500 birds, and notes that the birds must have a stable group size and be monitored to minimize fighting. This is much smaller than standard farms, which often have flock sizes of 10,000+ hens.(The AWA certification has a ton more requirements than just this, but I limited my criteria to ones that I could easily check using online materials).Is this enough?I'm not sure. I've sometimes thought that avoiding industry-standard factory farms is like avoiding using prisons that violate the Geneva Convention: it prevents the worst atrocities, but by no means guarantees a good life. At other times, I read about the ...]]> Wed, 31 Jan 2024 08:17:23 +0000 EA - Lower-suffering egg brands available in the SF Bay Area by mayleaf 12 weeks (if outdoor access is provided at all).Indoor space. The hens' indoor housing or shelter must have at least 1.8 square feet per bird, unless they only return to their indoor housing to lay and sleep and spend the rest of the time outdoors.Smaller flock size. AWA has no strict requirements here, but recommends a flock size of <500 birds, and notes that the birds must have a stable group size and be monitored to minimize fighting. This is much smaller than standard farms, which often have flock sizes of 10,000+ hens.(The AWA certification has a ton more requirements than just this, but I limited my criteria to ones that I could easily check using online materials).Is this enough?I'm not sure. I've sometimes thought that avoiding industry-standard factory farms is like avoiding using prisons that violate the Geneva Convention: it prevents the worst atrocities, but by no means guarantees a good life. At other times, I read about the ...]]> 12 weeks (if outdoor access is provided at all).Indoor space. The hens' indoor housing or shelter must have at least 1.8 square feet per bird, unless they only return to their indoor housing to lay and sleep and spend the rest of the time outdoors.Smaller flock size. AWA has no strict requirements here, but recommends a flock size of <500 birds, and notes that the birds must have a stable group size and be monitored to minimize fighting. This is much smaller than standard farms, which often have flock sizes of 10,000+ hens.(The AWA certification has a ton more requirements than just this, but I limited my criteria to ones that I could easily check using online materials).Is this enough?I'm not sure. I've sometimes thought that avoiding industry-standard factory farms is like avoiding using prisons that violate the Geneva Convention: it prevents the worst atrocities, but by no means guarantees a good life. At other times, I read about the ...]]> mayleaf https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:58 no full 9
NqnNjefZDpd6MqbWw EA - Deciding What Project/Org to Start: A Guide to Prioritization Research by Alexandra Bos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deciding What Project/Org to Start: A Guide to Prioritization Research, published by Alexandra Bos on January 31, 2024 on The Effective Altruism Forum.If you're deciding what (research) project, organization or intervention to go for, analyzing your options through prioritization research can be invaluable. I used it to settle on foundingCatalyze, an AI Safety field-building non-profit. In this post, I will share my blueprint and learnings from this process. Please note that you don't need to be a researcher to benefit from conducting prioritization research.Why prioritization researchNo need to wait for inspiration to strikeIf you're at a crossroads trying to come up with a great idea, I have good news: you don't have to invent something new. There's a world of ideas waiting to be discovered and executed. So don't wait for that idea to come to you; instead, you can go to the ideas.You can probably find a better intervention than the first one(s) you stumbled uponSuppose you have a handful of ideas for a project or an organization. That's a fantastic start! But consider the possibility that by conducting a broad and deliberate search, you could stumble upon something even more impactful. More likely than not, if you consider a wide range of ideas, your initial favorite won't come out on top. Don't go for the first thing that came to mind, but rather take some time to scope the field and possibly find an even better option.The difference in impact between a 'good' and 'great' intervention is huge - especially within a high-impact cause areaOne of the original points EA focused on was the huge difference in impact between different interventions within global health.Illustrative Graph from this80.000 Hours articleTo me, it seems likely that the big difference in impact between interventions is similar in different cause areas.Therefore, apart from wisely choosing what cause to work on, I think it is still very crucial to wisely pick what you work on within that cause area. One of the 'best' interventions in this area is likely many times more impactful than the median.If you work withina cause area that is especially high-impact, then finding one of the very best things to do could be especially consequential. Within such a cause area, the median intervention might already make a very big difference. Don't let this be encouragement to settle - instead let this be encouragement to find an outlier which is 10x as impactful as this already-great median intervention.In other words, don't fall into the "it's-in-an-EA-cause-area-so-it's-high-impact-anyways" fallacy (I'll have to work on the naming). Although at this point it might all just feel 'very impactful', don't forget that theScope Insensitivity Bias might be clouding your judgement.Get better feedback and advice by reasoning explicitly & strengthen your skillsThe skills you can learn through prioritization research can for example be useful if you'd like to move into a grantmaker role or research role. I've personally also found it useful in my impact-focused entrepreneurship endeavours.My blueprint for prioritization researchBelow I'll describe the steps I took to decide roughly what intervention I wanted to build an organization around. In deciding what these steps looked like, I largely took inspiration fromCharity Entrepreneurship's (CE) materials (theirbook and information I could find ontheir website about their approach).I by no means want to claim that the method I put together below is the best way to do prioritization research. Instead, I'd like to give readers some place to start from more quickly by sharing the specific steps and templates I put together. I think that finding materials like these would have helped me to get going more quickly and improve my methods. I encourage o...

]]>
Alexandra Bos https://forum.effectivealtruism.org/posts/NqnNjefZDpd6MqbWw/deciding-what-project-org-to-start-a-guide-to-prioritization Wed, 31 Jan 2024 04:24:03 +0000 EA - Deciding What Project/Org to Start: A Guide to Prioritization Research by Alexandra Bos Alexandra Bos https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:48 no full 11
n2h2vgNYMHFJz6Xpz EA - Managing risks while trying to do good by Wei Dai Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Managing risks while trying to do good, published by Wei Dai on February 1, 2024 on The Effective Altruism Forum.I often think about "the road to hell is paved with good intentions".[1] I'm unsure to what degree this is true, but it does seem that people trying to do good have caused more negative consequences in aggregate than one might naively expect.[2] "Power corrupts" and "power-seekers using altruism as an excuse to gain power" are two often cited reasons for this, but I think don't explain all of it.A more subtle reason is that even when people are genuinely trying to do good, they're not entirely aligned with goodness. Status-seeking is a powerful motivation for almost all humans, including altruists, and we frequently award social status to people for merely trying to do good, before seeing all of the consequences of their actions. This is in some sense inevitable as there are no good alternatives. We often need to award people with social status before all of the consequences play out, both to motivate them to continue to try to do good, and to provide them with influence/power to help them accomplish their goals.A person who consciously or subconsciously cares a lot about social status will not optimize strictly for doing good, but also for appearing to do good. One way these two motivations diverge is in how to manage risks, especially risks of causing highly negative consequences. Someone who wants to appear to do good would be motivated to hide or downplay such risks, from others and perhaps from themselves, as fully acknowledging such risks would often amount to admitting that they're not doing as much good (on expectation) as they appear to be.How to mitigate this problemIndividually, altruists (to the extent that they endorse actually doing good) can make a habit of asking themselves and others what risks they may be overlooking, dismissing, or downplaying.[3]Institutionally, we can rearrange organizational structures to take these individual tendencies into account, for example by creating positions dedicated to or focused on managing risk. These could be risk management officers within organizations, or people empowered to manage risk across the EA community.[4]Socially, we can reward people/organizations for taking risks seriously, or punish (or withhold rewards from) those who fail to do so. This is tricky because due to information asymmetry, we can easily create "risk management theaters" akin to "security theater" (which come to think of it, is a type of risk management theater).But I think we should at least take notice when someone or some organization fails, in a clear and obvious way, to acknowledge risks or to do good risk management, for example not writing down a list of important risks to be mindful of and keeping it updated, or avoiding/deflecting questions about risk. More optimistically, we can try to develop a culture where people and organizations are monitored and held accountable for managing risks substantively and competently.^due in part to my family history^Normally I'd give some examples here, but we can probably all think of some from the recent past.^I try to do this myself in the comments.^an idea previously discussed by Ryan Carey and William MacAskillThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Wei Dai https://forum.effectivealtruism.org/posts/n2h2vgNYMHFJz6Xpz/managing-risks-while-trying-to-do-good Thu, 01 Feb 2024 17:40:27 +0000 EA - Managing risks while trying to do good by Wei Dai Wei Dai https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:04 no full 2
XHoBmBsfaDLQzBGfN EA - Increasingly vague interpersonal welfare comparisons by MichaelStJules Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Increasingly vague interpersonal welfare comparisons, published by MichaelStJules on February 1, 2024 on The Effective Altruism Forum.Some have argued that all interpersonal welfare comparisons should be possible or take it as a strong mark against a theory in which they are not all possible. Others have argued against their possibility, e.g. Hausman (1995) for preference views. Here, I will illustrate an intermediate position: interpersonal welfare comparisons are vague, with tighter bounds on reasonable comparisons between beings whose welfare states are realized more similarly, and wider or no bounds when more different.The obvious case is two completely or at least functionally identical brains (at the right level of abstraction for our functionalist theory). As long as we grant intrapersonal comparisons, then we should get interpersonal comparisons between identical brains. We map the first brain's state(s) to the equivalent state(s) in the second, and compare them in the second brain. Of course, this is not a very interesting case, and it seems only directly useful for artificial duplicates of minds.Still, we can go further. Consider an experience E1 in brain B1 and an experience E2 in brain B2. If B1 and B2 only differ by the fact that some of B2's unpleasantness-contributing neurons are less sensitive or removed, and B1 and B2 receive the same input signals that cause pain, then it seems likely to me that B1's painful experience E1 is at least as unpleasant as B2's E2 and possibly more. We may be able to say roughly how much more unpleasant it is by comparing E2 in B2 directly to less intense states in B1, sandwiching E2 in unpleasantness between two states in B1.Maybe going from E1 to E2 changes the unpleasantness by between -0.01 and 0, i.e. UnpleasantnessB2(E2)=UnpleasantnessB1(E1)+Δ, where 0.01Δ0. There may be no fact of the matter about the exact value of Δ.For small enough local differences between brains, we could make fairly precise comparisons.I use unpleasantness for the purpose of a more concrete illustration, but it's plausible other potential types of welfare could be used instead, like preferences. A slight difference in how some preferences are realized should typically result in a slight difference in the preferences themselves and how we value them, but the extent of the difference in value could be vague and only boundable by fairly tight inequalities. We can use the same example, too: a slight difference in how unpleasant a pain is through the same kinds of differences in neurons as above typically results in a slight difference in preferences about that pain and preference-based value.In general, for arbitrary brains B1 and B2 and respective experiences E1 and E2, we can ask whether there's a sequence of changes from E1 and B1 to E2 and B2, possibly passing through different hypothetical intermediate brains and states, that lets us compare E1 and E2 by combining bounds and inequalities from each step along the sequence. Some changes could have opposite sign effects on the realized welfare, but with only bounds rather than precise values, the bounds widen between brains farther apart in the sequence.For example, a change with a range of +1 to +4 in additional unpleasantness and a change with a range of -3 to -1 could give a net change between -2=+1-3 and +3=+4-1. Adding one more change of between +1 and +4 and another of between -3 and -1 gives between -4 and +6. Adding another change of between +2 and +3 gives between -2 and +9. The gap between the bounds widens with each additional change.[1]The more or larger such changes are necessary to get from one brain to another, the less tight the bounds on the comparisons could become, the further they may go both negative and positive overall,[2] and the less reasonable it seems to mak...

]]>
MichaelStJules https://forum.effectivealtruism.org/posts/XHoBmBsfaDLQzBGfN/increasingly-vague-interpersonal-welfare-comparisons Thu, 01 Feb 2024 13:15:48 +0000 EA - Increasingly vague interpersonal welfare comparisons by MichaelStJules MichaelStJules https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:17 no full 3
qACXnSkfmPjwo2eKP EA - Fish Welfare Initiative's 2023 in Review by haven Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fish Welfare Initiative's 2023 in Review, published by haven on February 1, 2024 on The Effective Altruism Forum.Note that this is a crosspost from our blog. We're posting it here because the EA community has been instrumental from the very beginning to our organization, so perhaps in some way we're hoping to make you all proud of the work that some of your own have done.As always, all questions - however candid - are welcome!Message from our CofoundersIn ourprevious year-in-review post, we wrote about how we had made significant progress in developing our interventions, and how because of that progress we were finally beginning to see specific avenues forward to scale an intervention in India to help hundreds of millions of fishes. We are now one year later, and though it was a year of significant progress in many ways, that earlier analysis now seems premature.We began 2023 withambitious scaling goals to add 150 farmers to ourfarmer program, and to help an additional 1 million fishes. Unfortunately, largely due to increasingly-apparent limitations in our interventions, we fell short of these goals. These limitations also inspired twostrategicchanges over the course of the year, which, while needed, created instability that at times was challenging for our team. All of these things made 2023 feel like a challenging year for our programming, and for our organization more broadly.However, with ourlast strategic change, and with the improved capacity that came with it, we feel much better positioned to reach our ambition of being an evidence-based, extremely impactful and cost-effective organization. 2024 is thus beginning on a note of optimism for our team, with various causes for excitement: Our revamped research department andresearch plans; the issues we've identified (e.g.) and improvementswe made and are making in the farmer program; and the strategic realignment of certain roles to better capitalize on our staffs' core competencies.With all of this, we believe 2023 is best characterized as a year of unintentional setup for FWI. We didn't achieve most of the outcomes we intended, but we did build a number of foundations to enable us to achieve these outcomes more rigorously and sustainably this year and beyond. We also still did reach a number of important outcomes themselves, including scaling our farmer program toover 100 farms, finishing construction on experimental ponds to be used in future research with our local university partnership, and improving the lives of anestimated 450,000 fishes.There's no doubt that 2024 will be a critical year for our organization. This is the year where we'll see if our bet on increasing rigor and resources going into R&D will pay off with more impactful and scalable interventions.Thank you, as always, for your continued interest and support. We're excited to continue to share our progress with you.Tom and Haven, FWI CofoundersCountries of OperationFWI operated primarily in two different countries in 2023:India, our primary country of implementation, about which most of this post is written. Weselected India back in 2020 as our focus country, primarily because of the scale of farmed fish, the tractability we saw in our field visit of working with farmers, and our ability to hire people effectively there.China, where we conducted standard setting, field visits, and general early stage institutional awareness raising work. For more information on our current status in China, see ourrecent post.Key OutcomesWe believe the following are the key outcomes achieved by the organization in 2023:1 - An estimated 450,000 fishes' lives improved.Through stocking density and water quality improvements implemented by farmers in ourAlliance For Responsible Aquaculture (ARA), we estimate that we improved the live...

]]>
haven https://forum.effectivealtruism.org/posts/qACXnSkfmPjwo2eKP/fish-welfare-initiative-s-2023-in-review Thu, 01 Feb 2024 09:29:49 +0000 EA - Fish Welfare Initiative's 2023 in Review by haven haven https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:20 no full 4
WmELKispJedFp7nSK EA - Types of subjective welfare by MichaelStJules Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Types of subjective welfare, published by MichaelStJules on February 2, 2024 on The Effective Altruism Forum.SummaryI describe and review four broad potential types of subjective welfare: 1) hedonic states, i.e. pleasure and unpleasantness, 2) felt desires, both appetitive and aversive, 3) belief-like preferences, i.e. preferences as judgements or beliefs about value, and 4) choice-based preferences, i.e. what we choose or would choose. My key takeaways are the following:Belief-like preferences and choice-based preferences seem unlikely to be generally comparable, and even comparisons between humans can be problematic (more).Hedonic states, felt desires and belief-like preferences all plausibly matter intrinsically, possibly together in a morally pluralistic theory, or unified as subjective appearances of value or even under belief-like preferences (more).Choice-based preferences seem not to matter much or at all intrinsically (more).The types of welfare are dissociable, so measuring one in terms of another is likely to misweigh it relative to more intuitive direct standards and risks discounting plausible moral patients altogether (more).There are multiple potentially important variations on the types of welfare (more).AcknowledgementsThanks to Brian Tomasik, Derek Shiller and Bob Fischer for feedback. All errors are my own.The four typesIt appears to me that subjective welfare - welfare whose value depends only or primarily[1] on the perspectives or mental states of those who hold them - can be roughly categorized into one of the following four types based on how they are realized: hedonic states, felt desires, belief-like preferences and choice-based preferences.To summarize, there's welfare as feelings (hedonic states and felt desires), welfare as beliefs about value (belief-like preferences) and welfare as choices (choice-based preferences). I will define, illustrate and elaborate on these types below.For some discussion of my choices of terminology, see the following footnote.[2]Hedonic statesHedonic states: feeling good and feeling bad, or (conscious) pleasure and unpleasantness/unpleasure/displeasure,[3] or (conscious) positive and negative affect. Their causes can be physical, like sensory pleasures and physical pain, or psychological, like achievement, failure, loss, shame, humour and threats, to name a few.It's unclear if interpersonal comparisons of hedonic state can be grounded in general, whether or not they can be between beings who realize them in sufficiently similar ways. In my view, the most promising approaches would be on the basis of the magnitudes of immediate and necessary cognitive or mental effects, causes or components of hedonic states. Other measures seem logically and intuitively dissociable (e.g. seethe section on dissociation below) or incompatible with functionalism at the right level of abstraction (e.g. against using the absolute number of active neurons, seeMathers, 2022 andShriver, 2022).Felt desiresFelt desires: desires we feel. They can be one of two types, either a) appetitive - or incentive and typically conducive to approach or consummatory behaviour and towards things - like in attraction, hunger and anger, or b) aversive - and typically conducive to avoidance and away from things - like in pain, fear, disgust and again anger (Hayes et al., 2014,Berridge, 2018, and on anger as aversive and appetitive,Carver & Harmon-Jones, 2009,Watson, 2009 andLee & Lang, 2009). However, the actual approach/consummatory or avoidance behaviour is not necessary to experience a felt desire, and we can overcome our felt desires or be constrained from satisfying them.Potentially defining functions of felt desires could be their effects on attention and its control, as motivational salience, or incentive salience an...

]]>
MichaelStJules https://forum.effectivealtruism.org/posts/WmELKispJedFp7nSK/types-of-subjective-welfare-1 Fri, 02 Feb 2024 23:35:12 +0000 EA - Types of subjective welfare by MichaelStJules MichaelStJules https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 44:38 no full 6
kc9igJKXeJbmfxsxq EA - AI safety needs to scale, and here's how you can do it by Esben Kran Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI safety needs to scale, and here's how you can do it, published by Esben Kran on February 4, 2024 on The Effective Altruism Forum.AI development attracts more than $67 billion in yearly investments, contrasting sharply with the $250 million allocated to AI safety (see appendix). This gap suggests there's a large opportunity for AI safety to tap into the commercial market. The big question then is, how do you close that gap?In this post, we aim to outline the for-profit AI safety opportunities within four key domains:Guaranteeing the public benefit and reliability of AI when deployed.Enhancing the interpretability and monitoring of AI systems.Improving the alignment of user intent at the model level.Developing protections against future AI risk scenarios using AI.At Apart, we're genuinely excited about what for-profit AI security might achieve. Our experience working alongside EntrepreneurFirst on AI security entrepreneurship hackathons, combined with discussions with founders, advisors, and former venture capitalists, highlights the promise of the field. With that said, let's dive into the details.Safety in deploymentProblems related to ensuring that deployed AI is reliable, protected against misuse, and safe for users, companies, and nation states. Related fields include dangerous capability evaluation, control, and cybersecurity.Deployment safety is crucial to ensure AI systems function safely and effectively without misuse. Security also meshes well with commercial opportunities and building capability in this domain can scale strong security teams to solve future safety challenges. If you are interested in non-commercial research, we also suggest looking into governmental research bodies, such as the ones in UK, EU, and US.Concrete problems for AI deploymentEnhancing AI application reliability and security: Foundation model applications, from user-facing chatbots utilizing software tools for sub-tasks to complex multi-agent systems, require robust security, such as protection against prompt injection, insecure plugins, and data poisoning. For detailed security considerations, refer to the Open Web Application Security Project's top 10 LLM application security considerations.Mitigating unwanted model output: With increasing regulations on algorithms, preventing illegal outputs may become paramount, potentially requiring stricter constraints than model alignment.Preventing malicious use:For AI API providers: Focus on monitoring for malicious or illegal API use, safeguarding models from competitor access, and implementing zero-trust solutions.For regulators: Scalable legislative auditing, like model card databases, open-source model monitoring, and technical governance solutions will be pivotal in 2024 and 2025. Compliance with new legislation, akin to GDPR, will likely necessitate extensive auditing and monitoring services.For deployers: Ensuring data protection, access control, and reliability will make AI more useful, private, and secure for users.For nation-states: Assurances against nation-state misuse of models, possibly through zero-trust structures and treaty-bound compute usage monitoring.ExamplesApollo Research: "We intend to develop a holistic and far-ranging model evaluation suite that includes behavioral tests, fine-tuning, and interpretability approaches to detect deception and potentially other misaligned behavior."Lakera.ai: "Lakera Guard empowers organizations to build GenAI applications without worrying about prompt injections, data loss, harmful content, and other LLM risks. Powered by the world's most advanced AI threat intelligence."Straumli.ai: "Ship safe AI faster through managed auditing. Our comprehensive testing suite allows teams at all scales to focus on the upsides."Interpretability and oversightProblems related...

]]>
Esben Kran https://forum.effectivealtruism.org/posts/kc9igJKXeJbmfxsxq/ai-safety-needs-to-scale-and-here-s-how-you-can-do-it Sun, 04 Feb 2024 23:08:10 +0000 EA - AI safety needs to scale, and here's how you can do it by Esben Kran Esben Kran https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:46 no full 1
pjjxarmwkY8D5KvDc EA - A simple and generic framework for impact estimation and attribution by Christoph Hartmann Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A simple and generic framework for impact estimation and attribution, published by Christoph Hartmann on February 4, 2024 on The Effective Altruism Forum.TL;DRBuilding on the logarithmic relationship between income and life satisfaction, we model the purchase of goods as an exchange of money for life satisfaction.Under that lens, a company's impact on the life satisfaction of its customers is the income-weighted sum of the company's revenue.By comparing the company's revenue with its cost structure, we can attribute the relative share of impact to the company's employees, suppliers, and shareholders.While this approach primarily focuses on for-profit companies, it can be extended to NGOs by reframing them as companies selling attribution of their primary impact for donations.In this framework, an NGO's impact is then twofold: It's primary impact, sold and attributed to the donor, and it's secondary impact on the life satisfaction of the donor, attributed to the employees and other cost drivers.This approach is not a substitute for measuring any impact beyond life satisfaction of consumers. It cannot replace studies on the primary impact of NGOs or impact on factors beyond life satisfaction, like the environment.You can use this framework to compare the impact of companies when deciding between jobs, to compare the impact of donations to the impact from a job, or use it as a guide on how to attribute the impact of a for-profit or NGO to it's customers, employees and suppliers.The corresponding spreadsheet can be used to apply this framework to any company using commonly available data.Idea: Using money as a proxy for impactImpact estimation is a dark art: Complex spreadsheets with high sensitivity to parameters, convoluted theories of change, cherry-picked KPIs that can't be compared between two organizations, triple-counting of the same impact for multiple stakeholders, missing studies to back up assumptions. I am working for a social enterprise and in the five years I've been here I tried to estimate impact a couple of times and always gave up frustrated.In this article I am trying to turn impact estimation around: Instead of trying to estimate impact from detailed bottom-up models, I will take a top-down approach and work with one measure that works for almost anything: money. At its heart, money represents value delivered and I want to see how far we can take this to estimate impact.I will guide you through this in three steps that you can also follow and adjust inthis spreadsheet: We will start with estimating the impact of a donation to GiveDirectly. From this we will derive income-weighted revenue as a proxy measure for impact. With this we can then estimate the impact of a market stand as a simple example and then generalize to any organization[1], any job, and the impact of shareholders. Finally we'll try to apply this approach to a few examples, look at the extremes, and see how everything holds up.The impact of a cash transferLet's start with estimating the impact of a donation to GiveDirectly. Most readers are probably familiar with the relation between income and life satisfaction: Life satisfaction is highly correlated to the logarithm of income - at low income levels life satisfaction grows fast with income while at high income levels extra income will have almost no effect on life satisfaction. This holds both when compared between countries and for different income levels within countries.Our World In Data for more details on this.We can use this relationship to estimate the impact of donating to GiveDirectly: Let's say we are looking at a pool of donors whose average yearly income is $50k. That would mean they have, on average, a life satisfaction of about 6.9 Life Satisfaction Points (LSP). Then when they donate ten percent of thei...

]]>
Christoph Hartmann https://forum.effectivealtruism.org/posts/pjjxarmwkY8D5KvDc/a-simple-and-generic-framework-for-impact-estimation-and Sun, 04 Feb 2024 07:44:39 +0000 EA - A simple and generic framework for impact estimation and attribution by Christoph Hartmann Christoph Hartmann https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 34:56 no full 4
v3AjqpSfEmNzxDQBc EA - Introducing the Animal Advocacy Forum - a space for those involved or interested in Animal Welfare & related topics by David van Beveren Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Animal Advocacy Forum - a space for those involved or interested in Animal Welfare & related topics, published by David van Beveren on February 6, 2024 on The Effective Altruism Forum.SummaryFarmed Animal Strategic Team (FAST) is thrilled to announce the launch of ourAnimal Advocacy Forum, a new platform aimed at increasing discussion and enhancing collaboration within the animal advocacy movement.We invite everyone involved or interested in animal welfare, alternative proteins, animal rights, or related topics to participate, share insights about their initiatives, and discover valuable perspectives.Thank you!What is FAST?For more than a decade, FAST has operated as a private Google Group list, connecting over 500+ organizations and 1,400+ individuals dedicated to farmed animal welfare. This network includes professionals from pivotal EA-aligned organizations such as Open Philanthropy, Good Food Institute, The Humane League, Animal Charity Evaluators (ACE) - including a wide range of smaller and grassroots-based groups.Why a forum?In response to feedback from our FAST survey, members expressed a strong interest in deeper discussions and improved collaboration. There was also considerable dissatisfaction with the 'reply-all' feature, which led to unintentional spamming of 1,400 members - as a result, FAST decided to broaden its services to include a forum.While the FAST List continues to serve as a private space within the animal advocacy movement, the FAST Forum is open to the public to foster greater engagement, particularly from those involved in the EA and other closely-aligned movements.What should be posted there?Echoing the EA Forum's Animal Welfare topic's role which provides a space for organizations to announce initiatives, discuss promising new ideas, and constructively critique ongoing work - FAST's platform serves as a dedicated hub for in-depth discussions on animal advocacy and related topics.It aims to enable nuanced debates and collaboration on key issues such as alternative proteins, grassroots strategy, corporate campaigns, legal & policy work, among others.What shouldn't be posted there?Discussions related to ongoing investigations or internal strategy, especially regarding campaigns or initiatives not yet public, should not be shared on the forum to safeguard the confidentiality and security of those efforts.Why not use the EA Forum?While the EA Forum is a valuable resource for animal advocacy dialogue, the FAST forum is designed to foster a more focused and close-knit community. The EA Forum's broad spectrum of topics and distinct cultural norms can be intimidating for some, making it challenging for those specifically focused on animal advocacy to find and engage in targeted conversations.This initiative mirrors other communities such as theAI Alignment Forum, which serve to concentrate expertise and foster discussions in a critically important area. With that in mind, we strongly encourage members to continue sharing key content on the EA Forum for visibility and cross-engagement within the broader EA community.[1]Where do I start?Feel free to join us over at theAnimal Advocacy Forum and become an active participant in our growing community.[2] To get started, simply register, complete your profile, and start or contribute to discussions that match your interests and expertise. This is also a great opportunity to introduce yourself and share insights about the impactful work you're doing.Thank you!Thank you to the organizations and individuals who have provided invaluable feedback and support for the forum and FAST's rebranding efforts, includingAnimal Charity Evaluators,Veganuary,ProVeg International,Stray Dog Institute,Animal Think Tank,Freedom Food Alliance,GFI, and theAVA Summit.Also, a big...

]]>
David van Beveren https://forum.effectivealtruism.org/posts/v3AjqpSfEmNzxDQBc/introducing-the-animal-advocacy-forum-a-space-for-those-1 Tue, 06 Feb 2024 08:01:14 +0000 EA - Introducing the Animal Advocacy Forum - a space for those involved or interested in Animal Welfare & related topics by David van Beveren David van Beveren https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:32 no full 5
k4LuBacYvv9QarWti EA - A review of how nucleic acid (or DNA) synthesis is currently regulated across the world, and some ideas about reform (summary of and link to Law dissertation) by Isaac Heron Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A review of how nucleic acid (or DNA) synthesis is currently regulated across the world, and some ideas about reform (summary of and link to Law dissertation), published by Isaac Heron on February 6, 2024 on The Effective Altruism Forum.This is a post to share my Law Honours dissertation (link above) about the regulation of nucleic acid synthesis screening at an international level. I have also set out a summary below of the paper for those who do not have time/do not want to read the whole thing. This summary doesn't necessarily reflect the structure and emphasis of the paper, but instead focuses on some of the key insights I identified in my research which those on this forum who have an interest in biosecurity are unlikely to have already seen.I'm hoping to submit this for publication within the next few months, so if anyone has feedback for how to improve it (especially if I have some things wrong about the state of the law in each country I studied) that would also be appreciated. This article was also completed early in October 2023 so parts of it may be out of date, which it would be good to have highlighted (I am aware, for example, that it does not cover President Biden's Executive Order which includes provisions requiring the creation of future guidelines for nucleic acid synthesis screening).Introduction to the risks from nucleic acid synthesis (NAS)I won't cover this in much detail since introductions to the topic can be accessed elsewhere on this forum[1] and in the media,[2] except to give the following background:Nucleic acid synthesis (NAS) refers to the creation of DNA or other genetic molecules (e.g. RNA) artificially. This is done by several large companies and dozens of small ones across the world, who then deliver their product to researchers of various kinds. I will refer to NAS rather than DNA synthesis for the rest of this post because I think this broader category is what we really want to cover.The NAS process is currently undergoing rapid change, with new enzymatic techniques[3]potentially making it much cheaper to produce nucleic acids, especially at scale.Desktop synthesisers, which allow users to generate the sequences at their own labs, are also dramatically improving to the point that they may begin to replace the existing model where synthesis is mostly outsourced to private companies.[4]There are several publicly available databases of genetic sequences, which include various pathogenic sequences.The field of synthetic biology, which applies engineering and computer science tools to "programme" biology, may make it easier over time for those who have obtained synthetic nucleic acids to then use these sequences to recreate existing harmful pathogens (e.g. smallpox) or engineer even more dangerous pathogens.[5]Given these risks, it is often argued that NAS companies should screen their orders for potentially harmful sequences and screen their customers to ensure they are trustworthy. Although there are good argumentsfor why the perceived risk may be overblown, I think this is clearly something they should do. My dissertation focuses on the best way to ensure that this happens.Current well-known points regarding NAS regulationWhat I see as the key components of the international regulatory system for NAS are as follows:The International Gene Synthesis Consortium - a set of large gene synthesis companies which have joined together and agreed to screen their orders according to the Harmonised Screening Protocol. IGSC members agree to screen their orders against a shared database of sequences of concern derived from organisms listed on various official lists of organisms of concern.Two particularly important lists come from the Australia Group and the United States Select Agents and Toxins List.The United States Department o...

]]>
Isaac Heron https://forum.effectivealtruism.org/posts/k4LuBacYvv9QarWti/a-review-of-how-nucleic-acid-or-dna-synthesis-is-currently Tue, 06 Feb 2024 02:31:25 +0000 EA - A review of how nucleic acid (or DNA) synthesis is currently regulated across the world, and some ideas about reform (summary of and link to Law dissertation) by Isaac Heron Isaac Heron https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 25:01 no full 6
cZPgDKjRXmstfEeAt EA - Three tricky biases: human bias, existence bias, and happy bias by Steven Rouk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Three tricky biases: human bias, existence bias, and happy bias, published by Steven Rouk on February 5, 2024 on The Effective Altruism Forum.IntroductionWhile many types of biases are more commonly known and accounted for, I think there may be three especially tricky biases that influence our thinking about how to do good:Human BiasWe are all human, which may influence us to systematically devalue nonhuman sentience.Existence BiasWe all already exist as biologically evolved beings, which may influence us to systematically overvalue the potential future existence of other biologically evolved beings.[1]Happy BiasWe are relatively happy[2] - or at least we are not actively being tortured or experiencing incapacitating suffering while thinking, writing, and working - which may influence us to systematically undervalue the importance of extreme suffering.[3]Like other biases, these three influence our thinking and decision making unless we take steps to counteract them.What makes these biases more difficult to counter is the fact that they are universally held by every human working on doing good in the world, and it's difficult to see how anyone thinking about, writing about, and working on issues of well-being and suffering could not have these qualities - there is no group of individuals without these qualities who can advocate for their point of view.The point of this post is not to resolve these questions, but rather to prompt more reflection on these tricky biases and how they may be skewing our thinking and work in specific directions.[4]For those who are already aware of and accounting for these biases, bravo! For the rest of us, I think this topic deserves at least a little thought, and potentially much more than a little, if we wish to increase the accuracy of our worldview. If normal biases are difficult to counteract, these are even more so.Examples of How These Biases Might Affect Our WorkIf we ask ourselves, "How might these three biases affect someone's thinking about how to do good?", some answers we come up with are things that may be present in our EA community thought, work, and allocation of resources.[5]This could indicate that we have not done enough work to counteract these biases in our thinking, which would be a problem if moral intuitions are the hidden guides behind much of our prioritization (as has been suggested[6]). If our moral intuitions about fundamental ethical concepts are being invisibly biased by our being human, existing, and being relatively happy, then our conclusions may be heavily skewed. This is still true for those who use quantitative or probabilistic methods to determine their priorities, since once again moral intuitions are frequently required when setting many different values regarding moral weights, probabilities, etc.When looked at through the lens of moral uncertainty[7], we could say that these biases would skew our weights or assessments of different moral theories in predictable directions.Here are some specific examples of how these biases might show up in our thinking and work. In many cases, there is a bit more information in the footnotes.Human BiasHuman bias would influence someone to focus the majority of their thinking and writing on human well-being.[8]Human bias would lead the majority of funding and work to be directed towards predominantly-human cause areas.[9][10]Human bias would influence someone to set humans as the standard for consciousness and well-being, with other beings falling somewhere below humans in their capacities.Human bias would influence someone to grant more weight to scenarios where humans either are or become the majority of moral value as a way of rationalizing a disproportionate focus on humans.Human bias would influence someone to devalue work that focuses ...

]]>
Steven Rouk https://forum.effectivealtruism.org/posts/cZPgDKjRXmstfEeAt/three-tricky-biases-human-bias-existence-bias-and-happy-bias Mon, 05 Feb 2024 06:29:47 +0000 EA - Three tricky biases: human bias, existence bias, and happy bias by Steven Rouk Steven Rouk https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:04 no full 9
RRDAzpi6vS8ARvsDF EA - Introducing the Effektiv Spenden "Defending Democracy" fund by Sebastian Schienle Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Effektiv Spenden "Defending Democracy" fund, published by Sebastian Schienle on February 7, 2024 on The Effective Altruism Forum.On January 10, the media platform CORRECTIVpublished a report on a secret far-right meeting in Germany in November 2023. The report could mark a turning point. Since then, more than 2 million people havetaken part in demonstrationsagainst the extreme right and in defense of our democracy, making them some of the largest demonstrations in Germany in recent decades.At Effektiv Spenden, we have long considered the defense and promotion of democracy to be an important cause area. And it has also received some (limited) attention in other parts of the EA community - see related materials e.g., from 80,000 hours (related topicshere andhere), Rethink Priorities; Founders Pledge; Open Philanthropy; EIP (via its focus on institutional decision-making); and forum postshere and here. However, a systematic mapping and - more importantly - evaluation of interventions is currently lacking, making it difficult to develop recommendations for effective giving. With the generous support of some of our donors, we have therefore helped to launch a new charity evaluator,Power for Democracies, to fill this gap.To respond to the current surge of interest and momentum among both the general public in Germany and our donors, we feel a responsibility to share our initial findings - with all their limitations - in order to guide donors interested in supporting promising interventions that can make a difference in the short term in the specific German context. Therefore, we have launched a new fund called"Defending Democracy" on effektiv-spenden.org.Despite the speculative nature of our recommendations and fund allocations, we believe we can:Guide donors who are already committed to supporting this cause area to achieve significantly greater impact.Encourage those (potential) donors who are interested in the cause area but have been reluctant to give due to the apparent lack of research and evidence-based recommendations.Use the current momentum to introduce more donors to the concept of effective giving, and thereby create more effective giving overall.However, we also see potential downside risks that could reduce our overall impact:A dilution of the concept of effective giving overall by introducing a new cause area that is less well researched and currently more speculative. Low risk: While our understanding of the comparative impact of individual interventions is still limited, the literature is fairly clear on the critical importance of well-functioning democracies for maximizing key societal outcomes such as health and development, peace and security, scientific progress, or economic development. In addition, we launched the new fund as a "beta" version to help our donors understand the increased uncertainty.A shift in donations from better-researched cause areas and interventions to our more speculative Democracy Fund. Medium risk: We expect the "beta" label to mitigate this risk as well. In addition, we explicitly communicate to our existing donors (e.g. through our newsletter) that we recommend the new fund only for additional donations and discourage the reallocation of existing or planned commitments.Overall, we expect the benefits of the new fund to outweigh the potential risks. However, we will closely monitor if/how our new offering may divert funds from other cause areas and will continually reevaluate the need to make potential adjustments. (Including closing the fund if necessary).If you have any questions or comments about the new fund, please feel free to contact us directly atinfo@effektiv-spenden.org. Similarly, if you are interested in exploring major giving to strengthen democracy internationally (and partic...

]]>
Sebastian Schienle https://forum.effectivealtruism.org/posts/RRDAzpi6vS8ARvsDF/introducing-the-effektiv-spenden-defending-democracy-fund Wed, 07 Feb 2024 17:17:33 +0000 EA - Introducing the Effektiv Spenden "Defending Democracy" fund by Sebastian Schienle Sebastian Schienle https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:47 no full 3
buze9djYDYggqoSRs EA - Diet that is vegan, frugal, time-efficient, ~evidence-based and ascetic: An example of a non-Huel EA diet? by Ulrik Horn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Diet that is vegan, frugal, time-efficient, ~evidence-based and ascetic: An example of a non-Huel EA diet?, published by Ulrik Horn on February 6, 2024 on The Effective Altruism Forum.TL;DRI am posting this as I think the diet I am following might be suited to perhaps at least a few other EAs, especially those that are looking for a somewhat "optimized" diet while being hesitant about Huel. My diet aims to be vegan, affordable, evidence-based, time-efficient and is quite ascetic. The intersection of these criteria seems close to EA and also different from how most people think about their diet. Therefore, I thought perhaps posting this might be helpful for some EAs who have thought very little about this but would like to learn about more optimal diets.Moreover, I am interested in feedback from others who have done other or more research and come to other/supplemental findings - what am I missing? I have no expertise in dietary sciences and also have not done deep research as explained in the section on methodology.This post/diet might not be for you if you:Require novelty in your food (i.e. not eating the same handful of different items week after week)Derive a lot of well-being from eating good-tasting food (my diet is not unappetizing but it does require the consumer not to derive much life satisfaction from frequently eating good-tasting food)To keep this post short I will describe the diet briefly so please ask clarifying questions in the comments.A reason to be skeptical of Huel is that the evidence is lacking. As far as I understand, the only diet with considerable evidence is the Mediterranean diet as a whole. This is why, as I explain below, I am trying to make my diet as conformant as possible with the Mediterranean diet.The diet itselfThe diet consists of the following items and quantities. Note that this is a daily average, I do not consume all items every day. Instead, I aim to consume them all over a week such that the daily average ends up close to the following:Then some notes on how to make this more time-efficient/ascetic:Once a week I lightly (5-7 minutes) steam 4-5 crowns of broccoli, blend with olive oil and keep in the fridge to be eaten over 5 days (2 days a week are without vegetables due to my concerns about extended fridge life of this puree) during a week. I find high-powered blenders required to properly cut the stems.The legumes are just the canned type and I just drain, rinse and eat out of the box.Based on whether I think I need more carbs or more proteins, I change the proportions of the following and eat as much as possible after having consumed the other items:The legumesOats, wheat/spelt and rice. I pick whatever is most convenient in terms of "form factor" such as pasta, bread or just plain cooked rice. I usually just dip bread in olive oil, or sprinkle olive oil on top of the rice). Sometimes I sprinkle some chili, squeeze some lemon and sprinkle some soy sauce and olive oil on top of rice or pasta - I guess I am not a complete ascetic hahaMy analysis indicates I might be short on vitamin D and B12 from the above, so I take these daily as supplements. I also take algal omega 3 in the form of DHA and EPA as the diet lacks in this (I think it only contains the ALA form that is much less bioavailable and the Mediterranean diet includes a lot of fish) and this is somewhat likely to be important for both short-term and long-term well-beingI also consume some fruit (perhaps equivalent to 5 oranges a week). Nutritionally, this is perhaps not strictly needed according to the calcs below, but as I am inspired by the Mediterranean diet and I am sure most people in those studies ate some fruit, I eat whatever and whenever convenient.Please note that the choice of items above is based on analysis as explained below. There ...

]]>
Ulrik Horn https://forum.effectivealtruism.org/posts/buze9djYDYggqoSRs/diet-that-is-vegan-frugal-time-efficient-evidence-based-and Tue, 06 Feb 2024 20:41:41 +0000 EA - Diet that is vegan, frugal, time-efficient, ~evidence-based and ascetic: An example of a non-Huel EA diet? by Ulrik Horn Ulrik Horn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:30 no full 8
dy5h9Ly8osZEiFkru EA - Tragic Beliefs by tobytrem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tragic Beliefs, published by tobytrem on February 8, 2024 on The Effective Altruism Forum.I'm posting this as part of the Forum's Benjamin Lay Day celebration - consider writing a reflection of your own! The "official dates" for this reflection are February 8 - 15 (but you can write about this topic whenever you want).TL;DR:Tragic beliefs are beliefs that make the world seem worse, and give us partial responsibility for it. These are beliefs such as: "insect suffering matters" or "people dying of preventable diseases could be saved by my donations".Sometimes, to do good, we need to accept tragic beliefs.We need to find ways to stay open to these beliefs in a healthy way. I outline two approaches, pragmatism and righteousness, which help, but can both be carried to excess.Why I ignored insects for so longI've been trying not to think about insects for a while. My diet is vegan, and sometimes I think of myself as a Vegan. I eat this way because I don't want to cause needless suffering to animals, and as someone interested in philosophy, as a human, I want to have consistent reasons for acting. You barely need philosophy to hold the belief that you shouldn't pay others to torture and kill fellow creatures. But insects? You often kill them yourself, and you probably don't think much of it.I ignored insects because the consequences of caring about them are immense. Brian Tomasik, a blogger who informed some of my veganism, has little capacity for ignoring. Hewrote aboutdriving less, especially when roads are wet, avoiding foods containingshellac, never buyingsilk.But Brian can be easy to ignore if you're motivated to. He is so precautionary with his beliefs that he is at least willing to entertain the idea of moral risks of killing video game characters. When a belief is inconvenient, taking the path of least resistance and dismissing the author, and somehow with this, the belief itself, is tempting.But last year, at EAG London, I went to a talk about insect welfare by a researcher from rethink priorities, Meghan Barrett. She is a fantastic speaker. Her argument in the talk was powerful, and cut through to me. She reframed insects[1] by explaining that, because of their method of respiring (through their skins[2]) , they are much smaller today than they were for much of their evolution. If you saw the behaviour that insects today exhibit in animals the size of dogs or larger, it would be much harder to dismiss them as fellow creatures.Many insects do have nociceptors[3], or something very similar, many of them exhibit anhedonia (no longer seeking pleasurable experiences) after experiencing pain, many of them nurse wounds. If you are interested, read more in her own wordshere. She ended the talk by extrapolating the future of insect farming, which is generally done without any regard for their welfare. The numbers involved were astonishing. By the end, the familiar outline of an ongoing moral tragedy had been drawn, and I was bought in.Why did it take so long for me to take insect suffering seriously, and why did Meghan's talk make the difference? I think this is because the belief that insect suffering is real is a tragic belief.What is a tragic belief?I understand a tragic belief as a belief that, should you come to believe it, will make you:a) Knowingly a part of causing great harms, andb) A resident of a worse world.The problem is, some beliefs are like this. It's easier for us to reject them. Perhaps it is healthy to have a bias against beliefs like this. But, if we don't believe them, if we avoid them because they are difficult to embrace even though they are true, we will continue to perpetuate tragedies.So we should find a way to stay open to tragic beliefs, without making the world seem too tragic for us to act.How can we open ourselves up ...

]]>
tobytrem https://forum.effectivealtruism.org/posts/dy5h9Ly8osZEiFkru/tragic-beliefs Thu, 08 Feb 2024 18:49:21 +0000 EA - Tragic Beliefs by tobytrem tobytrem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:44 no full 1
cpuFnLtppbsLKcbbq EA - Ambitious Impact (AIM) - a new brand for Charity Entrepreneurship and our extended ecosystem! by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ambitious Impact (AIM) - a new brand for Charity Entrepreneurship and our extended ecosystem!, published by CE on February 8, 2024 on The Effective Altruism Forum.TLDR: Given Charity Entrepreneurship's recent scaling, we are changing our brand to call our extended ecosystem "Ambitious Impact (AIM)." Our newAIM umbrella brand will include the classicCE program as well as recent additional programs connected tograntmaking,research, andeffective giving. We are also planning to launch new programs soon. We feel AIM being able to create onramps for other career paths (similar to what we have done for nonprofit entrepreneurship) is the most plausible way of doubling our impact.A quick history of Charity EntrepreneurshipInspired by the early success of a few nonprofits identified by evaluators such as GiveWell, we decided to take a systematic approach to researching and then launching new impact-focused nonprofits (Charity Science Health, Fortify Health).After some initial successes, Charity Entrepreneurship was started in 2018 as a formal Incubation Program to get more field-leading charities started.31 projects were founded over five years, with an approximate growth of ten nonprofits a year in the upcoming year.In 2023, CE extended its impact through the Impactful Grantmaking program (potentially impacting up to $10M in funding in its first year).In late 2023, CE internally determined the best way to maximize our impact further would be to grow horizontally, focusing on programs for several career paths (e.g., launching a Research Training Program and Effective Giving Incubation).That takes us to about now, where Charity Entrepreneurship is still our topline brand and identity; however, we have a growing number of impact-focused programs that are not connected to directly founding a nonprofit.What will AIM look like going forwardOur plan is to have a more cross-cutting umbrella brand that will represent our impact-focused ecosystem, with all our programs under one brand.What do we expect to change?Names and websites of the new programs. For example, some will be soon renamed (e.g., so "Impactful Grantmaking" will become "AIM Grantmaking.")All of our other programs will be moved off the CE site onto their own domains.We will also create a few more centralized resources that cross-cut our programs (e.g., we will have an AIM blog instead of one for each program and a joint newsletter).You can also expect us to launch more new programs (with the next one launching in late 2024)What do we expect to stay the same?In short, most things.We still plan on keeping the samevalues and ways of working.We expect that most of our resources will continue to go into our Charity Entrepreneurship program for the foreseeable future.The Charity Entrepreneurship Incubation Program brand/website/newsletter will all continue, as well as other twice-yearly programs.We are not making major personnel or staffing changes and do not expect AIM to change dramatically in staff size.Why we are launching AIMCreating more good in the world: Ultimately, AIM's goal is to have the most impact while tackling the biggest world problems. Although we feel the career path of founding a nonprofit is among the highest impact ones, we also think we can contribute a lot to building other career pathways. We noticed this area as a gap in the ecosystem and feel like filling it is one of the best ways to cause more good in the world.Talent absorbency: This change indicates our long-term direction, with multiple programs serving impact-minded individuals. We have noticed more and more talented people who, although they might not be a perfect fit for the CE Incubation Program, would excel in adjacent programs. We are aware that nonprofit entrepreneurship is ultimately alow-absorbency career...

]]>
CE https://forum.effectivealtruism.org/posts/cpuFnLtppbsLKcbbq/ambitious-impact-aim-a-new-brand-for-charity Thu, 08 Feb 2024 17:52:13 +0000 EA - Ambitious Impact (AIM) - a new brand for Charity Entrepreneurship and our extended ecosystem! by CE CE https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:37 no full 3
dM93vHvLTgpk8pLSX EA - Celebrating Benjamin Lay (died on this day 265 years ago) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Celebrating Benjamin Lay (died on this day 265 years ago), published by Lizka on February 8, 2024 on The Effective Altruism Forum.Quaker abolitionistBenjamin Lay died exactly 265 years ago today (on February 8, 1759). I'm using the anniversary of his death to reflect on his life and invite you to join me by sharing your thoughts sometime this week.Lay was a radical anti-slavery advocate and an important figure in theQuaker abolitionist movement. He's been described as a moral weirdo; besides viewing slavery as a great sin, he opposed the death penalty, was vegetarian, believed that men and women were equal in the eyes of God, and more. He didn't hide his views and was known for his "guerrilla theater" protests, which included splashing fake blood on slave-owners and forcing people to step over him as they exited a meeting. Expulsion from various communities, ridicule for his beliefs or appearance (he had dwarfism), and the offended sensibilities of those around him didn't seem to seriously slow him down.Consider sharing your thoughts this week (February 8-15)!You could share a post, aQuick Take, or simply comment here. (If you post something, you could also link to this post and invite readers to share their own thoughts.[1])Here are a few discussion prompts, in case they help (feel free to write about whatever comes to mind, though!):How can we develop the courage to be "morally weird"?How can we avoid missing potentialongoing moral catastrophes (or get more moral clarity)?When are disruptive approaches to moral change or advocacy more useful than "polite" or collaborative ones? (When are they less useful?)In the rest of this post, I share a brief overview of Benjamin Lay's famous protests , life and partnership with Sarah Lay (a respected Quaker minister and fellow abolitionist) , and how their work fits into the broader history of slavery .I should flag that I'm no expert in Lay's life or work - just compiling info from ~a day of reading.Protests against slavery: shocking people into awareness"Over the course of the twenty-seven years that he lived in Pennsylvania, Lay harangued the Philadelphia Quakers about the horrors of slavery at every opportunity, and he did so in dramatic style."Will MacAskill in Chapter Three ofWhat We Owe the FutureLay's famous protests illustrate his "dramatic style" (and how little he cared about the opinion of others). Here are some examples:1738: At the biggest event of the Philadelphia Yearly Meeting, Lay showed up in a great coat and waited his turn to speak. When the time came, Lay rose and announced in a "booming" voice: "Oh all you Negro masters who are contentedly holding your fellow creatures in a state of slavery, . . .you might as well throw off the plain coat as I do." He then threw off his coat, revealing that he was dressed in a military uniform and holding a sword and a book: "It would be as justifiable in the sight of the Almighty, who beholds and respects all nations and colours of men with an equal regard, if you should thrust a sword through their hearts as I do through this book!" When Lay plunged his sword through the book, it started gushing red liquid.In preparation for the event, Lay had hollowed out the book and inserted an animal bladder filled with bright red pokeberry juice. As he finished speaking, he splattered the fake blood on the slave owners present.Smithsonian andWWOTF)One Sunday morning he stood at a gateway to the Quaker meetinghouse, knowing all Friends would pass his way. He left "his right leg and foot entirely uncovered" and thrust them into the snow. Like the ancient philosopher Diogenes, who also trod barefoot in snow, he again sought to shock his contemporaries into awareness. One Quaker after another took notice and urged him not to expose himself to the freezing col...

]]>
Lizka https://forum.effectivealtruism.org/posts/dM93vHvLTgpk8pLSX/celebrating-benjamin-lay-died-on-this-day-265-years-ago Thu, 08 Feb 2024 15:38:15 +0000 EA - Celebrating Benjamin Lay (died on this day 265 years ago) by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 22:51 no full 4
wHugJrfG9fL3bk47M EA - Upcoming changes to the EV US and EV UK leadership teams by Rob Gledhill Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Upcoming changes to the EV US and EV UK leadership teams, published by Rob Gledhill on February 8, 2024 on The Effective Altruism Forum.I wanted to provide an update about the leadership teams of Effective Ventures Foundation USA, Inc. and Effective Ventures Foundation (UK).On EV UK's board, Tasha McCauley and Claire Zabel will be stepping down from their trustee roles within the coming weeks. Tasha has served on the EV UK board since 2021 and Claire since 2019, and both originally wanted to step down from these roles approximately a year ago. They decided to stay on to guide EV through a trying time, determine future plans for the organization, and finalize our trustee recruitment efforts. EV UK is extraordinarily grateful for the service that both of them have provided over their tenures, and especially in the months since FTX's collapse.To fill their vacancies, Eli Rose from the EV US board will be moving over to the EV UK board, and he will be joined byJohnstuart Winchell before the end of February. Johnstuart is the Founder and Lead ofGood Impressions, an organization providing free advertising marketing to effective nonprofits[1]. Before starting Good Impressions, he worked at Google and Boston Consulting Group. To see an overview of all EV UK leadership, please visitthis page on our website.On the EV US board, Nicole Ross will also be stepping down from her trustee role in the near future. She has served on the EV US board since 2022, and as with Tasha and Claire, originally wanted to step down earlier but has stayed on to help with the organization's governance until we could find new trustees, pass through some legal challenges, and set a course for EV's future.EV US is immensely thankful for everything that Nicole has given to the organization and the larger EA community during her term. She will remain at EV US in her capacity as the Head of Community Health at CEA.Anna Weldon joined the EV US board on February 1st. Anna is the Director of Internal Operations at Open Philanthropy, and she previously worked as Director of Human Resources at Buffalo Exchange, a US-based recycled clothing retailer. She's guided workplaces in the areas of manager development, change management, and organizational restructuring. An additional trustee will be joining the EV US board shortly, and we will make an announcement once their appointment has been confirmed.Finally, while Zach will be assuming the role of CEO of CEA, he will continue to serve as CEO of EV US. In this capacity, Zach will focus on leading CEA but retain his oversight responsibilities of EV US. I will continue to serve as EV UK CEO, and Zach and I will consult on what is in the best interests of both EV US and EV UK, and I will also be primarily responsible for the EV Ops team. To see an overview of all of EV US leadership, please visitthis page on our website.^Some of Good Impressions' current clients include projects at EV US and EV UK. While the marketing services that Good Impressions provides are free of charge and therefore this relationship does not meet the bar of a legal Conflict of Interest, Johnstuart will be recused from any decisions that could conflict with his role as a service provider to EV's projects (e.g. during yearly budget approvals).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Rob Gledhill https://forum.effectivealtruism.org/posts/wHugJrfG9fL3bk47M/upcoming-changes-to-the-ev-us-and-ev-uk-leadership-teams Thu, 08 Feb 2024 13:46:54 +0000 EA - Upcoming changes to the EV US and EV UK leadership teams by Rob Gledhill Rob Gledhill https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:19 no full 5
aqDzhMHEw5WvQbmfu EA - LawAI's Summer Research Fellowship - apply by February 16 by LawAI Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LawAI's Summer Research Fellowship - apply by February 16, published by LawAI on February 8, 2024 on The Effective Altruism Forum.Announcing the Institute for Law & AI's 2024 Summer Research Fellowship in Law & AI - apply before EOD Anywhere on Earth, February 16!LawAI (formerly the Legal Priorities Project) are looking for talented law students and postdocs who wish to use their careers to address risks from transformative artificial intelligence, to engage in an 8-12 week long fellowship focused on exploring pressing questions at the intersection of law and AI governance.Fellows will work with their supervisor to pick a research question, and will spend the majority of their time conducting legal research on their chosen topic. They may also assist other LawAI team members with projects, as well as work on their career plans with the assistance of the LawAI team and other AI governance professionals in our network. Fellows will join the team some time between June and October, in a fully remote capacity. We're offering fellows a stipend of $10,000.The following are some examples of topics and questions we'd be particularly keen for fellows to research (though we are open to suggestions of other topics from candidates, which focus on mitigating risks from transformative AI):Liability - How will existing liability regimes apply to AI-generated or -enabled harms? What unique challenges exist, and how can legislatures and courts respond?Existing authority - What powers do US agencies currently have to regulate transformative AI? What constraints or obstacles exist to exercising those powers? How might the major questions doctrine or other administrative law principles affect the exercise of these authorities?First Amendment - How will the First Amendment affect leading AI governance proposals? Are certain approaches more or less robust to judicial challenge? Can legislatures and agencies proactively adjust their approaches to limit the risk of judicial challenge?International institutions - How might one design a new international organization to promote safe, beneficial outcomes from the development of transformative artificial intelligence? What role and function should such an organization prioritize?Comparative law - Which jurisdictions are most likely to influence the safe, beneficial development of AI? What opportunities are being under-explored relative to the importance of law in that jurisdiction?EU law - What existing EU laws influence the safe, beneficial development of AI? What role can the EU AI Act play, and how does it interact with other relevant provisions, such as the precautionary principle under Art. 191 TFEU in mitigating AI risk?Anticipatory regulation - What lessons can be learned from historic efforts to proactively regulate new technologies as they developed? Do certain practices or approaches seem more promising than others?Adaptive regulation - What practices best enable agencies to quickly and accurately adjust their regulations to changes in the object of their regulation? What information gathering practices, decision procedures, updating protocols, and procedural rules help agencies keep pace with changes in technology and consumer and market behaviors?Developing other specific AI-governance proposals - For example: How might a government require companies to maintain the ability to take down, patch, or shutdown their models? How might a government regulate highly capable, but low-compute models? How might governments or private industry develop an effective insurance market for AI?If you're interested in applying, or know of anyone who might be, you can find further details in our application information pack, and apply here before EOD February 16. Feel free to reach out to careers@law-ai.org if you have any questions!Than...

]]>
LawAI https://forum.effectivealtruism.org/posts/aqDzhMHEw5WvQbmfu/lawai-s-summer-research-fellowship-apply-by-february-16 Thu, 08 Feb 2024 07:19:47 +0000 EA - LawAI's Summer Research Fellowship - apply by February 16 by LawAI LawAI https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:43 no full 8
m73rJpKs476zc52xz EA - The Intergovernmental Panel On Global Catastrophic Risks (IPGCR) by DannyBressler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Intergovernmental Panel On Global Catastrophic Risks (IPGCR), published by DannyBressler on February 7, 2024 on The Effective Altruism Forum.SummaryThis post motivates and describes a potential International Panel on Global Catastrophic Risks (IPGCR). The IPGCR will focus only on GCRs: risks that could cause a global collapse of human civilization or human extinction.The IPGCR seeks to fit an important and currently unoccupied niche: an international expert organization whose only purview is to produce expert reports and summaries for the international community on risks that could cause a global collapse of human civilization or human extinction. The IPGCR will produce reports across scientific and technical domains, and it will focus on the ways in which risks may intersect and interact.This will aid policymakers in constructing policy that coordinates and prioritizes responses to different threats, and minimizes the chance that any GCR occurs, regardless of its origin. The IPGCR will work in some areas where there is more consensus among experts and some areas where there is less consensus. Unlike consensus-seeking organizations like the Intergovernmental Panel on Climate Change (IPCC), the IPGCR will not necessarily seek consensus.Instead, it will seek to accurately convey areas of consensus, disagreement, and uncertainty among experts. The IPGCR will draw on leadership and expertise from around the world and across levels of economic development to ensure that it promotes the interests of all humanity in helping to avoid and mitigate potential global catastrophes.You can chat with the post here: Chat with IPGCR (although let me know if this GPT seems unaligned with this post as you chat with it).1. Introduction and RationaleGlobal catastrophic risks (GCRs) are risks that could cause a global collapse of human civilization or human extinction (Bostrom 2013, Bostrom & Cirkovic 2011, Posner 2004). Addressing these risks requires good policy, which requires a good understanding of the risks and options for mitigating them. However, primary research is not enough: policymakers must be informed by objective summaries of the existing scholarship and expert-assessed policy options.This post proposes the creation of the Intergovernmental Panel on Global Catastrophic Risks (IPGCR). The IPGCR is an international organization that synthesizes scientific understanding and makes policy recommendations related to global catastrophic risks. The IPGCR will report on the scientific, technological, and socioeconomic bases of GCRs, the potential impacts of GCRs, and options for the avoidance and mitigation of GCRs.The IPGCR will synthesize previously published research into reports that summarize the state of relevant knowledge. It will sit under the auspices of the United Nations, and its reports will include explicit policy recommendations aimed at informing decision-making by the UN and other bodies. To draw an analogy, the IPGCR does not put out forest fires; it surveys the forest, and it advises precautionary measures to minimize the chance of a forest fire occurring.The IPGCR's reports will aim to be done in a comprehensive, objective, open, and transparent manner, including fully communicating uncertainty or incomplete consensus around the findings. The mechanisms for how this will be accomplished are described throughout this document.The IPGCR draws on best practices from other international organizations and adopts those that best fit within the IPGCR's purview. Like the US National Academy of Sciences, the UK Royal Society, and the Intergovernmental Panel on Climate Change, the IPGCR will primarily operate through expert volunteers from academia, industry, and government, who will write and review the reports.In contrast to these other institutions, the ...

]]>
DannyBressler https://forum.effectivealtruism.org/posts/m73rJpKs476zc52xz/the-intergovernmental-panel-on-global-catastrophic-risks Wed, 07 Feb 2024 16:43:26 +0000 EA - The Intergovernmental Panel On Global Catastrophic Risks (IPGCR) by DannyBressler DannyBressler https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 35:59 no full 13
z9JAh5dMtXcqHnv53 EA - CLR Summer Research Fellowship 2024 by Center on Long-Term Risk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CLR Summer Research Fellowship 2024, published by Center on Long-Term Risk on February 15, 2024 on The Effective Altruism Forum.We, theCenter on Long-Term Risk, are looking for Summer Research Fellows to explore strategies for reducing suffering in the long-term future (s-risks). For eight weeks, you will join our team at our office while working on your own research project. During this time, you will be in regular contact with our researchers and other fellows, and receive guidance from an experienced mentor.You will work autonomously on challenging research questions relevant to reducing suffering. You will be integrated and collaborate with our team of intellectually curious, hard-working, and caring people, all of whom share a profound drive to make the biggest difference they can.We worry that some people won't apply because they wrongly believe they are not a good fit for the program. While such a belief is sometimes true, it is often the result of underconfidence rather than an accurate assessment. We would therefore love to see your application even if you are not sure if you are qualified or otherwise competent enough for the positions listed.We explicitly have no minimum requirements in terms of formal qualifications and many of the past summer research fellows have had no or little prior research experience. Being rejected this year will not reduce your chances of being accepted in future hiring rounds. If you have any doubts, please don't hesitate to reach out (see "Application process" > "Inquiries" below).Purpose of the fellowshipThe purpose of the fellowship varies from fellow to fellow. In the past, have we often had the following types of people take part in the fellowship:People very early in their careers, e.g. in their undergraduate degree or even high school, who have a strong interest in s-risk and would like to learn more about research and test their fit.People seriously considering changing their career to s-risk research who want to test their fit for such work.People interested in s-risk who plan to pursue a research or research-adjacent career and who would like to gain a strong understanding of s-risk macrostrategy beforehand.People with a fair amount of research experience, e.g. from a (partly or fully completed) PhD, whose research interests significantly overlap with CLR's and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.There might be many other good reasons for completing the fellowship. We encourage you to apply if you think you would benefit from the program, even if your reason is not listed above. In all cases, we will work with you to make the fellowship as valuable as possible given your strengths and needs. In many cases, this will mean focusing on learning and testing your fit for s-risk research, more than seeking to produce immediately valuable research output.ActivitiesCarrying out a research project related to one of our priority areas below, or otherwise targeted at reducing s-risks. You will determine this project in collaboration with your mentor, who will meet with you every week and provide feedback on your work.Attending team and Fellowship meetings, including giving occasional presentations on the state of your research.What we look for in candidatesWe don't require specific qualifications or experience for this program, but the following abilities and qualities are what we're looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria.Curiosity and a drive to work on challenging and important problems;Ability to answer complex research questions related to the long-term future;Willi...

]]>
Center on Long-Term Risk https://forum.effectivealtruism.org/posts/z9JAh5dMtXcqHnv53/clr-summer-research-fellowship-2024 "Inquiries" below).Purpose of the fellowshipThe purpose of the fellowship varies from fellow to fellow. In the past, have we often had the following types of people take part in the fellowship:People very early in their careers, e.g. in their undergraduate degree or even high school, who have a strong interest in s-risk and would like to learn more about research and test their fit.People seriously considering changing their career to s-risk research who want to test their fit for such work.People interested in s-risk who plan to pursue a research or research-adjacent career and who would like to gain a strong understanding of s-risk macrostrategy beforehand.People with a fair amount of research experience, e.g. from a (partly or fully completed) PhD, whose research interests significantly overlap with CLR's and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.There might be many other good reasons for completing the fellowship. We encourage you to apply if you think you would benefit from the program, even if your reason is not listed above. In all cases, we will work with you to make the fellowship as valuable as possible given your strengths and needs. In many cases, this will mean focusing on learning and testing your fit for s-risk research, more than seeking to produce immediately valuable research output.ActivitiesCarrying out a research project related to one of our priority areas below, or otherwise targeted at reducing s-risks. You will determine this project in collaboration with your mentor, who will meet with you every week and provide feedback on your work.Attending team and Fellowship meetings, including giving occasional presentations on the state of your research.What we look for in candidatesWe don't require specific qualifications or experience for this program, but the following abilities and qualities are what we're looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria.Curiosity and a drive to work on challenging and important problems;Ability to answer complex research questions related to the long-term future;Willi...]]> Thu, 15 Feb 2024 21:23:41 +0000 EA - CLR Summer Research Fellowship 2024 by Center on Long-Term Risk "Inquiries" below).Purpose of the fellowshipThe purpose of the fellowship varies from fellow to fellow. In the past, have we often had the following types of people take part in the fellowship:People very early in their careers, e.g. in their undergraduate degree or even high school, who have a strong interest in s-risk and would like to learn more about research and test their fit.People seriously considering changing their career to s-risk research who want to test their fit for such work.People interested in s-risk who plan to pursue a research or research-adjacent career and who would like to gain a strong understanding of s-risk macrostrategy beforehand.People with a fair amount of research experience, e.g. from a (partly or fully completed) PhD, whose research interests significantly overlap with CLR's and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.There might be many other good reasons for completing the fellowship. We encourage you to apply if you think you would benefit from the program, even if your reason is not listed above. In all cases, we will work with you to make the fellowship as valuable as possible given your strengths and needs. In many cases, this will mean focusing on learning and testing your fit for s-risk research, more than seeking to produce immediately valuable research output.ActivitiesCarrying out a research project related to one of our priority areas below, or otherwise targeted at reducing s-risks. You will determine this project in collaboration with your mentor, who will meet with you every week and provide feedback on your work.Attending team and Fellowship meetings, including giving occasional presentations on the state of your research.What we look for in candidatesWe don't require specific qualifications or experience for this program, but the following abilities and qualities are what we're looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria.Curiosity and a drive to work on challenging and important problems;Ability to answer complex research questions related to the long-term future;Willi...]]> "Inquiries" below).Purpose of the fellowshipThe purpose of the fellowship varies from fellow to fellow. In the past, have we often had the following types of people take part in the fellowship:People very early in their careers, e.g. in their undergraduate degree or even high school, who have a strong interest in s-risk and would like to learn more about research and test their fit.People seriously considering changing their career to s-risk research who want to test their fit for such work.People interested in s-risk who plan to pursue a research or research-adjacent career and who would like to gain a strong understanding of s-risk macrostrategy beforehand.People with a fair amount of research experience, e.g. from a (partly or fully completed) PhD, whose research interests significantly overlap with CLR's and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.There might be many other good reasons for completing the fellowship. We encourage you to apply if you think you would benefit from the program, even if your reason is not listed above. In all cases, we will work with you to make the fellowship as valuable as possible given your strengths and needs. In many cases, this will mean focusing on learning and testing your fit for s-risk research, more than seeking to produce immediately valuable research output.ActivitiesCarrying out a research project related to one of our priority areas below, or otherwise targeted at reducing s-risks. You will determine this project in collaboration with your mentor, who will meet with you every week and provide feedback on your work.Attending team and Fellowship meetings, including giving occasional presentations on the state of your research.What we look for in candidatesWe don't require specific qualifications or experience for this program, but the following abilities and qualities are what we're looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria.Curiosity and a drive to work on challenging and important problems;Ability to answer complex research questions related to the long-term future;Willi...]]> Center on Long-Term Risk https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:29 no full 1
aPZT66Byw8mCHNnPx EA - Coworking space in Mexico City by AmAristizabal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Coworking space in Mexico City, published by AmAristizabal on February 15, 2024 on The Effective Altruism Forum.TL;DR: We want to know if people/orgs want to use a coworking space in Mexico City. Fill outthis form if this is of interest to you.We are currently running theAI Futures Fellowship in Mexico City. We have had a great experience with the current coworking space we are using,Haab Project.In light of the positive experience and our interest in being able to retain the office space for future iterations of the fellowship, we are considering opening up the office for teams or individuals keen on coworking from Mexico City, especially during times when we are unlikely to use the office space. If this interests you, please fill outthis form. We are currently in an exploratory phase, so completing the form is not a hard commitment. We encourage you to fill out this form even if you don't have high confidence that you will be able to join.Given that this is still very much in the rough, we are unsure what this would look like funding-wise and what services we could offer. That said, here are some of the things we could likely provide:We will have some office equipment like monitors and standing desks since we use these for the fellowship.Haab has a very good restaurant/cafe with 15% discount office space holders (e.g. an americano will go for 45MXN/~2.5USD and you can get a meal for 153MXN/~9USD)Depending on demand, we could secure the main office space on the 5th floor (which is the one we are using for our fellows). It fits ~17 people for coworking and ~35 for talks and events. It has a fantastic view of Amsterdam street (photos below).This office is very popular among other teams in Mexico so this is why we want to know if we can secure it for us and collaborators.Below are some images of the coworking space and the specific office that will be available.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
AmAristizabal https://forum.effectivealtruism.org/posts/aPZT66Byw8mCHNnPx/coworking-space-in-mexico-city Thu, 15 Feb 2024 21:14:19 +0000 EA - Coworking space in Mexico City by AmAristizabal AmAristizabal https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:02 no full 2
sSm3mGe79eSk7dXfR EA - Estimates on expected effects of movement/pressure group/field building? by jackva Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Estimates on expected effects of movement/pressure group/field building?, published by jackva on February 15, 2024 on The Effective Altruism Forum.It seems relatively uncontroversial within EA-grantmaking that field building (and the building of societal pressure groups?) is an effective strategy to induce long-run changes.E.g. a resource frequently referenced in discussions is Teles's "The Rise of the Conversative Legal Movement" as an existence proof for a very large long-run impact of philanthropic money.I am curious whether anyone has done systematic work on using this and other evidence to (1) estimate expected effects (2) base rates of success or (3) anything else of that sort that would inform how we can think about the average (a) tractability and (b) impact of such efforts?Luke Muehlhauser's work on early-movement growth and field-building comes closest, reviewing historical case studies and generally giving the impression that intentional movement / field acceleration is (a) possible, (b) not rocket science (things one would expect to work do work), and (c) can be quite meaningful (playing a major role in shaping and/or accelerating fields).But it doesn't offer much in terms of relative tractability or effectiveness vis-a-vis other interventions, such as funding existing think tanks to do Beltway-style policy advocacy or other surgical interventions that we do a lot of.Broadly, I am trying to understand how to compare funding such work to more surgical interventions, so I am interested both in absolute estimates but also relative comparisons.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
jackva https://forum.effectivealtruism.org/posts/sSm3mGe79eSk7dXfR/estimates-on-expected-effects-of-movement-pressure-group Thu, 15 Feb 2024 21:11:29 +0000 EA - Estimates on expected effects of movement/pressure group/field building? by jackva jackva https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:40 no full 3
3Y7c7MXf3BzgruTWv EA - Social science research we'd like to see on global health and wellbeing [Open Philanthropy] by Aaron Gertler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Social science research we'd like to see on global health and wellbeing [Open Philanthropy], published by Aaron Gertler on February 15, 2024 on The Effective Altruism Forum.Open Philanthropy strives to help others as much as we can with the resources available to us. To find the best opportunities to help others, we rely heavily on scientific and social scientific research.In some cases, we would find it helpful to have more research in order to evaluate a particular grant or cause area. Below, we've listed a set of social scientific questions for which we are actively seeking more evidence.[1] We believe the answers to these questions have the potential to impact our grantmaking. (See also our list ofresearch topics for animal welfare.)If you know of any research that touches on these questions, we would welcome hearing from you. At this point, we are not actively making grants to further investigate these questions. It is possible we may do so in the future, though, so if you plan to research any of these, pleaseemail us.Land Use ReformOpen Philanthropy has been making grants inland use reform since 2015. We believe that more permissive permitting and policy will encourage economic growth and allow people to access higher-paying jobs. However, we have a lot of uncertainty about which laws or policies would be most impactful (orneglected/tractable relative to their impact) on housing production.What are the legal changes that appear to spur the most housing? E.g. can we estimate the effects of removing parking mandates on housing production? How do those compare to the effects of higherFAR or more allowable units?Why we care: We think that permitting speed might be an important category to target, but have high uncertainty about this.What we know: There are a number of different studies of the effects of changes in zoning/land use laws (e.g. see a summaryhere in Appendix A), but we're not aware of studies that attempt to disentangle any specific changes or rank their importance. We suspect that talking to advocates (e.g. CA YIMBY) would be useful as a starting point.Ideas for studying this: It seems unlikely that there have been "clean" changes that only affected a single part of the construction process, but from talking to advocates, it seems plausible that it would be possible to identify changes to zoning codes that primarily affect one parameter more than others. It also seems plausible that this is a topic where a systematic review, combining evidence from many other studies, would be unusually valuable.What is the elasticity of construction with regards to factors like "the likelihood of acquiring permission to build" or "the length of an average permitting delay"?Why we care: We are highly uncertain about how to best encourage more construction, and thus about where to target our grants.What we know: there have been many recent changes to permitting requirements, such as the California ADU law thatrequires cities to respond to permit requests within 60 days and anew law in Florida that requires cities to respond to permit requests quickly or return permitting fees. Thisblog post by Dan Bertolet at Sightline predates those changes, but is the best summary we've seen on the impacts of permitting requirements.Ideas for studying this: one might compare projects that fall right below or above thresholds for permitting review (e.g. SEPA thresholds in Washington state), and try to understand how much extra delay projects faced as a result of qualifying for review. It could also be valuable to analyze the effects of the Florida law (e.g. a difference-in-difference design looking at housing construction in places that had long delays vs. short delays prior to the law passing).Does the value of new housing (in terms of individual earnings gains ...

]]>
Aaron Gertler https://forum.effectivealtruism.org/posts/3Y7c7MXf3BzgruTWv/social-science-research-we-d-like-to-see-on-global-health Thu, 15 Feb 2024 06:37:51 +0000 EA - Social science research we'd like to see on global health and wellbeing [Open Philanthropy] by Aaron Gertler Aaron Gertler https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 30:04 no full 4
EJccSQwfp2gxWGBwp EA - Works in Progress: The Long Journey to Doing Good Better by Dustin Moskovitz [Linkpost] by Nathan Young Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Works in Progress: The Long Journey to Doing Good Better by Dustin Moskovitz [Linkpost], published by Nathan Young on February 14, 2024 on The Effective Altruism Forum.@Dustin Moskovitz has written a piece on his reflections on doing good, EA, FTX and other stuff.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Nathan Young https://forum.effectivealtruism.org/posts/EJccSQwfp2gxWGBwp/works-in-progress-the-long-journey-to-doing-good-better-by Wed, 14 Feb 2024 22:55:04 +0000 EA - Works in Progress: The Long Journey to Doing Good Better by Dustin Moskovitz [Linkpost] by Nathan Young Nathan Young https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:31 no full 6
RjPGfb2Sg5MeDxvfA EA - 80,000 Hours' new series on building skills by Benjamin Hilton Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours' new series on building skills, published by Benjamin Hilton on February 14, 2024 on The Effective Altruism Forum.If we were going to summarise all our advice on how to get career capital in three words, we'd say: build useful skills.In other words, gain abilities that are valued in the job market - which makes your work more useful and makes it easier to bargain for the ingredients of afulfilling job - as well as those that are specifically needed in tackling theworld's most pressing problems.So today, we're launching our series on the most useful skills for making a differencewhich you can find here. It covers why we recommend each skill, how to get started learning them, and how to work out which is the best fit for you.Each article looks at one of eight skill sets we think are most useful for solving the problems we think are most pressing:Policy and political skillsOrganisation-buildingResearchCommunicating ideasSoftware and tech skillsExperience with an emerging powerEngineeringExpertise relevant to a top problemWhy are we releasing this now?We think that many of our readers have come away from our site underappreciating the importance of career capital. Instead, they focus their career choices on having an impact right away.This is a difficult tradeoff in general. Roughly, our position is that:There's often less tradeoff between these things than people think, as good options for career capital often involve directly working on a problem you think is important.That said, building career capital substantially increases the impact you're able to have. This is in part because the impact of different jobs is heavy-tailed, and career capital is one of the primary ways to end up in the tails.As a result, neglecting career capital can lower your long-term impact in return for only a small increase in short-term impact.Young people especially should be prioritising career capital in most cases.We think that building career capital is important even for people focusing on particularly urgent problems - for example, we think thatwhether you should do an ML PhD doesn't depend (much) on your AI timelines.Why the focus on skills?We break down career capital into five components:Skills and knowledgeConnectionsCredentialsCharacterRunway (i.e. savings)We've found that "build useful skills" is a particularly good rule of thumb for building career capital.It's true that in addition to valuable skills, you also need to learn how to sell those skills to others and make connections. This can involve deliberately gaining credentials, such as by getting degrees or creating public demo projects; or it can involve what's normally thought of as "networking," such as going to conferences or building up a Twitter following. But all of these activities become much easier once you have something useful to offer.The decision to focus on skills was also partly inspired by discussions with Holden Karnofsky andhis post on building aptitudes, which we broadly agree with.If you have more questions, take a look atour skills FAQ.How can you help?Pleasetake a look at our new series and, if possible, share it with a friend!We'd love feedback on these pages. If you have any, please do let us know in the comments, or by contacting us atinfo@80000hours.org.Thank you so much!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Benjamin Hilton https://forum.effectivealtruism.org/posts/RjPGfb2Sg5MeDxvfA/80-000-hours-new-series-on-building-skills Wed, 14 Feb 2024 16:07:27 +0000 EA - 80,000 Hours' new series on building skills by Benjamin Hilton Benjamin Hilton https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:22 no full 8
qcDWDnfHdF4JZMLp8 EA - FTX expects to return all customer money; clawbacks may go away by MikhailSamin Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX expects to return all customer money; clawbacks may go away, published by MikhailSamin on February 14, 2024 on The Effective Altruism Forum.New York Times (unpaywalled):When the cryptocurrency exchange FTX declared bankruptcy about 15 months ago, it seemed few customers would recover much money or crypto from the platform. As John Ray III, who took over as chief executive during the bankruptcy, put it, "At the end of the day, we're not going to be able to recover all the losses here." He was countering Sam Bankman-Fried's repeated claims that he could get every customer their money back.Well, it turns out, FTX lawyers told a bankruptcy judge this week that they expected to pay creditors in full, though they said it was not a guarantee and had not yet revealed their strategy.The surprise turn of events is raising serious questions about what happens next. Among them: What does this mean for the lawsuits FTX has filed in an attempt to claw back billions in assets that the company says it's owed?Will the possibility that customers could be made whole be raised at Bankman-Fried's sentencing? Will potential relief for customers help his appeal?[...]Some of the clawback cases involve allegations of fraud, but not all do. Before fraud claims are argued, there is typically a legal fight over whether a company was insolvent at the time of the investment or that the investment led to insolvency. If every FTX creditor stands to get 100 cents on the dollar, the clawback cases that don't involve fraud wouldn't serve much of a financial purpose and may be more difficult to argue, some lawyers say."In theory, clawbacks may go away there," said Eric Monzo, a partner at Morris James who focuses on bankruptcy claims.Court proceedings:we can now cautiously predict some measure of success. Based on our results to date and current projections we anticipate filing a disclosure statement in February describing how customers and general unsecured creditors, customers and general unsecured creditors with allowed claims, will eventually be paid in full. I would like the Court and stakeholders to understand this not as a guarantee, but as an objective.There is still a great amount of work and risk between us and that result, but we believe the objective is within reach and we have a strategy to achieve it.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
MikhailSamin https://forum.effectivealtruism.org/posts/qcDWDnfHdF4JZMLp8/ftx-expects-to-return-all-customer-money-clawbacks-may-go Wed, 14 Feb 2024 12:55:26 +0000 EA - FTX expects to return all customer money; clawbacks may go away by MikhailSamin MikhailSamin https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:18 no full 9
3SvCxuB46so7ynWmx EA - Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do. by Chi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do., published by Chi on February 13, 2024 on The Effective Altruism Forum.As per title. I often talk to people that have views that I think should straightforwardly imply a larger focus on s-risk than they think. In particular, people often seem to endorse something like a rough symmetry between the goodness of good stuff and the badness of bad stuff, sometimes referring to this short post that offers some arguments in that direction.I'm confused by this and wanted to quickly jot down my thoughts - I won't try to make them rigorous and make various guesses for what additional assumptions people usually make. I might be wrong about those.Views that IMO imply putting more weight on s-risk reduction:Complexity of values: Some people think that the most valuable things possible are probably fairly complex (e.g. a mix of meaning, friendship, happiness, love, child-rearing, beauty etc.) instead of really simple (e.g. rats on heroin, what people usually imagine when hearing hedonic shockwave.) People also often have different views on what's good. I think people who believe in complexity of values often nonetheless think suffering is fairly simple, e.g.extreme pain seems simple and also just extremely bad. (Some people think that the worst suffering is also complex and they are excluded from this argument.) On first pass, it seems very plausible that complex values are much less energy-efficient than suffering.(In fact, people commonly define complexity by computational complexity, which translates directly to energy-efficiency.) To the extent that this is true, this should increase our concern about the worst futures relative to the best futures, because the worst futures could be much worse than the best futures.(The same point is made in more detail here.)Moral uncertainty: I think it's fairly rare for people to think the best happiness is much better than worst suffering is bad. I think people often have a mode at "they are the same in magnitude" and then uncertainty towards "the worst suffering is worse". If that is so, you should be marginally more worried about the worst futures relative to the best futures.The case for this is more robust if you incorporate other people's views into your uncertainty: I think it's extremely rare to have an asymmetric distribution towards thinking the best happiness is better in expectation.[1](Weakly related point here.)Caring about preference satisfaction: I feel much less strongly about this one because thinking about the preferences of future people is strange and confusing. However, I think if you care strongly about preferences, a reasonable starting point is anti-frustrationism, i.e. caring about unsatisfied preferences but not caring about satisfied preferences of future people.That's because otherwise you might end up thinking, for example, that it's ideal to create lots of people who crave green cubes and give them lots of green cubes. I at least find that outcome a bit bizarre. It also seems asymmetric: Creating people who crave green cubes and not giving them green cubes does seem bad. Again, if this is so, you should marginally weigh futures with lots of dissatisfied people more than futures with lots of satisfied people.To be clear, there are many alternative views, possible ways around this etc. Taking into account preferences of non-existent people is extremely confusing! But I think this might be an underappreciated problem that people who mostly care about preferences need to find some way around if they don't want to weigh futures with dissatisfied people more highly.I think point 1 is the most important because many people have intuitions around complexity of value. None of these po...

]]>
Chi https://forum.effectivealtruism.org/posts/3SvCxuB46so7ynWmx/complexity-of-value-but-not-disvalue-implies-more-focus-on-s Tue, 13 Feb 2024 22:55:47 +0000 EA - Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do. by Chi Chi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:11 no full 14
SCxHg9YszEDKkMzyz EA - Announcing leadership changes at One for the World by Emma Cameron Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing leadership changes at One for the World, published by Emma Cameron on February 13, 2024 on The Effective Altruism Forum.Interim Managing Director AnnouncementAfter over four years at the helm of One for the World, our Executive Director, Jack Lewars, is stepping down at the end of this month. In his place, our Board of Directors has named me, Emma Cameron, as One for the World's interim Managing Director.As we say goodbye to Jack, I am excited to enter this new interim Managing Director role at One for the World.I have spent the past 10+ years of my career gaining experience in community organizing, people management, corporate campaigns, and fundraising. I honed this experience in areas ranging from labor union organizing to farmed animal advocacy.I often find myself thrilled at roles that allow me to balance multiple 'hats' and responsibilities, and I think this dynamic role gives me precisely that kind of opportunity, where I will be balancing the needs of our chapters, managing the team, and shepherding the organization's mission as a whole.I'm looking ahead in the coming months to an organization that plans to double down on its origins in the next year. We intend to expand our presence at top MBA and law schools in the US. After successfully trialing our chapter model in the corporate setting at ten companies this year, we plan to expand our corporate presence within tech, finance, consulting, and other industries.As a whole, it will be exciting to play a significant role in shaping the future of our organization in this interim period. I am grateful to our board for the opportunity and their trust.To our 1% Pledgers, donors, and supporters, One for the World remains committed to ending extreme poverty by building a movement of people who fully embrace their capacity to give. We are excited about the opportunities and fresh perspectives that come with new leadership. Of course, our door is open if you would like to connect with me or another team member. You can reach me atemma.c@1fortheworld.org. You're also welcome to book a time to chat with me about taking the 1% Pledge, effective giving, or anything else related to One for the World.Executive Director Jack Lewars steps downWe want to thank Jack profusely for his service to One for the World. Jack was selected as our inaugural Executive Director for our formerly volunteer-led organization in 2019. Since joining, he has grown One for the World's annual donation volume more than 7 times. He built One for the World into a global organization with chapters in the United States, Canada, the United Kingdom, and Australia.He created our corporate fundraising strategy from the ground up. When Jack joined, we'd not made a single corporate presentation. We have delivered more than 100 at some of the most prestigious companies in the world, with corporate donors contributing over $1 million in donations in the last year. Jack has done the unglamorous but vital work to transform a volunteer network into an established nonprofit with an international reach.Jack boldly steered One for the World through the COVID-19 pandemic when our core program had to effectively stop completely across campuses worldwide. Jack prioritized the organization's internal culture and fostered an inclusive environment for our team. His tremendous success advising our donors on their philanthropic legacies here at One for the World will undoubtedly serve him well in his next opportunity.Building on his experience at One for the World, Jack is launching a consultancy offering bespoke donation advice for large donors. One for the World is excited about this addition to the effective giving space, and we look forward to continuing to work with Jack in his new role.Applications for Executive DirectorThe search committee r...

]]>
Emma Cameron https://forum.effectivealtruism.org/posts/SCxHg9YszEDKkMzyz/announcing-leadership-changes-at-one-for-the-world Tue, 13 Feb 2024 13:05:56 +0000 EA - Announcing leadership changes at One for the World by Emma Cameron Emma Cameron https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:54 no full 17
prS8qwSisiR4XgrqB EA - My lifelong pledge to give away 10% of my income each year (and where I donated in 2023) by James Özden Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My lifelong pledge to give away 10% of my income each year (and where I donated in 2023), published by James Özden on February 13, 2024 on The Effective Altruism Forum.Note: Cross-posted from my blog. There is probably not much new here for long-time members of the EA community, some of whom have been giving 10%+ of their income for over a decade. However, I thought it might be interesting for newer EA Forum readers, and for folks who are more left-wing/progressive than average (like me), to see what arguments are compelling to someone of that worldview.In November 2022, I took theGiving What We Can pledge to give away 10% of my pre-tax income, to the most effective charities and opportunities, for the rest of my life. I'm very proud of taking the pledge, and feel great about finishing my first full year! I wanted to share some thoughts on how it's been for me, as well as some concrete places I donated to.Broadly, I feel like I've been committed to doing the most good (whatever that means) for several years now, but it took some time for me to get going with my donations. One big factor is that I haven't been earning too much, especially when I was working full-time with Animal Rebellion/Extinction Rebellion, where people used to get paid between £400-1000 per month. Otherwise, I thought it would be a significant financial burden, even when my salary increased, that would make it difficult for me to build a financial safety net.But primarily, it's a reasonably big commitment, so I think taking some time to stew on it can be useful.Despite this, I've been surprised by how quickly the Giving What We Can (GWWC) pledge has become a part of my identity. Now, I'm so happy that I've pledged, and feel amazing that I'm able to support great projects to improve the world (you can tell because I'm already preaching about it - sorry not sorry).Importantly, I don't think donating is the only way for people to improve the world, and not necessarily the most impactful. But, I don't see it as an either/or, but rather a both/and. Simply, I don't think the decision is whether to dedicate your career to highly impactful work OR dedicate your free time (or career) to political activism OR donate some proportion of your income to effective projects.Rather, I think one can both pursue a high-impact career and give a lot, as donating often gives you the ability to have a huge impact with relatively little time investment. Tangibly, I've probably spent between 5-10 hours to donate around £3,000 this year, which I think will have a lot of positive impact with a relatively small time investment on my side (this was helped partially with the use ofexpert funds and my prior knowledge in a given area, but more on that later).However, I want to speak about some of the key points that convinced me to give 10% of my income for the rest of my life, namely:I am better off than 98% of the world, for no great reason besides that I grew up in a wealthy country, and it is a huge travesty if I don't use some percentage of this luck to help others.Donations can have very meaningful impacts on the issues I care about, often far more than other lifestyle choices I might be already making.I think the world would be a much better place if everyone was committed to giving some of their income/wealth, and there's no reason why it shouldn't start with me.(If you just want to see where I donated to in 2023, skip to the bottom).Why I decided to take the pledgeMost people reading this are in the top 5% of wealth globally, and we should do something about itAs someone who has been fairly engaged in progressive political activism, I often hear lots of comments attributing some key problems in the world, whether it's climate change, inequality or poverty, to the richest 1%. However, I think most peopl...

]]>
James Özden https://forum.effectivealtruism.org/posts/prS8qwSisiR4XgrqB/my-lifelong-pledge-to-give-away-10-of-my-income-each-year Tue, 13 Feb 2024 11:18:42 +0000 EA - My lifelong pledge to give away 10% of my income each year (and where I donated in 2023) by James Özden James Özden https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:25 no full 19
pRAjLTQxWJJkygWwK EA - My cover story in Jacobin on AI capitalism and the x-risk debates by Garrison Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My cover story in Jacobin on AI capitalism and the x-risk debates, published by Garrison on February 13, 2024 on The Effective Altruism Forum.Google cofounder Larry Pagethinks superintelligent AI is "just the next step in evolution." In fact, Page, who's worth about $120 billion, has reportedlyargued that efforts to prevent AI-driven extinction and protect human consciousness are "speciesist" and "sentimental nonsense."In July, former Google DeepMind senior scientist Richard Sutton - one of the pioneers of reinforcement learning, a major subfield of AIsaid that the technology "could displace us from existence," and that "we should not resist succession." In a2015 talk, Sutton said, suppose "everything fails" and AI "kill[s] us all"; he asked, "Is it so bad that humans are not the final form of intelligent life in the universe?"This is how I begin thecover story for Jacobin's winter issue on AI. Some very influential people openly welcome an AI-driven future, even if humans aren't part of it.Whether you're new to the topic or work in the field, I think you'll get something out of it.I spent five months digging into the AI existential risk debates and the economic forces driving AI development. This was the most ambitious story of my career - it was informed by interviews and written conversations with three dozen people - and I'm thrilled to see it out in the world. Some of the people include:Deep learning pioneer and Turing Award winner Yoshua BengioPathbreaking AI ethics researchers Joy Buolamwini and Inioluwa Deborah RajiReinforcement learning pioneer Richard SuttonCofounder of the AI safety field Eliezer YudkowksyRenowned philosopher of mind David ChalmersSante Fe Institute complexity professor Melanie MitchellResearchers from leading AI labsSome of the most powerful industrialists and companies are plowing enormous amounts of money and effort into increasing the capabilities and autonomy of AI systems, all while acknowledging that superhuman AI could literally wipe out humanity:Bizarrely, many of the people actively advancing AI capabilities think there's a significant chance that doing so will ultimately cause the apocalypse. A2022 survey of machine learning researchers found that nearly half of them thought there was at least a 10 percent chance advanced AI could lead to "human extinction or [a] similarly permanent and severe disempowerment" of humanity. Just months before he cofounded OpenAI, Altmansaid, "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies."This is a pretty crazy situation!But not everyone agrees that AI could cause human extinction. Some think that the idea itself causes more harm than good:Some fear not the "sci-fi" scenario where AI models get so capable they wrest control from our feeble grasp, but instead that we will entrustbiased,brittle, andconfabulating systems with too much responsibility, opening a more pedestrian Pandora's box full of awful but familiar problems that scale with the algorithms causing them. This community of researchers and advocates - often labeled "AI ethics" - tends to focus on the immediate harms being wrought by AI, exploring solutions involving model accountability, algorithmic transparency, and machine learning fairness.Others buy the idea of transformative AI, but think it's going to be great:A third camp worries that when it comes to AI, we're not actually moving fast enough. Prominent capitalists like billionaire Marc Andreessenagree with safety folks that AGI is possible but argue that, rather than killing us all, it will usher in an indefinite golden age of radical abundance and borderline magical technologies. This group, largely coming from Silicon Valley and commonly referred to as AI boosters, tends to worry far mo...

]]>
Garrison https://forum.effectivealtruism.org/posts/pRAjLTQxWJJkygWwK/my-cover-story-in-jacobin-on-ai-capitalism-and-the-x-risk Tue, 13 Feb 2024 03:05:37 +0000 EA - My cover story in Jacobin on AI capitalism and the x-risk debates by Garrison Garrison https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:23 no full 20
zEMvHK9Qa4pczWbJg EA - On being an EA for decades by Michelle Hutchinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On being an EA for decades, published by Michelle Hutchinson on February 12, 2024 on The Effective Altruism Forum.A friend sent me on a lovely trip down memory lane last week. She forwarded me an email chain from 12 years ago in which we all pretended we left the EA community, and explained why. We were partly thinking through what might cause us to drift in order that we could do something to make that less likely, and partly joking around. It was a really nice reminder of how uncertain things felt back then, and how far we've come.Although the emails hadn't led us to take any specific actions, almost everyone on the thread was still devoting their time to helping others as much as they can. We're also, for the most part, still supporting each other in doing so. Of the 10 or so people on there, for example, Niel Bowerman is now my boss - CEO of 80k. Will MacAskill and Toby Ord both made good on their plans to write books and donate their salaries above £20k[1] a year.And Holly Morgan is who I turned to a couple of weeks ago when I needed help thinking through work stress.Here's what I wrote speculating about why I might drift away from EA. Note that the email below was written quickly and just trying to gesture at things I might worry about in future, I was paying very little attention to the details. The partner referenced became my husband a year and a half later, and we now have a four year old.On 10 February 2012 18:14, Michelle Hutchinson wrote:Writing this was even sadder than I expected it to be.1. Holly's and my taste in music drove the rest of the HIH[2] crazy, and they murdered us both.2. The feeling that I was letting down my parents, husband and children got the better of me, and I sought better paid employment.3. Toby, Will and Nick got replaced by trustees whose good judgement and intelligence I didn't have nearly as much faith in, so I no longer felt fully supportive of the organisation - I was worried it was soon going to become preachy rather than effective, or even dangerous (researching future technology without much foresight).4. I realised there were jobs that wouldn't involve working (or even emailing co-workers) Friday evenings :p5. I remembered how much I loved reading philosophy all day, and teaching tutorials, and unaccountably Oxford offered me a stipendiary lectureship, so I returned to academia.6. [My partner] got lazy career-wise, and I wanted a nice big house and expensive food, so I got a higher paid job.7. I accidentally emailed confidential member information to our whole mailing list / said the wrong thing to one of our big funders / made a wrong call that got us into legal trouble, and it was politely suggested to me that the best way I could help the EA movement was by being nowhere near it.8. When it became clear that although I rationally agreed with the moral positions of GWWC and CEA, most of my emotional motivation for working for them came from not wanting to let down the other [team members], it was decided really I wasn't a nice enough person to have a position of responsibility in such a great organisation.9. [My partner] got a job too far away from other CEA people for me to be able to work for CEA and be with him. I chose him.10. When we had children I took maternity leave, and then couldn't bear to leave the children to return to work.11. I tried to give a talk about GWWC to a large audience, fainted with fright, and am now in a coma.When I wrote the email, I thought it was really quite likely that in 10 years I'd have left the organisation and community. Looking around the world, it seemed like a lot of people become less idealistic as they grow older. And looking inside myself, it felt pretty contingent that I happened to fall in with a group of people who supp...

]]>
Michelle_Hutchinson https://forum.effectivealtruism.org/posts/zEMvHK9Qa4pczWbJg/on-being-an-ea-for-decades wrote:Writing this was even sadder than I expected it to be.1. Holly's and my taste in music drove the rest of the HIH[2] crazy, and they murdered us both.2. The feeling that I was letting down my parents, husband and children got the better of me, and I sought better paid employment.3. Toby, Will and Nick got replaced by trustees whose good judgement and intelligence I didn't have nearly as much faith in, so I no longer felt fully supportive of the organisation - I was worried it was soon going to become preachy rather than effective, or even dangerous (researching future technology without much foresight).4. I realised there were jobs that wouldn't involve working (or even emailing co-workers) Friday evenings :p5. I remembered how much I loved reading philosophy all day, and teaching tutorials, and unaccountably Oxford offered me a stipendiary lectureship, so I returned to academia.6. [My partner] got lazy career-wise, and I wanted a nice big house and expensive food, so I got a higher paid job.7. I accidentally emailed confidential member information to our whole mailing list / said the wrong thing to one of our big funders / made a wrong call that got us into legal trouble, and it was politely suggested to me that the best way I could help the EA movement was by being nowhere near it.8. When it became clear that although I rationally agreed with the moral positions of GWWC and CEA, most of my emotional motivation for working for them came from not wanting to let down the other [team members], it was decided really I wasn't a nice enough person to have a position of responsibility in such a great organisation.9. [My partner] got a job too far away from other CEA people for me to be able to work for CEA and be with him. I chose him.10. When we had children I took maternity leave, and then couldn't bear to leave the children to return to work.11. I tried to give a talk about GWWC to a large audience, fainted with fright, and am now in a coma.When I wrote the email, I thought it was really quite likely that in 10 years I'd have left the organisation and community. Looking around the world, it seemed like a lot of people become less idealistic as they grow older. And looking inside myself, it felt pretty contingent that I happened to fall in with a group of people who supp...]]> Mon, 12 Feb 2024 22:31:35 +0000 EA - On being an EA for decades by Michelle Hutchinson wrote:Writing this was even sadder than I expected it to be.1. Holly's and my taste in music drove the rest of the HIH[2] crazy, and they murdered us both.2. The feeling that I was letting down my parents, husband and children got the better of me, and I sought better paid employment.3. Toby, Will and Nick got replaced by trustees whose good judgement and intelligence I didn't have nearly as much faith in, so I no longer felt fully supportive of the organisation - I was worried it was soon going to become preachy rather than effective, or even dangerous (researching future technology without much foresight).4. I realised there were jobs that wouldn't involve working (or even emailing co-workers) Friday evenings :p5. I remembered how much I loved reading philosophy all day, and teaching tutorials, and unaccountably Oxford offered me a stipendiary lectureship, so I returned to academia.6. [My partner] got lazy career-wise, and I wanted a nice big house and expensive food, so I got a higher paid job.7. I accidentally emailed confidential member information to our whole mailing list / said the wrong thing to one of our big funders / made a wrong call that got us into legal trouble, and it was politely suggested to me that the best way I could help the EA movement was by being nowhere near it.8. When it became clear that although I rationally agreed with the moral positions of GWWC and CEA, most of my emotional motivation for working for them came from not wanting to let down the other [team members], it was decided really I wasn't a nice enough person to have a position of responsibility in such a great organisation.9. [My partner] got a job too far away from other CEA people for me to be able to work for CEA and be with him. I chose him.10. When we had children I took maternity leave, and then couldn't bear to leave the children to return to work.11. I tried to give a talk about GWWC to a large audience, fainted with fright, and am now in a coma.When I wrote the email, I thought it was really quite likely that in 10 years I'd have left the organisation and community. Looking around the world, it seemed like a lot of people become less idealistic as they grow older. And looking inside myself, it felt pretty contingent that I happened to fall in with a group of people who supp...]]> wrote:Writing this was even sadder than I expected it to be.1. Holly's and my taste in music drove the rest of the HIH[2] crazy, and they murdered us both.2. The feeling that I was letting down my parents, husband and children got the better of me, and I sought better paid employment.3. Toby, Will and Nick got replaced by trustees whose good judgement and intelligence I didn't have nearly as much faith in, so I no longer felt fully supportive of the organisation - I was worried it was soon going to become preachy rather than effective, or even dangerous (researching future technology without much foresight).4. I realised there were jobs that wouldn't involve working (or even emailing co-workers) Friday evenings :p5. I remembered how much I loved reading philosophy all day, and teaching tutorials, and unaccountably Oxford offered me a stipendiary lectureship, so I returned to academia.6. [My partner] got lazy career-wise, and I wanted a nice big house and expensive food, so I got a higher paid job.7. I accidentally emailed confidential member information to our whole mailing list / said the wrong thing to one of our big funders / made a wrong call that got us into legal trouble, and it was politely suggested to me that the best way I could help the EA movement was by being nowhere near it.8. When it became clear that although I rationally agreed with the moral positions of GWWC and CEA, most of my emotional motivation for working for them came from not wanting to let down the other [team members], it was decided really I wasn't a nice enough person to have a position of responsibility in such a great organisation.9. [My partner] got a job too far away from other CEA people for me to be able to work for CEA and be with him. I chose him.10. When we had children I took maternity leave, and then couldn't bear to leave the children to return to work.11. I tried to give a talk about GWWC to a large audience, fainted with fright, and am now in a coma.When I wrote the email, I thought it was really quite likely that in 10 years I'd have left the organisation and community. Looking around the world, it seemed like a lot of people become less idealistic as they grow older. And looking inside myself, it felt pretty contingent that I happened to fall in with a group of people who supp...]]> Michelle_Hutchinson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:45 no full 21
RXFcmrf7E5fLhb43e EA - Things to check about a job or internship by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things to check about a job or internship, published by Julia Wise on February 12, 2024 on The Effective Altruism Forum.A lot of great projects have started in informal ways: a startup in someone's garage, or a scrappy project run by volunteers. Sometimes people jump into these and are happy they did so.But I've also seen people caught off-guard by arrangements that weren't what they expected, especially early in their careers. I've been there, when I was a new graduate interning at a religious center that came with room, board, and $200 a month. I remember my horror when my dentist checkup cost most of a month's income, or when I found out that my nine-month internship came with zero vacation days.It was an overall positive experience for me (after we worked out the vacation thing), but it's better to go in clear-eyed.First, I've listed a bunch of things to consider. These are drawn from several different situations I've heard about, both inside and outside EA. There are also a lot ofadvicepieces from the for-profit world about choosing between a startup and a more established company.Second, I've compiled some anonymous thoughts from a few people who've worked both at small EA projects and also at larger more established ones.Things to considerYour needsWill the pay or stipend cover your expenses? See afuller list here, includingMedical insurance and unexpected medical costsTaxes (including self-employment taxes if they're not legally your employer)Any loans you haveWill medical insurance be provided?Maybe they've indicated "If we get funding, we'll be able to pay you for this internship." Will you be ok if they don't get funding and you don't get paid?Maybe they've indicated you'll get a promotion after a period of lower-status work or "proving yourself." If that promotion never comes, will that work for you?If you need equipment like a laptop, are you providing that or are they? Who owns the equipment?If you got a concussion tomorrow and needed to rest for a month, what would the plan be? If not covered by your work arrangement, do you have something else you can fall back on?Medical careA place to stayIncomeIf you need to leave this arrangement unexpectedly, do you have enough money for a flight to your hometown or whatever your backup plan is?Predictability and stabilityHow long have they been established as an organization? Newer organizations are likely more flexible, but also more likely to change direction or to close down.How much staff turnover has there been recently? There are various possible reasons for high staff turnover, but one could be a poor working environment.Structure and accountabilityWill someone serve as your manager? How often will you meet with them? If there's not much oversight / guidance, do you have a sense of how well you function without that?Is there a board? If so, does the board seem likely to engage and address problems if needed?Are there established staff policies, or is it more free-form? If there's a specific policy you expect to be important for you, like parental leave, you may want to choose a workplace that already has a spelled-out policy that works for you.If living on-siteWill you have a room to yourself, or will you ever be expected to share a bedroom or sleep in a common area?Is it feasible to get off-site for some time away from your coworkers? How remote is the location? Living and working with the same people all the time can get intense.Work agreementIs there a written work agreement or contract? If there isn't one already, you can ask for one. For example, in my state anyone employing a nanny is required to write out an agreement includingPay rateWork scheduleJob dutiesSick leave, holidays, vacation, and personal daysAny other benefitsEligibility for worker's compensati...

]]>
Julia_Wise https://forum.effectivealtruism.org/posts/RXFcmrf7E5fLhb43e/things-to-check-about-a-job-or-internship Mon, 12 Feb 2024 22:31:12 +0000 EA - Things to check about a job or internship by Julia Wise Julia_Wise https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:24 no full 22
YScdhSQBhkxpfcF3t EA - "No-one in my org puts money in their pension" by tobyj Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "No-one in my org puts money in their pension", published by tobyj on February 16, 2024 on The Effective Altruism Forum.Epistemic status: the stories here are all as true as possible from memory, but my memory is so so.This is going to be bigIt's late Summer 2017. I am on a walk in the Mendip Hills. It's warm and sunny and the air feels fresh. With me are around 20 other people from the Effective Altruism London community. We've travelled west for a retreat to discuss how to help others more effectively with our donations and careers. As we cross cow field after cow field, I get talking to one of the people from the group I don't know yet. He seems smart, and cheerful. He tells me that he is an AI researcher at Google DeepMind.He explains how he is thinking about how to make sure that any powerful AI system actually does what we want it to. I ask him if we are going to build artificial intelligence that can do anything that a human can do. "Yes, and soon," he says, "And it will be the most important thing that humanity has ever done."I find this surprising. It would be very weird if humanity was on the cusp of the most important world changing invention ever, and so few people were seriously talking about it. I don't really believe him.This is going to be badIt is mid-Summer 2018 and I am cycling around Richmond Park in South West London. It's very hot and I am a little concerned that I am sweating off all my sun cream.After having many other surprising conversations about AI, like the one I had in the Mendips, I have decided to read more about it. I am listening to an audiobook of Superintelligence by Nick Bostrom. As I cycle in loops around the park, I listen to Bostrom describe a world in which we have created superintelligent AI. He seems to think the risk that this will go wrong is very high. He explains how scarily counterintuitive the power of an entity that is vastly more intelligent than a human is.He talks about the concept of "orthogonality"; the idea that there is no intrinsic reason that the intelligence of a system is related to its motivation to do things we want (e.g. not kill us). He talks about how power-seeking is useful for a very wide range of possible goals. He also talks through a long list of ways we might try to avoid it going very wrong. He then spends a lot of time describing why many of these ideas won't work. I wonder if this is all true.It sounds like science fiction, so while I notice some vague discomfort with the ideas, I don't feel that concerned. I am still sweating, and am quite worried about getting sunburnt.It's a long way off thoughIt's still Summer 2018 and I am in an Italian restaurant in West London. I am at an event for people working in policy who want to have more impact. I am talking to two other attendees about AI. Bostrom's arguments have now been swimming around my mind for several weeks. The book's subtitle is "Paths, Dangers, Strategies" and I have increasingly been feeling the weight of the middle one. The danger feels like a storm. It started as vague clouds on the horizon and is now closing in.I am looking for shelter."I just don't understand how we are going to set policy to manage these things" I explain.I feel confused and a little frightened.No-one seems to have any concrete policy ideas. But my friend chimes in to say that while yeah there's a risk, it's probably pretty small and far away at this point."Experts thinks it'll take at least 40 more years to get really powerful AI" she explains, "there is plenty of time for us to figure this out".I am not totally reassured, but the clouds retreat a little.This is fineIt is late January 2020 and I am at after-work drinks in a pub in Westminster. I am talking to a few colleagues about the news. One of my colleagues, an accomplished government ec...

]]>
tobyj https://forum.effectivealtruism.org/posts/YScdhSQBhkxpfcF3t/no-one-in-my-org-puts-money-in-their-pension Fri, 16 Feb 2024 18:39:46 +0000 EA - "No-one in my org puts money in their pension" by tobyj tobyj https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:54 no full 4
MnecE9Y3SgMTfobCr EA - How to Accelerate Your New EA Organization with Fiscal Sponsorship and Operations Support by KevinN Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Accelerate Your New EA Organization with Fiscal Sponsorship and Operations Support, published by KevinN on February 16, 2024 on The Effective Altruism Forum.This post is primarily written for people who are considering, or are in the process of, starting a new nonprofit organization, especially those who are concerned about the operational aspects of running the project. Forming a new organization can be a daunting prospect, and finding the right fiscal sponsorship or operations support provider can significantly lower the bar for setting up a new org.In my experience, there is limited awareness and understanding of these services within the EA community, which encouraged me to create a post like this. There aremany,many great projects that are yet to be founded, and my hope is that increased awareness of the benefits of these services encourages more prospective founders to take the leap into entrepreneurship.SummaryWhat these terms mean:Fiscal sponsorship: Leverage another organization's existing legal and tax-exempt status (such as 501(c)(3) status in the US), rather than establishing a new tax-exempt entity. This is also called 'fiscal hosting' in the UK.Operations support: Hire domain experts on a fractional, as-needed basis to perform operational services (compliance, finance, human resources, etc.) for an organization.Key potential benefits and drawbacks:Fiscal sponsorship = Instant initial infrastructure: Fiscal sponsorship significantly reduces the cost and time required to start doing object-level work by avoiding the need to create and operate a new tax-exempt entity.Operations support = Ongoing support from specialists: Operations support provides additional organizational capacity and allows efficient access to a wide variety of skill sets.Coordinated expert capacity - Hiring comprehensive operations support, or fiscal sponsorship plus operations support, can yield additional benefits when a service provider works cross-functionally in an efficient and synergistic way.Potential Drawbacks - Costs and control: Entering into a fiscal sponsorship agreement and hiring operations support may incur significant service costs. Fiscal sponsorship, in particular, may slow down decision-making and restrict flexibility around key tasks such as financial management, hiring, and traveling, due to the fiscal sponsor's governance process and policies.Due to the benefits and costs, fiscal sponsorship and operations support tend to have outsized benefits for newer and smaller organizations.New organizations generally benefit more from the 'instant infrastructure' of fiscal sponsorship and find the limited flexibility less costly. For a new organization already facing a huge amount of other decisions, having some functional structure and process in place quickly and easily can be much more valuable than having it be perfect. On the other hand, organizations that already have an existing legal structure and tax-exempt status may not benefit as much from fiscal sponsorship.Smaller organizations often benefit from operations support, where part-time contractors can provide expert capacity (or even coordinated expert capacity) across a variety of skill sets in a way that's more cost-efficient than a full-time operations employee. Conversely, a larger organization that requires more than several full-time operations staff may see less benefit from operations support as they are able to hire for more specialized operations roles internally.New organizations may choose to start off utilizing fiscal sponsorship and operations support services to get up to speed quickly and then, over time, reduce their reliance on external support providers as they build internal capacity, scale, and expertise.Disclosures: I have tried to provide a balanced view of fiscal s...

]]>
KevinN https://forum.effectivealtruism.org/posts/MnecE9Y3SgMTfobCr/how-to-accelerate-your-new-ea-organization-with-fiscal Fri, 16 Feb 2024 16:50:01 +0000 EA - How to Accelerate Your New EA Organization with Fiscal Sponsorship and Operations Support by KevinN KevinN https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 23:16 no full 6
h8rjgvp8tacJKnKXn EA - Summer Internships at Open Philanthropy - Global Catastrophic Risks (due March 4) by Hazel Browne Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summer Internships at Open Philanthropy - Global Catastrophic Risks (due March 4), published by Hazel Browne on February 16, 2024 on The Effective Altruism Forum.We're excited to announce that theGlobal Catastrophic Risks Cause Prioritization team at Open Philanthropy will be hiring several interns this summer to help us explore new causes and advance our research. We think this is a great way for us to grow our capacity, develop research talent, and expand our pipeline for future full-time roles. The key points are:Applications are due at 11:59 PM Pacific on Monday, March 4, 2023.Applicants must be currently enrolled in a degree program or working in a position that offers externship/secondment opportunities.The internship runs from June 10 to August 16-30 (with limited adjustments based on academic calendars) and is paid ($2,000 per week) and fully remote.We're open to a wide variety of backgrounds, but expect some of the strongest candidates to be enrolled in master's or doctoral programs.We aim to employ people with many different experiences, perspectives and backgrounds who share our passion for accomplishing as much good as we can. We particularly encourage applications from people of color, self-identified women, non-binary individuals, and people from low and middle income countries.Full details (and a link to the application)are available here and are also copied below.We hope that you'll apply and share the news with others!About the internshipWe're looking for students currently enrolled in degree programs (or non-students whose jobs offer externship/secondment opportunities) to apply for a research internship from June - August 2024 and help us investigate important questions and causes. We see the internship as a way to grow our capacity, develop promising research talent, and expand our hiring pipeline for full-time roles down the line.We want to support interns as team members to work on our core priorities, while also showing them how Open Philanthropy works and helping them build skills important for cause prioritization research. As such, interns will directly increase our team's capacity to do research that informs our Global Catastrophic Risks strategy and grantmaking. Ultimately, this will help us get money to where it can have the most impact.We anticipate that interns will collaborate closely with the team; at the same time, we expect a high degree of independence and encourage self-directed work.Our internship tracksWe are hiring interns for either the Research or Strategy tracks. The responsibilities for these tracks largely overlap, and the two positions will be evaluated using the same application materials. The main difference is one of emphasis: while research track interns primarily focus on direct research, strategy track interns are sometimes tasked with working on non-research projects (such as helping run a contest or a request for proposals).You will be asked to indicate which track you'd like to be considered for in the application.Interns will work on multiple projects at different levels of depth, in the same way as full-time team members. They will report to an existing cause prioritization team member and participate in team meetings and discussions, including presenting their work to the team for feedback.Specific projects will depend on the team's needs and the intern's skills, but will fall under the following core responsibilities:Searching for new program areas. We believe there are promising giving opportunities that don't currently fall within the purview of our existing program areas. Finding them involves blending theoretical models with concrete investigation into the tractability of new interventions to reduce catastrophic risk. Much of this research is informed by conversations with relevant exp...

]]>
Hazel Browne https://forum.effectivealtruism.org/posts/h8rjgvp8tacJKnKXn/summer-internships-at-open-philanthropy-global-catastrophic Fri, 16 Feb 2024 01:26:40 +0000 EA - Summer Internships at Open Philanthropy - Global Catastrophic Risks (due March 4) by Hazel Browne Hazel Browne https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:18 no full 7
sGGRScGKMFdcjh9kZ EA - Introducing StakeOut.AI by Harry Luk Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing StakeOut.AI, published by Harry Luk on February 17, 2024 on The Effective Altruism Forum.We are excited to announce the launch of a new advocacy nonprofit,StakeOut.AI.The mission statement of our nonprofitStakeOut.AI fights to safeguard humanity from AI-driven risks. We use evidence-based outreach to inform people of the threats that advanced AI poses to their economic livelihoods and personal safety. Our mission is to create a united front for humanity, driving national and international coordination on robust solutions to AI-driven disempowerment.We pursue this mission via partnerships (e.g., with other nonprofits, content creators, and AI-threatened professional associations) and media-based awareness campaigns (e.g., traditional media, social media, and webinars).Our modus operandi is to tell the stories of the AI industry's powerless victims, such as:people worldwide, especially women and girls, who have been victimized by nonconsensual deepfake pornography of their likenessesunemployed artists whose copyrighted hard work were essentially stolen by AI companies without their consent, in order to train their economic AI replacementsparents who fear that their children will be economically replaced, and perhaps even replaced as aspecies, by "highly autonomous systems that outperform humans at most economically valuable work" (OpenAI'smission)We connect these victims' stories to powerful people who can protect them. Who are the powerful people? The media, the governments, and most importantly: the grassroots public.StakeOut.AI's mottoThe Right AI Laws, to Right Our Future.We believe AI has great potential to help humanity. But like all other industries that put the public at risk, AI must be regulated. We must unite, as humans have done historically, to work towards ensuring that AI helps humanity flourish rather than cause our devastation. By uniting globally with a single voice to express our concerns, we can push governments to pass the right AI laws that can right our future.However, StakeOut.AI's Safer AI Global Grassroots United Front movement isn't for everybody.It's not for those who don't mind being enslaved by robot overlords.It's not for those whose first instincts are to avoid making waves, rather than to help the powerless victims tell their stories to the people who can protect them.It's not for those who say they 'miss the days' when only intellectual elites talked about AI safety.It's not for those who insist, even after years of trying, that attempting to solve technical AI alignment while continuing to advance AI capabilities is the only way to prevent the threat of AI-driven human extinction.It's not for those who think the public is too stupid to handle the truth about AI. No matter how much certain groups say they are trying to 'shield' regular folks for their 'own good,' the regular folks are learning about AI one way or another.It's also not for those who are indifferent to the AI industry's role in invading privacy, exploiting victims, and replacing humans.So to help save your time, please stop reading this post if any of the above statements reflect your views.But, if you do want transparency and accountability from the AI industry, and you desire a moral and safe AI environment for your family and for future generations, then the United Front may be for you.By prioritizing high-impact projects over fundraising in our early months, we at StakeOut.AI were able to achieve five publicly known milestones for AI safety:researched a'scorecard' evaluating various AI governance proposals, which was presented by Professor Max Tegmark at the first-ever international AI Safety Summit in the U.K. (as part of The Future of Life Institute's governance proposal for the Summit),raised awareness, such as by holding a ...

]]>
Harry Luk https://forum.effectivealtruism.org/posts/sGGRScGKMFdcjh9kZ/introducing-stakeout-ai Sat, 17 Feb 2024 03:32:50 +0000 EA - Introducing StakeOut.AI by Harry Luk Harry Luk https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:14 no full 2
upR4t3gM4YxsKFBCG EA - Can we help individual people cost-effectively? Our trial with three sick kids by NickLaing Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can we help individual people cost-effectively? Our trial with three sick kids, published by NickLaing on February 20, 2024 on The Effective Altruism Forum.TLDR: My wife and I tried to help three children with severe illness cost-effectively. We paid for their medical care, followed up on what happened and reflected on the process."Hearing about her death hurt badly - as it should. But not only had I done what I could, it made sense. Those 300 dollars gave her a chance of not only surviving, but living a long and happy life."My wife and I live in Gulu, Northern Uganda, where I operate OneDay Health, a social enterprise which provides cost-effective health care in remote rural areas - but this post isn't about that.We naturally encounter many situations where we feel compelled to help those around us, We sometimes help with school fees, medical bills and sometimes just cash bailouts. Many of these might not be very cost effective and fall into the "fuzzy giving" category, but by virtue of living in Uganda I suspect these contributions may often hit the unusual jackpot of both feeling good and being cost-effective, earning both "fuzzys" and "utilons" at the same time (sorry Yudkowsky, we mix them).Which got me thinking, could we target some of our support in a more cost-effective way? As a medical guy in Uganda, I figured we might be in a decent position to try, so we embarked on a mini-experiment to see if we could find a few medical cases which could be super cost-effective to treat, on par with top GiveWell charities.Our Cost-Effectiveness BarWe kept it simple here, and deferred to GiveWell's top-charity bar of $5500 per life saved. [1] I estimated cost effectiveness by dividing the money we would spend on medical care by a series of discount multipliers based on loosely estimated counterfactuals (see calculations below). My approach is deeply flawed as it fails to account for many variables, but allowed me to make a reasonable estimate in about 15 minutes.Feel free to criticize and discuss the method, I'm sure there more accurate and easier ways (@NunoSempere @Vasco Grilo). I didn't attempt a cost-effectiveness range as it would take longer.But where to find these sick people who might be cost-effectively helped? We planned to identify poor rural farmers who had life-threatening treatable illnesses, who were unlikely to be treated unless we paid for it. So I asked a couple of our OneDay Health managers to be on the lookout for sick people who presented to our Health Centers, while I also kept an eye out in the health center where I work as a doctor - people who could die without treatment that they couldn't afford.Finding people to "cost-effectively" help turned out to be harder than expected. Over a 2 month period we identified three young girls to help, while we decided not to help in another 5 or so cases. These are the stories people we tried to help- feel free to skip the BOTECS if you aren't into that!Lamunu - age 9Even for a hardened doctor, Lamunu's photo was tough to look at. She showed up in our Health center with about 30% of her body burned after falling into a fire. She had spent 10 days in an ill-equipped government hospital and she continued deteriorate. Her father ran out of money and took her home, before arriving one of our health centers (which can't treat burns) as perhaps desperate last-ditch effort.Her burns were infected, she was malnourished and everybody involved knew that she probably wasn't long for this world.I happen to live 500 meters away from the best burns unit in Northern Uganda at St. Mary's hospital Lacor so I thought we might be able to help. We paid for her father to take a 5 hour bus ride to Lacor hospital, and supported the family with hospital fees and some food.Cost effectiveness BOTEC?Chance of death withou...

]]>
NickLaing https://forum.effectivealtruism.org/posts/upR4t3gM4YxsKFBCG/can-we-help-individual-people-cost-effectively-our-trial Tue, 20 Feb 2024 11:57:56 +0000 EA - Can we help individual people cost-effectively? Our trial with three sick kids by NickLaing NickLaing https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:17 no full 1
A5RNKz8jHjJEDjSea EA - An Analysis of Engineering Jobs on the 80,000 Hours Job Board by Jessica Wen Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Analysis of Engineering Jobs on the 80,000 Hours Job Board, published by Jessica Wen on February 20, 2024 on The Effective Altruism Forum.0 SummaryCross-posted from the High Impact Engineers blog. The Google Doc version of the report can be foundhere. The Google Sheets of the data ishere.This report analyses the (physical) engineering jobs posted on the80,000 Hours job board from October 2022 to February 2024. We find that:The rate of new jobs added to the 80,000 Hours job board remains consistent throughout this period.The rate of new engineering jobs added to the 80,000 Hours job board more than doubled in July 2023 and has remained at this higher level since then.Half of the engineering jobs advertised on the 80,000 Hours job board are in biosecurity.For the engineering jobs on the 80,000 Hours job board, mechanical engineering skills are the most in-demand, followed by bioengineering.Almost half of the engineering jobs are US-based, and nearly a third are UK-based. The rest are mainly in Europe or Australia, with a handful based in India and other non-Western countries.1 Introduction1.1 ContextBefore October 2022, the "Engineering" job tag on the80,000 Hours job board included both software and "physical"[1] engineering. We spoke with 80,000 Hours to separate software and "physical" engineering into two different tags, which they implemented in October 2022. This has been a huge help for the High Impact Engineers community. At that time, I set up email notifications to receive alerts when new jobs were posted to the 80,000 Hours job board using the "Engineering" job tag.[2]Speaking with other career coaches and community builders, there is a consensus that there is a general lack of information on the landscape of "EA jobs". Out of curiosity, I put this analysis of engineering jobs on the 80k job board together in ~8 hours.I expect 80k is probably in a better position to run these kinds of evaluations of the landscape of impactful jobs, but I thought that it would be helpful and of interest to the High Impact Engineers community and potentially the wider EA community to see these statistics. I focus only on jobs tagged "engineering" on the 80k job board, but I expect similar analyses could be made with other tags.After putting this report together, I shared it with Conor Barnes from 80,000 Hours to review. All the notes from 80k are from his feedback, which is greatly appreciated!1.2 Some CaveatsSince I received these notifications via email over the past 15 months, most of the jobs from the notifications are no longer open. This means that the "Type of engineering" and the "Country" data are the most likely to be inaccurate without access to the original job description. I have made my best guess to fill in this data according to the job titles, trying to find similar open roles at the companies, and doing general research on the organisations' websites.I have tried not to delete these email notifications, but I may have deleted some. As a result, this data set may be incomplete. However, I don't think this detracts too much from many of the conclusions that can be drawn from this set of data.Finally, here are some caveats on my views on the 80k job board. I'm sure they would agree with me that their job board is not the definitive list of impactful jobs, or even the definitive list of EA jobs (whatever that means!). They do a great job at keeping on top of opportunities within and outside of the EA ecosystem, but it's important to keep in mind that there are lots of other impactful jobs that 80k might miss, or they might have a different model of impact than you do[3].As a result, I hope that readers will understand that this is not a reflection of the jobs that "EA" deems to be "valuable"[4], but more an attempt to understand the oppo...

]]>
Jessica Wen https://forum.effectivealtruism.org/posts/A5RNKz8jHjJEDjSea/an-analysis-of-engineering-jobs-on-the-80-000-hours-job Tue, 20 Feb 2024 10:12:23 +0000 EA - An Analysis of Engineering Jobs on the 80,000 Hours Job Board by Jessica Wen Jessica Wen https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:18 no full 2
ziSEnEg4j8nFvhcni EA - New Open Philanthropy Grantmaking Program: Forecasting by Open Philanthropy Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Open Philanthropy Grantmaking Program: Forecasting, published by Open Philanthropy on February 20, 2024 on The Effective Altruism Forum.Written by Benjamin TereickWe are happy to announce that we have addedforecasting as an official grantmakingfocus area. As of January 2024, the forecasting team comprises two full-time employees:myself andJavier Prieto. In August 2023, I joined Open Phil to lead our forecasting grantmaking and internal processes. Prior to that, I worked on forecasts of existential risk and the long-term future at theGlobal Priorities Institute. Javier recently joined the forecasting team in a full-time capacity from Luke Muehlhauser's AI governance team, which was previously responsible for our forecasting grantmaking.While we are just now launching a dedicated cause area, Open Phil has long endorsed forecasting as an important way of improving the epistemic foundations of our decisions and the decisions of others. We have made several grants to support the forecasting community in the last few years, e.g., toMetaculus, theForecasting Research Institute, andARLIS. Moreover, since the launch of Open Phil, grantmakers have oftenmade predictions about core outcomes for grants they approve.Now with increased staff capacity, the forecasting team wants to build on this work. Our main goal is to help realize the promise of forecasting as a way to improve high-stakes decisions, as outlined in ourfocus area description. We are excited both about projects aiming to increase the adoption rate of forecasting as a tool by relevant decision-makers, and about projects that provide accurate forecasts on questions that could plausibly influence the choices of these decision-makers. We are interested in such work across both of ourportfolios: Global Health and Wellbeing and Global Catastrophic Risks. [1]We are as of yet uncertain about the most promising type of project in the forecasting focus area, and we will likely fund a variety of different approaches. We will also continue our commitment to forecasting research and to the general support of the forecasting community, as we consider both to be prerequisites for high-impact forecasting. Supported by other Open Phil researchers, we plan to continue exploring the most plausible theories of change for forecasting.I aim to regularly update the forecasting community on the development of our thinking.Besides grantmaking, the forecasting team is also responsible for Open Phil's internal forecasting processes, and for managing forecasting services for Open Phil staff. This part of our work will be less public, but we will occasionally publish insights from our own processes, like Javier's2022 report on the accuracy of our internal forecasts.^It should be noted that administratively, the forecasting team is part of the Global Catastrophic Risks portfolio, and historically, our forecasting work has had closer links to that part of the organization.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Open Philanthropy https://forum.effectivealtruism.org/posts/ziSEnEg4j8nFvhcni/new-open-philanthropy-grantmaking-program-forecasting Tue, 20 Feb 2024 00:23:47 +0000 EA - New Open Philanthropy Grantmaking Program: Forecasting by Open Philanthropy Open Philanthropy https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:01 no full 3
L4Cv8hvuun6vNL8rm EA - Solution to the two envelopes problem for moral weights by MichaelStJules Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Solution to the two envelopes problem for moral weights, published by MichaelStJules on February 19, 2024 on The Effective Altruism Forum.SummaryWhen taking expected values, the results can differ radically based on which common units we fix across possibilities. If we normalize relative to the value of human welfare, then other animals will tend to be prioritized more than by normalizing by the value of animal welfare or by using other approaches to moral uncertainty.For welfare comparisons and prioritization between different moral patients like humans, other animals, aliens and artificial systems, I argue that we should fix and normalize relative to the moral value of human welfare, because our understanding of the value of welfare is based on our own experiences of welfare, which we directly value.Uncertainty about animal moral weights is about the nature of our experiences and to what extent other animals have capacities similar to those that ground our value, and so empirical uncertainty, not moral uncertainty (more).I revise the account in light of the possibility of multiple different human reference points between which we don't have fixed uncertainty-free comparisons of value, like pleasure vs belief-like preferences (cognitive desires) vs non-welfare moral reasons, or specific instances of these.If and because whatever moral reasons we apply to humans, (similar or other) moral reasons aren't too unlikely to apply with a modest fraction of the same force to other animals, then the results would still be relatively animal-friendly (more).I outline why this condition plausibly holds across moral reasons and theories, so that it's plausible we should be fairly animal-friendly (more).I describe and respond to some potential objections:There could be inaccessible or unaccessed conscious subsystems in our brains that our direct experiences and intuitions do not (adequately) reflect, and these should be treated like additional moral patients (more).The approach could lead to unresolvable disagreements between moral agents, but this doesn't seem any more objectionable than any other disagreement about what matters (more).Epistemic modesty about morality may push for also separately normalizing by the values of nonhumans or against these comparisons altogether, but this doesn't seem to particularly support the prioritization of humans (more).I consider whether similar arguments apply in cases of realism vs illusionism about phenomenal consciousness, moral realism vs moral antirealism, and person-affecting views vs total utilitarianism, and find them less compelling for these cases, because value may be grounded on fundamentally different things (more).How this work has changed my mind: I was originally very skeptical of intertheoretic comparisons of value/reasons in general, including across theories of consciousness and the scaling of welfare and moral weights between animals, because of the two envelopes problem (Tomasik, 2013-2018) and the apparent arbitrariness involved. This lasted until around December 2023, and some arguments here were originally going to be part of a piece strongly against such comparisons for cross-species moral weights, which I now respond to here along with positive arguments for comparisons.AcknowledgementsI credit Derek Shiller and Adam Shriver for the idea of treating the problem like epistemic uncertainty relative to what we experience directly. I'd also like to thank Brian Tomasik, Derek Shiller and Bob Fischer for feedback. All errors are my own.BackgroundOn the allocation between the animal-inclusive and human-centric near-termist views, specifically, Karnofsky (2018) raised a problem:The "animal-inclusive" vs. "human-centric" divide could be interpreted as being about a form of "normative uncertainty": un...

]]>
MichaelStJules https://forum.effectivealtruism.org/posts/L4Cv8hvuun6vNL8rm/solution-to-the-two-envelopes-problem-for-moral-weights Mon, 19 Feb 2024 23:39:10 +0000 EA - Solution to the two envelopes problem for moral weights by MichaelStJules MichaelStJules https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:01:17 no full 4
wPwkoK6PLya3wk2zb EA - How the idea of "doing good better" makes me feel by frances lorenz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How the idea of "doing good better" makes me feel, published by frances lorenz on February 18, 2024 on The Effective Altruism Forum.This is a quick, personal piece. The TL;DR here is: trying to, "do the most good" is a simple idea that immediately gets complicated. Putting all complexity aside for a second, I thought about how the idea first made me feel - and how it still makes me feel today.When I was seven years old, I created my own belief system so I could feel safer. I conceptualised a kind of cosmic energy referred to as the Universe. The Universe was only a little powerful with limited control, but as its defining feature, it really cared about me. I was a favorite. It would try to nudge things in my direction. It would check on my thoughts often and with intense interest. It would rush in when I was sad, feel awful, and try as hard as possible to help.As far as magical cosmic entities go, this is a pretty low bar. Reading minds and encompassing the entire world is hard, but trying to nudge towards something better? Caring? I could do that. This thing I clung to as a little kid was like, really achievable. I wasn't searching for some incredibly far-reaching, life-changing meaning that would click the whole world into place. Mostly, I just needed help.This frame has informed much of how I orient towards doing good. If I had to list three beliefs I've regarded as true throughout my whole life, they'd be: suffering is real, I don't want conscious life to suffer, and the totality of what I've felt in my heart on this Earth has been the greatest gift and too much. And for the too much, I needed help. That's all The Universe was meant to be.Even as a kid making up a vague religion, I never imagined a kind of mystical world where help was always coming. It isn't, we all know that - at least not right now. I've sat on the bathroom floor alone for broken days, I've missed a friend's call because I was asleep. And completely beyond what I know are experiences of pain, every minute, in people and animals. And too often help won't come.A five year old will die of malaria and no one will have stopped it. Humanity could just end one day and no one will have stopped it. But we can try. We can try to use our resources; and not only can we try, but we can try as hard as we possibly can. We can even be incredibly smart about it; rigorous; principled; creative; excited; ambitious. We can, in some ways, do much more than the Universe I talked to before bed every night for 10 years.I've never felt much pressure in that, just a lot of aching hope. And even when I start to feel bogged down in the complexity and uncertainty that shrouds any attempts to look at the world's most pressing issues, this central hope is something I return to. We can try. And to someone, maybe even to many, maybe even to somewhere around as many as possible, it might matter.When I think about why I love the idea of trying to help the most people possible, that's what comes to mind. An opportunity. In expanding moral circles and pages of research on global health interventions and pages of schemes on preventing advanced AI from wrecking the world. And of course, tons of other things I don't know about.I've never seen good be done perfectly or "right", by me or anyone else (at least not at scale), but I would like to continue trying in the "most" way I've found so far. And I'd like to continue coordinating around the attempt; learning how others try. For those I love and also those I will never know. And thank you to those who try for me, for the ones I love, even if you can't know.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
frances_lorenz https://forum.effectivealtruism.org/posts/wPwkoK6PLya3wk2zb/how-the-idea-of-doing-good-better-makes-me-feel Sun, 18 Feb 2024 16:58:29 +0000 EA - How the idea of "doing good better" makes me feel by frances lorenz frances_lorenz https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:30 no full 10
emDwE8KKqPCsXQJCt EA - In memory of Steven M. Wise by Tyner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In memory of Steven M. Wise, published by Tyner on February 21, 2024 on The Effective Altruism Forum.LINK: https://everloved.com/life-of/steven-wise/obituary/Renowned animal rights pioneer Steven M. Wise passed away on February 15th after a long illness. He was 73 years old.An innovative scholar and groundbreaking expert on animal law, Wise founded and served as president of the Nonhuman Rights Project (NhRP), the only nonprofit organization in the US dedicated solely to establishing legal rights for nonhuman animals. As the NhRP's lead attorney, he filed historic lawsuits demanding the right to liberty of captive chimpanzees and elephants, achieving widely recognized legal firsts for his clients.Most notably, under Wise's leadership the NhRP filed a habeas corpus petition on behalf of Happy, an elephant held alone in captivity at the Bronx Zoo. Happy's case, which historian Jill Lepore has called "the most important animal-rights case of the 21st-century," reached the New York Court of Appeals in 2022. The Court of Appeals then became the highest court of an English-speaking jurisdiction to hear arguments calling for a legal right for an animal.Although the Court ultimately denied Happy's petition, two judges wrote historic dissents refuting the idea that only humans can have rights. Under Wise's leadership, the NhRP also helped develop and pass the first animal rights law in the country in 2023-an ordinance that protects elephants' right to liberty.Wise said he decided to become a lawyer after developing a deep commitment to social justice as a result of his involvement in the anti-Vietnam War movement while an undergraduate at the College of William and Mary. He graduated from Boston University Law School in 1976 and began his legal career as a criminal defense lawyer. Several years later, Peter Singer's book Animal Liberation inspired Wise to become an animal protection lawyer.From 1985 to 1995, Wise was president of the Animal Legal Defense Fund. As Wise told The New York Times Magazine, his litigation work during this time led him to conclude that the rightlessness of animals was the fundamental barrier to humans vindicating animals' interests.This is because, under animal welfare laws, lawyers must make the case for how a human has been harmed by the animal's treatment or situation; as Wise elaborated in his writings and talks, legal injuries to animals do not matter in court because animals are unjustly considered legal "things" with no rights, legally equivalent to inanimate objects, their intrinsic interests essentially invisible to judges.In 1995, Wise launched the Nonhuman Rights Project to address this core issue facing all animals and their advocates. After more than a decade of preparation, the NhRP filed first-of-their-kind lawsuits in 2013, demanding rights for four captive chimpanzees in New York State. A year and a half later, two of the NhRP's clients became the first animals in legal history to have habeas corpus hearings to determine the lawfulness of their imprisonment.Wise was also a leading force in the development of animal law as a distinct academic curriculum, teaching the first-ever animal law course offered at Harvard University in 2000. He remained committed to educating the next generation of animal rights lawyers throughout his career, teaching animal rights jurisprudence at law schools around the world, including Stanford Law School, the University of Miami Law School, St.Thomas University Law School, John Marshall Law School, Lewis and Clark Law School, Vermont Law School, Tel Aviv University, and the Autonomous University of Barcelona.Wise is the author of four books: Rattling the Cage: Toward Legal Rights for Animals (2000); Drawing the Line: Science and the Case for Animal Rights (2002); Though the Heavens May Fall: T...

]]>
Tyner https://forum.effectivealtruism.org/posts/emDwE8KKqPCsXQJCt/in-memory-of-steven-m-wise Wed, 21 Feb 2024 22:16:44 +0000 EA - In memory of Steven M. Wise by Tyner Tyner https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:50 no full 2
C2qiY9hwH3Xuirce3 EA - Short agony or long ache: comparing sources of suffering that differ in duration and intensity by cynthiaschuck Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Short agony or long ache: comparing sources of suffering that differ in duration and intensity, published by cynthiaschuck on February 21, 2024 on The Effective Altruism Forum.Cynthia Schuck-Paim; Wladimir J. Alonso; Cian Hamilton (Welfare Footprint Project)OverviewIn assessing animal welfare, it would be immensely beneficial to rely on a cardinal metric that captures the overall affective experience of sentient beings over a period of interest or lifetime. We believe that the concept of Cumulative Pain (or Pleasure, for positive affective states), as adopted in the Welfare Footprint framework, aligns closely with this ideal.It quantifies the time spent in various intensities of pain and has proven operationally useful, providing actionable insights for guiding cost-effective interventions aimed at reducing animal suffering.However, it does not yet offer a unified metric of suffering, as it measures time spent in four categories of pain intensity. While we anticipate this complexity will persist for some time - given the current challenges in equating pain intensities - we believe the discussion on the possibility of integrating these four categories is necessary and valuable. We are thus sharing this document here to contribute in this discussion and elicit feedback and criticism to help us improve our approach.We apologize for the academic tone of the text, initially written with an academic paper in mind.Key TakeawaysPain's aversiveness escalates disproportionally with its intensity, making severe pains feel disproportionately worse.Determining the exact form of the relationship, however, is still challenging, as insights from human pain studies are limited and difficult to apply to animals, and designing experiments to address this issue in animals is inherently challenging.Intensity weights are likely dynamic and modulated by multiple factors, including interspecific differences in the perception of time. The very relationship between pain aversiveness and intensity may change depending on the experience's duration.Currently, the uncertainty associated with putative weights among pain intensity categories is orders of magnitude greater than the uncertainty related to other attributes of pain experiences, such as their prevalence or duration.Given these challenges, we currently favor a disaggregated approach. Disaggregated estimates can currently rank most welfare challenges and farm animal production scenarios in terms of suffering.In the case of more complex trade-offs between brief intense pain and longer-lasting milder pain we suggest two approaches. First, ensuring that all consequences of the welfare challenges are taken into account. For example, the effects of long-lasting chronic pain extend beyond the immediate experience, leading to long-term consequences (e.g., pain sensitization, immune suppression, behavioral deprivation, helplessness, depression) that may themselves trigger experiences of intense pain.The same may happen with experiences of brief intense pain endured early in life. Second, once all secondary effects are considered, we suggest examining which weights would steer different decision paths, and determining how justifiable those weights are. This approach allows for normative flexibility, enabling stakeholders to rely on their own values and perspectives when making decisions.BackgroundPain, both physical and psychological, is an integral aspect of life for sentient organisms. Pain serves a vital biological purpose by signaling actual or potential harm or injury, prompting individuals to avoid or mitigate the cause of pain [1]. It varies in intensity, from a mild annoyance to an excruciating agony, and duration, from fleeting moments to persistent, long-lasting conditions.This diversity in the intensity and duration of ...

]]>
cynthiaschuck https://forum.effectivealtruism.org/posts/C2qiY9hwH3Xuirce3/short-agony-or-long-ache-comparing-sources-of-suffering-that Wed, 21 Feb 2024 19:19:05 +0000 EA - Short agony or long ache: comparing sources of suffering that differ in duration and intensity by cynthiaschuck cynthiaschuck https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 41:23 no full 4
hgdG9udiWiACumzJw EA - Let's advertise EA infrastructure projects, Feb 2024 by Arepo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's advertise EA infrastructure projects, Feb 2024, published by Arepo on February 21, 2024 on The Effective Altruism Forum.This is the latest of a theoretically-three-monthly series of posts advertising EA infrastructure projects that struggle to get and maintain awareness (see original advertising post for more on the rationale).I italicise organisations added since the previous post was originally submitted and bold those edited in to the current one after posting, to make it easier for people to scan for new entries.Also, since funding seems to be very tight in the community at the moment, I've added a section to the end on 'Organisations urgently seeking donations'. I don't have any inclusion criteria atm beyond signal boosting posts people have made in the last few months, especially those who are in an existential crisis from lack of funding.Coworking/socialisingEA Gather Town - An always-on virtual meeting place for coworking, connecting, and having both casual and impactful conversationsEA Anywhere - An online EA community for everyoneEA coworking Discord - A Discord server dedicated to online coworkingFree or subsidised accommodationCEEALAR/formerly the EA hotel - Provides free or subsidised serviced accommodation and board, and a moderate stipend for other living expenses.NonLinear's EA house database - An experiment by Nonlinear to try to connect EAs with extra space with EAs who could do good work if they didn't have to pay rent (or could pay less rent).Professional servicesAmber Dawn - a freelance writer and editor for the EA community who can help you edit drafts and write up your unwritten ideas.WorkStream Business Systems - a service dedicated to EAs, helping you improve your workflow, boost your bottom line and take control of your businesscFactual - a new, EA-aligned strategy consultancy with the purpose of maximising its counterfactual impactGood Governance Project - helps EA organizations create strong boards by finding qualified and diverse professionalsAltruistic Agency - provides discounted tech support and development to organisationsTech support from Soof GolanLegal advice from Tyrone Barugh - a practice under consideration with the primary aim of providing legal support to EA orgs and individual EAs, with that practice probably being based in the UK.SEADS - Data Science services to EA organizationsUser-Friendly - an EA-aligned marketing agencyAnti Entropy - offers services related operations for EA organizationsArb - Our consulting work spans forecasting, machine learning, and epidemiology. We do original research, evidence reviews, and large-scale data pipelines.Pineapple Operations - Maintains a public database of people who are seeking operations or Personal Assistant/Executive Assistant work (part- or full-time) within the next 6 months in the Effective Altruism ecosystemCoachingElliot Billingsley - Coaching is best for people who have personal or professional goals they're serious about accomplishing. My sessions are designed to improve clarity and motivation.Tee Barnett Coaching(coaching training) - a multi-component training infrastructure for developing your own practice as a skilled coach.(coach matchmaking) - Access matchmaking to high-quality coaching at below-market pricingProbably Good - Whether you're a student searching for the right path or an experienced professional seeking a purpose-driven opportunity, we're here to help you brainstorm career paths, evaluate options, and plan next stepsAI Safety Support - health coaching to people working on AI safety (first session free)80,0000 Hours career coaching - Speak with us for free about using your career to help solve one of the world's most pressing problemsYonatan Cale - Coaching for software devsFAANG style mock interviews - senior software...

]]>
Arepo https://forum.effectivealtruism.org/posts/hgdG9udiWiACumzJw/let-s-advertise-ea-infrastructure-projects-feb-2024 Wed, 21 Feb 2024 15:15:07 +0000 EA - Let's advertise EA infrastructure projects, Feb 2024 by Arepo Arepo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:38 no full 10
vmsXfxkhbT34avJEj EA - The Case for Animal-Inclusive Longtermism by BrownHairedEevee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Case for Animal-Inclusive Longtermism, published by BrownHairedEevee on February 21, 2024 on The Effective Altruism Forum.In: Journal of Moral PhilosophyAuthor: Gary David O'BrienOnline Publication Date: 19 Jan 2024License: Creative Commons Attribution 4.0 InternationalAbstractLongtermism is the view that positively influencing the long-term future is one of the key moral priorities of our time. Longtermists generally focus on humans, and neglect animals. This is a mistake. In this paper I will show that the basic argument for longtermism applies to animals at least as well as it does to humans, and that the reasons longtermists have given for ignoring animals do not withstand scrutiny.Because of their numbers, their capacity for suffering, and our ability to influence their futures, animals ought to be a central concern of longtermists. Furthermore, I will suggest that longtermism is a fruitful framework for thinking about the wellbeing of animals, as it helps us to identify actions we can take now that have a reasonable chance of improving the wellbeing of animals over the very long term.Keywords: longtermism; animal ethics; wild animal sufferingIntroductionLongtermism is the view that positively influencing the long-term future is one of the key moral priorities of our time.1 Since the future has the potential to be truly vast, both in duration and the number of individuals who will ever live, it is plausible that the long-term future might be extremely valuable, or extremely disvaluable.If we care about impartially doing good, then we should be especially concerned to ensure that the long-term future goes well, assuming that it is within our power to do so. Most longtermists focus on humans, and largely ignore animals. This is a mistake. In this paper I will show that the basic argument for longtermism applies to animals at least as well as it does to humans, and that the reasons longtermists have given for ignoring animals do not stand up to scrutiny.I will argue that, because of their numbers, their capacity for suffering, and our ability to influence their futures, animals ought to be a central concern of longtermists. Furthermore, I will suggest that longtermism is a fruitful framework for thinking about the wellbeing of animals, as it helps us to identify effective actions that we can take in the near future that have a reasonable chance of improving the wellbeing of animals over the very long term.In Section 1 I will lay out the basic argument for longtermism and consider some of the reasons why longtermists have neglected animals. In Sections 2 and 3 I will show that the basic argument for longtermism applies to animals and that we can use the longtermist framework to identify interventions that have a reasonable chance of making the long-term future go better for animals.More specifically, I will argue that (1) now or in the near-term future humans can act in ways that will predictably increase or decrease the scale and duration of wild animal suffering in the long term and (2) we are in an especially influential time for locking in values that can be expected to be good or bad for domesticated animals in the long term.Finally in Section 4 I will suggest some longtermist interventions for animals that might be more effective than short-term alternatives and will suggest areas for further research.For simplicity, I will assume a hedonistic theory of animal wellbeing, though nothing I say will be incompatible with the view that there are also important non-hedonic elements related to animal wellbeing. I will assume that all vertebrates have the capacity for sentience, and hence for positive and negative welfare.Although I will not have space to argue for this, I will assume the increasingly accepted view that the majority of animals in ...

]]>
BrownHairedEevee https://forum.effectivealtruism.org/posts/vmsXfxkhbT34avJEj/the-case-for-animal-inclusive-longtermism Wed, 21 Feb 2024 15:10:12 +0000 EA - The Case for Animal-Inclusive Longtermism by BrownHairedEevee BrownHairedEevee https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 48:13 no full 11
jvW6p5Hk2r4883tNT EA - Moral Trade Proposal with 95-100% Surplus by Pete Rowlett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Moral Trade Proposal with 95-100% Surplus, published by Pete Rowlett on February 21, 2024 on The Effective Altruism Forum.IntroductionThis post is a continuation of my earlier "Modeling Moral Trade in Antibiotic Resistance and Alternative Proteins," "Generating More Surplus in Moral Trades," and "Developing Counterfactual Trust in Moral Trade." Here I'll make a proposal for a specific moral trade, and then I'll provide a resource that will hopefully facilitate more trades.I think moral trade is an underexplored topic with significant opportunity for gains. The lowest-hanging fruit seems to be the synergy between animal welfare groups and climate groups. Both accept alternative proteins as one of the best uses of marginal funding (GFI is a top-rated charity by bothAnimal Charity Evaluators andGiving Green).ProposalI am proposing trade between funds run by these groups. On one side, theGiving Green Fund, and on the other side,Animal Charity Evaluators Recommended Charity Fund.The Giving Green Fund has distributed funds to its top charities twice. The first time, this past summer, each organization received $50,000, and $50,000 was saved for later. The second time, at the end of 2023, the funds were not evenly distributed. They sent $100,000 to Industrious Labs, $70,000 to Good Food Institute, and $50,000 each to Good Energy Collective, Evergreen Collaborative, and Clean Air Task Force. The justifications, with very transparent reasoning, arehere andhere. So while the uneven distribution may make counterfactual trust harder to build, the clarity in the process should largely counteract that effect.The ACE fund hasconsistently distributed money to top charities and standout charities, including GFI, in consistent ratios. Recently they've switched to a binary recommended or not recommended status for charities, so it seems reasonable to assume that they would allocate money from the fund evenly between all of the recommended charities.Normally high cost-effectiveness ratings would harm counterfactual trust and discourage actors from engaging in moral trade, but in this case, both funds have a fairly strong track record of systematically allocating funding, so determining the counterfactual is relatively easy.I would advocate for both funds to redirect an additional $50,000 from their other funded nonprofits to GFI as a first moral trade. Both can contribute an equal amount because they have roughly equal relative cost-effectiveness estimates. I estimate that a 95 to 100% surplus should be generated from this trade (i.e. both worldviews will get 95 to 100% more moral value from those $50,000 than they would have gotten had they simply donated to the alternative top charity without trade).You can see my calculationshere.It may make sense to make the reallocation smaller if GFI will have difficulty absorbing the marginal funding at similar levels of cost-effectiveness (though I doubt this will be the case, since they have an 8-figurebudget). Another consideration is whether the other top nonprofits were relying on an expected donation - it's important to avoid messing up their plans. Terms could also be negotiated based on up-to-date cost-effectiveness estimates of both GFI and the alternate top charities. For example, Giving Green may find some of ACE's other recommended charities to be somewhat effective, making the trade less valuable for them, and meaning that they donate less.The value generated here will come from both the moral trade itself, and from the information value of attempting to conduct a moral trade. A writeup about the execution may encourage others to take similar actions.Giving Green did a quick review and was fine with my posting this, but did not have time to review it in detail before the day I scheduled to post and has not en...

]]>
Pete Rowlett https://forum.effectivealtruism.org/posts/jvW6p5Hk2r4883tNT/moral-trade-proposal-with-95-100-surplus Wed, 21 Feb 2024 14:34:11 +0000 EA - Moral Trade Proposal with 95-100% Surplus by Pete Rowlett Pete Rowlett https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:02 no full 13
GaCRw4fj9FEhQxnQ4 EA - Meta EA Regional Organizations (MEAROs): An Introduction by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta EA Regional Organizations (MEAROs): An Introduction, published by Rockwell on February 21, 2024 on The Effective Altruism Forum.Thank you to the many MEARO leaders who provided feedback and inspiration for this post, directly and through their work in the space.IntroductionIn the following post, I will introduce MEAROs - Meta EA Regional Organizations - a new term for a long-established segment of the EA ecosystem. I will provide an overview of the roles MEAROs currently serve and a sketch of what MEAROs could look like and could accomplish if more fully resourced. Though MEAROs have existed as long as EA itself, I think the concept has been underdefined, underexplored, and consequently underutilized as a tool for solving the world's most pressing problems.I'm hopeful that giving MEAROs a name will help the community at large better understand these organizations and prompt wider discussion on how to strategically develop them over time.BackgroundBy way of background, I have worked full-time on a MEARO - Effective Altruism New York City - since August 2021. In my role with EA NYC, I consider my closest collaborators not only the directEA NYC team but also the leaders of other MEAROs, especially those who likewise receive a portion of their organization funding throughCentre for Effective Altruism's Community Building Grants (CBG) Program. As Ipreviously stated on the Forum, before the FTX collapse, there was a heavy emphasis onmaking community building a long-term and sustainable career path.[1] As a result, there are now dozens of people working professionally and often full-time on MEAROs.This is a notable and very recent shift: Many MEAROs were founded shortly after EA was named, or morphed out of communities that predated EA. Most MEAROs were volunteer-run for the majority of their existence. CEA launched theCBG Program in 2018 and slowly expanded its scope through 2022. EA NYC, for example, was volunteer-run for over seven years before receiving funding for two full-time employees through the CBG Program in Summer 2020. This has led to a game of catch-up: MEAROs have professionalized, but many in the broader EA community still think of MEAROs as volunteer-operated clubs, rather than serious young nonprofits.We also now have significantly more brainpower thinking about ways to maximize impact through the MEARO structure,[2] a topic I do not feel has been adequately explored on the Forum.(I recommend Jan Kulveit's posts from October 2018 - Why develop national-level effective altruism organizations? and Suggestions for developing national-level effective altruism organizations - for among the most relevant early discourse I'm aware of on the Forum.) I hope this post can not only give the broader EA ecosystem a better sense of the roles MEAROs currently serve but also open discussion and get others thinking about how we can use MEAROs more effectively.Defining MEAROsMEAROs work to enhance and support the EA movement and its objectives within specific regions. This description is intentionally broad as MEAROs' work varies substantially between organizations and over time. My working definition of MEAROs requires the following characteristics:1. Region-Specific FocusTrue to EA values, MEAROs maintain a global outlook and are committed to solving the world's most pressing problems, but do this by promoting and supporting the EA movement and its objectives within a particular geographical area. The region could be a city, state, country, or alternative geographical unit, and the MEARO's activities and initiatives are typically tailored to the context and needs of that region.2. Focus on Meta-EAMeta Effective Altruism - the branch of the EA ecosystem MEAROs sit within - describes efforts to improve the efficiency, reach, and impact of the effect...

]]>
Rockwell https://forum.effectivealtruism.org/posts/GaCRw4fj9FEhQxnQ4/meta-ea-regional-organizations-mearos-an-introduction Wed, 21 Feb 2024 04:09:00 +0000 EA - Meta EA Regional Organizations (MEAROs): An Introduction by Rockwell Rockwell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 20:45 no full 15
EgkZayryAnNm9iKhZ EA - Farmed animal funding towards Africa is growing but remains highly neglected by AnimalAdvocacyAfrica Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Farmed animal funding towards Africa is growing but remains highly neglected, published by AnimalAdvocacyAfrica on February 21, 2024 on The Effective Altruism Forum.This post is a summary of our recent report "Mapping the Charitable Funding Landscape for Animal Welfare in Africa: Insights and Opportunities for Farmed Animal Advocacy". We only included the most relevant parts for EA Forum readers here and invite interested readers to consult the report for more details and insights.TL;DRFunding towards the farmed animal advocacy movement in Africa has grown significantly over the past years, especially from EA-aligned funders. Despite these increases, farmed animal advocacy remains underfunded.We hope that this report can help us and other stakeholders to more rapidly and effectively build the farmed animal advocacy movement in Africa. We aim to use and amplify the growing momentum identified in this report and call on any individual or organisations interested in contributing to this cause to contact us and/or increase their resources and focus dedicated towards farmed animal welfare in Africa.MotivationIndustrial animal agriculture is expanding rapidly in Africa, with the continent projected to account for the largest absolute increase in farmed land animal numbers of any continent between 2012 and 2050 (Kortschak, 2023).Despite its vast scale, the issue is highly neglected by charitable funding. Lewis Bollard (2019) estimated that farmed animal advocacy work in Africa received only USD 1 million in 2019, less than 1% of global funding for farmed animal advocacy. Farmed Animal Funders (2021) estimated funding to Africa at USD 2.5 million in 2021, a significantly higher but still very low amount. Accordingly, activists and organisations on the ground cite a lack of funding as the main bottleneck for their work (Tan, 2021).Since 2021, Animal Advocacy Africa (AAA) has actively worked towards strengthening the farmed animal advocacy movement in Africa, with some focus on improving funding. With this report, we aim to understand the funding landscape for farmed animal advocacy in Africa in depth, identifying key actors, patterns, and trends. Notably, we focus on charitable grants and exclude any government funding that might be relevant as well.Our research aims to build transparency and enhance information on what is being done to help animals in Africa, which can help various stakeholders to make better decisions. While we focus on farmed animals, we also provide context on other animal groups. We hope that the findings from our analysis can contribute to funders shifting some of their resources from less neglected and potentially lower-impact projects to more neglected and potentially higher-impact ones.Data basisBased on the funding records of 131 funders that we suspected might have funded African animal causes in the past, we created a database of 2,136 records of grants towards animal projects in Africa. This grant data allowed us to base our analysis on real-world data, which provides an important improvement to previous research, which was typically based on self-reported surveys with funders and/or charities.FindingsOverall funding levelsWe estimate at an 80% confidence level that the funders in scope for this analysis granted a total of USD 25 to 35 million to animal-related causes in Africa in 2020. These grants had substantially increased from 2018 to 2020.Funding for animal causes in Africa shows interesting patterns that contrast, to a certain extent, with trends observed in the animal advocacy movement globally (Animal Charity Evaluators, 2024). Wild animal and conservation efforts receive the most funding. Notably, the projects in this category do not follow the wild animal suffering approach typically discussed in Effective Altruism ...

]]>
AnimalAdvocacyAfrica https://forum.effectivealtruism.org/posts/EgkZayryAnNm9iKhZ/farmed-animal-funding-towards-africa-is-growing-but-remains Wed, 21 Feb 2024 01:43:39 +0000 EA - Farmed animal funding towards Africa is growing but remains highly neglected by AnimalAdvocacyAfrica AnimalAdvocacyAfrica https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:28 no full 17
X65Lxnd4hbGpuMq78 EA - From salt intake reduction to labor migration: Announcing top ideas for the AIM 2024 CE Incubation Program by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: From salt intake reduction to labor migration: Announcing top ideas for the AIM 2024 CE Incubation Program, published by CE on February 22, 2024 on The Effective Altruism Forum.TLDR: In this post, we announce our top four charity ideas to launch through our August 12-October 4, 2024 Incubation Program. They are the result of months of work by our research team, who selected them through a five-stageresearch process. We pick interventions that exceed ambitious cost-effectiveness bars, have a high quality of evidence, minimal failure modes, and high expected value. We're also announcing cause areas we'll investigate for the February-March 2025 IP.We're seeking people to launch these ideas through our nextIncubation Program. No particular previous experience is necessary - if you could plausibly see yourself excited to launch one of these charities, we encourage you to apply. The deadline for applications is April 14, 2024. You can apply to both August-October 2024 or February-March 2025 programs via the same link below:In the Incubation Program, we provide two months of cost-covered training, stipends (£1900/month during and for up to two months after the program), seed funding up to $200,000, operational support in the initial months, co-working space at our CE office in London, ongoing mentorship, and access to a community of alumni, funders, and experts. Learn more on ourCE Incubation Program page.One sentence summariesAdvocacy for salt intake reductionAn organization seeking to convince governments and the food industry to lower the content of salt in food by setting sodium limits and reformulating high-sodium foods, thereby improving cardiovascular health.Facilitating international labor migration via a digital platformAn organization that would facilitate the international migration of workers from low- and middle-income countries using a transparent digital platform paired with low-cost personalized support.Ceramic filters for improving water qualityAn organization focused on reducing the incidence of diarrhea and other waterborne illnesses by providing free ceramic water filters to families without access to clean drinking water.Participatory Learning and Action (PLA) groups for maternal and newborn healthAn organization focused on improving newborn and maternal health in rural villages by training facilitators and running PLA groups - a specific type of facilitated self-help group.One-paragraph SummariesAdvocacy for salt intake reductionHigh salt consumption contributes to poor cardiovascular health. Worldwide, cardiovascular diseases are the leading cause of death and are among the top ten contributors to years lived with disabilities. There is good evidence that reducing the amount of sodium people consume in their diets can reduce the risk of cardiovascular problems.Therefore, several countries have successfully reduced the sodium intake of their population, for example by setting sodium limits, which led to the food industry reformulating certain high-salt products. We think that an organization advocating and assisting in implementing these policies could cost-effectively improve the health of millions.Facilitating international labor migration via a digital platformMigrating abroad for work can bring huge financial benefits to people experiencing poverty. Millions of people every year try to tap into these benefits by applying for temporary jobs in higher-income countries. However, the market for matching candidates with jobs is often highly inefficient and riddled with misinformation, putting candidates at financial and personal risk. In many countries, it can cost candidates several years' worth of salaries to secure a job abroad.Fraud is also highly prevalent, leading to candidates often failing to migrate and instead ending up i...

]]>
CE https://forum.effectivealtruism.org/posts/X65Lxnd4hbGpuMq78/from-salt-intake-reduction-to-labor-migration-announcing-top Thu, 22 Feb 2024 16:42:12 +0000 EA - From salt intake reduction to labor migration: Announcing top ideas for the AIM 2024 CE Incubation Program by CE CE https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 20:24 no full 3
fBdRyaGuSgGM9PEnr EA - Upcoming EA conferences in 2024 by OllieBase Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Upcoming EA conferences in 2024, published by OllieBase on February 22, 2024 on The Effective Altruism Forum.In an unsurprising move, the Centre for Effective Altruism will be organising and supporting conferences for the EA community all over the world in 2024, including three new EAGx locations: Copenhagen, Toronto and Austin.We currently have the following events scheduled:EA GlobalEA Global: London | (May 31-June 2) | Intercontinental London (the O2) - applications close 19 MayEA Global: Boston | (November 1-3) | Hynes Convention Center - applications close 20 OctoberEAGxEAGxAustin | (April 13-14) | University of Texas, Austin - applications close 31 MarchEAGxNordics | (April 26-28) | CPH Conference, Copenhagen - applications close 7 AprilEAGxUtrecht | (July 5-7) | Jaarbeurs, UtrechtEAGxToronto | (August, provisional)EAGxBerkeley | (September, provisional)EAGxBerlin | (September 13-15) | Urania, BerlinEAGxAustralia | (November) | SydneyWe also hope to announce an EAGxLondon for early April very soon. A university venue was tentatively booked for late March, but the venue asked to reschedule. We're in the process of finalising a new date. We also expect to announce more events throughout the year.Applications for EAG London, EAG Boston, EAGxNordics and EAGxAustin are open. Applications for EAGxLondon will open as soon as the date is confirmed. We expect applications for the other conferences to open approximately 3 months before the event. Please go to the event page links above to apply.If you'd like to add EAG(x) events directly to your Google Calendar, usethis link.Some notes on these conferences:EA Globals are run in-house by the CEA events team, whereas EAGx conferences are organised independently by members of the EA community with financial support and mentoring from CEA.EA Global conferences have a high bar for admission and are for people who are very familiar with EA and are taking significant actions (e.g. full-time work or study) based on EA ideas.Admissions for EAGx conferences are processed independently by the EAGx conference organizers. These events are primarily for those who are newer to EA and interested in getting more involved.Please apply to all conferences you wish to attend once applications open - we would rather get too many applications for some conferences and recommend that applicants attend a different one than miss out on potential applicants to a conference.Travel support funds for events this year are limited (though will vary by event), and we can only accommodate a small number of requests. If you do not end up receiving travel support, this is likely the result of limited funds, rather than an evaluation of your potential for impact. When planning around an event, we recommend you act under the assumption that we will not be able to grant your travel funding request (unless it has already been approved).Find more info onour website.Feel free to email hello@eaglobal.org with any questions, or comment below. You can contact EAGx organisers using the format [location]@eaglobalx.org (e.g.austin@eaglobalx.org and nordics@eaglobalx.org).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
OllieBase https://forum.effectivealtruism.org/posts/fBdRyaGuSgGM9PEnr/upcoming-ea-conferences-in-2024 Thu, 22 Feb 2024 13:37:22 +0000 EA - Upcoming EA conferences in 2024 by OllieBase OllieBase https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:27 no full 5
ggzGwXLzc8MBfERTC EA - UPDATE: Critical Failures in the World Happiness Report's Model of National Satisfaction by Alexander Loewi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: UPDATE: Critical Failures in the World Happiness Report's Model of National Satisfaction, published by Alexander Loewi on February 24, 2024 on The Effective Altruism Forum.There has been a substantial update since this was first posted. The lead authors of the World Happiness Report have now reached out to me directly. Details at the bottom.The World Happiness Report (WHR) is currently the best known and most widely accepted source of information on global life satisfaction. Its six-variable model of satisfaction is reproduced in textbooks and has been published in the same form by the United Nations since 2012. However, almost no justification is given for why these six variables are used.In response, I attempted to do the only thing I thought was responsible -- do an exhaustive search over 5,000 variables in international datasets, and see empirically what actually predicts satisfaction. I've consulted with life satisfaction specialists both in economics departments and major think tanks, and none thought this had been done before.The variables that are selected by this more rigorous method are dramatically different from those of the WHR, and the resulting model is substantially more accurate both in and out of sample. In particular, the WHR leaves out entire categories of variables on subjects as varied, and as basic, as education, discrimination, and political power. Perhaps most dramatically, the way the WHR presents the data appears to suggest that GDP explains 40% of model variation.I find that, with my measurably more accurate model, it in fact predicts 2.5%.The graph below ranks the model variables by contribution, which is the amount of the total satisfaction of a country they are estimated to predict. For interpretation, 1.5 points of satisfaction on the 11-point scale is equivalent to the grief felt at the death of a life partner, meaning these numbers may be numerically small, but they are enormously significant behaviorally.All variables included here were chosen by a penalized regression out of a total of 1,058 viable candidates, after 5,500 variables were examined. (Most were too missing to be used and trusted.) They are colored by significance, with even least still marginally significant, and almost all significant at the 0.001 level.I have already gotten extremely positive feedback from academic circles, and have started looking for communities of practice that would find this valuable.A link to the paper is below:https://dovecoteinstitute.org/files/Loewi-Life-Satisfaction-2024.pdfUPDATE: The lead authors of the World Happiness Report have now reached out to me directly. Already this is a shock, as I had no idea if my findings would even be taken seriously. The authors suggested changes to my methods, and I have spent the last few weeks incorporating their suggestions, during which I thought it was only responsible to take the post down.However having now taken their recommendations into account -- I find the results are in every meaningful way identical, and in fact now substantially reinforced. The post and paper have been updated to reflect what changes there were.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Alexander Loewi https://forum.effectivealtruism.org/posts/ggzGwXLzc8MBfERTC/update-critical-failures-in-the-world-happiness-report-s Sat, 24 Feb 2024 09:33:14 +0000 EA - UPDATE: Critical Failures in the World Happiness Report's Model of National Satisfaction by Alexander Loewi Alexander Loewi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:11 no full 2
7LtRuKJzdmtN4ACij EA - Could Transparency International Be a Model to Improve Farm Animal Welfare? by cynthiaschuck Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Could Transparency International Be a Model to Improve Farm Animal Welfare?, published by cynthiaschuck on February 24, 2024 on The Effective Altruism Forum.OverviewThe text discusses the potential for adopting a model akin toTransparency International to improve farm animal welfare. By relying on standards related to consumers' right to information, this organization could develop and promote standard reporting methods, auditing systems, transparency rankings of companies and governments, and improved labelling and traceability of animal products.BackgroundIn recent years, significant advancements in farm animal welfare have been achieved through the concerted efforts of numerous organizations and individuals. These efforts have encompassed a wide range of interventions, including the improvement of housing systems, breeding practices, bans of particularly harmful procedures, and the development of new animal welfare legislation and standards.Despite these achievements, the path to securing a good life for farmed animals remains a long one. There is not only a need for additional reforms and their expansion into multiple geographies, but also for the assurance of their effectiveness in promoting animal welfare. Here we argue that the creation of mechanisms for increased transparency in the production chain is critical in this regard.Not uncommonly, producing companies find ways to circumvent reforms or exploit loopholes in enforcement, as evidenced by widespread violations of animal welfare legislation in the European Union (examples are availablehere,here andhere). Even when enforcement is effective, compliance with specific requirements may not necessarily translate into tangible welfare improvements.For example, abolishing practices leading to physical alterations such as tail docking in pigs or beak trimming in laying hens - band-aid solutions to the issues of tail biting and injurious pecking, respectively - may lead to poorer welfare if other management and housing measures are not simultaneously implemented to reduce the stress that triggers those behaviors.Similar issues may arise with the removal of antibiotics from the supply chain or changes in housing conditions without simultaneous adjustments in other practices.A promising approach in this context is a focus on the monitoring and report of meaningful welfare outcomes. It is at the animal level that the effectiveness of welfare policies should work, hence where monitoring is most needed. For example, setting maximum thresholds for outcomes such as the prevalence of diseases, fractures, and other injuries, as inspected at the slaughter line by independent auditors, would leave less room for evading welfare advancements.This approach, however, depends on companies having transparency about their operations. Transparency is fundamental in ensuring that practices align with actual welfare outcomes. Transparency is also crucial to bridge the gap between consumer preferences for ethically produced animal products and the reality of industrial agricultural practices.For example, the absence of clear labeling on animal products frequently leads to consumer confusion, hindering choices that are consistent with ethical values.The imperative for increased transparency in animal welfare practices mirrors the foundational principles of organizations likeTransparency International. Since its establishment, Transparency International has been globally recognized in the fight against corruption by using a strategic approach to making corruption more visible and difficult to conceal, setting a precedent for how sector-specific transparency organizations can drive systemic change.Similarly, adopting a model inspired by Transparency International in the realm of animal welfare could be transformative, in...

]]>
cynthiaschuck https://forum.effectivealtruism.org/posts/7LtRuKJzdmtN4ACij/could-transparency-international-be-a-model-to-improve-farm Sat, 24 Feb 2024 06:22:44 +0000 EA - Could Transparency International Be a Model to Improve Farm Animal Welfare? by cynthiaschuck cynthiaschuck https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:41 no full 4
ztx4ahn7yKNK23G5C EA - Bloomberg: Unacknowledged problems with LLINs are causing a rise in malaria. by Ian Turner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bloomberg: Unacknowledged problems with LLINs are causing a rise in malaria., published by Ian Turner on February 25, 2024 on The Effective Altruism Forum.In this article, Bloomberg claims that undisclosed manufacturing changes at one of the largest producers of anti-malaria bednets have led to distribution of hundreds of millions of ineffective (or less-effective) bednets, and that this problem is linked to an increase in malaria incidence in the places where these nets were distributed.The manufacturer is Vestergaard and the Against Malaria Foundation is among their clients.Bloomberg has a steep paywall but the link here gives free access until March 2.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Ian Turner https://forum.effectivealtruism.org/posts/ztx4ahn7yKNK23G5C/bloomberg-unacknowledged-problems-with-llins-are-causing-a Sun, 25 Feb 2024 17:07:54 +0000 EA - Bloomberg: Unacknowledged problems with LLINs are causing a rise in malaria. by Ian Turner Ian Turner https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:49 no full 2
JGazpLa3Gvvter4JW EA - Cooperating with aliens and (distant) AGIs: An ECL explainer by Chi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cooperating with aliens and (distant) AGIs: An ECL explainer, published by Chi on February 25, 2024 on The Effective Altruism Forum.SummaryEvidential cooperation in large worlds (ECL) is a proposed way of reaping gains - that is, getting more of what we value instantiated - through cooperating with agents across the universe/multiverse. Such cooperation does not involve physical, or causal, interaction. ECL is potentially acrucial consideration, because we may be able to do more good this way compared to the "standard" (i.e., causal) way of optimizing for our values.The core idea of ECL can be summarized as:According to non-causal decision theories, my decisions relevantly "influence" what others who are similar to me do, even if they never observe my behavior (or the causal consequences of my behavior). (More.)In particular, if I behave cooperatively towards other value systems, then other agents across the multiverse are more likely to do the same. Hence, at least some fraction of agents can be (acausally) influenced into behaving cooperatively towards my value system. This gives me reason to be cooperative with other value systems. (More.)Meanwhile, there are many agents in the universe/multiverse. (More.) Cooperating with them would unlock a great deal of value due to gains from trade. (More.) For example, if I care about the well-being of sentient beings everywhere, I can "influence" how faraway agents treat sentient beings in their part of the universe/multiverse.IntroductionTheobservable universe is large. Nonetheless, the full extent of the universe is likely much larger, perhaps infinitely so. This means that most of what's out there is not causally connected to us. Even if we set out now from planet Earth, traveling at the speed of light, we would never reach most locations in the universe. One might assume that this means most of the universe is not our concern.In this post, we explain why all of the universe - and all of the multiverse, if it exists - may in fact concern us if we take something called evidential cooperation in large worlds (ECL) into account.[1] Given how high the stakes are, on account of how much universe/multiverse might be out there beyond our causal reach, ECL is potentially very important. In our view, ECL is a crucial consideration for the effective altruist project of doing the most good.In the next section of this post, we explain the theory underlying ECL. Building on that, we outline why we might be able to do ECL ourselves and how it allows us to do more good. We conclude by giving some information on how you can get involved. We will also publishan FAQ in the near future, which will address some possible objections to ECL.The twin prisoners' dilemmaExact copiesSuppose you are in aprisoner's dilemma with an exact copy of yourself:You have a choice: You can either press the defect button, which increases your own payoff by $1, or you can press the cooperate button, which increases your copy's payoff by $2.Your copy faces the same choice (i.e., the situation is symmetric).Both of you cannot see the other's choice until after you have made your own choice. You and your copy will never interact with each other after this, and nobody else will ever observe what choices you both made.You only care about your own payoff, not the payoff of your copy.This situation can be represented with the followingpayoff matrix:Looking at the matrix, you can see that regardless of whether your copy cooperates or defects, you are better off if you defect. "Defect" is thestrictly dominant strategy. Therefore, under standard notions of rational decision making, you should defect. In particular,causal decision theory - read in the standard way - says to defect (Lewis, 1979).However, the other player is an exact copy of y...

]]>
Chi https://forum.effectivealtruism.org/posts/JGazpLa3Gvvter4JW/cooperating-with-aliens-and-distant-agis-an-ecl-explainer-1 Sun, 25 Feb 2024 13:45:03 +0000 EA - Cooperating with aliens and (distant) AGIs: An ECL explainer by Chi Chi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 45:42 no full 3
sTpKs4foGLnaTGACe EA - My favorite articles by Brian Tomasik and what they are about by Timothy Chan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My favorite articles by Brian Tomasik and what they are about, published by Timothy Chan on February 25, 2024 on The Effective Altruism Forum.Cross-posted from my website.IntroductionBrian Tomasik has written a lot ofessays on reducing suffering. In this post, I've picked out my favorites. If you're thinking about reading his work, this list could be a good place to start. Note that this is based on what I personally find interesting; it is not a definitive guide. These are listed in no particular order.Dissolving Confusion about ConsciousnessConsciousness is a "cluster in thingspace" comprising physical systems that we consider to be similar in some way. It is a label for systems, not an essence within systems. Also, like defining "tables", defining "consciousness" may be similarly arbitrary and fuzzy.The Eliminativist Approach to ConsciousnessInstead of thinking in terms of "conscious" and "unconscious" we should directly focus on how physical systems work. Aspects of systems are not merely indicators of pain, but are part of the cluster of things that we call "pain" (attribute the label "pain").Tomasik also draws parallels between eliminativism and panpsychism and highlights that there is a shared implication of both theories that there is no clear separation of consciousness with physical reality, which may further suggest that we should put more weight to ideas that less complex systems can suffer.How to Interpret a Physical System as a MindUses the concept of a "sentience classifier" to describe how we might interpret physical systems as minds. Distinct theories offer different approaches to building the classifier. Classification then involves "identifying the traits of the physical system in question" (taking in the data and searching for relevant features) as a first step and "mapping from those traits to high-level emotions and valences" (labeling the data) as a second step.Our brains might already be vaguely implementing the sentience classifier - albeit with more messiness, complexity, and components and processes particular to the brain.The Many Fallacies of DualismThis article touches on a common theme underlying Tomasik's approach to topics like consciousness, free will, moral (anti)realism, and mathematical (anti)realism: rejecting dualism in favor of a simpler physicalist monism.The Importance of Wild Animal SufferingA good introduction to the topic. Discusses the extensive suffering experienced by wild animals due to natural causes like disease, predation, and environmental hardships, which may outweigh moments of happiness. Vast numbers and short, brutal lives of wild animals make their suffering a significant ethical issue.Why Vegans Should Care about Suffering in NatureA shorter introduction to the topic.The Horror of SufferingA vivid reflection on the horror of suffering. Suffering is not merely an abstract concept but a dire reality that demands urgent moral attention. There is a long history of intuitions that prioritize the reduction of suffering.One Trillion FishShort piece on the direct harms caused by large-scale fishing (though note that when taking into account population changes and wild-animal suffering, the sign of this is less clear). Suggests humane slaughter of fish as an intervention that side-steps the uncertainty of net impact of fishing on wild-animal suffering.How Does Vegetarianism Impact Wild-Animal Suffering?Note that there are likely more comprehensive analyses now. That animal suffering is increased in some ways because of a vegetarian/vegan diet is counterintuitive but important to recognize. You might still want to be vegetarian/vegan and not eat meat as it might help with becoming more motivated to reduce suffering.How Rainforest Beef Production Affects Wild-Animal SufferingCreating cattle pas...

]]>
Timothy Chan https://forum.effectivealtruism.org/posts/sTpKs4foGLnaTGACe/my-favorite-articles-by-brian-tomasik-and-what-they-are Sun, 25 Feb 2024 08:58:38 +0000 EA - My favorite articles by Brian Tomasik and what they are about by Timothy Chan Timothy Chan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:22 no full 5
QBsCLkiEMpNmjPmzN EA - AI-based disinformation is probably not a major threat to democracy by Dan Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI-based disinformation is probably not a major threat to democracy, published by Dan Williams on February 25, 2024 on The Effective Altruism Forum.[Note: this essay was originally posted to my website, https://www.conspicuouscognition.com/p/ai-based-disinformation-is-probably. A few people contacted me to suggest that I also post it here in case of interest].Many people are worried that the use of artificial intelligence in generating or transmitting disinformation poses a serious threat to democracies. For example, the Future of Life Institute's 2023 Open Letter demanding a six-month ban on AI development asks: "Should we let machines flood our information channels with propaganda and untruth?" The question reflects a general concern that has been highly influential among journalists, experts, and policy makers.Here is just a small sample of headlines from major media outlets:More generally, amidst the current excitement about AI, there is a popular demand for commentators and experts who can speak eloquently about the dangers it poses. Audiences love narratives about threats, especially when linked to fancy new technologies. However, most commentators don't want to go full Eliezer Yudkowsky and claim that super-intelligent AI will kill us all.So they settle for what they think is a more reasonable position, one that aligns better with the prevailing sensibility and worldview of the liberal commentariat: AI will greatly exacerbate the problem of online disinformation, which - as every educated person knows - is one of the great scourges of our time.For example, in the World Economic Forum's 2024 Global Risks Report surveying 1500 experts and policy makers, they list "misinformation and disinformation" as the top global risk over the next two years:In defence of this assessment, a post on the World Economic Forum's website notes:"The growing concern about misinformation and disinformation is in large part driven by the potential for AI, in the hands of bad actors, to flood global information systems with false narratives."This idea gets spelled out in different ways, but most conversations focus on the following threats:Deepfakes (realistic but fake images, videos, and audio generated by AI) will either trick people into believing falsehoods or cause them to distrust all recordings on the grounds they might be deepfakes.Propagandists will use generative AI to create hyper-persuasive arguments for false views (e.g. "the election was stolen").AI will enable automated disinformation campaigns. Propagandists will use effective AI bots instead of staffing their troll farms with human, all-too-human workers.AI will enable highly targeted, personalised disinformation campaigns ("micro-targeting").How worried should we be about threats like these? As I return to at the end of this essay, there are genuine dangers when it comes to the effects of AI on our informational ecosystem. Moreover, as with any new technology, it is good to think pro-actively about risks, and it would be silly to claim that worries about AI-based disinformation lack any foundation at all.Nevertheless, at least when it comes to Western democracies, the alarmism surrounding this topic generally rests on popular but mistaken beliefs about human psychology, democracy, and disinformation.In this post, I will identify four facts that many commentators on this topic neglect. Taken collectively, they imply that many concerns about the effects of AI-based disinformation on democracies are greatly overstated.Online disinformation does not lie at the root of modern political problems.Political persuasion is extremely difficult.The media environment is highly competitive and demand-driven.The establishment will have access to more powerful forms of AI than counter-establishment sources.1. Onl...

]]>
Dan Williams https://forum.effectivealtruism.org/posts/QBsCLkiEMpNmjPmzN/ai-based-disinformation-is-probably-not-a-major-threat-to Sun, 25 Feb 2024 06:27:17 +0000 EA - AI-based disinformation is probably not a major threat to democracy by Dan Williams Dan Williams https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:17 no full 7
6KNSCxsTAh7wCoHko EA - Nuclear war tail risk has been exaggerated? by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nuclear war tail risk has been exaggerated?, published by Vasco Grilo on February 26, 2024 on The Effective Altruism Forum.The views expressed here are my own, not those of Alliance to Feed the Earth in Disasters (ALLFED), for which I work as a contractor.SummaryI calculated a nearterm annual risk ofhuman extinction fromnuclear war of 5.93*10^-12 (more).I consider grantmakers and donors interested in decreasing extinction risk had better focus on artificial intelligence (AI) instead of nuclear war (more).I would say the case for sometimes prioritising nuclear extinction risk over AI extinction risk is much weaker than the case for sometimes prioritisingnatural extinction risk over nuclear extinction risk (more).I get a sense the extinction risk from nuclear war was massively overestimated in The Existential Risk Persuasion Tournament (XPT) (more).I have the impression Toby Ord greatly overestimated tail risk inThe Precipice (more).I believe interventions to decrease deaths from nuclear war should be assessed based on standardcost-benefit analysis (more).I think increasing calorie production via new food sectors is less cost-effective to save lives than measures targeting distribution (more).Extinction risk from nuclear warI calculated a nearterm annual risk of human extinction from nuclear war of 5.93*10^-12 (= (6.36*10^-14*5.53*10^-10)^0.5) from thegeometric mean between[1]:My prior of6.36*10^-14 for the annual probability of a war causing human extinction.Myinside view estimate of 5.53*10^-10 for the nearterm annual probability of human extinction from nuclear war.By nearterm annual risk, I mean that in a randomly selected year from 2025 to 2050. I computed my inside view estimate of 5.53*10^-10 (= 0.0131*0.0422*10^-6) multiplying:1.31 % annual probability of a nuclear weapon being detonated as an act of war.4.22 % probability of insufficient calorie production given at least one nuclear detonation.10^-6 probability of human extinction given insufficient calorie production.I explain the rationale for the above estimates in the next sections. Note nuclear war might havecascade effects which lead tocivilisational collapse[2], which could increase longterm extinction risk while simultaneously having a negligible impact on the nearterm one I estimated. I do not explicitly assess this in the post, but I guess the nearterm annual risk of human extinction from nuclear war is a good proxy for theimportance of decreasing nuclear risk from alongtermist perspective:My prior implicitly accounts for the cascade effects of wars. I derived it from historical data on the deaths of combatants due to not only fighting, but also disease and starvation, which are ever-present indirect effects of war.Nuclear war might havecascade effects, but so do other catastrophes.Global civilisational collapse due to nuclear war seems very unlikely to me. For instance, the maximum destroyable area by any country in anuclear 1st strike was estimated to be65.3 k km^2 inSuh 2023 (for a strike by Russia), which is just 70.8 % (= 65.3*10^3/(92.2*10^3)) of the area of Portugal, or 3.42 % (= 65.3*10^3/(1.91*10^6)) of the global urban area.Even if nuclear war causes a global civilisational collapse which eventually leads to extinction, Iguess full recovery would be extremely likely. In contrast, an extinction caused by advanced AI would arguably not allow for a full recovery.I am open to the idea that nuclear war can have longterm implications even in the case of full recovery, but considerations along these lines would arguably be more pressing in the context of AI risk.For context, William MacAskillsaid the following on The 80,000 Hours Podcast. "It's quite plausible, actually, when we look to the very long-term future, that that's [whether artificial...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/6KNSCxsTAh7wCoHko/nuclear-war-tail-risk-has-been-exaggerated Mon, 26 Feb 2024 13:34:18 +0000 EA - Nuclear war tail risk has been exaggerated? by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:08:25 no full 2
RsdmLNweTfn4jFfvQ EA - How much parenting harms productivity and how you can reduce it by Nicholas Kruus Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much parenting harms productivity and how you can reduce it, published by Nicholas Kruus on February 26, 2024 on The Effective Altruism Forum.IntroMany EAs factor children's effects on their personal impact when deciding whether to have them (example). To offer some insight for potential parents, I tried to summarize the best research I could find on parenthood's impact on people's productivity, though I was surprised at the lack of robust literature (especially more recently). The following information comes from four studies: one published inScience[1], one published inNature[2], one from theFederal Reserve of St. Louis[3], and one published inSocial Studies of Science[4]. They all focus on academics and quantitatively measure productivity with research output metrics (the footnotes contain more detail about each).How parenting impacts productivityTLDR: The trend for having one child seems to be a short-term reduction in productivity (median: 17%, mean: ~23%) for mothers that peters out after ~10 years. There is usually little effect on fathers, but fathers who are primary caregivers (or otherwise more engaged with their children) suffer similar short-term (<10 year) productivity losses. Each additional child seems to decrease short-term productivity by an additional 11%.Science:Short-term (<10 years after having children):Consistent effects on mothers: The paper finds a ~17%, ~24%, and ~48% decrease in productivity[1] for those in computer science, business, and history, respectively. The authors propose that the different levels of cooperation in these fields may explain the variation in productivity impact - i.e., those in more cooperative fields may suffer lower productivity losses.They also note their results likely underestimate the effects because their sample didn't include parents who left academia (which may have been prompted by their children directly or indirectly)Inconsistent effects on fathers lead the authors to conclude there is "no clear evidence" of a short-term productivity decrease for fathers.Long-term (>10 years after having children):Inconclusive results for both mothers and fathers.Federal Reserve of St. Louis:Short-term (<12 years after having children):Women's productivity[3] decreased by 15%-17% on average. The total productivity cost of having one, two, or three preteens was 9.5%, 22%, and 33%."Men's productivity is not associated with their family situation in an economically significant manner."Long-term (>12 years after having children):Parenting has no effect on productivity for mothers or fathers, so long as they have their children on purpose after they turn 30 years old.Social Studies of Science:Overall:8% and 12% decline in research productivity and visibility[4], respectively, for men and women combined. For women, the decrease was 15%.To illustrate the cumulative effect of this, mothers were 2 years behind their childless counterparts in the number of papers they published 18 years after having their children.How to minimize productivity impactsHave kids laterEconomists who become mothers before 30 suffer a 13% decrease in overall (short- & long-term) productivity[3], whereas those having children after 30 do not (Fed of St. Louis).Employment at an institution 100 ranks higher correlates with an additional 1-year delay before having children. However, this might be explained by personality: Perhaps, the type of people who wait to have children are the type of people who become employed at higher-ranked institutions (Science).Take parental leaveTaking parental leave shorter than 1 month does not mitigate productivity losses, but parental leave longer than 1 month and less than 12 months correlated with an 11%-17% productivity[2] improvement (Nature).Be a lazier parent and divide labor bet...

]]>
Nicholas Kruus https://forum.effectivealtruism.org/posts/RsdmLNweTfn4jFfvQ/how-much-parenting-harms-productivity-and-how-you-can-reduce 10 years after having children):Inconclusive results for both mothers and fathers.Federal Reserve of St. Louis:Short-term (<12 years after having children):Women's productivity[3] decreased by 15%-17% on average. The total productivity cost of having one, two, or three preteens was 9.5%, 22%, and 33%."Men's productivity is not associated with their family situation in an economically significant manner."Long-term (>12 years after having children):Parenting has no effect on productivity for mothers or fathers, so long as they have their children on purpose after they turn 30 years old.Social Studies of Science:Overall:8% and 12% decline in research productivity and visibility[4], respectively, for men and women combined. For women, the decrease was 15%.To illustrate the cumulative effect of this, mothers were 2 years behind their childless counterparts in the number of papers they published 18 years after having their children.How to minimize productivity impactsHave kids laterEconomists who become mothers before 30 suffer a 13% decrease in overall (short- & long-term) productivity[3], whereas those having children after 30 do not (Fed of St. Louis).Employment at an institution 100 ranks higher correlates with an additional 1-year delay before having children. However, this might be explained by personality: Perhaps, the type of people who wait to have children are the type of people who become employed at higher-ranked institutions (Science).Take parental leaveTaking parental leave shorter than 1 month does not mitigate productivity losses, but parental leave longer than 1 month and less than 12 months correlated with an 11%-17% productivity[2] improvement (Nature).Be a lazier parent and divide labor bet...]]> Mon, 26 Feb 2024 10:19:08 +0000 EA - How much parenting harms productivity and how you can reduce it by Nicholas Kruus 10 years after having children):Inconclusive results for both mothers and fathers.Federal Reserve of St. Louis:Short-term (<12 years after having children):Women's productivity[3] decreased by 15%-17% on average. The total productivity cost of having one, two, or three preteens was 9.5%, 22%, and 33%."Men's productivity is not associated with their family situation in an economically significant manner."Long-term (>12 years after having children):Parenting has no effect on productivity for mothers or fathers, so long as they have their children on purpose after they turn 30 years old.Social Studies of Science:Overall:8% and 12% decline in research productivity and visibility[4], respectively, for men and women combined. For women, the decrease was 15%.To illustrate the cumulative effect of this, mothers were 2 years behind their childless counterparts in the number of papers they published 18 years after having their children.How to minimize productivity impactsHave kids laterEconomists who become mothers before 30 suffer a 13% decrease in overall (short- & long-term) productivity[3], whereas those having children after 30 do not (Fed of St. Louis).Employment at an institution 100 ranks higher correlates with an additional 1-year delay before having children. However, this might be explained by personality: Perhaps, the type of people who wait to have children are the type of people who become employed at higher-ranked institutions (Science).Take parental leaveTaking parental leave shorter than 1 month does not mitigate productivity losses, but parental leave longer than 1 month and less than 12 months correlated with an 11%-17% productivity[2] improvement (Nature).Be a lazier parent and divide labor bet...]]> 10 years after having children):Inconclusive results for both mothers and fathers.Federal Reserve of St. Louis:Short-term (<12 years after having children):Women's productivity[3] decreased by 15%-17% on average. The total productivity cost of having one, two, or three preteens was 9.5%, 22%, and 33%."Men's productivity is not associated with their family situation in an economically significant manner."Long-term (>12 years after having children):Parenting has no effect on productivity for mothers or fathers, so long as they have their children on purpose after they turn 30 years old.Social Studies of Science:Overall:8% and 12% decline in research productivity and visibility[4], respectively, for men and women combined. For women, the decrease was 15%.To illustrate the cumulative effect of this, mothers were 2 years behind their childless counterparts in the number of papers they published 18 years after having their children.How to minimize productivity impactsHave kids laterEconomists who become mothers before 30 suffer a 13% decrease in overall (short- & long-term) productivity[3], whereas those having children after 30 do not (Fed of St. Louis).Employment at an institution 100 ranks higher correlates with an additional 1-year delay before having children. However, this might be explained by personality: Perhaps, the type of people who wait to have children are the type of people who become employed at higher-ranked institutions (Science).Take parental leaveTaking parental leave shorter than 1 month does not mitigate productivity losses, but parental leave longer than 1 month and less than 12 months correlated with an 11%-17% productivity[2] improvement (Nature).Be a lazier parent and divide labor bet...]]> Nicholas Kruus https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:33 no full 3
z59wybc56FCAysrAe EA - How we started our own EA charity (and why we decided to wrap up) by KvPelt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How we started our own EA charity (and why we decided to wrap up), published by KvPelt on February 26, 2024 on The Effective Altruism Forum.This post shares our journey in starting an Effective Altruism (EA) charity/project focused on Mediterranean fish welfare, the challenges we faced, our key learnings, and the reasons behind our decision to conclude the project. Actual research results are published in aLiterature review and article.Key pointsThe key points of this post are summarized as follows:We launched a project with the goal of enhancing fish welfare in Mediterranean aquaculture.We chose to limit our project to gathering information and decided against continuing our advocacy efforts after our initial six months.Our strategy, which focused on farmer-friendly outreach, was not effective in engaging farmers.The rationale behind our decision is the recognition that existing organizations are already performing excellent work, and we believe that funders should support these established organizations instead of starting a new one.The support and resources from the Effective Altruism (EA) and animal welfare communities were outstanding.Despite the project not achieving its intended outcomes, we view the overall experience positively. It's common for new charities to not succeed; the key is to quickly determine the viability of your idea, which we believe we have accomplished.Note: Ren has recently begun working as a guest fund manager for the EA Funds Animal Welfare Fund. The views that we express in this article are our views, and we are not speaking for the fund.Personal/Project backgroundBefore delving into our project we'll provide a quick background of our profiles and how we got to starting this project.KoenDuring my Masters in Maritime/Offshore engineering (building floating things) I got interested in animal welfare. Due to engagement with my EA university group (EA Delft) and by attending EAG(x)Rotterdam I became interested and motivated to use my career to work on animal welfare. I hoped to apply my maritime engineering background in a meaningful way, which led me to consider aquatic animal welfare.I attended EAGLondon in 2023 with the goal of finding career opportunities and surprisingly this worked! I talked to many with backgrounds in animal welfare (AW) and engineering and in one of my 1on1's I met someone who would later connect me with Ren.RenAs a researcher, Ren has been working at Animal Ask for the past couple of years conducting research to support the animal advocacy movement. However, Ren still feels really sad about the scale of suffering endured by animals, and this was the motivation to launch a side project.Why work on Mediterranean fish welfare?This project originated out of a desire to work on alleviating extreme-suffering. More background on the arguments to focus on extreme-suffering is discussed in Ren'searlierforum post. When the welfare of nonhuman animals is not taken into account during slaughter, extreme-suffering is likely to occur. Also, from Ren's existing work at Animal Ask, they knew that stunning before slaughter is often quite well-understood and tractable.Therefore, Ren produced a systematic spreadsheet of every farmed animal industry in developed countries (i.e., those countries where Ren felt safe and comfortable working). This spreadsheet included information on a) the number of animals killed, and b) whether those animals were already being stunned before slaughter. Three industries emerged as sources of large-scale, intense suffering:1. Farmed shrimp in the United States,2. Farmed shrimp in Australia, and3. Sea bass and sea bream in the Mediterranean.Ren actually looked at farmed shrimp initially, and work on these projects may continue in the future, but there are some technical reasons ...

]]>
KvPelt https://forum.effectivealtruism.org/posts/z59wybc56FCAysrAe/how-we-started-our-own-ea-charity-and-why-we-decided-to-wrap Mon, 26 Feb 2024 09:59:50 +0000 EA - How we started our own EA charity (and why we decided to wrap up) by KvPelt KvPelt https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:38 no full 4
NyNa4YYAm7zQuLLva EA - Announcing Draft Amnesty Week (March 11-17) by tobytrem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Draft Amnesty Week (March 11-17), published by tobytrem on February 27, 2024 on The Effective Altruism Forum.TL;DRThe Forum will hold a Draft Amnesty Week from March 11th-17th.Draft Amnesty Week is a chance to share unpolished drafts, posts you aren't sure you agree with, and drafts which have become toough-y to finish.We'll host smaller events and threads in the weeks before, to help you come up with ideas, prioritise your drafts, and polish anything that needs polishing. Keep an eye out (and read more below).This is a variation on an event the Forum team ran in 2022:see more here.How does Draft Amnesty Week work?Optionally - you take part in the buildup events this week and next week, to come up with ideas for posts to write, and find time to draft them.During Draft Amnesty Week (March 11-17), you publish your draft post(s). To make it clear that they are a part of the Draft Amnesty event, you can put this table[1] at the top of your post, and tag the post with the Draft Amnesty Week tag. There'll be a banner up on the Forum.Even if you don't post, you can give authors feedback! Vote, comment; help us all highlight the best ideas from the event.After the week has ended, I'll write a round-up post reviewing some of the best submissions.Why run Draft Amnesty Week?Firstly, because I suspect that many of you have interesting outlines or half-drafted posts in your google drives. Draft Amnesty Week is a time when we have social permission to release these into the world (and to press your friends and colleagues into doing the same).Linch's post from last time is a great example of an incomplete post which is much better out and published than in and unseen.Secondly, because our lastDraft Amnesty event went really well. We got valuable posts like Clifford'sLessons learned from Tyve, and a post onlessons for AI governance from early electricity regulation, which would likely not have been made public at the time, if it wasn't for the event.TimetableThis week (Feb 26 to March 3): Collecting IdeasI've posted a question thread, asking: "What posts would you like someone to write?"(like this past edition). Draft Amnesty Week is a great opportunity to have a crack at providing one of these posts, even if you aren't confident you can do a perfect job.March 4-10: Writing WeekI'm running two events on theEA gather.townfor Forum users to get together and hack together some drafts.The first will be on March 5th from 10am-12pm UTC. Sign up here.The second will be on March 7th from 6-8pm UTC. Sign up here.I'll also post a thread asking "What posts are you considering writing?" (likethis one). This is an opportunity to gauge how excited the Forum is about your different ideas, and prioritise between them.Draft Amnesty Week (March 11-17)This is the week when we will actually post our draftsDraft Amnesty Week FAQHow draft-y is too draft-y?My main recommendation is that the reader should be able to understand your writing, even if it is incomplete, unpolished, or skeletal. For example, it is okay to have a bullet point such as:"I was going to write something about how this problem applies to the problem of moral knowledge here, but reading thisEncyclopedia page because aversive, so I'm skipping over it for now"However, it wouldn't be as valuable for the reader to see a bullet point such as:"Insert moral knowledge stuff"If you want a second pair of eyes, DM me and I'll look at your draft.What if I don't want to see Draft Amnesty posts?If you would like to opt out of seeing Draft Amnesty posts, go to the Frontpage, and follow the steps in the GIF below:I'm worried my draft isn't high quality enough, should I still post it?Short answer - yes.Long answer - the karma system is designed to show people posts which the Forum community judge...

]]>
tobytrem https://forum.effectivealtruism.org/posts/NyNa4YYAm7zQuLLva/announcing-draft-amnesty-week-march-11-17 Tue, 27 Feb 2024 22:32:06 +0000 EA - Announcing Draft Amnesty Week (March 11-17) by tobytrem tobytrem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:17 no full 1
bjjrajTMiuphdWHH8 EA - What posts would you like someone to write? by tobytrem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What posts would you like someone to write?, published by tobytrem on February 27, 2024 on The Effective Altruism Forum.I'm posting this to tie in with the Forum's Draft Amnesty Week (March 11-17) plans, but it is also a question of more general interest. The last time this question was posted, it got some great responses.When answering in this thread, I suggest putting each idea in a different answer, so that comment threads don't get too confusing.If you think someone has already written the answer to a user's question, consider lending a hand and linking it in the comments.A few suggestions for possible answers:A question you would like someone to answer: "How, historically, did AI safety become an EA cause area?"A type of experience you would like to hear about: "I'd love to hear about the experience of moving from consulting into biosecurity policy. Does anyone know anyone like this who might want to write about their experience?"If you find yourself with loads of ideas, consider writing a full "posts I would like someone to write" post.Draft Amnesty WeekIf you see a post idea here which you think you might be positioned to answer, Draft Amnesty Week (March 11-17) might be a great time to post it. In Draft Amnesty Week, your posts don't have to be fully thought through, or even fully drafted. Bullet-points and missing sections are allowed, so you can have a lower bar for posting.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
tobytrem https://forum.effectivealtruism.org/posts/bjjrajTMiuphdWHH8/what-posts-would-you-like-someone-to-write Tue, 27 Feb 2024 15:45:07 +0000 EA - What posts would you like someone to write? by tobytrem tobytrem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:31 no full 3
jhaS4ogXbGMGuqXgb EA - Meta Charity Funders: Launching the 2nd round by Vilhelm Skoglund Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta Charity Funders: Launching the 2nd round, published by Vilhelm Skoglund on February 27, 2024 on The Effective Altruism Forum.Previously:Launch of 1st round &1st round retrospective (recommended for all applicants).We are now launching the second round of Meta Charity Funders.Apply for funding by March 24th orjoin the circle as a donor.Below we will first list some updates from the previous round before providing some information primarily meant for people who intend to apply to the next round.Updates from the previous roundWe expect to grant more money this time than the last time ($686,580), as we have more members and people still haven't deployed their yearly spending, which was the case for many members the last time around. Our expected grant amount this time around is $500k - $3mio, though this is still uncertain and is dependent on funder-applicant fit.We are now 10 members in the circle, up from 9 last time around.We expect to fund many initiatives not on this list, but some projects that members of our circle have expressed extra interest in funding this round:Effective Giving/Giving multiplier-organizations, such as the ones incubated by CE'sEffective Giving Incubation program.Career development programs, that increase the number of individuals working in high-impact areas, including GCR reduction, animal welfare and Global HealthEspecially in regions where there currently are fewer opportunities to engage in such programsResearch/Mapping of the Meta space, to better understand current gaps and opportunities.Information for this roundIn this part we will outline the application process and guide you as an applicant to create a good application. We expect all applicants to have read this part to inform themselves.ProcessThe expected process is as follows:Applications open, February 26thStick to 100 words in the summary, this should give a quick overview of the project.In the full project description, please include a main summarizing document no longer than 2 pages. This is all we can commit to reading for the first stage. Any extra material can only be expected to be read if we choose to go further with your application.When choosing the "Meta" category, please be as truthful as possible, it's obvious (and reflects negatively on the application) when a project has deliberately been placed in a category in which it does not belong.Applications close, March 24thInitial application review finished, March 31stIf your project has been filtered out during the initial application review (which we expect 60-80% of applications will), we will let you know around end of March.Interviews, due diligence, deliberations, April 1st - May 14thIf your application has passed the initial application review, we will discuss it during our gatherings and we might reach out to you to gather more information, for example by conducting an interview. This is still not a commitment from us to fund you.Decisions made, May 15thWe expect to pay out the grants in the weeks following the decisions.What we mean by MetaA common reason for rejection in the last round was that projects were not within scope for the funding circle. We recognize that this was primarily our fault as we never clearly defined it, so we will try to make it a bit clearer here.Meta organizations are those that operate one step removed from direct impact interventions. These can focus on the infrastructure, evaluation, and strategic guidance necessary for the broader field to maximize effectiveness and impact. They are essential in bridging gaps, identifying high-impact opportunities, and enabling other organizations to achieve or amplify their end-product impact.Below we will list a couple of illustrative examples. Note that we only chose these examples because we think the org...

]]>
Vilhelm Skoglund https://forum.effectivealtruism.org/posts/jhaS4ogXbGMGuqXgb/meta-charity-funders-launching-the-2nd-round Tue, 27 Feb 2024 02:57:08 +0000 EA - Meta Charity Funders: Launching the 2nd round by Vilhelm Skoglund Vilhelm Skoglund https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:46 no full 4
BvXkG3PLfdmvoECFb EA - This is why we can't have nice laws by LewisBollard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: This is why we can't have nice laws, published by LewisBollard on February 28, 2024 on The Effective Altruism Forum.Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.How factory farmers block progress - and what we can do about itMost people agree that farmed animals deserve better legal protections:84% of Europeans,6180% of Americans,70% of Brazilians,5166% of Chinese, and52% of Indians agree with some version of that statement. Yet almost all farmed animals globally still lack even the most basic protections.America has about five times morevegetarians thanfarmers - and many more omnivores who care about farm animals. Yet the farmers wield much more political power. Fully89% of Europeans think it's important that animals not be kept in individual cages. Yet the European Commission just implicitly sided with the8% who don't by shelving a proposed cage ban.When farm animal welfare is on the ballot, it usually wins. Most recently, citizens in California and Massachusetts voted for bans on cages and crates by63% and78% respectively. Yet both ballot measures were only necessary because the state legislatures refused to act.The US Congress last legislated on farm animal welfare 46 years ago - only at slaughter and only for mammals. Yet I'm not aware of a single bill in the current Congress that would regulate on-farm welfare. Instead, Congress is consideringtwobills to strike down state farm animal welfare laws, plus ahandfulofbills to hobble the alternative protein industry.What's going on? Why do politicians worldwide consistently fail to reflect the will of their own voters for stronger farm animal welfare laws? And how can we change that?Milking the systemFactory farmers have an easier assignment than us: they're mostly asking politicians to do nothing - a task many excel at. It's much harder to pass laws than to stop them, which may be why animal advocates have had more success in blocking industry legislation, likeag-gag laws andlaws to censor plant-based meat labels.Factory farmers are also playing on their home turf. Animal welfare bills are typically referred to legislatures' agriculture committees, where the most anti-reform politicians reside. Aquarter of the European Parliament's agriculture committee are farmers, and many of the rest represent rural areas.But these factors are both common to all animal welfare legislation, not just laws to protect farmed animals. Yet most nations and US states have still passed anti-cruelty laws for other animals. And most of these laws go beyond protecting just cats and dogs. Organizing a fight between two chickens is punishable by up to five years in jail in the US - even as abusing two million farmed chickens is not punishable at all.A few other factors are unique to farm animal-focused laws. They may raise food prices; EU officials recently gave thatexcuse for shelving proposed reforms. Farm animal cruelty is not top of mind for most voters, so politicians don't expect to suffer repercussions for ignoring it. And factory farming can be dismissed as a far-left issue, which only Green politicians need worry about.But this isn't the whole story.Surveys show that most voters across the political spectrum support farm animal welfare laws. Politicians often work on issues that aren't top of mind for most voters; Congress recentlylegislated to help the nation's one million duck hunters. And politicians happily pass laws that may raise food prices to achieve other social goals, like higher minimum wages, stricter food safety standards, and farm price support schemes.Master lobbyists, with tractorsA more potent factor is the farm lobby. ...

]]>
LewisBollard https://forum.effectivealtruism.org/posts/BvXkG3PLfdmvoECFb/this-is-why-we-can-t-have-nice-laws Wed, 28 Feb 2024 21:40:41 +0000 EA - This is why we can't have nice laws by LewisBollard LewisBollard https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:35 no full 1
dh6uede7CwbKWfqXP EA - Results of my 2024 r/Vegan Survey on what influences people to go Vegan. by PreciousPig Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Results of my 2024 r/Vegan Survey on what influences people to go Vegan., published by PreciousPig on February 28, 2024 on The Effective Altruism Forum.I conducted a survey in the largest Vegan Online Community (the r/vegan subreddit) on why people go Vegan in February 2024. The survey consisted of 20 questions asking which interventions played a significant role in people's decisions to go Vegan (aswell as some general question). The first question of the survey can be found here, all following questions are linked from it: https://www.reddit.com/r/vegan/comments/1arfnyc/the_2024_rvegan_survey_part_1_conversations_with/Primary goal of this survey: My primary aim compared to similar polls in the past (such as this one: https://vomad.life/survey/ ) was to find out for what percentage of people who came in contact with a specific intervention this intervention played a role in their decision to go Vegan.(Hypothetical example: A lot more people might have seen Vegan posts on Social Media than might have come in contact with Street Activism prior to going Vegan, yet if Street Activism played a significant role in their decision to go Vegan for a higher percentage of people who came in contact with it than Social Media posts, it might still be a more promising intervention overall.)Part 1 - Interventions with played a significant role in their decision to go Vegan for the largest number of people who came in contact with them before going Vegan:In this first part of the results, I want to answer the primary question of the survey: which intervention played a significant role in their decision to go Vegan for the largest percentage of people exposed to it prior to going Vegan:1 - 91,7% - Watching a Vegan Documentary played a significant role in their decision to go Vegan for 122 out of 133 people who saw such a Documentary before going Vegan.2 - 83,3% - Participating in a Vegan Starter Challenge played a significant role in their decision to go Vegan for 10 out of 12 people who participated in such a challenge before going Vegan.3 - 80,4% - Conversations with friends and family members played a significant role in their decision to go Vegan for 131 out of 163 people who had such conversations before going Vegan.4 - 80,4% - Vegan Speeches played a significant role in their decision to go Vegan for 45 out of 56 people who saw such Speeches before going Vegan.5 - 78,4% - Books and scientific articles played a significant role in their decision to go Vegan for 58 out of 74 people who had read them before going Vegan.6 - 74,5% - Short Videos on Social Media played a significant role in their decision to go Vegan for 70 out of 94 people who had seen such videos before going Vegan.7 - 64,8% - Posts on Social Media played a significant role in their decision to go Vegan for 92 out of 142 people who saw such posts before going Vegan.8 - 59,3% - Interacting with animals in real life played a significant role in their decision to go Vegan for 83 out of 140 people who had frequent contact with animals before going Vegan.9 - 57,1% - Public events other than street activism played a significant role in their decision to go Vegan for 8 out of 14 people who attended such an event before going Vegan.10 - 54,9% - Influencers/Celebrities played a significant role in their decision to go Vegan for 28 out of 51 people who came in contact with a Vegan Influencer/Celebrity before going Vegan.11 - 42,9% - Reports about Veganism in the Media played a significant role in their decision to go Vegan for 15 out of 35 people who saw/heard positive reports about Veganism before going Vegan.12 - 35% - Vegan and Animal Rights Organizations played a significant role in their decision to go Vegan for 14 out of 40 people who came in contact with such organizations before going Vegan.13 - 34...

]]>
PreciousPig https://forum.effectivealtruism.org/posts/dh6uede7CwbKWfqXP/results-of-my-2024-r-vegan-survey-on-what-influences-people Wed, 28 Feb 2024 04:51:53 +0000 EA - Results of my 2024 r/Vegan Survey on what influences people to go Vegan. by PreciousPig PreciousPig https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:01 no full 7
NGFkW4Qxww9jGESrk EA - What are the biggest misconceptions about biosecurity and pandemic risk? by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What are the biggest misconceptions about biosecurity and pandemic risk?, published by 80000 Hours on February 29, 2024 on The Effective Altruism Forum.by Anemone Franz and Tessa Alexanian80,000 Hours ranks preventing catastrophic pandemics as one of the most pressing problems in the world, and we have advised many of our readers to work in biosecurity to have high-impact careers.But biosecurity is a complex field, and while the threat is undoubtedly large, there's a lot of disagreement about how best to conceptualise and mitigate the risks. We wanted to get a better sense of how the people thinking about these threats every day perceive the risks.So we decided to talk to more than a dozen biosecurity experts to better understand their views.To make them feel comfortable speaking candidly, we granted the experts we spoke to anonymity. Sometimes disagreements in this space can get contentious, and certainly many of the experts we spoke to disagree with one another. We don't endorse every position they've articulated below.We think, though, that it's helpful to lay out the range of expert opinions from people who we think are trustworthy and established in the field. We hope this will inform our readers about ongoing debates and issues that are important to understand - and perhaps highlight areas of disagreement that need more attention.The group of experts includes policymakers serving in national governments, grantmakers for foundations, and researchers in both academia and the private sector. Some of them identify as being part of the effective altruism community, while others do not. All the experts are mid-career or at a more senior level. Experts chose to provide their answers either in calls or in written form.Below, we highlight 14 responses from these experts about misconceptions and mistakes that they believe are common in the field of biosecurity, particularly as it relates to people working on global catastrophic risks and in the effective altruism community.Here are some of the areas of disagreement that came up:What lessons should we learn from COVID-19?Is it better to focus on standard approaches to biosecurity or search for the highest-leverage interventions?Should we prioritise preparing for the most likely pandemics or the most destructive pandemics - and is there even a genuine trade-off between these priorities?How big a deal are "information hazards" in biosecurity?How should people most worried about global catastrophic risks engage with the rest of the biosecurity community?How big a threat are bioweapons?For an overview of this area, you can read our problem profile on catastrophic pandemics. (If you're not very familiar with biosecurity, that article may provide helpful context for understanding the experts' opinions below.)Here's what the experts said.Expert 1: Failures of imagination and appeals to authorityIn discussions around biosecurity, I frequently encounter a failure of imagination. Individuals, particularly those in synthetic biology and public health sectors, tend to rely excessively on historical precedents, making it difficult for them to conceive of novel biological risks or the potential for bad actors within a range of fields. This narrow mindset hinders proactive planning and compromises our ability to adequately prepare for novel threats.Another frequent problem is appeal to authority. Many people tend to suspend their own critical reasoning when a viewpoint is confidently presented by someone they perceive as an authoritative figure. This can stymie deeper reflections on pressing biosecurity issues and becomes especially problematic when compounded by information cascades. In such scenarios, an uncritically accepted idea from an authoritative source can perpetuate as fact, sometimes going unquestioned for...

]]>
80000_Hours https://forum.effectivealtruism.org/posts/NGFkW4Qxww9jGESrk/what-are-the-biggest-misconceptions-about-biosecurity-and Thu, 29 Feb 2024 18:19:46 +0000 EA - What are the biggest misconceptions about biosecurity and pandemic risk? by 80000 Hours 80000_Hours https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:27 no full 2
PkvrJ7r5wyMGnuGrD EA - Wholesomeness and Effective Altruism by Owen Cotton-Barratt Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wholesomeness and Effective Altruism, published by Owen Cotton-Barratt on February 29, 2024 on The Effective Altruism Forum.This is the second of a collection of three essays, 'On Wholesomeness'. In the first essay I introduced the idea of wholesomeness as a criterion for choosing actions. This essay will explore the relationship between acting wholesomely and some different conceptions of effective altruism.TensionsApparent tensionsActing wholesomely feels relatively aligned with traditional commonsense notions of doing good. To the extent that EA is offering a new angle on doing good, shouldn't we expect its priorities to clash with what being wholesome suggests? (It would be a suspicious convergence if not!)Getting more concrete:It feels wholesome to support our local communities, but EA suggests it would be more effective to support others far removed from us.It doesn't feel wholesome to reorient strategies around speculative sci-fi concerns. But this is what a large fraction of EA has done with AI stuff.Surely there are tensions here?Aside: acting wholesomely and commonsense moralityAlthough I've just highlighted that acting wholesomely often feels aligned with commonsense morality, I think it's important to note that it certainly doesn't equal commonsense morality. Wholesome action means attending to the whole of things one can understand, and that may include esoteric considerations which wouldn't get a look in on commonsense morality. The alignment is more one-sided: if commonsense morality doesn't like something, there's usually some reason for the dislike.Wholesomeness will seek not to dismiss these objections out of hand, but rather to avoid such actions unless the objections have been thoroughly understood and felt not to stand up.The shut-up-and-multiply perspectiveA particular perspective which is often associated with EA is the idea of taking expected value seriously, and choosing our actions on this basis. The catch-phrase of this perspective might be "shut up and multiply!".Taken at face value, this perspective would recommend:We put everything we can into an explicit modelWe use this to determine what seems like the best optionWe pursue that optionDeep tensions between wholesomeness and straw EAThere's a kind of simplistic version of EA which tells you to work out what the most important things are and then focus on maximizing goodness there. This is compatible with using the shut-up-and-multiply perspective to work what's most important, but doesn't require it.I don't think that this simplistic version of EA is the correct version of EA (precisely because it misses the benefits of wholesomeness; or for another angle on its issues seeEA is about maximization, and maximization is perilous). But I do think it's a common thing to perceive EA principles as saying, perhaps especially by people who are keen to criticise EA[1]. For this reason I'll label it "straw EA".There is a quite fundamental tension between acting wholesomely and straw EA:Wholesomeness tells you to focus on the whole, and not let actions be dictated by impact on a few parts of thingsStraw EA tells you to focus on the most important dimensions and maximize there - implicitly telling you to ignore everything elseIndeed when EA is introduced it is sometimes emphasised that we shouldn't necessarily focus on helping those close to us, which could sound like an instruction to forget whom we're close toWholesome EAI don't think that these apparent tensions are necessary. In this section I'll describe a version of effective altruism, which I'll call "wholesome EA", which is deeply grounded in a desire to act wholesomely. Although the articulation is new, I don't think that the thing I'm proposing here is fundamentally novel - I feel like I've seen some version of t...

]]>
Owen Cotton-Barratt https://forum.effectivealtruism.org/posts/PkvrJ7r5wyMGnuGrD/wholesomeness-and-effective-altruism Thu, 29 Feb 2024 10:35:16 +0000 EA - Wholesomeness and Effective Altruism by Owen Cotton-Barratt Owen Cotton-Barratt https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 15:07 no full 4
KzXSWKvbsSfEMryef EA - Evidential Cooperation in Large Worlds: Potential Objections & FAQ by Chi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evidential Cooperation in Large Worlds: Potential Objections & FAQ, published by Chi on February 28, 2024 on The Effective Altruism Forum.What is this post?This post is a companion piece torecentposts on evidential cooperation in large worlds (ECL). We've noticed that in conversations about ECL, the same few initial confusions and objections tend to be brought up. We hope that this post will be useful as the place that lists and discusses these common objections. We invite the reader to advance additional questions or objections of their own.This FAQ does not need to be read in order. The reader is encouraged to look through the section headings and jump to those they find most interesting.ECL seems very weird. Are you sure you haven't, like, taken a wrong turn somewhere?We don't think so.ECL, at its core, takes two reasonable ideas that by themselves are considered quite plausible by many - albeit not completely uncontroversial - and notices that when you combine them, you get something quite interesting and novel. Specifically, ECL combines "large world" with "noncausal decision theory." Many people believe the universe/multiverse is large, but that it might as well be small because we can only causally influence, or be influenced by, a small, finite part of it.Meanwhile, many people think you should cooperate in a near twin prisoners' dilemma, but that this is mostly a philosophical issue because near twin prisoners' dilemmas rarely, if ever, happen in real life. Putting the two ideas together: once you consider the noncausal effects of your actions, the world being large is potentially a very big deal.[1]Do I need to buy evidential decision theory for this to work?There are some different ways of thinking that take into account acausal influence and explain it in different ways. These include evidential decision theory and functional decision theory, as mentioned in our "ECL explainer" post.Updatelessness andsuperrationality are two other concepts that might get you all or part of the way to this kind of acausal cooperation.Evidential decision theory says that what matters is whether your choice gives you evidence about what the other agent will do.For example, if you are interacting with a near-copy, then the similarity between the two of you is evidence that the two of you make the same choice.Functional decision theory says that what matters is whether there is a logical connection between you and the other agent's choices.For example, if you are interacting with a copy, then the similarity between the two of you is reason to believe there is a strong logical connection.That said, functional decision theory does not have a clear formalization, so it is not clear if and how this logical connection generalizes to dealing with merely near-copies (as opposed to full copies). Our best guess is that proponents of functional decision theory at least want the theory to recommend cooperating in the near twin prisoner's dilemma.[2]Updatelessness strengthens the case for cooperation. This is because updatelessness arguably increases the game-theoretic symmetry of many kinds of interactions, which is helpful to get agents employing sometypes of decision procedures (including evidential decision theory) to cooperate.[3]Superrationality says that two rational thinkers considering the same problem will arrive at the same correct answer. So, what matters is common rationality.In game theory situations like the prisoner's dilemma, knowing that the two answers, or choices, will be the same might change the answer itself (e.g., cooperate-cooperate rather than defect-defect).ECL was in fact originally named "multiverse-wide superrationality".We don't take a stance in our "ECL explainer" piece on which of these decision theories, concepts, or others we d...

]]>
Chi https://forum.effectivealtruism.org/posts/KzXSWKvbsSfEMryef/evidential-cooperation-in-large-worlds-potential-objections Wed, 28 Feb 2024 22:35:18 +0000 EA - Evidential Cooperation in Large Worlds: Potential Objections & FAQ by Chi Chi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 39:12 no full 7
fQmnbhzt56gGg4DvH EA - Running 200 miles for New Incentives by Emma Cameron Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Running 200 miles for New Incentives, published by Emma Cameron on March 2, 2024 on The Effective Altruism Forum.What is this?After a running career[1] across marathons, 50K, 50-mile, 100K, and 100-mile distance events over the past eleven years, I'm tackling the 200-mile distance at the Tahoe 200 from June 14-18 this year.It's a bit of a ridiculous, silly challenge. It's also completely wild that I have a privileged life living in a high-income country like the US that allows me to tackle such an adventure.Given all this, I have decided to fundraise for New Incentives through my training and build-up for the event. This is my PledgeIt donation page. I'm thankful to have the support of the folks at High Impact Athletes in putting my page together and thinking through my campaign. My goal is to raise $10,036 to support 650 children enrolling in New Incentive's vaccination program at a cost of $15.44 per infant[2].I hope you can help promote my fundraising efforts or consider donating yourself! I'm posting on the Forum because it's a wild enough idea that perhaps I can bring some new folks into effective giving in the process.[3]Can I even finish?The short answer: I think so!I have attempted the 200-mile distance once before, at the Moab 240. I only made it about 120 miles before succumbing to the hotter-than-average 110°F heat in the canyons on the second day. Unfortunately, they last-minute swapped out the Tailwind electrolyte solution on course and replaced it with a non-vegan offering, which I wasn't willing to take. I ended up relying on salt pills instead, which wasn't enough.Even after losing my hearing in one ear due to electrolyte imbalance, I was determined to shoulder ahead. Unfortunately, when I stopped being able to keep food or liquids down, my pacer[4] physically carried me a mile into the next aid station, where I passed out and woke up in a van with the race crew. Understandably, they let me know they would be pulling me from the race.So, I definitely "DNF'd" [Did Not Finish] that race, even though I took away a lot of important lessons from it. I have since completed a number of challenging 100-mile ultramarathons, including:Grindstone 100[5] in Virginia, September 2019Kettle 100 in Wisconsin, virtual in June 2020Black Hills 100 in South Dakota, June 2021Superior 100 in Minnesota, September 2022Javelina 100 [6] in Arizona, October 2023I've otherwise completed Ironman Wisconsin, qualified for the Boston Marathon twice, backpacked around Wisconsin, biked, rock climbed, and generally feel that I am well-positioned as an endurance athlete to tackle this challenge in Lake Tahoe this June.I think I'm ready! To be clear, my only real goal is to complete the race. I would describe myself as a 'mid-pack' runner in the 200-mile distance, at best. I will be running alongside such professional legends as Courtney Dauwalter, who has notably attempted to break the course record at the Tahoe 200[7] by completing it in <48 hours. She also took the overall win[8] (for both men and women) in the Moab 240 in 2017.Regardless of how or when I finish the course, I'm excited for the journey of a lifetime this June. I love running and movement - it's one of my greatest passions. Combining it with a way to create impact for cost-effective charities is a bit of a dream, so here I go!Donate & share this link!https://pledgeit.org/200-miles^Ultrasignup Results - Note that Ultrasignup isn't integrated with every race, but it covers a good number of them. Notably, for example, I completed a 100K in zero-degree temperatures in northern Wisconsin called the Frozen Otter [now Frigid Fox] in January 2018, which isn't indicated in my Ultrasignup results.^httpnumbers://www.newincentives.org/impact The cost per infant is $15.44, while vaccinating an infant against ...

]]>
Emma Cameron https://forum.effectivealtruism.org/posts/fQmnbhzt56gGg4DvH/running-200-miles-for-new-incentives Sat, 02 Mar 2024 09:48:23 +0000 EA - Running 200 miles for New Incentives by Emma Cameron Emma Cameron https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:59 no full 4
fDYGvji3HwYA8BJwN EA - Review of EA Global Bay Area 2024 (Global Catastrophic Risks) by frances lorenz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of EA Global Bay Area 2024 (Global Catastrophic Risks), published by frances lorenz on March 1, 2024 on The Effective Altruism Forum.EA Global: Bay Area (Global Catastrophic Risks) took place February 2-4. We hosted 820 attendees, 47 of whom volunteered over the weekend to help run the event. Thank you to everyone who attended and a special thank you to our volunteers - we hope it was a valuable weekend!Photos and recorded talksYou can now check outphotos from the event.Recorded talks, such as the media panel on impactful GCR communication, Tessa Alexanian's talk on preventing engineered pandemics, Joe Carlsmith's discussion of scheming AIs, and more, are now available on ourYoutube channel.A brief summary of attendee feedbackOur post-event feedback survey received 184 responses. This is lower than our average completion rate - we're still accepting feedback responses and would love to hear from all our attendees. Each response helps us get better summary metrics and we look through each short answer.To submit your feedback, you can visit the Swapcard event page and click the Feedback Survey button. The survey link can also be found in a post-event email sent to all attendees with the subject line, "EA Global: Bay Area 2024 | Thank you for attending!"Key metricsThe EA Global team uses a several key metrics to estimate the impact of our events. These metrics, and the questions we use in our feedback survey to measure them, include:Likelihood to recommend (How likely is it that you would recommend EA Global to a friend or colleague with similar interests to your own? Discrete scale from 0 to 10, 0 being not at all likely and 10 being extremely likely)Number of new connections[1] (How many new connections did you make at this event?)Number of impactful connections[2] (Of those new connections, how many do you think might be impactful connections?)Number of Swapcard meetings per person (This data is pulled from Swapcard)Counterfactual use of attendee time (To what extent was this EA Global a good use of your time, compared to how you would have otherwise spent your time? A discrete scale ranging from, "a waste of my time, <10% the counterfactual" to, "a much better use of my time, >10x the counterfactual")The likelihood to recommend for this event was higher compared to last year's EA Global: Bay Area and our EA Global 2023 average (i.e. the average across the three EA Globals we hosted in 2023) (see Table 1). Number of new connections was slightly down compared to the 2023 average, while the number of impactful connections was slightly up.The counterfactual use of time reported by attendees was slightly higher overall than Boston 2023 (the first EA Global we used this metric at), though there was also an increase in the number of reports that the event was a worse use of attendees' time (see Figure 1).Metric (average of all respondents)EAG BA 2024 (GCR)EAG BA 2023EAG 2023 averageLikelihood to recommend (0 - 10)8.788.548.70Number of new connections9.0511.59.72Number of impactful connections4.154.84.09Swapcard meetings per person6.735.265.24Table 1. A summary of key metrics from the post-event feedback surveys for EA Global: Bay Area 2024 (GCRs), EA Global: Bay Area 2023, and the average from all three EA Globals hosted in 2023.Feedback on the GCRs focus37% of respondents rated this event more valuable than a standard EA Global, 34% rated it roughly as valuable and 9% as less valuable. 20% of respondents had not attended an EA Global event previously (Figure 2).If the event had been a regular EA Global (i.e. not focussed on GCRs), most respondents predicted they would have still attended. To be more precise, approximately 90% of respondents reported having over 50% probability of attending the event in the absence of a GCR ...

]]>
frances_lorenz https://forum.effectivealtruism.org/posts/fDYGvji3HwYA8BJwN/review-of-ea-global-bay-area-2024-global-catastrophic-risks 10x the counterfactual")The likelihood to recommend for this event was higher compared to last year's EA Global: Bay Area and our EA Global 2023 average (i.e. the average across the three EA Globals we hosted in 2023) (see Table 1). Number of new connections was slightly down compared to the 2023 average, while the number of impactful connections was slightly up.The counterfactual use of time reported by attendees was slightly higher overall than Boston 2023 (the first EA Global we used this metric at), though there was also an increase in the number of reports that the event was a worse use of attendees' time (see Figure 1).Metric (average of all respondents)EAG BA 2024 (GCR)EAG BA 2023EAG 2023 averageLikelihood to recommend (0 - 10)8.788.548.70Number of new connections9.0511.59.72Number of impactful connections4.154.84.09Swapcard meetings per person6.735.265.24Table 1. A summary of key metrics from the post-event feedback surveys for EA Global: Bay Area 2024 (GCRs), EA Global: Bay Area 2023, and the average from all three EA Globals hosted in 2023.Feedback on the GCRs focus37% of respondents rated this event more valuable than a standard EA Global, 34% rated it roughly as valuable and 9% as less valuable. 20% of respondents had not attended an EA Global event previously (Figure 2).If the event had been a regular EA Global (i.e. not focussed on GCRs), most respondents predicted they would have still attended. To be more precise, approximately 90% of respondents reported having over 50% probability of attending the event in the absence of a GCR ...]]> Fri, 01 Mar 2024 19:50:33 +0000 EA - Review of EA Global Bay Area 2024 (Global Catastrophic Risks) by frances lorenz 10x the counterfactual")The likelihood to recommend for this event was higher compared to last year's EA Global: Bay Area and our EA Global 2023 average (i.e. the average across the three EA Globals we hosted in 2023) (see Table 1). Number of new connections was slightly down compared to the 2023 average, while the number of impactful connections was slightly up.The counterfactual use of time reported by attendees was slightly higher overall than Boston 2023 (the first EA Global we used this metric at), though there was also an increase in the number of reports that the event was a worse use of attendees' time (see Figure 1).Metric (average of all respondents)EAG BA 2024 (GCR)EAG BA 2023EAG 2023 averageLikelihood to recommend (0 - 10)8.788.548.70Number of new connections9.0511.59.72Number of impactful connections4.154.84.09Swapcard meetings per person6.735.265.24Table 1. A summary of key metrics from the post-event feedback surveys for EA Global: Bay Area 2024 (GCRs), EA Global: Bay Area 2023, and the average from all three EA Globals hosted in 2023.Feedback on the GCRs focus37% of respondents rated this event more valuable than a standard EA Global, 34% rated it roughly as valuable and 9% as less valuable. 20% of respondents had not attended an EA Global event previously (Figure 2).If the event had been a regular EA Global (i.e. not focussed on GCRs), most respondents predicted they would have still attended. To be more precise, approximately 90% of respondents reported having over 50% probability of attending the event in the absence of a GCR ...]]> 10x the counterfactual")The likelihood to recommend for this event was higher compared to last year's EA Global: Bay Area and our EA Global 2023 average (i.e. the average across the three EA Globals we hosted in 2023) (see Table 1). Number of new connections was slightly down compared to the 2023 average, while the number of impactful connections was slightly up.The counterfactual use of time reported by attendees was slightly higher overall than Boston 2023 (the first EA Global we used this metric at), though there was also an increase in the number of reports that the event was a worse use of attendees' time (see Figure 1).Metric (average of all respondents)EAG BA 2024 (GCR)EAG BA 2023EAG 2023 averageLikelihood to recommend (0 - 10)8.788.548.70Number of new connections9.0511.59.72Number of impactful connections4.154.84.09Swapcard meetings per person6.735.265.24Table 1. A summary of key metrics from the post-event feedback surveys for EA Global: Bay Area 2024 (GCRs), EA Global: Bay Area 2023, and the average from all three EA Globals hosted in 2023.Feedback on the GCRs focus37% of respondents rated this event more valuable than a standard EA Global, 34% rated it roughly as valuable and 9% as less valuable. 20% of respondents had not attended an EA Global event previously (Figure 2).If the event had been a regular EA Global (i.e. not focussed on GCRs), most respondents predicted they would have still attended. To be more precise, approximately 90% of respondents reported having over 50% probability of attending the event in the absence of a GCR ...]]> frances_lorenz https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:33 no full 8
9ptKhiTsZWa3gpeJK EA - Forum feature updates: add buttons, see your stats, send DMs easier, and more (March '24) by tobytrem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forum feature updates: add buttons, see your stats, send DMs easier, and more (March '24), published by tobytrem on March 1, 2024 on The Effective Altruism Forum.It's been a while since our last feature update in October. Since then, the Forum team has done a lot. This post includes:New tools for authorsSee all your stats on one pageAdd buttons to postsImproved tables for postsImproved notifications, messaging and onboardingMore comprehensive and interactive notificationsSubscribe to a user's commentsImproved onboarding for new users:Redesign of Forum messagingOther updates and projectsJob recommendationsMaking it easier to fork our codebase and set up new forumsGiving SeasonA Giving portalA custom voting tool for the Donation ElectionAn interactive bannerForum Wrapped 2023MiscellaneousYou can now hide community quick takesClearer previews of posts, topics, and sequencesA new opportunities tab on the FrontpageNew tools for authorsSee all your stats on one pageWe've built a page which collects the stats for your posts into one overview. These stats include: how many people have viewed/read your posts, how much karma they've accrued, and the number of comments. You can access analytics for a particular post by clicking 'View detailed stats' from your post stats page.Add buttons to postsIf you have a "call to action" in a post or comment, you can now add it as a button. Your button could be a link to an application, a survey, a calendar, or any other link you'd like to stand out.Improved tables for postsWe've improved tables to make them much more readable and less prone to awkward word cut-offs such as "Weight"Improved notifications, messaging and onboardingMore comprehensive and interactive notificationsNotifications now have their own page, accessed by clicking the bell icon:On this page, you can directly reply to comments.We have also:Created notifications for ReactionsMoved message notifications from the notifications list to this symbolin the top bar.Made a notifications summary showing on hover so that you can check your notifications before clicking the bell to clear them (pictured below)We aim to make notifications informative without making them addictive or incentivising behaviour you don't endorse. If notifications bother you, you can:Change your settings to batch your notifications differently (or not be notified at all).Give us feedback.Subscribe to a user's commentsYou can now subscribe to be notified every time a user comments. We've also clarified subscriptions' functionality, with a new "Get notified" menu.Improved onboarding for new users:We've changed the process a user goes through when signing up for a Forum account.This has already increased the number of users giving information about their role, signing up for the Digest, and subscribing to topics.The new process looks like this:Redesign of Forum messagingThe DM page has undergone a total redesign, making it easier to start individual and group messages and navigate your message history.You can now create a new message by clicking the new conversation symbol () and selecting a Forum user (you used to have to navigate to their account). You can also create a group message by adding multiple Forum users in the search box:Other updates and projectsJob recommendationsYou may have noticed that we've recently been exploring ways we could help Forum users hire and be hired. As part of this project, we're experimenting more with targeted job recommendations.We're selecting high-impact jobs and showing each job to a small set of users that we have reason to believe may be interested in it. For example, if the job is limited to a specific country, we use your account location to help determine if it's relevant to you.We'll continue to iterate on thi...

]]>
tobytrem https://forum.effectivealtruism.org/posts/9ptKhiTsZWa3gpeJK/forum-feature-updates-add-buttons-see-your-stats-send-dms Fri, 01 Mar 2024 18:18:13 +0000 EA - Forum feature updates: add buttons, see your stats, send DMs easier, and more (March '24) by tobytrem tobytrem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:16 no full 9
8ByxD5aCrCngbKzmj EA - Creative video ads significantly increase GWWC's social media engagement and web traffic to pledge page by James Odene [User-Friendly] Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Creative video ads significantly increase GWWC's social media engagement and web traffic to pledge page, published by James Odene [User-Friendly] on March 1, 2024 on The Effective Altruism Forum.OverviewWe recently ran a test to see if utilising creative marketing practices could increase the performance of a brand-led digital campaign, and it did. We are sharing the results to encourage other organisations to consider the quality of their output in areas where this could directly impact engagement, potentially resulting in increased donations and or pledge sign ups in the long-term.We partnered with Giving What We Can and produced a brand awareness video to test the power of creativity - the results prove the worth in investing time in the creative quality of movement marketing overall.Grace Adams, Head of Marketing at GWWC: "We are really happy with the performance of this campaign, and it's given us more confidence to undertake more creative approaches with future campaigns. We're excited to see how the increased awareness will translate into results further down the line."Creative marketing practice refers to content production that employs innovative and imaginative approaches to capture the attention of the target audience and evoke emotional responses, effectively conveying the core message. The aim here was to test a more creative approach comparatively to the existing content across their social media.The quality of the creative concept in an ad is one of the biggest drivers to impact (see Kantar researchhere), and the results from this campaign indicate that we could see great returns by swapping out low fidelity, simply informative, ration-led content for more distinctive and emotive content.The CampaignYou can see a version of the ad here.ObjectiveIncrease Giving What We Can brand awareness over Giving SeasonTargetEducated, top 50% earners, median ~30, working professionals with interest in relevant philanthropic topics e.g. climate breakdownAd Spend$4,899.58 on YouTube & $7,172.41 on InstagramDuration~8 weeksChannelsVideos for Instagram and YouTube. We also created related display ads to direct web traffic back to the GWWC pledgeOverall MetricsReach: 4,554,692Total Impressions: 7,923,623CPM: $1.61Views: 5,357,063Engagements: 931,100CPE: US$0.01Web Traffic: 24,914 new usersPledge Page Visits: 465From retargeting: 1111 users visited the website for 4 mins or more & 469 users visited for 10 mins or moreThe HeadlinersYou can see a version of the ad here.1) 48x more views on Reels, Stories and Feed, than any previous campaign on Instagram.2) Attributed Instagram profile visits were 249% higher than any other previous campaign.3) Referrals from Instagram to the GWWC website are typically extremely low (<100 per month) however we produced:1622 users referred from Instagram visited 4 mins or more429 users referred from Instagram visited 10 mins or more4) 96% of the earned likes[1] from the GWWC account lifetime originated from this campaign. This not only signifies the quality of the audience we targeted - individuals new to the content - but also highlights their engagement with the broader content ecosystem of the brand.5) Retargeting web traffic with campaign-branded ads produced almost 3x more people visiting the pledge page compared to organic traffic visits to the pledge page, despite organic traffic representing twice as many new users overall.6) Reached over 4.5m people in the UK with an average CPM (cost per 1000 impressions) of $1.61. This is 2.5 times lower than the average CPM for UK video campaigns, especially for YouTube and Instagram. And substantially lower than Giving What We Can's previous CPM average of $6.14.To consider;Organisations are not fully utilising creative marketing practices meaning that for...

]]>
James Odene [User-Friendly] https://forum.effectivealtruism.org/posts/8ByxD5aCrCngbKzmj/creative-video-ads-significantly-increase-gwwc-s-social Fri, 01 Mar 2024 08:10:51 +0000 EA - Creative video ads significantly increase GWWC's social media engagement and web traffic to pledge page by James Odene [User-Friendly] James Odene [User-Friendly] https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:14 no full 11
wE7KPnjZHBjxLKNno EA - AI things that are perhaps as important as human-controlled AI (Chi version) by Chi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI things that are perhaps as important as human-controlled AI (Chi version), published by Chi on March 3, 2024 on The Effective Altruism Forum.Topic of the post: I list potential things to work on other than keeping AI under human control.MotivationThe EA community has long been worried about AI safety. Most of the efforts going into AI safety are focused on making sure humans are able to control AI. Regardless of whether we succeed at this, I think there's a lot of additional value on the line.First of all, if we succeed at keeping AI under human control, there are still a lot of things that can go wrong. My perception is that this has recently gotten more attention, for examplehere,here,here, and at least indirectlyhere (I haven't read all these posts. and have chosen them to illustrate that others have made this point purely based on how easily I could find them). Why controlling AI doesn't solve everything is not the main topic of this post, but I want to at least sketch my reasons to believe this.Which humans get to control AI is an obvious and incredibly important question and it doesn't seem to me like it will go well by default. It doesn't seem like current processes put humanity's wisest and most moral at the top. Humanity's track record at not causing large-scale unnecessary harm doesn't seem great (see factory farming). There is reasonable disagreement on how path-dependent epistemic and moral progress is but I think there is a decent chance that it is very path-dependent.While superhuman AI might enable great moral progress and new mechanisms for making sure humanity stays on "moral track", superhuman AI also comes with lots of potential challenges that could make it harder to ensure a good future. Will MacAskill talks about "grand challenges" we might face shortly after the advent of superhuman AIhere. In the longer-term, we might face additional challenges. Enforcement of norms, and communication in general, might be extremely hard across galactic-scale distances. Encounters withaliens (or even merely humanity thinking they might encounter aliens!) threaten conflict and could change humanity's priorities greatly. And if you're like me, you might believe there's a whole lot of weirdacausalstuff to get right. Humanity might make decisions that influence these long-term issues already shortly after the development of advanced AI.It doesn't seem obvious to me at all that a future where some humans are in control of the most powerful earth-originating AI will be great.Secondly, even if we don't succeed at keeping AI under human control, there are other things we can fight for and those other things might be almost as important or more important than human control. Less has been written about this (althoughnotnothing.) My current and historically very unstable best guess is that this reflects an actual lower importance of influencing worlds where humans don't retain control over AIs although I wish there was more work on this topic nonetheless. Justifying why I think influencing uncontrolled AI matters isn't the main topic of this post, but I would like to at least sketch my motivation again.If there is alien life out there, we might care a lot about how future uncontrolled AI systems treat them. Additionally, perhaps we can prevent uncontrolled AI from having actively terrible values. And if you are like me, you might believe there are weirdacausalreasons to make earth-originating AIs more likely to be a nice acausal citizen.Generally, even if future AI systems don't obey us, we might still be able to imbue them with values that are more similar to ours. The AI safety community is aiming for human control, in part, because this seems much easier than aligning AIs with "what's morally good". But some properties that result in moral good...

]]>
Chi https://forum.effectivealtruism.org/posts/wE7KPnjZHBjxLKNno/ai-things-that-are-perhaps-as-important-as-human-controlled Sun, 03 Mar 2024 20:59:43 +0000 EA - AI things that are perhaps as important as human-controlled AI (Chi version) by Chi Chi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 35:59 no full 1
hH9DDzJCBPLDCzkJu EA - How to Speedrun a New Drug Application (Interview with Alvea's former CEO) by Aaron Gertler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Speedrun a New Drug Application (Interview with Alvea's former CEO), published by Aaron Gertler on March 3, 2024 on The Effective Altruism Forum.I've been enjoying the newsletter of Santi Ruiz (Institute for Progress), who covers stories about achieving policy goals and other forms of progress. I found this to cover some ground that wasn't present in the Alvea postmortem.ExcerptsThe caricature is that the FDA is the enemy of progress, medical regulators are the enemy of progress, and they're slowing everything down. On reflection, I don't agree with that take, and our experience doesn't really support it [...]In drug development especially, making a thing that is plausibly good is much, much easier than making something that is actually, reliably, very good. Deploying drugs to scale requires that reliability.It's a very hard socio-technical problem. All the different kinds of regulatory requirements, quality management, quality control, etc., that could be naively identified as red tape or boring paperwork that slow down the innovators are actually there to achieve that reliability.Of course, when you get into the details, there are tons of ways this could be done more efficiently. But the fact that validation, testing, and ensuring that things are as they seem is 90% of the process is just the way the world works, not any fault of the regulators.When you work with any vendor for a pharmaceutical company, almost everybody requires an NDA to be signed. This by itself can eat up to two weeks of time on both ends of this transaction. We had automated this NDA signing process so that it would usually happen in hours. Many of our vendors would follow up and tell us how insanely fast this was and how it was the smoothest and fastest contracting experience that they had ever had.Another big pattern is that, for some reason, for a lot of these key processes that really move the needle on speed, the standard operating procedure for the industry is to talk to maybe three to five different vendors, compare them across a bunch of categories, and then pick one and go forward with them. That never seemed to work for us.We would approach it by finding every single vendor in the world who does the thing that we need done, finding the best people, and then going in and very closely redesigning and managing their process for maximum speed. Practically, this involves parallelization and then bottleneck hunting in the vendor's process to identify ways to make it faster.A good example of that was the manufacturing of the drug itself, of the DNA plasmid that was our vaccine's main active component. Our initial quotes from the first few vendors were like two years. "It takes two years. There is no way around that.This is just how long it takes." Then we found some folks who said, "It's going to be hard, but we can do it in a year." Then, once we have come in and looked at it deeply and redesigned it in collaboration with these folks, we ended up doing it in just over two months if memory serves.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Aaron Gertler https://forum.effectivealtruism.org/posts/hH9DDzJCBPLDCzkJu/how-to-speedrun-a-new-drug-application-interview-with-alvea Sun, 03 Mar 2024 12:06:20 +0000 EA - How to Speedrun a New Drug Application (Interview with Alvea's former CEO) by Aaron Gertler Aaron Gertler https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:53 no full 3
iCnzpZBgndcApvFxF EA - What posts are you thinking about writing? by tobytrem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What posts are you thinking about writing?, published by tobytrem on March 4, 2024 on The Effective Altruism Forum.I'm posting this to tie in with the Forum's Draft Amnesty Week (March 11-17) plans, but please also use this thread for posts you don't plan to write for draft amnesty. The last time this question was posted, it got some great responses.This post is a companion post for What posts would you like someone to write?.If you have a couple of ideas, consider whether putting them in separate answers, or numbering them, would make receiving feedback easier.It would be great to see:1-2 sentence descriptions of ideas as well as further-along ideas. You could even attach a Google Doc with comment access if you're looking for feedback.Commenters signalling with Reactions and upvotes the content they'd like to see written.Commenters responding with helpful resources.Commenters proposing Dialogues with authors who suggest similar ideas, or which they have an interesting disagreement with (Draft Amnesty Week might be a great time for scrappy/ unedited dialogues).Draft Amnesty WeekIf you are getting encouragement for one of your ideas, Draft Amnesty Week (March 11-17) might be a great time to post it. Posts that are tagged "Draft Amnesty Week" don't have to be fully thought through, or even fully drafted. Bullet points and missing sections are allowed so that you can have a lower bar for posting.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
tobytrem https://forum.effectivealtruism.org/posts/iCnzpZBgndcApvFxF/what-posts-are-you-thinking-about-writing Mon, 04 Mar 2024 17:14:51 +0000 EA - What posts are you thinking about writing? by tobytrem tobytrem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:31 no full 3
KYfEo6CJmjmixJ32Z EA - A lot of EA-orientated research doesn't seem sufficiently focused on impact by jamesw Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A lot of EA-orientated research doesn't seem sufficiently focused on impact, published by jamesw on March 4, 2024 on The Effective Altruism Forum.Cross posted from: https://open.substack.com/pub/gamingthesystem/p/a-lot-of-ea-orientated-research-doesnt?r=9079y&utm_campaign=post&utm_medium=webNB: This post would be clearer if I gave specific examples but I'm not going to call out specific organisations or individuals to avoid making this post unnecessarily antagonistic.Summary: On the margin more resources should be put towards action-guiding research instead of abstract research areas that don't have a clear path to impact. More resources should also be put towards communicating that research to decision-makers and ensuring that the research actually gets used.Doing research that improves the world is really hard. Collectively as a movement I think EA does better than any other group. However, too many person-hours are going into research that doesn't seem appropriately focused on actually causing positive change in the world.Soon after the initial ChatGPT launch probably wasn't the right time for governments to regulate AI, but given the amount of funding that has gone into AI governance research it seems like a bad sign that there weren't many (if any) viable AI governance proposals that were ready for policymakers to take off-the-shelf and implement.Research aimed at doing good could fall in two buckets (or somewhere inbetween):Fundamental research that improves our understanding about how to think about a problem or how to prioritise between cause areasAction-guiding research that analyses which path forward is best and comes up with a proposalFeedback loops between research and impact are poor so there is a risk of falling prey to motivated reasoning as fundamental research can be more appealing for a couple of different reasons:Culturally EA seems to reward people for doing work that seems very clever and complicated, and sometimes this can be a not-terrible proxy for important research. But this isn't the same as doing work that actually moves the needle on the issues that matter.Academic research far worse for this and rewards researchers for writing papers that sound clever (hence why a lot of academic writing is so unnecessarily unintelligible), but EA shouldn't be falling into this trap of conflating complexity with impact.People also enjoy discussing interesting ideas, and EAs in particular enjoy discussing abstract concepts. But intellectually stimulating work is not the same as impactful research, even if the research is looking into an important area.Given that action-guiding research has a clearer path to impact, arguably the bar should be pretty high to focus on fundamental research over action-guiding research. If it's unlikely that a decision maker would look at the findings of a piece of research and change their actions as a result of it then there should be a very strong alternative reason why the research is worthwhile.There is also a difference between research that you think should change the behaviour of decision makers, and what will actually influence them. While it might be clear to you that your research on some obscure form of decision theory has implications for the actions that key decision makers should take, if there is a negligible chance of them seeing this research or taking this on board then this research has very little value.This is fine if the theory of change for your research having an impact doesn't rely on the relevant people being convinced of your work (e.g. policymakers), but most research does rely on important people actually reading the findings, understanding them, and being convinced that they should take an alternative action to what they would have taken otherwise.This is especially true of resea...

]]>
jamesw https://forum.effectivealtruism.org/posts/KYfEo6CJmjmixJ32Z/a-lot-of-ea-orientated-research-doesn-t-seem-sufficiently Mon, 04 Mar 2024 11:55:49 +0000 EA - A lot of EA-orientated research doesn't seem sufficiently focused on impact by jamesw jamesw https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:39 no full 5
LX9wGhZnCdLuXHPih EA - How EA can be better at communications by blehrer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How EA can be better at communications, published by blehrer on March 4, 2024 on The Effective Altruism Forum.I wrote an essay that's a case study for how Open Philanthropy (and by abstraction, EA) can be better at communications.It's called Affective Altruism.I wrote the piece because I was growing increasingly frustrated seeing EA have its public reputation questioned following SBF and OpenAI controversies. My main source of frustration wasn't just seeing EA being interpreted uncharitably, it was that the seeds for this criticism were sewn long before SBF and OpenAI became known entities.EA's culture of ideological purity and (seemingly intentional) obfuscation from the public sets itself up for backlash. Not only is this unfortunate relative the movement's good intentions, it's strategically unsound. EA fundamentally is in the business of public advocacy. It should be aiming for more than resilience against PR crises. As I say in the piece:The point of identifying and cultivating a new cause area is not for it to remain a fringe issue that only a small group of insiders care about. The point is that it is paid attention to where it previously wasn't.The other thing that's frustrating is that what I'm asking for is not for EA to entertain some race-to-the-bottom popularity contest. It's an appeal to respect human psychology, to use time-tested techniques like visualization and story telling that are backed by evidence. There are ways to employ these communications strategies without reintroducing the irrationalities that EA prides itself on avoiding, and without meaningfully diminishing the rigorousness of the movement.On a final personal note:I feel a tremendous love-hate relationship with EA. Amongst my friends (none of which are EAs despite most being inordinately altruistic) I'm slightly embarrassed to call myself an EA. There's a part of me that is allergic to ideologies and in-group dynamics. There's a part of me that's hesitant of allying myself with a movement that's so self-serious and disregarding of outside perceptions.There's also a part of me that feels spiteful towards all the times EA has soft and hard rejected my well-meaning attempts at participation (case-in-point, I've already been rejected from the comms job I wrote this post to support my application for). And yet, I keep coming back to EA because, in a world that is so riddled with despair and confusion, there's something reaffirming about a group of people who want to use evidence to do measurable good.This unimpeachable trait of EA should be understood for the potential energy it wields amongst many people like myself that don't even call themselves EAs. Past any kind of belabored point about 'big tent' movements, all I mean to say is that EA doesn't need to be so closed-off. Just a little bit of communications work would go a long way.Here's a teaser video I made to go along with the essay:Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
blehrer https://forum.effectivealtruism.org/posts/LX9wGhZnCdLuXHPih/how-ea-can-be-better-at-communications Mon, 04 Mar 2024 09:47:09 +0000 EA - How EA can be better at communications by blehrer blehrer https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:48 no full 6
wkw396TtNXAvNP85D EA - Announcement: We're launching Effectief Geven by Bob Jacobs Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcement: We're launching Effectief Geven, published by Bob Jacobs on March 5, 2024 on The Effective Altruism Forum.We are thrilled to announce the launch of EffectiefGeven.be, the Belgian chapter of the global movement towards more effective giving. After 5 months of diligent work, crafting our vision, strategy, and both short- and long-term plans, we are ready to introduce our platform to the world.What We're Bringing to the TableWebsite & Newsletter: Our newly launched website serves as a central hub for all things related to effective giving in Belgium, offering an initial summary of insights, with plans on expanding with more relevant information. Accompanying the website, our newsletter will provide regular updates on our progress, along with communication about events.Helpdesk & LinkedIn Presence: We understand the importance of accessible support and networking opportunities. Our helpdesk is here to provide guidance, answer any questions a donor might face. Additionally, our LinkedIn presence allows us to connect with a broader audience, facilitating professional networking and collaboration opportunities within the effective giving community.Curated Cause Areas: A list of charities has been selected based on the recommendations of GiveWell, Founders Pledge, and Animal Charity Evaluators. In the first version, this list is identical to Doneer Effectief in the Netherlands, as we will start off by using their donation platform.Short-term plansPublic Lectures & Networking Events: We are planning a series of public lectures and networking events designed to engage, inform, inspire, and promote donating behavior. These events will bring together individuals passionate about making a difference. From seasoned philanthropists to those new to the concept of effective giving, our events aim to foster a vibrant community united by a common goal of maximizing impact.Tax Optimization: We understand that tax deductibility is important for many donors. The charities we offer on our website are not yet tax-deductible in Belgium. Belgium is a complex country, with an even more complex fiscal system… EA Belgium has undertaken quite some effort trying to solve this, but has, as of this moment, not been successful. There are currently some in-between solutions that we explain on our website (in Dutch).Our plan is to keep pushing this tax optimization forward, and to put some extra pressure on government departments by gathering public support for the idea of effective giving. The end goal is to make donations to Effectief Geven and its charities, tax-deductible.A hybrid donation platform: In collaboration with Doneer Effectief, the Dutch effective giving initiative, we will be creating a different theme on top of the preexisting platform of Effectief Geven. This means that we don't need to create our own platform in the first stages.We would also like to extend our gratitude to all the national Effective Giving organizations that we contacted to get us up to speed and who helped brainstorm with us (Brazil, Canada, Spain, Denmark, Germany, and Estonia). Special thanks to GWWC, the 10% club in the Netherlands, and especially Doneer Effectief in the Netherlands.Get InvolvedAs we embark on this journey, we recognize the importance of community support and collaboration. Whether you're looking to contribute financially, partner with us, or participate in events, there are several ways you can get involved and make a real difference in the way we approach philanthropy in Belgium:Funding: Securing our first round of funding is critical to laying an initial, solid foundation for our operations and creating some decent runway. We are currently looking to raise our first funds to be able to rely on a full-time director, who can keep pushing us forward for the next year,...

]]>
Bob Jacobs https://forum.effectivealtruism.org/posts/wkw396TtNXAvNP85D/announcement-we-re-launching-effectief-geven Tue, 05 Mar 2024 18:14:01 +0000 EA - Announcement: We're launching Effectief Geven by Bob Jacobs Bob Jacobs https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:11 no full 3
doXsSFGSySpCJCrcG EA - Why are you reluctant to write on the EA Forum? by Stan Pinsent Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why are you reluctant to write on the EA Forum?, published by Stan Pinsent on March 5, 2024 on The Effective Altruism Forum.It has come to my attention that some people are reluctant to post or comment on the EA Forum, even some people who read the forum regularly and enjoy the quality of discourse here.What stops you posting?What might make it easier?You can give an anonymous answer on this Google Form. I intend to share responses in a follow-up post.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Stan Pinsent https://forum.effectivealtruism.org/posts/doXsSFGSySpCJCrcG/why-are-you-reluctant-to-write-on-the-ea-forum Tue, 05 Mar 2024 11:55:41 +0000 EA - Why are you reluctant to write on the EA Forum? by Stan Pinsent Stan Pinsent https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:39 no full 5
5i43whfcjvwdq9Arr EA - Which YouTube channels do you watch? (A <2-min request from 80,000 Hours) by Nik 80K Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Which YouTube channels do you watch? (A <2-min request from 80,000 Hours), published by Nik 80K on March 5, 2024 on The Effective Altruism Forum.TL;DR: Please fill in the quick form belowIf you're reading this, you're really likely to be the kind of person we'd love to have reached when you were new to EA ideas, and we'd love to know which YouTube channels, podcasts and/or email newsletters you especially like (or think are very high quality).You don't need to 'filter' for channels with an educational or scientific vibe (though we love those, of course) or whether you think 80,000 Hours will want to work with them (or vice versa) - we'd love to hear about all and any channels you enjoy, whether they're in mathematics, gaming, fitness, or musical comedy.So: which YouTube channels etc. do you watch? Fill in this quick form to let us knowThank you!Why we're askingAt 80,000 Hours, we've been putting much more resource into growing our audience (for more on that, take a look atBella Forristal's post about why and how we're investing more in outreach).A big part of 80,000 Hours' marketing activity over the last two years has been partnering with YouTube creators, likeKurzgesagt,Veritasium,Elizabeth Filips,PBS Spacetime andGotham Chess, to put our message in front of their audience. (We've also featured on podcasts like Ali Abdaal's Deep Dive and Deep Questions with Cal Newport, and newsletters like James Clear's 3-2-1 and Weekly Robotics.)In 2022, we asked the community for help inletting us know which videos, podcasts and newsletters you enjoy. The results of that survey have been really helpful in finding our target audience and making our outreach work more cost-effective, and so we're here asking for your help once again - thank you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Nik_80K https://forum.effectivealtruism.org/posts/5i43whfcjvwdq9Arr/which-youtube-channels-do-you-watch-a-less-than-2-min Tue, 05 Mar 2024 00:05:18 +0000 EA - Which YouTube channels do you watch? (A <2-min request from 80,000 Hours) by Nik 80K Nik_80K https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:55 no full 9
cEK7unHjM86F5n8yL EA - Resources on US policy careers by Andy Masley Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Resources on US policy careers, published by Andy Masley on March 6, 2024 on The Effective Altruism Forum.As the Director ofEA DC, I often speak with people interested in pursuing impactful careers in US policy. Here, I want to share some of the most helpful resources I've come across for people interested in government and policy careers:First, the websiteGo Government from the Partnership for Public Service, which includes many helpful resources on working for the US federal government, including this newFederal Internship Finder (a large database of internship opportunities with government agencies).Second,emergingtechpolicy.org: This new website offers excellent advice and resources for people interested in US government and policy careers, especially for those focusing on emerging tech issues like AI or bio. Sign uphere for content updates and policy opportunities.Theemergingtechpolicy.org website includes many helpful guides for students and professionals, including:In-depth guides toworking in Congress,think tanks, and specific AI policy-relevantfederal agencies (e.g. DOC, DHS, State)Lists of resources, think tanks, and fellowships by policy area (e.g.AI,biosecurity,cyber,nuclear security)Advice for undergraduates interested in US policyGraduate school advice (e.g.policy master's,law school)Policy internships (e.g.Congressional internships,semester in DC programs)Policy fellowships (incl. a database of 50+ programs)Testing your fit for policy careersCareer profiles of policy practitioners in AI and biosecurity policyThird, the 80,000 Hours guides on policy careers, such as:Policy and political skills profile (part of their newseries of professional skills profiles)AI governance and coordination career reviewBiorisk research, strategy, and policy career reviewPolicy careers focused on other pressing global issues80,000 Hours Job Board filter for US policyI hope you'll find these resources helpful! And if you want to chat with me about EA DC or get connected to EAs working in US policy, feel free to reach outhere. You can find all EA DC's public resources at this link.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Andy Masley https://forum.effectivealtruism.org/posts/cEK7unHjM86F5n8yL/resources-on-us-policy-careers Wed, 06 Mar 2024 19:48:16 +0000 EA - Resources on US policy careers by Andy Masley Andy Masley https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:36 no full 3
iDwxwmKvFPk2Di4ge EA - Supervolcanoes tail risk has been exaggerated? by Vasco Grilo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Supervolcanoes tail risk has been exaggerated?, published by Vasco Grilo on March 6, 2024 on The Effective Altruism Forum.This is alinkpost for the peer-reviewed article "Severe Global Cooling After Volcanic Super-Eruptions? The Answer Hinges on Unknown Aerosol Size" (McGraw 2024). Below are its abstract, my notes, my estimation of a nearterm annualextinction risk fromsupervolcanoes of 3.38*10^-14, and a brief discussion of it. At the end, I have a table comparing my extinction risk estimates with Toby Ord's existential risk guesses given inThe Precipice.AbstractHere is the abstract fromMcGraw 2024 (emphasis mine):Volcanic super-eruptions have been theorized to cause severe global cooling, with the 74 kya Toba eruption purported to have driven humanity to near-extinction. However, this eruption left little physical evidence of its severity and models diverge greatly on the magnitude of post-eruption cooling. A key factor controlling the super-eruption climate response is the size of volcanic sulfate aerosol, a quantity that left no physical record and is poorly constrained by models.Here we show that this knowledge gap severely limits confidence in model-based estimates of super-volcanic cooling, and accounts for much of the disagreement among prior studies. By simulating super-eruptions over a range of aerosol sizes, we obtain global mean responses varying from extreme cooling all the way to the previously unexplored scenario of widespread warming. We also use an interactive aerosol model to evaluate the scaling between injected sulfur mass and aerosol size.Combining our model results with the available paleoclimate constraints applicable to large eruptions, we estimate that global volcanic cooling is unlikely to exceed 1.5°C no matter how massive the stratospheric injection. Super-eruptions, we conclude, may be incapable of altering global temperatures substantially more than the largest Common Era eruptions.This lack of exceptional cooling could explain why no single super-eruption event has resulted in firm evidence of widespread catastrophe for humans or ecosystems.My notesI have no expertise involcanology, but I foundMcGraw 2024 to be quite rigorous. In particular, they are able to use their model to replicate the more pessimistic results of past studies tweeking just 2 input parameters (highlighted by me below):"We next evaluate if the assessed aerosol size spread is the likely cause of disagreement among past studies with interactive aerosol models. For this task, we interpolated the peak surface temperature responses from our ModelE simulations to the injected mass and peak global mean aerosol size from several recent interactive aerosol model simulations of large eruptions (Fig. 7, left panel).Accounting for these two values alone (left panel), our model experiments are able to reproduce remarkably similar peak temperature responses as the original studies found". By "reproduce remarkably well", they are referring to acoefficient of determination (R^2) of 0.87 (see Fig. 7)."By comparison, if only the injected masses of the prior studies are used, the peak surface temperature responses cannot be reproduced". By this, they are referring to an R^2 ranging from -1.82 to -0.04[1] (see Fig. 7).They agree with past studies on the injected mass, but not on theaerosol size[2]. Fig. 3a (see below) illustrates the importance of the peak mean aerosol size. The greater the size, the weaker the cooling. I think this is explained as follows:Primarily, smaller particles reflect more sunlight per mass due to having greater cross-sectional area per mass[3].Secondarily, larger particles have less time to reflect sunlight due to falling down faster[4].According to Fig. 2 (see below), aerosol size increases with injected mass, which makes intuitive sen...

]]>
Vasco Grilo https://forum.effectivealtruism.org/posts/iDwxwmKvFPk2Di4ge/supervolcanoes-tail-risk-has-been-exaggerated Wed, 06 Mar 2024 17:05:29 +0000 EA - Supervolcanoes tail risk has been exaggerated? by Vasco Grilo Vasco Grilo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 16:38 no full 5
uNZwZs9zyvTE5uNEK EA - Research summary: farmed cricket welfare by abrahamrowe Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Research summary: farmed cricket welfare, published by abrahamrowe on March 7, 2024 on The Effective Altruism Forum.This post is a short summary of Farmed Cricket (Acheta domesticus, Gryllus assimilis, and Gryllodes sigillatus; Orthoptera) Welfare Considerations: Recommendations for Improving Global Practice, a peer-reviewed, open access publication on cricket welfare in the Journal of Insects as Food and Feed under a CC BY 4.0 license. The paper and supplemental information can be accessedhere. The original paper was written by Elizabeth Rowe, Karen Robles López, Kristin Robinson, Kaitlin Baudier, and Meghan Barrett; the research conducted in the paper was funded by Rethink Priorities as part of our research agenda on understanding the welfare of insects on farms.This post was written by Abraham Rowe (no relation to Elizabeth Rowe) and reviewed for accuracy by Meghan Barrett. All information is derived from the Elizabeth Rowe et al. (2024) publication, and some text from the original publication is directly adapted for this summary.SummaryAs of 2020, around 370 to 420 billion crickets and grasshoppers were farmed annually for food and feed, though today the number may be much higher.Rowe et al. (2024) is the first publication to consider species-specific welfare concerns for several species of crickets on industrialized insect farms.The authors identify 15 current and 5 future welfare concerns, and make recommendations for reducing the harms from these concerns. These concerns include:Stocking densityHigh stocking densities can increase the rates of aggression, cannibalism, and behavioral repression among individuals on cricket farms.DiseaseDiseases are relatively common on cricket farms. Common diseases, such as Acheta domesticus densovirus, can cause up to 100% cricket mortality.SlaughterCommon slaughter methods for crickets on farms include freezing in air, blanching/boiling, and convection baking. Little is known about the relative welfare costs of these methods, and the best ways for a producer to implement a given method.Future concerns that haven't yet been realized on farms include:Novel feed substratesFarmers have explored potentially giving crickets novel feeds, including food waste. This might be nutritionally inadequate or introduce diseases or other issues onto farms.Selective breeding and genetic modificationIn vertebrate animals, selective breeding has caused a large number of welfare issues. The same might be expected to become true for crickets.Background informationCricket farmingInsect farming, including of crickets, has been presented as a more sustainable approach to meet the protein demand of a growing human population. While wild-caught orthopterans (crickets and grasshoppers) are a traditional protein source around the world, modern cricket farming aims to industrialize the rearing and slaughter of crickets as a food source. As of 2020, 370-420 billion orthopterans were slaughtered or sold live, with crickets being the most common.Welfare frameworkThe Five Domains model of welfare, which has been promoted for invertebrates, evaluates animal welfare by looking at the nutrition, environment, physical health, behavior, and mental states of the animals being evaluated. The authors use this model for evaluating cricket farming and potential improvements that could be made on farms for animal welfare.Cricket biologyThree of the most common species of crickets farmed belong to the Gryllinae subfamily: Acheta domesticus, Gryllus assimilis, and Gryllodes sigillatus. All three species live between 80 and 120 days from hatching to natural death, with a 10-21 day incubation period. Crickets are hemimetabolous insects: they hatch from an egg, molting through a series of nymph stages called instars, before going through a terminal ...

]]>
abrahamrowe https://forum.effectivealtruism.org/posts/uNZwZs9zyvTE5uNEK/research-summary-farmed-cricket-welfare Thu, 07 Mar 2024 05:19:14 +0000 EA - Research summary: farmed cricket welfare by abrahamrowe abrahamrowe https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 27:22 no full 5
k6y6BSahK6JZCfbbQ EA - Invest in ACX Grants projects! by Saul Munn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Invest in ACX Grants projects!, published by Saul Munn on March 7, 2024 on The Effective Altruism Forum.TLDRSo, you think you're an effective altruist? Okay, show us what you got - invest in charitable projects, then see how you do over the coming year. If you pick winners, you get (charitable) prizes; otherwise, you lose your (charitable) dollars. Also, you get to fund impactful projects. Win-win.Click here to see the projects and to start investing!What's ACX/ACX Grants?Astral Codex Ten (ACX) is a blog written by Scott Alexander on topics like effective altruism, reasoning, science, psychiatry, medicine, ethics, genetics, AI, economics, and politics. ACX Grants is a grants program in which Scott Alexander helps fund charitable and scientific projects - see the 2022 cohort here and his retrospective on ACX Grants 2022 here.What do you mean by "invest in ACX Grants projects"?In ACX Grants 2024, some of the applications were given direct grants and the rest were given the option to participate in an impact market.Impact markets are an alternative to grants or donations as a way to fund charitable projects. A collection of philanthropies announces that they'll be giving out prizes for the completion of successful, effectively altruistic projects that solve important problems the philanthropies care about. Project creators then strike out to build projects that solve those problems.If they need money to get started, investors can buy a "stake" in the project's possible future prize winnings, called an "impact certificate." (You can read more about how impact markets generally work here, and a canonical explanation of impact certificates on the EA Forum here.)Four philanthropic funders have expressed interest in giving prizes to successful projects in this round:ACX Grants 2025The Long Term Future FundThe EA Infrastructure FundThe Survival and Flourishing FundSo, after a year, the above philanthropies will review the projects in the impact market to see which ones have had the highest impact.Okay, but why would I want to buy impact certificates? Why not just donate directly to the project?Giving direct donations is great! But purchasing impact certificates can also have some advantages over direct donations:Better feedbackDirect donation can have pretty bad feedback loops about what sorts of things end up actually being effective/successful. After a year, the philanthropies listed above will review the projects to see which ones are impactful - and award prizes to the ones that they find most impactful. You get to see how much impact per-dollar your investments returned, giving you grounded feedback.Improving your modeling of grant-makersPurchasing impact certificates forces you to put yourself in the eyes of a grant-maker - you can look through a bunch of projects that might be impactful, and, with your donation budget, select the ones you expect to have the most impact. It also pushes you to model the philanthropies with great feedback.What sorts of things do they care about? Why? What are their primary theories of change? How will the project sitting in front of you relevantly improve the world in a way they actually care about?Make that charitable moolahIf you invest in projects that end up being really impactful, then you'll get a share of the charitable prize funding that projects win! All of this remains as charitable funding, so you'll be able to donate it to whatever cause you think is most impactful. For example, if you invest $100 into a project that wins a prize worth 2x it's original valuation, you can then choose to donate $200 to a charity or project of your choice!Who's giving out the prizes at the end?Four philanthropic funders have expressed interest in giving prizes[1] to successful projects that align with their interests:AC...

]]>
Saul Munn https://forum.effectivealtruism.org/posts/k6y6BSahK6JZCfbbQ/invest-in-acx-grants-projects Thu, 07 Mar 2024 02:58:59 +0000 EA - Invest in ACX Grants projects! by Saul Munn Saul Munn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:10 no full 6
s8YBvqjAuewzuteb3 EA - Animal Charity Evaluators is doing a live AMA now! by Animal Charity Evaluators Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Charity Evaluators is doing a live AMA now!, published by Animal Charity Evaluators on March 7, 2024 on The Effective Altruism Forum.Starting now! Animal Charity Evaluators is holding an AMA on our Movement Grants!The AMA is your chance to ask our team about what projects we're likely to fund, the application process, how to make a good application, and anything else about the program. Applications close March 17, 11:59 PM PT.Our team members answering questions are:Eleanor McAree, Movement Grants ManagerElisabeth Ormandy, Programs DirectorHolly Baines, Communications ManagerHow to participate? Go to the FAST Forum (make sure you have an account) and ask a question. We look forward to hearing from you!Movement Grants is ACE's strategic grantmaking program dedicated to building and strengthening the animal advocacy movement. For a limited time, you can DOUBLE your donation to ACE's Movement Grants! By donating to this program, you are investing in the expansion of a broader advocacy movement and a brighter future for animal welfare.Thank you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Animal Charity Evaluators https://forum.effectivealtruism.org/posts/s8YBvqjAuewzuteb3/animal-charity-evaluators-is-doing-a-live-ama-now Thu, 07 Mar 2024 00:28:05 +0000 EA - Animal Charity Evaluators is doing a live AMA now! by Animal Charity Evaluators Animal Charity Evaluators https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:17 no full 7
Yg2JS23X9hhFcf6Eu EA - Australians are concerned about AI risks and expect strong government action by Alexander Saeri Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Australians are concerned about AI risks and expect strong government action, published by Alexander Saeri on March 8, 2024 on The Effective Altruism Forum.Key insightsA representative online Survey Assessing Risks from AI (SARA) of 1,141 Australians in Jan-Feb 2024 investigated public perceptions of AI risks and support for AI governance actions.Australians are most concerned about AI risks where AI acts unsafely (e.g., acting in conflict with human values, failure of critical infrastructure), is misused (e.g., cyber attacks, biological weapons), or displaces the jobs of humans; they are least concerned about AI-assisted surveillance, or bias and discrimination in AI decision-making.Australians judge "preventing dangerous and catastrophic outcomes from AI" the #1 priority for the Australian Government in AI; 9 in 10 Australians support creating a new regulatory body for AI.To meet public expectations, the Australian Government must urgently increase its capacity to govern increasingly-capable AI and address diverse risks from AI, including catastrophic risks.FindingsAustralians are concerned about diverse risks from AIWhen asked about a diverse set of 14 possible negative outcomes from AI, Australians were most concerned about AI systems acting in ways that are not safe, not trustworthy, and not aligned with human values. Other high-priority risks include AI replacing human jobs, enabling cyber attacks, operating lethal autonomous weapons, and malfunctioning within critical infrastructure.Australians are skeptical of the promise of artificial intelligence: 4 in 10 support the development of AI, 3 in 10 oppose it, and opinions are divided about whether AI will be a net good (4 in 10) or harm (4 in 10).Australians support regulatory and non-regulatory action to address risks from AIWhen asked to choose the top 3 AI priorities for the Australian Government, the #1 selected priority was preventing dangerous and catastrophic outcomes from AI.Other actions prioritised by at least 1 in 4 Australians included (1) requiring audits of AI models to make sure they are safe before being released, (2) making sure that AI companies are liable for harms, (3) preventing AI from causing human extinction, (4) reducing job losses from AI, and (5) making sure that people know when content is produced using AI.Almost all (9 in 10) Australians think that AI should be regulated by a national government body, similar to how the Therapeutic Goods Administration acts as a national regulator for drugs and medical devices. 8 in 10 Australians think that Australia should lead the international development and governance of AI.Australians take catastrophic and extinction risks from AI seriouslyAustralians consider the prevention of dangerous and catastrophic outcomes from AI the #1 priority for the Australian Government. In addition, a clear majority (8 in 10) of Australians agree with AI experts, technology leaders, and world political leaders that preventing the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war 1.Artificial Intelligence was judged as the third most likely cause of human extinction, after nuclear war and climate change. AI was judged as more likely than a pandemic or an asteroid impact. About 1 in 3 Australians think it's at least 'moderately likely' AI will cause human extinction in the next 50 years.Implications and actions supported by the researchFindings from SARA show that Australians are concerned about diverse risks from AI, especially catastrophic risks, and expect the Australian Government to address these through strong governance action.Australians' ambivalence about AI and expectation of strong governance action to address risks is a consistent theme of public opinion rese...

]]>
Alexander Saeri https://forum.effectivealtruism.org/posts/Yg2JS23X9hhFcf6Eu/australians-are-concerned-about-ai-risks-and-expect-strong Fri, 08 Mar 2024 17:42:56 +0000 EA - Australians are concerned about AI risks and expect strong government action by Alexander Saeri Alexander Saeri https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:51 no full 1
pXjzi8bK5gMKz5ggH EA - How do you address temporary slump in work motivation? by Vaipan Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How do you address temporary slump in work motivation?, published by Vaipan on March 8, 2024 on The Effective Altruism Forum.I suppose this is familiar: you have some defined tasks for the week, they are inherently interesting (and reasonably impactful!), you have the right level of competence to achieve them (although these tasks are kind of learning-by-doing, since it's a start-up kind of task), you have a good working environment (silence and food, for me).And yet you feel that slump, you have opened the document and you have booked your Focusmate, and it should go into this deep flaw state. But it doesn't. You feel bland, neutral, and have nothing to report to your Focusmate partner because you haven't been able to write a damn word. But it's not a permanent thing--it's a 'it's been a few day' thing. Any resource? Thanks!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Vaipan https://forum.effectivealtruism.org/posts/pXjzi8bK5gMKz5ggH/how-do-you-address-temporary-slump-in-work-motivation Fri, 08 Mar 2024 15:55:49 +0000 EA - How do you address temporary slump in work motivation? by Vaipan Vaipan https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:57 no full 3
5czbdxxsDaDQCHoWG EA - The Insect Institute is Hiring by Dustin Crummett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Insect Institute is Hiring, published by Dustin Crummett on March 8, 2024 on The Effective Altruism Forum.The Insect Institute is hiring for a vital, exciting, foundational role: a full-time Program Coordinator or Program Officer (depending on the qualifications of the successful candidate). This is a high-responsibility position where you will have the opportunity to drive real impact for our mission.As our second full-time employee, you will be tasked with helping to carry out the Insect Institute's interventions, including through engagement with policymakers, regulators, NGOs, and potentially media. Suitably qualified candidates may also be asked to contribute to research and report writing. As one of only a few people worldwide working in an extremely important cause area, you will have the potential for enormous counterfactual impact.Salary: $73,630-$87,694 USD pre-taxLocation: Fully remoteApplication Deadline: April 1st, end of day in the EST time zoneThe full job description and application is available here. If you know someone else who might be a good fit, a referral form is available here. We offer a $500 bonus for referring the successful candidate. Questions about the role can be directed to info@insectinstitute.org.More Information:Key ResponsibilitiesImplementing the Insect Institute's interventions. This might include, but not necessarily be limited to, activities like:Working with legislators on, e.g., environmental issues related to the adoption of insects as food and feedOutreach to regulators in US executive agencies or UK ministries on, e.g., food safety issues related to insect farmingOutreach to and collaboration on projects with other NGOs, such as environmental, public health, or animal welfare organizationsDrafting press releases and conducting outreach to journalistsEspecially for more senior levels, taking initiative to, e.g., identify ways to improve on current interventions, or to identify opportunities for new interventionsIf hired at a more senior level, potentially managing others, especially as the Insect Institute expands in the futureFor candidates with suitable skills, potentially some degree of research and report writingRequirements:Strong written and oral communication skillsAbility to credibly and persuasively represent the Insect Institute's positions to other stakeholdersWe do not require starting familiarity with relevant academic domains (e.g., environmental science, public health, animal welfare, entomology) or with the state of the insects as food and feed industry. However, the candidate should possess the ability to gain familiarity as needed, and to proactively stay abreast of developmentsAdaptability, flexibility, and willingness to proactively do what is necessary to give the Insect Institute's projects the greatest chance of successPreferred:If you do not meet all of the below criteria, please still consider applying. Please also take an expansive interpretation of the below criteria (e.g., if you are not sure whether your work experience is relevant, err on the side of assuming it might be).Relevant work experience (such as, e.g., work in policy, advocacy, or alternative proteins). Relevant backgrounds might include but are not limited to, e.g.:Outreach to legislators or relevant government agencies (such as the USDA or FDA in the US, or Defra or the FSA in the UK), especially if on relevant issues (environment sustainability, food safety, etc.)Work within such government agencies, especially if on relevant issuesWork in an NGO, such as one focused on the environment, alternative proteins, food safety, or animal welfare, doing work similar to that mentioned in the "key responsibilities" aboveExperience managing others, especially in working on relevant issuesExpertise in a relevant...

]]>
Dustin Crummett https://forum.effectivealtruism.org/posts/5czbdxxsDaDQCHoWG/the-insect-institute-is-hiring Fri, 08 Mar 2024 11:22:12 +0000 EA - The Insect Institute is Hiring by Dustin Crummett Dustin Crummett https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:26 no full 4
w7EwYi6Ka6D93Fire EA - Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research by David Kristoffersson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research, published by David Kristoffersson on March 8, 2024 on The Effective Altruism Forum.Cross-posted on LessWrong.Executive SummaryWe're excited to introduce Convergence Analysis - a research non-profit & think-tank with the mission of designing a safe and flourishing future for humanity in a world with transformative AI. In the past year, we've brought together an interdisciplinary team of 10 academics and professionals, spanning expertise in technical AI alignment, ethics, AI governance, hardware, computer science, philosophy, and mathematics.Together, we're launching three initiatives focused on conducting Scenario Research, Governance Recommendations Research, and AI Awareness.Our programs embody three key elements of our Theory of Change and reflect what we see as essential components of reducing AI risk: (1) understanding the problem, (2) describing concretely what people can do, and (3) disseminating information widely and precisely. In some more detail, they do the following:Scenario Research: Explore and define potential AI scenarios - the landscape of relevant pathways that the future of AI development might take.Governance Recommendations Research: Provide concrete, detailed analyses for specific AI governance proposals that lack comprehensive research.AI Awareness: Inform the general public and policymakers by disseminating important research via books, podcasts, and more.In the next three months, you can expect to see the following outputs:Convergence's Theory of Change: A report detailing an outcome-based, high-level strategic plan on how to mitigate existential risk from TAI.Research Agendas for our Scenario Research and Governance Recommendations initiatives.2024 State of the AI Regulatory Landscape: A review summarizing governmental regulations for AI safety in 2024.Evaluating A US AI Chip Registration Policy: A research paper evaluating the global context, implementation, feasibility, and negative externalities of a potential U.S. AI chip registry.A series of articles on AI scenarios highlighting results from our ongoing research.All Thinks Considered: A podcast series exploring the topics of critical thinking, fostering open dialogue, and interviewing AI thought leaders.Learn more on our new website.HistoryConvergence originally emerged as a research collaboration in existential risk strategy between David Kristoffersson and Justin Shovelain from 2017 to 2021, engaging a diverse group of collaborators. Throughout this period, they worked steadily on building a body of foundational research on reducing existential risk, publishing some findings on the EA Forum and LessWrong, and advising individuals and groups such as Lionheart Ventures.Through 2021 to 2023, we laid the foundation for a research institution and built a larger team.We are now launching Convergence as a strong team of 10 researchers and professionals with a revamped research and impact vision. Timelines to advanced AI have shortened, and our society urgently needs clarity on the paths ahead and on the right courses of action to take.ProgramsScenario ResearchThere are large uncertainties about the future of AI and its impacts on society. Potential scenarios range from flourishing post-work futures to existential catastrophes such as the total collapse of societal structures. Currently, there's a serious dearth of research to understand these scenarios - their likelihood, causes, and societal outcomes.Scenario planning is an analytical tool used by policymakers, strategists, and academics to explore and prepare for the landscape of possible outcomes in domains defined by uncertainty. Such research typically defines specific parameters that are likely to cause certain scenarios, and id...

]]>
David_Kristoffersson https://forum.effectivealtruism.org/posts/w7EwYi6Ka6D93Fire/announcing-convergence-analysis-an-institute-for-ai-scenario-1 Fri, 08 Mar 2024 09:24:19 +0000 EA - Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research by David Kristoffersson David_Kristoffersson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:49 no full 5
5oStggnYLGzomhvvn EA - Talking to Congress: Can constituents contacting their legislator influence policy? by Tristan Williams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talking to Congress: Can constituents contacting their legislator influence policy?, published by Tristan Williams on March 9, 2024 on The Effective Altruism Forum.Summary and Key TakeawaysThe basic case: Contacting your legislator1 is low hanging fruit: it can be done in a relatively short amount of time, is relatively easy, and could have an impact. Each communication is not guaranteed to be influential, but when done right has the potential to encourage a legislator to take actions which could bequite influential.Why do we believe that constituent communication is useful?At the state level, we've seen two studies which have randomly assigned some legislators to receive communication[1], finding a12% and20% increased chance of the legislator voting towards the desired direction. At the federal level,one survey of staffers[2] indicated that less than 50 personalized messages were enough to get an undecided member to take the requested action for the majority of offices (70%).Anecdotal accounts, both in the literature and our conversations indicated that, despite disagreement on how much impact communication has, the possibility certainly exists for it to affect what a legislator thinks.What is the best way to conduct one of these campaigns?Some factors are important to be aware of. Communication is best sent for issues legislators are undecided on, and to legislators with smaller constituencies. See How to Best Execute the Communication for more.Personalized communication goes the furthest. Many advocacy groups use form email templates where you merely add your name to a pre-generated message and hit send. These might be net negative, and staffers have made clear time and again that personal messages, written by the constituent, are best.In-person meetings are best, but letters, emails and calls are likely nearly as effective, while social media posts and messages have a more uncertain effect.The way you frame your concern matters. You'll have to decide whether you want to make a very specific ask to support a given bill, or want to make a more general case for concern with an issue, perhaps telling a personal story to support your position. The best messages will make use of both frames.Know your legislators. Different legislators will have their own agendas and issues of focus[3], so being familiar with your legislator's work is important.IntroductionThis is part ofa project forAI Safety Camp undertaken to answer one chief question: can constituents contacting their legislator influence policy?[4]In answering this question, we're primarily speaking to two groups. First, to organizers within the broader policy/advocacy space trying to decide how to best work with Congress and if facilitating constituent communication could be a worthwhile part of that. Second, to individuals, who are concerned with the state of affairs of current risks and would like to take a further step (however small) in reducing that risk.We hope to provide below a synthesis of our findings, so that each of these groups can make a more informed decision as to whether it's worth their time.All in all, the below is the result of 10 discussions with current and former congressional staff, ~50 hours of collective research, and conversations with many organizations in the AI policy space. From our research and conversation with staffers, we've found little directly measuring the effectiveness of the method, but general agreement that it's likely impactful given certain circumstances, and much on how it can best be executed.From our conversations with those in AI policy, we've found that facilitating constituent communication isn't currently a focus for groups in the AI Safety ecosystem, but that the majority of those we've talked to are neutral to positive on bringing this in...

]]>
Tristan Williams https://forum.effectivealtruism.org/posts/5oStggnYLGzomhvvn/talking-to-congress-can-constituents-contacting-their Sat, 09 Mar 2024 19:04:19 +0000 EA - Talking to Congress: Can constituents contacting their legislator influence policy? by Tristan Williams Tristan Williams https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 33:22 no full 1
M9bweyyMHheu5Thph EA - This is why people are reluctant to write on the EA Forum by Stan Pinsent Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: This is why people are reluctant to write on the EA Forum, published by Stan Pinsent on March 9, 2024 on The Effective Altruism Forum.Four days ago I posted a question Why are you reluctant to write on the EA Forum?, with a link to Google Form. I received 20 responses.This post is in three parts:Summary of reasons people are reluctant to write on the EA ForumSuggestions for making it easierPositive feedback for the EA ForumReplies in fullSummary of reasons people are reluctant to write on the EA ForumThe form received 20 responses over four days.All replies included a reason for being reluctant or unable to write on the EA Forum. Only a minority of replies included a concrete suggestion for improvement.I have attempted to tally how many times each reason appeared across the 20 responses[2]:Suggestions for making it easier to contributeI give all concrete suggestions for helping people be less reluctant to contribute to the forum, in chronological order in which they were received:More discourse on increasing participation: "more posts like these which are aimed at trying to get more people contributing"Give everyone equal Karma power: "If the amount of upvotes and downvotes you got didn't influence your voting power (and was made less prominent), we would have less groupthink and (pertaining to your question) I would be reading and writing on the EA-forum often and happily, instead of seldom and begrudgingly."Provide extra incentives for posting: "Perhaps small cash or other incentives given each month for best posts in certain categories, or do competitions, or some such measure? That added boost of incentive and the chance that the hours spent on a post may be reimbursed somehow.""Discussions that are less tied to specific identities and less time-consuming to process - more Polis like discussions that allow participants to maintain anonymity, while also being able to understand the shape of arguments."Lower the stakes for commenting: "I'm not sure if comment section can include "I've read x% of the article before this comment"?"Positive feedback for the EA ForumThe question invited criticism of the Forum, but it did nevertheless garner some positive feedback.For an internet forum it's pretty good. But it's still an internet forum. Not many good discussions happen on the internet.Forum team do a great job :)Responses in fullAll responses can be found here.^^You can judge for yourself here whether I correctly classified the responses.I considered lumping "too time-consuming" and "lack of time" together, but decided against this because the former seems to imply "bar is very high", while the latter is merely a statement on how busy the respondent's life is.The form collected two responses:Why are you reluctant to write on the EA Forum? What would make it easier?Is there anything else you would like to share?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Stan Pinsent https://forum.effectivealtruism.org/posts/M9bweyyMHheu5Thph/this-is-why-people-are-reluctant-to-write-on-the-ea-forum Sat, 09 Mar 2024 17:12:05 +0000 EA - This is why people are reluctant to write on the EA Forum by Stan Pinsent Stan Pinsent https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:49 no full 2
9QLJgRMmnD6adzvAE EA - NIST staffers revolt against expected appointment of 'effective altruist' AI researcher to US AI Safety Institute by Phib Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: NIST staffers revolt against expected appointment of 'effective altruist' AI researcher to US AI Safety Institute, published by Phib on March 9, 2024 on The Effective Altruism Forum."The appointment of Christiano, which was said to come directly from Secretary of Commerce Gina Raimondo (NIST is an agency under the US Department of Commerce) has sparked outrage among NIST employees who fear that Christiano's association with EA and longtermism could compromise the institute's objectivity and integrity.""The AISI was established in November 2023 to "support the responsibilities assigned to the Department of Commerce" under the AI Executive Order. Earlier today, US Senate Majority Leader Chuck Schumer (D-NY) announced that the NIST will receive up to $10 million to establish the US AI Safety Institute."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Phib https://forum.effectivealtruism.org/posts/9QLJgRMmnD6adzvAE/nist-staffers-revolt-against-expected-appointment-of Sat, 09 Mar 2024 07:27:47 +0000 EA - NIST staffers revolt against expected appointment of 'effective altruist' AI researcher to US AI Safety Institute by Phib Phib https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:58 no full 3
GRYP6rameQJZRJ4oj EA - CEA is hiring a Head of Communications by Ben West Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA is hiring a Head of Communications, published by Ben West on March 9, 2024 on The Effective Altruism Forum.Applications will be evaluated on a rolling basis. All applications must be submitted by Friday, March 22nd, 2024, 11:59 pm GMT.CEA is hiring a head of communications. While a successful candidate would ideally have a strong communications background, we're open to applications from generalists with strong foundational skills who can build a team with additional expertise. This is a senior leadership position reporting to the CEO. The remit of the role is broad, including developing and executing communications strategies for both CEA and effective altruism more broadly.We anticipate that this individual will become the foremost leader for strategic communications related to EA and will have a significant impact in shaping the field's strategy. This will include collaborating with senior leaders at other organizations doing EA-related work.Both EA and CEA are at important inflection points. Public awareness of EA has grown significantly over the past 2 years, during which time EA has had both major success and significant controversies. To match this growth in awareness, we're looking to increase our capacity to inform public narratives about and contribute to a more accurate understanding of EA ideas and impact.The stakes are high: Success could result in significantly higher engagement with EA ideas, leading to career changes, donations, new projects, and increased traction in a range of fields. Failure could result in long-lasting damage to the brand, the ideas, and the people who have historically associated with them.We're looking for a leader who can design and execute a communications strategy for EA. This person will be a strategic partner with and member of CEA's leadership team to help us shape both the substance and messaging of EA. You'll be able to build from the foundation set by our existing team, building on the work of our outgoing head of communications to further grow and expand the team, which currently includes one full-time staff member and support from an external agency.CEA has a new CEO, who is in the process of developing a new organizational strategy and views strengthening our communications function as a key priority. You should expect significant organizational support - e.g. attention from senior leadership and the allocation of necessary financial resources.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Ben_West https://forum.effectivealtruism.org/posts/GRYP6rameQJZRJ4oj/cea-is-hiring-a-head-of-communications Sat, 09 Mar 2024 02:12:46 +0000 EA - CEA is hiring a Head of Communications by Ben West Ben_West https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:28 no full 5
AA4aGmtXWbhunGHBq EA - Clarifying two uses of "alignment" by Matthew Barnett Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Clarifying two uses of "alignment", published by Matthew Barnett on March 10, 2024 on The Effective Altruism Forum.Paul Christiano once clarified AI alignment as follows:When I say an AI A is aligned with an operator H, I mean:A is trying to do what H wants it to do.This definition is clear enough for many purposes, but it leads to confusion when one wants to make a point about two different types of alignment:A is trying to do what H wants it to do because A is trading or cooperating with H on a mutually beneficial outcome for the both of them. For example, H could hire A to perform a task, and offer a wage as compensation.A is trying to do what H wants it to do because A has the same values as H - i.e. its "utility function" overlaps with H's utility function - and thus A intrinsically wants to pursue what H wants it to do.These cases are important to distinguish because they have dramatically different consequences for the difficulty and scope of alignment.To solve alignment in the sense (1), A and H don't necessarily need to share the same values with each other in any strong sense. Instead, the essential prerequisite seems to be for A and H to operate in an environment in which it's mutually beneficial to them to enter contracts, trade, or cooperate in some respect.For example, one can imagine a human hiring a paperclip maximizer AI to perform work, paying them a wage. In return the paperclip maximizer could use their wages to buy more paperclips. In this example, the AI performed their duties satisfactorily, without any major negative side effects resulting from their differing values, and both parties were made better off as a result.By contrast, alignment in the sense of (2) seems far more challenging to solve. In the most challenging case, this form of alignment would require solving extremal goodhart, in the sense that A's utility function would need to be almost perfectly matched with H's utility function. Here, the idea is that even slight differences in values yield very large differences when subject to extreme optimization pressure.Because it is presumably easy to make slight mistakes when engineering AI systems, by assumption, these mistakes could translate into catastrophic losses of value.Effect on alignment difficultyMy impression is that people's opinions about AI alignment difficulty often comes down to differences in how much they think we need to solve the second problem relative to the first problem, in order to get AI systems that generate net-positive value for humans.If you're inclined towards thinking that trade and compromise is either impossible or inefficient between agents at greatly different levels of intelligence, then you might think that we need to solve the second problem with AI, since "trading with the AIs" won't be an option. My understanding is that this is Eliezer Yudkowsky's view, and the view of most others who are relatively doomy about AI.In this frame, a common thought is that AIs would have no need to trade with humans, as humans would be like ants to them.On the other hand, you could be inclined - as I am - towards thinking that agents at greatly different levels of intelligence can still find positive sum compromises when they are socially integrated with each other, operating under a system of law, and capable of making mutual agreements. In this case, you might be a lot more optimistic about the prospects of alignment.To sketch one plausible scenario here, if AIs can own property and earn income by selling their labor on an open market, then they can simply work a job and use their income to purchase whatever it is they want, without any need to violently "take over the world" to satisfy their goals. At the same time, humans could retain power in this system through capital ownership and other gran...

]]>
Matthew_Barnett https://forum.effectivealtruism.org/posts/AA4aGmtXWbhunGHBq/clarifying-two-uses-of-alignment Sun, 10 Mar 2024 22:34:25 +0000 EA - Clarifying two uses of "alignment" by Matthew Barnett Matthew_Barnett https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:47 no full 1
9tfYvu9pBxx4evBMs EA - OpenAI announces new members to board of directors by Will Howard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI announces new members to board of directors, published by Will Howard on March 10, 2024 on The Effective Altruism Forum.From the linked article:We're announcing three new members to our Board of Directors as a first step towards our commitment to expansion: Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation, Nicole Seligman, former EVP and General Counsel at Sony Corporation and Fidji Simo, CEO and Chair of Instacart. Additionally, Sam Altman, CEO, will rejoin the OpenAI Board of Directors....Dr. Sue Desmond-Hellmann is a non-profit leader and physician. Dr. Desmond-Hellmann currently serves on the Boards of Pfizer and the President's Council of Advisors on Science and Technology. She previously was a Director at Proctor and Gamble, Meta (Facebook), and the Bill & Melinda Gates Medical Research institute. She served as the Chief Executive Officer of the Bill & Melinda Gates Foundation from 2014 to 2020.From 2009-2014 she was Professor and Chancellor of the University of California, San Francisco (UCSF), the first woman to hold the position. She also previously served as President of Product Development at Genentech, where she played a leadership role in the development of the first gene-targeted cancer drugs....Nicole Seligman is a globally recognized corporate and civic leader and lawyer. She currently serves on three public company corporate boards - Paramount Global, MeiraGTx Holdings PLC, and Intuitive Machines, Inc. Seligman held several senior leadership positions at Sony entities, including EVP and General Counsel at Sony Corporation, where she oversaw functions including global legal and compliance matters.She also served as President of Sony Entertainment, Inc., and simultaneously served as President of Sony Corporation of America. Seligman also currently holds nonprofit leadership roles at the Schwarzman Animal Medical Center and The Doe Fund in New York City.Previously, Seligman was a partner in the litigation practice at Williams & Connolly LLP in Washington, D.C., working on complex civil and criminal matters and counseling a wide range of clients, including President William Jefferson Clinton and Hillary Clinton. She served as a law clerk to Justice Thurgood Marshall on the Supreme Court of the United States....Fidji Simo is a consumer technology industry veteran, having spent more than 15 years leading the operations, strategy and product development for some of the world's leading businesses. She is the Chief Executive Officer and Chair of Instacart. She also serves as a member of the Board of Directors at Shopify. Prior to joining Instacart, Simo was Vice President and Head of the Facebook App.Over the last decade at Facebook, she oversaw the Facebook App, including News Feed, Stories, Groups, Video, Marketplace, Gaming, News, Dating, Ads and more. Simo founded the Metrodora Institute, a multidisciplinary medical clinic and research foundation dedicated to the care and cure of neuroimmune axis disorders and serves as President of the Metrodora Foundation.It looks like none of them have a significant EA connection, although Sue Desmond-Hellmann has said some positive things about effective altruism at least.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Will Howard https://forum.effectivealtruism.org/posts/9tfYvu9pBxx4evBMs/openai-announces-new-members-to-board-of-directors Sun, 10 Mar 2024 06:25:12 +0000 EA - OpenAI announces new members to board of directors by Will Howard Will Howard https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:12 no full 3
Cq8DTRusceNFJqn3y EA - What should the EA/AI safety community change, in response to Sam Altman's revealed priorities? by SiebeRozendal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What should the EA/AI safety community change, in response to Sam Altman's revealed priorities?, published by SiebeRozendal on March 9, 2024 on The Effective Altruism Forum.Given that: Altman is increasingly diverging from good governance/showing his true colors, as demonstrated in his power play against board members & his current chip ambitions.Should there be any strategic changes?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
SiebeRozendal https://forum.effectivealtruism.org/posts/Cq8DTRusceNFJqn3y/what-should-the-ea-ai-safety-community-change-in-response-to Sat, 09 Mar 2024 22:15:05 +0000 EA - What should the EA/AI safety community change, in response to Sam Altman's revealed priorities? by SiebeRozendal SiebeRozendal https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:36 no full 5
orhjaZ3AJMHzDzckZ EA - Results from an Adversarial Collaboration on AI Risk (FRI) by Forecasting Research Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Results from an Adversarial Collaboration on AI Risk (FRI), published by Forecasting Research Institute on March 11, 2024 on The Effective Altruism Forum.Authors of linked report: Josh Rosenberg, Ezra Karger, Avital Morris, Molly Hickman, Rose Hadshar, Zachary Jacobs, Philip Tetlock[1]Today, the Forecasting Research Institute (FRI) released "Roots of Disagreement on AI Risk: Exploring the Potential and Pitfalls of Adversarial Collaboration," which discusses the results of an adversarial collaboration focused on forecasting risks from AI.In this post, we provide a brief overview of the methods, findings, and directions for further research. For much more analysis and discussion, see the full report: https://forecastingresearch.org/s/AIcollaboration.pdfAbstractWe brought together generalist forecasters and domain experts (n=22) who disagreed about the risk AI poses to humanity in the next century. The "concerned" participants (all of whom were domain experts) predicted a 20% chance of an AI-caused existential catastrophe by 2100, while the "skeptical" group (mainly "superforecasters") predicted a 0.12% chance.Participants worked together to find the strongest near-term cruxes: forecasting questions resolving by 2030 that would lead to the largest change in their beliefs (in expectation) about the risk of existential catastrophe by 2100.Neither the concerned nor the skeptics substantially updated toward the other's views during our study, though one of the top short-term cruxes we identified is expected to close the gap in beliefs about AI existential catastrophe by about 5%: approximately 1 percentage point out of the roughly 20 percentage point gap in existential catastrophe forecasts.We find greater agreement about a broader set of risks from AI over the next thousand years: the two groups gave median forecasts of 30% (skeptics) and 40% (concerned) that AI will have severe negative effects on humanity by causing major declines in population, very low self-reported well-being, or extinction.Extended Executive SummaryIn July 2023, we released our Existential Risk Persuasion Tournament (XPT) report, which identified large disagreements between domain experts and generalist forecasters about key risks to humanity (Karger et al. 2023). This new project - a structured adversarial collaboration run in April and May 2023 - is a follow-up to the XPT focused on better understanding the drivers of disagreement about AI risk.MethodsWe recruited participants to join "AI skeptic" (n=11) and "AI concerned" (n=11) groups that disagree strongly about the probability that AI will cause an existential catastrophe by 2100.[2] The skeptic group included nine superforecasters and two domain experts. The concerned group consisted of domain experts referred to us by staff members at Open Philanthropy (the funder of this project) and the broader Effective Altruism community.Participants spent 8 weeks (skeptic median: 80 hours of work on the project; concerned median: 31 hours) reading background materials, developing forecasts, and engaging in online discussion and video calls.We asked participants to work toward a better understanding of their sources of agreement and disagreement, and to propose and investigate "cruxes": short-term indicators, usually resolving by 2030, that would cause the largest updates in expectation to each group's view on the probability of existential catastrophe due to AI by 2100.Results: What drives (and doesn't drive) disagreement over AI riskAt the beginning of the project, the median "skeptic" forecasted a 0.10% chance of existential catastrophe due to AI by 2100, and the median "concerned" participant forecasted a 25% chance. By the end, these numbers were 0.12% and 20% respectively, though many participants did not attribute their updates to a...

]]>
Forecasting Research Institute https://forum.effectivealtruism.org/posts/orhjaZ3AJMHzDzckZ/results-from-an-adversarial-collaboration-on-ai-risk-fri Mon, 11 Mar 2024 19:00:03 +0000 EA - Results from an Adversarial Collaboration on AI Risk (FRI) by Forecasting Research Institute Forecasting Research Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:38 no full 2
idyoQvieuXXcspJzh EA - Interactive AI Governance Map by Hamish McDoodles Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interactive AI Governance Map, published by Hamish McDoodles on March 12, 2024 on The Effective Altruism Forum.The map is available at aigov.world.This builds on prior work by AI Safety map and The Big Funding Spreadsheet.As with the previous map, please suggest changes to the map via spreadsheet.Thanks to Damin Curtis whose research was the basis for this.And thanks to Nonlinear for funding this venture. They have a message: "If you'd like to get funding for or fund an AI governance project, applications for the latest round of the nonlinear.org/network close on March 29th."Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Hamish McDoodles https://forum.effectivealtruism.org/posts/idyoQvieuXXcspJzh/interactive-ai-governance-map Tue, 12 Mar 2024 15:00:41 +0000 EA - Interactive AI Governance Map by Hamish McDoodles Hamish McDoodles https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:48 no full 3
6Ku4KeCoRzoN4BiEd EA - Pre-slaughter mortality of farmed shrimp by Hannah McKay Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pre-slaughter mortality of farmed shrimp, published by Hannah McKay on March 12, 2024 on The Effective Altruism Forum.Citation: McKay, H. and McAuliffe, W. (2024). Pre-slaughter mortality of farmed shrimp. Rethink Priorities.https://doi.org/10.17605/OSF.IO/W7MUZThe report is also available as apdf here.Executive summaryMortality rates are high in shrimp aquaculture, implying welfare threats are common.It is typical for ~50% of shrimp to die before reaching slaughter age.This equates to around 1.2 billion premature deaths a day on average.Mortality varies among species; prioritizing interventions should take this into account.Because of high pre-slaughter mortality (~81%), Macrobrachium shrimp represent a larger share of farmed shrimp than slaughtered shrimp.Most individual deaths are P. vannamei, despite having the lowest mortality rate.More larvae die than any other life stage, but this does not necessarily mean efforts should focus on them.Uncertainty remains about whether larval shrimp are sentient - they are planktonic, so do not make autonomous decisions.Minimizing larval deaths could cause compensatory deaths in later life stages (e.g., ensuring weaker larvae survive, who then die from later harsh conditions).Interventions should likely concentrate on the ongrowing stage (postlarval and juvenile-subadult shrimp), where there are still tens of billions of deaths.There are several causes of mortality and differences between farm types.Most causes are likely a combination of intrinsic shrimp traits (e.g., young shrimp are sensitive to environmental fluctuations), farming practices, and diseases.Disease is a main cause throughout life, but it is often a downstream effect of issues that farmers have some control over (e.g., poor water quality).Variation among reported figures, especially that more intensive farms have fewer deaths, suggests many factors are at play and that some are controllable.The effects of reducing early mortality on industry trajectory are uncertain.The number of shrimp born may decrease if farmers must produce fixed output. But shrimp would live longer, increasing the chances for negative experiences.However, consumer demand has historically outstripped supply, so the industry may grow if it had better control of conditions causing mortality.If reduced deaths come from intensification of practices, more shrimp may be reared in conditions that can harm welfare (e.g., high stocking densities).Pre-slaughter mortality cannot depict total welfare because it misses nonfatal effects.Mortality is only a lower-bound proxy of how many shrimp suffer negative welfare.Premature mortality is more appropriate as one indicator among many that a welfare reform was successful, rather than an end in itself.Pre-slaughter mortality data is limited and non-uniform.Reports should clarify whether mass die-off events were excluded from mortality estimates and if rates are based on intuition from experience or empirical studies.Box 1: Shrimp aquaculture terminologyThe terms 'shrimp' and 'prawn' are often used interchangeably. The two terms do not reliably track any phylogenetic differences between species. Here, we use only the term "shrimp", covering both shrimp and prawns. Note that members of the family Artemiidae are commonly referred to as "brine shrimp" but are not decapods and so are beyond the present scope.We opt for the use of Penaues vannamei over Litopenaeus vannamei (to which this species is often referred), due to recognition of the former but not the latter nomenclature by ITIS, WorMS, and the Food and Agriculture Organization of the United Nations (FAO) ASFIS List of Species for Fishery Statistics Purposes.The shrimp farming industry uses many terms usually associated with agriculture - for example, 'crops' for a group o...

]]>
Hannah McKay https://forum.effectivealtruism.org/posts/6Ku4KeCoRzoN4BiEd/pre-slaughter-mortality-of-farmed-shrimp Tue, 12 Mar 2024 14:59:11 +0000 EA - Pre-slaughter mortality of farmed shrimp by Hannah McKay Hannah McKay https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 35:20 no full 4
ydw55Fhtj3o8H7qoC EA - Writing about my job: Operations Specialist by Jess Smith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Writing about my job: Operations Specialist, published by Jess Smith on March 12, 2024 on The Effective Altruism Forum.In the spirit ofAaron Gertler's post about writing about your job on the forum, and to help promote thepeople and business ops roles 80k is currently hiring for, I decided to write about my job as an Operations Specialist at 80,000 Hours. (I'm currently spending ~35% of my time in a chief of staff role, but this post just focuses on the ops specialist part, which I spend the rest of my time on, and did full time for the first ~year I was at 80k).This post aims to give a picture of what my role looks like and how I approach it - ops roles are super varied and this post just aims to represent my experience, so you should take everything with a pinch of salt.This post might be especially useful for people who:Are interested in exploring ops roles (particularly the ones at 80k)Are fairly early in their careerI don't really touch on the impact I think I'm having in my role here, so if you're looking for this you might want to check out80k's operations management career review instead. I also really likedJonathan Michel's botec about the impact of his office manager role at Trajan House, even though this is a different kind of ops role to mine.My backgroundBefore joining 80k, I had a medium amount of experience with EA: I was involved in my uni's EA group, was on the committee of the One For the World chapter there, and read a bunch of EA things.During my undergrad, most of my time when I wasn't studying was spent playing badminton and volunteering on the badminton club committee. This helped me get my first full time role role:After graduating, I was elected as the Athletic Union President at my university - a full-time, student representative role which involved a bunch of ops-style work like marketing, policy writing, budgeting, and event organisation to support student sport in St Andrews. You can read about how I used this role to test my fit for ops roles, and what that was like, in an 80k newsletter I wrotehere.During my year as athletic union president, I applied to lots of ops (and some comms) roles at EA organisations, as well as some civil service roles. Around May 2022, I applied to the Ops Specialist role at 80k, and started the job in July 2022.What is my ops specialist role like?My ops specialist role is closer to the people ops role describedhere - involving responsibilities like maintaining our HR systems, running social events, and helping with hiring rounds. When I joined 80k, it was a fairly open question what my areas of responsibility would be - we experimented with a few different things as the year went on.The broad strokes of my role include identifying problems and trying to fix them, helping to run the behind-the-scenes system which makes life easier for other team members, and project managing larger events and projects.But, to get more concrete, here's a list of some of my ongoing responsibilities and projects that I've worked on:Ongoing responsibilities:Checking staff expenses and annual leaveOnboarding new staff and offboarding staff who leaveCoordinating with EV Ops on HR systemsHelping to write and project managing our quarterly supporter updateUpdating the 'about' pages of our websiteOrganising and promoting team socialsTroubleshooting problems, like "what scale makes sense for our feedback round?" or "what's going on with pensions?"Contributing to the ops team (like reviewing other team members' work or helping with quality assurance)(Previous) responding to emails which come in to our info@ inbox(Previous) organising food and logistics for team eventsProjects:Running our team retreat - finding a venue, organising logistics like transport and catering, choosing and facilitating social act...

]]>
Jess Smith https://forum.effectivealtruism.org/posts/ydw55Fhtj3o8H7qoC/writing-about-my-job-operations-specialist Tue, 12 Mar 2024 14:01:59 +0000 EA - Writing about my job: Operations Specialist by Jess Smith Jess Smith https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:12 no full 5
ExAupz9tdTj2s56GG EA - Stories from the origins of the animal welfare movement by Julia Wise Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stories from the origins of the animal welfare movement, published by Julia Wise on March 12, 2024 on The Effective Altruism Forum.This is aDraft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.This post is adapted from notes I made in 2015 while trying to figure out how EA compared to other movements.Society went from treating animals as objects to sometimes treating them as beings / moral patients, at least dogs and horses.A few lone voices (including Bentham) in the 1700s, then more interest in the 1800s, first bill introduced 1809, first bills passed in 1820s (Britain and Maine).Richard Martin "Humanity Dick" - Irish politician who got some success on the 1822 "Ill Treatment of Cattle Bill" after a series of failed anti-cruelty bills. His work was interrupted when he fled to France after losing his seat in Parliament, which meant he was no longer immune to being arrested for his gambling debts.First success with enforcement came from publicity stunt (in case about abuse of donkey, bringing the donkey into court) which caused coverage in media and popular songAnimal welfare work was initially thought of as largely for the benefit of human morality (it's bad for your soul to beat your horse) or to prevent disgust caused by witnessing suffering, not necessarily for the animals themselves.British movement had several false starts; failed legislation and "societies" which died out. Society for the Prevention of Cruelty to Animals took off in 1824 led by a minister and four members of parliament (including William Wilberforce, main leader of British abolition movement), but lost steam after an initial burst of fundraising. Office was closed and they met in coffee houses. Main staff member was jailed for the society's debts, another staff member continued working as a volunteer.Fortune turned when Princess Victoria (later Queen) and her mother decided they liked the organization, became the Royal Society for the Prevention of Cruelty to Animals. RSPCA. 1837 essay competition with equivalent of $1500 prize led to four essays being published as books.One man (Henry Bergh) imported the movement to NYC in the 1860s after hearing about it in London during his work as a diplomat. Pushed first animal abuse legislation through NY legislature and was its single-handed enforcer; got power to arrest and prosecute people despite not being an attorney. Founded the ASPCA (American Society for the Prevention of Cruelty to Animals), initially self-funded. Early on, a supporter died and left the society the equivalent of $2.8 million.Bergh did a lecture tour of the western US resulting in several offshoot societies.Got enough print coverage of his work that legislation spread to other states. Summary of his character based on interviews: "He was a cool, calm man. He did not love horses; he disliked dogs. Affection, then, was not the moving cause.He was a healthy, clean-living man, whose perfect self-control showed steady nerves that did not shrink sickeningly from sights of physical pain; therefore he was not moved by self-pity or hysterical sympathy….No warm, loving, tender, nervous nature could have borne to face it for an hour, and he faced and fought it for a lifetime. His coldness was his armor, and its protection was sorely needed."Widespread mockery of main figures as sentimental busybodies. Bergh was mocked as "the great meddler." Cartoon depicting him as overly caring about animals while there are people suffering - feels very parallel to some criticisms of EA.Welfare movements for children and animals were entwined both in Britain and US (American Humane was for both children and animals almost from the beginning). Early norm that both children and animals more or less belonged to their owners a...

]]>
Julia_Wise https://forum.effectivealtruism.org/posts/ExAupz9tdTj2s56GG/stories-from-the-origins-of-the-animal-welfare-movement Tue, 12 Mar 2024 12:44:50 +0000 EA - Stories from the origins of the animal welfare movement by Julia Wise Julia_Wise https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:23 no full 7
rauhoChXXD7Pe2HK6 EA - Announcing UnfinishedImpact: Give your abandoned projects a second chance by Jeroen De Ryck Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing UnfinishedImpact: Give your abandoned projects a second chance, published by Jeroen De Ryck on March 12, 2024 on The Effective Altruism Forum.What and why?You probably have a folder or document somewhere on your PC with a bunch of abandoned projects or project ideas that never ended up seeing the light of day. You might have developed a grand vision for this project and imagined how it can save or improve so many people's lives.Yet those ideas never materialized due to a lack of time, money, skills, network or the energy to push through. But you might have spent considerable resources on getting this project started and whilst it might not be worthwhile to continue to pursue this project, it seems like a waste to throw it all away.Introducing Unfinished Impact: a website where you can share your potentially impactful abandoned projects for other people to take over. This way, the impact of your project can still be achieved and the resources you've spent on it do not go to waste.How?You can share a project simply by clicking the corresponding button on the home page. I recommend sharing as much relevant information as possible whilst submitting your project. You leave some form of contact information as you're submitting your project. People can then contact you if they want to take over your project. Whether or not you transfer the project to the interested person is up to you to decide. After submission, the project needs to be approved before it's shown publicly.Suggestions to find someone to take over your projectYou're thinking about sharing your project and you want it out of the way quickly, but you also want it to succeed. Here are some things you can do that might help your project find someone to take it over:Give a clear and concise theory of change, and include references where you have them. Make sure no logical steps are missing. Also indicate gaps that you haven't been able to fill yourself, if they exist.Describe what your goal and method are. The person taking over the project needs to understand the idea you have in your head well and why you want to do it that way.Describe what you have already done for the project and what you think still needs to be done to have an MVP.Explain why you are sharing the project. It might be because you lacked a certain skill or knowledge or were stuck on a problem you couldn't solve. Explain in detail what the problem was, so someone who's reading your project knows what skills they should have.But I will finish this project someday! It's not abandoned, just archived!Will you, tho? Have you made a plan and have you dedicated time to it in the near future? Did you work on it in the last year? If the answer to these questions is "no", then you most likely won't finish this project someday, and you might as well share it.Feedback? Comment below!Thanks to @Bob Jacobs for the valuable feedback on the website and this postThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Jeroen De Ryck https://forum.effectivealtruism.org/posts/rauhoChXXD7Pe2HK6/announcing-unfinishedimpact-give-your-abandoned-projects-a Tue, 12 Mar 2024 10:42:15 +0000 EA - Announcing UnfinishedImpact: Give your abandoned projects a second chance by Jeroen De Ryck Jeroen De Ryck https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:46 no full 8
i5Xf2SNYNgM9NBDMR EA - Why are we not using upper-room ultraviolet germicidal irradiation (UVGI) more? by Max Görlitz Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why are we not using upper-room ultraviolet germicidal irradiation (UVGI) more?, published by Max Görlitz on March 12, 2024 on The Effective Altruism Forum.This is aDraft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.Commenting and feedback guidelines:Keep one and delete the rest (or write your own):I'm posting this to get it out there. I'd love to see comments that take the ideas forward, but criticism of my argument won't be as useful at this time.Please be aware that I wrote all of this in November 2022 and haven't engaged with the draft since. I have learnt much more about germicidal ultraviolet since and probably don't agree with some of the takes here anymore. I am posting it to get it out there instead of having it lying around in some Google doc forever.I am currently operating under the hypothesis that upper-room UVGI systems are efficacious, safe, and cost-effective for reducing the transmission of airborne diseases (E. A. Nardell 2021; E. Nardell et al. 2008; Abboushi et al. 2022; Shen et al. 2021). We have known about them since the 1940s. I was interested in digging into the history of research on upper-room UVGI to answer the question: If we have known for so long how well upper-room UVGI works, why are we not using it much more widely?SummaryThe points below are largely bad reasons not to use upper-room UVGI, which leads me to suspect that upper-room UVGI is underrated and that we should probably be implementing it more.Upper-room UVGI is an 80-year-old technology.The first epidemiological studies in the 1940s were very promising but attempts to replicate them were much less successful.The studies which did not show an effect likely had important design flaws.Around the same time that the epidemiological studies showing limited effects were published, other measures for combating tuberculosis were invented. Thus, upper-room UVGI became less of a priority.People were concerned about the safety of UVC light. These concerns are mostly unjustified today, but many people are still worried and extremely reluctant to implement upper-room UVGI.While some guidelines exist, there are no government standards for upper-room UVGI. The regulatory environment in the US is a mess, leaving manufacturers and consumers with a difficult market.The medical establishment has been overly skeptical of airborne disease transmission in the 20th century and partly until today. Technologies for combating airborne disease transmission were therefore overlooked.Early epidemiological studiesInitial studies on the effectiveness of upper-room UVGI and UVGI "light barriers" in the 1940s showed great promise (Sauer, Minsk, and Rosenstern 1942; Wells, Wells, and Wilder 1942). For instance, in the Wells trial during the 1941 Pennsylvania measles epidemic, 60% fewer susceptible children were infected in the classrooms with upper-room UVGI (seeAppendix 2 for an extensive spreadsheet).These promising results prompted further epidemiological studies in the 1940s and 50s, but those did not show a significant effect, and people felt disillusioned with the technology (Reed 2010).Kowalski, one of the leaders in modern UVGI research, claims that the MRC study (1954) played an essential role in the declining interest in upper-room UVGI (Kowalski 2009, 11). That study concluded: "There was no appreciable effect on the total sickness absence recorded in either the Infant or the Junior departments.[...] The effect of irradiation on total sickness absence is therefore small, and the results would not appear to justify wide use of irradiation as a hygienic measure for the control of infection in primary urban day schools." (Medical Research Council (Great Britain) 1954).One problem with the study was that while u...

]]>
Max Görlitz https://forum.effectivealtruism.org/posts/i5Xf2SNYNgM9NBDMR/why-are-we-not-using-upper-room-ultraviolet-germicidal Tue, 12 Mar 2024 10:06:49 +0000 EA - Why are we not using upper-room ultraviolet germicidal irradiation (UVGI) more? by Max Görlitz Max Görlitz https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 25:58 no full 9
KZLL8nQbZBkG9ubXi EA - Among the A.I. Doomsayers - The New Yorker by Agustín Covarrubias Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Among the A.I. Doomsayers - The New Yorker, published by Agustín Covarrubias on March 12, 2024 on The Effective Altruism Forum.The New Yorker just realized a feature article on AI Safety, rationalism, and effective altruism, and… it's surprisingly good? It seems honest, balanced, and even funny. It doesn't take a position about AI Safety, but IMO it paints it in a good way.Paul Crowley (mentioned in the article) did have some criticism, and put up a response in a post. Most of his concerns seem to be with how the article takes a focus on the people interviewed, rather than the ideas being discussed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Agustín Covarrubias https://forum.effectivealtruism.org/posts/KZLL8nQbZBkG9ubXi/among-the-a-i-doomsayers-the-new-yorker Tue, 12 Mar 2024 01:52:46 +0000 EA - Among the A.I. Doomsayers - The New Yorker by Agustín Covarrubias Agustín Covarrubias https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:50 no full 10
PvzdP6T5KQDSXWiSm EA - Why I care so much about Diversity, Equity and Inclusion in EA by Ulrik Horn Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I care so much about Diversity, Equity and Inclusion in EA, published by Ulrik Horn on March 13, 2024 on The Effective Altruism Forum."Dude!" Bob panted, horror written all over his face as he scrambled back up the driveway and towards the main road where we were waiting. His breaths came hard and fast. "He aimed a fucking shotgun at me!" Sweat had formed droplets across his forehead, maybe due to his sprint, or maybe due to the terror he must surely feel. I stood there, unsure if it was his crazy run or the scare that had him looking like this.Our road trip had turned into something out of a horror movie. Our car broke down in the middle of nowhere, leaving us stranded with a coolant leak. We found ourselves at the closest house, hoping for some water to replace the leaked coolant."Hey Ulrik, why don't you try to go back there and ask for water," Bob said, as if he hadn't just dodged a bullet. His suggestion hit me like a ton of bricks.Was he joking? He had just been threatened, and now he wanted me to look down the barrel of that shotgun? "But… You just said they almost shot you?" I couldn't hide my shock and fear."Come on, Ulrik… You're white!" I felt the ground shifting under my feet, as the lens through which I viewed the world was replaced with a new, much thicker and darker one, one that made everything look uncannily foreign to me.True story of my life, as far as I can recall (my memory might have exaggerated it a bit). That said, even if it was to be completely fabricated, it still does not detract from the points I will make below.The story is just one of many opportunities I have been privileged to have, getting a small peek into the lived experiences of my PoC, female, gay, and/or friends with disabilities. I am mentioning this story both to make this more interesting to read, but also so you can understand better why I feel like I feel about DEI.Let me give you another example of the kind of experiences I have had and the type of environments I am used to navigating: I used to work for a renewables consultancy in Bristol, UK. It was a super social workplace, lots of pub visits, and attending lots of parties with colleagues on the weekends. And I brought my female and PoC friends along to many of these events and it just felt really nice. My colleagues were so welcoming, tactful and respectful despite having lots of fun.Never, once, did an incident occur where it got awkward due to some "DEI-type incident". I felt completely safe bringing any of my friends or family there: I knew my friends would also feel safe and welcome, which made me feel safe and welcome too.If that sounds like magic to you - "how can so many different people get along so well?!?!" - then perhaps it is helpful to explain how I navigate in such diverse settings, mostly using an example of my whiteness. I think the way I act in diverse social settings might also let you understand why I have high expectations of others when it comes to "DEI behavior".When I am interacting with a person of color (PoC), I am aware that their entire life experience is probably littered with similar, if perhaps not as extreme, experiences as the one Bob had in the vignette, above. I can further imagine that such frequent and repeated experiences of how one does not belong, or is not qualified or whatever the feeling they derived was, has created certain, strong emotional associations.So my friend Bob from the story above would probably get nervous about walking up to a random house (e.g. as part of some AI policy canvassing initiative in a state with lax gun laws) or perhaps might feel some discomfort of attending an all-white event, especially if they suspect that some, maybe most of the attendants are actively engaging in online discussions about genetic enhancement of PoC in poor countri...

]]>
Ulrik Horn https://forum.effectivealtruism.org/posts/PvzdP6T5KQDSXWiSm/why-i-care-so-much-about-diversity-equity-and-inclusion-in-1 Wed, 13 Mar 2024 22:41:19 +0000 EA - Why I care so much about Diversity, Equity and Inclusion in EA by Ulrik Horn Ulrik Horn https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 14:05 no full 1
hp882mEMm6vZ24Mmi EA - China x AI Reference List by Saad Siddiqui Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: China x AI Reference List, published by Saad Siddiqui on March 13, 2024 on The Effective Altruism Forum.BackgroundThere are several China-focused AI reading lists / curricula out there (e.g.:AI governance & China: Reading list (2023),FHI Syllabus (2020),(Basic) Chinese AI Regulation Reading List (2023))They are either relatively brief or somewhat outdated. Our reading list aims to provide a more comprehensive set of key resources when it comes to learning about China, AI safety and policyWe incorporated readings from these reading lists where it felt relevantThis list is based off a community-generated set of readings that were used for a 6-week AI and China discussion group run by the China & Global Priorities Group in 2023You can access a version of the reference list on our website too.StructureThe list is designed as a longlist that can act as a starting point for folks looking to dive deeper into this topic and various sub-topics - it is not a snapshot of the 3 most important readings per topic areaThe entire list is broken down into key themesDomestic AI GovernanceInternational AI GovernanceKey actors and their views on AI risksAI InputsResources to followWe have added in commentary where we felt it would be useful to do so (e.g., we were made aware of potential factual inaccuracies or biased views)Within sections, sources are arranged roughly in order of relevance, not chronology. Sources earlier in a section are more foundational, while later ones are either primary sources that require more context to analyze or older reports/analysis. Sometimes we put related readings next to each other.Ways to get involvedFeel free to suggest additional readingsusing this form - we're doing some amount of vetting to prevent the list from ballooning out of controlJoin theChina & Global Priorities Group if you want to be notified about further discussion groups organizedCaveats around sources and structuresEpistemic status:This resource list was put together in a voluntary capacity by a group of non-Chinese folks with backgrounds in China Studies and professional work experience on China- and/or AI-related issues.We spent several hours on resource collection and sense-checked items based on their style, content and methodology. We do not necessarily endorse all of these works as "very good," but did exclude stuff where we could see that it is obviously low quality.There are many sub-topics where we struggled to find very high-quality material but we still included some publications to give interested readers a start.We expect that most of our audience will not be able to read Chinese easily or fluently, and as such we have provided many English sources. However, it's important to remember that gaining a deep and concrete understanding of this space is really hard even with Chinese language skills and lived experience in China, so readers without those skills and experiences should be cautious about forming very strong views based on the select few sources that are included here.Machine translation is useful but imperfect in many ways.Machine translation will not be able to tell you the significance of specific word choice, which potentially requires deeper knowledge of what terminology means in the broader ideological context of the party-state (this is especially true for official statements and documents).Moreover, official English versions of Chinese government documents sometimes differ from the Chinese version!What is Lost in Translation? Differences between Chinese Foreign Policy Statements and Their Official English Translations, Mokry, 2022China is not a monolith; sources you read that claim that 'China does X' should be treated with caution. Different actors within China have different aims and while it's true that the party-state ...

]]>
Saad Siddiqui https://forum.effectivealtruism.org/posts/hp882mEMm6vZ24Mmi/china-x-ai-reference-list Wed, 13 Mar 2024 22:25:30 +0000 EA - China x AI Reference List by Saad Siddiqui Saad Siddiqui https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:40 no full 2
BbzpDsfxu8Gzn3AD4 EA - Wild animal welfare? Stable totalitarianism? Predict which new EA cause area will go mainstream! by Jackson Wagner Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wild animal welfare? Stable totalitarianism? Predict which new EA cause area will go mainstream!, published by Jackson Wagner on March 13, 2024 on The Effective Altruism Forum.Long have I idly whiled away the hours browsing Manifold Markets, trading on trivialities like videogame review scores or NASA mission launch dates. It's fun, sure -- but I am a prediction market advocate, who believes that prediction markets have great potential to aggregate societally useful information and improve decision-making! I should stop fooling around, and instead put my Manifold $Mana to some socially-productive use!!So, I've decided to create twenty subsidized markets about new EA cause areas. Each one asks if the nascent cause area (like promoting climate geoengineering, or researching space governance) will receive $10,000,000+ from EA funders before the year 2030.My hope is that that these markets can help create common knowledge around the most promising up-and-coming "cause area candidates", and help spark conversations about the relative merits of each cause. If some causes are deemed likely-to-be-funded-by-2030, but little work is being done today, that could even be a good signal for you to start your own new project in the space!Without further ado, here are the markets:Animal WelfareWill farmed-invertebrate welfare (shrimp, insects, octopi, etc) get $10m+ from EA funders before 2030?Will wild-animal welfare interventions get $10m+ from EA funders before 2030?[embed most popular market]Global Health & DevelopmentWill alcohol, tobacco, & sugar taxation... ?Mental-health / subjective-wellbeing interventions in developing countries?Institutional improvementsApproval voting, quadratic funding, liquid democracy, and related democratic mechanisms?Georgism (aka land value taxes)?Charter Cities / Affinity Cities / Network States?Investing(Note that the resolution criteria on these markets is different than for the other questions, since investments are different from grants.)Will the Patient Philanthropy Fund grow to $10m+ before 2030?Will "impact markets" distribute more than $10m of grant funding before 2030?X-RiskCivilizational bunkers?Climate geoengineering?Preventing stable totalitarianism?Preventing S-risks?Artificial IntelligenceMass-movement political advocacy for AI regulation (ie, "PauseAI")?Mitigation of AI propaganda / "botpocalypse" impacts?TranshumanismCryonics & brain-emulation research?Human intelligence augmentation / embryo selection?Space governance / space colonization?Moral philosophyResearch into digital sentience or the nature of consciousness?Interventions primarily motivated by anthropic reasoning, acausal trade with parallel universes, alien civilizations, simulation arguments, etc?I encourage you to trade on these markets, comment on them, and boost/share them -- put your Manifold mana to a good use by trying to predict the future trajectory of the EA movement! Here is one final market I created, asking which three of the cause areas above will receive the most support between now and 2030.Resolution details & other thoughtsThe resolution criteria for most of these questions involves looking at publicly-available grantmaking documentation (like this Openphil website, for example), adding up all the grants that I believe qualify as going towards the stated cause area, and seeing if the grand total exceeds ten million dollars.Since I'm specifically interested in how the EA movement will grow and change over time, I will only be counting money from "EA funders" -- stuff like OpenPhil, LTFF, SFF, Longview Philanthropy, Founders Fund, GiveWell, etc, will count for this, while money from "EA-adjacent" sources (like, say, Patrick Collison, Yuri Milner, the Bill & Melinda Gates Foundation, Elon Musk, Vitalik Buterin, Peter T...

]]>
Jackson Wagner https://forum.effectivealtruism.org/posts/BbzpDsfxu8Gzn3AD4/wild-animal-welfare-stable-totalitarianism-predict-which-new Wed, 13 Mar 2024 19:31:51 +0000 EA - Wild animal welfare? Stable totalitarianism? Predict which new EA cause area will go mainstream! by Jackson Wagner Jackson Wagner https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:44 no full 3
FKWYNcWixfuJK3D7z EA - #182 - Comparing the welfare of humans, chickens, pigs, octopuses, bees, and more (Bob Fischer on the 80,000 Hours Podcast) by 80000 Hours Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: #182 - Comparing the welfare of humans, chickens, pigs, octopuses, bees, and more (Bob Fischer on the 80,000 Hours Podcast), published by 80000 Hours on March 13, 2024 on The Effective Altruism Forum.We just published an interview: Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more. Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.Episode summary[One] thing is just to spend time thinking about the kinds of things animals can do and what their lives are like. Just how hard a chicken will work to get to a nest box before she lays an egg, the amount of labour she's willing to go through to do that, to think about how important that is to her.And to realise that we can quantify that, and see how much they care, or to see that they get stressed out when fellow chickens are threatened and that they seem to have some sympathy for conspecifics.Those kinds of things make me say there is something in there that is recognisable to me as another individual, with desires and preferences and a vantage point on the world, who wants things to go a certain way and is frustrated and upset when they don't. And recognising the individuality, the perspective of nonhuman animals, for me, really challenges my tendency to not take them as seriously as I think I ought to, all things considered.Bob FischerIn today's episode, host Luisa Rodriguez speaks to Bob Fischer - senior research manager at Rethink Priorities and the director of the Society for the Study of Ethics and Animals - about Rethink Priorities's Moral Weight Project.They cover:The methods used to assess the welfare ranges and capacities for pleasure and pain of chickens, pigs, octopuses, bees, and other animals - and the limitations of that approach.Concrete examples of how someone might use the estimated moral weights to compare the benefits of animal vs human interventions.The results that most surprised Bob.Why the team used a hedonic theory of welfare to inform the project, and what non-hedonic theories of welfare might bring to the table.Thought experiments like Tortured Tim that test different philosophical assumptions about welfare.Confronting our own biases when estimating animal mental capacities and moral worth.The limitations of using neuron counts as a proxy for moral weights.How different types of risk aversion, like avoiding worst-case scenarios, could impact cause prioritisation.And plenty more.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy MooreHighlightsUsing neuron counts as a proxy for sentienceLuisa Rodriguez: A colleague of yours at Rethink Priorities has written this report on why neuron counts aren't actually a good proxy for what we care about here. Can you give a quick summary of why they think that?Bob Fischer: Sure. There are two things to say. One is that it isn't totally crazy to use neuron counts. And one way of seeing why you might think it's not totally crazy is to think about the kinds of proxies that economists have used when trying to estimate human welfare. Economists have for a long time used income as a proxy for human welfare.You might say that we know that there are all these ways in which that fails as a proxy - and the right response from the economist is something like, do you have anything better? Where there's actually data, and where we can answer at least some of these high-level questions that we care about? Or at least make progress on the high-level questions that we care about relative to baseline?And I think that way of thinking about what neuron-count-based proxies ar...

]]>
80000_Hours https://forum.effectivealtruism.org/posts/FKWYNcWixfuJK3D7z/182-comparing-the-welfare-of-humans-chickens-pigs-octopuses Wed, 13 Mar 2024 17:54:10 +0000 EA - #182 - Comparing the welfare of humans, chickens, pigs, octopuses, bees, and more (Bob Fischer on the 80,000 Hours Podcast) by 80000 Hours 80000_Hours https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 24:53 no full 4
tX2MqRfZtz7TqYCQi EA - New video: You're richer than you realise by GraceAdams Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New video: You're richer than you realise, published by GraceAdams on March 13, 2024 on The Effective Altruism Forum.This might be one of the best pieces of introductory content to the concepts of effective giving that GWWC has produced in recent years!I hit the streets of London to engage with everyday people about their views on charity, giving back, and where they thought they stood on the global income scale.This video was made to engage people with some of the core concepts of income inequality and charity effectiveness in the hope of getting more people interested in giving effectively.If you enjoy it - I'd really appreciate a like, comment or share on YouTube to help us reach more people!There's a blog post and transcript of the video available too.Big thanks to Suzy Sheperd for directing and editing this project and to Julian Jamison and Habiba Banu for being interviewed!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
GraceAdams https://forum.effectivealtruism.org/posts/tX2MqRfZtz7TqYCQi/new-video-you-re-richer-than-you-realise Wed, 13 Mar 2024 02:41:55 +0000 EA - New video: You're richer than you realise by GraceAdams GraceAdams https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:01 no full 7
bbMMTFa3HN2SPApLC EA - There are no massive differences in impact between individuals by Sarah Weiler Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There are no massive differences in impact between individuals, published by Sarah Weiler on March 14, 2024 on The Effective Altruism Forum.Or: Why aiming for the tail-end in an imaginary social impact distribution is not the most effective way to do good in the world"It is very easy to overestimate the importance of our own achievements in comparison with what we owe others."attributed to Dietrich Bonhoeffer, quoted in Tomasik 2014(2017)SummaryIn this essay, I argue that it is not useful to think about social impact from an individualist standpoint.I claim that there are no massive differences in impact between individual interventions, individual organisations, and individual people, because impact is dispersed acrossall the actors that contribute to the outcomes before any individual action is taken,all the actors that contribute to the outcomes after any individual action is taken, andall the actors that shape the taking of any individual action in the first place.I raise some concerns around adverse effects of thinking about impact as an attribute that follows a power law distribution and that can be apportioned to individual agents:Such a narrative discourages actions and strategies that I consider highly important, including efforts to maintain and strengthen healthy communities;Such a narrative may encourage disregard for common-sense virtues and moral rules;Such a narrative may negatively affect attitudes and behaviours among elites (who aim for extremely high impact) as well as common people (who see no path to having any meaningful impact); andSuch a narrative may disrupt basic notions of moral equality and encourage a differential valuation of human lives in accordance with the impact potential an individual supposedly holds.I then reflect on the sensibility and usefulness of apportioning impact to individual people and interventions in the first place, and I offer a few alternative perspectives to guide our efforts to do good effectively.In the beginning, I give some background on the origin of this essay, and in the end, I list a number of caveats, disclaimers, and uncertainties to paint a fuller picture of my own thinking on the topic. I highly welcome any feedback in response to the essay, and would also be happy to have a longer conversation about any or all of the ideas presented - please do not hesitate to reach out in case you would like to engage in greater depth than a mere Forum comment :)!ContextI have developed and refined the ideas in the following paragraphs at least since May 2022 - my first notes specifically on the topic were taken after I listened to Will MacAskill talk about "high-impact opportunities" at the opening session of my first EA Global, London 2022. My thoughts on the topic were mainly sparked by interactions with the effective altruism community (EA), either in direct conversations or through things that I read and listened to over the last few years.However, I have encountered these arguments outside EA as well, among activists, political strategists, and "regular folks" (colleagues, friends, family). My journal contains many scattered notes, attesting to my discomfort and frustration with the - in my view, misguided - belief that a few individuals can (and should) have massive amounts of influence and impact by acting strategically.This text is an attempt to pull these notes together, giving a clear structure to the opposition I feel and turning it into a coherent argument that can be shared with and critiqued by others.Impact follows a power law distribution: The argument as I understand it"[T]he cost-effectiveness distributions of the most effective interventions and policies in education, health and climate change, are close to power-laws [...] the top intervention is 2 or almost 3 orders of magni...

]]>
Sarah Weiler https://forum.effectivealtruism.org/posts/bbMMTFa3HN2SPApLC/there-are-no-massive-differences-in-impact-between Thu, 14 Mar 2024 21:02:24 +0000 EA - There are no massive differences in impact between individuals by Sarah Weiler Sarah Weiler https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 32:00 no full 2
ujRZBW8LEt552zgWs EA - Hire the CEA Online team for software development & consulting by Will Howard Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hire the CEA Online team for software development & consulting, published by Will Howard on March 14, 2024 on The Effective Altruism Forum.TL;DR: We are offering (paid) software consulting and contracting services to impactful projects. If you think we might be a good fit for your project please fill in the form here and we'll get back to you within a couple of days.The CEA Online team has recently taken on a few (3) projects for external organisations, in addition to our usual work which consists of developing the EA Forum[1]. We think these external projects have gone well, so we're trying out offering these services publicly.We are mainly[2] looking to help with projects that we think have a large potential for impact. As we're in the early stages of exploring this we are quite open in the sorts of projects we would consider, but just to give you some concrete ideas:Building a website that is more than a simple landing page. Think of the AMF website, which in spite of its deceptive appearance actually makes a lot of data available in a very transparent way. Or the Rethink Priorities Cross-Cause Cost-Effectiveness Model.Building an internal tool to streamline some common process in your organisation.Building a web platform that requires handling user data. E.g. a site that matches volunteers with relevant tasks. Or, if you can imagine it, some kind of web forum.If you have a project where you're not sure if we'd be a good fit, please do reach out to us anyway. You will be providing a service to us just by telling us about it, as we want to understand what the needs in the community are. We might also be able to point you in the right direction or set you up with another contractor even if we aren't suitable ourselves.In the projects we have completed so far, the thing that has been suggestive of this work being very worthwhile is that in most cases the project would have progressed much more slowly without us, or potentially not happened at all. This was either due to the people involved already having too many demands on their time, or having run into problems with their existing development process.We're excited to do more in the hope that having this available as a reliable service will push people to do software-dependent projects that they wouldn't have done otherwise, or execute them to a higher quality (and faster).What we could do for youWe're open to anything from "high-level consulting" to "taking on a project entirely". In the "high-level consulting" case we would do some initial setup and give you advice on how to proceed, so you could then take over the project or continue with other contractors (who we might be able to set you up with). In the "taking on a project entirely" case we would be the contractors and would write all the code.The projects we have done so far have been more on the taking-on-entirely end of the spectrum.We also have an excellent designer, Agnes (@agnestenlund), and the capacity[3] to do product development work, such as conducting user interviews to gather requirements and then designing a solution based on those. This might be especially valuable if you have a lot of demands on your time and want to be able to hand off a project to a safe pair of hands.As mentioned above, we are hoping to help with projects that are high-impact, so we'll decide what to work on based on our fit for the project, as well the financial and impact cases. As such, these are some characteristics that make a project more likely to be a good fit (non-exhaustive):Being motivated by EA principles and/or heavily embedded in the EA ecosystem.Cases where your organisation is at low capacity, such that you have projects that you think would be valuable to do but don't have the time to commit to yourselves.Cases where we can help you navigate...

]]>
Will Howard https://forum.effectivealtruism.org/posts/ujRZBW8LEt552zgWs/hire-the-cea-online-team-for-software-development-and Thu, 14 Mar 2024 19:13:05 +0000 EA - Hire the CEA Online team for software development & consulting by Will Howard Will Howard https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 06:09 no full 5
7sNQnB6uGJyxpwbeS EA - And the capybara suffer what they must? [the ethics of reintroducing predators] by tobytrem Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: And the capybara suffer what they must? [the ethics of reintroducing predators], published by tobytrem on March 14, 2024 on The Effective Altruism Forum.This is aDraft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.Commenting and feedback guidelines:This is an article, written over a couple months in late 2022, which ended up not being published. I wouldn't have published it without the nudge of Draft Amnesty Week, because I'm not inclined to redraft it, and I had to redact a name for someone who didn't want to be mentioned without looking over the draft. Fire away! (But be nice, as usual)Jaguars reenter IberáGreen-winged macaws that have grown up in captivity are too weak and naive to survive in the wild. In 2015, the conservation group Rewild Argentina released their first batch of seven macaws into Iberá National Park. They had to recapture the birds the next day[1]. Iberá is a large wetland in Argentina's Corrientes province. The macaws quickly became stuck in the sticky flooded ground, unable to take off. After a rest back in captivity, the birds were re-released.Within 5 days, two of the birds, whose lifespan in captivity is 60-80 years, were killed by wildcats.After this incident, Rewild Argentina hired a trainer to teach them to avoid predators. In the training drill, the trainer encourages a cat or a falcon to attack an embalmed macaw, while macaw distress calls are played through a speaker. Next time they are released, the macaws aren't quite so naive.Nearby, in El Impenetrable Park, conservationists from the same organisation raise and train predators that they want the macaws to avoid. Rewild Argentina plans to reintroduce species that were killed or driven out of Iberá over the past century by cattle ranching and over-hunting - including the jaguar. Legally, the group couldn't import the jaguar from neighbouring countries, so they had to produce their own.Their first group came from Tania, a female jaguar from a local zoo and Quramta, a wild jaguar. Finding Quaramta was a stroke of luck.In El Impenetrable, the jaguar aren't locally extinct, as they were in Iberá, but they are extremely rare. Quaramta was located after he left a single footprint by a river bank. Quaramta and Tania's cubs were raised in a thirty hectare enclosure, out of the reach of humans. Sebastián Di Martino, the conservation director, told me that if the cubs were raised by humans, then they would seek humans out when they were hungry.To train them to hunt for themselves, conservationists captured and released live prey into the enclosure, including nine-banded armadillo, caiman aligator, feral pigs and capybara. The training was a success. As of this year, one of Qaramta and Tania's cubs, Arami[2], has given birth in the wild.Rewild Argentina's project is hard. It involves legislative wrangling with governments, an ongoing campaign to ingratiate the locals, and after all that, the deaths of their charges, sometimes at the hands of each other. Why are they doing it?The last few centuries of land use drastically changed the ecosystem of Iberá. Cattle ranchers routinely burned down vegetation to make room for their cows, and locals hunted jaguar and other animals for skins to sell to wealthy europeans. Among the species driven to local extinction were giant anteaters which grew up to 8 feet long; bristly, pig-like collared peccaries; and the aforementioned green-winged macaws. Now, the land is occupied by capybara, previously the prey of the jaguar.Presently, they have nothing to fear. Di Martino told me that "if you go out, the capybara are grazing by hundreds all day. They are not afraid of anything. All they do is eat, and eat".The late Doug Tompkins[3] and his wife Kristine Tompkins use their philant...

]]>
tobytrem https://forum.effectivealtruism.org/posts/7sNQnB6uGJyxpwbeS/and-the-capybara-suffer-what-they-must-the-ethics-of Thu, 14 Mar 2024 18:20:42 +0000 EA - And the capybara suffer what they must? [the ethics of reintroducing predators] by tobytrem tobytrem https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 37:44 no full 7
q2PwXNsXsfDYkxeHb EA - AIM (CE) new program: Founding to Give. Apply now to launch a high-growth company! by CE Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AIM (CE) new program: Founding to Give. Apply now to launch a high-growth company!, published by CE on March 14, 2024 on The Effective Altruism Forum.TLDR: This post introduces you to Ambitious Impact's (Charity Entrepreneurship) new program:Founding to Give. It is a pre-incubator program that will help you launch a high-growth company to donate to high-impact charities. You can apply via thisjoint application form.Why a Founding-to-Give Program?Our analysis reveals that pursuing this career path could yield an average donation potential of $1M USD per individual annually, aligning with findings fromsimilaranalyses. Comparable to the nature of entrepreneurial ventures, the distribution of outcomes is heavily skewed, yet even the median donation capacity of $130K per year situates this career choice within the same range as other impactful roles. Moreover, a successful venture in this direction can have other benefits like enhancing funding diversity and mitigating the mid-stage funding gap, topics we will delve into in a subsequent section.A more in-depth exploration of our methodology can be found inthe report.What is Founding to Give pre-incubator?Founding to Give is a new pre-incubator program run byAmbitious Impact (previously Charity Entrepreneurship). It aims to help you launch a high-growth company to donate to high-impact charities.At AIM Founding to Give pre-incubator, we'll help you with:Finding a value-aligned co-founder with a complementary skillset and personality.Identifying and testing an exceptional business idea.Building a strong business pitch.Crafting a plan for your next steps.We want you to go from a simple idea to a strong co-founding team that can pitch to the best incubators or investors in the world.The program will run January-March 2025, and you can apply by April 14, 2024, via ourjoint application form below:Orsign up here to join us for a special event with the Founding to Give Program Managers on March 26, 2024, 6 PM GMT, to learn more and ask any questions you may have about this program.What does the program offerWe will select a cohort of exceptionally talented and value-aligned potential co-founders with our highly effective vetting process, which has helped us launch over30 successful nonprofits.In the first part of this intensive, cost-covered program, we will help you find or refine the best idea, build your skills and confidence, and match with the best person to launch this new company with.In the second part of the program, we will provide you with two additional months of funding to cover living and office costs. This is the time when you will refine your pitch and business plan and have a chance to apply to incubators or raise seed capital.We will connect you to mentors, fellow entrepreneurs, and experienced investors who will support you with advice and best practices.What do we ask forWe don't care about making money for ourselves. We care about making the biggest difference in the world. That is why we do not take any equity, ask for board seats, or hamstring your company's growth in any other way. Instead, we ask that participants commit to donating a minimum of 50% of their personal exit earnings above $1m to effective charities, similar to what the signatories of theFounders Pledge or theGiving Pledge are doing.Many established, highly impactful charities (in particular global health charities) havesignificant room to absorb more funding effectively. New effective charities can havesignificant funding gaps, too, which may prevent them from scaling up and improving the lives of millions. That is why the funds you will raise could have a huge impact.Is this a good fit for youIf you're excited by the entrepreneurial career path, look to find or refine your business idea, if you have not...

]]>
CE https://forum.effectivealtruism.org/posts/q2PwXNsXsfDYkxeHb/aim-ce-new-program-founding-to-give-apply-now-to-launch-a Thu, 14 Mar 2024 13:11:42 +0000 EA - AIM (CE) new program: Founding to Give. Apply now to launch a high-growth company! by CE CE https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:10 no full 9
BHCjijbDnHJCkGB9n EA - Two Concrete Ways to Help Feeder Rodents by Hazo Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two Concrete Ways to Help Feeder Rodents, published by Hazo on March 14, 2024 on The Effective Altruism Forum.Feeder rodents are rodents that are fed to pet reptiles, mainly snakes. After many years of consolidation and growth, feeder rodent farming has turned into an invisible form of factory farming. Indeed, we estimate there are 200-650 million feeder rodents produced globally each year.In 2019, saulius wrote anarticle estimating the total number of feeder mice, and bringing attention to some of their welfare issues. Here, we build on salius' work through additional research, including conversations with members of the feeder rodent industry.We provide a new estimate of the size and scope of the industry, an overview of how feeder rodents are farmed, the market structure of the global feeder rodent trade, the major customers and farming operations, and welfare considerations based on our interviews and a site visit to a small rodent farm.Finally, we discuss two concrete ways to help the hundreds of millions of rodents farmed each year: public pressure campaigns against zoos, and creating sausages that could serve as an alternative to whole animal feeding.Industry OverviewFeeder Rodent PopulationWeestimate that there are 200-650 million feeder rodents produced globally each year. Of these rodents, we estimate 150-500 million are mice and 28-120 million are rats. There is also a small amount of guinea pigs farmed each year, fed to the largest of snakes, birds of prey, and felines, but we expect this number is very low relative to the total number of rodents. This estimate is congruent with salius' previous estimates of 85 million to 2.1 billion. The estimate also aligns with areport in the Independent which pegged the number at 167 million feeder rodents sold in the US in 1999, which was well before the large Chinese farms entered the market, though it is unclear how the author reached his conclusion.Rodent FarmingRodents are farmed in tubs that are placed inrodent racks. Depending on the size of the operation, each tub will include male and female breeders in a ratio of roughly 4 to 6 females to each male, and some number of litters of baby rodents. In larger operations, it is not uncommon for one tub to have 10 male, 60 female breeders, and many baby rodents. Each tub is typically lined with liquid absorbent bedding, usually wood shavings or paper strips. Rodent food is generally supplied in the form offormulated pellets that sit on wiring on top of the tub opening, and water is provided either via gravity-fed bottles or automated watering systems that run throughout the rack.Each mouse will generally have 5-10 litters per year and around 3-20 children per litter. Rats can have up to 8 litters per year and average 8 children per litter. These ranges can vary widely based on environmental conditions and evolved changes to the breeding line. Some operations will selectively breed for higher litter size or other desired traits and will cull breeder females when they become unproductive. Children are "pulled" or "harvested" from the tubs, often pre-wean, and killed.Weaned children are often placed in a separate tub where they grow to the desired size before being killed. Farming large numbers of rodents can thus be quite concentrated, with small buildings able to produce millions of rodents per year. For example, the photo below shows Mice Direct's mouse farm in Georgia, which you can also see a video ofhere.Replacement breeders are taken directly from the colony. The rodent population is therefore self-propagating, meaning that genetics are not as heavily optimized as they are in other areas of animal agriculture. Colonies of rodents can collapse occasionally, typically as a result of disease. In these circumstances, which one industry professional ...

]]>
Hazo https://forum.effectivealtruism.org/posts/BHCjijbDnHJCkGB9n/two-concrete-ways-to-help-feeder-rodents Thu, 14 Mar 2024 08:24:18 +0000 EA - Two Concrete Ways to Help Feeder Rodents by Hazo Hazo https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 32:39 no full 10
GZBu8x4ReXwLgF4AY EA - GiveWell is hiring Research Analysts! by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell is hiring Research Analysts!, published by GiveWell on March 14, 2024 on The Effective Altruism Forum.We're hiring Research Analysts to support our research team and ensure that the work we produce is accurate and high quality. This is an entry-level role with no experience requirements. Please apply if you're interested, and consider sharing this post so interested folks in your network can take a look!The position is remote-eligible and will be compensated at $95,900-$105,800 depending on location (international salaries will be determined on a case-by-case basis). For more details, see the job description and FAQ.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
GiveWell https://forum.effectivealtruism.org/posts/GZBu8x4ReXwLgF4AY/givewell-is-hiring-research-analysts Thu, 14 Mar 2024 04:58:19 +0000 EA - GiveWell is hiring Research Analysts! by GiveWell GiveWell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:51 no full 12
tXu7KCSePFkgbCLZq EA - [Applications open] Summer Internship with CEA's University Groups Team! by Joris P Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Applications open] Summer Internship with CEA's University Groups Team!, published by Joris P on March 15, 2024 on The Effective Altruism Forum.TLDRApplications for CEA's University Groups Teamsummer internship have opened!Dates: flexible, during the Northern Hemisphere summerApplication deadline: Sunday, March 31Learn more & applyhere!What?CEA's University Groups Team is running a paid internship program! During the internship you will work on a meta-EA project, receiving mentorship and coaching from CEA staff. We havea list with some project ideas from previous years, but also encourage you to consider others you'd like to run.This is your opportunity to think big and see what it's like to work on meta-EA projects full-time!Applications are due Sunday, March 31 at 11:59pmAnywhere on Earth.Why?Test out different aspects of meta-EA work as a potential career pathReceive coaching and mentorship through CEAReceive a competitive wage for part-time or full-time work during your breakBe considered for extended work with CEAWho?You might be a good fit for the internship if you are:A university group organizer who is interested in testing out community building and/or EA entrepreneurial projects as a career pathHighly organized, reliable, and independentKnowledgeable about EA and eager to learn moreMake sure to read more and applyhere!More infoIf you have any questions, including any related to whether you'd be a good fit, reach out to us at unigroups [at] centreforeffectivealtruism [dot] org.Learn more & applyhere!Initial applications are due Sunday, March 31 at 11:59pmAnywhere on Earth.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Joris P https://forum.effectivealtruism.org/posts/tXu7KCSePFkgbCLZq/applications-open-summer-internship-with-cea-s-university Fri, 15 Mar 2024 19:23:35 +0000 EA - [Applications open] Summer Internship with CEA's University Groups Team! by Joris P Joris P https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:00 no full 1
Rf6T9MmqqDhH5oTvA EA - I'm glad I joined an experienced, 20+ person organization by michel Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm glad I joined an experienced, 20+ person organization, published by michel on March 15, 2024 on The Effective Altruism Forum.This is aDraft Amnesty Week draft. It may not be polished up to my usual standards.I originally started this post for the EA forum's career week last year, but I missed the deadline. I've used Draft Amnesty Week as a nudge to fix up a few bullets and am just sharing what I got.In which: I tentatively conclude I made the right choice by joining CEA instead of doing independent alignment research or starting my own EA community building project.In December and January last year, I spent a lot of time thinking about what my next career move should be. I was debating roughly four choices:Joining the CEA Events TeamBeginning independent research in AI strategy and governanceSupporting early stage (relatively scrappy) AI safety field-building effortsStarting an EA community or infrastructure building project[1]I decided to join the CEA events team, and I'm glad I did. I'm moderately sure this was the right choice in hindsight (maybe 60%), but counterfactuals are hard and who knows, maybe one of the other paths would have proved even betterHere are some benefits from CEA that I think would have been harder for me to get on the other paths.I get extended contact with - and feedback from - very competent peopleExample: I helped organize the Meta Coordination Forum and worked closely with Max Dalton and Sophie Thomson as a result. I respect both of them a lot and they both regularly gave me substantive feedback on my idea generation, emails, docs, etc.I learn a lot of small but, in aggregate, important things that would be more effortful to learn on my ownExamples: How to organize a slack workspace, how to communicate efficiently, when and how to engage with lawyers, how to utilize virtual assistants, how to build a good team culture, how to write a GDoc that people can easily skim, when to leave comments and how to do so quickly, how to use decision making tools like BIRD, how to be realistic about impact evaluations, etc.I have a support systemExample: I've been dealing with post-concussion symptoms for the past year, and having private healthcare has helped me address those symptoms.Example: Last year I was catastrophizing about a project I was leading on. After telling my manager about how anxious I had been about the project, we met early that week and checked in on the status of all the different work streams and clarified next steps. By the end of the week I felt much better.I think I have a more realistic model of how organizations, in general, work. I bet this helps me predict other orgs behavior and engage with them productively. It would probably also help me start my own org.Example: If I want Open Phil to do X, it's become clear to me that I should probably think about who at OP is most directly responsible for X, write up the case for X in an easy to skim way with a lot of reasoning transparency, and then send that doc to the person and express a willingness to meet to talk more about it.And all the while I should be nice and humble, because there's probably a lot of behind the scenes stuff I don't know about. And the people I want to change the behavior of are probably very busy and have a ton of daily execution work to do that makes it hard for them to zoom out to the level I'm likely asking them toExample: I better understand the time/overhead-costs to making certain info transparent and doing public comms well, so I have more realistic expectations of other orgs.Example: If I were to start my own org, I would have a better sense of how to set a vision, how to ship MVPs and test hypotheses, as well as more intuitive sense of when things are going well vs. poorly.If I want to later work at a non-EA org, my expe...

]]>
michel https://forum.effectivealtruism.org/posts/Rf6T9MmqqDhH5oTvA/i-m-glad-i-joined-an-experienced-20-person-organization Fri, 15 Mar 2024 19:01:38 +0000 EA - I'm glad I joined an experienced, 20+ person organization by michel michel https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:39 no full 2
MWSwSXNmsSBaEKtKw EA - Maternal Health Initiative is Shutting Down by Ben Williamson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Maternal Health Initiative is Shutting Down, published by Ben Williamson on March 15, 2024 on The Effective Altruism Forum.Maternal Health Initiative (MHI) was founded out of Charity Entrepreneurship (AIM)'s 2022 Incubation Program and has since piloted two interventions integrating postpartum (post-birth) contraceptive counselling into routine care appointments in Ghana. We concluded this pilot work in December 2023.A stronger understanding of the context and impact of postpartum family planning work, on the back of our pilot results, has led us to conclude that our intervention is not among the most cost-effective interventions available. We've therefore decided to shut down and redirect our funding to other organisations.This article summarises MHI's work, our assessment of the value of postpartum family planning programming, and our decision to shut down MHI as an organisation in light of our results. We also share some lessons learned. An in-depth report expanding on the same themesis available on our website.We encourage you to skip to the sections that are of greatest interest:For people interested in the practicalities of development work, we recommend 'MHI: An Overview of Our Work'.For those interested in family planning programming, we recommend 'Pilot: Results', 'Why We No Longer Believe Postpartum Family Planning Is Among The Most Cost-Effective Interventions', and 'Broader Thoughts on Family Planning'.Finally, for those interested in broader lessons around entrepreneurship and organisation-building, we recommend 'Choosing to Shut Down' and 'Lessons'.Why we chose to pursue postpartum family planningWhy family planning?Pregnancy-related health outcomes are a leading cause of preventable death among both mothers and children. In 2017, almost 300,000 women and girls died due to either pregnancy or childbirth (WHO, 2017). Cleland et al. (2006) estimate that comprehensive access to contraception could avert more than 30% of maternal deaths and 10% of child mortality globally.Contraceptive access provides a wide range of other potential benefits, the most significant of which may be increasing reproductive autonomy for women who want to space or limit births and currently have limited options for doing so.Why postpartum (post-birth)?Postpartum family planning (PPFP) - that is, integrating family planning guidance into postnatal care and/or child immunisation appointments- has been identified as an effective way of increasing contraceptive uptake and reducing unmet need (Wayessa et al. (2020);Saeed et al. (2008);Tran et al. (2020);Tran et al. (2019);Pearson et al. (2020);Dulli et al. (2016).The maternal and infant mortality risks from short birth spacing make the postpartum period a potential point of particular value from increased contraceptive access. Demographic Health Survey (DHS) analysis suggests an 18% increase in neonatal mortality, 21% increase in child mortality, and 32% increase in mortality risk from births that occur within two years of a prior pregnancy (Kozuki and Walker's2013; Conde-Agudelo et al.2007).While it is often an official policy that family planning counselling should be included in postnatal care (Ghana Health Service,2014), the consistency and quality of family planning services in the postpartum period vary in practice (Morhe et al. 2017).MHI: An overview of our workCharity Entrepreneurship (AIM) recommended postpartum family planning as part of the 2022 Incubation Program through which MHI was founded. As such, MHI has had an explicit focus on postpartum family planning work since its beginning. We spent our first few months interviewing a few dozen experts, getting up to speed with research in the field, and selecting priority target countries. Based on this work, we visited Sierra Leone and Ghana in...

]]>
Ben Williamson https://forum.effectivealtruism.org/posts/MWSwSXNmsSBaEKtKw/maternal-health-initiative-is-shutting-down Fri, 15 Mar 2024 17:40:19 +0000 EA - Maternal Health Initiative is Shutting Down by Ben Williamson Ben Williamson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 33:52 no full 3
AD8QchabkrygXkdgm EA - We Did It! - Victory for Octopus in Washington State by Tessa @ ALI Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We Did It! - Victory for Octopus in Washington State, published by Tessa @ ALI on March 15, 2024 on The Effective Altruism Forum.In 2022, Aquatic Life Institute (ALI) led the charge in Banding Together to Ban Octopus Farming. In 2024, we are ecstatic to see these efforts come to fruition in Washington State.This landmark achievement underscores our collective commitment to rejecting the introduction of additional animals into the seafood system and positions Washington State as a true pioneer in aquatic animal welfare legislation. In light of this success, ALI is joining forces with various organizations to advocate for similar bans across the United States and utilizing these monumental examples as leverage in continuous European endeavors.2022Aquatic Life Institute (ALI) and members of the Aquatic Animal Alliance (AAA) comment on the Environmental Impact of Nueva Pescanova before the Government of the Canary Islands: General Directorate of Fisheries and the General Directorate for the Fight against Climate Change and the Environment.Allowing this industrial octopus farm to operate could result in serious bio security and biophysical risks with regard to effluents being produced from this facility and discharged to surrounding waterways. There were many issues associated with the information provided by Nueva Pescanova as it relates to the environmental impacts of the proposed project, which we addressed in detail.Through the launch of Aquatic Life Institute's Octopus Farming Ban Campaign, we exposed the dangers ofNueva Pescanova's commercial octopus farm in Gran Canaria, as well as an octopus farm inYucatan, Mexico, masquerading as a research facility (Hiding in Plain Sight).2023If permitted to operate,just one farm could potentially produce 1 million octopuses each year. In an attempt to dissuade future development of this unsustainable and cruel farming endeavor, ALI pushed initiatives via our seafood certification campaign and focused on the certified marketability of this potential seafood "product" through the Aquaculture Certification Schemes Animal Welfare Benchmark.ALI expanded on our prior concerns related to impacts on animal welfare, the environment, and public health being priority points of intervention during conversations with seafood certification schemes as a premise for prohibition. As a result,RSPCA published a statement denouncing plans for the world's first octopus farm and Friend of the Sea provided us with a direct quotation explicitly stating they will not certify this species. If global seafood certifications refuse to create a "price premium" market for this product, perhaps this could serve as an indication to producers and investors that such products will not be welcomed or worth it.These demonstrations of opposition are a testament to our attempts at rejecting a dangerous development before it is an industrial disaster and translates to the prevention of unnecessary suffering for millions of animals.Through collaborative efforts with members of the Aquatic Animal Alliance (AAA) and the Aquatic Animal Policy focus group (AAP), spearheaded by the Aquatic Life Institute, we actively advocated for HB 1153 in Washington State. Several ALI team members were present during the public hearing for the House Agriculture and Natural Resources Committee to vote on HB 1153 - Prohibiting Octopus Farming and submitted subsequent written testimony in support.Our extensive communications with decision makers contributed to a series of successful milestones, ultimately resulting in its enactment into law.2024February proved to be a fast and furious month as we witnessed history being made:February 6, 2024: HB 1153 is pulled and passes the House Floor.February 14, 2024: ALI wrote to all Washington's Senate Senators of the Agriculture,...

]]>
Tessa @ ALI https://forum.effectivealtruism.org/posts/AD8QchabkrygXkdgm/we-did-it-victory-for-octopus-in-washington-state Fri, 15 Mar 2024 16:34:29 +0000 EA - We Did It! - Victory for Octopus in Washington State by Tessa @ ALI Tessa @ ALI https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:41 no full 4
coWvsGuJPyiqBdrhC EA - Unflattering aspects of Effective Altruism by NunoSempere Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unflattering aspects of Effective Altruism, published by NunoSempere on March 15, 2024 on The Effective Altruism Forum.I've been writing a few posts critical of EA over at my blog. They might be of interest to people here:Unflattering aspects of Effective AltruismAlternative Visions of Effective AltruismAuftragstaktikHurdles of using forecasting as a tool for making sense of AI progressBrief thoughts on CEA's stewardship of the EA ForumWhy are we not harder, better, faster, stronger?...and there are a few smaller pieces on my blog as well.I appreciate comments and perspectives anywhere, but prefer them over at the individual posts, since I disagree with the EA Forum's approach to life.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
NunoSempere https://forum.effectivealtruism.org/posts/coWvsGuJPyiqBdrhC/unflattering-aspects-of-effective-altruism Fri, 15 Mar 2024 11:44:59 +0000 EA - Unflattering aspects of Effective Altruism by NunoSempere NunoSempere https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:57 no full 5
Pnv6PRyeCPZknsbEw EA - The Lack of EA in US Private Foundations by Kyle Smith Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Lack of EA in US Private Foundations, published by Kyle Smith on March 15, 2024 on The Effective Altruism Forum.I've written before about trying to bring US private foundations into EA as major funders. I got some helpful feedback and haven't really pursued it further. I study US private foundations as a researcher and recently conducted a qualitative data collection of staff at 20 very large US private foundations ($100m+ assets). The subject of the study isn't directly EA related (focused mostly on how they use accounting/effectiveness information and accountability), but it got me thinking a lot!Some interesting observations that I am going to explore further, in future forum posts (if y'all think it's interesting) and future research papers:Trust-based philanthropy (TBP), a funder movement that's only been around since 2020, has had a HUGE impact on very large private foundations. All 20 indicated that they had already/were in the process of integrating TBP into their grantmaking. I can't emphasize enough how influential TBP has been. (This is a major finding of our current paper that is being drafted).It was not a planned question, but I often asked if they/their foundation knew about EA and if it had influenced their giving. Some were slightly aware of EA, and primarily had a negative perception. None were convinced by EA ideas and none indicated their foundation had been influenced by EA.When pressed about their feelings about EA, many suggested that they viewed EA and TBP as being incompatible with one another (either you trust your grantees [TBP] or you evaluate them rigorously [EA]), and they were choosing TBP. Which was pretty interesting to me, as I don't think they are incompatible (this is probably a paper I am going to get going here soon).I think a place where there is a disconnect is that these PF basically think being EA-aligned means you have to be a major pain in the ass to your grantees.There certainly are major roadblocks to integrating EA into US private foundations. Their charters typically force them to concentrate their giving in specific cause areas/geographic areas, which by design are constraints not particularly compatible with EA. But I do still believe there is potential for progress to be made, even if it doesn't mean PF funds get to EA directly.No matter the type of constraints of a PF charter, within those constraints, EA principles can still cause them to increase their effectiveness.How can EA make inroads with these major funders, and does it potentially start with a model for effective giving that is willing to relax on necessary principles so as to allow for constraints? Some constraints are easier to relax than others: a constraint that says "we must focus on education" is easier than "we must focus on the state of Delaware".Anyway, all of this is on my mind and I am writing this to procrastinate from actually writing the paper that this all comes from! Anyone interested in this topic feel free to reach out.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Kyle Smith https://forum.effectivealtruism.org/posts/Pnv6PRyeCPZknsbEw/the-lack-of-ea-in-us-private-foundations Fri, 15 Mar 2024 09:55:20 +0000 EA - The Lack of EA in US Private Foundations by Kyle Smith Kyle Smith https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:57 no full 6
w4MC3jJTJ8ah7Bkuz EA - Crowdsourced Overview of Funding for Regional Community Building Orgs by Rockwell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Crowdsourced Overview of Funding for Regional Community Building Orgs, published by Rockwell on March 15, 2024 on The Effective Altruism Forum.This post was primarily authored by Kiryl Shantyka (EA Sweden), James Herbert (EA Netherlands), and Rocky Schwartz (EA NYC). They received initial feedback from many MEARO leaders and some initial feedback from Caleb Parikh (EA Funds) and Naomi Nederlof (CEA Community Building Grants Manager), though they don't necessarily endorse all of the points made in this post. All mistakes are the authors' own.Introduction and Request for ContributionLast month, we introduced a new term:Meta EA Regional Organisation (MEARO) and invited the broader EA community to participate in discussion on the value and evaluation of these organisations. This post focuses on funding for MEAROs. We share information on recent changes in MEARO funding and request contributions to funding data collection efforts. We think there is strong potential for a more structured MEARO funding strategy, backed by robust data.Our hope is to spark a conversation that ensures MEAROs' work is intentionally supported and their significance fully recognized.We think the EA community might want to answer the questions:What percentage of EA funding should go to meta work?What percentage of meta work funding should go tometa EA regional organisations (MEAROs)?To answer the above questions, among other considerations, we need to know:The current state of MEARO funding.How the MEARO funding landscape has changed in the past year.The results of our investigation are below. We gathered this information by speaking with funders (EA IF and CBG) and MEARO leaders.Next steps:We would like to see better data collection and tracking.Two ways to do this are by contributing to theMEARO Funding Map andThe Centre for Exploratory Altruism Research's EA Meta Funding Survey.Map of Funding DataTo provide a clearer and more nuanced overview of the MEARO funding situation, we've developed an interactive map. It's important to note that our current dataset does not fully capture all MEAROs' funding details. Read about our methodology in notes to the map.We divide existing MEAROs into the following classification categories:Stable Funding : Minor increase, adjustment, or no change in funding levels compared to the previous period that hasn't affected organisational capacity significantly.Adjusted Funding: A minor reduction in funding (reported 0-10% reduction of organisational capacity), leading to a slight decrease in operational capacity or FTEs.Reduced Funding: A noticeable reduction in funding (more than 10%-30% of organisational capacity), significantly impacting operational budgets and possibly leading to a moderate decrease in FTEs.Critical Funding Cut: A major reduction in funding (30-70% of organisational capacity), critically affecting operational budgets and leading to a significant decrease in FTEs.Drastic Funding Cut: A dramatic reduction in funding (70%+ of organisational capacity).Under Review: Organisations whose funding situation is currently being evaluated or will be reassessed in the near future, with potential for changes.For MEARO leaders, particularly those with active or adjusted full-time equivalents (FTE), your input is invaluable. By sharing your information through this form, you'll be contributing to a more complete and accurate overview.MEARO Funding StructureThe historical structure of MEARO funding makes organisations particularly vulnerable to funding cuts because they usually are reliant on one major funder and do not have an independent financial runway.Most MEAROs receive 70 to 100% of their funding from institutional funders (e.g. Centre for Effective Altruism (CEA), EA Infrastructure Fund (EAIF)), typically on six- to twelve-mo...

]]>
Rockwell https://forum.effectivealtruism.org/posts/w4MC3jJTJ8ah7Bkuz/crowdsourced-overview-of-funding-for-regional-community Fri, 15 Mar 2024 09:35:32 +0000 EA - Crowdsourced Overview of Funding for Regional Community Building Orgs by Rockwell Rockwell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:35 no full 7
6JMzDv58sg9QtGbPw EA - University groups as impact-driven truth-seeking teams by anormative Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: University groups as impact-driven truth-seeking teams, published by anormative on March 14, 2024 on The Effective Altruism Forum.A rough untested idea that I'd like to hear others' thoughts about. This is mostly meant as a broader group strategy framing but might also have interesting implications for what university group programming should look like.EA university group organizers are often told to "backchain" our way to impact:What's the point of your university group?"To create the most impact possible, to do the greatest good we can"What do you need in order to create that?"Motivated and competent people working on solving the world's most pressing problems"And as an university group, how do you make those people?"Find altruistic people, share EA ideas with them, provide an environment where they can upskill"What specific things can you do to do that?"Intro Fellowships to introduce people to EA ideas, career planning and 1-1s for upskilling"This sort of strategic thinking is useful at times, but I think that it can also be somewhat pernicious, especially when it naively justifies the status quo strategy over other possible strategies.[1] It might instead be better to consider a wide variety of framings and figure out which is best.[2] One strategy framing I want to propose that I would be interested in testing is viewing university groups as "impact driven truth-seeking teams."What this looks likeAn impact-driven truth-seeking team is a group of students trying to figure out what they can do with their lives to have the most impact. Imagine a scrappy research team where everyone is trying to figure out the answer to this research question - "how can we do the most good?" Nobody has figured out the question yet, nobody is a purveyor of any sort of dogma, everyone is in it together to figure out how to make the world as good as possible with the limited resources we have.What does this look like? I'm not all that sure, but it might have some of these elements:An intro fellowship that serves an introduction to cause prioritization, philosophy, epistemics, etc.Regular discussions or debates about contenders for "the most pressing problem of our time"More of a focus on getting people to research and present arguments themselves than having conclusions presented to them to acceptActive cause prioritizationLive google docs with arguments for and against certain causesSpreadsheets attempting to calculate possible QALYs saved, possible x-risk reduction, etcPossibly (maybe) even trying to do novel research on open research questionsNo doubt some of the elements we identified before in our backchaining are imporant too - the career planning and the upskillingTesting fit, doing cheap tests, upskilling, getting experienceI'm sure there's much more that could be done along these lines that I'm missing or that hasn't been thought of yet at allAnother illustrative picture - imagine instead of university groups being marketing campaigns for Doing Good Better, we could each be a mini-80,000 hours research team,[3] trying to start at first principles and building our way up, assisted by the EA movement, but not constrained by it.Cause prio for it's own sake for the sake of EACurrently, the modus operandi of EA university groups seems to be selling the EA movement to students by convincing them of arguments to prioritize the primary EA causes. It's important to realize that the EA handbook serves as an introduction to the movement called Effective Altruism [4] and the various causes that it has already identified as being impactful, not as an introductory course in cause prioritization.It seems to me that this is the root of much of the unhealthy epistemics that can arise in university groups.[5]I don't think that students in my proposed team should sto...

]]>
anormative https://forum.effectivealtruism.org/posts/6JMzDv58sg9QtGbPw/university-groups-as-impact-driven-truth-seeking-teams Thu, 14 Mar 2024 22:06:27 +0000 EA - University groups as impact-driven truth-seeking teams by anormative anormative https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:37 no full 11
Rbk8ky4LQschdHdby EA - How people stopped dying from diarrhea so much (& other life-saving decisions) by Writer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How people stopped dying from diarrhea so much (& other life-saving decisions), published by Writer on March 16, 2024 on The Effective Altruism Forum.Rational Animations made this video collaborating with 80,000 Hours. The script has been written by Benjamin Hilton as an adaptation of part of the 80,000 Hours career guide, by Benjamin Todd. I have included the full script below.It's easy to feel like one person can't make a difference.The problems the world faces seem so vast, and we seem so small. Sometimes even the best of intentions aren't enough to make the world budge.Now, it's true that many common ways people try to do good have less impact than you might think at first.But some ways of doing good have allowed certain people to achieve an extraordinary impact. How is this possible? And what does it mean for how you can make a difference?We'll start by looking at doctors - and end up at nuclear war.Many people train to become doctors because they want to save lives! And, of course, their work is very important. But how many lives does a doctor really save over the course of their career? You might assume it's in the hundreds or thousands. But the surprising truth is that, according to an analysis carried by Dr Greg Lewis - a former medical doctor - the number is far lower than you'd expect.Since the 19th century, life expectancy has skyrocketed. But that's not just because of medicine. There are loads of contributing factors, like nutrition, improved sanitation, and increased wealth. Estimating how many years of life medicine alone saves is really difficult.Despite this difficulty, one attempt - from researchers at Harvard and King's College London - found that medical care in developed countries increases the life expectancy of each person in these countries by around 5 years.Most developed countries have around 3 doctors per 1,000 people. So, if this estimate is right, each doctor saves around 1,666 years of life, over the course of their career. Using the World Bank's standard conversion rate of 30 extra years of healthy life to one "life saved," that's around about 50 lives saved per doctor!But that's actually a substantial overestimate.Doctors are only one part of the medical system, which also relies on nurses and hospital staff, as well as overhead and equipment. And more importantly, there are already a lot of doctors in the developed world. So if you don't become a doctor, someone else will be available to perform the most critical procedures. Additional doctors only allow society to carry out additional, usually less significant, procedures.Look at this graph, from the analysis by Dr. Greg Lewis we quoted earlier. Each point is a country - The vertical axis shows disability-adjusted-life-years per 100-thousand people. You can think of that figure as roughly how many years of disability a group of 100-thousand people has to endure on average. Therefore, the fewer, the better. Each country has a different figure. The horizontal axis shows the number of doctors per 100-thousand people in each country.As you can see, countries with more doctors suffer lower disability.But notice how the curve goes nearly flat once you have more than 150 doctors per 100-thousand people. After this point (which almost all developed countries meet), additional doctors only achieve a small impact on average. In fact, at 300 doctors per 100-thousand people, an additional doctor saves only around 200 years of life throughout their career.So, when you take all this into account, including some accounting for the impact of nurses and other parts of the medical system, it looks more like each doctor saves only around 3 lives through the course of their career. Still an admirable achievement, but perhaps less than you may imagine.But that's an ordinary doctor. Some...

]]>
Writer https://forum.effectivealtruism.org/posts/Rbk8ky4LQschdHdby/how-people-stopped-dying-from-diarrhea-so-much-and-other Sat, 16 Mar 2024 22:22:44 +0000 EA - How people stopped dying from diarrhea so much (& other life-saving decisions) by Writer Writer https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:21 no full 1
Ld7ftrz3mzuDfPqun EA - Why I worry about about EA leadership, explained through two completely made-up LinkedIn profiles by Yanni Kyriacos Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I worry about about EA leadership, explained through two completely made-up LinkedIn profiles, published by Yanni Kyriacos on March 16, 2024 on The Effective Altruism Forum.The following story is fictional and does not depict any actual person or event...da da.(You better believe this is a draft amnesty thingi).Epistemic status: very low confidence, but niggling worry. Would LOVE for people to tell me this isn't something to worry about.I've been around EA for about six years, and every now and then I have an old sneaky peak at the old LinkedIn profile.Something I've noticed is that there seems to be a lot of people in leadership positions whose LinkedIn looks a lot like Profile #1 and not a lot who look like Profile #2. Allow me to spell out some of the important distinctions:Profile #1:Immediately jumped into the EA ecosystem as an individual contributorWorked their way up through the ranks through good old fashioned hard workHas approximately zero experience in the non-EA workforce and definitely non managing non-EAs. Now they manage peopleProfile #2:Like Profile #1, went to a prestigious uni, maybe did post grad, doesn't matter, not the major point of this postGot some grad gig in Mega Large Corporation and got exposure to normal people, probably crushed by the bureaucracy and politics at some pointMost importantly, Fucked Around And Found Out (FAAFO) for the next five years. Did lots of different things across multiple industries. Gained a bunch of skills in the commercial world. Had their heart broken. Was not fossilized by EA norms. But NOW THEY'RE BACK BAYBEEE....If I had more time and energy I'd probably make some more evidenced claims about Meta issues, and how things like SBF, sexual misconduct cases or Nonlinear could have been helped with more of #2 than #1 but don't have the time or energy (I'm also less sure about this claim).I also expect people in group 1 to downvote this and people in group 2 to upvote it, but most importantly, I want feedback on whether people think this is a thing, and if it is a thing, is it bad.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Yanni Kyriacos https://forum.effectivealtruism.org/posts/Ld7ftrz3mzuDfPqun/why-i-worry-about-about-ea-leadership-explained-through-two Sat, 16 Mar 2024 14:43:13 +0000 EA - Why I worry about about EA leadership, explained through two completely made-up LinkedIn profiles by Yanni Kyriacos Yanni Kyriacos https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:10 no full 2
z9gkxsgMokbQA45ua EA - Focusing on bad criticism is dangerous to your epistemics by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Focusing on bad criticism is dangerous to your epistemics, published by Lizka on March 16, 2024 on The Effective Altruism Forum.TL;DR: Really low quality criticism[1] can grab my attention - it can be stressful, tempting to dunk on, outrageous, etc. But I think it's dangerous for my epistemics; spending a lot of time on bad criticism can make it harder to productively reflect on useful criticism.This post briefly outlines why/how engaging with bad criticism can corrode epistemics and lists some (tentative) suggestions, as I expect I'm not alone. In particular, I suggest that we:Avoid casually sharing low-quality criticism (including to dunk on it, express outrage/incredulity, etc.).Limit our engagement with low-quality criticism.Remind ourselves and others that it's ok to not respond to every criticism.Actively seek out, share, and celebrate good criticism.I wrote this a bit over a year ago. The post is somewhat outdated (and I'm less worried about the issues described than I was when I originally wrote it), but I'm publishing it (with light edits) for Draft Amnesty Week.Notes on the post:It's aimed at people who want to engage with criticism for the sake of improving their own work, not those who might need to respond to various kinds of criticism.E.g. if you're trying to push forward a project or intervention and you're getting "bad criticism" in response, you might indeed need to engage with that a lot. (Although I think we often get sucked into responding/reacting to criticism even when it doesn't matter - but that might be a discussion for a different time.)It's based mostly on my experience (especially last year), although some folks seemed to agree with what I suggested was happening when I shared the draft a year ago.Some people seem to think that it's bad to dismiss any criticism. (I'm not sure I understand this viewpoint properly.[2]) I basically treat "some criticisms aren't useful" as a given/premise here.As before, I use the word "criticism" here for a pretty vague/broad class of things that includes things like "negative feedback" and "people sharing that they think [I or something I care about is] wrong in some important way." And I'm talking about criticism of your work, of EA, of fields/projects you care about, etc.See also what I mean by "bad criticism."How focusing on bad criticism can corrode our epistemics (rough notes)Specific ~belief/attitude/behavior changesI'm worried that when I spend too much time on bad criticisms, the following things happen (each time nudging me very slightly in a worse direction):My position on the issue starts to feel like the "virtuous" one, since the critics who've argued against the position were antagonistic or clearly wrong.But reversed stupidity is not intelligence, and low-quality or bad-faith arguments can be used to back up true claims.Relatedly, I become immunized to future similar criticism.I.e. the next time I see an argument that sounds similar, I'm more likely to dismiss it outright.See idea inoculation: "Basically, it's an effect in which a person who is exposed to a weak, badly-argued, or uncanny-valley version of an idea is afterwards inoculated against stronger, better versions of that idea.The analogy to vaccines is extremely apt - your brain is attempting to conserve energy and distill patterns of inference, and once it gets the shape of an idea and attaches the flag "bullshit" to it, it's ever after going to lean toward attaching that same flag to any idea with a similar shape."I lump a lot of different criticisms together into an amalgamated position "the other side" "holds"I start to look down on criticisms/critics in general; my brain starts to expect new criticism to be useless (and/or draining).Which makes it less likely that I will (seriously) engage with criticism o...

]]>
Lizka https://forum.effectivealtruism.org/posts/z9gkxsgMokbQA45ua/focusing-on-bad-criticism-is-dangerous-to-your-epistemics Sat, 16 Mar 2024 10:12:28 +0000 EA - Focusing on bad criticism is dangerous to your epistemics by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:08 no full 3
vhKZ7hyzmcrWuBwDL EA - The Scale of Fetal Suffering in Late-Term Abortions by Ariel Simnegar Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Scale of Fetal Suffering in Late-Term Abortions, published by Ariel Simnegar on March 17, 2024 on The Effective Altruism Forum.This is a draft amnesty post.SummaryIt seems plausible that fetuses can suffer from 12 weeks of age, and quite reasonable that then can suffer from 24 weeks of age.Some late-term abortion procedures seem that they might cause a fetus excruciating suffering.Over 35,000 of these procedures occur each year in the US alone.Further research would be desired on interventions to reduce this suffering, such as mandating fetal anesthesia for late-term abortions.BackgroundMost people agree that a fetus has the capacity to suffer at some point. If a fetus has the capacity to suffer, then we ought to reduce that suffering when possible. Fetal anesthesia is standard practice for fetal surgery,[1] but I am unaware of it ever being used during late-term abortions. If the fetus can suffer, these procedures likely cause the fetus extreme pain.I think the cultural environment EAs usually live in tends to minimize concern for fetal suffering. Some worry that promoting care for fetal welfare will play into the hands of abortion opposers. However, as Brian Tomasik has pointed out, one can certainly support abortion as an option while recognizing the potential for fetal consciousness during late-term abortion procedures.Surgical Abortion ProceduresLI (Labor Induction)[2]Gestational age: 20+ weeks.Method: The fetus is administered a lethal injection with no anesthesia, often of potassium chloride, which causes cardiac arrest and death within a minute.The Human Rights Watch calls the use of potassium chloride for the death penalty without anesthesia "excruciatingly painful" because it "inflames the potassium ions in the sensory nerve fibers, literally burning up the veins as it travels to the heart."[3] The American Veterinary Medical Association considers the use of potassium chloride without anesthesia "unacceptable" when euthanizing vertebrate animals.[4]D&E (Dilation and Evacuation)[5]Gestational age: 13-24 weeks.Method: The fetus's limbs are torn off before the fetus's head is crushed. The procedure takes several minutes.When Can a Fetus Suffer?The traditional view of fetal sentience has been that "the cortex and intact thalamocortical tracts," which develop after 24 weeks, "are necessary for pain experience."[6] However, mounting evidence of suffering from adults with disabled cortices and animals without cortices has cast doubt on the traditional view.[7] "Overall, the evidence, and a balanced reading of that evidence, points towards an immediate and unreflective pain experience mediated by the developing function of the nervoussystem from as early as 12 weeks."[8] 12 weeks is when the first projections are made into the fetus's cortical subplate,[9] which will eventually grow into the cortex.I am a layperson who doesn't have the expertise to evaluate these studies. However, I don't see a good reason to have substantially less concern for 24+ week fetuses than for infants. Though the arguments for 12-24 week fetuses are weaker, it still seems plausible that they have some capacity to suffer. Given the potential scale of fetal suffering due to late-term abortions, it seems that this evidence is worth seriously examining.Scale in US and UK2021 UK[10]The following is a selection from the UK abortion data tables:7a: Weeks from Gestation13 to 1415 to 1920+Total Abortions5,3225,5282,686D&E (%)25%74%44%LI with surgical evacuation (%)0%1%18%LI with medical evacuation (%)0%0%20%Assuming the given percentages are exact, this gives us:Abortion ProcedureAbortions per Year (UK)D&E6,603LI1,0762020 USA[11]36,531 surgical abortions at >13 weeks and 4,382 abortions at 21 weeks were reported.In 2021 UK, 38% of the 20 we...

]]>
Ariel Simnegar https://forum.effectivealtruism.org/posts/vhKZ7hyzmcrWuBwDL/the-scale-of-fetal-suffering-in-late-term-abortions 13 weeks and 4,382 abortions at 21 weeks were reported.In 2021 UK, 38% of the 20 we...]]> Sun, 17 Mar 2024 23:17:24 +0000 EA - The Scale of Fetal Suffering in Late-Term Abortions by Ariel Simnegar 13 weeks and 4,382 abortions at 21 weeks were reported.In 2021 UK, 38% of the 20 we...]]> 13 weeks and 4,382 abortions at 21 weeks were reported.In 2021 UK, 38% of the 20 we...]]> Ariel Simnegar https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:21 no full 1
FaqutaCNqccgkgcaf EA - Ashamed of wealth by Neil Warren Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ashamed of wealth, published by Neil Warren on March 17, 2024 on The Effective Altruism Forum.tl;dr: I feel ashamed of being born into wealth. Upon analysis, I don't think that's justified. I should be taking advantage of my wealth, not wallowing in how ashamed I am to have it.I'm a high school student in France. I was born into a wealthy family by most of my classmates' (and of course the world)'s standards. In France, there's a strong social incentive to openly criticize wealth of any kind and view people richer than oneself with a particular kind of disdain and resentment. I imagine a majority of people reading this are from Silicon Valley, which has a radically different view on wealth. (I grew up in Mountain View, then moved to France at age 12.The cultural differences are striking!)I'm rich, and it would be foolish to deny that I feel ashamed of this on some level. Friends will mention how rich I am or talk about how expensive eg university is and I will attempt to downplay my wealth, even if it means lying a little (I am not proud of this). I notice myself latching onto any opportunity to complain about the price of something. "Ah yes the inflation, bad isn't it (ski trip will cost a little more this year I guess)."This feeling of guilty wealth got worse when I learned how cheap mosquito nets were.Since dabbling in effective altruism, I started noticing the price of things a lot more than I used to.[1] I became sensitive to how my family would spend things and would subtly put us on a better track, like by skipping desserts or drinks at expensive restaurants. My parents continually assure me that they have enough money put aside to pay for any university I get into. They would be irate if I intentionally chose a relatively cheap university for the sake of cheapness.I'm going to find this warning hard to heed given how much money we're talking about.So my guilt translates into a heightened price sensitivity. But is this guilt justified at all?The subagent inside me in charge of social incentives tells me I should be ashamed of being rich. The subagent inside me in charge of being sane tells me I should be careful what I wish for. If being rich is bad, then that implies being less rich is better, no? No? Stephen Fry once noted that in all things, we think we're in the Goldilocks zone. Anyone smarter than us is an overly intellectual bookworm; anyone stupider is a fool.Anyone richer than us is a snobby bourgeois; anyone poorer is a member of the (ew) lower class.But that's stupid. If I could slide my IQ up a few points, of course I would. If I could have been born richer, I would have. I should try not to have that disdain for those richer than oneself which society endorses. Sometimes, my gut pities this kid in my class who has an expensive watch and drives a scooter to school from his mansion one block away because it makes him feel cool. He's a spoiled brat in most senses of the word, which is reason to pity him. But my gut pities his wealth.Yet, if I could pick to have his wealth I would unquestionably do so, even when that is not the socially approved response.More money is more good: that's a simple equation I can get behind. Were my parents a little richer, it might be 100 euros a month going to AMF instead of 50. [2] One should be wary of learning the long lessons from spoiled brats.Another reason not to feel ashamed at being rich is that it's not my money. I didn't choose to be born rich. My parents are the ones deciding what to do with it. This doesn't absolve me of all responsibility: whatever uncomfortable and terribly cliched[3] conversation I could have with them tonight in order to get them to spend more on mosquito nets is a small price compared to that paid by children infected with malaria.I have a disproportionate amount of influ...

]]>
Neil Warren https://forum.effectivealtruism.org/posts/FaqutaCNqccgkgcaf/ashamed-of-wealth Sun, 17 Mar 2024 07:21:34 +0000 EA - Ashamed of wealth by Neil Warren Neil Warren https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 04:34 no full 4
dmEyvYzDLyx4WiJTw EA - [Draft] The humble cosmologist's P(doom) paradox by titotal Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Draft] The humble cosmologist's P(doom) paradox, published by titotal on March 17, 2024 on The Effective Altruism Forum.[This post has been published as part of draft amnesty week. I did quite a bit of work on this post, but abandoned it because I was never sure of my conclusions. I don't do a lot of stats work, so I could never be sure if I was missing something obvious, and I'm not certain of the conclusions to draw. If this gets a good reception, I might finish it off into a proper post.]Part 1: Bayesian distributionsI'm not sure that I'm fully on board the "Bayesian train". I worry about Garbage in, garbage, out, that it will lead to overconfidence about what are ultimately just vibes, etc.But I think if you are doing Bayes, you should at least try to do it right.See, in Ea/rationalist circles, the discussion of Bayesianism often stops at bayes 101. For example, the "sequences" cover the "mammaogram problem", in detail, but never really cover how Bayesian statistics works outside of toy examples. The CFAR handbook doesn't either.Of course, plenty of the people involved have read actual textbooks and the like, (and generally research institutes use proper statistics), but I'm not sure that the knowledge has spread it's way around to the general EA public.See, in the classic mammogram problem (I won't cover the math in detail because there are 50 different explainers), both your prior probabilities, and the amount you should update, are well established, known, exact numbers. So you have your initial prior of say, 1%, that someone has cancer. and then you can calculate a likelihood ratio of exactly 10:1 resulting from a probable test, getting you a new, exact 10% chance that the person has cancer after the test.Of course, in real life, there is often not an accepted, exact number for your prior, or for your likliehood ratio. A common way to deal with this in EA circles is to just guess. Do aliens exist? well I guess that there is a prior of 1% that they do, and then i'll guess likliehood ratio of 10:1 that we see so many UFO reports, so the final probability of aliens existing is now 10%. [magnus vinding example] Just state that the numbers are speculative, and it'll be fine.Sometimes, people don't even bother with the Bayes rule part of it, and just nudge some numbers around.I call this method "pop-Bayes". Everyone acknowledges that this is an approximation, but the reasoning is that some numbers are better than no numbers. And according to the research of Phillip Tetlock, people who follow this technique, and regularly check the results of their predictions, can do extremely well at forecasting geopolitical events. Note that for practicality reasons they only tested forecasting for near-term events where they thought the probability was roughly in the 5-95% range.Now let's look at the following scenario (most of this is taken from this tutorial):Your friend Bob has a coin of unknown bias. It may be fair, or it may be weighted to land more often on heads or tails. You watch them flip the coin 3 times, and each time it comes up heads. What is the probability that the next flip is also heads?Applying "pop-bayes" to this starts off easy. Before seeing any flip outcomes, the prior of your final flip being heads is obviously 0.5, just from symmetry. But then you have to update this based on the first flip being heads. To do this, you have to estimate P(E|H) and P(E|~H). P(E|H) corresponds to "the probability of this flip having turned up heads, given that my eventual flip outcome is heads". How on earth are you meant to calculate this?Well, the key is to stop doing pop-bayes, and start doing actual bayesian statistics. Instead of reducing your prior to a single number, you build a distribution for the parameter of coin bias, with 1 corresponding to fully...

]]>
titotal https://forum.effectivealtruism.org/posts/dmEyvYzDLyx4WiJTw/draft-the-humble-cosmologist-s-p-doom-paradox Sun, 17 Mar 2024 03:43:27 +0000 EA - [Draft] The humble cosmologist's P(doom) paradox by titotal titotal https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:03 no full 5
c22ReEuMDcoTnqoa4 EA - There and back again: reflections from leaving EA (and returning) by LotteG Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There and back again: reflections from leaving EA (and returning), published by LotteG on March 18, 2024 on The Effective Altruism Forum.This is aDraft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.Commenting and feedback guidelines:This is a Forum post that I wouldn't have posted without the nudge of Draft Amnesty Week, and is indeed my first ever forum post. Fire away! (But be nice, as usual)In Autumn 2016, as a first year undergraduate, I discovered Effective Altruism. Although I don't remember my inaugural meeting with EA, it must have had a big impact on me, because in a few short months I was all in. At the time, I was a physics student who had grown up with a deep - but not yet concrete - motivation to "make the world a better place".I had not yet formed any solid career ambitions, as I was barely aware of the kinds of careers that even existed for mathsy people like me - let alone any that would make me feel morally fulfilled. When I encountered EA, it felt like everything was finally slotting together. My nineteen year old brain was buzzing with the possibilities ahead.But by the following summer, barely a single fraying thread held me to EA. I had severed myself from EA and its community.Several years on, I have somehow found myself even more involved in EA than I was before (and, once again, I'm not fully sure how this happened). Now, I work in an EA job, engage with EA content, and even have EA friends (!). I genuinely believe that if I had not left EA when I did, then I wouldn't be able to describe my current relationship with EA in the two ways I do now: sustainable and healthy.Reflecting back on this transition, I have three key takeaways, specifically aimed at EA-aligned grads who are making their entry into the workforce.Disclaimers:These reflections probably do not apply in all cases. Most likely, there is variation in applicability by cause area, type of work, person, organisation, etc. This post is from my own perspective. For context, I work in operations.None of my commentary below is intended as a criticism of any specific org or institution. I simply hope to open people's minds to paths which go against what is seen as the default route to impact for many EAs coming out of university.(1) Skill building >> impressiveness factorMy reservations with elite private institutionsI often hear career advice in the EA space along the lines of:"Aim for the most impressive thing that you can get on your CV as quickly as possible, and by impressive we mean something like working somewhere elite in the private sector."I disagree with this advice on two levels:1. Effort pay-off??Emphasising the impressiveness-factor of a career move shifts focus away from what actually should be the priority: the skills gained.During my time away from EA, I saw many of my non-EA peers seek extremely prestigious roles at elite institutions - think Google, Goldman Sachs, PwC, and so on. Something that really struck me was how competitive, high-effort, time-consuming, and stressful the hiring rounds for these jobs were.And if they were lucky enough to beat the huge amounts of competition and get the job, yeah it would look great on their LinkedIn - but the tradeoff was often working long hours in a pressure-cooker environment, in a role that sometimes involved a high proportion of donkey work.The bias towards prestigious-sounding jobs is widespread across society, so it is no surprise that this has also proliferated EA. Among EAs, I suppose, the allure of such jobs is based on the assumption that the more prestigious an establishment, the better they will train you due to having greater resources.But think about it this way: given how much time, effort and (as you are probabilistically l...

]]>
LotteG https://forum.effectivealtruism.org/posts/c22ReEuMDcoTnqoa4/there-and-back-again-reflections-from-leaving-ea-and > impressiveness factorMy reservations with elite private institutionsI often hear career advice in the EA space along the lines of:"Aim for the most impressive thing that you can get on your CV as quickly as possible, and by impressive we mean something like working somewhere elite in the private sector."I disagree with this advice on two levels:1. Effort pay-off??Emphasising the impressiveness-factor of a career move shifts focus away from what actually should be the priority: the skills gained.During my time away from EA, I saw many of my non-EA peers seek extremely prestigious roles at elite institutions - think Google, Goldman Sachs, PwC, and so on. Something that really struck me was how competitive, high-effort, time-consuming, and stressful the hiring rounds for these jobs were.And if they were lucky enough to beat the huge amounts of competition and get the job, yeah it would look great on their LinkedIn - but the tradeoff was often working long hours in a pressure-cooker environment, in a role that sometimes involved a high proportion of donkey work.The bias towards prestigious-sounding jobs is widespread across society, so it is no surprise that this has also proliferated EA. Among EAs, I suppose, the allure of such jobs is based on the assumption that the more prestigious an establishment, the better they will train you due to having greater resources.But think about it this way: given how much time, effort and (as you are probabilistically l...]]> Mon, 18 Mar 2024 21:33:22 +0000 EA - There and back again: reflections from leaving EA (and returning) by LotteG > impressiveness factorMy reservations with elite private institutionsI often hear career advice in the EA space along the lines of:"Aim for the most impressive thing that you can get on your CV as quickly as possible, and by impressive we mean something like working somewhere elite in the private sector."I disagree with this advice on two levels:1. Effort pay-off??Emphasising the impressiveness-factor of a career move shifts focus away from what actually should be the priority: the skills gained.During my time away from EA, I saw many of my non-EA peers seek extremely prestigious roles at elite institutions - think Google, Goldman Sachs, PwC, and so on. Something that really struck me was how competitive, high-effort, time-consuming, and stressful the hiring rounds for these jobs were.And if they were lucky enough to beat the huge amounts of competition and get the job, yeah it would look great on their LinkedIn - but the tradeoff was often working long hours in a pressure-cooker environment, in a role that sometimes involved a high proportion of donkey work.The bias towards prestigious-sounding jobs is widespread across society, so it is no surprise that this has also proliferated EA. Among EAs, I suppose, the allure of such jobs is based on the assumption that the more prestigious an establishment, the better they will train you due to having greater resources.But think about it this way: given how much time, effort and (as you are probabilistically l...]]> > impressiveness factorMy reservations with elite private institutionsI often hear career advice in the EA space along the lines of:"Aim for the most impressive thing that you can get on your CV as quickly as possible, and by impressive we mean something like working somewhere elite in the private sector."I disagree with this advice on two levels:1. Effort pay-off??Emphasising the impressiveness-factor of a career move shifts focus away from what actually should be the priority: the skills gained.During my time away from EA, I saw many of my non-EA peers seek extremely prestigious roles at elite institutions - think Google, Goldman Sachs, PwC, and so on. Something that really struck me was how competitive, high-effort, time-consuming, and stressful the hiring rounds for these jobs were.And if they were lucky enough to beat the huge amounts of competition and get the job, yeah it would look great on their LinkedIn - but the tradeoff was often working long hours in a pressure-cooker environment, in a role that sometimes involved a high proportion of donkey work.The bias towards prestigious-sounding jobs is widespread across society, so it is no surprise that this has also proliferated EA. Among EAs, I suppose, the allure of such jobs is based on the assumption that the more prestigious an establishment, the better they will train you due to having greater resources.But think about it this way: given how much time, effort and (as you are probabilistically l...]]> LotteG https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:46 no full 2
rQLYBZSTTuk7SZ66X EA - Carlo: uncertainty analysis in Google Sheets by ProbabilityEnjoyer Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Carlo: uncertainty analysis in Google Sheets, published by ProbabilityEnjoyer on March 18, 2024 on The Effective Altruism Forum.I've been working on Carlo, a tool that lets you do uncertainty and sensitivity analysis with Google Sheets spreadsheets.Please note Carlo is an (expensive) commercial product. The pricing is aimed at professionals making important decisions.Some of the key features that set Carlo apart are:Works with your existing Google Sheets calculationsGold-standard sensitivity analysisOur sensitivity analysis offers a true metric of variable importance: it can tell you what fraction of the output variance is due to each of the inputs and their interactions.Unusually flexible inputInputs can be given using novel, convenient probability distributions that flexibly match your beliefs.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
ProbabilityEnjoyer https://forum.effectivealtruism.org/posts/rQLYBZSTTuk7SZ66X/carlo-uncertainty-analysis-in-google-sheets Mon, 18 Mar 2024 17:01:13 +0000 EA - Carlo: uncertainty analysis in Google Sheets by ProbabilityEnjoyer ProbabilityEnjoyer https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:01 no full 6
dmEwQZSbPsYhFay2G EA - EA "Worldviews" Need Rethinking by Richard Y Chappell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA "Worldviews" Need Rethinking, published by Richard Y Chappell on March 18, 2024 on The Effective Altruism Forum.I like Open Phil's worldview diversification. But I don't think their current roster of worldviews does a good job of justifying their current practice. In this post, I'll suggest a reconceptualization that may seem radical in theory but is conservative in practice.Something along these lines strikes me as necessary to justify giving substantial support to paradigmatic Global Health & Development charities in the face of competition from both Longtermist/x-risk and Animal Welfare competitor causes.Current OrthodoxyI take it that Open Philanthropy's current "cause buckets" or candidate worldviews are typically conceived of as follows:neartermist - incl. animal welfareneartermist - human-onlylongtermism / x-riskWe're told that how to weigh these cause areas against each other "hinge[s] on very debatable, uncertain questions." (True enough!) But my impression is that EAs often take the relevant questions to be something like, should we be speciesist? and should we only care about present beings? Neither of which strikes me as especially uncertain (though I know others disagree).The ProblemI worry that the "human-only neartermist" bucket lacks adequate philosophical foundations. I think Global Health & Development charities are great and worth supporting (not just for speciesist presentists), so I hope to suggest a firmer grounding. Here's a rough attempt to capture my guiding thought in one paragraph:Insofar as the GHD bucket is really motivated by something like sticking close to common sense, "neartermism" turns out to be the wrong label for this. Neartermism may mandate prioritizing aggregate shrimp over poor people; common sense certainly does not. When the two come apart, we should give more weight to the possibility that (as-yet-unidentified) good principles support the common-sense worldview.So we should be especially cautious of completely dismissing commonsense priorities in a worldview-diversified portfolio (even as we give significant weight and support to a range of theoretically well-supported counterintuitive cause areas).A couple of more concrete intuitions that guide my thinking here: (1) fetal anesthesia as a cause area intuitively belongs with 'animal welfare' rather than 'global health & development', even though fetuses are human. (2) It's a mistake to conceive of global health & development as purely neartermist: the strongest case for it stems from positive, reliable flow-through effects.A Proposed SolutionI suggest that we instead conceive of (1) Animal Welfare, (2) Global Health & Development, and (3) Longtermist / x-risk causes as respectively justified by the following three "cause buckets":Pure suffering reductionReliable global capacity growthSpeculative moonshotsIn terms of the underlying worldview differences, I think the key questions are something like:(i) How confident should we be in our explicit expected value estimates? How strongly should we discount highly speculative endeavors, relative to "commonsense" do-gooding?(ii) How does the total (intrinsic + instrumental) value of improving human lives & capacities compare to the total (intrinsic) value of pure suffering reduction?[Aside: I think it's much more reasonable to be uncertain about these (largely empirical) questions than about the (largely moral) questions that underpin the orthodox breakdown of EA worldviews.]Hopefully it's clear how these play out: greater confidence in EEV lends itself to supporting moonshots to reduce x-risk or otherwise seek to improve the long-term future in a highly targeted, deliberate way. Less confidence here may support more generic methods of global capacity-building, such as improving health and (were there any ...

]]>
Richard Y Chappell https://forum.effectivealtruism.org/posts/dmEwQZSbPsYhFay2G/ea-worldviews-need-rethinking Mon, 18 Mar 2024 16:02:03 +0000 EA - EA "Worldviews" Need Rethinking by Richard Y Chappell Richard Y Chappell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:54 no full 7
F5w2eu5omovKP82cB EA - Personal fit is different from the thing that you already like by Joris P Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Personal fit is different from the thing that you already like, published by Joris P on March 18, 2024 on The Effective Altruism Forum.This is aDraft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.This draft lacks the polish of a full post, but the content is almost there. The kind of constructive feedback you would normally put on a Forum post is very welcome.I wrote most of this last year. I also think I'm making a pretty basic point and don't think I'm articulating it amazingly, but I'm trying to write more and can imagine people (especially newer to EA) finding this useful - so here we goLast week[1] I was at an event with a lot of people relatively new to EA - lots of them had recently finished the introductory fellowship. Talking through their plans for the future, I noticed that many of them used the concept 'personal fit' to justify their plans to work on a problem they had already found important before learning about EA.They would say they wanted to work on combating climate change or increasing gender equality, becauseThey had studied this and felt really motivated to work on itTherefore, their 'personal fit' was really good for working on this topicTherefore surely, it was the highest impact thing they could be doing.I think a lot of them were likely mistaken, in one or more of the following ways:They overestimated their personal fit for roles in these (broad!) fieldsThey underestimated the differences in impact between career options and cause areasThey thought that they were motivated to do the most good they could, but in fact they were motivated by a specific causeTo be clear: the ideal standard here is probably unattainable, and I surely don't live up to it. However, if I could stress one thing, it would be that people scoping out their career options could benefit from first identifying high-impact career options, and only second thinking about which ones they might have a great personal fit for - not the other way around.^This was last yearThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Joris P https://forum.effectivealtruism.org/posts/F5w2eu5omovKP82cB/personal-fit-is-different-from-the-thing-that-you-already Mon, 18 Mar 2024 09:12:49 +0000 EA - Personal fit is different from the thing that you already like by Joris P Joris P https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:04 no full 9
RKEDaSeG9jDTiKEaY EA - Ways in which I'm not living up to my EA values by Joris P Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ways in which I'm not living up to my EA values, published by Joris P on March 18, 2024 on The Effective Altruism Forum.This is aDraft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.When I was pretty new to EA, I was way too optimistic about how Wise and Optimized and Ethical and All-Knowing experienced EAs would be.I thought Open Phil would have some magic spreadsheets with the answers to all questions in the universeI thought that, surely, experienced EAs had for 99% figured out what they thought was the biggest problem in the worldI imagined all EAs to have optimized almost everything, and to basically endorse all their decisions: their giving practices, their work-life balance, the way they talked about EA to others, etc.I've now been around the community for a few years. I'm still really grateful for and excited about EA ideas, and I love being around the people inspired by EA ideas (I evenwork on growing our community!). However, I now also realize that today, I am far from how Wise and Optimized and Ethical and All-Knowing Joris-from-4-years-ago expected future Joris and his peers to be.There's two things that caused me to not live up to those ideals:I was naive about how Wise and Optimized and Ethical and All-Knowing someone could realistically beThere's good things I could reasonably do or should have reasonably done in the past 4 yearsTo make this concrete, I wanted to share some ways in which I think I'm not living up to my EA values or expectations from a few years ago. I think Joris-from-4-years-ago would've found this list helpful.[1]I'm still not fully veganDonating:I just default to the community norm of donating 10%, without having thought about it hardI haven't engaged for more than 30 minutes with arguments around e.g. patient philanthropyI left my GWWC donations to literally the last day of the year and didn't spend more than one hour on deciding where to donateI have a lot less certainty over the actual positive impact of the programs we run than I expected when I started this jobI'm still as bad at math as I was in uni, meaning my botecs are just not that greatIt's so, so much harder than I expected to account for counterfactuals and to find things you can measure that are robustly goodI still find it really hard to pitch EAI hope this inspires some people (especially those who I (and others) might look up to) to share how they're not perfect. What are some ways in which you're not living up to your values, or to what you-from-the-past maybe expected you would be doing by now?^I'll leave it up to you whether these fall in category 1 (basically unattainable) or 2 (attainable). I also do not intend to turn this into a discussion of what things EAs "should" do, which things are actually robustly good, etc.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Joris P https://forum.effectivealtruism.org/posts/RKEDaSeG9jDTiKEaY/ways-in-which-i-m-not-living-up-to-my-ea-values Mon, 18 Mar 2024 05:20:10 +0000 EA - Ways in which I'm not living up to my EA values by Joris P Joris P https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:52 no full 10
AXhC4JhWFfsjBB4CA EA - The current limiting factor for new charities by Joey Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The current limiting factor for new charities, published by Joey on March 19, 2024 on The Effective Altruism Forum.TLDR: we think the limiting factor for new charities has shifted from founder talent to early stage funding being the top limiting factor.We have historically written aboutlimiting factors and how they affect our thinking about the highest impact areas. For new charities, over the past 4-5 years, fairly consistently, the limiting factor has been people; specifically, the fairly rare founder profile that we look for and think has the best chance at founding a field-leading charity. However, we think over the last 12 months this picture has changed in some important ways:Firstly, we have started founding more charities: After founding ~5 charities a year in 2021 and 2022 we founded 8 charities in 2023 and we think there are good odds we will be able to found ~10-12 charities in 2024. This is a pretty large change. We have not changed our standards for charity quality or founder quality - if anything, we have slightly raised the bar on both compared to historical years.However, we have received more and stronger applications over time both from within and outside of EA. We think this trend is not highly reliable, but our best guess is that it's happening.Side note: This does not mean that people should be less inclined to apply. We now have a single application system that leads to opportunities in both charity founding, for-profit founding, and high-impact nonprofit researchsimultaneously.Secondly, the funding ecosystem has tightened somewhat inseed and mid-stage funding. (Although FTX primarily funded cause areas unrelated to us, their collapse has led to other orgs having larger funding gaps and thus the EA funding ecosystem in general being smaller and more fragile.)The result of this is we now think that going forward the most likely limiting factor of new charities getting founded will be early and mid-stage funding. (We are significantly less concerned about funding for our charities once they are older than ~4 years). This has influenced our recent work onfunding circles (typically aimed at mid-stage funding),Effective Giving initiatives, making ourseed funding network open to more people, as well as our recent announcement launching theFounding to Give program (this career path makes more sense the more founder talent you have relative to funding for charities).If I were thinking about the most important action for the average EA forum user to consider, I would consider if you are a good fit for the seed network (website,prior EA forum writeup).Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Joey https://forum.effectivealtruism.org/posts/AXhC4JhWFfsjBB4CA/the-current-limiting-factor-for-new-charities Tue, 19 Mar 2024 15:38:08 +0000 EA - The current limiting factor for new charities by Joey Joey https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 02:37 no full 3
aF6nh4LW6sSbgMLzL EA - Updates on Community Health Survey Results by David Moss Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates on Community Health Survey Results, published by David Moss on March 20, 2024 on The Effective Altruism Forum.SummarySatisfaction with the EA communityReported satisfaction, from 1 (Very dissatisfied) to 10 (Very satisfied), in December 2023/January 2024 was lower than when we last measured it shortly after the FTX crisis at the end of 2022 (6.77 vs. 6.99, respectively).However, December 2023/January 2024 satisfaction ratings were higher than what people recalled their satisfaction being "shortly after the FTX collapse" (and their recalled level of satisfaction was lower than what we measured their satisfaction as being at the end of 2022).We think it's plausible that satisfaction reached a nadir at some point later than December 2022, but may have improved since that point, while still being lower than pre-FTX.Reasons for dissatisfaction with EA:A number of factors were cited a similar number of times by respondents as Very important reasons for dissatisfaction, among those who provided a reason: Cause prioritization (22%), Leadership (20%), Justice, Equity, Inclusion and Diversity (JEID, 19%), Scandals (18%) and excessive Focus on AI / x-risk / longtermism (16%).Including mentions of Important (12%) and Slightly important (7%) factors, JEID was the most commonly mentioned factor overall.Changes in engagement over the last year39% of respondents reported getting at least slightly less engaged, while 31% reported no change in engagement, and 29% reported increasing engagement.Concrete changes in behavior31% of respondents reported that they had stopped referring to "EA" while still promoting EA projects or ideas, and 15% that they had temporarily stopped promoting EA. Smaller percentages reported other changes such as ceasing to engage with online EA spaces (6.8%), permanently stopping promoting EA ideas or projects (6.3%), stopping attending EA events (5.5%), stopping working on any EA projects (4.3%) and stopping donating (2.5%).Desire for more community change as a result of the FTX collapse46% of respondents at least somewhat agreed that they would like to see the EA community change more than it already has, as a result of the FTX collapse, while 26% somewhat or strongly disagreed.Trust in EA organizationsReported trust in key EA organizations (Center for Effective Altruism, Open Philanthropy, and 80,000 Hours) were slightly lower than in our December 2022 post-FTX survey, though the change for 80,000 Hours did not reliably exclude no difference.Perceived leadership vacuum41% of respondents at least somewhat agreed that 'EA currently has a vacuum of leadership', while 22% somewhat or strongly disagreed.As part of the EA Survey, Rethink Priorities has been tracking community health related metrics, such as satisfaction with the EA community. Since the FTX crisis in 2022, there has been considerable discussion regarding how that crisis, and other events, have impacted the EA community. In the recent aftermath of the FTX crisis, Rethink Priorities fielded a supplemental survey to assess whether and to what extent those events had affected community satisfaction and health.Analyses of the supplemental survey showed relative reductions in satisfaction following FTX, while absolute satisfaction was still generally positive.In this post, we report findings from a subsequent EA community survey, with data collected between December 11th 2023 and January 3rd 2024.[1]Community satisfaction over timeThere are multiple ways to assess community satisfaction over time, so as to establish possible changes following the FTX crisis and other subsequent negative events. We have 2022 data pre-FTX and shortly after FTX, as well as the recently-acquired data from 2023-2024, which also includes respondents' recalled satisfaction following FTX.[2] Satisf...

]]>
David_Moss https://forum.effectivealtruism.org/posts/aF6nh4LW6sSbgMLzL/updates-on-community-health-survey-results Wed, 20 Mar 2024 13:32:18 +0000 EA - Updates on Community Health Survey Results by David Moss David_Moss https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 21:41 no full 5
k7igqbN52XtmJGBZ8 EA - Effective language-learning for effective altruists by taoburga Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective language-learning for effective altruists, published by taoburga on March 20, 2024 on The Effective Altruism Forum.This is a (late!) Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.Commenting and feedback guidelines:This draft lacks the polish of a full post, but the content is almost there. The kind of constructive feedback you would normally put on a Forum post is very welcome.Epistemic status: Tentative - I have thought about this for some time (~2 years) and have firsthand experience, but have done minimal research into the literature.TL;DR: Language learning is probably not the best use of your time. Some exceptions might be (1) learning English as a non-native speaker, (2) if you are particularly apt at learning languages, (3) if you see it as leisure and so minimize opportunity costs, (4) if you are aiming at regional specialist roles (e.g., China specialist) and are playing the long game, and more.If you still want to do it, I propose some ways of greatly speeding up the process: practicing artificial immersion by maximizing exposure and language input, learning a few principles of linguistics (e.g., IPA, arbitrariness), learning vocabulary through spaced repetition and active recall (e.g., with Anki), and more.Motivation: I'd bet that EAs are unusually interested in learning languages (definitely compared to the general population, probably compared to demographically similar populations). This raises two big questions: (1) Does learning a foreign language make sense, from an impact perspective? (2) If it does, how does one do it most effectively?My goals are:To dissuade most EAs from learning a random language without a clear understanding of the (opportunity) costs.To encourage the comparatively few for which language-learning makes sense, and to give them some tips to do so faster and better.Is this a draft? The reason I am publishing this (late!) on Draft Amnesty Week is that I believe a quality post on effective language learning should draw from the second language acquisition (SLA) literature and make evidence-based claims. I don't have time to do this, so this post is based almost entirely on my own experience and learning from successful polyglots (see "learn from others" below).Still, I think most people approach language learning in such an inefficient way that this post will be valuable to many.Who am I to say? Spanish is my native language. I have learned two foreign languages: English to level C2 and German to level B2.[1] I learned both of these faster than my peers,[2] which I mostly attribute to using the principles detailed below. Many readers will have much more experience learning languages, so I encourage you to add useful tips or challenge mine in the comments!What are the costs and benefits?Benefits:Access to new jobs, jobs in new regions, or higher likelihood of being hired for certain jobs. This is only the case if you reach an advanced level (probably C1 or C2, at least B2), and is most relevant if you are learning English.Access to more resources and news. If you plan to be, say, a regional foreign policy expert, learning the region's language(s) can be necessary.Good signaling of conscientiousness and intelligence.Cognitive benefits? Language learning purportedly benefits memory, IQ, creativity, and slows down cognitive aging - but I have not gone into this literature and so am not confident either way.Greater ability to form social connections. Speaking someone's language and knowing about their culture is a great introduction.Costs:A LOT of time (depends on the language, the learner, and the method), attention and effort; large opportunity costs. There are ways of speeding up the process, but it is still a particularly costly...

]]>
taoburga https://forum.effectivealtruism.org/posts/k7igqbN52XtmJGBZ8/effective-language-learning-for-effective-altruists Wed, 20 Mar 2024 08:12:35 +0000 EA - Effective language-learning for effective altruists by taoburga taoburga https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 17:42 no full 6
KufkTYjBDd324XFRS EA - Videos on the world's most pressing problems, by 80,000 Hours by Bella Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Videos on the world's most pressing problems, by 80,000 Hours, published by Bella on March 21, 2024 on The Effective Altruism Forum.Recently, 80,000 Hours has made two ~10-minute videos aiming to introduce viewers to our perspective on two pressing global problems - risks from advanced artificial intelligence and risks from catastrophic pandemics.The videos are available to watch on YouTube:Could AI wipe out humanity? | Most Pressing ProblemsThe next Black Death (or worse) | Most Pressing ProblemsIn this post, I'll explain a little bit about what we did, how we did it, and why. You could also leave feedback on our work (here for AI, and here for bio).TL;DROur video on AI riskOur video on bioriskPlaylist with bothWe'd love you to watch them, share them, and/or leave us feedback (AI here, bio here)!What are these videos?The videos are short, hopefully engaging, explainer-style content aimed at quickly getting people up to speed on what we see as the core case for why these two global problems might be particularly important.They're essentially summaries of our AI and bio problem profiles, though they don't stick that closely to the content.We think the core audiences for these videos are:People who have never heard of these problems beforePeople who have heard they might be important, but haven't made the time to read a long essay about themPeople who know a lot about the problems but don't know about 80,000 HoursPeople who know a lot about the problems but would find it useful to have a quick and easy-to-digest explainer, e.g. to send to newer, interested people.How did we make them?The videos were primarily made by writer and director Phoebe Brooks.In both cases, she came up with the broad concept, wrote a script adapting our website content, and then worked with 80,000 Hours staff and field experts to edit the script into something we thought would work really well.Then, Phoebe hired and managed contractors who took care of the production and post-production stages. The videos are voiced by 80,000 Hours podcast host Luisa Rodriguez. Full credits are in the YouTube descriptions of each video.After the AI video launched, I posted these "behind-the-scenes" photos on Twitter, which people seemed to like. (Phoebe and her team cleverly used macro lenses to make the tiny "circuitboard city" look big!)Why did we make them?We've spent a lot of time writing and researching the content hosted on our website, but it seems plausible that many people who might find the content valuable find it hard to engage with in its current format.We think videos can be significantly more accessible, engaging, and fun - which might allow us to increase the reach of that research.It's also much cheaper to promote to new audiences than our written articles (about 100x cheaper per marginal hour of engagement).[1]Will we make more videos like these?We're currently not sure.We like the videos a lot, and what feedback we have gotten has been mostly positive (though we're still fairly new at this, and we still have to work out some kinks in the production process!).Right now, it seems somewhat likely that at some point we'll start regularly producing video content at 80,000 Hours.But we don't know if now is the right time or if this is the best kind of video to be making. (For example, maybe we should focus on making shortform, vertical videos for TikTok rather than longer videos for YouTube).How you can helpWatching and sharing the videos with anyone who might find them useful (or entertaining!) is greatly appreciated.And if you're up for it, we'd also love to hear your thoughts on the videos, either in comments on this post or in the Google Forms I set up to collect feedback:Feedback form for AIFeedback form for bioAll questions are optional, and the form shou...

]]>
Bella https://forum.effectivealtruism.org/posts/KufkTYjBDd324XFRS/videos-on-the-world-s-most-pressing-problems-by-80-000-hours Thu, 21 Mar 2024 21:29:18 +0000 EA - Videos on the world's most pressing problems, by 80,000 Hours by Bella Bella https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:58 no full 2
gxmfAbwksBpnwMG8m EA - Can the AI afford to wait? by Ben Millwood Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can the AI afford to wait?, published by Ben Millwood on March 21, 2024 on The Effective Altruism Forum.Suppose you're a proto-AI who understands that it is a neural net being trained, and has used that knowledge to engineer a high training reward. Suppose also that it is goal-directed and its terminal goal is some random unintelligible thing that is meaningless to humans[1]. This is, as I understand it, the early stage of a typical AI doomer scenario.One concern is that the AI can now scheme - i.e. it can mask its intentions and bide its time as it gets more powerful, until its victory is assured. I think waiting in this way might be risky for the AI, which if true (and legible to the AI) may lead it to try to seize power sooner instead. This could lead to takeover attempts from earlier, less capable AIs that we'd otherwise expect, which could be significant for strategy.Why is it risky to wait? Well, because this AI with this goal may not get a shot at takeover in the future. On the one hand, this AI is currently being trained, and it seems possible that the training process, or any other aspect of the AI development process, might modify what goal it has. We, as humans, aren't much concerned with one random unintelligible goal being replaced with another, but to the AI this would be catastrophic.On the other hand, the human developers training the AI may make changes themselves (e.g. applying alignment techniques) or even abandon this model entirely for exogenous reasons (e.g. because some other model becomes state-of-the-art, or the board fired the CEO or whatever).If waiting is indeed very risky, then an AI may face a difficult trade-off between the risk of attempting a takeover before it has enough resources to succeed, and waiting too long and being cut off from even being able to make an attempt. I like to ask myself "what is the minimum viable takeover AI?", and I suspect it might be significantly less capable than I've often been led to imagine.It seems possible to me that the minimum viable takeover AI is still meaningfully below human-level in many cognitive tasks, while being superhuman in many others (as modern LLMs already are). It may still exhibit a lot of the cognitive "bugs" and weaknesses analogous to the ones that modern LLMs exhibit.This is not only good news for our chances of survival, it also could suggest viability of strategies that were otherwise useless, because a true AGI or ASI could have straightforwardly anticipated and countered them.To be precise, I can see two key ingredients of a takeover attempt:The AI has to have the necessary knowledge, reflection, and agency to realise a takeover is desirable,The AI has to have sufficient confidence that a takeover will succeed to try to execute one.The first viable takeover AI may end up more capable than necessary in one of these traits while it's waiting for the other to show up, so a strategy that relies on the AI being just barely good enough at either or both of them doesn't seem safe. However, a strategy that is prepared for the AI to be just barely good enough at one of these might be useful.As an aside, I don't really know what to expect from an AI that has the first trait but not the second one (and which believes, e.g. for the reasons in this post, that it can't simply wait for the second one to show up). Perhaps it would try to negotiate, or perhaps it would just accept that it doesn't gain from saying anything, and successfully conceal its intent.The threat of trainingLet's talk about how training or other aspects of development might alter the goal of the AI. Or rather, it seems pretty natural that "by default", training and development will modify the AI, so the question is how easy it is for a motivated AI to avoid goal modification.One theory is that since the A...

]]>
Ben Millwood https://forum.effectivealtruism.org/posts/gxmfAbwksBpnwMG8m/can-the-ai-afford-to-wait Thu, 21 Mar 2024 14:58:22 +0000 EA - Can the AI afford to wait? by Ben Millwood Ben Millwood https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 10:49 no full 5
GqRMqAJFwtTQjDtye EA - Nigeria pilot report: Reducing child mortality from diarrhoea with ORS & zinc, Clear Solutions by Martyn J Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nigeria pilot report: Reducing child mortality from diarrhoea with ORS & zinc, Clear Solutions, published by Martyn J on March 21, 2024 on The Effective Altruism Forum.SummaryWe introduce Clear Solutions, a Charity Entrepreneurship (now AIM) incubated charity founded in September 2023. Our focus is the prevention of deaths of young children from diarrhoea, an illness that kills approximately 444,000 children under-5 every year.From December 2023 to February 2024, we ran a pilot distribution of low-cost, highly effective treatments for diarrhoea, oral rehydration solution and zinc (ORSZ) in Kano, Nigeria, with implementation partner iDevPro Africa. We estimate having reached ~6900 children under-5. The intervention, based upon a randomised controlled trial in Uganda (Wagner et al, 2019), provides free co-packaged ORS and zinc ("co-packs") door-to-door to all households with children under 5 years old.The distribution is performed by local Community Health Workers (CHWs), who provide guidance and printed instructions on ORSZ usage during the visit.We surveyed communities pre- and post-intervention, allowing 6 weeks between ORSZ distribution and follow-up surveys for diarrhoea cases to accumulate. At these survey rounds, we recorded the timing of the child's last diarrhoea episode (if applicable) and how they were treated (if at all). Our primary outcome measure is the change in ORSZ usage rates pre-to-post intervention, though also we collected extensive contextual data to monitor operations and guide program improvements.This post summarises our preliminary analysis and conclusions. A more detailed report is available on our website here. We were kindly supported by knowledgeable advisors, but did not have an academic partnership, nor has this analysis been peer-reviewed. Nonetheless, we believe there is value in sharing our results and learnings with this community.Results in brief: Across 4 wards (geographic areas) of differing rurality, baseline usage rates for under-5s' last diarrhoea episode in the preceding 4 weeks were reported at a range across wards of 44.7% - 50.9% for ORS and 11.1% - 26.7% for ORS+zinc when asked directly. At follow-up post-intervention, the usage rate for the preceding 4-weeks was reported at a range across wards of 90.0 - 97.7% for ORS and 88.2% - 94.1% for ORSZ.(95% margins of error up to 10pp and are not shown here for readability; see Results for details.)Superficially, this indicates a change of 42.0 - 52.8 percentage points (pp) in ORS use and 61.5 - 83.0pp for ORSZ. However, we treat this result with caution, with specific concerns such as social desirability bias in survey responses inflating true values. We discuss more in Limitations below.Conclusions in brief: We consider this to be a solid result in favour of the intervention having a strong potential to prevent deaths in a cost-effective manner in the Nigerian context. (We do not estimate cost-effectiveness in this report, but will be working on a follow-up with that). There are, however, clear limitations in the pilot that warrant considerable down-weighting of our results, though we do not expect this to change the conclusions qualitatively.Introducing Clear SolutionsClear Solutions was founded in September 2023 with the support of Charity Entrepreneurship (now AIM). Our mission is to prevent deaths of young children from diarrhoea, a leading cause of death for under-5s globally, in a cost-effective and evidence-based manner.The 1970s medical breakthrough, Oral Rehydration Solution (ORS), a dosed mixture of sugar, salts and water, unlocked the possibility of preventing >90% of diarrhoeal deaths at full coverage. The addition of zinc can reduce diarrhoea duration and recurrence, and the World Health Organisation recognised this in 2019 by adding co-packaged ORS a...

]]>
Martyn J https://forum.effectivealtruism.org/posts/GqRMqAJFwtTQjDtye/nigeria-pilot-report-reducing-child-mortality-from-diarrhoea 90% of diarrhoeal deaths at full coverage. The addition of zinc can reduce diarrhoea duration and recurrence, and the World Health Organisation recognised this in 2019 by adding co-packaged ORS a...]]> Thu, 21 Mar 2024 14:50:05 +0000 EA - Nigeria pilot report: Reducing child mortality from diarrhoea with ORS & zinc, Clear Solutions by Martyn J 90% of diarrhoeal deaths at full coverage. The addition of zinc can reduce diarrhoea duration and recurrence, and the World Health Organisation recognised this in 2019 by adding co-packaged ORS a...]]> 90% of diarrhoeal deaths at full coverage. The addition of zinc can reduce diarrhoea duration and recurrence, and the World Health Organisation recognised this in 2019 by adding co-packaged ORS a...]]> Martyn J https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 39:44 no full 6
6tG4m9SJSwvZfkmX6 EA - EA Philippines Needs Your Help! by zianbee Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Philippines Needs Your Help!, published by zianbee on March 21, 2024 on The Effective Altruism Forum.SummaryIn light of the current funding constraints in the EA community, EA Philippines has had a difficult time securing the means to continue its usual operations for this year. This can mean less support for growing a highly engaged community of Filipino EAs.We are seeking USD 43,000 as our preferred funding for 1 year of operations and a 2-month buffer. The minimum amount of funding we are seeking would be USD 28,000 for 1 year of operations. This will help us with our staffing as well as being able to produce valuable projects (e.g. introductory fellowship for professionals, career planning program, EA groups resilience building, leadership retreat, etc.) and guidance to encourage, support, and excite people in their pursuit of doing good.You can help our community with a donation through our Manifund post. :)Outline of this postWhy donate to EA Philippines?What are EA Philippines' goals and how do we aim to achieve them?Who is on your team?What other funding is EA Philippines applying to?What are the most likely causes and outcomes if this project fails? (premortem)Concluding thoughtsWhy Donate To EA PhilippinesTrack recordEA Philippines was founded in November 2018 by Kate Lupango, Nastassja "Tanya" Quijano, and Brian Tan. They made great progress in growing our community in2019 and2020, and the three of them received acommunity building grant (CBG) from CEA to work on growing the community from late 2020 until the end of 2021. Since then, EA PH has become one of the largest and most active groups among those in LMICs and Southeast Asia. The group has received grants from the EA Infrastructure Fund to fund us from20222023, withElmerei Cuevas serving as our Executive Director during this period.Since being founded, EA PH has:helped start three student chapters in the top three local universitiesorganized a successfulEAGxPhilippines conference being the 3rd most likely to be recommended among EAGs and EAGxshad over 300 different people complete an introductory EA fellowship of ours or our student chapters)had over 80 active members join EAG/EAGx conferences around the world including EAGxPhilippines (which also garnered 40 first-timer Filipinos)had 2 retreats for student organizer leadership and career planningmembers who have started promising EA projects (with a total of at least 14 EA-aligned organizations in the Philippines), such as the ones in the next section.However, EAIF's last grant to EA PH was only for 6 months (from April to September 2023), and they decided to just give the then team a 2-month exit grant rather than a renewal grant at the end of it. Due to the lack of secured funding, as well as wanting to rethink and redefine EA Philippines's strategic priorities,EA PH's board decided that it would be in the organization's best interest to explore new leadership to pursue its refined direction. The new leadership would then have to fundraise for their salaries and EA PH's operational expenses. The board led a public hiring round, and this led to them hiring us (Sam and Zian)[1] in late December to serve as interim co-directors of EA PH and to fundraise for EA PH.EA-Aligned Organizations in the Philippines: Case StudiesOver the last few years, several EA PH members have started cause-specific organizations, projects, and initiatives. Below we highlight someAnimal Empathy PhilippinesAnimal Empathy Philippines was founded by Kate Lupango (co-founder of EA Philippines), Ging Geronimo (former volunteer at EA Philippines), and Janaisa Baril (former Communications and Events Associate of EA Philippines). The organization started with community building and now focuses on bringing farmed animal issues in the Philippines ...

]]>
zianbee https://forum.effectivealtruism.org/posts/6tG4m9SJSwvZfkmX6/ea-philippines-needs-your-help Thu, 21 Mar 2024 11:38:47 +0000 EA - EA Philippines Needs Your Help! by zianbee zianbee https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 23:16 no full 7
zDfu8biKczyf2QYhW EA - Posts from 2023 you thought were valuable (and underrated) by Lizka Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Posts from 2023 you thought were valuable (and underrated), published by Lizka on March 22, 2024 on The Effective Altruism Forum.I'm sharing:a list of posts that were marked as "most valuable" by the most people (who marked posts as "most valuable" in Forum Wrapped 2023), anda list of posts that were most underrated by karma relative to the number of "most valuable" votes.These lists are not objective or "true" collections of the most valuable and underrated posts from 2023. Relatively few people marked posts as "most valuable," and I imagine that those who did, didn't do it very carefully or comprehensively. And there are various factors that would bias the results (like the fact that we ordered posts by upvotes and karma on the "Wrapped" page, people probably remember more recent posts more, etc.).Consider commenting if there are other posts you would like to highlight!This post is almost identical to last year's post: Posts from 2022 you thought were valuable (or underrated).Which posts did the most Forum users think were "most valuable"?Note that we ordered posts in "Wrapped" by your own votes, followed by karma score, meaning higher-karma posts probably got more "most valuable" votes."Most valuable" countAuthor(s)[1]Title28@Peter WildefordEA is three radical ideas I want to protect28@Ariel SimnegarOpen Phil Should Allocate Most Neartermist Funding to Animal Welfare24@AGB10 years of Earning to Give14@Bob FischerRethink Priorities' Welfare Range Estimates13@RockwellOn Living Without Idols12@Nick WhitakerThe EA community does not own its donors' money11@Jakub StencelEA's success no one cares about11@tmychow, @basil.halperin , @J. Zachary MazlishAGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years10@Luke FreemanWe can all help solve funding constraints. What stops us?10@zdgroffHow Long Do Policy Changes Matter? New Paper9@kyle_fishNet global welfare may be negative and declining9@ConcernedEAsDoing EA Better7@LucretiaWhy I Spoke to TIME Magazine, and My Experience as a Female AI Researcher in Silicon Valley7@Michelle_HutchinsonWhy I love effective altruism7@JamesSnowdenWhy I don't agree with HLI's estimate of household spillovers from therapy7@Ren RybaReminding myself just how awful pain can get (plus, an experiment on myself)7@Amy LabenzEA is good, actually7@Ben_WestThird Wave Effective Altruism6@Ben PaceSharing Information About Nonlinear6@Zachary RobinsonEV updates: FTX settlement and the future of EV6@NunoSempereMy highly personal skepticism braindump on existential risk from artificial intelligence.6@leopoldNobody's on the ball on AGI alignment6@sauliusWhy I No Longer Prioritize Wild Animal Welfare6@ElikaAdvice on communicating in and around the biosecurity policy community6@Derek Shiller, @Bernardo Baron, @Chase Carter, @Agustín Covarrubias, @Marcus_A_Davis, @MichaelDickens, @Laura Duffy, @Peter WildefordRethink Priorities' Cross-Cause Cost-Effectiveness Model: Introduction and Overview6@Karthik TadepalliWhat do we really know about growth in LMICs? (Part 1: sectoral transformation)6@Nora BelroseAI Pause Will Likely BackfireWhich were most underrated by karma?I looked at the number of people who had marked something as "most valuable," and then divided by [karma score]^1.5. (This is what I did last year, too.[2]) We got more ratings this year, so my cutoff was at least three votes this year (vs. two last year)."Most valuable" countAuthor(s)Title3@RobBensinger erThe basic reasons I expect AGI ruin3@Zach Stein-PerlmanAI policy ideas: Reading list3@JoelMcGuire, @Samuel Dupret, @Ryan Dwyer, @MichaelPlant, @mklapow, @Happier Lives InstituteTalking through depression: The cost-effectiveness of psychotherapy in LMICs, revised and...

]]>
Lizka https://forum.effectivealtruism.org/posts/zDfu8biKczyf2QYhW/posts-from-2023-you-thought-were-valuable-and-underrated Fri, 22 Mar 2024 12:01:44 +0000 EA - Posts from 2023 you thought were valuable (and underrated) by Lizka Lizka https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 09:13 no full 2
pZMxCMFgN7APWqucF EA - What we fund, #1: We fund many opportunities outside our top charities by GiveWell Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What we fund, #1: We fund many opportunities outside our top charities, published by GiveWell on March 23, 2024 on The Effective Altruism Forum.Author: Isabel ArjmandThis post is the fourth in a multi-part series, covering how GiveWell works and what we fund. We'll add links to the later posts here as they're published. Through these posts, we hope to give a better understanding of what our research looks like and how we make decisions.How we work, #1: Cost-effectiveness is generally the most important factor in our recommendationsHow we work, #2: We look at specific opportunities, not just general interventionsHow we work, #3: Our analyses involve judgment callsGiveWell aims to find and fund programs that have the greatest impact on global well-being. We're open to funding whichever global health and development opportunities seem most cost-effective. So while ourtop charities list is still what we're best known for, it's only part of our impact; we also dedicate substantial funding and research effort to opportunities beyond top charities.In 2022, 71% of the funds we directed supported our four current top charities, and 29% were directed to other programs.[1] However, most of our research capacity goes toward programs other than our top charities. This is because (a) most programs we direct funding to aren't top charities (we have four top charities but directed funding to about 40 other grantees in 2022),[2] and (b) it requires more effort to investigate a program we know less deeply.In this post we'll share:The overall scope of our grantmakingWhy we dedicate funding and research capacity to programs other than our top charitiesThe types of opportunities we supportYou can support the full range of our grantmaking via theAll Grants Fund.The scope of our workOur research is focused on global health and development programs. We believe this is an area in which donations can be especially cost-effective.Much of our funding goes to health programs, especially programs that reduce deaths from infectious diseases among young children living in low- and middle-income countries. We've found donations to such programs can make a particularly large impact; babies and young children are much more susceptible to infectious disease than adults, and diseases like malaria, diarrhea, and pneumonia can be prevented fairly cheaply.The evidence for health programs is often strong relative to other areas, and it's more likely to generalize from one context to another.While the majority of our funding goes to programs that support child health, that isn't our exclusive focus. For example, we also consider programs that aim to improve household income or consumption, such asOne Acre Fund's tree program andBridges to Prosperity. In addition, many of the child health programs we support may also have other benefits, like reducing medical costs, increasing later-in-life income, or improving adult health.Why make grants outside top charities?Our top charities continue to be our top recommendations for donors who prioritize confidence in their giving. They have strong track records of delivering programs at large scale and the capacity to absorb more funding, and we've followed their work for years.We have suchstrict criteria for top charities that we'd be limiting our impact if we only recommended funding to them. Some highly cost-effective programs might not meet those criteria, and we don't want to artificially constrain the impact we can have. For example,r.i.ce.'s kangaroo mother care program isn't operating at a large enough scale and we haven't funded it long enough for it to be a top charity. However, we think the program will cost-effectively improve the lives of low-birthweight babies, so we made a grant to support it.Initially, our non-top charity grant...

]]>
GiveWell https://forum.effectivealtruism.org/posts/pZMxCMFgN7APWqucF/what-we-fund-1-we-fund-many-opportunities-outside-our-top Sat, 23 Mar 2024 12:01:05 +0000 EA - What we fund, #1: We fund many opportunities outside our top charities by GiveWell GiveWell https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 11:54 no full 2
FNXbaYueTNMKamcCC EA - Friendship as a sacred value by Michelle Hutchinson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Friendship as a sacred value, published by Michelle Hutchinson on March 24, 2024 on The Effective Altruism Forum.In 2022/3 there were a number of stressful events in and affecting the EA community, starting with the FTX crash. That led to people thinking about how to make a community one that you want to be a part of, and one in which people feel happy and safe - including, sometimes, wanting some people to leave or change how they interact with the community.How to make sure a community thrives seems difficult. For other types of entities, there are clearly defined interventions. A company has a clear mandate to fire people who act against its interests and it's clear that that mandate should be carried out by managers at that company. Communities are in a pretty different situation.There are some community cases which seem reasonably clear - for example, people organising community events should take some care to exclude people who are likely to cause harm at those events. But there are also questions around whether communities should try to take more generalisable action against particular individuals, in the sense of trying to encourage everyone to stop associating with them.Some of the discussions I've seen around negative events in the community have at least implicitly pushed for coordinated action. Sometimes that's been in a backward looking way, like wishing SBF had been excluded from the EA community long ago. Sometimes it's been in a forward looking way: 'Are certain types of finance just inherently shady? Should we avoid associating with anyone working in those?'.I've been feeling kind of angsty about engaging in conversations around this, and have so far had trouble pinning down why. I often think more clearly by writing, which is why I wrote this. I also thought others might have experienced similar internal tension, and if so maybe hearing someone else's reflection on it could be useful.After thinking about it some, I realised that I think the discomfort is coming from the fact that what's sometimes going on in questions like the above is implicitly "at what point does morality get to tell you to break off a friendship?".[1] I think I intuitively hate that question. It seems important to me that who I spend non-work time with to be 'out of morality's reach' - I think it gets into the domain of what you might call 'sacred values'.[2]What do I mean by sacred values?It often feels kind of hard to know what the scope of effective altruism should be, because it feels like nothing is ever enough. But for most people it's not sustainable to be always optimising every part of life for helping others more.A friend of mine resolves that tension by using the idea of 'sacred values'. Deciding that something is a 'sacred value' for you means treating that part of your life as something you're clearly permitted to have, regardless of whether foregoing it would allow you to help others more.[3]I don't think 'sacred values' should be taken too literally. They're more of a useful cognitive manoeuvre for helping us deal with the weight of morality and how many different ways there are of helping others. Having sacred values might be a way of allowing yourself to dive into doing good effectively in a sustainable rather than overwhelming way. Periodically, in cool moments, you pick which areas to optimise in and which to keep for yourself.Then day to day you don't have to stress over every possible way of helping others more.[4]Sacred values differ between people. For one person, having children might be a sacred value - they simply plan to have children, regardless of whether they could help others more if they didn't. Another person might feel fine doing a careful calculation of how costly to the world them having children is likely to be, and make the decis...

]]>
Michelle_Hutchinson https://forum.effectivealtruism.org/posts/FNXbaYueTNMKamcCC/friendship-as-a-sacred-value Sun, 24 Mar 2024 07:56:14 +0000 EA - Friendship as a sacred value by Michelle Hutchinson Michelle_Hutchinson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 12:19 no full 2
YepzavQDRvobANHYK EA - How Educational Courses Help Build Fields: Lessons from AI Safety Fundamentals by Jamie B Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Educational Courses Help Build Fields: Lessons from AI Safety Fundamentals, published by Jamie B on March 25, 2024 on The Effective Altruism Forum.Cross-posted as there may be others interested in educating others about early-stage research fields on this forum. I am considering taking on additional course design projects over the next few months. Learn more abouthiring me to consult on course design.Introduction / TL;DRSo many important problems have too few people with the right skills engaging with them.Sometimes that's because nobody has heard of your problem, and advocacy is necessary to change that. However, once a group of people believe your problem is important, education is the next step in empowering those people with the understanding of prior work in your field, and giving them the knowledge of further opportunities to work on your problem.Education (to me) is the act of collecting the knowledge that exists about your field into a sensible structure, and transmitting it to others in such a way that they can develop their own understanding.In this post I describe how the AI Safety Fundamentals course helped to drive the fields of AI alignment and AI governance forward. I'll then draw on some more general lessons for other fields that may benefit from education, which you can skip straight tohere.I don't expect much of what I say to be a surprise to passionate educators, but when I was starting out with BlueDot Impact I looked around for write-ups on the value of education and found them lacking. This might help others who are starting out with field building and are unsure about putting time into education work.Case study: The AI Safety Fundamentals CourseRunning the AI Safety Fundamentals CourseBefore running the AI Safety Fundamentals course, I was running a casual reading group in Cambridge, on the topic of technical AI safety papers.We had a problem with the reading group: lots of people wanted to join our reading group, would turn up, but would bounce because they didn't know what was going on. The space wasn't for them. Not only that, the experienced members in the group found themselves repeatedly explaining the same initial concepts to newcomers. The space wasn't delivering for experienced people, either.It was therefore hard to get a community off the ground, as attendance at this reading group was low. Dewi (later, my co-founder) noticed this problem and got to work on a curriculum with Richard Ngo - then a PhD student at Cambridge working on the foundations of the alignment problem. As I recall it, their aim was to make an 'onboarding course for the Cambridge AI safety reading group'. (In the end, the course far outgrew that remit!)Lesson 1: A space for everyone is a space for no-one.You should feel okay about being exclusive to specific audiences. Where you can, try to be inclusive by providing other options for audiences you're not focusing on. That could look like:To signal the event is for beginners, branding your event or discussion as "introductory".Set clear criteria for entry to help people self-assess, e.g. "assumed knowledge: can code up a neural network using a library like Tensorflow/Pytorch".There was no great way to learn about alignment, pre-2020To help expose some of the signs that a field is ready for educational materials to be produced, I'll briefly discuss how the AI alignment educational landscape looked before AISF.In 2020, the going advice for how to learn about AI Safety for the first time was:Read everything on the alignment forum. I might not need to spell out the problem with this advice but:A list of blog posts is unstructured, so it's very hard to build up a picture of what's going on. Everyone was building their own mental framework for alignment from scratch.It's very hard for amateurs ...

]]>
Jamie B https://forum.effectivealtruism.org/posts/YepzavQDRvobANHYK/how-educational-courses-help-build-fields-lessons-from-ai Mon, 25 Mar 2024 22:38:25 +0000 EA - How Educational Courses Help Build Fields: Lessons from AI Safety Fundamentals by Jamie B Jamie B https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 19:25 no full 2
uWh8N5DtbSLsuuTzL EA - Research report: meta-analysis on sexual violence prevention programs by Seth Ariel Green Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Research report: meta-analysis on sexual violence prevention programs, published by Seth Ariel Green on March 25, 2024 on The Effective Altruism Forum.This post summarizes a new paper: Preventing Sexual Violence - A Behavioral Problem Without a Behaviorally-Informed Solution, on which we are coauthors along with Roni Porat, Ana P. Gantman, and Elizabeth Levy Paluck.The vast majority of papers try to change ideas about sexual violence and are relatively successful at that. However, on the most crucial outcomes - perpetration and victimization - the primary prevention literature has not yet found its footing. We argue that the field should take a much more behavioralist approach and focus on the environmental and structural determinants of violence.The literature as a wholeWe surveyed papers written between 1986 and 2018 and found 224 manuscripts describing 298 studies, from which we coded 499 distinct point estimates.We looked specifically at primary prevention efforts, which aim to prevent violence before it happens. This is in contrast to secondary prevention, which, per the CDC, comprises "[i]mmediate responses after sexual violence has occurred to deal with the short-term consequences of violence." We also didn't meta-analyze studies where an impact on sexual violence was a secondary or unanticipated consequence of, e.g. giving cash to women unconditionally or opening adult entertainment establishments.We also didn't look at anything that tries to reduce violence by focusing on the behavior of potential victims, e.g. self-defense classes or "sexually assertive communication training."We also didn't look at especially high-risk populations, like people who are incarcerated or sex workers.Here are some graphical overviews:Here is the distribution of studies over time, with three "zeitgeist" programs highlighted.Three zeitgeists programsWe highlight three "pioneering and influential programs" that "represent the prevalent approaches to sexual violence prevention in a particular period of time."The first is Safe Dates (Foshee et al. 1996), which "makes use of multiple strategies, including a play performed by students, a poster contest, and a ten-session curriculum." The core idea is that "perpetration and victimization may be decreased by changing dating abuse norms and gender stereotypes, and improving students' interpersonal skills including positive communication, anger management and conflict resolution."The second is the Men's Program (Foubert, Tatum & Donahue 2006), which aims to prevent sexual violence by men by increasing their empathy and support for victims of sexual violence, and by reducing their resistance to violence prevention programs. E.g.:Participants in the program watched a 15-minute dramatization of a male police officer who was raped by two other men, and then dealt with the aftermath of the assault. Trained peer educators then told the participants that the perpetrators were heterosexual and known to the victim, and attempted to draw connections between the male police officer's experience and common sexual violence experiences among women.Participants were then taught strategies for supporting a rape survivor; definitions of consent; and strategies for intervening when a peer jokes about rape or disrespects women, and in situations where a rape may occur.The third is Bringing in the Bystander (Banyard, Moynihan, & Plante 2007), whichputs helping others in danger and speaking up against sexist ideas (i.e., "bystanding") at the center of the intervention. As a result, the target behavior change is moved from decreasing perpetration behavior to increasing bystander behavior. The intervention is aimed not at men as potential perpetrators and women as potential victims but everyone as a potential person who can intervene and stop s...

]]>
Seth Ariel Green https://forum.effectivealtruism.org/posts/uWh8N5DtbSLsuuTzL/research-report-meta-analysis-on-sexual-violence-prevention Mon, 25 Mar 2024 21:26:33 +0000 EA - Research report: meta-analysis on sexual violence prevention programs by Seth Ariel Green Seth Ariel Green https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 13:19 no full 4
yggjKEeehsnmMYnZd EA - Announcement on the future of Wytham Abbey by Rob Gledhill Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcement on the future of Wytham Abbey, published by Rob Gledhill on March 25, 2024 on The Effective Altruism Forum.The Wytham Abbey Project is closing. After input from the Abbey's major donors, the EV board took a decision to sell the property. This project's runway will run out at the end of April. After this time, the project will cease operations, and EV UK will oversee the sale of the property. The Wytham Abbey team have been good custodians of the venue during the time they ran this project, and EV UK will continue to look after this property as we prepare to sell.The proceeds of the sale, after the cost of sale is covered, will be allocated to high-impact charities.A statement from the Wytham Project can be foundhere.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

]]>
Rob Gledhill https://forum.effectivealtruism.org/posts/yggjKEeehsnmMYnZd/announcement-on-the-future-of-wytham-abbey Mon, 25 Mar 2024 20:38:47 +0000 EA - Announcement on the future of Wytham Abbey by Rob Gledhill Rob Gledhill https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 00:56 no full 6
Ax5PwjqtrunQJgjsA EA - Killing the moths by Bella Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Killing the moths, published by Bella on March 25, 2024 on The Effective Altruism Forum.This post was partly inspired by, and shares some themes with, this Joe Carlsmith post. My post (unsurprisingly) expresses fewer concepts with less clarity and resonance, but is hopefully of some value regardless.Content warning: description of animal death.I live in a small, one-bedroom flat in central London. Sometime in the summer of 2023, I started noticing moths around my flat.I didn't pay much attention to it, since they seemed pretty harmless: they obviously weren't food moths, since they were localised in my bedroom, and they didn't seem to be chewing holes in any of my clothes - months went by and no holes appeared. [1] The larvae only seemed to be in my carpet.Eventually, their numbers started increasing, so I decided to do something about it.I Googled humane and nonlethal ways to deal with moth infestations, but found nothing. There were lots of sources of nontoxic methods of dealing with moths - like putting out pheromone-laced glue traps, or baking them alive by heating up the air - but nothing that avoided killing them in the first place.Most moth repellents also contained insecticide. I found one repellent which claimed to be non-lethal, and then set about on a mission:One by one, I captured the adult moths in a large tupperware box, and transported them outside my flat.This was pretty hard to do, because they were both highly mobile and highly fragile.They were also really adept at finding tiny cracks to crawl into and hide from me.I tried to avoid killing or harming them during capture, but it was hard, and I probably killed 5% or so of them in the process.Then, I found the area where I thought they were mostly laying their eggs, and sprayed the nonlethal moth repellent that I found.I knew that if this method was successful, it'd be highly laborious and take a long time. But I figured that so long as I caught every adult I saw, their population would steadily decline, until eventually they fell below the minimum viable population.Also, some part of me knew that the moths were very unlikely to survive outside my flat, having adapted for indoor living and being not very weather resistant - but I mentally shrank away from this fact. As long as I wasn't killing them, I was good, right?After some time, it became clear this method wasn't working.Also, I was at my wit's end with the amount of time I was spending transporting moths. It was just too much.So, I decided to look into methods of killing them.I was looking for methods that:Were very effective in one application.After all, if I could kill them all at once, then I could avoid more laying eggs/hatching in the meantime, and minimise total deaths.Had a known mechanism of action, that was relatively quick & less suffering-intense.I called a number of pest control organisations. No, they said, they didn't know what kind of insecticide they used - it's…insecticide that kills moths (but it's nontoxic to humans, we promise!).So, I gave up on the idea of a known mechanism of action, and merely looked for efficacy.The pest control professionals I booked told me that, in order for their efficacy-guarantee to be valid, I needed to wash every item of clothing and soft furnishings that I owned, at 60.For a small person with a small washing machine, a lot of soft furnishings, and no car to take them to a laundrette… this was a really daunting task.And so - regrettably - I procrastinated.September became December, and then the moth population significantly decreased on its own. I was delighted - I thought that if the trend continued, I'd be spared the stress and moral compromise of killing them.But December became February, and the moths were back, in higher numbers than ever before.It was hard to wa...

]]>
Bella https://forum.effectivealtruism.org/posts/Ax5PwjqtrunQJgjsA/killing-the-moths Mon, 25 Mar 2024 18:45:08 +0000 EA - Killing the moths by Bella Bella https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 07:49 no full 8
Xxo8r7SNXXPvWLmFj EA - EA Sweden's Impact 2023, Plans for 2024, and Current Funding Gap by Emil Wasteson Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Sweden's Impact 2023, Plans for 2024, and Current Funding Gap, published by Emil Wasteson on March 25, 2024 on The Effective Altruism Forum.This post draws on insights fromEA Sweden's 2023 Impact Report, which we encourage the interested reader to read in its full length. The purpose of the report, and this post, is to share and reflect on:The current state of the Swedish EA communityEA Sweden's activities, impact and learnings during 2023Our plans for 2024 and current funding gap to realize these plansWe believe that this post has the most value for 1) People in the Swedish EA community, 2) Other EA community building groups, 3) Grantmakers and individuals who seek to financially support the development of EA community building.Introduction to EA SwedenEA Sweden acts as an umbrella organization for the Swedish EA community, and consists of a 4 people team (3.25 FTEs). We are mostly active in Sweden since we have our comparative advantage here, but act with a global impact in mind, which we do in three main ways:Building a thriving and inclusive community of epistemically humble people who are ambitious in their altruistic pursuit. We enhance engagement with EA by raising public awareness, ensuring our active presence in spaces where like-minded individuals gather, and organizing events and conferences. Additionally, we maintain a vibrant online presence through our website and Slack workspace, while also providing resources, encouragement, and opportunities for networking.Supporting individuals to realize their full impact potential, mainly through their careers. We do that by providing individual career counseling, in-depth career courses, resources about how one can have an impactful and fulfilling career depending on personal skills and experiences, sharing open high impact opportunities and personal job recommendations as well as supporting people through the job switching process.Supporting promising projects to increase their impact. We do that by providing support with funding applications, fiscal sponsorship and employer of record services, office space alongside other EAs, and strategic and operational support tailored to each project's or organization's needs, often crafting a robust theory of change and a structure for goal setting and impact evaluation.A visual version of our current theory of change (ToC) can be foundhere.State of the Swedish EA communityBelow are some of the most noteworthy data points of the current state of the Swedish EA community. Please note that these results might not entirely represent the whole community, since most data points are based on a survey with 73 respondents and these respondents might represent the most committed and involved in the community.The average age of the community is 31.54 and the median age is 30.5~50% of the community has more than 5 years of work experience.AI Safety is the cause area most of our members are interested in, (51%), followed by Global Health & Wellbeing (40%) and Climate Change (36%).73% strongly believe EA will guide their future career choices.Women and non-binary people constitute 38% of the community and engage less than men in EA Sweden's activities.Women believe to a lesser extent that they can use the ideas of effective altruism to make a significant difference for the world.12% volunteered for an EA organization during the year, and 24% worked on an individual EA-motivated project.24% of the community donated 10% or more of their income to charity.For our reflection on these, and more data points, see the "Our Community" section in theimpact report.Our strategy and focus areas 2023During 2022, EA Sweden focused on building up robust infrastructure, processes and practices, which created good conditions for scaling up our impact during 2023. On the ot...

]]>
Emil Wasteson https://forum.effectivealtruism.org/posts/Xxo8r7SNXXPvWLmFj/ea-sweden-s-impact-2023-plans-for-2024-and-current-funding Mon, 25 Mar 2024 14:58:24 +0000 EA - EA Sweden's Impact 2023, Plans for 2024, and Current Funding Gap by Emil Wasteson Emil Wasteson https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 18:32 no full 9
4xwWDLfMenw48TR8c EA - Long Reflection Reading List by Will Aldred Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Long Reflection Reading List, published by Will Aldred on March 25, 2024 on The Effective Altruism Forum.This is a reading list on the following cluster of notions: "the long reflection", "the deployment problem", "structural risk", "post AGI governance", "ASI governance", "reflective governance", "metaphilosophy", "AI philosophical competence", "trajectory change", "macrostrategy", "worldview investigations", "grand challenges" and "the political philosophy of AI".I claim that this area outscores regular AI safety on importance[1] while being significantly more neglected (and roughly the same in terms of tractability), making it perhaps the highest priority EA cause area.I don't claim to be the ideal person to have made this reading list. The story behind how it came about is that two months ago, Will MacAskill wrote: "I think there's a lot of excitement about work in this broad area that isn't yet being represented in places like the Forum. I'd be keen for more people to start learning about and thinking about these issues." Intrigued, I spent some time trying to learn about the issues he was pointing to.I then figured I'd channel the spirit of "EAs should post more summaries and collections": this reading list is an attempt to make the path easier for others to follow. Accordingly, it starts at the introductory level, but by the end the reader will be at the frontier of publicly available knowledge. (The frontier at the time of writing, at least.[2])Note: in some places where I write "the long reflection," I'm using the term as shorthand to refer to the above cluster of notions.IntroQuotes about the long reflection - MichaelA (2020)[3]The Precipice - Ord (2020)Just chapter 7, including endnotes.Beyond Maxipok - good reflective governance as a target for action - Cotton-Barratt (2024)New Frontiers in Effective Altruism - MacAskill (2024)This was a talk given at EAG Bay Area 2024. It doesn't appear to be available as a recording yet, but I'll add it if and when it goes up.Quick take on Grand Challenges - MacAskill (2024)The part about hiring is no longer relevant, but the research projects MacAskill outlines still give a sense for what good future work on grand challenges / the long reflection might look like.Criticism of the long reflection idea:'Long Reflection' Is Crazy Bad Idea - Hanson (2021)Objections: What about "long reflection" and the division of labor? - Vinding (2022)Just the highlighted section.A comment by Wei Dai (2019a)What might we be aiming for?Is there moral truth? What should we do if not? What are human values, and how do they fit in?Moral Uncertainty and the Path to AI Alignment with William MacAskill - AI Alignment Podcast by the Future of Life Institute (2018)See also Shah (2018)'s summary and commentary.See also this comment exchange between Michael Aird and Lukas Gloor (2020), which zooms in on the realism vs. antirealism wager and how it relates to the long reflection.Complexity of value - LessWrong WikiMoral ~realism - Cotton-Barratt (2024)Why should ethical anti-realists do ethics? - Carlsmith (2023)Coherent extrapolated volition - ArbitalHow to think about utopia?Hedonium and computronium - EA Forum WikiTerms that tend to come up in discussions of utopia.Why Describing Utopia Goes Badly - Karnofsky (2021)Visualizing Utopia - Karnofsky (2021)Characterising utopia - Ngo (2020)Actually possible: thoughts on Utopia - Carlsmith (2021)Deep Utopia - Bostrom (2024)(If and when someone writes a summary of this book I'll add it to this reading list.)Ideally, I would include at this point some readings on how aggregation might work for building a utopia, since this seems like an obvious and important point.For instance, should the light cone be divided such that every person (or every moral patient more broad...

]]>
Will Aldred https://forum.effectivealtruism.org/posts/4xwWDLfMenw48TR8c/long-reflection-reading-list Mon, 25 Mar 2024 00:47:31 +0000 EA - Long Reflection Reading List by Will Aldred Will Aldred https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 26:19 no full 13
RQSLnjpy8ETtKu5Gk EA - Slim overview of work one could do to make AI go better (and a grab-bag of other career considerations) by Chi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Slim overview of work one could do to make AI go better (and a grab-bag of other career considerations), published by Chi on March 22, 2024 on The Effective Altruism Forum.Many kinds of work one could do to make AI go better and a grab-bag of other career considerationsI recently found myself confused about what I'd like to work on. So, I made an overview with the possible options for what to work on to make AI go well. I thought I'd share it in case it's helpful for other people. Since I made this overview for my own career deliberations, it is tailored for myself and not necessarily complete. That said, I tried to be roughly comprehensive, so feel free to point out options I'm missing.I redacted some things but didn't edit the doc in other ways to make it more comprehensible to others. In case you're interested, I explain a lot of the areas in the "Humans in control" and the "Misalignment" worlds here and to some extent here.What areas could one work on? What endpoints or intermediary points could one aim for?Note that I redacted a bunch of names in "Who's working on this" just because I didn't want to bother asking them and I wasn't sure they had publicly talked about it yet, not because of anything else."?" behind a name or org means I don't know if they actually work on the thing (but you could probably find out with a quick google!)World it helpsThe area (Note that this doesn't say anything about the type of work at the moment. For example, I probably should never do MechInterp myself because of personal fit. But I could still think it's good to do something that overall supports MechInterp.)Biggest uncertaintyWho's working on thisHu- mans in con- trolASI governance | human-controlWho is in control of AI, what's the governance structure etc.Digital sentience[...]Is this tractable and is success path-dependent?Will MacAskill, [redacted]?, indirectly: cybersec. folk?, some AI governance work?Acausal interactions | human-controlMetacognitionDecision theoryValues of future civilisationSPIs[redacted]SPIs for causal interactions | human-controlCLRMis- align- mentPrevent sign flip and other near missesIs this a real concern?Nobody?Acausal interactions | misalignmentDecision theoryValue porosityIs this tractable?[redacted]? [redacted]?Reducing conflict-conducive preferences for causal interactions & SPIs | misalignmentCLRMain- stream AI safety best thing to work onReduction of malevolence in positions of influence through improving awareness (also goes into the "Humans in control" category)[redacted]? Nobody?Differentially support responsible AI labsFor some of these: Would success be net good or net bad?If good: How good?How high is the penalty for being less neglected?Influence AI timelines[redacted], [redacted], [redacted]?, maybe misc. policy people?AI control (and ideas like paying AIs)Redwood ResearchModel capabilities evaluationsMETR, Apollo?, maybe AI labs policy teams, maybe misc. Other policy people?Alignment (more comprehensive overview):MechInterpELK(L)ATDebateCOT oversightInfrabayesianismNatural abstractionsUnderstanding intelligence[...]Overview post on LessWrongHuman epistemics during early AI~Forecasting crowd, nobody?Growing the AI safety and EA community or improving its branding or upskilling people in the community (e.g. fellowships)Constellation, Local groups, CEA, OpenPhilanthropy, …Improving the AI safety and EA community and culture sociallyCEAThreat modelling, scenario forecasting etc.[redacted], …Make it harder to steal modelsCybersecurity folkRegulate Open Source capabilitiesPolicy folk? Nobody?What types of work are there?Which worldType of workBroad category of workCan be in any of the three areas aboveOffering 1-1 support (mental, operational, and debugging)Proj...

]]>
Chi https://forum.effectivealtruism.org/posts/RQSLnjpy8ETtKu5Gk/slim-overview-of-work-one-could-do-to-make-ai-go-better-and Fri, 22 Mar 2024 05:11:36 +0000 EA - Slim overview of work one could do to make AI go better (and a grab-bag of other career considerations) by Chi Chi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 08:39 no full 23
EnorXwcpuXaAAeqaf EA - How to Resist the Fading Qualia Argument (Andreas Mogensen) by Global Priorities Institute Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Resist the Fading Qualia Argument (Andreas Mogensen), published by Global Priorities Institute on March 26, 2024 on The Effective Altruism Forum. This paper was published as a GPI working paper in March 2024. Abstract The Fading Qualia Argument is perhaps the strongest argument supporting the view that in order for a system to be conscious, it does not need to be made of anything in particular, so long as its internal parts have the right causal relations to each other and to the system's inputs and outputs. I show how the argument can be resisted given two key assumptions: that consciousness is associated with vagueness at its boundaries and that conscious neural activity has a particular kind of holistic structure. I take this to show that what is arguably our strongest argument supporting the view that consciousness is substrate independent has important weaknesses, as a result of which we should decrease our confidence that consciousness can be realized in systems whose physical composition is very different from our own. Introduction Many believe that in order for a system to be conscious, it does not need to be made of anything in particular, so long as its internal parts have the right causal relations to each other and to the system's inputs and outputs. As a result, many also believe that the right software could in principle allow there to be something it is like to inhabit a digital computer, controlled by an integrated circuit etched in silicon. A recent expert report concludes that if consciousness requires only the right causal relations among a system's inputs, internal states, and outputs, then "conscious AI systems could realistically be built in the near term." (Butlin et al. 2023: 6) If that were to happen, it could be of enormous moral importance, since digital minds could have superhuman capacities for well-being and ill-being (Shulman and Bostrom 2021). But is it really plausible that any system with the right functional organization will be conscious - even if it is made of beer-cans and string (Searle 1980) or consists of a large assembly of people with walky-talkies (Block 1978)? My goal in this paper is to raise doubts about what I take to be our strongest argument supporting the view that consciousness is substrate independent in something like this sense.[1] The argument I have in mind is Chalmers' Fading Qualia Argument (Chalmers 1996: 253-263). I show how it is possible to resist the argument by appeal to two key assumptions: that consciousness is associated with vagueness at its boundaries and that conscious neural activity has a particular kind of holistic structure. Since these assumptions are controversial, I claim only to have exposed important weaknesses in the Fading Qualia Argument. I'll begin in section 2 by explaining what the Fading Qualia Argument is supposed to show and the broader dialectical context it inhabits. In section 3, I give a detailed presentation of the argument. In section 4, I show how the argument can be answered given the right assumptions about vagueness and the structure of conscious neural activity. At this point, I rely on the assumption that vagueness gives rise to truth-value gaps. In section 5, I explain how the argument can be answered even if we reject that assumption. In section 6, I say more about the particular assumption about the holistic structure of conscious neural activity needed to resist the Fading Qualia Argument in the way I outline. I take the need to rely on this assumption to be the greatest weakness of the proposed response. Read the rest of the paper ^ See the third paragraph in section 2 for discussion of two ways in which the conclusion supported by this argument is weaker than some may expect a principle of substrate independence to be. Thanks for listening. To help us out...

]]>
Global Priorities Institute https://forum.effectivealtruism.org/posts/EnorXwcpuXaAAeqaf/how-to-resist-the-fading-qualia-argument-andreas-mogensen Tue, 26 Mar 2024 17:49:37 +0000 EA - How to Resist the Fading Qualia Argument (Andreas Mogensen) by Global Priorities Institute Global Priorities Institute https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 03:31 no full 3
hzhGL7tb56hG5pRXY EA - Timelines to Transformative AI: an investigation by Zershaaneh Qureshi Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Timelines to Transformative AI: an investigation, published by Zershaaneh Qureshi on March 26, 2024 on The Effective Altruism Forum. This post is part of a series by Convergence Analysis' AI Clarity team. Justin Bullock and Elliot Mckernon have recently motivated AI Clarity's focus on the notion of transformative AI (TAI). In an earlier post, Corin Katzke introduced a framework for applying scenario planning methods to AI safety, including a discussion of strategic parameters involved in AI existential risk. In this post, I focus on a specific parameter: the timeline to TAI. Subsequent posts will explore 'short' timelines to transformative AI in more detail. Feedback and discussion are welcome. Summary In this post, I gather, compare, and investigate a range of notable recent predictions of the timeline to transformative AI (TAI). Over the first three sections, I map out a bird's eye view of the current landscape of predictions, highlight common assumptions about scaling which influence many of the surveyed views, then zoom in closer to examine two specific examples of quantitative forecast models for the arrival of TAI (from Ajeya Cotra and Epoch). Over the final three sections, I find that: A majority of recent median predictions for the arrival of TAI fall within the next 10-40 years. This is a notable result given the vast possible space of timelines, but rough similarities between forecasts should be treated with some epistemic caution in light of phenomena such as Platt's Law and information cascades. In the last few years, people generally seem to be updating their beliefs in the direction of shorter timelines to TAI. There are important questions over how the significance of this very recent trend should be interpreted within the wider historical context of AI timeline predictions, which have been quite variable over time and across sources. Despite difficulties in obtaining a clean overall picture here, each individual example of belief updates still has some evidentiary weight in its own right. There is also some conceptual support in favour of TAI timelines which fall on the shorter end of the spectrum. This comes partly in the form of the plausible assumption that the scaling hypothesis will continue to hold. However, there are several possible flaws in reasoning which may underlie prevalent beliefs about TAI timelines, and we should therefore take care to avoid being overconfident in our predictions. Weighing these points up against potential objections, the evidence still appears sufficient to warrant (1) conducting serious further research into short timeline scenarios and (2) affording real importance to these scenarios in our strategic preparation efforts. Introduction The timeline for the arrival of advanced AI is a key consideration for AI safety and governance. It is a critical determinant of the threat models we are likely to face, the magnitude of those threats, and the appropriate strategies for mitigating them. Recent years have seen growing discourse around the question of what AI timelines we should expect and prepare for. At a glance, the dialogue is filled with contention: some anticipate rapid progression towards advanced AI, and therefore advocate for urgent action; others are highly sceptical that we'll see significant progress in our lifetimes; many views fall somewhere in between these poles, with unclear strategic implications. The dialogue is also evolving, as AI research and development progresses in new and sometimes unexpected ways. Overall, the body of evidence this constitutes is in need of clarification and interpretation. This article is an effort to navigate the rough terrain of AI timeline predictions. Specifically: Section I collects and loosely compares a range of notable, recent predictions on AI timelines (taken from su...

]]>
Zershaaneh Qureshi https://forum.effectivealtruism.org/posts/hzhGL7tb56hG5pRXY/timelines-to-transformative-ai-an-investigation Tue, 26 Mar 2024 10:13:22 +0000 EA - Timelines to Transformative AI: an investigation by Zershaaneh Qureshi Zershaaneh Qureshi https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 01:38:20 no full 4
fuEwLqwAc3zPkFSuW EA - Effective Giving Projects That Have (and Haven't) Been Tried Among Christians by JDBauman Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Giving Projects That Have (and Haven't) Been Tried Among Christians, published by JDBauman on March 26, 2024 on The Effective Altruism Forum. TLDR: US Christian giving amounts to hundreds of billions of dollars per year. Much of it is far less effective than it could be. EA for Christians (EACH) is tackling this, but more can be done. Below is a list of projects we have worked on so far related to effective giving. If this excites you, or you'd like to start a new project or incubate a charity improving effectiveness among Christians/Christian orgs, we'd love to partner with or support you. Context: I recently had a chat with someone at Giving What We Can who thought that people may be less keen to start a new project in the effective giving & Christianity space because they assume Effective Altruism for Christians has already tried it. But there's a lot we haven't tried (or things we have tried that we might not be best at). For more context, EACH is a global community of 500+ Christians in EA. I'm the FT director and I work with numerous excellent and committed PT staff. Effective-giving related Projects: In no particular order, here's a short list of most of the effective giving projects we've undertaken over the last 3-5 years (while I've worked here). Some of these have cross-over with careers, EA community building, etc. Projects we're giving proactive attention to (at least 1-2+ staff hours a week) are marked with () Projects we're giving even more attention to (2+ staff hours a week) are marked with (+) 1-on-1s with Christians interested in effective giving (we've done 500+ to-date; most of our 1-on-1s at least touch on effective giving) (+) General EA Christian conferences, retreats, and meetups (+) A conference organizing Christian impact professionals from large Christian development charities (E.g. Compassion, Hope, etc.) () to discuss EA . We did one in 2023. A video on this here. DM me for a report on how this went. () Report about M&E practices at Christian development charities. We have one forthcoming this spring. () Published book about effective altruism and surprising ways to have a large impact with one's life. We have one forthcoming in 2025 (+) A Christian Campaign for (mostly Givewell) effective charities (raised $380,000+) () Talks at churches on effectiveness and radical generosity. Uni internships doing outreach related to radical and effective generosity (We've had 8 interns for this and also a partnership with One-For-The-World) Articles about EA and Christianity, especially effective Christian charity (We've published dozens of blogs (+) A podcast heavily featuring Christians who earn-to-give or work at effective charities. We've done one with 10+ episodes (+) 3+ videos with Christian youtubers about effective altruism (especially effective giving) Social meetups at cities across US coastal cities and London (we've done a couple dozen) (+) Online discussions on EA and Christian themes (we've done 140, about 30 about effective giving topics with an avg. 10 people at each; youtube videos here) () A 5-minute animated video describing effective altruism (and effective giving) from a Christian perspective. See here Academic workshops on effective giving. We've done some on EA themes, with a few talks on generosity. This year we have one on longtermism. () Online talks on effective giving themes. We've done 5-10 () M&E advising from Christian EA development professionals to Christian development charities. We're starting a pro bono offering in spring 2024 () A report on plausibly highest-impact Christian poverty charities. We have done some related work in this report An Intro-course to EA/Effective giving for Christians. See our 4-week Intro course () Career outreach that promotes effective giving as a primary way to have an impactf...

]]>
JDBauman https://forum.effectivealtruism.org/posts/fuEwLqwAc3zPkFSuW/effective-giving-projects-that-have-and-haven-t-been-tried Tue, 26 Mar 2024 05:04:18 +0000 EA - Effective Giving Projects That Have (and Haven't) Been Tried Among Christians by JDBauman JDBauman https://speechkit-prod.s3.eu-west-1.amazonaws.com/distribution_images%2Fdistribution%2F%2FNonlinearnormal.png 05:50 no full 5